\section{Introduction}
\label{intro}
The combinatorial theory of convex polytopes has provided
mathematicians with interesting problems since antiquity. Some of
these problems concern skeleta of polytopes and their
connectivity; see for instance \cite{Ka1}. Balinski's theorem
\cite{Ba} (see also Theorem \ref{thm:ba}) gives a sharp lower
bound on the (vertex) connectivity degree of the abstract graph
${\mathcal G}(P)$ defined by the one-dimensional skeleton of a
$d$-dimensional convex polytope $P$. This graph can also be
understood as the dual graph of the normal fan ${\mathcal N}_P$ of $P$ or,
equivalently, as the dual graph of the $(d-1)$-dimensional
skeleton of any polytope $Q$ which is polar-dual to $P$. By
definition, these are the simple graphs on the node set of maximal
faces of ${\mathcal N}_P$ and of $(d-1)$-dimensional faces (facets) of $Q$,
respectively, in which two such faces are adjacent if they share a
common codimension one face. It is natural to inquire about the
connectivity of the dual graphs of other skeleta of ${\mathcal N}_P$ or
$Q$. These graphs correspond to the simple graphs ${\mathcal G}_k (P)$
defined for nonnegative integers $k$ as follows. The nodes of
${\mathcal G}_k (P)$ are the $k$-dimensional faces of $P$ and two such
faces are adjacent if there exists a $(k+1)$-dimensional face of
$P$ which contains them both. In other words, ${\mathcal G}_k (P)$ is the
graph on the node set of rank $k+1$ elements in the face lattice
of $P$, two such elements being adjacent if they have a common
cover in this lattice. For $k=0$, the graph ${\mathcal G}_k (P)$ reduces to
the graph ${\mathcal G}(P)$ which appears in Balinski's theorem. It is
folklore that the graphs ${\mathcal G}_k (P)$ are connected; see
\cite[Theorem 19.5.2]{Ka1}. Their higher (vertex) connectivity is
the subject of this paper.
Let $m$ be a positive integer and recall that an abstract graph
${\mathcal G}$ is said to be $m$-connected if ${\mathcal G}$ has at least $m+1$
nodes and any graph obtained from ${\mathcal G}$ by deleting $m-1$ or fewer
nodes and their incident edges is connected. Our main result is as
follows.
\begin{theorem}
For fixed nonnegative integers $k$ and $d$ satisfying $0 \le k \le
d-1$, let $m_k (d)$ denote the largest integer $m$ such that the
graph ${\mathcal G}_k (P)$ is $m$-connected for all convex polytopes $P$ of
dimension $d$. We have
\[ m_k (d) \, = \, \begin{cases}
d, & \text{if \ $k=d-2$} \\
(k+1) (d-k), & \text{otherwise.}
\end{cases} \]
\label{thm0}
\end{theorem}
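For concreteness, evaluating the formula in dimension $d = 4$ gives
\[ m_0(4) = 4, \qquad m_1(4) = 6, \qquad m_2(4) = 4, \qquad m_3(4) = 4, \]
so the connectivity degree is not monotone in $k$: at $k = d-2$ it drops to $d = 4$, below the generic value $(k+1)(d-k) = 6$.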
A few remarks on Theorem \ref{thm0} are in order. Any node of an
$m$-connected graph ${\mathcal G}$ must have at least $m$ neighbors, since
the graph obtained from ${\mathcal G}$ by removing all neighbors of a given
node either has a single node or else is disconnected. Theorem
\ref{thm0} is made plausible by the fact that any $k$-dimensional
face $F$ of a $d$-dimensional polytope $P$ has at least $(k+1)
(d-k)$ neighbors in ${\mathcal G}_k (P)$. Indeed, such a face $F$ is
contained in at least $d-k$ faces of $P$ of dimension $k+1$, each
one of those has at least $k+1$ faces of dimension $k$ other than
$F$ and all these $k$-dimensional faces of $P$ are pairwise
distinct and are neighbors of $F$ in ${\mathcal G}_k (P)$. On the other
hand, if $P$ is a $d$-dimensional simplex then any $k$-dimensional
face of $P$ has exactly $(k+1) (d-k)$ neighbors in ${\mathcal G}_k (P)$ and
therefore ${\mathcal G}_k (P)$ is not $m$-connected for any value of $m$
which exceeds $(k+1) (d-k)$. This example shows that $m_k (d) \le
(k+1) (d-k)$ for all $k$ and $d$. Theorem \ref{thm0} reduces to
Balinski's theorem in the case $k=0$ and is trivial for $k=d-1$,
since ${\mathcal G}_{d-1} (P)$ is the complete graph on the set of facets
of $P$ and any $d$-dimensional polytope has at least $d+1$ facets.
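For example, for the tetrahedron ($d = 3$, $k = 1$) each edge $e$ lies in exactly $d-k = 2$ triangles and each of these contributes $k+1 = 2$ edges other than $e$, so that $e$ has
\[ (k+1)(d-k) \, = \, 2 \cdot 2 \, = \, 4 \]
neighbors in ${\mathcal G}_1 (P)$; the unique non-neighbor of $e$ is its opposite edge.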
The connectivity of skeleta of polytopes and more general cell
complexes was studied by Fl$\o$ystad \cite{Fl} from a homological
point of view and, more recently, by Bj\"orner \cite{Bj2} from a
homotopy-theoretic point of view. Fl$\o$ystad showed that
Balinski's theorem holds for one-dimensional skeleta of a class of
cell complexes which includes all finite Cohen-Macaulay polyhedral
complexes, namely that of finite Cohen-Macaulay regular cell
complexes with the intersection property. It is plausible (see
Conjecture \ref{conj}) that Theorem \ref{thm0} generalizes in this
direction for $0 \le k \le d-2$.
This paper is structured as follows. Section \ref{sec:pre} reviews
graph-theoretic terminology and basic background on convex
polytopes. The special cases $k=d-2$ and $k=1$ of Theorem
\ref{thm0} are proved in Sections \ref{sec:ridgets} and
\ref{sec:edges}, respectively. The proof of the theorem is
completed in Section \ref{sec:proof}. Section \ref{sec:cell}
discusses possible generalizations to classes of cell complexes.
\begin{remark} {\rm
After this paper was written, it was pointed out to the author by
Ronald Wotzlaw that the graphs ${\mathcal G}_k (P)$ have been previously
studied by Sallee \cite{Sa}. In the notation of Theorem
\ref{thm0}, it is proved in \cite{Sa} (see equation (7.18) on page
495) that $(k+1) (d-k) - k \le m_k (d) \le (k+1) (d-k)$. More
generally, given integers $0 \le r < s \le d-1$, upper and lower
bounds are given in \cite{Sa} for the connectivity degree of the
graph on the node set of $r$-dimensional faces of a
$d$-dimensional convex polytope $P$, in which two such faces are
adjacent if there exists an $s$-dimensional face of $P$ which
contains them both (thus our setting corresponds to the case $r=k$
and $s=k+1$). More precise results are obtained in \cite{Sa} for
various other notions of connectivity for the incidence graphs
between faces of dimension $r$ and faces of dimension $s$ of
$d$-dimensional polytopes. \qed}
\label{rem0}
\end{remark}
\section{Preliminaries}
\label{sec:pre}
All graphs considered in this paper will be simple (without loops
or multiple edges) and finite. Thus every edge of a graph ${\mathcal G}$
connects two distinct nodes of ${\mathcal G}$, called its \emph{endpoints},
and is said to be \emph{incident} to each of these nodes. Two
nodes of ${\mathcal G}$ are said to be \emph{adjacent} if they are
connected by an edge in ${\mathcal G}$. A \emph{walk} of length $n$ in
${\mathcal G}$ is an alternating sequence $w = (v_0, e_1, v_1,\dots,e_n,
v_n)$ of nodes and edges, such that $v_{i-1}$ and $v_i$ are the
endpoints of $e_i$ for $1 \le i \le n$. We say that $w$
\emph{connects} nodes $v_0$ and $v_n$, which are the
\emph{endpoints} of $w$. Thus ${\mathcal G}$ is connected if any two nodes
can be connected by a walk in ${\mathcal G}$. Given a subset $V$ of the set
of nodes of ${\mathcal G}$, we denote by ${\mathcal G} {\smallsetminus} V$ the graph obtained
from ${\mathcal G}$ by deleting the nodes in $V$ and all edges of ${\mathcal G}$
incident to these nodes. Given a positive integer $m$, the graph
${\mathcal G}$ is said to be \emph{$m$-connected} if it has at least $m+1$
nodes and ${\mathcal G} {\smallsetminus} V$ is connected for all subsets $V$ of the set
of nodes of ${\mathcal G}$ with cardinality at most $m-1$.
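As an illustration, $m$-connectivity can be tested mechanically by exhausting all deletion sets of cardinality at most $m-1$. The following Python sketch (the helper names are ours, and the search is exponential, so only small examples are feasible) verifies that the graph of the $3$-cube is $3$-connected but not $4$-connected, in accordance with Balinski's theorem.
\begin{verbatim}
from itertools import combinations

def connected(nodes, edges):
    # BFS connectivity test; `edges` is a set of frozenset pairs.
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for e in edges:
        u, v = tuple(e)
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == nodes

def is_m_connected(nodes, edges, m):
    # At least m+1 nodes, and deleting any m-1 or fewer nodes
    # (with their incident edges) leaves a connected graph.
    if len(nodes) < m + 1:
        return False
    return all(connected(set(nodes) - set(V), edges)
               for r in range(m)
               for V in combinations(nodes, r))

# The graph of the 3-cube: 0/1 vectors, adjacent iff they
# differ in exactly one coordinate.
verts = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
cube = {frozenset((u, v)) for u in verts for v in verts
        if sum(x != y for x, y in zip(u, v)) == 1}
assert is_m_connected(verts, cube, 3)       # Balinski: d = 3
assert not is_m_connected(verts, cube, 4)   # 3-regular, so sharp
\end{verbatim}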
A \emph{convex polytope} $P$ is defined as the convex hull of a
finite set of points in ${\mathbb R}^N$. The \emph{dimension} of $P$ is
the dimension of the affine hull of $P$, as an affine subspace of
${\mathbb R}^N$. A \emph{face} of $P$ is either a subset of $P$ on which
some linear functional on ${\mathbb R}^N$ achieves its minimum on $P$ or
else the empty set. Any face of $P$ is a polytope in ${\mathbb R}^N$ and
the intersection of two faces of $P$ is again a face of $P$. Faces
of $P$ of dimension zero or one are called \emph{vertices} or
\emph{edges}, respectively, and faces of codimension one are
called \emph{facets}. Every edge has exactly two vertices, also
called its \emph{endpoints}. The following lemma is well-known;
see, for instance, \cite[Section 1.2]{Ka2}.
\begin{lemma}
Any $d$-dimensional convex polytope has at least ${d+1 \choose
i+1}$ faces of dimension $i$ for all nonnegative integers $i$.
\label{prop:LBT}
\end{lemma}
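The bound of Lemma \ref{prop:LBT} is attained by the $d$-dimensional simplex: its faces of dimension $i$ are exactly the $(i+1)$-element subsets of its $d+1$ vertices, so it has precisely ${d+1 \choose i+1}$ of them.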
The \emph{graph of $P$}, denoted by ${\mathcal G}(P)$, is the abstract
graph which has as nodes the vertices of $P$ and in which two
nodes are adjacent if they are the endpoints of an edge of $P$. We
will need the following slightly stronger version of Balinski's
theorem, which follows, for instance, from the proof given in
\cite[Section 3.5]{Zi}.
\begin{theorem}
{\rm (Balinski \cite{Ba})} Given a $d$-dimensional convex polytope
$P \subset {\mathbb R}^N$, the graph ${\mathcal G}(P) {\smallsetminus} V$ is connected for any
subset $V$ of the vertex set of $P$ which is contained in some
$(d-2)$-dimensional affine subspace of ${\mathbb R}^N$. In particular,
${\mathcal G} (P)$ is $d$-connected.
\label{thm:ba}
\end{theorem}
For any convex polytope $P$ there exists a polytope $Q$
(necessarily of the same dimension) whose set of faces is in
inclusion-reversing bijection with the set of faces of $P$. Such a
polytope is said to be \emph{polar-dual} to $P$. Given an
$r$-dimensional face $F$ of a $d$-dimensional convex polytope $P$,
the set of faces of $P$ which contain $F$ is in
inclusion-preserving bijection with the set of faces of a
$(d-r-1)$-dimensional polytope $P_F$, called a \emph{face figure}
(or \emph{vertex figure} if $F$ is a vertex) of $P$ at $F$. It is
clear that walks in the graph ${\mathcal G}_k (P_F)$ (as defined in the
introduction) correspond bijectively to walks in ${\mathcal G}_{k+r+1} (P)$
which involve only faces of $P$ containing $F$. For more
information on convex polytopes and their combinatorial structure
we refer the interested reader to \cite{Ka2, Zi}.
\section{The case $k=d-2$}
\label{sec:ridgets}
In this section we restate and prove Theorem \ref{thm0} in the
case $k=d-2$.
\begin{proposition}
The graph ${\mathcal G}_{d-2} (P)$ is $d$-connected for all convex
polytopes $P$ of dimension $d$. Moreover, in any dimension $d \ge
2$ there exist convex polytopes for which ${\mathcal G}_{d-2} (P)$ is not
$(d+1)$-connected.
\label{prop:i=d-2}
\end{proposition}
It is perhaps easier to visualize the graph ${\mathcal G}_{d-2} (P)$ in
terms of a polytope $Q$ which is polar-dual to $P$. Since faces of
$P$ of dimension $d-2$ and $d-1$ correspond to edges and vertices
of $Q$, respectively, ${\mathcal G}_{d-2} (P)$ is isomorphic to the graph
$\Gamma(Q)$ on the node set of edges of $Q$ in which two nodes are
adjacent if they have a vertex of $Q$ as a common endpoint.
\medskip
\noindent \emph{Proof of Proposition \ref{prop:i=d-2}.} Let $Q$ be
a polytope polar-dual to $P$, so that ${\mathcal G}_{d-2} (P)$ may be
replaced by the graph $\Gamma(Q)$ on the node set of edges of $Q$.
To show that $\Gamma(Q)$ is $d$-connected, let ${\mathcal E}$ be any subset
of the set of edges of $Q$ of cardinality at most $d-1$. We
observe first that given any $e \in {\mathcal E}$, it is possible to choose
some two-dimensional face $F$ of $Q$ containing $e$, so that $e$
is the unique edge of $F$ which belongs to ${\mathcal E}$. Indeed, in view
of our assumption on the cardinality of ${\mathcal E}$, this is so because
$e$ is contained in at least $d-1$ faces of $Q$ of dimension 2 and
any two such faces have no edge other than $e$ in common.
${\mathcal G}(Q)$ is connected, so is $\Gamma(Q)$. Our previous remark
shows that any walk $w$ in $\Gamma(Q)$ connecting two nodes not in
${\mathcal E}$ can be transformed to one connecting the same nodes that
does not involve elements of ${\mathcal E}$. This can be done by replacing
any node $e \in {\mathcal E}$ that may appear in $w$ with the sequence of
edges other than $e$ (and vertices other than the endpoints of
$e$), ordered appropriately, of a two-dimensional face of $Q$
which contains $e$ but no other edge in ${\mathcal E}$. It follows that
$\Gamma(Q) {\smallsetminus} {\mathcal E}$ is connected and hence that $\Gamma(Q)$ is
$d$-connected.
To prove the second statement in the proposition let $I$ be a line
segment, let $Q = \Delta \times I$ be the prism over a
$(d-1)$-dimensional simplex $\Delta$ and denote by ${\mathcal E}$ the set
of edges of $Q$ of the form $v \times I$, where $v$ is a vertex of
$\Delta$. It is clear that the set ${\mathcal E}$ has $d$ elements and that
the graph $\Gamma(Q) {\smallsetminus} {\mathcal E}$ is disconnected. This implies that
$\Gamma(Q)$ is not $(d+1)$-connected. \qed
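For $d = 3$ the polytope $Q = \Delta \times I$ is the triangular prism, and both assertions of the proof can be checked mechanically with the illustrative helpers \texttt{connected} and \texttt{is\_m\_connected} of Section \ref{sec:pre}:
\begin{verbatim}
top = [("t", i) for i in range(3)]
bot = [("b", i) for i in range(3)]
edges_Q = (
    {frozenset((top[i], top[(i + 1) % 3])) for i in range(3)}
  | {frozenset((bot[i], bot[(i + 1) % 3])) for i in range(3)}
  | {frozenset((top[i], bot[i])) for i in range(3)})  # v x I
# Nodes of Gamma(Q) are the edges of Q; two are adjacent
# iff they share a vertex of Q as a common endpoint.
nodes_G = list(edges_Q)
edges_G = {frozenset((e, f)) for e in nodes_G for f in nodes_G
           if e != f and e & f}
vertical = [e for e in nodes_G if len({p[0] for p in e}) == 2]
assert is_m_connected(nodes_G, edges_G, 3)   # d-connected
assert not connected(set(nodes_G) - set(vertical), edges_G)
\end{verbatim}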
\section{The case $k=1$}
\label{sec:edges}
In this section we prove Theorem \ref{thm0} in the case $k=1$ as
follows.
\begin{proposition}
The graph ${\mathcal G}_1 (P)$ is $(2d-2)$-connected for all convex
polytopes $P$ of dimension $d \ge 4$.
\label{thm1}
\end{proposition}
\begin{proof}
Recall that the set of nodes of the graph ${\mathcal G}_1 (P)$ coincides
with the edge set of $P$. Let ${\mathcal E}$ be any subset of this set of
cardinality less than $2d-2$ and let $f$ and $g$ be any two edges
of $P$ not in ${\mathcal E}$. We need to show that $f$ and $g$ can be
connected by a walk in the graph ${\mathcal G}_1 (P) {\smallsetminus} {\mathcal E}$. For any
vertex $v$ and any edge $e$ of $P$, we will denote by $s(v)$ the
number of edges in ${\mathcal E}$ which have $v$ as an endpoint and by
$t(e)$ the number of edges in ${\mathcal E}$ which have at least one common
endpoint with $e$.
\medskip
\noindent {\bf Claim:} There exists a walk $w = (v_0, e_1,
v_1,\dots,e_n, v_n)$ in ${\mathcal G} (P)$ such that $v_0$ is an endpoint
of $f$, $v_n$ is an endpoint of $g$ and the following hold:
\begin{itemize}
\itemsep=0pt
\item[{\rm (a)}] $s(v_i) \le d-2$ for $0 \le i \le n$
and
\item[{\rm (b)}] $t(e_i) \le d-1$ for all $1 \le i \le n$ with
$e_i \in {\mathcal E}$.
\end{itemize}
\noindent Given the claim, we can proceed as follows. For any
index $1 \le i \le n$ with $e_i \in {\mathcal E}$, there exist at least
$d-1$ two-dimensional faces of $P$ which contain $e_i$ and any two
of them have no edge other than $e_i$ in common. Because of
condition (b), in at least one of these faces neither of the two
edges which share exactly one common endpoint with $e_i$ belongs
to ${\mathcal E}$. Therefore we may choose edges $g_i$ and $f_i$ of $P$ not
in ${\mathcal E}$ which are adjacent nodes in the graph ${\mathcal G}_1 (P)$, so
that $v_{i-1}$ is an endpoint of $g_i$ and $v_i$ is an endpoint of
$f_i$. We set $g_i = f_i = e_i$ for those indices $1 \le i \le n$
for which $e_i$ is not an element of ${\mathcal E}$. We also set $f_0 = f$
and $g_{n+1} = g$. For $0 \le i \le n$ we observe that $f_i$ and
$g_{i+1}$ are edges of $P$ not in ${\mathcal E}$ which have $v_i$ as a
common endpoint and denote by $P_i$ the vertex figure of $P$ at
$v_i$ and by $V_i$ the set of vertices of $P_i$ which correspond
to elements of ${\mathcal E}$ having $v_i$ as an endpoint. Since, by
Theorem \ref{thm:ba}, the graph ${\mathcal G}(P_i)$ is $(d-1)$-connected
and, by condition (a), the set $V_i$ has no more than $d-2$
elements, the vertices of $P_i$ which correspond to $f_i$ and
$g_{i+1}$ can be connected by a walk in ${\mathcal G}(P_i) {\smallsetminus} V_i$. This
implies that $f_i$ and $g_{i+1}$ can be connected by a walk in the
graph ${\mathcal G}_1 (P) {\smallsetminus} {\mathcal E}$ for $0 \le i \le n$. Therefore $f_0=f$
and $g_{n+1}=g$ can also be connected by a walk in this graph.
Thus it suffices to prove the claim. Let us call a vertex $v$ of
$P$ bad if it is an endpoint of at least $(d+1)/2$ edges in ${\mathcal E}$
and good otherwise. We will also call an edge $e$ of $P$ bad if it
violates condition (b), meaning that $e \in {\mathcal E}$ and $t(e) \ge d$.
Otherwise we call $e$ good. Clearly, any vertex of $P$ which
violates condition (a) is bad and any bad edge has at least one
bad endpoint. As a result, any walk in ${\mathcal G}(P)$ which does not go
through bad vertices satisfies conditions (a) and (b). Let $q$
denote the cardinality of ${\mathcal E}$. Since each edge of $P$ has only
two endpoints, the number $p$ of bad vertices of $P$ satisfies $p
\left\lceil (d+1)/2 \right\rceil \le 2q \le 2(2d-3)$. From this
and our assumption $d \ge 4$ it follows that $p \le d-1$, that is,
there exist at most $d-1$ bad vertices. Therefore, by Theorem
\ref{thm:ba}, deleting all bad vertices of $P$ from ${\mathcal G}(P)$ and
their incident edges results in a connected graph. Thus the claim
will follow if we can show that each of the edges $f$ and $g$
either has a good endpoint or else one of its endpoints, say $v$,
satisfies $s(v) \le d-2$ and is connected to a good vertex by a
good edge of $P$. We will prove this statement for $f$, the same
arguments applying for $g$. Let $a$ and $b$ be the endpoints of
$f$. We distinguish two cases:
{\bf Case 1:} At least one of the endpoints of $f$, say $a$,
satisfies $s(a) \ge d-1$. Since ${\mathcal E}$ has at most $2d-3$ elements
and $f$ is not in ${\mathcal E}$, we must have $s(b) \le d-2$. As a result,
there exists at least one edge $e$ of $P$ other than $f$ which has
$b$ as an endpoint and does not belong to ${\mathcal E}$. Simple counting
shows that at least one of the endpoints of $e$ is good. Since $e$
is a good edge, we are done in this case.
{\bf Case 2:} We have $s(a) \le d-2$ and $s(b) \le d-2$. As a
result, each of $a, b$ is an endpoint of at least one edge of $P$
not in ${\mathcal E}$, other than $f$. There is nothing to prove if at
least one of $a, b$ is good, so we may assume that both of them
are bad. Suppose first that there exist distinct vertices $a'$ and
$b'$ of $P$ other than $a, b$ which are connected to $a$ and $b$,
respectively, by edges of $P$ not in ${\mathcal E}$. Once again, simple
counting shows that at least one of $a', b'$ must be good and the
desired statement follows. Otherwise $a$ and $b$ must be connected
to a vertex $c$ of $P$ with edges not in ${\mathcal E}$ and we must have
$s(a) = s(b) = d-2$. It follows that $s(c) \le 1$ and that $c$ is
good, so we are done in this case as well.
\end{proof}
\section{Proof of Theorem \ref{thm0}}
\label{sec:proof}
In this section we complete the proof of our main theorem. The
structure of the proof is similar to that of Proposition
\ref{thm1}.
\medskip
\noindent \emph{Proof of Theorem \ref{thm0}.} Let us write $n_k
(d) = (k+1)(d-k)$. We have already remarked in the introduction
that there exist $d$-dimensional convex polytopes $P$ such that
${\mathcal G}_k (P)$ is not $m$-connected for $m > n_k (d)$ and that
Theorem \ref{thm0} is trivial for $k=d-1$. In view of Proposition
\ref{prop:i=d-2}, it remains to show that for $1 \le k \le d-3$
the graph ${\mathcal G}_k (P)$ is $n_k (d)$-connected for all convex
polytopes $P$ of dimension $d$.
We proceed by induction on $d$ and $k$, where the case $k=1$ was
treated by Proposition \ref{thm1}. Assume that $k \ge 2$. Let $U$
be any subset of the set of $k$-dimensional faces of $P$ of
cardinality less than $n_k (d)$ and let $F$ and $G$ be two
$k$-dimensional faces of $P$ not in $U$. We need to show that $F$
and $G$ can be connected by a walk in ${\mathcal G}_k (P) {\smallsetminus} U$.
\medskip
\noindent {\bf Claim:} There exists a walk $w$ in ${\mathcal G} (P)$ which
connects a vertex of $F$ to a vertex of $G$ and has the following
properties:
\begin{itemize}
\itemsep=0pt \item[{\rm (a)}] no edge of $w$ is contained in ${d-1
\choose k-1}$ or more faces in $U$ and
\item[{\rm (b)}] no node of $w$ belongs to $n_{k-1}(d-1)$ or more
faces in $U$.
\end{itemize}
\noindent Given the claim, the proof proceeds as follows. Let $w =
(v_0, e_1, v_1,\dots,e_n, v_n)$ be a walk as in the claim and set
$F_0=F$ and $F_{n+1} = G$. It follows from Lemma \ref{prop:LBT}
that each edge of $P$ is contained in at least ${d-1 \choose k-1}$
faces of $P$ of dimension $k$. Therefore, in view of our condition
(a), for each index $1 \le i \le n$ we may choose a
$k$-dimensional face $F_i$ of $P$ not in $U$ which contains the
edge $e_i$. Note that $v_i$ is a vertex of both $F_i$ and
$F_{i+1}$ for $0 \le i \le n$. Let $P_i$ denote the vertex figure
of $P$ at $v_i$ and let $U_i$ denote the set of
$(k-1)$-dimensional faces of $P_i$ which correspond to the faces
of $U$ containing $v_i$. By the induction hypothesis on $k$, the
graph ${\mathcal G}_{k-1} (P_i)$ is $n_{k-1}(d-1)$-connected. By condition
(b), this implies that ${\mathcal G}_{k-1} (P_i) {\smallsetminus} U_i$ is connected and
hence that $F_i$ and $F_{i+1}$ can be connected by a walk in the
graph ${\mathcal G}_k (P) {\smallsetminus} U$. Since this holds for all $0 \le i \le n$,
we conclude that $F_0=F$ and $F_{n+1}=G$ can be connected by a
walk in ${\mathcal G}_k (P) {\smallsetminus} U$ as well. It follows that ${\mathcal G}_k (P) {\smallsetminus}
U$ is connected and hence that ${\mathcal G}_k (P)$ is $n_k (d)$-connected,
as desired. It therefore suffices to prove the claim. We
distinguish two cases:
{\bf Case 1:} $k=2$. We are given that $d \ge 5$ and that $U$ is a
set of two-dimensional faces of $P$ of cardinality less than $n_k
(d) = 3d-6$ and note that $n_{k-1} (d-1) = 2d-4$. Let us call an
edge or vertex of $P$ bad if this edge or vertex is contained in
at least $d-1$ or $2d-4$, respectively, elements of $U$. The
following hold: (i) there exist at most two bad edges of $P$, (ii)
there exist at most two bad vertices of $P$ and (iii) if $v$ is a
bad vertex of $P$ and $e$ is a bad edge, then $v$ is an endpoint
of $e$. Indeed, the existence of three bad edges of $P$ would
require at least $3d-6$ elements of $U$, since given any two edges
of a polytope $P$, there exists at most one 2-dimensional face of
$P$ which contains both of these edges. In view of our assumption
on the cardinality of $U$, this proves (i). We next observe that
if $u$ and $v$ are distinct bad vertices of $P$, then there exist
at least $d-1$ elements of $U$ which contain both $u$ and $v$.
Therefore any two bad vertices are connected by a bad edge and
(ii) follows from (i). Finally, if $v$ is a bad vertex of $P$ and
$e$ is a bad edge, then there must be at least two elements of $U$
which contain both $v$ and $e$. The intersection of these has to
equal $e$ and contains $v$. This proves (iii). It follows from
facts (i)--(iii) that one can choose a set $V$ consisting of at
most two vertices of $P$ such that ${\mathcal G} (P) {\smallsetminus} V$ contains no bad
vertex or edge. This completes the proof of the claim in this case
since ${\mathcal G} (P) {\smallsetminus} V$ is connected, by Theorem \ref{thm:ba}, and
any walk in this graph connecting a vertex of $F$ to a vertex of
$G$ satisfies conditions (a) and (b).
{\bf Case 2:} $k \ge 3$. Since we have ${d-1 \choose k-1} \ge k
(d-k) = n_{k-1}(d-1)$ for $d \ge k+3 \ge 6$, condition (a) follows
from (b) and can thus be ignored. Let $V$ be the set of vertices
of $P$ which belong to at least $n_{k-1}(d-1) = k(d-k)$ faces in
$U$. We will show that any $k+1$ vertices, say $v_0,
v_1,\dots,v_k$, in $V$ are affinely dependent. Indeed, since $U$
has less than $n_k (d) = (k+1)(d-k)$ elements and each $v_i$
belongs to at least $k(d-k)$ of them, there must be at least $k$
elements of $U$ which contain all of $v_0, v_1,\dots,v_k$.
Clearly, the intersection of any two of these $k$ elements
contains the $v_i$ and has affine dimension at most $k-1$, so
$v_0, v_1,\dots,v_k$ must be affinely dependent. It follows that
the dimension of the affine span of $V$ is at most $k-1$. As a
consequence, each one of the $k$-dimensional faces $F$ and $G$ has
at least one vertex not in $V$ and, by Theorem \ref{thm:ba}, the
graph ${\mathcal G} (P) {\smallsetminus} V$ is connected. These two facts imply the
existence of a walk $w$ in ${\mathcal G} (P)$ with the claimed properties.
\qed
\section{Cell complexes}
\label{sec:cell}
In this section we discuss possible generalizations of Theorem
\ref{thm0} to certain classes of regular cell complexes. We will
assume some familiarity with regular cell complexes and standard
notions in topological combinatorics; excellent sources on these
topics are \cite[Section 4.7]{OM} and the article \cite{Bj}.
A \emph{regular cell complex} is a finite collection ${\mathcal C}$ of
balls in a Hausdorff space $\|{\mathcal C}\| = \bigcup_{\sigma \in {\mathcal C}} \,
\sigma$, called \emph{cells} or \emph{faces}, such that:
\begin{itemize}
\itemsep=0pt
\item[{\rm (i)}] $\varnothing \in {\mathcal C}$,
\item[{\rm (ii)}] the relative interiors of the nonempty cells
partition $\|{\mathcal C}\|$ and
\item[{\rm (iii)}] the boundary of any cell in ${\mathcal C}$ is a union of
cells in ${\mathcal C}$.
\end{itemize}
\noindent Cells of dimension zero or one are called
\emph{vertices} or \emph{edges}, respectively, and cells which are
maximal with respect to inclusion are called \emph{facets}. The
dimension of ${\mathcal C}$ is the maximum dimension of a cell. The
(loop-free) abstract graph ${\mathcal G}({\mathcal C})$ defined by the vertices and
edges of ${\mathcal C}$ is called the \emph{graph} of ${\mathcal C}$.
A \emph{polyhedral complex} in ${\mathbb R}^N$ is a regular cell complex
each of whose cells is a polytope in ${\mathbb R}^N$. A \emph{simplicial
complex} is a polyhedral complex in which every cell is a simplex.
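For example, the set of proper faces of a $d$-dimensional convex polytope $P$, together with $\varnothing$, is a $(d-1)$-dimensional polyhedral complex, the \emph{boundary complex} of $P$, and its graph is exactly the graph ${\mathcal G}(P)$ of the introduction.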
A regular cell complex ${\mathcal C}$ is said to have the
\emph{intersection property} if the intersection of any two cells
in ${\mathcal C}$ is also a cell in ${\mathcal C}$ (in particular, the graph
${\mathcal G}({\mathcal C})$ is simple). For instance, polyhedral complexes have the
intersection property. A regular cell complex ${\mathcal C}$ with the
intersection property is said to be \emph{Cohen-Macaulay} (over a
field ${\mathbb K}$) \cite[Section 2]{Fl} if, under the inclusion partial
order, it is a Cohen-Macaulay poset (over ${\mathbb K}$); see
\cite[Section 11]{Bj} for the notion of Cohen-Macaulayness for
simplicial complexes and posets. Such a complex ${\mathcal C}$ is pure,
meaning that all facets of ${\mathcal C}$ have the same dimension, and
strongly connected, meaning that for any two facets $\tau$ and
$\tau'$ of ${\mathcal C}$ there exists a sequence $\tau=\tau_0,
\tau_1,\dots,\tau_n = \tau'$ of facets of ${\mathcal C}$ such that
$\tau_{i-1}$ and $\tau_i$ intersect on a common face of
codimension one for all $1 \le i \le n$. The following
generalization of Balinski's theorem was proved by Fl$\o$ystad, as
a consequence of \cite[Corollary 2.7]{Fl}.
\begin{theorem}
{\rm (Fl$\o$ystad \cite{Fl})} For any $d$-dimensional
Cohen-Macaulay regular cell complex ${\mathcal C}$ with the intersection
property, the graph ${\mathcal G}({\mathcal C})$ is $d$-connected.
\label{thm:fl}
\end{theorem}
Let ${\mathcal G}_k ({\mathcal C})$ denote the simple graph on the node set of
$k$-dimensional cells of ${\mathcal C}$ in which two such cells are
adjacent if there exists a $(k+1)$-dimensional cell of ${\mathcal C}$ which
contains them both. Theorem \ref{thm:fl} is the special case $k=0$
of the following statement.
\begin{conjecture}
For any $d$-dimensional Cohen-Macaulay regular cell complex ${\mathcal C}$
with the intersection property, the graph ${\mathcal G}_k ({\mathcal C})$ is
\begin{itemize}
\itemsep=0pt
\item[$\circ$] $(k+1)(d-k)$-connected if $0 \le k \le
d-3$ and
\item[$\circ$] $d$-connected if $k=d-2$.
\end{itemize}
\label{conj}
\end{conjecture}
The $d$-dimensional simplicial complex which has two
$d$-dimensional simplices as facets, intersecting on a common
codimension one face, shows that Theorem \ref{thm0} does not
extend to the setup of Conjecture \ref{conj} for $k=d-1$. We can
verify this conjecture in the special cases which appear in the
following statement.
\begin{proposition}
Under the assumptions of Conjecture \ref{conj}, the graph ${\mathcal G}_k
({\mathcal C})$ is
\begin{itemize}
\itemsep=0pt
\item[(i)] $(k+1)(d-k)$-connected if $0 \le k \le
d-3$ and ${\mathcal C}$ is a polyhedral complex,
\item[(ii)] $d$-connected if $k=d-2$.
\end{itemize}
\label{thm:cell}
\end{proposition}
\begin{proof}
Suppose first that ${\mathcal C}$ is a polyhedral complex and that $0 \le k
\le d-3$ (a similar argument works for $k=d-2$). Let $U$ be any
subset of the set of $k$-dimensional faces of ${\mathcal C}$ of cardinality
less than $(k+1)(d-k)$ and let $F$ and $G$ be two $k$-dimensional
faces of ${\mathcal C}$ not in $U$. We need to show that these two faces
can be connected by a walk in the graph ${\mathcal G}_k ({\mathcal C}) {\smallsetminus} U$. Let
$P$ and $Q$ be facets of ${\mathcal C}$ (necessarily of dimension $d$)
containing $F$ and $G$, respectively. By strong connectivity of
${\mathcal C}$, we may choose a sequence $P=P_0, P_1,\dots,P_n = Q$ of
facets of ${\mathcal C}$ such that $P_{i-1}$ and $P_i$ intersect on a
common $(d-1)$-dimensional face for all $1 \le i \le n$. By Lemma
\ref{prop:LBT}, each intersection $P_{i-1} \cap P_i$ has at least
${d \choose k+1}$ faces of dimension $k$. Since ${d \choose k+1}
\ge (k+1)(d-k)$ for $k \le d-3$, there exists at least one
$k$-dimensional face, say $F_i$, of $P_{i-1} \cap P_i$ which is
not an element of $U$. We let $F_0 = F$ and $F_{n+1} = G$ and note
that $F_i$ and $F_{i-1}$ can be connected with a walk in ${\mathcal G}_k
(P_{i-1}) {\smallsetminus} U$ for all $1 \le i \le n+1$, by Theorem \ref{thm0}.
It follows that $F$ and $G$ can be connected with a walk in ${\mathcal G}_k
({\mathcal C}) {\smallsetminus} U$. This proves (i).
Suppose now that $k=d-2$ and that ${\mathcal C}$ is as in Conjecture
\ref{conj}. We can proceed as in the proof of Proposition
\ref{prop:i=d-2} with no need to pass to an object dual to ${\mathcal C}$.
Indeed, let ${\mathcal E}$ be any subset of the set of $(d-2)$-dimensional
faces of ${\mathcal C}$ of cardinality at most $d-1$. We observe first that
any $e \in {\mathcal E}$ has at least $d-1$ codimension one faces and that
no $(d-2)$-dimensional face of ${\mathcal C}$ other than $e$ contains two
or more of those. Therefore, it is possible to choose a
codimension one face $\sigma$ of $e$ so that $e$ is the unique
element of ${\mathcal E}$ which contains $\sigma$. We then check that in
any part of a walk in ${\mathcal G}_{d-2} ({\mathcal C})$ of the form $(\tau, e,
\tau')$, where $e \in {\mathcal E}$ and $\tau, \tau'$ are faces of
dimension $d-1$ containing $e$, the node $e$ can be replaced by a
walk that does not involve elements of ${\mathcal E}$ as follows. Since the
inclusion poset of faces of ${\mathcal C}$ which strictly contain $e$ is
Cohen-Macaulay of rank one, and hence connected, we may assume
that some facet $\rho$ of ${\mathcal C}$ contains both $\tau$ and $\tau'$.
We pick a codimension one face $\sigma$ of $e$ as above and
observe that the set of faces of ${\mathcal C}$ containing $\sigma$ and
contained in $\rho$ is in inclusion-preserving bijection with the
set of faces of a polygon $\Pi$ (this holds more generally for
Gorenstein* posets of rank 3) and that $e$ and $\tau, \tau'$
correspond to a vertex and its two incident edges in $\Pi$. Hence
we can divert the given walk in ${\mathcal G}_{d-2} ({\mathcal C})$ around $e$
through the boundary of $\Pi$, thus avoiding nodes in ${\mathcal E}$ by the
defining property of $\sigma$. To complete the proof of (ii) it
remains to note that the graph ${\mathcal G}_{d-2} ({\mathcal C})$ is connected.
This holds because the inclusion order on the set of faces of
${\mathcal C}$ of dimension $d-2$ or $d-1$ inherits the Cohen-Macaulay
property from ${\mathcal C}$.
\end{proof}
\medskip
\noindent \emph{Acknowledgements}: The author thanks Bernd
Sturmfels for encouraging discussions, Vic Reiner for providing an
example of a regular cell complex with a nonregular face figure
and Ronald Wotzlaw for the content of Remark \ref{rem0} and for
useful pointers to the literature.
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the ICCV 2019 web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\iccvfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the ICCV70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
\noindent
FAQ\medskip\\
{\bf Q:} Are acknowledgements OK?\\
{\bf A:} No. Leave them for the final copy.\medskip\\
{\bf Q:} How do I cite my results reported in open challenges?\\
{\bf A:} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
Page numbers should appear in the footer, centered, .75 inches from
the bottom of the page, and should start at your assigned page
number rather than the 4321 in the example. To do this, find the
line (around line 23)
\begin{verbatim}
\setcounter{page}{4321}
\end{verbatim}
where the number 4321 is your assigned starting page.
Make sure the first page is numbered by commenting out the line
(around line 46) that leaves the first page empty:
\begin{verbatim}
%\thispagestyle{empty}
\end{verbatim}
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the ICCV 2019 web page for a discussion
of the use of color in your document.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
{\small
\bibliographystyle{ieee}
\bibliography{egbib}
}
\section{Introduction}
Consider Figure~\ref{fig:motivation}: Given one or few images of an \emph{Amanita Muscaria} (left), one can easily recognize it in the wild.
Identifying a \emph{Russula} (center) may require more samples, enough to distinguish it from the deadly \emph{Amanita Phalloides} (right), but likely not millions of them. We refer to this as {\em few-shot learning.} This ability comes from having seen and touched millions of other objects, in different environments, under different lighting conditions, partial occlusions and other nuisances. We refer to this as {\em meta-learning.} We wish to exploit the availability of large annotated datasets to meta-train models so they can learn new concepts from few samples, or ``shots.'' We refer to this as {\em meta-training for few-shot learning.}
In this paper we develop a framework for both meta-training (learning a potentially large number of classes from a large annotated dataset) and few-shot learning (using the learned model to train new concepts from few samples), designed to have the following characteristics.
\noindent{\bf Open set:} Accommodate an unknown, growing, and possibly unbounded number of new classes in an {\em ``open set'' or ``open universe''} setting. Some of the simpler methods available in the literature, for instance based on nearest-neighbors of fixed embeddings \cite{Snell:nips17}, do so in theory. In these methods, however, there is no actual few-shot {\em learning} per se, as all learnable parameters are set at meta-training.
\noindent{\bf Continual:} Enable leveraging few-shot data to improve the model parameters, even those inferred during meta-training. While each class may only have few samples, as the number of classes grows, the few-shot training set may grow large. We want a model flexible enough to enable {\em ``lifelong''} or {\em ``continual''} learning.
\noindent{\bf Shot Free:} Accommodate a variable number of shots for each new category. Some classes may have a few samples, others a few hundred; we do not want to meta-train different models for different number of shots, nor to restrict ourselves to all new classes having the same number of shots, as many recent works do. This may be a side-effect of the benchmarks available that only test a few combinations of shots and ``ways'' (classes).
\noindent{\bf Embedded Class Models:} {\em Learn} a representation of the classes that is not constrained to live in the same space as the representation of the data. All known methods for few-shot learning choose an explicit function to compute class representatives (a.k.a. ``prototypes'' \cite{Snell:nips17}, ``proxies,'' ``means,'' ``modes,'' or ``templates'') as some form of averaging in the embedding (feature) space of the data. By decoupling the data (feature space) from the classes (class embedding), we free the latter to live in a richer space, where they can better represent complex distributions, and possibly grow over time.
To this end, our contributions are described as follows:
\begin{itemize}
\item \noindent{\bf Shot-free:} A meta-learning model and sampling scheme that is suitable for use with any number of ways and any number of shots, and can operate in an open-universe, life-long setting. When we fix the shots, as done in the benchmarks, we achieve essentially state-of-the-art performance, but with a model that is far more flexible.
\item \noindent{\bf Embedded Identities:} We abstract the identities to a different space than the features, thus enabling capturing more complex classes.
\item \noindent{\bf Implicit Class Representation:} The class representation function has a variable number of arguments, the shots in the class. Rather than fixing the number of shots, or choosing a complex architecture to handle variable numbers, we show that learning an {\em implicit form} of the class function enables seamless meta-training, while requiring a relatively simple optimization problem to be solved at few-shot time. We use neither recurrent architectures, which impose an artificial ordering, nor complex set-functions.
\item \noindent{\bf Metric Learning} is incorporated in our model, enabling us to add new classes without crowding the class representation space.
\item \noindent{\bf Performance:} Since there is no benchmark to showcase all the features of our model, we use existing benchmarks for few-shot learning that fix the number of ways and shots to a few samples. Some of the top performing methods are tailored to the benchmark, training different models for different number of shots, which does not scale, and does not enable handling the standard case where each way comes with its own number of shots. Our approach, while not tuned to any benchmark, achieves state-of-the-art performance and is more general.
\end{itemize}
In the next section we present a formalism for ordinary classification that, while somewhat pedantic, allows us to generalize to life-long, open universe, meta- and few-shot training. The general model allows us to analyze existing work under a common language, and highlights limitations that motivate our proposed solution in Sect. \ref{sec:related_work}.
\subsection{Background, Notation; Ordinary Classification}
In ordinary classification, we call ${\cal B} = \{(x_i, y_i) \}_{i = 1}^M$, with $y_i \in \{1, \dots, B\}$ a ``large-scale'' training set;
$(x_j, y_j) \sim P(x,y)$ a sample from the same distribution. If it is in the training set,
we write formally
$P(y = k | x_i) = \delta(k-y_i)$.
Outside the training set, we approximate this probability with
\begin{equation}
P_w(y = k | x) := \frac{\exp(-\phi_w(x)_k)}{\sum_{k'} \exp(-\phi_w(x)_{k'})}
\label{eq:phi}
\end{equation}
where the discriminant $\phi_w: X\rightarrow \mathbb{R}^K$ is an element of a sufficiently rich parametric class of functions with parameters, or ``weights,'' $w$, and the subscript $k$ indicates the $k$-th component. The empirical cross-entropy loss is defined as
\begin{eqnarray}
L(w) &:=& \sum_{\stackrel{k=1}{(x_i, y_i) \in {\cal B}}}^K -P(y = k |x_i)\log P_w(y = k |x_i) \nonumber \\
& = & \sum_{(x_i, y_i) \in {\cal B}} -\log P_w(y_i | x_i)
\label{eq:L}
\end{eqnarray}
minimizing which is equivalent to maximizing $\prod_i P_w(y_i | x_i)$. If ${\cal B}$ is i.i.d., this yields the maximum-likelihood estimate $\hat w$,
that depends on the dataset $\cal B$ and approximates $\phi_{\hat w}(x)_y \simeq \log P(y|x)$. We write cross-entropy explicitly as a function of the discriminant as
\begin{equation}
L(w) = \sum_{(x_i, y_i) \in {\cal B}} \ell(\phi_w(x_i)_{y_i})
\label{eq:H}
\end{equation}
by substituting \eqref{eq:phi} into \eqref{eq:L}, where $\ell$ is given, with a slight abuse of notation, by
\begin{equation}
\ell(v_i) := -v_i + {\rm LSE}(v)
\label{eq:ell}
\end{equation}
with the log-sum-exp ${\rm LSE}(v) := \log\left(\sum_{k=1}^K \exp( v_k) \right)$.
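As an illustrative aside (ours, not part of the method), \eqref{eq:H} and \eqref{eq:ell} amount to a few lines of \texttt{numpy}; the function names are hypothetical.
\begin{verbatim}
import numpy as np

def lse(v):
    # Numerically stable log-sum-exp: log(sum_k exp(v_k)).
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def cross_entropy(scores, labels):
    # sum_i ell(phi_w(x_i)_{y_i}) with ell(v_i) = -v_i + LSE(v);
    # `scores` holds one discriminant vector phi_w(x_i) per sample.
    return sum(-v[y] + lse(v) for v, y in zip(scores, labels))
\end{verbatim}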
Next, we introduce the general form for few-shot and life-long learning, used later to taxonomize modeling choices made by different approaches in the literature.
\subsection{General Few-Shot Learning}
Let ${\cal F} = \{(x_j, y_j)\}_{j = 1}^{N(k)}$ be the few-shot training set, with $k \in \mathbb{N}$ the classes, or ``ways,'' and $N(k)$ the ``shots,'' or samples per class. We assume that meta- and few-shot data $x_i, x_j$ live in the same domain ({\em e.g.}, natural images), while the meta- and few-shot classes are disjoint, which we indicate with $y \in B+\{1, \dots, K \}$.\footnote{The number of ways $K$ is a-priori unknown and potentially unbounded. It typically ranges from a few to few hundreds, while $N(k)$ is anywhere from one to a few thousands. The meta-training set has typically $M$ in the millions and $B$ in the thousands. Most benchmarks assume the same number of shots for each way, so there is a single number $N$, an artificial and unnecessary restriction. There is no loss of generality in assuming the classes are disjoint, as few-shot classes that are shared with the meta-training set can just be incorporated into the latter.}
During meta-training, from the dataset ${\cal B}$ we learn a parametric representation (feature, or embedding) of the data $\phi_w(x)$, for use later for few-shot training. During few-shot training, we use $N(k)$ samples for each new category $k>B$ to train a classifier, with $k$ potentially growing unbounded (life-long learning). First, we define ``useful'' and then formalize a criterion to learn the parameters $w$, both during meta- and few-shot training.
Unlike standard classification, discussed in the previous section, here we do not know the number of classes ahead of time, so we need a representation that is more general than a $K$-dimensional vector $\phi_w$. To this end, consider two additional ingredients: A representation of the classes $c_k$ (identities, prototypes, proxies), and a mechanism to associate a datum $x_j$ to a class $k$ through its representative $c_k$. We therefore have three functions, all in principle learnable and therefore indexed by parameters $w$. The {\em data representation} $\phi_w: X \rightarrow \mathbb{R}^F$ maps each datum to a fixed-dimensional vector, possibly normalized,
\begin{equation}
z = \phi_w(x).
\label{eq:z}
\end{equation}
We also need a {\em class representation}, that maps the $N(k)$ features $z_j$ sharing the same identity $y_j = k$, to some representative $c_k$ through a function $\psi_w: \mathbb{R}^{F N(k)} \rightarrow \mathbb{R}^C$ that yields, for each $k = B+1, \dots, B+K$
\begin{equation}
c_k = \psi_w\left(\{z_j \ | \ y_j = k\}\right)
\label{eq:ck}
\end{equation}
where $z_j = \phi_w(x_j)$. Note that the argument of $\psi$ has variable dimension. Finally, the {\em class membership} can be decided based on the posterior probability of a datum belonging to a class, approximated with a sufficiently rich parametric function class in the exponential family as we did for standard classification,
\begin{equation}
P_w(y = k | x_j) := \frac{\exp\left( - \chi_w(z_j, c_k)\right)}{\sum_{k'} \exp(-\chi_w(z_j, c_{k'}))}
\label{eq:chi}
\end{equation}
where $\chi_w: \mathbb{R}^F \times \mathbb{R}^C \rightarrow \mathbb{R}$ is analogous to \eqref{eq:phi}. The cross-entropy loss \eqref{eq:L} can then be written as
\begin{equation}
L(w) = \sum_{k = B+1}^{B+K}\sum_{j = 1}^{N(k)} \ell( \chi_w(z_j, c_k))
\end{equation}
with $\ell$ given by \eqref{eq:ell} and $c_k$ by \eqref{eq:ck}. The loss is minimized when $-\chi_{\hat w}(z_j, c_k) = \log P(y_j = k | x_j)$, a function of the few-shot set ${\cal F}$. Note, however, that this loss can also be applied to the meta-training set, by changing the outer sum to $k = 1, \dots, B$, or to any combination of the two, by selecting subsets of $\{1, \dots, B+K\}$. Different approaches to few-shot learning differ in the choice of model $\cal M$ and in the mixture of meta- and few-shot training sets used in one iteration of parameter update, or training ``episode.''
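To make the three ingredients concrete before specializing them, the following sketch (our own illustration; all names are ours, and $\phi$, $\psi$, $\chi$ are passed in as callables) assembles the loss above:
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def few_shot_loss(phi, psi, chi, xs, ys, classes):
    # z_j = phi(x_j); c_k = psi({z_j : y_j = k});
    # the loss sums ell over the few-shot samples
    z = [phi(x) for x in xs]
    c = {k: psi([zj for zj, yj in zip(z, ys)
                 if yj == k])
         for k in classes}
    loss = 0.0
    for zj, yj in zip(z, ys):
        v = np.array([-chi(zj, c[k])
                      for k in classes])
        loss += -v[classes.index(yj)] + logsumexp(v)
    return loss
\end{verbatim}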
\section{Stratification of Few-shot Learning Models}
\label{sec:stratification}
Starting from the most general form of few-shot learning described thus far, we restrict the model until there is no few-shot learning left, to capture the modeling choices made in the literature.
\subsection{Meta Training}
In general, during meta-training for few-shot learning, one solves some form of
\begin{multline}
\hat w = \arg\min_w \underbrace{\sum_{(x_i, y_i) \in {\cal B}} \ell (\chi_w(z_i, c_{y_i}))}_{L(w,c)} \\
{\rm s. \ t. \ } z_i = \phi_w(x_i); \ c_k = \psi_w(\{z_j \ | \ y_j = k\}).
\nonumber
\end{multline}
\noindent{\bf Implicit class representation function:} Instead of the explicit form in \eqref{eq:ck}, one can infer the function $\psi_w$ implicitly: let $r = \min_w L(w,\psi_w)$ be the minimum of the optimization problem above. If we consider $c = \{c_1, \dots, c_B\}$ as free parameters in $L(w, c)$, the equation $r = L(\hat w, c)$ defines $c$ implicitly as a function of $\hat w$, which we denote $\psi_{\hat w}$.
One can then simply find $\hat w$ and $c$ simultaneously by solving
\begin{equation}
\hat w, \hat c = \arg\min_{w,c} \sum_{k = 1}^{B} \sum_{i \,|\, y_i = k}
\ell (\chi_w(\phi_w(x_i), c_k))
\label{eq:meta}
\end{equation}
which is equivalent to the previous problem, even if there is no explicit functional form for the class representation $\psi_w$. As we will see, this simplifies meta-learning, as there is no need to design a separate architecture $\psi_w$ with a variable number of inputs, but it requires solving a (simple) optimization during few-shot learning. This is unlike all other known few-shot learning methods, which learn or fix $\psi_w$ during meta-learning and keep it fixed henceforth.
Far from being a limitation, the implicit solution has several advantages, including bypassing the need to explicitly define a function with a variable number of inputs (or a set function) $\psi_w$. It also enables the identity representation to live in a different space than the data representation, again unlike existing work that assumes a simple functional form such as the mean.
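A hypothetical PyTorch sketch of \eqref{eq:meta} follows; the sizes, the stand-in linear embedding, and the squared-distance discriminant are assumptions of ours for illustration, not the architecture used later in the paper:
\begin{verbatim}
import torch

B, F_dim = 64, 128                   # assumed sizes
phi_w = torch.nn.Linear(784, F_dim)  # stand-in net
c = torch.nn.Parameter(torch.randn(B, F_dim))
opt = torch.optim.SGD(
    list(phi_w.parameters()) + [c], lr=0.1)

def chi(z, c_all):
    # squared-distance discriminant
    return torch.cdist(z, c_all) ** 2

def meta_step(x, y):
    # x: (n, 784) floats, y: (n,) labels in [0, B)
    logits = -chi(phi_w(x), c)  # posterior ~ exp(-chi)
    loss = torch.nn.functional.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
\end{verbatim}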
\subsection{Few-shot Training}
\noindent{\bf Lifelong few-shot learning:} Once meta-training is done, one can use the same loss function in \eqref{eq:meta} for $k>B$ to achieve life-long, few-shot learning. While each new category $k>B$ is likely to have few samples $N(k)$, in the aggregate the number of samples is bound to grow beyond $M$, which we can exploit to update the embedding $\phi_w$, the metric $\chi_w$, and the class function $c_k = \psi_w$.
\noindent{\bf Metric learning:} A simpler model consists of fixing the parameters of the data representation $\hat \phi := \phi_{\hat w}$ and using the same loss function, but summed for $k>B$, to learn from the few shots $N(k)$ the new class proxies $c_k$, and to change the metric $\chi_w$ as the class representation space becomes crowded. If we fix the data representation, during the few-shot training phase we solve
\begin{equation}
\hat w, \hat c = \arg\min_{w,c} \sum_{k = B+1}^{B+K}\sum_{j | y_j = k}\ell(\chi_w(\hat \phi(x_j), c_k))
\label{eq:backfill}
\end{equation}
where the dependency on the meta-training phase is through $\hat \phi$ and both $\hat w$ and $\hat c$ depend on the few-shot dataset ${\cal F}$.
\noindent{\bf New class identities:}
One further simplification step is to also fix the metric $\chi$, leaving only the class representatives to be estimated
\begin{equation}
\hat c = \arg\min_c\sum_{k = B+1}^{B+K}\sum_{j | y_j = k} \ell(\chi(\hat \phi(x_j), c_k)).
\label{eq:c-implicit}
\end{equation}
The above is the implicit form of the parametric function $\psi_w$, with parameters $w = c$, as seen previously: evaluating $\hat c_k = \psi_c(\{z_j \ | \ y_j = k\})$ requires solving an optimization problem, sketched below.
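A minimal sketch of this step (hypothetical PyTorch; the step count and learning rate are arbitrary choices of ours) solves \eqref{eq:c-implicit} by gradient descent with the embedding and metric frozen:
\begin{verbatim}
import torch

def fit_new_proxies(z, y, c_base, chi,
                    steps=200, lr=0.1):
    # z: (n, F) frozen features; y: labels >= B;
    # c_base: (B, F) frozen base proxies
    B = c_base.shape[0]
    K = int(y.max().item()) - B + 1
    c_new = torch.zeros(K, z.shape[1],
                        requires_grad=True)
    opt = torch.optim.SGD([c_new], lr=lr)
    for _ in range(steps):
        c_all = torch.cat([c_base, c_new])
        loss = torch.nn.functional.cross_entropy(
            -chi(z, c_all), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return c_new.detach()
\end{verbatim}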
\noindent{\bf No few-shot learning:} Finally, one can fix even the function $\psi$ explicitly, forgoing few-shot learning and simply computing
\begin{equation}
\hat c_k = \psi(\{\hat \phi(x_j) \ | y_j = k\}), \ k > B
\label{eq:nolearn}
\end{equation}
that depends on ${\cal B}$ through $\hat \phi$, and on ${\cal F}$ through the index sets $Y_k = \{j \ | \ y_j = k\}$.
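With $\psi$ fixed to the sample mean, as in Prototypical Networks, \eqref{eq:nolearn} reduces to a one-line sketch (ours; \texttt{z} and \texttt{y} are tensors of frozen features and labels):
\begin{verbatim}
def mean_proxies(z, y, classes):
    # Eq. (nolearn) with psi = sample mean
    return {k: z[y == k].mean(dim=0)
            for k in classes}
\end{verbatim}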
We articulate our modeling and sampling choices in the next section, after reviewing the most common approaches in the literature in light of the stratification described.
\subsection{Related Prior Work}
\label{sec:related_work}
Most current approaches fall under the case~\eqref{eq:nolearn}, thus involving no few-shot learning, forgoing the possibility of lifelong learning and imposing additional undue limitations by constraining the prototypes to live in the same space as the features. Many are variants of Prototypical Networks \cite{Snell:nips17}, where only one of the three components of the model is learned: $\psi$ is fixed to be the mean, so $c_k := \frac{1}{|Y_k|}\sum_{j\in Y_k} z_j$, and $\chi(z,c) =\| z - c \|^2$ is the Euclidean distance. The only learning occurs at meta-training, and the trainable portion of the model $\phi_w$ is a conventional neural network. In addition, the sampling scheme used for training makes the model dependent on the number of shots, again unnecessarily.
Other work can be classified into two main categories: gradient based \cite{meta-lstm,MAML,SNAIL,LEO} and metric based \cite{Snell:nips17,matching-net,Oreshkin:Nips18,Gidaris:cvpr18}. In the first, a \emph{meta-learner} is trained to adapt the parameters of a network to match the few-shot training set. \cite{meta-lstm} uses the base set to learn long short-term memory (LSTM) units \cite{LSTM} that update the base classifier with the data from the few-shot training set. MAML \cite{MAML} learns an initialization for the network parameters that can be adapted by gradient descent in a few steps. LEO \cite{LEO} is similar to MAML, but uses a task-specific initial condition and performs the adaptation in a lower-dimensional space. Most of these algorithms adapt $\phi_w(x)$ and use an ordinary classifier at few-shot test time. There is a different $\phi_w(x)$ for every few-shot training set, with little re-use and no continual learning.
On the metric learning side, \cite{matching-net} trains a weighted classifier using an \emph{attention mechanism} \cite{XuEtAl:AttentionMechanism:ICML:2015} that is applied to the output of a feature embedding trained on the base set. This method requires the shots at meta- and few-shot training to match. \emph{\wraptxt{Prototypical Networks}} \cite{Snell:nips17} are trained with episodic sampling and a loss function based on the performance of a nearest-mean classifier \cite{TibshiraniEtAl:CancerDiagNearestCentroid:PNAS:2002} applied to a few-shot training set. \cite{Gidaris:cvpr18} generates classification weights for a novel class based on a feature extractor using the base training set.
Finally, \cite{r2d2} incorporates ridge regression in an end-to-end manner into a deep-learning network.
These methods learn a single $\phi_w(x)$, which is reused across few-shot training tasks. The class identities are then obtained through a function defined a-priori, such as the sample mean in \cite{Snell:nips17}, an attention kernel \cite{matching-net}, or ridge regression \cite{r2d2}. The form of $\psi_w$ or $\chi$ does not change at few-shot training. \cite{Oreshkin:Nips18} uses task-specific adaptation networks to adapt the embedding network, producing outputs in a task-dependent metric space. In this method, the forms of $\chi$ and $\psi$ are fixed and the output of $\phi$ is modulated based on the few-shot training set.
Next, we describe our model that, to the best of our knowledge, is the first and only one to learn each component of the model: the embedding $\phi_w$, the metric $\chi_w$, and, implicitly, the class representation $\psi_w$.
\section{Proposed Model}
Using the formalism of Sect.~\ref{sec:stratification} we describe our modeling choices. Note that there is redundancy in the model class ${\cal M}$, as one could fix the data representation $\phi(x) = x$, and devolve all modeling capacity to $\psi$, or vice-versa. The choice depends on the application context. We outline our choices, motivated by limitations of prior work.
\noindent{\bf Embedding} $\phi_w$: In line with recent work, we choose a deep convolutional network. The details of the architecture are in Sect. \ref{sec:implementation}.
\noindent{\bf Class representation function} $\psi_w$: We define it implicitly by treating the class representations $c_k$ as parameters along with the weights $w$. As we saw earlier, this means that at few-shot training, we have to solve a simple optimization problem \eqref{eq:c-implicit} to find the representatives of new classes, rather than computing the mean as in Prototypical Networks and its variants:
\begin{equation}
c_k = \arg\min_c \sum_{j | y_j = k} \ell(\chi_w(\hat \phi(x_j), c)) = \psi_c(\{z_j \ | \ y_j = k\}).
\end{equation}
Note that the class estimates depend on the parameters $w$ in $\chi$. If few-shot learning is resource constrained, one can still learn the class representations implicitly during meta-training, and approximate them with a fixed function, such as the mean, during the few-shot phase.
\noindent{\bf Metric} $\chi$: we choose a discriminant induced by the Euclidean distance in the space of class representations, to which data representations are mapped by a learnable parameter matrix $W$:
\begin{equation}
\chi_{{}_W}(z_j, c_k) = \| W \hat \phi(x_j)- c_k \|^2
\label{eq:metric-learning-linear}
\end{equation}
Generally, we pick the dimension of $c$ larger than the dimension of $z$, to enable capturing complex multi-modal identity representations. Note that this choice encompasses metric learning: if $Q = Q^T \succ 0$ is a symmetric matrix representing a change of inner product, with Cholesky factorization $Q = S^T S$, then
\begin{equation*}
\|W \phi - c\|^2_Q = (W\phi - c)^T Q\, (W\phi - c) = \|S W \phi - S c\|^2,
\end{equation*}
which is captured by simply choosing the weights $\tilde W = SW$ and proxies $\tilde c = Sc$. Since both the weights and the class proxies are free, there is no gain in generality in adding the metric parameters $Q$. Of course, $W$ can be replaced by any non-linear map, effectively ``growing'' the model via
\begin{equation}
\chi_{w}(z_j, c_k) = \| \hat f_w(\phi(x_j)) - c_k \|^2
\end{equation}
for some parametric family $f_w$ such as a deep neural network.
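A sketch of the linear case \eqref{eq:metric-learning-linear} as a module (hypothetical PyTorch; the class name and shapes are ours):
\begin{verbatim}
import torch

class LinearMetric(torch.nn.Module):
    # chi_W(z, c) = ||W z - c||^2: W lifts F-dim
    # features into the (typically larger) C-dim
    # class-representation space
    def __init__(self, F_dim, C_dim):
        super().__init__()
        self.W = torch.nn.Linear(F_dim, C_dim,
                                 bias=False)

    def forward(self, z, c):
        # z: (n, F), c: (K, C) -> (n, K)
        return torch.cdist(self.W(z), c) ** 2
\end{verbatim}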
\section{Implementation}
\label{sec:implementation}
\paragraph{Embedding $\phi_w(x_j)$} We use two different architectures. The first \cite{Snell:nips17,matching-net} consists of four convolutional blocks, each with 64 $3\times3$ filters followed by batch normalization, a ReLU, and $2\times 2$ max-pooling. Following the convention in \cite{Gidaris:cvpr18}, we call this architecture C64. The other network is a modified ResNet \cite{resnet}, similar to \cite{Oreshkin:Nips18}. We call this ResNet-12.
In addition, we normalize the embedding to live on the unit sphere, \ie $\phi(x) \in \mathbb{S}^{d-1}$, where $d$ is the dimension of the embedding. This normalization is added as a layer, so that the feature embeddings lie on the unit sphere by construction, as opposed to applying the constraint post-hoc.
This adds some complications during meta-training due to poor scaling of gradients \cite{Wang:2017}, and is addressed by a single parameter layer after normalization, whose sole purpose is scaling the output of the normalization layer. This layer is not required at test time.
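A sketch of this normalize-then-scale layer (hypothetical PyTorch; the initial scale value is an assumption of ours):
\begin{verbatim}
import torch

class NormScale(torch.nn.Module):
    # L2-normalize onto the unit sphere, then
    # multiply by one learned scalar s (dropped at
    # test time) to counter gradient scaling
    def __init__(self, s0=10.0):
        super().__init__()
        self.s = torch.nn.Parameter(torch.tensor(s0))

    def forward(self, z):
        return self.s * \
            torch.nn.functional.normalize(z, dim=-1)
\end{verbatim}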
\paragraph{Class representation:} As noted earlier, this is implicit during meta-training.
In order to show the flexibility of our framework, we increase the dimension of the class representation.
\paragraph{Metric $\chi$} We choose the angular distance on the feature space, which after normalization is the unit sphere $\mathbb{S}^{d-1}$:
\begin{align}
\chi(z_j,c_k) = \|W z_j- c_k\|^{2} &= 2s^{2}(1 - \cos\theta),
\end{align}
where $s$ is the scaling factor used during training and $\theta$ is the angle between the normalized arguments. As the representation $z = \phi_w(x)$ is normalized, the class-conditional model is a von Mises-Fisher distribution (the spherical analogue of a Gaussian). However, as $W\phi_w(x_i) \in \mathbb{S}^{d-1}$, we also need $W \psi_w \in \mathbb{S}^{d-1}$.
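The distance-angle identity above is just the law of cosines: for any two vectors $a, b$ with common norm $\|a\|=\|b\|=s$ and angle $\theta$ between them,
\begin{equation*}
\|a - b\|^2 = \|a\|^2 + \|b\|^2 - 2\, a \cdot b = 2s^2(1 - \cos\theta).
\end{equation*}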
During meta-training we apply the same normalization and scale function to the implicit representation as well, so that the class posterior takes the inner-product form
\begin{equation}
P_w(y=k|x) \propto \exp \langle W \phi_w(x), c_{k} \rangle
\end{equation}
up to the normalization constant.
\paragraph{Sampling}
At each iteration during meta-training, images from the training set $\mathcal{B}$ are presented to the network in the form of \emph{episodes} \cite{matching-net,meta-lstm,Snell:nips17}; each episode consists of images sampled from $K$ classes. The images are selected by first sampling $K$ classes from $\mathcal{B}$ and then sampling $N_{e}$ images from each of the sampled classes. The loss function is then restricted to the $K$ classes present in the episode, as opposed to the entire set of classes available at meta-training. This setting allows the network to learn a better embedding for open-set classification, as shown in \cite{closerlook,matching-net}.
Unlike existing sampling methods that use episodic sampling~\cite{meta-lstm,Snell:nips17}, we do not split the images within an episode into a meta-train set and a meta-test set. For instance, prototypical networks \cite{Snell:nips17} use the elements in the meta-train set to learn the mean of the class representation, and \cite{meta-lstm} learns the initial conditions for optimization. This requires a notion of training ``shot,'' and results in training multiple networks to match the shots one expects at few-shot training.
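The episode sampler thus reduces to the following sketch (our own illustration; \texttt{by\_class} maps a label to its list of images):
\begin{verbatim}
import random

def sample_episode(by_class, K=5, N_e=15):
    # one episode: K classes from the base set,
    # N_e images per class; the loss is restricted
    # to these K classes
    classes = random.sample(sorted(by_class), K)
    return [(x, k) for k in classes
            for x in random.sample(by_class[k], N_e)]
\end{verbatim}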
\paragraph{Regularization}
First, we notice that the loss function \eqref{eq:meta} has a degenerate solution where all the centers and the embeddings are the same. In this case, $P_w(y=k|x_j) = P_w(y=k'|x_j)$ for all $k$ and $k'$, \ie, $P_w(y=k|x_j)$ is a uniform distribution. For this degenerate case the entropy is maximal, so we add an entropy penalty, weighted by a coefficient $\lambda$, to bias the solution away from the trivial one. We also use Dropout \cite{dropout} on top of the embedding $\phi_w(x)$ during meta-training. Even when using episodic sampling, the embedding tends to over-fit on the base set in the absence of dropout. We do not use this at few-shot train and test time.
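One plausible reading of the entropy regularizer is sketched below (our own interpretation; the additive form is an assumption, with the weight $\lambda$ set to $1$ in our experiments):
\begin{verbatim}
import torch

def regularized_loss(logits, y, lam=1.0):
    # the degenerate (uniform-posterior) solution
    # has maximal entropy, so penalizing the mean
    # posterior entropy biases training away from it
    ce = torch.nn.functional.cross_entropy(logits, y)
    p = torch.softmax(logits, dim=-1)
    ent = -(p * torch.log(p + 1e-12)) \
        .sum(dim=-1).mean()
    return ce + lam * ent
\end{verbatim}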
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figures/protfig_final.pdf}
\caption{Our meta-training loss flow: the layers represented in blue remain after meta-training, while the green layers are used only for training. Here $\|\cdot\|$ represents an $L_2$ normalization layer and $s(\cdot)$ represents a scaling layer.}
\label{fig:my_label}
\end{figure}
\figref{fig:my_label} summarizes our architecture for the loss function during meta-training. It includes layers that are only needed for training, such as the scale layer, Dropout, and the loss. During few-shot training, we only use the learned embedding $\phi_w(x).$
\section {Experimental Results}
We test our algorithm on three datasets: \wraptxt{miniImagenet}{} \cite{matching-net}, \wraptxt{tieredImagenet}{} \cite{Ren:2018} and \wraptxt{CIFAR Few-Shot}{} \cite{r2d2}. The \wraptxt{miniImagenet} dataset consists of images of size $84 \times 84$ sampled from 100 classes of the ILSVRC \cite{imagenet} dataset, with 600 images per class.
We used the data split outlined in \cite{meta-lstm}, where 64 classes are used for training, 16 classes are used for validation, and 20 classes are used for testing.
We also use \wraptxt{tieredImagenet}{} \cite{Ren:2018}.
This is a larger subset of ILSVRC, and consists of 779,165 images of size $84 \times 84$ representing 608 classes hierarchically grouped into 34 high-level classes.
The split of this dataset ensures that sub-classes of the 34 high-level classes are not spread over the training, validation and testing sets, minimizing the semantic overlap between training and test sets.
The result is 448,695 images in 351 classes for training, 124,261 images in 97 classes for validation, and 206,209 images in 160 classes for testing.
For a fair comparison, we use the same training, validation and testing splits as in \cite{Ren:2018}, and use the classes at the lowest level of the hierarchy.
Finally, we use \wraptxt{CIFAR Few-Shot}{}, (CIFAR-FS) \cite{r2d2} containing images of size $32\times32$, a reorganized version of the CIFAR-100 \cite{cifar100} dataset.
We use the same data split as in \cite{r2d2}, dividing the 100 classes into 64 for training, 16 for validation, and 20 for testing.
\subsection{Comparison to \wraptxt{Prototypical Networks}}
Many recent methods are variants of \wraptxt{Prototypical Networks}, so we perform a detailed comparison with it. We keep the training procedure, network architecture, batch size, and data augmentation the same. The performance gains are therefore solely due to the improvements in our method.
We use ADAM~\cite{Kingma2015Adam:Optimization} for training with an initial learning rate of $10^{-3}$, and a decay factor of $0.5$ every 2,000 iterations. We use the validation set to determine the best model. Our data augmentation consists of mean subtraction, standard-deviation normalization, random cropping and random flipping during training. Each episode contains 15 query samples per class during training. In all our experiments, we set $\lambda=1$ and did not tune this parameter.
Unless otherwise noted, we always test few-shot algorithms on 2,000 episodes, with 30 query points per class per episode. At few-shot training, we experimented with setting the class identity to be implicit (optimized) or the average prototype (fixed). The latter may be warranted when the few-shot phase is resource-constrained, and yields similar performance. To compare computation time, we use the fixed mean. Note that, in all cases, the class prototypes are learned implicitly during meta-training.
The results of this comparison are shown in \tabref{tab:baseline}. From this table we see that for the 5-shot 5-way case we perform similarly to \wraptxt{Prototypical Networks}. However, for the 1-shot case we see significant improvements across all three datasets. Also, the performance of \wraptxt{Prototypical Networks} drops when the train and test shots are changed: \tabref{tab:baseline} shows a significant drop in performance when models trained with 1-shot are tested in a 5-shot setting. Notice that, from the table, our method is able to maintain the same performance. Consequently, we only train \textbf{one} model and test it across the different shot scenarios, hence the moniker ``shot-free.''
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{@{} |c|c|c|c|c| @{}}
\hline
Dataset & Testing Scenario & Training Scenario & Our implementation of \cite{Snell:nips17} & Our Method \\
\hline
\multirow{ 3}{*}{\wraptxt{miniImagenet}} & \shot{1}{5} & \shot{1}{5} & 43.88 $\pm$ 0.40 & \textbf{49.07 $\pm$ 0.43} \\
& \shot{5}{5} & \shot{1}{5} & 58.33 $\pm$ 0.35 & \textbf{64.98 $\pm$ 0.35} \\
& \shot{5}{5} & \shot{5}{5} & 65.49 $\pm$ 0.35 & \textbf{65.73 $\pm$ 0.36} \\
\hline
\hline
\multirow{ 3}{*}{\wraptxt{tieredImagenet}} &\shot{1}{5} & \shot{1}{5} & 41.36 $\pm$ 0.40 & \textbf{48.19 $\pm$ 0.43}\\
& \shot{5}{5} & \shot{1}{5} & 55.93 $\pm$ 0.39 & \textbf{64.60 $\pm$ 0.39}\\
& \shot{5}{5} & \shot{5}{5} & 65.51 $\pm$ 0.38 & 65.50 $\pm$ 0.39\\
\hline
\hline
\multirow{ 3}{*}{\wraptxt{CIFAR Few-Shot}} &\shot{1}{5} & \shot{1}{5} & 50.74 $\pm$ 0.48 & \textbf{55.14 $\pm$ 0.48}\\
&\shot{5}{5} & \shot{1}{5} & 64.63 $\pm$ 0.42 & \textbf{70.33 $\pm$ 0.40}\\
&\shot{5}{5} & \shot{5}{5} & 71.57 $\pm$ 0.38 & \textbf{2x $\pm$ 0.39}\\
\hline
\end{tabular}
\end{center}
\caption{Comparison of results from our method to that of our implementation of \wraptxt{Prototypical Network}{} \cite{Snell:nips17} using the C64 network architecture.
The table shows the accuracy and the 95\% confidence interval of our method averaged over 2,000 episodes on different datasets. Note that our method does not have a notion of shot; when we refer to training with a given shot, we mean that the batch size is the same as that of the prescribed method.}
\label{tab:baseline}
\end{table*}
\subsection {Effect of Dimension of Class Identities}
Class identities $c_k$ can live in a space of different dimension than the feature embedding. This can be done in two ways: by lifting the embedding into a higher-dimensional space, or by projecting the class identity into the embedding dimension. If the dimension of the class identity changes, we also need to modify $\chi$ according to \eqref{eq:metric-learning-linear}. The weight matrix $W \in \mathbb{R}^{\mu\times d}$, where $d$ is the dimension of the embedding and $\mu$ is the dimension of the class identities, can be learned during meta-training. This is equivalent to adding a fully connected layer through which the class identities are passed before normalization. Thus, we now learn $\phi_w$, $\psi_c$ and $\chi_W$. We show experimental results with the C64 architecture on the \wraptxt{miniImagenet} dataset in \tabref{tab:meandim}. Here, we tested class identity dimensions of $2\times$,~$5\times$ and $10\times$ the dimension of the embedding. From this table we see that increasing the dimension gives a performance boost; however, the gain saturates at $2\times$ the dimension of the embedding space.
\begin{table}[htbp]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{} |c|c|c|c|c| @{}}
\hline
Dimension & 1x & 2x & 5x & 10x \\
\hline
Performance & 49.07 & 51.46 & 51.46 & 51.32\\
\hline
\end{tabular}}
\end{center}
\caption{Performance of our method on \wraptxt{miniImagenet} with the class identity dimension as a function of the embedding dimension using the C64 network architecture. The table shows the accuracy averaged over 2,000 episodes.}
\label{tab:meandim}
\end{table}
\subsection{Comparison to the State-of-the-art}
In order to compare with the state-of-the-art, we use the ResNet-12 base architecture and train our approach using SGD with Nesterov momentum, an initial learning rate of $0.1$, weight decay of $5\times10^{-4}$, momentum of $0.9$, and eight episodes per batch. The learning rate was decreased by a factor of $0.5$ every time the validation error did not improve for 1,000 iterations. We did not tune these parameters based on the dataset. As mentioned earlier, we train \textbf{one} model and test across various shots. We also compare our method with class identities in a space with twice the dimension of the embedding. Lastly, we compare our method with a variant of ResNet-12 where we change the filter sizes to (64,160,320,640) from (64,128,256,512).
The results of our comparison for \wraptxt{miniImagenet} are shown in \tabref{tab:misota}. Modulo empirical fluctuations, our method performs at the state-of-the-art and in some cases exceeds it. We wish to point out that SNAIL \cite{SNAIL}, TADAM \cite{Oreshkin:Nips18}, LEO \cite{LEO}, and MTLF \cite{MTLF} pre-train the network for a 64-way classification task on \wraptxt{miniImagenet} and a 351-way classification task on \wraptxt{tieredImagenet}. However, all the models trained for our method are trained from scratch and use no form of pre-training. We also do not use the meta-validation set for tuning any parameters, other than selecting the best trained model using the error on this set. Furthermore, unlike all other methods, we did not have to train multiple networks and tune the training strategy for each case. Lastly, LEO \cite{LEO} uses a very deep 28-layer Wide-ResNet as a base model, compared to our shallower ResNet-12; a fair comparison would involve training our method with the same base network. We include this comparison for complete transparency.
\begin{table}[htbp]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{} |c|c|c|c| @{}}
\hline
Algorithm & 1-shot & 5-Shot & 10-shot \\
& 5-way & 5-way & 5-way \\
\hline
Meta LSTM \cite{meta-lstm} & 43.44& 60.60 &- \\
Matching networks \cite{matching-net} & 44.20 & 57.0 &-\\
MAML \cite{MAML} & 48.70 & 63.1 &-\\
\wraptxt{Prototypical Networks} \cite{Snell:nips17} & 49.40 & 68.2 & -\\
Relation Net \cite{relation-net} & 50.40& 65.3 & - \\
R2D2 \cite{r2d2} & 51.20 & 68.2 &- \\
SNAIL \cite{SNAIL} & 55.70 & 68.9 &-\\
Gidaris \etal \cite{Gidaris:cvpr18} & 55.95 & 73.00 & -\\
TADAM \cite{Oreshkin:Nips18} & 58.50 & 76.7 & 80.8\\
MTFL \cite{MTLF}& 61.2 & 75.5 &- \\
LEO \cite{LEO} & 61.76 & 77.59 & -\\
\hline
Our Method (ResNet-12) &59.00 & 77.46 & 82.33\\
Our Method (ResNet-12) 2x dims. & 60.64 & 77.02 & 80.80 \\
Our Method (ResNet-12 Variant) & 59.04 & \textbf{77.64} & \textbf{82.48} \\
Our Method (ResNet-12 Variant) 2x dims & 60.71 & {77.26} & 81.34 \\
\hline
\end{tabular}}
\end{center}
\caption{Performance of 4 variants of our method on \wraptxt{miniImagenet} compared to the state-of-the-art.
The table shows the accuracy averaged over 2,000 episodes.}
\label{tab:misota}
\end{table}
\begin{table}[htbp]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{} |c|c|c|c| @{}}
\hline
Algorithm & 1-shot & 5-Shot & 10-shot \\
& 5-way & 5-way & 5-way \\
\hline
\multicolumn{4}{c} {\wraptxt{tieredImagenet}} \\
\hline
MAML \cite{MAML} & 51.67 & 70.30 & - \\
\wraptxt{Prototypical Networks} \cite{Ren:2018} & 53.31 & 72.69 & - \\
Relation Net \cite{relation-net} & 54.48 & 71.32 & -\\
LEO \cite{LEO} & 65.71 & 81.31 & - \\
\hline
Our Method (ResNet-12) & 63.99 & 81.97 & 85.89 \\
Our Method (ResNet-12) 2x dims. & \textbf{66.87} & \textbf{82.64} & 85.53 \\
Our Method (ResNet-12) Variant & 63.52 & 82.59 & \textbf{86.62} \\
Our Method (ResNet-12) Variant 2x dims & \textbf{66.87} & {82.43} & 85.74 \\
\hline
\multicolumn{4}{c} {\wraptxt{CIFAR Few-Shot}} \\
\hline
MAML \cite{MAML} & 58.9 & 71.5 & -\\
\wraptxt{Prototypical Networks} \cite{Snell:nips17} & 55.5 & 72.0 & - \\
Relation Net & 55.0 & 69.3 & - \\
R2D2 \cite{r2d2} & 65.3 & 79.4 & - \\
\hline
Our Method (ResNet-12) & 69.15 &84.70 & 87.64 \\
\hline
\end{tabular}}
\end{center}
\caption{Performance of our method on \wraptxt{tieredImagenet}~and \wraptxt{CIFAR Few-Shot} datasets as compared to the state-of-the-art. The performance numbers for \wraptxt{CIFAR Few-Shot} are from \cite{r2d2}.
The table shows the accuracy averaged over 2,000 episodes.
Note that the training setting for the prior work is different.}
\label{tab:fssota}
\end{table}
The performance of our method on \wraptxt{tieredImagenet} is shown in \tabref{tab:fssota}. This table shows that we are the top performing method for 1-shot 5-way and 5-shot 5-way. We test on this dataset as it is much larger and does not have semantic overlap between meta training and few-shot training even though fewer baselines exist for this dataset compared to \wraptxt{miniImagenet}. Also shown in \tabref{tab:fssota} is the performance of our method on the \wraptxt{CIFAR Few-Shot} dataset. We show results on this dataset to illustrate that our method can generalize across datasets. From this table we see that our method performs the best for \wraptxt{CIFAR Few-Shot}.
\subsection{Effect of Choices in Training}
As a final remark, there is no consensus on the few-shot training and testing paradigm in the literature, and many variables can affect performance. To illustrate this, we show the effect of a few training choices.
\paragraph{Effect of Optimization algorithm}
In the original implementation of \wraptxt{Prototypical Networks} \cite{Snell:nips17}, ADAM~\cite{Kingma2015Adam:Optimization} was used as the optimization algorithm. However, most newer algorithms, such as \cite{Oreshkin:Nips18,Gidaris:cvpr18}, use SGD. The result of using different optimization algorithms is shown in \tabref{tab:choices}, for our algorithm on the \wraptxt{miniImagenet} dataset using a ResNet-12 model. While for the \shot{1}{5} scenario the results are better with ADAM, the same does not hold for the \shot{5}{5} and \shot{10}{5} scenarios. This suggests that SGD generalizes better for our algorithm than ADAM.
\begin{table}[htbp]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{} |c|c|c|c| @{}}
\hline
Optimization Algorithm & 1-shot & 5-Shot & 10-shot \\
& 5-way & 5-way & 5-way\\
\hline
ADAM & \textbf{59.41} & 76.75 & 81.33 \\
SGD & 59.00 & \textbf{77.46} & \textbf{82.33} \\
\hline
\end{tabular}}
\end{center}
\caption{Performance of our method on \wraptxt{miniImagenet} using the ResNet-12 model with different choices of optimization algorithm. The table shows the accuracy averaged over 2,000 episodes.}
\label{tab:choices}
\end{table}
\paragraph{Effect of number of tasks per iteration.}
TADAM \cite{Oreshkin:Nips18} and Gidaris \etal \cite{Gidaris:cvpr18} use multiple episodes per iteration, referred to as tasks in TADAM \cite{Oreshkin:Nips18}, which uses 2 tasks for 5-shot, 1 task for 10-shot, and 5 tasks for 1-shot. We did not perform any such tuning and instead defaulted to 8 episodes per iteration based on Gidaris \etal \cite{Gidaris:cvpr18}. We also experimented with 16 episodes per iteration; however, this led to a loss in performance across all testing scenarios. \tabref{tab:choicestask} shows the performance numbers on the \wraptxt{miniImagenet} dataset using the ResNet-12 architecture, trained with ADAM~\cite{Kingma2015Adam:Optimization}. From this table we see that for all scenarios 8 episodes per iteration performs better.
\begin{table}[htbp]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{} |c|c|c|c| @{}}
\hline
Choice & 1-shot & 5-Shot & 10-shot \\
& 5-way & 5-way & 5-way\\
\hline
8 episodes per iteration & \textbf{59.41} & \textbf{76.75} & \textbf{81.33} \\
16 episodes per iteration & 58.22 & 74.53 & 78.61 \\
\hline
\end{tabular}}
\end{center}
\caption{Performance of our method on \wraptxt{miniImagenet} using a ResNet-12 model with different choices of episodes per iteration. The table shows the accuracy averaged over 2,000 episodes.}
\label{tab:choicestask}
\end{table}
Even with all major factors such as network architecture, training procedure, and batch size remaining the same, factors such as the number of query points used for testing affect the performance; methods in the existing literature use anywhere between 15 and 30 query points, and for some methods it is unclear what this choice was. This calls for stricter evaluation protocols and richer benchmark datasets.
\section{Discussion}
We have presented a method for meta-learning for few-shot learning where all three ingredients of the problem are learned: The representation of the data $\phi_w$, the representation of the classes $\psi_c$, and the metric or membership function $\chi_W$. The method has several advantages compared to prior approaches. First, by allowing the class representation and the data representation spaces to be different, we can allocate more representative power to the class prototypes. Second, by learning the class models implicitly we can handle a variable number of shots without having to resort to complex architectures, or worse, training different architectures, one for each number of shots. Finally, by learning the membership function we implicitly learn the metric, which allows class prototypes to redistribute during few-shot learning.
While some of these benefits are not immediately evident due to limited benchmarks, the improved generality allows our model to extend to a continual learning setting where the number of new classes grows over time, and is flexible in allowing each new class to come with its own number of shots. Our model is simpler than some of the top performing ones in the benchmarks. A single model performs on-par or better in the few-shot setting and offers added generality.
\bibliographystyle{ieee_fullname}
\balance
\section{INTRODUCTION}
It is well known that quantum information tasks, {\it e.g.}, quantum
cryptography, quantum encoding \cite{bin,Metwally2011} and quantum
computation \cite{Nilsen}, require pure states to be implemented
with high efficiency. However, decoherence is an inevitable
process due to the interaction with the surroundings. Different
techniques have been introduced to protect these states from
decoherence. Among these methods are quantum purification
\cite{bin96}, weak measurement \cite{Guo}, and quantum filtering
\cite{Yash}.
Recently, it was shown that different pulse shapes can keep
quantum correlations alive, so that the phenomenon of long-lived
entanglement is observed \cite{Shoukry}. Very recently, Metwally
and Hassan \cite{Shoukry1} investigated the initial parameters
describing the pulsed driven state that maximize/minimize the
Fisher information contained in that state. In our previous work,
we showed that the possibility of estimating these parameters is
very small during the pulse duration, and for some cases it is
frozen. This means that one may estimate these parameters only
within a certain constant value during the pulse time;
consequently, if the state is captured by an eavesdropper, he/she
gains minimal information or nothing at all. These observations
motivated us to investigate the possibility of freezing
\cite{Metwally2018} the information carried by qubits pulsed from
a sender to a receiver by using a rectangular pulse.
The paper is organized as follows. In Sec.~2, we describe the
initial system and its driving by the rectangular pulse. In
Sec.~3, we evaluate the encoded information of the driven qubit.
Finally, we summarize our results in Sec.~4.
\section{The suggested Model }
Here, we consider a single qubit, taken as a 2-level atomic
transition of frequency $\omega_q$, driven by a short laser
pulse of arbitrary shape and of circular frequency $\omega_c$, in
the absence of any dissipation process. The quantized Hamiltonian
of the system (in units of $\hbar=1$), in the dipole and rotating
wave approximation and in a frame rotating at $\omega_c$, is given
by \cite{Shoukry1},
\begin{equation}\label{Ham}
\hat{H}=\Delta\hat{\sigma_z}+\frac{\Omega(t)}{2}(\hat{\sigma_{+}}+\hat{\sigma_{-}})
\end{equation}
where the spin-$\frac{1}{2}$ operators $\hat{\sigma}_{\pm,z}$ obey
the $su(2)$ algebra,
\begin{equation}\label{Com}
[\hat{\sigma_{+}}, \hat{\sigma_{-}}]=2\hat{\sigma_{z}},\quad
[\hat{\sigma_{z}},\hat{\sigma_{\pm}}]=\pm\hat{\sigma_{\pm}}
\end{equation}
and $\Delta=\omega_q-\omega_c$ is the atomic detuning, while
$\Omega(t)=\Omega_0 f(t)$ is the real laser Rabi frequency with
pulse shape $f(t)$. The Heisenberg equations of motion for the
spin operators
$\hat{\sigma}_x=\frac{1}{2}(\hat{\sigma}_{+}+\hat{\sigma}_{-})$,
$\hat{\sigma}_y=\frac{1}{2i}(\hat{\sigma}_{+}-\hat{\sigma}_{-})$ and
$\hat{\sigma}_z$, according to (1) and (2), are of the form
\begin{eqnarray}
\hat{\sigma}'_x&=&-\Delta\hat{\sigma}_y
\nonumber\\
\hat{\sigma}'_y&=&\Delta\hat{\sigma}_x-\Omega(t)\hat{\sigma}_z \nonumber\\
\hat{\sigma}'_z&=&\Omega(t)\hat{\sigma}_y
\end{eqnarray}
In the case of a rectangular pulse of short duration $T$ (much
smaller than the lifetime of the qubit), we have
$\Omega(t)=\Omega_0$, i.e., $f(t)=1$ for $t\in[0,T]$ and zero
otherwise. In this case, the exact solution for the average Bloch
vector components $s_{x,y,z}(t)=\expect{{\hat\sigma}_{x,y,z}(t)}$
takes the matrix form (cf. \cite{Shoukry1,Sukry008}),
\begin{equation}
\row{\sigma(t)}=A(t)\row{\sigma(0)}
\end{equation}
where $\row{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ and the matrix
$A=[a_{ij}]$, $i,j=1,\dots,3$, has coefficients $a_{ij}$ given in
Appendix A.
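Although the closed-form coefficients are deferred to the appendix, Eqs.~(3) can also be integrated numerically as a cross-check; a minimal sketch (ours, using SciPy) reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def bloch_rect(s0, Delta, Omega0, T):
    # integrate Eqs. (3) for the rectangular pulse
    # Omega(t) = Omega0 on [0, T]; a numerical
    # stand-in for the matrix A(t) of Appendix A
    def rhs(t, s):
        sx, sy, sz = s
        return [-Delta * sy,
                Delta * sx - Omega0 * sz,
                Omega0 * sy]
    return solve_ivp(rhs, (0.0, T), s0,
                     dense_output=True)
\end{verbatim}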
Initially, we assume that the information is encoded in the single
qubit which is prepared in the coherent state,
\begin{equation}\label{iniQ}
\ket{\psi_q}=\cos(\theta/2)\ket{0}+e^{-i\phi}\sin(\theta/2)\ket{1},
\end{equation}
where $0\leq \phi\leq 2\pi$, $0\leq \theta \leq \pi$ and
$\ket{0},\ket{1}$ are the lower and upper states, respectively.
The initial Bloch vector $\row{s(0)}$ with the state (\ref{iniQ})
has the componnents,
\begin{equation}
s_x(0)=\sin\theta\cos\phi, ~~s_y(0)=\sin\theta\sin\phi,~~
s_z(0)=-\cos\theta
\end{equation}
\section{Dynamics of information}
\subsection{Mathematical Forms}
\begin{itemize}
\item Fisher Information:
It is known that the density operator for a 2-level atomic system
is given by,
\begin{equation}
\rho_q=\frac{1}{2}(I+\row{s}\cdot\row{\sigma})
\end{equation}
where $\row{s}=(s_x(0),s_y(0),s_z(0))$ is the Bloch vector and
$\hat\sigma=(\hat\sigma_x,\hat\sigma_y,\hat\sigma_z)$ are the
Pauli spin operators. In terms of the Bloch vector
$\row{s}(\beta)$, the quantum Fisher information (QFI) with
respect to the parameter $\beta$ is defined as
\cite{Shoukry1,Xing016},
\begin{equation}
\mathcal{F}_{\beta} = \left\{ \begin{array}{ll}
\frac{\Bigl[\row{s}(\beta)\cdot\frac{\partial{\row{s}(\beta)}}{\partial\beta}\Bigr]^2}{1-\bigl|\row{s}(\beta)\bigr|^2}
+\Bigl|\frac{\partial\row{s}(\beta)}{\partial\beta}\Bigr|^2, & \bigl|\row{s}(\beta)\bigr|<1,\\[2ex]
\Bigl|\frac{\partial\row{s}(\beta)}{\partial\beta}\Bigr|^2, & \bigl|\row{s}(\beta)\bigr|=1\\
\end{array} \right.
\end{equation}
where $\beta$ is the parameter to be estimated. From Eq.~(7), it
is clear that the final solution depends on the initial parameters
($\theta, \phi$) in addition to the system parameters $\Delta$ and
$\Omega_0$.
\item The encoded information\\
Let us assume that Alice has encoded given information to be used
in the context of quantum cryptography, for example, using the
Bennett and Wiesner protocol \cite{bin}. If the final state is
given by
\begin{equation}\label{Final}
\rho(t)=\frac{1}{2}(1+s_x(t)\sigma_x+s_y(t)\sigma_y+s_z(t)\sigma_z)
\end{equation}
The amount of the coded information is given by
\begin{equation}
I_{cod}=-\lambda_1\log\lambda_1-\lambda_2\log\lambda_2
\end{equation}
where $\lambda_i$, $i=1,2$, are the eigenvalues of the state
(\ref{Final}); a numerical sketch of both quantities follows this
list.
\end{itemize}
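Both quantities can be evaluated numerically from the Bloch vector; the sketch below is our own illustration (finite differences are assumed for $\partial\row{s}/\partial\beta$, and a base-2 logarithm is assumed in $I_{cod}$):
\begin{verbatim}
import numpy as np

def qfi(s, ds):
    # QFI from the Bloch vector s and its
    # derivative ds = d s / d beta
    n2 = float(np.dot(s, s))
    if np.isclose(n2, 1.0):
        return float(np.dot(ds, ds))
    return float(np.dot(s, ds) ** 2 / (1.0 - n2)
                 + np.dot(ds, ds))

def encoded_info(s):
    # I_cod from the eigenvalues (1 +/- |s|)/2
    # of rho = (I + s.sigma)/2
    r = np.linalg.norm(s)
    lam = np.array([(1 + r) / 2, (1 - r) / 2])
    lam = lam[lam > 0]
    return float(-(lam * np.log2(lam)).sum())
\end{verbatim}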
\subsection{Numerical results}
In the following subsections, we estimate these parameters by
calculating their corresponding QFI, $\mathcal{F}_\beta$. The
larger the QFI, the higher the degree of estimation of the
parameter $\beta$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.35\textwidth]{FIS-phi-pi-D-0.2a.eps}
\put(-130,15){\large $\theta$}
\put(-190,75){\large $\mathcal{F}_\theta$}
\put(-15,35){\large $\Omega_0$}
\put(-190,100){$(a)$}\\
\vspace{0.5cm}
\includegraphics[width=0.35\textwidth]{FIS-phi-pi-D-0.2b.eps}
\put(-90,-5){\large $\theta$}
\put(-190,95){\large $\Omega_0$}
\put(-190,140){$(b)$}
\caption{ (a) The pulsed Fisher information $(\mathcal{F}_\theta)$
as a function of the frequency $\Omega_0$ and $\theta$ with $\Delta=0.2, \phi=\pi$
(b) The contour of $\mathcal{F}_\theta$. }
\end{figure}
Fig.(1a) describes the behavior of the quantum Fisher information
with respect to the weight parameter $\theta$ as a function of
the frequency $\Omega_0$ at a small value of the detuning
parameter $\Delta$. It is clear that the quantum Fisher
information $\mathcal{F}_\theta$ is almost zero for any value of
$\Omega_0<0.1$ and any initial value $\theta\in[0,\pi]$. This
means that in this interval one cannot estimate the weight
parameter. However, for larger values of $\Omega_0$,
$\mathcal{F}_\theta$ increases gradually to reach its maximum
values at $\Omega_0=1$. Note also that for the range
$0.4<\Omega_0<1$, the quantum Fisher information decreases for
$\theta\in[0,\pi/4]$ and increases for $\theta\in[\pi/4,\pi]$.
This behavior is displayed in Fig.(1b) as a contour plot, which
is divided into different regions that share the same degree of
brightness/darkness. This means that in these regions the
quantum Fisher information $\mathcal{F}_\theta$ is frozen. In the
brighter regions, the possibility of estimating the weight
parameter $\theta$ increases, while it decreases as the darkness
increases.
\begin{figure}[t!]
\centering
\includegraphics[width=0.35\textwidth]{Cod-phi-pi-D-0.2a.eps}
\put(-130,15){\large $\theta$}
\put(-200,80){\large $I_{cod}$}
\put(-15,35){\large $\Omega_0$}
\put(-190,120){$(a)$}\\
\vspace{0.5cm}
\includegraphics[width=0.35\textwidth]{Cod-phi-pi-D-0.2b.eps}
\put(-100,-5){\large $\theta$}
\put(-190,95){\large $\Omega_0$}
\put(-200,160){$(b)$}
\caption{(a)The pulsed encoded information, $I_{cod}$
as a function of the frequency $\Omega_0$ and $\theta$ with $\Delta=0.2, \phi=\pi$
(b) The contour plot of $I_{cod}$. }
\end{figure}
In Fig.(2a), we plot the amount of the encoded information in the
pulsed state at $\Delta=0.2$. It is clear that, as soon as the
pulse is switched on, the encoded information $I_{cod}$ is maximal
at small values of $\Omega_0$ and for any initial value of the
weight parameter $\theta$. For larger values of $\Omega_0$, the
encoded information $I_{cod}$ gradually decreases, with its
minimum values around $\theta=\pi/2$. The contour plot, Fig.(2b),
displays the regions in which the encoded information is large
and decreases as the initial weight parameter $\theta$ decreases.
On the other hand, no dark regions are depicted, which means that
the encoded information cannot vanish.
For a larger value of the detuning parameter $(\Delta=0.9)$, the
contour of $\mathcal{F}_\theta$ in the $(\theta,\Omega_0)$-plane
is shown in Fig.(3), which displays the areas where the quantum
Fisher information may be frozen. It is clear that the dark
regions are wider than those displayed for small values of the
detuning parameter (see Fig.(1b)). This means that the possibility
of estimating $\theta$ decreases as one increases $\Delta$.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{FIS-phi-pi-D-0.9b.eps}
\put(-90,-5){\large $\theta$}
\put(-190,95){\large $\Omega_0$}
\caption{ The contour plot of $\mathcal{F}_\theta$ in the
$(\theta, \Omega_0)$-plane with $\Delta=0.9,~ \phi=\pi$ . }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{Cod-phi-pi-D-0.9b.eps}
\put(-90,-5){\large $\theta$}
\put(-190,95){\large $\Omega_0$}
\caption{The same as Fig.(3) but for $I_{cod}$. }
\end{figure}
The contour plot of the encoded information $I_{cod}$ in
Fig.(4) shows that the size of the bright
regions is much larger than that displayed in Fig.(2b). However,
the degree of brightness decreases as $\Omega_0$ increases, which
means that there is a leakage of the pulsed information.
From Figs.(1-4), one may conclude that it is possible to freeze
the coherence of the estimation degree of the weight parameter
$\theta$ by controlling the strength of the pulse and the
detuning between the qubit and the pulse. For larger values of the
detuning and smaller values of the strength, one can increase the
possibility of freezing the estimation degree of the weight
parameter. The amount of the coded information may be maximized
as the estimation degree of the weight parameter is minimized.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{FIS-phi-pi-omega-0.1b.eps}
\put(-90,-5){\large $\theta$}
\put(-190,95){\large $\Delta$}
\caption{ The contour plot of $\mathcal{F}_\theta$
in the $(\theta, \Delta)$-plane with $\Omega_0=0.1, \phi=\pi$. }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{Cod-phi-pi-omega-0.1b.eps}
\put(-90,-5){\large $\theta$}
\put(-190,95){\large $\Delta$}
\caption{ The same as Fig.(5) but for the encoded information $I_{cod}$. }
\end{figure}
Figs.(5) and (6) display the contour behavior of the quantum
Fisher information and the encoded information, respectively, in
the $(\theta, \Delta)$-plane. It is clear that the detuning
parameter has a decoherence effect on the Fisher information and
a coherence effect on the encoded information. Fig.(5) shows the
size of the regions in which one may estimate the weight parameter
$\theta$, where the possibility of freezing the pulsed Fisher
information increases as the detuning parameter increases.
The dynamics of the pulsed encoded information $I_{cod}$ is
depicted in Fig.(6), where it reaches its maximum values at
$\Delta=\theta=0$, decreases suddenly as the initial weight
parameter increases, and vanishes completely at
$\theta\simeq\pi/16$. However, at any
$\theta\in[\pi/16,15\pi/16]$ and $\Delta<0.1$, the encoded
information is almost zero. For larger values of $\Delta$ and
arbitrary values of the weight parameter, the encoded information
is almost maximal. There are two displayed peaks where the
encoded information $I_{cod}$ slightly decreases. Fig.(6)
represents the behavior of the encoded information in a contour
plot, where the indicated dark regions are very small, while the
bright regions are large and reach their maximum values at
large values of the detuning parameter.
Further, it is clear that one can maximize the amount of pulsed
encoded information at the expense of minimizing the estimation
degree of the weight parameter $\theta$. This phenomenon may be
achieved by decreasing the pulse strength and increasing the
detuning between the qubit and the pulse.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{FIS-theta-pi-2-omega-0.5b.eps}
\put(-100,-5){\large $\phi$}
\put(-200,95){\large $\Delta$}
\caption{ The contour plot of $\mathcal{F}_\phi$ in the $(\phi,\Delta)$-plane
with $\Omega_0=0.5, \theta=\pi$. }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{Cod-theta-pi-2-omega-0.5b.eps}
\put(-100,-5){\large $\phi$}
\put(-190,95){\large $\Delta$}
\caption{ The same as Fig.(7) but for the encoded information $I_{cod}$. }
\end{figure}
In Figs.(7) and (8), we investigate the behavior of the Fisher
information $\mathcal{F}_\phi$ and the encoded information when
the phase parameter $\phi$ is estimated, with the driven
qubit initially prepared in the state $e^{-i\phi}\ket{1}$,
namely, with the weight parameter set to $\theta=\pi$. It is clear
that larger values of the detuning have a decoherence effect
on the Fisher information $\mathcal{F}_\phi$, which decreases
as $\Delta$ increases. Fig.(7) displays the area in which the
Fisher information is frozen, where the degree of darkness
indicates the estimation degree. As $\Delta$ increases, the
darkness increases, which means that the possibility of estimating
the phase parameter $\phi$ decreases. On the other hand, Fig.(8),
for the encoded information, shows that the brightness increases
as $\Delta$ increases and the maximum bounds are displayed around
$\phi=\pi/2$.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{Cod-theta-pi-omega-0.5b.eps}
\put(-100,-5){\large $\phi$}
\put(-190,95){\large $\Delta$}
\caption{ The encoded information $I_{cod}$, in
the $(\phi,\Delta)$-plane with $\Omega_0=0.5, \theta=0$. }
\end{figure}
Fig.(9) describes the contour behavior of the encoded information
$I_{cod}$ for a different initial state setting, where the qubit
is assumed to be initially prepared in the state
$\ket{\psi(0)}=\ket{0}$, namely, $\theta=0$. This means that the
initial state does not depend on the phase $\phi$, which may
therefore be chosen arbitrarily. On the other hand, the freezing
phenomenon of the pulsed encoded information $I_{cod}$ is depicted
at small values of the detuning, and the degree of freezing
decreases as the detuning increases.
\section{CONCLUSIONS}
In this contribution, we investigate the relation between the
pulsed Fisher information of the qubit's parameters and the
encoded information. The suggested system consists of a single
qubit driven by a rectangular pulse. These physical quantities,
the Fisher and the encoded information, are discussed for
different values of the pulse strength and the detuning between
the qubit and the pulse.
In the case of estimating the weight parameter $\theta$, it is
shown that large values of the pulse strength increase the
possibility of estimating the weight parameter and decrease the
capacity of the information encoded in the qubit. Large values of
the detuning increase the size of the frozen areas for the two
physical quantities: the estimation degree and the channel
capacity. However, for increased detuning, the estimation degree
of the weight parameter increases, while the channel capacity
decreases. The behavior of the Fisher information and the encoded
information as functions of the detuning parameter is discussed
for small values of the pulse strength. It is shown that it is
possible to maximize the channel capacity at the expense of the
estimation degree. Moreover, one can always freeze both quantities
for any initial state setting of the weight parameter.
The behavior of the Fisher information and the coded information
is also discussed when the phase parameter $\phi$ is estimated. In
this case, the initial phase plays an important role in the
decoherence/coherence effect of the pulse. The results show
that the encoded information does not depend on $\phi$, while the
Fisher information does.
{\it In conclusion}, it is possible to freeze the Fisher
information and the amount of the encoded information for both
qubit parameters $(\theta,\phi)$. One can increase the size of
the frozen area of the encoded information at the expense of the
Fisher information. We showed that the encoded information does
not depend on the phase $\phi$. We expect that these results may
be useful in the context of cryptography and secure
communications.
\section{Introduction}
OLYMPUS is a particle physics experiment comparing the elastic cross section
for positron-proton scattering to that of electron-proton scattering \cite{Milner:2013daa}. This
measurement has been of interest recently because it tests the hypothesis that
hard two-photon exchange is responsible for the proton form factor discrepancy \cite{Guichon:2003qm,Blunden:2003sp}.
OLYMPUS took data in 2012 and 2013 at the DORIS storage ring at DESY, in Hamburg,
Germany. During data taking, beams of electrons and positrons were directed
through a windowless hydrogen gas target. The scattered lepton and recoiling
proton from elastic scattering events were detected in coincidence with a toroidal
magnetic spectrometer. The spectrometer's support structure, magnet, and several
subdetectors were originally part of the BLAST experiment \cite{Hasell:2009zza}.
Several new detectors were specially built for OLYMPUS to serve as luminosity monitors.
A schematic of the apparatus is shown in Figure \ref{fig:olympus}.
\begin{figure}[hptb]
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0,0)
{
\includegraphics[width=\columnwidth,clip=true,trim=0 5.5cm 0 4cm]{mag.pdf}
};
\begin{scope}[x={(image.south east)},y={(image.north west)},scale=1.15]
\node[align=center,black] at (.11,.68) {$z$};
\node[align=center,black] at (.12,.82) {$y$};
\node[align=center,black] at (.18,.75) {$x$};
\node[align=left,black] at (.72,.69) {target chamber};
\node[align=center,black] at (.518,.805) {beam direction};
\draw[arrows=->,draw=black,thick] (.1,.73) -- (.1,.83) node[midway,above]{};
\draw[arrows=->,draw=black,thick] (.1,.73) -- (.2,.73) node[midway,above]{};
\draw[arrows=->,draw=black,thick] (.1,.73) -- (0.085,.67) node[midway,above]{};
\draw[arrows=-,draw=black] (.1,0.01) -- (.1,0.04) node[midway,above]{};
\draw[arrows=-,draw=black] (.711,0.01) -- (.711,0.04) node[midway,above]{};
\node[align=center,black] at (.406,0.02) {5~m};
\draw[arrows=<-,draw=black,thick] (.1,.02) -- (.37,.02) node[midway,above]{};
\draw[arrows=<-,draw=black,thick] (.711,.02) -- (.442,.02) node[midway,above]{};
\draw[arrows=->,draw=black,thick] (.506,.785) -- (.482,.660) node[midway,above]{};
\draw[arrows=->,draw=black,thick] (.606,.69) -- (.445,.495) node[midway,above]{};
\end{scope}
\end{tikzpicture}
\caption{This schematic shows how the eight magnet coils are situated around the
OLYMPUS detectors.
\label{fig:olympus}}
\end{figure}
The OLYMPUS spectrometer used a magnetic field for two purposes.
First, the field produced curvature in the trajectories of charged particles
so that the detectors could measure their momentum. Typical momenta of particles
from elastic scattering reactions ranged from $0.2$ to $2$~GeV$/c$,
corresponding to sagittas as small as $5$~mm in the OLYMPUS tracking detectors. Secondly, the magnet acted
as a filter, preventing low-energy charged particles (from background processes
like M\o ller or Bhabha scattering) from reaching the tracking detectors.
\begin{figure}[hptb]
\includegraphics[width=\columnwidth]{coil_design.pdf}
\caption{Individual magnet coils were narrower at the upstream end to
accommodate the target chamber.
\label{fig:coil}}
\end{figure}
The OLYMPUS magnet consisted of eight coils, identical in shape, each with
26 windings of hollow copper bars potted together with epoxy.
The coil shape is shown in Figure \ref{fig:coil}.
The coils were arranged to produce a toroidal field,
with the beamline passing down the symmetry axis of the toroid. During
OLYMPUS running, the coils carried 5000~A of current, which produced a
peak field of approximately 0.3~T.
Knowledge of the spectrometer's magnetic field was necessary for reconstructing
particle trajectories through the OLYMPUS spectrometer. However, calculating
the field using the design drawings and the Biot-Savart law was not feasible for two reasons. First,
the positions of the copper bars within the epoxy were not well known. Secondly,
the coils were observed to deform slightly when current passed through them due to magnetic forces.
Instead, at the conclusion of data taking, extensive field measurements
were made of the magnet in situ. A measurement apparatus, consisting of a
three-dimensional Hall probe actuated by a system of translation tables,
was used to measure the magnetic field vector at over 36,000 positions in
both the left- and right-sector tracking volumes.
This paper presents both the measurement technique and the subsequent
data analysis used to characterize the field of the OLYMPUS magnet.
Section \ref{sec:measurements} describes the measurement apparatus,
the measurement procedure, and the techniques used to establish the Hall probe
position. Section \ref{sec:coilfitting} describes how we fit the magnetic field
data with a numerical field model to allow us to calculate the field at positions
we did not measure. Section \ref{sec:interpolation} describes the special interpolation
scheme we developed to facilitate rapid queries of the magnetic field. Finally, Section
\ref{sec:special} describes the special modifications to our field model that were
needed in two regions where the model did not perform adequately.
\section{Coordinate System}
This paper makes frequent references to positions and directions in
the standard OLYMPUS coordinate system. In this system, the $x$-axis points
left from the beam direction, the $y$-axis points up,
and the $z$-axis points in the direction of the beam. The
coordinate origin is taken to be the center of the target. OLYMPUS has
two sectors of detectors, which lie to the left ($x>0$) and right ($x<0$)
of the beamline, centered on the $y=0$ plane. Since the magnet has toroidal
symmetry, it is sometimes convenient to work with cylindrical coordinates.
We use $r$ to refer to the radius from the $z$ axis and
$\phi$ to refer to the azimuthal angle from the $xz$ plane. For example,
a point on the positive $y$ axis lies at $\phi=90^\circ$.
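For concreteness, the conversion implied by this convention can be sketched as follows (illustrative Python; the function name is ours):
\begin{verbatim}
import numpy as np

def to_cylindrical(x, y, z):
    # r: radius from the z axis; phi: azimuth from
    # the xz plane, in degrees (the +y axis lies
    # at phi = 90 deg)
    return (np.hypot(x, y),
            np.degrees(np.arctan2(y, x)), z)
\end{verbatim}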
\section{Field Measurements}
\label{sec:measurements}
The magnetic field measurements at OLYMPUS were more involved than those
made during the BLAST experiment, detailed in a previous article
\cite{Dow:2009zz}. Like at BLAST, preliminary field measurements along
the beamline were made to align the coils during the toroid's assembly;
in addition, a detailed survey was made after data taking was complete.
This was important because OLYMPUS compared scattering with electrons to scattering
with positrons; the magnetic field introduces differences in trajectories
between the two species. Field inaccuracies directly contribute to the systematic error.
\begin{figure}[hptb]
\includegraphics[width=\columnwidth]{tables.pdf}
\caption{Two 1~m $xy$ tables supported a long $xyz$ table, which
could scan 4~m in the $z$ direction. In this configuration, the
apparatus is assembled to measure the field on the $x>0$ side of
the toroid.
\label{fig:tables}}
\end{figure}
The measurement apparatus was built from a system of translation tables,
a schematic of which is shown in Figure \ref{fig:tables}. The apparatus
was originally built to measure the field of the undulator magnets of the
FLASH free electron laser at DESY \cite{Grimm2010105}.
The entire apparatus was supported by a pair of two-dimensional translation
stands, which had 1~m of range in the $x$ and $y$ directions. These stands
were moved synchronously and acted as a single table. This table supported
a three-dimensional translation table with 4~m of range in the $z$ direction
and 0.2~m of range in the $x$ and $y$ directions. This system of translation
tables was used to move a three-dimensional Hall probe at the end of a carbon fiber rod,
held parallel to the $x$ axis. The range of motion in $x$ and $y$ was
extended beyond the $1.2$~m range of the translation tables by using rods of different
lengths and different brackets to connect the rods to the tables.
To allow the Hall probe and rod to move through the magnet volume without
obstructions, the detectors and parts of the beamline were removed before
the apparatus was assembled.
The position of the Hall probe was monitored using a theodolite equipped
with a laser range-finder. The theodolite could determine the relative position
to a reflective target, which was attached to the measurement rod. The theodolite's
position was calibrated daily by surveying a set of reference points
on the walls of the experimental hall and on the frame of the magnet, the
positions of which were established by previous surveys.
Measurement scans were made by moving to a desired $x$ and $y$ position
and then stepping along the $z$ direction. At the starting and ending point
of a scan, the theodolite was used to survey the position of the
reflective target. At each step in the scan, the probe would be moved to the
new $z$ position, followed by a pause of one second to allow any vibrations in
the rod to dampen. Then, a measurement was made, and the probe was stepped to the
next point. This procedure was computer-controlled using a LabVIEW application.
Measurements were made in a three-dimensional grid with 50~mm spacing within 1~m
of the beamline and 100~mm spacing elsewhere. Measurement scans were made along as
many $x,y$ trajectories as could be reached, given the ranges of the rods and tables,
while still avoiding collisions between the magnet and the measurement apparatus.
The nominal probe positions for each scan are shown in Figure \ref{fig:scan}.
The field was measured at over 36,000 points across both sectors.
\begin{figure}[hptb]
\includegraphics[width=\columnwidth]{scans.pdf}
\caption{Magnetic field measurements were made with greater density in the
inner part of the tracking volume where the field gradients are highest.
\label{fig:scan}}
\end{figure}
After surveying the left sector, the apparatus was taken apart and reassembled
for measurements of the right sector. The field in the beamline region was measured from both the
left and right. These overlapping measurements were consistent in all three field
directions at a level better than $5\times 10^{-5}$~T.
Measurements with the theodolite confirmed that the translation tables did not
provide the millimeter-level accuracy desired. Over the course of $4$~m of translation
in $z$, the long table introduced perturbations of several millimeters in $x$ and $y$.
The position measurements from the theodolite were used to correct for these
perturbations and recover the true position of the Hall probe. Every time a new rod
was installed on the tables, a calibration scan was made, in which the reflective
target was surveyed at every step in the $4$~m $z$ translation. This allowed us to
determine the three-dimensional trajectory of the target. We found that the trajectories
had similar shapes for all rods, and the motion could be parameterized with:
\begin{align}
x(z) =& x_t(z) + L_r \cos\left(\theta_t (z)\right) + \text{linear term} \\
y(z) =& y_t(z) + L_r \sin \left(\theta_t (z)\right) + \text{linear term},
\label{eq:perturb}
\end{align}
where $x_t$, $y_t$, and $\theta_t$ represent perturbation functions that are
common to the table (independent of which rod was used), and $L_r$ is the length
of the specific rod in use. A linear term was used to match the start and end
points of the trajectory to the starting and ending positions of each measurement scan,
as surveyed by the theodolite. Figure \ref{fig:traj} shows data from three
calibration scans fit using this parameterization.
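For illustration, such a shared-parameter fit could be sketched in Python as follows. This is not the analysis code; the scan data structure, polynomial orders, and function names are our assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

# scans: list of calibration scans, each a dict (hypothetical layout) with
# surveyed target positions z, x, y (arrays) and the rod length L_r (scalar)
def residuals(p, scans, n=4):
    # first 3n entries: polynomial coefficients of the shared perturbation
    # functions x_t(z), y_t(z), theta_t(z); then an offset and slope in x
    # and in y (the "linear terms") for each individual scan
    cx, cy, ct = p[:n], p[n:2*n], p[2*n:3*n]
    res = []
    for k, s in enumerate(scans):
        a = p[3*n + 4*k : 3*n + 4*(k + 1)]
        th = np.polyval(ct, s["z"])
        x = np.polyval(cx, s["z"]) + s["L_r"]*np.cos(th) + a[0] + a[1]*s["z"]
        y = np.polyval(cy, s["z"]) + s["L_r"]*np.sin(th) + a[2] + a[3]*s["z"]
        res += [x - s["x"], y - s["y"]]
    return np.concatenate(res)

# p0 = np.zeros(3*4 + 4*len(scans))
# fit = least_squares(residuals, p0, args=(scans,))
\end{verbatim}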
\begin{figure}[hptb]
\includegraphics[width=\columnwidth]{trajectories.pdf}
\caption{The translation tables deviated from their nominal positions over the length of a scan,
but these deviations could be fit with a simple parameterization. \label{fig:traj}}
\end{figure}
\section{Coil Fitting}
\label{sec:coilfitting}
The magnetic field measurement data were fit using a numerical magnetic field
model, thereby solving two problems. First, the model can be used to calculate
the magnetic field in regions where measurements could not be made, i.e., where the
probe was obstructed. Secondly, any measurements with errant field readings or offset
position assignments have their local influence dampened in the model.
In the magnetic field model, the current in each copper winding was approximated
by a sequence of line segments. The magnetic field produced by a line segment of
current was found by integrating the Biot-Savart law. The magnetic field at
position $\vec{x}$ produced by a line segment starting at position $\vec{p}_1$ and
ending at $\vec{p}_2$, carrying current $I$, is given by:
\begin{equation}
\vec{B} = \frac{\mu_0 I}{4\pi} \frac{\vec{L}\cdot(\hat{V}_2 - \hat{V}_1)}{\left|\vec{L}\times(\vec{x}-\vec{c})\right|^2} \, \vec{L} \times (\vec{x}-\vec{c})
\end{equation}
\begin{equation}
\vec{c} = (\vec{p}_1 + \vec{p}_2)/2
\end{equation}
\begin{equation}
\vec{L} = \vec{p}_2 - \vec{p}_1
\end{equation}
\begin{equation}
\hat{V}_i = \frac{\vec{p}_i - \vec{x}}{|\vec{p}_i - \vec{x}|}.
\end{equation}
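For concreteness, this segment field can be evaluated with a few lines of Python (a sketch, not the production code):
\begin{verbatim}
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m / A]

def segment_field(x, p1, p2, I):
    # field [T] at point x from the segment p1 -> p2 carrying current I [A];
    # all points are 3-vectors in meters
    x, p1, p2 = (np.asarray(v, float) for v in (x, p1, p2))
    L = p2 - p1                              # segment vector
    c = 0.5*(p1 + p2)                        # segment midpoint
    v1 = (p1 - x)/np.linalg.norm(p1 - x)     # unit vector toward p1
    v2 = (p2 - x)/np.linalg.norm(p2 - x)     # unit vector toward p2
    w = np.cross(L, x - c)
    return MU0*I/(4*np.pi) * np.dot(L, v2 - v1)/np.dot(w, w) * w

# check against an effectively infinite wire: B = mu0 I/(2 pi d) = 1 mT
print(segment_field([1.0, 0, 0], [0, 0, -100.0], [0, 0, 100.0], 5000.0))
\end{verbatim}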
The OLYMPUS coils have straight sections, easily described by line segments, and curved sections,
in which the curves are approximately circular arcs. We approximated the arcs in the model by
connecting multiple line segments to form a polygon. To divide an arc subtending angle $\alpha$
into $N$ segments, one can place $(N+1)$ vertices evenly along the arc, and connect them to form
a polygon. However, we found we could better match the magnetic field of the arc by placing the
vertices slightly outside of the arc, so that the polygon and arc had equal area, and thus equal
dipole moments. In our approximation, we chose to start the first segment at the
beginning of the arc, and to end the last segment at the end of the arc to maintain continuity.
We displaced the $(N-1)$ intermediate vertices radially outward from the arc radius $R$ to $R'$,
given by:
\begin{equation}
R' = \frac{R}{N-2}\left[ \sqrt{1 + \frac{\alpha (N-2) }{\sin \left( \frac{\alpha}{N} \right)}} - 1 \right].
\end{equation}
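A short Python sketch of this vertex construction (the function name and planar layout are ours, for illustration):
\begin{verbatim}
import numpy as np

def arc_polygon(R, alpha, N):
    # (N+1) vertices approximating an arc of radius R subtending alpha [rad]
    # with N >= 3 segments; the N-1 interior vertices are pushed out to R'
    # so that the polygon and the arc enclose equal area
    Rp = R/(N - 2) * (np.sqrt(1 + alpha*(N - 2)/np.sin(alpha/N)) - 1)
    t = np.linspace(0.0, alpha, N + 1)       # vertex angles along the arc
    r = np.full(N + 1, Rp)
    r[0] = r[-1] = R                         # end points stay on the arc
    return np.column_stack([r*np.cos(t), r*np.sin(t)])
\end{verbatim}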
To fit the magnetic field model to the measurements, several parameters were allowed to vary, and a best
fit was found by minimizing the sum of the squared residuals $\sum |\vec{B}_\text{meas.} - \vec{B}_\text{model}|^2$.
Several attempts were made in order to strike a balance between giving the model sufficient freedom
to explain the data and introducing degrees of freedom that were unconstrained by the measurements.
Ultimately, a model with 35 free parameters was chosen. The four coils that immediately surrounded
the measurement region were each given six degrees of freedom (three for positioning and three
for orientation). The remaining four coils were positioned around a common toroid axis (three parameters
to specify the origin and three parameters to specify the orientation), but were allowed to vary collectively
in radius and in-plane rotation angle. All of the coils were collectively given two degrees of freedom to
stretch or compress in both in-plane directions, since the coils were observed to deform slightly when
current was passed through them. The final free parameter was the current carried in the magnet, which reproduced
the measured input current to within 0.1\%.
In the final fit, arcs were divided into one line segment per degree of curvature.
\begin{figure}[hptb]
\includegraphics[width=\columnwidth]{field_in_plane.pdf}
\caption{The magnetic field in the $y$ direction is shown for the $y=0$ plane. \label{fig:inplane}}
\end{figure}
The model with best-fit parameters had an r.m.s.\ residual of
$\sqrt{\frac{1}{3N}\sum |\vec{B}_\text{meas.} - \vec{B}_\text{model}|^2}=1.873\times 10^{-3}$~T, where
$N$ is the total number of measurement points.
The residuals were not uniformly distributed over the measurement region and do not represent
Gaussian errors. Many systematic effects contribute to the residuals, including any errors in
determining the true probe position and inadequacy of the magnetic field model. The magnetic
field generated by the model with the best fit parameters is shown in Figure \ref{fig:inplane}.
\section{Interpolation}
\label{sec:interpolation}
The model calculation described in the previous section was too slow to be used directly for
simulating or reconstructing particle trajectories for the OLYMPUS analysis, and so a fast
interpolation scheme was developed. The coil model was used to pre-calculate the magnetic
field vector and its spatial derivatives on a regular $50$~mm~$\times$~$50$~mm~$\times$~$50$~mm grid covering the entire
spectrometer volume, so that the field could be interpolated between surrounding grid points.
The interpolation scheme had to balance several competing goals:
\begin{itemize}
\item Minimizing the memory needed to store the field grid
\item Minimizing computation time for field queries
\item Faithfully reproducing the coil model in both the field and its derivatives.
\end{itemize}
To achieve this, an optimized tricubic spline interpolation scheme was developed based
on the routine of Lekien and Marsden \cite{NME:NME1296}. For each point $P$ in the grid,
24 coefficients were calculated using the coil model (8 per component of the vector magnetic field):
\begin{multline}
C_{i,P} = \left\{ B_i, \pd{x}{B_i}, \pd{y}{B_i},\pd{x\partial y}{B_i},\pd{z}{B_i},\pd{x\partial z}{B_i},\pd{y\partial z}{B_i},\pd{x\partial y \partial z}{B_i}\right\}
\\ \text{for} \:\: i\in \left\{ x,y,z \right\},
\end{multline}
using numerical differentiation to compute the required derivatives from the coil model.
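For illustration, these corner coefficients can be generated from any field model with a central-difference stencil, along the lines of the following Python sketch (the function name, stencil step, and the rescaling to box-fractional derivatives are our choices):
\begin{verbatim}
import numpy as np
from itertools import product

def corner_coefficients(field, p, h, eps=1.0):
    # the 8 parameters {B_i, dB_i/dx, ..., d^3B_i/dxdydz} per component at
    # grid point p [mm], from a coil model field(point) -> 3-vector [T];
    # derivatives are rescaled by the grid spacing h so that they refer to
    # the box-fractional coordinates used by the spline
    p = np.asarray(p, float)
    out = {}
    for d in product((0, 1), repeat=3):           # derivative order per axis
        stencil = [((-1, 1) if di else (0,)) for di in d]
        acc = np.zeros(3)
        for s in product(*stencil):               # central-difference points
            sign = np.prod([si for si, di in zip(s, d) if di] or [1])
            acc += sign * np.asarray(field(p + eps*np.array(s, float)))
        order = sum(d)
        out[d] = acc / (2.0*eps)**order * h**order
    return out                                    # 8 keys x 3 components
\end{verbatim}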
For the interpolation, it is convenient to consider the grid in terms of boxes defined
by eight grid points, as shown in Figure \ref{fig:grid}, and define box-fractional
coordinates $x_f,y_f,z_f \in [0,1] $ parallel to the global axes spanning each box.
Each point in the grid is labeled with an integer index $j$, such that stepping from
point $P_j$ one unit in $x$ reaches point $P_{j+1}$. Stepping one unit in $y$ from
point $P_j$ reaches $P_{j+n_x}$, where $n_x$ is the size of the grid in the $x$ direction.
Stepping from point $P_j$ one unit in $z$ reaches point $P_{j+n_xn_y}$, where $n_y$ is the
size of the grid in $y$ direction. Then, a local tricubic spline can be defined for each
field component in the box:
\begin{equation}
B_i(x,y,z) = \sum_{l,m,n=0}^3 a_{i,lmn} x_f^ly_f^mz_f^n \:\:\:\: i\in \left\{ x,y,z \right\},
\end{equation}
where the coefficients $\left\{a_{i,lmn}\right\}$ are functions of
the set of the 64 parameters $\left\{C_{i,P}\right\}$, where $P$ is any of the eight grid points at
the vertices of the box.
This function is a 64-term polynomial for each box and is $C^1$ at the box boundaries.
The coefficients $\left\{a\right\}$ can be computed from the parameters $C_{i,P}$ following the
prescription in Lekien and Marsden. However, this prescription requires three $64\times64$ matrix
multiplications per box. Once completed for a given grid box, these multiplications can be stored
for future use, but this adds to the size of the grid in memory, approaching a factor of 8 for
large grids.
\begin{figure}[hptb]
\includegraphics[width=\columnwidth]{gridcube.eps}
\caption{A generalized box in the interpolation grid identified by its lowest-indexed grid point $P_j$, where $n_x$ and
$n_y$ are the $x$ and $y$ dimensions of the grid in units of grid points. \label{fig:grid}}
\end{figure}
To avoid these costs, the spline was refactored so that the parameters $C_{i,P}$ can be used
directly as coefficients. We introduce the following basis functions:
\begin{gather}
f_0(x_i) = \left(x_i-1\right)^2 \left(2x_i+1\right) \\
f_1(x_i) = x_i\left(x_i-1\right)^2 \\
f_2(x_i) = x_i^2\left(3-2x_i\right) \\
f_3(x_i) = x_i^2\left(x_i-1\right)
\end{gather}
where $x_i \in \left\{ x_f,y_f,z_f \right\}$. The interpolation then takes the form:
\begin{equation}
\label{eq:spline}
B_i(x,y,z) = \sum_{l,m,n=0}^3 b_{i,lmn} f_l(x_f) f_m(y_f) f_n(z_f) \:\:\:\: i\in \left\{ x,y,z \right\},
\end{equation}
where each coefficient $\left\{b_{i,lmn}\right\}$ is one of the parameters $C_{i,P}$. The correspondence
between $\left\{b_{i,lmn}\right\}$ and $C_{i,P}$ is shown in Table \ref{tab:coef}.
\begin{table}[hptb]
\tabcolsep=0.15cm
\begin{center}
\begin{tabular}{l | c c c c c c c c}
& $B_i$ & $ \pd{x}{B_i}$ & $ \pd{y}{B_i}$ & $\pd{x\partial y}{B_i}$ & $\pd{z}{B_i}$ & $\pd{x\partial z}{B_i}$ & $\pd{y\partial z}{B_i}$ & $\pd{x\partial y \partial z}{B_i}$ \\
\hline
$P_{j}$ & 000 & 100 & 010 & 110 & 001 & 101 & 011 & 111 \\
$P_{j+1}$ & 200 & 300 & 210 & 310 & 201 & 301 & 211 & 311 \\
$P_{j+n_x}$ & 020 & 120 & 030 & 130 & 021 & 121 & 031 & 131 \\
$P_{j+n_x+1}$ & 220 & 320 & 230 & 330 & 221 & 321 & 231 & 331 \\
$P_{j+n_xn_y}$ & 002 & 102 & 012 & 112 & 003 & 103 & 013 & 113 \\
$P_{j+n_xn_y+1}$ & 202 & 302 & 212 & 312 & 203 & 303 & 213 & 313 \\
$P_{j+n_xn_y+n_x}$ & 022 & 122 & 032 & 132 & 023 & 123 & 033 & 133 \\
$P_{j+n_xn_y+n_x+1}$ & 222 & 322 & 232 & 332 & 223 & 323 & 233 & 333 \\
\end{tabular}
\end{center}
\caption{Mapping of the coefficients $\left\{b_{i,lmn}\right\}$ (defined in Equation
\ref{eq:spline}) to the field values and
derivatives at the grid points contained in the box with lowest-indexed point $P_j$. Entries
in the table are the values of $lmn$ corresponding to each combination of point and
coefficient on the interpolation box. \label{tab:coef}}
\end{table}
With this interpolation scheme, the procedure for querying the field map consisted
of determining the grid box containing the queried point, computing the box fractional
coordinates of the queried point in that box, and then applying the tricubic spline
interpolation for each of the three field components independently. Special care was
taken to optimize the speed and number of arithmetic operations in the routine (e.g., by pre-computing factors
such as the basis functions that are used repeatedly and by converting division operations to
multiplications whenever possible). Additionally, the coefficients for each grid
point were arranged in a single array so that, for any box in the grid, the 192 coefficients
associated with the 8 points of the box occurred in 16 contiguous blocks of 12 coefficients,
permitting rapid reading of the entire array and facilitating single instruction, multiple data (SIMD)
computing, further increasing the speed of field queries.
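A compact, unoptimized Python sketch of such a query, reusing the coefficient layout of the corner_coefficients sketch above and the index mapping of Table \ref{tab:coef}, could read:
\begin{verbatim}
import numpy as np

def basis(t):
    # f_0..f_3 above, evaluated at a box-fractional coordinate t
    return np.array([(t - 1)**2*(2*t + 1), t*(t - 1)**2,
                     t**2*(3 - 2*t), t**2*(t - 1)])

def query(C, origin, h, n, point):
    # C[j]: coefficients of grid point j, keyed as in corner_coefficients;
    # origin, h, n = (nx, ny, nz) give the grid geometry
    f = (np.asarray(point, float) - origin) / h
    i = np.floor(f).astype(int)                   # box indices
    fx, fy, fz = (basis(f[a] - i[a]) for a in range(3))
    B = np.zeros(3)
    for dx, dy, dz in np.ndindex(2, 2, 2):        # the 8 box corners
        j = (i[0]+dx) + n[0]*((i[1]+dy) + n[1]*(i[2]+dz))
        for px, py, pz in np.ndindex(2, 2, 2):    # value/derivative flags
            w = fx[2*dx+px] * fy[2*dy+py] * fz[2*dz+pz]
            B += w * C[j][(px, py, pz)]
    return B
\end{verbatim}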
\section{Special Considerations}
\label{sec:special}
Simulations revealed two regions where the magnetic field model and interpolation were inadequate.
These special cases are described in the following subsections.
\subsection{Field near the coils}
Some of the interpolation grid points sat very close to line segments of current in the field
model, which was problematic since the field diverged there. The field and the field derivatives
at these grid points were unreliable, and this spoiled the magnetic field over the eight grid boxes
surrounding every such point. When simulating trajectories that pass close to the coils, we observed that particles had
abruptly different trajectories if they entered a spoiled grid box. The instrumented region in
OLYMPUS extends to approximately $\pm 15^\circ$ in azimuth about the $y=0$ plane; aberrant
trajectories were observed at an azimuth of only $12^\circ$. The problematic points sit near
the coils, at $\pm 22.5^\circ$.
To safeguard against this problem, we produced a second field grid using a different procedure.
We defined a safe region, $\pm 15^\circ$ in azimuth from the $y=0$ plane, inside which we trusted
the magnetic field model. (Points inside this region were sufficiently far from the coils to avoid
problems with the diverging field.) For grid points in this region,
the field and its derivatives were calculated as before. For points outside this region, we used
an alternative calculation, exploiting the approximate azimuthal symmetry of the magnet. For each
outside grid point, we first calculated the field and its derivatives at the point with the same
$z$ and same $r$, but on the nearest $\phi=\pm 15^\circ$ plane. Derivatives were calculated in
cylindrical coordinates, and any with respect to $\phi$ were set to 0. We then rotated the field
and derivative vectors back to the point of interest.
For example, given a grid point at $\phi=20^\circ$, $r=1$~m, $z=2$~m, the field
would first be calculated at $\phi=15^\circ$, $r=1$~m, $z=2$~m. The derivatives $\partial\vec{B}/\partial r$,
$\partial\vec{B}/\partial z$, and $\partial^2 \vec{B}/\partial r \, \partial z$ would be calculated numerically. All other derivatives
would be set to 0. The vectors would then be rotated by $5^\circ$ in $\phi$, so as to correspond
appropriately for the grid point at $\phi=20^\circ$, $r=1$~m, $z=2$~m.
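In code, the final step is simply a rotation about the beam ($z$) axis; a minimal sketch:
\begin{verbatim}
import numpy as np

def rotate_phi(v, dphi):
    # rotate a field or derivative 3-vector about the z (toroid) axis by
    # dphi [rad], carrying it from the phi = +/-15 deg plane to the outside
    # grid point with the same r and z
    c, s = np.cos(dphi), np.sin(dphi)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ np.asarray(v, float)

# e.g., for the grid point at phi = 20 deg:
# B_20 = rotate_phi(B_15, np.radians(5))
\end{verbatim}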
The grid produced using this procedure was interpolated with the scheme in Section \ref{sec:interpolation}.
Subsequent tests showed that simulated trajectories were not aberrant, even out to $\phi=\pm 15^\circ$,
the limits of the OLYMPUS instrumented region. Furthermore, these trajectories were essentially
the same as those simulated with the field model directly, without interpolation.
\subsection{Beamline region}
The region near the beamline, where the magnetic field was small, was difficult to reproduce
accurately using the coil fitting model for two reasons. The first is that this region is close
to all eight coils. Slight changes in the model's coil placement can create large gradients
in the region. The second is that there were few measurements made in that region (and none
inside the volume of the target chamber) to constrain the fit. M{\o}ller and Bhabha tracks
pass through this region, and an accurate simulation is necessary for the M{\o}ller/Bhabha
luminosity monitor (described by P{\' e}rez Benito et\ al.~\cite{Benito:2016cmp}). Since the magnetic field model failed in this region,
a dedicated alternative interpolation scheme was developed, for use strictly in this volume.
The region included all points with $\left|x\right|<100$ mm, $\left|y\right|<50$ mm, and $500$~mm~$<z<3000$~mm, shown
in Figure \ref{fig:symb}.
To provide an accurate field map for the forward M{\o}ller/Bhabha scattering region, interpolation was
performed directly on the measured data points in the region. These data were located approximately on the $y=0$ plane,
with small variations due to the imperfections of the translation table described in Section \ref{sec:measurements}.
Since the variation of the field in $y$ was
very small in this region ($\sim$$10^{-4}$~T), the $y$ variation in the grid was ignored and a Lagrange polynomial interpolation
was used on the remaining irregular 2D grid to produce a regular $5$ mm $\times$ $5$ mm grid in $x$ and $z$ for each of the three
field components in the region \cite{Bevington:1305448}. This regular grid was then appended to the field map and used for interpolation
of the field in the special region.
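Schematically, this resampling can be mimicked with scipy's griddata, used here as a stand-in for the Lagrange polynomial scheme of the analysis; the input arrays are hypothetical:
\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

# pts: (N, 2) measured (x, z) positions [mm]; B: (N, 3) measured field [T]
xg, zg = np.meshgrid(np.arange(-100, 101, 5.0), np.arange(500, 3001, 5.0))
Bgrid = np.stack([griddata(pts, B[:, i], (xg, zg), method="cubic")
                  for i in range(3)], axis=-1)  # regular 5 mm x 5 mm grid
\end{verbatim}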
\begin{figure}[hptb]
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0,0)
{
\includegraphics[width=\columnwidth]{renderandwire12degoverblank.pdf}
};
\begin{scope}[x={(image.south east)},y={(image.north west)},xscale=.1127,yscale=.212]
\node[align=center,black] at (0.4,1.1) {$x$};
\node[align=center,black] at (1.0,0.4) {$z$};
\node[align=center,black] at (0.43,5.2) {\scriptsize $-1$~m};
\node[align=center,black] at (2.30,5.2) {\scriptsize $0$};
\node[align=center,black] at (4.17,5.2) {\scriptsize $1$~m};
\node[align=center,black] at (6.04,5.2) {\scriptsize $2$~m};
\node[align=center,black] at (7.91,5.2) {\scriptsize $3$~m};
\node[align=center,black] at (2.7,1.5) {\scriptsize Drift chambers};
\node[align=center,black] at (1.4,3.4) {\scriptsize Target chamber};
\node[align=center,black] at (5.9,2.7) {\scriptsize Downstream beamline};
\node[align=center,black] at (8.1,1.7) {\scriptsize M\o ller/Bhabha};
\node[align=center,black] at (8.2,1.5) {\scriptsize calorimeters};
\draw[arrows=<->,draw=black] (0.1,4.9) -- (8.5,4.9) node[midway,above]{};
\draw[arrows=-,draw=black] (0.43,5.0) -- (0.43,4.9) node[midway,above]{};
\draw[arrows=-,draw=black] (2.3,5.0) -- (2.3,4.9) node[midway,above]{};
\draw[arrows=-,draw=black] (4.17,5.0) -- (4.17,4.9) node[midway,above]{};
\draw[arrows=-,draw=black] (6.04,5.0) -- (6.04,4.9) node[midway,above]{};
\draw[arrows=-,draw=black] (7.91,5.0) -- (7.91,4.9) node[midway,above]{};
\draw[arrows=->,draw=black] (1.4,3.25) -- (1.9,2.7) node[midway,above]{};
\draw[arrows=->,draw=black] (2.7,1.65) -- (3.4,3.5) node[midway,above]{};
\draw[arrows=->,draw=black] (2.7,1.32) -- (2.95,1.05) node[midway,above]{};
\draw[arrows=-,draw=black] (8.75,1.85) -- (8.75,2.5) node[midway,above]{};
\draw[arrows=->,draw=black] (8.75,2.2) -- (8.55,2.2) node[midway,above]{};
\draw[arrows=->,draw=black] (8.75,2.5) -- (8.55,2.5) node[midway,above]{};
\draw [fill=black, draw=none, opacity=0.3] (3.18,2.13) rectangle (7.91,2.574);
\draw[arrows=->,draw=black,thick] (0.2,0.2) -- (0.2,1.2) node[midway,above]{};
\draw[arrows=->,draw=black,thick] (0.2,0.2) -- (1.2,0.2) node[midway,above]{};
\end{scope}
\end{tikzpicture}
\caption{The special beamline field region, shown in the shaded box, contained the entirety of
the downstream beam pipe from the target cell to the collimators of the M{\o}ller/Bhabha
calorimeters. \label{fig:symb}}
\end{figure}
During the field measurements, it was not feasible to pass the Hall probe through the walls of the target
chamber. Consequently, no measurements were made for points where $|x| < 100$~mm, $|y|<50$~mm, and $z<500$~mm.
However, a few field measurements were made in 2011, prior to the installation of the chamber,
and these data confirmed that the field inside the chamber is on the order of $10^{-4}$~T. In this region,
we chose to use the standard grid interpolation for $B_y$ and $B_z$, since the field model reproduces the 2011
measurements well in those directions. For $B_x$, a parabolic fit to the 2011 measurements was chosen for
the $x$ dependence, with the assumption that $B_x$ is independent of $y$ and $z$.
\section{Conclusions}
We have described the measurement procedure and the data analysis techniques employed to make a
comprehensive survey of the OLYMPUS spectrometer's magnetic field. Using an apparatus consisting
of a Hall probe, actuated with a system of translation tables, we measured the magnetic field at
over 36,000 positions in and around the spectrometer volume. We chose to fit these field data
with a numerical field model to calculate the field at arbitrary positions.
For analysis applications that required rapid queries of the
magnetic field, we precomputed the field and its derivatives on a grid of positions, and
developed a scheme to interpolate the field between grid points using tricubic splines.
By refactoring the splines, we found that we could reduce the memory needed to store the necessary spline
coefficients by a factor of eight. This interpolation scheme worked well for the majority of
the spectrometer volume; however, two regions---the region close to the coils, and the region
along the beamline---required special adjustments, which we described. By making these
adjustments, we succeeded in producing a scheme for determining the magnetic field rapidly
and accurately. This is crucial for the OLYMPUS analysis since it allows a high-rate simulation
of particle trajectories.
\section{Acknowledgements}
We gratefully acknowledge Philipp Altmann at DESY for his assistance in assembling the
measurement apparatus and preparing for the field measurements. We are also very thankful
for the expertise of Martin Noak at DESY in surveying and aligning the apparatus.
This work was supported by the Office of Nuclear Physics of the U.S.\
Department of Energy, grant No.\ DE-FG02-94ER40818.
\bibliographystyle{model1-num-names}
\section*{Abstract}
{\bf
We verify Standard Model Effective Field Theory Ward identities
to one loop order when background field gauge is used to quantize the theory.
The results we present lay the foundation for next to leading order automatic generation of results
in the SMEFT, in both the perturbative and non-perturbative expansions, using the geoSMEFT
formalism and background field gauge.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
The Standard Model Effective Field Theory (SMEFT) \cite{Buchmuller:1985jz,Grzadkowski:2010es}
is a core theory for interpreting many current and future experimental measurements in particle physics.
The SMEFT is defined by the field content of the Standard Model, including an $\rm SU_L(2)$ scalar Higgs doublet ($H$),
and a linear realization of $\rm SU(3) \times SU_L(2) \times U(1)_Y$ symmetry.
Operators of mass dimension $d$ are suppressed by $d-4$ powers of an unknown non-Standard Model scale $\Lambda$.
The SM treated as an EFT has both derivative and field expansions.
The Higgs field expansion plays an essential role as it
can collapse terms in a composite operator onto a target n-point interaction when the classical background
field expectation value of the Higgs is taken. This introduces modifications of low n-point functions,
and the corresponding Lagrangian parameters such as the masses, gauge couplings and mixing angles.
These modifications result in much of the interesting phenomenology of the SMEFT.
Actively organising the formulation of the SMEFT using
field space geometry is advantageous. This approach is known as the geoSMEFT~\cite{Helset:2020yio}, and builds on
the theoretical foundation laid down in Refs.~\cite{Vilkovisky:1984st,Burgess:2010zq,Alonso:2015fsp,Alonso:2016btr,Alonso:2016oah,Helset:2018fgq,Corbett:2019cwl}.
The geoSMEFT separates out the scalar field space expansion (in a gauge independent manner)
from the derivative expansion. This approach naturally generalizes the SM Lagrangian parameters to their SMEFT counterparts,
which are understood to be the masses, gauge couplings and mixing angles on the curved background
Higgs manifold.\footnote{Generally the canonically normalised SMEFT parameters
consistently defined on the curved background manifold of the Higgs are denoted in this work with a bar
superscript, such as $M_{Z} \rightarrow \bar{M}_{Z}, s_{\theta} \rightarrow s_{\bar{\theta}}$, etc.}
The degree of curvature of the Higgs field spaces is dictated by the ratio of the Electroweak scale
$\bar{v}_T \equiv \sqrt{\langle 2 H^\dagger H \rangle}$ compared to the scale of
new physics $\Lambda$. The geoSMEFT enables all orders results in the $\bar{v}_T/\Lambda$ expansion
to be defined, due to the constraints of a self consistent description of the geometry present in the theory, and has already
resulted in the first exact formulation of the SMEFT to $\mathcal{O}(\bar{v}_T^4/\Lambda^4)$ \cite{Hays:2020scx}.
Organizing the SMEFT using field space geometry can be done while background field gauge invariance
is maintained by using the Background Field Method (BFM).
The BFM is also advantageous, as then gauge fixing does not obscure naive and intuitive one loop Ward-Takahashi
identities \cite{Ward:1950xp,Takahashi:1957xn} (hereafter referred to as Ward identities for brevity)
that reflect the unbroken
$\rm SU_L(2) \times U(1)_Y$ global symmetries of the background fields.
The geoSMEFT approach was developed by first determining the BFM gauge fixing
in the SMEFT in Ref.~\cite{Helset:2018fgq}.
The BFM Ward identities for the SMEFT were reported in Ref.~\cite{Corbett:2019cwl}.
Remarkably, the BFM Ward identities are, for the most part,\footnote{An exception is the modification of the tadpole term
dependence in the SMEFT Ward identities, due to the need to carefully treat two derivative operators involving the Higgs field.} the natural and direct generalization of the SM BFM Ward identities, with
the SM parameters generalized to the curved field space Lagrangian terms
in the geoSMEFT \cite{Corbett:2019cwl}. This supports the notion that the use of the BFM in the SMEFT
is of increased importance.
When a field theory does not have a physical non-trivial background field configuration,
the use of the BFM is largely a choice of convenience in a calculation. In the SMEFT the physics is different,
as it is an EFT with a non-trivial background manifold, namely,
the Higgs taking on its vacuum expectation value ($\bar{v}_T$). As such, a BFM based approach to the SMEFT
naturally and efficiently organizes the physics that is present
at higher orders in the power counting and loop expansions. Considering the complexity of the
SMEFT, the cross checks afforded in this approach are quite valuable to validate results and avoid subtle theoretical inconsistencies.
Although subtle, such inconsistencies can introduce violations of background field symmetries (i.e. make it impossible to consistently
incorporate the IR effect of the field space geometries)
and dramatically impact conclusions drawn from experimental constraints, which are $S$ matrix elements
that depend on a consistent projection of the field space geometry. For a discussion on one such
subtlety in Electroweak precision data, with significant consequences to the SMEFT global fit effort, see Ref.~\cite{Brivio:2017bnu}.
The BFM Ward identities constrain n-point functions and the SMEFT masses, gauge couplings and mixing angles.
As the higher dimensional operators in the SMEFT also obey the $\rm SU(3) \times SU_L(2) \times U(1)_Y$ symmetry of the SM, the
one loop Ward identities formulated in the BFM are respected operator by operator in the SMEFT.
In this paper, we demonstrate this is indeed the case. We explicitly verify
a set of these identities (relating one and two point functions) to one loop order, and demonstrate the manner in which various
contributions combine to satisfy the BFM Ward identities of the SMEFT operator by operator, in a consistent formulation of
this theory to $\mathcal{O}(\bar{v}_T^2/\Lambda^2\, g_{SM}^n/16 \pi^2)$.
\section{SMEFT and geoSMEFT}\label{setup}
The SMEFT Lagrangian is defined as
\begin{align}
\mathcal{L}_{\textrm{SMEFT}} &= \mathcal{L}_{\textrm{SM}} + \mathcal{L}^{(d)}, & \mathcal{L}^{(d)} &= \sum_i \frac{C_i^{(d)}}{\Lambda^{d-4}}\mathcal{Q}_i^{(d)}
\quad \textrm{ for } d>4.
\end{align}
The SM Lagrangian and conventions are consistent with Ref.~\cite{Alonso:2013hga,Brivio:2017vri,Helset:2020yio}.
The operators $\mathcal{Q}_i^{(d)}$ are labelled with a mass dimension $d$ superscript
and multiply unknown Wilson coefficients $C_i^{(d)}$. Conventionally we
define $\tilde{C}^{(d)}_i \equiv C^{(d)}_i \bar{v}_T^{d-4}/\Lambda^{d-4}$.
The parameter $\bar{v}_T \equiv \sqrt{\langle 2 H^\dagger H \rangle}$ in the SMEFT is defined as the minimum of the potential, including
corrections due to higher-dimensional operators. We use the Warsaw basis~\cite{Grzadkowski:2010es} for
$\mathcal{L}^{(6)}$ and otherwise geoSMEFT \cite{Helset:2020yio} for operator conventions.
GeoSMEFT organizes the theory in terms of field-space connections $G_i$
multiplying composite operator forms $f_i$, represented schematically by
\begin{eqnarray}\label{basicdecomposition}
\mathcal{L}_{\textrm{SMEFT}} = \sum_i G_i(I,A,\phi \dots) \, f_i ,
\end{eqnarray}
where $G_i$ depend on the group indices $I,A$ of the (non-spacetime) symmetry groups,
and the scalar field coordinates of the composite operators, except powers of $D^\mu H$, which are grouped into $f_i$.
The field-space connections depend on the coordinates of the Higgs scalar doublet expressed in terms of
real scalar field coordinates, $\phi_I = \{\phi_1,\phi_2,\phi_3,\phi_4\}$, with normalization
\begin{align}
H(\phi_I) = \frac{1}{\sqrt{2}}\begin{bmatrix} \phi_2+i\phi_1 \\ \phi_4 - i\phi_3\end{bmatrix}.
\end{align}
The gauge boson field coordinates are defined as $\mathcal{W}^A = \{W^1,W^2,W^3,B\}$ with $A =\{1,2,3,4\}$.
The corresponding general coupling in the SM is $\alpha_A = \{g_2, g_2, g_2, g_1\}$. The mass eigenstate
field coordinates are $\mathcal{A}^A = \{\mathcal{W}^+,\mathcal{W}^-,\mathcal{Z},\mathcal{A}\}$.
The geometric Lagrangian parameters that appear in the Ward identities are functions of the field-space connections.
Of particular importance are the field space connections $h_{IJ},g_{AB}$ which we refer to as metrics in this work.
These metrics are defined at all orders in the geoSMEFT organization of the SMEFT operator expansion as
\begin{eqnarray}\label{hijdefn}
h_{IJ}(\phi) = \left.\frac{g_{\mu \nu}}{d} \, \frac{\delta^2 \mathcal{L}_{\rm SMEFT}}{\delta(D_\mu \phi)^I \, \delta (D_\nu \phi)^J} \right|_{\mathcal{L}(\alpha,\beta \cdots) \rightarrow 0},
\end{eqnarray}
and
\begin{eqnarray}
g_{AB}(\phi)
= \left. \frac{-2 \, g_{\mu \nu} \, g_{\sigma \rho}}{d^2}
\, \frac{\delta^2 \mathcal{L}_{\rm SMEFT}}{\delta {\mathcal{W}}^A_{\mu \sigma} \, \delta {\mathcal{W}}^B_{\nu \rho}}
\, \right|_{\mathcal{L}(\alpha,\beta \cdots) \rightarrow 0,{\textrm{CP-even}}}.
\end{eqnarray}
The notation $\mathcal L(\alpha,\beta\cdots)$ corresponds to non-trivial Lorentz-index-carrying Lagrangian terms and spin connections, e.g. $(D^\mu\Phi)^K$ and $W_{\mu\nu}^A$. The explicit form of the metrics are given in Ref.~\cite{Helset:2020yio}. Here $d$ is the spacetime dimension.
The matrix square roots of these field space connections are $\sqrt{g}_{AB} = \langle g_{AB} \rangle^{1/2}$,
and $\sqrt{h}_{IJ} = \langle h_{IJ} \rangle^{1/2}$. The SMEFT perturbations are small corrections to the SM,
so the field-space connections are positive semi-definite matrices, with unique positive semi-definite square roots.\footnote{Note that $\sqrt{g}^{AB} \sqrt{g}_{BC} \equiv \delta^A_C$ and
$\sqrt{h}^{IJ} \sqrt{h}_{JK} \equiv \delta^I_K$.}
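To make the perturbative content of these square roots concrete, write $g_{AB} = \delta_{AB} + \Delta_{AB}$ with $\Delta_{AB} \sim \mathcal{O}(\bar{v}_T^2/\Lambda^2)$; the unique positive semi-definite square root then has the expansion
\begin{align}
\sqrt{g}_{AB} = \delta_{AB} + \frac{1}{2} \Delta_{AB} - \frac{1}{8} \left(\Delta^2\right)_{AB} + \mathcal{O}(\Delta^3),
\end{align}
and similarly for $\sqrt{h}_{IJ}$; truncating at first order in $\Delta$ suffices when working to $\mathcal{O}(\bar{v}_T^2/\Lambda^2)$.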
The transformation of the gauge fields, gauge parameters and scalar fields into mass eigenstates in the SMEFT is
given {\it at all orders in the $\bar{v}_T/\Lambda$ expansion} by
\begin{align}\label{basicrotations}
\hat{\mathcal{W}}^{A,\nu} &= \sqrt{g}^{AB} U_{BC} \mathcal{\hat{A}}^{C,\nu}, \\
\alpha^{A} &= \sqrt{g}^{AB} U_{BC} \mathcal{\beta}^{C},\\
\hat{\phi}^{J} &= \sqrt{h}^{JK} V_{KL} \hat{\Phi}^{L},
\end{align}
with $\hat{\mathcal{A}}^C =(\hat{\mathcal{W}}^+,\hat{\mathcal{W}}^-,\hat{\mathcal{Z}},\hat{\mathcal{A}})$,
$\hat{\Phi}^L = \{\hat{\Phi}^+,\hat{\Phi}^-,\hat{\chi},\hat{H} \}$. $\mathcal{\beta}^{C}$ is obtained directly
from $\alpha^{A}$ (defined above) and $U_{BC}$.
The transformation of the quantum fields is of the same form.
The matrices $U,V$ are unitary, and given by
\begin{align*}
U_{BC} &= \begin{bmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 0 \\
\frac{i}{\sqrt{2}} & \frac{-i}{\sqrt{2}} & 0 & 0 \\
0 & 0 & c_{\overline{\theta}} & s_{\overline{\theta}} \\
0 & 0 & -s_{\overline{\theta}} & c_{\overline{\theta}}
\end{bmatrix},& \quad
V_{JK} &= \begin{bmatrix}
\frac{-i}{\sqrt{2}} & \frac{i}{\sqrt{2}} & 0 & 0 \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.
\end{align*}
These matrices $U,V$ are unitary; i.e. their conjugate transpose is equal to the matrix inverse.
The short hand combinations
\begin{align*
{\mathcal U}^{A}_C &= \sqrt{g}^{AB} U_{BC},& ({\mathcal U^{-1}})^{D}_F &= U^{DE} \sqrt{g}_{\, EF} , \\
{\mathcal V}^{A}_C &= \sqrt{h}^{AB} V_{BC}, & ({\mathcal V^{-1}})^{D}_F &= V^{DE} \sqrt{h}_{\, EF} ,
\end{align*}
are useful to define as they perform the mass eigenstate rotation for the vector
and scalar fields, and bring the corresponding kinetic term to canonical form, including higher-dimensional-operator corrections.
As can be directly verified, the combined operation is not a unitary matrix whose conjugate transpose is equal to the matrix inverse;
i.e. ${\mathcal U}^{A}_C, {\mathcal V}^{A}_C$ are not unitary. Although the transformations between mass and
canonically normalized weak eigenstates are properly and formally
unitary in the SM, this is no longer the case in the SMEFT.
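This can be illustrated numerically; in the following Python sketch the metric perturbation and mixing angle are arbitrary toy numbers, not SMEFT predictions.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
d = 0.01 * rng.standard_normal((4, 4)); d = (d + d.T)/2
g = np.eye(4) + d                      # toy field-space metric g_AB
sg_inv = np.linalg.inv(sqrtm(g))       # sqrt(g)^{AB}
ct, st = np.cos(0.5), np.sin(0.5)      # stand-in mixing angle
U = np.array([[1,   1,  0, 0],
              [1j, -1j, 0, 0],
              [0,   0,  ct*np.sqrt(2), st*np.sqrt(2)],
              [0,   0, -st*np.sqrt(2), ct*np.sqrt(2)]]) / np.sqrt(2)
calU = sg_inv @ U
print(np.allclose(U @ U.conj().T, np.eye(4)))        # True:  U is unitary
print(np.allclose(calU @ calU.conj().T, np.eye(4)))  # False: calU is not
\end{verbatim}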
\section{Background Field Method, Gauge fixing and Ward identities}
The BFM \cite{DeWitt:1967ub,tHooft:1973bhk,Abbott:1981ke} is a theoretical approach to gauge fixing a quantum field theory
in a manner that leaves the effective action invariant under background field gauge transformations.
To this end, the fields are split into quantum (un-hatted) and classical (hatted) background
fields: $F \rightarrow F+ \hat{F}$.
The classical fields are associated with the external states
of the $S$-matrix in an LSZ procedure \cite{Lehmann:1954rq}, and a gauge fixing term is defined so that
the effective action is unchanged under a local gauge transformation of
the background fields in conjunction with a linear change of variables on the quantum fields, see Ref.~\cite{Abbott:1981ke}.
In the BFM, relationships between Lagrangian parameters due to unbroken
background $\rm SU_L(2) \times U(1)_Y$ symmetry then follow a ``naive'' (classical) expectation when
quantizing the theory. These are the BFM Ward identities.
In the case of the SMEFT, the naive BFM Ward identities of the SM are upgraded to involve the canonically normalized
Lagrangian parameters (i.e. barred parameters) defined in the geoSMEFT by using the field space connections.
The BFM generating functional of the SMEFT is given by
\begin{align}
Z[\hat{F},J]= \int \mathcal{D} F \,{\rm det}\left[\frac{\Delta \mathcal{G}^A}{\Delta \alpha^B}\right]e^{i \int d^4x \left(S[F + \hat{F}] + \mathcal{L}_{\textrm{GF}} +
{\rm source \, terms} \right)} \nonumber.
\end{align}
The generating functional is integrated over the quantum field configurations via $\mathcal{D} F$,
with $F$ field coordinates describing all long-distance propagating states. The sources $J$
only couple to the quantum fields \cite{tHooft:1975uxh}.
The issue of gauge fixing the SMEFT in the BFM was discussed as a novel challenge in Ref.~\cite{Hartmann:2015oia}
(see also Refs.~\cite{Ghezzi:2015vva,Dedes:2017zog,Misiak:2018gvl}).
The core issue to utilizing the BFM in the SMEFT (to calculate complete dependence
on IR quantities such as masses) is to define a gauge fixing procedure in the presence of higher dimensional operators,
while preserving background field gauge invariance.
Ref.~\cite{Helset:2018fgq} reported that such a gauge fixing term is uniquely
\begin{align}\label{gaugefixing1}
\mathcal{L}_{\textrm{GF}} &= -\frac{\hat{g}_{AB}}{2 \, \xi} \mathcal{G}^A \, \mathcal{G}^B, &
\mathcal{G}^X &\equiv \partial_{\mu} \mathcal{W}^{X,\mu} -
\tilde\epsilon^{X}_{ \, \,CD}\hat{\mathcal{W}}_{\mu}^C \mathcal{W}^{D,\mu}
+ \frac{\xi}{2}\hat{g}^{XC}
\phi^{I} \, \hat{h}_{IK} \, \tilde\gamma^{K}_{C,J} \hat{\phi}^J.
\end{align}
Here $\hat{g}$ and $\hat{h}$ are the background field values of the metrics, as indicated with the
hat superscript. See Ref.~\cite{Helset:2018fgq} for more details.
This approach to gauge fixing has an intuitive interpretation. The fields are gauge fixed on the curved Higgs field space
defined by the SMEFT (field) power counting expansion (i.e. in $\bar{v}_T/\Lambda$).
This is done by upgrading the naive squares of fields in the gauge fixing term, to less-naive
contractions of fields through the Higgs field space metrics $g_{AB}, h_{IK}$. Such contractions characterize the
curved Higgs field space geometry the theory is being quantized on to define the correlation functions.
When the field space metrics are trivialized to their values in the SM, $\hat{h}_{IJ} = \delta_{IJ}$
and $\hat{g}_{AB} = \delta_{AB}$, the field space manifold is no longer curved due to SMEFT corrections;
this is the $\bar{v}_T/\Lambda \rightarrow 0$ limit. The gauge fixing term in the Background Field Method
then simplifies to that of the SM, as given in Refs.~\cite{Shore:1981mj,Einhorn:1988tc,Denner:1994xt}.
The Faddeev-Popov ghost term, derived from Eqn.~\ref{gaugefixing1} is \cite{Helset:2018fgq}
\begin{align}
\mathcal{L}_{\textrm{FP}} = &- \hat{g}_{AB}\bar{u}^B \left[- \partial^2\delta^A_C -
\overleftarrow\partial_{\mu}\tilde\epsilon^A_{\, \,DC}(\mathcal{W}^{D,\mu} + \hat{\mathcal{W}}^{D,\mu})
+ \tilde\epsilon^A_{\, \,DC}\hat{\mathcal{W}}^D_{\mu}\overrightarrow\partial^{\mu} \right. \\
&\, \hspace{2cm} \left. - \tilde\epsilon^A_{\, \,DE}\tilde\epsilon^E_{\, \,FC}\hat{\mathcal{W}}^D_{\mu}
(\mathcal{W}^{F,\mu} + \hat{\mathcal{W}}^{F,\mu}) - \frac{\xi}{4} \hat{g}^{AD}(\phi^J + \hat{\phi}^J) \tilde\gamma_{C,J}^{I} \, \hat{h}_{IK}\, \tilde\gamma_{D,L}^{K} \,
\hat{\phi}^L \right]u^C. \nonumber
\end{align}
Our notation is such that the covariant derivative acting on the bosonic fields of the SM in the doublet
and real representations respectively is \cite{Helset:2018fgq}
\begin{eqnarray}
D^\mu H &=& (\partial^\mu + i \, g_2 W^{a, \mu} \,\sigma_a/2 + i \,g_1 \, \mathsf{y}_h B^\mu)H, \\
(D^{\mu}\phi)^I &=& (\partial^{\mu}\delta_J^I - \frac{1}{2}\mathcal{W}^{A,\mu}\tilde\gamma_{A,J}^I)\phi^J,
\end{eqnarray}
with symmetry generators for the real scalar manifold
$\tilde{\gamma}_{A,j}^I$ (see Ref.~\cite{Helset:2018fgq,Helset:2020yio} for the explicit forms of the generators).
Here $\sigma_a$ are the Pauli matrices and $a = \{1,2,3\}$, and $\mathsf{y}_h$ is the Hypercharge of the Higgs field.
The structure constants (that absorb gauge coupling parameters) are
\begin{align}
\tilde{\epsilon}^{A}_{\, \,BC} &= g_2 \, \epsilon^{A}_{\, \, BC}, \text{ \, \, with } \tilde{\epsilon}^{1}_{\, \, 23} = g_2, \nonumber \\
\tilde{\gamma}_{A,J}^{I} &= \begin{cases} g_2 \, \gamma^{I}_{A,J}, & \text{for } A=1,2,3 \\
g_1\gamma^{I}_{A,J}, & \text{for } A=4.
\end{cases}
\end{align}
For infinitesimal local gauge parameters $\delta \hat{\alpha}_A(x)$ the BF gauge transformations are
\begin{align}\label{backgroundfieldshifts}
\delta \, \hat{\phi}^I &= -\delta \hat{\alpha}^A \, \frac{\tilde{\gamma}_{A,J}^I}{2} \hat{\phi}^J, \nonumber \\
\delta \hat{\mathcal{W}}^{A, \mu} &= - (\partial^\mu \delta^A_B + \tilde{\epsilon}^A_{\, \,BC} \, \, \hat{\mathcal{W}}^{C, \mu}) \delta \hat{\alpha}^B, \nonumber \\
\delta \hat{h}_{IJ} &= \hat{h}_{KJ} \, \frac{\delta \hat{\alpha}^A \, \tilde{\gamma}_{A,I}^K}{2}+ \hat{h}_{IK} \, \frac{\delta \hat{\alpha}^A \, \tilde{\gamma}_{A,J}^K}{2}, \nonumber \\
\delta \hat{g}_{AB} &= \hat{g}_{CB} \,\tilde{\epsilon}^C_{\, \,DA} \, \delta \hat{\alpha}^D + \hat{g}_{AC} \,\tilde{\epsilon}^C_{\, \,DB} \, \delta \hat{\alpha}^D, \nonumber \\
\delta \mathcal{G}^X &= -\tilde{\epsilon}^X_{\, \,AB} \, \delta \hat{\alpha}^A \mathcal{G}^B, \nonumber\\
\delta f_i &= \Lambda_{A,i}^{j}\, \delta \hat{\alpha}^A \, f_{j}, \nonumber \\
\delta \bar{f}_i &= \delta \hat{\alpha}^A \, \bar{f}_j \bar{\Lambda}^{j}_{A,i}.
\end{align}
The BFM Ward identities follow from the invariance of $\Gamma [\hat{F},0]$ under background-field gauge transformations,
\begin{align}
\frac{\delta \Gamma [\hat{F},0]}{\delta \hat{\alpha}^B} &= 0.
\end{align}
In position space, the identities are \cite{Helset:2018fgq}
\begin{align}
0 =& \left(\partial^\mu \delta^A_B - \tilde{\epsilon}^A_{\, \,BC} \, \, \hat{\mathcal{W}}^{C, \mu}\right)
\frac{\delta \Gamma}{\delta \hat{\mathcal{W}}_A^{\mu}} - \frac{\tilde{\gamma}_{B,J}^I}{2} \hat{\phi}^J \frac{\delta \Gamma}{\delta \hat{\phi}^I}
+\sum_j \left(\bar{f}_j \bar{\Lambda}_{B,i}^{j} \, \frac{\delta \Gamma}{\delta \bar{f}_{i}}
- \frac{\delta \Gamma}{\delta f_{i}} \Lambda_{B,j}^{i} f_j \right).
\end{align}
The structure constants and generators, transformed to those corresponding to the mass eigenstates, are defined using bold text as
\begin{align*}
{ {\bm \epsilon}}^{C}_{\, \,GY} &= ({\mathcal U^{-1}})^C_A \tilde{\epsilon}^{A}_{\, \,DE} \, {\mathcal U}^D_G \,
{\mathcal U}^E_Y, &
{\bm \gamma}_{G,L}^{I} &= \frac{1}{2}\tilde{\gamma}_{A,L}^{I} \, {\mathcal U}^A_G,\nonumber \\
{\bm \Lambda}^i_{X,j} &=\Lambda_{A,j}^{i} \, {\mathcal U}^A_X.
\end{align*}
The background-field gauge transformations in the mass eigenstates are
\begin{align}
\delta \hat{\mathcal{A}}^{C,\mu} &= - \left[\partial^\mu \delta^C_G + { {\bm \epsilon}}^{C}_{\, \,GY} \hat{\mathcal{A}}^{Y,\mu} \right] \delta \hat{\beta}^G, \nonumber \\
\delta \hat{\Phi}^{K} &=- ({\mathcal V^{-1}})^K_I \,{\bm \gamma}_{G,L}^{I} \, {\mathcal V}^L_N \hat{\Phi}^{N} \delta \hat{\beta}^G.
\end{align}
The Ward identities are then expressed compactly as \cite{Helset:2018fgq}
\begin{eqnarray}
0 &=& \frac{\delta \Gamma}{\delta \hat{\beta}^G}, \\
&=& \partial^\mu \frac{\delta \Gamma}{\delta \hat{\mathcal{A}}^{X,\mu}}
+\sum_j \left(\bar{f}_j \overline{\bm \Lambda}^j_{X,i} \, \frac{\delta \Gamma}{\delta \bar{f}_{i}}
- \frac{\delta \Gamma}{\delta f_{i}} {\bm \Lambda}^i_{X,j} f_j \right)
-
\frac{\delta \Gamma}{\delta \hat{\mathcal{A}}^{C\mu}} {\bm \epsilon}^{C}_{\, \,XY} \hat{\mathcal{A}}^{Y \mu}
- \frac{\delta \Gamma}{\delta \hat{\Phi}^K} ({\mathcal V^{-1}})^K_I {\bm \gamma}_{X,L}^{I} {\mathcal V}^L_N \hat{\Phi}^N. \nonumber
\end{eqnarray}
In this manner, the ``naive'' form of the Ward identities is maintained. The descending relationships between
n-point functions encode the constraints of the unbroken (but non-manifest in the mass eigenstates) $\rm SU(2)_L \times U(1)_Y$ symmetry
that each operator in the SMEFT respects.
\section{Background Field Ward Identities}
The results of this work are the SMEFT extension of the treatment of the Electroweak
Standard Model in the BFM,
as developed in Refs.~\cite{Shore:1981mj,Einhorn:1988tc,Denner:1994nn,Denner:1994nh,Denner:1994xt,Denner:1995jd,Denner:1996gb,Denner:1996wn}.
Our results (with appropriate notational redefinitions) simplify to those reported in these past works in the limit
$\bar{v}_T/\Lambda \rightarrow 0$.
The background Higgs field $\hat{H}$ takes on the vacuum expectation value $\bar{v}_T$, while the quantum Higgs field has vanishing expectation value
\begin{align*}
\hat{H}(\phi_I) &= \frac{1}{\sqrt{2}}\begin{bmatrix} \hat{\phi}_2+i\hat{\phi}_1 \\ \bar{v}_T + \hat{\phi}_4 - i\hat{\phi}_3\end{bmatrix},
& \quad
H(\phi_I) &= \frac{1}{\sqrt{2}}\begin{bmatrix} \phi_2+i \phi_1 \\ \phi_4 - i \phi_3\end{bmatrix}.
\end{align*}
In the remainder of this paper we verify that a set of the Ward identities hold at one loop order.
This requires some notation. Our convention is that all momenta are incoming (here denoted with $k^\mu$) and we define
short hand notation as in Refs.~\cite{Denner:1994nn,Denner:1994nh,Denner:1994xt,Denner:1995jd,Denner:1996gb,Denner:1996wn}
\begin{eqnarray}
-i \Gamma^{\hat{V},\hat{V}'}_{\mu \nu}(k)&=&
\left(-g_{\mu \nu} k^2 + k_\mu k_\nu + g_{\mu \nu} \bar{M}_{\hat{V}}^2\right)\delta^{\hat{V} \hat{V}'}
+\left(-g_{\mu \nu} +\frac{k_\mu k_\nu}{k^2} \right) \Sigma_{T}^{\hat{V},\hat{V}'}- \frac{k_\mu k_\nu}{k^2}
\Sigma_{L}^{\hat{V},\hat{V}'},\\
\frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{4 \mu} \delta \hat{\Phi}^{3}} &=& i k^\mu \Sigma^{\hat{\mathcal{A}} \, \hat{\mathcal{\chi}}}(k^2),\\
\frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{4 \mu} \delta \hat{\Phi}^{4}} &=& i k^\mu \Sigma^{\hat{\mathcal{A}} \, \hat{\mathcal{H}}}(k^2),\\
\frac{\delta^2 \Gamma}{{\delta \hat{\Phi}^{3} \delta \hat{\mathcal{A}}^{3 \nu}}} &=&
{-\frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{3 \nu}\delta \hat{\Phi}^{3}}} =
i k^\nu \left[i \,\bar{M}_{\mathcal{Z}}+ \Sigma^{\hat{Z}\hat{\chi}}(k^2) \right], \\
\frac{\delta^2 \Gamma}{\delta \hat{\Phi}^{3} \delta \hat{\Phi}^{3}} &=& i k^2 + i \Sigma^{\hat{\chi}\hat{\chi}}(k^2),\\
\frac{\delta^2 \Gamma}{{\delta \hat{\Phi}^{\pm} \delta \hat{\mathcal{W}}^{\mp \nu}}} &=&
{-i} \, k^\nu \left[{\pm} \,\bar{M}_W + \Sigma^{{\hat{\Phi}^\pm \hat{\mathcal{W}}^\mp}}(k^2) \right], \\
{\frac{\delta^2 \Gamma}{\delta \hat{\mathcal{W}}^{\pm \nu} \delta \hat{\Phi}^{\mp}}}&=&
{i \, k^\nu \left[\mp \,\bar{M}_W + \Sigma^{{\hat{\mathcal{W}}^\pm \hat{\Phi}^\mp}}(k^2) \right]},\\
\frac{\delta^2 \Gamma}{\delta \hat{\Phi}^{+} \delta \hat{\Phi}^{-}} &=& i k^2 + i \Sigma^{\hat{\Phi}^+\hat{\Phi}^-}(k^2),\\
\frac{\delta^2 \Gamma}{\delta \hat{H} \delta \hat{H}} &=& i (k^2-\bar{m}_H^2) + i \Sigma^{\hat{H} \hat{H}}(k^2).
\end{eqnarray}
The two point function mass eigenstate SMEFT Ward identities in the BFM are \cite{Corbett:2019cwl}
\begin{align}\label{reducedwardID}
0 &= \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{4 \mu} \delta \hat{\mathcal{A}}^{Y \nu}}, \\
0 &= \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{4 \mu} \delta \hat{\Phi}^{I}}, \\
0 &= \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{3 \mu} \delta \hat{\mathcal{A}}^{Y \nu}} -
\bar{M}_{\mathcal{Z}} \, \frac{\delta^2 \Gamma}{{\delta \hat{\Phi}^{3} \delta \hat{\mathcal{A}}^{Y \nu}}},
\end{align}
and
\begin{align}
0 &= \partial^\mu \! \! \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{A}}^{3 \mu} \delta \hat{\Phi}^{I} }
-\bar{M}_{\mathcal{Z}} \frac{\delta^2 \Gamma}{\delta \hat{\Phi}^{3} \delta \hat{\Phi}^{I} }, \\
&+ \frac{\bar{g}_Z}{2} \frac{\delta \Gamma}{\delta \hat{\Phi}^{4}} \left[ \left(\sqrt{h}_{[4,4]} \sqrt{h}^{[3,3]} - \sqrt{h}_{[4,3]} \sqrt{h}^{[4,3]}\right) \delta^3_I
- \left(\sqrt{h}_{[4,4]} \sqrt{h}^{[3,4]} - \sqrt{h}_{[4,3]} \sqrt{h}^{[4,4]}\right) \delta^4_I \right], \nonumber \\
0 &= \partial^\mu \frac{\delta^2 \Gamma}{{\delta \hat{\mathcal{W}}^{\pm \mu} \delta \hat{\mathcal{A}}^{Y \nu}}} \pm
i \bar{M}_W \frac{\delta^2 \Gamma}{{\delta \hat{\Phi}^{\pm} \delta \hat{\mathcal{A}}^{Y \nu}}}, \\
0 &= \partial^\mu \frac{\delta^2 \Gamma}{\delta \hat{\mathcal{W}}^{\pm \mu} \delta \hat{\Phi}^{I}}
\pm i \bar{M}_W \frac{\delta^2 \Gamma}{\delta \hat{\Phi}^{\pm} \delta \hat{\Phi}^{I}}
\mp \frac{i \bar{g}_2}{4} \frac{\delta \Gamma}{\delta \hat{\Phi}^{4}}
\left(\sqrt{h}_{[4,4]}\mp i \sqrt{h}_{[4,3]} \right) \times \\
&\,\left[(\sqrt{h}^{[1,1]}+ \sqrt{h}^{[2,2]} \mp i \sqrt{h}^{[1,2]} \pm i \sqrt{h}^{[2,1]}) \delta^{\mp}_I
\mp (\sqrt{h}^{[1,1]}- \sqrt{h}^{[2,2]} \pm i \sqrt{h}^{[1,2]} \pm i \sqrt{h}^{[2,1]}) \delta^{\pm}_I\right]. \nonumber
\end{align}
To utilize these definitions, note that the convention that $k^\mu$ is always incoming leads, in the case of charged fields,
to several implicit sign choices that must be respected to establish the Ward identities.
From these identities, it follows that
\begin{align}
\Sigma_L^{\hat{\mathcal{A}} \, \hat{\mathcal{A}}}(k^2) &= 0, & \quad
\Sigma_T^{\hat{\mathcal{A}} \, \hat{\mathcal{A}}}(0) &= 0, \\
\Sigma_L^{\hat{\mathcal{A}} \, \hat{\mathcal{Z}}}(k^2) &= 0, & \quad
\Sigma_T^{\hat{\mathcal{A}} \, \hat{\mathcal{Z}}}(0) &= 0,
\end{align}
and
\begin{align}
\Sigma^{\hat{\mathcal{A}} \, \hat{\mathcal{\chi}}}(k^2) &= 0, & \quad
\Sigma^{\hat{\mathcal{A}} \, \hat{\mathcal{H}}}(k^2) &= 0.
\end{align}
Limiting the evaluation of the field space metrics to
$\mathcal{L}^{(6)}$ corrections in the Warsaw basis \cite{Grzadkowski:2010es},
further identities that directly follow are
\begin{align}
0 &= \Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)- i \bar{M}_{\mathcal{Z}} \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}(k^2), \\
0 &= k^2 \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}(k^2)- i \bar{M}_{\mathcal{Z}} \, \Sigma^{\hat{\chi} \hat{\chi}}(k^2) + i \, \frac{\bar{g}_Z}{2} T^H \left(1 - \tilde{C}_{H \Box} \right),
\end{align}
and
\begin{align}
0 &= \Sigma_L^{\hat{\mathcal{W}^\pm} \hat{\mathcal{W}}^\mp}(k^2) \pm \bar{M}_W \Sigma^{{\hat{\Phi}^\pm \hat{\mathcal{W}}^\mp}}(k^2), \\
0 &= k^2 \Sigma^{\hat{\mathcal{W}}^\pm \hat{\Phi}^\mp}(k^2)\pm \bar{M}_W \, \Sigma^{{\hat{\Phi}^\pm \hat{\Phi}^\mp}}(k^2)
\mp \frac{\bar{g}_2}{2} T^H \left(1 - \tilde{C}_{H \Box} + \frac{\tilde{C}_{HD}}{4} \right).
\end{align}
Note the appearance of the two derivative operators involving the Higgs field, which modify the tadpole term $T^H= -i \, \delta \Gamma/\delta \hat{H}$ fixing the vev.
It is important to include such corrections, which are a consistency condition due to the background field geometry
the SMEFT is quantized on.
Several of the remaining two point functions vanish exactly, and the corresponding Ward identities are trivially satisfied.
The geometric SMEFT Lagrangian parameters to $\mathcal{L}^{(6)}$ appearing in the Ward identities
are the geometric SMEFT masses~\cite{Alonso:2013hga}
\begin{align}
\bar{M}_W^2 &= \frac{\bar g_2^2 \bar{v}_T^2}{4}, \\
\bar{M}_{\mathcal{Z}}^2 &= \frac{\bar{v}_T^2}{4}({\overline g_{1}}^2+{\overline g_{2}}^2)+\frac{1}{8} \, \bar{v}_T^2 \, ({\overline g_{1}}^2+{\overline g_{2}}^2) \, \tilde{C}_{HD} +\frac{1}{2} \bar{v}_T^2 \, {\overline g_{1}} \, {\overline g_{2}} \, \tilde{C}_{HWB},\\
\bar{m}_h^2 &= 2 \lambda \bar{v}_T^2 \left[1- 3 \frac{\tilde{C}_H}{2 \, \lambda} + 2 \left(\tilde{C}_{H \Box} - \frac{\tilde{C}_{HD}}{4}\right)\right],
\end{align}
and the geometric SMEFT couplings
\begin{align}
\bar{e} &= \frac{{\overline g_{1}} \, {\overline g_{2}}}{\sqrt{{\overline g_{1}}^2+{\overline g_{2}}^2}} \left[1- \frac{{\overline g_{1}} \, {\overline g_{2}}}{{\overline g_{1}}^2+{\overline g_{2}}^2} \, \tilde{C}_{HWB} \right],
& \quad
\bar{g}_Z &= \sqrt{{\overline g_{1}}^2+{\overline g_{2}}^2}+ \frac{{\overline g_{1}} \, {\overline g_{2}}}{\sqrt{{\overline g_{1}}^2+{\overline g_{2}}^2}} \, \tilde{C}_{HWB}, \\
{\overline g_{1}} &= g_1(1+ \tilde{C}_{HB}),& \quad
{\overline g_{2}} &= g_2(1+ \tilde{C}_{HW}).
\end{align}
These parameters are defined at all orders in the $\bar{v}_T/\Lambda$ expansion in Ref.~\cite{Helset:2020yio,Hays:2020scx},
and we stress the Ward identities hold at all orders in the $\bar{v}_T/\Lambda$ expansion, and also hold for cross
terms in the perturbative expansion and $\bar{v}_T/\Lambda$ expansion. As such, the Ward identities provide a powerful
and important cross check of non-perturbative and perturbative results in the SMEFT.
\subsection{SM results; Bosonic loops}
We verify the Ward identities at the level of divergent one-loop contributions to the various $n$-point functions.
In the case of the SM, we confirm the results of
Refs.~\cite{Denner:1994nn,Denner:1994nh,Denner:1994xt,Denner:1995jd,Denner:1996gb,Denner:1996wn}
and reiterate these results here for a common notation and due to their contributions to the SMEFT Ward identities.
We focus on two point functions involving the gauge fields due to the role that the scalar and
gauge boson field space metrics have as the field space geometry modifies the Ward identities into those of the SMEFT.
The results (using $d = 4 - 2 \epsilon$ in dim. reg.) are
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{SM} &=&\frac{g_1^2 \, g_2^2}{(g_1^2+ g_2^2)} \, k^2 \, \left(\frac{-7}{16 \pi^2 \epsilon} \right), \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{SM} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{SM} &=& - \frac{g_1 \, g_2}{(g_1^2+ g_2^2)} \, k^2 \, \left(\frac{43 g_2^2 + g_1^2}{96 \pi^2 \epsilon} \right), \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{SM} &=&0,
\end{eqnarray}
and
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{SM} &=&
\frac{8 k^2 (g_1^4 - 43 g_2^4)+3 (\xi + 3) \bar{v}_T^2\, (g_1^2+g_2^2)^2 \, (g_1^2 + 3 g_2^2)}{768 \, \pi^2\, \epsilon \, (g_1^2+g_2^2)}, \\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{SM} &=&
\frac{(\xi +3) \bar{v}_T^2 (g_1^2 + g_2^2) (g_1^2 + 3 g_2^2)}{256 \pi^2 \epsilon}, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{SM} &=&
-i\frac{(\xi+ 3) \bar{v}_T \sqrt{g_1^2+ g_2^2} \, (g_1^2+ 3 g_2^2)}{128 \pi^2 \epsilon},\\
\left[\Sigma_T^{\hat{\mathcal{W}}^\pm\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{SM} &=&
\frac{g_2^2 (3 (\xi + 3) \bar{v}_T^2 (g_1^2 + 3 g_2^2)-344 k^2)}{768 \pi^2 \epsilon},\\
\left[\Sigma_L^{\hat{\mathcal{W}^\pm}\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{SM} &=&
\frac{g_2^2 (\xi + 3) \bar{v}_T^2 (g_1^2 + 3 g_2^2)}{256 \pi^2 \epsilon},\\
\left[\Sigma^{{\hat{\Phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{SM} &=&
- \left[\Sigma^{{\hat{\Phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{SM} =
- \frac{g_2 \, (\xi + 3) \bar{v}_T (g_1^2+ 3 g_2^2)}{128 \pi^2 \epsilon}, \\
\left[\Sigma^{\hat{\mathcal{\chi}}\hat{\mathcal{\chi}}}(k^2)\right]^{div}_{SM}
&=& \left[\Sigma^{\hat{\Phi}^+\hat{\Phi}^-}(k^2)\right]^{div}_{SM}, \\
&=& \frac{1}{\bar{v}_T} \left[T^H\right]_{SM}^{div}
- \frac{(\xi + 3) k^2 (g_1^2 + 3 g_2^2)}{64 \pi^2 \epsilon}, \nonumber \\\\
\left[T^H\right]_{SM}^{div} &=&
\frac{\bar{v}_T^3 (3 g_1^4 + 9 g_2^4 + 96 \lambda^2 + 12 g_2^2 \lambda \xi +
g_1^2 (6 g_2^2 + 4 \lambda \xi))}{256 \, \pi^2 \, \epsilon}.
\end{eqnarray}
Reducing the SMEFT Ward identities to the SM limit ($\Lambda \rightarrow \infty, \bar{v}_T \rightarrow v$) yields the corresponding
SM Ward identities, consistent with Refs.~\cite{Denner:1994nn,Denner:1994nh,Denner:1994xt,Denner:1995jd,Denner:1996gb,Denner:1996wn},
and the expressions above satisfy them.
The fermion self energies in the SM are suppressed here; the fermionic contributions to the bosonic two point functions
are given in the next subsection.
\subsection{SM results; Fermion loops}
Unlike the contributions to the bosonic one and two point functions discussed in the previous section,
the contributions from fermion loops depend on the number of fermion generations. For this reason we discuss these contributions
in a factorized fashion in the SM and the SMEFT. The fermion loop contributions
to the bosonic one and two point functions in the SM are shown in Fig.~\ref{fig:twopoints}, and give
the results
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{SM} &=&\frac{g_1^2 \, g_2^2}{(g_1^2+ g_2^2)} \, \left(\frac{32}{9} \right) \, \frac{k^2}{16 \pi^2 \epsilon} \, n, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{SM} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{SM} &=&
- \frac{ 20 \, g_1^3 \, g_2 - 12 \, g_1 \, g_2^3}{9 \, (g_1^2+g_2^2)} \, \frac{k^2}{16 \pi^2 \epsilon} \, n, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{SM} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{SM} &=&
\frac{4 \, g_2^2}{3} \, \frac{k^2}{16 \, \pi^2 \, \epsilon} \, n
-\sum_\psi \frac{N_C^\psi \, m_\psi^2 \, g_2^2}{32 \pi^2 \epsilon},\\
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{SM} &=&
-\sum_\psi \frac{N_C^\psi \, m_\psi^2 \, g_2^2}{32 \pi^2 \epsilon},\\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{SM} &=&
\frac{5g_1^4+3g_2^4}{g_1^2+g_2^2}\frac{k^2}{36\pi^2\epsilon}n
-\sum_\psi N_C^\psi \, \frac{m_\psi^2 \, (g_1^2+g_2^2)}{32 \pi^2 \epsilon},
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{SM} &=&-\sum_\psi \frac{N_C^\psi \, m_\psi^2 \, (g_1^2+g_2^2)}{32 \pi^2 \epsilon}, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{SM} &=&
{-}i \, \sum_\psi \, N_C^\psi \, Y_\psi^2 \, \bar{v}_T\, \frac{ \sqrt{g_1^2 + g_2^2}}{32 \, \pi^2 \, \epsilon}, \\
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{SM} &=&
- \left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{SM} =
\sum_\psi \, N_C^\psi \, Y_\psi^2 \, \bar{v}_T\, \frac{g_2}{32 \, \pi^2 \, \epsilon}, \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{SM} &=&
\frac{k^2}{16 \, \pi^2 \, \epsilon} \sum_\psi \, N_C^\psi \, Y_\psi^2 - \frac{\bar{v}_T^2}{16 \, \pi^2 \, \epsilon} \sum_\psi \, N_C^\psi \, Y_\psi^4,\\
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{SM} &=&
\frac{k^2}{16 \, \pi^2 \, \epsilon} \, \sum_\psi \, N_C^\psi \, Y_\psi^2 - \frac{\bar{v}_T^2}{16 \, \pi^2 \, \epsilon}
\, \sum_\psi \, N_C^\psi \, Y_\psi^4,\\
\left[T^H\right]_{SM}^{div} &=& - \frac{\bar{v}_T^3}{16 \, \pi^2 \, \epsilon} \sum_\psi \, N_C^\psi \, Y_\psi^4.
\end{eqnarray}
Here $Y_\psi$ is the Yukawa coupling of the fermion $\psi$, and $N_C^\psi= (3,3,1)$ for up quarks, down quarks and leptons respectively;
$n$ counts the fermion generations, with the sums over colour and fermion species within each generation already carried out in the prefactors.
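The $32/9$ appearing in $\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{SM}$ is, up to sign conventions, the standard one loop counting: a factor of $4/3$ per Dirac fermion of unit charge, weighted by the per generation charge sum $\sum_\psi N_C^\psi \, Q_\psi^2 = 8/3$. A minimal arithmetic cross check of this counting (illustrative only, and not part of the code package used below) is
\begin{verbatim}
from fractions import Fraction as F

# per-generation charge sum: up, down, lepton with N_C = (3,3,1)
NcQ2 = 3*F(2,3)**2 + 3*F(1,3)**2 + F(1)**2   # = 8/3
assert F(4,3)*NcQ2 == F(32,9)  # Dirac loop factor 4/3 per unit charge
\end{verbatim}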
These expressions, consistent with those in Refs.~\cite{Denner:1994nn,Denner:1994nh,Denner:1994xt,Denner:1995jd,Denner:1996gb,Denner:1996wn},
satisfy the SM limit BFM Ward identities.
\subsection{SMEFT results; Bosonic loops}
Directly evaluating the diagrams in Fig.~\ref{fig:twopoints}, with a full set of all possible higher dimensional operator
insertions, we find the following for the SMEFT.
The results were generated automatically using a new code package for BFM based SMEFT calculations at one loop order,
reported on in a companion paper \cite{Tyler}. The results have also been calculated by hand as an independent
cross check and verification of the automated generation of results. In many cases, consistently modifying
the SM parameters into those of the geoSMEFT leads to intricate cancelations
of Wilson coefficient dependence within a Feynman diagram, through the modified Feynman rules
of the BFM, and subsequently in the summation of the diagrams
into the two point functions. Further cancelations, and non-trivial combinations of Wilson coefficient dependence,
occur when combining the full two point functions with the geoSMEFT Lagrangian parameters that feed into the Ward identities.
Such intricate cancelations follow from the unbroken background field symmetries.
\subsubsection{Operator $Q_{HB}$}
Defining the combinations of couplings which occur frequently for this operator as
\begin{eqnarray}
\mathcal{P}_{CHB}^1 &=& \frac{((g_1^4 +3 \, g_2^4) \, \xi + 4 \, g_1^2 \, g_2^2 \, (\xi - 7)
+ 8 \, (g_1^2+g_2^2) \, \lambda)}{32 \pi^2 \epsilon},\\
\mathcal{P}_{CHB}^2 &=& \frac{(3+ \xi)(g_1^2 + 2 g_2^2)}{32 \pi^2 \epsilon},\\
\mathcal{P}_{CHB}^3 &=& \frac{7\,(g_1^2 + 7 g_2^2)}{48 \pi^2 \epsilon},\\
\mathcal{P}_{CHB}^4 &=& \frac{9\,(g_1^2 + g_2^2)+ 4 \, \lambda \, \xi}{128 \pi^2 \epsilon},
\end{eqnarray}
the two point functions in the SMEFT are
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=& \tilde{C}_{HB}\, k^2 \, \frac{g_2^2 \,\mathcal{P}_{CHB}^1}{(g_1^2+g_2^2)^2}, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
-\tilde{C}_{HB} \, k^2\, \left[\frac{g_1 \, g_2 \, \mathcal{P}_{CHB}^1}{(g_1^2+g_2^2)^2}
+ \frac{g_1 \, g_2 \, \mathcal{P}_{CHB}^3}{2 \,(g_1^2+g_2^2)}\right],\\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
\tilde{C}_{HB} \left[ k^2 \frac{g_1^2 \, \mathcal{P}_{CHB}^1}{(g_1^2+g_2^2)^2}
+ k^2 \, \frac{g_1^2 \,\mathcal{P}_{CHB}^3}{(g_1^2+g_2^2)}
+ \bar{v}_T^2 \, \frac{g_1^2 \,\mathcal{P}_{CHB}^2}{2} \right],\\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
\tilde{C}_{HB} \, \bar{v}_T^2 \, \frac{g_1^2 \,\mathcal{P}_{CHB}^2}{2}, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
- i \, \tilde{C}_{HB}\, \frac{\bar{v}_T\, g_1^2 \, (\xi + 3) \, (3 \, g_1^2 + 5 \, g_2^2)}{\sqrt{g_1^2+ g_2^2} \, 128 \, \pi^2 \, \epsilon},
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{W}}^\pm\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
\tilde{C}_{HB} \, \bar{v}_T^2 \, \frac{g_1^2 \, g_2^2 \, (\xi + 3)}{128 \pi^2 \epsilon},\\
\left[\Sigma_L^{\hat{\mathcal{W}^\pm}\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
\tilde{C}_{HB} \, \bar{v}_T^2 \frac{g_1^2 \, g_2^2 \, (\xi + 3)}{128 \, \pi^2 \, \epsilon},\\
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
-\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}_{HB}} =
- \tilde{C}_{HB} \, \bar{v}_T \frac{g_1^2 \, g_2 \, (\xi + 3)}{64 \, \pi^2 \, \epsilon},\\
\left[\Sigma^{\hat{\mathcal{\chi}}\hat{\mathcal{\chi}}}(k^2)\right]^{div}_{\tilde{C}_{HB}}
&=& \left[\Sigma^{\hat{\Phi}^+\hat{\Phi}^-}(k^2)\right]^{div}_{\tilde{C}_{HB}}
= \tilde{C}_{HB} \, g_1^2 \, \left[-k^2 \frac{\xi+ 3}{32 \pi^2 \epsilon} + \bar{v}_T^2 \mathcal{P}_{CHB}^4 \right], \nonumber \\\\
\left[T^H\right]_{\tilde{C}_{HB}}^{div} &=&
\tilde{C}_{HB} \, g_1^2 \, \bar{v}_T^3 \mathcal{P}_{CHB}^4;
\end{eqnarray}
$\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HB}}$
and $\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}}$ are exactly vanishing in the BFM,
consistent with the BFM Ward identities. Conversely
$\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HB}}$
and $\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}}$
are proportional to $k^2$, and only vanish as $k^2 \rightarrow 0$. This is also consistent with the SMEFT BFM Ward identities.
The remaining Ward identities are maintained in a more intricate and interesting fashion. For example
\begin{eqnarray}
- i \bar{M}_{\mathcal{Z}} \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}(k^2) &=& - i \, \frac{\sqrt{g_1^2+g_2^2} \, \bar{v}_T}{2}
\, \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HB}}
-i \frac{g_1^2 \, \tilde{C}_{HB} \, \bar{v}_T}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{SM},\nonumber \\
&=&
- \tilde{C}_{HB}\, \bar{v}_T^2 \, (\xi+ 3) \left[ \frac{g_1^2 \, (3 \, g_1^2 + 5 \, g_2^2)}{ 256 \, \pi^2 \, \epsilon}
+ \frac{g_1^2 \,(g_1^2+ 3 g_2^2)}{256 \pi^2 \epsilon}\right],\nonumber \\
&=& - \tilde{C}_{HB}\, \bar{v}_T^2 \, (\xi+ 3) \frac{g_1^2 (g_1^2+ 2 g_2^2)}{64 \, \pi^2 \, \epsilon},
\end{eqnarray}
which exactly cancels $\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}}$ establishing the corresponding BFM
Ward identity.
Here we have not expanded out $\bar{v}_T$, simply for compact notation. Expanding $\bar{v}_T$ out
in terms of the SM vev and corrections does not change the Ward identity for this operator.
The manner in which the Ward identities are maintained in the SMEFT involves a nontrivial combination of the appearance of the
SMEFT geometric Lagrangian parameters in the Ward identities, in conjunction with the direct evaluation of the
one loop diagrams in the BFM. In the latter, one must expand out the
dependence on the corresponding Wilson coefficient in the geometric SMEFT pole masses diagram by diagram.
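As an illustration, the coupling algebra behind this cancelation can be confirmed with a few lines of computer algebra. The following sympy sketch (illustrative only; the overall factor $\tilde{C}_{HB} \, \bar{v}_T^2/(\pi^2 \epsilon)$ common to both terms is stripped) verifies that the combined $- i \bar{M}_{\mathcal{Z}} \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}$ result cancels $\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}}$:
\begin{verbatim}
import sympy as sp

g1, g2, xi = sp.symbols('g1 g2 xi', positive=True)

# intermediate step: (3 g1^2 + 5 g2^2) + (g1^2 + 3 g2^2) = 4 (g1^2 + 2 g2^2)
assert sp.expand((3*g1**2 + 5*g2**2) + (g1**2 + 3*g2**2)
                 - 4*(g1**2 + 2*g2**2)) == 0

mz_zchi  = -(xi + 3)*g1**2*(g1**2 + 2*g2**2)/64   # combined result above
sig_L_zz = g1**2*(3 + xi)*(g1**2 + 2*g2**2)/64    # g1^2 P^2_CHB / 2

assert sp.simplify(mz_zchi + sig_L_zz) == 0
\end{verbatim}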
Similarly, the following $\mathcal{Z}$ identity has the individual contributions
\begin{eqnarray}
k^2 \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
- i \, \tilde{C}_{HB}\, k^2 \, \frac{\bar{v}_T\, g_1^2 \, (\xi + 3) \, (3 \, g_1^2 + 5 \, g_2^2)}{\sqrt{g_1^2+ g_2^2} \, 128 \, \pi^2 \, \epsilon}, \\
- i \bar{M}_{\mathcal{Z}} \, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div} &=&
- i \, \frac{\sqrt{g_1^2+g_2^2} \, \bar{v}_T}{2}
\, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HB}}
-i \frac{g_1^2 \, \tilde{C}_{HB} \, \bar{v}_T}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div}_{SM}, \nonumber \\
&=& i \, \tilde{C}_{HB}\, k^2 \, \frac{\bar{v}_T\, g_1^2 \, (\xi + 3) \, (3 \, g_1^2 + 5 \, g_2^2)}{\sqrt{g_1^2+ g_2^2} \, 128 \, \pi^2 \, \epsilon}
- i \, \frac{\tilde{C}_{HB}}{2} \, g_1^2 \, \bar{v}_T^3 \, \sqrt{g_1^2+g_2^2} \, \mathcal{P}^4_{CHB}, \nonumber \\
&-&i \, \frac{\tilde{C}_{HB} \, g_1^2}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[T^H\right]_{SM}^{div},\\
i \, \frac{\bar{g}_Z}{2} T^H &=&
i \, \frac{\tilde{C}_{HB}}{2} \, g_1^2 \, \bar{v}_T^3 \, \sqrt{g_1^2+g_2^2} \, \mathcal{P}^4_{CHB}
+ i \, \frac{\tilde{C}_{HB} \, g_1^2}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[T^H\right]_{SM}^{div},
\end{eqnarray}
that combine to satisfy the corresponding Ward Identity.
The charged field Ward identities are satisfied directly for this operator, as
\begin{eqnarray}
\left[\Sigma_L^{\hat{\mathcal{W}}^+ \,\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{HB}} + \frac{g_2 \, \bar{v}_T}{2} \left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{HB}} = 0
\end{eqnarray}
and
\begin{eqnarray}
k^2 \left[\Sigma^{{\hat{\mathcal{W}}^+} \hat{\Phi}^-}(k^2) \right]^{div}_{\tilde{C}_{HB}}
+ \frac{g_2 \, \bar{v}_T}{2} \, \left[\Sigma^{\hat{\Phi}^- \hat{\Phi}^+}(k^2) \right]^{div}_{\tilde{C}_{HB}}
- \frac{g_2}{2} \left[T^H\right]^{div}_{\tilde{C}_{HB}} =0.
\end{eqnarray}
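Both charged field identities can be confirmed in the same symbolic fashion. A minimal sympy sketch (again stripping the common $\tilde{C}_{HB}/(\pi^2 \epsilon)$ factor, and reading off $\Sigma^{\hat{\mathcal{W}}^+ \hat{\Phi}^-} = - \Sigma^{\hat{\phi}^+ \hat{\mathcal{W}}^-}$ from the results above) is
\begin{verbatim}
import sympy as sp

g1, g2, xi, lam, k2, vT = sp.symbols('g1 g2 xi lam k2 vT',
                                     positive=True)

P4         = (9*(g1**2 + g2**2) + 4*lam*xi)/128   # P^4_CHB
sigL_WW    = vT**2*g1**2*g2**2*(xi + 3)/128
sig_phiW   = -vT*g1**2*g2*(xi + 3)/64             # Sigma^{phi+ W-}
sig_PhiPhi = g1**2*(-k2*(xi + 3)/32 + vT**2*P4)
TH         = g1**2*vT**3*P4

assert sp.simplify(sigL_WW + g2*vT/2*sig_phiW) == 0
assert sp.simplify(k2*(-sig_phiW) + g2*vT/2*sig_PhiPhi
                   - g2/2*TH) == 0
\end{verbatim}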
\subsubsection{Operator $Q_{HW}$}
Defining the combinations of couplings which occur frequently for this operator as
\begin{eqnarray}
\mathcal{P}_{CHW}^1 &=& \frac{((g_1^4 +3 \, g_2^4) \, \xi + 4 \, g_1^2 \, g_2^2 \, (\xi - 7)
+ 8 \, (g_1^2+g_2^2) \, \lambda)}{32 \pi^2 \epsilon},\\
\mathcal{P}_{CHW}^2 &=& \frac{(3+ \xi)(2 \, g_1^2 + 3 \, g_2^2)}{32 \pi^2 \epsilon},\\
\mathcal{P}_{CHW}^3 &=& \frac{5 \, g_1^2 - 37 g_2^2}{48 \pi^2 \epsilon},\\
\mathcal{P}_{CHW}^4 &=& \frac{(9 g_1^2 + 27 g_2^2)+ 12 \, \lambda \, \xi}{128 \pi^2 \epsilon},
\end{eqnarray}
the two point functions in the SMEFT are
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=& \tilde{C}_{HW}\, k^2 \, \frac{g_1^2 \,\mathcal{P}_{CHW}^1}{(g_1^2+g_2^2)^2}, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
0, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
\tilde{C}_{HW} \, k^2\, \left[\frac{g_1 \, g_2 \, \mathcal{P}_{CHW}^1}{(g_1^2+g_2^2)^2}
+ \frac{g_1 \, g_2 \, \mathcal{P}_{CHW}^3}{2 \,(g_1^2+g_2^2)}\right],\\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
\tilde{C}_{HW} \left[ k^2 \frac{g_2^2 \, \mathcal{P}_{CHW}^1}{(g_1^2+g_2^2)^2}
+ k^2 \, \frac{g_2^2 \,\mathcal{P}_{CHW}^3}{(g_1^2+g_2^2)}
+ \bar{v}_T^2 \, \frac{g_2^2 \,\mathcal{P}_{CHW}^2}{2} \right],\\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
\tilde{C}_{HW} \, \bar{v}_T^2 \, \frac{g_2^2 \,\mathcal{P}_{CHW}^2}{2}, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
- i \, \tilde{C}_{HW}\, \frac{\bar{v}_T\, g_2^2 \, (\xi + 3) \,
(7 \, g_1^2 + 9 \, g_2^2)}{\sqrt{g_1^2 + g_2^2} \, 128 \, \pi^2 \, \epsilon},\\
\left[\Sigma_T^{\hat{\mathcal{W}^\pm}\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
\tilde{C}_{HW} \left[k^2 \, \frac{\mathcal{P}_{CHW}^1}{g_1^2 + g_2^2}
+ k^2 \, g_2^2\, \frac{\mathcal{P}_{CHW}^3}{g_1^2 + g_2^2} +\bar{v}_T^2 \, \frac{(g_1^2 + 6 g_2^2)\, g_2^2 \, (\xi + 3)}{128 \pi^2 \epsilon} \right], \\
\left[\Sigma_L^{\hat{\mathcal{W}}^\pm\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
\tilde{C}_{HW} \, \bar{v}_T^2 \, \frac{(g_1^2 + 6 g_2^2)\, g_2^2 \, (\xi + 3)}{128 \pi^2 \epsilon},\\
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
-\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}_{HW}} =
- \tilde{C}_{HW}\, \bar{v}_T \frac{(g_1^2+9 g_2^2) \, g_2 \, (\xi + 3)}{128 \, \pi^2 \, \epsilon},\\
\left[\Sigma^{\hat{\mathcal{\chi}}\hat{\mathcal{\chi}}}(k^2)\right]^{div}_{\tilde{C}_{HW}}
&=& \left[\Sigma^{\hat{\Phi}^+\hat{\Phi}^-}(k^2)\right]^{div}_{\tilde{C}_{HW}}
= \tilde{C}_{HW} \, g_2^2 \, \left[-3 \, k^2 \frac{\xi+ 3}{32 \pi^2 \epsilon}
+ \bar{v}_T^2 \, \mathcal{P}_{CHW}^4 \right], \\
\left[T^H\right]_{\tilde{C}_{HW}}^{div} &=&
\tilde{C}_{HW} \, g_2^2 \, \bar{v}_T^3 \mathcal{P}_{CHW}^4;
\end{eqnarray}
$\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HW}}=
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}}=0$
and $\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HW}}$
and $\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}}$
have the same dependence on $k^2$ as in the case of $\tilde{C}_{HB}$. The
corresponding SMEFT BFM Ward identities are satisfied in the same manner.
Further, we find
\begin{eqnarray}
- i \bar{M}_{\mathcal{Z}} \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}(k^2) &=& - i \, \frac{\sqrt{g_1^2+g_2^2} \, \bar{v}_T}{2}
\, \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HW}}
-i \frac{g_2^2 \, \tilde{C}_{HW} \, \bar{v}_T}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{SM},\nonumber \\
&=&
- \tilde{C}_{HW}\, \bar{v}_T^2 \, (\xi+ 3) \left[ \frac{g_2^2 \, (7 \, g_1^2 + 9 \, g_2^2)}{ 256 \, \pi^2 \, \epsilon}
+ \frac{g_2^2 \,(g_1^2+ 3 g_2^2)}{256 \pi^2 \epsilon}\right], \\
&=& - \tilde{C}_{HW}\, \bar{v}_T^2 \, (\xi+ 3) \frac{g_2^2 (2 g_1^2+ 3 g_2^2)}{64 \, \pi^2 \, \epsilon},
\end{eqnarray}
which exactly cancels $\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}}$.
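The same style of computer algebra check applies here; stripping the common $\tilde{C}_{HW} \, \bar{v}_T^2/(\pi^2 \epsilon)$ factor, an illustrative sympy sketch is
\begin{verbatim}
import sympy as sp

g1, g2, xi = sp.symbols('g1 g2 xi', positive=True)

mz_zchi  = -(xi + 3)*g2**2*((7*g1**2 + 9*g2**2)
                            + (g1**2 + 3*g2**2))/256
sig_L_zz = g2**2*(3 + xi)*(2*g1**2 + 3*g2**2)/64   # g2^2 P^2_CHW / 2

assert sp.simplify(mz_zchi + sig_L_zz) == 0
\end{verbatim}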
In the case of $\tilde{C}_{HW}$, the remaining $\mathcal{Z}$ identity has the individual contributions
\begin{eqnarray}
k^2 \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
- i \, \tilde{C}_{HW}\, k^2 \, \frac{\bar{v}_T\, g_2^2 \, (\xi + 3) \, (7 \, g_1^2 + 9 \, g_2^2)}{\sqrt{g_1^2+ g_2^2} \, 128 \, \pi^2 \, \epsilon}, \\
- i \bar{M}_{\mathcal{Z}} \, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div} &=&
- i \, \frac{\sqrt{g_1^2+g_2^2} \, \bar{v}_T}{2}
\, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HW}}
-i \frac{g_2^2 \, \tilde{C}_{HW} \, \bar{v}_T}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div}_{SM}, \nonumber \\
&=& i \, \tilde{C}_{HW}\, k^2 \, \frac{\bar{v}_T\, g_2^2 \, (\xi + 3) \, (7 \, g_1^2 + 9 \, g_2^2)}{\sqrt{g_1^2+ g_2^2} \, 128 \, \pi^2 \, \epsilon}
- i \, \frac{\tilde{C}_{HW}}{2} \, g_2^2 \, \bar{v}_T^3 \, \sqrt{g_1^2+g_2^2} \, \mathcal{P}^4_{CHW}, \nonumber \\
&-&i \, \frac{\tilde{C}_{HW} \, g_2^2}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[T^H\right]_{SM}^{div},\\
i \, \frac{\bar{g}_Z}{2} T^H &=&
i \, \frac{\tilde{C}_{HW}}{2} \, g_2^2 \, \bar{v}_T^3 \, \sqrt{g_1^2+g_2^2} \, \mathcal{P}^4_{CHW}
+ i \, \frac{\tilde{C}_{HW} \, g_2^2}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[T^H\right]_{SM}^{div},
\end{eqnarray}
that combine to satisfy the corresponding Ward Identity.
One charged field Ward identity is satisfied directly, as
\begin{eqnarray}
\left[\Sigma_L^{\hat{\mathcal{W}}^+ \,\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{HW}} + \frac{g_2 \, \bar{v}_T}{2} \left[\Sigma^{\hat{\mathcal{W}}^- \,\hat{\phi}^+}(k^2)\right]^{div}_{\tilde{C}_{HW}} +\frac{g_2 \bar v_T}{2}\tilde C_{HW}\left[\Sigma^{\hat{\mathcal{W}}^- \,\hat{\phi}^+}(k^2)\right]^{div}_{SM}= 0,
\end{eqnarray}
while the remaining identity also requires the redefinition of the $\mathcal{W}$ mass into the geoSMEFT mass $\bar{M}_W$ to be established,
as
\begin{eqnarray}
k^2 \left[\Sigma^{{\hat{\mathcal{W}}^+ \hat{\Phi}^-}}(k^2) \right]^{div}_{\tilde{C}_{HW}} &=&
\tilde{C}_{HW}\, k^2 \, \bar{v}_T \frac{(g_1^2+9 g_2^2) \, g_2 \, (\xi + 3)}{128 \, \pi^2 \, \epsilon},\\
\bar{M}_W \left[\Sigma^{\hat{\Phi}^- \, \hat{\Phi}^+}(k^2) \right]
&=& \frac{g_2 \, \bar{v}_T}{2} \, \left[\Sigma^{\hat{\Phi}^- \, \hat{\Phi}^+}(k^2) \right]^{div}_{\tilde{C}_{HW}}
+ \frac{g_2 \, \bar{v}_T}{2} \, \tilde{C}_{HW} \, \, \left[\Sigma^{\hat{\Phi}^- \, \hat{\Phi}^+}(k^2) \right]^{div}_{SM},\nonumber \\
&=& \frac{g_2}{2} \tilde{C}_{HW} \left[T^H\right]_{SM}^{div}
+ \frac{g_2}{2} \left[T^H\right]^{div}_{\tilde{C}_{HW}}
- \tilde{C}_{HW}\, k^2 \, \bar{v}_T \frac{(g_1^2+9 g_2^2) \, g_2 \, (\xi + 3)}{128 \, \pi^2 \, \epsilon},\nonumber \\
- \frac{\bar{g}_2}{2} \left[T^H\right] &=&
- \frac{g_2}{2} \tilde{C}_{HW} \left[T^H\right]^{div}_{SM}
- \frac{g_2}{2} \left[T^H\right]^{div}_{\tilde{C}_{HW}}.
\end{eqnarray}
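These two charged field identities can also be confirmed symbolically; a sympy sketch (stripping $\tilde{C}_{HW}/(\pi^2 \epsilon)$ and carrying the SM tadpole as a symbol, since it cancels between the terms) is
\begin{verbatim}
import sympy as sp

g1, g2, xi, lam, k2, vT = sp.symbols('g1 g2 xi lam k2 vT',
                                     positive=True)
TH_SM = sp.Symbol('TH_SM')                      # SM tadpole, cancels

P4            = (9*g1**2 + 27*g2**2 + 12*lam*xi)/128
sigL_WW       = vT**2*(g1**2 + 6*g2**2)*g2**2*(xi + 3)/128
sig_phiW      = -vT*(g1**2 + 9*g2**2)*g2*(xi + 3)/128
sig_phiW_SM   = -g2*(xi + 3)*vT*(g1**2 + 3*g2**2)/128
sig_PhiPhi    = g2**2*(-3*k2*(xi + 3)/32 + vT**2*P4)
sig_PhiPhi_SM = TH_SM/vT - (xi + 3)*k2*(g1**2 + 3*g2**2)/64
TH            = g2**2*vT**3*P4

assert sp.simplify(sigL_WW
                   + g2*vT/2*(sig_phiW + sig_phiW_SM)) == 0
assert sp.simplify(k2*(-sig_phiW)
                   + g2*vT/2*(sig_PhiPhi + sig_PhiPhi_SM)
                   - g2/2*(TH + TH_SM)) == 0
\end{verbatim}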
\begin{figure*}[!tp]
\centering
\includegraphics[width=1\textwidth]{diagrams.pdf}
\caption{Two point function diagrams evaluated in the SMEFT. In each diagram, all possible operator insertions
are implied in the one and two point functions. Here long dashed lines are scalar fields, including Goldstone boson fields,
and short dashed lines are ghost fields.}
\label{fig:twopoints}
\end{figure*}
\subsubsection{Operator $Q_{HWB}$}
The Wilson coefficient of the operator $Q_{HWB}$ modifies the Weinberg angle of the SM
into the appropriate rotation to mass eigenstates in the SMEFT, given in Eqn.~\eqref{basicrotations}.
The same Wilson coefficient shifts the definition of the $\mathcal{Z}$ mass in $\bar{M}_{\mathcal{Z}}$, modifies $g_Z$ to
$\bar{g}_Z$ etc. The various contributions to the BFM Ward identities combine in the following
(somewhat intricate) fashion. Again defining combinations of couplings which occur frequently as
\begin{eqnarray}
\mathcal{P}_{CHWB}^1 &=& \frac{((g_1^4 +3 \, g_2^4) \, \xi + 12 g_2^4 + 4 \, g_1^2 \, g_2^2 \, (\xi - 4)
+ 8 \, (g_1^2+g_2^2) \, \lambda)}{32 \pi^2 \epsilon},\\
\mathcal{P}_{CHWB}^2 &=& \frac{(3+ \xi)(g_1^2 + 2 \, g_2^2)}{32 \pi^2 \epsilon},\\
\mathcal{P}_{CHWB}^3 &=& \frac{g_1^2 - 3 g_2^2}{32 \pi^2 \epsilon},\\
\mathcal{P}_{CHWB}^4 &=& \frac{9 (g_1^2 +g_2^2)+ 4 \, \lambda \, \xi}{128 \pi^2 \epsilon},
\end{eqnarray}
the two point functions in the SMEFT are
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
-\tilde{C}_{HWB}\, k^2 \, \frac{g_1 \, g_2 \,\mathcal{P}_{CHWB}^1}{(g_1^2+g_2^2)^2}, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
0, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
\tilde{C}_{HWB} \, k^2\, \left[-\frac{(g_2^2-g_1^2) \, \mathcal{P}_{CHWB}^1}{2 \,(g_1^2+g_2^2)^2}
+ \mathcal{P}_{CHWB}^3\right],\\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
\tilde{C}_{HWB} \, g_1 \, g_2 \, \left[+ k^2 \frac{\mathcal{P}_{CHWB}^1}{(g_1^2+g_2^2)^2}
+ \frac{k^2}{4 \, \pi^2 \, \epsilon}
+ \bar{v}_T^2 \, \frac{\mathcal{P}_{CHWB}^2}{2} \right],\\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
\tilde{C}_{HWB} \, \bar{v}_T^2 \, \frac{g_1 \,g_2 \,\mathcal{P}_{CHWB}^2}{2}, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
- i \, \tilde{C}_{HWB}\, \frac{\bar{v}_T\, g_1 \, g_2 \, (\xi + 3) \,
(3 \, g_1^2 + 5 \, g_2^2)}{\sqrt{g_1^2 + g_2^2} \, 128 \, \pi^2 \, \epsilon},\\
\left[\Sigma_T^{\hat{\mathcal{W}^\pm}\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
\tilde{C}_{HWB} \, g_1 \, g_2 \, \left[\frac{k^2}{16 \, \pi^2 \, \epsilon} + \frac{\bar{v}_T^2 \, g_2^2 \, (\xi + 3)}{128 \, \pi^2 \, \epsilon} \right], \\
\left[\Sigma_L^{\hat{\mathcal{W}}^\pm\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
\tilde{C}_{HWB} \, \bar{v}_T^2 \, \frac{g_1 \, g_2^3 \, (\xi + 3)}{128 \pi^2 \epsilon},\\
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
-\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} =
- \tilde{C}_{HWB}\, \bar{v}_T \frac{g_1 \, g_2^2 \, (\xi + 3)}{64 \, \pi^2 \, \epsilon},\\
\left[\Sigma^{\hat{\mathcal{\chi}}\hat{\mathcal{\chi}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}}
&=& \left[\Sigma^{\hat{\Phi}^+\hat{\Phi}^-}(k^2)\right]^{div}_{\tilde{C}_{HWB}}
= \tilde{C}_{HWB} \,g_1 \, g_2 \, \left[- \, k^2 \frac{\xi+ 3}{32 \pi^2 \epsilon}
+ \bar{v}_T^2 \, \mathcal{P}_{CHWB}^4 \right], \\
\left[T^H\right]_{\tilde{C}_{HWB}}^{div} &=&
\tilde{C}_{HWB} \, g_1 \, g_2 \, \bar{v}_T^3 \mathcal{P}_{CHWB}^4.
\end{eqnarray}
Once again
\begin{eqnarray}
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}}=
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}}=0,
\end{eqnarray}
and the fact that
\begin{align*}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &\propto k^2, & \quad
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &\propto k^2
\end{align*}
directly establish the SMEFT BFM Ward identities involving the photon.
Due to the modification of the mass parameter of the $\mathcal{Z}$ to $\bar{M}_\mathcal{Z}$ one finds
\begin{eqnarray}
- i \bar{M}_{\mathcal{Z}} \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}(k^2) &=& - i \, \frac{\sqrt{g_1^2+g_2^2} \, \bar{v}_T}{2}
\, \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HWB}}
-i \frac{g_1 \, g_2 \, \tilde{C}_{HWB} \, \bar{v}_T}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{SM},\nonumber \\
&=&
- \tilde{C}_{HWB}\, \bar{v}_T^2 \, (\xi+ 3) \left[ \frac{g_1 \, g_2 \, (3 \, g_1^2 + 5 \, g_2^2)}{ 256 \, \pi^2 \, \epsilon}
+ \frac{g_1 \, g_2 \,(g_1^2+ 3 g_2^2)}{256 \pi^2 \epsilon}\right], \nonumber \\
&=& - \tilde{C}_{HWB}\, \bar{v}_T^2 \, (\xi+ 3) \frac{g_1 g_2 (g_1^2+ 2 g_2^2)}{64 \, \pi^2 \, \epsilon}.
\end{eqnarray}
This combined result cancels $\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}}$ exactly.
A similar modification of $g_Z$ to $\bar{g}_Z$ in the SMEFT Ward identities in the BFM results in
\begin{eqnarray}
k^2 \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
- i \, \tilde{C}_{HWB}\, k^2 \, \frac{\bar{v}_T\, g_1 \, g_2 \, (\xi + 3) \, (3 \, g_1^2 + 5 \, g_2^2)}{\sqrt{g_1^2+ g_2^2} \, 128 \, \pi^2 \, \epsilon}, \\
- i \bar{M}_{\mathcal{Z}} \, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div} &=&
- i \, \frac{\sqrt{g_1^2+g_2^2} \, \bar{v}_T}{2}
\, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HWB}}
-i \frac{g_1 \, g_2 \, \tilde{C}_{HWB} \, \bar{v}_T}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div}_{SM}, \nonumber \\
&=& i \, \tilde{C}_{HWB}\, k^2 \, \frac{\bar{v}_T\, g_1 g_2 \, (\xi + 3) \, (3 \, g_1^2 + 5 \, g_2^2)}{\sqrt{g_1^2+ g_2^2} \, 128 \, \pi^2 \, \epsilon}
- i \, \frac{\tilde{C}_{HWB}}{2} \, g_1 \, g_2 \, \bar{v}_T^3 \, \sqrt{g_1^2+g_2^2} \, \mathcal{P}^4_{CHWB}, \nonumber \\
&-&i \, \frac{\tilde{C}_{HWB} \, g_1 \, g_2}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[T^H\right]_{SM}^{div},\\
i \, \frac{\bar{g}_Z}{2} T^H &=&
i \, \frac{\tilde{C}_{HWB} \, g_1g_2}{2} \, \bar{v}_T^3 \, \sqrt{g_1^2+g_2^2} \, \mathcal{P}^4_{CHWB}
+ i \, \frac{\tilde{C}_{HWB} \, g_1 \, g_2}{2 \, \sqrt{g_1^2+g_2^2}} \, \left[T^H\right]_{SM}^{div}.
\end{eqnarray}
The remaining two point function Ward identities are trivially satisfied for this operator.
\subsubsection{Operator $Q_{HD}$}
For all operators in the SMEFT,
a consistent analysis of an operator's effects is essential to avoid introducing a hard breaking of a symmetry
that defines the theory. The two derivative Higgs operators in $\mathcal{L}^{(6)}$ satisfy the Ward identities in a manner
that involves a direct modification of tadpole contributions. Including such effects in
a formulation of the SMEFT is essential, even at tree level, in order to maintain the background field gauge invariance
encoding the unbroken, but non-manifest, $\rm SU(2)_L\times U(1)_Y$ symmetry of the theory.
These symmetry constraints are the Ward identities.
We define for $\tilde{C}_{HD}$ the short hand notation
\begin{eqnarray}
\mathcal{P}_{CHD}^1 &=& \frac{2 \, (g_1^2 + 3 g_2^2) \, \xi + (9 \, g_1^2+ 21 \, g_2^2) + 24 \, \lambda}{512 \pi^2 \epsilon},\\
\mathcal{P}_{CHD}^2 &=& \frac{15 \, g_1^4 + 30 \, g_1^2 \, g_2^2 + 9 \, g_2^4 - 608 \lambda^2 - 4 \xi \, \lambda (g_1^2+ 3g_2^2)}{1024 \, \pi^2 \, \epsilon}.
\end{eqnarray}
The one and two point functions dependence on $\tilde{C}_{HD}$ at one loop is
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=& 0, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=& 0, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=&
- \tilde{C}_{HD} \, k^2\, \frac{g_1 \, g_2}{192 \, \pi^2 \epsilon},\\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=&
\tilde{C}_{HD} \left[k^2 \frac{g_1^2}{96 \pi^2 \epsilon} + \bar{v}_T^2 \, (g_1^2+ g_2^2) \,\mathcal{P}_{CHD}^1 \right],\\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=&
\tilde{C}_{HD} \, \bar{v}_T^2 \, (g_1^2 + g_2^2) \,\mathcal{P}_{CHD}^1, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=&
- i \tilde{C}_{HD}\, \bar{v}_T \, \sqrt{g_1^2 + g_2^2} \, \frac{3 \, (g_1^2+ 3 g_2^2) \, \xi + 15 g_1^2 + 33 g_2^2 + 48 \lambda}{512 \, \pi^2 \, \epsilon},\\
\left[\Sigma_T^{\hat{\mathcal{W}^\pm}\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=& \left[\Sigma_L^{\hat{\mathcal{W}}^\pm\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{HD}} =
- 3 \, \tilde{C}_{HD} \, g_2^2 \, \frac{\bar{v}_T^2 \,(g_2^2-g_1^2)}{256 \, \pi^2 \, \epsilon}, \\
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=&
-\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}_{HD}} =
3 \, \tilde{C}_{HD}\, \bar{v}_T \frac{g_2 \, (g_2^2 - g_1^2)}{128 \, \pi^2 \, \epsilon},\\
\left[\Sigma^{\hat{\mathcal{\chi}}\hat{\mathcal{\chi}}}(k^2)\right]^{div}_{\tilde{C}_{HD}}
&=& - \, k^2 \, \tilde{C}_{HD}\, \frac{(g_1^2 + 3 g_2^2) \xi + 6 (g_1^2 + 2 g_2^2) + 24 \lambda}{128 \pi^2 \epsilon}, \nonumber \\
&+& \bar{v}_T^2 \, \tilde{C}_{HD} \frac{3 g_1^2 \,(g_1^2 + 2 g_2^2) - 2 \lambda \, (g_1^2 + 3 g_2^2) \xi -176 \lambda^2}{256 \pi^2 \epsilon} \, \\
\left[\Sigma^{\hat{\Phi}^+\hat{\Phi}^-}(k^2)\right]^{div}_{\tilde{C}_{HD}}&=&
k^2 \, \tilde{C}_{HD} \,\frac{3 \,(g_2^2-g_1^2)}{64 \, \pi^2 \, \epsilon}
+ \bar{v}_T^2 \, \tilde{C}_{HD} \, \frac{9 \, (g_1^2+ g_2^2)^2 - 256 \lambda^2}{512 \pi^2 \epsilon},\\
\left[T^H\right]_{\tilde{C}_{HD}}^{div} &=&
\tilde{C}_{HD} \, \bar{v}_T^3 \, \mathcal{P}_{CHD}^2.
\end{eqnarray}
The photon Ward identities are trivially satisfied for this operator.
As $\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HD}} \propto k^2$
the remaining identity for $\Sigma^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}$ directly follows.
Further, $\bar{M}_\mathcal{Z}$ is modified by $\tilde{C}_{HD}$ in the geoSMEFT, and one finds the expected relationship
\begin{eqnarray}
- i \bar{M}_{\mathcal{Z}} \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}(k^2) &=& - i \, \frac{\sqrt{g_1^2+g_2^2} \, \bar{v}_T}{2}
\, \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HD}}
-i \sqrt{g_1^2+g_2^2} \frac{\tilde{C}_{HD} \, \bar{v}_T}{8} \, \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{SM},\nonumber \\
&=&-\tilde{C}_{HD} \, \bar{v}_T^2 \, (g_1^2 + g_2^2) \,\mathcal{P}_{CHD}^1,
\end{eqnarray}
leading to the cancelation of $\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HD}}$.
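Explicitly, both terms on the right hand side carry the common factor $-\tilde{C}_{HD}\,\bar{v}_T^2\,(g_1^2+g_2^2)$ once the $\sqrt{g_1^2+g_2^2}$ prefactors are combined, and their remaining coefficients sum to $\mathcal{P}_{CHD}^1$. A sympy sketch of this step (illustrative only) is
\begin{verbatim}
import sympy as sp

g1, g2, xi, lam = sp.symbols('g1 g2 xi lam', positive=True)

P1 = (2*(g1**2 + 3*g2**2)*xi + 9*g1**2 + 21*g2**2 + 24*lam)/512
combo = ((3*(g1**2 + 3*g2**2)*xi + 15*g1**2 + 33*g2**2
          + 48*lam)/1024
         + (xi + 3)*(g1**2 + 3*g2**2)/1024)

assert sp.simplify(combo - P1) == 0
\end{verbatim}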
The remaining $\mathcal{Z}$ identity has individual contributions
\begin{eqnarray}
- i \bar{M}_{\mathcal{Z}} \, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div} &=&
- i \, \frac{\sqrt{g_1^2+g_2^2} \, \bar{v}_T}{2}
\, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HD}}
-i \sqrt{g_1^2+g_2^2} \frac{\tilde{C}_{HD} \, \bar{v}_T}{8} \, \, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div}_{SM},\nonumber \\
&=& - i \, \frac{\sqrt{g_1^2+g_2^2}}{2} \, \left[T^H\right]_{\tilde{C}_{HD}}^{div} - k^2 \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HD}}, \\
i \, \frac{\bar{g}_Z}{2} T^H &=&
i \, \frac{\sqrt{g_1^2+g_2^2}}{2} \, \left[T^H\right]_{\tilde{C}_{HD}}^{div},
\end{eqnarray}
that combine to satisfy the corresponding Ward Identity.
The BFM Ward identity $\Sigma_L^{\hat{\mathcal{W}^+} \hat{\mathcal{W}}^-} + \bar{M}_W \Sigma^{\hat{\mathcal{W}}^- \hat{\Phi}^+} =0$
is directly satisfied for this operator. More interesting is the modified tadpole contribution in
the identity
\begin{eqnarray}
0 = k^2 \Sigma^{{\hat{\mathcal{W}}^+\hat{\Phi}^-}} + \bar{M}_W \, \Sigma^{\hat{\Phi}^- \hat{\Phi}^+}
- \frac{\bar{g}_2}{2} T^H \left(1 + \frac{\tilde{C}_{HD}}{4} \right).
\end{eqnarray}
The individual terms of this Ward identity, dependent on $\tilde{C}_{HD}$ expand out as
\begin{eqnarray}
k^2 \Sigma^{{\hat{\mathcal{W}}^+\hat{\Phi}^-}} &=&
k^2 \, \left[\Sigma^{{\hat{\mathcal{W}}^+ \hat{\Phi}^- }}\right]_{\tilde{C}_{HD}}^{div},\nonumber \\
&=& - 3 \, k^2 \, \bar{v}_T \, \tilde{C}_{HD} \, \frac{g_2 \,(g_2^2-g_1^2)}{128 \, \pi^2 \, \epsilon}, \\
\bar{M}_W \, \Sigma^{\hat{\Phi}^- \hat{\Phi}^+} &=&
\bar{v}_T \, \tilde{C}_{HD} \, g_2 \, \left[
3 \, k^2 \, \frac{(g_2^2-g_1^2)}{128 \, \pi^2 \, \epsilon}
+ \bar{v}_T^2 \, \frac{9 \, (g_1^2+ g_2^2)^2 - 256 \lambda^2}{1024 \pi^2 \epsilon}\right], \\
- \frac{\bar{g}_2}{2} T^H \left(1 + \frac{\tilde{C}_{HD}}{4} \right)
&=& - \frac{g_2}{2} \left[ T^H\right]_{\tilde{C}_{HD}}^{div}
- \frac{g_2}{8} \, \tilde{C}_{HD}\, \left[ T^H\right]_{SM}^{div},
\end{eqnarray}
and the Ward identity is satisfied as
\begin{eqnarray}
\tilde{C}_{HD} \, \frac{9 \, (g_1^2+ g_2^2)^2 - 256 \lambda^2}{128 \pi^2 \epsilon}
- \frac{4}{\bar{v}_T^3} \left[ T^H\right]_{\tilde{C}_{HD}}^{div}
- \frac{\tilde{C}_{HD}}{\bar{v}_T^3 } \, \left[ T^H\right]_{SM}^{div} = 0.
\end{eqnarray}
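This final cancelation can be verified symbolically as well; a minimal sympy sketch, with all terms in units of $1/(\pi^2 \epsilon)$, is
\begin{verbatim}
import sympy as sp

g1, g2, lam, xi = sp.symbols('g1 g2 lam xi', positive=True)

TH_SM = (3*g1**4 + 9*g2**4 + 96*lam**2 + 12*g2**2*lam*xi
         + g1**2*(6*g2**2 + 4*lam*xi))/256
P2    = (15*g1**4 + 30*g1**2*g2**2 + 9*g2**4 - 608*lam**2
         - 4*xi*lam*(g1**2 + 3*g2**2))/1024   # P^2_CHD

lhs = (9*(g1**2 + g2**2)**2 - 256*lam**2)/128 - 4*P2 - TH_SM
assert sp.expand(lhs) == 0
\end{verbatim}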
\subsubsection{Operator $Q_{H \Box}$}
The one and two point function dependence on $\tilde{C}_{H \Box}$ is
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} &=&
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} =
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} =
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} = 0, \\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} &=&
\tilde{C}_{H \Box} \frac{(g_1^2+ g_2^2)}{384 \, \pi^2 \, \epsilon} \left[4 \, k^2 + 9 \, \bar{v}_T^2\,(g_1^2+ g_2^2) \right],\\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} &=&
3 \, \tilde{C}_{H \Box} \, \bar{v}_T^2 \frac{(g_1^2+ g_2^2)^2}{128 \, \pi^2 \, \epsilon} , \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} &=&
- 3 \, i \,\tilde{C}_{H \Box} \,\bar{v}_T \, \sqrt{g_1^2+ g_2^2} \frac{(g_1^2+g_2^2)}{64 \, \pi^2 \, \epsilon}, \\
\left[\Sigma_T^{\hat{\mathcal{W}^\pm}\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} &=&
\tilde{C}_{H \Box} \frac{g_2^2}{384 \pi^2 \epsilon} \left[4 \, k^2 + 9 \, g_2^2 \, \bar{v}_T^2 \right], \\
\left[\Sigma_L^{\hat{\mathcal{W}}^\pm\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} &=&
\tilde{C}_{H \Box} \frac{3 \, g_2^4 \, \bar{v}_T^2}{128 \pi^2 \epsilon}, \\
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} &=&
-\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}} =
-3 \, \tilde{C}_{H \Box} \, \bar{v}_T \, \frac{g_2^3}{64 \pi^2 \epsilon},\\
\left[\Sigma^{\hat{\mathcal{\chi}}\hat{\mathcal{\chi}}}(k^2)\right]^{div}_{\tilde{C}_{H \Box}}
&=&\tilde{C}_{H \Box}\, \left[- 3 \, k^2 \, \frac{g_1^2 + g_2^2}{32 \pi^2 \epsilon} + \bar{v}_T^2 \, \frac{64 \, \lambda^2}{32 \pi^2 \epsilon} \right], \\
\left[\Sigma^{\hat{\Phi}^+\hat{\Phi}^-}(k^2)\right]^{div}_{\tilde{C}_{H \Box}}&=&
\tilde{C}_{H \Box}\, \left[- 3 \, k^2 \, \frac{g_2^2}{32 \pi^2 \epsilon} + \bar{v}_T^2 \, \frac{64 \, \lambda^2}{32 \pi^2 \epsilon} \right],\\
\left[T^H\right]_{\tilde{C}_{H \Box}}^{div} &=&
\tilde{C}_{H \Box} \, \bar{v}_T^3 \, \frac{3 \, g_1^4 + 6 \, g_1^2 \, g_2^2 + 9 \, g_2^4 + 608 \, \lambda^2
+ 4 \, \lambda \, \xi \, (g_1^2 + 3 \, g_2^2)}{256 \, \pi^2 \, \epsilon}.
\end{eqnarray}
For $Q_{H \Box}$ the identities involving the photon are trivially satisfied.
The identities without a tadpole contribution are also directly satisfied for this operator.
For the identities involving a tadpole contribution, the dependence on $\tilde{C}_{H \Box}$ combines to satisfy the
BFM Ward identity as
\begin{eqnarray}
k^2 \, \left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div} &=&
- 3 \, i \,\tilde{C}_{H \Box} \, k^2 \, \bar{v}_T \, \sqrt{g_1^2+ g_2^2} \frac{(g_1^2+g_2^2)}{64 \, \pi^2 \, \epsilon},\\
- i \bar{M}_{\mathcal{Z}} \, \left[\Sigma^{\hat{\chi}\hat{\chi}}(k^2)\right]^{div} &=&
- i \, \bar{v}_T \,
\tilde{C}_{H \Box}\,\sqrt{g_1^2+g_2^2} \, \left[- 3 \, k^2 \, \frac{g_1^2 + g_2^2}{64 \pi^2 \epsilon} + \bar{v}_T^2 \, \frac{64 \, \lambda^2}{64 \pi^2 \epsilon} \right], \\
i \, \frac{\bar{g}_Z}{2} T^H (1- \tilde{C}_{H \Box}) &=&
i \, \frac{\sqrt{g_1^2+g_2^2}}{2} \, \left[T^H\right]_{\tilde{C}_{H \Box}}^{div}
- i \, \frac{\sqrt{g_1^2+g_2^2}}{2} \, \tilde{C}_{H \Box} \, \left[T^H\right]_{SM}^{div}, \nonumber \\
&=& i \, \tilde{C}_{H \Box}\, \bar{v}_T^3 \,\sqrt{g_1^2+g_2^2} \, \frac{\lambda^2}{\pi^2 \, \epsilon},
\end{eqnarray}
and the individual terms in the corresponding charged field Ward identity, dependent on $\tilde{C}_{H \Box}$, expand out as
\begin{eqnarray}
k^2 \, \left[\Sigma^{{\hat{\mathcal{W}}^+\hat{\Phi}^-}}\right]_{\tilde{C}_{H \Box}}^{div}
&=& \tilde{C}_{H \Box} \, \bar{v}_T \, k^2 \, \frac{3 \, g_2^3}{64 \pi^2 \epsilon}, \\
\bar{M}_W \, \left[\Sigma^{\hat{\Phi}^- \hat{\Phi}^+}\right]_{\tilde{C}_{H \Box}}^{div}
&=&\tilde{C}_{H \Box}\, \bar{v}_T \, \left[- 3 \, k^2 \, \frac{g_2^3}{64 \pi^2 \epsilon} + \bar{v}_T^2 \, \frac{64 \, g_2\, \lambda^2}{64 \pi^2 \epsilon} \right], \\
- \frac{\bar{g}_2}{2} T^H \left(1 - \tilde{C}_{H \Box}\right)
&=& - \frac{g_2}{2} \left[ T^H\right]_{\tilde{C}_{H \Box}}^{div}
+ \frac{g_2}{2} \, \tilde{C}_{H \Box}\, \left[ T^H\right]_{SM}^{div},\nonumber \\
&=& - \tilde{C}_{H \Box}\, \frac{g_2 \lambda^2 \bar{v}_T^3}{\pi^2 \epsilon}.
\end{eqnarray}
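The $\lambda^2$ terms quoted in the last two expressions follow from a single difference of tadpole coefficients; a sympy sketch (in units of $\tilde{C}_{H \Box} \, \bar{v}_T^3/(\pi^2 \epsilon)$, illustrative only) is
\begin{verbatim}
import sympy as sp

g1, g2, lam, xi = sp.symbols('g1 g2 lam xi', positive=True)

TH_SM   = (3*g1**4 + 9*g2**4 + 96*lam**2 + 12*g2**2*lam*xi
           + g1**2*(6*g2**2 + 4*lam*xi))/256
TH_HBox = (3*g1**4 + 6*g1**2*g2**2 + 9*g2**4 + 608*lam**2
           + 4*lam*xi*(g1**2 + 3*g2**2))/256

# only the lambda^2 pieces survive in [T^H]_{C_HBox} - C_HBox [T^H]_SM
assert sp.expand(TH_HBox - TH_SM - 2*lam**2) == 0
\end{verbatim}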
\subsubsection{Operator $Q_{H}$}
The operator $Q_{H}$ leads to a modification of the vacuum expectation value in the SM into that of the SMEFT.
$Q_{H}$ also contributes directly to the Goldstone boson two point functions, and generates a tadpole term at one loop.
It follows from the results in Ref.~\cite{Corbett:2019cwl} that for this operator
\begin{eqnarray}
\left[\Sigma^{\hat{\Phi}^+ \hat{\Phi}^-}\right]_{\tilde{C}_{H}}^{div}
= \left[\Sigma^{\hat{\chi} \hat{\chi}}\right]_{\tilde{C}_{H}}^{div} =
\frac{1}{\bar v_T}\left[T^H\right]_{\tilde{C}_{H}}^{div},
\end{eqnarray}
and we find this relationship holds as expected, with
\begin{eqnarray}
\left[\Sigma^{\hat{\Phi}^+ \hat{\Phi}^-}\right]_{\tilde{C}_{H}}^{div} =
- \frac{3 \, \tilde{C}_H \, \bar{v}_T^2 \, (64 \lambda + (g_1^2 + 3 g_2^2)\, \xi)}{128 \pi^2 \, \epsilon}.
\end{eqnarray}
\subsubsection{Operator $Q_{W}$}
The two point function dependence on $\tilde{C}_{W}$ is entirely transverse and is given by
\begin{eqnarray}
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{W}} &=&
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{W}}
= \left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{W}}
= \left[\Sigma_L^{\hat{\mathcal{W}}^\pm\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{W}} = 0,\\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{W}} &=&
- \frac{3 \,\tilde C_W \, g_1^2 \, g_2}{8 \pi^2\epsilon (g_1^2+g_2^2)} \left[ 3 \, g_2^2 - 2 \, \frac{k^2}{\bar{v}_T^2}\right] \, k^2, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{W}} &=&
- \frac{3 \,\tilde C_W \, g_1 \, g_2^2}{8 \pi^2\epsilon (g_1^2+g_2^2)} \left[ 3 \, g_2^2 - 2 \, \frac{k^2}{\bar{v}_T^2}\right]\, k^2,\\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{W}} &=&
- \frac{3 \,\tilde C_W \, g_2^3}{8 \pi^2 \epsilon (g_1^2+g_2^2)} \left[ 3 \, g_2^2 - 2 \, \frac{k^2}{\bar{v}_T^2}\right] \, k^2,\\
\left[\Sigma_T^{\hat{\mathcal{W}^\pm}\hat{\mathcal{W}}^\mp}(k^2)\right]^{div}_{\tilde{C}_{W}} &=&
- \frac{3 \,\tilde C_W \, g_2}{8 \pi^2 \epsilon} \left[ 3 \, g_2^2 - 2 \, \frac{k^2}{\bar{v}_T^2}\right] \, k^2, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{W}} &=&
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{W}} =
\left[\Sigma^{\hat{\mathcal{\chi}}\hat{\mathcal{\chi}}}(k^2)\right]^{div}_{\tilde{C}_{W}}
= \left[\Sigma^{\hat{\Phi}^+\hat{\Phi}^-}(k^2)\right]^{div}_{\tilde{C}_{W}}= 0, \\
\left[T^H\right]_{\tilde{C}_{W}}^{div} &=& 0.
\end{eqnarray}
As the contributions from this operator arise from field strengths, which limits the possible helicity connections, the results are purely transverse and
proportional to $k^2$. The overall coupling dependence also directly follows from rotating the
fields to mass eigenstates. For this operator, the SMEFT Ward identities are directly satisfied.
\subsection{SMEFT results; Fermion loops}
\subsubsection{Operator $Q_{HB}$}
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
\tilde{C}_{HB} \, \frac{g_1^2 \, g_2^4}{(g_1^2+ g_2^2)^2} \, \frac{64}{9} \, \frac{k^2}{16 \pi^2 \epsilon} \, n, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
-\tilde{C}_{HB} \, \frac{g_1 \, g_2}{(g_1^2+ g_2^2)^2} \, \frac{4}{9} \, \left(5 g_1^4 + 18 g_1^2 \, g_2^2 -3 g_2^4 \right) \, \frac{k^2}{16 \pi^2 \epsilon} \, n, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&0,
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
- \tilde{C}_{HB} \sum_\psi \, N_C^\psi \, m_\psi^2 \frac{g_1^2}{16 \pi^2 \epsilon}
+ \left(\frac{8\,\tilde{C}_{HB}}{9} \frac{k^2 \,n}{16 \pi^2 \epsilon}\right) \frac{g_1^2 \,(5 g_1^4 + 10 g_1^2 \, g_2^2 -3 g_2^4)}{(g_1^2 + g_2^2)^2}, \nonumber \\ \\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
- \tilde{C}_{HB} \, \sum_\psi \frac{N_C^\psi \, m_\psi^2 \, g_1^2}{16 \pi^2 \epsilon}, \\
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{HB}} =
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{\tilde{C}_{HB}}=
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{HB}} =
0,\nonumber \\ \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
i \, \tilde{C}_{HB} \frac{g_1^2 \, \bar{v}_T}{32 \pi^2 \epsilon \,\sqrt{g_1^2+ g_2^2}} \sum_\psi
N_C^\psi Y_\psi^2, \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HB}} &=&
\left[T^H\right]_{\tilde{C}_{HB}}^{div} = 0.
\end{eqnarray}
Most of the BFM Ward identities are trivially satisfied, as these contributions amount to rescalings of the SM fermion loop
results for the two point functions. An interesting case is the
$\mathcal{Z}$ Ward identity, where the geometric $\mathcal{Z}$ mass dependence on this Wilson coefficient plays a role:
\begin{eqnarray}
\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}- i \bar{M}_{\mathcal{Z}} \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}
&=& \left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}\right]_{\tilde{C}_{HB}}^{div}
- \frac{i \, \tilde{C}_{HB} \, g_1^2 \, \bar{v}_T}{2 \sqrt{g_1^2+g_2^2}}
\left[ \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}\right]_{SM}^{div}
- i \frac{\sqrt{g_1^2+g_2^2} \, \bar{v}_T}{2} \left[ \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}\right]_{\tilde{C}_{HB}}^{div}, \nonumber \\
&=&0.
\end{eqnarray}
\subsubsection{Operator $Q_{HW}$}
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
\tilde{C}_{HW} \, \frac{g_1^4 \, g_2^2}{(g_1^2+ g_2^2)^2} \, \frac{64}{9} \, \frac{k^2}{16 \pi^2 \epsilon} \, n, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
-\tilde{C}_{HW} \, \frac{g_1 \, g_2}{(g_1^2+ g_2^2)^2} \, \frac{4}{9} \, \left(5 g_1^4 -14 g_1^2 \, g_2^2 -3 g_2^4 \right) \, \frac{k^2}{16 \pi^2 \epsilon} \, n, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&0,
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
- \tilde{C}_{HW} \sum_\psi \, N_C^\psi \, m_\psi^2 \frac{g_2^2}{16 \pi^2 \epsilon}
- \left(\frac{8\,\tilde{C}_{HW}}{9} \frac{k^2 \,n}{16 \pi^2 \epsilon}\right) \frac{g_2^2 \, (5 g_1^4 -6 g_1^2 \, g_2^2 -3 g_2^4)}{(g_1^2 + g_2^2)^2}, \nonumber \\ \\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
- \tilde{C}_{HW} \, \sum_\psi \frac{N_C^\psi \, m_\psi^2 \, g_2^2}{16 \pi^2 \epsilon}, \\
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
- \tilde{C}_{HW} \sum_\psi \, N_C^\psi \, m_\psi^2 \frac{g_2^2}{16 \pi^2 \epsilon}
+ \left(\frac{8\,\tilde{C}_{HW}}{3}g_2^2 \frac{k^2 \,n}{16 \pi^2 \epsilon}\right) ,\nonumber \\
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
- \tilde{C}_{HW} \, \sum_\psi \frac{N_C^\psi \, m_\psi^2 \, g_2^2}{16 \pi^2 \epsilon}, \\
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{\tilde{C}_{HW}}&=& 0,\\
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
- \left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}_{HW}} =
\tilde{C}_{HW} \, \sum_\psi \, N_C^\psi \, Y_\psi^2 \, \bar{v}_T\, \frac{g_2}{32 \, \pi^2 \, \epsilon}, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
i \, \tilde{C}_{HW} \, \sum_\psi \, N_C^\psi \, Y_\psi^2 \, \bar{v}_T\, \frac{g_2^2}{32 \, \pi^2 \, \epsilon \, \sqrt{g_1^2+ g_2^2}}, \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HW}} &=&
\left[T^H\right]_{\tilde{C}_{HW}}^{div} = 0.
\end{eqnarray}
The BFM photon Ward identities are trivially satisfied.
The remaining Ward identities we examine work out as
\begin{eqnarray}
\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}- i \bar{M}_{\mathcal{Z}} \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}
&=& \left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}\right]_{\tilde{C}_{HW}}^{div}
- \frac{i \, \tilde{C}_{HW} \, {g_2^2} \, \bar{v}_T}{2 \sqrt{g_1^2+g_2^2}}
\left[ \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}\right]_{SM}^{div} -
\frac{i \sqrt{g_1^2+g_2^2} \bar{v}_T}{2}
\left[ \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}\right]_{\tilde{C}_{HW}}^{div}, \nonumber \\
&=& 0,\\
k^2 \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}- i \bar{M}_{\mathcal{Z}} \, \Sigma^{\hat{\chi} \hat{\chi}} + i \, \frac{\bar{g}_Z}{2} T^H
&=&
i \, \tilde{C}_{HW} \,\frac{g_2^2 \, \bar{v}_T}{2 \sqrt{g_1^2+g_2^2}} \left[
k^2 \sum_\psi \frac{N_C^\psi Y_\psi^2}{16 \pi^2 \epsilon}
- \left[ \Sigma^{\hat{\chi} \hat{\chi}}\right]_{SM}^{div}
+ {\frac{1}{\bar v_T}}\left[T^H\right]_{SM}^{div} \right], \nonumber \\
&=& 0, \\
\Sigma_L^{\hat{\mathcal{W}^\pm} \hat{\mathcal{W}}^\mp} \pm \bar{M}_W \Sigma^{\hat{\mathcal{W}}^\mp \hat{\Phi}^\pm}
&=& \left[\Sigma_L^{\hat{\mathcal{W}^\pm} \hat{\mathcal{W}}^\mp}\right]_{\tilde{C}_{HW}}^{div}
\pm \frac{g_2 \, \bar{v}_T}{2} \left(\left[\Sigma^{ \hat{\mathcal{W}}^\mp \hat{\Phi}^\pm}\right]_{\tilde{C}_{HW}}^{div}
\pm \tilde{C}_{HW} \,\left[\Sigma^{\hat{\mathcal{W}}^\mp\hat{\Phi}^\pm}\right]_{SM}^{div} \right),\nonumber \\
&=& 0, \\
k^2 \Sigma^{{\hat{\mathcal{W}}^\pm \hat{\Phi}^\mp}} \!\!\!\!\ \pm \bar{M}_W \, \Sigma^{\hat{\Phi}^\mp \hat{\Phi}^\pm}
\!\!\!\!\! \mp \frac{\bar{g}_2}{2} T^H
&=& k^2 \left[\Sigma^{{\hat{\mathcal{W}}^\pm \hat{\Phi}^\mp}} \right]_{\tilde{C}_{HW}}^{div}
\!\!\!\!\! + \frac{\tilde{C}_{HW} \,g_2}{2} \left(
\pm \bar{v}_T \, \left[\Sigma^{\hat{\Phi}^\mp \hat{\Phi}^\pm}\right]_{SM}^{div}
\mp \left[T^H\right]_{SM}^{div} \right),\nonumber \\
&=& 0.
\end{eqnarray}
\subsubsection{Operator $Q_{HWB}$}
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
- \tilde{C}_{HWB} \, \frac{g_1^3 \, g_2^3}{(g_1^2+ g_2^2)^2} \, \frac{64}{9} \, \frac{k^2}{16 \pi^2 \epsilon} \, n, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&0, \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
\tilde{C}_{HWB} \, \frac{g_1^2 \, g_2^2}{(g_1^2+ g_2^2)^2} \, \frac{{32}}{9} \,
\left(g_1^2 - g_2^2 \right) \, \frac{k^2}{16 \pi^2 \epsilon} \, n, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&0,
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
- \tilde{C}_{HWB} \sum_\psi \, N_C^\psi \, m_\psi^2 \frac{g_1 g_2}{16 \pi^2 \epsilon}
+ \left(\frac{64\,\tilde{C}_{HWB}}{9} \frac{k^2 \,n}{16 \pi^2 \epsilon}\right) \frac{g_1^3 g_2^3}{(g_1^2 + g_2^2)^2}, \\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
- \tilde{C}_{HWB} \, \sum_\psi \frac{N_C^\psi \, m_\psi^2 \, g_1 g_2}{16 \pi^2 \epsilon}, \\
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{HWB}} = 0, \\
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{\tilde{C}_{HWB}}&=&
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} =0, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
i \, \tilde{C}_{HWB} \, \sum_\psi \, N_C^\psi \, Y_\psi^2 \, \bar{v}_T\, \frac{g_1 g_2}{32 \, \pi^2 \, \epsilon \, \sqrt{g_1^2+ g_2^2}}, \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HWB}} &=&
\left[T^H\right]_{\tilde{C}_{HWB}}^{div} = 0.
\end{eqnarray}
The BFM Ward identities involving the photon and charged fields are trivially satisfied.
The remaining identities of interest work out as
\begin{eqnarray}
\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}- i \bar{M}_{\mathcal{Z}} \Sigma^{\hat{\mathcal{Z}} \hat{\chi}}
&=& \left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}\right]_{\tilde{C}_{HWB}}^{div}
\!\!\!\!\! - i \frac{\sqrt{g_1^2+ g_2^2} \bar{v}_T}{2}\left[\Sigma^{\hat{\mathcal{Z}} \hat{\chi}}\right]_{\tilde{C}_{HWB}}^{div}
\!\!\!\!\! - i \, \tilde{C}_{HWB} \frac{g_1 \, g_2 \bar{v}_T}{2 \sqrt{g_1^2+ g_2^2}}\left[\Sigma^{\hat{\mathcal{Z}} \hat{\chi}}\right]_{SM}^{div}, \nonumber \\
&=& 0, \\
k^2 \Sigma^{\hat{\mathcal{Z}} \hat{\chi}} \!\!- i \bar{M}_{\mathcal{Z}} \, \Sigma^{\hat{\chi} \hat{\chi}}\!\! + i \, \frac{\bar{g}_Z}{2} T^H
&=& k^2 \left[\Sigma^{\hat{\mathcal{Z}} \hat{\chi}}\right]_{\tilde{C}_{HWB}}^{div}
\!\!\!\!\!- i \, \tilde{C}_{HWB} \frac{g_1 \, g_2}{2 \sqrt{g_1^2+ g_2^2}}
\left[\bar{v}_T \left[\Sigma^{\hat{\chi} \hat{\chi}}\right]_{SM}^{div} - \left[T^H\right]_{SM}^{div} \right],\nonumber \\
&=& 0.
\end{eqnarray}
\subsubsection{Operator $Q_{HD}$}
For this operator the non-zero divergent results for the fermion loops are
\begin{eqnarray}
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=&
i \, \tilde{C}_{HD} \bar{v}_T\frac{\sqrt{g_1^2+ g_2^2}}{128 \pi^2 \epsilon} \, \sum_\psi \, N_C^\psi \, Y_\psi^2,\\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{HD}} &=&
\frac{\tilde{C}_{HD}\bar{v}_T^2}{32 \pi^2 \epsilon}\, \sum_\psi \, N_C^\psi \, Y_\psi^4 - k^2\, \frac{\tilde{C}_{HD}}{32 \pi^2 \epsilon}
\, \sum_\psi \, N_C^\psi \, Y_\psi^2,\\
\left[T^H\right]_{\tilde{C}_{HD}}^{div} &=& -\frac{\tilde{C}_{HD}}{4}
\left[T^H\right]_{SM}^{div}.
\end{eqnarray}
Only the Ward identities involving the tadpole contributions are non-trivial for this Wilson coefficient,
and these results combine with the SM divergent terms from fermion loops to exactly satisfy the Ward identities.
\subsubsection{Operator $Q_{H \Box}$}
The fermion loops are simple for this operator: only the tadpole is non-vanishing when considering divergent terms
at one loop,
\begin{eqnarray}
\left[T^H\right]_{\tilde{C}_{H \Box}}^{div} = \tilde{C}_{H \Box}\left[T^H\right]_{SM}^{div}
\end{eqnarray}
so that the $\tilde{C}_{H \Box}$ dependence of the tadpole combination entering the Ward identities vanishes,
\begin{eqnarray}
\left[T^H \left(1 - \tilde{C}_{H \Box} \right)\right]^{div}_{\tilde{C}_{H \Box}} = 0.
\end{eqnarray}
\subsubsection{Class 5 operators: $Q_{\psi H}$}
Class five operators (in the Warsaw basis, see Table \ref{op59}) can act as mass insertions and also lead to direct vertex corrections
emitting Goldstone bosons. In addition, a four point interaction
not present in the SM contributes
to the two point functions through a closed fermion loop, as shown in Fig.~\ref{fig:twopoints}.
We define the mass eigenstate Wilson coefficients
\begin{eqnarray}
\tilde{C}'_{\substack{\psi H\\pr}} = \mathcal{U}^\dagger(\psi,L) \, \tilde{C}_{\substack{\psi H}} \, \mathcal{U}(\psi,R),
\end{eqnarray}
with the rotation between mass (primed) and weak eigenstates
\begin{eqnarray}
\psi_{L/R} = \mathcal{U}(\psi,L/R) \psi'_{L/R}
\end{eqnarray}
where the fermion sum is over $\psi = \{u,d,\ell\}$ and $p,r$ run over mass eigenstate
flavors. The contributions to the one and two point functions are
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} &=&
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} =
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} =
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} =0, \\
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} &=&\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{\psi H}}=
\sum_{\psi} \frac{N_C^\psi \, \bar{v}_T^2 \, g_2^2}{64 \pi^2 \epsilon} \, Y_{\substack{\psi\\rr}}\, \tilde{C}'_{\substack{\psi H\\rr}},\\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} &=&
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} =
\sum_{\psi} \frac{N_C^\psi \, \bar{v}_T^2 \, (g_1^2 + g_2^2)}{64 \pi^2 \epsilon} \, Y_{\substack{\psi\\pp}}\, \tilde{C}'_{\substack{\psi H\\pp}}, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} &=&
- i \sum_{\psi} \frac{N_C^\psi \, \bar{v}_T \, \sqrt{g_1^2 + g_2^2}}{32 \pi^2 \epsilon} \, Y_{\substack{\psi\\pp}}\, \tilde{C}'_{\substack{\psi H\\pp}}, \\
-\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} &=&
\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} =
\sum_{\psi} \frac{N_C^\psi \, \bar{v}_T \, g_2}{32 \pi^2 \epsilon} \, Y_{\substack{\psi\\rr}}\, \tilde{C}'_{\substack{\psi H\\rr}}, \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} &=&
-\sum_{\psi} \frac{k^2 \, N_C^\psi}{16\pi^2 \epsilon} \, Y_{\substack{\psi\\pp}}\, \tilde{C}'_{\substack{\psi H\\pp}}
+ \sum_{\psi} \frac{3 \, N_C^\psi \, \bar{v}_T^2}{16\pi^2 \epsilon} \, Y_{\substack{\psi\\pp}}^3\, \tilde{C}'_{\substack{\psi H\\pp}},\\
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{\tilde{C}_{\psi H}} &=&
-\sum_{\psi} \frac{k^2 \, N_C^\psi}{16\pi^2 \epsilon} \, Y_{\substack{\psi\\pp}}\, \tilde{C}'_{\substack{\psi H\\pp}}
+ \sum_{\psi} \frac{3 \, N_C^\psi \, \bar{v}_T^2}{16\pi^2 \epsilon} \, Y_{\substack{\psi\\pp}}^3\, \tilde{C}'_{\substack{\psi H\\pp}},\\
\left[T^H\right]_{\tilde{C}_{\psi H}}^{div} &=&
\sum_{\psi} \frac{3 \, N_C^\psi \, \bar{v}_T^3}{16\pi^2 \epsilon} \, Y_{\substack{\psi\\pp}}^3\, \tilde{C}'_{\substack{\psi H\\pp}}.
\end{eqnarray}
The Ward identities are satisfied in the same manner as those in the SM involving fermion loops.
\subsubsection{Class 6 operators: $Q_{eB}$, $Q_{dB}$, $Q_{uB}$}
Class six operators (see Table \ref{op59}) only act as vertex corrections.
We define the mass eigenstate Wilson coefficients
\begin{eqnarray}
\tilde{C}'_{\substack{\psi B\\pr}} = \mathcal{U}^\dagger(\psi,L) \, \tilde{C}_{\substack{\psi B}} \, \mathcal{U}(\psi,R),
\end{eqnarray}
and find
\begin{eqnarray}
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}} &=&
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}}=
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}}
= \left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}}=0,\nonumber \\ \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}} &=&
-\frac{g_1 \, g_2^2 \, k^2}{(g_1^2 + g_2^2) 4 \pi^2 \epsilon}
\left[Q_e\,Y_{\substack{e\\pp}} \, \tilde{C}'_{\substack{e B\\ pp}}+ Q_d\,N_c\,Y_{\substack{d\\pp}} \, \tilde{C}'_{\substack{d B\\ pp}}
+ Q_u \,N_c\, Y_{\substack{u\\pp}} \, \tilde{C}'_{\substack{u B\\ pp}} + h.c.\right], \nonumber \\ \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}} &=&
\frac{g_2 \, k^2}{(g_1^2 + g_2^2) 32 \pi^2 \epsilon }
\left[(g_2^2-7 g_1^2) Y_{\substack{e\\pp}} \, \tilde{C}'_{\substack{e B\\ pp}}
+ (13 \, g_1^2 - 3 \, g_2^2) Y_{\substack{u\\pp}} \, \tilde{C}'_{\substack{u B\\ pp}} + h.c.\right]\nonumber \\
&+&\frac{g_2 \, k^2}{(g_1^2 + g_2^2) 32 \pi^2 \epsilon}
\left[(3 \, g_2^2 - 5 \, g_1^2) Y_{\substack{d\\pp}} \, \tilde{C}'_{\substack{d B\\ pp}} + h.c.\right],\\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}} &=&
\frac{g_1 \, k^2}{(g_1^2 + g_2^2) 16 \pi^2 \epsilon }
\left[(3 \, g_1^2 - g_2^2) Y_{\substack{e\\pp}} \, \tilde{C}'_{\substack{e B\\ pp}}
+ \frac{(3 \, g_2^2 - 5 \, g_1^2) \, N_C}{3} Y_{\substack{u\\pp}} \, \tilde{C}'_{\substack{u B\\ pp}} + h.c. \right]\nonumber \\
&+&\frac{g_1 \, k^2}{(g_1^2 + g_2^2) 16 \pi^2 \epsilon}
\left[\frac{(g_1^2 - 3 \, g_2^2) \, N_C }{3}Y_{\substack{d\\pp}} \, \tilde{C}'_{\substack{d B\\ pp}} + h.c.\right],\\
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}} &=&
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}} =
-\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}}=
\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}} = 0, \nonumber \\ \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}} &=&
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{\tilde{C}'_{\psi B}} =
\left[T^H\right]_{\tilde{C}'_{\psi B}}^{div} =0.
\end{eqnarray}
Here $Q_\psi =\left(-1,-1/3,2/3\right)$ for $\psi =\left(e,d,u\right)$.
As the non-vanishing divergent results are purely transverse, the SMEFT Ward identities are trivially satisfied.
A subset of these results can be checked against the literature, and they do agree with Ref.~\cite{Jenkins:2017dyc}.
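This is immediate once one recalls the decomposition of the gauge two point functions into transverse and longitudinal parts, which we quote here schematically (in a common convention, for illustration):
\begin{equation*}
\Sigma^{\mathcal{V}\mathcal{V}'}_{\mu\nu}(k)=\left(g_{\mu\nu}-\frac{k_{\mu}k_{\nu}}{k^2}\right)\Sigma_T^{\mathcal{V}\mathcal{V}'}(k^2)+\frac{k_{\mu}k_{\nu}}{k^2}\,\Sigma_L^{\mathcal{V}\mathcal{V}'}(k^2).
\end{equation*}
Contracting with $k^{\mu}$ removes $\Sigma_T$ entirely, so identities built from $k^{\mu}\Sigma_{\mu\nu}$ are insensitive to purely transverse divergences.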
\subsubsection{Class 6 operators: $Q_{eW}$, $Q_{dW}$, $Q_{uW}$}
We define mass eigenstate Wilson coefficients in the same manner for this operator class and find
\begin{eqnarray}
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}} &=&
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}}=
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}}
= \left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}}=0,\nonumber \\ \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}} &=&
\frac{g_1^2 \, g_2 \, k^2}{(g_1^2 + g_2^2) 4 \pi^2 \epsilon}
\left[Q_e \, Y_{\substack{e\\pp}} \, \tilde{C}'_{\substack{e W\\ pp}} + N_c \, Q_d Y_{\substack{d\\pp}} \, \tilde{C}'_{\substack{d W\\ pp}}
+ N_c \, Q_u \, Y_{\substack{u\\pp}} \, \tilde{C}'_{\substack{u W\\ pp}} + h.c.\right], \nonumber \\ \\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}} &=&
\frac{g_1 \, k^2}{(g_1^2 + g_2^2) 32 \pi^2 \epsilon }
\left[(3 g_1^2- 5 g_2^2) Y_{\substack{e\\pp}} \, \tilde{C}'_{\substack{e W\\ pp}}
+ (5 \, g_1^2 - 11 \, g_2^2) Y_{\substack{u\\pp}} \, \tilde{C}'_{\substack{u W\\ pp}} + h.c. \right]\nonumber \\
&+&\frac{g_2 \, k^2}{(g_1^2 + g_2^2) 32 \pi^2 \epsilon}
\left[(g_1^2 - 7 \, g_2^2) Y_{\substack{d\\pp}} \, \tilde{C}'_{\substack{d W\\ pp}} + h.c.\right],\\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}} &=&
\frac{g_2 \, k^2}{(g_1^2 + g_2^2) 16 \pi^2 \epsilon }
\left[(3 g_1^2 - g_2^2) Y_{\substack{e\\pp}} \, \tilde{C}'_{\substack{e W\\ pp}}
+ (5 \, g_1^2 - 3 \, g_2^2) Y_{\substack{u\\pp}} \, \tilde{C}'_{\substack{u W\\ pp}} + h.c.\right]\nonumber \\
&+&\frac{g_2 \, k^2}{(g_1^2 + g_2^2) 16 \pi^2 \epsilon}
\left[(g_1^2 - 3 \, g_2^2) Y_{\substack{d\\pp}} \, \tilde{C}'_{\substack{d W\\ pp}}+ h.c. \right],
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}}
&=&-\frac{g_2\, k^2}{(g_1^2+g_2^2)\,16\,\pi^2\,\epsilon}\left[U_{\substack{PMNS\\pr}}\,\tilde{C}'_{\substack{e W\\ rq}}\,Y_{\substack{e\\pq}}+h.c.\right] \\
&&-\frac{g_2\, k^2}{(g_1^2+g_2^2)\,16\,\pi^2\,\epsilon}\left[N_C\,V_{\substack{CKM\\pr}}\,\tilde{C}'_{\substack{u W\\ rq}}\,Y_{\substack{u\\pq}}+N_C\,V_{\substack{CKM\\pr}}\,\tilde{C}'_{\substack{d W\\ rq}}\,Y_{\substack{d\\pq}}+h.c.\right],\nonumber \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}} &=&
-\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}}=
\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}} = 0, \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}} &=&
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{\tilde{C}'_{\psi W}} =
\left[T^H\right]_{\tilde{C}'_{\psi W}}^{div} =0.
\end{eqnarray}
Once again, the non-vanishing divergent results are purely transverse, and the SMEFT Ward identities are trivially satisfied.
\subsubsection{Class 7 operators: $Q_{He}$, $Q_{Hu}$, $Q_{Hd}$, $Q_{Hud}$}
For this operator class, we define the mass eigenstate Wilson coefficients
\begin{eqnarray}
\tilde{C}'_{\substack{H \psi_R\\pr}} = \mathcal{U}^\dagger(\psi,R) \, \tilde{C}_{\substack{H \psi_R}} \, \mathcal{U}(\psi,R),
\end{eqnarray}
and note that only the flavour-diagonal contributions $r=p$ enter at one loop, due to the absence of flavour-changing neutral currents
in the tree-level couplings of the SM. Directly we find
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} = 0,\\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&
-\frac{g_1 \, g_2 \, k^2}{24 \pi^2 \epsilon}\left[Q_e \, \tilde{C}'_{\substack{H e\\pp}} + N_c\,Q_u \, \tilde{C}'_{\substack{H u\\pp}}
+N_C\,Q_d\, \tilde{C}'_{\substack{H d\\pp}} \right],\\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=& 0, \\
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&
- \tilde{C}'_{\substack{Hud\\ {pr}}} V^\star_{\substack{CKM\\pr}} \frac{g_2^2 \, N_C \, \bar{v}_T^2 \, Y_{\substack{d\\rr}} \, Y_{\substack{u\\pp}}}{64 \pi^2 \epsilon} + h.c.,\\
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&
- \tilde{C}'_{\substack{Hud\\ {pr}}} V^\star_{\substack{CKM\\pr}} \frac{g_2^2 \, N_C \, \bar{v}_T^2 \, Y_{\substack{d\\rr}} \, Y_{\substack{u\\pp}}}{64 \pi^2 \epsilon} + h.c.,\\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&
\frac{g_1^2 \, k^2}{12 \pi^2 \epsilon}\left[Q_e\tilde{C}'_{\substack{H e\\pp}} +N_C\,Q_u \, \tilde{C}'_{\substack{H u\\pp}} +
Q_d\, N_C \tilde{C}'_{\substack{H d\\pp}} \right] \\
&+&\frac{(g_1^2 + g_2^2)\, \bar{v}_T^2}{16 \pi^2 \epsilon} \left[\tilde{C}'_{\substack{H e\\pp}}\,Y^2_{\substack{e\\pp}} - N_C \, \tilde{C}'_{\substack{H u\\pp}}
\,Y^2_{\substack{u\\pp}} + N_C \, \tilde{C}'_{\substack{H d\\pp}} \,Y^2_{\substack{d\\pp}} \right]
,\\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&
\frac{(g_1^2 + g_2^2)\, \bar{v}_T^2}{16 \pi^2 \epsilon} \left[\tilde{C}'_{\substack{H e\\pp}}\,Y^2_{\substack{e\\pp}} - N_C \, \tilde{C}'_{\substack{H u\\pp}}
\,Y^2_{\substack{u\\pp}} + N_C \, \tilde{C}'_{\substack{H d\\pp}} \,Y^2_{\substack{d\\pp}} \right],
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&
-i \frac{\sqrt{g_1^2 + g_2^2}\, \bar{v}_T}{8 \pi^2 \epsilon}\left[\tilde{C}'_{\substack{H e\\pp}}\,Y^2_{\substack{e\\pp}} - N_C \, \tilde{C}'_{\substack{H u\\pp}}
\,Y^2_{\substack{u\\pp}} + N_C \, \tilde{C}'_{\substack{H d\\pp}} \,Y^2_{\substack{d\\pp}} \right], \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&
-\left[\tilde{C}'_{\substack{H e\\pp}}\,Y^2_{\substack{e\\pp}} - N_C \, \tilde{C}'_{\substack{H u\\pp}}
\,Y^2_{\substack{u\\pp}} + N_C \, \tilde{C}'_{\substack{H d\\pp}} \,Y^2_{\substack{d\\pp}} \right]
\frac{k^2}{4 \pi^2 \epsilon}
,\\
\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&- \left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}}=
- \tilde{C}'_{\substack{Hud\\ {pr}}} V^\star_{\substack{CKM\\pr}} \frac{g_2 \, N_C \, \bar{v}_T \, Y_{\substack{d\\rr}} \, Y_{\substack{u\\pp}}}{32 \pi^2 \epsilon} + h.c.,\nonumber \\ \\
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=&
\tilde{C}'_{\substack{Hud\\ {pr}}} V^\star_{\substack{CKM\\pr}} \frac{k^2 \, N_C \, Y_{\substack{d\\rr}} \, Y_{\substack{u\\pp}}}{16 \pi^2 \epsilon} + h.c.,\\
\left[T^H\right]_{\tilde{C}_{H \psi_R}}^{div} &=& 0.
\end{eqnarray}
These results directly satisfy the corresponding SMEFT Ward identities.
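For instance, with the shorthands $\bar{M}_Z:=\bar{v}_T\sqrt{g_1^2+g_2^2}/2$ and $\bar{M}_W:=g_2\bar{v}_T/2$ (used here only for bookkeeping), the expressions above obey
\begin{eqnarray*}
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=& i\,\bar{M}_Z\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}}, \\
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}} &=& \bar{M}_W\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}_{H \psi_R}},
\end{eqnarray*}
which is the alignment of longitudinal and Goldstone mixing terms that the identities require.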
\subsubsection{Class 7 operators: $Q_{H\ell}^{(1)}$, $Q_{Hq}^{(1)}$}
For the left handed fermion operators in this class, we similarly define the mass eigenstate Wilson coefficients
\begin{eqnarray}
\tilde{C}'^{(1,3)}_{\substack{H \psi_L\\pr}} = \mathcal{U}^\dagger(\psi,L) \, \tilde{C}^{(1,3)}_{\substack{H \psi_L}} \, \mathcal{U}(\psi,L).
\end{eqnarray}
Again, only the flavour-diagonal contributions $r=p$ enter at one loop, due to the absence of flavour-changing neutral currents
in the tree-level couplings of the SM. We find
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} &=&
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} = 0,\\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} &=&
\left[\tilde{C}'^{(1)}_{\substack{H \ell \\pp}}-\frac{N_c}{3} \tilde{C}'^{(1)}_{\substack{H q \\pp}}\right]
\frac{g_1 \, g_2 \, k^2}{48 \pi^2 \epsilon}, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} &=& 0, \\
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} &=&
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} = 0,
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} &=&
\left[\frac{N_C}{3}\tilde{C}'^{(1)}_{\substack{H q \\pp}}- \tilde{C}'^{(1)}_{\substack{H \ell \\pp}}\right]
\frac{g_1^2 \, k^2}{24 \pi^2 \epsilon}
- \frac{(g_1^2 + g_2^2) \, \bar{v}_T^2}{32 \pi^2 \epsilon}\left[N_C \, \tilde{C}'^{(1)}_{\substack{H q \\pp}}(Y_{\substack{d\\pp}}^2-Y_{\substack{u\\pp}}^2) + \tilde{C}'^{(1)}_{\substack{H \ell \\pp}}Y_{\substack{e\\pp}}^2\right], \nonumber \\\\
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} &=&
- \frac{(g_1^2 + g_2^2) \, \bar{v}_T^2}{32 \pi^2 \epsilon}\left[N_C \, \tilde{C}'^{(1)}_{\substack{H q \\pp}}(Y_{\substack{d\\pp}}^2-Y_{\substack{u\\pp}}^2) + \tilde{C}'^{(1)}_{\substack{H \ell \\pp}}Y_{\substack{e\\pp}}^2\right], \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} &=&
i \left[N_C \, \tilde{C}'^{(1)}_{\substack{H q \\pp}}(Y_{\substack{d\\pp}}^2-Y_{\substack{u\\pp}}^2)
+ \tilde{C}'^{(1)}_{\substack{H \ell \\pp}}Y_{\substack{e\\pp}}^2\right]
\frac{\sqrt{g_1^2 + g_2^2} \, \bar{v}_T}{16 \pi^2 \epsilon}, \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} &=&
\left[N_C \, \tilde{C}'^{(1)}_{\substack{H q\\pp}}
\,(Y^2_{\substack{d\\pp}} - Y^2_{\substack{u\\pp}}) + \tilde{C}'^{(1)}_{\substack{H \ell\\pp}}\,Y^2_{\substack{e\\pp}}\right]
\frac{k^2}{8 \pi^2 \epsilon}
,\\
\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} &=&
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}} =
\left[T^H\right]_{\tilde{C}'^{(1)}_{\substack{H \psi_L}}}^{div} = 0.
\end{eqnarray}
Again the SMEFT Ward identities are directly satisfied by these expressions.
\subsubsection{Class 7 operators: $Q_{H\ell}^{(3)}$, $Q_{Hq}^{(3)}$}
In this case one finds
\begin{eqnarray}
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{A}}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} = 0,\\
\left[\Sigma_T^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
\left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pp}} + \tilde{C}'^{(3)}_{\substack{H \ell \\pp}}\right]
\frac{g_1 \, g_2 \, k^2}{48 \pi^2 \epsilon}, \\
\left[\Sigma_L^{\hat{\mathcal{A}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=& 0, \\
\left[\Sigma_T^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
\left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pr}} \, V^\star_{\substack{CKM\\ rp}} + \tilde{C}'^{(3)}_{\substack{H \ell \\pr}}
U_{\substack{PMNS\\ rp}}^\star + h.c. \right] \frac{g_2^2 \, k^2}{48 \pi^2 \epsilon} \nonumber \\
&-& \frac{g_2^2 \, \bar{v}_T^2}{64 \, \pi^2 \, \epsilon}\left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pr}} \, V^\star_{\substack{CKM\\ rp}}(Y^2_{\substack{u\\ rr}}+Y^2_{\substack{d\\pp}}) + \tilde{C}'^{(3)}_{\substack{H \ell \\pr}}
U_{\substack{PMNS\\ rp}}^\star \,Y^2_{\substack{e\\ rr}} + h.c. \right], \nonumber \\ \\
\left[\Sigma_L^{\hat{\mathcal{W}}^+\hat{\mathcal{W}}^-}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
-\frac{g_2^2 \, \bar{v}_T^2}{64 \, \pi^2 \, \epsilon}\left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pr}} \, V^\star_{\substack{CKM\\ rp}}(Y^2_{\substack{u\\ rr}}+Y^2_{\substack{d\\ pp}}) + \tilde{C}'^{(3)}_{\substack{H \ell \\pr}}
U_{\substack{PMNS\\rp}}^\star \,Y^2_{\substack{e\\ rr}} + h.c. \right] , \nonumber \\ \\
\left[\Sigma_T^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
\left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pp}} + \tilde{C}'^{(3)}_{\substack{H \ell \\pp}}
\right] \frac{g_2^2 \, k^2}{24 \pi^2 \epsilon} \nonumber\\
&-&\left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pp}} \,(Y^2_{\substack{u\\ pp}}+Y^2_{\substack{d\\ pp}}) + \tilde{C}'^{(3)}_{\substack{H \ell \\pp}}
\,Y^2_{\substack{e\\ pp}} \right] \frac{(g_1^2 + g_2^2) \, \bar{v}_T^2}{32 \, \pi^2 \, \epsilon},
\end{eqnarray}
\begin{eqnarray}
\left[\Sigma_L^{\hat{\mathcal{Z}}\hat{\mathcal{Z}}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
-\left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pp}} \,(Y^2_{\substack{u\\ pp}}+Y^2_{\substack{d\\ pp}}) + \tilde{C}'^{(3)}_{\substack{H \ell \\pp}}
\,Y^2_{\substack{e\\ pp}} \right] \frac{(g_1^2 + g_2^2) \, \bar{v}_T^2}{32 \, \pi^2 \, \epsilon}, \\
\left[\Sigma^{\hat{\mathcal{Z}}\hat{\chi}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
i \left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pp}} \,(Y^2_{\substack{u\\ pp}}+Y^2_{\substack{d\\ pp}}) + \tilde{C}'^{(3)}_{\substack{H \ell \\pp}}
\,Y^2_{\substack{e\\ pp}} \right]
\frac{\sqrt{g_1^2 + g_2^2} \, \bar{v}_T}{16 \pi^2 \epsilon}, \\
\left[\Sigma^{\hat{\chi} \hat{\chi}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
\left[N_C\, \tilde{C}'^{(3)}_{\substack{H q \\pp}} \,(Y^2_{\substack{u\\ pp}}+Y^2_{\substack{d\\ pp}}) + \tilde{C}'^{(3)}_{\substack{H \ell \\pp}}
\,Y^2_{\substack{e\\ pp}} \right] \frac{k^2}{8 \pi^2 \epsilon},\\
-\left[\Sigma^{{\hat{\phi}^- \hat{\mathcal{W}}^+}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
\left[\Sigma^{{\hat{\phi}^+ \hat{\mathcal{W}}^-}}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} \nonumber \\
&=&\frac{g_2 \, \bar{v}_T}{32 \, \pi^2 \, \epsilon} \left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pr}} \, V^\star_{\substack{CKM\\ rp}}(Y^2_{\substack{u\\ rr}}+Y^2_{\substack{d\\ pp}}) + \tilde{C}'^{(3)}_{\substack{H \ell \\pr}}
U_{\substack{PMNS\\ rp}}^\star \,Y^2_{\substack{e\\ pp}} + h.c. \right], \nonumber\\
\left[\Sigma^{\hat{\phi}^+ \hat{\phi}^-}(k^2)\right]^{div}_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}} &=&
\frac{k^2}{16\pi^2\epsilon}
\left[N_C \, \tilde{C}'^{(3)}_{\substack{H q \\pr}} \, V^\star_{\substack{CKM\\ rp}}(Y^2_{\substack{u\\ rr}}+Y^2_{\substack{d\\ pp}}) + \tilde{C}'^{(3)}_{\substack{H \ell \\pr}} U_{\substack{PMNS\\ rp}}^\star \,Y^2_{\substack{e\\ pp}} + h.c. \right], \nonumber \\ \\
\left[T^H\right]_{\tilde{C}'^{(3)}_{\substack{H \psi_L}}}^{div} &=& 0.
\end{eqnarray}
Again, these results directly satisfy the corresponding SMEFT Ward identities.
\section{Discussion}
Theoretical consistency checks, such as the BFM Ward identities examined and validated at one loop in this work, are useful
because they allow internal cross checks of theoretical calculations, and provide a means of
validating numerical codes that can be used for experimental studies.
This is of increased importance in the SMEFT, which is a complex field theory.
It is important to stress that the Ward identities are always modified in transitioning from the SM to
the SMEFT, but the nature of the changes to the identities depends on the gauge fixing procedure.
If the Background Field Method is not used, then only more complicated Slavnov-Taylor \cite{Veltman:1970dwc,Taylor:1971ff,Slavnov:1972fg}
identities hold. These identities also necessarily involve modifications from the SM case due to the presence of SMEFT operators.
The derivation in Ref.~\cite{Helset:2018fgq}, which is expanded upon in this work, should make
clear why this is necessarily the case. The identities are modified because the Lagrangian quantities
that appear in them, defined on the curved background Higgs field manifolds on which the correlation
functions are quantized, are the natural generalizations of the coupling constants and
masses of the SM for these field spaces.
To our knowledge, the first discussion in the literature of the need to modify these identities in the SMEFT is in Ref.~\cite{Passarino:2012cb},
and this point is also consistent with the discussion in Refs.~\cite{Cullen:2019nnr,Cullen:2020zof}, which recognize that this
modification of the Ward identities is present.
In the literature, one loop calculations have been done in the SMEFT within the BFM
\cite{Jenkins:2013zja,Jenkins:2013wua,Alonso:2013hga,Alonso:2014zka,Hartmann:2015oia,Hartmann:2016pil,Jenkins:2017dyc,Dekens:2019ept}, and also outside of the BFM
\cite{Pruna:2014asa,Ghezzi:2015vva,Crivellin:2017rmk,Cullen:2019nnr,Cullen:2020zof,Dedes:2018seb,Baglio:2019uty,Dawson:2019clf,Dawson:2018pyl,Degrande:2020evl}.
When comparing results, it is important to recognize that radiative scheme dependence includes
differing dependence on the Wilson coefficients in the two point functions. These functions differ in the BFM in the SMEFT, compared to other schemes,
because the corresponding symmetry constraints encoded in the Ward identities or Slavnov-Taylor identities also differ.
Scheme dependence is manifestly a very significant issue in the SMEFT when seeking to build up a global fit,
which will necessarily combine many predictions produced from multiple research groups.
It is important that scheme and input parameter dependence is clearly and completely specified in a one loop SMEFT calculation
to aid this effort. One should not misunderstand scheme dependence by equating differences between results obtained in
different schemes with errors when comparing; in this work, we avoid such an elementary mistake. In any case, we stress again
that in the SMEFT, in any gauge fixing approach, the Ward identities, or Slavnov-Taylor identities,
necessarily differ from those in the SM.\footnote{For an alternative point of view on these issues see Ref.~\cite{Dedes:2018seb}.}
We also emphasize the appearance of the two derivative Higgs operators in the Ward identities, modifying the tadpole contributions.
This is consistent with, and an explicit representation of, the discussions in Refs.~\cite{Passarino:2016pzb,Passarino:2016saj,Brivio:2017vri}.
The subtle appearance of such corrections again shows the need to take the SMEFT to mass eigenstate interactions
in a consistent manner.\footnote{It is interesting to compare the treatment of such effects in this work to Ref.~\cite{Falkowski:2015fla}.}
A consistent treatment of the SMEFT to all orders in $\bar{v}_T/\Lambda$ \cite{Helset:2020yio}
while preserving background field invariance leads directly to the geoSMEFT.
This approach also gives an intuitive interpretation of
how and why the Lagrangian parameters are modified, due to the presence of the curved Higgs field
spaces modifying correlation functions.
\section{Conclusions}
In this paper we have validated Ward identities in the SMEFT at one loop, when calculating using the Background Field Method
approach to gauge fixing. These results lay the groundwork for generating numerical codes to next to leading order
in both the perturbative and non-perturbative expansions in the theory while
using the Background Field Method in the geoSMEFT. The results also offer a clarifying demonstration of the need to carefully define
SMEFT mass eigenstate interactions, to ensure that the theory is formulated in a consistent manner.
Utilizing the Background Field Method is of increased utility (in the opinion of the authors of this paper) in the case of the SMEFT,
as this is an effective theory including a Higgs field. Any correct
formulation of the SMEFT is consistent with the assumed $\rm SU(2)_L\times U(1)_Y$ symmetry at one loop,
and this can be checked by comparing against the Ward-Takahashi or Slavnov-Taylor identities.
We encourage those developing alternative formulations of the SMEFT to demonstrate the
consistency of their results with the corresponding symmetry constraints classically, and at one loop,
to ensure that the various approaches are all well defined.
In this work we have demonstrated that the Ward identities provide an
excellent opportunity to cross check loop calculations performed in the SMEFT. In future works, this will
allow for consistency checks of relevant full one-loop contributions to the effective action. For example, the full
one-loop calculation of the $W$-boson propagator can be consistency checked against the full one-loop calculation
of $\hat{\mathcal{W}}$-$\hat{\phi}$ mixing. The background field
method will also allow for Dyson resummation of the one-loop corrections to the propagator without breaking
gauge invariance \cite{Denner:1996gb}. To the best of the authors' knowledge, no works concerning the
SMEFT have formulated or confirmed the corresponding Slavnov-Taylor identities for traditional $R_\xi$ gauge fixing.
This work provides a clear foundation from which these next steps can be approached.
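Regarding the Dyson resummation mentioned above, the transverse part of the resummed propagator takes, schematically (up to the sign conventions adopted for the self-energies), the form
\begin{equation*}
\bar{G}_T(k^2)=\frac{-i}{k^2-\bar{M}^2+\Sigma_T(k^2)},
\end{equation*}
and the BFM Ward identities ensure that this geometric series can be performed without violating gauge invariance.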
\section*{Acknowledgments}
We acknowledge support from the Villum Fonden, project number 00010102,
and the Danish National Research Foundation through a DFF project grant.
TC acknowledges funding from European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 890787.
We thank A. Helset and G. Passarino for discussions. MT thanks CERN for generous support during a Corona split SASS visit when much of this work
was developed.
In this paper we study analogues of the classical Poisson limit theorem in the noncommutative probability framework set by bm-independence. Our study can be considered parallel to the results of Crismale, Griseta and the second-named author \cite{CGW2020} and \cite{CGW2021}, where the weakly monotone case has been considered. Here we develop a theory extending the monotone case of Muraki \cite{Mur0}.
In general, a noncommutative probability space is a unital $*-$algebra $\mathcal{A}$ (or a $C^*-$algebra) with a normalized state $\varphi$ on it, which plays the role of classical expectation.
There are several notions of independence in such framework, which are the rules for computing mixed moments, i.e. expressions of the form $\varphi(a_{i_1}a_{i_2}\ldots a_{i_k})$ for $a_{i_1}, a_{i_2}, \ldots , a_{i_k}\in \mathcal{A}$. The universal ones are the freeness of Voiculescu \cite{Voi}, the monotonic independence of Muraki \cite{Mur1} and the Boolean independence \cite{S.W}. There are other notions of noncommutative independence, which are mixtures of these. For more information on this topic we refer to the Introduction in \cite{bmPoisson}.
Here we develop the theory of bm-independence introduced in \cite{JW2008, JW2009}, which combines the boolean and monotonic notions. A particular feature of this notion is that it is defined for random variables indexed by a partially ordered set. The definition of this notion is presented in the next section. In this study, however, we consider only specific partially ordered sets, which are defined by \textit{positive symmetric cones} in Euclidean spaces, a classification of which can be found in \cite{J.F.A.K}. These cones have an additional geometric property, called the \textit{volume characteristic}, which allows one to develop analogues of classical properties in such a general framework.
Noncommutative analogues of the classical Poisson limit theorem arise in two ways. The first one is by considering the convolution powers of Bernoulli type distributions, and this can be done for the free, monotone and boolean convolutions. This can be generalized to arrays of independent random variables (see \cite{Sp} for free independence and \cite{bmPoisson} for bm-independence). The second method, used by Muraki \cite{Mur0}, comes from constructions of creation $A_i^+$, annihilation $A_i^-$ and conservation $A_i^{\circ}$ operators ($i\in\mathbb N$) on appropriate Fock space associated, as the toy model, with a given noncommutative independence. In particular, these operators are independent (in the given noncommutative sense). For a parameter $\lambda \geq 0$, one considers the sums
\begin{equation}\label{Poisson general}
S_{N}(\lambda):=\frac{1}{\sqrt{N}}\sum_{i=1}^{N}(A_{i}^{+}+A_{i}^{-})+\lambda\sum_{i=1}^{N}A_{i}^{\circ}
\end{equation}
and one studies the limits of moments (for every $p\in\mathbb N$)
\[
m_p(\lambda):=\lim_{N\to +\infty} \varphi((S_N)^p).
\]
Then these are the moments of a Poisson type measure (which is to be identified in each case) associated with the given noncommutative independence. In particular, for free independence this is the free Poisson (i.e. Marchenko-Pastur) measure \cite{Sp}, whereas for monotonic independence the measure has been partially identified by Muraki \cite{Mur0}. Our constructions and results extend those of Muraki, so we recall his description of moments of the limit measure in the monotonic case (for details we refer to \cite{Mur0}):
\begin{equation}\label{convergence}
m_{p}(\lambda):=\lim_{N\rightarrow +\infty}\varphi\left((S_{N}(\lambda))^{p}\right)=\sum_{\pi \in \mathcal{NC}_{2}^{1, i}(p)}V(\pi)\lambda^{s(\pi)},
\end{equation}
where $\mathcal{NC}_{2}^{1, i}(p)$ denotes the set of all noncrossing partitions of the set $[p]:=\{1, \ldots, p\}$ with pair blocks and inner singleton blocks, $V(\pi)$ is (approximately, as $N\rightarrow +\infty$) the number of monotonic labellings of the blocks of $\pi\in \mathcal{NC}_{2}^{1, i}(p)$ by the integers $\{1, 2, \ldots, N\}$, divided by $N^{\frac{p-s(\pi)}{2}}$, and $s(\pi)$ is the number of singleton blocks in $\pi$.
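For a minimal illustration of this formula, note that for $p=4$ one has $\mathcal{NC}_{2}^{1, i}(4)=\{(\{1,2\},\{3,4\}),\ (\{1,4\},\{2,3\}),\ (\{1,4\},\{2\},\{3\})\}$, and counting labellings as described above one finds $V=1$, $V=\frac{1}{2}$ and $V=1$ for these three partitions, respectively, so that
\[
m_2(\lambda)=1, \qquad m_3(\lambda)=\lambda, \qquad m_4(\lambda)=\frac{3}{2}+\lambda^2;
\]
at $\lambda=0$ one recovers the even moments of the arcsine law from the monotone central limit theorem.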
The independence we study here can be defined for any partially ordered set; however, such generality does not allow for specific results. In particular, one needs additional structure on the poset, and this is guaranteed by partial orders defined by positive symmetric cones, the classification and properties of which can be found in \cite{J.F.A.K}.
In this study we consider the following three classes of positive symmetric cones $\Pi_d$ (with $d\in\mathbb N$) in particular vector spaces $\mathtt{X}_d$:
\begin{enumerate}
\item The cones $\Pi_d={\mathbb R}_{+}^{d}:=\{ \xi=(\xi_{1}, \xi_{2}, \ldots, \xi_{d})\in {\mathbb R}^{d}: \xi_{1}, \ldots, \xi_{d}\geq 0\}$ in the vector spaces ${\mathtt{X}_d}={\mathbb R}^{d}$, where for $\xi=(\xi_{1}, \ldots, \xi_{d}), \rho=(\rho_{1},\ldots, \rho_{d})\in {\mathbb R}^{d}$, the partial order $\preceq$ is given by coordinatewise comparison:
$$\xi\preceq \rho \quad \text{if}\quad \xi_{1}\leq \rho_{1}, \ldots , \xi_{d}\leq \rho_{d}.$$
\item The Lorentz light cones $\Lambda_{d}^{1}$ in $(d+1)$-Minkowski space time ${\mathtt{X}_d}={\mathbb R}_{+}\times{\mathbb R}^{d}$ with the positive cone $\Pi_d=\Lambda_{d}^{1}:=\left\{(t; x_{1}, ..., x_{d})\in {\mathtt{X}_d}: t\geq \left(\sum_{i=1}^{d}x_{i}^{2}\right)^{\frac{1}{2}}\right\}$. In this case, for $\xi= (t; x_{1}, ..., x_{d})$, $\rho=(s; y_{1}, ..., y_{d})\in {\mathbb R}_{+}\times{\mathbb R}^{d}$, we have
$$\rho\preceq \xi \quad \text{if}\quad t-s\geq \left(\sum_{i=1}^{d}(x_{i}-y_{i})^{2}\right)^{\frac{1}{2}}.$$
\item The cones $\Pi_d={\mathtt{Symm}}_d^+(\R)$ of positive semidefinite real symmetric $(d\times d)$ matrices:
\[
{\mathtt{Symm}}_d^+(\R):=\{\xi \in \mathbb M_{d}({\mathbb R}): \xi =\xi^T,\ \xi\geq 0 \} \subset \mathtt{X}_d=\mathbb M_{d}({\mathbb R}).
\]
For $\xi, \rho\in\mathbb M_{d}({\mathbb R})$, one defines (see the example after this list)
\[
\rho \preceq \xi \quad \text{ if } \quad \xi-\rho\in{\mathtt{Symm}}_d^+(\R).
\]
\end{enumerate}
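To illustrate the matrix order in case (3), take $d=2$, $\rho=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ and $\xi=\begin{pmatrix}2&1\\1&2\end{pmatrix}$; then $\xi-\rho=\begin{pmatrix}1&1\\1&1\end{pmatrix}$ has eigenvalues $0$ and $2$, so $\rho\preceq\xi$. On the other hand, $\rho$ and $\eta=\begin{pmatrix}3&0\\0&1/2\end{pmatrix}$ are incomparable, since both $\eta-\rho$ and $\rho-\eta$ are indefinite.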
These cones satisfy the property of \emph{volume characteristic}, which plays a crucial role in our studies. This property, introduced in \cite{J.A}, reads as follows:
\begin{thm}[volume characteristic \cite{J.A}]\label{volchar}
For each of the positive symmetric cones $\Pi_d$ we consider, there exists a sequence $(\gamma_{m}(\Pi_d))_{m\geq 1}$ such that for any $\xi\in \Pi_d$ and any $m\in{\mathbb N}$
$$\gamma_{m}(\Pi_d)=\frac{1}{\mathtt{v}(\xi)^{m}}\int_{\rho\in [0, \xi]}\mathtt{v}(\rho)^{m-1}d(\rho),$$
where $\mathtt{v}(\xi):=\mathtt{vol}[0, \xi]$ denotes the Euclidean volume of the interval $[0, \xi]\subset \Pi_d\subset {\mathbb R}^{m}$ and $d(\rho)$ is the Lebesgue measure on ${\mathbb R}^{m}$ (the dimension $m$ is minimal for the embedding $\Pi_d\subset {\mathbb R}^{m}$).
\end{thm}
The \textit{volume characteristic} property generalizes the following simple fact for the positive cone ${\mathbb R}_{+}$, where $\mathtt{v}(x):=\mathtt{vol}[0, x]=x$ for $x>0$:
\[
\frac{1}{\mathtt{v}(x)^m} \int_{y\in[0,x]} \mathtt{v}(y)^{m-1} dy = \frac{1}{x^m}\int_{0}^{x}y^{m-1}dy=\frac{1}{m}=\gamma_{m}({\mathbb R}_{+}).
\]
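The same computation factorizes over coordinates for the cone ${\mathbb R}_{+}^{d}$: since $\mathtt{v}(\xi)=\prod_{i=1}^{d}\xi_{i}$ for $\xi=(\xi_{1},\ldots,\xi_{d})$, one obtains
\[
\frac{1}{\mathtt{v}(\xi)^{m}}\int_{\rho\in[0,\xi]}\mathtt{v}(\rho)^{m-1}d(\rho)=\prod_{i=1}^{d}\frac{1}{\xi_{i}^{m}}\int_{0}^{\xi_{i}}y^{m-1}dy=\frac{1}{m^{d}},
\]
so that $\gamma_{m}({\mathbb R}_{+}^{d})=m^{-d}$.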
In what follows we will write simply $\gamma_m$ if the cone is specified.
In our studies of Poisson type limit theorems we deal with bm-independent random variables, indexed by elements of positive symmetric cones $\Pi_d$. This creates some challenges regarding the formulation of the theorems. These challenges are the same as those encountered in our study \cite{bmPoisson} of the law of small numbers for bm-independent random variables. For the reader's convenience we recall briefly how these problems have been solved in \cite{bmPoisson}.
\begin{enumerate}
\item Instead of the index set of positive integers we use the following discrete subsets ${\mathbf {I}}\subset\Pi_d$ depending on the positive cone $\Pi_d$:
\begin{enumerate}
\item ${\mathbf {I}}:={\mathbb N}^{d}$ if $\Pi_d={\mathbb R}_{+}^{d}$,
\item ${\mathbf {I}}:={\mathbb N}\times {\mathbb Z}^d$ if $\Pi_d=\Lambda_{d}^{1}$ is the Lorentz light cone,
\item ${\mathbf {I}}:={\mathtt{Symm}}_d^+(\Z)$ if $\Pi_d={\mathtt{Symm}}_d^+(\R)\subset{\mathbb M}_{d}({\mathbb R})$.
\end{enumerate}
\item The range of summation $j\in \{1,2,\ldots , N\}=[1,N]\cap \mathbb N$ in the partial sum $\sum\limits_{j=0}^{N}A_{j}^{\varepsilon}$ (with $\varepsilon\in \{-, +, \circ\}$) in the formula \eqref{Poisson general}, is replaced by the finite summation $\sum\limits_{\xi \in [0, \rho]_{{\mathbf {I}}}}A_{\xi}^{\varepsilon}$, where $[0, \rho]_{{\mathbf {I}}}:=[0, \rho]\cap {\mathbf {I}}$ for $ \rho\in\Pi_d $ and $[0, \rho]:=\{\xi\in \Pi_d: 0\preceq \xi\preceq \rho\}$ is an interval in the positive cone $\Pi_d$.
\item The replacement of the convergence $N\rightarrow +\infty$ in the formula \eqref{convergence} has to be formulated separately for each positive symmetric cone under consideration. We use the notation $\rho\xrightarrow[]{\Pi}\infty$ for the following:
\begin{itemize}
\item For $\Pi_d={\mathbb R}_{+}^{d}$ and $\rho:=(a_{1}, \cdots ,a_{d})\in \Pi_d$, then $\rho \xrightarrow[]{\Pi}\infty$ means that $a_{j}\rightarrow \infty$ for all $1\leq j\leq d$,
\item For $\Pi_d=\Lambda_{d}^{1}$ and $\rho:=(t;x_{1}, \ldots, x_{d})\in \Pi_d$, then $\rho\xrightarrow[]{\Pi}\infty$ means that $t-\left(\sum_{i=1}^{d}x_{i}^{2}\right)^{\frac{1}{2}}\rightarrow \infty$,
\item For $\Pi_d={\mathtt{Symm}}_d^+(\R)$ and $\rho \in \Pi_d$, if $0<\rho_{1}\leq \rho_{2}\leq \cdots \leq \rho_{d}$ are the eigenvalues of $\rho$, then $\rho\xrightarrow[]{\Pi}\infty$ means that $\rho_{1}\rightarrow \infty$ (and consequently $\rho_{j}\rightarrow \infty$ for all $1\leq j \leq d$).
\end{itemize}
\item
For a function $f: \Pi_d\mapsto {\mathbb R}$, we define $\lim\limits_{\rho \xrightarrow[]{\Pi}\infty}f(\rho)=\alpha$ if for each $\epsilon>0$ there exists $\mu\in\Pi_d$ such that for every $\rho\in\Pi_d$ if $\mu\preceq \rho$ then $|f(\rho)-\alpha|<\epsilon$.
\item Furthermore, the normalization factor $\sqrt{N}$ in \eqref{Poisson general} is replaced by $\sqrt{\#[0, \rho]_{\mathbf {I}}}$, where $\#[0, \rho]_{\mathbf {I}}$ is the number of elements in $[0, \rho]_{\mathbf {I}}:=[0, \rho]\cap \mathbf {I}$, which is asymptotically the same as the Euclidean volume $\mathtt{v}(\rho):=\mathtt{vol}[0, \rho]$ of an interval $[0, \rho]\subset\Pi_d$.
For the reader's convenience we recall the following formulas for the Euclidean volume of intervals $[0, \rho]$ (c.f. \cite{J.F.A.K}):
\begin{enumerate}
\item $\displaystyle \mathtt{v}(\rho)=\prod_{j=1}^{d}a_j$ if $\rho:=(a_1, \ldots , a_d)\in\Pi_d=\mathbb R_+^d$,
\item $\displaystyle \mathtt{v}(\rho)=\alpha_d(t^2-\|x\|^2)^{\frac{d+1}{2}}$ for some constant $\alpha_d$, if $\rho:=(t; x) \in \Pi_d = \Lambda_d^1$,
\item $\displaystyle \mathtt{v}(\rho)=\beta_d\left(\prod_{j=1}^{d}\lambda_j\right)^{\frac{d+1}{2}}=\beta_d\Big(\det(\rho)\Big)^{\frac{d+1}{2}}$ for some constant $\beta_d$, if $(\lambda_1, \ldots , \lambda_d)$ are the eigenvalues of $\rho\in \Pi_d={{\mathtt{Symm}}_d^+(\R)}\subset {\mathbb M}_{d}(\mathbb R)$.
\end{enumerate}
\end{enumerate}
With these changes our bm-analogue of the formula \eqref{Poisson general} can be written as
\begin{equation}\label{bmP}
S_{\rho}(\lambda):=\frac{1}{\sqrt{\mathtt{v}(\rho)}}\sum_{\xi\in[0, \rho]_{\mathbf {I}}}(A_{\xi}^{+}+A_{\xi}^{-})+\lambda\sum_{\xi\in[0, \rho]_{\mathbf {I}}}A_{\xi}^{\circ}.
\end{equation}
The main goal of our study is the limit
\[
\lim_{\rho\xrightarrow[]{\Pi}\infty}\varphi((S_{\rho}(\lambda))^p), \quad p\in{\mathbb N},
\]
which we shall describe in a combinatorial manner.
Our starting point is the construction of \emph{discrete bm-Fock spaces} (one for each positive symmetric cone) and of the creation $A_{\xi}^{+}$, annihilation $A_{\xi}^{-}$ and conservation $A_{\xi}^{\circ}$ operators on them. The construction is related to the bm-product Fock space \cite{JW1} and generalizes the ideas of Muraki.
In \cite{J.W3} the second-named author provided a model for bm-independence, where the \textit{bm-Fock space} was constructed and the algebras $\{\mathcal{A}_{\xi}: \xi\in \mathbb{X}\}$, generated by the creation $A^+_{\xi}:=A^+(e_{\xi})$ and annihilation $A^-_{\xi}:=A^-(e_{\xi})$ operators, are bm-independent if the vectors $\{e_{\xi}:\xi\in \mathbb{X}\}$ are mutually orthogonal. This construction, however, is not suitable for the present study, which requires a discrete version of it. Therefore, in the next section, we present new constructions of \textit{discrete bm-Fock spaces}, each of which is related with the given positive symmetric cone.
\section{Preliminaries}
In this section we present the framework for our study.
A \emph{noncommutative probability space} $(\mathcal{A}, \varphi)$ consists of a unital $*$-algebra $\mathcal{A}$ and a state $\varphi$ on it, that is, a linear functional $\varphi: \mathcal{A}\rightarrow {\mathbb C}$ which is positive $(\varphi(a^*a)\geq 0$ for all $a\in\mathcal{A})$ and unital $(\varphi(1_{\mathcal{A}})=1$ for the unit $1_{\mathcal{A}}\in\mathcal{A})$. Self-adjoint elements $a=a^{*}\in \mathcal{A}$ are called \emph{noncommutative random variables} and the distribution of a random variable $a=a^*\in \mathcal{A}$ is a probability measure $\mu$ with the moments given by the sequence $\displaystyle m_n(\mu):=\varphi(a^n)$. It always exists, since the sequence of moments is positive definite.
General references on noncommutative probability are \cite{HO, Mey, Par} and references therein.
\subsection{bm-independence}
The general formulation of \emph{bm-independence} was given by the second-named author in \cite{J.W3} for families of algebras indexed by partially ordered sets. If $(\mathbb{X}, \preceq)$ is a poset, we shall use the following notation: $x\sim y$ if $x,y\in \mathbb{X}$ are comparable, $x\nsim y$ if $x,y\in \mathbb{X}$ are incomparable and $x\prec y$ if $x\preceq y$ and $x\neq y$.
Now we recall the definition of bm-independence from \cite{J.W3}.
\begin{df}[bm-independence]
Let $(\mathcal{A}, \varphi)$ be a noncommutative probability space. We say that a family $\{\mathcal{A}_{\xi}: \xi\in {\mathbb{X}}\}$ of subalgebras of $\mathcal{A}$, indexed by a partially ordered set $({\mathbb{X}}, \preceq)$, is \textbf{bm-independent} in $(\mathcal{A}, \varphi)$ if the following conditions hold:
\begin{itemize}
\item ${\mathtt {BM1}}:$ If $\xi, \rho, \eta \in {\mathbb{X}}$ satisfy: $\xi\prec \rho\succ \eta$ or $\xi\nsim \rho\succ \eta$ or $\xi \prec \rho \nsim\eta$, then for any $a_{1}\in \mathcal{A}_{\xi}, a_{2}\in \mathcal{A}_{\rho}, a_{3}\in \mathcal{A}_{\eta}$ we have
\begin{equation}\label{bm1d}
a_{1}a_{2}a_{3}=\varphi(a_{2})a_{1}a_{3}.
\end{equation}
\item ${\mathtt {BM2}}:$ If $\xi_{1}\succ \cdots \succ \xi_{m}\nsim \cdots \nsim \xi_{k}\prec \xi_{k+1}\prec \cdots \prec \xi_{n}$ for some $1\leq m\leq k\leq n$ and $\xi_{1}, \cdots, \xi_{n}\in {\mathbb{X}}$, with $a_{j}\in \mathcal{A}_{\xi_{j}}$ for $1\leq j\leq n$, then
\begin{equation}\label{bm2d}
\varphi(a_{1} \cdots a_{n})=\prod_{j=1}^{n}\varphi(a_{j}).
\end{equation}
\end{itemize}
\end{df}
Noncommutative random variables $\{a_{\xi}\in \mathcal{A}: \xi\in {\mathbb{X}}\}$ are called \emph{bm-independent} if the subalgebras $\mathcal{A}_{\xi}$ they generate are bm-independent. For more information about properties of bm-independence we refer to \cite{J.W3, J.A}.
\begin{rem}
In the above definition, if ${\mathbb{X}}$ is totally ordered (i.e. every two elements in ${\mathbb{X}}$ are comparable), then we obtain the monotonic independence. On the other hand, if $\mathbb{X}$ is totally disordered (i.e. none of the elements of ${\mathbb{X}}$ are comparable), then we obtain the Boolean independence (c.f. \cite{J.W3, J.A}).\\
The two conditions ${\mathtt {BM1}}$ and ${\mathtt {BM2}}$ allow one to compute all joint moments $\varphi(a_{1}\cdots a_{n})$ for bm-independent random variables $a_{1}, \ldots, a_{n}$ via the marginals $\varphi_{j}:=\varphi|\mathcal{A}_{j}$, i.e. by the restriction of $\varphi$ to the subalgebras they generate (c.f. \cite{J.W3}, Lemmas 2.3, 2.4).
The idea is that first one applies ${\mathtt {BM1}}$ as many times as possible, and then what remains is subject to ${\mathtt {BM2}}$. We also refer to \cite{bmPoisson} for an algorithm which allows one to evaluate explicitly joint moments using the conditions ${\mathtt {BM1}}$ and ${\mathtt {BM2}}$. A simple instance is given after this remark.
\end{rem}
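For instance, if $\xi\prec\rho$ in ${\mathbb{X}}$ and $a=a^*\in\mathcal{A}_{\xi}$, $b\in\mathcal{A}_{\rho}$, then the pattern $\xi\prec\rho\succ\xi$ in the product $aba$ allows one application of ${\mathtt {BM1}}$, giving $aba=\varphi(b)\,a^{2}$ and hence
\[
\varphi(aba)=\varphi(b)\,\varphi(a^{2}).
\]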
\subsection{Noncrossing partitions with pair or singleton blocks and bm-orders}
For $p\in {\mathbb N}$, a \emph{partition} $\pi$ of the finite set $[p]:=\{1, \ldots, p\}$ is a collection of disjoint nonempty subsets of $[p]$, called \emph{blocks}, whose union is $[p]$, i.e. if $\pi$ has $k$ blocks $B_{1}, \ldots, B_{k}$, one has for any $1\leq i, j\leq k$
$$B_{i}\cap B_{j}=\emptyset \quad \text{if} \quad i\neq j \quad \text{ and} \quad \quad \bigcup_{i=1}^{k}B_{i}=[p].$$
For a partition $\pi$ with $k$ blocks $B_{1}, \ldots, B_{k}$ we will write
$\pi:=(B_{1}, \ldots, B_{k})$ to indicate that the blocks are ordered by their minimal elements: $\min(B_j)<\min(B_{j+1})$ for $1\leq j \leq k-1$. We denote by $\mathcal{P}(p)$ the set of all partitions $\pi$ of $[p]$, and by $|B|$ the cardinality of a block $B$ in $\pi$. If $|B|=1$, then the block $B$ consists of one element and is called a \emph{singleton}. The number of blocks in $\pi$ will be denoted by $b(\pi)$ and the number of singletons of $\pi$ by $s(\pi)$. One says that a partition $\pi$ has a \emph{crossing} if there exist two distinct blocks $B_{i}$ and $B_{j}$ in $\pi$, and elements $u_{1}, u_{2}\in B_{i}, v_{1}, v_{2}\in B_{j}$ such that $u_{1}< v_{1}<u_{2}<v_{2}$. Otherwise, $\pi$ has \emph{no crossings} and it is called a \textit{noncrossing partition}. The set of all \emph{noncrossing partitions} of $[p]$ will be denoted by $\mathcal{NC}(p)$ (or by $\mathcal{NC}(p, k)$ for noncrossing partitions in $\mathcal{NC}(p)$ with exactly $k$ blocks).\\
Pictorially, a partition $\pi\in\mathcal{P}(p)$ will be presented by drawing the integers 1 to $p$ (from right to left) on a horizontal line and then joining the elements of a block by lines above; in particular, singletons are marked by vertical lines. For instance, the graphical representation of the partition $$\pi=(\{1\}, \{2, 4\}, \{3, 5\}, \{6, 11\}, \{7, 9, 10\}, \{8\})\in \mathcal{P}(11)$$ is the following figure:
\begin{center}
\begin{tikzpicture}[thick,font=\small]
\path (0,0) node[] (a) {11}
(0.5,0) node[] (b) {10}
(1,0) node[] (c) {9}
(1.5,0) node[] (d) {8}
(2,0) node[] (e) {7}
(2.5,0) node[] (f) {6}
(3,0) node[] (g) {5}
(3.5,0) node[] (h) {4}
(4,0) node[] (i) {3}
(4.5,0) node[] (j) {2}
(5,0) node[] (k) {1};
\draw (a) -- +(0,1) -| (f);
\draw (g) -- +(0,1) -| (i);
\draw (k)-- +(0,1);
\draw (d)-- +(0,0.7);
\draw (b) -- +(0,0.8) -| (e);
\draw (b) -- +(0,0.8) -| (c);
\draw (h) -- +(0,0.8) -| (j);
\end{tikzpicture}
\end{center}
This partition has two singletons $\{1\}$ and $\{8\}$ and a crossing between the blocks $\{2,4\}$ and $\{3,5\}$. Of course, singletons do not produce crossings.
\begin{df}[Inner and outer blocks of a noncrossing partition]
A block $B_{j}\in\pi\in\mathcal{NC}(p)$ is called \textbf{inner} if there exists another block $B_{i}$ such that \begin{equation*}
\min B_{i}< \min B_{j}\leq \max B_{j}< \max B_{i},
\end{equation*}
where $\min B$ (resp. $\max B$) denotes the minimal (resp. maximal) element of the block $B$.
Otherwise, $B_j$ is called an \textbf{outer} block.
\end{df}
Observe that in the noncrossing partition every block is either inner or outer.
It is convenient to define a partial order $\preceq_{\pi}$ on the blocks of a noncrossing partition $\pi=(B_{1}, \ldots, B_{k})\in \mathcal{NC}(p, k)$. Namely, we will write $B_{i}\preceq _{\pi} B_{j}$ if $B_{i}=B_{j}$ or the block $B_{j}$ lies inside $B_{i}$, that is
\begin{equation*}
\min B_{i}< \min B_{j}\leq \max B_{j}< \max B_{i}.
\end{equation*}
A noncrossing partition $\pi=(B_{1}, \ldots, B_{k})\in \mathcal{NC}(p, k)$ is called a \emph{pair partition} (notation: $\pi \in \mathcal{NC}_{2}(p)$) if $|B_{j}|=2$ for all $1\leq j\leq k$ (each block $B_{i}$ contains exactly two elements). On the other hand, a partition $\pi\in \mathcal{NC}(p)$ is called a \emph{noncrossing partition with pair or singleton blocks} if $ |B|\in \{1, 2\}$ for all $B\in\pi$; the set of all such partitions will be denoted by $\mathcal{NC}_{2}^{1}(p)$. We distinguish two subsets of $\mathcal{NC}_{2}^{1}(p)$ which will appear in the combinatorial descriptions of our Poisson type limit distribution in the next section:
\begin{itemize}
\item The set $\mathcal{NC}_{2, o}^{1, i}(p)\subset \mathcal{NC}_{2}^{1}(p)$, in which the partitions have \textbf{no inner pair blocks} and \textbf{no outer singletons}, i.e. pair blocks must be outer and singletons must be inner.
\item The set $\mathcal{NC}_{2}^{1, i}(p)\subset \mathcal{NC}_{2}^{1}(p)$ in which the partitions have \textbf{no outer singletons} (i.e. singletons must be inner blocks, but pair blocks can be either inner or outer); an example for $p=4$ is given after this list.
\end{itemize}
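For instance, for $p=4$ one has
\[
\mathcal{NC}_{2}^{1, i}(4)=\big\{(\{1,2\},\{3,4\}),\ (\{1,4\},\{2,3\}),\ (\{1,4\},\{2\},\{3\})\big\},
\]
while $\mathcal{NC}_{2, o}^{1, i}(4)$ contains only the first and the third of these partitions, since the block $\{2,3\}$ in the second one is an inner pair block; partitions such as $(\{1,2\},\{3\},\{4\})$ belong to neither set, as they contain outer singletons.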
\begin{df}
Let $(\xi_{p}, \ldots, \xi_{1})$ be a sequence of elements of ${\mathbb{X}}$. For $\xi\in\{\xi_{p}, \ldots, \xi_{1}\}$, we define a subset $B(\xi):=\{1\leq j\leq p: \xi_{j}=\xi\}\subset [p]$, with $i=i(\xi):=\min B(\xi)$ being the minimal element. If the cardinality of the set $\{\xi_{p}, \ldots, \xi_{1}\}$ is $|\{\xi_{p}, \ldots, \xi_{1}\}|=k$, then we obtain $k$ disjoint subsets $B_1, \ldots , B_k$, which constitute a partition of $[p]$. Assuming that $\min(B_i)<\min(B_{i+1})$ for $1\leq i \leq k-1$ we obtain a partition $\pi=(B_{1}, \ldots, B_{k})\in \mathcal{P}(p)$. In this case, we say that the partition $\pi$ is \textbf{adapted} to the sequence $(\xi_{p}, \ldots, \xi_{1})$ and we denote this by $(\xi_{p}, \ldots, \xi_{1})\sim \pi$ or $\pi\sim(\xi_{p}, \ldots, \xi_{1})$.
\end{df}
Observe that the partition adapted to a sequence is unique.
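For instance, the sequence $(\xi_{4}, \xi_{3}, \xi_{2}, \xi_{1})=(a, b, b, a)$ with $a\neq b$ gives $B(a)=\{1, 4\}$ and $B(b)=\{2, 3\}$, so the adapted partition is $\pi=(\{1,4\},\{2,3\})\in\mathcal{NC}_{2}(4)$.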
\begin{df}
For a finite subset $\mathbf{J}\subset\mathbf {I}$ and a partition $\pi=(B_{1}, \ldots, B_{k})\in\mathcal{P}(p)$ with $k$ blocks, we define the \textbf{label function} $\mathtt{L}: \pi\longrightarrow \mathbf{J}$ by $\mathtt{L}(B_{i})\in\mathbf{J}$ for $1\leq i\leq k$. If we consider $\pi\sim (\xi_{p}, \ldots, \xi_{1})$ as above, then for $\xi\in\{\xi_{p}, \ldots, \xi_{1}\}$, $\mathtt{L}(B(\xi)):=\xi$ will be called \textbf{the label} of the block $B(\xi)$, and $(\mathtt{L}(B_{k}), \ldots, \mathtt{L}(B_{1}))$ will be called \textbf{the label sequence} of the partition $\pi\sim(\xi_{p}, \ldots, \xi_{1})$.
\end{df}
\begin{rem}\label{equalsets}
If a partition $\pi=(B_{1}, \ldots, B_{k})\in \mathcal{P}(p)$ is adapted to a sequence $(\xi_{p}, \ldots, \xi_{1})$, then the set $\{\xi_{p}, \ldots, \xi_{1}\}$ of elements and the set $\{\mathtt{L}(B_{k}), \ldots, \mathtt{L}(B_{1})\}$ of labels are equal. Moreover, the label sequence $(\mathtt{L}(B_{k}), \ldots, \mathtt{L}(B_{1}))$ is uniquely defined.
\end{rem}
\begin{df}
Let $\pi=(B_{1}, \ldots, B_{k})\in\mathcal{NC}_{2}^{1}(p)$ be a given partition and let $B_{i}$ and $B_{j}$ be two blocks with $B_{i}\prec_{\pi}B_{j}$ which, for any other block $B_{l}\in\pi$, satisfy
$$\text{if $B_{i}\preceq_{\pi}B_{l}\preceq_{\pi} B_{j}$ then $B_{i}=B_{l}$ or $B_{j}=B_{l}$}.$$
Then we say that the block $B_{j}$ is a $\textbf{direct successor}$ of the block $B_{i}$; equivalently, the block $B_{i}$ is a $\textbf{direct predecessor}$ of the block $B_{j}$.
\end{df}
\begin{exmp}
Consider $\pi=(B_{1}, B_{2}, B_{3}, B_{4}, B_{5})\in\mathcal{NC}_{2}^{1}(8)$ with $B_{1}=\{1, 8\}, B_{2}=\{2, 4\}, B_{3}=\{3\}, B_{4}=\{5, 7\}$ and $B_{5}=\{6\}$. Then, graphically, \\\\\\
\setlength{\unitlength}{0.5cm}
\begin{picture}(0,5
\thicklines
\put(7,0.7){$\pi=$}
\put(9,1){\line(0,1){4}}
\put(14.5,5.3){$B_{1}$}
\put(12.1,4.1){$B_{4}$}
\put(17,4.1){$B_{2}$}
\put(12.1,3){$B_{5}$}
\put(17,3){$B_{3}$}
\put(8.77,0.9){\Huge.}
\put(8.5,0.3){ $8$}
\put(11,1){\line(0,1){2.8}}
\put(10.77,0.9){\Huge.}
\put(10.5,0.3){ $7$}
\put(12.5,1){\line(0,1){1.8}}
\put(12.28,0.9){\Huge.}
\put(12,0.3){ $6$}
\put(14,1){\line(0,1){2.8}}
\put(13.77,0.9){\Huge.}
\put(13.5,0.3){ $5$}
\put(11,3.8){\line(1,0){3}}
\put(16,1){\line(0,1){2.8}}
\put(15.77,0.9){\Huge.}
\put(15.5,0.3){ $4$}
\put(17.5,1){\line(0,1){1.8}}
\put(17.28,0.9){\Huge.}
\put(17,0.3){ $3$}
\put(19,1){\line(0,1){2.8}}
\put(18.77,0.9){\Huge.}
\put(18.5,0.3){ $2$}
\put(16,3.8){\line(1,0){3}}
\put(21,1){\line(0,1){4}}
\put(20.77,0.9){\Huge.}
\put(20.5,0.3){ $1$}
\put(9,5){\line(1,0){12}}
\end{picture}\\\\
and
\begin{itemize}
\item $B_{1}$ is a direct predecessor of $B_{2}$ and $B_{4}$;
\item $B_2$ and $B_4$ are direct successors of $B_1$;
\item $B_{3}$ is a direct successor of $B_{2}$;
\item $B_{5}$ is a direct successor of $B_{4}$.
\end{itemize}
\end{exmp}
For a given partition $\pi\in\mathcal{P}(p)$, we define the partition $\tilde{\pi}$ as the partition obtained from $\pi$ by removing all singletons: $\tilde{\pi}:=\pi\setminus\{\text{singletons}\}$. We will call such $\tilde{\pi}$ the \textbf{reduced partition} of $\pi$. Obviously, for $\pi\in\mathcal{NC}_{2}(p)$, we have $\tilde{\pi}=\pi$, but in general, for $\pi \in\mathcal{NC}_{2}^{1}(p)$, $\tilde{\pi}$ is a subpartition of $\pi$. For instance, \\\\if
\setlength{\unitlength}{0.5cm}
\begin{picture}(0,5
\thicklines
\put(0.7,0){$\pi=$}
\put(2.3,0){\line(0,1){1.5}}
\put(2.7,0){\line(0,1){1.5}}
\put(3.1,0){\line(0,1){1}}
\put(3.6,0){\line(0,1){1}}
\put(2.7,1.5){\line(1,0){1.32}}
\put(4,0){\line(0,1){1.5}}
\put(4.5,0){\line(0,1){1.5}}
\put(4.5,1.5){\line(1,0){1.52}}
\put(4.8,0){\line(0,1){1}}
\put(5.26,0){\line(0,1){0.6}}
\put(5.7,0){\line(0,1){1}}
\put(4.8,1){\line(1,0){0.9}}
\put(6,0){\line(0,1){1.5}}
\put(6.33,0){\line(0,1){1.5}}
\put(6.8,0){$\in\mathcal{NC}_{2}^{1}(11)$}
\end{picture} \hspace{6.5cm} then \hspace{3cm}\begin{picture}(0,5
\thicklines
\put(-2,0){$\tilde \pi=$}
\put(-0.5,0){\line(0,1){1.5}}
\put(-0.5,1.5){\line(1,0){1.32}}
\put(0.8,0){\line(0,1){1.5}}
\put(1.3,0){\line(0,1){1.5}}
\put(1.3,1.5){\line(1,0){1.52}}
\put(1.6,0){\line(0,1){1}}
\put(2.5,0){\line(0,1){1}}
\put(1.6,1){\line(1,0){0.9}}
\put(2.8,0){\line(0,1){1.5}}
\put(3.3,0){$\in \mathcal{NC}_{2}(6).$}
\end{picture}\\\\
\begin{df}[bm-order on noncrossing partitions with pair or singleton blocks]\label{bmorder}
Let $(\xi_{p}, \ldots, \xi_{1})$ be a given sequence of elements from a partially ordered set $(\mathbb{X}, \preceq)$, such that $(\xi_{p}, \ldots, \xi_{1})\sim\pi=(B_{1}, \ldots, B_{k})\in \mathcal{NC}_{2}^{1}(p)$.
We say that the sequence $\xi:=(\xi_{p}, \ldots, \xi_{1})$ \textbf {defines bm-order on the partition} $\pi$ (notation $\xi \trianglelefteq \pi$) if for all $1\leq i\neq j\leq k$ the following conditions hold:
\begin{enumerate}
\item If $|B_{j}|=2$ and $B_{i}\prec _{\pi}B_{j}$, then $\mathtt{L}(B_{i})\preceq \mathtt{L}(B_{j})$;
\item If $|B_{j}|=1, B_{i}\prec_{\pi} B_{j}$ and $B_{j}$ is a direct successor of $B_{i}$, then $\mathtt{L}(B_{i})=\mathtt{L}(B_{j})$.
\end{enumerate}
We say that the sequence \textbf{defines strict bm-order on the partition} $\pi$ (notation $\xi\vartriangleleft \pi$) if for all $1\leq i\neq j\leq k$ the condition $(1)$ reads as: $B_{i}\prec_{\pi}B_{j}$ implies that $\mathtt{L}(B_{i})\prec \mathtt{L}(B_{j})$.
\end{df}
In particular, in a bm-ordered partition a singleton block must have the same label as its direct predecessor and pair blocks are labeled increasingly.
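For instance, for $\pi=(\{1,4\},\{2,3\})$ a sequence defines bm-order on $\pi$ precisely when the labels satisfy $\mathtt{L}(\{1,4\})\preceq \mathtt{L}(\{2,3\})$, whereas for $\pi=(\{1,4\},\{2\},\{3\})$ both singletons are direct successors of the pair block, so all three labels must coincide.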
Similarly to the above definition and by virtue of the Remark \ref{equalsets}, we can define bm-order given by label sequence.
\begin{df}
Let $\xi:=(\xi_{p}, \ldots , \xi_{1})\in \mathbb{X}^p$ be a given sequence with the adapted partition $\pi=(B_{1}, \ldots, B_{k})\in \mathcal{NC}_{2}^{1}(p)$ and $\mu:=(\mathtt{L}(B_{k}), \ldots, \mathtt{L}(B_{1}))$ be the label sequence of blocks $B_{1}, \ldots , B_{k}\in \pi$. If $\xi \trianglelefteq \pi$ (resp. $\xi \vartriangleleft \pi$), we say that the label sequence $\mu$ \textbf{defines bm-order} (resp. \textbf{defines strict bm-order}) on $\pi$ and we will use the notation $\mu \trianglelefteq _{\mathtt{L}}\pi$ (resp. $\mu \vartriangleleft_{\mathtt{L}}\pi$).
\end{df}
Our proofs of Poisson type limit theorems on discrete bm-Fock spaces eventually reduce to the study of the following combinatorial sets.
\begin{df}
For $\pi \in \mathcal{NC}_{2}^{1}(p, k)$ and $\rho \in\Pi_d$ we define the following sets of sequences from $\mathbf {I}$:
\begin{enumerate}
\item the set
\[
\mathrm{bmo}(\pi, \rho):=\{\xi=(\xi_{1}, \ldots, \xi_{p})\in [0, \rho]_{{\mathbf {I}}}^{p}: \xi\trianglelefteq \pi\}
\]
of all sequences of elements which satisfy $0\preceq\xi_j \preceq \rho$, $\xi_j\in \mathbf {I}$ ($1\leq j \leq p$), and which define bm-order on $\pi$;
\item the set
\[
\mathrm{BMO}(\pi, \rho):=\{\mu=(\mu_{1}, \ldots, \mu_{k}): \mu_{i}=\mathtt{L}(B_{i})\in [0, \rho]_{{\mathbf {I}}}, \quad 1\leq i\leq k, \quad \text{and} \quad \mu\trianglelefteq_{\mathtt{L}} \pi\}
\]
of all label sequences of elements which satisfy $0\preceq\mu_j \preceq \rho$, $\mu_j\in \mathbf {I}$ ($1\leq j \leq k$), and define bm-order on $\pi\in \mathcal{NC}_{2}^{1}(p, k)$;
\item the set
\[
\mathtt{BMO}(\pi, \rho):=\{\mu=(\mu_{1}, \ldots, \mu_{k}): \mu_{i}=\mathtt{L}(B_{i})\in [0, \rho]_{{\mathbf {I}}}, \quad 1\leq i\leq k, \quad \text{and} \quad \mu\vartriangleleft_{\mathtt{L}} \pi\}
\]
of all label sequences of elements which satisfy $0\preceq\mu_j \preceq \rho$, $\mu_j\in \mathbf {I}$ ($1\leq j \leq k$), and define strict bm-order on $\pi\in \mathcal{NC}_{2}^{1}(p, k)$.
\end{enumerate}
\end{df}
\begin{rem}
If $\pi\in \mathcal{NC}_{2}(p)$ is a pair partition, then the sets
$\mathrm{BMO}(\pi, \rho)$ and $\mathtt{BMO}(\pi, \rho)$ are equal.
\end{rem}
We conclude this subsection by recalling the following result from \cite{bmPoisson}, which plays a significant role in this study.
\begin{thm}[\cite{bmPoisson}]\label{thmp}
Let $\pi\in\mathcal{NC}(p, k)$ be a noncrossing partition of $p$ elements with $1\leq b(\pi)=k\leq p$ blocks, then for each positive symmetric cone $\Pi_d$ which we consider, there exists the limit
$$\lim_{\rho\xrightarrow[]{\Pi}\infty}\frac{|\mathtt{BMO}(\pi, \rho)|}{\mathtt{v}(\rho)^{k}}=V(\pi),$$
where the function $V(\pi):=V_{\Pi_d}(\pi)$ depends on the cone $\Pi_d$ and the volume characteristic sequence $(\gamma_{n}(\Pi_d))_{n\geq 1}$, and satisfies the following recursive formula:
$$V(\pi)=\left\{
\begin{array}{ll}
1 & \hbox{if $ b(\pi)=1$ or $\pi=\emptyset$ ,} \\
\gamma_{b(\pi)}\cdot\prod\limits_{i=1}^{k}V(\pi_{i}) & \hbox{if $
\thicklines
\put(0,0){$\pi=$}$
\linethickness{0.3mm}
\put(1.5,0){\line(0,1){2.5}}
\put(1.6,1){ $...$}
\put(3,0){\line(0,1){2.5}}
\put(3.1,1){ $\pi_{1}$}
\put(4.5,0){\line(0,1){2.5}}
\put(4.9,1){$ ...$}
\put(6,0){\line (0,1){2.5}}
\put(6.1,1){ $\pi_{2}$}
\put(7.5,0){\line (0,1){2.5}}
\put(7.6,1){\Huge $...$}
\put(9.2,0){\line (0,1){2.5}}
\put(9.3,1){ $\pi_{k}$}
\put(10.7,0){\line (0,1){2.5}}
\put(11.1,1){$ ...$}
\put(12.3,0){\line (0,1){2.5}}
\put(1.5,2.5){\line(1,0){10.8}}
\put(12.5,0){ ,}
} \\
\prod\limits_{i=1}^{k}V(\pi_{i}), & \hbox{if $\pi=\pi_1\cup \pi_2\cup\ldots \cup\pi_k$.}
\end{array}
\right.
$$
Here, the notation in the second case is understood as a partition $\pi$ with one outer block with $m$ elements (vertical lines), inside of which there are $k$ arbitrary partitions $\pi_{1}, \ldots, \pi_{k}$. Whereas, by writing $\pi=\pi_1\cup \pi_2\cup\ldots \cup\pi_k$, we mean a disjoint union of sub-partitions, each with exactly one outer block (which could as well be a singleton).
\end{thm}
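To illustrate the recursion on pair partitions: $V\big((\{1,2\},\{3,4\})\big)=1$, $V\big((\{1,4\},\{2,3\})\big)=\gamma_{2}$ and $V\big((\{1,6\},\{2,5\},\{3,4\})\big)=\gamma_{3}\gamma_{2}$. For $\Pi={\mathbb R}_{+}$, where $\gamma_{m}=\frac{1}{m}$, the last value equals $\frac{1}{6}$, the asymptotic fraction of strictly increasing label triples $\mu_{1}\prec\mu_{2}\prec\mu_{3}$ in $[0,\rho]_{\mathbf {I}}$.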
\section{Main results}
\subsection{Discrete bm-Fock space and related operators}
Let us consider a family $\{\mathcal{H}_{\xi}: \ \xi\in {\mathbf {I}}\subset\Pi_d \}$ of Hilbert spaces, indexed by the discrete set $\mathbf {I}$, each with orthonormal basis $\{e_{\xi}^{m}\in \mathcal{H}_{\xi} : m=0,1, 2, 3, \cdots\}$. We assume that $\Omega:=e_{\xi}^{0}$ is a common unit vector for all $\xi\in {\mathbf {I}}$, called the \emph{vacuum vector}.
Following ideas of Muraki \cite{Mur2, Mur0}, for every $n\in {\mathbb N}$, we define
$$\varXi_{n}:=\{(\rho_{n}, \ldots, \rho_{1})\in \mathbf {I}^n: \quad \rho_{n}\succ \rho_{n-1}\succ \cdots \succ \rho_{1}\}.$$
The \emph{discrete bm-Fock space}, denoted by $\mathcal{F}_{bm}^{d}(\mathbf {I})$, is the Hilbert space spanned by the \emph{vacuum vector} $\Omega$ and the simple tensors of the form $h_{\rho_{n}}\otimes \cdots \otimes h_{\rho_{1}}$, where $h_{\rho}\in \mathcal{H}_{\rho}, h_{\rho}\bot\Omega$ and $(\rho_{n}, \ldots, \rho_{1})\in \varXi_{n}$; the scalar product is given by
$$\langle h_{\rho}, h_{\eta}\rangle=0 \text{ if } \rho\neq \eta $$
and
$$\langle h_{\rho_{n}}\otimes\cdots\otimes h_{\rho_{1}}, f_{\rho_{m}}\otimes\cdots\otimes f_{\rho_{1}}\rangle=\delta_{mn}\prod_{i=1}^{n}\langle h_{\rho_{i}}, f_{\rho_{i}}\rangle,$$
where $\langle h_{\rho_{i}}, f_{\rho_{i}}\rangle$ is the scalar product in $\mathcal{H}_{\rho_{i}}$.
The discrete bm-Fock space $\mathcal{F}_{bm}^{d}(\mathbf {I})$ is the \emph{bm-product} \cite{JW1} of a family of Hilbert spaces $\{\mathcal{H}_{\xi}, \xi\in\mathbf {I}\}$, indexed by elements in $\mathbf {I}\subset\Pi_d$.
In the following, we define the creation, the annihilation and the conservation operators on the discrete bm-Fock space $\mathcal{F}_{bm}^{d}(\mathbf {I})$, and show their properties. Let $g_{\xi}\in\mathcal{H}_{\xi}$ be a unit vector, and let $(\rho_{n}, \ldots, \rho_{1})\in \varXi_{n}$ be an index sequence such that $h_{\rho_j}\in\mathcal{H}_{\rho_j}$ for $1\leq j \leq n$.
\begin{enumerate}
\item \textbf{The creation operator} $A_{g_{\xi}}^{+}$ is defined as follows:\\
$A_{g_{\xi}}^{+}\Omega=g_{\xi}$,\\
$A_{g_{\xi}}^{+}(h_{\rho_{n}}\otimes \cdots \otimes h_{\rho_{1}})=\left\{
\begin{array}{ll}
g_{\xi}\otimes h_{\rho_{n}}\otimes \cdots \otimes h_{\rho_{1}} & \hbox{if $\xi\succ\rho_{n}$,} \\
0 & \hbox{ otherwise.}
\end{array}
\right.$\\
\item \textbf{The annihilation operator} $A_{g_{\xi}}^{-}$ is defined by\\
$A_{g_{\xi}}^{-}\Omega=0$,\\
$A_{g_{\xi}}^{-}(h_{\rho_{n}}\otimes \cdots \otimes h_{\rho_{1}})=\left\{
\begin{array}{ll}
\langle g_{\xi}, h_{\rho_{n}}\rangle \cdot h_{\rho_{n-1}}\otimes \cdots \otimes h_{\rho_{1}} & \hbox{if $\xi=\rho_{n}$,} \\
0 & \hbox{otherwise.}
\end{array}
\right.
$\\
\item \textbf{The conservation operator} $A_{g_{\xi}}^{\circ}$ is defined by\\
$A_{g_{\xi}}^{\circ}\Omega=0,$\\
$A_{g_{\xi}}^{\circ}(h_{\rho_{n}}\otimes \cdots \otimes h_{\rho_{1}})=\left\{
\begin{array}{ll}
\langle g_{\xi}, h_{\rho_{n}}\rangle \cdot h_{\rho_{n}}\otimes \cdots \otimes h_{\rho_{1}} & \hbox{if $\xi=\rho_{n}$,} \\
0 & \hbox{otherwise.}
\end{array}
\right.
$\\\\
\end{enumerate}
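To build intuition, the action of these three operators on basis tensors can be modelled in a few lines of Python (our own toy illustration, restricted to the distinguished unit vectors $g_{\xi}$; the strict order \texttt{succ} on $\mathbf{I}$ and all names are assumptions):
\begin{verbatim}
# A basis tensor g_{rho_n} (x) ... (x) g_{rho_1} is encoded as the tuple
# (rho_n, ..., rho_1); the vacuum is the empty tuple.  A state is a dict
# mapping such tuples to coefficients.

def creation(xi, state, succ):
    out = {}
    for word, c in state.items():
        if not word or succ(xi, word[0]):      # need xi > rho_n (or vacuum)
            out[(xi,) + word] = out.get((xi,) + word, 0) + c
    return out

def annihilation(xi, state, succ):
    out = {}
    for word, c in state.items():
        if word and word[0] == xi:             # <g_xi, g_xi> = 1
            out[word[1:]] = out.get(word[1:], 0) + c
    return out

def conservation(xi, state, succ):
    out = {}
    for word, c in state.items():
        if word and word[0] == xi:
            out[word] = out.get(word, 0) + c
    return out
\end{verbatim}
On small states this toy model lets one verify the commutation relations \eqref{cr1} and \eqref{cr3} below by direct computation.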
It follows that these operators are bounded and that $A_{g_{\xi}}^{+}$ and $A_{g_{\xi}}^{-}$ are mutually adjoint (i.e. $(A_{g_{\xi}}^{+})^{*}=A_{g_{\xi}}^{-}$). Moreover, the conservation operator $A_{g_{\xi}}^{\circ}$ is self-adjoint. In addition, we also have the following commutation relations, for the unit vectors $g_{\xi}\in\mathcal{H}_{\xi}$ and $g_{\eta}\in\mathcal{H}_{\eta}$:
\begin{equation}\label{cr1}
A_{g_{\eta}}^{+}A_{g_{\xi}}^{+}=A_{g_{\xi}}^{-}A_{g_{\eta}}^{-}=0 \quad\quad (\text{ for } \quad \xi\succeq \eta),
\end{equation}
and
\begin{equation}\label{cr3}
A_{g_{\xi}}^{-}A_{g_{\eta}}^{\circ}=A_{g_{\xi}}^{\circ}A_{g_{\eta}}^{+}=A_{g_{\xi}}^{-}A_{g_{\eta}}^{+}=A_{g_{\xi}}^{\circ}A_{g_{\eta}}^{\circ}=0 \quad\quad (\text{ for } \quad \eta\neq \xi).
\end{equation}
In what follows we will fix a family of unit vectors $\{g_{\xi}\in \mathcal{H}_{\xi}:\ \xi \in \mathbf {I} \}$ and consider the family of operators
$$A_{\xi}^{\varepsilon}:=A_{g_{\xi}}^{\varepsilon} \text{ for } \varepsilon\in\{\circ, -, +\}.$$
Let $\mathcal{A}$ be the $C^*$-algebra of all bounded operators on $\mathcal{F}_{bm}^{d}(\mathbf {I})$ and let $\varphi(A):=\langle A\Omega, \Omega \rangle$ for $A\in\mathcal{A}$ be the \emph{vacuum state}.
For $\xi\in {\mathbf {I}}$, let $\mathcal{A}_{\xi}:=\text{alg}\{A_{\xi}^{+}, A_{\xi}^{-}\}$ be the $*$-algebra generated by $A_{\xi}^{+}$ and $A_{\xi}^{-}$. In a similar way as in \cite{J.W3} one shows the following theorem.
\begin{thm}\label{bmia}
The algebras $\{\mathcal{A}_{\xi},\ \xi\in {\mathbf {I}}\}$ are bm-independent in $(\mathcal{A}, \varphi)$.
\end{thm}
\subsection{Poisson type limit theorems}
In this subsection we will investigate the limit distributions of the operators $S_{\rho}(\lambda)$ defined on the discrete bm-Fock space $\mathcal{F}_{bm}^{d}(\mathbf {I})$ by
\begin{equation}\label{OPD}
S_{\rho}(\lambda):=\frac{1}{\sqrt{\mathtt{v}(\rho)}}\sum_{\xi\in[0, \rho]_{\mathbf {I}}}(A_{\xi}^{+}+A_{\xi}^{-})+\lambda\sum_{\xi\in[0, \rho]_{\mathbf {I}}}A_{\xi}^{\circ},
\end{equation}
as $\rho \xrightarrow[]{\Pi} \infty$ under the vacuum state $\varphi$.
To this end we define
$$m_{p}(\lambda):=\lim_{\rho \xrightarrow []{\Pi}\infty}\varphi\left((S_{\rho}(\lambda))^{p}\right),$$
the limit of $p$-th moments for such operators.\\
For each $\xi\in \Pi_d$ and $g_{\xi}\in \mathcal{H}_{\xi}$ such that $\lVert g_{\xi}\rVert=1$, as in the previous section we will use the notations $A_{\xi}^{+}:=A_{g_{\xi}}^{+}, A_{\xi}^{-}:=A_{g_{\xi}}^{-}$ and $A_{\xi}^{\circ}:=A_{g_{\xi}}^{\circ}$. Furthermore we assign the numbers $-1, 0, +1$ to the variable $\varepsilon$ according to $\varepsilon= -, \circ, +$.
Now we present the formulation of our main results, which are analogues of the classical Poisson limit theorem for bm-independent random variables, indexed by elements of positive symmetric cones.
\begin{thm}\label{mthm}
For any positive integer $p\geq 2$, one has
\begin{equation}\label{mp}
m_{p}(\lambda):=\lim_{\rho \xrightarrow[]{\Pi} \infty }\varphi(
(S_{\rho}(\lambda))^{p})=\sum_{\pi\in \mathcal{NC}_{2}^{1, i}(p)}\lambda^{s(\pi)}\cdot V(\tilde \pi),
\end{equation}
where
\begin{equation}\label{bmolim}
\lim\limits_{\rho\xrightarrow[]{\Pi} \infty}\frac{|\mathrm{BMO}(\pi, \rho)|}{(\mathtt{v}(\rho))^{b(\tilde \pi)}}=V(\tilde \pi).
\end{equation}
Here $b(\tilde \pi)$ is the number of blocks in the reduced partition $\tilde \pi$ of $\pi $ and the limit \eqref{bmolim} exists for every partition $\pi\in \mathcal{NC}_{2}^{1, i}(p)$. The combinatorial function $V(\tilde \pi):=V_{\Pi_d}(\tilde{\pi})$ depends on the positive cone $\Pi_d$ and its volume characteristic $\gamma_{n}(\Pi_d)$. Moreover, it is multiplicative and satisfies the following recursive formula
\begin{equation}\label{rfv}
V(\tilde \pi)=\left\{
\begin{array}{ll}
1 & \hbox{if $b(\tilde \pi)=1$ or $\tilde \pi=\emptyset$,} \\
\gamma_{|\tilde \pi |}(\Pi_d)\cdot V(\pi') & \hbox{if $\tilde \pi$ consists of a single outer block with the partition $\pi'$ nested inside it,} \\
\prod\limits_{i=1}^{k}V(\pi_{i}) & \hbox{if \hspace{0.14cm}$\tilde \pi=\pi_{1}\cup \pi_{2}\cup \cdots\cup\pi_{k}$,}
\end{array}
\right.
\end{equation}
\end{thm}
\subsection{Examples and remarks}
The following examples and figures explain the rules for the calculations of $V(\tilde \pi)$ for a given partition $\pi\in\mathcal{NC}_{2}^{1, i}(p)$. For example, consider the following partition $\pi$:
$$\pi=\{\{1, 15\}, \{2\}, \{3, 9\}, \{4, 8\}, \{5\}, \{6\}, \{7\}, \{10, 13\}, \{11\}, \{12\}, \{14\}\}\in\mathcal{NC}_{2}^{1, i}(15).$$ Then we can draw the following pictures of $\pi$ and $\tilde{\pi}$:\\\\
\begin{picture}(4.8,2)
\thicklines
\put(-0.1,0){$\pi=$}
\put(1.3,0){\line(0,1){2}}
\put(1.5,0){\line(0,1){1.5}}
\put(1.7,0){\line(0,1){1.5}}
\put(2.1,0){\line(0,1){1}}
\put(2.6,0){\line(0,1){1}}
\put(1.7,1.5){\line(1,0){1.32}}
\put(3,0){\line(0,1){1.5}}
\put(3.5,0){\line(0,1){1.5}}
\put(3.5,1.5){\line(1,0){1.52}}
\put(3.8,0){\line(0,1){1}}
\put(4,0){\line(0,1){0.6}}
\put(4.26,0){\line(0,1){0.6}}
\put(4.5,0){\line(0,1){0.6}}
\put(4.7,0){\line(0,1){1}}
\put(3.8,1){\line(1,0){0.9}}
\put(5,0){\line(0,1){1.5}}
\put(5.5,0){\line(0,1){2}}
\put(5.25,0){\line(0,1){1.5}}
\put(1.3,2){\line(1,0){4.22}}
\end{picture} \hspace{2.5cm} and \hspace{2cm}\begin{picture}(4.8,2)
\thicklines
\put(-0.1,0){$\tilde \pi=$}
\put(1.3,0){\line(0,1){2}}
\put(1.7,0){\line(0,1){1.5}}
\put(1.7,1.5){\line(1,0){1.32}}
\put(3,0){\line(0,1){1.5}}
\put(3.5,0){\line(0,1){1.5}}
\put(3.5,1.5){\line(1,0){1.52}}
\put(3.8,0){\line(0,1){1}}
\put(4.7,0){\line(0,1){1}}
\put(3.8,1){\line(1,0){0.9}}
\put(5,0){\line(0,1){1.5}}
\put(5.5,0){\line(0,1){2}}
\put(1.3,2){\line(1,0){4.22}}
\end{picture}\\\\
Recall that, for $\Pi_2={\mathbb R}_{+}^{2}$ and $\Pi_1=\Lambda_{1}^{1}$, the volume characteristic sequence is $\gamma_{n}=\frac{1}{n^{2}}$, and for $\Pi_3={\mathbb R}_{+}^{3}$, $\gamma_{n}=\frac{1}{n^{3}}$, while for $\Pi_2=\Lambda_{2}^{1}$ and $\Pi_2={\mathtt{Symm}}_2^+(\R)$, $\gamma_{n}=\frac{24}{(3n-1)3n(3n+1)}$. Now we can compute $V(\tilde \pi)$ for such cones using the recursive formula \eqref{rfv}:\\\\
$1.\quad\Pi_2={\mathbb R}_{+}^{2}$ and $\Pi_1=\Lambda_{1}^{1}.$ \\\\
V($\tilde \pi$)=V(\hspace{-0.5cm}\begin{picture}(4.8,2)
\thicklines
\put(1.3,0){\line(0,1){2}}
\put(1.7,0){\line(0,1){1.5}}
\put(1.7,1.5){\line(1,0){1.32}}
\put(3,0){\line(0,1){1.5}}
\put(3.5,0){\line(0,1){1.5}}
\put(3.5,1.5){\line(1,0){1.52}}
\put(3.8,0){\line(0,1){1}}
\put(4.7,0){\line(0,1){1}}
\put(3.8,1){\line(1,0){0.9}}
\put(5,0){\line(0,1){1.5}}
\put(5.5,0){\line(0,1){2}}
\put(1.3,2){\line(1,0){4.22}}
\end{picture}\hspace{0.5cm})=$\frac{1}{16}\cdot$ V\hspace{0.1cm}(\begin{picture}(5,2)
\thicklines
\put(0.19,0){\line(0,1){1.5}}
\put(0.19,1.5){\line(1,0){1.32}}
\put(1.49,0){\line(0,1){1.5}}
\end{picture}\hspace{-1.65cm})$\cdot V(\begin{picture}(5,2)
\thicklines
\put(0.2,0){\line(0,1){1.5}}
\put(0.2,1.5){\line(1,0){1.52}}
\put(0.5,0){\line(0,1){1}}
\put(1.4,0){\line(0,1){1}}
\put(0.5,1){\line(1,0){0.9}}
\put(1.7,0){\line(0,1){1.5}}
\end{picture}$\hspace{-1.55cm})=$\frac{1}{16}\cdot 1\cdot \frac{1}{4}\cdot V(\begin{picture}(5,2)
\thicklines
\put(0.2,0){\line(0,1){1}}
\put(1.1,0){\line(0,1){1}}
\put(0.2,1){\line(1,0){0.9}}
\end{picture}\hspace{-1.85cm})=\frac{1}{64}$
\\\\ $2.\quad\Pi_3={\mathbb R}_{+}^{3}.$\\\\
V($\tilde \pi$)=V(\hspace{-0.5cm}\begin{picture}(4.8,2)
\thicklines
\put(1.3,0){\line(0,1){2}}
\put(1.7,0){\line(0,1){1.5}}
\put(1.7,1.5){\line(1,0){1.32}}
\put(3,0){\line(0,1){1.5}}
\put(3.5,0){\line(0,1){1.5}}
\put(3.5,1.5){\line(1,0){1.52}}
\put(3.8,0){\line(0,1){1}}
\put(4.7,0){\line(0,1){1}}
\put(3.8,1){\line(1,0){0.9}}
\put(5,0){\line(0,1){1.5}}
\put(5.5,0){\line(0,1){2}}
\put(1.3,2){\line(1,0){4.22}}
\end{picture}\hspace{0.5cm})=$\frac{1}{64}\cdot$ V(\begin{picture}(5,2)
\thicklines
\put(0.2,0){\line(0,1){1.5}}
\put(0.2,1.5){\line(1,0){1.32}}
\put(1.5,0){\line(0,1){1.5}}
\end{picture}\hspace{-1.65cm})$\cdot V(\begin{picture}(5,2)
\thicklines
\put(0.2,0){\line(0,1){1.5}}
\put(0.2,1.5){\line(1,0){1.52}}
\put(0.5,0){\line(0,1){1}}
\put(1.4,0){\line(0,1){1}}
\put(0.5,1){\line(1,0){0.9}}
\put(1.7,0){\line(0,1){1.5}}
\end{picture}$\hspace{-1.55cm})=$\frac{1}{64}\cdot 1\cdot \frac{1}{8}\cdot V(\begin{picture}(5,2)
\thicklines
\put(0.2,0){\line(0,1){1}}
\put(1.1,0){\line(0,1){1}}
\put(0.2,1){\line(1,0){0.9}}
\end{picture}\hspace{-1.85cm})=\frac{1}{512}$\\\\
$3.\quad\Pi_2=\Lambda_{2}^{1}$ and $\Pi_2={\mathtt{Symm}}_2^+(\R)$.\\\\
V($\tilde \pi$)=V(\hspace{-0.5cm}\begin{picture}(5,2)
\thicklines
\put(1.3,0){\line(0,1){2}}
\put(1.7,0){\line(0,1){1.5}}
\put(1.7,1.5){\line(1,0){1.32}}
\put(3,0){\line(0,1){1.5}}
\put(3.5,0){\line(0,1){1.5}}
\put(3.5,1.5){\line(1,0){1.52}}
\put(3.8,0){\line(0,1){1}}
\put(4.7,0){\line(0,1){1}}
\put(3.8,1){\line(1,0){0.9}}
\put(5,0){\line(0,1){1.5}}
\put(5.5,0){\line(0,1){2}}
\put(1.3,2){\line(1,0){4.22}}
\end{picture}\hspace{0.4cm})=$\frac{2}{143}\cdot$ V(\begin{picture}(5,2)
\thicklines
\put(0.2,0){\line(0,1){1.5}}
\put(0.2,1.5){\line(1,0){1.32}}
\put(1.5,0){\line(0,1){1.5}}
\end{picture}\hspace{-1.65cm})$\cdot V(\begin{picture}(5,2)
\thicklines
\put(0.2,0){\line(0,1){1.5}}
\put(0.2,1.5){\line(1,0){1.52}}
\put(0.5,0){\line(0,1){1}}
\put(1.4,0){\line(0,1){1}}
\put(0.5,1){\line(1,0){0.9}}
\put(1.7,0){\line(0,1){1.5}}
\end{picture}$\hspace{-1.55cm})=$\frac{2}{143}\cdot 1\cdot \frac{12}{105}\cdot V(\begin{picture}(5,2)
\thicklines
\put(0.2,0){\line(0,1){1}}
\put(1.1,0){\line(0,1){1}}
\put(0.2,1){\line(1,0){0.9}}
\end{picture}\hspace{-1.85cm})=\frac{8}{5005}$\\\\
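As a quick sanity check (our own addition), each of the three values above is simply $\gamma_{4}\cdot\gamma_{2}$ for the respective cone, which can be verified with exact arithmetic:
\begin{verbatim}
from fractions import Fraction

def gamma_R(d):                    # gamma_n for the cone R_+^d
    return lambda n: Fraction(1, n**d)

def gamma_rank2(n):                # gamma_n for Lambda_2^1 and Symm_2^+(R)
    return Fraction(24, (3*n - 1) * 3*n * (3*n + 1))

for g in (gamma_R(2), gamma_R(3), gamma_rank2):
    print(g(4) * g(2))             # 1/64, 1/512, 8/5005
\end{verbatim}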
For illustration, the first few moments ($m_{1}(\lambda), \ldots, m_{6}(\lambda)$) of the monotone Poisson distribution are listed below in comparison with some of their bm-Poisson analogues (observe that always $m_{1}(\lambda)=0$, $m_{2}(\lambda)=1$ and $m_{3}(\lambda)=\lambda$).
\vspace{1cm}
\begin{center}
\hspace{0.5cm}
\begin{tabular}{c@{\qquad\qquad}c}
Monotone Poisson distribution & bm-case for $\Pi_3={\mathbb R}_{+}^{3}$ \\[0.2cm]
$0$ & $0$\\[0.2cm]
$1$ & $1$\\[0.2cm]
$\lambda$ & $\lambda$ \\[0.2cm]
$\lambda^{2}+\frac{3}{2}$ & $\lambda^{2}+\frac{9}{8}$\\[0.2cm]
$\lambda^{3}+\frac{7}{2}\lambda$ & $\lambda^{3}+\frac{19}{8}\lambda$ \\[0.2cm]
$\lambda^{4}+\frac{9}{2}\lambda^{2}+\frac{5}{2}$ & $\lambda^{4}+\frac{15}{4}\lambda^{2}+\frac{31}{24}$ \\
\end{tabular}
\end{center}
\vspace{0.2cm}
\begin{center}
\hspace{0.5cm}
\begin{tabular}{c@{\qquad\qquad}c}
bm-cases for $\Pi_2=\mathbb R_+^2$ and $\Pi_1=\Lambda_{1}^{1}$ & bm-cases for $\Pi_2=\Lambda_{2}^{1}$ and $\Pi_2={\mathtt{Symm}}_2^+(\R)$ \\[0.2cm]
$0$ & $0$\\[0.2cm]
$1$ & $1$ \\[0.2cm]
$\lambda$ & $\lambda$\\[0.2cm]
$\lambda^{2}+\frac{5}{4}$ & $\lambda^{2}+\frac{39}{35}$ \\[0.2cm]
$\lambda^{3}+\frac{11}{4}\lambda$ & $\lambda^{3}+\frac{82}{35}\lambda$ \\[0.2cm]
$\lambda^{4}+\frac{9}{2}\lambda^{2}+\frac{59}{36}$ & $\lambda^{4}+\frac{129}{35}\lambda^{2}+\frac{443}{350}$ \\
\end{tabular}
\end{center}
\vspace{1cm}
In the special case $\lambda=0$, the operators \eqref{OPD} become
$$S_{\rho}(0)=\frac{1}{\sqrt{\mathtt{v}(\rho)}}\sum_{\xi\preceq \rho, \hspace{0.1cm}\xi\in\mathbf {I}}(A_{\xi}^{+}+A_{\xi}^{-}).$$
Since $\{A_{\xi}^{+}+A_{\xi}^{-}, \xi\in{\mathbf {I}}\}$ are bm-independent in $(\mathcal{A}, \varphi)$, we obtain a new example of a noncommutative ``de Moivre-Laplace theorem'', which is a kind of central limit theorem. Namely, the limit distribution of the operators $S_{\rho}(0)$, as $\rho\xrightarrow[]{\Pi} \infty$, under the vacuum state $\varphi$ is given by
$$m_{2n+1}(0):=\lim_{\rho\xrightarrow[]{\Pi} \infty}\varphi((S_{\rho}(0))^{2n+1})=0\quad\text{ and }\quad m_{2n}(0):=\lim_{\rho\xrightarrow[]{\Pi}\infty}\varphi((S_{\rho}(0))^{2n})=\sum_{\pi\in\mathcal{NC}_{2}(2n)}V(\pi).$$
On the other hand, by the bm-central limit theorem \cite{J.W3} the even moments $m_{2n}(0)$ can be written as follows
$$m_{2n}(0)=g_{n}=\sum_{k=1}^{n}\gamma_{k}(\Pi_d)g_{k-1}g_{n-k}, \quad\quad g_{0}=g_{1}=1.$$
Hence, we have another combinatorial description for the bm-central limit theorem associated with symmetric cones \cite{J.W3}.\\
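For the reader's convenience, the even-moment sequence $g_{n}$ can be computed directly from the volume characteristic; the following short Python sketch (our own addition) does this with exact rational arithmetic:
\begin{verbatim}
from fractions import Fraction

def even_moments(gamma, N):
    """g_0 = g_1 = 1,  g_n = sum_{k=1}^{n} gamma(k)*g_{k-1}*g_{n-k}."""
    g = [Fraction(1), Fraction(1)]
    for n in range(2, N + 1):
        g.append(sum(gamma(k) * g[k - 1] * g[n - k] for k in range(1, n + 1)))
    return g

# For R_+^3 (gamma_n = 1/n^3): g_2 = 9/8 and g_3 = 31/24, matching the
# constant terms of m_4 and m_6 in the table above.
print(even_moments(lambda n: Fraction(1, n**3), 3))
\end{verbatim}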
It is worthwhile to mention that it is also possible to extend the results of this paper to two classes of positive cones studied in \cite{OW21019} which are non-symmetric. Namely, the sectorial cones
$$\Omega_{\bm u}^{n}:=\biggl\{\sum_{j=1}^{n}a_{j}u_{j}: \bm u:=(u_{1}, \ldots, u_{n}), a_{j}\geq 0, u_{j}\in\mathbb{R}^{n} \text{ and } u_{1}, \ldots, u_{n} \text{ are linearly independent} \biggr\}$$
and the circular cones
$$C_{\theta}^{n}:=\{(t; x)\in \mathbb{R}_{+}\times \mathbb{R}^{n}: ||x||\leq t\cdot \tan \theta\}.$$
In what follows we will consider the subspace
$$\mathcal{F}_{0}:=\text{ span }\{\Omega, g_{\xi_{n}}\otimes \cdots \otimes g_{\xi_{1}}: \xi_{n}\succ \xi_{n-1}\succ \cdots \succ \xi_{1},\ n \in \mathbb N\},
$$
and by $\mathtt {B}_{\xi}^{\varepsilon}:=A_{\xi}^{\varepsilon}|_{_{\mathcal{F}_{0}}}$ for $\varepsilon\in\{-, \circ, +\}$ and $\xi\in \mathbf {I}$ we will denote the restrictions of $A_{\xi}^{\varepsilon}$ to the subspace $\mathcal{F}_{0}$. Observe that each $A_{\xi}^{\varepsilon}$ preserves the subspace $\mathcal{F}_{0}$, and moreover
\begin{equation}\label{eqsub}
\mathtt {B}_{\xi}^{\circ}=\mathtt {B}_{\xi}^{+}\mathtt {B}_{\xi}^{-} \text { in } \mathcal{F}_{0}.
\end{equation}
Let $\mathcal{C}$ be the unital $*$-algebra of all bounded operators on $\mathcal{F}_{0}$. Hence, if we define $\mathcal{C}_{\xi}$ to be the $*$-algebra generated by $\mathtt {B}_{\xi}^{+}, \mathtt {B}_{\xi}^{-}$ and $\mathtt {B}_{\xi}^{\circ}$, we get the following crucial corollary.
\begin{cor}\label{bmc}
The algebras $\{\mathcal{C}_{\xi}, \xi\in {\mathbf {I}}\}$ are bm-independent with respect to $\varphi$ in $(\mathcal{C}, \varphi).$
\end{cor}
\subsection{Proof of Theorem \ref{mthm}}
The proof of Theorem \ref{mthm} consists of several reductions and combinatorial considerations. First we will show that the limit \eqref{mp} can be reduced combinatorially to a sum over noncrossing partitions $\pi$ with pair or inner singleton blocks only. Next, we shall prove that the only terms which survive in the limit are those which satisfy the bm-ordered labellings $\mu\trianglelefteq_{\mathtt{L}}\pi$ for a partition $\pi$. Finally, we shall show that the limit can be computed as the limit (for $\rho \xrightarrow[]{\Pi}\infty$) of the ratio of the cardinality $|\mathrm{BMO}(\pi, \rho)|$ to the Euclidean volume $\mathtt{v}(\rho)^{b(\tilde{\pi})}$, where $\pi\in\mathcal{NC}_{2}^{1, i}(p)$ and $\tilde{\pi}$ is the reduced partition of $\pi$. The rest of the proof then goes in a similar manner as for Theorem 6.1 in \cite{bmPoisson}.
\subsubsection{Combinatorial reduction}
We start the proof with the observation that $\varphi((S_{\rho}(\lambda))^{p})$ can be written as
\begin{align}\label{credu1}
\varphi((S_{\rho}(\lambda))^{p})&=\frac{1}{\mathtt{v}(\rho)^{\frac{p}{2}}}\sum_{(\varepsilon_{p}, \ldots, \varepsilon_{1})\in \{-1, 0, +1\}^{p}}\hspace{0.2cm}\sum_{(\xi_{p}, \ldots, \xi_{1})\in [0, \rho]_{\mathbf {I}}^{p}}\varphi(A_{\xi_{p}}^{\varepsilon_{p}} \cdots A_{\xi_{1}}^{\varepsilon_{1}})\lambda_{\rho}^{\sum_{i=1}^{p}\delta_{0}(\varepsilon_{i})}\nonumber\\
&=\frac{1}{\mathtt{v}(\rho)^{\frac{p}{2}}}\sum_{(\varepsilon_{p}, \ldots, \varepsilon_{1})\in \{-1, 0, +1\}^{p}}\hspace{0.2cm}\sum_{(\xi_{p}, \ldots, \xi_{1})\in [0, \rho]_{\mathbf {I}}^{p}}\varphi(\mathtt {B}_{\xi_{p}}^{\varepsilon_{p}} \cdots \mathtt {B}_{\xi_{1}}^{\varepsilon_{1}})\lambda_{\rho}^{\sum_{i=1}^{p}\delta_{0}(\varepsilon_{i})}
\end{align}
where $$\delta_{0}(\varepsilon_{i})=\left\{
\begin{array}{ll}
1 & \hbox{if $\varepsilon_{i}=0$} \\
0 & \hbox{otherwise,}
\end{array}
\right.$$
and $\lambda_{\rho}=\lambda\cdot \sqrt{\mathtt{v}(\rho)}.$\\
For $p\in {\mathbb N}$, we will use the following notations $\bm{\varepsilon}:=(\varepsilon_{p},\ldots, \varepsilon_{1})\in \{-1, 0, +1\}^{p}$, $\bm{\xi}:=(\xi_{p}, \ldots, \xi_{1})\in [0, \rho]_{{\mathbf {I}}}^{p}$ and $\mathtt {B}_{\bm\xi}^{\bm \varepsilon}:=\mathtt {B}_{\xi_{p}}^{\varepsilon_{p}} \cdots \mathtt {B}_{\xi_{1}}^{\varepsilon_{1}}$ .
\begin{lemm}\label{lem1}
For a positive integer $p\geq 2$, let $\bm{\varepsilon}\in \{-1, 0, +1\}^{p}$ and $\bm \xi\in [0, \rho]_{{\mathbf {I}}}^{p}$.\\
If $\varphi(\mathtt {B}_{\bm \xi}^{\bm \varepsilon})\neq 0$, then the sequence $\bm \varepsilon$ satisfies the following three conditions:
\begin{enumerate}
\item $\varepsilon_{1}=+1, \varepsilon_{p}=-1,$
\item $\sum\limits_{i=1}^{p}\varepsilon_{i}=0,$
\item $\sum\limits_{i=1}^{k}\varepsilon_{i}\geq 0$\quad for $ k=1, \ldots, p-1.$
\end{enumerate}
\end{lemm}
\noindent{\bf Proof: }
We proceed by induction on $p\geq 2$. Observe that for $p=1$ we have $\varphi(\mathtt {B}_{\xi_{1}}^{\varepsilon_{1}})=0$ for all $\varepsilon_{1}\in\{-1, 0, +1\}$.\\
For $p=2$, the assumption $\varphi(\mathtt {B}_{\xi_{2}}^{\varepsilon_{2}}\mathtt {B}_{\xi_{1}}^{\varepsilon_{1}})=\langle \mathtt {B}_{\xi_{1}}^{\varepsilon_{1}}\Omega, \mathtt {B}_{\xi_{2}}^{-\varepsilon_{2}}\Omega \rangle \neq 0$ implies that $\varepsilon_{1}=+1$ and $\varepsilon_{2}=-1$.\\
Let us see how the induction works in the case $p=3$. We have
$$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})=\langle \mathtt {B}_{\xi_{3}}^{\varepsilon_{3}}\mathtt {B}_{\xi_{2}}^{\varepsilon_{2}}\mathtt {B}_{\xi_{1}}^{\varepsilon_{1}}\Omega, \Omega\rangle=\langle \mathtt {B}_{\xi_{2}}^{\varepsilon_{2}}\mathtt {B}_{\xi_{1}}^{\varepsilon_{1}}\Omega, \mathtt {B}_{\xi_{3}}^{-\varepsilon_{3}}\Omega\rangle.$$ Hence
$$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})\neq 0\quad \Longrightarrow\quad \varepsilon_{1}=+1,\ \varepsilon_{3}=-1 \quad\text{ and }\quad \varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})=\langle \mathtt {B}_{\xi_{2}}^{\varepsilon_{2}}g_{\xi_{1}}, g_{\xi_{3}}\rangle\neq 0.$$
Thus, $\varepsilon_{2}=0$ and $\xi_{1}=\xi_{2}=\xi_{3}$.\\
Assume that the lemma holds for all $q<p$ and that $\varphi(\mathtt {B}_{\bm \xi}^{\bm \varepsilon})\neq 0$. Let $\bm \varepsilon \in \{-1, 0, +1\}^{p}$ and $\bm{\xi}\in [0, \rho]_{{\mathbf {I}}}^{p}$. The assumption $\varphi(\mathtt {B}_{\bm{\xi}}^{\bm{\varepsilon}})=\langle \mathtt {B}_{\xi_{p-1}}^{\varepsilon_{p-1}}\cdots \mathtt {B}_{\xi_{1}}^{\varepsilon_{1}}\Omega , \mathtt {B}_{\xi_{p}}^{-\varepsilon_{p}}\Omega\rangle\neq 0$ implies that $\varepsilon_{1}=+1$ and $ \varepsilon_{p}=-1$. It is straightforward to check that there exists a smallest integer $2\leq j\leq p$ such that $\xi_{1}=\xi_{j}$. Then $\xi_{k}\succ \xi_{1}=\xi_{j}$ for all $ 1<k<j$, and we have the following
\begin{equation}\label{eql11}
\mathtt {B}_{\xi_{j}}^{\varepsilon_{j}}\mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{1}}^{+}\Omega=\mathtt {B}_{\xi_{j}}^{\varepsilon_{j}}\mathtt {B}_{\xi_{1}}^{+}\Omega=\mathtt {B}_{\xi_{j}}^{\varepsilon_{j}}g_{\xi_{1}}\neq 0,
\end{equation}
\begin{equation}\label{eql12}
\varphi(\mathtt {B}_{\xi_{p}}^{\varepsilon_{p}}\cdots \mathtt {B}_{\xi_{1}}^{\varepsilon_{1}})=\varphi(\mathtt {B}_{\xi_{p}}^{\varepsilon_{p}}\cdots \mathtt {B}_{\xi_{j+1}}^{\varepsilon_{j+1}}\mathtt {B}_{\xi_{j}}^{\varepsilon_{j}}\mathtt {B}_{\xi_{1}}^{+})\neq 0.
\end{equation}
Since $\xi_{1}=\xi_{j}$, then $\varepsilon_{j}\in\{-1, 0\}$:
\begin{itemize}
\item Case $\varepsilon_{j}=-1:$\\
From \eqref{eql11}, one has
\begin{equation*}
\mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{3}}^{\varepsilon_{3}}\mathtt {B}_{\xi_{2}}^{\varepsilon_{2}}\Omega=\Omega \text{ and } \varphi(\mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{2}}^{\varepsilon_{2}})=1\neq 0.
\end{equation*}
Using the induction hypothesis, we find
\begin{equation*}
\sum_{i=2}^{j-1}\varepsilon_{i}=0\quad \text{ and } \quad \sum_{i=2}^{k}\varepsilon_{i}\geq 0, \quad \forall \hspace{0.1cm}k=2, \ldots, j-2.
\end{equation*}
Furthermore,
\begin{equation}\label{eql13b}
\sum_{i=1}^{j}\varepsilon_{i}=0\quad \text{ and }\quad \sum_{i=1}^{k}\varepsilon_{i}\geq 0, \quad \forall \hspace{0.1cm}k=1, \ldots, j-1,
\end{equation}
since $\varepsilon_{1}=+1$ and $\varepsilon_{j}=-1$.\\
Moreover, from \eqref{eql12} we get $\varphi(\mathtt {B}_{\bm \xi}^{\bm \varepsilon})=\varphi(\mathtt {B}_{\xi_{p}}^{\varepsilon_{p}}\cdots \mathtt {B}_{\xi_{j+1}}^{\varepsilon_{j+1}})\neq 0$, and using the induction hypothesis, we obtain that
\begin{equation}\label{eql14b}
\sum_{i=j+1}^{p}\varepsilon_{i}=0\quad \text{ and } \quad \sum_{i=j+1}^{k}\varepsilon_{i}\geq 0, \quad \forall \hspace{0.1cm}k=j+1, \cdots, p-1.
\end{equation}
Finally, combining \eqref{eql13b} and \eqref{eql14b}, yields
$$\sum_{i=1}^{p}\varepsilon_{i}=0\quad \text{ and } \quad \sum_{i=1}^{k}\varepsilon_{i}\geq 0,\quad \forall\hspace{0.1cm} k=1, \cdots, p-1.$$
\item Case $\varepsilon_{j}=0$:\\
From the case above, we know that
\begin{equation*}
\sum_{i=2}^{j-1}\varepsilon_{i}=0\quad \text{ and } \quad \sum_{i=2}^{k}\varepsilon_{i}\geq 0, \quad \forall \hspace{0.1cm}k=2, \cdots, j-2.
\end{equation*}
Since $\varepsilon_{j}=0$, then
\begin{equation}\label{eql13}
\sum_{i=2}^{j}\varepsilon_{i}=0\quad \text{ and } \quad \sum_{i=2}^{k}\varepsilon_{i}\geq 0, \quad \forall \hspace{0.1cm}k=2, \cdots, j-1.
\end{equation}
On the other hand, from \eqref{eql12} we have
$$\varphi(\mathtt {B}_{\bm \xi}^{\bm\varepsilon})=\varphi(\mathtt {B}_{\xi_{p}}^{\varepsilon_{p}}\cdots \mathtt {B}_{\xi_{j+1}}^{\varepsilon_{j+1}}\mathtt {B}_{\xi_{1}}^{+})\neq 0,$$
which, by the induction assumption, implies that
\begin{equation}\label{eql14}
\varepsilon_{1}+\sum_{i=j+1}^{p}\varepsilon_{i}=0, \quad\varepsilon_{1}\geq 0\quad \text{ and }\quad \varepsilon_{1}+\sum_{i=j+1}^{k}\varepsilon_{i}\geq 0, \quad \forall \hspace{0.1cm}k=j+1, \cdots, p-1.
\end{equation}
Therefore, by \eqref{eql13} and \eqref{eql14}, we obtain that
$$\sum_{i=1}^{p}\varepsilon_{i}=0\quad \text{ and } \quad \sum_{i=1}^{k}\varepsilon_{i}\geq 0,\quad \forall\hspace{0.1cm} k=1, \cdots, p-1.$$
\end {itemize}
\hfill $\Box$\\
\begin{rem}
For the case $\varepsilon_{j}=0$ in the proof of Lemma \ref{lem1} above, we can also use \eqref{eqsub} and apply the same arguments as for the case $\varepsilon_{j}=-1$.
\end{rem}
From now on, we will use ${\mathtt {D}}_{p}$ to denote the set of all sequences $\bm \varepsilon=(\varepsilon_{p}, \ldots, \varepsilon_{1})\in \{-1, 0, +1\}^{p}$ satisfying the following three conditions:
\begin{enumerate}
\item $\varepsilon_{1}=+1, \varepsilon_{p}=-1,$
\item $\sum\limits_{i=1}^{p}\varepsilon_{i}=0,$
\item $\sum\limits_{i=1}^{k}\varepsilon_{i}\geq 0, \quad \forall k=1, \ldots, p-1.$
\end{enumerate}
According to Lemma \ref{lem1} and using the above notations,~\eqref{credu1} becomes
\begin{equation}\label{credu2}
\varphi((S_{\rho}(\lambda))^{p})=\frac{1}{\mathtt{v}(\rho)^{\frac{p}{2}}}\sum_{\bm \varepsilon\in \mathtt {D}_{p}}\hspace{0.2cm}\sum_{\bm \xi\in [0, \rho]_{{\mathbf {I}}}^{p}}\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})\lambda_{\rho}^{\sum_{i=1}^{p}\delta_{0}(\varepsilon_{i})}.
\end{equation}
For a fixed sequence $\bm{\varepsilon}\in \mathtt {D}_{p}$ and for each $1\leq k\leq p-1$ ($p\geq 2$) such that $\varepsilon_{k}=+1$, we define
\begin{equation}\label{tk}
T(k):=\min \{l:\ l>k \ \text{ and }\ \varepsilon_{k}+\cdots+\varepsilon_{l}=0\}.
\end{equation}
The inequality $\varepsilon_{k}+\varepsilon_{k+1}+\cdots+\varepsilon_{p}\leq 0$ (since $\sum\limits_{i=1}^{p}\varepsilon_{i}=0$ and $\sum\limits_{i=1}^{k-1}\varepsilon_{i}\geq 0)$ justifies the existence of such a number $T(k)$. Hence
$$\varepsilon_{T(k)}=-1.$$
Furthermore, if there exists $k<m<T(k)$ such that $\varepsilon_{m}=+1$, then
$$\varepsilon_{k}+\cdots+\varepsilon_{m-1}>0 \text{ and } \varepsilon_{m}+\varepsilon_{m+1}+\cdots+\varepsilon_{T(k)}<0.$$
This implies that
$$k<m<T(m)<T(k).$$
Therefore, if we define a partition
$$\pi:=\{\{k, T(k)\}: \varepsilon_{k}=+1\} \cup\{\{j\}: \varepsilon_{j}=0\},$$
we obtain $\pi\in \mathcal{NC}_{2}^{1}(p)$. In other words, for any sequence $\bm \varepsilon\in \mathtt {D}_{p}$ we can uniquely associate a noncrossing partition with blocks being a pair or a singleton, and we will often use the identification $\mathtt {D}_{p}\ni \bm\varepsilon\equiv \pi\in\mathcal{NC}_{2}^{1}(p)$.
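The matching $k\mapsto T(k)$ is the familiar Dyck-path pairing, so the identification can be sketched in a few lines of Python (our own illustration; here \texttt{eps} is read left to right, i.e. \texttt{eps[0]}$=\varepsilon_{1}$):
\begin{verbatim}
def partition_of(eps):
    """Map a sequence in D_p to {{k, T(k)}} together with the singletons."""
    stack, blocks = [], []
    for pos, e in enumerate(eps, start=1):
        if e == +1:
            stack.append(pos)                  # open a pair at position k
        elif e == -1:
            blocks.append({stack.pop(), pos})  # close it at T(k)
        else:
            blocks.append({pos})               # a singleton
    assert not stack                           # guaranteed since sum(eps) == 0
    return blocks

print(partition_of([+1, 0, +1, -1, -1]))       # [{2}, {3, 4}, {1, 5}]
\end{verbatim}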
\begin{lemm}\label{lem2}
For $p\geq 2$, let $\bm \varepsilon\in \mathtt {D}_{p}, \bm\xi\in [0, \rho]_{{\mathbf {I}}}^{p}$ and $1\leq k\leq p-1$ such that $\varepsilon_{k}=+1$.
If $\varphi(\mathtt {B}_{\bm{\xi}}^{\bm{\varepsilon}})\neq 0$ then $\xi_{k}=\xi_{T(k)}.$
\end{lemm}
\noindent{\bf Proof: }
We use induction over $p\geq 2$:\\
For $p=2$, we have just one pair block $\{1, T(1)=2\}$; then $\varphi(\mathtt {B}_{\xi_{2}}^{\varepsilon_{2}}\mathtt {B}_{\xi_{1}}^{\varepsilon_{1}})=\langle \mathtt {B}_{\xi_{2}}^{\varepsilon_{2}}\mathtt {B}_{\xi_{1}}^{\varepsilon_{1}}\Omega, \Omega\rangle\neq 0$ implies that $\xi_{1}=\xi_{2}=\xi_{T(1)}$ and $\varepsilon_{1}=+1,\varepsilon_{2}=-1$.\\
For $p=3$, we have the sequences $\bm\varepsilon=(-1, 0, +1)$ and $\bm\xi=(\xi_{3},\xi_{2},\xi_{1})$. Hence, $T(1)=3$ and the assumption $\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})\neq 0$ yields $\xi_{1}=\xi_{2}=\xi_{3}=\xi_{T(1)}$.\\
We assume that the lemma holds for all $r<p$ and that $\varphi(\mathtt {B}_{\bm \xi}^{\bm \varepsilon})\neq 0$. Consider
$$\text{$1\leq k_{0}\leq p-1$ such that $ \varepsilon_{k_{0}}=+1$ and $T(k_{0})\leq T(k)$ for all $ 1\leq k\leq p-1$ with $\varepsilon_{k}=+1$},$$
that is, $$T(k_{0}):=\min \{ T(k): \varepsilon_{k}=+1,\quad 1\leq k\leq p-1\}.$$\\
Then, if $T(k_{0})-k_{0}\geq 2$, we have $\varepsilon_{j}=0$ for all $k_{0}<j<T(k_{0})$, and $\varepsilon_{i}\in\{0, +1\}$ for all $1\leq i\leq k_{0}$. If $T(k_{0})=p$, then $k_{0}=1$. In such a case, we have
$$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})=\varphi(\mathtt {B}_{\xi_{p}}^{-}\mathtt {B}_{\xi_{p-1}}^{\circ}\cdots \mathtt {B}_{\xi_{2}}^{\circ}\mathtt {B}_{\xi_{1}}^{+})\neq 0.$$
Hence,
$$\xi_{1}=\xi_{2}=\cdots=\xi_{p-1}=\xi_{p}=\xi_{T(1)}.$$
On the other hand, for $T(k_{0})< p$, the assumption $\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})\neq 0$ yields
\begin{equation*}
\mathtt {B}_{\xi_{k_{0}}}^{+}(\mathtt {B}_{\xi_{k_{0}-1}}^{\varepsilon_{k_{0}-1}}\cdots \mathtt {B}_{\xi_{1}}^{+})\Omega\neq 0 \Longrightarrow \mathtt {B}_{\xi_{k_{0}}}^{+}(\mathtt {B}_{\xi_{k_{0}-1}}^{\varepsilon_{k_{0}-1}}\cdots \mathtt {B}_{\xi_{1}}^{+})\Omega=\mathtt {B}_{\xi_{k_{0}}}^{+}w=g_{\xi_{k_{0}}}\otimes w\neq 0,
\end{equation*}
where
$$w=g_{\xi_{s_{l}}}\otimes \cdots \otimes g_{\xi_{s_{1}}}\otimes g_{\xi_{1}} \text{ and
} \xi_{s_{1}}, \ldots, \xi_{s_{l}}\in \{\xi_{2}, \ldots, \xi_{k_{0}-1}\} \text{ with }\xi_{k_{0}}\succ \xi_{s_{l}}\succ \cdots \succ \xi_{s_{1}}\succ \xi_{1}.$$
Then,
$$\mathtt {B}_{\xi_{T(k_{0})-1}}^{\circ}\cdots \mathtt {B}_{\xi_{k_{0}+1}}^{\circ}g_{\xi_{k_{0}}}\otimes w\neq 0\Longrightarrow \xi_{T(k_{0})-1}=\cdots =\xi_{k_{0}+1}=\xi_{k_{0}},$$
and
$$\mathtt {B}_{\xi_{T(k_{0})}}^{-}(\mathtt {B}_{\xi_{T(k_{0})-1}}^{\circ}\cdots \mathtt {B}_{\xi_{k_{0}+1}}^{\circ})g_{\xi_{k_{0}}}\otimes w=\mathtt {B}_{\xi_{T(k_{0})}}^{-}(\mathtt {B}_{\xi_{T(k_{0})}}^{\circ})^{n}g_{\xi_{k_{0}}}\otimes w=\mathtt {B}_{\xi_{T(k_{0})}}^{-}g_{\xi_{k_0}}\otimes w\neq 0,$$
where $n=T(k_{0})-1-k_{0}$.\\
Therefore,
$$\xi_{T(k_{0})}=\xi_{k_{0}} \text{ and } \varphi(\mathtt {B}_{\bm \xi}^{\bm \varepsilon})=\varphi(\mathtt {B}_{\bm{\xi}'}^{\bm{\varepsilon}'}),$$
where
$$\bm {\varepsilon}'=(\varepsilon_{p}, \ldots, \varepsilon_{T(k_{0})+1}, \varepsilon_{k_{0}-1}, \ldots, \varepsilon_{1}) \text{ and } \bm{\xi}'=(\xi_{p}, \ldots, \xi_{T(k_{0})+1}, \xi_{k_{0}-1}, \ldots, \xi_{1}).$$
Since $r=p-(n+2)<p$, we can use the induction hypothesis for the sequences $\bm{\varepsilon}'$ and $ \bm{\xi}'$.
\hfill $\Box$\\
\begin{cor}\label{cor1}
Let $\bm{\varepsilon}\in\mathtt {D}_{p}$ and $\bm{\xi}\in[0, \rho]_{\mathbf {I}}^{p}$. If $\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})\neq 0$, then the singletons in the partition $\pi=\{\{k, T(k)\}: \varepsilon_{k}=+1\}\cup\{\{j\}: \varepsilon_{j}=0\}$ associated to the sequence $\bm \varepsilon$ cannot be outer blocks, i.e. $\pi\in \mathcal{NC}_{2}^{1, i}(p)$.
\end{cor}
Using Lemma \ref{lem2} and Corollary \ref{cor1}, the identity \eqref{credu2} can be written as follows:
\begin{equation}\label{credu3}
\varphi((S_{\rho}(\lambda))^{p})=\frac{1}{\mathtt{v}(\rho)^{\frac{p}{2}}}\sum_{\pi\in \mathcal{NC}_{2}^{1, i}(p)}\sum_{\pi\sim \bm \xi\in [0, \rho]_{{\mathbf {I}}}^{p}}\varphi(\mathtt {B}_{\bm \xi}^{\bm \varepsilon})\lambda^{s(\pi)},
\end{equation}
where the second summation is over all sequences $\bm \xi=(\xi_{p}, \ldots, \xi_{1})\in[0, \rho]_{{\mathbf {I}}}^{p}$ associated with a given partition $\pi\in \mathcal{NC}_{2}^{1, i}(p)$.
\subsubsection{Reduction to bm-ordered noncrossing partitions with pair or singleton blocks:}
In the next lemma we will show that the only terms which survive in the limit \eqref{mp} come from those sequences which establish bm-order on noncrossing partitions with pair or inner singleton blocks, i.e., they satisfy the two conditions in Definition \ref{bmorder}.
\begin{lemm}[bm-order]
Let $\pi\in \mathcal{NC}_{2}^{1, i}(p)\subset\mathcal{NC}_{2}^{1}(p)$ be a noncrossing partition with pair or inner singleton blocks given by the sequences $\bm \varepsilon\in \mathtt {D}_{p}$ and $\bm{\xi}\in[0, \rho]_{\mathbf {I}}^{p}$ such that $\bm{\xi}\sim\pi$. If $\varphi(\mathtt {B}_{\bm \xi}^{\bm \varepsilon})\neq 0$ then the sequence $\bm \xi$ establishes bm-order on $\pi$, i.e. $\bm \xi \unlhd \pi $.
\end{lemm}
\noindent{\bf Proof: }
We proceed by induction on $p\geq 2$:\\
For $p=2$, we have $\bm \varepsilon=(-1, +1), \bm\xi=(\xi_{2}, \xi_{1})$ and $\pi=\{\{1, 2\}\}$. Then $\varphi(\mathtt {B}_{\bm \xi}^{\bm \varepsilon})\neq 0$ implies that $\xi_{1}=\xi_{2}$ and then $\bm \xi=(\xi_{2}, \xi_{1})$ establishes bm-order on $\pi=\{\{1, 2\}\}$.\\
For $p=3$, our sequences are $\bm\varepsilon=(-1, 0, +1)$ and $\bm \xi=(\xi_{3}, \xi_{2}, \xi_{1})$ with the associated partition $\pi=\{\{1, 3\}, \{2\}\}$. Then
$$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})\neq 0\Longrightarrow \mathtt {B}_{\xi_{3}}^{-}\mathtt {B}_{\xi_{2}}^{0}\mathtt {B}_{\xi_{1}}^{+}\Omega\neq 0\Longrightarrow \xi_{1}=\xi_{2}=\xi_{3},$$
and $\bm\xi=(\xi_{3}, \xi_{2}, \xi_{1})$ establishes bm-order on $\pi=\{\{1, 3\},\{2\}\}$.\\
Let us consider $\bm\xi=(\xi_{p}, \ldots, \xi_{1})\sim \pi\in \mathcal{NC}_{2}^{1, i}(p)$ and $\bm{\varepsilon}=(\varepsilon_{p}, \ldots, \varepsilon_{1})\in \mathtt {D}_{p}$. We assume that the lemma holds for every $q<p$ and $\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})\neq 0$. We have the following two cases, depending on the position of $T(1)$.
\begin{enumerate}
\item $T(1)<p:$\\
Using the notations: $C:=\mathtt {B}_{\xi_{T(1)}}^{\varepsilon_{T(1)}} \cdots \mathtt {B}_{\xi_{1}}^{+}$ and $D:=\mathtt {B}_{\xi_{p}}^{-} \cdots \mathtt {B}_{\xi_{T(1)+1}}^{\varepsilon_{T(1)+1}}$, we have
$$\varphi(\mathtt {B}_{\bm \xi}^{\bm \varepsilon})=\varphi(DC)\neq 0\Longrightarrow C\Omega\neq 0.$$
Since $\xi_{1}=\xi_{T(1)}$ and $\varepsilon_{T(1)}=-1$, one obtains
$$C\Omega=\Omega \text{ and } \varphi(DC)=\varphi(D)\neq 0.$$
Then $\varepsilon_{T(1)+1}=+1$.\\
If we denote by $\bm{\varepsilon}':=(\varepsilon_{T(1)}, \ldots, \varepsilon_{1}), \bm{\xi}':=(\xi_{T(1)}, \ldots, \xi_{1}), \bm {\varepsilon}'':=(\varepsilon_{p}, \ldots, \varepsilon_{T(1)+1})$ and $\bm{\xi}'':=(\xi_{p}, \ldots, \xi_{T(1)+1})$, then $\bm{\varepsilon}'\equiv\pi'\in \mathcal{NC}_{2}^{1, i}(T(1))$ and $\bm{\varepsilon}''\equiv \pi''\in \mathcal{NC}_{2}^{1, i}(p-T(1))$, where $\pi'\sim\bm{\xi}'$ and $\pi''\sim\bm{\xi}''$.\\
Hence,
$$1=\varphi(C)=\varphi(\mathtt {B}_{\bm \xi'}^{\bm\varepsilon'})\neq 0 \quad\text{ and }\quad \varphi(D)=\varphi(\mathtt {B}_{\bm \xi''}^{\bm \varepsilon''})\neq 0.$$
Since $T(1)<p$ and $p-T(1)<p$, we can use the induction hypothesis for $\varphi(\mathtt {B}_{\bm\xi'}^{\bm\varepsilon'})\neq 0$ to obtain that $\bm\xi'$ establishes bm-order on $\pi'$, and for $\varphi(\mathtt {B}_{\bm\xi''}^{\bm\varepsilon''})\neq 0$ to obtain that $\bm \xi''$ establishes bm-order on $\pi''$.\\
However, there is no relation between the sequence $\bm \xi'\sim\pi'$ and the sequence $\bm \xi''\sim\pi''$, i.e. for $\eta\in \bm\xi'$ and $\mu\in \bm \xi''$ all three possibilities $\eta\succeq\mu,\eta\preceq \mu$ and $\eta \nsim \mu$ are allowed (no relation between the blocks of $\pi'$ and $\pi''$). Therefore, $\bm\xi$ establishes bm-order on $\pi=\pi'\cup \pi''$.
\item $T(1)=p:$\\
Let $j=\min\{i: 2\leq i \leq p, \varepsilon_{i}=-1\}$ (such a number exists because $\sum\limits_{l=1}^{p}\varepsilon_{l}=0$ and $\varepsilon_{p}=-1$). We may note that if $j=2$, then $T(1)=2=p$ (we have one pair block $\{1, 2\}$), and if $j=p$ (we have one outer pair block $\{1, p\}$ and $(p-2)$ inner singletons), then $\varepsilon_{i}=0$ and $ \xi_{i}=\xi_{1}=\xi_{T(1)}$ for $i=2, \ldots, p-1$. In both cases, the sequence $\bm\xi$ establishes bm-order on $\pi\in \mathcal{NC}_{2}^{1, i}(p)$, and
$$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})=\varphi(\mathtt {B}_{\xi_{p}}^{-}\mathtt {B}_{\xi_{p-1}}^{\circ}\cdots \mathtt{B}^{\circ}_{\xi_{2}}\mathtt {B}_{\xi_{1}}^{+})=\varphi(\mathtt {B}_{\xi_{p}}^{-}\mathtt {B}_{\xi_{1}}^{+})=1.$$
On the other hand, if $2<j<p$, then
$\varepsilon_{l}\in \{0, +1\}$, for all $ 2\leq l\leq j-1,$
and we have one of the following two cases:
\begin{enumerate}
\item There exists $2 \leq l \leq j-1$ such that $\varepsilon_{l}=0$. In this case, we define
$$l'=\min\{l: 2\leq l\leq j-1, \varepsilon_{l}=0\}.$$
It follows that $\varepsilon_{i}=+1$ for all $1\leq i< l'$. Hence,
$$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})=\varphi(\mathtt {B}_{\xi_{p}}^{\varepsilon_{p}}\mathtt {B}_{\xi_{p-1}}^{\varepsilon_{p-1}}\cdots \mathtt {B}_{\xi_{j+1}}^{\varepsilon_{j+1}} \mathtt {B}_{\xi_{j}}^{-} \mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{l'+1}}^{\varepsilon_{l'+1}}\mathtt {B}_{\xi_{l'}}^{\circ}\mathtt {B}_{\xi_{l'-1}}^{+}\cdots \mathtt{B}^{+}_{\xi_{2}}\mathtt{B}^{+}_{\xi_{1}})\neq 0,$$
which implies that $$\mathtt {B}_{\xi_{j}}^{-} \mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{l'+1}}^{\varepsilon_{l'+1}}\mathtt {B}_{\xi_{l'}}^{\circ}\mathtt {B}_{\xi_{l'-1}}^{+}\cdots \mathtt{B}^{+}_{\xi_{2}}\mathtt{B}^{+}_{\xi_{1}}\Omega\neq 0.$$ \\
Using the commutation relations \eqref{cr1} and \eqref{cr3}, one has
$$\xi_{l'}=\xi_{l'-1}\succ \xi_{l'-2}\cdots \succ \xi_{2}\succ \xi_{1}$$
and
\begin{align*}
\mathtt {B}_{\xi_{j}}^{-}\mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{l'}}^{\circ}\mathtt {B}_{\xi_{l'-1}}^{+}\cdots \mathtt {B}_{\xi_{2}}^{+}\mathtt {B}_{\xi_{1}}^{+}\Omega &=\mathtt {B}_{\xi_{j}}^{-}\mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{l'+1}}^{\varepsilon_{l'+1}}\mathtt {B}_{\xi_{l'}}^{\circ}(g_{\xi_{l'-1}}\otimes \cdots\otimes g_{\xi_{1}})\\
&=\mathtt {B}_{\xi_{j}}^{-}\mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{l'+1}}^{\varepsilon_{l'+1}}(g_{\xi_{l'-1}}\otimes \cdots\otimes g_{\xi_{1}})\\
&=\mathtt {B}_{\xi_{j}}^{-}\mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{l'+1}}^{\varepsilon_{l'+1}}\mathtt {B}_{\xi_{l'-1}}^{+}\cdots \mathtt {B}_{\xi_{2}}^{+}\mathtt {B}_{\xi_{1}}^{+}\Omega\neq 0.
\end{align*}
The single block $\{l'\}$ is a direct successor of the pair block $\{l'-1, T(l'-1)\}$. Therefore,
$$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})=\varphi(\mathtt {B}_{\xi_{p}}^{-}\mathtt {B}_{\xi_{p-1}}^{\varepsilon_{p-1}}\cdots \mathtt {B}_{\xi_{j+1}}^{\varepsilon_{j+1}}\mathtt {B}_{\xi_{j}}^{-}\mathtt {B}_{\xi_{j-1}}^{\varepsilon_{j-1}}\cdots \mathtt {B}_{\xi_{l'+1}}^{\varepsilon_{l'+1}}\mathtt {B}_{\xi_{l'-1}}^{+}\cdots \mathtt {B}_{\xi_{2}}^{+}\mathtt {B}_{\xi_{1}}^{+})\neq 0.$$
Moreover, the action of $\mathtt {B}_{\bm\xi}^{\bm\varepsilon}$ on $\Omega$ is the same as the action of $\mathtt {B}_{\bm\xi'}^{\bm\varepsilon'}$ on $\Omega$, where $\bm\varepsilon'=(\varepsilon_{p}, \ldots, \varepsilon_{l'+1}, \varepsilon_{l'-1}, \ldots, \varepsilon_{1})\in \mathtt {D}_{p-1}, \bm\xi'=(\xi_{p}, \ldots, \xi_{l'+1}, \xi_{l'-1}, \ldots, \xi_{1})\in [0, \rho]_{{\mathbf {I}}}^{p-1}$ and $\bm\xi'\sim \pi' \in \mathcal{NC}_{2}^{1, i}(p-1)$. We can now use the induction hypothesis to obtain that $\bm\xi'$ establishes bm-order on $\pi'=\pi\setminus\{l'\} \in \mathcal{NC}_{2}^{1, i}(p-1) $. Moreover, since the single block $\{l'\}$ is a direct successor of the pair block $\{l'-1, T(l'-1)\}$ and $\xi_{l'}=\xi_{l'-1} \succ \cdots \succ \xi_{1}$, then
$\bm\xi\trianglelefteq\pi\in \mathcal{NC}_{2}^{1, i}(p).$
\item For all $2\leq l \leq j-1, \varepsilon_{l}=+1$. In this case, it follows that $j=T(j-1)$. Hence,
$$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})\neq 0\Longrightarrow \mathtt {B}_{\xi_{j}}^{-}\mathtt {B}_{\xi_{j-1}}^{+}\cdots \mathtt {B}_{\xi_{2}}^{+}\mathtt {B}_{\xi_{1}}^{+}\Omega\neq 0$$ and
$$\xi_{j}=\xi_{j-1}\succ \xi_{j-2}\succ \cdots \succ \xi_{1}.$$
Therefore,
\begin{align*}
\mathtt {B}_{\xi_{j}}^{-}\mathtt {B}_{\xi_{j-1}}^{+}\cdots \mathtt {B}_{\xi_{1}}^{+}\Omega &=\mathtt {B}_{\xi_{j}}^{-}\mathtt {B}_{\xi_{j-1}}^{+}(g_{\xi_{j-2}}\otimes \cdots \otimes g_{\xi_{1}})\\
&=g_{\xi_{j-2}}\otimes g_{\xi_{j-3}}\otimes \cdots \otimes g_{\xi_{1}}\\
&=\mathtt {B}_{\xi_{j-2}}^{+}\mathtt {B}_{\xi_{j-3}}^{+}\cdots \mathtt {B}_{\xi_{1}}^{+}\Omega,
\end{align*}
and $$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})=\varphi(\mathtt {B}_{\xi_{p}}^{-}\cdots \mathtt {B}_{\xi_{j+1}}^{\varepsilon_{j+1}}\mathtt {B}_{\xi_{j-2}}^{+}\cdots \mathtt {B}_{\xi_{1}}^{+})=\varphi(\mathtt {B}_{\bm\xi'}^{\bm\varepsilon'})\neq 0,$$
where
$$\bm\xi':=(\xi_{p}, \ldots, \xi_{j+1}, \xi_{j-2}, \ldots, \xi_{1})\sim \pi'\in\mathcal{NC}_{2}^{1, i}(p-2)$$
and
$$\bm\varepsilon':=(\varepsilon_{p}, \ldots, \varepsilon_{j+1}, \varepsilon_{j-2}, \ldots, \varepsilon_{1})\in\mathtt {D}_{p-2}, \text{ where } \varepsilon_1=\cdots=\varepsilon_{j-2}=+1.$$
We can use the induction assumption to the sequences $\bm\varepsilon'$ and $\bm\xi'$, which implies
$$\bm\xi'\trianglelefteq \pi'=\pi\setminus\{j-1, j\}\in\mathcal{NC}_{2}^{1, i}(p-2).$$
Since $\xi_{j}=\xi_{j-1}\succ \xi_{j-2}\succ\cdots \succ \xi_{1}$, we have $\bm\xi\trianglelefteq \pi\in \mathcal{NC}_{2}^{1, i}(p)$.
\end{enumerate}
\end{enumerate}
\hfill $\Box$\\
\begin{rem}\label{phi1}
It follows from the proofs above that, for a sequence $\bm\xi=(\xi_{p}, \ldots, \xi_{1})\trianglelefteq \pi\in\mathcal{NC}_{2}^{1, i}(p)$, we have
$$\varphi(\mathtt {B}_{\bm\xi}^{\bm\varepsilon})=1.$$
\end{rem}
In the previous results, we have shown that the proof of Theorem \ref{mthm} can be reduced to showing that for each $p\in {\mathbb N}$ the following limit exists
$$\lim_{\rho\xrightarrow[]{\Pi} \infty} \frac{1}{\mathtt{v}(\rho)^{\frac{p}{2}}}\sum_{\pi\in\mathcal{NC}_{2}^{1, i}(p)}\hspace{0.1cm}\sum_{\bm \xi\in \mathrm{bmo}(\pi, \rho)}\varphi(\mathtt {B}_{\xi_{p}}^{\varepsilon_{p}}\cdots \mathtt {B}_{\xi_{1}}^{\varepsilon_{1}})\lambda_{\rho}^{s(\pi)}=\sum_{\pi \in \mathcal{NC}_{2}^{1, i}(p)} V(\tilde \pi)\lambda^{s(\pi)},$$
where $V(\tilde \pi)$ satisfies the recurrence relation given in \eqref{rfv}. Moreover, from Remark \ref{phi1} it follows that
$$\lim_{\rho \xrightarrow[]{\Pi}\infty}\frac{1}{\mathtt{v}(\rho)^{\frac{p}{2}}}\sum_{\pi\in \mathcal{NC}_{2}^{1, i}(p)}\hspace{0.1cm}\sum_{\bm \xi\in {\mathrm{bmo}}(\pi, \rho)}\varphi(\mathtt {B}_{\xi_{p}}^{\varepsilon_{p}}\cdots \mathtt {B}_{\xi_{1}}^{\varepsilon_{1}})\lambda_{\rho}^{s(\pi)}=\sum_{\pi\in \mathcal{NC}_{2}^{1, i}(p)}\hspace{0.1cm}\lim_{\rho \xrightarrow []{\Pi}\infty}\frac{|\mathrm{bmo}(\pi, \rho)|}{\mathtt{v}(\rho)^{b(\tilde \pi)}}\lambda^{s( \pi)},$$
where $b(\tilde \pi)=\frac{p-s(\pi)}{2}$ is the number of blocks in the reduced partition $\tilde \pi\in \mathcal{NC}_{2}(p-s(\pi))$ of the partition $\pi\in\mathcal{NC}_{2}^{1, i}(p)$. This way we have reduced our consideration to the combinatorics involving only the cardinality of the set $\mathrm{bmo}(\pi, \rho)$. Therefore it suffices to prove the following proposition.
\begin{prop}\label{limv}
Let $\pi\in \mathcal{NC}_{2}^{1, i}(p)$ be a noncrossing partition with pair or inner singleton blocks. Then for each positive symmetric cone $\Pi_d$ which we consider, the following limit exists
$$\lim_{\rho\xrightarrow[]{\Pi} \infty}\frac{|\mathrm{bmo}(\pi, \rho)|}{\mathtt{v}(\rho)^{b(\tilde \pi)}}= V (\tilde \pi),$$
where the function $V(\tilde \pi):=V_{\Pi_d}(\tilde \pi)$ depends on the cone $\Pi_d$ and can be recursively expressed by the volume characteristic sequence $(\gamma_{n}(\Pi_d))_{n\geq 0}$ as in \eqref{rfv}.
\end{prop}
\noindent{\bf Proof: }
We are going to prove the following equality
\begin{equation}\label{bmoeq}
|\mathrm {bmo}(\pi, \rho)|=|\mathtt{BMO}(\tilde \pi, \rho)|,
\end{equation}
and then apply Theorem \ref{thmp}.\\
Let us consider the sequence $\bm\xi$ with the adapted partition $\pi=(B_{1}, \ldots, B_{\frac{p+s(\pi)}{2}})\in \mathcal{NC}_{2}^{1, i}(p; \frac{p+s(\pi)}{2})$, where $B_{1}, \ldots, B_{\frac{p+s(\pi)}{2}}$ are the $\frac{p+s(\pi)}{2}=k$ blocks of $\pi$, and denote by $\mu:=(\mu_k, \ldots, \mu_1)$ the label sequence of the blocks $B_1, \ldots, B_k\in\pi$. Since $[0, \rho]_{\mathbf {I}}^{p}\ni\bm\xi\trianglelefteq \pi$, we have $[0, \rho]_{\mathbf {I}}^{k}\ni\mu\trianglelefteq_{\mathtt{L}}\pi$, and by virtue of Remark {\ref{equalsets}}, we obtain
$$|\mathrm{bmo}(\pi, \rho)|=|\mathrm{BMO}(\pi, \rho)|.$$
Moreover, the labels of the direct successors (singleton blocks) are the same as those of their direct predecessors (pair blocks). Then
$$|\mathtt{BMO}(\tilde \pi, \rho)|=|\mathrm{BMO}(\pi, \rho)|=|\mathrm{bmo}(\pi, \rho)|,$$
and
$$\lim_{\rho \xrightarrow []{\Pi}\infty}\frac{|\mathrm{bmo}(\pi, \rho)|}{\mathtt{v}(\rho)^{b(\tilde \pi)}}=\lim_{\rho \xrightarrow[]{\Pi}\infty}\frac{|\mathtt{BMO}(\tilde \pi, \rho)|}{\mathtt{v}(\rho)^{b(\tilde \pi)}}.$$
Applying Theorem \ref{thmp} gives the proof of Proposition \ref{limv} and hence the proof of Theorem \ref{mthm}.
\hfill $\Box$\\
\section*{Appendix}
In this section, we present the results for the single operators $S(\lambda):=A_{\xi}^{+}+A_{\xi}^{-}+\lambda A_{\xi}^{\circ}$, for $\lambda>0$ and $\xi\in{\mathbf {I}}$. In particular, the $p$-th moment $a_{p}(\lambda):=\varphi((S(\lambda))^{p})$ and the associated probability measure $\nu_\lambda$ are given. It is worthwhile to mention that these results are the same as in the monotone case, given by Muraki \cite{Mur0}.
\begin{prop}\label{propap}
For any positive integer $p\geq 2$, one has
\begin{equation}\label{ap}
a_{p}(\lambda)=\sum_{\pi\in \mathcal{NC}_{2, o}^{1, i}(p)}\lambda^{s(\pi)},
\end{equation}
where $\mathcal{NC}_{2, o}^{1, i}(p)\subset\mathcal{NC}_{2}^{1}(p)$ is the set of all noncrossing partitions with pair or singleton blocks such that the pair blocks must be outer and the singletons must be inner.\\
Furthermore, the following recursive formula holds
\begin{equation}\label{ra}
a_{0}(\lambda)=1, a_{1}(\lambda)=0, a_{p}(\lambda)=\lambda a_{p-1}(\lambda)+a_{p-2}(\lambda), \quad \text{ for }\quad p\geq 2.
\end{equation}
\end{prop}
\begin{exmp}
For $p=0, 1, \ldots, 6$ we can use the recursive formula \eqref{ra} to obtain the following moment sequence:
$$(a_{p}(\lambda))_{p=0}^{6}=(1,\ 0,\ 1,\ \lambda,\ \lambda^2+1,\ \lambda^3+2\lambda,\ \lambda^4+3\lambda^2+1).$$
\end{exmp}
The moment generating function and the Cauchy transform of the moments of the operators $S(\lambda)=A_{\xi}^{+}+A_{\xi}^{-}+\lambda A_{\xi}^{\circ}$ are given, respectively, by
\begin{equation}\label{mgf}
\mathrm{M}_{\lambda}(x)=\frac{1-\lambda x}{1-\lambda x- x^{2}},
\end{equation}
and
\begin{equation}\label{ctr}
\mathrm{G}_{\lambda}(x)=\frac{x-\lambda}{x^{2}-\lambda x-1}.
\end{equation}
Then, by the Stieltjes inverse formula, the probability distribution $\nu_{\lambda}$ of $ S(\lambda)$ under the vacuum state $\varphi$ is given as follows
$$\nu_{\lambda}=p_{1}\delta_{x_{1}}+p_{2}\delta_{x_{2}},$$
where $x_{1}=\frac{\lambda}{2}+\frac{\sqrt{\lambda^{2}+4}}{2}$, $x_{2}=\frac{\lambda}{2}-\frac{\sqrt{\lambda^{2}+4}}{2}$, $p_{1}=\frac{1}{2}-\frac{\lambda}{2\sqrt{\lambda^{2}+4}}$, $p_{2}=\frac{1}{2}+\frac{\lambda}{2\sqrt{\lambda^{2}+4}}$ and $\delta_{x}$ denotes the Dirac measure at a point $x$.
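As a quick numerical cross-check (our own addition; all names are ours), the two-point measure $\nu_{\lambda}$ indeed reproduces the recursively defined moments \eqref{ra}:
\begin{verbatim}
import math

def moments_recursive(lam, N):
    a = [1.0, 0.0]
    for _ in range(2, N + 1):
        a.append(lam * a[-1] + a[-2])          # a_p = lam*a_{p-1} + a_{p-2}
    return a

def moments_measure(lam, N):
    r = math.sqrt(lam**2 + 4)
    x1, x2 = (lam + r) / 2, (lam - r) / 2
    p1, p2 = 0.5 - lam / (2 * r), 0.5 + lam / (2 * r)
    return [p1 * x1**p + p2 * x2**p for p in range(N + 1)]

lam = 0.7
print(moments_recursive(lam, 6))
print(moments_measure(lam, 6))                 # agrees up to rounding
\end{verbatim}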
Furthermore, according to Corollary \ref{bmc}, one can show that the operators $S_{\xi}(\lambda):=A_{\xi}^{+}+A_{\xi}^{-}+\lambda A_{\xi}^{\circ}$, $\xi\in\mathbf{I}$, are bm-independent with respect to $\varphi$ in $(\mathcal{C}, \varphi)$, and hence they can be considered as \emph{bm-Bernoulli} random variables.
\begin{rem}
For $\lambda=1$ and $p\geq 1$, the moment sequence $a_{p}(1)$ consists of the shifted Fibonacci numbers, given by
$$a_{p}(1)=F(p-1).$$
\end{rem}
\vspace{1cm}
\section*{Acknowledgements}
Research partially supported by the National Agency for Academic Exchange (NAWA) POLONIUM project PPN/BIL/2018/1/00197/U/00021 and by the Polish National Science Center (NCN) grant 2016/21/B/ST1/00628.
\section{Introduction}
Neural keyphrase generation \cite{Meng2017Deep, Chen2018Keyphrase}, a conditioned Sequence-to-Sequence (Seq2Seq) approach for automated keyphrase extraction, has recently shown promising results as another domain for exploring latent aspects of Seq2Seq models \cite{sutskever2014sequence, cho2014learning, Jiatao2016Copying, See2017Get, vinyals2015show, xu2015show}. Given pairs of a document and the corresponding keyphrase references as ground truth labels, the task is to encode the sequence of words in the source document into a contextual vector representation, and accordingly generate a sequence of target words, a \texttt{\small ``keyword''} or \texttt{\small ``keyphrase''} that retains the core information of the source document.
Keyphrase generation shares a common objective with Seq2Seq-based document summarization \cite{See2017Get}, i.e. to condense a document into a short document abstraction. Consequently, both domains also share common challenges: the generation algorithm needs to accommodate two mechanisms -- \textbf{to copy} words from the source, and \textbf{to generate} semantically related words not featured in the source document. While the ``copying'' task is particularly easy for an unsupervised keyword extractor (e.g. TfIdf), a generative model such as a Seq2Seq Recurrent Network has not been specifically trained on such a task. The problem has been addressed by incorporating a copying mechanism \cite{Jiatao2016Copying} into the Seq2Seq architecture, resulting in models referred to as \textbf{CopyRNN} \cite{Meng2017Deep} and \textbf{CorrRNN} \cite{Chen2018Keyphrase}. However, there has not been enough attention on addressing decoding issues in the current keyphrase generation task, referred to as the ``beam search curse'' \cite{Yang2018breaking}, which is also listed as one of six challenges for NMT \cite{koehn2017six}.
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.3]{bs_stopping_.PNG}
\caption{Beam Search Decoding Issues: sequence length bias and beam diversity}
\label{fig:bs_prob12}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.3]{copy_absent_kps.PNG}
\caption{Beam Search Decoding Issues: generating Present (C) vs. Absent (A) keyphrases}
\label{fig:copy_absent_kps}
\end{figure*}
This work further studies the decoding issues in neural keyphrase generation. We focus on two challenges of beam search decoding: (1) the algorithm's bias toward shorter sequences; and (2) its tendency to produce nearly identical sequences, causing the beam candidates to be less diverse.
Our empirical finding (fig. \ref{fig:copy_absent_kps}) shows an example of a discrepancy between what the network (\textbf{CorrRNN}) has learnt from training examples and what the network generates at test time. \emph{Here}, the attention successfully annotates sequences of words in the source document with higher scores (darker highlight), matched against the corresponding ground truth references. On the contrary, the decoding algorithm fails to include the corresponding sequences in the final prediction set. This finding suggests that apart from improving the model architecture to be more expressive for the corresponding ``copying'' task, the overall generation performance is also conditioned on the decoding algorithm. This example of a decoding issue in the current task becomes our main motivation to further utilize the learnt attention scores as a mechanism to constrain the sequence generation process at test time. We argue that constraining beam search decoding based on ``\emph{hints}'' provided by the attention network is favourable in the current task, given the condition that most keyphrase references are present in the document.
\section{Beam Search Decoding Issues}
\label{sec:bs_issues}
\subsection{Sequence length bias}
By default, the beam search decoding algorithm of a Seq2Seq model \cite{Bahdanau2014Neural} stops the search when exactly $b$ completed candidates of target sequences are found, i.e. when the decoder network generates the ``{\small{\texttt{$<$end$>$}}}'' token. As illustrated in fig.\@ \ref{fig:bs_prob12}, although increasing the beam size increases the number of candidates to explore, the likelihood for the model to find the ``{\small{\texttt{$<$end$>$}}}'' token after the first decoding step is also high. Consequently, the beam mainly contains short sequences ($1-$grams, $2-$grams), disregarding many potential $n-$gram candidates with longer sequences ($n \geq 3$). This tendency of the decoding algorithm to favour shorter sequences can hurt performance severely, specifically in the current keyphrase generation task, where ground truth references are of variable length ($n-$grams, $n = 1, \ldots, 5$). We further show empirically in section \ref{sec:empiric} that solely utilizing a normalization technique does not guarantee solving the sequence length bias issue in the current task.
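For concreteness, the following is a minimal Python sketch (our own illustration, not the implementation used in the experiments) of beam search with the commonly used length normalization $\mathrm{score}(y)=\log p(y)/|y|^{\alpha}$; the callback \texttt{step\_log\_probs} and all names are our assumptions:
\begin{verbatim}
import heapq

def beam_search(step_log_probs, start, end, beam=5, max_len=6, alpha=0.8):
    """step_log_probs(seq) -> {token: log-prob of the next token}."""
    live, done = [(0.0, [start])], []
    for _ in range(max_len):
        cand = []
        for lp, seq in live:
            for tok, tok_lp in step_log_probs(seq).items():
                cand.append((lp + tok_lp, seq + [tok]))
        live = []
        for lp, seq in heapq.nlargest(beam, cand):
            if seq[-1] == end:
                norm = max(1, len(seq) - 2) ** alpha  # exclude <start>, <end>
                done.append((lp / norm, seq))         # length normalization
            else:
                live.append((lp, seq))
        if len(done) >= beam or not live:  # default rule: stop at b finished
            break
    return sorted(done, reverse=True)
\end{verbatim}
With $\alpha=0$ (no normalization) short candidates dominate the finished list; yet, as argued above, normalization alone does not remove the early-stopping bias, since the search still terminates once $b$ candidates are completed.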
\subsection{Beam diversity}
Figure \ref{fig:bs_prob12} also exemplifies the diversity problem in the standard beam search decoding of Seq2Seq models for the current task. The generation of nearly identical beams, i.e. $>80\%$ of sequences starting with the word ``{\small{\texttt{internet}}}'', results in low ``informativeness'' (based on the Precision $P$ metric). Consequently, it decreases the decoding performance in the current task.
\begin{comment}
\subsection{Copying sequences from source}
\label{sec:copy_vs_gen}
Further inspection into the \emph{inner} works of encoder network by attention visualization, shown in fig. \ref{fig:copy_absent_kps}, discloses a discrepancy between encoder and decoder network. The encoder network (\emph{left}) can properly distribute higher weights (darker highlight) on relevant words matched against the ground truth references. The decoder, however, (\emph{right}) fails to generate the corresponding annotated context words. Figure \ref{fig:copy_absent_kps} also shows that the ability to ``copying'' words from source text plays more important role in the current task, accounting for the recall of $67\%$ of ground truth references. Failing to include these sequences can later result in a severe performance degradation. Thus, ensuring the decoding algorithm to include these attention ``hints'' within the beam candidates, arguably is beneficial for the current task.
\end{comment}
\section{Neural keyphrase generation}
\subsection{Seq2Seq Attention Model}
Our Attention-based Seq2Seq model is constructed of: \textbf{(1)} an \textbf{encoder - decoder} architecture with an \textbf{attention mechanism} \cite{Bahdanau2014Neural} as the backbone architecture; \textbf{(2)} a \textbf{copying mechanism} \cite{Jiatao2016Copying, See2017Get, Meng2017Deep}; and \textbf{(3)} a \textbf{coverage-review mechanism} \cite{See2017Get, Chen2018Keyphrase}. We re-implemented and modified \textbf{CopyRNN} \cite{Meng2017Deep} and \textbf{CorrRNN} \cite{Chen2018Keyphrase} so that they can be trained under two conditions of target vocabulary size: a truncated vocabulary and a very large target vocabulary \cite{jean2015Using}. In the truncated vocabulary setting, the vocabulary included in the look-up dictionary is constrained to the top-50K most frequent words, while the Out-of-Vocabulary (OOV) set is referred to as ``\texttt{\small {<unk>}}''. In this study, the Seq2Seq model with truncated vocabulary is referred to as \textbf{CorrRNN}, while the modified model for a large target vocabulary is referred to as \textbf{CorrRNN-L}.
\subsubsection{CorrRNN}
For both Seq2Seq models in this study (CorrRNN, CorrRNN-L), we use the MLP (concat) attention mechanism \cite{Bahdanau2014Neural} as the attention scoring function \texttt{\small score}$($\texttt{\small Q,K}$) = $\texttt{\small V}$^T$ \texttt{\small tanh}$($\texttt{\small W}$[$\texttt{\small Q,K}$])$ for incorporating the copying, coverage, and review mechanisms into the Seq2Seq architecture. \texttt{\small Q} corresponds to the query-attention: the decoder state at one time step $s_t$; and \texttt{\small K} denotes the keys-attention: a sequence of encoder states $h_{1 \ldots T_x}$ and the coverage vector of attention \texttt{\small cov} up to the current time step $t$. The latter corresponds to the coverage mechanism \cite{See2017Get,Chen2018Keyphrase}. \emph{Here}, for \textbf{CorrRNN}, the model has two inputs: (1) sequences with a truncated dictionary size, referred to as $x$; and (2) sequences with extended vocabulary, referred to as $x^{(ext)}$. The additional \texttt{\small oov} dictionary index size is set to $23914$, excluding the main vocabulary index with the top$-50K$ most frequent words.
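As a rough illustration of the scoring function above, here is a NumPy sketch (ours; the shapes and variable names are assumptions, and any coverage features are simply assumed to be concatenated into the keys):
\begin{verbatim}
import numpy as np

def mlp_attention(q, keys, W, v):
    """q: (d_q,) decoder state; keys: (T, d_k) encoder states (with any
    coverage features concatenated in).  Returns softmax attention (T,)."""
    T = keys.shape[0]
    qk = np.concatenate([np.repeat(q[None, :], T, axis=0), keys], axis=1)
    scores = np.tanh(qk @ W.T) @ v      # W: (d_a, d_q + d_k), v: (d_a,)
    e = np.exp(scores - scores.max())   # stable softmax over the T positions
    return e / e.sum()

rng = np.random.default_rng(0)
d_q, d_k, d_a, T = 4, 4, 8, 6
alpha = mlp_attention(rng.normal(size=d_q), rng.normal(size=(T, d_k)),
                      rng.normal(size=(d_a, d_q + d_k)), rng.normal(size=d_a))
print(alpha.sum())                      # 1.0
\end{verbatim}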
\begin{comment}
\paragraph{Attention and Coverage mechanism}
\begin{equation}
\alpha_{t} = f(\texttt{\small V}^T \texttt{\small tanh}(\texttt{\small W}[s_t, h_{1 \ldots T_x}, \texttt{\small cov}]))
\label{eq:alpha}
\end{equation}
\vspace{-.5em}
\begin{equation}
\texttt{\small cov} = \sum_{t}^{T_y} \alpha_t
\label{eq:cov}
\end{equation}
\vspace{-.5em}
Here, $f(.)$ denotes softmax projection as normalization function of attention scores over an input sequence (max length $= T_x$); and \texttt{\small cov} corresponds to the summarization vector of attention scores up to the current time step $t$. The weighted sum between the attention $\alpha$ and encoder states $h_{1 \ldots T_x}$ (eq. \ref{eq:c_cov}) is then referred as context representation $c^{(\texttt{\small cov})}$ and used as one of decoder inputs.
\vspace{-.5em}
\begin{equation}
c_t^{(\texttt{\small cov})} = \sum_{t}^{T_x} \alpha_t \cdot h_{t} = \texttt{\small DOT}(\alpha_t, h_{1 \ldots T_x})
\label{eq:c_cov}
\end{equation}
\vspace{-.5em}
\paragraph{Review mechanism} While the coverage mechanism sums attention vectors over a sequence of encoder states, the review mechanism distributes the attention $\beta$ over decoder states and review vector from previous decoding time steps \texttt{\small rev} (eq. \ref{eq:beta} and \ref{eq:rev}). Likewise, the context representation of decoder states based on this review vector $c^{(\texttt{\small rev})}$ (eq. \ref{eq:c_rev}) then becomes one of input sources of decoder network.
\begin{equation}
\beta_t = f(\texttt{\small V}^T \texttt{\small tanh}(\texttt{\small W}[s_t, s_{t-1}, \texttt{\small rev}]))
\label{eq:beta}
\end{equation}
\vspace{-.85em}
\begin{equation}
\texttt{\small rev} = \sum_{t}^{T_y} \beta_t
\label{eq:rev}
\end{equation}
\vspace{-.85em}
\begin{equation}
c_t^{(\texttt{\small rev})} = \sum_{t}^{T_y} \beta_t \cdot s_t = \texttt{\small DOT}(\beta_t, s_t)
\label{eq:c_rev}
\end{equation}
\vspace{-.85em}
\paragraph{Copy-mode}
\begin{equation}
\psi(y_t=x_i) = \texttt{\small V}^T \texttt{\small tanh}(\texttt{\small W}[s_t, h_{1 \ldots T_x}, \texttt{\small cov}]), \small{x_i \in X}
\label{eq:copy-mode}
\end{equation}
\vspace{-.85em}
\begin{equation}
\texttt{\small copy-score} = \texttt{\small DOT}(\psi, s_t)
\label{eq:copy-mode2}
\end{equation}
\vspace{-.85em}
The score for ``copying'' a word from the input sequence $x$ is calculated as the dot product between the unnormalized (pre-softmax) attention (eq. \ref{eq:copy-mode}) and the decoder state at the current time step $s_t$ (eq. \ref{eq:copy-mode2}).
\paragraph{Prob-copy}
The probability of copying words from the source text is defined as a softmax projection $f(.)$ of the weighted sum of the extended input sequence $x^{(ext)}$ with the copy score (eq. \ref{eq:prob-copy}), where $p(x^{(ext)})$ is the one-hot vector projection of $x^{(ext)}$.
\begin{equation}
p(\texttt{\small copy}) = f(\texttt{\small DOT}(p(x^{(ext)}), \texttt{\small copy-score}))
\label{eq:prob-copy}
\end{equation}
\vspace{-.5em}
\paragraph{Pointer-copy and Selective-read}
\begin{equation}
\rho tr =
\begin{cases}
\frac{1}{K}M * p(\texttt{\small copy}) & \parbox[t]{.45\textwidth}{$x^{(ext)}=y_t$}\\
0 & \parbox[t]{.45\textwidth}{otherwise}
\end{cases}
\label{eq:ptr}
\end{equation}
\vspace{-.5em}
The pointer function $\rho tr$ is computed by casting the Boolean matching function ($M$) between the target output and the extended input sequence $x^{(ext)}$. For each target output, the matching score $M$ (either 1 or 0) is normalized by the number of occurrences $(K)$ of the target word in the input sequence. This score is then multiplied element-wise with the probability of copying $p(\texttt{\small copy})$.
\begin{equation}
\zeta = \sum_{i=1}^{T_x} \rho tr_i \cdot h_i = \texttt{\small DOT}(\rho tr, h_{1 \ldots T_x})
\label{eq:selective-read}
\end{equation}
\vspace{-.5em}
The selective read $\zeta$ is defined as the weighted sum of the encoder states based on the pointer-copy score.
\paragraph{Decoder} Finally, the three outputs of the copying, coverage, and review mechanisms: the context representation based on coverage attention $c^{(\texttt{\small cov})}$, the context representation based on review attention $c^{(\texttt{\small rev})}$, and the selective read $\zeta$, are fed into the decoder network, in addition to the default decoder inputs (the previous decoder state $s_{t-1}$ and the embedding projection of the true label $y_t$). This results in the \emph{logits} probability $s_t$, a component that we later augment in our proposed beam search decoding.
\begin{equation}
s_t = f(s_{t-1}, c_{t-1}^{(\texttt{\small cov})}, c_{t-1}^{(\texttt{\small rev})}, e(y_t), \zeta)
\label{eq:dec}
\end{equation}
\vspace{-.5em}
\end{comment}
\subsubsection{CorrRNN-L}
Since the vocabulary is not truncated, the CorrRNN-L model does not have OOV issues. The problem, however, is shifted to the complexity of training and decoding due to the large number of target words. We address this issue by incorporating a sampled softmax approximation based on an adaptive sampling algorithm \cite{jean2015Using} as an approximate learning approach for a very large target vocabulary. We trained the model on a vocabulary of $366730$ words. The number of vocabulary samples for the approximate softmax layer, which replaces the \emph{full} softmax of the \textbf{CorrRNN} architecture, is set to $50K$.
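To illustrate the idea of the approximate softmax layer, the following sketch computes a sampled-softmax cross-entropy for one time step. It is a simplification with uniform negative sampling; the adaptive sampling algorithm of \cite{jean2015Using} partitions the vocabulary and corrects for the proposal distribution, which we omit here:
\begin{verbatim}
import numpy as np

def sampled_softmax_loss(h, E, target, num_sampled, rng):
    """h: (d,) decoder output; E: (V, d) output embeddings with V large;
    target: index of the true word. The softmax normalizer is estimated
    from the target plus num_sampled negatives instead of all V classes."""
    V = E.shape[0]
    neg = rng.choice(V, size=num_sampled, replace=False)
    neg = neg[neg != target]                 # keep the target out of the sample
    cand = np.concatenate(([target], neg))   # candidate classes, target first
    logits = E[cand] @ h                     # partial logits only
    logits -= logits.max()                   # numerical stability
    return -logits[0] + np.log(np.exp(logits).sum())

rng = np.random.default_rng(1)
loss = sampled_softmax_loss(rng.normal(size=16),          # toy sizes;
                            rng.normal(size=(1000, 16)),  # V = 366730 in paper
                            target=42, num_sampled=100, rng=rng)
\end{verbatim}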
\begin{comment}
\begin{equation}
s_t = f(s_{t-1}, c_{t-1}^{(\texttt{\small cov})}, c_{t-1}^{(\texttt{\small rev})}, e(y_t), \zeta)
\label{eq:dec}
\end{equation}
\begin{equation}
\alpha = f(s_{t-1}, h, \texttt{\small cov})
\label{eq:alpha_}
\end{equation}
\end{comment}
\section{Beam Search Decoding with Attention Reward (BSDAR)}
\label{sec:bs_attention}
\subsection{Word-level Attention-Reward}
We propose a word-level attention reward function \texttt{\small \textbf{WORD\_ATT\_REWARD}} as a mechanism to augment the Seq2Seq prediction, shown in alg.\@ \ref{alg:word-att-rwd}. For each decoding time step $t$, the logits output of the decoder network ($s_t$) is augmented by an attention vector ($\alpha_w \in R^{T_x}$), in which each value corresponds to the attention weight of a word in the source sequence $x$. $\hat{\alpha}_w[w]$ (alg. \ref{alg:word-att-rwd} \texttt{\small \textbf{line 3}}) denotes the normalized (mean) attention score of a particular word, since the word may occur multiple times in the source text. Because the attention vector $\alpha_w$ is bound to the sequential positions of words in the source input, a dictionary look-up from word index to the corresponding positions $f(w): K \xrightarrow{} V$ is also given at decoding time as a reference for computing the mean $\hat{\alpha}_w[w]$, where $K$ denotes the set of words in the input sequence $K=\{w_0 \ldots w_N\}$ and $V$ corresponds to the positions where the word occurs, $V = [w_{p_1} \ldots w_{p_n}]$, in the input sequence. Correspondingly, the augmentation of decoder logits applies only to words that appear in the source text.
To intensify the effect of the logit augmentation, which becomes critical in the noisy decoding of a Seq2Seq model with a large target vocabulary (\textbf{CorrRNN-L}), we use $\lambda$ and $\gamma$ (alg. \ref{alg:word-att-rwd} line 4) as augmentation factors of the attention reward $\hat{\alpha}_w$. Here, we set $\lambda = 2$ and $\gamma = \lambda * $ \texttt{\small \textbf{MAX}} $(\hat{\alpha}_w)$.
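A minimal sketch of the augmentation step (alg. \ref{alg:word-att-rwd}, lines 2--4), assuming a precomputed look-up from each source word id to its positions in $x$; the function and variable names are illustrative:
\begin{verbatim}
import numpy as np

def word_att_reward(logits, alpha, positions, lam=2.0):
    """logits: (V,) decoder scores s_t; alpha: (T_x,) attention weights;
    positions: dict word id -> list of positions of that word in x.
    Only words occurring in the source text are rewarded."""
    mean_att = {w: float(np.mean(alpha[pos])) for w, pos in positions.items()}
    gamma = lam * max(mean_att.values())     # gamma = lambda * MAX(mean alpha)
    out = logits.copy()
    for w, a in mean_att.items():
        out[w] += lam * a + gamma            # s~_t[w] = s_t[w] + lam*a + gamma
    return out

# Toy usage: vocabulary of 10 words; word 3 occurs at source positions 0 and 4.
rng = np.random.default_rng(2)
new_logits = word_att_reward(rng.normal(size=10), rng.random(5),
                             {3: [0, 4], 7: [2]})
top_b = np.argsort(-new_logits)[:3]          # the b most probable tokens
\end{verbatim}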
\begin{algorithm}[!ht]
\caption{\textbf{\texttt{\small WORD\_ATT\_REWARD}}}
\begin{algorithmic}[1]
\For {$t=0, 1, \ldots, $ \texttt{\small MAX\_STEPS}}
\State Collect logits and attention weights
\NoNumber{$s_t = $ \textbf{\texttt{\small DEC}} $(y_{t}, c_{t-1}^{(\texttt{\small cov})}, c_{t-1}^{(\texttt{\small rev})}, s_{t-1}, \zeta_{t-1})$}
\NoNumber{$\alpha = \textbf{\texttt{\small MLP}} (s_{t-1}, h, \texttt{\small cov})$}
\NoNumber{$s_t \in R^{V}, \alpha \in R^{T_x}$}
\State Compute normalized $\hat{\alpha}_w$, for each word
\NoNumber{$\hat{\alpha}_w[w] = \texttt{\small \textbf{MEAN}}(\alpha[w_{p_1} \ldots w_{p_n}])$}
\State Augment the logits, for words in $\hat{\alpha}_w$
\NoNumber{$\widetilde{s}_t[w] = s_t[w] + (\lambda * \hat{\alpha}_w[w]) + \gamma $}
\State Get $b-$top most probable words
\NoNumber{\texttt{\small tokens} $=$ \textbf{\texttt{\small ARGSORT}} $\{\widetilde{s}_t\}[:b]$}
\NoNumber{\texttt{\small probs} $=$ \textbf{\texttt{\small SORT}} $\{\widetilde{s}_t\}[:b]$}
\State \textbf{return} \texttt{\small BEAM-HYP} $=$ \texttt{\small (tokens, probs)}
\EndFor
\end{algorithmic}
\label{alg:word-att-rwd}
\end{algorithm}
\begin{algorithm*}[!t]
\caption{\textbf{\texttt{\small NGRAM\_ATT\_REWARD}}}
\begin{algorithmic}[1]
\While{\texttt{\small steps} $<$ \texttt{\small MAX\_STEPS} \textbf{and} \texttt{\small Results} $<$ \texttt{\small BEAM\_SIZE} $b$}
\State Run one step decoding with word attention reward
\NoNumber{\texttt{\small BEAM-HYP} $=$\texttt{\small \textbf{WORD\_ATT\_REWARD}()}}
\State Expand tree with new candidates \texttt{\small HYPS} $\frown$\texttt{\small BEAM-HYP}
\State Construct sequence candidates
\NoNumber{\texttt{\small SEQ} $=$ \texttt{\small HYPS.PREV\_TOKENS} $+$ \texttt{\small HYPS.CURR\_TOKENS}}
\If{\texttt{\small SEQ} found in annotated source \texttt{\small ATT-ANNOT}}
\State Reward candidates with $\hat{\alpha}_p$
\NoNumber{\texttt{\small AUG\_PROB} $=$ \texttt{\small HYPS.CURR\_PROB} $ + (\lambda * \hat{\alpha}_p) + \gamma $}
\ElsIf{\texttt{\small SEQ} is partly composed of tokens in \texttt{\small ATT-ANNOT}}
\State Set the probability to a negative value (forcing an extremely low log-probability) to penalize the \texttt{\small SEQ} candidate
\NoNumber{\texttt{\small AUG\_PROB} $= -0.05$}
\Else
\State Set logits probability without reward score
\NoNumber{\texttt{\small AUG\_PROB} $=$ \texttt{\small HYPS.CURR\_PROB} }
\EndIf
\State Re-sort beam candidates
\State Update tree with new candidates
\State Sort the candidates stored in memory (tree) based on normalized joint probability
\State Expand \texttt{\small Results} with completed sequence candidates
\EndWhile
\end{algorithmic}
\label{alg:ngram-att-rwd}
\end{algorithm*}
Our proposed word-level attention reward shares a common intuition with the actor-critic algorithm for sequence prediction \cite{Bahdanau2017Actor}. Intuitively, the \texttt{\small \textbf{WORD\_ATT\_REWARD}} function increases the probability of \emph{actions} to which the attention network assigns a higher value, and decreases the probability of \emph{actions} that the attention network ignores. The actions correspond to the Seq2Seq predictions at each decoding step. The proposed reward mechanism thus bounds the actions with an attention score, in which each value represents the degree of importance of a word in the source sequence based on what the Seq2Seq attention network has learnt during training.
\subsection{$N-$gram-level attention reward}
The proposed word-level attention reward in alg. \ref{alg:word-att-rwd} is based only on a bag-of-words assumption. As a potential trade-off in decoding performance, the beam algorithm may then favour sequence candidates containing words with a high attention score, disregarding whether the formed sequence is sensible as a keyphrase candidate. For instance, from the attention visualization in fig. \ref{fig:copy_absent_kps}, a keyphrase candidate ``\texttt{\small inquiry honours}'' may also be rewarded highly, since the sequence is composed of words with a high attention score (``\texttt{\small inquiry}'', ``\texttt{\small honours}''). This leads to noisy predictions and can potentially decrease decoding performance.
To mitigate this issue, we also introduce the $n-$gram-level attention reward function \texttt{\small \textbf{NGRAM\_ATT\_REWARD}}, which further augments or penalizes beam candidates before they are added into the memory (alg. \ref{alg:ngram-att-rwd}). For each decoding step $t$, given the tokens previously stored in memory and the current beam candidate returned by \texttt{\small \textbf{WORD\_ATT\_REWARD}}, an $n-$gram candidate \texttt{\small SEQ} is formed. For all \texttt{\small SEQ} candidates that match the extracted $n-$gram attention annotations \texttt{\small ATT-ANNOT}, the logits of the last tokens of the \texttt{\small SEQ} candidates are augmented with the corresponding $n-$gram attention score $\hat{\alpha}_p$ (alg. \ref{alg:ngram-att-rwd} line 6).
\paragraph{Automated attention-based annotation}
The steps for acquiring the extracted $n-$gram attention annotations \texttt{\small ATT-ANNOT} and the corresponding attention scores are shown in alg. \ref{alg:att-annot}. The attention annotation \texttt{\small ATT-ANNOT} is constructed during the first decoding step ($t=0$) by a simple $n-$gram ($n=1, \ldots, 5$) chunking method (i.e. a sliding window of size $n$ and $n+1$) over a filtered source document. The filtering of the source document uses the $10$-th percentile of the sorted attention scores $\alpha$ as a cutting attention threshold $\tau$. Given the attention threshold $\tau$, \texttt{\small ATT-ANNOT} is extracted from the $n-$gram chunks of the element-wise multiplication between the filtered attention scores $\alpha^{\delta}$ (\texttt{\small \textbf{line 3}}) and the source sequence (\texttt{\small \textbf{line 4}}). The final result is a list of $n-$grams with the corresponding attention scores $\hat{\alpha}_p$, where $\hat{\alpha}_p$ is the mean of the attention scores of the words $p_i$ composing the corresponding $n-$gram sequence $P$: $\hat{\alpha}_p[P] = \frac{1}{N_p} \sum_{i=1}^{N_p} \hat{\alpha}[p_i]$. The attention annotation extracted at the first decoding step is then used as a global reference for the subsequent decoding time steps ($t>0$) and serves as the reference for the \texttt{\small \textbf{NGRAM\_ATT\_REWARD}} function.
\begin{algorithm}[!ht]
\caption{\textbf{\texttt{\small EXTRACT\_ATT\_ANNOT}}}
\begin{algorithmic}[1]
\State Collect attention vector $\alpha$
\State Compute threshold $\tau$ based on percentile $10$ of $\alpha$
\State Binarize attention values based on threshold $\tau$
\NoNumber{$\alpha^{\delta}_{t} = [1$ if $\alpha_t>\tau$ else $-1]$}
\State Extract n-grams attention annotation
\NoNumber{\texttt{\small ATT-ANNOT} = \texttt{\small \textbf{CHUNK}}$(\alpha^{\delta} * x_{1 \ldots T_x})$}
\State Extract $\alpha$ based on sequential position
\NoNumber{\texttt{\small SEQ-POS} = \texttt{\small \textbf{CHUNK}} $(\alpha^\delta *$ $x-$\texttt{\small pos}$)$}
\NoNumber{$\hat{\alpha}[P] = \alpha[$\texttt{\small SEQ-POS}$_i]$}
\State Extract n-gram attention values $\hat{\alpha}_p$
\NoNumber{$\hat{\alpha}_p =$ \texttt{\small \textbf{MEAN}} $(\hat{\alpha}[P_1 \ldots P_n])$}
\State \textbf{Return} \{\texttt{\small ATT-ANNOT} $= f: K \xrightarrow{} V $, $K=\{P_1 \ldots P_n\}$ $V = [ \hat{\alpha_p} ]$\}
\end{algorithmic}
\label{alg:att-annot}
\end{algorithm}
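The following sketch illustrates the extraction in alg. \ref{alg:att-annot}, under the simplifying assumption that the $n-$grams are the contiguous runs of tokens surviving the percentile threshold (the token strings and attention values are illustrative):
\begin{verbatim}
import numpy as np

def extract_att_annot(alpha, x, max_n=5, pct=10):
    """alpha: (T_x,) attention weights from the first decoding step;
    x: list of T_x source tokens. Tokens below the pct-th percentile of
    alpha are masked out; surviving contiguous n-grams (n = 1..max_n)
    are stored with the mean attention of their member tokens."""
    tau = np.percentile(alpha, pct)       # cutting threshold
    keep = alpha > tau                    # binarized attention mask
    annot = {}
    for n in range(1, max_n + 1):
        for i in range(len(x) - n + 1):
            if keep[i:i + n].all():       # contiguous unmasked n-gram
                annot[tuple(x[i:i + n])] = float(alpha[i:i + n].mean())
    return annot

ann = extract_att_annot(np.array([.9, .8, .05, .7, .6]),
                        ["inquiry", "of", "death", "honours", "system"])
# e.g. ("honours", "system") is annotated with the mean of .7 and .6
\end{verbatim}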
\paragraph{Penalty score}
A penalty is given to beam candidates that are only partially composed of word tokens found in \texttt{\small ATT-ANNOT}. Such sequence candidates contain words with a high attention score, but are largely nonsensical. For this subset, the last tokens of the sequence candidates are assigned a negative probability ($-0.05$), as shown in alg. \ref{alg:ngram-att-rwd} \texttt{\small \textbf{line 8}}. For all candidates with a negative probability value in the beam tree, the logits are then set to zero. This penalty forces the sequence candidates to have extremely low log-probability values (approaching $-\infty$) and, correspondingly, lower ranks during the beam re-sorting stage. For sequence candidates that do not contain words or phrases in \texttt{\small ATT-ANNOT}, the logits output of the decoder network is not augmented and is kept \emph{as is} (\texttt{\small \textbf{line 10}}). Thus, sequences not featured in the source text are still considered as candidates for the final Seq2Seq prediction, but are ranked after those found in the source sequence. This last step intuitively aims to preserve the ``abstractive'' ability of the model, i.e. the ability to produce sequences that do not necessarily appear in the source but have a close semantic meaning to the corresponding document.
\subsection{Re-rank method}
\label{sec:heuristic2}
In addition to the proposed decoding with attention reward, a heuristic approach is employed to alleviate the sequence length bias and diversity issues. We adopt the concept of \emph{intra-} and \emph{inter-}sibling ranking in beam decoding \cite{LiJurafskiMutual2016} in a simple implementation. We refer to the heuristics adopted in this study as (1) \emph{pre-}intra sibling rank; (2) \emph{post-}intra sibling re-rank; and (3) \emph{post-}inter sibling re-rank. In the \emph{pre-}intra sibling ranking, at each decoding step we only consider the top-3 beams (word tokens) to be added into the tree queue. Given completed sequences (i.e. sequences with ``\texttt{\small <end>}'' as the last token), in the \emph{post-}intra sibling re-rank, among candidates with the same parent node and sequence length only the top-1 beam candidate is kept. Likewise, in the \emph{post-}inter sibling re-rank, only the top-5 candidates are considered as the final solution. While the \emph{pre-}intra sibling rank is based on the probability scores of the last tokens of the sequence candidates, the \emph{post-}intra and \emph{post-}inter sibling re-ranks are sorted by the normalized joint probability score of the completed sequences.
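As a sketch of the two post-hoc stages on completed hypotheses (the field names are illustrative, not those of the actual implementation; the \emph{pre-}intra pruning to the top-3 tokens happens during decoding and is not shown):
\begin{verbatim}
def rerank(completed, post_inter_k=5):
    """completed: list of dicts with 'tokens' (the sequence), 'parent'
    (id of the parent node) and 'norm_logp' (length-normalized joint
    log-probability). Post-intra: keep the single best hypothesis per
    (parent, length) group. Post-inter: keep the overall top-k."""
    groups = {}
    for h in completed:
        key = (h['parent'], len(h['tokens']))
        if key not in groups or h['norm_logp'] > groups[key]['norm_logp']:
            groups[key] = h                          # post-intra: top-1
    survivors = sorted(groups.values(), key=lambda h: -h['norm_logp'])
    return survivors[:post_inter_k]                  # post-inter: top-5

hyps = [{'tokens': ['a', 'b'], 'parent': 0, 'norm_logp': -1.2},
        {'tokens': ['a', 'c'], 'parent': 0, 'norm_logp': -0.8},
        {'tokens': ['d'],      'parent': 1, 'norm_logp': -0.5}]
final = rerank(hyps)   # drops ('a','b'): same parent and length as ('a','c')
\end{verbatim}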
\vspace{-1em}
\section{Increasing traversal breadth and depth}
\label{sec:empiric}
\begin{comment}
\begin{figure*}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\linewidth]{bs1_avg_plot_r3.pdf}
\caption{$n-$gram evaluation}
\label{fig:bs1_a}
\end{subfigure}%
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\linewidth]{bs1_uni_avg_plot_r3.pdf}
\caption{uni-gram evaluation}
\label{fig:bs1_b}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\linewidth]{bs2_div_score_min.pdf}
\caption{Changing MIN\_SEQ}
\label{fig:bs1_c}
\end{subfigure}%
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\linewidth]{bs2_div_score.pdf}
\caption{Changing MAX\_SEQ}
\label{fig:bs1_d}
\end{subfigure}%
\caption{The effect of changing beam size on decoding performance (a,b); The effect of changing minimum and maximum sequence length on the diversity score (c,d)}\label{fig:bs1}
\end{figure*}
\end{comment}
Conventional wisdom for tackling the aforementioned beam search decoding issues is to expand the beam size (traversal breadth) and the sequence length (traversal depth). Here, we show empirically that increasing both parameters neither guarantees optimal solutions nor significantly increases the performance gain in the current study.
\paragraph{Expanding Beam Size}
Figure \ref{fig:bs1_a} shows that there is no gain from increasing the beam size (up to $95$ beams) and utilizing a simple length normalization technique. The different trend of the uni-gram-based evaluation in figure \ref{fig:bs1_b}, however, indicates that the beam solution includes potential candidates partially (i.e. it contains partial words from the references), but fails to correctly form $n-$gram sequence candidates longer than a uni-gram. An example of the decoding results is shown in table \ref{table:gen_bs1}.
\begin{table}[!ht]
\centering
\resizebox{.45\textwidth}{!}{
\begin{tabular}{ | c | c | }
\hline
\bf No Length Normalization & \bf With Length Normalization \\
\hline
internet & internet\\
recommendations & internet analysis\\
support & recommendations\\
$<$unk$>$ & support\\
online & $<$unk$>$ \\
information & online\\
internet analysis & information\\
web & web\\
decision & decision\\
computer & computer\\
\hline
\end{tabular}
}
\caption{\label{table:gen_bs1} Decoding results. Beam size $b$ is set to 10.}
\vspace{-1em}
\end{table}
\begin{figure*}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{bs1_avg_plot_r3.pdf}
\caption{$n-$gram evaluation}
\label{fig:bs1_a}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{bs1_uni_avg_plot_r3.pdf}
\caption{uni-gram evaluation}
\label{fig:bs1_b}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{bs2_div_score.pdf}
\caption{\texttt{\small MAX\_SEQ} vs. diversity score}
\label{fig:bs1_d}
\end{subfigure}
\caption{The effect of changing beam size on decoding performance (a, b), and of changing the maximum sequence length \texttt{\small MAX\_SEQ} on the diversity score (c)}\label{fig:bs1}
\end{figure*}
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=.5\linewidth]{bs1_avg_plot_r3.pdf}
\caption{$n-$gram evaluation}
\label{fig:bs1_a}
\vspace{-1em}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.5\linewidth]{bs1_uni_avg_plot_r3.pdf}
\caption{uni-gram evaluation}
\label{fig:bs1_b}
\vspace{-1em}
\end{figure}
\end{comment}
\vspace{-1em}
\paragraph{Increasing sequence length}
\label{sec:seqlen}
Increasing the traversal depth (\texttt{\small MAX\_SEQ}), shown in figure \ref{fig:bs1_d}, also does not improve the diversity of beam candidates in the current task. We define the diversity score (the \emph{y} axis of figure \ref{fig:bs1_d}) as the ``diversity'' of the first word tokens in the prediction and ground truth sets, as follows.
\begin{equation}
\texttt{\small \textbf{DIV\_SCORE}} = \frac{\texttt{\small \textbf{COUNT}} (\{w^{(1)}\})}{\texttt{\small \textbf{COUNT}}(Y)}
\end{equation}
where $\{w^{(1)}\}$ denotes the set of unique words, each of which corresponds to the first token of a sequence, and $Y$ corresponds to the list of keyphrases. The purpose of this diversity score is to measure the repetitiveness of beam decoding based on the first word tokens of the generation results. The higher the diversity score (e.g. close to the diversity score of the ground truth references), the better the decoding algorithm overcomes the beam diversity issue.
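Computing the score is straightforward; a small sketch:
\begin{verbatim}
def div_score(keyphrases):
    """Fraction of distinct first tokens among the (predicted or
    reference) keyphrases, each keyphrase being a list of tokens."""
    first_tokens = {kp[0] for kp in keyphrases if kp}
    return len(first_tokens) / len(keyphrases)

# Three predictions starting with only two distinct words -> 2/3.
print(div_score([["internet", "analysis"], ["internet"], ["web", "mining"]]))
\end{verbatim}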
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=.5\linewidth]{bs2_div_score.pdf}
\caption{\texttt{\small MAX\_SEQ} vs. diversity score}
\label{fig:bs1_d}
\vspace{-1em}
\end{figure}
\end{comment}
\begin{comment}
\begin{equation}
\texttt{\small \textbf{DIV\_SCORE}} = \frac{\texttt{\small \textbf{COUNT}} (\{w^{(1)}\})}{\texttt{\small \textbf{COUNT}}(\texttt{\small SET})}
\label{eq:divscore1}
\end{equation}
\vspace{-.5em}
\end{comment}
\section{Experiments and results}
\label{sec:exp}
The Seq2Seq models in this study were trained on the KP20k corpus \cite{Meng2017Deep} ($\approx 530K$ documents, providing $\approx 2M$ training examples after preprocessing). For training, both the sources $x_i$ and the corresponding keyphrase labels $P_i^{(n)}$ are represented as word sequences, $x_i = \{x_1^{(i)}, x_2^{(i)}, \cdots x_{T_x}^{(i)}\}$ and $P_{i,n} = \{p_1^{(i,n)}, p_2^{(i,n)}, \cdots p_{T_y}^{(i,n)}\}$, where $T_x$ and $T_y$ are the maximum sequence lengths of the source document and the target keyphrase labels respectively. Note that each source document corresponds to $n$ keyphrase labels. By splitting the data sample $\{x_i,P_i\}$ into $n$ pairs $\{x_i,P_i^{(1)}\}, \ldots, \{x_i,P_i^{(n)}\}$, the training set is presented as text-keyphrase pairs, each containing only one source text sequence and one target keyphrase sequence. In the inference stage, standard evaluation data sets for keyphrase extraction were used: Inspec \cite{Hulth:2003} ($201$), Krapivin \cite{Krapivin:2009} ($201$), NUS \cite{Nguyen:2007} ($156$), and Semeval-2010 \cite{Kim:2010} ($242$).
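The splitting itself is a simple preprocessing step; as a sketch (the token lists are illustrative):
\begin{verbatim}
def expand_pairs(doc_tokens, keyphrase_lists):
    """Split one (document, n keyphrases) sample into n one-to-one
    text-keyphrase training pairs sharing the same source sequence."""
    return [(doc_tokens, kp) for kp in keyphrase_lists]

pairs = expand_pairs(["neural", "keyphrase", "generation"],
                     [["keyphrase", "generation"], ["seq2seq"]])
# -> two training pairs with the same source and different targets
\end{verbatim}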
\begin{table*}[!ht]
\centering
\resizebox{.75\textwidth}{!}{
\begin{tabular}{ | l | cc cc cc cc |}
\hline
\bf BS Decoding & \multicolumn{8}{|c|}{\bf CorrRNN} \\
& \multicolumn{2}{c}{\bf Inspec (201)$^{\dagger}$} & \multicolumn{2}{c}{\bf Krapivin (201)$^{\dagger}$} & \multicolumn{2}{c}{\bf NUS (156)$^{\dagger}$} & \multicolumn{2}{c|}{\bf Semeval-2010 (242)$^{\dagger}$} \\
& \bf $R@50$ & \bf $R@N$ & \bf $R@50$ & \bf $R@N$ & \bf $R@50$ & \bf $R@N$ & \bf $R@50$ & \bf $R@N$ \\
\hline
BS & 0.098 & 0.098 & 0.073 & 0.073 & 0.093 & 0.093 & 0.056 & 0.056 \\
BS++ & 0.091 & 0.091 & 0.094 & 0.094 & 0.098 & 0.098 & 0.059 & 0.059 \\
\bf \textit{BSDAR} & \bf 0.290 & \bf 0.372 & \bf 0.249 & \bf 0.317 & \bf 0.211 & \bf 0.255 & \bf 0.169 & \bf 0.221\\
\hline
\end{tabular}
}
\caption{\label{table:bs_corrRNN} Comparison of Beam Search (BS) decoding algorithms on CorrRNN performance. \small{\textit{$^{\dagger}$ size of test corpora}}}
\end{table*}
\begin{table*}[!ht]
\centering
\resizebox{.75\textwidth}{!}{
\begin{tabular}{ | l | cc cc cc cc |}
\hline
\bf BS Decoding & \multicolumn{8}{|c|}{\bf CorrRNN-L} \\
& \multicolumn{2}{c}{\bf Inspec (201)$^{\dagger}$} & \multicolumn{2}{c}{\bf Krapivin (201)$^{\dagger}$} & \multicolumn{2}{c}{\bf NUS (156)$^{\dagger}$} & \multicolumn{2}{c|}{\bf Semeval-2010 (242)$^{\dagger}$} \\
& \bf $R@50$ & \bf $R@N$ & \bf $R@50$ & \bf $R@N$ & \bf $R@50$ & \bf $R@N$ & \bf $R@50$ & \bf $R@N$ \\
\hline
BS & 0.045 & 0.045 & 0.038 & 0.038 & 0.043 & 0.043 & 0.026 & 0.026 \\
BS++ & 0.033 & 0.047 & 0.038 & 0.05 & 0.0409 & 0.052 & 0.020 & 0.029 \\
\bf \textit{BSDAR} & \bf 0.268 & \bf 0.327 & \bf 0.193 & \bf 0.226 & \bf 0.152 & \bf 0.182 & \bf 0.139 & \bf 0.166 \\
\hline
\end{tabular}
}
\caption{\label{table:bs_corrRNN_L} Comparison of Beam Search (BS) decoding algorithms on CorrRNN-L performance. \small{\textit{$^\dagger$ size of test corpora}}}
\end{table*}
\begin{table*}[!ht]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ | l | cccc | cccc | cccc | cccc |}
\hline
\bf Decoding & \multicolumn{4}{c|}{\bf Inspec (199)$^{\dagger}$} & \multicolumn{4}{c|}{\bf Krapivin (185)$^{\dagger}$} & \multicolumn{4}{c|}{\bf NUS (148)$^{\dagger}$} & \multicolumn{4}{c|}{\bf Semeval-2010 (242)$^{\dagger}$} \\
& \multicolumn{2}{c}{\bf CorrRNN} & \multicolumn{2}{c|}{\bf CorrRNN-L} & \multicolumn{2}{c}{\bf CorrRNN} & \multicolumn{2}{c|}{\bf CorrRNN-L} & \multicolumn{2}{c}{\bf CorrRNN} & \multicolumn{2}{c|}{\bf CorrRNN-L} & \multicolumn{2}{c}{\bf CorrRNN} & \multicolumn{2}{c|}{\bf CorrRNN-L} \\
&\bf {\small R} & \bf {\small ROUGE-L} &\bf {\small R} & \bf {\small ROUGE-L} &\bf {\small R} & \bf {\small ROUGE-L} &\bf {\small R} & \bf {\small ROUGE-L} &\bf {\small R} & \bf {\small ROUGE-L} &\bf {\small R} & \bf {\small ROUGE-L} &\bf {\small R} & \bf {\small ROUGE-L} &\bf {\small R} & \bf {\small ROUGE-L} \\
\hline
BS & 0.027 & 0.207 & 0.066 & 0.211 & 0.026 & 0.257 & 0.053 & 0.226 & 0.034 & 0.240 & 0.048 & 0.197 & 0.015 & 0.194 & 0.026 & 0.154\\
BS++ & 0.022 & 0.194 & 0.078 & 0.329 & 0.024 & 0.247 & \bf 0.081 & \bf 0.369 & 0.029 & 0.228 & 0.064 & 0.304 & 0.013 & 0.182 & 0.031 & \bf 0.260\\
\bf \textit{BSDAR} & \bf 0.038 & \bf 0.249 & \bf 0.079 & \bf 0.331 & \bf 0.071 & \bf 0.300 & 0.064 & 0.348 & \bf 0.037 & \bf 0.260 & \bf 0.065 & \bf 0.310 & \bf 0.031 & \bf 0.225 & \bf 0.041 & 0.253 \\
\hline
\end{tabular}
}
\caption{\label{table:ABS1} Abstractive Performance. Scores are based on \texttt{\small{Micro-Average Recall (R)}} and \texttt{\small ROUGE-L average F1-score}. \small{\textit{$^{\dagger}$ size of test corpora after preprocessing.}}}
\end{table*}
\begin{table*}[!ht]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ | l | cccc cccc |}
\hline
\bf Decoding & \multicolumn{2}{c}{\bf Inspec (138)$^{\dagger}$} & \multicolumn{2}{c}{\bf Krapivin (85)$^{\dagger}$} & \multicolumn{2}{c}{\bf NUS (93)$^{\dagger}$} & \multicolumn{2}{c|}{\bf Semeval-2010 (212)$^{\dagger}$} \\
& \bf CorrRNN & \bf CorrRNN-L & \bf CorrRNN & \bf CorrRNN-L & \bf CorrRNN & \bf CorrRNN-L & \bf CorrRNN & \bf CorrRNN-L \\
\hline
BS & 0.236 & 0.189 & 0.236 & 0.183 & 0.215 & 0.150 & 0.196 & 0.142 \\
BS++ & 0.249 & 0.328 & 0.248 & 0.344 & 0.234 & 0.279 & 0.199 & 0.269 \\
\bf \textit{BSDAR} & \bf 0.405 & \bf 0.423 & \bf 0.359 & \bf 0.408 & \bf 0.335 & \bf 0.349 & \bf 0.277 & \bf 0.285 \\
\hline
\end{tabular}
}
\caption{\label{table:ABS2} Abstractive Performance on longer sequences ($n-$grams, $n=3 \ldots 5$). Scores are based on \texttt{\small{ROUGE-L average F1-score}}. \small{\textit{$^{\dagger}$ size of test corpora after preprocessing.}}}
\end{table*}
\paragraph{Beam Parameters}
For the first decoding time step ($t=0$), the beam size ($b$) is set to a larger number ($b=100$) than for the remaining decoding time steps ($b=50$). The number of hypotheses \texttt{\small num\_hyps}, representing the partial solutions (queue) of sequence candidates to be added into the memory, is set to $200$.
\paragraph{Evaluation Metrics}
We employ Micro-average Recall (R), ROUGE-L average F1-score, and the diversity metric (section \ref{sec:seqlen}) to evaluate the performance of the beam search decoding algorithms in this study.
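For reference, micro-average recall pools the matches over the whole corpus before dividing, as in the following sketch (exact matching of token sequences is assumed):
\begin{verbatim}
def micro_avg_recall(predictions, references):
    """predictions, references: per-document lists of keyphrases, each
    keyphrase a list of tokens. Total matches / total references."""
    matched = total = 0
    for preds, refs in zip(predictions, references):
        pred_set = {tuple(p) for p in preds}
        matched += sum(tuple(r) in pred_set for r in refs)
        total += len(refs)
    return matched / total if total else 0.0

# One document, two references, one of them recovered -> recall 0.5.
print(micro_avg_recall([[["a"], ["b", "c"]]], [[["a"], ["d"]]]))
\end{verbatim}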
\begin{table*}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ | l | ccc | ccc | ccc | ccc | }
\hline
\bf Model & \multicolumn{3}{|c|}{\textbf{Inspec-L (189)$^{\dagger}$}} & \multicolumn{3}{|c|}{\textbf{Krapivin-L (118)$^{\dagger}$}} & \multicolumn{3}{|c|}{\textbf{NUS-L (121)$^{\dagger}$}} & \multicolumn{3}{|c|}{\textbf{Semeval2010-L (234)$^{\dagger}$} } \\
& \bf $R@10$ & \bf $R@50$ & ROUGE-L & \bf $R@10$ & \bf $R@50$ & ROUGE-L & \bf $R@10$ & \bf $R@50$ & ROUGE-L & \bf $R@10$ & \bf $R@50$ & ROUGE-L \\
\hline
TfIdf & 0.082 & 0.252 & 0.635 & \bf 0.079 & 0.189 & 0.562 & \bf 0.044 & 0.110 & 0.478 & 0.039 & 0.157 & \bf 0.441\\
CorrRNN $+$ BS & 0.0 & 0.0 & 0.256 & 0.0 & 0.0 & 0.265 & 0.0 & 0.0 & 0.251 & 0.0 & 0.0 & 0.236\\
CorrRNN $+$ BS++ & 0.005 & 0.005 & 0.294 & 0.011 & 0.011 & 0.306 & 0.009 & 0.009 & 0.299 & 0.006 & 0.006 & 0.261\\
CorrRNN $+$ \bf \textit{BSDAR} & 0.007 & 0.130 & 0.643 & 0.009 & 0.123 & 0.560 & 0.009 & 0.070 & 0.4960 & 0.009 & 0.058 & 0.426 \\
CorrRNN-L $+$ BS & 0.0 & 0.0 & 0.138 & 0.0 & 0.0 & 0.151 & 0.0 & 0.0 & 0.152 & 0.0 & 0.0 & 0.134 \\
CorrRNN-L $+$ BS++ & 0.0 & 0.003 & 0.308 & 0.009 & 0.009 & 0.341 & 0.005 & 0.005 & 0.320 & 0.002 & 0.003 & 0.299 \\
CorrRNN-L $+$ \bf \textit{BSDAR} & \bf 0.102 & \bf 0.342 & \bf 0.664 & 0.057 & \bf 0.219 & \bf 0.572 & 0.035 & \bf 0.164 & \bf 0.4962 & \bf 0.049 & \bf 0.162 & 0.436 \\
\hline
\end{tabular}
}
\caption{\label{table:bs_extractiveness1} Comparison between models in subsets with longer keyphrases ($n-$grams, $n=3 \ldots 5$). \textit{\small $^{\dagger}$ size of test corpora after preprocessing.}}
\end{table*}
\subsection{Decoding performance}
Tables \ref{table:bs_corrRNN} and \ref{table:bs_corrRNN_L} show the comparison between our proposed method (\textit{\textbf{BSDAR}}) and standard beam search decoding algorithms for Seq2Seq prediction. To show that simply applying the heuristic rules (section \ref{sec:heuristic2}) does not guarantee a solution to the decoding issues, we include the heuristics-only beam decoding (BS++) as a comparison. In general, the results for both \textbf{CorrRNN} and \textbf{CorrRNN-L} show that \textit{\textbf{BSDAR}} significantly improves the recall score, by about $20$--$30$ percentage points.
We also show that while expanding the retrieval set (from $R@50$ to $R@N$, where $N=50 \ldots 200$) yields no gain for standard beam decoding (BS) and heuristic-based decoding (BS++), the performance of the proposed decoding method increases significantly. This result indicates that the solutions based on standard beam search (BS) and heuristic-based beam search (BS++) mainly contain noisy and non-relevant sequences, as compared to the predictions produced by \textit{\textbf{BSDAR}}.
\subsection{Abstractive performance}
For a neural generation task, it is also important to maintain the model's ability to generate ``abstractive'' sequences. The results in table \ref{table:ABS1} show that \textit{\textbf{BSDAR}} maintains a high performance on generating sequences not featured in the source text, given subsets with ``absent'' keyphrase references of variable sequence length ($n=1, \ldots, 5$). We further evaluate the ``abstractive'' performance of the decoding algorithms on longer sequences (table \ref{table:ABS2}). Since this task is generally challenging for any model, we use the ROUGE-L average $F_1-$score metric to match the prediction and ground truth sets. In addition to maintaining a high abstractive performance on the subset with longer target sequences, \textit{\textbf{BSDAR}} also discloses the ``actual'' intrinsic performance of \textbf{CorrRNN-L}. Compared to the low decoding performance of \textbf{CorrRNN-L} under standard beam search (BS), the decoding performance of \textbf{CorrRNN-L} under \textit{\textbf{BSDAR}} is considerably higher. This indicates that the model has actually learnt to attend to relevant words in the source text, but the standard decoding algorithm fails to include those words in the final generated sequences. Understanding the intrinsic performance of a complex Seq2Seq model, as exemplified by this empirical result, is essential, specifically for further addressing the challenging ``abstractive generation'' task for any neural sequence model.
\subsection{On sequence length bias issue}
To measure whether the proposed method (\textit{\textbf{BSDAR}}) can overcome the algorithm's bias towards shorter sequences, we compare the Seq2Seq decoding performance based on \textit{\textbf{BSDAR}} with a \textbf{Tf-Idf} unsupervised keyword extractor on a subset with longer target sequences ($n-$grams, $n \geq 3$). Intuitively, this subset is challenging for both the neural network and the non-neural (Tf-Idf) approach, since both may suffer from sequence length bias, resulting in predictions with shorter sequence length. The results in table \ref{table:bs_extractiveness1} show that the proposed solution improves the decoding performance of the Seq2Seq models, outperforming Tf-Idf on three data sets, as compared to Seq2Seq with standard beam decoding (BS) and heuristics-only beam decoding (BS++).
\subsection{On diversity issue}
We also show that \textit{\textbf{BSDAR}} can overcome the diversity issue of the beam decoding algorithm. As shown in table \ref{tab:diversity_compar}, \textit{\textbf{BSDAR}} maintains a high diversity score (section \ref{sec:seqlen}), close to the diversity measure of the ground truth references. Furthermore, compared to the standard beam (BS) and heuristic beam (BS++) decodings, \textit{\textbf{BSDAR}} shows a considerably better decoding performance for recalling uni-gram ($R^{(1)}$), bi-gram ($R^{(2)}$), and $n-$gram references ($R^{(n)}, n \geq 3$).
\begin{table}[!ht]
\centering
\resizebox{.45\textwidth}{!}{
\begin{tabular}{|l|c | c |c |c |}
\hline
\bf Keyphrase set & \texttt{\small DIV}$^{(n)}$ & \bf $R^{(1)}$ & \bf $R^{(2)}$ & \bf $R^{(n)}$ \\
\hline
Ground truth & 0.942 & N/A & N/A & N/A\\
BS & 0.058 & 0.120 & 0.017 & 0.\\
BS++ & 0.661 & 0.112 & 0.030 & 0.004 \\
\textit{\textbf{BSDAR}} & \bf 0.746 & \bf 0.301 & \bf 0.198 & \bf 0.223 \\
\hline
\end{tabular}
}
\caption{Diversity measure of decoding algorithms across data sets}
\label{tab:diversity_compar}
\end{table}
\section{Conclusion}
We present an approach to overcome two decoding issues in neural keyphrase generation by incorporating the attention vector as a re-scoring method in the beam search decoding algorithm. We show empirically that the proposed solution (\textit{\textbf{BSDAR}}) not only performs well in improving the generation of longer and more diverse sequences, but also maintains the Seq2Seq ability to predict ``abstractive'' keyphrases, i.e. semantically relevant keyphrases that are not present in the source text.
\bibliographystyle{ieeetr}
\subsection{Previous work}
In 1962, B\"uchi was the first to introduce finite automata on infinite words. He needed them to solve some fundamental decision problems in mathematics and logic (\cite{B62}, \cite{M66}, \cite{R69}). They became a popular area of research due to their elegance and the tight relation between automata on infinite objects and monadic second-order logic. Nowadays, automata are seen as a very useful tool in the verification and specification of nonterminating systems. This is why the complexity of problems concerning automata has recently been considered a hot topic (e.g. \cite{kv97}, \cite{AK08}).
To serve different applications, different types of automata were introduced. In his proof of the decidability of the satisfiability of S1S, B\"uchi introduced nondeterministic automata on infinite words (NBW), which are a natural tool to model things that happen infinitely often. In a B\"uchi automaton, some of the states are {\it accepting}, and a run on an infinite word is accepting if and only if it visits some accepting state infinitely often (\cite{B62}). Dually, a run of a co-B\"uchi automaton (NCW) is accepting if and only if it visits non-accepting states only finitely often. There are also automata with more complicated accepting conditions -- the best known among them are parity automata, Streett automata, Rabin automata and Muller automata.
As in the case of finite automata on finite words, four basic types of transition relation can be considered: {\it deterministic, nondeterministic, universal} and {\it alternating}. {\bf In this paper, from now on, we only consider nondeterministic automata}.
The problem of comparing the power of different types of automata is well studied and understood. For example, it is easy to see that not every language that can be recognized by a B\"uchi automaton on infinite words (such languages are called $\omega$-regular languages) can also be recognized by a co-B\"uchi automaton. The most popular example of an $\omega$-regular language that cannot be expressed by an NCW is the language $L=\{w \;|\; w$ has infinitely many 0's$\}$ over the alphabet $\{0, 1\}$. On the other hand, it is not very hard to see that every language that can be recognized by a co-B\"uchi automaton is $\omega$-regular.
As we said, the problem of comparing the power of different types of automata is well studied. But we are quite far from knowing everything about the number of states needed to simulate an automaton of one type by an automaton of another type -- see for example the survey \cite{K07} to learn about the open problems in this area.
In this paper we consider the problem of the cost of simulating a B\"uchi automaton on infinite words by a co-B\"uchi automaton (if such an NCW exists), left open in \cite{K07}: given a number $n\in {\cal N}$, for what $f(n)$ can we be sure that every nondeterministic B\"uchi automaton with at most $n$ states, which can be simulated by a co-B\"uchi automaton, can be simulated by a co-B\"uchi automaton with at most $f(n)$ states?
There is a large gap between the known upper bound and the known lower bound for such a translation. The best currently known translation goes via an intermediate deterministic Streett automaton, involving an exponential blowup of the number of states (\cite{S88}, \cite{AK08}). More precisely, for an NBW with $n$ states, we get an NCW with $2^{O(n \log n)}$ states. For a long time the best known lower bound for $f$ was nothing more than the trivial bound $n$. In 2007 it was shown that there is an NBW with an equivalent NCW such that there is no NCW equivalent to this NBW on the same structure (\cite{KMM04}). The first non-trivial (and the best currently known) lower bound is linear -- the result of \cite{AK08} is that, for each $n\in {\cal N}$, there exists an NBW with $n$ states such that there is an NCW which recognizes the same language, but every such NCW has at least $3(n - 1)/2$ states.
There is a good reason why it is hard to show a lower bound for the above problem. The language (or rather, to be more precise, the class of languages) used to show such a bound has to be hard enough for co-B\"uchi automata, but on the other hand not too hard, because some (actually, most) $\omega$-regular languages cannot be expressed by an NCW at all. The idea in the proof of the $3(n - 1)/2$ lower bound in \cite{AK08} was to define a language which can be easily split into parts that can be recognized by an NBW but cannot be recognized by an NCW. The language they used was $L_k = \{w \in \{0, 1\}^{\omega} \;|\; $ both $0$ and $1$ appear at least $k$ times in $w\}$. Let $L_k^i = \{w \in \{0, 1\}^{\omega} \;|\; i$ appears infinitely often in $w$ and $(1-i)$ appears at least $k$ times in $w\}$; then it is easy to see that $L_k = L_k^0 \cup L_k^1$, each $L_k^i$ can be recognized by a B\"uchi automaton of size $k$, and $L_k^i$ cannot be recognized by any co-B\"uchi automaton. It is still, however, possible to build an NCW that recognizes $L_k$ with $3k+1$ states and indeed, as was proved in \cite{AK08}, every NCW recognizing $L_k$ has at least $3k$ states.
\subsection{Our contribution -- a nonlinear lower bound}
In this paper we give a strong improvement of the lower bound from \cite{AK08}. We show that, for every integer $k$, there is a language $L_k$ such that $L_k$ can be recognized by an NBW with $\Theta(k^2)$ states, whereas every NCW that recognizes this language has at least $\Theta(k^{7/3})$ states. Actually, the smallest NCW we know that recognizes $L_k$ has $\Theta(k^{3})$ states, and we believe that this automaton is indeed minimal. In terms of the function $f$ from the above subsection, this means that $f(n)$ is at least $cn^{7/6}$ for some $c$ (since $n$ is $\Theta(k^2)$) and, if our conjecture concerning the size of a minimal automaton for $L_k$ is true, $f(n)$ would be at least $cn^{3/2}$ for some constant $c$.
The technical part of this paper is organized as follows. In subsection \ref{preliminaria} we give some basic definitions. In subsection \ref{Lk} we present the definition of the language $L_k$. In the same subsection we show how this language can be recognized by a B\"uchi automaton with $O(k^2)$ states and how $L_k$ can be recognized by a co-B\"uchi automaton with $O(k^3)$ states. The main theorem, saying that
every co-B\"uchi automaton that recognizes $L_k$ has at least $\Theta(k^{7/3})$ states is formulated in the end of subsection \ref{Lk} and the rest of the paper is devoted to its proof.
\section{Technical Part}
\subsection{Preliminaries}\label{preliminaria}
A \emph{nondeterministic $\omega$-automaton} is a quintuple $\langle \Sigma, Q, q_0, \delta , \alpha\rangle$, where $\Sigma$ is an alphabet, $Q$ is a set of states, $q_0\in Q$ is an initial state, $\delta \subseteq Q \times \Sigma \times Q$ is a transition relation and $\alpha \subseteq Q$ is an accepting condition.
A \emph{run} of $\omega$-automaton over a word $w=w_1w_2\dots$ is a sequence of states $q_0q_1q_2\dots$ such that for every $i \geq 0$, $q_i\in Q$ and $\langle q_i, w_{i+1}, q_{i+1}\rangle \in \delta$.
Depending on the type of the automaton, we have different definitions of an \emph{accepting run}. For a B\"uchi automaton, a run is accepting if it visits some state from the accepting condition $\alpha$ infinitely often. In the case of a co-B\"uchi automaton, a run is accepting if only states from the set $\alpha$ are visited infinitely often in this run. For a given nondeterministic $\omega$-automaton $\cal{A}$ and a given word $w$, we say that $\cal{A}$ \emph{accepts} $w$ if there exists an accepting run of $\cal{A}$ on $w$. The words accepted by $\cal{A}$ form the language of $\cal{A}$, denoted by $L(\cal{A})$.
We say that a co-B\"uchi automaton ${\cal{A}}=\langle \Sigma , Q, q_0, \delta , \alpha\rangle$ is in the {\bf normal form} iff for each $\langle q, a, q'\rangle \in \delta$ if $q$ is in $\alpha$, then also $q'$ is in $\alpha$. Note that for a given NCW
${\cal{A}}=\langle \Sigma, Q, q_0, \delta , \alpha\rangle$
the automaton ${\cal{A'}}=\langle \Sigma, Q', \langle q_0, 0\rangle, \delta', \alpha\times\{1\}\rangle$, where
$Q'=Q\times\{0\} \cup \alpha \times \{1\}$ and
$\delta'=\{\langle \langle q, i\rangle, a, \langle q', j \rangle\rangle \; | \; \langle q, a, q'\rangle \in \delta \wedge i \leq j
\wedge \langle q, i\rangle,\langle q', j \rangle\in Q' \}$
is in the normal form, recognizes the same language and has at most $2|Q|$ states.
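The construction is easy to mechanize; the following sketch (with states as arbitrary hashable values; an illustration, not part of the proofs) builds $\cal{A'}$ directly from the definition:
\begin{verbatim}
def normal_form(Q, q0, delta, alpha):
    """Q: set of states; delta: set of (q, a, q') transitions;
    alpha: set of accepting states. States of A' are pairs (q, i);
    the flag i can only grow along a transition, and the accepting
    states of A' are exactly the pairs (q, 1) with q in alpha."""
    Qp = {(q, 0) for q in Q} | {(q, 1) for q in alpha}
    deltap = {((q, i), a, (qp, j))
              for (q, a, qp) in delta
              for i in (0, 1) for j in (0, 1)
              if i <= j and (q, i) in Qp and (qp, j) in Qp}
    return Qp, (q0, 0), deltap, {(q, 1) for q in alpha}

# Toy automaton over a one-letter alphabet with accepting state 'b'.
Qp, q0p, dp, accp = normal_form({'a', 'b'}, 'a',
                                {('a', 0, 'b'), ('b', 0, 'b')}, {'b'})
\end{verbatim}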
For a given word $w=w_1, w_2, \dots$, let $w[i, j]=w_i, w_{i+1}, \dots, w_j$ and let $w[i, \infty]=w_i, w_{i+1}, \dots$.
An accepting run $q_0, q_1, \dots$ of a co-B\"uchi automaton in the normal form on a word $w=w_1, w_2, \dots$ is called \emph{shortest} if it reaches
an accepting state as early as possible, that is, if for each accepting run $p_0, p_1, \ldots$ of this automaton on $w$ it holds that
\subsection{Languages $L_k$ and their automata}\label{Lk}
Let $k \geq 64$ be a fixed natural number and let $\mathfrak{A}_k=\{1,2,\ldots, k\}$.
The set $\Sigma_k=\{a,\bar{a} \;| \: a\in \mathfrak{A}_{2k}\}$ will
be the alphabet of our language $L_k$.
Let us begin with the following informal interpretation of words
over $\Sigma_k$. Each symbol $j\in \Sigma_k$ should be read as ``agent $j$ makes a promise''.
Each symbol $\bar{j}\in \Sigma_k$ should be read as ``$j$ fulfills his promise''. The language $L_k$ consists (roughly speaking)
of the words in which there is someone who at least $2 k$ times fulfilled his promises, but there are also
promises which were never fulfilled.
To be more formal:
\begin{definition}
For a word $w\in \Sigma_k^\omega$, where $w=w_1w_2\ldots $ and $i\in \cal N$, define the interpretation $h_i(w)$ as:
\begin{itemize}
\item $h_i(w) = \sharp$ ~if $w_i \in \mathfrak{A}_{2k}$ and $\bar{w_i}$ occurs in $w[i+1,\infty]$; (it is the fulfillment that counts, not a promise).
\item $h_i(w) = 0 $ if $w_i \in \mathfrak{A}_{2k}$ and $\bar{w_i}$ does not occur in $w[i+1,\infty]$; (unfulfilled promises are read as 0).
\item Suppose $w_i = \bar{s}$ for some $s\in \mathfrak{A}_{2k}$. Then $h_i(w)=s$ if there is $j<i$ such that $w_j=s$ and $\bar{s}$ does not occur in the word
$w[j,i-1]$, and $h_i(w)=\sharp$ if there is no such $j$ (one first needs to make a promise, in order to fulfill it).
\end{itemize}
The interpretation $h(w)$ is now defined as the infinite word $h_1(w)h_2(w)\ldots $.
\end{definition}
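Although $h$ is defined on infinite words, it can be evaluated on the finite prefix of an ultimately periodic word $u v^\omega$, since the future occurrences of a symbol are then determined by the rest of $u$ and by $v$. The following sketch (promises encoded as integers, fulfillments as tagged pairs; an illustration only, not used in the proofs) computes $h$ on such a prefix:
\begin{verbatim}
def interp(u, v):
    """h on the prefix u of the ultimately periodic word u.v^omega.
    A symbol is an int j (the promise j) or ('bar', j) (its fulfillment);
    bar(j) occurs in the future of position i iff it occurs in u after i
    or anywhere in the period v."""
    out = []
    for i, c in enumerate(u):
        if isinstance(c, int):                 # a promise j
            fulfilled = ('bar', c) in u[i+1:] or ('bar', c) in v
            out.append('#' if fulfilled else 0)
        else:                                  # a fulfillment bar(s)
            s = c[1]
            open_promise = False
            for d in reversed(u[:i]):          # scan backwards for s ...
                if d == c:                     # ... but bar(s) closes it
                    break
                if d == s:
                    open_promise = True
                    break
            out.append(s if open_promise else '#')
    return out

# w = 1 2 bar(2) (bar(1))^omega: h(w) starts with #, #, 2.
print(interp([1, 2, ('bar', 2)], [('bar', 1)]))
\end{verbatim}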
Now we are ready to formally define the language $L_k$:
\begin{definition}
$L_k$ is the set of such words $w\in \Sigma_k^\omega$ that:
\begin{itemize}
\item
either there is at least one $0$ in $h(w)$ and there exists $s\in \mathfrak{A}_{2k}$ which occurs at least $2 k$ times in $h(w)$,
\item
or there exists $i$ such that $h_j(w)=\sharp$ for all $j>i$.
\end{itemize}
\end{definition}
\begin{figure}
\begin{center}
\includegraphics[width = \textwidth]{nbw.pdf}
\end{center}
\caption{The $\omega$-automaton recognizing $L_k$ -- all differences between the NBW version and the NCW version are in the body of $A_i$ (fig. \ref{f-ai}). For better readability, the label $\neg i$ stands for the alternative of every label except $i$.}
\label{f-NBW}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width = 270pt]{ai.pdf}
\end{center}
\caption{Automaton $A_i$ -- the same for NBW and NCW, modulo the body of $B_{i, j}$ (fig. \ref{f-bij-buchi} for NBW and fig. \ref{f-bij-cobuchi} for NCW)}
\label{f-ai}
\end{figure}
It is easy to see that each $w\in \Sigma_k^\omega$ satisfies at least one of the following three conditions:
there is $s\in \mathfrak{A}_{2k}$ such that $h_i(w)=s$ for infinitely many numbers $i$, or there are infinitely many occurrences of $0$ in $h(w)$, or
there is only a finite number of occurrences of symbols from $\mathfrak{A}_{2k}$ in $w$. Using this observation, we can represent $L_k$ in the following way:
\begin{eqnarray}
\nonumber L_k & = & \{ viw \ | v \in \Sigma_k^* \wedge \\
& ( &\nonumber (i \in \mathfrak{A}_{2k} \wedge w \in (\Sigma_k\setminus\{\overline{i}\})^{\omega} \wedge \exists j \in \mathfrak{A}_{2k}\;\; h_m(viw)=j \text{ for infinitely many} \\
& & \text{ numbers } m) \label{inf0} \\
& \nonumber \vee & (i \in \mathfrak{A}_{2k} \wedge w \in (\Sigma_k\setminus\{\overline{i}\})^{\omega} \wedge \exists j \in \mathfrak{A}_{2k} \;\; h_m(viw)=j \text{ for at least }\\
& &2 k \text{ numbers } m \text{ such that } m\leq |v|) \label{inf1} \\
& \vee & (w \in \{\overline{1}, \dots, \overline{2k}\}^{\omega}))\} \label{noinf}
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[width = 160pt]{bij-buchi.pdf}
\end{center}
\caption{Automaton $B_{i,j}$ in the NBW case}
\label{f-bij-buchi}
\end{figure}
Keeping in mind the above representation, it is easy to build a small NBW recognizing $L_k$ (see Figures \ref{f-NBW}, \ref{f-ai} and \ref{f-bij-buchi}). The accepting state at the bottom left of Figure \ref{f-NBW} checks if condition (\ref{noinf}) is satisfied, and the other states (except for the states in the boxes $A_i$ and the initial state) check if condition (\ref{inf1}) is satisfied. Reading the input $y$, the automaton first guesses the number $j$ from condition (\ref{inf1}) and makes sure that $j$ occurs at least $2 k$ times in $h(y)$. Then it guesses
$i$ from condition (\ref{inf1}), accepts, and remains in the accepting state forever, unless it spots $\bar{i}$. This part of the automaton also works correctly in the co-B\"uchi case.
The most interesting condition is (\ref{inf0}). It is checked in the following way. At first, the automaton waits in the initial state until it spots $i$ from condition (\ref{inf0}). Then, it goes to $A_i$, guesses $j$ and goes to the module $B_{i, j}$, which checks if $i$ does not occur any more, and if both $j$ and $\overline{j}$ occur infinitely often. This can be summarized as:
\begin{theorem} \label{mainth}
Language $L_k$ can be recognized with a nondeterministic B\"uchi automaton with $\Theta(k^2)$ states.
\end{theorem}
\begin{figure}
\begin{center}
\includegraphics[width = 270pt]{bij-cobuchi.pdf}
\end{center}
\caption{Automaton $B_{i,j}$ in the NCW case}
\label{f-bij-cobuchi}
\end{figure}
Condition (\ref{inf0}) cannot be checked by any NCW. However, it can be replaced by the condition
\begin{eqnarray}
\nonumber i \in \mathfrak{A}_{2k} \wedge w \in (\Sigma_k\setminus\{\overline{i}\})^{\omega} \wedge \exists j \in \mathfrak{A}_{2k} \;\; h_m(viw)=j \text{ for at least } 2 k \text{ numbers } m \label{inf0'}
\end{eqnarray}
which leads us to an NCW as in Figures \ref{f-NBW}, \ref{f-ai} and \ref{f-bij-cobuchi}. In this case, automaton $B_{i, j}$ needs to count to $2 k$, so it needs $\Theta(k)$ states. Therefore, the whole NCW has $\Theta(k^3)$ states. Actually, we believe that every NCW recognizing $L_k$ indeed needs $\Theta(k^3)$ states.
Now we are ready to state our main theorem:
\begin{theorem} \label{main-theorem}
Every NCW recognizing $L_k$ has at least $k \cdot \frac{k^{4/3}}{8}$ states.
\end{theorem}
The rest of this paper is devoted to the proof of this theorem. In subsection \ref{disjoint} we will define, for each co-B\"uchi automaton in the normal form recognizing $L_k$, a family of $k$ disjoint sets of states, and in subsection \ref{mtheorem} we will show that each such set has at least $\frac{k^{4/3}}{4}$ states. As we have seen in subsection \ref{preliminaria}, for a given NCW with $n$ states we can always build an NCW in the normal form with at most $2n$ states, which finally leads to the lower bound $\frac 1 2 \cdot k \cdot \frac{k^{4/3}}{4} = k \cdot \frac{k^{4/3}}{8}$.
\subsection{The k disjoint sets of states}\label{disjoint}
Let ${\cal A}=\langle \Sigma_k , Q, q_0, \delta , \alpha\rangle$ be an NCW in the normal form with $N$ states that recognizes $L_k$.
Let $w_{i,j} = i(j^N\overline{1},\overline{2},\dots,\overline{i-1}, \overline{i+1}, \dots, \overline{2k})^{\omega}$.
For every $i\neq j$ let $q_0$, $q_{i, j}^1$, $q_{i, j}^2$, $q_{i, j}^3, \dots$ be a fixed shortest accepting run of $\cal A$ on $w_{i, j}$.
Words $w_{i,j}$ will be the main tool in our attempt to fool the automaton if it has too few states, so let us comment on their structure. First notice that $i$, the very first symbol of $w_{i,j}$, will turn into the only $0$ in $h(w_{i,j})$ -- this is, among other reasons, because for all $m\neq i$ the symbol $\overline{m}$ occurs infinitely many times in $w_{i,j}$. Notice also that if we replaced
the blocks $j^N$ in the definition of $w_{i,j}$ by just a single $j$, then the word would still be in $L_k$ -- since we do not count promises but fulfillments, the remaining $j$'s are almost redundant. It is only in the proof of Lemma \ref{observations}(ii) that we will need them. In the rest of the proof we will only be interested in one state of $\cal A$ per each such block of symbols $j$.
For this reason we define $block(l) = N + 1 + l(N+2k-1)$ as the function that points to the index of the state in run $q_0, q_{i, j}^1, q_{i, j}^2, q_{i, j}^3, \dots$ just after reading the $l$-th block $j^N$.
Let $Q_{i, j} = \{q_{i,j}^{block(c)} | c \in {\cal N}\}$.
\begin{lemma} \label{lemma1}
For every $i, j, m, l \in \mathfrak{A}_{2k}$ such that $m \neq i \neq j \neq l$ and $m \neq j$, the sets $Q_{i, m}$ and $Q_{j, l}$ are disjoint.
\end{lemma}
\begin{proof}
Suppose that there exist $i, j, m, l \in \mathfrak{A}_{2k}$ and $s, t \in {\cal N}$ such that $m \neq i \neq j \neq l$, $m \neq j$ and $q_{i, m}^{block(s)} = q_{j, l}^{block(t)}$. Let $v = w_{i, m}[0, block(s)] . w_{j, l}[block(t) + 1, \infty]$. This word is accepted by $\cal A$, because $q_0$, $q_{i,m}^1$, $\dots$, $q_{i,m}^{block(s)}, q_{j,l}^{block(t) + 1}, q_{j,l}^{block(t) + 2}, \dots$ is an accepting run of $\cal A$ on $v$.
The only letters without the overline in $v$ are $i$, $m$ and $l$. However, the only overlined letter that does not occur infinitely often in $v$ is $\overline{j}$. This letter is different from $i$, $m$ and $l$ because of the assumptions we made. Therefore $0$ does not occur in $h(v)$ and $v\not \in L_k$.\qed
\end{proof}
We say that $l$ is \emph{huge} if $l>k$ and that $l$ is \emph{small} otherwise.
For every $i$ let $Q_i = \bigcup \{Q_{i, j} \;|\; j \text{ is small} \}$. A simple conclusion from Lemma \ref{lemma1} is that for all huge $i, j$ with $i \neq j$ the sets $Q_i$ and $Q_j$ are disjoint. This implies that Theorem \ref{main-theorem} will be proved once we prove the following lemma:
\begin{lemma} \label{lemma2}
For each huge $i \in \mathfrak{A}_{2k}$ the size of the set $Q_i$ is greater than $\frac{k^{4/3}}{4}$.
\end{lemma}
\subsection{Combinatorial lemma}\label{clemma}
The $n \times m$ \emph{state matrix} is a two-dimensional matrix with $n$ rows and $m$ columns. We say that $n \times m$ state matrix is $l$-\emph{painted} if each of its cells is labeled with one of $l$ colors and the minimal distance between two cells
in the same row and of the same color is at least $m$.
For a painted $n \times m$ state matrix, we say that a cell $M_{i, j}$ is \emph{on the left border} if $j=1$, and \emph{on the right border} if $j = m$. We say that $M_{i, j}$ is \emph{a successor} of $M_{i', j'}$ if $i = i'$ and $j = j'+1$.
A \emph{path} $w$ through a painted $n \times m$ state matrix $M$ is a sequence of cells $c_1$, $c_2$, $\dots$, $c_z$ such that $c_1$ is on the left border, $c_z$ is on the right border, and for each $s<z$ either $c_{s+1}$ is a successor of $c_s$ (we say that ``there is a right move from
$c_s$ to $c_{s+1}$'') or $c_s$ and $c_{s+1}$ are of the same color (we say that ``there is a jump from
$c_s$ to $c_{s+1}$'').
We say that a path $w$
is \emph{good} if there are no $k$ consecutive right moves in $w$, and no jump leads to (a cell in) a row already visited by this path. Notice that, in particular, a good path visits at most $k$ cells in any row.
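The conditions defining a good path can be checked mechanically; the following sketch (0-indexed cells; an illustrative encoding, not used in the proof) verifies them for a candidate path:
\begin{verbatim}
def is_good_path(path, colors, k, m):
    """path: list of cells (row, col) with col in range(m);
    colors: dict (row, col) -> color. Checks: starts at col 0, ends at
    col m-1, each move is a right move or a jump to an equally colored
    cell, no k consecutive right moves, and no jump enters a row that
    the path has already visited."""
    if path[0][1] != 0 or path[-1][1] != m - 1:
        return False
    visited_rows, run = {path[0][0]}, 0
    for (r1, c1), (r2, c2) in zip(path, path[1:]):
        if r1 == r2 and c2 == c1 + 1:                # a right move
            run += 1
            if run >= k:                             # k consecutive right moves
                return False
        elif colors[(r1, c1)] == colors[(r2, c2)]:   # a jump
            if r2 in visited_rows:
                return False                         # jump into a visited row
            run = 0
        else:
            return False
        visited_rows.add(r2)
    return True
\end{verbatim}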
Our main combinatorial tool will be:
\begin{lemma} \label{combinatorial-lemma}
Let $M$ be an $\lfloor \frac {k^{4/3}} {4} \rfloor$-painted $k \times \lfloor \frac{k^{4/3}} {4} \rfloor$ state matrix. Then there exists a good path on $M$.
\end{lemma}
The proof of this lemma is deferred to subsection \ref{clemma-proof}.
\subsection{From automaton to state matrix}\label{mtheorem}
We are now going to prove Lemma \ref{lemma2}. Let a huge $i \in \mathfrak{A}_{2k}$ be fixed in this subsection
and assume that $|Q_i| < \frac {k^{4/3}} 4$.
We will show that there exists a word $w$ such that $\cal A$ accepts $w$ and no agent fulfills its promises at least $2 k$ times in $w$.
Let $j$ be a small number from $\mathfrak{A}_{2k}$. Let us begin with some basic facts about $Q_{i, j}$:
\begin{lemma}\label{observations}
\begin{enumerate}
\item[(i)] There exists a number $l$ such that for every $s < l$ the state $q_{i, j}^{block(s)}$ is not in $\alpha$ and for every $s\geq l$ the state $q_{i, j}^{block(s)}$ is in $\alpha$. Define $acc(i, j)=l$.
\item[(ii)] \label{contradiction} No accepting state from $Q_i$ can be reached on any run of $\cal A$ before some agent has fulfilled its promises $2 k-1$ times. This also implies that $acc(i, j)\geq 2 k -1$.
\item[(iii)] \label{positional} The states $q_{i, j}^{block(0)}, q_{i, j}^{block(1)}, \dots, q_{i, j}^{block(acc(i, j))}$ are pairwise different.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item[(i)] This is because $\cal A$ is in the normal form, so once the run enters $\alpha$ it stays there, and the run, being accepting, eventually enters $\alpha$.
\item[(ii)] While reading a block of $N$ symbols $j$, the automaton passes through $N+1$ states, so some state is visited at least twice. If this state were accepting, then a pumping argument would be possible -- we could simply replace the suffix of the word after this block with the word $j^\omega$, and the new word would still be accepted, despite the fact that it is not in $L_k$.
\item[(iii)] Suppose that for some $s < t \leq acc(i, j)$ the states $q_{i, j}^{block(s)}$ and $q_{i, j}^{block(t)}$ are equal (and non-accepting). The words $w_{i,j}[block(s)+1, \infty]$ and $w_{i,j}[block(t)+1, \infty]$ are identical. Then a pumping argument works again -- we can find a shorter accepting run by pumping out the states $q_{i, j}^{block(s)}, \dots, q_{i, j}^{block(t)-1}$. But this contradicts the assumption that our run is shortest.\qed
\end{enumerate}
\end{proof}
We want to show that $|Q_i| \geq \frac {k^{4/3}} 4$. If for some small $j$ we have $acc(i, j) \geq \frac {k^{4/3}} 4 - 1$ then, thanks to Lemma \ref{observations}(iii), we are done. So, for the rest of this subsection, we assume that
$acc(i, j) < \frac {k^{4/3}} 4 - 1$ for each small $j$.
We will now construct a $\lfloor \frac {k^{4/3}} {4} \rfloor$-painted $k \times \lfloor \frac {k^{4/3}} {4} \rfloor$ state matrix $M$ in such a way that its $m$'th row will, in a sense,
represent the accepting run on the word $w_{i,m}$. More precisely, take a $k \times \lfloor \frac {k^{4/3}} {4} \rfloor$ matrix $M$,
call the cells $M_{m,j}$ of $M$ with $j\leq acc(i,m)$ \emph{real cells}, and call the cells $M_{m,j}$ of $M$ with $j > acc(i,m)$ \emph{ghosts}. For a ghost cell $M_{m,j}$ and the smallest natural number $l$ such that $j-lk\leq acc(i,m)$, call the real
cell $M_{m,j-lk}$ \emph{the host of} $M_{m,j}$. Notice that each ghost has its host, since, by Lemma \ref{observations} (ii), $acc(i,m) \geq 2 k - 1$, which means that there are at least $k$ real cells in each row.
If $M_{m,j}$ is real then define its color as $q_{i, m}^{block(j-1)}$. If $M_{m,j}$ is a ghost then define its color as the color of its host. Now see that $M$ is indeed a $\lfloor \frac {k^{4/3}} {4} \rfloor$ - painted $k \times \lfloor \frac {k^{4/3}} {4} \rfloor$ state matrix --
the condition concerning the shortest distance between cells of the same
color in the same row of $M$ is now satisfied by Lemma \ref{observations} (iii) and the condition concerning the number of colors is satisfied, since we assume that
$|Q_i| < \frac{k^{4/3}}{4}$.
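For the reader who prefers an operational view, the construction of $M$ can be rendered as the following Python sketch. This is an illustration only, not part of the proof: the list \texttt{states} (whose $m$-th entry is assumed to list the run states $q_{i,m}^{block(0)},\dots,q_{i,m}^{block(acc(i,m))}$) and all names are our own ad hoc choices.
\begin{verbatim}
import math

def build_state_matrix(k, states):
    # states[m] lists the run states q^{block(0)}, ..., q^{block(acc)}
    # of row m, so acc(m) = len(states[m]) - 1; Lemma (ii) guarantees
    # acc(m) >= 2k - 1, i.e. at least k real cells per row.
    n = math.floor(k ** (4 / 3) / 4)
    M = []
    for row_states in states:
        acc = len(row_states) - 1
        assert acc >= 2 * k - 1, "each row needs at least k real cells"
        row = []
        for j in range(1, n + 1):
            jj = j
            while jj > acc:        # a ghost: walk back k steps to its host
                jj -= k
            row.append(row_states[jj - 1])  # color of (real) cell M_{m,jj}
        M.append(row)
    return M
\end{verbatim}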
By Lemma \ref{combinatorial-lemma} we know that there is a good path in $M$. This means that Lemma \ref{lemma2} will
be proved once we show:
\begin{lemma}\label{translating}
If there exists a good path in $M$, then there exists a word $w \not \in L_k$ such that $w$ is accepted by $\cal A$.
\end{lemma}
\begin{proof}
Suppose $r$ is a good path in $M$ and $c$ is the first ghost cell on $r$. Let $c'$ be the direct predecessor of $c$ on $r$. If the move from $c'$ to $c$ was a right move then define a new path $p$ as
the prefix of $r$ ending with $c$. If the move from $c'$ to $c$ was a jump, then suppose $c''$ is the host of $c$, and define $p$ as the following path: first take the prefix of $r$ ending with $c'$. Then jump to $c''$ (it is possible, since the color of a ghost is the color of its host). Then make at most $k-1$ right moves to the last real cell in this row.
It is easy to see that $p$ satisfies all the conditions defining a good path, except that it does not reach the right border of $M$.
Let $p$ be a concatenation of words $p_1, p_2, \ldots, p_z$, such that each move between $p_x$ and $p_{x+1}$ is a jump but there are no jumps inside any $p_x$. This means that
each $p_x$ is contained in some row of $M$; let $\beta(x)$ be the number of this row. This also means, since $p$ is (almost) a
good path, that $|p_x|\leq k$ for each $x$.
Let $v_i = \overline{1}\,\overline{2}\cdots\overline{i-1}\,\overline{i+1}\cdots\overline{2k}$. Now define an infinite word $w$ as follows:
$$w = i\beta(1)^N(v_i\beta(1)^N)^{|p_1|-1}
(v_i \beta(2)^N)^{|p_2|-1}\ldots
(v_i \beta(z)^N)^{|p_z|-1}\beta(z)^\omega $$
To see that $w\not\in L_k$, notice that a symbol $s\in \mathfrak{A}_{2k}$ occurs in $h(w)$ only if $s=\beta(x)$ for some $x\in\{1,2,\ldots,z\}$, and that it occurs at most $|p_x|+1\leq k$ times in $w$. The fact that $\cal A$ accepts $w$ follows from the construction of the path $p$ and from
Lemma \ref{observations} (ii).\qed
\end{proof}
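The assembly of $w$ from the path $p$ is mechanical and can be mirrored in code. The following Python sketch is an illustration only; the encoding of a barred symbol $\bar s$ as the two-character string \verb|~s| and all names are our own assumptions, and only the finite prefix of $w$ is materialised, the tail $\beta(z)^\omega$ being reported separately.
\begin{verbatim}
def build_word_prefix(i, k, N, betas, seg_lens):
    # betas[x] is beta(x+1), seg_lens[x] is |p_{x+1}|; a barred symbol
    # s-bar is encoded as the string "~s".
    v_i = [f"~{s}" for s in range(1, 2 * k + 1) if s != i]
    w = [str(i)] + [str(betas[0])] * N
    for beta, length in zip(betas, seg_lens):
        w += (v_i + [str(beta)] * N) * (length - 1)
    return w, str(betas[-1])   # prefix, and the symbol repeated forever
\end{verbatim}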
\subsection{Proof of the combinatorial lemma} \label{clemma-proof}
Let $n=\lfloor \frac {k^{4/3}} 4 \rfloor$ and $M$ be an $n$-painted $k \times n $ state matrix.
We split the matrix $M$ into matrices $M^0, M^1, \ldots, M^{\lceil \frac{2n}{k}\rceil - 1}$,
each of them with $k$ rows and each of them (possibly except for the last one) with $\frac k 2$ columns,
such that $M^i$ contains the columns $i\frac k 2 +1, i\frac k 2 +2, \dots, \min(i\frac k 2 + \frac k 2, n)$.
The matrices $M^0, M^1, \ldots, M^{\lceil \frac{2n}{k}\rceil - 2}$ will
be called \emph{multicolumns}.
We are going to build a path $w=c_1c_2\ldots c_z$ through $M$
satisfying the following:
\begin{itemize}
\item if $w$ has a jump from $c_j$ to $c_{j+1}$ then both $c_j$ and
$c_{j+1}$ belong to the same multicolumn;
\item $w$ has exactly ${\lceil \frac{2n}{k}\rceil - 1}$ jumps, one in
each multicolumn;
\item no jump on $w$ leads to a previously visited row of $M$.
\end{itemize}
Clearly, such a path will be a good path. This is because the width of
each multicolumn is $\frac k 2$, and each sequence of consecutive
right moves on $w$ will be contained in two adjacent multicolumns
(except for the last such sequence, which is contained in the last
multicolumn and $M^{\lceil \frac{2n}{k}\rceil - 1}$).
Let $s = \frac {k^{1/3}}{2}$. Since $\lceil s \rceil = \lceil \frac
{2 \cdot k^{4/3}/4}{k} \rceil \geq \lceil \frac{2n}{k}\rceil$,
the number $\lceil s \rceil -1$ is not smaller than the number of jumps we want to make.
Now we concentrate on a single multicolumn $M^i$, which is a matrix
with $k$ rows and with $\frac k 2$ columns. We will call
two rows of such a multicolumn \emph{brothers} if at least one cell
of one of those rows has the same color as at least one cell of the
other (i.e., two rows are brothers if a path through $M^i$ can make a
jump between them).
Suppose some of the rows of the multicolumn $M^i$ belong to some set
$D^i$ of \emph{dirty} rows. The rows which are not dirty will be
called \emph{clean}.
A color will be called clean if it occurs in some of the clean rows.
A row will be called \emph{poor} if it has fewer than $\lceil s \rceil$ clean
brothers. One needs to be careful here -- in the following
procedure, as more rows get dirty, more rows may also become poor:
\noindent{\bf Procedure} (Contaminate a single multicolumn($D^i$,$M^i$) )\\
\begin{tabular}{c p{10.8cm}}
\hspace*{30pt} &{\bf while} there are clean poor rows (with respect to the current set $D^i$ of
dirty rows) in $M^i$, select any clean poor row and all his brothers,
and make them dirty (changing $D^i$ accordingly).
\end{tabular}\\
{\bf end of procedure}
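The procedure admits the following straightforward rendering in Python (an illustration only; a multicolumn is encoded as a list of rows, each row a list of colors, and \texttt{s\_ceil} stands for $\lceil s \rceil$). Marking all brothers of the selected row is implemented by marking its clean brothers, since the remaining brothers are dirty already.
\begin{verbatim}
def contaminate(multicolumn, dirty, s_ceil):
    # multicolumn[r] is the list of colors in row r; 'dirty' is a set of
    # row indices, updated in place exactly as D^i in the text.
    k = len(multicolumn)
    colors = [set(row) for row in multicolumn]

    def clean_brothers(r):
        return [q for q in range(k)
                if q != r and q not in dirty and colors[r] & colors[q]]

    while True:   # as long as there is a clean poor row, select one
        poor = next((r for r in range(k)
                     if r not in dirty and len(clean_brothers(r)) < s_ceil),
                    None)
        if poor is None:
            return dirty
        # make the poor row and all its (not yet dirty) brothers dirty
        dirty |= {poor, *clean_brothers(poor)}
\end{verbatim}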
We would like to know how many new dirty rows can be produced as a
result of an execution of the above procedure.
Each execution of the body of the while loop makes dirty at most $\lceil s \rceil$
rows and decreases the number of clean colors
by at least $\frac k 2$ -- none of the colors of the selected clean
poor row remains clean after the body of the while loop is executed.
Since there are at most $n$ colors in the multicolumn (as $M$ is
$n$-painted), the body of the while loop can be executed at most $\frac n {k/2} \leq \lceil s \rceil$
times, which means that at most $\lceil s \rceil^2$ new dirty rows can be produced.
Notice that after an execution of the procedure, none of the clean rows is poor.
Now we are ready for the next step:
\noindent{\bf Procedure} (Contaminate all multicolumns)\\
\hspace*{30pt}Let $D^{\lceil \frac{2n}{k}\rceil - 1}=\emptyset$;\\
\hspace*{30pt}{\bf for} $i= \lceil \frac{2n}{k}\rceil - 2$ {\bf down to 0}\\
\hspace*{60pt}Let $D^i = D^{i+1}$;\\
\hspace*{60pt}Contaminate a single multicolumn($D^i$,$M^i$);\\
{\bf end of procedure}
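Under the same encoding assumptions, the driver procedure can be sketched as follows, reusing the function \texttt{contaminate} from the previous sketch; the list \texttt{multicolumns} is assumed to contain all the pieces $M^0,\dots,M^{\lceil \frac{2n}{k}\rceil - 1}$, of which the last is skipped.
\begin{verbatim}
def contaminate_all(multicolumns, s_ceil):
    # Sweep from the next-to-last piece down to M^0, carrying the dirty
    # set over, so that D^{i+1} is a subset of D^i.
    dirty, D = set(), [set() for _ in multicolumns]
    for i in range(len(multicolumns) - 2, -1, -1):
        dirty = contaminate(multicolumns[i], dirty, s_ceil)
        D[i] = set(dirty)
    return D
\end{verbatim}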
We use here the convention that a set $D^i$ of rows is identified with
the set of numbers of those rows. Thanks to this we could
write the first line of the loop body in the above procedure, which says ``consider the
dirty rows of $M^{i+1}$ to be also dirty in $M^i$''.
Suppose $D^0,D^1\ldots D^{\lceil \frac{2n}{k}\rceil - 2}$ are sets of
dirty rows in
multicolumns $M^0$,$M^1$, $\ldots$, $M^{\lceil \frac{2n}{k}\rceil - 2}$
resulting from an execution of the procedure Contaminate all
multicolumns.
Notice that for each $0\leq i \leq \lceil \frac{2n}{k}\rceil - 2$
the inclusion $D^{i+1} \subseteq D^i$ holds. In other words, if a row
is clean in
$M^i$, then it is also clean in $M^{i+1}$.
The following lemma explains why clean rows are of interest for us:
\begin{lemma}
Suppose $w=c_1c_2\ldots c_z$ is a path through the matrix consisting
of the first $i$ multicolumns of $M$ (or, in other words,
of the first $\frac{ki}{2}$ columns of $M$). Suppose (i) $w$ has
exactly one jump in each multicolumn, and each jump leads to a row which was not visited before, (ii) if there is
a jump from $c_j$ to $c_{j+1}$ then both $c_j$ and $c_{j+1}$ belong to
the same multicolumn. Suppose, finally, that
(iii) the
cell where $w$ reaches the right border of the matrix, belongs to a
clean row $r$. Then $w$ can be extended to a path through the matrix consisting of the first $i+1$ multicolumns of
$M$, in such a way that this extended path will also satisfy
conditions
(i)-(iii).
\end{lemma}
\begin{proof}
The only thing that needs to be proved is that one can
jump, in multicolumn $M^i$, from row $r$ to some clean row which was
not visited before.
Since, by assumption, $r$ was clean in $M^{i-1}$, it is also clean in
$M^i$. Since there are no clean poor rows in $M^i$, we know
that $r$ has at least $\lceil s \rceil$ clean brothers. At most $i$ of them were
visited so far by the path, where of course $i \leq \lceil s \rceil-1 $.\qed
\end{proof}
Now, starting from an empty path and a clean row in $M^0$ and using
the above lemma $\lceil \frac {2n} {k}\rceil - 2$ times we can construct a path $w$ as described
in the beginning of this subsection and finish the proof of Lemma \ref{combinatorial-lemma}. The only
lemma we still need for that is:
\begin{lemma}
$|D^0|< k$. In other words, there are clean rows in $M^0$.
\end{lemma}
\begin{proof}
Let $l=\lceil s \rceil - 2$ be the index of the last multicolumn. By the above observations about the two procedures, the number of dirty rows in $D^{l-i}$ can be bounded by $(i+1) \cdot \lceil s \rceil^2$. For $i=l$, we get $(\lceil s \rceil - 1) \cdot \lceil s \rceil^2$, which is not greater than $s(s+1)^2=\frac {k^{1/3}} {2}(\frac {k^{1/3}} {2} + 1)^2$, which is, finally, less than $k$, because $k\geq 8$.\qed
\end{proof}
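The closing estimate is also easy to confirm by machine; the following Python sketch (an illustration only) checks $(\lceil s \rceil - 1) \cdot \lceil s \rceil^2 < k$ for a large range of $k \geq 8$.
\begin{verbatim}
import math

# Check (ceil(s) - 1) * ceil(s)^2 < k, with s = k^(1/3) / 2, for k >= 8.
for k in range(8, 100000):
    s = k ** (1 / 3) / 2
    assert (math.ceil(s) - 1) * math.ceil(s) ** 2 < k
\end{verbatim}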
\section{Conclusions}
The scaling exponents $\tau=1.75$, $\phi_{I}=0.67$ and $\phi_{II}=1.33$
are different from Zipf's result on word frequency, where the
exponents are $\tau=2.0$ and $\phi=1.0$.
The power-law relation between $N$ and $S$ and its exponent
$\chi=0.65$ observed in the family name distribution seem to be nontrivial.
One may expect that this scaling law breaks down if the number of available family
names in a society is too small compared to the population.
Cohen {\it et~al.} \cite{cohen97} found that this situation occurs in the
word frequency distribution --- for very large $S$, $N(S)$ approaches a
plateau. They found that the exponent $\chi$ for the number of different
words in a text is also a function of the length of the text.
The scaling law should also break down in societies where the family names are strictly
inherited from fathers to sons without any creation of new family names.
In fact, the expected number of sons per parent is one under
a stationary, constant population.
Then the survival probability $P(t)$ of a family name after
$t$ generations decreases as $P(t) \sim t^{-0.5}$.
As a result, after many generations, only a few family names
will dominate the whole population in the society.
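One heuristic behind the exponent $-0.5$ is to caricature the number of bearers of a single name as a critical random walk absorbed at zero, whose survival probability is known to decay like $t^{-1/2}$. The following Python sketch (an illustration of this caricature only, not of the actual genealogical model) estimates the survival fraction; quadrupling $t$ should roughly halve it.
\begin{verbatim}
import random

def p_survive(t, trials=20000):
    # Fraction of critical random walks (start at 1, absorbed at 0)
    # still alive after t generations.
    alive = 0
    for _ in range(trials):
        n = 1
        for _ in range(t):
            n += random.choice((-1, 1))
            if n == 0:
                break
        alive += (n > 0)
    return alive / trials

for t in (10, 40, 160):   # quadrupling t should roughly halve P(t)
    print(t, p_survive(t))
\end{verbatim}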
Such domination is precisely the situation in countries
where the creation of new family names has been strictly restricted
for many generations, such as Korea.
The total number of family names in Korea is about $250$, while
the total population is about $50$ million.
On the contrary, Japan has the richest variety of family names in the world:
the total number of family names is about $132,000$ and
the population is about $125$ million.
The creation of a new family name in Japan is also very rare.
However, historically, most of the Japanese family names were created
about $120$ years ago \cite{history}.
This short history of family names may explain why
the diversity and the scaling properties of family names
are preserved as they were at the creation.
In summary, we have investigated the distribution of Japanese family names
for five different regional communities in Japan. From our
empirical investigation, a power-law relation between the total number of
different family names and the total population appears in a telephone
directory, with the exponent $\chi = 0.65$. Also we have
found that the name-variety-size distribution shows a nice power-law
scaling with the exponent $\tau = 1.75$ and the cutoff exponent $\alpha =
0.37$. These scaling properties are consistent
for five regional communities and for randomly generated societies
with different populations.
In the size-rank distribution of family names we have obtained
a crossover behaviour from one exponent, $\phi_I = 0.67$, to another
exponent, $\phi_{II} = 1.33$, at the crossover point $r^* \sim S^{\alpha'}$
with $\alpha' = 0.5$. This result is consistent
even if the specific family names of higher rank in one community are
different from those in other communities.
We have also derived scaling relations between these
exponents.
\bigskip
Acknowledgements
\medskip
We thank I. Grosse, P.Ch. Ivanov and S. Havlin for helpful discussions.
\section{Introduction}\label{s_int}
Let $\hol(\bd)$ denote the space of holomorphic functions on the
unit ball $\bd$ of $\cd$, $d\ge 1$.
For $0< p < \infty$ and $f\in\hol(\bd)$, the standard integral means $\mpf_p (f, r)$
are defined as
\[
\mpf_p (f, r) = \left( \int_{\spd} |f(r\za)|^p \, d\sid(\za)\right)^\frac{1}{p}, \quad 0\le r <1,
\]
where $\sid$ denotes the normalized Lebesgue measure on the unit sphere $\spd$.
For $p=\infty$, put
\[
M_\infty (f, r) = \sup\{|f(z)|: |z|=r\}, \quad 0\le r <1.
\]
A function $\we: [0,1) \to (0, +\infty)$ is called a weight if
$\we$ is continuous and non-decreasing.
A weight $\we$ is said to be \textsl{log-convex} if
$\log\we(r)$ is a convex function of $\log r$, $0<r<1$.
It is known that $\mpf_p(f, r)$, $0\le r<1$, is a log-convex weight
for any $f\in \hol(\bd)$, $f(0)\neq 0$, $d\ge 1$, $0<p \le \infty$.
In fact, for $d=1$, this result constitutes the classical Hardy convexity theorem (see \cite{H14}).
The corresponding proofs are extendable to all dimensions $d$, $d\ge 2$
(see, for example, \cite[Lemma~1]{XZ11}).
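This convexity is easy to probe numerically. The following Python sketch (an illustration only; the test function $f(z)=1+2z+z^3$ is an arbitrary choice) tabulates $\log \mpf_2(f,r)$ against $\log r$ for $d=1$ and checks that the second differences are non-negative.
\begin{verbatim}
import numpy as np

def M_p(p, radii, n_theta=2048):
    # M_p(f, r) for f(z) = 1 + 2z + z^3 via averaging over the circle
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    z = np.exp(1j * theta)
    return np.array([np.mean(np.abs(1 + 2 * r * z + (r * z) ** 3) ** p)
                     ** (1 / p) for r in radii])

x = np.linspace(-3.0, -0.01, 60)          # x = log r
y = np.log(M_p(2.0, np.exp(x)))
assert np.all(np.diff(y, 2) >= -1e-9)     # second differences: convexity
\end{verbatim}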
In the present paper, for each $0<p\le \infty$, we show that
the functions $\mpf_p(f, r)$, $f\in \hol(\bd)$, $f(0)\neq 0$, are generic log-convex weights in the sense
of the following equivalence:
Let $u, v: X \to (0, +\infty)$.
We say that $u$ and $v$ are equivalent ($u\asymp v$, in brief)
if there exist constants $C_1, C_2>0$ such that
\[
C_1 u(x) \le v(x) \le C_2 u(x), \quad x\in X.
\]
\begin{theorem}\label{t_lp_gen}
Let $d\ge 1$ and let $\we: [0,1)\to (0, +\infty)$ be a log-convex weight.
There exists $f\in \hol(\bd)$ such that
\[
\mpf_p (f,r) \asymp \we(r),\quad 0\le r <1,
\]
for each $0<p\le \infty$.
\end{theorem}
Also, we consider volume integral means for $0<q<\infty$.
The logarithmic convexity properties for such integral means have been recently investigated
in a series of papers (see, for example, \cite{WXZ15, WZ14, XZ11}).
Applying Theorem~\ref{t_lp_gen}, we obtain, in particular, the following result.
\begin{cory}\label{c_vol_example}
Let $d\ge 1$, $0<q< \infty$ and let $\we: [0,1)\to (0, +\infty)$ be a weight.
The following properties are equivalent:
\begin{itemize}
\item[(i)]
$\we(r)$ is equivalent to a log-convex weight on $[0,1)$;
\item[(ii)]
there exists $f\in\hol(\bd)$ such that
\[
\left( \frac{1}{\vlm(r\bd)} \int_{r\bd} |f(z)|^q\, d\vlm(z) \right)^{\frac{1}{q}} \asymp \we(r), \quad 0< r <1,
\]
where $\vlm$ denotes the normalized volume measure on $\bd$.
\end{itemize}
\end{cory}
\subsection*{Organization of the paper}
Section~\ref{s_prf_thmLp} is devoted to the proof of Theorem~\ref{t_lp_gen}.
Corollary~\ref{c_vol_example} and other results related to volume integral means
are obtained in Section~\ref{s_volume}.
\section{Proof of Theorem~\ref{t_lp_gen}}\label{s_prf_thmLp}
Put $\Dbb = B_1$ and $\Tbb = \partial \Dbb$.
For a log-convex weight $\we$ on $[0,1)$,
Theorem~1.2 from \cite{AD15} provides functions $f_1, f_2\in\hol(\Dbb)$
such that $|f_1(z)| + |f_2(z)|\asymp \we(|z|)$, $z\in\Dbb$.
These functions are almost sufficient for a proof of Theorem~\ref{t_lp_gen} with $d=1$.
However, we will need additional technical information contained in \cite{AD15}.
Namely, applying Lemma~2.2 from \cite{AD15} and arguing as in the proof of Theorem~1.2 from \cite{AD15},
we obtain the following lemma.
\begin{lemma}\label{l_blms}
Let $\we$ be a log-convex weight on $[0,1)$.
There exist $a_k>0$, $n_k\in\Nbb$, $k=1,2,\dots$,
and constants $r_0\in (\frac{9}{10}, 1)$, $C_1, C_2 >0$
with the following properties:
\begin{align}
n_k &< n_{k+1},\quad k=1,2,\dots;
\label{e_blms_nk} \\
\sum_{k=1}^\infty a_k r^{n_k} &\le C_1 \we(r),\quad r_0 \le r <1;
\label{e_blms_up} \\
|g_1(r\za)| + |g_2(r\za)| &\ge C_2 \we(r),\quad r_0 \le r <1,\ \za\in\Tbb;
\label{e_blms_low}
\end{align}
where
\[
g_1(z) = \sum_{j=1}^\infty a_{2j-1} z^{n_{2j-1}}, \quad
g_2(z) = \sum_{j=1}^\infty a_{2j} z^{n_{2j}}, \quad z\in \Dbb.
\]
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{t_lp_gen}]
We are given a log-convex weight $\we$ on $[0,1)$.
First, assume that $d=1$.
Let $a_k$ and $n_k$, $k=1,2,\dots$, $g_1$ and $g_2$ be those provided by Lemma~\ref{l_blms}.
By \eqref{e_blms_low},
\[
|g_1(r\za)|^2 + |g_2(r\za)|^2 \ge C_3 \we^2(r),\quad r_0 \le r <1,\ \za\in\Tbb.
\]
Using \eqref{e_blms_nk} and integrating the above inequality with respect to Lebesgue measure $\sigma_1$ on $\Tbb$,
we obtain
\[
\sum_{k=1}^\infty a_k^2 r^{2n_k} \ge C_3 \we^2(r),\quad r_0\le r <1.
\]
Therefore,
\[
1+ \sum_{k=1}^\infty a_k^2 r^{2n_k} \ge C_4 \we^2(r),\quad 0\le r <1.
\]
So, by \eqref{e_blms_nk}, we have
\begin{equation}\label{e_disk_m2_low}
M_2(f,r) \ge \we(r), \quad 0\le r <1,
\end{equation}
where
\[
\sqrt{C_4} f(z) = 1+ \sum_{k=1}^\infty a_k z^{n_k}, \quad z\in\Dbb.
\]
Also, \eqref{e_blms_up} guarantees that
\begin{equation}\label{e_disk_up}
|f(r\za)| \le C_0 \we(r), \quad 0\le r <1,\ \za\in\Tbb.
\end{equation}
Hence, $M_2(f,r)\le M_\infty(f,r) \le C\we(r)$, $0\le r <1$.
Combining these estimates and \eqref{e_disk_m2_low}, we conclude that
$M_2(f,r)\asymp M_\infty(f,r) \asymp \we(r)$.
Thus, $\mpf_p(f,r) \asymp \we(r)$, $0\le r <1$, for any $2\le p \le \infty$.
Also, we claim that $\mpf_p(f,r) \asymp \we(r)$ for any $0 < p < 2$.
Indeed,
\eqref{e_disk_m2_low} and \eqref{e_disk_up} guarantee that
\[
\sigma_1 \left\{\za\in\Tbb: |f(r\za)|\ge \frac{\we(r)}{2}\right\} \ge \frac{1}{2 C_0^2}.
\]
Therefore,
$M_\infty(f,r) \ge \mpf_p(f,r) \ge C_p\we(r)$, $0\le r <1$.
So, the proof of the theorem is finished for $d=1$.
Now, assume that $d\ge 2$.
Let $W_k$, $k=1,2,\dots$, be a Ryll--Wojtaszczyk sequence (see \cite{RW83}).
By definition, $W_k$ is a holomorphic homogeneous polynomial of degree $k$,
$\|W_k\|_{L^\infty(\spd)} =1$ and $\|W_k\|_{L^2(\spd)} \ge \de$ for a constant $\de>0$ which does not depend on $k$.
Put
\[
F(z) = 1+ \sum_{k=1}^\infty a_k W_k(z), \quad z\in \bd.
\]
Clearly, \eqref{e_blms_up} guarantees that
$|F(r\za)| \le C \we(r)$, $0\le r <1$, $\za\in\spd$.
Also, the polynomials $W_k$, $k=1,2,\dots$, are mutually orthogonal in $L^2(\spd)$; hence,
$M_2(F, r) \ge C(\de) \we(r)$, $0\le r<1$.
So, arguing as in the case $d=1$, we conclude that $\mpf_p(F, r) \asymp \we(r)$ for any $0<p\le \infty$, as required.
\end{proof}
As indicated in the introduction, for any $f\in \hol(\bd)$,
the function $\mpf_p(f, r)$ is log-convex;
hence, Theorem~\ref{t_lp_gen} implies the following analog of Corollary~\ref{c_vol_example}.
\begin{cory}\label{c_means}
Let $d\ge 1$, $0<p\le \infty$ and let $\we: [0,1)\to (0, +\infty)$ be a weight.
The following properties are equivalent:
\begin{itemize}
\item[(i)]
$\we(r)$ is equivalent to a log-convex weight on $[0,1)$;
\item[(ii)]
there exists $f\in\hol(\bd)$ such that
\[
\mpf_p(f,r) \asymp \we(r), \quad 0\le r <1.
\]
\end{itemize}
\end{cory}
\section{Volume integral means}\label{s_volume}
In this section, we consider integral means based on volume integrals.
Recall that $\vlm$ denotes the normalized volume measure on the unit ball $\bd$.
For $f\in \hol(\bd)$, $0<q<\infty$ and
a continuous function $u: [0,1) \to (0, +\infty)$, define
\begin{align*}
\mpf_{q,u} (f, r)
&= \left( \frac{1}{r^{2d}} \int_{r\bd} |f(z)|^q u(|z|) \, d\vlm(z)\right)^{\frac{1}{q}}, \quad 0< r <1; \\
\mpf_{q,u} (f, 0) &= |f(0)| u^{\frac{1}{q}}(0).
\end{align*}
\begin{prop}\label{p_volume}
Let $0<q<\infty$ and let $\wgt, \we: [0,1)\to (0, +\infty)$ be log-convex weights.
There exists $f\in \hol(\bd)$ such that
\[
\mpf_{q,\frac{1}{\wgt}} (f,r) \asymp \we(r), \quad 0\le r <1.
\]
\end{prop}
\begin{proof}
By Theorem~\ref{t_lp_gen} with $p=2$,
there exist $a_k\ge 0$, $k=0,1,\dots$, such that
\[
\we^q(t) \asymp \sum_{k=0}^\infty a_k t^k, \quad 0\le t <1.
\]
Let
\[
\varphi^q(t) = \sum_{k=0}^\infty (k+2d) a_k t^k, \quad 0\le t <1.
\]
The functions $\varphi^q(t)$ and $\varphi(t)$
are well-defined log-convex weights on $[0,1)$, since they are given by power series with non-negative coefficients.
Hence, $\varphi(t) \wgt^{\frac{1}{q}}(t)$ is a log-convex weight as the product of two log-convex weights.
By Theorem~\ref{t_lp_gen}, there exists $f\in \hol(\bd)$ such that
\[
\int_{\spd} |f(t\za)|^q\, d\sid(\za)
\asymp \varphi^q(t) \wgt(t), \quad 0\le t <1,
\]
or, equivalently,
\[
\frac{t^{2d-1}}{\wgt(t)} \int_{\spd} |f(t\za)|^q\, d\sid(\za)
\asymp \sum_{k=0}^\infty (k+2d) a_k t^{k+2d-1}, \quad 0\le t <1.
\]
Representing $\mpf^q_{q,\frac{1}{\wgt}} (f,r)$ in polar coordinates and
integrating the above estimates with respect to $t$, we obtain
\begin{align*}
\mpf^q_{q,\frac{1}{\wgt}} (f,r) &= \frac{2d}{r^{2d}} \int_0^r \int_{\spd} |f(t\za)|^q\, d\sid(\za) \, \frac{t^{2d-1}}{\wgt(t)} dt \\
&\asymp \sum_{k=0}^\infty a_k r^k, \\
&\asymp \we^q(r), \quad 0\le r <1,
\end{align*}
as required.
\end{proof}
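The polar-coordinates computation above can be checked numerically in the simplest instance. In the Python sketch below (an illustration only) we take $d=1$, $q=2$, $\wgt \equiv 1$ and $f(z)=1+z$, for which $\mpf_2^2(f,t)=1+t^2$ and the resulting volume mean equals $1+r^2/2$.
\begin{verbatim}
import numpy as np

r = 0.7
t = np.linspace(0.0, r, 200001)
g = (1 + t ** 2) * t            # M_2^2(f,t) t^{2d-1} for f(z) = 1 + z, d = 1
dt = t[1] - t[0]
trapezoid = dt * (g.sum() - 0.5 * (g[0] + g[-1]))
radial = (2 / r ** 2) * trapezoid
print(radial, 1 + r ** 2 / 2)   # both approximately 1.245
\end{verbatim}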
Clearly, Proposition~\ref{p_volume} is of special interest if
$\mpf_{q,\frac{1}{u}} (f, r)$ is log-convex or equivalent to a log-convex function for any $f\in\hol(\bd)$.
Also, we have to prove Corollary~\ref{c_vol_example}.
So, assume that $u\equiv 1$ and define
\begin{align*}
\vmean_q(f,r) &= \left( \frac{1}{\vlm(r\bd)} \int_{r\bd} |f(z)|^q\, d\vlm(z) \right)^{\frac{1}{q}}, \quad 0<r<1, \\
\vmean_q(f,0) &= |f(0)|,
\end{align*}
where $0< q < \infty$.
\begin{proof}[Proof of Corollary~\ref{c_vol_example}]
By Proposition~\ref{p_volume}, (i) implies (ii).
To prove the reverse implication, assume that $\we(t)$ is a weight on $[0,1)$
and $\we(r) \asymp \vmean_q(f,r)$ for some $f\in\hol(\bd)$, $f(0)\neq 0$.
If $d=1$ and $0<q< \infty$, then $\vmean_q(f,r)$ is log-convex by Theorem~1 from \cite{WXZ15}.
So, (ii) implies (i) for $d=1$.
The function $\vmean_q(f,r)$ is also log-convex if $1\le q <\infty$ and $d\ge 2$.
Indeed, we have
\[
\vmean_q(f,r) = \left( \int_{\bd} |f(rz)|^q\, d\vlm(z) \right)^{\frac{1}{q}}, \quad 0\le r <1.
\]
Thus, Taylor's Banach space method applies (see \cite[Theorem~3.3]{T50}).
Now, assume that $d\ge 2$ and $0<q<1$.
The function $\mpf^q_q(f, t)$ is a log-convex weight.
Hence, by Theorem~\ref{t_lp_gen} with $p=2$, there exist $a_k\ge 0$, $k=0,1, \dots$, such that
\[
\mpf^q_q(f,t) \asymp \sum_{k=0}^\infty a_k t^k, \quad 0\le t <1.
\]
Thus,
\begin{align*}
\vmean^q_q(f,r) &= \frac{2d}{r^{2d}} \int_0^r \mpf^q_q(f,t) t^{2d-1}\, dt\\
&\asymp \sum_{k=0}^\infty \frac{a_k}{k+2d} r^k, \quad 0\le r <1.
\end{align*}
In other words, $\vmean_q(f,r)$ is equivalent to a log-convex weight on $[0,1)$.
So, (ii) implies (i) for all $d\ge 1$ and $0<q<\infty$.
The proof of the corollary is finished.
\end{proof}
For $\alpha>0$, Proposition~\ref{p_volume} also applies to the following integral means:
\[
\frac{1}{r^{2d}} \int_{r\bd} |f(z)|^q (1-|z|^2)^{\alpha} \, d\vlm(z), \quad 0\le r <1.
\]
However, in general, the above integral means are not log-convex.
\bibliographystyle{amsplain}
\section*{Figure Captions}
\begin{enumerate}
\item[Fig.1]
The unit cell of our model one-dimensional
semiconductor, where the slowly varying applied potential
$\Delta (V_{\rm ext}+V_{\rm H})$, that changes the
interacting electron density by $\Delta n$,
is the most linear.
Both $\Delta V_{{\rm eff,1}}$ and $\Delta V_{\rm eff,2}$,
used in the non-interacting Kohn-Sham equations,
yield the same $\Delta n$.
$\Delta V_{\rm eff,2}$ is a periodic potential with no
linear slope,
while $\Delta V_{{\rm eff,1}}$, whose linear part is $\Delta V^{\rm
linear}_{{\rm
eff,1}}$,
reproduces not only $\Delta n$ but also the change of polarization due
to
$\Delta (V_{\rm ext}+V_{\rm H})$.
This illustrates the need for polarization-dependence in $E_{xc}$.
For clarity, the potential curves have been aligned
so that they all start from zero.
\end{enumerate}
\end{document}%
\chapter{How Overlap Determines the Macronuclear Genes in Ciliates}
\begin{abstract}
Formal models for gene assembly in ciliates have been developed, in
particular the string pointer reduction system (SPRS) and the graph
pointer reduction system (GPRS). The reduction graph is a valuable
tool within the SPRS, revealing much information about how gene
assembly is performed for a given gene. The GPRS is more abstract
than the SPRS and not all information present in the SPRS is
retained in the GPRS. As a consequence the reduction graph cannot be
defined for the GPRS in general, but we show that it can be defined
(in an equivalent manner as defined for the SPRS) if we restrict
ourselves to so-called realistic overlap graphs. Fortunately, only
these graphs correspond to genes occurring in nature. Defining the
reduction graph within the GPRS allows one to carry over several
results within the SPRS that rely on the reduction graph.
\end{abstract}
\section{Introduction}
Gene assembly is a biological process that takes place in a large
group of one-cellular organisms called ciliates. The process
transforms one nucleus, called the micronucleus, through a large
number of splicing operations into another nucleus, called the
macronucleus. The macronucleus is very different from the
micronucleus, both functionally and in terms of differences in DNA.
Each gene occurring in the micronucleus is transformed into a
corresponding gene in the macronucleus. Two models that are used to
formalize this process are the string pointer reduction system
(SPRS) and the graph pointer reduction system (GPRS). The former
consist of three types of string rewriting rules operating on
strings, called legal strings, while the latter consist of three
types of graph rewriting rules operating on graphs, called overlap
graphs. The GPRS can be seen as an abstraction of the SPRS; however,
it is not fully equivalent to the SPRS: some information present
in the SPRS is lost in the GPRS.
Legal strings represent genes in their micronuclear form. The
reduction graph, which is defined for legal strings, is a notion
that describes the corresponding gene in its macronuclear form
(along with its waste products). Moreover, it has been shown that
the reduction graph retains much information on which string
negative rules (one of the three types of string rewriting rules)
can be or are used in this transformation
\cite{Extended_paper,DBLP:conf/complife/BrijderHR05,DBLP:conf/complife/BrijderHR06}.
Therefore it is natural to define an equivalent notion for the GPRS.
However, as we will show, since the GPRS loses some information
concerning the application of string negative rules, there is no
unique reduction graph for a given overlap graph. We will show
however, that when we restrict ourselves to `realistic' overlap
graph then there is a unique reduction graph corresponding to this
graph. These overlap graphs are called realistic since non-realistic
overlap graphs cannot correspond to (micronuclear) genes. Moreover,
we explicitly define the notion of reduction graph for these overlap
graphs (within the GPRS) and show the equivalence with the
definition for legal strings (within the SPRS). Finally, we show
some immediate results due to this equivalence, including an open
problem formulated in Chapter~13 in \cite{GeneAssemblyBook}.
In Section~\ref{sect_notation} we recall some basic notions and
notation concerning sets, strings and graphs. In
Section~\ref{sect_gene_assembly} we recall notions used in models
for gene assembly, such as legal strings, realistic strings and
overlap graphs. In Section~\ref{sect_reduction_graph} we recall the
notion of reduction graph within the framework of SPRS and we prove
a few elementary properties of this graph for legal strings. In
particular we establish a calculus for the sets of overlapping
pointers between vertices of the reduction graph. In
Section~\ref{sect_reduction_graph_realistic} we prove properties of
the reduction graph for a more restricted type of legal strings, the
realistic strings. It is shown that reduction graphs of realistic
strings have a subgraph of a specific structure, the root subgraph.
Moreover the existence of the other edges in the reduction graph is
shown to depend directly on the overlap graph, using the calculus
derived in the Section~\ref{sect_reduction_graph}. In
Section~\ref{sect_compress_function} we provide a convenient
function for reduction graphs (but not only reduction graphs) which
simplifies reduction graphs without losing any information. In
Section~\ref{sect_overlap_to_red_graph} we define the reduction
graph for realistic overlap graphs, and prove the main theorem of
this paper: the equivalence of reduction graphs defined for
realistic strings and reduction graphs defined for realistic overlap
graphs. In Section~\ref{sect_consequences} we show immediate
consequences of this theorem.
\section{Notation and Terminology} \label{sect_notation}
In this section we recall some basic notions concerning functions,
strings, and graphs. We do this mainly to set up the basic notation
and terminology for this paper.
The cardinality of set $X$ is denoted by $|X|$. The symmetric
difference of sets $X$ and $Y$, $(X \backslash Y) \cup (Y \backslash
X)$, is denoted by $X \oplus Y$. Being an associative operator, we
can define the symmetric difference of a family of sets $(X_i)_{i
\in A}$ and denote it by $\bigoplus_{i \in A} X_i$. The
\emph{composition} of functions $f: X \rightarrow Y$ and $g: Y
\rightarrow Z$ is the function $g f: X \rightarrow Z$ such that $(g
f) (x) = g(f(x))$ for every $x \in X$. The restriction of $f$ to a
subset $A$ of $X$ is denoted by $f|A$.
We will use $\lambda$ to denote the empty string.
For strings $u$ and $v$, we say that $v$ is a \emph{substring of
$u$} if $u = w_1 v w_2$, for some strings $w_1$, $w_2$; we also say
that $v$ \emph{occurs in $u$}. Also, $v$ is a \emph{cyclic substring
of $u$} if either $v$ is a substring of $u$ or $u = v_2 w v_1$ and
$v = v_1 v_2$ for some strings $v_1, v_2, w$.
We say that $v$ is a \emph{conjugate of $u$} if $u = w_1 w_2$ and $v
= w_2 w_1$ for some strings $w_1$ and $w_2$. For a string $u = x_1
x_2 \cdots x_n$ over $\Sigma$ with $x_i \in \Sigma$ for all $i \in
\{1,\ldots,n\}$, we say that $v = x_n x_{n-1} \cdots x_1$ is the
\emph{reversal of $u$}. A \emph{homomorphism} is a function
$\varphi: \Sigma^* \rightarrow \Delta^*$ such that $\varphi(uv) =
\varphi(u) \varphi(v)$ for all $u,v \in \Sigma^*$.
We move now to graphs. A \emph{labelled graph} is a 4-tuple
$$
G = (V,E,f,\Gamma),
$$
where $V$ is a finite set, $E \subseteq \{ \{x,y\} \mid x,y \in V, x
\not= y \}$, and $f: V \rightarrow \Gamma$.
The elements of $V$ are called \emph{vertices} and the elements of
$E$ are called \emph{edges}. Function $f$ is the \emph{labelling
function} and the elements of $\Gamma$ are the \emph{labels}. We say
that $G$ is \emph{discrete} if $E = \emptyset$. Labelled graph $G' =
(V',E',f|V',\Gamma)$ is a \emph{subgraph of $G$} if $V' \subseteq V$
and $E' \subseteq E_{V'} = E \cap \{ \{x,y\} \mid x,y \in V', x
\not= y \}$. If $E' = E_{V'}$, we say that $G'$ is the
\emph{subgraph of $G$ induced by $V'$}.
A string $\pi = e_1 e_2 \cdots e_n \in E^*$ with $n \geq 1$ is a
\emph{path in $G$} if there is a $v_1 v_2 \cdots v_{n+1} \in V^*$
such that $e_i = \{v_i, v_{i+1}\}$ for all $1 \leq i \leq n$.
Labelled graph $G$ is \emph{connected} if there is a path between
every two vertices of $G$. A subgraph $H$ of $G$ induced by $V_H$ is
a \emph{component of $G$} if both $H$ is connected and for every
edge $e \in E$ we have either $e \subseteq V_H$ or $e \subseteq V
\backslash V_H$.
As usual, labelled graphs $G = (V,E,f,\Gamma)$ and $G' =
(V',E',f',\Gamma)$ are \emph{isomorphic}, denoted by $G \approx G'$,
if there is a bijection $\alpha: V \rightarrow V'$ such that $f(v) =
f'(\alpha(v))$ for $v \in V$, and
$$
\{x,y\} \in E \mbox{ iff } \{\alpha(x),\alpha(y)\} \in E'
$$
for $x,y \in V$. Bijection $\alpha$ is then called an
\emph{isomorphism from $G$ to $G'$}.
In this paper we will consider graphs with two sets of edges.
Therefore, we need the notion of 2-edge coloured graphs. A
\emph{2-edge coloured graph} is a 5-tuple
$$
G = (V,E_1,E_2,f,\Gamma),
$$
where both $(V,E_1,f,\Gamma)$ and $(V,E_2,f,\Gamma)$ are labelled
graphs.
The basic notions and notation for labelled graphs carry over to
2-edge coloured graphs. However, for the notion of isomorphism care
must be taken that the two sorts of edges are preserved. Thus, if $G
= (V,E_1,E_2,f,\Gamma)$ and $G' = (V',E'_1,E'_2,f',\Gamma')$ are
2-edge coloured graphs, then it must hold that for an isomorphism
$\alpha$ from $G$ to $G'$,
$$
(x,y) \in E_i \mbox{ iff } (\alpha(x),\alpha(y)) \in E_i'
$$
for $x,y \in V$ and $i \in \{1,2\}$.
\section{Gene Assembly in Ciliates} \label{sect_gene_assembly}
Two models that are used to formalize the process of gene assembly
in ciliates are the string pointer reduction system (SPRS) and the
graph pointer reduction system (GPRS). The SPRS consist of three
types of string rewriting rules operating on \emph{legal strings}
while the GPRS consist of three types of graph rewriting rules
operating on \emph{overlap graphs}. For the purpose of this paper it
is not necessary to recall the string and graph rewriting rules; a
complete description of SPRS and GPRS, as well as a proof of their
``weak'' equivalence, can be found in \cite{GeneAssemblyBook}. We do
recall the notions of legal string and overlap graph, and we also
recall the notion of realistic string.
We fix $\kappa \geq 2$, and define the alphabet $\Delta =
\{2,3,\ldots,\kappa\}$. For $D \subseteq \Delta$, we define $\bar D
= \{ \bar a \mid a \in D \}$ and $\Pi_D = D \cup \bar D$; also $\Pi
= \Pi_{\Delta}$. The elements of $\Pi$ will be called
\emph{pointers}. We use the ``bar operator'' to move from $\Delta$
to $\bar \Delta$ and back from $\bar \Delta$ to $\Delta$. Hence, for
$p \in \Pi$, $\bar {\bar {p}} = p$. For $p \in \Pi$, we define
$\pset{p} =
\begin{cases} p & \mbox{if } p \in \Delta \\ \bar{p} & \mbox{if }
p \in \bar{\Delta}
\end{cases}$
, i.e., $\pset{p}$ is the ``unbarred'' variant of $p$.
For a string $u = x_1 x_2 \cdots x_n$ with $x_i \in \Pi$ ($1 \leq i
\leq n$), the \emph{complement of $u$} is $\bar x_1 \bar x_2 \cdots
\bar x_n$. The \emph{inverse of $u$}, denoted by $\bar u$, is the
complement of the reversal of $u$, thus $\bar u = \bar x_n \bar
x_{n-1} \cdots \bar x_1$. The \emph{domain of $u$}, denoted by
$dom(u)$, is $\{ \pset{p} \mid \mbox{$p$ occurs in $u$} \}$. We say
that $u$ is a \emph{legal string} if for each $p \in dom(u)$, $u$
contains exactly two occurrences from $\{p,\bar p\}$.
We define the alphabet $\Theta_{\kappa} = \{M_i, \bar M_i \mid 1
\leq i \leq \kappa \}$. We say that $\delta \in \Theta^*_{\kappa}$
is a \emph{micronuclear arrangement} if for each $i$ with $1 \leq i
\leq \kappa$, $\delta$ contains exactly one occurrence from
$\{M_i,\bar M_i\}$. With each string over $\Theta_{\kappa}$, we
associate a unique string over $\Pi$ through the homomorphism
$\pi_{\kappa}: \Theta^*_{\kappa} \rightarrow \Pi^*$ defined by:
$$
\pi_{\kappa}(M_1) = 2, \quad \pi_{\kappa}(M_{\kappa}) = \kappa,
\quad \pi_{\kappa}(M_i) = i(i+1) \quad \mbox{for } 1 < i < \kappa,
$$
and $\pi_{\kappa}(\bar M_j) = \overline{\pi_{\kappa}(M_j)}$ for $1
\leq j \leq \kappa$. We say that string $u$ is a \emph{realistic
string} if there is a micronuclear arrangement $\delta$ such that $u
= \pi_{\kappa}(\delta)$. We then say that $\delta$ is a
\emph{micronuclear arrangement for $u$}.
Note that every realistic string is a legal string. However, not
every legal string is a realistic string. For example, a realistic
string cannot have ``gaps'' (missing pointers): thus $2244$ is not
realistic while it is legal. It is also easy to produce examples of
legal strings which do not have gaps but still are not realistic ---
$3322$ is such an example. Realistic strings are most useful for the
gene assembly models, since only these legal strings can correspond
to genes in ciliates.
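To make these definitions concrete, the following Python sketch (an illustration only) encodes a barred pointer $\bar p$ as the integer $-p$, implements $\pi_{\kappa}$, and tests legality and, by brute force over all micronuclear arrangements, realisticness; all names are ad hoc, and the brute-force search is feasible only for small $\kappa$.
\begin{verbatim}
from itertools import permutations, product

def pi(arrangement, kappa):
    # arrangement: signed MDS numbers, e.g. [7, 1, 6, 3, 5, -2, 4]
    out = []
    for m in arrangement:
        i = abs(m)
        word = [2] if i == 1 else [kappa] if i == kappa else [i, i + 1]
        out += [-p for p in reversed(word)] if m < 0 else word
    return out

def is_legal(u):
    return all([abs(x) for x in u].count(p) == 2
               for p in {abs(x) for x in u})

def is_realistic(u, kappa):   # brute force over micronuclear arrangements
    return any(pi([s * m for s, m in zip(signs, perm)], kappa) == u
               for perm in permutations(range(1, kappa + 1))
               for signs in product((1, -1), repeat=kappa))

u = pi([7, 1, 6, 3, 5, -2, 4], 7)     # a realistic string used below
print(u, is_legal(u), is_realistic(u, 7))
\end{verbatim}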
For a pointer $p$ and a legal string $u$, if both $p$ and $\bar p$
occur in $u$ then we say that both $p$ and $\bar p$ are
\emph{positive in $u$}; if on the other hand only $p$ or only $\bar
p$ occurs in $u$, then both $p$ and $\bar p$ are \emph{negative in
$u$}. So, every pointer occurring in a legal string is either
positive or negative in it. Therefore, we can define a partition of
$\mathrm{dom}(u) = \mathrm{pos}(u) \cup \mathrm{neg}(u)$, where $\mathrm{pos}(u) = \{ p \in
\mathrm{dom}(u) \mid \mbox{$p$ is positive in $u$} \}$ and $\mathrm{neg}(u) = \{ p
\in \mathrm{dom}(u) \mid \mbox{$p$ is negative in $u$} \}$.
Let $u = x_1 x_2 \cdots x_n$ be a legal string with $x_i \in \Pi$
for $1 \leq i \leq n$. For a pointer $p \in \Pi$ such that
$\{x_i,x_j\} \subseteq \{p,\bar p\}$ and $1 \leq i < j \leq n$, the
\emph{p-interval of $u$} is the substring $x_i x_{i+1} \cdots x_j$.
Substrings $x_{i_1} \cdots x_{j_1}$ and $x_{i_2} \cdots x_{j_2}$
\emph{overlap in $u$} if $i_1 < i_2 < j_1 < j_2$ or $i_2 < i_1 < j_2
< j_1$. Two distinct pointers $p,q \in \Pi$ \emph{overlap in $u$} if
the $p$-interval of $u$ overlaps with the $q$-interval of $u$. Thus,
two distinct pointers $p,q \in \Pi$ overlap in $u$ iff there is
exactly one occurrence from $\{p, \bar p\}$ in the $q$-interval, or
equivalently, there is exactly one occurrence from $\{q, \bar q\}$
in the $p$-interval of $u$. Also, for $p \in \mathrm{dom}(u)$, we denote
$$
O_u(p) = \{ q \in \mathrm{dom}(u) \mid \mbox{$p$ and $q$ overlap in $u$} \},
$$
and for $0 \leq i \leq j \leq n$, we denote by $O_u(i,j)$ the set of
all $p \in \mathrm{dom}(u)$ such that there is exactly one occurrence from
$\{p, \bar p\}$ in $x_{i+1} x_{i+2} \cdots x_j$. Also, we define
$O_u(j,i) = O_u(i,j)$. Intuitively, $O_u(i,j)$ is the set of $p \in
\mathrm{dom}(u)$ for which the substring between ``positions'' $i$ and
$j$ in $u$ contains exactly one representative from $\{ p, \bar p
\}$, where position $i$ for $0 < i < n$ means the ``space'' between
$x_i$ and $x_{i+1}$ in $u$. For $i = 0$ it is the ``space'' on the
left of $x_1$, and for $i = n$ it is the ``space'' on the right of
$x_n$. A few elementary properties of $O_u(i,j)$ follow. We have
$O_u(i,n) = O_u(0,i)$ for $i$ with $0 \leq i \leq n$. Moreover, for
$i,j,k \in \{0, \ldots, n\}$, $O_u(i,j) \oplus O_u(j,k) = O_u(i,k)$;
this is obvious when $i < j < k$, but it is valid in general. Also,
for $0 \leq i \leq j \leq n$, $O_u(i,j) = \emptyset$ iff $x_{i+1}
\cdots x_j$ is a legal string.
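The sets $O_u(p)$ and $O_u(i,j)$ are easily computed. The Python sketch below (an illustration only, with $\bar p$ encoded as $-p$ as before) also verifies the identity $O_u(i,j) \oplus O_u(j,k) = O_u(i,k)$ exhaustively for the legal string $u = 24535423$ used in the next example.
\begin{verbatim}
from itertools import product

def O_interval(u, i, j):
    # pointers with exactly one occurrence from {p, -p} in u[i:j]
    i, j = min(i, j), max(i, j)
    seg = [abs(p) for p in u[i:j]]
    return {p for p in set(seg) if seg.count(p) == 1}

def O_pointer(u, p):
    occ = [t for t, x in enumerate(u) if abs(x) == p]
    return O_interval(u, occ[0] + 1, occ[1])  # strictly inside p-interval

u = [2, 4, 5, 3, 5, 4, 2, 3]          # the legal string 24535423
for i, j, k in product(range(len(u) + 1), repeat=3):
    assert O_interval(u, i, j) ^ O_interval(u, j, k) == O_interval(u, i, k)
print(O_pointer(u, 3))                # {2, 4, 5}
\end{verbatim}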
\begin{Definition}
Let $u$ be a legal string. The \emph{overlap graph of $u$}, denoted
by $\overlapgru{u}$, is the labelled graph
$$
(\mathrm{dom}(u),E,\sigma,\{+,-\}),
$$
where
$$
E = \{ \{p,q\} \mid p,q \in \mathrm{dom}(u), p \not= q, \mbox{and $p$ and
$q$ overlap in $u$} \},
$$
and $\sigma$ is defined by:
$$
\sigma(p) = \begin{cases} + & \mbox{if } p \in \mathrm{pos}(u) \\ - &
\mbox{if } p \in \mathrm{neg}(u)
\end{cases}
$$
for all $p \in \mathrm{dom}(u)$.
\end{Definition}
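Computationally, the overlap graph is obtained directly from this definition; the following Python sketch (an illustration only, reusing \texttt{O\_pointer} from the previous sketch) reproduces the overlap graph of $u = 24535423$.
\begin{verbatim}
def overlap_graph(u):
    dom = sorted({abs(p) for p in u})
    positive = {p for p in dom if p in u and -p in u}   # sigma(p) = +
    edges = {frozenset((p, q)) for p in dom for q in O_pointer(u, p)}
    return dom, edges, positive

dom, edges, positive = overlap_graph([2, 4, 5, 3, 5, 4, 2, 3])
print(sorted(tuple(sorted(e)) for e in edges))  # [(2, 3), (3, 4), (3, 5)]
print(positive)                                 # set(): all negative
\end{verbatim}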
\begin{figure}
$$
\xymatrix @=15pt{
& 3^- \ar@{-}[dl] \ar@{-}[d] \ar@{-}[dr] & \\
2^- & 4^- & 5^-
}
$$
\caption{The overlap graph of legal string $u = 24535423$.}
\label{non_realistic_overlap_graph}
\end{figure}
\begin{Example}
Let $u = 24535423$ be a legal string. The overlap graph of $u$ is
$$
\gamma = (\{2,3,4,5\},\{\{2,3\}, \{4,3\},
\{5,3\}\},\sigma,\{+,-\}),
$$
where $\sigma(v) = -$ for all vertices $v$ of $\gamma$. The
overlap graph is depicted in
Figure~\ref{non_realistic_overlap_graph}.
\end{Example}
Let $\gamma$ be an overlap graph. Similar to legal strings, we
define $\mathrm{dom}(\gamma)$ as the set of vertices of $\gamma$,
$\mathrm{pos}(\gamma) = \{ p \in \mathrm{dom}(\gamma) \mid \sigma(p) = +
\}$, $\mathrm{neg}(\gamma) = \{ p \in \mathrm{dom}(\gamma) \mid \sigma(p)
= - \}$ and for $q \in \mathrm{dom}(u)$, $O_{\gamma}(q) = \{ p \in
\mathrm{dom}(\gamma) \mid \{p,q\} \in E \} $.
An overlap graph $\gamma$ is \emph{realistic} if it is the
overlap graph of a realistic string. Not every overlap graph of a
legal string is realistic. For example, it can be shown that the
overlap graph $\gamma$ of $u = 24535423$ depicted in
Figure~\ref{non_realistic_overlap_graph} is not realistic. In fact,
one can show that it is not even \emph{realizable} --- there is no
isomorphism $\alpha$ such that $\alpha(\gamma)$ is realistic.
\section{The Reduction Graph} \label{sect_reduction_graph}
We now recall the (full) reduction graph, which was first introduced
in \cite{Extended_paper}.
\begin{Remark}
Below we present this graph in a slightly modified form: we omit the
special vertices $s$ and $t$, called the source vertex and target
vertex respectively, which did appear in the definition presented in
\cite{Extended_paper}. As shown in
Section~\ref{sect_reduction_graph_realistic}, in this way a
realistic overlap graph corresponds to exactly one reduction graph.
Fortunately, several results concerning reduction graphs do not rely
on the special vertices, and therefore carry over trivially to
reduction graphs as defined here.
\end{Remark}
\begin{Definition}
Let $u = p_1 p_2 \cdots p_n$ with $p_1,\ldots,p_n \in \Pi$ be a
legal string. The \emph{reduction graph of $u$}, denoted by
$\mathcal{R}_u$, is a 2-edge coloured graph
$$
(V,E_1,E_2,f,\mathrm{dom}(u)),
$$
where
$$
V = \{\RGVertL{1},\RGVertL{2},\ldots,\RGVertL{n}\} \ \cup \
\{\RGVertR{1},\RGVertR{2},\ldots,\RGVertR{n}\},
$$
$$
E_{1} = \{ e_1, e_2, \ldots, e_{n} \} \mbox{ with } e_i = \{
\RGVertR{i},\RGVertL{i+1} \} \mbox{ for } 1 \leq i \leq n-1, e_n =
\{ \RGVertR{n}, \RGVertL{1} \},
$$
\begin{eqnarray*}
E_{2} = & \{ \{\RGVertR{i},\RGVertL{j}\},
\{\RGVertL{i},\RGVertR{j}\} \mid i,j \in \{1,2,\ldots,n\}
\mbox{ with } i \not= j \mbox{ and } p_i = p_j \} \ \cup \ \\
& \{ \{\RGVertL{i},\RGVertL{j}\}, \{\RGVertR{i},\RGVertR{j}\} \mid
i,j \in \{1,2,\ldots,n\} \mbox{ and } p_i = \bar{p}_j \}, \mbox{
and}
\end{eqnarray*}
$$
\mbox{$f(\RGVertL{i}) = f(\RGVertR{i}) = \pset{p_i}$ for $1 \leq i
\leq n$.}
$$
\mbox{ }
\end{Definition}
The edges of $E_1$ are called the \emph{reality edges}, and the
edges of $E_2$ are called the \emph{desire edges}. Intuitively, the
``space'' between $p_i$ and $p_{i+1}$ corresponds to the reality
edge $e_i = \{ \RGVertR{i}, \RGVertL{i+1} \}$. Hence, we say that
$i$ is the \emph{position of $e_i$}, denoted by $\mathrm{posn}(e_i)$, for
all $i \in \{1,2,\ldots,n\}$. Note that positions are only defined
for reality edges. Since for every vertex $v$ there is a unique
reality edge $e$ such that $v \in e$, we also define the
\emph{position of $v$}, denoted by $\mathrm{posn}(v)$, as the position of
$e$. Thus, $\mathrm{posn}(\RGVertR{i}) = \mathrm{posn}(\RGVertL{i+1}) = i$ (while
$\mathrm{posn}(\RGVertL{1}) = n$).
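For completeness, the construction of $\mathcal{R}_u$ can also be sketched in Python (an illustration only, with $\bar p$ encoded as $-p$); the pairs $(i,\texttt{'L'})$ and $(i,\texttt{'R'})$ stand for the vertices $\RGVertL{i}$ and $\RGVertR{i}$.
\begin{verbatim}
def reduction_graph(u):
    n = len(u)
    reality = [frozenset({(i, 'R'), (i % n + 1, 'L')})
               for i in range(1, n + 1)]            # e_i, with e_n cyclic
    desire = set()
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if u[i - 1] == u[j - 1]:                # p_i = p_j
                desire |= {frozenset({(i, 'R'), (j, 'L')}),
                           frozenset({(i, 'L'), (j, 'R')})}
            elif u[i - 1] == -u[j - 1]:             # p_i = bar(p_j)
                desire |= {frozenset({(i, 'L'), (j, 'L')}),
                           frozenset({(i, 'R'), (j, 'R')})}
    label = {(i, s): abs(u[i - 1]) for i in range(1, n + 1) for s in 'LR'}
    return reality, desire, label

reality, desire, label = reduction_graph([3, 2, -4, 3, -2, 4])
print(len(reality), len(desire))   # 6 reality edges, 6 desire edges
\end{verbatim}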
\begin{figure}
$$
\xymatrix @=20pt{
& & \RGVertL{1},3 \ar@{-}[ddddd] & \RGVertR{1},3 \ar@{-}[dr] \ar@{-}[ddddd] & &\\
& \RGVertR{6},4 \ar@{-}@/_1.0pc/[rrrddd] \ar@{-}[ur] & & & \RGVertL{2},2 &\\
\RGVertL{6},4 \ar@{-}@/^1.0pc/[rrrrrd] & & & & & \RGVertR{2},2 \ar@{-}[d] \\
\RGVertR{5},2 \ar@{-}@/^1.0pc/[rrrrru] \ar@{-}[u] & & & & & \RGVertL{3},4\\
& \RGVertL{5},2 \ar@{-}@/_1.0pc/[rrruuu] & & & \RGVertR{3},4 \ar@{-}[dl] &\\
& & \RGVertR{4},3 \ar@{-}[ul] & \RGVertL{4},3 & &\\
}
$$
\caption{The reduction graph of $u$ of
Example~\ref{ex_red_graph_of_legal_string}.}
\label{ex_red_graph_of_legal_string_fig1}
\end{figure}
\begin{figure}
$$
\xymatrix @=20pt{
2 \ar@{-}[d] \ar@2{-}[r] & 4 \ar@{-}[d] & 2 \ar@{-}[d] \ar@2{-}[r] & 3 \ar@{-}[r] & 3 \ar@2{-}[r] & 4 \ar@{-}[d] \\
2 \ar@2{-}[r] & 4 & 2 \ar@2{-}[r] & 3 \ar@{-}[r] & 3 \ar@2{-}[r] & 4
}
$$
\caption{The reduction graph of $u$ of
Example~\ref{ex_red_graph_of_legal_string} in the simplified
representation.} \label{ex_red_graph_of_legal_string_fig2}
\end{figure}
\begin{Example} \label{ex_red_graph_of_legal_string}
Let $u = 3 2 \bar 4 3 \bar 2 4$ be a legal string. Since $\bar 4 3
\bar 2$ can not be a substring of a realistic string, $u$ is not
realistic. The reduction graph $\mathcal{R}_u$ of $u$ is depicted in
Figure~\ref{ex_red_graph_of_legal_string_fig1}. The labels of the
vertices are also shown in this figure. Note that the desire edges
corresponding to positive pointers (here $2$ and $4$) cross (in the
figure), while those for negative pointers are parallel. Since the
exact identity of the vertices in a reduction graph is not essential
for the problems considered in this paper, in order to simplify the
pictorial representation of reduction graphs we will omit this in
the figures. We will also depict reality edges as ``double edges''
to distinguish them from the desire edges.
Figure~\ref{ex_red_graph_of_legal_string_fig2} shows the reduction
graph in this simplified representation.
\end{Example}
\begin{figure}
$$
\xymatrix @=30pt{
3 \ar@{-}[r] & 3 \ar@2{-}[r] & 6 \ar@{-}[r] & 6 \ar@2{-}[r] & 2 \ar@{-}[r] & 2 \ar@2{-}[d] \\
7 \ar@2{-}[u] & 7 \ar@{-}[l] & 5 \ar@2{-}[l] & 5 \ar@{-}[l] & 4 \ar@2{-}[l] & 4 \ar@{-}[l] \\
2 \ar@{-}[r] & 2 \ar@2{-}[r] & 3 \ar@{-}[r] & 3 \ar@2{-}[r] & 4 \ar@{-}[r] & 4 \ar@2{-}[d] \\
7 \ar@2{-}[u] & 7 \ar@{-}[l] & 6 \ar@2{-}[l] & 6 \ar@{-}[l] & 5 \ar@2{-}[l]
& 5 \ar@{-}[l]}
$$
\caption{The reduction graph of $u$ of
Example~\ref{ex_red_graph_of_realistic_string}.}
\label{ex_red_graph_of_realistic_string_fig1}
\end{figure}
\begin{Example} \label{ex_red_graph_of_realistic_string}
Let $u = \pi_7(M_7 M_1 M_6 M_3 M_5 \overline{M_2} M_4)= 72673456
\bar 3 \bar 2 45$. Thus, unlike the previous example, $u$ is a
realistic string. The reduction graph is given in
Figure~\ref{ex_red_graph_of_realistic_string_fig1}. As usual, the
vertices are represented by their labels.
\end{Example}
The reduction graph is defined for legal strings. In this paper, we
will show how to directly construct the reduction graph of realistic
string $u$ from only the overlap graph of $u$. In this way we can
define the reduction graph for realistic overlap graphs in a direct
way.
Next we consider sets of overlapping pointers corresponding to pairs
of vertices in reduction graphs, and start to develop a calculus for
these sets that will later enable us to characterize the existence
of certain edges in the reduction graph, cf.
Theorem~\ref{overlap_to_redgraph}.
\begin{Example}
We again consider the legal string $u = 3 2 \bar 4 3 \bar 2 4$ and
its reduction graph $\mathcal{R}_u$ from
Example~\ref{ex_red_graph_of_legal_string}. Desire edge $e =
\{\RGVertR{2}, \RGVertR{5}\}$ is connected to reality edges $e_1 =
\{\RGVertR{2}, \RGVertL{3}\}$ and $e_2 = \{\RGVertR{5},
\RGVertL{6}\}$ with positions $2$ and $5$ respectively. We have
$O_u(2,5) = \{2,3,4\}$. Also, reality edges $\{\RGVertR{1},
\RGVertL{2}\}$ and $\{\RGVertR{2}, \RGVertL{3}\}$ have positions $1$
and $2$ respectively. We have $O_u(1,2) = \{2\}$.
\end{Example}
\begin{Lemma} \label{overlap_edge_1}
Let $u$ be a legal string. Let $e = \{v_1,v_2\}$ be a desire edge of
$\mathcal{R}_u$ and let $p$ be the label of both $v_1$ and $v_2$. Then
$$
O_u(\mathrm{posn}(v_1),\mathrm{posn}(v_2)) = \begin{cases} O_u(p) & \mbox{if $p$ is negative in $u$},\\
O_u(p) \oplus \{p\} & \mbox{if $p$ is positive in $u$}. \end{cases}
$$
\end{Lemma}
\begin{Proof}
Let $u = p_1 p_2 \ldots p_n$ with $p_1, p_2, \ldots, p_n \in \Pi$
and let $i$ and $j$ be such that $i<j$ and $p = p_i = p_j$. Without
loss of generality, we can assume $\mathrm{posn}(v_1) < \mathrm{posn}(v_2)$. Then,
$v_1 \in \{\RGVertL{i}, \RGVertR{i}\}$ and $v_2 \in \{\RGVertL{j},
\RGVertR{j}\}$, hence $\mathrm{posn}(v_1) \in \{i-1,i\}$ and $\mathrm{posn}(v_2) \in
\{j-1,j\}$.
First, assume that $p$ is negative in $u$. By the definition of
reduction graph, the following two cases are possible:
\begin{enumerate}
\item $e = \{ \RGVertL{i}, \RGVertR{j} \}$, thus
$O_u(\mathrm{posn}(\RGVertL{i}),\mathrm{posn}(\RGVertR{j})) = O_u(i-1,j) = O_u(p)$,
\item $e = \{ \RGVertR{i}, \RGVertL{j} \}$, thus $O_u(\mathrm{posn}(\RGVertR{i}),\mathrm{posn}(\RGVertL{j})) = O_u(i,j-1) = O_u(p)$.
\end{enumerate}
Thus in both cases we have $O_u(\mathrm{posn}(v_1),\mathrm{posn}(v_2)) = O_u(p)$.
Finally, assume that $p$ is positive in $u$. By the definition of
reduction graph, the following two cases are possible:
\begin{enumerate}
\item $e = \{ \RGVertL{i}, \RGVertL{j} \}$, thus
$O_u(\mathrm{posn}(\RGVertL{i}),\mathrm{posn}(\RGVertL{j})) = O_u(i-1,j-1) = O_u(p)
\oplus \{p\}$,
\item $e = \{ \RGVertR{i}, \RGVertR{j} \}$, thus $O_u(\mathrm{posn}(\RGVertR{i}),\mathrm{posn}(\RGVertR{j})) = O_u(i,j) = O_u(p) \oplus \{p\}$,
\end{enumerate}
Thus in both cases we have $O_u(\mathrm{posn}(v_1),\mathrm{posn}(v_2)) = O_u(p) \oplus \{p\}$.
\end{Proof}
The following result follows by iteratively applying the previous
lemma.
\begin{Corollary} \label{overlap_edge_1_iterative}
Let $u$ be a legal string. Let
$$
\xymatrix @=18pt{ p_0 \ar@2{-}[r] & p_1 \ar@{-}[r] & p_1 \ar@2{-}[r] &
p_2 \ar@{-}[r] & p_2 \ar@2{-}[r] & .. \ar@2{-}[r] & p_n \ar@{-}[r] & p_n
\ar@2{-}[r] & p_{n+1}}
$$
be a subgraph of $\mathcal{R}_u$, where (as usual) the vertices in the
figure are represented by their labels, and let $e_1$ ($e_2$, resp.)
be the leftmost (rightmost, resp.) edge. Note that $e_1$ and $e_2$
are reality edges and therefore $\mathrm{posn}(e_1)$ and $\mathrm{posn}(e_2)$ are
defined. Then $O_u(\mathrm{posn}(e_1),\mathrm{posn}(e_2)) = \left( \mathrm{pos}(u) \cap P
\right) \oplus \left(\bigoplus_{t \in P} O_u(t)\right)$ with $P =
\{p_1,\ldots,p_n\}$.
\end{Corollary}
By the definition of the reduction graph the following lemma holds.
\begin{Lemma} \label{overlap_edge_2}
Let $u$ be a legal string. If $\RGVertL{i}$ and $\RGVertR{i}$ are
vertices of $\mathcal{R}_u$, then
$O_u(\mathrm{posn}(\RGVertL{i}),\mathrm{posn}(\RGVertR{i})) = \{p\}$, where $p$ is
the label of $\RGVertL{i}$ and $\RGVertR{i}$.
\end{Lemma}
\begin{Example}
We again consider the legal string $u$ and desire edge $e$ as in the
previous example. Since $e$ has vertices labelled by positive
pointer $2$, by Lemma~\ref{overlap_edge_1} we have (again) $O_u(2,5)
= O_u(2) \oplus \{2\} = \{2,3,4\}$. Also, since $\RGVertL{2}$ and
$\RGVertR{2}$ with positions $1$ and $2$ respectively are labelled
by $2$, by Lemma~\ref{overlap_edge_2} we have (again) $O_u(1,2) =
\{2\}$.
\end{Example}
\section{The Reduction Graph of Realistic Strings}
\label{sect_reduction_graph_realistic} The next theorem asserts that
overlap graph $\gamma$ for realistic string $u$ retains all
information of $\mathcal{R}_u$ (up to isomorphism). In the next few
sections, we will give a method to determine $\mathcal{R}_u$ (up to
isomorphism), given $\gamma$. Of course, the naive method is to
first determine a legal string $u$ corresponding to $\gamma$ and
then to determine the reduction graph of $u$. However, we present a
method that is able to construct $\mathcal{R}_u$ in a direct way from
$\gamma$.
\begin{Theorem} \label{one_to_one_overlap_redgraph}
Let $u$ and $v$ be realistic strings. If $\overlapgru{u} =
\overlapgru{v}$, then $\mathcal{R}_u \approx \mathcal{R}_v$.
\end{Theorem}
\begin{Proof}
By Theorem~1 in \cite{DBLP:conf/birthday/HarjuPR04} (or Theorem~10.2
in \cite{GeneAssemblyBook}), we have $\overlapgru{u} =
\overlapgru{v}$ iff $v$ can be obtained from $u$ by a composition of
reversal, complement and conjugation operations. By the definition
of reduction graph it is clear that the reduction graph is invariant
under these operations (up to isomorphism). Thus, $\mathcal{R}_u \approx
\mathcal{R}_v$.
\end{Proof}
\begin{figure}
$$
\xymatrix @=15pt{
2^- \ar@{-}[dr] & & 4^- \ar@{-}[dl] \\
& 3^- & \\
6^- \ar@{-}[ur] & & 5^- \ar@{-}[ul]
}
$$
\caption{The overlap graph of both legal strings $u$ and $v$ of
Example~\ref{ex_legal_not_1_to_1_overlap_redgraph}.}
\label{ex_legal_not_1_to_1_overlap_redgraph_fig1}
\end{figure}
\begin{figure}
$$
\xymatrix @=30pt{
& 2 \ar@2{-}[r] \ar@{-}[d] & 6 \ar@{-}[d] & & 5 \ar@{-}[d] \ar@2{-}[r] & 6 \ar@{-}[d]\\
& 2 \ar@2{-}[r] & 6 & & 5 \ar@2{-}[r] & 6\\
& 2 \ar@2{-}[r] \ar@{-}[d] & 4 \ar@{-}[d] & 4 \ar@{-}[d] \ar@2{-}[r] & 3 \ar@{-}[r] & 3 \ar@2{-}[r] & 5 \ar@{-}[d] \\
& 2 \ar@2{-}[r] & 4 & 4 \ar@2{-}[r] & 3 \ar@{-}[r] & 3 \ar@2{-}[r] & 5
}
$$
\caption{The reduction graph of $u$ of
Example~\ref{ex_legal_not_1_to_1_overlap_redgraph}.}
\label{ex_legal_not_1_to_1_overlap_redgraph_fig2}
\end{figure}
The previous theorem is \emph{not} true for legal strings in general
--- the next two examples illustrate that
legal strings having the same overlap graph can have different
reduction graphs.
\begin{Example} \label{ex_legal_not_1_to_1_overlap_redgraph}
Let $u = 2653562434$ and $v = h(u)$, where $h$ is the homomorphism
that interchanges $5$ and $6$.
Thus, $v = 2563652434$. Note that both $u$ and $v$ are not
realistic, because substrings $535$ of $u$ and $636$ of $v$ can
obviously not be substrings of realistic strings. The overlap graph
of $u$ is depicted in
Figure~\ref{ex_legal_not_1_to_1_overlap_redgraph_fig1}. From
Figure~\ref{ex_legal_not_1_to_1_overlap_redgraph_fig1} and the fact
that $v$ is obtained from $u$ by renumbering $5$ and $6$, it follows
that the overlap graphs of $u$ and $v$ are equal. The reduction
graph $\mathcal{R}_u$ of $u$ is depicted in
Figure~\ref{ex_legal_not_1_to_1_overlap_redgraph_fig2}. The
reduction graph $\mathcal{R}_v$ of $v$ is obtained from $\mathcal{R}_u$ by
renumbering the labels of the vertices according to $h$. Clearly,
$\mathcal{R}_u \not\approx \mathcal{R}_v$.
\end{Example}
\begin{figure}
$$
\xymatrix @=30pt{
2 \ar@2{-}[d] \ar@{-}@/^1.0pc/[d] & 3 \ar@2{-}[d] \ar@{-}@/^1.0pc/[d] & 4 \ar@2{-}[d] \ar@{-}@/^1.0pc/[d] & 2 \ar@{-}[r] \ar@2{-}[d] & 2 \ar@2{-}[r] & 3 \ar@{-}[d]\\
2 & 3 & 4 & 4 \ar@{-}[r] & 4 \ar@2{-}[r] & 3}
$$
\caption{The reduction graph of $u$ of
Example~\ref{ex2_legal_not_1_to_1_overlap_redgraph}.}
\label{ex2_legal_not_1_to_1_overlap_redgraph_fig1}
\end{figure}
\begin{figure}
$$
\xymatrix @=30pt{
2 \ar@2{-}[d] \ar@{-}@/^1.0pc/[d] & 4 \ar@2{-}[d] \ar@{-}@/^1.0pc/[d] & 2 \ar@{-}[d] \ar@2{-}[r] & 3 \ar@{-}[d] & 3 \ar@{-}[d] \ar@2{-}[r] & 4 \ar@{-}[d]\\
2 & 4 & 2 \ar@2{-}[r] & 3 & 3 \ar@2{-}[r] & 4}
$$
\caption{The reduction graph of $v$ of
Example~\ref{ex2_legal_not_1_to_1_overlap_redgraph}.}
\label{ex2_legal_not_1_to_1_overlap_redgraph_fig2}
\end{figure}
\begin{Example} \label{ex2_legal_not_1_to_1_overlap_redgraph}
Let $u = \pi_{4}(M_1 M_2 M_3 M_4) = 223344$ be a realistic
string and let $v = 234432$ be a legal string. Note that $v$ is not
realistic. Legal strings $u$ and $v$ have the same overlap graph
$\gamma$ ($\gamma = (\{2,3,4\},\emptyset,\sigma,\{+,-\})$,
where $\sigma(v) = -$ for $v \in \{2,3,4\}$). The reduction graph
$\mathcal{R}_u$ of $u$ is depicted in
Figure~\ref{ex2_legal_not_1_to_1_overlap_redgraph_fig1}, and the
reduction graph $\mathcal{R}_v$ of $v$ is depicted in
Figure~\ref{ex2_legal_not_1_to_1_overlap_redgraph_fig2}. Note that
$\mathcal{R}_u$ has a component consisting of six vertices, while
$\mathcal{R}_v$ does not have such a component. Therefore, $\mathcal{R}_u
\not\approx \mathcal{R}_v$.
\end{Example}
For realistic strings the reduction graph has a special form. This
is seen as follows. For $1 < i < \kappa$ the symbol $M_i$ (or
$\bar{M}_i$) in the micronuclear arrangement defines two pointers
$p_i$ and $p_{i+1}$ (or $\bar{p}_{i+1}$ and $\bar{p}_{i}$) in the
corresponding realistic string $u$. At the same time the substring
$p_i p_{i+1}$ (or $\bar{p}_{i+1} \bar{p}_{i}$, resp.) of $u$
corresponding to $M_i$ (or $\bar{M}_i$, resp.) defines four vertices
$\RGVertL{j}, \RGVertR{j}, \RGVertL{j+1}, \RGVertR{j+1}$ in
$\mathcal{R}_u$. It is easily verified (cf.
Theorem~\ref{micronuclear_to_root_subgraph} below) that the
``middle'' two vertices $\RGVertR{j}$ and $\RGVertL{j+1}$, labelled
by $p_i$ and $p_{i+1}$ respectively, are connected by a reality edge
and $\RGVertR{j}$ ($\RGVertL{j+1}$, resp.) is connected by a desire
edge to a ``middle vertex'' resulting from $M_{i-1}$ or
$\bar{M}_{i-1}$ ($M_{i+1}$ or $\bar{M}_{i+1}$, resp.). This leads to
the following definition.
\begin{Definition}
Let $u$ be a legal string and let $\kappa = |\mathrm{dom}(u)|+1$. If
$\mathcal{R}_u$ contains a subgraph $L$ of the following form:
$$
\xymatrix @=18pt{ 2 \ar@{-}[r] & 2 \ar@2{-}[r] & 3 \ar@{-}[r] & 3
\ar@2{-}[r] & .. \ar@2{-}[r] & \kappa \ar@{-}[r] & \kappa }
$$
where the vertices in the figure are represented by their labels,
then we say that $u$ is \emph{rooted}. Subgraph $L$ is called a
\emph{root subgraph of $\mathcal{R}_u$}.
\end{Definition}
\begin{Example}
The realistic string $u$ with $\mathrm{dom}(u) = \{2,3,\ldots,7\}$ in
Example~\ref{ex_red_graph_of_realistic_string} is rooted because the
reduction graph of $u$, depicted in
Figure~\ref{ex_red_graph_of_realistic_string_fig1}, contains the
subgraph
$$
\xymatrix @=18pt{ 2 \ar@{-}[r] & 2 \ar@2{-}[r] & 3 \ar@{-}[r] & 3
\ar@2{-}[r] & .. \ar@2{-}[r] & 7 \ar@{-}[r] & 7 }
$$
\end{Example}
The next theorem shows that indeed every realistic string is rooted.
\begin{Theorem} \label{micronuclear_to_root_subgraph}
Every realistic string is rooted.
\end{Theorem}
\begin{Proof}
Consider a micronuclear arrangement for a realistic string $u$. Let
$\kappa = |\mathrm{dom}(u)|+1$. By the definition of $\pi_{\kappa}$, there
is a reality edge $e_i$ (corresponding to either $\pi_{\kappa}(M_i)
= i(i+1)$ or $\pi_{\kappa}(\overline{M_i}) = \overline{(i+1)} \
\overline{i}$) connecting a vertex labelled by $i$ to a vertex
labelled by $i+1$ for each $2 \leq i < \kappa$. It suffices to prove
that there is a desire edge connecting $e_i$ to $e_{i+1}$ for each
$2 \leq i < \kappa - 1$. This can easily be seen by checking the
four cases where $e_i$ corresponds to either $\pi_{\kappa}(M_i)$ or
$\pi_{\kappa}(\overline{M_i})$ and $e_{i+1}$ corresponds to either
$\pi_{\kappa}(M_{i+1})$ or $\pi_{\kappa}(\overline{M_{i+1}})$.
\end{Proof}
In the remainder of this paper, we will denote $|\mathrm{dom}(u)|+1$ by
$\kappa$ for rooted strings, when it is clear which rooted string
$u$ is meant. The reduction graph of a realistic string may have
more than one root subgraph: it is easy to verify that realistic
string $2 3 4 \cdots \kappa 2 3 4 \cdots \kappa$ for $\kappa \geq 2$
has two root subgraphs.
Example~\ref{ex_red_graph_of_legal_string} shows that not every
rooted string is realistic. The remaining results that consider
realistic strings also hold for rooted strings, since we will not be
using any properties of realistic strings that are not true for
rooted strings in general.
For a given root subgraph $L$, it is convenient to uniquely identify
every reality edge containing a vertex of $L$. This is done through
the following definition.
\begin{Definition}
Let $u$ be a rooted string and let $L$ be a root subgraph of
$\mathcal{R}_u$. We define ${r \! spos}_{L,k}$ for $2 \leq k < \kappa$ as the
position of the edge of $L$ that has vertices labelled by $k$ and
$k+1$. We define ${r \! spos}_{L,1}$ (${r \! spos}_{L,\kappa}$, resp.) as the
position of the edge of $\mathcal{R}_u$ not in $L$ containing a vertex
of $L$ labelled by $2$ ($\kappa$, resp.). When $\kappa = 2$, to
ensure that ${r \! spos}_{L,1}$ and ${r \! spos}_{L,\kappa}$ are well defined,
we additionally require that ${r \! spos}_{L,1} < {r \! spos}_{L,\kappa}$.
\end{Definition}
Thus, ${r \! spos}_{L,k}$ (for $1 \leq k \leq \kappa$) uniquely
identifies every reality edge containing a vertex of $L$. If it is
clear which root subgraph $L$ is meant, we simply write ${r \! spos}_{k}$
instead of ${r \! spos}_{L,k}$ for $1 \leq k \leq \kappa$.
The next lemma is essential to prove the main theorem of this paper.
\begin{Lemma} \label{overlap_equal_pos}
Let $u$ be a rooted string. Let $L$ be a root subgraph of
$\mathcal{R}_u$. Let $i$ and $j$ be positions of reality edges in
$\mathcal{R}_u$ that are not edges of $L$. Then $O_u(i,j) = \emptyset$
iff $i=j$.
\end{Lemma}
\begin{Proof}
The reverse implication is trivially satisfied. We now prove the
forward implication. The reality edge $e_k$ (for $2 \leq k <
\kappa$) in $L$ with vertices labelled by $k$ and $k+1$ corresponds
to a cyclic substring $\tilde{M}_k \in \{p_1 p_2, p_2 p_1 \mid p_1
\in \{k, \overline{k}\}, p_2 \in \{k+1, \overline{k+1}\}\}$ of $u$.
Let $k_1$ and $k_2$ be such that $2 \leq k_1 < k_2 < \kappa$. If $k_1 + 1 =
k_2$, then $e_{k_1}$ and $e_{k_2}$ are connected by a desire edge
(by the definition of $L$). Therefore, the pointer $k_2$, which occurs
in both $\tilde{M}_{k_1}$ and $\tilde{M}_{k_2}$, originates from two
different occurrences in $u$. If, on the other hand, $k_1 + 1 \not=
k_2$, then $\tilde{M}_{k_1}$ and $\tilde{M}_{k_2}$ do not have a
letter in common. Therefore, in both cases, $\tilde{M}_{k_1}$ and
$\tilde{M}_{k_2}$ are disjoint cyclic substrings of $u$. Thus the
$\tilde{M}_k$ for $2 \leq k < \kappa$ are pairwise disjoint cyclic
substrings of $u$.
Without loss of generality assume $i \leq j$. Let $u = u_1 u_2
\cdots u_n$ with each $u_l \in \Pi$. Since $u$ is a legal string, every
$u_{l}$ for $1 \leq l \leq n$ is either part of a $\tilde{M}_k$
(with $2 \leq k < \kappa$) or in $\{2, \bar 2, \kappa, \bar
\kappa\}$. Consider $u' = u_{i+1} u_{i+2} \cdots u_{j}$. Since $i$
and $j$ are positions of reality edges in $\mathcal{R}_u$ that are not
edges of $L$, we have $u' = \tilde{M}_{k_1} \tilde{M}_{k_2} \cdots
\tilde{M}_{k_m}$ for some distinct $k_1, k_2, \ldots, k_{m} \in
\{1,2,\ldots,\kappa\}$, where $\tilde{M}_1 \in \{2, \bar 2\}$ and
$\tilde{M}_{\kappa} \in \{\kappa, \bar \kappa\}$.
It suffices to prove that $u' = \lambda$. Assume to the contrary
that $u' \not= \lambda$. Then there is a $1 \leq l \leq \kappa$ such
that $\tilde{M}_l$ is a substring of $u'$. Because $O_u(i,j) =
\emptyset$, we know that $u'$ is legal. If $l
> 1$, then $\tilde{M}_{l-1}$ is also a substring of $u'$, otherwise
$u'$ would not be a legal string. Similarly, if $l < \kappa$, then
$\tilde{M}_{l+1}$ is also a substring of $u'$. By iteration, we
conclude that $u' = u$. Therefore, $i = 0$. This is a contradiction,
since $0$ cannot be a position of a reality edge. Thus, $u' =
\lambda$.
\end{Proof}
\begin{Lemma} \label{overlap_edge_realistic}
Let $u$ be a rooted string. Let $L$ be a root subgraph of
$\mathcal{R}_u$. If $\RGVertL{i}$ and $\RGVertR{i}$ are vertices of
$\mathcal{R}_u$, then exactly one of $\RGVertL{i}$ and $\RGVertR{i}$
belongs to $L$.
\end{Lemma}
\begin{Proof}
By the definition of reduction graph, $\RGVertL{i}$ and
$\RGVertR{i}$ have a common vertex label $p$ but are not connected
by a desire edge. Therefore, $\RGVertL{i}$ and $\RGVertR{i}$ do not
both belong to $L$. Now, if $\RGVertL{i}$ and $\RGVertR{i}$ both do
not belong to $L$, then the other vertices labelled by $p$, which
are $\RGVertL{j}$ and $\RGVertR{j}$ for some $j$, both belong to $L$
-- a contradiction by the previous argument. Therefore, either
$\RGVertL{i}$ or $\RGVertR{i}$ belongs to $L$, and the other one
does not belong to $L$.
\end{Proof}
The next result provides the main idea for determining the reduction
graph given (only) the overlap graph as presented in
Section~\ref{sect_overlap_to_red_graph}. It relies heavily on the
previous lemmas.
\begin{Theorem} \label{th_partone_char}
Let $u$ be a rooted string, let $L$ be a root subgraph of
$\mathcal{R}_u$, and let $p,q \in \mathrm{dom}(u)$ with $p < q$. Then there is a
reality edge $e$ in $\mathcal{R}_u$ with both vertices not in $L$, one
vertex labelled by $p$ and the other labelled by $q$ iff
$$ \bigoplus_{t \in P} O_{u}(t) = \left( \mathrm{pos}(u) \cap P \right)
\oplus \{p\} \oplus \{q\},
$$
where $P = \{p+1,\ldots,q-1\} \cup P'$ for some $P' \subseteq
\{p,q\}$.
\end{Theorem}
\begin{Proof}
We first prove the forward implication. Let $e = \{v_1,v_2\}$ with
$v_1$ labelled by $p$, $v_2$ labelled by $q$, and $\mathrm{posn}(e) = i$.
Thus $e = \{ \RGVertR{i}, \RGVertL{i+1} \}$. We assume that $v_1 =
\RGVertR{i}$ and $v_2 = \RGVertL{i+1}$; the other case is proved
similarly. Let $i_1 = \mathrm{posn}(\RGVertL{i})$ and $i_2 =
\mathrm{posn}(\RGVertR{i+1})$. By Lemma~\ref{overlap_edge_2}, $O_u(i,i_1) =
\{p\}$ and $O_u(i_2,i) = \{q\}$. By
Lemma~\ref{overlap_edge_realistic}, $\RGVertL{i}$ (labelled by $p$)
and $\RGVertR{i+1}$ (labelled by $q$) belong to $L$. Thus $i_1 \in
\{ {r \! spos}_{p-1}, {r \! spos}_p \}$ and $i_2 \in \{ {r \! spos}_{q-1}, {r \! spos}_q
\}$. By applying Corollary~\ref{overlap_edge_1_iterative} on $L$, we
have $O_u(i_1,i_2) = \left( \mathrm{pos}(u) \cap P \right) \oplus
\left(\bigoplus_{t \in P} O_u(t)\right)$ with $P =
\{p+1,\ldots,q-1\} \cup P'$ for some $P' \subseteq \{p,q\}$. By
definition of $O_u(i,j)$ we have
$$
\emptyset = O_u(i,i) = O_u(i,i_1) \oplus O_u(i_1,i_2) \oplus
O_u(i_2,i)
$$
Thus the desired result follows.
We now prove the reverse implication. By applying
Corollary~\ref{overlap_edge_1_iterative} on $L$, we have
$O_u(i_1,i_2) = \left( \mathrm{pos}(u) \cap P \right) \oplus
\left(\bigoplus_{t \in P} O_u(t)\right)$ for some $i_1 \in \{
{r \! spos}_{p-1}, {r \! spos}_p \}$ and $i_2 \in \{ {r \! spos}_{q-1}, {r \! spos}_q
\}$ (depending on $P'$). By Lemma~\ref{overlap_edge_2}, there is a
vertex $v_1$ ($v_2$, resp.) labelled by $p$ ($q$, resp.) with
position $i$ ($j$, resp.) such that $O_u(i,i_1) = \{p\}$ and
$O_u(i_2,j) = \{q\}$. By Lemma~\ref{overlap_edge_realistic} these
vertices are not in $L$. We have now
$$
\emptyset = O_u(i,i_1) \oplus O_u(i_1,i_2) \oplus O_u(i_2,j) =
O_u(i,j)
$$
By Lemma~\ref{overlap_equal_pos}, $O_u(i,j) = \emptyset$ implies
that $i=j$. Thus, there is a reality edge $\{v_1,v_2\}$ in
$\mathcal{R}_u$ (with position $i$), such that $v_1$ is labelled by $p$
and $v_2$ is labelled by $q$ and both are not vertices of $L$.
\end{Proof}
Let $\gamma_u$ be the overlap graph of some legal string $u$.
Clearly we have $\mathrm{pos}(u) = \mathrm{pos}(\gamma_u)$ and for all $p
\in \mathrm{dom}(u) = \mathrm{dom}(\gamma_u)$, $O_{u}(p) = O_{\gamma_u}(p)$.
Thus by Theorem~\ref{th_partone_char} we can determine, given the
overlap graph of a rooted string $u$, if there is a reality edge in
$\mathcal{R}_u$ with both vertices outside $L$ that connects a vertex
labelled by $p$ to a vertex labelled by $q$. We will extend this
result to completely determine the reduction graph given the overlap
graph of a rooted string (or a realistic string in particular).
\section{Compressing the Reduction Graph} \label{sect_compress_function}
In this section we define the $\mathrm{cps}$ function. The $\mathrm{cps}$
function simplifies reduction graphs by replacing the subgraph $
\xymatrix @=18pt{ p \ar@{-}[r] & p} $ by a single vertex labelled by
$p$. In this way, one can simplify reduction graphs without ``losing
information''. We will define $\mathrm{cps}$ for a general family of
graphs $\mathcal{G}$ which includes all reduction graphs. The formal
definitions of $\mathcal{G}$ and $\mathrm{cps}$ are given below.
Let $\mathcal{G}$ be the set of 2-edge coloured graphs $G = (V, E_1,
E_2, f, \Gamma)$ with the property that for all $\{v_1, v_2\} \in
E_2$, it holds that $f(v_1) = f(v_2)$. Note that for a reduction
graph $\mathcal{R}_u$, we have $\mathcal{R}_u \in \mathcal{G}$ because both
vertices of a desire edge have the same label. For all $G \in
\mathcal{G}$, $\mathrm{cps}(G)$ is obtained from $G$ by considering the
second set of edges as vertices in the labelled graph. Thus, for the
case when $G$ is a reduction graph, the function $\mathrm{cps}$
``compresses'' the desire edges to vertices.
\begin{Definition}
The function $\mathrm{cps}$ from $\mathcal{G}$ to the set of labelled
graphs is defined as follows. Let $G = (V, E_1, E_2, f, \Gamma) \in
\mathcal{G}$, then
$$
\mathrm{cps}(G) = (E_2, E'_1, f', \Gamma)
$$
is a labelled graph, where
$$
E'_1 = \{ \{e_1,e_2\} \subseteq E_2 \mid \exists v_1, v_2 \in V: v_1
\in e_1, v_2 \in e_2, e_1 \not= e_2 \mbox{ and } \{v_1, v_2\} \in
E_1 \},
$$
and for $e \in E_2$: $f'(e) = f(v)$ with $v \in e$.
\end{Definition}
Note that $f'$ is well defined, because for all $\{v_1, v_2\} \in
E_2$, it holds that $f(v_1) = f(v_2)$.
\begin{figure}
$$
\xymatrix @=30pt{
3 \ar@{-}[r] & 6 \ar@{-}[r] & 2 \ar@{-}[d] \\
7 \ar@{-}[u] & 5 \ar@{-}[l] & 4 \ar@{-}[l] \\
2 \ar@{-}[r] & 3 \ar@{-}[r] & 4 \ar@{-}[d] \\
7 \ar@{-}[u] & 6 \ar@{-}[l] & 5 \ar@{-}[l] }
$$
\caption{The labelled graph $\mathrm{cps}(\mathcal{R}_u)$, where
$\mathcal{R}_u$ is defined in Example~\ref{ex_compress_realistic_graph}.
The vertices in the figure are represented by their labels.}
\label{ex_compress_realistic_graph_fig1}
\end{figure}
\begin{Example} \label{ex_compress_realistic_graph}
We are again considering the realistic string $u$ defined in
Example~\ref{ex_red_graph_of_realistic_string}. The reduction graph
$\mathcal{R}_u$ of $u$ is depicted in
Figure~\ref{ex_red_graph_of_realistic_string_fig1}. The labelled
graph $\mathrm{cps}(\mathcal{R}_u)$ is depicted in
Figure~\ref{ex_compress_realistic_graph_fig1}. Since this graph has
just one set of edges, the reality edges are depicted as `single
edges' instead of `double edges' as we did for reduction graphs.
\end{Example}
It is not hard to see that for reduction graphs $\mathcal{R}_u$ and
$\mathcal{R}_v$, we have $\mathcal{R}_u \approx \mathcal{R}_v$ iff
$\mathrm{cps}(\mathcal{R}_u) \approx \mathrm{cps}(\mathcal{R}_v)$. In this sense,
function $\mathrm{cps}$ allows one to simplify reduction graphs without
losing information.
\section{From Overlap Graph to Reduction Graph}
\label{sect_overlap_to_red_graph} Here we define reduction graphs
for realistic overlap graphs, inspired by the characterization of
Theorem~\ref{th_partone_char}. In the remainder of this section we
show that this notion is equivalent, through the $\mathrm{cps}$
function, to the reduction graph of the corresponding realistic
string.
\begin{Definition} \label{def_reduction_graph_overlap_graph}
Let $\gamma = (Dom_{\gamma},E_{\gamma},\sigma,\{+,-\})$
be a realistic overlap graph and let $\kappa = |Dom_{\gamma}|+1$.
The \emph{reduction graph of $\gamma$}, denoted by
$\mathcal{R}_{\gamma}$, is a labelled graph
$$
\mathcal{R}_{\gamma} = (V,E,f,Dom_{\gamma}),
$$
where
$$
V = \{ \RGOVertNonRoot{p}, \RGOVertRoot{p} \mid 2 \leq p \leq \kappa
\},
$$
$$
\mbox{$f(\RGOVertNonRoot{p}) = f(\RGOVertRoot{p}) = p$, for $2 \leq
p \leq \kappa$, and}
$$
$e \in E$ iff one of the following conditions holds:
\begin{enumerate}
\item $e = \{\RGOVertRoot{p},\RGOVertRoot{p+1}\}$ and $2 \leq p <
\kappa$.
\item $e = \{\RGOVertNonRoot{p},\RGOVertNonRoot{q}\}$, $2 \leq p < q \leq \kappa$, and
$$
\bigoplus_{t \in P} O_{\gamma}(t) = \left( \mathrm{pos}(\gamma)
\cap P \right) \oplus \{p\} \oplus \{q\},
$$
where $P = \{p+1,\ldots,q-1\} \cup P'$ for some $P' \subseteq
\{p,q\}$.
\item $e = \{\RGOVertRoot{2},\RGOVertNonRoot{p}\}$, $2 \leq p \leq \kappa$, and
$$
\bigoplus_{t \in P} O_{\gamma}(t) = \left( \mathrm{pos}(\gamma)
\cap P \right) \oplus \{p\},
$$
where $P = \{2,\ldots,p-1\} \cup P'$ for some $P' \subseteq \{p\}$.
\item $e = \{\RGOVertRoot{\kappa},\RGOVertNonRoot{p}\}$, $2 \leq p \leq \kappa$, and
$$
\bigoplus_{t \in P} O_{\gamma}(t) = \left( \mathrm{pos}(\gamma)
\cap P \right) \oplus \{p\},
$$
where $P = \{p+1,\ldots,\kappa\} \cup P'$ for some $P' \subseteq
\{p\}$.
\item $e = \{\RGOVertRoot{2},\RGOVertRoot{\kappa}\}$, $\kappa > 3$, and
$$
\bigoplus_{t \in P} O_{\gamma}(t) = \mathrm{pos}(\gamma) \cap P,
$$
where $P = \{2,\ldots,\kappa\}$.
\end{enumerate}
\end{Definition}
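Since the existential quantification over $P'$ in conditions 2--5
ranges over at most four sets, each condition can be checked
mechanically. The following Python sketch (ours; sets of pointers are
Python sets, with $O_{\gamma}$ given as a dictionary and
$\mathrm{pos}(\gamma)$ as a set) tests condition 2 for a pair
$p < q$.

\begin{verbatim}
# Sketch: test condition 2 of the definition for a pair p < q.
# O[t] is the set O_gamma(t); pos is pos(gamma).  The condition asks
# for some P' subset of {p, q} with
#   XOR over t in P of O[t]  ==  (pos & P) xor {p} xor {q},
# where P = {p+1, ..., q-1} union P'.
def has_type2_edge(p, q, O, pos):
    base = set(range(p + 1, q))
    for Pprime in (set(), {p}, {q}, {p, q}):
        P = base | Pprime
        lhs = set()
        for t in P:
            lhs ^= O[t]                  # symmetric difference
        if lhs == (pos & P) ^ {p} ^ {q}:
            return True
    return False
\end{verbatim}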
\begin{figure}
$$
\xymatrix @=30pt{
2^- \ar@{-}[r] & 3^- \ar@{-}[r] \ar@{-}[d] \ar@/^0.0pc/@{-}[dr] \ar@/^0.0pc/@{-}[dl] & 4^- \ar@{-}[d] \\
6^- & 7^- \ar@{-}[r] & 5^- }
$$
\caption{The overlap graph $\gamma$ of a realistic string (used
in Example~\ref{ex1_overlap_graph}).} \label{ex1_fig1_overlap_graph}
\end{figure}
\def\pijlrc#1{\ar@/^0.3pc/@{-}[r]}
\def\pijllc#1{\ar@/^0.3pc/@{-}[l]}
\def\pijlr#1{\ar@{-}[r]}
\def\pijll#1{\ar@{-}[l]}
\def\pijld#1{\ar@{-}[d]}
\def\pijlu#1{\ar@{-}[u]}
\begin{figure}
$$
\xymatrix @=30pt{
4 \pijlr{7} & 7 & 2 \pijlr{3,5,6} & 6 \\
2 \pijlr{2,5,6} & 3 \pijlr{4,7} & 4 \pijlr{3,4,5,7} & 5 \pijld{5} \\
3 \pijlu{2,3,5,6} & 5 \pijll{3,4,7} & 7 \pijll{} & 6 \pijll{3,5} }
$$
\caption{The reduction graph $\mathcal{R}_\gamma$ of the overlap
graph $\gamma$ of Example~\ref{ex1_overlap_graph}. The vertices
in the figure are represented by their labels.}
\label{ex1_fig2_red_graph}
\end{figure}
\begin{Example} \label{ex1_overlap_graph}
The overlap graph $\gamma$ in
Figure~\ref{ex1_fig1_overlap_graph} is realistic. Indeed, for
example, the realistic string $u = \pi_7 (M_4 M_3 M_7 M_5 M_2 M_1 M_6) =
453475623267$ has this overlap graph. Clearly, the reduction graph
$\mathcal{R}_\gamma$ of $\gamma$ has the edges
$\{\RGOVertRoot{p},\RGOVertRoot{p+1}\}$ for $2 \leq p < 7$. The
following table lists the remaining edges of $\mathcal{R}_\gamma$.
The table also states the characterizing conditions for each edge as
stated in Definition~\ref{def_reduction_graph_overlap_graph}. Note
that $\mathrm{pos}(\gamma) = \emptyset$, and consequently the
right-hand sides of the defining equations in points 2, 3 and 4 of
Definition~\ref{def_reduction_graph_overlap_graph} are independent
of the choice of $P'$.
\begin{tabular}[t]{c|c|r}
Edge & $P$ & Witness \\
\hline $\{J_2,J_6\}$ & $\{3,4,5\}$ & $\{2,4,5,6,7\} \oplus \{3,5\}
\oplus \{3,4,7\} = \{2,6\}$ \\
$\{J_2,J_6\}$ & $\{2,3,4,5,6\}$ & $\{3\} \oplus \{2,4,5,6,7\} \oplus
\{3,5\} \oplus \{3,4,7\} \oplus \{3\} = \{2,6\}$ \\
$\{J_4,J_7\}$ & $\{5,6\}$ & $\{3,4,7\} \oplus \{3\}= \{4,7\}$ \\
$\{J_4,J_7\}$ & $\{4,5,6,7\}$ & $\{3,5\} \oplus \{3,4,7\} \oplus \{3\} \oplus \{3,5\} = \{4,7\}$ \\
$\{J_3,J_5\}$ & $\{4\}$ & $\{3,5\} = \{3,5\}$\\
$\{J_5,J'_7\}$ & $\{6,7\}$ & $\{3\} \oplus \{3,5\} = \{5\}$ \\
$\{J'_2,J_3\}$ & $\{2\}$ & $\{3\} = \{3\}$
\end{tabular}
We have now completely determined $\mathcal{R}_\gamma$; it is shown
in Figure~\ref{ex1_fig2_red_graph}. As we have done for reduction
graphs of legal strings, in the figures, the vertices of reduction
graphs of realistic overlap graphs are represented by their labels.
\end{Example}
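The witnesses above are plain symmetric-difference computations and
can be verified mechanically; for instance (our sketch, with the sets
$O_{\gamma}(t)$ read off
Figure~\ref{ex1_fig1_overlap_graph}):

\begin{verbatim}
# Check two witness rows of the table (pos(gamma) is empty here).
O = {2: {3}, 3: {2, 4, 5, 6, 7}, 4: {3, 5}, 5: {3, 4, 7},
     6: {3}, 7: {3, 5}}

def lhs(P):                      # XOR of O[t] over t in P
    out = set()
    for t in P:
        out ^= O[t]
    return out

assert lhs({3, 4, 5}) == {2} ^ {6}        # edge {J_2, J_6}
assert lhs({4}) == {3} ^ {5}              # edge {J_3, J_5}
\end{verbatim}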
\begin{figure}
$$
\xymatrix @=30pt{ & 2^+ \ar@{-}[dl] \ar@{-}[dr] \ar@{-}[drr] \\
5^- \ar@{-}[rr] \ar@{-}[dr] \ar@{-}[ddr] & & 4^- \ar@{-}[dl] \ar@{-}[ddl] & 7^- \ar@{-}[ddll]\\
& 3^+ \ar@{-}[d] \\
& 6^- }
$$
\caption{The overlap graph $\gamma$ of a realistic string (used
in Example~\ref{ex2_overlap_graph}).} \label{ex2_fig1_overlap_graph}
\end{figure}
\begin{figure}
$$
\xymatrix @=30pt{
3 \ar@{-}[r] & 6 \ar@{-}[r] & 2 \ar@{-}[d] \\
7 \ar@{-}[u] & 5 \ar@{-}[l] & 4 \ar@{-}[l] \\
2 \ar@{-}[r] & 3 \ar@{-}[r] & 4 \ar@{-}[d] \\
7 \ar@{-}[u] & 6 \ar@{-}[l] & 5 \ar@{-}[l] }
$$
\caption{The reduction graph $\mathcal{R}_\gamma$ of the overlap
graph $\gamma$ of Example~\ref{ex2_overlap_graph}.}
\label{ex2_fig2_red_graph}
\end{figure}
\begin{Example} \label{ex2_overlap_graph}
In the second example we construct the reduction graph of an overlap
graph that contains positive pointers. The overlap graph
$\gamma$ in Figure~\ref{ex2_fig1_overlap_graph} is realistic.
Indeed, for example, the realistic string $u = \pi_7(M_7 M_1 M_6 M_3 M_5
\overline{M_2} M_4)= 72673456 \bar 3 \bar 2 45$ introduced in
Example~\ref{ex_red_graph_of_realistic_string} has this overlap
graph. Again, the reduction graph $\mathcal{R}_\gamma$ of
$\gamma$ has the edges $\{\RGOVertRoot{p},\RGOVertRoot{p+1}\}$
for $2 \leq p < 7$. The remaining edges are listed in the table
below.
\begin{tabular}[t]{c|c|r}
Edge & $P$ & Witness \\
\hline $\{J_3,J_7\}$ & $\{4,5,6\}$ & $\{2,3,5,6\} \oplus \{2,3,4,6\}
\oplus \{3,4,5,7\} = \{3,7\}$ \\
$\{J_3,J_6\}$ & $\{3,4,5\}$ & $\{3\} \oplus \{4,5,6\} \oplus
\{2,3,5,6\} \oplus \{2,3,4,6\} = \{3,6\}$ \\
$\{J_2,J_6\}$ & $\{2,3,4,5,6\}$ & $\{2\} \oplus \{4,5,7\} \oplus
\{3\} \oplus \{4,5,6\} \oplus \{2,3,5,6\}$\\
& & $\oplus \{2,3,4,6\} \oplus \{3,4,5,7\} = \{2,6\}$ \\
$\{J_2,J_4\}$ & $\{3,4\}$ & $\{3\} \oplus \{4,5,6\} \oplus \{2,3,5,6\} = \{2,4\}$ \\
$\{J_4,J_5\}$ & $\{4,5\}$ & $\{2,3,5,6\} \oplus \{2,3,4,6\} = \{4,5\}$\\
$\{J_5,J_7\}$ & $\{5,6,7\}$ & $\{2,3,4,6\} \oplus \{3,4,5,7\} \oplus \{2,6\} = \{5,7\}$\\
$\{J'_2,J'_7\}$ & $\{2,\ldots,7\}$ & $\{2\} \oplus \{4,5,7\} \oplus
\ldots \oplus \{2,6\} = \emptyset$
\end{tabular}
Again, we have now completely determined the reduction graph; it is
shown in Figure~\ref{ex2_fig2_red_graph}.
\end{Example}
Figures~\ref{ex_compress_realistic_graph_fig1} and
\ref{ex2_fig2_red_graph} show, for $u = 72673456 \bar 3 \bar 2 45$,
that $\mathrm{cps}(\mathcal{R}_u) \approx \mathcal{R}_\gamma$. The next
theorem shows that this is true for every realistic string $u$.
\begin{Theorem} \label{overlap_to_redgraph}
Let $u$ be a realistic string. Then, $\mathrm{cps}(\mathcal{R}_u) \approx
\mathcal{R}_{\overlapgru{u}}$.
\end{Theorem}
\begin{Proof}
Let $\kappa = |\mathrm{dom}(u)|+1$, let $\gamma = \overlapgru{u}$, let
$\mathcal{R}_{\gamma} = (V_{\gamma}, E_{\gamma},
f_{\gamma}, \mathrm{dom}(u))$, let $R_u = \mathrm{cps}(\mathcal{R}_u) = (V_u,
E_u, f_u, \mathrm{dom}(u))$ and let $L$ be a root subgraph of $\mathcal{R}_u$.
Recall that the elements of $V_u$ are the desire edges of
$\mathcal{R}_u$.
Let $h: V_u \rightarrow V_{\gamma}$ be defined by
$$
h(v) = \begin{cases}
J_{f_u(v)} & \mbox{if $v$ is not an edge of $L$} \\
J'_{f_u(v)} & \mbox{if $v$ is an edge of $L$}
\end{cases}.
$$
We will show that $h$ is an isomorphism from $R_u$ to
$\mathcal{R}_{\gamma}$. Since for every $l \in \mathrm{dom}(u)$ there exists
exactly one desire edge $v$ of $\mathcal{R}_u$ that belongs to $L$ with
$f_u(v) = l$ and there exists exactly one desire edge $v$ of
$\mathcal{R}_u$ that does not belong to $L$ with $f_u(v) = l$, it
follows that $h$ is one-to-one and onto. Also, it is clear from the
definition of $f_{\gamma}$ that $f_u(v) = f_{\gamma}(h(v))$.
Thus, it suffices to prove that $\{v_1,v_2\} \in E_u \Leftrightarrow
\{h(v_1),h(v_2)\} \in E_{\gamma}$.
We first prove the forward implication $\{v_1,v_2\} \in E_u
\Rightarrow \{h(v_1),h(v_2)\} \in E_{\gamma}$. Let $\{v_1,v_2\}
\in E_u$, let $p = f_u(v_1)$ and let $q = f_u(v_2)$. Clearly, $v_1
\not= v_2$. By the definition of $\mathrm{cps}$, there is a reality
edge $\tilde e = \{ \tilde v_1, \tilde v_2 \}$ of $\mathcal{R}_u$ with
$\tilde v_1 \in v_1$ and $\tilde v_2 \in v_2$ (and thus $\tilde v_1$
and $\tilde v_2$ are labelled by $p$ and $q$ in $\mathcal{R}_u$,
respectively). Let $i$ be the position of $\tilde e$. We consider
four cases (remember that $v_1$ and $v_2$ are both desire edges of
$\mathcal{R}_u$):
\begin{enumerate}
\item
Assume that $\tilde e$ belongs to $L$. Then clearly, $v_1$ and $v_2$
are edges of $L$. Without loss of generality, we can assume that $p
\leq q$. From the structure of root subgraph and the fact that
$\tilde e$ is a reality edge of $\mathcal{R}_u$ in $L$, it follows that
$q = p+1$. Now, $h(v_1) = J'_p$ and $h(v_2) = J'_q = J'_{p+1}$. By
the first item of the definition of reduction graph of an overlap
graph, it follows that $\{h(v_1), h(v_2) \} = \{J'_p, J'_{p+1}\} \in
E_{\gamma}$. This proves the first case. In the remaining cases,
$\tilde e$ does not belong to $L$.
\item
Assume that $v_1$ and $v_2$ are both not edges of $L$ (thus $\tilde
e$ does not belong to $L$).
Now by Theorem~\ref{th_partone_char} and the second item of the
definition of reduction graph of an overlap graph, it follows that
$\{h(v_1), h(v_2) \} = \{J_p, J_q\} \in E_{\gamma}$. This proves
the second case.
\item
Assume that either $v_1$ or $v_2$ is an edge of $L$ and that the
other one is not an edge of $L$ (thus $\tilde e$ does not belong to
$L$). We follow the same line of reasoning as we did in
Theorem~\ref{th_partone_char}. Without loss of generality, we can
assume that $v_1$ is not an edge of $L$ and that $v_2$ is an edge of
$L$. Clearly,
$$
\emptyset = O_u(i,i) = O_u(i,i_1) \oplus O_u(i_1,i)
$$
for each position $i_1$. By the structure of $L$ we know that $q =
2$ or $q = \kappa$. We prove it for the case $q = 2$ ($q = \kappa$,
resp.). By Lemma~\ref{overlap_edge_2} and
Lemma~\ref{overlap_edge_realistic}, we can choose $i_1 \in \{
{r \! spos}_{p-1}, {r \! spos}_p \}$ such that $O_u(i_1,i) = \{p\}$. By
applying Corollary~\ref{overlap_edge_1_iterative} on $L$, we have
$O_u(i,i_1) = \left( \mathrm{pos}(u) \cap P \right) \oplus
\left(\bigoplus_{t \in P} O_u(t)\right)$ with $P = \{2,\ldots,p-1\}
\cup P'$ ($P = \{p+1,\ldots,\kappa\} \cup P'$, resp.) for some $P'
\subseteq \{p\}$. By the third (fourth, resp.) item of the
definition of reduction graph of an overlap graph, it follows that
$\{h(v_1), h(v_2) \} = \{J'_2, J_q\} \in E_{\gamma}$ ($\{h(v_1),
h(v_2) \} = \{J'_{\kappa}, J_q\} \in E_{\gamma}$, resp.). This
proves the third case.
\item
Assume that both $v_1$ and $v_2$ are edges of $L$, but $\tilde e$
does not belong to $L$. Again, we follow the same line of reasoning
as we did in Theorem~\ref{th_partone_char}. Without loss of
generality, we can assume that $p \leq q$. By the structure of $L$,
we know that $p = 2$ and $q = \kappa > 3$. By applying
Corollary~\ref{overlap_edge_1_iterative} on $L$, we have $\emptyset
= O_u(i,i) = \left( \mathrm{pos}(u) \cap P \right) \oplus
\left(\bigoplus_{t \in P} O_u(t)\right)$ with $P =
\{2,\ldots,\kappa\}$. By the fifth item of the definition of
reduction graph of an overlap graph, it follows that $\{h(v_1),
h(v_2) \} = \{J'_2, J'_{\kappa}\} \in E_{\gamma}$. This proves
the last case.
\end{enumerate}
This proves the forward implication. We now prove the reverse
implication $\{v_1,v_2\} \in E_{\gamma} \Rightarrow
\{h^{-1}(v_1),h^{-1}(v_2)\} \in E_u$, where $h^{-1}$, the inverse of
$h$, is given by:
$$
\begin{array}{l}
\mbox{$h^{-1}(J_p)$ is the unique $v \in V_u$ with $f_u(v) = p$ that is not an edge of $L$,}\\
\mbox{$h^{-1}(J'_p)$ is the unique $v \in V_u$ with $f_u(v) = p$ that is an edge of $L$,}\\
\end{array}
$$
for $2 \leq p \leq \kappa$. Let $e \in E_{\gamma}$. We consider
each of the five types of edges in the definition of reduction graph
of an overlap graph.
\begin{enumerate}
\item
Assume $e$ is of the first type. Then $e = \{J'_p, J'_{p+1}\}$ for
some $p$ with $2 \leq p < \kappa$. Since $h^{-1}(J'_p)$ is the
desire edge of $L$ with both vertices labelled by $p$ and
$h^{-1}(J'_{p+1})$ is the desire edge of $L$ with both vertices
labelled by $p+1$, it follows, by the definition of root subgraph,
that $h^{-1}(J'_p)$ and $h^{-1}(J'_{p+1})$ are connected by a
reality edge in $L$. Thus, we have
$\{h^{-1}(J'_p),h^{-1}(J'_{p+1})\} \in E_u$. This proves the reverse
implication when $e$ is of the first type (in
Definition~\ref{def_reduction_graph_overlap_graph}).
\item
Assume $e$ is of the second type. Then $e = \{J_p, J_q\}$ for some
$p$ and $q$ with $2 \leq p < q \leq \kappa$ and
$$
\emptyset = \left( \mathrm{pos}(u) \cap P \right) \oplus \{p\} \oplus
\{q\} \oplus \left(\bigoplus_{t \in P} O_u(t)\right)
$$
with $P = \{p+1,\ldots,q-1\} \cup P'$ for some $P' \subseteq
\{p,q\}$.
By Theorem~\ref{th_partone_char}, there is a reality edge
$\{w_1,w_2\}$ in $\mathcal{R}_u$, such that $w_1$ has label $p$ and
$w_2$ has label $q$ and both are not vertices of $L$. By the
definition of $\mathrm{cps}$, we have a $\{w'_1, w'_2\} \in E_u$ such
that $f_u(w'_1) = p$ ($f_u(w'_2) = q$, resp.) and $w'_1$ ($w'_2$,
resp.) is not an edge of $L$. Therefore $w'_1 = h^{-1}(J_p)$ and
$w'_2 = h^{-1}(J_q)$. This proves the reverse implication when $e$
is of the second type.
\item
The last three cases are proved similarly.
\end{enumerate}
This proves the reverse implication and we have shown that $h$ is an
isomorphism from $R_u$ to $\mathcal{R}_{\gamma}$.
\end{Proof}
\begin{figure}
$$
\xymatrix @=30pt{
& 4 \ar@2{-}[r] \ar@{-}[d] & 7 \ar@{-}[d] & & & 2 \ar@{-}[d] \ar@2{-}[r] & 6 \ar@{-}[d] \\
& 4 \ar@2{-}[r] & 7 & & & 2 \ar@2{-}[r] & 6 \\
2 \ar@{-}[r] & 2 \ar@2{-}[r] & 3 \ar@{-}[r] & 3 \ar@2{-}[r] & 4 \ar@{-}[r] & 4 \ar@2{-}[r] & 5 \ar@{-}[r] & 5 \ar@2{-}[d] \\
3 \ar@2{-}[u] \ar@{-}[r] & 3 \ar@2{-}[r] & 5 \ar@{-}[r] & 5 \ar@2{-}[r] & 7 \ar@{-}[r] & 7 \ar@2{-}[r] & 6 \ar@{-}[r] & 6 \\
}
$$
\caption{The reduction graph of $u$ of
Example~\ref{ex3_overlap_graph}. The vertices in the figure are
represented by their labels.} \label{ex3_fig1_red_graph}
\end{figure}
\begin{Example} \label{ex3_overlap_graph}
The realistic string $u = 453475623267$ was introduced in
Example~\ref{ex1_overlap_graph}. The reduction graph
$\mathcal{R}_{\gamma}$ of the overlap graph of $u$ is given in
Figure~\ref{ex1_fig2_red_graph}. The reduction graph $\mathcal{R}_u$ of
$u$ is given in Figure~\ref{ex3_fig1_red_graph}. It is easy to see
that the result of applying $\mathrm{cps}$ to $\mathcal{R}_u$ is a graph
that is indeed isomorphic to $\mathcal{R}_{\gamma}$. This makes
clear why there were two witnesses for each of the edges $\{J_2,J_6\}$ and
$\{J_4,J_7\}$ in Example~\ref{ex1_overlap_graph}; each one
corresponds to one reality edge in $\mathcal{R}_u$ (outside $L$).
\end{Example}
Formally, we have not yet (up to isomorphism) constructed the
reduction graph $\mathcal{R}_u$ of a realistic string $u$ from its
overlap graph. We have `only' constructed $\mathrm{cps}(\mathcal{R}_u)$ (up
to isomorphism). However, it is clear that $\mathcal{R}_u$ can easily be
obtained from $\mathrm{cps}(\mathcal{R}_u)$ (up to isomorphism) by
considering the edges as reality edges and replacing every vertex by
a desire edge of the same label.
\section{Consequences} \label{sect_consequences}
Using the previous theorem and \cite{SuccessfulnessChar_Original}
(or Chapter~13 in \cite{GeneAssemblyBook}), we can now easily
characterize successfulness for realistic overlap graphs in any
given $S \subseteq \{Gnr,Gpr,Gdr\}$. The notions of successful
reduction, string negative rule and graph negative rule used in this
section are defined in \cite{GeneAssemblyBook}.
Below we restate a theorem of \cite{Extended_paper}.
\begin{Theorem} \label{th_recall_number_cc}
Let $u$ be a legal string and let $N$ be the number of components in $\mathcal{R}_u$. Then every
successful reduction of $u$ has exactly $N-1$ string negative rules.
\end{Theorem}
Due to the `weak equivalence' of the string pointer reduction system
and the graph pointer reduction system, proved in Chapter 11 of
\cite{GeneAssemblyBook}, we can, using
Theorem~\ref{overlap_to_redgraph}, restate
Theorem~\ref{th_recall_number_cc} in terms of graph reduction rules.
\begin{Theorem} \label{char_gnr_overlap_graph}
Let $u$ be a realistic string, and $N$ be the number of components
in $\mathcal{R}_{\overlapgru{u}}$. Then every successful reduction of
$\overlapgru{u}$ has exactly $N-1$ graph negative rules.
\end{Theorem}
As an immediate consequence we have the following corollary. It
provides a solution to an open problem formulated in Chapter~13 in
\cite{GeneAssemblyBook}.
\begin{Corollary} \label{char_successfulness_overlap_graph}
Let $u$ be a realistic string. Then $\overlapgru{u}$ is successful
in $\{Gpr,Gdr\}$ iff $\mathcal{R}_{\overlapgru{u}}$ is connected.
\end{Corollary}
\begin{Example}
Every successful reduction of the overlap graph of
Example~\ref{ex1_overlap_graph} has exactly two graph negative
rules, because its reduction graph consists of exactly three
components. For example ${\bf gnr}_4 \ {\bf gdr}_{5,7} \ {\bf gnr}_2
\ {\bf gdr}_{3,6}$ is a successful reduction of this overlap graph.
Every successful reduction of the overlap graph of
Example~\ref{ex2_overlap_graph} has exactly one graph negative rule.
For example ${\bf gnr}_2 \ {\bf gpr}_4 \ {\bf gpr}_5 \ {\bf gpr}_{7}
\ {\bf gpr}_6 \ {\bf gpr}_{3}$ is a successful reduction of this
overlap graph.
\end{Example}
With the help of \cite{SuccessfulnessChar_Original} (or Chapter~13
in \cite{GeneAssemblyBook}) and
Corollary~\ref{char_successfulness_overlap_graph}, we are ready to
complete the characterization of successfulness for realistic
overlap graphs in any given $S \subseteq \{Gnr, Gpr, Gdr\}$.
\begin{Theorem} \label{th_char}
Let $u$ be a realistic string. Then $\overlapgru{u}$ is successful
in:
\begin{itemize}
\item
$\{Gnr\}$ iff $\overlapgru{u}$ is a discrete graph with only
negative vertices.
\item
$\{Gnr,Gpr\}$ iff each component of $\overlapgru{u}$ that consists
of more than one vertex contains a positive vertex.
\item
$\{Gnr, Gdr\}$ iff all vertices of $\overlapgru{u}$ are negative.
\item
$\{Gnr, Gpr, Gdr\}$.
\item
$\{Gdr\}$ iff all vertices of $\overlapgru{u}$ are negative and
$\mathcal{R}_{\overlapgru{u}}$ is connected.
\item
$\{Gpr\}$ iff each component of $\overlapgru{u}$ contains a positive
vertex and $\mathcal{R}_{\overlapgru{u}}$ is connected.
\item
$\{Gpr,Gdr\}$ iff $\mathcal{R}_{\overlapgru{u}}$ is connected.
\end{itemize}
\end{Theorem}
\section{Discussion} \label{sect_discussion}
We have shown how to directly construct the reduction graph of a
realistic string $u$ (up to isomorphism) from the overlap graph
$\gamma$ of $u$. From a biological point of view, this allows
one to reconstruct a representation of the macronuclear gene (and
its waste products) given only the overlap graph of the micronuclear
gene. Moreover, this result allows one to (directly) determine the
number $n$ of graph negative rules that are necessary to reduce
$\gamma$ successfully. Along with some results in previous
papers, it also allows us to give a complete characterization of the
successfulness of $\gamma$ in any given $S \subseteq
\{Gnr,Gpr,Gdr\}$.
It remains an open problem to find a (direct) method to determine
this number $n$ for overlap graphs $\gamma$ in general (not just
for realistic overlap graphs). That is, a better method than first
determining a legal string $u$ corresponding to $\gamma$ and
then determining the reduction graph of $u$.
\section{Introduction}
\label{sec:intro}
There are many well-known algorithms to detect sources in imaging
data, from simple identification of connected pixels above a threshold
(\citealt{bertin}, \citealt{irwin}), through matched filtering
(\citealt{stetson}, \citealt{tegmark}, \citealt{herranz_matched}) to
wavelet techniques (\citealt{vielva}, \citealt{gonzalez-nuevo},
\citealt{grumitt}). These have generally been developed with single
pass-bands in mind, but recently the increase in availability of
multi-wavelength data has spurred the development of techniques that
make optimal use of several pass bands. This has been particularly
useful for sub-mm data (\citealt{naselsky}, \citealt{herranz},
\citealt{lanz}, \citealt{planck}).
This paper describes the method that was used to detect sources
for the Herschel Astrophysical Terahertz Large Area Survey, hereafter
H-ATLAS (\citealt{eales}, \citealt{rigby}, \citealt{v16},
\citealt{m18}). The H-ATLAS is based on observations in the 100, 160,
250, 350 and 500\,$\mu $m\, bands of the {\it Herschel Space
Observatory}\footnote{{\it Herschel} is an ESA space observatory
with science instruments provided by European-led Principal
Investigator consortia and with important participation from NASA}
(\citealt{Pilbratt}), which provide maps covering $\sim$600 square degrees
of sky in the five bands. The unprecedented depth of the Herschel data
and the desire to have a blind far-infrared selected survey meant that we could
not rely on data from other surveys to identify sources; the source
detection had to be based on the Herschel maps alone. Also, the depth
of the Herschel data means that the source density is high, and the
maps are significantly affected by source blending and confusion, so
standard methods do not perform well.
It is fairly straightforward to show that the optimal way to detect an
isolated point source in a map with a uniform background and simple
Gaussian noise, is to filter the data with the point-spread function
(PSF) and find the peak in the filtered map (e.g. \citealt{north},
\citealt{pratt}, \citealt{kay}, \citealt{stetson}). The value of the
peak is equivalent to a least squares fit of the PSF to the data at
the position of the peak, and provides the minimum variance flux
estimate of the source. Our method is based on this matched filter
approach, but includes significant improvements: namely, that the
matched filter includes the effect of confusion noise; that the
application of the filter includes a locally defined noise-weighting;
that several bands can be combined in an optimal way to maximise the
efficiency of detecting sources; and that the fluxes are estimated
sequentially to reduce the effects of source blending.
Simply detecting images in each band individually and merging the
catalogues is not the optimal way to construct a combined catalogue,
and combining multi-wavelength data in an optimal way enhances the
source detection reliability and automatically produces a band-matched
catalogue. There has been extensive research in this area, considering
correlated noise between bands, variable source sizes and different
spectral behaviour, as reviewed by \cite{herranz_rev}. We developed
our method to find sources in the H-ATLAS survey, where data is
available in five bands, with different angular resolution, and each
with spatially varying noise. Note that the spatially non-uniform
noise distributions mean that it is not a good approximation to assume
simple Gaussian noise with known power spectra and cross-correlation
functions. This means that methods such as the matched multi-filter
approach of \cite{lanz} are not directly applicable to the
H-ATLAS data.
The next four sections in this paper describe the steps in the
detection and extraction process: in section~\ref{sec:back} we
estimate and subtract a non-uniform background; in
section~\ref{sec:filt} we filter the map for each waveband; in
section~\ref{sec:combine} we combine the wavebands and detect sources;
in section~\ref{sec:parameters} we estimate the source positions and
fluxes. Then in section~\ref{sec:simulations} we describe simulations
which show the improvements of our method compared to single-band PSF
filtered catalogues.
\section{Background Estimation}
\label{sec:back}
The first step in detecting sources is to estimate the background,
which may be spatially varying. A background may be instrumental or a
real astronomical signal which is a contamination to the point sources
we wish to extract. In the H-ATLAS, the background was largely local
`cirrus' from dust emission in our Galaxy. In general it is impossible
to differentiate between multiple confused sources and a smoothly
varying background (and/or foreground) component, but in either case,
it is necessary to remove the contribution of the background flux from
each individual source. So, we need to determine the local background
at all relevant positions in the map. We have done this by splitting
the map into blocks of pixels corresponding to $\sim 10\times$ the
full width at half maximum (FWHM) of the PSF, and constructing a
histogram of pixel values for each block. We then fit a Gaussian to
the peak of the histogram to find the modal value of the background,
and compare to the median value. If the peak is more than 1-$\sigma$
from the median, the fit is flagged as unreliable, and we use the
median instead. Near the edges of the map, there may be only a small
number of pixels contributing to a block. If there are fewer than 20
pixels in a block, the background is not estimated from the local
pixels, but is set to the final mean background from the whole
map. This ensures that the edges do not suffer from higher noise in
the background.
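For reference, the per-block estimate can be summarised in a few
lines; the following Python sketch is a simplified illustration of
the procedure just described, not the MADX source code.

\begin{verbatim}
# Sketch of the per-block background estimate: fit a Gaussian to the
# peak of the pixel histogram; fall back to the median if the fitted
# peak is more than 1 sigma away; the caller substitutes the global
# mean when a block has fewer than 20 pixels.
import numpy as np
from scipy.optimize import curve_fit

def block_background(pixels, nbins=50):
    pixels = pixels[np.isfinite(pixels)]
    if pixels.size < 20:
        return None
    med, sig = np.median(pixels), np.std(pixels)
    hist, edges = np.histogram(pixels, bins=nbins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    gauss = lambda x, a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    try:
        popt, _ = curve_fit(gauss, centres, hist,
                            p0=[hist.max(), med, sig])
    except RuntimeError:
        return med
    mode = popt[1]
    return mode if abs(mode - med) < sig else med
\end{verbatim}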
\begin{figure}
\includegraphics[scale=0.6]{bhist_model.pdf}
\caption{\protect\label{fig:bhist_model} Simulated histogram of pixel
values in a background block. The model has a true background of
10mJy with 6mJy Gaussian noise and a single 1Jy source in the
centre. The red line is the best fit Gaussian. The fitted peak is
is 10.3mJy (red dashed line), the median is 11.3mJy (blue dashed
line), and the mean is 42mJy (green dashed line). The dotted blue
lines are the $\pm 2 \sigma $ from the median.}
\end{figure}
This technique is valid only so long as the angular scale of a point
source is significantly smaller than the scale of background
variations. Since we have set the background blocks to be ten times
the FWHM of the PSF, this is a good approximation, and the fitted peak
of the histogram is very insensitive to bright sources in the
block. As a simple test we made a set of 1000 realisations of a model
with a background of 10mJy with Gaussian random noise with an rms of
6mJy, and put a single 1Jy Gaussian source in the middle. The
resulting histogram for a single realisation is shown in
Figure~\ref{fig:bhist_model}. The mean of the block is 42mJy, and so
would give an error of 32mJy if it were used as the background
estimate. The median is more robust, leading to an error of 1mJy, and
the peak fit is biased by only 0.3 mJy. It is worth noting that
background subtraction using simple filtering methods, such as the
Mexican Hat filter, are intrinsically linear, and so are approximately
equivalent to using the local mean value as the background
estimate. This means that they are significantly biased around bright
sources.
The background at each pixel is then estimated using a bi-cubic
interpolation between the coarse grid of backgrounds, and subtracted
from the data. This approach to background estimation is similar to
the {\tt nebuliser}
algorithm\footnote{\url{http://casu.ast.cam.ac.uk/surveys-projects/software-release/background-filtering}},
developed by the Cambridge Astronomical Survey Unit. After the initial
analysis of the science-verification data for the H-ATLAS
survey (\citealt{rigby}), we used {\tt nebuliser} to perform the
background subtraction rather than the inbuilt background subtraction
(\citealt{v16}, \citealt{m18}). This choice was largely based on the
much faster run-time of {\tt nebuliser} compared to the MADX version.
The catalogues described in this paper use the built-in MADX background
subtraction.
\section{Filtering}
\label{sec:filt}
Typically image data is sampled finely enough that point sources have
2 or 3 pixels within the FWHM in each direction. This means that the
flux from a source is spread over many pixels, and the optimal
estimate of the source flux is given by a weighted sum over pixels.
For an isolated source on a uniform background with uniform Gaussian
errors, the minimum variance estimate of the source flux is given by
the sum of the data weighted by the point spread function (PSF) at the
true position of the source. Cross-correlating the data by the PSF
gives the PSF-weighted sum at the position of each pixel, and choosing
the peak in the filtered map gives the minimum variance estimate of
the source position and source flux (see e.g. \citealt{stetson}).
If the pixel uncertainties vary spatially, the optimal weighting must
also include the inverse of the estimated variance, as derived by
\cite{serjeant}. If the power spectrum of the noise is not flat, the
optimal filter is different from the PSF. In
particular, when the source density is high, confusion noise is
important, and the optimal filter is narrower than the PSF. The
optimal filter, $Q$, can be estimated using a matched filter approach
that includes confusion noise (\citealt{chapin}). In this case, the
noise-weighted filtered map, $F$, is given by
\begin{equation}
F = \frac{(DW)\otimes Q}{W\otimes PQ} ,
\label{eqn:filt}
\end{equation}
where $D$ is the background subtracted data in each pixel, the weight
$W=1/\mathrm{var}(D)$ is the inverse of the variance of each pixel,
$P$ is the PSF, and $\otimes$ represents the cross-correlation operator.
Assuming that the instrumental noise on each pixel in the unfiltered
map is uncorrelated, the variance of each pixel in the filtered map is
given by
\begin{equation}
V = \frac{ W \otimes Q^2 }{(W \otimes PQ)^2}.
\label{eqn:fvar}
\end{equation}
This is the generalisation of the PSF-filtering approach derived by
\cite{serjeant}; setting $Q=P$ yields exactly the Serjeant et
al. results. The noise-weighting in this step is particularly
important for the H-ATLAS data, since the noise varies dramatically on
small angular scales depending on the number of detector passes a
particular sky position has (\citealt{m18}).
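These formulae translate directly into code. The following Python
sketch (our illustration, not the MADX implementation) evaluates
equations~(\ref{eqn:filt}) and~(\ref{eqn:fvar}) with FFT convolution,
assuming centro-symmetric kernels so that cross-correlation and
convolution coincide, and same-shaped arrays for $P$ and $Q$.

\begin{verbatim}
# Sketch of the noise-weighted matched filter.  D: background-
# subtracted map; W: inverse variance per pixel; P: PSF; Q: matched
# filter (Q = P when confusion noise is negligible).
import numpy as np
from scipy.signal import fftconvolve

def filter_map(D, W, P, Q):
    conv = lambda a, k: fftconvolve(a, k, mode='same')
    norm = conv(W, P * Q)              # W (x) PQ
    F = conv(D * W, Q) / norm          # filtered map, eqn (eqn:filt)
    V = conv(W, Q ** 2) / norm ** 2    # its variance, eqn (eqn:fvar)
    return F, V
\end{verbatim}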
The filtered map gives the minimum variance estimate of the flux that
a source would have at any given pixel in the map. Standard FFT
routines allow easy calculation of the filtered maps at integer pixel
positions, but in practice, sources are not centred on pixels. In
order to find the best flux estimates, we need to allow for sub-pixel
positioning. Without this, the fluxes will be significantly
underestimated, particularly when a source lies at the edge of a
pixel. Our approach to solving this problem is discussed in Section~\ref{sec:sub-pixel}.
\section{Combining Wavebands}
\label{sec:combine}
The filtered map in a single band provides an estimate of source flux
and uncertainty at any position, and this approach can be extended to
include any other wavebands that are available. If we know the
observed spectral energy distribution (SED), of a source,
$S(\lambda)$, then the flux in a band with response $R(\lambda)$, is
given by
\begin{equation}
F = \frac{\int S(\lambda) R(\lambda) d\lambda } {\int R(\lambda)
d\lambda } ,
\end{equation}
where we assume the detector measures total energy, as in a bolometer,
not the total count of photons as in a CCD.
We define the normalised SED as $S_0(\lambda)$ where
\begin{equation}
S_0(\lambda)= \frac{S(\lambda)}{ \int S(\lambda) d\lambda} ,
\end{equation}
so the observed SED of the source is $A S_{0}(\lambda)$, where
$A=\int S(\lambda) d\lambda$. Given a set of filter pass-bands, $R_k$
and the source SED, the true broad-band flux in each band is
$F_k = AF_{k0}$, where
\begin{equation}
F_{k0} = \frac{\int S_{0}(\lambda) R_k(\lambda) d\lambda } {\int R_k(\lambda)
   d\lambda } .
\end{equation}
Since the value of $A$ does not depend on wavelength, the filtered map
in each band gives an independent estimate of $A$. In order to combine
the maps, we need to have the estimates at exactly the same
position. As discussed in Section~\ref{sec:sub-pixel}, it is
reasonable to use a bi-cubic interpolation to estimate the source flux
at non-integer pixel positions. If we interpolate the lower resolution
maps to the pixel centres of the highest resolution map, then we can
take the inverse variance weighted sum to obtain the minimum variance
estimate of $A$ at the pixel positions of the highest resolution map.
For waveband $k$ the estimate of $A$ at position $x$ is
$A_k = F_k(x)/F_{k0}$, and the variance is
$\sigma_{A,k}^2 = V_k(x)/F_{k0}^2$. Hence the overall minimum variance
estimate of $A_{\mathrm{tot}}$ is given by
\begin{equation}
\label{eqn:Atot}
A_{\mathrm{tot}} = \frac{\sum_k{F_{k}\frac{F_{k0}}{V_k}} }
{\sum_k{\frac{F_{k0}^2}{V_k}}},
\end{equation}
and the uncertainty on $A_\mathrm{tot}$ is given by
\begin{equation}
\label{eqn:sigmatot}
\sigma_A^2 = \frac{1}{\sum_k{\frac{F_{k0}^2}{V_k}}}.
\end{equation}
So, the significance of a source detection at any position is
$A_\mathrm{tot}/\sigma_A $. This is a very simple derivation of the
standard result first presented by \cite{naselsky}. As for the case of
a filtered map in a single band, we estimate the most likely position
of the source as the position of the peak in the combined significance
map.
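In code form, equations~(\ref{eqn:Atot}) and~(\ref{eqn:sigmatot})
amount to an inverse-variance weighted sum over bands; the following
sketch (ours) assumes the per-band filtered maps and variances have
already been interpolated onto a common pixel grid.

\begin{verbatim}
# Sketch: combine per-band filtered maps F[k] and variances V[k]
# with SED weights F0[k] into the amplitude estimate A_tot and a
# significance map.
import numpy as np

def combine_bands(F, V, F0):
    num = sum(Fk * f0 / Vk for Fk, Vk, f0 in zip(F, V, F0))
    den = sum(f0 ** 2 / Vk for Vk, f0 in zip(V, F0))
    A_tot = num / den                  # eqn (eqn:Atot)
    sigma_A = np.sqrt(1.0 / den)       # eqn (eqn:sigmatot)
    return A_tot, A_tot / sigma_A      # estimate, significance
\end{verbatim}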
Note that these formulae include a factor $F_{k0}^2$ as part of the
weight given to the waveband $k$, so that the true SED acts as a
weighting term for each band as well as the inverse variance
factor. This makes intuitive sense: if a source's flux is expected to
peak in a particular band, we should give that band the most weight in
determining the position of the source; if a source has a flat
spectrum, so that the flux is equal in all bands, then all bands are
given equal weight. In general we do not actually know the true SED of
any particular source, and clearly don't know the SED of sources that
we have not yet detected. However, we can maximise the detection rate
of a particular type of source by choosing an SED prior to match.
This derivation considers an isolated source, but we can filter and
combine the full area of available data to produce a global
significance map, and find all the peaks to consider as potential
sources. For H-ATLAS, we kept those that are more than $2.5\sigma$ as
potential sources. In principle we could retain all peaks, but
rejecting the low-significance peaks gives a large saving in
computing time, while not losing any significant detections.
\section{Source parameters}
\label{sec:parameters}
\subsection{Estimating positions and fluxes}
To estimate the position of each source, we perform a variance
weighted least-squares fit of a Gaussian to the $5\times 5$ pixels
around each peak. The position of the peak is allowed to vary freely,
and is not constrained to be at integer pixel positions. We fit only
to pixels near the peak to minimise the effects of confusion from
other nearby sources. Since the individual maps have been filtered,
the peak pixels already include data from the surrounding raw pixels,
combined in an optimal way; the peak fitting is solely to find the
position at the sub-pixel level.
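As an illustration, the position fit can be set up as a weighted
least-squares problem; the sketch below is our simplification, using
a circular Gaussian, and fits the $5\times5$ pixel patch around a
peak with a free, non-integer centre.

\begin{verbatim}
# Sketch of the position fit: variance-weighted least squares of a
# (here circular) Gaussian over the 5x5 pixels around a peak (iy, ix).
import numpy as np
from scipy.optimize import curve_fit

def fit_peak(F, V, iy, ix):
    yy, xx = np.mgrid[iy - 2:iy + 3, ix - 2:ix + 3]
    z = F[iy - 2:iy + 3, ix - 2:ix + 3].ravel()
    s = np.sqrt(V[iy - 2:iy + 3, ix - 2:ix + 3]).ravel()

    def g2d(xy, a, y0, x0, w):
        y, x = xy
        return a * np.exp(-0.5 * ((y - y0)**2 + (x - x0)**2) / w**2)

    p0 = [z.max(), float(iy), float(ix), 2.0]
    popt, _ = curve_fit(g2d, (yy.ravel(), xx.ravel()), z,
                        p0=p0, sigma=s)
    return popt[1], popt[2]            # fitted sub-pixel (y, x)
\end{verbatim}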
In order to estimate the flux in each band, we use the individual
filtered maps, and interpolate to find the value of each map at the
position of the peak in the combined map. For an isolated source, this
will provide the optimal flux estimates. However, if there are sources
that are close together, so that the PSFs significantly overlap, this
simple approach will `double-count' flux, because the wings of each
source add on to the peak of its neighbour. A simple way to avoid this
problem is to sort the sources in order of decreasing flux based on
their initial peak-pixel value, and then estimate the optimal fluxes
in sequence. After getting the optimal fluxes for a source, we
subtract the scaled filtered PSF from the maps before estimating the
fluxes for the next source. This is similar in concept to the clean
algorithm (\citealt{hogbom}) but with just one pass. This process is
done separately for each band, so the `clean' works from the brightest
sources in each band. In principle, the procedure could be iterated to
a stable solution, but in practice the difficult cases are blends of
sources that require a more sophisticated de-blending technique to
improve the flux estimates. So iterating this simple clean algorithm
would provide a very small gain in reliability at a large
computational cost.
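Schematically, the one-pass subtraction works as follows (our sketch;
border handling and the sub-pixel interpolation of the next section
are omitted for brevity):

\begin{verbatim}
# Sketch of the one-pass 'clean': measure sources in decreasing order
# of peak flux, subtracting the scaled, filtered PSF after each, so
# the wings of bright sources do not contaminate fainter neighbours.
import numpy as np

def sequential_fluxes(Fmap, peaks, psf_filt):
    # peaks: integer (iy, ix) positions; psf_filt: filtered PSF stamp,
    # peak-normalised, with odd side length.  Border clipping omitted.
    h = psf_filt.shape[0] // 2
    fluxes = {}
    for (iy, ix) in sorted(peaks, key=lambda p: Fmap[p], reverse=True):
        flux = Fmap[iy, ix]
        fluxes[(iy, ix)] = flux
        Fmap[iy - h:iy + h + 1, ix - h:ix + h + 1] -= flux * psf_filt
    return fluxes
\end{verbatim}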
To provide the uncertainty for each flux measurement, we create a map
of the filtered noise using equation~\ref{eqn:fvar}, and perform the
same interpolation to estimate the flux variance in each band at each
source position.
\subsection{Sub-pixel subtleties}
\label{sec:sub-pixel}
The above analysis ignores the complication that our data typically
sample the sky with only 2 or 3 pixels across the FWHM of the PSF.
When the PSF is sampled into coarse pixels, the value of the peak
pixel is averaged over the whole area of the central pixel, and so is
suppressed relative to the peak of the true PSF. For a PSF that is
close to a Gaussian with 3 pixels across the FWHM, this suppression is
typically $\sim5$ per\,cent. Since we use the pixelated PSF (the
Point Response Function - or PRF) when filtering the data, the
filtered data is boosted by the suppression factor, and so the
estimated flux for a source that is centred in a pixel is unbiased. For
a source that is not centred in a pixel, the observed peak value is
suppressed compared to a pixel centred source.
\begin{figure}
\includegraphics[width=0.5\textwidth,trim={0 10mm 0 0},clip]{subpixel_errors.pdf}
\caption{Mean fractional flux errors as a function of the precise source
position within a pixel. The contours are linearly spaced from 0 to
$-0.9$ per\,cent. The central thicker contour corresponds to mean error of 0,
and the outer thicker contour is $-0.5$ per\,cent. The maximum error is
$\sim -1$ per\,cent for a source in the corner of 4 pixels.
\label{fig:sub-pixel}
}
\end{figure}
In order to obtain the most accurate estimates of source flux for an
arbitrary position, we need to consider the true flux distribution
within the footprint of each pixel. An obvious way to improve on the
flux estimated from individual pixel values is to interpolate between
them. However, a bi-linear interpolation does not remove the bias, as
can be seen by considering a source that is exactly half way between
two pixels; each pixel will have the same value that is less than the
true peak value, so the interpolated value will also be biased low. A
bi-cubic interpolation allows the interpolated value to be higher than
either individual pixel, and so gives a better flux estimate.
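In practice this is a one-line operation; for example (a sketch,
using cubic-spline interpolation as a stand-in for the bi-cubic
scheme):

\begin{verbatim}
# Sketch: read off the filtered map at a fitted sub-pixel position
# using cubic interpolation (order=3).
import numpy as np
from scipy.ndimage import map_coordinates

def flux_at(Fmap, y, x):
    return map_coordinates(Fmap, [[y], [x]], order=3,
                           mode='nearest')[0]
\end{verbatim}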
Since we estimate the source positions by fitting a Gaussian to the
central $5\times5$ pixels of each source, the positions are not
directly affected by the pixelization; for signal to noise ratio
greater than 20 we find that the source positions are accurate to
better than 1/12 of a pixel (see Figure~\ref{fig:positions}). So, to
estimate the flux of the source, we use the filtered maps interpolated
to the best fit sub-pixel position. If the source position lies at
the centre of a pixel, the interpolation returns that pixel value, and
this will be an unbiased flux estimate. If the source position lies
at the boundary between two pixels, the pixel values are suppressed
relative to the peak of the pixel-centred PSF, but the bi-cubic
interpolation means that the estimated flux will be higher than the
pixel values, thus reducing the suppression. We tested this effect by
creating simulated data at higher resolution, averaging over the small
pixels to produce low-resolution data, and then measuring the
recovered flux from the interpolated, filtered low-resolution data. We
find the interpolation reduces the suppression due to pixelization,
but does leave a slight underestimate of the actual peak. The
fractional flux error as a function of position is shown in
Figure~\ref{fig:sub-pixel}. The error is zero at the centre of a
pixel, and smoothly increases towards the pixel edges. It is largest
for a source at the corner of 4 pixels when the flux is underestimated
by $\sim 1$ per\,cent. Although the simulations are specific to a
simple Gaussian PSF, this is a good approximation to the H-ATLAS
data. In fact, the PSF in most astronomical data can be approximated
by a Gaussian near the peak, and the pixel scales are typically chosen
to sample the FWHM at a similar spacing, and so similar improvements
are likely for other data.
\section{Tests of the methods}
\label{sec:simulations}
\subsection{Simulations}
As a simple test of the source detection algorithm we generated maps
covering $3.4^\circ \times 13.6^\circ$, in three bands: 250, 350 and
500\,$\mu $m\,. This is equivalent to a single one of the three H-ATLAS
equatorial fields (\citealt{v16}). Sources are placed on a grid of
positions separated by 3 arcminutes on the sky with a small blank area
around the edges of the maps, leading to 17750 sources in the
maps. Each source is assigned small random offsets from the exact
grid centre, so that the pixels sample the PSF with random offsets
from the pixel centres. The PSF for each band is chosen to be a
Gaussian with full-width half maximum of 18, 24 and 36 arc seconds
respectively, roughly matching the Herschel beam in the three bands.
The PSF is over-sampled by a factor of 50 (corresponding to 0.12
arc-second pixels in the 250\,$\mu $m\, band) and re-binned to the final
pixel sizes of 6, 8 and 12 arc seconds, chosen to match the H-ATLAS
maps. Each source is given a 250\,$\mu $m\, flux between 1mJy and 1Jy,
uniformly spaced in log flux. This is clearly not a good match to the
flux distribution of real sources, but is a simple way to provide good
statistics over the full flux range. The 350\,$\mu $m\, and 500\,$\mu $m\, fluxes
for each source are then assigned so that the SED matches a modified
black body with $\beta$ chosen from a uniform random distribution
between 1 and 2, and temperature, $T$, randomly chosen from a
log-normal distribution centred on $T=25$K, and ranging from 20K to
35K, as shown in Figure~\ref{fig:Tdist}. This distribution roughly
matches the SEDs of low-redshift galaxies seen in the H-ATLAS survey
(\citealt{smith}).
Sources are included following a two-component redshift distribution with a
low-redshift population peaking at $z=0.3$, and a high redshift
population extending to $z\sim 2$ with a peak at $z=1.2$. This
reproduces the $F_{250}/F_{350}$ colour distribution observed in the
H-ATLAS survey, as shown in Figure~\ref{fig:col_hist}. Although these
simulations provide a reasonable match to the $F_{250}/F_{350}$ colour
of real data, they are not red enough in the $F_{500}/F_{350}$ colour
distribution. To investigate the impact for redder sources, we also
considered simulations with an extended tail of high redshift sources
which leads to an enhanced population of extremely red sources
(\citealt{highz}). Finding a best fit model which reproduces the
observed colour distributions is beyond the scope of this paper, but
the real H-ATLAS data will be somewhere between these two sets of
simulations.
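For reference, the colour assignment is a two-parameter modified
black body; the following sketch (ours; rest-frame form, with the
redshifting of the bands omitted) returns the flux ratio between two
wavelengths.

\begin{verbatim}
# Sketch of the simulated SED: a modified black body,
# S(lambda) propto lambda^-beta * B_lambda(T), used here to scale a
# 250 micron flux to another band (redshifting omitted).
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # SI units

def mbb_ratio(lam_um, T, beta, lam0_um=250.0):
    def S(l_um):
        lam = l_um * 1e-6
        B = 1.0 / (lam**5 * (np.exp(h * c / (lam * kB * T)) - 1.0))
        return lam**-beta * B
    return S(lam_um) / S(lam0_um)

# e.g. F350 = F250 * mbb_ratio(350.0, T=25.0, beta=1.5)
\end{verbatim}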
\begin{figure}
\includegraphics[width=0.5\textwidth]{temp_hist.pdf}
\caption{The temperature distribution of simulated sources. The red
line shows the scaled and shifted log-normal probability distribution used to
generate the temperatures. The blue histogram shows the source
counts for a single realisation of the simulations.
\label{fig:Tdist}}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{col_hist.pdf}
\caption{The $F_{250}/F_{350}$ colour distribution of simulated sources
compared to that observed in the H-ATLAS survey. The black dotted
histogram shows the observed colour distribution. The blue
histogram shows the distribution for a particular realisation with
matching colour distribution. The red histogram shows a simulation
with an extended high-z population.
\label{fig:col_hist}}
\end{figure}
We also added a galactic background by taking the 100\,$\mu $m\, and
temperature maps from \cite{SFD} and scaling the 100\,$\mu $m\, emission
to the relevant wavelength using a modified black-body. The resolution
of these maps is several arcminutes, and so does not contain
small-scale structure in the cirrus background. The H-ATLAS data show
that in some patches of sky there is strong cirrus emission with
significant structure on sub-arc-minute scales, but in most areas, the
emission is relatively smooth. Our simulated background is a
reasonable approximation for most of the sky, but will be somewhat
easier to subtract than the areas where the true cirrus is
particularly strong and structured.
Finally we add Gaussian noise to the maps. The standard deviation is
set to roughly match the instrumental noise in the H-ATLAS survey
(\citealt{v16}). The values we use here are 9.3, 9.8 and 13.5\,mJy per
pixel in the 250, 350 and 500\,$\mu $m\, bands respectively. Note that the
sources are positioned on a grid, and so do not suffer from confusion
noise, meaning that the appropriate matched filter is the PSF. We consider
confusion noise and the modified matched filter in
Section~\ref{sec:mf}.
We then run MADX on the simulated maps to detect the sources and
measure their position and fluxes. We used several different priors:
first using only the single bands for the detection: weights
1,0,0 for the 250\,$\mu $m\, band; weights 0,1,0 for the 350\,$\mu $m\, band;
weights 0,0,1 for the 500\,$\mu $m\, band. Second, we used equal weighting
for each band (weights 1,1,1), corresponding to a flat spectrum
source. By design, the sources are on a grid, and cannot overlap, so a
simple positional match allows us to associate the detected sources to
the corresponding input sources, and calculate the errors in position
and fluxes. We identify a recovered source with an input source if the
recovered position is within one pixel of the input position. For very
low signal-to-noise detections, the large standard deviation of the
positional errors means that some detections are not matched within
the one pixel radius. This has a small effect on the catalogue
completeness, but the dominant source of incompleteness is simply
noise on the flux estimates.
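In pseudo-code, the association step amounts to a nearest-neighbour
match with a one-pixel acceptance radius. The sketch below is
illustrative (not our pipeline code) and treats positions as flat-sky
offsets in arcseconds:
\begin{verbatim}
import numpy as np

def match_to_inputs(x_in, y_in, x_det, y_det, pix=6.0):
    # accept a detection only if it lies within one pixel
    # (6 arcsec in the 250 um map) of its nearest input source
    matches = []
    for xd, yd in zip(x_det, y_det):
        d2 = (x_in - xd)**2 + (y_in - yd)**2
        i = int(np.argmin(d2))
        if d2[i] <= pix**2:
            matches.append((i, xd - x_in[i], yd - y_in[i]))
    return matches   # (input index, dRA, dDec) per matched source
\end{verbatim}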
\subsection{Results}
\label{sec:results}
\begin{figure}
(a)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{completeness_250_new.pdf}}\\
(b)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{completeness_350_new.pdf}}\\
(c)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{completeness_500_new.pdf}}
\caption{
Completeness of recovered source catalogues as a function of flux in
each band. The blue and black lines are for simulations which match
the H-ATLAS colour distribution; the red and magenta include a
high-redshift red population. The dotted lines are for source
detection using only the relevant single band in each panel, and the
solid lines are for detection using the equal weighting of bands. The dashed curves show
the expected completeness from Gaussian errors and the vertical dashed
lines show the 2.5-$\sigma$ detection threshold. In the 250\,$\mu $m\, band,
using the flat prior pushes the 50 per\,cent completeness limit about
a factor 1.25 deeper. In the 350\,$\mu $m\, and 500\,$\mu $m\, bands the gains using the flat
prior are factors of 1.28 and 3 for the colour-matched simulations.
The effect of the red population is to slightly reduce the
completeness in the 350 and 500\,$\mu $m\, bands. This is a small
effect as a percentage of the full catalogue, but represents a
significant improvement for the red population itself.
\label{fig:completeness} }
\end{figure}
\begin{figure}
(a)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{false_250_new.pdf}}\\
(b)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{false_350_new.pdf}}\\
(c)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{false_500_new.pdf}}
\caption{
The number of false detections per beam as a function of signal to
noise. The blue and black lines are for simulations which match the
H-ATLAS colour distribution; the red and magenta include a
high-redshift red population. The dotted lines are for source
detection using only the relevant single band and the solid
lines show the equal weighting of bands. The dashed lines
show the expected rate of false detections from Gaussian errors and
the 2.5-$\sigma$ threshold applied during the MADX detection. For the
250\,$\mu $m\, and 350\,$\mu $m\, bands, the flat prior reduces the false detection rate by
about a factor 4 at the 4-$\sigma$ limit, and a factor 6 at
3-$\sigma$. In the 500\,$\mu $m\, band the gain is roughly a factor 10 between
2 and 4-$\sigma$.
The inclusion of the extra red population does not significantly
change the false detection rates.
\label{fig:false} }
\end{figure}
For each simulation, we measure the completeness by simply counting
the fraction of input sources that are detected as a function of
flux. Figure~\ref{fig:completeness} shows the completeness as a
function of flux in each band for both the colour-matched and
red-population simulations, and for catalogues using the single-band
priors and the flat priors. The blue lines are for colour matched
simulations with catalogues using the single band prior and the black
lines use the flat prior. It is clear that including information from
all three bands significantly improves the completeness of the
resulting catalogue. The flux limit at 50 per\,cent completeness in
the 250\,$\mu $m\, and 350\,$\mu $m\, band samples is a factor $\sim1.4$ deeper
using the multi-band approach compared to the single band method. The
gain for the 500\,$\mu $m\, band is about a factor of 3, reflecting the very
large gain in signal to noise ratio by using the information from the
other bands to identify sources. The improvements are very similar for
the simulations with extra red sources.
The noise in the maps leads to peaks that are detected as a source,
but do not correspond to an input source. The number of false
detections per beam area is shown as a function of signal to noise for
each band in Figure~\ref{fig:false}. Using the flat-prior detection
reduces the number by a factor between 4 and 10 in all bands compared
to the single-band detection. Including extra red sources in the
simulations makes no significant difference to the rate of false
detections. For the single band detections, the expected number of
noise peaks for Gaussian noise with a 2.5-$\sigma$ cut is shown by the
dashed line. The observed number of false detections follows this
very well above the 2.5-$\sigma$ cut. There are a small number of
sources with fluxes below the 2.5-$\sigma$ limit; this is because the
cut is applied to the initial peak pixel value, while the final flux
plotted on the $x$-axis uses the flux measured at the interpolated peak
position. The noise between the two flux estimates scatters some
sources below the initial cut. In practice, for the real H-ATLAS data
we apply a much higher limit (between 4 and 5$\sigma$) in the final
catalogue, so this effect is not visible.
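The Gaussian expectations shown as dashed lines follow directly from
the error function. A minimal sketch, under the simplifying assumption
(ours, for the false-detection rate) of one independent sample per
beam:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def completeness(flux, sigma, nu_cut=2.5):
    # fraction of sources whose measured flux (true flux plus
    # Gaussian noise of std sigma) exceeds the nu_cut threshold
    return 0.5 * erfc((nu_cut * sigma - flux) / (np.sqrt(2) * sigma))

def false_rate(snr, nu_cut=2.5):
    # expected noise peaks per beam above snr (>= nu_cut),
    # assuming one independent Gaussian sample per beam
    return np.where(snr >= nu_cut, 0.5 * erfc(snr / np.sqrt(2)), 0.0)
\end{verbatim}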
\begin{figure}
(a)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{position_errors_250.pdf}}\\
(b)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{position_errors_flat.pdf}}
\caption{
\label{fig:positions} The measured positional errors of simulated
sources plotted as a function of signal-to-noise in the 250\,$\mu $m\,
band. The standard deviations of the errors in RA are shown as blue
circles, and in Dec as red crosses. Panel (a) shows the
measurements using only the 250\,$\mu $m\, band to detect sources and measure
their positions. Panel (b) shows the measurements using the flat-prior
detection. The black line in both panels shows the variation expected
from the theoretical analysis of \protect\cite{ivison}. For
signal-to-noise ratio less than four the 1 pixel matching criterion
means that some true matches with large positional errors have not
been included in the matched sample, and this reduces the apparent
$\sigma$. A 2-d Gaussian with $\sigma=4''$ truncated at 1 pixel radius
($=6''$) has a standard deviation of $2.7''$, as we see here. }
\end{figure}
Next we compare the measured positions to the input positions. While
including data from lower-resolution or lower signal-to-noise bands
could increase the positional errors, summing the data with the
correct weighting in fact adds information, and the positional accuracy
is improved. This can be seen in Figure~\ref{fig:positions}, which
shows the rms position error in RA and Dec as a function of
signal-to-noise, defined as $A_{tot}/\sigma_A$ from
equations~\ref{eqn:Atot} and \ref{eqn:sigmatot}. Using only the
250\,$\mu $m\, band to detect the sources leads to positional errors in good
agreement with the theoretical expectations from
\cite{ivison}. Including all bands significantly reduces the
positional errors, even though the other bands have poorer
resolution. At low signal-to-noise (SNR$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 4$) the apparent
positional error starts to flatten because only the sources within one
pixel are counted as matches, and the apparent standard deviation is
biased too low.
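The quoted $2.7''$ can be checked numerically; a short sketch of the
standard deviation of a truncated 2-d Gaussian (assuming the quoted
$\sigma=4''$ is per coordinate):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

sigma, R = 4.0, 6.0   # per-coordinate sigma and matching radius ['']

# radial density of a 2-d Gaussian: p(r) = r exp(-r^2 / 2 sigma^2)
p = lambda r: r * np.exp(-r**2 / (2 * sigma**2))

norm = quad(p, 0, R)[0]
mean_r2 = quad(lambda r: r**2 * p(r), 0, R)[0] / norm

# per-coordinate standard deviation is sqrt(E[r^2]/2)
print(np.sqrt(mean_r2 / 2))   # ~2.7 arcsec
\end{verbatim}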
Finally we compare the measured and input fluxes for the sources, in
terms of both random and systematic errors. To assess the random
errors, we simply calculate the standard deviation of the difference
between the measured and input fluxes. Over the range of
interest, we find that this does not vary significantly as a function
of flux, and is 4.4, 4.6 and 6.4\,mJy for the 250\,$\mu $m\,, 350\,$\mu $m\, and
500\,$\mu $m\, bands respectively. These are between 3 and 6 per\,cent higher
than the values of 4.2, 4.5 and 6.1\,mJy expected from the simple
application of equation~\ref{eqn:fvar} to estimate the flux
errors. This small discrepancy is likely to be caused by sub-pixel
positioning effects and residual background subtraction errors. The
choice of prior makes no significant difference to the flux errors, as
these errors are dominated by the pixel-to-pixel flux errors on the
map for each band. Also the colour distribution used in the
simulations makes no significant difference to the errors.
\begin{figure}
(a)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{flux_bias_250_new.pdf}}
(b)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{flux_bias_350_new.pdf}}
(c)\raisebox{-0.9\height}{\includegraphics[width=0.47\textwidth]{flux_bias_500_new.pdf}}
\caption{\label{fig:fluxes} The ratio of mean measured flux compared
to the mean input flux as a function of input signal to noise for
different source detection priors. The blue lines show the
single-band priors, and the black lines use a flat prior. Panels
(a), (b) and (c) show the results for the 250\,$\mu $m\,, 350\,$\mu $m\, and
500\,$\mu $m\, fluxes respectively. Using the single-band source detection
leads to significant flux boosting in the measured fluxes. For the
250\,$\mu $m\, and 350\,$\mu $m\, bands, the flat prior reduces the boosting
effect at fainter fluxes. For the 500\,$\mu $m\, band, the flat-prior
fluxes are systematically underestimated at fainter fluxes. The red
and magenta lines show the results from simulations with extra red
sources. The red sources do not introduce any significant changes. }
\end{figure}
Figure~\ref{fig:fluxes}(a) shows the mean ratio of measured to input
flux as a function of signal to noise. At high signal to
noise there is a small ($\sim$0.5 per\,cent) underestimate of flux due to the
peak pixelization issues discussed in Section~\ref{sec:sub-pixel}.
For sources with fluxes near the detection limit, there is a
systematic bias to higher fluxes when using the single-bands to detect
sources (flux boosting). This is related to Eddington/Malmquist bias
when selecting sources to be above a signal-to-noise threshold: faint
sources with negative errors are not retained in the catalogue,
whereas those with positive errors are detected to a fainter level.
The precise form of the boosting depends on the distribution of true
source fluxes, as well as the measurement errors. For our current
simulations we chose to distribute sources uniformly in log flux, and
so they do not match real source flux distributions, even though the
colours are realistic. Hence we cannot use them to estimate the
boosting as a function of flux for real data. \cite{v16} created
realistic simulations and used them to estimate both the completeness
and boosting correction factors that apply to the H-ATLAS data.
Using the flat prior combination to detect sources includes
information from all three bands, and so reduces the bias from noise
peaks in any single band, leading to significantly reduced flux
boosting. For the 500\,$\mu $m\, band, the angular size of the PSF is larger,
and the signal-to-noise is significantly smaller than the other two
bands. This means that it contributes only a small amount to the
detection signal and positional measurement. The positional errors
mean that the local peak is missed and the flux estimate is
systematically underestimated. This bias can be corrected using the
average values measured from simulations (\citealt{v16}).
The flux boosting and biases are not significantly affected by the
inclusion of extra red sources in the simulations.
\subsection{Confusion Noise}
\label{sec:mf}
The simulations described so far contain no confusion noise, and so
the appropriate optimal filter is simply the PSF. To test the
performance of the modified matched filter approach, we have added
confusion noise to each pixel in the simulations as an extra term
consisting of PSF-filtered Gaussian noise, with the variance as seen
in the H-ATLAS data (\citealt{v16}). The corresponding standard
deviation is $\sim$7\,mJy per pixel in all three bands. We use these
values of confusion noise to calculate the matched filters as
described in Appendix A of \cite{chapin}. We then re-ran the image
detection based on the single-band priors, and used both the PSF
filter and the matched filter. We also used the flat-prior detection
with the matched filters. The resulting completeness comparisons are
shown in Figure~\ref{fig:completeness_mf}. The matched filter
selection improves the completeness at a given signal to noise cut,
providing a catalogue $\sim$20 per\,cent deeper. As before, using the
flat prior detection also provides an improvement of a factor
$\sim1.3$ in flux at the same completeness level for the 250\,$\mu $m\, and
350\,$\mu $m\, bands, and a factor 2 for the 500\,$\mu $m\, band. Overall the two
modifications give a catalogue that is a factor of 1.5 to 3 deeper in
flux at 50 per\,cent completeness.
The added confusion noise means that the flux errors are larger than
for the simulations with only Gaussian noise. Using the matched filter
reduces the errors by about 10 per\,cent compared to using the PSF. As an
aside, we note that using the matched filter for the Gaussian noise
simulations leads to a 14 per\,cent increase in flux errors compared to the
PSF filter. This is exactly as would be expected since the optimal
filter for data with Gaussian noise is the PSF, and the optimal filter
for data with confusion noise is the appropriate matched filter.
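Schematically, the filter is built in Fourier space from the PSF and
the two noise components. The sketch below follows the spirit of the
\cite{chapin} construction, with our own simplified normalisation
conventions, and assumes (as in these simulations) that confusion is
PSF-filtered white noise:
\begin{verbatim}
import numpy as np

def matched_filter(psf, sigma_inst, sigma_conf):
    # psf: centred 2-d array; instrumental noise is white with
    # per-pixel std sigma_inst; confusion is PSF-filtered white
    # noise with per-pixel std sigma_conf, so its power spectrum
    # is proportional to |P(k)|^2
    P = np.fft.fft2(np.fft.ifftshift(psf))
    w = sigma_conf**2 / np.mean(np.abs(P)**2)
    F = np.conj(P) / (sigma_inst**2 + w * np.abs(P)**2)
    filt = np.fft.fftshift(np.fft.ifft2(F).real)
    return filt / np.sum(filt * psf)  # unit response to a PSF source
\end{verbatim}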
As shown in Figure\,\ref{fig:false_conf}, using the matched filter
reduces the number of false detections by a factor 8 at the
4-$\sigma$ limit for the 250\,$\mu $m\, band. Using the flat prior as well
reduces the rate by a further factor 2. In the 350\,$\mu $m\, and 500\,$\mu $m\,
bands the matched filter provides similar reductions in the false
detection rate.
Figure\,\ref{fig:flux_conf} shows the ratio of measured to input flux
when confusion noise is included. The behaviour is very similar to
that shown in Figure\,\ref{fig:fluxes} for simple Gaussian noise. At
high signal to noise there is a small ($\sim$0.5 per\,cent)
underestimate of flux due to the peak pixelization issues discussed in
Section~\ref{sec:sub-pixel}. Near the detection limit, the
single-band detection leads to significant flux boosting. Using the
flat prior to detect sources reduces this effect. For the 500\,$\mu $m\,
band, the positional errors again mean that the fluxes are
underestimated, but the effect is slightly smaller than for Gaussian
noise because the confusion noise has less power on the small scales
that directly affect the position estimates.
\section{Summary}
We have presented a simple approach to detecting sources in data which
consists of multiple broad-band images. For high signal to noise
sources, fitting a Gaussian to estimate the source position and using
bi-cubic interpolation to estimate fluxes significantly improves the
accuracy over single-pixel estimates. Combining multiple bands in an
optimal way before detecting sources leads to a significant improvement
in sensitivity to faint sources, a reduction in the number of false
detections, and an improvement in positional accuracy. Using a matched
filter which accounts for confusion noise improves the signal-to-noise
ratio of individual flux measurements and so further improves the
source detection reliability. Together the two modifications provide
catalogues a factor two to three deeper than possible with a standard
single-band PSF filter approach.
\section*{Acknowledgements}
LD and SJM acknowledge support from the European Research Council
(ERC) in the form of Consolidator Grant {\sc CosmicDust}
(Proposal ERC-2014-CoG-647939, PI H\,L\,Gomez), and support from the ERC in the
form of the Advanced Investigator Program, COSMICISM (Proposal ERC-2012-ADG-321302, PI R.J.Ivison).
\section{Introduction}\label{intro}
Operational semantics is a de facto standard for defining the semantics of programming languages \cite{PLOTKIN}.
However, producing a programming language definition is still a hard task. It is not surprising that theoretical and software tools for supporting the modeling of languages based on operational semantics have received attention in research \cite{LangWorkbenches,Rosu2010,Redex}.
In this paper, we address an important aspect of language reuse which has not received attention so far: producing language definitions from existing ones by the application of transformation algorithms. Such algorithms may automatically add features to the language, or switch to different semantics styles. Our aim is to provide theoretical foundations and a software tool for this task.
Consider the typing rule of function application below on the left and its version with algorithmic subtyping on the right.
{\footnotesize
\begin{gather*}
{
\ninference{t-app}
{
\Gamma \vdash \; e_1 : T_1\to T_2 \\
\Gamma \vdash \; e_2 : T_1
}
{ \Gamma \vdash \; e_1\; e_2 : T_2}
}
~~ \stackrel{f(\textsc{t-app})}{\Longrightarrow} ~~
{
\ninference{t-app'}
{
\Gamma \vdash \; e_1 : T_{11}\to T_2 \\
\Gamma \vdash \; e_2 : T_{12} \\\\ T_{12} <: T_{11}
}
{ \Gamma \vdash \; e_1\; e_2 : T_2}
}
\end{gather*}
}
Intuitively, we can describe \textsc{(t-app')} as a function of \textsc{(t-app)}. Such a function must, at least, give new variable names when a variable is mentioned more than once, and must relate the new variables with subtyping according to the variance of types (covariant vs.\ contravariant).
Our question is: \emph{Can we express, easily, language transformations in a safe calculus?}
Language transformations are beneficial for a number of reasons. On the theoretical side, they isolate and make explicit the insights that underlie some programming language features or semantics styles. On the practical side, language transformations do not apply just to one language but to several languages. They can alleviate the burden on language designers, who can use them to automatically generate new language definitions using well-established algorithms rather than manually defining them, an error-prone endeavor.
In this paper, we make the following contributions.
\begin{itemize}
\item We present $\mathcal{L}\textendash\textsf{Tr}$ (pronounced ``Elter''), a formal calculus for language transformations (Section \ref{main}). We define the syntax (Section \ref{syntax}), operational semantics (Section \ref{operational}), and type system (Section \ref{typesystem}) of $\mathcal{L}\textendash\textsf{Tr}$.
\item We prove that $\mathcal{L}\textendash\textsf{Tr}$ is type sound (Section \ref{typesystem}).
\item We show the applicability of $\mathcal{L}\textendash\textsf{Tr}$ to the specification of two transformations: adding subtyping and switching from small-step to big-step semantics (Section \ref{examples}). Our examples show that $\mathcal{L}\textendash\textsf{Tr}$ is expressive and offers a rather declarative style to programmers.
\item We have implemented $\mathcal{L}\textendash\textsf{Tr}$ \cite{ltr}, and we report that we have applied our transformations to several language definitions.
\end{itemize}
Related work is discussed in Section \ref{related}, and Section \ref{conclusion} concludes the paper.
\section{A Calculus for Language Transformations}\label{main}
We focus on language definitions in the style of operational semantics.
To briefly summarize, languages are specified with a BNF grammar and a set of inference rules.
BNF grammars have \emph{grammar productions} such as $\text{\sf Types} \; T ::= B \mid \; T\to T$. We call $\text{\sf Types}$ a \emph{category name} and $T$ a \emph{grammar meta-variable}; $B$ and $T\to T$, as well as, for example, $(\lambda x.e\; v)$, are \emph{terms}.
$(\lambda x.e\; v) \longrightarrow e[v/x]$ and $\Gamma \vdash (e_1\; e_2) : T_2$ are \emph{formulae}.
An \emph{inference rule} $\inference{f_1, \ldots, f_n}{f}$ has a set of formulae above the horizontal line, which are called \emph{premises}, and a formula below the horizontal line, which is called the \emph{conclusion}.
\subsection{Syntax of $\mathcal{L}\textendash\textsf{Tr}$}\label{syntax}
Below we show the $\mathcal{L}\textendash\textsf{Tr}$ syntax for language definitions, which reflects
the operational semantics style of defining languages. Sets are accommodated with lists.
{\footnotesize
$
cname \in \textsc{CatName}, \langVar{X} \in \textsc{Meta-Var},
opname \in \textsc{OpName}, predname \in \textsc{PredName}
$
\begin{syntax}
\text{\sf Language} & \mathcal{L} & ::= & (G,R) \\
\text{\sf Grammar} & G & ::= & \{ s_1, \ldots, s_n \} \\
\text{\sf Grammar Pr.} & s & ::= & cname \; \langVar{X} ::= lt\\
\text{\sf Rule} & r & ::= & \inference{lf}{f}\\
\text{\sf Formula} & f & ::= & predname \; lt \\
\text{\sf Term} & t & ::= & \langVar{X} \mid opname\; lt \mid (\langVar{X})t \mid t[t/\langVar{X}] \\
\text{\sf List of Rules} & R & ::= & {\key{nil}} \mid \consLNC{r}{R}\\
\text{\sf List of Formula} & \mathit{lf} & ::= & {\key{nil}} \mid \consLNC{f}{\mathit{lf}}\\
\text{\sf List of Terms} & lt & ::= & {\key{nil}} \mid \consLNC{t}{lt}
\end{syntax}
}
We assume a set of category names \textsc{CatName}, a set of meta-variables \textsc{Meta-Var}, a set of constructor operator names \textsc{OpName}, and a set of predicate names \textsc{PredName}. We assume that these sets are pairwise disjoint.
\textsc{OpName} contains elements such as $\to$ and $\lambda$ (elements do not have to necessarily be (string) names).
\textsc{PredName} contains elements such as $\vdash$ and $\longrightarrow$.
To facilitate the modeling of our calculus, we assume that terms and formulae are defined in abstract syntax tree fashion. Here this means that they always have a top-level constructor applied to a list of terms.
$\mathcal{L}\textendash\textsf{Tr}$ also provides syntax to specify unary binding $(\langVar{z})t$ and capture-avoiding substitution $t[t/\langVar{z}]$. Therefore, $\mathcal{L}\textendash\textsf{Tr}$ is tailored for static scoping rather than dynamic scoping.
Lists can be built as usual with the ${\key{nil}}$ and \key{cons} operator. We sometimes use the shorthand $[o_1, \ldots o_n]$ for the corresponding series of $\key{{\key{cons}}}$ applications ended with ${\key{nil}}$.
To make an example, the typing rule for function application and the $\beta$-reduction rules are written as follows. ($app$ is the top-level operator name for function application).
{\footnotesize
\begin{gather*}
\inference{[\; \vdash \; [\langVar{\Gamma}, \langVar{e_1}, (\to [\langVar{T_1}, \langVar{T_2}])], \; \vdash \; [\langVar{\Gamma}, \langVar{e_2}, \langVar{T_1}]\; ]}
{ \vdash \; [\langVar{\Gamma}, (app \; [\langVar{e_1}, \langVar{e_2}]), \langVar{T_2}]}
\qquad
\inference{[]}{\longrightarrow \; [(app \; [(\lambda \; [(\langVar{x})\langVar{e}]), \langVar{v}]), \langVar{e}[\langVar{v}/x]] }
\end{gather*}
}
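To fix intuitions, the data structures above can be rendered in a
general-purpose language. The following is a hypothetical Python
encoding (our actual implementation compiles languages to
$\lambda$-prolog); we reuse it in later sketches.
\begin{lstlisting}[language=Python]
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Var:                 # grammar meta-variable, e.g. e1, T2
    name: str

@dataclass
class Term:                # opname applied to a list of terms
    op: str
    args: List["Node"]

Node = Union[Var, Term]

@dataclass
class Formula:             # predname applied to a list of terms
    pred: str
    args: List[Node]

@dataclass
class Rule:
    premises: List[Formula]
    conclusion: Formula

# (t-app) as an abstract syntax tree
t_app = Rule(
    premises=[
        Formula("|-", [Var("Gamma"), Var("e1"),
                       Term("->", [Var("T1"), Var("T2")])]),
        Formula("|-", [Var("Gamma"), Var("e2"), Var("T1")])],
    conclusion=Formula("|-", [Var("Gamma"),
                              Term("app", [Var("e1"), Var("e2")]),
                              Var("T2")]))
\end{lstlisting}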
Below we show the rest of the syntax of $\mathcal{L}\textendash\textsf{Tr}$. \\
{\footnotesize
$
x \in \textsc{Var}
, str \in \textsc{String}
, \{\mathit{self}, \mathit{premises}, \mathit{conclusion}\} \subseteq \textsc{Var}
$
\begin{syntax}
\text{\sf Expression} & e & ::= & x \mid cname \mid str \mid \LNC{t} \mid \LNC{f} \mid \LNC{r}\\
&&& \mid \key{nil} \mid \consLNC{e}{e}\mid {\key{head}} \; e \mid {\key{tail}} \; e \mid e @ e \\
&&& \mid \createMapLNC{e}{e} \mid e(e)\mid \mapKeysLNC{e}\\
&&& \mid \justLNC{e} \mid \key{nothing} \mid \getLNC{e}\\
&&& \mid cname \; \langVar{X} ::= e \; \mid cname \; \langVar{X} ::= \ldots \; e \\
&&&\mid \key{getRules}\mid \setRulesLNC{e}\\% \mid str\; x - operator \\
&&& \mid \select{e}{p}{e} \mid \select{e(\key{keep})}{p}{e}
\mid \uniquefy{e}{e}{str}{x}{x}{e} \\%\mid \uniquefy{str}{str}{x}{e}
&&& \mid \ifLang{b}{e}{e} \mid e \; ; \; e \mid \ruleComposition{e}{e} \mid \key{skip} \\
&&& \mid \key{newVar} \mid \tickedMMLNC{e}{e}\mid \foldLNC{predname}{e} \\
&&& \mid \key{error}\\
\text{\sf Boolean Expr.} & b & ::= &
e == e \mid \isEmptyLNC{e} \mid \isIn{e}{e}\mid \isNothing{e}
\mid \andLNC{b}{b} \mid \orLNC{b}{b} \mid \notLNC{b} \\
\text{\sf $\mathcal{L}\textendash\textsf{Tr}$ Rule} & \LNC{r} & ::= & \inference{e}{e}\\
\text{\sf $\mathcal{L}\textendash\textsf{Tr}$ Formula} & \LNC{f} & ::= & predname \; e \mid x \; e \\
\text{\sf $\mathcal{L}\textendash\textsf{Tr}$ Term} & \LNC{t} & ::= & \langVar{X} \mid opname\; e \mid x \; e \mid (\langVar{X})e \mid e[e/\langVar{X}] \\
\text{\sf Pattern} & p & ::= & x:T\mid predname \; p \mid opname \; p \mid x \; p \mid \key{nil} \mid \consLNC{p}{p}\\
\text{\sf Value} & v & ::= & {t} \mid {f} \mid {r} \mid cname\mid str \\
&&& \mid \key{nil} \mid \consLNC{v}{v}
\mid \createMapLNC{v}{v}\mid \justLNC{v} \mid \key{nothing} \mid \key{skip}
\end{syntax}
}
Programmers write expressions to specify transformations.
At run-time, an expression will be executed with a language definition.
Evaluating an expression may modify the current language definition.
\emph{Design Principles:}
We strive to offer well-crafted operations that map well with the language manipulations that are frequent in adding features to languages or switching semantics styles.
There are three features that we can point out which exemplify our approach the most: 1) The ability to program parts of rules, premises and grammars, 2) selectors $\select{e}{p}{e}$, and 3) the \key{uniquefy} operation.
Below, we describe the syntax for transformations, placing some emphasis on motivating these three operations.
\\ \indent \emph{Basic Data Types:}
$\mathcal{L}\textendash\textsf{Tr}$ has strings and has lists with typical operators for extracting their head and tail, as well as for concatenating them ($@$).
$\mathcal{L}\textendash\textsf{Tr}$ also has maps (key-value). In $\createMapLNC{e_1}{e_2}$, $e_1$ and $e_2$ are lists. The first element of $e_1$ is the key for the first element of $e_2$, and so on for the rest of elements. Such a representation fits better our language transformations examples, as we shall see in Section \ref{examples}. Operation $e_1(e_2)$ queries a map, where $e_1$ is a map and $e_2$ is a key, and $\mapKeysLNC{e}$ returns the list of keys of the map $e$.
Maps are convenient in $\mathcal{L}\textendash\textsf{Tr}$ to specify information that is not expressible in the language definition. For example, we can use maps to store information about whether some type argument is covariant or contravariant, or to store information about the input-output mode of the arguments of relations. Section \ref{examples} shows that we use maps in this way extensively.
$\mathcal{L}\textendash\textsf{Tr}$ also has options (\key{just}, \key{nothing}, and \key{get}). We include options because they are frequently used in combination with the selector operator described below.
Programmers can refer to grammar categories (\emph{cname}) in positions where a list is expected. When \emph{cname} is used the corresponding list of grammar items is retrieved.
\\ \indent \emph{Grammar Instructions}:
$cname \; \langVar{X} ::= e$ is essentially a grammar production. With this instruction, the current grammar is augmented with this production.
$cname \; \langVar{X} ::= \ldots \; e$ (notice the dots) adds the terms in $e$ to an existing production.
$\key{getRules}$ and $\setRulesLNC{e}$ retrieve and set the current list of rules, respectively.
\\ \indent \emph{Selectors}:
$\select{e_1}{p}{e_2}$ is the selector operator.
This operation selects one by one the elements of the list $e_1$ that satisfy the pattern $p$ and executes the body $e_2$ for each of them.
This operation returns a list that collects the result of each iteration.
Selectors are useful for selecting elements of a language with great precision, and applying manipulations to them. To make an example, suppose that the variable \emph{prems} contains the premises of a rule and that we wanted to invert the direction of all subtyping premises in it.
The operation $\select{prems}{T_1 <: T_2}{\key{just} \; T_2 <: T_1}$ does just that.
Notice that the body of a selector is an option. This is because it is common for some iteration to return no values ($\key{nothing}$). The examples in Section \ref{examples} show this aspect.
Since options are commonly used in the context of selector iterations, we have designed our selector operation to automatically handle them. That is, $\key{nothing}$s are automatically removed, and the selector above returns the list of new subtyping premises rather than a list of options.
The selector $\select{e(\key{keep})}{p}{e}$ works like an ordinary selector except that it also returns the elements that failed the pattern-matching.
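A hypothetical reading of the selector in the Python encoding above
(the pattern is modeled as a function returning the matched bindings,
or \texttt{None} on failure):
\begin{lstlisting}[language=Python]
from typing import Any, Callable, List, Optional

def select(items: List[Any],
           match: Callable[[Any], Optional[dict]],
           body: Callable[[dict, Any], Optional[Any]],
           keep: bool = False) -> List[Any]:
    out = []
    for x in items:
        theta = match(x)
        if theta is None:
            if keep:
                out.append(x)   # the (keep) variant retains misses
            continue
        r = body(theta, x)      # x plays the role of `self`
        if r is not None:       # nothings dropped, justs unwrapped
            out.append(r)
    return out

# invert every subtyping premise T1 <: T2 among prems:
# select(prems,
#        lambda f: {"T1": f.args[0], "T2": f.args[1]}
#                  if f.pred == "<:" else None,
#        lambda th, _: Formula("<:", [th["T2"], th["T1"]]))
\end{lstlisting}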
\\ \indent\emph{Uniquefy}:
When transforming languages it is often necessary to assign distinct variables. The example of algorithmic subtyping in the introduction is archetypal.
$\mathcal{L}\textendash\textsf{Tr}$ accommodates this operation as primitive with \key{uniquefy}. \\
$\uniquefy{e_1}{e_2}{str}{x}{y}{e_3}$ takes as input a list of formulae $e_1$, a map $e_2$, and a string $str$ (we shall discuss $x$, $y$, and $e_3$ shortly). This operation modifies the formulae in $e_1$ to use different variable names when a variable is mentioned more than once. However, not every variable is subject to the replacement. Only the variables that appear in some specific positions are targeted.
The map $e_2$ and the string $str$ contain the information to identify these positions.
$e_2$ maps operator names and predicate names to a list that contains a label (as a string) for each of their arguments.
For example, the map $m = \{\vdash \; \mapsto [``{in}", ``{in}", ``{out}"]\}$ says that $\Gamma$ and $e$ are inputs in a formula $\Gamma \vdash e:T$, and that $T$ is the output.
Similarly, the map $\{\to\; \mapsto [``{contravariant}", ``{covariant}"]\}$ says that $T_1$ is contravariant and $T_2$ is covariant in $T_1 \to T_2$.
The string $str$ specifies a label. $\mathcal{L}\textendash\textsf{Tr}$ inspects the formulae in $e_1$ and their terms. Arguments that correspond to the label according to the map
then receive a new variable.
To make an example, if $\mathit{lf}$ is the list of premises of \textsc{(t-app)} and $m$ is defined as above (input-output modes), the operation $\uniquefy{\mathit{lf}}{m}{``{out}"}{x}{y}{e_3}$ creates the premises of \textsc{(t-app')} shown in the introduction.
Furthermore, the computation continues with the expression $e_3$ in which $x$ is bound to these premises and $y$ is bound to a map that summarizes the changes made by \key{uniquefy}.
This latter map associates every variable $X$ to the list of new variables that \key{uniquefy} has used to replace $X$. For example, since \key{uniquefy} created the premises of \textsc{(t-app')} by replacing $T_1$ in two different positions with $T_{11}$ and $T_{12}$, the map $\{T_1\mapsto [T_{11}, T_{12}]\}$ is passed to $e_3$ as $y$.
Section \ref{examples} will show two examples that make use of \key{uniquefy}.
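A simplified sketch of \key{uniquefy} in the same hypothetical Python
encoding: it renames every occurrence of a meta-variable that appears
more than once in argument positions carrying the given label,
descending into nested terms (binders and other constructs are ignored
here).
\begin{lstlisting}[language=Python]
from collections import defaultdict
from itertools import count

def uniquefy(formulae, labels, target):
    occ = defaultdict(int)      # occurrences in labelled positions

    def scan(n):
        if isinstance(n, Var):
            occ[n.name] += 1
        else:
            for a in n.args:
                scan(a)

    for f in formulae:
        for a, lab in zip(f.args, labels[f.pred]):
            if lab == target:
                scan(a)

    fresh, uniq = defaultdict(count), defaultdict(list)

    def rename(n):
        if isinstance(n, Var):
            if occ[n.name] < 2:
                return n        # unique variables are kept
            v = Var(n.name + str(next(fresh[n.name]) + 1))
            uniq[n.name].append(v.name)
            return v
        return Term(n.op, [rename(a) for a in n.args])

    new = [Formula(f.pred,
                   [rename(a) if lab == target else a
                    for a, lab in zip(f.args, labels[f.pred])])
           for f in formulae]
    return new, dict(uniq)

# uniquefy(t_app.premises, {"|-": ["in", "in", "out"]}, "out")
# yields the premises of (t-app') and {"T1": ["T11", "T12"]}
\end{lstlisting}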
\\\indent \emph{Control Flow}:
$\mathcal{L}\textendash\textsf{Tr}$ includes the if-then-else statement with typical guards.
$\mathcal{L}\textendash\textsf{Tr}$ also has the sequence operation $;$ (and $\key{skip}$) to execute language transformations one after another. $e_1 ;_{\text{r}} e_2$, instead, executes sequences of transformations on rules. After $e_1$ evaluates to a rule, $e_2$ makes use of that rule as the subject of its transformations.
\\\indent\emph{Programming Rules, Premises, and Terms}:
In $\mathcal{L}\textendash\textsf{Tr}$ a programmer can write $\mathcal{L}\textendash\textsf{Tr}$ terms ($\hat{t}$), $\mathcal{L}\textendash\textsf{Tr}$ formulae ($\hat{f}$), and $\mathcal{L}\textendash\textsf{Tr}$ rules ($\hat{r}$) in expressions.
These differ from the terms, formulae and rules of language definitions in that they can contain arbitrary expressions, such as if-then-else statements, at any position.
This is a useful feature as it provides a declarative way to create rules, premises, or terms. To make an example with rule creation, we can write
\[\inference{\select{prems}{T_1 <: T_2}{\key{just} \; T_2 <: T_1}}{f}\]
where \emph{prems} is the list of premises from above, and $f$ is a formula. As we can see, using expressions above the horizontal line is a convenient way to compute the premises of a rule.
\\\indent\emph{Other Operations}:
The operation $\foldLNC{predname}{e}$ creates a list of formulae that interleaves $predname$ to any two subsequent elements of the list $e$.
To make an example, the operation $\foldLNC{=}{[T_1, T_2, T_3, T_4]}$ generates the list of formulae $[T_1 = T_2, T_2 = T_3, T_3 = T_4]$.
$\varsLNC{e}$ returns the list of the meta-variables in $e$. $\key{newVar}$ returns a meta-variable that has not been previously used. The tick operator $\tickedMMLNC{e}{}$ gives a prime $'$ to the meta-variables of $e$ ($\langVar{X}$ becomes $\langVar{X'}$).
$\key{vars}$ and the tick operator also work on lists of terms.
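In the hypothetical encoding above, \key{fold} is a one-liner:
\begin{lstlisting}[language=Python]
def fold(pred, terms):
    # fold(=, [T1, T2, T3, T4]) -> [T1 = T2, T2 = T3, T3 = T4]
    return [Formula(pred, [a, b]) for a, b in zip(terms, terms[1:])]
\end{lstlisting}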
\\\indent\emph{Variables and Substitution:}
Some variables have a special treatment in $\mathcal{L}\textendash\textsf{Tr}$.
We can refer to the value that a selector iterates over with the variable $\mathit{self}$.
If we are in a context that manipulates a rule, we can also refer to the premises and conclusion with variables $\mathit{premises}$ and $\mathit{conclusion}$.
We use the notation $e[v/x]$ to denote the capture-avoiding substitution.
$\theta$ ranges over finite sequences of substitutions denoted with $[v_1/x_1,\ldots, v_n/x_n]$. $e[v_1/x_1, v_2/x_2, \ldots, v_n/x_n]$ means $((e[v_1/x_1])[v_2/x_2])\ldots[v_n/x_n]$.
We omit the definition of substitution because it is standard, for the most part. The only aspect that differs from standard substitution is that we do not substitute $\mathit{self}$, $\mathit{premises}$ and $\mathit{conclusion}$ in those contexts that will be set at run-time ($;_{\textsf{r}}$, and selector body). For example, $(\ruleComposition{e_1}{e_2})[v/\mathcal{X}] \equiv \ruleComposition{(e_1[v/\mathcal{X}])}{e_2}$, where $\mathcal{X}\in \{\mathit{self},\mathit{premises}, \mathit{conclusion}\}$.
\subsection{Operational Semantics of $\mathcal{L}\textendash\textsf{Tr}$}\label{operational}
\begin{figure}[tbp]
\small
\textsf{Dynamic Semantics} \hfill \fbox{$V ; \mathcal{L} ; e\longrightarrow V ; \mathcal{L} ; e$}
\begin{gather*}
\inference{\{cname\; \langVar{X} ::= v\} \in G}{V ; (G,R) ; cname \longrightarrow\atomic V ; (G,R) ; v} \label{beta}\tagsc{r-cname-ok}\\[1ex]
\inference{\{cname\; \langVar{X} ::= v\} \not\in G}{V ; (G,R) ; cname \longrightarrow\atomic V ; (G,R) ; \key{error}} \label{beta}\tagsc{r-cname-fail}\\[1ex]
V ; (G,R) ; \key{getRules} \longrightarrow\atomic V ; (G,R) ; R \label{beta}\tagsc{r-getRules}\\[1ex]
V ; (G,R) ; \setRulesLNC{v} \longrightarrow\atomic V ; (G,v) ; \key{skip} \label{beta}\tagsc{r-setRules}\\[1ex]
\inference{G' = (G \backslash cname) \cup \{cname\; \langVar{X} ::= v\}}{V ; (G,R) ; (cname\; \langVar{X} ::= v) \longrightarrow\atomic \emptyset ; (G',R) ; \key{skip} } \label{beta}\tagsc{r-new-syntax}\\[1ex]
\inference{\{cname\; \langVar{X} ::= v'\} \in G}{V ; (G,R) ; (cname \; \langVar{X} ::= \ldots \; v) \longrightarrow\atomic \emptyset ; (G,R) ; cname\; \langVar{X} ::= v' @ v } \label{beta}\tagsc{r-add-syntax-ok}\\[1ex]
\inference{\{cname\; \langVar{X} ::= v'\} \not\in G }{V ; (G,R) ; (cname \; \langVar{X} ::= \ldots \; v) \longrightarrow\atomic V ; (G,R) ; \key{error}} \label{beta}\tagsc{r-add-syntax-fail}\\[1ex]
V ; \mathcal{L} ; (\key{skip} ; e) \longrightarrow\atomic V ; \mathcal{L} ; e \label{beta}\tagsc{r-seq}\\[1ex]
V ; \mathcal{L} ; \ruleComposition{v}{e} \longrightarrow\atomic V ; \mathcal{L} ; e\subsForRule{v} \label{r-rule-comp}\tagsc{r-rule-comp}\\[1ex]
V ; \mathcal{L} ; \select{\key{nil}}{p}{e} \longrightarrow\atomic V ; \mathcal{L} ; \key{nil} \label{beta}\tagsc{r-selector-nil}\\[2ex]
\inference{\matchLNC{v_1}{p} = \theta \qquad
\theta' = \mbox{$\begin{cases}
\subsForRule{r}
& \mbox{if } v_1=r\\
\{\mathit{self} \mapsto v_1\} & \mbox{otherwise}
\end{cases}
$}}
{V ; \mathcal{L} ; \select{(\consLNC{v_1}{v_2})}{p}{e} \longrightarrow\atomic V ; \mathcal{L} ;
(\mathit{cons}^{*}\; {e\theta\theta'}\; {(\select{v_2}{p}{e})})}
\label{beta}\tagsc{r-selector-cons-ok}\\[1ex]
\inference{\matchLNC{v_1}{p} \not= \theta}
{V ; \mathcal{L} ; \select{(\consLNC{v_1}{v_2})}{p}{e} \longrightarrow\atomic V ; \mathcal{L} ;
(\select{v_2}{p}{e})}
\label{beta}\tagsc{r-selector-cons-fail}\\[1ex]
\inference{\langVar{X'} \not\in V \cup \varsSEM{\mathcal{L}} \cup \rangeSEM{\mathit{tick}}}{V ; \mathcal{L} ; \key{newVar}\; \longrightarrow\atomic V \cup \{\langVar{X'}\} ; \mathcal{L} ; \langVar{X'}} \label{r-newar}\tagsc{r-newvar}\\[1ex]
\inference{(\mathit{lf}', v_2) = \uniquefySEM\subListFormula{\mathit{lf}}{v_1}{str}{\createMapLNC{[]}{[]}}}{V ; \mathcal{L} ; \uniquefy{\mathit{lf}}{v_1}{str}{x}{y}{e} \longrightarrow\atomic V ; \mathcal{L} ; e[\mathit{lf}'/x,v_2/y] } \label{r-uniquefy-ok}\tagsc{r-uniquefy-ok}\\[1ex]
\inference{\uniquefySEM\subListFormula{\mathit{lf}}{v_1}{str}{\createMapLNC{[]}{[]}} = \mathit{fail}}{V ; \mathcal{L} ; \uniquefy{\mathit{lf}}{v_1}{str}{x}{y}{e} \longrightarrow\atomic V ; \mathcal{L} ; \key{error} } \label{r-uniquefy-fail}\tagsc{r-uniquefy-fail}\\[1ex]
\text{where } \subsForRule{r} \equiv [r/\mathit{self},v_1/ \mathit{premises} , v_2/\mathit{conclusion} ] \qquad \text{if } r=\inference{v_1}{v_2}
\end{gather*}
\caption{Reduction Semantics of $\mathcal{L}\textendash\textsf{Tr}$}
\label{fig:dynamicsemantics}
\end{figure}
In this section we show a small-step operational semantics for $\mathcal{L}\textendash\textsf{Tr}$.
A configuration is denoted with $V ; \mathcal{L} ; e$, where $e$ is an expression, $\mathcal{L}$ is the language subject of the transformation, and $V$ is the set of meta-variables that have been generated by $\key{newVar}$. Calls to $\key{newVar}$ make sure not to produce name clashes.
The main reduction relation is $V ; \mathcal{L} ; e\longrightarrow V' ; \mathcal{L}' ; e'$, defined as follows.
Evaluation contexts $E$ are straightforward and can be found in Appendix \ref{evaluationcontexts}.
{\footnotesize
\begin{gather*}
\inference
{V ; \mathcal{L} ; e\longrightarrow\atomic V' ; \mathcal{L}' ; e' \\ \vdash \mathcal{L}' }
{V ; \mathcal{L} ; E[e] \longrightarrow V' ; \mathcal{L}' ; E[e']}
\qquad
\inference
{V ; \mathcal{L} ; e\longrightarrow\atomic V' ; \mathcal{L}' ; e' \\ \not\vdash \mathcal{L}' }
{V ; \mathcal{L} ; E[e] \longrightarrow V ; \mathcal{L} ; \key{error}}
\\[1ex]
\inference
{} {V ; \mathcal{L} ; E[\key{error}] \longrightarrow V ; \mathcal{L} ; \key{error}}
\end{gather*}
}
This relation relies on a step $V ; \mathcal{L} ; e\longrightarrow\atomic V' ; \mathcal{L}' ; e'$, which concretely performs the step.
Since a transformation may insert ill-formed elements such as $\vdash T \; T$ or $\to \; e \; e$ in the language, we also rely on a notion of type checking for language definitions $\vdash \mathcal{L}'$ decided by the language designer. For example, our implementation of $\mathcal{L}\textendash\textsf{Tr}$ compiles languages to $\lambda$-prolog and detects ill-formed languages at each step, but the logic of Coq, Agda, Isabelle could be used as well.
Our type soundness theorem works regardless of the definition of $\vdash \mathcal{L}'$.
Fig. \ref{fig:dynamicsemantics} shows the reduction relation $V ; \mathcal{L} ; e\longrightarrow\atomic V' ; \mathcal{L}' ; e'$.
We show the most relevant rules. The rest of the rules can be found in Appendix \ref{app:operational}.
\\
\indent
\textsc{(r-cname-ok)} and \textsc{(r-cname-fail)} handle the encounter of a category name. We retrieve the corresponding list of terms from the grammar or throw an error if the production does not exist.
\\
\indent
\textsc{(r-getRules)} retrieves the list of rules of the current language, and \textsc{(r-setRules)} updates this list.
\\
\indent
\textsc{(r-new-syntax)} replaces the grammar with a new one that contains the new production. The meta-operation $G \backslash cname$ in that rule removes the production with category name $cname$ from $G$ (definition is straightforward and omitted). The position of $cname$ in $(cname\; \langVar{X} ::= v)$ is not an evaluation context, therefore \textsc{(r-cname-ok)} will not replace that name.
\textsc{(r-add-syntax-ok)} takes a step to the instruction for adding \emph{new} syntax. The production to be added includes both old and new grammar terms.
\textsc{(r-add-syntax-fail)} throws an error when the category name does not exist in the grammar, or the meta-variable does not match.
\\
\indent
\textsc{(r-seq)} applies when the first expression has evaluated to $\key{skip}$, and starts the evaluation of the second expression (the evaluation context $E ; e$ evaluates the first expression).
\\
\indent
\textsc{(r-rule-comp)} applies when the first expression has evaluated to a rule, and starts the evaluation of the second expression where $\subsForRule{v}$ sets this rule as the current rule.
\\
\indent
Rules \textsc{(r-selector-*)} define the behavior of a selector.
\textsc{(r-selector-cons-ok)} and \textsc{(r-selector-cons-fail)} make use of the meta-operation $\matchLNC{v_1}{p} = \theta$. If this operation succeeds it returns the substitutions $\theta$ with the associations computed during pattern-matching. The definition of $\mathit{match}$ is standard and is omitted.
The body is evaluated with these substitutions and with $\mathit{self}$ instantiated with the element selected. If the element selected is a rule, then the body is instantiated with $\subsForRule{v}$ to refer to that rule as the current rule.
The body of the selector always returns an option type. However, $\mathit{cons}^{*}$ is defined as: $\mathit{cons}^{*} \; {e_1}\; {e_2} \equiv \consLNCFilter{e_1}{e_2}$.
Therefore, $\key{nothing}$s are discarded, and values wrapped in \key{just}s are unwrapped.
\\
\indent
\textsc{(r-newvar)} returns a new meta-variable and augments $V$ with it. Meta-variables are chosen among those that are not in the language, have not previously been generated by $\key{newVar}$, and are not in the range of $\mathit{tick}$. This meta-operation is used by the tick operator to give a prime to meta-variables.
\textsc{r-newvar} avoids clashes with these variables, too.
\\
\indent
\textsc{(r-uniquefy-ok)} and \textsc{(r-uniquefy-fail)} define the semantics for \key{uniquefy}. They rely on the meta-operation $\uniquefySEM\subRule{\mathit{lf}}{v}{str}{\createMapLNC{[]}{[]}}$, which takes the list of formulae $\mathit{lf}$, the map $v$, the string $str$, and an empty map to start computing the result map.
The definition of $\mathit{uniquefy}\subRule$ is mostly a recursive traversal of lists of formulae and terms, which we omit. It can be found in Appendix \ref{uniquefy}.
This function can succeed and return a pair $(\mathit{lf}',v_2)$ where $\mathit{lf}'$ is the modified list of formulae and $v_2$ maps meta-variables to the new meta-variables that have replaced them. $\mathit{uniquefy}\subRule$ can also fail. This may happen when, for example, a map such as $\{\to \; \mapsto [``contra"]\}$ is passed when $\to$ requires two arguments.
\subsection{Type System of $\mathcal{L}\textendash\textsf{Tr}$}\label{typesystem}
\begin{figure}[tbp]
\textsf{Type System (Configurations)} \hfill \fbox{$\Gamma \vdash \; V ; \mathcal{L} ; e $}
\small
\begin{gather*}
\inference{ V \cap \varsSEM{\mathcal{L}} = \emptyset & \vdash \mathcal{L} & \emptyset \vdash e : \key{Language}}
{
\vdash \; V ; \mathcal{L} ; e
}
\end{gather*}
\textsf{Type System (Expressions)} \hfill \fbox{$\Gamma \vdash \; e : T $}
\begin{gather*}
\ninference{t-var}
{}{
\Gamma, x:T\vdash \; x: T}
\quad
\ninference{t-opname}
{\Gamma \vdash \; e : {\key{List}}\;\key{Term} }
{ \Gamma \vdash \; (opname\; e) : \key{Term}}
\quad
\ninference{t-opname-var}
{ \Gamma \vdash \; e : {\key{List}}\;\key{Term} }
{ \Gamma, x : \key{OpName} \vdash \; (x\; e) : \key{Term}}
\\[2ex]
\ninference{t-meta-var}
{}
{ \Gamma \vdash \; \langVar{X}: \key{Term}}
\qquad
\ninference{t-abs}
{\Gamma \vdash \; e : \key{Term} }
{ \Gamma \vdash \; (\langVar{z})e: \key{Term}}
\qquad
\ninference{t-subs}
{\Gamma \vdash \; e_1 : \key{Term} \quad \Gamma \vdash \; e_2 : \key{Term} }
{ \Gamma \vdash \; e_1[e_2/\langVar{z}]: \key{Term}}
\\[2ex]
\ninference{t-predname}
{ \Gamma \vdash \; e : {\key{List}}\;\key{Term} }
{ \Gamma \vdash \; ( predname \; e) : \key{Formula}}
\qquad
\ninference{t-predname-var}
{\Gamma \vdash \; e : {\key{List}}\;\key{Term} }
{ \Gamma,x: \key{PredName} \vdash \; ( x \; e) : \key{Formula}}
\\[2ex]
\ninference{t-rule}
{\Gamma \vdash \; e_1 : {\key{List}}\;\key{Formula} \\\\ \Gamma \vdash \; e_2 : \key{Formula}}
{ \Gamma \vdash \; \inference{e_1}{e_2} : \key{Rule}}
\qquad
\ninference{t-seq}
{\Gamma \vdash \; e_1 : \key{Language} \\\\ \Gamma \vdash \; e_2 : \key{Language} }
{ \Gamma \vdash \; e_1 ; e_2 : \key{Language}}
\qquad
\ninference{t-rule-comp}
{ \Gamma \vdash \; e_1 : \key{Rule} \\\\ \Gamma,\GammaForRule{r} \vdash \; e_2 : \key{Rule} }
{ \Gamma \vdash \; \ruleComposition{e_1}{ e_2} : \key{Rule}}
\\[1ex]
\ninference{t-selector}
{\Gamma \vdash \; e_1 : {\key{List}} \; T \quad \Gamma \vdash \; p : T \Rightarrow \Gamma' \\
\Gamma'' = \mbox{$
\begin{cases}
\GammaForRule{r} & \mbox{if } T = \key{Rule} \\
\mathit{self} : T & \mbox{otherwise}
\end{cases}
$}\\
\Gamma, \Gamma', \Gamma'' \vdash \; e_2 : \MaybeType{T'}}
{ \Gamma \vdash \; \select{e_1}{p}{e_2} : {\key{List}} \; T'}
\\[2ex]
\ninference{t-syntax-new \text{and} t-syntax-add}
{\Gamma \vdash \; e : {\key{List}} \; \key{Term}}
{\mbox{$\begin{array}{c}
\Gamma \vdash \; cname \; \langVar{X} ::= e : \key{Language}\\
\Gamma \vdash \; cname \; \langVar{X}::= \ldots \; e: \key{Language}
\end{array}$
}}
\qquad
\ninference{t-cname}
{}
{\Gamma \vdash cname : {\key{List}} \; \key{Term}}
\\[2ex]
\ninference{t-getRules}
{}
{\Gamma \vdash \key{getRules} : {\key{List}} \; \key{Rule}}
\qquad
\ninference{t-setRules}
{\Gamma \vdash \; e : {\key{List}} \; \key{Rule}}
{\Gamma \vdash \; \setRulesLNC{e} : \key{Language}}
\\[2ex]
\ninference{t-uniquefy}
{\Gamma \vdash \; e_1 : {\key{List}}\;\key{Formula} \\\\
\Gamma \vdash \; e_2 : \MapType{T'}{({\key{List}} \; \key{String})} \quad T' = \key{OpName} \text{ or } T' = \key{PredName} \\\\
\Gamma, x: {\key{List}}\;\key{Formula}, y: \MapType{\key{Term}}{({\key{List}}\; \key{Term})} \vdash \; e_3 : T
}
{ \Gamma \vdash \; \uniquefy{e_1}{e_2}{str}{x}{y}{e_3} : T}
\qquad
\begin{array}{c}
\ninference{t-skip}
{}
{\Gamma \vdash \; \key{skip}: \key{Language}}
\\[2ex]
\ninference{t-newvar}
{}
{ \Gamma \vdash \; \key{newVar} : \key{Term}}
\end{array}\\[2ex]
\text{where } \GammaForRule{r} \equiv \mathit{self} : \key{Rule}, \mathit{premises} : {\key{List}} \; \key{Formula}, \mathit{conclusion}: \key{Formula}
\end{gather*}
\caption{Type System of $\mathcal{L}\textendash\textsf{Tr}$}
\label{fig:typesystem}
\end{figure}
In this section we define a type system for $\mathcal{L}\textendash\textsf{Tr}$.
Types are defined as follows:
{\small
\begin{syntax}
\text{\sf Type} & T & ::= & \key{Language} \mid \key{Rule} \mid \key{Formula} \mid \key{Term} \\
&&& \mid {\key{List}}\; T \mid \MapType{T}{T} \mid \MaybeType{T}\mid \key{String} \mid \key{OpName} \mid \key{PredName}\\
\text{\sf Type Env} & \Gamma & ::= & \emptyset \mid \Gamma, x:T
\end{syntax}
}
\noindent We have a typical type environment that maps variables to types.
Fig. \ref{fig:typesystem} shows the type system.
The typing judgement $\vdash V ; \mathcal{L} ; e$ means that the configuration $V ; \mathcal{L} ; e$ is well-typed.
This judgment checks that the variables of $V$ and those in $\mathcal{L}$ are disjoint.
This is an invariant that ensures that $\key{newVar}$ always produces fresh names.
We also check that $\mathcal{L}$ is well-typed and that $e$ is of type $\key{Language}$.
We type check expressions with the typing judgement $\Gamma \vdash \; e : T$, which means that $e$ has type $T$ under the assignments in $\Gamma$.
Most typing rules are straightforward. We omit rules about lists and maps because they are standard. We comment only on the rules that are more involved. \textsc{(t-selector)} type checks a selector operation. We use $\Gamma \vdash \; p : T \Rightarrow \Gamma'$ to type check the pattern $p$ and return the type environment for the variables of the pattern. Its definition is standard and is omitted.
When we type check the body $e_2$ we then include $\Gamma'$. If the elements of the list are rules then we also include $\GammaForRule{r}$ to give a type to the variables for referring to the current rule. Otherwise, we assign $\mathit{self}$ the type of the element of the list. Selectors with \key{keep} are analogous and omitted.
\textsc{(t-rule-comp)} type checks a rule composition. In doing so, we type check the second expression with $\GammaForRule{r}$.
\textsc{(t-uniquefy)} type checks the \key{uniquefy} operation. As we rename variables depending on the position they hold in terms and formulae, the keys of the map are of type $\key{OpName}$ or $\key{PredName}$, and values are strings. We type check $e_3$ giving $x$ the type of list of formulae, and $y$ the type of a map from meta-variables to list of meta-variables.
We have proved that $\mathcal{L}\textendash\textsf{Tr}$ is type sound.
\begin{theorem}[Type Soundness]
For all $V$, $\mathcal{L}$, $e$, if $\vdash V ; \mathcal{L} ; e$ and $V ; \mathcal{L} ; e \longrightarrow^{*} V' ; \mathcal{L}' ; e'$, then either i) $e' = \key{skip}$, ii) $e' = \key{error}$, or iii) $V' ; \mathcal{L}' ; e' \longrightarrow V'' ; \mathcal{L}'' ; e''$, for some $V''$, $\mathcal{L}''$, and $e''$.
\end{theorem}
The proof is by induction on the derivation $\vdash V ; \mathcal{L} ; e$, and follows the standard approach of Wright and Felleisen \cite{WrightFelleisen94} through a progress theorem and a subject reduction theorem.
The proof can be found in Appendix \ref{proof}.
\section{Examples}\label{examples}
We show the applicability of $\mathcal{L}\textendash\textsf{Tr}$ with two examples of language transformations: adding subtyping \cite{tapl} and switching to big-step semantics \cite{Kahn87}.
In the code we use let-binding, pattern-matching, and an overlap operation that returns true if two terms have variables in common. These operations can be easily defined in $\mathcal{L}\textendash\textsf{Tr}$, and we show them in Appendix \ref{let}.
The code below defines the transformation for adding subtyping. We assume that two maps are already defined, $mode = \{\vdash\;\mapsto[``inp",\;``inp",\;``out"]\}$ and $variance = \{\to\; \;\mapsto[``contra",\; ``cova"]\}$.
\lstset
{
numbers=left,
stepnumber=1,
}
\begin{lstlisting}[mathescape=true]
$\key{setRules}$
$\key{getRules}(\key{keep})[(\vdash [\Gamma, e, T])]:$
$\key{uniquefy}(premises, mode, ``out") => (uniq, newpremises):$
$\underline{newpremises \; @ \; \key{concat}(\key{mapKeys}(uniq)[T_f]: \key{fold} <: uniq(T_f))}$
$conclusion$
$;_{\textsf{r}}$
$\key{concat}(premises(\key{keep})[T_1 <: T_2]:$
$premises[(\vdash [\Gamma, e_v, (c_v \; Ts_v)])]:$
$\key{let}\; vmap = \key{map}(Ts_v,\; variance(c_v)) \;\key{in}\;$
$\key{if}\; vmap(T_1) = ``contra" \;\key{then}\; T_2 <: T_1$
$\underline{\key{else}\;\key{if}\; vmap(T_1) = ``inv" \;\key{and}\; vmap(T_2) = ``inv" \;\key{then}\; T_1 = T_2 \;\key{else}\; T_1 <: T_2)}$ $conclusion$
$;_{\textsf{r}}$
$\key{let}\; \mathit{outputVars} = \key{match}\; conclusion \;\key{with}\; (\vdash [\Gamma, e_c, T_c]) \Rightarrow \varsLNC{T_c} \;\key{in}\;$
$\key{let}\; joins = \key{mapKeys}(uniq)[T_i]:$
$\key{if}\; T_i \;\key{in}\; \mathit{outputVars} \;\key{then}\; (\sqcup\; uniq(T_i)\; =\; T_i) \;\key{else}\; \key{nothing}$
$\key{in}\;\underline{premises \; @ \; joins}$
$conclusion$
\end{lstlisting}
Line 1 updates the rules of the language with the rules computed by the code in lines 2-17.
Line 2 selects all typing rules, and each of them will be the subject of the transformations in lines 3-17.
Line 3 calls $\key{uniquefy}$ on the premises of the selected rule. We instruct $\key{uniquefy}$ to give new variables to the outputs of the typing relation $\vdash$, if they are used more than once in that position.
As previously described, $\key{uniquefy}$ returns the list of new premises, which we bind to $\mathit{newpremises}$, and the map that assigns variables to the list of the new variables generated to replace them, which we bind to $uniq$.
The body of $\key{uniquefy}$ goes from line 4 to 17.
Lines 4 and 5 build a new rule with the conclusion of the selected rule (line 5). It does so using the special variable name \emph{conclusion}. The premises of this rule include the premises just generated by $\key{uniquefy}$. Furthermore, we add premises computed as follows. With $\key{mapKeys}(uniq)[T_f]$, we iterate over all the variables replaced by $\key{uniquefy}$. We take the variables that replaced them and use fold to relate them all with subtyping. In other words, for each $\{T\;\mapsto[T_1, \ldots, T_n]\}$ in $uniq$, we have the formulae $T_1 <: T_2, \ldots, T_{n-1} <: T_n$.
This transformation has created a rule with unique outputs and subtyping, but the subtyping may be incorrect: if some variable is contravariant, its corresponding subtyping premise must be swapped.
Lines 7-11, then, adjust the subtyping premises based on the variance of types. Line 7 selects all subtyping premises of the form $T_1 <: T_2$. For each, Line 8 selects typing premises with output of the form $(c_v \; Ts_v)$. We do so to understand the variance of variables. If the first argument of $c_v$ is contravariant, for example, then the first element of $Ts_v$ warrants a swap in a subtyping premise because it is used in the contravariant position.
We achieve this by creating a map that associates the variance to each argument of $c_v$. The information about the variance for $c_v$ is in $variance$.
If $T_1$ or $T_2$ (from the pattern of the selected premise) appear in $Ts_v$ then they find themselves with a variance assigned in $vmap$.
Lines 10-11 generate a new premise based on the variance of variables. For example, if $T_1$ is contravariant then we generate $T_2 <: T_1$.
The program written so far (lines 1-11) is enough to add subtyping to several typing rules. For example, \textsc{(t-app)} can be transformed into \textsc{(t-app')} with this program.
However, some typing rules need a more sophisticated algorithm. Below is the typing rule for if-then-else on the left, and its version with subtyping on the right, which makes use of the join operator ($\sqcup$) (see, \cite{tapl}).
\[
{\footnotesize
\begin{array}{ccc}
{
\inference
{
\Gamma \vdash \; e_1 : \key{Bool} \\
\Gamma \vdash \; e_2 : T \quad
\Gamma \vdash \; e_3 : T
}
{ \Gamma \vdash \; (\mathit{if} \; e_1\; e_2\; e_3) : T}
}
& ~~ \Longrightarrow ~~ &
{
\inference
{\Gamma \vdash e_1 : \key{Bool} \qquad \Gamma \vdash e_2 : T_1 \\\\ \Gamma \vdash e_3 : T_2 \qquad T_1 \sqcup T_2 = T
}
{\Gamma \vdash (\mathit{if} \; e_1 \; e_2 \; e_3) : T}
}
\end{array}
}
\]
If we removed $T_1 \sqcup T_2 = T$, the meta-variable $T$ would have no precise instantiation because its counterpart variables have been given new names.
Lines 13-17 accommodate cases like this. Line 13 saves the variables that appear in the output type of the rule in \emph{outputVars}. We then iterate over all the keys of $uniq$, that is, the variables that have been replaced. For each of them, we check whether they appear in \emph{outputVars}. If so, then we create a join operator with the variables newly generated to replace this variable, which can be retrieved from $uniq$. We set the output of the join operator to be the variable itself, because that is the one used in the conclusion.
The algorithm above shows that \key{uniquefy} is a powerful operation of $\mathcal{L}\textendash\textsf{Tr}$.
To illustrate \key{uniquefy} further, let us consider a small example before we address big-step semantics.
Suppose that we would like to make every test of equality explicit. We therefore want to disallow terms such as $(\key{op}\; e\; e\; e)$ to appear in the premises, and want to turn them into $(\key{op}\; e_1\; e_2\; e_3)$ together with premises $e_1 = e_2$ and $e_2 = e_3$. In $\mathcal{L}\textendash\textsf{Tr}$ we can do this in the following way. Below we assume that the map \emph{allOps} maps each operator to the string ``yes" for each of its arguments. This instructs \key{uniquefy} to look for every argument.
\lstset
{
numbers=left,
stepnumber=1,
}
\begin{lstlisting}[mathescape=true]
...
$\key{uniquefy}(premises, allOps, ``yes") => (uniq, newpremises):$
${newpremises \; @ \; \key{concat}(\key{mapKeys}(uniq)[T_f]: \key{fold} = uniq(T_f))}$
\end{lstlisting}
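For reference, the effect of $\key{uniquefy}$ on this example can be mimicked in a few lines of Python; this is a hedged sketch (the real operation is driven by the operator map and the selector string, which we elide here, and the naming scheme for fresh variables is ours):
\begin{lstlisting}[language=Python,numbers=none]
# Rename every selected occurrence; return the map uniq from each
# original variable to the list of fresh variables replacing it.
def uniquefy(args):
    uniq, new_args, counts = {}, [], {}
    for a in args:
        counts[a] = counts.get(a, 0) + 1
        fresh = a + str(counts[a])
        uniq.setdefault(a, []).append(fresh)
        new_args.append(fresh)
    return uniq, new_args

uniq, new_args = uniquefy(["e", "e", "e"])   # (op e e e)
# fold-style chaining over uniq: one premise per adjacent pair
premises = [a + " = " + b
            for vs in uniq.values() for a, b in zip(vs, vs[1:])]
print(new_args, premises)
# prints ['e1', 'e2', 'e3'] ['e1 = e2', 'e2 = e3']
\end{lstlisting}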
Below, we show the code that turns language definitions into big-step semantics.
\lstset
{
numbers=left,
stepnumber=1,
}
\begin{lstlisting}[mathescape=true]
$\key{setRules}$
$\mathit{Value}[v]: v \longrightarrow v\; @$
$\key{getRules}(\key{keep})[(op \; es) \;\longrightarrow\; et]:$
$\key{if}\; \key{isEmpty}(\select{Expression}{(op\; \_)}{self})\;\key{then}\; \key{nothing} \;\key{else}\;$
$\key{let}\; v_{res} = \key{newVar}\;\key{in}$
$\key{let}\; emap = \key{createMap}((\select{es}{e}{\key{newVar}}), es) \;\key{in}$
$(\key{mapKeys}(emap)[e]: \key{if}\; \key{isVar}(emap(e))\; \key{and}\; \key{not}(emap(e)\;\key{in}\; \varsLNC{et})$
$\key{then}\; \key{nothing} \;\key{else}\; e \longrightarrow emap(e) )$
$\underline{@\; (\key{if}\; \key{not}(et \;\key{in}\; es)\;\key{then}\; [(et \;\longrightarrow\; v_{res})] \;\key{else} \; \key{nil}) \; @ \; premises\qquad\qquad\qquad ~~~}$
${(op \; (\key{mapKeys}(emap))) \;\longrightarrow\; \key{if}\;\key{not}(et \;\key{in}\; es)\;\key{then}\; v_{res} \;\key{else}\; et}$
\end{lstlisting}
\lstset
{
numbers=left,
stepnumber=1,
basicstyle=\footnotesize,
}
Line 1 updates the rules of the language with the list computed in lines 2-9. Line 2 generates reduction rules such as $\lambda x.e \longrightarrow \lambda x. e$, for each value, as it is standard in big-step semantics. These rules are appended to those generated in lines 3-9. Line 3 selects all the reduction rules. Line 4 leaves out those rules that are not about a top-level expression operator. This skips contextual rules that take a step $E[e] \longrightarrow E[e']$, which do not appear in big-step semantics.
To do so, line 4 makes use of $\select{\emph{Expression}}{(op\; \_)}{\emph{self}}$.
As $op$ is bound to the operator we are focusing on (from line 3), this selector returns a list with one element if $op$ appears in \emph{Expression}, and an empty list otherwise.
This is the check we perform at line 4.
Line 5 generates a new variable that will store the final value of the step.
Line 6 assigns a new variable to each of the arguments in $(es)$. We do so by creating a map \emph{emap}.
These new variables are the formal arguments of the new rule being generated (Line 9).
Lines 7-8 make each of these variables evaluate to its corresponding argument in $es$ (line 8). For example, for the beta-reduction an argument of $es$ would be $\lambda x. e$ and we therefore generate the premise $e_1 \longrightarrow \lambda x. e$, where $e_1$ is the new variable that we assigned to this argument with line 6.
Line 7 skips generating the reduction premise if it is a variable that does not appear in $e_t$. For example, in the translation of \textsc{(if-true)} $(\mathit{if} \; true \; e_2 \; e_3) \longrightarrow e_2$ we do not evaluate $e_3$ at all.
Line 9 handles the result of the overall small-step reduction. This result is evaluated to a value ($v_{res}$), unless it already appears in the arguments $es$.
The conclusion of the rule syncs with this, and we place $v_{res}$ or $e_t$ in the target of the step accordingly.
Line 9 also appends the premises from the original rule, as they contain conditions to be checked.
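To summarize lines 5-9, the following hedged Python sketch reproduces the rewriting on a toy string representation of rules (the variable test and the occurrence check are crude approximations of $\key{isVar}$ and of membership in $\varsLNC{et}$):
\begin{lstlisting}[language=Python,numbers=none]
import re

def is_var(t):               # crude: one letter plus optional digits
    return re.fullmatch(r"[a-z]\d*", t) is not None

def to_big_step(op, es, et, premises):
    emap = {"e%d'" % (i + 1): arg for i, arg in enumerate(es)}
    new_premises = []
    for fresh, arg in emap.items():
        # skip the premise if arg is a variable not used in the target
        if is_var(arg) and arg not in et:
            continue
        new_premises.append(fresh + " --> " + arg)
    if et not in es:         # target still needs to be evaluated
        new_premises.append(et + " --> v_res")
        target = "v_res"
    else:
        target = et
    new_premises += premises  # keep the original side conditions
    concl = "(" + op + " " + " ".join(emap) + ") --> " + target
    return new_premises, concl

# (if true e2 e3) --> e2 becomes the big-step rule shown below:
print(to_big_step("if", ["true", "e2", "e3"], "e2", []))
\end{lstlisting}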
When we apply this algorithm to the simply typed $\lambda$-calculus with if-then-else
we obtain: (we use standard notation rather than $\mathcal{L}\textendash\textsf{Tr}$ syntax)
{\footnotesize
\begin{align*}
(\lambda x.e \; v) \longrightarrow e[v/x]
\qquad&\Rightarrow\qquad
\inference
{e_1' \longrightarrow \lambda x.e & e_2' \longrightarrow v& e[v/x]\longrightarrow v_{res}}
{(e_1'\; e_2')\longrightarrow v_{res}}
\\[2ex]
(\mathit{if} \; true \; e_1 \; e_2) \longrightarrow e_1
\qquad&\Rightarrow\qquad
\inference
{e_1' \longrightarrow {true}
& e_2' \longrightarrow e_2}
{(\mathit{if} \; e_1' \; e_2' \; e_3') \longrightarrow e_2}
\end{align*}
}
We have implemented $\mathcal{L}\textendash\textsf{Tr}$ and we have applied it to the examples in this paper as well as $\lambda$-calculi with lists, pairs, sums, options, let-binding, function composition $(g\circ f)(x)$, and System F. We also considered these calculi in both call-by-value and call-by-name versions, as well as with lazy evaluation for data types such as pairs and lists.
The languages produced by our tool are compiled to $\lambda$Prolog, which type-checks them successfully and, in fact, can execute them. We have tested subtyping with simple programs and checked that this functionality has been added.
We have also tested big-step evaluations with simple programs and our tests evaluate to the expected values in one step.
The tool, repo of languages generated, and details of our tests can be found at the website of our tool \cite{ltr}.
\section{Related Work}\label{related}
An excellent classification of language transformations has been provided in \cite{ErdwegGR12}.
The paper defines five operations: language extension, language unification, language restriction, self-extension of an (embedded) language, and support for composition of extensions.
Language workbenches (Rascal, Spoofax, etcetera) implement these types of transformations and similar ones.
These transformations are coarse-grained in nature because they do not access the components of languages with precision. $\mathcal{L}\textendash\textsf{Tr}$, instead, includes operations to scan rules, and to select and manipulate formulae and terms precisely. In this regard, we offer low-level manipulations, and yet programmers can enjoy a rather declarative language. We are not aware of calculi that provide these features. We are also not aware of type soundness proofs of similar calculi.
Proof assistants are optimized for handling inductive (rule-based) definitions, and can automatically generate powerful inductive reasoning mechanisms from these definitions.
$\mathcal{L}\textendash\textsf{Tr}$ does not provide these features, and does not assist language designers with their proofs. On the other hand, proof assistants do not have reflective features for programmatically retrieving their own inductive definitions, selected by a pattern, and for manipulating them to form a different specification, which is instead characteristic of $\mathcal{L}\textendash\textsf{Tr}$. It would be interesting to explore the merging of the features of proof assistants and $\mathcal{L}\textendash\textsf{Tr}$ in one tool.
Another limitation of $\mathcal{L}\textendash\textsf{Tr}$ compared to proof assistants (and general-purpose programming languages) is that $\mathcal{L}\textendash\textsf{Tr}$ does not offer recursion but only a simple form of iteration.
We are not aware of algorithms that automatically add subtyping. Instead, there has been some work on deriving big-step semantics, which we discuss below.
The work of Danvy et al.~\cite{danvy2004refocusing,Danvy:2008,danvy:reduction-free} offers a comprehensive translation from small-step to big-step semantics. The approach derives small-step abstract machines first, which are then translated into big-step abstract machines and finally into big-step reduction semantics. The approach is rather elaborate and involves techniques such as refocusing and transition compression. It would be interesting to express these algorithms in $\mathcal{L}\textendash\textsf{Tr}$, extending the calculus if needed.
Our algorithm differs slightly from that of Ciob\^ac\u{a} \cite{Ciobaca13}, whose rules have been proved correct.
We do not offer correctness theorems about our algorithms, and we have shown them solely to demonstrate the kinds of manipulations that our calculus offers.
\section{Conclusion}\label{conclusion}
We have presented $\mathcal{L}\textendash\textsf{Tr}$, a calculus for expressing language transformations. The calculus is expressive enough to model interesting transformations such as adding subtyping, and switching from small-step to big-step semantics. We have proved the type soundness of $\mathcal{L}\textendash\textsf{Tr}$, and we have implemented the calculus in a tool.
As $\mathcal{L}\textendash\textsf{Tr}$ manipulates inference systems it can, in principle, be applied to logical systems, and we plan to explore this research venue.
Overall, we believe that the calculus offers a rather declarative style for manipulating languages.
\bibliographystyle{splncs04}
|
2,869,038,153,842 | arxiv | \section{Introduction\label{sec:Introduction}}
Using massive tadpole diagrams significant progress has been made in the
improved prediction of various physical quantities. They have, e.g.,
contributed to an amazingly precise
relation between quark masses (in particular the
top-quark mass $M_t$), the mass of the Higgs boson, $M_H$, and the
masses of the gauge bosons, $M_W$ and $M_Z$.
Three- and even four-loop
corrections have become accessible during the past years. Direct and
indirect measurements are well consistent, at least within the current
world average $M_t=173.21\pm0.51\pm0.71$~GeV~\cite{Agashe:2014kda} and
$M_H=125.7\pm0.4$~GeV~\cite{Agashe:2014kda}. Current-current
correlators, evaluated in three- and partially four-loop approximation,
are thus an indispensable tool for tests of the Standard Model (SM),
as we will discuss in more detail in Section~\ref{sec:II}.
The evaluation of three- and even four-loop tadpole diagrams is
directly related to the evaluation of moments of the charm- and
bottom-quark correlators. These may in turn lead to a precise
determination of charm- and bottom-quark masses. Since all these
quantities, in turn, are directly accessible, both to a perturbative
and a non-perturbative treatment, a remarkably consistent picture
emerges. In fact the analysis for the charm- as well as the
bottom-quark mass leads to $m_c(3~\mbox{GeV})=0.986\pm{0.013}$~{GeV} and
$m_b(10~\mbox{GeV})=3.610\pm{0.016}$~GeV~\cite{Kuhn:2007vp,Chetyrkin:2009fv},
a result quite comparable to other methods, in particular to
non-perturbative studies as will be
discussed in Section~\ref{sec:III}.
Finally, in Section~\ref{sec:IV}, we list a collection of topics and
results which are connected to the main theme of this
article. This includes the decoupling of heavy
quarks at four loops, the Higgs-gluon coupling up to five-loop order,
and the Higgs-decay rate into two photons
at three and four loops, including non-singlet and singlet terms.
\section{Weak corrections\label{sec:II}}
Let us start with the $\rho$ parameter, the quantity introduced in the
early times of electroweak interactions of the SM. In its more modern
version it gives the relation between gauge boson masses, the weak-
and the electromagnetic coupling $G_F$ and $\alpha$, a relation, which
exists at tree level already. In higher orders this relation depends
on all remaining parameters of the SM. The radiative corrections are
dominated by the quadratic dependence on the mass of the top quark
$M_t$, the logarithmic dependence on the Higgs boson, $M_H$, and,
to a lesser extent, also on the masses of the remaining quarks
$m_q$ and leptons $m_{\ell}$:
\begin{equation}
M_W=f(G_F, M_Z,\alpha; M_t, M_H; m_q, m_{\ell} ).
\end{equation}
A slightly different version of the same equation
\begin{equation}
M_W^2\left(1-{M_W^2\over
M_Z^2}\right)={\pi\alpha\over\sqrt{2}G_F}(1+\Delta r)
\end{equation}
makes the presence of the electroweak corrections even more
transparent. This equation can be rewritten and simplified even further
by separating $\Delta r$ into a piece which is dominated by weak effects
and another one which is dominated by electromagnetic effects, mainly
due to the running of the electromagnetic coupling $\Delta\alpha$
(see Refs.~\cite{Steinhauser:1998rq} and~\cite{Sturm:2013uka} for three- and
four-loop corrections, respectively). Furthermore, it is convenient to
separate the leading $M_t^2$ dependence which leads to
\begin{equation}
\Delta r = -{\cos^2{\theta_W}\over \sin^2{\theta_W}}\Delta\rho +
\Delta\alpha+\Delta r_{\mbox{\tiny{remaining}}}\,.
\end{equation}
Here $\Delta\rho={3{G_F M_t^2/(8\sqrt{2}\pi^2)}}+\ldots$ incorporates the
dominant weak terms evaluated in leading order by
Veltman~\cite{Veltman:1977kh} nearly 40 years ago.
It is the aim of ongoing and of future theoretical studies to compete with the
precision anticipated for the next round of experiments. The present precision
and the precision anticipated for the future (as far
as $\delta M_W$ and $\delta M_t$ are concerned) are given by
\begin{center}
{\scalefont{0.9}
\begin{tabular}{c|c|c}
$\delta M_W$ [MeV]&$\delta M_t$ [GeV]&\\\hline
33 & 5 & status 2003 (LEP, Tevatron)\\\hline
15 & 0.76 & now (Tevatron, LHC)\\\hline
8$\to$5 & 0.6 & aim (LHC); theory\\\hline
3$\to$1.2 & 0.1-0.2 & ILC, TLEP
\end{tabular}
}
\end{center}
As it turns out, the relative shifts of $M_W$ and $M_t$ are just of the
right order of magnitude to explore the sensitivity towards radiative
corrections. This is seen most clearly by considering the shift in $M_t$
that is compensated by a shift in $M_W$
\begin{equation}
\delta M_W\approx 6\cdot 10^{-3}\delta M_t
\,,
\end{equation}
keeping $\alpha(M_Z)$, $M_Z$ and $M_H$ fixed.
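This coefficient follows from leading-order error propagation; as a hedged estimate, differentiating the relation above at fixed $\alpha$ and $M_Z$, with $\Delta r\supset-(c_W^2/s_W^2)\Delta\rho$ and $\Delta\rho\approx 3X_t$, one finds
\[
\delta M_W \simeq \frac{M_W\, c_W^2}{2\,(c_W^2-s_W^2)}\,\delta(\Delta\rho)
\simeq \frac{M_W\, c_W^2}{2\,(c_W^2-s_W^2)}\,\frac{6 X_t}{M_t}\,\delta M_t
\approx 56~\mbox{GeV}\times 1.1\cdot10^{-4}~\mbox{GeV}^{-1}\,\delta M_t\,,
\]
which indeed reproduces the quoted coefficient.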
Let us now recall the development of the theory predictions for
$\Delta\rho$ during the past one or two decades.
Early results related to the two-loop approximation can be found in
Refs.~\cite{Barbieri:1992nz,Fleischer:1993ub}. These papers are based
on the approximation $M_t^2\gg M_W^2$. The first step into the
three-loop regime was taken in the limit
$M_H=0$~\cite{vanderBij:2000cg}. In fact, this turns out to be a poor
approximation, leading to tiny corrections for the terms of order
$X_t^3$ and $\alpha_s X_t^2$ with
\begin{equation}
X_t={G_F M_t^2\over8\sqrt{2}\pi^2}\,.
\end{equation}
The first three-loop result with $M_H$
different from zero requires the full set of three-loop
tadpoles~\cite{Faisst:2003px}. At order $\alpha_s X_t^2
f(M_t/M_H)$ this corresponds to QCD corrections to the two-loop
diagrams of order $X^2_t f(M_t/M_H)$ (see Fig.~\ref{fig:rhoQCD}
for a sample Feynman diagram). At order $X_t^3 f(M_t/M_H)$
diagrams with one quark line contribute, as well as those involving
two disconnected quark lines.
\begin{figure}[!ht]
\begin{center}
\includegraphics[clip,width=3.5cm]{vecself3l3}
\end{center}
\vspace*{-0.6cm}
\caption{Sample diagram for the $\alpha_s X_t^2 f(M_t/M_H)$ contribution.
Solid lines denote top quarks, dashed lines Higgs or Goldstone bosons and
curly lines gluons.\label{fig:rhoQCD}}
\end{figure}
At the same time the translation from the $\overline{\mbox{MS}}$ mass $m_t(M_t)$
to the pole mass $M_t$ has to be performed at two loops. This
corresponds to the evaluation of two-loop on-shell diagrams. For the case
of the $X_t^3 f(M_t/M_H)$ corrections the counterterms are of order
$X_t^2$ and are depicted in Fig.~\ref{fig:onshelldia}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5cm]{top2l2}
\includegraphics[width=3.5cm]{top2l3}
\includegraphics[width=3.5cm]{top2l4}
\includegraphics[width=3.5cm]{top2l5}
\includegraphics[width=3.5cm]{top2l1}
\end{center}
\vspace*{-0.6cm}
\caption{Two-loop on-shell diagrams which contribute to the translation
of the $\overline{\mbox{MS}}$ mass to the pole mass.\label{fig:onshelldia}}
\end{figure}
In contrast to the pure QCD problem (see
Section~\ref{sec:III}) with only one mass scale being present (and
which can therefore be expressed in closed analytic form) here one
typically encounters two different scales, $M_H$ and $M_t$. A closed
analytical solution is no longer at hand. There are, however, various
cases which allow for expansions, one of which is perfectly valid for
$M_H$ and $M_t$ in the region of interest. Analytic results are
available in the cases $M_H=0$ and $M_H=M_t$, where only one scale is
present. Expansions, which in principle allow for arbitrarily high
precision, are then accessible in two cases: for large $M_H$, an
expansion in $(M_t^2/M_H^2)^n$ modulo logarithms, which is valid down
to $M_H \approx 2 M_t$; and for $M_H$ around $M_t$, an expansion which
is valid from fairly small $M_H$, say $M_H\approx 0.1 M_t$, up to
$M_H\approx 2M_t$. The results for the expansions in $(M_H/M_t)^n$ and
in $(M_H-M_t)^2/M_t^2$ are shown in Fig.~\ref{fig:exprho}. Note that
for $M_H=0$ and $M_H=126$~GeV one obtains for the prefactor of
$\alpha_s X_t^2$, a part of the three-loop term of $\Delta \rho$, the
values $2.9$ and $120$, respectively.
\begin{figure}[!h]
\begin{center}
\includegraphics[clip,width=9cm]{rho3l_as_OS}
\end{center}
\vspace*{-1.1cm}
\caption{\label{fig:exprho} Contributions of order $\alpha_s X_t^2$
to $\Delta \rho$ in the on-shell definition of the top-quark
mass. The black squares indicate the points where the exact result
is known. Using the latest numerical values for $M_t$ and
$M_H$ one obtains $M_H/M_t\approx 0.73$.}
\end{figure}
The results for the shift in $M_W$ and the effective weak mixing
angle are shown in Fig.~\ref{fig:shift} for the
four contributions which are most relevant. The terms proportional to
$X_t$ and $X_t\alpha_s$
must be taken into account in any sensible analysis and amount to a
shift in $\delta\rho$ of order 0.00865.
\begin{figure}[!h]
\begin{center}
\includegraphics[clip,width=8cm]{deltaMw}
\end{center}
\vspace*{-1.1cm}
\caption{\label{fig:shift} The shift in $M_W$ and
$\sin^2\theta_{\rm eff}$ as a function of $M_H/M_t$ induced by
the corrections of order $X_t^2$, $\alpha_s^2 X_t$, $\alpha_s
X_t$, and $X_t^3$.}
\end{figure}
The two-loop piece proportional to $X_t^2$ is of the same
order as the three-loop piece proportional to $\alpha_s^2 X_t$.
The purely weak term proportional to $X_t^3$ is negligible now and in the
foreseeable future; the term proportional to $\alpha_s X_t^2$ is
just below the present sensitivity.
\subsection*{Four-loop QCD contributions}
Two- and three-loop QCD corrections to $\Delta\rho$ have been
computed in
Refs.~\cite{Djouadi:1987gn,Djouadi:1987di,Kniehl:1988ie,Avdeev:1994db,Chetyrkin:1995ix,Chetyrkin:1995js}
about 20 years ago. As stated above, since several years it is now
possible to push the predictions for the $\rho$ parameter to the
four-loop level. This requires, on the one hand, the relation between
pole- and $\overline{\mbox{MS}}$ mass in three-loop
approximation~\cite{Chetyrkin:1999qi,Melnikov:2000qh} and, on the
other hand, the evaluation of about fifty
four-loop tadpole diagrams. This has been achieved by a
combination of analytical methods, difference equations and
semi-numerical
integration~\cite{Chetyrkin:2006bj,Boughezal:2006xk,Faisst:2006sr}.
In a first step this has led
to the four-loop result for the $\rho$ parameter in the $\overline{\mbox{MS}}$ scheme
\begin{equation}
\delta\rho_t^{\mbox{\tiny{(4 loops)}}}=3{G_F
\overline{m}_t^2\over8\sqrt{2}\pi^2}\left({\alpha_s\over\pi}\right)^3
\underbrace{(-3.2866+1.6067)}_{-1.6799}\,.
\end{equation}
The first term has been evaluated in Ref.~\cite{Schroder:2005db},
the second one, which (in the $\overline{\mbox{MS}}$ scheme) leads
to a reduction by a factor of about $1/2$,
in Refs.~\cite{Chetyrkin:2006bj,Boughezal:2006xk}.
For the numerical evaluation the translation from the
$\overline{\mbox{MS}}$ to the pole mass is more convenient and one finds
\begin{equation}
\delta\rho_t^{\mbox{\tiny{(4 loops)}}} = 3{G_F
M_t^2\over8\sqrt{2}\pi^2}
\left(\frac{\alpha_s}{\pi}\right)^3 (-93.1501)
\end{equation}
which corresponds to a shift in $M_W$ by about 2~MeV, similar to
the three-loop term of order $\alpha_s X_t^2$.
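As a hedged numerical cross-check (with illustrative inputs, in particular $\alpha_s(M_t)\approx0.108$, and the leading-order propagation $\delta M_W\simeq M_W c_W^2\,\delta(\Delta\rho)/[2(c_W^2-s_W^2)]$):
\begin{verbatim}
# schematic check of the ~2 MeV shift; inputs are illustrative
from math import pi, sqrt

GF, Mt, MW, MZ = 1.16638e-5, 173.2, 80.385, 91.1876  # GeV units
als = 0.108                                          # alpha_s(M_t)

Xt = GF * Mt**2 / (8 * sqrt(2) * pi**2)
drho = 3 * Xt * (als / pi)**3 * (-93.1501)           # four loops

cw2 = (MW / MZ)**2
dMW = MW * cw2 / (2 * (2 * cw2 - 1)) * drho          # LO propagation
print(drho, 1e3 * dMW)    # -> about -3.6e-5 and -2.0 MeV
\end{verbatim}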
\section{Charm- and bottom-quark masses\label{sec:III}}
The precise determination of the charm- and bottom-quark masses from
relativistic four-loop moments can be considered as one of the truly
remarkable successes of quantum field theory with remarkable agreement
between perturbative~\cite{Kuhn:2007vp,Chetyrkin:2009fv} and lattice
methods~\cite{Allison:2008xk,McNeile:2010ji,Colquhoun:2014ica}. Let us
first give a more detailed motivation, then present the theoretical
framework and finally compare the results based on perturbative and
lattice methods.
Precise bottom quark masses
enter many $B$ physics quantities. During the past years
significant progress has been made on the one hand in the analysis of
$B$-meson decays and, on the other hand, the $\Upsilon$ spectroscopy. In
particular the latter has led to a fairly consistent result of
$m_b(m_b)=4.193^{+0.022}_{-0.035}$~GeV~\cite{Beneke:2014pta} (see also
Ref.~\cite{Penin:2014zaa} where $m_b(m_b)=4.169\pm0.009$~GeV has been
obtained) in excellent agreement with the result based on sum rules discussed
in detail below.
Let us further motivate the need for precise quark masses for the case of the
bottom-quark mass, which enters a number of physical observables. Most
prominently, we want to mention the decay of a Higgs boson
into bottom quarks which, using the
scalar correlator to five-loop precision~\cite{Baikov:2005rw}, can be written
in the form
\begin{eqnarray}
\Gamma(H\to b\bar{b})&=&{3\,G_F M_H\over4\sqrt{2}\pi}m_b^2(M_H)\*R^{(S)}(M_H)\\
R^{(S)}(M_H)&=&
1
+ 5.667 \left({\alpha_s \over \pi}\right)
+ 29.147 \left({\alpha_s \over \pi}\right)^2 \nonumber \\
&&\mbox{}+41.758 \left({\alpha_s \over \pi}\right)^3
- 825.7 \left({\alpha_s \over \pi}\right)^4 \nonumber\\
&=&
1
+ 0.19551
+ 0.03469 \nonumber\\
&&\mbox{}+0.00171
- 0.00117.
\end{eqnarray}
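As a hedged numerical illustration (fixing $\alpha_s/\pi=0.0345$, the value implied by the quoted terms, and using $M_H=125.7$~GeV and $m_b(M_H)=2.759$~GeV from below):
\begin{verbatim}
# schematic evaluation of R^(S) and of the width
from math import pi, sqrt

a = 0.0345                                   # alpha_s(M_H)/pi
RS = 1 + 5.667*a + 29.147*a**2 + 41.758*a**3 - 825.7*a**4
GF, MH, mb = 1.16638e-5, 125.7, 2.759        # GeV units
Gamma = 3 * GF * MH * mb**2 / (4 * sqrt(2) * pi) * RS
print(RS, 1e3 * Gamma)   # -> R^(S) ~ 1.231, width ~ 2.3 MeV
\end{verbatim}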
The theory uncertainty, which is generously taken from a variation of the
scale parameter between $M_H/3$ and $3M_H$, is reduced from $5\permil$ for the
four-loop to $1.5\permil$ for the five-loop result. Thus, the main
uncertainty is induced from the uncertainty of the bottom quark mass which (at
energy scale 10~GeV) is given by~\cite{Chetyrkin:2009fv}
\[
\hspace*{-0.5cm}
m_b(10~\mbox{GeV})=\left(3610-{\alpha_s-0.1189\over0.002}\cdot12\pm11\right)~\mbox{MeV}
\,.
\]
The running from 10~GeV to $M_H$ depends on the anomalous mass
dimension $\gamma_m$, the $\beta$ function and on $\alpha_s$. With the present
knowledge (i.e. four-loop anomalous dimensions as implemented in {\tt
RunDec}~\cite{Chetyrkin:2000yt,Schmidt:2012az}) one finds
\begin{eqnarray}
m_b(M_H)=2759\pm\left.8\right|_{m_b}\pm\left.27\right|_{\alpha_s}~\mbox{MeV}
\,.
\label{eq::mbMH}
\end{eqnarray}
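For orientation, a leading-order sketch of this running reads as follows (one-loop $\beta$ function and mass anomalous dimension only, i.e.\ $m\propto\alpha_s^{12/23}$ for $n_f=5$; the quoted numbers rely on the full four-loop running):
\begin{verbatim}
# schematic one-loop running of m_b from 10 GeV to M_H
from math import log, pi

def als(mu, als_ref=0.1189, mu_ref=91.1876, nf=5):
    b0 = 11 - 2*nf/3
    return als_ref / (1 + als_ref*b0/(4*pi)*log(mu**2/mu_ref**2))

mbMH = 3.610 * (als(125.7) / als(10.0))**(12/23)
print(1e3 * mbMH)   # -> about 2880 MeV at one loop
\end{verbatim}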
It is interesting to investigate the effect of the five-loop anomalous
dimensions. Taking into account the
$\gamma_m$ to five-loop accuracy~\cite{Baikov:2014pja,Baikov:2014qja}
together with $\beta_4$, the five-loop contribution to the $\beta$
function of QCD, which is still unknown, one obtains the following
uncertainties
\begin{eqnarray}
{\delta m_b^2(M_H)\over m_b^2(M_H)}&=&
-1.3\times10^{-4} \quad(\beta_4/\beta_0=0)\nonumber\\[-0.25cm]
&=& -4.3\times10^{-4} \quad(\beta_4/\beta_0=100)\nonumber\\
&=& -7.3\times10^{-4} \quad(\beta_4/\beta_0=200)\,,
\end{eqnarray}
which lead to an uncertainty of a few MeV in $m_b(M_H)$
which is small compared to the current error shown in Eq.~(\ref{eq::mbMH}).
Another motivation which also points to a precision around 10~MeV is
based on the picture of Yukawa unification. In this approach
$\lambda_\tau\sim\lambda_b\sim\lambda_t$ at the GUT scale. In effect
this implies $\delta m_b/m_b\sim\delta m_t/m_t$. Assuming a precision
of the top-quark mass $\delta m_t\approx0.5$~GeV then leads to a precision
of $\delta m_b\approx10$~MeV, consistent with our finding below.
\subsection*{SVZ sum rules, moments and tadpoles}
The main idea originally advocated in Ref.~\cite{Novikov:1977dq} is
based on the observation (cf.~Fig.~\ref{fig:r}) that the cross
section for hidden ($J/\Psi$ plus $\Psi(2S)$) plus open
($D\overline{D}$ plus resonances) charm production is well
described by perturbation theory, if the analysis is restricted to
sufficiently low moments.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=7cm,angle=180]{rhad}
\end{center}
\vspace*{-0.6cm}
\caption{Sketch of the $R$ ratio in the charm-threshold region.\label{fig:r}}
\end{figure}
Let us first recall some definitions. The
two-point correlation function
\begin{eqnarray*}
\!\!\!\!(-q^2g_{\mu\nu}+q_{\mu}q_{\nu})\Pi(q^2)
=\mbox{i}\!\int\!dx e^{iqx} \langle0| T j_{\mu}(x)j_{\nu}(0)|0\rangle
\end{eqnarray*}
of the electromagnetic current $j_\mu$ determines the ratio $R(s)$ through
\begin{equation}
R(s)=12\pi\mbox{Im}\left[\Pi(q^2=s+\mbox{i}\varepsilon)\right]\,.
\end{equation}
In fact, we are only interested in the lowest moments of
$\Pi$, corresponding to the first few terms of the
Taylor expansion of $\Pi(q^2)$:
\begin{equation}
\Pi(q^2)=Q_q^2 {3\over16\pi^2}\sum_{n\ge0} \overline{C}_n z^n\,,
\end{equation}
where $Q_q$ corresponds to the charge of the considered quark.
Here $z=q^2/(4m_q^2)$ and $m_q=m_q(\mu=m_q)$ is the $\overline{\mbox{MS}}$ mass at
the scale $\mu=m_q$. Let us, for definiteness, restrict the
following discussion to the charm quark, i.e., $q=c$.
For the moments one finds
\begin{eqnarray}
\overline{C}_n&=&\overline{C}_n^{(0)}
+ \frac{\alpha_\mathrm{s}}\pi \overline{C}_n^{(1)}
+ \left( \frac{\alpha_\mathrm{s}}\pi\right)^2 \overline{C}_n^{(2)}\nonumber\\
&&+ \left( \frac{\alpha_\mathrm{s}}\pi\right)^3 \overline{C}_n^{(3)}
+ \dots,
\end{eqnarray}
if the renormalization scale is set to $\mu=m_q$ or
\begin{eqnarray}
\overline{C}_n &=& \overline{C}_n^{(0)}
+ \frac{\alpha_\mathrm{s}}{\pi} \big(
\overline{C}_n^{(10)}
+ \overline{C}_n^{(11)}l_{m_c}
\big)
\nonumber\\&&\mbox{}
+\left( \frac{\alpha_\mathrm{s}}{\pi}\right)^2 \big(
\overline{C}_n^{(20)}
+ \overline{C}_n^{(21)}l_{m_c}
+ \overline{C}_n^{(22)}l_{m_c}^2 \big)
\nonumber\\&&\mbox{}
+\left( \frac{\alpha_\mathrm{s}}\pi\right)^3
\big(
\overline{C}_n^{(30)}
+ \overline{C}_n^{(31)}l_{m_c}
+ \overline{C}_n^{(32)}l_{m_c}^2
\nonumber\\&&\mbox{}\qquad\quad
+ \overline{C}_n^{(33)}l_{m_c}^3\big)
+ \dots,
\end{eqnarray}
if one is interested in the generic form with \mbox{$l_{m_c} = \ln
(m_c^2(\mu)/\mu^2)$}. The next-to-next-to-leading order calculation had
been performed already nearly twenty years
ago~\cite{Chetyrkin:1995ii,Chetyrkin:1996cf,Chetyrkin:1997mb} and is available
for all four (vector, axial, scalar and pseudoscalar) correlators. The
original evaluation was up to $n=8$. More recently, this has been extended to
$n=30$~\cite{Boughezal:2006uu,Maier:2007yn}. Now this project has been pushed
to N$^{3}$LO, and the results will be described in the following. In a first
step the $n_f^2$ contribution has been computed for $\overline{C}_0$ and
$\overline{C}_1$~\cite{Chetyrkin:2004fq}, then the complete result became
available. The reduction of the many different diagrams has been performed to
13 master integrals, shown in Fig.~\ref{fig:MI}, using the Laporta
algorithm~\cite{Laporta:2001dd}. Subsequently these 13 remaining integrals
are evaluated, originally using a combination of numerical and analytical
techniques, and by now purely analytical methods.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.5cm]{40903}\hspace{0.5ex}
\includegraphics[width=1.5cm]{40802}\hspace{0.5ex}
\includegraphics[width=1.5cm]{407117}\hspace{0.5ex}
\includegraphics[width=1.5cm]{407118}\\%\hspace{0.5ex}
\includegraphics[width=1.5cm]{406112}\hspace{0.5ex}
\includegraphics[width=1.5cm]{406111}\hspace{0.5ex}
\includegraphics[width=1.5cm]{40602}\hspace{0.5ex}
\includegraphics[width=1.5cm]{40618}\\%\hspace{0.5ex}
\includegraphics[width=1.5cm]{40505}\hspace{0.5ex}
\includegraphics[width=1.5cm]{40504}\hspace{0.5ex}
\includegraphics[width=1.5cm]{40524}\hspace{0.5ex}
\includegraphics[width=1.5cm]{40523}\\%\hspace{0.5ex}
\includegraphics[width=1.5cm]{40401}
\end{center}
\vspace*{-0.6cm}
\caption{The 13 master integrals. Solid lines denote massive
propagators; dashed lines represent massless propagators.\label{fig:MI}}
\end{figure}
The reduction of hundreds of integrals to master integrals has
been achieved, originally for $\overline{C}_0$ and
$\overline{C}_1$~\cite{Chetyrkin:2006xg,Boughezal:2006px},
subsequently for $\overline{C}_2$ and $\overline{C}_3$, using the
program Crusher~\cite{Maier:2008he,Maier:2009fz}. In the meantime all
master integrals are known in closed analytic
form to high order in $\varepsilon$, using results by a number of
different authors (see
Refs.~\cite{Laporta:2002pg,Schroder:2005va,Chetyrkin:2006dh,Lee:2010hs} and
references therein). The results for $\overline{C}_4$ up to
$\overline{C}_{10}$ are known approximately,
based on additional information from low energies ($q^2=0$), from
threshold ($q^2=4m^2$) and from high
energy~\cite{Hoang:2008qy,Kiyo:2009gb,Greynat:2010kx}. Closely related
results are also known for axial, scalar and pseudoscalar
correlators~\cite{Sturm:2008eb,Maier:2009fz,Kiyo:2009gb}. These can be
used for the investigation of correlators on the
lattice~\cite{Allison:2008xk} and will not be investigated further in
this more phenomenological study.
The heavy quark vector current correlator for $q^2\ll m^2$
has also been determined in the large $\beta_0$
limit~\cite{Grozin:2004ez} in order to study the
large-order behaviour.
The moments are directly related to measurements as follows. From
theoretical considerations one finds
\begin{eqnarray}
\mathcal{M}_n^{\mbox{\tiny{th}}}&\equiv&\left.{12\pi^2\over n!}
\left({d\over dq^2}\right)^n\Pi_c(q^2)\right|_{q^2=0}
\nonumber\\
&=&
{9\over4}Q_c^2\left({1\over4m_c^2}\right)^n\overline{C}_n\,,
\end{eqnarray}
where the quantity $\overline{C}_n$ depends on $\alpha_s(\mu^2)$
and $\ln(m_c^2/\mu^2)$. As default value for $\mu$ we
use $\mu=3$~GeV.
To obtain experimental moments one considers the correlator
given by
\begin{equation}
\Pi_c(q^2)={q^2\over12\pi^2}\int\mbox{d} s {R_c(s)\over s(s-q^2)}+ \mbox{subtraction}\,,
\end{equation}
which leads to
\begin{equation}
\mathcal{M}^{\mbox{\tiny{exp}}}_n=\int{\mbox{d} s\over s^{n+1}} R_c(s)\,.
\label{eq:Mnexp}
\end{equation}
Imposing the constraint
$\mathcal{M}_n^{\mbox{\tiny{exp}}}=\mathcal{M}_n^{\mbox{\tiny{th}}}$
leads to $m_c$ at the scale $\mu=3$~GeV, a result that could be
easily translated to arbitrary values of $\mu$.
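At leading order this matching can be made fully explicit: with the lowest-order coefficient $\overline{C}_1^{(0)}=16/15$ and the experimental value of the first moment from Tab.~\ref{tab:Mexp}, a schematic evaluation reads (higher orders bring the value down to the result quoted below):
\begin{verbatim}
# leading-order extraction of m_c from the first moment
from math import sqrt

C1_LO = 16/15        # lowest-order first moment
M1_exp = 0.2166      # in GeV^-2
# note (9/4) Q_c^2 = 1 for Q_c = 2/3, so M_1 = C1/(4 m_c^2)
mc = sqrt(C1_LO / (4 * M1_exp))
print(1e3 * mc)      # -> about 1110 MeV at this order
\end{verbatim}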
\begin{table}[t]
\begin{center}
\scalebox{0.7}{
\begin{tabular}{l|lll|l||l}
\hline
$n$ & ${\mathcal{M}_n^{\rm res}}$
& ${\mathcal{M}_n^{\rm thresh}}$
& ${\mathcal{M}_n^{\rm cont}}$
& ${\mathcal{M}_n^{\rm exp}}$
& ${\mathcal{M}_n^{\rm np}}$
\\
& $\times 10^{(n-1)}$
& $\times 10^{(n-1)}$
& $\times 10^{(n-1)}$
& $\times 10^{(n-1)}$
& $\times 10^{(n-1)}$
\\
\hline
$1$&$ 0.1201(25)$ &$ 0.0318(15)$ &$ 0.0646(11)$ &$ 0.2166(31)$ &$ -0.0001(2)$ \\
$2$&$ 0.1176(25)$ &$ 0.0178(8)$ &$ 0.0144(3)$ &$ 0.1497(27)$ &$ 0.0000(0)$ \\
$3$&$ 0.1169(26)$ &$ 0.0101(5)$ &$ 0.0042(1)$ &$ 0.1312(27)$ &$ 0.0007(14)$ \\
$4$&$ 0.1177(27)$ &$ 0.0058(3)$ &$ 0.0014(0)$ &$ 0.1249(27)$ &$ 0.0027(54)$ \\
\hline
\end{tabular}
}
\caption{
\label{tab:Mexp} The experimental moments in $(\mbox{GeV})^{-2n}$ as defined in
Eq.~(\ref{eq:Mnexp}) are shown, separated according to the
contributions from the narrow resonances, the charm threshold region
and the continuum region above $\sqrt{s}=4.8$~GeV. The last column
gives the contribution from the gluon condensate.}
\end{center}
\end{table}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=\columnwidth]{r}
\end{center}
\vspace*{-0.6cm}
\caption{The normalized cross section $R(s)$ between 2~GeV and 10~GeV.
The solid line corresponds to the theoretical prediction. The
uncertainties obtained from the variation of the input parameters and
of $\mu$ are indicated by the dashed curves. The inner and outer error
bars give the statistical and systematical uncertainty,
respectively. The data points are from
BES~\cite{Bai:2001ct,Ablikim:2006mb}, MD-1~\cite{Blinov:1993fw} and
CLEO~\cite{Ammar:1997sk}. The vertical dashed lines correspond to
the location of the $J/\Psi$ and $\Psi'$ resonances.\label{fig:tbd}}
\end{figure}
Let us first discuss the ingredients for charm, and then for bottom
quarks. The results for the electronic widths of $J/\Psi$ and $\Psi'$
are taken from the combination of BES, CLEO and BABAR experiments,
and for the continuum $R(s)$ from BES. For
the charm case there is also a non-perturbative contribution
which is, however, negligible for the three lowest moments and
remains relatively small even for the fourth. A careful investigation
of non-perturbative terms combined with the extrapolation of $R_{uds}$
as well as $R_c$ in the region sufficiently far above the respective
threshold leads to a remarkably consistent result, with errors on
$m_c(3~\mbox{GeV})$, as extracted for $n=1$ to $3$, below
20~MeV. The results for the moments, split into the
contributions from the narrow resonances, the threshold and the continuum
region, are shown in Tab.~\ref{tab:Mexp}. $R(s)$ around the charm
threshold region is shown in Fig.~\ref{fig:tbd}.
In particular we observe a remarkable consistency between the results
for $n=1,2,3$ and $4$ and a relatively small shift when moving from
three- to four-loop approximation (cf. Fig.~\ref{fig:mcmoments}).
\begin{table}[!h]
\begin{center}
\begin{tabular}{l|r@{}l|rrrr|c}
\hline
$n$ & \multicolumn{2}{|c|}{$m_c(3~\mbox{GeV})$} & exp & $\alpha_s$ & $\mu$ & np &
total
\\
\hline
1& &986& 9& 9& 2& 1 & 13 \\
2& &976& 6& 14& 5& 0 & 16 \\
3& &978& 5& 15& 7& 2 & 17 \\
4& 1&004& 3& 9& 31& 7 & 33 \\
\hline
\end{tabular}
\caption{\label{tab:mc1}The second column shows the results for
$m_c(3~\mbox{GeV})$ in MeV. The errors in the four inner columns are
from experiment, $\alpha_s$, variation of $\mu$
and the gluon condensate. The last column shows the total error.
}
\end{center}
\end{table}
Taking the lowest moment as our
final result we find~\cite{Chetyrkin:2009fv}
\begin{equation}
m_c(3~\mbox{GeV})=986\pm13~\mbox{MeV}.
\end{equation}
When converted from $\mu=3$~GeV to the scale $m_c$, this is modified to
$m_c(m_c)=1279\pm13$~MeV, nicely consistent with other
determinations~\cite{Allison:2008xk,McNeile:2010ji}. The
robustness of our result is demonstrated in Fig.~\ref{fig:mcmoments}, where the
results are compared for different orders $\mathcal{O}(\alpha_s^i)$,
with $i$=0, 1, 2 and 3 and for different moments, with $n$ varying
between $n=1$ and $n=4$.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=\columnwidth]{mcmc}
\end{center}
\vspace*{-0.6cm}
\caption{Dependence of $m_c(3~\mbox{GeV})$ on the number of moments $n$ and on $\mathcal{O}\left(\alpha_s^i\right)$ for $i=0,\dots,3$.
\label{fig:mcmoments}}
\end{figure}
The result can be compared to a large
number of other determinations, based on various different
observables. Fig.~\ref{fig:mccomparison} shows a compilation
of recent analyses~\cite{Kuhn:2001dm,Rolf:2002gu,Kuhn:2007vp,Allison:2008xk,Signer:2008da,Chetyrkin:2009fv,Chetyrkin:2010ic,Bodenstein:2011ma,Heitger:2013oaa,Carrasco:2014cwa,Chakraborty:2014aca,Dehnadi:2014kya,Agashe:2014kda}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=\columnwidth]{mc3cmp}
\end{center}
\vspace*{-0.6cm}
\caption{Comparison of $m_c(3~\mbox{GeV})$ with several other
results.\label{fig:mccomparison}}
\end{figure}
Similar considerations are applicable for the corresponding
investigations of the $\Upsilon$-resonances and the mass of the bottom
quark. For convenience of the reader we again list in
Tab.~\ref{tab::mb_mom} separately the contributions from the narrow
resonances ($\Upsilon(1S)-\Upsilon(4S)$)~\cite{Yao:2006px}, the
threshold region (10.618~GeV-11.2~GeV)~\cite{Aubert:2008ab} and the
perturbative continuum ($E>11.2$~GeV).
\begin{table}[!h]
\centering
{\scalefont{0.8}
\begin{tabular}{c|lll|l}
\hline
$n$&${\cal M}_n^{\mbox{res,(1S-4S)}}$&${\cal M}_n^{\mbox{thresh}}$&${\cal M}_n^{\mbox{cont}}$&${\cal M}_n^{\mbox{exp}}$
\\
& $\times 10^{(2n+1)}$
& $\times 10^{(2n+1)}$
& $\times 10^{(2n+1)}$
& $\times 10^{(2n+1)}$\\
\hline
$1$&$ 1.394(23)$ &$ 0.287(12)$ &$ 2.911(18)$ &$ 4.592(31)$ \\
$2$&$ 1.459(23)$ &$ 0.240(10)$ &$ 1.173(11)$ &$ 2.872(28)$ \\
$3$&$ 1.538(24)$ &$ 0.200(8)$ &$ 0.624(7)$ &$ 2.362(26)$ \\
$4$&$ 1.630(25)$ &$ 0.168(7)$ &$ 0.372(5)$ &$ 2.170(26)$ \\\hline
\end{tabular}
\caption{Moments for the bottom quark system in $(\mbox{GeV})^{-2n}$}
\label{tab::mb_mom}
}
\end{table}
For the lowest moment the latter gives the
main contribution; starting from the second moment the resonance and the
threshold regions are again dominant. In particular, moments two
and three offer a fair compromise between the smallness of the error and the
contribution of
the threshold region. Significant progress has been made between the
first measurements of CLEO~\cite{Besson:1984bd} and the more recent one
by the BABAR collaboration in particular in the continuum region.
As expected~\cite{Kuhn:2007vp} the original
CLEO result is too large by a factor of about 1.3 but reproduces well the
qualitative behaviour. The more recent BABAR
result~\cite{Aubert:2008ab}, however, is significantly more precise and
well in agreement with the expectations for the continuum, based on the
parton cross section. Let us note in this connection that the
original BABAR result for $R_b(s)$ has to be deconvoluted with respect
to initial state radiation (ISR), a fact that leads to a slight shift of
$R_b(s)$.
The results for the bottom quark mass
for the lowest four moments are given in
Tab.~\ref{tab:mb}~\cite{Chetyrkin:2009fv,Chetyrkin:2010ic}.
For our final results we choose $n=2$ which leads to
\begin{eqnarray}
m_b(m_b) &=&\,4.163 \pm 0.016~\mbox{GeV}\,,\nonumber\\
m_b(10~\mbox{GeV}) &=&\,3.610 \pm 0.016~\mbox{GeV}\,, \nonumber\\
m_b(M_H)&=&2.759 \pm 0.028~\mbox{GeV}\,.
\label{eq:mb}
\end{eqnarray}
\begin{table}[!h]
\begin{center}
\begin{tabular}{l|l|rrr|l|l}
\hline
$n$ & $m_b(10~\mbox{GeV})$ &
exp & $\alpha_s$ & $\mu$ &
total &$m_b(m_b)$
\\
\hline
1& 3597& 14& 7& 2& 16& 4151 \\
2& 3610& 10& 12& 3& 16& 4163 \\
3& 3619& 8& 14& 6& 18& 4172 \\
4& 3631& 6& 15& 20& 26& 4183 \\
\hline
\end{tabular}
\end{center}
\vspace*{-0.6cm}
\caption{The different columns show the results for
$m_b(10~\mbox{GeV})$ in the second column, obtained from the
different moments listed in the first column. The last column gives
the value of $m_b(m_b)$. The three inner columns give the
uncertainty due to the error in the experimental moments (exp), the
uncertainty due to the error in $\alpha_s$ and the uncertainty due
to the residual scale dependence $\mu$. The second to last column
gives the total uncertainty. All masses and uncertainties are in
units of MeV.\label{tab:mb}}
\end{table}
The consistency among the results for the lowest four
moments is very close to the one from 2007~\cite{Kuhn:2007vp}, where
only estimates were available for the four-loop term of $n=2, 3$ and
$4$. Furthermore, only recalibrated results for the continuum
corresponding to the aforementioned factor 1.3 were available.
The result for
$m_b(m_b)$ can also be compared to those from other studies in
Fig.~\ref{fig:mbcomparison}~\cite{Kuhn:2001dm,Pineda:2006gx,Kuhn:2007vp,Chetyrkin:2009fv,Chetyrkin:2010ic,McNeile:2010ji,Bodenstein:2011fv,Hoang:2012us,Penin:2014zaa,Chakraborty:2014aca,Colquhoun:2014ica,Ayala:2014yxa,Beneke:2014pta,Agashe:2014kda}. Although somewhat towards the low side,
the results are well compatible with those of earlier investigations.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=\columnwidth]{mbcmp}
\end{center}
\vspace*{-0.6cm}
\caption{Comparison of $m_b(m_b)$ with several other
determinations.\label{fig:mbcomparison}}
\end{figure}
In Fig.~\ref{fig::ka_pdg} the results for the charm and bottom quark masses as
obtained from the low-moment sum
rules~\cite{Kuhn:2001dm,Kuhn:2007vp,Chetyrkin:2009fv,Chetyrkin:2010ic} are
compared with the numerical values proposed by the PDG for the years between
2000 and 2014. It is interesting to note that the extracted mass values, which
were based first on three-, later on four-loop perturbative input,
remained rather constant whereas the PDG numbers seem to converge towards
these results.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\columnwidth]{mc_ka_pdg}
\includegraphics[width=\columnwidth]{mb_ka_pdg}
\end{center}
\vspace*{-0.6cm}
\caption{\label{fig::ka_pdg}
Comparison of $m_c(m_c)$ and $m_b(m_b)$ as obtained using
low-moment sum
rules~\cite{Kuhn:2001dm,Kuhn:2007vp,Chetyrkin:2009fv,Chetyrkin:2010ic}
(narrow band) and the values from the PDG between 2000 and 2014.}
\end{figure}
\section{Further applications of massive tadpoles\label{sec:IV}}
\subsection*{Decoupling function at four loops}
In many QCD applications the mass of a heavy quark $m$ is much larger
than the characteristic momentum scale~$\sqrt{s}$ of a considered
physical process. As a result these different mass scales involved in
the process can lead to potentially large logarithms like
$\log(\mu/\sqrt{s})$ or $\log(\mu/m)$ when using an MS-like
renormalization scheme. In such a situation one cannot set the
renormalization scale $\mu$ to two different mass scales
simultaneously, so that a proper choice of $\mu$ in order to avoid
large logarithms is not possible anymore. However, by ``integrating
out'' the heavy quark field one can construct an effective field
theory with $n_l=n_f-1$ light quarks only, where $n_f$ is the number
of quark flavours.
The $\overline{\mbox{MS}}$ coupling constants $\alpha_s^{(n_f)}$ and
$\alpha_s^{(n_l)}$ of the quark-gluon interaction in the full
$n_f$-flavor QCD and the effective $n_l$-flavor one are different and
are related by the decoupling function
$\zeta_g(\mu,\alpha_s^{(n_f)}(\mu),m)$ through the matching condition
\begin{equation}
\alpha_s^{(n_l)}(\mu) =
\zeta_g^2(\mu,\alpha_s^{(n_f)}(\mu),m)\,\,\alpha_s^{(n_f)}(\mu).
\end{equation}
At leading order the decoupling function is equal to one, but receives
corrections in higher orders of perturbation theory. This matching
condition for the $\overline{\mbox{MS}}$ strong coupling constant $\alpha_s$ at a
heavy quark threshold has been computed in
Refs.~\cite{Chetyrkin:2005ia,Schroder:2005hy} to four-loop order. The
decoupling function can be determined through the computation of
polarization functions. The bare relation for $\zeta_g^0$
reads~\cite{Chetyrkin:1997un}
\begin{equation}
\zeta_g^0={\tilde{\zeta}_{1}^{0}\over\tilde{\zeta}_{3}^{0}\sqrt{\zeta_{3}^{0}}},
\end{equation}
where
\begin{eqnarray}
\zeta_{3}^{0}&=&1+\Pi_{G}^{0h}(0),\nonumber\\
\tilde{\zeta}_{3}^{0}&=&1+\Pi_{c}^{0h}(0),\nonumber\\
\tilde{\zeta}_{1}^{0}&=&1+\Gamma_{G\bar{c}c}^{0h}(0,0)
\end{eqnarray}
with the gluon $G$ and ghost $c$ vacuum polarization functions
$\Pi_G^{0h}(q^2)$ and $\Pi_{c}^{0h}(q^2)$. The vertex function
$\Gamma_{G\bar{c}c}^{0h}(p,k)$ is the one-particle irreducible part of
the amputated Green's function, where $p$ and $k$ are the outgoing
four-momenta of the fields $c$ and $G$, respectively. The computation
of these functions leads again to the evaluation of tadpole diagrams
up to four-loop order. The four-loop contribution can be expressed in
terms of the 13 master integrals shown in Fig.~\ref{fig:MI}. The
renormalized decoupling function is obtained from
\begin{equation}
\alpha_s^{(n_l)}=\left(
{Z_g^{(n_f)}\over Z_g^{(n_l)}}\zeta_g^0\right)^2\alpha_s^{(n_f)}(\mu)
\equiv\zeta_g^2\alpha_s^{(n_f)}(\mu),
\end{equation}
where $(Z_g)^2$ is the renormalization constant of the strong coupling constant.
The decoupling function plays an important role in testing QCD by running the
strong coupling to different energy scales. For example, the strong coupling
$\alpha_s(m_{\tau})$ can be measured at the scale of the $\tau$-lepton mass
$m_{\tau}$. In the next step one can run this value to the scale of the
$Z$-boson mass $M_Z$ by using the proper running and decoupling at the heavy
charm- and bottom-quark thresholds and compare it to the experimentally
measured result of $\alpha_s(M_Z)$. This procedure thus provides an excellent
test of QCD asymptotic freedom. The four-loop contribution to the decoupling
function leads to a reduction of the matching-related uncertainties in the
evolution of $\alpha_s(m_{\tau})$ to $\alpha_s(M_Z)$ by a factor of
two~\cite{Chetyrkin:2005ia,Schroder:2005hy}.
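A hedged one-loop sketch of this procedure, where the decoupling is trivial ($\zeta_g=1$) and only $n_f$ changes at the threshold, reads (input values are illustrative):
\begin{verbatim}
# schematic evolution of alpha_s from m_tau to M_Z at one loop
from math import log, pi

def run(als, mu_from, mu_to, nf):
    b0 = 11 - 2*nf/3
    return als / (1 + als*b0/(4*pi)*log(mu_to**2/mu_from**2))

als = 0.33                            # alpha_s(m_tau), illustrative
als = run(als, 1.777, 4.2, nf=4)      # charm active below m_b
als = run(als, 4.2, 91.1876, nf=5)    # trivial matching at m_b
print(als)   # -> about 0.126 (vs. ~0.118 with higher orders)
\end{verbatim}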
\subsection*{Higgs-gluon coupling to N$^4$LO}
Gluon fusion is the dominant production mechanism of the SM Higgs boson $H$ at
the Large Hadron Collider (LHC), where the leading order process is already at
the one-loop level and the Higgs boson is produced by the fusion of two gluons
through a heavy top-quark loop. The decoupling function enters in this context
as an important building block since it can be used to derive the effective
coupling of the Higgs boson to gluons via the following low-energy theorem
\begin{equation}
C_1^0=-{1\over2}m_t^0{\partial \ln\zeta_g^0 \over\partial m_t^0}
\,.
\end{equation}
$C_1$ enters into an effective Lagrangian
\begin{equation}
\mathcal{L}_{\mbox{\footnotesize eff}}=-2^{1/4}G_F^{1/2}HC_1[O_1']
\end{equation}
in QCD with five flavours, where the top mass dependence is contained in
$C_1$. The symbol $G_F$ is the Fermi constant, and $[O_1']$ is the
renormalized form of the operator
$O_1'=G_{a\mu\nu}^{0'}G_{a}^{0'\mu\nu}$, where $G_{a\mu\nu}^{0'}$ is the
gluon field strength tensor. The prime indicates that the object is in
the effective five-flavour theory and the superscript $0$ denotes a bare
quantity. Using the four-loop result for $\zeta_g^2$ of
Refs.~\cite{Chetyrkin:2005ia,Schroder:2005hy} allows one to determine $C_1$
in four-loop approximation, which confirms the result of
Ref.~\cite{Chetyrkin:1997un} in a completely different and independent
way. With the help of the anomalous dimensions even the five-loop
contribution to $C_1$ has been
predicted~\cite{Chetyrkin:2005ia,Schroder:2005hy} up to unknown
five-loop $n_f$-dependent terms of the $\beta$ function.
\subsection*{Decoupling of heavy quarks from the running of the fine
structure constant}
In complete analogy one can determine from the massive photon vacuum
polarization function the photon decoupling function
${\left({\zeta^0_{g\gamma}}\right)^2}$
\begin{equation}
{\left({\zeta^0_{g\gamma}}\right)^2}={1\over 1+\Pi_{\gamma}^{0h}(0)}\,.
\end{equation}
The three-loop results of Ref.~\cite{Chetyrkin:1997un} have been
extended to four loops in Ref.~\cite{Sturm:2014nva}. Starting from
three-loop order there arise also diagrams where the external photon couples
to massless fermions with the insertion of a heavy fermion loop. At
four-loop order also singlet type diagrams arise for the first time,
where the photon couples to two different fermion loops. Some example
diagrams are shown in Fig.~\ref{fig:Pi4loop}.
\begin{figure}[!h]
\begin{center}
\begin{minipage}{2cm}
\begin{center}
\includegraphics[width=2cm]{SingletA}
\end{center}
\end{minipage}
\begin{minipage}{2cm}
\begin{center}
\includegraphics[width=2cm]{SingletB}
\end{center}
\end{minipage}
\begin{minipage}{2cm}
\begin{center}
\includegraphics[width=2cm]{VacPol4loopQq-h}
\end{center}
\end{minipage}\\[0.2cm]
\begin{minipage}{2cm}
\begin{center}
\includegraphics[width=2cm]{VacPol4loop-hh}
\end{center}
\end{minipage}
\begin{minipage}{2cm}
\begin{center}
\includegraphics[width=2cm]{VacPol4loopQq-hg}
\end{center}
\end{minipage}
\begin{minipage}{2cm}
\begin{center}
\includegraphics[width=2cm]{VacPol4loop-Fh}
\end{center}
\end{minipage}
\end{center}
\vspace*{-0.6cm}
\caption{
The first two diagrams are example singlet diagrams; the last four
diagrams are examples for the situation where the external photon
couples to a massless fermion loop with the insertion of a heavy
internal fermion loop. The solid lines represent heavy top-quarks, the
twisted lines denote gluons, and the dashed lines represent massless
quarks.\label{fig:Pi4loop}}
\end{figure}
\subsection*{Higgs boson decay to photons}
The photon decoupling function can be used to determine higher order
QCD corrections to the partial decay width of the Higgs boson into two
photons ($\gamma$). The partial decay width for $H\to\gamma\gamma$,
\begin{equation}
\label{eq:decaywidth}
\Gamma(H\to\gamma\gamma)={M_H^3\over64\*\pi}\*\Big|A_W(\tau_W)+\sum_{f}A_f(\tau_f)\Big|^2\,,
\end{equation}
is governed by an amplitude which consists of two parts, a purely bosonic part $A_W(\tau_W)$ and a
purely fermionic part $A_f(\tau_f)$ with $\tau_W=M_H^2/(4M_W^2)$ and
$\tau_f=M_H^2/(4M_f^2)$. Higher order QCD corrections modify the
fermionic part $A_f(\tau_f)$ of the amplitude. The top quark gives the
dominant contribution to the amplitude $A_f$ ($f=t$), since it is the heaviest
fermion in the SM. In the heavy top-quark mass limit one can again
describe the Higgs-photon-photon interaction in terms of an effective
Lagrangian approach
\begin{equation}
\label{eq:Lhgg}
\mathcal{L}_{\mbox{\scriptsize{eff}}}=-{H^0\over v^0}\,C^0_{1\gamma}\,F^{'0,\mu\nu}F^{'0}_{\mu\nu}\,,
\end{equation}
with the vacuum expectation value $v^0$ and the field strength tensor
$F_{\mu\nu}^{'0}$. The subscript $0$ indicates a bare quantity and the
prime denotes that the quantity is considered in the effective theory
with $n_l$ light active quark flavours. The coefficient function
$C_{1\gamma}^{0}$ depends on the photon decoupling function
\begin{equation}
C_{1\gamma}^{0}=-{1\over2}m_t^0{\partial\ln \zeta_{g\gamma}^0 \over\partial
m_t^0}.
\end{equation}
This approach allows one to determine the leading contributions in the
heavy top-quark mass limit to the Higgs-boson decay into two photons,
where the external photons couple to the same heavy fermion loop, the
so-called non-singlet contributions.
At three-loop order in perturbative QCD the non-singlet contributions to the
decay $H\to\gamma\gamma$ have been computed in
Ref.~\cite{Steinhauser:1996wy} with several different methods, including
power corrections of the order $[M_H^2/(4M_t^2)]^2$. The singlet
contributions have been added in Ref.~\cite{Maierhofer:2012vv},
where also additional power corrections of higher orders in
$M_H^2/(4M_t^2)$ were calculated. The singlet contributions appear for
the first time at three-loop order. An example diagram is shown in
Fig.~\ref{fig:Hgamgam3loopSing}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=4cm]{Hgg3loopSing}
\end{center}
\vspace*{-0.6cm}
\caption{Example three-loop singlet diagram. Solid lines denote top
quarks, wavy lines are photons, twisted lines represent gluons and the
dashed line is the Higgs boson.\label{fig:Hgamgam3loopSing} }
\end{figure}
In Ref.~\cite{Chetyrkin:1997un} $C_{1\gamma}$ has been determined at
four-loop order in the effective Lagrangian approach with the help of
the knowledge of the anomalous dimensions. This result was subsequently also
obtained independently through the calculation of the four-loop order of
the decoupling function $\zeta_{g\gamma}$ in Ref.~\cite{Sturm:2014nva},
where also the corresponding five-loop contributions were determined,
with the help of the anomalous dimensions. Some example diagrams at
four-loop order are depicted in Fig.~\ref{fig:Hgamgam4loop}.
\begin{figure}[!h]
\begin{center}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{Cf3}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{Cf2Ca}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{CfCa2}\\
\end{center}
\end{minipage}\\[0.2cm]
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{Cf2nh}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{CfCanh}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{Cf2nl}\\
\end{center}
\end{minipage}\\[0.2cm]
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{CfCanl}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{Cfnl2}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{Cfnhnl}\\
\end{center}
\end{minipage}\\[0.2cm]
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{Cfnh2}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{SiCfnh}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{SiCfCanh}\\
\end{center}
\end{minipage}\\[0.2cm]
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{SiCfnlnh}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{SiCfnh2}\\
\end{center}
\end{minipage}
\begin{minipage}{2.5cm}
\begin{center}
\includegraphics[width=2.5cm]{dabcnh}\\
\end{center}
\end{minipage}
\end{center}
\caption{Example diagrams which illustrate the different kinds of diagram
classes that have been determined in the heavy top-quark mass limit
at four-loop order. Dotted lines represent massless quarks; all other
lines are as defined in Fig.~\ref{fig:Hgamgam3loopSing}.
\label{fig:Hgamgam4loop} }
\end{figure}
\section{Conclusions\label{sec:DiscussConclude}}
The systematic investigation of four-loop tadpole integrals started about a
decade ago.
In this article we have briefly touched on the techniques which have been
developed to perform the reduction to master integrals and to obtain results
for the latter. Furthermore, we have described in some detail the most
important applications. Among them are four-loop corrections to the
electroweak $\rho$ parameter, the precise determination of charm and bottom
quark masses and the decoupling constants in QCD. The latter have a close
connection to Higgs boson production and decay into gluons and photons
which has also been elaborated.
\section*{Acknowledgements}
This work is supported by the Deutsche
Forschungsgemeinschaft in the Sonderforschungsbereich Transregio~9
``Computational Particle Physics''.
\nocite{*}
\bibliographystyle{elsarticle-num}
|
2,869,038,153,843 | arxiv | \section{ Introduction}
The anharmonic oscillator is a physical system generalizing the simple
linear harmonic oscillator $\frac{d^{2}x}{dt^{2}}+\omega _{0}^{2}x(t)=0$,
where $x(t)$ is the position coordinate, $t$ is the time, and $\omega _{0}$
is the oscillation frequency. In general, the time evolution of the space
variable $x$ of the anharmonic oscillator is governed by the following
nonlinear second order differential equation \cite{7,12}
\begin{equation}
\frac{d^{2}x}{dt^{2}}+f_{1}\left( t\right) \frac{dx}{dt}+f_{2}\left(
t\right) x+f_{3}\left( t\right) x^{n}=f_{4}\left( t\right) , \label{1}
\end{equation}%
where $f_{i}\left( t\right) $, $i=1,2,3,4$, and $x$ are arbitrary real
functions of $t$ defined on a real interval $I\subseteq \Re $, with $%
f_{i}\left( t\right) $ and $x\left( t\right) $ $\in C^{\infty }(I)$. The
factors $f_{i}\left( t\right) $ are physically interpreted as follows: $%
f_{1}\left( t\right) $ is a damping factor; $f_{2}\left( t\right) $ is a
time dependent oscillation frequency coefficient; $f_{3}\left( t\right) $ is
the simplest possible anharmonic term; $f_{4}\left( t\right) $ is a forcing
term, and $n$ is a real constant \cite{7}. The equation of motion of the anharmonic
oscillator is strongly nonlinear, and when the anharmonicity term $%
f_{3}\left( t\right) x^{n}$ is small, its solutions can be obtained by using
perturbation theory. If the anharmonicity is large, then numerical
techniques need to be employed instead.
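For illustration, such a numerical integration can be set up in a few lines; the following is a minimal sketch (with illustrative constant coefficients $f_1=0.1$, $f_2=1$, $f_3=0.5$, $f_4=0$ and $n=3$, i.e. a damped Duffing-type oscillator):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, f1=0.1, f2=1.0, f3=0.5, n=3):
    x, xdot = y
    return [xdot, -f1*xdot - f2*x - f3*x**n]

sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 30.0, 7)
print(np.round(sol.sol(t)[0], 3))   # damped anharmonic oscillations
\end{verbatim}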
The anharmonic oscillator equation Eq.~(\ref{1}) with specific values of the
exponent $n$ can be used to model many different physical systems. For $%
n=3/2 $, one obtains the Thomas--Fermi atomic model \cite{2,3}, while the
case $n=-3$ corresponds to the Ermakov \cite{4}, or Pinney \cite{5},
equation. For $n=-1$ one obtains the Brillouin electron beam focusing system
equation \cite{22s,23s}, and $n=3$ gives the case of the Duffing oscillator %
\cite{6}.
An interesting particular case of the general anharmonic oscillator equation
Eq.~(\ref{1}) is the Ermakov-Pinney equation (EPE), which is a well-known
example of a nonlinear second order differential equation with important
physical applications (we refer the reader to \cite{6a,6aa} for an
historical development and an excellent review of the properties of the EPE
equation). The EPE is endowed with a wide range of physical applications,
including quantum cosmology \cite{7a}, dynamics of scalar field cosmologies
and the braneworld scenario \cite{8a}, quantum field theory \cite{9a,10a},
nonlinear elasticity \cite{11a}, nonlinear optics \cite{12a,13a},
description of the wavefunction of Bose-Einstein condensates (BEC) at the
mean-field level \cite{14a}, the envelope of the electric field in nonlinear
optics \cite{16a}, amongst others. In this context, the EPE provides an
effective description for the time-dependence of the relevant spatially
dependent field, typically being associated with its width both in the BEC %
\cite{17a,18a} and in optical settings \cite{19a}. The mathematical analysis
and structure of the EPE have been extensively discussed in \cite%
{20a,21a,22a,23a}.
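As a minimal illustration in our notation (constant coefficients, $f_1=0$, $f_2=\omega_0^2$, $f_3=-k$ with $k>0$, and $n=-3$), the EPE
\[
\frac{d^{2}x}{dt^{2}}+\omega_0^{2}\,x=\frac{k}{x^{3}}
\]
admits the constant particular solution $x_{p}=\left(k/\omega_0^{2}\right)^{1/4}$, as direct substitution shows, and nearby solutions oscillate about this equilibrium.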
Note that for generic values of the coefficients, Eq.~(\ref{1}) is
equivalent to a third order autonomous dynamical system, which generically
admits no closed form general solution \cite{7}. The mathematical properties
and applications of particular forms of Eq.~(\ref{1}) have been widely
investigated, such as the partial integrability of the anharmonic
oscillator \cite{7}, the time-dependent driven anharmonic oscillator and its
adiabaticity properties \cite{8}, toroidal $p$-branes, anharmonic
oscillators and (hyper)elliptic solutions \cite{9}, conformal mappings and
other power series methods for solving ordinary differential equations \cite%
{10}, and the anharmonic oscillator in the context of the optimized basis
expansion \cite{11}. The Painlev\'{e} analysis of Eq.~(\ref{1}) was
performed in \cite{13}. Specific transformation properties of the anharmonic
oscillator were considered in \cite{12}, where an excellent review of the
Lie symmetries approach to Eq.~(\ref{1a}) can also be found.
The most general conditions on the functions $f_{1}$, $f_{2}$ and $f_{3}$,
for which Eq.~(\ref{1a}) may be integrable, as well as conditions for the
existence of Lie point symmetries, were obtained in \cite{12}.
Time-dependent first integrals were also constructed. The main results of %
\cite{12} are that if $n\notin \left\{ -3,-1,0,1\right\} $, then Eq.~(\ref{1a}) can be point transformed to an equation of the form $d^{2}X/dT^{2}+X^{n}\left( T\right) =0$, that it can be linearized to $d^{2}X/dT^{2}+k_{2}=0$ with $k_{2}\in \Re \backslash \{0\}$, and that it admits a two-dimensional Lie point symmetry algebra.
It is the purpose of the present paper to obtain, by using the results of %
\cite{12}, some classes of exact solutions of the anharmonic oscillator Eq.~(%
\ref{1}) without the forcing term. The first solution is obtained by
considering a particular solution of the point transformed equation $%
d^{2}X/dT^{2}+X^{n}\left( T\right) =0$, equivalent to the initial anharmonic
oscillator equation. The integrability condition obtained in \cite{12} for
the anharmonic oscillator can be formulated as a Riccati equation for either the damping coefficient $f_{1}(t)$ or the logarithmic derivative $\frac{1}{f_{3}(t)}\frac{df_{3}}{dt}$.
By imposing some specific constraints on the coefficients of the Riccati
equation, namely, by requiring that the Riccati equation can be reduced to a
Bernoulli equation, two distinct classes of exact solutions of the
anharmonic oscillator equation with zero forcing term are obtained. In the
analysis outlined below, we shall use the generalized Sundman
transformations $X\left( T\right) =F\left( t,x\right) $ and $dT=G\left(
t,x\right) dt$ \cite{14,15,16}. The latter have been widely applied in the
literature \cite{14,15}, namely, in the study of the mathematical properties
of the second order differential equations, and the third order differential
equation
\begin{equation}
\frac{d^{3}x}{dt^{3}}+h_{1}\left( t\right) \frac{d^{2}x}{dt^{2}}+h_{2}\left(
t\right) \frac{dx}{dt}+h_{3}\left( t\right) x+h_{4}\left( t\right) =0.
\end{equation}
The present paper is organized as follows. Three distinct classes of exact solutions of Eq.~(\ref{1}) without the forcing term, which explicitly describe the time
evolution of the anharmonic oscillator, are presented in Section \ref%
{sect2_1}. We discuss and conclude our results in Section \ref{sect3}.
\section{Exact integrability cases for the anharmonic oscillator}
\label{sect2_1}
In the present Section, by starting from the integrability condition of the
anharmonic oscillator equation obtained in \cite{12}, we obtain three cases
of exact integrability of the anharmonic oscillator without forcing.
\subsection{The integrability condition for the anharmonic oscillator}
\label{sect2}
In the following we assume that the forcing term $f_{4}( t) $ vanishes in
Eq.~(\ref{1}). Hence the latter takes the form
\begin{equation}
\frac{d^{2}x}{dt^{2}}+f_{1}\left( t\right) \frac{dx}{dt}+f_{2}\left(
t\right) x+f_3(t) x^{n}=0. \label{1a}
\end{equation}
An integrability condition of Eq.~(\ref{1a}) can be formulated as follows:
\textbf{Theorem} \cite{12}. If and only if $n\notin \left\{ -3,-1,0,1\right\} $ and the coefficients of Eq.~(\ref{1a}) satisfy the differential condition
\begin{equation}
f_{2}\left( t\right) =\frac{1}{n+3}\frac{1}{f_{3}(t)}\frac{d^{2}f_{3}}{dt^{2}%
}-\frac{n+4}{\left( n+3\right) ^{2}}\left[ \frac{1}{f_{3}(t)}\frac{df_{3}}{dt%
}\right] ^{2}+\frac{n-1}{\left( n+3\right) ^{2}}\left[ \frac{1}{f_{3}(t)}%
\frac{df_{3}}{dt}\right] f_{1}\left( t\right) +\frac{2}{n+3}\frac{df_{1}}{dt}%
+\frac{2\left( n+1\right) }{\left( n+3\right) ^{2}}f_{1}^{2}\left( t\right) ,
\label{5c}
\end{equation}%
with the help of the pair of transformations
\begin{eqnarray}
X\left( T\right) &=&Cx\left( t\right) f_{3}^{\frac{1}{n+3}}\left( t\right)
e^{\frac{2}{n+3}\int^{t}f_{1}\left( \phi \right) d\phi }, \label{3b} \\
T\left( x,t\right) &=&C^{\frac{1-n}{2}}\int^{t}f_{3}^{\frac{2}{n+3}}\left(
\xi \right) e^{\left( \frac{1-n}{n+3}\right) \int^{\xi }f_{1}\left( \phi
\right) d\phi }d\xi , \label{4a}
\end{eqnarray}%
where $C$ is an arbitrary constant, Eq.~(\ref{1a}) can be point transformed
into the second order differential equation for $X\left( T\right) $,
\begin{equation}
\frac{d^{2}X}{dT^{2}}+X^{n}\left( T\right) =0. \label{2a}
\end{equation}
The general solution of Eq.~(\ref{2a}) is given by
\begin{equation}
T=T_{0}+\epsilon \int \frac{dX}{\sqrt{2\left( C_{0}-\frac{X^{n+1}}{n+1}%
\right) }}, \qquad n\neq -1, \label{T}
\end{equation}
where $T_{0}$ and $C_{0}$ are arbitrary constants of integration. For convenience, we have denoted $T=T_{\pm }$, $C_{0}=C_{0\pm }$, $T_{0}=T_{0\pm }$ and $\epsilon =\pm 1$.
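As an aside (ours, not part of the derivation in \cite{12}), the quadrature (\ref{T}) is simply the integrated form of the energy first integral $\frac{1}{2}\left( dX/dT\right) ^{2}+X^{n+1}/\left( n+1\right) =C_{0}$ of Eq.~(\ref{2a}). The following minimal Python sketch, assuming the standard \texttt{numpy} and \texttt{scipy} libraries, integrates Eq.~(\ref{2a}) numerically for a sample exponent and confirms the conservation of this first integral:
\begin{verbatim}
# Illustrative check (not from the paper): along any solution of
# X'' + X^n = 0 the energy E = X'^2/2 + X^(n+1)/(n+1) is constant;
# Eq. (T) is this first integral solved for dT.
import numpy as np
from scipy.integrate import solve_ivp

n = 3  # sample exponent (Duffing-type case)
sol = solve_ivp(lambda T, y: [y[1], -y[0]**n], (0.0, 10.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12)
X, V = sol.y
E = 0.5 * V**2 + X**(n + 1) / (n + 1)
print("max energy drift:", np.max(np.abs(E - E[0])))  # tiny, ~1e-9
\end{verbatim}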
By substituting the integrability condition given by Eq.~(\ref{5c}) into
Eq.~(\ref{1a}), we obtain the following integrable differential equation
\begin{eqnarray}
&&\frac{d^{2}x}{dt^{2}}+f_{1}\left( t\right) \frac{dx}{dt}+\Bigg\{ \frac{1}{%
n+3} \frac{1}{f_3(t)}\frac{d^{2}f_{3}}{dt^{2}} -\frac{n+4}{\left( n+3\right)
^{2}}\left[ \frac{1}{f_3(t)}\frac{df_{3}}{dt}\right] ^{2} + \notag \\
&&\frac{n-1}{\left( n+3\right) ^{2}}\left[ \frac{1}{f_3(t)}\frac{df_{3}}{dt}%
\right] f_{1}\left( t\right) +\frac{2}{n+3}\frac{df_{1}}{dt}+\frac{2\left(
n+1\right) }{\left( n+3\right) ^{2}}f_{1}^{2}\left( t\right) \Bigg\} %
x+f_{3}\left( t\right) x^{n}=0, \qquad n\notin \left\{-3,-1,0,1\right\}.
\label{15m}
\end{eqnarray}
\subsection{A particular exact solution for the anharmonic oscillator
equation}
The general solution of Eq.~(\ref{2a}) can be given as
\begin{equation}
T=T_{0}+\frac{\epsilon }{C_{0}}X\sqrt{\frac{C_{0}\left( n+1\right) -X^{n+1}}{%
2\left( n+1\right) }}\,_{2}F_{1}\left[ 1,\frac{n+3}{2\left( n+1\right) };%
\frac{n+2}{n+1};\frac{X^{n+1}}{C_{0}(n+1)}\right] , \qquad n\neq -1,
\end{equation}%
where $_{2}F_{1}(a,b;c;d)$ is the hypergeometric function. A particular
solution of Eq.~(\ref{2a}) is given by
\begin{equation}
X\left( T\right) =\left[ \epsilon \left( T-T_{0}\right) \right] ^{\frac{2}{%
1-n}}\left[ -\frac{\left( n-1\right) ^{2}}{2\left( n+1\right) }\right] ^{%
\frac{1}{1-n}}, \label{X2}
\end{equation}%
where we have defined $X\left( T\right) =X_{\pm }\left( T\right) $, and we have taken the arbitrary integration constant as zero, $C_{0}=0$. In order to have a real value of the displacement $x(t)$, one must impose the condition $n<-1$ on the anharmonicity exponent $n$, since only for $n<-1$ is the factor $-\left( n-1\right) ^{2}/\left[ 2\left( n+1\right) \right] $ positive. From
Eqs.~(\ref{3b}) and (\ref{X2}), we obtain the result
\begin{equation}
x\left( t\right) =\frac{1}{C}\left[ \epsilon \left( T-T_{0}\right) \right] ^{%
\frac{2}{1-n}}\left[ -\frac{\left( n-1\right) ^{2}}{2\left( n+1\right) }%
\right] ^{\frac{1}{1-n}}f_{3}^{-\frac{1}{n+3}}\left( t\right) e^{-\frac{2}{%
n+3}\int^{t}f_{1}\left( \phi \right) d\phi }, \label{X1}
\end{equation}%
where we have denoted $x\left( t\right) =x_{\pm }\left( t\right) $, for
simplicity.
Inserting Eq.~(\ref{4a}) into Eq.~(\ref{X1}) yields a particular solution of Eq.~(\ref{15m}), describing the time evolution of the anharmonic oscillator.
Therefore we have obtained the following:
\textbf{Corollary 1}. The anharmonic oscillator equation Eq.~(\ref{15m}) has
the particular solution
\begin{equation}
x\left( t\right) =x_{0}\left[ C^{\frac{1-n}{2}}\int^{t}f_{3}^{\frac{2}{n+3}%
}\left( \xi \right) e^{\left( \frac{1-n}{n+3}\right) \int^{\xi }f_{1}\left(
\phi \right) d\phi }d\xi -T_{0}\right] ^{\frac{2}{1-n}}f_{3}^{-\frac{1}{n+3}%
}\left( t\right) e^{-\frac{2}{n+3}\int^{t}f_{1}\left( \phi \right) d\phi
}, \qquad n<-1, \label{X2k}
\end{equation}
where we have defined $x_{0}=C^{-1}\left[ -\frac{\left( n-1\right) ^{2}}{%
2\left( n+1\right) }\right] ^{\frac{1}{1-n}}$.
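As a hedged illustration (ours, assuming the \texttt{sympy} library), the particular solution (\ref{X2}) underlying this Corollary can be checked symbolically against Eq.~(\ref{2a}); here $s$ stands for $\epsilon \left( T-T_{0}\right) >0$ and the sample admissible exponent is $n=-2$:
\begin{verbatim}
# Symbolic check (illustrative) that Eq. (X2) solves X'' + X^n = 0.
import sympy as sp

s = sp.symbols('s', positive=True)   # s = epsilon*(T - T0)
n = sp.Integer(-2)                   # sample exponent with n < -1
A = (-(n - 1)**2 / (2*(n + 1)))**(1/(1 - n))   # amplitude in Eq. (X2)
X = A * s**(2/(1 - n))
print(sp.simplify(sp.diff(X, s, 2) + X**n))    # expected output: 0
\end{verbatim}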
\subsection{Second integrability case for the anharmonic oscillator equation}
Now, rearranging the terms of Eq.~(\ref{5c}) yields the following Riccati equation for $f_{1}\left( t\right) $,
\begin{equation}
\frac{df_{1}}{dt}=a\left( t\right) +b\left( t\right) f_{1}\left( t\right)
+c\left( t\right) f_{1}^{2}\left( t\right) , \label{6a}
\end{equation}%
where the coefficients $a(t)$, $b(t)$ and $c(t)$ are defined as
\begin{eqnarray}
a\left( t\right) &=&\frac{3+n}{2}f_{2}\left( t\right) -\frac{1}{2f_{3}\left(
t\right) }\frac{d^{2}f_{3}}{dt^{2}}+\frac{4+n}{2\left( 3+n\right) }\left[
\frac{1}{f_{3}\left( t\right) }\frac{df_{3}}{dt}\right] ^{2}, \label{7a} \\
b\left( t\right) &=&\frac{1-n}{2\left( 3+n\right) }\left[ \frac{1}{%
f_{3}\left( t\right) }\frac{df_{3}}{dt}\right] , \label{8a} \\
c\left( t\right) &=&-\frac{1+n}{3+n}. \label{9a}
\end{eqnarray}
We now assume that the coefficient $a\left( t\right) $ of Eq.~(\ref{6a}) vanishes, so that Eq.~(\ref{7a}) can be written as
\begin{equation}
f_{2}\left( t\right) =\frac{1}{3+n}\frac{1}{f_{3}\left( t\right)} \frac{%
d^{2}f_{3}}{dt^{2}} -\frac{4+n}{\left( 3+n\right) ^{2}}\left[ \frac{1}{%
f_{3}\left( t\right)} \frac{df_{3}}{dt}\right] ^{2}. \label{10a}
\end{equation}
Hence the Riccati Eq.~(\ref{6a}) becomes a Bernoulli-type equation, with the general solution given by
\begin{equation}
f_{1}\left( t\right) =\frac{f_{3}^{\frac{1-n}{2\left( 3+n\right) }}\left(
t\right) }{C_{1}+\frac{1+n}{3+n}\int^{t}f_{3}^{\frac{1-n}{2\left( 3+n\right)
}}\left( \phi \right) d\phi }, \label{11a}
\end{equation}
where $C_{1}$ is an arbitrary constant of integration.
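As a sanity check (our own; a minimal \texttt{sympy} sketch with the sample choices $f_{3}(t)=e^{t}$ and $n=-2$), one can confirm that Eq.~(\ref{11a}) indeed solves the Bernoulli equation $df_{1}/dt=b\left( t\right) f_{1}+c\left( t\right) f_{1}^{2}$ with $b$ and $c$ given by Eqs.~(\ref{8a}) and (\ref{9a}):
\begin{verbatim}
# Illustrative verification of Eq. (11a) for sample choices
# f3(t) = exp(t) and n = -2 (so n is not in {-3,-1,0,1}).
import sympy as sp

t, C1 = sp.symbols('t C1')
n = sp.Integer(-2)
f3 = sp.exp(t)

b = (1 - n) / (2*(3 + n)) * sp.diff(f3, t) / f3        # Eq. (8a)
c = -(1 + n) / (3 + n)                                 # Eq. (9a)
g = f3**((1 - n) / (2*(3 + n)))
f1 = g / (C1 + (1 + n) / (3 + n) * sp.integrate(g, t)) # Eq. (11a)

print(sp.simplify(sp.diff(f1, t) - b*f1 - c*f1**2))    # expected: 0
\end{verbatim}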
By substituting Eqs.~(\ref{10a}) and (\ref{11a}) into Eq.~(\ref{1a}), the latter becomes the following differential equation
\begin{equation}
\frac{d^{2}x}{dt^{2}}+\left[ \frac{f_{3}^{\frac{1-n}{2\left( 3+n\right) }%
}\left( t\right) }{C_{1} +\frac{1+n}{3+n}\int^{t}f_{3}^{\frac{1-n}{2\left(
3+n\right) }}\left( \phi \right) d\phi }\right] \frac{dx}{dt} +\left\{ \frac{%
1}{3+n}\frac{1}{f_{3}\left( t\right)} \frac{d^{2}f_{3}}{dt^{2}} -\frac{4+n}{%
\left( 3+n\right) ^{2}}\left[ \frac{1}{f_{3}\left( t\right)} \frac{df_{3}}{dt%
}\right] ^{2} \right\} x+f_{3}\left( t\right) x^{n}=0. \label{12a}
\end{equation}
Therefore we have obtained the following:
\textbf{Corollary 2}. The general solution of Eq.~(\ref{12a}), describing
the time evolution of the anharmonic oscillator, is given by
\begin{eqnarray}
x\left( t\right) &=&x_{0}\left[ C^{\frac{1-n}{2}}\int^{t}f_{3}^{\frac{2}{n+3}%
}\left( \xi \right) e^{\left( \frac{1-n}{n+3}\right) \int^{\xi }\left[ \frac{%
f_{3}^{\frac{1-n}{2\left( 3+n\right) }}\left( \phi \right) }{C_{1}+\frac{1+n%
}{3+n}\int^{\phi }f_{3}^{\frac{1-n}{2\left( 3+n\right) }}\left( \psi \right)
d\psi }\right] d\phi }d\xi -T_{0}\right] ^{\frac{2}{1-n}}\times \notag \\
&&\times f_{3}^{-\frac{1}{n+3}}\left( t\right) e^{-\frac{2}{n+3}\int^{t}%
\left[ \frac{f_{3}^{\frac{1-n}{2\left( 3+n\right) }}\left( \phi \right) }{%
C_{1}+\frac{1+n}{3+n}\int^{\phi }f_{3}^{\frac{1-n}{2\left( 3+n\right) }%
}\left( \xi \right) d\xi }\right] d\phi }, \qquad n\notin \left\{-3,-1,0,1\right\}.
\label{13a}
\end{eqnarray}
\subsection{Third integrability case for the anharmonic oscillator equation}
Now, by introducing a new function $u\left( t\right) $ defined as
\begin{equation}
u\left( t\right) =\frac{1}{f_{3}\left( t\right) }\frac{df_{3}}{dt},
\end{equation}
or, equivalently,
\begin{equation}
f_{3}\left( t\right) =f_{03}e^{\int^{t}u\left( \phi \right) d\phi },
\end{equation}
where $f_{03}$ is an arbitrary constant of integration, Eq.~(\ref{5c}) takes the form of the following Riccati equation for $u\left( t\right) $,
\begin{equation}
\frac{du}{dt}=a_{1}\left( t\right) +b_{1}\left( t\right) u\left( t\right)
+c_{1}\left( t\right) u^{2}\left( t\right) , \label{6k}
\end{equation}%
where the coefficients are defined as
\begin{eqnarray}
a_{1}\left( t\right) &=&\left( 3+n\right) f_{2}\left( t\right) -\frac{%
2\left( 1+n\right) }{3+n}f_{1}^{2}\left( t\right) -2\frac{df_{1}}{dt},
\label{7k} \\
b_{1}\left( t\right) &=&\frac{1-n}{3+n}f_{1}\left( t\right) , \label{8k} \\
c_{1}\left( t\right) &=&\frac{1}{3+n}. \label{9k}
\end{eqnarray}
As before, we assume that the coefficient $a_1\left( t\right) $ of Eq.~(\ref{6k}) vanishes, so that Eq.~(\ref{7k}) can be written as
\begin{equation}
f_{2}\left( t\right) =\frac{2\left( 1+n\right) }{\left( 3+n\right) ^{2}}%
f_{1}^{2}\left( t\right) +\frac{2}{\left( 3+n\right) }\frac{df_{1}}{dt}.
\label{10m}
\end{equation}
Then the Riccati Eq.~(\ref{6k}) becomes a Bernoulli-type equation, with the general solution given by
\begin{equation}
u\left( t\right) =\frac{e^{\frac{1-n}{3+n}\int^{t}f_{1}\left( \phi \right)
d\phi }}{C_{2}-\frac{1}{3+n}\int^{t}e^{\frac{1-n}{3+n}\int^{\xi }f_{1}\left(
\phi \right) d\phi }d\xi }, \label{11m}
\end{equation}
where $C_{2}$ is an arbitrary constant of integration. Thus, the coefficient
$f_{3}\left( t\right) $ of Eq. (\ref{1a}) is readily given by
\begin{equation}
f_{3}\left( t\right) =f_{03}e^{\int^{t}\left[ \frac{e^{\frac{1-n}{3+n}%
\int^{\psi }f_{1}\left( \phi \right) d\phi }}{C_{2}-\frac{1}{3+n}\int^{\psi
}e^{\frac{1-n}{3+n}\int^{\xi }f_{1}\left( \phi \right) d\phi }d\xi }\right]
d\psi }. \label{12k}
\end{equation}
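Before proceeding, we note (as our own aside, assuming \texttt{sympy} and the sample values of a constant damping $f_{1}(t)=k>0$ and $n=-2$) that Eq.~(\ref{11m}) can be verified symbolically against the Bernoulli form of Eq.~(\ref{6k}) with $a_{1}=0$:
\begin{verbatim}
# Illustrative verification of Eq. (11m) for a sample constant
# damping f1(t) = k and the sample exponent n = -2.
import sympy as sp

t, C2, k = sp.symbols('t C2 k', positive=True)
n = sp.Integer(-2)
f1 = k

b1 = (1 - n) / (3 + n) * f1                       # Eq. (8k)
c1 = 1 / (3 + n)                                  # Eq. (9k)
E = sp.exp(sp.integrate(b1, t))
u = E / (C2 - sp.integrate(E, t) / (3 + n))       # Eq. (11m)

print(sp.simplify(sp.diff(u, t) - b1*u - c1*u**2))  # expected: 0
\end{verbatim}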
By substituting Eqs.~(\ref{10m}) and (\ref{12k}) into Eq.~(\ref{1a}), the latter becomes the following differential equation
\begin{equation}
\frac{d^{2}x}{dt^{2}}+f_{1}\left( t\right) \frac{dx}{dt}+\left[ \frac{%
2\left( 1+n\right) }{\left( 3+n\right) ^{2}}f_{1}^{2}\left( t\right) +\frac{2%
}{\left( 3+n\right) }\frac{df_{1}}{dt}\right] x+f_{03}e^{\int^{t}\left[
\frac{e^{\frac{1-n}{3+n}\int^{\psi }f_{1}\left( \phi \right) d\phi }}{C_{2}-%
\frac{1}{3+n}\int^{\psi }e^{\frac{1-n}{3+n}\int^{\xi }f_{1}\left( \phi
\right) d\phi }d\xi }\right] d\psi }x^{n}=0. \label{13k}
\end{equation}
Therefore we have obtained the following:
\textbf{Corollary 3}. The general solution of Eq. (\ref{13k}), describing
the time evolution of an anharmonic oscillator, is given by
\begin{eqnarray}
x\left( t\right) &=&x_{0}f_{03}^{-\frac{1}{3+n}}\left[ C^{\frac{1-n}{2}%
}f_{03}^{\frac{2}{3+n}}\int^{t}e^{\frac{2}{3+n}\int^{\xi }\left[ \frac{e^{%
\frac{1-n}{3+n}\int^{\psi }f_{1}\left( \phi \right) d\phi }}{C_{2}-\frac{1}{%
3+n}\int^{\psi }e^{\frac{1-n}{3+n}\int^{\rho }f_{1}\left( \phi \right) d\phi
}d\rho }+\frac{1-n}{2}f_{1}\left( \psi \right) \right] d\psi }d\xi -T_{0}%
\right] ^{\frac{2}{1-n}}\times \notag \\
&& \times \; e^{-\frac{1}{3+n}\int^{t}\left[ \frac{e^{\frac{1-n}{3+n}%
\int^{\psi }f_{1}\left( \phi \right) d\phi }}{C_{2}-\frac{1}{3+n}\int^{\psi
}e^{\frac{1-n}{3+n}\int^{\rho }f_{1}\left( \phi \right) d\phi }d\rho }%
+2f_{1}\left( \psi \right) \right] d\psi }, \qquad n\notin \left\{-3,-1,0,1\right\}.
\label{15k}
\end{eqnarray}
\section{Conclusions}
\label{sect3}
In the limit of a small function $X\left( T\right) $, with $0<X<1$, and by assuming that the constant $n$ is large, $n\rightarrow +\infty $, the anharmonic term of Eq.~(\ref{2a}) becomes negligible, and we obtain a linear relation between $X\left( T\right) $ and $T$, given by
\begin{equation}
X\left( T\right) =\epsilon \sqrt{2C_{0}}\left( T-T_{0}\right) . \label{16k}
\end{equation}%
With the help of Eqs.~(\ref{3b}) and (\ref{4a}), the approximate solution of Eq.~(\ref{1a}), describing the time evolution of the anharmonic oscillator, is given by
\begin{equation}
x\left( t\right) \approx \frac{\epsilon \sqrt{2C_{0}}}{C}\left[ C^{\frac{1-n%
}{2}}\int^{t}f_{3}^{\frac{2}{n+3}}\left( \xi \right) e^{\left( \frac{1-n}{n+3%
}\right) \int^{\xi }f_{1}\left( \phi \right) d\phi }d\xi -T_{0}\right]
f_{3}^{-\frac{1}{n+3}}\left( t\right) e^{-\frac{2}{n+3}\int^{t}f_{1}\left(
\phi \right) d\phi }. \label{17k}
\end{equation}%
With this approximate solution, once the functions $f_{1}(t)$ and $f_{3}(t)$
are given, one can study the time evolution of the anharmonic oscillator for
small $X\left( T\right) $ and for a very large anharmonicity exponent $n$.
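The quality of this approximation is easy to probe numerically. The following sketch (ours; it assumes \texttt{scipy} and is purely illustrative) integrates Eq.~(\ref{2a}) for $0<X<1$ and increasing $n$, showing that $X(T)$ approaches the linear profile of Eq.~(\ref{16k}):
\begin{verbatim}
# Illustrative check: for 0 < X < 1 the term X^n becomes negligible
# as n grows, so X(T) approaches the linear solution X0 + V0*T.
from scipy.integrate import solve_ivp

X0, V0 = 0.1, 0.5
for n in (5, 25, 125):
    sol = solve_ivp(lambda T, y: [y[1], -y[0]**n], (0.0, 1.0), [X0, V0],
                    rtol=1e-10, atol=1e-12)
    print(n, sol.y[0, -1], "linear prediction:", X0 + V0*1.0)
\end{verbatim}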
In the present paper, by extending the results of \cite{12}, where the first integral of Eq.~(\ref{1a}) was obtained, we have derived three classes of exact solutions of Eq.~(\ref{1a}), explicitly showing that the Theorem obtained in \cite{12} is a powerful tool for constructing explicit solutions of anharmonic oscillator type second order differential equations.
In order to have real solutions, the solutions Eqs.~(\ref{X2k}), (\ref{13a}) and (\ref{15k}) of the second order differential Eqs.~(\ref{15m}), (\ref{12a}) and (\ref{13k}), respectively, must obey the condition $n<-1$, thus leading to an anharmonic term of the form $f_3(t)/x^{m}$ with $m=-n>1$. Such a term may be singular at $x=0$. Note that in \cite{20s}, the author has used
the degree theory to remove a technical assumption in the non-resonance
results in \cite{21s} and to obtain a complete set of the non-resonance
conditions for differential equations with repulsive singularities. In doing
so, a nice relation between the Hill's equation $\frac{d^{2}x}{dt^{2}}+\eta
\left( t\right) x\left( t\right) =0$ and the EPE was established. This
relation itself is useful in studying the stability of periodic solutions of
Lagrangian systems with $3/2$ degrees of freedom.
It is well-known that second order ordinary differential equations with an anharmonic term of the form $f_{3}(t)/x^{n}$ arise in many problems in the applied sciences. Some examples are the Brillouin focusing system, and the
motion of an atom near a charged wire. The Brillouin focusing system can be
described by the second order differential equation
\begin{equation}
\frac{d^{2}x}{dt^{2}}+\alpha \left( 1+\cos t\right) x\left( t\right) =\frac{%
\beta }{x\left( t\right) },
\end{equation}%
where $\alpha $ and $\beta $ are positive constants. In the context of
electronics, this differential equation governs the motion of a magnetically
focused axially symmetric electron beam under the influence of a Brillouin
flow, as shown in \cite{22s}. From the mathematical point of view, this
differential equation is a singular perturbation of a Mathieu equation
\begin{equation}
\frac{d^{2}x}{dt^{2}}+\left( a-2q\cos 2t\right) x\left( t\right) =0,
\end{equation}%
where $a$ and $q$ are arbitrary constants. Existence and uniqueness of
elliptic periodic solutions of the Brillouin electron beam focusing system
have been discussed in \cite{23s}. Hence, the results obtained in the present
paper could open the possibility of obtaining some exact solutions of
non-linear differential equations of scientific or technological interest.
|
2,869,038,153,844 | arxiv | \section{Introduction}\label{intro}
\subsection*{A brief background} All graphs in this paper are finite and simple.
Treewidth is a well-studied graph parameter, of great interest in both structural and algorithmic graph theory. It was notably featured in the seminal work of Robertson and Seymour on graph minors \cite{RS-GMII}, and in myriad other papers ever since. For a more in-depth overview of the literature, the interested reader is invited to consult e.g.\ Bodlaender's survey \cite{Bodlaendersurvey} and the references therein.
As part of their graph minors series, Robertson and Seymour fully described the unavoidable minors in graphs of large treewidth. The relevant result, the so-called Grid Minor Theorem \cite{RS-GMV}, states that every graph of large enough treewidth must contain a minor isomorphic to a large grid (or equivalently, a minor isomorphic to a large {\em wall}, illustrated in Figure~\ref{fig:5x5wall}; a precise definition can be found in \cite{wallpaper}). Since walls have large treewidth themselves, and treewidth cannot increase when taking minors, that result shows a structural dichotomy: a graph has large treewidth if and only if it contains a large wall as a minor.
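To make the numbers tangible (a small aside of ours, assuming Python with the \texttt{networkx} library), recall that the $n$-by-$n$ grid has treewidth exactly $n$; a standard elimination heuristic certifies an upper bound of the same order:
\begin{verbatim}
# Illustrative sketch (not from the paper): heuristic treewidth
# upper bounds for grids, whose exact treewidth is n.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

for n in (3, 5, 7):
    G = nx.grid_2d_graph(n, n)
    width, _decomposition = treewidth_min_degree(G)
    print(n, width)  # the reported width is an upper bound, >= n
\end{verbatim}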
\begin{figure}
\centering
\begin{tikzpicture}[scale=2,auto=left]
\tikzstyle{every node}=[inner sep=1.5pt, fill=black,circle,draw]
\centering
\node (s10) at (0,1.2) {};
\node(s12) at (0.6,1.2){};
\node(s14) at (1.2,1.2){};
\node(s16) at (1.8,1.2){};
\node(s18) at (2.4,1.2){};
\node (s20) at (0,0.9) {};
\node (s21) at (0.3,0.9) {};
\node(s22) at (0.6,0.9){};
\node (s23) at (0.9,0.9) {};
\node(s24) at (1.2,0.9){};
\node (s25) at (1.5,0.9) {};
\node(s26) at (1.8,0.9){};
\node (s27) at (2.1,0.9) {};
\node(s28) at (2.4,0.9){};
\node (s29) at (2.7,0.9) {};
\node (s30) at (0,0.6) {};
\node (s31) at (0.3,0.6) {};
\node(s32) at (0.6,0.6){};
\node (s33) at (0.9,0.6) {};
\node(s34) at (1.2,0.6){};
\node (s35) at (1.5,0.6) {};
\node(s36) at (1.8,0.6){};
\node (s37) at (2.1,0.6) {};
\node(s38) at (2.4,0.6){};
\node (s39) at (2.7,0.6) {};
\node (s40) at (0,0.3) {};
\node (s41) at (0.3,0.3) {};
\node(s42) at (0.6,0.3){};
\node (s43) at (0.9,0.3) {};
\node(s44) at (1.2,0.3){};
\node (s45) at (1.5,0.3) {};
\node(s46) at (1.8,0.3){};
\node (s47) at (2.1,0.3) {};
\node(s48) at (2.4,0.3) {};
\node (s49) at (2.7,0.3) {};
\node (s51) at (0.3,0.0) {};
\node (s53) at (0.9,0.0) {};
\node (s55) at (1.5,0.0) {};
\node (s57) at (2.1,0.0) {};
\node (s59) at (2.7,0.0) {};
\foreach \from/\to in {s10/s12, s12/s14,s14/s16,s16/s18}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s20/s21, s21/s22, s22/s23, s23/s24, s24/s25, s25/s26,s26/s27,s27/s28,s28/s29}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s30/s31, s31/s32, s32/s33, s33/s34, s34/s35, s35/s36,s36/s37,s37/s38,s38/s39}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s40/s41, s41/s42, s42/s43, s43/s44, s44/s45, s45/s46,s46/s47,s47/s48,s48/s49}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s51/s53, s53/s55,s55/s57,s57/s59}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s10/s20, s30/s40}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s21/s31,s41/s51}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s12/s22, s32/s42}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s23/s33,s43/s53}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s14/s24, s34/s44}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s25/s35,s45/s55}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s16/s26,s36/s46}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s27/s37,s47/s57}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s18/s28,s38/s48}
\draw [-] (\from) -- (\to);
\foreach \from/\to in {s29/s39,s49/s59}
\draw [-] (\from) -- (\to);
\end{tikzpicture}
\caption{The 5-by-5 wall $W_{5 \times 5}$}
\label{fig:5x5wall}
\end{figure}
The overarching goal of our current series and of several other recent works \cite{aboulker, multistar, Korhonen, lozin, layered-wheels, longhole2} is to try and understand treewidth from the perspective of induced subgraphs rather than minors. A first remark is that to force bounded treewidth, we need to forbid four kinds of induced subgraphs: a complete graph $K_t$, a complete bipartite graph $K_{t, t}$, all subdivisions of some wall $W_{t \times t}$, and the line graphs of subdivisions of $W_{t \times t}$. Let us call those graphs {\em $t$-basic obstructions}, and say that a graph $G$ is {\em $t$-clean} if $G$ contains no induced $t$-basic obstruction. Moreover, call a hereditary class $\mathcal{C}$ of graphs \textit{clean} if the treewidth of $t$-clean graphs in $\mathcal C$ is bounded above by a function of $t$.
The class of all graphs is not clean: various constructions of unbounded treewidth avoiding the basic obstructions have been discovered \cite{multistar, daviesconst, layered-wheels}. In fact, it is at the moment unclear whether a dichotomy similar to the minor one is at all achievable for induced subgraphs. Nevertheless, steady progress is being made; in particular, a number of clean classes have been identified. This includes every proper minor-closed class of graphs (Aboulker, Adler, Kim, Sintiari
and Trotignon \cite{aboulker}), (even hole, diamond, pyramid)-free graphs \cite{pyramiddiamond}, and graphs in which no vertex has two or more neighbors in a hole \cite{onenbr}. Of note is also the following result, which was independently proved twice:
\begin{theorem}[Gartland, Lokshtanov, Pilipczuk, Pilipczuk and Rz\k{a}\.{z}ewski \cite{longholes}, Wei\ss auer \cite{longhole2}]\label{longholepili}
For every integer $\lambda\geq 1$, the class of graphs with no induced cycle of length more than $\lambda$ is clean.
\end{theorem}
\noindent as well as a general result conjectured by Aboulker et al., and shown by Korhonen:
\begin{theorem}[Korhonen \cite{Korhonen}]\label{boundeddeg}
For every integer $d\geq 1$, the class of graphs of maximum degree at most $d$ is clean. Equivalently, for all integers $d,t\geq 1$, there exists an integer $q=q(d,t)\geq 1$ such that every graph with maximum degree at most $d$ and treewidth more than $q$ contains either a subdivision of $W_{t\times t}$ or the line graph of a subdivision of $W_{t\times t}$ as an induced subgraph.
\end{theorem}
Finally, we mention one more result -- a characterization of finite graph sets which yield bounded treewidth when forbidden as induced subgraphs:
\begin{theorem}[Lozin and Razgon \cite{lozin}]\label{lozinmain}
Let $\mathcal{H}$ be a finite set of graphs. Then the class of all graphs with no induced subgraphs from $\mathcal{H}$ has bounded treewidth if and only if $\mathcal{H}$ contains a complete graph, a complete bipartite graph, a forest of maximum degree at most three in which every component has at most one vertex of degree more than two, and the line graph of such a forest.
\end{theorem}
\subsection*{Our results}
Our main result, Theorem~\ref{mainconnectify}, requires some set-up, so we postpone its precise statement and the relevant definitions until Section~\ref{mainresultssec}. Informally, we show that every $t$-clean graph of sufficiently large treewidth contains a ``connectification'' of a large subdivided star forest as an induced subgraph. Roughly speaking, this is a graph which can be partitioned into a subdivided star forest $F$, and a second part, anticomplete to everything but the roots of $F$, and which ``minimally connects'' those roots.
Theorem~\ref{mainconnectify} uses three ingredients. The first one is Theorem~\ref{blockmanystar}, which adapts the methods from \cite{lozin} (itself employing the strategy from \cite{longhole2}) in order to show that $t$-clean graphs with a large {\em block} -- a certain kind of highly connected structure -- must contain a large subdivided star forest. As a byproduct of this, we also obtain another way to derive Theorem~\ref{longholepili}.
The second ingredient is Theorem~\ref{noblock}. The theorem combines a result of Wei\ss auer linking blocks and tree decompositions, together with Korhonen's bounded degree result (Theorem~\ref{boundeddeg}), in order to show that the class of graphs without a large block is clean.
The final ingredient, Theorem~\ref{minimalconnectedgeneral}, is a result of independent interest, and will be used in future papers in our series. Starting from a result of Davies \cite{Davies}, we provide a complete description of minimal connected graphs containing many vertices from a suitably large subset of a connected component. Put differently, we show that if a large enough set of vertices belongs to the same component, then a large subset of them must connect in one of a few prescribed ways.
\medskip
We note that the first two out of those intermediate results already yield (the difficult direction of) an appealing dichotomy for clean classes defined by finitely many forbidden induced subgraphs, mirroring Theorem~\ref{lozinmain}. Indeed, writing $\mathcal{F}_H$ for the class of graphs with no induced subgraph isomorphic to $H$, we have:
\begin{theorem}\label{mainstarforest1}
Let $H$ be a graph. Then $\mathcal{F}_H$ is clean if and only if $H$ is a forest whose components are subdivided stars.
\end{theorem}
While the stronger Theorem~\ref{mainconnectify} might appear unwieldy at first, we remark that it has easier-to-state implications that are still more general than the above dichotomy. To illustrate this, denote by $\tilde{\mathcal{F}}_H$ the class of all graphs with no induced subgraph isomorphic to a subdivision of $H$. It follows that the ``if'' direction of Theorem~\ref{mainstarforest1} is equivalent to $\tilde{\mathcal{F}}_H$ being clean for every subdivided star forest $H$, and Theorem~\ref{longholepili} is equivalent to $\tilde{\mathcal{F}}_H$ being clean for every cycle $H$. Then Theorem~\ref{mainconnectify} readily implies the following, where by a \textit{subdivided double star}, we mean a tree with at most two vertices of degree more than two.
\begin{theorem}\label{mainsubforest}
Let $H$ be a forest in which one component is a subdivided double star and every other component is a subdivided star. Then $\tilde{\mathcal{F}}_H$ is clean.
\end{theorem}
\subsection*{Outline of the paper}
We set up our notation and terminology in Section~\ref{defns}. Section~\ref{daviessec} describes the construction of \cite{daviesconst}, which is used to prove the ``only if'' direction of Theorem~\ref{mainstarforest1}. In Section~\ref{mainresultssec}, we state Theorem~\ref{mainconnectify} precisely, and show how to deduce Theorems~\ref{mainstarforest1} and~\ref{mainsubforest} from it. In Section~\ref{connectedsec}, we show that a connected graph $G$ with a sufficiently large subset $S$ of its vertices contains an induced connectifier with many vertices from $S$. The main result of Section~\ref{noblocksec} is Theorem~\ref{noblock}, where we prove that the class of graphs with no $k$-block is clean. In Section~\ref{distancesec}, we show that in a $t$-clean graph, every huge block can be transformed into a large block such that there is no short path between any two vertices of the new block. Section~\ref{ramseyblcoksec} uses this in order to show that a $t$-clean graph with a huge block contains a large subdivided star forest. Finally, in Section~\ref{connectifysec}, we combine the main results from Sections~\ref{connectedsec}, \ref{noblocksec} and \ref{ramseyblcoksec} to prove Theorem~\ref{mainconnectify}.
\section{Preliminaries}
\label{defns}
\subsection*{Graphs, subgraphs, and induced subgraphs} All graphs in this paper are finite and with no loops or multiple edges. Let $G=(V(G),E(G))$ be a graph. A \textit{subgraph} of $G$ is a graph obtained from $G$ by removing vertices or edges, and an \textit{induced subgraph} of $G$ is a graph obtained from $G$ by only removing vertices. Given a subset $X \subseteq V(G)$, $G[X]$ denotes the subgraph of $G$ induced by $X$, that is, the graph obtained from $G$ by removing the vertices not in $X$. We put $G \setminus X = G[V(G) \setminus X]$ (and in general, we will abuse notation and use induced subgraphs and their vertex sets interchangeably). Additionally, for an edge $e\in E(G)$, we write $G-e$ to denote the graph obtained from $G$ by removing the edge $e$. For a graph $H$, by a \textit{copy of $H$ in $G$}, we mean an induced subgraph of $G$ isomorphic to $H$, and we say \textit{$G$ contains $H$} if $G$ contains a copy of $H$. We also say $G$ is \emph{$H$-free} if $G$ does not contain $H$. For a class $\mathcal{H}$ of graphs we say $G$ is $\mathcal{H}$-free if $G$ is $H$-free for every $H \in \mathcal{H}$. For a graph $H$, we write $G=H$ whenever $G$ and $H$ have the same vertex-set and the same edge set.
\subsection*{Neighborhoods} Let $v \in V(G)$. The \emph{open neighborhood of $v$}, denoted by $N(v)$, is the set of all vertices in $G$ adjacent to $v$. The \emph{closed neighborhood of $v$}, denoted by $N[v]$, is $N(v) \cup \{v\}$. Let $X \subseteq G$. The \emph{open neighborhood of $X$}, denoted by $N(X)$, is the set of all vertices in $G \setminus X$ with at least one neighbor in $X$.
If $H$ is an induced subgraph of $G$ and $X \subseteq G$ with $H\cap X=\emptyset$, then $N_H(X)=N(X) \cap H$.
Let $X,Y \subseteq V(G)$ be disjoint. We say $X$ is \textit{complete} to $Y$ if all possible edges with an end in $X$ and an end in $Y$ are present in $G$, and $X$ is \emph{anticomplete}
to $Y$ if there is no edge between $X$ and $Y$. In the case $X=\{x\}$, we often say $x$ is \textit{complete} (\textit{anticomplete}) to $Y$ to mean $X$ is complete (anticomplete) to $Y$.
\subsection*{Tree decompositions and blocks} A \emph{tree decomposition} $(T, \chi)$ of $G$ consists of a tree $T$ and a map $\chi: V(T) \to 2^{V(G)}$ with the following properties:
\begin{enumerate}[(i)]
\itemsep -.2em
\item For every vertex $v \in V(G)$, there exists $t \in V(T)$ such that $v \in \chi(t)$.
\item For every edge $v_1v_2 \in E(G)$, there exists $t \in V(T)$ such that $v_1, v_2 \in \chi(t)$.
\item For every vertex $v \in V(G)$, the subgraph of $T$ induced by $\{t \in V(T) \mid v \in \chi(t)\}$ is connected.
\end{enumerate}
For each $t\in V(T)$, we refer to $\chi(t)$ as a \textit{bag of} $(T, \chi)$. The \emph{width} of a tree decomposition $(T, \chi)$, denoted by $\width(T, \chi)$, is $\max_{t \in V(T)} |\chi(t)|-1$. The \emph{treewidth} of $G$, denoted by $\tw(G)$, is the minimum width of a tree decomposition of $G$.
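To make the definition concrete, here is a minimal sketch of ours (assuming Python with \texttt{networkx}; not part of the paper) that checks axioms (i)--(iii) for a candidate decomposition and reports its width; every cycle has treewidth $2$, as the decomposition of the $5$-cycle below illustrates:
\begin{verbatim}
# Minimal tree-decomposition checker (illustrative sketch).
import networkx as nx

def is_tree_decomposition(G, T, bags):
    if not nx.is_tree(T):
        return False
    if not set(G) <= set().union(*bags.values()):   # axiom (i)
        return False
    if not all(any({u, v} <= bag for bag in bags.values())
               for u, v in G.edges):                # axiom (ii)
        return False
    return all(nx.is_connected(T.subgraph(          # axiom (iii)
        [x for x in T if v in bags[x]])) for v in G)

G = nx.cycle_graph(5)
T = nx.path_graph(3)
bags = {0: {0, 1, 2}, 1: {0, 2, 3}, 2: {0, 3, 4}}
print(is_tree_decomposition(G, T, bags))        # True
print(max(len(B) for B in bags.values()) - 1)   # width = 2
\end{verbatim}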
\subsection*{Cliques, stable sets, paths, and cycles}
A \textit{clique} in $G$ is a set of pairwise adjacent vertices in $G$, and a \textit{stable set} in $G$ is a set of pairwise non-adjacent vertices in $G$. A {\em path in $G$} is an induced subgraph of $G$ that is a path, while a {\em cycle in $G$} is a (not necessarily induced) subgraph of $G$ that is a cycle. If $P$ is a path, we write $P = p_1 \hbox{-} \cdots \hbox{-} p_k$ to mean that $V(P) = \{p_1, \dots, p_k\}$, and $p_i$ is adjacent to $p_j$ if and only if $|i-j| = 1$. We call the vertices $p_1$ and $p_k$ the \emph{ends of $P$}, and say that $P$ is \emph{from $p_1$ to $p_k$}. The \emph{interior of $P$}, denoted by $P^*$, is the set $P \setminus \{p_1, p_k\}$. For a path $P$ in $G$ and $x,y\in P$, we denote by $P[x,y]$ the subpath of $P$ with ends $x$ and $y$. The \emph{length} of a path $P$ is the number of its edges. Let $C$ be a cycle. We write $C = c_1 \hbox{-} \cdots \hbox{-} c_k\hbox{-} c_1$ to mean $V(C) = \{c_1, \dots, c_k\}$, and $c_i$ is adjacent to $c_j$ if and only if $|i-j|\in \{1,k-1\}$.
\subsection*{Subdivisions} By a \textit{subdivision} of a graph $G$, we mean a graph obtained from $G$ by replacing the edges of $G$ by pairwise internally disjoint paths between the corresponding ends. Let $r\geq 0$ be an integer. An $r$-\textit{subdivision} of $G$ is a subdivision of $G$ in which the path replacing each edge has length $r+1$. Also, a $(\leq r)$-\textit{subdivision} of $G$ is a subdivision of $G$ in which the path replacing each edge has length at most $r+1$, and a $(\geq r)$-\textit{subdivision} of $G$ is defined similarly. We refer to a $(\geq 1)$-subdivision of $G$ as a \textit{proper} subdivision of $G$.
\subsection*{Classes of graphs} A class $\mathcal{C}$ of graphs is called \textit{hereditary} if it is closed under isomorphism and taking induced subgraphs, or equivalently, if $\mathcal{C}$ is the class of all $\mathcal{H}$-free graphs for some other graph class $\mathcal{H}$. For a class of graphs $\mathcal{C}$ and a positive integer $t$, we denote by $\mathcal{C}^t$ the class of all $t$-clean graphs in $\mathcal{C}$. Thus, $\mathcal{C}$ is clean if for every positive integer $t$ there exists a positive integer $w(t)$ such that every graph in $\mathcal{C}^t$ has treewidth at most $w(t)$. The following is immediate from the definition of a clean class.
\begin{lemma}\label{cleanlemma}
Let $\mathcal{X}$ be a class of graphs. Assume that for every $t$, there exists a clean class of graphs $\mathcal{Y}_t$ such that $\mathcal{X}^t\subseteq \mathcal{Y}_t$. Then $\mathcal{X}$ is clean. In particular, every subclass of a clean class is clean.
\end{lemma}
\subsection*{Stars and forests} For every forest $F$, we say a vertex $v\in V(F)$ is a \textit{leaf} of $F$ if $v$ has degree at most one in $F$. We denote by $\mathcal{L}(F)$ the set of all leaves of $F$. By a \textit{branch} vertex of $F$, we mean a vertex of degree more than two. By a \textit{star} we mean a graph isomorphic to the complete bipartite graph $K_{1,\delta}$ for some integer $\delta\geq 0$, and a \textit{star forest} is a forest in which every component is a star. Then subdivided stars are exactly trees with at most one branch vertex, and subdivided star forests are exactly forests in which every component is a subdivided star. A \textit{subdivided double star} is a tree with at most two branch vertices.
By a \textit{rooted subdivided star} $S$ we mean a subdivided star $S$ together with a choice of one vertex $r$ in $S$, called the \textit{root}, such that if $S$ is not a path, then $r$ is the unique branch vertex of $S$. \textit{A rooted subdivided star forest} $F$ is a subdivided star forest with a choice of a root for every component of $F$. We also refer to the root of each component of $F$ as a \textit{root} of $F$, and denote by $\mathcal{R}(F)$ the set of all roots of $F$. By a \textit{stem} in $F$, we mean a path in $F$ from a leaf to a root. It follows that each stem is the (unique) path from a leaf of some component of $F$ to the root of the same component. The \textit{reach} of a rooted subdivided star $S$ is the maximum length of a stem in $S$. Also, the \textit{reach} of a subdivided star forest $F$ is the maximum reach of its components and the \textit{size} of $F$ is the number of its components. For a positive integer $\theta$ and graph $H$, we denote by $\theta H$ the disjoint union of $\theta$ copies of $H$. For integers $\delta\geq 0$ and $\lambda\geq 1$, we denote by $S_{\delta,\lambda}$ the $(\lambda-1)$-subdivision of $K_{1,\delta}$. So for $\delta\geq 3$, $\theta S_{\delta,\lambda}$ is a subdivided star forest of maximum degree $\delta$, reach $\lambda$ and size $\theta$.
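For later reference, the graphs $S_{\delta,\lambda}$ are easy to generate programmatically; the following small constructor is our own illustrative sketch (assuming \texttt{networkx}):
\begin{verbatim}
# S_{delta,lambda}: the (lambda-1)-subdivision of the star K_{1,delta},
# i.e. delta stems of length lambda attached to a common root.
import networkx as nx

def subdivided_star(delta, lam):
    G, root = nx.Graph(), 'r'
    G.add_node(root)
    for i in range(delta):
        prev = root
        for j in range(1, lam + 1):
            G.add_edge(prev, (i, j))
            prev = (i, j)
    return G, root

G, r = subdivided_star(3, 2)              # S_{3,2}
print(G.number_of_nodes(), G.degree(r))   # 7 3
\end{verbatim}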
\section{A construction from \cite{daviesconst}} \label{daviessec}
The goal of this section is to prove the ``only if'' direction of Theorem~\ref{mainstarforest1} using a construction from \cite{daviesconst}.
We begin with a definition, which will be used in subsequent sections as well. Let $P$ be a path and $\rho,\sigma\geq 0$ and $\theta\geq 1$ be integers. A $2\theta$-tuple $(p_1,\ldots,p_{2\theta})$ of vertices of $P$ is said to be a $(\rho,\sigma)$-\textit{widening} of $P$ if
\begin{itemize}
\item the vertices $p_1$ and $p_{2\theta}$ are the ends of $P$;
\item traversing $P$ from $p_1$ to $p_{2\theta}$, the vertices $p_1,\ldots,p_{2\theta}$ appear on $P$ in this order;
\item $P[p_{2i-1}, p_{2i}]$ has length $\rho$ for each $i\in [\theta]$, and;
\item $P[p_{2i}, p_{2i+1}]$ has length at least $\sigma$ for each $i\in [\theta-1]$.
\end{itemize}
The $(\rho,\sigma)$-widening $(p_1,\ldots,p_{2\theta})$ is \textit{strict} if for each $i\in [\theta-1]$, $P[p_{2i}, p_{2i+1}]$ has length equal to $\sigma$. Also, we say a $\theta$-tuple $(p_1,\ldots,p_{\theta})$ of vertices of $P$ is a $\sigma$-\textit{widening} of $P$ if the $2\theta$-tuple $(p_1,p_1,\ldots,p_{\theta},p_{\theta})$ is a $(0,\sigma)$-\textit{widening} of $P$.
We now describe the construction of \cite{daviesconst} (though \cite{daviesconst} only mentions the case $\rho=0$). Let $\rho\geq 0$, $\sigma\geq 1$ and $\theta\geq 2$ be integers. We define $J_{\rho,\sigma,\theta}$ to be the graph with the following specifications (see Figure~\ref{daviesfig}).
\begin{itemize}
\item $J_{\rho,\sigma,\theta}$ contains $\theta$ pairwise disjoint and anticomplete paths $P_1,\ldots, P_{\theta}$.
\item For each $j\in [\theta]$, $P_j$ admits a strict $(\rho,\sigma)$-widening $(p^j_1,\ldots,p^j_{2\theta})$.
\item We have $J_{\rho,\sigma,\theta}\setminus (\bigcup_{i\in [\theta]}V(P_i))=\{x_1,\ldots,x_{\theta}\}$ such that $x_1,\ldots,x_{\theta}$ are all distinct, and for each $i\in [\theta]$, we have $N_{J_{\rho,\sigma,\theta}}(x_i)=\bigcup_{j\in [\theta]}P_j[p^j_{2i-1}, p^j_{2i}]$.
\end{itemize}
\begin{figure}[!htb]
\begin{tikzpicture}[scale=1,auto=left]
\tikzstyle{every node}=[inner sep=1.5pt, fill=black,circle,draw]
\foreach \i in {1,...,4} {
\draw (-1, \i) -- (-4 ,\i);
\node() at (-\i, 5) {};
\draw (-\i , 5) edge[out = -90, in = 90] (-\i , 4);
\draw (-\i , 5) edge[out = -105, in = 105] (-\i , 3);
\draw (-\i , 5) edge[out = -112, in = 112] (-\i , 2);
\draw (-\i , 5) edge[out = -120, in = 120] (-\i , 1);
}
\foreach \i in {1,...,4} {
\foreach \x in {1,...,4} {
\node() at (-\i, \x) {};
}
}
\foreach \i in {1,...,4} {
\draw (1, \i) -- (8 ,\i);
\draw (\i+\i-0.5 , 5) edge[] (\i+\i , 4);
\draw (\i+\i-0.5 , 5) edge[] (\i+\i-1 , 4);
\draw (\i+\i-0.5 , 5) edge[] (\i+\i , 3);
\draw (\i+\i-0.5 , 5) edge[] (\i+\i-1 , 3);
\draw (\i+\i-0.5 , 5) edge[] (\i+\i , 2);
\draw (\i+\i-0.5 , 5) edge[] (\i+\i-1 , 2);
\draw (\i+\i-0.5 , 5) edge (\i+\i , 1);
\draw (\i+\i-0.5 , 5) edge (\i+\i-1 , 1);
}
\foreach \i in {1,...,8} {
\foreach \x in {1,...,4} {
\node() at (\i,\x) {};
\node() at (\x+\x-0.5, 5) {};
}
}
\end{tikzpicture}
\caption{The graphs $J_{0,1,4}$ (left) and $J_{1,1,4}$ (right).}
\label{daviesfig}
\end{figure}
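For concreteness, here is a minimal sketch of ours (assuming \texttt{networkx}) realizing the case $\rho=0$ of the construction; contracting each path $P_j$ to a single vertex produces a $K_{\theta,\theta}$ minor, which is the source of the treewidth lower bound in the proof below:
\begin{verbatim}
# Illustrative constructor for J_{0,sigma,theta}: theta disjoint paths,
# each with theta marked vertices spaced exactly sigma apart, plus
# apexes; apex x_i is adjacent to the i-th mark on every path.
import networkx as nx

def J0(sigma, theta):
    G = nx.Graph()
    for j in range(theta):
        nx.add_path(G, [('p', j, k)
                        for k in range((theta - 1)*sigma + 1)])
    for i in range(theta):
        for j in range(theta):
            G.add_edge(('x', i), ('p', j, i*sigma))
    return G

G = J0(1, 4)                # the left-hand graph in the figure above
print(G.number_of_nodes())  # 4 paths of 4 vertices + 4 apexes = 20
\end{verbatim}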
The following was proved in \cite{Davies}. Here we include a proof for the sake of completeness.
\begin{theorem}\label{daviesclean}
For all integers $\rho\geq 0$, $\sigma\geq 1$ and $\theta\geq 2$, $J_{\rho,\sigma,\theta}$ is a $4$-clean graph of treewidth at least $\theta$.
\end{theorem}
\begin{proof}
Note that $J_{\rho,\sigma,\theta}$ contains a $K_{\theta,\theta}$-minor (by contracting each path $P_i$ into a vertex), which implies that $\tw(J_{\rho,\sigma,\theta})\geq \theta$. Also, $J_{\rho,\sigma,\theta}$ is easily seen to be $\{K_4,K_{3,3}\}$-free. Let us say that a connected graph $H$ is \textit{feeble} if either $H$ has a vertex $v$ such that $H\setminus N_H[v]$ is not connected, or $H$ has a set $S$ of at most two branch vertices such that $H\setminus S$ has maximum degree at most two. Then every connected induced subgraph of $J_{\rho,\sigma,\theta}$ is feeble. On the other hand, for an integer $t\geq 4$, let $H$ be either a subdivision of $W_{t\times t}$ or the line graph of such a subdivision. Then one may observe that for every vertex $v\in H$, $H\setminus N_H[v]$ is connected. Moreover, $H$ contains a stable set $S$ of branch vertices with $|S|\geq 3$. It follows that $H$ is not feeble, and so $H$ is not isomorphic to an induced subgraph of $J_{\rho,\sigma,\theta}$. Hence, $J_{\rho,\sigma,\theta}$ is $4$-clean, as desired.
\end{proof}
The proof of the next lemma is straightforward, and we leave it to the reader.
\begin{lemma}\label{daviesgirth}
For all integers $\sigma\geq 1$ and $\theta\geq 2$, the following hold.
\begin{itemize}
\item $J_{0,\sigma,\theta}$ has girth at least $2\sigma+4$.
\item Let $u_1,u_2\in J_{1,\sigma,\theta}$ such that for each $i\in \{1,2\}$, $N_{J_{1,\sigma,\theta}}(u_i)$ contains a stable set of cardinality three. Then there is no path of length less than $\sigma+2$ in $J_{1,\sigma,\theta}$ from $u_1$ to $u_2$.
\end{itemize}
\end{lemma}
We are now ready to prove the main result of this section.
\begin{theorem}\label{onlyifmain}
Let $H$ be a graph for which $\mathcal{F}_H$ is clean. Then $H$ is a subdivided star forest.
\end{theorem}
\begin{proof}
By the assumption, for every integer $t\geq 1$, there exists an integer $w(t)\geq 1$ such that every $t$-clean graph in $\mathcal{F}_H$ has treewidth at most $w(t)$. We deduce:
\sta{\label{daviesforest}$H$ is a forest.}
Suppose not. Let $\sigma$ be the length of the shortest cycle in $H$. By Theorem~\ref{daviesclean}, $J_{0,\sigma,w(4)+1}$ is $4$-clean. Also, by the first outcome of Lemma~\ref{daviesgirth}, $J_{0,\sigma,w(4)+1}$ has girth at least $2\sigma+4$, and so $J_{0,\sigma,w(4)+1}\in \mathcal{F}_H$. But then we have $\tw(J_{0,\sigma,w(4)+1})\leq w(4)$, which violates Theorem~\ref{daviesclean}. This proves \eqref{daviesforest}.
\sta{\label{daviesstarforest}Every component of $H$ has at most one branch vertex.}
Suppose for a contradiction that some component $C$ of $H$ contains two branch vertices $u$ and $v$. By \eqref{daviesforest}, $H$ is a forest, and so $C$ is a tree. Therefore, there exists a unique path in $H$ from $u$ to $v$, say of length $\sigma$, and we have $|N_{H}(u)\setminus N_{H}(v)|,|N_{H}(v)\setminus N_{H}(u)|\geq 2$. It follows from the second outcome of Lemma~\ref{daviesgirth} that $J_{1,\sigma,w(4)+1}\in \mathcal{F}_H$. Also, by Theorem~\ref{daviesclean}, $J_{1,\sigma,w(4)+1}$ is $4$-clean. But then we have $\tw(J_{1,\sigma,w(4)+1})\leq w(4)$, a contradiction with Theorem~\ref{daviesclean}. This proves \eqref{daviesstarforest}.\medskip
Now the result follows from \eqref{daviesforest} and \eqref{daviesstarforest}. This completes the proof of Theorem~\ref{onlyifmain}.
\end{proof}
\section{Connectification and statement of the main result}
\label{mainresultssec}
Here we state the main result of the paper, Theorem~\ref{mainconnectify}. Then we discuss how it implies Theorems~\ref{mainstarforest1} and \ref{mainsubforest}.
We need numerous definitions. A vertex $v$ of a graph $G$ is said to be \textit{simplicial} if $N_G(v)$ is a clique of $G$. The set of all simplicial vertices of $G$ is denoted by $\mathcal{Z}(G)$. It follows that every degree-one vertex in $G$ belongs to $\mathcal{Z}(G)$. In particular, for every forest $F$, we have $\mathcal{L}(F)=\mathcal{Z}(F)$.
By a \textit{caterpillar} we mean a tree $C$ of maximum degree three in which all branch vertices lie on a path. A path $P$ in $C$ is called a \textit{spine} for $C$ if all branch vertices of $C$ belong to $V(P)$ and, subject to this property, $P$ is maximal with respect to inclusion. (Our definition of a caterpillar is non-standard for two reasons: a caterpillar is often allowed to have arbitrary maximum degree, and a spine often contains all vertices of degree more than one.)
Let $C$ be a caterpillar with $\theta\geq 3$ leaves. Note that $C$ has exactly $\theta-2$ branch vertices, and both ends of each spine of $C$ are leaves of $C$. Also, for every leaf $l\in \mathcal{L}(C)$, there exists a unique branch vertex in $C$, denoted by $v_l$, for which the unique path in $C$ from $l$ to $v_l$ does not contain any branch vertex of $C$ other than $v_l$ (and, in fact, $\{v_l:l\in \mathcal{L}(C)\}$ is the set of all branch vertices of $C$). We say an enumeration $(l_1,\ldots, l_{\theta})$ of $\mathcal{L}(C)=\mathcal{Z}(C)$ is $\sigma$-\textit{wide} if for some spine $P$ of $C$, the $\theta$-tuple $(l_1, v_{l_2},\ldots ,v_{l_{\theta-1}}, l_{\theta})$ is a $\sigma$-widening of $P$. Also, let $H$ be the line graph of $C$. Then assuming $e_l$ to be the unique edge in $C$ incident with the leaf $l\in \mathcal{L}(C)$, we have $\mathcal{Z}(H)=\{e_l:l\in \mathcal{L}(C)\}$. An enumeration $(e_{l_1},\ldots, e_{l_{\theta}})$ of $\mathcal{Z}(H)$ is called $\sigma$-\textit{wide} if $(l_1,\ldots, l_{\theta})$ is a $\sigma$-wide enumeration of $\mathcal{L}(C)$. By a $\sigma$-\textit{caterpillar}, we mean a caterpillar $C$ for which $\mathcal{L}(C)$ admits a $\sigma$-wide enumeration. It follows that if $H$ is the line graph of a caterpillar $C$, then $\mathcal{Z}(H)$ admits a $\sigma$-wide enumeration if and only if $C$ is a $\sigma$-caterpillar.
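As a quick illustration of the identity $\mathcal{Z}(H)=\{e_l:l\in \mathcal{L}(C)\}$ (an aside of ours, assuming \texttt{networkx}), the simplicial vertices of the line graph of a small caterpillar are exactly its leaf edges:
\begin{verbatim}
# Illustrative check: in the line graph of a tree, a vertex (an edge
# of the tree) is simplicial iff that edge is incident with a leaf.
import networkx as nx

C = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4), (2, 5)])  # a caterpillar
H = nx.line_graph(C)

def simplicial(G):
    return sorted(v for v in G
                  if all(G.has_edge(a, b)
                         for a in G[v] for b in G[v] if a != b))

print(simplicial(H))
# [(0, 1), (1, 4), (2, 3), (2, 5)]: the edges at the leaves 0, 4, 3, 5
\end{verbatim}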
Let $H$ be a graph and $S$ be a set. We say $H$ is \textit{$S$-tied} if $\mathcal{Z}(H)\subseteq H\cap S$ and \textit{loosely $S$-tied} if $\mathcal{Z}(H)=H\cap S$. Also, for a positive integer $\eta\geq 1$, we say $H$ is (\textit{loosely}) $(S,\eta)$-\textit{tied} if $H$ is (loosely) $S$-tied and $|H\cap S|=
\eta$.
For a graph $G$, $S\subseteq G$ and integers $\eta\geq 2$ and $\sigma\geq 1$ and $i\in \{0,\ldots, 4\}$, we say an induced subgraph $H$ of $G$ is an $(S,\eta,\sigma)$-\textit{connectifier of type $i$} if $H$ satisfies the condition (C$i$) below.
\begin{enumerate}[(C1)]
\setcounter{enumi}{-1}
\item $H$ is a loosely $(S,\eta)$-tied line graph of a subdivided star in which every stem has length at least $\sigma$.
\item $H$ is an $(S,\eta)$-tied rooted subdivided star with root $r$ in which every stem has length at least $\sigma$, and we have $(H\cap S)\setminus \mathcal{L}(H)\subseteq \{r\}$.
\item $H$ is an $(S,\eta)$-tied path with $H\cap S=\{s_1,\ldots, s_{\eta}\}$ where $(s_1,\ldots, s_{\eta})$ is a $\sigma$-widening of $H$.
\item $H$ is a loosely $(S,\eta)$-tied $\sigma$-caterpillar.
\item $H$ is a loosely $(S,\eta)$-tied line graph of a $\sigma$-caterpillar.
\end{enumerate}
We say $H$ is an $(S,\eta)$-\textit{connectifier of type $i$} if it is an $(S,\eta,1)$-connectifier of type $i$. Also, we say $H$ is an $(S,\eta,\sigma)$-\textit{connectifier} (resp.\ $(S,\eta)$-\textit{connectifier}) if it is an $(S,\eta,\sigma)$-connectifier (resp.\ $(S,\eta)$-connectifier) of type $i$ for some $i\in \{0,\ldots, 4\}$. Note that connectifiers of type $0$ contain large cliques, and since we mostly work with $t$-clean graphs, they do not come up in our arguments. However, for the sake of generality, we cover them in both the above definition and the main result of the next section, Theorem~\ref{minimalconnectedgeneral}.
Let $\sigma$ be a positive integer, $F$ be a graph and $X\subseteq F$ with $|X|\geq 2$. Let $\pi:[|X|]\rightarrow X$ be a bijection. By a $\sigma$-\textit{connectification of $(F,X)$ with respect to $\pi$}, we mean a graph $\Xi$ with the following specifications.
\begin{itemize}
\item $F$ is an induced subgraph of $\Xi$.
\item $F\setminus X$ is anticomplete to $\Xi\setminus F$.
\item Let $H=\Xi\setminus (V(F)\setminus X)$. Then $H$ is an $(X,|X|,\sigma)$-connectifier in $\Xi$ of type $i$ for some $i\in [4]$ such that
\begin{itemize}
\item if $H$ is of type 2 (that is, $H$ is a path), then traversing $H$ from one end to another, $(\pi(1),\ldots, \pi(|X|))$ is a $\sigma$-widening of $H$, and;
\item if $H$ is of type $3$ or $4$, then $(\pi(1),\ldots, \pi(|X|))$ is a $\sigma$-wide enumeration of $\mathcal{Z}(H)$.
\end{itemize}
\end{itemize}
Also, by a $\sigma$-\textit{connectification of $(F,X)$}, we mean a $\sigma$-connectification of $(F,X)$ with respect to some bijection $\pi:[|X|]\rightarrow X$.
Let $\mathcal{C}_{\sigma, F,X,\pi}$ be the class of all graphs with no induced subgraph isomorphic to a $\sigma$-connectification of $(F,X)$ with respect to $\pi$, and $\mathcal{C}_{\sigma,F,X}$ be the class of all graphs with no induced subgraph isomorphic to a $\sigma$-connectification of $(F,X)$. In other words, $\mathcal{C}_{\sigma, F,X}$ is the intersection of all classes $\mathcal{C}_{\sigma,F,X, \pi}$ over all bijections $\pi:[|X|]\rightarrow X$. It follows that for every $\pi:[|X|]\rightarrow X$, we have $\mathcal{C}_{\sigma, F,X}\subseteq \mathcal{C}_{\sigma,F,X, \pi}$. The following result is a key step in all the
proofs in this paper; it will be proved in Section~\ref{connectifysec}.
\begin{theorem}\label{mainconnectify}
Let $\sigma\geq 1$ be an integer, $F$ be a rooted subdivided star forest of size at least two and $\pi:[|\mathcal{R}(F)|]\rightarrow \mathcal{R}(F)$ be a bijection. Then the class $\mathcal{C}_{\sigma, F, \mathcal{R}(F),\pi}$ is clean.
\end{theorem}
Next we discuss briefly how to deduce Theorems~\ref{mainstarforest1} and \ref{mainsubforest} using Theorem~\ref{mainconnectify}. The ``only if'' direction of Theorem~\ref{mainstarforest1} is proved in Theorem~\ref{onlyifmain}. Also, the ``if'' direction of Theorem~\ref{mainstarforest1} follows from Theorem~\ref{mainsubforest}. So it suffices to prove Theorem~\ref{mainsubforest}, which we restate:
\begin{theorem}\label{mainsubforestrestate}
Let $H$ be a forest in which one component is a subdivided double star and every other component is a subdivided star. Then $\tilde{\mathcal{F}}_H$ is clean.
\end{theorem}
\begin{proof}
We define $F$ and $\sigma$ as follows. If $H$ is a subdivided star forest, then let $F=2H$ be rooted and $\sigma=2$. If $H$ is not a subdivided star forest, let $H'$ be the $1$-subdivision of $H$. Then there are two branch vertices $u_1,u_2\in H'$ and a path $Q$ in $H'$ from $u_1$ to $u_2$ with $Q^*\neq \emptyset$ such that $F'=H'\setminus Q^*$ is a subdivided star forest. For each $i\in \{1,2\}$, let $F_i$ be the component of $F'$ containing $u_i$. Then $u_i$ is a vertex of maximum degree in $F_i$ and so $u_i$ is a valid choice for a root of $F_i$. Let $F'$ be rooted such that $u_1,u_2\in \mathcal{R}(F')$. Let $\delta, \lambda$ and $\theta$ be the maximum degree, the reach and the size of $F'$, respectively. So we have $\delta,\theta\geq 2$ and $\lambda\geq 1$. Let $F=\theta S_{\delta+1,\lambda}$ be rooted with its unique choice of roots and let $\sigma=|Q|\geq 3$. Then one may observe that every $\sigma$-connectification of $(F,\mathcal{R}(F))$ contains a subdivision of $H$. Therefore, for every bijection $\pi:[|\mathcal{R}(F)|]\rightarrow \mathcal{R}(F)$, we have $\tilde{\mathcal{F}}_H\subseteq \mathcal{C}_{\sigma,F,\mathcal{R}(F)}\subseteq \mathcal{C}_{\sigma,F,\mathcal{R}(F),\pi}$. It follows that for every integer $t\geq 1$, we have $\tilde{\mathcal{F}}^t_H\subseteq \mathcal{C}_{\sigma,F,\mathcal{R}(F),\pi}$. This, together with Theorem~\ref{mainconnectify} and Lemma~\ref{cleanlemma} implies Theorem~\ref{mainsubforestrestate}.
\end{proof}
\section{Obtaining a connectifier}\label{connectedsec}
We begin with the following folklore result, see, for example, \cite{wallpaper} for a proof.
\begin{theorem}\label{minimalconnected}
Let $G$ be a connected graph, $X\subseteq V(G)$ with $|X|=3$ and $H$ be a connected induced subgraph of $G$ with $X\subseteq H$ and with $H$ minimal subject to inclusion. Then one of the following holds.
\begin{itemize}
\item There exists a vertex $a \in H$ and three paths $\{P_x:x\in X\}$ (possibly of length zero) where $P_x$ has ends $a$ and $x$, such that
\begin{itemize}
\item $H = \bigcup_{x\in X} P_x$, and;
\item the sets $\{P_x\setminus \{a\}: x\in X\}$ are pairwise disjoint and anticomplete.
\end{itemize}
\item There exists a triangle with vertex set $\{a_x:x\in X\}$ in $H$ and three paths $\{P_x: x\in X$\} (possibly of length zero) where $P_x$ has ends $a_x$ and $x$, such that
\begin{itemize}
\item $H= \bigcup_{x\in X} P_x$;
\item the sets $\{P_x \setminus \{a\}: x\in X\}$ are pairwise disjoint and anticomplete, and;
\item for distinct $x,y\in X$, $a_xa_y$ is the only edge of $H$ between $P_x$ and $P_y$.
\end{itemize}
\end{itemize}
\end{theorem}
Theorem~\ref{minimalconnected} may be reformulated as follows: for every choice of three vertices $x,y,z$ in a connected graph $G$, there is an induced subgraph $H$ of $G$ containing $x,y,z$ such that, for some $\delta\in [3]$, $H$ is isomorphic to either a subdivision of $K_{1,\delta}$ or the line graph of a subdivision of $K_{1,\delta}$, and $\mathcal{Z}(H)\subseteq \{x,y,z\}$. The main result of this section, the following, can be viewed as a qualitative extension of Theorem~\ref{minimalconnected}.
\begin{theorem}\label{minimalconnectedgeneral}
For every integer $\eta\geq 1$, there exists an integer $\mu=\mu(\eta)\geq 1$ with the following property. Let $G$ be a graph and
$S \subseteq V(G)$ with $|S|\geq \mu$ such that $S$ is contained in a connected component of $G$. Then $G$ contains an $(S,\eta)$-connectifier $H$.
In particular, $H$ is connected, we have $|H\cap S|=\eta$ and every vertex in $H\cap S$ has degree at most $\eta$ in $H$.
\end{theorem}
For a graph $G$, $S\subseteq G$ and positive integer $\eta$, one may observe that $(S,\eta)$-connectifiers are minimal with respect to being connected and containing $\eta$ vertices from $S$. Also, for $\eta_1,\eta_2\geq 4$ (which, given Theorem~\ref{minimalconnected}, captures the main content of Theorem~\ref{minimalconnectedgeneral}) and distinct $i_1,i_2\in \{0,1,\ldots,4\}$, no $(S,\eta_1)$-connectifier of type $i_1$ contains an induced subgraph which is an $(S,\eta_2)$-connectifier of type $i_2$. Therefore, Theorem~\ref{minimalconnectedgeneral} provides an efficient characterization of all minimally connected induced subgraphs of $G$ containing many vertices from a sufficiently large subset $S$ of vertices in $G$.
In order to prove Theorem~\ref{minimalconnectedgeneral}, we need a few definitions and a result from \cite{Davies}. By a \textit{big clique} in a graph $J$, we mean a maximal clique of cardinality at least three. A graph $J$ is said to be a \textit{bloated tree} if
\begin{itemize}
\item every edge of $J$ is contained in at most one maximal clique of $J$;
\item for every big clique $K$ of $J$ and every $v\in K$, $v$ has at most one neighbor in $J\setminus K$, and;
\item the graph obtained from $J$ by contracting each big clique into a vertex is a tree.
\end{itemize}
It follows that every connected induced subgraph of a bloated tree is a bloated tree. Furthermore, we deduce:
\begin{lemma}\label{blaotedlemma}
Let $J$ be a bloated tree. Then for every cycle $C$ in $J$, $V(C)$ is a clique of $J$.
\end{lemma}
\begin{proof}
Suppose for a contradiction that for some cycle $C$ in $J$, $V(C)$ contains two vertices which are non-adjacent in $J$. Let $C$ be chosen with $|V(C)|=k$ as small as possible. It follows that $k\geq 4$. Let $C=c_1\hbox{-} \cdots \hbox{-} c_k\hbox{-} c_1$ such that $c_1$ and $c_i$ are not adjacent for some $i\in \{3,\ldots,k-1\}$. Let $P$ be a path in $J$ from $c_1$ to $c_i$ with $P^*\subseteq \{c_2,\ldots, c_{i-1}\}$ and let $Q$ be a path in $J$ from $c_1$ to $c_i$ with $Q^*\subseteq \{c_{i+1},\ldots, c_{k}\}$. So $P$ and $Q$ are internally disjoint and $|P|,|Q|\geq 3$. Also, $H=J[P\cup Q]$ is a connected induced subgraph of $J$, and so $H$ is a bloated tree. If $P^*$ is anticomplete to $Q^*$, then $H$ is a cycle. But then the graph obtained from $H$ by contracting each big clique into a vertex is $H$ itself, which is not a tree, a contradiction with $H$ being a bloated tree. It follows that there exist $p\in P^*$ and $q\in Q^*$ such that $pq\in E(J)$. Consequently, $C_1=c_1\hbox{-} P\hbox{-} p\hbox{-} q\hbox{-} Q\hbox{-} c_1$ and $C_2=c_i\hbox{-} P\hbox{-} p\hbox{-} q\hbox{-} Q\hbox{-} c_i$ are two cycles in $J$ with $|V(C_1)|,|V(C_2)|<|V(C)|$. Thus, by the choice of $C$, for each $j\in \{1,2\}$, $K_j=J[V(C_j)]$ is a clique of $J$. For each $j\in \{1,2\}$, let $K_j'$ be a maximal clique of $J$ containing $K_j$. Note that $K_1'\neq K_2'$, since $c_1\in K_1'$ and $c_i\in K_2'$ are non-adjacent. But now the edge $pq\in E(J)$ is contained in two maximal cliques of $J$, namely $K_1'$ and $K_2'$, which violates $J$ being a bloated tree. This proves Lemma~\ref{blaotedlemma}.
\end{proof}
The following was proved in \cite{Davies}:
\begin{theorem}[Davies \cite{Davies}]\label{daviesbloated} For every integer $k\geq 1$, there exists an integer $f=f(k)$ such that if $G$ is a connected graph and $S\subseteq V(G)$ with $|S|\geq f(k)$, then $G$ has an induced subgraph $J$ which is a bloated tree and $|J\cap S|\geq k$.
\end{theorem}
We also need the following well-known result; see, for example, \cite{wallpaper} for a proof.
\begin{lemma}\label{degreeorpath}
For all positive integers $d,q$, there exists a positive integer $N(d,q)$ such that for every connected graph $G$ on at least $N(d,q)$ vertices, either $G$ contains a vertex of degree at least $d$, or there is a path in $G$ with $q$ vertices.
\end{lemma}
For a graph $G$ and a set $S\subseteq G$, by an $S$-\textit{bump} we mean a vertex $v\in G\setminus S$ of degree two in $G$, say $N_G(v)=\{v_1,v_2\}$, such that $v_1v_2\notin E(G)$. Also, by \textit{suppressing} the $S$-bump $v$ we mean removing $v$ from $G$ and adding the edge $v_1v_2$ (hence, $G$ is a subdivision of the resulting graph). We are now ready to prove Theorem~\ref{minimalconnectedgeneral}.
\begin{proof}[Proof of Theorem~\ref{minimalconnectedgeneral}]
Let $f(\cdot)$ be as in Theorem~\ref{daviesbloated}, and $N(\cdot, \cdot)$ be as in Lemma~\ref{degreeorpath}. We choose
$$\mu=\mu(\eta)=f(\max\{N(\eta,8\eta^2+\eta),2\}).$$
By Theorem~\ref{daviesbloated}, since $|S|\geq \mu$, $G$ has an induced subgraph $J$ which is a bloated tree with $|J\cap S|\geq \max\{N(\eta,8\eta^2+\eta),2\}$, and subject to this property, $J$ has as few vertices as possible. Assume that $\eta=2$. Then since $J$ is connected and $|J\cap S|\geq 2$, there is a path $H$ in $J$, and so in $G$, with ends in $S$ and $H^*\cap S=\emptyset$. But then $H$ is an $(S,2)$-connectifier of type $2$, as desired. Therefore, we may assume that $\eta\geq 3$.
\sta{\label{getSminimal} Let $X\subseteq J$ such that $X$ is connected. Then for every connected component $Q$ of $J\setminus X$, we have $Q\cap S\neq \emptyset$. In particular, we have $\mathcal{Z}(J)\subseteq S$.}
Suppose not. Let $Q$ be a component of $J\setminus X$ such that $Q\cap S= \emptyset$. Since $X$ is connected, $J\setminus Q$ is connected as well. It follows that $J\setminus Q$ is a bloated tree and $|(J\setminus Q)\cap S|=|J\cap S|$, which contradicts the minimality of $J$. This proves \eqref{getSminimal}.\medskip
Let $J_1$ be the graph obtained from $J$ by successively suppressing $S$-bumps in $J$ until there are none. Then $J_1$ is also a bloated tree, and $J$ is a subdivision of $J_1$. The following is immediate from \eqref{getSminimal} and the definition of $J_1$.
\sta{\label{getSminimalj1}$J_1$ has no $S$-bump, and we have $J_1\cap S=J\cap S$. Also, for every $X\subseteq J_1$ with $X$ connected and every connected component $Q$ of $J_1\setminus X$, we have $Q\cap S\neq \emptyset$. In particular, we have $\mathcal{Z}(J_1)\subseteq S$.}
Since $J$ is a bloated tree and so contains no hole, it follows in fact that $J$ is a subdivision of $J_1$ with the additional property that for every edge $e\in E(J_1)$ which is contained in a big clique of $J_1$, we have $e\in E(J)$ (that is, $e$ is not subdivided while obtaining $J$ from $J_1$). This, along with the fact that $J_1\cap S=J\cap S$, implies that $J$ contains an $(S,\eta)$-connectifier if and only if $J_1$ contains an $(S,\eta)$-connectifier. Therefore, in order to prove Theorem~\ref{minimalconnectedgeneral}, it suffices to show that $J_1$ contains an $(S,\eta)$-connectifier, which we do in the rest of the proof.
\sta{\label{maximaldisjoint} Let $K$ be a maximal clique of $J_1$, and for every $v\in K$, let $Q_v$ be the connected component of $J_1\setminus (K\setminus\{v\})$ containing $v$. Then for every two distinct vertices $u,v\in K$, we have $Q_u\cap Q_v=\emptyset$, and $uv$ is the only edge of $J_1$ between $Q_u$ and $Q_v$.}
Suppose for a contradiction that there exist two distinct vertices $u,v\in K$ for which either $Q_u\cap Q_v\neq \emptyset$ or there is an edge in $J_1$ different from $uv$ with an end in $Q_u$ and an end in $Q_v$. It follows that $J_1[Q_u\cup Q_v]-uv$ is connected, and so there exists a path $P$ in $J_1$ of length more than one from $u$ to $v$ with $P^*\subseteq (Q_u\cup Q_v)\setminus \{u,v\}\subseteq J_1\setminus K$. Let $x\in P^*$. Then $C=u\hbox{-} P\hbox{-} v\hbox{-} u$ is a cycle in $J_1$. Since $J_1$ is a bloated tree, by Lemma~\ref{blaotedlemma}, $V(C)$ is a clique, and so $x$ is adjacent to both $u$ and $v$. Now, suppose that there exists a vertex $y\in K\setminus N_{J_1}(x)$. Then we have $y\notin \{u,v\}$, and so $C'=x\hbox{-} u\hbox{-} y\hbox{-} v\hbox{-} x$ is a cycle in $J_1$ where $V(C')$ contains two non-adjacent vertices, namely $x$ and $y$, which contradicts Lemma~\ref{blaotedlemma} and the fact that $J_1$ is a bloated tree. Therefore, $x$ is complete to $K$, and so $K\cup \{x\}$ is a clique of $J_1$ strictly containing $K$. This violates the maximality of $K$, and so proves \eqref{maximaldisjoint}.\medskip
\sta{\label{largedegreehonest} Suppose that $J_1$ contains a big clique $K$ with $|K|\geq \eta$. Then $J_1$ contains an $(S,\eta)$-connectifier of type $0$.}
For every $v\in K$, let $Q_v$ be the connected component of $J_1\setminus (K\setminus\{v\})$ containing $v$. Then by \eqref{maximaldisjoint}, for every two distinct vertices $u,v\in K$, we have $Q_u\cap Q_v=\emptyset$, and there is no edge in $J_1$ with an end in $Q_u$ and an end in $Q_v$ except for $uv$. Also, by \eqref{getSminimalj1}, for every $v\in K$, we have $Q_v\cap S\neq \emptyset$. Therefore, since $Q_v$ is connected, we may choose a path $P_v$ in $Q_v$ from $v$ to a vertex $l_v\in S$ (possibly $v=l_v$) with $P_v\cap S=\{l_v\}$. It follows that for distinct $u,v\in K$, we have $P_u\cap P_v=\emptyset$, and there is no edge in $J_1$ with an end in $P_u$ and an end in $P_v$ except for $uv$. Now, let $K'\subseteq K$ with $|K'|=\eta$. Since $\eta\geq 3$, it follows that $H=J_1[\bigcup_{v\in K'}P_v]$ is a loosely $(S,\eta)$-tied line graph of a subdivided star, that is, $H$ is an $(S,\eta)$-connectifier of type $0$ in $J_1$. This proves \eqref{largedegreehonest}.\medskip
\sta{\label{maximaldisjointvertex} Let $x\in J_1$ such that $N_{J_1}(x)$ is a stable set of $J_1$, and for every $a\in N_{J_1}(x)$, let $Q_a$ be the connected component of $J_1\setminus x$ containing $a$. Then the sets $\{Q_a: a\in N_{J_1}(x)\}$ are pairwise disjoint and anticomplete.}
Suppose for a contradiction that there exist two distinct vertices $a,b\in N_{J_1}(x)$ for which either $Q_a\cap Q_b\neq \emptyset$, or there is an edge in $J_1$ with an end in $Q_a$ and an end in $Q_b$. It follows that $J_1[Q_a\cup Q_b]$ is connected, and so there exists a path $P$ in $J_1$ of length more than one from $a$ to $b$ with $P^*\subseteq (Q_a\cup Q_b)\setminus \{a,b\}\subseteq J_1\setminus \{a,b,x\}$. Then $C=a\hbox{-} P\hbox{-} b\hbox{-} x\hbox{-} a$ is a cycle in $J_1$ where $V(C)$ contains two non-adjacent vertices, namely $a$ and $b$. This contradicts Lemma~\ref{blaotedlemma} and the fact that $J_1$ is a bloated tree, and so proves \eqref{maximaldisjointvertex}.\medskip
Now we can handle the case where $J_1$ contains vertices of large degree.
\sta{\label{largedegree} Suppose that $J_1$ has a vertex of degree at least $\eta$. Then $J_1$ contains an $(S,\eta)$-connectifier of type $0$ or $1$.}
Since $J_1$ is a bloated tree, for every vertex $x\in J_1$, either $N_{J_1}(x)$ is a clique, or $N_{J_1}(x)$ is a stable set, or $J_1[N_{J_1}(x)]$ has an isolated vertex $y$ for which $N_{J_1}(x)\setminus \{y\}$ is a clique. Therefore, since $J_1$ has a vertex of degree at least $\eta$, it follows that either $J_1$ contains a big clique $K$ with $|K|\geq \eta$ or there exists a vertex $x\in V(J_1)$ of degree at least $\eta$ in $J_1$ such that $N_{J_1}(x)$ is a stable set of $J_1$. In the former case, \eqref{largedegree} follows from \eqref{largedegreehonest}. So we may assume that the latter case holds. For each $a\in N_{J_1}(x)$, let $Q_a$ be the connected component of $J_1\setminus x$ containing $a$. Then by \eqref{maximaldisjointvertex}, the sets $\{Q_a: a\in N_{J_1}(x)\}$ are pairwise disjoint and anticomplete. Also, by \eqref{getSminimalj1}, for every $a\in N_{J_1}(x)$, we have $Q_a\cap S\neq \emptyset$. Therefore, since $Q_a$ is connected, we may choose a path $P_a$ in $Q_a$ from $a$ to a vertex $l_a\in S$ (possibly $a=l_a$) with $P_a\cap S=\{l_a\}$. It follows that the paths $\{P_a: a\in N_{J_1}(x)\}$ are pairwise disjoint and anticomplete. Let $A$ be a subset of $N_{J_1}(x)$ with $|A|=\eta-1$ if $x\in S$ and $|A|=\eta$ if $x\notin S$. Then $H=J_1[\{x\}\cup \bigcup_{a\in A}P_a]$ is an $(S,\eta)$-tied rooted subdivided star with root $x$ such that $(H\cap S)\setminus \mathcal{L}(H)\subseteq \{x\}$, that is, $H$ is an $(S,\eta)$-connectifier in $J_1$ of type $1$. This proves \eqref{largedegree}.\medskip
Henceforth, by \eqref{largedegree}, we may assume that $J_1$ has no vertex of degree at least $\eta$. Also, by \eqref{getSminimalj1}, we have $|J_1|\geq |J_1\cap S|\geq N(\eta,8\eta^2+\eta)$. As a result, by Lemma~\ref{degreeorpath}, $J_1$ contains a path $P$ on $8\eta^2+\eta$ vertices.
\sta{\label{pathcase} Suppose that there is no path in $P\setminus S$ of length $8\eta$. Then $J_1$ contains an $(S,\eta)$-connectifier of type $2$.}
Suppose not. Then $P$ contains no $(S,\eta)$-tied path. Let $|P\cap S|=s$. It follows that $s<\eta$. Therefore, since there is no path in $P\setminus S$ of length $8\eta$, we have $|P|\leq 8\eta(s+1)+s<8\eta^2+\eta$, a contradiction. This proves \eqref{pathcase}.\medskip
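To make the count above explicit: the at most $s\leq \eta-1$ vertices of $P\cap S$ split $P$ into at most $s+1$ maximal subpaths of $P\setminus S$, each having at most $8\eta$ vertices (as none has length $8\eta$), whence
$$|P|\leq 8\eta(s+1)+s\leq 8\eta^2+(\eta-1)<8\eta^2+\eta.$$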
In view of \eqref{pathcase}, we may assume that $P$ contains a path $P_1$ of length $8\eta$ with $P_1\cap S=\emptyset$, say
$$P_1=d_0\hbox{-} a_1\hbox{-} b_1\hbox{-} c_1\hbox{-} d_1\hbox{-} a_2\hbox{-} b_2\hbox{-} c_2\hbox{-} d_2\hbox{-} \cdots \hbox{-} a_{2\eta}\hbox{-} b_{2\eta}\hbox{-} c_{2\eta}\hbox{-} d_{2\eta}.$$
For each $i\in [2\eta]$, let $A_i=\{a_i,b_i,c_i\}$, let $L_i$ be the connected component of $J_1\setminus A_i$ containing $P_1[d_0,d_{i-1}]$ and let $R_i$ be the connected component of $J_1\setminus A_i$ containing $P_1[d_{i},d_{2\eta}]$. We deduce:
\sta{\label{disjointcomponentsxi} For each $i\in [2\eta]$, $L_i$ and $R_i$ are distinct, and so $L_i\cap R_i=\emptyset$.}
Suppose not; that is, $L_i=R_i$. Then there exists a path $Z$ in $J_1[L_i]$ from a vertex $z\in P_1[d_0,d_{i-1}]$ to a vertex $z'\in P_1[d_{i},d_{2\eta}]$ with $Z^*\cap P_1=\emptyset$ (take a shortest path in $J_1[L_i]$ between these two subpaths of $P_1$, and note that $P_1\setminus A_i\subseteq P_1[d_0,d_{i-1}]\cup P_1[d_{i},d_{2\eta}]$). But then $C=z\hbox{-} P_1\hbox{-} z'\hbox{-} Z\hbox{-} z$ is a cycle in $J_1$ and $V(C)$ contains two non-adjacent vertices, namely $a_i$ and $c_i$, a contradiction with Lemma~\ref{blaotedlemma} and the fact that $J_1$ is a bloated tree. This proves \eqref{disjointcomponentsxi}.
\sta{\label{thirdcomponent} For each $i\in [2\eta]$, there exists a component $Q_i$ of $J_1\setminus A_i$ different from $L_i$ and $R_i$.}
Suppose not. Then $J_1\setminus A_i$ has exactly two distinct components, namely $L_i$ and $R_i$. Assume that $b_i$ has degree two in $J_1$. Then since $b_i\in P_1\subseteq J_1\setminus S$, it follows that $b_i$ is an $S$-bump, which violates \eqref{getSminimalj1}. So there exists a vertex $z\in N_{J_1}(b_i)\setminus A_i\subseteq L_i\cup R_i$, say $z\in L_i$. Consequently, since $L_i$ is connected, there exists a path $Z$ in $L_i$ from $z$ to a vertex $z'\in P_1[d_{0},d_{i-1}]$ with $Z\cap P_1=\{z'\}$. But then $C=b_i\hbox{-} z\hbox{-} Z\hbox{-} z'\hbox{-} P_1\hbox{-} b_i$ is a cycle in $J_1$ and $V(C)$ contains two non-adjacent vertices, namely $b_i$ and $d_{i-1}$, a contradiction with $J_1$ being a bloated tree. This proves \eqref{thirdcomponent}.
\sta{\label{disjointcomponentsqi} For each $i\in [2\eta]$, let $Q_i$ be as in \eqref{thirdcomponent}. Then we have $P_1\cap Q_i=\emptyset$ and $N_{J_1}(Q_i)\subseteq A_i$. Also, the sets $\{Q_i: i\in [2\eta]\}$ are pairwise disjoint and anticomplete.}
The first two assertions are immediate from the fact that $Q_i$ is a component of $J_1\setminus A_i$ different from $L_i$ and $R_i$. For the third one, suppose for a contradiction that $Q_i\cup Q_j$ is connected for some distinct $i,j\in [2\eta]$, say $i<j$. Since $J_1$ is connected and $N_{J_1}(Q_j)\subseteq A_j$, it follows that $Q_j\cup A_j$ is connected, and so $Q_i\cup Q_j\cup A_j$ is connected. As a result, there exists a path $R$ in $J_1$ with an end $q\in Q_i$ and an end $q'\in A_j\subseteq R_i$ with $R^*\subseteq Q_j$. Also, we have $A_i\cap R\subseteq A_i\cap (Q_i\cup Q_j\cup A_j)\subseteq P_1\cap (Q_i\cup Q_j)=\emptyset$. In other words, $R$ is a path in $J_1\setminus A_i$ from $q\in Q_i$ to $q'\in R_i$. But then we have $q\in Q_i\cap R_i$, a contradiction with \eqref{thirdcomponent}. This proves \eqref{disjointcomponentsqi}.\medskip
For each $i\in [2\eta]$, let $Q_i$ be as in \eqref{thirdcomponent}. Then by \eqref{getSminimalj1}, since $A_i$ is connected, we have $Q_i\cap S\neq \emptyset$. Also, from \eqref{thirdcomponent} and the connectivity of $J_1$, we have $N_{Q_i}(A_i)\neq \emptyset$. Therefore, since $Q_i$ is connected, we may choose a path $W_i$ in $Q_i$ from a vertex $x_i\in N_{Q_i}(A_i)$ to a vertex $y_i \in Q_i\cap S$ (possibly $x_i=y_i$) such that $W_i^*\cap (N_{Q_i}(A_i)\cup S)=\emptyset$.
Let $G_i=J_1[A_i\cup W_i]$. It follows that $G_i$ is connected and $G_i\cap S=\{y_i\}$.
\sta{\label{caterfinal}The sets $\{G_i:i\in [2\eta]\}$ are pairwise disjoint and anticomplete. Also, for every $i\in [2\eta]$, $d_{i-1}a_i$ and $c_id_{i}$ are the only edges in $J_1$ with an end in $G_i$ and an end in $P_1\setminus G_i$.}
\eqref{caterfinal} follows from \eqref{disjointcomponentsqi}: the sets $A_i$ are pairwise disjoint and anticomplete subpaths of the path $P_1$, the sets $W_i\subseteq Q_i$ are pairwise disjoint and anticomplete with $N_{J_1}(Q_i)\subseteq A_i$, and the only edges of $J_1$ between $A_i$ and $P_1\setminus A_i$ are $d_{i-1}a_i$ and $c_id_i$. The proof is now almost concluded. Note that since $J_1$ is a bloated tree, it follows that for every $i\in [2\eta]$, there is no cycle in $J_1$ containing both $a_i$ and $c_i$. Consequently, we have either $|N_{A_i}(x_i)|=1$, or $N_{A_i}(x_i)=\{a_i,b_i\}$, or $N_{A_i}(x_i)=\{b_i,c_i\}$, as otherwise $x_i\hbox{-} a_i\hbox{-} b_i\hbox{-} c_i\hbox{-} x_i$ is a cycle in $J_1$ containing both $a_i$ and $c_i$. Let $I\subseteq [2\eta]$. We say $I$ is \textit{light} if $|N_{A_i}(x_i)|=1$ for every $i\in I$. Also, we say $I$ is \textit{heavy} if for every $i\in I$, we have either $N_{A_i}(x_i)=\{a_i,b_i\}$, or $N_{A_i}(x_i)=\{b_i,c_i\}$. It follows that there exists $I\subseteq [2\eta]$ with $|I|=\eta$ which is either light or heavy. Let $i_1$ and $i_{\eta}$ be the smallest and the largest elements of $I$, respectively. It follows from $\eta\geq 3$ that $i_1$ and $i_{\eta}$ are distinct and $i_{\eta}\geq 3$. Let $Z_{1}$ be a path in $G_{i_1}$ from $c_{i_1}$ to $y_{i_1}$, and $Z_{\eta}$ be a path in $G_{i_{\eta}}$ from $a_{i_{\eta}}$ to $y_{i_{\eta}}$. Let
$$H=J_1\left[P_1[c_{i_1},a_{i_{\eta}}] \cup \left(Z_1\cup Z_{\eta}\right)\cup \left(\bigcup_{i\in I\setminus \{i_1,i_{\eta}\}}G_i\right)\right].$$
Using \eqref{caterfinal}, it is straightforward to observe that if $I$ is light, then $H$ is a loosely $(S,\eta)$-tied caterpillar, and if $I$ is heavy, then $H$ is a loosely $(S,\eta)$-tied line graph of a caterpillar. In other words, $H$ is an $(S,\eta)$-connectifier of type $3$ or $4$. This completes the proof of Theorem~\ref{minimalconnectedgeneral}.
\end{proof}
\section{Graphs with no $k$-block}\label{noblocksec}
Let $G$ be a graph. By a \textit{separation} in $G$ we mean a triple $(L,M,R)$ of pairwise disjoint subsets of vertices in $G$ with $L\cup M\cup R=G$, such that neither $L$ nor $R$ is empty and $L$ is anticomplete to $R$ in $G$. Let $x,y\in G$ be distinct. We say a set $M\subseteq G\setminus \{x,y\}$ \textit{separates $x$ and $y$} if there exists a separation $(L,M,R)$ in $G$ with $x\in L$ and $y\in R$. For a positive integer $k$, a $k$-\textit{block} in $G$ is a set $B$ of at least $k$ vertices such that no two distinct vertices $x,y\in B$ are separated by a set $M\subseteq G\setminus \{x,y\}$ with $|M|<k$. It follows that every non-null graph contains a $1$-block and every graph which is not a forest contains a $2$-block. Also, recall the following well-known theorem of Menger \cite{Menger}:
\begin{theorem}[Menger \cite{Menger}]\label{menger}
Let $k\geq 1$ be an integer, let $G$ be a graph and let $x,y\in G$ be distinct. Then either there exists a set $M\subseteq G\setminus \{x,y\}$ with $|M|<k$ such that $M$ separates $x$ and $y$, or there are $k$ pairwise internally disjoint paths in $G$ from $x$ to $y$.
\end{theorem}
In view of Theorem~\ref{menger}, a $k$-block can equivalently be defined as a set $B$ of at least $k$ vertices of $G$ such that for every two distinct vertices $x,y\in B$, there are $k$ pairwise internally disjoint paths in $G$ from $x$ to $y$. It follows that for every $0<k'\leq k$ and every $B'\subseteq B$ with $|B'|\geq k'$, $B'$ is a $k'$-block. In particular, $B$ is also a $k'$-block.
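For example, for every $n\geq k$, the vertex set of $K_n$ is a $k$-block: no set of vertices separates two adjacent vertices. Likewise, the set of branch vertices of any subdivision of $K_{k+1}$ is a $k$-block, as every two branch vertices $x,y$ are joined by $k$ pairwise internally disjoint paths: the subdivided edge between $x$ and $y$, together with, for each of the remaining $k-1$ branch vertices $z$, the path formed by the subdivided edges from $x$ to $z$ and from $z$ to $y$.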
The application of $k$-blocks in bounding the treewidth of hereditary graph classes is not unprecedented; see for example \cite{lozin,longhole2}. In particular, this often appears as an intermediate step, showing that for a given proper hereditary class of graphs $\mathcal{C}$ (which one aims to show has bounded treewidth) and every positive integer $k$, every graph in $\mathcal{C}$ with no $k$-block has bounded treewidth. In this section, we prove that for all positive integers $k,t$, every $t$-clean graph with no $k$-block has bounded treewidth. In other words, we show that for every positive integer $k$, the class of all graphs with no $k$-block is clean.
To begin with, we need some definitions as well as a couple of results from the literature. For a tree $T$ and $xy\in E(T)$, we denote by $T_{x,y}$ the component of $T-xy$ containing $x$. Let $G$ be a graph and $(T,\chi)$ be a tree decomposition for $G$. For every $S\subseteq T$, let $\chi(S)=\bigcup_{x\in S}\chi(x)$. Also, for every edge $xy\in E(T)$, we define an \textit{adhesion} for $(T,\chi)$ as $\chi(x,y)=\chi(x)\cap \chi(y)=\chi(T_{x,y})\cap \chi(T_{y,x})$. For every $x\in T$, by the \textit{torso at $x$}, denoted by $\hat{\chi}(x)$, we mean the graph obtained from the bag $\chi(x)$ by, for each $y\in N_T(x)$, adding an edge between every two non-adjacent vertices $u,v\in \chi(x,y)$.
We say $(T,\chi)$ is \textit{tight} if for each edge $e\in E(T)$ and each end $x$ of $e$, assuming $y$ to be the end of $e$ distinct from $x$, there is a component $C$ of $\chi(T_{x,y})\setminus \chi(T_{y,x})$ such that every vertex in $\chi(x,y)$ has a neighbor in $C$. The following theorem shows how the absence of a $k$-block guarantees a tree decomposition with useful properties. It is directly derived from combining Lemma 6 and Theorem 10 in \cite{tighttw}.
\begin{theorem}[Wei\ss auer \cite{tighttw}, see Lemma 6 and Theorem 10 therein]\label{tightdeg}
Let $k\geq 3$ be an integer and $G$ be a graph with no $k$-block. Then there is a
tight tree decomposition $(T,\chi)$ of $G$ for which every adhesion has cardinality at most $k$ and every torso has fewer than $k$ vertices of degree at least
$2(k-1)(k-2)$.
\end{theorem}
It is also a well-known observation that clique cutsets do not affect the treewidth. More precisely, the following holds (a proof can be worked out easily using Lemma 5 from \cite{cliquetw}).
\begin{theorem}[folklore, see Lemma 5 in \cite{cliquetw}]\label{bodtorso}
Let $G$ be a graph and $(T,\chi)$ be a tree decomposition of $G$. Then the treewidth of $G$ is at most the maximum treewidth of a torso $\hat{\chi}(x)$ taken over all $x\in T$.
\end{theorem}
We now prove the main result of this section. For every positive integer $k$, let $\mathcal{B}_k$ be the class of all graphs with no $k$-block.
\begin{theorem}\label{noblock}
For every integer $k\geq 1$, the class $\mathcal{B}_k$ is clean.
\end{theorem}
\begin{proof}
Note that every graph in $\mathcal{B}_1$ has empty vertex set, and $\mathcal{B}_2$ is the class of all forests. So the result is immediate for $k\in \{1,2\}$, and we may assume that $k\geq 3$.
Let $t\geq 1$ and let $G\in \mathcal{B}_{k}^t$, that is, $G$ is a $t$-clean graph with no $k$-block. We aim to show that there exists an integer $w(k,t)\geq 1$ such that $\tw(G)\leq w(k,t)$. By Theorem~\ref{tightdeg}, $G$ has a tight tree decomposition $(T,\chi)$ for which every torso has fewer than $k$ vertices of degree at least
$2(k-1)(k-2)$. For each $x\in T$, let $K_x\subseteq \hat{\chi}(x)$ be the set of all vertices in $\hat{\chi}(x)$ of degree at least $2(k-1)(k-2)$ and $\tau_x=\hat{\chi}(x)\setminus K_x$. So $\tau_x$ has maximum degree less than $2(k-1)(k-2)<2k^2$. Let $q(\cdot,\cdot)$ be as in Theorem~\ref{boundeddeg}. Let $q_1=q(4,t)+1$ and $\gamma=\gamma(k,t)=q(q_1,2k^2)$. We claim that:
\sta{\label{wallintorso} For every $x\in T$, we have $\tw(\tau_x)\leq \gamma$.}
Suppose not. Then since $\tau_x$ has maximum degree at most $2k^2$, it follows from Theorem~\ref{boundeddeg} that $\tau_x$ contains an induced subgraph $W$ which is isomorphic to either a subdivision of $W_{q_1\times q_1}$ or the line graph of a subdivision of $W_{q_1\times q_1}$. In particular, $W$ has maximum degree at most four. Let us say a non-empty subset $K\subseteq W$ is a \textit{blossom} if there exists $y\in N_T(x)$ such that $K\subseteq \chi(x,y)$, and subject to this property, $K$ is maximal with respect to inclusion. It follows that every blossom is a clique in $W$ and every two blossoms intersect in at most one vertex. Let $\mathcal{K}$ be the set of all blossoms, and for every blossom $K\in \mathcal{K}$, let us fix $y_K\in N_T(x)$ such that $K\subseteq \chi(x,y_K)$. From the maximality of blossoms, it follows that the vertices $\{y_K: K\in \mathcal{K}\}$ are all distinct. Note that $(T,\chi)$ is tight, and so for every $y\in N_T(x)$, there exists a component $C(y)$ of $\chi(T_{y,x})\setminus \chi(T_{x,y})$ such that every vertex in $\chi(x,y)$ has a neighbor in $C(y)$. Since $(T,\chi)$ is a tree decomposition, it follows that the sets $\{C(y_K): K\in \mathcal{K}\}$ are pairwise distinct, disjoint and anticomplete in $G$. Now, for every $K\in \mathcal{K}$, we have $|K|\in \{1,2,3\}$. Let $H_K$ be a connected induced subgraph of $G[C(y_K)\cup K]$ which contains $K$, and subject to this property, assume that $H_K$ is minimal with respect to inclusion. It follows that if $|K|=1$, then $H_K=K$, if $|K|=2$, then $H_K$ is a path in $G$ between the two vertices in $K$ with $H_K^*\subseteq C(y_{K})$, and if $|K|=3$, then $H_K$ satisfies one of the two outcomes of Theorem~\ref{minimalconnected}. Also, the sets $\{H_K\setminus K: K\in \mathcal{K}\}$ are pairwise distinct, disjoint and anticomplete in $G$. Now, let
\[H=G\left[\left(W\setminus \left(\bigcup_{K\in \mathcal{K}}K\right)\right)\cup \left(\bigcup_{K\in \mathcal{K}}H_K\right)\right].\]
Since $W$ has maximum degree at most four, $H$ has maximum degree at most four, as well. Also, it is straightforward to observe that $H$ contains $W_{q_1\times q_1}$ as a minor, and so we have $\tw(H)\geq q_1>q(4,t)$. Consequently, by Theorem~\ref{boundeddeg}, $H$, and so $G$, contains either a subdivision of $W_{t\times t}$ or the line graph of a subdivision of $W_{t\times t}$. But this violates the assumption that $G$ is $t$-clean, and so proves \eqref{wallintorso}.\medskip
Now, for every $x\in T$, \eqref{wallintorso} along with the fact that $|K_x|<k$ yields $\tw(\hat{\chi}(x))\leq \gamma+k$. Hence, writing $w(k,t)=\gamma(k,t)+k$, by Theorem~\ref{bodtorso}, we have $\tw(G)\leq w(k,t)$. This completes the proof of Theorem~\ref{noblock}.
\end{proof}
\section{$k$-blocks with distant vertices}\label{distancesec}
The main result of this section, Theorem~\ref{distance}, asserts that for every positive integer $k$, every graph containing a sufficiently large block contains either a subdivision of a large complete graph with all paths short as a subgraph, or an induced subgraph which contains a $k$-block with its vertices pairwise far from each other. This will be of essential use in subsequent sections, and before proving it, we recall the classical result of Ramsey (see e.g.\ \cite{ajtai} for an explicit bound).
\begin{theorem}[See \cite{ajtai}]\label{classicalramsey}
For all integers $a,b\geq 1$, there exists an integer $R=R(a, b)\geq 1$ such that every graph $G$ on at least $R(a,b)$ vertices contains either a clique of cardinality $a$ or a stable set of cardinality $b$. In particular, for all integers $t\geq 1$ and $n\geq R(t,t)$, every graph $G$ containing $K_{n,n}$ as a subgraph contains either $K_t$ or $K_{t,t}$ as an induced subgraph.
\end{theorem}
For a graph $G$ and a positive integer $d$, a \textit{$d$-stable set} in $G$ is a set $S\subseteq G$ such that for every two distinct vertices $u,v\in S$, there is no path of length at most $d$ in $G$ from $u$ to $v$. Note that a $d$-stable set is also a $d'$-stable set for every $0<d'\leq d$. Here comes the main result of this section.
\begin{theorem}\label{distance}
For all integers $d,k\geq 1$ and $m\geq 2$, there exists an integer $k_0=k_0(d,k,m)\geq 1$ with the following property. Let $G$ be a graph and $B_0$ be a $k_0$-block in $G$. Assume that $G$ does not contain a $(\leq d)$-subdivision of $K_m$ as a subgraph. Then there exist $A\subseteq G$ and $S\subseteq B_0\setminus A$ such that $S$ is both a $k$-block and a $d$-stable set in $G\setminus A$.
\end{theorem}
\begin{proof}
Let $R(m,k)$ be as in Theorem~\ref{classicalramsey}. We show that $$k_0=k_0(d,k,m)=\binom{R(m,k)}{2}(d-1)+R(m,k)$$
satisfies Theorem~\ref{distance}. Let $X\subseteq B_0$ with $|X|=R(m,k)$. Let $g=\binom{R(m,k)}{2}$. Let $e_1, \ldots, e_{g}$ be an enumeration of the $2$-subsets of $X$, and let $e_i=\{x_i,y_i\}$ for each $i\in [g]$. Let $U_0=\emptyset$, and for every $i\in [g]$, having defined $U_{i-1}$, we define $P_i$ and $U_i$ as follows. If there exists a path $P$ in $G$ of length at most $d$ from $x_i$ to $y_i$ with $P^*\cap (U_{i-1}\cup X)=\emptyset$, then let $P_i=P$ and $U_i=U_{i-1}\cup P^*_i$. Otherwise, let $P_i=\emptyset$ and $U_i=U_{i-1}$. It follows that for all $i,j\in [g]$ with $i<j$ and $P_i,P_j\neq \emptyset$, we have $P_i\cap P^*_j=U_i\cap P^*_j=\emptyset$ and $P^*_i\cap P_j=P_i^*\cap X=\emptyset$.
Let $G_0$ be the graph with $V(G_0)=X$ and for each $i\in [g]$, $x_i$ is adjacent to $y_i$ in $G_0$ if and only if $P_i\neq \emptyset$.
\sta{\label{distancenoclique}$G_0$ contains no clique of cardinality $m$.}
Suppose for a contradiction that $G_0$ contains a clique $C$ of cardinality $m$. Then for every $i\in [g]$ with $e_i\subseteq C$, we have $P_i\neq \emptyset$. Also, for all distinct $i,j\in [g]$, we have $P_i\cap P^*_j=P^*_i\cap P_j=\emptyset$. But then $G[\bigcup_{e_i\subseteq C}P_i]$, and so $G$, contains a ($\leq d$)-subdivision of $K_m$ as a subgraph, a contradiction. This proves \eqref{distancenoclique}.\medskip
Since $|G_0|=|X|=R(m,k)$, Theorem~\ref{classicalramsey} combined with \eqref{distancenoclique} implies that $G_0$ contains a stable set $S$ of cardinality $k$. Let $A=U_g\cup (X\setminus S)$. Then we have $|A|\leq g(d-1)+R(m,k)-k$. Therefore, since $S\subseteq B_0\setminus A$ and $B_0$ is a $(g(d-1)+R(m,k))$-block, we deduce that $S$ is a $k$-block in $G\setminus A$. It remains to show that $S$ is a $d$-stable set in $G\setminus A$. Suppose not. Then there exist distinct $x,y\in S$ and a path $Q$ in $G\setminus A$ of length at most $d$ from $x$ to $y$. Thus, we may choose $i\in [g]$ such that $e_i\subseteq Q\cap S$ and, assuming $P=Q[x_i,y_i]$, we have $P^*\cap S=\emptyset$. It follows that $P$ is a path in $G\setminus A$ (and so in $G$) of length at most $d$ from $x_i$ to $y_i$ with $P^*\subseteq G\setminus (A\cup S)=G\setminus (U_g\cup X)\subseteq G\setminus (U_{i-1}\cup X)$. It follows that $P_i\neq\emptyset$. But we have $e_i\subseteq S$ and $S$ is a stable set in $G_0$, which implies that $P_i=\emptyset$, a contradiction.
This completes the proof of Theorem~\ref{distance}.
\end{proof}
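We remark that the deduction in the proof above that $S$ is a $k$-block in $G\setminus A$ is explicit: if a set $M\subseteq (G\setminus A)\setminus \{x,y\}$ with $|M|<k$ separated two distinct vertices $x,y\in S$ in $G\setminus A$, then $M\cup A$ would separate $x$ and $y$ in $G$, while
$$|M\cup A|\leq (k-1)+g(d-1)+R(m,k)-k=g(d-1)+R(m,k)-1<k_0,$$
contradicting the fact that $B_0$ is a $k_0$-block containing $x$ and $y$.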
\section{Planted subdivided star forests}\label{ramseyblcoksec}
In this section we extend ideas from \cite{lozin} to produce subdivided star forests
whose roots are contained in sets with useful properties. Let $G$ be a graph, $S\subseteq G$ and $F$ be a subdivided star forest. We say a subgraph $F'$ of $G$ isomorphic to $F$ is $S$-\textit{planted} if $F'$ is rooted and $\mathcal{R}(F')\subseteq S$. Write $\mathcal{H}_\lambda$ for the class of graphs with no holes of length more than $\lambda$. The main result of this section is the following.
\begin{theorem}\label{blockmanystar}
For all positive integers $d,k,t,\delta,\lambda$ and $\theta$ with $\delta\geq 2$, there exists a positive integer $k_1=k_1(d,k,t,\delta,\lambda, \theta)$ with the following property. Let $G$ be a $t$-clean graph and let $B_1$ be a $k_1$-block in $G$. Then there exist $A\subseteq V(G)$ and $S\subseteq B_1\setminus A$ such that the following hold.
\begin{itemize}
\item[(a)] $S$ is both a $k$-block and a $d$-stable set in $G\setminus A$.
\item[(b)] $G\setminus A$ contains an $S$-planted copy of $\theta S_{\delta,\lambda}$.
\item[(c)] $G\setminus A$ contains a hole of length more than $\lambda$.
\end{itemize}
In particular, we have $\mathcal{F}^t_{\theta S_{\delta,\lambda}},\mathcal{H}^t_{\lambda}\subseteq \mathcal{B}_{k_1}$.
\end{theorem}
Note that Theorem~\ref{blockmanystar}, combined with Theorem~\ref{noblock} and Lemma~\ref{cleanlemma}, implies Theorems~\ref{longholepili} and \ref{mainstarforest1} at once. Theorem~\ref{blockmanystar} is also a key tool in the proof of Theorem~\ref{mainconnectify} in Section~\ref{connectifysec}. We need the following two results from \cite{lozin}.
\begin{lemma}[Lozin and Razgon \cite{lozin}]\label{ramsey1}
For all positive integers $a$ and $b$, there is a positive integer $c=c(a, b)$ such that if a graph
$G$ contains a collection of $c$ pairwise disjoint subsets of $V(G)$, each of size at most $a$ and with at least
one edge between every two of them, then $G$ contains $K_{b,b}$ as a subgraph.
\end{lemma}
\begin{theorem}[Lozin and Razgon \cite{lozin}]\label{ramsey2}
For all positive integers $p$ and $r$, there exists a positive integer $m = m(p,r)$ such that every
graph $G$ containing a ($\leq p$)-subdivision of $K_m$ as a subgraph contains either $K_{p,p}$ as a subgraph or a
proper ($\leq p$)-subdivision of $K_{r,r}$ as an induced subgraph.
\end{theorem}
We begin with a short lemma.
\begin{lemma} \label{lem:noKmsubdivision}
Let $t\geq 1$ be an integer and let $G$ be a $t$-clean graph. Let $\rho$ be an integer with $\rho\geq R(t,t)$, where $R(\cdot,\cdot)$ is as in Theorem~\ref{classicalramsey}. Let $m = m(R(t, t), 2t^2)$, where $m(\cdot, \cdot)$ is as in Theorem~\ref{ramsey2}. Then $G$ does not contain a $(\leq \rho)$-subdivision of $K_m$ as a subgraph.
\end{lemma}
\begin{proof}
Suppose for a contradiction that $G$ contains a $(\leq \rho)$-subdivision of $K_m$ as a subgraph. Then by Theorem~\ref{ramsey2}, $G$ contains either $K_{\rho, \rho}$ as a subgraph, or an induced subgraph $H$ isomorphic to a proper subdivision of $K_{2t^2,2t^2}$. In the former case, by Theorem~\ref{classicalramsey}, $G$ contains either $K_t$ or $K_{t,t}$ as an induced subgraph, which violates the assumption that $G$ is $t$-clean. In the latter case, note that a proper subdivision of $K_{2t^2,2t^2}$ contains a proper subdivision of every bipartite graph on at most $2t^2$ vertices. In particular, $H$, and so $G$, contains a subdivision of $W_{t\times t}$, again a contradiction with the fact that $G$ is $t$-clean. This proves Lemma~\ref{lem:noKmsubdivision}.
\end{proof}
We are now ready to prove the main result of this section.
\begin{proof}[Proof of Theorem~\ref{blockmanystar}]
Let $R(\cdot,\cdot)$ be as in Theorem~\ref{classicalramsey}. Let $c = c(\lambda, R(t, t))$, where $c(\cdot, \cdot)$ is as in Lemma~\ref{ramsey1}. Let $m = m(R(t, t), 2t^2)$, where $m(\cdot, \cdot)$ is as in Theorem~\ref{ramsey2}. Let $k_0(\cdot,\cdot,\cdot)$ be as in Theorem~\ref{distance}. Let
$$k_1 = k_1(d,k,t,\delta,\lambda, \theta)= k_0(\max\{d, R(t,t), 2\lambda + 1\}, \max\{k, R(c, \delta), \theta\}, m).$$
We claim that this choice of $k_1$ satisfies Theorem~\ref{blockmanystar}.
To see this, suppose that $G$ is a $t$-clean graph which has a $k_1$-block $B_1$. Note first that, by Lemma~\ref{lem:noKmsubdivision}, $G$ does not contain a $(\leq \max\{d, R(t,t), 2\lambda + 1\})$-subdivision of $K_m$ as a subgraph. Therefore, Theorem~\ref{distance} applies, and there exist $A \subseteq V(G)$ and $S \subseteq B_1 \setminus A$ such that $S$ is both a $\max\{k, R(c, \delta), \theta\}$-block and a $\max\{d, R(t,t), 2\lambda + 1\}$-stable set in $G \setminus A$. In particular, $S$ is both a $k$-block and a $d$-stable set in $G \setminus A$, which proves (a). Next we claim that:
\sta{\label{1star}For every $x\in S$, there exists a copy $F_x$ of $S_{\delta,\lambda}$ in $G$ where $x\in F_x$ has degree $\delta$ in $F_x$.}
Pick a vertex $y \in S\setminus \{x\}$ (note that $|S|\geq R(c,\delta)\geq 2$). Since $S$ is an $R(c, \delta)$-block in $G\setminus A$, there exists a collection $\{P_i: i\in [R(c, \delta)]\}$ of pairwise internally disjoint paths in $G \setminus A$ from $x$ to $y$. Since $S$ is a $(2\lambda + 1)$-stable set, for each $i\in [R(c, \delta)]$, $P_i$ has length at least $2\lambda+2$; let $P_i'$ be the subpath of $P_i$ of length $\lambda$ containing $x$ as an end. Then $\{P_i'\setminus \{x\}:i\in [R(c,\delta)]\}$ is a collection of $R(c, \delta)$ pairwise disjoint subsets of $V(G)$, each of cardinality $\lambda$.
Let $\Gamma$ be the graph with $V(\Gamma)=[R(c, \delta)]$ such that for all distinct $i,j\in [R(c, \delta)]$, $i$ is adjacent to $j$ in $\Gamma$ if and only if there is an edge in $G$ between $P_i'\setminus \{x\}$ and $P_j'\setminus \{x\}$. By Theorem~\ref{classicalramsey}, $\Gamma$ contains either a clique of cardinality $c$ or a stable set of cardinality $\delta$. Suppose first that $\Gamma$ contains a clique of cardinality $c$. Then Lemma~\ref{ramsey1} implies that $G$ contains $K_{R(t, t), R(t, t)}$ as a subgraph, and thus $G$ contains $K_t$ or $K_{t, t}$ by Theorem~\ref{classicalramsey}, which violates the assumption that $G$ is $t$-clean. Consequently, $\Gamma$ has a stable set $I$ of cardinality $\delta$. But now $F_x=G[\bigcup_{i\in I}P_i']$ is a copy of $S_{\delta,\lambda}$ in $G$, where $x\in F_x$ has degree $\delta$ in $F_x$. This proves \eqref{1star}.\medskip
Now we can prove (b). For every $x\in S$, let $F_x$ be as in \eqref{1star}. Note that since $S$ is a $(2\lambda + 1)$-stable set in $G \setminus A$, it follows that for all distinct $x,x'\in S$, $F_x$ and $F_{x'}$ are disjoint and anticomplete. Also, since $S$ is a $\theta$-block, there exists $S'\subseteq S$ with $|S'|=\theta$. But now $G[\bigcup_{x\in S'}F_x]$ is an $S$-planted copy of $\theta S_{\delta,\lambda}$ in $G\setminus A$. This proves (b).
It remains to prove (c). Proceeding as in the proof of \eqref{1star}, we may pick distinct vertices $x,y\in S$ and two internally disjoint paths $P_1$ and $P_2$ in $G\setminus A$ from $x$ to $y$ such that $P_1'\setminus \{x\}$ is anticomplete to $P_2'\setminus \{x\}$, where for each $i\in \{1,2\}$, $P_i'$ is the subpath of $P_i$ of length $\lambda$ containing $x$ as an end. Traversing $P_1$ from $x$ to $y$, let $z$ be the first vertex in $P_1^*$ with a neighbor in $P_2 \setminus \{x\}$ (this vertex exists, since the neighbor of $y$ in $P_1$ is adjacent to $P_2 \setminus \{x\}$). Also, traversing $P_2$ from $x$ to $y$, let $w \in P_2\setminus \{x\}$ be the first neighbor of $z$ in $P_2\setminus \{x\}$. Note that since $P_1'$ is anticomplete to $P_2'$, it follows that either $z \notin P'_1$ or $w\notin P'_2$. But now $x \hbox{-} P_1 \hbox{-} z \hbox{-} w \hbox{-} P_2 \hbox{-} x$ is a hole in $G\setminus A$ of length at least $\lambda+3$. This completes the proof of Theorem~\ref{blockmanystar}.
\end{proof}
\section{Proof of Theorem~\ref{mainconnectify}}\label{connectifysec}
The last step in the proof of Theorem~\ref{mainconnectify} is the following. Note that the condition $\delta\geq 3$ is due to the fact that there is only one choice of roots for subdivided star forests in which every component has a branch vertex, and so it is slightly more convenient to work with them.
\begin{lemma}\label{blockconnectify}
For all positive integers $t, \delta, \lambda, \sigma, \theta$ with $\delta\geq 3$ and $\theta\geq 2$, there exists an integer $k_2=k_2(t, \delta, \lambda, \sigma, \theta)\geq 1$ with the following property. Let $G$ be a $t$-clean graph containing a $k_2$-block. Then $G$ contains a $\sigma$-connectification of $(\theta S_{\delta,\lambda},\mathcal{R}(\theta S_{\delta,\lambda}))$. In other words, we have $\mathcal{C}^t_{\sigma,\theta S_{\delta,\lambda}, \mathcal{R}(\theta S_{\delta,\lambda})}\subseteq \mathcal{B}_{k_2}$.
\end{lemma}
\begin{proof}
Let $\mu(\cdot)$ be as in Theorem~\ref{minimalconnectedgeneral}. Let
$$\gamma_1=\mu(\max\{t,\sigma\theta,\theta+1\}),$$
$$\gamma_2=\mu(\gamma_1),$$
$$\gamma_3=\gamma_2((2t\gamma_1+\delta)\lambda+1).$$
Let $k_1(\cdot, \cdot,\cdot,\cdot,\cdot, \cdot)$ be as in Theorem~\ref{blockmanystar}. We define:
$$\displaystyle k_2=k_2(t, \delta, \lambda, \sigma, \theta)=k_1\left(2\sigma-1,\gamma_3+R(t,t)\binom{\gamma_3}{2t},t,2t\gamma_1+\delta,\lambda,\gamma_2\right).$$
Let $B_2$ be a $k_2$-block in $G$. By Theorem~\ref{blockmanystar}, there exist $A\subseteq G$ and $S\subseteq B_2\setminus A$ such that, writing $G_0=G\setminus A$, the following hold.
\begin{itemize}
\item $S$ is both a $(\gamma_3+R(t,t)\binom{\gamma_3}{2t})$-block and a $(2\sigma -1)$-stable set in $G_0$.
\item $G_0$ contains an $S$-planted copy $F$ of $\gamma_2 S_{2t\gamma_1+\delta,\lambda}$.
\end{itemize}
Then $|\mathcal{R}(F)|=\gamma_2$ and $|F|=\gamma_3$. For every $x\in \mathcal{R}(F)$, let $F_x$ be the component of $F$ with root $x$. Let $W$ be the set of all vertices in $G_0\setminus F$ with at least $2t$ neighbors in $F$.
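Indeed, each component of $F$ is a copy of $S_{2t\gamma_1+\delta,\lambda}$, consisting of a root together with $2t\gamma_1+\delta$ stems of length $\lambda$, and thus has $(2t\gamma_1+\delta)\lambda+1$ vertices; hence $|F|=\gamma_2\big((2t\gamma_1+\delta)\lambda+1\big)=\gamma_3$, in line with the choice of $\gamma_3$.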
\sta{\label{boundednbrs} We have $\displaystyle |W|<R(t,t)\binom{\gamma_3}{2t}$.}
Suppose not. Let $q=R(t,t)\binom{\gamma_3}{2t}$ and let $w_1, \ldots, w_q\in W$ be distinct. For every $i\in [q]$, let $N_i$ be a set of $2t$ neighbors of $w_i$ in $F$. It follows that there exist $I\subseteq [q]$ and $N\subseteq F$ such that $|I|=R(t,t)$, $|N|=2t$ and $N_{i}=N$ for all $i\in I$. Note that since $F$ is a forest, $N$ contains a stable set $N'$ of $G_0$ with $|N'|=t$. Also, since $G_0$ is $t$-clean, it does not contain a clique of cardinality $t$. Thus, by Theorem~\ref{classicalramsey}, $G_0[\{w_i:i\in I\}]$ contains a stable set $N''$ of cardinality $t$. But then $G_0[N'\cup N'']$ is isomorphic to $K_{t,t}$, which violates the fact that $G_0$ is $t$-clean. This proves \eqref{boundednbrs}.\medskip
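We remark that the value of $q$ is exactly what the pigeonhole step in the proof of \eqref{boundednbrs} requires: each $N_i$ is one of at most $\binom{\gamma_3}{2t}$ subsets of $F$ of cardinality $2t$, so some $2t$-set $N$ satisfies $N_i=N$ for at least
$$\frac{q}{\binom{\gamma_3}{2t}}=R(t,t)$$
indices $i\in [q]$.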
Let $G_1=G_0\setminus W$. Then $G_1$ is a $t$-clean induced subgraph of $G$. In order to prove Lemma~\ref{blockconnectify}, it suffices to show that $G_1$ contains a $\sigma$-connectification of $(\theta S_{\delta,\lambda},\mathcal{R}(\theta S_{\delta,\lambda}))$, which we do in the rest of the proof.
Recall that $S$ is both a $(\gamma_3+R(t,t)\binom{\gamma_3}{2t})$-block and a $(2\sigma-1)$-stable set in $G_0$. Thus, since $S\setminus W\subseteq G_1$, by \eqref{boundednbrs}, $S\setminus W$ is both a $\gamma_3$-block and a $(2\sigma-1)$-stable set in $G_1$. Also, we have $\mathcal{R}(F)\subseteq S\setminus W$. It follows that $\mathcal{R}(F)$ is a $(2\sigma-1)$-stable set in $G_1$, and for every two distinct vertices $x,x'\in \mathcal{R}(F)$, since $|F\setminus \mathcal{R}(F)|<\gamma_3$, there is a path in $G_1\setminus (F\setminus \mathcal{R}(F))$ from $x$ to $x'$. Consequently, $G_1\setminus (F\setminus \mathcal{R}(F))$ has a component containing $\mathcal{R}(F)$. Let $G_2$ be the graph obtained from $G_1$ by contracting $F_x$ into $x$ for each $x\in \mathcal{R}(F)$. Then $G_2$ contains $G_1\setminus (F\setminus \mathcal{R}(F))$ as a spanning subgraph, and so $G_2$ has a component containing $\mathcal{R}(F)$. Since $|\mathcal{R}(F)|\geq \gamma_2=\mu(\gamma_1)$, from Theorem~\ref{minimalconnectedgeneral} applied to $G_2$ and $\mathcal{R}(F)$, it follows that $G_2$ contains a connected induced subgraph $H_2$ such that, assuming $S'=H_2\cap \mathcal{R}(F)$, we have $|S'|= \gamma_1$ and every vertex in $S'$ has degree at most $\gamma_1$ in $H_2$. Let
$$H_1=G_1\left[H_2\cup \left(\bigcup_{x\in S'} F_x\right)\right].$$
In other words, $H_1$ is the induced subgraph of $G_1$ obtained from $H_2$ by undoing the contraction of $F_x$ into $x$ for each $x\in H_2\cap \mathcal{R}(F)$. It follows that $H_1$ is a connected induced subgraph of $G_1$ and $H_1\cap \mathcal{R}(F)=H_2\cap \mathcal{R}(F)=S'$. Moreover, since $\mathcal{R}(F)$ is a $(2\sigma-1)$-stable set in $G_1$, $S'$ is also a $(2\sigma-1)$-stable set in $H_1$.
\sta{\label{degree2t} For every $x\in S'$, we have $|N_{F_x}(H_1\setminus F_x)|< 2t\gamma_1$.}
Note that $N_{H_1\setminus F_x}(F_x)=N_{H_2}(x)$, and so $|N_{H_1\setminus F_x}(F_x)|\leq \gamma_1$. Also, since $H_1$ is an induced subgraph of $G_1$, by the definition of $W$, no vertex in $N_{H_1\setminus F_x}(F_x)\subseteq G_1\setminus F$ has at least $2t$ neighbors in $F_x$. Therefore, we have $|N_{F_x}(H_1\setminus F_x)|< 2t\gamma_1$. This proves \eqref{degree2t}.\medskip
The following is immediate from \eqref{degree2t} and the fact that for every $x\in S'$, $F_x$ is isomorphic to $S_{2t\gamma_1+\delta,\lambda}$.
\sta{\label{anticompletestars} For every $x\in S'$, $F_x$ contains an induced copy $F'_x$ of $S_{\delta,\lambda}$ containing $x$ such that $F'_x\setminus \{x\}$ is anticomplete to $H_1\setminus F'_x$.}
Next, we define:
$$H'_1=H_1\setminus \left(\bigcup_{x\in S'} (F'_x\setminus \{x\})\right).$$
It follows that $H_1'$ is a connected induced subgraph of $G_1$ and $S'\subseteq H'_1$ is a $(2\sigma-1)$-stable set in $H_1'$.
\sta{\label{getconnectifier}
$H_1'$, and so $G_1$, contains an $(S',\theta, \sigma)$-connectifier $H$ of type $i$ for some $i\in [4]$.}
Since $|S'|\geq \gamma_1=\mu(\max\{t,\theta \sigma,\theta+1\})$, we can apply Theorem~\ref{minimalconnectedgeneral} to $H_1'$ and $S'$. It follows that $H_1'$ contains an $(S',\max\{t,\theta \sigma,\theta+1\})$-connectifier $H'$. Since $S'$ is a $(2\sigma-1)$-stable set in $H_1'$, $H'\cap S'$ is also a $(2\sigma-1)$-stable set in $H'$. It is straightforward to observe that if $H'$ is of type $i$ for $i\in \{2,3,4\}$, then $H'$, and so $H_1'$, contains an $(S',\theta, \sigma)$-connectifier $H$. Also, if $H'$ is of type $0$, then $H'$ contains a clique of cardinality $t$, which violates the fact that $G_1$ is $t$-clean. It remains to consider the case where $H'$ is of type $1$. Then $H'$ contains an $(S',\theta+1)$-tied rooted subdivided star $H''$ with root $r$ such that $(H''\cap S')\setminus \mathcal{L}(H'')\subseteq \{r\}$. Since $\theta\geq 2$, it follows that $H''$ has at least three vertices and $r$ is not a leaf of $H''$. If $H''$ is a path with ends $h_1,h_2\in S'$, then $\theta=2$ and $r\in S'$. This, along with the fact that $H''\cap S'$ is a $(2\sigma-1)$-stable set in $H''$, implies that $H=H''[h_1,r]$ has length at least $2\sigma$. But then $H$ is an $(S',\theta,\sigma)$-connectifier of type $2$ in $H''$, and so in $H_1'$. Also, if $H''$ is not a path, then $r$ is the unique branch vertex of $H''$. Again, since $H''\cap S'$ is a $(2\sigma-1)$-stable set in $H''$, there exists a stem $P$ of $H''$ such that every stem of $H''$ other than $P$ has length at least $\sigma$. Therefore, $H=H''\setminus (P\setminus \{r\})$ is an $(S',\theta,\sigma)$-connectifier of type $1$ in $H''$, and so in $H_1'$. This proves \eqref{getconnectifier}.\medskip
Let $H$ be as in \eqref{getconnectifier}. Let $X=H\cap S'$. Let $F'=\bigcup_{x\in X}F'_x$ and $\Xi=G_1[H\cup F']$. Then by \eqref{anticompletestars}, $F'$ is an induced subgraph of $\Xi$ isomorphic to $\theta S_{\delta,\lambda}$ and $F'\setminus X$ is anticomplete to $\Xi\setminus F'$. Also, we have $\Xi\setminus (F'\setminus X)=H$. But then by \eqref{getconnectifier}, $\Xi$ is a $\sigma$-connectification of $(F',X)$, and so $\Xi$ is an induced subgraph of $G$ isomorphic to a $\sigma$-connectification of $(\theta S_{\delta,\lambda},\mathcal{R}(\theta S_{\delta,\lambda}))$. This completes the proof of Lemma~\ref{blockconnectify}.
\end{proof}
We need one more definition before proving Theorem~\ref{mainconnectify}. For two rooted subdivided star forests $F_1$ and $F_2$, we say $F_2$ \textit{embeds in} $F_1$ if $\mathcal{R}(F_2)\subseteq \mathcal{R}(F_1)$ and there exists a collection $\mathcal{S}$ of stems of $F_1$ such that $F_2=F_1\setminus ((\bigcup_{P\in \mathcal{S}}P)\setminus \mathcal{R}(F_1))$.
Now we prove Theorem~\ref{mainconnectify}, which we restate:
\setcounter{section}{4}
\setcounter{theorem}{0}
\begin{theorem}
Let $\sigma\geq 1$ be an integer, let $F$ be a rooted subdivided star forest of size at least two and let $\pi:[|\mathcal{R}(F)|]\rightarrow \mathcal{R}(F)$ be a bijection. Then the class $\mathcal{C}_{\sigma, F, \mathcal{R}(F),\pi}$ is clean.
\end{theorem}
\begin{proof}
Let $F$ be of maximum degree $\delta\geq 0$, reach $\lambda\geq 0$ and size $\theta\geq 2$. For every $x\in \mathcal{R}(F)$, let $F_x$ be the component of $F$ with root $x$. Let $F^+=\theta S_{\delta+3,\lambda+1}$ be rooted (with its unique choice of roots). For every $y\in \mathcal{R}(F^+)$, let $F^+_y$ be the component of $F^+$ with root $y$. Then for every $x\in \mathcal{R}(F)$ and every $y\in \mathcal{R}(F^+)$, $F^+_{y}$ contains a copy $F^+_{x,y}$ of $F_{x}$ such that $F^+_{x,y}$ embeds in $F^+_{y}$. Now, for every choice of bijections $\pi:[\theta]\rightarrow \mathcal{R}(F)$ and $\pi^+:[\theta]\rightarrow \mathcal{R}(F^+)$, and every $\sigma$-connectification $\Xi^+$ of $(F^+,\mathcal{R}(F^+))$ with respect to $\pi^+$, let $$\Xi=(\Xi^+\setminus F^+)\cup \left(\bigcup_{i\in [\theta]}F^+_{\pi(i),\pi^+(i)}\right).$$
It follows that $\Xi$ is isomorphic to a $\sigma$-connectification of $(F,\mathcal{R}(F))$ with respect to $\pi$. In other words, for every bijection $\pi:[\theta]\rightarrow \mathcal{R}(F)$, every $\sigma$-connectification of $(F^+,\mathcal{R}(F^+))$ contains an induced subgraph isomorphic to a $\sigma$-connectification of $(F,\mathcal{R}(F))$ with respect to $\pi$. Therefore, we have $\mathcal{C}_{\sigma, F, \mathcal{R}(F),\pi}\subseteq \mathcal{C}_{\sigma, F^+, \mathcal{R}(F^+)}$. This, together with Lemma~\ref{blockconnectify}, implies that for every integer $t\geq 1$, we have $\mathcal{C}^t_{\sigma, F, \mathcal{R}(F),\pi}\subseteq \mathcal{C}^t_{\sigma, F^+, \mathcal{R}(F^+)}\subseteq \mathcal{B}_{k_2}$, where $k_2=k_2(t,\delta+3,\lambda+1,\sigma,\theta)$ is as in Lemma~\ref{blockconnectify}. Now the result follows from Theorem~\ref{noblock} and Lemma~\ref{cleanlemma}.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
We say that a graph $G$ is \emph{$H$-free} if $G$ does not contain any induced subgraph isomorphic to $H$. For $n\geq 1$, denote by $K_n$ the complete graph on $n$ vertices. A \textit{subdivision} of a graph $G$ is obtained by subdividing its edges into paths of arbitrary length (at least one). We say that $H$ is \textit{an ISK4 of a graph} $G$ if $H$ is an induced subgraph of $G$ and $H$ is a subdivision of $K_4$. A graph that does not contain any induced subdivision of $K_4$ is said to be \textit{ISK4-free}. For instance, series-parallel graphs and line graph of cubic graphs are ISK4-free (see \cite{LMT12}). A \textit{triangle} is a graph isomorphic to $K_3$.
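For instance, any graph consisting of a hole $H$ together with a vertex $x$ having exactly three neighbors on $H$ is a subdivision of $K_4$: the three neighbors of $x$ split $H$ into three paths, which are the subdivided edges of a triangle, while the three edges at $x$ are not subdivided. Hence no ISK4-free graph contains such a configuration as an induced subgraph.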
The \textit{chromatic number} of a graph $G$, denoted by $\chi(G)$, is the smallest integer $k$ such that $G$ can be partitioned into $k$ stable sets. Denote by $\omega(G)$ the size of a largest clique in $G$. A class of graphs $\cal G$ is \textit{$\chi$-bounded} with \textit{$\chi$-bounding function $f$} if, for every graph $G\in {\cal G}$, $\chi(G)\leq f(\omega(G))$. This concept was introduced by Gy\'arf\'as \cite{G87} as a natural extension of perfect graphs, which form a $\chi$-bounded class of graphs with $\chi$-bounding function $f(x)=x$. The question is: which induced subgraphs need to be forbidden to get a $\chi$-bounded class of graphs? One way to forbid induced structures is the following: fix a graph $H$, and forbid every induced subdivision of $H$. We denote by \emph{Forb$^*(H)$} the class of graphs that do not contain any induced subdivision of $H$. The class Forb$^*(H)$ has been proved to be $\chi$-bounded for a number of graphs $H$. Scott \cite{S97} proved that for any forest $F$, Forb$^*(F)$ is $\chi$-bounded. In the same paper, he conjectured that Forb$^*(H)$ is $\chi$-bounded for any graph $H$. Unfortunately, this conjecture has been disproved (see \cite{PK14}). However, there is no general conjecture describing the graphs $H$ for which Forb$^*(H)$ is $\chi$-bounded. This question is discussed in \cite{CELM14}. We focus on the case $H=K_4$. In this case, Forb$^*(K_4)$ is the class of ISK4-free graphs. Since $K_4$ is forbidden, proving that the class of ISK4-free graphs is $\chi$-bounded is equivalent to proving that there exists a constant $c$ such that for every ISK4-free graph $G$, $\chi(G)\leq c$. Remark that the existence of such a constant was pointed out in \cite{LMT12} as a consequence of a result in \cite{KO04}, but it is rather large ($\geq 2^{2^{2^{25}}}$) and very far from these two conjectures:
\begin{conjecture}[L{\'e}v{\^e}que, Maffray, Trotignon 2012 \cite{LMT12}] \label{conj:1}
If $G$ is an ISK4-free graph, then $\chi(G)\leq 4$.
\end{conjecture}
\begin{conjecture}[Trotignon, Vu{\v{s}}kovi{\'c} 2016 \cite{TV16}] \label{conj:2}
If $G$ is an $\{$ISK4, triangle$\}$-free graph, then $\chi(G)\leq 3$.
\end{conjecture}
No better upper bound is known even for the chromatic number of $\{$ISK4, triangle$\}$-free graphs. However, attempts were made toward these two conjectures. A \textit{hole} is an induced cycle on at least four vertices. For $n\geq 4$, we denote by $C_n$ the hole on $n$ vertices. A \textit{wheel} is a graph consisting of a hole $H$ and a vertex $x\notin H$ which is adjacent to at least three vertices on $H$. The \textit{girth} of a graph is the length of its smallest cycle. The optimal bound is known for the chromatic number of $\{$ISK4, wheel$\}$-free graphs and $\{$ISK4, triangle, $C_4\}$-free graphs:
\begin{theorem}[L{\'e}v{\^e}que, Maffray, Trotignon 2012 \cite{LMT12}] \label{Thm:wheel-free}
Every $\{$ISK4, wheel$\}$-free graph is $3$-colorable.
\end{theorem}
\begin{theorem}[Trotignon, Vu{\v{s}}kovi{\'c} 2016 \cite{TV16}] \label{Thm:girth5}
Every ISK4-free graph of girth at least $5$ contains a vertex of degree at most $2$ and is $3$-colorable.
\end{theorem}
The proof of Theorems \ref{Thm:wheel-free} and \ref{Thm:girth5} relies on structural decompositions. One way to prove Conjectures \ref{conj:1} and \ref{conj:2} is to find a vertex of small degree. This approach is successfully used in \cite{TV16} to prove Theorem \ref{Thm:girth5}. Two following conjectures will immediately imply the correctness of Conjectures \ref{conj:1} and \ref{conj:2} (definitions of $K_{3,3}$, prism and $K_{2,2,2}$ are given in Section \ref{S:2}) :
\begin{conjecture} [Trotignon '2015 \cite{T15}] \label{conj:3}
Every $\{$ISK4, $K_{3,3}$, prism, $K_{2,2,2}\}$-free graph contains a vertex of degree at most three.
\end{conjecture}
\begin{conjecture} [Trotignon, Vu{\v{s}}kovi{\'c} '2016 \cite{TV16}] \label{conj:4}
Every $\{$ISK4, $K_{3,3}$, triangle$\}$-free graph contains a vertex of degree at most two.
\end{conjecture}
However, we find a new bound for the chromatic number of ISK4-free graphs using another approach. Our main results are the following theorems:
\begin{theorem} \label{Thm:1}
Let $G$ be an $\{$ISK4, triangle$\}$-free graph. Then $\chi(G)\leq 4$.
\end{theorem}
\begin{theorem} \label{Thm:2}
Let $G$ be an ISK4-free graph. Then $\chi(G)\leq 24$.
\end{theorem}
Remark that the bounds we found are much closer to the bound of the conjectures than the known ones. The main tool that we use to prove these theorems is classical. It is often used to prove $\chi$-boundedness results relying on the layers of neighborhood. The paper is organized as follows. We first introduce some notations in Section \ref{S:2}. Sections \ref{S:3} and \ref{S:4} are devoted to the proof of Theorem \ref{Thm:1} and \ref{Thm:2}, respectively.
\section{Preliminaries} \label{S:2}
In this section, we present some notations and useful lemmas which will be used later in our proof. Let $G(V,E)$ be a graph, we denote by $|G|$ the number of its vertices. A vertex $v$ of the graph $G$ is \textit{complete} to a set of vertices $S\subseteq V(G)\setminus v$ if $v$ is adjacent to every vertex in $S$. A graph is called \textit{complete bipartite} (resp. \textit{complete tripartite}) if its vertex set can be partitioned into two (resp. three) non-empty stable sets that are pairwise complete to each other. If these two (resp. three) sets have size $p$, $q$ (resp. $p$, $q$, $r$) then the graph is denoted by $K_{p,q}$ (resp. $K_{p,q,r}$). A complete bipartite or tripartite graph is \textit{thick} if it contains a $K_{3,3}$. Given a graph $H$, the \textit{line graph} of $H$ is the graph $L(H)$ with vertex set $E(G)$ and edge set $\{ef:e\cap f\neq \emptyset\}$. A graph $P$ on $\{x_1,\ldots,x_n\}$ is a \textit{path} if $x_ix_j\in E(P)$ iff $|i-j|=1$ (this is often referred to \textit{induced path} in literature). The \textit{length} of a path is the number of its edges. The two \textit{ends} of $P$ are $x_1$ and $x_n$. The \textit{interior} of $P$ is $\{x_2,\ldots,x_{n-1}\}$. We denote by $x_iPx_j$ the subpath of $P$ from $x_i$ to $x_j$ and denote by $P^*$ the subpath of $P$ from $x_2$ to $x_{n-1}$ ($x_2Px_{n-1}$). A path $P$ is \textit{flat} in $G$ if all the interior vertices of $P$ are of degree $2$ in $G$. When $S\subseteq V(G)$, we denote by $N(S)$ the set of neighbors of $S$ in $G\setminus S$ and denote by $G|S$ the subgraph of $G$ induced by $S$. When $K\subseteq V(G)$ and $C\subseteq V(G)\setminus K$, we denote by $N_K(C)$ the set of neighbors of $C$ in $K$, or $N_K(C)=N(C)\cap K$.
A \textit{cutset} in a graph is a subset $S\subsetneq V(G)$ such that $G\setminus S$ is disconnected. For any $k\geq 0$, a $k$-cutset is a cutset of size $k$. A cutset $S$ is a \textit{clique cutset} if $S$ is a clique. A \textit{proper $2$-cutset} of a graph $G$ is a $2$-cutset $\{a,b\}$ such that $ab\notin E(G)$, $V(G)\setminus \{a,b\}$ can be partitioned into two non-empty sets $X$ and $Y$ so that there is no edge between $X$ and $Y$ and each of $G[X\cup\{a,b\}]$ and $G[Y\cup\{a,b\}]$ is not a path from $a$ to $b$. A \textit{prism} is a graph made of three vertex-disjoint paths $P_1 = a_1\ldots b_1$, $P_2 = a_2\ldots b_2$, $P_3 = a_3\ldots b_3$ of length at least $1$, such that $a_1a_2a_3$ and $b_1b_2b_3$ are triangles and no edges exist between the paths except these of the two triangles. Let $S=\{u_1,u_2,u_3,u_4\}$ induces a square (i.e. $C_4$) in $G$ with $u_1$, $u_2$, $u_3$, $u_4$ in this order along the square. A \textit{link} of $S$ is a path $P$ of $G$ with ends $p$, $p'$ such that either $p=p'$ and $N_S(p)=S$, or $N_S(p)=\{u_1,u_2\}$ and $N_S(p')=\{u_3,u_4\}$, or $N_S(p)=\{u_1,u_4\}$ and $N_S(p')=\{u_2,u_3\}$, and no interior vertex of $P$ has a neighbor in $S$. A \textit{rich square} is a graph $K$ that contains a square $S$ as an induced subgraph such that $K\setminus S$ has at least two components and every component of $K\setminus S$ is a link of $S$. For example, $K_{2,2,2}$ is a rich square (it is the smallest one).
We use in this paper some decomposition theorems from \cite{LMT12}:
\begin{lemma}[see Lemma 3.3 in \cite{LMT12}] \label{Lm:K33}
Let $G$ be an ISK4-free graph that contains $K_{3,3}$. Then either $G$ is a thick complete bipartite or complete tripartite graph, or $G$ has a clique cutset of size at most $3$.
\end{lemma}
\begin{lemma}[see Lemmas 6.1 and 7.2 in \cite{LMT12}] \label{Lm:prism,K222}
Let $G$ be an ISK4-free graph that contains a rich square or a prism. Then either $G$ is the line graph of a graph with maximum degree $3$, or $G$ is a rich square, or $G$ has a clique cutset of size at most $3$ or $G$ has a proper $2$-cutset.
\end{lemma}
\textit{Reducing} a flat path $P$ of length at least $2$ means deleting its interior and adding an edge between its two ends. The following lemma shows that a graph remains ISK4-free after reducing a flat path:
\begin{lemma} [see Lemma 11.1 in \cite{LMT12}] \label{Lm:reducing}
Let $G$ be an ISK4-free graph. Let $P$ be a flat path of length at least $2$ in $G$ and $G'$ be the graph obtained from $G$ by reducing $P$. Then $G'$ is ISK4-free.
\end{lemma}
\begin{proof}
Let $e$ be the edge of $G'$ that results from the reduction of $P$. Suppose that $G'$ contains an ISK4 $H$. Then $H$ must contain $e$, for otherwise $H$ is an ISK4 in $G$. Then replacing $e$ by $P$ in $H$ yields an ISK4 in $G$, contradiction.
\end{proof}
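The reduction operation itself is easy to implement. Below is a minimal sketch in Python using the {\tt networkx} package (the function name and the encoding of $P$ as its vertex sequence are ours):
\begin{verbatim}
import networkx as nx

def reduce_flat_path(G, path):
    # Reduce a flat path of length >= 2: delete its
    # interior and join its two ends by an edge.
    # path: the vertex sequence x_1, ..., x_n of P.
    H = G.copy()
    H.remove_nodes_from(path[1:-1])  # interior vertices
    H.add_edge(path[0], path[-1])
    return H
\end{verbatim}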
It is shown in \cite{LMT12} that clique cutsets and proper $2$-cutsets are useful for proving Conjecture \ref{conj:1} inductively. If we can find such a cutset in $G$, then we immediately have a bound for the chromatic number of $G$, since $\chi(G)\leq \max\{\chi(G_1),\chi(G_2)\}$, where $G_1$ and $G_2$ are the two blocks of the decomposition of $G$ with respect to that cutset (see the proof of Theorem~1.4 in \cite{LMT12}). Therefore, we only have to prove Conjecture \ref{conj:1} for the class of $\{$ISK4, $K_{3,3}$, prism, $K_{2,2,2}\}$-free graphs and prove Conjecture \ref{conj:2} for the class of $\{$ISK4, $K_{3,3}$, triangle$\}$-free graphs, since the existence of a $K_{3,3}$, a prism or a $K_{2,2,2}$ implies a good cutset by Lemmas \ref{Lm:K33} and \ref{Lm:prism,K222}.
We say that $S$ \textit{dominates} $C$ if $N_C(S)=C$. The \textit{distance} between two vertices $x$, $y$ in $V(G)$ is the length of a shortest path from $x$ to $y$ in $G$. For $u\in V(G)$ and an integer $i\geq 0$, denote by $N_i(u)$ the set of vertices of $G$ that are at distance exactly $i$ from $u$. Note that there are no edges between $N_i(u)$ and $N_j(u)$ for every $i,j$ such that $|i-j|\geq 2$.
\begin{lemma} \label{Lm:upstair-path}
Let $G$ be a graph, $u\in V(G)$ and $i$ be an integer $\geq 1$. Let $x,y$ be two distinct vertices in $N_i(u)$. Then, there exists a path $P$ in $G$ from $x$ to $y$ such that $V(P)\subseteq \{u\}\cup N_1(u)\cup \ldots\cup N_{i}(u)$ and $|V(P)\cap N_j(u)|\leq 2$ for every $j\in\{1,\ldots,i\}$.
\end{lemma}
\begin{proof}
We prove this by induction on $i$. If $i=1$, we have $x,y\in N_1(u)$. If $xy\in E(G)$, we choose $P=xy$; otherwise, we choose $P=xuy$. Suppose that the lemma is true up to $i=k$; we prove that it also holds for $i=k+1$. If $xy\in E(G)$, we choose $P=xy$. Otherwise, let $x',y'$ be vertices in $N_k(u)$ such that $x'x,y'y\in E(G)$. If $x'=y'$, we choose $P=xx'y$; otherwise we choose $P=P'\cup \{x,y\}$, where $P'$ is the path with ends $x'$ and $y'$ obtained by applying the induction hypothesis.
\end{proof}
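The inductive proof is constructive. The following Python sketch mirrors it (the term \textit{upstairs path} is introduced just below); here {\tt adj} maps each vertex to its set of neighbors, and {\tt layers[j]} is assumed to be a precomputed set $N_j(u)$, with {\tt layers[0]} $=\{u\}$:
\begin{verbatim}
def upstairs_path(adj, layers, x, y, i):
    # Vertex sequence connecting x, y in N_i(u)
    # through the upper layers only, with at most
    # two vertices per layer, as in the induction.
    if y in adj[x]:
        return [x, y]
    if i == 1:
        (u,) = tuple(layers[0])
        return [x, u, y]
    xp = next(v for v in adj[x] if v in layers[i - 1])
    yp = next(v for v in adj[y] if v in layers[i - 1])
    if xp == yp:
        return [x, xp, y]
    return [x] + upstairs_path(adj, layers,
                               xp, yp, i - 1) + [y]
\end{verbatim}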
Such a path $P$ in Lemma \ref{Lm:upstair-path} is called the \textit{upstairs path} of $\{x,y\}$. For three distinct vertices $x,y,z\in V(G)$, a graph $H$ is a \textit{confluence} of $\{x,y,z\}$ if it is one of the two following types:
\begin{itemize}
\item Type $1$:
\begin{itemize}
\item $V(H)=V(P_x)\cup V(P_y)\cup V(P_z)$.
\item $P_x$, $P_y$, $P_z$ are three paths having a common end $u$ and $P_x\setminus u$, $P_y\setminus u$, $P_z\setminus u$ are pairwise disjoint. The other ends of $P_x$, $P_y$, $P_z$ are $x$, $y$, $z$, respectively.
\item These are the only edges in $H$.
\end{itemize}
\item Type $2$:
\begin{itemize}
\item $V(H)=V(P_x)\cup V(P_y)\cup V(P_z)$.
\item $P_x$ is a path with two ends $x$ and $x'$.
\item $P_y$ is a path with two ends $y$ and $y'$.
\item $P_z$ is a path with two ends $z$ and $z'$.
\item $P_x$, $P_y$, $P_z$ are pairwise disjoint.
\item $x'y'z'$ is a triangle.
\item These are the only edges in $H$.
\end{itemize}
\end{itemize}
If $H$ is a confluence of Type $1$, the vertex $u$ is called the \textit{center} of $H$ and if $H$ is a confluence of Type $2$, the triangle $x'y'z'$ is called the \textit{center triangle} of $H$. Note that the length of $P_x$ can be $0$ when $x=u$ (for Type $1$) or $x=x'$ (for Type $2$).
\begin{lemma} \label{Lm:confluence}
Let $G$ be a graph, $u\in V(G)$ and $i$ be an integer $\geq 1$. Let $x,y,z$ be three distinct vertices in $N_i(u)$. Then, there exists a set $S\subseteq \{u\}\cup N_1(u)\cup \ldots\cup N_{i-1}(u)$ such that $G|(S\cup\{x,y,z\})$ is a confluence of $\{x,y,z\}$.
\end{lemma}
\begin{proof}
Let $G'$ be the subgraph of $G$ induced by $\{u\}\cup N_1(u)\cup \ldots\cup N_{i-1}(u)$. It is clear that $G'$ is connected. Let $P$ be a path in $G'$ from $x$ to $y$ and $Q$ be a path in $G'$ from $z$ to $P$ (one end of $Q$ is in $P$). We choose $P$ and $Q$ so as to minimize $|V(P\cup Q)|$. It is easy to see that $G|V(P\cup Q)$ is a confluence of $\{x,y,z\}$.
\end{proof}
The notions of upstairs path and confluence are very useful to find induced structures in our graph since they establish a way to connect two or three vertices of the same layer through only the upper layers.
\begin{lemma} \label{Lm:chromatic-layer}
Let $G$ be a graph and $u\in V(G)$. Then: $$\chi(G)\leq \max_{i\textrm{ odd}}\chi(G|N_i(u))+\max_{j \textrm{ even}}\chi(G|N_j(u)).$$
\end{lemma}
\begin{proof}
It is clear that in $G$, there are no edges between $N_i(u)$ and $N_j(u)$ if $i\neq j$ and $i,j$ are of the same parity. Therefore, we can color all the odd layers with $\max_{i\textrm{ odd}}\chi(G|N_i(u))$ colors and all the even layers with $\max_{j \textrm{ even}}\chi(G|N_j(u))$ other colors. The lemma follows.
\end{proof}
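The argument of Lemma \ref{Lm:chromatic-layer} translates into a simple layered coloring procedure. The sketch below (Python with {\tt networkx}; it assumes $G$ is connected and uses greedy coloring as a stand-in for an optimal coloring of each layer, so it only witnesses an upper bound) colors the odd layers and the even layers with disjoint palettes:
\begin{verbatim}
import networkx as nx

def layer_coloring(G, u):
    dist = nx.single_source_shortest_path_length(G, u)
    layers = {}
    for v, d in dist.items():
        layers.setdefault(d, []).append(v)
    coloring, used = {}, {0: 0, 1: 0}
    for d, verts in layers.items():
        c = nx.greedy_color(G.subgraph(verts))
        for v, col in c.items():
            coloring[v] = (d % 2, col)  # two palettes
        used[d % 2] = max(used[d % 2],
                          max(c.values()) + 1)
    return coloring, used[0] + used[1]
\end{verbatim}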
\section{Proof of Theorem \ref{Thm:1}} \label{S:3}
The next lemma shows that if there is a set $S$ that dominates some hole $C$, then there must exist some vertices in $S$ which have very few (one or two) neighbors in $C$.
\begin{lemma} \label{Lm:attach-hole}
Let $G$ be an $\{$ISK4, triangle, $K_{3,3}\}$-free graph and $C$ be a hole in $G$. Let $S\subseteq V(G)\setminus C$ be such that every vertex in $S$ has at least one neighbor in $C$ and $S$ dominates $C$. Then one of the following cases holds:
\begin{enumerate}
\item \label{Lm:attach-hole:1} There exist four distinct vertices $u_1$, $u_2$, $u_3$, $u_4$ in $S$ and four distinct vertices $v_1$, $v_2$, $v_3$, $v_4$ in $C$ such that for $i\in\{1,2,3,4\}$, $N_C(u_i)=\{v_i\}$.
\item \label{Lm:attach-hole:2} There exist three distinct vertices $u_1$, $u_2$, $u_3$ in $S$ and three distinct vertices $v_1$, $v_2$, $v_3$ in $C$ such that for $i\in\{1,2,3\}$, $N_C(u_i)=\{v_i\}$ and $v_1$, $v_2$, $v_3$ are pairwise non-adjacent.
\item \label{Lm:attach-hole:3} There exist three distinct vertices $u_1$, $u_2$, $u_3$ in $S$ and four distinct vertices $v_1$, $v_2$, $v_3$, $v_3'$ in $C$ such that $N_C(u_1)=\{v_1\}$, $N_C(u_2)=\{v_2\}$, $N_C(u_3)=\{v_3,v_3'\}$ and $v_1$, $v_3$, $v_2$, $v_3'$ appear in this order along $C$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove Lemma \ref{Lm:attach-hole} by induction on the length of hole $C$ for every $\{$ISK4, triangle, $K_{3,3}\}$-free graph. First, suppose that the length of $C$ is $4$ and $C=c_0c_1c_2c_3$. Since $G$ is triangle-free, a vertex in $S$ can only have one or two neighbors in $C$. We consider two cases:
\begin{itemize}
\item If some vertex $u\in S$ has two neighbors in $C$, w.l.o.g. suppose $N_C(u)=\{c_0,c_2\}$. Since $S$ dominates $C$, there exist vertices $v$, $w\in S$ such that $vc_1, wc_3\in E$. If $v=w$ then $\{u,v\}\cup C$ induces a $K_{3,3}$ (if $uv\in E$) or an ISK4 (if $uv\notin E$), contradiction. Therefore, $v\neq w$ and $u,v,w$ are three vertices satisfying output \ref{Lm:attach-hole:3} of the lemma.
\item If every vertex in $S$ has exactly one neighbor in $C$, output \ref{Lm:attach-hole:1} of the lemma holds.
\end{itemize}
Now, we may assume that $|C|\geq 5$ and the lemma is true for every hole of length at most $|C|-1$. A vertex $u\in S$ is a \textit{bivertex} if $N_C(u)=\{u',u''\}$ and the two paths $P_1$, $P_2$ from $u'$ to $u''$ in $C$ are of length at least $3$. Suppose that $S$ contains such a bivertex $u$. Let $C_1=P_1\cup \{u\}$ and $C_2=P_2\cup \{u\}$; note that $|C_1|,|C_2|<|C|$. Consider the graph $G'$ obtained from $G$ as follows: $V(G')=V(G)\cup \{a,b,c\}$, $E(G')=E(G)\cup \{au,bu',cu''\}$. It is clear that $G'$ is $\{$ISK4, triangle, $K_{3,3}\}$-free. Let $S_1=\{v\in S\setminus u|N_{C_1}(v)\neq \emptyset\}\cup \{a,b,c\}$ and $S_2=\{v\in S\setminus u|N_{C_2}(v)\neq \emptyset\}\cup \{a,b,c\}$. By applying the induction hypothesis on $S_1$ and $C_1$, we obtain that there is some vertex $x\in S$ such that $x$ has exactly one neighbor in $P_1$, which is in $P_1^*$ ($x$ can be adjacent to $u$). We claim that $x$ has exactly one neighbor in $C$. Indeed, if $x$ has exactly one neighbor $x'$ in $P_2^*$, then $C\cup \{x,u\}$ induces an ISK4 (if $xu\notin E(G)$) or $C_1\cup \{x\}\cup Q$ induces an ISK4 (if $xu\in E(G)$), where $Q$ is the shorter of the two subpaths $x'P_2u'$ and $x'P_2u''$ of $C$, contradiction. If $x$ has at least two neighbors in $P_2^*$, let $x'$, $x''$ be the neighbors of $x$ closest to $u'$, $u''$ on $P_2^*$, respectively. Then $C_1\cup \{x\}\cup x'P_2u'\cup x''P_2u''$ induces an ISK4 (if $xu\notin E(G)$) or $C_1\cup \{x\}\cup x'P_2u'$ induces an ISK4 (if $xu\in E(G)$), contradiction. So, $x$ has no neighbor in $P_2^*$ and has exactly one neighbor in $C$, as claimed. Similarly, by applying the induction hypothesis on $S_2$ and $C_2$, we know that there is some vertex $y\in S$ such that $y$ has exactly one neighbor in $P_2^*$ and this is also its only neighbor in $C$. Now, $\{x,y,u\}$ satisfies output \ref{Lm:attach-hole:3} of the lemma. Hence, we may assume that $S$ contains no bivertex.
Suppose that there is some vertex $u$ in $S$ which has at least four neighbors in $C$. Let $N_C(u)=\{u_0,\ldots,u_k\}$, where $u_0,\ldots,u_k$ ($k\geq 3$) appear in this order along $C$. Let $P_u(i,i+3)$ be the path of $C$ from $u_i$ to $u_{i+3}$ which contains $u_{i+1}$ and $u_{i+2}$, and define $\amp(u,C)=\max_{i=0}^k |P_u(i,i+3)|$ (indices taken modulo $k+1$). Note that this notion is defined only for a vertex with at least four neighbors in $C$. Let $v\in S$ be such that $\amp(v,C)$ is maximum. W.l.o.g. suppose that $P_v(0,3)$ is the longest path among all paths of the form $P_v(i,i+3)$. Let $P_0$, $P_1$, $P_2$ be the subpaths of $P_v(0,3)$ from $v_0$ to $v_1$, $v_1$ to $v_2$, $v_2$ to $v_3$, respectively. Let $C_0=\{v\}\cup P_0$, $C_1=\{v\}\cup P_1$ and $C_2=\{v\}\cup P_2$. Consider the graph $G'$ obtained from $G$ as follows: $V(G')=V(G)\cup \{a,b,c\}$, $E(G')=E(G)\cup \{av,bv_0,cv_1\}$. It is clear that $G'$ is $\{$ISK4, triangle, $K_{3,3}\}$-free. Let $S_0=\{u\in S\setminus v|N_{C_0}(u)\neq \emptyset\}\cup \{a,b,c\}$. By applying the induction hypothesis on $S_0$ and $C_0$, we obtain that there is some vertex $x\in S$ such that $x$ has exactly one neighbor $x_0$ in $P_0$, which is in $P_0^*$ ($x$ can be adjacent to $v$). We claim that $x$ has exactly one neighbor in $C$. Suppose that $x$ has some neighbor in $P_1$. Let $x_1$, $x_2$ be the neighbors of $x$ in $P_1$ which are closest to $v_1$ and $v_2$, respectively ($x_1$ and $x_2$ could be equal). Then $\{x,v\}\cup P_0\cup v_1P_1x_1\cup v_2P_1x_2$ induces an ISK4 (if $xv\notin E(G)$) or $\{x,v\}\cup P_0\cup v_1P_1x_1$ induces an ISK4 (if $xv\in E(G)$), contradiction. Therefore, $x$ has no neighbor in $P_1$. Suppose that $x$ has some neighbor in $P_2$, and let $x_1$ be the neighbor of $x$ in $P_2$ which is closest to $v_2$. Let $Q$ be the path from $x_0$ to $x_1$ in $C$ which contains $v_1$. Then $\{x,v\}\cup Q\cup v_0P_0x_0$ induces an ISK4 (if $xv\notin E(G)$) or $\{x,v\}\cup Q$ induces an ISK4 (if $xv\in E(G)$), contradiction. Hence, $x$ has no neighbor in $P_2$. Now if $x$ has at least four neighbors in $C$, then $\amp(x,C)>\amp(v,C)$, a contradiction to the choice of $v$. Hence, $x$ can have at most one neighbor in the path from $v_0$ to $v_3$ in $C$ which does not contain $v_1$. Suppose $x$ has one neighbor $x'$ in that path. By the assumption that we have no bivertex, $x'v_0, v_0x_0\in E(G)$. Let $Q$ be the path from $v_{-1}$ to $x'$ in $C$ which does not contain $v_0$. Then $\{x,x',v_0,x_0,v\}\cup Q\cup v_1P_0x_0$ induces an ISK4 (if $xv\notin E(G)$) or $\{x,x',v_0,x_0,v\}\cup Q$ induces an ISK4 (if $xv\in E(G)$), contradiction. Hence, $x_0$ is the only neighbor of $x$ in $C$, as claimed. Similarly, we can prove that there exist two vertices $y,z\in S$ which have exactly one neighbor in $C$, lying in $P_1^*$ and $P_2^*$, respectively. Note that the proof for $y$ is not formally symmetric to the one for $x$ and $z$, but the argument is the same. In particular, a vertex $y$ with a unique neighbor in $P_1^*$, no neighbor in $P_0$, $P_2$ and at least four neighbors in $C$ also yields a contradiction to the maximality of $\amp(v,C)$. Therefore, $\{x,y,z\}$ satisfies output \ref{Lm:attach-hole:2} of the lemma. Now, we can assume that no vertex in $S$ has at least four neighbors in $C$.
Hence, every vertex in $S$ either has exactly one neighbor in $C$, or has exactly two neighbors in $C$ and is not a bivertex. Suppose there is some vertex $u\in S$ that has two neighbors $u'$, $u''$ on $C$. Since $u$ is not a bivertex and $G$ is triangle-free, $u'$ and $u''$ have a common neighbor $x$ on $C$, i.e., $xu',xu''\in E$. Let $v\in S$ be a vertex adjacent to $x$. If $v$ has another neighbor $x'$ in $C$, then $x'$ must be adjacent to $u'$ or $u''$, since $v$ is not a bivertex. So, $\{u,v,x',u',x,u''\}$ induces an ISK4 (if $uv\in E(G)$) or $\{u,v\}\cup C$ induces an ISK4 (if $uv\notin E(G)$), contradiction. So, $x$ is the only neighbor of $v$ in $C$. Hence, if at least one vertex of $S$ has two neighbors on $C$, output \ref{Lm:attach-hole:3} holds. If every vertex of $S$ has exactly one neighbor in $C$, output \ref{Lm:attach-hole:1} holds, which completes the proof.
\end{proof}
\begin{lemma} \label{Lm:hole}
Let $G$ be an $\{$ISK4, triangle, $K_{3,3}\}$-free graph and $u\in V(G)$. For every $i\geq 1$, $G|N_i(u)$ does not contain any hole.
\end{lemma}
\begin{proof}
Suppose for some $i$, $G|N_i(u)$ contains a hole $C$. For every vertex $v\in C$, there exists a vertex $v'\in N_{i-1}(u)$ such that $vv'\in E$. Hence there exists a subset $S\subseteq N_{i-1}(u)$ such that $S$ dominates $C$. Let us apply Lemma \ref{Lm:attach-hole} for $S$ and $C$:
\begin{itemize}
\item If output \ref{Lm:attach-hole:1} or \ref{Lm:attach-hole:2} of Lemma \ref{Lm:attach-hole} holds, then there exist three distinct vertices $u_1$, $u_2$, $u_3$ in $S$ and three distinct vertices $v_1$, $v_2$, $v_3$ in $C$ such that $N_C(u_j)=\{v_j\}$ for $j\in\{1,2,3\}$. By Lemma \ref{Lm:confluence}, since $G$ is triangle-free, there exists a confluence $F$ of $\{u_1,u_2,u_3\}$ of Type $1$, so $F\cup C$ induces an ISK4, contradiction.
\item If output \ref{Lm:attach-hole:3} of Lemma \ref{Lm:attach-hole} holds, then there exist two distinct vertices $u_1$, $u_2$ in $S$ and three distinct vertices $v_1$, $v_2$, $v_2'$ in $C$ such that $N_C(u_1)=\{v_1\}$ and $N_C(u_2)=\{v_2,v_2'\}$. By Lemma \ref{Lm:upstair-path}, there exists an upstairs path $P$ of $\{u_1,u_2\}$, so $P\cup C$ induces an ISK4, contradiction.
\end{itemize}
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm:1}]
We prove the theorem by induction on the number of vertices of $G$. Suppose that $G$ has a clique cutset $K$. Then $G\setminus K$ can be partitioned into two sets $X$, $Y$ such that there is no edge between them. By the induction hypothesis, $\chi(G|(X\cup K))\leq 4$ and $\chi(G|(Y\cup K))\leq 4$, therefore $\chi(G)\leq \max\{\chi(G|(X\cup K)),\chi(G|(Y\cup K))\}\leq 4$. Hence we may assume that $G$ has no clique cutset. If $G$ contains a $K_{3,3}$, then by Lemma \ref{Lm:K33} and since $G$ is triangle-free, $G$ is a thick complete bipartite graph and $\chi(G)\leq 2$. So we may assume that $G$ contains no $K_{3,3}$. By Lemma \ref{Lm:hole}, for every $u\in V(G)$ and every $i\geq 1$, $G|N_i(u)$ is a forest, hence $\chi(G|N_i(u))\leq 2$. By Lemma \ref{Lm:chromatic-layer}, $\chi(G)\leq 4$, which completes the proof.
\end{proof}
\section{Proof of Theorem \ref{Thm:2}} \label{S:4}
A \textit{boat} is a graph consisting of a hole $C$ and a vertex $v$ that has exactly four consecutive neighbors in $C$ ($N_C(v)$ induces a $C_4$ if $|C|=4$ or a $P_4$ if $|C|\geq 5$). A \textit{4-wheel} is a particular boat whose hole is of length $4$. Let ${\cal C}_1$ be the class of $\{$ISK4, $K_{3,3}$, prism, boat$\}$-free graphs, ${\cal C}_2$ be the class of $\{$ISK4, $K_{3,3}$, prism, $4$-wheel$\}$-free graphs and ${\cal C}_3$ be the class of $\{$ISK4, $K_{3,3}$, prism, $K_{2,2,2}\}$-free graphs. Remark that ${\cal C}_1\subsetneq {\cal C}_2\subsetneq {\cal C}_3\subsetneq$ the class of ISK4-free graphs.
\begin{lemma} \label{Lm:boat-free}
Let $G$ be a graph in ${\cal C}_1$. Then $\chi(G)\leq 6$.
\end{lemma}
\begin{proof}
We prove first the following claim.
\begin{claim} \label{Cl:no triangle and C_4}
Let $u\in V(G)$ and $i\geq 1$. Then $G|N_i(u)$ contains no triangle and no $C_4$.
\end{claim}
\begin{proof}
Suppose $G|N_i(u)$ contains a triangle $abc$. No vertex is complete to $abc$ since $G$ is $K_4$-free. Suppose that there is some vertex $x\in N_{i-1}(u)$ which has exactly two neighbors in the triangle, w.l.o.g. assume that they are $a$ and $b$. Let $y$ be some vertex in $N_{i-1}(u)$ adjacent to $c$ and $P$ be an upstairs path of $\{x,y\}$. If $y$ has exactly one neighbor in $abc$ (which is $c$), then $P\cup \{a,b,c\}$ induces an ISK4, contradiction. Hence $y$ must have another neighbor in $abc$, say $a$ up to symmetry. In this case, $P\cup \{a,b,c\}$ induces a boat, contradiction. Then every vertex in $N_{i-1}(u)$ has at most one neighbor in $abc$. Since every vertex of $abc$ has a neighbor in $N_{i-1}(u)$, there are three vertices $x,y,z\in N_{i-1}(u)$ such that $N_{abc}(x)=\{a\}$, $N_{abc}(y)=\{b\}$ and $N_{abc}(z)=\{c\}$. By Lemma \ref{Lm:confluence}, there exists a confluence $S$ of $\{x,y,z\}$. If $S$ is of Type $1$, then $S\cup \{a,b,c\}$ induces an ISK4, contradiction. If $S$ is of Type $2$, then $S\cup \{a,b,c\}$ induces a prism, contradiction. Hence, $G|N_i(u)$ contains no triangle.
Suppose $N_i(u)$ contains a $C_4$, namely $abcd$. Every vertex has at most two neighbors in $abcd$: four neighbors would yield a $4$-wheel, which is a boat, and three neighbors would yield an ISK4. Suppose there is some vertex $x\in N_{i-1}(u)$ which has exactly two non-adjacent neighbors in $\{a,b,c,d\}$, say $N_{abcd}(x)=\{a,c\}$. Let $y$ be some vertex in $N_{i-1}(u)$ adjacent to $d$ and $P$ be an upstairs path of $\{x,y\}$. If $yb\in E$, then $\{x,y,a,b,c,d\}$ induces an ISK4 (if $xy\notin E$) or a $K_{3,3}$ (if $xy\in E$), contradiction. If $ya\in E$, then $P\cup \{a,c,d\}$ induces an ISK4, contradiction. By symmetry, $yc\notin E$ as well, so $N_{abcd}(y)=\{d\}$. In this case $P\cup \{a,b,c,d\}$ induces an ISK4, contradiction. Therefore, no vertex in $N_{i-1}(u)$ has two non-adjacent neighbors in $abcd$. Now, suppose that there is some vertex $x\in N_{i-1}(u)$ which has exactly two consecutive neighbors $\{a,b\}$ in $abcd$. Let $y$ be some vertex in $N_{i-1}(u)$ adjacent to $d$ and $P$ be an upstairs path of $\{x,y\}$. If $y$ is adjacent to $c$, then $P\cup \{a,b,c,d\}$ induces a prism, contradiction. If $N_{abcd}(y)=\{d\}$, then $P\cup \{a,b,c,d\}$ induces an ISK4, contradiction. Hence $N_{abcd}(y)=\{a,d\}$. Let $z$ be some vertex in $N_{i-1}(u)$ adjacent to $c$, $P_{xz}$ be an upstairs path of $\{x,z\}$ and $P_{yz}$ be an upstairs path of $\{y,z\}$. If $zb\in E$, then $P_{yz}\cup \{a,b,c,d\}$ induces a prism, contradiction. If $zd\in E$, then $P_{xz}\cup \{a,b,c,d\}$ induces a prism, contradiction. Hence $N_{abcd}(z)=\{c\}$. In this case, $P_{xz}\cup \{a,b,c,d\}$ induces an ISK4, contradiction. Therefore, no vertex in $N_{i-1}(u)$ has two neighbors in $abcd$. So, there are three vertices $x,y,z\in N_{i-1}(u)$ such that $N_{abcd}(x)=\{a\}$, $N_{abcd}(y)=\{b\}$, $N_{abcd}(z)=\{c\}$. By Lemma \ref{Lm:confluence}, there exists a confluence $S$ of $\{x,y,z\}$. If $S$ is of Type $1$, $S\cup \{a,b,c,d\}$ induces an ISK4, contradiction. If $S$ is of Type $2$, $S\cup \{a,b,c\}$ induces an ISK4, contradiction. Therefore, $G|N_i(u)$ contains no $C_4$.
\end{proof}
By Claim \ref{Cl:no triangle and C_4}, the girth of $G|N_i(u)$ is at least $5$ for every $i\geq 1$. By Theorem \ref{Thm:girth5}, $\chi(G|N_i(u))\leq 3$. By Lemma \ref{Lm:chromatic-layer}, $\chi(G)\leq 6$, which completes the proof.
\end{proof}
\begin{lemma} \label{Lm:4-wheel-free}
Let $G$ be a graph in ${\cal C}_2$. Then $\chi(G)\leq 12$.
\end{lemma}
\begin{proof}
We first prove that for any $u\in V(G)$ and $i\geq 1$, $G|N_i(u)$ contains no boat. We may assume that $i\geq 2$: indeed, $G|N_1(u)$ is triangle-free (a triangle in $N_1(u)$ together with $u$ would form a $K_4$) and every boat contains a triangle, so the conclusion holds for $i=1$. Suppose for contradiction that $G|N_i(u)$ contains a boat consisting of a hole $C$ and a vertex $x$ that has four neighbors $a$, $b$, $c$, $d$ in this order on $C$. Since $G$ contains no $4$-wheel, we can assume that $|C|\geq 5$ and $\{a,b,c,d\}$ induces a $P_4$. Let $P$ be the path from $a$ to $d$ in $C$ which does not go through $b$.
\begin{claim} \label{Cl:bc}
No vertex in $N_{i-1}(u)$ is adjacent to both $b$ and $c$.
\end{claim}
\begin{proof}
Suppose there is a vertex $y\in N_{i-1}(u)$ adjacent to both $b$ and $c$. Since $\{x,y,b,c\}$ does not induce a $K_4$, $xy\notin E$. If $ya\in E$, $\{a,b,c,x,y\}$ induces a $4$-wheel, contradiction. Hence, $ya\notin E$. We also have $yd\notin E$ by symmetry. We claim that $N_{C}(y)=\{b,c\}$. Suppose that $y$ has some neighbor in $P^*$. If $y$ has exactly one neighbor in $P^*$, then $\{y\}\cup C$ induces an ISK4, contradiction. If $y$ has exactly two consecutive neighbors in $P^*$, then $C\cup \{x,y\}\setminus \{c\}$ induces a prism, contradiction. If $y$ has at least three neighbors in $P^*$, or two neighbors in $P^*$ that are not consecutive, then let $z$ be the one closest to $a$ and $t$ be the one closest to $d$. Then $\{x,y,b\}\cup zPa\cup tPd$ induces an ISK4, contradiction. So $N_{C}(y)=\{b,c\}$. Let $z$ be a vertex in $N_{i-1}(u)$ which has a neighbor in $P^*$ and $P_{yz}$ be an upstairs path of $\{y,z\}$. If $z$ has exactly one neighbor in $C$, then $P_{yz}\cup C$ induces an ISK4, contradiction. If $z$ has exactly two consecutive neighbors in $C$, then $P_{yz}\cup C$ induces a prism, contradiction. If $z$ has at least three neighbors in $C$, or two neighbors in $C$ which are not consecutive, let $t,w$ be the ones closest to $b,c$ in $C$, respectively, and let $Q$ be the path from $t$ to $w$ in $C$ which contains $b$. Then $P_{yz}\cup Q$ induces an ISK4, contradiction.
\end{proof}
By Claim \ref{Cl:bc}, let $y,z$ be two distinct vertices in $N_{i-1}(u)$ such that $yb,zc\in E$ and $P_{yz}$ be an upstairs path of $\{y,z\}$.
\begin{claim}
$xy,xz\in E$.
\end{claim}
\begin{proof}
Suppose $xy\notin E$. Then $xz\notin E$, otherwise $P_{yz}\cup \{x,b,c\}$ induces an ISK4. Let $t\in N_{i-1}(u)$ be such that $tx\in E$, and let $P_{ty}$ and $P_{tz}$ be upstairs paths of $\{t,y\}$ and $\{t,z\}$, respectively. If $tb\in E$, then $P_{tz}\cup \{x,b,c\}$ induces an ISK4, contradiction. If $tc\in E$, then $P_{ty}\cup \{x,b,c\}$ induces an ISK4, contradiction. So $N_{xbc}(t)=\{x\}$. By Lemma \ref{Lm:confluence}, let $S$ be a confluence of $\{y,z,t\}$. If $S$ is of Type $1$, $S\cup \{x,b,c\}$ induces an ISK4, contradiction. If $S$ is of Type $2$, $S\cup \{x,b,c\}$ induces a prism, contradiction. Then $xy\in E$. Symmetrically, $xz\in E$.
\end{proof}
\begin{claim} \label{Cl:only}
$N_C(y)=\{b\}$ and $N_C(z)=\{c\}$.
\end{claim}
\begin{proof}
We prove only $N_C(y)=\{b\}$; the other conclusion is proved similarly. First, $ya,yc\notin E$, otherwise $\{x,y,a,b\}$ or $\{x,y,b,c\}$ induces a $K_4$. We also have $yd\notin E$, otherwise $\{x,y,b,c,d\}$ induces a $4$-wheel. If $y$ has some neighbor in $P^*$, let $t$ be the one closest to $a$. In this case, $tPa\cup \{x,y,b\}$ induces an ISK4, contradiction. Hence $N_C(y)=\{b\}$.
\end{proof}
Let $t$ be a vertex in $N_{i-1}(u)$ such that $ta\in E$ and $P_{yt}$ be an upstairs path of $\{y,t\}$. By Claim \ref{Cl:only}, $tb,tc\notin E$. We have $tx\in E$, otherwise $P_{yt}\cup \{x,a,b\}$ induces an ISK4. Suppose that $N_C(t)=\{a\}$. There exists a confluence $S$ of $\{t,y,z\}$ by Lemma \ref{Lm:confluence}. If $S$ is of Type $1$, $S\cup C$ induces an ISK4, contradiction. If $S$ is of Type $2$, $S\cup \{a,b,c\}$ induces an ISK4, contradiction. Hence, $t$ must have some neighbor in $P\setminus \{a\}$, let $w$ be the one closest to $d$ along $P$ and $P_w$ be the path from $a$ to $w$ in $C$ which contains $b$.
\begin{claim}
$t$ has some neighbor in $P_{yz}$.
\end{claim}
\begin{proof}
Suppose that $t$ has no neighbor in $P_{yz}$. Because $G|(\{u\}\cup N_1(u)\cup\ldots\cup N_{i-2}(u))$ is connected, there exists a path $Q$ from $t$ to some $t'$ such that $Q\setminus \{t\}\subseteq \{u\}\cup N_1(u)\cup\ldots\cup N_{i-2}(u)$ and $t'$ is the only vertex in $Q$ which has some neighbor in $P_{yz}$. If $t'$ has exactly one neighbor in $P_{yz}$, then $P_w\cup Q\cup P_{yz}$ induces an ISK4, contradiction. If $t'$ has exactly two consecutive neighbors in $P_{yz}$, then $Q\cup P_{yz}\cup \{a,b,c\}$ induces an ISK4, contradiction. If $t'$ has at least three neighbors in $P_{yz}$, or two neighbors in $P_{yz}$ which are not consecutive, let $y'$, $z'$ be the ones closest to $y$, $z$, respectively; then $Q\cup P_w\cup y'P_{yz}y\cup z'P_{yz}z$ induces an ISK4, contradiction. Hence $t$ must have some neighbor in $P_{yz}$.
\end{proof}
Let $y',z'\in P_{yz}$ be such that $y'y,z'z\in E$. Since $t\in N_{i-1}(u)$, $N_{P_{yz}}(t)\subseteq \{y,z,y',z'\}$. If $t$ has exactly one neighbor in $P_{yz}$, then $\{t\}\cup P_{yz}\cup P_w$ induces an ISK4, contradiction. If $t$ has exactly two neighbors in $P_{yz}$, then $\{t,a,b,c\}\cup P_{yz}$ induces an ISK4, contradiction. If $t$ has exactly three neighbors in $P_{yz}$, then $\{t,b,c\}\cup P_{yz}$ induces an ISK4, contradiction. Hence, $N_{P_{yz}}(t)=\{y,z,y',z'\}$. In particular, $ty\in E$ and $\{x,t,y,a,b\}$ induces a $4$-wheel, contradiction. Hence, $G|N_i(u)$ is boat-free.
Now, for every $i\geq 1$, $G|N_i(u)\in {\cal C}_1$. By Lemma \ref{Lm:boat-free}, $\chi(G|N_i(u))\leq 6$. By Lemma \ref{Lm:chromatic-layer}, $\chi(G)\leq 12$, completing the proof.
\end{proof}
\begin{lemma} \label{Lm:K222-free}
Let $G$ be a graph in ${\cal C}_3$. Then $\chi(G)\leq 24$.
\end{lemma}
\begin{proof}
Let $u\in V(G)$ and $i\geq 1$. We claim that $G|N_i(u)$ contains no $4$-wheel. Suppose that $G|N_i(u)$ contains a $4$-wheel consisting of a hole $abcd$ and a vertex $x$ complete to $abcd$. By an argument similar to the $C_4$ case in the proof of Lemma \ref{Lm:boat-free}, the hole $abcd$ cannot be dominated only by vertices in $N_{i-1}(u)$ which have one or two neighbors in $abcd$. Hence, there exists some vertex $y\in N_{i-1}(u)$ complete to $abcd$. It is clear that $xy\notin E$, otherwise $\{x,y,a,b\}$ induces a $K_4$. Now, $\{x,y,a,b,c,d\}$ induces a $K_{2,2,2}$, contradiction. So, $G|N_i(u)$ contains no $4$-wheel. By Lemma \ref{Lm:4-wheel-free}, $\chi(G|N_i(u))\leq 12$. By Lemma \ref{Lm:chromatic-layer}, we have $\chi(G)\leq 24$, which proves the lemma.
\end{proof}
Before the main proof, we establish bounds on the chromatic number of some basic graphs.
\begin{lemma}\label{Lm:line-graph}
Let $G$ be the line graph of a graph $H$ with maximum degree three. Then $\chi(G)\leq 4$.
\end{lemma}
\begin{proof}
To prove that $G$ is $4$-colorable, we only need to prove that $H$ is $4$-edge-colorable. But since the maximum degree of $H$ is three, this is a direct consequence of Vizing's theorem (see \cite{BM76}).
\end{proof}
\begin{lemma}\label{Lm:rich-square}
Let $G$ be a rich square. Then $\chi(G)\leq 4$.
\end{lemma}
\begin{proof}
By the definition of a rich square, there is a square $S=\{u_1,u_2,u_3,u_4\}$ in $G$ such that every component of $G\setminus S$ is a link of $S$. We exhibit a $4$-coloring of $G$ as follows. Assign color $1$ to $\{u_1,u_3\}$ and color $2$ to $\{u_2,u_4\}$. Let $P$ be a component of $G\setminus S$ with two ends $p$, $p'$. If $p=p'$, give it color $3$. If $p\neq p'$, give $p$ and $p'$ colors $3$ and $4$, respectively, and assign colors $1$ and $2$ alternately to the interior vertices of $P$.
\end{proof}
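The coloring described in the proof can be written down explicitly. A minimal sketch in Python follows; the encoding of the input is ours (each link is given as its vertex sequence from $p$ to $p'$, reduced to a single vertex when $p=p'$):
\begin{verbatim}
def color_rich_square(square, links):
    # 4-coloring of a rich square, following
    # the proof above.
    u1, u2, u3, u4 = square
    col = {u1: 1, u3: 1, u2: 2, u4: 2}
    for P in links:
        if len(P) == 1:
            col[P[0]] = 3      # p = p', complete to S
        else:
            col[P[0]], col[P[-1]] = 3, 4
            for k, v in enumerate(P[1:-1]):
                col[v] = 1 if k % 2 == 0 else 2
    return col
\end{verbatim}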
\begin{proof}[Proof of Theorem \ref{Thm:2}]
We prove the theorem by induction on the number of vertices of $G$. Suppose that $G$ has a clique cutset $K$. Then $G\setminus K$ can be partitioned into two sets $X$, $Y$ such that there are no edges between them. By the induction hypothesis, $\chi(G|(X\cup K))\leq 24$ and $\chi(G|(Y\cup K))\leq 24$, therefore $\chi(G)\leq \max\{\chi(G|(X\cup K)),\chi(G|(Y\cup K))\}\leq 24$. Hence we may assume that $G$ has no clique cutset. If $G$ contains a $K_{3,3}$, then by Lemma \ref{Lm:K33}, $G$ is a thick complete bipartite or complete tripartite graph and $\chi(G)\leq 3$. So we may assume that $G$ contains no $K_{3,3}$.
Suppose that $G$ has a proper $2$-cutset $\{a,b\}$. So $G\setminus \{a,b\}$ can be partitioned into two sets $X$, $Y$ such that there is no edge between them. Since $G$ has no clique cutset, it is $2$-connected, so there exists a path $P_Y$ with ends $a$ and $b$ and with interior in $Y$. Let $G_X'$ be the subgraph of $G$ induced by $X\cup P_Y$. Note that $P_Y$ is a flat path in $G_X'$. Let $G_X''$ be obtained from $G_X'$ by reducing $P_Y$. Define a graph $G_Y''$ similarly. Since $G_X'$ is an induced subgraph of $G$, it contains no ISK4. So, by Lemma \ref{Lm:reducing}, $G_X''$ contains no ISK4. The same holds for $G_Y''$. By the induction hypothesis, $G_X''$ and $G_Y''$ admit $24$-colorings. Since $a$ and $b$ receive different colors in both colorings, we can combine them so that they coincide on $\{a,b\}$ and obtain a $24$-coloring of $G$. Now, we may assume that $G$ has no proper $2$-cutset. If $G$ contains a $K_{2,2,2}$ (rich square) or a prism, then by Lemma \ref{Lm:prism,K222}, $G$ is the line graph of a graph with maximum degree $3$, or a rich square. By Lemmas \ref{Lm:line-graph} and \ref{Lm:rich-square}, $\chi(G)\leq 4<24$. Therefore, we may assume that $G$ contains neither a prism nor a $K_{2,2,2}$. So $G\in {\cal C}_3$ and $\chi(G)\leq 24$ by Lemma \ref{Lm:K222-free}.
\end{proof}
\section{Conclusion}
Not only is the bound we obtain in Theorem \ref{Thm:1} very close to the one stated in Conjecture \ref{conj:2}, but the simple structure of each layer is also interesting in its own right. We believe that this way of looking at our class is a promising route towards settling Conjecture \ref{conj:2}. For Theorem \ref{Thm:2}, we are convinced that the bound of $24$ could be slightly improved by this method, by examining each layer more carefully and excluding more structures, but it seems hard to reach the bound stated in Conjecture \ref{conj:1}.
\subsection*{Acknowledgement} The author would like to thank Nicolas Trotignon for his help and useful discussion.
\bibliographystyle{abbrv}
\section{Introduction}
\IEEEPARstart{I}n the big data era, the world has witnessed the explosive growth of data-intensive applications.
IDC predicts that the volume of global data will reach a staggering 175 Zettabytes by 2025~\cite{IDC_18}.
Modern distributed storage systems, e.g., Amazon Simple Storage Service (S3)~\cite{AmazonS3}, Google Cloud Storage~\cite{GoogleCloud}, and Microsoft Azure~\cite{Azure}, use two redundancy schemes, i.e., data replication and erasure codes, to enhance data reliability.
By creating full data copies at storage nodes near end users, data replication can reduce the data service latency with good fault tolerance performance~\cite{Liu_DataBot_19,Annamalai_Akkio_18}.
However, it suffers from high bandwidth and storage costs with the growing number of data replicas.
With erasure codes, each data item is coded into data chunks and parity chunks.
Compared with replication, erasure codes can lower the bandwidth and storage costs by an order of magnitude while with the same or better level of data reliability~\cite{Hu_SoCC17,EC_Store}.
Nevertheless, in geo-distributed storage systems, erasure codes may incur high access latency since 1) end users need to contact multiple remote storage nodes to access data~\cite{Agar_17}, and 2) the decoding process with parity chunks incurs non-negligible computation overheads.
The high latency prevents the further application of erasure codes to data-intensive applications, limiting their use to rarely-accessed archive data~\cite{RE_Store}.
As supplements to the geo-distributed storage system, major content providers, e.g., Akamai and Google, deploy frontend servers to achieve low latency~\cite{PANDO_20}.
End users issue requests to their nearest frontend servers, which have cached a pool of popular data items.
Nevertheless, data caching faces critical challenges in the distributed coded storage system.
As data items are coded into data chunks and parity chunks, the caching scheme should determine which chunks to cache for each data item.
To serve end users across the globe, spreading the coded chunks of each data item at more storage nodes can lower the latencies of geographically dispersed requests~\cite{PANDO_20}.
The latency of fetching different chunks varies as the coded chunks may be placed at geographically diverse nodes.
Since the data request latency is determined by the slowest chunk retrieval, caching more chunks may not proportionally reduce the overall latency.
Traditional caching schemes at the data item level may not enjoy the full benefits of caching~\cite{Sprout}.
In this paper, we propose optimal offline and near-optimal online caching schemes that are specifically designed for the distributed coded storage systems.
Preliminary experiment results in Sec.~\ref{subsec:caching} and~\ref{subsec:Motivation} show that a positive correlation exists between the latency and the physical distance of data retrieval over the wide area network (WAN).
For any two geographically diverse storage nodes, the latency gap of accessing the same data item keeps fairly stable.
The average data access latency is used as the performance metric to quantify the benefits of caching.
Assuming that the future data popularity and latency information is available, the proposed optimal scheme explores the ultimate performance of caching on latency reduction with theoretical guarantees.
Although theoretically sound in design, the optimal scheme faces the challenges of long running time and large computation overheads when applied to a large-scale storage system.
Guided by the optimal scheme, an online caching scheme is proposed with no assumption about future data popularity and network latencies.
Based on the measured data popularity and network latencies in real time, the caching decision is updated upon the arrival of each request, without completely overriding the existing caching decisions.
The theoretical analysis provides the worst-case performance guarantees of the online scheme.
The main contributions in this paper include:
\begin{itemize}
\item Novel optimal offline and near-optimal online caching schemes at the frontend servers are designed with performance guarantees for the distributed coded storage system.
\item The proposed caching schemes are extended to the case of storage server failure and implemented accordingly.
\item A prototype of the caching system is built based on Amazon S3. Extensive experiment results show that the online scheme can approximate the optimal scheme well with dramatically reduced computation overheads.
\end{itemize}
The rest of this paper is organized as follows.
Sec.~\ref{sec: related} summarizes the related work.
Sec.~\ref{sec:Modeling} presents the model of the distributed coded storage system and states the caching problem.
Sec.~\ref{sec: Optimal_caching} and~\ref{sec: Online_caching} provide the design of the optimal and online caching schemes, respectively.
Sec.~\ref{sec:Evaluation} evaluates the efficiency and performance of the caching schemes through extensive experiments.
Sec.~\ref{sec:conclusion} concludes this paper and lists future work.
\section{Related Work} \label{sec: related}
\textbf{Caching at the data item level.} Data caching has been considered a promising solution to achieve low latency in distributed storage systems.
Ma et al.~\cite{Ma_13} proposed a replacement scheme that cached a full copy of data items to reduce the storage and bandwidth costs while maintaining low latencies.
Liu et al.~\cite{DistCache} designed DistCache, a distributed caching scheme with provable load balancing performance for distributed storage systems.
Song et al.~\cite{Song_NSDI_20} proposed a learning-based scheme to approximate Belady's optimal replacement algorithm with high efficiency~\cite{Belady}.
However, all these previous studies focus on caching at the data item level.
Due to the limited cache capacity, keeping a full copy of data items may not be space efficient to achieve the full benefits of caching with erasure codes.
\textbf{Low latency in coded storage systems.} Erasure codes have been extensively investigated in distributed storage systems as it can provide space-optimal data redundancy.
However, it is still an open problem to quantify the accurate service latency for coded storage systems~\cite{Sprout}.
Therefore, recent studies have attempted to analyze the latency bounds based on queuing theory~\cite{Sprout,MDSqueue,Xiang_TNSM_17,Badita_TIT_19}.
These studies assume a stable request arrival process and exponentially distributed service times, which may not hold in a dynamic network scenario.
Moreover, prior works also focused on the design of data request scheduling schemes to achieve load balancing in coded storage systems~\cite{Hu_SoCC17,EC_Store,Aggarwal_17}.
In this way, the data access latency could be reduced by the avoidance of data access collision.
These scheduling schemes are suitable for intra-data center storage systems as the network congestion dominates the overall data access latency.
\textbf{Caching in coded storage systems.} Compared with schemes that cache the entire data items, Aggarwal et al.~\cite{Sprout} pointed out that caching partial data chunks had more scheduling flexibility.
Then, a caching scheme based on augmenting erasure codes was designed to reduce the data access latency.
Nevertheless, extra storage overheads were introduced with the augmented scheme.
Halalai et al.~\cite{Agar_17} designed Agar, a dynamic programming-based caching scheme to achieve low latency in coded storage systems.
Agar was a static policy that pre-computed a cache configuration for a certain period of time without any worst-case performance guarantees.
Rashmi et al.~\cite{ECCache_16} applied an online erasure coding scheme on data stored in the caching layer to achieve load balancing and latency reduction.
However, this online erasure coding scheme introduced extra computation overheads when handling massive data read requests.
Different from previous studies, our proposed caching schemes leverage the measured end-to-end latency to quantify the benefits of caching.
Furthermore, the proposed schemes only cache data chunks rather than parity chunks to avoid the decoding overheads of data read requests.
\begin{figure}[!t]
\centerline{\includegraphics[width=3.3in]{Figures/SystemModel.pdf}}
\caption{An illustration of the distributed coded storage system with caching services is shown. Data items A and B are coded into $K = 6$ data chunks and $R = 3$ parity chunks.}
\label{fig:System_Model}
\end{figure}
\section{System Model and Problem Statement} \label{sec:Modeling}
This section presents the architecture of the geo-distributed storage system with erasure codes and discusses how to reduce the access latency of data requests with caching services.
\subsection{Geo-distributed Storage System and Erasure Codes} \label{subsec:system}
\begin{table*}[!t] \scriptsize
\caption{The deployment of storage nodes over six Amazon Web Services (AWS) regions and the average data access latency (in milliseconds) from remote storage nodes to end users at three different locations.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Storage node}} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} \\
\hline
\multicolumn{2}{|c|}{\textbf{Region}} & Tokyo & Ohio & Ireland & S\~{a}o Paulo & Oregon & Northern California \\
\hline
\multirow{3}*{\textbf{Average latency (ms)}} & \textbf{Victoria, CA} & 479.3 & 345.5 & 686.3 & 803.9 & 128.3 & 179.3 \\
\cline{2-8}
& \textbf{San Francisco, US} & 513.2 & 338.4 & 663.2 & 786.9 & 158.3 & 84.7 \\
\cline{2-8}
& \textbf{Toronto, CA} & 794.7 & 129.0 & 631.5 & 705.5 & 302.6 & 355.7 \\
\hline
\end{tabular}
\label{tab:AWS_Regions}
\end{center}
\end{table*}
As shown in Fig.~\ref{fig:System_Model}, the geo-distributed storage system consists of a set of geographically dispersed storage nodes $\mathcal{N}$ (with size $N=|\mathcal{N}|$)~\footnote{The storage nodes could be data centers in practice. As shown in Fig.~\ref{fig:System_Model}, each storage node consists of multiple storage servers.}.
The set of data items stored in the system is denoted by $\mathcal{M}$ (with size $M=|\mathcal{M}|$).
Similar to Hadoop~\cite{HDFS} and Cassandra~\cite{Cassandra}, all data items have a default block size.
To achieve the target reliability with the maximum storage efficiency, the Reed-Solomon (RS) codes are adopted as the storage scheme~\footnote{Other erasure codes, e.g., local reconstruction codes (LRC)~\cite{LRC}, can also be applied in our solution.}.
With a linear mapping process, each data item is coded into equal-sized $K$ data chunks and $R$ parity chunks.
All coded chunks are distributed among storage nodes for fault tolerance, which are denoted by
\begin{equation} \label{equ:InitialPlacement}
\left\{\begin{matrix}
m_k \rightarrow i, k \in \{1,...,K\}, i \in \mathcal{N},
\\
m_r \rightarrow j, r \in \{1,...,R\}, j \in \mathcal{N},
\end{matrix}\right.
\end{equation}
which represents that chunks $m_k$ and $m_r$ are placed at nodes $i$ and $j$, respectively~\footnote{In this paper, to keep storage overheads low, data replication at the remote storage nodes is not considered.
Our design is also applicable to the scenario of data replication. The data request is served by fetching $K$ data chunks from the nearest storage nodes.}.
Please note that the coded chunks are not placed at a single storage node since this will increase the data access latency of end users far from that node.
When the requested data is temporarily unavailable, the original data can be recovered via the decoding process from any $K$ out of $K+R$ chunks.
The decoding process with parity chunks will inherently incur considerable computation overheads to the storage system.
Generally speaking, a read request is first served by obtaining $K$ data chunks to reconstruct the original data with low overheads~\cite{Hu_SoCC17}.
The action of parity chunk retrieval for decoding is defined as {\bf degraded read}.
The degraded read is passively triggered 1) when the storage server storing the data chunks is momentarily unavailable, or 2) during the recovery of server failure.
Moreover, the data write/update process is not considered in this paper, since most storage systems are append-only where all the data are immutable. Instead, data with any updates will be considered as separate items with new timestamps~\cite{Hu_SoCC17}.
Erasure codes may incur high data access latency in the geo-distributed storage system.
The requested chunks are retrieved by accessing multiple remote storage nodes.
The high latency impedes the extensive application of erasure codes to data-intensive applications.
Therefore, it is imperative to reduce the data request latencies in the coded storage system.
\subsection{Caching at Frontend Servers for Low Latency} \label{subsec:caching}
This paper adopts caching to achieve low latency data services.
As illustrated in Fig.~\ref{fig:System_Model}, multiple frontend servers are deployed to serve geographically dispersed end users.
Each frontend server creates an in-memory caching layer to cache popular data items near end users.
Instead of interacting directly with remote storage nodes, end users retrieve data from the frontend server.
Let $C$ denote the cache capacity of the frontend server.
Due to the scarcity of memory, not all data chunks can be cached in the caching layer, i.e., $C \leq M K$.
With erasure codes, we may not need to cache all chunks of each data item to achieve the full benefits of caching.
This can be demonstrated through preliminary experiments based on Amazon S3.
As shown in Fig.~\ref{fig:System_Model}, a prototype of the coded storage system is deployed over $N = 6$ Amazon Web Services (AWS) regions, i.e., Tokyo, Ohio, Ireland, S\~{a}o Paulo, Oregon, and Northern California.
In each AWS region, three {\tt buckets} are created.
Each {\tt bucket} represents a server for remote data storage.
The storage system is populated with $M = 1,000$ data items.
The numbers of data and parity chunks are set as $K = 6$ and $R = 3$.
The default size of all chunks is 1 MB~\cite{Agar_17}.
For each data item, the nine coded chunks are uniformly distributed among eighteen {\tt buckets} to achieve load balancing.
The coded chunks of each data item are not placed at the same server to guarantee the $R$-fault tolerance.
As noted in prior work~\cite{ECCache_16,Hu_SoCC17}, the popularity of data items follows a Zipf distribution.
Furthermore, three frontend servers are deployed near the end users at various locations, i.e., Victoria, Canada, San Francisco, United States, and Toronto, Canada.
{\tt Memcached}~\cite{Memcached} module is adopted for data caching in RAM.
The frontend server uses a thread pool to request data chunks in parallel.
For each read request, the end user needs to obtain all six data chunks from remote {\tt buckets} without caching at the frontend server.
Table~\ref{tab:AWS_Regions} shows the average data access latency from remote {\tt buckets} to end users~\footnote{Compared with the long access latency over WAN (in hundreds of milliseconds), the reconstruction latency with data chunks and the network latency from the frontend server to end users is negligible.}.
For data requests, the latency is determined by the slowest chunk retrieval among all chunks.
As shown in Fig.~\ref{fig:System_Model}, if data item B (including data chunks B1--B6) is requested from the frontend server in Victoria, the latency is about 479.3 ms, as we need to fetch data chunks B5 and B6 from the farthest storage node in Tokyo.
\begin{figure}[htbp]
\centerline{\includegraphics[width=2.7in]{Figures/LatencyExample_line.pdf}}
\caption{Experiment results show the average access latency of caching different number of data chunks on the frontend server in Victoria. The relationship between the number of cached data chunks and the reduced latency is nonlinear. The storage locations of data items A and B are shown in Fig.~\ref{fig:System_Model}.}
\label{fig:Latency_Example}
\end{figure}
\begin{figure*}[!t]
\centering
\subfigure[Latencies from Victoria]{
\begin{minipage}[b]{0.31\textwidth}
\includegraphics[width=1\textwidth]{Figures/Average_latency_Victoria.pdf}
\end{minipage}
}
\subfigure[Latencies from San Francisco]{
\begin{minipage}[b]{0.31\textwidth}
\includegraphics[width=1\textwidth]{Figures/Average_latency_SF.pdf}
\end{minipage}
}
\subfigure[Latencies from Toronto]{
\begin{minipage}[b]{0.31\textwidth}
\includegraphics[width=1\textwidth]{Figures/Average_latency_Toronto.pdf}
\end{minipage}
}
\subfigure[CDF of latencies from Victoria]{
\begin{minipage}[b]{0.31\textwidth}
\includegraphics[width=1\textwidth]{Figures/CDF_latency_Victoria.pdf}
\end{minipage}
}
\subfigure[CDF of latencies from San Francisco]{
\begin{minipage}[b]{0.31\textwidth}
\includegraphics[width=1\textwidth]{Figures/CDF_latency_SF.pdf}
\end{minipage}
}
\subfigure[CDF of latencies from Toronto]{
\begin{minipage}[b]{0.31\textwidth}
\includegraphics[width=1\textwidth]{Figures/CDF_latency_Toronto.pdf}
\end{minipage}
}
\caption{Experiment results show the data access latencies from storage nodes to end users over a period of one hour (from 15:00 to 16:00 on July 30, 2020).}
\label{fig:Latency}
\end{figure*}
Then, we consider the performance of caching at the frontend server.
Fig.~\ref{fig:Latency_Example} illustrates the latency reduction performance by gradually increasing the number of cached data chunks for different data items.
In the design of the preliminary experiments, only data chunks are cached so as to avoid degraded reads.
The data chunk with higher access latency is cached with higher priority.
Then, the data access latency can be progressively decreased.
Fig.~\ref{fig:Latency_Example} shows the average access latency of caching different number of data chunks on the frontend server in Victoria.
Let $\varepsilon_m$ denote the number of cached data chunks for data item $m$, $\varepsilon_m \in \{0,...,K\}$, $m \in \mathcal{M}$.
We have the following two observations:
\begin{itemize}
\item The access latency function $f_m(\varepsilon_m)$ is nonlinear.
\item The storage locations of chunks may be different for various data items, e.g., data items A and B in Fig.~\ref{fig:System_Model}.
For various data items, the access latency function $f_m(\varepsilon_m)$ could also be different due to the diverse storage locations.
For instance, if three chunks are cached for data item A, the data access latency is reduced by 40.3\%.
For data item B, three cached data chunks can reduce the latency by 62.6\%.
\end{itemize}
The observations show that the latency reduction varies from data item to data item.
Traditional caching schemes at the data item level cannot achieve the full benefits of caching.
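Under the measurement model above, $f_m(\varepsilon_m)$ admits a simple closed form: if the $\varepsilon_m$ chunks with the highest retrieval latencies are cached, the request latency is governed by the slowest retrieval among the remaining chunks. A minimal Python sketch of this model follows (the chunk latencies below are hypothetical and chosen only for illustration; reconstruction and frontend-to-user latencies are ignored as negligible):
\begin{verbatim}
def access_latency(chunk_latencies, eps):
    # f_m(eps): latency of data item m when its
    # eps farthest data chunks are cached.
    l = sorted(chunk_latencies, reverse=True)
    return l[eps] if eps < len(l) else 0.0

# Hypothetical latencies (ms) of K = 6 chunks,
# two of which reside at the farthest node:
f = [access_latency(
        [479.3, 479.3, 345.5, 179.3, 128.3, 128.3], e)
     for e in range(7)]
# f = [479.3, 479.3, 345.5, 179.3, 128.3, 128.3, 0.0],
# a nonlinear decrease, as observed above.
\end{verbatim}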
\subsection{Caching Problem Statement} \label{subsec:ProblemStatement}
To minimize the overall latency of data read requests at a frontend server, the number of cached chunks $\varepsilon_m$ for each data item should be optimized as follows~\footnote{For data item $m$, $\varepsilon_m$ data chunks placed at the farthest storage nodes, i.e., with the highest data access latencies, will be cached.}:
\begin{equation}\label{equ:Opt}
\begin{array}{l}
\mathop {\min }\limits_{\varepsilon_m \in \mathbb{N}, m \in \mathcal{M}} \sum\limits_{m \in \mathcal{M}} f_m(\varepsilon_m) \cdot r_m \\
{\rm{s}}.{\rm{t}}.\quad 0 \leq \varepsilon_m \leq K,\\
\quad \quad \sum\nolimits_{m \in \mathcal{M}} \varepsilon_m \leq C,\\
\end{array}
\end{equation}
where $r_m$ denotes the user request rate for data item $m$.
Constraint $0 \leq \varepsilon_m \leq K$ ensures that the number of cached chunks for each data is no larger than the number of coded data chunks.
Furthermore, $\sum_{m \in \mathcal{M}} \varepsilon_m \leq C$ ensures that the cache capacity constraint is not violated.
Then, the hardness of the optimization problem~(\ref{equ:Opt}) is examined as follows:
\begin{itemize}
\item Experiments demonstrate that $f_m(\varepsilon_m)$, e.g., the latency function of data item B in Fig.~\ref{fig:Latency_Example}, could be both nonlinear and nonconvex.
Therefore, problem~(\ref{equ:Opt}) is an integer programming problem with non-convexity and nonlinearity.
Generally speaking, complex combinatorial techniques are needed for an efficient solution~\cite{Liu_TSC_18}.
\item In a dynamic scenario, the network conditions and the data request rates may be time variant.
It is a challenge to design online schemes that can react quickly to real-time changes.
\end{itemize}
For a large-scale storage system with uncertainties, the goals of the caching schemes are 1) achieving the ultimate performance of caching on latency reduction, 2) highly efficient for a quick caching decision, and 3) flexible to change the caching decision in an online fashion.
\section{Optimal Caching Schemes Design}\label{sec: Optimal_caching}
In this section, the motivation and design overview are first presented.
Then, assuming that future data popularity and network condition information is available, an offline scheme is designed to find the optimal caching solution.
The results of the optimal scheme can be regarded as a lower bound of the data access latency when the ultimate performance of caching is achieved.
\subsection{Motivation and Design Overview} \label{subsec:Motivation}
As mentioned in Sec.~\ref{subsec:caching}, the latency function $f_m(\varepsilon_m)$ can be different for various data items.
Considering the diversity of chunk storage locations, it is complicated to design a mathematical latency model that is suitable for the entire storage system.
Therefore, the end-to-end latency of data access is used to quantify the benefits of caching.
Through experiments, we analyze the characteristic of data access latencies over WAN.
Based on the deployed experiment platform, the access latencies of data chunk retrieval from remote {\tt buckets} to end users are measured over an hour (from 15:00 to 16:00 on July 30, 2020).
Fig.~\ref{fig:Latency}(a), (b), and (c) show that the data access latencies over WAN remain fairly stable in the long term.
The reason is that the propagation delay dominates and depends primarily on the physical distance of data transmission~\cite{Bogdanov_SoCC_18}.
Experiment results confirm the positive correlation between the physical distance and the latency.
For instance, the data access latency from S\~{a}o Paulo to end users in Victoria is always higher than that from Oregon.
Fig.~\ref{fig:Latency}(d), (e), and (f) demonstrate that the data access latencies are stable for most of the service time.
For example, 89.58\% of the data access latencies from Oregon to Victoria will be in the range of [100, 140] ms.
Besides, 91.94\% of the data access latencies from Ireland to Victoria will be in the range of [650, 700] ms.
For two arbitrary storage nodes, the latency gap also remains fairly stable in the long term.
The average data access latency can be used to quantify the benefits of caching.
In Sec.~\ref{subsec:OptimalScheme}, assuming that the future data popularity and network condition information are available, an optimal scheme is designed to explore the ultimate performance of caching on latency reduction.
\subsection{Optimal Caching Scheme} \label{subsec:OptimalScheme}
Let $l_i$ denote the average network latency of data access from storage node $i$ to end users for a certain period of time.
According to the storage location information in~(\ref{equ:InitialPlacement}), the average latency of retrieving data chunk $m_k$ is given by
\begin{equation} \label{equ:Chunk_Latency}
l_{m_k}= l_i \cdot \mathbf{1}({m_k \rightarrow i}),
\end{equation}
where $\mathbf{1}({m_k \rightarrow i})$ indicates whether data chunk $m_k$ is placed at node $i$ or not, returning 1 if true or 0 otherwise, $k \in \{1, ..., K\}$, $i \in \mathcal{N}$.
For ease of notation, let us relabel the data chunks according to the descending order of the data access latency.
For example, $m_1$ denotes the data chunk placed at the farthest storage node.
Based on the sorted latencies $\{l_{m_1}, ..., l_{m_K}\}$, a $(K+1)$-dimensional array is maintained for each data item
\begin{align}
\begin{split} \label{equ:Latency_Array}
\{\tau_{m,0}, \tau_{m,1},..., \tau_{m,K}\} = \{0, (l_{m_1}-l_{m_2})\cdot r_m, ... , (l_{m_1}-\\ l_{m_k}) \cdot r_m, ..., (l_{m_1}-l_{m_K}) \cdot r_m, l_{m_1} \cdot r_m\} ,
\end{split}
\end{align}
where $\tau_{m,k-1}=(l_{m_1}-l_{m_k}) \cdot r_m$ represents the value of reduced latency when $k-1$ data chunks are cached.
For example, if chunks $m_1$ and $m_2$ are cached, then $l_{m_3}$ becomes the bottleneck.
Clearly, $\tau_{m,0} = 0$ as no latency will be reduced without caching.
When all $K$ data chunks are cached, the maximum value of reduced latency is $\tau_{m,K} = l_{m_1} \cdot r_m$.
Based on the reduced latency information, an $M \times (K+1)$ valuation array $\tau$ can be maintained for all data items.
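A minimal sketch of this construction in Python (variable names are ours) is:
\begin{verbatim}
def valuation_array(chunk_latencies, r_m):
    # tau_{m,0..K}: reduced latency when k of the
    # K data chunks of item m are cached.
    l = sorted(chunk_latencies, reverse=True)
    K = len(l)                     # l[0] = l_{m_1}
    tau = [0.0] * (K + 1)
    for k in range(1, K):
        tau[k] = (l[0] - l[k]) * r_m
    tau[K] = l[0] * r_m            # all chunks cached
    return tau
\end{verbatim}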
As shown in Fig.~\ref{fig:Latency_Example}, $f_m(\varepsilon_m)$ is a monotonic decreasing function.
Minimizing the overall data access latency in (\ref{equ:Opt}) is equivalent to maximizing the total amount of reduced latency:
\begin{equation}\label{equ:Opt_maximize}
\begin{array}{l}
\mathop {\max } \limits_{\varepsilon_m \in \mathbb{N}, m \in \mathcal{M}} \Theta (\varepsilon_m) = \sum\limits_{m \in \mathcal{M}} \tau_{m, \varepsilon_m} \\
{\rm{s}}.{\rm{t}}.\quad 0 \leq \varepsilon_m \leq K,\\
\quad \quad \sum\limits_{m \in \mathcal{M}} \varepsilon_m = C.\\
\end{array}
\end{equation}
As $C \leq M K$, $\sum_{m \in \mathcal{M}} \varepsilon_m = C$ ensures the cache capacity can be fully utilized for latency reduction.
Then, we determine the optimal decision $\varepsilon_m$ in the following two steps:
\renewcommand{\algorithmiccomment}[1]{\hfill\eqparbox{COMMENT}{$\triangleright$ #1}}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\setcounter{algorithm}{0}
\begin{algorithm}
\caption{Iterative Search for Cache Partitions}
\label{Alg:ExhaustiveSearch}
\begin{algorithmic}[1]
\REQUIRE Cache capacity $C$, number of coded data chunks $K$, number of data items $M$.
\ENSURE Set of cache partitions $\chi$.
\renewcommand{\algorithmicensure}{\textbf{Initialization:}}
\ENSURE $x_1 \leftarrow C$, $\forall x_k \leftarrow 0$, $k\in\{2,...,K\}$, $\chi \leftarrow \emptyset$.
\WHILE{$\{x_1,...,x_K\} \notin \chi$}
\STATE Add $\{x_1,...,x_K\}$ to $\chi$ if $\sum_{k=1}^{K}x_k \leq M$;
\STATE $x_2 \leftarrow x_2 + 1$ if $x_2 < \hat x_2$ else $x_2 \leftarrow 0$;
\FOR{$k=3$ to $K$}
\IF{$x_{k-1}$ is reset to 0}
\STATE $x_k \leftarrow x_k + 1$ if $x_k < \hat x_k$ else $x_k \leftarrow 0$;
\ENDIF
\STATE $x_1=C-\sum_{k=2}^{K}k x_k$;
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\subsubsection{{\bf Cache Partitions}}
Let $x_k$ denote the number of data items with $k$ data chunks cached, i.e.,
\begin{equation} \label{equ:Data_num}
x_k = \sum \nolimits_{m \in \mathcal{M}} \mathbf{1}(\varepsilon_m= k),
\end{equation}
where $\mathbf{1}(\varepsilon_m= k)$ indicates whether $k$ data chunks are cached for data item $m$ or not, $x_k \in \mathbb{N}$.
We define $\{x_1,...,x_K\} \in \chi$ as a potential partition of caching decisions.
Based on the constraints in~(\ref{equ:Opt_maximize}), we have the Diophantine equations as follows
\begin{equation} \label{equ:Diophantine}
\left\{\begin{matrix}
x_1 + 2x_2 + ... + K x_K = C,
\\
x_1 + x_2 + ... + x_K \leq M.
\end{matrix}\right.
\end{equation}
All partitions of caching decisions $\chi$ can be derived from~(\ref{equ:Diophantine}) through iterative search.
The pseudo code of the iterative search is listed in Algorithm~\ref{Alg:ExhaustiveSearch}.
Given the value of $\{x_{k+1},...,x_K\}$, the maximum value that $x_k$ can be assigned is
\begin{equation} \label{equ:Max_x_k}
\hat x_k = \left \lfloor \frac{C-\sum_{n=k+1}^{K}n x_n}{k} \right \rfloor.
\end{equation}
Initially, $\{x_1, x_2, ..., x_K\}=\{C,0,...,0\}$ is a feasible solution if $\sum_{k=1}^{K}x_k = C \leq M$.
We gradually increase the value of $x_2$ from 0.
If $x_2=\hat x_2$, $x_2$ is reset to 0 in the next iteration.
In this way, the value of $x_k$, $k \in \{3,...,K\}$, is determined iteratively.
If $x_{k-1}$ is reset to 0, $x_k$ is incremented by 1 in the next iteration.
If $x_k = \hat x_k$, $x_k$ is likewise reset to 0.
Based on the value of $\{x_{2},...,x_K\}$, $x_1$ is set to $C-\sum_{k=2}^{K}k x_k$.
We repeat the above process until all cache partitions are included in $\chi$.
A simple example is used to demonstrate the iterative search process.
Let $K = 3$ and $C = 5$.
Algorithm~\ref{Alg:ExhaustiveSearch} sequentially appends five cache partitions, i.e., $\{5,0,0\}$, $\{3,1,0\}$, $\{1,2,0\}$, $\{2,0,1\}$, and $\{0,1,1\}$.
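For reference, the same partition set can also be generated by a direct recursive enumeration of the Diophantine solutions, which is equivalent to the odometer-style iteration of Algorithm~\ref{Alg:ExhaustiveSearch}; the following Python sketch (ours, for illustration only) reproduces the five partitions above:
\begin{verbatim}
def cache_partitions(C, K, M):
    """All {x_1,...,x_K} with sum_k k*x_k = C and sum_k x_k <= M."""
    out = []
    def rec(k, remaining, counts):
        if k == 1:                        # x_1 absorbs whatever is left
            x = counts + [remaining]
            if sum(x) <= M:
                out.append(list(reversed(x)))  # order as {x_1,...,x_K}
            return
        for xk in range(remaining // k, -1, -1):
            rec(k - 1, remaining - k * xk, counts + [xk])
    rec(K, C, [])
    return out

# Example from the text (K = 3, C = 5): the five partitions
# {5,0,0}, {3,1,0}, {1,2,0}, {2,0,1}, {0,1,1}, in some order.
print(cache_partitions(5, 3, 100))
\end{verbatim}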
The theoretical analysis of the iterative search algorithm is provided as follows.
\begin{property} \label{property:1}
Algorithm~\ref{Alg:ExhaustiveSearch} needs less than $\prod_{k=2}^{K} (\left \lfloor \frac{C}{k} \right \rfloor + 1)$ iterations to finish.
The size of set $\left | \chi \right |$ is also less than $\prod_{k=2}^{K} (\left \lfloor \frac{C}{k} \right \rfloor + 1)$.
\end{property}
\begin{proof}
According to~(\ref{equ:Diophantine}), as $\forall x_k \in \mathbb{N}$, the maximum value of $x_k$ is $\left \lfloor \frac{C}{k} \right \rfloor$.
Therefore, $x_k$ can be assigned up to $\left \lfloor \frac{C}{k} \right \rfloor +1$ different values.
As $\{x_{2},...,x_K\}$ cannot all attain their maximum values at the same time, Algorithm~\ref{Alg:ExhaustiveSearch} needs less than $\prod_{k=2}^{K} (\left \lfloor \frac{C}{k} \right \rfloor + 1)$ iterations to obtain all cache partitions in $\chi$.
The set size $\left | \chi \right | < \prod_{k=2}^{K} (\left \lfloor \frac{C}{k} \right \rfloor + 1)$ also holds.
\end{proof}
\setcounter{algorithm}{1}
\begin{algorithm}
\caption{Optimal Assignment for Cache Partitions}
\label{Alg:OptimalCaching}
\begin{algorithmic}[1]
\REQUIRE Set of cache partitions $\chi$, valuation array $\tau$, market clearing price $p_m$.
\ENSURE Caching decision $\varepsilon_m$.
\renewcommand{\algorithmicensure}{\textbf{Initialization:}}
\ENSURE $\forall \varepsilon_m,\hat{\varepsilon}_m, p_m \leftarrow 0$.
\FOR{Cache partition $\{x_1,...,x_K\} \in \chi$}
\STATE $\mathcal{G} \leftarrow \mathit{preferred\_seller\_graph}(\tau,\{x_1,...,x_K\})$;
\STATE $\{\mathcal{M}^{[\textup{c}]}, \mathcal{K}^{[\textup{c}]}\} \leftarrow \mathit{constricted\_set}(\mathcal{G})$;
\STATE $\tau^{\prime} \leftarrow \tau$;
\WHILE{$\{\mathcal{M}^{[\textup{c}]}, \mathcal{K}^{[\textup{c}]}\} \neq \emptyset$}
\FOR{$m \in \mathcal{M}^{[\textup{c}]}$}
\FOR{$k \in \mathcal{K}^{[\textup{c}]}$}
\STATE $V_{k} \leftarrow \mathit{sum\_top}(\tau^{\prime}(:,k),x_k)$;
\STATE $V_{k}^{m} \leftarrow \mathit{sum\_top}(\tau^{\prime}(:,k) \setminus \{\tau^{\prime}_{m,k}\},x_k)$;
\ENDFOR
\STATE $p_m \leftarrow p_m + \max\{1, \max\{V_{k}-V_{k}^{m}\}\}$;
\STATE $\tau^{\prime}(m,:) \leftarrow \tau(m,:) - p_m$;
\ENDFOR
\STATE $\mathcal{G} \leftarrow \mathit{preferred\_seller\_graph}(\tau^{\prime},\{x_1,...,x_K\})$;
\STATE $\{\mathcal{M}^{[\textup{c}]}, \mathcal{K}^{[\textup{c}]}\} \leftarrow \mathit{constricted\_set}(\mathcal{G})$;
\ENDWHILE
\STATE $\hat{\varepsilon}_m \leftarrow k$ according to $\mathcal{G}$;
\STATE $\forall \varepsilon_m \leftarrow \hat{\varepsilon}_m$ if $\sum_{m \in \mathcal{M}} \tau_{m, \varepsilon_m} < \sum_{m \in \mathcal{M}} \tau_{m, \hat{\varepsilon}_m}$;
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsubsection{{\bf Optimal Assignment for Cache Partitions}}
Recall that each element $x_k$ in a cache partition $\{x_1,..., x_K\}$ represents that $x_k$ data items are selected, each of which caches $k$ chunks in the frontend server.
For example, if data item $m$ is assigned to $x_k$, the caching decision for $m$ becomes $\varepsilon_m=k$.
As shown in Fig.~\ref{fig:Assignment_Example}, the data items and cache partition can be treated as {\bf sellers} and {\bf buyers}, respectively.
According to the valuation array $\tau$, each buyer has a valuation for each data item.
Thus, the optimal assignment can be considered as a market competing for data items with higher valuations.
The pseudo code of the optimal assignment is listed in Algorithm~\ref{Alg:OptimalCaching}.
As shown in Fig.~\ref{fig:Assignment_Example}, buyers may compete for a certain data item.
The basic idea is to increase the price $p_m$ of data item $m$ until the competition is over.
The price $p_m$ is known as {\bf market clearing price}~\cite{Market_Clearing}.
With no competition, the local optimal caching decision $\hat{\varepsilon}_m$ can be obtained for a certain cache partition.
The global optimal assignment is the one that has the maximum valuation among all cache partitions in $\chi$.
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.0in]{Figures/Assignment.pdf}}
\caption{An illustration of the data item assignment for a cache partition.}
\label{fig:Assignment_Example}
\end{figure}
Let $\tau(:,k)$ denote the $k$-th column of $\tau$, which represents the reduced latencies of all data items when $k$ data chunks of that data item are cached.
To maximize the caching benefits, function $\mathit{preferred\_seller\_graph}$ matches sellers and buyers with the largest $x_k$ elements in $\tau(:,k)$.
As shown in Fig.~\ref{fig:Assignment_Example}(a), a preferred seller graph $\mathcal{G}$ is constructed with function $\mathit{preferred\_seller\_graph}$.
In $\mathcal{G}$, different buyers may compete for the same data item, while each data item can only be assigned to one buyer.
Then, the constricted set $\{\mathcal{M}^{[\textup{c}]}, \mathcal{K}^{[\textup{c}]}\}$ is constructed with function $\mathit{constricted\_set}$, where $\mathcal{M}^{[\textup{c}]}$ denotes the set of contested data items, and $\mathcal{K}^{[\textup{c}]}$ represents the set of competing buyers.
Then, we show how to set the market clearing price $p_m$ for each data $m \in \mathcal{M}^{[\textup{c}]}$.
We initialize $p_m=0$, $\forall m \in \mathcal{M}$.
Then, the payoff array $\tau^{\prime}$ can be initialized as the valuation array $\tau$ with $\tau^{\prime} \leftarrow \tau$.
Let $V_{k}$ denote the total payoff of assigning data items (including the competed data item $m$) to buyer $x_k \in \mathcal{K}^{[\textup{c}]}$, i.e.,
\begin{equation} \label{equ:Kvaluations}
V_{k} = \mathit{sum\_top}(\tau^{\prime}(:,k),x_k),
\end{equation}
where function $\mathit{sum\_top}(\tau^{\prime}(:,k),x_k)$ represents the sum of largest $x_k$ elements in set $\tau^{\prime}(:,k)$.
Then, if data item $m$ is not assigned to $x_k$, the total payoff is given by
\begin{equation} \label{equ:KAvaluations}
V_{k}^{m} = \mathit{sum\_top}(\tau^{\prime}(:,k) \setminus \{\tau^{\prime}_{m,k}\},x_k).
\end{equation}
If $\max\{V_{k}-V_{k}^{m}\} > 0$ for all buyers in $\mathcal{K}^{[\textup{c}]}$, $p_m$ is incremented by $\max\{V_{k}-V_{k}^{m}\}$.
Then, the payoff of $m$ for all buyers $\tau^{\prime}(m,:)$ is updated as $\tau(m,:) - p_m$.
This ensures $m$ will be assigned to only one buyer $k = \argmax\{V_{k}-V_{k}^{m}\}$ in the next iteration.
If $\max\{V_{k}-V_{k}^{m}\} = 0$, $p_m$ is incremented by the unit price 1.
The above process is repeated until the constricted set is empty.
The whole process needs at most $M$ iterations, since at least one data item is excluded from the constricted set in each iteration.
Then, the updated preferred seller graph with no competition is added to the existing assignment.
If the local caching decision $\hat{\varepsilon}_m$ for the current cache partition yields a higher payoff than all previous ones, the global caching decision is updated with $\varepsilon_m \leftarrow \hat{\varepsilon}_m$.
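As a sketch of the price update in Lines 7--12 of Algorithm~\ref{Alg:OptimalCaching} (the dictionary layout of the payoff array below is our own assumption), the increment of the market clearing price can be computed as follows:
\begin{verbatim}
def sum_top(values, x):
    """Sum of the largest x elements, i.e., function sum_top."""
    return sum(sorted(values, reverse=True)[:x])

def price_increment(tau_payoff, m, buyers, x):
    """Price increment for a contested item m (Lines 7-12, sketched).

    tau_payoff: payoff array tau' as {item: {k: payoff}};
    buyers: competing buyer set K^[c]; x: partition sizes {k: x_k}.
    """
    gaps = []
    for k in buyers:
        col = [tau_payoff[n][k] for n in tau_payoff]            # tau'(:,k)
        col_wo_m = [tau_payoff[n][k] for n in tau_payoff if n != m]
        gaps.append(sum_top(col, x[k]) - sum_top(col_wo_m, x[k]))
    return max(1, max(gaps))   # as in Line 11 of Algorithm 2
\end{verbatim}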
The theoretical analysis of the assignment algorithm is provided as follows.
\begin{theorem}\label{thm:OPT}
Algorithm~\ref{Alg:OptimalCaching} yields the optimal caching decision.
\end{theorem}
\begin{proof}
Firstly, we prove that the optimal decision can be obtained for each cache partition.
This is equivalent to proving that interchanging any two pairs of caching decisions cannot further increase the total valuations.
%
Let $m$ and $m'$ denote two randomly selected data items in $\mathcal{G}$.
With Algorithm~\ref{Alg:OptimalCaching}, let $k$ and $k'$ denote their corresponding number of cached chunks.
To verify optimality, we need to prove
%
\begin{equation} \label{equ:TheoremProof_1}
\tau_{m,k}+\tau_{m',k'}\geq \tau_{m',k}+\tau_{m,k'}.
\end{equation}
If $k$ and $k'$ are not in the constricted set, i.e., $\tau_{m,k}\geq \tau_{m',k}$ and $\tau_{m',k'}\geq \tau_{m,k'}$, we have $\tau_{m,k}+\tau_{m',k'}\geq \tau_{m',k}+\tau_{m,k'}$.
If $k$ and $k'$ are in the constricted set of $m$ in the previous iteration~\footnote{The case of data item $m'$ can be proved in the same way.} and $m$ is finally assigned to $k$, we have
%
\begin{equation} \label{equ:TheoremProof_2}
\begin{aligned}
V_{k}-V_{k}^{m} \geq V_{k'}-V_{k'}^{m}.
\end{aligned}
\end{equation}
%
Besides, $m'$ is finally assigned to $k'$ with no competition, i.e., $\tau_{m',k}$ is not one of the largest $x_k$ elements in set $\tau^{\prime}(:,k)$.
We have
%
\begin{equation} \label{equ:Final_assign}
\left\{\begin{matrix}
V_{k}-V_{k}^{m} \leq \tau_{m,k}-\tau_{m',k},
\\
V_{k'}-V_{k'}^{m} = \tau_{m,k'}-\tau_{m',k'}.
\end{matrix}\right.
\end{equation}
%
This means
%
\begin{equation}
\begin{aligned}
\tau_{m,k}-\tau_{m',k} \geq \tau_{m,k'}-\tau_{m',k'},\\
\end{aligned}
\end{equation}
%
which concludes that the optimal caching decision is obtained for the cache partition.
As all partitions in $\chi$ are considered, Algorithm~\ref{Alg:OptimalCaching} yields the global optimal caching decision.
\end{proof}
\begin{property}\label{pro:Computation_complexity}
The computation complexity of Algorithm~\ref{Alg:OptimalCaching} is less than $O(\frac{C^{K-1} \cdot M^2}{(K-1)!})$.
\end{property}
\begin{proof}
To obtain the preferred seller graph, all columns in $\tau$ are sorted via the radix sort algorithm.
The sorting complexity is $O(M K)$ (Line 2).
Then, all data items need to be considered to determine the constricted set with the complexity of $O(M)$ (Line 3).
As all buyers may compete for a data item, the calculation of the market clearing price needs at most $K$ iterations (Lines 7--12).
Furthermore, the preferred seller graph and the constricted set are updated with the complexity of $O(M + M K)$ (Lines 14--15).
As discussed above, the while loop needs at most $M$ iterations.
Furthermore, the data item assignment in Lines 17 and 18 needs at most $K$ iterations.
The optimal assignment for a cache partition needs at most $M^2(K+1) + 2MK + M + K$ iterations.
The computation complexity for a cache partition is $O(M^2 K)$.
Considering all cache partitions in $\chi$, the computation complexity of Algorithm~\ref{Alg:OptimalCaching} is less than $O(\frac{C^{K-1} \cdot M^2}{(K-1)!})$.
\end{proof}
Property~\ref{pro:Computation_complexity} demonstrates that the computation complexity of Algorithm~\ref{Alg:OptimalCaching} is mainly determined by the total number of cache partitions $\left | \chi \right |$.
Based on the experiment platform deployed in Sec.~\ref{subsec:caching}, Table~\ref{tab:Impact_of_C} in Sec.~\ref{subsec:Factors} shows the number of cache partitions $\left | \chi \right |$ and the average running time (ART) of Algorithm~\ref{Alg:OptimalCaching} under different settings.
The number of required iterations $\left | \chi \right |$ and the ART increase rapidly with the increase of cache capacity $C$ and the number of coded data chunks $K$.
This means that Algorithm~\ref{Alg:OptimalCaching} may incur a heavy computation burden for a large-scale storage system.
Furthermore, the long running time implies that the optimal scheme cannot react quickly to real-time network changes.
The network states may change before caching decisions can be updated.
To sum up, the optimal scheme is an offline solution with the requirement of future data popularity and network condition information.
\section{Online Caching Scheme Design} \label{sec: Online_caching}
Guided by the optimal caching scheme in Sec.~\ref{subsec:OptimalScheme}, an online caching scheme is proposed with no assumption about future data popularity and network condition information.
Furthermore, we extend the proposed caching schemes to the case of storage server failure.
\subsection{Online Caching Scheme}
Let $\mathcal{T}$ denote the whole data service period.
The online scheme updates the caching decision according to the measured data popularity $r_m^t$ and network latencies $l_i^t$ in real time, $t \in \mathcal{T}$.
{\bf Data Popularity}: The Discounting Rate Estimator (DRE)~\cite{DRE} method is applied to construct the real-time request information $r_m^t$.
On the frontend server, a counter is maintained for each data item; it is incremented on every data read and decayed periodically by a ratio factor.
The benefits of DRE are as follows: 1) it reacts quickly to the request rate changes, and 2) it only requires $O(1)$ space and update time to maintain the prediction for each counter.
{\bf Network Latency}: Similar to~\cite{Liu_DataBot_19}, the Exponentially Weighted Moving Average (EWMA) method~\cite{EWMA} is used to estimate the average network latency of data requests.
Specifically, after a data read operation, $l_i^t$ is updated by
\begin{equation}
l_i^t = \alpha_l \cdot l_i^{t-1} + (1-\alpha_l) \cdot \iota_i,
\end{equation}
where $\iota _i$ is the measured end-to-end latency of a data request, and $\alpha_l$ is the discount factor to reduce the impact of previous requests.
The advantage of EWMA is that it only needs $O(1)$ space to maintain the prediction for each storage node.
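A minimal Python sketch of the two estimators follows; the decay ratio and the discount factor below are assumed values for illustration:
\begin{verbatim}
class DRECounter:
    """Discounting Rate Estimator for one data item (a sketch)."""
    def __init__(self, decay=0.9):
        self.value, self.decay = 0.0, decay
    def on_read(self):      # incremented on every data read
        self.value += 1.0
    def on_tick(self):      # periodic decay by the ratio factor
        self.value *= self.decay

def ewma_update(l_prev, sample, alpha=0.8):
    """l_i^t = alpha * l_i^{t-1} + (1 - alpha) * iota_i."""
    return alpha * l_prev + (1.0 - alpha) * sample
\end{verbatim}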
Let $\Gamma$ denote the set of data requests in the service period $\mathcal{T}$.
To ensure the adaptivity of our design, the caching decision is updated upon the arrival of each request $\gamma_m^t$, $\gamma_m^t \in \Gamma$, $t \in \mathcal{T}$.
To improve efficiency, the online scheme does not modify the storage system by completely overriding the existing caching decisions.
The valuation $\{\tau_{m,0},\tau_{m,1},..., \tau_{m,K}\}$ is updated according to the latest measurement of data access latency $l_i^t$ and request rate $r_m^t$.
\setcounter{algorithm}{2}
\begin{algorithm}
\caption{Online Caching Scheme}
\label{Alg:OnlineCaching}
\begin{algorithmic}[1]
\REQUIRE Cache capacity $C$, number of coded data chunks $K$, number of data items $M$, valuation array $\tau$, set of data requests $\Gamma$ in period $\mathcal{T}$.
\ENSURE Set of cached data items $\hat{\mathcal{M}}$, online caching decision $\varepsilon_m^t$, $m \in \mathcal{M}$.
\renewcommand{\algorithmicensure}{\textbf{Initialization:}}
\ENSURE $\hat{\mathcal{M}} \leftarrow \emptyset$, $\forall \varepsilon_m^t \leftarrow 0$.
\FOR {Data request $\gamma_m^t \in \Gamma$, $t \in \mathcal{T}$}
\STATE Update $\{\tau_{m,0},\tau_{m,1},..., \tau_{m,K}\}$ according to~(\ref{equ:Chunk_Latency}) and~(\ref{equ:Latency_Array});
\IF {$\sum_{n \in \mathcal{M}} \varepsilon_n^t \leq C-K$ and $\varepsilon_m^t < K$}
\STATE $\varepsilon_m^t \leftarrow K$, add $m$ to $\hat{\mathcal{M}}$;
\ELSIF {$\sum_{n \in \mathcal{M}} \varepsilon_n^t > C-K$ and $\varepsilon_m^t < K$}
\STATE $\hat{\mathcal{M}}^{\prime} \leftarrow \{m\}$;
\REPEAT
\STATE $n \leftarrow \argmin_{n \in \hat{\mathcal{M}} \setminus \hat{\mathcal{M}}^{\prime}}\{\frac{\tau_{n,k}}{k}\}$, add $n$ to $\hat{\mathcal{M}}^{\prime}$;
\UNTIL $K \leq C - \sum_{n \in \hat{\mathcal{M}}} \varepsilon_n^t + \sum_{n' \in \hat{\mathcal{M}}^{\prime}} \varepsilon_{n'}^t \leq 2K-1$
\STATE $\forall \varepsilon_{n'}^t \leftarrow 0$, $n' \in \hat{\mathcal{M}}^{\prime}$;
\STATE Invoke Algorithm~\ref{Alg:ExhaustiveSearch} for cache partition set $\hat{\chi}$ based on the available cache capacity;
\STATE Invoke Algorithm~\ref{Alg:OptimalCaching} to update the caching decisions $\varepsilon_{n'}^t$ based on $\hat{\mathcal{M}}^{\prime}$ and $\hat{\chi}$, $n' \in \hat{\mathcal{M}}^{\prime}$;
\STATE Update $\hat{\mathcal{M}}$, remove $n$ from $\hat{\mathcal{M}}$ if $\varepsilon_n^t=0$, $\forall n \in \hat{\mathcal{M}}^{\prime}$;
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
Let $\hat{\mathcal{M}}$ denote the set of already cached data items.
If the cache capacity is not fully utilized, i.e., $\sum_{n \in \hat{\mathcal{M}}} \varepsilon_n^t \leq C-K$, all $K$ data chunks of the requested data item $m$ should be cached for latency reduction.
In contrast, if $\sum_{n \in \hat{\mathcal{M}}} \varepsilon_n^t > C-K$, we need to determine 1) whether data item $m$ should be cached or not, 2) how many chunks of $m$ should be cached, and 3) which data items in $\hat{\mathcal{M}}$ should be replaced.
To solve this problem, the data items in $\hat{\mathcal{M}}$ with the lowest valuations per unit are added into subset $\hat{\mathcal{M}}^{\prime}$.
The data items in $\hat{\mathcal{M}}^{\prime}$ are expected to be replaced first by the requested data item $m$ to maximize the total amount of reduced latency.
Furthermore, $m$ is also added into $\hat{\mathcal{M}}^{\prime}$.
All data items in $\hat{\mathcal{M}}^{\prime}$ are cache replacement candidates.
The cached data items in $\hat{\mathcal{M}}$ are gradually added into $\hat{\mathcal{M}}^{\prime}$ until the available cache capacity $C - \sum_{n \in \hat{\mathcal{M}}} \varepsilon_n^t + \sum_{n' \in \hat{\mathcal{M}}^{\prime}} \varepsilon_{n'}^t \geq K$.
This guarantees that all $K$ data chunks of $m$ have a chance to be cached.
The expansion of $\hat{\mathcal{M}}^{\prime}$ needs $K$ iterations at most with $\left |\hat{\mathcal{M}}^{\prime} \right | \leq K+1$ and $C - \sum_{n \in \hat{\mathcal{M}}} \varepsilon_n^t + \sum_{n' \in \hat{\mathcal{M}}^{\prime}} \varepsilon_{n'}^t \leq 2K-1$.
Based on the available cache capacity, Algorithm~\ref{Alg:ExhaustiveSearch} is invoked to calculate the cache partition set $\hat{\chi}$.
Then, based on subset $\hat{\mathcal{M}}^{\prime}$ and $\hat{\chi}$, Algorithm~\ref{Alg:OptimalCaching} is invoked to update the caching decisions $\varepsilon_n^t$, $n \in \hat{\mathcal{M}}^{\prime}$.
The pseudo code of the online caching scheme is listed in Algorithm~\ref{Alg:OnlineCaching}.
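The greedy selection of the replacement candidates (Lines 6--9 of Algorithm~\ref{Alg:OnlineCaching}) can be sketched in Python as follows; the data layout is our own assumption:
\begin{verbatim}
def replacement_candidates(cached, eps, tau, C, K, m):
    """Expand M' until at least K chunk slots can be freed.

    cached: set of cached items; eps: {item: cached chunk count};
    tau: valuation array as {item: [tau_{n,0}, ..., tau_{n,K}]}.
    """
    candidates = {m}
    freed = C - sum(eps.get(n, 0) for n in cached) + eps.get(m, 0)
    while freed < K:
        # cached item with the lowest valuation per cached chunk
        n = min(cached - candidates, key=lambda n: tau[n][eps[n]] / eps[n])
        candidates.add(n)
        freed += eps[n]
    return candidates
\end{verbatim}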
The theoretical analysis of the online scheme is provided as follows.
\begin{theorem}\label{thm:Online}
Algorithm~\ref{Alg:OnlineCaching} yields the worst-case approximation ratio of $1-\frac{2K-1}{C}$ upon the arrival of data requests.
\end{theorem}
\begin{proof}
In Algorithm~\ref{Alg:OnlineCaching}, the greedy selection of $\hat{\mathcal{M}}^{\prime}$ (Line 8) may incur performance loss.
Let $\varepsilon_m^t = k$ denote the caching decision obtained with Algorithm~\ref{Alg:OnlineCaching} for request $\gamma_m^t$, $0 \leq k \leq K$.
Then, we consider the following two different cases:
1) $\sum_{n \in \hat{\mathcal{M}}^{\prime} \setminus \{m\}} \tau_{n, \varepsilon_n^t} \leq \tau_{m,K}$: Since Algorithm~\ref{Alg:OptimalCaching} is invoked, Algorithm~\ref{Alg:OnlineCaching} outputs the optimal decision for subset $\hat{\mathcal{M}}^{\prime}$.
As $\tau_{m,k} \leq \tau_{m,K}$, the obtained objective value $\Theta$ satisfies
\begin{equation} \label{equ:Theorem2Proof_1}
\Theta \geq \sum \nolimits_{n \in \hat{\mathcal{M}}} \tau_{n, \varepsilon_n^t} - \sum \nolimits_{n \in \hat{\mathcal{M}}^{\prime} \setminus \{m\}} \tau_{n, \varepsilon_n^t} + \tau_{m,K}.
\end{equation}
The global optimal objective value satisfies
\begin{equation} \label{equ:Theorem2Proof_2}
\Theta^* \leq \sum \nolimits_{n \in \hat{\mathcal{M}}} \tau_{n, \varepsilon_n^t} + \tau_{m,K}.
\end{equation}
Due to the greedy selection, $\frac{\tau_{n', \varepsilon_{n'}^t}}{\varepsilon_{n'}^t} \leq \frac{\tau_{n, \varepsilon_n^t}}{\varepsilon_n^t}$ holds, $\forall n' \in \hat{\mathcal{M}}^{\prime}$, $\forall n \in \hat{\mathcal{M}} \setminus \hat{\mathcal{M}}^{\prime}$.
As $\sum_{n \in \hat{\mathcal{M}}^{\prime}} \varepsilon_n^t \leq 2K-1$, we have
\begin{equation} \label{equ:Theorem2Proof_3}
\frac{\sum_{n \in \hat{\mathcal{M}}} \tau_{n, \varepsilon_n^t}}{C} \geq \frac{\sum_{n \in \hat{\mathcal{M}}^{\prime} \setminus \{m\}} \tau_{n, \varepsilon_n^t}}{2K-1}.
\end{equation}
The worst-case performance bound is given by
\begin{align}
\begin{split}
\frac{\Theta}{\Theta^*}& \geq \frac{C-2K+1}{C}.
\end{split}
\end{align}
2) $\sum_{n \in \hat{\mathcal{M}}^{\prime} \setminus \{m\}} \tau_{n, \varepsilon_n^t} > \tau_{m,K}$: In this case, we have
\begin{align}
\begin{split}
\Theta^*& < \sum \nolimits_{n \in \hat{\mathcal{M}}} \tau_{n, \varepsilon_n^t} + \tau_{m,K} \\
& < \sum \nolimits_{n \in \hat{\mathcal{M}}} \tau_{n, \varepsilon_n^t}+\sum \nolimits_{n \in \hat{\mathcal{M}}^{\prime} \setminus \{m\}} \tau_{n, \varepsilon_n^t}.
\end{split}
\end{align}
As $\Theta \geq \sum_{n \in \hat{\mathcal{M}}} \tau_{n, \varepsilon_n^t}$, we have
\begin{align}
\begin{split}
\frac{\Theta}{\Theta^*}& > \frac{\sum_{n \in \hat{\mathcal{M}}} \tau_{n, \varepsilon_n^t}}{\sum_{n \in \hat{\mathcal{M}}} \tau_{n, \varepsilon_n^t}+\sum_{n \in \hat{\mathcal{M}}^{\prime} \setminus \{m\}} \tau_{n, \varepsilon_n^t} } > \frac{C}{C+2K-1}.
\end{split}
\end{align}
This completes the proof.
\end{proof}
\begin{property}\label{pro:Computation_complexity_2}
For a data request, the computation complexity of Algorithm~\ref{Alg:OnlineCaching} is less than $O((K+1)^3 \cdot K!)$.
\end{property}
\begin{proof}
For any data request, we have $\left |\hat{\mathcal{M}}^{\prime} \right | \leq K+1$ and $C - \sum_{n \in \hat{\mathcal{M}}} \varepsilon_n^t + \sum_{n' \in \hat{\mathcal{M}}^{\prime}} \varepsilon_{n'}^t \leq 2K-1$.
According to Property~\ref{property:1}, the set size $\left | \hat{\chi} \right | < \prod_{k=2}^{K} (\left \lfloor \frac{2K-1}{k} \right \rfloor + 1)$ holds.
As $\frac{2K-1}{k} < K-k+2$, $k \in \{2,...,K\}$, we have $\left \lfloor \frac{2K-1}{k} \right \rfloor + 1 \leq K-k+2$.
Therefore, $\left | \hat{\chi} \right | < K!$ holds.
Furthermore, similar to the analysis in Property~\ref{pro:Computation_complexity}, the computation complexity for a cache partition is $O((K+1)^3)$.
The computation complexity of Algorithm~\ref{Alg:OnlineCaching} is less than $O((K+1)^3 \cdot K!)$.
\end{proof}
In large-scale storage systems, the number of coded data chunks per data item $K$ is much smaller than the cache capacity $C$ and the number of data items $M$, i.e., $K \ll C$ and $K \ll M$.
Therefore, Theorem~\ref{thm:Online} shows that Algorithm~\ref{Alg:OnlineCaching} can approximate the optimal solution well.
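For instance, under the default experimental setting of Sec.~\ref{sec:Evaluation} ($K = 6$ and $C = 100$), the bound of Theorem~\ref{thm:Online} already guarantees
\begin{equation*}
\frac{\Theta}{\Theta^*} \geq 1-\frac{2K-1}{C} = 1-\frac{11}{100} = 0.89,
\end{equation*}
i.e., the online scheme retains at least 89\% of the optimal latency reduction in the worst case.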
Furthermore, Property~\ref{pro:Computation_complexity_2} shows that the computation complexity of Algorithm~\ref{Alg:OnlineCaching} is tremendously reduced when compared with that of Algorithm~\ref{Alg:OptimalCaching}.
Table~\ref{tab:Impact_of_C} in Sec.~\ref{subsec:Factors} shows the maximum number of required iterations $\left | \hat{\chi} \right |$ and the ART of Algorithm~\ref{Alg:OnlineCaching}.
The low computation complexity ensures that the online scheme can react quickly to real-time changes.
\subsection{Caching with Server Failure} \label{subsec:EC-Caching-With-Failure}
The proposed optimal and online schemes work well without server failure.
However, servers may experience downtime in the distributed storage system.
In this subsection, the proposed caching schemes are extended to the case of storage server failure.
Let $\mathcal{M}_i$ denote the set of remotely unavailable data chunks when a storage server at node $i$ fails.
If data chunk $m_k \in \mathcal{M}_i$ is not cached beforehand, the degraded read is triggered to serve the data requests.
The parity chunk $m_r$ with the lowest data access latency will be fetched from node $j$ for data reconstruction.
The unavailable data chunk $m_k$ is replaced by parity chunk $m_r$, i.e., $m_k \leftarrow m_r$ and $l^t_{m_k} \leftarrow l^t_{m_r}$.
Similar to~(\ref{equ:Chunk_Latency}), the average latency of sending $m_r$ is given by
\begin{equation} \label{equ:Parity_Chunk_Latency}
l^t_{m_r} = \min_{j \in \mathcal{N}} \{l^t_j \mid \mathbf{1}({m_r \rightarrow j}) = 1\}.
\end{equation}
When Algorithm~\ref{Alg:OptimalCaching} or~\ref{Alg:OnlineCaching} suggest caching $m_r$, the recovered data chunk $m_k$ (instead of $m_r$) is directly added into the caching layer.
In this way, the decoding overheads of the subsequent data requests can be reduced.
This means our design still works well when server failure happens.
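For illustration, the chunk selection in~(\ref{equ:Parity_Chunk_Latency}) amounts to a minimum over the nodes that hold a parity chunk of the item; a short Python sketch with an assumed layout:
\begin{verbatim}
def best_parity_node(l, parity_nodes):
    """Node holding a parity chunk with the lowest latency l_j^t."""
    j = min(parity_nodes, key=lambda j: l[j])
    return j, l[j]
\end{verbatim}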
\begin{figure*}[!t]
\centering
\begin{minipage}[b]{0.315\textwidth}
\centering
\includegraphics[width=2.35in]{Figures/Case_latency.pdf}
\caption{Average data request latencies.}
\label{fig:Average_latency_case}
\end{minipage}
\hspace{5pt}
\begin{minipage}[b]{0.315\textwidth}
\centering
\includegraphics[width=2.35in]{Figures/Case_tail_latency.pdf}
\caption{95th percentile tail latencies.}
\label{fig:tail_latencies_case}
\end{minipage}%
\hspace{5pt}
\begin{minipage}[b]{0.315\textwidth}
\centering
\includegraphics[width=2.35in]{Figures/Case_hit.pdf}
\caption{Hit ratio of data chunk requests.}
\label{fig:Hit_ratio}
\end{minipage}%
\end{figure*}
\section{Experimental Evaluation} \label{sec:Evaluation}
In this section, we build a prototype of the caching system in Python and then integrate it with the experiment platform deployed in Sec.~\ref{subsec:caching}.
Next, extensive experiments are performed to quantitatively evaluate the performance of the proposed optimal and online caching schemes.
\textbf{Experimental Setup:}
We deploy eighteen {\tt buckets} in $N = 6$ AWS regions.
Each {\tt bucket} denotes a remote storage server.
The library {\tt zfec}~\cite{zfec} is adopted to implement the RS codes.
By default, we set the number of coded chunks $K = 6$ and $R = 3$.
The coded chunks of the $M=1,000$ data items are of the same size, 1 MB~\cite{Agar_17}.
They are uniformly distributed at different {\tt buckets} to achieve fault tolerance.
As shown in Table~\ref{tab:AWS_Regions}, three frontend servers are built on personal computers in different cities.
The hardware features an Intel(R) Core(TM) i7-7700 HQ processor and 16 GB memory.
The cache capacity of {\tt Memcached} is set to 100 MB in RAM, i.e., the maximum number of cached data chunks is set to $C = 100$.
The data service period $\mathcal{T}$ is set to 1 hour.
Similar to the previous studies~\cite{ECCache_16,Hu_SoCC17}, the popularity of data requests follows a Zipf distribution, which is common in many real-world data request distributions.
The tail index of the Zipf distribution is set to 2.0 under the default settings, i.e., highly skewed.
\textbf{Performance Baselines:} For a fair performance comparison, four other schemes are adopted as performance baselines.
\begin{itemize}
\item Backend---All required $K$ data chunks are directly fetched from the remote {\tt buckets} with no caching.
This scheme is adopted to quantify the benefits of caching.
\item LRU and LFU---The Least Recently Used (LRU) and Least Frequently Used (LFU) caching policies are used to replace the contents in the caching layer.
For each selected data item, all $K$ data chunks are cached.
\item Agar~\cite{Agar_17}---A dynamic programming-based scheme is designed to iteratively add data chunks with larger request rates and higher data access latencies in the caching layer.
\end{itemize}
\subsection{Experimental Results} \label{subsec:Results}
To begin with, the performances of six schemes, i.e., Backend, LRU, LFU, Agar, and the proposed optimal and online schemes, are compared under the default settings.
As all requested data chunks are fetched from the remote {\tt buckets}, Backend yields the highest average latency (746.27 ms, 726.43 ms, and 745.34 ms) and 95th percentile tail latencies (1045.49 ms, 943.12 ms, and 1159.16 ms) at three frontend servers.
LRU caches the recently requested data items by discarding the least recently used data items.
LFU caches the data items with higher request rates.
Compared with Backend, LRU and LFU reduce the average latencies of all data requests at three frontend servers by 24.6\% and 23.2\%, respectively.
As illustrated in Fig.~\ref{fig:Hit_ratio}, 24.7\% and 23.3\% of requests are fulfilled by the cached data chunks with LRU and LFU.
Since whole data items are cached, LRU and LFU reduce the access latency to 0 ms for 24.7\% and 23.3\% of the data requests, respectively.
However, as the cache capacity is limited, the remaining requests suffer high access latencies.
Compared with Backend, the 95th percentile tail latencies are only reduced by 3.5\% and 3.9\%, respectively.
LFU and LRU overlook the diversity of data chunk storage locations and the heterogeneity of latencies across different storage nodes.
Caching whole data items therefore cannot realize the full benefits of caching.
Agar iteratively improves the existing caching configurations by considering new data chunks.
Each data item is assigned a weight, given by the number of its data chunks to cache.
Compared with LFU and LRU, more data items can enjoy the benefits of caching.
The average latencies at three frontend servers are reduced to 504.21 ms, 503.42 ms, and 512.27 ms, respectively.
Moreover, Agar prefers to evict low valuation data chunks which incur high access latencies from the caching layer.
The 95th percentile tail latencies are reduced to 912.24 ms, 877.02 ms, and 978.97 ms.
With an overall consideration of data request rates and access latencies, the proposed optimal scheme optimizes the number of cached data chunks for each data item, minimizing the average latencies to 439.06 ms, 421.52 ms, and 479.79 ms.
Fig.~\ref{fig:Average_latency_case} shows that the proposed online scheme approximates the optimal scheme well with a similar latency of 450.95 ms, 431.66 ms, and 488.84 ms.
As shown in Table~\ref{tab:Impact_of_C}, although raising the average latency by 2.3\%, the online scheme greatly reduces the computation overheads.
Furthermore, the proposed optimal and online schemes optimize the caching decisions by selecting the data chunks with higher valuations.
This means the contents in the caching layer have higher data request rates and lower access latencies.
The hit ratios of data requests from three frontend servers are 25.8\%, 25.7\%, and 26.0\%, respectively.
The 95th percentile tail latencies with the optimal scheme are reduced to 907.86 ms, 878.11 ms, and 955.65 ms.
The online scheme incurs similar 95th percentile tail latencies of 916.45 ms, 883.0 ms, and 973.13 ms.
\subsection{Impact of Other Factors} \label{subsec:Factors}
\begin{figure*}[!t]
\centering
\begin{minipage}[b]{0.315\textwidth}
\centering
\includegraphics[width=2.3in]{Figures/Capacity.pdf}
\caption{Impact of cache capacity.}
\label{fig:Cache_capacity}
\end{minipage}
\hspace{5pt}
\begin{minipage}[b]{0.315\textwidth}
\centering
\includegraphics[width=2.3in]{Figures/K.pdf}
\caption{Impact of number of coded data chunks.}
\label{fig:EC_num}
\end{minipage}%
\hspace{5pt}
\begin{minipage}[b]{0.315\textwidth}
\centering
\includegraphics[width=2.3in]{Figures/M.pdf}
\caption{Impact of number of data items.}
\label{fig:Impact_M}
\end{minipage}%
\end{figure*}
\begin{table}[!t]
\caption{The number of cache partitions and the ART of caching schemes with the variation of cache capacity and number of coded data chunks.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{$C$}} & \textbf{60} & \textbf{80} & \textbf{100} & \textbf{120}\\
\hline
\textbf{LRU} & {ART (ms)} & \multicolumn{4}{|c|}{0.05}\\
\hline
\textbf{LFU} & {ART (ms)} & \multicolumn{4}{|c|}{0.05}\\
\hline
\textbf{Agar} & {ART (ms)} & 544.48 & 661.37 & 800.02 & 960.97\\
\hline
\multirow{2}*{\textbf{Optimal}} & $\left | \chi \right |$ & 19,858 & 69,624 & 189,509 & 436,140\\
\cline{2-6}
& {ART (s)} & 391.64 & 2239.67 & 7863.99 & 23746.93\\
\hline
\multirow{2}*{\textbf{Online}} & $\max \left | \hat{\chi} \right |$ & \multicolumn{4}{|c|}{37} \\
\cline{2-6}
& {ART (ms)} & 2.29 & 1.93 & 1.71 & 1.55\\
\hline
\hline
\multicolumn{2}{|c|}{\textbf{$K$}} & \textbf{2} & \textbf{4} & \textbf{6} & \textbf{8} \\
\hline
\textbf{LRU} & {ART (ms)} & \multicolumn{4}{|c|}{0.05}\\
\hline
\textbf{LFU} & {ART (ms)} & \multicolumn{4}{|c|}{0.05}\\
\hline
\textbf{Agar} & {ART (ms)} & 421.74 & 628.31 & 800.02 & 948.22\\
\hline
\multirow{2}*{\textbf{Optimal}} & $\left | \chi \right |$ & 51 & 8,037 & 189,509 & 1,527,675 \\
\cline{2-6}
& ART (s) & 2.04 & 666.27 & 7863.99 & 49662.91 \\
\hline
\multirow{2}*{\textbf{Online}} & $\max \left | \hat{\chi} \right |$ & 2 & 9 & 37 & 127 \\
\cline{2-6}
& ART (ms) & 0.28 & 0.90 & 1.71 & 6.22 \\
\hline
\end{tabular}
\label{tab:Impact_of_C}
\end{center}
\end{table}
In this section, the impacts of cache capacity, number of coded data chunks, number of data items, data popularity, and server failure, are considered for performance evaluation.
For simplicity, in the remainder of the paper, the average latency refers to the average latency of all data requests from the three frontend servers.
\textbf{Cache Capacity:} Fig.~\ref{fig:Cache_capacity} illustrates the average latency when the cache capacity $C$ increases from 60 to 120 chunks.
With no data caching, the average latencies of Backend remain stable at 739.35 ms.
As more data requests enjoy the caching benefits, the average latencies of all five caching schemes decrease.
With the increase of $C$, the proposed schemes have more space for caching decision optimization.
Compared with Agar, the percentage of reduced latency via the proposed optimal scheme is improved from 3.1\% to 16.0\%.
As shown in Table~\ref{tab:Loss}, when compared with the optimal scheme, the online scheme only increases the average latency from 1.3\% to 2.4\% with the variation of cache capacity.
Then, the ART of five caching schemes is evaluated, which determines the efficiency of deploying a caching solution.
As shown in Table~\ref{tab:Impact_of_C}, by using simple heuristics, LRU and LFU only need 0.05 ms to update the caching decision.
Agar periodically optimizes the caching configuration for all data items in the storage system, which needs hundreds of milliseconds for a round of optimization.
With the increase of cache capacity, the number of cache partitions $\left | \chi \right |$ increases rapidly from 19,858 to 436,140.
The ART of the optimal scheme increases from 391.64 s to 23746.93 s.
In contrast, the online scheme updates the caching decision upon the arrival of each data request.
According to our design in Algorithm~\ref{Alg:OnlineCaching}, the maximum number of cache partitions $\left | \hat{\chi} \right |$ is determined by the number of coded data chunks $K$.
When a data request arrives, the caching decision will not be updated if the data item is already cached.
Therefore, the ART of the online scheme for each request decreases from 2.29 ms to 1.55 ms with the increase of cache capacity.
This means the online scheme is a scalable solution for a large-scale storage system.
\textbf{Number of Coded Data Chunks:} The size of data items is increased from 2 MB to 8 MB.
With the same size of coded chunks (1 MB), the number of coded data chunks $K$ increases from 2 to 8.
As coded chunks are uniformly distributed among remote {\tt buckets}, more data chunks will be placed at the {\tt buckets} with higher access latencies with the increase of $K$.
Therefore, as shown in Fig.~\ref{fig:EC_num}, with $C=100$ and $M=1,000$, the average data access latency with Backend increases from 596.06 ms to 762.83 ms.
Moreover, when the data item is coded into more data chunks, more requests are served by fetching data chunks from the remote {\tt buckets}.
The average latencies with all five caching schemes increase accordingly.
Fig.~\ref{fig:EC_num} shows that the proposed optimal and online schemes always incur lower latencies than Agar, LRU, LFU, and Backend.
Compared with Agar, the percentage of reduced latency via the online scheme varies from 29.9\% to 3.4\% with the increase of $K$.
Furthermore, Table~\ref{tab:Impact_of_C} shows that the ART of the online scheme only increases from 0.28 ms to 6.22 ms.
With the online scheme, little extra delay is introduced to handle intensive data requests.
\begin{table}[!t]
\caption{The percentage of increased data access latency incurred by the online scheme, i.e., performance loss, when compared with the optimal scheme.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{$C$} & \textbf{60} & \textbf{80} & \textbf{100} & \textbf{120}\\
\hline
\textbf{Performance loss (\%)} & 1.3 & 1.6 & 2.3 & 2.4 \\
\hline
\hline
\textbf{$K$} & \textbf{2} & \textbf{4} & \textbf{6} & \textbf{8}\\
\hline
\textbf{Performance loss (\%)} & 3.0 & 5.7 & 2.3 & 1.2 \\
\hline
\hline
\textbf{$M$} & \textbf{500} & \textbf{1,000} & \textbf{2,000} & \textbf{5,000}\\
\hline
\textbf{Performance loss (\%)} & 6.7 & 2.3 & 2.0 & 2.2\\
\hline
\hline
\textbf{Tail index} & \textbf{Uniform} & \textbf{1.2} & \textbf{1.5} & \textbf{2.0}\\
\hline
\textbf{Performance loss (\%)} & 2.2 & 2.2 & 1.7 & 2.3\\
\hline
\end{tabular}
\label{tab:Loss}
\end{center}
\end{table}
\begin{figure*}[!t]
\centering
\begin{minipage}[b]{0.315\textwidth}
\centering
\includegraphics[width=2.3in]{Figures/Skewness_CDF.pdf}
\caption{CDF of data popularity.}
\label{fig:Skewness_CDF}
\end{minipage}
\hspace{5pt}
\begin{minipage}[b]{0.315\textwidth}
\centering
\includegraphics[width=2.3in]{Figures/Skewness.pdf}
\caption{Impact of data popularity.}
\label{fig:Skewness}
\end{minipage}%
\hspace{5pt}
\begin{minipage}[b]{0.315\textwidth}
\centering
\includegraphics[width=2.3in]{Figures/Failure.pdf}
\caption{Impact of server failure.}
\label{fig:Failure}
\end{minipage}%
\end{figure*}
\textbf{Number of Data Items:} As shown in Fig.~\ref{fig:Impact_M}, with $C=100$ and $K=6$, the number of deployed data items $M$ is increased from 500 to 5,000.
The average data access latency with Backend remains basically the same.
As the data popularity follows a Zipf distribution, a small portion of the data items receives the majority of the data requests.
With the growing total number of data items, the number of data items with relatively higher request rates increases.
Due to the limited cache capacity, more and more data requests are served by fetching data chunks from the remote servers.
Therefore, the average latencies increase rapidly with LRU (from 370.93 ms to 710.07 ms), LFU (from 401.26 ms to 707.84 ms), Agar (from 464.45 ms to 662.62 ms), and the optimal (from 272.66 ms to 651.48 ms) and online (from 291.01 ms to 666.0 ms) schemes.
\textbf{Data Popularity:} Fig.~\ref{fig:Skewness_CDF} illustrates the CDF of the data popularity using uniform and Zipf distributions.
As shown in Fig.~\ref{fig:Skewness}, all six schemes incur similar data access latencies when the data popularity follows a uniform distribution.
When all data items are with the same popularity, the caching valuation is only determined by the storage locations of data items.
With a similar caching valuation for different data items, the benefits of caching are not significant when the cache capacity is limited.
Then, with the increase of the tail index from 1.2 to 2.0, the data popularity becomes increasingly skewed.
A fraction of data items with higher request frequencies can benefit more from caching.
When the tail index is set to 2.0, compared with Backend, LRU, LFU, and Agar, the optimal scheme reduces the average latency by 39.6\%, 21.3\%, 19.9\%, and 11.8\%, respectively.
Furthermore, Table~\ref{tab:Loss} demonstrates that the online scheme can approximate the optimal scheme well.
When compared with the optimal scheme, the online scheme only increases the average latency by about 2\% under different settings of data popularity.
\textbf{Server Failure:} Then, we evaluate the performance of the proposed optimal and online schemes when server failure happens.
For a fair performance comparison, LRU, LFU, and Agar also cache the recovered data chunks (instead of parity chunks) to reduce the decoding overheads.
Please note that erasure codes can tolerate up to $R$ simultaneous server failures.
Recent research indicates that single server failure is responsible for 99.75\% of all kinds of server failures~\cite{Khan_FAST_12}.
Therefore, single server failure is considered in this paper by terminating each storage server in turn.
The experiment setting is identical to that in Sec.~\ref{subsec:Results} except for storage server failure.
If the needed data chunks are not available on the remote servers or cached in the caching layer, degraded read will be triggered to serve the data requests.
In this case, the data access latency contains two parts, i.e., the network latency and the decoding latency.
Fig.~\ref{fig:Failure} illustrates the average data access latencies with various schemes.
Without caching services, Backend incurs the average network latency of 742.81 ms and the average decoding latency of 18.82 ms.
By caching the recovered data chunks to avoid unnecessary decoding overheads of the subsequent data requests, LFU, LRU, Agar, and the proposed optimal and online schemes reduce the average decoding latencies by more than 55\%.
Compared with Backend, LRU, LFU, and Agar, the optimal scheme reduces the overall average data access latency by 40.4\%, 20.5\%, 19.7\%, and 12.0\%, respectively.
Furthermore, compared with the optimal scheme, the online scheme incurs a performance loss of 2.5\% in the presence of server failure.
\section{Conclusion and Future Work} \label{sec:conclusion}
In this paper, novel caching schemes were proposed to achieve low latency in the distributed coded storage system.
To reduce the data access latency, frontend servers, each with an in-memory caching layer, were deployed to cache coded data chunks near end users.
Experiments based on Amazon S3 confirmed the positive correlation between the latency and the physical distance of data retrieval over the WAN.
As the distributed storage system spans multiple geographical sites, the average data access latency was used to quantify the benefits of caching.
With the assumption of future data popularity and network latency information, an optimal caching scheme was proposed to obtain the lower bound of data access latency.
Guided by the optimal scheme, we further designed an online caching scheme based on the measured data popularity and network latencies in real time.
Extensive experiments demonstrated that the online scheme approximates the optimal scheme well and significantly reduces the computation complexity.
In future work, more performance metrics, e.g., load balance among storage nodes, will be considered in the caching problem.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
In this note, we point out that compressed quadtrees can be built via
randomized incremental construction. Compressed quadtrees are simple
geometric data-structures. Despite their simplicity, they are
surprisingly useful for carrying out various geometric tasks, see
\cite{h-gaa-08}.
The first randomized algorithm for building compressed quadtrees is
due to Clarkson \cite{c-faann-83}. Eppstein \textit{et~al.}\xspace \cite{egs-sqsdd-05}
suggested building compressed quadtrees by using hierarchical random
sampling in a style similar to skip-lists. If one allows bitwise
operations (in particular, interleaving the bits of two integers in
constant time) one can build compressed quadtrees using $z$-order
\cite{g-ewrq-82, h-gaa-08} by a relatively simple algorithm, but the
task is more challenging if such operations are not allowed.
The new algorithm we describe seems to be quite simple, and can be
interpreted as a variant of the skip quadtree of Eppstein \textit{et~al.}\xspace
\cite{egs-sqsdd-05}.
\section{Preliminaries}
\begin{defn}[Grid.]
For a real positive number $z$ and a point $p = (x, y)$ in
$\Re^2$, define $\mathsf{G}_z(p)$ to be the grid point
$\pth[]{\floor{x/z} z, \floor{y/z} z}$. Observe that
$\mathsf{G}_z$ partitions the plane into square regions, which we
call grid \emphic{cells}{cell}. Formally, for any $i,j \in
\mathbb{Z}$, the intersection of the half-planes $x \geq z i$,
$x < z(i+1)$, $y \geq z j$ and $y < z(j+1)$ is said to be
a grid \emphi{cell}.
\end{defn}
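In code, the grid snapping is a one-liner; the following Python sketch (ours, purely for illustration) computes $\mathsf{G}_z(p)$:
\begin{verbatim}
import math

def grid_point(p, z):
    """G_z(p): the lower-left corner of the grid cell containing p."""
    x, y = p
    return (math.floor(x / z) * z, math.floor(y / z) * z)

# With z = 0.25, the point (0.6, 0.9) lies in the cell of (0.5, 0.75):
print(grid_point((0.6, 0.9), 0.25))
\end{verbatim}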
\begin{defn}[Canonical square.]
A square is a \emphi{canonical square}, if it is contained inside
the unit square, it is a cell in a grid $\mathsf{G}_r$, and $r$ is a
power of two.
\end{defn}
Given a set ${\mathsf{P}}$ of $n$ points in the unit square, a quadtree
$\EuScript{T}$ is built as follows: The root corresponds to the unit square.
Every node $v \in \EuScript{T}$ corresponds to a cell $\Box_v$ (i.e., a
square), and it has four children. The four children correspond to the
four squares formed by splitting $\Box_v$ into four equal size
squares, by horizontal and vertical cuts. The construction is
recursive, and we start from $v = \mathrm{root}_\EuScript{T}$. As long as
the current node contains more than one point of ${\mathsf{P}}$, we
create its children, and continue recursively the construction in each
child. We stop when each leaf of this tree contains a single point of
${\mathsf{P}}$.
\parpic[r]{\begin{minipage}{7.6cm}
\includegraphics{figs/quadtree_points}
\caption{A compressed edge corresponds to a tile that is the set
difference of two canonical squares.}
\figlab{tile}
\medskip
\end{minipage}
}
By compressing paths (in this tree) of nodes that all have a single
child, we get a compressed quadtree of size $O(n)$. Let $\mathcal{QT}({\mathsf{P}})$
denote the (uniquely defined) \emphi{compressed quadtree} of
${\mathsf{P}}$.
A leaf in this quadtree corresponds to a canonical square, and a
compressed edge (or more precisely the top vertex of this edge)
corresponds to an annulus formed by the set difference of two
canonical squares. We will refer to such region as a \emphi{tile}, see
\figref{tile}; that is, a tile is either a square (corresponding to a
leaf of the compressed quadtree) or an annulus (corresponding to a
compressed edge). As such, a compressed quadtree induces a partition
of the unit square into these tiles. We denote the planar map induced
by these tiles of the compressed quadtree of ${\mathsf{P}}$ by
$\mathcal{QT}({\mathsf{P}})$.
\section{Algorithm and analysis}
\subsection{The algorithm}
Pick a random permutation $\permut{{\mathsf{P}}} = \permut{\mathsf{p}_1, \ldots,
\mathsf{p}_n}$ of the points of ${\mathsf{P}}$. Let $\EuScript{T}_i$ be the
compressed quadtree of ${\mathsf{P}}_i = \brc{\mathsf{p}_1, \ldots, \mathsf{p}_i}$.
In any node of $\EuScript{T}_i$ that corresponds to a tile $f$ of
$\mathcal{QT}({\mathsf{P}}_i)$, we store a list, denoted by $\mathrm{c{}l}(f)$, of all the
points of ${\mathsf{P}}$ that lie inside $f$. As such, any point of
${\mathsf{P}}$ is stored exactly once somewhere in $\EuScript{T}_i$. We will
refer to $\mathrm{c{}l}(f)$ as the \emphi{conflict list} of $f$. We also store
for every point of ${\mathsf{P}}$ a pointer to the node of $\EuScript{T}_i$ that
contains it.
In the $i$th iteration, we find the node $v_i$ of $\EuScript{T}_{i-1} =
\mathcal{QT}({\mathsf{P}}_{i-1})$ that stores $\mathsf{p}_i$, and we insert $\mathsf{p}_i$ into
this node. This insertion might result in at most a constant number
(i.e., three) of new nodes being created.\footnote{Here is a sketch
why this claim is correct: Only a leaf of a compressed quadtree
might contain an inserted point. As such, we might need to
introduce a new leaf to store the new point $\mathsf{p}_i$. Hanging
this new leaf in the tree might require splitting an existing
compressed edge of $\EuScript{T}_{i-1}$, by introducing a new
vertex. Similarly, if the leaf $f$ we insert $\mathsf{p}_i$ into already
stores an inserted point, then we need to introduce a new leaf not
only for $\mathsf{p}_i$ but also for the previously stored point in this
leaf. This might also result in a new compressed edge if two points
are close together compared to the diameter of $f$.} The resulting
tree $\EuScript{T}_{i}$ is the compressed quadtree of ${\mathsf{P}}_i$. Now, we
need to move all the points stored in $v_i$ to their new proper place
in $\EuScript{T}_{i}$. Thus, for every point stored in $v_i$, we check if
it has to now be stored in one of the new nodes, and if so we move it
to this new node. If there are $k$ points in the conflict list of
$v_i$ then this iteration takes $O(1 + k)$ time.
The compressed quadtree $\EuScript{T}_n$ is the required tree.
\subsection{The analysis}
\begin{defn}
Let $Y$ be an arbitrary subset of ${\mathsf{P}}$, and consider a tile
$f \in \mathcal{QT}(Y)$. A set $X \subseteq Y$ is a \emphi{defining set}
for $f$, if $f \in \mathcal{QT}(X)$ and it is a minimal set with this
property (i.e., no proper subset of $X$ has $f$ as a tile).
\end{defn}
The following is proved by a tedious but easy case analysis.
\begin{lemma}
If $X$ is a defining set of a tile $f \in \mathcal{QT}({\mathsf{P}})$ then
$\cardin{X} \leq 4$.
\lemlab{tedious}
\end{lemma}
Unlike ``traditional'' randomized incremental construction, the
defining set is not unique in this case.
\begin{lemma}
Consider a tile $f \in \mathcal{QT}({\mathsf{P}}_i)$. The probability that $f$
was created in the $i$th iteration is $\leq 4/i$. Formally, we
claim that
\[
\Prob{ f \in \mathcal{QT}({\mathsf{P}}_i) \setminus \mathcal{QT}({\mathsf{P}}_{i-1})
\sep{ f \in \mathcal{QT}({\mathsf{P}}_i)}} \leq \frac{4}{i}.
\]
\lemlab{backward}
\end{lemma}
\begin{proof}
Let $D_1, \ldots, D_m \subseteq {\mathsf{P}}_i$ be all the different
defining sets of $f$. Consider the set $Z = D_1 \cap D_2 \cap
\cdots \cap D_m$.
Observe that $f$ was created in the $i$th iteration only if
$\mathsf{p}_i \in Z$. Indeed, if $\mathsf{p}_i \notin Z$, then there exists a
defining set $D_t$ of $f$ such that $\mathsf{p}_i \notin D_t$. But then,
$f$ is also a tile of $\mathcal{QT}({\mathsf{P}}_{i-1})$ as $D_t \subseteq
{\mathsf{P}}_{i-1}$, and the probability of this tile to be created in
the $i$th iteration is zero.
Now, by \lemref{tedious}, all the defining sets have cardinality
at most four, and $\cardin{Z} \leq 4$. As such, the required
probability is bounded by the probability that $\mathsf{p}_i$ is in $Z$.
We bound this probability by backward analysis. Indeed, fix the
set ${\mathsf{P}}_i$ and consider all possible permutations of this
set. The probability that one of the (at most) four points of $Z$
is the last point in this permutation (of $i$ elements) is at most
$4/i$.
\end{proof}
\medskip
Observe that the probability of a tile $f$ to be created (according to
\lemref{backward}) is independent of the size of its conflict list.
\begin{lemma}
The expected amount of work in the $i$th iteration is $O(1 +
n/i)$.
\lemlab{iteration}
\end{lemma}
\begin{proof}
Consider a tile $f \in \mathcal{QT}({\mathsf{P}}_i)$. The amount of work spent
on it, if it was created in the $i$th iteration, is proportional
to the size of its conflict list $\mathrm{cl}(f)$. Let $X_i$ be
the random variable which is the amount of work spend by the
algorithm in the $i$th iteration. Since the total size of the
conflict lists of $\EuScript{T}_i$ is $n$, we get by \lemref{backward} that the
expected work in the $i$th iteration is bounded by
\[
\Ex{ X_i \sep{ {\mathsf{P}}_i}} = O \pth{ 1 + \sum_{f \in
\mathcal{QT}({\mathsf{P}}_i)} \frac{4}{i} \cardin{\mathrm{c{}l}(f)} } = O\pth{ 1 +
\frac{n}{i}}.
\]
(Again, the expectation here is over all possible permutations of
${\mathsf{P}}_i$.) Now, we have that $\Ex{X_i} = \Ex{\Ex{ X_i \sep{
{\mathsf{P}}_i}}} = O(1+n/i)$.
\end{proof}
\begin{theorem}
Given a point set ${\mathsf{P}}$ of $n$ points in the plane contained
inside the unit square, one can build a compressed quadtree for
${\mathsf{P}}$ in $O(n \log n)$ expected time.
\end{theorem}
\begin{proof}
By \lemref{iteration}, the total expected work of the above
algorithm is $O \pth{ \sum_{i=1}^n \pth{1 + n/i}} = O(n \log n)$.
\end{proof}
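As a quick numerical sanity check (not part of the analysis), the sum bounding the total expected work indeed grows like $n \log n$:
\begin{verbatim}
import math

def expected_work(n):
    """sum_{i=1}^{n} (1 + n/i), the bound from the proof."""
    return sum(1.0 + n / i for i in range(1, n + 1))

for n in (10**3, 10**4, 10**5):
    # the ratio slowly approaches 1 as n grows
    print(n, expected_work(n) / (n * math.log(n)))
\end{verbatim}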
\begin{remark}
The algorithm can also be analyzed using the results from Clarkson
\textit{et~al.}\xspace \cite{cms-frric-93}.
\end{remark}
\section{Discussion and conclusions}
The algorithm presented for building quadtrees works also for points
in higher dimensions.
It is natural to compare our algorithm to Eppstein \textit{et~al.}\xspace
\cite{egs-sqsdd-05}. They get a slightly more complicated algorithm,
but they support both insertions and deletions, while our algorithm
can only build the quadtree. In light of our approach, it is natural
to interpret the algorithm of Eppstein \textit{et~al.}\xspace \cite{egs-sqsdd-05} as a
lazy randomized incremental algorithm for building quadtrees
\cite{bds-lric-95}.
The author believes that this is a neat example of backward
analysis. The reader naturally has the right to disagree.
\section*{Acknowledgments}
The author thanks Ken Clarkson and David Eppstein for useful
discussions on the problem studied in this note.
\bibliographystyle{alpha}
\subsection{Couette flow}
The first test is a Couette flow between two parallel plates without a pressure gradient. The distance between the plates is $h=1$. The top plate moves at a velocity of $u_x=u_0=0.1$ in the streamwise direction and the bottom plate is fixed. If $x$ stands for the streamwise direction and $y$ for the vertical direction, the analytical solution is
\begin{equation}
u_x(y) = \frac{u_0}{h} y,
\label{Couette -no-pre}
\end{equation}
which is the same test as that used by Chen et al. \cite{Chen.etc:2017}. This is a very interesting case, as the steady flow is independent of the viscosity according to \eqn{Couette -no-pre}. I use $\delta x = 0.02$, or $20 \times 50$ lattices in the $x$ and $y$ directions, for three simulations of flows with kinematic viscosities of $\nu_1 =0.01,\ \nu_2=0.001$ and $\nu_3=0.0006$, respectively. Periodic boundary conditions are applied at the inflow and outflow boundaries. After the steady solutions are reached, the results are indeed independent of the viscosities; one of them is shown in Fig.~\ref{Couette_flow_noPress}, demonstrating excellent agreement with the analytical solution.
The second test is the same flow as that in Test 1 except that a pressure gradient of $\partial p/\partial x = -0.0001$ is specified, which is added to the right hand side of \eq{mlb-u.1} as $+ \delta x/(e \rho) \partial p/\partial x$ \cite{zhou.bk.2004}. Both plates are fixed, with zero velocities imposed at the top and bottom boundaries, where no calculations are needed. The flow is affected by viscosity and the analytical solution is
\begin{equation}
u(y) = \frac{u_0}{h} y + \frac{1}{2\rho \nu} \frac{\partial p}{\partial x} (y^2 - hy).
\label{Couette-PreGrad}
\end{equation}
I simulate this flow using three viscosities of $\nu_1 =0.003,\ \nu_2=0.001,$ and $\nu_3 = 0.0006$. The numerical results are plotted in Fig.~\ref{Couette_flow_Press_com_nu}, showing the effect of viscosity on the flow in excellent agreement with the analytical solutions. This confirms the distinctive feature that the model simulates viscous flow correctly through the use of \eq{mlb-viscosity}, even though no explicit viscous term appears in the update.
The third test is a 2D cavity flow, which is a well-known complex flow within a simple geometry. The domain is a $1 \times 1$ square. The boundary conditions are that the top lid moves at a velocity of $u_x = u_0$ and $u_y = 0$ with $u_0=1$; the other three sides are fixed, i.e., the no-slip boundary condition $u_x =0$ and $u_y = 0$ is applied. The Reynolds number is $R_e = u_0/\nu = 1000$. I use $\delta x = 0.0025$ or $400 \times 400$ lattices in the simulation, which is carried out on the inside of the cavity excluding the four sides, where the velocities are retained as boundary conditions. After the steady solution is obtained, the flow pattern in velocity vectors is shown in Fig.~\ref{2D_cavity_vec}, which closely agrees with the well-known study by Ghia et al. \cite{Ghia.etc:1982}. The results are further compared against their numerical solution for the velocity profiles of $u_x$ and $u_y$ along the $y$ and $x$ directions through the geometric centre of the cavity in Figs.~\ref{2D_cavity_dis-u} and \ref{2D_cavity_dis-v}, respectively, demonstrating very good agreement.
The fourth test is a 2D Taylor-Green vortex. This is an
unsteady flow of decaying vortices for which there is an exact solution of the incompressible Navier-Stokes equations, and it is often used to validate numerical methods for these equations.
The initial conditions are $u_x(x,y,0)=-u_0 \cos(x) \sin(y)$ and $u_y(x,y,0)=u_0 \sin(x) \cos(y)$. The analytical solutions are $u_x(x,y,t)=-u_0 \cos(x) \sin(y) \exp(-2\nu t)$ and \linebreak $u_y(x,y,t)=u_0 \sin(x) \cos(y) \exp(-2\nu t)$. The time for the unsteady flow is accumulated from the initial state in increments of the time step $\delta t$. I use $\delta x = 0.157$ or $40 \times 40$ lattices for the square domain of $2 \pi \times 2\pi$ with a kinematic viscosity of $\nu=0.0314$ and $u_0=0.05$, which gives a Reynolds number of $R_e=2\pi u_0/\nu=10$. Periodic boundary conditions are used. The simulation is
run for a total time of 30 seconds. The velocity field is
plotted in Fig.~\ref{TGV_vec_col.pdf}, showing the correct flow pattern. The velocity profiles for $u_x$ at $x=\pi$ and $u_y$ at $x=\pi /2$ along the $y$-direction are depicted and compared with the analytical solutions in Fig.~\ref{TGV_com_uv_col.pdf}, showing excellent agreement and confirming the accuracy of the method for an unsteady flow.
The final test is a 3D cavity flow. This is again a well-known complex flow involving 3D vortices within a simple cube with dimensions of $1 \times 1 \times 1$ in the streamwise direction $x$, spanwise direction $y$ and vertical direction $z$. No-slip boundary conditions, i.e., $u_x=0$, $u_y=0$ and $u_z=0$, are applied to the five fixed sides; at the top lid, $u_x=u_0$, $u_y=0$ and $u_z=0$ with $u_0=1$ are specified. The Reynolds number is $R_e = u_0/\nu = 400$. $\delta x = 0.004$ or a total of $250 \times 250 \times 250$ lattices are used, and the simulation is undertaken only within the cube excluding the boundaries, where the velocities are retained. After the steady solution is reached, the flow characteristics are displayed
through the two-dimensional planar projections of the velocity vector field on the $x$-$z$, $y$-$z$ and $x$-$y$ centroidal planes of the cube in Figs.~\ref{3D_Re400_vec_uw_col.pdf}, \ref{3D_Re400_vec_vw_col.pdf} and \ref{3D_Re400_vec_uv}, respectively, demonstrating flow patterns in good agreement with those of Wong and Baker \cite{Wong.Baker:2002}. In addition, the distribution of the velocity component $u_x$ on the vertical plane centerline is widely used as a 3D lid-driven cavity benchmark test. I compare this velocity component against the results by Wong and Baker \cite{Wong.Baker:2002} and also by Jiang et al. \cite{Jiang.etc:1994} in Fig.~\ref{3D Re400 dis-u-250_col.pdf}, showing good agreement.
In conclusion, the results demonstrate that the MacLAB is able to simulate fluid flows using the lattice size as the only parameter, making the LBM a particularly simple and precise lattice Boltzmann method. This takes research on the method into a new era, in which future work may focus on improving the accuracy of, or formulating new forms of, the local equilibrium distribution function. The particle speed is determined through the viscosity and lattice size, and the time step $\delta t$ is calculated as $\delta t = \delta x /e$. The model is unconditionally stable as long as the validity condition for the local equilibrium distribution function holds. All these features make the method an automatic simulator for fluid flows. The method can be extended straightforwardly to resolve other physical problems in different disciplines.
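To make the parameter choice concrete: with $\tau = 1$, \eq{mlb-viscosity} (see Methods, \eq{relation-chapman-viscosity}) gives $\nu = e \, \delta x / 6$, so $e = 6\nu/\delta x$ and $\delta t = \delta x / e$. A minimal sketch in Python, using the values of the second Couette simulation above:
\begin{verbatim}
def maclab_parameters(nu, dx):
    # With tau = 1, nu = e*dx/6, so the particle speed and time step are:
    e = 6.0 * nu / dx
    dt = dx / e
    return e, dt

# Couette test values: dx = 0.02, nu_2 = 0.001.
e, dt = maclab_parameters(nu=0.001, dx=0.02)
print(e, dt)  # e = 0.3, dt = 0.0667 (approximately)
\end{verbatim}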
\section*{Figures.}
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.5in]{Couette_flow_noPress_col.pdf}
\caption{Couette flow through two parallel plates without a pressure gradient. The distance between the plates is $h=1$. The top plate moves at a velocity of $0.1$ in the streamwise direction and the bottom plate is fixed, where no calculations are required. Periodic boundary conditions are applied at the inflow and outflow boundaries. $\delta x = 0.02$ is used for three simulations of flows with kinematic viscosities of $\nu_1 =0.01,\ \nu_2=0.001$ and $\nu_3=0.0006$, respectively. All the steady numerical results are almost identical and are independent of flow viscosity, as shown here by the comparison of one set of numerical results with the analytical solution.}
\label{Couette_flow_noPress}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.5in]{Couette_flow_Press_com_nu_col.pdf}
\caption{Couette flow through two parallel plates with a pressure gradient of $\partial p/\partial x = -0.0001$. The distance between the plates is $h=1$. Both plates are fixed with zero velocities at top and bottom boundaries where no calculations are needed. The steady numerical results are dependent on flow viscosity as confirmed in the simulations using the three viscosities of $\nu_1 =0.003,\ \nu_2=0.001,$ and $\nu_3 = 0.0006$. }
\label{Couette_flow_Press_com_nu}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{2D_cavity_vec_col.pdf}
\caption{2D cavity flow within a $1 \times 1$ square for $R_e = 1000$. The top lid moves at a velocity of $u_x = 1$ and $u_y = 0$ and the other three sides are fixed, i.e., the no-slip boundary condition is applied. After the steady solution is obtained, the flow pattern in velocity vectors shows a primary vortex and two secondary vortices.}
\label{2D_cavity_vec}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{2D_cavity_dis-u_col.pdf}
\caption{2D cavity flow within a $1 \times 1$ square for $R_e = 1000$. The top lid moves at a velocity of $u_x = 1$ and $u_y = 0$ and the other three sides are fixed, i.e., the no-slip boundary condition is applied. Shown is the comparison, after the steady solution is obtained, of the velocity $u_x$ profile along the $y$ direction through the geometric centre of the cavity with the numerical solution by Ghia et al. \cite{Ghia.etc:1982}. }
\label{2D_cavity_dis-u}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{2D_cavity_dis-v_col.pdf}
\caption{2D cavity flow within a $1 \times 1$ square for $R_e = 1000$. The top lid moves at a velocity of $u_x = 1$ and $u_y = 0$ and the other three sides are fixed, i.e., the no-slip boundary condition is applied. Shown is the comparison, after the steady solution is obtained, of the velocity $u_y$ profile along the $x$ direction through the geometric centre of the cavity with the numerical solution by Ghia et al. \cite{Ghia.etc:1982}. }
\label{2D_cavity_dis-v}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{TGV_vec_col.pdf}
\caption{Taylor-Green vortex within a $2\pi \times 2\pi$ domain for $R_e = 10$. The initial conditions are $u_x(x,y,0)=-u_0 \cos(x) \sin(y)$ and $u_y(x,y,0)=u_0 \sin(x) \cos(y)$ with $u_0=0.05$. Periodic boundary conditions are used. Here shown is the flow
pattern in velocity vectors at $t = 30$ seconds, retaining the same vortex pattern as in the initial state.}
\label{TGV_vec_col.pdf}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.6in]{TGV_com_uv_col.pdf}
\caption{Taylor-Green vortex within a $2\pi \times 2\pi$ domain for $R_e = 10$. The initial conditions are $u_x(x,y,0)=-u_0 \cos(x) \sin(y)$ and $u_y(x,y,0)=u_0 \sin(x) \cos(y)$ with $u_0=0.05$. Periodic boundary conditions are used. Here shown are
the comparisons of the relative velocity profiles for $u_x/u_0$ at $x=\pi$ and $u_y/u_0$ at $x=\pi/2$ with the analytical solutions $u_x(x,y,t)=-u_0 \cos(x) \sin(y) \exp(-2\nu t)$ and $u_y(x,y,t)=u_0 \sin(x) \cos(y) \exp(-2\nu t)$ at $t=30$ seconds.}
\label{TGV_com_uv_col.pdf}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{3D_Re400_vec_uw_col.pdf}
\caption{3D cavity flow within a $1 \times 1 \times 1$ cube for $R_e = 400$. The top lid moves at a velocity of $u_x = 1$, $u_y = 0$ and $u_z = 0$ and the other five sides are fixed, i.e., the no-slip boundary condition is applied. After the solution is reached, the flow pattern in vectors in the $x - z$ centroidal plane shows the primary and secondary vortices.}
\label{3D_Re400_vec_uw_col.pdf}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{3D_Re400_vec_vw_col.pdf}
\caption{3D cavity flow within a $1 \times 1 \times 1$ cube for $R_e = 400$. The top lid moves at a velocity of $u_x = 1$, $u_y = 0$ and $u_z = 0$ and the other five sides are fixed, i.e., the no-slip boundary condition is applied. After the solution is reached, the flow pattern in vectors in the $y - z$ centroidal plane shows one pair of strong secondary vortices at the bottom and one pair of weak secondary vortices at the top. }
\label{3D_Re400_vec_vw_col.pdf}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{3D_Re400_vec_uv_col.pdf}
\caption{3D cavity flow within a $1 \times 1 \times 1$ cube for $R_e = 400$. The top lid moves at a velocity of $u_x = 1$, $u_y = 0$ and $u_z = 0$ and the other five sides are fixed, i.e., the no-slip boundary condition is applied. After the solution is reached, the flow pattern in vectors in the $x - y$ centroidal plane shows a pair of tertiary vortices close to the inflow boundary.}
\label{3D_Re400_vec_uv}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{3D_Re400_dis_u_250_col.pdf}
\caption{3D cavity flow within a $1 \times 1 \times 1$ cube for $R_e = 400$. The top lid moves at a velocity of $u_x = 1$, $u_y = 0$ and $u_z = 0$ and the other five sides are fixed, i.e., the no-slip boundary condition is applied. Shown are the comparisons, after the solution is reached, of the distribution of the velocity component $u_x$ on the vertical plane centerline with the results by Wong and Baker \cite{Wong.Baker:2002} and Jiang et al. \cite{Jiang.etc:1994}.}
\label{3D Re400 dis-u-250_col.pdf}
\end{center}
\end{figure}
\clearpage
\vspace{1cm}
\noindent
{\bf METHODS}
\noindent
I present the details of the derivation of the present model. Setting $\tau = 1$ in \eq{lb.1} leads to
\begin{equation}
f_\alpha(x_j + e_{\alpha j} \delta t, t + \delta t) =f_\alpha^{eq}(x_j, t),
\label{MacLBM.d1} \end{equation}
which can be rewritten as
\begin{equation}
f_\alpha(x_j, t) = f_\alpha^{eq} (x_j - e_{\alpha j} \delta t, t - \delta t).
\label{MacLBM.d2} \end{equation}
Taking $\sum$ \eq{MacLBM.d2} and $\sum e_{\alpha i}$\eq{MacLBM.d2} yields
\begin{equation}
\sum f_\alpha(x_j, t) = \sum f_\alpha^{eq} (x_j - e_{\alpha j} \delta t, t - \delta t),
\label{MacLBM.d3} \end{equation}
and
\begin{equation}
\sum e_{\alpha i} f_\alpha(x_j, t) = \sum e_{\alpha i} f_\alpha^{eq} (x_j - e_{\alpha j} \delta t, t - \delta t),
\label{MacLBM.d4} \end{equation}
respectively. In the lattice Boltzmann method, the density and velocity are determined using the distribution function as
\begin{equation}
\rho(x_j, t)=\sum_\alpha f_\alpha (x_j, t), \hspace{13mm}
u_i (x_j, t) = \frac{1}{\rho} \sum_\alpha e_{\alpha i} f_\alpha (x_j, t).
\label{MacLBM.d5} \end{equation}
Combining \eq{MacLBM.d5} with \eqs{MacLBM.d3} and \eqn{MacLBM.d4} results in the current MacLAB, \eqs{mlb-p.1} and \eqn{mlb-u.1}. Since the local equilibrium distribution function $f_\alpha^{eq}$ has the features of
\begin{equation}
\sum_\alpha f_\alpha^{eq} (x_j, t) =\rho (x_j, t), \hspace{13mm}
\frac{1}{\rho} \sum_\alpha e_{\alpha i} f_\alpha^{eq} (x_j, t) = u_i (x_j, t),
\label{fea-0}
\end{equation}
with reference to \eq{MacLBM.d5}, the following relationships,
\begin{equation}
\sum_\alpha f_\alpha (x_j, t) = \sum_\alpha f_\alpha^{eq} (x_j, t) , \hspace{13mm}
\sum_\alpha e_{\alpha i} f_\alpha (x_j, t) = \sum_\alpha e_{\alpha i} f_\alpha^{eq} (x_j, t),
\label{fea-0add}
\end{equation}
hold; these are the conditions that ensure the conservation of mass and momentum in the lattice Boltzmann method.
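As an illustration of how simple the resulting update is, the following Python sketch implements one MacLAB step on the D2Q9 lattice of Fig.~\ref{SUPPL-D2Q9} with periodic streaming. The equilibrium used below is the standard D2Q9 form and is an assumption here, since \eq{feq-full} is not reproduced in this section; the update itself follows \eq{MacLBM.d1} and the moments follow \eq{MacLBM.d5}.
\begin{verbatim}
import numpy as np

# D2Q9 direction vectors (in units of the particle speed e) and weights;
# this standard nine-speed square lattice is an assumed concrete choice.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def feq(rho, ux, uy, e):
    # Standard D2Q9 local equilibrium (assumed form of feq-full).
    f = np.empty((9,) + rho.shape)
    usq = (ux**2 + uy**2) / e**2
    for a in range(9):
        cu = (c[a, 0] * ux + c[a, 1] * uy) / e
        f[a] = w[a] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return f

def maclab_step(rho, ux, uy, e):
    # One MacLAB update, Eq. (MacLBM.d1): form the equilibrium at (x, t)
    # and stream it one lattice spacing (periodic boundaries); then
    # recover the macroscopic variables from the moments, Eq. (MacLBM.d5).
    f = feq(rho, ux, uy, e)
    for a in range(9):
        f[a] = np.roll(f[a], shift=(c[a, 0], c[a, 1]), axis=(0, 1))
    rho = f.sum(axis=0)
    ux = (c[:, 0, None, None] * f).sum(axis=0) / rho
    uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
    return rho, ux, uy
\end{verbatim}
Solid walls are handled as in the test cases above: the boundary nodes are excluded from the update and their velocities are simply retained.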
Next, I prove that the continuity and Navier-Stokes equations can be recovered from \eqs{mlb-p.1} and \eqn{mlb-u.1}. Rewriting \eq{lb.1} as
\begin{equation}
f_\alpha(x_j , t) = f_\alpha(x_j - e_{\alpha j} \delta t, t - \delta t)
+ \frac{1}{\tau} [f_\alpha^{eq}(x_j - e_{\alpha j} \delta t, t - \delta t) -f_\alpha(x_j - e_{\alpha j} \delta t, t - \delta t) ].
\label{MacLBM-lb.d1} \end{equation}
Clearly, when $\tau = 1$, the above equation becomes \eq{MacLBM.d2}, which leads to \eqs{mlb-p.1} and \eqn{mlb-u.1}; hence \eq{MacLBM-lb.d1} is a general equation and is used in the following derivation.
Applying a Taylor expansion to the two terms on the right hand side of \eq{MacLBM-lb.d1} in time and space
at point $({\bf x}, t)$ yields
\begin{equation}
f_\alpha(x_j - e_{\alpha j} \delta t, t - \delta t)
=
f_\alpha(x_j , t) - \delta t \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) f_\alpha +
\frac{1}{2} \delta t^2 \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right )^2 f_\alpha
+ {\cal O} (\delta t^3),
\label{lb.3} \end{equation}
and
\begin{equation}
f_\alpha^{eq}(x_j - e_{\alpha j} \delta t, t - \delta t)
=
f_\alpha^{eq}(x_j , t) - \delta t \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) f_\alpha^{eq} +
\frac{1}{2} \delta t^2 \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right )^2 f_\alpha^{eq}
+ {\cal O} (\delta t^3).
\label{lb.3.1} \end{equation}
According to the Chapman-Enskog analysis, $f_\alpha$ can be expanded around $f_\alpha^{(0)}$
\begin{equation}
f_\alpha = f_\alpha^{(0)} + f_\alpha^{(1)} \delta t +
f_\alpha^{(2)} \delta t^2 + {\cal O} (\delta t^3).
\label{fa-ex.1} \end{equation}
After substituting \eqs{lb.3}, \eqn{lb.3.1} and \eqn{fa-ex.1} into \eq{MacLBM-lb.d1}, equating the coefficients yields, for the order $(\delta t)^0$,
\begin{equation}
f_\alpha^{(0)} = f_\alpha^{eq},
\label{Chpman-Enskog.01} \end{equation}
for the order $(\delta t)^1$
\begin{equation}
\left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) f_\alpha^{(0)}
=-\frac{1}{\tau} f_\alpha^{(1)} ,
\label{Chpman-Enskog.1} \end{equation}
and for the order $(\delta t)^2$
\begin{equation}
\left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) f_\alpha^{(1)}
- \frac{1}{2} \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right )^2 f_\alpha^{(0)}
=-\frac{1}{\tau} f_\alpha^{(2)} + \frac{1}{\tau} \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) f_\alpha^{(1)} .
\label{Chpman-Enskog.2} \end{equation}
Substitution of \eq{Chpman-Enskog.1} into the
above equation gives
\begin{equation}
\left (\frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) f_\alpha^{(1)}
- \frac{1}{2} \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right )
(-\frac{1}{\tau} f_\alpha^{(1)})
=-\frac{1}{\tau} f_\alpha^{(2)} + \frac{1}{\tau} \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) f_\alpha^{(1)} ,
\label{Chpman-Enskog.3} \end{equation}
which is rearranged as
\begin{equation}
\left ( 1- \frac{1}{2 \tau} \right ) \left (\frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right)
f_\alpha^{(1)}
=-\frac{1}{\tau} f_\alpha^{(2)} .
\label{Chpman-Enskog.4} \end{equation}
From \eq{Chpman-Enskog.1} +
\eq{Chpman-Enskog.4} $ \times \delta t$, I have
\begin{equation}
\left (\frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) f_\alpha^{(0)}
+ \delta t \left ( 1- \frac{1}{2 \tau} \right ) \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right )
f_\alpha^{(1)}
=
-\frac{1}{\tau} (f_\alpha^{(1)}+\delta t f_\alpha^{(2)}) .
\label{Chpman-Enskog.5} \end{equation}
Now taking $\sum$\eq{Chpman-Enskog.5} leads to
\begin{equation}
\sum \left (\frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) f_\alpha^{(0)}
= 0
\label{Chpman-Enskog.6} \end{equation}
as
\begin{equation}
\sum_\alpha f_\alpha^{(1)} = \sum_\alpha f_\alpha^{(2)} = \sum_\alpha e_{\alpha i} f_\alpha^{(1)} =
\sum_\alpha e_{\alpha i} f_\alpha^{(2)} = 0
\label{relation-chapman}
\end{equation}
due to the condition of conservation of mass and momentum \eq{fea-0}. Evaluating the terms in the above equation using \eq{feq-full} produces the exact continuity equation,
\begin{equation}
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_j)}{\partial x_j} = 0.
\label{relation-chapman-mass}
\end{equation}
Multiplying \eq{Chpman-Enskog.5} by $e_{\alpha i}$ provides
\begin{equation}
\left (\frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) e_{\alpha i} f_\alpha^{(0)}
+ \delta t \left ( 1- \frac{1}{2 \tau} \right ) \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right )
e_{\alpha i} f_\alpha^{(1)}
=
-\frac{1}{\tau} (e_{\alpha i} f_\alpha^{(1)}+\delta t e_{\alpha i} f_\alpha^{(2)}) .
\label{Chpman-Enskog.7} \end{equation}
Taking $\sum$\eq{Chpman-Enskog.7} leads to
\begin{equation}
\sum \left (\frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right ) e_{\alpha i} f_\alpha^{(0)}
+ \delta t \left ( 1- \frac{1}{2 \tau} \right ) \sum \left ( \frac{\partial}{\partial t}+e_{\alpha j}
\frac{\partial}{\partial x_j} \right )
e_{\alpha i} f_\alpha^{(1)}
= 0
\label{C-E.Moment.1} \end{equation}
under the same condition \eqn{relation-chapman} as that in the derivation of \eq{Chpman-Enskog.6}. Evaluating the terms in the above equation using \eq{feq-full} produces the exact momentum equation, the Navier-Stokes equation, at second-order accuracy on condition that the Mach number $M=U_c/e \ll 1$,
\begin{equation}
\frac{\partial (\rho u_i)}{\partial t} + \frac{\partial (\rho u_i u_j)}{\partial x_j} = -\frac{\partial p}{\partial x_i}
+ \nu \frac{\partial ^2 (\rho u_i)}{\partial x_j^2} ,
\label{relation-chapman-moment}
\end{equation}
where pressure $p$ is defined as
\begin{equation}
p=\frac{1}{3} \rho e^2
\label{relation-chapman-pressure}
\end{equation}
and the kinematic viscosity is
\begin{equation}
\nu =\frac{1}{6} ( 2 \tau - 1 ) e \delta x.
\label{relation-chapman-viscosity}
\end{equation}
Since $\tau$ is a constant, the choice $\tau = 1$ recovers the continuity and Navier-Stokes equations at second-order accuracy, as the above derivation shows. In this case, \eq{relation-chapman-viscosity} becomes \eq{mlb-viscosity}, which determines the particle speed $e$.
\vspace{1cm}
\noindent
{\bf ADDITIONAL INFORMATION}
\noindent
{\bf D2Q9 and D3Q19 Lattice Structures}.
The D2Q9 uniform square and D3Q19 cubic lattices are depicted in Figs.~\ref{SUPPL-D2Q9} and \ref{SUPPL-D3Q19}, respectively.
\begin{figure}[h]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=2in]{LBsquare_9-Mrt.pdf}
\caption{D2Q9 square lattice.}
\label{SUPPL-D2Q9}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=2.2in]{19speed-Cuboid.pdf}
\caption{D3Q19 cubic lattice.}
\label{SUPPL-D3Q19}
\end{subfigure}
\caption{Square and cubic Lattices for 2D and 3D flows.}
\label{D2Q9_D3Q19}
\end{figure}
|
2,869,038,153,849 | arxiv | \section{Introduction}
The B7 V star $\zeta^2$ Coronae Borealis (HD 139892, HR 5834, ADS
9737 A; $\alpha(2000) = 15^{\rm h}39^{\rm m}22\fs 66$, $\delta(2000) =
+36\arcdeg 38\arcmin 9\farcs 26$) has been known to be a double-lined
spectroscopic binary since the work of \cite{pla25}. $\zeta^2$ CrB{} is the
brighter component ($\Delta m = 1.0$) of the visual double star ADS
9737 with a separation of $6\farcs 4$ from the fainter component
$\zeta^1$ CrB (\cite{hof82}). Abhyankar \& Sarma (1966), hereafter
\cite{abh66}, refining the orbit of \cite{pla25}, found a period of
12.5842 days from 58 observations. They comment on the difficulty of
measuring the double broad lines of $\zeta^2$ CrB{}, which is evident in
their observed radial velocities' large deviations from their
calculated orbit (see Fig.\ 1 \& 2 of \cite{abh66}). As a result,
their orbit received a `d' (poor orbit) classification in the {\it 8th
Catalogue of the Orbital Elements of Spectroscopic Binary Systems}
(\cite{bat89}).
In an effort to understand the large deviations found by
\cite{abh66}, $\zeta^2$ CrB{} was added to the observing program at Ritter
Observatory. From the first few spectra of $\zeta^2$ CrB{}, we noted that
the lines associated with the two components of $\zeta^2$ CrB{} (A \& B) were
broadened by different amounts. As \cite{abh66} found the two
components of $\zeta^2$ CrB{} to have nearly identical masses and a
relatively short period, $\zeta^2$ CrB{} is expected to be tidally
interacting (\cite{tas92}). This tidal interaction is predicted to
synchronize the orbital and rotational periods of the binary.
Thus, $\zeta^2$ CrB{} makes possible a sensitive test of synchronization
theory, as the two components have similar masses yet show different
rotational velocities.
In $\S$2, we describe the observations. The measurement of the
radial velocities, the computation of the $v\sin i$'s, and the
determination of the orbital parameters of $\zeta^2$ CrB{} are detailed in
$\S$3. Section 4 contains a discussion of the results.
\section{Observations}
The observations were carried out between May 1994 and July 1996.
The Ritter Observatory 1-m telescope was used in conjunction with an
\'echelle spectrograph connected to the Cassegrain focus of the
telescope by a fiber optic cable. The spectrograph camera was
fabricated by Wright Instruments Ltd.\ and utilized a $1200 \times
800$ thick chip CCD with $22.5 \mu {\rm m}$ square pixels. The CCD
was liquid-nitrogen cooled to an operating temperature of 140 K. Data
acquisition was controlled by an IBM compatible personal computer
running software supplied by Wright Instruments, Ltd.
The reduction of the data was done in the Interactive Data Language
(IDL) with a specialized program written for Ritter Observatory
spectra (\cite{gor96}) based on methods detailed in \cite{hal94}. A
brief outline of the reduction process is as follows. The average
bias and flat field were constructed on a pixel-by-pixel basis
allowing the removal of cosmic ray hits. The average bias was
subtracted from the average flat field, object, and comparison frames.
The flat field was used to determine the order and background
templates. The background template was used to remove the scattered
light from the flat field, objects, and comparisons after fitting a
polynomial to the inter-order background on a column by column basis.
Cosmic ray hits in the objects and comparisons were removed from
consideration by comparison with the average flat field. The
normalized, smoothed, flat field profile was squared and multiplied by
the object profile, and the resulting function was summed in order to
obtain a profile-weighted extraction of the object spectrum. The
wavelength calibration was accomplished by scaling all the comparison
lines to one super-order and fitting a polynomial to the result,
iteratively removing points until a preset standard deviation was
achieved. Further information about Ritter Observatory (telescope,
instruments, reduction, and archive) can be found on the World Wide
Web (WWW) site {\em
http://www.physics.utoledo.edu/www/ritter/ritter.html}.
\begin{figure}[tbp]
\begin{center}
\plotfiddle{table1.ps}{5in}{0}{100}{100}{-300}{-200}
\end{center}
\end{figure}
The observations of $\zeta^2$ CrB{} are tabulated in Table I, with the UT
date, UT time, HJD, and exposure time given in columns 1--4. The
spectral coverage consisted of 9 disjoint orders between 5200 \AA\ and
6600 \AA\ with each order approximately 70 \AA\ wide. Lines of
interest were H$\alpha$, He I 5876 \AA, and Si II 6347,6371 \AA. The
slit width of the spectrograph was chosen to give a resolving power of
$R \approx 25,000$. With this resolving power, the average
signal-to-noise ratio (S/N) of the observations of $\zeta^2$ CrB{} was about
100. With this high S/N ratio, it was easy to distinguish the two
components of $\zeta^2$ CrB{} (narrow versus broad) by examining the
composite Si II 6371 \AA\ line. As the Si II 6371 \AA\ line is a weak
line in B7 V stars, it was important to get high S/N observations.
\section{Analysis}
\subsection{Radial Velocities}
The radial velocities were determined two ways, by fitting Gaussians
to individual lines and by cross-correlating the weak Si II 6371 \AA\
line. The lines fit to Gaussians were H$\alpha$, Si II 6347, 6371
\AA, and He I 5876 \AA. The results for each line were averaged
together after the rest wavelengths of H$\alpha$ and He I 5876 \AA\
were shifted until their radial velocities roughly matched the Si II
radial velocities. This was done as both H$\alpha$ and He I 5876 \AA\
are multiplets with their rest wavelengths dependent on the different
strengths of the individual lines in the multiplet.
The resulting average radial velocities determined from fitting
Gaussians were used in constructing templates for the
cross-correlation method. The templates were constructed in a manner
similar to Cardelli \& Ebbets (1993). Each observed spectrum was
assumed to include four sources - $\zeta^2$ CrB{} A (broad lines), $\zeta^2$ CrB{} B
(narrow lines), $\zeta^1$ CrB, and atmospheric lines. On nights with
poor seeing, $\zeta^1$ CrB contributed to the spectrum due to the
large size of the fiber ($d = 5\arcsec$). Atmospheric lines were
present in most of the spectra with strengths dependent on the water
content of the atmosphere. A template was constructed for $\zeta^2$ CrB{} A,
$\zeta^2$ CrB{} B, the average $\zeta^1$ CrB spectrum, and an average
atmospheric spectrum. Each template was constructed by iteratively
dividing each spectrum by the other three templates and then coadding
all the spectra in the rest frame of the template. The spectra used
in constructing the templates were those in which the broad and narrow
lines were separated by over 100 km s$^{-1}$. After 10 iterations, no
significant change in the templates was seen. These templates were
used to cross-correlate the individual spectra and determine radial
velocities used in the rest of this paper.
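A schematic sketch of this iterative separation (Python; illustrative only, with the per-source velocity shifts assumed known from the Gaussian fits and applied as simple pixel shifts):
\begin{verbatim}
import numpy as np

def build_templates(spectra, shifts, n_iter=10):
    # spectra : (n_obs, n_pix) array of normalized spectra.
    # shifts  : {source: list of per-observation pixel shifts}, one entry
    #           per source (A, B, zeta1 CrB, atmosphere).
    sources = list(shifts.keys())
    templates = {s: np.ones(spectra.shape[1]) for s in sources}
    for _ in range(n_iter):
        for s in sources:
            cleaned = spectra.copy()
            for t in sources:
                if t != s:  # divide out the other three templates
                    for i in range(len(cleaned)):
                        cleaned[i] /= np.roll(templates[t], shifts[t][i])
            # coadd all spectra in the rest frame of source s
            aligned = [np.roll(cleaned[i], -shifts[s][i])
                       for i in range(len(cleaned))]
            templates[s] = np.mean(aligned, axis=0)
    return templates
\end{verbatim}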
Table 1 lists the radial velocities determined from
cross-correlating the spectra with the $\zeta^2$ CrB{} A \& B templates.
Columns 5 \& 7 of Table 1 list the radial velocities for $\zeta^2$ CrB{} A \&
B, respectively. The cross-correlation worked well for the narrow
lines of $\zeta^2$ CrB{} B, but for the broad lines of $\zeta^2$ CrB{} A it was
difficult choosing the right peak in the cross-correlation spectrum.
As a result, the error in $\zeta^2$ CrB{} A's radial velocity measurements
was $\approx 10$ km s$^{-1}$ and the error for $\zeta^2$ CrB{} B's radial
velocities was $\approx 1.5$ km s$^{-1}$ (see $\S$3.3). While the
narrow line was not affected by pair blending in spectra when the two
lines were separated by less than 50 km s$^{-1}$, the broad line was
affected. For these spectra the radial velocities measured by either
method for the broad line were systematically too close to the
velocity of the narrow line.
\begin{figure}[tbp]
\begin{center}
\plotone{spectra_both.eps}
\caption{The Si II 6371 \AA\ line in the templates for $\zeta^2$ CrB{} A
(top) and $\zeta^2$ CrB{} B (bottom) is plotted. Note the difference between
the rotational broadenings of the line for $\zeta^2$ CrB{} A \&
B. \label{spectra:both}}
\end{center}
\end{figure}
\subsection{$V\sin i$}
As the templates were constructed from many individual spectra, they
had a S/N greater than 500, a large improvement over the individual
spectra. The region around the Si II 6371 \AA\ line in both templates
of $\zeta^2$ CrB{} A \& B is plotted in Fig.~{\ref{spectra:both}}. As can be
seen from Fig.~{\ref{spectra:both}}, the line in $\zeta^2$ CrB{} A's spectrum
has the elliptical shape expected from fairly fast rotation, while the
line in $\zeta^2$ CrB{} B's spectrum has a Gaussian shape expected from
fairly slow rotation. Using the templates, the half width at half
maximum (HWHM) was measured as $37 \pm 2$ km s$^{-1}$ for $\zeta^2$ CrB{} A
and $8 \pm 1$ km s$^{-1}$ for $\zeta^2$ CrB{} B. These measured HWHM were
instrumentally debroadened using an instrumental HWHM of 5.5 km
s$^{-1}$ resulting in HWHM of $36 \pm 2$ km s$^{-1}$ and $6 \pm 1$ km
s$^{-1}$ for $\zeta^2$ CrB{} A \& B, respectively. The $v\sin i$ values were
computed using the standard half-width method in which the unbroadened
line is assumed to be arbitrarily sharp but with a finite equivalent
width (\cite{col91}). The value of the limb-darkening coefficient,
$\alpha$, can vary between 0.0 and 1.0 with the standard value being
0.6. The values of $v\sin i$ corresponding to $\alpha =$ 0.0, 0.6, \&
1.0 were 41.7, 49.2, \& 51.0 km s$^{-1}$ for $\zeta^2$ CrB{} A and 6.7, 7.9,
\& 8.2 km s$^{-1}$ for $\zeta^2$ CrB{} B. The standard method for
computing $v\sin i$ has been shown to be accurate to the 10\% level
for slowly to moderately rotating ($< 125$ km s$^{-1}$) early-type
stars (\cite{col95}). Taking into account the error in the half width
and the range of possible values of $\alpha$, the values of $v\sin i$
were $46 \pm 7$ km s$^{-1}$ for $\zeta^2$ CrB{} A and $7.5 \pm 2$ km s$^{-1}$
for $\zeta^2$ CrB{} B.
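For completeness, the half-width method can be reproduced from the classical rotational broadening kernel for a delta-function intrinsic line. The sketch below (Python with SciPy) is illustrative rather than the exact procedure of \cite{col91}: it reproduces the quoted endpoint values (41.7 and 51.0 km s$^{-1}$ for $\alpha = 0$ and 1) to within rounding, while intermediate $\alpha$ values in the text may rest on tabulated calibrations and can differ slightly.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def rot_kernel(x, alpha):
    # Classical rotational broadening profile of a delta-function line;
    # x = (Delta lambda)/(Delta lambda_L), alpha = limb-darkening coeff.
    s = np.sqrt(max(1.0 - x * x, 0.0))
    return 2.0 * (1.0 - alpha) * s + 0.5 * np.pi * alpha * (1.0 - x * x)

def vsini_from_hwhm(hwhm, alpha):
    # Find where the kernel drops to half its central value, then scale.
    half = 0.5 * rot_kernel(0.0, alpha)
    x_half = brentq(lambda x: rot_kernel(x, alpha) - half, 1e-6, 1 - 1e-9)
    return hwhm / x_half

for alpha in (0.0, 1.0):
    print(alpha, vsini_from_hwhm(36.0, alpha))  # ~41.6 and ~50.9 km/s
\end{verbatim}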
\subsection{Orbit}
Due to the difficulty in measuring the radial velocities of broad
lines of $\zeta^2$ CrB{} A, all but one of the orbital parameters for
$\zeta^2$ CrB{} were determined from the radial velocities of $\zeta^2$ CrB{} B.
The K amplitude of $\zeta^2$ CrB{} A was determined by assuming the $\zeta^2$ CrB{}
B's orbital fit and only fitting the K amplitude of the radial
velocities of $\zeta^2$ CrB{} A. Orbital fits were done using the IDL
procedure CURVEFIT which was taken from Bevington (1969). A more
specific program for computing spectroscopic orbits by \cite{wol67}
was also run producing similar results.
Using all of $\zeta^2$ CrB{} B's radial velocities resulted in an orbit
with a period of 1.72 days, over a factor of 7 smaller than the
orbital period calculated by \cite{abh66}. The residuals calculated
from the 1.72 day orbit were as large as 20 km s$^{-1}$, much higher
than could be attributed to measurement errors. From measurements of
standard radial velocity stars taken on the same nights as the
$\zeta^2$ CrB{} observations, the radial velocity error for a narrow line was
about 0.2 km s$^{-1}$. These large residuals could only be a result
of another period in the radial velocities of $\zeta^2$ CrB{} B.
As B7 V stars are unlikely to have pulsations (\cite{wae91},
\cite{ste93}), a third star in $\zeta^2$ CrB{} was the most likely cause of
the residuals. In order to possess a stable orbit, the third star
would need to have a significantly longer period than the inner binary
consisting of $\zeta^2$ CrB{} A \& B. Therefore, an orbit for the inner
binary was fitted to observations closely spaced in time. There were
two such data sets, HJD = 2449490.711 to 2449515.616 and HJD =
2450197.806 to 2450268.659. The orbits fitted to these two data sets
had residuals on order of 1 km s$^{-1}$, a great improvement over the
previous fit using all the data. The orbits determined from the two
data sets were essentially the same, except for a 1 km s$^{-1}$ shift
of their systemic velocities, which was within the systemic velocity
errors. Combining both data sets resulted in an improved fit. An
orbit for $\zeta^2$ CrB{} A was found by adopting all the orbital parameters
from $\zeta^2$ CrB{} B's fit, except for the K amplitude which was fit using
$\zeta^2$ CrB{} A's radial velocities.
The residuals computed from observations where both $\zeta^2$ CrB{} A \&
B's radial velocities were at least 30 km s$^{-1}$ from the systemic
velocity were found to be correlated with a linear correlation
coefficient of 0.65. Bevington (1969) gives a 0.1\% probability that
such a correlation coefficient involving 26 points would arise
randomly. Thus, the residuals of $\zeta^2$ CrB{} A \& B have the same origin
and this provides concrete evidence that $\zeta^2$ CrB{} is actually a triple
system.
The outer binary, assumed to consist of the inner binary ($\zeta^2$ CrB{} A
\& B) and an unseen third star ($\zeta^2$ CrB{} C), was examined by looking
at the residuals of $\zeta^2$ CrB{} B's orbit. As the residuals of $\zeta^2$ CrB{}
A \& B were correlated, changes in $\zeta^2$ CrB{} B's residuals were the
result of $\zeta^2$ CrB{} C's orbit around $\zeta^2$ CrB{} A \& B. The phase
coverage was sufficient to derive an orbit, but this orbit was not
well determined due to the lack of observations between phases 0.95
and 1.05. The orbital fits to both $\zeta^2$ CrB{} B and $\zeta^2$ CrB{} AB were
refined iteratively by subtracting the contribution from one orbital
fit from $\zeta^2$ CrB{} B's radial velocities, fitting for the other orbit,
and repeating. After the fifth iteration little change was seen in
the fitted orbital parameters. The final orbital parameters for both
the inner binary and the outer binary are tabulated in Table 2.
Columns 6 \& 8 in Table 1 give the residuals (O-C) after subtracting
both inner and outer binary orbits. These residuals were used to
estimate the errors in an individual radial velocity measurement
giving 10.5 km s$^{-1}$ for $\zeta^2$ CrB{} A and 1.2 km s$^{-1}$ for
$\zeta^2$ CrB{} B. Figures~\ref{fig:inner_binary} \& \ref{fig:outer_binary}
plot the fitted orbits and radial velocities for both the inner and
outer binaries, respectively. The orbital fits for the outer and
inner orbits have been removed from the radial velocities plotted in
Figures 2 \& 3, respectively.
\begin{table}[tbp]
\begin{center}
{\sc TABLE 2} \\
{\sc Orbit Parameters} \\[0.1in]
\begin{tabular}{cccc} \tableline\tableline
& $\zeta^2$ CrB{} A & $\zeta^2$ CrB{} B & $\zeta^2$ CrB{} AB \\ \tableline
$V_o$ & \multicolumn{3}{c}{$-21.9 \pm 0.4$ km s$^{-1}$} \\ \tableline
$P$ [days] & \multicolumn{2}{c}{$1.72357 \pm 0.0001$} & $251.5 \pm 0.6$ \\
$T$ [days] & \multicolumn{2}{c}{$2450196.2793 \pm 0.0137$} &
$2449373.5 \pm 1.7$ \\
$e$ & \multicolumn{2}{c}{$0.013 \pm 0.002$} & $0.48 \pm 0.03$ \\ \tableline
$K$ [km s$^{-1}$] & $109.6 \pm 13.6$ & $121.2 \pm 0.3$ & $28.5 \pm 2.0$ \\
$\omega$ & $49\arcdeg \pm 3\arcdeg$ & $229\arcdeg \pm 3\arcdeg$ &
$191\fdg 8 \pm 2\fdg 9$ \\
$a\sin i$ [R$_{\sun}$] & $3.73 \pm 0.46$ & $4.13 \pm 0.01$ & $124 \pm 7$ \\
$m\sin^3 i$ [M$_{\sun}$] & $1.155 \pm 0.142$ & $1.045 \pm 0.142$ & \nodata \\
\tableline\tableline
\end{tabular}
\end{center}
\label{table:param}
\end{table}
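The derived quantities in Table 2 follow from the fitted elements through the standard spectroscopic relations. A short arithmetic check (Python; the numerical constants are the standard ones, with $K$ in km s$^{-1}$ and $P$ in days) reproduces $a\sin i$, $m\sin^3 i$, and the mass function quoted in $\S$3.4:
\begin{verbatim}
# a sin i [R_sun]   = 1.9766e-2 * K * sqrt(1 - e^2) * P
# m sin^3 i [M_sun] = 1.0361e-7 * (1 - e^2)^1.5 * (K1 + K2)^2 * K_other * P
# f(m) [M_sun]      = 1.0361e-7 * (1 - e^2)^1.5 * K^3 * P
K1, K2, P, e = 109.6, 121.2, 1.72357, 0.013        # inner binary (Table 2)
fac = 1.0361e-7 * (1 - e**2)**1.5 * (K1 + K2)**2 * P
print(fac * K2, fac * K1)                          # ~1.155 and ~1.045 M_sun
print(1.9766e-2 * (1 - e**2)**0.5 * P * K1)        # ~3.73 R_sun
print(1.9766e-2 * (1 - e**2)**0.5 * P * K2)        # ~4.13 R_sun
K, P3, e3 = 28.5, 251.5, 0.48                      # outer binary (Table 2)
print(1.0361e-7 * (1 - e3**2)**1.5 * K**3 * P3)    # ~0.41 M_sun (cf. 0.42)
\end{verbatim}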
\begin{figure}[tbp]
\begin{center}
\plotone{inner_orbit.eps}
\caption{The radial velocities for both $\zeta^2$ CrB{} A \& B are plotted
phased to the inner binary period, 1.72 days, given in Table 2. The
fitted orbit to the outer binary has been subtracted from the radial
velocities plotted. The solid lines are the fitted orbits using the
parameters listed in Table 2. Note the large errors in the radial
velocities of $\zeta^2$ CrB{} A when it is near the systemic velocity of the
system.
\label{fig:inner_binary}}
\end{center}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\plotone{outer_orbit.eps}
\caption{The radial velocities for $\zeta^2$ CrB{} B are plotted phased to
the outer binary period, 251.5 days, given in Table 2. The fitted
orbit to the inner binary has been subtracted from the radial
velocities plotted. The solid line is the fitted orbit using the
parameters listed in Table 2. \label{fig:outer_binary}}
\end{center}
\end{figure}
\subsection{Inclination}
In non-eclipsing, unresolved binaries a common method to determine
the inclination is to assume one of the stars is synchronously
rotating. This assumption implies the rotational and orbital periods
are equal and the rotational and orbital axes are parallel. The
equatorial velocity of the star is computed from an assumed stellar
radius and then the inclination is computed from the star's measured
$v\sin i$.
For $\zeta^2$ CrB{}, we applied this method assuming $\zeta^2$ CrB{} A (broad
lines) was synchronously rotating. Assuming $\zeta^2$ CrB{} B (narrow lines)
synchronously rotates resulted in an equatorial velocity for $\zeta^2$ CrB{} A
larger than its breakup velocity. The radius was assumed to be 2.38
R$_{\sun}$, but is only accurate to 50\% (\cite{and91}). Using this
radius and the range in $v\sin i$ calculated in $\S$3.2, the resulting
range in $\sin i$ was 0.62--0.76. Using these values of $\sin i$, the
range in masses was 4.83--2.64 M$_{\sun}$ for $\zeta^2$ CrB{} A and
4.37--2.39 M$_{\sun}$ for $\zeta^2$ CrB{} B. For a B7 V star, Andersen
(1991) gives a mass of 4.13 M$_{\sun}$ with an accuracy of 15\%.
Thus, consistent values for $\zeta^2$ CrB{} A \& B radii and masses were
possible if the limb-darkening coefficient, $\alpha$, is fairly low.
Assuming an $\alpha$ of 0.0 gives $i = 38\arcdeg$.
The mass function for the outer binary ($\zeta^2$ CrB{} AB \& C) was
computed to have a value of $0.42 \pm 0.11$ M$_{\sun}$. Assuming that
all three components of $\zeta^2$ CrB{} are coplanar and have the range in
$\sin i$ calculated above, the mass function can be solved for
$\zeta^2$ CrB{} C's
mass. The mass range for $\zeta^2$ CrB{} C was computed to be 8.5--3.5
M$_{\sun}$. Such a massive star would be visible in the spectrum of
$\zeta^2$ CrB{}. Due to the lack of observations between the critical phases
0.95 and 1.05 of the outer binary orbit, the fitted K amplitude is
likely to be too large. Reducing the K amplitude by a few km s$^{-1}$
would greatly reduce $\zeta^2$ CrB{} C's mass as the mass function is
proportional to $K^3$.
\section{Discussion}
The significant results of this work were the discovery that
$\zeta^2$ CrB{} is a triple system, the inner binary has a much shorter
period than previously thought (\cite{abh66}), and $\zeta^2$ CrB{} B is
rotating asynchronously.
The identification of $\zeta^2$ CrB{} as a triple system and the 1.72357
day inner binary period most likely explains the large residuals of
\cite{abh66}'s orbit. \cite{abh66} calculated such a different
period, 12.5842 days, most likely for two reasons. First, they only
calculated corrections to Plaskett's (1925) orbit. Second, they only
measured H and He lines which, as multiplets, are intrinsically
broadened. As a result, the H and He lines appear to have similar
widths making an identification of broad versus narrow difficult.
Only with high S/N spectra and a weak, intrinsically narrow line, such
as Si II 6371 \AA, were we able to distinguish consistently between
lines from $\zeta^2$ CrB{} A \& B. In fact, \cite{abh66}'s data are
consistent with our fitted orbits with only a small number of their
points having wrong identifications.
Using \cite{abh66}'s orbit, the lower limit on the masses of
$\zeta^2$ CrB{} A \& B ($m\sin^3 i$) were 9.9 M$_{\sun}$ and 9.4 M$_{\sun}$,
respectively. \cite{abh66}'s lower limits on the masses are over a
factor of two greater than the normal mass of a B7 V star, which is 4.13
M$_{\sun}$ (\cite{and91}). Using our orbit,
the lower limit on the masses of $\zeta^2$ CrB{} A \& B are $1.155 \pm
0.142$ M$_{\sun}$ and $1.045 \pm 0.142$ M$_{\sun}$, respectively.
These lower limits are consistent with the mass of a B7 V star
clearing up the contradiction implied by \cite{abh66}'s work.
The two components of the inner binary, $\zeta^2$ CrB{} A \& B, have equal
masses within the error bars, yet possess very different rotational
velocities. From the work of Tassoul \& Tassoul (1992) and Claret et
al.\ (1995), the circularization and synchronization time-scales were
computed to be $10^6$--$10^7$ and $10^4$ years, respectively. From
the above calculations, $\zeta^2$ CrB{} B should have synchronized its
rotation even before the inner binary circularized. Obviously, some
other process is keeping $\zeta^2$ CrB{} B from synchronizing. Claret \&
Gim\'enez (1995) were able to explain the asynchronous rotation in TZ
For as due to evolutionary effects. Similarly, evolutionary effects
probably explain the asynchronous rotation of $\zeta^2$ CrB{} B.
More observations of $\zeta^2$ CrB{} are needed, especially between phases
0.95 and 1.05 of the outer binary. These observations would greatly
refine the outer binary orbit, specifically its K amplitude and
eccentricity.
\acknowledgments
This work was possible only with the help of the Ritter technician
Bob Burmeister and the crack Ritter observing team. Team members
contributing observations to this effort were Jason Aufdenberg,
Michelle Beaver, Bruce Cantor, David Knauth, Alex Mak, Nancy Morrison,
Jens Petersohn, and both authors. We are also thankful for many
helpful conversations with Nancy Morrison and Bernard Bopp. Support
for observational research at Ritter Observatory is provided by NSF
grant AST-9024802 to B.\ W.\ Bopp and by The University of Toledo.
This research has made use of the Simbad database, operated at CDS,
Strasbourg, France.
|
2,869,038,153,850 | arxiv | \section{ INTRODUCTION }
You might drive a car in a foreign city and have to stop at many red
traffic lights. From the statistics of waiting times you probably can
quickly figure out which signals operate independently and which ones
operate the smart way with inductive-loop traffic detectors in a
so-called {\sl demand-actuated mode}. Thus, statistics of waiting times
bears crucial information how a system works, either having independent
elements that act randomly, or consisting of elements with long-range
connections that enable coupling and synchronization. In geophysics,
aftershocks have been found to exhibit different waiting time statistics
(Omori's law) than the main shocks of earthquakes (e.g., Omori 1895;
Bak et al.~2002; Saichev and Sornette 2006). In magnetospheric physics,
waiting time statistics is used to identify Poisson
random processes, self-organized criticality, intermittent turbulence,
finite system size effects, or clusterization, such as in
auroral emission (Chapman et al.~1998, 2001), the auroral electron jet (AE)
index (Lepreti et al.~2004; Boffetta et al.~1999), or in substorms at the
Earth's magnetotail (Borovsky et al.~1993; Freeman and Morley 2004).
Waiting time statistics is studied intensely in solar physics, where
most flares are found to be produced by a Poissonian random process, but
there are also so-called {\sl sympathetic flares} that have a causal connection
or trigger each other (e.g., Simnett 1974; Gergely and Erickson 1975;
Fritzova-Svestkova et al.~1976; Pearce and Harrison 1990; Bumba and Klvana
1993; Biesecker and Thompson 2000; Moon et al.~2002). Waiting time
statistics of solar flares was studied in hard X-rays (Pearce et al.~1993;
Biesecker 1994; Crosby 1996; Wheatland et al.~1998; Wheatland and Eddy 1998),
in soft X-rays (Wheatland 2000a; Boffetta et al.~1999; Lepreti et al.~2001;
Wheatland 2001; Wheatland and Litvinenko 2001; Wheatland 2006;
Wheatland and Craig 2006; Moon et al.~2001; Grigolini et al.~2002),
for coronal mass ejections (CMEs) (Wheatland 2003; Yeh et al.~2005;
Moon et al.~2003), for solar radio bursts (Eastwood et al.~2010), and for
the solar wind (Veltri 1999; Podesta et al.~2006a,b, 2007, Hnat et al.~2007;
Freeman et al.~2000; Chou 2001; Watkins et al.~2001a,b, 2002; Gabriel and
Patrick 2003; Bristow 2008; Greco et al. 2009a,b). In astrophysics,
waiting time distributions have been studied for flare stars
(Arzner and Guedel 2004) as well as for black-hole candidates, such as
Cygnus X-1 (Negoro et al.~1995). An extensive review on waiting time statistics
can be found in chapter 5 of Aschwanden (2010).
In this study we focus on waiting time distributions of solar flares
detected in hard X-rays. The most comprehensive sampling of solar flare
waiting times was gathered in soft X-rays so far, using a 25-year catalog
of GOES flares (Wheatland 2000a; Boffetta et al.~1999; Lepreti et al.~2001),
but three different interpretations were proposed, using the very same data:
(i) a not-stationary
(time-dependent) Poisson process (Wheatland 2000a), (ii) a shell-model
of MHD turbulence (Boffetta et al.~1999), or (iii) a L\'evy flight
model of self-similar processes with some memory (Lepreti et al.~2001).
All three interpretations can produce a powerlaw-like distribution of
waiting times. On the other side, self-organized criticality models
(Bak et al. 1987, 1988; Lu and Hamilton 1991; Charbonneau et al.~2000) predict
a Poissonian random process, which has an exponential distribution of
waiting times for a stationary (constant) flare rate, but can produce
powerlaw-like waiting time distributions with a slope of $p \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} 3$
for nonstationary variations of the flare rate (Wheatland and Litvinenko
2002). Therefore, the finding of a powerlaw-like distribution of waiting
times of solar flares has ambiguous interpretations. The situation in
solar flare hard X-rays is very discordant: Pearce et al.~(1993)
and Crosby (1994) report powerlaw distributions of waiting times with
a very flat slope of $p \approx 0.75$, Biesecker (1994) reports a
near-exponential distribution after correcting for orbit effects,
while Wheatland et al.~(1998) finds a double-hump distribution with an
overabundance of short waiting times ($\Delta t \approx 10$ s - 10 min)
compared with longer waiting times ($\Delta t \approx 10-1000$ min), but
is not able to reproduce the observations with a nonstationary Poisson
process. In this study we analyze flare catalogs from HXRBS/SMM, BATSE/CGRO
and RHESSI and are able to model all observed hard X-ray waiting time
distributions with a unified model in terms of a nonstationary Poisson
process in the limit of high intermittency. We resolve also the
discrepancy between exponential and powerlaw-like waiting time distributions,
in terms of selected fitting ranges.
\section{ Bayesian Waiting Time Statistics }
The waiting time distribution $P(\Delta t)$ for a {\sl Poissonian
random process} can be approximated with an exponential distribution,
\begin{equation}
P(\Delta t) = \lambda e^{-\lambda \Delta t} \ ,
\end{equation}
where $\lambda$ is the mean event occurrence rate, with the probability
$\int P(\Delta t) d\Delta t=1$ normalized to unity. If the average event rate
$\lambda$ is time-independent, we call it a {\sl stationary Poisson process}.
If the average rate varies
with time, the waiting time distribution reflects a superposition of multiple
exponential distributions with different e-folding time scales, and may even
ressemble a powerlaw-like distribution. The statistics of such {\sl inhomogeneous}
or {\sl nonstationary Poisson processes} can be characterized with
{\sl Bayesian statistics}.
A nonstationary Poisson process may be subdivided into time intervals where
the occurrence rate is constant, within the fluctuations expected from Poisson
statistics, so it consists of piecewise stationary processes, e.g.,
\begin{equation}
P(\Delta t) = \left\{ \begin{array}{ll}
\lambda_1 e^{-\lambda_1 \Delta t} & {\rm for} \ t_1 \le t \le t_2 \\
\lambda_2 e^{-\lambda_2 \Delta t} & {\rm for} \ t_2 \le t \le t_3 \\
............... & ....... \\
\lambda_n e^{-\lambda_n \Delta t} & {\rm for} \ t_n \le t \le t_{n+1}
\end{array} \right.
\end{equation}
where the occurrence rate $\lambda_i$ is stationary during a time interval
$[t_i, t_{i+1}]$, but has different values in subsequent time intervals. The
time intervals $[t_i, t_{i+1}]$ where the occurrence rate is stationary are
called {\sl Bayesian blocks}, and we talk about {\sl Bayesian statistics}
(e.g., Scargle 1998). The variation of the occurrence rates $\lambda_1,
\lambda_2, ..., \lambda_n$ can be defined with a new time-dependent function
$\lambda(t)$ and the probability function of waiting times becomes
itself a function of time (e.g., Cox \& Isham 1980; Wheatland et al.~1998),
\begin{equation}
P(t, \Delta t) = \lambda(t+\Delta t)
\exp{\left[-\int_t^{t+\Delta t} \lambda(t') dt' \right]} \ .
\end{equation}
If observations of a nonstationary Poisson process are made for the time
interval $[0, T]$, then the distribution of waiting times for that time
interval will be, weighted by the number of events $\lambda(t) dt$ in
each time interval $(t, t+dt)$,
\begin{equation}
P(\Delta t) = {1 \over N} \int_0^T \lambda(t) \ P(t, \Delta t) dt \ ,
\end{equation}
where the rate is zero after the time interval $t>T$, i.e., $\lambda(t>T)=0$,
and $N=\int_0^T \lambda(t) dt$. If the
rate is slowly varying, so that it can be subdivided into piecewise stationary
Poisson processes (into Bayesian blocks), then the distribution of waiting
times will be
\begin{equation}
P(\Delta t) \approx
\sum_i \varphi(\lambda_i) \lambda_i e^{-\lambda_i \Delta t} \ ,
\end{equation}
where
\begin{equation}
\varphi(\lambda_i)={\lambda_i t_i \over \sum_j \lambda_j t_j} \ ,
\end{equation}
is the fraction of events associated with a given rate $\lambda_i$
in the (piecewise) time interval $t_i$ (or Bayesian block) over which
the constant rate $\lambda_i$ is observed. If we make the transition from
the summation over discrete time intervals $t_i$ (Eqs.~5 and 6) to a
continuous integral function over the time interval $[0<t<T]$, we obtain,
\begin{equation}
P(\Delta t) =
{\int_0^T \lambda(t)^2 e^{-\lambda(t) \Delta t} dt
\over \int_0^T \lambda(t) dt} \ .
\end{equation}
When the occurrence rate $\lambda(t)$ is not a simple function,
the integral Eq.~(7) becomes intractable, in
which case it is more suitable to substitute the integration variable $t$
with the variable $\lambda$. Defining $f(\lambda)=(1/T) dt(\lambda)/d\lambda$
as the fraction of time that the flaring rate is in the range $(\lambda,
\lambda+d\lambda)$, or $f(\lambda) d\lambda = dt/T$,
we can express Eq.~(7) as an integral of
the variable $\lambda$,
\begin{equation}
P(\Delta t) =
{\int_0^{\infty}
f(\lambda) \lambda^2 e^{-\lambda \Delta t} d\lambda
\over \int_0^{\infty} \lambda f(\lambda)\ d\lambda} \ ,
\end{equation}
where the denominator $\lambda_0=\int_0^\infty \lambda f(\lambda) d\lambda$
is the mean rate of flaring.
\medskip
Let us make some examples. In Fig.~1 we show five cases: (1) a stationary
Poisson process with a constant rate $\lambda_0$; (2) a two-step process with
two different occurrence rates $\lambda_1$ and $\lambda_2$;
(3) a nonstationary Poisson process with a piece-wise linearly increasing
occurrence rate $\lambda(t)=2\lambda_0 (t/T)$, varying like a triangular
function for each cycle, (4) piece-wise exponential functions,
and (5) piece-wise exponential function steepened by a reciprocal factor.
For each case we show the time-dependent occurrence rate $\lambda(t)$ and
the resulting probability distribution $P(\Delta t)$ of
events. We see that a stationary Poisson process produces an exponential
waiting time distribution, while nonstationary Poisson processes with a
discrete number of occurrence rates $\lambda_i$ produce a superposition of
exponential distributions, and continuous occurrence rate functions
$\lambda(t)$ generate powerlaw-like waiting time distributions
at the upper end.
We can calculate the analytical functions for the waiting time distributions
for these five cases. The first case is simply an exponential function as given
in Eq.~(1) because of the constant rate $\lambda(t)=\lambda_0$,
\begin{equation}
P(\Delta t) = \lambda_0 e^{-\lambda_0 \Delta t} \ .
\end{equation}
The second case follows from Eq.~(5) and yields
\begin{equation}
P(\Delta t) =
{1 \over 10} \lambda_1 e^{-\lambda_1 \Delta t} +
{9 \over 10} \lambda_2 e^{-\lambda_2 \Delta t} \ .
\end{equation}
The third case can be integrated with Eq.~(7). The time-dependent flare
rate grows linearly with time to a maximum rate of $\lambda_m=2 \lambda_0$
over a time interval $T$, with a mean rate of $\lambda_0$,
\begin{equation}
\lambda(t) = \lambda_m \left( { t \over T} \right) =
2 \lambda_0 \left( { t \over T} \right)
\end{equation}
Defining a constant $a=-\lambda_m(\Delta t/T)$, the integral of Eq.~(7) yields
$P(\Delta t) = (2 \lambda_m/T^3) \int_0^T t^2 e^{at} dt$. The integral
$\int x^2 e^{ax} dx= e^{ax}(x^2/a-2x/a^2+2/a^3)$ can be obtained from an
integral table. The analytical function of the waiting time distribution
for a linearly increasing occurrence rate is then
\begin{equation}
P(\Delta t) = 2 \lambda_m
\left[{2 \over (\lambda_m \Delta t)^3} -
e^{-\lambda_m \Delta t}
\left({1 \over (\lambda_m \Delta t)}
+{2 \over (\lambda_m \Delta t)^2}
+{2 \over (\lambda_m \Delta t)^3}\right)\right] \ ,
\end{equation}
which is a flat distribution for small waiting times and approaches a
powerlaw function with a slope of $p=3$ at large waiting times, i.e.,
$P(\Delta t) \propto \Delta t^{-3}$ (Fig.~1, third case).
The distribution is the same for a single linear ramp or a cyclic
triangular variation, because the total time spent at each rate
$[\lambda, \lambda+d\lambda]$ is the same.
The fourth case, which mimics the solar cycle,
has an exponentially growing (or decaying) occurrence rate, i.e.,
\begin{equation}
f(\lambda) =\left( { 1 \over \lambda_0} \right)
\exp{\left(-{\lambda \over \lambda_0} \right)} \ ,
\end{equation}
defined in the range of
$[0 < \lambda < \infty]$, and has a mean of $\lambda_0$. The waiting time
distribution can therefore be written with Eq.~(8) as
\begin{equation}
P(\Delta t) = \int_0^{\infty}
\left({ \lambda \over \lambda_0}\right)^2
\exp{\left(-{\lambda \over \lambda_0} [1 + \lambda_0 \Delta t]\right)}
d\lambda \ ,
\end{equation}
which corresponds to the integral $\int_0^{\infty} x^2 e^{ax} dx = -2/a^3$
using $a=-(1+\lambda_0 \Delta t)/\lambda_0$ and thus has the solution
$P(\Delta t) = -2/(a^3 \lambda_0^2)$, i.e.,
\begin{equation}
P(\Delta t) = {2 \lambda_0 \over (1 + \lambda_0 \Delta t)^3} \ .
\end{equation}
For very large waiting times $(\Delta t \gg 1/\lambda_0)$ it approaches
the powerlaw limit $P(\Delta t) \approx (2/\lambda_0^2) \Delta t^{-3}$
(see Fig.~1 fourth panel).
The fifth case has an exponential distribution of occurrence rates,
steepened by a reciprocal factor, i.e.,
\begin{equation}
f(\lambda) = \lambda^{-1} \
\exp{\left(-{\lambda \over \lambda_0} \right)} \ ,
\end{equation}
and fulfills the normalization $\int_0^\infty \lambda f(\lambda) d\lambda
= \lambda_0$. The waiting time distribution can then be written with Eq.~(8) as
\begin{equation}
P(\Delta t) = \int_0^{\infty}
\left({\lambda \over \lambda_0}\right) \
\exp{\left(-{\lambda \over \lambda_0} [1 + \lambda_0 \Delta t]\right)}
d\lambda \ ,
\end{equation}
which, with defining $a=-(1+\lambda_0 \Delta t)/\lambda_0$,
corresponds to the integral $\int x e^{ax} dx = (e^{ax}/a^2) (ax-1)$
and has the limit $\int_0^{\infty} x e^{ax} dx = 1/a^2$,
yielding the solution $P(\Delta t) = 1/(a^2 \lambda_0^2)$, i.e.,
\begin{equation}
P(\Delta t) = {\lambda_0 \over (1 + \lambda_0 \Delta t)^2} \ .
\end{equation}
For very large waiting times $(\Delta t \gg 1/\lambda_0)$ it approaches
the powerlaw limit $P(\Delta t) \approx (1/\lambda_0^2) \Delta t^{-2}$
(see Fig.~1 bottom panel).
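Both closed forms can be verified directly from the integrals (14) and (17), for example with a computer algebra system; a short SymPy check (Python; notation as in the text):
\begin{verbatim}
import sympy as sp

lam, lam0, dt = sp.symbols('lambda lambda_0 Delta_t', positive=True)

# Case 4, Eq. (14): f(lam) = exp(-lam/lam0)/lam0.
P4 = sp.integrate((lam/lam0)**2 * sp.exp(-(lam/lam0)*(1 + lam0*dt)),
                  (lam, 0, sp.oo))
print(sp.simplify(P4))  # -> 2*lambda_0/(1 + lambda_0*Delta_t)**3, Eq. (15)

# Case 5, Eq. (17): f(lam) = exp(-lam/lam0)/lam.
P5 = sp.integrate((lam/lam0) * sp.exp(-(lam/lam0)*(1 + lam0*dt)),
                  (lam, 0, sp.oo))
print(sp.simplify(P5))  # -> lambda_0/(1 + lambda_0*Delta_t)**2, Eq. (18)
\end{verbatim}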
Thus we learn from the last four examples
that most continuously changing occurrence rates produce powerlaw-like
waiting time distributions with slopes of $p \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} 2, ..., 3$ at large
waiting times, despite the intrinsic exponential distribution that
is characteristic of stationary Poisson processes. If the variability of
the flare rate is gradual (third and fourth case in Fig.~1),
the powerlaw slope of the waiting time distribution is close to
$p \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} 3$. However, if the variability of the flare rate
shows spikes like $\delta$-functions (Fig.~1, bottom), i.e., is highly
intermittent with short clusters of flares, the distribution of waiting
times has a slope closer to $p \approx 2$. This phenomenon is also
called clusterization and has analogs in earthquake statistics, where
aftershocks appear in clusters after a main shock (Omori's law).
Thus the powerlaw slope of waiting times contains essential information
on whether the flare rate varies gradually or in the form of intermittent
clusters.
\section{ DATA ANALYSIS }
We present an analysis of the waiting time distribution of solar flares
observed in hard X-rays, using flare catalogs obtained from the
{\sl Ramaty High Energy Solar Spectroscopic Imager (RHESSI)},
the {\sl Compton Gamma Ray Observatory (CGRO)}, and
the {\sl Solar Maximum Mission (SMM)}, and model also previously published
data sets from SMM (Pearce et al.~1993), GRANAT (Crosby 1996), and
ISEE-3 (Wheatland 1998).
\subsection{ RHESSI Waiting Time Analysis }
RHESSI (Lin et al.~2002) was launched on February 5, 2002, and still
operates at the time of writing, after a continuous observing period of 8 years.
The circular spacecraft orbit has a mean period of 96 min (1.6 hrs), a
mean altitude of 600 km, and an inclination of $38^\circ$ with respect
to the equator, so the observing mode is interrupted by gaps
corresponding to the spacecraft night (with a duration of about 35
percent of the orbit), as well as some other data gaps when flying
through the {\sl South Atlantic Anomaly (SAA)}. These data gaps
introduce some systematic clustering of waiting times around
an orbital period ($\approx 1.6$ hrs).
We are using the official RHESSI flare list (which can be downloaded
from the RHESSI webpage {\sl
http://hesperia.gsfc.nasa.gov/rhessidatacenter/}). We are using data
from the first six years of the mission (from February 12, 2002, to
December 31, 2007), when a uniform threshold for flare detection was
applied, while a lower threshold was applied after this period.
We include all flare events that have a solar origin, in the sense
that they could be imaged in the energy range of 12 to
25 keV. The complete event catalog (sampled up to Feb 1, 2010)
contains $n=52,014$ events, of which 12,379 have been confirmed to be
solar. The subset of confirmed solar flare events from the first six
years that we use comprises 11,594 events, which corresponds to a mean
flare rate of $<\lambda> \approx 5.5$ events per day during 2002-2007.
A time series of the
daily RHESSI flare rate is shown in Fig.2 (top panel). The mean annual
flare rate clearly drops systematically from 2002 to 2008, but the
daily fluctuations are larger than the slowly-varying mean trend. We
calculate the waiting times simply from the time difference between
two subsequent flare events,
\begin{equation}
\Delta t_i = t_i - t_{i-1} \ .
\end{equation}
If we plot the waiting time distribution on a log-log scale
(Fig.~2 bottom right), we see that the waiting time distribution
$N(\Delta t) = n P(\Delta t)$ can approximately be fitted with a
powerlaw function, i.e., $N(\Delta t) \propto \Delta t^{-p}$,
with a powerlaw slope of $p \approx 2.0$. However,
there is some clustering of events above one orbital period of
$\Delta t \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} t_{orbit} \approx 1.6$ hrs, as well as at the
second to fourth harmonics of the orbital
period (Fig.~2, bottom left), which arise
for several reasons. A first reason is that events that start at spacecraft
night cannot be detected until the spacecraft comes out of the night part of
the orbit, causing some clustering after one full orbital period.
A second reason is that large events that extend over more than one
full orbital period are double-counted in each subsequent orbit. These
instrumental biases have been modeled with Monte-Carlo simulations in previous
waiting time studies (e.g., Biesecker 1994), but instead of correcting for
these erroneous waiting times, we will just exclude the range of
$\Delta t \approx (0.5-2) \times t_{orbit}$ ($\approx$0.4-2.4 hrs) in the
fits of waiting time distributions.
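In code, the waiting time extraction and the exclusion of the
orbit-contaminated band amount to a few lines. The following Python
fragment (ours) uses placeholder event times, since the actual RHESSI
catalog entries are not reproduced here.
\begin{verbatim}
import numpy as np

# Placeholder flare start times in hours (stand-ins for catalog entries).
t_events = np.sort(np.array([0.0, 0.4, 2.9, 3.1, 9.8, 26.0, 27.2]))
dt = np.diff(t_events)             # waiting times between consecutive events

t_orbit = 1.6                      # RHESSI orbital period [hrs]
keep = (dt < 0.5 * t_orbit) | (dt > 2.0 * t_orbit)
print("all waiting times     :", dt)
print("outside excluded band :", dt[keep])
\end{verbatim}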
Thus our first result is that the waiting time distribution of RHESSI flares
is approximately consistent with a powerlaw distribution in the time range of
$\Delta t \approx 1-1000$ hrs, with a powerlaw slope of $p \approx 2.0$.
For a nonstationary Poisson process we expect powerlaw slopes in the
range of $p \approx 2.0, ..., 3.0$ (Fig.~1), so the measured
distribution is close to what the theory predicts for very intermittently
varying flare rates (Fig.~1, bottom). The high degree of intermittency
is indeed clearly recognizable in the data (Fig.~2, top panel), where
the short-term fluctuations are much faster than the long-term trend.
One might subdivide the flare activity into two states, i.e.,
{\sl ``quiescent''} and {\sl ``flare-active''} states, similar to the
quiescent periods (soft state) and active periods (hard state) of
pulses from accretion disk objects and black-hole candidates
(e.g., Cygnus X-1; Mineshige et al.~1994). We indicate quiescent periods
with waiting times $\Delta t \ge 5$ hrs in Fig.~2 (top panel), where it
can be seen that they occur in every month through all 8 years, but more
frequently during the (extended) solar minimum (2006-2008).
Thus, flare-active periods are very intermittent and do not last
contiguously over extended periods of time. The situation is also similar
to earthquakes, where aftershocks appear clustered in time intervals
after larger main shocks (Omori's law).
In a next step we investigate the effect of flux thresholds in the event
definition on the distribution of waiting times, an issue that has been
raised in previous studies (e.g., Buchlin et al.~2005; Hamon et al.~2002).
Hamon et al.~(2002) find for the Olami-Feder-Christensen model (Olami
et al.~1992), which is a cellular automaton model for systems with
self-organized criticality, that event definitions without any threshold
lead to stretched exponential waiting time distributions, while
threshold-selected events produce an excess of longer waiting times.
We investigate this problem simply by applying various flux thresholds
to the RHESSI flare catalog, e.g., $P=10, 100, 300, 1000, 3000$ cts s$^{-1}$,
and by re-sampling the waiting times for events above these flux thresholds.
The corresponding six waiting time distributions are shown in Fig.~3,
which contain $n=11,594$ events detected without a threshold,
and $n_T=9596$, 2038, 781, 271, and 108 events for the thresholded subsets.
The waiting time distributions clearly show an increasing excess of longer
waiting times with progressively higher thresholds, which is expected due
to the filtering
out of shorter time intervals between flares with weaker fluxes below the
threshold. Based on the reduction in the number $n$ of events as a function
of the flux threshold, we can make a prediction of how the mean waiting time
interval increases with increasing threshold, namely a reciprocal relationship,
since the total duration $T$ of waiting times is constant,
\begin{equation}
T = \sum_{i=1}^{n} \Delta t_i = <n> <\Delta t>
= <n_T> <\Delta t>_T \ .
\end{equation}
Thus, from the number of events $n_T$ detected above a selected
threshold we can predict the mean waiting time,
\begin{equation}
<\Delta t>_T = { <n> \over <n_T> } <\Delta t> \ .
\end{equation}
Using the full set of data with $<n>=11,594$ events and a mean waiting
time of $<\Delta t>=0.71$ hrs (Fig.~3, top left), we can predict the
distributions and average waiting times for the thresholded subsets,
based on their detected numbers $n_T$ using Eq.~(21):
$<\Delta t>_T = 0.9, 4.0, 10.5, 30.4, 76.3$ hrs. We fit our theoretical
model of the waiting time distribution (Eq.~18) of a nonstationary
Poisson process and predict the distributions for the threshold datasets,
using the scaling of the predicted average waiting times $<\Delta t>_T$.
The predicted distributions (thick curves in Fig.~3) match the observed
distributions of thresholded waiting times (histograms with error bars
in Fig.~3) quite accurately, and thus demonstrate how the waiting time
distribution changes in a predictable way when flux thresholds are used
in the event selection.
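The reciprocal scaling of Eq.~(21) can be reproduced directly from the
quoted sample sizes; the following check (ours) recovers the predicted
mean waiting times up to rounding of the input values.
\begin{verbatim}
n_full, dt_full = 11594, 0.71            # events, mean waiting time [hrs]
for n_T in [9596, 2038, 781, 271, 108]:  # events above each flux threshold
    print(f"n_T={n_T:5d}  <dt>_T = {n_full / n_T * dt_full:6.2f} hrs")
# -> 0.86, 4.04, 10.54, 30.37, 76.22 hrs, consistent with the values
#    0.9, 4.0, 10.5, 30.4, 76.3 hrs quoted in the text.
\end{verbatim}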
Regarding the mean flare waiting time we have to distinguish between the
theoretical model value $\Delta t_0$ and the mean detected interval
$<\Delta t>^{obs}$. The theoretical value is calculated based on a complete
distribution in the range of $\Delta t=[0, \infty]$. For RHESSI
data we obtain a value of $\Delta t_0=0.71$ hrs. The observational value
is approximately the total observing time span ($\approx 5.8$ yrs) multiplied
by the spacecraft duty cycle, which is about $q=0.65$ for RHESSI (based on
spacecraft nights with a length of 32-38 minutes), and divided by the
number of observed events $n$. We thus obtain a mean time interval of
$<\Delta t>^{obs} = T q / n = 5.8 \times 365 \times 24 \times 0.65
/ 11,594 \approx 3.0$ hrs. This value is about a factor of 4 longer
than the theoretical value. This discrepancy results from either
missing short waiting times predicted by the model (since
the observed distribution falls off at waiting times of $\Delta t \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$}
0.02$ hrs, i.e., $\approx 1$ minute), or because the model overpredicts
the maximum flare rate, and thus needs to be limited at the maximum
flare rate or lower cutoff of waiting times. Nevertheless, whatever
lower cutoff of observed waiting times or flux threshold is used,
the theoretical value $\Delta t_0$ always indicates where the break
occurs in the waiting time distribution, between the powerlaw part
and the rollover at short waiting times.
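For reference, the observational mean interval quoted above follows from
elementary arithmetic (a one-line check of ours):
\begin{verbatim}
T_yrs, q, n = 5.8, 0.65, 11594       # time span [yrs], duty cycle, events
print(T_yrs * 365 * 24 * q / n)      # -> 2.85 hrs, i.e., about 3 hrs
\end{verbatim}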
\subsection{ BATSE/CGRO Waiting Time Analysis }
The Compton Gamma Ray Observatory (CGRO) was launched on April 5, 1991, and
de-orbited on June 4, 2000. The CGRO spacecraft had an orbit with a period
of $t_{orbit} \approx 1.5$ hrs also, and thus is subject to similar data gaps
as we discussed for RHESSI, which causes a peak in the waiting time
distribution around $\Delta t \approx (0.5-1.1) t_{orbit}$.
We use the solar flare catalog from the {\sl Burst and Transient Source
Experiment (BATSE)} onboard CGRO, which is accessible at the NASA/GSFC
homepage {\sl http://umbra.nascom.nasa.gov/batse/}. Here we use a subset of
the flare catalog obtained during the maximum of the solar cycle between
April 19, 1991 and November 12, 1993, containing 4113 events during a span of
1.75 years, yielding a mean event rate of $<\lambda> \approx 6.4$ events per day.
BATSE has 8 detector modules, each one consisting of an uncollimated, shielded
NaI scintillation crystal with an area of 2025 cm$^{2}$, sensitive in the
energy range of 25 keV-1.9 MeV (Fishman et al. 1989).
The waiting time distribution obtained from BATSE/CGRO is shown in Fig.~4
(middle right), which ranges from $\Delta t \approx 0.01$ hrs up to
$\approx 200$ hrs. We fit the same theoretical model of the waiting
time distribution (Eq.~18) as for RHESSI, and the powerlaw slope of
$p=2$ fits equally well. Since the obtained mean waiting
time ($\Delta t_0 = 0.92$ hrs) is similar to that of RHESSI ($\Delta t_0=0.71$ hrs),
the flux thresholds for the selected flare events seem to be compatible.
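The fitted model has, up to normalization, the $p=2$ form
$N(\Delta t)\propto(1/\Delta t_0)/(1+\Delta t/\Delta t_0)^2$ quoted in the
Conclusions (item 4), identical to the fifth case above. A minimal fitting
sketch (ours, using synthetic data and assuming {\tt scipy} is available)
then reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def p_model(dt, dt0):
    # p = 2 limit of the nonstationary Poisson model (fifth case above).
    return (1.0 / dt0) / (1.0 + dt / dt0)**2

# Synthetic waiting times via inverse-CDF sampling; true dt0 = 0.9 hrs.
rng = np.random.default_rng(0)
u = rng.random(4000)
dt = 0.9 * u / (1.0 - u)

hist, edges = np.histogram(dt, bins=np.logspace(-2, 3, 30), density=True)
mid = np.sqrt(edges[1:] * edges[:-1])
ok = hist > 0
popt, _ = curve_fit(p_model, mid[ok], hist[ok], p0=[1.0])
print(f"fitted dt0 = {popt[0]:.2f} hrs (true value 0.9)")
\end{verbatim}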
\subsection{ HXRBS/SMM Waiting Time Analysis }
While RHESSI observed during the solar cycles \#23+24, CGRO observed the
previous cycles \#22+23, and the Solar Maximum Mission (SMM) the previous ones
\#21+22, none of them overlapping with each other. SMM was launched on
February 14, 1980 and lasted until orbit decay on December 2, 1989.
The {\sl Hard X-Ray Burst Spectrometer (HXRBS)} (Orwig et al.~1980) is
sensitive in the range of 20-260 keV and has a detector area of 71 cm$^{2}$.
The orbit of SMM had initially a height of 524 km and an inclination of
$18.6^\circ$, which causes similar data gaps as RHESSI and CGRO.
HXRBS recorded a total of 12,772 flares above energies of $\lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} 20$ keV
during a life span of 9.8 years, so the average flare rate is
$<\lambda> \approx 3.6$ flares per day, which is about half that of BATSE
and slightly less than RHESSI, which results from a combination of different
instrument sensitivities, energy ranges, and flux thresholds.
The waiting time distribution obtained from HXRBS/SMM is shown in Fig.~4
(top right), which has a similar range of $\Delta t \approx 0.01-500$ hrs
as BATSE and RHESSI. We fit the same waiting time distribution (Eq.~18)
with a powerlaw slope of $p = 2$ and find a similar best-fit value for
the average waiting time, i.e., $\Delta t_0=1.08$ hrs, so the sensitivity
and threshold are similar. The orbital effects also modify the observed
distributions in exactly the same way. Thus we can interpret the distributions
of waiting times for HXRBS in the same way as for BATSE and RHESSI, in terms
of a nonstationary Poisson process with high intermittency.
An earlier study on the waiting time distribution of HXRBS/SMM flares was
published by Pearce et al.~(1993), containing a subset of 8319 events
during the first half of the mission, 1980-1985 (Fig.~4, top left panel).
However, the published
distribution contained only waiting times in a range of $\Delta t=1-60$ min
($\approx 0.02-1.0$ hrs), and was fitted with a powerlaw function with a
slope of
$p=0.75\pm0.1$ (Pearce et al.~1993; Fig.~5 therein). We fitted the same
intermediate range of waiting times ($\Delta t \approx 0.1-1.0$ hrs)
and found similar powerlaw slopes, i.e. $p \approx 0.72$ for HXRBS,
$p = 0.84$ for BATSE, and $p=0.65$ for RHESSI, so all distributions seem
to have a consistent slope in this range.
These partial powerlaw fits are indicated with a straight line in the
range of $\Delta t \approx 0.02-1.0$ hrs in all six cases shown in Fig.~4.
However, the waiting time distribution published by Pearce et al.~(1993)
extends only over 1.7 orders of magnitude, and thus does not reveal the
entire distribution we obtained over about 5 orders of magnitude
with HXRBS, BATSE, and RHESSI. If we try to fit the same waiting time
distribution (Eq.~18) to the dataset of Pearce et al.~(1993), we obtain
a similar fit but cannot constrain the powerlaw slope for longer waiting
times.
\subsection{ WATCH/GRANAT Waiting Time Analysis }
A waiting time distribution of solar flares has earlier been published
using WATCH/GRANAT data (Crosby 1996), obtained from the Russian GRANAT
satellite, launched on December 1, 1989. GRANAT has a highly eccentric
orbit with a period of 96 hrs, a perigee of 2000 km, and an apogee of
200,000 km. Such a high orbit means that the spacecraft can observe the
Sun uninterrupted without Earth occultation (i.e., no spacecraft night),
which makes the waiting time distribution complete and free of data gaps.
WATCH is the {\sl Wide Angle Telescope for Cosmic Hard X-Rays}
(Lund 1981) and has a sensitive detector area of 95 cm$^2$ in the energy
range of 10-180 keV.
Waiting time statistics were gathered during four
time epochs: Jan-Dec 1990, April-Dec 1991, Jan-April 1992, and July 1992.
Crosby (1996) obtained a waiting time distribution for
$n=182$ events during these epochs with waiting times in the range of
$\Delta t \approx 0.04-15$ hrs. The distribution was found to be
a powerlaw with a slope of $p=0.78\pm0.13$ in the range of
$\Delta t \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} 3$ hrs, with an exponential fall-off in the range of
$\Delta t \approx 3-15$ hrs. We reproduce the measured waiting time
distribution of Crosby (1996, Fig.~5.16 therein) in Fig.~4 (middle left
panel) and are able to reproduce the same powerlaw slope of $p=0.78$
by fitting the same range of $\Delta t \approx 0.1-2.0$ hrs as fitted
in Crosby (1996). We fitted also the model of the waiting time distribution
for a nonstationary Poisson process (Eq.~18) and find a similar mean
waiting time of $\Delta t_0=0.97$ hrs (Fig.~4), although there are
no data for waiting times longer than $\lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} 15$ hrs. Thus, the
partial distribution measured by Crosby (1996) is fully consistent with
the more complete datasets analyzed from HXRBS/SMM, BATSE/CGRO, and
RHESSI.
\subsection{ ISEE-3/ICE Waiting Time Analysis }
Another earlier study on waiting times of solar flare hard X-ray bursts
was published by Wheatland et al.~(1998), which is of great interest here
because it covers the largest range of waiting times analyzed previously
and the data are not
affected by periodic orbital datagaps. The {\sl International
Sun-Earth/Cometary Explorer (ISEE-3/ICE)} spacecraft was inserted into a
``halo'' orbit about the libration point L1 some 240 Earth radii upstream
between the Earth and Sun. This special orbit guarantees uninterrupted
observations of the Sun without orbital data gaps. ISEE-3 had a
duty cycle of 70-90\% during the first 5 yrs of observations, falling
to 20-50\% later on. ISEE-3 detected 6919 hard X-ray bursts during the
first 8 years of its mission, starting in August 1978.
Wheatland et al.~(1998) used a flux selection of $> 4$ photons cm$^{-2}$
s$^{-1}$ to obtain a near-complete sampling of bursts, which reduced the
dataset to $n=3574$ events. The resulting waiting time distribution
extends over $\Delta t \approx 0.002-14$ hrs and could not be fitted with a
single nonstationary process, but rather exhibited an overabundance of
short waiting times $\Delta t \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} 0.2$ hrs (Wheatland et al.~1998).
We reproduce the measured waiting time distribution of Wheatland et
al.~(1998; Fig.~2 therein) in Fig.~4 (bottom left panel) and fit the
combined waiting time distribution of a nonstationary Poisson process
(Eq.~18) and an exponential random distribution for the short waiting
times (Eq.~1). We find a satisfactory fit with the two waiting
time constants $\Delta t_0=0.03$ hrs and $\Delta t_1=1.84$ hrs.
The primary component of $\Delta t_0=0.03$ hrs ($\approx 2$ min) is
much shorter than for any other dataset, which seems to correspond to
a clustering of multiple hard X-ray bursts per flare, but is consistent
with a stationary random process itself. The secondary component contains
45\% of the hard X-ray bursts and has a waiting time scale of
$\Delta t_1=1.84$ hrs. This component seems to correspond to the
generic waiting time distribution we found for the other data, within
a factor of two. Since there are no waiting times longer than 20 hrs,
the powerlaw section of the distribution is not well constrained for
longer waiting times, but seems to be consistent with the other
datasets.
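For illustration, the two-component description can be evaluated as
follows (our parameterization, using the two best-fit time constants and
the 55/45 split quoted above; the exact normalization conventions of
Eqs.~1 and 18 are not spelled out here, so the weighting is schematic).
\begin{verbatim}
import numpy as np

dt0, dt1, w = 0.03, 1.84, 0.55   # hrs, hrs, primary-component fraction

def N_model(dt):
    exp_part = (1.0 / dt0) * np.exp(-dt / dt0)        # stationary part
    ns_part = (1.0 / dt1) / (1.0 + dt / dt1)**2       # p = 2 part
    return w * exp_part + (1.0 - w) * ns_part

for dt in [0.01, 0.03, 0.1, 1.0, 10.0]:
    print(f"dt={dt:6.2f} hrs  N(dt) = {N_model(dt):9.4f}")
\end{verbatim}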
\section{ Conclusions }
We revisited three previously published waiting time distributions of
solar flares (Pearce et al.~1993; Crosby 1996; Wheatland et al.~1998)
using data sets from HXRBS/SMM, WATCH/GRANAT, ISEE-3/ICE, and analyzed
three additional datasets from HXRBS/SMM, BATSE/CGRO, and RHESSI.
While the previously published studies arrive at three different
interpretations and conclusions, we are able to reconcile all datasets
and the apparent discrepancies with a unified waiting time distribution
that corresponds to a nonstationary Poisson process in the limit of
high intermittency. Our conclusions are the following:
\begin{enumerate}
\item{Waiting time statistics gathered over a relatively small range
of waiting times, e.g., $\lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} 2$ decades as published by
Pearce et al.~(1993) or Crosby (1996), does not provide sufficient
information to reveal the true functional form of the waiting time
distribution. Fits to such partial distributions were found to be consistent
with a powerlaw
function with a relatively flat slope of $p\approx 0.75$ in the range of
$\Delta t \approx 0.1-2.0$ hrs, but this powerlaw slope does not extend
to shorter or longer waiting times, and thus is not representative of
the overall distribution of waiting times.}
\item{Waiting times sampled over a large range of $5-6$ decades all
reveal a nonstationary waiting time distribution with a mean waiting
time of $\Delta t_0 = 0.80\pm0.14$ hrs (averaged
from WATCH/GRANAT, HXRBS/SMM, BATSE/CGRO, and RHESSI)
and an approximate powerlaw slope of $p \approx 2.0$.
This value of the powerlaw slope is consistent with a theoretical
model of highly intermittent variations of the flare rate,
in contrast to a more gradually changing flare rate that would produce
a powerlaw slope of $p \approx 3.0$. Waiting time studies with solar
flares observed in soft X-rays (GOES) reveal powerlaw slopes in the
range from $p \approx 1.4$ during the solar minimum to $p \approx 3.2$
during the solar maximum (Wheatland and Litvinenko 2002), which would
according to our model correspond to rather gradual variation of the
flare rate with few quiescent periods during the solar maximum, but a
high intermittency with long quiescent intervals during the solar minimum.
The flare rate rarely drops to a low background level during
the solar maximum, so long waiting times are avoided and the frequency
distribution is steeper due to the lack of long waiting times.}
\item{For the dataset of ISEE-3/ICE there is an overabundance of
short waiting times with a mean of $\Delta t = 0.03$ hrs (2 min),
which seems to be associated with the detection of clusters of multiple
hard X-ray bursts per flare (Wheatland et al.~1998).}
\item{The threshold used in the detection of flare events modifies the
observed waiting time distribution in a predictable way. If a first
sample of $n_1$ flare events is detected with a low threshold and a second
sample of $n_2$ events with a high threshold, the waiting time distributions
of the form $N(\Delta t) = (1/\Delta t_{i}) / (1 + \Delta t/\Delta t_{i})^2$
relate to each other with a reciprocal value for the mean waiting time, i.e.,
$\Delta t_2/\Delta t_1 = (n_1/n_2)$. This relationship could be confirmed
for the RHESSI dataset for six different threshold levels.}
\end{enumerate}
The main observational result of this study is that the waiting time
distribution of solar flares is consistent with a nonstationary
Poisson process, with a highly fluctuating variability of the flare rate
during brief active periods, separated by quiescent periods with much
lower flaring variability. This behavior is
similar to the quiescent periods (soft state) and active periods (hard state)
of pulses from accretion disk objects and black-hole candidates
(Mineshige et al.~1994). The fact that solar flares are consistent
with a nonstationary Poissonian process
does not contradict interpretations in terms of
self-organized criticality (Bak et al. 1987, 1988; Lu and Hamilton 1991;
Charbonneau et al.~2000), except that the input rate or driver is
intermittent. Alternative interpretations, such as intermittent turbulence,
have also been proposed to drive solar flares (Boffetta et al.~1999;
Lepreti et al.~2001), but they have not demonstrated a theoretical model
that correctly predicts the waiting time distribution over a range of five
orders of magnitude, as observed here with five different spacecraft.
\acknowledgements {\sl Acknowledgements:}
This work is partially supported by NASA contract
NAS5-98033 of the RHESSI mission through University of California,
Berkeley (subcontract SA2241-26308PG) and NASA grant NNX08AJ18G.
We acknowledge access to solar mission data and flare catalogs from the
{\sl Solar Data Analysis Center (SDAC)} at the NASA Goddard Space Flight
Center (GSFC).
\section*{REFERENCES}
\def\ref#1{\par\noindent\hangindent1cm {#1}}
\ref{Arzner, K. and Guedel, M. 2004, ApJ 602, 363.}
\ref{Aschwanden, M.J., 2010, {\sl Self-Organized Criticality in Astrophysics.
Statistics of Nonlinear Processes in the Universe},
PRAXIS Publishing Inc, Chichester, UK, and Springer, Berlin
(in preparation).}
\ref{Bak, P., Tang, C., and Wiesenfeld, K. 1987, Phys. Rev. Lett. 59/27, 381.}
\ref{Bak, P., Tang, C., \& Wiesenfeld, K. 1988, Phys. Rev. A 38/1, 364.}
\ref{Bak, P., Christensen, K., Danon, L., and Scanlon, T. 2002,
Phys. Rev. Lett. 88/17, 178501.}
\ref{Biesecker, D.A. 1994, PhD Thesis, University of New Hampshire.}
\ref{Biesecker, D.A. and Thompson, B.J. 2000,
J. Atmos. Solar-Terr. Phys. 62/16, 1449.}
\ref{Boffetta, G., Carbone, V., Giuliani, P., Veltri, P., and Vulpiani, A.
1999, Phys. Rev. Lett. 83/2, 4662.}
\ref{Borovsky, J.E., Nemzek, R.J., and Belian, R.D. 1993,
J. Geophys. Res. 98, 3807.}
\ref{Bristow, W. 2008, JGR 113, A11202, doi:10.1029/2008JA013203.}
\ref{Buchlin, E., Galtier, S., and Velli, M. 2005, AA 436, 355.}
\ref{Bumba, V. and Klvana, M. 1993, Solar Phys. 199, 45.}
\ref{Chapman, S.C., Watkins, N.W., Dendy, R.O., Helander, P.,
and Rowlands, G. 1998, GRL 25/13, 2397.}
\ref{Chapman, S.C., Watkins, N., and Rowlands, G. 2001,
J. Atmos. Solar-Terr. Phys. 63, 1361.}
\ref{Charbonneau, P., McIntosh, S.W., Liu, H.L., and Bogdan, T.J. 2001,
Solar Phys. 203, 321.}
\ref{Chou, Y.P. 2001, Solar Phys. 199, 345.}
\ref{Cox, D. and Isham, V. 1980, {\sl Point Processes},
London: Chapman \& Hall.}
\ref{Crosby, N.B. 1996, PhD Thesis, University Paris VII, Meudon, Paris.}
\ref{Eastwood, J.P., Wheatland, M.S., Hudson, H.S., Krucker, S., Bale, S.D.,
Maksimovic, M., Goetz, K. 2010, ApJ 708, L95.}
\ref{Fishman, G.J., Meegan, C.A., Wilson, R.B., Paciesas, W.S., Parnell, T.A.,
Austin, R.W., Rehage, J.R., Matteson, et al.
1989, {\sl ``CGRO Science Workshop''}, Proc. Gamma Ray Observatory
Science Workshop, (ed. W.N.Johnson), NASA Document 4-366-4-373, GSFC,
Greenbelt, Maryland, p.2-39 and p.3-47.}
\ref{Freeman, M.P., Watkins, N. W., and Riley, D. J. 2000,
Phys. Rev. E, 62, 8794.}
\ref{Freeman, M.P. and Morley, S.K. 2004, GRL 31, L12807.}
\ref{Fritzova-Svestkova, L., Chase, R.C., and Svestka, Z. 1976,
Solar Phys. 48, 275.}
\ref{Gabriel, S.B. and Patrick, G.J. 2003, Space Sci. Rev. 107, 55.}
\ref{Gergely, T., and Erickson, W.C. 1975, Solar Phys. 42, 467.}
\ref{Greco, A., Matthaeus, W.H., Servidio, S., and Dmitruk, P. 2009a,
Phys. Rev. E 80, 046401.}
\ref{Greco, A., Matthaeus, W.H., Servidio, S., Chuychai, P., and Dmitruk, P.
2009b, ApJ 691, L111.}
\ref{Grigolini, P., Leddon, D., and Scafetta, N. 2002,
Phys. Rev. E 65, 046203.}
\ref{Hamon, D., Nicodemi, M., and Jensen, H.J. 2002, AA 387, 326.}
\ref{Hnat, B., Chapman, S.C., Kiyani, K., Rowlands, G., and Watkins, N.W. 2007,
Geophys. Res. Lett. 34/15, CiteID L15108.}
\ref{Lepreti, F., Carbone, V., and Veltri, P. 2001, ApJ 555, L133.}
\ref{Lepreti, F., Carbone, V., Giuliani, P., Sorriso-Valvo, L., and Veltri, P.
2004, Planet. Space Science 52, 957.}
\ref{Lin, R.P., Dennis, B.R., Hurford, G.J., Smith, D.M., Zehnder, A.,
Harvey, P.R., Curtis, D.W., Pankow, D. et al. 2002,
Solar Phys. 210, 3.}
\ref{Lu, E.T. and Hamilton, R.J. 1991, ApJ 380, L89.}
\ref{Lund, N. 1981, ApJSS 75, 145.}
\ref{Mineshige, S., Ouchi, N.B., and Nishimori, H. 1994,
Publ. Astron. Soc. Japan 46, 97.}
\ref{Moon, Y.J., Choe, G.S., Yun, H.S., and Park, Y.D. 2001,
J. Geophys. Res. 106/A12, 29951.}
\ref{Moon, Y.J., Choe, G.S., Park, Y.D., Wang, H., Gallagher, P.T., Chae, J.C.,
Yun, H.S., and Goode, P.R. 2002, ApJ 574, 434.}
\ref{Moon, Y.J., Choe, G.S., Wang, H., and Park, Y.D. 2003, ApJ 588, 1176.}
\ref{Negoro, H., Kitamoto, S., Takeuchi, M., and Mineshige, S. 1995,
ApJ 452, L49.}
\ref{Olami, Z., Feder, H.J.S., and Christensen, K. 1992,
Phys. Rev. Lett. 68/8, 1244.}
\ref{Omori, F., 1895, J. Coll. Sci. Imper. Univ. Tokyo 7, 111.}
\ref{Orwig, L.E., Frost, K.J., and Dennis, B.R. 1980, Solar Phys. 65, 25.}
\ref{Pearce, G. and Harrison, R.A. 1990, AA 228, 513.}
\ref{Pearce, G., Rowe, A.K., and Yeung, J. 1993,
Astrophys. Space Science 208, 99.}
\ref{Podesta, J.J., Roberts, D.A., and Goldstein, M.L. 2006a,
J. Geophys. Res. 111/A10, CiteID A10109.}
\ref{Podesta, J.J., Roberts, D.A., and Goldstein, M.L. 2006b,
J. Geophys. Res. 111/A9, CiteID A09105.}
\ref{Podesta, J.J., Roberts, D.A., and Goldstein, M.L. 2007, ApJ 664, 543.}
\ref{Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T.
1986, {\sl Numerical recipes. The art of scientific computing},
Cambridge University Press, Cambridge.}
\ref{Saichev, A. and Sornette,D. 2006, Phys. Rev. Lett. 97/7, id. 078501.}
\ref{Scargle, J. 1998, ApJ 504, 405-418.}
\ref{Simnett, G.M. 1974, Solar Phys. 34, 377.}
\ref{Veltri, P. 1999, Plasma Phys. Controlled Fusion 41, A787-A795.}
\ref{Watkins, N.W., Freeman, M.P., Chapman, S.C., and Dendy, R.O. 2001a,
J. Atmos. Solar-Terr Phys. 63, 1435.}
\ref{Watkins, N.W., Oughton, S., and Freeman, M.P. 2001b,
Planet Space Sci. 49, 1233.}
\ref{Wheatland, M.S., Sturrock, P.A., and McTiernan, J.M. 1998, ApJ 509, 448.}
\ref{Wheatland, M.S. and Eddey, S.D. 1998,
in Proc. Nobeyama Symposium, ``Solar Physics with Radio Observations'',
(eds. Bastian, T., Gopalswamy, N., and Shibasaki, K.),
NRO Report 479, p.357.}
\ref{Wheatland, M.S. 2000a, Astrophys. J. 536, L109.}
\ref{Wheatland, M.S. 2001, Solar Phys. 203, 87.}
\ref{Wheatland, M.S. and Litvinenko, Y.E. 2001, ApJ 557, 332.}
\ref{Wheatland, M.S. and Litvinenko, Y.E. 2002, Solar Phys. 211, 255.}
\ref{Wheatland, M.S. 2003, Solar Phys. 214, 361.}
\ref{Wheatland, M.S. 2006, Solar Phys. 236, 313.}
\ref{Wheatland, M.S. and Craig, I.J.D. 2006, Solar Phys. 238, 73.}
\ref{Yeh, W.J., and Kao, Y.H. 1984, Phys. Rev. Lett. 53/16, 1590.}
\clearpage
\begin{figure}
\plotone{f1.eps}
\caption{One case of a stationary Poisson process (top) and four cases of
nonstationary Poisson processes with two-step, linear-increasing,
exponentially varying, and $\delta$-function variations of the occurrence rate
$\lambda(t)$. The time-dependent occurrence rates
$\lambda(t)$ are shown on the left side, while the waiting time distributions
are shown in the right-hand panels, in form of histograms
sampled from Monte-Carlo simulations, as well as in form of the analytical
solutions (given in Eqs.~10-18). Powerlaw fits $N(\Delta t)
\propto \Delta t^{-p}$ are indicated with
a dotted line and labeled with the slope $p$.}
\end{figure}
\begin{figure}
\plotone{f2.eps}
\caption{{\sl Top:} Flare rate per day observed with RHESSI during
2002-2009, containing a total of 52,014 events.
Quiescent time intervals with $\Delta t > 5$ hrs are marked in the form of
a ``bar code'' at the top of the panel.
{\sl Bottom left:} The frequency distribution
of waiting times $N(\Delta t)$ is shown for short time intervals
$\Delta t \le 10$ hrs, which shows peaks at subharmonics of the
orbital period of $\approx 1.6$ hrs.
{\sl Bottom right:} The longer waiting time intervals
$\Delta t \approx 2-24$ hrs can be fitted with a powerlaw function
with a slope of $p = 2.0$.}
\end{figure}
\begin{figure}
\plotone{f3.eps}
\caption{Waiting time distributions for six different subsets of the data
selected by thresholds, $P \ge 0, 10, 100, 300, 1000, 3000$ cts s$^{-1}$.
The full distribution (with no threshold) is fitted with a model
of a nonstationary Poisson process (Eq.~18) with a powerlaw slope of $p=2$
(thick solid curve in top left panel). The same model functions are predicted
(using the scaling of Eq.~21) for the threshold-selected five subsets based
on the number of detected events above the threshold.}
\end{figure}
\begin{figure}
\plotone{f4.eps}
\caption{Waiting time distributions of six different datasets:
HXRBS/SMM (top left and right), WATCH/GRANAT (middle left),
ISEE-3/ICE (bottom left), BATSE/CGRO (middle right), and
RHESSI (bottom right). The distribution of the observed waiting times
are shown with histograms, fits of nonstationary Poisson processes with
dashed curves (with a powerlaw tail with slope of $p=2$), and the best fit
in the fitted range with thick solid curves.
Powerlaw fits in the range of $\Delta t \approx 0.1-2.0$ hrs as fitted in the
original publications (Pearce et al.~1993; Crosby 1996) are also shown
(straight line and slope $p$).
The excess of events with waiting times near the orbital period
($t_{orbit} \approx 1.6$ hrs) is an artificial effect and is not
included in the model fits.}
\end{figure}
\end{document}
\section*{Introduction}
\bigskip
In this paper we investigate some consequences of the Gross/Zagier
type formulae which were introduced by Gross and Zagier and then
generalized in various directions by Hatcher, Zhang, Kudla and
others \cite{GZ,Gross,Hatcher1,Zhang,MSRI}. Let us now recall these
formulae in the classical context. Denote by $K$ an imaginary
quadratic field of discriminant $-D$ say, with associated quadratic
character $\chi_{-D}=(\tfrac{-D}{\cdot})$, $\Psi$ a character of the
ideal class group ${\mathrm {Pic}}({\mathcal O}_{K})$ of $K$, ${\mathcal H}$ the upper
half plane,
and $g_{\Psi}$ the weight one theta series associated with $\Psi$:
$$g_{\Psi}(z)=\sum_{m\geq 0}r_{\Psi}(m)q^m, \, \, q=\exp(2\pi \iota z), z\in {\mathcal H},$$
where for $m\geq 1$
$$r_{\Psi}(m)=\sum_{N(\mathfrak{a})=m}\Psi(\mathfrak{a})$$
and $\mathfrak{a}\subset{\mathcal O}_{K}$ ranging over the ${\mathcal O}_{K}$-ideal of norm
$m$. We will denote the trivial character of ${\mathrm {Pic}}({\mathcal O}_{K})$ by
$1_K$.
Now let $f$ be a holomorphic new cuspform of level $N$ coprime with $D$,
trivial nebentypus and weight $2k$:
$$f(z)=\sum_{m\geq 1}a_{m}(f)q^{m}.$$
Depending on how the primes dividing $N$ split in $K$, the
Gross/Zagier formula expresses the central value at $s=k$ (or the
derivative of that value) of the Rankin-Selberg $L$-function
$$L(s,f,\Psi):=L(2s,\chi_{-D})\sum_{m\geq 1}a_{m}(f)r_{\Psi}(m)m^{-s}$$
in terms of an intersection/height pairing of the $f$-isotypic
component $e_{\Psi,f}$ of a cycle $e_{\Psi}$
living on some Hecke module $M=M_{k,N}$: Denoting this pairing by
$\peter{\cdot,\cdot}_{M}$ and the Petersson inner product on $S_{2k}(N)$ by
$$\peter{f,g}=\int_{Y_{0}(N)}f(z)\ov {g(z)}y^{2k-2}dxdy,$$ where $Y_0(N)$ denotes
the open modular curve $\Gamma_0(N)\backslash {\mathcal H}$, one has
\begin{equation}\label{GZformula}
c_{k,K}\frac{L^{(i)}(k,f,\Psi)}{\peter{f,f}}=\peter{e_{\Psi,f},e_{\Psi,f}}_{M}
\end{equation}
for some constant $c_{k,K}>0$ and the order of derivative
$i=i_{K,N}$ is $0$ or $1$ (depending on the sign of the functional equation).
Originally
the formula was proven as follows (for $i=0$):
let $M_{2k}(N)$ (resp. $S_{2k}(N)$) denote
the space of holomorphic forms (resp. cusp forms) of weight $2k$ level $N$ and
trivial nebentypus.
The map $$f\mapsto L(s,f,\Psi)$$ being linear on $S_{2k}(N)$, can be represented
by a kernel $f\mapsto \peter{f,G_{\Psi}}$ for some $G_{\Psi}\in M_{2k}(N)$ (same
for the first derivative).
By the Rankin-Selberg theory
$$L(k,f,\Psi)=\int_{Y_{0}(N)}f(z)g_{\Psi}(z)E_{2k-1}(z)y^{(2k+1)/2-2}dxdy$$
for a suitable holomorphic Eisenstein series
$E_{2k-1}$ of weight $2k-1$. The determination
of $G_{\Psi}$ amounts to first taking the trace from level $N^\prime=ND$
to $N$, and then computing the projection of $g_{\Psi}(z)E_{2k-1}(z)$ on
$M_{2k}(N)$. This can be done and one infers from the
computation of the Fourier expansion of $g_{\Psi}(z)E_{2k-1}(z)$, that the Fourier
coefficients $a_{m}(G_{\Psi})$ of $G_{\Psi}$ are relatively elementary
expressions involving the
arithmetical functions $r_{\Psi}$ and variants thereof: see below for an example.
On the other hand, using the theory of complex multiplication,
Gross and Zagier, and subsequently other people, showed by an auxiliary computation that
$$G_{\Psi}(z)=a_{0}(G_{\Psi})+\sum_{m\geq 1}\peter{T_{m}e_{\Psi},e_{\Psi}}_{M}q^m$$
where $T_{m}$ denote the $m$-th Hecke operator acting on the module $M$. The final
result follows then from a formal argument involving
the multiplicity one theorem. The main observation underlying this paper is that
the above computation provides formally an expression for the {\it average}
of the central values $L(k,f,\Psi)$. Namely, if $S^{new}_{2k}(N)$ denote the set of
arithmetically normalized new forms, then $\{f/\peter{f,f}^{1/2}\}_{f\in S^{new}_{2k}(N)}$
may be completed to an orthonormal basis of $S_{2k}(N)$. Then decomposing $G_{\Psi}$ along
such an orthonormal basis, and taking the $m$-th Fourier coefficient in the above
decomposition,
one deduces, for any $m\geq 1$,
$$
\sum_{f\in
S^{new}_{2k}(N)}\frac{L(k,f,\Psi)}{\peter{f,f}}a_{m}(f) \, = \,
a_{m}(G_{\Psi}) +{\mathcal A}_{\rm old}(m)+{\mathcal A}_{\rm
Eis}(m),
$$
where ${\mathcal A}_{\rm old}(m)$, resp. ${\mathcal A}_{\rm Eis}(m)$, is the contribution from the old forms,
resp. the Eisenstein series, of weight $2k$ and level $N$.
In principle, the Eisenstein series contribution could be evaluated explicitly, while the
old forms contribution could be computed by induction on $N$
by following the same scheme, though there is an added complication of finding a suitable orthonormal basis.
We shall consider here the nicest possible situation for which
these additional contributions
have a particularly simple expression, in fact where the old part vanishes! Therefore we
obtain,
by the first step of the proof of the Gross/Zagier type formulae, a simple expression
for the first moment
$$\sum_{f\in S^{new}_{2k}(N)}\frac{L(k,f,\Psi)}{\peter{f,f}}a_{m}(f).$$
Let us turn to a more specific example. Set $h=h_{K}=|{\mathrm {Pic}}({\mathcal O}_{K})|$, the class number of
$K$, $u=|{\mathcal O}_{K}^\times/\{\pm 1\}|$, and
$$R(m):=\begin{cases}h/2u, \, &\mbox{ $m=0$}\\
\sum\limits_\stacksum{\mathfrak{a}\subset{\mathcal O}_{K}}{N(\mathfrak{a})=m}\, 1, \, &\hbox{$m\geq 1$}
\end{cases},$$
Moreover extend, for any ideal class group character $\Psi$, the definition of $r_{\Psi}(m)$
to $m=0$ by setting
$$r_{\Psi}(0)=\begin{cases}0, &\hbox{if $\Psi\not=1_K$}\\
h/2u, &\hbox{if $\Psi=1_K$}.
\end{cases}$$
We also set
$$\sigma_{N}(m)=\sum_\stacksum{d|m}{(d,N)=1}d$$
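Although no numerics appear at this point of the text, the arithmetical
functions just introduced are straightforward to compute. The following
Python sketch (ours) enumerates the reduced binary quadratic forms of
discriminant $-D$, whose number is the class number $h$, and uses the
classical fact that the total number of representations of $m>0$ by all
reduced forms equals $2u\,R(m)$.
\begin{verbatim}
from math import isqrt

def reduced_forms(D):
    # Reduced positive-definite forms (a, b, c) with b^2 - 4ac = -D;
    # their number is the class number h(-D).
    forms = []
    for b in range(D % 2, isqrt(D // 3) + 1, 2):
        if (b * b + D) % 4:
            continue
        ac = (b * b + D) // 4
        for a in range(max(b, 1), isqrt(ac) + 1):
            if ac % a:
                continue
            c = ac // a
            forms.append((a, b, c))
            if 0 < b < a < c:
                forms.append((a, -b, c))   # distinct reduced partner
    return forms

def R(m, D, u=1):
    # Number of O_K-ideals of norm m: total representations of m by all
    # reduced forms, divided by 2u (u = 3 for D = 3, else u = 1 here).
    reps = 0
    for a, b, c in reduced_forms(D):
        X = isqrt(4 * c * m // D) + 1   # from c*Q(x,y) >= (D/4) x^2
        Y = isqrt(4 * a * m // D) + 1   # from a*Q(x,y) >= (D/4) y^2
        for x in range(-X, X + 1):
            for y in range(-Y, Y + 1):
                if a * x * x + b * x * y + c * y * y == m:
                    reps += 1
    return reps // (2 * u)

def sigma_N(m, N):
    return sum(d for d in range(1, m + 1) if m % d == 0 and d % N)

print(len(reduced_forms(23)))   # 3 = h(-23)
print(R(2, 7), R(4, 7))         # 2 and 3 ideals of norm 2, 4 in Q(sqrt(-7))
\end{verbatim}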
Specializing to a generalization by Hatcher \cite{Hatcher1,Hatcher2} of a formula of Gross
\cite{Gross}, we obtain
\begin{Theorem}\label{identity} Let $-D<0$ be an odd fundamental discriminant; let $N$ be a
prime which inert in $K=\mathbb Q(\sqrt{-D})$ and let $k\geq 2$ be an even integer.
For $\Psi$ a character of ${\mathrm {Pic}}({{\mathcal O}}_K)$, then for
any positive integer $m$, we have the following exact identity:
\begin{multline*}
(2) \, \quad \quad
\frac{(2k-2)!D^{1/2}u^2}{2\pi(4\pi)^{2k-1}}\sum\limits_{f \in
{\mathcal F}_{2k}(N)}
\frac{L(f, \Psi, k)}{\peter{f,f}}a_m(f) \, = \,\\
-\delta\frac{12h^2}{N-1}{\sigma_N(m)} +{um^{k-1}r_\Psi(m)h} +
u^2m^{k-1}\sum_{n=1}^{\frac{mD}{N}}\Phi_k(n,\Psi,N)
\end{multline*}
Here
$$
\Phi_k(n,\Psi,N) \, = \, d((n,D))\delta_1(\Psi)R(n)r_\Psi(mD-nN)P_{k-1}(1-\frac{2nN}{mD}),
$$
with $P_{k-1}$ denoting the $(k-1)$-th Legendre polynomial;
$\delta\in \{0,1\}$ is $1$ iff $(k,\Psi)=(1,1_K)$;
$\delta_1(\Psi)\in \{0,1\}$ is $1$ if $D$ is prime, and when $D$ is
composite, it is $1$ iff $\Psi^2=1_K$ and there exist ideals $\frak
a, \frak b$, of respective norms $mD-nN$ and $n$, such that, for a
prime ideal $Q$ congruent to $-N$ mod $D$, the class of $\frak
a\frak bQ$ is a square in ${\mathrm {Pic}}({\mathcal O}_{K})$.
\end{Theorem}
An asymptotic formula involving the average on the left was first
established for $k=1, \Psi=1_K$ by W.~Duke (\cite{Duke}), which
spurred a lot of other work, including that of Iwaniec and Sarnak
(\cite{IwaniecSarnak}) relating it to the problem of Siegel zeros
for $L(s,\chi_{-D})$. In the work of the second named author with
J.~Rogawski (\cite{RaRo}), a different proof of Duke's result was
given (for all weights), using Jacquet's relative trace formula
involving the integration of the kernel over the square of the split
torus, and in addition, the intervening measure was identified.
It is important to note that one obtains a {\it stability theorem}
when $N$ is sufficiently large compared with $D$ and $m$, and this
could perhaps be considered the most unexpected consequence of our
approach. Indeed, when $N>mD$, the sum on the far right of the
identity furnished by Theorem $1$ becomes zero, and our exact
average simplifies as follows:
\begin{Corollary} ({\rm Stability}) \, With the above notations and assumptions,
suppose moreover that $N>mD$. Then one has
\begin{multline*}
\frac{(2k-2)!D^{1/2}u^2}{2\pi(4\pi)^{2k-1}}\sum\limits_{f \in {\mathcal F}_{2k}(N)}
\frac{L(f, \Psi, k)}{\peter{f,f}}a_m(f) =\\
-\delta\frac{12h^2}{N-1}{\sigma_N(m)} +{um^{k-1}r_\Psi(m)h}
\end{multline*}
\end{Corollary}
We call the range $N>mD$, the {\it stable range}. As one can check
with other instances of the Gross/Zagier formulas, such as for the
derivative in the case of odd order of vanishing, this phenomenon
appears to be quite general. It has been recently generalized to
Hilbert modular forms of square-free level by B.~Feigon and
D.~Whitehouse (\cite{FW}), using the relative trace formula, now by
integrating the kernel over a non-split torus.
\medskip
When $\Psi=1_K$, we have the factorization
$$L(s,f,1_K)=L(s,f_K)=L(s,f)L(s,f\otimes \chi_{-D}),$$
where $f_K$ denotes the base change of $f$ to $K$, $L(s,f)$ the
Hecke $L$-function of $f$, and $f\otimes\chi_{-D}$ the twist of $f$
by $\chi_{-D}$. Thus for $m=1$ and $N>D$, we get the following
explicit identity involving the class number of $K$:
$$\frac{(2k-2)!D^{1/2}u}{2\pi(4\pi)^{2k-1}}\sum\limits_{f \in {\mathcal F}_{2k}(N)}
\frac{L(k,f)L(k,f\otimes\chi_{-D})}{\peter{f,f}}
={h}\bigl(1-\delta\frac{12h}{u(N-1)}\bigr)$$ In the weight 2 case,
as $N$ is taken to be a prime here, the cardinality of ${\mathcal
F}_2(N)$ is just the genus $g_0(N)$ of the compactification $X_0(N)$
of $Y_{0}(N)$. It is amusing to note that when $g_{0}(N)$ is zero,
one finds that
$$h=\frac{(N-1)u}{12},$$
implying that $h = 1$ when $(-D,N)$ is $(-3,5)$, $(-7,13)$,
$(-8,13)$ or $(-11,13)$, agreeing with known data. Similarly,
$X_0(11)$ is an elliptic curve $E/\mathbb Q$, and if we denote by $E_{-D}$
the $-D$-twist of $E$, we see, for $D=3$, that the algebraic special
value $A(1,E)A(1,E_{-3})$ is just $1/5$. In general one gets more
complicated identities, involving average central values, which are
all compatible with the Birch and Swinnerton-Dyer conjecture for
$E$, $E_{-D}$, and the Shafarevich-Tate groups {Sh}$(E)$,
{Sh}$(E_{-D})$.
\subsection{Application to the subconvexity problem}
We now discuss some simple applications of the above exact average
formula, the first one being a subconvex estimate for the central
values $L(k,f,\Psi)$. We refer to \cite{GAFA2000} for a general
discussion on the subconvexity problem. In the present case the
convexity bound is given by
$$
L(k,f,\Psi)\ll_{\varepsilon}(kND)^\varepsilon kN^{1/2}D^{1/2},\leqno(3)
$$
for any $\varepsilon>0$. We prove here
\begin{Corollary} \, Preserve the notations of Theorem \ref{identity}.
Then for any $\varepsilon>0$, we have
$$L(k,f,\Psi)\ll_{\varepsilon}(kDN)^\varepsilon kN^{1/2}D^{1/2}\bigl(\frac{1}{N^{1/2}}+\frac{N^{1/2}}{D^{1/2}}\bigr).$$
In particular this improves on convexity as long as
$$(kD)^\delta \leq N\leq D(kD)^{-\delta}$$
for some fixed $\delta>0$.
\end{Corollary}
Note that this breaks convexity for any fixed $k$, as long as $N$ is
between $D^\delta$ and $D^{1-\delta}$. The beauty is that we can
also vary $k$ in an appropriate region, obtaining a {\it hybrid
subconvexity}.
At this point we do not know of any application of these subconvex
estimates, but we are intrigued by them because they come for free
and seem to be hard to prove with the current methods of analytic
number theory (eg. see \cite{DFI,KMV}). Note also that such bounds
are fundamentally limited to the critical center $s=k$. For a
generalization to the Hilbert modular case, where $\Psi$ is allowed
to be any ray class character, see \cite{FW}.
\subsection{Application to non-vanishing problems}
Another line of application addresses the existence of $f$ for which
$L(k,f,\Psi)$ does not vanish. Indeed several variants of such
problems have been considered in the past by various methods
\cite{Duke,IwaniecSarnak,KM,OnoSkinner,Vatsal2}. Here we obtain
non-vanishing results which are valid with a fairly large uniformity
in the parameters, and again such uniformity seems hard to achieve
by purely analytic methods.
\begin{Theorem} Assumptions being as in Theorem \ref{identity}. Suppose that $$N\gg_{\delta} D^{1/2+\delta}$$
for some $\delta>0$. Then there exists $f\in S^{new}_{2k}(N)$ such that
The same conclusion holds as long as $N>D$ and either $k\not=1$ or
$\Psi\not=1_K$.
\end{Theorem}
When $\Psi=1_K$, we also obtain a non-vanishing result in a somewhat
greater range:
\begin{Theorem} Suppose $\Psi=1_K$, $k=1$ and
$$h<\frac{N-1}{12}.$$ Then there exist $f$ such that
$$L(k,f)L(k,f\otimes\chi_{-D})\not=0.$$
\end{Theorem}
Non-vanishing theorems of this kind, with an {\em explicit} dependence between $N$ and $D$
(like $N>D$ or $N-1>12h$), are of some interest. For instance, in the paper
\cite{Merel1}, Merel needs to consider the following problem: Given a prime $p$
and a character $\chi$ of conductor $p$ which is not even and quadratic, does there exist an
$f\in\mathcal{F}_{2}(p)$ such that $L(1,f\otimes\chi)\not=0$? In the appendix of that paper, the first
named author and E. Kowalski prove that this is the case when $p$ is greater than an
explicit but very large number. It has so far not been possible to check
all of the finitely many remaining cases numerically, although this has been done for
$p<1000$ \cite{Merel2}.
Closer to the main concern of the present paper, Ellenberg
\cite{Ellenberg1,Ellenberg2}
uses analytic methods to prove the non-vanishing of the twisted $L$-function
$L(1,f\otimes\chi_{-4})$ for some $f$ in ${\mathcal F}_{2}(N)$
for $N$ of the form $p^2$ or $2p^2$ ($p$ an odd prime) and with prescribed eigenvalues at the
Atkin/Lehner operators $w_{2},w_{p}$,
subject to an {\em explicit} lower bound on $p$. Ellenberg concludes from this the
non-existence of primitive integral solutions to the
generalized Fermat equation
$A^4+B^2=C^{p}$
as long as $p>211$; that this equation has only a finite number of primitive solutions is a theorem of
Darmon and Granville. Another related set of examples is in the work of Dieulefait and Urroz (\cite{DU}).
In a sequel to this paper under preparation (\cite{MiRa}), we will develop a suitable generalization of the
exact average formula to a class of composite levels $N$, and investigate similar questions by modifying the method.
This extension is subtle for three reasons: $N$ is not square-free,
$D$ is not odd, and $N,D$ are not relatively prime.
\subsection{Nonvanishing modulo $p$}
The exactness of the Gross/Zagier formulae even enables us to obtain {\it average non-vanishing
results} for the {\it algebraic part} of the $L(k,f,\Psi)$ modulo suitable primes $p$.
Again, such a question has been considered in the past, see for example
\cite{BJOSK,Vatsal2}. However, these earlier works
addressed the question of the existence of the non-vanishing of
$L(k,f,\Psi)$ mod $p$ when the form $f$ is {\em fixed} and when the character $\Psi$ varies. Here our results go
in the other direction as we fix $p$ and let $N$ and $f$ vary. Given
$f\in{\mathcal F}_{2k}(N)$ and $g_{\Psi}$ as above, we denote by $L^{\mathrm{alg}}(k,f,\Psi)$ the algebraic part of
$L(k,f,\Psi)$ (see section 5, (11), for a precise definition). It follows from the work of Shimura
that $L^{\mathrm{alg}}(k,f,\Psi)$ is an algebraic number satisfying the reciprocity law
$$L^{\mathrm{alg}}(k,f,\Psi)^\sigma=L^{\mathrm{alg}}(k,f^\sigma,\Psi^\sigma)$$
for any $\sigma$ automorphism of $\mathbb C$ \cite{Shimura}.
\begin{Theorem}\label{padic} Let $p>2k+1$ be a prime, $\mathcal P$ be a chosen place in $\ov{\mathbb Q}$
above $p$ and let $N,D$ be as in Theorem \ref{identity}.
Suppose moreover that $p$ does not divide $h=h_{-D}$, that $N>D$, and that $N$ is greater than some
absolute constant. Then there exists $f\in {\mathcal F}_{2k}(N)$ such that
$$L^{\mathrm{alg}}(k,f,\Psi)\not\equiv 0 \, \, (\mathrm{mod}\ {\mathcal P}).$$
\end{Theorem}
Naturally, the question of integrality of $L^{\mathrm{alg}}(k,f,\Psi)$ is
subtle, and our result only concerns the numerator of the
$L$-value. When $\Psi=1_K$, we also prove the following variant:
\begin{Theorem}\label{padic2}Notations and assumptions as in Theorem \ref{padic}. Suppose
moreover that $\Psi=1$ and $N>pD$.
Then there exists $f\in {\mathcal F}_{2k}(N)$ such that
$$\sqrt{D}(2\pi)^{-2k}\frac{L(k,f)L(k,f\otimes\chi_{-D})}{\langle f,f\rangle}a_{p}(f)\not\equiv 0 \, \, (\mathrm{mod}\ {\mathcal P}^{2k-1}).$$
\end{Theorem}
The assertion makes sense because the left hand side is (see
section 5.1) a $p$-unit times
$a_p(f)$ times $L^{\mathrm{alg}}(k,f,1_K)$.
\medskip
There are two fundamental periods $c^+(f)$ and $c^-(f)$ associated
to $f$ such that for any Dirichlet character $\nu$, the special
value $L^{\mathrm{alg}}(k,f\otimes\nu)$, defined as $L(k,f\otimes \nu)/c^{{\rm
sgn}(\nu(-1))}(f)$ times a simple factor (see section 5, (12)) is an
algebraic number. One gets the near-factorization
$$
\eta_fL^{\mathrm{alg}}(k,f,1_K) \, = \, L^{\mathrm{alg}}(k,f)L^{\mathrm{alg}}(k,f\otimes\chi_{-D}),
$$
where $\eta_f$ is essentially the order of the congruence module
considered by Hida, Wiles, Taylor, Flach, Diamond, and others, which
measures the congruences $f$ has with other modular forms modulo
$p$. The needed non-divisibility properties of $\eta_f$ (for
suitable $p$) are understood (at least) if $f$ is ordinary or $k=1$.
Now finally, let us suppose we are in the classical weight $2$
situation, i.e., with $\Psi=1_K$ and $k=1$.
\begin{Theorem}\label{padic3} Let $p$ be an odd prime not dividing $Dh_{-D}$,
with $D$ odd. Then there exist infinitely many newforms of $f$ of
prime level $N$ and weight $2$ such that
$$
{\rm num}\left(\frac{L^{\mathrm{alg}}(1,f\otimes\chi_{-D} )}{\eta_f}\right) \,
\not\equiv \, 0 \, \pmod p,
$$
where $\eta_f$ is the order of the congruence module of $f$.
\end{Theorem}
See section 5 for a discussion of $\eta_f$, which measures the
congruences which $f$ may have with other modular forms of the same
weight and level. An analogue of Theorem 6 should also hold, in a
suitable range of $p$, for forms of higher weight, and this question
will be taken up elsewhere.
\medskip
\subsection{Acknowledgement} Serge Lang always conveyed infectious excitement about
Mathematics to anyone he came into contact with, and he will be
missed. He was quite interested in the values of $L$-functions and
in the {\it divisibility properties} of arithmetic invariants, and
it is a pleasure to dedicate this article to him. The first author
would like to thank Caltech for its hospitality during the
preparation of this work. The second author would like to thank
Flach, Hida, Prasanna and Vatsal for helpful conversations
concerning the last part, and the National Science Foundation for
support through the grant DMS0402044.
\section{The weight $2$ case}
It may be instructive to explain why the exact average formula holds
in the weight $2$ case when $\Psi=1$. Let $B$ be a quaternion
division algebra over $\mathbb Q$, ramified only at $N$ and $\infty$, with
maximal order $R$. Put $Y$ is the associated rational curve such
that Aut$(Y) = B^\ast/\mathbb Q^\ast$. Put
$$
X = B^\ast\backslash Y \times \hat{B}^\ast/\hat{R}^\ast =
\cup_{j=1}^n \Gamma_j\backslash Y,
$$
where $\hat{B}^\ast=\prod\limits_p{}' B_p^\ast$ and
$\hat{R}^\ast=\prod\limits_p R_p^\ast$, with each $\Gamma_j$ being a
finite group. Then Pic$(X)$ identifies with $\{e_1, e_2, \ldots,
e_n\}$, where each $e_j$ is the class of $\Gamma_j\backslash Y$.
Since $N$ is inert in $K=\mathbb Q[\sqrt{-D}]$, there is an embedding
$f\in {\rm Hom}(K,B) = Y(K)$. It results in certain {\it Heegner
points} $x=(f,b)$ of discriminant $-D$ in $X$, with $b \in
\hat{B}^\ast/\hat{R}^\ast$. For any eigenform $f$, let $c_f$ denote
the $f$-component of $c = \sum_A x_A$, where $A$ runs over ideal
classes of $K$. Then by a beautiful theorem of B.~Gross ([G]),
providing an analogue for the $L$-value of the Gross-Zagier theorem
for the first derivative, one has
$$
\langle c_f, c_f\rangle \, = \,
u^2\sqrt{D}\frac{L(1,f)L(1,f\otimes\chi_{-D})}{(f,f)},
$$
where $\langle \cdot, \cdot\rangle$ is a natural {\it height
pairing} on Pic$(X)$. We have by orthogonality,
$$
\langle c,T_mc\rangle = \langle c_E,T_mc_E\rangle +\sum\limits_f
\langle c_f,T_mc_f\rangle,
$$
where $T_m$ is the operator corresponding to the $m$-th Hecke
operator on $M_2(N)$, $f$ runs over newforms in $M_2(N)$, and $E$
denotes the unique (holomorphic) Eisenstein series (of weight $2$
and level $N$). Using the fact that $f$ and $E$ are Hecke
eigenforms, and that $\langle c_E, c_E\rangle \, = \,
\frac{12h^2}{N-1}$, we get by averaging Gross's formula,
$$
u^2\sqrt{\vert D\vert}\sum\limits_f
\frac{L(1,f)L(1,f\otimes{\chi_{-D}})}{(f,f)}a_m(f) =
-\sigma_N(m)\frac{12h^2}{N-1} + \langle c, T_mc\rangle.
$$
One has
$$
\langle c,T_mc\rangle \, = \, \sum\limits_A\sum\limits_B \langle
x_B,T_mx_{AB}\rangle. $$ If we pick $q \equiv -N \pmod D$, with
$q{\mathcal O}_K = Q\overline Q$ in $K$, one sees that
$$
\sum\limits_B\langle x_B,T_mx_{AB}\rangle \, = \, uhR_A(m)
+\sum\limits_{n=1}^{mD/N} R_A(mD-nN)d((n,D))R_{\{QA\}}(n),
$$
with
$$
R_{\{QA\}}(n) = \vert\{I : N(I)=n, QAI \in {\rm
Pic}({{\mathcal O}}_K)^2\}\vert.
$$
Note that $R_{\{QA\}}(n)$ is just $R_A(n)$ when $D$ is prime. The
assertion of Theorem 1 now follows by summing over $A$. Moreover,
when $mD$ is less than $N$, $\sum\limits_B\langle
x_B,T_mx_{AB}\rangle$ simply equals $uhR_A(m)$, and this furnishes
Corollary 1 (stability) in the weight $2$ case.
\section{\bf Proof of the main identity for all $k\geq 1$}
\subsection{Preliminaries} \, For $N\geq 1$, let $M_{2k}(N)$ (resp
$S_{2k}(N)$) denote, as usual, the space of holomorphic modular
forms (resp. cusp forms) of weight $2k$ level $N$ and trivial
character. For $f\in M_{2k}(N)$, we write the Fourier expansion at
the infinite cusp as
\[
f(z)=\sum_{m\geq 0}a_m(f)q^m, q=\exp (2\pi\imath z).
\]
We denote by ${\mathcal F}_{2k}(N)$ the set of cuspidal new forms $f$ (normalized
in the usual way, so that the first Fourier coefficient $a_1(f)$ is
1). Whenever it converges, we denote the Petersson inner product on
$M_{2k}(N)$ by
\[
\langle f,g\rangle =\int_{Y_{0}(N)}f(z)\overline{g(z)}y^{2k-2}dxdy.
\]
Let $-D<0$ be an odd fundamental discriminant, $K=\mathbb Q (\sqrt{-D})$, ${\mathcal O}_K$ the
maximal order of $K$, ${\mathrm {Pic}}({\mathcal O}_K)$ the ideal class group, and $u=u_K=|
{\mathcal O}_K{}^\times |/2$. For any ideal class $A\in{\rm Pic}({\mathcal O}_K)$, define
\[
r_A(m)=\begin{cases}|\{\mathfrak{a}\subset{\mathcal O}_K,N(\mathfrak{a} )=m,\mathfrak{a}\in A\}| &\text{if }
m\geq 1\\
\frac{1}{2u} & \text{if }m=0
\end{cases}
\]
The theta series
\[
\theta_A(z)=\sum_{m\geq 0}r_A(m)q^m,q=\exp (2\pi\imath z)
\]
is a modular form of weight 1, level $D$ and central character
$\chi_{-D}$. Moreover, for any $\Psi\in\widehat{{\rm Pic}({\mathcal O}_K)}$,
put
\[
\theta_\Psi (z)=\sum_A\overline\Psi (A)\theta_A(z),
\]
whose Fourier coefficients are then given by
$$
a_m(\theta_\Psi) = \sum_A \overline\Psi(A)a_m(\theta_A).
$$
In particular, the constant term $a_0(\theta_\Psi)$ equals
$\frac{1}{2u}\sum_A \overline\Psi(A)$, which is, by orthogonality,
zero iff $\Psi\ne 1_K$, in which case $\theta_\Psi$ is a cusp form. Setting
\[
L(s,f,A):=\sum_{\stacksum{n\geq 1}{(n,N)=1}}\frac{\chi_{-D}(n)}{n^{1+2(s-k)}}\sum_{m\geq
1} \frac{a_m(f)r_A(m)}{m^s},
\]
one has
\[
L(s,f,\Psi )=\sum_{A\in{\mathrm {Pic}} ({\mathcal O}_K)}\Psi (A)L(s,f,A).
\]
Define a holomorphic function $G_A$ on the upper half plane
${\mathcal H}$, invariant under $z\rightarrow z+1$, by means of its
Fourier expansion at infinity:
\begin{equation}
G_A(z):=\sum^\infty_{m=0}b_{m,A}q^m,
\end{equation}
where
\begin{align}
b_{m,A}&=m^{k-1}\frac{h}{u}r_A(Dm)\\
&+m^{k-1}\sum^{mD/N}_{n=1}\delta (n)r_A(mD-nN)R_{(-NA)}(n)P_{k-1}\left (
1-\frac{2nN}{mD}\right )\nonumber
\end{align}
In this definition, $u$ and $R(n)=\sum_Ar_A(n)$ are as in the Introduction,
$\delta (n)$ is 1 (resp. 2) if $(n,D)$ is 1 (resp. $\neq 1$), and for $r\geq 0$,
$P_r$ is the $r$-th Legendre polynomial defined by
\[
P_r(x):=\frac{1}{2^r}\sum^{[r/2]}_{m=0}(-1)^m\begin{pmatrix}r\\m\end{pmatrix}
\begin{pmatrix}2r-2m\\r\end{pmatrix}x^{r-2m}.
\]
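As a numerical sanity check (ours), this closed formula, with the sum
starting at $m=0$, agrees with a standard library implementation of the
Legendre polynomials, assuming {\tt scipy} is available.
\begin{verbatim}
from math import comb
from scipy.special import eval_legendre

def P(r, x):
    # Closed-form Legendre polynomial, summing from m = 0.
    return sum((-1)**m * comb(r, m) * comb(2*r - 2*m, r) * x**(r - 2*m)
               for m in range(r // 2 + 1)) / 2**r

for r in range(6):
    assert abs(P(r, 0.3) - eval_legendre(r, 0.3)) < 1e-12
print("closed formula agrees with scipy for r = 0..5")
\end{verbatim}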
The following result, due to B. Gross, D. Zagier and R. Hatcher, is
crucial to us:
\begin{Theorem}
$G_A$ is a modular form of weight $2k$, level $N$, and trivial character;
it is cuspidal if $k>1$, and for every newform $f$ of weight $2k$ and level
$N$, we have
\[
L(k,f,A)=\frac{(4\pi )^{2k}}{2(2k-2)!D^{1/2}}(f,G_A).
\]
\end{Theorem}
For $k=1$, see [11], Prop. 9.1, and for general $k$, this is in [12],
Theorem 5.6 and [14], Theorem 5.2. (See also [13], where the case $D$ prime
is treated.)
\subsection{The exact average formula} Let
\[
E \, = \, E_{2,N} \, = \, \sum^\infty_{n=0}a_n(E)q^n
\]
denote a holomorphic Eisenstein series for $\Gamma_0(N)$ of weight
2. Since $N$ is prime, the modular curve $X_{0}(N)$ has only two
cusps, namely $\infty$ and 0. It then follows that $E$ is unique up
to scalar multiple, and so $E(z)/a_0(E)$ is well defined with
constant term 1 at $\infty$. To be specific, we will take
\[
E(z)=\frac{N-1}{12}+\sum^\infty_{m=1}\sigma_N(m)q^m,
\]
where $\sigma_N(m)=\sum_{d|m,(d,N)=1}d$.
For $A\in{\mathrm {Pic}} ({\mathcal O}_K)$, with $G_A$ being as in the previous section, put
\begin{equation}
G^{\rm cusp}_A(z):=G_A(z)-\delta_{k=1}\frac{b_{0,A}}{a_0(E)}E(z),
\end{equation}
with $\delta_{k=1}$ being 1 (resp. 0) if $k=1$ (resp. $k\neq 1$). Then
$G^{\rm cusp}_A$ is a holomorphic cusp form of level $N$, weight $2k$,
and trivial character, with coefficients $a_m(G^{\rm cusp}_A)$.
\medskip
\noindent{\bf Lemma 2.1.} {\it For $-D$ an odd fundamental
discriminant and $N$ a prime inert in $K$, we have, for any $m\geq
1$,
\begin{align*}
\frac{2(2k-2)!D^{1/2}}{(4\pi )^{2k}}\sum_{f\in{\mathcal F}_{2k}(N)}&\frac{L(k,f,A)}{\langle
f,f\rangle}a_m(f)\\
&=a_m(G^{\rm cusp}_A)=b_{m,A}-\delta_{k=1}\frac{b_{0,A}}{a_0(E)}a_m(E)
\end{align*}}
In order to prove this, we first need the following
\medskip
\noindent{\bf Lemma 2.2.} {\it Assume that $N$ is a prime which is
inert in $K=\mathbb Q (\sqrt{-D})$. Let $\varphi$ be any old form in
$S_{2k}(N)$. Then we have, for every $A\in{\mathrm {Pic}} ({\mathcal O}_K)$,
\[
(\varphi ,G^{\rm cusp}_A)=0.
\]}
There is nothing to prove when $k<6$, since $S_{2k}(1)$ is zero in
that case (cf. \cite{L}, for example). Such a lemma will not in
general hold for composite $N$.
\medskip
{\bf Proof of Lemma 2.2.} Since $\varphi$ is cuspidal, it suffices
to prove that $(\varphi ,G_A)=0$. Put
\[
G_\Psi :=\sum_{A\in{\mathrm {Pic}} ({\mathcal O}_K)}\Psi (A)G_A
\]
which is a modular form of weight $2k$, level $N$, and trivial character. It is sufficient
to show that $(\varphi ,G_\Psi )=0$ for all ideal class characters $\Psi$ of $K$.
If $\varphi =\sum^\infty_{n=1}a_n(\varphi )q^n$, put
\begin{equation}
D(s,\varphi \times\theta_\Psi )=\sum^\infty_{n=1}\frac{a_n(\varphi )\overline a_n
(\theta_\Psi)}{n^s}
\end{equation}
Then the Rankin-Selberg method gives the identity
\begin{equation}
(2\pi )^{-k}\Gamma (k)D(k,\varphi\times\theta_\Psi )=\langle \varphi ,{\rm Tr}_{ND/N}
(\theta_\Psi\mathcal E_{2k-1,N})\rangle
\end{equation}
where $\mathcal E_{2k-1,N}$ is the result of slashing a holomorphic
Eisenstein series of weight $2k-1$ (and character $\chi_{-D}$) with
the Atkin--Lehner involution $u_N$, and ${\rm Tr}_{ND/N}$ denotes the trace from
$S_{2k}(ND)$ to $S_{2k}(N)$. In fact, the calculations of Gross and
Zagier (\cite{GZ}) show that
\begin{equation}
G_\Psi ={\rm Tr}_{ND/N}(\theta_\Psi\mathcal E_{2k-1,N}).
\end{equation}
Now let $\varphi$ be a newform of level 1 (and weight $2k$). Then
since $N$ is prime, it defines two old forms of level $N$, namely
$\varphi_1(z)=\varphi (z)$ and $\varphi_2(z)=\varphi (Nz)$, so that
$a_m(\varphi_2)$ is zero unless $N|m$, and
$a_mN(\varphi_2)=a_m(\varphi )$. Since the new and old forms are
orthogonal to each other under $(\cdot ,\cdot )$, and since the
space of old forms of level $N$ are spanned by $\{\varphi_d,d=1,N\}$
with $\varphi$ running overl all the cusp forms of level 1, it
suffices to prove that each $D(k, \varphi_d\times\theta_\Psi )=0$.
Let $d=1$. Then one obtains (by section 3, Lemma 1, of [Sh]):
\begin{equation}
L(2k,\chi_{-D})D(k,\varphi_d\times\theta_\Psi )=L(k,\varphi\times\theta_\Psi ).
\end{equation}
Since $L(s,\chi_{-D})$ is non-zero at $s=2k$ (which is in the region of
absolute convergence), it reduces to checking the vanishing of the right hand
side. Since $\varphi$ has level 1, the root number of $L(k,\varphi \times
\theta_\Psi )$ is $-1$, yielding the requisite vanishing. When $d=N$, $D(k,
\varphi_d\times\theta_\Psi )$ is still a non-zero multiple of $L(k,\varphi\times
\theta_\Psi )$, which is zero.
\hfill$\Box$
\medskip
{\bf Proof of Lemma 2.1} We may choose an orthogonal basis $\mathcal B$ of
$S_{2k}(N)$ to be of the form ${\mathcal F}_{2k}(N)\cup\mathcal B'$, where $\mathcal B'$ consists
of old forms. Clearly we have
\begin{equation}
\sum_{f\in\mathcal B}\frac{(f,G^{\rm cusp}_A)}{\langle f,f\rangle}\,f=G^{\rm cusp}_A.
\end{equation}
In view of Lemma 2.2, the sum on the left hand side needs to run
only over newforms $f$. Applying Theorem 6, and using (8), we
obtain
\[
\frac{2(2k-2)!D^{1/2}}{(4\pi )^{2k}}\sum_{f\in{\mathcal F}_{2k}(N)}\frac{L(k,f,A)}
{\langle f,f\rangle}\,f=G^{\rm cusp}_A.
\]
The lemma now follows by taking the $m$-th Fourier coefficient of the above identity.
\hfill$\Box$
\medskip
{\bf Proof of Theorem 1} The exact average formula follows by
performing the averaging $\sum_{A\in{\mathrm {Pic}} ({\mathcal O}_K)}\Psi (A)\dots$ on
both sides of the formula in Lemma 2.1, using the formula (5) for the
coefficients $b_{m,A}$, and by noting that
\[
\frac{a_m(E)}{a_0(E)}=\frac{12}{N-1}\sigma_N(m)
\]
and that $b_{0,A}=\tfrac{h}{2u^2}$.
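For instance, since $\sum_{A}\Psi (A)b_{0,A}=\delta\,\tfrac{h^2}{2u^2}$, where $\delta =1$ if $k=1$ and $\Psi =1_K$, and $\delta =0$ otherwise, the averaged identity reads
\[
\frac{2(2k-2)!D^{1/2}}{(4\pi )^{2k}}\sum_{f\in{\mathcal F}_{2k}(N)}\frac{L(k,f,\Psi )}{\langle f,f\rangle}a_m(f)=\sum_{A\in{\mathrm {Pic}} ({\mathcal O}_K)}\Psi (A)b_{m,A}-\delta\,\frac{6(h/u)^2}{N-1}\sigma_N(m).
\]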
\hfill$\Box$
\section{\bf Subconvex Bounds}
In this section, we prove Corollary 2. By the work of Waldspurger,
Guo and Jacquet (\cite{Guo, Waldspurger}; also \cite{KohZ} for
$\Psi=1_K$),
\[
L(k,f,\Psi )\geq 0.
\]
Thus from formula (2) for $m=1$, we have
\[
\frac{2(2k-2)!D^{1/2}}{(4\pi )^{2k}}\frac{L(k,f,\Psi )}{\langle f,f\rangle}
\leq\frac{h}{u}+\sum^{\frac{D}{N}}_{n=1}|\Psi_k(n,\Psi ,N)|
\]
Since $|P_{k-1}(x)|\leq 1$ for $|x|\leq 1$ and $R(n),|r_\Psi (n)|\leq d(n)$
(where $d(n)$ is the number of divisors of $n$), we have
\[
R(n)|r_\Psi (D-nN)|\leq d(n)^2+d(D-nN)^2,
\]
and since $\sum_{n\leq x}d(n)^2\ll x(\log x)^3$, the $n$-sum on the right side is $\ll\tfrac{D}{N}(\log
D)^3$. From the class number formula $h=\frac{u\sqrt{D}}{\pi}L(1,\chi_{-D})$, together with the bound $L(1,\chi_{-D})\ll\log D$, we have
\[
h\ll D^{1/2}\log D
\]
and
\[
\langle f,f\rangle\ll (4\pi )^{-2k}(2k-1)!N(\log kN)^3
\]
as follows from \cite{ILS}, (2.3) (unlike the corresponding bound
for Maass forms \cite{HL}, this upper bound is elementary since $f$ is
holomorphic, so its Fourier coefficients satisfy the
Ramanujan--Petersson bound). Thus we see that
\[
L(k,f,\Psi )\ll (\log kN)^3(\log D)^3k(N+D^{1/2}).
\]
\hfill$\Box$
\section{\bf Application to non-vanishing}
We prove here Theorem 2. Arguing exactly as above we have
\begin{align*}
\frac{2(2k-2)!D^{1/2}}{(4\pi )^{2k}}\sum_{f\in{\mathcal F}_{2k}(N)}\frac{L(k,f,\Psi
)}{\langle f,f
\rangle}&=\frac{h}{u}-\delta\frac{6(h/u)^2}{N-1}+O\left (\frac{D}
{N}(\log D)^3\right )\\
&=\frac{h}{u}+O\left (\frac{D}{N}(\log D)^3\right )
\end{align*}
By Siegel's Theorem, which gives $h=D^{1/2 +o(1)}$, we see that the
right side is positive as soon as $N>D^{1/2+\varepsilon}$ for some fixed
$\varepsilon >0$ and $D$ large enough. If $N>D$, then we are in the stable range and we have
\begin{equation}
\frac{2(2k-2)!D^{1/2}}{(4\pi )^{2k}}\sum_{f\in{\mathcal F}_{2k}(N)}\frac{L(k,f,\Psi )}
{\langle f,f\rangle}=\frac{h}{u}\left (1-\delta\frac{6(h/u)}{N-1}
\right ).
\end{equation}
When $\delta =0$, this concludes the proof of Theorem 2 since $h\geq
1$. \hfill$\Box$
\medskip
Suppose now that $\delta =1$ (i.e. $k=1$, $\Psi =1_K$). Then we remark
that
\[
\sum^{\frac{D}{N}}_{n=1}\Psi_1(n,1,N)\geq 0
\]
so that
\[
\frac{2(2k-2)!D^{1/2}}{(4\pi )^{2k}}\sum_{f\in{\mathcal F}_{2k}(N)}\frac{L(k,f,\Psi )}
{\langle f,f\rangle}\geq\frac{h}{u}\left (1-\frac{6(h/u)}{N-1}
\right )
\]
which completes the proof of Theorem 3.\hfill$\Box$
\section{\bf Non-vanishing mod $p$}
\subsection{\bf Algebraic Parts of $L$-values}
Let us put
\begin{equation}
L^{\mathrm{alg}}(k,f, \Psi) \, = \,
(-1)^k(2\pi)^{-2k}(k-1)!^2g(\chi_{-D})\frac{L(k,f,\Psi)}{\langle f,
f\rangle},
\end{equation}
where $g(\chi_{-D})$ is the Gauss
sum of $\chi_{-D}$. Then it is known, by Shimura (\cite{Shimura}, see also
\cite{Hd1}), that $L^{\mathrm{alg}}(k,f, \Psi)$ is an algebraic number obeying
the reciprocity law:
$$
L^{\mathrm{alg}}(k,f^\sigma,\Psi^\sigma) = L^{\mathrm{alg}}(k,f, \Psi)^\sigma,
$$
for every automorphism $\sigma$ of $\mathbb C$.
Next recall that for $\Psi=1_K$, $L(k,f,\Psi)$ factors as
$L(k,f)L(k,f\otimes\chi_{-D})$. For any Dirichlet character $\nu$,
the algebraic part of $L(k,f\otimes\nu)$ is given by
\begin{equation}
L^{\mathrm{alg}}(k,f\otimes\nu) \, = \, g(\overline
\nu)(k-1)!\frac{L(k,f\otimes\nu)}{(-2\pi i)^kc_\pm(f)},
\end{equation}
where $c_\pm(f)$ is a fundamental period of $f$, with $\pm =
\nu(-1)$. Again, one has for any automorphism $\sigma$ of $\mathbb C$,
$L^{\mathrm{alg}}(k,f^\sigma\otimes\nu^\sigma)$ is
$L^{\mathrm{alg}}(k,f\otimes\nu)^\sigma$.
This leads to the near-factorization
\begin{equation}
\eta_fL^{\mathrm{alg}}(k,f,1_K) \, = \, L^{\mathrm{alg}}(k,f)L^{\mathrm{alg}}(k,f\otimes\chi_{-D}),
\end{equation}
where $\eta_f$ equals, thanks to a series of papers of
Hida (cf. \cite{Hd1}, \cite{Hd2}), Wiles (\cite{Wiles}),
Taylor-Wiles (\cite{TW}), and Diamond-Flach-Guo (\cite{DFG}), the
order of the congruence module of $f$, i.e., the number which counts
the congruences of $f$ with other modular forms of the same weight
and level.
\subsection{\bf Proof of Theorems 4 and 5}
From the definition of the algebraic part, the hypothesis of Theorem
4 and the formula (9), used in conjunction with $\delta =0$, we have
(up to multiplication by a $p$-unit)
\[
\sum_{f\in{\mathcal F}_{2k}(N)}L^{\rm alg}(k,f,\Psi )=\frac{h}{u}.
\]
The conclusion of Theorem 4 is immediate.
For the proof of Theorem 5, we have, assuming that $N>pD$,
\[
\sum_{f\in{\mathcal F}_{2k}(N)}L^{\rm alg}(k,f,1_K)=\frac{h}{u}\left
(1-\frac{6(h/u)}{N-1} \right ).
\]
Therefore the conclusion holds except possibly if
$p|(1-\tfrac{6(h/u)}{N-1})$. Suppose we are in that latter case.
Then we apply the exact formula of Corollary 1 with $m=p$ and get
\[
\sum_{f\in{\mathcal F}_{2k}(N)}L^{\rm alg}(k,f,1_K)a_p(f)=\frac{h}{u}\left (R(p)-
\frac{6(h/u)}{N-1}(p+1)\right ).
\]
Here $R(p)$ is either 0 or 2. If it is zero, then the left hand side of
the previous formula is not divisible by $p$. If $R(p)=2$, then
$2-\tfrac{6(h/u)}{N-1}$ is not divisible by $p$ since by assumption
$p|(1-\tfrac{6(h/u)}{N-1})$. So we are done in all
cases.\hfill$\Box$
\medskip
\subsection{Proof of Theorem 6}
\medskip
Here we are restricting to the weight $2$ case, and by the theory of
modular symbols, cf. Stevens \cite{St} and Vatsal \cite{V} - see
also Prasanna \cite{P} - we know that for any Dirichlet character
$\nu$, the special value $L^{\mathrm{alg}}(1,f\otimes\nu)$ is integral except
possibly at the Eisenstein primes; these are the primes dividing
$$
\tilde{N}: = \, \prod_{q\vert N} q(q^2-1),
$$
which is related to the order of the cuspidal divisor class group,
studied for modular curves, among others, by Kubert and Lang.
We may, and we will, choose $N$ to lie in the infinite family of
primes which are inert in $K$ and are such that $p \nmid \tilde{N}$.
Now Theorem 6 follows by the near-factorization (13) of
$L^{\mathrm{alg}}(1,f,1_K)$. It may be useful to note that when $f$ has
$\mathbb Q$-coefficients, with associated elliptic curve $E$ over $\mathbb Q$, one
knows (cf. Flach \cite{F}) that any prime dividing $\eta_f$ also
divides the degree of the modular parametrization $X_0(N) \to E$.
\vskip 0.2in
\bibliographystyle{math}
\section{The big question}
Every compact group admits a unique translation-invariant probability measure, namely its (normalized) Haar measure. In this note we discuss the question of whether every compact group has a non-Haar-measurable subgroup, henceforth referred to as \emph{the big question}. Our main result is that it is consistent with ZFC that the answer to the big question is yes.
The big question goes back at least as far as 1985 (see \cite{S&S}). A positive answer to the big question was given for compact Abelian groups by Comfort, Raczkowski, and Trigos-Arrieta in \cite{CRT}, and a more thorough analysis of non-measurable subgroups of the real line is given in \cite{Krz}. Partial progress on the non-commutative case was made in \cite{Gel}, and a good deal of further progress was made in \cite{HHM}.
With the exception of Kharazishvili's paper \cite{Krz}, these results have been accomplished by finding subgroups of countable index. If a subgroup of a compact group $G$ has index $\aleph_0$, then it is non-measurable (by the translation invariance and countable additivity of Haar measure). If a subgroup has finite index but is not closed, then it is non-measurable (see \cite{HHM}, Proposition 1.1(b)).
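(Indeed, if $H \leq G$ has index $\aleph_0$, then $G$ is the disjoint union of countably many translates of $H$, all of the same Haar measure; if $H$ were measurable, countable additivity would force $\mu(H) = 0$, and then $\mu(G) = 0$, contradicting $\mu(G) = 1$.)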
Hunting for countable index subgroups has been a powerful tool for answering the big question: in \cite{HHM}, this technique is used to solve every case except for a certain class of metric profinite groups. However, this last remaining case cannot be solved simply by finding countable index subgroups, because some groups in this class do not have non-closed subgroups of countable index. Therefore a new construction for finding non-measurable subgroups will be needed before the big question can be put to rest.
In Section~\ref{sec:consistency} we will prove our main theorem. In Section~\ref{sec:constructions} we will give a natural construction for obtaining subgroups of a given compact group. For Abelian groups, we prove that these subgroups are non-measurable, and we conjecture that they are non-measurable for arbitrary metric profinite groups as well. Proving this conjecture correct would result in a positive general solution to the big question.
\section{The main theorem}\label{sec:consistency}
The proof of our main theorem requires four ``lemmas'', each of which is an important theorem in its own right.
\begin{lemma}[Hern\'andez, Hoffman, and Morris]\label{HHMtheorem}
If $G$ is an infinite compact group other than a metric profinite group, then $G$ has a nonmeasurable subgroup.
\end{lemma}
\begin{proof}
This is the main result of \cite{HHM}.
\end{proof}
\begin{lemma}\label{CantorSpace}
Every infinite, metric, profinite group is, as a topological space, homeomorphic to $2^\omega$.
\end{lemma}
\begin{proof}
See Theorem 10.40 in \cite{H&M}.
\end{proof}
For the statement of the next lemma, two measures $\mu$ and $\nu$ on a set $X$ are \textbf{isomorphic} if there is a bijection $\varphi: X \to X$ such that, for every $Y \subseteq X$, $Y$ is $\mu$-measurable if and only if $\varphi(Y)$ is $\nu$-measurable, and if this is the case then $\mu(Y) = \nu(\varphi(Y))$. By the Lebesgue measure on $2^\omega$ we mean the product measure determined by giving the set $\set{f \in 2^\omega}{f(n) = 0}$ measure $\frac{1}{2}$ independently for each $n$.
\begin{lemma}\label{CantorSpace2}
The (normalized) Haar measure for any group structure on $2^\omega$ is isomorphic to the Lebesgue measure.
\end{lemma}
\begin{proof}
This follows from Theorem 17.41 in \cite{Kec} together with the fact that the Haar measure for any group structure on $2^\omega$ is continuous. A stronger version of this result, with a more constructive proof, is given in \cite{B&M}.
\end{proof}
\begin{lemma}\label{randomreals}
There is a model of \emph{ZFC} in which the following holds: there is a subset $X$ of $2^\omega$ such that $X$ is not Lebesgue measurable and $\card{X} < 2^{\aleph_0}$.
\end{lemma}
\begin{proof}
This result is well-known. The idea is essentially this: if $M$ is a model of ZFC and we add many random reals to $M$ by forcing, then any uncountable subset of random reals will be non-measurable in the extension. See \cite{Jec}, pp. 535-536, for details.
\end{proof}
We can now piece these results together to prove our main theorem:
\begin{theorem}\label{consistencyproof}
It is consistent with \emph{ZFC} that every infinite compact group has a non-measurable subgroup.
\end{theorem}
\begin{proof}
Let $G$ be a compact group. By Lemma~\ref{HHMtheorem} and Lemma~\ref{CantorSpace}, we may assume that $G$, considered as a topological space, is homeomorphic to $2^\omega$. By Lemma~\ref{CantorSpace2}, we may assume that the measure on $G$ is Lebesgue measure (provided we do not use the translation invariance property that is specific to Haar measure). By Lemma~\ref{randomreals}, it is consistently true that there is a nonmeasurable subset $X$ of $2^\omega$ such that $\card{X} < 2^{\aleph_0}$. We will use this hypothesis to obtain a non-measurable subgroup of $G$.
Let $H = \<X\>$ be the subgroup of $G$ generated by $X$. Since $X$ is infinite (being non-measurable), $\card{H} = \card{X} \cdot \aleph_0 = \card{X} < 2^{\aleph_0}$. $H$ cannot have measure $0$, because then every subset of $H$, including $X$, would have measure $0$. $H$ also cannot have positive measure, since then $H$ would be closed (and infinite) in $2^\omega$ and thus of cardinality $2^{\aleph_0}$ (see \cite{HHM}, Proposition 1.1(b) for why $H$ would be closed). Thus $H$ is nonmeasurable.
\end{proof}
\section{The construction}\label{sec:constructions}
We now exhibit a technique for obtaining subgroups of a given compact group. In general, we do not know whether this construction produces non-measurable subgroups. We will prove that it does so in the Abelian case and leave the general case open. However, we conjecture that this technique always produces non-measurable subgroups of profinite groups; i.e., it provides a possible candidate for a solution to the question at hand.
\begin{theorem}\label{Zorn}
Let $G$ be an infinite, non-discrete, compact group, let $p \in G \setminus \{1\}$, and let $Q$ be a proper dense subgroup of $G$.
\begin{enumerate}
\item There is a subgroup $M$ of $G$ such that $p \notin M \supseteq Q$ and, for any $x \notin M$, $p \in \<M,x\>$.
\item If $G$ is Abelian, then any such $M$ is non-measurable.
\end{enumerate}
\end{theorem}
\begin{proof}
$(1)$ Let $\mathbb{P}$ be the set of all subgroups $X$ of $G$ such that $p \notin X \supseteq Q$; note that $\mathbb{P} \neq \emptyset$, since $Q \in \mathbb{P}$. Let $\mathcal{C}$ be a nonempty subset of $\mathbb{P}$ totally ordered by $\subseteq$. Then $p \notin \bigcup \mathcal{C} \supseteq Q$. Furthermore, $\bigcup \mathcal{C}$ is a group because if $x,y \in \bigcup \mathcal{C}$ then there are some $X_x,X_y \in \mathcal{C}$ with $x \in X_x$ and $y \in X_y$, and since $X_x \subseteq X_y$ without loss of generality, we have $x^{-1}y \in X_y$, hence $x^{-1}y \in \bigcup \mathcal{C}$. Thus the conditions of Zorn's Lemma are satisfied: an application of Zorn's Lemma yields an element $M$ of $\mathbb{P}$ that is not properly contained in any other element of $\mathbb{P}$. This $M$ is the desired group.
$(2)$ Suppose $G$ is Abelian, and $M$ is as given above. For every $x \in G \setminus M$, $p \in \<M,x\>$. Using the assumption that $G$ is Abelian, for every $x \in G \setminus M$ there is some $n \in \mathbb{Z}$ such that $p \in nx+M$. Since $p \in G \setminus M$, the coset $p+M$ is not the identity of $G / M$. Since $G/M$ is Abelian, there is a character $f: G/M \to \mathbb{R}/\mathbb{Z}$ with $f(p+M)$ different from the identity. The preimage of $\mathrm{ker}(f)$ in $G$ is a subgroup of $G$ that contains $Q$ but not $p$; since $M$ is maximal among all such subgroups, this preimage must equal $M$, i.e., $\mathrm{ker}(f)$ is trivial. Hence $f$ is injective.
Because $p \in \<M,x\>$ for every $x \in G \setminus M$, we have $f(p+M) \in \<f(M),f(x+M)\> = \<f(x+M)\>$ for every $x \in G \setminus M$. This implies that for each $x \in G \setminus M$ there is some $n \in \mathbb{Z}$ such that $nf(x+M) = f(p+M)$. But for each fixed $n$, the equation $ny = f(p+M)$ has only finitely many solutions $y \in \mathbb{R}/\mathbb{Z}$, so only countably many values of $f(x+M)$ are possible. Thus $f(G/M)$ is countable, and $[G:M]$ is countable. Since $Q \subseteq M \neq G$ and $Q$ is dense, $M$ is a proper dense subgroup of $G$, so $M$ is not clopen. As discussed in the introduction, any non-clopen subgroup of countable index is non-measurable.
\end{proof}
Note that every infinite, metric, profinite group has a proper dense subgroup. This follows from Lemma~\ref{CantorSpace}: any such group has a countable dense subset $Q$, and then $\<Q\>$ is the desired subgroup.
\begin{question}
If $G$ is an infinite, metric, profinite group and $Q$ is a proper dense subgroup of $G$, does Theorem~\ref{Zorn}$(1)$ provide a non-measurable subgroup of $G$?
\end{question}
\section{Introduction}
Historically the first publication on the subject of deformed Heisenberg algebra was Snyder's paper \cite{Snyder47} where
the Lorentz-covariant
deformed Heisenberg algebra leading to a quantized
space-time was proposed. For a long time there were only a few papers on this
subject. Interest in deformed algebras was renewed after
investigations in string theory and quantum gravity which suggested the
existence of a nonzero minimal uncertainty in position following
from the generalized uncertainty principle (GUP) (see, e.g., \cite{gross, maggiore, witten}).
In \cite{maggiore2,Kem95,Kem96} it was shown that GUP and nonzero minimal
uncertainty in position can be obtained from some modifications of Heisenberg
algebra. Subsequently there were published many
papers where different systems and their properties in space with deformed
Heisenberg algebra were studied: one-dimensional harmonic
oscillator with minimal uncertainty in position \cite{Kem95} and
also with minimal uncertainty in position and momentum
\cite{Tkachuk1,Tkachuk2}, $D$-dimensional isotropic harmonic
oscillator \cite{chang, Dadic}, three-dimensional Dirac oscillator
\cite{quesne} and one-dimensional Coulomb problem \cite{fityo},
(1+1)-dimensional Dirac oscillator with Lorentz-covariant deformed
algebra \cite{Quesne10909}, three-dimensional Coulomb problem with
deformed Heisenberg algebra in the frame of perturbation theory
\cite{Brau,Benczik,mykola,Stet,mykolaOrb}, singular inverse square
potential with a minimal length \cite{Bou1,Bou2},
the scattering problem in the deformed space with minimal length \cite{Stet07},
ultra-cold
neutrons in gravitational field with minimal length
\cite{Bra06,Noz10,Ped11},
the influence of minimal length on Lamb's shift, Landau levels, and tunneling current in scanning tunneling microscope \cite{Das,Ali2011},
the Casimir effect in a space with minimal length \cite{Frassino},
the effect of noncommutativity and of the existence of a minimal length on the phase space of a cosmological model \cite{Vaki},
various physical consequences which follow from the noncommutative Snyder space-time geometry \cite{Batt},
the classical mechanics in a space with deformed Poisson brackets \cite{BenczikCl,Fryd,Sil09},
composite system in deformed space with
minimal length \cite{Quesne10,Bui10}, equivalence principle in deformed space with
minimal length \cite{Tka12}.
We would like to note that all the deformed algebras studied in the above references are nonlinear ones.
In this paper we consider the relation between nonlinear algebras and linear ones. As far as we know, up to now
this question
has not been studied in the literature. So, the
aim of this paper is to fill this gap.
\section{Linearization of deformed algebra}
Let us consider a one-dimensional deformed nonlinear Heisenberg algebra with function of deformation $f(P)$,
namely
\begin{eqnarray} \label{a1}
[X,P]=if(P),
\end{eqnarray}
where $\hbar=1$.
The function of deformation is even, $f(-P)=f(P)$, and positive. This means that the space has the same properties in two opposite
directions.
The momentum representation reads:
\begin{eqnarray} \label{repr0}
P=p, \ \ X=if(p){d\over dp}
\end{eqnarray}
and acts on square integrable functions $\phi(p) \in \mathcal{L}^2(-a\,,a;f)\, ,(a
\leq \infty)$ where the norm of $\phi$ is given by
\begin{equation}
\parallel \phi\parallel^2 \ = \ \int_{-a}^a\,\frac{dp}{f(p)} \mid\phi(p)\mid^2\, .\label{aa3}
\end{equation}
For the hermiticity of ${X}$ it is enough that $\phi(-a) = \phi(a)$ or $\phi(-a) = -\phi(a)$. Stronger boundary conditions
$\phi(-a) = \phi(a)=0$ were considered in \cite{Mas12}.
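Indeed, a short integration by parts makes the role of the boundary term explicit:
\[
\int_{-a}^{a}\frac{dp}{f(p)}\,\psi^*(p)\,if(p)\phi'(p)=i\big[\psi^*\phi\big]_{-a}^{a}+\int_{-a}^{a}\frac{dp}{f(p)}\,\big(if(p)\psi'(p)\big)^*\phi(p),
\]
so ${X}$ is hermitean precisely when the boundary term vanishes, which either of the above conditions guarantees.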
Now we extend this algebra by one additional operator $F=f(p)$. Using representation (\ref{repr0}) one can easily find
\begin{eqnarray}
[X,F]=[if{d\over dp},f(p)]=iff',\ \ [P,F]=[p,f(p)]=0.
\end{eqnarray}
We require that the algebra of the three operators $X$, $P$ and $F$ be linear; to close this algebra we put
\begin{eqnarray}\label{eqf}
ff'=\alpha +\beta p + \gamma f.
\end{eqnarray}
where $\alpha$, $\beta$ and $\gamma$ are real parameters.
Note that the linear combination in the right hand side of (\ref{eqf}) does not contain $X$ because $ff'$ is a function of $p$ only.
Let us take into account the fact that the function of deformation is an even one. Then changing $p\to-p$ in (\ref{eqf})
and taking into account that $f(-p)=f(p)$ we find
\begin{eqnarray}\label{eqfN}
ff'=-\alpha +\beta p - \gamma f.
\end{eqnarray}
Comparing (\ref{eqf}) and (\ref{eqfN}) we find $\alpha=\gamma=0$. So, only equation
\begin{eqnarray}\label{eqfNN}
ff'=\beta p.
\end{eqnarray}
has even solutions.
The solution in this case reads
\begin{eqnarray}\label{solf1}
f(p)=\pm\sqrt{c+\beta p^2},
\end{eqnarray}
where $c$ is the constant of integration. Choosing
$"+"$ in (\ref{solf1}) and putting
constant of integration $c=1$ we have an even and positive function of deformation
\begin{eqnarray}\label{solf11}
f(p)=\sqrt{1+\beta p^2}.
\end{eqnarray}
The linear algebra in this case reads
\begin{eqnarray} \label{lin-a1}
[X,P]=iF, \ \ [X,F]=i\beta P, \ \ [P,F]=0.
\end{eqnarray}
This is a Lie algebra. One can find the Casimir operator (invariant) for this algebra
\begin{eqnarray}\label{Kaz1}
K=P^2-{1\over\beta}F^2
\end{eqnarray}
commuting with all elements of the algebra. When we return to the nonlinear deformed algebra we find that
the Casimir operator is a constant. Indeed,
\begin{eqnarray}
K=p^2-{1\over\beta}f^2(p)=-{1\over\beta}.
\end{eqnarray}
So, in this section we found that when the function of deformation is given by (\ref{solf11}) then
nonlinear algebra (\ref{a1}) for two operators can be transformed to linear algebra (\ref{lin-a1}) with three operators.
Let us consider in more detail linear algebra (\ref{lin-a1}) denoting the hermitean generators as
\begin{equation}
A_1 \ = \ \lambda\,P\, ,\quad A_2 \ = \ F\, ,\quad A_3 \ = \ \frac{1}{\lambda}\,X\, ,\label{r1.1}
\end{equation}
and they fulfill commutation relations in the form
\begin{equation}
\left[ A_1\,,A_2\right] \ = \ 0\, ,\quad \left[ A_3\,,A_1\right] \ = \ i\,A_2\, ,\quad \left[ A_3\,, A_2\right] \ = \ i\,\mbox{sign}(\beta)\,A_1\, ,\label{r1.2}
\end{equation}
where $\beta = \pm \lambda^2$. So, the momentum realization of this algebra takes a canonical form
\begin{equation}
A_1 \ = \ \lambda\,p\, ,\quad A_2 \ = \ \sqrt{1 + \beta\,p^2}\, ,\quad A_3 \ = \ \frac{i}{\lambda}\,\sqrt{1 + \beta\,p^2} \frac{d}{dp}\, .\label{r1.3}
\end{equation}
It is convenient to use a pair of commuting hermitean operators $P_+\,,P_-$ defined as follows
\begin{eqnarray}
P_+ &=& A_1 + A_2 \ = \ \lambda\,P + F\, ,\label{r1.201}\\ P_- &=& A_2 - A_1 \ = \ F - \lambda\,P \, ,\label{r1.4}
\end{eqnarray}
satisfying the relations
\begin{eqnarray}
\left[ A_3\,, P_+\right] &=& i\,\mbox{sign}(\beta) \left( A_1 + \mbox{sign}(\beta)\,A_2\right)\, ,\label{r1.5}\\
\left[ A_3\,, P_-\right] &=& i\,\mbox{sign}(\beta) \left( A_1 - \mbox{sign}(\beta)\,A_2\right)\, .\label{r1.6}
\end{eqnarray}
In this basis the Casimir operator (\ref{Kaz1}) takes the form
\begin{equation}
K(\beta) \ = \ P^2 - \frac{1}{\beta}\,F^2=\left\{\begin{array}{l} -C_2(\lambda^2)\,,\quad C_2(\lambda^2) \equiv\frac{1}{\lambda^2}\,P_+ P_-\,;\quad \beta=\lambda^2 > 0,\\ \\ \phantom{-}C_2(-\lambda^2)\,,\quad C_2(-\lambda^2) \equiv\frac{1}{2 \lambda^2}\left( P_+^2 + P_-^2\right)\,;\quad \beta= -\lambda^2 < 0\,.\end{array}\right.\label{r1a.1}
\end{equation}
\noindent(i)\quad{\it Inhomogeneous hyperbolic rotations of the two-dimensional plane $ISO(1\,,1)$}\medskip
In the case $\beta > 0$ we obtain three dimensional Lie algebra generated by $P_+\,,P_-\,,A_3$ satisfying the commutation relations
\begin{equation}
\left[ A_3\,, P_+\right] \ = \ i\,P_+\, ,\quad \left[ A_3\,, P_-\right] \ = \ -i\,P_-\, ,\quad \left[P_+\,,P_-\right] \ = \ 0\, ,\label{r1.7}
\end{equation}
where $A_3$ generates $\mathfrak{so}(1\,,1)$ hyperbolic rotations. The Lie algebra $\mathfrak{iso}(1,1) \sim \mathfrak{so}(1,1)\supsetplus \mathfrak{t}^2$ is a semidirect sum of hyperbolic rotations and translations $\mathfrak{t}^2$ (we use the notation of reference \cite{Gilmore}).
We can change the momentum variable via $\lambda\,p = \sinh(\lambda\xi)$; then the generators $P_\pm$ take a simple form
\begin{eqnarray}
P_+ &=& \sqrt{1 + \lambda^2\,p^2} + \lambda\,p \ = \ e^{\lambda\,\xi} \ = \ e^{\mbox{\scriptsize{arcsh}}(\lambda P)}\, ,\label{r1.8}\\
P_- &=& \sqrt{1 + \lambda^2\,p^2} - \lambda\,p \ = \ e^{-\lambda\,\xi} \ = \ e^{-\mbox{\scriptsize{arcsh}}(\lambda P)}\, .\label{r1.9}
\end{eqnarray}
In this realization the spectrum of both generators $P_\pm$ is positive and it is easy to find the hermitean generators of the algebra $\mathfrak{iso}(1,1)$ in the Hilbert space of the square integrable functions $\phi\in \mathcal{L}^2(\mathcal{R})$. First we notice that the scalar products are related as follows (cf.(\ref{aa3}))
\begin{equation}
\langle\psi\,,\phi \rangle \ = \ \int_{-\infty}^\infty\,\frac{dp}{\sqrt{1 + \lambda^2 p^2}}\,{\psi}^*(p)\,\phi(p) \ = \
\int_{-\infty}^\infty\,d\xi\,\tilde{\psi}^*(\xi)\,\tilde{\phi}(\xi) \, .\label{r1.10}
\end{equation}
where $\tilde{\phi}(\xi) = \phi((1/\lambda)\,\sinh (\lambda\xi))$ and the generators of Lie algebra $\mathfrak{iso}(1,1)$ take the form
\begin{equation}
A_3 \ = \ \frac{i}{\lambda}\,\frac{d}{d\xi}\, ,\quad P_+ \ = \ e^{\lambda\,\xi}\, ,\quad P_- \ = \ e^{-\lambda\,\xi}\, .\label{r1.11}
\end{equation}
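As a check, in this realization the relations (\ref{r1.7}) are immediate; for example,
\[
\left[ A_3\,,P_\pm\right]\tilde{\phi} \ = \ \frac{i}{\lambda}\,\frac{d}{d\xi}\left( e^{\pm\lambda\xi}\,\tilde{\phi}\right) - e^{\pm\lambda\xi}\,\frac{i}{\lambda}\,\frac{d\tilde{\phi}}{d\xi} \ = \ \pm i\, e^{\pm\lambda\xi}\,\tilde{\phi} \ = \ \pm i\,P_\pm\tilde{\phi}\, .
\]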
\noindent(ii)\quad{\it Inhomogeneous rotations of the two-dimensional plane $ISO(2)$}\medskip
Similarly, in the case $\beta < 0$ we get the Lie algebra $\mathfrak{iso}(2)$ of transformations of the Euclidean plane
\begin{equation}
\left[ A_3\,, P_+\right] \ = \ i\,P_-\, ,\quad \left[ A_3\,, P_-\right] \ = \ -i\,P_+\, ,\quad \left[P_+\,,P_-\right] \ = \ 0\, ,\label{r1.12}
\end{equation}
where $A_3$ is a generator of rotations of two-dimensional plane. The algebra $\mathfrak{iso}(2)\sim \mathfrak{so}(2)\supsetplus \mathfrak{t}^2$ is a semidirect sum of $\mathfrak{so}(2)$ rotations and two-dimensional translations $\mathfrak{t}^2$.
Introducing a new variable $\theta$ via $\lambda\,p = \sin(\lambda\,\theta)$ we get
\begin{eqnarray}
P_+ &=& \sqrt{1 - \lambda^2\,p^2} + \lambda\,p \ = \ \cos(\lambda\,\theta) + \sin(\lambda\,\theta)\, ,\label{r1.13}\\
P_- &=& \sqrt{1 - \lambda^2\,p^2} - \lambda\,p \ = \ \cos(\lambda\,\theta) - \sin(\lambda\,\theta)\, .\label{r1.14}
\end{eqnarray}
The hermiticity condition imposed on the generators $P_\pm$ implies that $-1 \leq \lambda\,p \leq 1$ and $-\frac{\pi}{2\lambda} \leq \theta \leq \frac{\pi}{2\lambda}$ if we demand that the correspondence $p \leftrightarrow \theta$ be unambiguous.
In the considered case the scalar products are related in the following way
\begin{equation}
\langle\psi\,,\phi \rangle \ = \ \int_{-1/\lambda}^{1/\lambda}\,\frac{dp}{\sqrt{1 - \lambda^2 p^2}}\,{\psi}^*(p)\,\phi(p) \ = \
\int_{-\pi/2\lambda}^{\pi/2\lambda}\,d\theta\,\tilde{\psi}^*(\theta)\,\tilde{\phi}(\theta) \, ,\label{r1.15}
\end{equation}
where $\tilde{\phi}(\theta) = \phi((1/\lambda)\,\sin (\lambda\theta))$ and the generators of the Lie algebra $\mathfrak{iso}(2)$ of the Euclidean group are given by the formulae
\begin{equation}
A_3 \ = \ \frac{i}{\lambda} \frac{d}{d \theta}\, ,\quad P_\pm \ = \ \cos(\lambda \theta) \pm \sin(\lambda \theta)\, ,\label{r1.16}
\end{equation}
or equivalently
\begin{equation}
A_1 \ = \ \frac{1}{2} \left(P_+ - P_-\right) \ = \ \sin(\lambda\,\theta)\, ,\quad A_2 \ = \ \frac{1}{2} \left(P_+ + P_-\right) \ = \ \cos(\lambda\,\theta)\, .\label{r1.17}
\end{equation}
It is worthwhile to notice that the operators $A_1\,, A_2$ in this representation appeared in quantum mechanics as the sine and cosine operators in discussions of the existence of a self-adjoint phase operator \cite{Nieto}.\medskip
\noindent(iii)\quad{\it On expansion to simple algebras $\mathfrak{so}(2, 1)$ and $\mathfrak{so}(3)$.}\medskip
Both considered Lie algebras are closely related to three dimensional orthogonal and pseudo-orthogonal rotation algebras by the expansion procedure.
Further we shall follow the framework of expansion described in ref.\cite{Gilmore1} (Chapter 10).
We define new generators as follows
\begin{equation}
\Pi_\pm \ = \ \left[ A_3^2\,, P_\pm\right] \ = \ \left\{\begin{array}{l} \pm i \{A_3\,, P_\pm\}\quad\mbox{for}\quad \beta=\lambda^2 > 0,
\\ \\ \pm i \{A_3\,, P_\mp\}\quad\mbox{for}\quad\beta= -\lambda^2 < 0\,.\end{array}\right.\label{r1a.2}
\end{equation}
where $\{A\,,B\} = A B + B A$. Using the relation $\{A\,,B\}= 2AB - \left[ A\,,B\right]$ we find the commutation relations\\
(a)\quad for $\beta = \lambda^2 > 0$:
\begin{eqnarray}
\left[ A_3\,, \Pi_\pm \right] &=& - \{ A_3\,, P_\pm \} \ = \ \pm i \Pi_\pm\, ,\label{r1a.3}
\\
\left[ \Pi_+\,, \Pi_- \right] &=& -8i\,A_3 P_+ P_- \ = \ -8i\lambda^2\,A_3 C_2(\lambda^2)\, .\label{r1a.4}
\end{eqnarray}
Introducing redefined generators
\begin{equation}
\tilde{P}_\pm(\varepsilon) \ = \ \frac{\Pi_\pm}{2\sqrt{\varepsilon P_+ P_-}} \ = \ \frac{\Pi_\pm}{2\sqrt{\varepsilon\lambda^2 C_2(\lambda^2)}}\, ,\label{r1a.5}
\end{equation}
where $\varepsilon = \pm 1$ and the Casimir is given by (\ref{r1a.1}), then we obtain the relations
\begin{equation}
\left[ A_3\,, \tilde{P}_\pm(\varepsilon) \right] \ = \ \pm i\,\tilde{P}_\pm(\varepsilon)\, ,\qquad \left[\tilde{P}_+(\varepsilon)\,, \tilde{P}_-(\varepsilon) \right] \ = \ -2i\varepsilon\,A_3\, ,\label{r1a.6}
\end{equation}
or
\begin{equation}
\left[ \tilde{A}_1\,, \tilde{A}_2\right] \ = \ -i \varepsilon\,\tilde{A}_3\, ,\quad
\left[ \tilde{A}_2\,, \tilde{A}_3\right] \ = \ i\,\tilde{A}_1\, ,\quad
\left[ \tilde{A}_3\,, \tilde{A}_1\right] \ = \ i\,\tilde{A}_2\, ,\label{r1a.7}
\end{equation}
where $\tilde{P}_\pm(\varepsilon) = \tilde{A}_2 \pm \tilde{A}_1\, , \tilde{A}_3 = A_3$. Let us notice that for $\varepsilon = 1$ all generators $\tilde{A}_k ( k=1,2,3)$ are hermitean and the second order Casimir operator have the form
\begin{equation}
\tilde{C}_2(\varepsilon) \ = \ \tilde{A}_1^2 + \tilde{A}_2^2 - \varepsilon \tilde{A}_3^2\, ,\label{r1a.8}
\end{equation}
and for $\varepsilon=1$ the hermitean generators $\tilde{A}_k$ span the Lie algebra $\mathfrak{so}(2,1)$. On the other hand for $\varepsilon=-1$ the generators $\tilde{A}_1\,,\tilde{A}_2$ become antihermitean operators and span the Lie algebra $\mathfrak{so}(3)$, obtained from the $\mathfrak{so}(2,1)$ algebra by the Weyl unitary trick \cite{Gilmore1}.\medskip\\Similarly, the commutation relations take the form\\(b) for $\beta = -\lambda^2 < 0$:
\begin{eqnarray}
\left[ A_3\,, \Pi_\pm \right] &=& \pm i\,\Pi_\mp\, ,\label{r1b.3}\\
\left[ \Pi_+\,, \Pi_- \right] &=& 4i\,A_3 \left( P_+^2 + P_-^2\right) \ = \ 8\lambda^2 i\,A_3 C_2(-\lambda^2)\, .\label{r1a.9}
\end{eqnarray}
Now, redefined generators are given by
\begin{equation}
\tilde{P}_\pm(\varepsilon) \ = \ \frac{\Pi_\pm}{\sqrt{2\varepsilon\left( P_+^2 + P_-^2\right)}} \ = \ \frac{\Pi_\pm}{2\sqrt{\varepsilon\lambda^2 C_2(-\lambda^2)}} \, ,\label{r1a.10}
\end{equation}
satisfying the relations
\begin{equation}
\left[ A_3\,, \tilde{P}_\pm(\varepsilon) \right] \ = \ \pm i\,\tilde{P}_\mp(\varepsilon)\, ,\qquad \left[\tilde{P}_+(\varepsilon)\,, \tilde{P}_-(\varepsilon) \right] \ = \ 2i\varepsilon\,A_3\, ,\label{r1a.11}
\end{equation}
The commutation relations for $\tilde{A}_k$ generators take the form
\begin{equation}
\left[ \tilde{A}_1\,, \tilde{A}_2\right] \ = \ i \varepsilon\,\tilde{A}_3\, ,\quad
\left[ \tilde{A}_2\,, \tilde{A}_3\right] \ = \ i\,\tilde{A}_1\, ,\quad
\left[ \tilde{A}_3\,, \tilde{A}_1\right] \ = \ i\,\tilde{A}_2\, ,\label{r1a.12}
\end{equation}
and the Casimir operator is given by
\begin{equation}
\tilde{C}_2(\varepsilon) \ = \ \tilde{A}_1^2 + \tilde{A}_2^2 + \varepsilon \tilde{A}_3^2\, .\label{r1a.13}
\end{equation}
We see that for $\varepsilon=1$ the generators $\tilde{A}_k$ are hermitean operators and form the simple three dimensional rotation algebra $\mathfrak{so}(3)$. For $\varepsilon=-1$ the generators $\tilde{A}_1, \tilde{A}_2$ become nonhermitean and we obtain a nonhermitean realization of the non-compact pseudo-orthogonal Lie algebra $\mathfrak{so}(2,1)$.
\subsection{Minimal length versus discreteness of configurational space}
In the paper \cite{Mas12}
we obtained the following result for the minimal length, defined as the minimal uncertainty of the position
operator:
\begin{eqnarray}
l_0={\rm min}\sqrt{\langle(\Delta X)^2\rangle}
= \ \frac{\pi}{2}\,\left(\int_0^{a}{d p \over
f(p)}\right)^{-1}\, .
\end{eqnarray}
With the function of deformation (\ref{solf11}),
for $\beta\ge 0$ the momentum is defined on the whole line
$-\infty<p<\infty$ and we demand the boundary conditions $\phi(-\infty)=\phi(\infty)=0$.
The minimal length $l_0$ in this case
is zero.
Let us consider in more detail the second case $\beta=-\lambda^2<0$ with the function of deformation
\begin{eqnarray}\label{solf12}
f(p)=\sqrt{1-\lambda^2 p^2}.
\end{eqnarray}
The momentum now is bounded to the region $-1/\lambda< p<1/\lambda$.
In this case the minimal length $l_0$ is nonzero (see paper \cite{Mas12})
\begin{eqnarray}\label{minl1}
\sqrt{\langle(\Delta X)^2\rangle}\ge l_0=\lambda.
\end{eqnarray}
Here it is important to note that the result concerning the minimal length
was obtained in \cite{Mas12} for zero boundary conditions
\begin{eqnarray}\label{bc0}
\phi(-1/\lambda) = \phi(1/\lambda)=0.
\end{eqnarray}
The eigenvalue equation for the position operator $X$ with these boundary conditions does not have solutions,
which is in agreement with the fact that, according to (\ref{minl1}), the uncertainty in position is nonzero, $\langle(\Delta X)^2\rangle\ne0$.
Now let us consider nonzero boundary conditions
\begin{eqnarray}\label{bcn0}
\phi(-1/\lambda) = \phi(1/\lambda)
\end{eqnarray}
and
\begin{eqnarray}\label{bcn0m}
\phi(-1/\lambda) = -\phi(1/\lambda)
\end{eqnarray}
which are weaker than those imposed in \cite{Mas12}. Therefore the result concerning the minimal length no longer applies.
The eigenvalue equation for the position operator in this case reads
\begin{eqnarray}
i\sqrt{1-\lambda^2p^2}{d\over dp}\phi(p)=l\phi(p).
\end{eqnarray}
The square integrable solution for eigenfunction is
\begin{eqnarray}\label{eisol1}
\phi(p)=\sqrt{\lambda\over\pi}\exp\left(-i{l\over\lambda}\arcsin\lambda p\right).
\end{eqnarray}
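Note that for real $l$ the prefactor makes $\phi$ normalized with respect to the norm (\ref{aa3}):
\[
\parallel\phi\parallel^2={\lambda\over\pi}\int^{1/\lambda}_{-1/\lambda}{dp\over\sqrt{1-\lambda^2p^2}}={\lambda\over\pi}\cdot{\pi\over\lambda}=1.
\]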
The boundary condition (\ref{bcn0}) in an explicit form reads
\begin{eqnarray}
\exp\left(-i{l\over\lambda}\pi\right)=1
\end{eqnarray}
and for the eigenvalues we find
\begin{eqnarray}\label{eigval1p}
l_n=2\lambda n, \ \ n=0,\pm 1, \pm 2, ...
\end{eqnarray}
Similarly, for boundary condition (\ref{bcn0m}) we find
\begin{eqnarray}
\exp\left(-i{l\over\lambda}\pi\right)=-1
\end{eqnarray}
and the eigenvalues now read
\begin{eqnarray}\label{eigval1m}
l_n=\lambda (2n+1), \ \ n=0,\pm 1, \pm 2, ...
\end{eqnarray}
So, the eigenvalues of the position operator are discrete.
It is obvious that for the eigenstates (\ref{eisol1}) the uncertainty in position is zero and therefore the minimal length is zero.
Thus we have two possible scenarios. When boundary condition (\ref{bc0})
is imposed, there exists a nonzero minimal length and there are no solutions of the eigenvalue equation for the position operator.
For boundary conditions (\ref{bcn0}) and (\ref{bcn0m}) the solutions of the
eigenvalue equation for the position operator exist, the eigenvalues are discrete, and the minimal length is zero.
\subsection{Angular momentum representation of linear algebra}
It is easy to verify that linear algebra (\ref{lin-a1}) with $\beta=-\lambda^2$ is satisfied by the operators
\begin{eqnarray}\label{XPF}
X=\lambda\left(-ix{\partial\over\partial y}+iy{\partial\over\partial x}\right)=\lambda L_z,\\
P=x, \ \ F=\lambda y.
\end{eqnarray}
The Casimir operator (\ref{Kaz1}) in this representation reads
\begin{eqnarray}
K=P^2+{1\over\lambda^2}F^2=x^2+y^2.
\end{eqnarray}
Returning to the nonlinear representation we find $K=1/\lambda^2$.
As a result of (\ref{XPF}), the linear algebra (\ref{lin-a1}) with $\beta=-\lambda^2$
is in fact the algebra of the angular momentum $L_z$ and the position operators $x$, $y$.
Let us consider, in the frame of this algebra, the eigenvalue problem for $X$,
noting that $X$ is proportional to the $z$-component of the angular momentum.
In polar coordinates (with this parametrization $x\,\partial/\partial y-y\,\partial/\partial x=-\partial/\partial\phi$) the operators read
\begin{eqnarray}
x={1\over\lambda}\sin\phi,\ \ y={1\over\lambda}\cos\phi, \ \ L_z=i{\partial\over\partial\phi}.
\end{eqnarray}
The eigenvalue equation for $L_z$
\begin{eqnarray}
i{\partial\over\partial\phi}\psi=m\psi
\end{eqnarray}
gives the solution
\begin{eqnarray}
\psi=Ce^{-im\phi}.
\end{eqnarray}
The boundary conditions (\ref{bcn0}) and (\ref{bcn0m}) now read
\begin{eqnarray}\label{B1}
\psi(-\pi/2)=\psi(\pi/2), \\
\psi(-\pi/2)=-\psi(\pi/2)\label{B2}.
\end{eqnarray}
For (\ref{B1}) we find $m=2n$, where $n=0,\pm1,\pm2,...$, and thus the eigenvalues
of $X$ are $l_n=2\lambda n$, in agreement with (\ref{eigval1p}).
For (\ref{B2}) we find $m=2n+1$, where $n=0,\pm1,\pm2,...$, and thus the eigenvalues
of $X$ in this case are $l_n=\lambda(2n+1)$, in agreement with (\ref{eigval1m}).
\section{Algebra of total angular momenta and nonlinear algebra}
In this section we consider nonlinear deformed algebra of the form
\begin{eqnarray}\label{algXP}
[X,P]=i\sqrt{1-(\lambda_1^2X^2+\lambda_2^2P^2)}.
\end{eqnarray}
It is interesting that this nonlinear algebra is related with the Lie algebra for angular momentum
\begin{eqnarray}\label{angul}
[J_x,J_y]=iJ_z,\ \ [J_z,J_x]=iJ_y, \ \ [J_y,J_z]=iJ_x.
\end{eqnarray}
The squared total angular momentum $J^2$ commutes with each component of the total angular momentum and is the Casimir operator.
Let us consider the subspace with fixed eigenvalue of squared total angular momentum.
Then
\begin{eqnarray}
J^2=J_x^2+J_y^2+J_z^2=j(j+1),
\end{eqnarray}
where $j=0,1,2,...$ or $j=1/2, 3/2, 5/2,...$.
We find
\begin{eqnarray}\label{Jz}
J_z=\pm\sqrt{j(j+1)-(J_x^2+J_y^2)}.
\end{eqnarray}
Choosing $"+"$ and substituting it into the first equation in (\ref{angul}) we obtain
\begin{eqnarray}
[J_x,J_y]=i\sqrt{j(j+1)-(J_x^2+J_y^2)}.
\end{eqnarray}
Note that with $"+"$ we are restricted to the subspace spanned by eigenstates of $J_z$ with positive eigenvalues.
As we see, this algebra is very similar to (\ref{algXP}). Let us introduce operators of position and momentum as follows
\begin{eqnarray}\label{XPang}
X=\lambda_2 J_x, \ \ P=\lambda_1 J_y.
\end{eqnarray}
Then
\begin{eqnarray}
[X,P]=i\lambda_1\lambda_2\sqrt{j(j+1)-\left({1\over\lambda_2^2}X^2+{1\over\lambda_1^2}P^2\right)}=\\
=i\sqrt{\lambda_1^2\lambda_2^2j(j+1)-\left({\lambda_1^2}X^2+{\lambda_2^2}P^2\right)}.
\end{eqnarray}
Choosing
\begin{eqnarray}\label{l1l2}
\lambda_1^2\lambda_2^2j(j+1)=1
\end{eqnarray}
we obtain the deformed algebra (\ref{algXP}). So, the nonlinear deformed algebra (\ref{algXP}) is related with
the Lie algebra for the total angular momentum (\ref{angul}).
Finally, let us introduce the new operator
\begin{eqnarray}\label{FJ}
F=\lambda_1\lambda_2 J_z=\sqrt{1-(\lambda_1^2X^2+\lambda_2^2P^2)}.
\end{eqnarray}
Then according to (\ref{XPang}) and (\ref{FJ}) the algebra of operators $X, P, F$ reads
\begin{eqnarray}\label{LinJ}
[X,P]=iF, \ \ [X,F]=-i\lambda_2^2P, \ \ [P,F]=i\lambda_1^2X.
\end{eqnarray}
Thus, the nonlinear algebra (\ref{algXP}), with the help of the operator (\ref{FJ}), can be extended to the linear one (\ref{LinJ}).
It is worth stressing that the parameters $\lambda_1$ and $\lambda_2$ are not independent but are related by
(\ref{l1l2}).
Here it is worth stressing that in the limit $\lambda_1\to 0$ the linear algebra (\ref{LinJ}), which
is related to the algebra of angular momentum,
corresponds to (\ref{lin-a1}), which
is related to the algebra of transformations of the Euclidean plane. This limit corresponds to the contraction procedure
described in \cite{Gilmore} (Chapter 13).
Respectively, in the limit $\lambda_1\to 0$ deformed algebra (\ref{algXP}) corresponds to (\ref{a1}) with the function of deformation (\ref{solf11}). Therefore, the contraction procedure relating two linear algebras relates also corresponding
nonlinear deformed algebras.
Using representation (\ref{XPang}) one can find that the eigenvalues of the position operator are $\lambda_2 m$, where $-j\le m\le j$,
and the minimal length is zero. Similarly, the eigenvalues of the momentum operator are $\lambda_1 m$ and thus the minimal momentum is zero.
In conclusion it is worth stressing that the construction of a nonlinear algebra from a linear one
with the help of the Casimir invariant proposed here can be applied to an arbitrary linear algebra.
\subsection{Harmonic oscillator}
We consider the eigenvalue problem for the harmonic oscillator with Hamiltonian
\begin{eqnarray}\label{Hosc}
H={1\over 2}(P^2+X^2)
\end{eqnarray}
in the space described by the deformed algebra (\ref{algXP}) with $\lambda_1=\lambda_2=\lambda$:
\begin{eqnarray}\label{algXP1}
[X,P]=i\sqrt{1-\lambda^2(X^2+P^2)}.
\end{eqnarray}
This deformed algebra is related with the Lie algebra for total angular momentum when (\ref{l1l2})
is satisfied. It gives the relation
\begin{eqnarray}\label{lj}
\lambda^4={1\over j(j+1)}.
\end{eqnarray}
In order to find the energy spectrum of (\ref{Hosc}) we use the relation of the nonlinear algebra (\ref{algXP1}) with the
Lie algebra for total angular momentum (\ref{angul}). Using (\ref{XPang}) we rewrite the Hamiltonian as follows
\begin{eqnarray}\label{HJ}
H={\lambda^2\over 2}(J_x^2+J_y^2)={\lambda^2\over 2}(J^2 - J_z^2),
\end{eqnarray}
where $J^2$ and $J_z$ commute, the
eigenvalue of $J^2$ is $j(j+1)$ and the eigenvalue of $J_z$ is $m$, $ -j\le m\le j $. Note that on the subspace
where the deformed algebra is related with the Lie one, the quantum number $j$ is fixed and is related
to the deformation parameter by (\ref{lj}), and $m$ is nonnegative, which corresponds to choosing $"+"$ in (\ref{Jz}).
Thus eigenvalues of (\ref{HJ}) read
\begin{eqnarray}
E_m={1\over 2\sqrt{j(j+1)}}(j(j+1)-m^2),
\end{eqnarray}
where for integer $j$ the quantum number $m=0,1,2,...,j$, and for half-integer $j$, $m=1/2, 3/2,...,j$.
Note that in this notation the maximal quantum number $m=j$ corresponds to the ground state.
It is convenient to write $m=j-n$, so that $n=0$ corresponds to the ground state energy.
Then
\begin{eqnarray}
E_n={1\over 2\sqrt{j(j+1)}}(j(j+1)-(j-n)^2),
\end{eqnarray}
where for integer $j$ the quantum number $n=0,1,2,...,j$, and for half-integer $j$, $n=0, 1,...,j-1/2$.
In the limit $j\to \infty$ the deformation parameter $\lambda \to 0$, and for the energy spectrum we obtain
\begin{eqnarray}
E_n=n+{1\over 2}.
\end{eqnarray}
It reproduces the energy spectrum of the non-deformed harmonic oscillator, as it must
when the parameter of deformation tends to zero.
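Explicitly, writing $j(j+1)-(j-n)^2=(2n+1)j-n^2$, we have for each fixed $n$
\[
E_n={(2n+1)j-n^2\over 2\sqrt{j(j+1)}}\to n+{1\over 2},\qquad j\to\infty .
\]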
\section{Conclusions}
In this paper we establish the relation between nonlinear algebras and linear ones.
Namely, we find that the deformed nonlinear algebra (\ref{a1}) for two operators with the function of deformation (\ref{solf11})
can be transformed into the linear algebra (\ref{lin-a1}) with three operators. It is interesting to note that this
linear algebra is equivalent to the Lie algebra of the angular momentum $L_z$ and the coordinates $x$, $y$.
The next interesting fact revealed for the algebra (\ref{a1}) with the function of deformation (\ref{solf12}) is
that here we have two possible scenarios concerning the existence of the minimal length.
When the zero boundary condition (\ref{bc0}) is imposed, there exists a nonzero minimal length
and there are no solutions of the eigenvalue equation for the position operator.
For the nonzero boundary conditions (\ref{bcn0}) and (\ref{bcn0m}) there exist solutions of the eigenvalue equation for the position operator and
the eigenvalues are discrete.
The minimal length, defined as the minimal uncertainty in position, in this case is obviously zero.
We also show that starting from a linear algebra it is possible to find a corresponding nonlinear one. Namely, starting from the Lie algebra
for total angular momentum we construct a corresponding nonlinear algebra for two operators which can be associated with
the position and momentum operators.
The relation between linear and nonlinear algebras is not only interesting in its own right but is also important from the practical point of view.
This relation gives a possibility to simplify the eigenvalue problem for corresponding operators.
This is demonstrated in Section 3.1 on the example of the harmonic oscillator with a deformed algebra. Using the relation
of this algebra with the algebra of total angular momentum we easily find the energy spectrum of this oscillator.
\section*{Acknowledgment}
VMT thanks for warm hospitality the University of Zielona G\'ora where the main part of this paper was done.
\section{Introduction and Statement of Results}\label{intro}
For a positive integer $n$, a {\it partition} of $n$ is a non-increasing sequence of positive integers that sum to $n$, where each summand is called a {\it part}. The partition function $p(n)$ counts the number of partitions of $n$, and we define $p(0)=1$.
The celebrated Ramanujan congruences demonstrate compelling divisibility properties for $p(n)$,
\begin{align*}
p(5n+4) & \equiv 0 \pmod 5 \\
p(7n+5) & \equiv 0 \pmod 7 \\
p(11n+6) & \equiv 0 \pmod{11}.
\end{align*}
Dyson \cite{Dyson} defined the {\it rank} of a partition $\lambda$ to be $l(\lambda)-n(\lambda)$, where $l(\lambda)$ and $n(\lambda)$ denote the largest part and number of parts of $\lambda$, respectively. Dyson conjectured that this gave a combinatorial explanation for the Ramanujan congruences modulo $5$ and $7$. In particular, if $N(s,m,n)$ is defined to be the number of partitions of $n$ that have rank congruent to $s$ modulo $m$, then Dyson conjectured that for each residue class $s$,
\begin{align}
N(s,5,5n+4) &= \frac{p(5n+4)}{5} \label{Dyson1}\\
N(s,7,7n+5) &= \frac{p(7n+5)}{7} \label{Dyson2}.
\end{align}
Atkin and Swinnerton-Dyer \cite{ASD} proved (\ref{Dyson1}), (\ref{Dyson2}) by obtaining generating functions for rank differences of the form $N(s,\ell,\ell n+b)-N(t,\ell,\ell n+b)$ for $\ell=5,7$, and showing that the relevant differences were always $0$ in the setting $(\ell,b)\in\{(5,4),(7,5)\}$. They determined all of the generating functions for $N(s,\ell,\ell n+b)-N(t,\ell,\ell n+b)$ where $\ell=5,7$, and obtained several interesting identities for the non-Ramanujan cases.
Lovejoy and Osburn \cite{LO1, LO3, LO2} used similar techniques to obtain interesting generating function representations for rank differences of overpartitions, as well as partitions without repeated odd parts. For example, let $\lambda$ be a partition without repeated odd parts. The {\it $M_2$ rank} of $\lambda$ is defined to be
\[
\left\lceil{\frac{l(\lambda)}{2}}\right\rceil - n(\lambda).
\]
Let $N_2(s,m,n)$ count the number of partitions of $n$ with distinct odd parts and $M_2$ rank congruent to $s$ modulo $m$. Lovejoy and Osburn \cite{LO3} obtained generating function identities for rank differences of the form $N_2(s,\ell,\ell n + b) - N_2(t,\ell,\ell n + b)$ for $\ell = 3$ and $\ell = 5$.
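For example, the partition $\lambda = (5,4,2)$ of $11$ has distinct odd parts, with $l(\lambda)=5$ and $n(\lambda)=3$, so its $M_2$ rank is $\left\lceil{\tfrac{5}{2}}\right\rceil - 3 = 0$.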
Most recently, Mao \cite{Mao1, Mao2} has derived generating function formulas for Dyson's rank on partitions modulo $10$, and the $M_2$ rank on partitions without repeated odd parts modulo $6$ and $10$. In this work he proves a number of inequalities, including for example
\begin{align*}
N(0,10,5n+1) & > N(4,10,5n+1), \\
N_2(0,6,3n) + N_2(1,6,3n) & > N_2(2,6,3n) + N_2(3,6,3n).
\end{align*}
Mao gives the following conjectures based on computational evidence. The first is for Dyson's rank on unrestricted partitions.
\begin{conjecture}
Computational evidence suggests that
\begin{align}
\label{Mao10 Conjecture a}
N(0,10,5n) + N(1,10,5n) & > N(4,10,5n) + N(5,10,5n) \text{ for } n \geq 0,\\
\label{Mao10 Conjecture b}
N(1,10,5n) + N(2,10,5n) & \geq N(3,10,5n) + N(4,10,5n) \text{ for } n \geq 1.
\end{align}
\end{conjecture}
The second is for the $M_2$ rank on partitions without repeated odd parts.
\begin{conjecture}
Computational evidence suggests that
\begin{align}
\label{Mao610 Conjecture b}
N_2(0,10,5n) + N_2(1,10,5n) & > N_2(4,10,5n) + N_2(5,10,5n) \text{ for } n \geq 0,\\
\label{Mao610 Conjecture c}
N_2(0,10,5n + 4) + N_2(1,10,5n + 4) & > N_2(4,10,5n + 4) + N_2(5,10,5n + 4) \text{ for } n \geq 0,\\
\label{Mao610 Conjecture d}
N_2(1,10,5n) + N_2(2,10,5n) & > N_2(3,10,5n) + N_2(4,10,5n) \text{ for } n \geq 1,\\
\label{Mao610 Conjecture e}
N_2(1,10,5n + 2) + N_2(2,10,5n + 2) & > N_2(3,10,5n + 2) + N_2(4,10,5n + 2) \text{ for } n \geq 1,\\
\label{Mao610 Conjecture a}
N_2(0,6,3n + 2) + N_2(1,6,3n+2) & > N_2(2,6,3n + 2) + N_2(3,6,3n + 2) \text{ for } n \geq 0.
\end{align}
\end{conjecture}
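Conjectures of this type are easy to test numerically for any fixed range of $n$. The following minimal sketch (in Python; the brute-force helper functions are ours, introduced purely for illustration and not taken from Mao's papers) tallies Dyson rank classes and checks the first inequality of the first conjecture for small $n$:
\begin{verbatim}
# Brute-force illustration; helper names are ours, not from the
# literature. Checks N(0,10,5n)+N(1,10,5n) > N(4,10,5n)+N(5,10,5n).

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dyson_rank(lam):
    """Largest part minus number of parts (0 for the empty partition)."""
    return lam[0] - len(lam) if lam else 0

def count_rank(s, m, n):
    """N(s,m,n): partitions of n with Dyson rank congruent to s mod m."""
    return sum(1 for lam in partitions(n)
               if dyson_rank(lam) % m == s % m)

for n in range(6):
    lhs = count_rank(0, 10, 5*n) + count_rank(1, 10, 5*n)
    rhs = count_rank(4, 10, 5*n) + count_rank(5, 10, 5*n)
    assert lhs > rhs, (n, lhs, rhs)
\end{verbatim}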
In this paper we prove the following theorem using elementary techniques.
\begin{theorem}\label{main}
Mao's conjectures \eqref{Mao10 Conjecture a}, \eqref{Mao10 Conjecture b}, \eqref{Mao610 Conjecture b}, and \eqref{Mao610 Conjecture c} are true. In fact, in \eqref{Mao10 Conjecture b}, the strict inequality holds.
\end{theorem}
We note that our method did not suffice to prove the remaining three conjectures, which are still open.
The rest of the paper is organized as follows. In Section \ref{prelim}, we gather some definitions, notation, and lemmas that will be used later. In Section \ref{proof}, we prove Theorem \ref{main}.
\section{Preliminaries}\label{prelim}
We use the following standard $q$-series notation. For $n\in\mathbb{N}$, $a\in\mathbb{C}$, define
\begin{align*}
(a;q)_n &:= \prod_{i=0}^{n-1}(1-aq^{i}) \\
(a;q)_\infty &:= \prod_{i=0}^\infty(1-aq^{i}),
\end{align*}
and also define $(a;q)_0=1$. As shorthand, write
\begin{align*}
(a_1, \ldots, a_k;q)_n &:= (a_1;q)_n \cdots (a_k;q)_n\\
(a_1, \ldots, a_k;q)_\infty &:= (a_1;q)_\infty \cdots (a_k;q)_\infty.
\end{align*}
Furthermore, we will make use of the following notation of Mao.\footnote{We note that our definition of $L_{a,b}$ differs from Mao's in that the roles of $a$ and $b$ are reversed.} For positive integers $a<b$, define
\begin{align*}
J_b &:= (q^b; q^b)_{\infty} \\
J_{a,b} &:= (q^a, q^{b-a}, q^b; q^b)_{\infty},\\
L_{a,b} &:=\frac{J_b^2}{J_{a,b}}.
\end{align*}
\begin{lemma}[Mao \cite{Mao1}]\label{L lemma}
Given positive integers $a<b$, the $q$-series coefficients of $L_{a,b}$ are all nonnegative.
\end{lemma}
Mao proved rank difference formulas that we will use in our proof of Theorem \ref{main}. First, for unrestricted partitions, Mao proved the following theorem.
\begin{theorem}[Mao \cite{Mao1}]\label{MaoThm1}
We have that
\begin{align*}
&\sum_{n=0}^{\infty} \big(N(0,10,n) + N(1,10,n) - N(4,10,n) - N(5,10,n)\big)q^n \\
&= \left(\frac{J_{25} J_{50}^5 J_{20,50}^2 }{ J_{10,50}^4J_{15,50}^3} + \frac{1}{J_{25}} \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{75n(n + 1)/2 + 5}}{1 + q^{25n + 5}}\right) + q\left(\frac{J_{25} J_{50}^5}{J_{5,50} J_{10,50}^2 J_{15,50}^2}\right) \\
&+ q^2\left(\frac{J_{25} J_{50}^5}{J_{5,50}^2 J_{15,50} J_{20,50}^2}\right) + q^3\left(\frac{J_{25} J_{50}^5J_{10,50}^2 }{J_{5,50}^3 J_{20,50}^4} - \frac{1}{J_{25}} \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{75n(n + 1)/2 + 5}}{1 + q^{25n + 10}}\right) \\
&+ q^4\left(\frac{2 J_{50}^6}{J_{25} J_{5,50} J_{10,50} J_{15,50} J_{20,50}}\right), \\
\end{align*}
and
\begin{align*}
&\sum_{n=0}^{\infty} \big(N(1,10,n) + N(2,10,n) - N(3,10,n) - N(4,10,n)\big)q^n \\
&= \left(\frac{2q^5 J_{50}^6}{J_{25} J_{10,50}^2 J_{15,50}^2} - \frac{1}{J_{25}} \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{75n(n + 1)/2 + 5}}{1 + q^{25n + 5}}\right) \\
& + q\left(\frac{2q^5 J_{50}^6}{J_{25} J_{5,50} J_{15,50} J_{20,50}^2}\right) + q^2\left(\frac{J_{25}J_{50}^5 J_{20,50} }{J_{10,50}^3 J_{15,50}^3}\right) + q^3\left(\frac{J_{25} J_{50}^5}{J_{5,50} J_{10,50} J_{15,50}^2 J_{20,50} }\right) \\
& + q^4\left(\frac{J_{25} J_{50}^5J_{20,50}^2 J_{25,50} }{2q^5 J_{10,50}^4 J_{15,50}^4} - \frac{1}{q^5 J_{25}} \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{(75n^2 + 25n)/2}}{1 + q^{25n}}\right). \\
\end{align*}
\end{theorem}
We will also make use of the following theorem for $M_2$ rank of partitions without repeated odd parts.
\begin{theorem}[Mao \cite{Mao2}] \label{MaoThm2}
We have that
\begin{align*}
& \sum_{n=0}^{\infty} \big(N_2(0,10,n) + N_2(1,10,n) - N_2(4,10,n) - N_2(5,10,n)\big)q^n \\
&= \left( \frac{2q^5 J_{100}^{15} J_{10,100} J_{50,100} }{J_{5,100}^3 J_{15,100}^2 J_{20,100}^2 J_{25,100}^3 J_{30,100} J_{35,100}^2 J_{45,100}^3} + \frac{1}{J_{25,100}} \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{50n^2 + 25n}}{1 + q^{50n + 10}}\right) \\
& + q \left(\frac{J_{100}^{15} J_{20,100} J_{30,100}^2 J_{50,100}}{ J_{5,100}^2 J_{10,100}^2 J_{15,100}^4 J_{25,100}J_{35,100}^4 J_{40,100}^3J_{45,100}^2 } \right) \\
&+ q^2 \left(\frac{J_{100}^{15} J_{50,100} }{ J_{5,100}^3 J_{15,100}^3 J_{20,100} J_{25,100}J_{35,100}^3 J_{40,100} J_{45,100}^3}\right) \\
&+ q^3\left(\frac{J_{100}^{15}J_{10,100}^2J_{40,100} J_{50,100} }{ J_{5,100}^4J_{15,100}^2 J_{20,100}^3 J_{25,100} J_{30,100}^2 J_{35,100}^2J_{45,100}^4}\right) \\
& + q^4\left(\frac{2J_{100}^{15}J_{30,100} J_{50,100} }{J_{5,100}^2 J_{10,100} J_{15,100}^3 J_{25,100}^3 J_{35,100}^3J_{40,100}^2 J_{45,100}^2 } + \frac{1}{J_{25,100}} \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{50n^2 + 75n +20}}{1 + q^{50n + 30}}\right).
\end{align*}
\end{theorem}
In addition, we require the following two facts about $q$-series which follow directly from the definitions. For integers $a,b,c$ we have
\begin{align}
\label{lemma1}
(q^a;q^b)_\infty(-q^a;q^b)_\infty & = (q^{2a};q^{2b})_\infty, \\
\label{lemma2}
(cq^a;q^{2b})_\infty(cq^{a+b};q^{2b})_\infty & = (cq^{a};q^{b})_\infty.
\end{align}
Finally, we recall the Jacobi Triple Product formula, which can be found in \cite{Andrews},
\begin{equation}\label{JTP}
\sum_{n \in \mathbb{Z}} z^n q^{n^2} = (-zq,-q/z,q^2;q^2)_{\infty}.
\end{equation}
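For example, setting $z = 1$ in \eqref{JTP} yields the classical identity $\sum_{n \in \mathbb{Z}} q^{n^2} = (-q;q^2)_{\infty}^2 (q^2;q^2)_{\infty}$.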
\section{Proof of Theorem \ref{main}}\label{proof}
\subsection{Proof of (\ref{Mao10 Conjecture a})}\label{first proof}
In order to prove (\ref{Mao10 Conjecture a}), we need to show that the series
\[
\sum_{n=0}^\infty (N(0,10,5n) + N(1,10,5n) - N(4,10,5n) - N(5,10,5n))q^n
\]
has strictly positive coefficients. Using the first part of Theorem \ref{MaoThm1}, we see that
\[
\sum_{n=0}^{\infty} \big(N(0,10,n) + N(1,10,n) - N(4,10,n) - N(5,10,n)\big)q^n= S_0 + qS_1 + q^2S_2 + q^3S_3 + q^4S_4,
\]
where each $S_i$ is a series in $q^5$. Thus we can obtain our desired generating function by letting $q\mapsto q^\frac{1}{5}$ in $S_0$. We obtain that
\begin{multline}\label{gen1}
\sum_{n=0}^\infty (N(0,10,5n) + N(1,10,5n) - N(4,10,5n) - N(5,10,5n))q^n \\
= \frac{J_5 J_{10}^5 J_{4,10}^2 }{J_{2,10}^4 J_{3,10}^3 } + \frac{1}{J_5} \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}}{1 + q^{5n + 1}} = \frac{1}{J_5}\left(\frac{J_5^2 J_{10}^5 J_{4,10}^2 }{J_{2,10}^4 J_{3,10}^3 } + \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}}{1 + q^{5n + 1}}\right).
\end{multline}
We will now show that \eqref{gen1} has strictly positive $q$-series coefficients for $n \geq 0$. Since $\frac{1}{J_5}$ has all nonnegative coefficients and a constant term of $1$, it suffices to show that
\[
\frac{J_5^2 J_{10}^5 J_{4,10}^2 }{J_{2,10}^4 J_{3,10}^3 } + \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}}{1 + q^{5n + 1}}
\]
has all positive coefficients. First, we split the sum into nonnegative and negative indices, and reindex to see that
\begin{multline*}
\sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}}{1 + q^{5n + 1}} = \sum_{n=0}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}}{1 + q^{5n + 1}} + \sum_{n=1}^{\infty} \frac{(-1)^n q^{(15n^2 - 5n)/2}}{1 + q^{5n - 1}} \\
= \sum_{n=0}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}(1 - q^{5n + 1})}{1 - q^{10n + 2}} + \sum_{n=1}^{\infty} \frac{(-1)^n q^{(15n^2 - 5n)/2}(1 - q^{5n - 1})}{1 - q^{10n - 2}}.
\end{multline*}
Now, we split according to the summation index $n$ modulo $2$, to obtain
\begin{multline*}
\sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}}{1 + q^{5n + 1}} \\
= \sum_{n=0}^{\infty} \frac{q^{(15(2n)^2 + 15(2n))/2 + 1}(1 - q^{5(2n) + 1})}{1 - q^{10(2n) + 2}} - \sum_{n=0}^{\infty} \frac{q^{(15(2n+1)^2 + 15(2n+1))/2 + 1}(1 - q^{5(2n+1) + 1})}{1 - q^{10(2n+1) + 2}} \\
+ \sum_{n=1}^{\infty} \frac{q^{(15(2n)^2 - 5(2n))/2}(1 - q^{5(2n) - 1})}{1 - q^{10(2n) - 2}} - \sum_{n=1}^{\infty} \frac{q^{(15(2n-1)^2 - 5(2n-1))/2}(1 - q^{5(2n-1) - 1})}{1 - q^{10(2n-1) - 2}}.
\end{multline*}
Gathering the positive summands together, we see that
\[
\sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}}{1 + q^{5n + 1}} = S - T_1 - T_2 - T_3 - T_4,
\]
where
\[
S := \sum_{n=0}^{\infty}\frac{q^{30n^2 +15n + 1}}{1 - q^{20n + 2}} + \sum_{n=0}^{\infty}\frac{q^{30n^2+55n+22}}{1 - q^{20n+12}} + \sum_{n=1}^{\infty}\frac{q^{30n^2-5n}}{1 - q^{20n-2}}+ \sum_{n=1}^{\infty}\frac{q^{30n^2 -25n+ 4 }}{1 - q^{20n-12}},
\]
and
\begin{align*}
T_1 = \sum_{n=0}^\infty a_1(n)q^n &:= \sum_{n=0}^{\infty}\frac{q^{30n^2+25n + 2}}{1 - q^{20n + 2}}, \\
T_2 = \sum_{n=0}^\infty a_2(n)q^n &:= \sum_{n=0}^{\infty}\frac{q^{30n^2+45n+16}}{1 - q^{20n+12}}, \\
T_3 = \sum_{n=0}^\infty a_3(n)q^n &:= \sum_{n=1}^{\infty}\frac{q^{30n^2 + 5n - 1}}{1 - q^{20n-2}}, \\
T_4 = \sum_{n=0}^\infty a_4(n)q^n &:= \sum_{n=1}^{\infty}\frac{q^{30n^2 -35n+10}}{1 - q^{20n-12}}. \\
\end{align*}
We see that $S$, $T_1,\ldots,T_4$ all have nonnegative coefficients. Thus to prove (\ref{Mao10 Conjecture a}), it suffices to show that
\[
\frac{J_5^2 J_{10}^5 J_{4,10}^2 }{J_{3,10}^3 J_{2,10}^4} - T_1 - T_2 - T_3 - T_4
\]
has positive coefficients. Let $T_1+T_2+T_3+T_4 = \sum_{n=1}^\infty a(n)q^n$, and let
\[
\frac{J_5^2 J_{10}^5 J_{4,10}^2 }{J_{3,10}^3 J_{2,10}^4} = 1 +\sum_{n=1}^\infty b(n)q^n.
\]
We will show that $b(n)>a(n)$ for all $n\geq 1$.
Expanding the denominator of $T_1$ as a geometric series, we see that
\[
T_1 = \sum_{n=0}^\infty \sum_{k=0}^\infty q^{30n^2 + (20k+25)n + (2k+2)}.
\]
Thus for a given $N\geq 0$, we see that $a_1(N)$ counts the number of nonnegative integer pairs $(n,k)$ such that
\begin{equation}\label{T1eqn}
N=30n^2 + (20k+25)n + (2k+2).
\end{equation}
Clearly for each choice of $n\geq 0$ there is at most one $k\geq 0$ such that $(n,k)$ is a solution to \eqref{T1eqn}. Also, since $(20k+25)n + (2k+2)$ is positive for all $n,k\geq 0$, if $n\geq \sqrt{\frac{N}{30}}$, then no solutions exist. Thus, we have that $a_1(N)\leq \lfloor \sqrt{\frac{N}{30}} \rfloor +1$ for all $N\geq 0$.
Similarly,
\[
T_2 = \sum_{n=0}^\infty \sum_{k=0}^\infty q^{30n^2 + (20k+45)n + (12k+16)},
\]
and so $a_2(N)\leq \lfloor \sqrt{\frac{N}{30}}\rfloor +1$ for all $N\geq 0$ as well. For $T_3$, we have
\[
T_3 = \sum_{n=1}^\infty \sum_{k=0}^\infty q^{30n^2 + (20k+5)n - (2k+1)}.
\]
Since the sum starts at $n=1$ we have one fewer term. Also, we see that $(20k+5)n - (2k+1)$ is positive for all $n\geq1,k\geq0$. Thus we get a bound of $a_3(N)\leq \lfloor \sqrt{\frac{N}{30}}\rfloor $ for all $N\geq 1$. The $T_4$ case is a little different. Here,
\[
T_4 = \sum_{n=1}^\infty \sum_{k=0}^\infty q^{30n^2 + (20k-35)n - (12k-10)}.
\]
We observe that $30n^2 + (20k-35)n - (12k-10) \geq 5(3n-2)(2n-1) \geq 5(2n-1)^2$ for all $n\geq 1$ and $k\geq 0$; indeed, at $k=0$ the exponent factors as $30n^2-35n+10=5(3n-2)(2n-1)$, and the exponent is increasing in $k$ since $20n-12>0$ for $n\geq 1$. Thus if $2n-1 \geq \sqrt{\frac{N}{5}}$, then no solutions exist to $N=30n^2 + (20k-35)n - (12k-10)$. We then get a bound of $a_4(N)\leq \lfloor \sqrt{\frac{N}{20}}\rfloor +1$ for all $N\geq 1$.
Together, noting that none of the $T_i$ have a constant term, we see that for any $n\geq 1$,
\begin{equation}\label{a_bound}
a(n) \leq 3\left\lfloor \sqrt{\frac{n}{30}} \right\rfloor + \left\lfloor \sqrt{\frac{n}{20}} \right\rfloor + 3.
\end{equation}
By \eqref{lemma1} and \eqref{lemma2}, we see that
\begin{align*}
\frac{J_5^2 J_{10}^5 J_{4,10}^2 }{J_{2,10}^4 J_{3,10}^3 } &= \frac{(q^5; q^5)_{\infty}^2 (q^4, q^6; q^{10})_{\infty}^2}{(q^3, q^7; q^{10})_{\infty}^3 (q^2, q^8; q^{10})_{\infty}^4} \\
&= \frac{(q^5; q^5)_{\infty}^2 (-q^2, -q^3, -q^7, -q^8; q^{10})_{\infty}^2}{(q^3, q^7; q^{10})_{\infty} (q^2, q^8; q^{10})_{\infty}^2} \\
&= \frac{(q^5; q^5)_{\infty}^2 (-q^2, -q^3; q^5)_{\infty}^2}{(q^3, q^7; q^{10})_{\infty} (q^2, q^8; q^{10})_{\infty}^2}.
\end{align*}
\noindent Applying \eqref{JTP} with $z = q^{1/2}$ and $q$ replaced by $q^{5/2}$, we obtain
\begin{multline*}
\frac{J_5^2 J_{4,10}^2 J_{10}^5}{J_{3,10}^3 J_{2,10}^4} = \frac{1}{(q^3, q^7; q^{10})_{\infty} (q^2, q^8; q^{10})_{\infty}^2} \left[\sum_{n=-\infty}^{\infty} q^{n(5n + 1)/2}\right]^2 \\
= \frac{1}{(1 - q^2)(1 - q^3)} \cdot \frac{1}{(q^2, q^7, q^{12}, q^{13}; q^{10})_{\infty} (q^8; q^{10})_{\infty}^2}\left[\sum_{n=-\infty}^{\infty} q^{n(5n + 1)/2}\right]^2,
\end{multline*}
where we observe that all series involved in this product have nonnegative coefficients. Let
\[
\frac{1}{(1 - q^2)(1 - q^3)} = \sum_{i,j\geq 0} q^{2i+3j}=\sum_{n=0}^{\infty} c(n)q^n.
\]
We note that $c(0)=1$ and for $n\geq 1$, $c(n)$ is equal to the number of nonnegative integer solutions $(i,j)$ of the equation $2i + 3j = n$. For a fixed $n\geq 1$ and $j\geq 0$, we see that there is at most one $i\geq 0$ for which $(i,j)$ is a solution, and such an $i$ exists if and only if $0\leq j \leq n/3$ and $j\equiv n \pmod 2$. (For instance, $2i+3j=7$ forces $j=1$ and $i=2$, so $c(7)=1=\lfloor \frac{7}{6} \rfloor$.) Considering each possible residue of $n$ modulo $6$, we see that in all cases, $c(n)\geq \lfloor \frac{n}{6} \rfloor $. Thus, we have that for all $n\geq 1$,
\begin{equation}\label{b_bound}
b(n) \geq \left\lfloor \frac{n}{6} \right\rfloor.
\end{equation}
It suffices then to show that $\frac{n}{6} > 3\sqrt{\frac{n}{30}}+\sqrt{\frac{n}{20}} + 4$ for sufficiently large $n$, and to check that $b(n)>a(n)$ for all remaining cases. We have that $\frac{n}{6} > 3\sqrt{\frac{n}{30}}+\sqrt{\frac{n}{20}} + 4$ if and only if $\frac{1}{6}n - (\frac{\sqrt{30}+\sqrt{5}}{10})\sqrt{n} -4 \geq 0$, which occurs for $n\geq 60$. Moreover, we also see that $b(n)>a(n)$ for $1\leq n \leq 59$, by a quick Maple calculation, which completes the proof of \eqref{Mao10 Conjecture a}.
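The finite check just mentioned is easy to reproduce without Maple. The following Python sketch (our own illustration, not the original computation; the truncation order $N$ and all helper names are our choices) verifies $b(n)>a(n)$ for $1\leq n\leq 59$ using exact truncated power-series arithmetic:
\begin{verbatim}
# Verify b(n) > a(n) for 1 <= n <= 59 with exact truncated power series.
N = 60

def mul(f, g):                       # product of two series truncated at q^N
    h = [0] * N
    for i, c in enumerate(f):
        if c:
            for j in range(N - i):
                h[i + j] += c * g[j]
    return h

def inv(f):                          # inverse of a series with f[0] = 1
    g = [0] * N; g[0] = 1
    for n in range(1, N):
        g[n] = -sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

def poch(a, b):                      # (q^a; q^b)_infinity, truncated
    f = [1] + [0] * (N - 1)
    for k in range(a, N, b):
        f = mul(f, [1] + [0] * (k - 1) + [-1] + [0] * (N - k - 1))
    return f

def J(a, b):                         # J_{a,b} = (q^a, q^{b-a}, q^b; q^b)_inf
    return mul(mul(poch(a, b), poch(b - a, b)), poch(b, b))

num = [1] + [0] * (N - 1)            # J_5^2 J_10^5 J_{4,10}^2
for f in 2 * [poch(5, 5)] + 5 * [poch(10, 10)] + 2 * [J(4, 10)]:
    num = mul(num, f)
den = [1] + [0] * (N - 1)            # J_{2,10}^4 J_{3,10}^3
for f in 4 * [J(2, 10)] + 3 * [J(3, 10)]:
    den = mul(den, f)
b = mul(num, inv(den))               # 1 + sum_{n>=1} b(n) q^n

a = [0] * N                          # coefficients of T_1 + T_2 + T_3 + T_4
for n in range(N):
    for k in range(N):
        for e in (30*n*n + (20*k + 25)*n + 2*k + 2,       # T_1
                  30*n*n + (20*k + 45)*n + 12*k + 16):    # T_2
            if e < N: a[e] += 1
for n in range(1, N):
    for k in range(N):
        for e in (30*n*n + (20*k + 5)*n - (2*k + 1),      # T_3
                  30*n*n + (20*k - 35)*n - (12*k - 10)):  # T_4
            if 0 <= e < N: a[e] += 1

assert all(b[n] > a[n] for n in range(1, N))
\end{verbatim}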
For the remaining conjectures we use a similar technique, and so we give a somewhat abbreviated discussion of the proofs.
\subsection{Proof of (\ref{Mao10 Conjecture b})}\label{second proof}
In order to prove (\ref{Mao10 Conjecture b}), we need to show that
\[
\sum_{n=1}^\infty (N(1,10,5n) + N(2,10,5n) - N(3,10,5n) - N(4,10,5n))q^n
\]
has nonnegative coefficients. Using the second part of Theorem \ref{MaoThm1}, we obtain that
\begin{multline}\label{gen2}
\sum_{n=1}^\infty (N(1,10,5n) + N(2,10,5n) - N(3,10,5n) - N(4,10,5n))q^n \\
= \frac{1}{J_5} \left(\frac{2qJ_{10}^6}{J_{2,10}^2 J_{3,10}^2} - \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}}{1 + q^{5n + 1}}\right).
\end{multline}
Since $\frac{1}{J_5}$ has all nonnegative coefficients and a constant term of $1$, it suffices to show that
\[
\frac{2qJ_{10}^6}{J_{2,10}^2 J_{3,10}^2} - \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{(15n^2 + 15n)/2 + 1}}{1 + q^{5n + 1}}
\]
has all nonnegative coefficients. We observe that the sum in this case is the same as the sum in the proof of \eqref{Mao10 Conjecture a}; however, in this setting we are subtracting, rather than adding, the sum. Thus by our dissection in the last subsection, it suffices to prove that
\[
\frac{2qJ_{10}^6}{J_{2,10}^2 J_{3,10}^2} - T_1' - T_2' - T_3' - T_4'
\]
has positive coefficients, where
\begin{align*}
T_1' = \sum_{n=0}^\infty a_1'(n)q^n &:= \sum_{n=0}^{\infty}\frac{q^{30n^2 +15n + 1}}{1 - q^{20n + 2}}= \sum_{n=0}^{\infty}\sum_{k=0}^{\infty} q^{30n^2 + 15n+1 +k(20n+2)}, \\
T_2' = \sum_{n=0}^\infty a_2'(n)q^n &:= \sum_{n=0}^{\infty}\frac{q^{30n^2+55n+22}}{1 - q^{20n+12}}= \sum_{n=0}^{\infty}\sum_{k=0}^{\infty} q^{30n^2 + 55n+22 +k(20n+12)}, \\
T_3' = \sum_{n=0}^\infty a_3'(n)q^n &:= \sum_{n=1}^{\infty}\frac{q^{30n^2-5n}}{1 - q^{20n-2}}= \sum_{n=1}^{\infty}\sum_{k=0}^{\infty} q^{30n^2 -5n +k(20n-2)}, \\
T_4' = \sum_{n=0}^\infty a_4'(n)q^n &:= \sum_{n=1}^{\infty}\frac{q^{30n^2 -25n+ 4 }}{1 - q^{20n-12}}= \sum_{n=1}^{\infty}\sum_{k=0}^{\infty} q^{30n^2 -25n+ 4 +k(20n-12)}. \\
\end{align*}
Let $T_1'+T_2'+T_3'+T_4' = \sum_{n=1}^\infty a'(n)q^n$, and let
\[
\frac{2qJ_{10}^6}{J_{2,10}^2 J_{3,10}^2} = \sum_{n=1}^\infty b'(n)q^n.
\]
We will show that $b'(n)> a'(n)$ for all $n\geq 1$.
Arguing as in Section \ref{first proof}, we see that for any $N\geq 1$, $a_1'(N),a_2'(N) \leq \lfloor \sqrt{\frac{N}{30}} \rfloor +1$. Also, we note that since $30n^2 -5n+k(20n-2) \geq (5n)^2$ for all $n\geq 1$ and $k\geq 0$, we have that $a_3'(N) \leq \lfloor \sqrt{\frac{N}{25}} \rfloor$. Similarly, since $30n^2 -25n+ 4 > 30(n-1)^2$ for all $n\geq 1$, we have that $a_4'(N) \leq \lfloor \sqrt{\frac{N}{30}} \rfloor + 1$. Together, noting that none of the $T_i'$ have a constant term, we see that for any $n\geq 1$,
\begin{equation}\label{a'_bound}
a'(n) \leq 3\left\lfloor \sqrt{\frac{n}{30}} \right\rfloor + \left\lfloor \sqrt{\frac{n}{25}} \right\rfloor + 3.
\end{equation}
We now examine the product. By \eqref{lemma1}, \eqref{lemma2}, and \eqref{JTP}, we see that
\begin{align*}
\frac{2q J_{10}^6}{J_{2,10}^2 J_{3,10}^2} &= 2qL_{3,10} \left(\frac{(-q^2, -q^8; q^{10})_{\infty} (q^{10}; q^{10})_{\infty}}{(q^4, q^{16}; q^{20})_{\infty} (q^2, q^3, q^7, q^8; q^{10})_{\infty}}\right) \\
&= \frac{2q}{(1 - q^2)(1 - q^3)} \left(\frac{L_{3,10}}{(q^4, q^{16}; q^{20})_{\infty} (q^7, q^8, q^{12}, q^{13}; q^{10})_{\infty}} \cdot \sum_{n=-\infty}^{\infty} q^{5n^2 + 2n}\right).
\end{align*}
Arguing as before, we find that the coefficient of $q^n$ in $\frac{2q}{(1 - q^2)(1 - q^3)}$ is at least $2\left\lfloor\frac{n - 1}{6}\right\rfloor$ for $n \geq 1$. We see that $L_{3,10}$ has a constant term of $1$, and by Lemma \ref{L lemma}, $L_{3,10}$ has all nonnegative coefficients. Thus we have that for all $n\geq 1$,
\[
b'(n)\geq 2\left\lfloor\frac{n - 1}{6}\right\rfloor.
\]
Since $2\left\lfloor{\frac{n - 1}{6}}\right\rfloor > 3\left\lfloor \sqrt{\frac{n}{30}} \right\rfloor + \left\lfloor \sqrt{\frac{n}{25}} \right\rfloor + 3$ for $n \geq 24$, it thus suffices to check that $b'(n)> a'(n)$ for $1 \leq n \leq 23$. A quick computation in Maple verifies that this is true.
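The analogous check for $1 \leq n \leq 23$ can be reproduced with the helpers from the previous sketch (again our own illustration, not the original Maple code):
\begin{verbatim}
# Reusing N, mul, inv, poch, J from the previous sketch.
num = [0] * N; num[1] = 2            # the factor 2q, times J_10^6
for f in 6 * [poch(10, 10)]:
    num = mul(num, f)
den = [1] + [0] * (N - 1)            # J_{2,10}^2 J_{3,10}^2
for f in 2 * [J(2, 10)] + 2 * [J(3, 10)]:
    den = mul(den, f)
bp = mul(num, inv(den))              # coefficients b'(n)

ap = [0] * N                         # coefficients of T_1' + ... + T_4'
for n in range(N):
    for k in range(N):
        for e in (30*n*n + 15*n + 1 + k*(20*n + 2),      # T_1'
                  30*n*n + 55*n + 22 + k*(20*n + 12)):   # T_2'
            if e < N: ap[e] += 1
for n in range(1, N):
    for k in range(N):
        for e in (30*n*n - 5*n + k*(20*n - 2),           # T_3'
                  30*n*n - 25*n + 4 + k*(20*n - 12)):    # T_4'
            if e < N: ap[e] += 1

assert all(bp[n] > ap[n] for n in range(1, 24))
\end{verbatim}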
\subsection{Proof of (\ref{Mao610 Conjecture b})}\label{third proof}
In order to prove (\ref{Mao610 Conjecture b}), we need to show that
\[
\sum_{n=0}^\infty (N_2(0,10,5n) + N_2(1,10,5n) - N_2(4,10,5n) - N_2(5,10,5n))q^n
\]
has positive coefficients. Using Theorem \ref{MaoThm2}, we obtain that
\begin{multline}\label{gen3}
\sum_{n=0}^\infty (N_2(0,10,5n) + N_2(1,10,5n) - N_2(4,10,5n) - N_2(5,10,5n))q^n \\
= \frac{1}{J_{5,20}} \left(\frac{2qJ_{20}^{15}J_{2,20}J_{10,20}}{J_{1,20}^3 J_{3,20}^2J_{4,20}^2J_{5,20}^2J_{6,20}J_{7,20}^2J_{9,20}^3} + \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{10n^2 + 5n}}{1 + q^{10n + 2}}\right).
\end{multline}
Since $\frac{1}{J_{5,20}}$ has all nonnegative coefficients and a constant term of $1$, it suffices to show that
\[
\frac{2qJ_{20}^{15}J_{2,20}J_{10,20}}{J_{1,20}^3 J_{3,20}^2J_{4,20}^2J_{5,20}^2J_{6,20}J_{7,20}^2J_{9,20}^3} + \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{10n^2 + 5n}}{1 + q^{10n + 2}}
\]
has all positive coefficients. Splitting up the sum as we do in the proof of \eqref{Mao10 Conjecture a}, we obtain that
\[
\sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{10n^2 + 5n}}{1 + q^{10n + 2}} = S'' - T_1'' - T_2'' - T_3'' - T_4'',
\]
where
\[
S'':= \sum_{n=0}^{\infty}\frac{q^{40n^2 +10n}}{1 - q^{40n + 4}} + \sum_{n=0}^{\infty}\frac{q^{40n^2+70n+27}}{1 - q^{40n+24}} + \sum_{n=1}^{\infty}\frac{q^{40n^2+10n-2}}{1 - q^{40n-4}}+ \sum_{n=1}^{\infty}\frac{q^{40n^2 -10n-9}}{1 - q^{40n-24}},
\]
and
\begin{align*}
T_1'' = \sum_{n=0}^\infty a_1''(n)q^n &:= \sum_{n=0}^{\infty}\frac{q^{40n^2+30n + 2}}{1 - q^{40n + 4}} = \sum_{n=0}^{\infty}\sum_{k=0}^{\infty} q^{40n^2 + 30n+2 +k(40n+4)}, \\
T_2'' = \sum_{n=0}^\infty a_2''(n)q^n &:= \sum_{n=0}^{\infty}\frac{q^{40n^2+50n+15}}{1 - q^{40n+24}}= \sum_{n=0}^{\infty}\sum_{k=0}^{\infty} q^{40n^2 + 50n+15 +k(40n+24)}, \\
T_3'' = \sum_{n=0}^\infty a_3''(n)q^n &:= \sum_{n=1}^{\infty}\frac{q^{40n^2 + 30n - 4}}{1 - q^{40n-4}}= \sum_{n=1}^{\infty}\sum_{k=0}^{\infty} q^{40n^2 + 30n-4 +k(40n-4)}, \\
T_4'' = \sum_{n=0}^\infty a_4''(n)q^n &:= \sum_{n=1}^{\infty}\frac{q^{40n^2 -30n+3}}{1 - q^{40n-24}}= \sum_{n=1}^{\infty}\sum_{k=0}^{\infty} q^{40n^2 -30n+3 +k(40n-24)}. \\
\end{align*}
We see that $S''$, $T_1'',\ldots,T_4''$ all have nonnegative coefficients. Thus to prove (\ref{Mao610 Conjecture b}), it suffices to show that
\[
\frac{2qJ_{20}^{15}J_{2,20}J_{10,20}}{J_{1,20}^3 J_{3,20}^2J_{4,20}^2J_{5,20}^2J_{6,20}J_{7,20}^2J_{9,20}^3} - T_1'' - T_2'' - T_3'' - T_4''
\]
has positive coefficients. Let $T_1''+T_2''+T_3''+T_4'' = \sum_{n=1}^\infty a''(n)q^n$, and let
\[
\frac{2qJ_{20}^{15}J_{2,20}J_{10,20}}{J_{1,20}^3 J_{3,20}^2J_{4,20}^2J_{5,20}^2J_{6,20}J_{7,20}^2J_{9,20}^3} = \sum_{n=1}^\infty b''(n)q^n.
\]
We will show that $b''(n) > a''(n)$ for all $n\geq 1$.
Again arguing as in Section \ref{first proof}, we see that for any $N\geq 1$, $a_1''(N),a_2''(N) \leq \lfloor \sqrt{\frac{N}{40}} \rfloor +1$, and $a_3''(N) \leq \lfloor \sqrt{\frac{N}{40}} \rfloor$. Also, since $40n^2 -30n+3 > 6(2n-1)^2$ for all $n\geq 1$, we have that $a_4''(N) \leq \lfloor \sqrt{\frac{N}{24}} \rfloor + 1$. Together, noting that none of the $T_i''$ have a constant term, we see that for any $n\geq 1$,
\begin{equation}\label{a''_bound}
a''(n) \leq 3\left\lfloor \sqrt{\frac{n}{40}} \right\rfloor + \left\lfloor \sqrt{\frac{n}{24}} \right\rfloor + 3.
\end{equation}
We now examine the product. By \eqref{lemma1}, we find that
\begin{multline*}
\frac{2qJ_{2,20} J_{10,20} J_{20}^{15}}{J_{6,20}J_{3,20}^2 J_{4,20}^2 J_{5,20}^2 J_{7,20}^2 J_{1,20}^3 J_{9,20}^3} \\
= \frac{2q}{(1-q)^2}L_{9,20}^2 \left(\frac{(-q, -q^9, -q^{11}, -q^{19}; q^{20})_{\infty} (-q^5, -q^{15}; q^{20})_{\infty}^2}{(q^6, q^{14}; q^{20})_{\infty} (q^3, q^4, q^7, q^{13}, q^{16}, q^{17}, q^{19},q^{21}; q^{20})_{\infty}^2 (q^{19}; q^{20})_{\infty}^3}\right). \\
\end{multline*}
Expanding gives that $\frac{2q}{(1-q)^2} = \sum_{n=1}^\infty 2nq^n$. We know by Lemma \ref{L lemma} that $L_{9,20}$ has nonnegative coefficients and a constant term of $1$, and we can observe that this is true for the rest of the product as well. Thus, we have for all $n\geq 1$,
\[
b''(n)\geq 2n.
\]
Since $2n > 3\left\lfloor \sqrt{\frac{n}{40}} \right\rfloor + \left\lfloor \sqrt{\frac{n}{24}} \right\rfloor + 3$ for $n \geq 2$, it thus suffices to observe that $2=b''(1) > a''(1)=0$.
\subsection{Proof of (\ref{Mao610 Conjecture c})}\label{fourth proof}
In order to prove (\ref{Mao610 Conjecture c}), we need to show that
\[
\sum_{n=0}^\infty (N_2(0,10,5n+4) + N_2(1,10,5n+4) - N_2(4,10,5n+4) - N_2(5,10,5n+4))q^n
\]
has positive coefficients. Using Theorem \ref{MaoThm2}, we obtain that
\begin{multline}\label{gen4}
\sum_{n=0}^\infty (N_2(0,10,5n+4) + N_2(1,10,5n+4) - N_2(4,10,5n+4) - N_2(5,10,5n+4))q^n \\
= \frac{1}{J_{5,20}} \left(\frac{2J_{20}^{15}J_{6,20}J_{10,20}}{J_{1,20}^2 J_{2,20}J_{3,20}^3J_{5,20}^2J_{7,20}^3J_{8,20}^2J_{9,20}^2} + \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{10n^2 + 15n+4}}{1 + q^{10n + 6}}\right).
\end{multline}
Since $\frac{1}{J_{5,20}}$ has all nonnegative coefficients and a constant term of $1$, it suffices to show that
\[
\frac{2J_{20}^{15}J_{6,20}J_{10,20}}{J_{1,20}^2 J_{2,20}J_{3,20}^3J_{5,20}^2J_{7,20}^3J_{8,20}^2J_{9,20}^2} + \sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{10n^2 + 15n+4}}{1 + q^{10n + 6}}
\]
has all positive coefficients. Splitting up the sum as we do in the proof of \eqref{Mao10 Conjecture a}, we obtain that
\[
\sum_{n=-\infty}^{\infty} \frac{(-1)^n q^{10n^2 + 15n +4}}{1 + q^{10n + 6}} = S''' - T_1''' - T_2''' - T_3''' - T_4''',
\]
where
\[
S''':= \sum_{n=0}^{\infty}\frac{q^{40n^2 +30n+4}}{1 - q^{40n + 12}} + \sum_{n=0}^{\infty}\frac{q^{40n^2+ 90n+45 }}{1 - q^{40n+32}} + \sum_{n=1}^{\infty}\frac{q^{40n^2-10n-2}}{1 - q^{40n-12}}+ \sum_{n=1}^{\infty}\frac{q^{40n^2 -30n-3 }}{1 - q^{40n-32}},
\]
and
\begin{align*}
T_1''' = \sum_{n=0}^\infty a_1'''(n)q^n &:= \sum_{n=0}^{\infty}\frac{q^{40n^2+50n + 10}}{1 - q^{40n + 12}} = \sum_{n=0}^{\infty}\sum_{k=0}^{\infty} q^{40n^2+50n + 10 +k(40n+12)}, \\
T_2''' = \sum_{n=0}^\infty a_2'''(n)q^n &:= \sum_{n=0}^{\infty}\frac{q^{40n^2+ 70n+29 }}{1 - q^{40n+32}}= \sum_{n=0}^{\infty}\sum_{k=0}^{\infty} q^{40n^2+ 70n+29 +k(40n+32)}, \\
T_3''' = \sum_{n=0}^\infty a_3'''(n)q^n &:= \sum_{n=1}^{\infty}\frac{q^{40n^2 + 10n - 8}}{1 - q^{40n-12}}= \sum_{n=1}^{\infty}\sum_{k=0}^{\infty} q^{40n^2 + 10n - 8 +k(40n-12)}, \\
T_4''' = \sum_{n=0}^\infty a_4'''(n)q^n &:= \sum_{n=1}^{\infty}\frac{q^{40n^2 -50n+13}}{1 - q^{40n-32}}= \sum_{n=1}^{\infty}\sum_{k=0}^{\infty} q^{40n^2 -50n+13 +k(40n-32)}. \\
\end{align*}
We see that $S'''$, $T_1''',\ldots,T_4'''$ all have nonnegative coefficients. Thus to prove (\ref{Mao610 Conjecture c}), it suffices to show that
\[
\frac{2J_{20}^{15}J_{6,20}J_{10,20}}{J_{1,20}^2 J_{2,20}J_{3,20}^3J_{5,20}^2J_{7,20}^3J_{8,20}^2J_{9,20}^2} - T_1''' - T_2''' - T_3''' - T_4'''
\]
has positive coefficients. Let $T_1'''+T_2'''+T_3'''+T_4''' = \sum_{n=1}^\infty a'''(n)q^n$, and let
\[
\frac{2J_{20}^{15}J_{6,20}J_{10,20}}{J_{1,20}^2 J_{2,20}J_{3,20}^3J_{5,20}^2J_{7,20}^3J_{8,20}^2J_{9,20}^2} = 2+ \sum_{n=1}^\infty b'''(n)q^n.
\]
We will show that $b'''(n) > a'''(n)$ for all $n\geq 1$.
Again arguing as in Section \ref{first proof}, we see that for any $N\geq 1$, $a_1'''(N),a_2'''(N) \leq \lfloor \sqrt{\frac{N}{40}} \rfloor +1$, and $a_3'''(N) \leq \lfloor \sqrt{\frac{N}{40}} \rfloor$. Also, since $40n^2 -50n+13 > 40(n-1)^2$ for all $n\geq 1$, we have that $a_4'''(N) \leq \lfloor \sqrt{\frac{N}{40}} \rfloor + 1$. Together, noting that none of the $T_i'''$ have a constant term, we see that for any $n\geq 1$,
\begin{equation}\label{a'''_bound}
a'''(n) \leq 4\left\lfloor \sqrt{\frac{n}{40}} \right\rfloor + 3.
\end{equation}
By computing the $n = 0$ term of the sum appearing in (\ref{Mao610 Conjecture c}), we find that the constant term is $1$. We may thus instead consider
\begin{equation}
\label{Mao610c Reformulated}
\frac{2J_{6,20} J_{10,20} J_{20}^{15}}{J_{2,20} J_{1,20}^2 J_{5,20}^2 J_{8,20}^2 J_{9,20}^2 J_{3,20}^3 J_{7,20}^3} - \frac{1}{(1 - q)^2}.
\end{equation}
We now examine the product. By \eqref{lemma1}, we find that
\begin{multline*}
\frac{2 J_{6,20} J_{10,20} J_{20}^{15}}{J_{2,20} J_{1,20}^2 J_{5,20}^2 J_{8,20}^2 J_{9,20}^2 J_{3,20}^3 J_{7,20}^3} \\
= \frac{2}{(1-q)^2}L_{9,20}^2 \left(\frac{(-q^{3}, -q^{7}, -q^{13}, -q^{17}; q^{20})_{\infty} (-q^{5}, -q^{15}; q^{20})_{\infty}^2}{(q^2, q^{18}; q^{20})_{\infty} (q^3, q^7, q^8, q^{12}, q^{13}, q^{17}, q^{19}, q^{21}; q^{20})_{\infty}^2}\right).
\end{multline*}
As in Section \ref{third proof}, expanding gives $\frac{2}{(1-q)^2} =\sum_{n=0}^\infty 2(n+1)q^n$. Also, we have already seen that $L_{9,20}$ has nonnegative coefficients and a constant term of $1$, and we can observe that this is true for the rest of the product as well. Thus, we have for all $n\geq 1$,
\[
b'''(n)\geq 2(n+1).
\]
Since $2(n+1) > 4\left\lfloor \sqrt{\frac{n}{40}} \right\rfloor + 3$ for $n \geq 1$, this completes the proof of \eqref{Mao610 Conjecture c}.
This paper proposes a Perturbed Estimated Gradient Descent algorithm that only accesses zeroth-order information of the objective function. With only estimated gradient information, we prove second-order stationary point convergence of the algorithm and provide the convergence rate. To the best of our knowledge, this is the first result that provides convergence-rate guarantees for a gradient descent based method achieving an $\epsilon$-second-order stationary point with zeroth-order information.
In the proposed algorithm, we use a perturbed version of estimated gradient descent, where the perturbation is needed to escape first-order stationary points that are not second-order stationary points. However, it may be possible that the estimation error controlled through Gaussian smoothing alone helps escape saddle points. Whether the additional perturbation in the algorithm can be removed is a topic of future work.
\section{Problem Formulation and Assumptions}
In this section, we introduce the notation used in this paper, give the definitions we will need, and formally state the problem formulation.
\subsection{Notations}
Bold upper-case letters $\mathbf{A}$ and bold lower-case letters $\bm{x}$ represent matrices and vectors, respectively. $\bm{x}_i$ denotes the $i^{th}$ element of the vector $\bm{x}$. $\Vert\cdot\Vert$ is the $l_2$-norm and spectral norm for vectors and matrices, respectively. We use $\lambda_{min}(\cdot)$ to denote the smallest eigenvalue of a matrix.
For a twice-differentiable function $f$: $\mathbb{R}^d\rightarrow\mathbb{R}$, $\nabla f(\cdot)$ and $\nabla^2 f(\cdot)$ denote the gradient and Hessian of $f$, respectively. $f^{*}$ represents the global minimum of the function $f$. $h(n) = O(g(n))$ if and only if there exists a positive real number $M$ and a real number $n_0$ such that ${\displaystyle |h(n)|\leq \;Mg(n){\text{ for all }}n\geq n_{0}.}$ Further, $h(n) = {\widetilde{O}}(g(n))$ if $h(n) = O(g(n) \log^k (g(n)) )$ for some $k>0$.
$\mathbb{B}^{(d)}_{\bm{x}}(r)$ represents the ball in $d$ dimensions with radius $r$ and center point $\bm{x}$, and we write $\mathbb{B}_{\bm{x}}(r)$ to simplify the notation when the dimension is clear. $\mathcal{P}_\chi(\cdot)$ denotes the projection onto the subspace $\chi$. The norm is assumed to be the Euclidean norm, unless mentioned otherwise.
\subsection{Definitions }
In this sub-section, we will define a few properties of the function and the stationary point that will be used in the paper.
\begin{definition} \label{smoothdef}
A differentiable function $f(\cdot)$ is $l$-smooth if $\forall \bm{x},\bm{y},$
$$\quad \Vert\nabla f(\bm{x})-\nabla f(\bm{y})\Vert\leq l\Vert\bm{x}-\bm{y}\Vert.$$
\end{definition}
$l$-smoothness limits how fast the gradient of the function can change. Using this property, it is well known that by selecting the stepsize $\eta=\frac{1}{l}$, the gradient descent algorithm converges within $O\big(\frac{l(f(\bm{x}_0)-f^*)}{\epsilon^2}\big)$ iterations to an $\epsilon$-first-order stationary point \citep{jinsept2019}, which is defined as follows.
\begin{definition} Given a differentiable function $f(\cdot)$, $\bm{x}$ is a first-order stationary point if $\Vert\nabla f(\bm{x})\Vert=0$, and $\bm{x}$ is a $\epsilon$-first-order stationary point if $\Vert\nabla f(\bm{x})\Vert\leq\epsilon$.
\end{definition}
A first-order stationary point can be a local minimum, a local maximum, or a saddle point. In minimization problems, all local maxima and saddle points need to be avoided. In this paper, we use ``saddle point" to refer to both, defined as follows:
\begin{definition} Given a differentiable function $f(\cdot)$, $\bm{x}$ is a local minimum if $\exists \epsilon>0$ and $\Vert\bm{y}-\bm{x}\Vert<\epsilon$, we have $f(\bm{x})<f(\bm{y})$. $\bm{x}$ is a ``saddle" point if $\nabla f(\bm{x})=0$ but $\bm{x}$ is not a local minimum. We also define a saddle point $\bm{x}$ to be a strict saddle point if $\lambda_{min}(\nabla^2f(\bm{x}))<0$, which means $\bm{x}$ is non-degenerate.
\end{definition}
In this definition, we use the term strict saddle point to rule out the degenerate case where $\lambda_{min}(\nabla^2f(\bm{x}))=0$ and second-order information is not enough to determine the nature of $\bm{x}$.
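For example, $f(\bm{x})=\bm{x}_1^2-\bm{x}_2^2$ satisfies $\nabla f(\bm{0})=0$ and $\nabla^2 f(\bm{0})=\mathrm{diag}(2,-2)$, so the origin is a strict saddle point with $\lambda_{min}(\nabla^2f(\bm{0}))=-2<0$.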
To avoid all strict saddle points in general non-convex problems, we define $\rho$-Hessian Lipschitz continuity as follows.
\begin{definition} \label{defnhessian}
Given a twice differentiable function $f(\cdot)$, $f$ is $\rho$-Hessian Lipschitz if $\forall\bm{x},\bm{y}$,
$$\quad \Vert\nabla^2f(\bm{x})-\nabla^2f(\bm{y})\Vert\leq\rho\Vert\bm{x}-\bm{y}\Vert.$$
\end{definition}
The $\rho$-Hessian Lipschitz condition constrains how fast the Hessian matrix can change. Further, we give the definition of an $\epsilon$-second-order stationary point, which is the key objective for the proposed algorithm.
\begin{definition}
Given a $\rho$-Hessian Lipschitz function $f(\cdot)$, $\bm{x}$ is a second-order stationary point if $\Vert\nabla f(\bm{x})\Vert=0$ and $\lambda_{min}(\nabla^2f(\bm{x}))\geq0$. Further, $\bm{x}$ is a $\epsilon$-second-order stationary point if
$$\Vert\nabla f(\bm{x})\Vert\leq\epsilon, \quad\quad\lambda_{min}(\nabla^2f(\bm{x}))\geq-\sqrt{\rho\epsilon}.$$
\end{definition}
Finally, we give the definition of the distance between the estimated gradient and true gradient, which is used in the following sections.
\begin{definition}
Given a differentiable function $f(\cdot)$ and a gradient estimator $\hat{\nabla}$, we say $\hat{\nabla}f(\bm{x})$ is $\hat{\epsilon}$-close to $\nabla f(\bm{x})$ for given point $\bm{x}$ and for some $\hat{\epsilon}>0$ if
$$\Vert\hat{\nabla}f(\bm{x})-\nabla f(\bm{x})\Vert\leq\hat{\epsilon}.$$
\end{definition}
\subsection{Problem Formulation}
In this paper, we aim to propose a model-free algorithm for the non-convex optimization problem such that the converged solution is an $\epsilon$-second-order stationary point. We use an estimate of the gradient and perform the gradient descent algorithm with this estimate. The main aim of the paper is to find the number of iterations required for convergence, as well as the number of function queries needed to converge to an $\epsilon$-second-order stationary point. In order to show the convergence rate, we use the following assumptions.
\begin{assumption}\label{assum}
Function $f$ is both $l$-smooth and $\rho$-Hessian Lipschitz, and $\Vert\nabla f(\bm{x})\Vert\leq B$ for some finite and positive $B$ for all $\mathbf{x}\in\mathbb{R}^d$.
\end{assumption}
\begin{assumption}\label{symmetric_hessian}
The Hessian matrix $\nabla^2f(\mathbf{x})$ of the function $f$ is symmetric for all $\mathbf{x}\in\mathbb{R}^d$.
\end{assumption}
Based on these assumptions, this paper studies an algorithm that estimates the gradient and performs gradient descent based on this estimate, and finds the complexity of returning an $\epsilon$-second-order stationary point without any oracle access to the gradient or the Hessian.
\section{Proposed Algorithm}
In this section, we describe the approach used to estimate the gradient, and the estimated gradient descent algorithm built on this estimate.
\subsection{Estimation of the Gradient}
With a zeroth-order oracle, no gradient information is available. To use a gradient descent algorithm, we first need to estimate the gradient from function values. In this subsection, we describe the gradient estimation algorithm used in this paper; its pseudo-code is given in Algorithm \ref{GE}.
\begin{algorithm}[h]
\label{GE}
\caption{Gradient Estimation $GE({d,l,B,c',\hat{\epsilon},\bm{x}})$}
\begin{algorithmic}[1]
\State $v\leftarrow\frac{\hat{\epsilon}}{c'l(d+3)^{1.5}}$, $\sigma^2\leftarrow2c'^{2}(d+4)B^2$, $m\leftarrow\frac{32\sigma^2}{\hat{\epsilon}^2}(\log(\frac{1}{\hat{\epsilon}})+\frac{1}{4})$
\State Generate $\bm{u}_1,...\bm{u}_m$, where $\bm{u}_i\sim\mathcal{N}(0,\mathbf{I}_d)$
\State $\hat{\nabla}f(\bm{x})=\frac{1}{m}\sum_{i=1}^{m}\frac{f(\bm{x}+v\bm{u}_i)-f(\bm{x})}{v}\bm{u}_i$
\State \Return $\hat{\nabla}f(\bm{x})$
\end{algorithmic}
\end{algorithm}
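For concreteness, a minimal NumPy sketch of Algorithm \ref{GE} is given below (an illustration of ours, not part of the analysis; \texttt{f} is a zeroth-order oracle returning function values, and all names are our choices):
\begin{verbatim}
import numpy as np

def estimate_gradient(f, x, eps_hat, l, B, c_prime=3.0, rng=np.random):
    """Gaussian-smoothing gradient estimator (sketch of Algorithm 1)."""
    d = x.size
    v = eps_hat / (c_prime * l * (d + 3) ** 1.5)       # smoothing radius (Line 1)
    sigma2 = 2.0 * c_prime**2 * (d + 4) * B**2         # variance bound (Line 1)
    m = int(np.ceil(32.0 * sigma2 / eps_hat**2
                    * (np.log(1.0 / eps_hat) + 0.25))) # sample count (Line 1)
    u = rng.randn(m, d)                                # u_i ~ N(0, I_d) (Line 2)
    fx = f(x)
    diffs = np.array([(f(x + v * ui) - fx) / v for ui in u])
    return (diffs[:, None] * u).mean(axis=0)           # average estimate (Line 3)
\end{verbatim}
Since the $m$ function queries are independent, they can be issued in parallel.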
The estimation of the gradient uses a Gaussian smoothing approach. The Gaussian smoothing method is not a new idea: \citep{nesterov2017random} described it systematically and used it to give guarantees for zeroth-order convex optimization. Although \citep{balasubramanian2019zeroth} used a similar idea for zeroth-order non-convex optimization, to the best of our knowledge, no prior work provides the total number of samples required to estimate the gradient with error at most $\hat{\epsilon}$. In this paper, we use a concentration inequality and conditional probability arguments to provide such a result, stated formally in Lemma \ref{lemma_GE}.
Recall that $d$ is the dimension of $\bm{x}$, $l$ is the smoothness parameter of Definition \ref{smoothdef}, $B$ is the bound on the gradient norm in Assumption \ref{assum}, $c'>1$ is a constant defined in Lemma \ref{lemma_GE}, $\bm{x}$ is the point at which we estimate the gradient, and $\hat{\epsilon}$ is the intended gap between the estimated and true gradient given in Definition 6. Line 1 of the algorithm sets the parameters used in the following lines: $v$ is the Gaussian smoothing parameter, $\sigma^2$ bounds the variance of the one-sample gradient estimator, and $m$ is the total number of samples needed to achieve error less than $\hat{\epsilon}$. Line 2 generates $m$ Gaussian random vectors with zero mean and covariance $\mathbf{I}_d$, which are used to compute the estimate. The estimation step (Line 3) averages the single-sample estimates over the $m$ samples. The next result shows that with an appropriate choice of $m$ and $v$, the estimated gradient is within $\hat{\epsilon}>0$ of the true gradient with probability at least $1-\hat{\epsilon}$. More formally, we have
\begin{lemma} \label{lemma_GE}
Assume $f(\cdot)$ satisfies Assumption 1. Given an $\hat{\epsilon}>0$, there exist a fixed constant $c'_{min}$, a sample number $m=O(\frac{d}{\hat{\epsilon}^2}\log(\frac{1}{\hat{\epsilon}}))$, and a Gaussian smoothing parameter $v=\frac{\hat{\epsilon}}{c'l(d+3)^{1.5}}$, such that for $c'>c'_{min}$, the estimated gradient,
$$\hat{\nabla}=\frac{1}{m}\sum_{i=1}^{m}\frac{f(\bm{x}+v\bm{u}_i)-f(\bm{x})}{v}\bm{u}_i,$$
is $\hat{\epsilon}$-close to $\nabla f(\bm{x})$ with probability at least $1-\hat{\epsilon}$.
\end{lemma}
\begin{proof}
For a function $f$ satisfying $l$-smoothness, we define the Gaussian smoothed function $f_v(\bm{x})=\mathbf{E}_{\bm{u}}[f(\bm{x}+v\bm{u})]$, where $\bm{u}$ is a $d$-dimensional standard Gaussian random vector $\bm{u}\sim\mathcal{N}(0,\mathbf{I}_d)$ and $v\in(0,\infty)$ is the smoothing parameter. Eq. 21 in Section 2 of \citep{nesterov2017random} shows that
\begin{equation}
\nabla f_v(x)=\mathbf{E}_{\bm{u}}[\frac{f(\bm{x}+v\bm{u})-f(\bm{x})}{v}\bm{u}]
\end{equation}
We define a gradient estimator $$\hat{\nabla}=\frac{1}{m}\sum_{i=1}^{m}\frac{f(\bm{x}+v\bm{u}_i)-f(\bm{x})}{v}\bm{u}_i,\ \bm{u}_i\sim\mathcal{N}(0,\mathbf{I}_d)$$
From Lemma 3 and Theorem 4 in \citep{nesterov2017random}, we see that for any function $f$ satisfying $l$-smoothness (note that the proof of the first inequality in Theorem 4 does not require convexity), and for any $\bm{x}\in\mathbb{R}^d$, the following hold:
\begin{equation}\label{fv_bound}
\Vert\nabla f_v(\bm{x})-\nabla f(\bm{x})\Vert\leq\frac{v}{2}l(d+3)^{\frac{3}{2}}
\end{equation}
\begin{equation}\label{var_bound}
\begin{aligned}
\frac{1}{v^2}\mathbf{E}_{\bm{u}}&[\{f(\bm{x}+v\bm{u})-f(\bm{x})\}^2\Vert\bm{u}\Vert^2]\\
&\leq\frac{v^2}{2}l^2(d+6)^{3}+2(d+4)\Vert\nabla f(\bm{x})\Vert^2
\end{aligned}
\end{equation}
To show that the distance between $\hat{\nabla}$ and $\nabla f$ is less than $\hat{\epsilon}$, we split the difference into two terms. Here we only consider $0<\hat{\epsilon}<1$.
\begin{equation*}
\Vert\hat{\nabla}-\nabla f\Vert\leq\Vert\nabla f_v-\nabla f\Vert+\Vert\hat{\nabla}-\nabla f_v\Vert
\end{equation*}
Choosing $v=\frac{\hat{\epsilon}}{c'l(d+3)^\frac{3}{2}}$, where $c'>1$ is a constant that will be defined later, we have $\Vert\nabla f_v-\nabla f\Vert\leq\frac{\hat{\epsilon}}{2}$ based on Eq. \eqref{fv_bound}. To bound the second term, noticing that $\mathbf{E}[\hat{\nabla}]=\nabla f_v(\bm{x})$, choose $$\bm{s}_i=\frac{f(\bm{x}+v\bm{u}_i)-f(\bm{x})}{v}\bm{u}_i-\nabla f_v\quad \bm{s}_i'=\bm{s}_i+\nabla f_v$$
We immediately know $\mathbf{E}[\bm{s}_i]=0$, and the variance of $\bm{s}_i$ can be bounded by
\begin{equation*}
\begin{aligned}
\mathbf{E}[\Vert\bm{s}_i\Vert^2]&=\mathbf{E}[\Vert\bm{s}_i'-\nabla f_v\Vert^2]
\overset{(a)}= \mathbf{E}[\Vert\bm{s}_i'\Vert^2]-\Vert\nabla f_v\Vert^2\\
&\overset{(b)}\leq 2(d+4)B^2+\frac{v^2l^2}{2}(d+6)^3\\
&\overset{(c)}\leq 2(d+4)B^2+\frac{4\hat{\epsilon}^2}{c'^2}\\
&\overset{(d)}\leq 2c'^2(d+4)B^2 =: \sigma^2
\end{aligned}
\end{equation*}
where step $(a)$ follows from $\mathbf{E}[\bm{s}_i']=\nabla f_v$. Step $(b)$ follows from Eq. \eqref{var_bound} and the choice $B>1$. Step $(c)$ holds due to the definition of $v$. Step $(d)$ follows by absorbing the $\hat{\epsilon}^2$ term into the first term after multiplying it by $c'^2>4$. Using $l$-smoothness, we have
\begin{equation*}
\begin{aligned}
\Vert f(\bm{x}+v\bm{u})-f(\bm{x})\Vert
\leq vB\Vert \bm{u}\Vert+\frac{lv^2}{2}\Vert\bm{u}\Vert^2
\end{aligned}
\end{equation*}
Thus, the norm of $\bm{s}_i$ can be bounded as:
\begin{equation}\label{si_bound}
\begin{aligned}
\Vert\bm{s}_i\Vert
&\leq\frac{\Vert f(\bm{x}+v\bm{u})-f(\bm{x})\Vert\Vert\bm{u}\Vert}{v}+B\\
&\leq B+B\Vert \bm{u}\Vert^2+\frac{lv}{2}\Vert\bm{u}\Vert^3
\end{aligned}
\end{equation}
However, $\bm{u}$ is a Gaussian random vector, so its norm cannot be bounded directly. Instead, for some constant $a\geq 0$, we can write
\begin{equation}\label{p_bound}
P(\Vert\bm{u}\Vert>a)\leq p
\end{equation}
where $p$ is a probability that we will calculate in what follows.
Assume $\bm{u}\sim\mathcal{N}(0,\mathbf{I}_d)$; then $\Vert\bm{u}\Vert^2$ follows a chi-squared distribution with $d$ degrees of freedom. Consider the random variable $e^{t\Vert\bm{u}\Vert^2}$, where $t$ is a constant. For $t>0$, $e^{t\Vert\bm{u}\Vert^2}$ is strictly increasing in $\Vert\bm{u}\Vert^2$, and using Markov's inequality we obtain,
\begin{align}
P(\Vert\bm{u}\Vert^2>a^2)&=P(e^{t\Vert\bm{u}\Vert^2}>e^{ta^2})\leq\frac{\mathbf{E}[e^{t\Vert\bm{u}\Vert^2}]}{e^{ta^2}}\\
&=\frac{(1-2t)^{-\frac{d}{2}}}{e^{ta^2}} \label{eq:MGF_chi_squared}\\
&=(1-2t)^{-\frac{d}{2}}e^{-ta^2},\ \forall\ 0<t<\frac{1}{2} \label{norm_bound}
\end{align}
Equation \eqref{eq:MGF_chi_squared} follows from the moment generating function of the chi-squared distribution with $d$ degrees of freedom.
Defining $f(t)=(1-2t)^{-\frac{d}{2}}e^{-ta^2}$ and choosing $t=\arg\min_t f(t) = \frac{1}{2}(1-\frac{d}{a^2})$ in Equation \eqref{norm_bound}, we have:
\begin{equation}\label{P_bound}
\begin{aligned}
P(\Vert\bm{u}\Vert^2>a^2)
&\leq(\frac{d}{a^2})^{-\frac{d}{2}}e^{-\frac{1}{2}(a^2-d)}\\
&=d^{-\frac{d}{2}}e^{\frac{d}{2}}a^de^{-\frac{a^2}{2}}
\end{aligned}
\end{equation}
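As a quick numerical sanity check of this tail bound (our own illustration, not part of the proof), one can compare it with a Monte Carlo estimate:
\begin{verbatim}
import numpy as np

d, a = 10, 5.0
u = np.random.randn(500000, d)
empirical = np.mean((u**2).sum(axis=1) > a*a)        # P(||u||^2 > a^2)
bound = d**(-d/2) * np.exp(d/2) * a**d * np.exp(-a*a/2)
print(empirical, bound)   # roughly 0.005 versus the bound of about 0.054
\end{verbatim}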
For $0<\hat{\epsilon}<1$, we choose $a=c'\cdot\sqrt{\frac{d}{\hat{\epsilon}}}$ so that $t>0$ always holds. Moreover, we choose $c'>1$ large enough that $$P(\Vert\bm{u}\Vert^2>a^2)\leq B^{-2}a^{-8}$$
Now, assuming that $\Vert\bm{u}\Vert\leq a$ and combining with Eq. \eqref{si_bound}, we have
\begin{equation*}
\begin{aligned}
\Vert\bm{s}_i\Vert
&\leq B+Ba^2+\frac{lv}{2}a^3=B+\frac{Bc'^2d}{\hat{\epsilon}}+\frac{lvc'^3}{2}\frac{d^{1.5}}{\hat{\epsilon}^{1.5}}\\
&\leq B+\frac{Bc'^2d}{\hat{\epsilon}}+\frac{c'^2}{2\hat{\epsilon}^{0.5}}\leq\frac{3Bc'^2d}{\hat{\epsilon}}=:\mu
\end{aligned}
\end{equation*}
Combining with Eq. \eqref{p_bound}, given $m$ samples of $\bm{s}_i'$, with probability at least $1-mp$ we have $\Vert\bm{u}_i\Vert\leq a$ for all $i=1,\cdots,m$. Let $B>1.5$. Then, based on the vector Bernstein inequality (Lemma 18 in \citep{2017arXiv170505933K}), for $0<\hat{\epsilon}<\frac{\sigma^2}{\mu}=\frac{2(d+4)}{3d}B\hat{\epsilon}$ (which holds since $B>1.5$), we have
\begin{equation*}
P\big(\Vert\hat{\nabla}-\nabla f_v\Vert\geq\frac{\hat{\epsilon}}{2}\big)\leq \exp\big(-m\cdot\frac{\hat{\epsilon}^2}{32\sigma^2}+\frac{1}{4}\big)
\end{equation*}
Choosing $m>\frac{32\sigma^2}{\hat{\epsilon}^2}(\log\frac{2}{\hat{\epsilon}}+\frac{1}{4})$, we have $P\big(\Vert\hat{\nabla}-\nabla f_v\Vert\leq\frac{\hat{\epsilon}}{2}\big)\geq 1-\frac{\hat{\epsilon}}{2}$.
Thus, by the union bound, the final probability that $\Vert\hat{\nabla}-\nabla f\Vert\leq\hat{\epsilon}$ is at least
\begin{equation*}
\begin{aligned}
&1-mp-\frac{\hat{\epsilon}}{2}\\
&\geq 1-\frac{32{c'}^2(d+4)B^2}{\hat{\epsilon}^2}(\log\frac{1}{\hat{\epsilon}}+\frac{1}{4})\frac{\hat{\epsilon}^4}{{c'}^8d^4B^2}-\frac{\hat{\epsilon}}{2}\\
&\overset{(a)}\geq 1-(\frac{1}{4}+\frac{1}{4})\hat{\epsilon}-\frac{\hat{\epsilon}}{2}
\geq 1-\hat{\epsilon}
\end{aligned}
\end{equation*}
By choosing $c'\geq3$ and noting that $\log\frac{1}{\hat{\epsilon}} \leq \frac{1}{\hat{\epsilon}}$, inequality $(a)$ holds. This completes the proof of the Lemma.
\end{proof}
Then, based on this result, we run the gradient descent algorithm with the estimated gradient, as described in Algorithm \ref{PGD-MF}.
\subsection{Estimated Gradient Descent Algorithm}
This subsection describes the proposed algorithm, which will be analyzed in this paper.
\begin{algorithm}[h]
\label{PGD-MF}
\caption{Estimated Gradient Descent Algorithm $EGD(\bm{x}_0,d,l,B,\chi_1,\theta,\rho,\epsilon,\hat{\epsilon},c,c',\delta,\Delta_f)$}
\begin{algorithmic}[1]
\State $\chi \leftarrow \max\{(1+\theta)\log(\frac{2d\ell\Delta_f}{c\epsilon^2\delta}), \chi_1\}$, $\eta\leftarrow\frac{c}{l}$, $g_{thres}\leftarrow\frac{\sqrt{c}}{\chi^2}\cdot\epsilon$, $f_{thres}\leftarrow\frac{c}{\chi^3}\cdot\sqrt{\frac{\epsilon^3}{\rho}}$, $t_{thres}\leftarrow\frac{\chi}{c^2}\cdot\frac{l}{\sqrt{\rho\epsilon}}$, $t_{temp}\leftarrow-t_{thres}-1$, $r\leftarrow \frac{g_{thres}}{l}$
\For{$t=0,1,...$}
\State $\hat{\nabla}f(\bm{x}_t)=GE(d,l,B,c',\hat{\epsilon},\bm{x}_t)$
\If{$\Vert\hat{\nabla}f(\bm{x}_{t})\Vert\leq g_{thres}$, $t-t_{temp}>t_{thres}$}
\State $\bm{x}_t\leftarrow\bm{x}_t+\bm{\xi}_t$, $\bm{\xi}_t\sim\mathbb{B}^d(r)$
\State $t_{temp}\leftarrow t$
\EndIf
\If{$t-t_{temp}=t_{thres}$ and $f(\bm{x}_t)-f(\bm{x}_{t-t_{thres}})>-f_{thres}$}
\State \Return $\bm{x}_{t-t_{thres}}$
\EndIf
\State $\bm{x}_{t+1}\leftarrow\bm{x}_t-\eta\hat{\nabla}f(\bm{x}_t)$
\EndFor
\end{algorithmic}
\end{algorithm}
The algorithm is described in Algorithm \ref{PGD-MF} and is denoted EGD. In the input, $\bm{x}_0$ is the initialization point; $d,l,B,\hat{\epsilon},c'$ are as defined in Algorithm \ref{GE}; $\rho$ is the Hessian Lipschitz parameter of Definition \ref{defnhessian}; $\theta$ is any constant larger than $0$; and $\chi_1$ is a constant such that $\chi_1^3e^{-\chi_1} \le e^{-\chi_1/(1+\theta)}$. We use $\epsilon$ to denote the target precision of the $\epsilon$-second-order stationary point. $\Delta_f$ is a constant satisfying $\Delta_f\geq f(\bm{x}_0)-f^*$. $c>0$ is a constant, and $\delta>0$ is chosen so that the probability of Algorithm \ref{PGD-MF} working correctly is at least $1-\delta$. Since only zeroth-order information is available, Algorithm \ref{GE} is first used to estimate the gradient in each iteration (Line 3). The estimated gradient then replaces the unavailable true gradient in the gradient descent step (Line 11). Lines 4--6 show that we add a perturbation, drawn uniformly from a ball, to $\bm{x}_t$ when $\Vert\hat{\nabla}f(\bm{x}_{t})\Vert\leq g_{thres}$ and $t-t_{temp}>t_{thres}$; that is, the perturbation is added when the gradient is small, in order to escape saddle points, and it is added at most once every $t_{thres}$ steps. Lines 8--9 check the termination condition: if $f(\bm{x}_t)-f(\bm{x}_{t-t_{thres}})>-f_{thres}$, meaning the function value has not decreased enough in the $t_{thres}$ steps after adding a perturbation, the algorithm immediately returns the point $\bm{x}_{t-t_{thres}}$ as the final result. Our proof in the following section shows that this point is indeed an $\epsilon$-second-order stationary point; thus, this is the termination condition of the for-loop.
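A minimal Python sketch of the main loop is given below, reusing \texttt{estimate\_gradient} from the earlier sketch (again our own illustration; we record the pre-perturbation iterate explicitly, since this is the candidate point returned in Lines 8--9):
\begin{verbatim}
import numpy as np

def egd(f, x0, d, l, B, rho, eps, eps_hat, c, delta, Delta_f,
        theta=1.0, chi1=1.0, max_iter=10**6):
    """Sketch of Algorithm 2 (EGD); parameter settings follow Line 1."""
    chi = max((1 + theta) * np.log(2 * d * l * Delta_f
                                   / (c * eps**2 * delta)), chi1)
    eta = c / l
    g_thres = np.sqrt(c) / chi**2 * eps
    f_thres = c / chi**3 * np.sqrt(eps**3 / rho)
    t_thres = int(np.ceil(chi / c**2 * l / np.sqrt(rho * eps)))
    r = g_thres / l
    x, t_temp, snapshot = x0.astype(float), -t_thres - 1, None
    for t in range(max_iter):
        g = estimate_gradient(f, x, eps_hat, l, B)     # Line 3
        if np.linalg.norm(g) <= g_thres and t - t_temp > t_thres:
            snapshot = x.copy()                        # pre-perturbation point
            xi = np.random.randn(d)                    # uniform in B^d(r)
            xi *= r * np.random.rand() ** (1.0 / d) / np.linalg.norm(xi)
            x = x + xi                                 # Lines 4-6
            t_temp = t
        if t - t_temp == t_thres and f(x) - f(snapshot) > -f_thres:
            return snapshot                            # Lines 8-9
        x = x - eta * g                                # Line 11
    return x
\end{verbatim}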
\title{Escaping Saddle Points for Zeroth-order Nonconvex Optimization using Estimated Gradient Descent}
\author{ Qinbo Bai, Mridul Agarwal, and Vaneet Aggarwal \thanks{The authors are with Purdue University, West Lafayette IN 47907, USA, email:\{bai113,agarw180,vaneet\}@purdue.edu}}
\begin{document}
\maketitle
\input{intro}
\input{related}
\input{formulation}
\input{gradest}
\input{result}
\input{conclusion}
\appendices
\input{lem5proof}
\input{lem6proof}
\input{lem4proof}
\input{prooflem3}
\bibliographystyle{IEEEtran}
\section{Introduction}
Gradient descent and its variants (e.g., Stochastic Gradient Descent) are widely used in machine learning due to their favorable computational
properties, for example, in optimizing weights of a deep neural network. Given a function $f$: $\mathbb{R}^d\rightarrow\mathbb{R}$, the gradient descent (GD) algorithm updates $\bm{x}_t$ in each iteration as
\begin{equation}
\bm{x}_{t+1}=\bm{x}_t-\eta\nabla f(\bm{x}_t),
\end{equation}
where $\eta>0$ represents the step size. This algorithm can be shown to achieve an $\epsilon$-first-order stationary point for non-convex optimization problems in $O(\frac{1}{\epsilon^2})$ iterations \citep{nesterov1998introductory}. Recently, second-order stationarity guarantees have been studied by using a perturbed version of gradient descent \citep{jin2017escape}. However, in many cases, the gradient of the function may not be accessible and only function values can be queried. This paper studies an algorithm which uses an estimate of the gradient to perform gradient descent, and shows that the algorithm achieves an $\epsilon$-second-order stationary point.
In non-convex settings, convergence to a first-order stationary point is not satisfactory since such a point can be a global minimum, a local minimum, a saddle point, or even a local maximum. Even though finding global minima can be hard, recent results show that, in many problems of interest, all local minima are global minima (e.g., in matrix and tensor completion \citep{jain2013low,liu2016low}, dictionary learning \citep{sun2016complete}, and certain classes of deep neural networks \citep{kawaguchi2016deep}). Saddle points (and local maxima) can correspond to highly suboptimal solutions in many problems \citep{dauphin2014identifying}, where the authors argue that saddle points are ubiquitous in high-dimensional, non-convex optimization problems, and are thus the main bottleneck in training neural networks. Standard analysis of gradient descent only provides first-order stationarity guarantees, which do not rule out saddle points.
Using the stable manifold theorem, the authors of \citep{lee2016gradient} prove that gradient descent can indeed escape saddle points when the initial point is not on the stable manifold. However, they do not provide any complexity analysis of the number of steps needed to escape saddle points. Recently, there have been results based on perturbing gradient descent to achieve a second-order stationary point \citep{jin2017escape}. But what happens if the gradient is not known, which can occur when the function is too complex (or only available as a black box) for the gradient to be computed? In this scenario, one approach is to estimate the gradient and perform a gradient descent algorithm. This motivates the question: {\em Can estimated gradient descent escape saddle points and converge to local minima? }
This paper answers this question in the positive. We note that this is the first work on guarantees of a gradient-descent based algorithm for zeroth-order non-convex optimization, where only the function can be queried while the gradient information is not available. Recently, the authors of \citep{balasubramanian2019zeroth} considered the problem of zeroth-order non-convex optimization, but they use a cubic-regularization based Newton's method. In contrast, we investigate the use of the regular gradient descent algorithm with estimated gradients.
In this work, without any estimate of the Hessian, we use the Gaussian smoothing method combined with a concentration inequality to give the minimal number of samples needed to estimate the gradient with error at most $\hat{\epsilon}$. Bounding the error in gradient estimation, we prove that an $\epsilon$-second-order stationary point can be reached in $O\big(\frac{l(f(\bm{x}_0)-f^*)}{\epsilon^2}\big)$ iterations following the idea in \citep{jin2017escape}. However, since each iteration queries the function multiple times to obtain an estimate of the gradient, the overall query complexity is $\widetilde{O}(\frac{d^{2+\frac{\theta}{2}}}{\epsilon^{8+\theta}})$, where $\theta$ is an arbitrary positive number.
The key idea is to use the geometry around saddle points: the stuck region from which gradient descent cannot escape is a thin band, so the small error in the estimation of the gradient in each iteration still allows escaping this stuck region whenever the point is not an $\epsilon$-second-order stationary point. Further, the function calls within each iteration can be parallelized, decreasing the run-time of the algorithm.
\subsection{Proof of Lemma \ref{lemma_GE}}\label{prooflem1}
\begin{proof}
For a function $f$ satisfying $l$-smooth, we define Gaussian smooth function $f_v(\bm{x})=\mathbf{E}_{\bm{u}}[f(\bm{x}+v\bm{u})]$, where $\bm{u}$ is a $d$ dimensional standard Gaussian random vector $\bm{u}\sim\mathcal{N}(0,\mathbf{I}_d)$ and $v\in(0,\infty)$ is smooth parameter. \citep{nesterov2017random} Eq. 21 in section 2 shows that
\begin{equation}
\nabla f_v(x)=\mathbf{E}_{\bm{u}}[\frac{f(\bm{x}+v\bm{u})-f(\bm{x})}{v}\bm{u}]
\end{equation}
We define a gradient estimator $\hat{\nabla}=\frac{1}{m}\sum_{i=1}^{m}\frac{f(\bm{x}+v\bm{u})-f(\bm{x})}{v}\bm{u}$ so that $\mathbf{E}[\hat{\nabla}]=\nabla f_v(\bm{x})$.
From \citep{nesterov2017random} Lemma 3 and Theorem 4, we know that for any function $f$ satisfying $l$-smooth (Notice in the proof of the first inequality in Theorem 4, no convexity is needed) and any $\bm{x}\in\mathbb{R}^d$, the following are always true:
\begin{equation}\label{fv_bound}
\Vert\nabla f_v(\bm{x})-\nabla f(\bm{x})\Vert\leq\frac{v}{2}l(d+3)^{\frac{3}{2}}
\end{equation}
\begin{equation}\label{var_bound}
\frac{1}{v^2}\mathbf{E}_{\bm{u}}[\{f(\bm{x}+v\bm{u})-f(\bm{x})\}^2\Vert\bm{u}\Vert^2]\leq\frac{v^2}{2}l^2(d+6)^{3}+2(d+4)\Vert\nabla f(\bm{x})\Vert^2
\end{equation}
To give the distance between $\hat{\nabla}$ and $\nabla f$ is less than $\hat{\epsilon}$, we split the difference to two terms. Here we only consider $0<\hat{\epsilon}<1$.
\begin{equation*}
\Vert\hat{\nabla}-\nabla f\Vert\leq\Vert\nabla f_v-\nabla f\Vert+\Vert\hat{\nabla}-\nabla f_v\Vert
\end{equation*}
Choosing $v=\frac{\hat{\epsilon}}{c'l(d+3)^\frac{3}{2}}$, where $c'>1$ is a constant will be defined later, we have $\Vert\nabla f_v-\nabla f\Vert\leq\frac{\hat{\epsilon}}{2}$ based on Eq. \eqref{fv_bound}. To bound the second term, noticing that $\mathbf{E}[\hat{\nabla}]=\nabla f_v(\bm{x})$, choose $\bm{s}_i=\frac{f(\bm{x}+v\bm{u})-f(\bm{x})}{v}\bm{u}-\nabla f_v$ and $\bm{s}_i^{'}=\bm{s}_i+\nabla f_v$. We immediately know $\mathbf{E}[\bm{s}_i]=0$, and the variance of $\bm{s}_i$ can be bounded by
\begin{equation*}
\begin{aligned}
\mathbf{E}[\Vert\bm{s}_i\Vert^2
]&=\Vert\bm{s}_i^{'}-\nabla f_v\Vert^2
\overset{(a)}\leq \mathbf{E}[\Vert\bm{s}_i^{'}\Vert^2]+B\\
&\overset{(b)}\leq 2(d+5)B^2+\frac{v^2l^2}{2}(d+6)^3\\
&\overset{(c)}\leq 2(d+5)B^2+\frac{4\hat{\epsilon}^2}{c'^2}\\
&\overset{(d)}\leq 2c'^2(d+5)B^2 =: \sigma^2
\end{aligned}
\end{equation*}
where step $(a)$ follows from $\mathbf{E}[\bm{s}_i^{i}]=\nabla f_v$ and $\Vert\nabla f\Vert\leq B$ in the assumption. By choosing $B>1$ and using Eq. \eqref{var_bound}, step $(b)$ is straight forward. Step $(c)$ holds due to the definition of $v$. Step $(d)$ follows that we omit the term with $\hat{\epsilon}^2$ by multiplying $c'^2>4$ to the first term.\\
By $l$-smooth, we have
\begin{equation*}
\begin{aligned}
\Vert f(\bm{x}+v\bm{u})-f(\bm{x})\Vert\leq vB\Vert \bm{u}\Vert+\frac{lv^2}{2}\Vert\bm{u}\Vert^2
\end{aligned}
\end{equation*}
Thus, the norm of $\bm{s}_i$ can be bounded as:
\begin{equation}\label{si_bound}
\begin{aligned}
\Vert\bm{s}_i\Vert\leq\frac{\Vert f(\bm{x}+v\bm{u})-f(\bm{x})\Vert\Vert\bm{u}\Vert}{v}+B
\leq B+B\Vert \bm{u}\Vert^2+\frac{lv}{2}\Vert\bm{u}\Vert^3
\end{aligned}
\end{equation}
However, $\bm{u}$ is a Gaussian random vector, we can't bound it directly. But we can say, given some constant $a\geq 0$,
\begin{equation}\label{p_bound}
P(\Vert\bm{u}\Vert>a)\leq p
\end{equation}
where $p$ is some probability we will calculate in followings.
Assume $\bm{u}\sim\mathcal{N}(0,\mathbf{I}_d)$, \citep{nesterov2017random} Lemma 1 tells us $E\Vert\bm{u}\Vert^2\leq(d+2)$. Consider random variable $e^{t\Vert\bm{u}^2\Vert}$, where $t$ is a constant. We immediately know $e^{t\Vert\bm{u}^2\Vert}$ is non-negative and $E[e^{t\Vert\bm{u}^2\Vert}]=e^{tE[\Vert\bm{u}^2\Vert]}<\infty$. Thus, by using Markov's inequality, for $t>0$, we have:
\begin{equation}\label{norm_bound}
\begin{aligned}
P(\Vert\bm{u}\Vert^2>a^2)&=P(e^{t\Vert\bm{u}^2\Vert}>e^{ta^2})\leq\frac{E[e^{t\Vert\bm{u}^2\Vert}]}{e^{ta^2}}\\
&=\frac{E[e^{t(u_1^2+u_2^2+...+u_d^2)}]}{e^{ta^2}}=\frac{\prod_{i=1}^{d}E[e^{tu_i^2}]}{e^{ta^2}}\\
\end{aligned}
\end{equation}
Because the co-variance matrix is Identity, thus $u_1,u_2,...,u_d$ is i.i.d and $u_i\sim\mathcal{N}(0,1)$
\begin{equation}\label{expectation}
\begin{aligned}
E[e^{tu_i^2}]&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{tx^2}e^{-\frac{x^2}{2}}dx=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\frac{x^2}{2}(1-2t)}dx\\
&=\frac{1}{\sqrt{2\pi}}\frac{1}{\sqrt{1-2t}}\int_{-\infty}^{\infty}e^{-\frac{y^2}{2}}dy=\frac{1}{\sqrt{1-2t}}
\end{aligned}
\end{equation}
where we choose $y=\sqrt{1-2t}\cdot x$ and we need $t<\frac{1}{2}$.\\
Combine Eq. \eqref{norm_bound} and Eq. \eqref{expectation}, we have
\begin{equation}\label{norm_bound2}
\begin{aligned}
P(\Vert\bm{u}\Vert^2>a^2)&\leq\frac{(1-2t)^{-\frac{d}{2}}}{e^{ta^2}}=(1-2t)^{-\frac{d}{2}}e^{-ta^2}\\
\end{aligned}
\end{equation}
Define $f(t)=(1-2t)^{-\frac{d}{2}}e^{-ta^2}$, we have:
\begin{equation*}
\begin{aligned}
f'(t)=-\frac{d}{2}(1-2t)^{-\frac{d}{2}-1}e^{-ta^2}+(1-2t)^{-\frac{d}{2}}e^{-ta^2}(-a^2)\\
\end{aligned}
\end{equation*}
Let $f'(t)=0$, we have
\begin{center}
$$-\frac{d}{2}(1-2t)^{-\frac{d}{2}-1}(-2)e^{-ta^2}+(1-2t)^{-\frac{d}{2}}e^{-ta^2}(-a^2)=0$$
$$d+(1-2t)(-a^2)=0$$
$$(1-2t)=\frac{d}{a^2}$$
\end{center}
Thus, $t=\frac{1}{2}(1-\frac{d}{a^2})<\frac{1}{2}$. Using this into Eq. \eqref{norm_bound2}, we have:
\begin{equation}
\begin{aligned}
P(\Vert\bm{u}\Vert^2>a^2)&\leq(\frac{d}{a^2})^{-\frac{d}{2}}e^{-\frac{1}{2}(a^2-d)}=d^{-\frac{d}{2}}e^{\frac{d}{2}}a^de^{-\frac{a^2}{2}}
\end{aligned}
\end{equation}
For $0<\hat{\epsilon}<1$, we choose $a=c'\cdot\sqrt{\frac{d}{\hat{\epsilon}}}$ so that $t>0$ always holds. Besides, choose $c'>1$ large enough such that $P(\Vert\bm{u}\Vert^2>a^2)\leq B^{-2}a^{-8}$, we have
\begin{equation*}
\begin{aligned}
\Vert\bm{s}_i\Vert
&\leq B+Ba^2+\frac{lv}{2}a^3=B+\frac{Bc'^2d}{\hat{\epsilon}}+\frac{lvc'^3}{2}\frac{d^{1.5}}{\hat{\epsilon}^{1.5}}\\
&\leq B+\frac{Bc'^2d}{\hat{\epsilon}}+\frac{c'^2}{2\hat{\epsilon}^{0.5}}\leq\frac{3Bc'^2d}{\hat{\epsilon}}=:\mu
\end{aligned}
\end{equation*}
Combing with Eq. \eqref{p_bound}, we can say given $m$ samples of $\bm{s}_i^{'}$, with probability at least $1-mp$, $\forall i=1,...,m$. $\Vert\bm{u}_i\Vert\leq a$. In this case, based on \citep{2017arXiv170505933K} Lemma 18, we have vector Bernstein Inequality that for $0<\hat{\epsilon}<\frac{\sigma^2}{\mu}=\frac{2(d+5)}{3d}B\hat{\epsilon}$, which is always true for a $B>1.5$, we have
\begin{equation*}
P\big(\Vert\hat{\nabla}-\nabla f_v\Vert\geq\frac{\epsilon}{2}\big)\leq \exp\big(-m\cdot\frac{\epsilon^2}{32\sigma^2}+\frac{1}{4}\big)
\end{equation*}
Choosing $m>\frac{32\sigma^2}{\hat{\epsilon}^2}(\log\frac{2}{\hat{\epsilon}}+\frac{1}{4})$, we have $P\big(\Vert\hat{\nabla}-\nabla f_v\Vert\leq\frac{\hat{\epsilon}}{2}\big)\geq 1-\frac{\hat{\epsilon}}{2}$\\
Thus, by union bound, the final probability that $\Vert\hat{\nabla}-\nabla\Vert\leq\hat{\epsilon}$ is at least
\begin{equation*}
1-mp-\frac{\hat{\epsilon}}{2}\geq 1-\frac{32{c'}^2(d+5)B^2}{\hat{\epsilon}^2}(\log\frac{1}{\hat{\epsilon}}+\frac{1}{4})\frac{\hat{\epsilon}^4}{{c'}^8d^4B^2}-\frac{\hat{\epsilon}}{2}\overset{(a)}\geq 1-(\frac{1}{4}+\frac{1}{4})\hat{\epsilon}+\frac{\hat{\epsilon}}{2}\geq 1-\hat{\epsilon}
\end{equation*}
By choosing $c'\geq3$, the inequality $(a)$ holds. Thus, we finish the proof of Lemma \ref{lemma_GE}.
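To illustrate Lemma \ref{lemma_GE} concretely, the following hedged Python sketch implements a Gaussian-smoothing gradient estimator of the form analyzed above, assuming the standard two-point estimator $\bm{s}_i=\frac{f(\bm{x}+v\bm{u}_i)-f(\bm{x})}{v}\bm{u}_i$ of \citep{nesterov2017random}; the exact estimator used by Algorithm \ref{PGD-MF} may differ in details, and the toy function and parameter values here are illustrative only.
\begin{verbatim}
import numpy as np

def zo_gradient(f, x, m, v, rng):
    """Average m samples s_i = (f(x + v*u_i) - f(x)) / v * u_i, u_i ~ N(0, I)."""
    g = np.zeros_like(x)
    for _ in range(m):
        u = rng.standard_normal(x.size)
        g += (f(x + v * u) - f(x)) / v * u
    return g / m

rng = np.random.default_rng(1)
f = lambda z: 0.5 * np.dot(z, z)      # toy smooth function with grad f(x) = x
x = rng.standard_normal(20)
est = zo_gradient(f, x, m=20000, v=1e-3, rng=rng)
print(np.linalg.norm(est - x))        # deviation shrinks as m grows, as the lemma predicts
\end{verbatim}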
\end{proof}
\section{Proof of Lemma \ref{lemma_esc}}\label{apdlem4}
\begin{proof}
W.L.O.G., let $\widetilde{\bm{x}} = 0$ be the origin. Let $(c^{(2)}_{\max}, \hat{c})$ be the absolute constants so that Lemma \ref{lemma_wt} holds, and let $c^{(1)}_{\max}$ be the absolute constant that makes Lemma \ref{lemma_ut} hold based on our current choice of $\hat{c}$.
We choose $c_{\max} \le \min\{c^{(1)}_{\max}, c^{(2)}_{\max}\}$ so that our learning rate $\eta \le c_{\max}/\ell$ is small enough to make both Lemma \ref{lemma_ut} and Lemma \ref{lemma_wt} hold. Let $T^{*}:=\hat{c}\mathcal{T}$ and define:
\begin{equation*}
T' = \inf_t\left\{t| \widetilde{f}_{\bm{u}_0}(\bm{u}_t)-f(\bm{u}_0)\le-4.5f_{thres} \right\}
\end{equation*}
Let us consider the following two cases:
\paragraph{Case $T' \le T^{*}$:} In this case, by Lemma \ref{lemma_ut}, we know $\norm{\bm{u}_{T'-1}} \le O(\mathcal{P})$. Using $l$-smoothness, we have
\begin{equation*}
\begin{aligned}
\Vert\bm{u}_{T'}\Vert
&\overset{(a)}\leq\Vert\bm{u}_{T'-1}\Vert+\eta\Vert\hat{\nabla}f(\bm{u}_{T'-1})\Vert
\leq\Vert\bm{u}_{T'-1}\Vert+\eta\Vert\nabla f(\bm{u}_{T'-1})\Vert+\eta\Vert\hat{\nabla}f(\bm{u}_{T'-1})-\nabla f(\bm{u}_{T'-1})\Vert\\
&\overset{(b)}\leq\Vert\bm{u}_{T'-1}\Vert+\eta\Vert\nabla f(\widetilde{\bm{x}})\Vert+\eta l\Vert\bm{u}_{T'-1}\Vert+\eta\hat{\epsilon}
\overset{(c)}\leq 2(\Vert\bm{u}_{T'-1}\Vert+\eta g_{thres})
\overset{(d)}\leq O(\mathcal{P})
\end{aligned}
\end{equation*}
where $(a)$ comes from the gradient descent step in Algorithm \ref{PGD-MF}, $(b)$ uses the $l$-smooth property, $(c)$ follows from the definition of $\widetilde{\bm{x}}$ and $\hat{\epsilon}$, and $\eta g_{thres}\leq\frac{\sqrt{c}}{\chi^2}\cdot\frac{\epsilon}{l}=\frac{\sqrt{\epsilon\rho}}{l\chi}(\frac{\sqrt{c}}{\chi}\sqrt{\frac{\epsilon}{\rho}})=\frac{1}{\chi\kappa}\mathcal{P}\leq\mathcal{P}$ gives the inequality $(d)$.\\
Using this, we can show that the function value decreases significantly from $\bm{u}_0$ to $\bm{u}_{T'}$:
\begin{equation*}
\begin{aligned}
f(\bm{u}_{T'}) - f(\bm{u}_0)
&\overset{(a)}\le \nabla f(\bm{u}_0)^T(\bm{u}_{T'}-\bm{u}_0) + \frac{1}{2}(\bm{u}_{T'}-\bm{u}_0)^{\top} \nabla^2 f(\bm{u}_0) (\bm{u}_{T'}-\bm{u}_0)
+ \frac{\rho}{6} \norm{\bm{u}_{T'}-\bm{u}_0}^3 \\
&\overset{(b)}=\widetilde{f}_{\bm{u}_0}(\bm{u}_{T'})-f(\bm{u}_0)+\frac{1}{2}(\bm{u}_{T'}-\bm{u}_0)^{\top} [\nabla^2f(\bm{u}_0)-\nabla^2f(\widetilde{\bm{x}})](\bm{u}_{T'}-\bm{u}_0)+ \frac{\rho}{6} \norm{\bm{u}_{T'}-\bm{u}_0}^3\\
&\overset{(c)}\le \widetilde{f}_{\bm{u}_0}(\bm{u}_{T'})-f(\bm{u}_0)+\frac{\rho}{2}\Vert\bm{u}_0-\widetilde{\bm{x}}\Vert\Vert\bm{u}_{T'}-\bm{u}_0\Vert^2 + \frac{\rho}{6} \norm{\bm{u}_{T'}-\bm{u}_0}^3 \\
&\overset{(d)}\le-4.5f_{thres} + O(\rho\mathcal{P}^3) \overset{(e)}= -4.5f_{thres} + O(\sqrt{c}\cdot f_{thres}) \overset{(f)}\le -4f_{thres}
\end{aligned}
\end{equation*}
where $(a)$ and $(c)$ directly use the $\rho$-Hessian Lipschitz property, $(b)$ comes from the definition of $\widetilde{f}_{\bm{u}_0}(\bm{u}_{T'})$, $(d)$ follows from Lemma \ref{lemma_ut}, and $\rho\mathcal{P}^3=\frac{(c\epsilon)^{1.5}}{\chi^3\sqrt{\rho}}=\sqrt{c}\frac{c}{\chi^3}\sqrt{\frac{\epsilon^3}{\rho}}=\sqrt{c}f_{thres}$ gives the inequality $(e)$. Finally, by choosing $c$ small enough, the inequality $(f)$ holds.\\
Now, we are going to bound the increase of the function value from step $T'$ to $T$. When $\Vert\hat{\nabla}f(\bm{x}_t)\Vert>g_{thres}$, the function value decreases by Lemma \ref{lemma_geq}. Thus, we only consider the case $\Vert\hat{\nabla}f(\bm{x}_t)\Vert\leq g_{thres}$. According to Eq. \eqref{Decrease} step (c) in Lemma \ref{lemma_geq}, by choosing $\hat{\epsilon}\leq cg_{thres}=O(\epsilon)$, we have
\begin{equation}\label{Decrease2}
\begin{aligned}
f(\bm{u}_{t+1}) - f(\bm{u}_t) &\le \eta\Vert\nabla f(\bm{u}_t)-\hat{\nabla}f(\bm{u}_t)\Vert\Vert\hat{\nabla}f(\bm{u}_t)\Vert+\frac{l\eta^2}{2}\Vert\hat{\nabla} f(\bm{u}_t)\Vert^2-\eta\Vert\hat{\nabla} f(\bm{u}_t)\Vert^2\\
&\overset{(a)}\le \eta\hat{\epsilon}\Vert\hat{\nabla} f(\bm{u}_t)\Vert+\frac{c\eta}{2}\Vert\hat{\nabla} f(\bm{u}_t)\Vert^2\\
&\le \eta cg_{thres}^2+\frac{c\eta}{2}g_{thres}^2=\frac{3}{2}c\eta g_{thres}^2
\end{aligned}
\end{equation}
where we omit the non-positive term in step $(a)$.\\
Choosing $c_{\max} \le \min \{1, \frac{1}{\hat{c}}\}$, we know $T=\frac{\mathcal{T}}{c}\geq\frac{\mathcal{T}}{c_{max}}\geq\hat{c}\mathcal{T}=T^*\geq T'>0$. Thus, the number of steps between $T'$ and $T$ is at most $\frac{\mathcal{T}}{c}$. Therefore, during these steps, the function value can increase by at most:
\begin{equation}
\begin{aligned}
f(\bm{u}_T)-f(\bm{u}_{T'})\leq\max_t\big(f(\bm{u}_{t+1}) - f(\bm{u}_t)\big)\cdot\frac{\mathcal{T}}{c}
&\leq \frac{3}{2}c\eta g_{thres}^2\frac{\chi}{c\eta\gamma}
=\frac{3}{2}\frac{c}{\chi^4}\epsilon^2\frac{\chi}{\sqrt{\rho\epsilon}}\\
&\leq\frac{3c}{2\chi^3}\cdot\sqrt{\frac{\epsilon^3}{\rho}}=1.5f_{thres}\\
\end{aligned}
\end{equation}
Thus, we have:
\begin{equation*}
f(\bm{u}_T) - f(\bm{u}_0) = [f(\bm{u}_T) - f(\bm{u}_{T'})] + [f(\bm{u}_{T'}) - f(\bm{u}_0)] \le 1.5f_{thres} - 4f_{thres}= -2.5f_{thres}
\end{equation*}
\paragraph{Case $T' > T^*$:} In this case, by Lemma \ref{lemma_ut}, we know $\norm{\bm{u}_t}\le O(\mathcal{P})$ for all $t\le T^*$. Define
\begin{equation*}
T'' = \inf_t\left\{t| \tilde{f}_{\bm{w}_0}(\bm{w}_t) - f(\bm{w}_0) \le -4.5f_{thres} \right\}
\end{equation*}
Noticing that $\Vert\bm{w}_0-\widetilde{\bm{x}}\Vert=\Vert\bm{u}_0+\mu r\bm{e}_1\Vert\leq2r$, by Lemma \ref{lemma_ut} we have, for $t<T_2$, $\Vert \bm{w}_t-\widetilde{\bm{x}}\Vert\leq100(\hat{c}\cdot\mathcal{P})$, which is exactly the condition in Lemma \ref{lemma_wt}. Thus, by Lemma \ref{lemma_wt}, we immediately have $T'' \le T^*$. Applying the same argument as in the first case (replacing notation $\bm{u}$ with $\bm{w}$), we have for $T=t_{thres}=\frac{\mathcal{T}}{c}$ that $f(\bm{w}_T) - f(\bm{w}_0) \le -2.5f_{thres}$.
\end{proof}
\section{Proof of Lemma \ref{lemma_ut}}\label{apdlem5}
\begin{proof}
Without loss of generality, we set $\bm{u}_0=0$ to be the origin. By the update rule, we have:
\begin{equation}
\label{E3}
\begin{aligned}
\bm{u}_{t+1}&=\bm{u}_t-\eta\hat{\nabla}f(\bm{u}_t)\\
&=\bm{u}_t-\eta\nabla f(\bm{u}_t)-\eta[\hat{\nabla}f(\bm{u}_t)-\nabla f(\bm{u}_t)]\\
&=\bm{u}_t-\eta\nabla f(0)-\eta\left[\int_{0}^{1}\nabla^2f(\theta\bm{u}_t)d\theta\right]\bm{u}_t-\eta[\hat{\nabla}f(\bm{u}_t)-\nabla f(\bm{u}_t)]\\
&=\bm{u}_t-\eta\nabla f(0)-\eta(\mathcal{H}+\Delta_t)\bm{u}_t-\eta[\hat{\nabla}f(\bm{u}_t)-\nabla f(\bm{u}_t)]\\
&=(\mathbf{I}-\eta\mathcal{H}-\eta\Delta_t)\bm{u}_t-\eta\nabla f(0)-\eta[\hat{\nabla}f(\bm{u}_t)-\nabla f(\bm{u}_t)]\\
\end{aligned}
\end{equation}
where $\Delta_t=\int_{0}^{1}\nabla^2f(\theta\bm{u}_t)d\theta-\mathcal{H}$ can be bounded as:
\begin{equation}\label{Delta_bound}
\begin{aligned}
\Vert\Delta_t\Vert&=\Vert\int_{0}^{1}\nabla^2f(\theta\bm{u}_t)d\theta-\mathcal{H}\Vert\\
&\leq\int_{0}^{1}\Vert\nabla^2f(\theta\bm{u}_t)-\nabla^2f(\widetilde{\bm{x}})\Vert d\theta\\
&\leq\int_{0}^{1}\rho\Vert\theta\bm{u}_t-\widetilde{\bm{x}}\Vert d\theta\\
&\leq\rho\int_{0}^{1}\big(\theta\Vert\bm{u}_t\Vert+\Vert\widetilde{\bm{x}}\Vert\big) d\theta \leq\rho(\Vert\bm{u}_t\Vert+\Vert\widetilde{\bm{x}}\Vert)
\end{aligned}
\end{equation}
Besides, based on $l$-smoothness, we have $\Vert\nabla f(0)\Vert\leq\Vert\nabla f(\widetilde{\bm{x}})\Vert+l\Vert\widetilde{\bm{x}}\Vert\leq g_{thres}+2lr=3g_{thres}$.\\
Now let $\mathcal{S}$ be the space spanned by the eigenvectors of $\mathcal{H}$ whose eigenvalues are less than $-\frac{\gamma}{\hat{c}\chi}$, and let $\mathcal{S}^c$ be the space spanned by the remaining eigenvectors. Let $\bm{\alpha}_t$ and $\bm{\beta}_t$ denote the projections of $\bm{u}_t$ onto $\mathcal{S}$ and $\mathcal{S}^c$, respectively. According to Eq. \eqref{E3}, we have
\begin{equation}
\label{beta_t}
\bm{\beta}_{t+1}=(\mathbf{I}-\eta\mathcal{H})\bm{\beta}_t-\eta\mathcal{P_S}^c\Delta_t\bm{u}_t-\eta\mathcal{P_S}^c\nabla f(0)-\eta\mathcal{P_S}^c[\hat{\nabla}f(\bm{u}_t)-\nabla f(\bm{u}_t)]
\end{equation}
By the definition of $T_1$ in Lemma \ref{lemma_ut}, for all $t<T_1$,
\begin{equation}
\label{bound_ut}
-4.5f_{thres}<\widetilde{f}_0(\bm{u}_t)-f(0)=\nabla f(0)^T\bm{u}_t+\frac{1}{2}\bm{u}_t^T\mathcal{H}\bm{u}_t\leq\nabla f(0)^T\bm{u}_t-\frac{\gamma}{2}\frac{\Vert\bm{\alpha}_t\Vert^2}{\hat{c}\chi}+\bm{\beta}_t^T\mathcal{H}\bm{\beta}_t
\end{equation}
To see the last inequality, let the orthonormal eigenvectors spanning $\mathcal{S}$ and $\mathcal{S}^c$ be $\bm{\alpha}^1,\bm{\alpha}^2,\dots,\bm{\alpha}^m$ and $\bm{\beta}^1,\bm{\beta}^2,\dots,\bm{\beta}^n$, where $d=m+n$. Thus, $\bm{u}_t=\bm{\alpha}_t+\bm{\beta}_t=a_1\bm{\alpha}^1+a_2\bm{\alpha}^2+\dots+a_m\bm{\alpha}^m+b_1\bm{\beta}^1+b_2\bm{\beta}^2+\dots+b_n\bm{\beta}^n$, where $a_1,\dots,a_m,b_1,\dots,b_n$ are the coefficients of the linear combination, and the eigenvalues corresponding to $\bm{\alpha}^1,\dots,\bm{\alpha}^m$ are at most $-\frac{\gamma}{\hat{c}\chi}$ by the definition of the space $\mathcal{S}$. Thus, we have
\begin{equation*}
\begin{aligned}
\bm{u}_t^T\mathcal{H}\bm{u}_t&=\bm{u}_t^T\mathcal{H}(a_1\bm{\alpha}^1+a_2\bm{\alpha}^2+...+a_m\bm{\alpha}^m+b_1\bm{\beta}^1+b_2\bm{\beta^2}+...+b_n\bm{\beta}^n)\\
&\leq-\frac{\gamma}{\hat{c}\chi}\bm{u}_t^T(a_1\bm{\alpha}^1+a_2\bm{\alpha}^2+...+a_m\bm{\alpha}^m)+\bm{u}_t^T\mathcal{H}\bm{\beta_t}\\
&\leq-\frac{\gamma}{\hat{c}\chi}\Vert\bm{\alpha}_t\Vert^2+\bm{\beta}_t^T\mathcal{H}\bm{\beta}_t
\end{aligned}
\end{equation*}
where the last step uses the orthogonality of $\bm{\alpha}_t$ and $\bm{\beta}_t$.\\
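The decomposition above is a generic fact about symmetric matrices; the following short Python check (illustrative only, with an arbitrary threshold standing in for $\frac{\gamma}{\hat{c}\chi}$) verifies it numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, tau = 8, 0.5
A = rng.standard_normal((d, d)); H = (A + A.T) / 2
lam, V = np.linalg.eigh(H)
S = V[:, lam < -tau]                  # eigenvectors with eigenvalue below -tau
u = rng.standard_normal(d)
alpha = S @ (S.T @ u); beta = u - alpha
print(u @ H @ u <= -tau * alpha @ alpha + beta @ H @ beta + 1e-9)  # True
\end{verbatim}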
According to $\Vert\bm{u}_t\Vert^2=\Vert\bm{\alpha}_t\Vert^2+\Vert\bm{\beta}_t\Vert^2$, noticing that $\Vert\nabla f(0)\Vert\leq 3g_{thres}$ and combining with Eq. \eqref{bound_ut}, we have
\begin{equation*}
\begin{aligned}
\Vert\bm{u}_t\Vert^2 &\leq\frac{2\hat{c}\chi}{\gamma}\left(4.5f_{thres}+\nabla f(0)^T\bm{u}_t+\bm{\beta}_t^T\mathcal{H}\bm{\beta}_t\right)+\Vert\bm{\beta}_t\Vert^2\\
&\leq17\cdot\max\big\{\frac{g_{thres}\hat{c}\chi}{\gamma}\Vert\bm{u}_t\Vert,\frac{f_{thres}\hat{c}\chi}{\gamma},\frac{\bm{\beta}_t^T\mathcal{H}\bm{\beta}_t\hat{c}\chi}{\gamma},\Vert\bm{\beta}_t\Vert^2\big\}
\end{aligned}
\end{equation*}
which means
\begin{equation}\label{ut_bound}
\begin{aligned}
\Vert\bm{u}_t\Vert&\leq17\cdot\max\big\{\frac{g_{thres}\hat{c}\chi}{\gamma},\sqrt{\frac{f_{thres}\hat{c}\chi}{\gamma}},\sqrt{\frac{\bm{\beta}_t^T\mathcal{H}\bm{\beta}_t\hat{c}\chi}{\gamma}},\Vert\bm{\beta}_t\Vert\big\}\\
&=17\cdot\max\big\{\hat{c}\cdot\mathcal{P},\hat{c}\cdot\mathcal{P},\sqrt{\frac{\bm{\beta}_t^T\mathcal{H}\bm{\beta}_t\hat{c}\chi}{\gamma}},\Vert\bm{\beta}_t\Vert\big\}
\end{aligned}
\end{equation}
The last equality is due to the definition of $g_{thres}$ and $f_{thres}$. Now, we use induction to prove for all $t<T_1$, we have $\Vert\bm{u}_t\Vert\leq100(\mathcal{P}\cdot\hat{c})$.
According to the Eq. \eqref{ut_bound}, we only need to use induction on the last two terms.
When $t=0$, the claim is obvious since $\bm{u}_0=0$. Suppose the induction hypothesis holds for $\tau=t<T_1$; we will show that it still holds for $\tau=t+1<T_1$. Let $$\bm{\delta}_t=\mathcal{P_S}^c\left[-\Delta_t\bm{u}_t-\nabla f(0)-(\hat{\nabla}f(\bm{u}_t)-\nabla f(\bm{u}_t)) \right]$$
By Eq. \eqref{beta_t}, and defining $\kappa=\frac{l}{\gamma}>1$, we have
\begin{equation}
\label{delta_dyn}
\bm{\beta_{t+1}}=(\mathbf{I}-\eta\mathcal{H})\bm{\beta_t}+\eta\bm{\delta_t}
\end{equation}
and we can bound $\bm{\delta}_t$ as
\begin{equation} \label{delta_bound}
\begin{aligned}
\Vert\bm{\delta}_t\Vert
&\leq\Vert\Delta_t\Vert\Vert\bm{u}_t\Vert+\Vert\nabla f(0)\Vert+\Vert\hat{\nabla}f(\bm{u}_t)-\nabla f(\bm{u}_t)\Vert\\
&\overset{(a)}\leq \rho(\Vert\bm{u}_t\Vert+\Vert\widetilde{\bm{x}}\Vert)\Vert\bm{u}_t\Vert+\Vert\nabla f(0)\Vert+\hat{\epsilon}\\
&\overset{(b)}\leq\rho\cdot100\hat{c}(100\hat{c}\mathcal{P}+2r)\mathcal{P}+\frac{5}{4}g_{thres}\\
&=100\hat{c}(100\hat{c}+\frac{2}{\chi\kappa})\rho\mathcal{P}^2+\frac{5}{4}g_{thres}\\
&\overset{(c)}\leq[100\hat{c}(100\hat{c}+2)\sqrt{c}+\frac{5}{4}]g_{thres}
\overset{(d)}\leq 1.5g_{thres}
\end{aligned}
\end{equation}
where $(a)$ uses Eq. \eqref{Delta_bound}, $(b)$ uses the induction assumption for $\tau=t$, and $\rho\mathcal{P}^2=\rho(\frac{c}{\chi^2}\cdot\frac{\epsilon}{\rho})=\sqrt{c}(\frac{\sqrt{c}}{\chi^2}\cdot\epsilon)=\sqrt{c}g_{thres}$ gives the step $(c)$.
By choosing $\sqrt{c_{max}}\leq\frac{1}{4}\cdot\frac{1}{100\hat{c}(100\hat{c}+2)}$ and step size $c\leq c_{max}$, the last inequality $(d)$ holds.
\paragraph{Bounding $\norm{\bm{\beta}_{t+1}}$:}
Combining Eq.\eqref{delta_dyn}, Eq.\eqref{delta_bound} and using the definition of $\mathcal{S}^c$, we have:
\begin{equation*}
\norm{\bm{\beta}_{t+1}} \le (1+ \frac{\eta \gamma}{\hat{c}\chi}) \norm{\bm{\beta}_t} + 1.5\eta g_{thres}
\end{equation*}
Since $\norm{\bm{\beta}_0} = 0$ and $t+1 \le T_1$, by applying above relation recursively, we have:
\begin{equation}\label{beta_bound}
\norm{\bm{\beta}_{t+1}}
\le \sum_{\tau = 0}^{t}1.5(1+ \frac{\eta \gamma}{\hat{c}\chi})^\tau\eta g_{thres}
\overset{(a)}\le 1.5\cdot 3\cdot T_1\eta g_{thres}
\overset{(b)}\le 5(\mathcal{P} \cdot \hat{c})
\end{equation}
Step $(a)$ holds because $T_1\le \hat{c} \mathcal{T}=\frac{\hat{c}\chi}{\eta\gamma}$ by definition, so that $(1+ \frac{\eta \gamma}{\hat{c}\chi})^{T_1}\le e\le 3$. And step $(b)$ holds because $T_1\eta g_{thres}\leq\hat{c}\mathcal{T}\eta g_{thres}=\hat{c}\frac{\chi}{\eta\gamma}\eta\frac{\sqrt{c}}{\chi^2}\epsilon=\hat{c}\frac{\sqrt{c}}{\chi}\sqrt{\frac{\epsilon}{\rho}}=\hat{c}\mathcal{P}$
\paragraph{Bounding $\bm{\beta}_{t+1}^{\top}\mathcal{H}\bm{\beta}_{t+1}$:} Using Eq.\eqref{delta_dyn}, we can also write the update equation as:
\begin{equation*}
\begin{aligned}
\bm{\beta_t} &= (\mathbf{I}-\eta\mathcal{H})\bm{\beta_{t-1}}+\eta\bm{\delta_{t-1}}\\
&=(\mathbf{I}-\eta\mathcal{H})[(\mathbf{I}-\eta\mathcal{H})\bm{\beta_{t-2}}+\eta\bm{\delta_{t-2}}]+\eta\bm{\delta_{t-2}}\\
&=(\mathbf{I}-\eta\mathcal{H})^2\bm{\beta_{t-2}}+(\mathbf{I}-\eta\mathcal{H})\eta\bm{\delta_{t-2}}+\eta\bm{\delta_{t-1}}\\
&=...\\
&=\sum_{\tau=0}^{t-1}(\mathbf{I}-\eta\mathcal{H})^\tau\eta\bm{\delta_{t-\tau-1}}
\end{aligned}
\end{equation*}
Combining with Eq.\eqref{delta_bound}, this gives
\begin{equation}\label{beta2_bound}
\begin{aligned}
\bm{\beta}_{t+1}^{\top} \mathcal{H} \bm{\beta}_{t+1} =& \eta^2\sum_{\tau_1 = 0}^t \sum_{\tau_2 = 0}^t
\bm{\delta_{t-\tau_1}}^{\top} (\mathbf{I} - \eta \mathcal{H})^{\tau_1}\mathcal{H}(\mathbf{I} - \eta \mathcal{H})^{\tau_2}\bm{\delta_{t-\tau_2}} \\
\le &\eta^2\sum_{\tau_1 = 0}^t \sum_{\tau_2 = 0}^t \norm{\bm{\delta_{t-\tau_1}}}
\norm{(\mathbf{I} - \eta \mathcal{H})^{\tau_1}\mathcal{H}(\mathbf{I} - \eta \mathcal{H})^{\tau_2}}\norm{\bm{\delta_{t-\tau_2}}} \\
\le& 4\eta^2 g_{thres}^2\sum_{\tau_1 = 0}^t \sum_{\tau_2 = 0}^t \norm{(\mathbf{I} - \eta \mathcal{H})^{\tau_1}\mathcal{H}(\mathbf{I} - \eta \mathcal{H})^{\tau_2}}
\end{aligned}
\end{equation}
Let the eigenvalues of $\mathcal{H}$ to be $\{\lambda_i\}$, then for any $\tau_1, \tau_2 \ge 0$, we know the eigenvalues of
$(\mathbf{I} - \eta \mathcal{H})^{\tau_1}\mathcal{H}(\mathbf{I} - \eta \mathcal{H})^{\tau_2}$ are $\{\lambda_i(1-\eta \lambda_i)^{\tau_1 + \tau_2}\}$.
Let $g_t(\lambda):=\lambda (1-\eta \lambda)^t$, and setting its derivative to zero, we obtain:
\begin{equation*}
g_t'(\lambda) = (1-\eta\lambda)^t -t\eta\lambda(1-\eta\lambda)^{t-1} = 0
\end{equation*}
Because $l$ upper-bounds the eigenvalues of the Hessian, we must have $\lambda\leq l=\frac{c}{\eta}\leq\frac{1}{\eta}$. Thus,
we see that $\lambda_t^\star = \frac{1}{(1+t)\eta}$ is the unique maximizer, and $g_t(\lambda)$ is monotonically increasing on $(-\infty, \lambda_t^\star]$. This gives:
\begin{equation*}
\norm{(\mathbf{I} - \eta \mathcal{H})^{\tau_1}\mathcal{H}(\mathbf{I} - \eta \mathcal{H})^{\tau_2}}
= \max_i \lambda_i(1-\eta \lambda_i)^{\tau_1 + \tau_2}
\le \hat{\lambda}(1-\eta\hat{\lambda})^{\tau_1 + \tau_2} \le \frac{1}{(1+\tau_1+\tau_2)\eta}
\end{equation*}
where $\hat{\lambda} = \min\{l, \lambda_{\tau_1 + \tau_2}^\star\}$. Using this equation in Eq. \eqref{beta2_bound}, we have
\begin{equation}\label{beta2_bound2}
\begin{aligned}
\bm{\beta}_{t+1}^{\top} \mathcal{H} \bm{\beta}_{t+1}
&\le 4\eta^2 g_{thres}^2\sum_{\tau_1 = 0}^t \sum_{\tau_2 = 0}^t \norm{(\mathbf{I} - \eta \mathcal{H})^{\tau_1}\mathcal{H}(\mathbf{I} - \eta \mathcal{H})^{\tau_2}} \\
&\le 4\eta g_{thres}^2\sum_{\tau_1 = 0}^t \sum_{\tau_2 = 0}^t \frac{1}{1+\tau_1+\tau_2}\\
&\overset{(a)}\le 8\eta T_1 g_{thres}^2
\overset{(b)}\le 8\eta\hat{c}\mathcal{T}g_{thres}^2
\overset{(c)}= 8 \mathcal{P}^2 \gamma \hat{c} \cdot \chi^{-1}
\end{aligned}
\end{equation}
where step $(a)$ holds by rearranging the summation as follows:
\begin{equation*}
\sum_{\tau_1 = 0}^t \sum_{\tau_2 = 0}^t \frac{1}{1+\tau_1+\tau_2}
= \sum_{\tau = 0}^{2t} \min\{1+\tau, 2t+1-\tau\} \cdot \frac{1}{1+\tau} \le 2t+1 < 2T_1
\end{equation*}
and step $(b)$ uses the definition $T_1\leq\hat{c}\mathcal{T}$, while $\eta\mathcal{T}g_{thres}^2=\eta\frac{\chi}{\eta\gamma}\frac{c}{\chi^4}\epsilon^2=\frac{c\epsilon^2}{\gamma\chi^3}=(\frac{c}{\chi^2}\frac{\epsilon}{\rho})\gamma\chi^{-1}=\mathcal{P}^2\gamma\chi^{-1}$ gives the result of step $(c)$.
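The rearranged double-sum bound can also be confirmed numerically; the Python snippet below (illustrative only) evaluates both sides for a sample value of $t$.
\begin{verbatim}
t = 50
s = sum(1.0 / (1 + t1 + t2) for t1 in range(t + 1) for t2 in range(t + 1))
print(s, 2 * t + 1, s <= 2 * t + 1)   # the double sum stays below 2t + 1
\end{verbatim}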
Finally, substituting Eq. \eqref{beta_bound} and Eq. \eqref{beta2_bound2} into Eq.\eqref{ut_bound}, we have
\begin{align*}
\norm{\bm{u}_{t+1}} \le& 17\cdot\max\big\{\hat{c}\cdot\mathcal{P},\hat{c}\cdot\mathcal{P},\sqrt{\frac{\bm{\beta}_{t+1}^T\mathcal{H}\bm{\beta}_{t+1}\hat{c}\chi}{\gamma}},\Vert\bm{\beta}_{t+1}\Vert\big\}\\
\le & 100 (\mathcal{P} \cdot \hat{c})
\end{align*}
This finishes the induction as well as the proof of the lemma. \hfill
\end{proof}
\section{Proof of Lemma \ref{lemma_wt}}\label{apdxlem6}
\begin{proof}
In this lemma, we will show that if the sequence $\bm{u}_t$ stays inside a small ball, then the sequence $\bm{w}_t$ can escape the stuck region. To see this, we focus on the difference of these two sequences along the direction $\bm{e}_1$. We will prove that the difference in the $\bm{e}_1$ direction increases geometrically with ratio larger than 1. Hence, it does not take long for the sequence $\bm{w}_t$ to escape the stuck region.\\
W.L.O.G, set $\bm{u}_0 = 0$ to be the origin. Define $\bm{v}_t = \bm{w}_t - \bm{u}_t$, by assumptions in Lemma \ref{lemma_leq}, we have $\bm{v}_0 = \mu r\bm{e}_1, ~\mu \in [\hat{\delta}/(2\sqrt{d}), 1]$. Now, consider the update equation for $\bm{w}_t$:
\begin{align*}
\bm{u}_{t+1}+\bm{v}_{t+1}=\bm{w}_{t+1}&=\bm{w}_t-\eta\hat{\nabla}f(\bm{w}_t)\\
&=\bm{u}_t+\bm{v}_t-\eta\nabla f(\bm{u}_t+\bm{v}_t)-\eta(\hat{\nabla}f(\bm{w}_t)-\nabla f(\bm{w}_t))\\
&=\bm{u}_t+\bm{v}_t-\eta\nabla f(\bm{u}_t)-\eta\big[\int_{0}^{1}\nabla^2f(\bm{u}_t+\theta\bm{v}_t)d\theta\big]\bm{v}_t-\eta(\hat{\nabla}f(\bm{w}_t)-\nabla f(\bm{w}_t))\\
&=\bm{u}_t+\bm{v}_t-\eta\nabla f(\bm{u}_t)-\eta(\mathcal{H}+\Delta_t^{'})\bm{v}_t-\eta(\hat{\nabla}f(\bm{w}_t)-\nabla f(\bm{w}_t))\\
&=\bm{u}_t-\eta\nabla f(\bm{u}_t)+(\mathbf{I}-\eta\mathcal{H}-\eta\Delta_t^{'})\bm{v}_t-\eta(\hat{\nabla}f(\bm{w}_t)-\nabla f(\bm{w}_t))\\
&=\bm{u}_{t+1}+(\mathbf{I}-\eta\mathcal{H}-\eta\Delta_t^{'})\bm{v}_t-\eta(\hat{\nabla}f(\bm{w}_t)-\nabla f(\bm{w}_t))+\eta(\hat{\nabla}f(\bm{u}_t)-\nabla f(\bm{u}_t))
\end{align*}
where $\Delta'_t := \int_{0}^1 \nabla^2 f(\bm{u}_t + \theta\bm{v}_t)d\theta - \mathcal{H}$. By Hessian Lipschitz, similar to Lemma \ref{lemma_ut}, we have $\norm{\Delta'_t} \le \rho(\norm{\bm{u}_t} + \norm{\bm{v}_t}+ \norm{\widetilde{\bm{x}}})$.
Thus, $\bm{v}_t$ satisfies
\begin{equation}\label{v_t_dyn}
\bm{v}_{t+1} = (\mathbf{I} - \eta \mathcal{H} - \eta \Delta'_t) \bm{v}_t-\eta(\hat{\nabla}f(\bm{w}_t)-\nabla f(\bm{w}_t))+\eta(\hat{\nabla}f(\bm{u}_t)-\nabla f(\bm{u}_t))
\end{equation}
Since $\Vert\bm{w}_0-\widetilde{\bm{x}}\Vert=\Vert\bm{u}_0-\widetilde{\bm{x}}+\bm{v}_0\Vert\leq\Vert\bm{u}_0-\widetilde{\bm{x}}\Vert+\Vert\bm{v}_0\Vert\leq2r$ by the definition of $\bm{u}_0$, directly applying Lemma \ref{lemma_ut}, we obtain $\Vert\bm{w}_t\Vert\le 100(\mathcal{P}\cdot\hat{c})$ for all $t \le T_2$. By the condition of Lemma \ref{lemma_wt}, we obtain $\norm{\bm{u}_t} \le 100(\mathcal{P}\cdot\hat{c})$ for all $t<T_2$.
This gives:
\begin{equation} \label{vt_bound}
\norm{\bm{v}_t} \le \norm{\bm{u}_t} + \norm{\bm{w}_t} \le 200(\mathcal{P}\cdot\hat{c}) \text{~for all~} t<T_2
\end{equation}
Thus, for $t<T_2$, we have:
\begin{equation*}
\norm{\Delta'_t} \le \rho( \norm{\bm{u}_t} + \norm{\bm{v}_t}+ \norm{\widetilde{\bm{x}}})
\le \rho(300\mathcal{P}\cdot\hat{c}+r)
=\rho\mathcal{P}(300\hat{c}+\frac{1}{\chi\kappa})
\le \rho\mathcal{P}(300\hat{c}+1)
\end{equation*}
Denote by $\psi_t\geq0$ the norm of $\bm{v}_t$ projected onto the $\bm{e}_1$ direction, and let $\varphi_t\geq0$ be the norm of $\bm{v}_t$ projected onto the subspace spanned by the eigenvectors whose eigenvalues are larger than $-\gamma$. Eq. \eqref{v_t_dyn} gives:
\begin{align*}
\psi_{t+1} \ge& (1+\gamma \eta)\psi_t -\sigma\sqrt{\psi_t^2 + \varphi_t^2}-2\eta\hat{\epsilon}\\
\varphi_{t+1} \le &(1+\gamma\eta)\varphi_t + \sigma\sqrt{\psi_t^2 + \varphi_t^2}+2\eta\hat{\epsilon}
\end{align*}
where $\sigma = \eta\rho\mathcal{P}(300\hat{c} + 1)$.
Notice that, by choosing $\sqrt{c_{\max}}\le \frac{1}{300\hat{c}+1}\min\{\frac{1}{4}, \frac{1}{4\hat{c}}\}$ and $c\leq c_{max}$, we have for all $t+1<T_2$:
\begin{equation}\label{sigmat_bound}
4\sigma (t+1) \le 4\sigma T_2 \le
4\eta\rho\mathcal{P}(300\hat{c} + 1)\hat{c}\mathcal{T} =4\sqrt{c}(300 \hat{c} + 1)\hat{c}\le 1
\end{equation}
Besides, according to the assumption, we have:
$$\hat{\epsilon}\leq\frac{4-2\sqrt{2}}{4}\frac{c\sqrt{\epsilon^3\rho}}{\chi^3l}\frac{\hat{\delta}}{2\sqrt{d}}(300\hat{c}+1)=\widetilde{O}(\frac{\epsilon^{3+\frac{\theta}{2}}}{d^{\frac{1}{2}(1+\frac{\theta}{2})}})$$
This is because $\sqrt{\epsilon^3}\frac{\hat{\delta}}{2\sqrt{d}}=\frac{\sqrt{d}l\epsilon}{\sqrt{\rho}}e^{-\chi}=\frac{\sqrt{d}l\epsilon}{\sqrt{\rho}}\min\{(\frac{c\epsilon^2\delta}{2dl\Delta_f})^{1+\frac{\theta}{4}},e^{-\chi_1}\}=O(\frac{\epsilon^{3+\frac{\theta}{2}}}{d^{\frac{1}{2}(1+\frac{\theta}{2})}})$.
Also notice that we use the notation $\widetilde{O}$ to hide the $\log(\cdot)$ terms coming from $\chi$. With this definition, we have for all $t<T_2$:
\begin{equation}\label{E15}
\begin{aligned}
2\eta\hat{\epsilon}
&\leq(2-\sqrt{2})\eta\frac{\hat{\delta}}{2\sqrt{d}}\frac{c\sqrt{\epsilon^3\rho}}{\chi^3l}(300\hat{c}+1)\\
&\overset{(a)}\leq(2-\sqrt{2})\eta\mu r\cdot\frac{\sqrt{c\rho\epsilon}}{\chi}(300\hat{c}+1)\\
&=(2-\sqrt{2})\mu r\mathcal{P}\rho\eta(300\hat{c}+1)\\
&=(2-\sqrt{2})\sigma\psi_0
\end{aligned}
\end{equation}
where step $(a)$ comes from the definitions of $\mu$ and $r$.\\
We will now prove via double induction that for all pairs $(t_1,t_2)$ with $t_1<T_2$ and $t_2<T_2$:
\begin{equation}\label{E12}
\varphi_{t_1} \le 4 \sigma t_1 \cdot \psi_{t_1}\quad \text{and}\quad 2\eta\hat{\epsilon}\leq(2-\sqrt{2})\sigma \psi_{t_2}
\end{equation}
By the hypothesis of Lemma \ref{lemma_esc}, $\varphi_0 = 0$, and choosing $t=0$ in Eq. \eqref{E15}, we know the base case of the induction holds. Assume Eq. \eqref{E12} is true for $(\tau_1,\tau_2)$ with $\tau_1=\tau_2=t<T_2$. For $(\tau_1+1,\tau_2+1)=(t+1,t+1)$, $t+1\le T_2$, we have:
\begin{align*}
4\sigma(t+1)\psi_{t+1}
\ge & 4\sigma (t+1) \left( (1+\gamma \eta)\psi_{t} - \sigma \sqrt{\psi_{t}^2 + \varphi_{t}^2}-2\eta\hat{\epsilon}\right) \\
\varphi_{t+1} \le &4 \sigma t(1+\gamma\eta) \psi_{t} + \sigma \sqrt{\psi_{t}^2 + \varphi_{t}^2}+2\eta\hat{\epsilon}
\end{align*}
To derive the first inequality, we multiply both sides by $4\sigma(t+1)$; to get the second, we use the induction hypothesis for $\tau_1=t$.\\
Based on the induction hypothesis for $\tau_1=t$ and Eq. \eqref{sigmat_bound}, we know that $\varphi_{t}\leq4\sigma t\cdot\psi_{t}\leq\psi_{t}$. To finish the induction, we only need to show:
$$ 4 \sigma t(1+\gamma\eta) \psi_{t} + \sigma \sqrt{\psi_{t}^2 + \varphi_{t}^2}+2\eta\hat{\epsilon}\leq4\sigma (t+1) \left( (1+\gamma \eta)\psi_{t} - \sigma \sqrt{\psi_{t}^2 + \varphi_{t}^2}-2\eta\hat{\epsilon}\right)$$
Which means we only need to show
\begin{equation*}
\left(1+4\sigma (t+1)\right)[\sigma\sqrt{\psi_{t}^2 + \varphi_{t}^2}+2\eta\hat{\epsilon}]
\le 4 (1+\gamma \eta)\sigma\psi_{t}
\end{equation*}
Recall that $\varphi_t \le 4 \sigma t \cdot \psi_t \le \psi_t$; combining with Eq. \eqref{sigmat_bound} and using the induction hypothesis for $\tau_2=t$, we have
\begin{align*}
\left(1+4\sigma (t+1)\right)[\sigma\sqrt{\psi_{t}^2 + \varphi_{t}^2}+2\eta\hat{\epsilon}]
&\leq\left(1+4\sigma (t+1)\right)\big[\sigma\sqrt{2\psi_{t}^2}+2\eta\hat{\epsilon}\big]\\
&\leq 2\sqrt{2}\sigma\psi_{t}+(4-2\sqrt{2})\sigma\psi_{t}\\
&=4\sigma\psi_{t}<4(1+\gamma\eta)\sigma\psi_{t}
\end{align*}
which finishes the proof for $\tau_1=t+1$.\\
Recall that $\varphi_t \le 4 \sigma t \cdot \psi_t \le \psi_t$; again using the induction hypothesis for $\tau_2=t$, we have
\begin{equation}\label{power_increase}
\psi_{t+1} \ge (1+\gamma \eta)\psi_t - \sqrt{2}\sigma\psi_t -(2-\sqrt{2})\sigma\psi_t=(1+\gamma\eta)\psi_t-2\sigma\psi_t
\ge (1+\frac{\gamma \eta}{2})\psi_t
\end{equation}
where the last step follows from $\sigma = \eta\rho\mathcal{P}(300\hat{c}+ 1) \le \sqrt{c_{\max}}(300\hat{c} + 1) \gamma \eta \cdot\chi^{-1} < \frac{\gamma \eta}{4}$.\\
This means $\psi_{t+1}\geq\psi_t$. Combining with Eq. \eqref{E15}, we finish the proof for $\tau_2=t+1$. Thus, we finish the whole double induction.
Finally, combining Eq. \eqref{vt_bound} and \eqref{power_increase}, we have for all $t<T_2$:
\begin{align*}
200(\mathcal{P}\cdot\hat{c})
\ge &\norm{\bm{v}_t} \ge \psi_t \ge (1+\frac{\gamma \eta}{2})^t \psi_0
\ge (1+\frac{\gamma \eta}{2})^t \frac{\hat{\delta}}{2\sqrt{d}}r
= (1+\frac{\gamma \eta}{2})^t \frac{\hat{\delta}}{2\sqrt{d}}\frac{\mathcal{P}}{\kappa\chi}
\end{align*}
Noticing that $\frac{\eta\gamma}{2}=\frac{c\gamma}{2l}=\frac{c}{2\kappa}<1$, and that for $x\in(0,1)$ we have $\log(1+x)>\frac{x}{2}$, choosing $t=\frac{T_2}{2}<T_2$ in the above equation implies:
\begin{equation*}
\begin{aligned}
T_2 &< 2\frac{\log(400\frac{\kappa\sqrt{d}}{\hat{\delta}}\cdot\hat{c}\chi)}{\log(1+\frac{\eta\gamma}{2})} < 8\frac{\log(400\frac{\kappa d}{\hat{\delta}}\cdot\hat{c}\chi)}{\eta\gamma}=8\frac{\log(400\hat{c})+\log(\chi)+\log(\frac{\kappa d}{\hat{\delta}})}{\eta\gamma}\\
&\overset{(a)}=8\frac{\log(400\hat{c})+\log(\chi)+\chi}{\eta\gamma}
\le 8\frac{\log(400\hat{c})\chi+\chi+\chi}{\eta\gamma}
=8(2+\log(400\hat{c}))\frac{\chi}{\eta\gamma}
=8(2+\log(400\hat{c}))\mathcal{T}
\end{aligned}
\end{equation*}
Notice that $\log(\frac{d\kappa}{\hat{\delta}})=\log(\frac{d\kappa\sqrt{\rho\epsilon}}{dl}e^{\chi})=\log(e^{\chi})=\chi$. By choosing constant $\hat{c}$ to be large enough to satisfy $8(2 + \log (400 \hat{c})) \le \hat{c}$, we will have
$T_2 < \hat{c}\mathcal{T} $, which finishes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm1}}
Choosing $c<\frac{1}{4}$ and starting from $\bm{x}_0$, we consider two cases:
\begin{enumerate}
\item $\norm{\hat{\nabla} f(\bm{x}_0)} > g_{\text{thres}}$: By Lemma \ref{lemma_geq}, we have
\begin{equation*}
f(\bm{x}_{1}) - f(\bm{x}_0) \le -\frac{\eta}{4} \cdot g_{\text{thres}}^2 = -\frac{c^2}{4\chi^4}\cdot\frac{\epsilon^2}{\ell}
\end{equation*}
\item $\norm{\hat{\nabla} f(\bm{x}_0)} \le g_{\text{thres}}$:
In this case, Algorithm \ref{PGD-MF} will add a perturbation and check terminal condition after $t_{thres}$ steps. If the condition is not met, with probability at least $1-\hat{\delta}$, we have:
\begin{equation*}
f(\bm{x}_{t_{\text{thres}}}) - f(\bm{x}_0) \le -f_{\text{thres}} = -\frac{c}{\chi^3}\cdot\sqrt{\frac{\epsilon^3}{\rho}}
\end{equation*}
This means that, on average, every step decreases the function value by
\begin{equation*}
\frac{f(\bm{x}_{t_{\text{thres}}}) - f(\bm{x}_0)}{t_{\text{thres}}} \le -\frac{c^3}{\chi^4}\cdot\frac{\epsilon^2}{\ell}
\end{equation*}
\end{enumerate}
In Case 1, we can repeat this argument for $t = 1$. In Case 2, we can repeat this argument for $t=t_{thres}+1$.
Since we choose $c_{max}<\frac{1}{4}$, gradient descent decreases the function value, on average, by at least $\frac{c^3}{\chi^4}\cdot\frac{\epsilon^2}{\ell}$ per iteration. However, the function value cannot decrease by more than $f(\bm{x}_0) - f^*$, where $f^*$ is the function value at the global minimum. This means Algorithm \ref{PGD-MF} must terminate within the following number of iterations:
\begin{equation*}
\begin{aligned}
\frac{f(\bm{x}_0) - f^*}{\frac{c^3}{\chi^4}\cdot\frac{\epsilon^2}{\ell}}
&= \frac{\chi^4}{c^3}\cdot \frac{\ell(f(\bm{x}_0) - f^*)}{\epsilon^2} \\
&= O\left(\frac{\ell(f(\bm{x}_0) - f^*)}{\epsilon^2}\log^{4}\left(\frac{d\ell\Delta_f}{\epsilon^2\delta}\right) \right)
\end{aligned}
\end{equation*}
Recall that our choice for $\hat{\epsilon}\leq\widetilde{O}(\frac{\epsilon^{3+\frac{\theta}{2}}}{d^{\frac{1}{2}(1+\frac{\theta}{2})}})$. The number of function evaluations of Algorithm \ref{PGD-MF} as a function of parameters $d$ and $\epsilon$ is given as
$$O(\frac{1}{\epsilon^2}\log^4\big(\frac{d}{\epsilon^2}\big)\cdot\frac{d}{\hat{\epsilon}^2}\log\frac{1}{\hat\epsilon})=\widetilde{O}(\frac{d}{\epsilon^2\hat{\epsilon}^2})=\widetilde{O}(\frac{d^{2+\frac{\theta}{2}}}{\epsilon^{8+\theta}}).$$
Finally, we give the probability of obtaining an $\epsilon$-second order stationary point when the gradient descent algorithm stops.
According to Lemma \ref{lemma_geq}, the function value always decreases in case 1. By Lemma \ref{lemma_leq}, we know the function value decreases with probability at least $1-\frac{d\ell}{\sqrt{\rho\epsilon}}e^{-\chi}$ each time the algorithm meets case 2. Besides, we know the number of times we check the terminal condition during the process of gradient descent is at most:
\begin{equation*}
\frac{1}{t_{\text{thres}}} \cdot \frac{\chi^4}{c^3}\cdot \frac{\ell(f(\bm{x}_0) - f^*)}{\epsilon^2}
=\frac{\chi^3}{c}\frac{\sqrt{\rho\epsilon}(f(\bm{x}_0) - f^*)}{\epsilon^2}
\end{equation*}
Besides, by Lemma \ref{lemma_GE}, we know the probability that $\Vert\hat{\nabla}-\nabla\Vert\leq\hat{\epsilon}$ is at least $1-\hat{\epsilon}$ each time we make an estimation, and the number of estimations is given by the number of iterations $\frac{\chi^4}{c^3}\cdot\frac{l\Delta_f}{\epsilon^2}$.
Thus, by the union bound, combining these two failure probabilities, the probability that Algorithm \ref{PGD-MF} outputs an $\epsilon$-second order stationary point is at least:
\begin{equation*}
\begin{aligned}
1- &\frac{d\ell}{\sqrt{\rho\epsilon}}e^{-\chi} \cdot \frac{\chi^3}{c}\frac{\sqrt{\rho\epsilon}(f(\bm{x}_0) - f^*)}{\epsilon^2} - \hat{\epsilon}\cdot\frac{\chi^4l\Delta_f}{c^3\epsilon^2}\\
&= 1 - \frac{\chi^3e^{-\chi}}{c}\cdot \frac{d\ell(f(\bm{x}_0) - f^*)}{\epsilon^2} -\hat{\epsilon}\cdot\frac{\chi^4l\Delta_f}{c^3\epsilon^2}
\end{aligned}
\end{equation*}
Recall our choice of $\chi = \max\{(1+\frac{\theta}{4})\log(\frac{2d\ell\Delta_f}{c\epsilon^2\delta}), \chi_1\}$, where $\theta>0$. We have $\chi^3e^{-\chi} \le e^{-\chi/(1+\frac{\theta}{4})}$ (this holds for all $\chi\geq\chi_1$ by the choice of $\chi_1$) and $\hat{\epsilon}\leq \widetilde{O}(\epsilon^3)$. This gives that the probability of the Algorithm not returning an $\epsilon$-second order stationary point is at most
\begin{equation*}
\begin{aligned}
\frac{\chi^3e^{-\chi}}{c}&\cdot \frac{d\ell(f(\bm{x}_0) - f^*)}{\epsilon^2} + \hat{\epsilon}\cdot\frac{\chi^4l\Delta_f}{c^3\epsilon^2}\\
&\le e^{-\chi/(1+\frac{\theta}{4})} \frac{d\ell(f(\bm{x}_0) - f^*)}{c\epsilon^2} +\frac{\delta}{2}\le \delta
\end{aligned}
\end{equation*}
which finishes the proof of the Theorem.
\section{Proof of Lemma \ref{lemma_leq}}\label{apdlem3}
\begin{proof}
Using $l$-smoothness, after adding the perturbation we know the function value increases by at most
\begin{equation}\label{Increase}
\begin{aligned}
f(\bm{x}_0) - f(\widetilde{\bm{x}})
\le \nabla f(\widetilde{\bm{x}})^T\bm{\xi}+\frac{l}{2}\Vert\bm{\xi}\Vert^2
\le g_{thres}r+\frac{lr^2}{2}
\le \frac{3}{2}f_{thres}
\end{aligned}
\end{equation}
where the second inequality uses the $\hat{\epsilon}$-close gradient estimation (so that $\Vert\nabla f(\widetilde{\bm{x}})\Vert$ is of order $g_{thres}$) together with $\Vert\bm{\xi}\Vert\leq r$, and the last inequality follows from the definitions of $g_{thres}$, $r$ and $f_{thres}$.
By applying Lemma \ref{lemma_esc}, we know for any $\bm{x}_0\in \mathcal{X}_{\text{stuck}}$, it is guaranteed that $(\bm{x}_{0} \pm \mu r \bm{e}_1) \not \in \mathcal{X}_{\text{stuck}}$, where $\mu \in [\frac{\hat{\delta}}{2\sqrt{d}}, 1]$. Let $I_{\mathcal{X}_{\text{stuck}}}(\cdot)$ be the indicator function of being inside the set $\mathcal{X}_{\text{stuck}}$, and write the vector $\bm{x} = (x^{(1)}, \bm{x}^{(-1)})$, where $x^{(1)}$ is the component along the $\bm{e}_1$ direction and $\bm{x}^{(-1)}$ is the remaining $(d-1)$-dimensional vector. Let $\mathbb{B}^{(d)}(r)$ denote the $d$-dimensional ball with radius $r$. We obtain an upper bound on the volume of $\mathcal{X}_{\text{stuck}}$ as follows.
\begin{eqnarray}
\text{Vol}(\mathcal{X}_{\text{stuck}}) \nonumber
&= & \int_{\mathbb{B}^{(d)}_{\tilde{\bm{x}}}(r)} \mathrm{d}\bm{x} \cdot I_{\mathcal{X}_{\text{stuck}}}(\bm{x})\nonumber\\
&= & \int_{\mathbb{B}^{(d-1)}_{\tilde{\bm{x}}}(r)} \mathrm{d} \bm{x}^{(-1)} \int_{y_l}^{y_u} \mathrm{d} x^{(1)} \cdot I_{\mathcal{X}_{\text{stuck}}}(\bm{x})\nonumber\\
&\le & \int_{\mathbb{B}^{(d-1)}_{\tilde{\bm{x}}}(r)} \mathrm{d} \bm{x}^{(-1)} \cdot\left(2\cdot \frac{\hat{\delta}}{2\sqrt{d}}r \right) \nonumber\\
&=& \text{Vol}(\mathbb{B}_0^{(d-1)}(r))\times \frac{\hat{\delta} r}{\sqrt{d}},
\end{eqnarray}
where $y_l=\tilde{x}^{(1)} - \sqrt{r^2 - \norm{\tilde{\bm{x}}^{(-1)} - \bm{x}^{(-1)}}^2}$, and $y_u=\tilde{x}^{(1)} + \sqrt{r^2 - \norm{\tilde{\bm{x}}^{(-1)} - \bm{x}^{(-1)}}^2}$.
We next obtain an upper bound on $\frac{\text{Vol}(\mathcal{X}_{\text{stuck}})}{\text{Vol}(\mathbb{B}^{(d)}_{\tilde{\bm{x}}}(r))}$ as follows.
\begin{eqnarray}
\frac{\text{Vol}(\mathcal{X}_{\text{stuck}})}{\text{Vol}(\mathbb{B}^{(d)}_{\tilde{\bm{x}}}(r))}&
\le& \frac{\frac{\hat{\delta} r}{\sqrt{d}} \times \text{Vol}(\mathbb{B}^{(d-1)}_0(r))}{\text{Vol} (\mathbb{B}^{(d)}_0(r))}\nonumber\\
&=& \frac{\hat{\delta}}{\sqrt{\pi d}}\frac{\Gamma(\frac{d}{2}+1)}{\Gamma(\frac{d}{2}+\frac{1}{2})} \nonumber\\
&\le& \frac{\hat{\delta}}{\sqrt{\pi d}} \cdot \sqrt{\frac{d}{2}+\frac{1}{2}}\nonumber \\
&\le& \hat{\delta}
\end{eqnarray}
The second-to-last inequality is by Gautschi's inequality \citep{elezovic2000best}, which states that $\frac{\Gamma(x+1)}{\Gamma(x+1/2)}<\sqrt{x+\frac{1}{2}}$ as long as $x\ge 0$.
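As a numerical check of this ratio bound (not part of the proof), the Python snippet below evaluates $\frac{1}{\sqrt{\pi d}}\frac{\Gamma(\frac{d}{2}+1)}{\Gamma(\frac{d}{2}+\frac{1}{2})}$ for several dimensions and confirms it never exceeds $1$.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

for d in [2, 10, 100, 1000]:
    ratio = np.exp(gammaln(d / 2 + 1) - gammaln(d / 2 + 0.5)) / np.sqrt(np.pi * d)
    print(d, ratio, ratio <= 1.0)     # Gautschi: ratio <= sqrt((d+1)/(2*pi*d)) < 1
\end{verbatim}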
Since $\bm{\xi}$ is chosen from the uniform distribution over the ball with radius $r$, with probability at least $1-\hat{\delta}$ we have $\bm{x}_0 \not \in \mathcal{X}_{\text{stuck}}$. Thus, by Lemma \ref{lemma_esc},
\begin{align*}
f(\bm{x}_T) - f(\tilde{\bm{x}}) =& f(\bm{x}_T) - f(\bm{x}_0) + f(\bm{x}_0) - f(\widetilde{\bm{x}}) \\
\le & -2.5f_{thres} + 1.5f_{thres} \le -f_{thres}
\end{align*}
which completes the proof of Lemma \ref{lemma_leq}.
\end{proof}
\section{Related Work}
In recent years, multiple algorithms have been investigated for non-convex optimization problems that converge to $\epsilon$-second-order stationary point. Most of the work has been done for model-based approaches which assume the knowledge of gradient and/or the Hessian of the objective function. Recently, there has also been some work in model-free approaches for non-convex optimization.
{\bf Model-Based Non-Convex Optimization: } Model-based approaches typically assume the knowledge of derivatives (first or higher order) of the function. We summarize key proposed algorithms on these directions that have been shown to achieve $\epsilon$-second-order stationary point convergence guarantees.
Based on the knowledge of gradients, the authors of \citep{jin2017escape,jinsept2019} show that the perturbation of gradients in each iteration of the gradient descent can lead to $\epsilon$-second-order stationary point guarantees in $\widetilde{O}(\epsilon^{-2})$ iterations, thus providing no additional loss in the number of iterations required for first order stationary point guarantees. Perturbed versions of stochastic gradient descent have also been studied in \citep{jinsept2019}, where the algorithm finds $\epsilon$-second-order stationary point in ${\tilde O}(\epsilon^{-4})$ iterations if the stochastic gradients are Lipschitz, and ${\tilde O}(d\epsilon^{-4})$ iterations if the stochastic gradients are not Lipschitz.
If the Hessian is also known, one approach is to use a successive convex approximation (SCA) method. Perturbation in each iteration of SCA has been shown to achieve an $\epsilon$-second-order stationary point in \citep{bedi2019escaping}. Another approach is to add a cubic regularization to the Newton method iterations \citep{nesterov2006cubic}, where the authors showed that the algorithm converges to an $\epsilon$-second-order stationary point within $O(\frac{1}{\epsilon^{1.5}})$ gradient and Hessian oracle calls. Recently, stochastic variants of this algorithm have been studied and shown to improve over stochastic gradient descent in the number of iterations \citep{tripuraneni2018stochastic}. Instead of directly querying the Hessian, recent research shows that one can achieve an $\epsilon$-second-order stationary point using Hessian-vector products \citep{agarwal2017finding}.
In contrast to these works, we consider a model-free approach, where there is no oracle available to query gradients and/or Hessian. Thus, an estimation of gradient is used to perform gradient descent algorithm.
{\bf Model-Free Non-Convex Optimization: } A model-free approach to non-convex optimization, also called zeroth-order non-convex optimization assumes that there is an oracle for querying the function. However, anything else about the function (e.g., gradients) is not available. Model-free approaches
for optimization problems estimate the values of gradients and/or Hessians, and are not well understood from a theoretical perspective. Such problems have applications in model-free reinforcement learning \citep{salimans2017evolution} where the objective is not available in closed form and can only be queried. An approach for estimation of gradient has been studied in \citep{conn2009introduction,nesterov2017random}. However, the existing works either find the guarantees for convex optimization or first-order stationary point guarantees. Recently, the authors of \citep{balasubramanian2019zeroth} provided the first model-free algorithm for non-convex optimization with second order guarantees. They use the cubic regularizer of the Newton's method after estimating gradient and Hessian. In contrast, we only estimate the gradient to compute the estimated gradient descent. We also note that the algorithm in \citep{balasubramanian2019zeroth} requires $O(\frac{d}{\epsilon^{3.5}})+\widetilde{O}(\frac{d^8}{\epsilon^{2.5}})$ function calls while we require $\widetilde{O}(\frac{d^{2+\frac{\theta}{2}}}{\epsilon^{8+\theta}})\approx\widetilde{O}(\frac{d^2}{\epsilon^8})$ function calls to achieve $\epsilon$-second-order stationary point. Thus, our result outperforms that in \citep{balasubramanian2019zeroth} when $d=\Omega(\epsilon^{-(11/12+\delta)})$ for arbitrarily small $\delta>0$.
\section{Guarantees for the Proposed Algorithm}
In this section, we will show that the proposed algorithm, EGD, returns an $\epsilon$-second-order stationary point. The main result is given as follows.
\begin{theorem} \label{thm1}
Assume that $f$ satisfies Assumption \ref{assum}. Then there exist constants $c_{max}$ and $c'_{min}$ such that, for any $\delta>0$, $c\leq c_{max}$, $c'\geq c'_{min}$, $\Delta_f\geq f(\bm{x}_0)-f^{*}$, $\epsilon>0$ and $\theta>0$, if we let
$$\hat{\epsilon}\leq\min\{O(\epsilon), \widetilde{O}(\frac{\epsilon^{3+\frac{\theta}{2}}}{d^{\frac{1}{2}(1+\frac{\theta}{2})}})\},\qquad \chi=\max\{(1+\frac{\theta}{4})\log(\frac{dl\Delta_f}{c\epsilon^2\delta}),\chi_1\},$$
where $\chi_1$ is a constant such that $\chi_1^3e^{-\chi_1} \le e^{-\chi_1/(1+\frac{\theta}{4})}$, then $EGD(\bm{x}_0,d,l,B,\chi_1,\theta,\rho,\epsilon,\hat{\epsilon},c,c',\delta,\Delta_f)$ will output an $\epsilon$-second-order stationary point with probability at least $1-\delta$, and terminate in the following number of iterations:
$$O\big(\frac{l(f(\bm{x}_0)-f^{*})}{\epsilon^2}\log^4\big(\frac{dl\Delta_f}{\epsilon^2\delta}\big)\big)=\widetilde{O}(\frac{1}{\epsilon^2}).$$
Further, the number of calls to the function $f(\cdot)$ for the zeroth order algorithm are
$$\widetilde{O}(\frac{d^{2+\frac{\theta}{2}}}{\epsilon^{8+\theta}}).$$
\end{theorem}
The rest of the section proves this result. We will first describe the different Lemmas used in the proof, and then use them to give the proof of the theorem.
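For concreteness, the following Python sketch illustrates the loop structure of EGD implied by the analysis: descend along the estimated gradient, perturb inside a ball of radius $r$ when the estimated gradient is small, and stop at the pre-perturbation point if the function fails to decrease by $f_{thres}$ within $t_{thres}$ steps. This is a hedged reading of Algorithm \ref{PGD-MF} (which is not reproduced here); the function and parameter names are hypothetical.
\begin{verbatim}
import numpy as np

def uniform_ball(d, rng):
    """A point drawn uniformly from the unit ball in R^d."""
    u = rng.standard_normal(d)
    return (rng.random() ** (1.0 / d)) * u / np.linalg.norm(u)

def egd(f, grad_est, x0, eta, g_thres, f_thres, t_thres, r, rng, max_iter=100000):
    x = np.array(x0, dtype=float)
    t_noise, x_anchor = -(t_thres + 1), None
    for t in range(max_iter):
        if t - t_noise == t_thres and f(x) - f(x_anchor) > -f_thres:
            return x_anchor               # too little decrease: accept this point
        g = grad_est(f, x)
        if np.linalg.norm(g) <= g_thres and t - t_noise > t_thres:
            t_noise, x_anchor = t, x.copy()
            x = x + r * uniform_ball(x.size, rng)   # perturb a candidate saddle
            g = grad_est(f, x)            # re-estimate at the perturbed point
        x = x - eta * g
    return x
\end{verbatim}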
\subsection{Key Lemmas}
To prove the main result, we first describe two lemmas, Lemma \ref{lemma_geq} and Lemma \ref{lemma_leq}. Lemma \ref{lemma_geq} indicates that if $\Vert\hat{\nabla}\Vert>g_{thres}$, the function value keeps decreasing with the iterations. In other words, we have
\begin{lemma} \label{lemma_geq}
Assume $f(\cdot)$ is $l$-smooth and $\hat{\nabla}f(\cdot)$ is $\hat{\epsilon}$-close to $\nabla f(\cdot)$ for any given $\epsilon>0$. Let $\hat\epsilon\leq\frac{\sqrt{c}}{4\chi^2}\cdot\epsilon=O(\epsilon)$ and $c\leq c_{max}$. When $\Vert\hat{\nabla}f(\bm{x}_t)\Vert\geq g_{thres}$, gradient descent with step size $\eta<\frac{1}{l}$ gives
\begin{equation}
\label{E2}
f(\bm{x}_{t+1})\leq f(\bm{x}_t)-\frac{\eta}{4}\Vert\hat{\nabla}f(\bm{x}_t)\Vert^2
\end{equation}
\end{lemma}
\begin{proof}
The result is based on the smoothness property of the function and that the estimated gradient is close to the actual gradient. The steps for the proof can be seen as
\begin{equation}\label{Decrease}
\begin{aligned}
&f(\bm{x}_{t+1})\\
&\overset{(a)}\leq f(\bm{x}_t)+\nabla f(\bm{x}_t)^T(\bm{x}_{t+1}-\bm{x}_t)+\frac{l}{2}\Vert\bm{x}_{t+1}-\bm{x}_t\Vert^2\\
&\overset{(b)}=f(\bm{x}_t)-\eta\nabla f(\bm{x}_t)^T\hat{\nabla}f(\bm{x}_t)+\frac{l\eta^2}{2}\Vert\hat{\nabla} f(\bm{x}_t)\Vert^2\\
&=f(\bm{x}_t)-\eta [\nabla f(\bm{x}_t)-\hat{\nabla}f(\bm{x}_t)+\hat{\nabla}f(\bm{x}_t)] ^T\hat{\nabla}f(\bm{x}_t)\\& \quad +\frac{l\eta^2}{2}\Vert\hat{\nabla} f(\bm{x}_t)\Vert^2\\
&\overset{(c)}\leq f(\bm{x}_t)+\eta\Vert\nabla f(\bm{x}_t)-\hat{\nabla}f(\bm{x}_t)\Vert\Vert\hat{\nabla}f(\bm{x}_t)\Vert\\&\quad+\frac{l\eta^2}{2}\Vert\hat{\nabla} f(\bm{x}_t)\Vert^2-\eta\Vert\hat{\nabla} f(\bm{x}_t)\Vert^2\\
&\overset{(d)}\leq f(\bm{x}_t)+\eta\Vert\nabla f(\bm{x}_t)-\hat{\nabla}f(\bm{x}_t)\Vert\Vert\hat{\nabla}f(\bm{x}_t)\Vert\\&\quad-\frac{\eta}{2}\Vert\hat{\nabla} f(\bm{x}_t) \Vert^2\\
&\overset{(e)}\leq f(\bm{x}_t)-\frac{\eta}{2}\Vert\hat{\nabla} f(\bm{x}_t) \Vert^2+\eta\hat{\epsilon}\Vert\hat{\nabla}f(\bm{x}_t)\Vert\\
&\overset{(f)}\leq f(\bm{x}_t)-\frac{\eta}{4}\Vert\hat{\nabla} f(\bm{x}_t) \Vert^2
\end{aligned}
\end{equation}
The inequality $(a)$ directly follows from the $l$-smooth property, $(b)$ uses the gradient descent step in Algorithm \ref{PGD-MF}, and $(c)$ follows from the Cauchy-Schwarz inequality. $(d)$ and $(e)$ hold due to the condition $\eta<\frac{1}{l}$ and the fact that $\hat{\nabla}f$ is $\hat{\epsilon}$-close to $\nabla f$, respectively. Finally, from $\hat{\epsilon}\leq\frac{g_{thres}}{4}\leq\frac{\Vert\hat{\nabla}f(\bm{x}_t)\Vert}{4}$, $(f)$ follows.
\end{proof}
Besides, we note that when $\Vert\hat{\nabla}f(\bm{x})\Vert<g_{thres}$, we have
\begin{equation*}
\begin{aligned}
\Vert\nabla f\Vert&=\Vert(\nabla f-\hat{\nabla}f)+\hat{\nabla}f\Vert\\
&\leq\Vert\nabla f-\hat{\nabla}f\Vert+\Vert\hat{\nabla}f\Vert\\
&\leq\hat{\epsilon}+\frac{\sqrt{c}}{\chi^2}\epsilon\leq\frac{5}{4}\frac{\sqrt{c}}{\chi^2}\epsilon\leq\epsilon
\end{aligned}
\end{equation*}
By choosing $c<\frac{1}{4}$, the last inequality holds since $\chi>1$. Thus, any $\bm{x}$ satisfying $\Vert\hat{\nabla}f(\bm{x})\Vert<g_{thres}$ is an $\epsilon$-first-order stationary point and satisfies the first requirement of an $\epsilon$-second-order stationary point.
The next result, Lemma \ref{lemma_leq}, states that if $\Vert\hat{\nabla}f(\widetilde{\bm{x}})\Vert\leq g_{thres}$ and $\lambda_{min}(\nabla^2f(\widetilde{\bm{x}}))\leq-\sqrt{\rho\epsilon}$, indicating that $\widetilde{\bm{x}}$ is (approximately) a first-order stationary point of the estimated gradient but not (approximately) a second-order stationary point, then the proposed algorithm escapes this saddle point by decreasing the function value by more than $f_{thres}$ within $t_{thres}$ iterations.
\begin{lemma} \label{lemma_leq}
There exists an absolute constant $c_{max}$ such that: if $f(\cdot)$ is $l$-smooth and $\rho$-Hessian Lipschitz, and if $c\leq c_{max}$, $\hat{\delta}=\frac{dl}{\sqrt{\rho\epsilon}}e^{-\chi}<1$, $$\hat{\epsilon}\leq\min\{O(\epsilon),\widetilde{O}(\frac{\epsilon^{3+\frac{\theta}{2}}}{d^{\frac{1}{2}(1+\frac{\theta}{2})}})\}$$ (the terms $O(\epsilon)$ and $\widetilde{O}(\frac{\epsilon^{3+\frac{\theta}{2}}}{d^{\frac{1}{2}(1+\frac{\theta}{2})}})$ are made precise in the following lemmas), with $\eta,r,g_{thres},f_{thres},t_{thres}$ defined as in Algorithm \ref{PGD-MF}. Define $\gamma=\sqrt{\rho\epsilon}$ and $\mathcal{T}=c\cdot t_{thres}=\frac{\chi}{\eta\gamma}$. Then if $\widetilde{\bm{x}}$ satisfies:
$$\Vert\hat{\nabla}f(\widetilde{\bm{x}})\Vert\leq g_{thres}\quad and\quad \lambda_{min}(\nabla^2f(\widetilde{\bm{x}}))\leq-\sqrt{\rho\epsilon}$$
Let $\bm{x}_0=\widetilde{\bm{x}}+\bm{\xi}$, where $\bm{\xi}$ comes from the uniform distribution over the ball with radius $r=\frac{\sqrt{c}}{\chi^2}\cdot\frac{\epsilon}{l}$. Then with probability at least $1-\hat{\delta}$, we have for $T=t_{thres}=\frac{\mathcal{T}}{c}$:
$$f(\bm{x}_{T})-f(\widetilde{\bm{x}})\leq-f_{thres}$$
\end{lemma}
Note that $\delta$ is the probability defined for Algorithm \ref{PGD-MF} and $\hat{\delta}$ is the probability defined for Lemma \ref{lemma_leq}. We first describe the key results needed to prove this lemma and then give the steps that use these results to prove Lemma \ref{lemma_leq}.
\subsubsection{Key results to prove Lemma \ref{lemma_leq}}
Let $\widetilde{\bm{x}}$ satisfy the conditions in Lemma \ref{lemma_leq}, and without loss of generality let $\bm{e}_1$ be the eigenvector corresponding to the minimum eigenvalue of $\nabla^2f(\widetilde{\bm{x}})$. Consider two gradient descent sequences $\{\bm{u}_t\}$, $\{\bm{w}_t\}$ with initial points $\bm{u}_0,\bm{w}_0$ satisfying:
$$\Vert\bm{u}_0-\widetilde{\bm{x}}\Vert\leq r,\quad \bm{w}_0=\bm{u}_0+\mu r\bm{e}_1,\quad\mu\in[\frac{\hat{\delta}}{2\sqrt{d}},1].$$ Further, let $\mathcal{P}=\frac{\sqrt{c}}{\chi}\sqrt{\frac{\epsilon}{\rho}}$, $\mathcal{H}=\nabla^2 f(\widetilde{\bm{x}})$, and let $\widetilde{f}_{\bm{y}}(\bm{x}):=f(\bm{y})+\nabla f(\bm{y})^T(\bm{x}-\bm{y})+\frac{1}{2}(\bm{x}-\bm{y})^T\mathcal{H}(\bm{x}-\bm{y})$ be the quadratic approximation of $f$ around $\bm{y}$.
The next result, Lemma \ref{lemma_ut}, shows that if $\Vert\bm{u}_0-\widetilde{\bm{x}}\Vert\leq2r$, we have $\Vert\bm{u}_t-\widetilde{\bm{x}}\Vert\leq100(\mathcal{P}\cdot\hat{c})$ for all $t<T_1$, where $T_1$ is defined in the following result.
\begin{lemma} \label{lemma_ut}
Let $f(\cdot),\widetilde{\bm{x}}$ satisfies the conditions in Lemma \ref{lemma_leq}, for any initial point $\bm{u}_0$ with $\Vert\bm{u}_0-\widetilde{\bm{x}}\Vert\leq2r$. Let $$T_1=\min\big\{\inf_t\{t\vert\widetilde{f}_{\bm{u}_0}(\bm{u}_t)-f(\bm{u}_0)\leq-4.5f_{thres}\},\hat{c}\mathcal{T}\big\}.$$
Then, there exist absolute constant $c_{max}$ such that for any constant $\hat{c}>3$, $c\leq c_{max}$, $\hat{\epsilon}\leq\frac{\sqrt{c}}{4\chi^2}\cdot\epsilon=O(\epsilon)$ and $t<T_1$, we have $\Vert\bm{u}_t-\widetilde{\bm{x}}\Vert\leq100(\mathcal{P}\cdot\hat{c})$.
\end{lemma}
\begin{proof}
The proof is provided in Appendix \ref{apdlem5}.
\end{proof}
Let
\begin{equation}\label{eps_cube_bound}
\hat{\epsilon}\leq\frac{2-\sqrt{2}}{2}\frac{c\sqrt{\epsilon^3\rho}}{\chi^3l}\frac{\hat{\delta}}{2\sqrt{d}}(300\hat{c}+1)=\widetilde{O}(\frac{\epsilon^{3+\frac{\theta}{2}}}{d^{\frac{1}{2}(1+\frac{\theta}{2})}})
\end{equation}
where $\theta>0$ is a constant defined in Theorem \ref{thm1}.
The next result shows that if $\Vert\bm{u}_t-\widetilde{\bm{x}}\Vert\leq100(\mathcal{P}\cdot\hat{c})$ for all $t<T_2$, then $T_2<\hat{c}\mathcal{T}$, where $T_2$ is as in the statement of the following lemma. We will also see how the above bound on $\hat{\epsilon}$ is derived in the proof of this lemma.
\begin{lemma} \label{lemma_wt}
Let $f(\cdot)$, $\widetilde{\bm{x}}$ satisfy the conditions in Lemma \ref{lemma_leq}. Let $$T_2=\min\{\inf_t\{t\vert\widetilde{f}_{\bm{w}_0}(\bm{w}_t)-f(\bm{w}_0)\leq-4.5f_{thres}\},\hat{c}\mathcal{T}\}.$$
There are absolute constants $c_{max}$ and $\hat{c}$ such that for any $c\leq c_{max}$ and $\hat{\epsilon}$ satisfying Eq. \eqref{eps_cube_bound}, if $\Vert\bm{u}_t-\widetilde{\bm{x}}\Vert\leq100(\mathcal{P}\cdot\hat{c})$ for all $t<T_2$, then $T_2<\hat{c}\mathcal{T}$.
\end{lemma}
\begin{proof}
The proof is provided in Appendix \ref{apdxlem6}.
\end{proof}
The next result, Lemma \ref{lemma_esc}, combines the two results above to show that, given two gradient descent sequences $\{\bm{u}_t\},\{\bm{w}_t\}$ satisfying the properties given above, at least one of them helps the algorithm decrease the function value significantly.
\begin{lemma} \label{lemma_esc}
There exist absolute constant $c_{max}$, such that for any step size $\eta\leq\frac{c_{max}}{l}$, gradient estimation accuracy $\hat{\epsilon}\leq\frac{\sqrt{c^3}}{4\chi}\cdot\epsilon=O(\epsilon)$, and any $T=\frac{\mathcal{T}}{c}$, we have:
$$\min\{f(\bm{u}_T)-f(\bm{u}_0),f(\bm{w}_T)-f(\bm{w}_0)\}\leq-2.5f_{thres}.$$
\end{lemma}
\begin{proof}
The proof is given in Appendix \ref{apdlem4}.
\end{proof}
\subsubsection{Proof of Lemma \ref{lemma_leq}}
\begin{proof}
Given the result in Lemma \ref{lemma_esc}, the proof of Lemma \ref{lemma_leq} follows along the same lines as Lemma 14 in \citep{jin2017escape}. For completeness, we provide the detailed steps in Appendix \ref{apdlem3}.
\end{proof}
\input{mainthm}
E.C. and D.K. acknowledge the support of the Bulgarian-JINR collaborative Grant,
E.C. is grateful to Grant 08-17/2016 of the Bulgarian Science Foundation and
D.K. acknowledges the support of the Bogoliubov-Infeld Program.
D.K. thanks also A. Kotlorz for useful comments on numerical analysis.
Throughout this article, $R$ denotes a commutative ring with $1 \neq 0$. In 2007, A. Badawi introduced the concept of 2-absorbing ideals of commutative rings as a generalization of prime ideals. He defined an ideal $I$ of $R$ to be 2-absorbing if whenever $a, b, c \in R$ and $abc \in I$, then $ab$ or $ac$ or $bc$ is in $I$ \cite{B1}. As in the case of prime ideals, 2-absorbing have a characterization in terms of ideals. Namely, $I$ is 2-absorbing if whenever $I_{1}, I_{2}, I_{3}$ are ideals of $R$ and $I_{1}I_{2}I_{3} \subseteq I$, then $I_{1}I_{2}$ or $I_{1}I_{3}$ or $I_{2}I_{3}$ is contained in $I$ \cite[Theorem 2.13]{B1}.
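For a concrete illustration of the element-wise definition (not part of the original text), the following Python snippet brute-forces the 2-absorbing condition for ideals of $\mathbb{Z}_n$; for instance, $6\mathbb{Z}_{36}$ is 2-absorbing while $12\mathbb{Z}_{36}$ is not, since $2\cdot2\cdot3=12$ lies in the latter but no product of two of these factors does.
\begin{verbatim}
def two_absorbing(n, I):
    return all((a * b) % n in I or (a * c) % n in I or (b * c) % n in I
               for a in range(n) for b in range(n) for c in range(n)
               if (a * b * c) % n in I)

n = 36
print(two_absorbing(n, set(range(0, n, 6))))    # True:  (6) = (2)(3)
print(two_absorbing(n, set(range(0, n, 12))))   # False: witnessed by 2*2*3
\end{verbatim}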
In 2011, D.F. Anderson and A. Badawi, inspired by the definition of 2-absorbing ideals, defined $n$-absorbing ideals for any positive integer $n$: an ideal $I$ is called an $n$-absorbing ideal if whenever $x_{1} \cdots x_{n+1} \in I$ for $x_{1}, \dots, x_{n+1} \in R$, then there are $n$ of the $x_{i}$'s whose product is in $I$. They also introduced strongly $n$-absorbing ideals as another generalization of prime ideals, where an ideal $I$ of $R$ is said to be a strongly $n$-absorbing ideal if whenever $I_{1} \cdots I_{n+1} \subseteq I$ for ideals $I_{1}, \dots, I_{n+1}$ of $R$, then the product of some $n$ of the $I_{j}$'s is contained in $I$. Obviously, a strongly $n$-absorbing ideal of $R$ is also an $n$-absorbing ideal of $R$, and by the last fact in the previous paragraph, 2-absorbing and strongly 2-absorbing ideals are the same. Moreover, D.F. Anderson and A. Badawi were able to prove that $n$-absorbing and strongly $n$-absorbing ideals are equivalent in the class of Prufer domains \cite[Corollary 6.9]{AB}, and they conjectured that these
two concepts are equivalent in any commutative ring \cite[Conjecture 1]{AB}.
In 1975, P. Quartararo, Jr. and H.S. Butts defined u-rings to be those rings in which, whenever an ideal $I$ is contained in a finite union of ideals, it must be contained in one of them. They then proved that it suffices to consider the case where $I$ is a finitely generated ideal of $R$ \cite[Proposition 1.1]{QB}. Moreover, in \cite[Corollary 1.6]{QB}, they proved that the class of Prüfer domains (domains in which every nonzero finitely generated ideal is invertible) is contained in the class of u-rings. So we have the following diagram of implications:
\begin{center}
Prüfer domains\\
$\Downarrow$\\
u-rings
\end{center}
where the implication is irreversible in general; see Example \ref{e:1} for a u-ring which is not a domain, in particular, not a Prüfer domain.
In Section 1 of this paper, we provide an alternative proof of \cite[Theorem 2.13]{B1}. The technique of this proof helps in proving the main result of Section 2. In Section 2, we positively solve Anderson-Badawi's conjecture on n-absorbing and strongly n-absorbing ideals in the class of u-rings. The main result (Theorem \ref{T:2}) extends and recovers Anderson-Badawi's related result on Prüfer domains (Corollary \ref{c:1}).
\section{Alternative proof}
As we mentioned in the introduction, 2-absorbing ideals and strongly 2-absorbing ideals are the same. This follows trivially from \cite[Theorem 2.13]{B1}. In this section, we present an alternative proof of \cite[Theorem 2.13]{B1}, which inspires us in solving \cite[Conjecture 1]{AB} in the class of u-rings. For the sake of completeness, we provide the proof of the following lemma, which can be found as an exercise in classical ring theory texts.
\begin{lemma} \label{l:1}
Let $I$ be an ideal of $R$. If $I=I_1\cup I_2$, where $I_1$ and $I_2$ are also ideals, then $I=I_1$ or $I=I_2$.
\end{lemma}
\begin{proof}
Suppose, for contradiction, that $I_1\setminus I_2$ and $I_2\setminus I_1$ are nonempty. Let $a\in I_1\setminus I_2$ and $b\in I_2\setminus I_1$. Since $I_1\cup I_2$ is an ideal, $a+b\in I_1\cup I_2$. Assume, without loss of generality, that $a+b\in I_1$. Then $b=(a+b)-a\in I_1$, a contradiction. Therefore, either $I_1\setminus I_2 = \emptyset$ or $I_2\setminus I_1 = \emptyset$; equivalently, $I_1 \subseteq I_2$ or $I_2 \subseteq I_1$,
so that $I=I_1$ or $I=I_2$.
\end{proof}
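\begin{rem}
Lemma \ref{l:1} fails for unions of three ideals, which is why the u-ideal condition introduced below is not automatic. For instance (a standard example), in $R=\mathbb{F}_2[x,y]/(x,y)^2$ the maximal ideal $\mathfrak{m}=(x,y)=\{0,x,y,x+y\}$ is the union of the three proper ideals $(x)$, $(y)$ and $(x+y)$, but is contained in none of them.
\end{rem}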
Now, we prove a few lemmas in a sequence, finishing with the proof of the theorem.
\begin{lemma} \label{l:2}
Suppose that $I$ is a 2-absorbing ideal of $R$, $J$ is an ideal of $R$ and $xyJ\subseteq I$ for some $x, y \in R$. Then $xy\in I$ or $xJ\subseteq I$ or $yJ\subseteq I$.
\end{lemma}
\begin{proof}
Suppose $xy\not\in I$. Denote by $J_x=\{z\in J \ |\ xz\in I\}$ and $J_y=\{z\in J\ |\ yz\in I\}$. It is not hard to show that $J_x$ and $J_y$ are ideals (for instance, if $z_1, z_2\in J_x$ and $r\in R$, then $x(z_1+rz_2)=xz_1+r\,xz_2\in I$). Now, if $a \in J$, then $xya \in I$. But $I$ being 2-absorbing and $xy\not\in I$ imply that $xa \in I$ or $ya \in I$. Thus, either $a \in J_x$ or $a \in J_y$, and hence $J=J_x\cup J_y$. Therefore, by Lemma \ref{l:1}, either $J=J_x$, and hence $xJ\subseteq I$, or $J=J_y$, and hence $yJ\subseteq I$.
\end{proof}
We generalize the previous lemma as follows.
\begin{lemma} \label{l:3}
Suppose that $I$ is a 2-absorbing ideal of $R$, $I_1$ and $I_2$ are ideals of $R$, and $xI_1I_2\subseteq I$ for some $x \in R$. Then $xI_1\subseteq I$ or $xI_2\subseteq I$ or $I_1I_2\subseteq I$.
\end{lemma}
\begin{proof}
Suppose $xI_2\not\subseteq I$. By Lemma \ref{l:2}, for all $y\in I_1$, either $xy\in I$ or $yI_2\subseteq I$. Let $N=\{y\in I_1 \ | \ xy\in I\}$ and $M=\{y\in I_1\ |\ yI_2\subseteq I\}$. Then $M$ and $N$ are ideals of $R$, and similarly as in the proof of Lemma \ref{l:2}, $I_1=N\cup M$. Thus, again by Lemma \ref{l:1}, either $I_1=N$, and in this case $xI_1\subseteq I$, or $I_1=M$, and in this case $I_1I_2\subseteq I$.
\end{proof}
Finally, we use the above lemmas to prove the main theorem.
\begin{thm} \cite[Theorem 2.13]{B1} \label{T:1}
An ideal $I$ of $R$ is 2-absorbing ideal if and only if it is strongly 2-absorbing ideal.
\end{thm}
\begin{proof}
Obviously, strongly 2-absorbing ideals are 2-absorbing. Conversely, assume that $I$ is 2-absorbing and $I_1I_2I_3 \subseteq I$, where $I_1$, $I_2$, and $I_3$ are ideals of $R$. Suppose further that $I_2I_3\not\subseteq I$, and let $N=\{x\in I_1 \ | \ xI_2\subseteq I\}$ and $M=\{x\in I_1 \ |\ xI_3\subseteq I\}$. Then $M$ and $N$ are ideals. By Lemma \ref{l:3}, every $x\in I_1$ lies in either $N$ or $M$, and thus $I_1=N\cup M$.
Therefore, by Lemma \ref{l:1}, either $I_1=N$ or $I_1=M$, which implies that $I_1I_2\subseteq I$ or $I_1I_3\subseteq I$.
\end{proof}
\section{The conjecture}
The following conjecture was announced in \cite{AB}.\\
\textbf{Anderson and Badawi's conjecture:} In every ring, the notions of $n$-absorbing ideals and strongly $n$-absorbing ideals are equivalent.\\
It is easy to see that strongly $n$-absorbing ideals are $n$-absorbing. We aim to find conditions for the converse to hold. We adopt the following terminology from \cite{G} and \cite{QB}: if $I_1,...,I_n$ are ideals of $R$, then $I_1\cup ...\cup I_n$ is called an efficient covering of $I$ if $I\subseteq I_1\cup ...\cup I_n$, but $I$ is not contained in the union of any $n-1$ of these ideals \cite{G}. In view of this definition, an ideal $I$ of $R$ is called a u-ideal if there is no efficient covering of $I$ with more than one ideal.\\
The following result solves Anderson and Badawi's conjecture for u-rings, thus generalizing Corollary 6.9 from \cite{AB}.
\begin{thm}\label{T:2}
In a $u$-ring, an $n$-absorbing ideal is strongly $n$-absorbing.
\end{thm}
In order to prove this main theorem, we prove the following lemmas:
\begin{lemma} \label{l:21}
A principal ideal is a u-ideal.
\end{lemma}
\begin{proof}
Say $I\subseteq I_1\cup...\cup I_n$, and $I=(x)$. Then for some $j$, $x\in I_j$ so $I\subseteq I_j$.
\end{proof}
\begin{lemma}\label{l:22}
Let $I$ be an $n$-absorbing ideal of $R$, and $I_1, ..., I_{n+1}$ be $u$-ideals of $R$.
Suppose that the following condition is satisfied:\\
whenever $I_1\cdots I_{n+1}\subseteq I$, and at least $k+1$ of the ideals $I_1,...,I_{n+1}$ are principal, then $I$ contains a product of some $n$ of them.\\
Then the same holds when we replace $k+1$ with $k$. Here $n\geq k\geq 0$.
\end{lemma}
\begin{proof}
Assume the statement is true for $I$ and $k+1$. Let $I_1\cdots I_{n+1}\subseteq I$, where $I_j$ is principal for $j\leq k$. Assume $\prod_{j\leq n}I_j \not\subseteq I$. For all $i\leq n$, let
$$J_i=\{y\in I_{n+1}\ |\quad y\prod_{j\neq n+1, i}I_j \subseteq I\}$$
Then, applying the assumption to the ideals $I_1,\dots,I_n,(y)$ for each $y\in I_{n+1}$ (note that $(y)$ is a $u$-ideal by Lemma \ref{l:21}, and at least $k+1$ of these ideals are principal), and using $\prod_{j\leq n}I_j\not\subseteq I$, we get $I_{n+1}=\cup_{i\leq n}J_i$. Since $I_{n+1}$ is a $u$-ideal, it is equal to some $J_i$. But then
$$\prod_{j\neq i}I_j\subseteq I$$
This concludes the proof.
\end{proof}
\begin{lemma} \label{l:23}
Let $I$ be an $n$-absorbing ideal. If $I_1\cdots I_{n+1}\subseteq I$, where every $I_j$ is a $u$-ideal, then $I$ contains the product of some $n$ of these ideals.
\end{lemma}
\begin{proof}
By the definition of $I$, and Lemma \ref{l:21}, the statement holds when $I_1, ..., I_{n+1}$ are all principal ideals. We use Lemma \ref{l:22} to induct down from the case $k=n$ (where we require all $n+1$ ideals to be principal) to $k=0$ (where we require no ideals to be principal), which is exactly what we want.
\end{proof}
This allows us to prove the main theorem of this article (Theorem \ref{T:2}).
\textbf{Proof of Theorem \ref{T:2}}: Assume the contrary. Then in some $u$-ring, there are ideals $I,I_1, ..., I_{n+1}$ such that $I$ is $n$-absorbing and $I_1\cdots I_{n+1}\subseteq I$, but $I$ does not contain the product of any $n$ of these ideals. However, $R$ is a u-ring, and hence $I_1, ..., I_{n+1}$ are $u$-ideals. Lemma \ref{l:23} then gives a contradiction.
\begin{remark}
We can alter the proof of Lemma \ref{l:23} above slightly to get a more general statement when $n=2$. Indeed, notice that if $I=I_1\cup I_2$, then $I=I_1$ or $I=I_2$ (Lemma \ref{l:1}). We can therefore drop the condition that the ideals be $u$-ideals from Lemma \ref{l:23}, and hence we obtain that, in arbitrary rings, every $2$-absorbing ideal is strongly $2$-absorbing. This is Theorem \ref{T:1}.
\end{remark}
We can use this to give an alternative proof of Corollary 6.9 from \cite{AB}. To achieve that, we first cite the following results.
\begin{proposition}
Every invertible ideal is a $u$-ideal, and a Prüfer domain is a $u$-ring.
\end{proposition}
\begin{proof}
See Theorem 1.5 and Corollary 1.6 from \cite{QB}.
\end{proof}
As a straightforward application of Theorem \ref{T:2}, we recover Anderson-Badawi's related result on Prüfer domains.
\begin{corollary}\label{c:1}
In Prüfer domains, an $n$-absorbing ideal is strongly $n$-absorbing.
\end{corollary}
Lastly, to show that the class of u-rings is strictly larger than the class of Prüfer domains, we prove the following lemma, which provides an example of one such family of u-rings. A more general result, proved in the same way, can be found in \cite{QB}.
\begin{lemma}\label{l:25}
Suppose $R$ is a ring with $\mathbb{Q}\subseteq R$. Then $R$ is a $u$-ring.
\end{lemma}
\begin{proof}
Let $I = I_1\cup \dots\cup I_n$ be an efficient covering of $I$. Take $a_1\in I_1$ with $a_1\not\in I_j$ for $j\neq 1$, and choose $a_2\in I_2$ analogously. Then for all nonzero $k\in \mathbb{Z}$, $a_1+ka_2\not\in I_1\cup I_2$. Since there are infinitely many possibilities for $k$, some $a_1+ka_2$ and $a_1+la_2$ with $k\neq l$ lie in the same $I_j$, $j\neq 1,2$. But then $(k-l)a_2\in I_j$, and since $k-l$ is invertible in $R$ (as $\mathbb{Q}\subseteq R$), $a_2\in I_j$ for some $j\neq 2$, a contradiction.
\end{proof}
The following is an example of a u-ring which is not a domain, and hence not a Prüfer domain.
\begin{example}\label{e:1}
$\mathbb{Q} \times \mathbb{Q}$ is a ring with zero divisors (not a domain) which contains $\mathbb{Q}$ as a subring via the diagonal embedding $q \mapsto (q,q)$. Consequently, by Lemma \ref{l:25}, $\mathbb{Q} \times \mathbb{Q}$ is a u-ring.
\end{example}
\textbf{Acknowledgements}. We are grateful to the Undergraduate Research Opportunities Program at MIT (UROP) as well as to the J-WEL Grant in Higher Education Innovation, “Educational Innovation in Palestine,” for providing and funding this research opportunity. Also, we would like to thank Professor
Haynes Miller for his crucial role in mentoring this project.
|
2,869,038,153,858 | arxiv |
\section*{Acknowledgements}
\noindent We express our gratitude to our colleagues in the CERN
accelerator departments for the excellent performance of the LHC. We
thank the technical and administrative staff at the LHCb
institutes.
We acknowledge support from CERN and from the national agencies:
CAPES, CNPq, FAPERJ and FINEP (Brazil);
MOST and NSFC (China);
CNRS/IN2P3 (France);
BMBF, DFG and MPG (Germany);
INFN (Italy);
NWO (Netherlands);
MNiSW and NCN (Poland);
MEN/IFA (Romania);
MSHE (Russia);
MinECo (Spain);
SNSF and SER (Switzerland);
NASU (Ukraine);
STFC (United Kingdom);
DOE NP and NSF (USA).
We acknowledge the computing resources that are provided by CERN, IN2P3
(France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands),
PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex
LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil),
PL-GRID (Poland) and OSC (USA).
We are indebted to the communities behind the multiple open-source
software packages on which we depend.
Individual groups or members have received support from
AvH Foundation (Germany);
EPLANET, Marie Sk\l{}odowska-Curie Actions and ERC (European Union);
ANR, Labex P2IO and OCEVU, and R\'{e}gion Auvergne-Rh\^{o}ne-Alpes (France);
Key Research Program of Frontier Sciences of CAS, CAS PIFI, and the Thousand Talents Program (China);
RFBR, RSF and Yandex LLC (Russia);
GVA, XuntaGal and GENCAT (Spain);
the Royal Society
and the Leverhulme Trust (United Kingdom).
|
2,869,038,153,859 | arxiv | \section{Inert Doublet Model}
A number of astrophysical observations based on gravitational interactions point to
the existence of dark matter (DM) in the Universe, which can not be described with
the Standard Model.
One of the simplest extensions of the Standard Model which can
provide a dark matter candidate is the Inert Doublet
Model~\cite{Deshpande:1977rw,Cao:2007rm,Barbieri:2006dq}.
In this model, the scalar sector is extended by a so-called inert or
dark doublet $\Phi_D$ (the only field odd under $Z_2$ symmetry) in
addition to the SM Higgs doublet $\Phi_S$. This results in five
physical states after electroweak symmetry breaking: the SM Higgs
boson $h$ and four dark scalars: two neutral, $H$ and $A$, and two
charged, $H^\pm$.
A discrete $Z_2$ symmetry prohibits the inert scalars from interacting
with the SM fermions through Yukawa-type interactions and makes the
lightest neutral scalar, chosen to be $H$ in this work, a good dark
matter candidate.
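For reference, in a common convention (normalizations and sign conventions vary between papers; this display sketches the structure of the model rather than the exact definition used in this analysis), the $Z_2$-symmetric scalar potential reads
\begin{eqnarray*}
V &=& -\tfrac{1}{2}\left[m_{11}^2(\Phi_S^\dagger\Phi_S)+m_{22}^2(\Phi_D^\dagger\Phi_D)\right]
+\tfrac{\lambda_1}{2}(\Phi_S^\dagger\Phi_S)^2+\tfrac{\lambda_2}{2}(\Phi_D^\dagger\Phi_D)^2 \\
&& +\,\lambda_3(\Phi_S^\dagger\Phi_S)(\Phi_D^\dagger\Phi_D)
+\lambda_4(\Phi_S^\dagger\Phi_D)(\Phi_D^\dagger\Phi_S)
+\tfrac{\lambda_5}{2}\left[(\Phi_S^\dagger\Phi_D)^2+\mathrm{h.c.}\right].
\end{eqnarray*}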
Two sets of benchmark points (BPs) in agreement with all
theoretical and experimental constraints were proposed
in~\cite{Kalinowski:2018ylg}, covering different possible signatures
at $e^+e^-$ colliders, with masses of IDM particles extending up to
1 TeV.
Prospects for the discovery of IDM scalars at CLIC running at
380\,GeV, 1.5\,TeV and 3\,TeV were then described in detail
in~\cite{Kalinowski:2018kdn} and summarized in \cite{deBlas:2018mhx},
focusing on leptonic final states.
In this contribution we update these results
and extend them to ILC running at 250\,GeV and 500\,GeV.
We also include new results based on the semi-leptonic channel analysis,
for CLIC running at 1.5\,TeV and 3\,TeV, which supersede the results
presented in \cite{Sokolowska:2019xhe}.
\section{Benchmark scenarios}
Distributions of the scalar masses for the IDM benchmark scenarios
considered in~\cite{Kalinowski:2018ylg} are shown in Fig.~\ref{fig:mass}.
For the considered benchmark scenarios $H$ is the lightest, stable neutral
scalar, which can be much lighter than the other two, $A$ and
$H^\pm$.
On the other hand the mass splitting between $A$ and
$H^\pm$ is limited by existing constraints to about 70\,GeV.
\begin{figure}[b]
\includegraphics[width=0.49\textwidth]{bp_mass_scan_plot.png}
\includegraphics[width=0.49\textwidth]{bp_mass_split_plot.png}
\caption{ Distribution of benchmark candidate points (yellow) in the
(m$_{A}$;m$_{H^\pm}$) plane (left) and in the
(m$_{A} -\,$m$_{H}$;m$_{H^\pm} -\,$m$_{H}$) plane (right),
after all constraints are taken into account, as well as selected
benchmark points (blue) in the same planes~\cite{Kalinowski:2018ylg}.
}\label{fig:mass}
\end{figure}
The following tree-level production processes of inert scalars
at $e^+ e^-$ collisions are considered: neutral scalar
pair-production, $ e^+e^- \to A~H$, and charged scalar
pair-production, $e^+e^-\to H^+H^-$.
The leading-order cross sections for these processes are presented in
Fig.~\ref{fig:cros} for collision energies of 380\,GeV and 1.5\,TeV.
As the couplings of inert scalars to SM bosons are determined by SM parameters,
production cross sections are determined by the scalar masses and
depend very weakly on additional model parameters.
Far from the kinematic threshold, the production cross section,
dominated by the $s$-channel $Z$-boson exchange, decreases as $1/s$
with the collision energy.
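Schematically (an illustrative scaling only, not a substitute for the full calculation), pair production of scalars through $s$-channel gauge-boson exchange proceeds in a $p$-wave, so that
$$
\sigma \;\sim\; \frac{\beta^3}{s}\,,
$$
where $\beta$ is the velocity of the produced scalars in the centre-of-mass frame: the cross section is suppressed as $\beta^3$ near threshold and falls as $1/s$ far above it.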
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{ah380_cs.pdf}
\includegraphics[width=0.49\textwidth]{hphm380_cs.pdf}\\[-1.7cm]
{\hspace*{0.1\textwidth}\scriptsize $\sqrt{s}=$380 GeV}\\[-0.53cm]
{\scriptsize \hspace*{0.59\textwidth} $\sqrt{s}=$380 GeV} \\[0.95cm]
\includegraphics[width=0.49\textwidth]{ah1500_cs.pdf}
\includegraphics[width=0.49\textwidth]{hphm1500_cs.pdf}\\[-1.7cm]
{\hspace*{0.1\textwidth}\scriptsize $\sqrt{s}=$1.5 TeV}\\[-0.53cm]
{\scriptsize \hspace*{0.59\textwidth} $\sqrt{s}=$1.5 TeV} \\[0.5cm]
\caption{ Leading-order cross sections for neutral (left) and charged (right) inert
scalar production, $ e^+e^- \to A~H$ and $e^+e^-\to H^+H^-$, for collision energy
of 380\,GeV (upper plots) and 1.5\,TeV (lower plots). The yellow band represents
all scenarios selected in the model scan~\cite{Kalinowski:2018ylg} while the
blue dots represent the selected benchmark scenarios. Beam energy spectra are not
included.}\label{fig:cros}
\end{figure}
In the scenarios considered in this paper the produced dark scalar $A$
decays to a (real or virtual) $Z$ boson and the (lighter)
neutral scalar $H$, $A \rightarrow Z^{(\star)}H$, while the produced
charged boson $H^\pm$ decays predominantly to a (real or virtual) $W^\pm$ boson
and the neutral scalar $H$, $H^\pm \rightarrow W^{\pm(\star)}H$,
where the DM candidate $H$ escapes detection.
The mono-$Z$ signature of the neutral scalar pair-production can be
considered in the leptonic or hadronic $Z$-boson decay channel.
For the charged scalar pair production, resulting in two $W$ bosons in
the final state, leptonic, semi-leptonic and hadronic final states are possible.
\section{Leptonic channel analysis}
\label{sec:leptonic}
Isolated leptons (electrons and muons) can be identified and
reconstructed with very high efficiency and purity, and the signatures
based solely on lepton measurements are usually considered ``golden
channels'', if the expected statistics of signal events is high
enough.
For purely leptonic final state, the detector acceptance cuts can be
applied on the generator level and other detector effects are expected
to have marginal impact on the outcome of the analysis.
Therefore, in~\cite{Kalinowski:2018kdn} we focused on
leptonic decays of $Z$ and $W^\pm$,
leading to a signature of leptons and missing transverse energy.
We considered the $\mu^+\mu^-$ final state as a
signature of the neutral scalar pair-production, while
the different flavour lepton pairs, $\mu^+ e^-$ and $e^+ \mu^-$, were
considered as a signature for production of charged inert scalars, see
Fig.~\ref{fig:diag}.
\begin{figure}[t]
\centerline{\includegraphics[width=0.75\textwidth]{idm_diagrams.pdf}}
\caption{Signal Feynman diagrams for the considered production and
decay process for:
(left) neutral scalar production, $e^+e^- \to H A \to H H l l$,
and
(right) charged scalar production, $e^+e^- \to H^+ H^- \to H H l l' \nu \nu'$.
}\label{fig:diag}
\end{figure}
Signal and background samples were generated with \textsc{Whizard}\xspace
2.2.8~\cite{Kilian:2007gr,Ohl:2000hq}. Generator level cuts reflecting detector
acceptance for leptons and ISR photons were applied.
For the neutral inert scalar pair production, $e^+ e^- \to AH$,
the invariant mass of the lepton pair from (virtual) $Z$ decay depends on the
mass splitting between $A$ and $H$ and is relatively small,
$M_{\mu\mu} \le M_{Z}$.
We apply pre-selection cuts on the invariant mass and the
longitudinal boost of the lepton pair to suppress the dominant
background process $e^+ e^- \to \mu^+ \mu^- (\gamma)$, see
Fig.~\ref{fig:presel}.
\begin{figure}[tb]
\includegraphics[width=0.49\textwidth]{AHpreselection_BP1.png}
\includegraphics[width=0.49\textwidth]{AHpreselection_BP9.png}
\caption{
Distribution of the lepton pair invariant mass, M$_{\mu\mu}$,
as a function of the lepton pair longitudinal momentum,
P$_\textrm{z}^{\mu\mu}$, for IDM signal (green points) and Standard Model
background (red points). Signal events were simulated for BP1
scenario (left) and BP9 scenario (right), for centre-of-mass energy
of 250\,GeV. The blue box indicates the cut used to remove the
dominant background from $e^+e^- \to \mu^+\mu^- (\gamma)$ process.
}\label{fig:presel}
\end{figure}
Distributions of selected kinematic variables describing the
leptonic final state in $AH$ analysis, after the pre-selection cuts,
are presented in Fig.~\ref{fig:dist}.
\begin{figure}[tb]
\includegraphics[width=0.49\textwidth]{plot_ah_250_Epair.pdf}
\includegraphics[width=0.49\textwidth]{plot_ah_250_pTpair.pdf}
\caption{
Distributions of the kinematic variables describing the
leptonic final state in $AH$ analysis: lepton pair
energy, E$_{\mu\mu}$ and total transverse momentum, p$^{\mu\mu}_\textrm{T}$.
Expected distributions for representative benchmarks BP1 (red
histogram), BP2 (green) and BP7 (blue) are compared with expected
background (black histogram) simulated for 1\,ab$^{-1}$ collected at 250\,GeV.
}\label{fig:dist}
\end{figure}
Presented in Fig.~\ref{fig:bdt}\,(left) is the lepton pair invariant mass
distribution after pre-selection cuts and additional selection based
on lepton pair energy, transverse momentum, production angle (polar
angle of the $Z$ boson) and the difference of the lepton azimuthal angles.
\begin{figure}[tb]
\includegraphics[width=0.49\textwidth]{plot_ah_250_Mpair_cuts.pdf}
\includegraphics[width=0.49\textwidth]{bdt_response_1_250_ah_new.pdf}
\caption{
Left: distribution of the lepton pair invariant mass, M$_{\mu\mu}$,
for BP1 (red histogram), BP2 (green) and BP7 (blue) signal
scenarios, compared with the expected Standard Model background (black
histogram), after event selection cuts (see text for details).
Right: response distributions of the BDT classifiers used for the
selection of $AH$ production events, for BP1.
Samples are normalised to 1\,ab$^{-1}$ collected at 250\,GeV.
}\label{fig:bdt}
\end{figure}
Already with this simple, cut-based approach, the IDM signal would
result in a visible excess in the invariant mass distribution for
a number of benchmark scenarios.
For the final selection of signal-like events, a multivariate analysis
is performed using a Boosted Decision Tree (BDT) classifier
\cite{Hocker:2007ht} with 8 input variables~\cite{Kalinowski:2018kdn}.
The standard approach in this type of analysis is to train BDT to
separate the considered signal scenario from the background events.
However, this approach, also used in our previous
study~\cite{Kalinowski:2018kdn}, is only valid if we have some
initial estimates of the model parameters, the scalar masses in particular.
For the results presented here we modified our approach: we train the
BDTs using all benchmark scenarios accessible at a given energy from a
given category (separately for virtual and real $Z$ in the final
state), except for the one we look for.
This procedure, which we consider a more general
(``scenario-independent'') approach, results in
expected observation significances lower by up to 20\% compared to
the ``educated-selection'' results.
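As a minimal illustration of this ``scenario-independent'' training scheme (a sketch only: the actual analysis uses the TMVA BDT with the 8 input variables of~\cite{Kalinowski:2018kdn}, and the data-loading conventions below are hypothetical stand-ins), one can train the classifier for each benchmark on all other benchmarks of the same category:
\begin{verbatim}
# Sketch of the leave-one-benchmark-out ("scenario-independent")
# BDT training. `benchmarks` maps a scenario name to an
# (n_events, 8) array of the eight kinematic input variables;
# `background` is an (m_events, 8) array of SM events.
# Both are hypothetical stand-ins for the simulated samples.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_scenario_independent_bdt(benchmarks, background, target):
    """Train on all accessible benchmarks except `target`,
    against the SM background."""
    signal = np.vstack([arr for name, arr in benchmarks.items()
                        if name != target])
    X = np.vstack([signal, background])
    y = np.concatenate([np.ones(len(signal)),
                        np.zeros(len(background))])
    clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    clf.fit(X, y)
    return clf

# Evaluate on the scenario under study:
# clf = train_scenario_independent_bdt(benchmarks, background, "BP1")
# scores = clf.predict_proba(benchmarks["BP1"])[:, 1]
\end{verbatim}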
Response distributions of the BDT classifier used for the
selection of $AH$ production events for the benchmark scenario BP1 at
250\,GeV are presented in Fig.~\ref{fig:bdt}\,(right).
Expected significance of the deviations from the Standard Model
predictions, assuming 1\,ab$^{-1}$ of data collected at
centre-of-mass energy of 250\,GeV, 380\,GeV and 500\,GeV, are shown in Figs.~\ref{fig:ahsig}.
Only scenarios resulting in significances above 5$\sigma$ are
shown.
\begin{figure}[tb]
\centerline{\includegraphics[width=0.8\textwidth]{sensitivity_ah_new_bdt2.pdf}}
\caption{Significance of the deviations from the Standard Model
predictions, expected for 1\,ab$^{-1}$
of data collected at centre-of-mass energy of 250\,GeV, 380\,GeV and
500\,GeV, for events with two muons in the final state, for all considered
low mass benchmark scenarios. Only significance above 5$\sigma$ is shown.
}\label{fig:ahsig}
\end{figure}
The selection of $H^+H^-$ production events is more challenging as the
two leptons in the final state no longer originate from a single (on-
or off-shell) intermediate state.
No pre-selection cuts are applied (except for the detector acceptance
cuts on the generator level), but we focus on electron-muon pairs in
the final state, avoiding large SM background from the direct lepton pair production.
In Fig.~\ref{fig:bdt2} (left) the distribution of the lepton pair
invariant mass, M$_{e\mu}$, for three benchmark scenarios (BP1, BP3
and BP6) is compared with Standard Model expectations for centre-of-mass energy of 380\,GeV.
The expected background cross section for the considered final state
is over two orders of magnitude higher than for the considered
benchmark points.
However, kinematic distributions are very different, as two massive
scalars are produced in the signal case, reducing the kinematic space
available for lepton pair production,
allowing for efficient selection of a signal-enhanced sample of events
using the multivariate analysis. The same procedure and the same set
of input variables are used as for the $AH$ analysis.
\begin{figure}[tb]
\includegraphics[width=0.49\textwidth]{clic_hphm_380_Mpair.pdf}
\includegraphics[width=0.49\textwidth]{bdt_response_1_380_hphm_6.pdf}
\caption{
Left: distribution of the lepton pair invariant mass, M$_{e\mu}$,
for BP1 (red histogram), BP3 (green) and BP6 (blue) signal
scenarios, compared with the expected Standard Model background (black
histogram).
Right: response distributions of the BDT classifiers used for the
selection of $H^+H^-$ production events, for BP1.
Samples are normalised to 1\,ab$^{-1}$ collected at 380\,GeV.
}\label{fig:bdt2}
\end{figure}
Response distributions of the BDT classifier used for the
selection of $H^+H^-$ production events for the benchmark scenario BP1
at 380\,GeV are presented in Fig.~\ref{fig:bdt2}\,(right).
In Fig.~\ref{fig:hphmsig} we show the expected significance of the
deviations from the Standard Model predictions for 1\,ab$^{-1}$ of
data collected at 250\,GeV, 380\,GeV and 500\,GeV,
for scenarios resulting in the significances above 5$\sigma$.
\begin{figure}[tb]
\centerline{\includegraphics[width=0.8\textwidth]{sensitivity_hphm_new_bdt2.pdf}}
\caption{Significance of the deviations from the Standard Model
predictions, expected for 1\,ab$^{-1}$
of data collected at centre-of-mass energy of 250\,GeV, 380\,GeV and
500\,GeV, for events with an electron and a muon in the final state, for all considered
low mass benchmark scenarios. Only significance above 5$\sigma$ is shown.
}\label{fig:hphmsig}
\end{figure}
We found that for scenarios accessible at a given energy, up to
500\,GeV, high significance can be expected for the leptonic signature at
future $e^+e^-$ colliders.
The significance is mainly related to the inert scalar production cross
sections.
We display the dependence of the expected significance on the inert
scalar masses, for events with two muons and for events with an
electron-muon pair in the final state, in Fig.~\ref{fig:lesig}.
\begin{figure}[tb]
\hspace{0.03\textwidth}
\includegraphics[width=0.47\textwidth]{signif_bdt2_ah_new.pdf}
\includegraphics[width=0.47\textwidth]{signif_bdt2_hphm_new.pdf}
\caption{Significance of the deviations from the Standard Model
predictions expected for 1\,ab$^{-1}$ of data collected at
centre-of-mass energy of 250\,GeV, 380\,GeV and 500\,GeV, for:
(left) events with two muons in the final state ($\mu^+\mu^-$)
as a function of the sum of neutral inert scalar masses and
(right) events with an electron and a muon in the final state
($e^+\mu^-$ or $e^-\mu^+$) as a function of twice the charged scalar
mass.
}\label{fig:lesig}
\end{figure}
With 1\,ab$^{-1}$ of integrated luminosity collected at 250\,GeV,
380\,GeV and 500\,GeV, the expected discovery reach of $e^+e^-$
colliders extends up to neutral scalar mass sum of 220\,GeV, 300\,GeV
and 330\,GeV, respectively, and for charged scalar pair-production up
to charged scalar masses of 110\,GeV, 160\,GeV and 200\,GeV.
\begin{figure}[tb]
\hspace{0.03\textwidth}
\includegraphics[width=0.47\textwidth]{signif_bdt2_ah_mass.pdf}
\includegraphics[width=0.47\textwidth]{signif_bdt2_hphm_mass.pdf}
\caption{As in Fig.~\ref{fig:lesig} but for expected CLIC running
scenario: 1\,ab$^{-1}$ of data collected at 380\,GeV,
2.5\,ab$^{-1}$ at 1.5\,TeV and 5\,ab$^{-1}$ at 3\,TeV.
}\label{fig:hesig}
\end{figure}
For collision energies far above the production threshold, the inert scalar
pair-production cross section decreases rapidly with the collision energy
(see Fig.~\ref{fig:cros}).
For CLIC running at 1.5\,TeV, only a moderate increase in discovery
reach is expected for the leptonic channel, even with 2.5\,ab$^{-1}$
of data, see Fig.~\ref{fig:hesig}.
The neutral scalar pair-production can be discovered in the leptonic
channel for $m_A + m_H < 450$\,GeV
and the charged scalar production for $m_{H^\pm} < 500$\,GeV.
Marginal improvement is expected when running at 3 TeV.
The significance is mainly driven by the signal production cross section and
is approximately proportional to the square-root of the integrated luminosity.
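In the Gaussian counting approximation this scaling follows directly from
$$
Z \;\approx\; \frac{S}{\sqrt{B}} \;=\; \frac{\sigma_S\, L}{\sqrt{\sigma_B\, L}} \;=\; \sigma_S\,\sqrt{\frac{L}{\sigma_B}} \;\propto\; \sqrt{L}\,,
$$
where $S$ and $B$ denote the expected numbers of signal and background events after selection and $L$ is the integrated luminosity.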
Shown in Fig.~\ref{fig:hesig2} are the significance results scaled to
the integrated luminosity of 1\,ab$^{-1}$, presented as a function of
the signal production cross section.
For the $AH$ channel, which leads to $\mu^+\mu^-$ final state, a
universal linear dependence on the signal cross section is observed
which does not seem to depend on the running energy. Significant
(above $5\sigma$) observation is possible for cross sections
roughly larger than 0.5 fb.
For the $H^+H^-$ channel, with the $e^\pm \mu^\mp$ final state, low-energy
running seems to give better sensitivity to signal scenarios for the
same cross section.
\begin{figure}[tb]
\hspace{0.03\textwidth}
\includegraphics[width=0.47\textwidth]{sig1000_ah_ene.pdf}
\includegraphics[width=0.47\textwidth]{sig1000_hphm_log.pdf}
\caption{Significance of the deviations from the Standard Model
predictions expected at different CLIC running stages, assuming
the same integrated luminosity of 1 ab$^{-1}$, as a function of
the signal cross section in the considered channel, for:
(left) events with two muons in the final state ($\mu^+\mu^-$)
and
(right) events with an electron and a muon in the final state
($e^+\mu^-$ or $e^-\mu^+$).
}\label{fig:hesig2}
\end{figure}
\section{Semi-leptonic channel}
For charged scalar pair-production, significant improvement of the discovery reach
for scenarios with high scalar masses can be achieved using the
semi-leptonic final state, see Fig.~\ref{fig:diag2}.
As the signal cross section increases by an order of magnitude and
a similar scaling is expected for the background processes (dominated
by the $W^+W^-$ production), the significance of the
observation in the semi-leptonic channel should increase by a factor
of about 3.
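In the simple $S/\sqrt{B}$ estimate, scaling both the signal and the background event counts by the branching-ratio gain of about 10 gives
$$
\frac{S}{\sqrt{B}} \;\longrightarrow\; \frac{10\,S}{\sqrt{10\,B}} \;=\; \sqrt{10}\,\frac{S}{\sqrt{B}} \;\approx\; 3.2\,\frac{S}{\sqrt{B}}\,.
$$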
Additional improvement is possible due to kinematic constraints which
can be imposed on the hadronic final state (corresponding to one of
the produced $W$ bosons).
However, the detector response has to be taken into account in more
detail.
\begin{figure}[tb]
\centerline{\includegraphics[width=0.4\textwidth]{slep_diagram.png}}
\caption{Signal Feynman diagram for the charged scalar pair-production in semi-leptonic decay channel:
$e^+e^- \to H^+ H^- \to H H j j l \nu$.
}\label{fig:diag2}
\end{figure}
Results presented in the following are based on signal and
background event samples generated with \textsc{Whizard}\xspace
2.7.0~\cite{Kilian:2007gr,Ohl:2000hq}, taking into account the beam
energy profile expected for CLIC running at 1.5\,TeV and 3\,TeV.
We assume running with -80\% electron beam polarisation and the
corresponding integrated luminosity of 2\,ab$^{-1}$ and 4\,ab$^{-1}$
respectively.
For a realistic simulation of the CLIC detector response, the fast simulation
framework \textsc{Delphes}\xspace~\cite{deFavereau:2013fsa} was used, with control
cards prepared for the new detector model CLICdet~\cite{Leogrande:2019qbe}.
Selected for the analysis are events with exactly one isolated lepton
(electron or muon) and two exclusive jets reconstructed with the VLC
algorithm\footnote{The VLC algorithm is run with parameter $R=1$ at 1.5\,TeV
and $R=1.2$ at 3\,TeV, and with $\beta=\gamma=1$.}
\cite{Boronat:2016tgd}.
Also rejected are events with an isolated photon with energy above 10\,GeV,
or with the energy sum of the energy-flow objects outside the two
reconstructed jets exceeding 20\,GeV.
In Fig.~\ref{fig:slepdist}, distributions of the jet pair invariant mass,
$M_{jj}$, and the sum of jet energies, $E_{j_1} + E_{j_2}$, for the
two signal scenarios, are compared with the expected SM background for
CLIC running at 3\,TeV.
\begin{figure}[tb]
\includegraphics[width=0.49\textwidth]{jklamka_mjj_3tev.pdf}
\includegraphics[width=0.49\textwidth]{jklamka_ejsum_3tev.pdf}
\caption{
Distributions of the kinematic variables describing the
semi-leptonic final state in $H^+ H^-$ analysis: jet pair invariant mass,
$M_{jj}$, and the sum of jet energies, $E_{j_1} + E_{j_2}$.
Expected distributions for benchmark scenarios BP23 (blue
histogram) and HP15 (red) are compared with expected
background (black histogram) simulated for 4\,ab$^{-1}$ of data
collected at 3\,TeV with -80\% electron beam polarisation.
}\label{fig:slepdist}
\end{figure}
The analysis procedure is similar to the one used for the leptonic
channel.
The huge background, coming mainly from $W^+W^-$ and $ZZ$
pair-production, is first suppressed by the pre-selection cuts based on
lepton and jet kinematics.
Then a multivariate analysis is performed using the BDT classifier
with 11 input variables:
total energy in an event, missing transverse momentum, missing
(recoil) mass; energy, transverse momentum and scattering angle of the
isolated lepton; energy, invariant mass and emission angle of the jet
pair; reconstructed angles of the hadronic $W$ decay.
As before, the BDT is trained separately for scenarios with virtual
$W^\pm$ production (when the difference of $H^\pm$ and $H$ masses is
smaller than the mass of $W^\pm$) and with real $W^\pm$ production
(larger mass differences).
Shown in Fig.~\ref{fig:sigslep} is the significance for observing
deviations from the Standard Model predictions.
Results based on the semi-leptonic channel analysis for CLIC running
at 1.5\,TeV and 3\,TeV are compared with the leptonic channel
sensitivity presented in Sec.~\ref{sec:leptonic}.
A large increase of the signal significance is observed, up to a factor
of 6, and the discovery reach for charged scalar pair-production is
extended up to $m_{H^\pm} \sim$ 1\,TeV.
\begin{figure}[tb]
\centerline{\includegraphics[width=0.55\textwidth]{signif_bdt2_hphm_mass_slep_new.pdf}}
\caption{Significance of the deviations from the Standard Model predictions in the
leptonic channel (open circles) and the semi-leptonic channel (filled squares)
as a function of twice the charged scalar mass for expected CLIC running
scenario: 1\,ab$^{-1}$ of data collected at 380\,GeV,
2.5\,ab$^{-1}$ at 1.5\,TeV and 5\,ab$^{-1}$ at 3\,TeV.
}\label{fig:sigslep}
\end{figure}
\section{Conclusions}
The Inert Doublet Model is one of the simplest SM extensions
providing a natural candidate for dark matter.
Light IDM scenarios, with scalar masses in the $\mathcal{O}(100\,\textrm{GeV})$
range, are still not excluded by the current experimental and
theoretical constraints.
Low-mass IDM scenarios can be observed with high significance in the
di-lepton channels already at an $e^+e^-$ collider with 250\,GeV
centre-of-mass energy.
Discovery reach increases for higher $\sqrt{s}$ and significant
improvement in the discovery reach is observed when considering the
semi-leptonic final state.
Full simulation study of the charged scalar pair-production in the
semi-leptonic decay channel is ongoing.
\subsection*{Acknowledgements}
This contribution was supported by the National Science Centre, Poland, the
OPUS project under contract UMO-2017/25/B/ST2/00496 (2018-2021) and
the HARMONIA project under contract UMO-2015/18/M/ST2/00518
(2016-2019), and by the European Regional Development Fund - the
Competitiveness and Cohesion Operational Programme (KK.01.1.1.06 - RBI
TWIN SIN).
\clearpage
|
2,869,038,153,860 | arxiv | \section{Introduction}
In the present paper, a more general notion of weight, called
\textit{admissible}, is introduced. For such types of weights, we
study weighted strong laws of large numbers (SLLN) on
vector-valued $L_p$-spaces. If one considers particular cases of
the introduced weights, we recover the earlier known results (see,
for example, \cite{CL2003,C}).
We point out that the investigation of the convergence of weighted
strong laws of large numbers for independent identically
distributed (iid) variables was started in \cite{JOP}.
Namely, for a sequence $\{w_k\}$ of positive numbers (weights)
with partial sums $W_n=\sum_{k=1}^n w_k$ (which tend to $\infty$),
they found necessary and sufficient conditions to ensure that for
every iid sequence $\{\xi_k\}$ with finite expectations the
weighted averages
$$
\frac{1}{W_n}\sum_{k=1}^n w_k\xi_k
$$
converge almost everywhere (a.e.). In \cite{LinWeb}, for a linear
contraction $T:L_p\to L_p$ the a.e. convergence or $L_p$-norm convergence of the weighted averages
$$
\frac{1}{W_n}\sum_{k=1}^n w_k\ T^k f, \ \ f\in L_p(\mu)
$$
have been studied. Moreover, weighted strong laws of large numbers
(SLLN) were established for centered random variables. There are
many results devoted to the investigations of SLLN \cite{As2,As3}.
On the other hand, the investigations of SLLN are closely related
to the a.e. convergence of the one-sided ergodic Hilbert
transform
$$
\sum_{n=1}^\infty \frac{T^nf}{n}, \ \ f\in L_p
$$
of a contraction $T: L_p\to L_p$ (see \cite{H}). Depending on
$p$, many papers are devoted to the convergence (i.e. a.e. or norm
convergence) of the one-sided Hilbert transform (see
\cite{AC,AsLin,C,Cot,CL09,CCL,Ga,LOT}).
In \cite{BMS,CL2003,C}, relations between weighted SLLN and weighted one-sided ergodic Hilbert transforms have been studied.
Here, by the weighted one-sided ergodic Hilbert transform, we mean the series
$$
\sum_{n=1}^\infty \frac{a_nT^nf}{n^{\gamma}},
$$
where $\{a_n\}$ is some bounded sequence and $\gamma>0$. The case
$a_n=\lambda^n$ (where $|\lambda|=1$) and $\gamma=1$ has been considered in
\cite{CCC}. Other cases have been investigated in \cite{CL05} by
means of extensions of Menchoff-Rademacher theorem. These types of
research also are closely related to the investigations of ergodic
series, i.e.
$$
\sum_{n=1}^\infty a_n f(T^nx)
$$
where $T$ is a measure-preserving transformation of $(\Omega,\mathcal{F},\mu)$ (see \cite{A,BeW,H,K}). Recently, in \cite{Fan17}
it was proved that the last series converges a.e. if and only if
$\sum_{n=1}^\infty |a_n|^2<\infty$.
In the present paper, we obtain the a.e. convergence of weighted
one-sided ergodic Hilbert transforms, i.e.
$$
\sum_{k=1}^\infty \frac{a_kT^{n_k}f}{W_k},
$$
which is clearly more general than the known ones. It is stressed
that the obtained results extend and provide a general framework
for all known theorems in this direction. Furthermore, as an
application of the proved results, the a.e. convergence of random
modulated weighted ergodic series is obtained, which is new even in
the classical setting. We hope that the results of this paper open
new insights into the theory of random ergodic Hilbert transforms.
\section{Admissible weights}
Let $(\Omega,\mathcal{F},\mu)$ be a probability space,
and let $X$ be a Banach space (over the real or
complex numbers). We denote the Lebesgue-Bochner space
$L_p(\Omega,\mathcal{F},\mu;X)$ ($p\geq 1$) by $L_p(\mu,X)$, and by
$L_p(\mu)$ when $X$ is a scalar field, if there is no chance of
confusing the underlying measurable space. By $\ell_p$ we denote
the standard $p$-summable sequence space. For the definitions and
properties of these spaces we refer to \cite{DS,DU}.
An increasing sequence $\{G_n\}$ is said to be a \textit{weight} if $G_1\geq 1$ and $G_n\to\infty$ as $n\to\infty$.
A weight $\{W_n\}$ is called \textit{weak $p$-admissible} ($p>1$) w.r.t. to a weight $\{G_n\}$ if one can find an increasing sequence $\{n_k\}\subset{\mathbb N}$ and a sequence $\{\xi_n\}$ such that
\begin{eqnarray}\label{W1}
&&\sum_{k=1}^\infty\bigg(\frac{G_{n_k}}{W_{n_k}}\bigg)^p<\infty\\[2mm]
\label{W2}
&&\sum_{k=1}^\infty\bigg(\frac{\xi_k}{W_{n_k}}\bigg)^p<\infty
\end{eqnarray}
\begin{rem} We note that in this definition it is not necessary that the sequence $\{\xi_k\}$ equals $\{G_{n_k}\}$.
Below, we provide an example which clarifies this issue, and it also gives some way to construct weak $p$-admissible weights.
\end{rem}
\begin{exm}\label{E0} Let $p>1$ and assume that $\{G_n\}$ be a weight and $\{n_k\}\subset{\mathbb N}$ be an increasing sequence such that
\begin{equation}\label{E01}
\sum_{k=1}^\infty\frac{1}{n_k}<\infty, \ \ \ \sum_{k=1}^\infty\frac{1}{(G_{n_k})^{p}}<\infty, \ \ p>1
\end{equation}
Define $W_n=n^{1/p}G_n$. Then, due to \eqref{E01}
$$
\sum_{k=1}^\infty\bigg(\frac{G_{n_k}}{W_{n_k}}\bigg)^p=\sum_{k=1}^\infty\frac{1}{n_k}<\infty
$$
Notice that
$$
\sum_{n=1}^\infty\bigg(\frac{G_{n}}{W_{n}}\bigg)^p=\infty
$$
Now choose $\xi_k=n_k^{1/p}$ which is clearly different from $\{G_{n_k}\}$. Then
$$
\sum_{k=1}^\infty\bigg(\frac{\xi_k}{W_{n_k}}\bigg)^p=\sum_{k=1}^\infty\frac{1}{G_{n_k}^p}<\infty.
$$
Hence, the weight $\{W_n\}$ is weak $p$-admissible w.r.t. $\{G_n\}$.
Now let us consider a particular case: define $G_n=(\ln n)^{\beta+1/p}(\ln\ln n)^{\gamma}$ with $\beta>0$, $\gamma\geq 1$. Then for $n_k=k^k$, condition \eqref{E01} is satisfied; hence, $W_n=n^{1/p}(\ln n)^{\beta+1/p}(\ln\ln n)^{\gamma}$ is a weak $p$-admissible weight w.r.t. $\{G_n\}$.
\end{exm}
A weight $\{W_n\}$ is called \textit{p-admissible} w.r.t. to a weight $\{G_n\}$ if one can find an increasing sequence $\{n_k\}\subset{\mathbb N}$ such that
\begin{eqnarray}\label{W3}
&&\sum_{k=1}^\infty\bigg(\frac{G_{n_k}}{W_{n_k}}\bigg)^p<\infty\\[2mm]
\label{W4}
&&\sum_{k=1}^\infty\bigg(\frac{n_{k+1}-n_k}{W_{n_k}}\bigg)^p<\infty
\end{eqnarray}
Clearly, if we define $\xi_k=n_{k+1}-n_k$, then one obviously gets
that any $p$-admissible weight is weak $p$-admissible. The set of
all $p$-admissible weights w.r.t. $\{G_n\}$ is denoted by
$\mathcal{W}_p(G_n)$. Examples of $p$-admissible weights are provided in
the Appendix.
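For instance (a simple illustration, easily checked from \eqref{W3} and \eqref{W4}): for $p>1$ the weight $W_n=n$ belongs to $\mathcal{W}_p(G_n)$ for $G_n=n^{\alpha}$ whenever $0<\alpha<1-\frac{1}{2p}$. Indeed, taking $n_k=k^2$ one has
$$
\sum_{k=1}^\infty\bigg(\frac{G_{n_k}}{W_{n_k}}\bigg)^p=\sum_{k=1}^\infty\frac{1}{k^{2p(1-\alpha)}}<\infty,
\qquad
\sum_{k=1}^\infty\bigg(\frac{n_{k+1}-n_k}{W_{n_k}}\bigg)^p=\sum_{k=1}^\infty\bigg(\frac{2k+1}{k^{2}}\bigg)^p<\infty,
$$
while $\sum_{k=1}^\infty G_k/W_k=\sum_{k=1}^\infty k^{\alpha-1}=\infty$.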
\section{Weighted strong law of large numbers}
Let $\{W_n\}$ be a weak $p$-admissible weight w.r.t. $\{G_n\}$ with associated sequences
$\{n_k\}$ and $\{\xi_k\}$. A sequence $\{f_n\}\subset L_p(\mu,X)$ is called a \textit{$(W_n)$-sequence} if there is $C>0$ such that for any $m\in{\mathbb N}$
\begin{eqnarray}\label{WE1}
\bigg(\int\max\limits_{n_m< n\leq n_{m+1}}\bigg\|\sum_{k=n_m+1}^n f_k\bigg\|_X^pd\mu\bigg)^{1/p}\leq C\xi_m
\end{eqnarray}
\begin{rem}\label{r31} We notice that if $\{f_n\}\subset L_p(\mu,X)$ with $A=\sup_n\|f_n\|_p<\infty$, and $\{W_n\}$ is $p$-admissible, then $\{f_n\}$ is $(W_n)$-sequence. Indeed, we get
\begin{eqnarray*}
\int\max\limits_{n_m< n\leq n_{m+1}}\bigg\|\sum_{k=n_m+1}^n f_k\bigg\|_X^p d\mu&\leq &\int\max\limits_{n_m< n\leq n_{m+1}}\bigg(\sum_{k=n_m+1}^n\|f_k\|_X\bigg)^pd\mu\\[2mm]
&\leq &\bigg\|\sum_{k=n_m+1}^{n_{m+1}}\|f_k\|_X\bigg\|_p^p\\[2mm]
&\leq &\bigg(\sum_{k=n_m+1}^{n_{m+1}}\|f_k\|_p\bigg)^p\\[2mm]
&\leq& A^p (n_{m+1}-n_{m})^p
\end{eqnarray*}
this yields the required assertion.
\end{rem}
\begin{thm}\label{T1} Let $p\geq 1$ and $\{G_n\}$ be a weight. Assume that $\{W_n\}$ is a weak $p$-admissible weight w.r.t. $\{G_n\}$, and
$\{f_n\}\subset L_p(\mu,X)$ is a $(W_n)$-sequence such that
\begin{equation}\label{T11}
\sup_n\bigg\|\frac{1}{G_n}\sum_{k=1}^n f_k\bigg\|_p=K<\infty.
\end{equation}
Then $\frac{1}{W_n}\sum_{k=1}^n f_k$ converges a.e. Furthermore,
\begin{equation}\label{1T11}
\sup_n\frac{1}{W_n}\bigg\|\sum_{k=1}^n f_k\bigg\|_X\in L_p(\mu).
\end{equation}
\end{thm}
\begin{proof} Since $\{W_n\}$ is a weak $p$-admissible weight w.r.t. $\{G_n\}$, there exist an increasing sequence $\{n_m\}\subset{\mathbb N}$ and a sequence $\{\xi_m\}$ such that \eqref{W1} and \eqref{W2} hold.
Now due to \eqref{T11} we have
\begin{eqnarray}\label{T12}
\bigg\|\frac{1}{W_{n_m}}\sum_{k=1}^{n_m} f_k\bigg\|_p&=&\frac{G_{n_m}}{W_{n_m}}\bigg\|\frac{1}{G_{n_m}}\sum_{k=1}^{n_m} f_k\bigg\|_p\\[2mm]
&\leq& K \frac{G_{n_m}}{W_{n_m}}
\end{eqnarray}
Hence, according to \eqref{W1}, the last one implies
\begin{eqnarray*}
\int\sum_{m=1}^\infty\bigg\|\frac{1}{W_{n_m}}\sum_{k=1}^{n_m} f_k\bigg\|^p_X d\mu &=&
\sum_{m=1}^\infty\bigg\|\frac{1}{W_{n_m}}\sum_{k=1}^{n_m} f_k\bigg\|_p^p\\[2mm]
&\leq& \sum_{m=1}^\infty K^p \bigg(\frac{G_{n_m}}{W_{n_m}}\bigg)^p<\infty.
\end{eqnarray*}
Hence,
$$
\sum_{m=1}^\infty\bigg\|\frac{1}{W_{n_m}}\sum_{k=1}^{n_m} f_k\bigg\|^p_X<
\infty \ \ \textrm{a.e.}
$$
so,
\begin{equation}\label{T13}
\bigg\|\frac{1}{W_{n_m}}\sum_{k=1}^{n_m} f_k\bigg\|_X \to 0 \ \ \ \textrm{a.e.}
\end{equation}
Now, for $ n_m< n\leq n_{m+1}$,
\begin{eqnarray*}
\int\max_{ n_m<n\leq n_{m+1}}\bigg\|\frac{1}{W_{n}}\sum_{k=1}^{n} f_k-\frac{1}{W_{n}}\sum_{k=1}^{n_m} f_k\bigg\|_X^pd\mu&\leq&
\frac{1}{W^p_{n_m}}\int\max_{ n_m< n\leq n_{m+1}}\bigg\|\sum_{k=n_m+1}^n f_k\bigg\|_X^pd\mu\\[2mm]
&\leq& C^p\bigg(\frac{\xi_m}{W_{n_m}}\bigg)^p
\end{eqnarray*}
Hence, due to \eqref{W2},
\begin{eqnarray}\label{T14}
\max_{ n_m<n\leq n_{m+1}}\bigg\|\frac{1}{W_{n}}\sum_{k=1}^{n} f_k-\frac{1}{W_{n}}\sum_{k=1}^{n_m} f_k\bigg\|_X\to 0
\ \ \textrm{a.e.}
\end{eqnarray}
On the other hand, by \eqref{T13}
$$
\bigg\|\frac{1}{W_{n}}\sum_{k=1}^{n_m} f_k\bigg\|_X\leq\frac{1}{W_{n_m}}\bigg\|\sum_{k=1}^{n_m} f_k\bigg\|_X\to 0
$$
Therefore, \eqref{T14} implies
$$
\bigg\|\frac{1}{W_{n}}\sum_{k=1}^{n} f_k\bigg\|_X\to 0 \ \ \ \textrm{a.e.}
$$
Now, let us denote $S_n=\sum\limits_{k=1}^{n} f_k$.
Then
\begin{eqnarray*}
\int\sup_m\bigg\|\frac{S_{n_m}}{W_{n_m}}\bigg\|_X^pd\mu&\leq& \int\sum_{m=1}^\infty\bigg\|\frac{S_{n_m}}{W_{n_m}}\bigg\|_X^pd\mu\\[2mm]
&\leq&\sum_{m=1}^\infty K^p \bigg(\frac{G_{n_m}}{W_{n_m}}\bigg)^p<\infty,
\end{eqnarray*}
hence,
$$
\sup_m\bigg\|\frac{S_{n_m}}{W_{n_m}}\bigg\|_X\in L_p(\mu)
$$
For $ n_m< n\leq n_{m+1}$,
\begin{eqnarray*}
\bigg\|\frac{S_{n}}{W_{n}}\bigg\|_X\leq \bigg\|\frac{S_{n_m}}{W_{n_m}}\bigg\|_X+\frac{1}{W_{n_m}}\bigg\|\sum_{k=n_{m}+1}^n f_k\bigg\|_X
\end{eqnarray*}
Consequently,
\begin{eqnarray*}
\bigg\|\sup_n\frac{\|S_{n}\|_X}{W_{n}}\bigg\|_p\leq \bigg\|\sup_m\frac{\|S_{n_m}\|_X}{W_{n_m}}\bigg\|_p+\bigg\|\sup_{m}\sup_{n_m<n\leq n_{m+1}}\frac{1}{W_{n_m}}\bigg\|\sum_{k=n_{m}+1}^n f_k\bigg\|_X\bigg\|_p
\end{eqnarray*}
The first term of RHS of the last expression is finite, therefore, we show the finiteness of the second term. Indeed,
\begin{eqnarray*}
\bigg\|\sup_{m}\sup_{n_m<n\leq n_{m+1}}\frac{1}{W_{n_m}}\bigg\|\sum_{k=n_{m}+1}^n f_k\bigg\|_X\bigg\|_p^p&\leq &
\int\sum_{m\geq 1}\frac{1}{W^p_{n_m}}\sup_{n_m<n\leq n_{m+1}}\bigg\|\sum_{k=n_{m}+1}^n f_k\bigg\|_X^p d\mu\\[2mm]
&\leq& C^p\sum_{m\geq 1}\bigg(\frac{\xi_m}{W_{n_m}}\bigg)^p<\infty
\end{eqnarray*}
which completes the proof.
\end{proof}
\begin{rem} The proved theorem extends the results of \cite{CL2003,CL05} to more general weights, since in the mentioned
papers only weights as in Examples \ref{E1}-\ref{E3} have
been studied.
\end{rem}
Due to Remark \ref{r31} from the proved theorem, we immediately
find the following corollary.
\begin{cor}\label{CT1} Let $p\geq 1$ and $\{G_n\}$ be a weight. Assume that
$\{f_n\}\subset L_p(\mu,X)$ with $\sup_n\|f_n\|_p<\infty$ and
\begin{equation}\label{CT11}
\sup_{n\geq 1}\bigg\|\frac{1}{G_n}\sum_{k=1}^n f_k\bigg\|_p<\infty.
\end{equation}
Then, for any $\{W_n\}\in\mathcal{W}_p(G_n)$, the weighted means
$$\frac{1}{W_n}\sum_{k=1}^n f_k$$ converge a.e. Furthermore
$$
\sup_{n\geq 1}\frac{1}{W_n}\bigg\|\sum_{k=1}^n f_k\bigg\|_X\in L_p(\mu).
$$
\end{cor}
\begin{thm}\label{T2} Let the conditions of Theorem \ref{T1} are satisfied. Assume that
\begin{equation}\label{T21}
\sum_{n=1}^\infty\frac{G_n}{W_n}\bigg(1-\frac{W_n}{W_{n+1}}\bigg)<\infty.
\end{equation}
Then the series
\begin{equation}\label{T22}
\sum_{k=1}^\infty\frac{f_k}{W_k}
\end{equation}
converges a.e., and
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{f_k}{W_k}\bigg\|_X\in L_p(\mu).
$$
\end{thm}
\begin{proof} Let us denote
$$
S_0=0, \ \ S_n=\sum_{k=1}^n f_k, \ \ n\in{\mathbb N}.
$$
Then by means of Abel's summation, one finds
\begin{eqnarray}\label{T23}
\sum_{k=1}^n \frac{f_k}{W_k}&=&\sum_{k=1}^n\frac{S_k-S_{k-1}}{W_k}\nonumber\\[2mm]
&=&\frac{S_n}{W_n}+\sum_{k=1}^{n-1}\bigg(\frac{1}{W_k}-\frac{1}{W_{k+1}}\bigg)S_k.
\end{eqnarray}
Due to Theorem \ref{T1},
$$
\frac{S_n}{W_n}\to 0 \ \ \ \textrm{a.e.}
$$
On the other hand, we have
\begin{eqnarray*}
\sum_{k=1}^{n}\bigg\|\bigg(\frac{1}{W_k}-\frac{1}{W_{k+1}}\bigg)S_k\bigg\|_X&
\leq& \sum_{k=1}^{n}\bigg|\frac{1}{W_k}-\frac{1}{W_{k+1}}\bigg|\|S_k\|_X\\[2mm]
&=&\sum_{k=1}^{n}\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg)\bigg\|\frac{S_k}{G_k}\bigg\|_X
\end{eqnarray*}
Now, by \eqref{T11} and \eqref{T21},
\begin{eqnarray*}
\int\sum_{k=1}^{\infty}\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg)\bigg\|\frac{S_k}{G_k}\bigg\|_X d\mu&\leq&
\sum_{k=1}^{\infty}\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg)\bigg\|\frac{S_k}{G_k}\bigg\|_p\\[2mm]
&\leq& K\sum_{k=1}^{\infty}\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg)<\infty.
\end{eqnarray*}
Hence, the series
$$
\sum_{k=1}^{\infty}\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg)\bigg\|\frac{S_k}{G_k}\bigg\|_X
$$ converges a.e. which, due to \eqref{T23}, yields the convergence of
$$
\sum_{k=1}^\infty \frac{f_k}{W_k}.
$$
From \eqref{T23}, we immediately obtain
\begin{eqnarray*}
\sup_n\bigg\|\sum_{k=1}^n \frac{f_k}{W_k}\bigg\|_X&\leq &\sup_n\frac{\|S_n\|_X}{W_n}+\sup_n\bigg\|\sum_{k=1}^{n-1}\bigg(\frac{1}{W_k}-\frac{1}{W_{k+1}}\bigg)S_k\bigg\|_X
\end{eqnarray*}
Consequently, by Theorem \ref{T1}
\begin{eqnarray*}
\bigg\|\sup_n\bigg\|\sum_{k=1}^n \frac{f_k}{W_k}\bigg\|_X\bigg\|_p&\leq &\bigg\|\sup_n\frac{\|S_n\|_X}{W_n}\bigg\|_p+\
\sum_{k=1}^{\infty}\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg) \bigg\|\frac{S_k}{G_k}\bigg\|_p<\infty
\end{eqnarray*}
Let us establish the norm convergence of $\sum\limits_{k=1}^\infty \frac{f_k}{W_k}$. Indeed,
\begin{eqnarray*}
\bigg\|\sum_{k=1}^n \frac{f_k}{W_k}-\sum_{k=1}^m \frac{f_k}{W_k}\bigg\|_p\leq \bigg\|\sum_{k=m+1}^{n-1}\bigg(\frac{1}{W_k}-\frac{1}{W_{k+1}}\bigg)S_k\bigg\|_p+\bigg\|\frac{S_m}{W_m}\bigg\|_p+\bigg\|\frac{S_n}{W_n}\bigg\|_p
\end{eqnarray*}
Due to Theorem \ref{T1}
$$
\bigg\|\frac{S_m}{W_m}\bigg\|_p\to 0, \ \ \bigg\|\frac{S_n}{W_n}\bigg\|_p\to 0.
$$
On the other hand, one finds
$$
\bigg\|\sum_{k=m+1}^{n-1}\bigg(\frac{1}{W_k}-\frac{1}{W_{k+1}}\bigg)S_k\bigg\|_p\leq K
\sum_{k=m+1}^{\infty}\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg)\to 0 \ \ \textrm{as} \ \ m\to\infty.
$$
This implies that the series $\sum\limits_{k=1}^\infty \frac{f_k}{W_k}$ converges in $L_p$-norm.
\end{proof}
\begin{rem} We point out that in all proved results the
independence of the random variables $\{f_n\}$ is not required. In
\cite{BM} the a.e. convergence of the series of admissible random
variables in Orlicz spaces has been investigated.
\end{rem}
Let us provide an example for which the conditions of Theorem \ref{T2} are satisfied.
\begin{exm}\label{EwA} Let us consider $L_2(\mu,\mathcal{ H})$, where $\mathcal{ H}$ is a Hilbert space. Take any orthogonal sequence
$\{f_n\}\subset L_2(\mu,\mathcal{ H})$ with $\|f_n\|_2=\sqrt{n}$. Define
$$
G_n=\bigg(\sum_{k=1}^n \|f_k\|_2^2\bigg)^{1/2}, \ \ n\in{\mathbb N}.
$$
It is clear that $\{G_n\}$ is a weight. One can see that
$$
\bigg\|\frac{1}{G_n}\sum_{k=1}^n f_k\bigg\|_2^2=\frac{1}{G_n^2}\sum_{k=1}^n \|f_k\|_2^2=1, \ \ \forall n\geq 1.
$$
Hence, the sequence $\{f_n\}$ satisfies \eqref{T11}.
Let $n_m=m^2$, $m\in{\mathbb N}$ and
$$
\xi_m=\bigg(\sum_{k=n_m+1}^{n_{m+1}} \|f_k\|_2^2\bigg)^{1/2}, \ \ m\in{\mathbb N}.
$$
Now define a weight $\{W_n\}$ as follows:
$$
W_n=n^{\frac{1+\varepsilon}{4}}\sqrt{n(n+1)}, \ \ 0<\varepsilon<1.
$$
Let us show that $\{W_n\}$ is a weak $2$-admissible weight w.r.t. $\{G_n\}$. Indeed, first we notice that
$$
G_n^2=\frac{n(n+1)}{2}, \ \ \ \xi^2_m=G^2_{n_{m+1}}-G^2_{n_m}.
$$
Then
\begin{eqnarray*}
\sum_{m=1}^\infty\bigg(\frac{G_{n_m}}{W_{n_m}}\bigg)^2&=&\sum_{m=1}^\infty\frac{n_m(n_m+1)}{2n_m^{(1+\varepsilon)/2}n_m(n_m+1)}\\[2mm]
&=&\frac{1}{2}\sum_{m=1}^\infty\frac{1}{m^{1+\varepsilon}}<\infty
\end{eqnarray*}
From this we infer that
\begin{eqnarray*}
\sum_{m=1}^\infty\bigg(\frac{\xi_{m}}{W_{n_m}}\bigg)^2&=&\sum_{m=1}^\infty\frac{G^2_{n_{m+1}}-G^2_{n_m}}{W_{n_m}^2}
\leq\sum_{m=1}^\infty\bigg(\frac{G_{n_{m+1}}}{W_{n_m}}\bigg)^2\\[2mm]
&\leq& c^2\sum_{m=1}^\infty\bigg(\frac{G_{n_{m+1}}}{W_{n_{m+1}}}\bigg)^2<\infty
\end{eqnarray*}
where we used that $W_{n_{m+1}}\leq c\,W_{n_m}$ for an absolute constant $c$ (indeed, $W_{n_{m+1}}/W_{n_m}\to 1$ as $m\to\infty$).
Hence, $\{W_n\}$ is a weak 2-admissible weight.
Notice that, due to $\varepsilon\in (0,1)$,
$$
\sum_{n=1}^\infty\bigg(\frac{G_{n}}{W_{n}}\bigg)^2=\frac{1}{2}\sum_{n=1}^\infty\frac{1}{n^{(1+\varepsilon)/2}}=\infty.
$$
By the following relation
$$
\max_{n_m<n\leq n_{m+1}}\bigg\|\sum_{k=n_{m}+1}^n f_k\bigg\|^2_2\leq \max_{n_m<n\leq n_{m+1}}\sum_{k=n_m+1}^n\|f_k\|_2^2\leq \xi_m^2
$$
we conclude that $\{f_n\}$ is a $(W_n)$-sequence. Therefore, according to Theorem \ref{T1} we find the a.e. convergence of
$\frac{1}{W_n}\sum_{k=1}^n f_k$.
Now, let us check the condition \eqref{T21}. Indeed,
\begin{eqnarray*}
\frac{G_{n}}{W_{n}}\bigg(1-\frac{W_n}{W_{n+1}}\bigg)&\leq&\frac{1}{n^{(1+\varepsilon)/4}}\bigg(1-\frac{n^{(1+\varepsilon)/4}\sqrt{n(n+1)}}{(n+1)^{(1+\varepsilon)/4}\sqrt{(n+1)(n+2)}}\bigg)\\[2mm]
&=&\frac{1}{n^{(1+\varepsilon)/4}}\cdot\frac{(n+1)^{(1+\varepsilon)/4}\sqrt{n+2}-n^{(1+\varepsilon)/4}\sqrt{n}}{(n+1)^{(1+\varepsilon)/4}\sqrt{n+2}}\\[2mm]
&\leq &\frac{1}{n^{(1+\varepsilon)/2}\sqrt{n}}\Big(\big((n+1)^{(1+\varepsilon)/4}-n^{(1+\varepsilon)/4}\big)\sqrt{n+2}+n^{\frac{1+\varepsilon}{4}}\big(\sqrt{n+2}-\sqrt{n}\big)\Big) \\[2mm]
&\leq &\frac{1}{n^{(1+\varepsilon)/2}\sqrt{n}}\Big(\frac{1+\varepsilon}{4}\,n^{-\frac{3-\varepsilon}{4}}\sqrt{n+2}+\frac{n^{\frac{1+\varepsilon}{4}}}{\sqrt{n}}\Big) \\[2mm]
&\leq &\frac{C}{n^{\frac{5+\varepsilon}{4}}}
\end{eqnarray*}
for some absolute constant $C>0$, where we used $(n+1)^{(1+\varepsilon)/4}-n^{(1+\varepsilon)/4}\leq\frac{1+\varepsilon}{4}\,n^{-\frac{3-\varepsilon}{4}}$ and $\sqrt{n+2}-\sqrt{n}\leq\frac{1}{\sqrt{n}}$. This yields the convergence of the series
$$
\sum_{n=1}^\infty\frac{G_{n}}{W_{n}}\bigg(1-\frac{W_n}{W_{n+1}}\bigg)<\infty.
$$
Therefore, Theorem \ref{T2} implies the a.e. convergence of the series
$$
\sum_{k=1}^\infty\frac{f_n}{W_n}.
$$
\end{exm}
\begin{rem} We point out that for a given weight $\{G_n\}$ one can always find a $p$-admissible weight $\{W_n\}$ such that
$$
\sum_{k=1}^\infty\frac{G_k}{W_k}<\infty.
$$
Indeed, if we take $W_n=n^pG_n$, then one gets the desired relation. However, this kind of weight is not interesting, since from \eqref{T11} we immediately get
$\frac{1}{W_n}\sum_{k=1}^n f_k\to 0$ in norm. Therefore, the obtained results are much more meaningful if one assumes
\begin{equation}\label{rrr}
\sum_{k=1}^\infty\frac{G_k}{W_k}=\infty
\end{equation}
while the conditions \eqref{W1}, \eqref{W2} and \eqref{T21} hold. In this setting, it is important to get \eqref{1T11} which is not obvious with \eqref{rrr}.
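Note that the weight $W_n=n^{\frac{1+\varepsilon}{4}}\sqrt{n(n+1)}$ constructed in Example \ref{EwA} lies in this meaningful regime: there $\frac{G_n}{W_n}=\frac{1}{\sqrt{2}\,n^{(1+\varepsilon)/4}}$ with $(1+\varepsilon)/4<1$, so \eqref{rrr} holds, while \eqref{W1}, \eqref{W2} and \eqref{T21} were verified above.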
\end{rem}
\begin{cor}\label{CT2}
Let $1<p<\infty$ and $\{G_n\}$ be a weight. Assume that
$\{f_n\}\subset L_p(\mu,X)$ with $\sup_n\|f_n\|_p<\infty$ and $\{a_n\}$ be a bounded sequence. Let
\begin{equation}\label{CT21c}
\sup_n\bigg\|\frac{1}{G_n}\sum_{k=1}^n a_kf_k\bigg\|_p<\infty.
\end{equation}
Then for any $\{W_n\}\in\mathcal{W}_p(G_n)$ with \eqref{T21} the series
\begin{equation}\label{CT22}
\sum_{k=1}^\infty\frac{a_kf_k}{W_k}
\end{equation}
converges a.e., and
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_kf_k}{W_k}\bigg\|_X\in L_p(\mu).
$$
\end{cor}
The proof immediately follows from Theorem \ref{T2} and Corollary \ref{CT1} if one takes $f_k'=a_kf_k$, for which condition \eqref{CT21c} reduces to \eqref{CT11}. In Appendix B, we provide some other
kinds of examples for which the above-mentioned conditions are valid.
\begin{thm}\label{T32} Let $1<p<\infty $ and let $\{G_n\}$ be a weight.
Assume that $\{f_k\}\subset L_p(\mu,X)$ with $\sup_{k}\|f_k\|_p<\infty$ such that
\begin{equation}\label{T321}
\sup_{n\geq 1}\bigg\|\frac{1}{G_n}\sum_{k=1}^n f_k\bigg\|_p=K<\infty.
\end{equation}
Let $\{W_n\}\in\mathcal{W}_p(G_{n})$ with \eqref{T21} and let $\{a_k\}$ be a bounded sequence such that
\begin{equation}\label{T322}
\sum_{k=1}^\infty\frac{|a_k-a_{k+1}|G_{k}}{W_k}<\infty.
\end{equation}
Then the series
$$
\sum_{k=1}^\infty\frac{a_kf_k}{W_k}
$$
converges a.e.
\end{thm}
\begin{proof} As before, let us denote $S_0=0$, $S_k=\sum\limits_{j=1}^k f_j$. Then we have
\begin{equation}
\sum_{k=1}^n\frac{a_k f_k}{W_k}=\frac{a_n S_n}{W_n}+\underbrace{\sum_{k=1}^{n-1}\bigg(\frac{a_k}{W_k}-\frac{a_{k+1}}{W_{k+1}}\bigg)S_k}_J
\end{equation}
Due to the boundedness of $\{a_k\}$, by Corollary \ref{CT1}, we infer
$$
\frac{a_n S_n}{W_n}\to 0 \ \ \ \textrm{a.e.}
$$
Let us establish that the term $J$ converges absolutely a.e. Indeed,
\begin{eqnarray*}
\int\sum_{k=1}^{n}\bigg\|\bigg(\frac{a_k}{W_k}-\frac{a_{k+1}}{W_{k+1}}\bigg)S_k\bigg\|_X d\mu&\leq & \sum_{k=1}^{n}\bigg|\frac{a_k}{W_k}-\frac{a_{k+1}}{W_{k+1}}\bigg| \|S_k\|_p\\[3mm]
&\leq & \sum_{k=1}^{n}\frac{|a_k-a_{k+1}|}{W_{k+1}}\|S_k\|_p\\[2mm]
&&+\sum_{k=1}^{n}\frac{\|\{a_k\}\|_\infty|W_k-W_{k+1}|}{W_kW_{k+1}} \|S_k\|_p\\[3mm]
&\leq & K\sum_{k=1}^{n}\frac{|a_k-a_{k+1}|G_k}{W_{k}}\\[2mm]
&&+K\|\{a_k\}\|_\infty\sum_{k=1}^{n}\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg)
\end{eqnarray*}
which, due to the hypotheses of the theorem, implies the desired assertion.
\end{proof}
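The Abel summation identity used at the beginning of the above proof can be checked mechanically. The following Python sketch (an illustrative sanity check on random scalar data; the weight $W_k=k^{0.7}$ is an arbitrary choice) verifies it:
\begin{verbatim}
import random

n = 50
a = [0.0] + [random.uniform(-1, 1) for _ in range(n)]   # a_1, ..., a_n
f = [0.0] + [random.uniform(-1, 1) for _ in range(n)]
W = [0.0] + [k ** 0.7 for k in range(1, n + 1)]         # increasing weight

S = [0.0] * (n + 1)                                     # S_k = f_1 + ... + f_k
for k in range(1, n + 1):
    S[k] = S[k - 1] + f[k]

lhs = sum(a[k] * f[k] / W[k] for k in range(1, n + 1))
rhs = a[n] * S[n] / W[n] + sum(
    (a[k] / W[k] - a[k + 1] / W[k + 1]) * S[k] for k in range(1, n))

print(abs(lhs - rhs) < 1e-12)   # True up to rounding
\end{verbatim}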
\section{Modulated one-sided ergodic Hilbert transforms on $L_p(\mu,X)$}
In this section, we are going to apply the proved results of the
previous section to the a.e. convergence of the modulated
one-sided ergodic Hilbert transforms.
Let $T:L_p(\mu,X)\to L_p(\mu,X)$ be a power bounded operator. Given sequences $\{a_n\}\subset{\mathbb C}$, $\{n_k\}\subset{\mathbb N}$ and a weight $\{W_n\}$, by \textit{the modulated one-sided ergodic Hilbert transform} we mean the following series
$$
\sum_{k=1}^\infty\frac{a_k T^{n_k}f}{W_k}.
$$
In this section, we are going to study sufficient conditions for the a.e. convergence of this transform.
\begin{thm}\label{T3}
Let $1<p<\infty$, let $\{G_n\}$ be a weight and let $\{n_k\}\subset{\mathbb N}$ be an increasing sequence. Assume that $T:L_p(\mu,X)\to L_p(\mu,X)$ is a power bounded operator and that for $f\in L_p(\mu,X)$ and a bounded sequence $\{a_n\}$ one has
\begin{equation}\label{CT31}
\sup_{n\geq 1}\bigg\|\frac{1}{G_n}\sum_{k=1}^n a_k T^{n_k}f\bigg\|_p<\infty.
\end{equation}
Then for any $\{W_n\}\in\mathcal{W}_p(G_n)$ with \eqref{T21} the series
\begin{equation*}
\sum_{k=1}^\infty\frac{a_k T^{n_k}f}{W_k}
\end{equation*}
converges a.e., and
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_kT^{n_k}f}{W_k}\bigg\|_X\in L_p(\mu).
$$
\end{thm}
The proof immediately follows from Corollary \ref{CT2} if one takes $f_k=T^{n_k}f$, which is clearly bounded in $L_p(\mu,X)$ since $T$ is power bounded.
\begin{rem} We note that if one considers the weights of Examples \ref{E2}, \ref{E3} and $T$ is taken as an invertible
power bounded operator acting on classical $L_p$-spaces,
the proved theorem recovers the results of \cite{Cuny11}. If one
considers a measure-preserving dynamical system $(\Omega,\mathcal{ F},\mu;T)$,
then necessary and sufficient conditions for the fulfillment of
\eqref{CT31} in $L_2(\mu)$ were given in \cite{Fan17} for
the weight $G_n=\sum_{k=1}^n|a_k|^2$ associated with a given
sequence $\{a_n\}$.
\end{rem}
From Theorem \ref{T32} we obtain the following interesting result.
\begin{cor}\label{CT321}
Let $1<p<\infty$, let $\{G_n\}$ be a weight and let $\{n_k\}\subset{\mathbb N}$ be an increasing sequence. Assume that $T:L_p(\mu,X)\to L_p(\mu,X)$ is a power bounded operator and that for $f\in L_p(\mu,X)$ one has
\begin{equation*}
\sup_{n\geq 1}\bigg\|\frac{1}{G_n}\sum_{k=1}^n T^{n_k}f\bigg\|_p<\infty.
\end{equation*}
Assume that $\{W_n\}\in\mathcal{W}_p(G_{n})$ with \eqref{T21} and
$\{a_n\}$ is a sequence satisfying \eqref{T322}.
Then the series
\begin{equation*}
\sum_{k=1}^\infty\frac{a_k T^{n_k}f}{W_k}
\end{equation*}
converges a.e.
\end{cor}
\begin{rem} The case $G_n=n^{1-\beta}$ was treated in \cite{CL05}. Other particular cases were investigated in \cite{DL,Ga81,MT94}. Our results cover a larger variety of weights, which allows
one to produce many interesting examples.
\end{rem}
We recall that a linear contraction $T:L_1(\mu,X)\to L_1(\mu,X)$ is called \textit{Dunford-Schwartz} if $\|Tf\|_\infty\leq\|f\|_\infty$ for all $f\in L_1\cap L_\infty$. Any such
operator induces a contraction on every $L_p$ ($1<p\leq \infty$) (see \cite{DS,K}). From now on, we suppose that $X$ is a Hilbert space $\mathcal{ H}$ with inner product $\langle\cdot,\cdot\rangle$.
We are ready to formulate a result.
\begin{thm}\label{T4}
Let $\{G_n\}$ be a weight, and $\{n_k\}\subset{\mathbb N}$ be an increasing sequence, and let $\{a_n\}$ be a bounded sequence of complex numbers such that
\begin{equation}\label{1T41}
\sup_{n\geq 1}\max_{|\lambda |=1}\bigg|\frac{1}{G_n}\sum_{k=1}^n a_k \lambda^{n_k}\bigg|=K<\infty.
\end{equation}
\begin{itemize}
\item[(i)] For every contraction $T:L_2(\mu,\mathcal{ H})\to L_2(\mu,\mathcal{ H})$ and $f\in L_2(\mu,\mathcal{ H})$ the series
\begin{equation}\label{T42}
\sum_{k=1}^\infty\frac{a_k T^{n_k}f}{W_k}
\end{equation}
converges a.e., for any $\{W_n\}\in\mathcal{W}_2(G_n)$ with \eqref{T21}. Moreover,
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_kT^{n_k}f}{W_k}\bigg\|_\mathcal{ H}\in L_2(\mu).
$$
\item[(ii)] For every Dunford-Schwartz operator $T:L_1(\mu,\mathcal{ H})\to L_1(\mu,\mathcal{ H})$ and $f\in L_p(\mu,\mathcal{ H})$, $1<p<2$, the series
\begin{equation}\label{T43}
\sum_{k=1}^\infty\frac{a_k T^{n_k}f}{\tilde W_k}
\end{equation}
converges a.e., for any $\{\tilde W_n\}\in\mathcal{W}_p(G_{n}^{(p)})$ satisfying \eqref{T21} with $G_{n}^{(p)}$ in place of $G_n$, where $G_{n}^{(p)}=G_n^{\frac{2(p-1)}{p}}n^{\frac{2-p}{p}}$. Moreover,
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_kT^{n_k}f}{\tilde W_k}\bigg\|_\mathcal{ H}\in L_p(\mu).
$$
\end{itemize}
\end{thm}
\begin{proof} (i) From \eqref{1T41}, due to Theorem 2.1 of \cite{BLRT} and the unitary dilation theorem, we have
\begin{equation*}
\sup_{n\geq 1}\bigg\|\frac{1}{G_n}\sum_{k=1}^n a_k T^{n_k}\bigg\|\leq K<\infty
\end{equation*}
for every contraction $T$ on a Hilbert space.
Let $T:L_2(\mu,\mathcal{ H})\to L_2(\mu,\mathcal{ H})$ be a contraction, and $f\in L_2(\mu,\mathcal{ H})$. Then
\eqref{CT21c} holds with $f_k=T^{n_k}f$. Consequently, Corollary \ref{CT2} yields a.e. convergence of the series
$$
\sum_{k=1}^\infty\frac{a_k T^{n_k}f}{W_k}
$$
and
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_kT^{n_k}f}{W_k}\bigg\|_\mathcal{ H}\in L_2(\mu),
$$
for every $\{W_n\}\in\mathcal{W}_2(G_n)$ with \eqref{T21}.
Moreover,
$$
\bigg\|\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_kT^{n_k}f}{W_k}\bigg\|_\mathcal{ H}\bigg\|_2\leq CK\|f\|_2.
$$
(ii) Let us denote
\begin{equation}\label{psi1}
\psi_n(z)=\sum_{k=1}^na_k z^{n_k}.
\end{equation}
By the maximum principle and \eqref{1T41},
$$
|\psi_n(z)|\leq K G_n, \ \ |z|\leq 1.
$$
Hence, for every contraction $T$ on a Hilbert space, by Theorem A \cite{RiN}
$$
\|\psi_n(T)\|\leq K G_n
$$
where $\psi_n(T)=\sum_{k=1}^na_k T^{n_k}$.
Now, assume that $T$ is a Dunford-Schwartz operator on $L_1(\mu,\mathcal{ H})$, and put
$$
T_n=\sum_{k=1}^na_k T^{n_k}.
$$
Then, due to the above observation, $\|T_n\|_2\leq KG_n$. Moreover, we also have $\|T_n\|_1\leq n\|\{a_k\}\|_{\ell_\infty}$.
Hence, the Riesz-Thorin interpolation theorem \cite[VII, p. 95]{Zyg} implies that for $1<p<2$
\begin{equation}\label{T44}
\|T_n\|_p\leq \big(n\|\{a_k\}\|_{\ell_\infty}\big)^{\frac{2-p}{p}}\big(K G_n\big)^{\frac{2(p-1)}{p}}.
\end{equation}
Denote
$$
G_{n}^{(p)}=G_n^{\frac{2(p-1)}{p}}n^{\frac{2-p}{p}}
$$
then from \eqref{T44}, for $f\in L_p(\mu,\mathcal{ H})$ we obtain
$$
\bigg\|\frac{1}{G_{n}^{(p)}}\sum_{k=1}^n a_k T^{n_k}f\bigg\|_p\leq K_p\|f\|_p,
$$
where $K_p=\|\{a_k\}\|_{\ell_\infty}^{\frac{2-p}{p}}K^{\frac{2(p-1)}{p}}$.
Hence, Theorem \ref{T3} implies that the series
$$
\sum_{k=1}^\infty\frac{a_k T^{n_k}f}{\tilde W_k}
$$
converges a.e. for any $\{\tilde W_n\}\in\mathcal{W}_p(G_{n}^{(p)})$ with \eqref{T21} for $G_{n}^{(p)}$, and
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_kT^{n_k}f}{\tilde W_k}\bigg\|_\mathcal{ H}\in L_p(\mu).
$$
This completes the proof.
\end{proof}
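For the reader's convenience, let us record the elementary interpolation arithmetic used in the proof of part (ii) (this is only a restatement of the computation above, illustrated with a hypothetical sample weight): writing $\frac{1}{p}=\frac{1-\theta}{1}+\frac{\theta}{2}$, one finds $\theta=\frac{2(p-1)}{p}$ and $1-\theta=\frac{2-p}{p}$, so that the Riesz-Thorin theorem gives
$$
\|T_n\|_p\leq \|T_n\|_1^{1-\theta}\,\|T_n\|_2^{\theta}\leq \big(n\|\{a_k\}\|_{\ell_\infty}\big)^{\frac{2-p}{p}}\big(KG_n\big)^{\frac{2(p-1)}{p}}.
$$
For instance, for the sample weight $G_n=n^{1/2}$ and $p=3/2$ one gets $G_{n}^{(p)}=n^{1/3}\cdot n^{1/3}=n^{2/3}$.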
Now, we are going to improve the above theorem. To do so, we need some auxiliary facts.
\begin{lem}\label{L5}
Let $\{G_n\}$ be a weight, and $\{n_k\}\subset{\mathbb N}$ be an increasing sequence, and let $\{a_n\}$ be a bounded sequence of complex numbers such that
\begin{equation}\label{L51}
\sup_{n\geq 1}\max_{|\lambda |=1}\bigg|\frac{1}{G_n}\sum_{k=1}^n a_k \lambda^{n_k}\bigg|=K<\infty.
\end{equation}
Then for any $r\in{\mathbb R}$
\begin{equation}\label{T41}
\sup_{n\geq 1}\max_{|\lambda |=1}\bigg|\frac{1}{G_{n,r}}\sum_{k=1}^n a_k k^{ir}\lambda^{n_k}\bigg|\leq |r|K
\end{equation}
where
\begin{equation}\label{L52}
G_{n,r}=\frac{1}{|r|}G_n+\sum_{k=1}^{n-1}\frac{G_k}{k}.
\end{equation}
\end{lem}
\begin{proof}
Using Abel's summation and \eqref{L51}, we obtain
\begin{eqnarray*}
\bigg|\sum_{k=1}^na_kk^{ir}\lambda^{n_k}\bigg|&\leq&|n^{ir}\psi_n(\lambda)|+\bigg|\sum_{k=1}^{n-1}\big(k^{ir}-(k+1)^{ir}\big)\psi_k(\lambda)\bigg|\\[2mm]
&\leq &|\psi_n(\lambda)|+|r|\sum_{k=1}^{n-1}\frac{|\psi_k(\lambda)|}{k}\\[2mm]
&\leq& KG_n+|r|K\sum_{k=1}^{n-1}\frac{G_k}{k}\\[2mm]
&=&|r|KG_{n,r},
\end{eqnarray*}
where $\psi_k(\lambda)$ is given by \eqref{psi1}.
This completes the proof.
\end{proof}
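The bound \eqref{T41} can also be tested numerically. The following Python sketch is our own illustration: the data $a_k$, $n_k=k$ and $G_k=\sqrt{k}$ are arbitrary choices, and both sides of \eqref{T41} are evaluated over a finite range of $n$ and a finite grid of $\lambda$ on the unit circle.
\begin{verbatim}
import cmath, math, random

Nmax, grid, r = 150, 256, 0.7
a = [0.0] + [random.uniform(-1, 1) for _ in range(Nmax)]      # a_k, k = 1..Nmax
G = [0.0] + [math.sqrt(k) for k in range(1, Nmax + 1)]        # a sample weight
Gr = [0.0] + [G[n] / r + sum(G[k] / k for k in range(1, n))   # G_{n,r} of (L52)
              for n in range(1, Nmax + 1)]

K, lhs = 0.0, 0.0
for t in range(grid):
    lam = cmath.exp(2j * math.pi * t / grid)
    psi = mod = 0j          # partial sums of psi_n and of the modulated sum
    for k in range(1, Nmax + 1):          # here n_k = k
        psi += a[k] * lam ** k
        mod += a[k] * k ** (1j * r) * lam ** k
        K = max(K, abs(psi) / G[k])
        lhs = max(lhs, abs(mod) / Gr[k])

print(lhs <= r * K + 1e-9)    # True, in accordance with the lemma
\end{verbatim}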
\begin{lem}\label{L6}
Let $\{G_n\}$ be a weight and $G_{n,r}$ ($r\in{\mathbb R}$) be given by \eqref{L52}. Then, for any weight $\{W_n\}\in \mathcal{W}_p(G_{n,r})$ and $\delta>0$
$$
\bigg\{\frac{1}{\delta}W_n\bigg\}\in \mathcal{W}_p(G_{n,\delta r}).
$$
\end{lem}
\begin{proof} From $\{W_n\}\in \mathcal{W}_p(G_{n,r})$ we infer the existence of $\{n_k\}\subset{\mathbb N}$ such that
$$
\sum_{k=1}^\infty\bigg(\frac{G_{n_k,r}}{W_{n_k}}\bigg)^p<\infty, \ \ \ \ \sum_{k=1}^\infty\bigg(\frac{n_{k+1}-n_k}{W_{n_k}}\bigg)^p<\infty.
$$
Take any $\delta>0$; to prove the statement, it is enough to show
\begin{equation}\label{L61}
\sum_{k=1}^\infty\bigg(\frac{G_{n_k,\delta r}}{\frac{1}{\delta}W_{n_k}}\bigg)^p<\infty.
\end{equation}
First, we consider
\begin{eqnarray*}
\bigg(\frac{G_{n_k,\delta r}}{\frac{1}{\delta}W_{n_k}}\bigg)^p&=&\bigg(\frac{\frac{1}{\delta |r|}G_{n_k}+\frac{1}{\delta}\sum\limits_{j=1}^{n_k-1}\frac{G_j}{j}+\big(1-\frac{1}{\delta}\big)\sum\limits_{j=1}^{n_k-1}\frac{G_j}{j}}{\frac{1}{\delta}W_{n_k}}\bigg)^p\\[3mm]
&\leq&\bigg(\frac{G_{n_k,r}}{W_{n_k}}+\frac{|1-1/\delta|}{1/\delta}\frac{\sum\limits_{j=1}^{n_k-1}\frac{G_j}{j}}{W_{n_k}}\bigg)^p\\[3mm]
&\leq & \bigg(\frac{G_{n_k,r}}{W_{n_k}}+|\delta-1|\frac{G_{n_k,r}}{W_{n_k}}\bigg)^p\\[2mm]
&=&(1+|\delta-1|)^p\bigg(\frac{G_{n_k,r}}{W_{n_k}}\bigg)^p,
\end{eqnarray*}
which yields \eqref{L61}. This completes the proof.
\end{proof}
\begin{thm}\label{T7} Let $\{G_n\}$ be a weight, and $\{n_k\}\subset{\mathbb N}$ be an increasing sequence, and let $\{a_n\}$ be a bounded sequence of complex numbers such that
\begin{equation}\label{T71}
\sup_{n\geq 1}\max_{|\lambda |=1}\bigg|\frac{1}{G_n}\sum_{k=1}^n a_k \lambda^{n_k}\bigg|=K<\infty.
\end{equation}
Let $\{W_n\}\in\mathcal{W}_p(G_{n,1})$ satisfy
\begin{equation}\label{T72}
\sum_{n=1}^\infty\frac{G_{n,1}}{W_n}\bigg|1-\frac{W_n}{W_{n+1}}\bigg|<\infty,
\end{equation}
and let $\beta>0$ be such that
\begin{equation}\label{T73}
\sum_{k=1}^\infty\frac{1}{k^\beta W_k}<\infty.
\end{equation}
Then for any DS operator $T:L_p(\mu,\mathcal{ H})\to L_p(\mu,\mathcal{ H})$ and $f\in L_p(\mu,\mathcal{ H})$, $1<p<2$, the series
$$
\sum_{k=1}^\infty\frac{a_kT^{n_k}f}{k^{\beta t}W_k}
$$
converges a.e., where $t=\frac{2-p}{p}$. Moreover,
\begin{equation}\label{T74}
\bigg\|\sup_{n\geq 1}\bigg\|\sum_{k=1}^n\frac{a_k}{k^{\beta t}W_k}T^{n_k}f\bigg\|_\mathcal{ H}\bigg\|_p<\infty.
\end{equation}
\end{thm}
\begin{proof} Assume that \eqref{T71} holds. Then, for a DS operator $T$ and $f\in L_2(\mu,\mathcal{ H})$, taking into account Lemma \ref{L5} and arguing as in the proof of part (i) of Theorem \ref{T4}, we have
\begin{equation}\label{pT71}
\bigg\|\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_k k^{ir}T^{n_k}f}{\tilde W_k}\bigg\|_\mathcal{ H}\bigg\|_2\leq |r|K\|f\|_2.
\end{equation}
for any $\{\tilde W_n\}\in\mathcal{W}_p(G_{n,r})$ with
\begin{equation*}
\sum_{n=1}^\infty\frac{G_{n,r}}{\tilde W_n}\bigg|1-\frac{\tilde W_n}{\tilde W_{n+1}}\bigg|<\infty.
\end{equation*}
Due to Lemma \ref{L6}, we may replace any $p$-admissible weight $\{\tilde W_n\}\in\mathcal{W}_p(G_{n,r})$ by a $p$-admissible weight $\{W_n\}\in\mathcal{W}_p(G_{n,1})$ satisfying \eqref{T72}.
Hence, for every $\{W_n\}\in\mathcal{W}_p(G_{n,1})$ the inequality \eqref{pT71} reduces to
\begin{equation}\label{pT72}
\bigg\|\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_k k^{ir}T^{n_k}f}{W_k}\bigg\|_\mathcal{ H}\bigg\|_2\leq K\|f\|_2.
\end{equation}
Now, let $\beta>0$ such that \eqref{T73} holds, i.e.
$$
\sum_{k=1}^\infty\frac{1}{k^\beta W_k}<\infty.
$$
For $z\in{\mathbb C}$, let us denote
\begin{equation}\label{pT73}
\Phi_{n,z}(T):=\sum_{k=1}^n\frac{a_k k^{-z\beta}}{W_k}T^{n_k}.
\end{equation}
Then from \eqref{pT72}
\begin{equation}\label{pT74}
\bigg\|\sup_{n\geq 1}\big\|\Phi_{n,ir}(T)f\big\|_\mathcal{ H}\bigg\|_2\leq K\|f\|_2, \ \ r\in{\mathbb R}.
\end{equation}
For $z=1+ir$, one has
$$
\sup_{n\geq 1}\big\|\Phi_{n,1+ir}(T)f\big\|_\mathcal{ H}\leq \sum_{k=1}^\infty\frac{|a_k|k^{-\beta}\|T^{n_k}f\|_\mathcal{ H}}{W_k}.
$$
Hence, integrating and keeping in mind \eqref{T73} together with $\|T^{n_k}f\|_1\leq\|f\|_1$, we obtain
\begin{eqnarray}\label{pT75}
\bigg\|\sup_{n\geq 1}\big\|\Phi_{n,1+ir}(T)f\big\|_\mathcal{ H}\bigg\|_1 \leq \|\{a_k\}\|_{\ell_\infty}\|f\|_1\sum_{k=1}^\infty\frac{1}{k^\beta W_k}<\infty.
\end{eqnarray}
Now, we will follow the ideas of \cite{CL2003} to employ Stein's interpolation theorem.
For a bounded measurable positive integer-valued function $I$ and $z$ with $0\leq \Re(z)\leq 1$, let us define a linear operator
$$
\Phi_{I,z}f(x):=\sum_{k=1}^{I(x)}\frac{a_kk^{-z\beta}}{W_k}T^{n_k}f(x)=\sum_{j=1}^{\max I}1_{I=j}(x)\sum_{k=1}^j \frac{a_kk^{-z\beta}}{W_k}T^{n_k}f(x), \ \ \ f\in L_p(\mu,\mathcal{ H}).
$$
Take any two integrable simple functions $f$ and $g$; then one checks that the function
$$
F(z)=\int \langle \Phi_{I,z}f,g\rangle d\mu=\sum_{j=1}^{\max I}\sum_{k=1}^j \int \frac{a_kk^{-z\beta}}{W_k}\langle T^{n_k}f(x),1_{I=j}(x)g(x)\rangle d\mu
$$
is continuous and bounded in the strip $\{z\in{\mathbb C}: \ 0\leq \Re(z)\leq 1\}$, and analytic in its interior. Moreover, due to \eqref{pT74} and \eqref{pT75} one has
\begin{eqnarray*}
&&\|\Phi_{I,ir}f\|_2\leq \bigg\|\sup_{n\geq 1}\|\Phi_{n,ir}f\|_\mathcal{ H}\bigg\|_2\leq K\|f\|_2\\[2mm]
&&\|\Phi_{I,1+ir}f\|_1\leq \bigg\|\sup_{n\geq 1}\|\Phi_{n,1+ir}f\|_\mathcal{ H}\bigg\|_1\leq \bigg(\|\{a_k\}\|_{\ell_\infty}\sum_{k=1}^\infty\frac{1}{k^\beta W_k}\bigg)\|f\|_1.
\end{eqnarray*}
For $1<p<2$, let $t=\frac{2}{p}-1$, so that $\frac{1}{p}=(1-t)\frac{1}{2}+t$. Stein's interpolation theorem implies the existence of a constant $A_t$ such that for every $f\in L_p(\mu,\mathcal{ H})$
$$
\|\Phi_{I,t}f\|_p\leq A_t\|f\|_p
$$
which is equivalent to
$$
\bigg\|\sum_{k=1}^{I(x)}\frac{a_kk^{-t\beta}}{W_k}T^{n_k}f\bigg\|_p\leq A_t\|f\|_p.
$$
For an integer $N\geq 2$, let $I_N(x)=j$, where $j$ is the first integer with
$$
\bigg\|\sum_{k=1}^{j}\frac{a_kk^{-t\beta}}{W_k}T^{n_k}f(x)\bigg\|_\mathcal{ H}=\max_{1\leq n\leq N}\bigg\|\sum_{k=1}^{n}\frac{a_kk^{-t\beta}}{W_k}T^{n_k}f(x)\bigg\|_\mathcal{ H}.
$$
Then for $f\in L_p(\mu,\mathcal{ H})$, we have
\begin{eqnarray*}
\bigg\|\max_{1\leq n\leq N}\bigg\|\sum_{k=1}^{n}\frac{a_kk^{-t\beta}}{W_k}T^{n_k}f(x)\bigg\|_\mathcal{ H}\bigg\|_p&=&\bigg\|\sum_{k=1}^{I_N(x)}\frac{a_kk^{-t\beta}}{W_k}T^{n_k}f(x)\bigg\|_p\leq A_t\|f\|_p
\end{eqnarray*}
and letting $N\to \infty$, one concludes that
\begin{eqnarray}\label{pT76}
\bigg\|\sup_{n\geq 1}\bigg\|\sum_{k=1}^{n}\frac{a_k}{k^{t\beta}W_k}T^{n_k}f\bigg\|_\mathcal{ H}\bigg\|_p<\infty,
\end{eqnarray}
where $t=\frac{2-p}{p}$.
Due to
$$
\bigg(\frac{G_{n_k}+\sum_{j=1}^{n_k-1}\frac{G_j}{j}}{W_{n_k}}\bigg)\in\ell_p
$$
and
$$
\frac{G_{n_k}}{W_{n_k}}\leq \frac{G_{n_k}+\sum_{j=1}^{n_k-1}\frac{G_j}{j}}{W_{n_k}}
$$
we infer that $(W_n)\in\mathcal{W}_p(G_n)\subset\mathcal{W}_2(G_n)$ (recall that $1<p<2$). Hence, according to part (i) of Theorem \ref{T4} one finds the a.e. convergence of the series
$$
\sum_{k=1}^\infty\frac{a_k T^{n_k}f}{W_k}, \ \ \ f\in L_2(\mu,\mathcal{ H}).
$$
On the other hand, since the sequence $\{k^{-t\beta}\}$ is nonincreasing and bounded, Abel's summation applied to the above a.e. convergent series shows that the series
\begin{eqnarray}\label{pT77}
\sum_{k=1}^\infty\frac{a_k T^{n_k}f}{k^{t\beta}W_k}
\end{eqnarray}
converges a.e. for $f\in L_2(\mu,\mathcal{ H})$.
Therefore, the last statement together with \eqref{pT76} and the Banach principle yields the a.e. convergence of the series \eqref{pT77} for any $f\in L_p(\mu,\mathcal{ H})$. This completes the proof.
\end{proof}
\begin{thm}\label{T8} Let $\{G_n\}$ be a weight, let $\{n_k\}\subset{\mathbb N}$ be an increasing sequence, and let $\{a_n\}$ be a bounded sequence of complex numbers such that
\eqref{T71} is satisfied.
Then for any weight $\{W_n\}$ satisfying \eqref{T72} and
\begin{equation}\label{T81}
\frac{G_{n}}{W_n}\to 0 \ \ \ n\to \infty,
\end{equation}
and for every contraction $T:L_2(\mu,\mathcal{ H})\to L_2(\mu,\mathcal{ H})$, the series
$$
\sum_{k=1}^\infty\frac{a_kT^{n_k}}{W_k}
$$
converges in operator norm. Moreover, this convergence is uniform over all contractions.
\end{thm}
\begin{proof} Given $T$, let us denote
$$
S_n(T)=\sum_{k=1}^n a_k T^{n_k}.
$$
Then from \eqref{T71} by means of the spectral theorem for unitary operators and the unitary dilation theorem, we obtain
$$
\|S_n(T)\|\leq K G_n, \ \ \textrm{for all} \ \ T.
$$
On the other hand, by means of \eqref{T23}, we have
\begin{eqnarray}\label{T82}
\sum_{k=1}^n \frac{a_k T^{n_k}}{W_k}=\frac{S_n(T)}{W_n}+\sum_{k=1}^{n-1}\bigg(\frac{1}{W_k}-\frac{1}{W_{k+1}}\bigg)S_k(T).
\end{eqnarray}
Due to \eqref{T81} one has
$$
\bigg\|\frac{S_n(T)}{W_n}\bigg\|\leq K\frac{G_n}{W_n}\to 0, \ \ \ n\to\infty
$$
where the convergence is uniform with respect to $T$.
Now, let us estimate the second term in \eqref{T82}. Indeed, using \eqref{T72},
\begin{eqnarray*}
\bigg\|\sum_{k=j}^{n-1}\bigg(\frac{1}{W_k}-\frac{1}{W_{k+1}}\bigg)S_k(T)\bigg\|&\leq &\sum_{k=j}^{n-1}\bigg\|\frac{S_k(T)}{W_k}\bigg\|\bigg(1-\frac{W_k}{W_{k+1}}\bigg)\\[3mm]
&\leq &\sum_{k=j}^{n-1}\bigg\|\frac{S_k(T)}{G_k}\bigg\|\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg)\\[3mm]
&\leq & K\sum_{k=j}^{\infty}\frac{G_k}{W_k}\bigg(1-\frac{W_k}{W_{k+1}}\bigg)\to 0 \ \ \textrm{as} \ \ j\to\infty.
\end{eqnarray*}
This means that the second term in \eqref{T82} is Cauchy in operator norm, uniformly in $T$. Hence, the series
$$
\sum_{k=1}^\infty\frac{a_k T^{n_k}}{W_k}
$$
converges in operator norm.
\end{proof}
\section{Random modulated one-sided ergodic Hilbert transforms}
In this section, we provide applications of the results of the previous sections to obtain random versions of these results.
\begin{thm}\label{RT1}
Let $\{G_n\}$ be a weight, and let $\{n_k\}\subset{\mathbb N}$ be an increasing sequence. Assume that $\{f_n\}\subset L_p(Y, \nu)$ is a bounded sequence for which there exists
a subset $Y^*\subset Y$ with $\nu(Y^*)=1$ such that for every $y\in Y^*$ one has
\begin{equation}\label{RT11}
\sup_{n\geq 1}\max_{|\lambda |=1}\bigg|\frac{1}{G_n}\sum_{k=1}^n f_k(y) \lambda^{n_k}\bigg|=K<\infty.
\end{equation}
Then for every contraction $T:L_2(\mu,\mathcal{ H})\to L_2(\mu,\mathcal{ H})$ and $g\in L_2(\mu,\mathcal{ H})$ the series
\begin{equation*}\label{RT12}
\sum_{k=1}^\infty\frac{f_k(y)T^{n_k}g}{W_k}
\end{equation*}
converges $\mu$-a.e., for any $\{W_n\}\in\mathcal{W}_2(G_n)$ with \eqref{T21}. Moreover,
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{f_k(y)T^{n_k}g}{W_k}\bigg\|_\mathcal{ H}\in L_2(\mu).
$$
\end{thm}
The proof uses the same argument as the proof of Theorem \ref{T4}.
\begin{rem}
We notice that in \cite{W}, for symmetric independent complex-valued random variables $\{f_n\}$, the condition \eqref{RT11}
has been established for certain weights depending on
$\{f_n\}$. A particular case of this type of series has been
investigated in \cite{BW} for $L_2$-contractions.
\end{rem}
Let us provide a sufficient condition for the fulfillment of condition \eqref{RT11}. To do so, we will follow \cite{CL2003} and \cite{MPi}.
\begin{thm}\label{1RT}
Let $\{G_n\}$ be a weight, and let $(Y,\nu)$ be a probability space. Assume that $\{f_n\}\subset L_2(Y,\nu)$ is an independent sequence with $\int f_n\,d\nu=0$ and $\sup\limits_n\|f_n\|_2<\infty$. Suppose
that $\{n_k\}\subset{\mathbb N}$ is a strictly increasing sequence such that, for some $0<\alpha<1$, one has
\begin{equation}\label{1RT1}
\gamma:=\sum_{k=1}^\infty\frac{n_k^\alpha}{G^2_k}<\infty.
\end{equation}
Then the series
\begin{equation}\label{1RT2}
\sum_{k=1}^\infty\frac{f_k(y)\lambda^{n_k}}{G_k}
\end{equation}
converges $\nu$-a.e. uniformly in $\lambda$. Moreover, we have
\begin{equation}\label{1RT21}
\sup_{n\geq 1}\max_{|\lambda|=1}\bigg|\sum_{k=1}^n \frac{f_k(y)\lambda^{n_k}}{G_k}\bigg|\in L_2(\nu).
\end{equation}
\end{thm}
\begin{proof} To establish this theorem, we are going to use \cite[Corollary 1.1.2, p.10]{MPi} with the group $G$ being the unit circle, the set $A=\{n_k\}$, and the independent random variables
$\xi_{n_k}=f_k$. The sequence $\{a_n\}$ is defined as follows: $a_{n_k}=1/G_k$, and $a_j=0$ if $j\notin A$.
In what follows, we will identify the unit circle with $[0,2\pi]$ with addition modulo $2\pi$.
According to the mentioned result we need to show that
$$
I(\sigma):=\int_0^{2\pi}\frac{\bar\sigma(s)ds}{s(\log \frac{8\pi}{s})^{1/2}}
$$
is finite, where $\bar\sigma$ is the rearrangement of the function $\sigma$ defined as follows (see \cite{MPi,CL2003} for details):
\begin{eqnarray}\label{1RT3}
\sigma(t):=\bigg(\sum_{j\in A}|a_j|^2|1-e^{ijt}|^2\bigg)^{1/2}=2\bigg(\sum_{k=1}^\infty \frac{\sin^2\frac{n_k t}{2}}{G^2_k}\bigg)^{1/2}.
\end{eqnarray}
Now using $|\sin t|\leq 1$, $\sin^2 t\leq |\sin t|^\alpha\leq |t|^\alpha$ and \eqref{1RT1}, from \eqref{1RT3} we obtain
\begin{eqnarray}\label{1RT4}
\sigma(t)&\leq &2\bigg(\sum_{k=1}^\infty \frac{n_k^\alpha |t|^\alpha}{2^\alpha G^2_k}\bigg)^{1/2}\nonumber\\[2mm]
&=&2^{1-\alpha/2}|t|^{\alpha/2}\bigg(\sum_{k=1}^\infty \frac{n_k^\alpha}{G^2_k}\bigg)^{1/2}\nonumber\\[2mm]
&=&2^{1-\alpha/2}|t|^{\alpha/2}\sqrt{\gamma}.
\end{eqnarray}
Denoting by $|\cdot|$ the Lebesgue measure on $[0,2\pi]$, the bound \eqref{1RT4} yields, for the distribution function of $\sigma$,
\begin{eqnarray*}
m_\sigma(u):=\big|\{t\in[0,2\pi]: \ \sigma(t)<u\}\big|\geq \min\bigg\{\bigg(\frac{u}{2^{1-\alpha/2}\sqrt{\gamma}}\bigg)^{2/\alpha},\ 2\pi\bigg\} .
\end{eqnarray*}
Hence, the last inequality yields, for the rearrangement of $\sigma$,
\begin{eqnarray*}
\bar\sigma(s):=\sup\{ u>0 : \ m_\sigma(u)<s\}\leq 2^{1-\alpha/2}\sqrt{\gamma}\,s^{\alpha/2}, \qquad 0<s\leq 2\pi .
\end{eqnarray*}
Now, let us estimate
$$
I(\sigma)=\int_0^{2\pi}\frac{\bar\sigma(s)ds}{s(\log \frac{8\pi}{s})^{1/2}}\leq 2^{1-\alpha/2}\sqrt{\gamma} \int_0^{2\pi}\frac{ds}{s^{1-\alpha/2}(\log \frac{8\pi}{s})^{1/2}}<\infty.
$$
Hence, by \cite[Corollary 1.2, p. 10]{MPi} the series
$$
\sum_{k=1}^\infty\frac{f_k(y)\lambda^{n_k}}{G_k}
$$
converges a.e. uniformly in $\lambda$. Moreover, inequality (1.15) of \cite{MPi} yields \eqref{1RT21}.
\end{proof}
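To see that the hypothesis \eqref{1RT1} is non-vacuous, one may take, for instance, the hypothetical data $n_k=2^k$ and $G_k=2^{k/2}k$, for which $\gamma<\infty$ for every $0<\alpha<1$. The following Python sketch (our own illustration, not part of the proof) estimates $\gamma$, tests the bound \eqref{1RT4} at a sample point, and majorizes the entropy integral $I(\sigma)$ by a crude Riemann sum:
\begin{verbatim}
import math

kmax, alpha = 40, 0.9
nk = [2 ** k for k in range(1, kmax + 1)]          # hypothetical lacunary times
G  = [2 ** (k / 2) * k for k in range(1, kmax + 1)]

gamma = sum(n ** alpha / g ** 2 for n, g in zip(nk, G))

def sigma(t):                                      # the function (1RT3), truncated
    return 2 * math.sqrt(sum(math.sin(n * t / 2) ** 2 / g ** 2
                             for n, g in zip(nk, G)))

C = 2 ** (1 - alpha / 2) * math.sqrt(gamma)        # constant of the bound (1RT4)
assert sigma(0.3) <= C * 0.3 ** (alpha / 2) + 1e-12

steps, I = 4000, 0.0                               # crude Riemann sum for I(sigma)
for j in range(1, steps + 1):
    s = 2 * math.pi * j / steps
    I += C * s ** (alpha / 2) / (s * math.sqrt(math.log(8 * math.pi / s))) \
         * (2 * math.pi / steps)

print(f"gamma = {gamma:.4f},  I(sigma) <= {I:.4f}")
\end{verbatim}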
\begin{cor}\label{C1RT}
Let the conditions of Theorem \ref{1RT} be satisfied. Then there is a subset $Y^*\subset Y$ with $\nu(Y^*)=1$ such that for every $y\in Y^*$ one has
\begin{equation*}
\sup_{n\geq 1}\max_{|\lambda|=1}\bigg|\frac{1}{G_n}\sum_{k=1}^n f_k(y)\lambda^{n_k}\bigg|<\infty.
\end{equation*}
\end{cor}
\begin{proof} By Theorem \ref{1RT} there is a subset $Y^*\subset Y$ with $\nu(Y^*)=1$ such that for every $y\in Y^*$ the series
$\sum_{k=1}^\infty\frac{f_k(y)\lambda^{n_k}}{G_k}$ converges uniformly in $\lambda$. This implies
$$
\sup_{n\geq 1}\max_{|\lambda|=1}\bigg|\sum_{k=1}^n \frac{f_k(y)\lambda^{n_k}}{G_k}\bigg|<\infty.
$$
Then by Kronecker's lemma, we immediately obtain
$$
\sup_{n\geq 1}\max_{|\lambda|=1}\bigg|\frac{1}{G_n}\sum_{k=1}^n f_k(y)\lambda^{n_k}\bigg|<\infty,
$$
which is the required assertion.
\end{proof}
We notice that Corollary \ref{C1RT} gives a sufficient condition for the validity of \eqref{RT11}.\\
Now, let us get another application of Theorem \ref{T4}.
\begin{thm}\label{RT2}
Let $\{G_n\}$ be a weight and $\{n_k\}\subset{\mathbb N}$ be an increasing sequence. Assume that $\{a_n\}$ is a bounded sequence such that
\begin{equation}\label{RT21}
\sup_{n\geq 1}\max_{|\lambda |=1}\bigg|\frac{1}{G_n}\sum_{k=1}^n a_k\lambda^{n_k}\bigg|=K<\infty.
\end{equation}
Let $\alpha: \Omega\to\Omega$ be a measure-preserving transformation, and let $\{T_\omega\}$ be a family of measurable
linear contractions of $\mathcal{ H}$, i.e. $\|T_\omega g\|_\mathcal{ H}\leq \|g\|_\mathcal{ H}$ for all $g\in \mathcal{ H}$. Then for every $h\in L_2(\mu)$ and $g\in \mathcal{ H}$
the series
\begin{equation}\label{RT22}
\sum_{k=1}^\infty\frac{a_k h(\alpha^{n_k}(\omega))T_\omega T_{\alpha(\omega)}\cdots T_{\alpha^{n_k-1}(\omega)}g}{W_k}
\end{equation}
converges $\mu$-a.e. for any $\{W_n\}\in\mathcal{W}_2(G_n)$ with \eqref{T21}.
Moreover, one has
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n\frac{a_k h(\alpha^{n_k}(\omega))T_\omega T_{\alpha(\omega)}\cdots T_{\alpha^{n_k-1}(\omega)}g}{W_k}\bigg\|_\mathcal{ H}\in L_2(\mu).
$$
\end{thm}
\begin{proof}
To prove the theorem, we define a mapping $\mathcal{ T}: L_2(\mu,\mathcal{ H})\to L_2(\mu,\mathcal{ H})$ as follows:
\begin{equation}\label{RT23}
\mathcal{ T}(f)(\omega)=T_{\omega}f(\alpha(\omega)), \ \ f=f(\omega),\ \ \omega\in \Omega.
\end{equation}
One can see that
$$
\|\mathcal{ T}(f)\|_2^2=\int_\Omega\|T_\omega f(\alpha(\omega))\|_\mathcal{ H}^2d\mu\leq \int_\Omega\|f(\alpha(\omega))\|_\mathcal{ H}^2d\mu= \int_\Omega\|f(\omega)\|_\mathcal{ H}^2d\mu=\|f\|_2^2
$$
which implies that $\mathcal{ T}$ is a contraction of $L_2(\mu,\mathcal{ H})$.
Hence, the condition \eqref{RT21} with Theorem \ref{T4} implies that
the series
\begin{equation}\label{RT24}
\sum_{k=1}^\infty\frac{a_k\mathcal{ T}^{n_k}f}{W_k}
\end{equation}
converges $\mu$-a.e., for any $\{W_n\}\in\mathcal{W}_2(G_n)$ with \eqref{T21}. Moreover,
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n \frac{a_k\mathcal{ T}^{n_k}f}{W_k}\bigg\|_\mathcal{ H}\in L_2(\mu).
$$
Now, let us choose $f$ as follows:
$$
f(\omega)=h(\omega)g,
$$
where $h\in L_2(\Omega,\mu)$, $g\in \mathcal{ H}$. Then from \eqref{RT23}
$$
\mathcal{ T}(f)(\omega)=h(\alpha(\omega))T_\omega g,
$$
which yields
\begin{equation*}\label{RT25}
\mathcal{ T}^n(f)(\omega)=h(\alpha^n(\omega))T_\omega T_{\alpha(\omega)}\cdots T_{\alpha^{n-1}(\omega)}g.
\end{equation*}
Consequently, from \eqref{RT24}
$$
\sum_{k=1}^\infty\frac{a_k h(\alpha^{n_k}(\omega))T_\omega T_{\alpha(\omega)}\cdots T_{\alpha^{n_k-1}(\omega)}g}{W_k}
$$
converges $\mu$-a.e.
\end{proof}
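The contraction property of the skew product \eqref{RT23} is easy to visualize on a finite model. The following Python sketch is a toy illustration only: $\Omega={\mathbb Z}/N$ with the uniform measure, $\alpha$ the cyclic shift, and $T_\omega$ scaled rotations of ${\mathbb R}^2$ (all choices are hypothetical); it checks that $\|\mathcal{ T}f\|_2\leq\|f\|_2$.
\begin{verbatim}
import math, random

N = 1000
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]
c = [random.uniform(0.5, 1.0) for _ in range(N)]     # contraction factors <= 1
f = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

def T(w, v):                       # T_omega acting on v in R^2
    x, y = v
    ct, st = math.cos(theta[w]), math.sin(theta[w])
    return (c[w] * (ct * x - st * y), c[w] * (st * x + ct * y))

Tf = [T(w, f[(w + 1) % N]) for w in range(N)]  # (T f)(w) = T_w f(alpha(w))

norm2 = lambda g: math.sqrt(sum(x * x + y * y for x, y in g) / N)
print(norm2(Tf) <= norm2(f) + 1e-12)           # True: the skew product contracts
\end{verbatim}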
\begin{rem}
We stress that such a result is not known in the classical setting.
Moreover, a similar result can be proved for DS operators on $L_p$-spaces as well.
\end{rem}
In the particular case where $h$ is constant, the last theorem yields the a.e. convergence of
\begin{equation*}\label{5t5}
\sum_{k=1}^\infty\frac{a_k T_\omega T_{\alpha(\omega)}\cdots T_{\alpha^{n_k-1}(\omega)}g}{W_k}.
\end{equation*}
Moreover, we have another corollary of the last theorem.
\begin{cor}\label{CRT2} Assume that the conditions of Theorem \ref{RT2} are satisfied.
Let $\alpha: \Omega\to\Omega$ be a measure-preserving transformation, and let $T:\mathcal{ H}\to \mathcal{ H}$ be a linear contraction of $\mathcal{ H}$.
Then for every $h\in L_2(\mu)$ and $g\in \mathcal{ H}$
the series
\begin{equation}\label{CRT21}
\sum_{k=1}^\infty\frac{a_k h(\alpha^{n_k}(\omega))T^{n_k}g}{W_k}
\end{equation}
converges $\mu$-a.e. for any $\{W_n\}\in\mathcal{W}_2(G_n)$ with \eqref{T21}.
Moreover,
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n\frac{a_k h(\alpha^{n_k}(\omega))T^{n_k}g}{W_k}\bigg\|_\mathcal{ H}\in L_2(\mu).
$$
\end{cor}
The proof immediately follows from Theorem \ref{RT2} if we suppose that the family $\{T_\omega\}$ does not depend on $\omega$; then \eqref{RT22} yields the
desired convergence.
The last corollary is a combination of ergodic series and the one-sided ergodic Hilbert transforms. This is another kind of a.e. convergence of a random one-sided ergodic Hilbert transform of $T$ acting on $\mathcal{ H}$. Using the methods of \cite{Fan17} one can get some applications of Corollary \ref{CRT2} to hyperbolic dynamical systems. \\
By ${\mathbb T}$ we denote the unit circle in ${\mathbb C}$. Then, we have the following result.
\begin{cor}\label{CRT3} Assume that the conditions of Theorem \ref{RT2} are satisfied.
Let $T:\mathcal{ H}\to \mathcal{ H}$ be a linear contraction of $\mathcal{ H}$.
Then for every $\lambda \in{\mathbb T}$ and $g\in \mathcal{ H}$
the series
\begin{equation}\label{CRT31}
\sum_{k=1}^\infty\frac{a_k \lambda^{n_k}T^{n_k}g}{W_k}
\end{equation}
converges in the norm of $\mathcal{ H}$, for any $\{W_n\}\in\mathcal{W}_2(G_n)$ with \eqref{T21}.
\end{cor}
\begin{proof} Let us consider the special case where $\Omega=\mathbb{T}$ and $\mu$ is the normalized Lebesgue measure on ${\mathbb T}$.
Assume that $\alpha:{\mathbb T}\to{\mathbb T}$ is given by $\alpha(z)=\lambda z$, $\lambda\in{\mathbb T}$. It is clear that $\alpha$ preserves the measure $\mu$. Now, we take
$h(z)=z$ in \eqref{CRT21}, so that $h(\alpha^{n_k}(z))=\lambda^{n_k}z$ and the series in \eqref{CRT21} equals $z$ times the series \eqref{CRT31}. Since the latter does not depend on $z$, its $\mu$-a.e. convergence provided by Corollary \ref{CRT2} amounts to its convergence in the norm of $\mathcal{ H}$.
This implies the norm convergence of the weighted rotated one-sided ergodic Hilbert transform associated with a contraction $T$ (under condition \eqref{RT21}).
\end{proof}
\begin{rem} We point out that the a.e. convergence of the rotated one-sided ergodic Hilbert transforms of a contraction $T$ acting on a Hilbert space has been investigated
in \cite{CCC}, where the case $W_n=n$, $a_n=1$ was considered. Our last result gives the norm convergence for contractions on Hilbert spaces with general weights. This opens a new direction
in the theory of Hilbert transforms in Banach spaces (see \cite{CCL, Cuny}).
\end{rem}
Now combining Theorem \ref{1RT}, Corollary \ref{C1RT} and Theorem \ref{RT2} one can establish the following result.
\begin{thm}\label{RT4}
Let $(Y,\nu)$ be a probability space, and let $\{f_n\}\subset L_2(Y,\nu)$ be an independent sequence with $\int f_n\,d\nu=0$ and $\sup\limits_n\|f_n\|_2<\infty$. Let $\{G_n\}$ be a weight, and suppose
that $\{n_k\}\subset{\mathbb N}$ is a strictly increasing sequence such that \eqref{1RT1} holds.
Let $\alpha: \Omega\to\Omega$ be a measure-preserving transformation, and let $\{T_\omega\}$ be a family of measurable
linear contractions of $\mathcal{ H}$.
Then there is a subset $Y^*\subset Y$ with $\nu(Y^*)=1$ such that for every $y\in Y^*$, every $h\in L_2(\mu)$ and every $g\in \mathcal{ H}$
the series
\begin{equation}\label{RT41}
\sum_{k=1}^\infty\frac{f_k(y) h(\alpha^{n_k}(\omega))T_\omega T_{\alpha(\omega)}\cdots T_{\alpha^{n_k-1}(\omega)}g}{W_k}
\end{equation}
converges $\mu$-a.e. for any $\{W_n\}\in\mathcal{W}_2(G_n)$ with \eqref{T21}.
Moreover,
$$
\sup_{n\geq 1}\bigg\|\sum_{k=1}^n\frac{f_k(y) h(\alpha^{n_k}(\omega))T_\omega T_{\alpha(\omega)}\cdots T_{\alpha^{n_k-1}(\omega)}g}{W_k}\bigg\|_\mathcal{ H}\in L_2(\mu).
$$
\end{thm}
\begin{rem} In particular, if we consider the weight $G_n=n^{\gamma}$, then the last theorem is a new result for this type of weight, and one can recover the results of \cite{BW,CL2003}.
Moreover, by choosing the sequence $\{f_n\}$ and varying the weights, one can obtain many interesting results.
\end{rem}
Instantons in vacuum Einstein gravity were subject of intense
investigations since the late seventies, which culminated in their
complete topological classification \cite{inst}. Instantons in
extended supergravities are non-vacuum and typically involve
multiplets of scalar and vector fields in four dimensions and form
fields in higher dimensions. Non-vacuum axionic gravitational
instantons attracted attention in the late eighties in connection
with the idea of multiverse mechanism of fixing the physical
constants \cite{axion}. These are particular solutions of the
present theory whose bosonic sector is frequently termed as the
Einstein-Maxwell-dilaton-axion (EMDA) model. All extremal instanton
solutions in the one-vector EMDA theory were found recently in
\cite{acg}. Here we make some steps toward classification of
extremal instantons in the full $\cN=4,\, D=4$ theory.
Supersymmetric solutions to the Lorentzian $\cN=4$ supergravity were
classified solving the Killing spinor equations in
\cite{Tod:1995jf,Bellorin:2005zc}. An alternative technique to
classify BPS solutions relates to classification of null geodesics
of the target space of the sigma model obtained via dimensional
reduction of the theory along the Killing coordinate. The method was
suggested in the context of the five-dimensional Kaluza-Klein (KK)
theory by G. Clement \cite{gc}, building on the interpretation by
Neugebauer and Kramer \cite{nk} of solutions depending on a single
potential as geodesics of the three-dimensional target space. It was
further applied in \cite{bps} to classify Lorentzian extremal
solutions of the EMDA theory and Einstein-Maxwell (EM) theory. In
two of these three cases (KK and EMDA) it was found that the matrix
generators $B$ of null geodesics split into a nilpotent class
($B^n=0$ starting with certain integer $n$), in which cases the
matrix is degenerate ($\det B=0$), and a non-degenerate class ($\det
B\neq 0$). The solutions belonging to the first class are regular,
while the second class solutions, though still satisfying the no-force
constraint on asymptotic charges, generically contain
singularities. More recently, a similar approach partially overlapping
with the present one was suggested as the method of nilpotent
orbits \cite{Boss}. The latter starts with some matrix condition
following from supersymmetry, which is generically stronger than our
condition selecting the null geodesic subspace of the target space.
In the minimal $N=2$ theory all null geodesics are nilpotent orbits,
corresponding to the Israel-Wilson-Perj\`es solutions \cite{bps},
while in the $N=4$ case there are null geodesics whose generators
are not nilpotent. These correspond to solutions satisfying the
no-force condition, but presumably not supersymmetric.
\section{The setup}
Bosonic sector of $\cN=4, \;D=4$ supergravity contains
\be
g_{\mu\nu}, \quad \phi \;\;[{\rm dilaton }],\quad \ka \;\;[{\rm
axion}], \quad{\rm six \;\; vector\;\; fields}\; A_\mu.
\ee The theory has global symmetries: S-duality $SL(2,R)$
and $O(6)$, rotating the vector fields. The scalar fields
parametrize the coset $SL(2,R)/SO(2)$.
\subsection{Euclidean action} Correct choice of the Euclidean action for the
axion field follows from the positivity requirement and amounts to
starting with the three-form field:
\begin{equation}\label{acH}
S_0=\frac1{16\pi}\int\limits_{\cM} \left(
-R\star 1+2 \dd\phi\wedge \star
\dd\phi+ 2\e^{-4\phi} H\wedge \star H + 2\e^{-2\phi}
F_n\wedge \star F_n \right)\; - \frac1{8\pi}\int\limits_{\pa\cM}\e^{\psi/2}K\star\dd \Phi\,,
\end{equation}
where $F_n=\dd A_n$ are the Maxwell two-forms and the sum over $n$ from one to six
is understood. $H$ is the three-form field strength related to the
two-form potential $B$ via the relation involving the Chern-Simons
term:
$
H=\dd B-A_n\wedge F_n.
$
The boundary $\pa\cM$ of $\cM$ is described by
$\Phi(x^{\mu})\equiv 0$, while $\e^{\psi/2}$ is a scale factor
ensuring that $\e^{\psi/2}\dd \Phi$ measures the proper distance in
a direction normal to $\pa\cM$, and $K$ is the trace of the
extrinsic curvature of $\pa\cM$ in $\cM$.
To pass to the
pseudoscalar axion one has to ensure the Bianchi identity for $H$:
$
\dd \dd B=\dd (H + A\wedge F)=0\,
$ which is effected adding to the action (\ref{acH}) a new term with
the Lagrange multiplier $\ka$
\be
S_{\kappa}= \frac1{8\pi}\int_{\cM'} \ka\; \dd (H + A_n\wedge F_n)=
\frac1{8\pi}\int_{\cM'}\ka \; (\dd H + F_n\wedge F_n)\,,
\ee
where $\cM'$ is $\cM$ with the monopole sources of $H$ cut out.
Integrating out the three-form $H$ we obtain the bulk action in
terms of the pseudoscalar axion
\begin{equation}\label{ac2}
S_E= \frac1{16\pi}\int\limits_{\cM} \left(
-R\star 1+2 \dd\phi\wedge
\star\dd\phi- \fr12\, \e^{4\phi} \dd \ka\wedge \star\dd\ka +
2\e^{-2\phi}
F\wedge \star F + 2\ka F\wedge F\right)\,
\end{equation}
plus the boundary term. Combining the latter with the gravitational
boundary term we get
\be\lb{GHax}
\textsuperscript{4}S_b= \frac1{16\pi}\int\limits_{\pa\cM'}[\ka
\e^{4\phi}\star \dd\ka] -
\frac1{8\pi}\int\limits_{\pa\cM}\e^{\psi/2}[K]\star\dd \Phi,
\ee
where the pull-back of the three-form $\star \dd\ka$ onto the
boundary $\pa\cM$ is understood. Square brackets denote the
background subtraction which is necessary to make the action finite.
Note that the bulk matter action in the form (\ref{ac2}) is not
positive definite in contrast to (\ref{acH}): the difference is
hidden in the boundary term.
\subsection{3D sigma-model}
To develop generating technique for instantons we apply dimensional
reduction to three dimensions, where the equations of motion are
equivalent to those of the sigma model on the homogeneous space of
the three-dimensional U-duality group. The derivation of the sigma
model in the case $p=1$ was first given in \cite{emda} and
generalized to arbitrary $p$ in \cite{Gal'tsov:1996cm}. This leads
to the homogeneous space of the group $SO(2,p+2)$. In the
particular case $p=2$ the coset has a simpler representation
$G/H=SU(2,2)/\left(SO(1,3)\times SO(1,1)\right)$
\cite{Gal'tsov:1997kp} due to isomorphism $SO(2,4)\sim SU(2,2)$.
We parametrize the four-dimensional metric as
\begin{equation}\label{an}
\dd s^2=g_{\mu\nu}\dd x^\mu \dd x^\nu=f(\dd t-\omega_i\dd
x^i)^2+\frac{1}{f}\,h_{ij}\dd x^i\dd x^j\,,
\end{equation}
where $t$ is a Euclidean coordinate with period $\beta$, and
$f,\,\omega_i,\,h_{ij}$ are functions of $x^i$ ($i=1,2,3$).
Occasionally we will also use an exponential parametrization of the
scale factor $f=\e^{-\psi}$.
To be able to compute the on-shell instanton action one has to keep
all boundary terms \cite{acg} which were neglected in \cite{emda,
Gal'tsov:1996cm, Gal'tsov:1997kp}.
The Maxwell fields are parameterized by the electric $v_n$ and
magnetic $u_n$ potentials partly solving the equations of motion
\begin{align} &F_{i4}=\frac{1}{\sqrt{2}}\,\partial_iv\,,\\&
\lb{mag} \e^{-2\phi}F^{ij}-\kappa {\tilde
F}^{ij}=\frac{f}{\sqrt{2h}}\,\epsilon^{ijk}
\partial_ku\,,
\end{align}
where the index $n$ labeling different vector fields is omitted. The
rotation one-form $\om$ in the metric is dualized to the NUT
potential $\chi$:\be\lb{twist}
\partial_i\chi +v\partial_iu-u\partial_iv=-f^2h_{ij}\,
\frac{\epsilon^{jkl}}{\sqrt{h}}\,\partial_k\omega_l \ee
(we define $\epsilon_{1234}=+1$). The resulting full bulk action is that
of the gravity-coupled three-dimensional sigma model
\be\lb{acsig}
S_\sigma = -\frac{\beta}{16\pi}\int \dd ^3x \sqrt{h}\left({\cal R}-
G_{AB}\partial_iX^A\partial_j X^B h^{ij}\right)\,\,,
\end{equation}
where the target space variables are $\textbf{X} =
(f,\phi,v_n,\chi,\kappa,u_n)\,,$ integration is over the three-space
$\cE$ and the target space metric $\dd l^2 = G_{AB}\dd X^A \dd X^B$
reads
\be\lb{tar4} \dd l^2 = \frac12\,f^{-2}\dd f^2 -
\frac12\,f^{-2}(\dd \chi + v_n\dd u_n - u_n\dd v_n)^2 +
f^{-1}\e^{-2\phi}\dd v_n^2- f^{-1}\e^{2\phi}(\dd u_n - \kappa \dd
v_n)^2 + 2\dd \phi^2 - \frac12\,\e^{4\phi}\dd \kappa^2\,.
\ee
This space has the isometry group $G=SO(2,p+2)$, the same as its
Lorentzian counterpart \cite{Gal'tsov:1996cm}. The metric
(\ref{tar4}) is the metric on the coset $G/H$, whose nature can be
uncovered from a signature argument. The Killing metric of
$so(2,p+2)$ algebra has the signature $(+2(p+1),-(p^2+3p+4)/2)$,
with plus standing for non-compact and minus for compact generators.
Since the signature of the target space is $(+(p+2),-(p+2))$, it is
clear that the isotropy subalgebra must contain $p+2$ non-compact
and $p(p+1)/2$ compact generators. Such a subalgebra of $so(2,p+2)$
is ${\rm lie\,}(H) \sim so(1,p+1)\times so(1,1)$. We therefore
deal with the coset $SO(2,p+2)/(SO(1,p+1)\times SO(1,1))$. In the
$p=2$ case this is isomorphic to $SU(2,2)/(SO(1,3)\times SO(1,1))$.
As was argued in \cite{bps}, the maximal number of independent
harmonic functions with unequal charges is equal to the number of
independent isotropic directions in the target space. Here the
target space has $p+2$ positive and $p+2$ negative directions, thus
the number of independent null vectors is $p+2$.
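The dimension count behind this identification can be checked mechanically. The following Python sketch (a trivial consistency check of our own, not part of the argument) verifies that $\dim G/H=2(p+2)$ matches the number of target space potentials:
\begin{verbatim}
def dim_so(m, n):               # dim so(m,n) = N(N-1)/2 with N = m + n
    N = m + n
    return N * (N - 1) // 2

for p in range(1, 7):
    coset = dim_so(2, p + 2) - dim_so(1, p + 1) - dim_so(1, 1)
    assert coset == 2 * (p + 2) == 4 + 2 * p   # f, chi, phi, kappa, v_n, u_n
print("coset dimension equals 2(p+2) for p = 1..6")
\end{verbatim}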
In addition to the bulk action we have a number of surface terms
resulting from three-dimensional dualizations as well as from
dimensional reduction of the four-dimensional Gibbons-Hawking-axion
term. Collecting these together, and taking care of the rescaling of
the electric and magnetic potentials, we get:
\be\lb{btot}
S_{\rm inst} = \textsuperscript{3}S_b =
\frac{\beta}{16\pi}\int\limits_{\pa\cE} (-2[k]\ast\dd \Psi +
\ast\dd\psi_0) + \frac{\beta}{16\pi}\int\limits_{\pa\cE'}\left([\ka
\e^{4\phi}\ast \dd\ka] + 2\sqrt2u_nF_n + (\chi+u_nv_n)\cF\right) \,.
\ee
Note that the {\em on-shell} value of the action which we are
interested in for instantons is entirely given by the boundary term
$\textsuperscript{3}S_b$ since the bulk sigma-model action vanishes
by virtue of the contracted three-dimensional Einstein equations.
Variation of the bulk action (\ref{acsig}) over $X^A$ gives the
equations of motion
\be\lb{eqsigma}
\pa_i\left(\sqrt{h}h^{ij}G_{AB}\pa_j X^B\right)=
\fr12\, G_{BC,A}\pa_i X^B\pa_j X^C h^{ij}\sqrt{h},
\ee
which can be rewritten in a form explicitly covariant both with
respect to the three-space metric $h_{ij}$, and to the target space
metric $G_{AB}$
\be\lb{consJ}
\nabla_i J^i_{A}=0\,,
\ee
where $\nabla_i$ is the total covariant derivative involving
Christoffel symbols both of $h_{ij}$ and $G_{AB}$. The currents
associated with the potentials read
\be
J^i_{A}=h^{ij}\pa_j X^B G_{AB}\,.
\ee
\subsection{Geodesic solutions}
Neugebauer and Kramer \cite{nk}, considering Kaluza-Klein theory,
noticed that, if the target space coordinates $X^A$ depend on
$x^i$ through a single scalar function,
$X^A=X^A[\tau(x^i)]$, the geodesic curves of the target space \be
\frac{d^2X^A}{d\tau^2}+\Gamma^A_{BC}\frac{dX^B}{d\tau}
\frac{dX^C}{d\tau}=0, \ee where $\Gamma^A_{BC}$ are Christoffel
symbols of the metric $G_{AB}$, solve the sigma-model equations of
motion, provided
$\tau(x^i)$ is a harmonic function in three-space with the metric
$h_{ij}$: \be
\Delta\tau\equiv\frac{1}{\sqrt{h}}\partial_i\left(\sqrt{h}h^{ij}
\partial_j \tau\right)=0.
\ee Therefore certain classes of solutions can be associated with
geodesics surfaces in the target space. Note that no assumptions
were made here about the metric of the three-space, which is
generically
curved.
\section{Matrix representation}
In view of the rotational symmetry in the space of vectors, to get
all different metrics it is not sufficient to consider only one vector field, but
it is enough to consider two vector fields. Indeed, as we will see
later, solutions can be labeled by asymptotic electric $Q_n$ and
magnetic $P_n$ charges, the metric being dependent on the three
invariants $Q^2=Q_nQ_n,\;P^2=P_nP_n$ and $PQ=Q_n P_n$. In one-vector
case $QP=0$ implies that either $Q^2=0$ or $P^2=0$. To have the
third invariant $QP$ independent of the first two, it is enough,
however, to take $p=2$: then, e.g., for $Q_1\neq 0,\; Q_2=0,\;
P_1=0,\;P_2\neq 0$ one has $QP=0$ but $Q^2\neq 0, \;P^2\neq 0$.
Using a rotation in the space of vector fields, one can always
choose this configuration as a representative of a general one. So
in what follows we will consider the case $p=2$.
To proceed, we have to introduce the matrix representation of the
coset $SU(2,2)/(SO(1,3)\times SO(1,1))$. In the Lorentz case the
corresponding coset is $SU(2,2)/(SO(2,2)\times SO(2))$, its
representation in terms of the complex matrices $4\times 4$ was
given in \cite{Gal'tsov:1997kp}. The analogous representation of the
Euclidean coset $G/H=SU(2,2)/\left(SO(1,3)\times SO(1,1)\right)$ is
given by the hermitian block matrix
\be\lb{ME} M = \left(\begin{array}{cc}
P ^{-1}&P ^{-1}Q\\
QP ^{-1}&-P +QP ^{-1}Q
\end{array}\right)\,,
\ee
with $2\times 2$ hermitian blocks
\be \lb{PQ} P = \e^{-2\phi}\left(\begin{array}{cc} f\e^{2\phi}+v_n^2& v_1-iv_2\\
v_1+iv_2&1
\end{array}\right)\,, \quad
Q=\left(\begin{array}{cc}
v_nw_n -\chi & w_1-iw_2\\
w_1+iw_2 &-\kappa
\end{array}\right)\,,
\ee where $w_n=u_n-\ka v_n$. In terms of these matrices the target
space metric reads \be \lb{dm}\dd l^2 = -\frac14\,\tr\left( \dd M
\dd M ^{-1}\right) = \frac12\,[(P ^{-1}\dd P )^2 - (P ^{-1}\dd
Q)^2]\,. \ee To read off the target space potentials from the matrix
$M$ it is enough to use its following two blocks:
\ba\lb{block}
&P^{-1} = f^{-1}\left(\begin{array}{cc} 1 & -(v_1-iv_2)\\
- (v_1+iv_2)&f\e^{2\phi}+ v_n^2
\end{array}\right)\,,& \\ & P^{-1}Q= f^{-1}\left(\begin{array}{cc}
- {\tilde \chi} & u_1-iu_2\\ {\tilde \chi}(v_1+iv_2)
+f\e^{2\phi}(w_1+iw_2) & -\kappa f\e^{2\phi}- v_nu_n +iW
\end{array}\right)\,,&\nonumber
\ea
where ${\tilde \chi}=\chi+iW,\; W=v_1u_2-v_2u_1$.
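As a consistency check of the parametrization (\ref{ME}), (\ref{PQ}), the following Python sketch (our own numerical illustration for $p=2$, with arbitrarily chosen potentials) verifies that $M$ is hermitian, that $\det M=1$, and that $M$ reduces to $\si_{z0}$ at the trivial point $f=1$, $\phi=v_n=u_n=\chi=\ka=0$:
\begin{verbatim}
import numpy as np

def coset_matrix(f, phi, kappa, chi, v1, v2, u1, u2):
    w1, w2 = u1 - kappa * v1, u2 - kappa * v2
    P = np.exp(-2 * phi) * np.array(
        [[f * np.exp(2 * phi) + v1**2 + v2**2, v1 - 1j * v2],
         [v1 + 1j * v2, 1.0]])
    Q = np.array([[v1 * w1 + v2 * w2 - chi, w1 - 1j * w2],
                  [w1 + 1j * w2, -kappa]])
    Pi = np.linalg.inv(P)
    return np.block([[Pi, Pi @ Q], [Q @ Pi, -P + Q @ Pi @ Q]])

M = coset_matrix(1.3, 0.2, -0.4, 0.7, 0.5, -1.1, 0.3, 0.9)
print(np.allclose(M, M.conj().T))             # hermitian
print(np.isclose(np.linalg.det(M), 1.0))      # unimodular

M0 = coset_matrix(1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
sz0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))   # sigma_z x sigma_0
print(np.allclose(M0, sz0))                      # trivial point: the ALF M_infty
\end{verbatim}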
\subsection{Asymptotic conditions}
We will be interested in finite-action solutions with vanishing
target space potentials $v_n=u_n=\ka=0$ in the asymptotic region
(specified as $r\to\infty$), while the NUT potential and the dilaton
may be growing there. If the dilaton tends to a constant value at
infinity, the asymptotic metrics can be classified as in pure
gravity \cite{inst}. These are known to be of two types. The first
includes asymptotically locally flat (ALF) solutions, with
$f(\infty) = 1,\;\chi(\infty) =0$ and the asymptotic form of the
metric
\be\lb{metalf} \dd s^2 =(\dd
t - 2N\cos\theta\,\dd\varphi)^2 + \dd r^2 + r^2 (\dd\theta^2 +
\sin^2\theta\,\dd\varphi^2)\,,
\ee
where $N$ is the NUT parameter. The coset matrix $M_\infty$
corresponding to this solution reads \be\lb{etaE}
M_{ALF}=\si_{z0}\equiv\sigma_z\otimes\sigma_0=\left(\begin{array}{cc} \sigma_0 & 0 \\
0 & -\sigma_0
\end{array}\right)\,, \ee
where (and in what follows) we use the notation
$\si_{\mu\nu}=\si_\mu\otimes\si_\nu,\; \mu,\,\nu=0,1,2,3$ for the
direct product of the matrices $\sigma_\mu = (\sigma_0, \sigma_i)$,
with $\sigma_i$ being the Pauli matrices and $\sigma_0=1$.
The second class, with growing $f=\chi\sim r$ at infinity,
corresponds to asymptotically locally Euclidean (ALE) solutions with
the asymptotic metric
\be\lb{4eucl}
ds^2 = \dd \rho^2 + \rho^2 \dd \Omega_3^2,\ee where the three-sphere
is parametrized as \be \dd \Omega_3^2=\frac14[\dd \theta^2 +
\sin^2\theta \dd \varphi^2 + (\dd \eta + \cos\theta \dd
\varphi)^2]\,,
\ee
with the angular coordinate $\eta = t$, and the radial coordinate
$\rho = (4r)^{1/2}$. In this case $M_\infty$ is
\be
M_{ALE}=\frac12\left(\si_{30}-\si_{33}- \si_{10}-\si_{13}\right).
\ee
Both these asymptotic solutions satisfy the source-free Euclidean
Einstein equations.
Two additional asymptotic types correspond to ALF and ALE solutions
with dilaton growing at infinity. The ALF solutions are related to
Lorentzian solutions with the linear dilaton asymptotic
\cite{Clement:2002mb}, while the ALE ones are dilaton-axion dressed
Eguchi-Hanson and lens spaces \cite{acg}.
\subsection{Null geodesic solutions}
In the matrix terms the sigma-model field equations (\ref{consJ})
read
\be\lb{sigeq}
\pmb\nabla\left(M^{-1}\pmb\nabla M\right)=0\,,
\ee
where $\pmb\nabla$ stands for the three--dimensional covariant
derivative, and the scalar product with respect to the metric
$h_{ij}$ is understood. The geodesic solutions then obey the
equation
\begin{equation} \frac{\dd}{\dd \tau}\left(M^{-1}\,\frac{\dd M}{\dd
\tau}\right)=0\,,
\end{equation}
which is first integrated by
\be
M^{-1}\frac{\dd M}{\dd\tau} = B \,,
\ee
where $B$ is a constant matrix generator of the coset. A second
integration leads to the solution to the geodesic equation in the
exponential form
\begin{equation} \label{AB}
M = M_\infty\,{\rm e}^{B\tau}\,,
\end{equation}
where we assume that $\tau(\infty)=0$.
The three--dimensional Einstein equations now read
\begin{equation} \label{ei}
{\cal R}_{ij}=-\frac{1}{4}\,\tr\left(\nabla_i M \nabla_j
M^{-1}\right)\,.
\end{equation}
The parametrisation (\ref{AB}) reduces (\ref{ei}) to
\begin{equation}
{\cal R}_{ij}=\frac{1}{4}\,(\tr B^2)\nabla_i \tau \nabla_j \tau\,.
\end{equation}
\noindent From this expression it is clear that in the particular
case
\begin{equation} \label{null}
\tr B^2 =0
\end{equation}
the three--space is Ricci--flat. In three dimensions the Riemann
tensor is then also zero, and consequently the three--space $\cE$ is
flat. We shall assume in the following that $\cE =\mathbb{R}^3$.
From Eq. (\ref{dm}) one can see \cite{gc} that the condition
(\ref{null}) corresponds to null geodesics of the target space
\begin{equation}\lb{ngeo}
\dd l^2=\frac{1}{4}\,(\tr B^2)\,\dd \tau^2=0\,.
\end{equation}
In terms of the above notation for $4\times 4$ matrices the eight
generators of the coset $G/H=SU(2,2)/(SO(1,3)\times SO(1,1))$ can be
chosen as \be\lb{generators} B=\left\{ {\rm lie\,}(G)\ominus{\rm
lie\,}(H)\right\}=\left\{\si_{3\mu},\; i\si_{2\mu} \right\}.\ee
\subsection{Charge vectors}
In the ALF case one assumes the following behavior of the target
space variables at spatial infinity:
\begin{eqnarray} \label{as}
f \sim 1-\frac{2M}{r}\,, \quad &&\chi \sim -\frac{2N}{r}\,,\nonumber\\
\phi \sim \frac{D}{r}\,,\quad && \kappa \sim \frac{2A}{r}\,, \nonumber\\
v_n \sim \frac{\sqrt{2}Q_n}{r}\,, \quad &&
u_n\sim\frac{\sqrt{2}P_n}{r}\,.
\end{eqnarray}
Then comparing (\ref{ME}, \ref{PQ}) with (\ref{etaE}, \ref{AB}) and
using the basis (\ref{generators}), the matrix generator $B$ can be
parametrized by two vectors $\mu^\alpha,\;\nu^\alpha$ in the
four-dimensional flat space with the $SO(1,3)$ metric
$\eta_{\mu\nu}$ of the signature $(-+++)$ as follows: \be\lb{B}
B(\mu,\nu)=\nu^0\si_{30}+i\nu^i\si_{2i}+i\mu^0\si_{20}+\mu^i\si_{3i},
\ee where explicitly
\be\lb{munu}
\mu^\al=\left(N-A,\; -\sqrt{2} Q_1,\; -\sqrt{2}
Q_2,\;M-D\right),\quad \nu^\al=\left(M+D,\; \sqrt{2} P_1,\; \sqrt{2}
P_2,\;N+A\right).
\ee
In the space of charges the $SO(1,3)\times SO(1,1)$ global symmetry
is acting. Fixing the corresponding ``Lorentz frame'' one can
simplify matrices describing physically distinct classes of the
solutions.
\section{Classification of null geodesics}
Squaring the matrix (\ref{B}) we obtain
\be\lb{B2}
B^2=(\mu^2-\nu^2)\si_{00}+2\si_{0i}\left(\nu^0\mu^i-\mu^0\nu^i\right)
+2i\epsilon_{ijk}\mu^i\nu^j\si_{1k},
\ee
where $\mu^2=\mu^\al\mu^\beta\eta_{\al\beta}$ etc. Its diagonal
part is proportional to the difference of squared charge vectors,
while the non-diagonal part is defined through their skew product.
\subsection{No-force condition}
To ensure that the three-space is flat, which presumably corresponds
to BPS solutions, we must impose the vanishing trace condition on
$B^2$, which in view of (\ref{B2}) reduces to the equality of the
norms of two charge vectors $\mu^2=\nu^2$, indeed
\be
\tr B^2=4(\mu^2-\nu^2)=0.
\ee
Substituting (\ref{munu}) we get the
relation between the asymptotic charges
\begin{equation}\label{BPS}
M^2+D^2+Q^2=N^2+A^2+P^2\,,
\end{equation}
where $Q^2=Q_nQ_n,\; P^2=P_n P_n$. This is the no-force condition in
the Euclidean case, where the mass, the dilaton charge and the
electric charges are attractive, while the NUT charge, the axion
charge and the magnetic charge are repulsive.
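The trace formula underlying the no-force condition can be tested numerically. The following Python sketch (an illustrative consistency check of our own, with random charge vectors) builds $B$ from (\ref{B}) and compares $\tr B^2$ with $4(\mu^2-\nu^2)$:
\begin{verbatim}
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [s0, s1, s2, s3]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def generator(mu, nu):            # B(mu, nu) of the parametrization above
    B = nu[0] * np.kron(s3, s0) + 1j * mu[0] * np.kron(s2, s0)
    for i in (1, 2, 3):
        B += 1j * nu[i] * np.kron(s2, sig[i]) + mu[i] * np.kron(s3, sig[i])
    return B

rng = np.random.default_rng(1)
mu, nu = rng.normal(size=4), rng.normal(size=4)
B = generator(mu, nu)
sq = lambda x: x @ eta @ x        # Minkowski square of a charge vector
print(np.isclose(np.trace(B @ B), 4 * (sq(mu) - sq(nu))))   # True
\end{verbatim}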
\subsection{Characteristic equation}
Imposing the condition (\ref{BPS}), we get for the third power of
$B$
\be
\frac12 B^3=\left( \mu^2\nu^0-(\mu\nu)\mu^0\right)\si_{30}+
\left((\mu\nu)\nu^i -\nu^2\mu^i\right)\si_{3i}-i
\left( \nu^2\mu^0-(\mu\nu)\nu^0\right)\si_{20}-i
\left((\mu\nu)\mu^i -\mu^2\nu^i\right)\si_{2i},
\ee
where $(\mu\nu)=\mu^\al\nu_\al$. The fourth power, again with
(\ref{BPS}), is
\be \lb{B4}
\frac14B^4=\left( (\mu\nu)^2-\mu^2\nu^2\right)\si_{00}.
\ee
It is easy to check that
\be \tr B=0,\quad \tr B^3=0,
\ee
so, together with (\ref{BPS}), one finds the following
characteristic equation for the matrix $B$
\be\label{s2} B^4+(\det B)I=0\,,
\ee
consistently with $B^4$ being proportional to the unit $4\times 4$
matrix. In view of (\ref{s2}), if the matrix $B$ is degenerate,
$\det B =0$, one has $B^4=0$, so the expansion of the exponential in
(\ref{AB}) contains only the terms up to cubic. In the
non-degenerate case the series is infinite. It turns out that in the
latter case most of the solutions contain singularities and are not
supersymmetric, so we do not discuss them here.
The degeneracy condition in terms of the charge vectors (restricted
by the no-force condition) according to (\ref{B4}) is one of the
two
\be\lb{dege}
2 (M\pm N)(D\mp A)=(Q_n\mp P_n)^2,
\ee
where the sum over $n$ is understood.
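The characteristic identity (\ref{s2}) is likewise easy to test. The following Python sketch (our own check) draws random charge vectors subject to the no-force constraint $\mu^2=\nu^2$ and verifies $B^4+(\det B)I=0$:
\begin{verbatim}
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [s0, s1, s2, s3]

def generator(mu, nu):
    B = nu[0] * np.kron(s3, s0) + 1j * mu[0] * np.kron(s2, s0)
    for i in (1, 2, 3):
        B += 1j * nu[i] * np.kron(s2, sig[i]) + mu[i] * np.kron(s3, sig[i])
    return B

rng = np.random.default_rng(2)
for _ in range(5):
    mu = rng.normal(size=4)
    mu2 = -mu[0]**2 + mu[1:] @ mu[1:]             # Minkowski square of mu
    u = rng.normal(size=3); u /= np.linalg.norm(u)
    sp = np.sqrt(abs(mu2) + 1.0) * u              # spatial part of nu
    nu = np.array([np.sqrt(sp @ sp - mu2), *sp])  # enforces nu^2 = mu^2
    B = generator(mu, nu)
    B4 = np.linalg.matrix_power(B, 4)
    print(np.allclose(B4 + np.linalg.det(B) * np.eye(4), 0))   # True
\end{verbatim}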
\subsection{Strongly degenerate case}
The rank of the degenerate $B$ can be either two or three. In the
first case $B^2=0$ and the coset matrix $M$ is linear in terms of
$B$:
\be \lb{ML}
M=M_\infty(I+B\tau)\,.
\ee
According to (\ref{B2}), vanishing of $B^2$, apart from (\ref{BPS})
imposes the following conditions on the charge vectors:
\be
\nu^0\mu^i-\mu^0\nu^i=0,\quad \mu^i\nu^j-\mu^j\nu^i=0,
\ee
which are equivalent to vanishing of the bivector $\mu^\al\wedge
\nu^\beta$.
This leads to different subclasses of solutions according to
whether the charge vectors $\mu^\al,\;\nu^\al$ are time-like,
space-like or null. Further details of classification of rank two
solutions in the case $p=1$ can be found in \cite{acg}. They include
solutions of all mentioned above asymptotic types.
\subsection{Weakly degenerate case}
In the case of rank three all terms in the series expansion of $M$
up to the third are non-zero:
\begin{equation}\label{w1}
M=M_\infty(I+B\tau+B^2\tau^2/2+B^3\tau^3/6)\,,
\end{equation}
while $B^4$ and higher terms vanish by virtue of the degeneracy
condition $\det B=0$. Now we have only two conditions on eight (for
$p=2$)
charges: (\ref{BPS}) and one of the two in (\ref{dege}). Again this
case includes a variety of new ALF, ALE and dilatonic instantons.
\subsection{Multiple harmonic functions}
The construction (\ref{AB}) may be generalized \cite{gc,bps} to the
case of several truly independent harmonic functions
$\tau_a,\;\Delta \tau_a =0$, by replacing the exponent in (\ref{AB})
by a linear superposition
\begin{equation} \label{mupt}
M = M_\infty \exp \left(\sum_a B_a \tau_a\right).
\end{equation}
This solves the field equations (\ref{sigeq}) provided that the
commutators $[B_a, B_b]$ commute with the $B_c$ (for the proof see
\cite{bps}):
\begin{equation} \label{dcom}
[\,[B_a, B_b], B_c] = 0 \,.
\end{equation}
The three-dimensional Einstein equations (\ref{ei}) generalize to
\begin{equation}
R_{ij} = \frac{1}{4} \,\sum_a \sum_b \tr (B_a B_b) \,\nabla_i\tau_a
\nabla_j\tau_b \,,
\end{equation}
so that the three-space is Ricci flat if the matrices $B_a$ satisfy
\begin{equation} \label{bal}
\tr (B_a B_b) = 0 \,.
\end{equation}
The number of independent harmonic functions on which an extremal
solution of the form (\ref{mupt}) may depend is limited by the
number of independent mutually orthogonal null vectors of the target
space. In the present case of Euclidean EMDA with two vector fields
this number is four. This gives a number of solutions, whose
explicit form (in the case $p=1$) can be found in \cite{acg}.
\section{Concluding remarks}
We described general structure of the space of extremal instantons
in N=4 D=4 supergravity as null geodesics of the coset
$G/H=SU(2,2)/\left(SO(1,3)\times SO(1,1)\right)$. A number of
particular $p=1$ new solutions was given in \cite{acg}. Apart from
some simple extremal solutions, which were previously known
explicitly in the purely scalar ALE sector \cite{axion}, new scalar
ALF and ALE were found, such as dilaton-axion dressed Taub-NUT,
Eguchi-Hanson and lens-space instantons. There are also new types of
wormholes interpolating between ALF or ALE and conical ALF spaces.
All electrically and magnetically charged solutions are entirely new
except for those which were (or could be) found by euclideanization
of known Lorentzian black hole and/or IWP-type solutions, which were
rederived in the general treatment as well. The new charged ALE
solutions include, among others, purely electric solutions, as well
as purely magnetic instantons with linear dilaton asymptotics.
\section*{Acknowledgments} The author thanks Cestmir Burdik for the
invitation to QTS7 conference in Prague and A. Strominger for useful
remarks. He is especially grateful to M. Azreg-A\"{\i}nou and G.
Cl\'ement for fruitful and enjoyable collaboration. The work was
supported by the RFBR project 11-02-01371-a.
\section*{References}
\subsection{Lengths statistics of random multicurves in large genus}
Let $X$ be a closed Riemann surface of genus $g \geq 2$ endowed with its conformal hyperbolic metric of constant curvature $-1$. A \emph{simple closed curve} on $X$ is a connected closed curve on $X$, non-homotopic to a point and without self-intersection. In the free homotopy class of a simple closed curve $\gamma$, there exists a unique geodesic representative with respect to $X$. We denote by $\ell_X(\gamma)$ the length of this geodesic representative.
A \emph{multicurve} on $X$ is a multiset of disjoint simple closed curves on $X$. Given a multicurve $\gamma$, a \emph{component} of $\gamma$ is a maximal family of freely homotopic curves in $\gamma$. The cardinality of a component is called its \emph{multiplicity} and the \emph{length} of a component is the sum of the lengths of the simple curves belonging to the component (or equivalently its multiplicity multiplied by the length of any simple closed curve in the component). A multicurve is called \emph{primitive} if all its components have multiplicity one. We denote by $\boldsymbol\ell^{\mkern1mu \downarrow}_X(\gamma)$ the vector of the lengths of each component sorted in decreasing order, by $\operatorname{\mathbf{mult}}(\gamma)$ the multiset of the multiplicities of the components of $\gamma$, and by $\operatorname{mult}(\gamma)$ the maximum of $\operatorname{\mathbf{mult}}(\gamma)$. Neither $\operatorname{\mathbf{mult}}(\gamma)$ nor $\operatorname{mult}(\gamma)$ depends on the hyperbolic structure $X$. We define $\ell_X(\gamma)$ as the sum of the entries of $\boldsymbol\ell^{\mkern1mu \downarrow}_X(\gamma)$ and the \emph{normalized length vector} to be
\[
\hat{\boldsymbol\ell}_X^{\mkern1mu \downarrow}(\gamma) \coloneqq \frac{\boldsymbol\ell^{\mkern1mu \downarrow}_X(\gamma)}{\ell_X(\gamma)}.
\]
We denote by $\mathcal{ML}_X(\mathbb{Z})$ the set of homotopy classes of multicurves on $X$. Our notation for the set of multicurves is explained by the fact that multicurves are the integral points of the space of measured laminations usually denoted $\mathcal{ML}_X$.
In order to make sense of convergence, we need all normalized vectors to belong to the same space. For an integer $k \geq 1$ and
a real number $r > 0$ let us define
\[
\Delta_{\leq r}^k
\coloneqq
\{(x_1,x_2,\ldots,x_k) \in [0,\infty)^k : x_1 + x_2 + \cdots + x_k \leq r\}.
\]
Let us also define
\[
\Delta_{\leq r}^\infty
\coloneqq
\{(x_1,x_2,\dots) \in [0,\infty)^{\mathbb{N}} : x_1 + x_2 + \cdots \leq r\}.
\]
For $k \leq k'$ we have an injection $\Delta_{\leq r}^k \to \Delta^{k'}_{\leq r}$ by completing vectors with zeros.
The infinite simplex $\Delta_{\leq r}^\infty$ is the inductive limit of these injections and we always identify $\Delta_{\leq r}^k$ as a subspace of $\Delta_{\leq r}^\infty$. In particular each vector $\hat{\boldsymbol\ell}_X^{\mkern1mu \downarrow}(\gamma)$ is naturally an element of $\Delta_{\leq 1}^\infty$ by completing its coordinates with infinitely many zeros.
As our aim is to study convergence of random infinite vectors, let us mention that $\Delta_{\leq 1}^\infty$ is a closed subset of $[0,1]^\mathbb{N}$ endowed with the product topology. This topology coincides with the topology of the inductive limit. When we consider convergence in distribution on $\Delta_{\leq 1}^\infty$, we mean convergence in the space of Borel probability measures on $\Delta_{\leq 1}^\infty$, which is a compact set.
The following result is a consequence of the works~\cite{AH21} and~\cite{Liu19}.
\begin{thm} \label{thm:Lg}
Let $g \geq 2$ and $m \in \mathbb{N} \cup \{+\infty\}$. There exists a random variable $L^{(g,m)\downarrow} = (L^{(g,m)\downarrow}_1, L^{(g,m)\downarrow}_2,\ldots)$ on $\Delta^{3g-3}_{\leq 1}$ with the following properties. For
any Riemann surface $X$ of genus $g$, as $R \to \infty$ we have the following convergence in distribution
\[
\frac{1}{s_X(R, m)} \sum_{\substack{\gamma \in \mathcal{ML}_X(\mathbb{Z}) \\ \ell_X(\gamma)\leq R \\ \operatorname{mult}(\gamma) \le m }} \delta_{\hat{\boldsymbol\ell}_X^{\mkern1mu \downarrow}(\gamma)} \xrightarrow[R \to \infty]{} L^{(g,m)\downarrow}
\]
where $\delta_{\hat{\boldsymbol\ell}_X^{\mkern1mu \downarrow}(\gamma)}$ is the Dirac mass at the vector $\hat{\boldsymbol\ell}_X^{\mkern1mu \downarrow}(\gamma)$ and $s_X(R, m) \coloneqq \# \{\gamma \in \mathcal{ML}_X(\mathbb{Z}) : \ell_X(\gamma)\leq R \text{ and } \operatorname{mult}(\gamma) \le m \}$ is the number of multicurves on $X$ of length at most $R$ and multiplicities at most $m$.
\end{thm}
We actually prove a more precise version of the above statement, Theorem~\ref{thm:LgMorePrecise}, in which the law of $L^{(g,m)\downarrow}$ is made explicit. Note that the limit depends only on the genus of $X$ and not on its hyperbolic metric.
\smallskip
The Poisson--Dirichlet distribution is a probability measure on $\Delta_{\leq 1}^\infty$. The simplest way to introduce it is via the \emph{stick-breaking process}. Let $U_1,U_2,\dots$ be i.i.d.\ random variables with law $\Beta(1,\theta)$ (i.e.\ they are supported on $]0,1]$ with density $\theta (1-x)^{\theta -1}$). Define the vector
\[
V \coloneqq (U_1, (1-U_1)\, U_2, (1-U_1)(1-U_2)\,U_3, \ldots).
\]
Informally, the components of $V$ are obtained by starting from a stick of length 1 identified with $[0,1]$. At the first stage, $U_1$ determines where we break off the first piece, and we are left with a stick of length $1-U_1$. We then repeat the process ad libitum.
The law of $V$ is the \emph{Griffiths-Engen-McCloskey distribution of parameter $\theta$} that we denote $\GEM(\theta)$. The \emph{Poisson--Dirichlet distribution of parameter $\theta$}, denoted $\PD(\theta)$, is the distribution of $V^\downarrow$, the vector $V$ with its entries sorted in decreasing order. For more details, we refer the reader to Section~\ref{ssec:PoissonDirichletAndGEM}. The distribution $\PD(1)$ is the limit distribution of the normalized cycle lengths of uniform random permutations. The distribution $\PD(\theta)$ appears when considering the Ewens distribution with parameter $\theta$ on the symmetric group. See Section~\ref{sssec:permutations} below for a more detailed discussion on permutations.
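The stick-breaking construction is easy to simulate. The following Python sketch (ours; the truncation level of the infinite vector is an assumption, harmless since the leftover mass decays geometrically) samples $\GEM(1/2)$ and sorts each sample to approximate $\PD(1/2)$:
\begin{verbatim}
import numpy as np

def sample_gem(theta, k, rng):
    """First k entries of a GEM(theta) vector via stick breaking."""
    u = rng.beta(1.0, theta, size=k)   # U_i ~ Beta(1, theta)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - u)[:-1]))
    return u * remaining               # V_i = U_i prod_{j<i}(1 - U_j)

rng = np.random.default_rng(42)
samples = np.array([sample_gem(0.5, 60, rng) for _ in range(100_000)])
# Sorting each sample approximates PD(1/2); the mean of the largest
# entry should be close to the tabulated value 0.758 quoted below.
print(np.sort(samples, axis=1)[:, -1].mean())
\end{verbatim}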
Our main result is the following.
\begin{thm}\label{thm:PD}
For any $m \in \mathbb{N} \cup \{+\infty\}$, the sequence $(L^{(g,m)\downarrow})_{g \ge 2}$ converges in distribution to $\PD(1/2)$ as $g \to \infty$.
\end{thm}
The most interesting cases of this convergence are $m=1$ (primitive multicurves) and $m=\infty$ (all multicurves). Let us insist that $L^{(g,1){\mkern1mu \downarrow}}$ and $L^{(g,+\infty){\mkern1mu \downarrow}}$ converge to the same limit as $g \to \infty$.
All marginals of the Poisson--Dirichlet law can be computed, see for example~\cite[Section~4.11]{ABT03}. In particular if $V = (V_1, V_2, \ldots) \sim \PD(\theta)$ then
\[
\mathbb{E}((V_j)^n)
=
\frac{\varGamma(\theta + 1)}{\varGamma(\theta + n)} \int_0^\infty \frac{(\theta E_1(x))^{j-1}}{(j-1)!} \, x^{n-1} e^{-x-\theta E_1(x)} \, dx
\]
where $E_1(x) \coloneqq\int_x^\infty \frac{e^{-y}}{y}\, dy$. The formulas can be turned into a computer program and values were tabulated in~\cite{Gri79,Gri88}. For $\theta=1/2$ we have
\[
\mathbb{E}(V_1) \approx 0.758,\quad
\mathbb{E}(V_2) \approx 0.171,\quad \text{and} \quad
\mathbb{E}(V_3) \approx 0.049.
\]
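These values can indeed be reproduced with a few lines of code. The following Python sketch (ours) evaluates the integral formula above numerically:
\begin{verbatim}
import numpy as np
from math import gamma, factorial
from scipy.integrate import quad
from scipy.special import exp1        # the exponential integral E_1

def pd_moment(theta, j, n):
    """E((V_j)^n) for V ~ PD(theta) via the integral formula above."""
    integrand = lambda x: ((theta * exp1(x))**(j - 1) / factorial(j - 1)
                           * x**(n - 1) * np.exp(-x - theta * exp1(x)))
    value, _ = quad(integrand, 0.0, np.inf)
    return gamma(theta + 1) / gamma(theta + n) * value

for j in (1, 2, 3):
    print(j, round(pd_moment(0.5, j, 1), 3))  # 0.758, 0.171, 0.049
\end{verbatim}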
\subsection{Further remarks}
\subsubsection{Square-tiled surfaces}
In this section we give an alternative statement of Theorem~\ref{thm:PD} in terms of square-tiled surfaces. The correspondence between statistics of multicurves and statistics of square-tiled surfaces is developed in~\cite{DGZZ21} and~\cite{AH20b}, and we refer the reader to these two references.
A \emph{square-tiled surface} is a connected surface obtained from gluing finitely many unit squares $[0,1] \times [0,1]$ along their edges by translation $z \mapsto z + u$ or ``half-translation'' $z \mapsto -z + u$. Combinatorially, one can label the squares from $1$ to $N$ and then a square-tiled surface is encoded by two involutions without fixed points $(\sigma, \tau)$ of $\{\pm 1, \pm 2, \ldots, \pm N\}$. More precisely, $\sigma$ encodes the horizontal gluings: $+i$ and $-i$ are respectively the right and left sides of the $i$-th square. The orbits of $\sigma$ whose elements have different signs are glued by translations and the ones whose elements have the same sign are glued by half-translations. And $\tau$ encodes the vertical gluings: $+i$ and $-i$ are respectively the top and bottom sides of the $i$-th square. The labelling is irrelevant in our definition and two pairs $(\sigma, \tau)$ and $(\sigma', \tau')$ encode the same square-tiled surface if there exists a permutation $\alpha$ of $\{\pm 1, \pm 2, \ldots, \pm N\}$ such that $\alpha(-i) = - \alpha(+i)$, $\sigma' = \alpha \circ \sigma \circ \alpha^{-1}$ and $\tau' = \alpha \circ \tau \circ \alpha^{-1}$.
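A small Python sketch (ours) makes this encoding concrete. The brute-force equivalence test below is only practical for very small $N$, and the two-square example is a flat torus, used purely to illustrate the data structure:
\begin{verbatim}
from itertools import permutations, product

def is_gluing(p, N):
    """Fixed-point-free involution of {+-1,...,+-N}, given as a dict."""
    dom = set(range(1, N + 1)) | set(range(-N, 0))
    return set(p) == dom and all(p[x] != x and p[p[x]] == x for x in p)

def conjugate(p, alpha):
    alpha_inv = {v: k for k, v in alpha.items()}
    return {x: alpha[p[alpha_inv[x]]] for x in p}

def equivalent(st1, st2, N):
    """Do two (sigma, tau) pairs encode the same square-tiled surface,
    i.e. differ by a relabelling alpha with alpha(-i) = -alpha(+i)?"""
    for perm in permutations(range(1, N + 1)):
        for signs in product((1, -1), repeat=N):
            alpha = {i: signs[i - 1] * perm[i - 1] for i in range(1, N + 1)}
            alpha.update({-k: -v for k, v in list(alpha.items())})
            if (conjugate(st1[0], alpha) == st2[0]
                    and conjugate(st1[1], alpha) == st2[1]):
                return True
    return False

sigma = {1: -2, -2: 1, 2: -1, -1: 2}  # square 1 glued to square 2 sideways
tau = {1: -1, -1: 1, 2: -2, -2: 2}    # each square glued top to bottom
assert is_gluing(sigma, 2) and is_gluing(tau, 2)
alpha = {1: 2, 2: 1, -1: -2, -2: -1}  # relabelling swapping the two squares
assert equivalent((sigma, tau),
                  (conjugate(sigma, alpha), conjugate(tau, alpha)), 2)
\end{verbatim}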
A square-tiled surface comes with a conformal structure and a quadratic differential coming from the conformal structure of the unit square and the quadratic differential $dz^2$ (both being preserved by translations and half-translations). This quadratic differential might have simple poles and we denote by $\mathcal{Q}_g(\mathbb{Z})$ the set of holomorphic square-tiled surfaces of genus $g$.
A square-tiled surface comes equipped with a filling pair of multicurves $(\gamma_h, \gamma_v)$ coming respectively from the gluings of the horizontal segments $[0,1] \times \{1/2\}$ and vertical segments $\{1/2\} \times [0,1]$ of each square. Conversely, the dual graph of a filling pair of multicurves in a surface of genus $g$ defines a square-tiled surface in $\mathcal{Q}_g(\mathbb{Z})$. Our notation comes from the fact that holomorphic square-tiled surfaces can be seen as integral points in the moduli space of quadratic differentials $\mathcal{Q}_g$.
A component of the multicurve $\gamma_h$ corresponds geometrically to a horizontal cylinder. For a square-tiled surface $M$ we denote by $\boldsymbol{A}^\downarrow(M)$ the normalized vector of areas of these horizontal cylinders sorted in decreasing order and by $\operatorname{height}(M)$ the maximum of their heights. Here, as in the introduction, normalized means that we divide by the sum of the entries of the vector, which coincides with $\area(M)$. The following is a particular case of~\cite[Theorem~1.29]{DGZZ21} using the explicit formulas for $L^{(g,m)}$ given in Theorem~\ref{thm:LgMorePrecise}.
\begin{thm}[\cite{DGZZ21}] \label{thm:LgSquareTiled}
Let $g \ge 2$ and $m \in \mathbb{N} \cup \{+\infty\}$. Let $L^{(g,m)\downarrow}$ be the random variable from Theorem~\ref{thm:Lg}. Then as $N \to \infty$ we have the following convergence in distribution
\[
\frac{1}{
\#\left\{
M \in \mathcal{Q}_g(\mathbb{Z}) :
\begin{array}{l}
\operatorname{height}(M) \le m \\
\area(M) \le N
\end{array}
\right\}
}
\sum_{\substack{M \in \mathcal{Q}_g(\mathbb{Z}) \\ \operatorname{height}(M) \le m \\ \area(M) \le N}}
\delta_{\boldsymbol{A}^\downarrow(M)}
\to
L^{(g,m)\downarrow}.
\]
\end{thm}
An important difference to notice between Theorem~\ref{thm:Lg} and Theorem~\ref{thm:LgSquareTiled} is that in the former the (hyperbolic) metric $X$ is fixed and we sum over the multicurves $\gamma$ while in the latter we sum over the discrete set of holomorphic square-tiled surfaces $M$.
Using Theorem~\ref{thm:LgSquareTiled}, our Theorem~\ref{thm:PD} admits the following reformulation.
\begin{cor}
The vector of normalized areas of horizontal cylinders of a random square-tiled surface of genus $g$ converges in distribution to $\PD(1/2)$ as $g$ tends to $\infty$.
\end{cor}
\subsubsection{Permutations and multicurves}
\label{sssec:permutations}
Given a permutation $\sigma$ in $S_n$ we denote by $K_n(\sigma)$ the number of orbits it has on $\{1, 2, \ldots, n\}$ or equivalently the number of cycles in its disjoint cycle decomposition. The \emph{Ewens measure with parameter $\theta$} on $S_n$ is the probability measure defined by
\[
\mathbb{P}_{n,\theta}(\sigma) \coloneqq \frac{\theta^{K_n(\sigma)}}{Z_{n,\theta}}
\quad
\text{where}
\quad
Z_{n,\theta} \coloneqq \sum_{\sigma \in S_n} \theta^{K_n(\sigma)}.
\]
Then under $\mathbb{P}_{n,\theta}$, as $n \to \infty$ we have that
\begin{itemize}
\item the random variable $K_n$ behaves as a Poisson distribution $\Poisson(\theta \log(n))$ (e.g.\ by means of a local limit theorem),
\item the normalized sorted vector of cycle lengths of $\sigma$ tends to $\PD(\theta)$,
\item the number of cycles of length $k$ of $\sigma$ converges to $\Poisson(\theta/k)$.
\end{itemize}
See for example~\cite{ABT03}.
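These classical facts are easy to observe numerically. The following Python sketch (ours) samples the cycle structure under $\mathbb{P}_{n,\theta}$ via the standard Chinese restaurant process:
\begin{verbatim}
import numpy as np

def ewens_cycle_lengths(n, theta, rng):
    """Cycle lengths of a permutation under the Ewens(theta) measure,
    sampled by the Chinese restaurant process."""
    cycles = []
    for i in range(n):
        # element i+1 opens a new cycle with probability theta/(i+theta);
        # otherwise it joins an existing cycle, chosen with probability
        # proportional to its current length
        if rng.random() < theta / (i + theta):
            cycles.append(1)
        else:
            j = rng.choice(len(cycles), p=np.array(cycles) / i)
            cycles[j] += 1
    return cycles

rng = np.random.default_rng(0)
n, theta = 2000, 0.5
samples = [ewens_cycle_lengths(n, theta, rng) for _ in range(2000)]
print(np.mean([len(c) for c in samples]))      # of order theta*log(n)
print(np.mean([c.count(1) for c in samples]))  # ~ Poisson(theta), mean 0.5
\end{verbatim}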
By analogy let us denote by $K^{(g,m)}$ the number of non-zero components of $L^{(g,m)}$. In~\cite{DGZZ20}, it is proven that $K^{(g,m)}$ behaves as a Poisson distribution with parameter $\frac{\log(g)}{2}$ (by means of a local limit theorem) independently of $m$. In other words, it behaves as the number of cycles $K_g(\sigma)$ of a random permutation $\sigma$ under $\mathbb{P}_{g,1/2}$.
Our Theorem~\ref{thm:PD} provides another connection between $L^{(g,m)}$ and $\mathbb{P}_{g,1/2}$. Namely, $L^{(g,m){\mkern1mu \downarrow}}$ is asymptotically close to the normalized sorted vector of the cycle length of $\sigma$ under $\mathbb{P}_{g,1/2}$.
Finally, let us mention that components of $L^{(g,m)}$ of order $o(1)$ are invisible in the convergence towards $\PD(1/2)$. It is a consequence of Theorem~\ref{thm:PD} that the macroscopic components, of constant order, carry the total mass. Building on the intuition that in the large genus asymptotic regime random multicurves on a surface $X$ of genus $g$ behave like the cycles of a random permutation in the symmetric group $S_g$, one should expect a Poisson limit for components of order $g^{-1}$ and no component of order $g^{-1-\epsilon}$. In a work in progress, we provide an affirmative answer to this intuition. However, because lengths are continuous parameters, the limit is a continuous Poisson process and not a discrete one supported on $\mathbb{N}$ as in the permutation case.
\subsection{Proof overview and structure of the paper}
The first step of the proof consists in writing an explicit expression for the random variable $L^{(g,m)\downarrow}$ that appears in Theorem~\ref{thm:Lg}, see Theorem~\ref{thm:LgMorePrecise} in Section~\ref{sec:RandomMulticurves}. The formula follows from the work of M.~Mirzakhani on pants decompositions~\cite{Mir08} and the result of F.~Arana-Herrera~\cite{AH21} and M.~Liu~\cite{Liu19} on length distribution for each fixed topological type of multicurves. The expression of $L^{(g,m)\downarrow}$ can be seen as a refinement of the formula for the Masur--Veech volume of the moduli space of quadratic differentials from~\cite{DGZZ21}.
The formula for $L^{(g,m)\downarrow}$ involves a super-exponential number of terms in $g$ (one term for each topological type of multicurve on a surface of genus $g$). However, in the large genus limit only $O(\log(g))$ terms contribute. This allows us to consider a simpler random variable $\tilde{L}^{(g,m,\kappa)\downarrow}$ which, asymptotically, coincides with $L^{(g,m)\downarrow}$. See Theorem~\ref{thm:reduction} in Section~\ref{sec:reduction}. This reduction is very similar to the one used for the large genus asymptotics of Masur--Veech volumes in~\cite{Agg21} and~\cite{DGZZ20}.
The core of our proof consists in proving the convergence of moments of the simpler variable $\tilde{L}^{(g,m,\kappa)\downarrow}$. We do not use directly $\tilde{L}^{(g,m,\kappa)\downarrow}$ but its size-biased version $\tilde{L}^{(g,m,\kappa)*}$. The definition of size bias and the link with the Poisson--Dirichlet distribution is explained in Section~\ref{sec:PDandGEM}. In Section~\ref{sec:LargeGenus}, we show that the moments $\tilde{L}^{(g,m,\kappa)*}$ converge to the moments of $\GEM(1/2)$ which is the size-biased version of the Poisson--Dirichlet process $\PD(1/2)$, see Theorem~\ref{thm:GEM}.
\subsection{Acknowledgement}
We warmly thank Anton Zorich who encouraged us to join our forces and knowledge from~\cite{Liu19} and~\cite{DGZZ20} to study the lengths statistics of random multicurves. The second author would like to thank Gr\'egoire Sergeant-Perthuis and Maud Szusterman for helpful conversations about probability theory.
The work of the first named author is partially supported by the ANR-19-CE40-0003 grant.
\section{Background material}
In this section we introduce notations and state results from the literature that are used in our proof.
\subsection{Multicurves and stable graphs}
\label{ssec:multicurvesAndStableGraphs}
Recall from the introduction that a multicurve on a hyperbolic surface $X$ of genus $g$ is a finite multiset of free homotopy classes of disjoint simple closed curves. We denote by $\mathcal{ML}_X(\mathbb{Z})$ the set of multicurves on $X$. The homotopy classes that appear in a multicurve $\gamma$ are called components. There are at most $3g-3$ of them. The multiplicity of a component is the number of times it is repeated in $\gamma$, and $\gamma$ is primitive if all the multiplicities are $1$.
Let us also recall our notations:
\begin{itemize}
\item $\ell_X(\gamma)$: total length of $\gamma$,
\item $\boldsymbol\ell^{\mkern1mu \downarrow}_X(\gamma)$: length vector of the components of $\gamma$,
\item $\operatorname{mult}(\gamma)$: maximum multiplicity of component in $\gamma$,
\item $\operatorname{\mathbf{mult}}(\gamma)$: multiset of multiplicities of components in $\gamma$.
\end{itemize}
The mapping class group $\Mod(X)$ of $X$ acts on multicurves. We call \emph{topological type} of a multicurve its equivalence class under the $\Mod(X)$-action. For each fixed genus $g$, there are finitely many topological types of primitive multicurves and countably many topological types of multicurves. They are conveniently encoded by stable graphs and weighted stable graphs respectively, which we define next. Informally, given a multicurve $\gamma$ with components $\gamma_1,\dots,\gamma_k$ and multiplicities $m_1,\dots,m_k$ we build a dual graph $\Gamma$ as follows:
\begin{itemize}
\item we add a vertex for each connected component of the complement $X \smallsetminus (\gamma_1 \cup \cdots \cup \gamma_k)$; the vertex $v$ carries an integer weight, the genus $g_v$ of the corresponding component,
\item we add an edge for each component $\gamma_i$ of the multicurve between the two vertices corresponding to the connected components bounded by $\gamma_i$; this edge carries a weight $m_i$.
\end{itemize}
More formally, a \emph{stable graph} $\Gamma$ is a 5-tuple $(V, H, \iota, \sigma, \{g_v\}_{v \in V})$ where
\begin{itemize}
\item $V$ is a finite set called \emph{vertices},
\item $H$ is a finite set called \emph{half-edges},
\item $\iota : H \to H$ is an involution without fixed points on $H$; each pair $\{h, \iota(h)\}$ is called an \emph{edge}
and we denote by $E(\Gamma)$ the set of edges,
\item $\sigma: H \to V$ is a surjective map ($\sigma(h)$ is the vertex at which $h$ is rooted),
\item $g_v \in \mathbb{Z}_{\geq 0}$,
\end{itemize}
such that
\begin{itemize}
\item (\emph{connectedness}) for each pair of vertices $u,v \in V$ there exists
a sequence of edges, $\{h_1, h'_1\}$, $\{h_2, h'_2\}$, \ldots, $\{h_n, h'_n\}$
such that $\sigma(h_1) = u$, $\sigma(h'_n) = v$ and for $i \in \{1, \ldots, n-1\}$
we have $\sigma(h'_i) = \sigma(h_{i+1})$,
\item (\emph{stability}) for each vertex $v\in V$ we have
\[
2g_v - 2 + \deg(v) > 0
\]
where $\deg(v) \coloneqq |\sigma^{-1}(v)|$ is the \emph{degree} of the vertex $v$.
\end{itemize}
Given a stable graph $\Gamma$, its \emph{genus} is
\[
g(\Gamma)
\coloneqq
|E| - |V| + 1 + \sum_{v\in V} g_v.
\]
An \emph{isomorphism} between two stable graphs $\Gamma=(V, H, \iota, \sigma, g)$ and $\Gamma' = (V', H', \iota', \sigma', g')$
is a pair of bijections $\phi: V \to V'$ and $\psi: H \to H'$ such that
\begin{itemize}
\item $\psi \circ \iota = \iota' \circ \psi$ (in other words, $\psi$ maps an edge to an edge)
\item $\phi \circ \sigma = \sigma' \circ \psi$,
\item for each $v \in V$, we have $g'_{\phi(v)} = g_v$.
\end{itemize}
Note that $\psi$ determines $\phi$ but it is convenient to record an automorphism as a pair $(\phi, \psi)$.
We denote by $\Aut(\Gamma)$ the set of automorphisms of $\Gamma$ and by $\mathcal{G}_g$ the finite set of isomorphism classes of stable graphs of genus $g$.
A \emph{weighted stable graph} is a pair $(\Gamma, \boldsymbol{m})$ where $\Gamma$ is a stable graph and $\boldsymbol{m} \in\mathbb{N}^{E(\Gamma)}$.
An \emph{isomorphism} between two weighted stable graphs $(\Gamma, \boldsymbol{m})$ and $(\Gamma', \boldsymbol{m}')$ is
an isomorphism $(\phi, \psi)$ between $\Gamma$ and $\Gamma'$ such that for each edge $e$ of $\Gamma$
we have $\boldsymbol{m}_{e} = \boldsymbol{m}'_{\psi(e)}$ (where we use $\psi(e)$ to denote $\{\psi(h), \psi(h')\}$
for the edge $e = \{h,h'\} \subset H$). We denote by $\Aut(\Gamma, \boldsymbol{m})$ the set of
automorphisms of the weighted graph $(\Gamma, \boldsymbol{m})$. There is a one-to-one correspondence between
topological types of multicurves and weighted stable graphs. Primitive multicurves correspond to the case where
all edges carry weight $1$.
\subsection{$\psi$-classes and Kontsevich polynomial}
\label{subsec:PsiClasses}
The formula for the random variable $L^{(g,m)\downarrow}$ that appears in Theorem~\ref{thm:Lg} involves intersection numbers of $\psi$-classes that we introduce now. These rational numbers are famously related to the Witten conjecture~\cite{Wit91} proven by Kontsevich~\cite{Kon92}.
Let $\overline{\mathcal{M}}_{g,n}$ denote the Deligne--Mumford compactification of the moduli space of smooth complex curves of genus $g$ with $n$ marked points.
There exist $n$ so-called \emph{tautological line bundles} $\mathcal{L}_1,\dots,\mathcal{L}_n \to \overline{\mathcal{M}}_{g,n}$ over $\overline{\mathcal{M}}_{g,n}$ such that the fiber of $\mathcal{L}_i$ at $(C;x_1,\dots,x_n) \in \overline{\mathcal{M}}_{g,n}$ is the cotangent space of $C$ at the $i$-th marked point $x_i$. The $i$-th \emph{psi-class} $\psi_i$ is defined as the first Chern class of the $i$-th tautological line bundle $c_1(\mathcal{L}_i)\in H^2(\overline{\mathcal{M}}_{g,n},\mathbb{Q})$. We use the following standard notation
\[
\langle\tau_{d_1} \cdots \tau_{d_n}\rangle_{g,n}
\coloneqq
\int_{\overline{\mathcal{M}}_{g,n}} \psi_1^{d_1} \cdots \psi_n^{d_n}
\]
when $d_1+\cdots +d_n = \dim_\mathbb{C}\overline{\mathcal{M}}_{g,n} = 3g-3+n$. All
these intersection numbers are positive rational numbers and can be
computed by recursive equations from $\langle \tau_0^3 \rangle_{0,3} = 1$
and $\langle \tau_1 \rangle_{1,1} = \frac{1}{24}$, see for example~\cite{ItzZub92}.
For our purpose, it is convenient to consider the \emph{Kontsevich polynomial} $V_{g,n}\in\mathbb{Q}[x_1,\dots,x_n]$ that gathers the intersection numbers into a symmetric polynomial in $n$ variables. More precisely,
\begin{align*}
V_{g,n}(x_1,\dots,x_n) &\coloneqq \frac{1}{2^{3g-3+n}} \sum_{\substack{(d_1,\dots,d_n)\in\mathbb{Z}^n_{\geq 0} \\ d_1+\cdots+d_n=3g-3+n}} \frac{\langle\tau_{d_1}\cdots\tau_{d_n}\rangle_{g,n}}{d_1!\cdots d_n!} \cdot x_1^{2d_1}\cdots x_n^{2d_n} \\
&= \int_{\overline{\mathcal{M}}_{g,n}} \exp \left( \sum_{i=1}^n \frac{x_i^2}{2} \psi_i \right).
\end{align*}
For later use we gather the list of small Kontsevich polynomials below
\begin{align*}
V_{0,3}(x_1,x_2,x_3) &= 1, \\
V_{0,4}(x_1,x_2,x_3,x_4) &= \frac{1}{2} (x_1^2 + x_2^2 + x_3^2 + x_4^2), \\
V_{1,1}(x_1) &= \frac{1}{48} x_1^2, \\
V_{1,2}(x_1,x_2) &= \frac{1}{192} (x_1^2 + x_2^2)^2.
\end{align*}
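As a sanity check of the definition, the following Python/SymPy sketch (ours) rebuilds these polynomials from the handful of correlators obtainable from the string and dilaton equations; the hard-coded correlator table is the only input:
\begin{verbatim}
from itertools import permutations
from sympy import symbols, Rational, factorial, expand

# small correlators <tau_{d_1}...tau_{d_n}>_{g,n}, keyed by (g, sorted d's)
corr = {
    (0, (0, 0, 0)): Rational(1),
    (0, (0, 0, 0, 1)): Rational(1),
    (1, (1,)): Rational(1, 24),
    (1, (0, 2)): Rational(1, 24),
    (1, (1, 1)): Rational(1, 24),
}

def V(g, n, xs):
    """Kontsevich polynomial V_{g,n} from the definition above."""
    total = 0
    for (gg, ds), c in corr.items():
        if gg != g or len(ds) != n:
            continue
        for p in set(permutations(ds)):  # all orderings of the exponents
            term = c
            for xi, d in zip(xs, p):
                term *= xi**(2 * d) / factorial(d)
            total += term
    return expand(total / 2**(3*g - 3 + n))

x1, x2 = symbols('x1 x2')
print(V(1, 1, [x1]))      # x1**2/48
print(V(1, 2, [x1, x2]))  # x1**4/192 + x1**2*x2**2/96 + x2**4/192
\end{verbatim}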
\subsection{Random multicurves}
M.\,Mirzakhani proved that the number of multicurves of length at most $R$ on a hyperbolic surface grows polynomially in $R$.
This result and some extensions of it are nicely presented in the book of V.~Erlandsson and J.~Souto~\cite{ES}.
Let $X$ be a hyperbolic surface of genus $g$. We define
\begin{equation} \label{eq:sXRgamma}
s_X(R,\gamma) \coloneqq \# \{\eta \in \Mod(X) \cdot \gamma : \ell_X(\eta) \leq R\}.
\end{equation}
\begin{thm}[{\cite[Theorem 1.1, 1.2 and 5.3]{Mir08}}]\label{thm:s=cN}
Let $X$ be a hyperbolic surface.
For any multicurve $\gamma\in\mathcal{ML}_X(\mathbb{Z})$ there exists a positive rational constant $c(\gamma)$ such that we have as $R \to \infty$,
\[
s_X(R,\gamma)
\sim B(X) \cdot \frac{c(\gamma)}{b_{g}} \cdot R^{6g-6}
\]
where $B(X)$ is the Thurston volume of the unit ball in the space of measured laminations $\mathcal{ML}_X$ with respect to the length function $\ell_X$, and
\[
b_g
=
\sum_{[\gamma]\in\mathcal{ML}_X(\mathbb{Z})/\Mod(X)} c(\gamma)
=
\int_{\mathcal{M}_g} B(X)\, dX.
\]
\end{thm}
The above theorem allows us to make sense of the notion of a random multicurve. Namely, we endow the set of topological types of multicurves $\mathcal{ML}_X(\mathbb{Z}) / \Mod(X)$ with the probability measure
which assigns $c(\gamma)/b_g$ to $[\gamma]$. We now provide an explicit expression for this probability. For a stable graph $\Gamma \in \mathcal{G}_g$ we define the
polynomial $F_\Gamma$ on the variables $\{x_e\}_{e \in E(\Gamma)}$ by
\begin{equation} \label{eq:stableGraphPolynomial}
F_\Gamma \left(\{x_e\}_{e \in E(\Gamma)} \right) =
\prod_{e \in E(\Gamma)} x_e \cdot \prod_{v \in V(\Gamma)} V_{g_v,n_v}(\bm{x}_v),
\end{equation}
where $\bm{x}_v$ is the multiset of variables $x_e$ where $e$ is an edge adjacent to $v$ and
$V_{g_v,n_v}$ are the Kontsevich polynomials defined in Section~\ref{subsec:PsiClasses}. In
the case where $e$ is a loop based at $v$, the variable $x_e$ is repeated twice in $\bm{x}_v$.
\begin{figure}[!ht]
\begin{tabular}{c|c|l}
\hline
multicurve & stable graph & polynomial $F_\Gamma$ \\
\hline \hline
\raisebox{-.5\height}{\includegraphics[scale=0.3]{genus_two_11.pdf}} &
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_11.pdf}} &
\begin{minipage}{0.4\textwidth}
$x_a V_{1,2}(x_a, x_a)$ \\
$= \frac{1}{48} x_a^5$
\end{minipage}
\\
\hline
\raisebox{-.5\height}{\includegraphics[scale=0.3]{genus_two_12.pdf}} &
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_12.pdf}} &
\begin{minipage}{0.4\textwidth}
$x_a V_{1,1}(x_a) V_{1,1}(x_a)$ \\
$= \frac{1}{2304} x_a^5$
\end{minipage}
\\
\hline
\raisebox{-.5\height}{\includegraphics[scale=0.3]{genus_two_21.pdf}} &
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_21.pdf}} &
\begin{minipage}{0.4\textwidth}
$x_a x_b V_{0,4}(x_a, x_a, x_b, x_b)$ \\
$= x_a^3 x_b + x_a x_b^3$
\end{minipage}
\\
\hline
\raisebox{-.5\height}{\includegraphics[scale=0.3]{genus_two_22.pdf}} &
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_22.pdf}} &
\begin{minipage}{0.4\textwidth}
$x_a x_b V_{0,3}(x_a, x_a, x_b) V_{1,1}(x_b)$ \\
$= \frac{1}{48} x_a x_b^3$
\end{minipage}
\\
\hline
\raisebox{-.5\height}{\includegraphics[scale=0.3]{genus_two_31.pdf}} &
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_31.pdf}} &
\begin{minipage}{0.4\textwidth}
$x_a x_b x_c V_{0,3}(x_a,x_a,x_b) V_{0,3}(x_b,x_c,x_c)$ \\
$= x_a x_b x_c$
\end{minipage}
\\
\hline
\raisebox{-.5\height}{\includegraphics[scale=0.3]{genus_two_32.pdf}} &
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_32.pdf}} &
\begin{minipage}{0.4\textwidth}
$x_a x_b x_c V_{0,3}(x_a,x_b,x_c) V_{0,3}(x_a,x_b,x_c)$ \\
$= x_a x_b x_c$
\end{minipage}
\\
\hline
\end{tabular}
\caption{The list of topological types of primitive multicurves in genus 2,
their associated stable graphs and their corresponding polynomials $F_\Gamma$.
The labels on the edges are used as variable indices in $F_\Gamma$.}
\end{figure}
\begin{rem} \label{rk:normalization}
The polynomials $F_\Gamma$ appeared first in Mirzakhani's work~\cite{Mir08}, see in particular Theorem 5.3. They were related to square-tiled surfaces and Masur--Veech volumes in~\cite{DGZZ21}, though with a different normalization. Namely, the polynomial $P_\Gamma$ from~\cite{DGZZ21} is related to $F_\Gamma$ by
\[
P_\Gamma = 2^{4g-2} \cdot \frac{(4g-4)!}{(6g-7)!} \cdot \frac{1}{|\Aut (\Gamma) |} \cdot F_\Gamma.
\]
The normalization of $F_\Gamma$ is identical to the conventions used in~\cite{ABCDGLW} and simplifies the computations of the present article.
\end{rem}
Following~\cite{DGZZ21}, for a weighted stable graph $(\Gamma, \boldsymbol{m})$ we denote by $\mathcal{Y}_{\boldsymbol{m}}\colon \mathbb{Q}[\{x_e\}_{e \in E(\Gamma)}] \to \mathbb{Q}$ the linear operator defined on monomials by
\[
\mathcal{Y}_{\boldsymbol{m}}(x_1^{n_1}x_2^{n_2}\cdots x_k^{n_k})
\coloneqq
\frac{n_1! n_2!\cdots n_k!}{\boldsymbol{m}_1^{n_1+1}\boldsymbol{m}_2^{n_2+1}\cdots \boldsymbol{m}_k^{n_k+1}}
\]
and for $m \in \mathbb{N} \cup \{+\infty\}$, set
\[
\mathcal{Z}_m \coloneqq \sum_{\substack{\boldsymbol{m} \in \mathbb{N}^{E(\Gamma)} \\ \forall e \in E(\Gamma), \boldsymbol{m}_e \le m}} \mathcal{Y}_{\boldsymbol{m}}.
\]
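The operator $\mathcal{Y}_{\boldsymbol{m}}$ is immediate to implement. The following Python/SymPy sketch (ours) applies it to the polynomial $x_a^3 x_b + x_a x_b^3$ of the two-loop graph from the table above:
\begin{verbatim}
from sympy import symbols, factorial, Poly

def Y(poly, xvars, ms):
    """Apply the linear operator Y_m monomial by monomial."""
    total = 0
    for monom, coeff in Poly(poly, *xvars).terms():
        term = coeff
        for n, m in zip(monom, ms):
            term *= factorial(n) / m**(n + 1)
        total += term
    return total

xa, xb = symbols('xa xb')
ma, mb = symbols('ma mb', positive=True)
F = xa**3*xb + xa*xb**3   # F_Gamma for the graph with two loops
print(Y(F, (xa, xb), (ma, mb)))  # 6/(ma**4*mb**2) + 6/(ma**2*mb**4)
\end{verbatim}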
We derive the following directly from~\cite{DGZZ21}:
\begin{thm} \label{thm:c=vol}
Let $\gamma$ be a multicurve in genus $g$ and $(\Gamma, \boldsymbol{m})$ the dual weighted stable graph. Then
\[
c(\gamma) = \frac{1}{(6g-6)!}\, \frac{1}{|\Aut(\Gamma, \boldsymbol{m})|} \, \mathcal{Y}_{\boldsymbol{m}}(F_\Gamma).
\]
Furthermore
\[
b_g = \frac{1}{(6g-6)!}\, \sum_{\Gamma \in \mathcal{G}_g} \frac{1}{|\Aut(\Gamma)|} \, \mathcal{Z}_{+\infty}(F_\Gamma).
\]
\end{thm}
\begin{rem} \label{rk:automorphisms}
In Theorem~\ref{thm:c=vol} we fix a misconception in~\cite{DGZZ21} about automorphisms of multicurves (or equivalently of weighted stable graphs). Indeed, the way we defined automorphisms of stable graphs and weighted stable graphs in Section~\ref{ssec:multicurvesAndStableGraphs} makes the following formula valid
\[
\sum_{\Gamma} \frac{1}{|\Aut(\Gamma)|} \mathcal{Z}_{+\infty}(F_\Gamma) = \sum_{(\Gamma, \boldsymbol{m})} \frac{1}{|\Aut(\Gamma, \boldsymbol{m})|} \mathcal{Y}_{\boldsymbol{m}}(F_\Gamma)
\]
where the sums are taken over isomorphism classes of respectively stable graphs of genus $g$ and weighted stable graphs of genus $g$.
\end{rem}
\begin{proof}
Up to the correction of Remark~\ref{rk:automorphisms} this is exactly~\cite[Theorem 1.22]{DGZZ21} (see Remark~\ref{rk:normalization} for the difference between $P_\Gamma$ and $F_\Gamma$).
\end{proof}
\begin{figure}[!ht]
\begin{tabular}{c|l}
\hline
stable graph $\Gamma$ & value of $\mathcal{Y}_{\boldsymbol{m}}(F_\Gamma)$ \\
\hline \hline
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_11.pdf}} &
$\displaystyle\frac{5}{2} \cdot \frac{1}{\boldsymbol{m}_a^6}$
\\
\hline
\raisebox{-.35\height}{\includegraphics[scale=1]{genus_two_graph_12.pdf}} &
$\displaystyle\frac{5}{96} \cdot \frac{1}{\boldsymbol{m}_a^6}$
\\
\hline
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_21.pdf}} &
$\displaystyle 6 \left( \frac{1}{\boldsymbol{m}_a^4 \cdot \boldsymbol{m}_b^2} + \frac{1}{\boldsymbol{m}_a^2 \cdot \boldsymbol{m}_b^4} \right)$
\\
\hline
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_22.pdf}} &
$\displaystyle \frac{1}{8} \cdot \frac{1}{\boldsymbol{m}_a^2 \cdot \boldsymbol{m}_b^4}$
\\
\hline
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_31.pdf}} &
$\displaystyle \frac{1}{\boldsymbol{m}_a^2 \cdot \boldsymbol{m}_b^2 \cdot \boldsymbol{m}_c^2}$
\\
\hline
\raisebox{-.5\height}{\includegraphics[scale=1]{genus_two_graph_32.pdf}} &
$\displaystyle \frac{1}{\boldsymbol{m}_a^2 \cdot \boldsymbol{m}_b^2 \cdot \boldsymbol{m}_c^2}$
\\
\hline
\end{tabular}
\caption{The list of topological types of primitive multicurves in genus 2
and the associated values $\mathcal{Y}_{\boldsymbol{m}}(F_\Gamma)$, which are proportional
to $c(\gamma)$ (see Theorem~\ref{thm:c=vol}).}
\end{figure}
\subsection{Asymptotics of $\psi$-correlators and $b_g$}
Our proof of Theorem~\ref{thm:PD} uses crucially the asymptotics of $\psi$-intersections and Masur--Veech volumes from~\cite{Agg21} and further developed in~\cite{DGZZ20}.
\begin{thm}[\cite{Agg21}] \label{thm:AggarwalAsymptotics}
For $g,n\in\mathbb{N}$ and $\bm{d} = (d_1, \ldots, d_n)\in\mathbb{Z}_{\geq 0}^n$ with $d_1 + \cdots + d_n = 3g-3+n$, let $\epsilon(\bm{d})$ be defined by
\[
\langle\tau_{d_1} \cdots \tau_{d_n}\rangle_{g,n} = \frac{(6g-5+2n)!!}{(2d_1+1)!!\cdots (2d_n+1)!!} \cdot \frac{1}{g! \cdot 24^g} \cdot (1+\epsilon(\bm{d})).
\]
Then
\[
\lim_{g \to \infty}
\sup_{n < \sqrt{g/800}}
\sup_{\substack{\bm{d} = (d_1, \ldots, d_n) \\ d_1 + \cdots + d_n = 3g-3+n}} \epsilon(\bm{d}) = 0.
\]
\end{thm}
For $m \in \mathbb{N} \cup \{+\infty\}$ we define
\begin{equation} \label{eq:bgm}
b_{g,m} := \frac{1}{(6g-6)!} \sum_{\Gamma \in \mathcal{G}_g} \frac{\mathcal{Z}_m(F_\Gamma)}{|\Aut(\Gamma)|}.
\end{equation}
Note that $b_g = b_{g,+\infty}$.
\begin{rem}
We warn the reader that the constant denoted $b_{g,m}$ in this article has
nothing to do with the analogue of $b_g$ in the context of surfaces of
genus $g$ with $n$ boundaries which is denoted $b_{g,n}$ in~\cite{Mir16}
and~\cite{DGZZ21}.
\end{rem}
For $m \in \mathbb{N} \cup \{+\infty\}$ and a real number $\kappa > 1$ we also define
\begin{equation} \label{eq:bgmkappa}
\tilde{b}_{g,m,\kappa} \coloneqq
\frac{1}{(6g-6)!}
\sum_{\substack{\Gamma \in \mathcal{G}_g \\ |V(\Gamma)| = 1 \\ |E(\Gamma)| \le \kappa \frac{\log(6g-6)}{2}}} \frac{1}{|\Aut(\Gamma)|}
\sum_{\substack{\boldsymbol{m} \in \mathbb{N}^{E(\Gamma)} \\ \forall e \in E(\Gamma), \boldsymbol{m}_e \leq m}} \mathcal{Y}_{\boldsymbol{m}}(F_\Gamma).
\end{equation}
As its definition contains fewer terms, $\tilde{b}_{g,m,\kappa} \le b_{g,m}$.
We will use the asymptotic results of~\cite{Agg21} and~\cite{DGZZ20} in the following form.
\begin{thm}[\cite{Agg21}, \cite{DGZZ20}] \label{thm:A+DGZZtruncatedSum} \label{thm:A+DGZZbgAsymptotics}
Let $m \in \mathbb{N} \cup \{+\infty\}$ and $\kappa > 1$. Then as $g \to \infty$ we have
\[
b_{g,m} \sim \tilde{b}_{g,m,\kappa} \sim \frac{1}{\pi} \cdot \frac{1}{(6g-6) \cdot (4g-4)!} \cdot \sqrt{\frac{m}{m+1}} \cdot \left( \frac{4}{3} \right)^{4g-4}.
\]
\end{thm}
\section{Length vectors of random multicurves}
\label{sec:RandomMulticurves}
The aim of this section is to state and prove a refinement of Theorem~\ref{thm:Lg} that provides an explicit description of the random variable $L^{(g,m)}$. For each weighted stable graph $(\Gamma, \boldsymbol{m})$ we define a random variable $U^{(\Gamma, \boldsymbol{m})}$. We then explain how $L^{(g,m)}$ is obtained from them.
Let $\Gamma$ be a stable graph and let $k \geq |E(\Gamma)|$. For
each injection $\iota \colon E(\Gamma) \to \{1,2,\ldots,k\}$,
we define an injection
$g_{\Gamma,\iota} \colon \mathbb{R}^{E(\Gamma)} \to \mathbb{R}^k$
by
\[
g_{\Gamma,\iota}(\{x_e\}_{e \in E(\Gamma)}) = (y_1, y_2, \ldots, y_k),
\qquad
\text{where }
y_i = \left\{ \begin{array}{ll}
x_{\iota^{-1}(i)} & \text{if $i \in \iota(E(\Gamma))$} \\
0 & \text{otherwise.}
\end{array} \right.
\]
Given a measure $\mu$ on $\mathbb{R}^{E(\Gamma)}$ we define its \emph{$k$-th symmetrization}
to be the measure on $\mathbb{R}^k$ given by
\[
G_{\Gamma,k}(\mu) \coloneqq \frac{(k - |E(\Gamma)|)!}{k!} \sum_{\iota} (g_{\Gamma,\iota})_* (\mu).
\]
The $k$-th symmetrization is supported on the subspaces of dimension $|E(\Gamma)|$
generated by basis vectors. Because of the coefficient $(k - |E(\Gamma)|)!/k!$, the
total weights of the measures $\mu$ and $G_{\Gamma,k}(\mu)$ are the same.
We prove the following refinement of Theorem~\ref{thm:Lg}.
\begin{thm}
\label{thm:LgMorePrecise}
Let $L^{(g,m)}$ be the random variable on $\Delta^{3g-3}_{\leq 1}$ with density
\begin{equation} \label{eq:LgAsSum}
\frac{1}{(6g-6) \cdot b_{g,m}}
\sum_{\Gamma \in \mathcal{G}_g} \frac{1}{|\Aut(\Gamma)|} \sum_{\substack{\boldsymbol{m} \in \mathbb{N}^{E(\Gamma)} \\ \forall e \in E(\Gamma), \boldsymbol{m}_e \le m}} G_{\Gamma,3g-3}(\mu_{\Gamma,\boldsymbol{m}})
\end{equation}
where $b_{g,m}$ is defined in~\eqref{eq:bgm} and $\mu_{\Gamma,\boldsymbol{m}}$ is the measure on $\Delta^{E(\Gamma)}_{=1}$ with density
\begin{equation} \label{eq:muGammam}
F_\Gamma\left(\left\{\frac{x_e}{\boldsymbol{m}_e}\right\}_{e \in E(\Gamma)}\right)
\cdot
\prod_{e \in E(\Gamma)} \frac{1}{\boldsymbol{m}_e}
\end{equation}
where $F_\Gamma$ is the polynomial defined in~\eqref{eq:stableGraphPolynomial}.
Then
\begin{equation} \label{eq:SumAllCurves}
\frac{1}{s_X(R, m)} \sum_{\substack{\gamma \in \mathcal{ML}_X(\mathbb{Z}) \\ \ell_X(\gamma) \leq R \\ \operatorname{mult}(\gamma) \le m}} \delta_{\hat{\boldsymbol\ell}_X^{\mkern1mu \downarrow}(\gamma)} \xrightarrow[R \to \infty]{} L^{(g,m)\downarrow}
\end{equation}
where $s_X(R, m) \coloneqq \# \{\gamma \in \mathcal{ML}_X(\mathbb{Z}) : \ell_X(\gamma) \leq R \text{ and } \operatorname{mult}(\gamma) \leq m\}$ is the number of multicurves on $X$ of length at most $R$ and multiplicity at most $m$, and $L^{(g,m)\downarrow}$ is the vector $L^{(g,m)}$ sorted in decreasing order.
\end{thm}
The study of the length vector of multicurves of a given topological type was initiated by M.\,Mirzakhani in~\cite{Mir16}. She studied the special case of a maximal multicurve, corresponding to a pants decomposition. The general case that we present now was proved independently in~\cite{AH21} and~\cite{Liu19}.
\begin{thm}[\cite{AH21}, \cite{Liu19}] \label{thm:AHLiu}
Let $X$ be a hyperbolic surface and $\gamma$ a multicurve on $X$ with $k$ components. Let $(\Gamma, \boldsymbol{m})$ be a weighted stable graph dual to $\gamma$. Let $U^{(\Gamma,\boldsymbol{m})}$ be the random variable on $\Delta^{k}_{=1}$ with density
\begin{equation}\label{eq:pdf}
\frac{(6g-7)!}{\mathcal{Y}_{\boldsymbol{m}}(F_\Gamma)} \frac{1}{\boldsymbol{m}_1 \cdots \boldsymbol{m}_k} \cdot F_\Gamma \left(\frac{x_1}{\boldsymbol{m}_1}, \dots, \frac{x_k}{\boldsymbol{m}_k} \right)
\end{equation}
where we have identified $E(\Gamma)$ with $\{1, \ldots, k\}$ so that $F_\Gamma$ is a polynomial in $x_1, \ldots, x_k$. Then we have the convergence in distribution
\[
\frac{1}{s_X(R,\gamma)}
\sum_{\substack{\eta \in \Mod(X) \cdot \gamma \\ \ell_X(\eta) \leq R}} \delta_{\hat{\boldsymbol\ell}_X^{\mkern1mu \downarrow}(\eta)} \xrightarrow[R \to \infty]{} U^{(\Gamma,\boldsymbol{m})\downarrow}
\]
where $U^{(\Gamma,\boldsymbol{m})\downarrow}$ is the sorted version of $U^{(\Gamma,\boldsymbol{m})}$ and $s_X(R,\gamma)$ is defined in~\eqref{eq:sXRgamma}.
\end{thm}
We endow $\Delta^k_{\le r}$ with the restriction of the Lebesgue measure on $\mathbb{R}^k$ that we denote by $\lambda_{\le r}^k$. We define the slice $\Delta_{=r}^k$ inside $\Delta^k_{\le r}$ as
\[
\Delta_{=r}^k
\coloneqq
\{(x_1,x_2,\ldots,x_k) \in [0,\infty)^k : x_1 + x_2 + \cdots + x_k = r\}
\]
and its infinite counterpart
\[
\Delta_{=r}^\infty
\coloneqq
\{(x_1,x_2,\ldots) \in [0,\infty)^\mathbb{N} : x_1 + x_2 + \cdots = r\}.
\]
Let us mention that $\Delta_{\le r}^\infty$ is closed for the product topology in $[0,r]^\mathbb{N}$ and hence compact. However $\Delta_{=r}^\infty$ is dense in $\Delta_{\le r}^\infty$. For this reason, it is more convenient to work with measures on $\Delta_{\le r}^\infty$ even though they are ultimately supported on $\Delta_{=r}^\infty$.
On $\Delta^k_{= r}$, which is contained in a hyperplane of $\mathbb{R}^k$, we consider the Lebesgue measure, denoted $\lambda^k_{=r}$, induced by any choice of $k-1$ coordinates among $x_1$, \ldots, $x_k$. The latter measure is well defined since the change of variables between different choices has determinant $\pm 1$. We start with an elementary integration lemma.
\begin{lem}\label{lem:int_sp}
Let $d_1,\dots,d_k\in\mathbb{R}_{\geq 0}$. Then
\[
\int_{\Delta_{\leq r}^k} x_1^{d_1}x_2^{d_2}\cdots x_k^{d_k} \, d\lambda^k_{\leq r}
= \frac{d_1!\cdots d_k!}{(d_1+\cdots+d_k+k)!} \cdot r^{d_1+\cdots+d_k+k}
\]
and
\[
\int_{\Delta_{=r}^k} x_1^{d_1}x_2^{d_2}\cdots x_k^{d_k} \, d\lambda^k_{=r}
= \frac{d_1!\cdots d_k!}{(d_1+\cdots+d_k+k-1)!} \cdot r^{d_1+\cdots+d_k+k-1},
\]
\end{lem}
Here the factorial of a real number has to be understood by means of the analytic continuation
given by the gamma function: $x! = \varGamma(x+1)$.
\begin{rem} \label{rk:proba}
Using Lemma~\ref{lem:int_sp}, let us check that~\eqref{eq:LgAsSum} and~\eqref{eq:pdf} are indeed densities of probability measures. From the second equation in the statement of Lemma~\ref{lem:int_sp} it follows that the total mass of~\eqref{eq:muGammam} is $\mathcal{Y}_{\boldsymbol{m}}(F_\Gamma)/(6g-7)!$, namely
\[
\mathcal{Y}_{\boldsymbol{m}}(F_\Gamma) = (6g-7)! \int_{\Delta^k_{=1}} \frac{1}{\boldsymbol{m}_1\cdots \boldsymbol{m}_k} \, F_\Gamma \left( \frac{x_1}{\boldsymbol{m}_1}, \ldots, \frac{x_k}{\boldsymbol{m}_k}\right) d\lambda^k_{=1}(x_1, \ldots, x_k).
\]
Indeed, each monomial that appears in $F_\Gamma$ has $k$ variables and total degree $6g-6-k$, so the denominator $(6g-6-k+k-1)! = (6g-7)!$ coming from the formula of Lemma~\ref{lem:int_sp} compensates the prefactor $(6g-7)!$ above, while the numerator matches the definition of $\mathcal{Y}_{\boldsymbol{m}}$.
\end{rem}
\begin{proof}[Proof of Lemma~\ref{lem:int_sp}]
For $x > 0$ real and $\alpha,\beta \geq 1$ integral, we have the
following scaling of the beta function
\begin{equation}\label{eq:beta}
\int_0^x t^{\alpha-1}(x-t)^{\beta-1}\, dt
=
\frac{\varGamma(\alpha) \, \varGamma(\beta)}{\varGamma(\alpha+\beta)}\, x^{\alpha+\beta-1}.
\end{equation}
This implies that
\[
\int_{\Delta^k_{\leq r}} \frac{x_1^{d_1}}{d_1!} \frac{x_2^{d_2}}{d_2!} \cdots \frac{x_k^{d_k}}{d_k!} \, d\lambda^k_{\leq r}
=
\int_{\Delta^{k-1}_{\leq r}}
\frac{x_1^{d_1}}{d_1!} \frac{x_2^{d_2}}{d_2!} \cdots \frac{x_{k-1}^{d_{k-1}+d_k+1}}{(d_{k-1}+d_k+1)!} \, d\lambda^{k-1}_{\leq r}
\]
and
\[
\int_{\Delta^k_{=r}} \frac{x_1^{d_1}}{d_1!} \frac{x_2^{d_2}}{d_2!} \cdots \frac{x_k^{d_k}}{d_k!} \, d\lambda^k_{=r}
=
\int_{\Delta^{k-1}_{=r}}
\frac{x_1^{d_1}}{d_1!} \frac{x_2^{d_2}}{d_2!} \cdots \frac{x_{k-1}^{d_{k-1}+d_k+1}}{(d_{k-1}+d_k+1)!} \, d\lambda^{k-1}_{=r}.
\]
The two equations in the statement then follow by induction.
\end{proof}
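The first identity of Lemma~\ref{lem:int_sp} can be checked symbolically in small cases. Here is a Python/SymPy sketch (ours) for $k=2$, $d=(2,3)$ and $r=1$:
\begin{verbatim}
from sympy import symbols, integrate, factorial

x1, x2 = symbols('x1 x2', nonnegative=True)
d1, d2 = 2, 3

# iterated integral over the simplex x1 + x2 <= 1
lhs = integrate(integrate(x1**d1 * x2**d2, (x2, 0, 1 - x1)), (x1, 0, 1))
rhs = factorial(d1) * factorial(d2) / factorial(d1 + d2 + 2)
print(lhs == rhs, lhs)   # True 1/420
\end{verbatim}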
\begin{proof}[Proof of Theorem~\ref{thm:LgMorePrecise}]
We gather the contributions of the different topological types of multicurves given by Theorem~\ref{thm:AHLiu} of F.~Arana-Herrera and M.~Liu. By Theorem~\ref{thm:s=cN} of M.~Mirzakhani, for any multicurve $\gamma\in\mathcal{ML}_X(\mathbb{Z})$, the asymptotic density of its orbit in $\mathcal{ML}_X(\mathbb{Z})$ is $\frac{c(\gamma)}{b_g}$. Theorem~\ref{thm:c=vol} then provides the values of $c(\gamma)$ and $b_{g,m}$ in terms of the stable graph polynomials $F_\Gamma$.
\end{proof}
\begin{rem}
In~\cite{Liu19}, the length vector $\boldsymbol\ell_X(\gamma)$ for a multicurve $\gamma = \boldsymbol{m}_1\gamma_1 + \cdots + \boldsymbol{m}_k\gamma_k$ is defined as $(\ell_X(\gamma_1),\dots,\ell_X(\gamma_k))$ instead of $(\ell_X(\boldsymbol{m}_1 \gamma_1),\dots,\ell_X(\boldsymbol{m}_k\gamma_k))$ as in this paper, and the limit distribution is thus supported on the (non-standard) simplex
\[
\{(x_1,\dots,x_k)\in\mathbb{R}^k : x_1,\dots,x_k\geq 0,\ \boldsymbol{m}_1x_1 + \cdots + \boldsymbol{m}_kx_k = 1\}.
\]
As a consequence, our distribution is the push-forward of the distribution in~\cite{Liu19} under the map $(x_1,\dots,x_k) \mapsto (\boldsymbol{m}_1x_1,\dots,\boldsymbol{m}_kx_k)$.
\end{rem}
\section{Reduction in the asymptotic regime}
\label{sec:reduction}
The random variable $L^{(g,m)}$ appearing in Theorem~\ref{thm:LgMorePrecise} is delicate
to study because it involves a huge number of terms. Using Theorem~\ref{thm:A+DGZZtruncatedSum}
from~\cite{Agg21} and~\cite{DGZZ20} we show that we can restrict to a sum involving only
$O(\log(g))$ terms associated to non-separating multicurves.
We denote by $\Gamma_{g,k}$ the stable graph of genus $g$
with a vertex of genus $g-k$ and $k$ loops. To simplify the notation we fix a bijection
between the edges of $\Gamma_{g,k}$ and $\{1,2,\ldots,k\}$ so that $F_{\Gamma_{g,k}}$
is a polynomial in $\mathbb{Q}[x_1, \ldots, x_k]$. Note that because the edges in $\Gamma_{g,k}$
are not distinguishable, the polynomial $F_{\Gamma_{g,k}}$ is symmetric.
Using the same notation as in Theorem~\ref{thm:LgMorePrecise} we have the following
result.
\begin{thm} \label{thm:reduction}
For $m \in \mathbb{N} \cup \{+\infty\}$ and $\kappa > 1$, let $\tilde{L}^{(g,m,\kappa)}$ be the random variable on $\Delta^{3g-3}_{\leq 1}$ with density
\begin{equation} \label{eq:LgkappaAsSum}
\frac{1}{(6g-6) \cdot \tilde{b}_{g,m,\kappa}}
\sum_{k=1}^{\kappa \cdot \frac{\log(6g-6)}{2}} \frac{1}{|\Aut \Gamma_{g,k}|} \sum_{\substack{\boldsymbol{m} \in \mathbb{N}^k \\ \forall i \in \{1, \ldots, k\}, \boldsymbol{m}_i \le m}} G_{\Gamma_{g,k},3g-3}(\mu_{\Gamma_{g,k},\boldsymbol{m}})
\end{equation}
where $\tilde{b}_{g,m,\kappa}$ is defined in~\eqref{eq:bgmkappa}.
Then for any function $h \in L^\infty(\Delta^\infty_{\le 1})$ we have
\[
\mathbb{E}(h(L^{(g,m)}))
\sim
\mathbb{E}(h(\tilde{L}^{(g,m,\kappa)}))
\]
as $g\to\infty$.
\end{thm}
Note that the terms appearing in the sum~\eqref{eq:LgkappaAsSum} in Theorem~\ref{thm:reduction} form a subset of the terms in the sum~\eqref{eq:LgAsSum} in Theorem~\ref{thm:LgMorePrecise}.
\begin{proof}
By Theorem~\ref{thm:A+DGZZtruncatedSum}, as $g \to \infty$, the probability that a random multicurve is non-separating with fewer than $\kappa \frac{\log(6g-6)}{2}$ components tends to $1$. As $h$ is bounded, we obtain the result.
\end{proof}
\section{Size-biased sampling and Poisson--Dirichlet distribution}
\label{sec:PDandGEM}
\subsection{Size-biased reordering}
The components of a multicurve are not ordered in any natural way. In Theorem~\ref{thm:Lg} we solve this issue by defining a symmetric random
variable $L^{(g,m)}$ on $\Delta^{3g-3}_{\le 1}$ and making the convergence happen
towards $L^{(g,m)\downarrow}$ whose entries are sorted in decreasing order.
In this section we introduce another natural way of ordering the entries: the size-biased ordering. Contrary to the symmetrization or
the decreasing order, it is a random ordering. The size-biased ordering
turns out to be convenient in the proof of Theorem~\ref{thm:PD}.
We work with vectors $x = (x_1, \ldots, x_k)$ in $\Delta^k_{\leq 1}$. A
\emph{reordering}
of $x$ is a random variable of the form $(x_{\sigma(1)}, \ldots, x_{\sigma(k)})$ where
$\sigma$ is a random permutation in $S_k$. We aim to define the size-biased
reordering $x^* = (x_{\sigma(1)}, x_{\sigma(2)}, \ldots, x_{\sigma(k)})$ of $x$.
The idea behind the size-biased reordering is to pick components according to
their values. One can define the random permutation $\sigma$ inductively as
follows. If $x$ is the zero vector, then $x^* = (x_{\sigma(1)}, \ldots, x_{\sigma(k)})$ where $\sigma$ is
taken uniformly at random. Otherwise, we set $\sigma(1)$ according to
\[
\mathbb{P}_x(\sigma(1) = i) \coloneqq \frac{x_i}{x_1 + \cdots + x_k}
\]
and define a new vector $y = (x_1, \ldots, \widehat{x}_{\sigma(1)}, \ldots, x_k)$
on $\Delta^{k-1}_{\leq 1}$ which is the vector $x$ with the component $x_{\sigma(1)}$
removed. In order to keep track of the components we denote $\phi: \{1, 2, \ldots, k-1\} \to \{1, 2, \ldots, k\}$
the unique increasing injection such that its image avoids $\sigma(1)$. In other words
\[
\phi(i) \coloneqq
\left\{ \begin{array}{ll}
i & \text{if $1 \le i < \sigma(1)$} \\
i+1 & \text{if $\sigma(1) \le i \le k-1$.}
\end{array} \right.
\]
Assuming that by induction $y$ has a size-biased reordering $\sigma_y$ we
define for $i \in \{1, 2, \ldots, k-1\}$ the other values by $\sigma(\phi(i)) \coloneqq \phi(\sigma_y(i))$.
This defines inductively the size-biased reordering.
A more direct definition can be given as follows. Given $r$ such that at least $r$
components of $x$ are positive, for $1\leq i_1,\dots,i_r \leq k$ distinct integers, we have
\begin{equation}
\label{eq:sizeBiasedPermutation}
\begin{split}
\mathbb{P}_x(\sigma(1) = i_1, \ldots, \sigma(r) &= i_r)
= \\
& \frac{x_{i_1} x_{i_2} \ldots x_{i_{r}}}{s(s-x_{i_1})(s-x_{i_1}-x_{i_2}) \cdots (s - x_{i_1} - \cdots - x_{i_{r-1}})}
\end{split}
\end{equation}
where $s = x_1 + x_2 + \cdots + x_k$. Note that for $r=k$ we have $x_{i_k} = s - x_{i_1} - \cdots - x_{i_{k-1}}$, so that
the last factors in the numerator and denominator cancel.
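In code, the size-biased reordering is a sequential weighted draw without replacement. The following Python sketch (ours) implements it and checks the first marginal of~\eqref{eq:sizeBiasedPermutation} empirically:
\begin{verbatim}
import numpy as np

def size_biased_reorder(x, rng):
    """Size-biased reordering: larger components tend to come first."""
    x = np.asarray(x, dtype=float)
    order, remaining = [], list(range(len(x)))
    while remaining:
        w = x[remaining]
        if w.sum() == 0:               # zero vector: uniform order
            rng.shuffle(remaining)
            order.extend(remaining)
            break
        i = rng.choice(len(remaining), p=w / w.sum())
        order.append(remaining.pop(i))
    return x[order]

rng = np.random.default_rng(7)
x = np.array([0.5, 0.3, 0.2])
firsts = [size_biased_reorder(x, rng)[0] for _ in range(100_000)]
print(np.mean(np.isclose(firsts, 0.5)))  # ~ 0.5 = x_1 / s
\end{verbatim}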
Now let $X \colon \varOmega \to \Delta^k_{\le 1}$ be a random variable. In
order to define its size-biased reordering $X^* \colon \varOmega \to \Delta^k_{\le 1}$,
we consider for each $x \in \Delta^k_{\le 1}$ independent random variables $\sigma_x$ distributed according to $\mathbb{P}_x$
as defined above which are furthermore independent from $X$. We then define for each
$\omega \in \varOmega$
\[
X^*(\omega) \coloneqq \sigma_{X(\omega)} \cdot X(\omega)
\]
where $\sigma \cdot x = (x_{\sigma(1)}, \ldots, x_{\sigma(k)})$.
\begin{lem} \label{lem:MarginalSizeBiased}
Let $X$ be a random variable on $\Delta^k_{= 1}$ with
density $f_X \colon \Delta^k_{= 1} \to \mathbb{R}$.
Let $1 \leq r \leq k$. Then the $r$-th marginal of the size-biased reordering of $X$, that is to say
the density of the vector $(X^*_1, \ldots, X^*_r)$ is
\begin{align*}
& f_{(X_1^*,\dots,X_r^*)}(x_1,\dots,x_r)
=
\frac{1}{(k-r)!} \frac{x_1 \cdots x_r}{(1-x_1)\cdots (1-x_1-\cdots -x_{r-1})} \cdot \\
& \qquad
\cdot \int_{\Delta_{= 1-x_1-\cdots -x_r}^{k-r}} \sum_{\sigma\in S_{k}} f_X(x_{\sigma(1)},\dots,x_{\sigma(k)}) \, d\lambda^{k-r}_{=1-x_1-\cdots-x_r}(x_{r+1}, \dots, x_{k}),
\end{align*}
\end{lem}
\begin{proof}
Let us define $g(x_1, \ldots, x_k) \coloneqq \sum_{\sigma \in S_k} f_X(x_{\sigma(1)}, \ldots, x_{\sigma(k)})$.
We first consider the case $r=k$.
Since $X$ admits a density, almost surely all components are positive and distinct. Hence one can use~\eqref{eq:sizeBiasedPermutation}
to write its density as
\[
f_{X^*}(x_1, \ldots, x_k) = \frac{x_1 \cdots x_{k}}{(1-x_1) \cdots (1-x_1-\cdots-x_{k-1})} \, g(x_1,\ldots,x_k).
\]
In the above formula we used the fact that the sum of $X$ is $s=1$ almost surely.
Now, for $1 \leq r \leq k-1$, the $r$-th marginal is obtained by integrating the free variables
\begin{equation}
\label{eq:marginalIntegral}
f_{(X^*_1, \ldots, X^*_r)}(x_1, \ldots, x_r) = \int_{\Delta^{k-r}_{=1-s}} f_{X^*}(x_1,\ldots,x_k) \, d\lambda^{k-r}_{=1-s}(x_{r+1},\ldots,x_k)
\end{equation}
where $s = x_1 + \cdots + x_r$. For a permutation $\tau \in S_{k-r}$ we define the subsimplex
\[
\Delta_{=1-s;\tau}^{k-r}
\coloneqq
\{(x_1, \ldots, x_{k-r}) \in \Delta_{=1-s}^{k-r} : x_{\tau(1)} > x_{\tau(2)} > \cdots > x_{\tau(k-r)}\}.
\]
We can decompose the integral~\eqref{eq:marginalIntegral} as a sum over these subsimplices
\begin{align*}
f_{(X^*_1, \ldots, X^*_r)} & (x_1, \ldots, x_r)
=
\frac{x_1 \cdots x_r}{(1-x_1) \cdots (1-x_1 - \cdots - x_{r-1})} \\
& \cdot \sum_{\tau \in S_{k-r}}
\int_{\Delta_{=1-s;\tau}^{k-r}}
\frac{x_{r+1} \cdots x_k\, g(x_1, \ldots, x_k) \, d\lambda^{k-r}_{=1-s}(x_{r+1}, \ldots, x_k)}{(1-s)(1-s-x_{r+1}) \cdots (1-s-x_{r+1}-\cdots-x_{k-1})}.
\end{align*}
Using the fact that $g$ is symmetric, we can rewrite it by means of a change of variables on
the standard simplex $\Delta_{=1-s; \id}^{k-r}$
\begin{align*}
& f_{(X^*_1, \ldots, X^*_r)} (x_1, \ldots, x_r)
=
\frac{x_1 \cdots x_r}{(1-x_1) \cdots (1-x_1 - \cdots - x_{r-1})} \cdot \\
&
\int_{\Delta_{=1-s;\id}^{k-r}}
\sum_{\tau \in S(\{r+1,\ldots,k\})}
\frac{x_{\tau(r+1)} \cdots x_{\tau(k)} \, g(x_1, \ldots, x_k) \, d\lambda^{k-r}_{=1-s}(x_{r+1}, \ldots, x_k)}{(1-s)(1-s-x_{\tau(r+1)}) \cdots (1-s-x_{\tau(r+1)}-\cdots-x_{\tau(k-1)})}.
\end{align*}
Using the facts that
\[
\sum_{\tau \in S(\{r+1,\ldots,k\})}
\frac{x_{\tau(r+1)} \cdots x_{\tau(k)}}{(1-s)(1-s-x_{\tau(r+1)}) \cdots (1-s-x_{\tau(r+1)}-\cdots-x_{\tau(k-1)})} = 1
\]
and
\begin{align*}
\int_{\Delta_{=1-s;\id}^{k-r}}
\sum_{\sigma \in S_k} & f_X(x_{\sigma(1)}, \ldots, x_{\sigma(k)}) \, d\lambda^{k-r}_{=1-s}(x_{r+1}, \ldots, x_k)
\\
& =
\frac{1}{(k-r)!}
\int_{\Delta^{k-r}_{=1-s}}
\sum_{\sigma \in S_k} f_X(x_{\sigma(1)}, \ldots, x_{\sigma(k)}) \, d\lambda^{k-r}_{=1-s}(x_{r+1}, \ldots, x_k)
\end{align*}
we obtain the result.
\end{proof}
We finish this section by mentioning that the size-biased reordering extends to infinite vectors, that is, to elements of $\Delta^{\infty}_{\le 1}$.
\subsection{Poisson--Dirichlet and GEM distributions}
\label{ssec:PoissonDirichletAndGEM}
Recall that the $\GEM(\theta)$ distribution was defined in the introduction via the stick-breaking process. We also defined $\PD(\theta)$ as the distribution of the sorted reordering of a $\GEM(\theta)$ vector. The Poisson--Dirichlet distribution admits an intrinsic definition in terms of the Poisson process, first introduced by Kingman~\cite{Kin75}. We refer to~\cite[Section~4.11]{ABT03} for this definition. Instead we concentrate on the simpler Griffiths-Engen-McCloskey distribution.
In the introduction we passed from $\GEM(\theta)$ to $\PD(\theta)$. The following result formalizes the equivalence between the two distributions.
\begin{thm}[\cite{DJ89}] \label{thm:PDversusGEM}
Let $X = (X_1, X_2, \ldots)$ be a random variable on $\Delta_{= 1}^\infty$. Let $\theta > 0$. Then the sorted reordering $X^\downarrow$ has distribution $\PD(\theta)$ if and only if the size-biased reordering $X^*$ has distribution $\GEM(\theta)$.
\end{thm}
We will use the above result in the following form.
\begin{cor}[\cite{DJ89}] \label{cor:lim=PD<->lim=GEM}
Let $X^{(n)}$ be a sequence of random variables on $\Delta_{= 1}^\infty$. Let $\theta > 0$. Then the sorted sequence $X^{(n)\downarrow}$ converges in distribution to $\PD(\theta)$ if and only if the size-biased sequence $X^{(n)*}$ converges in distribution to $\GEM(\theta)$.
\end{cor}
In order to prove convergence towards $\GEM$ we will need the explicit description of its marginals.
\begin{prop}[\cite{DJ89}] \label{prop:pdfGEM}
Let $X = (X_1,X_2,\dots)$ be a random variable with distribution $\GEM(\theta)$. Then the vector of the first $r$ components $(X_1,X_2,\dots,X_r)$ of $X$ admits a density on $\Delta^{r}_{\leq 1}$ given by
\begin{equation} \label{eq:GEMdensity}
\frac{\theta^{r} (1-x_1-\cdots -x_r)^{\theta-1}}{(1-x_1)(1-x_1-x_2)\cdots (1-x_1-\cdots -x_{r-1})}.
\end{equation}
\end{prop}
In order to simplify computations, we consider moments of the $\GEM$ distribution that get
rid of the denominator in the density~\eqref{eq:GEMdensity}. Namely, for a random variable
$X = (X_1, X_2, \ldots)$ on $\Delta^\infty_{\leq 1}$ and $p = (p_1, \ldots, p_r)$ an $r$-tuple
of non-negative integers we define
\begin{equation} \label{eq:MpDefinition}
M_p(X) \coloneqq
\mathbb{E}((1-X_1) \cdots (1-X_1-\cdots -X_{r-1}) \cdot X_1^{p_1}\cdots X_r^{p_r})
\end{equation}
These moments of $\GEM(\theta)$ are as follows.
\begin{lem}\label{lem:GEMMoments}
If $X=(X_1,X_2,\dots) \sim \GEM(\theta)$ and $(p_1, \ldots, p_r)$ is a non-negative integral
vector, then the moment $M_p(X)$ defined in~\eqref{eq:MpDefinition} has the following value
\[
M_p(X) = \frac{\theta^r \cdot (\theta-1)! \cdot p_1!\cdots p_r!}{(p_1+ \cdots + p_r + \theta + r - 1)!}.
\]
\end{lem}
\begin{proof}
By Proposition~\ref{prop:pdfGEM} we have
\begin{align*}
M_p(X) &= \int_{\Delta^r_{\leq 1}} \theta^{r} x_1^{p_1} \cdots x_r^{p_r} (1-x_1-\cdots -x_r)^{\theta-1} \, d\lambda^r_{\le 1}\\
&= \theta^r \int_{\Delta^{r+1}_{=1}} x_1^{p_1} \cdots x_r^{p_r} x_{r+1}^{\theta-1} \, d\lambda^{r+1}_{=1}.
\end{align*}
The last term is an instance of Lemma~\ref{lem:int_sp} on the simplex $\Delta^{r+1}_{=1}$.
Replacing the value obtained from the integration lemma gives the result.
\end{proof}
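Lemma~\ref{lem:GEMMoments} can also be checked by Monte Carlo. The following Python sketch (ours) compares the empirical moment $M_{(1,2)}$ of $\GEM(1/2)$, computed from the first two stick-breaking components, with the closed formula:
\begin{verbatim}
import numpy as np
from math import gamma

theta, (p1, p2) = 0.5, (1, 2)
rng = np.random.default_rng(3)
u = rng.beta(1.0, theta, size=(500_000, 2))
X1 = u[:, 0]                      # first two stick-breaking components
X2 = u[:, 1] * (1.0 - u[:, 0])

empirical = np.mean((1.0 - X1) * X1**p1 * X2**p2)
exact = theta**2 * gamma(theta) * 1 * 2 / gamma(p1 + p2 + theta + 2)
print(empirical, exact)           # both approximately 0.0169
\end{verbatim}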
\section{Proof of the main theorem}
\label{sec:LargeGenus}
The aim of this section is to prove the following result.
\begin{thm}\label{thm:GEM}
For $g \ge 2$ integral, $m \in \mathbb{N} \cup \{+\infty\}$ and $\kappa > 1$ real, let $\tilde{L}^{(g,m,\kappa)*}$ be the size-biased version of the random variable $\tilde{L}^{(g,m,\kappa)}$ from Theorem~\ref{thm:reduction}. Then as $g$ tends to $\infty$, $\tilde{L}^{(g,m,\kappa)*}$ converges in distribution to $\GEM(1/2)$.
\end{thm}
Let us first show how to derive our main Theorem~\ref{thm:PD} from Theorem~\ref{thm:GEM}.
\begin{proof}[Proof of Theorem~\ref{thm:PD}]
By Theorem~\ref{thm:reduction}, the random variables $\tilde{L}^{(g,m,\kappa)*}$ and $L^{(g,m)*}$ have the same limit distribution as $g \to +\infty$. Hence by Theorem~\ref{thm:GEM}, the random variable $L^{(g,m)*}$ converges in distribution towards $\GEM(1/2)$.
Finally Corollary~\ref{cor:lim=PD<->lim=GEM} shows that the convergence
in distribution of $L^{(g,m)*}$ towards $\GEM(1/2)$ is equivalent to
the convergence of $L^{(g,m){\mkern1mu \downarrow}}$ towards $\PD(1/2)$. This concludes the proof
of Theorem~\ref{thm:PD}.
\end{proof}
\subsection{The method of moments}
Recall from Equation~\eqref{eq:MpDefinition} in Section~\ref{ssec:PoissonDirichletAndGEM} that we defined specific moments $M_{(p_1,\ldots,p_r)}(X)$ for a random variable $X$ on $\Delta^\infty_{=1}$. In this section, we show that the convergence of a sequence of random variables $X^{(n)}$ is equivalent to the convergence of all the moments $M_p(X^{(n)})$. This strategy, called the \emph{method of moments}, is a standard tool in probability, see for example~\cite[Section~30]{Billingsley} for the case of real variables.
\begin{lem}\label{lem:momentMethod}
A sequence of random variables $X^{(n)} = (X^{(n)}_1,X^{(n)}_2,\ldots)$ in $\Delta_{=1}^\infty$ converges in distribution to a random variable $X^{(\infty)}$ in $\Delta_{=1}^\infty$ if and only if for all $p=(p_1,\ldots,p_r)$ vector of non-negative integers we have $\lim_{n \to \infty} M_p(X^{(n)}) = M_p(X^{(\infty)})$.
\end{lem}
\begin{proof}
The infinite-dimensional cube $[0,1]^\mathbb{N}$ is compact with respect to the product topology by Tychonoff's theorem. The set $\Delta_{\leq 1}^\infty$ is a closed subset of $[0,1]^\mathbb{N}$, and is therefore compact. The signed measures on $\Delta_{\leq 1}^\infty$ are identified with the dual of the space $C(\Delta_{\leq 1}^\infty, \mathbb{R})$ of real continuous functions. In particular, $X^{(n)}$ converges towards $X^{(\infty)}$ in distribution if and only if for any continuous function $f \in C(\Delta_{\le 1}^\infty, \mathbb{R})$ we have the convergence of $\mathbb{E}(f(X^{(n)}))$ towards $\mathbb{E}(f(X^{(\infty)}))$.
Now let $S$ be the set of functions in $C(\Delta_{\leq 1}^\infty, \mathbb{R})$ of the form
\[
(1-x_1)(1-x_1-x_2)\cdots (1-x_1-\cdots -x_{r-1}) \cdot x_1^{p_1}\cdots x_r^{p_r},
\]
with $r\geq 0$, $p_1,\dots,p_r \geq 0$. We claim that the span of $S$ (that is, the set of finite linear combinations of elements of $S$) is dense in $C(\Delta_{\leq 1}^\infty, \mathbb{R})$.
Indeed, $S$ contains $1$ and one checks that its span is stable under multiplication; therefore, the algebra generated by $S$ is equal to its span.
Moreover, $S$ separates the points of $\Delta_{\leq 1}^\infty$, so density follows from the Stone--Weierstrass theorem.
\end{proof}
We will use the following asymptotic simplification of the moments.
\begin{thm}\label{thm:LgMomentsAsymptotics}
For $g \geq 2$ integral, $m \in \mathbb{N} \cup \{+\infty\}$ and $\kappa > 1$ real, let $\tilde{L}^{(g,m,\kappa)*}$ be the size-biased reordering of the random variable $\tilde{L}^{(g,m,\kappa)}$ from Theorem~\ref{thm:reduction}. Let $r\geq 1$ and $p_1,\dots,p_r\in\mathbb{N}$.
Then, as $g \to \infty$, the moment $M_p(\tilde{L}^{(g,m,\kappa)*})$ is asymptotically equivalent to
\[
\frac{\sqrt{\frac{m+1}{m}} \cdot \sqrt{\pi}}{2 \cdot (6g-6)^{p_1+\cdots+p_r+r-1/2}}
\sum_{k=r}^{\kappa \frac{\log(6g-6)}{2}} \frac{1}{(k-r)!} \sum_{\substack{(j_1,\dots,j_k) \in \mathbb{N}^k \\ j_1+\cdots +j_k = 3g-3}}
\prod_{i=1}^k \frac{\zeta_m(2 j_i)}{2 j_i} \prod_{i=1}^r \frac{(2 j_i + p_i)!}{(2j_i - 1)!}
\]
where
\[
\zeta_m(s) \coloneqq \sum_{n = 1}^m \frac{1}{n^s}
\]
is the partial Riemann zeta function.
\end{thm}
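Note that $\zeta_1(s) = 1$ for every $s$, while $\zeta_\infty = \zeta$ is the usual Riemann zeta function.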
Following~\cite[Equation~(14)]{DGZZ20}, we define
\[
\begin{split}
c_{g,k}(d_1,\dots,d_k)
& \coloneqq \frac{g! \cdot (3g-3+2k)!}{(6g-5+4k)!} \frac{3^g}{2^{3g-6+5k}}
\cdot (2d_1+2)!\cdots (2d_k+2)! \\
& \qquad \cdot \sum_{\substack{d_i^-+d_i^+ = d_i \\ d_i^-,d_i^+\geq 0, 1\leq i\leq k}} \frac{\langle\tau_{d_1^-}\tau_{d_1^+} \cdots \tau_{d_k^-}\tau_{d_k^+}\rangle_{g,2k}}{d_1^-! d_1^+! \cdots d_k^-! d_k^+!}.
\end{split}
\]
The above coefficients are useful because, by~\cite[Lemma~3.5]{DGZZ20}, we have
\[
\lim_{g \to \infty} \sup_{k \leq \sqrt{g/800}} |c_{g,k}(d_1,\dots,d_k) - 1| = 0.
\]
This asymptotic result is a direct consequence of Theorem~\ref{thm:AggarwalAsymptotics} of A.~Aggarwal that we stated in the introduction. We define $\tilde{c}_{g,k}(j_1,\ldots,j_k) \coloneqq c_{g-k,k}(j_1-1,\dots,j_k-1)$.
\begin{lem} \label{lem:UgkDensity}
For each $k=1,\ldots,3g-3$, let $U^{(g,m,k)}$ be the random variable on $\Delta^k_{=1}$ with density
\begin{equation} \label{eq:UgkDensity}
\frac{(6g-7)!}{\mathcal{Z}_m(F_{g,k})} \sum_{\substack{\boldsymbol{m} \in \mathbb{N}^k \\ \forall i \in \{1, \ldots, k\}, \boldsymbol{m}_i \le m}} F_{g,k} \left(\frac{x_1}{\boldsymbol{m}_1}, \dots, \frac{x_k}{\boldsymbol{m}_k} \right) \cdot \frac{1}{\boldsymbol{m}_1 \cdots \boldsymbol{m}_k}
\end{equation}
where we use the notation $F_{g,k}$ for $F_{\Gamma_{g,k}}$. Then for any $p = (p_1, \ldots, p_r) \in \mathbb{N}^r$ with $r \leq k$ we have
\begin{align*}
M_p(U^{(g,m,k)*})
& =
\frac{w_{g,k} \cdot k!}{\mathcal{Z}_m(F_{g,k}) \cdot (k-r)! \cdot (6g-7 + p_1 + \cdots + p_r + r)!} \\
& \qquad
\cdot \sum_{\substack{(j_1,\dots,j_k) \in\mathbb{N}^k \\ j_1 + \dots + j_k = 3g-3}}
\tilde{c}_{g,k}(j_1,\dots,j_k) \prod_{i=1}^k \frac{\zeta_m(2j_i)}{2j_i}
\prod_{i=1}^r \frac{(2j_i+p_i)!}{(2j_i-1)!}
\end{align*}
where
\[
w_{g,k} \coloneqq
\frac{(6g-5-2k)! \cdot (6g-7)!}{(g-k)! \cdot (3g-3-k)!} \cdot \frac{2^{3k-3}}{3^{g-k}}.
\]
\end{lem}
Note that Formula~\eqref{eq:UgkDensity} is the density of a probability measure by Remark~\ref{rk:proba}. More precisely, it is the density of the asymptotic normalized length vector of a random multicurve, restricted to multicurves of type $\Gamma_{g,k}$.
\begin{proof}
By definition of the stable graph polynomial we have
\begin{align*}
&F_{g,k}(x_1, x_2, \ldots, x_k)
=
x_1 \cdots x_k \cdot V_{g-k,2k}(x_1, x_1, x_2, x_2, \ldots, x_k, x_k)
= \\
&\frac{1}{2^{3g-3-k}}
\sum_{\substack{(d_1^-,d_1^+,\dots,d_k^-,d_k^+)\in\mathbb{Z}_{\geq 0}^{2k} \\ d_1^- + d_1^+ + \cdots + d_k^-+d_k^+ = 3g-3-k}} \frac{\langle \tau_{d_1^-}\tau_{d_1^+}\cdots \tau_{d_k^-}\tau_{d_k^+}\rangle_{g-k,2k}}{d_1^-!d_1^+!\cdots d_k^-! d_k^+!} \, x_1^{2(d^+_1+d^-_1)+1}\cdots x_k^{2(d^+_k+d^-_k)+1}.
\end{align*}
Using the coefficients $\tilde{c}_{g,k}$ defined just above the statement of the lemma, we rewrite the polynomial $F_{g,k}$ as
\[
F_{g,k} =
\frac{(6g-5-2k)!}{(g-k)! \cdot (3g-3-k)!} \cdot \frac{2^{3k-3}}{3^{g-k}}
\sum_{\substack{(j_1,\dots,j_k) \in \mathbb{N}^k \\ j_1 + \dots + j_k = 3g-3}}
\tilde{c}_{g,k}(j_1,\dots,j_k) \prod_{i=1}^k \frac{x_i^{2j_i-1}}{(2j_i)!}.
\]
Hence the density of $U^{(g,m,k)}$ in~\eqref{eq:UgkDensity} can be rewritten as
\[
\frac{w_{g,k}}{\mathcal{Z}_m(F_{g,k})}
\sum_{\substack{(j_1,\dots,j_k) \in \mathbb{N}^k \\ j_1 + \dots + j_k = 3g-3}}
\tilde{c}_{g,k}(j_1,\dots,j_k) \prod_{i=1}^k \zeta_m(2j_i) \frac{x_i^{2j_i-1}}{(2j_i)!}.
\]
Now, by Lemma~\ref{lem:MarginalSizeBiased}, the $r$-th marginal of the size-biased version $U^{(g,m,k)*}$ of $U^{(g,m,k)}$ is
\begin{align*}
& \frac{w_{g,k} \cdot k!}{\mathcal{Z}_m(F_{g,k}) \cdot (k-r)!} \cdot \frac{1}{(1-x_1) \cdots (1-x_1-\cdots-x_{r-1})} \\
& \qquad \cdot \sum_{\substack{(j_1,\dots,j_k)\in\mathbb{N}^k \\ j_1 + \dots + j_k = 3g-3}} \tilde{c}_{g,k}(j_1,\dots,j_k) \prod_{i=1}^k \frac{\zeta_m(2j_i)}{(2j_i)!} \prod_{i=1}^r x_i^{2j_i} \\
& \qquad \qquad \cdot \int_{\Delta^{k-r}_{= 1-x_1-\cdots-x_r}} x_{r+1}^{2j_{r+1}-1} \cdots x_k^{2j_k-1} \, d\lambda^{k-r}_{=1-x_1-\cdots-x_r}(x_{r+1},\ldots,x_k).
\end{align*}
In the above, we used the fact that the density of $U^{(g,m,k)}$ is a symmetric function; hence the sum over all
permutations of $k$ elements only contributes a factor $k!$.
The value of the integral in the above sum follows from Lemma~\ref{lem:int_sp} and is equal to
\[
\frac{(2j_{r+1}-1)! \cdots (2j_k-1)!}{(2j_{r+1} + \cdots + 2j_k - 1)!}
(1 - x_1 - \cdots - x_r)^{2j_{r+1} + \cdots + 2j_k - 1}.
\]
We end up with the following formula for the density of the $r$-th marginal of $U^{(g,m,k)*}$
\begin{align*}
\frac{w_{g,k} \cdot k!}{\mathcal{Z}_m(F_{g,k}) \cdot (k-r)!}
\sum_{\substack{(j_1,\dots,j_k)\in\mathbb{N}^k \\ j_1 + \dots + j_k = 3g-3}}
& \frac{\tilde{c}_{g,k}(j_1,\dots,j_k)}{(2j_{r+1} + \cdots + 2j_k - 1)!} \cdot
\prod_{i=1}^k \frac{\zeta_m(2j_i)}{2j_i} \prod_{i=1}^r \frac{x_i^{2j_i}}{(2j_i-1)!} \\
& \qquad \cdot \frac{(1 - x_1 - \cdots - x_r)^{2j_{r+1} + \cdots + 2j_k - 1}}{(1-x_1) \cdots (1-x_1-\cdots-x_{r-1})}.
\end{align*}
From the above formula and the definition of the moment $M_p$ in~\eqref{eq:MpDefinition}, the moment $M_p(U^{(g,m,k)*})$ equals
\begin{align*}
& \frac{w_{g,k} \cdot k!}{\mathcal{Z}_m(F_{g,k}) \cdot (k-r)!}
\sum_{\substack{(j_1,\dots,j_k)\in\mathbb{N}^k \\ j_1 + \dots + j_k = 3g-3}}
\frac{\tilde{c}_{g,k}(j_1,\dots,j_k)}{(2j_{r+1} + \cdots + 2j_k - 1)!}
\prod_{i=1}^k \frac{\zeta_m(2j_i)}{2j_i} \prod_{i=1}^r \frac{1}{(2j_i-1)!} \\
& \qquad \cdot \int_{\Delta^r_{\le 1}}
x_1^{2j_1+p_1}\cdots x_r^{2j_r+p_r} (1 - x_1 - \cdots - x_r)^{2j_{r+1} + \cdots + 2j_k - 1} \, d\lambda_{\le 1}^r.
\end{align*}
Lemma~\ref{lem:int_sp} gives the value of the above integral
\begin{align*}
\int_{\Delta^r_{\le 1}}
& x_1^{2j_1+p_1}\cdots x_r^{2j_r+p_r} (1 - x_1 - \cdots - x_r)^{2j_{r+1} + \cdots + 2j_k - 1} \, d \lambda_{\le 1}^r \\
& \qquad = \int_{\Delta^{r+1}_{= 1}}
x_1^{2j_1+p_1}\cdots x_r^{2j_r+p_r} x_{r+1}^{2j_{r+1} + \cdots + 2j_k - 1} \, d \lambda_{= 1}^{r+1} \\
& \qquad = \frac{(2j_1+p_1)! \cdots (2j_r+p_r)! \cdot (2j_{r+1} + \cdots + 2j_k-1)!}{(6g-6+p_1+\cdots+p_r + r - 1)!}.
\end{align*}
Substituting the above value in our last expression for $M_p(U^{(g,m,k)*})$ gives the announced formula.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:LgMomentsAsymptotics}]
Because the distribution of $\tilde{L}^{(g,m,\kappa)}$ is a weighted sum of distributions, we can compute the moments for each term in the sum and combine the results at the end.
More precisely, we have
\begin{equation} \label{eq:momentsLgkappaAsSum}
M_p(\tilde{L}^{(g,m,\kappa)*}) =
\frac{1}{(6g-6)! \cdot \tilde{b}_{g,m,\kappa}}
\sum_{k=1}^{\kappa \frac{\log(6g-6)}{2}}
\frac{\mathcal{Z}_m(F_{g,k})}{2^k \cdot k!} \cdot M_p(U^{(g,m,k)*})
\end{equation}
where $\tilde{b}_{g,m,\kappa}$ was defined in~\eqref{eq:bgmkappa}.
Now substituting the formula for $M_p(U^{(g,m,k)*})$ from Lemma~\ref{lem:UgkDensity} and the asymptotic value of $\tilde{b}_{g,m,\kappa}$ from Theorem~\ref{thm:A+DGZZtruncatedSum} in the sum~\eqref{eq:momentsLgkappaAsSum}, we have as $g \to \infty$ the asymptotic equivalence
\begin{equation} \label{eq:asymp1}
\begin{split}
M_p(\tilde{L}^{(g,m,\kappa)*})
& \sim
\frac{ (4g-4)! \cdot \pi}{ (6g-7)! \cdot (6g-7+p_1+\cdots+p_r+r)!} \cdot \sqrt{\frac{m+1}{m}} \cdot \left( \frac{3}{4} \right)^{4g-4} \\
& \qquad \cdot \sum_{k=1}^{\kappa \frac{\log(6g-6)}{2}}
\bigg(\frac{1}{2^k \cdot (k-r)!} \cdot \frac{(6g-5-2k)! \cdot (6g-7)! \cdot 2^{3k-3}}{(g-k)! \cdot (3g-3-k)! \cdot 3^{g-k}} \\
& \qquad \qquad \cdot \sum_{\substack{(j_1,\dots,j_k)\in\mathbb{N}^k \\ j_1 + \dots + j_k = 3g-3}}
\prod_{i=1}^k \frac{\zeta_m(2j_i)}{2j_i}
\prod_{i=1}^r \frac{(2j_i+p_i)!}{(2j_i-1)!} \bigg)
.
\end{split}
\end{equation}
where we have used that $\tilde{c}_{g,k}(j_1,\dots,j_k) \sim 1$ uniformly in $k \in [1,\kappa \log(6g-6)/2]$. On the one hand, by~\cite[Equation~(3.13)]{DGZZ20} (in the proof of Theorem~3.4) we have
\begin{equation} \label{eq:simplify1}
\frac{(4g-4)! \cdot (6g-5-2k)!}{(6g-7)! \cdot (g-k)! \cdot (3g-3-k)!}
\sim
(6g-6)^{1/2} \frac{1}{\sqrt{\pi}} \frac{2^{8g-6-2k}}{3^{3g-4+k}}.
\end{equation}
On the other hand
\begin{equation} \label{eq:simplify2}
\frac{(6g-7)!}{(6g-7+p_1+\ldots+p_r+r)!}
\sim
\frac{1}{(6g-6)^{p_1 + \cdots + p_r + r}}.
\end{equation}
Replacing~\eqref{eq:simplify1} and~\eqref{eq:simplify2} in~\eqref{eq:asymp1} we obtain
\[
\begin{split}
M_p(\tilde{L}^{(g,m,\kappa)*})
& \sim
\frac{1}{2 \cdot (6g-6)^{p_1 + \cdots + p_r + r - 1/2}} \cdot \sqrt{\frac{m+1}{m}} \cdot \sqrt{\pi} \\
& \qquad \cdot \sum_{k=1}^{\kappa \frac{\log(6g-6)}{2}}
\frac{1}{(k-r)!}
\sum_{\substack{(j_1,\dots,j_k)\in\mathbb{N}^k \\ j_1 + \dots + j_k = 3g-3}}
\prod_{i=1}^k \frac{\zeta_m(2j_i)}{2j_i}
\prod_{i=1}^r \frac{(2j_i+p_i)!}{(2j_i-1)!}
\end{split}
\]
which is the announced formula; note that by convention $1/(k-r)! = 0$ for $k < r$, so the sum effectively starts at $k = r$.
\end{proof}
\subsection{Asymptotic expansion of a related sum}
Let $\theta = (\theta_i)_{i \geq 1}$ be a sequence of non-negative real numbers and
let $p = (p_1, \ldots, p_r)$ be a non-negative integral vector.
This section is dedicated to the asymptotics in $n$ of the numbers
\begin{equation} \label{eq:Sthetan}
S_{\theta,p,n}
\coloneqq
\sum_{k=r}^{\infty} \frac{1}{(k-r)!} \sum_{\substack{(j_1,\dots,j_k) \in \mathbb{N}^k \\ j_1+\cdots +j_k = n}}
\prod_{i=1}^k \frac{\theta_{j_i}}{2 j_i} \prod_{i=1}^r \frac{(2 j_i + p_i)!}{(2j_i - 1)!}.
\end{equation}
This sum should be reminiscent of the formula from Theorem~\ref{thm:LgMomentsAsymptotics}.
\begin{df}
Let $\theta = (\theta_j)_{j \geq 1}$ be a sequence of non-negative real numbers and let
$g_\theta(z)$ be the formal series
\begin{equation} \label{eq:gtheta}
g_\theta(z) \coloneqq \sum_{j \geq 1} \theta_j \frac{z^j}{j}.
\end{equation}
We say that $\theta$ is \emph{admissible} if
\begin{itemize}
\item the series $g_\theta(z)$ converges in the open disk $D(0,1)\subset\mathbb{C}$ centered at $0$ of radius $1$, and
\item the function $g_\theta(z) + \log(1-z)$ extends to a holomorphic function on $D(0,R)$
for some $R > 1$.
\end{itemize}
\end{df}
\begin{thm} \label{thm:transfer}
Let $\theta = (\theta_k)_{k \geq 1}$ be admissible. Then, as $n \to \infty$, we have
\begin{equation} \label{eq:SthetaAsymptotics}
S_{\theta,p,n} \sim \sqrt{\frac{e^{\beta}}{2}} \cdot \frac{p_1! \cdots p_r!}{2^{r-1}} \frac{(2n)^{p_1 + \cdots + p_r + r - 1/2}}{\varGamma(p_1 + \cdots + p_r + r + 1/2)}
\end{equation}
where $\beta$ is the value at $z=1$ of $g_\theta(z) + \log(1-z)$.
\end{thm}
The following is essentially~\cite[Lemma~3.8]{DGZZ20}, which we reproduce
for completeness.
\begin{lem}
For $m \in \mathbb{N} \cup \{+\infty\}$, let
\[
g_m(z) \coloneqq \sum_{j \ge 1} \zeta_m(2j) \frac{z^j}{j}.
\]
Then $g_m(z)$ converges in $D(0,1)$ and $g_m(z) + \log(1-z)$ extends
to a holomorphic function on $D(0,4)$.
In particular the sequence $(\zeta_m(2j))_{j \ge 1}$ is admissible.
Moreover
$(g_m(z) + \log(1-z))|_{z=1} = \log \left( \frac{2m}{m+1} \right)$.
\end{lem}
\begin{proof}
Since $\zeta_m(2j)$ is bounded uniformly in $j$, the series converges in $D(0,1)$.
Now, expanding the definition of the partial zeta function $\zeta_m$ and changing
the order of summation we have for $z \in D(0,1)$
\[
g_m(z) = - \sum_{n=1}^m \log \left(1 - \frac{z}{n^2} \right)
\]
and hence
\[
g_m(z) + \log(1-z) = - \sum_{n=2}^m \log \left(1 - \frac{z}{n^2} \right).
\]
The term $\log \left(1 - \frac{z}{n^2} \right)$ defines a holomorphic function
on $D(0,n^2)$. Since
$\left| \log \left(1 - \frac{z}{n^2} \right) \right| \le
\frac{4}{n^2}
\left| \log \left(1 - \frac{z}{4} \right) \right|
$
we have absolute convergence even for $m=+\infty$ and $g_m(z) + \log(1-z)$
defines a holomorphic function in $D(0,4)$.
Now for the value at $z=1$ we obtain
\begin{align*}
(g_m(z) + \log(1-z))|_{z=1} &=
- \sum_{n=2}^m \log \left(1 - \frac{1}{n^2}\right) \\
&= \sum_{n=2}^m \left(2 \log(n) - \log(n-1) - \log(n+1) \right) \\
&= \log \left( \frac{2m}{m+1} \right),
\end{align*}
where the last equality follows by telescoping. This completes the proof.
\end{proof}
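As a quick sanity check, for $m=1$ we have $\zeta_1(2j) = 1$ for all $j$, hence $g_1(z) = -\log(1-z)$ exactly; accordingly $g_1(z) + \log(1-z) = 0 = \log\left(\frac{2 \cdot 1}{1+1}\right)$, in agreement with the lemma.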
\begin{cor} \label{cor:zetamAsymptotics}
Let $m \in \mathbb{N} \cup \{+\infty\}$.
For $\theta = (\zeta_m(2i))_{i \geq 1}$ we have
\[
S_{\theta,p,n} \sim
\sqrt{\frac{m}{m+1}}
\cdot
\frac{p_1! \cdots p_r!}{2^{r-1}} \frac{(2n)^{p_1 + \cdots + p_r + r - 1/2}}{\varGamma(p_1 + \cdots + p_r + r + 1/2)}.
\]
\end{cor}
For a non-negative integer $p$ we define the differential operator $D_p$ on $\mathbb{C}[[z]]$ by
\[
D_p(f) \coloneqq z \frac{d^{p+1}}{dz^{p+1}} (z^p f).
\]
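For instance, $D_1(z^{2}) = z\,\frac{d^{2}}{dz^{2}}(z^{3}) = 6z^{2}$; the factor $6 = \frac{3!}{1!}$ is an instance of the general computation in the proof of Lemma~\ref{lem:coeffExtraction} below.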
We start with some preliminary lemmas.
\begin{lem} \label{lem:coeffExtraction}
Let $\theta = (\theta_i)_i$ and $g_\theta(z)$ be as in Theorem~\ref{thm:transfer}.
Let $p = (p_1, \ldots, p_r)$ be a tuple of non-negative integers and let
\begin{equation} \label{eq:Gthetap}
G_{\theta,p}(z) \coloneqq \exp\left(\frac{1}{2} g_\theta(z^2)\right) \prod_{i=1}^r D_{p_i}\left( \frac{1}{2} g_\theta(z^2) \right).
\end{equation}
Then, for any $n \geq 0$, we have $[z^{2n}] \, G_{\theta,p}(z) = S_{\theta,p,n}$, where $[z^{2n}]$ is the coefficient extraction operator and $S_{\theta,p,n}$ is the sum in~\eqref{eq:Sthetan}.
\end{lem}
\begin{proof}
Let us first note that $\frac{1}{2} g_\theta(z^2) = \sum_i \theta_i \frac{z^{2i}}{2i}$. We aim
to compute the expansion of $D_p \left( \frac{1}{2} g_\theta(z^2) \right)$. By linearity, it is
enough to compute a single term and we have
\[
D_p \left( z^{2j} \right) = z \frac{d^{p+1}}{dz^{p+1}} (z^{2j+p}) = \frac{(2j + p)!}{(2j-1)!} z^{2j}.
\]
Hence
\[
D_p \left( \frac{1}{2} g_\theta(z^2) \right)
=
\sum_{j=1}^{\infty} \frac{(2j+p)!}{(2j-1)!} \theta_j \frac{z^{2j}}{2j}.
\]
The lemma follows by expanding the exponential and extracting the coefficient of $z^{2n}$.
\end{proof}
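Although not used in the arguments, Lemma~\ref{lem:coeffExtraction} is easy to check numerically. The following sketch (our own illustration, assuming Python with \texttt{sympy}) compares the direct sum~\eqref{eq:Sthetan} with the coefficient extraction, for the toy admissible choice $\theta_j = 1$ for all $j$ (so that $g_\theta(z) = -\log(1-z)$), $r=1$ and $p=(2)$:
\begin{verbatim}
# Numerical check of the coefficient-extraction lemma (illustrative only).
from math import factorial
import sympy as sp

n, p = 5, (2,)
r = len(p)

def compositions(total, k):
    # ordered tuples of k positive integers summing to total
    if k == 1:
        yield (total,)
        return
    for first in range(1, total - k + 2):
        for rest in compositions(total - first, k - 1):
            yield (first,) + rest

# direct evaluation of S_{theta,p,n} from its definition (theta_j = 1)
S = sp.Integer(0)
for k in range(r, n + 1):
    for js in compositions(n, k):
        term = sp.Rational(1, factorial(k - r))
        for j in js:
            term *= sp.Rational(1, 2 * j)
        for i, pi in enumerate(p):
            term *= sp.Rational(factorial(2 * js[i] + pi), factorial(2 * js[i] - 1))
        S += term

# coefficient extraction from G_{theta,p}(z), truncated at degree 2n
z = sp.symbols('z')
half_g = sum(sp.Rational(1, 2 * j) * z**(2 * j) for j in range(1, n + 1))
pi = p[0]  # r = 1 in this toy example
D_part = sum(sp.Rational(factorial(2 * j + pi), factorial(2 * j - 1))
             * sp.Rational(1, 2 * j) * z**(2 * j) for j in range(1, n + 1))
G = sp.exp(half_g).series(z, 0, 2 * n + 1).removeO() * D_part
assert sp.expand(G).coeff(z, 2 * n) == S
\end{verbatim}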
\begin{lem} \label{lem:DpOfLog}
For any $p \geq 0$ we have
\[
\frac{1}{p!} D_p \left( - \log(1 \pm z) \right)
= \frac{1}{(1 \pm z)^{p+1}} - 1.
\]
\end{lem}
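For $p=0$ the statement is immediate: $D_0(f) = z f'$, and $z \cdot \frac{d}{dz}\left(-\log(1\pm z)\right) = \frac{\mp z}{1\pm z} = \frac{1}{1 \pm z} - 1$.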
\begin{proof}
By Leibniz's rule,
\begin{align*}
\frac{z}{p!} \frac{d^{p+1}}{dz^{p+1}}\left(z^p \log\frac{1}{1-z}\right)
&= \frac{z}{p!} \sum_{i=0}^{p} \binom{p+1}{i} p(p-1)\cdots (p-i+1) z^{p-i} \frac{(p-i)!}{(1-z)^{p+1-i}} \\
&= -1 +\sum_{i=0}^{p+1} \binom{p+1}{i} \left(\frac{z}{1-z}\right)^{p-i+1} \\
&= -1 + \left(1 + \frac{z}{1-z}\right)^{p+1} \\
&= -1 + \frac{1}{(1-z)^{p+1}}.
\end{align*}
The proof for $- \log(1 + z)$ is similar.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:transfer}]
By Lemma~\ref{lem:coeffExtraction}, the sum $S_{\theta,p,n}$ is the coefficient in front of $z^{2n}$ of $G_{\theta,p}$.
By the conditions in the statement, $g_\theta(z^2) = - \log(1-z^2) + \beta + r_\theta(z)$ where $r_\theta(z)$ is holomorphic on $D(0,\sqrt{R})$ and $r_\theta(\pm 1) = 0$.
Using Lemma~\ref{lem:DpOfLog}, we deduce that for any $p \geq 0$ we have
\[
D_p \left( g_\theta(z^2) \right)
=
\frac{p!}{(1-z)^{p+1}} + \frac{p!}{(1+z)^{p+1}} + r_{\theta,p}(z)
\]
where $r_{\theta,p}$ is holomorphic in $D(0,\sqrt{R})$.
We deduce that $G_{\theta,p}(z)$ is analytic in $D(0,\sqrt{R}) \setminus ([1,\sqrt{R}) \cup (-\sqrt{R},-1])$
and satisfies, as $z \to 1$,
\begin{align*}
G_{\theta,p}(z) &= \exp\left(- \frac{1}{2} \left(\log(1-z) + \log(2) - \beta \right) + O(1-z) \right)
\prod_{i=1}^r \frac{p_i!}{2} \left(\frac{1}{(1-z)^{p_i+1}} + O(1) \right) \\
&=
\sqrt{\frac{e^\beta}{2}} \cdot \frac{p_1!\cdots p_r!}{2^r} \frac{1}{(1-z)^{p_1+\cdots+p_r+r+1/2}} \, (1 + o(1)).
\end{align*}
Similarly, as $z \to -1$ we have
\[
G_{\theta,p}(z)
=
\sqrt{\frac{e^\beta}{2}} \cdot \frac{p_1!\cdots p_r!}{2^r} \frac{1}{(1+z)^{p_1+\cdots+p_r+r+1/2}} (1 + o(1)).
\]
Now using~\cite[Theorem VI.5]{FS09}, we obtain
\[
[z^{2n}] \, G_{\theta,p}(z)
\sim 2 \cdot
\sqrt{\frac{e^\beta}{2}} \cdot \frac{p_1! \cdots p_r!}{2^r} \cdot \frac{(2n)^{p_1 + \cdots + p_r + r - 1/2}}{\varGamma(p_1+\cdots+p_r+r+1/2)}.
\]
This completes the proof.
\end{proof}
\subsection{Truncation estimates}
Recall that Theorem~\ref{thm:LgMomentsAsymptotics} provided an expression for the moment $M_p(\tilde{L}^{(g,m,\kappa)*})$
which involves a truncated version of the sum $S_{\theta,p,n}$ from~\eqref{eq:Sthetan}. In this
section, we show that the difference between $S_{\theta,p,n}$ and its truncation is negligible
compared to the asymptotics of Theorem~\ref{thm:transfer}.
\begin{thm} \label{thm:remainder}
Let $\theta$ and $g_\theta(z)$ be as in Theorem~\ref{thm:transfer}.
Then for any real $\kappa > 1$ we have as $n \to \infty$
\begin{equation} \label{eq:sumVSPartialSum}
S_{\theta,p,n} \sim
\sum_{k=r}^{\kappa \frac{\log(2n)}{2}} \frac{1}{(k-r)!} \sum_{\substack{(j_1,\dots,j_k) \in \mathbb{N}^k \\ j_1+\cdots +j_k = n}}
\prod_{i=1}^k \frac{\theta_{j_i}}{2 j_i} \prod_{i=1}^r \frac{(2 j_i + p_i)!}{(2j_i - 1)!}.
\end{equation}
\end{thm}
Bounding the coefficients of a Taylor expansion is a standard tool in asymptotic
analysis, e.g. the ``Big-Oh transfer''~\cite[Theorem VI.3]{FS09}. However, in our
situation we need to bound the $n$-th Taylor coefficient of a function $f_n$
that itself depends on $n$. To do so, we track the dependence on $n$ inside
the proof of the transfer theorem.
\begin{lem}[{\cite[Lemma 4.4]{DGZZ20}}]\label{lem:DGZZ20.4.4}
Let $\lambda$ and $x$ be positive real numbers. We have,
\[
\sum_{k=\lceil x\lambda \rceil}^{\infty} \frac{\lambda^k}{k!}
\leq \exp(-\lambda(x\log x -x)).
\]
\end{lem}
\begin{lem} \label{lem:partialSumUpperBound}
Let $h(z)$ be a holomorphic function on $D(0,R) \setminus ([1,R) \cup (-R,-1])$ such that, as $z \to \pm 1$, we have
\[
h(z) = - \frac{1}{2} \log(1-z^2) + O(1).
\]
Fix a real $\kappa > 1$, and non-negative integers $p$ and $q$.
For $n \geq 1$ let
\[
f_n(z)
= \frac{1}{(1-z)^p (1+z)^q} \sum_{k = \lfloor \kappa \frac{\log n}{2} \rfloor}^{\infty} \frac{h(z)^k}{k!}.
\]
Then, we have as $n \to \infty$
\[
[z^n] \, f_n(z)
= O\left( n^{\max\{p,q\}-1- \frac{1}{2} (\kappa\log\kappa - \kappa)} \right).
\]
\end{lem}
\begin{proof}
Let $0 < \eta < R-1$ and $0 < \phi < \pi/2$ and define the contour $\gamma$ as the union $\sigma_+ \cup \sigma_- \cup \lambda_\nearrow \cup \lambda_\nwarrow \cup \lambda_\searrow \cup \lambda_\swarrow \cup \Sigma_+ \cup \Sigma_-$ with
\begin{align*}
\sigma_+ & = \{z : |z-1| = 1/n, \ |\arg(z-1)|\geq \phi\}, \\
\sigma_- & = \{z : |z+1| = 1/n, \ |\arg(z+1)| \leq \pi - \phi\}, \\
\lambda_\nearrow & = \{z : |z-1|\geq 1/n,\ |z| \leq 1 + \eta, \arg(z-1) = \phi\}, \\
\lambda_\nwarrow & = \{z : |z-1|\geq 1/n,\ |z| \leq 1 + \eta, \arg(z-1) = -\phi\}, \\
\lambda_\searrow & = \{z : |z+1|\geq 1/n,\ |z| \leq 1 + \eta, \arg(z+1) = \pi - \phi\}, \\
\lambda_\swarrow & = \{z : |z+1|\geq 1/n,\ |z| \leq 1 + \eta, \arg(z+1) = -\pi + \phi\}, \\
\Sigma_+ & = \{z : |z| = 1+\eta,\ \arg(z-1) \geq \phi,\ \arg(z+1) \leq \pi - \phi\}, \\
\Sigma_- & = \{z : |z| = 1+\eta,\ \arg(z-1) \leq - \phi,\ \arg(z+1) \geq -\pi + \phi\}.
\end{align*}
See Figure~\ref{fig:contour} for a picture of $\gamma$.
Since $f_n$ is holomorphic on $D(0,R) \setminus ([1,R) \cup (-R,-1])$, Cauchy's formula for the coefficients gives
\begin{equation} \label{eq:Cauchy}
[z^n] \, f_n(z)
=
\frac{1}{2\pi i}\int_{\gamma} \frac{f_n(z)}{z^{n+1}} \, dz.
\end{equation}
\begin{figure}[ht]
\centering \includegraphics{contour.pdf}
\caption{The contour $\gamma$.}
\label{fig:contour}
\end{figure}
Taking absolute values in~\eqref{eq:Cauchy} we obtain
\begin{equation} \label{eq:CauchyUpperBound}
\left| [z^n] \, f_n(z) \right|
\leq \frac{1}{2\pi} \int_\gamma \frac{|dz|}{|z|^{n+1}} \frac{1}{|1-z|^p |1+z|^q} \sum_{k=\lfloor \kappa \frac{\log n}{2} \rfloor}^{\infty} \frac{|h(z)|^k}{k!}.
\end{equation}
The proof proceeds by analyzing the right-hand side in~\eqref{eq:CauchyUpperBound} for each piece of the contour $\gamma$.
Let us start with the small arc of the circle $\sigma_+$.
The change of variables $z = 1-e^{i \theta}/n$, together with the bound $|1+z| \geq 1$ valid on $\sigma_+$, yields
\[
\left|\frac{1}{2\pi i} \int_{\sigma_+} \frac{f_n(z)\,dz}{z^{n+1}} \right|
\leq
\frac{n^{p-1}}{2\pi} \int_{-\pi + \phi}^{\pi-\phi} \frac{d\theta}{|1-e^{i\theta}/n|^{n+1}} \sum_{k=\lfloor \kappa \frac{\log n}{2} \rfloor}^{\infty} \frac{|h(1-e^{i\theta}/n)|^k}{k!}.
\]
First $h(1-e^{i \theta}/n) = \frac{\log(n)}{2} + O(1)$ uniformly in $\theta$.
Hence, by Lemma~\ref{lem:DGZZ20.4.4}, uniformly in $\theta$ as $n \to \infty$ we have
\[
\sum_{k=\lfloor \kappa \frac{\log n}{2} \rfloor}^{\infty} \frac{|h(z)|^k}{k!} \leq \exp \left(- (\kappa\log \kappa - \kappa) \cdot \frac{\log n + O(1)}{2} \right)
= O \left( n^{-\frac{1}{2} (\kappa\log \kappa - \kappa)} \right).
\]
Since $\frac{1}{|1-e^{i\theta}/n|^{n+1}}$ is uniformly bounded in $n$,
\[
\left|\frac{1}{2\pi i} \int_{\sigma_+} \frac{f_n(z)\, dz}{z^{n+1}} \right|
= O\left( n^{p-1-\frac{1}{2} (\kappa\log \kappa - \kappa)} \right).
\]
Similarly,
\[
\left|\frac{1}{2\pi i} \int_{\sigma_-} \frac{f_n(z)\, dz}{z^{n+1}} \right|
= O\left( n^{q-1- \frac{1}{2} (\kappa\log \kappa - \kappa)} \right).
\]
Let us now consider the case of $\lambda_\nearrow$. Let $\rho$ be the positive solution of the equation $|1 + \rho e^{i\phi}| = 1+\eta$. Performing the change of variables $z = 1+e^{i\phi} \cdot t/n$, and again bounding $|1+z| \geq 1$, we have
\begin{align*}
& \left| \frac{1}{2\pi i}\int_{\lambda_\nearrow} \frac{f_n(z)}{z^{n+1}}\, dz \right| \\
& \qquad \leq \frac{n^{p-1}}{2\pi} \int_1^{n\rho} t^{-p} \left| 1+ e^{i\phi}t/n \right|^{-n-1} \sum_{k=\lfloor \kappa \frac{\log n}{2} \rfloor}^{\infty} \frac{|h(1 + e^{i\phi} t/n)|^k}{k!} \, dt.
\end{align*}
For $n$ large enough and uniformly in $t \in [1, n\rho]$, we have $|h(1+e^{i\phi} t/n)| \leq \frac{\log(n)}{2} + O(1)$.
Lemma~\ref{lem:DGZZ20.4.4} gives
\[
\sum_{k=\lfloor \kappa \frac{\log n}{2} \rfloor}^{\infty} \frac{|h(1+e^{i\phi}t/n)|^k}{k!}
=
O(n^{-\frac{1}{2} (\kappa\log\kappa - \kappa)}).
\]
Since $|1+e^{i\phi}\cdot t/n| \geq 1+\cos(\phi)\, t/n$, we have $|1+e^{i\phi}\cdot t/n|^{-n-1} \leq e^{-c\,t}$ for some constant $c = c(\phi,\eta) > 0$ and all $1 \leq t \leq n\rho$, hence
\[
\int_1^{n\rho} t^{-p} |1+e^{i\phi}\cdot t/n|^{-n-1}\, dt
= O(1),
\]
and therefore
\[
\left| \frac{1}{2\pi i} \int_{\lambda_\nearrow} \frac{f_n(z)\,dz}{z^{n+1}} \right|
=
O\left( n^{p-1-\frac{1}{2} (\kappa\log\kappa - \kappa)} \right).
\]
The same estimate holds for $\lambda_\nwarrow$, and the analogous estimate with $q$ in place of $p$ holds for the two segments $\lambda_\searrow$ and $\lambda_\swarrow$ near $-1$.
For the two large arcs $\Sigma_+$ and $\Sigma_-$, the integrand is bounded on $|z| = 1+\eta$ uniformly in $n$, so that
\[
\left| \frac{1}{2\pi i} \int_{\Sigma_+} \frac{f_n(z)\, dz}{z^{n+1}} \right|
\leq \frac{C_\eta}{(1+\eta)^{n}}
\]
for some constant $C_\eta > 0$ depending only on $\eta$, $\phi$, $p$, $q$ and $h$,
which decreases exponentially fast.
We conclude the proof by combining the above estimates.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:remainder}]
Similarly to Lemma~\ref{lem:coeffExtraction}, if we write
\[
G_{n,\theta,p}(z) \coloneqq \sum_{k = \lfloor \kappa \frac{\log(2n)}{2} \rfloor}^\infty \frac{(\frac{1}{2} g_\theta(z^2))^k}{k!} \prod_{i=1}^r D_{p_i}\left( \frac{1}{2} g_\theta(z^2) \right),
\]
then $[z^{2n}] \, G_{n,\theta,p}$ is the complement of the partial sum on the right-hand side of~\eqref{eq:sumVSPartialSum}. Following the proof of Theorem~\ref{thm:transfer} we obtain as $z \to 1$
\[
G_{n,\theta,p}(z) =
\sum_{k = \lfloor \kappa \frac{\log(2n)}{2} \rfloor}^\infty
\frac{(\frac{1}{2} g_\theta(z^2))^k}{k!}
\prod_{i=1}^r \frac{p_i!}{2} \left(\frac{1}{(1-z)^{p_i+1}} + O(1) \right)
\]
where the $O(1)$ is uniform in $n$ (it only depends on $g_\theta(z)$).
Applying Lemma~\ref{lem:partialSumUpperBound} we obtain
\[
[z^{2n}] \, G_{n,\theta,p}(z)
=
O \left( (2n)^{p_1+\cdots + p_r + r - 1 - \frac{1}{2} (\kappa\log\kappa - \kappa)} \right).
\]
For $\kappa > 1$ we have $-1-\frac{1}{2}(\kappa \log \kappa - \kappa) < -1/2$, and hence the above quantity is negligible compared to the asymptotics of the full sum $S_{\theta,p,n}$ from Theorem~\ref{thm:transfer}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:GEM}}
\begin{proof}[Proof of Theorem~\ref{thm:GEM}]
By Lemma~\ref{lem:momentMethod}, it suffices to prove the convergence of the moments $M_p(\tilde{L}^{(g,m,\kappa)*})$ for all $p=(p_1,\ldots,p_r)$ towards the moments of the $\GEM(1/2)$ distribution that were computed in Lemma~\ref{lem:GEMMoments}.
Now, Theorem~\ref{thm:LgMomentsAsymptotics} provides an asymptotic equivalence for $M_p(\tilde{L}^{(g,m,\kappa)*})$ involving the sum
\[
\sum_{k=r}^{\kappa \frac{\log(6g-6)}{2}} \frac{1}{(k-r)!} \sum_{\substack{(j_1,\dots,j_k) \in \mathbb{N}^k \\ j_1+\cdots +j_k = 3g-3}}
\prod_{i=1}^k \frac{\zeta_m(2 j_i)}{2 j_i} \prod_{i=1}^r \frac{(2 j_i + p_i)!}{(2j_i - 1)!}.
\]
The asymptotics of the above sum is obtained from Corollary~\ref{cor:zetamAsymptotics} (applied with $n = 3g-3$, so that $2n = 6g-6$) and Theorem~\ref{thm:remainder}. Namely, the above is asymptotically equivalent to
\[
\sqrt{\frac{m}{m+1}} \cdot
\frac{p_1! \cdots p_r!}{2^{r-1}} \cdot \frac{(6g-6)^{p_1 + \cdots + p_r + r - 1/2}}{\varGamma(p_1 + \cdots + p_r + r + 1/2)}.
\]
Substituting this value in the formula of Theorem~\ref{thm:LgMomentsAsymptotics} we
obtain as $g \to \infty$
\[
M_p(\tilde{L}^{(g,m,\kappa)*}) \sim
\frac{\sqrt{\pi}}{2^r} \cdot
\frac{p_1! \cdots p_r!}{\varGamma(p_1 + \cdots + p_r + r + 1/2)}.
\]
The above is the value of the moments $M_p$ of the distribution $\GEM(1/2)$ from
Lemma~\ref{lem:GEMMoments} as $\theta=1/2$ and $(\theta-1)! = (-1/2)! = \varGamma(1/2) = \sqrt{\pi}$.
Since the convergence of $M_p(\tilde{L}^{(g,m,\kappa)*})$ holds for all $p=(p_1,\ldots,p_r)$, the sequence $\tilde{L}^{(g,m,\kappa)*}$ converges in distribution towards $\GEM(1/2)$.
\end{proof}
\section{Setup}\label{appendix:setting}
This section details our experimental setup for reproducibility, including dataset information and training details for the various baselines and proposed RealPatch framework. The code is made available at \url{https://github.com/wearepal/RealPatch}.
\subsection{Dataset}
Following the setup used by Goel et al.~\cite{goel2020model}, Table~\ref{tab:appendix_data_size} summarises the size of each subgroup in both CelebA and Waterbirds. For each dataset, subgroup sizes are kept consistent across the three runs. The same information is provided for iWildCam-small, where 26 and 255 are the IDs of the two camera trap locations considered.
\begin{table}[h!]
\centering
\caption{Number of train/validation/test set images in each dataset.}
\scalebox{0.8}{
\begin{tabular}{llllll}
\toprule
\textbf{Dataset} & \textbf{Split} & \multicolumn{4}{c}{\textbf{Subgroup Size}} \\
\midrule
& & Non-Blonde & Non-Blonde & Blonde & Blonde\\
& & Female & Male & Female & Male\\
\cmidrule{3-6}
\textbf{CelebA} & \textbf{train} & 4054 & 66874 & 22880 & 1387\\
& \textbf{validation} & 8535 & 8276 & 2874 & 182\\
& \textbf{test} & 9767 & 7535 & 2480 & 180\\
\midrule
& & Landbird & Landbird & Waterbird & Waterbird\\
& & Land & Water & Land & Water\\
\cmidrule{3-6}
\textbf{Waterbirds} & \textbf{train} & 3498 & 184 & 56 & 1057\\
& \textbf{validation} & 467 & 466 & 133 & 133\\
& \textbf{test} & 2255 & 2255 & 642 & 642\\
\midrule
& & Meleagris Ocellata & Crax Rubra & Meleagris Ocellata & Crax Rubra\\
& & ID 26 & ID 255 & ID 26 & ID 255\\
\cmidrule{3-6}
\textbf{iWildCam-small} & \textbf{train} & 35 & 940 & 980 & 50\\
& \textbf{validation} & 80 & 80 & 80 & 400\\
& \textbf{test} & 85 & 80 & 90 & 449\\
\bottomrule
\end{tabular}}
\label{tab:appendix_data_size}
\end{table}
\subsection{Baseline Training Details}\label{appendix:baselines_training}
For CelebA and Waterbirds all four baselines use a fine-tuned ResNet50 architecture, pre-trained on ImageNet. For ERM, GDRO and CAMEL we follow the setup used in \cite{goel2020model}. For each baseline, the hyperparameters selected are summarised in Table~\ref{tab:baseline_hyperparameter_values}. For iWildCam-small we use features extracted with a pre-trained BiT model to train both ERM and SGDRO; for ERM we use a logistic regression model with regularisation $C\!=\!1$, L2-penalty, tolerance of $10^{-12}$ and sample weights inversely proportional to subgroup frequency. For SGDRO we perform model selection using the robust accuracy on the validation set.
We consider the following hyperparameter sweep for this baseline.
For the Waterbirds dataset, adjustment coefficient is in a range of $\{2, 3, 5, 7\}$, weight decay is in a range of $\{0.005, 0.01, 0.05\}$ and batch size is in a range of $\{64, 128, 256 \}$.
For the CelebA dataset, adjustment coefficient is in a range of $\{2, 3, 5\}$, weight decay is in a range of $\{0.005, 0.01\}$, and batch size is fixed to 64. For the iWildCam-small dataset, adjustment coefficient is in a range of $\{1, 2\}$, weight decay is fixed to $0.01$ and batch size is in a range of $\{64, 128\}$.
For all datasets, we trained SGDRO for 100 epochs. The selected hyperparameters for each of the three runs are summarised in Table \ref{tab:baseline_hyperparameter_values_sgdro}.
\begin{table}[h!]
\centering
\caption{The hyperparameters used for ERM, GDRO, and CAMEL baselines for CelabA and Waterbirds, following~\cite{goel2020model}.}
\scalebox{0.8}{
\begin{tabular}{llllllll}
\toprule
\textbf{Dataset} & \textbf{Method} & \multicolumn{6}{c}{\textbf{Hyperparameters}} \\
\midrule
& & \textbf{Epochs} & \textbf{Learning} & \textbf{Weight} & \textbf{Batch} & \textbf{GDRO} & \textbf{$\lambda$}\\
& & & \textbf{Rate} & \textbf{Decay} & \textbf{Size} & \textbf{Adjustment} &\\
\cmidrule{3-8}
\textbf{CelebA} & \textbf{ERM} & 50 & 0.00005 & 0.05 & 16 & - & - \\
& \textbf{GDRO} & 50 & 0.0001 & 0.05 & 16 & 3 & - \\
& \textbf{CAMEL} & 50 & 0.00005 & 0.05 & 16 & 3 & 5 \\
\midrule
\textbf{Waterbirds} & \textbf{ERM} & 500 & 0.001 & 0.001 & 16 & - & - \\
& \textbf{GDRO} & 500 & 0.00001 & 0.05 & 24 & 1 & - \\
& \textbf{CAMEL} & 500 & 0.0001 & 0.001 & 16 & 2 & 100 \\
\bottomrule
\end{tabular}}
\label{tab:baseline_hyperparameter_values}
\end{table}
\begin{table}[h!]
\centering
\caption{The hyperparameters used for the SGDRO baseline for each of the three runs.}
\scalebox{0.8}{
\begin{tabular}{lllll}
\toprule
\textbf{Dataset} & \textbf{Run} & \multicolumn{3}{c}{\textbf{Hyperparameters}} \\
\midrule
& & \textbf{Weight} & \textbf{GDRO} & \textbf{Batch}\\
& & \textbf{Decay} & \textbf{Adjustment} & \textbf{Size} \\
\cmidrule{3-5}
\textbf{CelebA} & \textbf{1} & 0.005 & 5 & 64 \\
& \textbf{2} & 0.005 & 5 & 64 \\
& \textbf{3} & 0.005 & 5 & 64\\
\midrule
\textbf{Waterbirds} & \textbf{1} & 0.01 & 7 & 64 \\
& \textbf{2} & 0.05 & 5 & 64 \\
& \textbf{3} & 0.005 & 2 & 256\\
\midrule
\textbf{iWildCam-small} & \textbf{1} & 0.01 & 2 & 128 \\
& \textbf{2} & 0.01 & 2 & 64 \\
& \textbf{3} & 0.01 & 1 & 64\\
\bottomrule
\end{tabular}}
\label{tab:baseline_hyperparameter_values_sgdro}
\end{table}
\subsection{RealPatch Training Details}\label{appendix:realpatch}
To give each image a chance of being included in the final matched dataset $D^{\star}$, we match in both directions, i.e. we consider both values of the spurious attribute to represent the treatment and control group in turn. The size of $D^{\star}$ can therefore be in the range $\lbrack0, 2N\rbrack$; $0$ in the extreme case where no image is paired and $2N$ in the case that all images are.
For example, in CelebA we first use our pipeline (Figure~\ref{fig:pipeline}) to match \textit{male} to \textit{female} samples, and then apply it to match \textit{female} to \textit{male} samples (using the same configuration and hyperparameters).
\paragraph{Reweighting strategy.} In our logistic regression models for predicting the propensity score we explore the use of no reweighting, as well as a \textit{spurious-reweighting} strategy. For each sample $s$, its weight $w_s$ is defined as:
\begin{equation*}
w_s = \frac{N}{2 \cdot N_{z_s}},
\end{equation*}
where $N_{z_s}$ is the size of the spurious group $\left(Z\!=\!z_s\right)$.
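To make Stage 1 concrete, the following is a minimal sketch of one matching direction (treatment $=$ one value of the spurious attribute), assuming pre-extracted features \texttt{X}, binary spurious labels \texttt{z} and scikit-learn. The function and variable names, the use of the propensity score itself as the closeness measure, and the placement of the temperature on the logit are our illustrative assumptions rather than the reference implementation, which is available in the repository linked above.
\begin{verbatim}
# Illustrative sketch of Stage 1 (one matching direction).
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_one_direction(X, z, t=0.9, c=0.1, alpha=0.6, reweight=False):
    n, n1 = len(z), int(z.sum())
    # optional spurious-reweighting: w_s = N / (2 * N_{z_s})
    w = np.where(z == 1, n / (2 * n1), n / (2 * (n - n1))) if reweight else None
    # 1) propensity score, temperature-scaled on the logit (assumption)
    lr = LogisticRegression(max_iter=1000).fit(X, z, sample_weight=w)
    ps = 1.0 / (1.0 + np.exp(-lr.decision_function(X) / t))
    # 2) fixed caliper: discard samples with extreme scores
    keep = (ps > c) & (ps < 1.0 - c)
    treated = np.flatnonzero(keep & (z == 1))
    control = np.flatnonzero(keep & (z == 0))
    # 4) std-based caliper: maximum allowed propensity score distance
    max_gap = alpha * ps[keep].std()
    # 3)+5) closest control (in propensity score) for each treated sample
    pairs = []
    for i in treated:
        j = control[np.argmin(np.abs(ps[control] - ps[i]))]
        if abs(ps[j] - ps[i]) <= max_gap:
            pairs.append((i, j))
    return pairs  # running both directions and taking the union yields D*
\end{verbatim}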
\paragraph{Hyperparameters for Reducing Subgroup Performance Gap.} We include the hyperparameter sweep and provide the best hyperparameters found for each dataset and run. To select the hyperparameters for Stage 1 of RealPatch we perform a grid search summarised in Table~\ref{tab:realpatch_hyperparameter_search}, selecting the configuration with the best covariate balance in terms of \textit{SMD} and \textit{VR}. Although we need to perform a hyperparameter search, we notice that the optimal values (Table~\ref{tab:realpatch_hyperparameter_values}) are quite stable across different seeds; in practice, the grid search for Stage~1 can be restricted. As for the hyperparameters of Stage 2, we perform model selection utilising the robust accuracy on the validation set.
We consider the following hyperparameter sweep. For the Waterbirds dataset, adjustment coefficient is in a range of $\{2, 3, 5, 7\}$, weight decay is in a range of $\{0.005, 0.01, 0.05\}$, regularisation strength $\lambda$ is in a range of $\{0, 1, 5, 10\}$ and batch size is in a range of $\{64, 128, 256\}$.
For the CelebA dataset, adjustment coefficient is in a range of $\{2, 3, 5\}$, weight decay is in a range of $\{0.005, 0.01\}$, $\lambda$ is in a range of $\{0, 1, 5\}$, and batch size fixed to 64.
For the iWildCam-small dataset, adjustment coefficient is in a range of $\{1, 2\}$, weight decay is fixed to $0.01$, $\lambda$ is in a range of $\{0, 1, 2, 7, 10, 12, 15\}$, and batch size is in a range of $\{64, 128\}$.
Table \ref{tab:realpatch_hyperparameter_values} reports the values of the best hyperparameters found.
\begin{table}[h!]
\begin{center}
\caption{Hyperparameter grid search used in Stage 1 of RealPatch for reducing subgroup performance gap.}
\scalebox{0.8}{
\begin{tabular}{ll}
\toprule
\textbf{Hyperparameter} & \textbf{Sweep} \\
\midrule
\textbf{PS-reweighting} & no reweighting\\
& spurious reweighting \\
\textbf{PS-temperature} (t) & $\lbrack0.6, 1.3\rbrack$ with step $0.05$\\
\textbf{Fixed caliper} ($c$) & 0.1\\
& 0.05\\
& 0 (None)\\
\textbf{Std-based caliper} ($\alpha$) & 0.2 \\
& 0.4 \\
& 0.6 \\
& $\infty$ (None) \\
\bottomrule
\end{tabular}}
\label{tab:realpatch_hyperparameter_search}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{The hyperparameters values selected for RealPatch on CelebA, Waterbirds and iWildCam-small across three runs.}
\scalebox{0.8}{
\medskip
\begin{tabular}{lllllllll}
\toprule
\multicolumn{6}{c}{\textbf{CelebA} dataset} \\
\midrule
\textbf{Run} & \textbf{PS-reweighting} & $t$ & \textbf{$c$} & \textbf{$\alpha$} & \textbf{Weight} & \textbf{GDRO} & \textbf{Reg.} & \textbf{Batch} \\
& & & & & \textbf{Decay} & \textbf{Adj.} & $\lambda$ & \textbf{Size} \\
\cmidrule{2-9}
\textbf{1} & no reweighting & 0.7 & 0.1 & 0.6 & 0.01 & 5 & 5 & 64\\
\textbf{2} & no reweighting & 0.7 & 0.1 & 0.6 & 0.005 & 5 & 1 & 64\\
\textbf{3} & no reweighting & 0.7 & 0.1 & 0.6 & 0.005 & 5 & 1 & 64\\
\midrule
\multicolumn{6}{c}{\textbf{Waterbirds} dataset} \\
\midrule
\textbf{Run} & \textbf{PS-reweighting} & $t$ & \textbf{$c$} & \textbf{$\alpha$} & \textbf{Weight} & \textbf{GDRO} & \textbf{Reg.} & \textbf{Batch} \\
& & & & & \textbf{Decay} & \textbf{Adj.} & $\lambda$ & \textbf{Size} \\
\cmidrule{2-9}
\textbf{1} & no reweighting & 0.9 & 0.1 & $\infty$ & 0.05 & 3 & 1 & 128\\
\textbf{2} & no reweighting & 0.9 & 0.1 & $\infty$ & 0.05 & 3 & 1 & 128\\
\textbf{3} & no reweighting & 0.7 & 0.1 & $\infty$ & 0.005 & 2 & 1 & 256\\
\midrule
\multicolumn{6}{c}{\textbf{iWildCam-small} dataset} \\
\midrule
\textbf{Run} & \textbf{PS-reweighting} & $t$ & \textbf{$c$} & \textbf{$\alpha$} & \textbf{Weight} & \textbf{GDRO} & \textbf{Reg.} & \textbf{Batch} \\
& & & & & \textbf{Decay} & \textbf{Adj.} & $\lambda$ & \textbf{Size} \\
\cmidrule{2-9}
\textbf{1} & spurious-reweighting & 1 & 0.05 & $\infty$ & 0.01 & 2 & 5 & 128\\
\textbf{2} & spurious-reweighting & 1.3 & 0.1 & $\infty$ & 0.01 & 1 & 12 & 128\\
\textbf{3} & spurious-reweighting & 1 & 0.05 & $\infty$ & 0.001 & 1 & 10 & 64\\
\bottomrule
\end{tabular}}
\label{tab:realpatch_hyperparameter_values}
\end{center}
\end{table}
\paragraph{Hyperparameters for Reducing Dataset and Model Leakage.} For the imSitu dataset we perform a grid search over hyperparameters, using \textit{spurious reweighting} in the propensity score estimation model, temperature $t\!=\!\lbrack0.6, 1\rbrack$ with step $0.1$, a fixed caliper with $c\!=\!\left\{0, 0.1\right\}$, and an std-based caliper with $\alpha\!=\!0.2$. For model selection, we use the covariate balance achieved on the training set in terms of \textit{SMD} and \textit{VR}. The selected hyperparameters are \textit{spurious reweighting}, $t\!=\!0.6$, $c\!=\!0$, and $\alpha\!=\!0.2$.
\section{Results}
In Appendix \ref{sec:realpacth_results} we show additional results for our RealPatch framework. In Appendix \ref{sec:camel_results} we report the results obtained using different setups for the CAMEL baseline.
\subsection{RealPatch}\label{sec:realpacth_results}
In this section we include 1) information confirming the effect of RealPatch hyperparameters (further to the \hyperref[sec:ablation]{Ablation Analysis}), 2) additional examples of RealPatch and CycleGAN counterfactuals for both the CelebA and Waterbirds datasets, 3) subgroup results for each dataset, 4) examples of matched pairs and achieved matching quality for iWildCam-small, and 5) examples of matched pairs and achieved matching quality for the imSitu dataset.
\paragraph{Effect of Temperature on Propensity Score.} For a single run of Waterbirds, in Figure \ref{fig:ablation_ps_waterbirds} we show the estimated propensity score distribution for each of the four subgroups of the dataset obtained after matching, $D^{\star}$. Similarly to Figure~\ref{fig:ablation_ps_celeba}, we compare the distributions obtained when imposing no temperature scaling ($t\!=\!1$) and when selecting the temperature hyperparameter (here, $t\!=\!0.9$). The figure shows results consistent with what was already observed on CelebA: decreasing $t$ leads to the two modes having more similar values, resulting in a matched dataset with better propensity score balance and covariate balance in terms of \textit{SMD} and \textit{VR} (Table~\ref{tab:ablation_balance}).
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.16]{figures/waterbirds_ps_after_matching.png}
\caption{Estimated propensity score distributions on the Waterbirds dataset after matching, shown for each of the four subgroups. We compare the original distribution (blue, $t\!=\!1$) with its scaled version using the selected temperature (orange, $t\!=\!0.9$). Post-matching, the propensity score is approximately bimodal, showing that our procedure is balancing the propensity distribution across subgroups. Decreasing $t$ makes the two modes have more similar values, resulting in a matched dataset with better covariate balance in terms of SMD and VR (Table~\ref{tab:ablation_balance}).}
\label{fig:ablation_ps_waterbirds}
\end{figure*}
\begin{figure*}[h!]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[scale=0.575]{figures/celeba_example_appendix_f_to_m.pdf}
\caption{Examples of female images and their male counterfactuals.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[scale=0.575]{figures/celeba_example_appendix_m_to_f.pdf}
\caption{Examples of male images and their female counterfactuals.}
\end{subfigure}
\caption{Examples of pairs retrieved using Stage 1 of RealPatch (top); both original and matched images are real samples from the CelebA dataset. We also show CycleGAN synthetic counterfactual results (bottom) on the same attribute-translation task.}
\label{fig:celeba_example_appendix}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[scale=0.575]{figures/waterbirds_example_appendix_l_to_w.pdf}
\caption{Examples of birds on land and their counterfactuals in water.}
\label{fig:waterbirds_example_appendix_l_to_w}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[scale=0.57]{figures/waterbirds_example_appendix_w_to_l.pdf}
\caption{Examples of birds on water and their counterfactuals in land.}
\label{fig:waterbirds_example_appendix_w_to_l}
\end{subfigure}
\caption{Examples of pairs retrieved using Stage 1 of RealPatch (top); both original and matched images are real samples from the Waterbirds dataset. We also show CycleGAN synthetic counterfactual results (bottom) on the same attribute-translation task. CycleGAN often adds artifacts and is frequently unable to recognise birds in the Waterbirds dataset (often removing them when translating from land to water; see the left column of Figure~\ref{fig:waterbirds_example_appendix_l_to_w}).}
\label{fig:waterbirds_example_appendix}
\end{figure*}
\paragraph{Additional Counterfactual Examples.} In this section we show additional samples of retrieved matched pairs as well as random synthetic examples generated using CycleGAN. For the CelebA dataset, in Figure~\ref{fig:celeba_example_appendix} we include our results (a) when matching females-to-males and (b) males-to-females. Similarly, for the Waterbirds dataset we include in Figure~\ref{fig:waterbirds_example_appendix} the matched pairs (a) land-to-water and (b) water-to-land. In both datasets we notice that CycleGAN often adds artifacts and is frequently unable to recognise birds in the Waterbirds dataset (often removing them when translating from land to water; see Figure \ref{fig:waterbirds_example_appendix_l_to_w}).
\paragraph{Subgroup results.} Table~\ref{tab:results_subgroup} and Table~\ref{tab:results_subgroup_iwildcam} extend Table~\ref{tab:results_aggregate} and Table~\ref{tab:results_iwildcam} to include the accuracy of all four subgroups.
It is worth mentioning that the worst-case accuracy can be observed in different subgroups across the three runs; therefore the average robust accuracy does not necessarily correspond to the average accuracy of one of the four subgroups.
Although in all methods, including the baselines, the accuracy of some subgroup(s) degrades in order to improve the worst case, RealPatch achieves a strikingly better trade-off between aggregate and robust accuracy than all the baselines.
\begin{table*}[t!]
\begin{center}
\medskip
\caption{A comparison between RealPatch and four baselines on two benchmark datasets which includes the subgroup results. This table is an extension of Table~\ref{tab:results_aggregate}. The results shown are the average (standard deviation) performances over three runs.}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{lllllllll}
\toprule
\textbf{Dataset} & \textbf{Method} & \makecell{\textbf{Aggregate} $\uparrow$\\ \textbf{Acc. (\%)}} & \makecell{\textbf{Robust} $\uparrow$\\ \textbf{Acc. (\%)}} & \makecell{\textbf{Robust} $\downarrow$ \\ \textbf{Gap (\%)}} & \multicolumn{4}{c}{\makecell{\textbf{Subgroup} $\uparrow$ \quad Y \\ \textbf{Acc. (\%)} \quad Z}} \\
\midrule
& & & & & Non-Blonde, & Non-Blonde, & Blonde, & Blonde, \\
& & & & & Female & Male & Female & Male\\
\cmidrule{6-9}
\textbf{CelebA} & \textbf{ERM} & 89.21 (0.32) & 55.3 (0.65) & 43.48 (0.68) & 80.2 (0.78) & 98.78 (0.11) & 98.07 (0.39) & 55.3 (0.65) \\
& \textbf{GDRO} & \textbf{90.47} (7.16) & 63.43 (18.99) & 34.77 (19.65) & 90.03 (10.21) & 92.5 (9.57) & 87.66 (11.07) & 68.75 (26.15) \\
& \textbf{SGDRO} & 88.92 (0.18) & 82.96 (1.39) & 7.13 (1.67) & 90.09 (0.31) & 87.67 (0.38) & 88.52 (1.29) & 82.96 (1.39)\\
& \textbf{CAMEL} & 84.51 (5.59) & 81.48 (3.94) & \textbf{5.09} (0.44) & 85.57 (5.48) & 82.51 (5.26) & 84.15 (2.50) & 81.63 (3.70) \\
& \textbf{RealPatch (Our)} & 89.06 (0.13) & \textbf{84.82} (0.85) & 5.19 (0.9) & 90.01 (0.05) & 87.78 (0.14) & 89.52 (0.63) & 84.82 (0.85)\\
\midrule
& & & & & Landbird, & Landbird, & Waterbird, & Waterbird, \\
& & & & & Land & Water & Land & Water\\
\cmidrule{6-9}
\textbf{Waterbirds} & \textbf{ERM} & 86.36 (0.39) & 66.88 (3.76) & 32.57 (3.95) & 99.45 (0.22) & 76.39 (1.36) & 66.88 (3.76) & 94.95 (0.5) \\
& \textbf{GDRO} & \textbf{88.26} (0.55) & 81.03 (1.16) & 14.80 (1.15) & 95.83 (0.36) & 81.03 (1.16) & 83.01 (0.7) & 92.2 (0.81) \\
& \textbf{SGDRO} & 86.85 (1.71) & 83.11 (3.65) & 6.61 (6.01) & 88.53 (4.08) & 85.99 (1.95) & 84.63 (4.81) & 86.19 (1.56)\\
& \textbf{CAMEL} & 79.0 (14.24) & 76.82 (18.0) & 7.35 (5.66) & 77.17 (17.39) & 82.08 (12.23) & 84.17 (12.86) & 78.85 (19.72) \\
& \textbf{RealPatch (Our)} & 86.89 (1.34) & \textbf{84.44} (2.53) & \textbf{4.43} (4.48) & 88.03 (3.03) & 86.39 (1.1) & 85.67 (3.54) & 85.93 (0.78)\\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:results_subgroup}
\end{center}
\end{table*}
\begin{table*}[t!]
\begin{center}
\medskip
\caption{A comparison between RealPatch and two baselines on the iWildCam-small dataset which includes the subgroup results. This table is an extension of Table~\ref{tab:results_iwildcam}. The results shown are the average (standard deviation) performances over three runs.}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{llllllll}
\toprule
\textbf{Method} & \makecell{\textbf{Aggregate} $\uparrow$\\ \textbf{Acc. (\%)}} & \makecell{\textbf{Robust} $\uparrow$\\ \textbf{Acc. (\%)}} & \makecell{\textbf{Robust} $\downarrow$ \\ \textbf{Gap (\%)}} & \multicolumn{4}{c}{\makecell{\textbf{Subgroup} $\uparrow$ \quad Y \\ \textbf{Acc. (\%)} \quad Z}} \\
\midrule
& & & & Meleagris Ocellata, & Meleagris Ocellata, & Crax Rubra, & Crax Rubra, \\
& & & & 26 & 255 & 26 & 255\\
\cmidrule{5-8}
\textbf{ERM} & \textbf{79.97} (1.18) & 75.43 (3.01) & 19.65 (1.96) & 84.31 (7.33) & 87.07 (2.95) & 92.22 (4.8) & 75.43 (3.01)\\
\textbf{SGDRO} & 78.55 (2.45) & 75.5 (3.58) & 14.28 (4.35) & 85.49 (4.93) & 87.5 (3.06) & 79.25 (2.76) & 75.5 (3.58)\\
\textbf{RealPatch (Our)} & 79.36 (2.09) & \textbf{76.7} (3.19) & \textbf{11.36} (4.87) & 87.06 (4.8) & 84.58 (2.95) & 80.37 (1.38) & 76.76 (3.26)\\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:results_subgroup_iwildcam}
\end{center}
\end{table*}
\paragraph{Additional Results for iWildCam-small.} In Figure~\ref{fig:iwildcam_matching} we show samples of retrieved matched pairs; we can see how matching is able to preserve the bird species as well as the background colours.
Similarly to Table~\ref{tab:ablation_balance}, in Table~\ref{tab:ablation_balance_iwildcam} we compare the effect of the main components of our statistical matching stage for the iWildCam-small dataset, analysing the effect of temperature scaling and calipers. The best configurations selected for the three runs do not include the std-based caliper, therefore we do not study the effect of removing this component (i.e. setting $\alpha=\infty$). The results are consistent with what was observed for CelebA and Waterbirds: the strongest effect is obtained by removing the influence of the fixed caliper, while the impact of temperature scaling is weaker overall.
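For reference, the balance statistics reported in these tables can be computed as follows; this is a sketch assuming the standard definitions of the standardized mean difference and variance ratio between the two spurious groups, applied per feature dimension, with the bucket thresholds taken from the table headers.
\begin{verbatim}
# Sketch: per-covariate balance buckets (standard SMD/VR definitions assumed).
import numpy as np

def balance_buckets(X, z):
    Xt, Xc = X[z == 1], X[z == 0]  # the two spurious groups
    smd = np.abs(Xt.mean(0) - Xc.mean(0)) / np.sqrt((Xt.var(0) + Xc.var(0)) / 2)
    vr = Xt.var(0) / Xc.var(0)
    return {"smd<=0.1": int((smd <= 0.1).sum()),
            "smd_mid":  int(((smd > 0.1) & (smd < 0.2)).sum()),
            "smd>=0.2": int((smd >= 0.2).sum()),
            "vr<=4/5":  int((vr <= 0.8).sum()),
            "vr_mid":   int(((vr > 0.8) & (vr < 1.25)).sum()),
            "vr>=5/4":  int((vr >= 1.25).sum())}
\end{verbatim}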
\begin{table}[t!]
\begin{center}
\caption{Comparison of the covariate balance in 1) the original dataset $D$, 2) the matched dataset $D^{\star}$, 3) the matched dataset $D^{\star}$ with no temperature scaling, and 4) $D^{\star}$ with no fixed caliper. The results are reported for a single run. Our matching procedure successfully improves the covariate balance on the iWildCam-small dataset, with the fixed caliper significantly boosting its quality.}
\medskip
\scalebox{.85}{
\begin{tabular}{lllllll}
\toprule
& \multicolumn{3}{c}{\makecell{\textbf{SMD}}} & \multicolumn{3}{c}{\makecell{\textbf{VR}}} \\
\midrule
& $\le 0.1$ $\uparrow$ & $(0.1, 0.2)$ $\downarrow$ & $\ge 0.2$ $\downarrow$ & $\le 4/5$ $\downarrow$ & $(4/5, 5/4)$ $\uparrow$ & $\ge 5/4$ $\downarrow$\\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
$D$ & 413 & 354 & 1281 & 612 & 471 & 965\\
$D^{\star}$ (best) & \textbf{1125} & 656 & 267 & 161 & \textbf{1005} & 882\\
$D^{\star}\,\,(t\!=\!1)$ & 753 & 615 & 680 & 191 & 695 & 1162\\
$D^{\star}\,\,(c\!=\!0)$ & 1037 & 641 & 370 & 331 & 930 & 787\\
\bottomrule
\end{tabular}
}
\label{tab:ablation_balance_iwildcam}
\end{center}
\end{table}
\begin{figure}[th!]
\centering
\includegraphics[scale=0.825]{figures/iwildcam_examples.pdf}
\caption{Matched samples on a subset of iWildCam dataset using the spurious attribute camera trap location.}
\label{fig:iwildcam_matching}
\end{figure}
\paragraph{Additional Results for imSitu.} In Figure \ref{fig:imsitu_example_appendix} we show examples of matched pairs retrieved using RealPatch on the imSitu dataset. Here, we observe that the activity is generally preserved, though not necessarily reflecting an identical \emph{situation} label in the dataset; for example, we have matched images of agents ``pumping'' and ``cleaning'' a car (both related to car maintenance) or agents ``curling'' and ``combing'' hair (both related to hair styling). Additionally, in Table~\ref{tab:matching_balance_imsitu} we show the achieved covariate balance on imSitu: RealPatch is able to produce a matched dataset with the majority of covariates well balanced in terms of \textit{SMD} and \textit{VR}.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[scale=0.575]{figures/imsitu_example_appendix_m_to_f.pdf}
\caption{Examples of male images and their female counterfactuals}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[scale=0.575]{figures/imsitu_example_appendix_f_to_m.pdf}
\caption{Examples of female images and their male counterfactuals}
\end{subfigure}
\caption{Examples of pairs retrieved using Stage 1 of RealPatch; both original and matched images are real samples from the imSitu dataset. Note that activities are generally preserved across pairs despite not conditioning on the target class during matching.}
\label{fig:imsitu_example_appendix}
\end{figure*}
\begin{table}[t!]
\caption{A comparison of the covariate balance on imSitu, before matching ($D$) and after matching ($D^{\star}$). Our procedure is able to produce a dataset with the majority of covariates well balanced (992 and 1010 out of 1024) in terms of \textit{SMD} and \textit{VR}.}
\begin{center}
\scalebox{0.85}{
\medskip
\begin{tabular}{lllllll}
\toprule
& \multicolumn{3}{c}{\textbf{SMD}} & \multicolumn{3}{c}{\textbf{VR}}\\
\midrule
& $\le 0.1$ $\uparrow$ & $(0.1, 0.2)$ $\downarrow$ & $\ge 0.2$ $\downarrow$ & $\le 4/5$ $\downarrow$ & $(4/5, 5/4)$ $\uparrow$ & $\ge 5/4$ $\downarrow$\\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
$D$ & 327 & 271 & 426 & 272 & 510 & 242\\
$D^{\star}$ & \textbf{992} & 32 & 0 & 4 & \textbf{1010} & 10\\
\bottomrule
\end{tabular}}
\label{tab:matching_balance_imsitu}
\end{center}
\end{table}
\subsection{CAMEL}\label{sec:camel_results}
In Table~\ref{tab:results_camel} we report three results for the CAMEL model: (a) the metrics obtained after training the model for the full 50 epochs for CelebA (and 500 epochs for Waterbirds) as per \cite{goel2020model}; (b) the results from the epoch where the model achieved the best robust metric on the validation set; in accordance with RealPatch, we report the average (standard deviation) across three repeats over different data splits for both (a) and (b); (c) the results from Table 2 in \cite{goel2020model} are also included, since the authors have a different setup, namely they keep the default train/validation/test split while \emph{changing the random seed used to initialise the model}.
We include both settings (a) and (b) since the exact procedure in \cite{goel2020model} is somewhat unclear; we use the authors' implementation of CAMEL to produce them. The results in Table~\ref{tab:results_aggregate} show the output of the method described in (b) as it appears to be the closest comparison.
\begin{table*}[t!]
\begin{center}
\caption{Three different results for the CAMEL model: (a) metrics obtained at the last epoch of training; (b) results from the epoch where the model achieved the best robust metric on the validation set; (c) the results included in Table 2 of \cite{goel2020model}.}
\medskip
\begin{adjustbox}{width=0.8\textwidth}
\begin{tabular}{llllll}
\toprule
\textbf{Dataset} & \textbf{Method} & \makecell{\textbf{Aggregate} $\uparrow$\\ \textbf{Acc. (\%)}} & \makecell{\textbf{Robust} $\uparrow$ \\ \textbf{Acc. (\%)}} & \makecell{\textbf{Robust} $\downarrow$\\ \textbf{Gap (\%)}} \\
\midrule
\textbf{CelebA} & \textbf{CAMEL} (re-run epoch 50) & 96.6 (0.51) & 57.96 (3.55) & 40.12 (4.18) \\
& \textbf{CAMEL} (re-run SGDRO) & 84.51 (5.59) & 81.48 (3.94) & 5.09 (0.44) \\
& \textbf{CAMEL} \cite{goel2020model}, Table 2 & 92.90 (0.35) & 83.90 (1.31) & - \\
\midrule
\textbf{Waterbirds} & \textbf{CAMEL} (re-run epoch 500) & 89.63 (7.84) & 68.12 (6.93) & 29.59 (3.91)\\
& \textbf{CAMEL} (re-run SGDRO) & 79.0 (14.24) & 76.82 (18.0) & 7.35 (5.66) \\
& \textbf{CAMEL} \cite{goel2020model}, Table 2 & 90.89 (0.87) & 89.12 (0.36) & - \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:results_camel}
\end{center}
\end{table*}
\section{Introduction}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.75]{figures/celeba_teaser_bmq.pdf}
\caption{Examples of images and their counterfactuals on the attribute male/female, retrieved using RealPatch (left); both original and matched images are real samples from the CelebA dataset. RealPatch preserves characteristics across matched pairs such as pose, facial expression, and accessories. We also show CycleGAN synthetic counterfactual results (right) on the same attribute.}
\label{fig:celeba_teaser}
\end{figure}
Machine learning models have fast become powerful yet ferocious pattern matching tools, able to exploit complex relationships and distant correlations present in a dataset. While often improving the average accuracy across a dataset, making use of spurious correlations (i.e. relationships that appear causal but in reality are not) for decision making is often undesirable, hurting generalization in the case that spurious correlations do not hold in the test distribution, and resulting in models that are biased towards certain subgroups or populations.
Recent works in machine learning have studied the close link between invariance to spurious correlations and causation~\cite{PetBuhMei16,HeiMeiPet18,ArjBotGulPaz19,MitMcWWalHoletal21,VeiDamYadEis21}.
Causal analysis allows us to ask counterfactual questions in the context of machine learning predictions by relying on attribute-labelled data and imagining ``what would happen if"
some of these attributes were different.
For example, ``would the prediction of a smile change had this person's cheeks been rosy''?
While simple to answer with tabular data, generating counterfactuals for image data is non-trivial.
Recent advances in generative adversarial networks (GAN) aim to build a realistic generative model of images that affords controlled manipulation of specific attributes, in effect generating image counterfactuals.
Leveraging research progress in GANs, several works now use counterfactuals for (a) detecting unintended algorithmic bias, e.g. checking whether a classifier's ``smile'' prediction flips when traversing different attributes such as ``heavy makeup'' \cite{DenHutMitGebetal20}, and (b) reducing the gap in subgroup performance, e.g. ensuring a ``blonde hair'' classifier performs equally well on male and female subgroups \cite{sharmanska2020contrastive,goel2020model}. The former relies on an \textit{invert then edit} methodology, in which images are first inverted into the latent space of a pre-trained GAN model for generating counterfactuals, while the latter uses an \textit{image-to-image translation} methodology. One of the most recent approaches, CAMEL \cite{goel2020model}, focuses on the latter usage of counterfactuals to \emph{patch} the classifier's dependence on subgroup-specific features.
GAN-based counterfactual results are encouraging; however, GAN models suffer from a number of common issues such as mode collapse, failure to converge, and poor generated results in settings with a large number of class labels and limited samples per class. We provide an alternative counterfactual-based model patching method that is simpler, faster, and more data-efficient. We focus on a statistical matching technique (see for example \cite{rubin1973matching}) such that for every image, we find an image with similar observable features yet the opposite attribute value to the observed one; our counterfactual results are shown in Figure \ref{fig:celeba_teaser}. The statistical matching framework has been widely utilised to assess causal relationships in numerous fields, such as education \cite{morgan2001counterfactuals}, medicine \cite{chastain1985estimated,christian2010prenatal,saunders1993association}, and community policies \cite{biglan2000value,perry1996project}, to name a few.
In this work, we explore statistical matching in the context of computer vision, and show its application for model patching with real samples.
Our paper provides the following contributions:
\begin{enumerate}
\item We propose an image-based counterfactual approach for model patching called \emph{RealPatch} that uses real images instead of GAN generated images;
\item We provide an empirical evaluation
of different statistical matching strategies for vision datasets. Our results can be used as a \emph{guideline} for future statistical matching applications, for example showing the importance of using calipers;
\item We show applications of RealPatch for improving the worst-case performance across subgroups and reducing the subgroup performance gap in a $2$-class classification setting. We observe that spurious correlation leads to shortcut learning, and show how RealPatch mitigates this by utilising a balanced dataset to regularise the training;
\item We show applications of RealPatch for reducing dataset leakage and model leakage in a $211$-class classification setting.
\end{enumerate}
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.475]{figures/matching_pipeline.pdf}
\caption{RealPatch: statistical matching pipeline. Given the dataset $D$ and the spurious attribute $Z$, the output is a matched dataset $D^{\star}$. To produce $D^{\star}$ we: 1) estimate the propensity score, and adjust it with temperature scaling; 2) restrict $D$ using the fixed caliper to remove \textit{extreme} samples; 3) compute the pair-wise closeness for each sample; 4) use the std-caliper to restrict the possible pairs according to a maximum propensity score distance; 5) for each sample, select the closest sample in the opposite group.}
\label{fig:pipeline}
\end{figure*}
\textbf{Related Work.}
Data augmentation strategies using operations such as translation, rotation, flipping, cutout \cite{DeVTay17}, mixup \cite{ZhaCisDauLop18}, and cutmix \cite{YunHanOhChuetal19} are widely used for increasing the aggregate performance of machine learning models in computer
vision applications.
To improve performance in a targeted fashion, image transformation techniques that learn to produce semantic changes to an image are used to generate samples for underrepresented subgroups.
Sharmanska et al. \cite{sharmanska2020contrastive} used a StarGAN model \cite{ChoChoKimHaetal18} to augment the dataset with respect to a subgroup-specific feature, and subsequently optimized a standard Empirical Risk Minimization (ERM) training objective.
In contrast, CAMEL, a framework by Goel et al. \cite{goel2020model}, used a CycleGAN image transformation approach \cite{ZhuParIsoEfr17} and minimized a Sub-Group Distributionally Robust Optimization (SGDRO) objective function.
The GDRO method \cite{sagawa2019distributionally} aims to minimize the worst-case loss over groups in the training data.
CAMEL models \cite{goel2020model} minimize the class-conditional worst-case loss over groups.
Another approach to reduce the effects of spurious correlation is optimizing a notion of invariance.
Invariance serves as a proxy for causality, as features representing ``causes'' of class labels rather than ``effects'' will generalize well under intervention.
Invariant Risk Minimization (IRM) \cite{ArjBotGulPaz19} tries to find a data representation which discards the spurious correlations by enforcing that the classifier acting on that representation is simultaneously optimal in each subgroup.
However, more analysis and better algorithms are needed to realize the promise of this framework in practice \cite{MitMcWWalHoletal21,goel2020model}.
Model patching focuses on robustness with respect to the unexpected failure of standard classifiers on subgroups of a class.
Subgroups can correspond to environments/domains such as water or land, and can also refer to demographic attributes such as females or males \cite{CreJacZem21}.
Our work is therefore also related to many works addressing dataset biases in computer vision, particularly, in which the notion of bias relates to demographic attributes (e.g. \cite{Wang2019ICCV,YanQinFeiDenetal20,KehBarThoQua20,KehBarShaQua22}).
Wang et al. \cite{Wang2019ICCV} showed that even when datasets are balanced e.g. each class label co-occurs equally with each gender, learned models amplify the association between labels and gender, as much as if data had not been balanced.
We refine their conclusions about balanced datasets and show that balancing with a statistical matching framework can successfully eliminate dataset leakage while reducing model leakage and maintaining high utility.
\section{Our RealPatch Framework}
We propose \textit{RealPatch}, a framework that first resamples a dataset such that the \textit{spurious groups} are balanced and equally informative, and then utilises this dataset to regularise a classification objective.
RealPatch only uses real samples from the original dataset when constructing the augmented dataset, making it faster to perform, and simpler to apply to new tasks, compared to approaches such as CAMEL \cite{goel2020model} which require models to generate synthetic samples.
Unlike standard data augmentation, our augmentation is in the context of statistical matching; it is a model-based approach for providing joint statistical information based on variables collected through two or more data sources.
If we have two sources, e.g. male and female, matching augments the source domain of male images with female images, and the domain of female images with male images.
In this section we outline the two stages of RealPatch. In Stage 1, a statistical matching procedure is used to construct a \emph{matched dataset}, a collection of comparable pairs of images that have opposite values of the spurious attribute. In Stage 2, we learn a model to predict the target label by including the representations of instances in the matched dataset.
\textbf{Setup.} Given a dataset of $N$ samples $D\!=\!\left\{1, \dots, N \right\}$ with target label $Y$ and spurious label $Z$, the dataset is divided into two \textit{spurious groups} $D_T$ and $D_C$ based on the value of $Z$. These partitions define the so-called \emph{treatment} ($Z\!=\!1$) and \emph{control} ($Z\!=\!0$) groups of size $N_T$ and $N_C$, respectively. Additionally, we call \textit{target groups} the two partitions created by $Y$ and \textit{subgroups} the four sets induced by both $Y$ and $Z$.
We use $X$ to denote feature representations of the input images extracted from a pre-trained model such as ResNet~\cite{he2016deep}, or Big Transfer BiT~\cite{kolesnikov2020big}.
In our framework these encoded representations $X$ are the \emph{observed covariates} that are used to compute the distances $M$ between images and identify the matched pairs. Following work in causal inference, each image representation in $X$ is assigned a propensity score, a measure of how likely an image $s$ is to belong to the treatment group, $e_s = \hat{P}(Z_s\!=\!1|X_s)$. Propensity scores are used during Stage 1 to help prevent the inclusion of instances that would lead to poor matches.
\subsection{Stage 1: Statistical Matching}\label{sec:matching}
Matching is a sampling method to reduce model dependency and enforce covariate balance in observational studies across a treatment and control group.
In this work, we study the nearest-neighbour (NN) matching algorithm, which for each treatment sample selects the closest control sample.
Figure~\ref{fig:pipeline} depicts our proposed matching pipeline.
The pipeline has the following main building blocks: 1) \emph{propensity score estimation}; 2) \emph{closeness measure}; and 3) \emph{calipers} as a threshold mechanism. Before using the matched dataset in Stage 2, the \emph{matching quality} is measured by assessing the achieved balance of the covariates.
{\bf Propensity Score Estimation.} In causal inference, a propensity score $e_s$ is the probability of a sample $s$ being in the \textit{treatment} group $D_T$, given its observed covariates $X_s$.
This conditional probability is usually unknown, therefore it has to be estimated. This is typically done using a logistic regression on the observed $X$ to predict the binary variable $Z$ \cite{cox1989analysis}. Logistic regression allows us to reweight samples when optimising the loss function. We explore the use of \emph{spurious reweighting}, where samples are weighted inversely proportional to the frequency of their spurious label $Z$; more details are provided in Appendix A.
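
As an illustration, this estimation step can be sketched as follows; the helper below is not our original implementation, and the use of scikit-learn, the variable names, and the exact weighting shown are assumptions based on the description above.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_propensity(X, Z):
    """Estimate e_s = P(Z=1 | X_s) with spurious reweighting.

    Samples are weighted inversely proportionally to the
    frequency of their spurious label Z, so that both
    spurious groups contribute equally to the fit.
    """
    Z = np.asarray(Z)
    freq = np.bincount(Z) / len(Z)       # P(Z=0), P(Z=1)
    weights = 1.0 / freq[Z]              # inverse-frequency weights
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, Z, sample_weight=weights)
    return clf.predict_proba(X)[:, 1]    # propensity scores e_s
\end{verbatim}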
The shape of the conditional distribution has the potential to impact finding a suitable threshold. In this work we explore the use of \emph{temperature scaling} as a post-processing step to adjust the propensity score distribution before its use for matching. Temperature scaling has become a common approach for re-calibrating models \cite{guo2017calibration}, but to the best of our knowledge has not been utilised in the context of statistical matching for causal inference. In binary classification cases such as ours, for each sample $s$ the logits $z_s$ are divided by a (learned or fixed) parameter $t$ before applying the sigmoid function:
\begin{equation*}
z_s = \text{log} \left( \frac{e_s}{1-e_s}\right), \ q_s = \frac{1}{1 + e^{-z_s / t}}.
\end{equation*}
With $t\!=\!1$ we obtain the original probabilities. When $t\!<\!1$ the rescaled probabilities have a sharper distribution reaching a point mass at $t\!=\!0$. When $t\!>\!1$ the rescaled probabilities are smoother, reaching a uniform distribution as $t\! \rightarrow\!\infty$. As we show in our ablation study, we found rescaling to be beneficial for improving the achieved covariate balance (Table~\ref{tab:ablation_balance}).
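
For concreteness, the rescaling above amounts to the following transformation of the estimated scores (a minimal sketch; the clipping constant is our own addition to avoid taking the logarithm of 0 or 1):
\begin{verbatim}
import numpy as np

def rescale(e, t, eps=1e-7):
    """Temperature-scale propensity scores e in (0, 1)."""
    e = np.clip(e, eps, 1.0 - eps)        # numerical safety
    z = np.log(e / (1.0 - e))             # logits z_s
    return 1.0 / (1.0 + np.exp(-z / t))   # rescaled scores q_s
\end{verbatim}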
{\bf Closeness Measure.} There are multiple metrics that can be used to measure the distance $M_{i,j}$ between samples $i \in D_T$ and $j \in D_C$; the most commonly used are the \emph{Euclidean} and \emph{propensity score} distances.
The \emph{Euclidean distance} is defined as $M_{ij} = (X_i - X_j)^{\top}(X_i - X_j)$ and the \emph{propensity score distance} as the distance between propensity scores $M_{ij} = |e_i - e_j|$.
Both \textit{Euclidean} and \textit{propensity score} distances have the advantage of being able to control how many samples are included via a threshold. While propensity score is the most commonly used matching method, Euclidean distance matching should be preferred \cite{king2019propensity} as the goal is to produce exact balance of the observed covariates rather than balance them on average.
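
Both distances can be computed in closed form; a sketch, assuming X_T and X_C are the feature matrices of the treatment and control groups and e_T, e_C their propensity scores (the dense pairwise computation is for clarity and would need chunking at scale):
\begin{verbatim}
import numpy as np

def euclidean_distances(X_T, X_C):
    """M[i, j] = (X_i - X_j)^T (X_i - X_j), i in D_T, j in D_C."""
    diff = X_T[:, None, :] - X_C[None, :, :]
    return np.sum(diff ** 2, axis=-1)

def propensity_distances(e_T, e_C):
    """M[i, j] = |e_i - e_j|."""
    return np.abs(e_T[:, None] - e_C[None, :])
\end{verbatim}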
{\bf Calipers.} Nearest-neighbour matching is forced to find a match for every treatment sample and is therefore at risk of finding poor matched pairs. Caliper matching is a method designed to prevent matching samples with limited covariate overlap.
In this work we explore the usage of two different types of caliper, namely \emph{fixed caliper} and \emph{standard deviation (std) based caliper}, both applied to the estimated propensity score. \emph{Fixed caliper} \cite{crump2009dealing} is a selection rule that discards samples that have an estimated propensity score outside of a specific range; i.e. the dataset is restricted to $\left\{s, \: \forall \: s \in D \:|\: e_s \in [c, 1 - c]\right\}$. This allows the exclusion of examples with \textit{extreme} propensity scores; a rule-of-thumb used in previous studies \cite{crump2009dealing} considers the interval defined by $c\!=\!0.1$, i.e. $[0.1, 0.9]$. \emph{Standard deviation (std) based caliper} \cite{cochran1973controlling} is used to enforce a predetermined maximum discrepancy for each matching pair in terms of propensity score distance.
The distance $M_{ij}$ is kept unaltered if $|e_i - e_j| \le \sigma \cdot \alpha $, and is set to $\infty$ otherwise.
The variable $\sigma$ is the standard deviation of the estimated propensity score distribution and $\alpha$ is a parameter controlling the percentage of \textit{bias reduction} of the covariates. Cochran and Rubin \cite{cochran1973controlling} showed that the smaller the $\alpha$ value, the more the bias is reduced; the actual percentage of bias reduction depends on the initial standard deviation $\sigma$. Commonly used $\alpha$ values are $\left\{0.2, 0.4, 0.6\right\}$ \cite{cochran1973controlling}.
In our application we 1) restrict potential matches based on fixed caliper and 2) follow a hybrid approach selecting the closest sample using Euclidean distance matching while defining a maximum propensity score distance between samples. The final outcome of Stage 1 is a \emph{matched dataset} $D^{\star}$.
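
A minimal end-to-end sketch of this matching step, combining both calipers with Euclidean nearest-neighbour selection, is given below; it matches treatment samples to control samples only, and all names and defaults are illustrative rather than taken from our implementation:
\begin{verbatim}
import numpy as np

def match(X_T, X_C, e_T, e_C, c=0.1, alpha=0.2):
    """NN matching with fixed and std-based calipers (sketch)."""
    # 1) fixed caliper: keep only samples with e_s in [c, 1 - c]
    T = (e_T >= c) & (e_T <= 1 - c)
    C = (e_C >= c) & (e_C <= 1 - c)
    idx_T, idx_C = np.where(T)[0], np.where(C)[0]
    X_T, e_T, X_C, e_C = X_T[T], e_T[T], X_C[C], e_C[C]
    # 2) squared-Euclidean closeness between remaining pairs
    #    (assumes both groups are non-empty after the caliper)
    diff = X_T[:, None, :] - X_C[None, :, :]
    M = np.sum(diff ** 2, axis=-1)
    # 3) std caliper: forbid pairs with |e_i - e_j| > sigma * alpha
    sigma = np.std(np.concatenate([e_T, e_C]))
    M[np.abs(e_T[:, None] - e_C[None, :]) > sigma * alpha] = np.inf
    # 4) each treatment sample selects its closest control sample
    j = np.argmin(M, axis=1)
    ok = np.isfinite(M[np.arange(len(j)), j])
    return list(zip(idx_T[ok], idx_C[j[ok]]))
\end{verbatim}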
{\bf Matching Quality.} Matching quality can be assessed by measuring the balance of the covariates across the treatment and control groups. Two commonly used evaluation measures are \emph{standardised mean differences (SMD)} and \emph{variance ratio (VR)} \cite{rubin2001using}. In the case that high imbalance is identified, Stage 1 should be iterated until an adequate level of balance is achieved; we provide a guideline of adequacy for each metric below.
\emph{Standardised Mean Differences} computes the difference in covariate means between the two groups, divided by the pooled standard deviation of the covariate. For a single covariate $\mathbf{a}$ from $X$ we have:
$$
\text{SMD} = \frac{\bar{\mathbf{a}}_T- \bar{\mathbf{a}}_C}{\sigma} \text{ , where }\sigma=\sqrt{\frac{s^2_T + s^2_C}{2}}
$$
Here, $\bar{\mathbf{a}}_T$ ($\bar{\mathbf{a}}_C$) and $s^2_T $ ($s^2_C $) are respectively the sample mean and variance of covariate $\mathbf{a}$ in group $D_T$ ($D_C$).
Intuitively, smaller \emph{SMD} values are better and as a rule of thumb an \emph{SMD} value below $0.1$ expresses an adequate balance, a value between $0.1$ and $0.2$ is considered not balanced but acceptable, and above $0.2$ shows a severe imbalance of the covariate~\cite{normand2001validating}.
\emph{Variance Ratio} is defined as the ratio of covariate variances between the two groups, with an ideal value close to 1. While in some studies \cite{zhang2019balance} a variance ratio in the interval $(0, 2)$ is deemed acceptable, we follow Rubin \cite{rubin2001using} and use the stricter interval $(4/5, 5/4)$ to indicate the desired proximity to~1. To obtain a single measure for all covariates $X$, we categorise \emph{SMD} into $\le\!0.1$, $(0.1, 0.2)$, and $\ge\!0.2$, and \emph{VR} into $\le\!4/5$, $(4/5, 5/4)$, and $\ge\!5/4$ and assess the distribution of covariates. We show an assessment of matching quality for one run on each dataset in Section \ref{sec:ablation}, comparing the covariate balance before and after matching as well as the effect of using temperature scaling.
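
Both diagnostics reduce to a few lines of array arithmetic; a sketch for covariate matrices whose columns are individual covariates (the adequacy thresholds in the comment follow the rules of thumb cited above):
\begin{verbatim}
import numpy as np

def balance_report(X_T, X_C):
    """Per-covariate SMD and variance ratio between D_T and D_C."""
    mu_T, mu_C = X_T.mean(axis=0), X_C.mean(axis=0)
    v_T, v_C = X_T.var(axis=0, ddof=1), X_C.var(axis=0, ddof=1)
    smd = (mu_T - mu_C) / np.sqrt((v_T + v_C) / 2.0)
    vr = v_T / v_C
    # adequate balance: |SMD| <= 0.1 and VR inside (4/5, 5/4)
    return smd, vr
\end{verbatim}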
\subsection{Stage 2: Target Prediction}\label{sec:target_prediction}
This stage is concerned with predicting a discrete target label $Y$ from covariates $X$. Inspired by Goel et al. \cite{goel2020model}, our training process involves the minimization of a loss $\mathcal{L}$ that combines an SGDRO objective function $\mathcal{L}_{SGDRO}$ and a self-consistency regularisation term $\mathcal{L}_{SC}$:
\begin{equation}
\mathcal{L} = \mathcal{L}_{SGDRO} + \lambda \mathcal{L}_{SC},
\end{equation}
where $\lambda$ is a hyperparameter controlling the regularisation strength.
The SGDRO loss is inspired by GDRO \cite{sagawa2019distributionally}, with the difference of considering a non-flat structure between the \textit{target} and \textit{spurious} labels; the hierarchy between target and spurious labels is included by considering the \textit{spurious groups} difference within each \textit{target group}. The SGDRO component of our loss is computed on the entire dataset $D$.
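
One simplified reading of this objective, taking within each target group the worst-case mean loss over the two spurious groups and averaging over target groups, can be sketched as below. The actual SGDRO implementation of \cite{goel2020model} maintains an online weighting over groups, so this is an illustration of the hierarchy rather than a reimplementation:
\begin{verbatim}
import torch
import torch.nn.functional as F

def sgdro_loss(logits, y, z, n_classes=2):
    """Class-conditional worst-case group loss (simplified sketch)."""
    per_sample = F.cross_entropy(logits, y, reduction="none")
    worst = []
    for c in range(n_classes):            # each target group
        losses = []
        for s in (0, 1):                  # each spurious group within
            mask = (y == c) & (z == s)
            if mask.any():
                losses.append(per_sample[mask].mean())
        if losses:
            worst.append(torch.stack(losses).max())
    return torch.stack(worst).mean()
\end{verbatim}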
Similarly to \cite{goel2020model}, our $\mathcal{L}_{SC}$ encourages predictions $f_{\theta}(\cdot)$ of a matched pair $(x_T, x_C)$ in $D^{\star}$ to be consistent with each other and is defined as:
\begin{equation}
\mathcal{L}_{SC}(x_T, x_C, \theta) = \frac{1}{2} \left[KL(f_{\theta}(x_T) || \tilde{m}) + KL(f_{\theta}(x_C) || \tilde{m})\right],
\end{equation}
where $\tilde{m}$ is the average output distribution of the matched pair.
While the SGDRO objective accounts for the worst-case subgroup performance, the form of the regularisation term induces the model's predictions to be subgroup invariant \cite{goel2020model}.
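
In PyTorch terms, this regulariser is the average of two KL terms taken against the pair's mean distribution; a sketch, assuming the model outputs logits:
\begin{verbatim}
import torch.nn.functional as F

def sc_loss(logits_T, logits_C):
    """KL of each pair member to the pair's average distribution."""
    p_T = F.softmax(logits_T, dim=-1)
    p_C = F.softmax(logits_C, dim=-1)
    m = 0.5 * (p_T + p_C)                                 # average m~
    kl_T = F.kl_div(m.log(), p_T, reduction="batchmean")  # KL(p_T||m)
    kl_C = F.kl_div(m.log(), p_C, reduction="batchmean")  # KL(p_C||m)
    return 0.5 * (kl_T + kl_C)
\end{verbatim}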
\section{Experiments}
We conduct two sets of experiments to assess the ability of RealPatch to 1) improve the worst-case subgroup performance and reduce the subgroup performance gap in a binary classification setting, and 2) reduce dataset and model leakage w.r.t. a spurious attribute in a 211-class classification setting. We describe them in turn.
\subsection{Reducing Subgroup Performance Gap}\label{sec:experiments_gap}
In this section we study the effect of our RealPatch for increasing the worst-case performance across subgroups and reducing the gap in subgroup performance. We evaluate RealPatch against a variety of baselines on three datasets, and perform an ablation analysis on configurations of RealPatch. We compare approaches using \textbf{Robust~Accuracy}: the lowest accuracy across the four subgroups, \textbf{Robust~Gap}: the maximum accuracy distance between the subgroups,
as well as \textbf{Aggregate~Accuracy}: a standard measure of accuracy. Our goal is to improve the robust accuracy and gap while retaining the aggregate accuracy as much as possible, since degrading the performance of some subgroup(s) may be necessary to improve the worst-performing subgroup (e.g. \cite{martinez2020minimax}).
{\bf Datasets.}
We use three publicly available datasets, CelebA\footnote{\scriptsize\url{http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html}}~\cite{liu2015deep}, Waterbirds\footnote{\scriptsize\url{https://github.com/kohpangwei/group\_DRO}}~\cite{sagawa2019distributionally} and iWildCam-small\footnote{\scriptsize\url{https://github.com/visipedia/iwildcam_comp/tree/master/2020}}~\cite{beery2020iwildcam}.
\textbf{CelebA} has 200K images of celebrity faces that come with annotations of 40 attributes.
We follow the setup in \cite{goel2020model}, and consider hair colour $Y \in \left\{ \text{blonde}, \text{ non-blonde}\right\}$ as target label, and gender $Z \in \left\{ \text{male}, \text{ female}\right\}$ as spurious attribute. In this setup, the subgroup $\left(Y\!=\!\text{non-blonde}, Z\!=\!\text{female}\right)$ is under-sampled in the training set (from 71,629 to 4,054) as per \cite{goel2020model}, amplifying a spurious correlation between the target and the demographic attribute. We keep all other subgroups as well as the validation and test sets unchanged. The images are aligned and resized to $128 \times 128$. For stability we repeat our experiments three times using different randomly under-sampled subgroups $\left(Y\!=\!\text{non-blonde}, Z\!=\!\text{female}\right)$.
\textbf{Waterbirds} has 11,788 examples of birds living on land or in water. We follow \cite{sagawa2019distributionally} and predict $Y \in \left\{ \text{waterbird}, \text{ landbird}\right\}$, and use the background attribute $Z \in \left\{ \text{water}, \text{ land}\right\}$ as spurious feature. The spurious correlation between target and background is present in the dataset as waterbirds appear more frequently in a water scene, whereas landbirds on land. In order to perform three runs we randomly define the train/validation/test splits while enforcing the original subgroup sizes as per~\cite{goel2020model}.
\textbf{iWildCam-small} is a subset of the iWildCam dataset~\cite{beery2020iwildcam}, whose task is to classify animal species in camera trap images. Here, we consider two species (meleagris ocellata and crax rubra) within two camera trap locations. The dataset contains $3,349$ images, specifically $2,005$ (train), $640$ (val) and $704$ (test). These splits have a spurious correlation between animal species and locations. This experiment emphasizes the applicability of RealPatch in a small dataset setting.
{\bf Baselines.}
Here we describe the four baseline methods used for comparison. \textbf{Empirical Risk Minimization (ERM)} is a standard stochastic gradient descent model trained to minimize the overall classification loss.
\textbf{Group Distributionally Robust Optimisation (GDRO)} is a stochastic algorithm proposed by \cite{sagawa2019distributionally} with the aim of optimising the worst-case performance across the subgroups. \textbf{Sub-Group Distributionally Robust Optimisation (SGDRO)} \cite{goel2020model}
as described in Section \ref{sec:target_prediction}.
\textbf{CAMEL} is a two-stage approach proposed by \cite{goel2020model} that uses synthetic samples to define a subgroup consistency regulariser for model patching. Conceptually this model is most similar to ours, except that we use real samples for model patching.
The training details are in Appendix A.
\textbf{RealPatch Configurations and Hyperparameters.}
RealPatch can be instantiated in many different configurations. In these experiments hyperparameters of RealPatch include the choice of \textit{calipers}, \textit{temperature}, and \textit{reweighting strategies} in the propensity score estimation model as well as \textit{self-consistency strength} $\lambda$, \textit{adjustment coefficients} and \textit{learning rates} in the target prediction model. To select hyperparameters for Stage 1 of RealPatch we perform a grid search, selecting the configuration with the best covariate balance in terms of \textit{SMD} and \textit{VR}. An ablation study on these hyperparameters is provided in Section~\ref{sec:ablation}. For the hyperparameters of Stage 2, we perform model selection utilising the robust accuracy on the validation set. Further details of the hyperparameters used and the best configuration selected are summarised in Appendix A.
\begin{table*}[t!]
\begin{center}
\medskip
\caption{A comparison between RealPatch and four baselines on two benchmark datasets. The results shown are the average (standard deviation) performances over three runs. RealPatch is able to construct a model that is robust across subgroups with high robust accuracy and small robust gap.}
\scalebox{.85}{
\begin{tabular}{lllll}
\toprule
\textbf{Dataset} & \textbf{Method} & \makecell{\textbf{Aggregate} $\uparrow$\\ \textbf{Accuracy (\%)}} & \makecell{\textbf{Robust} $\uparrow$\\ \textbf{Accuracy (\%)}} & \makecell{\textbf{Robust} $\downarrow$\\ \textbf{Gap (\%)}}\\
\midrule
\textbf{CelebA} & \textbf{ERM} & 89.21 (0.32) & 55.3 (0.65) & 43.48 (0.68)\\
& \textbf{GDRO} & \textbf{90.47} (7.16) & 63.43 (18.99) & 34.77 (19.65)\\
& \textbf{SGDRO} & 88.92 (0.18) & 82.96 (1.39) & 7.13 (1.67)\\
& \textbf{CAMEL} & 84.51 (5.59) & 81.48 (3.94) & \textbf{5.09} (0.44)\\
& \textbf{RealPatch (Ours)} & 89.06 (0.13) & \textbf{84.82} (0.85) & 5.19 (0.9)\\
\midrule
\textbf{Waterbirds} & \textbf{ERM} & 86.36 (0.39) & 66.88 (3.76) & 32.57 (3.95)\\
& \textbf{GDRO} & \textbf{88.26} (0.55) & 81.03 (1.16) & 14.80 (1.15)\\
& \textbf{SGDRO} & 86.85 (1.71) & 83.11 (3.65) & 6.61 (6.01)\\
& \textbf{CAMEL} & 79.0 (14.24) & 76.82 (18.0) & 7.35 (5.66)\\
& \textbf{RealPatch (Ours)} & 86.89 (1.34) & \textbf{84.44} (2.53) & \textbf{4.43} (4.48)\\
\bottomrule
\end{tabular}
}
\label{tab:results_aggregate}
\end{center}
\end{table*}
\begin{table}[t!]
\begin{center}
\medskip
\caption{Experiments with iWildCam-small \cite{beery2020iwildcam}. The results shown are the average (standard deviation) performances over three runs. CycleGAN-based CAMEL is not applicable for small training data (2K images).}
\begin{adjustbox}{width=.7\textwidth}
\begin{tabular}{llll}
\toprule
\textbf{Method} & \makecell{\textbf{Aggregate} $\uparrow$\\ \textbf{Accuracy (\%)}} & \makecell{\textbf{Robust} $\uparrow$\\ \textbf{Accuracy (\%)}} & \makecell{\textbf{Robust} $\downarrow$ \\ \textbf{Gap (\%)}} \\
\midrule
\textbf{ERM} & \textbf{79.97} (1.18) & 75.43 (3.01) & 19.65 (1.96) \\
\textbf{SGDRO} & 78.55 (2.45) & 75.50 (3.58) & 14.28 (4.35) \\
\textbf{RealPatch (Ours)} & 79.36 (2.09) & \textbf{76.70} (3.19) & \textbf{11.36} (4.87) \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:results_iwildcam}
\end{center}
\end{table}
\textbf{Results on CelebA.}
From Table \ref{tab:results_aggregate}, RealPatch is able to significantly improve the worst-case subgroup performance and reduce the subgroup performance gap compared to the baseline methods ERM, GDRO, and SGDRO. Our proposed method improves the robust accuracy, robust gap and aggregate accuracy with respect to the best baseline SGDRO by $1.86\%$, $1.94\%$ and $0.14\%$ respectively. When compared to CAMEL, RealPatch improves robust accuracy ($+3.34\%$), but slightly worsens the robust gap ($+0.1\%$). Compared with CAMEL, GDRO and SGDRO, RealPatch is very consistent across runs, with standard deviations of $0.13$ (aggregate accuracy), $0.85$ (robust accuracy) and $0.9$ (robust gap), in contrast to $5.59$, $7.16$ and $0.18$ (aggregate accuracy), $3.94$, $18.99$ and $1.39$ (robust accuracy), and $0.44$, $19.65$ and $1.67$ (robust gap) for CAMEL, GDRO and SGDRO respectively.
On inspection of matched pairs from the dataset $D^{\star}$, we observe preservation in pose, facial expression (e.g. smiling in most of the examples), hair style (in many cases the colour as well, but not always), and accessories such as hats and glasses. Figure \ref{fig:celeba_teaser} shows samples of retrieved matched pairs; further examples are in Appendix B. Naturally, due to the use of real samples in matching, RealPatch suffers no issues regarding the quality of images in the augmented dataset that are often observed with generative models. Figure \ref{fig:celeba_teaser} also shows CycleGAN generated examples used in the consistency regularizer of CAMEL's loss.
\textbf{Results on Waterbirds.} Our RealPatch model can significantly reduce the gap between subgroup performances and improve the worst-case accuracy compared to all baselines. While GDRO has a higher aggregate accuracy (by up to $1.37\%$), this model exhibits a higher imbalance over the subgroup performances with a robust gap of $14.80\%$ in comparison to $4.43\%$ of RealPatch, and a robust accuracy of $81.03\%$ as opposed to $84.44\%$ of RealPatch. When compared to CAMEL, RealPatch shows improvements across all metrics, with $+7.89\%$ aggregate accuracy, $+7.62\%$ robust accuracy and $-2.92\%$ robust gap. Similar conclusions hold true when comparing RealPatch against the best baseline SGDRO, with $+1.33\%$ robust accuracy and $-2.18\%$ robust gap.
The characteristics preserved between matched pairs are less obvious than in CelebA, mainly observing matching across the bird's primary/body colour; examples are shown in Appendix B.
\textbf{Results on iWildCam-small.} We show results comparing RealPatch against ERM and the best baseline SGDRO in Table~\ref{tab:results_iwildcam}. When compared to the two baseline methods, our RealPatch improves robust accuracy by $+1.27\%$ and $+1.2\%$ and robust gap by $-8.29\%$ and $-2.92\%$.
It is worth noting that in this setting CycleGAN-based CAMEL is not applicable, as there is insufficient data to train a CycleGAN. Examples of retrieved matched pairs are in Appendix B.
Please refer to Appendix B for the full results of Table~\ref{tab:results_aggregate} and Table~\ref{tab:results_iwildcam} which include the four subgroup performances.
\textbf{Ablation Analysis.}\label{sec:ablation}
\import{./}{ablation.tex}
\subsection{Reducing Dataset and Model Leakage}
In this section we study the effect of our RealPatch on dataset and model leakage.
{\bf Leakage.} We use dataset leakage and model leakage \cite{Wang2019ICCV} to measure dataset bias. Dataset leakage measures how much information the true labels leak about gender, and corresponds to the accuracy of predicting gender from the ground truth annotations. In model leakage, the model is being trained on the dataset, and we measure how much the predicted labels leak about gender.
{\bf imSitu dataset.}
We use the imSitu dataset \cite{yatskar2016} of situation recognition, where we have images of 211 activities being performed by an agent (person). We follow the setting of prior works \cite{zhao2017,Wang2019ICCV} and study the activity bias with respect to a binarised gender of the agent.
The dataset contains 24,301 images (training), 7,730 images (validation), 7,669 images (test).
\textbf{Matching Results.} We performed matching on the training data, and include all matched pairs as a rebalanced dataset to analyse the leakage. On this dataset, all samples have been matched, and the dataset size after matching has been doubled. This is expected, given the dataset has 211 classes with $44-182$ samples per class, which is significantly less than in CelebA.
Doubling the size of the dataset does not mean we include every sample twice.
Instead this should be seen as rebalancing/resampling the dataset based on how many times each sample has been matched.
For matching we use the features extracted with a pre-trained ResNet101 model.
The selected hyperparameters are \textit{spurious reweighting} in propensity score estimation, a temperature of $t\!=\!0.6$, $c\!=\!0$ in the fixed caliper, and $\alpha\!=\!0.2$ in the std-based caliper. This configuration was selected based on the best covariate balance achieved on the training set: we reach an adequate balance with an \emph{SMD} value below 0.1 and \emph{VR} close to 1 for most of the 1024 covariates used (992 and 1010, respectively, compared to 327 and 510 for the original dataset). A table with the balance of all covariates is in Appendix B.
\textbf{Leakage Results.}
We follow the same architectures and training procedures as \cite{Wang2019ICCV} to measure the dataset and model leakage.
We compare our results with the rebalancing strategies based on gender-label co-occurrences proposed in \cite{Wang2019ICCV}.
We report our findings in Table \ref{table:imsitu}.
\begin{table*}[t!]
\begin{center}
\medskip
\caption{Matching-based rebalancing in imSitu achieves the best leakage-accuracy trade-off. It shows nearly no dataset leakage, leading to a reduction in model leakage while maintaining overall accuracy. This is in contrast to the co-occurrence-based rebalancing based on gender-label statistics (e.g. $\alpha\!=\!1$ \cite{Wang2019ICCV}), where a reduction in dataset leakage does not lead to reduction in model leakage in a meaningful way, and the overall accuracy drops.}
\scalebox{.9}{
\begin{tabular}{lllcc}
\toprule
\textbf{Data} & \makecell{\textbf{Dataset} $\downarrow$\\ \textbf{leakage} $\lambda_D$ } & \makecell{\textbf{Model} $\downarrow$\\ \textbf{leakage} $\lambda_M$ } & mAP {$\uparrow$} & F1 $\uparrow$\\
\midrule
original training data &68.35 (0.16) &76.79 (0.17) & 41.12 & 39.91\\
\midrule
{balancing with} $\alpha=3$ \cite{Wang2019ICCV} & 68.11 (0.55) &75.79 (0.49) & 39.20 & 37.64\\
{balancing with} $\alpha=2$ \cite{Wang2019ICCV} & 68.15 (0.32) &75.46 (0.32) &37.53 & 36.41\\
{balancing with} $\alpha=1$ \cite{Wang2019ICCV} & {53.99} (0.69) &74.83 (0.34) & 34.63 & 33.94\\
RealPatch (ours) &55.13 (0.76) &68.76 (0.69) & 38.74 & 38.13\\
\bottomrule
\end{tabular}
}
\label{table:imsitu}
\end{center}
\end{table*}
The results clearly show that dataset rebalancing via matching helps to achieve the best trade-off between debiasing (dataset and model leakage) and performance (F1 and mAP scores). We achieve a significant reduction in dataset leakage (nearly no leakage, $55.13$, versus the original $68.35$) and model leakage ($68.76$ versus $76.79$), while maintaining the accuracy of the model with mAP and F1 scores comparable to those achieved with the original training data.
This is in contrast to rebalancing based on co-occurrences of gender and activity labels \cite{Wang2019ICCV}.
In the case of a rebalanced dataset with $\alpha\!=\!1$ that achieves nearly no dataset leakage ($53.99$), the model trained on this dataset leaks similarly to the model trained on the original data ($74.83$ versus $76.79$), and has a significant drop in the overall performance.
This suggests that statistical matching helps to reduce dataset leakage in a meaningful way as the model trained on the rebalanced dataset can reduce leakage as well.
\section{Limitations and Intended Use}
While \textit{SMD} and \textit{VR} are valuable metrics to indicate the quality of the matched dataset, there is no rule-of-thumb for interpreting whether the covariates have been \textit{sufficiently} balanced. Supplementing \textit{SMD} and \textit{VR} with manual inspection of matched pairs and evaluating on a downstream task is still required.
Additionally, RealPatch currently only handles binary spurious attributes, requiring additional work (such as \cite{lopez2017estimation}) to handle matching over multiple treatments. It is worth noting that the baselines considered, GDRO, SGDRO and CAMEL, have also only been tested on a binary spurious attribute.
We intend to explore the usage of RealPatch for non-binary spurious attributes in future work.
A natural extension would be to use a One-vs-Rest approach for matching: for each sample find the closest sample having a different value of the spurious attribute.
\section{Conclusions}
We present RealPatch, a two-stage framework for model patching by utilising a dataset with real samples using statistical matching. We demonstrate the effectiveness of RealPatch on three benchmark datasets, CelebA, Waterbirds and iWildCam-small.
We show that RealPatch's Stage 1 successfully balances a dataset with respect to a spurious attribute, and we effectively improve subgroup performances by including this matched dataset in the training objective of Stage 2.
We also highlight the applicability of RealPatch in a small dataset setting experimenting with the so-called iWildCam-small.
Compared to CAMEL, a related approach that requires the training of multiple CycleGAN models, we see competitive reductions in the subgroup performance gap without depending on the ability to generate synthetic images. We also show the effectiveness of RealPatch for reducing dataset leakage and model leakage in a 211-class setting, where relying on generative model-based patching such as CAMEL is impractical. RealPatch can successfully eliminate dataset leakage while reducing model leakage and maintaining high utility. Our findings show the importance of selecting calipers to achieve a satisfactory covariate balance and serve as a guideline for future work on statistical matching on visual data.
We encourage the use of RealPatch as a competitive baseline for strategic rebalancing and model patching, especially in the case where developing models for image generation is prohibitive or impractical.
\newline\\
\textbf{Acknowledgments.} This research was supported by a European Research Council (ERC) Starting Grant for the project "Bayesian Models and Algorithms for Fairness and Transparency", funded under the European Union's Horizon 2020 Framework Programme (grant agreement no. 851538). NQ is also supported by the Basque Government through the BERC 2018-2021 program and by Spanish Ministry of Sciences, Innovation and Universities: BCAM Severo Ochoa accreditation SEV-2017-0718.
\clearpage
\bibliographystyle{splncs04}
|
2,869,038,153,864 | arxiv | \section{Acknowledgments}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The emergence of serverless computing has extensively simplified the way that developers access cloud resources. Existing serverless computing platforms, such as AWS Lambda, Google Cloud Functions, and Azure Functions, have enabled a wide spectrum of cloud applications, including video processing~\cite{fouladi2017encoding, ao2018sprocket}, data analytics~\cite{jonas2017occupy, muller2020lambada}, and machine learning~\cite{carreira2019cirrus, siren2019infocom} with automated resource provisioning and management.
By decoupling traditional monolithic cloud applications into inter-linked microservices executed by stateless \textit{functions}, serverless computing frees developers from infrastructure management and administration with fine-grained resource provisioning, auto-scaling, and pay-as-you-go billing~\cite{berkeley-view}.
The serverless architecture has also introduced new challenges in computation efficiency, resource management, and task scheduling for cloud providers, attracting a rich literature of research studies. Some focused on optimizing the low-level virtualization layer to mitigate function cold-starts and enhance resource isolation~\cite{sock, sand, catalyzer, firecracker, gvisor}. Some proposed new resource management and function scheduling algorithms to accelerate serverless applications and improve resource utilization~\cite{azure-traces, ensure, sreekanti2020cloudburst, gunasekaran2020fifer}.
However, existing serverless services are still suffering from low resource efficiency due to the inappropriate resource allocations requested by users, who are uncertain about the exact function resource demands. Decoupling monolithic cloud applications to a serverless architecture generates numerous types of functions and complex inter-function dependencies~\cite{shahrad2019architectural, gunasekaran2020fifer}. Though serverless functions have far fewer configuration knobs than traditional cloud virtual machines, it is still tricky for most users to estimate the accurate memory and CPU demands for each function.
Existing serverless computing platforms apply static strategies to provision resources for functions. For example, AWS Lambda allocates CPU resources to a function proportionally to its memory size configured by users~\cite{awslambdalimits}, leading to either CPU over-provision or under-provision for the function execution.
Therefore, serverless service providers are enduring poor resource utilization due to users' inappropriate function configuration---some functions are assigned with more resources than they need~\cite{gunasekaran2020fifer}. The high concurrency and fine-grained resource isolation of serverless computing further amplify such inefficient resource provisioning.
A few studies have attempted to address the above issues. Shahrad~\textit{et al}.~\cite{azure-traces} and FaasCache~\cite{fuerst2021faascache} proposed to maximize resource utilization and reduce the number of cold-starts by predicting the keep-alive windows of individual serverless functions. Fifer~\cite{gunasekaran2020fifer} incorporated the awareness of function dependencies into the design of a new resource manager to improve resource utilization. Besides, several research works aimed to accelerate functions and improve resource efficiency by adjusting CPU core allocations for serverless functions in reaction to their performance degradation during function executions~\cite{cpu-cap, ensure, core-granular}. Still, users must configure the memory size of each function themselves.
However, none of the existing studies has directly tackled the low resource efficiency issue raised by the inappropriate function configurations. There are three critical challenges in the perspective of serverless service providers to address this issue: \textit{First}, a user function is secured as a black box that shares no information about its internal code and workloads, making it hardly possible for the serverless system to estimate user functions' precise resource demands. \textit{Second}, decoupling monolithic cloud applications to serverless computing architectures generates a variety of functions with diverse resource demands and dynamic input workloads. \textit{Third}, the resource provisioning for serverless functions is fine-grained spatially ({\it i.e.}, small resource volumes) and temporally ({\it i.e.}, short available time).
We present \textit{FaaSRM}\xspace, a new serverless resource manager that dynamically harvests idle resources to accelerate functions and maximize resource utilization. The intuition behind \textit{FaaSRM}\xspace is to carefully harvest idle computing resources from functions over-supplied and accelerate functions under-supplied without degrading the performance of functions with resources harvested.
\textit{FaaSRM}\xspace monitors a series of performance metrics and resource footprints, including CPU utilization, memory utilization, and function execution times to estimate the actual resource demands of running functions. We apply an experience-driven algorithm to identify functions over-supplied and under-supplied. \textit{FaaSRM}\xspace learns to accurately determine the volume and available time of idle resources for harvesting from over-supplied functions to under-supplied functions. By decoupling the fixed proportion between CPU and memory resource allocation, \textit{FaaSRM}\xspace adjusts both types of resources independently. We also design a safeguard mechanism to guarantee that \textit{FaaSRM}\xspace's resource harvesting leads to no performance degradation. To deal with the highly volatile environment of serverless computing and large numbers of concurrent functions, we propose to apply the Proximal Policy Optimization (PPO) algorithm~\cite{ppo2} to learn from the realistic serverless system and make per-invocation resource adjustments.
We implement \textit{FaaSRM}\xspace based on PyTorch with multi-process support in the Apache OpenWhisk platform \cite{openwhisk}. We evaluate \textit{FaaSRM}\xspace with other three baseline RMs in simulation and OpenWhisk experiments using Azure Functions traces~\cite{azure-traces}. Compared to the default RM in OpenWhisk, \textit{FaaSRM}\xspace reduces the execution time of 98\% function invocations\footnote{In this paper, a function refers to a runnable code deployed on serverless platforms, an invocation refers to an instance of executing the function.} by 87.60\% and 35.81\%, in simulation and OpenWhisk, respectively. Particularly, \textit{FaaSRM}\xspace harvests idle resources from 38.78\% of function invocations while accelerating 39.18\% on the OpenWhisk cluster. Besides, \textit{FaaSRM}\xspace only degrades the performance of a negligible number of function invocations under the system performance variations of the OpenWhisk cluster.
\section{Background and Motivation}
This section will first briefly introduce serverless computing systems and resource provisioning strategies.
We then use a realistic example to motivate the necessity to improve resource utilization of serverless computing platforms.
We also show how to optimize serverless computing resource provisioning with reinforcement learning.
\subsection{Resource Provisioning in Serverless Computing}
Generally, serverless computing users are only responsible for uploading source codes and choosing resource limits for their applications.
Serverless platforms provision resources only when users invoke their functions.
Requesting too many resources leads to low resource utilization and extra bills for users, while insufficient resource provisioning leads to poor performance.
However, it is non-trivial for users to figure out a performant and cost-efficient resource configuration for their applications.
To make the right decision, users must fully understand both functions and platforms, which may require countless testing and profiling before the actual deployment.
Typical serverless applications, such as image processing, machine learning inference, and data analytics, are stateless and event-driven.
The performance is mainly dominated by computing resources, such as CPU and memory.
Existing FaaS platforms, such as Apache OpenWhisk and AWS Lambda, request users to define the memory limit for their functions and then allocate CPU limit proportional to the memory. For example, Wang \textit{et al}.~\cite{peeking-behind-serverless} identify that both AWS Lambda and Google Cloud Function allow users to utilize CPU proportionally to function memory.
Such resource management policies could lead to various problems. For example, they encourage users to over-provision memory to accelerate their functions, leading to memory under-utilization. To mitigate this issue,
ENSURE~\cite{ensure} proposes to decouple CPU allocation from memory and automate CPU adjustment based on performance degradation detection.
However, such decoupling is hard since functions in serverless platforms have bursty invocations and repeatable workloads. For example, the analysis of characteristics of serverless applications running on Microsoft Azure Functions \cite{azure-traces} indicates that most serverless workloads have apparent patterns in the traces of duration, invocation, and resource consumption. To handle such dynamics, we propose a novel resource management framework for serverless platforms, \textit{FaaSRM}\xspace, which fully manages resources on behalf of users.
\textit{FaaSRM}\xspace manages and provisions computing resources by learning the patterns behind serverless platforms and functions.
As far as we know, no existing FaaS platform can intelligently manage and provision resources for serverless functions based on their workload patterns.
\subsection{A Motivating Example}
\label{sec:motivating}
\begin{figure*}[t]
\centering
\begin{minipage}[t]{.28\textwidth}
\centering
\includegraphics[width=1\columnwidth]{figures/simple-avg-fet-per-func.pdf}
\vspace{-0.22in}
\caption{The average function execution time of individual functions for Fixed RM and Greedy RM.}
\label{fig:simple-avg-fet-per-func}
\end{minipage}\hfill
\begin{minipage}[t]{.28\textwidth}
\centering
\includegraphics[width=1\columnwidth]{figures/simple-cdf.pdf}
\vspace{-0.22in}
\caption{The CDF of FETs of function invocations for Fixed RM and Greedy RM.}
\label{fig:simple-cdf}
\end{minipage}\hfill
\begin{minipage}[t]{.26\textwidth}
\centering
\includegraphics[width=1\columnwidth]{figures/simple-avg-cpu-per-func.pdf}
\vspace{-0.22in}
\caption{CPU core allocation of individual functions for Fixed RM and Greedy RM.}
\label{fig:simple-avg-cpu-per-func}
\end{minipage}
\vspace{-0.15in}
\end{figure*}
To illustrate our motivation, we first conduct a simple experiment using the following two representative resource managers (RMs) to show the performance improvement when adopting an RM that dynamically allocates resources for serverless functions:
\noindent\textbf{Fixed RM} is the default RM employed by most existing serverless platforms, requires users to pre-define memory for their functions, and then consistently provisions the exact amount of memory. CPU resources are then allocated proportionally to functions based on the memory. We assume users configure their memory and make no changes after invocation starts.
\noindent\textbf{Greedy RM} proactively optimizes resource allocation with a fixed step size. For each function invocation, it allocates resources based on the function's recent resource usage.
We simulate a 60-second workload with online invocation arrivals on our simulator.
We randomly sample six functions from real-world serverless function traces released by Microsoft Azure Functions \cite{azure-traces}.
For Fixed RM, we pre-configure the CPU and memory of six functions according to memory limits from the corresponding memory traces.
For Greedy RM, we initially allocate the same amount of resources with Fixed RM.
We configure the simulated cluster with ten servers, where each server has 8 CPU cores and 2 GB of memory available.
Each function has access to at most 8 CPU cores and 512 MB of memory. We describe the details of the workloads and the simulator in Section \ref{sec:evaluation}.
Figure \ref{fig:simple-avg-fet-per-func} shows the average function execution time (FET) of individual functions. In contrast to Fixed RM, Greedy RM allocates resources proactively, resulting in significant improvement on under-provisioned functions.
Figure \ref{fig:simple-cdf} shows the cumulative distribution function (CDF) of FETs of invocations for two RMs provisioning the same workloads. Greedy RM completes most invocations faster than Fixed RM.
Figure \ref{fig:simple-avg-cpu-per-func} shows the average CPU allocation of individual functions for Fixed RM and Greedy RM.
Compared to Fixed RM, Greedy RM harvests idle CPU cores from over-provisioned functions (functions 2, 4, 5, and 6) without significantly deteriorating their performance, and accelerates under-provisioned functions (functions 1 and 3) with additional resources.
Given the same environment and workload, Greedy RM outperforms Fixed RM by significantly improving the performance of the entire workload.
However, human-designed resource provisioning policies can hardly comprehend both characteristics of platforms and functions, which can result in certain problems. For example, Greedy RM may fail to handle a load spike due to over-harvesting or offer more resources than needed when supplementing functions. This paper argues for a new resource manager for serverless platforms using deep reinforcement learning, \textit{FaaSRM}\xspace, which dynamically and safely harvests idle resources from over-supplied functions to support under-supplied functions.
\subsection{Deep Reinforcement learning}
\label{subsec:drl}
Reinforcement learning (RL) is a machine learning technique that an RL agent learns to make sequential decisions by autonomously interacting with the environment.
At every timestep $t$, the agent is in a specific state $s_t$, and evolves to state $s_{t+1}$ according to a Markov process with the transition probability $\mathbb{P}(s_t,a_t,s_{t+1})$ when action $a_t$ is taken. The immediate reward for the agent to take action $a_t$ in state $s_t$ is denoted as $r_t.$ The goal of the agent is to find a policy $\pi$ that makes decisions regarding what action to take at each timestep, {\it i.e.}, $a_t \sim \pi(\cdot|s_t)$ so as to maximize the expected cumulative rewards, {\it i.e.}, $\mathbb{E}_\pi[\sum^{\infty}_{t=0} r_t].$
However, it was tricky to apply RL to optimize real-world systems due to the large state space. To address this challenge, deep reinforcement learning (DRL) has been proposed to optimize scheduling and resource provisioning in distributed systems~\cite{deeprm, decima}.
More specifically, deep learning provides \textit{function approximators}, while RL algorithms describe how to train the weights of an arbitrary function approximator for sequential decision problems. As a result, DRL simply takes a deep network as a non-linear function approximator whose output depends on a set of adjustable parameters, $\theta$, which an optimization algorithm adjusts to change the behavior of the policy.
We refer to $\theta$ as \textit{policy parameters} and represent the policy as $a_t \sim \pi_{\theta}(\cdot|s_t)$.
In DRL, Deep Neural Networks (DNNs) have been the most popular function approximators to solve stochastic RL tasks, as hand-crafted features are not required by DNNs.
We consider resource provisioning for serverless computing systems as a stochastic large-scale RL task, hence we use a neural network to implement our \textit{FaaSRM}\xspace policy.
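
As a concrete (and deliberately generic) illustration of such an approximator, a minimal policy network mapping a state vector to a distribution over discrete actions might look as follows; the architecture shown is illustrative and is not the actual \textit{FaaSRM}\xspace network:
\begin{verbatim}
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Minimal policy: state -> categorical action distribution."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions))

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.body(state))

# sampling an action: a = PolicyNet(16, 8)(state).sample()
\end{verbatim}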
\section{An Overview of \textit{FaaSRM}\xspace}
\label{sec:overview}
\subsection{Objectives}
\label{sec:objectives}
Given serverless workloads, \textit{FaaSRM}\xspace rebalances resources at the function level, {\it i.e.}, \textit{FaaSRM}\xspace harvests idle resources from over-provisioned functions to improve under-provisioned functions. We focus on the \textit{function execution time} (FET) of serverless functions, {\it i.e.}, the time from the start of a function's execution until its completion, as FET mostly depends on computing resources (CPU and memory). We focus on function execution time rather than initialization overhead or queuing delay, but optimizations on these two objectives can be complementary to our work. As \textit{FaaSRM}\xspace accelerates under-provisioned function invocations by reducing the FET with supplementary resources, we must ensure that functions with resources harvested suffer no obvious performance degradation. Hence, \textit{FaaSRM}\xspace targets three objectives: harvesting idle resources from over-provisioned functions, accelerating under-provisioned functions, and guaranteeing that no function suffers from obvious performance degradation.
\subsection{\textit{FaaSRM}\xspace's Architecture}
We present \textit{FaaSRM}\xspace, a resource manager in serverless platforms that dynamically improves resource allocation for functions. Figure~\ref{fig:openwhisk} depicts an overview of \textit{FaaSRM}\xspace's architecture. Serving workloads in a serverless platform, \textit{FaaSRM}\xspace offers an optimal resource allocation predicted by an RL agent for each function invocation of workloads. To provide an allocation option for an incoming function invocation, \textit{FaaSRM}\xspace collects information of the platform and the function, and embeds them into a state as input to its RL agent. The RL agent computes an optimal allocation option based on the given information. Then \textit{FaaSRM}\xspace applies the option to the invocation by communicating with the platform.
Serverless functions generally have different demands for CPU and memory. CPU-intensive applications are forced to over-provision memory to acquire more CPU power, which leads to memory under-utilization. To identify functions' CPU and memory needs independently, \textit{FaaSRM}\xspace decouples the CPU and memory allocation binding in existing serverless platforms such as AWS Lambda and OpenWhisk, and manages CPU and memory using an identical but independent system. \textit{FaaSRM}\xspace allows users to specify both CPU (cores) and memory (MBs) for their functions, and employs the user-defined resources as baselines in the \textit{safeguard mechanism} to guarantee Service Level Objectives (SLOs) of individual function invocations.
Harvesting idle resources from ``over-provisioned'' functions can be dangerous, especially with RL predictions. Performance can degrade when \textit{FaaSRM}\xspace under-predicts the resource demands and harvests from functions that can utilize their full capacity. To avoid obvious performance degradation of harvested functions, \textit{FaaSRM}\xspace applies a \textit{safeguard mechanism} to prevent those potentially dangerous allocation options and guarantees the SLOs of every function invocation within a workload. The safeguard works on top of allocation options provided by \textit{FaaSRM}\xspace's RL agent. Once deployed in a platform, \textit{FaaSRM}\xspace keeps a history of recent requests for each function. Before selecting an option for the incoming invocation, \textit{FaaSRM}\xspace filters out all allocation options below the peak from amongst the historical allocation decisions. Once the last request's usage exceeds its allocation (exceeds 90\% of the allocation in Algorithm 1), \textit{FaaSRM}\xspace immediately calls a \textit{safeguard invocation} to return all harvested resources to the function, {\it i.e.}, allocating the exact amount of resources defined by users, in the next request. \textit{FaaSRM}\xspace then recalibrates the function's baseline by taking the execution time of the safeguard invocation. Hence, \textit{FaaSRM}\xspace improves the overall performance of workloads while ensuring SLOs of individual function invocations by regulating its decision-making process.
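
A plausible sketch of this mechanism for a single resource type is given below; `history' and its helpers are hypothetical names, and the peak-filtering rule is one reading of the description above rather than the exact logic of Algorithm 1:
\begin{verbatim}
def safe_allocation(func, rl_option, history, threshold=0.9):
    """Apply the safeguard on top of the RL agent's option (sketch).

    `history` keeps (usage, allocation) records of the function's
    recent invocations; `func.user_defined` is the user-requested
    amount. All names here are hypothetical, not the FaaSRM API.
    """
    usage, alloc = history.last(func)
    if usage > threshold * alloc:
        # demand is close to the current allocation: issue a
        # safeguard invocation returning all harvested resources,
        # then recalibrate the baseline from its execution time
        history.mark_recalibration(func)
        return func.user_defined
    # otherwise, never allocate below the recent peak usage
    return max(rl_option, history.peak_usage(func))
\end{verbatim}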
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{figures/freyr-arch.eps}
\vspace{-0.15in}
\caption{\textit{FaaSRM}\xspace's architecture.}
\vspace{-0.3in}
\label{fig:openwhisk}
\end{figure}
\subsection{Challenges}
\label{subsec:challenges}
We tackle three key challenges of rebalancing resources for serverless functions:
\textbf{Decouple CPU and memory.} Existing serverless platforms bind CPU with memory to improve resource utilization of servers, {\it i.e.}, they allocate CPU power proportionally to memory. Harvesting idle resources independently thus requires platforms to decouple CPU from memory, which is non-trivial.
Furthermore, decoupling CPU and memory introduces two independent decision dimensions for resource management. To make good allocation decisions, the resource manager has to identify both the CPU and memory demands of functions.
\textbf{Huge action space.} To provision supplementary resources to under-provisioned functions, \textit{FaaSRM}\xspace has to select an allocation option from a two-dimensional resource pool, leading to an immense action space. For example, AWS Lambda allows users to configure memory from 128 MB to 10240 MB with access to up to 6 CPU cores, yielding 60,672 choices in total. A huge action space can result in a neural network with an enormous output layer, which is inefficient and difficult to train.
\textbf{Performance degradation.} While \textit{FaaSRM}\xspace harvests resources from functions deemed over-provisioned and improves the performance of the entire workload, one necessary requirement is to prevent the performance of those functions from degrading. It is vital to guarantee the SLOs of individual functions, {\it i.e.}, harvested functions must suffer no significant performance degradation.
To address the first two challenges,
we propose an RL agent inside \textit{FaaSRM}\xspace that uses a \textit{score function} to make decisions. By reusing the same score function, we decouple CPU and memory resources and make two-dimensional decisions with a small, fixed-size action space. For the last challenge,
we devise a safeguard mechanism to protect the performance of functions harvested by \textit{FaaSRM}\xspace.
\section{Evaluation}
\label{sec:evaluation}
We conduct an extensive evaluation of \textit{FaaSRM}\xspace.
We evaluate \textit{FaaSRM}\xspace on a simulator at scale with 1,000 unique functions sampled from Azure Functions traces~\cite{azure-traces}.
We also evaluate \textit{FaaSRM}\xspace on an OpenWhisk cluster with 10 realistic serverless applications. Our goal is to investigate the effectiveness and generalizability of \textit{FaaSRM}\xspace under different workloads and system conditions, as \textit{FaaSRM}\xspace is designed to be general and workload-agnostic.
Results of the simulation and OpenWhisk evaluations indicate that \textit{FaaSRM}\xspace outperforms the three baseline RMs. Compared to the default RM in OpenWhisk, \textit{FaaSRM}\xspace reduces the 98th-percentile function execution time by 87.60\% and 35.81\% for the same workload in simulation and OpenWhisk, respectively. On the OpenWhisk cluster, \textit{FaaSRM}\xspace harvests idle resources from 38.78\% of function invocations while accelerating 39.18\%.
\subsection{Methodology}
\label{sec:methodology}
To evaluate the performance of \textit{FaaSRM}\xspace under different workload types, we use real-world trace samples from the Azure Functions traces \cite{azure-traces}, which contain execution times, memory sizes, and invocation timestamps for more than 50,000 unique functions, in both the simulation and OpenWhisk evaluations.
In both evaluations, we compare \textit{FaaSRM}\xspace with three baseline RMs:
(1) \textit{Fixed RM}: the default RM employed by most existing serverless platforms.
Fixed RM requires users to pre-define memory for their functions and then consistently provisions the exact amount of memory.
CPU power is then allocated proportionally to functions based on the memory.
We assume users pre-configure their memory and make no changes after invocations start.
(2) \textit{Greedy RM}: a greedy RM that proactively optimizes resource allocation by harvesting idle resources and supplementing resources in fixed-size steps.
For each function invocation, Greedy RM queries the same history that \textit{FaaSRM}\xspace accesses. Rather than using RL predictions, Greedy RM increases or decreases resources by a fine-tuned, fixed step size. In our implementation, we set the step size to 1 core for CPU management and 64 MB for memory (a sketch of this step logic follows the baseline list).
(3) \textit{ENSURE}: the proposed RM in \cite{ensure}, which manages CPU resources for serverless platforms at the function level.
ENSURE dynamically adjusts CPU power for each function only when detecting performance degradation.
However, ENSURE does not provide memory management, {\it i.e.}, function memory stays fixed at the user-requested level. We implement the CPU management policy of ENSURE as one of our baselines.
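For concreteness, the snippet below sketches one plausible implementation of Greedy RM's step rule for a single resource dimension; the function name, the 90\% utilization threshold, and the data layout are our assumptions, as the text above only fixes the step sizes.
\begin{lstlisting}[language=Python]
CPU_STEP = 1   # cores
MEM_STEP = 64  # MB

def greedy_adjust(last_alloc, last_peak, step, max_alloc):
    """Move the allocation one fixed step toward the observed demand."""
    if last_peak < 0.9 * last_alloc:             # looks over-provisioned
        return max(last_peak + step, last_alloc - step)  # harvest a step
    return min(last_alloc + step, max_alloc)     # supplement a step

# Example: 4 allocated cores, observed peak of 1.2 cores -> harvest to 3.
print(greedy_adjust(last_alloc=4, last_peak=1.2, step=CPU_STEP, max_alloc=8))
\end{lstlisting}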
\begin{table}[t]
\centering
\caption{Characterization of the four workload sets used in the simulation (SIM) and OpenWhisk (OW) evaluations. Metrics include: total number of functions (Funcs), total number of invocations (Calls), average inter-arrival time (IAT), and requests per second.}
\vspace{-0.1in}
\begin{small}
\begin{tabular}{ccccc}
\toprule
\textbf{Set} & \textbf{Funcs} & \textbf{Calls} & \textbf{Avg IAT (s)} & \textbf{Reqs/sec} \\
\midrule[\heavyrulewidth]
SIM-train & 15,427 & 83,521,859 & 0.71 & 1.39 \\
SIM-test & 1,000 & 85,470 & 0.69 & 1.42 \\
OW-train & 10 & 26,705 & 2.21 & 0.44 \\
OW-test & 10 & 268 & 2.20 & 0.45 \\
\bottomrule
\end{tabular}
\end{small}
\vspace{-0.2in}
\label{table:workloads}
\end{table}
\begin{table*}[t]
\centering
\caption{Characterization of the serverless functions used in the OpenWhisk evaluation. Metrics include: average CPU usage peak (cores), average memory usage peak (MB), average cold duration (s), and average warm duration (s).}
\vspace{-0.1in}
\begin{small}
\begin{tabular}{lcccccc}
\toprule
\textbf{Function} & \textbf{Type} & \textbf{Dependency} & \textbf{CPU Peak} & \textbf{Memory Peak} & \textbf{Cold} & \textbf{Warm} \\
\midrule[\heavyrulewidth]
Dynamic Html (DH) & Web App & Jinja2, CouchDB & 3.77 & 181 & 4.45 & 2.34 \\
Email Generation (EG) & Web App & CouchDB & 1.15 & 159 & 2.20 & 0.215 \\
Image Processing (IP) & Multimedia & Pillow, CouchDB & 2.09 & 149 & 5.88 & 3.52 \\
Video Processing (VP) & Multimedia & FFmpeg, CouchDB & 3.06 & 537 & 6.86 & 1.19 \\
Image Recognition (IR) & Machine Learning & Pillow, torch, CouchDB & 6.0 & 421 & 4.28 & 0.09 \\
K Nearest Neighbors (KNN) & Machine Learning & scikit-learn, CouchDB & 4.52 & 268 & 4.99 & 1.11 \\
Gradient Descent (GD) & Machine Learning & NumPy, CouchDB & 2.04 & 268 & 4.15 & 0.60 \\
Arithmetic Logic Unit (ALU) & Scientific & CouchDB & 4.31 & 188 & 5.72 & 3.45 \\
Merge Sorting (MS) & Scientific & CouchDB & 5.67 & 228 & 3.87 & 1.94 \\
DNA Visualisation (DV) & Scientific & Squiggle, CouchDB & 5.79 & 282 & 8.57 & 3.11 \\
\bottomrule
\end{tabular}
\end{small}
\label{table:functions}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{figures/sim-per-invocation.pdf}
\vspace{-0.15in}
\caption{The RFETs of all function invocations processed by Fixed RM, Greedy RM, ENSURE, and \textit{FaaSRM}\xspace in simulation (\textbf{Harv}: invocations harvested by RMs, \textbf{Safe}: invocations with user-requested allocation, \textbf{Acc}: invocations supplemented by RMs).}
\vspace{-0.1in}
\label{fig:sim-per-invocation}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{figures/sim-fet-cdf.eps}
\vspace{-0.1in}
\caption{The CDF of function execution times of invocations processed by four RMs in simulation.}
\vspace{-0.2in}
\label{fig:sim-cdf}
\end{figure}
\subsection{Simulation at Scale}
We first evaluate \textit{FaaSRM}\xspace in a simulative serverless computing environment based on OpenAI Gym \cite{openaigym}, an open-source library for evaluating RL algorithms.
To build a simulation framework that is close to real serverless platforms, we model various features of OpenWhisk.
In the simulator, we mimic the features of containerization or sandbox techniques in a real serverless platform.
Whenever users pick new memory limits for their functions, new containers have to be launched, thus leading to cold starts \cite{berkeley-view}.
We implement APIs for defining a customized simulative cluster. The cluster can be configured with arbitrary numbers of servers, and each server is enabled with features of a realistic serverless environment, such as temporary function dependency caching for warm
starts and message queuing for high volumes of invocation bursts.
\textit{FaaSRM}\xspace does not address the function placement problem; we adopt the default hashing algorithm of the OpenWhisk load balancer. We configure our simulated cluster with 20 worker servers, each with 8 CPU cores and 2 GB of memory available for functions. Each function has access to at most 8 CPU cores and 512 MB of memory.
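To make the simulator's interface concrete, the sketch below shows a skeletal Gym environment in the spirit of our setup; all class, method, and field names are illustrative rather than our actual implementation, and the execution and reward logic are elided.
\begin{lstlisting}[language=Python]
import gym

class ServerlessEnv(gym.Env):
    """Skeletal simulated cluster: 20 workers, 8 cores / 2048 MB each.
    Functions are placed by hashing, mimicking OpenWhisk's balancer."""
    def __init__(self, invocations, n_workers=20, cores=8, mem_mb=2048):
        self.invocations = list(invocations)   # (func_id, ...) tuples
        self.workers = [{"cpu": cores, "mem": mem_mb}
                        for _ in range(n_workers)]
        self.i = 0

    def reset(self):
        self.i = 0
        return self.invocations[self.i]

    def step(self, action):
        cpu, mem = action                  # allocation for this invocation
        func_id = self.invocations[self.i][0]
        worker = self.workers[hash(func_id) % len(self.workers)]
        worker["cpu"] -= cpu; worker["mem"] -= mem   # claim resources
        # ... simulate execution (cold/warm start, queuing), release ...
        reward = 0.0                       # placeholder reward
        self.i += 1
        done = self.i >= len(self.invocations)
        obs = None if done else self.invocations[self.i]
        return obs, reward, done, {}
\end{lstlisting}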
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{figures/openwhisk-per-invocation.pdf}
\vspace{-0.15in}
\caption{The RFETs of all function invocations processed by Fixed RM, Greedy RM, ENSURE, and \textit{FaaSRM}\xspace in OpenWhisk evaluation (\textbf{Harv}: invocations harvested by RMs, \textbf{Safe}: invocations with user-requested allocation, \textbf{Acc}: invocations supplemented by RMs).}
\vspace{-0.25in}
\label{fig:openwhisk-per-invocation}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{figures/openwhisk-fet-cdf.eps}
\vspace{-0.1in}
\caption{The CDF of function execution times of invocations processed by four RMs in OpenWhisk evaluation.}
\vspace{-0.2in}
\label{fig:openwhisk-cdf}
\end{figure}
\textbf{Workloads:}
We randomly sample two workload sets consisting of over 16,000 unique functions from the Azure Functions traces. Table \ref{table:workloads} (sets SIM-train and SIM-test) summarizes the characteristics of the workload sets used for training and testing in the simulation evaluation, respectively. In simulation, we examine the performance of \textit{FaaSRM}\xspace when serving serverless functions at scale.
\textbf{Results:}
In the simulation, \textit{FaaSRM}\xspace outperforms the baseline RMs by achieving the lowest average RFET of the workload, 0.76, whereas Fixed RM, Greedy RM, and ENSURE achieve 1.00, 1.49, and 0.91, respectively. Due to sampling issues in the traces, Fixed RM has a few outliers whose RFETs are not 1.
Figure~\ref{fig:sim-per-invocation} presents the RFETs of 85,470 function invocations processed by the four RMs. Fixed RM causes no performance degradation but also performs no resource adjustment. Greedy RM incurs serious SLO violations by harvesting too many resources. ENSURE also violates the SLOs of some invocations even though it only adjusts CPU resources. In contrast, \textit{FaaSRM}\xspace judiciously harvests idle resources from over-provisioned invocations and significantly accelerates under-provisioned ones.
Figure~\ref{fig:sim-cdf} shows the cumulative distribution function (CDF) of the execution times of all the invocations. \textit{FaaSRM}\xspace processes most function invocations faster than the other baseline RMs. \textit{FaaSRM}\xspace completes the 98th percentile of invocations within 15 seconds, whereas Fixed RM, Greedy RM, and ENSURE require 121, 76, and 48 seconds, respectively.
In simulation, \textit{FaaSRM}\xspace outperforms the other baseline RMs with the lowest average RFET by achieving fairness between invocations within a workload, {\it i.e.}, \textit{FaaSRM}\xspace maintains the SLO of each invocation while improving as many invocations as possible. We then evaluate \textit{FaaSRM}\xspace and the other baselines on a real-world serverless platform, OpenWhisk, with realistic serverless applications.
\subsection{OpenWhisk Evaluation}
We deploy and evaluate \textit{FaaSRM}\xspace on an OpenWhisk cluster with 13 servers. One server hosts the REST frontend, API gateway, and Redis services; one backend server hosts the Controller, Distributed Messaging, and Database services; one server hosts \textit{FaaSRM}\xspace agent; and the remaining 10 servers are Invokers for executing functions.
The server hosting \textit{FaaSRM}\xspace agent has 16 Intel Xeon Skylake CPU cores and 64 GB memory, whereas each of the other 12 servers has 8 Intel Xeon Skylake CPU cores and 32 GB memory.
Each Invoker provides 8 CPU cores and 2 GB of RAM for individual function executions. Each function can be configured with at most 8 CPU cores and 512 MB of RAM.
We use a window of 100 ms to monitor the CPU and memory usage of function invocations inside containers.
\textbf{Workloads:}
We randomly sample another two invocation sets for OpenWhisk evaluation. Table~\ref{table:workloads} depicts the two invocation sets (OW-train and OW-test) used in the OpenWhisk evaluation.
We use a scaled-down version of the invocation traces, {\it i.e.}, we assume the invocation trace is based on seconds rather than minutes. This re-scaling increases the intensity of workloads while speeding up \textit{FaaSRM}\xspace OpenWhisk training by reducing the total workload duration.
We employ 10 real-world applications from three serverless benchmark suites: SeBS \cite{copik2020sebs}, ServerlessBench \cite{serverlessbench}, and ENSURE-workloads \cite{ensure}. Table~\ref{table:functions} describes the characteristics of the 10 serverless applications, including the average CPU and memory usage peaks during execution and the average cold and warm durations. We set 8 CPU cores and 512 MB as the initial user-defined allocation for VP, IR, and DV, and 4 CPU cores and 256 MB for the other functions.
\textbf{Results:}
For the testing workload set in the OpenWhisk evaluation, \textit{FaaSRM}\xspace outperforms the baseline RMs by achieving an average RFET of 0.89 for the workload, whereas Fixed RM, Greedy RM, and ENSURE achieve 1.09, 0.93, and 1.12, respectively. Notice that in the OpenWhisk experiment, the average RFET of Fixed RM is not strictly 1.0 due to performance variation.
Figure~\ref{fig:openwhisk-per-invocation} shows the RFETs of 268 function invocations processed by the four RMs. Fixed RM performs no resource adjustment. Both Greedy RM and ENSURE severely violate the SLOs of some invocations while harvesting resources. In contrast to the perfect SLO compliance observed in simulation, evaluating \textit{FaaSRM}\xspace in OpenWhisk incurs some performance variation. However, \textit{FaaSRM}\xspace still achieves the lowest average RFET and maintains the 98th percentile of RFETs below 1.28 across all invocations, whereas those of Fixed RM, Greedy RM, and ENSURE are 1.27, 2.52, and 5.2, respectively.
Figure~\ref{fig:openwhisk-cdf} shows the cumulative distribution function (CDF) of the execution times of all the invocations. \textit{FaaSRM}\xspace processes most function invocations faster than the other baseline RMs. \textit{FaaSRM}\xspace completes the 98th percentile of invocations within 16.91 seconds, whereas Fixed RM, Greedy RM, and ENSURE require 32.16, 25.80, and 27.39 seconds, respectively.
We conduct extensive experiments in both the simulation and OpenWhisk evaluations. \textit{FaaSRM}\xspace outperforms the other baseline RMs with the lowest average RFET by achieving fairness between invocations within a workload, {\it i.e.}, \textit{FaaSRM}\xspace guarantees the SLOs of individual invocations while improving as many invocations as possible. Hence, \textit{FaaSRM}\xspace harvests idle resources from over-provisioned functions and accelerates under-provisioned functions with supplementary resources while preventing any function from noticeable performance degradation.
\section{Concluding Remarks}
This paper proposed a new resource manager, \textit{FaaSRM}\xspace, which harvests idle resources from over-provisioned functions and accelerates under-provisioned functions with supplementary resources. Given realistic serverless workloads, \textit{FaaSRM}\xspace improves most function invocations while safely harvesting idle resources using reinforcement learning and the safeguard mechanism.
Results of the simulation and the experiments on the OpenWhisk cluster demonstrate that \textit{FaaSRM}\xspace outperforms the other baseline RMs. Compared to the default RM in OpenWhisk, \textit{FaaSRM}\xspace reduces the 98th-percentile function execution time by 87.60\% and 35.81\% for the same workload in simulation and OpenWhisk, respectively. In the OpenWhisk evaluation, \textit{FaaSRM}\xspace harvests idle resources from 38.78\% of function invocations while accelerating 39.18\%.
\section{\textit{FaaSRM}\xspace's Design}
In this section, we present the design of \textit{FaaSRM}\xspace.
We first formulate the resource harvesting and provisioning problem in serverless computing and then introduce the information collection and embedding procedure. We describe the score function based policy network in \textit{FaaSRM}\xspace and how it allocates resources for function invocations. We also describe the safeguard mechanism atop \textit{FaaSRM}\xspace for preventing individual functions from significant performance degradation. Finally, we present the training algorithm for \textit{FaaSRM}\xspace.
\subsection{Problem formulation}
\label{sec:formulation}
A serverless platform has multiple worker nodes. On each node, we focus on the two main resource types in a serverless environment, {\it i.e.}, CPU and memory.
We assume that all nodes have homogeneous hardware configurations and the total available resource of the platform is limited. We consider functions with the same resource demands for CPU and memory.
The resource profile of each function $f$ is given by the vector $r = (r_c, r_m)$, where $r_c$ (resp. $r_m$) denotes the amount of CPU (resp. memory) resources allocated to $f$.
In the existing serverless platforms ({\it e.g.}, AWS Lambda and Apache OpenWhisk), the resource allocation is non-preemptive and fixed, {\it i.e.}, $r$ must be provisioned consistently until the function completes.
We are interested in two key metrics: \textit{relative function execution time (RFET)} and \textit{function throughput}.
\textbf{Relative Function Execution Time (RFET)}. We consider a FaaS platform that handles a concurrent workload of multiple functions. Let $S$ denote the set of function invocations within the workload, and let $f$ denote a function invocation in $S$. At the first invocation of $f$, \textit{FaaSRM}\xspace captures the FET $e^b$ with the resources $(r^b_c, r^b_m)$ configured by the user and employs it as a baseline (here $b$ stands for \textit{baseline}). \textit{FaaSRM}\xspace then captures the FET $e^i$ of a subsequent $i$-th invocation after it completes execution. The RFET of the $i$-th invocation is calculated as
\begin{equation}
RFET := \frac{e^i}{e^b}.
\label{eq:rfet}
\end{equation}
{Our goal is to minimize the total average RFET of a workload, which is given by }
\begin{equation*}
\Bar{E} := \frac{1}{|S|}\sum_{f \in S} \frac{e^i}{e^b}.
\label{eq:avg-rfet}
\end{equation*}
{Since we aim to improve certain functions without degrading the performance of others, we introduce the RFET to guarantee fairness between function invocations with different completion times.}
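As a small worked example of this metric, the snippet below computes per-invocation RFETs and the workload average; the function names are illustrative.
\begin{lstlisting}[language=Python]
def rfet(e_i, e_b):
    """Relative function execution time (Eq. 1): e^i / e^b."""
    return e_i / e_b

def avg_rfet(executions):
    """Average RFET over a workload; executions = [(e_i, e_b), ...]."""
    return sum(rfet(e, b) for e, b in executions) / len(executions)

# One accelerated invocation (RFET 0.8) and one at baseline (RFET 1.0).
print(avg_rfet([(4.0, 5.0), (2.0, 2.0)]))  # 0.9
\end{lstlisting}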
\textbf{Function Throughput}. We also maximize the function throughput of a given workload. It is possible for \textit{FaaSRM}\xspace to harvest and allocate a large amount of resources to long-running functions while short-running functions suffer from queuing and starvation. This issue lowers function throughput and lengthens the total duration of a workload. To avoid inappropriate resource holding and long queuing times, we make \textit{FaaSRM}\xspace aware of function throughput when provisioning workloads.
Function invocations are event-driven and arrive at the platform in real-time.
The platform makes a decision on the resource allocation when serving each invocation event.
We assume the resource profile $r$ is adjustable when receiving each invocation during the workload processing.
Specifically at time $t$, a function $f$ is invoked by a user.
The resource profile $r$ determines the resource provisioning for the invoked function $f$.
Existing serverless platforms require users to pre-define resources for their functions and allocate the exact amount of resources when functions are invoked; in our context, this means the platform keeps $r$ unchanged.
However, serverless platforms can harvest resources from over-provisioned functions and accelerate under-provisioned functions by optimizing the resource allocation of each function invocation without noticeably degrading the performance of individual functions, thus improving overall performance.
\subsection{Information Collection and Embedding}
\label{subsec:embedding}
\begin{table}[b]
\centering
\caption{The state space of \textit{FaaSRM}\xspace agent.}
\begin{small}
\begin{tabular}{ll}
\toprule%
\multirow{2}{4em}{\textbf{Platform State}} & \texttt{avail\_cpu}, \texttt{avail\_mem} \\
& \texttt{inflight\_request\_num} \\\midrule%
\multirow{3}{4em}{\textbf{Function State}} & \texttt{avg\_cpu\_peak}, \texttt{avg\_mem\_peak}, \\
& \texttt{avg\_interval}, \texttt{avg\_execution\_time}, \\
& \texttt{request\_per\_sec}, \texttt{baseline} \\
\bottomrule%
\end{tabular}
\end{small}
\label{table:state-space}
\end{table}
When allocating resources for a function invocation, \textit{FaaSRM}\xspace collects information at two levels, the platform level and the function level, as summarized in Table~\ref{table:state-space}. Specifically, for the platform, \textit{FaaSRM}\xspace captures the number of invocations remaining in the system ({\it i.e.}, \texttt{inflight\_request\_num}), the available CPU cores, and the available memory. For the incoming function, \textit{FaaSRM}\xspace queries the invocation history of the function, which records the average CPU peak, average memory peak, average inter-arrival time, average execution time, requests per second, and the baseline execution time ({\it i.e.}, \texttt{baseline}) with user-requested resources.
After collecting this information, \textit{FaaSRM}\xspace encapsulates it together with a potential resource allocation option. More precisely, we embed the information and the potential configuration option into a flat state vector as input to the \textit{FaaSRM}\xspace agent, with the information embedding process illustrated in Figure~\ref{fig:workflow}.
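One plausible encoding of this state vector is sketched below: the nine features of Table~\ref{table:state-space} concatenated with a two-dimensional candidate option. The exact feature ordering and option encoding are our assumptions.
\begin{lstlisting}[language=Python]
import torch

def embed_state(platform, func, option):
    """Concatenate platform state (3), function state (6), and one
    candidate allocation option (2) into an 11-dim state vector."""
    return torch.tensor([
        platform["avail_cpu"], platform["avail_mem"],
        platform["inflight_request_num"],
        func["avg_cpu_peak"], func["avg_mem_peak"],
        func["avg_interval"], func["avg_execution_time"],
        func["request_per_sec"], func["baseline"],
        option[0], option[1],   # candidate CPU cores and memory (MB)
    ], dtype=torch.float32)
\end{lstlisting}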
\subsection{Score Function}
\label{subsec:score-function}
\begin{figure*}[t]
\centering
\includegraphics[width=0.92\textwidth]{figures/freyr-workflow.pdf}
\vspace{-0.1in}
\caption{The workflow of \textit{FaaSRM}\xspace.}
\vspace{-0.1in}
\label{fig:workflow}
\end{figure*}
\textit{FaaSRM}\xspace uses a \textit{score function} to calculate the priority of selecting potential resource allocation options. Figure~\ref{fig:workflow} visualizes the policy network of \textit{FaaSRM}\xspace agent, and illustrates the workflow of how the agent selects the best allocation option based on states.
At time $t$, a function invocation arrives at the platform, which has in total $N$ potential resource configuration options. After the embedding procedure, \textit{FaaSRM}\xspace collects a batch of state vectors $\boldsymbol{s}_t = (\boldsymbol{s}^1_t, \ldots, \boldsymbol{s}^n_t, \ldots, \boldsymbol{s}^N_t)$, where $\boldsymbol{s}^n_t$ is the state vector corresponding to the $n$-th option. \textit{FaaSRM}\xspace feeds $\boldsymbol{s}_t$ to the score function. We implement the score function using two neural networks, an \textit{actor network} and a \textit{critic network}. The actor network computes a score $q^n_t$, a scalar value mapped from the state vector $\boldsymbol{s}^n_t$ that represents the priority of selecting configuration option $n$. Then \textit{FaaSRM}\xspace applies a Softmax operation to the scores $(q^1_t, \ldots, q^n_t, \ldots, q^N_t)$ to compute the probability of selecting option $n$ based on the priority scores, given by
\begin{equation*}
\mathbb{P}_t(\textit{option}=n) = \frac{\exp(q^n_t)}{\sum^N_{n=1} \exp(q^n_t)},
\label{eq:softmax}
\end{equation*}
at time $t$.
The critic network outputs a baseline value $b^n_t$ for option $n$; the averaged baseline value $\Bar{b}_t$ is calculated as
\begin{equation}
\Bar{b}_t = \frac{1}{N}\sum^N_{n=1} b^n_t,
\label{eq:baseline-value}
\end{equation}
which is used to reduce variance when training \textit{FaaSRM}\xspace. The entire policy network is end-to-end differentiable.
The score function itself contains no manual feature engineering. The \textit{FaaSRM}\xspace agent automatically learns to compute accurate priority scores of allocation options through training. More importantly, \textit{FaaSRM}\xspace uses the same score function for all function invocations and all potential resource allocation options. By embedding options into state vectors, \textit{FaaSRM}\xspace can distinguish between different options and use the score function to select the best one. Reusing the score function reduces the size of the networks and significantly limits the action space of the \textit{FaaSRM}\xspace agent.
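The sketch below illustrates this scoring scheme in PyTorch, using the layer sizes reported in the implementation section; the 11-dimensional state is the assumed encoding from above.
\begin{lstlisting}[language=Python]
import torch
import torch.nn as nn

class ScoreActor(nn.Module):
    """Actor: maps each option's state vector to a scalar score; a
    Softmax over the N candidate options then gives probabilities."""
    def __init__(self, state_dim=11):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 32), nn.Tanh(),
            nn.Linear(32, 16), nn.Tanh(),
            nn.Linear(16, 1))

    def forward(self, states):                 # states: (N, state_dim)
        scores = self.net(states).squeeze(-1)  # (N,) priority scores q
        return torch.softmax(scores, dim=-1)   # P_t(option = n)

actor = ScoreActor()
probs = actor(torch.randn(5, 11))       # five candidate options
option = torch.multinomial(probs, 1)    # sample one allocation option
\end{lstlisting}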
\subsection{Safeguard}
\label{subsec:safeguard}
We design \textit{FaaSRM}\xspace to improve both over-provisioned and under-provisioned functions. However, when harvesting resources from functions deemed over-provisioned, it is possible that \textit{FaaSRM}\xspace under-predicts their resource demands. A function's performance degrades when it is over-harvested. We devise a safeguard mechanism atop \textit{FaaSRM}\xspace that regulates decisions by avoiding options that may harm performance and by returning harvested resources immediately when a usage spike is detected. We use this safeguard mechanism to mitigate noticeable performance degradation of individual functions.
\begin{algorithm}[b]
\caption{Safeguard mechanism atop \textit{FaaSRM}\xspace.}
\begin{algorithmic}[1]
\WHILE{True}
\STATE calibrate\_baseline = False
\STATE last\_request = \texttt{QueryRequestHistory}(function\_id)
\IF{last\_request is None}
\STATE range = [user\_defined] \texttt{\#Safeguard invoke}
\STATE calibrate\_baseline = True
\ELSE
\STATE last\_alloc = last\_request.alloc
\STATE last\_peak = last\_request.peak
\STATE recent\_peak = \texttt{GetRecentPeak}(function\_id)
\IF{last\_peak / user\_defined <= 0.9}
\STATE \texttt{\#Over-provisioned}
\IF{last\_peak / last\_alloc >= 0.9}
\STATE range = [user\_defined] \texttt{\#Safeguard invoke}
\STATE calibrate\_baseline = True
\ELSE
\STATE range = [recent\_peak + 1, user\_defined]
\ENDIF
\ELSE
\STATE \texttt{\#Under-provisioned}
\STATE range = [recent\_peak + 1, max\_per\_function]
\ENDIF
\ENDIF
\STATE alloc\_option = \texttt{\textit{FaaSRM}\xspace}(function\_id, range)
\STATE \texttt{Invoke}(function\_id, alloc\_option, calibrate\_baseline)
\ENDWHILE
\end{algorithmic}
\label{algo:safeguard}
\end{algorithm}
Algorithm~\ref{algo:safeguard} summarizes the safeguard mechanism built atop \textit{FaaSRM}\xspace. We refer to invoking a function with its user-defined resources as a safeguard invocation. When there are no previous invocations, \textit{FaaSRM}\xspace triggers a safeguard invocation to obtain the resource usage and calibrate the baseline used in Equation~\ref{eq:rfet} (lines 4-6). For subsequent invocations, \textit{FaaSRM}\xspace queries the request history of the function and polls the usage peak and allocation of the last invocation as well as the highest recent peak since the last baseline calibration (lines 8-10). \textit{FaaSRM}\xspace first checks the current status of the function, {\it i.e.}, over-provisioned or under-provisioned (line 11). We consider functions with resource usage below 90\% of the user-requested level to be over-provisioned. For over-provisioned (harvested) functions, \textit{FaaSRM}\xspace then checks the usage peak of the last invocation (line 13). If the usage peak approaches 90\% of the allocation, we suspect a load spike that could require more resources than the current allocation. This triggers a safeguard invocation and baseline recalibration: \textit{FaaSRM}\xspace immediately returns the harvested resources to the function at the next invocation (lines 14-15). If there is no usage spike, \textit{FaaSRM}\xspace is allowed to select an allocation option from the recent peak plus one unit up to the user-requested level (line 17). For under-provisioned functions, \textit{FaaSRM}\xspace is allowed to select from the recent peak plus one unit up to the maximum available level (line 21). After an allocation option is selected, \textit{FaaSRM}\xspace invokes the function and forwards the invocation to a worker server for execution.
We apply the safeguard algorithm to CPU and memory separately. The results in Section~\ref{sec:evaluation} show that our safeguard mechanism effectively regulates the decisions made by \textit{FaaSRM}\xspace and protects the SLOs of individual functions while improving the entire workload.
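For reference, a direct Python transcription of the range computation in Algorithm~\ref{algo:safeguard} for one resource dimension is sketched below, assuming integer resource units and that the history is kept since the last baseline calibration.
\begin{lstlisting}[language=Python]
def safeguard_range(history, user_defined, max_per_function):
    """Return (candidate allocations, calibrate_baseline) for one
    resource dimension; history = [(alloc, peak), ...]."""
    if not history:                          # no previous invocation
        return [user_defined], True          # safeguard invoke
    last_alloc, last_peak = history[-1]
    recent_peak = max(peak for _, peak in history)
    if last_peak / user_defined <= 0.9:      # over-provisioned
        if last_peak / last_alloc >= 0.9:    # possible usage spike
            return [user_defined], True      # safeguard invoke
        return list(range(recent_peak + 1, user_defined + 1)), False
    return list(range(recent_peak + 1, max_per_function + 1)), False

# Example: allocated 4 cores, peak 2 -> may harvest down to 3 cores.
print(safeguard_range([(4, 2)], user_defined=4, max_per_function=8))
\end{lstlisting}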
\subsection{Training the RL Agent}
\label{subsec:training}
\textit{FaaSRM}\xspace training proceeds in \textit{episodes}.
In each episode, a series of function invocations arrive at the serverless platform, and each requires a two-dimensional action to configure CPU and memory resources.
When the platform completes all function invocations, the current episode ends.
Let $T$ denote the total number of invocations in an episode, and $t_i$ denote the wall clock time of the $i$-th invocation.
We continuously feed \textit{FaaSRM}\xspace with a reward $r$ after it takes an action to handle an invocation.
Concretely, we penalize \textit{FaaSRM}\xspace with
\begin{equation*}
r_i = - \frac{1}{P_t^{\frac{1}{3}}}\sum_{f \in S|^{t_i}_{t_{i-1}}} \frac{e^i}{e^b} + R_{(\textit{RFET}<1)} - R_{(\textit{RFET}>1)},
\end{equation*}
after taking the action for the $i$-th invocation, where $P_t$ denotes the current number of executed invocations ({\it i.e.}, function throughput), $S|^{t_i}_{t_{i-1}}$ is the set of invocations that finish during the interval $[t_{i-1}, t_i)$, $\frac{e^i}{e^b}$ is the RFET of an invocation as defined in Section~\ref{sec:formulation}, and $R_{(\textit{RFET}<1)}$ and $R_{(\textit{RFET}>1)}$ are two constant terms for rewarding good and penalizing bad actions.
In practice, we find that taking the cube root of $P_t$ prevents the reward from vanishing when $P_t$ grows large.
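A minimal sketch of this per-step reward is given below; we assume the constant terms are applied once per finished invocation, which the formula leaves implicit.
\begin{lstlisting}[language=Python]
def step_reward(finished_rfets, throughput, r_good=1.0, r_bad=1.0):
    """Per-step reward: the sum of RFETs of invocations finishing in
    (t_{i-1}, t_i], penalized and scaled by the cube root of the
    throughput, plus/minus constants for RFETs below/above 1."""
    r = -sum(finished_rfets) / throughput ** (1.0 / 3.0)
    r += r_good * sum(1 for x in finished_rfets if x < 1.0)
    r -= r_bad * sum(1 for x in finished_rfets if x > 1.0)
    return r
\end{lstlisting}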
The goal of the algorithm is to maximize the expected cumulative rewards given by
\begin{equation}
\mathbb{E}\Bigg[\sum^T_{i=1}\Bigg( - \frac{1}{P_t^{\frac{1}{3}}}\sum_{f \in S|^{t_i}_{t_{i-1}}} \frac{e^i}{e^b} + R_{(\textit{RFET}<1)} - R_{(\textit{RFET}>1)}\Bigg)\Bigg].
\label{eq:rewards}
\end{equation}
Hence, \textit{FaaSRM}\xspace learns to minimize the average \textit{RFET} of individual functions while maximizing the function throughput.
\textit{FaaSRM}\xspace uses a policy gradient algorithm for training.
Policy gradient methods are a class of RL algorithms that learn policies by performing gradient ascent directly on the parameters of neural networks using the rewards received during training.
When updating policies, large step sizes may collapse the performance, while small step sizes may worsen the sampling efficiency.
We use the Proximal Policy Optimization (PPO) algorithms \cite{ppo2} to ensure that \textit{FaaSRM}\xspace takes appropriate step sizes during policy updates. More specifically, given a policy $\pi_\theta$ parameterized by $\theta$,
the PPO algorithm updates policies at the $k$-th episode via
\begin{equation*}
\theta_{k+1} = \mathop{\arg\max}\limits_{\theta} \mathop{\mathbb{E}}\limits_{s,a \sim \pi_{\theta_k}} \Big[\mathbb{L}(s,a,\theta_k,\theta)\Big],
\label{eq:ppo-update}
\end{equation*}
where $\mathbb{L}$ is the \textit{surrogate advantage} \cite{trpo}, a measure of how policy $\pi_\theta$ performs relative to the old policy $\pi_{\theta_k}$ using data from the old policy. Specifically, we use the PPO-clip version of the PPO algorithm, where $\mathbb{L}$ is given by
\begin{equation*}
\mathbb{L}(s,a,\theta_k,\theta) = \min \Big(\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)}A^{\pi_{\theta_k}}(s,a),\ g(\epsilon, A^{\pi_{\theta_k}}(s,a))\Big),
\label{eq:ppo-adv}
\end{equation*}
and $g(\epsilon, A)$ is a clip operation defined as
\begin{equation*}
g(\epsilon, A) =
\begin{cases}
(1+\epsilon)A, & \text{if $A \geq 0$}, \\
(1-\epsilon)A, & \text{otherwise},
\end{cases}
\label{eq:ppo-g}
\end{equation*}
where $A$ is the advantage, calculated as the reward $r$ minus the baseline value $b$, and $\epsilon$ is a hyperparameter that restricts how far the new policy is allowed to move from the old one.
Intuitively, the PPO algorithm sets a range for step size of policy updates, which prevents the new policy from going too far from the old (either positive or negative).
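A compact PyTorch rendering of this clipped objective (negated so it can be minimized by gradient descent) is shown below; the tensor shapes are our assumptions.
\begin{lstlisting}[language=Python]
import torch

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """Negative PPO-clip surrogate: -min(ratio * A, g(eps, A)),
    averaged over samples; minimizing it maximizes the surrogate."""
    ratio = torch.exp(logp_new - logp_old)   # pi_theta / pi_theta_k
    clipped = torch.where(advantage >= 0,
                          (1 + eps) * advantage,
                          (1 - eps) * advantage)
    return -torch.min(ratio * advantage, clipped).mean()
\end{lstlisting}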
\begin{algorithm}[t]
\caption{\textit{FaaSRM}\xspace Training Algorithm.}
\begin{algorithmic}[1]
\STATE Initial policy (actor network) parameters $\theta_0$ and value function (critic network) parameters $\phi_0$
\FOR {episode k = 0, 1, 2, \ldots}
\STATE Run policy $\pi_k = \pi(\theta_k)$ in the environment until $T$-th invocation completes
\STATE Collect set of trajectories $\mathbb{D}_k = \{\tau_i\}$, where $\tau_i = (s_i, a_i), i \in [0, T]$
\STATE Compute reward $\hat{r}_t$ via Equation \ref{eq:rewards}
\STATE Compute baseline value ${\Bar b}_t$ via Equation \ref{eq:baseline-value}
\STATE Compute advantage $\hat{\mathbb{A}}_t = \hat{r}_t - {\Bar b}_t$
\STATE Update actor network by maximizing objective using stochastic gradient ascent:
\begin{equation*}
\begin{aligned}
\theta_{k+1} = \arg\mathop{\max_{\theta}} \frac{1}{|\mathbb{D}_k|T} \mathop{\sum}_{\tau \in \mathbb{D}_k}\mathop{\sum}^T_{t=0} \mathbb{L}(s_t,a_t,\theta_k,\theta)
\end{aligned}
\end{equation*}
\STATE Update critic network by regression on mean-squared error using stochastic gradient descent:
\begin{equation*}
\phi_{k+1} = \arg\mathop{\min_{\phi}} \frac{1}{|\mathbb{D}_k|T} \mathop{\sum}_{\tau \in \mathbb{D}_k}\mathop{\sum}^T_{t=0} ({\Bar b}_t - \hat{r}_t)^2
\end{equation*}
\ENDFOR
\end{algorithmic}
\label{algo:freyr}
\end{algorithm}
Algorithm \ref{algo:freyr} presents the training process of \textit{FaaSRM}\xspace. For each episode, we record the full set of trajectories, including the states, actions, rewards, baseline values predicted by the critic network, and the log probabilities of the actions for all invocations. After each training episode finishes, we use the collected trajectories to update the actor and critic networks.
\section{Related Work}
\parab{Cold start}. Recent research proposes different policies based on serverless computing's unique characteristics, such as fine-grained resource granularity and event-driven execution.
The paper \cite{azure-traces} indicates that most serverless applications have distinct patterns in invocation frequency, duration, or resource consumption. It proposes a histogram-based resource management policy, derived from the Azure Functions traces, to predict pre-warming and keep-alive windows of containers for serverless applications.
FaaSProfiler~\cite{shahrad2019architectural} investigates architectural implications of serverless computing from a provider view.
FaasCache~\cite{fuerst2021faascache} uses caching techniques to optimize cold starts in serverless environment.
\parab{Resource management}.
Research has been conducted on VM resource management in traditional clouds for years. Recently, SmartHarvest~\cite{wang2021smartharvest} proposes a VM resource harvesting algorithm using online learning. Unlike \textit{FaaSRM}\xspace, which uses harvested resources to accelerate function executions, SmartHarvest offers a new low-priority VM service using harvested resources. Directly replacing \textit{FaaSRM}\xspace with SmartHarvest is not feasible as SmartHarvest is not designed for serverless computing. Spock~\cite{gunasekaran2019spock} proposes a serverless-based VM scaling system to improve SLOs and reduce costs.
For resource management in serverless, \cite{cpu-cap} and \cite{ensure} both aim to automatically adjust CPU resource when detecting performance degradation during function executions, which help mitigate the issue of resource over-provisioning. In contrast to \cite{cpu-cap} and \cite{ensure}, which only focus on CPU, \textit{FaaSRM}\xspace manages CPU and memory resources independently.
\cite{core-granular} proposes a centralized scheduler for serverless platforms that maps each CPU core of the worker servers to a CPU core of the scheduler servers for fine-grained core-to-core management. \textit{FaaSRM}\xspace focuses on resource allocation rather than scheduling or scaling.
\parab{Reinforcement learning}.
\textsc{Siren} \cite{siren2019infocom} adopts DRL techniques to dynamically invoke functions for distributed machine learning with a serverless architecture.
Our work \textit{FaaSRM}\xspace leverages DRL to improve the platform itself rather than serverless applications.
Decima~\cite{decima} leverages RL to schedule DAG jobs for data processing clusters.
Metis~\cite{metis} proposes a scheduler to schedule long-running applications in large container clusters.
\section{Implementing \textit{FaaSRM}\xspace}
\textit{FaaSRM}\xspace provides a general resource management service for functions in serverless platforms.
For concreteness, we describe its implementation in the context of Apache OpenWhisk framework \cite{openwhisk}. In this section, we briefly introduce the architecture of OpenWhisk, and describe the workflow of \textit{FaaSRM}\xspace managing resources for functions in OpenWhisk system.
\subsection{Integrating \textit{FaaSRM}\xspace with OpenWhisk}
\label{subsec:openwhisk}
Apache OpenWhisk is an open-source, distributed serverless platform that powers IBM Cloud Functions \cite{ibm-cloud-functions}. Figure~\ref{fig:openwhisk} illustrates the architecture of \textit{FaaSRM}\xspace integrated with OpenWhisk. OpenWhisk exposes an NGINX-based REST interface for users to interact with the platform. Users can create new functions, invoke functions, and query the results of invocations via the Frontend. The Frontend forwards function invocations to the Controller, which selects an Invoker (typically hosted in a VM) to execute them. The Load Balancer inside the Controller implements the scheduling logic by considering each Invoker's health, available capacity, and infrastructure state. Once an Invoker is chosen, the Controller sends the function invocation request to it via a Kafka-based distributed messaging component. The Invoker receives the request and executes the function using a Docker container. After finishing the function execution, the Invoker submits the result to a CouchDB-based Database and informs the Controller. The Controller then returns the result of the function execution to users synchronously or asynchronously. Here we focus on resource management for containers.
We modify the following modules of OpenWhisk to implement our resource manager:
\textbf{Frontend:} By default, OpenWhisk only allows users to define the memory limit of their functions and allocates CPU power proportionally based on memory. To decouple CPU and memory, we add a CPU limit and enable the Frontend to take both CPU and memory inputs from users. Users can now specify the CPU cores and memory of their functions, and the Frontend forwards both limits to the Controller.
\textbf{Controller:} The Load Balancer makes scheduling decisions for the Controller. When selecting an Invoker, the Load Balancer considers the available memory of Invokers. We modify the Load Balancer to also check the available CPU cores of Invokers, {\it i.e.}, it now selects Invokers with enough available CPU cores and memory to execute function invocations.
\textbf{Invoker:} The Invoker uses a semaphore-based mechanism to control access of containers to its available memory. To manage CPU independently, we apply the identical mechanism to control access to available CPU cores.
\textbf{Container:} By default, OpenWhisk uses \texttt{cpu-shares} parameter to regulate CPU power of containers. When plenty of CPU cycles are available, all containers with \texttt{cpu-shares} use as much CPU as they need. While \texttt{cpu-shares} improves CPU utilization of Invokers, it can result in performance variation of function executions. We change the CPU parameter to \texttt{cpus} which restricts how many CPU cores a container can use. This is aligned with the CPU allocation policy of AWS Lambda \cite{awslambdalimits}. For each function invocation, we monitor the CPU cores and memory usage of its container using \texttt{cgroups}. We record the usage peak during function execution and keep it as history for \textit{FaaSRM}\xspace to query.
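As an illustration of this per-container monitoring, the snippet below polls cgroup counters at a 100 ms window and tracks usage peaks; the paths assume cgroup v1 with Docker's default cgroup layout, and the actual Invoker-side implementation may differ.
\begin{lstlisting}[language=Python]
import time

CG = "/sys/fs/cgroup"  # assumes cgroup v1, Docker's default layout

def read_int(path):
    with open(path) as f:
        return int(f.read())

def monitor(container_id, window=0.1):
    """Yield (cpu_peak_cores, mem_peak_bytes) sampled every 100 ms."""
    cpu = f"{CG}/cpuacct/docker/{container_id}/cpuacct.usage"
    mem = f"{CG}/memory/docker/{container_id}/memory.usage_in_bytes"
    cpu_peak, mem_peak, prev = 0.0, 0, read_int(cpu)
    while True:
        time.sleep(window)
        now = read_int(cpu)                  # cumulative CPU time (ns)
        cpu_peak = max(cpu_peak, (now - prev) / (window * 1e9))
        mem_peak = max(mem_peak, read_int(mem))
        prev = now
        yield cpu_peak, mem_peak
\end{lstlisting}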
\subsection{Implementing RL Agent in \textit{FaaSRM}\xspace}
\label{subsec:freyr-workflow}
\textit{FaaSRM}\xspace communicates with the OpenWhisk Controller directly via a Key-Value (KV) Store implemented with Redis. When receiving a function invocation, the Load Balancer in the Controller first sends the current state information to the KV Store. The DRL agent in \textit{FaaSRM}\xspace then fetches the state and sends an action back to the Controller. After the Controller picks up the action, the Load Balancer adjusts the resource allocation of the function invocation based on the action provided by \textit{FaaSRM}\xspace and then forwards it to a chosen Invoker for execution. Because \textit{FaaSRM}\xspace is a resource manager inside the serverless platform, we make \textit{FaaSRM}\xspace communicate with the Controller rather than the Frontend. This reduces communication overhead and speeds up training.
We implement the prototype of the \textit{FaaSRM}\xspace agent using two neural networks, each with two fully-connected hidden layers. The first hidden layer has 32 neurons and the second has 16; each neuron uses \texttt{Tanh} as its activation function. The agent is implemented in Python using PyTorch \cite{pytorch}. \textit{FaaSRM}\xspace uses multiprocessing to retrieve the results of function invocations for computing rewards. The implementation is lightweight: because \textit{FaaSRM}\xspace reuses the score function, the policy network consists of only 1,858 parameters (12 KB in total). Mapping a state to an action takes less than 50 ms.
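The reported parameter count is consistent with an 11-dimensional input, as the following check illustrates; the input dimension is our assumption (see the state embedding sketch above).
\begin{lstlisting}[language=Python]
import torch.nn as nn

def make_net(state_dim=11):
    return nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(),
                         nn.Linear(32, 16), nn.Tanh(),
                         nn.Linear(16, 1))

actor, critic = make_net(), make_net()
print(sum(p.numel() for net in (actor, critic)
          for p in net.parameters()))   # 1858
\end{lstlisting}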
We use the algorithm presented in Section \ref{subsec:training} to train \textit{FaaSRM}\xspace with 4 epochs per surrogate optimization and a 0.2 clip threshold \cite{ppo2}. We update the policy network parameters using the AdamW optimizer \cite{adamw} with a learning rate of 0.001.
For simulation, we train \textit{FaaSRM}\xspace over 5,000 episodes. For the OpenWhisk experiment, we train \textit{FaaSRM}\xspace for 500 episodes; the OpenWhisk training takes about 120 hours in total. We restart the OpenWhisk platform before each training episode. Figure~\ref{fig:loss} shows the learning curves of \textit{FaaSRM}\xspace training in the simulation and the OpenWhisk experiment, respectively. The descending loss trendlines in both indicate that \textit{FaaSRM}\xspace gradually learns to make good resource management decisions for functions through training.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.45\linewidth}
\includegraphics[width=\textwidth]{figures/sim-loss.pdf}
\caption{\textit{FaaSRM}\xspace training in simulation through 5000 episodes.}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\includegraphics[width=\textwidth]{figures/openwhisk-loss.pdf}
\caption{\textit{FaaSRM}\xspace training in OpenWhisk through 500 episodes.}
\end{subfigure}
\vspace{-0.1in}
\caption{Learning curves of \textit{FaaSRM}\xspace training in simulation and OpenWhisk evaluation.}
\vspace{-0.2in}
\label{fig:loss}
\end{figure} |
\section{Introduction}
Many subsurface applications in porous media ranging from groundwater and contaminant hydrology to $\mathrm{CO_2}$ sequestration rely on physical models
\citep{middleton2015shale,choo2018cracking,yu2020poroelastic,kadeethum2019investigation,bouklas2012swelling,newell2017investigation,kadeethum2021locally}. These models often seek the solution of the governing partial differential equations (PDEs) in a domain of interest. For instance, coupled hydro-mechanical (HM) processes in porous media, in which fluid flow and solid deformation tightly interact in a time-dependent fashion, could be described by Biot's formulation of linear poroelasticity \citep{biot1941general}. These PDEs are often solved numerically using various techniques such as finite volume or finite element methods \citep{wheeler2014coupling,deng2017locally,liu2018lowest}, which is referred to as full order model (FOM) approaches. However, computational methods to handle field-scale systems require substantial computational resources, especially when discontinuities or nonlinearities arise in the solution
\citep{hansen2010discrete,hesthaven2016certified}. Therefore, in some instances, the FOM is not directly suitable to handle large-scale inverse problems, optimization, or even concurrent multiscale calculations in which an extensive set of simulations are required to be explored \citep{ballarin2019pod,hesthaven2016certified}. \par
A reduced order model (ROM) that aims to produce a low-dimensional representation of FOM could be an alternative to handling field-scale inverse problems, optimization, or real-time reservoir management \citep{schilders2008model,amsallem2015design,choi2019accelerating,choi2020gradient, mcbane2021component,yoon2009environmental,yoon2009numerical}. The ROM methodology primarily relies on the parameterization of a problem (i.e., repeated evaluations of a problem depending on parameters), which could correspond to physical properties, geometric characteristics, or boundary conditions \citep{ballarin2019pod,venturi2019weighted,hesthaven2016certified,hoang2021domain,copeland2021reduced,choi2020sns,choi2021space,carlberg2018conservative}. However, it is difficult to parameterize heterogeneous spatial fields of PDE coefficients such as heterogeneous material properties by a few parameters. Coupled processes such as HM processes commonly involve complex subsurface structures \citep{flemisch2018benchmarks,jia2017comprehensive, chang2020hydromechanical,chang2020operational,chang2021mitigating} where the corresponding spatially distributed material properties can span several orders of magnitude and include discontinuous features (e.g., fractures, vugs, or channels). Consequently, traditional projection-based ROM approaches might not be suitable for this type of problem as they commonly employ a proper orthogonal decomposition (POD) approach, which in turn will require a high dimensional reduced basis to capture most of the information at the expense of the computational cost \citep{kadeethum2021framework}. \par
Deep learning (DL), in particular, neural network-based supervised learning approaches, have been recently investigated for subsurface flow and transport problems \citep{zhu2018bayesian, mo2019deep, wen2021ccsnet, wen2021u, xu2021solution,kadeethum2021nonTH,wei2021aliased}. These DL approaches train various DL algorithms using training data generated by FOMs to map heterogeneous PDE coefficients (i.e., heterogeneous permeability and/or porosity fields) and injection scenarios (e.g., injection rates and the number of wells) into state variables such as pressure and CO$_{2}$ saturation. During the online phase (i.e., prediction), these trained models are used to predict state variables, to evaluate uncertainty quantification as fast forward models, or to estimate material properties as inverse models \citep{kadeethum2021framework}. In most reservoir problems, these DL models can also account for time-dependent PDEs, but the output of trained models is limited to the same time interval as the training data, and applications have mostly been flow and transport problems. The incorporation of physical constraints (e.g., equations, relationships, and known properties) into the learning process is actively studied to improve the accuracy and training efficiency of data-driven modeling.
\cite{kadeethum2021framework} presented a data-driven generative adversarial networks (GAN) framework that can parameterize heterogeneous PDE coefficients (i.e., heterogeneous fields), which has been demonstrated on steady-state cases for both forward and inverse problems. This GAN-based work could be considered an extension of regression in subsurface physics through GAN models such as \cite{zhong2019predicting, laloy2019gradient, lopez2021deep}, in which heterogeneous fields are also parameterized through a GAN model \citep{chan2020parametrization, hao2022siamese, guan2021reconstructing} and subsequently used to predict the state variables (pressure and displacement responses). In \cite{kadeethum2021framework}, the conditional GAN (cGAN) approach \citep{mirza2014conditional,isola2017image} was extended to heterogeneous fields for both the generator and the critic (i.e., discriminator), where the use of the Earth mover's distance through the Wasserstein loss (W loss) and a gradient penalty \citep{arjovsky2017wasserstein,gulrajani2017improved} improved the model accuracy and training stability compared to the traditional GAN approach. This improvement is attributable to the Earth mover's distance forcing the cGAN framework to approximate the training data distribution rather than a point-to-point mapping. However, the framework developed by \cite{kadeethum2021framework} is limited to steady-state solutions of the given PDEs.
Recently, \cite{ding2020continuous} developed the continuous cGAN (CcGAN) to condition the GAN model on continuous variables such as quantitative measures (e.g., the weight of each animal) rather than categorical data (e.g., cat or dog). For PDE problems, the CcGAN concept can also be used to condition on continuous variables (e.g., the time domain), enabling the solution of time-dependent PDEs. In this work, we build on our previous work \citep{kadeethum2021framework} by adopting the CcGAN concept to solve time-dependent PDEs. The new framework utilizes element-wise addition or conditional batch normalization \citep{de2017modulating} to incorporate the time domain in both the training and prediction processes.
As presented in Figure \ref{fig:novelty}, our model treats the time domain as a continuous variable. Therefore, it can handle training data that contain different time-step resolutions. Furthermore, we can predict the continuous-time response without being limited to the time instances that appear in the training data. This design also provides the flexibility to incorporate other parameterized continuous variables (e.g., Young's modulus, boundary conditions) as parameters to our framework.
\begin{figure}[!ht]
\centering
\includegraphics[width=13.5cm,keepaspectratio]{pictures/ccgan_intro.png}
\caption{The main characteristics of this model. Our model treats the time domain as a continuous variable, which means our training data could have different time-steps. Moreover, during prediction, our model can predict responses at time-steps that do not exist in the training data. This model could be extended to include any continuous variable (e.g., Young's modulus) in the same way it treats the time domain.}
\label{fig:novelty}
\end{figure}
The CcGAN approach to solving time-dependent PDEs in this work is uniquely employed to develop a data-driven surrogate model given highly heterogeneous permeability fields generated from two known distributions. The CcGAN performance will be evaluated by comparing the CcGAN-based results with FOM-based solutions, highlighting the computational accuracy and efficiency in heterogeneous permeability fields. The rest of the manuscript is summarized as follows. We begin with the model description and CcGAN architecture in Section \ref{sec:method}. The two variants of the framework, including element-wise addition or conditional batch normalization to incorporate the time domain, are also discussed. We then illustrate our ROM framework using three numerical examples in Section \ref{sec:numer_results}. Lastly, we conclude our findings in Section \ref{sec:conclusions}.
\section{Methodology}\label{sec:method}
\subsection{General governing equations}
We first present a general framework with a parameterized system of time-dependent PDEs; as a demonstration case, we then focus on linear poroelasticity to represent coupled solid deformation and fluid diffusion in porous media with highly heterogeneous permeability fields. A parameterized system of time-dependent PDEs reads as follows
\begin{equation} \label{eq:gen_pdes}
\begin{split}
\bm{F}\left( \bm{X}; t^n, \bm{\mu}^{(i)}\right) = \bm{0} &\text { \: in \: } \Omega, \\
\bm{X} =\bm{f}_{D} &\text { \: on \: } \partial \Omega_{D},\\
- \nabla \bm{X} \cdot \mathbf{n}=\bm{f}_N &\text { \: on \: } \partial \Omega_{N}, \\
\bm{X}=\bm{X}_{0} &\text { \: in \: } \Omega \text { at } t^n = 0.
\end{split}
\end{equation}
\noindent
where $\bm{F}\left( \cdot \right)$ corresponds to the system of time-dependent PDEs, $\Omega \subset \mathbb{R}^{n_d}$ (${n_d} \in \{1,2,3\}$) denotes the computational domain, and $\partial \Omega_{D}$ and $\partial \Omega_{N}$ denote the Dirichlet and Neumann boundaries, respectively. $\bm{f}_{D}$ and $\bm{f}_N$ are prescribed values on $\partial \Omega_{D}$ and $\partial \Omega_{N}$, respectively. $\bm{X}_{0}$ is the initial value of $\bm{X}$. The time domain $\mathbb{T} = \left(0, \tau\right]$ is partitioned into $N^t$ subintervals such that $0 =: t^{0}<t^{1}<\cdots<t^{N^t} := \tau$. We denote by $t^{n} \in \mathbb{T}$ the $n$th time-step, $n\in[0,N^t]$. $\bm{X}$ is a set of scalar ($\bm{X} \in \mathbb{R}$) or tensor-valued (e.g., $\bm{X} \in
\mathbb{R}^{n_d}\,\,\mathrm{or}\,\,\mathbb{R}^{n_d}\times\mathbb{R}^{n_d}$) generalized primary variables. The parameter domain $\mathbb{P}$ consists of $\mathrm{M}$ members, i.e., $\bm{\mu}^{(1)}$, $\bm{\mu}^{(2)}$, $\dots$, $\bm{\mu}^{(\mathrm{M-1})}$, $\bm{\mu}^{(\mathrm{M})}$, where $\bm{\mu}^{(i)}$ could be any instance of $\bm{\mu}$ given $i = 1, \dots, \mathrm{M}$. $\bm{\mu}^{(i)}$ could be a scalar ($\bm{\mu}^{(i)} \in \mathbb{R}$) or tensor-valued (e.g., $\bm{\mu}^{(i)} \in
\mathbb{R}^{n_d}\,\,\mathrm{or}\,\,\mathbb{R}^{n_d}\times\mathbb{R}^{n_d}$) generalized parameter. We emphasize that $\bm{X}$ is an exact solution of $\bm{F}\left( \bm{X}; t^n, \bm{\mu}^{(i)}\right)$ and ${\bm{X}_h}$ is an approximation obtained from the FOM. In general, $\bm{\mu}^{(i)}$ could correspond to physical properties, geometric characteristics, or boundary conditions at any given time $t$. In this study, we limit our interest to approximating the primary variables $\bm{X}_h$ that solve the system of PDEs given generalized parameters $\bm{\mu}$, such as heterogeneous permeability fields that are constant in time. \par
\subsection{Framework development}
As in a conceptual schematic (Figure \ref{fig:intro}), we train our framework using $\bm{X}_h$ obtained from FOM to deliver $\widehat{\bm{X}_h}$ with acceptable accuracy and high computational efficiency. The proposed framework consists of (1) the offline phase starting from data generation of permeability fields and $\bm{X}_h$ to training of our proposed CcGAN and (2) the online phase of predicting $\widehat{\bm{X}_h}$ as presented in Figure \ref{fig:intro}b. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=13.5cm,keepaspectratio]{pictures/time_cgan_sum.png}
\caption{The main idea and procedures taken in developing the framework is shown. Gen. represents generator, and cri. is critic. In $\bm{\mathrm{(a)}}$, $\bm{X}$ is an exact solution with given $t^n$ and $\bm{\mu}^{(i)}$, $\bm{X}_h$ is an approximation of $\bm{X}$ by FOM, and $\widehat{\bm{X}_h}$ is an approximation of $\bm{X}$. $\bm{\mu}^{(i)}$ here represents a heterogeneous permeability field. Our goal is to develop a proxy that could provide $\widehat{\bm{X}_h}$ that is as close as possible to $\bm{X}_h$, but requires a much cheaper computational cost. In $\bm{\mathrm{(b)}}$, we illustrate procedures taken to develop a data-driven solution of time-dependent coupled hydro-mechanical processes using continuous conditional generative adversarial networks. }
\label{fig:intro}
\end{figure}
\subsubsection{Offline stage}
The first step is the initialization of the training, validation, and test sets of parameters ($\bm{\mu}_\mathrm{training}$, $\bm{\mu}_\mathrm{validation}$, and $\bm{\mu}_\mathrm{test}$) of cardinality $\mathrm{M}_\mathrm{training}$, $\mathrm{M}_\mathrm{validation}$, and $\mathrm{M}_\mathrm{test}$, respectively. We illustrate only $\bm{\mu}$ and $\bm{\mu}_\mathrm{test}$ in Figure \ref{fig:intro} and omit the subscript of training in $\bm{\mu}_\mathrm{training}$ and $\mathrm{M}_\mathrm{training}$ hereinafter for the sake of brevity. We emphasize that $\bm{\mu}$ $\cap$ $\bm{\mu}_\mathrm{validation}$ $=$ $\varnothing$, $\bm{\mu}$ $\cap$ $\bm{\mu}_\mathrm{test}$ $=$ $\varnothing$, and $\bm{\mu}_\mathrm{validation}$ $\cap$ $\bm{\mu}_\mathrm{test}$ $=$ $\varnothing$. Here, $\bm{\mu}$, $\bm{\mu}_\mathrm{validation}$, and $\bm{\mu}_\mathrm{test}$ represent physical properties, but they could also encode geometric characteristics or boundary conditions. In this work, we follow \cite{kadeethum2021framework} and use $\bm{\mu}$, $\bm{\mu}_\mathrm{validation}$, and $\bm{\mu}_\mathrm{test}$ to represent collections of spatially heterogeneous scalar coefficients, more specifically heterogeneous permeability fields, as described in Section \ref{sec:data_generation} on data generation.
In the second step, we query the FOM, which provides a solution in a finite-dimensional setting, for each parameter $\bm{\mu}^{(i)}$ in the training set. Throughout this study, for the sake of simplicity, we use a uniform time-step, which leads to each query of the FOM having the same number of time-steps $N^t$. However, as presented in Figure \ref{fig:novelty}, our framework can handle cases where adaptive time-stepping is required, for instance, advection-diffusion problems. The same operations follow for each $\bm{\mu}_\mathrm{validation}$ and $\bm{\mu}_\mathrm{test}$ in the validation and test sets. This work focuses on the linear poroelasticity equations and demonstrates our proposed framework with highly heterogeneous permeability fields. The FOM is used to approximate the primary variables $\bm{X}_h$, which correspond to the bulk displacement ($\bm{u}_h$) and fluid pressure ($p_h$) fields at each time-step $t^n$, given the parameter field $\bm{\mu}^{(i)}$ (in this case, a permeability field) as input. A detailed description is given in Appendix \ref{sec:prob_description}. \par
In the third step, the ROM is constructed by training on the data generated from the FOM, where the inputs to the model are $t^n$ and $\bm{\mu}^{(i)}$, and the output is $\bm{X}_{h}$ (i.e., $\bm{u}_h$ or $p_h$) for the given $t^n$ and $\bm{\mu}^{(i)}$. In this study, we build a separate model for each primary variable ($\bm{u}_h$ and $p_h$), although both primary variables could be trained together in a single model. A key aspect of this work is to apply the CcGAN image-to-image translation framework to time-dependent PDEs by adapting the concepts of a naive label input (NLI) and an improved label input (ILI) proposed by \cite{ding2020continuous} to the framework developed by \cite{kadeethum2021framework}. The proposed framework consists of a \emph{generator} and a \emph{critic}, where two types of architecture for the \emph{generator} with a similar \emph{critic} architecture are presented (Figure \ref{fig:model}).
The first one uses the NLI concept (i.e., the NLI model) by introducing a temporal term ($t^{n} \in \mathbb{T}$) to the generator's bottleneck using element-wise addition. The details of the architecture can be found in Table \ref{tab:unet_model1} and Listing \ref{list:nli}. The second one adopts the ILI concept (i.e., the ILI model) by injecting the temporal term into all layers inside the generator through conditional batch normalization \citep{de2017modulating}. However, in contrast to \cite{ding2020continuous} and \cite{de2017modulating}, our $t^{n}$ is not categorical data (i.e., not a class tag ranging from zero to nine) but a continuous variable (i.e., $t^{n} \in \mathbb{T}$). Hence, we replace the embedding layers with an artificial neural network (ANN). To elaborate, each conditional batch normalization is composed of one ANN and one batch-normalization layer. Each ANN consists of one input layer, three hidden layers, and one output layer; the input layer and each hidden layer are subjected to the tanh activation function. In this way, each $t^{n}$ is communicated to every layer of our generator through the conditional batch normalization concept; see Listing \ref{list:cbn} for the implementation. The details of the ILI architecture can be found in Table \ref{tab:unet_model2} and Listing \ref{list:ili}. \par
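To make this mechanism concrete, the sketch below shows a minimal PyTorch module in which a small ANN maps the normalized time to per-channel scale and shift parameters; the layer widths are illustrative, and the exact architecture follows Listing \ref{list:cbn}.
\begin{lstlisting}[language=Python]
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    # Batch normalization whose affine parameters are predicted from a
    # continuous condition (the normalized time t^n) by a small ANN.
    def __init__(self, num_features, hidden=64):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.mlp = nn.Sequential(            # tanh on input and hidden layers
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * num_features),  # output layer: gamma, beta
        )

    def forward(self, x, t):                 # x: [B, C, H, W], t: [B, 1]
        gamma, beta = self.mlp(t).chunk(2, dim=1)
        gamma = gamma.view(x.size(0), -1, 1, 1)
        beta = beta.view(x.size(0), -1, 1, 1)
        return (1.0 + gamma) * self.bn(x) + beta
\end{lstlisting}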
\begin{figure}[!ht]
\centering
\includegraphics[width=13.5cm,keepaspectratio]{pictures/time_cgan_model.png}
\caption{ROM for time-dependent PDEs using continuous conditional generative adversarial networks (CcGAN). We note that the critic is similar for both models (generator with NLI and generator with ILI).}
\label{fig:model}
\end{figure}
The critic, similar for both NLI and ILI, uses the time $t^n$, the parameter $\bm{\mu}^{(i)}$, and a primary variable ($\bm{u}_h$ or $p_h$) as its inputs. The output is a patch score to which an inner product is added; the inner product is calculated from two linear layers and the output of the last ($4^{\mathrm{th}}$) contracting block of the critic. To elaborate, the parameter ($\bm{\mu}^{(i)}$) and the primary variable ($\bm{u}_h$ or $p_h$) are fed into the $1^{\mathrm{st}}$ convolutional layer of the critic, while the time ($t^n$) is injected into the model using the $2^{\mathrm{nd}}$ linear layer shown in Figure \ref{fig:model}. The output of the $4^{\mathrm{th}}$ contracting block of the critic is passed through the $1^{\mathrm{st}}$ linear layer shown in Figure \ref{fig:model}, and an inner product is taken with the output of the $2^{\mathrm{nd}}$ linear layer. The result of this inner product is then added (element-wise) to the patch score presented in Figure \ref{fig:model}. The architecture of the critic can be found in Table \ref{tab:disc} and Listing \ref{list:critic}. \par
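A condensed sketch of how the final critic score could be assembled from these pieces is given below; the tensor shapes follow Figure \ref{fig:model} only schematically, and the layer names are placeholders.
\begin{lstlisting}[language=Python]
import torch

def final_critic_score(patch_score, features, t, linear_feat, linear_time):
    # patch_score: [B, C, 8, 8], output of the last convolutional layer
    # features:    [B, F], flattened output of the 4th contracting block
    # t:           [B, 1], normalized time
    proj = (linear_feat(features) * linear_time(t)).sum(dim=1)  # inner product
    return patch_score + proj.view(-1, 1, 1, 1)  # element-wise addition
\end{lstlisting}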
To train both the generator and the critic, we normalize $t^n$, $\bm{\mu}^{(i)}$, and the primary variables ($\bm{X}_{h}$, i.e., $\bm{u}_h$ and $p_h$ in this paper) to the range $[0, 1]$. The Wasserstein (W) loss \citep{arjovsky2017wasserstein,gulrajani2017improved} is used since it has been shown to provide the best results when building data-driven frameworks for PDEs \citep{kadeethum2021framework}. In short, this implementation encourages the model to approximate the training data distribution instead of performing a point-to-point mapping. This improves the generalization of our model, which is essential as we deal with heterogeneous permeability fields. The W loss is expressed as \par
\begin{equation} \label{eq:total_min_max_add_wloss}
\min _{G} \max _{C} \left[ \ell_{a} +\lambda_{{r}}\ell_r +\lambda_{{p}}\wp_p \right].
\end{equation}
\noindent
Here, $G$ and $C$ are short for generator and critic, respectively, and $\ell_{a}$ is the Earth mover's distance defined as
\begin{equation}
\ell_{a} = \frac{1}{\mathrm{B}} \sum_{i = 1}^{\mathrm{B}} C(t^i,\bm{\mu}^{(i)},{\bm{X}_{h}}_i) -\frac{1}{\mathrm{B}} \sum_{i = 1}^{\mathrm{B}} C\left(t^i,\bm{\mu}^{(i)},\widehat{{\bm{X}_{h}}_i} \right),
\end{equation}
\noindent
where $C\left( \cdot \right)$ is the final score of the critic given $t^i$, $\bm{\mu}^{(i)}$, and ${\bm{X}_{h}}_i$ or $\widehat{{\bm{X}_{h}}_i}$, as shown in Figure \ref{fig:model}, and $\mathrm{B}$ is the batch size, set to $\mathrm{B}=4$. The Earth mover's distance measures the distance between the output of the model and the training data, which helps our model generalize its predictions. Additionally, $\lambda_r$ is a user-defined penalty constant that we set to $\lambda_r=500$, and the reconstruction error term $\ell_r$ is given by
\begin{equation} \label{eq:l1_loss}
\ell_r= \frac{1}{\mathrm{B}} \sum_{i = 1}^{\mathrm{B}} \left|\widehat{{\bm{X}_{h}}_{i}}-{\bm{X}_{h}}_{i}\right|.
\end{equation}
\noindent
$\lambda_{{p}}$ denotes a gradient penalty constant set to 10 throughout this study, and $\wp_p$ is the gradient penalty regularization. The latter is used to enforce Lipschitz continuity of the critic, i.e., the Euclidean norm of the critic's gradient is at most one, and it reads
\begin{equation}
\wp_p = \frac{1}{\mathrm{B}} \sum_{i = 1}^{\mathrm{B}}\left(\left\|\nabla C\left(t^i, \bm{\mu}^{(i)}, \Bar{{\bm{X}_{h}}}_i\right)\right\|_{2}-1\right)^{2},
\end{equation}
\noindent
where $\| \cdot \|_{2}$ is the L2 (Euclidean) norm. This term improves the stability of the training by limiting the step taken when updating the trainable parameters (weight matrices $\mathbf{W}$ and biases $\bm{b}$). The term $\Bar{{\bm{X}_{h}}}_i$ is an interpolation between $\widehat{{\bm{X}_{h}}}_i$ and ${\bm{X}_{h}}_i$, defined by
\begin{equation}
\Bar{{\bm{X}_{h}}}_i = {\epsilon}_i {{\bm{X}_{h}}}_i + (1 - {\epsilon}_i)\widehat{{\bm{X}_{h}}}_i.
\end{equation}
\noindent
We randomly select $\epsilon_i$ for each $\Bar{{\bm{X}_{h}}}_i$ from a uniform distribution on the interval $[0, 1)$. We use the adaptive moment estimation (ADAM) algorithm \citep{kingma2014adam} to train the framework (i.e., to update the weight matrices $\mathbf{W}$ and biases $\bm{b}$). The learning rate ($\eta$) is calculated as \citep{loshchilov2016sgdr}
\begin{equation}
\eta_{c}=\eta_{\min }+\frac{1}{2}\left(\eta_{\max }-\eta_{\min }\right)\left(1+\cos \left(\frac{\mathrm{step_c}}{\mathrm{step_f}} \pi\right)\right)
\end{equation}
\noindent
where $\eta_{c}$ is the learning rate at step $\mathrm{step_c}$, $\eta_{\min }$ is the minimum learning rate ($1 \times 10^{-16}$), $\eta_{\max }$ is the maximum (initial) learning rate ($1 \times 10^{-4}$), $\mathrm{step_c}$ is the current step, and $\mathrm{step_f}$ is the final step. We note that each step refers to one back-propagation pass, i.e., one update of both the generator's and the critic's parameters ($\mathbf{W}$ and $\bm{b}$). \par
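For concreteness, the sketch below condenses the loss terms of Eq. \eqref{eq:total_min_max_add_wloss} and the cosine learning-rate schedule into PyTorch-style functions. It is a simplified illustration (signs are arranged for gradient-descent minimization, and batching and optimizer bookkeeping are omitted) rather than the released training script.
\begin{lstlisting}[language=Python]
import math
import torch

def gradient_penalty(critic, t, mu, X_real, X_fake):
    # wp_p: penalize deviations of the critic's gradient norm from one
    # at samples interpolated between real and generated fields.
    B = X_real.size(0)
    eps = torch.rand(B, 1, 1, 1, device=X_real.device)  # epsilon_i ~ U[0, 1)
    X_bar = (eps * X_real + (1.0 - eps) * X_fake).requires_grad_(True)
    score = critic(t, mu, X_bar)
    grad, = torch.autograd.grad(score.sum(), X_bar, create_graph=True)
    return ((grad.reshape(B, -1).norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(critic, t, mu, X_real, X_fake, lam_p=10.0):
    X_fake = X_fake.detach()  # freeze the generator during the critic update
    # Negative of l_a, so that minimizing this loss maximizes l_a
    l_a_neg = critic(t, mu, X_fake).mean() - critic(t, mu, X_real).mean()
    return l_a_neg + lam_p * gradient_penalty(critic, t, mu, X_real, X_fake)

def generator_loss(critic, t, mu, X_real, X_fake, lam_r=500.0):
    l_adv = -critic(t, mu, X_fake).mean()  # adversarial term
    l_r = (X_fake - X_real).abs().mean()   # reconstruction term l_r
    return l_adv + lam_r * l_r

def cosine_lr(step_c, step_f, eta_min=1e-16, eta_max=1e-4):
    # Cosine-annealed learning rate (Loshchilov and Hutter, 2016)
    return eta_min + 0.5 * (eta_max - eta_min) * (1.0 + math.cos(math.pi * step_c / step_f))
\end{lstlisting}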
\subsubsection{Online stage}
For the fourth step, we use the \emph{trained} generator to predict $\widehat{\bm{X}_{h}}$ given $t^n$ and $\bm{\mu}_\mathrm{validation}$ or $\bm{\mu}_\mathrm{test}$ for the parameter instances belonging to the validation and test sets. To avoid over-fitting, we first use $\bm{\mu}_\mathrm{validation}$ to evaluate our framework as a function of the epoch. Subsequently, we select the model (fixed $\mathbf{W}$ and $\bm{b}$ at a certain epoch) that provides the best accuracy on $\bm{\mu}_\mathrm{validation}$ to test against $\bm{\mu}_\mathrm{test}$. To elaborate, we train our model for 50 epochs, evaluate it against the validation set as a function of the epoch, and then select the set of $\mathbf{W}$ and $\bm{b}$ at the epoch with the best accuracy. Other hyper-parameters, including the number of convolutional neural network (CNN) layers, the number of hidden layers, the CNN parameters, and the initialization of the framework, follow the study in \cite{kadeethum2021framework}. \par
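Schematically, this model-selection procedure can be written as the loop below, where \texttt{train\_one\_epoch} and \texttt{validate} are hypothetical helpers standing in for one pass of CcGAN training and for the validation-set evaluation, respectively.
\begin{lstlisting}[language=Python]
import copy

best_rmse, best_state = float("inf"), None
for epoch in range(50):
    train_one_epoch(generator, critic)         # hypothetical helper
    rmse = validate(generator, mu_validation)  # hypothetical helper
    if rmse < best_rmse:                       # keep the best epoch's weights
        best_rmse = rmse
        best_state = copy.deepcopy(generator.state_dict())
generator.load_state_dict(best_state)          # fixed W and b used for mu_test
\end{lstlisting}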
\begin{remark} \label{remark:1}
As presented in Figure \ref{fig:novelty}, by treating the time domain as a continuous variable, our framework can be trained using training data that contain different time-steps. Furthermore, during an online inquiry, we simply interpolate at the time of interest within the time domain provided during the training phase, whether or not that time exists in the training data. This characteristic is an asset because our framework is not bound by a time-stepping scheme, unlike traditional numerical analysis or other data-driven frameworks \citep{zhu2018bayesian, mo2019deep, wen2021ccsnet, xu2021solution}. Our framework can evaluate quantities of interest at any time required. For instance, we may be interested in a pressure field at one, two, and three hours for a given permeability field. To achieve that using the FOM, one may need to march through many \textit{intermediate} time-steps, but our framework can evaluate any particular time immediately within the time domain covered by the training data.
\end{remark}
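As an illustration of such an online inquiry, the snippet below sketches the evaluation of the trained generator at arbitrary times inside the training time domain; \texttt{generator} and \texttt{mu\_i} are hypothetical placeholders for the trained model and a given permeability field, and the call signature is assumed.
\begin{lstlisting}[language=Python]
import torch

# Times of interest in seconds; they need not coincide with the training
# snapshots, only lie within the training time domain (here (0, 250] s).
times = torch.tensor([12.5, 100.0, 237.5]) / 250.0  # normalized to [0, 1]
with torch.no_grad():
    preds = [generator(mu_i, t.view(1, 1)) for t in times]
\end{lstlisting}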
\section{Data generation}\label{sec:data_generation}
We utilize a discontinuous Galerkin finite element (FE) model of linear poroelasticity developed in \cite{kadeethum2020enriched, kadeethum2020finite} to generate the training, validation, and test sets; see Figure \ref{fig:intro} - initialization. The geometry, boundary conditions, and input parameters are similar to those used in \cite{kadeethum2021framework}, where a steady-state solution of linear poroelasticity is studied; in this work, however, the temporal output of pressure and displacement is investigated, resulting in the dynamic behavior of the pressure ($p_h$) and displacement ($\bm{u}_h$) as well as the pore volume. The mesh and boundary conditions over the square domain used in this work are presented in Figure \ref{fig:mesh}. We enforce constant pressures of 0 and 1000 \si{Pa} at the top and bottom boundaries, respectively, to allow fluid to flow from the bottom to the top, while no-flow boundaries are imposed on both the left and right sides. Furthermore, we compress the medium with a normal traction of 1000 \si{Pa} applied at the top boundary. We fix the normal displacement to zero \si{m} on the left, right, and bottom boundaries. The initial pressure is 1000 \si{Pa}, and the initial displacement is calculated based on the equilibrium state. \par
To obtain a set of parameters $\bm{\mu}$ corresponding to heterogeneous $\bm{k}$ fields, we focus on two types of highly heterogeneous $\bm{k}$ fields generated using (1) a Zinn \& Harvey transformation \citep{zinn2003good} and (2) a bimodal transformation \citep{muller2020}. The details of the generation of the $\bm{k}$ fields are available in \cite{kadeethum2021framework}. Briefly, the $\bm{k}$ field from the Zinn \& Harvey transformation has a wider range of $\bm{k}$ values with thinner high-permeability pathways. This feature represents highly heterogeneous sedimentary aquifers with preferential flow pathways, such as the MADE site in Mississippi \citep{zinn2003good} and the Culebra dolomite developed for the Waste Isolation Pilot Plant (WIPP) project in New Mexico \citep{yoon2013parameter}. In contrast, the $\bm{k}$ field from the bimodal transformation has a narrower range of $\bm{k}$ values with wider high-permeability pathways, which is a good representation of sandstone reservoirs with an iron inclusion, for example, the Chadormalu reservoirs in Yazd province, Iran \citep{daya2015application}. A few examples of $\bm{k}$ fields from both transformations are shown in Figure \ref{fig:pics_of_test_com}. \par
In this work, three examples are considered. Examples 1 and 2 use $\bm{k}$ fields from the Zinn \& Harvey and bimodal distributions, respectively, while Example 3 uses the two types of $\bm{k}$ fields together. Note that we employ unstructured grids in the finite element solver, whereas our framework in this study requires a structured data set. Thus, we interpolate the FE result $p_h$ to structured grids using cubic spline interpolation. We then replace the FOM dimension ${N}_h^p$, associated with the unstructured grid, with a pair $(\widetilde{N}_h^p, \widetilde{N}_h^p) = (128, 128)$, corresponding to the structured grid. The same procedure is carried out for the displacement field $\bm{u}_h$. \par
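A minimal sketch of this interpolation step, assuming SciPy and a unit-square domain, could read as follows.
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.interpolate import griddata

# Stand-ins for the unstructured FE data: node coordinates and nodal p_h
rng = np.random.default_rng(0)
points = rng.random((5000, 2))                 # (N, 2) FE node coordinates
values = np.sin(points[:, 0]) * points[:, 1]   # (N,)   nodal pressures

xi = np.linspace(0.0, 1.0, 128)                # assuming a unit-square domain
Xg, Yg = np.meshgrid(xi, xi)
p_structured = griddata(points, values, (Xg, Yg), method="cubic")  # (128, 128)
\end{lstlisting}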
For simplicity, the FE solver employs a fixed number of $N^t$ for all $\bm{k}^{(i)} \in \bm{k}$. Our time domain is set as $\mathbb{T} = \left(0, 250 \right]$ \si{seconds}, and $N^t = 10$, which leads to $\Delta t = 25$ \si{seconds}.
The total size of the data set is given by $\mathrm{M}_\mathrm{i} N^t$, where $\mathrm{i}$ refers to the training, validation, or test set. While we investigate the effect of $\mathrm{M}$ on the data-driven model accuracy, $\mathrm{M}_\mathrm{validation} = \mathrm{M}_\mathrm{test} = 500$ is fixed. Samples of the test set of $\bm{k}^{(i)}$, $p_h\left( t^n, \bm{k}^{(i)}
\right)$, and $\bm{u}_h\left( t^n, \bm{k}^{(i)} \right)$ are presented in Figure \ref{fig:pics_of_test_com}. We note that the difference (DIFF) between solutions produced by the FOM (FEM in this case) and the ROM (CcGAN in this case) is calculated by
\begin{equation}\label{eq:diff}
\operatorname{DIFF}_{\bm{X}}(t^n, \bm{\mu}_\mathrm{test}^{(i)})= \left|\bm{X}_h(:, t^n, \bm{\mu}_\mathrm{test}^{(i)}) - \widehat{\bm{X}}_h(:, t^n, \bm{\mu}_\mathrm{test}^{(i)})\right|.
\end{equation}
\noindent
To reiterate, $\bm{X}_h$ represents $p_h$ and $\bm{u}_h$, $\widehat{\bm{X}}_h$ represents $\hat{p}_h$ and $\hat{\bm{u}}_h$, and $\bm{\mu}_\mathrm{test}^{(i)}$ is the $\bm{k}_\mathrm{test}^{(i)}$ field. We also use the relative root mean square error (relative RMSE) between $x_{i}$ (FOM) and $\hat{x}_{i}$ (ROM) to evaluate the model performance, defined as
\begin{equation}\label{eq:rmse}
\mathrm{relative \: \: RMSE}=\frac{\sqrt{\frac{\sum_{i=1}^{\mathrm{M}}\left(x_{i}-\hat{x}_{i}\right)^{2}}{\mathrm{M}}}}{{\sqrt{\frac{\sum_{i=1}^{\mathrm{M}} x_{i}^2} {\mathrm{M}}}}}, \quad x_i \in \bm{X}_h \: \mathrm{and} \: \hat{x}_{i} \in \widehat{\bm{X}_h}.
\end{equation}
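Both metrics are straightforward to evaluate; a NumPy sketch consistent with Eqs. \eqref{eq:diff} and \eqref{eq:rmse} is given below.
\begin{lstlisting}[language=Python]
import numpy as np

def diff_field(X_fom, X_rom):
    # Point-wise absolute difference (DIFF)
    return np.abs(X_fom - X_rom)

def relative_rmse(X_fom, X_rom):
    # Ratio of the RMS error to the RMS of the FOM field
    x, x_hat = np.ravel(X_fom), np.ravel(X_rom)
    return np.sqrt(np.mean((x - x_hat) ** 2)) / np.sqrt(np.mean(x ** 2))
\end{lstlisting}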
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio, height=11.2cm]{pictures/test_set_rev.pdf}
\caption{Comparison of the pressure (a-d) and displacement (e-h) results between the FEM (FOM) and the CcGAN (ROM) given the permeability field. For the pressure results, (a-b) Zinn \& Harvey transformation cases at $t=50$ and $t=225$, and (c-d) bimodal transformation cases at $t=25$ and $t=250$. For the displacement results, (e-f) Zinn \& Harvey transformation cases at $t=50$ and $t=225$, and (g-h) bimodal transformation cases at $t=25$ and $t=250$. Each example is randomly selected at a different time and permeability field from the 1000 test sets to show that our model can evaluate at any time. The results are shown for the test set of the ILI framework trained with $\mathrm{M}$ = 20,000 examples (Example 3), where each of the Zinn \& Harvey (Example 1) and bimodal (Example 2) transformations contributes 10,000 examples, as presented in Table \ref{tab:rmse_test}. The fluid flows from the bottom to the top surface, and the medium is compressed from the top. The remaining surfaces (left, right, and bottom) have zero normal displacement. More details can be found in Appendix \ref{sec:prob_description}. Note that the best model used for the test set is selected based on the performance of the validation set and that the permeability ranges of the two transformations are different.}
\label{fig:pics_of_test_com}
\end{figure}
\section{Results and discussion}\label{sec:numer_results}
\subsection{Example 1: Zinn \& Harvey transformation}
Test cases of the first example, generated with the Zinn \& Harvey transformation, are shown in Figures \ref{fig:pics_of_test_com}a-b and e-f, including the $\bm{k}$ fields, FOM and ROM results, and DIFF fields for the pressure and displacement fields, respectively. Box plots of the relative RMSE values of pressure ($\hat{p}_h$) during training for different numbers of training samples are presented for the NLI and ILI models with the validation set in Figures \ref{fig:ex1_val_model1} and \ref{fig:ex1_val_model2}, respectively. As expected, the relative RMSE values improve over the epochs during training but reach a plateau around epochs 32-34. The model performance improves with an increasing number of training samples ($\mathrm{M}$). Figures \ref{fig:ex1_val_model1} and \ref{fig:ex1_val_model2} show that the ILI model performs slightly better than the NLI model. The behavior of the $\hat{\bm{u}}_h$ results is similar to that of $\hat{p}_h$ (results not shown). \par
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_1250_1F_nl.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2500_1F_nl.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_5000_1F_nl.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_10000_1F_nl.pdf}
\caption{Example 1: relative Root Mean Square Error (relative RMSE) of NLI for (a) $\mathrm{M} = 1250$, (b) $\mathrm{M} = 2500$, (c) $\mathrm{M} = 5000$, and (d) $\mathrm{M} = 10000$. These results are calculated based on the validation set (500 samples). We note that the black dots represent outliers, and each box covers the interval from the 25th to the 75th percentile, with the median (50th percentile) highlighted by a black line. The blue line and blue text represent the mean value.}
\label{fig:ex1_val_model1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_1250_1F_imp.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2500_1F_imp.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_5000_1F_imp.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_10000_1F_imp.pdf}
\caption{Example 1: relative Root Mean Square Error (relative RMSE) of ILI for (a) $\mathrm{M} = 1250$, (b) $\mathrm{M} = 2500$, (c) $\mathrm{M} = 5000$, and (d) $\mathrm{M} = 10000$. These results are calculated based on the validation set (500 samples). We note that the black dots represent outliers, and each box covers the interval from the 25th to the 75th percentile, with the median (50th percentile) highlighted by a black line. The blue line and blue text represent the mean value.}
\label{fig:ex1_val_model2}
\end{figure}
The best-trained model is tested against the test set. The distributions of $\hat{p}_h$ and $\hat{\bm{u}}_h$ for the selected test cases are shown in Figures \ref{fig:pics_of_test_com}a-b and e-f, respectively. The DIFF values are very low (i.e., less than one percent of the average field values). The relative RMSE results for both $\hat{p}_h$ and $\hat{\bm{u}}_h$ of the test set are provided in Table \ref{tab:rmse_test}, and they are very close to those of the validation set (Figures \ref{fig:ex1_val_model1} and \ref{fig:ex1_val_model2}). As in training, model performance improves with increasing $\mathrm{M}$. Moreover, ILI consistently performs better than NLI. The relative RMSE of displacement ($\hat{\bm{u}}_h$) is generally lower than that of pressure ($\hat{p}_h$), which is attributed to the relatively uniform response of the displacement fields compared to the pressure fields, as shown in Figure \ref{fig:pics_of_test_com}. Hence, the ROM can learn the solution more easily. \par
\begin{table}[!ht]
\centering
\caption{The relative RMSE (Eq. \eqref{eq:rmse}) results for the test data of the three example cases as a function of the number of training data ($\mathrm{M}$) for pressure and the magnitude of displacement (in parentheses). Each example is evaluated with both the NLI and ILI models.}
\begin{tabular}{|c|l|c|c|c|c|}
\hline
\multirow{9}[0]{*}{ \makecell{\textbf{Pressure} \\ (\textbf{Displacement})}} &\textbf{Example 1} & $\mathrm{M} = 1250$ & $\mathrm{M} = 2500$ & $\mathrm{M} = 5000$ & $\mathrm{M} = 10000$ \\
\cline{2-6} & NLI (\%) & \cellcolor[rgb]{ .973, .412, .42}4.63 (4.32) & \cellcolor[rgb]{ .98, .686, .694}3.24 (2.98) & \cellcolor[rgb]{ .988, .859, .871}2.34 (2.14) & \cellcolor[rgb]{ .988, .976, .988}1.74 (1.57) \\
\cline{2-6} & ILI (\%) & \cellcolor[rgb]{ .976, .427, .435}4.55 (4.13) & \cellcolor[rgb]{ .98, .702, .71}3.15 (2.78) & \cellcolor[rgb]{ .988, .867, .878}2.30 (2.03) & \cellcolor[rgb]{ .988, .988, 1}1.67 (1.33) \\
\cline{2-6} & \textbf{Example 2} & $\mathrm{M} = 1250$ & $\mathrm{M} = 2500$ & $\mathrm{M} = 5000$ & $\mathrm{M} = 10000$ \\
\cline{2-6} & NLI (\%) & \cellcolor[rgb]{ .976, .443, .451}3.60 (3.60) & \cellcolor[rgb]{ .98, .671, .682}2.61 (2.51) & \cellcolor[rgb]{ .988, .851, .863}1.83 (1.47) & \cellcolor[rgb]{ .988, .984, .996}1.24 (1.10) \\
\cline{2-6} & ILI (\%) & \cellcolor[rgb]{ .973, .412, .42}3.73 (3.37) & \cellcolor[rgb]{ .98, .686, .694}2.55 (2.07) & \cellcolor[rgb]{ .988, .886, .898}1.67 (1.26) & \cellcolor[rgb]{ .988, .988, 1}1.22 (0.83) \\
\cline{2-6} & \textbf{Example 3} & $\mathrm{M} = 2500$ & $\mathrm{M} = 5000$ & $\mathrm{M} = 10000$ & $\mathrm{M} = 20000$ \\
\cline{2-6} & NLI (\%) & \cellcolor[rgb]{ .973, .412, .42}3.36 (3.28) & \cellcolor[rgb]{ .984, .706, .718}2.31 (2.28) & \cellcolor[rgb]{ .988, .89, .902}1.65 (1.27) & \cellcolor[rgb]{ .988, .98, .992}1.32 (1.27) \\
\cline{2-6} & ILI (\%) & \cellcolor[rgb]{ .976, .494, .502}3.07 (2.74) & \cellcolor[rgb]{ .984, .725, .737}2.24 (2.15) & \cellcolor[rgb]{ .988, .894, .906}1.63 (1.18) & \cellcolor[rgb]{ .988, .988, 1}1.29 (1.05) \\
\hline
\end{tabular}%
\footnotesize{ Example 3: the total number $\mathrm{M}$ is the sum of the training data from Examples 1 and 2.}
\label{tab:rmse_test}%
\end{table}%
\subsection{Example 2: bimodal transformation}
The second example evaluates the model performance using $\bm{k}$ fields from the bimodal transformation, which have a narrow range of $\bm{k}$ values with wider high-permeability pathways. As in Example 1, selected test cases of $\bm{k}$ fields, FOM and ROM results, and DIFF fields are presented in Figures \ref{fig:pics_of_test_com}c-d and g-h. The box plots of the relative RMSE values of the validation set are presented in Figures \ref{fig:ex2_val_model1} and \ref{fig:ex2_val_model2}. Similar to Example 1, the model performance improves with increasing $\mathrm{M}$. Moreover, as the number of epochs increases, the model tends to provide more accurate results. Although there are some fluctuations of the relative RMSE values at a later training stage, the model accuracy tends to improve as the training progresses. Except for the case where $\mathrm{M} = 1250$, the ILI model provides better results than the NLI model. \par
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_1250_1F_nl_bm.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2500_1F_nl_bm.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_5000_1F_nl_bm.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_10000_1F_nl_bm.pdf}
\caption{Example 2: relative Root Mean Square Error (relative RMSE) of NLI for (a) $\mathrm{M} = 1250$, (b) $\mathrm{M} = 2500$, (c) $\mathrm{M} = 5000$, and (d) $\mathrm{M} = 10000$. These results are calculated based on the validation set (500 samples). We note that the black dots represent outliers, and each box covers the interval from the 25th to the 75th percentile, with the median (50th percentile) highlighted by a black line. The blue line and blue text represent the mean value.}
\label{fig:ex2_val_model1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_1250_1F_imp_bm.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2500_1F_imp_bm.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_5000_1F_imp_bm.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_10000_1F_imp_bm.pdf}
\caption{Example 2: relative Root Mean Square Error (relative RMSE) of ILI for (a) $\mathrm{M} = 1250$, (b) $\mathrm{M} = 2500$, (c) $\mathrm{M} = 5000$, and (d) $\mathrm{M} = 10000$. These results are calculated based on the validation set (500 samples). We note that the black dots represent outliers, and each box covers the interval from the 25th to the 75th percentile, with the median (50th percentile) highlighted by a black line. The blue line and blue text represent the mean value.}
\label{fig:ex2_val_model2}
\end{figure}
For the test set, the ILI model provides better accuracy than the NLI model except at $\mathrm{M} = 1250$ for $\hat{p}_h$ (Table \ref{tab:rmse_test}). It is noted that ILI always performs better than NLI for the state variable $\hat{\bm{u}}_h$. Moreover, the relative RMSE results of $\hat{\bm{u}}_h$ are always lower than those of $\hat{p}_h$, similar to Example 1. \par
The relative RMSE of Example 2 is slightly lower than that of Example 1 (Table \ref{tab:rmse_test}); however, the trend of the relative RMSE values between NLI and ILI is similar in both Examples 1 and 2. We will discuss the performance of NLI and ILI in the next sections. Since the $\bm{k}$ fields from the bimodal transformation have a narrower range and wider permeable pathways with less contrast compared to those from the Zinn \& Harvey transformation (see Figure \ref{fig:pics_of_test_com}), the corresponding pressure fields may have similar features. This can be seen in the DIFF distribution, where the DIFF values are larger along high-pressure-gradient regions in all pressure cases (Figure \ref{fig:pics_of_test_com}a-d). At $t=25$ in Example 2 (Figure \ref{fig:pics_of_test_com}c), high DIFF values are mostly located near the top boundary, where the boundary pressure is set to zero after the initial pressure of 1000 Pa. Over time, the DIFF distribution propagates as the pressure contrast migrates along the high-permeability regions (e.g., the DIFF fields at $t=250$ in Figure \ref{fig:pics_of_test_com}d). Compared to Example 1, where pressure gradients tend to be slightly more gradual (i.e., with a wider transition) and the high DIFF values are distributed over larger areas, the Example 2 cases show a higher contrast of pressure values along the pressure transition, and the high DIFF values tend to be distributed more locally (Figure \ref{fig:pics_of_test_com}a-d). \par
Although this comparison qualitatively shows the dependency of the model performance on the input and output characteristics, it is also well known that deep neural networks often need complex architectures to extract and learn high-frequency features, such as the high permeability contrasts and high pressure gradients in this work \citep{xu2019frequency}. A recent work by \cite{kim2021connectivity} transformed physical connectivity information of a high-contrast drainage network into multiple binary matrices, which improved the network generation using a deep convolutional GAN. However, the success rates of generating drainage networks with proper connectivity were relatively low. The CcGAN developed in this work shows that, although improving the prediction accuracy remains challenging, increasing the training data sets may provide a potential solution to this problem. However, increasing the training data sets also increases the required computational resources; this aspect is discussed later. For the displacement results (Figure \ref{fig:pics_of_test_com}e-h), the relative RMSE results in Example 2 (Table \ref{tab:rmse_test}) follow the same trend as the pressure results. The lower relative RMSE values of displacement compared to pressure also stem from the smoother displacement fields. It is noted that the relative RMSE values of the test set are similar to those of the validation set. \par
\subsection{Example 3: Combined Zinn \& Harvey and bimodal transformations}
In Example 3, permeability fields from both the Zinn \& Harvey and bimodal transformations are used to test the generalization ability of the proposed approach (Figures \ref{fig:pics_of_test_com}a-h). As discussed earlier, Example 3 can represent different types of heterogeneity with highly permeable pathways. The relative RMSE of the validation set for the pressure fields ($\hat{p}_h$) is presented in Figures \ref{fig:ex3_val_model1} and \ref{fig:ex3_val_model2}. Similar to Examples 1 and 2, the model accuracy improves with increasing $\mathrm{M}$, and we did not observe any over-fitting behavior. \par
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2f_2500_nl.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2f_5000_nl.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2f_10000_nl.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2f_20000_nl.pdf}
\caption{Example 3: relative Root Mean Square Error (relative RMSE) of NLI for (a) $\mathrm{M} = 2500$, (b) $\mathrm{M} = 5000$, (c) $\mathrm{M} = 10000$, and (d) $\mathrm{M} = 20000$. These results are calculated based on the validation set (500 samples). We note that the black dots represent outliers, and each box covers the interval from the 25th to the 75th percentile, with the median (50th percentile) highlighted by a black line. The blue line and blue text represent the mean value.}
\label{fig:ex3_val_model1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2f_2500.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2f_5000.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2f_10000.pdf}
\includegraphics[keepaspectratio, height=4.0cm]{pictures/rmse_p_2f_20000.pdf}
\caption{Example 3: relative Root Mean Square Error (relative RMSE) of ILI for (a) $\mathrm{M} = 2500$, (b) $\mathrm{M} = 5000$, (c) $\mathrm{M} = 10000$, and (d) $\mathrm{M} = 20000$. These results are calculated based on the validation set (500 samples). We note that the black dots represent outliers, and each box covers the interval from the 25th to the 75th percentile, with the median (50th percentile) highlighted by a black line. The blue line and blue text represent the mean value.}
\label{fig:ex3_val_model2}
\end{figure}
As presented in Table \ref{tab:rmse_test}, for both the pressure and displacement fields, the ILI model performs better than the NLI model, and the model with a higher $\mathrm{M}$ (i.e., more training data) provides better accuracy. Note that Example 3 has twice the $\mathrm{M}$ of the other two cases. Overall, the relative RMSE values in Example 3 are closer to those in Example 1 than to those in Example 2 for the same $\mathrm{M}$, indicating that the fields that are more challenging for ML training predominantly govern the model performance when the fields are combined. In addition, Example 3 tends to have higher relative RMSE values than Examples 1 and 2 for lower $\mathrm{M}$ (e.g., $\mathrm{M}$ = 2500 and 5000 for pressure and displacement, respectively). As $\mathrm{M}$ increases, however, Example 3 attains lower RMSE values than Example 1 and approaches Example 2, indicating that more training data improve the ML model to a certain degree. Although there may be more optimal hyperparameters for training both fields, these results demonstrate the general learning capability of the proposed models. The inclusion of a more challenging data set increases the generalization capability of the trained model. \par
\subsection{Computational costs}
A summary of the wall time used for each operation (i.e., steps 2 to 4 in Figure \ref{fig:intro}) is presented in Table \ref{tab:wall_time}; the time for step 1 (initialization) was negligible compared to the other steps. The FE model (FOM) was run on a single AMD Ryzen Threadripper
3970X, while training and testing of the CcGAN (ROM) models were carried out on a single GPU (NVIDIA Quadro RTX 6000). With the fixed number of time-steps $N^t = 10$ used throughout this study, each FOM simulation (per $\bm{\mu}^{(i)}$) takes about 40 seconds. Consequently, if we select $\mathrm{M} = 10000$ as an example, it would take about 400,000 seconds ($\approx$111 hrs) to build the training set. Training the model with $\mathrm{M} = 10000$ takes approximately 30 hours. During the online or prediction phase, however, the trained model provides its result within 0.001 seconds per testing pair $\left(t^n, \bm{\mu}^{(i)} \right)$. It should be noted that the trained ROM is not required to use the same number of time-steps $N^t$ as the FOM. Assuming the ROM also uses $N^t = 10$, it still provides a speed-up of about 10,000 times. One advantage of the ROM framework compared to the FOM is that it can also deliver the solution at any time, including times that do not exist in the training snapshots, since it is simply a nonlinear interpolation in the output space. This characteristic is an asset of the ROM in this work because it is not bound by any time-stepping constraints and can evaluate quantities of interest at any time required. For instance, we may be interested in the pressure and displacement fields at one, two, and three hours for a given $\bm{\mu}$. To achieve this with the FOM, one may need to march through many intermediate steps, whereas the ROM enables us to evaluate the solution at those three times only.
\begin{table}[!ht]
\centering
\caption{Comparison of the wall time (seconds) used for each operation presented in Figure \ref{fig:intro}. $\bm{\mu}$ is a set of parameterized spatial fields, and $\bm{\mu}_i \in \bm{\mu}$.}
\begin{tabular}{|l|c|c|c|}
\hline
& NLI & ILI & remark \\ \hline
Build FOM snapshots & 40 & 40 & per $\bm{\mu}_i$ for $N^t = 10$ \\ \hline
Train ROM with $\mathrm{M} = 1250$ & 12600 & 12600 & approximately 3.75 hours \\ \hline
Train ROM with $\mathrm{M} = 2500$ & 25200 & 25200 & approximately 7.5 hours \\ \hline
Train ROM with $\mathrm{M} = 5000$ & 50400 & 50400 & approximately 15 hours \\ \hline
Train ROM with $\mathrm{M} = 10000$ & 108000 & 108000 & approximately 30 hours \\ \hline
Train ROM with $\mathrm{M} = 20000$ & 216000 & 216000 & approximately 60 hours \\ \hline
Prediction & 0.001 & 0.001 & per testing $\left(t^n, \bm{\mu}_i \right) $ \\ \hline
\end{tabular}
\label{tab:wall_time}
\end{table}
\subsection{Prediction accuracy and subsurface physics applications}
With the three examples presented, we observe that the ILI model performs better than the NLI model in all cases except one (Table \ref{tab:rmse_test}). This finding is in good agreement with the classification problems in \cite{ding2020continuous}, which speculates that the ILI overcomes the label inconsistency of classification problems while the NLI cannot. Our reasoning for the outperformance of the ILI stems from continuous conditional batch normalization, which carries temporal information more consistently than the element-wise addition in the NLI model. In addition to the skip connections that are essential in transferring multiscale features between the input (permeability fields) and output (pressure and displacement fields) (see \cite{kadeethum2021framework}), the CcGAN provides a framework to account for temporal features over time, resulting in a better representation of temporal patterns. \par
One key aspect of many subsurface energy applications is the coupled process involving hydrogeology and geomechanics. Although ML-based data-driven modeling has been increasingly studied for reservoir modeling, most studies are still limited to uncoupled processes (e.g., \cite{lange2020machine,miah2020predictive}) or relatively homogeneous fields (e.g., \cite{zounemat2021ensemble,zhao2021reduced}). The CcGAN approach proposed in this study demonstrates its capability to handle coupled hydro-mechanical processes, with a relative RMSE of less than 2\% for the transient pressure and displacement responses in the worst case. The results also show that the relative RMSE of the ILI model can be improved to about 1\% with more training data, which can be acceptable given the uncertainty and observational error in subsurface systems. Additionally, our framework achieves up to $\approx$10,000x speed-up in the online (prediction) stage compared to the FE solver. Note that the computational advantage of ML-driven ROM models will increase further with an increasing number of degrees of freedom in the FE solver (e.g., three-dimensional and longer transient problems). This computational advantage and accuracy will enable real-time reservoir management and robust uncertainty quantification even for vast parameter spaces; at the same time, the ROM can be updated offline as necessary. It should be noted that the method presented here can be extended to incorporate more continuous variables into the system. For instance, besides the time domain, we could also add Young's modulus and the Poisson ratio to the CcGAN model. Furthermore, since this model is \emph{data-driven}, it is not limited to the coupled hydro-mechanical processes presented in this manuscript but is also applicable to other coupled processes, such as thermal-hydrological-mechanical-chemical (THMC) processes. \par
\section{Conclusions} \label{sec:conclusions}
This work presents a data-driven framework for solving a system of time-dependent partial differential equations, focusing explicitly on coupled hydro-mechanical processes in heterogeneous porous media. This framework can be used as a proxy for time-dependent coupled processes in heterogeneous porous media, which are challenging for classical model order reduction. Our framework is developed upon continuous conditional generative adversarial networks (CcGAN) composed of a U-Net generator and a patch-based critic. The model has two variations: (1) the time domain is introduced only at the generator's bottleneck using element-wise addition (i.e., NLI), and (2) the time domain is injected into all layers inside the generator through conditional batch normalization (i.e., ILI). The critic is similar for both models. Our approach is desirable because it does not require any cumbersome modifications of FOM source codes and can be applied to any existing FOM platform. In this regard, the CcGAN approach to solving time-dependent PDEs is uniquely employed to develop a data-driven surrogate model given highly heterogeneous permeability fields. We illustrate that our framework can efficiently and accurately approximate finite element results for a wide variety of permeability fields. Our results have a relative root mean square error of less than 2\% with 10,000 training samples. Additionally, the framework provides a speed-up of at least 10,000 times compared to a forward finite element solver. ILI delivers slightly better results than NLI without any observable additional computational cost. To this end, this framework will enable large-scale operations such as real-time reservoir management or uncertainty quantification with complex heterogeneous permeability fields, which are practically very difficult with FOM and traditional model order reduction.
\section{Acknowledgments}
TK and HY were supported by the Laboratory Directed Research and Development program at Sandia National Laboratories and US Department of Energy Office of Fossil Energy and Carbon Management, Science-Informed Machine Learning to Accelerate Real Time Decisions-Carbon Storage (SMART-CS) initiative.
DO acknowledges support from Los Alamos National Laboratory's Laboratory Directed Research and Development Early Career Award (20200575ECR).
YC acknowledges LDRD funds (21-FS-042) from Lawrence Livermore National Laboratory. Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344 (LLNL-JRNL-827590).
HV is grateful to the funding support from the U.S. Department of Energy (DOE) Basic Energy Sciences (LANLE3W1).
NB acknowledges startup support from the Sibley School of Mechanical and Aerospace Engineering, Cornell University.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA-0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
\section{Data and code availability}
We will release our CcGAN scripts and the training and testing data needed to reproduce all results in the manuscript through the Sandia National Laboratories software portal, a hub for GitHub-hosted open source projects \hfill \break (\url{https://github.com/sandialabs}), after the manuscript is accepted.
For review purposes, we provide the training, validation, and testing data, as well as the scripts used to generate them, at \url{https://codeocean.com/capsule/2052868/tree/v1} \cite{kadeethum2021frameworksourcecode}. The FOM source codes are available at \url{https://github.com/teeratornk/jcp_YJCPH_110030_git}, together with a tutorial (tutorial number 9) for the \textbf{multiphenics} package at \url{https://github.com/multiphenics}.
\newpage
\begin{appendices}
\beginapp
\section{Problem description, model geometry, and boundaries} \label{sec:prob_description}
To recap, the coupled HM processes in porous media are governed by two equations. The first is the linear momentum balance equation
\begin{equation} \label{eq:linear_balance}
\begin{split}
\nabla \cdot \bm{\sigma}^{\prime}(\bm{u}) -\alpha \nabla \cdot \left(p \mathbf{I}\right)
+ \bm{f} = \bm{0} &\text { \: in \: } \Omega \times \mathbb{T}, \\
\bm{u} =\bm{u}_{D} &\text { \: on \: } \partial \Omega_{D} \times \mathbb{T},\\
\bm{\sigma} {(\bm{u})} \cdot \mathbf{n}=\bm{t_D} &\text { \: on \: } \partial \Omega_{N} \times \mathbb{T}, \\
\bm{u}=\bm{u}_{0} &\text { \: in \: } \Omega \text { at } t = 0,
\end{split}
\end{equation}
\noindent
where $\bm{\sigma}^{\prime}$ is the effective Cauchy stress tensor, $p$ is the pore pressure, $\bm{u}$ is bulk displacement, $\bm{u}_0$ is an initial displacement, $\mathbf{I}$ is the second-order identity tensor, $\alpha$ is the Biot coefficient, $\bm{f}$ is the body force term defined as $\rho \phi \mathbf{g}+\rho_{s}(1-\phi) \mathbf{g}$, where $\rho$ is the fluid density, $\rho_s$ is the solid density, $\phi$ is the porosity, $\mathbf{g}$ is the gravitational acceleration vector, $\bm{u}_D$ and ${\bm{t}_D}$ are prescribed displacement and traction values at the boundaries, respectively, and $\mathbf{n}$ is the unit normal vector to the boundary. \par
The second is the mass balance equation, which reads
\begin{equation} \label{eq:mass_balance}
\begin{split}
\left(\frac{1}{M}+\frac{\alpha^{2}}{K}\right) \frac{\partial p}{\partial t}+\frac{\alpha}{K} \frac{\partial \sigma_{v}}{\partial t}- \nabla \cdot \left( \bm{\kappa} \nabla p \right)= g &\text { \: in \: } \Omega \times \mathbb{T}, \\
p=p_{D} &\text { \: on \: } \partial \Omega_{D} \times \mathbb{T}, \\
- \bm{\kappa} \nabla p \cdot \mathbf{n}=q_{D} &\text { \: on \:} \partial \Omega_{N} \times \mathbb{T}, \\
p=p_{0} &\text { \: in \: } \Omega \text { at } t = 0,
\end{split}
\end{equation}
\noindent
where $M$ is the Biot modulus, $\sigma_{v}:=\frac{1}{3} \tr(\bm{\sigma})$ is the volumetric stress, $p_D$ and $q_D$ are the given boundary pressure and flux, respectively, $p_0$ is the initial pressure, $K$ is the bulk modulus, $\bm{\kappa}=\frac{\bm{k}}{\mu_f}$ is the porous media conductivity, $\mu_f$ is the fluid viscosity, and $\bm{k}$ is the matrix permeability tensor defined as
\begin{equation} \label{eq:permeability_matrix}
\bm{k} :=
\begin{cases}
\left[ \begin{array}{lll}{{k}_{xx}} & {{k}_{xy}} & {{k}_{xz}} \\ {{k}_{yx}} & {{k}_{yy}} & {{k}_{yz}} \\ {{k}_{zx}} & {{k}_{zy}} & {k}_{zz}\end{array}\right] & \text{if} \ d = 3, \ \\ \\
\left[ \begin{array}{ll}{{k}_{xx}} & {{k}_{xy}} \\ {{k}_{yx}} & {{k}_{yy}} \\ \end{array}\right] & \text{if} \ d = 2, \ \\ \\
\ {k}_{xx} & \text{if} \ d = 1.
\end{cases}
\end{equation}
\noindent
To simplify our problem, we assume all off-diagonal terms of \eqref{eq:permeability_matrix} to be zero and all diagonal terms to have the same value (i.e., isotropic permeability).
It is noted that, throughout this study, our model takes as input a heterogeneous permeability field $\bm{k}$ (in the general notation, $\bm{\mu}$, $\bm{\mu}_\mathrm{validation}$, or $\bm{\mu}_\mathrm{test}$, depending on whether we are dealing with the training, validation, or test set) and the time $t^n$. The framework delivers $\bm{u}_h(t^n, \bm{k}^{(i)})$ and $p_h(t^n, \bm{k}^{(i)})$, which are $\bm{X}_h(t^n, \bm{\mu}^{(i)})$ in general terms; these are approximations of $\bm{u}$ and $p$ obtained through the FOM. The FOM source codes are available at \url{https://github.com/teeratornk/jcp_YJCPH_110030_git}, together with a tutorial (tutorial number 9) for the \textbf{multiphenics} package at \url{https://github.com/multiphenics}. \par
The mesh and boundaries are presented in Figure \ref{fig:mesh}.
\begin{figure}[!ht]
\centering
\includegraphics[width=6.0cm,keepaspectratio]{pictures/mesh.pdf}
\caption{Domain, its boundaries, and mesh used for all numerical examples.}
\label{fig:mesh}
\end{figure}
\section{Detailed information of generators and critic}
The generator resembles the well-established U-Net architecture, which is typically used for image segmentation. The first component of the generator is a contracting block that performs two convolutions followed by a max-pooling operation. The second component is an expanding block, which performs an upsampling, a convolution, and a concatenation of its two inputs. The generator used in this study consists of six contracting and six expanding blocks. Note that each contracting block uses LeakyReLU with a negative slope of 0.2 as its activation function, while each expanding block uses ReLU. The $1^{\mathrm{st}}$ convolutional layer maps the input channel ($\mathrm{C}_{\mathrm{in}}$) to the hidden layer size ($\mathrm{H}$) and is not subject to an activation function. The $2^{\mathrm{nd}}$ convolutional layer maps the hidden layer size ($\mathrm{H}$) to the output channel ($\mathrm{C}_{\mathrm{out}}$) and is subject to the Sigmoid activation function. For the generator, $\mathrm{C}_{\mathrm{in}}$ $=$ $\mathrm{C}_{\mathrm{out}}$ $=$ $\mathrm{C}$, and $\mathrm{H} = 32$. The domain size is governed by ($\widetilde{N}_h^p$, $\widetilde{N}_h^p$), which is 128 $\times$ 128 throughout this manuscript. \par
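A minimal PyTorch sketch of one contracting block is given below; the kernel size and padding are assumptions made for illustration, while the channel doubling and spatial halving match Table \ref{tab:unet_model1}.
\begin{lstlisting}[language=Python]
import torch.nn as nn

class ContractingBlock(nn.Module):
    # Two convolutions followed by max pooling, LeakyReLU(0.2) activation:
    # doubles the number of channels and halves the spatial resolution.
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 2 * in_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(2 * in_ch, 2 * in_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.MaxPool2d(kernel_size=2),
        )

    def forward(self, x):  # [B, C, H, W] -> [B, 2C, H/2, W/2]
        return self.net(x)
\end{lstlisting}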
We note that we typically employ unstructured grids in the finite element solver; however, our framework in this study requires a structured data set. Thus, we pre-process our finite element data by interpolating the finite element result $p_h$ to structured grids using, e.g., the finite element interpolation operator or cubic spline interpolation. We then replace the FOM dimension ${N}_h^p$, associated with the unstructured grid, with a pair $(\widetilde{N}_h^p, \widetilde{N}_h^p)$ associated with the structured grid. In practice, the value of $\widetilde{N}_h^p$ is often chosen independently of ${N}_h^p$. The same procedure is carried out for $\bm{u}_h$. $\mathrm{B}$ denotes the batch size. \par
The architecture of NLI model is presented in Table \ref{tab:unet_model1}.
\begin{table}[!ht]
\centering
\caption{Generator: NLI's detail used in this study (input and output sizes are represented by {[}$\mathrm{B}$, $\mathrm{C}$, $\widetilde{N}_h^p$, $\widetilde{N}_h^p${]}. We use hidden layers $\mathrm{H} = 32$). BN refers to batch normalization. }
\begin{tabular}{|l|c|c|c|c|}
\hline
Block & \multicolumn{1}{l|}{Input size} & \multicolumn{1}{l|}{Output size} & \multicolumn{1}{l|}{BN} & \multicolumn{1}{l|}{Dropout} \\ \hline
$1^{\mathrm{st}}$ convolutional layer & {[}$\mathrm{B}$, $\mathrm{C}$, 128, 128{]} & {[}$\mathrm{B}$, 32, 128, 128{]} & & \\ \hline
$1^{\mathrm{st}}$ contracting block & {[}$\mathrm{B}$, 32, 128, 128{]} & {[}$\mathrm{B}$, 64, 64, 64{]} & \checkmark & \checkmark \\ \hline
$2^{\mathrm{nd}}$ contracting block & {[}$\mathrm{B}$, 64, 64, 64{]} & {[}$\mathrm{B}$, 128, 32, 32{]} & \checkmark & \checkmark \\ \hline
$3^{\mathrm{rd}}$ contracting block & {[}$\mathrm{B}$, 128, 32, 32{]} & {[}$\mathrm{B}$, 256, 16, 16{]} & \checkmark & \checkmark \\ \hline
$4^{\mathrm{th}}$ contracting block & {[}$\mathrm{B}$, 256, 16, 16{]} & {[}$\mathrm{B}$, 512, 8, 8{]} & \checkmark & \\ \hline
$5^{\mathrm{th}}$ contracting block & {[}$\mathrm{B}$, 512, 8, 8{]} & {[}$\mathrm{B}$, 1024, 4, 4{]} & \checkmark & \\ \hline
$6^{\mathrm{th}}$ contracting block & {[}$\mathrm{B}$, 1024, 4, 4{]} & {[}$\mathrm{B}$, 2048, 2, 2{]} & \checkmark & \\ \hline
$1^{\mathrm{st}}$ expanding block & {[}$\mathrm{B}$, 2048, 2, 2{]} & {[}$\mathrm{B}$, 1024, 4, 4{]} & \checkmark & \\ \hline
$2^{\mathrm{nd}}$ expanding block & {[}$\mathrm{B}$, 1024, 4, 4{]} & {[}$\mathrm{B}$, 512, 8, 8{]} & \checkmark & \\ \hline
$3^{\mathrm{rd}}$ expanding block & {[}$\mathrm{B}$, 512, 8, 8{]} & {[}$\mathrm{B}$, 256, 16, 16{]} & \checkmark & \\ \hline
$4^{\mathrm{th}}$ expanding block & {[}$\mathrm{B}$, 256, 16, 16{]} & {[}$\mathrm{B}$, 128, 32, 32{]} & \checkmark & \\ \hline
$5^{\mathrm{th}}$ expanding block & {[}$\mathrm{B}$, 128, 32, 32{]} & {[}$\mathrm{B}$, 64, 64, 64{]} & \checkmark & \\ \hline
$6^{\mathrm{th}}$ expanding block & {[}$\mathrm{B}$, 64, 64, 64{]} & {[}$\mathrm{B}$, 32, 128, 128{]} & \checkmark & \\ \hline
$2^{\mathrm{nd}}$ convolutional layer & {[}$\mathrm{B}$, 32, 128, 128{]} & {[}$\mathrm{B}$, $\mathrm{C}$, 128, 128{]} & & \\ \hline
\end{tabular}
\label{tab:unet_model1}
\end{table}
\noindent
We also provide a code snippet for NLI's generator in Listing \ref{list:nli}.
\lstinputlisting[language=Python, caption= Illustration of the generator of NLI implementation,label={list:nli}]{pictures/NLI_part_of_script.py}
The differences between NLI and ILI are (1) NLI introduces $t^{n} \in \mathbb{T}$ to only the generator's bottleneck using element-wise addition while (2) ILI injects $t^{n} \in \mathbb{T}$ to all layers inside the generator through conditional batch normalization (CBN). The architecture of ILI model is presented in Table \ref{tab:unet_model2}.\par
\noindent
\begin{table}[!ht]
\centering
\caption{Generator: ILI's detail used in this study (input and output sizes are represented by {[}$\mathrm{B}$, $\mathrm{C}$, $\widetilde{N}_h^p$, $\widetilde{N}_h^p${]}. We use hidden layers $\mathrm{H} = 32$). CBN refers to conditional batch normalization.}
\begin{tabular}{|l|c|c|c|c|}
\hline
Block & \multicolumn{1}{l|}{Input size} & \multicolumn{1}{l|}{Output size} & \multicolumn{1}{l|}{CBN} & \multicolumn{1}{l|}{Dropout} \\ \hline
$1^{\mathrm{st}}$ convolutional layer & {[}$\mathrm{B}$, $\mathrm{C}$, 128, 128{]} & {[}$\mathrm{B}$, 32, 128, 128{]} & & \\ \hline
$1^{\mathrm{st}}$ contracting block & {[}$\mathrm{B}$, 32, 128, 128{]} & {[}$\mathrm{B}$, 64, 64, 64{]} & \checkmark & \checkmark \\ \hline
$2^{\mathrm{nd}}$ contracting block & {[}$\mathrm{B}$, 64, 64, 64{]} & {[}$\mathrm{B}$, 128, 32, 32{]} & \checkmark & \checkmark \\ \hline
$3^{\mathrm{rd}}$ contracting block & {[}$\mathrm{B}$, 128, 32, 32{]} & {[}$\mathrm{B}$, 256, 16, 16{]} & \checkmark & \checkmark \\ \hline
$4^{\mathrm{th}}$ contracting block & {[}$\mathrm{B}$, 256, 16, 16{]} & {[}$\mathrm{B}$, 512, 8, 8{]} & \checkmark & \\ \hline
$5^{\mathrm{th}}$ contracting block & {[}$\mathrm{B}$, 512, 8, 8{]} & {[}$\mathrm{B}$, 1024, 4, 4{]} & \checkmark & \\ \hline
$6^{\mathrm{th}}$ contracting block & {[}$\mathrm{B}$, 1024, 4, 4{]} & {[}$\mathrm{B}$, 2048, 2, 2{]} & \checkmark & \\ \hline
$1^{\mathrm{st}}$ expanding block & {[}$\mathrm{B}$, 2048, 2, 2{]} & {[}$\mathrm{B}$, 1024, 4, 4{]} & \checkmark & \\ \hline
$2^{\mathrm{nd}}$ expanding block & {[}$\mathrm{B}$, 1024, 4, 4{]} & {[}$\mathrm{B}$, 512, 8, 8{]} & \checkmark & \\ \hline
$3^{\mathrm{rd}}$ expanding block & {[}$\mathrm{B}$, 512, 8, 8{]} & {[}$\mathrm{B}$, 256, 16, 16{]} & \checkmark & \\ \hline
$4^{\mathrm{th}}$ expanding block & {[}$\mathrm{B}$, 256, 16, 16{]} & {[}$\mathrm{B}$, 128, 32, 32{]} & \checkmark & \\ \hline
$5^{\mathrm{th}}$ expanding block & {[}$\mathrm{B}$, 128, 32, 32{]} & {[}$\mathrm{B}$, 64, 64, 64{]} & \checkmark & \\ \hline
$6^{\mathrm{th}}$ expanding block & {[}$\mathrm{B}$, 64, 64, 64{]} & {[}$\mathrm{B}$, 32, 128, 128{]} & \checkmark & \\ \hline
$2^{\mathrm{nd}}$ convolutional layer & {[}$\mathrm{B}$, 32, 128, 128{]} & {[}$\mathrm{B}$, $\mathrm{C}$, 128, 128{]} & & \\ \hline
\end{tabular}
\label{tab:unet_model2}
\end{table}
\noindent
We also provide a code snippet for the generator of ILI in Listing \ref{list:ili}.
\lstinputlisting[language=Python, caption= Illustration of the generator of ILI implementation,label={list:ili}]{pictures/ILI_part_of_script.py}
\noindent
The CBN is implemented as shown in Listing \ref{list:cbn}. We also illustrate that if we have more continuous single-value parameters, we can include them in our model through CBN.
\lstinputlisting[language=Python, caption= Illustration of the conditional batch normalization (CBN) implementation,label={list:cbn}]{pictures/CBN.py}
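As a hypothetical illustration of this extension, additional continuous scalar labels (e.g., Young's modulus and the Poisson ratio, as mentioned in the discussion) could simply be concatenated into the condition vector fed to each CBN block:
\begin{lstlisting}[language=Python]
import torch

B = 4
t_n = torch.rand(B, 1)   # normalized time
E = torch.rand(B, 1)     # hypothetical Young's modulus label
nu = torch.rand(B, 1)    # hypothetical Poisson ratio label
cond = torch.cat([t_n, E, nu], dim=1)  # [B, 3] condition vector
# The first linear layer of each CBN's ANN then maps 3 -> hidden units
# instead of 1 -> hidden; the remainder of the model is unchanged.
\end{lstlisting}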
Both NLI and ILI use the same critic, presented in Table \ref{tab:disc}. The critic utilizes the contracting block that is also used in the generator. Each contracting block of the critic uses LeakyReLU with a negative slope of 0.2 as its activation function. The $1^{\mathrm{st}}$ convolutional layer maps $\mathrm{C}$ $+$ the conditional field to $\mathrm{H} = 8$ and is not subject to any activation function, and the $2^{\mathrm{nd}}$ convolutional layer maps $\mathrm{H} = 8$ to $\mathrm{C}$. Unlike the generator, whose input and output sizes are equal ($\widetilde{N}_h^p \times \widetilde{N}_h^p$ $=$ 128 $\times$ 128), the critic's input size is $\widetilde{N}_h^p \times \widetilde{N}_h^p$ $=$ 128 $\times$ 128, while its output is a patch matrix of size $\mathrm{PATCH_X}$ $\times$ $\mathrm{PATCH_Y}$ $=$ 8 $\times$ 8.
\noindent
\begin{table}[!ht]
\centering
\caption{Critic: NLI and ILI's detail used in this study (input size is represented by {[}$\mathrm{B}$, $\mathrm{C}$, $\widetilde{N}_h^p$, $\widetilde{N}_h^p${]}, and output size is represented by {[}$\mathrm{B}$, $\mathrm{C}$, $\mathrm{PATCH_X}$, $\mathrm{PATCH_Y}${]}. We use hidden layers $\mathrm{H} = 8$).}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Block & \multicolumn{1}{l|}{Input size} & \multicolumn{1}{l|}{Output size} & \multicolumn{1}{l|}{BN} \\ \hline
$1^{\mathrm{st}}$ convolutional layer & {[}$\mathrm{B}$, $\mathrm{C}$+1, 128, 128{]} & {[}$\mathrm{B}$, 8, 128, 128{]} & \\ \hline
$1^{\mathrm{st}}$ contracting block & {[}$\mathrm{B}$, 8, 128, 128{]} & {[}$\mathrm{B}$, 16, 64, 64{]} & \\ \hline
$2^{\mathrm{nd}}$ contracting block & {[}$\mathrm{B}$, 16, 64, 64{]} & {[}$\mathrm{B}$, 32, 32, 32{]} & \checkmark \\ \hline
$3^{\mathrm{rd}}$ contracting block & {[}$\mathrm{B}$, 32, 32, 32{]} & {[}$\mathrm{B}$, 64, 16, 16{]} & \checkmark \\ \hline
$4^{\mathrm{th}}$ contracting block & {[}$\mathrm{B}$, 64, 16, 16{]} & {[}$\mathrm{B}$, 128, 8, 8{]} & \checkmark \\ \hline
$2^{\mathrm{nd}}$ convolutional layer & {[}$\mathrm{B}$, 128, 8, 8{]} & {[}$\mathrm{B}$, $\mathrm{C}$, 8, 8{]} & \\ \hline
\end{tabular}
\label{tab:disc}
\end{table}
\noindent
The Critic implementation is as shown in Listing \ref{list:critic}.
\lstinputlisting[language=Python, caption= Illustration of the Critic implementation,label={list:critic}]{pictures/critic_part_of_script.py}
\end{appendices}
\bibliographystyle{plainnat}
\section{Introduction}
Regression models, frequently based on machine learning concepts~\cite{fernandez2019extensive} like support vector machines~\cite{chang2011libsvm}, decision trees~\cite{loh2011classification}, random forests~\cite{breiman2001random} or neural networks~\cite{lathuiliere2019comprehensive} are common methods to predict or estimate properties of interest. However, a difficult balance needs to be found when regression arises as part of a real-world engineering problem: while complex models may provide a high accuracy, they can be opaque. For example, while certain neural network models are powerful enough to approximate any function~\cite{hornik1989multilayer}, their nonlinear and heavily parameterized layered structure is prohibitive to interpretation. For many space-related problems, a lack of interpretability is a show stopper, as predictability and robustness of the systems are of critical importance.
Consequently, simpler models like linear regressors~\cite{weisberg2005applied} are favored in such cases, as they allow the behavior of the model to be studied analytically. The price of assuming a linear model is often suboptimal accuracy if non-linear relationships are dominant in the underlying data. A human expert in the loop might instead impose certain types of functions (e.g., logarithmic, polynomial, etc.) to improve the fit, but this pragmatic approach introduces human bias and scalability problems.
Symbolic regression approaches aim to substitute for this expert with an impartial system that discovers the best function by evolutionary recombination of basic mathematical operators, like addition, multiplication, trigonometric functions, exponentials and similar. This has allowed various approaches~\cite{udrescu2020ai,schmidt2009solving,mcconaghy2011ffx} to rediscover explicit mathematical functions from sparse samples. In contrast, work that applies the same techniques to practical engineering problems is, putting a few noteworthy industrial exceptions~\cite{wang2019symbolic} aside, seemingly rare, which raises the question of why this is the case.
A potential explanation could be that numerical constants are essential to describe any reasonably noisy and complex application.
Constants are, however, rarely found in the expressions deployed to test the effectiveness of symbolic regression techniques. While those techniques excel at recombining the basic operators, they are poorly equipped to invent constants, either relying on the correct constants already being provided as a fixed (sometimes mutable) input or producing them by idiosyncratic combinations of operators.\footnote{For example, the constant $3$ might be constructed from an arbitrary input variable $x$ by evolving the expression $\frac{x}{x} + \frac{x}{x} + \frac{x}{x}$.}
In this report, we introduce a new multi-objective memetic algorithm based on differentiable Cartesian Genetic Programming (dCGP)~\cite{izzo2017differentiable,izzo2020dcgp} that is capable of finding interpretable mathematical expressions for two different real-world problems coming from the space domain without the need for specific expert knowledge. We show that by exploiting the differentiability of a dCGP expression, the best-fitting constants can be learned directly. The evolved expressions are shown to be either competitive with or outperforming state-of-the-art reference models for the tasks under consideration, including machine-learned regression models. Due to its multi-objective nature, a decision maker is able to balance interpretability (i.e. expression complexity) with model accuracy.
Our report is structured as follows: Section~\ref{sec:related} provides some context of previous symbolic regression works and relates them to ours. Section~\ref{sec:symbolic} briefly explains dCGP and introduces our memetic and multi-objective algorithm. Section~\ref{sec:case1} and Section~\ref{sec:case2} describe two case studies, one from the Mars express mission and one from star dating, to which our algorithm is applied and evaluated. We conclude our work in Section~\ref{sec:conclusions}.
\section{Related Work}
\label{sec:related}
Early works on symbolic regression were inspired by the Genetic Programming approach of Koza~\cite{koza1994genetic} and deployed encodings based on arithmetic trees or grammars to evolve mathematical expressions. The encoding of our chromosomes is based on an extension of Cartesian Genetic Programming (CGP), which was originally developed by Miller~\cite{miller2011cartesian}.
A comprehensive overview of recent genetic symbolic regression techniques and their applications is given by Wang et al.~\cite{wang2019symbolic}. Most influential is the seminal work of Michael Schmidt and Hod Lipson~\cite{schmidt2009distilling}, who extracted conservation laws from observing physical systems like the double pendulum. Their evolutionary technique is available in the commercial ``Eureqa'' software. Eureqa uses an evolutionary technique to determine the numerical value of constants.
There also exist approaches that do not rely on genetic programming, such as the deterministic FFX algorithm~\cite{mcconaghy2011ffx}. Most recently, Udrescu and Tegmark~\cite{udrescu2020ai} introduced the ``AI Feynman'' algorithm, which relies on a series of regression techniques, brute-forcing and neural networks instead of evolutionary approaches. However, AI Feynman requires constants to be provided as input a priori or must otherwise construct them by operator recombination.
\section{Symbolic Regression with dCGP}
\label{sec:symbolic}
\subsection{dCGP}
Differentiable Cartesian Genetic Programming (dCGP) is a recent development in the field of Genetic Programming (GP) that equips genetic programs with an any-order differentiable representation to be used in learning tasks~\cite{izzo2020dcgp}.
The use of low-order differentials, i.e. gradients, to learn parameters in Genetic Programs is rare in previous works and can be found, for example, in \cite{topchy2001faster} and \cite{emigdio2015local}.
In past research, attempts to use genetic operators such as crossover and mutation \cite{howard1995ga} as well as meta-heuristics such as simulated annealing \cite{bitch} were made to adapt such parameters.
It is clear how access to the model's differential information, when available, may bring substantial gains if used during learning as proved, for example, by the enormous success of stochastic gradient descent in machine learning and in particular in learning deep neural network parameters. Key to the successful use of the differential information is its availability at a low computational cost, a feature that for first order derivatives, i.e. the gradient, is guaranteed by the backward automated differentiation technique, while for higher order derivatives is more difficult to obtain.
Thanks to the built-in efficient implementation of the algebra of truncated Taylor polynomials, evaluating a dCGP program allows seamless access to gradients and Hessians (as well as higher order differential information).
In the following, we describe how our new Multi Objective Memetic Evolutionary Strategy (MOMES) leverages the loss gradient and its Hessian to evolve the dCGP program and learn the ephemeral constants simultaneously.
\subsection{Multi Objective Memetic Evolutionary Strategy (MOMES)}
Let us denote by $\boldsymbol \eta = (\boldsymbol \eta_u, \mathbf c)$ a generic chromosome encoding the symbolic expression $\hat y = f_{\boldsymbol \eta_u}(\mathbf x, \mathbf c)$. We recognize in $\boldsymbol \eta$ two distinct parts: $\boldsymbol \eta_u$ is the integer part of the chromosome, making use of the classical Cartesian Genetic Programming \cite{miller2011cartesian} encoding. Thus, $\boldsymbol \eta_u$ represents the mathematical form of $f$, while $\mathbf c$ represents the actual values of the ephemeral constants in a continuous direct encoding. We may then introduce the gradient and Hessian as:
$$
\begin{array}{l}
\nabla \hat y = \left[\frac{\partial \hat y}{\partial c_1}, \dots, \frac{\partial \hat y}{\partial c_n}\right]
\hspace{1.5cm}
\nabla^2 \hat y = \begin{bmatrix}
\frac{\partial^2 \hat y}{\partial c_1\partial c_1} & \dots & \frac{\partial^2 \hat y}{\partial c_1\partial c_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial^2 \hat y}{\partial c_n\partial c_1} & \dots & \frac{\partial^2 \hat y}{\partial c_n\partial c_n}
\end{bmatrix}
\end{array}
$$
Assuming we work on labelled data denoted by $\mathcal D = \{(\mathbf x_i, y_i)\}$, $i=1,\dots,N$, we may thus compute, for any chromosome $\boldsymbol \eta$, the mean square error (MSE) loss (up to the immaterial factor $1/N$):
$$
\ell = \sum_i (y_i - \hat y_i)^2
$$
its full Hessian ${\mathbf H}$ and its full gradient ${\mathbf G}$. We also define the complexity $\mu$ of the chromosome $\boldsymbol \eta$ as the number of active nodes in its CGP.
Let us now introduce the concept of active constants $\tilde {\mathbf c}$ as the selection from $\mathbf c$ of all components with nonzero entries in the gradient ${\mathbf G}$. Zero entries correspond, most of the time, to ephemeral constants that are not expressed by the corresponding CGP and are thus inactive. Similarly, we call $\tilde{\mathbf H}$ and $\tilde{\mathbf G}$, respectively, the loss active Hessian and the loss active gradient when considering only the active constants. Following these notations and definitions, we may now describe the MOMES algorithm.
MOMES is an evolutionary strategy where all members in a population of $NP$ individuals, each represented by a chromosome $\boldsymbol \eta_i$, $i=1..NP$, undergo a mutation step at each generation, acting on all the genes (active and inactive) in $\boldsymbol \eta_u$. The mutated chromosome is then subject to a lifelong learning process consisting of one single step of Newton's method and acting on the (active) continuous part $\tilde {\mathbf c}$ of the chromosome:
$$
\tilde {\mathbf c}_{new} = \tilde {\mathbf c}_{old} - \tilde{\mathbf H}^{-1}\tilde{\mathbf G}
$$
The new individual fitness is then evaluated and, to preserve diversity, the chromosome is added to the $NP$ parents only if its fitness is not already present in the candidate pool.
Non-dominated sorting over the objectives $(\ell, \mu)$ is then applied to the candidate pool selecting $NP$ individuals to be carried over to the new generation. The exact implementation details of MOMES are documented in the open source project dCGP~\cite{izzo2020dcgp}.
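To make the lifelong learning step concrete, the following is a minimal NumPy sketch of one Newton iteration on the active constants. The function name, the tolerance used to detect active constants and the handling of a singular Hessian are our own illustrative choices and not part of the dCGP implementation.
\begin{lstlisting}[language=Python, caption={Sketch of the memetic Newton step on the active constants}, label={list:newton_sketch}]
import numpy as np

def newton_step(c, grad, hess, eps=1e-12):
    # c: ephemeral constants; grad, hess: gradient and Hessian
    # of the MSE loss with respect to c.
    active = np.flatnonzero(np.abs(grad) > eps)
    if active.size == 0:
        return c  # no constant is expressed by the program
    g = grad[active]
    H = hess[np.ix_(active, active)]
    c_new = c.copy()
    try:
        c_new[active] -= np.linalg.solve(H, g)
    except np.linalg.LinAlgError:
        pass  # singular active Hessian: skip the learning step
    return c_new
\end{lstlisting}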
The design of MOMES required a few ideas that were the result of experiments and iterations and that are worth discussing briefly at this point. Firstly, the mutation of the CGP expression encoded in $\boldsymbol \eta_u$ acts randomly on all genes, including the ones that are not expressed by the resulting expression. This type of mutation, already noted as beneficial when used in the standard evolutionary strategy technique used for CGP~\cite{miller2011cartesian}, has an added benefit here as it allows a mutated chromosome to express the same formula as the parent. In these cases, the subsequent lifelong learning ensures the possibility of applying further iterations of Newton's method to the same expression, and the mechanism will thus ensure that, eventually, a full local descent and not only one single iteration is made over the most promising individuals. Additionally, this simple trick acts as a necessary diversity preservation mechanism, as otherwise the very same expression (possibly having a different complexity as counted by the number of active nodes in the CGP) would flood the non-dominated front and lead to degraded convergence.
Secondly, the choice of using a single step of a second order method for the memetic part of MOMES was found to be most effective when coupled with MSE loss. It has been noted (see \cite{izzo2017differentiable}) that, whenever constants appear linearly in the expressed CGP, one single step of a second order method is enough to get to the optimal choice if the loss is expressed as MSE. This mechanism adds a small, but beneficial bias towards simple expressions during evolution, since these are evaluated with optimal constants values at each generation, while more complex appearances of constants would require more generations to reach their optimal values.
Lastly, the restriction to active constants and to the corresponding active Hessian and gradient makes it far more likely that the Hessian matrix admits an inverse, thus enabling an effective lifelong learning.
\section{Case 1: Mars Express Thermal Data}
\label{sec:case1}
\subsection{Background and Dataset}
\label{subsec:mexdataset}
Mars Express (MEX) is a spacecraft of the European Space Agency (ESA) that has been monitoring the ``red planet'' continuously since 2004.
To remain operable in the challenging environment of space, the probe is equipped with a thermal regulation system to which a certain power has to be allocated.
An accurate power budget is essential for the mission, as underestimates might lead to malfunctions and overestimates will waste energy that should have been allocated to scientific tasks instead.
The MEX data for this work~\cite{maertens2022mars} is split into a training set MEX1 and a test set MEX2. MEX1 consists of 5795 completed orbits in the time frame from 01.01.2015 to 18.08.2019 and MEX2 of 1507 orbits from 18.08.2019 to 31.10.2020. Measurements of potential relevance for the prediction of the thermal power budget have been aggregated for each orbit into several thermal contributors. In particular, the task is to find the thermal power demand $P_{th}$ as a function
\begin{equation}
\label{eq:thcontrib}
P_{th}(LVAH, SH, D_{ecl}, TX, FO, NS).
\end{equation}
A reference model based on linear regression
\begin{equation}
\label{eq:refmodel}
P_{th} = c \cdot LVAH + d \cdot SH + e \cdot D_{ecl} + f \cdot TX + g \cdot FO + h \cdot NS + i
\end{equation}
was fitted to the MEX1 data. Table~\ref{tab:mex} gives a short description of each thermal contributor and the value of the fitted coefficients. Figure~\ref{fig:mex1original} shows the MEX data in relation to orbital conditions and actual power consumption. The evaluation metric for all MEX data is the Root Mean Square Error (RMSE).
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|l|c|}
\hline
\thead{thermal\\ contri-\\ butor} & \thead{description} & \thead{related \\ coefficient} & \thead{dCGP \\ variable} \\
\hline
$LVAH$ & Launch Vehicle Adapter heating & $c = -7.666 \cdot 10^1$ & $x_0$ \\
\hline
$SH$ & solar heating & $d = -1.764 \cdot 10^{-1}$ & $x_1$ \\
\hline
$D_{ecl}$ & heating due to eclipses & $e = -3.387 \cdot 10^{-3}$ & $x_2$ \\
\hline
$TX$ & transmitter activities & $f = -6.898 \cdot 10^0$ & $x_3$ \\
\hline
$FO$ & flag indicating power off & $g = +1.107 \cdot 10^1$ & $x_4$ \\
\hline
$NS$ & guidance flag & $h = +6.820 \cdot 10^0$ & $x_5$ \\
\hline
- & (baseline) & $i = +2.267 \cdot 10^2$ & \\
\hline
\end{tabular}
\caption{The reference model THP for MEX data is a linear regression model with the highlighted coefficients (compare Equation~\ref{eq:refmodel}). The last column shows the corresponding variables for dCGP expressions.}
\label{tab:mex}
\end{table}
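\noindent
In code, the reference model of Equation~\ref{eq:refmodel} with the coefficients of Table~\ref{tab:mex} reads as follows (the function and argument names are ours):
\begin{lstlisting}[language=Python, caption={Sketch of the linear THP reference model}, label={list:thp_sketch}]
def thp(lvah, sh, d_ecl, tx, fo, ns):
    # Linear reference model with the fitted coefficients c..i.
    return (-7.666e1 * lvah - 1.764e-1 * sh - 3.387e-3 * d_ecl
            - 6.898e0 * tx + 1.107e1 * fo + 6.820e0 * ns + 2.267e2)
\end{lstlisting}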
\begin{figure}[h]
\includegraphics[width=\textwidth]{figures/mex_data.png}
\caption{MEX data, including orbital and operative parameters. Goal is to predict the average power consumption from thermal contributors, compare Equation~\ref{eq:thcontrib}.}
\label{fig:mex1original}
\end{figure}
\subsection{Results}
\label{subsec:mexresults}
For this experiment, MEX1 data is used for training and MEX2 data for additional testing. Our features are the thermal contributors of the reference THP model (see Table~\ref{tab:mex}), with two modifications: $SH$ and $D_{ecl}$ are standardized by subtracting the mean and dividing by the standard deviation of the training set. The determination of optimal hyperparameters for dCGP and MOMES is not within the scope of this work, as it would require us to thoroughly explain, characterize and statistically analyze each of them. In practice, evolution converges fast enough to perform grid searches over some reasonable parameter guesses that can be further calibrated over a few iterations of the optimization pipeline. Table~\ref{tab:mexhyper} reports the hyperparameters that we found to be effective enough to make our point with this experiment.
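The feature standardization mentioned above amounts to the following sketch; note that the statistics are taken from the training set only, so that no test information leaks into the features:
\begin{lstlisting}[language=Python, caption={Sketch of the feature standardization}, label={list:std_sketch}]
import numpy as np

def standardize(train_col, col):
    # Use training-set statistics for both training and test data.
    mu, sigma = np.mean(train_col), np.std(train_col)
    return (col - mu) / sigma
\end{lstlisting}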
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/mex_pareto.png}
\includegraphics[width=\textwidth]{figures/mex_result.png}
\caption{\textbf{Top:} Non-dominated front found by MOMES, 18 out of 40 individuals, MEX1 data. Blue line is the reference model THP. The expressions M1, M2 and M3 are highlighted for further analysis. \textbf{Bottom:} Predictions of THP and dCGP expression M1 in direct comparison with target power consumption. Shaded region is test data MEX2.}
\label{fig:mexpareto}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
\thead{parameter} & \thead{value} \\
\hline
basic operators & $+, -, \times, \div, \log$ \\
\hline
rows & 2 \\
\hline
columns & 20 \\
\hline
levels back & 20 \\
\hline
maximum number of constants & 5 \\
\hline
maximum mutations & 4 \\
\hline
generations & 50000 \\
\hline
population size & 40 \\
\hline
\end{tabular}
\caption{Hyperparameters deployed for dCGP and MOMES for the MEX1 dataset.}
\label{tab:mexhyper}
\end{table}
As common with evolutionary techniques, we multi-start MOMES 200 times to account for its stochasticity. Out of those 200 results, we select the one that has the strongest extreme point (i.e. lowest RMSE) and show the complete Pareto front in Figure~\ref{fig:mexpareto}. For further comparison with the THP reference model, we select three different formulas: M1 is at the extreme point, M2 is the expression with minimum complexity that is still above the reference and M3 is a low complexity expression. Table~\ref{tab:mexresults} shows the corresponding RMSE and the average over- and underestimations.
\begin{table}[h]
\centering
\rotatebox[origin=c]{90}{MEX1}
\begin{tabular}{|c||c|c|c|c|}
\hline
\thead{Model} & \thead{RMSE} & \thead{complx.} & \thead{avg. over-\\estimate} & \thead{avg. under-\\estimate} \\
\hline THP & 8.672 & - & 6.665 & 6.774 \\
\hline M1 & 8.478 & 61 & 6.657 & 6.442 \\
\hline M2 & 8.647 & 34 & 6.826 & 6.619 \\
\hline M3 & 9.223 & 22 & 7.184 & 7.232 \\
\hline
\end{tabular}\\
\rotatebox[origin=c]{90}{MEX2}
\begin{tabular}{|c||c|c|c|c|}
\hline
\thead{Model} & \thead{RMSE} & \thead{complx.} & \thead{avg. over-\\estimate} & \thead{avg. under-\\estimate} \\
\hline THP & 12.454 & - & 10.684 & 4.549 \\
\hline M1 & 11.390 & 61 & 9.753 & 4.461 \\
\hline M2 & 12.828 & 34 & 10.924 & 4.998 \\
\hline M3 & 14.991 & 22 & 12.928 & 6.745 \\
\hline
\end{tabular}
\caption{RMSE for 3 selected expression from Figure~\ref{fig:mexpareto}, complexities and values for average over- and underestimation of thermal power consumption. THP is the linear regression reference model.}
\label{tab:mexresults}
\end{table}
We observe that the RMSE for MEX2 is considerably higher than for MEX1 for each of the models, with Expression M1 generalizing slightly better to the unseen conditions of MEX2 than THP. We show the actual predictions of both THP and Expression M1 in Figure~\ref{fig:mexpareto}, which highlights the differences between the models. From a mathematical point of view, M1 includes several products of variables, resulting in a degree 2 polynomial. Despite having access to the division and logarithm operators, evolution did not favor them in the population. However, other non-dominated fronts from the 200 runs (not shown) sometimes included division, constructing rational functions of similar RMSE. M3 is a linear model without the thermal contributors $x_2$ and $x_3$ that is worse than THP but could provide an interesting alternative if certain measurements were not readily available. While some of the numerical constants are similar across different expressions, it is clear that the memetic algorithm optimizes them for each expression independently to further reduce the error. Furthermore, the maximum of 5 constants is not (always) expressed, highlighting that MOMES is selective and parsimonious. In comparison, the simple linear regression model THP already relies on 7 explicit constants.
\section{Case 2: Star Dating Gyrochronology}
\label{sec:case2}
\subsection{Background and Dataset}
\label{subsec:stardataset}
This dataset was released by Moya et al.~\cite{moya2021ai} as a benchmark for AI research on star dating. Many physical models for astronomical problems (e.g. the search for habitable exoplanets) depend on the age of a star, which is however not directly measurable. Consequently, the age of a star needs to be inferred from observable features. The public benchmark describes 1464 stars whose accurate ages have been determined by asteroseismology and association to star clusters, providing for each star the stellar features described in Table~\ref{tab:starfeatures}.
In this context, an approach termed ``gyrochronology'' is of particular interest; it hypothesizes a functional relationship between star age, stellar mass and rotational period. Some empirical relations (assuming a fixed functional form) have been proposed~\cite{angus2015calibrating}, but the accuracy of gyrochronology remains controversial, with some works suggesting that linear relationships might not be sufficient at all or just applicable for stars of a certain age group~\cite{barnes2010simple}. Consequently, the authors of the dataset propose to study star dating directly as a regression problem using supervised machine learning models.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
\thead{Feature} & \thead{Description} & \thead{dCGP \\ variable} \\
\hline
$M$ & Stellar Mass & $x_0$ \\
\hline
$R$ & Stellar Radius & $x_1$ \\
\hline
$T_{eff}$ & Stellar effective temperature & $x_2$ \\
\hline
$L$ & Luminosity & $x_3$ \\
\hline
$[Fe/H]$ & Metallicity & $x_4$ \\
\hline
$\log g$ & Logarithm of surface gravity & $x_5$ \\
\hline
$P_{rot}$ & Stellar rotational period & $x_6$ \\
\hline
\end{tabular}
\caption{Features available for every star in the dataset.}
\label{tab:starfeatures}
\end{table}
Several reference models including linear regression, decision tree regressor, random forest regressor, support vector regressor, Gaussian process, kNN, neural networks and ensembles are provided within the benchmark. Additionally, four different splits of the data for training and test are available:
\begin{itemize}
\item \textbf{A} a random 80/20 split
\item \textbf{B1} stars of age $[0.4, 4.2]$ Gyr are used for training, ages $[4.2, 13.8]$ for testing
\item \textbf{B2} stars dated by cluster belonging used for training, remainder for testing
\item \textbf{C} independent control with 32 new stars for testing, including our own sun
\end{itemize}
The evaluation metric for all gyrochronology models is the Mean Absolute Error (MAE). Additionally, the dataset provides error bounds on the age of the stars, allowing a precision metric to be defined as the fraction of star age predictions that fall within the corresponding confidence interval.
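Both metrics can be stated compactly; the sketch below assumes array-valued predictions and per-star interval bounds:
\begin{lstlisting}[language=Python, caption={Sketch of the MAE and precision metrics}, label={list:metrics_sketch}]
import numpy as np

def mae(age_true, age_pred):
    return np.mean(np.abs(age_true - age_pred))

def precision(age_pred, lower, upper):
    # Fraction of predictions inside the per-star confidence interval.
    return np.mean((age_pred >= lower) & (age_pred <= upper))
\end{lstlisting}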
\subsection{Results}
\label{subsec:starresults}
For this experiment, we deployed MOMES on all training sets of the given benchmarks and re-evaluated the found expressions on the test sets. We selected the features described in Table~\ref{tab:starfeatures} and scaled them by dividing each by the standard deviation of the corresponding training set.
Similar to Section~\ref{sec:case1}, a grid-search was used to find suitable hyperparameters for dCGP and MOMES, reported in Table~\ref{tab:starhyper}.
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
\thead{parameter} & \thead{value} \\
\hline
basic operators & $+, -, \times, \div, \log, \sin$ \\
\hline
rows & 2 \\
\hline
columns & 16 \\
\hline
levels back & 16 \\
\hline
number of constants & 5 \\
\hline
maximum mutations & 6 \\
\hline
generations & 500000 \\
\hline
population size & 40 \\
\hline
\end{tabular}
\caption{Hyperparameters deployed for dCGP and MOMES for star dating.}
\label{tab:starhyper}
\end{table}
A multi-start setup of 400 independent runs (100 for each benchmark) has been deployed to our servers and completed in about 12 hours.
We selected the run that resulted in the lowest MAE (extreme point) on the training set for further analysis. Table~\ref{tab:starresults1} shows the MAE and the precision of the lowest-MAE expression found in the non-dominated front of MOMES in comparison with the reference models. To exemplify the diversity and complexity trade-offs in the found expressions, we show the approximated Pareto fronts in Figure~\ref{fig:starpareto} for Benchmarks A and B1 only (for reasons of space).
Interpreting the results, the dCGP based models are competitive with some of the machine learning models for benchmark A, but have a higher error than neural networks, Gaussian processes, kNN and random forests. Benchmark B1 represents a domain gap study, which is (without the deployment of additional techniques or assumptions) hard for most machine learning models to bridge. Since dCGP looks to capture the physical relations between the different features in algebraic expressions, it lends itself to better generalization.
Investigating the Pareto front of dCGP on the test-set for B2, the simple expression $x_6 - \sin (x_1 - x_4) + 6.987$ could be shown to outperform all other machine learning models by some margin. On closer inspection, however, it turned out that the correlation between the training error and test error in this benchmark was rather low when looking at the entirety of our MOMES runs. Although this particular result for B2 has to be taken with a grain of salt, it nevertheless demonstrates the existence of surprisingly simple expressions that provide an accurate explanation of the data. Thanks to the interpretability of dCGP expressions, this type of insight may uncover potential issues related to overfitting of machine learning models or issues related to the data split that would otherwise go unnoticed. Lastly, on benchmark C the best dCGP expression achieves a low MAE, only surpassed by the Gaussian process model.
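In code, the B2 expression above reads as follows (a sketch; recall that the features are scaled by the training-set standard deviation as described above):
\begin{lstlisting}[language=Python, caption={Sketch of the simple benchmark B2 expression}, label={list:b2_sketch}]
import numpy as np

def age_b2(x1, x4, x6):
    # x1: scaled stellar radius, x4: scaled metallicity,
    # x6: scaled rotational period (see the feature table above).
    return x6 - np.sin(x1 - x4) + 6.987
\end{lstlisting}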
\begin{figure}
\includegraphics[width=\textwidth]{figures/stars_pareto_A.png}\\
\includegraphics[width=\textwidth]{figures/stars_pareto_B1.png}
\caption{Non-dominated front found by MOMES for \textbf{Benchmark A (top)} and \textbf{B1 (bottom)}. Shown is the non-dominated front after the complete population was re-evaluated on the test-set of its corresponding benchmark. Blue lines represent reference machine learned models as described by Moya et al.~\cite{moya2021ai}.}
\label{fig:starpareto}
\end{figure}
\setlength{\tabcolsep}{0.6em}
\begin{table}
\centering
\rotatebox[origin=c]{90}{Precision~~~~~~~~~~~~MAE~~}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|}
\hline
\thead{} & \thead{dcgp} & \thead{nnet} & \thead{lr} & \thead{dtr} & \thead{rf} & \thead{svm} & \thead{bayes} & \thead{knn} & \thead{gp} \\
\hline
A & 0.62 & 0.41 & 0.95 & 0.70 & 0.57 & 0.68 & 0.95 & 0.53 & 0.45 \\
B1 & 1.41 & 1.48 & 1.92 & 1.62 & 1.71 & 1.72 & 1.94 & 1.61 & 1.67 \\
B2 & 1.39 & 1.85 & 1.56 & 1.90 & 1.85 & 1.51 & 1.48 & 1.84 & 1.62 \\
C & 0.97 & 1.10 & 1.30 & 2.11 & 1.01 & 1.28 & 1.30 & 1.73 & 0.84 \\
\hline
\hline
\thead{} & \thead{dcgp} & \thead{nnet} & \thead{lr} & \thead{dtr} & \thead{rf} & \thead{svm} & \thead{bayes} & \thead{knn} & \thead{gp} \\
\hline
A & 50.39 & 73.23 & 35.43 & 63.78 & 60.63 & 46.46 & 32.28 & 77.95 & 62.20 \\
B1 & 51.97 & 59.84 & 29.92 & 51.18 & 48.03 & 37.79 & 28.35 & 55.91 & 51.97 \\
B2 & 42.26 & 28.45 & 30.96 & 21.34 & 25.94 & 33.05 & 29.71 & 25.52 & 30.96 \\
C & 15.63 & 34.37 & 21.87 & 9.37 & 21.87 & 12.50 & 21.87 & 15.62 & 40.62 \\
\hline
\end{tabular}
\caption{MAE and precision on test set for all star dating benchmarks. Each model was trained for each benchmark separately. The dCGP expressions used are always the ones that are at the extreme end of the MAE objective of the non-dominated front created by MOMES.}
\label{tab:starresults1}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
Following our experiments on the Mars Express and star dating datasets, we have demonstrated that MOMES is capable of finding algebraic expressions that explain unseen data and generalize well to it. The fact that the numerical constants of each expression are learned at each step of evolution guides the memetic algorithm towards expressions that are both accurate and of low complexity. The expressions with the lowest error notably contain products of sub-expressions or non-linear operators like the $\sin$-function and are thus more expressive than linear regressors while still amenable to mathematical analysis. Consequently, no expert is required to make potentially biased guesses in finding non-linear expressions. The task that remains for the expert is the engineering and scaling of features (e.g. the thermal contributors in the case of MEX) to obtain the best possible results, as well as the determination of suitable basic operators and efficient hyperparameters.
As shown in the case of benchmark B2 for the star dating case study, deploying symbolic regression is beneficial in general, as sometimes simple expressions turn out to be more general than complex black-box models. The practical benefits of interpretability that come with algebraic expressions, together with the low requirements for their inference, make algorithms like MOMES valuable methods of knowledge discovery. In particular, applications with sparse data in extreme environments like space are likely to benefit.
\subsubsection*{Acknowledgements}
The authors are grateful to Thomas Dreßler, who helped make the MEX data public and explained it to us. Similarly, the authors want to thank Roberto J. López-Sastre for providing support and insight into the gyrochronology data.
\bibliographystyle{splncs04}
\section{Introduction}
A natural question with a long history in group theory is whether an infinite group must have an infinite number of conjugacy classes of elements. In 1949, Higman, Neumann, and Neumann constructed an infinitely generated group with only a finite number of conjugacy classes. Subsequently, S. Ivanov constructed one such example where the group is finitely generated. More recently, D. Osin gave an example of a finitely generated infinite group in which all non-trivial elements are conjugate. More generally, given a group endomorphism $\varphi: \pi \to \pi$, one considers the $\varphi$-twisted conjugacy classes. Equivalently, $\pi$ acts on $\pi$ via $\sigma \cdot \alpha = \sigma \alpha \varphi(\sigma)^{-1}$ and the Reidemeister number $R(\varphi)$, the cardinality of the set of orbits of this action, is the same as the number of $\varphi$-twisted conjugacy classes. Clearly, $R({\rm id}_{\pi})$ is simply the number of conjugacy classes. From the point of view of Nielsen-Wecken fixed point theory, $R(\varphi)$ is precisely the number of fixed point classes of any map $f:X\to X$ with induced homomorphism $f_{\#}=\varphi$ on $\pi_1(X)=\pi$. For many spaces $X$ (e.g. $X$ is a compact manifold of dimension at least three), the Nielsen number $N(f)$, which is the principal object of study, coincides with the minimal number $MF[f]$ of fixed points of maps within the homotopy class of $f$. The Nielsen number is always bounded above by the Reidemeister number $R(f)=R(\varphi)$ and for a large class of spaces, $N(f)=0$ or $N(f)=R(f)$. While the computation of $N(f)$ is in general very difficult, the problem of determining the finiteness of $R(\varphi)$ is more tractable.
In 1985, D. Anosov \cite{anosov} and independently E. Fadell and S. Husseini \cite{ed-suf} showed that for any selfmap $f:N\to N$ of a compact nilmanifold, $|L(f)|=N(f)$ where $L(f)$ denotes the Lefschetz number of $f$. Using the approach of \cite{ed-suf}, this was later strengthened in \cite{felshtyn-hill-wong} where it was shown in particular that $N(f)>0$ iff $R(f)<\infty$. In fact, for selfmaps $f$ of a nilmanifold, either $N(f)=0$ or $N(f)=R(f)$.
In 1994, A. Fel'shtyn and R. Hill \cite{fel-hill} conjectured that for a finitely generated group $\pi$ of exponential growth, if $\varphi:\pi \to \pi$ is injective then $R(\varphi)=\infty$. Using techniques from geometric group theory, G. Levitt and M. Lustig \cite{levitt-lustig} showed that if $\pi$ is finitely generated torsion-free non-elementary Gromov hyperbolic then every automorphism $\varphi \in {\rm Aut}(\pi)$ must have $R(\varphi)=\infty$. This was subsequently proven in \cite{fel:1} without the torsion-free assumption.
We say that a group {\it $G$ has the
$R_{\infty}$ property for automorphisms}, in short {\it $G$ has the
$R_{\infty}$ property}, if for every automorphism $\varphi:G \to G$ we have $R(\varphi)=\infty$. It was shown in \cite{daci-peter} that the Fel'shtyn-Hill conjecture does not hold in general. Moreover, non-elementary polycyclic groups of exponential growth that are not Gromov hyperbolic and do not have the $R_{\infty}$ property were constructed. Since then, many examples of groups with $R_{\infty}$ have been discovered (see e.g., \cite{fel-daci}, \cite{fgw}, \cite{felshtyn-leonov-troitsky}, \cite{daci-peter4}, \cite{levitt}, \cite{TW1}, and \cite{TW2}). (For connections between the study of Reidemeister classes and other areas, see e.g. \cite{felshtyn-troitsky}.) In particular, groups that are quasi-isometric to generalized Baumslag-Solitar groups \cite{TW2} have the $R_{\infty}$ property. In these examples, the groups are all of exponential growth. Since a result of M. Gromov states that a finitely generated group is of polynomial growth iff it is virtually nilpotent, it is natural to ask if one can find (virtually) nilpotent groups with the $R_{\infty}$ property. The main objective of this paper is to construct finitely generated groups of polynomial growth that have the $R_{\infty}$ property. This shows that the $R_{\infty}$ property does {\it not} depend on the growth type of the group as suggested by the original conjecture of Fel'shtyn and Hill.
It is easy to see that finitely generated abelian groups do not have the $R_{\infty}$ property. Therefore, we will first explore the $R_{\infty}$ property for virtually abelian and for nilpotent groups.
The basic algebraic techniques used in the present paper for showing $R(\varphi)=\infty$ is the relationship among the Reidemeister numbers of groups homomorphisms of a short exact sequence. In general, given a commutative diagram of groups and homomorphisms
\begin{equation*}
\begin{CD}
A @>{\eta}>> B \\
@V{\psi}VV @VV{\varphi}V \\
A @>{\eta}>> B
\end{CD}
\end{equation*}
the homomorphism $\eta$ induces a function $\hat {\eta}:\mathcal R(\psi) \to \mathcal R(\varphi)$ where $\mathcal R(\alpha)$ denotes the set of $\alpha$-twisted conjugacy classes.
Some of the basic facts that will be used throughout this paper are given in the following lemma. For more general results, see \cite{daci-peter} and \cite{wong}.
\begin{lemma}\label{R-facts}
Given an endomorphism $\psi:G\to G$ of a finitely generated torsion-free abelian group $G$, $Fix \psi=\{1\}$ iff $R(\psi)<\infty$.
Consider the following commutative diagram
\begin{equation*}\label{general-Reid}
\begin{CD}
1 @>>> A @>>> B @>>> C @>>> 1 \\
@. @V{\varphi'}VV @V{\varphi}VV @V{\overline \varphi}VV @.\\
1 @>>> A @>>> B @>>> C @>>> 1
\end{CD}
\end{equation*}
where the rows are short exact sequences of groups and the vertical arrows are group endomorphisms.
(1) If $R(\overline \varphi)=\infty$ then $R(\varphi)=\infty$.
(2) If $R(\overline \varphi)<\infty, |Fix \overline \varphi|<\infty$, and $R(\varphi')=\infty$ then $R(\varphi)=\infty$.
(3) If the short exact sequence is a central extension then $R(\varphi)=R(\varphi')R(\bar {\varphi})$.
\end{lemma}
\begin{proof}
Suppose $G$ is finitely generated torsion-free abelian. Then $G=\mathbb Z^k$ for some positive integer $k$. For any $\varphi: G\to G$, $R(\varphi)=\# Coker (1-\varphi)=|\det (1-\varphi)|$ so that $R(\varphi)<\infty$ iff $\varphi$ does not have $1$ as an eigenvalue iff $\varphi(x)=x$ has only trivial solution, i.e., $Fix \varphi=1$.
The homomorphism $p:B\to C$ induces a function $\hat p:\mathcal R(\varphi) \to \mathcal R(\overline{\varphi})$ given by $\hat p([\alpha]_B)=[p(\alpha)]_C$. Since $p$ is surjective, so is $\hat p$. Thus, $(1)$ follows. Similarly, $i:A\to B$ induces a function $\hat i:\mathcal R(\varphi') \to \mathcal R(\varphi)$. Since the sequence is exact, it follows that $\hat i(\mathcal R(\varphi'))=\hat p^{-1}([1]_C)$. The subgroup $Fix \overline{\varphi}$ acts on $\mathcal R(\varphi')$ via $\bar {\theta}\cdot [\alpha]_A=[\theta \alpha \varphi(\theta)^{-1}]_A$ where $\theta \in B$ and $\bar {\theta}\in Fix \overline{\varphi}$. Thus, two classes $[\alpha]_A$ and $[\beta]_A$ are in the same $Fix \overline{\varphi}$-orbit iff $i(\alpha)$ and $i(\beta)$ are in the same Reidemeister class, i.e., $[i(\alpha)]_B=[i(\beta)]_B$. Now, $(2)$ follows immediately. Finally, if the extension is central, $\hat p^{-1}([\bar \alpha]_C)$ is independent of $\bar \alpha$ so that $R(\varphi)=k\cdot R(\varphi')$ and $k$ is the number of distinct Reidemeister classes of $\overline{\varphi}$ in $C$. In other words, $k=R(\overline{\varphi})$ and thus $(3)$ follows.
\end{proof}
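\begin{remark}
As an illustration of the first statement, let $\psi:\mathbb Z^2 \to \mathbb Z^2$ be given by the matrix $\left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)$. Then $\det (1-\psi)=\det \left(\begin{smallmatrix} 1 & -1 \\ -1 & 1 \end{smallmatrix}\right)=0$, so $R(\psi)=\# Coker (1-\psi)=\infty$; correspondingly, $Fix \psi=\{(n,n) \mid n\in \mathbb Z\}$ is non-trivial.
\end{remark}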
In \cite{dyer}, J. Dyer constructed some nilpotent Lie algebras which have the property that
every automorphism is unipotent. This implies that the induced homomorphism on the abelianization is the identity; consequently, every automorphism of the corresponding nilpotent group induces the identity automorphism on the abelianization. An immediate consequence of this is the fact that every automorphism $\varphi$ of such a nilpotent group has an infinite number of $\varphi$-twisted conjugacy classes and thus the group has the
$R_{\infty}$ property. The following example is the group theoretic analog of the Lie algebra theoretic example constructed by Dyer \cite{dyer}.
\begin{example}\label{Dyer}
Let $G$ be the quotient of the free nilpotent group $F_2/\Gamma_7(F_2)$ on two generators $x,y$ by the normal closure of the words
$Y_1^{-1}Y_3$, $[Y_2,x]$, $[Y_1,y]U_4^{-1}$, $[Y_1,x][Y_2,y]^{-3}U_4^{-1}$. Here $B=[x,y]$, $Y_1=[[B,y],y]$, $Y_2=[[B,y],x]$, $Y_3=[[B,x],x]$, $z_1=[Y_1,y]$, $z_2=[Y_1,x]$, $z_3=[Y_2,y]$, $U_4=[z_3,x]$. Then given any automorphism
$\varphi$ of $G$ the induced homomorphism on the abelianization of $G$ is the identity. Therefore, this group has the $R_{\infty}$ property. It has nilpotency class 6 and Hirsch length $\geq 7$.
\end{example}
\begin{remark} One can construct as in \cite{bryant-papistas} a finitely generated torsion-free nilpotent group $G$ so that ${\rm Aut}(G)$, modulo the kernel of the action of ${\rm Aut}(G)$ on the abelianization $G^{Ab}$, is a prescribed closed subgroup of $GL(2,\mathbb Z)$.
\end{remark}
This example has prompted us to determine which finitely generated torsion-free nilpotent groups can have the $R_{\infty}$ property.
In general, the $R_{\infty}$ property has important implications in the fixed point
theory of Jiang-type spaces since $R(f)=\infty$ implies that $N(f)=0$ which in turn implies in most cases that $f$ is deformable to be fixed point free. Recall that a space is said to be of Jiang-type if $L(f)=0 \Rightarrow N(f)=0$ and $L(f)\ne 0 \Rightarrow N(f)=R(f)$. Since nilmanifolds are known to be of Jiang-type, for each torsion-free nilpotent group which has the $R_{\infty}$ property there corresponds a nilmanifold with the property that every homeomorphism is homotopic to a fixed point free map. Such examples cannot appear in dimension 3, but for every $n>3$ there exists an $n$-dimensional nilmanifold such that every homeomorphism is homotopic to a fixed point free map; and if $n\ge 5$ then the homotopy can be taken to be an isotopy. As a by-product of our investigation, we are able to give an algebraic proof that automorphisms of free groups on two generators have infinite Reidemeister number.
This paper is organized into seven sections. In section 2, extension of the $R_{\infty}$ property to automorphisms of virtually abelian groups is discussed. In particular, we show that the fundamental group of the Klein bottle (Theorem \ref{Klein}), which is a finitely generated torsion-free virtually abelian group, has the $R_{\infty}$ property. We also construct for every $n \ge 2$ a finitely generated torsion-free virtually abelian group of Hirsch length $n$ with the desired $R_{\infty}$ property (Theorem \ref{Klein-Zn}). In section 3 we show that $F_2/\Gamma_k$ has the $R_{\infty}$ property for $k\ge 9$ (Theorem \ref{free-k-R}) where $F_2$ is the free group on two generators and $\Gamma_k=\Gamma_k(F_2)$ is the $k$-th term of the lower central series of $F_2$. In sections 4 and 5, examples of finitely generated torsion-free nilpotent groups with the $R_{\infty}$ property are presented, according to the nilpotency class and the Hirsch length. In particular, we show that for each $n>3$, there is a nilpotent group of Hirsch length $n$ with the $R_{\infty}$ property. Furthermore, nilpotent groups of nilpotency class 2 that have the $R_{\infty}$ property are also constructed.
In section 6, we turn to topological applications. We show that there exist nilmanifolds of dimension $n$ for each $n\ge 5$ on which every homeomorphism is isotopic to a fixed point free homeomorphism (Theorem \ref{fpf-maps-on-nilmanifolds}). In the last section, we consider a generalization of nilpotent groups, namely $\mathcal C$-nilpotent groups where $\mathcal C$ denotes the class of finite groups. These groups provide further examples of groups with the $R_{\infty}$ property. For any $n\ge 7$, we make use of the results from section 5 to construct a compact $\mathcal C$-nilpotent manifold of dimension $n$ for which every homeomorphism is isotopic to a fixed point free homeomorphism (Theorem \ref{fpf-maps-on-C-nilpotent-spaces}).
\section{virtually abelian groups}
In this section, we study the $R_{\infty}$ property for virtually abelian groups. These groups are finite extensions of abelian groups and they appear as fundamental groups of almost flat Riemannian manifolds.
The simplest torsion-free virtually abelian group with the $R_{\infty}$ property is the fundamental group of the Klein bottle $K$ which is the total space of a $S^1$-bundle over $S^1$. On the other hand, $K$ is finitely covered by the $2$-torus $T^2$ so that
$\pi_1(K)$ has $\mathbb Z^2$ as a finite index subgroup. Since $\mathbb Z^2$ does not have the $R_\infty$ property, it follows that in general the $R_\infty$ property is not even invariant under commensurability. Note that $\pi_1(K)$ is non-elementary non-Gromov hyperbolic and has polynomial growth.
Furthermore, we will show that the groups $\pi_1(K)\times \mathbb Z^n$ for
every $n\geq 0$, which are infra-abelian having $\mathbb Z^{n+2}$ as a subgroup of
index 2, have the $R_{\infty}$ property.
We will start by analyzing the group $\mathbb Z\rtimes \mathbb Z$ where the action is non-trivial. This group is the fundamental group of the Klein bottle.
The automorphism group of $\mathbb Z\rtimes \mathbb Z$ is folklore. Since we cannot find a precise reference for this fact, we state it in the following lemma and sketch a proof for completeness.
\begin {lemma} The group ${\rm Aut}(\mathbb Z\rtimes \mathbb Z)$ where $\mathbb Z\rtimes \mathbb Z$ is as above is isomorphic to
$(\mathbb Z\bigoplus \mathbb Z_2)\rtimes \mathbb Z_2$ where the action $\theta:\mathbb Z_2 \to {\rm Aut}(\mathbb Z\bigoplus \mathbb Z_2)$ sends the generator to the automorphism $(r,\epsilon) \to (-r,\epsilon)$. Furthermore, the inner automorphisms are isomorphic to the subgroup $(2\mathbb Z\bigoplus 0)\rtimes \mathbb Z_2$ and the quotient ${\rm Out}(G)$ is isomorphic to $\mathbb Z_2\oplus \mathbb Z_2$.
\end{lemma}
\begin{proof}
We use the presentation $\langle \alpha,\beta|\alpha \beta \alpha \beta^{-1} \rangle$ for the fundamental group of the Klein bottle. Given an endomorphism $\kappa$, write $\kappa(\alpha)=\alpha^{\epsilon}\beta^s$ and $\kappa(\beta)=\alpha^{r}\beta^{\delta}$. Now,
\begin{equation}\label{trivial}
\begin{aligned}
\kappa(\alpha \beta \alpha \beta^{-1})&=\kappa(\alpha)\kappa (\beta) \kappa(\alpha) \kappa(\beta^{-1}) \\
&=\alpha^{\epsilon}\beta^s \alpha^{r}\beta^{\delta} \alpha^{\epsilon}\beta^s (\alpha^{r}\beta^{\delta})^{-1} \\
&=\alpha^q\beta^{s+\delta+s-\delta}=\alpha^q\beta^{2s}
\end{aligned}
\end{equation}
for some integer $q$ that depends on the values of $\epsilon, s, r$, and $\delta$. Since $\alpha \beta \alpha \beta^{-1}=1$, the expression \eqref{trivial} yields the trivial element. Thus, $2s=0$ and hence $s=0$ so that $\kappa(\alpha)=\alpha^{\epsilon}$. It follows that
the automorphisms of this group are of the form $\alpha \mapsto \alpha^{\epsilon}, \beta \mapsto \alpha^r \beta^{\delta}$. Moreover, $\epsilon, \delta \in \{1,-1\}$. To see this, we note that given any two automorphisms $\alpha \mapsto \alpha^{\epsilon_1}, \beta \mapsto \alpha^{r_1} \beta^{\delta_1}$ and
$\alpha \mapsto \alpha^{\epsilon_2}, \beta \mapsto \alpha^{r_2} \beta^{\delta_2}$, the composite is an automorphism such that $\alpha \mapsto \alpha^{\epsilon_1\epsilon_2}$
and the $\beta$-exponent of the image of $\beta$ is $\delta_1\delta_2$. Applying this to an automorphism and its inverse shows that $\epsilon$ and $\delta$ must be invertible integers, that is, $\epsilon, \delta \in \{1,-1\}$. Consider the automorphisms
\begin{equation*}
\begin{aligned}
&\varphi_1: \qquad \alpha \mapsto \alpha ; \quad \beta \mapsto \alpha \beta \\
&\varphi_2: \qquad \alpha \mapsto \alpha ; \quad \beta \mapsto \beta^{-1} \\
&\varphi_3: \qquad \alpha \mapsto \alpha^{-1} ; \quad \beta \mapsto \beta
\end{aligned}
\end{equation*}
Writing $\mathbb Z_2=\{\pm 1\}$, the generators for $(\mathbb Z \oplus \mathbb Z_2)\rtimes \mathbb Z_2$ are $\eta_1=(1,1,1), \eta_2=(0,-1,1)$ and $\eta_3=(0,1,-1)$. The homomorphism $\varphi_i \mapsto \eta_i$ for $i=1,2,3$ is the desired isomorphism.
\end{proof}
Now we will show that $\mathbb Z \rtimes \mathbb Z$ has the $R_{\infty}$ property.
\begin{theorem}\label{Klein}
For any automorphism $\varphi$ of $\mathbb Z\rtimes \mathbb Z$, we have $R(\varphi)=\infty$.
\end{theorem}
\begin{proof} Denote by $x,y$ the generators of the group $\mathbb Z\rtimes \mathbb Z$, where
$x$ is a generator of the first copy of $\mathbb Z$ and $y$ is a generator of the
second copy of $\mathbb Z$. In this group, which is the fundamental group of the
Klein bottle, we have the relation $xyxy^{-1}=1$. The automorphisms of
$\mathbb Z\rtimes \mathbb Z$ from their description given in the proof of Lemma 2.1 can be divided into four cases as follows.\\
a) $x\mapsto x; y\mapsto x^ry~$ \ \ \ \ \ \ b) $x\mapsto x; y \mapsto x^ry^{-1}$ \\
c) $x \mapsto x^{-1}; y \mapsto x^ry$ \ \ \ \ d) $x\mapsto x^{-1}; y \mapsto
x^ry^{-1}$
\noindent
where $r\in \mathbb Z$.
Cases a) and c) are treated as follows. These automorphisms leave invariant the subgroup generated by $x$.
So every such automorphism induces a homomorphism of the short exact sequence
$$0\to \mathbb Z\to \mathbb Z\rtimes \mathbb Z \to \mathbb Z\to 0$$
and induces in the quotient the identity homomorphism of $\mathbb Z$ which has an infinite number
of conjugacy classes. So the result follows.
For Case b), we have the automorphism $x\to x; y \to x^ry^{-1}$
which maps a generic element $x^my^k$ to $x^my^{-k}$ if $k$ is even and to
$x^{m+r}y^{-k}$ if $k$ is odd (these elements are obtained by a straightforward
calculation using the relation $xyxy^{-1}=1$ on the group).
So the elements of the group in the Reidemeister class of the element $x^i$
are of the form either
$x^my^{2n}x^iy^{2n}x^{-m}$ or $x^my^{2n+1}x^iy^{2n+1}x^{-m-r}$.
After simplifying the expression using the relation of the group,
these elements can be written as $x^iy^{4n}$ or $x^{-r-i}y^{4n+2}$, respectively. So
there are an infinite number of distinct Reidemeister classes since $x^i$ is in the Reidemeister class
of $x^j$ if and only if $i=j$. This proves Case b).
For Case d), we have the automorphism $x\to x^{-1}; y \to x^ry^{-1}$
which maps a generic element $x^my^k$ to $x^{-m}y^{-k}$ if $k$ is even and to
$x^{-m+r}y^{-k}$ if $k$ is odd.
So the elements of the group in the Reidemeister class of the element $x^iy$
are the elements of the form either
$x^my^{2n}x^iyy^{2n}x^{m}$ or $x^my^{2n+1}x^iyy^{2n+1}x^{m-r}$.
Again, using the relation of the group,
these elements can be written as $x^iy^{1+4n}$ or $x^{r-i}y^{4n+3}$, respectively. So
there are an infinite number of distinct Reidemeister classes since $x^iy$ and $x^jy$ are in the same Reidemeister class
if and only if $i=j$. This proves Case d).
\end{proof}
Next, we construct from the Klein bottle examples of Bieberbach groups with the $R_{\infty}$ property. First, the following result is crucial in our construction.
\begin{proposition}\label{simplest}
The group $\mathbb Z\rtimes \mathbb Z_2$ has the $R_{\infty}$ property.
\end{proposition}
\begin{proof} Let $\varphi:\mathbb Z\rtimes \mathbb Z_2\to \mathbb Z\rtimes \mathbb Z_2$ be an automorphism.
Since the only elements of infinite order are the elements of the form $t^r$
for $t\in \mathbb Z$ a generator and $r\ne 0$, the subgroup $\mathbb Z$ is characteristic.
Therefore, $\varphi|_{\mathbb Z}:\mathbb Z \to \mathbb Z$. Since ${\rm Aut}(\mathbb Z)=\{\pm 1\}$, $\varphi|_{\mathbb Z}$ is either the identity or multiplication by $-1$. In the former case, the assertion that $R(\varphi)=\infty$ follows from $(2)$ of Lemma \ref{R-facts}.
For the latter case, the Reidemeister class of $(t^l,1)$ contains at most two elements where $1$ is the non-trivial
element of $\mathbb Z_2$. To see this, suppose $(t^r,1)$ and $(t^s,1)$ are $\varphi$-conjugate, that is, $(t^s,1)=\alpha (t^r,1) \varphi(\alpha)^{-1}$ for some $\alpha=(t^j,\epsilon) \in \mathbb Z \rtimes \mathbb Z_2$ where $\epsilon=0,1$. Suppose $\varphi(t^0,1)=(t^n,1)$. Now, $\varphi(\alpha)=(t^{-j+\epsilon n},\epsilon)$ so that $\varphi(\alpha)^{-1}$ is equal to $(t^{-j+n},1)$ if $\epsilon =1$ or $(t^j,0)$ if $\epsilon = 0$. It follows that
$$\alpha (t^r,1) \varphi(\alpha)^{-1}=(t^j,1)(t^r,1)(t^{-j+n},1)=(t^{j-r},0)(t^{-j+n},1)=(t^{n-r},1) \qquad \text{when $\epsilon=1$}$$
and
$$\alpha (t^r,1) \varphi(\alpha)^{-1}=(t^j,0)(t^r,1)(t^{j},0)=(t^{j+r},1)(t^{j},0)=(t^{r},1) \qquad \text{when $\epsilon=0$}.$$ Since the set $\{(t^l,1)\}$ is infinite and the Reidemeister classes have finite length, it follows that $R(\varphi)=\infty$.
\end{proof}
The next result shows that for any positive integer $n\ge 2$, one can construct a finitely generated Bieberbach group of Hirsch length $n$ that has the $R_{\infty}$ property.
\begin{theorem}\label{Klein-Zn}
The group $\pi_1(K)\times \mathbb Z^n$ has the $R_{\infty}$ property for
all integer $n\geq 0$.
\end{theorem}
\begin{proof} Consider the presentation $\pi_1(K)=\langle a,b | abab^{-1}\rangle$.
The center of this group is the subgroup generated by $b^2$ and similarly the
center of $\pi_1(K)\times \mathbb Z^n$ is the subgroup $\langle b^2\rangle \times \mathbb Z^n$.
Now consider the short exact sequence
$$1\to \langle b^2\rangle \times \mathbb Z^n \to \pi_1(K)\times \mathbb Z^n \to \mathbb Z\rtimes \mathbb Z_2 \to 1.$$
Since the center is characteristic with respect to automorphisms, any
automorphism of $\pi_1(K)\times \mathbb Z^n$ is an automorphism of the short exact
sequence. Since the quotient $\mathbb Z\rtimes \mathbb Z_2$ has the $R_{\infty}$
property by Proposition \ref{simplest}, the result follows from Lemma \ref{R-facts}.
\end{proof}
\begin{remark} This result in the case $n=0$ gives an alternate proof of Theorem \ref{Klein}.
\end{remark}
\begin{remark} Given a virtually nilpotent group $G$, we have a short exact sequence $1 \to N \to G \to F \to 1$ where $F$ is finite and $N$ is nilpotent. If $N$ is characteristic and has the $R_{\infty}$ property then it follows easily from Lemma \ref{R-facts} that $G$ also has the $R_{\infty}$ property.
\end{remark}
\section{Commutators and free nilpotent groups}
Given $r$ a positive integer and $k$ either an integer or $\infty$, consider the free nilpotent group $G(r,k)=F_r/\Gamma_{k+1}(F_r)$ of rank $r$ and nilpotency class $k$, where for $k=\infty$ we have $G(r,\infty)=F_r$ because
$F_r$ is residually nilpotent. For which values of $r$ and $k$ (including $k=\infty$) does the group $G(r,k)$ have the $R_\infty$ property? For the case $k=\infty$, the free group $F_r$ for $r\ge 2$ is a torsion-free non-elementary Gromov hyperbolic group, thus it follows from \cite{levitt-lustig} that $F_r$ has the $R_{\infty}$ property. We should point out that the proof in \cite{levitt-lustig} is geometric. In this section,
we will show that $F_2/\Gamma_9(F_2)$ has the property. It seems that
for $i<9$ the group $F_2/\Gamma_i(F_2)$ does not have this property.
As a result, we will give a purely algebraic proof of
the fact that an automorphism of a free group of rank $2$ has an infinite number
of twisted conjugacy classes.
Let us denote $\Gamma_n(F_2)$ simply by $\Gamma_n$.
From Witt's Formulae (\cite{kms} Theorem 5.11), the ranks of the abelian groups $\Gamma_i/\Gamma_{i+1}$ can be calculated: for $F_2$ one has $rk(\Gamma_n/\Gamma_{n+1})=\frac{1}{n}\sum_{d|n}\mu(d)2^{n/d}$, where $\mu$ denotes the M\"obius function. In particular, we have:\\
$rk(\Gamma_2/\Gamma_{3})=1 $, $rk(\Gamma_3/\Gamma_{4})=2 $, $rk(\Gamma_4/
\Gamma_{5})=3$, $rk(\Gamma_5/\Gamma_{6})=6 $,
$rk(\Gamma_6/\Gamma_{7})=9$, $rk(\Gamma_7/\Gamma_{8})=18 $, $rk
(\Gamma_8/\Gamma_{9})=30$.
\begin{lemma} Let $G(2,k)$ be the free nilpotent group of rank $2$ on two generators $a$ and $b$, nilpotency class $k$ and
$\varphi:G(2,k) \to G(2,k)$ a homomorphism. Suppose $k\geq 3$. The induced homomorphism
$\tilde \varphi:\Gamma_2(G(2,k))/\Gamma_3(G(2,k))\to \Gamma_2(G
(2,k))/\Gamma_3(G(2,k))$ is multiplication by $\det (M)$ where $M$ is the matrix
of ${\varphi}^{Ab}:G(2,k)^{Ab} \to G(2,k)^{Ab}$ and $G(2,k)^{Ab}$ denotes the abelianization of $G(2,k)$.
\end{lemma}
\begin{proof} Since $k\ge 3$, it follows that $\Gamma_i(G(2,k))=\Gamma_i(F_2)/\Gamma_{k+1}(F_2)$ for $i\le k$. This means that $\Gamma_2(G(2,k))/\Gamma_3(G(2,k))\cong \Gamma_2/\Gamma_3$ and $G(2,k)^{Ab} \cong {F_2}^{Ab}$. Suppose the induced map ${\varphi}^{Ab}$ is given by the matrix $M=\left(\begin{smallmatrix}
\alpha & \beta \\
\gamma & \delta
\end{smallmatrix}\right)$.
Note that $\Gamma_2/\Gamma_3$ is generated by the single element $B\Gamma_3$ where $B=[a,b]=aba^{-1}b^{-1}$. Using classical calculus of commutators (see \cite{kms} Chapter 5), we have $Ba\Gamma_3=aB\Gamma_3$ and $Bb\Gamma_3=bB\Gamma_3$. Since $aba^{-1}=Bb$, it is straightforward to verify that $\tilde \varphi(B\Gamma_3)=B^k\Gamma_3$ where $k=\alpha \delta -\beta \gamma=\det(M)$; indeed, modulo $\Gamma_3$ the commutator map is bilinear, so $[a^p b^q, a^r b^s]\Gamma_3=B^{ps-qr}\Gamma_3$, and the claim follows by applying this identity to the images of $a$ and $b$.
\end{proof}
\begin{corollary}\label{det=1}
If $\varphi:G(2,k) \to G(2,k)$ is an automorphism such that $\det({\varphi}^{Ab})=1$
then $R(\varphi)=\infty$.
\end{corollary}
\begin{proof} We have a homomorphism of short exact sequence $0\to \Gamma_2/
\Gamma_3 \to G(2,k)/\Gamma_3 \to G(2,k)/\Gamma_2 \to 0$. Since the induced map on the
subgroup $\Gamma_2/\Gamma_3=\mathbb Z$ is multiplication by $1$, it follows from formula (2.2) of \cite{daci-peter} that $G(2,k)/\Gamma_3$ has the $R_{\infty}$ property. Since $\Gamma_3$ is characteristic in $G(2,k)$, the result follows for $G(2,k)$.
\end{proof}
In order to study the $R_{\infty}$ property, it suffices, as a result of Corollary \ref{det=1}, to consider automorphisms $\varphi$ whose induced automorphisms ${\varphi}^{Ab}$ have determinant $-1$.
While we have succeeded in determining the $R_\infty$ property for $G(2,k)$ for $k\geq 9$, we do not know how to extend our techniques to $G(r,k)$ for $r>2$ and $k>2$. On the other hand, the case of the free abelian group $G(r,1)$ of rank $r$ is well-understood.
Next, we will construct a nontrivial element in $\Gamma_8/\Gamma_9\subset G(2,8)$ which is fixed by the restriction of any automorphism $\varphi:G(2,8)\to G(2,8)$.
\begin{proposition} If $\varphi:G(2,8)\to G(2,8)$ is an automorphism then $R(\varphi)=\infty$.
\end{proposition}
\begin{proof} By Corollary \ref{det=1}, the result holds if $\det(\varphi^{Ab})=1$. So let us
assume that $\det(\varphi^{Ab})=-1$. Then consider the quotient $\Gamma_3/\Gamma_4$.
This group has rank 2 and has as generators $[B,x],[B,y]$ where $B=[x,y]$. A
straightforward calculation shows that the matrix of the automorphism induced
by $\varphi$ has determinant $-1$. Therefore, setting $w=[[B,x],[B,y]]\in \Gamma_6/\Gamma_7$, the image of $w$ is $w^{-1}$. From
\cite{kms} Lemma 5.4 (pg 314), it follows that $w\ne 1$. Now consider the element
$w_1=[B,w]\in \Gamma_8/\Gamma_9$. From above, we have $\varphi'(w_1)=w_1$ where $\varphi'$ is the induced homomorphism on $\Gamma_8/\Gamma_9$ and
again from \cite{kms} Lemma 5.4 we conclude that $w_1\ne 1$. Then the automorphism $\varphi$ induces an automorphism $\hat \varphi$ of the following short exact sequence, and hence a commutative diagram.
\begin{equation*}
\begin{CD}
0 @>>> \Gamma_8/\Gamma_9 @>>> F_2/\Gamma_9 @>>> F_2/\Gamma_8 @>>> 1 \\
@. @V{\varphi'}VV @V{\hat \varphi}VV @V{\bar \varphi}VV @. \\
0 @>>> \Gamma_8/\Gamma_9 @>>> F_2/\Gamma_9 @>>> F_2/\Gamma_8 @>>> 1
\end{CD}
\end{equation*}
If the induced map $\bar\varphi$ on the quotient has infinite Reidemeister number, then the result follows; otherwise $R(\bar \varphi)$ is finite. Note that $\varphi'(w_1)=w_1$ with $w_1\ne 1$, and that $\Gamma_8/\Gamma_9$ is finitely generated torsion-free abelian. Applying Lemma \ref{R-facts}, we conclude that $R(\varphi')$ is infinite; together with the finiteness of $R(\bar \varphi)$, this gives that $R(\varphi)$ is infinite and the
result follows.
\end{proof}
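In the proof above, the ``straightforward calculation'' on $\Gamma_3/\Gamma_4$ can be organized as follows (in our notation). Modulo $\Gamma_4$, the map $u\mapsto[B,u]$ is additive in the image of $u$ in the abelianization, and $\varphi(B)\equiv B^{\det(M)}$ modulo $\Gamma_3$ by the Lemma above, where $M$ is the matrix of $\varphi^{Ab}$. Hence, on the basis $[B,x],[B,y]$ of $\Gamma_3/\Gamma_4$, the induced automorphism has matrix $\det(M)\cdot M$, whose determinant is
$$
\det\big(\det(M)\cdot M\big)=\det(M)^2\det(M)=\det(M)^3=-1
$$
when $\det(M)=-1$, as claimed.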
Since the subgroups $\Gamma_k$ of the lower central series are characteristic, we can use induction and apply the same argument as in the proof above to the short exact sequence $0\to \Gamma_k/\Gamma_{k+1} \to F_2/\Gamma_{k+1} \to F_2/\Gamma_k \to 1$ to show that $F_2/\Gamma_{k+1}$ has the desired property. Thus we obtain the following result.
\begin{theorem}\label{free-k-R}
Let $G(2,k)$ be the free nilpotent group on $2$ generators of nilpotency class $k$. If $k\ge 9$ (including the case $k=\infty$) then for any automorphism $\varphi:G(2,k) \to G(2,k)$, we have $R(\varphi)=\infty$, i.e., $G(2,k)$ has the $R_\infty$ property.
\end{theorem}
\begin{remark} When $k=\infty$, $G(2,\infty)=F_2$. Thus, our proof of Theorem \ref{free-k-R} provides a purely algebraic proof of the fact that $F_2$ has the $R_{\infty}$ property, a result proven in \cite{levitt-lustig} using techniques from geometric group theory.
\end{remark}
\section{$R_{\infty}$ and nilpotency class}
If $G$ is a finitely generated torsion-free abelian group, then $G$ does not have the $R_{\infty}$ property. Since abelian groups have nilpotency class $1$ and Example \ref{Dyer} exhibits a group of nilpotency class $6$ with the $R_{\infty}$ property, it is interesting to know whether one can construct a nilpotent group of nilpotency class $\le 5$ that has the $R_{\infty}$ property. First, is there a finitely generated torsion-free nilpotent group of nilpotency class $2$ which has
the $R_\infty$ property?
\begin{example}\label{nil-class2}
We now construct a group of nilpotency class 2 which has the
$R_{\infty}$ property. Consider the free nilpotent group $G(4,2)$ on $4$ generators $x,y,z,w$ of nilpotency class 2; then $\Gamma_2(G(4,2))/\Gamma_3(G(4,2))$ is a free abelian group of rank 6 in which the cosets defined by the commutators $[x,y],[x,z],[x,w]$,$[y,z],[y,w],[z,w]$ form a basis. Take the quotient $G$ of $G(4,2)$ by the central subgroup generated by $[x,z],[x,w],[y,z]$. We will prove that this group has the $R_{\infty}$ property.
Let $\varphi:G\to G$ be an automorphism of this quotient such that the matrix of the induced map on the abelianization is given by
$$
M=\left[ \begin{array}{cccc}
a_1 & a_2 & a_3 & a_4 \\
b_1 & b_2 & b_3 & b_4 \\
c_1 & c_2 & c_3 & c_4 \\
d_1 & d_2 & d_3 & d_4
\end{array} \right].
$$
The proof is divided into several steps. \\
Step 1. We will show that $b_1=d_1=b_3=d_3=0$. Suppose that $b_3\ne 0$; the proofs of the other cases are similar and we leave them to the reader. Because the determinant of each of the $2\times 2$ matrices,
$$
\left[ \begin{array}{cc}
a_i & a_j \\
b_i & b_j
\end{array} \right],
\left[ \begin{array}{cc}
b_i & b_j \\
d_i & d_j
\end{array} \right],
\left[ \begin{array}{cc}
c_i & c_j \\
d_i & d_j
\end{array} \right]
$$
\noindent for $(i,j)=(1,3),(2,3),(1,4)$ is zero,
we have $(a_1,a_3)=\lambda(b_1,b_3)$ and $(a_2,a_3)=\lambda_1(b_2,b_3)$, so that
$\lambda=\lambda_1$ and hence $(a_1,a_2,a_3)=\lambda(b_1,b_2,b_3)$. Similarly $(d_1,d_2,d_3)=\alpha(b_1,b_2,b_3)$.
If $\alpha=0$ then $d_4\ne 0$, since the determinant of the matrix is nonzero.
But $c_1d_4=b_1d_4=0$, which implies that $c_1=b_1=0$; then also $a_1=\lambda b_1=0$ and $d_1=\alpha b_1=0$, so the first column of $M$ is
zero, a contradiction. So let $\alpha \ne 0$. Then similarly, because
$\alpha b_3 \ne 0$, it follows that $(c_1,c_2, c_3)=\theta(b_1,b_2,b_3)$. So the rows of $M$, restricted to
the first three columns, are proportional, whence $\det(M)=0$. This is a
contradiction and thus $b_3=0$.\\
Step 2. From now on we assume that $b_1=d_1=b_3=d_3=0$. In this step we compute the form of the matrix under the assumption that $a_3\ne 0$. Because $b_1=d_1=b_3=d_3=0$ we must have $b_2a_3=d_2c_3=a_1b_4=c_1d_4=0$. Since
$a_3\ne 0$, we get $b_2=0$; then $b_4\ne 0$ (the second row of $M$ cannot vanish), and hence $a_1=0$. So the matrix $M$ is of the form
$$
M=\left[ \begin{array}{cccc}
0 & a_2 & a_3 & a_4 \\
0 & 0 & 0 & b_4 \\
c_1 & c_2 & c_3 & c_4 \\
0 & d_2 & 0 & d_4
\end{array} \right].
$$
\noindent Now $\det (M)=b_4a_3c_1d_2$. Because this determinant is $\pm 1$, all four factors are $\pm 1$. From the equations above it also follows that $c_3=d_4=0$. Thus
$\det (M-Id)=1-b_4d_2-c_1a_3+b_4d_2c_1a_3=(1-b_4d_2)(1-c_1a_3)$. If $\det (M-Id)=0$ then $R(\varphi^{Ab})=\infty$. This always happens except when $b_4d_2=c_1a_3=-1$, in which case $\det (M-Id)=4$. In this situation we compute the matrix
$N$ of the induced homomorphism on $\Gamma_2/\Gamma_3$. This matrix is
$$
N=\left[ \begin{array}{ccc}
0 & a_2b_4 & a_3b_4 \\
0 & -d_2b_4 & 0 \\
c_1d_2 & -d_2c_4 & 0
\end{array} \right].
$$
\noindent Using the fact that $d_2b_4=-1$, we have $\det (N-Id)=0$ since the second row of $N-Id$ becomes zero, and the result follows.\\
Step 3. Suppose now that $a_3=0$. Then we have
$$
M=\left[ \begin{array}{cccc}
a_1 & a_2 & 0 & a_4 \\
0 & b_2 & 0 & b_4 \\
c_1 & c_2 & c_3 & c_4 \\
0 & d_2 & 0 & d_4
\end{array} \right].
$$
\noindent So we have $d_2c_3=a_1b_4=c_1d_4=0$. The third column of $M$ is $(0,0,c_3,0)^T$, so invertibility forces $c_3\ne 0$, and it follows that $d_2=0$. Since $\det (M)=a_1b_2c_3d_4$, all four factors are $\pm 1$ and consequently $b_4=c_1=0$. Then $\det (M-Id)= (a_1-1)(b_2-1)(c_3-1)(d_4-1)$ and, as before, if $\det (M-Id)=0$ then the Reidemeister number is infinite. This always happens except when $a_1=b_2=c_3=d_4=-1$, in which case $\det (M-Id)$ is 16. In this case we compute the matrix
$N$ of the induced homomorphism on $\Gamma_2/\Gamma_3$. This matrix is
$$
N=\left[ \begin{array}{ccc}
a_1b_2 & -b_2a_4 & 0 \\
0 & b_2d_4 & 0 \\
0 & c_2d_4 & c_3d_4
\end{array} \right].
$$
\noindent Using the fact that $a_1=b_2=-1$, we have $\det (N-Id)=0$ since the first column of $N-Id$ becomes zero, and the result follows. This concludes the proof.
\end{example}
In contrast to the last example, we will next show that the {\it free} nilpotent groups of nilpotency class $2$ do not have the $R_{\infty}$ property.
\begin{example}\label{free-nil-class2}
We now show that the free nilpotent groups $G(r,2)$ do not have the $R_{\infty}$ property. It suffices, for each $r$, to exhibit an automorphism
$\varphi:G(r,2) \to G(r,2)$ such that $R(\varphi)<\infty$.\\
For $r=2$, consider the automorphism given by $\varphi(x)=x^2y$ and $\varphi(y)=x^5y^2$. The matrix of the induced automorphism on the abelianization $\mathbb Z\oplus \mathbb Z$ has the two eigenvalues $\lambda_1=2+\sqrt{5}$ and $\lambda_2=2-\sqrt{5}$. It follows that $Fix(\varphi^{Ab})=\{1\}$, so that $R(\varphi^{Ab})<\infty$. The induced automorphism $\varphi'$ on $\Gamma_2/\Gamma_3=\mathbb Z$ is multiplication by $-1$, so that $R(\varphi')<\infty$. Since we have a central extension, $R(\varphi)=R(\varphi^{Ab})R(\varphi')$ by Lemma \ref{R-facts} and the result follows.\\
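Explicitly, the matrix of $\varphi^{Ab}$ and the relevant determinants are
$$
M=\begin{pmatrix} 2 & 5\\ 1 & 2\end{pmatrix},\qquad \det(M)=-1,\qquad \det(M-Id)=(2-1)^2-5=-4\ne 0,
$$
so $\varphi^{Ab}$ has no eigenvalue $1$ and $R(\varphi^{Ab})=|\det(M-Id)|=4$, while $R(\varphi')=|-1-1|=2$; hence $R(\varphi)=R(\varphi^{Ab})R(\varphi')=8<\infty$.\\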
Let $r\ge 3$. If $r$ is even, define $\varphi:(\mathbb Z\oplus \mathbb Z)^{r/2} \to (\mathbb Z\oplus \mathbb Z)^{r/2}$ as a direct sum of copies of the automorphism of $\mathbb Z\oplus \mathbb Z$ defined above. If $r$ is odd, define
$$\varphi:(\mathbb Z\oplus \mathbb Z)^{(r-1)/2}\oplus\mathbb Z \to (\mathbb Z\oplus \mathbb Z)^{(r-1)/2}\oplus\mathbb Z$$
as a direct sum of copies of the automorphism of $\mathbb Z\oplus \mathbb Z$ defined above and the automorphism given by multiplication by $-1$ in the last coordinate. Since $G(r,2)$ is free nilpotent, this automorphism of the abelianization lifts to an automorphism of $G(r,2)$, still denoted $\varphi$. It is easy to see that the product of any two eigenvalues of $\varphi^{Ab}$, for either $r$ even or
$r$ odd, cannot be 1. Therefore
$\varphi':\Gamma_2/\Gamma_3\to \Gamma_2/\Gamma_3$ does not have $1$ as an eigenvalue, so that $Fix \varphi'=\{1\}$ and the result follows from Lemma \ref{R-facts}.
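One way to verify the claim about $\varphi'$: the quotient $\Gamma_2(G(r,2))/\Gamma_3(G(r,2))$ may be identified with $\Lambda^2(\mathbb Z^r)$ via $[x_i,x_j]\Gamma_3\mapsto x_i\wedge x_j$, and under this identification $\varphi'=\Lambda^2(\varphi^{Ab})$. The eigenvalues of $\varphi'$ are therefore the pairwise products $\lambda_i\lambda_j$, $i<j$, of the eigenvalues of $\varphi^{Ab}$, and by the choice of the blocks none of these products equals $1$.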
\end{example}
\section{$R_{\infty}$ and Hirsch length}
There is only one finitely generated torsion-free nilpotent group of Hirsch length 2, namely
$\mathbb Z\oplus\mathbb Z$, and it does not have the $R_\infty$ property.
For which integers $n$ is there a finitely generated torsion-free nilpotent group of Hirsch length $n$ with the $R_\infty$ property?
\begin{example}\label{length-ex1}
Let us consider torsion-free nilpotent
groups of Hirsch length 3. They are classified as follows: for each integer $r$, consider the group $N_r$ given by the presentation $\langle a,b,c\mid [a,c]=[b,c]=1,\ [a,b]=c^r\rangle$. It is not hard to show that there is an automorphism
$\varphi:N_r \to N_r$ which sends $a\mapsto a^2b$ and $b \mapsto a^5b^2$. The automorphism $\varphi$ induces a commutative diagram of automorphisms of the short exact sequence
$$0\to C\to N_r \to \mathbb Z \oplus \mathbb Z \to 0$$
\noindent where $C$ is the center of the group. Since the extension is central and the induced automorphisms $\varphi'$ and $\overline {\varphi}$ have finite Reidemeister numbers, it follows from (4) of Lemma \ref{R-facts} that $R(\varphi)$ is finite.
\end{example}
\begin{example}\label{ex3}
We now construct a finitely generated nilpotent group $G$ of Hirsch length 4 such that
$\Gamma_4(G)=1$, $\Gamma_3(G)=\mathbb Z$, and $G$ has the $R_{\infty}$ property. This group is a quotient of
the nilpotent group $F_2/\Gamma_4(F_2)$ where $F_2$ is the free group on two
generators. The group $G$ has the property that
$G/\Gamma_3(G)=F_2/\Gamma_3(F_2)$. The difference comes between
$\Gamma_3(G)/\Gamma_4(G)=\mathbb Z$ and $\Gamma_3(F_2)/\Gamma_4(F_2)=\mathbb Z \oplus \mathbb Z$ (see
e.g. \cite{kms}). Now we describe $G$ in terms of generators and relations. Let
$x,y$ be the generators, set $B=[x,y]$ and $w_1=[[x,y],x]$, and impose the relation
$[[x,y],y]=1$. Note that this last relation does not hold in the group $F_2/\Gamma_4(F_2)$.
Now given an automorphism $\varphi$ of $G$, the matrix of the induced automorphism on the abelianization has a special form because the relation $[B,y]=1$ must be preserved: its first column is $(a,b)^T$ and its second column is $(0,d)^T$. Since $a,d=\pm 1$, the automorphism induced on the abelianization either has eigenvalue 1 or satisfies $\det(\varphi^{Ab})=1$. In the former case the result follows because the induced map on the abelianization has infinite Reidemeister number; in the latter case the result follows from an argument similar to that of Lemma 2.1. Let $H=F_2/\Gamma_4(F_2)$. Then the group $G$ given by the presentation
$\langle x, y|\Gamma_4,[B,y]\rangle$, which can be regarded as a quotient of $H$, has the $R_\infty$ property.
\end{example}
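To make the special form of the matrix explicit (a routine computation, in the notation of the example): write $\varphi(x)\equiv x^{a}y^{b}$ and $\varphi(y)\equiv x^{\beta}y^{d}$ modulo $\Gamma_2(G)$, so that $\varphi(B)\equiv B^{ad-b\beta}$ modulo $\Gamma_3(G)$. Applying $\varphi$ to the relation $[B,y]=1$ and using $[B,x]=w_1$, $[B,y]=1$ and the centrality of $\Gamma_3(G)=\langle w_1\rangle\cong\mathbb Z$ gives
$$
1=[\varphi(B),\varphi(y)]=w_1^{(ad-b\beta)\beta},
$$
and since $ad-b\beta=\pm 1$ this forces $\beta=0$, i.e., the second column of the matrix is $(0,d)^T$.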
\begin{remark} Example \ref{ex3} does not imply that $F_2$
has the $R_{\infty}$ property, but it gives an example of a nilpotent group with the $R_{\infty}$ property that is simpler than $F_2/\Gamma_9(F_2)$. Further, it has the least possible Hirsch length among all finitely generated torsion-free nilpotent groups having the $R_{\infty}$ property.
\end{remark}
\begin{example}\label{n>3}
Here we construct for every integer $\ell\geq 4$ a finitely generated torsion-free nilpotent group of Hirsch length $\ell$ with the $R_{\infty}$ property. We will show that the product of the group given in Example \ref{ex3} with $\mathbb Z^n$, $n\geq 0$, has the desired property.
Now for any $n\ge 0$, let $G_n=G\times \mathbb Z^{n}$ where $G=\langle x, y|\Gamma_4,[B,y]\rangle$ is the group constructed in Example \ref{ex3}. Then $\Gamma_k(G_n)=\Gamma_k(G)$ for $k\ge 2$, so $G_n$ is of rank $n+2$, has nilpotency class $3$, and has Hirsch length $n+4$. Moreover, $\Gamma_3(G_n)\subseteq Z(G_n)=Z(G) \times \mathbb Z^{n}$. Now, given $\varphi\in {\rm Aut}(G_n)$, we have the following commutative diagram.
\begin{equation}
\begin{CD}
0 @>>> \Gamma_3(G_n) @>>> G_n @>>> G/\Gamma_3(G)=G_n/\Gamma_3(G_n) @>>> 1 \\
@. @V{\varphi'}VV @V{\varphi}VV @V{\bar \varphi=\bar \zeta}VV @. \\
0 @>>> \Gamma_3(G_n) @>>> G_n @>>> G/\Gamma_3(G)=G_n/\Gamma_3(G_n) @>>> 1
\end{CD}
\end{equation}
Since $G$ has the $R_\infty$ property and the following commutative diagram
\begin{equation}
\begin{CD}
0 @>>> \Gamma_3(G) @>>> G @>>> G/\Gamma_3(G) @>>> 1 \\
@. @V{\zeta'}VV @V{\zeta}VV @V{\bar \zeta}VV @. \\
0 @>>> \Gamma_3(G) @>>> G @>>> G/\Gamma_3(G) @>>> 1
\end{CD}
\end{equation}
yields the product formula $R(\zeta)=R(\zeta')R(\bar \zeta)$ by $(4)$ of Lemma \ref{R-facts}, it follows that
$$R(\zeta)=\infty \Rightarrow R(\zeta')=\infty \quad \text{or} \quad R(\bar \zeta)=\infty.$$ Thus $R(\varphi)=\infty$ if $R(\bar \zeta)=\infty$. If $R(\zeta')=\infty$ then $R(\varphi')=\infty$ since $\Gamma_3(G_n)$ projects onto $\Gamma_3(G)$. Hence, $R(\varphi)=\infty$.
\end{example}
\section{Fixed point free homeomorphisms on nilmanifolds}
Recall that a compact nilmanifold is the coset space of a finite dimensional nilpotent Lie group by a closed cocompact subgroup. A classical result of A. Mal'cev asserts that such nilmanifolds are coset spaces of simply connected nilpotent Lie groups by uniform discrete subgroups. Furthermore, there is a one-to-one correspondence between finitely generated torsion-free nilpotent groups $\Gamma$ and compact nilmanifolds $M=K(\Gamma,1)$.
From Example \ref{n>3}, we can associate to each such finitely generated torsion-free nilpotent group a compact nilmanifold so that the Reidemeister number of each homeomorphism is infinite. As an application, we obtain the following result for fixed point free homeomorphisms on these nilmanifolds.
\begin{theorem}\label{fpf-maps-on-nilmanifolds}
For any $n\ge 4$, there exists a compact nilmanifold $M$ with $\dim M=n$ such that every homeomorphism $f:M\to M$ is homotopic to a fixed point free map. Moreover, if $n\ge 5$ then $f$ is isotopic to a fixed point free homeomorphism.
\end{theorem}
\begin{proof} For any $n\ge 4$, let $M$ be the compact $K(G_{n-4},1)$ nilmanifold where $G_k$ denotes the group as constructed in Example \ref{n>3}. Since the Hirsch length of $G_{n-4}$ is $n$, it follows that $\dim M=n$. If $f:M\to M$ is a homeomorphism, then $f_{\#}$ is an automorphism of $G_{n-4}$. Thus, $R(f_{\#})=\infty$. It follows from \cite{felshtyn-hill-wong} or \cite{go:nil2} that $N(f)=0$, that is, the Nielsen number vanishes. It follows from a classical theorem of Wecken that $f$ is deformable to be fixed point free. If $n\ge 5$, we invoke the isotopy Wecken theorem of Kelly \cite{kelly} to obtain the desired fixed point free homeomorphism in the isotopy class of $f$.
\end{proof}
\begin{remark} The isotopy Wecken theorem of Kelly also holds in dimension 2 \cite{jiang-guo}, based upon the Nielsen-Thurston classification of surface homeomorphisms, and in dimension 3 \cite{jww}, based upon techniques of $3$-manifolds. It is, however, unknown whether it holds in dimension $4$ in general. In fact, it is not even known whether a homeomorphism of the $4$-torus $T^4$ with zero Lefschetz number (hence deformable to be fixed point free) can be isotopic to a fixed point free homeomorphism.
It is known \cite{MO} that the well-known Arnold conjecture holds for nilmanifolds: the minimal number of fixed points of a Hamiltonian symplectomorphism of a compact symplectic nilmanifold of dimension $2n$ is at least $2n+1$ (for more details on the Arnold conjecture, see e.g. \cite{R}). Based upon our results, any symplectomorphism of the even dimensional nilmanifolds constructed in Theorem \ref{fpf-maps-on-nilmanifolds} can be isotoped to a fixed point free diffeomorphism, but the isotopy does not respect the Hamiltonian structure, so the resulting fixed point free diffeomorphism is {\it not} a Hamiltonian symplectomorphism.
\end{remark}
\begin{remark}
It is well-known (see e.g. \cite{charlap} or \cite{vasq}) that there is a one-to-one correspondence between compact flat manifolds and finitely generated Bieberbach groups, that is, compact Riemannian manifolds with zero sectional curvatures are precisely the finite dimensional aspherical manifolds with finitely generated torsion-free virtually abelian fundamental groups. It is natural to ask whether one can obtain a result similar to Theorem \ref{fpf-maps-on-nilmanifolds} for flat manifolds. One of the main differences between nilmanifolds and infra-nilmanifolds (or flat manifolds) is that $N(f)=0$ iff $R(f)=\infty$ for selfmaps on nilmanifolds whereas there are maps on flat manifolds where $R(f)=\infty$ but $N(f)>0$ (see Example 5.3 of \cite{daci-peter2}). Even for the Klein bottle $K$, one can find a homeomorphism with $N(f)>0$ and $R(f)=\infty$. Thus, the groups $\pi_1(K) \times \mathbb Z^n$ do not provide examples of flat manifolds on which all homeomorphisms have zero Nielsen numbers.
\end{remark}
A natural extension of our results for nilmanifolds is to infra-nilmanifolds, whose fundamental groups are finitely generated torsion-free virtually nilpotent groups or, equivalently, almost Bieberbach groups. From the point of view of Nielsen fixed point theory, D. Anosov in \cite{anosov} (or \cite{ed-suf}) already pointed out that the equality $N(f)=|L(f)|$ does not hold for selfmaps of infra-nilmanifolds. Furthermore, Kwasik and Lee \cite{kwasik-lee} constructed infra-nilmanifolds and affine Anosov diffeomorphisms $f$ for which $|L(f)|\ne N(f)$ in every even dimension $n\ge 4$. On the other hand, for a selfmap $f$ of a compact solvmanifold, $R(f)<\infty$ implies that $N(f)=R(f)$ \cite{daci-peter2}. Thus, one would like to study the $R_\infty$ property for such groups, although we cannot always obtain results similar to those in the previous two sections.
\section{$\mathcal C$-nilpotent groups}
Next we turn to another generalization of nilpotent groups. Nilpotent spaces have proven to be useful in homotopy theory and in algebraic topology in general. The concept of a Serre class of abelian groups has been generalized. One such generalization is that of a $\mathcal C$-nilpotent class, first introduced in \cite{Daci83}. A family $\mathcal C$ of groups is a {\it class of groups} if given a short exact sequence of groups
$$
1\to A \to B \to C \to 1
$$
then $A,C\in {\mathcal C}$ if and only if $B\in {\mathcal C}$. Given a class ${\mathcal C}$, a group $G$ is said to be ${\mathcal C}$-nilpotent if $\Gamma_n(G)\in {\mathcal C}$ for some positive integer $n$, where $\Gamma_i(G)$ denotes the $i$-th term of the lower central series of $G$. More generally, a group $\pi$ acts ${\mathcal C}$-nilpotently on a group $G$ if $\Gamma_n^{\pi}(G)\in {\mathcal C}$ for some positive integer $n$, where $\Gamma_n^{\pi}(G)$ is the smallest $\pi$-invariant subgroup that contains $[G,\Gamma_{n-1}^{\pi}(G)]$ and the set $\{(\alpha\cdot g)g^{-1}\mid \alpha \in \pi,\ g\in \Gamma_{n-1}^{\pi}(G)\}$. Thus, a space $X$ is said to be $\mathcal C$-nilpotent if $\pi_1(X)$ is $\mathcal C$-nilpotent and the action of $\pi_1(X)$ on each $\pi_k(X)$ is $\mathcal C$-nilpotent. From now on, we let ${\mathcal C}$ be the class of finite groups.
Let $G$ be a finitely generated group. For any $\varphi \in {\rm Aut}(G)$ and any positive integer $n$, we have a commutative diagram
\begin{equation}
\begin{CD}
0 @>>> \Gamma_n(G) @>>> G @>>> G/\Gamma_n(G) @>>> 1 \\
@. @V{\varphi'}VV @V{\varphi}VV @V{\bar \varphi}VV @. \\
0 @>>> \Gamma_n(G) @>>> G @>>> G/\Gamma_n(G) @>>> 1
\end{CD}
\end{equation}
It follows easily from Lemma \ref{R-facts} that if $G/\Gamma_n(G)$ has the $R_{\infty}$ property then so does $G$. Since $N_n=G/\Gamma_n(G)$ is nilpotent, the torsion elements form a subgroup $T_n$ and $T_n$ is a characteristic subgroup of $G/\Gamma_n(G)$. It follows that if $N_n/T_n$ has the $R_{\infty}$ property then $N_n$ also has the $R_{\infty}$ property.
Thus, we have the following useful result that can be used to construct more examples of groups with the $R_{\infty}$ property.
\begin{proposition}\label{C-nil}
Let $G$ be a finitely generated group and $T_k$ be the torsion subgroup of the nilpotent group $N_k=G/\Gamma_k(G)$. If for some positive integer $n$ the group $N_n/T_n$ has the $R_{\infty}$ property then $G$ also has the $R_{\infty}$ property.
\end{proposition}
We now combine our results from section 4 to enlarge the family of groups with the $R_{\infty}$ property and also the family of compact manifolds which have fundamental groups with the $R_{\infty}$ property.
\begin{example}\label{Poincare-nil}
Let $\Sigma^3$ be the Poincar\'e $3$-sphere with $\pi_1(\Sigma^3)\cong Icos$, the binary icosahedral group of order $120$. Suppose $M$ is the $4$-dimensional $K(\pi,1)$ nilmanifold where $\pi$ is the torsion-free nilpotent group of nilpotency class $3$ as in Example \ref{ex3}. Take $X=\Sigma^3\times M$. Now, $G=\pi_1(X)$ is $\mathcal C$-nilpotent, $\Gamma_4(G)=Icos$, and $G/\Gamma_4(G)\cong \pi$. Since $\pi$ has the $R_{\infty}$ property (and the torsion subgroup of $\pi$ is trivial), it follows from Proposition \ref{C-nil} that $G$ has the $R_{\infty}$ property. More generally, one can obtain similar examples by replacing $Icos$ with any finite perfect group.
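The computation of $\Gamma_4(G)$ here uses only that $Icos$ is perfect: $[Icos,Icos]=Icos$, so $\Gamma_k(Icos)=Icos$ for all $k\ge 1$, and therefore
$$
\Gamma_4(G)=\Gamma_4(Icos\times \pi)=\Gamma_4(Icos)\times \Gamma_4(\pi)=Icos\times 1,
$$
since $\pi$ has nilpotency class $3$.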
\end{example}
We can show more for homeomorphisms of the space $X$ in the last example although it is not clear if $X$ is a Jiang-type space. In fact, we obtain the following result, similar to Theorem \ref{fpf-maps-on-nilmanifolds}.
\begin{theorem}\label{fpf-maps-on-C-nilpotent-spaces}
For any $n\ge 7$, there exists a compact $\mathcal C$-nilpotent manifold $M$ with $\dim M=n$ such that every homeomorphism $f:M\to M$ is isotopic to a fixed point free homeomorphism.
\end{theorem}
\begin{proof} Let $n\ge 7$ and $M=\Sigma^3 \times N^{n-3}$ where $N^k$ is the compact nilmanifold with fundamental group of Hirsch length $k$ as constructed in Example \ref{ex3}.
We will show that any $f:M \to M$ is homotopic to a fiber-preserving map. First, choose basepoints $x_0\in \Sigma^3$, $y_0\in N=N^{n-3}$ and write $f(x,y)=(g(x,y),h(x,y))$. Denote by $\bar f:N\to N$ the restriction of $h$ to $\{x_0\}\times N$, that is, $\bar f(y)=h(x_0,y)$. Next, we show that $\bar f\circ p_2$ and $h$ are homotopic, where $p_2:\Sigma^3\times N\to N$ is the projection on the second factor. Write $A=\Sigma^3\times N\times \{0,1\} \cup (\Sigma^3\vee N)\times [0,1]$. The pair $(M\times[0,1],A)$ is a relative $CW$ pair. Since $\pi_1(\Sigma^3)\cong Icos$ is finite, $N$ is aspherical and $\pi=\pi_1(N)$ is torsion-free, the maps $h$ and $\bar f\circ p_2$ restricted to $\Sigma^3\times\{y_0\}$ are null-homotopic, and they are homotopic on $\{x_0\}\times N$. It follows that $h$ and $\bar f\circ p_2$ coincide, up to homotopy, on the subspace $\Sigma^3 \vee N \subset \Sigma^3 \times N$. Let $F$ be the homotopy from $h|_{\Sigma^3\vee N}$ to $(\bar f\circ p_2)|_{\Sigma^3\vee N}$ and define $\widehat F:A\to N$ by $\widehat F|_{M\times \{0\}}=h$, $\widehat F|_{M\times \{1\}}=\bar f\circ p_2$ and $\widehat F|_{(\Sigma^3\vee N)\times [0,1]}=F$. Since (for suitable $CW$ structures) the 2-skeleton of $M=\Sigma^3\times N$ already lies inside $\Sigma^3\vee N$, there is a sequence of obstructions $c^i(\widehat F)\in H^i(M\times[0,1],A;\pi_{i-1}(N))$, $i>2$, to extending $\widehat F$ to $M\times[0,1]$. These obstructions are trivial because $\pi_k(N)=0$ for $k>1$, since $N$ is aspherical. Thus, up to homotopy, the map $h$ is of the form $\bar f\circ p_2$ and we have the following commutative diagram
\begin{equation}\label{fiber}
\begin{CD}
M @>{f}>> M \\
@V{p_2}VV @VV{p_2}V \\
N @>{\bar f}>> N
\end{CD}
\end{equation}
Now, suppose $f$ is a homeomorphism and \eqref{fiber} holds.
If $f$ does not have any fixed points, there is nothing to prove. Suppose $f$ does have a fixed point. Then \eqref{fiber} yields a commutative diagram of groups similar to the diagram in Lemma \ref{R-facts}. Thus, $\bar f$ induces an automorphism $\bar \varphi$ with $R(\bar \varphi)=\infty$, since $\pi$ has the $R_{\infty}$ property. It follows that $N(\bar f)=0$. Since the fibration $p_2$ is trivial, it follows that $N(f)=N(f')\cdot N(\bar f)=0$, where $f'$ is the restriction of $f$ to the fiber. Since $n\ge 7$, Kelly's Isotopy Wecken Theorem \cite{kelly} implies that $f$ is isotopic to a fixed point free homeomorphism.
\end{proof}
\begin{remark}
Although the space $M$ in Theorem \ref{fpf-maps-on-C-nilpotent-spaces} is a ${\mathcal C}$-nilpotent space, its fundamental group has a center of infinite index so we cannot conclude from \cite{daci-peter3} that $M$ is of Jiang-type. We should point out that one can construct similar manifolds $M$ with the desired property by replacing $\Sigma^3$ with any compact manifold $L$ with finite $\pi_1(L)$ since the same argument also shows that every map $f:L\times N \to L\times N$ is fiber-preserving up to homotopy. Note that if $\dim L\le 2$, $\pi_1(L)$ is necessarily abelian so that $\pi_1(M)$ would be nilpotent possibly with torsion.
\end{remark}
\begin{remark}
It is shown in \cite{daci-peter3} that if $H$ is a finite subgroup of a compact connected Lie group $G$, then the coset space $X=G/H$ is $\mathcal C$-nilpotent and its fundamental group has a center of finite index and hence is of Jiang-type. However, these spaces do not provide examples like Example \ref{Poincare-nil} because $[\pi_1(X),\pi_1(X)]$ is also finite. It follows that $\pi_1(X)/[\pi_1(X),\pi_1(X)]=\pi_1(X)^{Ab}$ is finitely generated abelian and therefore does not have the $R_{\infty}$ property and hence Proposition \ref{C-nil} is not applicable.
\end{remark}
|
2,869,038,153,868 | arxiv | \section{Introduction}
\label{sec:intro}
The notion of an analytic free (or non-commutative) map arises naturally in free
probability,
the study of non-commutative (free) rational functions
\cite{BGM,Vo1,Vo2,AIM,KVV},
and
systems theory \cite{HBJP}.
In this note
rigidity results for such functions paralleling those
for their classical commutative counterparts are established.
The free setting leads to substantially stronger results.
Namely, if $f$ is a proper analytic free map from
a non-commutative domain in $g$ variables
to another in $\tg$ variables, then $f$ is injective
and $\tg \ge g$.
If in addition
$\tg=g$, then $f$ is onto and
has an inverse which is itself
a (proper) analytic free map.
This injectivity conclusion contrasts markedly with the classical
case, where a
(commutative) \emph{proper}
analytic function $f$ from one domain in $\mathbb C^g$
to another in $\mathbb C^g$ need not be injective,
although it must be onto. For the classical theory of
some commutative proper analytic maps
see \cite{DAn}.
The definitions as used in this paper are given in the
following section. The main result of the paper is
in Section \ref{subsec:main-result}. Analytic free analogs
of classical (commutative) rigidity theorems are the
theme of Section \ref{sec:analogs}.
concludes with examples in Section \ref{sec:Examples},
all of which involve linear matrix inequalities (LMIs).
\section{Free Maps}
\label{sec:maps}
This section contains the background on
non-commutative sets and on {\it free maps}
at the level of generality needed for this paper.
As we shall see, free maps which are continuous are also analytic
in several senses, a fact which (mostly) justifies the
terminology analytic free map in the introduction.
Indeed one typically thinks of free maps
as being analytic, but in a weak sense.
The discussion borrows heavily from
the recent basic work of
Voiculescu \cite{Vo1, Vo2} and of
Kalyuzhnyi-Verbovetski\u\i{} and Vinnikov
\cite{KVV}, see also the references therein.
These papers contain a power series approach
to free maps
and for more on this one can
see Popescu \cite{Po1,Po2}, or also \cite{HKMS,HKM1}.
\subsection{Non-commutative Sets and Domains}
\label{subsec:sets}
Fix a positive integer $g$.
Given a positive integer $n$, let $\matng$ denote
$g$-tuples of $n\times n$ matrices.
Of course, $\matng$ is naturally identified with
$M_n(\mathbb C) \otimes\mathbb C^g.$
\renewcommand{\subset}{\subseteq}
A sequence $\cU=(\cU (n))_{n\in\N},$ where $\cU (n) \subset \matng$,
is a {\bf non-commutative set} \index{non-commutative set}
if it is \index{closed with respect to unitary similarity}
{\bf closed with respect to simultaneous unitary similarity}; i.e.,
if $X\in \cU (n)$ and $U$ is an $n\times n$ unitary matrix, then
\[
U^\ast X U =(U^\ast X_1U,\dots, U^\ast X_gU)\in \cU (n);
\]
and if it is \index{closed with respect to direct sums}
{\bf closed with respect to direct sums}; i.e.,
if $X\in \cU (n)$ and $Y\in \cU(m)$, then
\[
X\oplus Y = \begin{pmatrix} X & 0\\ 0 & Y \end{pmatrix}
\in \cU(n+m).
\]
Non-commutative sets differ from
the fully matricial $\C^g$-sets of Voiculescu
\cite[Section 6]{Vo1} in that the latter are closed
with respect to simultaneous similarity, not just
simultaneous \emph{unitary} similarity.
Remark \ref{rem:sim-vs-unitary} below briefly discusses the significance
of this distinction for the results on proper analytic
free maps in this paper.
The non-commutative set $\cU$ is a
{\bf non-commutative domain} if each
$\cU (n)$ is open and connected.
\index{non-commutative domain}
Of course the sequence $\matg=(\matng)$
is itself a non-commutative domain. Given $\varepsilon>0$,
the set $\cN_\varepsilon = (\cN_\varepsilon(n))$ given by
\beq\label{eq:nbhd}
\cN_\varepsilon(n)=\big\{X\in\matng : \sum X_j X_j^\ast < \varepsilon^2 \big\}
\eeq
is a non-commutative domain which we
call the
{\bf non-commutative $\varepsilon$-neighborhood of $0$ in $\mathbb C^g$}.
\index{non-commutative neighborhood of $0$}
The non-commutative set $\cU$ is {\bf bounded}
\index{bounded, non-commutative set} if there
is a $C\in\R$ such that
\beq\label{eq:bd}
C^2 -\sum X_j X_j^\ast \succ 0
\eeq
for every $n$ and $X\in\cU(n)$. Equivalently, for
some $\lambda\in\RR$, we have $\cU\subset \cN_\lambda$.
Note that this condition is stronger than asking
that each $\cU(n)$ is bounded.
Let $\bCx$ denote the $\C$-algebra freely generated by $g$
non-commuting letters $x=(x_1,\ldots,x_g)$. Its elements are
linear combinations of words in $x$ and are called
{\bf polynomials}.
Given an $r\times r$ matrix-valued polynomial
$p\in M_r(\mathbb C) \otimes \bCx$ with
$p(0)=0$, let $\cD(n)$ denote the connected
component of
\[
\{X\in \matng : I+p(X)+p(X)^* \succ 0\}
\]
containing the origin.
The sequence $\cD=(\cD(n))$ is a non-commutative domain
which is semi-algebraic in nature.
Note that $\cD$ contains the neighborhood $\cN_\varepsilon$
of $0$ for some $\varepsilon>0$, and that the choice
\[
p= \frac{1}{\varepsilon}
\begin{pmatrix}\;\; 0_{g\times g} & \begin{matrix} x_1 \\ \vdots \\x_g\end{matrix} \\
\;\; 0_{1\times g} & 0_{1\times 1} \end{pmatrix}
\]
gives $\cD = \cN_\varepsilon$.
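For the record, here is the Schur complement computation behind this identification, stated for the block-row arrangement $B=\frac{1}{\varepsilon}\,(X_1\;\cdots\;X_g)$ (the displayed column version is entirely analogous, with the roles of the $X_j$ and $X_j^*$ interchanged): for any matrix $B$,
\[
\begin{pmatrix} I & B\\ B^* & I\end{pmatrix}\succ 0
\quad\Longleftrightarrow\quad I-BB^*\succ 0,
\]
and here $BB^*=\frac{1}{\varepsilon^2}\sum X_jX_j^*$, so positivity of $I+p(X)+p(X)^*$ is precisely the defining condition \eqref{eq:nbhd} of $\cN_\varepsilon$.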
Further examples of natural non-commutative domains
can be generated by considering non-commutative polynomials
in both the variables $x=(x_1,\dots,x_g)$
and their formal adjoints, $x^*=(x_1^*,\dots,x_g^*)$.
The case of domains determined
by linear matrix inequalities appears in Section \ref{sec:Examples}.
\subsection{Free Mappings}
\label{subsec:nc-maps}
Let $\cU$ denote a non-commutative subset of $\matg$
and let $\h$ be a positive integer.
A
{\bf \nca}\index{\nca}
$f$ from $\cU$ into $\math$ is a sequence
of functions $\fn:\cU(n) \to\matnh$ which
{\bf respects intertwining maps}; i.e.,
if $X\in\cU(n)$, $Y\in\cU(m)$, $\Gamma:\mathbb C^m\to\mathbb C^n$,
and
\[
X\Gamma=(X_1\Gamma,\dots, X_g\Gamma)
=(\Gamma Y_1,\dots, \Gamma Y_g)=\Gamma Y,
\]
then $\fn(X) \Gamma = \Gamma \fm (Y)$.
\index{respects intertwining maps}
Note if $X\in\cU(n)$ it is natural
to write simply $f(X)$ instead of
the more cumbersome $\fn(X)$ and
likewise $f:\cU\to \math$. In a similar fashion,
we will often write $f(X)\Gamma=\Gamma f(Y).$
\begin{remark}\rm
\label{rem:fasvector}
Each $\fn$ can be represented as
\[
\fn=\begin{pmatrix} \fn_1 \\ \vdots \\ \fn_{\h} \end{pmatrix}
\]
where $\fn_j :\cU(n)\to M_n(\mathbb C)$. Of course, for each
$j$, the sequence $(\fn_j)$ is a \nca
$f_j:\cU\to M(\mathbb C)$ with
$\fjn=\fn_j$. In particular, if $f:\cU \to \math,$
$X\in\cU(n)$, and $v=\sum e_j \otimes v_j$, then
\[
f(X)^* v =\sum f_j(X)^* v_j.
\]
\end{remark}
Let $\cU$ be a given non-commutative subset of $\matg$
and suppose $f=(\fn)$ is a sequence of functions
$\fn:\cU(n)\to \matnh$. The sequence $f$ {\bf respects direct sums}
if, for each $n,m$ and $X\in\cU(n)$ and $Y\in \cU(m),$
\[
f(X\oplus Y)=f(X) \oplus f(Y).
\]
Similarly, $f$
{\bf respects similarity} if for each $n$ and
$X,Y\in \cU(n)$ and invertible $n\times n$
matrix $S$
such that $XS=SY$,
\[
f(X) S= Sf(Y).
\]
The following proposition gives an alternate characterization
of \ncasp
\begin{proposition}
\label{prop:nc-map-alt}
Suppose $\cU$ is a non-commutative subset of $\matg$. A sequence
$f=(\fn)$ of functions $\fn:\cU(n)\to \matnh$
is a \nca if and only if it
respects direct sums and similarity.
\end{proposition}
\begin{proof}
Observe $f(X)\Gamma=\Gamma f(Y)$ if and only if
\[
\begin{pmatrix} f(X) & 0 \\ 0 & f(Y) \end{pmatrix}
\begin{pmatrix} I & \Gamma \\ 0 & I \end{pmatrix}
= \begin{pmatrix} I & \Gamma \\ 0 & I \end{pmatrix}
\begin{pmatrix} f(X) & 0 \\ 0 & f(Y) \end{pmatrix}.
\]
Thus if $f$ respects direct sums and similarity, then
$f$ respects intertwining.
On the other hand, if $f$ respects intertwining then,
by choosing $\Gamma$ to be an appropriate projection,
it is easily seen that
$f$ respects direct sums too.
\end{proof}
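The motivating example of a \nca is polynomial evaluation: each $p\in\bCx$ induces a sequence of maps $\matng\to M_n(\mathbb C)$, and since
\[
p(S^{-1}XS)=S^{-1}p(X)S, \qquad p(X\oplus Y)=p(X)\oplus p(Y)
\]
for every invertible $S$, Proposition \ref{prop:nc-map-alt} shows that $p$ defines a \nca $\matg\to M(\mathbb C)$; likewise, a tuple of polynomials defines a \nca into $\math$.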
\begin{remark}\rm
\label{rem:sim-vs-unitary}
Let $\cU$ be a non-commutative domain in $M(\C)^g$ and
suppose $f:\cU\to M(\C)^{\tg}$ is a free map. If $X\in \cU$
is similar to $Y$ with $Y=S^{-1}X S$, then we can
define $f(Y) = S^{-1}f(X)S$. In this way $f$ naturally
extends to a free map on $\cH(\cU)\subset M(\C)^g$ defined by
\[
\cH(\cU)(n)=\{Y\in M_n(\C)^g: \text{ there is an } X\in \cU(n)
\text{ such that $Y$ is similar to $X$}\}.
\]
Thus if $\cU$ is a domain of holomorphy, then $\cH(\cU)=\cU$.
On the other hand, because our results on proper analytic free maps
to come depend strongly upon
the non-commutative set $\cU$ itself,
the distinction between non-commutative sets and
fully matricial sets as in \cite{Vo1} is important.
See also \cite{HM,HKM2}.
\end{remark}
We close this subsection with the following simple observation.
\begin{proposition}
\label{prop:range}
If $\cU$ is a non-commutative subset of $\matg$ and
$f:\cU\to \math$ is a \ncacomma then the range of $f$, equal to
the sequence $f(\cU)=\big(f(\cU(n))\big)$, is itself
a non-commutative subset of $\math$.
\end{proposition}
\subsection{A Continuous \NCA is Analytic}
Let $\cU\subset \matg$ be a non-commutative set.
A \nca $f:\cU\to \math$ is {\bf continuous} if each
$\fn:\cU(n)\to \matnh$ is continuous. \index{continuous}
Likewise,
if $\cU$ is a non-commutative domain, then
$f$ is called {\bf analytic} if each $\fn$ is analytic.
\index{analytic}
This implies the existence of directional derivatives in all directions
at each point of the domain, and this is the property we shall use
below.
\begin{proposition}
\label{prop:continuous-analytic}
Suppose $\cU$ is a non-commutative domain in $\matg$.
\ben[\rm (1)]
\item
A continuous \nca $f:\cU\to \math$ is analytic.
\item If $X\in\cU(n)$, and $H\in\matng$ has sufficiently small norm,
then
\[
f\begin{pmatrix} X & H \\ 0 & X\end{pmatrix}
= \begin{pmatrix} f(X) & f^\prime(X)[H] \\ 0 & f(X)\end{pmatrix}.
\]
\een
\end{proposition}
The proof invokes the following lemma which also plays
an important role in the next subsection.
\begin{lemma}
\label{lem:2x2}
Suppose $\cU\subset \matg$ is a non-commutative set
and $f:\cU\to \math$ is a \ncap
Suppose $X\in\cU(n)$, $Y\in \cU(m)$, and
$\Gamma$ is an $n\times m$ matrix. Let
\beq\label{eq:2x21}
C_j = X_j\Gamma -\Gamma Y_j, \quad
Z_j = \begin{pmatrix} X_j & C_j \\ 0 & Y_j \end{pmatrix}.
\eeq
If
$Z=(Z_1,\dots,Z_g)\in \cU(n+m)$, then
\beq\label{eq:2x22}
f_j(Z)=
\begin{pmatrix}
f_j(X) & f_j(X)\Gamma -\Gamma f_j(Y) \\
0 & f_j(Y)
\end{pmatrix}
\eeq
\end{lemma}
This formula generalizes to larger block matrices.
\begin{proof}
With
\[
S=\begin{pmatrix} I & \Gamma \\ 0 & I \end{pmatrix}
\]
we have
\[
\tilde{Z}_j= \begin{pmatrix} X_j & 0 \\ 0 & Y_j \end{pmatrix}
= S Z_j S^{-1}.
\]
Thus, writing $f=(f_1,\dots,f_{\h})^T$ and using
the fact that $f$ respects intertwining maps, for each $j$,
\[
f_j(Z) = S f_j(\tilde{Z}) S^{-1}
= \begin{pmatrix} f_j(X) & f_j(X)\Gamma -\Gamma f_j(Y) \\
0 & f_j(Y) \end{pmatrix}.\qedhere
\]
\end{proof}
\begin{proof}[Proof of Proposition {\rm\ref{prop:continuous-analytic}}]
Fix $n$ and $X\in \cU(n)$.
Because $\cU(2n)$ is open and $X\oplus X\in \cU(2n)$,
for every $H\in \matng$ of sufficiently small norm
the tuple with $j$-th entry
\[
\begin{pmatrix} X_j & H_j \\ 0 & X_j \end{pmatrix}
\]
is in $\cU(2n)$. Hence,
for $z\in\mathbb C$ of small modulus, the tuple $Z(z)$ with
$j$-th entry
\[
Z_j(z)=\begin{pmatrix} X_j+zH_j & H_j \\ 0 & X_j \end{pmatrix}
\]
is in $\cU(2n)$. Note that the choice (for $z\ne 0$) of
$\Gamma=\frac{1}{z}I$,
with $X+zH$ in place of $X$ and $X$ in place of $Y$,
in Lemma \ref{lem:2x2}
gives this $Z(z)$. Hence, by Lemma \ref{lem:2x2},
\[
f(Z(z))= \begin{pmatrix} f(X+zH) & \frac{f(X+zH)-f(X)}{z} \\ 0 & f(X)
\end{pmatrix}.
\]
Since $Z(z)$ converges as $z$ tends to $0$ and $f[2n]$
is assumed continuous, the limit
\[
\lim_{z\to 0} \frac{f(X+zH)-f(X)}{z}
\]
exists. This proves that $f$ is analytic at $X$. It also
establishes the moreover portion of the proposition.
\end{proof}
\begin{remark}\rm
Kalyuzhnyi-Verbovetski\u\i{} and Vinnikov \cite{KVV} are developing
general results based on very weak hypotheses with the conclusion that
$f$ is (in our language) an \cncap
Here we will assume
continuity whenever expedient.
For perspective we mention power series.
It is shown in \cite[Section 13]{Vo2}
that an \cnca
$f$ has a formal power series expansion
in the non-commuting variables, which indeed is a powerful way
to think of \cncasp Voiculescu also gives elegant
formulas for the coefficients of the power series expansion of $f$ in terms
of clever evaluations of $f$. Convergence properties for bounded
\cncas are studied in \cite[Sections 14-16]{Vo2}; see also
\cite[Section 17]{Vo2} for a bad unbounded example. We do not dwell
on this since power series are not essential to this paper.
\end{remark}
\section{A Proper \NCA is Bianalytic Free}
\label{subsec:main-result}
Given non-commutative domains $\cU$ and $\cV$ in
$\matg$ and $\math$ respectively, a
\nca $f:\cU\to\cV$ is {\bf proper} \index{proper}
if each $\fn:\cU(n)\to \cV(n)$ is proper in the
sense that if $K\subset \cV(n)$ is compact, then
$f^{-1}(K)$ is compact.
In particular, for all $n$,
if $(z_j)$ is a sequence in $\cU(n)$
and $z_j\to\partial\cU(n)$, then
$f(z_j)\to\partial\cV(n)$.
In the case $g=\h$ and both $f$ and $f^{-1}$ are (proper)
\cncas
we say $f$ is
a
{\bf bianalytic} free map.
The following theorem is a central result of this paper.
\begin{theorem}
\label{thm:oneone}
Let $\cU$ and $\cV$ be non-commutative domains containing $0$
in $\matg$ and $\math$, respectively and
suppose $f:\cU\to \cV$ is a \ncap
\begin{enumerate}[\rm (1)]
\item\label{it:1to1}
If $f$ is proper, then it is one-to-one,
and $f^{-1}:f(\cU)\to \cU$ is a \ncap
\item\label{it:1to1ugly}
If, for each $n$ and $Z\in\matnh$,
the set $\fn^{-1}(\{Z\})$ has compact closure in $\cU$,
then $f$ is one-to-one
and moreover, $f^{-1}:f(\cU)\to \cU$ is a \ncap
\item\label{it:xto1}
If $g=\h$ and $f:\cU\to\cV$ is
proper and continuous, then $f$ is bianalytic.
\end{enumerate}
\end{theorem}
\begin{corollary}
\label{cor:bianalytic-free}
Suppose $\cU$ and $\cV$ are non-commutative domains in
$\matg$. If $f:\cU\to \cV$
is a free map and if each $f[n]$ is bianalytic,
then $f$ is a bianalytic free map.
\end{corollary}
\begin{proof}
Since each $\fn$ is bianalytic, each $\fn$ is proper. Thus $f$ is
proper. Since also $f$ is a \ncacomma by Theorem
\ref{thm:oneone}\eqref{it:xto1} $f$ is
a \fbap
\end{proof}
Before proving Theorem \ref{thm:oneone} we establish the following
preliminary result which is of independent interest and whose proof
uses the full strength of Lemma \ref{lem:2x2}.
\begin{proposition}
\label{prop:oneone}
Let $\cU\subset \matg$ be a non-commutative domain
and suppose $f:\cU\to \math$ is a \ncap
Suppose further that $X\in\cU(n)$, $Y\in \cU(m)$, $\Gamma$ is an
$n\times m$ matrix, and
\[
f(X)\Gamma = \Gamma f(Y).
\]
If
$f^{-1}\big(\{f(X)\oplus f(Y)\}\big)$ has compact closure
in $\cU$,
then $X\Gamma = \Gamma Y.$
\end{proposition}
\begin{proof}
As in Lemma \ref{lem:2x2},
let $C_j = X_j\Gamma -\Gamma Y_j$.
For $0<t$ sufficiently small, $Z(t)\in \cU(n+m)$, where
\beq\label{eq:propcomp}
Z_j(t) =\begin{pmatrix} X_j & t C_j \\ 0 & Y_j \end{pmatrix}.
\eeq
If $f(X)\Gamma = \Gamma f(Y),$ then, by Lemma
\ref{lem:2x2},
\[
f_j(Z(t)) = \begin{pmatrix} f_j(X)
& t\big( f_j(X)\Gamma -\Gamma f_j(Y) \big)\\
0 & f_j(Y) \end{pmatrix} \\
= \begin{pmatrix} f_j(X) & 0 \\ 0 & f_j(Y) \end{pmatrix}.
\]
Thus, $f_j(Z(t))=f_j(Z(0)).$ In particular,
\[
f^{-1}\big(\{f(Z(0))\}\big) \supseteq \{Z(t): t\in \mathbb C\}\cap \cU.
\]
Since this set has, by assumption, compact closure in $\cU$,
it follows that $C=0$; i.e.,
$X\Gamma=\Gamma Y$.
\end{proof}
We are now ready to prove that
a proper \nca is one-to-one and even a \fba if
continuous and mapping
between domains of the same dimension.
\begin{proof}[Proof of Theorem {\rm\ref{thm:oneone}}]
If $f$ is proper, then $f^{-1}(\{Z\})$
has compact closure in $\cU$
for every $Z\in\math$. Hence \eqref{it:1to1} is a consequence of
\eqref{it:1to1ugly}.
For
\eqref{it:1to1ugly}, invoke Proposition \ref{prop:oneone}
with $\Gamma=I$ to conclude that $f$ is injective.
Thus $f:\cU\to f(\cU)$ is a bijection from one non-commutative
set to another. Given $W,Z\in f(\cU)$ there exists
$X,Y\in\cU$ such that $f(X)=W$ and $f(Y)=Z$. If
moreover, $W\Gamma =\Gamma Z$, then $f(X) \Gamma =\Gamma f(Y)$
and Proposition \ref{prop:oneone} implies $X\Gamma=\Gamma Y$; i.e.,
$f^{-1}(W)\Gamma=\Gamma f^{-1}(Z)$. Hence $f^{-1}$ is itself
a \ncap
Let us now consider
\eqref{it:xto1}.
Using the continuity hypothesis and Proposition
\ref{prop:continuous-analytic},
for each $n$, the map $\fn:\cU(n)\to \cV(n)$
is analytic. By hypothesis each $\fn$ is also proper and
hence its range
is $\cV(n)$ by \cite[Theorem 15.1.5]{Rudin}.
Now $\fn:\cU(n)\to \cV(n)$ is one-to-one, onto and analytic, so
its inverse is analytic.
Further, by
the already proved part of the
theorem, $f^{-1}$ is an \cncap
\end{proof}
For both completeness and later use we record the following
companion to Lemma \ref{lem:2x2}.
\begin{proposition}
\label{prop:fprime}
Let $\cU\subset\matg$ and $\cV\subset\math$ be non-commutative
domains.
If $f:\cU\to\cV$ is a
proper \cnca
and if $X\in\cU(n)$, then
$f^\prime(X):\matng\to \matnh$ is one-to-one.
In particular, if $g=\h$, then $f^\prime(X)$
is a vector space isomorphism.
\end{proposition}
\begin{proof}
Suppose $f^\prime(X)[H]=0$.
We scale $H$ so that $\begin{pmatrix} X & H \\ 0 & X\end{pmatrix} \in\cU$.
From Proposition \ref{prop:continuous-analytic}
\[
f\begin{pmatrix} X & H \\ 0 & X\end{pmatrix}
= \begin{pmatrix} f(X) & f^\prime(X)[H] \\ 0 & f(X)\end{pmatrix}=
\begin{pmatrix} f(X) & 0 \\ 0 & f(X)\end{pmatrix}
= f\begin{pmatrix} X & 0 \\ 0 & X\end{pmatrix}.
\]
By the injectivity of $f$ established in Theorem \ref{thm:oneone},
$H=0$.
\end{proof}
\subsection{The Main Result is Sharp}
Key to the proof of Theorem \ref{thm:oneone} is
testing $f$ on the special class of matrices of the
form \eqref{eq:propcomp}. One naturally asks if
the hypotheses of the theorem in fact yield stronger
conclusions, say by plugging in richer classes of test matrices.
The answer to this question is no:
suppose $f$ is any \cnca
from $g$ to $g$ variables defined on a
neighborhood $\cN_\eps$ of $0$ with $f(0)=0$ and $\fone'(0)$
invertible.
Under mild additional assumptions (e.g.~the lowest
eigenvalue of $f'(X)$ or the norm $\|f'(X)\|$
is bounded away from $0$ for $X\in\cN_\eps(n)$ independently of the
size $n$)
then there are non-commutative domains $\cU$ and $\cV$ with
$f:\cU\to\cV$ meeting the hypotheses of the theorem.
Indeed, consider (for fixed $n$)
the analytic function $\fn$ on $\cN_\eps(n)$.
Its derivative at $0$ is invertible; in fact,
$\fn'(0)$ is unitarily equivalent to
$I_n\otimes \fone'(0)$, cf.~Lemma \ref{lem:ampliate} below.
By the implicit function theorem,
there is a small $\delta$-neighborhood of $0$ on which
$\fn^{-1}$ is defined and analytic. By our assumptions and the bounds on the
size of this neighborhood given in \cite{Wan}, $\delta>0$
may be chosen to be independent of $n$.
This gives rise to a non-commutative
domain $\cV$ and the \cnca
$f^{-1}:\cV\to\cU$, where $\cU=f^{-1}(\cV)$.
Note $\cU$ is open (and hence a non-commutative domain)
since $f^{-1}(n)$ is analytic and one-to-one.
It is now clear
that $f:\cU\to\cV$ satisfies the hypotheses of Theorem \ref{thm:oneone}.
We just saw that absent more conditions on the non-commutative
domains $\cD$ and $\tilde \cD$, nothing beyond bianalytic free
can be concluded about $f$.
The authors, for reasons not gone into here, are particularly
interested in convex domains, the paradigm being
those given by what are called LMIs. These will be discussed
in Section \ref{sec:Examples}.
Whether or not convexity of the domain or range of an analytic
free $f$
has a highly restrictive impact on
$f$ is a serious open question.
\section{Several Analogs to Classical Theorems}
\label{sec:analogs}
The conclusion of Theorem \ref{thm:oneone} is sufficiently
strong that most would say that it does not have a classical analog.
In this section \cnca analogs of classical
several complex variable theorems are obtained by combining
the corresponding classical theorem and Theorem
\ref{thm:oneone}. Indeed,
hypotheses for these analytic \nca results are weaker than their
classical analogs would suggest.
\subsection{A Free Caratheodory-Cartan-Kaup-Wu (CCKW) Theorem}
\label{sec:onto2}
The commutative Caratheodory-Cartan-Kaup-Wu (CCKW) Theorem
\cite[Theorem 11.3.1]{Krantz} says that
if $f$ is an analytic self-map of a bounded domain
in $\mathbb C^g$ which fixes
a point $P$, then the eigenvalues of $f^\prime(P)$
have modulus at most one. Conversely, if the eigenvalues
all have modulus one, then $f$ is in fact an automorphism; and
further if $f^\prime(P)=I$, then $f$ is the identity.
The CCKW Theorem together with Corollary \ref{cor:bianalytic-free}
yields Corollary \ref{cor:cckw1} below.
We note
that Theorem \ref{thm:oneone} can also be thought of
as a non-commutative CCKW theorem in that
it concludes, like the CCKW Theorem does,
that a map $f$ is bianalytic,
but under the (rather different) assumption that $f$ is proper.
\begin{corollary}
\label{cor:cckw1}
Let $\cD$ be a given bounded
non-commutative domain
which contains $0$.
Suppose
$f:\cD \to \cD$ is
an \cncap
Let $\phi$ denote the
mapping $\fone:\cD(1)\to\cD(1)$ and assume $\phi(0)=0.$
\begin{enumerate}[\rm (1)]
\item If all the eigenvalues of
$\phi^\prime(0)$ have modulus one,
then $f$ is a bianalytic free map; and
\item if $\phi^\prime(0)=I$, then $f$ is the identity.
\end{enumerate}
\end{corollary}
The proof uses the following lemma, whose proof is trivial if
$f$ is assumed to be continuous (and hence analytic), in which case
one can work with the formal power series representation of a
free analytic function.
\begin{lemma}
\label{lem:ampliate}
Keep the notation and hypothesis of Corollary {\rm\ref{cor:cckw1}}.
If $n$ is a positive integer and $\Phi$
denotes the mapping $\fn:\cD(n)\to\cD(n)$, then
$\Phi^\prime(0)$ is unitarily equivalent to
$I_n\otimes \phi^\prime(0)$.
\end{lemma}
\begin{proof}
Let $E_{i,j}$ denote the matrix units for
$M_n(\C)$. Fix $h\in\mathbb C^g$. Arguing as in the proof of
Proposition \ref{prop:fprime} gives, for $k\ne \ell$
and $z\in \mathbb C$ of small modulus,
\[
\Phi((E_{k,k}+E_{k,\ell})\otimes z h)
= (E_{k,k} +E_{k,\ell})\otimes \phi(zh).
\]
It follows that
\[
\Phi^\prime(0)[(E_{k,k}+E_{k,\ell})\otimes h]
= (E_{k,k}+E_{k,\ell})\phi^\prime(0)[h].
\]
On the other hand,
\[
\Phi^\prime(0)[E_{k,k}\otimes h] = E_{k,k}\otimes \phi^\prime(0)[h].
\]
By linearity of $\Phi^\prime(0)$, it follows that
\[
\Phi^\prime(0)[E_{k,\ell}\otimes h] = E_{k,\ell}\otimes \phi^\prime(0)[h].
\]
Thus, $\Phi^\prime(0)$ is unitarily equivalent to
$I_{n}\otimes \phi^\prime(0).$
\end{proof}
\begin{proof}[Proof of Corollary {\rm\ref{cor:cckw1}}]
The hypothesis that $\phi^\prime(0)$ has eigenvalues
of modulus one, implies, by Lemma \ref{lem:ampliate},
that, for each $n$, the eigenvalues
of $\fn^\prime(0)$ all have modulus one. Thus,
by the CCKW Theorem, each $\fn$ is an automorphism.
Now Corollary \ref{cor:bianalytic-free} implies $f$
is a \fbap
Similarly, if $\phi^\prime(0)=I_g$, then $\fn^\prime(0)=I_{ng}$
for each $n$. Hence, by the CCKW Theorem, $\fn$
is the identity for every $n$ and
therefore $f$ is itself the identity.
\end{proof}
Note a classical bianalytic function $f$ is
completely determined by its value and differential
at a point
(cf.~a remark after Theorem 11.3.1 in \cite{Krantz}).
Much the same is true for \cncas and for the same reason.
\begin{proposition}
\label{prop:derivative0}
Suppose $\cU,\cV\subset \matg$ are non-commutative domains, $\cU$
is bounded, both contain $0,$
and $f,g:\cU\to \cV$ are proper \cncasp
If $f(0)=g(0)$ and $f^\prime(0)=g^\prime(0)$, then $f=g$.
\end{proposition}
\begin{proof}
By Theorem \ref{thm:oneone} both $f$ and $g$ are
\fbasp
Thus $h=g^{-1}\circ f:\cU\to \cU$
is a \fba fixing $0$ with $h[1]^\prime(0)=I$.
Thus, by Corollary \ref{cor:cckw1}, $h$ is the identity.
Consequently $f=g$.
\end{proof}
\subsection{Circular Domains}
A subset $S$ of a complex vector space is
{\bf circular} if $\exp(it) s\in S$ whenever
$s\in S$ and $t\in\R$.
A non-commutative domain $\cU$ is circular
if each $\cU(n)$ is circular. \index{circular domain}
Compare the following theorem to
its commutative counterpart \cite[Theorem 11.1.2]{Krantz}
where the domains $\cU$ and $\cV$ are the same.
\begin{theorem}
\label{thm:circLin}
Let $\cU$ and $\cV$ be bounded non-commutative domains
in $\matg$ and $\math$, respectively, both
of which contain $0$.
Suppose $f:\cU\to \cV$ is a
proper \cnca
with $f(0)=0$.
If $\cU$ and the range
$\cR:= f(\cU)$ of $f$
are circular, then $f$ is linear.
\end{theorem}
The domain $\cU=(\cU(n))$ is {\bf convex}
if each $\cU(n)$ is a convex set.
\begin{corollary}
Let $\cU$ and $\cV$ be bounded non-commutative domains
in $\matg$ both
of which contain $0$.
Suppose $f:\cU\to \cV$ is a
proper \cnca with $f(0)=0$.
If both $\cU$ and $\cV$ are circular and if one is convex,
then so is the other.
\end{corollary}
This corollary is an immediate
consequence of Theorem \ref{thm:circLin} and the fact
(see Theorem \ref{thm:oneone}\eqref{it:xto1}) that
$f$ is onto $\cV$.
We admit that the hypothesis in Theorem \ref{thm:circLin}
that the range $\cR= f(\cU)$ of $f$
is circular seems rather contrived when the domains
$\cU$ and $\cV$ have a different number of variables.
On the other hand, if they have the same number of variables,
it is the same as $\cV$ being circular, since $f$ is onto
by Theorem \ref{thm:oneone}.
\begin{proof}[Proof of Theorem {\rm\ref{thm:circLin}}]
Because $f$ is a proper \nca it is injective
and its inverse (defined on $\cR$) is a \nca
by Theorem \ref{thm:oneone}. Moreover, using the
analyticity of $f$, its derivative
is pointwise injective by Proposition \ref{prop:fprime}.
It follows that each $\fn:\cU(n)\to \matnh$
is an embedding \cite[p.~17]{GP}.
Thus, each $\fn$ is a homeomorphism onto its range and
its inverse $\fn^{-1}=f^{-1}[n]$ is continuous.
Define $F: \cU \to \cU$ by
\beq\label{eq:defF}
F(x):= f^{-1}\big( e^{- i\theta} f(\e x) \big)
\eeq
This function respects direct sums and similarities,
since it is the composition of maps which do.
Moreover, it is continuous by the discussion above.
Thus $F$ is an \cncap
(Note that $F$ is well defined because $\cU$ and the range $\cR$ of $f$ are circular.)
Using the relation $f(\e x)=\e f(F(x))$ from \eqref{eq:defF}
and differentiating at $0$ (recall $F(0)=0$),
we find $\e f^\prime(0)F^\prime(0)=\e f^\prime(0)$.
Since $f^\prime(0)$ is injective, $F^\prime(0)=I.$
It follows from Corollary \ref{cor:cckw1}(2)
that $F(x)=x$ and thus,
by \eqref{eq:defF},
$f(\e x)=\e f(x)$. Since this holds for every
$\theta$, it follows that $f$ is linear.
\end{proof}
If $f$ is not assumed to map $0$ to $0$ (but instead fixes
some other point), then a
proper self-map
need not be linear.
This follows from the example we discuss in
Section \ref{sec:exYes}.
\begin{remark}\rm
A consequence of the series of papers by Kaup and Upmeier
\cite{BKU,KU} is that
given
two bianalytically equivalent bounded circular
domains in $\C^g$, there
is a \emph{linear} bianalytic map between them.
We believe this result extends to the present non-commutative setting.
\end{remark}
\section{Maps in One Variable, Examples}
\label{sec:Examples}
This section contains two examples. The first shows that
the circular hypothesis is needed in Theorem \ref{thm:circLin}.
Our second example concerns $\cD$,
a non-commutative domain in one variable
containing the origin, and $b:\cD\to \cD$ a proper
\cnca with $b(0)=0$. It follows that $b$ is bianalytic
and hence $b[1]^\prime(0)$
has modulus one. Our second example shows that this setting can force
further restrictions on $b[1]^\prime(0)$.
The non-commutative domains of both examples are LMI domains; i.e.,
they are the non-commutative solution set of a linear
matrix inequality (LMI). Such domains are convex, and play a major
role in the important area of semidefinite programming; see \cite{WSV}
or the excellent survey \cite{Nem}.
\subsection{LMI Domains}
A special case of the non-commutative domains
are those described by a linear matrix inequality.
Given a positive integer $d$
and $A_1,\dots,A_g \in M_d(\C),$ the
linear matrix-valued polynomial
\[
L(x)=\sum A_j x_j\in M_d(\C)\otimes \bCx
\]
is a {\bf truly linear pencil}. \index{truly linear pencil}
Its adjoint is, by definition,
$
L(x)^*=\sum A_j^* x_j^*.
$
Let
\[
\cL(x) = I_d + L(x) +L(x)^*.
\]
If $X\in \matng$, then $\cL(X)$ is defined by the canonical substitution,
\[
\cL(X) = I_d\otimes I_n
+\sum A_j\otimes X_j +\sum A_j^* \otimes X_j^*,
\]
and yields a symmetric $dn\times dn$ matrix.
The inequality $\cL(X)\succ 0$ for tuples $X\in\matg$
is a {\bf linear matrix inequality (LMI)}. \index{LMI}
\index{linear matrix inequality} The sequence of solution sets
$\cD_{\cL}$ defined by
\[
\cD_{\cL}(n) = \{X\in\matng : \cL(X)\succ 0\}
\]
is a non-commutative domain which contains a neighborhood
of $0$. It is called a {\bf non-commutative (NC) LMI domain}.
\subsection{A Concrete Example of a Nonlinear Bianalytic Self-map
on an NC LMI Domain}
\label{sec:exYes}
It is surprisingly difficult to find proper self-maps on LMI domains
which are not linear. This section contains the only such example,
up to trivial modification, of which
we are aware. Of course, by Theorem \ref{thm:circLin} the underlying
domain cannot be circular.
In this example the domain is a one-variable LMI domain.
Let $$A=\begin{pmatrix} 1&1\\ 0&0\end{pmatrix}$$
and let $\cL$ denote the univariate $2\times 2$ linear pencil,
$$\cL(x):= I + Ax + A^* x^*
=
\begin{pmatrix} 1 + x +x^* & x \\
x^* & 1 \end{pmatrix}.
$$
Then
$$\cD_\cL=\{X\mid \| X-1 \| < \sqrt 2\}.$$
To see this note $\cL(X) \succ 0$
if and only if $1+X+X^*-XX^*\succ0$,
which is in turn equivalent to $(1-X)(1-X)^*\prec 2$.
\begin{proposition}\label{prop:ex}
For real $\theta$, consider
$$
f_\theta (x):= \frac{ e^{i \theta} x}{1+x- e^{i \theta} x}.
$$
\ben[\rm (1)]
\item
$f_\theta:\cD_\cL\to\cD_\cL$ is a proper
\cncacomma
$f_\theta(0)=0$, and $f^\prime_\theta(0)=\exp(i \theta)$.
\item
Every proper \cnca $f:\cD_\cL\to\cD_\cL$ fixing the origin
equals one of the $f_\theta$.
\een
\end{proposition}
\begin{proof}
Item (1) follows from a straightforward computation:
\begin{gather*}
(1-f_\theta(X))(1-f_\theta(X))^* \prec 2 \iff
\left(1-\frac{ e^{i \theta} X}{1+X- e^{i \theta} X}\right)
\left(1- \frac{ e^{i \theta} X}{1+X- e^{i \theta} X}\right)^* \prec 2 \\
\iff
\left(
\frac {1+X-2 e^{i \theta} X}{1+X- e^{i \theta} X} \right) \left(
\frac{1+X-2 e^{i \theta} X}{1+X- e^{i \theta} X}
\right)^* \prec 2 \\ \iff
\left(
{1+X-2 e^{i \theta} X}\right)
\left(
{1+X-2 e^{i \theta} X} \right)^*
\prec 2
\left(
{1+X- e^{i \theta} X}\right)
\left(
{1+X- e^{i \theta} X} \right)^* \\ \iff
1+X+X^*-XX^*\succ 0 \iff
(1-X)(1-X)^*\prec 2.
\end{gather*}
Statement (2) follows from the uniqueness, Proposition \ref{prop:derivative0}, of a proper analytic free map
carrying $0$ to $0$
with a prescribed derivative.
\end{proof}
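As a consistency check on the normalizations in (1), one may expand $f_\theta$ in a geometric series around $0$:
$$
f_\theta(x)= e^{i \theta} x\big(1+(1- e^{i \theta})x\big)^{-1}
= e^{i \theta}\big(x-(1- e^{i \theta})x^2+(1- e^{i \theta})^2x^3-\cdots\big),
$$
valid for $\|x\|$ small, so indeed $f_\theta(0)=0$ and $f_\theta^\prime(0)=e^{i\theta}$.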
\subsection{Example of Nonexistence of a Bianalytic Self-map on an NC LMI Domain}
\label{sec:exNo}
Recall that
a bianalytic $f$ with $f(0)=0$ is
completely determined by its differential
at a point.
Clearly, when $f^\prime(0)=1$, then $f(x)=x$.
Does a proper \cncsma exist for each $f^\prime(0)$
of modulus one? In the previous example this was the case.
For the domain in the example in this subsection, again in one variable,
there is no proper \cncsma whose derivative
at the origin is $i$.
The domain will be a ``non-commutative ellipse'' described as $\cD_\cL$
with $\cL(x):= I + A x + A^* x^*$
for $A$ of the form
$$A :=
\begin{pmatrix} C_1 & C_2\\ 0 & -C_1\end{pmatrix},
$$
where $C_1, C_2\in\R$.
There is a choice of parameters
in $\cL$ such that
there is no proper \cncsma
$b$ on $\cD_\cL$ with $b(0)=0$,
and $b'(0) = i$.
Suppose
$b:\cD_\cL\to\cD_\cL$ is a proper \cncsma
with $b(0)=0$,
and $b'(0) = i$. By Theorem \ref{thm:oneone}, $b$ is bianalytic.
In particular, $b[1]:\cD_\cL(1)\to\cD_\cL(1)$ is bianalytic. By the Riemann mapping
theorem there is a conformal map $f$ of the unit disk onto $\cD_\cL(1)$
satisfying $f(0)=0$.
Then
\beq\label{eq:b1}
b[1](z)= f \big( i f^{-1}(z)\big).
\eeq
(Note that $b[1] \circ b[1] \circ b[1] \circ b[1]$ is the identity.)
To give an explicit example, we recall some
special functions involving elliptic integrals.
Let $K(z,t)$ and $K(t)$ be the normal and complete elliptic integrals
of the first kind, respectively, that is,
$$
K(z,t)= \int_0^z \frac {dx}{\sqrt{(1-x^2)(1-t^2x^2)}},
\quad
K(t)=K(1,t).
$$
Furthermore, let
$$
\mu(t)=\frac \pi2 \frac {K(\sqrt{1-t^2})}{K(t)}.
$$
Choose the semi-axes of the non-commutative ellipse as follows:
$$
a= \cosh \left( \frac12 \mu\big(\frac 23\big)\right),
\quad
b=\sinh \left( \frac12 \mu\big(\frac 23\big)\right).
$$
Then
$$
C_1=\frac12 \sqrt{ \frac 1{a^2}-\frac1{b^2}}, \quad
C_2=\frac1b.
$$
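As a numerical aside (an illustrative sketch, not from the original text; it assumes \texttt{scipy}, whose elliptic integrals are parametrized by $m=t^2$ rather than the modulus $t$), the quantities above can be evaluated as follows. Note that since $a>b$, the radicand in the displayed formula for $C_1$ is negative as printed, so the sketch guards it with an absolute value:
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipkinc

# scipy uses the parameter m = t**2, the text uses the modulus t
K  = lambda t: ellipk(t**2)                      # complete K(t)
Kz = lambda z, t: ellipkinc(np.arcsin(z), t**2)  # incomplete K(z, t)
mu = lambda t: 0.5 * np.pi * K(np.sqrt(1 - t**2)) / K(t)

a = np.cosh(0.5 * mu(2/3))
b = np.sinh(0.5 * mu(2/3))
C2 = 1 / b
C1 = 0.5 * np.sqrt(abs(1/a**2 - 1/b**2))  # |.| guards the sign, see above
print(mu(2/3), a, b, C1, C2)
\end{verbatim}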
The desired conformal mapping is \cite{Scw,Sze}
$$
f(z)=\sin \left(
\frac{\pi}{2 K(\frac23)} K \Big( \frac z{\sqrt \frac23},\frac23\Big) \right).
$$
Hence $b[1]$ in \eqref{eq:b1}
can be explicitly computed
(for details see the Mathematica notebook {\tt Example53.nb}
available under {\it Preprints} on
\url{http://srag.fmf.uni-lj.si}). It
has a power series expansion
\beq\label{eq:b12}
\begin{split}
b[1](z) & =
i z-\frac{1}{27} i \left(9-\frac{52
K\left(\frac{4}{9}\right)^2}{\pi ^2}\right)
z^3+i\frac{ \left(9 \pi ^2-52
K\left(\frac{4}{9}\right)^2\right)^2 }{486
\pi ^4}z^5+ O(z^7) \\
& \approx i\, (z+ 0.30572 z^3 + 0.140197 z^5).
\end{split}
\eeq
This power series expansion has a positive radius of convergence $\eps>0$
and thus induces an analytic free mapping
$\cN_\eps\to M(\C)$.
By analytic continuation, this function coincides with $b$.
This enables us to evaluate $b(zN)$ for a nilpotent $N$.
Let $N$ be the $4\times 4$ nilpotent Jordan block (so $N^3\neq 0$ and $N^4=0$),
$$
N=\begin{pmatrix}
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 0 & 0\\
\end{pmatrix}.
$$
Then $r\in\R$ satisfies $rN\in\cD_\cL$ if and only if
$-1.00033 \leq r \leq 1.00033=:r_0$.
(This has been computed symbolically in exact arithmetic using Mathematica;
the bounds given here are rounded approximations.)
However, $b(r_0N)$ lies in the interior of $\cD_\cL$, that is,
$b(r_0N)\in\cD_\cL\setminus\partial\cD_\cL$,
contradicting properness.
(This was established by computing
the $8\times 8$ matrix $\cL\big(b(r_0N)\big)$
symbolically thus ensuring it is exact.
Then we apply a numerical eigenvalue solver to see that it is
positive definite with smallest eigenvalue $0.0114903\ldots$.)
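The eigenvalue check itself can be sketched numerically as well (illustrative and hedged: it inherits the $C_1$ guard above, uses the series coefficients of \eqref{eq:b12}, and relies on the fact that the printed series contains only odd powers, so that $N^4=0$ makes the truncation at $z^3$ exact):
\begin{verbatim}
import numpy as np
from scipy.special import ellipk   # scipy convention: argument is m = t**2

K49 = ellipk(4/9)
c3 = -1j * (9 - 52 * K49**2 / np.pi**2) / 27   # z^3 coefficient in (21)

N = np.diag([1.0, 1.0, 1.0], k=1).astype(complex)   # Jordan block, N^4 = 0
r0 = 1.00033
B = 1j * r0 * N + c3 * r0**3 * np.linalg.matrix_power(N, 3)  # = b(r0 N)

mu23 = 0.5 * np.pi * ellipk(5/9) / ellipk(4/9)       # mu(2/3)
a_ax, b_ax = np.cosh(mu23 / 2), np.sinh(mu23 / 2)
C1 = 0.5 * np.sqrt(abs(1/a_ax**2 - 1/b_ax**2))       # |.| as in the caveat
C2 = 1 / b_ax
A = np.array([[C1, C2], [0, -C1]], dtype=complex)

L = np.eye(8) + np.kron(A, B) + np.kron(A.conj().T, B.conj().T)
print(np.linalg.eigvalsh(L).min())   # positive (~0.0115 in the text)
\end{verbatim}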
We conclude that the proper \cncsma
$b$
does not exist.
\section{Introduction}
Promoted by the European Telecommunications Standards Institute (ETSI), network function virtualization (NFV) has become a cornerstone of the envisaged architecture for 5G systems \cite{mijumbi2016network}. NFV leverages virtualization technologies in order to implement network functionalities on commercial off-the-shelf (COTS) programmable hardware, such as general purpose servers, potentially reducing both capital and operating costs. An important challenge in the deployment of NFV is ensuring carrier grade performance while relying on COTS components. Such components may be subject to temporary unavailability due to malfunctioning, and are generally characterized by randomness in their execution runtimes. The typical solution to these problems involves replicating the virtual machines that execute given network functions on multiple processors, e.g., cores or servers \cite{ETSI,liu2016reliability,herrera2016resource,kang2017trade}.
{\color{black} Among the key applications of NFV is the implementation of centralized radio access functionalities in a cloud radio access network (C-RAN) \cite{nikaein2015processing,ETSINFVCRAN}. As shown in Fig.~\ref{fignfv}, each remote radio head (RRH) of a C-RAN architecture is connected to a cloud processor by means of a fronthaul (FH) link. Baseband functionalities are carried out on a distributed computing platform in the cloud, which can be conveniently programmed and reconfigured using NFV.
The most expensive baseband function in terms of latency to be carried out at the cloud is uplink channel decoding \cite{nikaein2015processing,alyafawi2015critical,nikaein2014openairinterface}.
The implementation of channel decoding in the cloud by means of NFV is faced not only with the challenge of providing reliable operation despite the unreliability of COTS servers, but also with the latency constraints imposed by retransmission protocols.
In particular, keeping decoding latency at a minimum is a major challenge in the implementation of C-RAN owing to timing constraints from the link-layer retransmission protocols \cite{dotsch2013quantitative,rost2014opportunistic,khalili2017uplink}.
In fact, positive or negative feedback signals need to be sent to the users within a strict deadline in order to ensure the proper operation of the protocol.
In \cite{Rodriguez17,rodriguez2018cloud} it is argued that exploiting parallelism across multiple cores in the cloud can reduce the decoding latency, since decoding of a frame completes as soon as each core has finished its task.
However, parallel processing does not address the unreliability of COTS hardware. A different solution is needed in order to address both unreliability and delays associated with cloud decoding.
}
The problem of straggling processors, that is, of processors
lagging behind in the execution of a certain orchestrated
function, has been well studied in the context of distributed computing
{\color{black} \cite{dean,ananthanarayanan2010reining,zaharia,li2016coded,li2015coded,li2016unified}}. Recently, it has been pointed out that, for the important case
of linear functions, it is possible to improve over repetition strategies in terms of the trade-off between performance and
latency by carrying out linear precoding of the data prior to processing, {\color{black} e.g., \cite{Ramchandran,Li,Yang,Tandon,Dutta,Sev17,yu2017polynomial,mallick2018rateless,kosaian2018learning}}. The key idea
is that, by employing suitable linear (erasure) block codes operating over fractions of size $1/K$ of the original data, a function may be completed as soon as any $K$ or more processors, depending on the minimum distance of the code, have completed their operations.
Coding has also been found to be useful {\color{black}in addressing} the straggler problem in the context of coded distributed storage {\color{black} and computing} systems, {\color{black} see, e.g., \cite{wang2015using,joshi2017efficient,ananthanarayanan2013effective,yang2018coded,aktas2017effective}.}
{ \color{black}
In this paper, we explore the use of coded computing {\color{black} to enable} reliable and timely channel decoding in a C-RAN architecture based on distributed unreliable processors. Specifically, we formally and systematically address the analysis of coded NFV for C-RAN uplink decoding.
The only prior work on coded computing for NFV is \cite{Ali}, which provides numerical results concerning a toy example with three processors in which a processor in the cloud is either on or off.
Unlike \cite{Ali}, in this work, we derive analytical performance bounds for a general scenario with any number of servers, random computing runtimes, and random packet arrivals. Specific novel contributions are as follows.
\begin{itemize}
\item
We first consider the transmission of an isolated frame, and develop analytical upper bounds on the frame unavailability probability (FUP) as a function of the allowed decoding delay. The FUP measures the probability that a frame is correctly decoded within a tolerated delay constraint.
The FUP bounds leverage large deviation results for correlated variables \cite{Janson} and depend on the properties of both the uplink linear channel code adopted at the user and the NFV linear code applied at the cloud;
\item
As a byproduct of the analysis we introduce the dependency graph of a linear code and its chromatic number as novel
relevant parameters of a linear code beside minimum distance, blocklength, and rate;
\item
We extend the analysis to account for random frame arrival times, and investigate the trade-off between average decoding latency and frame error rate (FER) for two different queuing policies, whereby the servers carry out either per-frame or continuous decoding;
\item We provide extensive numerical results that demonstrate the usefulness of the derived analytical bounds in both predicting the system performance and enabling the design of NFV codes.
\end{itemize}
}
The rest of the paper is organized as follows. In Section \ref{secModel}, we present the system model focusing, as in \cite{Ali}, on a binary symmetric channel (BSC) for uplink communications. Section \ref{secASY} presents the two proposed upper bounds on the FUP as a function of latency. In Section \ref{SecQueue} we study the proposed system with random frame arrival times, and Section \ref{secnum} provides numerical results.
\vspace{-.5ex}
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale=0.64]{nfvMay18_v1.pdf}\vspace{-.1cm}~\caption{
\footnotesize{NFV model for uplink channel
decoding. The input information frame $\textbf{u}$ is divided into
packets, which are encoded with a linear code
$\mathcal{C}_u$ with generator matrix
$\textbf{G}_u$. The packets are received by the RRH
through a BSC and forwarded to the cloud. Server 0 in the cloud
re-encodes the received packet with a linear code
$\mathcal{C}_c$ in order to enhance the
robustness against potentially straggling Servers
$1,\ldots,N$.
}}~\label{fignfv}
\end{center}
\vspace{-6ex}
\end{figure*}
\section{System Model}\label{secModel}
As illustrated in Fig.~\ref{fignfv}, we consider the uplink of a C-RAN system in which a user communicates with the cloud via a remote radio head (RRH). The user is connected to the RRH via a BSC with cross error probability $\delta$, while the RRH-to-cloud link, typically referred to as fronthaul, is assumed to be noiseless.
Note that the BSC is a simple model for the uplink channel, while the noiseless fronthaul accounts for a typical deployment with higher capacity fiber optic cables.
{\color{black} As we briefly discuss in Section \ref{secConclusion}, the analysis can be generalized to other additive noise channel, such as Gaussian channels. }
The cloud contains a master server, or Server 0, and $N$ slave servers,
i.e., Servers $1,\ldots,N$. The slave servers are characterized by random computing delays as in related
works on coded computation \cite{Ramchandran,Li,Sev17}. Note that we use here the term ``server" to refer to a decoding processor, although, in a practical implementation, this may correspond to a core of the cloud computing platform \cite{Rodriguez17,rodriguez2018cloud}.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.45]{codednfvMay18_v1.pdf}\vspace{-.1cm}~\caption{\footnotesize{
Coded NFV at the cloud: Server 0 re-encodes the
received packets in $\textbf{Y}$ by a linear NFV
code $\mathcal{C}_c$ with generator $\textbf{G}_c$.
Each encoded packet $\tilde{\textbf{y}}_i$ is
then conveyed to Server $i$ for decoding.
}}~\label{figcodednfv}
\end{center}
\vspace{-5ex}
\end{figure}
In the first part of this paper, we consider transmission of a single information frame $\textbf{u}$, while Section \ref{SecQueue} focuses on random frame arrival times and queuing effect delays.
The user encodes an information frame $\textbf{u}$ consisting of $L$ bits. Before encoding, the information frame is divided into $K$ blocks $\textbf{u}_1,\textbf{u}_2,\ldots , \textbf{u}_K \in \{0,1\}^{L/K}$ of equal size, each of them containing $L/K$ bits. As shown in Fig.~\ref{fignfv}, in order to combat noise on the BSC, the $K$ blocks are encoded by an $(n,k)$ binary linear code $\mathcal{C}_u$ of rate $r=k/n$ defined by generator matrix $\textbf{G}_u\in \mathbb{F}_2^{n\times k}$, where $n=L/(rK)$ and $k=L/K$.
Let $\textbf{x}_j\in \{0,1\}^n$ with $j\in\{1,\ldots,K\}$ be the $K$ transmitted packets of length $n$.
At the output of the BSC, the length-$n$ received vector for the $j$th packet at the RRH is given as
\begin{equation}\label{eqEncod}
\textbf{y}_j=\textbf{x}_j\oplus \textbf{z}_j,
\end{equation}
where $\textbf{z}_j$ is a vector of i.i.d. $\mathrm{Bern}(\delta$) random variables (rvs).
The $K$ received packets $(\textbf{y}_1,\textbf{y}_2,\ldots,\textbf{y}_K)$ by the RRH are transmitted to the cloud via the fronthaul link, and the cloud performs decoding. Specifically, as detailed next, we assume that each Server $1,\ldots,N$ performs decoding of a single packet of length $n$ bits while Server 0 acts as coordinator.
Assuming $N\geq K$, we adopt the idea of NFV coding proposed in \cite{Ali}. Accordingly,
as seen in Fig.~\ref{figcodednfv}, the $K$ packets are first linearly encoded by Server 0 into $N\geq K$ coded blocks of the same length $n$ bits, each forwarded to a different server for decoding.
This form of encoding is meant to mitigate the effect of straggling servers in a manner similar to \cite{Ramchandran,Li,Sev17}.
Using an $(N,K)$ binary linear NFV code $\mathcal{C}_c$ with $K\times N$ generator
matrix $\textbf{G}_c\in\mathbb{F}_2^{K\times N}$, the encoded packets are obtained as
\begin{equation}
\tilde{\textbf{Y}}=\textbf{Y}\textbf{G}_c,
\end{equation}
where $\textbf{Y}=[\textbf{y}_1,\ldots,\textbf{y}_K]$ is the $n\times K$ matrix obtained by including the received signal $\textbf{y}_j$ as the $j$th column and $\tilde{\textbf{Y}}=[\tilde{\textbf{y}}_1,\ldots, \tilde{\textbf{y}}_N]$ is
the $n\times N$ matrix whose $i$th column $\tilde{\textbf{y}}_i$ is the input to Server $i$, where $i\in\{1,\ldots,N\}$.
From (\ref{eqEncod}), this vector can be written as
\vspace{-.25cm}
\begin{equation}\label{eqnoise}
\tilde{\textbf y}_i=\sum_{j=1}^K \textbf{y}_j g_{c,ji} =\sum _{j=1}^K \textbf{x}_j g_{c,ji}+\sum_{j=1}^K\textbf{z}_j g_{c,ji},
\vspace{-.25cm}
\end{equation}
where $g_{c,ji}$ is the $(j,i)$th entry of matrix $\textbf{G}_{c}$.
The signal part $\sum _{j=1}^K \textbf{x}_j g_{c,ji}$ in \eqref{eqnoise} is a linear combination of $d_i$ codewords for the rate-$r$ binary code with generator matrix $\textbf{G}_u$, and hence it is a codeword of the same code.
The parameter $d_i$, $i\in \{1,\ldots, N\}$, denotes the Hamming weight of the $i$th column of matrix $\textbf{G}_{c}$, where $0\leq d_i\leq K$.
Each server $i$ receives as input $\tilde{\textbf{y}}_i$, from which it can decode the codeword
$\sum_{j=1}^K\textbf{x}_jg_{c,ji}$. This decoding operation is affected by the noise
vector $\sum_{j=1}^K\textbf{z}_jg_{c,ji}$ in \eqref{eqnoise}, which has i.i.d. $\mathrm{Bern}(\gamma_i$) elements. Here, $\gamma_i$ is obtained as the $(1,2)$ entry of the matrix $\textbf{Q}^{d_i}$, with $\textbf{Q}$ being
{\color{black} the transition matrix of the BSC with cross over probability $\delta$, i.e.,
\begin{equation}\label{eq:Qmatrix}
{\textbf{Q}=
\left[ {\begin{array}{cc}
1-\delta & \delta\\
\delta & 1-\delta
\end{array} } \right].}
\end{equation}
As an example, $d_i=2$ implies a bit flipping probability of $\gamma_i=2\delta(1-\delta)$}.
Note that a larger value of $d_i$ yields a larger bit flipping probability $\gamma_i$.
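For illustration (a sketch added here, not part of the original text; it assumes \texttt{numpy}), $\gamma_i$ can be computed directly, and collapses to the closed form $\gamma_i=(1-(1-2\delta)^{d_i})/2$:
\begin{verbatim}
import numpy as np

def gamma(delta, d):
    Q = np.array([[1 - delta, delta], [delta, 1 - delta]])
    return np.linalg.matrix_power(Q, d)[0, 1]   # the (1,2) entry of Q**d

print(gamma(0.01, 2), 2 * 0.01 * 0.99)          # both equal 0.0198
\end{verbatim}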
We define as $\mathrm{P}_{n,k}(\gamma_i)$ the decoding error probability of {\color{black} the $(n,k)$ linear user code} at Server $i$, which can be upper bounded by using \cite[Theorem 33]{polyanskiy}.
Server $i$ requires a random time $T_i=T_{1,i}+T_{2,i}$ to complete decoding, which is modeled as the sum of a component $T_{1,i}$ that is independent of the workload and a component $T_{2,i}$ that instead grows with the size $n$ of the packet processed at each server, respectively. The first component accounts, e.g., for processor unavailability periods, while the second models the execution runtime from the start of the computation. The first variable $T_{1,i}$ is assumed to have an exponential probability density function (pdf) $f_1(t)$ with mean $1/\mu_1$, while the variable $T_{2,i}$ has a shifted exponential distribution with cumulative distribution function (cdf) \cite{Amirhossein}
\begin{equation}\label{cdfTime}
F_2(t)=1-\exp{\left(-\frac{rK\mu_2}{L}\left(t-a\frac{L}{rK}\right)\right),}
\end{equation}
for $t\geq aL/(rK)$ and $F_2(t)=0$ otherwise. The parameter $a$ represents the minimum processing time per input bit, while $1/\mu_2$ is the average additional time needed to process one bit.
{\color{black}
As argued in \cite{Ramchandran,Amirhossein}, the shifted exponential model provides a good fit for the distribution of computation times over cloud computing environments such as Amazon EC2 clusters.
}
The cdf of the time $T_i$ can hence be written as the integral $F(t)=\int_{0}^{t}f_{1}(\tau)F_{2}(t-\tau)d\tau$.
We also assume that the runtime rvs $\{T_i\}_{i=1}^N$ are mutually independent.
Due to (\ref{cdfTime}), the probability that exactly $l$ of the $N$ servers have finished decoding by time $t$ is given as
\begin{equation}\label{eqOrderStatistic}
a_l(t)={\color{black}{N \choose l}}F(t)^l(1-F(t))^{N-l}.
\end{equation}
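A sketch of the runtime model (illustrative, assuming \texttt{scipy}; the parameter names follow the text, and the convolution defining $F(t)$ is evaluated by numerical quadrature):
\begin{verbatim}
import numpy as np
from scipy.special import comb
from scipy.integrate import quad

def F2(t, n, a, mu2):                 # cdf (5), with n = L/(r K)
    return 1 - np.exp(-(mu2 / n) * (t - a * n)) if t >= a * n else 0.0

def F(t, n, a, mu1, mu2):             # cdf of T_i = T_{1,i} + T_{2,i}
    f1 = lambda tau: mu1 * np.exp(-mu1 * tau)   # pdf of T_{1,i}
    return quad(lambda tau: f1(tau) * F2(t - tau, n, a, mu2), 0, t)[0]

def a_l(t, l, N, n, a, mu1, mu2):     # (6): exactly l of N servers done
    p = F(t, n, a, mu1, mu2)
    return comb(N, l) * p**l * (1 - p)**(N - l)
\end{verbatim}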
Let $d_{\min}$ be the minimum distance of the NFV code $\mathcal{C}_c$. Due to \eqref{eqnoise},
Server 0 in the cloud is able to decode the message $\textbf {u}$ or equivalently the $K$ packets $\textbf {u}_j$ for $j\in\{1,\ldots,K\}$, as soon as $N-d_{\min}+1$ servers have decoded successfully. Let $\hat{\textbf{u}}_i$ be the output of the $i$th server in the cloud upon decoding. We assume that an error detection mechanism, such as a cyclic redundancy check (CRC), is in place, so that the output made available to Server 0 by Server $i$ is
\[
\hat{\textbf{u}}_i =
\begin{cases}
\hat{\textbf{u}}_i,& \text{for correct decoding},\\
\emptyset, & \text{otherwise}.
\end{cases}
\]
The output $\hat{\textbf{u}}(t)$ of the decoder at Server 0 at time $t$ is then a function of the vectors $\hat{\textbf{u}}_i(t)$ for $i\in\{1,\ldots,N\}$, where
\[
\hat{\textbf{u}}_i(t)=
\begin{cases}
\hat{\textbf{u}}_i,& \text{if} ~T_i\leq t,\\
\emptyset, & \text{otherwise}.
\end{cases}
\]
Finally, the frame unavailability probability (FUP) at time $t$ is defined as the probability
\begin{equation}\label{eqgoalerror}
\mathrm{P}_u(t)=\mathrm{Pr}\left[ \hat{\textbf{u}}(t)\neq\textbf{u}\right].
\end{equation}
The event $\{\hat{\textbf{u}}(t)\neq\textbf{u}\}$ occurs when
either not enough servers have completed decoding or many servers have completed but failed decoding by time $t$. We also define the FER as
\begin{equation}\label{FEReq}
\mathrm{P}_e = \lim _{t\rightarrow \infty} \mathrm{P}_u(t).
\end{equation}
The FER measures the probability that, when all servers have completed decoding, a sufficiently large number, namely larger than $N-d_{\min}$, has decoded successfully.
\section{Bounds on the Frame Unavailability Probability}\label{secASY}
In this section we derive analytical bounds on the FUP $\mathrm{P}_u(t)$ in \eqref{eqgoalerror} as a function of the decoding latency $t$.
\subsection{Preliminaries}
Each server $i$ with $i\in\{1,\ldots,N\}$ decodes successfully its assigned packet $\tilde{\textbf{y}}_i$ if: (\textit{i}) the server completes decoding by time $t$;
(\textit{ii}) the decoder at the server is able to correct the errors caused by the BSC.
Furthermore as discussed, an error at Server 0 occurs at time $t$ if the number of servers that have successfully decoded by time $t$ is smaller than $N-d_{\min}+1$.
To evaluate the FUP, we hence define the indicator variables
$C_i(t)=\mathds{1}\{T_i\leq t\}$ and $D_i$
which are equal to 1 if the events (\textit{i}) and (\textit{ii}) described above occur, respectively, and zero otherwise. Based on these definitions, the FUP is equal to
\begin{eqnarray}\label{eqFER}
\mathrm{P}_u(t)&=& \mathrm{Pr}\left[ \sum_{i=1}^N C_i(t) D_i\leq N-d_{\min} \right].
\end{eqnarray}
The indicator variables $C_i(t)$ are independent Bernoulli rvs across the servers $i\in\{1,\ldots,N\}$, due to the independence assumption on the rvs $T_i$. However, the indicator variables $D_i$ are dependent Bernoulli rvs.
The dependence of the variables $D_i$ is caused by the fact that the noise terms
$\sum_{j=1}^K\textbf{z}_jg_{c,ji}$ in \eqref{eqnoise} generally have common terms.
In particular, if two columns $i$ and $j$ of the generator matrix $\textbf{G}_c$ have at least a 1 in the same row, then the decoding indicators $D_i$ and $D_j$ are correlated.
This complicates the evaluation of bounds on the FUP \eqref{eqFER}.
\subsection{ Dependency Graph and Chromatic Number of a Linear Code}\label{subsecdependency}
To capture the correlation among the indicator variables $D_i$, we
introduce here the notion of the \emph{dependency graph} and its
chromatic number for a linear code. These appear to be novel properties of a linear code, and we will argue below that they determine the performance of the NFV code $\mathcal{C}_c$ for the application at hand.
\begin{Definition}\label{dependencyGraph}
Let $\textbf{G}\in \mathbb{F}_2^{K'\times N'}$ be a generator matrix of a linear code. The dependency graph $\mathcal{G}(\textbf{G})=(\mathcal{V},\mathcal{E})$ comprises a set $\mathcal{V}$ of $N'$ vertices and a set $\mathcal{E}\subseteq \mathcal{V}\times \mathcal{V}$ of edges, where edge $(i,j)\in \mathcal{E}$ is included if both the $i$th and $j$th columns of $\textbf{G}$ have at least a $1$ in the same row.
\end{Definition}
\begin{Example}\label{exGraph}
For an $(8,4)$ NFV code $\mathcal{C}_c$ with the following generator matrix
\begin{equation}\label{generatorGc}
{
\textbf{G}_c=
\left[ {\begin{array}{cccccccc}
1&0&0&0&0&1&1&0\\
0&0&0&1&1&0&0&1\\
0&1&0&0&0&0&1&1\\
1&0&1&0&1&0&0&0
\end{array} } \right],}
\end{equation}
the resulting dependency graph $\mathcal{G}(\textbf{G}_c)$ is shown in Fig.~\ref{fig:graph}.
\end{Example}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.42]{PosterGraph.pdf} ~\caption{\footnotesize{Dependency graph associated with the (8,4) NFV code $\mathcal{C}_c$ in Example \ref{exGraph}.}}~\label{fig:graph}
\vspace{-5ex}
\end{figure}
\begin{figure*}[!b]
{\fontsize{10pt}{12pt}
\hrulefill
\begin{equation}\label{eqasymm}
\begin{split}
\mathrm{P}_u(t)\leq
\exp\left(-\frac{S(t)}{b^2(t)\mathcal{X}(\textbf{G}_c)}~\varphi\left(\frac{4b(t)\left(NF(t)-F(t)\sum_{i=1}^N\mathrm{P}_{n,k}(\gamma_i)-N+d_{\min}\right)}{5S(t)}\right) \right),
\end{split}
\end{equation} }
\end{figure*}
The chromatic number $\mathcal{X}(\textbf{G})$ of the graph $\mathcal{G}(\textbf{G})$ will play an important role in the analysis. We recall that the chromatic number is
the smallest number of colors needed to color the vertices of $\mathcal{G}(\textbf{G})$, such that no two adjacent vertices share the same color (see the example in Fig.~\ref{fig:graph}).
Generally, finding the chromatic number of a graph is NP-hard \cite{NP}. However, a simple upper bound on $\mathcal{X}(\textbf{G})$ is given as \cite{Brook}
\begin{equation}\label{Brook}
\mathcal{X}(\textbf{G})\leq \Delta(\textbf{G})+1,
\end{equation}
where $\Delta(\textbf{G})$ is the maximum degree of a graph $\mathcal{G}(\textbf{G})$.
A consequence of \eqref{Brook} is the following.
\begin{Lemma}
Let $\textbf{G} $ be a $K'\times N'$ matrix, where $\alpha_r$ and $\alpha_c$ are the maximum Hamming weights of the rows and columns in $\textbf{G}$, respectively. Then the chromatic number of the corresponding dependency graph $\mathcal{G}(\textbf{G})$ is upper bounded as
\begin{equation}\label{eqChromatic}
\mathcal{X}(\textbf{G})\leq \min\{N,\alpha_c(\alpha_r-1)+1 \}.
\end{equation}
\end{Lemma}
\begin{proof}
According to Definition \ref{dependencyGraph} we have the upper bound
$\Delta(\textbf{G})\leq \alpha_c (\alpha_r-1)$ and hence \eqref{eqChromatic} follows directly from \eqref{Brook}.
\end{proof}
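The following sketch (an illustrative addition; it assumes \texttt{numpy} and \texttt{networkx}) builds the dependency graph of Definition \ref{dependencyGraph} and upper-bounds its chromatic number by greedy coloring, here applied to the generator matrix of Example \ref{exGraph}:
\begin{verbatim}
import numpy as np
import networkx as nx

def dependency_graph(G):
    K, N = G.shape
    graph = nx.Graph()
    graph.add_nodes_from(range(N))
    for i in range(N):
        for j in range(i + 1, N):
            if np.any(G[:, i] * G[:, j]):   # a common 1 in some row
                graph.add_edge(i, j)
    return graph

Gc = np.array([[1,0,0,0,0,1,1,0],
               [0,0,0,1,1,0,0,1],
               [0,1,0,0,0,0,1,1],
               [1,0,1,0,1,0,0,0]])
coloring = nx.greedy_color(dependency_graph(Gc))
print(max(coloring.values()) + 1)   # an upper bound on X(Gc)
\end{verbatim}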
\vspace{-.5ex}
\subsection{Large Deviation Upper Bound}\label{subseclargedevi}
\begin{figure*}[!b]
{\fontsize{10pt}{12pt}
\begin{equation}\label{uppereqTigther}
\begin{split}
\mathrm{P}_u(t)\leq 1- \sum_{l=N-d_{\min}+1}^N \frac{a_l(t)}{{N \choose l}} \sum_{\substack{ \mathcal{A}\subseteq \{1,\ldots,N\}: \\ |\mathcal{A}|=l}} \left(1-\exp \left(-\frac{S_{\mathcal{A}}}{b_{\mathcal{A}}^2\mathcal{X}(\textbf{G}_{\mathcal{A}})}\varphi \left(\frac{4b_{\mathcal{A}} \left (l-N+d_{\min}-\mathrm{P}_{n,k}^{\mathcal{A}} \right )}{5S_{\mathcal{A}}}\right)\right ) \right).
\end{split}
\end{equation} }
\end{figure*}
In this subsection, we {\color{black} derive an upper bound} on the FUP. The bound is based on the large deviation result in \cite{Janson} for
the tail probabilities of rvs
$X=\sum_{i=1}^M X_i$, where the rvs $X_{i}$ are generally dependent. {\color{black} We refer to this bound as the large deviation bound (LDB)}.
The correlation of rvs $\{X_i\}$ is described in \cite{Janson} by a dependency graph. This is defined as any graph $\mathcal{G}(X)$ with the $X_i$ as vertices such that, if a vertex $i$ is not connected to any vertex in a subset $\mathcal{J}\subset\{1,\ldots,M\}\setminus\{i\}$, then $X_i$ is independent of $\{X_j\}_{j\in\mathcal{J} }$.
\begin{Lemma}[\!\cite{Janson}]\label{JansonLemmaTighter}
Let $X=\sum _{i=1}^M{X_i}$, where $X_{i}\sim \text{Bern} (p_i)$ and $p_i\in(0,1)$ are generally dependent.
For any $b\geq 0$, such that the inequality
$X_{i} -\mathbb{E}(X_{i})\geq -b$
holds for all $i\in\{1,\ldots,M\}$ with probability one, and for any
$\tau\geq 0$
we have
\begin{equation}\label{eqlem1}
\mathrm{Pr}[X\leq \mathbb{E}(X)-\tau]\leq \exp\left(-\frac{S}{b^2\mathcal{X}(\mathcal{G}(X))}~\varphi{\left (\frac{4b\tau}{5S}\right )} \right),
\end{equation}
where
$S\overset{\Delta}{=}\sum_{i=1}^M\text{Var}(X_i)$
and
$\varphi(x)\overset{\Delta}{=}(1+x)\ln(1+x)-x$.
The same bound \eqref{eqlem1} holds for
$\mathrm{Pr}(X\geq \mathbb{E}(X)+\tau)$, where
$X_i -\mathbb{E}(X_i)\leq b$
with probability one.
\end{Lemma}
The following theorem uses Lemma \ref{JansonLemmaTighter} to derive a bound on the FUP.
\begin{Theorem}\label{ThmDevi}
Let $\mathrm{P}_{n,k}^{\min}=\min_i\{\mathrm{P}_{n,k}(\gamma_i)\}_{i=1}^N$.
For all
\begin{equation}\label{conditionTimemu10}
t\geq F^{-1}\left ( \frac{N-d_{\min} }{N-\sum_{i=1}^N\mathrm{P}_{n,k}(\gamma_i)} \right ),
\end{equation}
the FUP is upper bounded by
in {\color{black}\eqref{eqasymm}}, shown at the bottom of the page,
where
$b(t)\overset{\Delta}{=} F(t)\left (1- \mathrm{P}_{n,k}^{\min}\right ) $
and
$S(t)\overset{\Delta}{=} \sum_{i=1}^N F(t)\left (1-\mathrm{P}_{n,k}(\gamma_i)\right ) \left (1-F(t)(1-\mathrm{P}_{n,k}(\gamma_i)) \right )$.
\end{Theorem}
The upper bound \eqref{eqasymm} on the FUP captures the dependence of the FUP on both the channel code and the NFV code. In particular, the bound is an increasing function of the error probabilities $\mathrm{P}_{n,k}(\gamma_i)$, which depend on both codes. It also depends on the NFV code through the parameters $d_{\min}$ and $\mathcal{X}(\textbf{G}_c)$.
\begin{proof}
Let $X_i(t)\overset{\Delta}{=}C_i(t)D_i$ and $X(t)=\sum_{i=1}^NX_i(t)$, where
$X_i(t)$ are dependent Bernoulli rvs with probability $\mathbb{E}[X_i(t)]=\mathrm{Pr}[X_i(t)=1]=F(t)\left (1-\mathrm{P}_{n,k}(\gamma_i)\right )$.
It can be seen that a valid dependency graph $\mathcal{G}(X)$ for the variables $\{X_i\}$ is the dependency graph
$\mathcal{G}(\textbf{G}_c)$ defined above. This is due to the fact that, as
discussed in Section~\ref{subsecdependency}, the rvs~$X_i$ and $X_j$ can be dependent only if the $i$th and $j$th column of $\textbf{G}_c$ have at least a 1 in a common row.
We can hence apply Lemma \ref{JansonLemmaTighter} for every time $t$ by selecting
$\tau=\mathbb{E}(X )-N+d_{\min} $, and $b(t)$ as defined
above. Note that this choice of
$b(t)$ meets the
constraint for $b$ in Lemma \ref{JansonLemmaTighter}.
For $1/ \mu_1=0$, \eqref{conditionTimemu10} can be simplified as follows:
\vspace{-.3cm}
\begin{equation}\label{conditionTime}
t\geq n\left (a-\frac{1}{\mu_2}\ln \left( \frac{d_{\min}-\sum_{i=1}^N\mathrm{P}_{n,k}(\gamma_i)}{N-\sum_{i=1}^N\mathrm{P}_{n,k}(\gamma_i)} \right ) \right ).\vspace{-.3cm}
\end{equation}
\end{proof}
\vspace{-1ex}
{\color{black}
\begin{Remark}
When $t\rightarrow \infty$, we have the limit $\lim_{t\rightarrow\infty} F(t)=1$, which implies that eventually all servers complete decoding. Letting $d^{\max}\overset{\Delta}{=}\max\{d_i\}_{i=1}^N$ and
$\gamma\overset{\Delta}{=}\textbf{Q}^{d^{\max}}(1,2)$, the $(1,2)$ entry of the matrix $\textbf{Q}^{d^{\max}}$, the bound \eqref{eqasymm} reduces to
\begin{align}\label{eqineq1}
\!{\color{black}\lim_{t\rightarrow\infty}\!\mathrm{P}_u(t)}\!\leq\!\exp \!\Bigg(\!\frac{-N\mathrm{P}_{n,k}(\gamma)}{(1\!-\!\mathrm{P}_{n,k}(\!\gamma)\!)\mathcal{X}(\!\textbf{G}_c)\!}
\varphi\Bigg(\hspace{-1ex}\frac{4\!\left(\!d_{\min}/N\!-\!\mathrm{P}_{n,k}(\!\gamma)\!\right)}{5\mathrm{P}_{n,k}(\gamma)}\hspace{-1ex}\Bigg)\hspace{-1ex}\Bigg).
\end{align}
This expression demonstrates the dependence of the FUP bound \eqref{eqasymm} on the number of servers $N$, the decoding error probability $\mathrm{P}_{n,k}(\gamma)$ for each server, the chromatic number $\mathcal{X}(\textbf{G}_c)$, and minimum distance $d_{\min}$ of the NFV code.
In particular, it can be seen that the FUP upper bound \eqref{eqineq1} is a decreasing function of $d_{\min}$, while it increases with the chromatic number, $\mathrm{P}_{n,k}(\gamma)$ and with $d^{\max}$.
\end{Remark}
}
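For concreteness, the asymptotic bound \eqref{eqineq1} is straightforward to evaluate numerically (a sketch; the per-server error probability $\mathrm{P}_{n,k}(\gamma)$ is an input that would come from a finite-blocklength bound such as \cite[Theorem 33]{polyanskiy}, and the value $10^{-3}$ below is only a placeholder):
\begin{verbatim}
import numpy as np

def phi(x):
    return (1 + x) * np.log(1 + x) - x

def fup_bound_asymptotic(N, d_min, chi, P_nk):
    # valid when d_min / N > P_nk, so the argument of phi is nonnegative
    tau = 4 * (d_min / N - P_nk) / (5 * P_nk)
    return np.exp(-N * P_nk / ((1 - P_nk) * chi) * phi(tau))

# NFV code C_c of Example 1: N = 8, d_min = 3, chi(G_c) = 3
print(fup_bound_asymptotic(8, 3, 3, 1e-3))
\end{verbatim}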
\vspace{-2ex}
\subsection{Union Bound}\label{subsecunion}
As indicated in Theorem \ref{ThmDevi},
the large deviation based bound in \eqref{eqasymm} is only valid for large
enough $t$, as can be observed from \eqref{conditionTime}.
Furthermore, it may generally not be tight, since it neglects the independence of the indicator variables $C_i$.
In this subsection, a generally tighter but more complex {\color{black} union bound (UB)} is derived that is valid for all times $t$.
\begin{Theorem}\label{ErrorBoundThm}
For any subset $\mathcal{A}\subseteq\{1,\ldots,N\}$, define
\begin{equation*}
\mathrm{P}_{n,k}^{\min(\mathcal{A})}\overset{\Delta}{=}\min\{\mathrm{P}_{n,k}(\gamma_i)\}_{i\in\mathcal{A}}~~ \text{and}~~
\mathrm{P}_{n,k}^{\mathcal{A}} \overset{\Delta}{=}\sum_{i\in \mathcal{A}} \mathrm{P}_{n,k}(\gamma_i),
\end{equation*}
and let
$\textbf{G}_{\mathcal{A}}$
be the
$K\times |\mathcal{A}|$,
submatrix of
$\textbf{G}_c$,
with column indices in the subset $\mathcal{A}$.
Then, the FUP is upper bounded by
\eqref{uppereqTigther}, shown at the bottom of the page,
where $S_{\mathcal{A}}\triangleq\sum_{i\in \mathcal{A}} \mathrm{P}_{n,k}(\gamma_i) \left (1-\mathrm{P}_{n,k}(\gamma_i) \right )$ and $b_{\mathcal{A}}\overset{\Delta}{=}1-\mathrm{P}_{n,k}^{\min(\mathcal{A})}$.
\end{Theorem}
\begin{proof}
Let $I_i=1-D_i$ be the indicator variable which equals 1 if Server $i$ fails decoding.
Accordingly, we have $I_i\sim \mathrm{Bern}(\mathrm{P}_{n,k}(\gamma_i))$.
For each subset $\mathcal{A}\subseteq \{1,\ldots ,N\}$, let
$I_{\mathcal{A}}=\sum _{i\in \mathcal{A}} I_i$.
The complement of the FUP $\mathrm{P}_{s}(t)=1-\mathrm{P}_{u}(t)$ can hence be written as
\begin{align}
\mathrm{P}_{s}(t)=& \mathrm{Pr}\left[ \sum_{i=1}^N C_i(t) D_i > N-d_{\min} \right]\\
=&{\color{black}\sum_{l=N-d_{\min}+1}^N \frac{a_l(t)}{{N \choose l}} \sum_{\substack{ \mathcal{A}\subseteq \{1,\ldots,N\}:\\ |\mathcal{A}|=l}}}\nonumber\\
&{\color{black}\cdot \sum_{j=N-d_{\min}+1}^l{ \mathrm{Pr}\left[\substack{j~\text{servers from~}\mathcal{A}~\text{decode successfully}\\\text{ and}\\l-j~\text{servers from~}\mathcal{A}~\text{fail to decode }}\right]}} \\
=& \sum_{l=N-d_{\min}+1}^N \frac{a_l(t)}{{N \choose l}} \sum_{\substack{ \mathcal{A}\subseteq \{1,\ldots,N\}: \\ |\mathcal{A}|=l}} \left(1-\mathrm{Pr}\left[I_\mathcal{A}\geq l-N+d_{\min}\right]\right).\label{eqthm}\vspace{-.3cm}
\end{align}
We can now apply Lemma \ref{JansonLemmaTighter} to the probability in \eqref{eqthm} by noting that
$\mathcal{G}(\mathbf{G}_{\mathcal{A}})$ is a valid dependency graph for the variables $\{I_i\}$, $i\in \mathcal{A}$.
In particular, we apply Lemma \ref{JansonLemmaTighter} by setting
$\tau_{\mathcal{A}}= l-N+d_{\min}-\mathbb{E}(I_{\mathcal{A}}) $,
$b_{\mathcal{A}}\geq I_i-\mathbb{E}[I_i]$,
and
$S_\mathcal{A}=\sum_{i \in \mathcal{A}} \mathrm{Var}~ (I_i)$, leading to \vspace{-0.3cm}
\begin{multline}\label{eqineq}
\mathrm{Pr}\left[I_\mathcal{A}\geq l-N+d_{\min}\right]\leq\\
\exp \left(-\frac{S_{\mathcal{A}}}{b_{\mathcal{A}}^2\mathcal{X}(\textbf{G}_{\mathcal{A}})}\varphi \left(\frac{4b_{\mathcal{A}} \left (l-N+d_{\min}-\mathrm{P}_{n,k}^{\mathcal{A}} \right )}{5S_{\mathcal{A}}}\right)\right ).
\end{multline}
By substituting \eqref{eqineq} into \eqref{eqthm}, the proof is completed.
\end{proof}
\vspace{-1.5ex}
\section{Random Arrivals and Queuing}\label{SecQueue}
In this section we extend our analysis from one to multiple frames transmitted by the users. To this end, we study the system illustrated in Fig.~\ref{fignfvqueue} with random frame arrival times and queueing at the servers. We specifically focus on the analysis of the trade-off between average latency and FER.
\subsection{System Model}
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale=0.64]{queueFigSep71.pdf}\vspace{-.1cm}~\caption{\footnotesize{In the model studied in Section \ref{SecQueue},
frames arrive at the receiver according to a Poisson process with parameter $\lambda$. Server 0 in the cloud encodes the received frames using an NFV code and forwards the encoded packets to servers $1,\ldots,N$ for decoding.
}}~\label{fignfvqueue}
\end{center}
\vspace{-5ex}
\end{figure*}
As illustrated in Fig.~\ref{fignfvqueue}, we assume that the arrival times of the received frames are random and distributed according to a Poisson process with a rate of $\lambda$ frames per second. Upon arrival, Server 0 applies an NFV code to any received frame $\textbf{y}^r$ for $r=1,2,\ldots$, as described in Section II and sends each resulting coded packet $\tilde{\textbf{y}}_i^r$ to Server $i$, for $i=1,\ldots,N$. At Server $i$, each packet $\tilde{\textbf{y}}_i^r$ enters a first-come-first-serve queue. After arriving at the head of the queue, each packet $\tilde{\textbf{y}}_i^r$ requires a random time $T_i$ to be decoded by Server $i$. Here, we assume that $T_i$ is distributed according to an exponential distribution in \eqref{cdfTime} with {\color{black} an average processing time of $1/\mu_2$ per bit. Furthermore, the average time to process a frame of $n$ bits is denoted as $1/\mu$.}
Also, the random variables $T_i$ are i.i.d. across servers.
If the NFV code has minimum distance $d_{\min}$, as soon as $N-d_{\min}+1$ servers decode successfully
their respective packets derived from frame $\textbf{y}^r$, the information frame $\textbf{u}^r$ can be decoded at Server 0. We denote as $T$ the average overall latency for decoding frame $\textbf{u}^r$, which includes both queuing and processing.
Using \eqref{FEReq}, \eqref{eqFER} and the fact that all servers complete decoding almost surely as $t\rightarrow\infty$, that is $C_i(t)\rightarrow 1$ as $t\rightarrow\infty$, the FER
is equal to
\begin{align}\label{queueFERR}
\mathrm{P}_e = \mathrm{Pr} \left[ \sum _{i=1}^N I_i \geq d_{\min} \right],
\end{align}
where $I_i$ is the indicator variable that equals $1$ if decoding at Server $i$ fails. This probability can be upper bounded by the following corollary of Theorem \ref{ThmDevi}.
\begin{cor}\label{propFERqueue}
The FER defined in \eqref{queueFERR} is upper bounded by
\begin{equation}\label{eqqueueFER}
\mathrm{P}_e\leq \exp\!\left(\!\frac{-S}{b^2\mathcal{X}(\textbf{G}_{c})} \varphi\!\left(\!\frac{4b\! \left(\!d_{\min}\!-\!\sum _{i=1}^N\! \mathrm{P}_{n,k}(\gamma_i)\! \right)}{5S}\!\right)\!\right ),
\end{equation}
where $S\triangleq\sum_{i=1}^N \mathrm{P}_{n,k}(\gamma_i) \left (1-\mathrm{P}_{n,k}(\gamma_i) \right )$ and $b\overset{\Delta}{=}1-\mathrm{P}_{n,k}^{\min}$.
\begin{proof}
The result follows from Theorem \ref{ThmDevi} by selecting $\tau = d_{\min}- \sum _{i=1}^N \mathrm{P}_{n,k}(\gamma_i)$.
\end{proof}
\end{cor}
We now discuss the computation of the average delay $T$ for different queueing management policies.
\begin{figure*}[t!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Fig1aaaMAY30Yash.pdf}
\caption{\footnotesize{Parallel, single server and repetition code.}}
\label{fig:01}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Fig1bbbJune2Yasha.pdf}
\caption{\footnotesize{Split repetition code, SPC code and $\mathcal{C}_c$ code.}}
\label{fig:02}
\end{subfigure}%
\caption{\footnotesize{Decoding latency versus FUP for $L=504,N=8,1/\mu_1=0, \mu_2=10,a=1,\delta=0.01,r=0.5):$ (a) LDB, UB and Exact FUP for the {\color{black} parallel}, single-server, and repetition coding; (b) LDB, UB and Monte Carlo simulation (``MC Sim.'') results for split repetition code, SPC code, and the NFV code $\mathcal{C}_c$ defined in \eqref{generatorGc}.} }\label{fig:5ab}
\end{figure*}
\subsection{Per-Frame Decoding}
We first study the system under a queue management policy whereby only one frame $\textbf{y}^r$ is decoded at any time. Therefore, all servers wait until at least $N-d_{\min}+1$ servers have completed decoding of their respective packets $\tilde{\textbf{y}}^r_i$ before moving to the next frame $r+1$, if this is currently available in the queues. Furthermore, as soon as Server 0 decodes a frame, the corresponding packets still being present in the servers' queues are evicted.
As a result, the overall system can be described
as an M/G/1 queue with arrival rate $\lambda$ and service time distributed according to the $(N-d_{\min}+1)$th order statistic of the exponential distribution
{\color{black} \cite{Joshi}. }
The latter has the pdf \cite{ross2014introduction}
\eqref{pdfOrderStatistics}, shown at the bottom of the page,
where $F_{T}(t)$ and $f_{T}(t)$ are the cdf and pdf of rv $T_i$, respectively. This queueing system was also studied in the context of distributed storage systems.
Using the Pollaczek-Khinchin formula \cite{PZK}, the average delay of an M/G/1 queue can be obtained as
\eqref{ResponseTimeUB}, shown at the bottom of the page,
where $H_N$ and $H_{N^2}$ are generalized harmonic numbers, defined by
$H_N=\sum_{i=1}^N\frac{1}{i}$ and $H_{N^2}=\sum_{i=1}^N\frac{1}{i^2}$ \cite{Joshi}.
Note that the queue is stable, and hence the average delay \eqref{ResponseTimeUB} is finite, if the inequality $n\lambda(H_N-H_{d_{\min}-1})<\mu (N-d_{\min}+1)$ holds. We refer to the described queue management scheme as per-frame decoding (pfd). {\color{black}{ This set-up is equivalent to the fork-join system studied in \cite{Joshi}. }}
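A sketch evaluating \eqref{ResponseTimeUB} (illustrative; pure Python, with the generalized harmonic numbers computed directly and the unit conventions of \eqref{ResponseTimeUB} left symbolic):
\begin{verbatim}
def T_pfd(lam, mu, n, N, d_min):
    H1 = sum(1.0 / i for i in range(d_min, N + 1))    # H_N - H_{d_min-1}
    H2 = sum(1.0 / i**2 for i in range(d_min, N + 1)) # H_{N^2} - H_{(d_min-1)^2}
    k = N - d_min + 1
    rho = lam * n * H1 / (mu * k)       # utilization; stability needs rho < 1
    assert rho < 1, "unstable queue"
    return (n * H1 / (k * mu)
            + lam * n**2 * (H1**2 + H2) / (2 * k**2 * mu**2 * (1 - rho)))
\end{verbatim}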
\begin{figure*}[!b]
{\fontsize{10pt}{12pt}
\hrulefill
\begin{equation}\label{pdfOrderStatistics}
f_{T_{N-d_{\min}+1:N}}(t)=\frac{N!}{(N-d_{\min})!(d_{\min}-1)!}f_T(t)F_T(t)^{N-d_{\min}}(1-F_T(t))^{d_{\min}-1},
\end{equation}
\begin{equation}\label{ResponseTimeUB}
T_{\text{pfd}} = \frac{n(H_N-H_{d_{\min}-1})}{(N-d_{\min}+1)\mu}+\frac{\lambda n^2 [ (H_N-H_{d_{\min}-1})^2+(H_{N^2}-H_{(d_{\min}-1)^2})]}{2 (N-d_{\min}+1)^2 \mu ^{2} [ 1-\lambda n \mu ^{-1}(N-d_{\min}+1)^{-1} (H_N-H_{d_{\min}-1})]},
\end{equation}}
\end{figure*}
\subsection{Continuous Decoding}
As an alternative queue management policy, as soon as any Server $i$ decodes its packet $\tilde{\textbf{y}}_i^r$, it starts decoding the next packet $\tilde{\textbf{y}}_i^{r+1}$ in its queue, if this is currently available. Furthermore, as above, as soon as Server 0 decodes a frame $\textbf{y}^r$, all corresponding packets $\tilde{\textbf{y}}^r_i$ still in the servers' queues are evicted.
We refer to this queue management policy as continuous decoding (cd).
The average delay \eqref{ResponseTimeUB} of per-frame decoding is an upper bound for the average delay of continuous decoding, i.e., we have $T_{\text{cd}} \leq T_{\text{pfd}}$ \cite{Joshi}. This is because, with per-frame decoding, all $N$ servers are blocked until $N-d_{\min}+1$ servers decode their designed packets. We evaluate the performance of continuous decoding using Monte Carlo methods in the next section.
\section{ Simulation Results}\label{secnum}
\vspace{-.5ex}
In this section we provide numerical results that offer additional insights into the performance trade-offs of the system {\color{black}shown} in Fig.~\ref{fignfv}. We first consider individual frame transmission as studied in Section \ref{secModel} and Section \ref{secASY}, and then we study random arrivals as investigated in Section \ref{SecQueue}.
\subsection{Single Frame Transmission}\label{simSingle}
We first consider single frame transmission. The main goals are to validate the usefulness of the two bounds
presented in Theorems 1 and 2 as design tools and to assess the importance of coding in obtaining desirable trade-offs between decoding latency and FUP.
We employ a frame length of $L=504$ and $N=8$ servers. The user code $\mathcal{C}_u$ is selected to be a randomly designed $(3,6)$ regular (Gallager-type) LDPC code with $r=0.5$, which is decoded via belief propagation.
We compare the performance of the following solutions: (\emph{i}) \textit{Standard single-server decoding}, whereby we assume, as a benchmark, the use of a single server, that is $N=1$, that decodes the entire frame ($K=1$); (\emph{ii}) \textit{Repetition coding}, whereby the entire frame ($K=1$) is replicated at all servers; (\emph{iii}) \textit{Parallel processing}, whereby the frame is divided into $K=N$ disjoint parts processed by different servers;
{\color{black}(\emph{iv}) \textit{Split repetition coding}, whereby the frame is split into two parts, which are each replicated at $N/2$ servers.
The code has hence $K=2$, $d_{\min}=N/2$, $\mathcal{X}(\textbf{G}_c)=N/2$,
which can be thought of as an intermediate choice between repetition coding and the {\color{black} parallel} scheme;}
(\emph{v}) \textit{Single parity check code (SPC)}, {\color{black} with $N=K+1$, whereby, in addition to the servers used by parallel decoding, an additional server decodes the} binary sum of all other $K$ received packets; and (\emph{vi}) an NFV code $\mathcal{C}_c$ with the generator matrix $\textbf{G}_c$ defined in \eqref{generatorGc}, which is characterized by $K=4$.
Note that, with both single-server decoding and repetition coding, we have a blocklength of $n=1008$ for the channel code. Single-server decoding is trivially characterized by $\mathcal{X}(\mathbf{G}_c)=d_{\min}=1$, while repetition coding is such that the equalities
$\mathcal{X}(\mathbf{G}_c)=d_{\min}=8$ hold. Furthermore, the {\color{black} parallel} approach is characterized by $n=126$, $d_\text{min}=1$ and $\mathcal{X}(\textbf{G}_c)=1$;
{\color{black} the split repetition code is characterized by $n=504 ,d_{\min}=4$ and $\mathcal{X}(\textbf{G}_c)=4$; }
the SPC code has $n=144, d_{\min}=2$ and $\mathcal{X}(\textbf{G}_c)=2$; and the NFV code $\mathcal{C}_c$ has $n=252$, $d_\text{min}=3$ and $\mathcal{X}(\textbf{G}_c)=3$. The exact FUP for a given function $\mathrm{P}_{n,k}(\cdot)$ can easily be computed for cases (\emph{i})-(\emph{iii}).
In particular, for single server decoding, the FUP equals
\begin{equation}\label{ExactSingle}
\mathrm{P}_u(t)=1-a_1(t)(1-\mathrm{P}_{{L/r},L}(\delta));
\end{equation}
for the repetition code, the FUP is
\begin{equation}\label{ExactRepetition}
\mathrm{P}_u(t)=1-\sum_{i=1}^Na_i(t)(1-\mathrm{P}_{L/r,L}(\delta));
\end{equation}
and for the {\color{black} parallel} approach, we have
\begin{equation}\label{ExactUncoded}
\mathrm{P}_u(t)=1-a_N(t)(1-\mathrm{P}_{L/(rN),L/N}(\delta))^N.
\end{equation}
In contrast, the exact FUPs for the SPC and code $\mathcal{C}_c$ are difficult to compute, due to the discussed correlation among the decoding outcomes at the servers.
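For reference, the closed forms \eqref{ExactSingle}--\eqref{ExactUncoded} translate directly into code (a sketch; the function names are ours, \texttt{F\_t} denotes $F(t)$, and \texttt{P} is the channel-code error probability for the relevant blocklength):
\begin{verbatim}
def fup_single(F_t, P):            # (24): N = K = 1
    return 1 - F_t * (1 - P)

def fup_repetition(F_t, N, P):     # (25): success if >= 1 server has finished
    return 1 - (1 - (1 - F_t)**N) * (1 - P)

def fup_parallel(F_t, N, P):       # (26): all N servers must succeed
    return 1 - F_t**N * (1 - P)**N
\end{verbatim}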
\begin{figure*}[t!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Fig2aaaMAY30Yasha.pdf}
\caption{\footnotesize{Parallel, single server and repetition code.}}
\label{fig:03}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Fig2bbbMAY30Yasha.pdf}
\caption{\footnotesize{Split repetition code, SPC code and $\mathcal{C}_c$ code.}}
\label{fig:04}
\end{subfigure}%
\caption{\footnotesize{Decoding latency versus FUP for $(L=504,N=8,1/\mu_1=50, \mu_2=20,a=0.1,\delta=0.01,r=0.5):$ (a) LDB, UB and Exact FUP for the {\color{black} parallel}, single-server, and repetition coding; (b) LDB, UB and Monte Carlo simulation (``MC Sim.'') results for split repetition code, SPC code, and the NFV code $\mathcal{C}_c$ defined in \eqref{generatorGc}. }}\label{fig:6ab}
\end{figure*}
Fig.~\ref{fig:01} shows decoding latency versus FUP for the LDB in Theorem \ref{ThmDevi}, the UB in Theorem \ref{ErrorBoundThm},
{\color{black} and the exact error \eqref{ExactSingle}, \eqref{ExactRepetition}, \eqref{ExactUncoded}, for the first three schemes (\emph{i})-(\emph{iii}), and Fig.~\ref{fig:02} shows the LDB in Theorem \ref{ThmDevi}, the UB in Theorem \ref{ErrorBoundThm}, as well as Monte Carlo simulation results for schemes (\emph{iv}), (\emph{v}), and (\emph{vi}). Here, we assume that the latency contribution that is independent of the workload is negligible, i.e., $1/\mu_1=0$. }
We also set $a=1$ and $\mu_2=10$. As a first observation, Fig.~\ref{fig:5ab} confirms that the UB bound is tighter than the LDB.
Leveraging multiple servers in parallel for decoding is seen to yield significant gains in terms of the trade-off between latency and FUP as argued also in \cite{Rodriguez17} by using experimental results.
In particular, the {\color{black} parallel} scheme is observed to be preferred for lower latencies. This is due to the shorter blocklength $n$, which entails a smaller average decoding latency. However, the error floor of the {\color{black} parallel} scheme is large due to the higher error probability for short blocklengths. In this case, other forms of NFV coding are beneficial. To elaborate, repetition coding requires a larger latency in order to obtain acceptable FUP performance owing to the larger blocklength $n$, but it achieves a significantly lower error floor. For intermediate latencies the SPC code, and at larger latencies also the NFV code $\mathcal{C}_c$
{\color{black} and the split repetition code}, provide a lower FUP. This demonstrates the effectiveness of NFV encoding in obtaining a desirable trade-off between latency and FUP.
In order to validate the conclusion obtained using the bounds, Fig.~\ref{fig:5ab} also shows the exact FUP for the schemes (\emph{i})-(\emph{iii}), as well as Monte Carlo simulation results for schemes (\emph{iv})-(\emph{vi}), respectively. While the absolute numerical values of the bounds in Fig.~\ref{fig:01} and \ref{fig:02} are not uniformly tight with respect to the actual performance, the relative performance of the coding schemes are well matched by the analytical bounds. This provides evidence of the usefulness of the derived bounds as a tool for code design in NFV systems.
Fig.~\ref{fig:6ab} is obtained in the same way as Fig.~\ref{fig:5ab}, except for the parameters $\mu_1=0.02$, $\mu_2=20$, and $a=0.1$.
{\color{black} Unlike Fig.~\ref{fig:5ab}, here latency may be dominated by effects that are independent of the blocklength $n$ since we have $1/\mu_1>0$.}
The key difference with respect to Fig.~\ref{fig:5ab} is that, for this choice of parameters,
repetition coding tends to outperform both the {\color{black} parallel} case, and the
NFV code $\mathcal{C}_c$, apart from very small latencies.
This is because repetition coding has the maximum resilience to the unavailability of the servers, while not being excessively penalized by the larger blocklength $n$. This is not the case, however, for very small latency levels, where the NFV code $\mathcal{C}_c$ provides the smallest
FUP given its shorter blocklength as compared to repetition
coding and its larger $d_{\min}$, with respect to the {\color{black} parallel} scheme.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.49]{NserverAug23.pdf} ~\caption{\footnotesize{Decoding latency versus exact FUP for parallel and repetition coding for different number of servers $N\in \{3,6,12\}$ and $(L=240,1/\mu_1=0, \mu_2=10,a=1,\delta=0.03,r=0.5)$ }
}~\label{figNserver}
\end{center}
\end{figure}
{\color{black} Fig.~\ref{figNserver} shows the exact FUP for the extreme cases of parallel and repetition coding for different number of servers $N\in\{3,6,12\}$. The figure confirms that, for both schemes, the latency decreases for a larger number of servers $N$. However, by increasing $N$, the error floor of the parallel scheme grows due to the higher channel error probability for shorter block lengths.
}
\begin{figure*}[t!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Nov10_fig_tight44.pdf}
\caption{\footnotesize{Lightly loaded system, $\lambda=0.1, \mu =500$. }}
\label{fig1queue}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Nov10figloose44.pdf}
\caption{\footnotesize{Heavily loaded system, $\lambda=1, \mu =50$. }}
\label{fig2queue}
\end{subfigure}%
\caption{ \footnotesize{Average latency versus FER with different values of the user code rate $r$ and for different coding schemes when the system is (a) lightly loaded and (b) heavily loaded, respectively ($L=112, N=8, \delta=0.03 $). }}
\end{figure*}
\subsection{Random Frame Transmission}\label{simQueue}
We now consider the queueing system described in Section \ref{SecQueue}, and present numerical results that provide insights into the performance of both per-frame and continuous decoding in terms of FER versus average latency \eqref{queueFERR}. {\color{black} As above, the decoding error probability is upper bounded by using \cite[Theorem 33]{polyanskiy}.} Both FER and average latency are a function of the user code rate $r$. We hence vary $r\in\{1/2,\ldots,1/5\}$ to parametrize a trade-off curve between FER and latency. We assume a frame length of $L=112$ bits with $N=8$ servers, and adopt the same user code $\mathcal{C}_u$ as in the previous subsection. The average delay $T_{\text{pfd}}$ is computed from \eqref{ResponseTimeUB}, and $T_{\text{cd}}$ is obtained via Monte Carlo simulations.
Figs. \ref{fig1queue} and \ref{fig2queue} compare the performance of repetition coding, the NFV code $\mathcal{C}_c$ with the generator matrix \eqref{generatorGc}, and the {\color{black} parallel} approach as defined above.
Fig.~\ref{fig1queue} considers a lightly loaded system with $\lambda=0.1$ frames per second and $\mu = 500$ frames per second, while Fig.~\ref{fig2queue} shows a highly loaded system with both $\lambda=1$ frames per second and $\mu = 50$ frames per second.
First, by comparing the two figures we observe that per-frame decoding and continuous decoding have a similar performance when the system is lightly loaded (see Fig.~\ref{fig1queue}), while continuous decoding yields a smaller average latency than per-frame decoding when the system is heavily loaded (see Fig.~\ref{fig2queue}). This is because, in the former case, it is likely that a frame is decoded successfully before the next one arrives. This is in contrast to heavily loaded systems in which the average latency becomes dominated by queuing delays.
We also note that, for repetition coding, the performance of per-frame decoding and continuous decoding coincides in both lightly and heavily loaded systems, {\color{black} since } decoding is complete as soon as one server decodes successfully.
{\color{black} Also, by} comparing the performance of different codes, we recover some of the main insights obtained from the study of the isolated frame transmission. In particular, the {\color{black} parallel} approach outperforms all other schemes for low average delays due to its shorter block length $n$. In contrast, repetition coding outperforms all other schemes in FER for large average delay because of its large block length $n$ and consequently low probability of decoding error (not shown). Furthermore, we observe that split repetition coding is to be preferred for small values of FER.
{\color{black} Finally, Fig.~\ref{fig111queue} demonstrates the behavior of the average latency as the arrival rate $\lambda$ increases and the system becomes more heavily loaded.
We observe that, for a lightly loaded system,
the latencies of per-frame and continuous decoding are similar, while continuous decoding is preferable for larger values of $\lambda$. This is because
per-frame decoding requires
all servers to wait until at least $N-d_{\min}+1$ servers have completed decoding of their respective packets before moving on to the next frame.}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{lambdaMay24_2018.pdf}\vspace{-.1cm}~\caption{\footnotesize{ Average latency versus arrival rate $\lambda$ ($L=112, N=8, r=0.5, \mu=500 $). }
}~\label{fig111queue}
\end{center}\vspace{-1cm}
\end{figure}
\section{Conclusions}\label{secConclusion}
In this paper, we analyzed the performance of a novel coded NFV approach for the uplink of a C-RAN system in which decoding takes place at a multi-server cloud processor. The approach is based on the linear combination of the received packets prior to their distribution to the servers or cores, and on the exploitation of the algebraic properties of linear channel codes. The method can be thought of as an application of the emerging principle of coded computing to NFV.
{\color{black}In addition, we obtain novel upper bounds on the FUP as a function of the decoding latency based on evaluating tail probabilities for Bernoulli dependent rvs. By extending the analysis from isolated frame transmission to random frame arrival times, the trade-off between average decoding latency and FER for two different policies are derived.}
Analysis and simulation results demonstrate the {\color{black} benefits} that linear coding of received packets, or NFV coding, can yield in terms of trade-off between decoding latency and reliability.
{\color{black} In particular, a prescribed decoding latency or reliability can be obtained by selecting an NFV code with a specific minimum distance and chromatic number, where the two extremes are {\color{black} parallel} NFV-based processing and repetition coding. The former scheme obtains the smallest latency but the lowest reliability, whereas the latter scheme yields the largest latency, but the highest reliability. All other linear NFV codes operate between these two extreme cases.
Among interesting open problems, we mention the design of optimal NFV codes and the extension of the principle of NFV coding to other channels. Note that the approach proposed here applies directly to other additive noise channels in which the user code is an additive group. A key example is the additive Gaussian channel with lattice codes at the user, which will be studied in future work. }
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Within the standard unified scheme for active galactic nuclei (AGN), radio galaxies
are seen at relatively large inclination angles between the observer and the jet
axis \citep{1995PASP..107..803U}. Since most of the optical emission from the central engine is
shielded by the dusty torus, they can be used to study their host galaxies and
immediate environment in better detail than their low-inclination counterparts.
This is particularly important for powerful radio galaxies at high redshifts,
which are among the most massive galaxies at their redshift \citep{2009ApJ...704..548O}.
They are expected to evolve to present day massive central
cluster galaxies and are often found at the center of (proto)clusters
(see \citet{2008A&ARv..15...67M} for a review). Recently, even a radio galaxy at
z = 5.72 close to the presumed end of the epoch of reionization has been found by
\citet{2018MNRAS.480.2733S}. Thus studies of these species allow us to
investigate the early formation of massive structures in the young Universe.
\object{3C~294} is a powerful Fanaroff-Riley type II (FRII) radio source at z = 1.786.
It shows a z-shaped morphology at $\lambda$6 cm, has extended Ly$\alpha$-emission,
which is roughly aligned with the radio jet axis \citep{1990ApJ...365..487M},
is embedded in extended diffuse X-ray emission indicative of
a high-density plasma region \citep{2003MNRAS.341..729F}, and is surrounded
by an apparent overdensity of faint red galaxies \citep{2003MNRAS.341L..55T}.
Due to its proximity to a bright star useful for adaptive optics (AO) imaging,
\object{3C~294} has been intensively studied using the Keck, Subaru, and Canada-France-Hawaii (CFHT) telescopes.
High-resolution H and K images of the \object{3C~294} system have been presented and
discussed by \citet{1999ApJ...519L.131S}, \citet{2001ApJ...556..108Q},
and \citet{2002ApJ...569..611S}. There is common agreement across all studies
that the \object{3C~294} morphology can best be described by two or three
components separated by $\leq$1\arcsec, indicative of a merging system.
It is unclear, however, which of the
components coincides with the location of the radio source, mostly due to
the uncertain position of the reference star used for the astrometry.
In addition, some of the components are compact, others extended,
adding more uncertainty to a unique assignment of the counterpart to the
radio source.
The situation became even more complex after an analysis of archival Chandra data
of \object{3C~294} by \citet{2004ApJ...600..626S}, who found the central X-ray emission
to be better represented by two point sources. They argued that \object{3C~294} hosts
a dual AGN, one unabsorbed system associated with a compact component and
one absorbed system associated with one of the extended components.
Small separation (a few kiloparsec)
bound dual AGN are predicted to be rare in general (see \citet{2019MNRAS.483.2712R}).
On the other hand, as discussed in
\citet{2012ApJ...746L..22K}, the percentage of dual AGN can be up to 10\%,
but they are difficult to detect in particular at high redshift due to their small projected separation. In fact, only a few high-redshift (z $>$ 0.5) dual AGN are known \citep[and references therein]{2018A&A...610L...7H}.
If 3C 294 were to evolve to a present day massive central cluster
galaxy, one would assume that its mass would grow mostly via major mergers in
the hierarchical Universe. Since supermassive black holes seem to reside
at the centers of all massive galaxies \citep{2013ARA&A..51..511K}, one could
expect that 3C 294 hosts a dual AGN. Thus a confirmation would be an important detection.
To unambiguously determine the properties of the \object{3C~294} system and in particular
to test whether it is a dual AGN or an AGN pair,
we carried out high-resolution adaptive optics (AO) supported imaging in the near-infrared (NIR)
zJHKs bands, as well as
deep optical low-resolution spectroscopy using the Large Binocular Telescope (LBT),
the results of which are presented
here. We note that the AO data discussed above were taken about 15 years ago.
Since then, AO systems and NIR detectors have become much more mature and efficient.
The only spectroscopic investigation of \object{3C~294}, by \citet{1990ApJ...365..487M}, dates back more than
25 years and was carried out using the Shane 3 m reflector. Thus
revisiting the \object{3C~294} system should give a clear answer.
Throughout the paper, we assume a flat $\Lambda$CDM cosmology with ${\rm H}_0$ =
70 km/s/Mpc and $\Omega_{\rm M}$ = 0.3. Using this cosmology, the angular
scale is 8.45 kpc per arcsecond at z = 1.786.
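As an illustrative cross-check (not part of the original analysis), this scale can be reproduced in a few lines of Python, assuming the \texttt{astropy} package is available:
\begin{verbatim}
# Hedged cross-check of the quoted angular scale; assumes astropy
# is installed (FlatLambdaCDM lives in astropy.cosmology).
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted above
scale = cosmo.kpc_proper_per_arcmin(1.786).value / 60.0
print(f"{scale:.2f} kpc/arcsec")        # ~8.45
\end{verbatim}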
\section{Observations and data reduction}
\subsection{NIR AO imaging data}
High-resolution FLAO (first light adaptive optics; \citealt{2012SPIE.8447E..0UE})
supported data of \object{3C~294} were recorded in the zJHKs filters
with the NIR imagers and spectrographs LUCI1 and LUCI2 at PA = 135\degr\ during
a commissioning run on March 20, 2016 and during a regular science run on March 20, 25, and 29, 2017.
In both LUCI instruments, we used the N30-camera, which is optimized for AO observations.
With a scale of 0.0150$\pm$0.0002"/pixel, the field of view (FoV) offered was 30\arcsec\ $\times$ 30\arcsec.
The observing conditions were very good during all of the nights, with clear skies and ambient seeing of 1" or less.
The LUCI instruments are attached to the front bent Gregorian focal stations of the LBT.
The LBT offers a wide range of instruments and observing modes. It can be operated
in monocular mode (using one instrument and mirror only), binocular mode (using identical instruments
on both mirrors), or interferometric mode (light from both telescopes combined in phase). A complete description of the current
instrument suite at the LBT and its operating mode can be found in \citet{2018SPIE10702E..05R}.
The data obtained on March 20, 2016 and March 20 and 25, 2017 were taken in monocular mode, while the data
from March 29, 2017 were taken in binocular mode.
For the latter, the integrations and offsets were done strictly in parallel.
The FLAO system senses the atmospheric
turbulence by means of a pyramid-based wavefront sensor,
which samples the pupil on a grid of 30 $\times$ 30 subapertures.
A corrected wavefront is obtained through an adaptive secondary
mirror controlled by 672 actuators at a frequency of up to 1 kHz.
As a reference star we used the 12th mag star U1200--07227692 from the
United States Naval Observatory (USNO) catalog USNO-A2.0
\citep{1998usno.book.....M}, which is just 10" southwest of \object{3C~294}.
Given its brightness, the FLAO system corrected 153 modes with a frequency of 625 Hz.
At this distance of the target from the reference star and with this FLAO configuration,
a Strehl ratio of about 30-40\% is expected in H and Ks bands \citep{2018SPIE10702E..0BH}.
U1200--07227692 is a binary star with 0.135" separation and a brightness ratio of 1:1.6 \citep{2001ApJ...556..108Q},
but this did not affect the performance of the FLAO system.
Individual LUCI exposures were one minute each, consisting of six images of 10 sec that were summed before saving.
On any given night, integrations between 7 and 37 min in total were taken in one filter before moving to the next.
Table \ref{nirdates} gives a log of the observations.
Between each one-minute exposure the telescope was shifted randomly within a 2\arcsec\ by
2\arcsec\ box to compensate for bad pixels. Since the bright reference star is the only object in
the field detected in one-minute integrations, larger offsets would not have made any difference.
As the detector is particularly prone to persistence, the small offsets also ensured that none
of the images of \object{3C~294} fell on a region of the detector affected by persistence from the
bright reference star.
\ref{nirsum} a breakdown of the total integration times by filter and instrument is given.
\begin{table}[h]
\centering
\vspace*{.2cm}
\begin{tabular}{l|cccc}
\hline
Date & Instrument & Filter & N$_{\rm images}$ & T$_{\rm int}$ [sec] \\
\hline
Mar 20 2016 & LUCI1 & Ks & 22 & 1320 \\
\hline
\multirow{2}{*}{Mar 20 2017} & \multirow{2}{*}{LUCI2} &H & 7 & 420 \\
& & Ks & 37 & 2220\\
\hline
\multirow{3}{*}{Mar 25 2017} & \multirow{3}{*}{LUCI1} &z & 31 & 1860 \\
& & J & 30 & 1800 \\
& & Ks & 32 & 1920 \\
\hline
\multirow{6}{*}{Mar 29 2017} & \multirow{3}{*}{LUCI1} &J & 15 & 900 \\
& & H & 30 & 1800 \\
& & Ks & 19 & 1140 \\
& \multirow{3}{*}{LUCI2} &J & 15 & 900 \\
& & H & 30 & 1800 \\
& & Ks & 20 & 1200 \\
\hline
\end{tabular}
\caption[]
{Breakdown of the observations by date, instrument, and filter.}
\label{nirdates}
\end{table}
\begin{table}[h]
\centering
\vspace*{.2cm}
\begin{tabular}{cccc}
\hline
Filter & LUCI1 [s] & LUCI2 [s] & T$_{\rm total}$ [s]\\
\hline
z & 1860 & - & 1860\\
J & 2700 & 900 & 3600\\
H & 1800 & 2220 & 4020\\
Ks & 4380 & 3420 & 7800\\
\hline
\end{tabular}
\caption[]
{Total integration times per filter and instrument and for both instruments combined.
Combined exposure times range from 31 minutes to over 2 hours.}
\label{nirsum}
\end{table}
The data were first corrected for non-linearity using the prescription given in
the LUCI user manual, then sky-subtracted and
flat-fielded. Sky-subtraction was achieved by forming a median-combined
two-dimensional sky image out of all data in one filter set, then subtracting a
scaled version from each individual exposure. Given the fine sampling of 0\farcs015/pixel
scale, we saw only about 10 counts/sec in the H and Ks bands. With such low backgrounds
there would have been no benefit in using a running mean or a boxcar for the sky
subtraction. Flatfields were created out of sets of higher and lower background twilight
images taken at zenith, which were separately median-combined after scaling them, subtracted
from each other, and normalized. Finally, the images were corrected for bad pixels
by linear interpolation.
A bad pixel mask was created out of the highly exposed flatfields to identify
cold pixels and dark images to identify hot pixels.
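A minimal sketch of this per-filter reduction (scaled median sky, then flatfield) is given below; the array names are illustrative placeholders, not the actual pipeline code:
\begin{verbatim}
# Sketch only: 'raw' is an (N, ny, nx) stack of exposures in one
# filter and 'flat' the normalized flatfield; names are assumed.
import numpy as np

def reduce_stack(raw, flat):
    sky = np.median(raw, axis=0)             # master sky frame
    out = []
    for img in raw:
        s = np.median(img) / np.median(sky)  # per-exposure sky level
        out.append((img - s * sky) / flat)   # subtract, then flatfield
    return np.array(out)
\end{verbatim}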
The most difficult part of the data reduction was the stacking of the images. Except for
the saturated AO reference star and the barely visible radio galaxy, no further objects are present on the reduced images
that could be used for the alignment of the images. We thus explored three alternative
possibilities for the alignment: a) to use the world coordinate system (WCS) information given in the image headers;
b) to use a two-dimensional fit to the centers of both saturated
components of the reference star after masking the saturated pixel at their
centers; and c) to take advantage of the channel crosstalk shown by the detector, which
leaves a non-saturated imprint of the reference star in every channel of the detector
on the frame (see Fig. \ref{xtalk}).
\begin{figure}[ht]
\centering
\includegraphics[width=.24\textwidth]{f1left.png}
\includegraphics[width=.24\textwidth]{f1right.png}
\caption{Left) Image and channel crosstalk images of the reference star U1200-07227692. The center of the reference star is indicated as source. These crosstalk images have separations of exactly 64 pixels, the width of each of the 32
parallel amplifier channels of the LUCI HAWAII2-RG detector.
The well-separated images of the individual components of the reference star are remarkable.
Right) Logarithmic image of the center of the reference star to show the two components
separated by 0\farcs135.
The Ks-band image is taken at PA = 135\degr.}
\label{xtalk}
\end{figure}
The simple approach using the WCS information failed, likely because of residual flexure within LUCI,
resulting in a visibly ``smeared'' image of the reference star. We thus did not pursue this approach further.
Each of the two alternative methods has its advantages and disadvantages. Determining a centroid of
a saturated core leaves some uncertainty but it benefits from a high signal-to-noise ratio (S/N) in its outer parts.
The individual channel crosstalk images have a lower S/N, but combining 10-15 of them from adjacent channels
increases the signal considerably.
We tested both methods using a data set taken in the Ks filter.
The resulting offsets agree within 1/10 of a pixel. Given that we opted for integer pixel shifts before
combining the images, both methods delivered equally good results for our purposes. In the end
we decided to use the offsets derived from the centroids of the cores of the two components of the reference star.
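For illustration, a simplified stand-in for the centroiding of method b), masking the saturated core and weighting by the unsaturated flux, could look as follows:
\begin{verbatim}
# Simplified stand-in for method b): mask the saturated core and
# take the flux-weighted centroid of the unsaturated wings.
import numpy as np

def masked_centroid(img, sat_level):
    w = np.where(img < sat_level, img, 0.0).clip(min=0.0)
    y, x = np.indices(img.shape)
    return (x * w).sum() / w.sum(), (y * w).sum() / w.sum()
\end{verbatim}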
The aligned images were first combined per filter, instrument, and night,
then per filter and telescope, and finally per filter. The relative orientation of the
detector on the sky between the two instruments differs by less than one degree.
In addition, the pixel scale between the N30 cameras in LUCI1 and LUCI2 differs by less
than $10^{-4}$. Thus no rotation or rebinning was applied before combining the
data sets from the different instruments.
\subsection{Optical spectroscopy}
Optical longslit spectra of the \object{3C~294} system were taken on the night of May 11-12, 2016 using the
multi-object double CCD spectrographs MODS1 and MODS2 \citep{2010SPIE.7735E..0AP} in homogeneous binocular mode.
The MODS instruments are attached to the direct Gregorian foci at the LBT.
The target was observed at PA = 111\degr\ in
order a) to have all components of the \object{3C~294} system in the slit, b) to see whether components a, b, and c are at the
same redshift, and c) to minimize the impact of the nearby bright star on
the data quality (see Fig. \ref{figslit} for the configuration of MODS1/2 and Fig. \ref{3c294imafig}
for more details on the components). To do so, we performed a blind offset from the
bright reference star, with the offset determined from the AO K-band image. The
expected accuracy of the positioning is $\sim$ 0\farcs1.
We used a 1" slit and the gratings G400L for
the blue and G670L for
the red channel, giving a spectral resolution of about 1000
across the entire optical band. Integration times
were 3 $\times$ 1200 sec. Observing conditions were not photometric with variable cirrus,
but excellent seeing ($\sim$ 0\farcs7 full width at half maximum (FWHM)).
\begin{figure}[t]
\centering
\includegraphics[width=.49\textwidth]{f2.png}
\caption{Orientation of the slit with respect to the \object{3C~294} system for the
MODS spectroscopy.}
\label{figslit}
\end{figure}
The basic data reduction (bias subtraction and flatfielding) was carried out using
the modsCCDRED-package developed by the MODS team \citep{richard_pogge_2019_2550741}.
The extraction of one-dimensional spectra was carried out using the standard
image reduction and analysis facility (IRAF)
{\em apall} task. As the spectra of \object{3C~294} did not show any continuum (Fig. \ref{3c294spectra})
and were taken through variable cirrus, we did not carry
out a spectrophotometric flux calibration. Wavelength calibration was done using spectral images
from calibration lamps and verified using the night sky emission lines. The
resulting accuracy was $\sim$ 0.1\AA\ rms. The resulting spectra were first averaged
per telescope and channel and then combined per channel.
\section{Results}
\subsection{AO NIR images of \object{3C~294}}
The final AO J-, H-, and Ks-band images are shown in the lower part of
Fig. \ref{3c294imafig}.
They have been binned by a factor of two to emphasize structures
more clearly.
As discussed in \citet{2001ApJ...556..108Q} and
\citet{2002ApJ...569..611S},
\object{3C~294} can be separated into two main components separated by about 1\arcsec: a
compact core-like structure to the east and a structure elongated north--south to the west.
The elongated structure seems to consist of two knotty components also separated by roughly 1\arcsec.
No emission from \object{3C~294} was detected in the z-band image. This is
probably due to the shallow depth of the image, the much lower Strehl ratio,
and/or the increasing extinction compared to the redder bands. We note that \object{3C~294} has only barely been
detected in optical broadband images with the Hubble Space Telescope (HST) (m$_{\rm R}$ = 23.4$\pm$0.8, \citealt{2002ApJ...569..611S}).
Contrary to earlier observations, the two knotty components of the western structure
are clearly separated. A comparison of earlier H- and K-band images with our data
is shown in Fig. \ref{3c294imafig}.
We do not see a clear separation of the western structures in the J band.
The reason for that is not clear. It cannot be due to extinction by dust
as the H and Ks bands would be less affected by that
(Ks band probes the rest-frame wavelength at $\sim$ 7700 \AA\ and J band probes the rest-frame wavelength at $\sim$ 4400 \AA). It is more likely due to the
lower Strehl ratio, which is expected to drop by 10-20\%
between the H band and the J band. We do not detect the component d north of component c
discussed in \citet{2002ApJ...569..611S} in any of our images. We should have detected it in the
H band as both components have a similar brightness in this filter
according to \citet{2002ApJ...569..611S}.
Thus, feature d is either a transient phenomenon or is not physical, that is, it is a statistical noise fluctuation.
There may, however, be some (very blue?) emission northwest of component c in the J-band image,
which is not at the same location as component d.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.25\textwidth]{f3a.png}
\includegraphics[width=0.25\textwidth]{f3b.png}
\includegraphics[width=0.25\textwidth]{f3c.png}\\
\vspace*{.2cm}
\includegraphics[width=0.25\textwidth]{f3d.png}
\includegraphics[width=0.50\textwidth]{f3e.png}\\
\vspace*{.5cm}
\includegraphics[width=0.29\textwidth]{f3f.png}
\hspace*{.1cm}
\includegraphics[width=0.29\textwidth]{f3g.png}\\
\vspace*{.1cm}
\includegraphics[width=0.29\textwidth]{f3h.png}
\hspace*{.1cm}
\includegraphics[width=0.29\textwidth]{f3i.png}\\
\caption{Two uppermost rows) AO NIR images of 3C 294 from earlier
publications. A) Keck\,II NIRSPEC K' image from \citet{2004ApJ...600..626S}, B) Subaru IRCS K' image also from
\citet{2004ApJ...600..626S}, C) CFHT Hokupa K' image from
\citet{1999ApJ...519L.131S}, and D) Keck II SCam H-band image from
\citet{2001ApJ...556..108Q}. The images A-D are adapted from \citet{2004ApJ...600..626S}. The three images right of image D show
R (HST), H, and K CFHT PUEO data from \citet{2002ApJ...569..611S}.
Two lowest rows)
Our LBT FLAO and LUCI JHKs-images of \object{3C~294}. The fourth image shows the scale, orientation, and apertures for the components used for the photometric analysis.
They are 36 $\times$ 26 pixel (0\farcs54 $\times$
0\farcs39) for components
a and b and 20 $\times$ 20 pixel (0\farcs3 $\times$ 0\farcs3) for component c. The
labeling follows \citet{2002ApJ...569..611S}.
The cross marks the position of the radio core, its size reflects its positional error.
}
\label{3c294imafig}
\end{figure*}
The core-like component c appears slightly extended in an east--west direction on the images.
This can also be seen in data from other telescopes shown in Fig. \ref{3c294imafig}.
Its measured FWHM is 0\farcs31 $\times$ 0\farcs20 in the H band and 0\farcs23 $\times$ 0\farcs17 in the
Ks band. If component c were a pure point source, we would expect a FWHM of about
0\farcs08 at 10\arcsec\ from a 12th mag reference star \citep{2018SPIE10702E..0BH}.
Interestingly, the major axis of the elongation is perpendicular to the one seen for the reference star
in Fig. \ref{xtalk}. The latter is most likely due to
vibrations of the telescope (D. Miller, priv. comm.) and present on some of the individual
images. The former could be due to tilt anisoplanatism as it is along the axis
joining the reference star and component c. \citet{1994JOSAA..11..368O} derived a
formalism to estimate the tilt anisoplanatism for the Keck telescope. Using their estimates
and scaling them to the LBT (8.4 m) and wavelength (Ks, 2.15 $\mu$m), we would expect a tilt anisoplanatism
of 0\farcs022 and 0\farcs019 in the x- and y-directions, respectively. We would thus expect the FWHM of component c to
be on the order of 0\farcs10 - 0\farcs12 with an axis ratio of $\sim$ 1.2. This is much smaller than what
we measure. We thus believe that component c is most likely not stellar.
This is in contrast to \citet{2001ApJ...556..108Q} who
found component c to be unresolved. Unfortunately, due to the small separation of the two components
of the reference star (0\farcs135) and their saturation in the core, they can hardly be used for
comparison. A formal Gaussian fit to each of the two components of the reference star ignoring the
central saturated pixels (22 pixels for the brighter and 11 pixels for the fainter component)
results in a FWHM of $\sim$ 0\farcs08.
\begin{table}[h]
\centering
\begin{tabular}{c|llll}
\hline
Comp & J & J - H & J - Ks & H - Ks\\
\hline
a & 21.81$\pm$0.16 & 1.00$\pm$0.19 & 1.93$\pm$0.18 & 0.93$\pm$0.14 \\
b & 21.83$\pm$0.16 & 0.93$\pm$0.19 & 1.83$\pm$0.18 & 0.90$\pm$0.14 \\
c & 22.53$\pm$0.13 & 1.16$\pm$0.16 & 2.07$\pm$0.15 & 0.93$\pm$0.11 \\
Total & 20.48$\pm$0.31 & 1.05$\pm$0.35 & 2.02$\pm$0.34 & 0.97$\pm$0.22 \\
\hline
\end{tabular}
\caption{JHKs photometry of \object{3C~294} and its components.
\label{3c294phot}}
\end{table}
\begin{table*}[h]
\centering
\begin{tabular}{l|ccccc}
\hline
Filter & J & H & K' & Ks & K\\
\hline
LBT & 20.48$\pm$0.31 & 19.43$\pm$0.17 & - & 18.56$\pm$0.14& -\\
\citet{1990ApJ...365..487M} & - & - & - & - & 18.0$\pm$0.3\\
\citet{1999ApJ...519L.131S} & - & - & 18.3$\pm$0.3 & - & - \\
\citet{2001ApJ...556..108Q} & - & 19.4$\pm$0.2 & 18.2$\pm$0.2 & - & - \\
\citet{2002ApJ...569..611S} & - & 18.2$\pm$0.3 & - & - & 17.76$\pm$1.0 \\
\citet{2003MNRAS.341L..55T} & 19.53$\pm$0.30 & 18.64$\pm$0.27 & - & 17.78$\pm$0.07& -\\
\hline
\end{tabular}
\caption{Comparison of JHK photometry of the entire \object{3C~294} system. To derive
the magnitudes from \citet{2002ApJ...569..611S}, the fluxes from components
a, b, and c have been summed.
\label{litphot}}
\end{table*}
Judging from the images, component c seems to be redder than components a and b. To verify
this we performed aperture photometry on the individual components using the apertures indicated in
Fig. \ref{3c294imafig}. Calibration was done using star P272--D from \citet{1998AJ....116.2475P}; the data have
been corrected for galactic extinction following \citet{2011ApJ...737..103S}.
The extinction is $\leq$0.01 mag in all bands.
The results shown in Table \ref{3c294phot} indicate that
components a and b have similar brightnesses and colors, while component c is about 0.7 mag
fainter and about 0.2 mag redder. Photometry of the entire system has been presented
by \citet{1990ApJ...365..487M}, \citet{1999ApJ...519L.131S}, \citet{2001ApJ...556..108Q},
\citet{2002ApJ...569..611S}, and \citet{2003MNRAS.341L..55T}.
A comparison to our data is shown in Table \ref{litphot}.
There is a wide spread in the photometry, with differences of up to $\sim$ 1 mag, in particular with respect to
\citet{2003MNRAS.341L..55T}. It is not clear where the differences in the photometry
come from. Unfortunately, the size of the aperture used is not always given.
Photometry of individual components have been derived by \citet{2001ApJ...556..108Q} and
\citet{2002ApJ...569..611S}. A comparison to our data is shown in Table \ref{litcompphot}.
The difference between the \citet{2001ApJ...556..108Q} and \citet{2002ApJ...569..611S} data
is large, approximately two magnitudes, while our measurements are somewhere in between.
Again, the size of the aperture used is not given for either the Keck or the CFHT data.
We also created J-H, H-Ks, and J-Ks color maps for \object{3C~294} to search for any evidence of
spatially-dependent extinction in the three components. No obvious feature was detected.
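For reference, the rectangular-aperture measurements indicated in Fig.~\ref{3c294imafig} can be emulated with the \texttt{photutils} package; the positions, the test image, and the zero point below are placeholders rather than the values actually used:
\begin{verbatim}
# Sketch with placeholder positions, image, and zero point.
import numpy as np
from photutils.aperture import (RectangularAperture,
                                aperture_photometry)

image = np.random.default_rng(0).normal(0., 1., (256, 256))
ap_ab = RectangularAperture([(120., 115.), (132., 118.)],
                            w=36, h=26)   # 0.54" x 0.39" (a, b)
ap_c = RectangularAperture([(160., 120.)], w=20, h=20)  # 0.3" (c)
zp = 26.0                                 # placeholder zero point
for tab in (aperture_photometry(image, ap_ab),
            aperture_photometry(image, ap_c)):
    print(zp - 2.5 * np.log10(np.abs(tab["aperture_sum"])))
\end{verbatim}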
\begin{table*}[h]
\centering
\begin{tabular}{l|ccc|ccc}
\hline
Filter & \multicolumn{3}{c|}{H} & \multicolumn{3}{c}{K} \\
\hline
Component & a & b & c & a & b & c \\
\hline
LBT & 20.88$\pm$0.16 & 20.83$\pm$0.11 & 21.37$\pm$0.09 & 20.04$\pm$0.09 & 19.96$\pm$0.09 & 20.50$\pm$0.07 \\
\citet{2001ApJ...556..108Q} & - & - & 22.0$\pm$0.2 & - & - & 21.7$\pm$0.2\\
\citet{2002ApJ...569..611S} & 19.0$\pm$0.1 & 19.5$\pm$0.2 & 20.2$\pm$0.2 & 18.7$\pm$0.4 & 18.4$\pm$0.4 & 19.3$\pm$0.8\\
\hline
\end{tabular}
\caption{Comparison of HK photometry of components a, b, and c of the \object{3C~294} system. The K images were taken through
different filters (see Table \ref{litphot}), but the filter differences are small and cannot account for the large differences
seen between the individual measurements.
\label{litcompphot}}
\end{table*}
\subsection{Optical spectra of the \object{3C~294} system}\label{modsspec}
The one-dimensional and two-dimensional spectra of the \object{3C~294} system are
shown in Fig. \ref{3c294spectra}. Despite our 2 hr integration time, we do
not detect any obvious continuum. When rebinning the spectra in spectral and spatial
direction by a factor of five, a hint of continuum can be glimpsed in the red channel.
However, with this rebinning, the potential continuum of components a, b, and c would
overlap, preventing us from distinguishing between the individual components (see below).
We detected the Ly$\alpha$ $\lambda$1215 \AA, C IV $\lambda$1550 \AA,
He II $\lambda$1640 \AA,\ and C III$]$ $\lambda$1907/1909 \AA\ lines in the blue channel.
The C III$]$ line is blue-shifted with respect
to the other lines. This is not unusual in AGN
(e.g., \citealt{2012ApJ...744....7B})
and may be a result of the increased intensity of the transition at $\lambda$1907 \AA\ relative to
the one at $\lambda$1909\ \AA\ at low densities \citep{1981ApJ...249...17F}.
Only one emission line at $\lambda$6749.4\ \AA\ can be detected in the red channel. It
is faintly present in the individual spectra from both MODS instruments and is not an
artifact of the data reduction. Under the assumption that it originates from the
same source as the emission lines in the blue channel, we identify this line
with the Ne $[$IV$]$ doublet $\lambda$2424/2426 \AA\ at z = 1.783.
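For reference, the implied redshift follows directly from the mean rest wavelength of the doublet,
\[
z \,=\, \frac{\lambda_{\rm obs}}{\lambda_{\rm rest}} - 1 \,=\, \frac{6749.4}{2425.1} - 1 \,\approx\, 1.783 .
\]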
Similarly to our C III$]$ $\lambda$1907/1909 \AA\ line in the blue channel
we would not be able to separate the double lines individually given an instrumental resolution
of $\sim$ 5\AA. If the identification is correct,
we may even associate the very faint emission at $\lambda\sim$ 6485 \AA\ with C II$]$
$\lambda$2326 \AA\ at z $\sim$ 1.788. The position of this line coincides with a forest
of night-sky emission lines, however, so this detection is tentative.
To our surprise, we do not detect Mg~II
$\lambda$2798 \AA\
in our spectra, which we would expect at $\lambda\sim$ 7790 \AA. However, Mg~II is
relatively weak in radio galaxies \citep{1993ARA&A..31..639M} and line ratios
can vary strongly in high-redshift radio galaxies \citep{2008MNRAS.383...11H}.
Ignoring the redshift determination for the C III$]$ $\lambda$1907/1909 \AA\ line
and the uncertain identification of the C II$]$
$\lambda$2326 \AA\ line, our redshift for \object{3C~294} is z = 1.784$\pm$0.001.
This is somewhat lower than the redshift z = 1.786 quoted
by \citet{1990ApJ...365..487M}, who did not quote errors. With their instrumental resolution
of $\sim$ 12 - 15 \AA, an error of at least $\delta z$ = 0.001 for their redshift is a fair assumption. Thus
our redshifts for 3C 294 agree within the errors. We will use z = 1.784 as the redshift for 3C 294 for the remainder of the paper.
Similarly to \citet{1990ApJ...365..487M}, we see a spatially extended (6") Ly$\alpha$ line.
Unfortunately, since the spectra of components a and b overlap in dispersion
direction, we cannot probe any dynamics using this line.
The absence of a continuum poses the problem of a unique assignment of the
emission line at $\lambda$6749.4 \AA\ to one of the components of \object{3C~294}. To derive this,
we did the following: we first fitted a slope through the centers of the emission
lines in the blue channel except for the spatially extended Ly$\alpha$ line.
We then did the same through the trace in both channels of a well-exposed star
observed on the same night to infer the spatial offset. Afterwards we determined the
spatial offset between the trace of the star in the blue and the red channels to
finally estimate the expected position of the trace of either component in the red
channel at $\lambda$6749.4 \AA. It turned out that the line emission at $\lambda$6749.4 \AA\
is about 4$\pm$2 pixels above the
expected trace of components a and b and about 3$\pm2$ pixels below the expected position
of component c. Thus we cannot unambiguously tell whether the line at $\lambda$6749.4 \AA\ originates
from the same source as the other UV lines and thus assign it to
components a and/or b or c.
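A schematic version of this bookkeeping, with hypothetical pixel values in place of the measured centers, is sketched below; only the wavelengths (the blue-channel lines redshifted to z = 1.784) are taken from the text:
\begin{verbatim}
# Hypothetical spatial centers for illustration only.
import numpy as np

wav = np.array([3384., 4315., 4566., 5312.])   # Lya, CIV, HeII, CIII]
ypos = np.array([212.1, 212.4, 212.6, 212.9])  # assumed centers (pix)
coef = np.polyfit(wav, ypos, 1)                # fit through the lines
y_at_line = np.polyval(coef, 6749.4)           # predicted position; to
# be corrected by the blue-to-red offset measured from the star trace
\end{verbatim}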
\section{Discussion}
\subsection{Which component is associated with the radio source?}
There has been no common agreement as to which of the components of the \object{3C~294} system
is indeed the NIR counterpart of the radio source. Using a position of the
radio source of $\alpha$,$\delta$ = $14^{\rm h} 06^{\rm m} 44\fs074\pm0\fs005, +34^{\degr} 25' 40{\farcs}00\pm0\farcs05\
(2000)$\ based on \citet{1990ApJ...365..487M}, \citet{2004ApJ...600..626S} found
it practically coincident with component b. On the contrary, \citet{2001ApJ...556..108Q}
associated the radio source with component c. Their positions of the radio core and the
reference stars differed by +0\farcs07 and +0\farcs5 from the ones used by \citet{2004ApJ...600..626S},
respectively. The main difference for the position of the
reference star comes from the sources used (HST FGS observation
by \citet{2004ApJ...600..626S} and the USNO--A2.0 catalog by
\citet{2001ApJ...556..108Q}).
It is surprising that \citet{2001ApJ...556..108Q} associated the radio
source with component c although they found a smaller separation between the
reference star and the radio source than \citet{2004ApJ...600..626S}.
As can be seen from Fig. \ref{figslit}, component b is closer to the reference star than component c.
The most accurate position for the reference star comes from the Gaia Data Release 2
(DR2) \citep{2016A&A...595A...1G,2018A&A...616A...1G}, which gives $\alpha$,$\delta$ =
$14^{\rm h} 06^{\rm m} 43\fs356, +34^{\degr} 11' 23{\farcs}273$ (2000) with an
error of about 0.05 mas for $\alpha$ and $\delta$ each. The proper motion is
$\alpha$,$\delta$ = +8.80,1.66 mas/yr.
Since the spatial resolution of the Gaia DR2 is about 0\farcs4, the binary star is not
resolved and we can assume that the position given above refers to the center of light.
This position is in very good agreement with the one determined by
\citet{2004ApJ...600..626S}.
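As a cross-check, the propagation of the Gaia DR2 position (reference epoch J2015.5) to our observing epoch can be sketched with \texttt{astropy}, using only the coordinates and proper motion quoted above:
\begin{verbatim}
# Sketch; without a parallax astropy treats the star as very
# distant and raises a warning, which is acceptable here.
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.time import Time

star = SkyCoord(ra="14h06m43.356s", dec="+34d11m23.273s",
                pm_ra_cosdec=8.80 * u.mas / u.yr,
                pm_dec=1.66 * u.mas / u.yr,
                obstime=Time("J2015.5"))
star17 = star.apply_space_motion(new_obstime=Time("2017-03-25"))
\end{verbatim}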
Using the coordinates of the reference star from the Gaia DR2 and the
radio coordinates of \object{3C~294} from \citet{2004ApJ...600..626S}, we can now
predict the position of the radio core of \object{3C~294} on our AO NIR images.
We used the crosstalk images of the reference star to predict the position
of the center of light of the binary star with an accuracy of $\pm$1 pixel
($\pm$0\farcs015).
The result is shown in Fig. \ref{3c294imafig}. As in \citet{2004ApJ...600..626S},
our result indicates that the NIR counterpart to the \object{3C~294} radio source is
component b. The overall error budget (accuracy of center of light for the reference
star in our images, errors of the positions for the reference star and the radio source)
does not exceed 0\farcs1, meaning that component c can be ruled out as the NIR counterpart
with high confidence.
\subsection{Nature of the NIR counterpart of \object{3C~294}}
We are now convinced that (at least) component b is the NIR counterpart
to the radio source. We also know that extended redshifted Ly$\alpha$ emission
centered on the radio core has been detected \citep{1990ApJ...365..487M}. In addition, since
our spectroscopic results agree well with \citet{1990ApJ...365..487M},
the redshift of z = 1.784 for \object{3C~294} is now solid. There is agreement that components
a and b show a non-stellar morphology. These components are clearly separated
in our H- and Ks-band data. Given our spatial resolution and the distance of \object{3C~294},
we cannot decide whether components a and b represent
two galaxies in the process of merging, whether they correspond to a single
galaxy with an absorption feature along the line of sight, or whether they are two
galaxies at different redshifts.
Surprisingly, both components have similar brightnesses and colors. At z = 1.784,
our JHKs images correspond roughly to rest-frame BVI-data. With B-V $\sim$ 1.0,
components a and b seem to be dominated by late stars of type K and M. If we assume
that components a and b are galaxies, we obtain M$_{\rm K}$ $\sim$ -25.3
from their Ks-band magnitudes of $\sim$ 20.0. This includes a lower limit for the K-band correction
of K = -0.5 (which is between
K = 0 and -0.5 depending on galaxy type; \citet{1997A&AS..122..399P,2001MNRAS.326..745M})
at z = 1.784. No evolutionary correction has been applied. We note that this is an
upper limit for the host galaxy of \object{3C~294} as we do not know the contribution of
the active nucleus to the total flux. If 90/50/10\% of the flux from component b
is from the host galaxy, we would derive M$_{\rm K}$ $\sim$ -25.2/-24.6/-22.8.
This is between 2 mag brighter and 0.4 mag fainter
than a M$_{\rm K}^{\ast}$ galaxy \citep{2017MNRAS.465..672M}.
\begin{figure*}[ht]
\centering
\includegraphics[width=.485\textwidth]{f4a.png}
\includegraphics[width=.485\textwidth]{f4b.png}\\
\includegraphics[width=1\textwidth]{f4c.png}\\
\includegraphics[width=1\textwidth]{f4d.png}\\
\caption{Top) Our one-dimensional MODS spectra of the \object{3C~294} system in the
blue (left) and red (right)
channels with the line identifications. The positions of the two emission features
at $\lambda$6485 and $\lambda$6749.4 \AA\ discussed in the text are indicated by arrows.
Bottom) Our two-dimensional MODS spectrum of the 3C 294 system in the blue channel (upper panel) showing the
four emission lines as well as an excerpt of the two-dimensional spectrum in the red channel (lower panel)
showing the emission line detected at $\lambda$6749.4 \AA. The blue spectrum is shown across its full spectral range, the red spectrum with a width identical to that of the one-dimensional spectrum, centered
on the line at $\lambda$6749.4 \AA. The two emission features are labeled by magenta dots.}
\label{3c294spectra}
\end{figure*}
\subsection{Nature of component c}\label{disccompc}
What is the nature of component c? \citet{2004ApJ...600..626S} discussed the intriguing
possibility that \object{3C~294} hosts two active nuclei. This idea stems from an analysis of
archival Chandra data, where the X-ray emission from \object{3C~294} could better be described by
a superposition of two point sources. Based on the X-ray/optical flux ratio, they argued that
component c is unlikely to be a foreground galactic star but could well host a second
active nucleus. We found component c about 0.8 mag brighter
than \citet{2001ApJ...556..108Q}, but even then its X-ray/optical flux ratio of $\sim$ 1.1
indicates that the source is an AGN rather than a star, following the arguments of
\citet{2004ApJ...600..626S}. If it is a star, it could be a carbon star. These stars
are often found in symbiotic X-ray binaries (e.g., \citealt{2014ApJ...780...11H}) and
have very red V-K and H-K colors, similar to what we found \citep{2001ApJ...558..309D}.
However, with V-K $\sim$ 5, and M$_{\rm V}$ $\sim$ -2.5 typical for carbon stars
\citep{1998A&A...338..209A}, the distance modulus would place our ``star'' well outside
our Galaxy. In addition, component c appears extended in our data, supporting an
extragalactic origin.
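To make this explicit under stated assumptions (K $\approx$ 20.5 for component c from Table \ref{litcompphot}, together with the quoted V-K $\sim$ 5 and M$_{\rm V}$ $\sim$ -2.5),
\[
m_V \approx 25.5, \qquad \mu = m_V - M_{\rm V} \approx 28, \qquad
d = 10^{(\mu + 5)/5}\,{\rm pc} \approx 4\,{\rm Mpc},
\]
which is indeed far outside the Galaxy.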
Unfortunately, the results from our spectroscopy are of little help.
We now know that the radio core coincides with component b. It is thus
reasonable to assume that the UV lines detected in the blue channel, which have
also been seen by \citet{1990ApJ...365..487M}, originate from that region.
We note that \citet{1990ApJ...365..487M} used a 2" wide slit at PA = 200\degr,
which means that their spectra of components a, b, and c overlapped in
spectral direction. As discussed in Sect. \ref{modsspec}, we cannot unambiguously
assign the emission line at $\lambda$6749.4 \AA\ to component b or c based
on its spatial location. Given its spectral position one can reasonably assume
that this line originates from the same region as all the other UV lines, namely
from component b. If this is the case, the nature of component c remains a mystery.
Although speculative, we briefly discuss the consequences if the emission line at $\lambda$6749.4 \AA\
belongs to component c. One exciting alternative
would be that the line originates from the Ne $[$IV$]$ doublet
$\lambda$2424/2426 \AA\ at z = 1.783 from this component.
This would make \object{3C~294} indeed a dual, perhaps bound AGN separated by a few kiloparsec
as discussed by
\citet{2004ApJ...600..626S}. However, they speculated that the AGN coincident
with component c is less powerful but does not suffer so much from extinction.
In that case one would expect to see the UV lines (in particular Ly$\alpha,$ which is typically
a factor of $\sim$60 stronger than Ne $[$IV$]$ $\lambda$2424/2426 \AA\
in radio galaxies \citep{1993ARA&A..31..639M}) in the blue channel, unless
they are unusually weak. An inspection of the two-dimensional
spectrum in the blue channel did not reveal any second component in spatial direction.
Thus we do not have strong support for the dual AGN scenario based on our spectroscopy.
Alternatively, component c could be at a redshift different from that of \object{3C~294}.
Given the faintness of the optical counterpart and of the emission line, and the presence of X-ray
emission, it most likely originates from an AGN.
Judging from the composite AGN spectrum of \citet{2001AJ....122..549V},
the most prominent lines in the optical are H$\alpha$, H$\beta$, the [O II, O III] lines,
and Mg~II. Out of these, it cannot be H$\alpha$ at z = 0.029, because its
NIR luminosity would be much too low unless it suffers from extreme absorption.
It also cannot be
H$\beta$ at z = 0.38 because then we should have seen H$\alpha$ at $\lambda$9054 \AA,\, which is
normally much stronger, and/or the [O II] or [O III] lines at $\lambda$3727 and $\lambda$5007 \AA, respectively.
The same argument applies for the [O III] line at z = 0.35.
The [O II] line at z = 0.81 would be an interesting possibility. The typically much
stronger H$\beta$ and [O III] lines would be shifted towards $\lambda$9000 \AA, where
there is a strong forest of night-sky emission lines. However, one would
then easily see Mg~II $\lambda$2798 \AA, which is normally also stronger than [O II].
Thus, Mg~II $\lambda$2798 \AA, which would be at z = 1.41, remains the most reasonable
line identification. All optical emission lines redward of Mg~II are redshifted beyond $\lambda$9000 \AA\
and are thus hard to detect or are out of the optical range. Only the C IV and C III] UV lines remain.
These would be redshifted to $\lambda$3735 and $\lambda$4600 \AA, respectively, but are
not present in our spectra. These lines are often faint or absent in type II
quasi-stellar object (QSO) candidates
at high redshift \citep{2013MNRAS.435.3306A}, so their absence would not be surprising.
One caveat of all of the options discussed above is that even at z = 1.41 the host
galaxy of an AGN must substantially absorb the emission from \object{3C~294}. This has not been seen. Thus,
even the most reasonable option (Mg~II at z = 1.41) is not convincing.
Alternatively, the emission line in component c could arise from a redshifted UV line.
The strongest UV lines in a QSO spectrum are Ly$\alpha$ $\lambda$1215 \AA, C~IV $\lambda$1549 \AA,\ and
C~III] $\lambda$1909 \AA\
\citep{2001AJ....122..549V}. This would move the AGN to redshifts
beyond z = 2.5, with \object{3C~294} then being along the line of sight to component c. Its redshift would then
be z = 2.54 (C III]), 3.36 (C IV), or 4.56 (Ly$\alpha$), respectively. If this is the case, the UV lines
should be absorbed to some extent by \object{3C~294}.
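The corresponding redshifts follow from $z = 6749.4/\lambda_{\rm rest} - 1$:
\[
z_{\rm C\,III]} = \frac{6749.4}{1909} - 1 \approx 2.54, \qquad
z_{\rm C\,IV} = \frac{6749.4}{1549} - 1 \approx 3.36, \qquad
z_{\rm Ly\alpha} = \frac{6749.4}{1215} - 1 \approx 4.56 .
\]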
There are eight QSOs at z $>$ 2.5, up to z = 5.2, in the Chandra Deep Field North
(CDFN, \citet{2001AJ....122.2810B,2002AJ....124.1839B}). These eight sources all
share similar properties with component c. Their soft X-ray fluxes in the 0.5-2.0
keV band are between 0.3 and 6 $\times 10^{-15}$ ergs cm$^{-2}$ s$^{-1}$, their K magnitudes are $\sim$ 21,
their V magnitudes are $\sim$ 24, and their spectroscopic signatures
include strong, broad Ly$\alpha$, sometimes also accompanied by
strong C~IV and C~III]. Given the faintness of our emission line and the
absence of a second line, it is reasonable to assume that this would
correspond to Ly$\alpha$ at z = 4.56.
\subsection{Consequences for an AGN pair}
Our results do not allow us to discriminate between the
dual AGN and the projected AGN scenario.
Even the latter would not contradict the interpretation by
\citet{2004ApJ...600..626S} that \object{3C~294} hosts an obscured AGN centered
at component b, while a second much fainter but not obscured AGN is
coincident with component c. One natural explanation for
the differences in the photometry from various studies summarized in
Table \ref{litphot} is the intrinsic variability of the two AGN.
Projected AGN pairs can be used for a number of astrophysical applications. Examples
are QSO-QSO clustering and the tomography of the intergalactic medium or the
circumgalactic medium of the foreground QSO along the line of sight to the background QSO
be found in \citet{2018ApJS..236...44F}. However, the number of small-separation pairs
(a few arcsec) is very small \citep{2012AJ....143..119I,2016MNRAS.456.1595M}; they have
mostly been derived from searches for gravitationally lensed QSOs and all have a
wider separation ($\geq$2\farcs5) than our target. To the best of our
knowledge, no projected AGN pair with such a small separation and large
$\Delta$ z is known at present.
Due to the close separation of our system, gravitational lensing effects
could modify the
apparent properties of the \object{3C~294} system. A multiply-lensed QSO image of component c
would be expected for an Einstein radius of $\geq$1\arcsec.
Since we do not see any, this sets an upper limit on the mass of the
\object{3C~294} host galaxy of 3 $\times$ $10^{12}$ \(M_\odot\),
assuming redshifts of z = 1.784 and 4.56 for the lens and source, respectively.
As the host galaxy is certainly not
point-like, any amplification must be very low. In addition,
component c could even be subject to gravitational microlensing by stars
in the host galaxy of \object{3C~294}. This might at least in part explain the difference in
brightness of the \object{3C~294} system shown in Table \ref{litphot}.
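An order-of-magnitude cross-check of such a limit can be made with the point-lens relation $\theta_{\rm E}^2 = (4GM/c^2)\,D_{\rm LS}/(D_{\rm L}D_{\rm S})$; the sketch below assumes a point mass at z = 1.784 lensing a source at z = 4.56, so it reproduces only the mass scale, not the exact model-dependent limit:
\begin{verbatim}
# Point-lens mass inside a 1" Einstein radius; scale check only.
import astropy.units as u
from astropy.constants import G, c
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
zl, zs = 1.784, 4.56
Dl = cosmo.angular_diameter_distance(zl)
Ds = cosmo.angular_diameter_distance(zs)
Dls = cosmo.angular_diameter_distance_z1z2(zl, zs)
th = (1.0 * u.arcsec).to(u.rad).value
M = (th**2 * c**2 / (4 * G) * Dl * Ds / Dls).to(u.Msun)
print(M)   # mass scale to compare with the quoted limit
\end{verbatim}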
\subsection{Outlook}
The analysis of our deep AO images and optical spectra of \object{3C~294} did not
allow us to unambiguously characterize the \object{3C~294} system, as
the main conclusion rests on the spatial association of the
emission line at $\lambda$6749.4\ \AA\ with either component b or c.
If it originates from component b, the nature of component c remains a mystery.
If it originates from component c, we have support for either the dual or projected
AGN scenario. Whether the lines originate from component b or c can be tested by
repeating the optical spectroscopy ``astrometrically'' by taking a
spectrum of \object{3C~294} and a bright object on the FoV simultaneously, with the latter
showing a trace on the two-dimensional spectrum. If the line at $\lambda$6749.4\ \AA\
belongs indeed to component c, AO-aided NIR spectroscopy is the only way
to characterize the system due to the faintness of the system and the
probably high redshifts involved.
Not much can be learned for \object{3C~294} itself from the ground, as at
z = 1.784 all diagnostic optical emission lines except H$\gamma$ will be
redshifted into a wavelength range where the NIR sky is opaque. At least one of the
redshifted [O~II, O~III] or H$\alpha$,$\beta$ lines is redshifted into one of
the JHK-windows if component c is at z = 2.54, 3.36, or 4.56.
An unambiguous determination of the nature of 3C 294 will be possible with the NIR spectrograph
NIRSpec onboard the James Webb Space Telescope.
With its 3" $\times$ 3" Integral Field Unit covering the wavelength range
0.67 - 5$\mu$m, a number of diagnostic lines can be observed in a very low infrared background
devoid of opaque wavelength regions.
\begin{acknowledgements}
We would like to thank the anonymous referee for the constructive comments
that addressed a number of important points in the paper.
We would also like to thank Mark Norris and Jesper Storm
for taking the MODS-data at the LBT for us.
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
This work was supported in part by the German federal department for education and research (BMBF)
under the project numbers 05 AL2VO1/8, 05 AL2EIB/4, 05 AL2EEA/1, 05 AL2PCA/5, 05 AL5VH1/5, 05 AL5PC1/1, and 05 A08VH1.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Bug Fixes}
Among the bugs fixed were soundness bugs, which are all mentioned below. These
tend to be obscure, and most are not generally encountered by users; but we
consider them particularly important to fix. Among the other bugs
fixed, we mention here only a few of the more interesting ones.
Thus we begin with a description of the soundness bugs that were
fixed. In contrast to the rest of this paper, we do not attempt to
explain much about these bugs, not even how they could lead to proofs
of \texttt{nil}. Pointers to some of those proofs can be found in the
release notes.
\begin{itemize}
\item [6.2] System functions \texttt{acl2-magic-mfc} and
\texttt{acl2-magic-canonical-pathname}, which were not designed for
direct invocation by the user, could be exploited to prove
\texttt{nil}.
\item [6.2]
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_STOBJ}{\underline{Stobj}}s
could be confused with strings in raw Lisp.
\item [6.3] A
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_STOBJ}{\underline{stobj}}
could be bound by a
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=COMMON-LISP\_\_\_\_LET}{\underline{\texttt{let}}}
or
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_MV-LET}{\underline{\texttt{mv-let}}}
form without being among the outputs of that form, which allowed for
serious violations of single-threadedness.
\item [6.3] (Gnu Common Lisp only) There was a bug in Common Lisp code
in the implementation of the utility,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_SET-DEBUGGER-ENABLE}{\underline{\texttt{set-debugger-enable}}}.
\item [6.4] ACL2 supports rules of class
\texttt{:}\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DEFINITION}{\underline{\texttt{definition}}},
which are typically equalities that are much like normal
definitions, but can actually have hypotheses. A soundness bug
resulted from incorrect application of such rules during the
handling of \texttt{:expand}
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_HINTS}{\underline{hints}}.
\item [6.4] A
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_STOBJ}{\underline{stobj}}
recognizer predicate could be violated after updating the stobj.
\item [6.4] The checks made when admitting
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_CONGRUENCE}{\underline{congruence}}
rules failed to ensure that a certain variable on the right-hand
side of the conclusion, which was intended to be a fresh variable,
was indeed actually fresh (i.e., did not occur elsewhere in the
rule).
\end{itemize}
\noindent Here are a few of the more interesting bugs that were fixed, other
than soundness bugs.
\begin{itemize}
\item [6.2] ACL2 supports a notion of {\em abstract}
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_STOBJ}{\underline{stobj}},
which is an abstraction of a corresponding ordinary ({\em concrete})
stobj. The
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DEFABSSTOBJ}{\underline{\texttt{defabsstobj}}}
event, which introduces a new abstract stobj, incurs certain proof
obligations to ensure proper correspondence between the new abstract
stobj and its specified concrete stobj. These proof obligations, in
the form of
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DEFTHM}{\underline{\texttt{defthm}}}
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_EVENTS}{\underline{events}},
are printed when submitting a \texttt{defabsstobj} event, except for
events that have themselves already been admitted. However, the
events printed were not always sufficient in order to admit the
\texttt{defabsstobj} event.
\item [6.3] ACL2's proof output indicates
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_SPLITTER}{\underline{splitter}}s:
rule applications that generate more than one subgoal. However,
splitters were sometimes reported when only a single subgoal was
generated.
\item [6.3] The utility
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_WOF}{\underline{\texttt{wof}}},
for directing output to a file (as the name stands for ``With Output
to File''), could cause an error when no error was appropriate.
This problem also occurred with the
\texttt{:}\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_PSOF}{\underline{\texttt{psof}}}
(``Print Saved Output to File'')
utility, since
\texttt{psof} is defined in terms of \texttt{wof}.
(\texttt{:Psof} is similar to the utility
\texttt{:}\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_PSO}{\underline{\texttt{pso}}}
(``Print Saved Output''), as both print proof output that had been
suppressed by
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_GAG-MODE}{\underline{gag-mode}};
but \texttt{:psof} prints saved proof output to a file, while
\texttt{:pso} prints to the terminal, which can take much longer.)
\end{itemize}
\section{Changes to Existing Features}
As the sets of ACL2 users and projects continue to expand, we learn of
ways to improve its existing capabilities. Some of these improvements
are rather technical and hence may be new to the reader, who can
follow links below to learn about them. Our message is twofold:
\begin{itemize}
\item ACL2 offers many capabilities beyond ``only'' a proof engine;
and
\item if you have avoided features that seemed awkward to use,
consider trying them again, because they might have improved.
\end{itemize}
\noindent We give a few examples here, referring the reader to the
release notes for a more complete list of such improvements.
\begin{itemize}
\item [6.2] The
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_TRACE\_42}{\underline{\texttt{trace\$}}}
utility, which shows calls and return values for indicated
functions, can be configured to give less noisy output.
\item [6.2] The
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_GUARD-DEBUG}{\underline{guard-debug}}
utility, which shows the origin of proof obligations generated for
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_GUARD}{\underline{guard}}
verification, avoids duplicating that information.
\item [6.2]
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_LD}{\underline{\texttt{Ld}}},
which invokes a read-eval-print-loop, has a new keyword argument,
\texttt{:ld-missing-input-ok}, which avoids treating a missing file
as an error.
\item [6.2]
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_EC-CALL}{\underline{\texttt{Ec-call}}},
a wrapper for executing function calls in the logic, allows
non-\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_STOBJ}{\underline{stobj}}
arguments in stobj positions.
\item [6.2] Technical improvements have been made to the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_META-EXTRACT}{\underline{\texttt{meta-extract}}}
capabilities for using facts from the context or world when proving
meta-level rules (i.e., of class
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_META}{\underline{\texttt{meta}}}
or
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_CLAUSE-PROCESSOR}{\underline{\texttt{clause-processor}}}).
\item [6.3] The
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DMR}{\underline{\texttt{dmr}}}
utility, which supports dynamically watching the rewrite stack, has
undergone a few improvements. For example, when the debugger is
enabled by evaluating \texttt{(dmr-start)}, then subsequent
evaluation of \texttt{(dmr-stop)} will (once again) disable the
debugger.
\item [6.3]
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_BIND-FREE}{\underline{\texttt{Bind-free}}},
a construct that generates a binding alist for free variables in rule
hypotheses, can return a list of binding alists to try.
\item [6.3] Evaluation of an event
\texttt{(\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=COMMON-LISP\_\_\_\_PROGN}{\underline{progn}}
event$_1$ ... event$_k$)} prints each \texttt{event$_i$}
immediately before evaluating it, just as was already done by
evaluating a call of
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_ENCAPSULATE}{\underline{\texttt{encapsulate}}}.
\item [6.3] The
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_SET-INHIBIT-WARNINGS}{\underline{\texttt{set-inhibit-warnings}}}
utility, which suppresses specified types of warnings, is
more predictable. Moreover, it now has a
non-\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_LOCAL}{\underline{\texttt{local}}}
variant,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_SET-INHIBIT-WARNINGS\_12}{\underline{\texttt{set-inhibit-warnings!}}}.
\item [6.3] Failure messages printed for
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=COMMON-LISP\_\_\_\_DEFUN}{\underline{\texttt{defun}}}
indicate which proof attempt fails: termination or
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_GUARD}{\underline{\texttt{guard}}}
verification for the indicated definition.
\item [6.3] The functionality of
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_MAKE-EVENT}{\underline{\texttt{make-event}}},
a macro-like capability that can involve the ACL2
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_STATE}{\underline{state}}
object, has been significantly expanded (see
Section~\ref{new-features}).
\item [6.4] The
\texttt{:}\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_PBT}{\texttt{\underline{pbt}}}
(``print back through'') utility, which queries the session
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_HISTORY}{\underline{history}},
now abbreviates bodies of large
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DEFCONST}{\texttt{\underline{defconst}}}
forms.
\item [6.4] The utility
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_SET-INHIBIT-OUTPUT-LST}{\underline{\texttt{set-inhibit-output-lst}}},
which specifies which types of output to suppress, has had the output
type ``expansion'' replaced by ``history''.
\item [6.4] The optional
\texttt{:}\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_LOOP-STOPPER}{\underline{\texttt{loop-stopper}}} field of a
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_REWRITE}{\underline{rewrite}}
rule can specify certain functions to ignore when comparing terms in
order to avoid looping. Now, each such ``function symbol'' can
actually be a
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_MACRO-ALIASES-TABLE}{\underline{macro
alias}}.
\item [6.4] ACL2 has a
\texttt{:}\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_LOGIC}{\underline{\texttt{logic}}}
mode utility,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_ORACLE-APPLY}{\underline{\texttt{oracle-apply}}},
for making higher-order function calls. This utility now has
more appropriate restrictions codified in its
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_GUARD}{\underline{guard}}.
\item [6.x] Error handling is improved for several utilities,
including:
\begin{itemize}
\item [6.2] errors from the run-time type-checking utility,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=COMMON-LISP\_\_\_\_THE}{\underline{\texttt{THE}}},
have been eliminated when guard-checking is \texttt{:none};
\item [6.2]
a much more instructive error message is printed for
the \href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DEFPKG}{\underline{\texttt{defpkg}}}
``reincarnation'' error that is encountered upon attempting to
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_PACKAGE-REINCARNATION-IMPORT-RESTRICTIONS}{\underline{redefine a previously-defined package}};
\item [6.2] permission problems no longer cause errors
for the file utilities
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_OPEN-INPUT-CHANNEL}{\underline{\texttt{open-input-channel}}}
and
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_OPEN-OUTPUT-CHANNEL}{\underline{\texttt{open-output-channel}}}; and
\item [6.4] errors are more instructive when permission problems are encountered
for
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_INCLUDE-BOOK}{\underline{\texttt{include-book}}}.
\end{itemize}
\end{itemize}
\section{Emacs Support}
The primary Emacs-related ACL2 enhancement is a new utility introduced
in Version 6.4,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_ACL2-DOC}{\underline{ACL2-Doc}},
for browsing documentation inside Emacs. This utility takes the
place of Emacs Info, which is no longer supported because of the
transition to XDOC discussed in Section~\ref{system}. Emacs users
will find this utility to be a nice alternative to using
\texttt{:}\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DOC}{\underline{\texttt{doc}}}
at the terminal. It can be used to browse either the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/combined-manual/index.html}{\underline{acl2+books
combined manual}} or the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html}{\underline{ACL2
User's Manual}}. This utility is loaded automatically into Emacs
when loading the standard file
\href{https://acl2-devel.googlecode.com/svn/trunk/emacs/emacs-acl2.el}{\underline{\texttt{emacs/emacs-acl2.el}}}.
\section{Heuristic Improvements}\label{heuristic}
ACL2 development began in 1989, and development for the Boyer-Moore
series of provers began in 1971. It is therefore not surprising, at
least to us, that there are relatively few changes to prover
heuristics in recent ACL2 releases. However, there are a few, because
the user community finds new applications of ACL2 that present
opportunities for improving the heuristics. The following summary is
intended to give a sense of how ACL2's heuristics have been tweaked,
without diving into unnecessary details of how they work. (Heuristics
are generally not documented at the user level, since we do not expect
it to be necessary or useful to understand in any depth how they work
in order to be an effective ACL2 user.)
\begin{itemize}
\item [6.2] ACL2 has an {\em ancestors check} heuristic that can
prevent excessive backchaining through hypotheses of rules. The
following list (which we invite beginners to skip!) describes ways
in which this heuristic has been improved:
\begin{itemize}
\item the heuristic no longer allows failure for
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_FORCE}{\underline{\texttt{force}}}d
hypotheses;
\item it is delayed until a quick
(\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_TYPE-SET}{\underline{type-set}})
check has a chance to verify or refute the hypothesis; and
\item a slight weakening of the heuristic now permits backchaining
based on counting variable occurrences.
\end{itemize}
\item [6.2] The context
(\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_TYPE-ALIST}{\underline{type-alist}})
built from a goal now considers assumptions in a different order
that may strengthen that context.
\item [6.3] \texttt{:By}
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_HINTS}{\underline{hints}}
are intended to specify when a goal is subsumed by a known fact,
such as the formula of a
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DEFTHM}{\underline{\texttt{defthm}}}
event. The subsumption check for \texttt{:by} hints has been made
less restrictive.
\item [6.3] The hint \texttt{:do-not preprocess} is intended to cause
the ACL2 prover to skip the {\em preprocess} step of its proof
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_HINTS-AND-THE-WATERFALL}{\underline{\em
waterfall}}. This hint now also eliminates the preprocess step
during the application of \texttt{:use} and \texttt{:by}
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_HINTS}{\underline{hints}}.
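For instance (a sketch, with a theorem name of our choosing), the hint
combination below now skips preprocessing both for the goal itself and
while applying the \texttt{:use} hint:
\begin{verbatim}
(defthm car-of-cons-again
  (equal (car (cons a b)) a)
  :hints (("Goal"
           :do-not '(preprocess)
           :use ((:instance car-cons
                            (x a) (y b))))))
\end{verbatim}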
\end{itemize}
\section{Introduction}\label{intro}
We discuss ACL2 enhancements introduced in versions released between
the ACL2 Workshops in May, 2013 and July, 2014: Versions 6.2 (June,
2013), 6.3 (October, 2013), and 6.4 (January, 2014). (These
enhancements were thus made after the release of ACL2 Version 6.1 in
February, 2013.) Hence this paper is analogous to two papers that
correspond to earlier sets of
releases~\cite{EPTCS114.1,DBLP:journals/corr/abs-1110-4673}. We
summarize some of the more interesting of the roughly 100 items in the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_RELEASE-NOTES}{\underline{release
notes}} for these three releases. While those release notes
themselves are summaries, they are intended solely as documentation
for what has changed: individual paragraphs are not generally intended
to make sense except to those who have relevant background knowledge.
Moreover, for the sake of completeness those paragraphs are often
rather verbose and dense. This paper, on the other hand, is intended
to provide a quick way to get up-to-date on the more interesting of
the recent ACL2 system improvements. For anyone familiar with ACL2,
each brief explanation below is intended to provide sufficient
information so that even inexperienced ACL2 users can get a quick
sense of the enhancement. Then they can decide whether to read more
in the release notes, or even to follow the provided hyperlinks to
relevant documentation.
ACL2 is typically revised to support soundness, robustness, and
functionality of the system, often in direct response to user
requests. Because of the maturity of ACL2, the system has many
features, and thus improvements often pertain to aspects of ACL2 that
may be unfamiliar to many ACL2 users, especially novice users. Our
intention, however, is that this paper will have value to the entire
ACL2 community, including newcomers. This paper thus includes a few
introductory words about most every feature discussed. Moreover, the
online version of this paper contains many links to
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=COMMON-LISP\_\_\_\_DOCUMENTATION}{\underline{documentation}}
topics from the online ACL2 User's Manual~\cite{acl2-users-manual},
where one may learn more about those features. Notice that we use
underlining, as for ``release notes'' and ``documentation'' above, to
mark those hyperlinks. (The topic names typically match the printed
names, but not always; for example, ``release notes'' above
corresponds to the topic RELEASE-NOTES, which includes a hyphen.)
The focus of this paper is on the user level: our aim is to help ACL2
users to understand what is new with ACL2 (at least to them), in order
to increase its effective use in their applications. On the other
hand, those who wish to learn about implementation-level changes can
look for Lisp comments in topics note-6-2, note-6-3, and note-6-4 in
Community Book
\href{https://acl2-books.googlecode.com/svn/trunk/system/doc/acl2-doc.lisp}{\underline{\texttt{books/system/doc/acl2-doc.lisp}}}.
One concept that arises several times in this paper is that of a
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_STOBJ}{\underline{{\em stobj}}},
or {\em Single-Threaded OBJect}. Let us review that concept briefly.
Logically, a stobj is just a list whose members are called its {\em
fields}. However, syntactic restrictions in their use permit stobjs
to be modified destructively, and even to have fields that are
represented as (destructively modifiable) arrays in the underlying
host Lisp.
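As a small, schematic example (the names are ours), the following
introduces a stobj with one scalar field and one array field; the array
field may be laid out as a native host-Lisp array that is updated in
place:
\begin{verbatim}
(defstobj st
  (cnt :type integer :initially 0)
  (mem :type (array (unsigned-byte 8) (16))
       :initially 0))
\end{verbatim}
This generates, among others, the accessor and updater functions
\texttt{cnt}, \texttt{update-cnt}, \texttt{memi}, and
\texttt{update-memi}.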
We organize the paper along the lines of the release notes, as
follows.
\begin{itemize}
\item Changes to Existing Features
\item New Features
\item Heuristic Improvements
\item Bug Fixes
\item Changes at the System Level
\item Emacs Support
\end{itemize}
We do not discuss one other category in the release notes,
Experimental/Alternate versions, though there have been improvements
to
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_HONS-AND-MEMOIZATION}{\underline{ACL2(h)}},
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_PARALLELISM}{\underline{ACL2(p)}},
and
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=COMMON-LISP\_\_\_\_REAL}{\underline{ACL2(r)}}.
A label precedes each item being discussed, to indicate the relevant
version of ACL2 in case you wish to read a longer description for that
item in the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_RELEASE-NOTES}{\underline{release
notes}}. For example, if the label is ``6.2'' then you can read
more about the change by visiting the documentation topic,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_NOTE-6-2}{note-6-2}.
\input{changes}
\input{new}
\input{heuristic}
\input{bug}
\input{system}
\input{emacs}
\section{Conclusion}
We have outlined some of the more interesting and important changes to
ACL2 in Versions 6.2, 6.3, and 6.4. We hope that a quick read of this
paper will enable ACL2 users to focus quickly on topics of particular
interest, while easily following links (in the online version of this
paper) in order to learn more about those topics, with the result of
becoming more effective ACL2 users. Many more changes (about 100
altogether) are described in the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_RELEASE-NOTES}{\underline{release
notes}} for these versions, and many changes at a lower level are
described in comments in the source code for those release notes, such
as \texttt{(defxdoc note-6-2 ...)}, in Community Book
\href{https://acl2-books.googlecode.com/svn/trunk/system/doc/acl2-doc.lisp}{\underline{\texttt{books/system/doc/acl2-doc.lisp}}}.
A critical component in the continued evolution of ACL2 is feedback
from its user community, which we hope will continue! We also
appreciate contributions by the user community to the large body of
Community Books~\cite{acl2-books-svn}. These books put demands on the system
and help us to test improvements.
\section*{Acknowledgements}
We thank members of the ACL2 community whose feedback has led us to
continue making improvements to ACL2. In particular, we thank the
following, who are the people mentioned in one or more specific items
in the release notes for Version 6.2, 6.3, or 6.4: Harsh Raju
Chamarthi, Jared Davis, Jen Davis, Caleb Eggensperger, Shilpi Goel,
Dave Greve, Warren Hunt, Robert Krug, Camm Maguire, David Rager,
Gisela Rossi, Sol Swords, Raymond Toy, and Nathan Wetzler.
We expressly thank Warren Hunt for his continuing support of ACL2 use and
development for many years at UT Austin.
We are grateful to Shilpi Goel, Warren Hunt, Robert Krug, Sandip Ray,
Nathan Wetzler, and the referees for feedback on drafts of this paper.
This material is based upon work supported by DARPA under Contract
No. N66001-10-2-4087, by ForrestHunt, Inc., and by the National
Science Foundation under Grant No. CCF-1153558.
\bibliographystyle{eptcs}
\section{New Features}\label{new-features}
This section focuses on a few of the more interesting new features
recently added to ACL2. As before, the release notes for the
indicated versions contain more complete information.
\begin{itemize}
\item [6.2] The
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DEFSTOBJ}{\underline{\texttt{defstobj}}}
event, which introduces single-threaded objects (see
Section~\ref{intro}), permits
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_STOBJ}{\underline{stobj}}s
to have fields that are themselves stobjs or arrays of stobjs. In
the case of these {\em nested stobj} structures, fields are accessed
using a new construct,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_STOBJ-LET}{\underline{\texttt{stobj-let}}}.
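The following sketch, modeled on the examples in the \texttt{stobj-let}
documentation (the names are ours), defines a stobj whose field is itself
a stobj and updates that field in place:
\begin{verbatim}
(defstobj kid fld)
(defstobj mom (kid-field :type kid))

(defun set-kid-fld (val mom)
  (declare (xargs :stobjs mom))
  (stobj-let
   ((kid (kid-field mom))) ; bind kid to the child stobj
   (kid)                   ; producer variable(s)
   (update-fld val kid)    ; producer: update the child
   mom))                   ; consumer: return the parent
\end{verbatim}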
\item [6.2] ACL2 supports
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_META}{\underline{meta}}theoretic
reasoning, which may be implemented using
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_EXTENDED-METAFUNCTIONS}{\underline{extended
metafunctions}} that can access the environment. The logical
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_WORLD}{\underline{world}}
--- that is, the ACL2 database --- is now available to such
functions using the function \texttt{mfc-world}.
\item [6.2] ACL2 supports many kinds of
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=COMMON-LISP\_\_\_\_DECLARE}{\underline{\texttt{declare}}}
forms, including not only some that are processed by the host Common
Lisp compiler, but also others processed by ACL2 that are specified
using
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_XARGS}{\underline{\texttt{xargs}}}
forms. A new \texttt{xargs} keyword, \texttt{:split-types}, can be
used to specify that the function's
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=COMMON-LISP\_\_\_\_TYPE}{\underline{\texttt{type}}}
declarations should be provable from its
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_GUARD}{\underline{guard}},
rather than added to its guard.
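In the following sketch, the \texttt{type} declaration is still passed to
the Lisp compiler but, because of \texttt{:split-types}, it generates a
proof obligation from the \texttt{:guard} instead of becoming part of it:
\begin{verbatim}
(defun sq (x)
  (declare (xargs :guard (rationalp x)
                  :split-types t)
           (type rational x))
  (* x x))
\end{verbatim}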
\item [6.2] See
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_QUICK-AND-DIRTY-SUBSUMPTION-REPLACEMENT-STEP}{\underline{quick-and-dirty-subsumption-replacement-step}}
for a way to turn off a potentially expensive prover heuristic.
\item [6.3] The macro-like utility
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_MAKE-EVENT}{\underline{\texttt{make-event}}}
evaluates a form to obtain a new form --- its {\em expansion} --- and
then typically submits that expansion. This utility has been made
more flexible by the addition of the following new capabilities,
which we discuss below for the benefit of those who already
have some familiarity with
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_MAKE-EVENT}{\underline{\texttt{make-event}}}.
\begin{itemize}
\item Expansions may have the form \texttt{(:or event$_1$
... event$_k$)}. In this case, each \texttt{event$_i$} is evaluated
in turn until one succeeds. That \texttt{event$_i$} is then treated
as the actual expansion, and is not re-evaluated after expansion (as
would be necessary without the use of \texttt{:or}).
\item Expansions may have the form \texttt{(:do-proofs event)}, for
insisting on performing proofs during evaluation of \texttt{event}
even in contexts where proofs are normally skipped (such as when
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_REBUILD}{\underline{\texttt{rebuild}}}
is invoked).
\item A keyword argument, \texttt{:expansion?}, provides an
optimization that can eliminate storing expansions in
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_CERTIFICATE}{\underline{certificate}}
files.
\end{itemize}
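For example (with a theorem name of our choosing), the following tries the
plain proof first and, only if that fails, retries with a hint; whichever
\texttt{defthm} succeeds becomes the recorded expansion:
\begin{verbatim}
(make-event
 '(:or (defthm append-nil
         (implies (true-listp x)
                  (equal (append x nil) x)))
       (defthm append-nil
         (implies (true-listp x)
                  (equal (append x nil) x))
         :hints (("Goal"
                  :induct (true-listp x))))))
\end{verbatim}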
\item [6.3] See
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_GET-INTERNAL-TIME}{\underline{get-internal-time}}
for how to change ACL2's timing reports, so that instead of being
based on run time (cpu time) they are based on real time (wall-clock
time).
\item [6.3] A new utility,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_SYS-CALL+}{\underline{\texttt{sys-call+}}},
can invoke a shell command just as is done by
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_SYS-CALL}{\underline{\texttt{sys-call}}}.
However, while \texttt{sys-call} prints the command's output as a
side effect and returns \texttt{nil}, \texttt{sys-call+} actually
returns that output as a string.
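For instance, a call such as the following (a sketch; the argument
conventions are as we recall them from the documentation) binds
\texttt{output} to the text printed by the command:
\begin{verbatim}
(mv-let (status output state)
        (sys-call+ "pwd" nil state)
        (declare (ignore status))
        (mv output state))
\end{verbatim}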
\item [6.3] A new utility,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_VERIFY-GUARDS\_B2}{\underline{\texttt{verify-guards+}}},
is like the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_VERIFY-GUARDS}{\underline{\texttt{verify-guards}}}
utility for verifying that calls of the indicated function lead only
to function calls that satisfy their
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_GUARD}{\underline{guard}}s
(preconditions). However, in \texttt{verify-guards+}, that argument
can be the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_MACRO-ALIASES-TABLE}{\underline{macro
alias}} for a function. See
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_VERIFY-GUARDS}{\underline{\texttt{verify-guards}}}
for an example showing why it would be unsound to permit
\texttt{verify-guards} to take a macro alias as its argument.
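A schematic example (all names are ours):
\begin{verbatim}
(defun f-fn (x)
  (declare (xargs :guard (consp x)
                  :verify-guards nil))
  (car x))
(defmacro f (x) `(f-fn ,x))
(add-macro-alias f f-fn)
(verify-guards+ f) ; verifies the guards of f-fn
\end{verbatim}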
\item [6.4] The
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_BOOKDATA}{\underline{bookdata}}
utility writes out event data on a per-book basis, for tools such as
the one written by Dave Greve that is found in directory
\texttt{tools/book-conflicts/} of the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_COMMUNITY-BOOKS}{\underline{Community
Books}}.
\item [6.4] The utilities
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_ADD-INCLUDE-BOOK-DIR}{\underline{\texttt{add-include-book-dir}}}
and
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DELETE-INCLUDE-BOOK-DIR}{\underline{\texttt{delete-include-book-dir}}},
whose effects are
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_LOCAL}{\underline{local}}
to a book, specify directories denoted by \texttt{:dir} arguments of
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_INCLUDE-BOOK}{\underline{\texttt{include-book}}}.
These utilities now have
non-\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_LOCAL}{\underline{local}}
analogues,
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_ADD-INCLUDE-BOOK-DIR\_12}{\underline{\texttt{add-include-book-dir!}}}
and
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_DELETE-INCLUDE-BOOK-DIR\_12}{\underline{\texttt{delete-include-book-dir!}}}.
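For instance (with a hypothetical directory), the following, placed in a
book, makes the keyword \texttt{:my-books} available to
\texttt{include-book} even after that book has itself been included:
\begin{verbatim}
(add-include-book-dir! :my-books
                       "/home/user/acl2/my-books/")
(include-book "utils" :dir :my-books)
\end{verbatim}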
\item [6.4]
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_CONGRUENCE}{\underline{Congruence}}
rules specify
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_EQUIVALENCE}{\underline{equivalence}}
relations to maintain during rewriting~\cite{congruence}.
The functionality of congruence rules is extended by
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_PATTERNED-CONGRUENCE}{\underline{patterned
congruence}} rules~\cite{patterned-congruence-rules}
allowing a more general
specification of where an
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_EQUIVALENCE}{\underline{equivalence}}
is to be maintained.
\end{itemize}
\section{Changes at the System Level}\label{system}
Some of the more sweeping changes to ACL2 have taken place far from
its theorem prover. Here we touch briefly on just a few of those.
\begin{itemize}
\item [6.2] From the beginning of ACL2, Gnu Common Lisp (GCL) has been
among the host Lisps on which ACL2 can be built. Now, the ANSI
version of GCL is also a host Lisp on which ACL2 (and ACL2(h)) can
be built.
\item [6.2] The previous system for certifying the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_COMMUNITY-BOOKS}{\underline{Community
Books}} has been updated; in particular, it is largely based on
\texttt{cert.pl} and other utilities maintained by Jared Davis and
Sol Swords. See
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_BOOKS-CERTIFICATION}{\underline{books-certification}}.
\item [6.3] ACL2 is now available between releases, via SVN, at
\href{http://acl2-devel.googlecode.com}{\underline{\texttt{http://acl2-devel.googlecode.com}}}.
Disclaimer (as per the warning message printed at startup): {\em The
authors of ACL2 consider svn distributions to be experimental;
they may be incomplete, fragile, and unable to pass our own
regression.} That said, we have seen few problems with SVN
distributions.
\item [6.4] The ACL2 documentation is now maintained in the XDOC
format developed and implemented by Jared Davis. Indeed, it is now
recommended to peruse the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/combined-manual/index.html}{\underline{acl2+books
combined manual}}. That manual includes not only the ACL2
User's Manual but also topics from the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/manual/index.html?topic=ACL2\_\_\_\_COMMUNITY-BOOKS}{\underline{Community
Books}}, such as the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/combined-manual/index.html}{\underline{XDOC}}
topic itself as well as
\href{http://www.cs.utexas.edu/users/moore/acl2/current/combined-manual/index.html?topic=ACL2\_\_\_\_CERT.PL}{\underline{\texttt{cert.pl}}}
(mentioned above). Note that many books now include an XDOC book,
which causes the \texttt{:doc} command to be an alias for the
\texttt{:xdoc} command using the
\href{http://www.cs.utexas.edu/users/moore/acl2/current/combined-manual/index.html?topic=ACL2\_\_\_\_LD-KEYWORD-ALIASES}{\underline{\texttt{ld-keyword-aliases}}}
feature. This causes a bit of noisy output upon first invocation.
\end{itemize}
The history of Brownian motion was described a number of times. In 1905, Einstein established his celebrated formula
$$
\overline{\Delta X^2} = \frac{RT}{N}\frac{1}{3\pi\mu a} \Delta t
$$
for spherical particles of radius $a$ suspended in a liquid of viscosity $\mu$ at temperature $T$ ; the first member, $\overline{\Delta X^2}$, is the average of the squares of their displacements during an interval of time $\Delta t$ ; $R$ is the ideal gas constant, and $N$ the Avogadro number. In the following years Jean~Perrin made a series of experiments leading to a new determination of the Avogadro number, and observed that the very irregular motion of particles resembled the nowhere differentiable functions of mathematicians. Norbert Wiener introduced what he called ``the fundamental random function'' as a mathematical model for the physical Brownian motion. It was called immediately ``the Wiener process'', and later on, following Paul L\'evy, ``the Brownian motion''. Wiener gave several versions of the construction and derived a number of fundamental properties; L\'evy developed the theory to a high point of sophistication, and it is now a mathematical object of common use as well as a mine of interesting problems.
Here is the theory as it appears from the last exposition made by Norbert Wiener. The problem is to construct a random process $X(t,\omega)$, also denoted by $X(t)$ $(=X(t,\cdot))$, ($t$ the time, $\omega\in\Omega$ the probability space) such that
\vskip2mm
1) for almost all $\omega$, $X(t,\omega)$ is a continuous function of $t$
2) $X(t)$ is a Gaussian process, meaning that the distribution of any $n$--uple $(X(t_1),X(t_2),\ldots X(t_n))$ is Gaussian
3) this Gaussian process has stationary increments, meaning that the distribution of $X(t)-X(s)$ depends on $t-s$ only
4) it satisfies a normalized Einstein equation, that is
$$
||X(t) -X(s) ||_2^2 = |t-s|
$$
where the norm is taken in $L^2(\Omega)$.
\noindent Here is such a construction. Let $\mathcal{H}$ be an infinite--dimensional subspace of $L^2(\Omega)$ consisting of Gaussian centered variables, and $W$ an isometric linear mapping of $L^2(I)$ ($I=\bbbr$, or $\bbbr^+$, or $[0,1]$) into $\mathcal{H}$. Let $\chi_t$ be the indicator function $1_{[0,t]}$. Then
$$
X(t) = W (\chi_t)
$$
satisfies all conditions 2) to 4). Moreover, given an orthonormal basis of $L^2(I)$, $(u_n)$, its image by $W$ is a normal sequence (sequence of independent Gaussian normalized random variables $(\xi_n)$), and expanding $\chi_t$ in the form
$$
\chi_t = \Sigma\ a_n(t) u_n \qquad (\hbox{in }L^2(I))
$$
results in an expansion of $X(t)$ as a random series of functions :
$$
X(t)= \Sigma\ a_n(t) \xi_n \qquad (\hbox{in }L^2(\Omega))
$$
or, more explicitly,
$$
X(t,\omega) = \Sigma\ a_n(t) \xi_n(\omega)\,.
$$
To prove condition 1), it is enough to establish that the series in the second member converges uniformly in $t$ for almost all $\omega$, and this is done rather easily when the $u_n$ are classical orthonormal bases.
By definition, a helix is a curve in a Hilbert space, parametrized by $\bbbr$, such that the distance between two points depends only on the distance of the parameters :
$$
||X(t) - X(s)||_2^2 = \psi(t-s)\,,
$$
and $\psi(\cdot)$ is called the helix function. A translation of the parameter results in an isometric motion of the curve onto itself. It is the abstract model for all Gaussian processes with stationary increments.
When $\psi(t)=|t|$ we say that the curve is a Brownian helix. In contrast with the realizations of the Brownian motion (the functions $t\lgr X(t,\omega)$ when $\omega$ is fixed), the Brownian helix is a very regular curve. However some basic properties of the Brownian motion can be read off the Brownian helix : its Hausdorff dimension is $2$, its $2$--dimensional Hausdorff measure is nothing but $dt$, and any three points on the curve are the vertices of a right-angled triangle : the increments starting from a point are orthogonal to the past (therefore, independent from the past).
Simple examples of helices are :
1) the line $(\psi(t)=a^2t^2)$
2) the circle $(\psi(t)=r^2\ \sin^2 \omega t)$
3) the three--dimensional helices $(\psi(t) =a^2t^2 + r^2\ \sin^2 \omega t)$
4) generalizations of those, with
$$
\psi(t) = a^2t^2 + \int\ \sin^2 \omega t\ \mu(d\omega)\,,
$$
\noindent
where $\mu$ is a positive measure on $\bbbr^+$ such that the integral is finite. Actually this is the general form of a helix function.
Except when $\mu$ is carried on a finite set, the helix cannot be imbedded in a finite dimensional Euclidean space.
At the end of the 70s, Patrice Assouad developed a theory of Lipschitz embeddings of a metric space into another \cite{assouad}. He introduced and built quasi--helices in Euclidean spaces, meaning that
$$
0<a<\frac{||X(t)-X(s)||_2^2}{\psi(t-s)} < b <\infty
$$
for some $a$ and $b$ and all $t$ and $s$. When $\psi(t)=|t|$ we call them Brownian quasi--helices. Assouad constructed Brownian quasi--helices in Euclidean $\bbbr^n$ for $n\geq 3$, and this gives a new way to prove that the realizations of Brownian motion are continuous $a.s.$. He asked me the question whether $a$ and $b$ can be taken near $1$ when $n$ is large, that is, whether the Brownian helix can be approximated (in this sense) by Brownian quasi--helices. I gave a positive answer with an explicit construction, and it was published in my paper on Helices and quasi--helices \cite{kahane}.
\section{A construction of Brownian quasi--helices by means of Walsh matrices}
Let us consider $\bbbr^{2^n}$ $(n\geq 1)$ as a Euclidean space. Let $N=2^n$. If we want to construct a function $X:\bbbn \lgr \bbbr^N$ such that $X(0)=0$ and $||X(t) -X(s)||^2 = |t-s|$ when $|t-s| \leq N$, we have to choose an orthonormal basis $u_0,u_1,\ldots u_{N-1}$, define $u_{N+j}=u_j$, and write
$$
X(t) = \sum_{0\leq j\leq t-1} \pm u_j\,.
$$
At this stage there is no restriction on the signs $\pm$, and we may choose $+$ when $0\leq j\leq N-1$. If we try to obtain $||X(2t)-X(2s)||^2 = 2|t-s|$, $||X(4t) - X(4s)||^2 = 4|t-s|$ etc.\ when $|t-s|\leq N$, we have more and more conditions on the $\pm$ and we are led to the following construction.
We define the Walsh matrix of order $N$ as the $N\times N$ matrix obtained as the $n^{\rm th}$ tensor power of the matrix $\left(
\begin{array}{cc}
1& 1 \\
1& -1 \\
\end{array}
\right)$
, that is
$$
\left(
\begin{array}{cc}
1& 1 \\
1& -1 \\
\end{array}
\right) \otimes \left(
\begin{array}{cc}
1& 1 \\
1& -1 \\
\end{array}
\right) \otimes \cdots \otimes \left(
\begin{array}{cc}
1& 1 \\
1& -1 \\
\end{array}
\right) \qquad (n\hbox{ times)}\,.
$$
\noindent
For example, the Walsh matrix of order 4 is
$$
M=M_2 =
\left(
\begin{array}{cccc}
1 &1 &1 &1 \\
1& -1 & 1 &-1 \\
1&1 &-1 &-1 \\
1 &-1 &-1 &1
\end{array}
\right)
$$
and the matrix $M_{n+1}$ of order $2^{n+1}$ is obtained from $M_n$ as
$$
M_{n+1} =
\left(
\begin{array}{cc}
M_n& M_n \\
M_n& -M_n \\
\end{array}
\right)
$$
The $N^2$ first signs $\pm$ are those of the entries of the Walsh matrix, read line by line. In order to obtain the following signs, we extend the Walsh matrix by a series of vertical translations and changes of signs of some lines according to the following rule : the first column is nothing but the whole sequence of entries, written from line to line and from left to right.
With this procedure we define $X(t)$ when $t$ is an integer and we can extend the construction to all $t>0$, then to all real $t$. It is proved in \cite{kahane} that we obtain a quasi--helix with $a$ and $b$ close to $1$ when $n$ is large enough : it is the answer to the question of Assouad.
However, it was not proved that the construction provides a quasi--helix when $n=2$ (it was remarked that it gives a Peano curve in the plane when $n=1$). The aim of the present paper is to give a detailed exposition of the case $n=2$ (most of it could be copied for $n>2$) and to prove that we obtain a quasi--helix. Instead of $t\in \bbbr$ we shall consider only $t\in \bbbr^+$ and a curve starting from $0$ $(X(0)=0)$. We shall investigate the geometric properties of the curve, some of them leading to open questions of a combinatorial or arithmetical nature.
The sequences that we construct are automatic in the sense of \cite{allou-shal}.
\section{Description of the sequence}
\textbf{3.1} \hskip2mm It is a sequence of $+1$ and $-1$ as described before, in case $N=4$. We write it as a succession of $+$ and $-$ :
$$
++++\quad +-+- \quad ++-- \quad +--+ \quad ++++\ \cdots
$$
The gaps between the blocks of four letters have no meaning, except a help to understand the construction. The construction proceeds as follows : given the initial word of length $4^j$, we divide it into four words $A$, $B$, $C$, $D$ of equal length $4^{j-1}$ and write it $ABCD$ ; then
$$
A\ B\ C\ D\ A\ (-B)\ C\ (-D)\ A\ B\ (-C)(-D)\ A(-B)(-C)\ D
$$
is the initial word of length $4^{j+1}$. We shall give several equivalent definitions, using substitutions, explicit expressions, or generating functions.
Beforehand let us write the sequence in a tabular form as in the previous section :
$$
\begin{tabular}{|cccc|rl}
\cline{1-4}
+ &+&+&+ & &\hskip 3cm $a_0$\ $a_1 $\ $a_2$ \ $a_3$\\
+ &$-$&+&$-$& \hskip 3mm $A$ & \hskip 3cm$a_4$\ $a_5$\ \hbox to 2.5cm{...........} \\
+&+&$-$&$-$ &&\hskip 3cm\hbox to 4cm{....................}\\
+&$-$&$-$&+ & &\hskip 3cm.\hbox to 1.5cm{............}\ $a_{15}$\\
\cline{1-4}
+&+&+&+ &&\hskip3cm $a_{16}$ \hbox to 4cm{.............}\\
$-$&+&$-$&+ & \hskip 3mm $B$ &\hskip3cm \hbox to 4cm{....................}\\
+&+&$-$&$-$ & &\hskip3cm \hbox to 4cm{....................}\\
$-$&+&+&$-$ & &\hskip 3cm.\hbox to 1.5cm{............}\ $a_{31}$\\
\cline{1-4}
+&+&+&+ &&\hskip 3cm $a_{32}$ \hbox to 4cm{.............}\\
+&$-$&+&$-$& \hskip 3mm $C$ &\hskip3cm \hbox to 4cm{....................}\\
$-$ &$-$& +&+ & &\hskip3cm \hbox to 4cm{....................}\\
$- $&+ &+&- & &\hskip3cm \hbox to 1.5cm{............}\ $a_{47}$\\
\cline{1-4}
+&+&+&+ & &\hskip 3cm$a_{48}$ \hbox to 4cm{.............}\\
$-$ &+ &$-$ &+ &\hskip3mm$D$ &\hskip3cm \hbox to 4cm{....................}\\
$-$ &$-$ &+&+ &&\hskip3cm \hbox to 4cm{....................}\\
+ &$-$ &$-$ &+ &&\hskip3cm \hbox to 1.5cm{............}\ $a_{63}$\\
\cline{1-4}
&&& &&\hskip 3cm$a_{64}$ \hbox to 4cm{.............}\\
&&& &\hskip3mm$A$ &\hskip3cm \hbox to 4cm{....................}\\
&&& &&\hskip3cm \hbox to 4cm{....................}\\
&&& &&\hskip3cm \hbox to 1.5cm{............}\ $a_{79}$\\
\cline{1-4}
&&& &&\hskip 3cm$a_{80}$ \hbox to 4cm{.............}\\
&&& &\hskip3mm$-B$ &\hskip3cm \hbox to 4cm{....................}\\
&&& &&\hskip3cm \hbox to 4cm{....................}\\
&&& &&\hskip3cm \hbox to 4cm{....................}\\
\cline{1-4}
\end{tabular}
$$
\vskip6mm
\noindent\textbf{3.2} \hskip2mm Let me give an explicit expression for $a_n$. Writing
$$
n= n_0 +4n_1 +\cdots +4^\nu n_\nu \qquad (n_\nu=1,2,3 ;\ n_j=0,1,2,3\ \hbox{if } j<\nu)
$$
the construction shows that
$$
a_n = a_{n_0+4n_1} \ a_m\,,\quad m=n_1 +4n_2+\cdots +4^{\nu-1}n_\nu\,,
$$
that is
$$
a_n = a_{n_0+4n_1} \ a_{n_1+4n_2} \cdots a_{n_{\nu-1}+4n_\nu}\,.
$$
In the second member we find $a_j$'s with $j\leq 15$. Their value is $-1$ when $j=5,7,10,11,13$ and $14$, and $+1$ otherwise. Now let us express $a_n$ as a function of $n$ written in the $4$--adic system of numeration. We obtain the formula
$$
a_n=(-1)^{A_n}
$$
\noindent
where $A_n$ is the number of $11,13,22,23,31,32$ in the $4$--adic expansion of $n$. For example, if $n= 1\ 3\ 2\ 0\ 0\ 1\ 1\ 1\ 0\ 2\ 3\ 1\ 1\ 1\ 2\ 2$ the significant links are
$$
1_- 3_-2\ 0\ 0\ 1_-1_-1\ 0\ 2_-3_-1_-1_-1\ 2_-2
$$
$A_n$ is nine and $a_n=-1$.
\vskip4mm
\noindent \textbf{3.3} \hskip2mm Let me describe the sequence by means of a substitution rule.
We start from an alphabet made of eight letters : $+a,+b,+c,+d,-a,-b,-c,-d$. The substitution rule is
$$
\begin{array}{lll}
+a& \lgr &+a+b+c+d \\
+b& &+a-b+c-d \\
+c& & +a+b-c-d\\
+d &&+a-b-c+d\\
-a && -a-b-c-d\\
-b && -a+b-c+d\\
-c &&-a-b+c+d\\
-d &&-a+b+c-d\\
\end{array}
\leqno{(S_0)}
$$
The infinite word beginning with $+a$ and invariant under the substitution is
$$
W= +a+b+c+d+a-b+c-d +a+b-c-d+a-b-c+d+\cdots
$$
Replacing $a,b,c,d$ by $1$ (or, in a graphic way, in suppressing them), we obtain our sequence of $\pm1$ (or $\pm$).
\vskip4mm
\noindent \textbf{3.4} \hskip2mm Actually there is a simpler substitution rule leading to the same result, namely
$$
\begin{array}{lll}
+a& \lgr &+a+b \\
+b& &+c+d \\
+c& & +a-b \\
+d &&+c-d\\
-a && -a-b\\
-b && -c-d\\
-c &&-a+b\\
-d &&-c+d\\
\end{array}
\leqno{(S_1)}
$$
It can be checked immediately that $(S_1)(S_1)=(S_0)$.
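For instance, applying $(S_1)$ twice to the letter $+b$ gives
$$
+b\ \lgr\ +c+d\ \lgr\ (+a-b)(+c-d) = +a-b+c-d\,,
$$
which is the image of $+b$ under $(S_0)$.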
\vskip4mm
\noindent \textbf{3.5} \hskip2mm The generating function of the sequence $(a_n)$ is
$$
f(z) =a_0 +a_1 z +a_2 z^2+\cdots
$$
It can be defined using partial sums of order $4^n$. Let us introduce the matrix
$$
M(z) =
\left(
\begin{array}{cccc}
1&z &z^2 &z^3 \\
1 &-z &z^2 &-z^3 \\
1& z &-z^2 &-z^3 \\
1& -z &-z^2 &z^3
\end{array}
\right)
$$
and define four sequences of polynomials by the formulas
$$
\left(
\begin{array}{c}
P_0 \\
Q_0 \\
R_0 \\
T_0
\end{array}
\right) =
\left(
\begin{array}{c}
1 \\
1 \\
1 \\
1
\end{array}
\right) \qquad
\left(
\begin{array}{c}
P_{n+1} \\
Q_{n+1} \\
R_{n+1} \\
T_{n+1}
\end{array}
\right) = M(z^{4^n})
\left(
\begin{array}{c}
P_n \\
Q_n \\
R_n \\
T_n
\end{array}
\right)
$$
$(n=0,1,\ldots)$. Then
$$
\left(
\begin{array}{c}
P_n \\
Q_n \\
R_n \\
T_n
\end{array}
\right) =M(z^{4^{n-1}}) M(z^{4^{n-2}})\cdots M(z^4)M(z)
\left(
\begin{array}{c}
1 \\
1 \\
1 \\
1
\end{array}
\right) \,.
$$
When $|z|=1$, we have $M(z)\,\overline{M(z)}^{\,t}=4I$, therefore the matrix ${1\over2}M(z)$ is unitary, hence
$$
\begin{array}{ll}
|P_n|^2 + |Q_n|^2 + |R_n|^2 + |T_n|^2 &=4(|P_{n-1}|^2 + |Q_{n-1}|^2 + |R_{n-1}|^2 + |T_{n-1}|^2)\\
&=4^n (1+1+1+1)=4^{n+1}
\end{array}
$$
We obtain the generating function as
$$
f(z) = \lim_{n\rightarrow \infty} P_n(z)
$$
We can write the generating function in a more interesting form :
$$
f(z) =f_0(z^4) +z f_1(z^4) +z^2 f_2(z^4) +z^3 f_3(z^4)\,,
$$
where the coefficients of the power series $f_0,f_1,f_2,f_3$ are the columns of the table in 3.1. In order to obtain these coefficients, we can start from $W$ in 3.3 and replace $a,b,c,d$ by $1,1,1,1$ (for $f_0$), $1,-1,1,-1$ (for $f_1$), $1,1,-1,-1$ (for $f_2$) and $1,-1,-1,1$ (for $f_3$). Then $f_0=f$. Writing
$$
F(z) =
\left(
\begin{array}{c}
f_0 (z) \\
f_1 (z) \\
f_2 (z) \\
f_3 (z)
\end{array}
\right)
$$
the functional equation of the generating functions of the columns is
$$
F(z) = M(z) F(z^4)\,.
$$
\section{Description of the curve}
\textbf{4.1} \hskip2mm Let $u_0,u_1,u_2,u_3$ be an orthonormal basis of the Euclidean space $\bbbr^4$, and define $u_{j+4}=u_j$ $(j=0,1,\ldots)$. The partial sums of the series
$$
a_0u_0 +a_1 u_1 +a_2u_2 +\cdots
$$
(that can be obtained from $W$ in 3.3 by replacing $a,b,c,d$ by $u_0,u_1,u_2,u_3$) will be denoted by $S(n)$. Then
$$
S(n) = a_0u_0 +a_1u_1+\cdots +a_{n-1}u_{n-1} \in \bbbz^4\,.
$$
It is easy to check on the table in 3.1 that
$$
S(16n) =4S(n)\qquad (n\in \bbbn)\,.
$$
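For instance, summing the first four lines of the table columnwise gives $S(16)=4u_0=4S(1)$.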
Moreover it is not difficult to see (we shall be more specific later) that
$$
||S(n)-S(m)||^2 \leq b\ |n-m|
$$
for some $b<\infty$ and all $n$ and $m$. This allows us, first, to define $S(t)$ when $t$ is a dyadic rational via the formula $S(16^\nu t) = 4^\nu S(t)$, then to check that
$$
\begin{array}{c}
S(16 t) =4S(t) \\
||S(t)-S(s)||^2 \leq b |t-s| \\
\end{array}
$$
for such numbers, then to extend $S(\cdot)$ by continuity on $\bbbr^+$, and check the above formulas for all $t\geq 0$ and $s\geq 0$.
The curve we consider is the image of $\bbbr^+$ by $S(\cdot)$.
Clearly (changing $t$ into $16t$) the curve is invariant under a homothety of center $0$ and ratio $4$. Our main purpose is to prove that it is a Brownian quasi--helix. We shall point out some geometric properties first.
\vskip4mm
\noindent\textbf{4.2} \hskip2mm The matrix $M$ transforms $u_0,u_1,u_2,u_3$ into $u_0',u_1',u_2',u_3'$ :
$$
(u_0',u_1',u_2',u_3') = M (u_0,u_1,u_2,u_3)
$$
and the partial sums of order $n$ of the series $\Sigma a_ju_j'$ are the partial sums of order $4n$ of the series $\Sigma a_ju_j$. Therefore the equation
$$
S(4t)=M\ S(t)
$$
holds true when $t=n\in \bbbn$ and by extension for all $t\in\bbbr^+$.
It is easy to check that the eigenvalues of $M$ are $2$ and $-2$, and that
$$
M
\left(
\begin{array}{cc}
1& 1 \\
1& 0 \\
1& 0 \\
-1 &1
\end{array}
\right) =
\left(
\begin{array}{cc}
2& 2 \\
2& 0 \\
2& 0 \\
-2 & 2
\end{array}
\right)\,, \qquad
M \left(
\begin{array}{cc}
1& 0 \\
-1& 1 \\
-1& -1 \\
-1 & 0
\end{array}
\right) =
\left(
\begin{array}{cc}
-2& 0 \\
2& -2 \\
2& 2 \\
2 & 0
\end{array}
\right)\,.
$$
Let
$$
\begin{array}{cc}
v_0 = {1\over2}(u_0+u_1+u_2-u_3)\,,& v_1 = {\sqrt{2}\over2}(u_0+u_3)\hfill \\
v_2 = {1\over2}(u_0-u_1-u_2-u_3)\,, & v_3 = {\sqrt{2}\over2}(u_1-u_2)\,. \\
\end{array}
$$
They constitute an orthonormal system. The vectors $v_0$ and $v_1$ generate a plane, $P$, which is the eigenspace of the eigenvalue $2$, and $v_2$ and $v_3$ a plane, $Q$, corresponding to the eigenvalue $-2$. Expressed via the orthonormal basis $v_0,v_1,v_2,v_3$, the operator $M$ takes the form
$$
M'=U\ M \ U^{-1}= 2
\left(
\begin{array}{cccc}
1\ &0 &0 &0 \\
0\ &1 &0 &0 \\
0\ &0 &-1 &0 \\
0\ &0 &0 &-1
\end{array}
\right)\,,
$$
where $U$ is the unitary matrix carrying $u_0,u_1,u_2,u_3$ onto $v_0,v_1,v_2,v_3$. It means that the transformation $S(t) \lgr S(4t)$ is the product of a homothety of centre $0$ and ratio $2$ and an orthogonal symmetry with respect to the plane $P$.
\vskip4mm
\noindent \textbf{4.3} \hskip2mm Clearly $M^2=4I$ ($I$ being the identity matrix). In turn, $M$ is the square of another simple matrix, as can be guessed from 3.3 and 3.4. Let us define
$$
T = \left(
\begin{array}{cccc}
1\ & 0& 1 &0 \\
1\ & 0& -1 &0 \\
0\ & 1& 0 &1 \\
0 \ & 1& 0 &-1
\end{array}
\right)\,.
$$
Then $M=T^2$.
The eigenvalues of $T$ are $\sqrt{2}$, $-\sqrt{2}$, $i\sqrt{2}$ and $-i\sqrt{2}$. The vectors
$$
w_0 = \frac{\sqrt{2}}{2} (v_0+v_1)\,, \qquad w_1 = \frac{\sqrt{2}}{2} (v_0-v_1)
$$
are eigenvectors corresponding to $\sqrt{2}$ and $-\sqrt{2}$. Defining $w_2$ and $w_3$ in such a way that $w_0,w_1,w_2,w_3$ is a direct orthonormal basis, and $W$ being the unitary matrix carrying $u_0,u_1,u_2,u_3$ onto $w_0,w_1,w_2,w_3$, we can write
$$
WTW^{-1}= T' =\sqrt{2}
\left(
\begin{array}{cccc}
1& 0 &0 &0 \\
0 & -1 &0 & 0 \\
0& 0& 0 &-1 \\
0 &0 &1 &0
\end{array}
\right)\,.
$$
It means that $T'$ is decomposed into :
1) an homothety of centre $0$ and ratio $\sqrt{2}$
2) a rotation of $\frac{\pi}{2}$ of the orthogonal projection on $Q$
3) a symmetry with respect to $w_0$ of the orthogonal projection on P.
\vskip1mm
In the same way as we obtained the equation $S(4t)=MS(t)$, we now have
$$
S(2t) = T\,S(t)
$$
and we have just given the interpretation of the transformation $S(t) \lgr S(2t)$ as a product of simple transformations.
\vskip4mm
\noindent \textbf{4.4} \hskip2mm
We have investigated the properties of the transformations $S(t)\lgr S(16t)$, $S(t)\lgr S(4t)$, $S(t)\lgr S(2t)$ as products of homotheties and isometries. Now we shall look at the effect of a translation of $t$ by an integer. We are interested in differences $S(t)-S(s)$.
Let us begin with integers $m<n<16^k$. Let us divide the series $a_0u_0+a_1u_1+\cdots$ into consecutive blocks of length $16^k$, so that the series reads
$$
+A + B +C +D +A -B +C -D + \cdots
$$
If $j=j_0 +4j_1$ $(j_0 =0,1,2,3,j_1\in \bbbn)$, the $j$--th term is of type $A,B,C,D$ according to the value of $j_0$ and its sign is $a_j$. Therefore
$$
S(n+j\cdot 16^k) - S(m+j\cdot 16^k) = a_j (S(n+j_0 16^k) -S(m+j_0 16^k))\,.
$$
If $0<s<t<1$, we can approximate $s$ and $t$ by $m\, 16^{-k}$ and $n\, 16^{-k}$ and we obtain
$$
S(t+j) -S(s+j) = a_j(S(t+j_0) -S(s+j_0))
$$
$(j_0=0,1,2,3,\ j_0=j$ modulo $4$).
This expresses that all arcs $\cala_j = S([j,j+1])$ are isometric images (actually translates or symmetric images, according to the value of $a_j$) of one of the arcs $\cala_0,\cala_1,\cala_2,\cala_3$ (according to the value of $j_0$). Using 4.3, this holds true when we replace the $\cala_j$'s by $\cala_j^\nu = S([j2^\nu,(j+1)2^\nu])$, whatever $\nu\in \bbbz$.
\section{It is a Brownian quasi--helix}
\vskip2mm
\textbf{5.1} \hskip2mm What we have to prove is that, writing
$$
a = \inf_{0<s<t} \frac{||S(t)-S(s)||}{\sqrt{|t-s|}} \leq \sup_{0<s<t} \frac{||S(t)-S(s)||}{\sqrt{|t-s|}} = b\,,
$$
we have
$$
a>0\,, \qquad b < \infty\,.
$$
We can write as well
$$
a = \inf_{m<n}\frac{||S(n)-S(m)||}{\sqrt{|n-m|}}\,, \quad b=\sup_{m<n} \frac{||S(n)-S(m)||}{\sqrt{|n-m|}}
$$
\vskip2mm
\noindent \textbf{5.2} \hskip2mm The easy part is $b<\infty$.
Let us first assume $[m,n] = [j 2^k, (j+1) 2^k]$. Then, according to 4.3,
$$
||S(n) - S(m)|| = 2^{k/2}\,.
$$
In the general case, let us decompose $[m,n]$ into such intervals in a minimal way, so that there are at most two intervals of the same length in the decomposition. If the largest length is $2^k$, we obtain
$$
||S(n)-S(m)|| \leq 2(2^{k/2} +2^{(k-1)/2} +\cdots ) \leq 2(2+\sqrt{2})\, 2^{k/2}
$$
therefore
$$
||S(n) - S(m)|| \leq 2(2+\sqrt{2})\, |n-m|^{1/2}\,.
$$
This gives
$$
b\leq 2(2+\sqrt{2})
$$
\noindent\textbf{5.3} \hskip2mm To prove $a>0$ is more tricky.
We shall use two lemmas.
\vskip2mm
\begin{lemma} {There exists $\alpha>0$ such that}
$$
||S(n+h) - S(n)|| \leq 1-\alpha
$$
{for all $n\in\bbbn$ and $h\in [-\frac{1}{2},\, \frac{1}{2}]$ (with $n+h\geq 0$).}
\end{lemma}
\vskip2mm
\begin{lemma} There exists an integer $A$ such that
$$
||S(n) -S(m)|| \geq 2
$$
for all integers $m$ and $n$ such that $n-m\geq A$.
\end{lemma}
\vskip2mm
Assuming these two lemmas, the result is at hand : given $t$ and $s$ such that $t-s\geq A+1$, we can write $s=m+h$ and $t=n+h'$ with $h$, $h'\in [-\frac{1}{2},\frac{1}{2}]$ and $n-m\geq A$, therefore
$$
||S(t)-S(s)||\geq 2\alpha\,.
$$
Whenever $(A+1)2^k < t-s \leq (A+1)2^{k+1}$ $(k\in\bbbz)$, we have
$$
||S(t)-S(s)|| \geq 2\alpha 2^{k/2} \geq \frac{2\alpha}{\sqrt{2(A+1)}}\ |t-s|^{1/2}
$$
therefore
$$
a\geq \frac{2\alpha}{\sqrt{2(A+1)}}\,.
$$
\vskip4mm
\noindent\textbf{5.4} \hskip2mm Proof of Lemma 1.
From now on it may be useful to represent $S(n)$ on the table of 3.1, and also the differences $S(n)-S(m)$, as figures consisting of consecutive lines plus or minus part of a line above and below, in such a way that each column in the figure has a sum equal to the corresponding coordinate of $S(n)$ or $S(n)-S(m)$.
$$
\begin{array}{c}
\begin{tabular}{|cccc|}
\hline
+ &+&+&+ \\
+ &$-$&+&$-$ \\
+&+&$-$&$-$ \\
+&$-$&$-$&+\\
\cline{2-4}
\multicolumn{1}{|c|}{+} &&\\
\cline{1-1}
\end{tabular}
\\
S(17)
\\
\\
\\
\\
\\
\end{array}
\qquad
\begin{array}{c}
\begin{tabular}{|cccc|}
\hline
&&& \\
&&& \\
&&& \\
&&& \\
\hline
&&& \\
&&& \\
\cline{2-4}
\multicolumn{1}{|c|}{} &+&$-$ &$-$\\
\cline{1-1}
$-$ &+ &+ &$-$\\
\hline
\end{tabular}\\
\\
S(32) -S(25)
\end{array}
$$
Let us consider
$$
||S(16n+m)-S(16n)||^2
$$
for $n\in \bbbn$ and $m=0,\pm1,\pm2,\pm3,\ldots,\pm 8$. It is sufficient to consider the four cases $n=0,1,2,3$, and to look at the figures (depending on $m$) in each case. The result is
$$
\begin{array}{cc}
||S(16n+m)-S(16n)||^2 \leq 8 &(n\ \hbox{odd)} \\
||S(16n+m)-S(16n)||^2 \leq 9 &(n\ \hbox{even)} \\
\end{array}
$$
with equality only when $m$ is odd (as for $S(32)-S(25)$). Therefore, going one step further,
$$
\begin{array}{ll}
||S\big(16n+m+\frac{p}{16}\big) - S(16n)|| &\leq \sqrt{8} +\frac{1}{4} \sqrt{9} \ \ (n\ \hbox{odd}) \\
\\
||S\big(16n+m+\frac{p}{16}\big) - S(16n)|| & \leq \sqrt{9} +\frac{1}{4} \sqrt{8} \ \ (n\ \hbox{even}) \\
\end{array}
$$
when $p=0,\pm1,\pm2,\ldots,\pm8$. Proceeding that way we finally obtain
$$
||S(16(n+t)) - S(16n)|| \leq 4(1-\alpha)\qquad \big(-{\textstyle{1\over2}} \leq t \leq{ \textstyle{1\over2}}\big)
$$
where
$$
4(1-\alpha)=\big(\sqrt{9}+{1\over4}\sqrt{8}\big)\big(1+{1\over16}+{1\over16^2}+\cdots \big)
$$
the second member being indeed less than $4$ (it is $\approx 3.95$), and that proves Lemma 1.
\vskip4mm
\noindent \textbf{5.5} \hskip2mm Proof of Lemma 2.
Here again we look at the table. We can compute
$||S(n)-S(m)||$
when $n-m$ is in a given interval, and moreover give the couples $(m,n)$ for which the infimum is attained, and the expression of $S(n)-S(m)$ (that is, the coordinates with respect to $u_0,u_1,u_2,u_3$).
1) $4<n-m \leq 16$. It suffices to consider the first 16 lines of the table (fig.~1), since adding to $m$ and $n$ a multiple of 64 does not change $||S(n)-S(m)||$. We obtain
$$
\inf ||S(n)-S(m)||^2 =2
$$
realized for $(5,11)$, $(23,29)$, $(35,41)$ and $(53,59)$ :
\vskip2mm
$$
\begin{array}{cc}
S(11) - S(5) = u_0-u_3\,,\hfill & S(29) -S(23) =u_1 -u_2\hfill \\
S(41) -S(35) = -u_1+u_2\,, &S(59) - S(53)=-u_0 +u_3 \\
\end{array}
$$
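For instance, reading the table,
$$
S(11)-S(5) = a_5u_1+a_6u_2+a_7u_3+a_8u_0+a_9u_1+a_{10}u_2 = u_0-u_3\,.
$$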
\begin{center}
$\begin{array}{ll}
\begin{tabular}{|cccc|rr}
\cline{1-4}
&&&\\
\cline{2-4}
\multicolumn{1}{|c|}{} &$-$ &+&$-$ &$S(11)-S(5)$\ \ \\
\cline{1-1}\cline{4-4}
+&+&$-$&\multicolumn{1}{|c|}{}\\
\cline{1-3}
&&&\\
\cline{1-4}
&&&\\
\cline{4-4}
&&&\multicolumn{1}{|c|}{+} &$S(29)-S(23)$\\
\cline{1-3}
+ &+&$-$ &$-$\\
\cline{2-4}
\multicolumn{1}{|c|}{$-$} &&&\\
\cline{1-4}
&&&\multicolumn{1}{|c|}{+} &$S(41)-S(35)$\\
\cline{1-3}
+ &$-$&+&$-$ \\
\cline{2-4}
\multicolumn{1}{|c|}{$-$} &&&\\
\cline{1-1}
&&&\\
\cline{1-4}
&&&\\
\cline{2-4}
\multicolumn{1}{|c|}{} &+&$-$&+ &$S(59)-S(53)$\\
\cline{1-1}\cline{4-4}
$-$&$-$ &+ &\multicolumn{1}{|c|}{} \\
\cline{1-3}
&&&\\
\cline{1-4}
\end{tabular}\\
\noalign{\vskip2mm}
\hskip4mm \textrm{fig. 1}\\
\end{array}$ \hskip3mm
$
\begin{array}{ll}
\begin{tabular}{|cccc|l}
\cline{1-4}
\multicolumn{2}{|c|}{}&$-$ &+ &$\ \ell \ 6$ \\
\cline{1-2}
+ &+&$-$&$-$ & $\ \ell \ 7$\\
$-$ &+&+&$-$ &\ $\ell\ 8$ \\
+&+&+&+ &$\ \ell \ 9$ \\
+&$-$&+&$-$ &\ $\ell \ 10$\\
\cline{3-4}
$-$ &$-$ &\multicolumn{2}{|c|}{} &\ $\ell\ {11}$\\
\cline{1-4}
\end{tabular}\\
&\hskip-1cm S(42)-S(22) \\
\hskip4mm \textrm{fig. 2} &\\
&\\
&\\
&\\
&\\ &\\ &\\ &\\ &\\ &\\
\end{array}
$
\end{center}
\vskip2mm
2) $16 \leq n-m\leq 64$. It suffices now to consider the first 256 terms (the first 64 lines of the table). The idea in order to pick the infimum is to start from $S(4\times 11)-S(4\times 5)$ and the analogues, and modify the figure in order to diminish $||S(n)-S(m)||^2$ (fig. 2). As a first example, $S(44)-S(20)= M(S(11)-S(5))=2u_1+2u_2$ (obtained by replacing $u_0$ and $u_3$ by $u_0+u_1+u_2+u_3$ and $u_0-u_1-u_2+u_3$ in the expression of $S(11) -S(5)$), and the modification provides $S(42)-S(22)= u_0+u_1+u_2-u_3$. The result is
$$
\inf ||S(n)-S(m)||^2=4
$$
realized for $(22,42)$ and $(214,234)$, with
$$
\begin{array}{cc}
S(42)-S(22) &=u_0 +u_1+u_2-u_3\,,\\
S(234)-S(214)&=-u_0-u_1-u_2+u_3\,. \\
\end{array}
$$
This proves Lemma 2 with $A=16$.
\vskip2mm
Actually the proof can be given in a more concentrated form. It is enough to show that $||S(n)-S(m)||^2 \leq 3$ is impossible when $n-m\geq 16$. Let us assume \textit{ab absurdo} that $||S(n) -S(m)||^2\leq 3$. Let us add or remove the minimal number of terms in order to transform $S(n)-S(m)$ into a difference of the form $S(4n') - S(4m')$ (that is, to transform the figure $S(n)-S(m)$ into a rectangle). In general, this minimal number is $\leq 4$ and has the same parity as $||S(n)-S(m)||^2$ ; here it is $\leq 3$ and the resulting $S(4n')-S(4m')$ has its squared norm $\leq 12$, therefore $||S(n') -S(m')||^2\leq 3$ and the process goes on until we reach $S(n^{\times}) - S(m^{\times})$ with $n^{\times}\leq 64$. Then we know the possible pairs $(m^{\times},n^{\times})$, namely (5,11), (23,29), (35,41) and (53,59), and the reverse process never gives a squared norm $\leq 3$.
\vskip2mm
\noindent \textbf{5.6} Remarks and questions
The estimates we gave for $b$ and $a$ are quite rough. We can ask for better estimates and conjectures. The actual problem, of a combinatorial or arithmetical nature, is to compute these numbers exactly.
We were interested in estimating $b$ from above and $a$ from below. Examples provide estimates in the opposite direction :
$$
\begin{array}{cc}
b & \geq \frac{||S(17)||}{\sqrt{17}} = \frac{5}{\sqrt{17}} \geq 1.21\hfill \\
\\
a & \leq \frac{||S(42)-S(22)||}{\sqrt{20}} = \frac{2}{\sqrt{20}} = \frac{1}{\sqrt{5}} \leq 0.45 \\
\end{array}
$$
It seems not impossible that the estimate for $a$ is precise, that is $a=\frac{1}{\sqrt{5}}$. A careful investigation of the table would confirm or disprove this conjecture. It would lead also to a better estimate for $b$.
\section{Projections of the curve}
\noindent\textbf{6.1} \hskip2mm The direction of $u_0$ is special : all first coordinates of the $S(n)$ are $\geq 0$. That means that the partial sums $S_0(n)$ of the original series described in 3.1 are positive.
A simple way to see it is to use Lemma 1 (of 5.3). Since $S_0(n)\geq 1$ for $n=1,2,3,4,5,6,7,8$, we have $S_0(t)>0$ for $\frac{1}{2}\leq t \leq 8$, therefore (changing $t$
into $16^k t$), $S_0(t)>0$ for all $t>0$.
\vskip4mm
\noindent\textbf{6.2} \hskip2mm We are mainly interested in the three--dimensional projections of the curve. It seems likely that all parallel projections of the curve on a three--dimensional subspace of $\bbbr^4$ have an infinity of double points. The question can be formulated in the equivalent forms :
\vskip2mm
1) is every direction in $\bbbr^4$ the direction of some $S(t)-S(s)$ ?
2) are the $\frac{S(n)-S(m)}{\sqrt{n-m}}$ $(n>m)$ dense on the sphere $S^3$ ?
\vskip4mm
\noindent\textbf{6.3} \hskip2mm Let us project the curve from $0$ on the sphere $S^3$, that is consider
$$
\cals(t)=\frac{S(t)}{||S(t)||} \qquad (t>0)\,.
$$
We obtain a closed curve $\calc$, image of any interval $[a,16a]$ by $\cals(\cdot)$.
$\calc$ is invariant under the isometries of $\bbbr^4$ defined by $\frac{1}{2}M$ and $\frac{1}{\sqrt{2}}T$ (see 4.2 and 4.3). The first takes the form
$$
\frac{1}{2}M' =
\left(
\begin{array}{cccc}
1\ &0 &0 &0 \\
0\ & 1 & 0 &0 \\
0\ & 0 &-1 &0 \\
0\ &0&0&-1
\end{array}
\right)
$$
with respect to the orthonormal basis $(v_0,v_1,v_2,v_3)$ and the second
$$
\frac{1}{\sqrt{2}}T' =
\left(
\begin{array}{cccc}
1&0 &0 &0 \\
0& -1 & 0 &0 \\
0& 0 &0 &-1 \\
0 &0&1&0
\end{array}
\right)
$$
with respect to the orthonormal basis $(w_0,w_1,w_2,w_3)$. The vectors $v_0$ and $v_1$ ($w_0$ and $w_1$ as well) generate a plane $P$ such that the mapping $\cals(t)\lgr \cals(4t)$ is an orthogonal symmetry with respect to $P$. For the projection of $\calc$ on $P$, the change of $t$ into $2t$ means a symmetry with respect to the line generated by $w_0$.
$\calc$ has a double point at $t={1\over3}$, $t'={4\over3}$ : $\cals({4\over3})=\cals({1\over3})$.
In order to prove it, we expand $t$ and $t'$ in base $4$ (we underline the expansion)
$$
t= \underline{0. 1111}\cdots \qquad t'= \underline{1. 1111}\cdots \,.
$$
Using base $4$ again, we easily obtain the figures and the values of $S(\underline{1})$, $S(\underline{11})$, $S(\underline{111})$ and so on : $S(1) = u_0=(1\ 0\ 0\ 0)$
$$
\begin{array}{ll}
S(\underline{11}) \hfill &=S(\underline{10})+ (S(\underline{11}) - S(\underline{10}))= (1\ 1\ 1\ 1)+(1\ 0\ 0\ 0) \hfill \\
\noalign{\smallskip}
S(\underline{111}) &=S(\underline{100})+ (S(\underline{110}) - S(\underline{100})) +(S(\underline{111}) - S(\underline{110})) \hfill \\
\noalign{\smallskip}
&=(4\ 0\ 0\ 0) + (1\ 1\ 1\ 1) - (1\ 0\ 0\ 0)\hfill\\
\noalign{\smallskip}
S(\underline{1111}) &= S(\underline{1000}) +(S(\underline{1100}) - S(\underline{1000})) + (S(\underline{1110}
) - S(\underline{1100}))\\
&\hskip6cm + (S(\underline{1111}) - S(\underline{1110}))\\
\noalign{\smallskip}
&=(4\ 4\ 4\ 4) + (4\ 0\ 0\ 0) - (1\ 1\ 1\ 1) + (1\ 0\ 0\ 0)\hfill\\
\noalign{\smallskip}
S(\underline{11111}) &= (16\ 0\ 0\ 0)+(4\ 4\ 4\ 4) - (4\ 0\ 0\ 0)+ (1\ 1\ 1\ 1) - (1\ 0\ 0\ 0)\\
\noalign{\smallskip}
S(\underline{111111}) &= (16\ 16\ 16\ 16) + (16\ 0\ 0\ 0) - (4\ 4\ 4\ 4) + (4\ 0\ 0\ 0)\\
&\hskip6cm - (1\ 1\ 1\ 1) + (1\ 0\ 0\ 0)
\end{array}
$$
The ratio between two consecutive vectors tends to $2$ (meaning that the ratios of coordinates tend to $2$), hence
$$
\cals(t') =\cals(t) \qquad \big(t = \frac{1}{3}\big)\,.
$$
By isometry we also have
$$
\cals(2t') = \cals(2t)\,.
$$
These double points are contained in the plane $P$, and they are symmetric with respect to the line generated by~$w_0$.
I believe, but did not prove, that these are the only multiple points of the curve $\calc$. In that case, $\calc$ is a Brownian quasi--helix (actually, a Brownian quasi--circle) on some $4$--covering of the sphere~$S^3$.
\vskip4mm
\noindent\textbf{6.4} \hskip2mm One can see the curve $\calc$ in two other ways.
First, taking into account that the first coordinate $S_0(t)$ is always positive, we can consider
$$
\calr(t) = \frac{S(t)}{S_0(t)}
$$
and the curve $\calc'$ described by $\calr(\cdot)$, projection of the original curve with a source at $0$ and a screen at the hyperplane $x_0=1$.
Symmetries and double points can be studied on this model as well.
Secondly, we obtain a projective model of $\calc$, say, $\calc''$, on choosing four points $A_0,A_1,A_2,A_3$ in $\bbbr^4$, defining $A_{j+4}=A_j$ $(j=0,1,\ldots)$, starting with a point $M_0=A_0$ and defining the sequence of points
$$
M_{n+1} = \frac{1}{a_0+ a_1+\cdots +a_n}\, \big((a_0+a_1+\cdots +a_{n-1}) M_n +a_n A_n\big)\,.
$$
\vskip2mm
Some real figures would help. If a reader is willing to draw figures of the above curves, I would appreciate seeing them.
In the past decades complex networks and their behavior have attracted much attention. In the real world many of such networks can be found, for instance as social, information, technological and biological networks. An interesting property that many of them share is that they are {\em scale free}~\cite{New03}. This means that their degree sequences obey a {\em power law}, i.e., the fraction of nodes that have $k$ neighbors is proportional to $k^{-\tau}$ for some $\tau>1$. We therefore use power-law random graphs as a simple model for real-world networks.
Not only the structure of these networks is interesting, also the behavior of processes living on these networks \chs{is} a fascinating subject. Processes one can think of are opinion formation, the spread of information and the spread of viruses. An extensive overview of complex networks and processes on them is given by Newman in~\cite{New03}. It is especially interesting if these processes undergo a so-called {\em phase transition}, i.e., a minor change in the circumstances suddenly results in completely different behavior. Examples of such phase transitions include the sudden East European revolution in 1989 \cite{Kur91} and the unusual swine flu outbreak in 2009 \cite{CohEns09}.
Physicists have studied the behavior near phase transitions, the {\em critical behavior}, on complex networks for many different models, see~\cite{DorGolMen08} for an overview. Many of these results have not been mathematically rigorously proved. One of the few models for which rigorous results have been obtained is the contact process \cite{ChaDur09}, where the predictions of physicists, in fact, turned out not to be correct. A mathematical treatment of other models is therefore necessary.
We focus on the {\em Ising model}, a paradigm model for the study of phase transitions \cite{Nis05,Nis09,Nis11}. In this model a spin value that can be either $+1$ or $-1$ is assigned to every vertex. These spins influence each other with {\em ferromagnetic} interactions, i.e., neighboring spins prefer to be aligned. The strength of these interactions depends on the temperature. The first rigorous study of the Ising model on a random graph was performed by De Sanctis and Guerra
in \cite{SanGue08}, where the high and zero temperature regime of the Ising model on the Erd{\H o}s-R\'enyi random graph were analyzed. Later, in~\cite{DemMon10}, Dembo and Montanari analyzed the Ising model on locally tree-like random graphs with a finite-variance degree distribution for any temperature. In~\cite{DomGiaHof10}, we generalized these results to the case where the degree distribution has strongly finite mean, but possibly infinite variance, i.e., the degree distribution obeys a power-law with exponent $\tau>2$. An analysis of the critical behavior, however, was still lacking.
In this article, we rigorously study the critical behavior of the Ising model on power-law random graphs by computing various critical exponents. Predictions for the values of these exponents were given by Dorogovtsev et al.\ in~\cite{DorGolMen02} and independently by Leone et al.\ in~\cite{LeoVazVesZec02}, and we prove that these values are indeed correct. These exponents depend on the power-law exponent $\tau$. We prove that \ch{the
critical exponents $\boldsymbol{\delta}, \boldsymbol{\beta}$ and $\boldsymbol{\gamma}$} take the classical mean-field values for $\tau>5$, and hence also for the Erd{\H o}s-R\'enyi random graph, but are different for $\tau\in(3,5)$. In~\cite{DorGolMen02,LeoVazVesZec02} also the case $\tau\in(2,3)$ is studied for which the critical temperature is infinite. Hence, the critical behavior should be interpreted as the temperature going to infinity, which is a different problem from approaching a finite critical temperature and is therefore beyond the scope of this article.
\ch{Our proofs always start by relating the magnetization \chs{of the Ising model on the random graph} and various of its derivatives to the root magnetization of a rooted random tree,
the so-called unimodular tree. After this, we identify the critical
exponents related to the root magnetization on the rooted random tree. As a result, all our results also
apply to this setting, where only in the case of the regular tree, the mean-field critical exponents
have been identified \cite{Bax82}, and which we extend to general offspring distributions.}
\section{Model definitions and results}
\subsection{Ising model \ch{on finite graphs}}
We start by defining Ising models on finite graphs. Consider a random graph sequence $\ch{(G_n)}_{n \geq 1}$. Here $G_n=(V_n,E_n)$, with vertex set $V_n=[n] \equiv \{1,\ldots,n\}$ and with a random edge set $E_n$. To each vertex $i\in [n]$ an Ising spin $\sigma_i = \pm 1$ is assigned. A configuration of spins is denoted by $\sigma=(\sigma_i )_{i\in [n]}$. The {\em Ising model on $G_n$} is then defined by the Boltzmann\chs{-}\ch{Gibbs measure}
\begin{equation}\label{eq-boltzmann}
\mu_n(\sigma) = \frac{1}{Z_n(\beta, \underline{B})} \exp \left\{\beta \sum_{(i,j) \in E_n} \sigma_i \sigma_j + \sum_{i \in [n]} B_i \sigma_i\right\}.
\end{equation}
Here, $\beta \geq 0$ is the inverse temperature and $\underline{B}$ the vector of external magnetic fields $\underline{B}=(B_i)_{i \in [n]} \in \mathbb{R}^n$. For a uniform external field we write $B$ instead of $\underline{B}$, i.e., $B_i=B$ for all $i\in[n]$. The partition function $Z_n(\beta,\underline{B})$ is the normalization constant in~\eqref{eq-boltzmann}, i.e.,
\begin{equation}
Z_n(\beta,\underline{B}) = \sum_{\sigma \in \{-1,+1\}^n} \exp \left\{\beta \sum_{(i,j) \in E_n} \sigma_i \sigma_j + \sum_{i \in [n]} B_i \sigma_i\right\}.
\end{equation}
Note that the inverse temperature $\beta$ does not multiply the external field. This turns out to be technically convenient and does not change the results, because we are only looking at systems at equilibrium, and hence this would just be a reparametrization.
We let $\big<\cdot\big>_{\mu_n}$ denote the expectation with respect to the Ising measure $\mu_n$, i.e., for every bounded function $f: \{-1,+1\}^n \rightarrow \mathbb{R}$, we write
\begin{equation}
\big<f(\sigma)\big>_{\mu_n} = \sum_{\sigma \in \{-1,+1\}^n} f(\sigma) \mu_n(\sigma).
\end{equation}
\subsection{Thermodynamics}
We study the critical behavior of this Ising model by analyzing the following \col{two} thermodynamic quantities:
\begin{definition}[Thermodynamic quantities] \rm
For a graph sequence $(G_n)_{n\geq1}$,
\begin{enumerate}[(a)]
\item let $M_n(\beta,B)=\frac{1}{n} \sum_{i\in[n]} \langle\sigma_i\rangle_{\mu_n}$ be the magnetization per vertex. Then, the thermodynamic limit of the {\em magnetization} per vertex equals
\begin{equation}
M(\beta, B) \equiv \lim_{n\rightarrow \infty} M_n(\beta,B).
\end{equation}
\item let $\chi_n(\beta,B)= \frac{1}{n} \sum_{i,j\in[n]} \left( \langle\sigma_i\sigma_j\rangle_{\mu_n}-\langle\sigma_i\rangle_{\mu_n}\langle\sigma_j\rangle_{\mu_n}\right)$ denote the susceptibility. Then, the thermodynamic limit of the {\em susceptibility} equals
\begin{equation}
\chi(\beta,B) \equiv \lim_{n\rightarrow\infty}\chi_n(\beta,B).
\end{equation}
\end{enumerate}
\end{definition}
\col{The existence of the above limits for $n\rightarrow \infty$ has been proved in \cite[Theorem 1.5]{DomGiaHof10},
using the existence of the pressure per particle proved in \cite{DemMon10} and \cite[Theorem 1.4]{DomGiaHof10} \chs{ and using monotonicity properties}.}
We now define the critical temperature. We write $f(0^+)$ for $\lim_{x\searrow0}f(x)$.
\begin{definition}[Critical temperature] \rm
The critical temperature equals
\begin{equation}
\label{betac}
\beta_c \equiv \inf\{\beta: M(\beta,0^+)>0\}.
\end{equation}
\end{definition}
Note that such a $\beta_c$ can only exist in the thermodynamic limit, but not for the magnetization of a finite graph: for finite $n$, the function $B\mapsto M_n(\beta,B)$ is continuous and the measure $\mu_n$ at $B=0$ is invariant under the spin flip $\sigma\mapsto-\sigma$, so that always $M_n(\beta,0^+)=0$. The critical behavior can now be expressed in terms of the following critical exponents. We write $f(x) \asymp g(x)$ if the ratio $f(x)/g(x)$ is bounded away from 0 and infinity for the specified limit.
\begin{definition}[Critical exponents]\label{def-CritExp} \rm
The critical exponents $\boldsymbol{\beta,\delta,\gamma,\gamma'}$
are defined by:
\begin{align}
M(\beta,0^+) &\asymp (\beta-\beta_c)^{\boldsymbol{\beta}}, &&{\rm for\ } \beta \searrow \beta_c; \label{eq-def-critexp-beta}\\
M(\beta_c, B) &\asymp B^{1/\boldsymbol{\delta}}, &&{\rm for\ } B \searrow 0;
\label{eq-def-critexp-delta}\\
\chi(\beta, 0^+) &\asymp (\ch{\beta_c-\beta})^{-\boldsymbol{\gamma}}, &&{\rm for\ } \beta \nearrow \beta_c; \label{eq-def-critexp-gamma}\\
\chi(\beta, 0^+) &\asymp (\beta-\beta_c)^{-\boldsymbol{\gamma'}}, &&{\rm for\ } \beta \searrow \beta_c.\label{eq-def-critexp-gamma'}
\end{align}
\begin{remark}
We emphasize that there is a difference between the symbol $\beta$ for the inverse temperature and the bold symbol $\boldsymbol{\beta}$ for the critical exponent in~\eqref{eq-def-critexp-beta}. Both uses for $\beta$ are standard in the literature, so we decided to stick to this notation.
Also note that these are stronger definitions than usual. E.g., normally the critical exponent $\boldsymbol{\beta}$ is defined as that value such that
\begin{equation}
M(\beta,0^+) = (\beta-\beta_c)^{\boldsymbol{\beta}+o(1)},
\end{equation}
where $o(1)$ is a function tending to zero for $\beta\searrow\beta_c$.
\end{remark}
\end{definition}
\subsection{\ch{Locally tree-like random graphs}}
We study the critical behavior of the Ising model on graph sequences $(G_n)_{n\geq1}$ that are assumed to be {\em locally like a homogeneous random tree}, to have a {\em power-law degree distribution} and to be {\em uniformly sparse}. We give the formal definitions of these assumptions below, but we first introduce some notation.
Let the random variable $D$ have distribution $P=(p_k)_{k\geq1}$, i.e., $\mathbb P[D=k]=p_k,$ for $k=1,2,\ldots$. We define its {\em forward degree distribution} by
\begin{equation} \label{eq-defrho}
\rho_k = \frac{(k+1) p_{k+1}}{\mathbb E[D]},
\end{equation}
where we assume that $\mathbb E[D]<\infty$. Let $K$ be a random variable with $\mathbb P[K=k]=\rho_k$ and write $\nu=\mathbb E[K]$. The random rooted tree $\mathcal{T}(D,K,\ell)$ is a branching process with $\ell$ generations, where the root offspring is distributed as $D$ and the vertices in each subsequent generation have offspring that are independent of the root offspring and are {\em independent and identically distributed} (i.i.d.) copies of the random variable $K$. We write $\mathcal{T}(K,\ell)$ when the offspring at the root has the same distribution as $K$.
We write that an event $\mathcal{A}$ holds \emph{almost surely} (a.s.) if $\mathbb P[\mathcal{A}]=1$.
The ball of radius $r$ around vertex $i$, $B_i(r)$, is defined as the graph induced by the vertices at graph distance at most $r$ from vertex $i$.
For two rooted trees $\mathcal{T}_1$ and $\mathcal{T}_2$, we write that $\mathcal{T}_1 \simeq \mathcal{T}_2$, when there exists a bijective map from the vertices of $\mathcal{T}_1$ to those of $\mathcal{T}_2$ that preserves the adjacency relations.
\begin{definition}[Local convergence to homogeneous \col{random} trees] \label{ass-convtree}
Let $\mathbb P_n$ denote the law induced on the ball $B_i(t)$ in $G_n$ centered at a uniformly chosen vertex $i\in[n]$. We say that the graph sequence $(G_n)_{n\geq 1}$ is {\em locally tree-like} with asymptotic degree \ch{distribution $P$} when, for any rooted tree $\mathcal{T}$ with $t$ generations
\begin{equation}
\lim_{n\rightarrow\infty} \mathbb P_n [B_i(t) \simeq \mathcal{T}] = \mathbb P[\mathcal{T}(D,K,t) \simeq \mathcal{T}].
\end{equation}
\end{definition}
Note that this implies \ch{in particular} that the degree of a uniformly chosen vertex of the graph converges in distribution to $D$.
\begin{definition}[Uniform sparsity] \label{ass-unisparse}
We say that the graph sequence $(G_n)_{n \geq 1}$ is {\em uniformly sparse} when, a.s.,
\begin{equation}
\lim_{\ell\rightarrow\infty} \limsup_{n\rightarrow\infty} \frac{1}{n} \sum_{i \in [n]} D_i \mathds 1_{\{ D_i \geq \ell\}} = 0,
\end{equation}
where $D_i$ is the degree of vertex $i$ and $\mathds 1_{\mathcal{A}}$ denotes the indicator of the event $\mathcal{A}$.
\end{definition}
Note that uniform sparsity follows if $\frac{1}{n} \sum_{i \in [n]} D_i\to \mathbb E[D]$ a.s.,
by the weak convergence of the degree of a uniform vertex.
\ch{We pay special attention to cases where the degree distribution satisfies a power law, as defined
in the following definition. For power-law degree distributions, not all moments of the degrees are finite,
which has severe consequences for the critical behavior of the Ising model.}
\begin{definition}[Power laws] \label{ass-degdist}
We say that the distribution $P=(p_k)_{k\geq1}$ obeys a {\em power law with exponent $\tau$} when there exist constants $C_p>c_p>0$ such that, for all $k=1,2,\ldots$,
\begin{equation} \label{eq-ppowerlaw}
c_p k^{-(\tau-1)} \leq \sum_{\ell\geq k} p_\ell \leq C_p k^{-(\tau-1)}.
\end{equation}
\end{definition}
\ch{\subsection{The random Bethe tree}
We next extend our \col{definitions} to the random tree $\mathcal{T}(D,K,\infty)$, which is an infinite random tree.
One has to be very careful in defining a Gibbs measure on this tree,
since trees suffer from the fact that the boundaries of intrinsic \chs{(i.e., graph distance)} balls in them have \chs{a} size that is comparable to their volume.
We can adapt the construction of the Ising model on the regular tree in \cite{Bax82} to this setting, as we now explain.
For $\beta\geq 0, B\in {\mathbb R}$, let $\mu_{t,\beta,B}^{+/f}$ be the Ising model on $\mathcal{T}(D,K,t)$ with
$+$ respectively free boundary conditions.
For a function $f$ that only depends on $\mathcal{T}(D,K,m)$ with $m\leq t$, we let
\begin{equation}
\langle f\rangle_{\mu_{\beta,B}^{+/f}}=\lim_{t\rightarrow \infty} \langle f\rangle_{\mu_{t,\beta,B}^{+/f}}.
\end{equation}
Below, we argue that these limits indeed exist and are equal for $B>0$.
This defines a unique infinite volume Gibbs measure $\mu_{\beta,B}^{+/f}$ on the random Bethe tree.
The quantity $M(\beta,B)$ is the expected root magnetization for this infinite volume Gibbs measure on the random Bethe
tree. Our results also apply to this setting under the assumption that the degree of the root obeys a power law
in \eqref{eq-ppowerlaw} or that $\mathbb E[K^3]<\infty$.
The critical value $\beta_c$ for the root magnetization is again defined by \eqref{betac}.}
\subsection{Main results}
We now present our main results which describe the critical behavior of the Ising model on
power-law random graphs \col{and random trees with power-law offspring distribution}.
We first give an expression for the critical temperature:
\begin{theorem}[Critical temperature]
\label{thm-CritTemp}
Assume that the random graph sequence $(G_n)_{n\geq1}$ is locally tree-like with asymptotic degree
distribution $P$ \chs{and} is uniformly sparse.
Then, a.s., the critical temperature $\beta_c$ \ch{of $(G_n)_{n\geq1}$ and of the random Bethe tree $\mathcal{T}(D,K,\infty)$ equal\chs{s}}
\begin{equation}
\beta_c={\rm atanh}(1/\nu).
\end{equation}
\end{theorem}
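As an illustration, consider the $r$-regular random graph, for which $D=r$ and $K=r-1$ a.s., so that $\nu=r-1$ and
\begin{equation}
\beta_c={\rm atanh}\Big(\frac{1}{r-1}\Big),
\end{equation}
recovering the classical critical inverse temperature of the Ising model on the Bethe lattice with coordination number $r$ \cite{Bax82}.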
Near the critical temperature the behavior of the Ising model can be described by critical exponents. The values of these critical exponents for different values of $\tau$ are stated in the following theorem:
\begin{theorem}[Critical exponents] \label{thm-CritExp}
Assume that the random graph sequence $(G_n)_{n\geq1}$ is locally tree-like with asymptotic degree distribution $P$ that obeys $\ch{\mathbb E[K^3]<\infty}$ or a power law with exponent $\tau\in(3,5]$, and is uniformly sparse, \ch{or that the random Bethe tree obeys $\mathbb E[K^3]<\infty$ or a power law with exponent $\tau\in(3,5]$.} Then, the critical exponents $\boldsymbol{\beta,\delta}$ and $\boldsymbol{\gamma}$ defined in Definition~\ref{def-CritExp} \ch{exist and
satisfy}
\begin{center}
{\renewcommand{\arraystretch}{1.2}
\renewcommand{\tabcolsep}{1cm}
\begin{tabular}[c]{c|cc}
 & $\tau\in(3,5)$ & $\mathbb E[K^3]<\infty$ \\
\hline
$\boldsymbol{\beta}$ & $1/(\tau-3)$ & $1/2$ \\
$\boldsymbol{\delta}$ & $\tau-2$ & $3$\\
$\boldsymbol{\gamma}$
& $1$ & $1$ \\
\ch{$\boldsymbol{\gamma'}$}
& \ch{$\geq 1$} & \ch{$\geq 1$}\\
\end{tabular}
}
\end{center}
For the boundary case $\tau=5$ there are logarithmic corrections for $\boldsymbol{\beta}=1/2$ and $\boldsymbol{\delta}=3$, but not for $\boldsymbol{\gamma}=1$ \ch{and for the lower bound $\boldsymbol{\gamma'}\geq 1$.} Indeed, \eqref{eq-def-critexp-gamma}
holds with $\boldsymbol{\gamma}=1$ \ch{and the lower bound in \eqref{eq-def-critexp-gamma'}
holds with $\boldsymbol{\gamma'}=1$,} while
\begin{equation}
\label{log-corr-M-tau5}
M(\beta,0^+) \asymp \Big(\frac{\beta-\beta_c}{\log{1/(\beta-\beta_c)}}\Big)^{1/2} \quad {\rm for\ } \beta \searrow \beta_c,
\qquad
M(\beta_c, B) \asymp \Big(\frac{B}{\log(1/B)}\Big)^{1/3} \quad {\rm for\ } B \searrow 0.
\end{equation}
\end{theorem}
\ch{Unfortunately, we cannot prove that the critical exponent $\boldsymbol{\gamma'}$ exists; see the discussion
in the next section for more details on this issue.}
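To make the table concrete: for $\tau=4$, Theorem~\ref{thm-CritExp} gives $\boldsymbol{\beta}=1$ and $\boldsymbol{\delta}=2$, so that $M(\beta,0^+)\asymp \beta-\beta_c$ and $M(\beta_c,B)\asymp B^{1/2}$, which should be contrasted with the mean-field behavior $M(\beta,0^+)\asymp(\beta-\beta_c)^{1/2}$ and $M(\beta_c,B)\asymp B^{1/3}$ valid when $\mathbb E[K^3]<\infty$.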
\subsection{Discussion and open problems}
\label{sec-disc-op}
In this section, we discuss relations to the literature, possible extensions and open problems.
\paragraph{The Ising model on random trees and random graphs.}
\col{A key idea to analyze \chs{the} Ising model on random graphs is to use the fact that expectations of
local quantities coincide
with the correspond\chs{ing} values for the Ising model on suitable random trees \cite{DemMon10}.
Statistical mechanics models on deterministic trees have been studied extensively in the
literature (see for instance \cite{Bax82,Lyo89} \chs{and its relation to}
``broadcasting on trees'' \chs{in }\cite{Evans00,MezMon06}\chs{)}.
The analysis on random trees is more recent and has been triggered by the study
of models on random graphs. Extensions beyond the Ising model,
e.g.\chs{, the} Potts model, pose new challenges \cite{Dem12}.
}
\paragraph{Relation to the physics literature.}
Theorem \ref{thm-CritExp} confirms the predictions in~\cite{DorGolMen02,LeoVazVesZec02}.
\col{For $\tau\leq 3$, one has $\nu=\infty$ and hence $\beta_c=0$ by Theorem \ref{thm-CritTemp}, so that the critical behavior
coincides with the infinite temperature limit. Since \ch{in this case} there is no phase transition at finite temperature,
we do not study the critical behavior \chs{here}.
For $\tau=5$,} in \cite{DorGolMen02},
also the logarithmic correction for $\boldsymbol{\beta}=1/2$ \ch{in \eqref{log-corr-M-tau5}
is computed,} but not that of $\boldsymbol{\delta}=3$.
\paragraph{The critical exponents $\boldsymbol{\gamma'}$ and other critical exponents.}
\col{
\ch{Theorem \ref{thm-CritExp} only gives a lower bound on the critical exponent $\boldsymbol{\gamma'}$.
It is predicted that $\boldsymbol{\gamma'}=1$ for all $\tau>3$, while
there are also predictions for other critical exponents.
For instance the critical exponent $\boldsymbol{\alpha'}$ for the specific heat in the low-temperature phase
satisfies $\boldsymbol{\alpha'}=0$ when $\mathbb E[K^3]<\infty$ and
$\boldsymbol{\alpha'}=(\tau-5)/(\tau-3)$ in the power-law case with $\tau\in(3,5)$
(see \cite{DorGolMen02,LeoVazVesZec02}).
We prove the lower bound $\boldsymbol{\gamma'}\geq 1$ in Section \ref{sec-gamma'}
below, and we also present a heuristic argument that $\boldsymbol{\gamma'}\leq 1$ holds.
The critical exponent $\boldsymbol{\alpha'}$ for the specific heat is beyond our current methods,
partly since we are not able to relate the specific heat on a random graph to that on the random Bethe tree.}
}
\paragraph{Light tails.}
\ch{The case $\mathbb E[K^3]<\infty$ includes all power-law degree distributions with $\tau>5$,
but also cases where $P$ does {\em not} obey a power law. This means, e.g., that Theorem \ref{thm-CritExp}
also identifies the critical exponents for the Erd{\H o}s-R\'enyi random graph where the degrees have an asymptotic
Poisson distribution.}
\paragraph{Inclusion of slowly varying functions.}
In Definition \ref{ass-degdist}, we have assumed that the asymptotic degree distribution
obeys a perfect power law. Alternatively, one could assume that
$\sum_{\ell\geq k} p_\ell \asymp L(k)k^{-(\tau-1)}$ for some function $k\mapsto L(k)$
that is slowly varying at $k=\infty$.
For $\tau>5$ and any slowly varying function,
we still have $\mathbb E[K^3]<\infty$, so the results do not change and Theorem \ref{thm-CritExp}
remains to hold. For $\tau\in(3,5]$, we expect
slowly varying corrections to the critical behavior in Theorem \ref{thm-CritExp}.
For example, $\mathbb E[K^3]<\infty$ for $\tau=5$ and $L(k)=(\log{k})^{-2}$,
so that the logarithmic corrections present for $\tau=5$ disappear.
\ch{\paragraph{Beyond the root magnetization for the random Bethe tree.}
We have identified the critical value and some critical exponents for the root magnetization
on the random Bethe tree. The random Bethe tree is a so-called \emph{unimodular} graph,
which is a rooted graph that often arises as the local weak limit of a sequence of graphs
(in this case, the random graphs $(G_n)_{n\geq 1}$). See \cite{AldLyo07, BenLyoSch12}
for more background on unimodular graphs and trees, in particular, $\mathcal{T}(D,K,\infty)$ is the
so-called \emph{unimodular
Galton-Watson tree} as proved by Lyons, Pemantle and Peres in
\cite{LyoPemPer95}. One would expect that the
magnetization of the graph, which can be defined by
\begin{equation}
M_T(\beta,B)=\lim_{t\rightarrow \infty} \frac{1}{|B_{\phi}(t)|}\sum_{v\in B_{\phi}(t)} \langle\sigma_v\rangle,
\end{equation}
where $B_{\phi}(t)$ is the graph induced by vertices at graph distance at most
$t$ from the root $\phi$ and
$|B_{\phi}(t)|$ is the number of elements in it, also converges a.s.\ to a limit.
However, we expect that $M_T(\beta,B)\neq M(\beta,B)$ due to the special role of
the root $\phi$, which vanishes in the above limit. One would instead expect that
$M_T(\beta,B)$ equals the root magnetization of the tree where each vertex has degree
distributed as $K+1$. Our results show that also $M_T(\beta,B)$ has the same critical temperature
and critical exponents as $M(\beta,B)$.
}
\ch{\paragraph{Relation to the Curie-Weiss model.}}
Our results show that locally tree-like random graphs with finite fourth moment of the degree distribution are
in the same universality class as the mean-field model on the complete graph, which is the Curie-Weiss model.
We further believe that the Curie-Weiss model should enter as the limit of $r\rightarrow \infty$ for the $r$-regular
random graph, in the sense that these have the same critical exponents (as we already know), as well as that all
constant\chs{s} arising in asymptotics match up nicely \col{(cf. the discussion at the end of Section \ref{sec-gamma'})}.
Further, our results show that for $\tau\in(3,5]$, the Ising model
has \emph{different} critical exponents \chs{than} the ones for the Curie-Weiss model, so these constitute a set of different
universality classes.
\paragraph{Organization of the article.}
The remainder of this article is organized as follows. We start with some preliminary computations
in Section~\ref{sec-prel}. In Section~\ref{sec-CritTemp} we prove that the critical temperature is as stated in Theorem~\ref{thm-CritTemp}.
The proof that the exponents stated in Theorem~\ref{thm-CritExp} are indeed the correct values of
$\boldsymbol{\beta}$ and $\boldsymbol{\delta}$ is given in Section~\ref{sec-CritExpBetaDelta}.
\ch{The value of $\boldsymbol{\gamma}$
is identified in Section~\ref{sec-CritExpChi}, where also the lower bound on
$\boldsymbol{\gamma'}$ is proved and a heuristic is presented for the matching upper bound.}
\section{Preliminaries}
\label{sec-prel}
An important role in our analysis is played by the distributional recursion
\begin{equation} \label{eq-recursion}
h^{(t+1)} \stackrel{d}{=} B + \sum_{i=1}^{K_t} \xi(h_i^{(t)}),
\end{equation}
where
\begin{equation} \label{eq-defxi}
\xi(h) = {\rm atanh} (\hat{\beta}\tanh(h)),
\end{equation}
with $\hat{\beta}=\tanh(\beta)$, and where $h^{(0)} \equiv B$, $(K_t)_{t \geq 1}$, are i.i.d.\ with distribution $\rho$
and $(h_i^{(t)})_{i\geq1}$ are i.i.d.\ copies of $h^{(t)}$ independent of $K_t$. In \cite[Proposition~1.7]{DomGiaHof10},
we have proved that this recursion has a unique fixed point $h$ for all $\beta\geq0$ and $B>0$.
Whenever we write $h$ or $h_i$, this is a random variable distributed as the fixed point of~\eqref{eq-recursion}. Since $h$ is a fixed point, we may replace $h$ by $B+\sum_{i=1}^K\xi(h_i)$ in distribution inside expectations, and we often do so.
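This recursion also explains heuristically why $\beta_c={\rm atanh}(1/\nu)$ in Theorem~\ref{thm-CritTemp}: since $\xi(x)\approx\hat{\beta}x$ for small $x$ (see Lemma~\ref{lem-boundatanh} below), taking expectations in~\eqref{eq-recursion} gives, to first order, $\mathbb E[h^{(t+1)}]\approx B+\hat{\beta}\nu\,\mathbb E[h^{(t)}]$. This linearized recursion has fixed point $B/(1-\hat{\beta}\nu)$, which vanishes as $B\searrow0$ precisely when $\hat{\beta}\nu<1$, i.e., when $\beta<{\rm atanh}(1/\nu)$.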
\col{This fixed point $h$ yields the random field acting on the root of the random Bethe tree \col{$\mathcal{T}(D,K,\infty)$} due to
its offsprings. In particular we can use the fixed point $h$ to give an explicit expression for the magnetization:}
\begin{proposition}[Magnetization]\label{prop-Magnetization}
Assume that the random graph sequence $(G_n)_{n\geq1}$ is locally tree-like with asymptotic degree distribution $P$ that obeys $\ch{\mathbb E[K]<\infty}$
or a power law with exponent $\tau\in(2,3)$ and is uniformly sparse. Then, a.s., for all $\beta\geq0$ and $B>0$, the thermodynamic limit of the {\em magnetization} per vertex exists and is given by
\begin{equation}
M(\beta, B) = \mathbb E\Big[ \tanh\Big(B+\sum_{i=1}^{D} \xi(h_i)\Big)\Big],
\end{equation}
where
\begin{itemize}
\item[\rm (i)]
$D$ has distribution $P$;
\item[\rm (ii)]
$(h_i)_{i\geq1}$ are i.i.d.\ copies of the fixed point of the distributional recursion~\eqref{eq-recursion};
\item[\rm (iii)]
$D$ and $(h_i)_{i\geq1}$ are independent.
\end{itemize}
\ch{The same holds on the random Bethe tree $\mathcal{T}(D,K,\infty)$.}
\end{proposition}
This \ch{proposition} was proved in~\cite[\ch{Corollary 1.6(a)}]{DomGiaHof10} by differentiating the expression for the
thermodynamic limit of the pressure per particle that was first obtained. Here we present a more intuitive proof:
\begin{proof}[Proof of Proposition~\ref{prop-Magnetization}]
Let $\ch{\phi}$ be a vertex picked uniformly at random from $[n]$ and $\mathbb E_n$ be the corresponding expectation. Then,
\begin{equation}
M_n(\beta,B) = \frac1n \sum_{i=1}^n \langle\sigma_i\rangle = \mathbb E_n[\langle\sigma_{\ch{\phi}}\rangle].
\end{equation}
Denote by $\langle\cdot\rangle^{\ell,+/f}$ the \col{expectations with respect to the} Ising measure with $+$/free boundary conditions on vertices at graph distance $\ell$ from $\ch{\phi}$. Note that $\langle \sigma_{\ch{\phi}}\rangle^{\ell,+/f}$ only depends on the spins of vertices in $B_{\ch{\phi}}(\ell)$. By the GKS inequality~\cite{KelShe68},
\begin{equation}
\langle\sigma_{\ch{\phi}}\rangle^{\ell,f}\leq\langle\sigma_{\ch{\phi}}\rangle\leq\langle\sigma_{\ch{\phi}}\rangle^{\ell,+}.
\end{equation}
Taking the limit $n\rightarrow\infty$, the ball $B_{\ch{\phi}}(\ell)$ has the same distribution as the random tree $\mathcal{T}(D,K,\ell)$, because of the locally tree-like nature of the graph sequence. Conditioned on the tree $\mathcal{T}$, we can prune the tree, see~\cite[Lemma~4.1]{DemMon10}, to obtain that
\begin{equation}
\langle\sigma_{\ch{\phi}}\rangle^{\ell,f} = \tanh\Big(B+\sum_{i=1}^{D} \xi(h_i^{(\ell-1)})\Big).
\end{equation}
Similarly,
\begin{equation}
\langle\sigma_{\ch{\phi}}\rangle^{\ell,+} = \tanh\Big(B+\sum_{i=1}^{D} \xi(h_i^{'(\ell-1)})\Big),
\end{equation}
where $h_i^{'(t+1)}$ also satisfies~\eqref{eq-recursion}, but has initial value $h^{'(0)}=\infty$. Since this recursion has a unique fixed point~\cite[Prop\chs{osition}~1.7]{DomGiaHof10}, we prove the proposition by taking the limit $\ell\rightarrow\infty$ and taking the expectation over the tree $\mathcal{T}(D,K,\infty)$.
\end{proof}
To study the critical behavior we \chs{investigate the function $\xi(x)={\rm atanh}(\hat{\beta} \tanh x)$ and} prove two important bounds that play a crucial role throughout this paper:
\begin{lemma}[\ch{Properties of $x\mapsto \xi(x)$}]
\label{lem-boundatanh}
For all $x,\beta\geq0$,
\begin{equation}
\hat{\beta} x -\frac{\hat{\beta}}{3(1-\hat{\beta}^2)}x^3 \leq \xi(x) \leq \hat{\beta} x.
\end{equation}
The upper bound holds with strict inequality if $x,\beta>0$.
\end{lemma}
\begin{proof}
By Taylor's theorem,
\begin{equation}
\xi(x) = \xi(0)+\xi'(0)x+\xi''(\zeta)\frac{x^2}{2},
\end{equation}
for some $\zeta \in (0,x)$. It is easily verified that $\xi(0)=0$,
\begin{equation}
\xi'(0) = \frac{\hat{\beta} (1-\tanh^2 x)}{1-\hat{\beta}^2\tanh^2 x}\bigg|_{x=0} = \hat{\beta},
\end{equation}
and
\begin{equation}\label{eq-secondderxi}
\xi''(\zeta) = -\frac{2 \hat{\beta} (1-\hat{\beta}^2) (\tanh \zeta) (1-\tanh^2 \zeta)}{(1-\hat{\beta}^2\tanh^2 \zeta)^2} \leq 0,
\end{equation}
thus proving the upper bound. If $x,\beta>0$ then also $\zeta>0$ and hence the above holds with strict inequality.
For the lower bound, note that $\xi''(0)=0$ and
\begin{align}\label{eq-thirdderxi}
\xi'''(\zeta) &= -\frac{2 \hat{\beta} (1-\hat{\beta}^2) (1-\tanh^2 \zeta)}{(1-\hat{\beta}^2\tanh^2 \zeta)^3}\left(1-3(1-\hat{\beta}^2)\tanh^2\zeta-\hat{\beta}^2\tanh^4\zeta\right) \nonumber\\
&\geq -\frac{2 \hat{\beta}(1-\hat{\beta}^2)(1-\tanh^2\zeta)}{(1-\hat{\beta}^2)^2(1-\tanh^2\zeta)}=-\frac{2 \hat{\beta}}{1-\hat{\beta}^2}.
\end{align}
Thus, for some $\zeta\in(0,x)$,
\begin{equation}
\xi(x) = \xi(0)+\xi'(0)x +\xi''(0)\frac{x^2}{2}+ \xi'''(\zeta)\frac{x^3}{3!} \geq \hat{\beta} x -\frac{2 \hat{\beta}}{1-\hat{\beta}^2}\frac{x^3}{3!}.
\end{equation}
\end{proof}
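In fact, a Taylor expansion around $x=0$ gives $\xi(x)=\hat{\beta}x-\tfrac{1}{3}\hat{\beta}(1-\hat{\beta}^2)x^3+O(x^5)$, so the cubic correction in the lower bound has the correct order in $x$; only its constant, $1/(1-\hat{\beta}^2)$ in place of $(1-\hat{\beta}^2)$, is not sharp.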
We next study tail probabilities of $(\rho_k)_{k\geq 0}$. Here, for a probability distribution
$(q_k)_{k\geq 0}$ on the integers, we write $q_{\geq k}=\sum_{\ell \geq k}q_\ell$.
\begin{lemma}[Tail probabilities of $(\rho_k)_{k\geq 0}$]
\label{lem-tail-rho}
Assume that \eqref{eq-ppowerlaw} holds for some $\tau>2$.
Then, \col{for the size-biased distribution defined in (\ref{eq-defrho})},
there exist $\col{0 <} c_{\rho}\leq C_{\rho}$ such that, for all $k\geq 1$,
\begin{equation}
c_{\rho} k^{-(\tau-2)}\leq \rho_{\geq k} \leq C_{\rho} k^{-(\tau-2)}.
\end{equation}
\end{lemma}
\begin{proof}
The lower bound follows directly from the fact that $\rho_{\geq k}\geq \ch{ (k+1)p_{\geq k+1}/\mathbb E[D]}$,
and \eqref{eq-ppowerlaw}.
For the upper bound, we note that for any probability distribution $(q_k)_{k\geq 0}$ on the \col{non-negative} integers,
we have the partial summation identity
\begin{equation}
\label{partial-summation}
\sum_{k\geq 0} q_k f(k) =f(0)+\sum_{\ell\geq 1} q_{\geq \ell} [f(\ell)-f(\ell-1)],
\end{equation}
provided that either $f(k)q_{\geq k}\rightarrow 0$ \ch{as $k\rightarrow \infty$ or
$k\mapsto f(k)$} is either non-decreasing or non-increasing. Indeed,
\begin{equation}
\sum_{k\geq 0} q_k f(k)=f(0)+\sum_{k\geq 0} q_k [f(k)-f(0)]
=\ch{f(0)+}\sum_{k\geq 0} q_k \sum_{\ell=1}^k [f(\ell)-f(\ell-1)].
\end{equation}
Interchanging the summation order (which is allowed by our assumptions) p\chs{r}ovides the proof.
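As a simple check of \eqref{partial-summation}: taking $f(k)=k$ recovers the familiar identity $\mathbb E[K]=\sum_{\ell\geq1}\rho_{\geq\ell}$.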
We start by proving bounds on $\rho_{\geq k}$. We rewrite
\begin{equation}
\rho_{\geq k} = \sum_{\ell\geq k} \frac{(\ell+1)p_{\ell+1}}{\mathbb E[D]}
=\sum_{\chs{\ell}\geq 0} f(\ell)p_{\ell+1},
\end{equation}
where $f(\chs{\ell})=(\ell+1)\indic{\chs{\ell}\geq k}/\mathbb E[D]$. By \eqref{partial-summation}
\col{with $q_{\chs{\ell}}=p_{\chs{\ell}+1}$},
\ch{for $k\geq 1$ so that $f(0)=0$,}
\begin{equation}
\rho_{\geq k} =\sum_{\col{\ell\geq 1}} [f(\ell)-f(\ell-1)]p_{\geq \ell+1} =\frac{\col{(k+1)}p_{\geq k+1}}{\mathbb E[D]}
+\frac{1}{\mathbb E[D]}\sum_{\ell\geq \col{k+1}} p_{\geq \ell+1}.
\end{equation}
From~\eqref{eq-ppowerlaw}, it follows that
\begin{equation}
\label{rho-bound}
\rho_{\geq k}
\leq \frac{\ch{C_p}}{\mathbb E[D]}(k+1)^{-(\tau-2)}
+\sum_{\ell\geq k+1} \frac{\ch{C_p}}{\mathbb E[D]}(\ell+1)^{-(\tau-\col{1})},
\end{equation}
so that there exists a constant $C_{\rho}$ such that
\begin{equation}
\label{rho-tail-bds}
\rho_{\geq k} \leq C_{\rho} k^{-(\tau-2)}.
\end{equation}
\end{proof}
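In words, size-biasing shifts the power-law exponent down by one: if $p_{\geq k}\asymp k^{-(\tau-1)}$, then $\rho_{\geq k}\asymp k^{-(\tau-2)}$. In particular, $\nu=\mathbb E[K]$ is finite precisely when $\tau>3$, which is why the critical temperature in Theorem~\ref{thm-CritTemp} is finite only in this regime.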
When computing the critical exponents for $\tau\in(3,5]$, we often split the analysis into two cases: one where $K$ is small
and one where $K$ is large. For this we need bounds on truncated moments of $K$ \ch{which are the content of the next lemma.}
\begin{lemma}[Truncated moments of $K$]
\label{lem-truncmoment}
\col{Assume that \eqref{eq-ppowerlaw} holds for some $\tau>2$. Then}
there exist \ch{constants} $C_{a,\tau} \col{>0}$ such that,
\ch{as $\ell \rightarrow \infty,$}
\begin{equation}
\mathbb E\left[K^a \mathds 1_{\{K \leq \ell\}}\right] \leq
\begin{cases}
C_{a,\tau}\ell^{a-(\tau-2)} &\text{\ch{when} } a > \tau-2,\\
C_{\tau-2,\tau}\log \ell &\text{\ch{when} } a=\tau-2,
\end{cases}
\end{equation}
\ch{and, when $a<\tau-2$,}
\begin{equation}
\mathbb E\left[K^a \mathds 1_{\{K > \ell\}}\right] \leq C_{a,\tau}\ell^{a-(\tau-2)}.
\end{equation}
Finally, when $\tau=5$, there exists a constant $c_{3,5} \col{>0}$ such that,
\ch{as $\ell \rightarrow \infty,$}
\begin{equation}
\label{log-correct-lb-tau5}
\mathbb E\left[K(K-1)(K-2)\indic{K\leq \ell}\right]\geq c_{3,5}\log{\ell}.
\end{equation}
\end{lemma}
\begin{proof}
We start by bounding the truncated moments of $K$. We rewrite, using
\eqref{partial-summation} and with $f(k)=k^a \indic{k\leq \ell}$,
\begin{equation}
\mathbb E\left[K^a \indic{K \leq \ell}\right]
=\sum_{k=0}^{\infty} f(k)\rho_k
=\sum_{k=\col{1}}^{\infty} [f(k)-f(k-1)]\rho_{\geq k}
\leq \sum_{k=\col{1}}^{\col{\lfloor{\ell}\rfloor}}[k^a-(k-1)^a]\rho_{\geq k}.
\end{equation}
Using $k^a-(k-1)^{a}=a\int_{k-1}^k x^{a-1}dx\leq a k^{a-1}$, we arrive at
\begin{equation}
\mathbb E\left[K^a \indic{K \leq \ell}\right] \leq a C_{\rho}
\sum_{k=1}^{\lfloor\ell\rfloor} k^{a-1} \col{k}^{-(\tau-2)}
\leq a C_{\rho}\sum_{k=\col{1}}^{\lfloor\ell\rfloor+1} k^{a-(\tau-1)}.
\end{equation}
Note that \ch{$k\mapsto k^{a-(\tau-1)}$ is either increasing or decreasing.} Hence,
\begin{equation}
\sum_{k=\col{1}}^{\lfloor\ell\rfloor+1} k^{a-(\tau-1)} \leq \int_{1}^{\ell+2} k^{a-(\tau-1)} {\rm d} k.
\end{equation}
For $a>\tau-2$,
\begin{equation}
\int_{1}^{\ell+2} k^{a-(\tau-1)} {\rm d} k \leq \ch{\frac{2}{a+2-\tau}\ell^{a-(\tau-2)},}
\end{equation}
whereas for $a=\tau-2$,
\begin{equation}
\int_{1}^{\ell+2} k^{a-(\tau-1)} {\rm d} k \leq 2 \log \ell.
\end{equation}
Similarly, for $a<\tau-2$,
\begin{align}
\mathbb E\left[K^a \indic{K > \ell}\right] &=\lceil \ell \rceil^{a} \rho_{\geq \ell}
+\sum_{k>\ell} [k^a-(k-1)^a] \rho_{\geq k}\\
&\leq C_{\rho}\lceil \ell \rceil^{a-(\tau-2)} +
aC_{\rho}\sum_{k=\lfloor\ell\rfloor+1}^\infty k^{\ch{a-1}} (k+1)^{-(\tau-2)} \leq C_{a,\tau} \ell^{a-(\tau-2)}.\nonumber
\end{align}
Finally, we prove \eqref{log-correct-lb-tau5}, for which we compute with $f(k)=k(k-1)(k-2)$,
\begin{equation}
\mathbb E\left[K(K-1)(K-2)\indic{K\leq \ell}\right]
=\sum_{k=\col{1}}^{\infty} [f(k)-f(k-1)]\sum_{l=k}^{\ell}\rho_l
=\sum_{k=\col{3}}^{\infty} 3(k-1)(k-2)\sum_{l=k}^{\ell}\rho_l.
\end{equation}
We bound this from below by
\begin{equation}
\mathbb E\left[K(K-1)(K-2)\indic{K\leq \ell}\right]
\geq \sum_{k=0}^{\sqrt{\ell}} 3(k-1)(k-2)[\rho_{\geq k}-\rho_{\geq \ell}].
\end{equation}
By Lemma \ref{lem-tail-rho}, for $\tau=5$, the contribution due to $\rho_{\geq \ell}$ is at \ch{most}
\begin{equation}
\ell^{3/2} \rho_{\geq \ell}\leq C_{\rho} \ell^{-3/2}=o(1),
\end{equation}
while the contribution due to $\rho_{\geq k}$ and using $3(k-1)(k-2)\geq k^2$ for every $k\geq 4$, is at least
\begin{equation}
c_{\rho}\sum_{k=4}^{\sqrt{\ell}} k^{-1}\geq c_{\rho}\int_4^{\sqrt{\ell}+1}\frac{ {\rm d} x}{x}\ch{=} c_{\rho} [\log{(\sqrt{\ell}+1)}-\log{4}],
\end{equation}
which proves the claim by \ch{choosing the constant $c_{3,5}$ appropriately.}
\end{proof}
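The role of these bounds can already be anticipated: for $\tau\in(3,5)$ the third moment of $K$ is infinite, but its truncation at level $\ell$ grows only like $\ell^{5-\tau}$, and choosing $\ell$ of the order $1/\mathbb E[\xi(h)]$ converts this growth into the exponent $\tau-2$ that appears in Theorem~\ref{thm-CritExp}.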
\section{Critical temperature\label{sec-CritTemp}}
In this section we compute the critical temperature.
\begin{proof}[Proof of Theorem~\ref{thm-CritTemp}]
Let $\beta^*={\rm atanh}(1/\nu)$. We first show that if $\beta < \beta^*$, then
\begin{equation}
\lim_{B\searrow0} M(\beta,B) = 0,
\end{equation}
which implies that $\beta_c\geq \beta^*$. Later, we show that if $\lim_{B\searrow0}M(\beta,B) = 0$ then $\beta \leq \beta^*$, implying that $\beta_c\leq\beta^*$.
\paragraph{Proof of $\beta_c\geq \beta^*$.} Suppose that $\beta<\beta^*$. Then, by the fact that $\tanh x \leq x$ and Wald's identity,
\begin{equation}\label{eq-crittemp1}
M(\beta,B) = \mathbb E\left[\tanh\left(B+\sum_{i=1}^D\xi(h_i)\right)\right]
\leq B+\mathbb E[D]\mathbb E[\xi(h)].
\end{equation}
We use the upper bound in Lemma~\ref{lem-boundatanh} to get
\begin{equation}\label{eq-crittemp2}
\mathbb E[\xi(h)] = \mathbb E[{\rm atanh}(\hat{\beta} \tanh h)] \leq \hat{\beta} \mathbb E[h]
= \hat{\beta}\left(B+\nu\mathbb E[\xi(h)]\right).
\end{equation}
Further, note that
\begin{equation}\label{eq-crittemp3}
\mathbb E[\xi(h)]= \mathbb E[{\rm atanh}(\hat{\beta} \tanh h)]\leq\beta,
\end{equation}
because $\tanh h \leq 1$.
Applying inequality~\eqref{eq-crittemp2} $\ell$ times to~\eqref{eq-crittemp1} and subsequently using inequality~\eqref{eq-crittemp3} once gives
\begin{equation}
M(\beta,B) \leq B + B \hat{\beta}\mathbb E[D]\frac{1-(\hat{\beta}\nu)^\ell}{1-\hat{\beta}\nu}
+ \beta \mathbb E[D] (\hat{\beta} \nu)^\ell.
\end{equation}
Hence,
\begin{align}
M(\beta,B) &\leq \limsup_{\ell\rightarrow\infty}
\left(B + B \hat{\beta}\mathbb E[D]\frac{1-(\hat{\beta}\nu)^\ell}{1-\hat{\beta}\nu}
+ \beta \mathbb E[D] (\hat{\beta} \nu)^\ell \right) \nonumber\\
&= B\left(1+\hat{\beta}\mathbb E[D]\frac{1}{1-\hat{\beta}\nu}\right),
\end{align}
because $\hat{\beta} <\hat{\beta}^*= 1/\nu$. Therefore,
\begin{equation}
\lim_{B\searrow 0} M(\beta,B) \leq \lim_{B\searrow 0} B
\left(1+\hat{\beta}\mathbb E[D]\frac{1}{1-\hat{\beta}\nu}\right) = 0.
\end{equation}
This proves the lower bound on $\beta_c$.
\paragraph{Proof of $\beta_c\leq \beta^*$.}
We adapt Lyons' proof in~\cite{Lyo89} for the critical temperature of deterministic trees to the random tree to show that $\ch{\beta_c} \leq \beta^*$. Assume that $\lim_{B\searrow0}M(\beta,B) = 0$. Note that Proposition~\ref{prop-Magnetization} shows that the magnetization $M(\beta,B)$ is equal to the expectation over the random tree $\mathcal{T}(D,K,\infty)$ of the root magnetization. Hence, if we denote the root of the tree $\mathcal{T}(D,K,\infty)$ by $\phi$, then it follows from our assumption on $M(\beta,B)$ that, a.s., $\lim_{B\searrow0}\langle\sigma_\phi\rangle=0$.
We therefore condition on the tree $T=\mathcal{T}(D,K,\infty)$ and if we suppose that $\lim_{B\searrow0} \langle\sigma_\phi\rangle=0$, then also $\lim_{B\searrow0} h(\phi) =0$. Because of~\eqref{eq-recursion}, we must then have, for all $v\in T$,
\begin{equation}
\lim_{B\searrow0}\lim_{\ell\rightarrow\infty} h^{\ell,+}(v) = 0,
\end{equation}
where we can take $+$ boundary conditions, since the recursion converges to a unique fixed point~\cite[Proposition 1.7]{DomGiaHof10}. Now, fix $0<\beta_0<\beta$ and choose $\ell$ large enough and $B$ small enough such that, for some $\varepsilon=\varepsilon(\beta_0,\beta)>0$ that we choose later,
\begin{equation}
h^{\ell,+}(v) \leq \varepsilon,
\end{equation}
for all $v\in T$ with $|v|=1$, where $|v|$ denotes the graph distance from $\phi$ to $v$. Note that $h^{\ell,+}(v) =\infty > \varepsilon$ for $v\in T$ with $|v|=\ell$.
As in~\cite{Lyo89}, we say that $\Pi$ is a {\em cutset} if $\Pi$ is a finite subset of $T\setminus \{\phi\}$ and every path from $\phi$ to infinity intersects $\Pi$ at exactly one vertex $v\in\Pi$. We write $v \leq \Pi$ if every infinite path from $v$ intersects $\Pi$ and write $v < \Pi$ if $v \leq \Pi$ and $v \notin \Pi$. Furthermore, we say that $w\leftarrow v$ if $\{w,v\}$ is an edge in $T$ and $|w|=|v|+1$. Then, since $h^{\ell,+}(v) =\infty > \varepsilon$ for $v\in T$ with $|v|=\ell$, there is a unique cutset $\Pi$, such that $h^{\ell,+}(v) \leq \varepsilon$ for all $v \leq \Pi$, and for all $v \in \Pi$ there is at least one $w \leftarrow v$ such that $h^{\ell,+}(w) > \varepsilon$.
It follows from the lower bound in Lemma~\ref{lem-boundatanh} that, for $v < \Pi$,
\begin{equation}
h^{\ell,+}(v) = B+\sum_{w\leftarrow v} \xi(h^{\ell,+}(w))
\geq \sum_{w\leftarrow v} \Big(\hat{\beta} h^{\ell,+}(w) - \frac{\hat{\beta}\, h^{\ell,+}(w)^3}{3(1-\hat{\beta}^2)}\Big)
\geq \sum_{w\leftarrow v} \hat{\beta}\, h^{\ell,+}(w) \Big(1-\frac{\varepsilon^2}{3(1-\hat{\beta}^2)}\Big),
\end{equation}
while, for $v\in\Pi$, using that there is at least one $w\leftarrow v$ with $h^{\ell,+}(w)>\varepsilon$,
\begin{equation}
h^{\ell,+}(v) = B+\sum_{w\leftarrow v} \xi(h^{\ell,+}(w)) > \xi(\varepsilon).
\end{equation}
If we now choose $\varepsilon>0$ such that
\begin{equation}
\hat{\beta} \Big(1-\frac{\varepsilon^2}{3(1-\hat{\beta}^2)}\Big) = \hat{\beta}_0,
\end{equation}
which is possible because $\beta_0<\beta$, then,
\begin{equation}
h^{\ell,+}(\phi) \geq \sum_{v\in\Pi} \hat{\beta}_0^{|v|} \xi(\varepsilon).
\end{equation}
Since $\xi(\varepsilon)>0$ and $\lim_{B\searrow0}\lim_{\ell\rightarrow\infty} h^{\ell,+}(\phi) = 0$,
\begin{equation}
\inf_{\Pi} \sum_{v\in\Pi} \hat{\beta}_0^{|v|}=0.
\end{equation}
From \cite[Proposition 6.4]{Lyo90} it follows that $\hat{\beta}_0 \leq 1/\nu$. This holds for all $\beta_0<\beta$, so
\begin{equation}
\beta \leq {\rm atanh}(1 / \nu) = \beta^*.
\end{equation}
This proves the upper bound on $\beta_c$, thus concluding the proof.
\end{proof}
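The appearance of $\nu$ here is natural: for a deterministic tree $T$, Lyons \cite{Lyo89} showed that the critical inverse temperature equals ${\rm atanh}(1/{\rm br}(T))$, where ${\rm br}(T)$ is the branching number of $T$, and for the Galton-Watson tree ${\rm br}(T)=\nu$ a.s.\ on survival \cite{Lyo90}. Theorem~\ref{thm-CritTemp} can thus be viewed as the random-tree analogue of Lyons' formula.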
We next show that the phase transition at this critical temperature is \emph{continuous:}
\begin{lemma}[Continuous phase transition]\label{lem-hcto0}
It holds that
\begin{equation}
\lim_{B\searrow 0} \mathbb E[\xi(h(\beta_c,B))] = 0, \qquad {\rm and }
\qquad \lim_{\beta\searrow \beta_c} \mathbb E[\xi(h(\beta,0^+))]=0.
\end{equation}
\end{lemma}
\begin{proof}
\chs{Note that $\lim_{B\searrow 0} \mathbb E[\xi(h(\beta_c,B))]=c$ exists, because $B\mapsto \mathbb E[\xi(h(\beta_c,B))]$ is non-decreasing and non-negative.}
Assume, by contradiction, that $c>0$.
By the recursion in~\eqref{eq-recursion}, for $B>0$,
\begin{equation} \label{eq-strictineq}
\mathbb E[\xi(h(\beta,B))] = \mathbb E\Big[\xi\Big(B+ \sum_{i=1}^K\xi(h_i(\beta,B))\Big)\Big]
\leq \xi\left(B+\nu \mathbb E[\xi(h(\beta,B))]\right),
\end{equation}
where the inequality holds because of Jensen's inequality and the concavity of $h\mapsto\xi(h)$. Hence,
\begin{equation}
c=\lim_{\chs{B\searrow0}}\mathbb E[\xi(h(\beta_c,B))]
\leq \lim_{\chs{B\searrow0}}\xi\Big(B+\nu \mathbb E[\xi(h(\beta_c,B))]\Big) = \xi(\nu c).
\end{equation}
Since $\xi(x) < \hat{\beta}_c x$ for $x>0$ by Lemma~\ref{lem-boundatanh} and using $\hat{\beta}_c = 1/\nu$, we obtain
\begin{equation}
\xi(\nu c) < \hat{\beta}_c \nu c = c,
\end{equation}
leading to a contradiction.
\ch{An adaptation of this} argument \ch{shows} \chs{the second statement of the lemma.}
\ch{Again $\beta\mapsto \mathbb E[\xi(h(\beta,0^+))]$ is non-\chs{decreasing and non-negative and we} assume
that $\lim_{\beta\searrow \beta_c} \mathbb E[\xi(h(\beta,0^+))]=c>0$. Then,
\begin{align}
c&=\lim_{\beta\searrow \beta_c} \mathbb E[\xi(h(\beta,0^+))]
=\lim_{\beta\searrow \beta_c} \lim_{B\searrow 0}\mathbb E\Big[\xi\Big(B+ \sum_{i=1}^K\xi(h_i(\beta,B))\Big)\Big]\nonumber\\
&\leq \lim_{\beta\searrow \beta_c} \lim_{B\searrow 0}\xi\Big(B+\nu \mathbb E[\xi(h(\beta,B))]\Big)
=\xi(\nu c),
\end{align}
leading again to a contradiction when $c>0$.}
\end{proof}
\section{Critical exponents: Magnetization}
\label{sec-CritExpM}
In this section we prove that the critical exponents related to the magnetization, i.e., $\boldsymbol{\beta}$ and $\boldsymbol{\delta}$, take the values stated in Theorem~\ref{thm-CritExp}. The analysis \ch{involves} Taylor expansions performed up to the right order. By these Taylor expansions, higher moments of $\xi(h)$ appear. Therefore, we first bound these higher moments of $\xi(h)$ in terms of its first moment in Section~\ref{sec-MomentsXi}.
In Section~\ref{sec-boundsExi} we use these bounds to give appropriate bounds on $\mathbb E[\xi(h)]$ which finally allow us to compute the critical exponents $\boldsymbol{\beta}$ and $\boldsymbol{\delta}$ in Section~\ref{sec-CritExpBetaDelta}.
\subsection{Bounds on higher moments of $\xi(h)$\label{sec-MomentsXi}}
\col{\chs{Throughout} Section \ref{sec-CritExpM}} we assume that $B$ is sufficiently close to zero and $\beta_c < \beta <\beta_c+\varepsilon$ for $\varepsilon$ sufficiently small. We write $c_i,C_i, i\geq1$ for constants that only depend on $\beta$ and moments of $K$, and satisfy
\begin{equation} \label{eq-boundsCi}
0 < \liminf_{\beta\searrow\beta_c} C_i(\beta) \leq \limsup_{\beta\searrow\beta_c} C_i(\beta) < \infty.
\end{equation}
Here $C_i$ appears in upper bounds, while $c_i$ appears in lower bounds.
Furthermore, we write $e_i, i\geq1$ for error functions that only depend on $\beta, B, \mathbb E[\xi(h)]$ and moments of $K$, and satisfy
\begin{equation} \label{eq-boundsei}
\limsup_{B\searrow0} e_i(\beta,B) < \infty \qquad {\rm and }
\qquad \lim_{B\searrow0} e_i(\beta_c,B) =0.
\end{equation}
\ch{Finally, we write $\nu_k=\mathbb E[K(K-1)\cdots (K-k+1)]$ for the $k$th factorial moment of $K$, so that $\nu_1=\nu$.}
\begin{lemma}[Bounds on second moment of $\xi(h)$]
\label{lem-boundxih2}
Let $\beta\geq\beta_c$ and $B>0$. Then,
\begin{equation}\label{eq-boundxih2}
\mathbb E[\xi(h)^2] \leq \left\{\begin{array}{ll} C_2 \mathbb E[\xi(h)]^2 + B e_2 & {\rm when\ }
\mathbb E[K^2]<\infty, \\ \\
C_2 \mathbb E[\xi(h)]^2\log\left(1/\mathbb E[\xi(h)]\right) + B e_2
&{\rm when\ } \tau=4, \\ \\
C_2 \mathbb E[\xi(h)]^{\tau-2} + B e_2
&{\rm when\ } \tau\in(3,4).\end{array} \right.
\end{equation}
\end{lemma}
\begin{proof}
We first treat the case $\mathbb E[K^2]<\infty$. We use Lemma~\ref{lem-boundatanh} and the recursion in~\eqref{eq-recursion} to obtain
\begin{align}
\mathbb E[\xi(h)^2] &\leq \hat{\beta}^2 \mathbb E[h^2]
= \hat{\beta}^2 \mathbb E\Big[\Big(B+\sum_{i=1}^K\xi(h_i)\Big)^2\Big]\nonumber\\
&= \hat{\beta}^2\left(B^2+2B\nu\mathbb E[\xi(h)]+\nu_2\mathbb E[\xi(h)]^2+\nu\mathbb E[\xi(h)^2]\right).
\end{align}
Since $1-\hat{\beta}^2\nu>0$, because $\beta$ is sufficiently close to $\beta_c$ and $\hat{\beta}_c=1/\nu<1$, the lemma holds with
\begin{equation}
C_2 = \frac{\hat{\beta}^2 \nu_2}{1-\hat{\beta}^2\nu}, \qquad {\rm and } \qquad
e_2 = \frac{B\hat{\beta}^2+2\hat{\beta}^2\nu\mathbb E[\xi(h)]}{1-\hat{\beta}^2\nu}.
\end{equation}
It is not hard to see that~\eqref{eq-boundsCi} holds. For $\ch{e_2}$ the first property of \eqref{eq-boundsei}
can also easily be seen. The second property in \eqref{eq-boundsei} follows from Lemma~\ref{lem-hcto0}.
If $\tau\leq4$, then $\mathbb E[K^2]=\infty$ and the above does not work. To analyze this case,
we apply the recursion~\eqref{eq-recursion} and split the expectation over $K$ in small and large degrees:
\begin{equation}\label{eq-exih2}
\mathbb E[\xi(h)^2] =
\mathbb E\Big[\xi\Big(B+\sum_{i=1}^K \xi(h_i)\Big)^2 \mathds 1_{\{K\leq \ell\}}\Big]
+\mathbb E\Big[\xi\Big(B+\sum_{i=1}^K \xi(h_i)\Big)^2\mathds 1_{\{K > \ell\}}\Big].
\end{equation}
We use Lemma~\ref{lem-boundatanh} to bound the first term as follows:
\begin{align}
\mathbb E\Big[\xi\Big(B+\sum_{i=1}^K \xi(h_i)\Big)^2 \mathds 1_{\{K\leq \ell\}}\Big]
&\leq \hat{\beta}^2 \mathbb E\Big[\Big(B+\sum_{i=1}^K \xi(h_i)\Big)^2 \mathds 1_{\{K\leq \ell\}}\Big] \\
&\leq \hat{\beta}^2\left(B^2+2B\nu\mathbb E[\xi(h)]
+ \mathbb E[K^2\mathds 1_{\{K\leq \ell\}}]\mathbb E[\xi(h)]^2+\nu\mathbb E[\xi(h)^2] \right).\nonumber
\end{align}
For $\tau\in(3,4)$,
\begin{equation}
\mathbb E[K^2\mathds 1_{\{K\leq \ell\}}] \leq C_{2,\tau}\ell^{4-\tau},
\end{equation}
by Lemma~\ref{lem-truncmoment}, while for $\tau=4$,
\begin{equation}
\mathbb E[K^2\mathds 1_{\{K\leq \ell\}}] \leq C_{2,4}\log \ell.
\end{equation}
To bound the second sum in~\eqref{eq-exih2}, note that $\xi(h)\leq \beta$. Hence,
\begin{equation}
\mathbb E\Big[\xi\Big(B+\sum_{i=1}^K \xi(h_i)\Big)^2\mathds 1_{\{K > \ell\}}\Big]
\leq \beta^2 \mathbb E[\mathds 1_{\{K > \ell\}}] \leq C_{0,\tau}\beta^2\ell^{2-\tau}.
\end{equation}
The optimal bound (up to a constant) can be achieved by choosing $\ell$ such that $\ell^{4-\tau}\mathbb E[\xi(h)]^2$ and $\ell^{2-\tau}$ are of the same order of magnitude. Hence, we choose $\ell=1/\mathbb E[\xi(h)]$. Combining the two upper bounds then gives the desired result with
\begin{equation}
\ch{C_2=\frac{1}{1-\hat{\beta}^2\nu}\left(C_{2,\tau}\hat{\beta}^2+C_{0,\tau}\beta^2\right),}
\end{equation}
where we \ch{have} also used that $\mathbb E[\xi(h)]^2\leq\mathbb E[\xi(h)]^2\log(1/\mathbb E[\xi(h)])$, and
\begin{equation}
e_2 = \frac{B\hat{\beta}^2+2\hat{\beta}^2\nu\mathbb E[\xi(h)]}{1-\hat{\beta}^2\nu}.
\end{equation}
\end{proof}
We next derive upper \ch{bounds} on the third moment of $\xi(h)$:
\begin{lemma}[Bounds on third moment of $\xi(h)$]\label{lem-boundxih3}
Let $\beta\geq\beta_c$ and $B>0$. Then,
\begin{equation}
\mathbb E[\xi(h)^3] \leq \left\{\begin{array}{ll} C_3 \mathbb E[\xi(h)]^3 + B e_3 &{\rm when\ } \mathbb E[K^3]<\infty, \\ \\
C_3 \mathbb E[\xi(h)]^3\log\left(1/\mathbb E[\xi(h)]\right) + B e_3 &{\rm when\ } \tau=5, \\ \\
C_3 \mathbb E[\xi(h)]^{\tau-2} + B e_3 &{\rm when\ } \tau\in(3,5).\end{array} \right.
\end{equation}
\end{lemma}
\begin{proof}
For $\mathbb E[K^3]<\infty$ we bound, in a similar way as in Lemma~\ref{lem-boundxih2},
\begin{align}
\mathbb E[\xi(h)^3] \leq \hat{\beta}^3&\Bigg(B^3 + 3 B^2\nu\mathbb E[\xi(h)]
+3B\nu_2\mathbb E[\xi(h)]^2+3B\nu\mathbb E[\xi(h)^2]\\
&+\nu_3\mathbb E[\xi(h)]^3+3\nu_2\mathbb E[\xi(h)]\mathbb E[\xi(h)^2]
+\nu\mathbb E[\xi(h)^3]\Bigg).\nonumber
\end{align}
Using~\eqref{eq-boundxih2}, we indeed get the bound
\begin{equation}
\mathbb E[\xi(h)^3] \leq C_3 \mathbb E[\xi(h)]^3 + B e_3,
\end{equation}
where
\begin{equation}
C_3 = \frac{\hat{\beta}^3}{1-\hat{\beta}^3 \nu} \left(\nu_3 + 3 \nu_2 C_2\right),
\end{equation}
and
\begin{equation}
e_3 = \frac{\hat{\beta}^3}{1-\hat{\beta}^3\nu}\left\{B^2
+3B\nu e_2+3\left(B\nu+\nu_2e_2\right)\mathbb E[\xi(h)]
+3\left(\nu_2+\nu C_2\right)\mathbb E[\xi(h)]^2\right\}.
\end{equation}
For $\tau\in(3,5]$, we use the recursion~\eqref{eq-recursion}, make the expectation over $K$ explicit and split in small and large values of $K$ to obtain
\begin{equation}\label{eq-exih3}
\ch{\mathbb E[\xi(h)^3] =
\mathbb E\Big[\xi\Big(B+\sum_{i=1}^K\xi(h_i)\Big)^3\indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}\Big] +
\mathbb E\Big[\xi\Big(B+\sum_{i=1}^K\xi(h_i)\Big)^3\indic{K>\lfloor1/\mathbb E[\xi(h)]\rfloor}\Big].}
\end{equation}
We bound the first sum from above by
\ch{\begin{align}
&\hat{\beta}^3 \mathbb E\Big[\Big(B+\sum_{i=1}^K\xi(h_i)\Big)^3\indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}\Big] \nonumber\\
&\qquad= \hat{\beta}^3 \Big(B^3 + 3B^2 \mathbb E[K \indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]\mathbb E[\xi(h)]+
3B\mathbb E[K(K-1)\indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]\mathbb E[\xi(h)]^2\nonumber\\
&\qquad\qquad+3B\mathbb E[K \indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]
\mathbb E[\xi(h)^2]+\mathbb E[K(K-1)(K-2) \indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]
\mathbb E[\xi(h)]^3
\nonumber\\
&\qquad\qquad+3\mathbb E[K(K-1)\indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]\mathbb E[\xi(h)]\mathbb E[\xi(h)^2]+\mathbb E[K\indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]\mathbb E[\xi(h)^3]\Big).\nonumber
\end{align}}
By Lemma~\ref{lem-truncmoment}, for $\tau\in(3,5)$,
\ch{\begin{equation}
\mathbb E[K^3\indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]\leq C_{3,\tau}\mathbb E[\xi(h)]^{\tau-5},
\end{equation}}
while, for $\tau=5$,
\ch{\begin{equation}
\mathbb E[K^3\indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]
\leq C_{3,5}\left(1+\log\left(1/\mathbb E[\xi(h)]\right)\right).
\end{equation}}
Similarly, by Lemma~\ref{lem-truncmoment}, for $\tau\in(3,4)$,
\ch{\begin{equation}
\mathbb E[K^2\indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]\leq C_{2,\tau}\mathbb E[\xi(h)]^{\tau-4},
\end{equation}}
while, for $\tau=4$,
\ch{\begin{equation}
\mathbb E[K^2\indic{K\leq \lfloor1/\mathbb E[\xi(h)]\rfloor}]
\leq C_{2,4}\left(1+\log\left(1/\mathbb E[\xi(h)]\right)\right).
\end{equation}}
For the other terms we can replace the upper bound in the sum by infinity and use the upper bound on $\mathbb E[\xi(h)^2]$ of Lemma~\ref{lem-boundxih2}. For the second sum in~\eqref{eq-exih3} we bound $\xi(x)\leq\beta$, so that this sum is bounded from above by $C_{0,\tau}\mathbb E[\xi(h)]^{\tau-2}$.
Combining these bounds gives the desired result.
\end{proof}
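Lemmas~\ref{lem-boundxih2} and~\ref{lem-boundxih3} can be summarized as saying that, up to error terms proportional to $B$ and logarithmic corrections in the boundary cases, $\mathbb E[\xi(h)^a]$ is at most of order $\mathbb E[\xi(h)]^{\min(a,\tau-2)}$ for $a=2,3$: the heavy tail of $K$ caps the effective exponent at $\tau-2$.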
\subsection{Bounds on first moment of $\xi(h)$ \label{sec-boundsExi}}
\begin{proposition}[Upper bound on first moment of $\xi(h)$]
\label{prop-UpperExi}
Let $\beta\geq\beta_c$ and $B>0$. Then, there exists a $C_1>0$ such that
\begin{equation}\label{eq-UpperExi}
\mathbb E[\xi(h)] \leq \beta B + \hat{\beta} \nu \mathbb E[\xi(h)] - C_1 \mathbb E[\xi(h)]^{\boldsymbol{\delta}},
\end{equation}
where
\begin{equation}
\boldsymbol{\delta}
=\left\{\begin{array}{ll} 3 &{\rm when\ } \mathbb E[K^3]<\infty,\\ \\ \tau-2 &{\rm when\ } \tau\in(3,5].
\end{array}\right.
\end{equation}
For $\tau=5$,
\begin{equation}\label{eq-UpperExi-tau5}
\mathbb E[\xi(h)] \leq \beta B + \hat{\beta} \nu \mathbb E[\xi(h)]
- C_1 \mathbb E[\xi(h)]^{3}\log\left(1/\mathbb E[\xi(h)]\right).
\end{equation}
\end{proposition}
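Before giving the proof, let us indicate how \eqref{eq-UpperExi} will be used. At $\beta=\beta_c$ we have $\hat{\beta}_c\nu=1$, so the linear terms in $\mathbb E[\xi(h)]$ cancel and \eqref{eq-UpperExi} reduces to $C_1\mathbb E[\xi(h)]^{\boldsymbol{\delta}}\leq\beta_c B$, that is, $\mathbb E[\xi(h)]\leq(\beta_c B/C_1)^{1/\boldsymbol{\delta}}$; this is precisely how the exponent $\boldsymbol{\delta}$ in Theorem~\ref{thm-CritExp} arises in Section~\ref{sec-CritExpBetaDelta}.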
\begin{proof}
We first use recursion~\eqref{eq-recursion} and rewrite it as
\begin{equation}\label{eq-split}
\mathbb E[\xi(h)] = \mathbb E\Big[\xi\Big(B+\sum_{i=1}^K \xi(h_i)\Big)\Big]
= \hat{\beta} B + \hat{\beta} \nu \mathbb E[\xi(h)] + T_1+T_2,
\end{equation}
where
\begin{equation}
T_1= \mathbb E\Big[\xi\Big(B+K\mathbb E[\xi(h)]\Big)-\hat{\beta}\left(B + K \mathbb E[\xi(h)]\right)\Big],
\end{equation}
and
\begin{equation}
T_2 = \mathbb E\Big[\xi\Big(B+\sum_{i=1}^K \xi(h_i)\Big)-\xi\left(B+K\mathbb E[\xi(h)]\right)\Big].
\end{equation}
Here, $T_1$ can be seen as the error of a first-order Taylor approximation of $\xi\left(B+K\mathbb E[\xi(h)]\right)$ around $0$, whereas $T_2$ is the error made by replacing $\xi(h_i)$ by its expected value in the sum. The random variable in the expectation defining $T_1$ is non-positive by Lemma~\ref{lem-boundatanh}, and $T_2\leq 0$ by the concavity of $x\mapsto\xi(x)$ and Jensen's inequality, which is enough for our purposes. We next bound $T_1$ in the cases where $\mathbb E[K^3]<\infty$, $\tau\in(3,5)$ and $\tau=5$ separately.
\paragraph{Bound on $T_1$ when $\mathbb E[K^3]<\infty$.}
To bound $T_1$ for $\mathbb E[K^3]<\infty$ we use that, a.s.,
\begin{equation}
\xi\left(B+K\mathbb E[\xi(h)]\right)-\hat{\beta}\left(B + K\mathbb E[\xi(h)]\right)\leq0,
\end{equation}
which follows from Lemma~\ref{lem-boundatanh}. Hence,
\begin{equation}
T_1 \leq \mathbb E\left[\left(\xi\left(B+K\mathbb E[\xi(h)]\right)
-\hat{\beta}\left(B + K\mathbb E[\xi(h)]\right)\right)
\mathds 1_{\{B + K \mathbb E[\xi(h)]\leq {\rm atanh}\frac12\}}\right].
\end{equation}
Since $\xi''(0)=0$, it follows from Taylor's theorem that, a.s.,
\begin{equation}
\xi\left(B+K\mathbb E[\xi(h)]\right)-\hat{\beta}\left(B + K\mathbb E[\xi(h)]\right)
=\frac{\xi'''(\zeta)}{6}\left(B + K\mathbb E[\xi(h)]\right)^3,
\end{equation}
for some $\zeta\in(0,B + K\mathbb E[\xi(h)])$. If $B + K \mathbb E[\xi(h)]\leq {\rm atanh}\frac12$, then
\begin{equation}
\xi'''(\zeta)= -\frac{2 \hat{\beta} (1-\hat{\beta}^2) (1-\tanh^2 \zeta)}
{(1-\hat{\beta}^2\tanh^2\zeta)^3}\left(1-3(1-\hat{\beta}^2)\tanh^2\zeta-\hat{\beta}^2\tanh^4\zeta\right)
\leq-\frac{9}{32}\hat{\beta}(1-\hat{\beta}^2).
\end{equation}
Hence,
\begin{align}
T_1&\leq-\frac{3}{64}\hat{\beta}(1-\hat{\beta}^2)\mathbb E\left[\left(B + K\mathbb E[\xi(h)]\right)^3
\mathds 1_{\{B + K \mathbb E[\xi(h)]\leq {\rm atanh}\frac12\}}\right]\nonumber\\
&\leq-\frac{3}{64}\hat{\beta}(1-\hat{\beta}^2)\mathbb E[K^3\mathds 1_{\{K \mathbb E[\xi(h)]
\leq {\rm atanh}\frac12-B\}}]\mathbb E[\xi(h)]^3.
\end{align}
\paragraph{Bound on $T_1$ \ch{when} $\tau\in(3,5]$.}
For $\tau\in(3,5]$, we make the expectation over $K$ explicit:
\begin{equation}
T_1=\sum_{k=0}^\infty \rho_k\left(\xi\left(B+k\mathbb E[\xi(h)]\right)
-\hat{\beta}\left(B + k\mathbb E[\xi(h)]\right)\right),
\end{equation}
where it should be noted that all terms in this sum are negative
because of Lemma~\ref{lem-boundatanh}. Define $f(k)=\xi\left(B+k\mathbb E[\xi(h)]\right)
-\hat{\beta}\left(B + k\mathbb E[\xi(h)]\right)$ and note that $f(k)$ is non-increasing.
We use \eqref{partial-summation} and Lemma \ref{lem-tail-rho} to rewrite
\begin{equation}
T_1=\sum_{k=0}^\infty f(k)\rho_k
=\ch{f(0)+}\sum_{k\geq 1} [f(k)-f(k-1)]\rho_{\geq k}
\leq \ch{f(0)+}c_{\rho}\sum_{k\geq 1} [f(k)-f(k-1)] (k+1)^{-(\tau-2)}.
\end{equation}
Then, use \eqref{partial-summation} in reverse to rewrite this as
\begin{equation}
T_1
\leq \ch{f(0)+}c_{\rho}\sum_{k\geq 1} f(k) [k^{-(\tau-2)}-(k+1)^{-(\tau-2)}]
\leq \ch{f(0)(1-c_{\rho}\sum_{k\geq 1} k^{-\tau})+}(\tau-1)c_{\rho} \sum_{k\geq 0} f(k) (k+1)^{-(\tau-1)}.
\end{equation}
Hence, with $\ch{e=f(0)(1-c_{\rho}\sum_{k\geq 1} k^{-\tau})/B}$,
\begin{align}
T_1&\leq \ch{eB+} (\tau-1)c_{\rho} \left(\mathbb E[\xi(h)]\right)^{\tau-1}
\sum_{k=0}^{\infty} ((k+1)\mathbb E[\xi(h)])^{-(\tau-1)}
\left(\xi\left(B+k\mathbb E[\xi(h)]\right)-\hat{\beta}\left(B + k\mathbb E[\xi(h)]\right)\right)\nonumber\\
&\leq \ch{eB+} (\tau-1)c_{\rho} \left(\mathbb E[\xi(h)]\right)^{\tau-1}
\sum_{k=a/\mathbb E[\xi(h)]}^{b/\mathbb E[\xi(h)]} (k\mathbb E[\xi(h)])^{-(\tau-1)}
\left(\xi\left(B+k\mathbb E[\xi(h)]\right)-\hat{\beta}\left(B + k\mathbb E[\xi(h)]\right)\right),
\end{align}
where we choose $a$ and $b$ such that $0<a<b<\infty$. We use dominated convergence on the above sum.
The summands are uniformly bounded, and $\mathbb E[\xi(h)]\rightarrow0$ for both limits of interest.
Further, when $k\mathbb E[\xi(h)]=y$, the summand converges pointwise to
$y^{-(\tau-1)} \left(\xi\left(B+y\right)-\hat{\beta}\left(B + y\right)\right)$.
Hence, we can write the sum above as
\begin{equation}
\mathbb E[\xi(h)]^{-1} \left(\int_{a}^{b} y^{-(\tau-1)}
\left(\xi\left(B+y\right)-\hat{\beta}\left(B + y\right)\right){\rm d} y +o(1)\right),
\end{equation}
where $o(1)$ is a function tending to zero for both limits of interest \cite[216 A]{Ito93}. The integrand is uniformly bounded for $y\in[a,b]$ and hence we can bound the integral from above by a (negative) constant $-I$ for $B$ sufficiently small and $\beta$ sufficiently close to $\beta_c$. Hence,
\begin{equation}
\mathbb E[\xi(h)] \leq \hat{\beta} B + \hat{\beta} \nu \mathbb E[\xi(h)]
- (\tau-1)c_{\rho} I\mathbb E[\xi(h)]^{\tau-2}.
\end{equation}
\paragraph{Logarithmic corrections in the bound for $\tau=5$.}
We complete the proof by identifying the logarithmic correction for $\tau=5$. Since the
random variable in the expectation in $T_1$ is non-positive, we can bound
\begin{equation}
T_1\leq
\mathbb E\left[\xi\left(B+K\mathbb E[\xi(h)]\right)-\hat{\beta}\left(B + K \mathbb E[\xi(h)]\right)
\indic{K\leq \varepsilon /\mathbb E[\xi(h)]}\right].
\end{equation}
Taylor expanding $h\mapsto \xi(h)$ to third order, using that $\xi(0)=\xi''(0)=0$
and that the linear term cancels against $\hat{\beta}\left(B + K \mathbb E[\xi(h)]\right)$, leads to
\begin{equation}
T_1\leq
\mathbb E\left[\frac{\xi'''(\zeta)}{6} \left(B+K\mathbb E[\xi(h)]\right)^3
\indic{K\leq \varepsilon /\mathbb E[\xi(h)]}\right],
\end{equation}
for some $\zeta\in (0,B+K\mathbb E[\xi(h)])$. On the event that $K\leq \varepsilon /\mathbb E[\xi(h)]$,
we thus have that $\zeta\in (0,B+\varepsilon)$, and $\xi'''(\zeta)\leq \ch{c_{\varepsilon}\equiv\sup_{x\in (0,B+\varepsilon)} \xi'''(x)}<0$
when $B$ and $\varepsilon$ are sufficiently small. Thus,
\begin{align}
T_1&\leq
\frac{\ch{c_{\varepsilon}}}{6}\mathbb E\left[\left(B+K\mathbb E[\xi(h)]\right)^3
\indic{K\leq \varepsilon /\mathbb E[\xi(h)]}\right]\\
&\leq \frac{\ch{c_{\varepsilon}}}{6} \mathbb E[\xi(h)]^3 \mathbb E\left[K(K-1)(K-2)\indic{K\leq \varepsilon/\mathbb E[\xi(h)]}\right].\nonumber
\end{align}
When $\tau=5$, by Lemma \ref{lem-truncmoment}, $\mathbb E\left[K(K-1)(K-2)\indic{K\leq \ell}\right]\geq c_{3,5}\log{\ell}$,
which completes the proof.
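Heuristically, the logarithm for $\tau=5$ arises because $\rho_k$ then decays like $k^{-(\tau-1)}=k^{-4}$ (consistent with the tail bound of Lemma \ref{lem-tail-rho}), so that
\begin{equation}
\mathbb E\left[K(K-1)(K-2)\indic{K\leq \ell}\right]\asymp\sum_{k=3}^{\ell}k^{3}\,k^{-4}\asymp\log{\ell}.
\end{equation}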
\end{proof}
\begin{proposition}[Lower bound on first moment of $\xi(h)$]\label{prop-LowerExi} Let $\beta\geq\beta_c$ and $B>0$. Then, there exist constants $c_1, C_2>0$ such that
\begin{equation}
\label{eq-LowerExi}
\mathbb E[\xi(h)] \geq
\hat{\beta} B + \hat{\beta} \nu \mathbb E[\xi(h)] - c_1 \mathbb E[\xi(h)]^{\boldsymbol{\delta}} - B e_1,
\end{equation}
where
\begin{equation}
\boldsymbol{\delta}=\left\{\begin{array}{ll} 3 &{\rm when\ } \mathbb E[K^3]<\infty,\\ \\ \tau-2 &
{\rm when\ }\tau\in(3,5).\end{array}\right.
\end{equation}
For $\tau=5$,
\begin{equation}
\label{eq-LowerExi-tau5}
\mathbb E[\xi(h)] \geq \hat{\beta} B + \hat{\beta} \nu \mathbb E[\xi(h)]
- C_2 \mathbb E[\xi(h)]^3 \log(1/\mathbb E[\xi(h)]) - B e_1.
\end{equation}
\end{proposition}
\begin{proof}
We again use the split in~\eqref{eq-split} and we bound $T_1$ and $T_2$.
\paragraph{The lower bound on $T_1$.} For $\mathbb E[K^3]<\infty$, we use the lower bound of Lemma~\ref{lem-boundatanh} to get
\begin{equation}\label{eq-t1k3finite}
T_1 \geq - \frac{\hat{\beta}}{3(1-\hat{\beta}^2)} \mathbb E\left[(B+K\mathbb E[\xi(h)])^3\right].
\end{equation}
By expanding, this can be rewritten as
\begin{equation}
T_1 \geq - \frac{\hat{\beta}}{3(1-\hat{\beta}^2)} \mathbb E[K^3]\mathbb E[\xi(h)]^3-B e_4.
\end{equation}
For $\tau\in(3,5]$, we first split $T_1$ into a small-$K$ and a large-$K$ part. For this, write
\begin{equation}
t_1(k) = \xi\left(B+k\mathbb E[\xi(h)]\right)-\hat{\beta}\left(B + k\mathbb E[\xi(h)]\right).
\end{equation}
Then,
\begin{equation}
T_1 = \mathbb E[t_1(K)]=\mathbb E\left[t_1(K) \mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}\right]
+\mathbb E\left[t_1(K) \mathds 1_{\{K > \varepsilon/\mathbb E[\xi(h)]\}}\right].
\end{equation}
To bound the first term, we again use~\eqref{eq-t1k3finite}:
\begin{equation}
\mathbb E\left[t_1(K) \mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}\right]
\geq - \frac{\hat{\beta}}{3(1-\hat{\beta}^2)} \mathbb E\left[(B+K\mathbb E[\xi(h)])^3
\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}\right].
\end{equation}
It is easy to see that the terms $B^3\mathbb E\left[\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}\right]$ and $3B^2\mathbb E[\xi(h)]\mathbb E\left[K\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}\right]$ that we get by expanding the above are of the form $B e$. To bound the other two terms, we use Lemma~\ref{lem-truncmoment} to obtain,
\ch{for $\varepsilon\leq 1,$}
\begin{equation}
3B \mathbb E[\xi(h)]^2\mathbb E\left[K^2\mathds 1_{\{K\leq \varepsilon/\mathbb E[\xi(h)]\}}\right] \leq \left\{\begin{array}{ll} 3B \mathbb E[\xi(h)]^2\mathbb E\left[K^2\right]&{\rm when\ } \tau\in(4,5], \\ \\
3B C_{2,4}\mathbb E[\xi(h)]^2 \log(1/\mathbb E[\xi(h)])&{\rm when\ } \tau=4,\\ \\
3B C_{2,\tau}\mathbb E[\xi(h)]^{\tau-2}&{\rm when\ } \tau\in(3,4),\end{array}\right.
\end{equation}
which are all of the form $B e$, and
\begin{equation}
\mathbb E\left[K^3\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}\right] \mathbb E[\xi(h)]^3 \leq \left\{\begin{array}{ll} C_{3,5}\mathbb E[\xi(h)]^3\log(1/\mathbb E[\xi(h)])&{\rm when\ } \tau=5, \\ \\
C_{3,\tau}\mathbb E[\xi(h)]^{\tau-2}&{\rm when\ } \tau\in(3,5).\end{array}\right.
\end{equation}
To bound $T_1$ for large $K$, we observe that
\begin{equation}
\mathbb E\left[t_1(K) \mathds 1_{\{K > \varepsilon/\mathbb E[\xi(h)]\}}\right] \geq -\hat{\beta} B \mathbb E[\mathds 1_{\{K > \varepsilon/\mathbb E[\xi(h)]\}}]-\hat{\beta}\mathbb E[\xi(h)]\mathbb E[K\mathds 1_{\{K > \varepsilon/\mathbb E[\xi(h)]\}}].
\end{equation}
Applying Lemma~\ref{lem-truncmoment} now gives, for $\tau\in(3,5]$,
\begin{align}
\mathbb E\left[t_1(K) \mathds 1_{\{K > \varepsilon/\mathbb E[\xi(h)]\}}\right]
&\geq -\hat{\beta} B C_{0,\tau}\mathbb E[\xi(h)]^{\tau-2} - \hat{\beta} C_{1,\tau}\mathbb E[\xi(h)]^{\tau-2} \nonumber\\
&= -C_4 \mathbb E[\xi(h)]^{\tau-2}-B e_4.
\end{align}
\paragraph{The lower bound on $T_2$.}
To bound $T_2$, we split into a small-$K$ and a large-$K$ contribution:
\begin{equation}
T_2 = \mathbb E[t_2(K) \mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}]+\mathbb E[t_2(K)
\mathds 1_{\{K > \varepsilon/\mathbb E[\xi(h)]\}}]\equiv T_2^{\scriptscriptstyle \leq}+T_2^{\scriptscriptstyle >},
\end{equation}
where
\begin{equation}
t_2(k)=\xi\left(B+\sum_{i=1}^k \xi(h_i)\right)-\xi\left(B+k\mathbb E[\xi(h)]\right).
\end{equation}
To bound $T_2^{\scriptscriptstyle >}$, we note that
\begin{equation}
t_2(k) \geq -\beta,
\end{equation}
so that
\begin{equation}
T_2^{\scriptscriptstyle >}
\geq -\beta \mathbb E[\mathds 1_{\{K > \varepsilon/\mathbb E[\xi(h)]\}}] \geq -C_5 \mathbb E[\xi(h)]^{\ch{(\tau-2)\wedge 3}},
\end{equation}
where we have used Lemma~\ref{lem-truncmoment} in the last inequality \ch{and the Markov inequality when $\mathbb E[K^3]<\infty$}.
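Explicitly, in the case $\mathbb E[K^3]<\infty$ the Markov bound reads
\begin{equation}
\mathbb P\left(K>\varepsilon/\mathbb E[\xi(h)]\right)\leq \frac{\mathbb E[K^3]}{\varepsilon^3}\,\mathbb E[\xi(h)]^3,
\end{equation}
which accounts for the exponent $(\tau-2)\wedge3$, since $\tau-2\leq 3$ precisely when $\tau\leq5$.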
To bound $T_2^{\scriptscriptstyle \leq}$, we start from
\begin{equation}\label{eq-T2Taylor}
T_2^{\scriptscriptstyle \leq}=\mathbb E\left[\frac{\xi''(\zeta)}{2}\left(\sum_{i=1}^K\xi(h_i) - K\mathbb E[\xi(h)]\right)^2\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}\right],
\end{equation}
\ch{for some $\zeta$ in between $B+\sum_{i=1}^K\xi(h_i)$ and $B+K\mathbb E[\xi(h)]$. We use that}
\begin{equation}
\xi''(\zeta) \geq -\frac{2\hat{\beta}}{1-\hat{\beta}^2} \Big(B+\sum_{i=1}^K\xi(h_i)+K\mathbb E[\xi(h)]\Big)
\end{equation}
to obtain
\begin{align}
T_2^{\scriptscriptstyle \leq}
& \geq -\frac{\hat{\beta}}{(1-\hat{\beta}^2)^2} \mathbb E\Big[\Big(B+\sum_{i=1}^K\xi(h_i)
+ K\mathbb E[\xi(h)]\Big)\Big(\sum_{i=1}^K\xi(h_i) - K\mathbb E[\xi(h)]\Big)^2
\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}\Big]\nonumber\\
&\geq -\frac{\hat{\beta}}{(1-\hat{\beta}^2)^2} \Big(B \nu \mathbb E\left[\left(\xi(h)-\mathbb E[\xi(h)]\right)^2\right]
+ \ch{\mathbb E[K\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}]} \mathbb E\left[\left(\xi(h)-\mathbb E[\xi(h)]\right)^3\right]\nonumber\\
&\qquad+ 2\ch{\mathbb E[K^2\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}]}\mathbb E[\xi(h)]\mathbb E\left[\left(\xi(h)-\mathbb E[\xi(h)]\right)^2\right]\Big)\nonumber\\
&\geq -\frac{\hat{\beta}}{(1-\hat{\beta}^2)^2} \Big(B \nu \mathbb E[\xi(h)^2]
+2\ch{\mathbb E[K^2\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}]}\mathbb E[\xi(h)]\mathbb E[\xi(h)^2]+\nu\mathbb E[\xi(h)^3]\Big).
\end{align}
Using the bounds of Lemmas~\ref{lem-boundxih2} and~\ref{lem-boundxih3} we get,
\begin{equation}
T_2^{\scriptscriptstyle \leq}\geq
\left\{\begin{array}{ll} -\frac{\hat{\beta}}{(1-\hat{\beta}^2)^2}\left(2\mathbb E[K^2]C_2+C_3\nu\right)\mathbb E[\xi(h)]^3-B e_5
&{\rm when\ } \mathbb E[K^3]<\infty,\\ \\
-\frac{\hat{\beta}}{(1-\hat{\beta}^2)^2}\left(2\mathbb E[K^2]C_2+C_3\nu\right)\mathbb E[\xi(h)]^{3}\log(1/\mathbb E[\xi(h)])-B e_5
&{\rm when\ } \tau=5,\\ \\
-\frac{\hat{\beta}}{(1-\hat{\beta}^2)^2}\left(C'_{2,\tau}+C_3\nu\right)\mathbb E[\xi(h)]^{\tau-2}-B e_5
&{\rm when\ } \tau\in(3,5),\end{array}\right.
\end{equation}
where \ch{$C'_{2,\tau}=\mathbb E[K^2]C_2$ for $\tau\in (4,5)$ and $C'_{2,\tau}=C_2$ for $\tau\in(3,4]$. Here,
we have also used that (a) $\mathbb E[\xi(h)]^3\leq\mathbb E[\xi(h)]^{3}\log(1/\mathbb E[\xi(h)])$ for $\tau=5$;
(b) $\mathbb E[\xi(h)]^3\leq\mathbb E[\xi(h)]^{\tau-2}$ for $\tau\in (4,5]$; and (c) ${\mathbb E[K^2\mathds 1_{\{K \leq \varepsilon/\mathbb E[\xi(h)]\}}]}\mathbb E[\xi(h)]
\leq \varepsilon \nu\leq \nu$ for $\tau\in (3,4]$.}
Combining the bounds on $T_1$ and $T_2$ gives the desired lower bound on $\mathbb E[\xi(h)]$.
\end{proof}
\subsection{Critical exponents $\boldsymbol{\beta}$ and $\boldsymbol{\delta}$\label{sec-CritExpBetaDelta}}
It remains to show that the bounds on $\mathbb E[\xi(h)]$ give us the desired result:
\begin{theorem}[Values of $\boldsymbol{\beta}$ and $\boldsymbol{\delta}$]
The critical exponent $\boldsymbol{\beta}$ equals
\begin{equation}
\boldsymbol{\beta}=\left\{\begin{array}{ll} 1/2 &{\rm when\ } \mathbb E[K^3]<\infty,\\
1/(\tau-3) &{\rm when\ } \tau\in(3,5),\end{array}\right.
\end{equation}
and the critical exponent $\boldsymbol{\delta}$ equals
\begin{equation}
\boldsymbol{\delta}=\left\{\begin{array}{ll} 3 &{\rm when\ } \mathbb E[K^3]<\infty,\\
\tau-2 &{\rm when\ } \tau\in(3,5).\end{array}\right.
\end{equation}
For $\tau=5$,
\begin{equation}
M(\beta,0^+) \asymp \Big(\frac{\beta-\beta_c}{\log{(1/(\beta-\beta_c))}}\Big)^{1/2} \quad {\rm for\ } \beta \searrow \beta_c,
\qquad
M(\beta_c, B) \asymp \Big(\frac{B}{\log(1/B)}\Big)^{1/3} \quad {\rm for\ } B \searrow 0.
\end{equation}
\end{theorem}
\begin{proof} We prove the upper and the lower bounds separately, starting with the upper bound.
\paragraph{The upper bounds on the magnetization.}
We start by bounding the magnetization from above:
\begin{equation}
M(\beta,B)=\mathbb E\left[\tanh\left(B+\sum_{i=1}^D\xi(h_i)\right)\right] \leq B+\mathbb E[D]\mathbb E[\xi(h)].
\end{equation}
We first perform the analysis for $\boldsymbol{\beta}$. Taking the limit $B\searrow0$ in \eqref{eq-UpperExi}
in Proposition \ref{prop-UpperExi} yields
\begin{equation}
\mathbb E[\xi(h_0)] \leq \hat{\beta} \nu \mathbb E[\xi(h_0)] - C_1 \mathbb E[\xi(h_0)]^{\boldsymbol{\delta}},
\end{equation}
where $h_0=h(\beta,0^+)$. For $\beta>\beta_c$, by definition, $\mathbb E[\xi(h_0)]>0$ and thus we can divide through by $\mathbb E[\xi(h_0)]$ to obtain
\begin{equation}
\mathbb E[\xi(h_0)]^{\boldsymbol{\delta}-1} \leq \frac{\hat{\beta} \nu-1}{C_1}.
\end{equation}
By Taylor's theorem,
\begin{equation}
\label{eq-Taylorbetahatup}
\hat{\beta} \nu-1 \leq \nu(1-\hat{\beta}_c^2)(\beta-\beta_c).
\end{equation}
Hence,
\begin{equation}
\label{eq-Exih0}
\mathbb E[\xi(h_0)] \leq \left(\frac{\nu(1-\hat{\beta}_c^2)}{C_1}\right)^{1/(\boldsymbol{\delta}-1)}
(\beta-\beta_c)^{1/(\boldsymbol{\delta}-1)}.
\end{equation}
Using that $\boldsymbol{\beta}=1/(\boldsymbol{\delta}-1)$,
\begin{equation}
M(\beta,0^+)\leq \mathbb E[D]\left(\frac{\nu(1-\hat{\beta}_c^2)}{C_1}\right)^{\boldsymbol{\beta}}(\beta-\beta_c)^{\boldsymbol{\beta}},
\end{equation}
from which it easily follows that
\begin{equation}
\label{magnetization-beta-UB}
\limsup_{\beta\searrow\beta_c} \frac{M(\beta,0^+)}{(\beta-\beta_c)^{\boldsymbol{\beta}}}<\infty.
\end{equation}
We complete the analysis for $\boldsymbol{\beta}$ by analyzing $\tau=5$.
Since \eqref{eq-UpperExi} also applies to $\tau=5$, \eqref{magnetization-beta-UB}
holds as well. We now improve upon this using \eqref{eq-UpperExi-tau5}
in Proposition \ref{prop-UpperExi}, which yields in a similar way as above that
\begin{equation}
\mathbb E[\xi(h_0)]^{2} \leq \frac{\hat{\beta} \nu-1}{C_1\log(1/\mathbb E[\xi(h_0)])}.
\end{equation}
Since $x\mapsto 1/\log(1/x)$ is increasing on $(0,1)$ and $\mathbb E[\xi(h_0)]\leq C(\beta-\beta_c)^{1/2}$
for some $C>0$, we immediately obtain that
\begin{equation}
\mathbb E[\xi(h_0)]^{2} \leq \frac{\hat{\beta} \nu-1}{C_1\log(1/\mathbb E[\xi(h_0)])}
\leq \frac{\hat{\beta} \nu-1}{C_1\log(1/[C(\beta-\beta_c)^{1/2}])}.
\end{equation}
Taking the limit $\beta \searrow\beta_c$ as above then completes the proof.
We continue with the analysis for $\boldsymbol{\delta}$.
Setting $\beta=\beta_c$ in~\eqref{eq-UpperExi} and rewriting gives
\begin{equation}
\mathbb E[\xi(h_c)]\leq\left(\frac{\hat{\beta}_c}{C_1}\right)^{1/\boldsymbol{\delta}} B^{1/\boldsymbol{\delta}},
\end{equation}
with $h_c=h(\beta_c,B)$. Hence,
\begin{equation}
M(\beta_c,B)\leq B+\mathbb E[D]\left(\frac{\hat{\beta}_c}{C_1}\right)^{1/\boldsymbol{\delta}} B^{1/\boldsymbol{\delta}},
\end{equation}
so that, using $1/\boldsymbol{\delta}<1$,
\begin{equation}
\limsup_{B\searrow0} \frac{M(\beta_c,B)}{B^{1/\boldsymbol{\delta}}}<\infty.
\end{equation}
The analysis for $\boldsymbol{\delta}$ for $\tau=5$ can be performed in an identical way as for
$\boldsymbol{\beta}$.
\paragraph{The lower bounds on the magnetization.}
For the lower bound on the magnetization we use that
\begin{equation}
\frac{{\rm d}^2}{{\rm d} x^2} \tanh x = -2\tanh x (1-\tanh^2 x) \geq -2,
\end{equation}
so that, by Taylor's theorem,
\begin{equation}
\tanh x \geq x-x^2.
\end{equation}
Hence,
\begin{align}
M(\beta,B) &\geq B + \mathbb E[D]\mathbb E[\xi(h)]-\mathbb E\Big[\Big(B+\sum_{i=1}^D \xi(h_i)\Big)^2\Big] \nonumber\\
&\geq B + \mathbb E[D]\mathbb E[\xi(h)]-B e_6-\mathbb E[D(D-1)]\mathbb E[\xi(h)]^2-\mathbb E[D]C_2\mathbb E[\xi(h)]^{2\wedge(\tau-2)}\nonumber\\
&=B + (\mathbb E[D]-e_7)\mathbb E[\xi(h)]-B e_6,
\end{align}
where $a\wedge b$ denotes the minimum of $a$ and $b$; the last equality holds because $\mathbb E[\xi(h)]$ converges to
zero for both limits of interest, so that the higher-order terms can be absorbed into $e_7$.
We again first perform the analysis for $\boldsymbol{\beta}$ and $\tau\neq 5$.
We get from \eqref{eq-LowerExi} in Proposition~\ref{prop-LowerExi} that
\begin{equation}
\mathbb E[\xi(h_0)]\geq\left(\frac{\hat{\beta}\nu-1}{c_1}\right)^{1/(\boldsymbol{\delta}-1)}
\geq \left(\frac{\nu(1-\hat{\beta}^2)} {c_1}\right)^{\boldsymbol{\beta}}(\beta-\beta_c)^{\boldsymbol{\beta}},
\end{equation}
where the last inequality holds because, by Taylor's theorem,
\begin{equation}
\label{eq-Taylorbetahatlow}
\hat{\beta} \nu-1 \geq \nu(1-\hat{\beta}^2)(\beta-\beta_c).
\end{equation}
Hence,
\begin{equation}
\liminf_{\beta\searrow \beta_c}\frac{M(\beta,0^+)}{(\beta-\beta_c)^{\boldsymbol{\beta}}} \geq
\mathbb E[D]\left(\frac{\nu(1-\hat{\beta}^2)} {c_1}\right)^{\boldsymbol{\beta}}>0.
\end{equation}
For $\tau=5$, we note that \eqref{eq-LowerExi-tau5} as well as the fact that $\log{1/x}\leq A_{\varepsilon} x^{-\varepsilon}$
for all $x\in (0,1)$ and some $A_{\varepsilon}>0$, yields that
\begin{equation}
\label{log-corr-tau5-beta1}
\mathbb E[\xi(h_0)]\geq\left(\frac{\hat{\beta}\nu-1}{A_{\varepsilon} C_2}\right)^{1/(2\ch{-}\varepsilon)}
\geq \left(\frac{\nu(1-\hat{\beta}^2)} {A_{\varepsilon} C_2}\right)^{1/(2\ch{-}\varepsilon)}(\beta-\beta_c)^{1/(2\ch{-}\varepsilon)}.
\end{equation}
Then again using \eqref{eq-LowerExi-tau5} yields, for some constant $c>0$,
\begin{equation}
\label{log-corr-tau5-beta2}
\mathbb E[\xi(h_0)]\geq\left(\frac{\hat{\beta}\nu-1}{C_2\log(1/\mathbb E[\xi(h_0)])}\right)^{1/2}
\geq c\Big(\frac{\beta-\beta_c}{\log(1/(\beta-\beta_c))}\Big)^{1/2},
\end{equation}
once more since $x\mapsto 1/(\log(1/x))$ is increasing.
We continue with the analysis for $\boldsymbol{\delta}$.
Again, setting $\beta=\beta_c$ in~\eqref{eq-LowerExi}, we get
\begin{equation}
\mathbb E[\xi(h_c)]\geq\left(\frac{\hat{\beta}_c-e_1}{c_1}\right)^{1/\boldsymbol{\delta}}B^{1/\boldsymbol{\delta}},
\end{equation}
from which it follows that
\begin{equation}
\liminf_{B\searrow0} \frac{M(\beta_c,B)}{B^{1/\boldsymbol{\delta}}} \geq
\mathbb E[D]\left(\frac{\hat{\beta}_c}{c_1}\right)^{1/\boldsymbol{\delta}}>0,
\end{equation}
as required. The extension to $\tau=5$ can be dealt with in an identical way as in
\eqref{log-corr-tau5-beta1}--\eqref{log-corr-tau5-beta2}.
This proves the theorem.
\end{proof}
\section{Critical exponents: Susceptibility}
\label{sec-CritExpChi}
In this section, we study the susceptibility. In Section \ref{sec-gamma} we identify
$\boldsymbol{\gamma}$, while in Section \ref{sec-gamma'} we prove a lower bound on
$\boldsymbol{\gamma'}$ and give a heuristic argument why this should be the correct value.
\subsection{The critical exponent $\boldsymbol{\gamma}$}
\label{sec-gamma}
For the susceptibility in the {\em subcritical} phase, i.e., in the high-temperature region $\beta<\beta_c$,
we can not only identify the critical exponent $\boldsymbol{\gamma}$, but we can also identify the constant:
\begin{theorem}[Critical exponent $\boldsymbol{\gamma}$]
For $\ch{\mathbb E[K]<\infty}$ and $\beta<\beta_c$,
\ch{
\begin{equation}
\label{chi-comp-highT}
\chi(\beta,0^+)=1+\frac{\mathbb E[D]\hat{\beta}}{1-\nu\hat{\beta}}.
\end{equation}
In particular,
\begin{equation}
\label{chi-asy-highT}
\lim_{\beta \nearrow \beta_c} \chi(\beta,0^+)(\beta_c-\beta) = \frac{\mathbb E[D]\hat{\beta}_c^2}{1-\hat{\beta}_c^2},
\end{equation}}
and hence
\begin{equation}
\boldsymbol{\gamma}=1.
\end{equation}
\end{theorem}
\begin{proof} The proof \ch{is divided into three steps. We first reduce the susceptibility on the random graph to the one on the random Bethe
tree. Secondly, we rewrite the susceptibility on the tree using transfer matrix techniques. Finally, we use this rewrite (which applies to
\emph{all} $\beta$ and $B>0$) to prove that $\boldsymbol{\gamma}=1$.}
\paragraph{Reduction to the random tree.}
Let $\phi$ denote a vertex selected uniformly at random from $[n]$ and let $\mathbb E_\phi$ denote the expectation with respect to $\phi$. Then we can write the susceptibility as
\begin{equation}
\chi_n \equiv \frac1n \sum_{i,j=1}^n \Big(\langle\sigma_i\sigma_j\rangle_{\col{\mu_n}} - \langle\sigma_i\rangle_{\col{\mu_n}}\langle\sigma_j\rangle_{\col{\mu_n}}\Big)
= \mathbb E_\phi\left[\sum_{j=1}^n
\Big(\langle\sigma_\phi\sigma_j\rangle_{\col{\mu_n}} - \langle\sigma_\phi\rangle_{\col{\mu_n}}\langle\sigma_j\rangle_{\col{\mu_n}}\Big)\right].
\end{equation}
Note that
\begin{equation}
\label{eq-correlisderiv}
\langle\sigma_i\sigma_j\rangle_{\col{\mu_n}} - \langle\sigma_i\rangle_{\col{\mu_n}}\langle\sigma_j\rangle_{\col{\mu_n}}
= \frac{\partial \langle\sigma_i\rangle_{\col{\mu_n}}}{\partial B_j},
\end{equation}
which is, by the GHS inequality~\cite{GriHurShe70}, decreasing in external fields at all other vertices $k\in[n]$. Denote by $\langle\cdot\rangle^{t,+/f}$ the Ising model with $+$/free boundary conditions, respectively, at all vertices at graph distance $t$ from $\phi$. Then, for all $t\geq1$,
\begin{equation}
\chi_n \geq \mathbb E_\phi\left[\sum_{j=1}^n \Big(\langle\sigma_\phi\sigma_j\rangle^{t,+}_{\chs{\mu_n}}
- \langle\sigma_\phi\rangle^{t,+}_{\chs{\mu_n}}\langle\sigma_j\rangle_{\chs{\mu_n}}^{t,+}\Big)\right].
\end{equation}
By introducing boundary conditions, only vertices in the ball $B_t(\phi)$ contribute to the sum.
Hence, by taking the limit $n\rightarrow \infty$ and using that the graph is locally tree-like,
\begin{equation}
\chi \geq \mathbb E\left[\sum_{j\in T_t} \Big(\langle\sigma_\phi\sigma_j\rangle^{t,+}
- \langle\sigma_\phi\rangle^{t,+}\langle\sigma_j\rangle^{t,+}\Big)\right],
\end{equation}
where the expectation now is over the random tree $T_t \sim \mathcal{T}(D,K,t)$ with root $\phi$.
For an upper bound on $\chi_n$ we use a trick similar to one used in the proof of~\cite[Corollary~4.5]{DemMon10}: Let $B_j'=B$ if $j\in B_t(\phi)$ and $B_j'=B+H$ if $j \notin B_t(\phi)$ for some $H>-B$. Denote by $\langle\cdot\rangle_{H}$ the associated Ising expectation. Then, because of~\eqref{eq-correlisderiv},
\begin{equation}
\mathbb E_\phi\left[\sum_{j\notin B_t(\phi)} \Big(\langle\sigma_\phi\sigma_j\rangle
- \langle\sigma_\phi\rangle\langle\sigma_j\rangle\Big)\right]
= \mathbb E_\phi\left[ \frac{\partial}{\partial H} \langle\sigma_\phi\rangle_H \Bigg|_{H=0}\right],
\end{equation}
By the GHS inequality, $\langle\sigma_\phi\rangle_H$ is a concave function of $H$ and hence,
\begin{equation}
\mathbb E_\phi\left[\frac{\partial}{\partial H} \langle\sigma_\phi\rangle_H \Bigg|_{H=0}\right]
\leq \mathbb E_\phi\left[\frac{2}{B}\left(\langle\sigma_\phi\rangle_{H=0}-\langle\sigma_\phi\rangle_{H=-B/2}\right)\right].
\end{equation}
Using the GKS inequality this can be bounded from above by
\begin{equation}
\mathbb E_\phi\left[\frac{2}{B}\left(\langle\sigma_\phi\rangle^{t,+}_{H=0}-\langle\sigma_\phi\rangle^{t,f}_{H=-B/2}\right)\right]
= \mathbb E_\phi\left[\frac{2}{B}\left(\langle\sigma_\phi\rangle^{t,+}-\langle\sigma_\phi\rangle^{t,f}\right)\right],
\end{equation}
where the equality holds because the terms depend only on the system in the ball $B_t(\phi)$ and hence not on $H$. Letting $n\rightarrow\infty$ and using that the graph is locally tree-like, this is equal to
\begin{equation}
\frac{2}{B}\mathbb E\left[\left(\langle\sigma_\phi\rangle^{t,+}-\langle\sigma_\phi\rangle^{t,f}\right)\right],
\end{equation}
where the expectation and the Ising model now are defined on the random tree $T_t \sim \mathcal{T}(D,K,t)$ with root $\phi$. From~\cite[Lemma~3.1]{DomGiaHof10} we know that this expectation can be bounded from above by $M/t$ for some constant $M=M(\beta,B)<\infty$. Hence, letting $t\rightarrow\infty$,
\begin{equation}
\lim_{t\rightarrow\infty}\mathbb E\left[\sum_{j\in T_t} \Big(\langle\sigma_\phi\sigma_j\rangle^{t,+}
- \langle\sigma_\phi\rangle^{t,+}\langle\sigma_j\rangle^{t,+}\Big)\right]
\leq \chi \leq \lim_{t\rightarrow\infty} \mathbb E\left[\sum_{j \in T_t} \Big(\langle\sigma_\phi\sigma_j\rangle^{t,f}
- \langle\sigma_\phi\rangle^{t,f}\langle\sigma_j\rangle^{t,f}\Big)\right].
\end{equation}
\paragraph{Rewrite of the susceptibility on trees.}
It remains to study the susceptibility on trees. For this, condition on the tree $T_\infty$. Then, for some vertex $j$ at height $\ell\leq t$ in the tree, denote the vertices on the unique path from $\phi$ to $j$ by $\phi=v_0,v_1,\ldots,v_\ell=j$ and let, for $0\leq i\leq\ell$, $S_{\leq i}=(\sigma_{v_0},\ldots,\sigma_{v_i})$. We first compute the expected value of a spin $\sigma_{v_i}$ on this path, conditioned on the spin values $S_{\leq i-1}$. Note that under this conditioning the expected spin value only depends on the spin value $\sigma_{v_{i-1}}$ and the effective field $h_{v_i}=h_{v_i}^{t,+/f}$ obtained by pruning the tree at vertex $v_i$, i.e., by removing all edges at vertex $v_i$ going away from the root and replacing the external magnetic field at vertex $v_i$ by $h_{v_i}$ which can be exactly computed using~\cite[Lemma~4.1]{DemMon10}. Hence,
\begin{equation}
\langle\sigma_{v_i} | S_{\leq i-1}\rangle^{t,+/f} =\frac{{\rm e}^{\beta\sigma_{v_{i-1}}+h_{v_i}}
-{\rm e}^{-\beta \sigma_{v_{i-1}} - h_{v_i}}}{{\rm e}^{\beta\sigma_{v_{i-1}}+h_{v_i}}+{\rm e}^{-\beta \sigma_{v_{i-1}} - h_{v_i}}}.
\end{equation}
We can write the indicators $\mathds 1_{\{\sigma_{v_{i-1}}=\pm1\}}=\frac12 (1\pm\sigma_{v_{i-1}})$, so that the above equals
\begin{align}
\frac12& (1+\sigma_{v_{i-1}})\frac{{\rm e}^{\beta+h_{v_i}}-{\rm e}^{-\beta - h_{v_i}}}{{\rm e}^{\beta+h_{v_i}}+{\rm e}^{-\beta- h_{v_i}}}
+\frac12 (1-\sigma_{v_{i-1}})\frac{{\rm e}^{-\beta+h_{v_i}}-{\rm e}^{\beta - h_{v_i}}}{{\rm e}^{-\beta+h_{v_i}}+{\rm e}^{\beta - h_{v_i}}}\\
&= \sigma_{v_{i-1}} \frac12 \left(\frac{{\rm e}^{\beta+h_{v_i}}-{\rm e}^{-\beta - h_{v_i}}}{{\rm e}^{\beta+h_{v_i}}
+{\rm e}^{-\beta- h_{v_i}}}-\frac{{\rm e}^{-\beta+h_{v_i}}-{\rm e}^{\beta - h_{v_i}}}{{\rm e}^{-\beta+h_{v_i}}+{\rm e}^{\beta - h_{v_i}}}\right)
+ \frac12 \left(\frac{{\rm e}^{\beta+h_{v_i}}-{\rm e}^{-\beta - h_{v_i}}}{{\rm e}^{\beta+h_{v_i}}
+{\rm e}^{-\beta- h_{v_i}}}+\frac{{\rm e}^{-\beta+h_{v_i}}-{\rm e}^{\beta - h_{v_i}}}{{\rm e}^{-\beta+h_{v_i}}+{\rm e}^{\beta - h_{v_i}}}\right).\nonumber
\end{align}
By combining the terms pairwise over a common denominator, the above equals
\begin{align}
\sigma_{v_{i-1}} \frac12 & \frac{({\rm e}^{\beta+h_{v_i}}-{\rm e}^{-\beta - h_{v_i}})
({\rm e}^{-\beta+h_{v_i}}+{\rm e}^{\beta - h_{v_i}})-({\rm e}^{-\beta+h_{v_i}}-{\rm e}^{\beta - h_{v_i}})
({\rm e}^{\beta+h_{v_i}}+{\rm e}^{-\beta- h_{v_i}})}{({\rm e}^{\beta+h_{v_i}}+{\rm e}^{-\beta- h_{v_i}})({\rm e}^{-\beta+h_{v_i}}+{\rm e}^{\beta - h_{v_i}})} \nonumber\\
&+ \frac12 \frac{({\rm e}^{\beta+h_{v_i}}-{\rm e}^{-\beta - h_{v_i}})({\rm e}^{-\beta+h_{v_i}}+{\rm e}^{\beta - h_{v_i}})
+({\rm e}^{-\beta+h_{v_i}}-{\rm e}^{\beta - h_{v_i}})({\rm e}^{\beta+h_{v_i}}+{\rm e}^{-\beta- h_{v_i}})}{({\rm e}^{\beta+h_{v_i}}+{\rm e}^{-\beta- h_{v_i}})
({\rm e}^{-\beta+h_{v_i}}+{\rm e}^{\beta - h_{v_i}})}.
\end{align}
By expanding all products, this equals, after cancellations,
\begin{align}
\label{eq-sigmavigivenslei}
\sigma_{v_{i-1}}&\frac{{\rm e}^{2\beta}-{\rm e}^{-2\beta}}{{\rm e}^{2\beta}+{\rm e}^{-2\beta}+{\rm e}^{2h_{v_i}}+{\rm e}^{-2h_{v_i}}}
+\frac{{\rm e}^{2h_{v_i}}-{\rm e}^{-2h_{v_i}}}{{\rm e}^{2\beta}+{\rm e}^{-2\beta}+{\rm e}^{2h_{v_i}}+{\rm e}^{-2h_{v_i}}}\nonumber\\
&=\sigma_{v_{i-1}}\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_i})} + \frac{\sinh(2h_{v_i})}{\cosh(2\beta)+\cosh(2h_{v_i})}.
\end{align}
Using this, we have that
\begin{equation}
\langle\sigma_{v_\ell}\rangle^{t,+/f} =\langle\langle\sigma_{v_\ell}|S_{\leq \ell-1}\rangle^{t,+/f}\rangle^{t,+/f}
= \langle\sigma_{v_{\ell-1}}\rangle^{t,+/f}\frac{\sinh(2\beta)}{\cosh(2\beta)
+\cosh(2h_{v_\ell})} + \frac{\sinh(2h_{v_\ell})}{\cosh(2\beta)+\cosh(2h_{v_\ell})}.
\end{equation}
Applying this recursively, we get
\begin{align}
\langle\sigma_{v_\ell}\rangle^{t,+/f}
= \langle\sigma_{v_0}\rangle^{t,+/f} & \prod_{i=1}^{\ell}\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_i})} \nonumber\\
&+ \sum_{i=1}^{\ell}\left(\frac{\sinh(2h_{v_i})}{\cosh(2\beta)+\cosh(2h_{v_i})}\prod_{k=i+1}^{\ell}
\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_k})}\right).
\end{align}
Similarly,
\begin{align}
\langle\sigma_{v_0} \sigma_{v_\ell}\rangle^{t,+/f}
&= \left\langle\sigma_{v_0}\left(\sigma_{v_0}\prod_{i=1}^{\ell}\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_i})} \right.\right. \nonumber\\
&\left.\left.\qquad + \sum_{i=1}^{\ell}\left(\frac{\sinh(2h_{v_i})}{\cosh(2\beta)+\cosh(2h_{v_i})}\prod_{k=i+1}^{\ell}
\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_k})}\right)\right)\right\rangle^{t,+/f}\nonumber\\
&=\prod_{i=1}^{\ell}\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_i})}\nonumber\\
&\qquad +\langle\sigma_{v_0}\rangle^{t,+/f}\sum_{i=1}^{\ell}\left(\frac{\sinh(2h_{v_i})}
{\cosh(2\beta)+\cosh(2h_{v_i})}\prod_{k=i+1}^{\ell}\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_k})}\right).
\end{align}
Combining the above yields
\begin{equation}
\label{eq-exactcorrintree}
\langle\sigma_{v_0}\sigma_{v_\ell}\rangle^{t,+/f}-\langle\sigma_{v_0}\rangle^{t,+/f}
\langle\sigma_{v_\ell}\rangle^{t,+/f}
=\left(1-\left(\langle\sigma_{v_0}\rangle^{t,+/f}\right)^2\right)\prod_{i=1}^{\ell}\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_i})}.
\end{equation}
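As a consistency check, when all effective fields vanish every factor in \eqref{eq-exactcorrintree} reduces to
\begin{equation}
\frac{\sinh(2\beta)}{\cosh(2\beta)+1}=\frac{2\sinh(\beta)\cosh(\beta)}{2\cosh(\beta)^2}=\tanh(\beta)=\hat{\beta},
\end{equation}
recovering the classical two-point function $\hat{\beta}^{\ell}$ along a path of length $\ell$ in a tree.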
By taking the limit $t\rightarrow\infty$, we obtain
\begin{equation}
\chi =\mathbb E\left[\sum_{j \in T_{\infty}} \left(1-\langle\sigma_{v_0}\rangle^2\right)
\prod_{i=1}^{|j|}\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_i})}\right].
\end{equation}
Finally, we can rewrite
\begin{equation}
\frac{\sinh(2\beta)}{\cosh(2\beta)+\cosh(2h_{v_i})}
= \frac{2\sinh(\beta)\cosh(\beta)}{2\cosh(\beta)^2-1+\cosh(2h_{v_i})}=
\frac{\hat{\beta}}{1+\frac{\cosh(2h_{v_i})-1}{2\cosh(\beta)^2}},
\end{equation}
so that
\begin{equation}
\label{chi-rewrite}
\chi(\beta,B)=\mathbb E\left[\left(1-\langle\sigma_{v_0}\rangle^2\right) \sum_{j \in T_{\infty}} \hat{\beta}^{|j|}
\prod_{i=1}^{|j|}\Big(1+\frac{\cosh(2h_{v_i})-1}{2\cosh(\beta)^2}\Big)^{-1}\right].
\end{equation}
The rewrite in \eqref{chi-rewrite} is valid for all $\beta$ and $B>0$, and provides the starting point for
all our results on the susceptibility.
\paragraph{Identification of the susceptibility \col{for $\beta<\beta_c$}.}
We take the limit $B\searrow0$, for $\beta<\beta_c$, and apply dominated convergence.
First of all, all fields $h_i$ converge to zero by the definition of $\beta_c$, so we have pointwise convergence.
Secondly, $1+\frac{\cosh(2h_{v_i})-1}{2\cosh(\beta)^2}\geq 1$, so that the random variable in the
expectation is bounded from above by $\sum_{j \in T_{\infty}} \hat{\beta}^{|j|}$, which has finite
expectation as we show below.
Thus, by dominated convergence, the above converges to
\begin{equation}
\lim_{B\searrow0}\chi(\beta,B)
=\mathbb E\left[\sum_{j \in T_{\infty}} \hat{\beta}^{|j|}\right].
\end{equation}
Denote by $Z_\ell$ the number of vertices at distance $\ell$ from the root. Then,
\begin{equation}
\mathbb E\left[\sum_{j\in T_\infty} \hat{\beta}^{|j|}\right]
= \mathbb E\left[\sum_{\ell=0}^\infty Z_\ell \hat{\beta}^{\ell}\right]=\sum_{\ell=0}^\infty \mathbb E[Z_\ell] \hat{\beta}^{\ell},
\end{equation}
because $Z_\ell\geq0$, a.s. Note that $Z_\ell / (\mathbb E[D] \nu^{\ell-1})$ is a martingale, because the offspring of the root has expectation $\mathbb E[D]$ and all other vertices have expected offspring $\nu$. Hence,
\begin{equation}
\ch{\lim_{B\searrow0}\chi(\beta,B)=}\sum_{\ell=0}^\infty \mathbb E[Z_\ell] \hat{\beta}^{\ell}
= 1+\sum_{\ell=1}^\infty \ch{\mathbb E[D]}\nu^{\ell-1}\hat{\beta}^\ell
= 1+\frac{\mathbb E[D]\hat{\beta}}{1-\hat{\beta}\nu}.
\end{equation}
\ch{This proves \eqref{chi-comp-highT}. We continue to prove \eqref{chi-asy-highT},}
which follows by using~\eqref{eq-Taylorbetahatup} and~\eqref{eq-Taylorbetahatlow}:
\begin{equation}
\frac{\mathbb E[D]\hat{\beta}}{\nu(1-\hat{\beta}^2)}(\beta_c-\beta)^{-1} +1
\leq \chi(\beta,0^+)
\leq \frac{\mathbb E[D]\hat{\beta}}{\nu(1-\hat{\beta}_c^2)}(\beta_c-\beta)^{-1}+1,
\end{equation}
so that both bounds, multiplied by $\beta_c-\beta$, converge to $\mathbb E[D]\hat{\beta}_c/(\nu(1-\hat{\beta}_c^2))
=\mathbb E[D]\hat{\beta}_c^2/(1-\hat{\beta}_c^2)$, since $\hat{\beta}_c=1/\nu$.
\end{proof}
\subsection{Partial results for the critical exponent $\boldsymbol{\gamma'}$}
\label{sec-gamma'}
For the supercritical susceptibility, we prove the following lower bound on $\boldsymbol{\gamma'}$:
\begin{proposition}[Critical exponent $\boldsymbol{\gamma'}$]
\ch{For $\tau\in (3,5]$ or $\mathbb E[K^3]<\infty$,}
\begin{equation}
\boldsymbol{\gamma'}\geq 1.
\end{equation}
\end{proposition}
\begin{proof} We start by rewriting the susceptibility in a form that is convenient in the low-temperature phase.
\paragraph{A rewrite of the susceptibility in terms of i.i.d.\ random variables.}
For $\beta>\beta_c$ we start from~\eqref{chi-rewrite}.
We further rewrite
\begin{equation}
\chi(\beta,B)=
\sum_{\ell=0}^\infty \hat{\beta}^{\ell} \mathbb E\left[(1-\langle\sigma_{v_0}\rangle^2) \sum_{v_{\ell} \in T_{\infty}}
\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{v_i})-1}{2\cosh(\beta)^2}\Big)\Big\}\right].
\end{equation}
\ch{Here, and in the sequel, we use the convention that empty products, arising when $\ell=0$,
equal 1, while empty sums equal 0. Thus, the contribution due to $\ell=0$ in the above sum equals 1.}
We write $v_0=\phi$ and $v_i=a_0\cdots a_{i-1}\in {\mathbb N}^i$ for $i\geq 1$, so that $v_i$ is the
$a_{i-1}$st child of $v_{i-1}$. Then,
\begin{equation}
\chi(\beta,B)=
\sum_{\ell=0}^\infty \hat{\beta}^{\ell} \sum_{a_0, \ldots, a_{\ell-1}}
\mathbb E\left[(1-\langle\sigma_{v_0}\rangle^2)\indic{v_{\ell} \in T_{\infty}}
\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{v_i})-1}{2\cosh(\beta)^2}\Big)\Big\}\right].
\end{equation}
Let $K_{v_i}$ be the number of children of $v_i$, and condition on $K_{v_i}=k_i$ for every $i\in[0,\ell-1]$,
where we abuse notation to write $[0,m]=\{0,\ldots, m\}$.
As a result, we obtain that
\begin{align}
\chi(\beta,&B)=
\sum_{\ell=0}^\infty \hat{\beta}^{\ell} \sum_{a_0, \ldots, a_{\ell-1}}\sum_{k_0,\ldots, k_{\ell-1}}
\mathbb P(v_{\ell}\in T_{\infty}, K_{v_i}=k_i ~\forall i\in[0, \ell-1])\\
& \times
\mathbb E\left[(1-\langle\sigma_{v_0}\rangle^2)
\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{v_i})-1}{2\cosh(\beta)^2}\Big)\Big\}
\mid v_{\ell}\in T_{\infty}, K_{v_i}=k_i ~\forall i\in[0, \ell-1]\right].\nonumber
\end{align}
Note that
\begin{equation}
\mathbb P(K_{v_i}=k_i ~\forall i\in[0, \ell-1], v_{\ell}\in T_{\infty})
=\mathbb P(D=k_0)\indic{a_0\leq k_0}\prod_{i=1}^{\ell-1} \mathbb P(K=k_i)\indic{a_i\leq k_i}.
\end{equation}
Let $T_{i,j}$ be the tree that describes all descendants of the $j$th child of $v_i$, with the $a_i$th child removed,
and $T_{\ell}$ the offspring of $v_{\ell}$.
When $v_{\ell}\in T_{\infty}$, all information of the tree $T_{\infty}$ can be encoded in the
collection of trees $(T_{i,j})_{j\in [0,K_{v_i}-1],i\in [0,\ell-1]}$ and $T_{\ell}$,
together with the sequence $(a_i)_{i=0}^{\ell-1}$. Denote $\vec{T}=\big((T_{i,j})_{j\in [0,K_{v_i}-1],i\in [0,\ell-1]}, T_{\ell}\big)$.
Then, for any collection of trees $\vec{t}=\big((t_{i,j})_{j\in [0,k_i-1],i\in [0,\ell-1]}, t_{\ell}\big)$,
\begin{equation}
\mathbb P(\vec{T}=\vec{t}\mid K_{v_i}=k_i ~\forall i\in[0, \ell-1], v_{\ell}\in T_{\infty})
=\mathbb P(T=t_{\ell}) \prod_{i\in [0,\ell-1]}\prod_{j\in [0,k_i-1]} \mathbb P(T=t_{i,j}),
\end{equation}
where the law of $T$ is that of a Galton-Watson tree with offspring distribution $K$.
We conclude that
\begin{align}
\chi(\beta,B)&=
\sum_{\ell=0}^\infty \hat{\beta}^{\ell} \sum_{a_0, \ldots, a_{\ell-1}}\sum_{k_0,\ldots, k_{\ell-1}}
\mathbb P(D=k_0)\indic{a_0\leq k_0}\prod_{i=1}^{\ell-1} \mathbb P(K=k_i)\indic{a_i\leq k_i}\\
&\qquad \times
\mathbb E\left[(1-\langle\sigma_{v_0}\rangle^2)
\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{i}^{\star}(\vec{k}))-1}{2\cosh(\beta)^2}\Big)\Big\}
\right],\nonumber
\end{align}
where $(h_i^{\star}(\vec{k}))_{i=0}^{\ell}$ satisfy $h_{\ell}^{\star}=h_{\ell,1}$ and the recursion relation
\begin{equation}
h_i^{\star}(\vec{k})=B+\xi(h_{i+1}^{\star}(\vec{k}))+\sum_{j=1}^{k_i-1} \xi(h_{i,j}),
\end{equation}
and where $(h_{i,j})_{i\in[0,\ell], j\geq 1}$ are i.i.d.\ copies of
the random variable $h(\beta,B)$. We note that the law of $(h_i^{\star}(\vec{k}))_{i=0}^{\ell}$ does not depend
on $(a_i)_{i\in [0,\ell-1]}$, so that the summation over $(a_i)_{i\in [0,\ell-1]}$ yields
\begin{align}
\chi(\beta,B)&=
\sum_{\ell=0}^\infty \hat{\beta}^{\ell}\sum_{k_0,\ldots, k_{\ell-1}}
k_0\mathbb P(D=k_0)\prod_{i=1}^{\ell-1} k_i\mathbb P(K=k_i)\\
&\qquad \times
\mathbb E\left[(1-\langle\sigma_{v_0}\rangle^2)
\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{i}^{\star}(\vec{k}))-1}{2\cosh(\beta)^2}\Big)\Big\}
\right].\nonumber
\end{align}
For a random variable $X$ on the non-negative integers with $\mathbb E[X]>0$,
we let $X^{\star}$ have the size-biased distribution of $X$, given by
\begin{equation}
\mathbb P(X^{\star}=k)=\frac{k}{\mathbb E[X]}\mathbb P(X=k).
\end{equation}
Then
\begin{align}
\chi(\beta,B)&=
\frac{\mathbb E[D]}{\nu}
\sum_{\ell=0}^\infty (\hat{\beta}\nu)^{\ell}\sum_{k_0,\ldots, k_{\ell-1}}
\mathbb P(D^{\star}=k_0)\prod_{i=1}^{\ell-1} \mathbb P(K^{\star}=k_i)\\
&\qquad \times
\mathbb E\left[(1-\langle\sigma_{v_0}\rangle^2)
\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{i}^{\star}(\vec{k}))-1}{2\cosh(\beta)^2}\Big)\Big\}
\right].\nonumber
\end{align}
Define $(h_i^{\star})_{i=0}^{\ell}=\big(h_i^{\star}(D^{\star}, K^{\star}_1, \ldots, K^{\star}_{\ell-1}, K_{\ell})\big)_{i=0}^{\ell}$,
where the random variables $(D^{\star}, K^{\star}_1, \ldots, K^{\star}_{\ell-1}, K_{\ell})$ are independent.
Then we finally arrive at
\begin{align}
\chi(\beta,B)&=
\frac{\mathbb E[D]}{\nu}
\sum_{\ell=0}^\infty (\hat{\beta}\nu)^{\ell}
\mathbb E\left[(1-\langle\sigma_{v_0}\rangle^2)
\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{i}^{\star})-1}{2\cosh(\beta)^2}\Big)\Big\}
\right].
\end{align}
\paragraph{Reduction to second moments.}
We now proceed towards the lower bound on $\boldsymbol{\gamma'}$.
\ch{Note that, a.s.,
\begin{equation}
\langle\sigma_{v_0}\rangle=\tanh(h_{v_0}^{\star}),
\end{equation}
where
\begin{equation}
h_{v_0}^{\star}=B+\xi(h_{v_1}^{\star})+\sum_{j=1}^{D^{\star}-1} \xi(h_{0,j})\leq B+\beta+\sum_{j=1}^{D^{\star}-1} \xi(h_{0,j}).
\end{equation}
Therefore,
\begin{equation}
\langle\sigma_{v_0}\rangle\leq \tanh(B+\beta+\sum_{j=1}^{D^{\star}-1} \xi(h_{0,j})).
\end{equation}
The right hand side is independent of $(h_i^{\star})_{i=1}^{\ell}$, so that the expectation factorizes.
Further,
\begin{equation}
\mathbb E\Big[\tanh(B+\beta+\sum_{j=1}^{D^{\star}-1} \xi(h_{0,j}))\Big]\to \tanh(\beta_c)=\hat{\beta}_c<1,
\end{equation}
as $B\searrow 0$ and $\beta\searrow\beta_c$.
}
Further, we restrict the sum over all $\ell$ to $\ell\leq m$, where we take $m=(\beta-\beta_c)^{-1}.$
This leads to
\begin{align}
\chi(\beta,B)&\geq
\frac{(1-\hat{\beta}^2)\mathbb E[D]}{\nu}
\sum_{\ell=0}^m (\hat{\beta}\nu)^{\ell}
\mathbb E\left[
\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{i}^{\star})-1}{2\cosh(\beta)^2}\Big)\Big\}
\right].
\end{align}
We \ch{condition on} all coordinates of $(D^{\star}, K^{\star}_1, \ldots, K^{\star}_{\ell-1}, K_{\ell})$
being at most $b=(\beta-\beta_c)^{\ch{-}1/(\tau-3)}$, which has probability
\begin{align}
\mathbb P(D^{\star}\leq b, K^{\star}_1\leq b, \ldots, K^{\star}_{\ell-1}\leq b, K_{\ell}\leq b)
&\geq (1-o(1)) \mathbb P(K^{\star}\leq b)^{m}\\
&\geq (1-o(1)) \big(1-C_{K^{\star}}b^{-(\tau-3)}\big)^m,\nonumber
\end{align}
which is uniformly bounded from below by a constant for the choices $m=(\beta-\beta_c)^{-1}$
and $b=(\beta-\beta_c)^{\ch{-}1/(\tau-3)}$. Also, we use that $\hat{\beta}\nu\geq 1$, since $\beta>\beta_c$.
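Two elementary facts are used here, which we state for completeness; the first assumes the tail bound $\mathbb P(K>k)\leq c_K k^{-(\tau-2)}$, consistent with the truncated-moment bounds used throughout. First,
\begin{equation}
\mathbb P(K^{\star}>b)=\frac{\mathbb E[K\indic{K>b}]}{\nu}\leq C_{K^{\star}}b^{-(\tau-3)}.
\end{equation}
Secondly, for the above choices $b^{-(\tau-3)}=\beta-\beta_c=1/m$, so that
\begin{equation}
\big(1-C_{K^{\star}}b^{-(\tau-3)}\big)^{m}=\big(1-C_{K^{\star}}/m\big)^{m}\longrightarrow {\rm e}^{-C_{K^{\star}}}>0
\qquad\text{as }\beta\searrow\beta_c.
\end{equation}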
This leads us to
\begin{equation}
\chi(\beta,B)
\geq
c_{\chi}
\sum_{\ell=0}^m
\overline{\mathbb E}_b\left[
\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{i}^{\star})-1}{2\cosh(\beta)^2}\Big)\Big\}
\right],
\end{equation}
where $\overline{\mathbb E}_b$ denotes the conditional expectation
given that $D^{\star}\leq b, K^{\star}_1\leq b, \ldots, K^{\star}_{\ell-1}\leq b, K_{\ell}\leq b$.
Using Jensen's inequality $\mathbb E[{\rm e}^{X}]\geq {\rm e}^{\mathbb E[X]}$, this leads us to
\begin{equation}
\chi(\beta,B)
\geq
c_{\chi}
\sum_{\ell=0}^m
\exp\Big\{-\sum_{i=1}^{\ell}\overline{\mathbb E}_b\left[\log\Big(1+\frac{\cosh(2h_{i}^{\star})-1}{2\cosh(\beta)^2}\Big)
\right]\Big\}.
\end{equation}
Define, for $a>0$ and $x\geq 0$, the function $q(x)=\log\Big(1+a(\cosh(x)-1)\Big)$. Differentiating leads to
\begin{equation}
q'(x)=\frac{a\sinh(x)}{1+a(\cosh(x)-1)},
\end{equation}
so that $q'(x)\leq \ch{C_q x/2}$ for some constant $C_q$ and all $x\geq 0$. As a result, $q(x)\leq C_qx^2/4$, so that
\begin{equation}
\chi(\beta,B)
\geq
c_{\chi}
\sum_{\ell=0}^m
\exp\Big\{-C_q\sum_{i=1}^{\ell}\overline{\mathbb E}_b\left[(h_{i}^{\star})^2\right]\Big\}.
\end{equation}
\paragraph{Second moment analysis of $h_{i}^{\star}$.}
As a result, it suffices to investigate second moments of $h_{i}^{\star}$, which we proceed with now.
We note that
\begin{equation}
h_{i}^{\star}=\xi(h_{i+1}^{\star})+B+\sum_{j=1}^{K_i^{\star}-1} \xi(h_{i,j}).
\end{equation}
Taking expectations and using that $\xi(h)\leq \hat{\beta} h$ leads to
\begin{equation}
\overline{\mathbb E}_b\left[h_{i}^{\star}\right]\leq \ch{\hat{\beta}}\overline{\mathbb E}_b\left[h_{i+1}^{\star}\right]
+B+\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)].
\end{equation}
Iterating this inequality \ch{$\ell-i$ times} and using that $\overline{\mathbb E}_b\left[h_{\ell}^{\star}\right]\leq B+\nu \mathbb E[\xi(h)]$
\ch{(since $\overline{\mathbb E}_b[K]\leq \mathbb E[K]$)} leads to
\begin{align}
\label{asy-Ehi}
\overline{\mathbb E}_b\left[h_{i}^{\star}\right]
&\leq \hat{\beta}^{\ell-i}\ch{(B+\nu\mathbb E[\xi(h)])}+\sum_{s=0}^{\ell-i-1} \hat{\beta}^s
\big(B+\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)]\big)\\
&\leq \ch{\hat{\beta}^{\ell-i}(B+\nu\mathbb E[\xi(h)])+}\frac{B+\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)]}{1-\hat{\beta}}.\nonumber
\end{align}
Similarly,
\begin{align}
\overline{\mathbb E}_b\left[(h_{i}^{\star})^2\right]
&\leq \hat{\beta}^2\overline{\mathbb E}_b\left[(h_{i+1}^{\star})^2\right]
+2\hat{\beta}\overline{\mathbb E}_b\left[h_{i+1}^{\star}\right]\big(B+\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)]\big)\\
&\qquad+B^2 +2B\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)]
+\mathbb E[(K^{\star}-1)(K^{\star}-2)\mid K^{\star}\leq b]\mathbb E[\xi(h)]^2\nonumber\\
&\qquad+\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)^2].\nonumber
\end{align}
Taking the limit $B\searrow 0$ we thus obtain
\begin{align}
\overline{\mathbb E}_b\left[(h_{i}^{\star})^2\right]
&\leq \hat{\beta}^2\overline{\mathbb E}_b\left[(h_{i+1}^{\star})^2\right]
+2\hat{\beta}\overline{\mathbb E}_b\left[h_{i+1}^{\star}\right]\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)]\\
&\qquad
+\mathbb E[(K^{\star}-1)(K^{\star}-2)\mid K^{\star}\leq b]\mathbb E[\xi(h)]^2+
\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)^2].\nonumber
\end{align}
\ch{We start by analysing the case where $\mathbb E[K^3]<\infty$. By Theorem~\ref{thm-CritExp}, for $\mathbb E[K^3]<\infty$,
\begin{equation}
\mathbb E[\xi(h)] \leq C_0 (\beta-\beta_c)^{1/2},
\end{equation}
for some constant $C_0$. Substituting \eqref{asy-Ehi}, and iterating in a similar fashion as in the proof of \eqref{asy-Ehi},
we obtain that, for $\mathbb E[K^3]<\infty$,
\begin{equation}
\overline{\mathbb E}_b\left[(h_{i}^{\star})^2\right]\leq C(\beta-\beta_c).
\end{equation}
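In more detail (a sketch): as $B\searrow0$, the recursion above takes the form $x_i\leq \hat{\beta}^2 x_{i+1}+r$ with $x_i=\overline{\mathbb E}_b\left[(h_{i}^{\star})^2\right]$ and, by \eqref{asy-Ehi} and $\mathbb E[\xi(h)]^2\leq C_0^2(\beta-\beta_c)$, an inhomogeneity $r\leq C'(\beta-\beta_c)$. Solving the recursion gives
\begin{equation}
x_i\leq \hat{\beta}^{2(\ell-i)}x_{\ell}+\frac{r}{1-\hat{\beta}^2},
\end{equation}
and both terms are $O(\beta-\beta_c)$, since $\hat{\beta}<1$ and $x_\ell$ is itself of order $\mathbb E[\xi(h)]^2$ in this regime (as follows by expanding $h=B+\sum_{j=1}^{K}\xi(h_j)$ and using $\mathbb E[K^2]<\infty$).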
We next extend this analysis to $\tau\in(3,5).$ Note that, for every $a>0$,
\begin{equation}
\mathbb E[(K^{\star})^a\mid K^{\star}\leq b]
=\frac{\mathbb E[K^{a+1}\indic{K\leq b}]}{\mathbb E[K\indic{K\leq b}]},
\end{equation}
so that, for $\tau\in(3,5)$,
\begin{equation}
\mathbb E[(K^{\star})^2\mid K^{\star}\leq b]\leq \frac{C_{3,\tau}}{\mathbb E[K\indic{K\leq b}]} b^{5-\tau}.
\end{equation}
Further, for $\tau\in(3,5)$,
\begin{equation}
\mathbb E[\xi(h)] \leq C_0 (\beta-\beta_c)^{1/(\tau-3)},
\end{equation}
and thus
\begin{equation}
\mathbb E[(K^{\star})^2\mid K^{\star}\leq b] \mathbb E[\xi(h)]^2\leq C b^{5-\tau}\mathbb E[\xi(h)]^2
\leq C (\beta-\beta_c)^{-(5-\tau)/(\tau-3)+2/(\tau-3)}=C(\beta-\beta_c).
\end{equation}}
It can readily be seen that all other contributions to $\overline{\mathbb E}_b\left[(h_{i}^{\star})^2\right]$
are of the same or smaller order. For example, when $\mathbb E[K^2]<\infty$ and using that $1/(\tau-3)\geq 1/2$
for all $\tau\in (3,5)$,
\begin{equation}
\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)^2]\leq C\mathbb E[\xi(h)]^2
=O(\beta-\beta_c),
\end{equation}
while, when $\tau\in (3,4)$,
\begin{equation}
\mathbb E[K^{\star}-1\mid K^{\star}\leq b]\mathbb E[\xi(h)^2]\leq Cb^{4-\tau} \mathbb E[\xi(h)]^{\tau-2}
\ch{\leq}C (\beta-\beta_c)^{-(4-\tau)/(\tau-3)+(\tau-2)/(\tau-3)}=C(\beta-\beta_c)^2.
\end{equation}
We conclude that
\begin{equation}
\overline{\mathbb E}_b\left[(h_{i}^{\star})^2\right]\leq C(\beta-\beta_c).
\end{equation}
\ch{Therefore,
\begin{equation}
\chi(\beta,B)
\geq
c_{\chi}
\sum_{\ell=0}^m
\exp\Big\{-C\ell (\beta-\beta_c) \Big\}\geq c\,(\beta-\beta_c)^{-1},
\end{equation}
for some $c>0$, as required.}
The proof for $\tau=5$ is similar when noting that the logarithmic corrections
\ch{present in $\mathbb E[\xi(h)]^2$ and in $\mathbb E[(K^{\star})^2\mid K^{\star}\leq b]$} precisely cancel.
\end{proof}
We close this section with a heuristic argument for the matching upper bound on
$\boldsymbol{\gamma'}$. Unfortunately, as we discuss in more detail following the heuristics,
we are currently not able to turn this analysis into a rigorous proof.
\paragraph{The upper bound on $\boldsymbol{\gamma'}$: heuristics for $\mathbb E[K^3]<\infty$.}
We can bound from above
\begin{align}
\chi(\beta,B)&\leq
\frac{\mathbb E[D]}{\nu}
\sum_{\ell=0}^\infty (\hat{\beta}\nu)^{\ell}
\mathbb E\left[\exp\Big\{-\sum_{i=1}^{\ell}\log\Big(1+\frac{\cosh(2h_{i}^{\star})-1}{2\cosh(\beta)^2}\Big)\Big\}
\right].
\end{align}
Now, the problem is that $\hat{\beta}\nu>1$ when $\beta>\beta_c$, so that we need to extract
extra decay from the exponential term, \ch{which is technically demanding, and requires us to know
various constants rather precisely. Let us show this heuristically.} It suffices to study large
values of $\ell$, since small values can be bounded in a simple way.
We blindly put the expectation in the exponential, and Taylor expand to obtain that
\begin{align}
\label{expon-bd}
\chi(\beta,B)&\approx
\frac{\mathbb E[D]}{\nu}
\sum_{\ell=0}^\infty (\hat{\beta}\nu)^{\ell}
\exp\Big\{-\sum_{i=1}^{\ell}\frac{\mathbb E\left[(h_{i}^{\star})^2\right]}{\cosh(\beta)^2}\Big\}.
\end{align}
We compute that
\begin{equation}
\cosh(\beta)^2=\frac{1}{1-\hat{\beta}^2}.
\end{equation}
Since
\begin{equation}
h_{i}^{\star}\approx \hat{\beta} h_{i+1}^{\star}+\sum_{j=1}^{K_i^{\star}-1} \xi(h_{i,j}),
\end{equation}
we have
\begin{equation}
\mathbb E\left[h_{i}^{\star}\right]\approx \frac{\mathbb E[K^{\star}-1]}{1-\hat{\beta}} \mathbb E[\xi(h)],
\end{equation}
and
\begin{equation}
\mathbb E\left[(h_{i}^{\star})^2\right]\approx \frac{2\hat{\beta}\mathbb E[K^{\star}-1]^2
+\mathbb E[(K^{\star}-1)(K^{\star}-2)](1-\hat{\beta})}{(1-\hat{\beta}^2)(1-\hat{\beta})} \mathbb E[\xi(h)]^2
+\frac{\mathbb E[K^{\star}-1]}{1-\hat{\beta}^2}\mathbb E[\xi(h)^2].
\end{equation}
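A sketch of where this comes from: treating $\mathbb E\left[(h_{i}^{\star})^2\right]$ as independent of $i$ and inserting the approximation for $\mathbb E\left[h_{i}^{\star}\right]$ above gives the fixed-point equation
\begin{equation}
x=\hat{\beta}^2x+\Big(\frac{2\hat{\beta}\,\mathbb E[K^{\star}-1]^2}{1-\hat{\beta}}
+\mathbb E[(K^{\star}-1)(K^{\star}-2)]\Big)\mathbb E[\xi(h)]^2+\mathbb E[K^{\star}-1]\mathbb E[\xi(h)^2],
\end{equation}
and dividing by $1-\hat{\beta}^2$ yields the display above.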
Ignoring all error terms in the proof of Lemma \ref{lem-boundxih2}
shows that
\begin{equation}
\mathbb E[\xi(h)^2]\approx \frac{\nu_2\hat{\beta}^2}{1-\hat{\beta}} \mathbb E[\xi(h)]^2=C_2\mathbb E[\xi(h)]^2,
\end{equation}
so in total we arrive at (\ch{also using that $\hat{\beta}\approx 1/\nu$})
\begin{equation}
\label{h-star-squared}
\mathbb E\left[(h_{i}^{\star})^2\right]\approx
\frac{\nu_3(1-\hat{\beta})/\nu+3\nu_2^2/\nu^3}{(1-\hat{\beta}^2)(1-\hat{\beta})} \mathbb E[\xi(h)]^2.
\end{equation}
As a result,
\begin{equation}
\label{h-star-squared-rep}
\frac{\mathbb E\left[(h_{i}^{\star})^2\right]}{\cosh(\beta)^2}\approx
\frac{\nu_3(1-\hat{\beta})/\nu+3\nu_2^2/\nu^3}{1-\hat{\beta}}\mathbb E[\xi(h)]^2.
\end{equation}
Ignoring error terms in the computation in Lemma \ref{lem-boundxih3} shows that
\begin{equation}
\mathbb E[\xi(h)^3] \approx C_3 \mathbb E[\xi(h)]^3,
\end{equation}
where
\begin{equation}
C_3 = \frac{\hat{\beta}^3}{1-\hat{\beta}^3 \nu} \left(\nu_3 + 3 \nu_2 C_2\right)
\approx \frac{\hat{\beta}^3}{1-\hat{\beta}^2} \left(\nu_3 + 3 \nu_2 C_2\right)
=\frac{\hat{\beta}^3}{(1-\hat{\beta}^2)(1-\hat{\beta})} \left(\nu_3(1-\hat{\beta}) + 3 \ch{(\nu_2/\nu)^2}\right),
\end{equation}
since $\ch{\hat{\beta}\approx 1/\nu}$. Further, again ignoring error terms in
\eqref{eq-UpperExi} \ch{and Taylor expanding to third order} shows that
\begin{equation}\label{eq-UpperExi-rep}
\mathbb E[\xi(h)] \approx \hat{\beta} \nu \mathbb E[\xi(h)] - C_1 \mathbb E[\xi(h)]^{3},
\end{equation}
where
\begin{equation}
C_1=-\frac{\xi'''(0)}{6} \big(\nu C_3+3\nu_2C_2+\nu_3\big),
\end{equation}
and $\xi'''(0)=-2\hat{\beta}(1-\hat{\beta}^2)$. Substituting the definitions for $C_2$ and $C_3$ yields
\begin{align}
C_1&=\frac{\hat{\beta}(1-\hat{\beta}^2)}{3}\big(\nu C_3+3\nu_2C_2+\nu_3\big)\\
&=\frac{\hat{\beta}}{3(1-\hat{\beta})} \big(\nu\hat{\beta}^3\nu_3(1-\hat{\beta})+ 3\nu\hat{\beta}^3\ch{(\nu_2/\nu)}^2+3\nu_2^2\hat{\beta}^2(1-\hat{\beta}^2)+\nu_3(1-\hat{\beta})(1-\hat{\beta}^2)\big)\nonumber\\
&=\frac{\hat{\beta}}{3(1-\hat{\beta})} \big(\nu_3(1-\hat{\beta})+ \ch{3\nu_2^2\hat{\beta}^2}\big).\nonumber
\end{align}
Thus, we arrive at
\begin{equation}
\mathbb E[\xi(h)]^2\approx \frac{\hat{\beta} \nu-1}{C_1},
\end{equation}
so that substitution into \eqref{h-star-squared-rep} leads to
\begin{align}
\label{the-miracle}
\frac{\mathbb E\left[(h_{i}^{\star})^2\right]}{\cosh(\beta)^2}
&\approx (\hat{\beta} \nu-1)
\frac{\ch{3\big(\nu_3(1-\hat{\beta})/\nu+3\nu_2^2/\nu^3\big)}}
{\hat{\beta}\big(\nu_3(1-\hat{\beta})+ \ch{3\nu_2^2\hat{\beta}^2}\big)}\ch{=3(\hat{\beta} \nu-1)}.
\end{align}
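The last equality can be checked directly: substituting $\hat{\beta}\approx1/\nu$ in the denominator gives
\begin{equation}
\hat{\beta}\big(\nu_3(1-\hat{\beta})+3\nu_2^2\hat{\beta}^2\big)
\approx\frac{\nu_3(1-\hat{\beta})}{\nu}+\frac{3\nu_2^2}{\nu^3},
\end{equation}
which is precisely one third of the numerator.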
\ch{We conclude that
\begin{equation}
\label{cancel-pos-neg}
(\hat{\beta} \nu) \exp{\big\{-\frac{\mathbb E\left[(h_{i}^{\star})^2\right]}{\cosh(\beta)^2}\big\}}
\leq \big(1+(\hat{\beta} \nu-1)\big){\rm e}^{-3(\hat{\beta} \nu-1)}\leq {\rm e}^{-2(\hat{\beta} \nu-1)}.
\end{equation}
This suggests that
\begin{equation}
\chi(\beta,B)\leq
\frac{\mathbb E[D]}{\nu}
\sum_{\ell=0}^\infty {\rm e}^{-2\ell (\hat{\beta} \nu-1)}=O((\hat{\beta} \nu-1)^{-1}),
\end{equation}
as required. Also, using \eqref{expon-bd}, this suggests that
\begin{equation}
\lim_{\beta\searrow \beta_c} (\hat{\beta} \nu-1)\chi(\beta,0^+)=\mathbb E[D]/(2\nu),
\end{equation}
where the constant is precisely half the one for the subcritical susceptibility (see \eqref{chi-comp-highT}).
It can be seen by an explicit computation that the same factor $1/2$ is also present in the same
form for the Curie-Weiss model.}
\col{
Indeed for the Boltzmann\chs{-}Gibbs measure with Hamiltonian
$H_n(\sigma) = -\frac{1}{2\chs{n}}\sum_{i,j}\sigma_i\sigma_j $ one has $\beta_c=1$ and
a susceptibility $\chi(\beta,0^+)=1/(1-\beta)$ for $\beta<\beta_c$,
$\chi(\beta,0^+) = (1-m^2)/(1-\beta(1-m^2))$ with $m$ the non-zero solution
of $m=\tanh(\beta m)$ for $\beta>\beta_c$.} \chs{Expanding this gives $m^2=3(\beta-1)(1+o(1))$ for $\beta\searrow1$ and hence $\chi(\beta,0^+)=(1+o(1))/(1-\beta(1-3(\beta-1)))=(1+o(1))/(2(\beta-1))$.}
\ch{It is a non-trivial task to turn the heuristic of this section into a proof, for several reasons:
(a) We need to be able to justify the step where we put expectations in the exponential.
While we are dealing with random variables with small means, they are not independent, so this is
demanding; (b) We need to know the constants very precisely, as we are using the fact that a
positive and negative term cancel in \eqref{cancel-pos-neg}. The analysis performed in the
previous sections does not give optimal control over these constants, so this step also requires substantial
work.}
\ch{The above heuristic does not apply to $\tau\in (3,5]$. However, the constant in \eqref{the-miracle}
is \emph{always} equal to 3, irrespective of the degree distribution. This suggests that also for $\tau\in(3,5]$, we should have
$\boldsymbol{\gamma'}\leq1$.}
\paragraph*{Acknowledgements.}
The work of SD and RvdH is supported in part by The Netherlands Organisation for
Scientific Research (NWO).
CG acknowledge\chs{s} financial support by the Italian Research Funding Agency (MIUR)
through the FIRB project grant n.\ RBFR10N90W.
|
2,869,038,153,874 | arxiv | \section*{Introduction}
Let $K$ be an imaginary quadratic field and
$\mathcal{O}_K=\mathbb{Z}[\theta]$ be the ring of integers with
$\theta$ in the complex upper half plane $\mathfrak{H}$. We denote
the Hilbert class field and the ray class field modulo $N$ of $K$
for a positive integer $N$ by $H$ and $K_{(N)}$, respectively. Hasse
(\cite{Hasse} or \cite[Chapter 10 Corollary to Theorem 7]{Lang})
found in 1927 that for any integral ideal $\mathfrak{a}$ of $K$,
$K_{(N)}$ is generated over $H$ by adjoining the value of the Weber
function for the elliptic curve $\mathbb{C}/\mathfrak{a}$ at a
generator of the cyclic $\mathcal{O}_K$-module
$(1/N)\mathfrak{a}/\mathfrak{a}$. It requires good understanding of
the arithmetic of elliptic curves, which is formulated by the theory
of complex multiplication (\cite[Chapter 10]{Lang} or \cite[Chapter
5]{Shimura}). Together with Shimura's reciprocity law which reveals
a remarkable relationship between class field theory and modular
function fields, the theory of Shimura's canonical model allows us
to generate $K_{(N)}$ over $K$ by the specialization of a certain
modular function field. In particular, Cho-Koo (\cite[Corollary
5.2]{C-K}) showed that the singular value of a Hauptmodul with
rational Fourier coefficients on some modular curve generates
$K_{(N)}$ over $K$. For instance, Cho-Koo-Park (\cite[Theorem
13]{C-K-P}) considered the case $N=6$ in terms of Ramanujan's cubic
continued fraction. Also Koo-Shin further provided in
\cite[pp.161--162]{K-S} appropriate Hauptmoduli for this purpose.
\par
It seems to be a difficult problem to construct a ray class
invariant (as a primitive generator of $K_{(N)}$) over $K$ by means
of values of a transcendental function which can be applied to all
$K$ and $N$. In 1964 Ramachandra (\cite[Theorem 10]{Ramachandra}) at
last found universal generators of ray class fields of arbitrary
moduli by applying the Kronecker limit formula. However his
invariants involve overly complicated products of high powers of
singular values of the Klein forms and singular values of the
discriminant $\Delta$-function. On the other hand, Schertz
(\cite[Theorems 3 and 4]{Schertz}) attempted to find simple and
better answers for practical use with similar ideas. The simplest
generators conjectured by Schertz (\cite[p.386]{Schertz}) are
singular values of a Siegel function, and Jung-Koo-Shin
(\cite[Theorem 2.4]{J-K-S}) showed that his conjectural generators
are the right ones at least over $H$ for
$K\neq\mathbb{Q}(\sqrt{-1}),\mathbb{Q}(\sqrt{-3})$.
\par
Since the primitive element theorem guarantees the existence of a
simple generator of $K_{(N)}$ over $K$, one might try to combine
Hasse's two generators to get a ray class invariant. Cho-Koo
(\cite[Corollary 5.5]{C-K}) recently succeeded in obtaining such a
generator by showing that the singular value of a Weber function is
an algebraic integer and then applying the result of Gross-Zagier
(\cite{G-Z} or \cite[Theorem 13.28]{Cox}). Koo-Shin (\cite[Theorems
9.8 and 9.10]{K-S}) further investigated the problem over $K$ in a
completely different point of view by using both singular values of
the elliptic modular function $j$ and Siegel functions.
\par
For any pair $(r_1,r_2)\in\mathbb{Q}^2\setminus\mathbb{Z}^2$ we
define a \textit{Siegel function} $g_{(r_1,r_2)}(\tau)$ on
$\tau\in\mathfrak{H}$ by the following infinite product expansion
\begin{eqnarray}\label{FourierSiegel}
g_{(r_1,r_2)}(\tau)=-q_\tau^{(1/2)\textbf{B}_2(r_1)}e^{\pi
ir_2(r_1-1)}(1-q_z)\prod_{n=1}^{\infty}(1-q_\tau^nq_z)(1-q_\tau^nq_z^{-1}),
\end{eqnarray}
where $\textbf{B}_2(X)=X^2-X+1/6$ is the second Bernoulli
polynomial, $q_\tau=e^{2\pi i\tau}$ and $q_z=e^{2\pi iz}$ with
$z=r_1\tau+r_2$. Then it is a modular unit in the sense of
\cite[p.36]{K-L}. Since its Fourier coefficients are quite small, we
are able to estimate and compare the values of the function in order
to derive our main theorem.
\par
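For example, for the pair $(r_1,r_2)=(0,1/N)$, which is the relevant one for our main
theorem below, direct substitution into (\ref{FourierSiegel}) gives, with $\zeta_N=e^{2\pi i/N}$,
\begin{equation*}
g_{(0,1/N)}(\tau)=-q_\tau^{1/12}e^{-\pi i/N}(1-\zeta_N)
\prod_{n=1}^{\infty}(1-q_\tau^n\zeta_N)(1-q_\tau^n\zeta_N^{-1}),
\end{equation*}
because $\textbf{B}_2(0)=1/6$ and $q_z=\zeta_N$.
\par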
Let $\mathfrak{a}=[\omega_1,\omega_2]$ be a fractional ideal of $K$
not containing $1$, where $\{\omega_1,\omega_2\}$ is an oriented
basis such that $\omega_1/\omega_2\in\mathfrak{H}$. Writing
$1=r_1\omega_1+r_2\omega_2$ for some
$(r_1,r_2)\in\mathbb{Q}^2\setminus\mathbb{Z}^2$ we denote
\begin{equation*}
g(1,[\omega_1,\omega_2])=g_{(r_1,r_2)}(\omega_1/\omega_2).
\end{equation*}
When a product of these values becomes a unit, we call it an
\textit{elliptic unit}. The $12$-th power of the above value depends
only on $\mathfrak{a}$ itself (\cite[Chapter 2 Remark to Theorem
1.2]{K-L}). So we may write $g^{12}(1,\mathfrak{a})$ instead of
$g^{12}(1,[\omega_1,\omega_2])$.
\par
For a nontrivial integral ideal $\mathfrak{f}$ of $K$, let
$I_K(\mathfrak{f})$ be the group of fractional ideals of $K$ which
are relatively prime to $\mathfrak{f}$, and $P_{K,1}(\mathfrak{f})$
be the subgroup of $I_K(\mathfrak{f})$ generated by the principal
ideals $\alpha\mathcal{O}_K$ for $\alpha\in\mathcal{O}_K$ which
satisfy $\alpha\equiv1\pmod{\mathfrak{f}}$. Then the ideal class
group $I_K(\mathfrak{f})/P_{K,1}(\mathfrak{f})$ is isomorphic to the
Galois group of the ray class field $K_\mathfrak{f}$ modulo
$\mathfrak{f}$ over $K$ (\cite[pp.116--118]{Silverman}). Now we
consider the value
\begin{equation*}
g^{12N(\mathfrak{f})}(1,\mathfrak{f}),
\end{equation*}
where $N(\mathfrak{f})$ is the smallest positive integer in
$\mathfrak{f}$. It belongs to $K_\mathfrak{f}$ (\cite[Chapter 2
Proposition 1.3 and Chapter 11 Theorem 1.1]{K-L}). Let
$\sigma:I_K(\mathfrak{f})/P_{K,1}(\mathfrak{f})\rightarrow\mathrm{Gal}(K_\mathfrak{f}/K)$
be the Artin map. Then for a ray class $C\in
I_K(\mathfrak{f})/P_{K,1}(\mathfrak{f})$, $\sigma(C)$ satisfies the
rule
\begin{equation}\label{rule}
g^{12N(\mathfrak{f})}(1,\mathfrak{f})^{\sigma(C)}
=g^{12N(\mathfrak{f})}(1,\mathfrak{f}\mathfrak{c}^{-1}),
\end{equation}
where $\mathfrak{c}$ is a representative integral ideal of $C$ by
the theory of complex multiplication (\cite[pp.235--236]{K-L}). In
our case we take $\mathfrak{f}=N\mathcal{O}_K$ for an integer $N$
($\geq2$). In this paper, as Schertz conjectured, we shall show that
the singular value
\begin{equation*}
g^{12N}(1,N\mathcal{O}_K)=g_{(0,1/N)}^{12N}(\theta)
\end{equation*}
alone, or any one of its integral powers generates $K_{(N)}$ over
$K$ ($\neq\mathbb{Q}(\sqrt{-1}),\mathbb{Q}(\sqrt{-3})$) (Theorem
\ref{primitive} and Remark \ref{exception}). While the formula
(\ref{rule}) provides all conjugates of $g_{(0,1/N)}^{12N}(\theta)$,
it is inconvenient for practical use because we can hardly describe
bases of representative ideals in general. Therefore, rather than
working with actions of $\mathrm{Gal}(K_{(N)}/K)$ directly by
(\ref{rule}) we will manipulate actions of $\mathrm{Gal}(H/K)$ and
$\mathrm{Gal}(K_{(N)}/H)$ separately by following Gee-Stevenhagen's
idea (\cite[$\S$3, 9, 10]{Gee} or \cite[$\S$3, 6]{Stevenhagen}).
\section{Fields of modular functions}
This section will be devoted to briefly reviewing modular function
fields and the actions of Galois groups in terms of Siegel functions.
For the full description of the modularity of Siegel functions we
refer to \cite{K-S} or \cite{K-L}.
\par
For a positive integer $N$, let $\zeta_N=e^{2\pi i/N}$ and
\begin{eqnarray*}
\Gamma(N)=\bigg\{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in
\mathrm{SL}_2(\mathbb{Z})~;~
\begin{pmatrix}a&b\\c&d\end{pmatrix}\equiv
\begin{pmatrix}1&0\\0&1\end{pmatrix}\pmod{N}\bigg\}
\end{eqnarray*}
be the principal congruence subgroup of level $N$ of
$\mathrm{SL}_2(\mathbb{Z})$. The group $\Gamma(N)$ acts on
$\mathfrak{H}$ by fractional linear transformations, and the orbit
space $Y(N)=\Gamma(N)\backslash\mathfrak{H}$ can be given a
structure of a Riemann surface. Furthermore, $Y(N)$ can be
compactified by adding cusps so that
$X(N)=\Gamma(N)\backslash\mathfrak{H}^*$ with
$\mathfrak{H}^*=\mathfrak{H}\cup\mathbb{P}^1(\mathbb{Q})$ becomes a
compact Riemann surface (or an algebraic curve), which we call the
\textit{modular curve of level $N$} (\cite[Chapter 2]{D-S} or
\cite[$\S$1.5]{Shimura}).
\par
Meromorphic functions on $X(N)$ are called \textit{modular functions
of level $N$}. In particular, we are interested in the field of
modular functions of level $N$ defined over the $N$-th cyclotomic
field $\mathbb{Q}(\zeta_N)$ which is denoted by $\mathcal{F}_N$.
Then it is well-known that the extension
$\mathcal{F}_N/\mathcal{F}_1$ is Galois and
\begin{equation*}
\mathrm{Gal}(\mathcal{F}_N/\mathcal{F}_1)\cong\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm1_2\},
\end{equation*}
whose action is given as follows: We can decompose an element
$\alpha\in\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm1_2\}$ into
$\alpha=\alpha_1\cdot\alpha_2$ for some
$\alpha_1\in\mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm1_2\}$ and
$\alpha_2=\left(\begin{smallmatrix}1&0\\0&d\end{smallmatrix}\right)$.
The action of $\alpha_1$ is defined by a fractional linear
transformation, and $\alpha_2$ acts by the rule
\begin{equation*}
\sum_{n>-\infty} c_nq_\tau^{n/N}\mapsto \sum_{n>-\infty}
c_n^{\sigma_d}q_\tau^{n/N},
\end{equation*}
where $\sum_{n>-\infty} c_nq_\tau^{n/N}$ is the Fourier expansion of
a function in $\mathcal{F}_N$ and $\sigma_d$ is the automorphism of
$\mathbb{Q}(\zeta_N)$ defined by $\zeta_N^{\sigma_d}=\zeta_N^d$
(\cite[Chapter 6 Theorem 3]{Lang}).
\par
It is well-known that the fields $\mathcal{F}_N$ are described by
$j(\tau)$ and the Fricke functions (\cite[Chapter 6 Corollary
1]{Lang} or \cite[Proposition 6.9]{Shimura}). However, we restate
these fields in terms of Siegel functions for later use. First, we
need some transformation formulas and modularity criterion for
Siegel functions. For $x\in\mathbb{R}$ we define $\langle x\rangle$
by the fractional part of $x$ such that $0\leq \langle x\rangle<1$.
\begin{proposition}\label{F_N}
Let $N\geq 2$. For
$(r_1,r_2)\in(1/N)\mathbb{Z}^2\setminus\mathbb{Z}^2$ the function
$g_{(r_1,r_2)}^{12N}(\tau)$ satisfies the relation
\begin{equation*}
g_{(r_1,r_2)}^{12N}(\tau)=g_{(-r_1,-r_2)}^{12N}(\tau)=g_{(\langle
r_1\rangle,\langle r_2\rangle)}^{12N}(\tau).
\end{equation*}
It belongs to $\mathcal{F}_N$ and
$\alpha\in\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm1_2\}$ acts on
it by
\begin{equation*}
g_{(r_1,r_2)}^{12N}(\tau)^\alpha= g_{(r_1,r_2)\alpha}^{12N}(\tau).
\end{equation*}
Also we have
\begin{equation*}
\mathcal{F}_N=\mathbb{Q}(\zeta_N)(j(\tau),~g_{(1/N,0)}^{12N}(\tau),~
g_{(0,1/N)}^{12N}(\tau)).
\end{equation*}
\end{proposition}
\begin{proof}
See \cite[Proposition 2.4, Theorems 2.5 and 4.2]{K-S}.
\end{proof}
We set
\begin{equation*}
\mathcal{F}=\bigcup_{N=1}^\infty \mathcal{F}_N.
\end{equation*}
Passing to the projective limit of exact sequences
\begin{equation*}
1\longrightarrow
\{\pm1_2\}\longrightarrow\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})\longrightarrow\mathrm{Gal}(\mathcal{F}_N/\mathcal{F}_1)\longrightarrow1
\end{equation*}
for all $N$ ($\geq1$), we obtain an exact sequence
\begin{equation}\label{functionexact}
1\longrightarrow
\{\pm1_2\}\longrightarrow\prod_{p~:~\textrm{primes}}\mathrm{GL}_2(\mathbb{Z}_p)\longrightarrow\mathrm{Gal}(\mathcal{F}/\mathcal{F}_1)\longrightarrow1.
\end{equation}
For every $u=(u_p)_p\in\prod_p\mathrm{GL}_2(\mathbb{Z}_p)$ and a
positive integer $N$, there exists an integral matrix $\alpha$ with
$\det(\alpha)>0$, i.e.\ $\alpha\in\mathrm{GL}_2^+(\mathbb{Q})$, such that
$\alpha\equiv u_p\pmod{N\mathbb{Z}_p}$ for all $p$ dividing $N$ by
the Chinese remainder theorem. The action of $u$ on $\mathcal{F}_N$
can be described by the action of $\alpha$ (\cite[Proposition
6.21]{Shimura}).
\section{Shimura's reciprocity law}
We shall develop an algorithm for finding all conjugates of the
singular value of a modular function, from which we can determine
all conjugates of $g_{(0,1/N)}^{12N}(\theta)$. To this end we adopt
Gee-Stevenhagen's idea which explains Shimura's reciprocity law
explicitly for practical use.
\par
Let $\mathbb{A}_\mathbb{Q}^\mathrm{f}=\prod_p^\prime\mathbb{Q}_p$
denote the ring of finite adeles. Here, the restricted product is
taken with respect to the subrings
$\mathbb{Z}_p\subset\mathbb{Q}_p$. Every
$x\in\mathrm{GL}_2(\mathbb{A}_\mathbb{Q}^\mathrm{f})$ can be written
as
\begin{equation*}
x=u\cdot\alpha\quad\textrm{with}~u\in\prod_p\mathrm{GL}_2(\mathbb{Z}_p)~\textrm{and}~
\alpha\in\mathrm{GL}_2^+(\mathbb{Q}),
\end{equation*}
since the class number of $\mathbb{Q}$ is one (\cite[Lemma
6.19]{Shimura}). Such a decomposition $x=u\cdot\alpha$ determines a
group action of $\mathrm{GL}_2(\mathbb{A}_\mathbb{Q}^\mathrm{f})$ on
$\mathcal{F}$ by
\begin{equation*}
h^x=h^u\circ\alpha,
\end{equation*}
where $h^u$ is given by the exact sequence (\ref{functionexact})
(\cite[pp.149--150]{Shimura}). Then we have the following
\textit{Shimura's exact sequence}
\begin{equation*}
1\longrightarrow\mathbb{Q}^*\longrightarrow\mathrm{GL}_2(\mathbb{A}_\mathbb{Q}^\mathrm{f})
\longrightarrow\mathrm{Aut}(\mathcal{F})\longrightarrow1
\end{equation*}
(\cite[Theorem 6.23]{Shimura}).
\par
Let $K$ be an imaginary quadratic field of discriminant $d_K$. From
now on we fix
\begin{eqnarray}\label{theta}
\theta=\left\{\begin{array}{ll}\sqrt{d_K}/2&\textrm{for}~d_K\equiv0\pmod{4}\vspace{0.1cm}\\
(-1+\sqrt{d_K})/2&\textrm{for}~d_K\equiv1\pmod{4},\end{array}\right.
\end{eqnarray}
which satisfies $\mathcal{O}_K=\mathbb{Z}[\theta]$. Then we have
\begin{equation*}
\min(\theta,\mathbb{Q})=X^2+B_\theta X+C_\theta
=\left\{\begin{array}{ll} X^2-d_K/4 &
\textrm{if}~d_K\equiv0\pmod{4}\vspace{0.1cm}\\
X^2+X+(1-d_K)/4 & \textrm{if}~d_K\equiv1\pmod{4}.
\end{array}\right.
\end{equation*}
We use the notation $K_p=K\otimes_\mathbb{Q}\mathbb{Q}_p$ for each
prime $p$ and denote the group of finite ideles of $K$ by
$(\mathbb{A}_K^\mathrm{f})^*=\prod_p^\prime K_p^*$, where the
restricted product is taken with respect to the subgroups
$\mathcal{O}_p^*=(\mathcal{O}_K\otimes_\mathbb{Z}\mathbb{Z}_p)^*$ of
$K_p^*$. Let $[\cdot,K]$ denote the Artin map on
$(\mathbb{A}_K^\mathrm{f})^*$. Then the class field theory on $K$ is
summarized in the following exact sequence
\begin{equation*}
1\longrightarrow
K^*\longrightarrow(\mathbb{A}_K^\mathrm{f})^*\stackrel{[\cdot,K]}
{\longrightarrow}\mathrm{Gal}(K^\mathrm{ab}/K)\longrightarrow1,
\end{equation*}
where $K^\mathrm{ab}$ is the maximal abelian extension of $K$
(\cite[$\S$5.2]{Shimura} or \cite[Chapter II Theorem
3.5]{Silverman}). The main theorem of the theory of complex
multiplication states that the value $j(\theta)$ generates $H$ over
$K$, and the sequence
\begin{equation}\label{classfieldexact}
1\longrightarrow
\mathcal{O}_K^*\longrightarrow\prod_p\mathcal{O}_p^*\stackrel
{[\cdot,K]}{\longrightarrow}\mathrm{Gal}(K^\mathrm{ab}/K(j(\theta)))\longrightarrow1
\end{equation}
is exact (\cite[Theorem 5.7]{Shimura}). Furthermore, $K_{(N)}$ is
none other than the field $K(\mathcal{F}_N(\theta))$ which is the
extension field of $K$ obtained by adjoining all singular values
$h(\theta)$ for $h\in\mathcal{F}_N$ which is defined and finite at
$\theta$ (\cite[Chapter 10 Corollary to Theorem 2]{Lang} or
\cite[Proposition 6.33]{Shimura}).
\par
For each prime $p$ we define
\begin{equation*}
(g_\theta)_p:K_p^*\longrightarrow \mathrm{GL}_2(\mathbb{Q}_p)
\end{equation*}
as the injection that sends $x_p\in K_p^*$ to the matrix in
$\mathrm{GL}_2(\mathbb{Q}_p)$ which represents the multiplication by
$x_p$ with respect to the $\mathbb{Q}_p$-basis
$\left(\begin{smallmatrix}\theta\\1\end{smallmatrix}\right)$ for
$K_p$. More precisely, if
$\mathrm{min}(\theta,\mathbb{Q})=X^2+B_\theta X+C_\theta$, then for
$s_p,~t_p\in\mathbb{Q}_p$ we can describe the map as
\begin{equation*}
(g_\theta)_p~:~s_p\theta+t_p\mapsto\begin{pmatrix}t_p-B_\theta
s_p&-C_\theta s_p\\s_p&t_p\end{pmatrix}.
\end{equation*}
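Indeed, since $\theta^2=-B_\theta\theta-C_\theta$, one checks directly that
\begin{equation*}
(s_p\theta+t_p)\begin{pmatrix}\theta\\1\end{pmatrix}
=\begin{pmatrix}s_p\theta^2+t_p\theta\\ s_p\theta+t_p\end{pmatrix}
=\begin{pmatrix}t_p-B_\theta s_p&-C_\theta s_p\\s_p&t_p\end{pmatrix}
\begin{pmatrix}\theta\\1\end{pmatrix},
\end{equation*}
which justifies the matrix displayed above.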
On $(\mathbb{A}_K^\mathrm{f})^*$ we have an injection
\begin{equation*}
g_\theta=\prod_p(g_\theta)_p~:~(\mathbb{A}_K^\mathrm{f})^*\longrightarrow{\prod_p}^\prime\mathrm{GL}_2(\mathbb{Q}_p),
\end{equation*}
where the restricted product is taken with respect to the subgroups
$\mathrm{GL}_2(\mathbb{Z}_p)$ of $\mathrm{GL}_2(\mathbb{Q}_p)$.
Combining (\ref{functionexact}) and (\ref{classfieldexact}) we get
the diagram
\begin{eqnarray}\label{exactdiagram}
\begin{array}{ccccccccc}
1& \longrightarrow & \mathcal{O}_K^* &\longrightarrow
&\prod_p\mathcal{O}_p^*& \stackrel{[\cdot,K]}{\longrightarrow} &
\mathrm{Gal}(K^\mathrm{ab}/K(j(\theta)))
&\longrightarrow &1\\
&&&&\phantom{\bigg\downarrow}\big\downarrow g_\theta&&&&\\
1 & \longrightarrow &\{\pm1_2\} &\longrightarrow
&\prod_p\mathrm{GL}_2(\mathbb{Z}_p)& \longrightarrow &
\mathrm{Gal}(\mathcal{F}/\mathcal{F}_1) &\longrightarrow &1.
\end{array}
\end{eqnarray}
Then \textit{Shimura's reciprocity law} says that for
$h\in\mathcal{F}$ and $x\in\prod_p\mathcal{O}_p^*$
\begin{equation}\label{reciprocity}
h(\theta)^{[x^{-1},K]}=h^{(g_\theta(x))}(\theta)
\end{equation}
(\cite[Theorem 6.31]{Shimura}).
\par
Let $Q=[a,b,c]=aX^2+bXY+cY^2\in\mathbb{Z}[X,Y]$ be a primitive
positive definite quadratic form of discriminant $d_K$. Under an
appropriate equivalence relation these forms determine the group
$\mathrm{C}(d_K)$, called the \textit{form class group of
discriminant $d_K$}. In particular, the unit element is the class
containing
\begin{eqnarray}\label{unitform}
\left\{\begin{array}{ll} \textrm{[}1,0,-d_K/4\textrm{]} &
\textrm{for}~d_K\equiv0\pmod{4}\vspace{0.1cm}\\
\textrm{[}1,1,(1-d_K)/4\textrm{]} & \textrm{for}~
d_K\equiv1\pmod{4},\end{array}\right.
\end{eqnarray}
and the inverse of the class containing $[a,b,c]$ is the class
containing $[a,-b,c]$ (\cite[Theorem 3.9]{Cox}). We identify
$\mathrm{C}(d_K)$ with the set of all \textit{reduced quadratic
forms}, which are characterized by the conditions
\begin{equation}\label{reduced}
(-a<b\leq a<c\quad\textrm{or}\quad 0\leq b\leq
a=c)\quad\textrm{and}\quad b^2-4ac=d_K
\end{equation}
(\cite[Theorem 2.9]{Cox}). Note that the above two conditions for
reduced quadratic forms imply
\begin{equation}\label{bound a}
a\leq\sqrt{-d_K/3}
\end{equation} (\cite[p.29]{Cox}).
It is well-known that $\mathrm{C}(d_K)$ is isomorphic to
$\mathrm{Gal}(H/K)$ (\cite[Theorem 7.7]{Cox}). Gee and Stevenhagen
found an idele $x_Q\in(\mathbb{A}_K^\mathrm{f})^*$ such that
\begin{equation*}
[x_Q,K]|_H=[a,b,c].
\end{equation*}
\begin{proposition}\label{idele}
Let $Q=[a,b,c]$ be a primitive positive definite quadratic form of
discriminant $d_K$. We put
\begin{equation*}
\theta_Q=(-b+\sqrt{d_K})/2a.
\end{equation*}
Furthermore, for each prime $p$ we define $x_p$ as
\begin{eqnarray*}
x_p=\left\{\begin{array}{ll}a&\textrm{if}~p\nmid
a\vspace{0.1cm}\\
a\theta_Q&\textrm{if}~p\mid a~\textrm{and}~p\nmid c\vspace{0.1cm}\\
a(\theta_Q-1)&\textrm{if}~p\mid a~\textrm{and}~p\mid c.
\end{array}\right.
\end{eqnarray*}
Then for $x_Q=(x_p)_p\in(\mathbb{A}_K^\mathrm{f})^*$ the Galois
action of the Artin symbol $[x_Q,K]$ satisfies the relation
\begin{equation*}
j(\theta)^{[a,b,c]}=j(\theta)^{[x_Q,K]}.
\end{equation*}
\end{proposition}
\begin{proof}
See \cite[Lemma 19]{Gee} or \cite[$\S$6]{Stevenhagen}.
\end{proof}
The next proposition gives the action of $[x_Q^{-1},K]$ on
$K^\mathrm{ab}$ by using Shimura's reciprocity law
(\ref{reciprocity}).
\begin{proposition}\label{Hilbertclass}
Let $Q=[a,b,c]$ be a primitive positive definite quadratic form of
discriminant $d_K$ and $\theta_Q$ be as in Proposition
\textup{\ref{idele}}. Define
$u_Q=(u_p)_p\in\prod_p\mathrm{GL}_2(\mathbb{Z}_p)$ as
\begin{itemize}
\item[] Case 1 : $d_K\equiv0\pmod{4}$
\begin{eqnarray}\label{u1}
u_p=\left\{\begin{array}{ll}
\begin{pmatrix}a&b/2\\0&1\end{pmatrix}&\textrm{if}~p\nmid a\vspace{0.1cm}\\
\begin{pmatrix}-b/2&-c\\1&0\end{pmatrix}&\textrm{if}~p\mid a~\textrm{and}~p\nmid c\vspace{0.1cm}\\
\begin{pmatrix}-a-b/2&-c-b/2\\1&-1\end{pmatrix}&\textrm{if}~p\mid a~\textrm{and}~p\mid c
\end{array}\right.
\end{eqnarray}
\item[] Case 2 : $d_K\equiv1\pmod{4}$
\begin{eqnarray}\label{u2}
u_p=\left\{\begin{array}{ll}
\begin{pmatrix}a&(b-1)/2\\0&1\end{pmatrix}&\textrm{if}~p\nmid a\vspace{0.1cm}\\
\begin{pmatrix}-(b+1)/2&-c\\1&0\end{pmatrix}&\textrm{if}~p\mid a~\textrm{and}~p\nmid c\vspace{0.1cm}\\
\begin{pmatrix}-a-(b+1)/2&-c+(1-b)/2\\1&-1\end{pmatrix}&\textrm{if}~p\mid a~\textrm{and}~p\mid
c.
\end{array}\right.
\end{eqnarray}
\end{itemize}
Then for $h\in\mathcal{F}$ which is defined and finite at $\theta$
we have
\begin{equation*}
h(\theta)^{[x_Q^{-1},K]}=h^{u_Q}(\theta_Q).
\end{equation*}
\end{proposition}
\begin{proof}
See \cite[Lemma 20]{Gee} or \cite[$\S$6]{Stevenhagen}.
\end{proof}
For each positive integer $N$ we define the matrix group
\begin{equation*}
W_{N,\theta}=\bigg\{\begin{pmatrix}t-B_\theta s & -C_\theta
s\\s&t\end{pmatrix}\in\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})~;~t,~s\in\mathbb{Z}/N\mathbb{Z}\bigg\}.
\end{equation*}
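Since $W_{N,\theta}$ is parametrized by the pairs $(s,t)$, its image modulo
$\{\pm1_2\}$ is easy to enumerate by brute force; a minimal sketch (ours),
which tests invertibility via $\det=t^2-B_\theta st+C_\theta s^2$ and keeps
one representative of each pair $\{\alpha,-\alpha\}$:
\begin{verbatim}
# Enumerate W_{N,theta}/{±1} given B, C from min(theta, Q) = X^2 + BX + C.
from math import gcd

def W_mod_pm1(N, B, C):
    reps = set()
    for s in range(N):
        for t in range(N):
            if gcd((t * t - B * s * t + C * s * s) % N, N) != 1:
                continue                     # not invertible mod N
            # entries: (top-left, top-right, bottom-left, bottom-right)
            m = ((t - B * s) % N, (-C * s) % N, s, t)
            if tuple((-x) % N for x in m) not in reps:
                reps.add(m)
    return sorted(reps)

# For K = Q(sqrt(-10)) one has B = 0, C = 10; W_mod_pm1(6, 0, 10)
# returns 8 classes, matching the example in the last section.
\end{verbatim}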
Analyzing the diagram (\ref{exactdiagram}) and using Shimura's
reciprocity law (\ref{reciprocity}), Gee and Stevenhagen could
express $\mathrm{Gal}(K_{(N)}/H)$ quite explicitly.
\begin{proposition}\label{rayclass}
Assume that $K\neq\mathbb{Q}(\sqrt{-1}),\mathbb{Q}(\sqrt{-3})$ and
$N\geq1$. Then we have a surjection
\begin{eqnarray*}
W_{N,\theta}&\longrightarrow&\mathrm{Gal}(K_{(N)}/H)\\
\label{action}\alpha&\longmapsto&(h(\theta)\mapsto
h^\alpha(\theta))~\textrm{for $h\in \mathcal{F}_N$ which is defined
and finite at}~\theta,
\end{eqnarray*}
whose kernel is $\{\pm 1_2\}$ by \textup{(\ref{exactdiagram})} and
\textup{(\ref{reciprocity})}.
\end{proposition}
\begin{proof}
See \cite[pp.50--51]{Gee} or \cite[$\S$3]{Stevenhagen}.
\end{proof}
Finally we obtain an assertion which we shall use to solve our main
problem. In the next theorem, we follow the notations in
Propositions \ref{Hilbertclass} and \ref{rayclass}.
\begin{theorem}\label{conjugate}
Assume that $K\neq\mathbb{Q}(\sqrt{-1}),\mathbb{Q}(\sqrt{-3})$ and
$N\geq1$. Then there is a one-to-one correspondence
\begin{eqnarray*}
W_{N,\theta}/\{\pm1_2\}\times\mathrm{C}(d_K)&\longrightarrow&\mathrm{Gal}(K_{(N)}/K)\\
(\alpha,Q)&\mapsto&(h(\theta)\mapsto h^{\alpha\cdot
u_Q}(\theta_Q))~\textrm{for $h\in\mathcal{F}_N$ which is defined and
finite at $\theta$.}
\end{eqnarray*}
\end{theorem}
\begin{proof}
Observe the following diagram:
\begin{equation*}
\begindc{0}[3]
\obj(0,0)[A]{$K$}
\obj(0,15)[B]{$H$}
\obj(0,30)[C]{$K_{(N)}$}
\obj(0, 35)[D]{\underline{Fields}}
\obj(30, 35)[E]{\underline{Galois groups}}
\mor{A}{B}{$\quad\Bigg)\quad\mathrm{Gal}(H/K)=\{[x_Q,~K]|_H~;~Q\in
\mathrm{C}(d_K)\}\quad\textrm{by Proposition
\ref{idele}}.$}[\atright,\solidline]
\mor{B}{C}{$\quad\Bigg)\quad\mathrm{Gal}(K_{(N)}/H)\cong
W_{N,\theta}/\{\pm1_2\}\quad\textrm{by Proposition
\ref{rayclass}}$}[\atright,\solidline]
\enddc
\end{equation*}
Now the conclusion follows from Proposition \ref{Hilbertclass}.
\end{proof}
\begin{remark}\label{identitycorrespond}
In particular, the unit element of
$W_{N,\theta}/\{\pm1_2\}\times\mathrm{C}(d_K)$ corresponds to the
unit element of $\mathrm{Gal}(K_{(N)}/K)$ by the definitions of
$u_Q$ and $\theta_Q$. Note that the correspondence is not a group
homomorphism.
\end{remark}
\section{Ray class invariants}
In this last section we shall prove that the singular value
$g_{(0,1/N)}^{12N}(\theta)$ generates $K_{(N)}$ by showing that the
only automorphism of $K_{(N)}$ over $K$ which fixes it is the unit
element. Then Galois theory guarantees our theorem.
\par
Throughout this section we let $K$
($\neq\mathbb{Q}(\sqrt{-1}),\mathbb{Q}(\sqrt{-3})$) be an imaginary
quadratic field of discriminant $d_K$ such that $d_K\leq-7$. We put
$D=\sqrt{-d_K/3}$ and define $\theta$, $\theta_Q$, $u_Q$ for each
primitive positive definite quadratic form $Q=[a,b,c]$ as in
(\ref{theta}) and Proposition \ref{Hilbertclass}. If we set
\begin{equation*}
B=|q_\theta|=|e^{2\pi i\theta}|=e^{-\pi\sqrt{-d_K}},
\end{equation*}
then we have
\begin{equation}\label{B}
B\leq e^{-\sqrt{7}\pi}\quad\textrm{and}\quad
B^{1/D}=e^{-\sqrt{3}\pi}.
\end{equation}
\par
In what follows we shall often use the following basic inequality
\begin{equation}\label{basicinequality}
1+X<e^X\quad\textrm{for}~X>0.
\end{equation}
\begin{lemma}\label{ineq}
We have the following inequalities:
\begin{itemize}
\item[(i)] If $N\geq21$, then
$|(1-\zeta_N)/(1-B^{1/DN})|<1.306$.
\item[(ii)] If $N\geq2$, then
$|(1-\zeta_N)/(1-\zeta_N^s)|\leq1$ for all $s\in\mathbb{Z}\setminus
N\mathbb{Z}$.
\item[(iii)] If $N\geq4$,
then $|(1-\zeta_N)/(1-\zeta_N^s)|\leq 1/\sqrt{2}$ for $2\leq s\leq
N/2$.
\item[(iv)] If $N\geq2$, then
$B^{(1/2)(\mathbf{B}_2(0)-\mathbf{B}_2(1/N))}
|(1-\zeta_N)/(1-B^{1/N})|<0.76$.
\item[(v)] $1/(1-B^{X/D})<1+B^{X/1.03D}$
for all $X\geq1/2$.
\item[(vi)] $1/(1-B^X)<1+B^{X/1.03}$ for all
$X\geq1/2$.
\end{itemize}
\end{lemma}
\begin{proof}
(i) It is routine to check that $|(1-\zeta_N)/(1-B^{1/DN})|=
2\sin(\pi/N)/(1-e^{-\sqrt{3}\pi/N})$ is a decreasing function for
$N\geq21$. Hence its value is maximal when
$N=21$, which is less than $1.306$.\\
(ii) $|(1-\zeta_N)/(1-\zeta_N^s)|=|\sin(\pi/N)/\sin(\pi s/N)|\leq1$
for all $s\in\mathbb{Z}\setminus N\mathbb{Z}$.\\
(iii) If $N\geq4$ and $2\leq s\leq N/2$, then
$|\sin(s\pi/N)|\geq\sin(2\pi/N)$. Thus
\begin{eqnarray*}
\bigg|\frac{1-\zeta_N}{1-\zeta_N^s}\bigg|=
\bigg|\frac{\sin(\pi/N)}{\sin(s\pi/N)}\bigg|\leq
\frac{\sin(\pi/N)}{\sin(2\pi/N)}=\frac{1}{2\cos(\pi/N)}\leq
\frac{1}{2\cos(\pi/4)}=\frac{1}{\sqrt{2}}.
\end{eqnarray*}
(iv) Observe that
\begin{eqnarray*}
B^{(1/2)(\mathbf{B}_2(0)-\mathbf{B}_2(1/N))}\bigg|\frac{1-\zeta_N}{1-B^{1/N}}\bigg|
\leq e^{(-\sqrt{7}\pi/2)(1/N-1/N^2)}\frac{2\sin(\pi/N)}
{1-e^{-\sqrt{7}\pi/N}}\quad\textrm{by (\ref{B})}.
\end{eqnarray*}
It is also routine to check that the last term, viewed as a function of $N$
($\geq2$), is less than $0.76$.\\
(v) By (\ref{B}) the inequality is equivalent to $e^{-\sqrt{3}\pi
X}+e^{-3\sqrt{3}\pi X/103}<1$,
which obviously holds for $X\geq1/2$.\\
(vi) The given inequality is equivalent to $B^X+B^{3X/103}<1$. By
(\ref{B}) it suffices to show $e^{-\sqrt{7}\pi X}+e^{-3\sqrt{7}\pi
X/103}<1$, which is true for all $X\geq1/2$.
\end{proof}
\begin{lemma}\label{newlemma1}
Let $N\geq21$ and $Q=[a,b,c]$ be a reduced quadratic form of
discriminant $d_K$. If $a\geq2$, then the inequality
\begin{equation*}
|g_{(0,1/N)}(\theta)|< |g_{(r/N,s/N)}(\theta_Q)|
\end{equation*}
holds for $(r,s)\in\mathbb{Z}^2\setminus N\mathbb{Z}^2$.
\end{lemma}
\begin{proof}
We may assume $0\leq r\leq N/2$ by Proposition \ref{F_N}, and note
that $2\leq a\leq D$ by (\ref{bound a}). From (\ref{FourierSiegel})
we obtain that
\begin{eqnarray*}
&&\bigg|\frac{g_{(0,1/N)}(\theta)}{g_{(r/N,s/N)}(\theta_Q)}\bigg|
=\bigg|\frac{g_{(0,1/N)}(\theta)}{g_{(r/N,s/N)}((-b+\sqrt{d_K})/2a)}
\bigg|\\
&\leq&B^{(1/2)(\mathbf{B}_2(0)-(1/a)\mathbf{B}_2(r/N))}
\bigg|\frac{1-\zeta_N}{1-e^{2\pi
i((r/N)(-b+\sqrt{d_K})/2a+s/N)}}\bigg|\\
&&\times\prod_{n=1}^\infty
\frac{(1+B^n)^2}{(1-B^{(1/a)(n+r/N)})(1-B^{(1/a)(n-r/N)})}.
\end{eqnarray*}
If $r\neq0$, then by the fact $2\leq a\leq D$ and Lemma
\ref{ineq}(i),
\begin{eqnarray*}
\bigg|\frac{1-\zeta_N}{1-e^{2\pi
i((r/N)(-b+\sqrt{d_K})/2a+s/N)}}\bigg| \leq
\bigg|\frac{1-\zeta_N}{1-B^{r/Na}}\bigg| \leq
\bigg|\frac{1-\zeta_N}{1-B^{1/ND}}\bigg|<1.306.
\end{eqnarray*}
If $r=0$, then by Lemma \ref{ineq}(ii),
\begin{eqnarray*}
\bigg|\frac{1-\zeta_N}{1-e^{2\pi
i((r/N)(-b+\sqrt{d_K})/2a+s/N)}}\bigg|=
\bigg|\frac{1-\zeta_N}{1-\zeta_N^s}\bigg| \leq 1.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
&&\bigg|\frac{g_{(0,1/N)}(\theta)}{g_{(r/N,s/N)}(\theta_Q)}
\bigg|\\
&<& B^{(1/2)(\mathbf{B}_2(0)-(1/2)\mathbf{B}_2(0))}\cdot
1.306\cdot\prod_{n=1}^\infty\frac{(1+B^{n})^2}
{(1-B^{n/D})(1-B^{(1/D)(n-1/2)})}\\
&&\textrm{since $2\leq a\leq D$, $0\leq r\leq N/2$}\\
&<& 1.306B^{1/24}\prod_{n=1}^\infty (1+B^{n})^2(1+B^{n/1.03D})
(1+B^{(1/1.03D)(n-1/2)})\quad\textrm{by Lemma \ref{ineq}(v)}\\
&<& 1.306B^{1/24}\prod_{n=1}^\infty
e^{2B^{n}+B^{n/1.03D}+B^{(1/1.03D)(n-1/2)}}
\quad\textrm{by (\ref{basicinequality})}\\
&=&1.306B^{1/24}e^{2B/(1-B)+
(B^{1/1.03D}+B^{1/2.06D})/(1-B^{1/1.03D})}\\
&\leq& 1.306e^{-\sqrt{7}\pi/24}
e^{2e^{-\sqrt{7}\pi}/(1-e^{-\sqrt{7}\pi})+
(e^{-\sqrt{3}\pi/1.03}+e^{-\sqrt{3}\pi/2.06})/
(1-e^{-\sqrt{3}\pi/1.03})}<1\quad\textrm{by (\ref{B})}.
\end{eqnarray*}
This proves the lemma.
\end{proof}
\begin{lemma}\label{newlemma2}
Let $N\geq2$ and $Q=[1,b,c]$ be a reduced quadratic form of discriminant $d_K$. Then
we have the inequality
\begin{equation*}
|g_{(0,1/N)}(\theta)|< |g_{(r/N,s/N)}(\theta_Q)|
\end{equation*}
for $r,~s\in\mathbb{Z}$ with $r\not\equiv0\pmod{N}$.
\end{lemma}
\begin{proof}
We may assume $1\leq r\leq N/2$ by Proposition \ref{F_N}. Then
\begin{eqnarray*}
&&\bigg|\frac{g_{(0,1/N)}(\theta)}{g_{(r/N,s/N)}(\theta_Q)}
\bigg| \\
&\leq& B^{(1/2)(\mathbf{B}_2(0)-\mathbf{B}_2(r/N))}
\bigg|\frac{1-\zeta_N}{1-B^{r/N}}\bigg|
\prod_{n=1}^\infty\frac{(1+B^n)^2}{{(1-B^{n+r/N})}
{(1-B^{n-r/N})}}\quad\textrm{by (\ref{FourierSiegel})}\\
&<& B^{(1/2)(\mathbf{B}_2(0)-\mathbf{B}_2(1/N))}
\bigg|\frac{1-\zeta_N}{1-B^{1/N}}\bigg|\prod_{n=1}^\infty\frac{(1+B^n)^2}{{(1-B^{n})}
{(1-B^{n-1/2})}}\\
&<& 0.76\prod_{n=1}^\infty (1+B^n)^2(1+B^{n/1.03})
(1+B^{(1/1.03)(n-1/2)})\quad\textrm{by Lemma \ref{ineq}(iv) and (vi)}\\
&<& 0.76\prod_{n=1}^\infty
e^{2B^{n}+B^{n/1.03}+B^{(1/1.03)(n-1/2)}}\quad\textrm{by}~(\ref{basicinequality})\\
&=&0.76 e^{2B/(1-B)+
(B^{1/1.03}+B^{1/2.06})/(1-B^{1/1.03})}\\
&\leq& 0.76e^{2e^{-\sqrt{7}\pi}/(1-e^{-\sqrt{7}\pi})+
(e^{-\sqrt{7}\pi/1.03}+e^{-\sqrt{7}\pi/2.06})/
(1-e^{-\sqrt{7}\pi/1.03})}<1\quad\textrm{by (\ref{B})}.
\end{eqnarray*}
\end{proof}
\begin{lemma}\label{newlemma3}
Let $N\geq2$ and $Q=[1,b,c]$ be a reduced quadratic form of
discriminant $d_K$. Then
\begin{equation*}
|g_{(0,1/N)}(\theta)|<|g_{(0,s/N)}(\theta_Q)|
\end{equation*}
for $s\in\mathbb{Z}$ with $s\not\equiv0,~\pm1\pmod{N}$.
\end{lemma}
\begin{proof}
If $N=2$ or $3$, there is nothing to prove. Thus, let $N\geq4$. Here
we may assume that $2\leq s\leq N/2$ by Proposition \ref{F_N}.
Observe that
\begin{eqnarray*}
&&\bigg|\frac{g_{(0,1/N)}(\theta)}
{g_{(0,s/N)}(\theta_Q)}\bigg|\\
&\leq& \bigg|\frac{1-\zeta_N}{1-\zeta_N^s}\bigg|
\prod_{n=1}^\infty\frac{(1+B^n)^2}{(1-B^n)^2}\quad\textrm{by (\ref{FourierSiegel})}\\
&<& (1/\sqrt{2})\prod_{n=1}^\infty
(1+B^{n})^2(1+B^{n/1.03})^2\quad\textrm{by Lemma \ref{ineq}(iii) and (vi)}\\
&<& (1/\sqrt{2})\prod_{n=1}^\infty
e^{2B^{n}+2B^{n/1.03}}\quad\textrm{by}~(\ref{basicinequality})\\
&=&
(1/\sqrt{2})e^{2B/(1-B)+2B^{1/1.03}/(1-B^{1/1.03})}\\
&\leq&(1/\sqrt{2})e^{2e^{-\sqrt{7}\pi}/(1-e^{-\sqrt{7}\pi})
+2e^{-\sqrt{7}\pi/1.03}/(1-e^{-\sqrt{7}\pi/1.03})}<1 \quad\textrm{by
(\ref{B})},
\end{eqnarray*}
which proves the lemma.
\end{proof}
Now we are ready to prove our main theorem.
\begin{theorem}\label{primitive}
Let $N\geq21$. Let $K$
\textup($\neq\mathbb{Q}(\sqrt{-1}),\mathbb{Q}(\sqrt{-3})$\textup) be
an imaginary quadratic field and $\theta$ be as in
\textup{(\ref{theta})}. Then for any positive integer $n$ the value
\begin{equation*}
g^{12Nn}(1,N\mathcal{O}_K)=g_{(0,1/N)}^{12Nn}(\theta)
\end{equation*}
generates $K_{(N)}$ over $K$. It is a real algebraic integer and its
minimal polynomial has integer coefficients. In particular, if $N$
has at least two prime factors, then it is an elliptic unit.
\end{theorem}
\begin{proof}
For simplicity we put $g(\tau)=g_{(0,1/N)}^{12Nn}(\tau)$. Since $g$
belongs to $\mathcal{F}_N$ by Proposition \ref{F_N}, the value
$g(\theta)$ lies in $K_{(N)}$ by the main theorem of the theory of
complex multiplication. Hence, if we show that the only element of
$\mathrm{Gal}(K_{(N)}/K)$ fixing the value $g(\theta)$ is the unit
element, then we can conclude that it generates $K_{(N)}$ over $K$
by Galois theory.
\par By Theorem \ref{conjugate} any conjugate of $g(\theta)$ is of
the form $g^{\alpha\cdot u_Q}(\theta_Q)$ for some
$\alpha=\left(\begin{smallmatrix}t-B_\theta s&-C_\theta
s\\s&t\end{smallmatrix}\right)\in W_{N,\theta}$ and a reduced
quadratic form $Q=[a,b,c]$ of discriminant $d_K$. Assume that
$g(\theta)=g^{\alpha\cdot u_Q}(\theta_Q)$. Then Lemma
\ref{newlemma1} implies that $a=1$, which yields
\begin{eqnarray*}
u_Q=\left\{\begin{array}{ll}\begin{pmatrix}1&b/2\\0&1\end{pmatrix}
&\textrm{for}~d_K\equiv0\pmod{4}\vspace{0.1cm}\\
\begin{pmatrix}1&(b-1)/2\\0&1\end{pmatrix}
&\textrm{for}~d_K\equiv1\pmod{4}\end{array}\right.
\end{eqnarray*}
as an element of $\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})$ by
(\ref{u1}) and (\ref{u2}). It follows from Proposition \ref{F_N}
that
\begin{eqnarray*}
g(\theta)=g^{\alpha\cdot u_Q}(\theta_Q)=g_{(0,1/N)\alpha
u_Q}^{12Nn}(\theta_Q)= \left\{\begin{array}{ll} g_{(s/N,(s/N)(b/2)+t/N)}^{12Nn}(\theta_Q)&\textrm{for}~d_K\equiv0\pmod{4}\vspace{0.1cm}\\
g_{(s/N,(s/N)(b-1)/2+t/N)}^{12Nn}(\theta_Q)&\textrm{for}~
d_K\equiv1\pmod{4},
\end{array}\right.
\end{eqnarray*}
from which we get $s\equiv0\pmod{N}$ by Lemma \ref{newlemma2}. Now
Lemma \ref{newlemma3} implies that $t\equiv\pm1\pmod{N}$, which
shows that $\alpha$ is the unit element of
$W_{N,\theta}/\{\pm1_2\}$. Finally (\ref{reduced}) implies that
\begin{eqnarray*}
Q=[a,b,c]=\left\{\begin{array}{ll} \textrm{[}1,0,-d_K/4\textrm{]} &
\textrm{for}~d_K\equiv0\pmod{4}\vspace{0.1cm}\\
\textrm{[}1,1,(1-d_K)/4\textrm{]} & \textrm{for}~
d_K\equiv1\pmod{4},\end{array}\right.
\end{eqnarray*}
which represents the unit element of $\mathrm{C}(d_K)$ by
(\ref{unitform}). This implies that $(\alpha,Q)\in
W_{N,\theta}/\{\pm1_2\}\times\mathrm{C}(d_K)$ represents the unit
element of $\mathrm{Gal}(K_{(N)}/K)$ by Remark
\ref{identitycorrespond}. Therefore $g(\theta)$ actually generates
$K_{(N)}$ over $K$.
\par
From (\ref{FourierSiegel}) we have
\begin{eqnarray*}
g(\theta)&=&\bigg(q_\theta^{1/12}(1-\zeta_N)\prod_{k=1}^\infty(1-q_\theta^k\zeta_N)(1-q_\theta^k\zeta_N^{-1})\bigg)^{12Nn}\\
&=&q_\theta^{Nn}(2\sin(\pi/N))^{12Nn}
\prod_{k=1}^\infty(1-(\zeta_N+\zeta_N^{-1})q_\theta^k+q_\theta^{2k})^{12Nn}
\end{eqnarray*}
(the unimodular factor $(-e^{-\pi i/N})^{12Nn}$ from (\ref{FourierSiegel})
equals $1$, and $(1-\zeta_N)^{12Nn}=(2\sin(\pi/N))^{12Nn}$ since
$1-\zeta_N=2\sin(\pi/N)e^{i(\pi/N-\pi/2)}$ and $e^{12Nni(\pi/N-\pi/2)}=1$);
this shows that $g(\theta)$ is a real number. Furthermore, we
see from \cite[$\S$3]{K-S} that the function $g(\tau)$ is integral
over $\mathbb{Z}[j(\tau)]$. Since $j(\theta)$ is a real algebraic
integer (\cite[Chapter 5 Theorem 4]{Lang}), so is the value
$g(\theta)$, and its minimal polynomial over $K$ has integer
coefficients. In particular, if $N$ has at least two prime factors,
the function $1/g(\tau)$ is also integral over $\mathbb{Z}[j(\tau)]$
(\cite[Chapter 2 Theorem 2.2]{K-L}); hence $g(\theta)$ becomes a
unit.
\end{proof}
\begin{remark}\label{exception}
\begin{itemize}
\item[(i)] If we assume that
\begin{equation}\label{newcondition}
( N=2,~d_K\leq-43)\quad\textrm{or}\quad
(N=3,~d_K\leq-39)\quad\textrm{or}\quad (N\geq4,~d_K\leq-31),
\end{equation}
then the upper bounds of the inequalities appearing in Lemma
\ref{ineq} should be slightly changed. But it is routine to check
that Lemmas \ref{newlemma1}--\ref{newlemma3} remain true.
Therefore we can establish Theorem \ref{primitive} again under the
condition (\ref{newcondition}); however, we shall not repeat the
similar proofs.
\item[(ii)]
Theorem \ref{primitive} is still valid for all $N\geq2$ and
$K\neq\mathbb{Q}(\sqrt{-1}),\mathbb{Q}(\sqrt{-3})$. Indeed, for the
remaining finite cases $(N=2,~-40\leq d_K\leq-7)$, $(N=3,~-35\leq
d_K\leq-7)$, $(4\leq N\leq20,~-24\leq d_K\leq-7)$ one can readily
verify Lemmas \ref{newlemma1}--\ref{newlemma3} by Theorem
\ref{conjugate} and some numerical estimation, not by using Lemma
\ref{ineq}.
\end{itemize}
\end{remark}
\begin{remark}\label{exponent}
\begin{itemize}
\item[(i)] For $N\geq2$ and $(r_1,r_2)\in(1/N)\mathbb{Z}^2\setminus
\mathbb{Z}^2$, the function $g_{(r_1,r_2)}^{12N/\gcd(6,N)}(\tau)$
belongs to $\mathcal{F}_N$ and satisfies the same transformation
formulas as in Proposition \ref{F_N} by \cite[Theorem 2.5 and
Proposition 2.4]{K-S}. Hence we are able to replace the value
$g_{(0,1/N)}^{12Nn}(\theta)$ in Theorem \ref{primitive} by
$g_{(0,1/N)}^{12Nn/\gcd(6,N)}(\theta)$ with smaller exponent, which
enables us to have class polynomials with relatively small
coefficients.
\item[(ii)] Nevertheless, the exponent of
$g_{(0,1/N)}^{12N/\gcd(6,N)}(\theta)$ could be quite high for
numerical computations. So one usually takes suitable products of
Siegel functions with lower exponents (\cite{B-S}).
\item[(iii)] In order to prove that the singular value
$g_{(0,1/N)}^{12N/\gcd(6,N)}(\theta)$ is a unit, it suffices to
check whether $N$ has more than one prime ideal factor in $K$
(\cite[$\S$6]{Ramachandra}).
\end{itemize}
\end{remark}
Now we close this section by presenting an example which illustrates
Theorem \ref{primitive}, Remarks \ref{exception} and \ref{exponent}.
\begin{example}
Let $K=\mathbb{Q}(\sqrt{-10})$ and $N=6$ ($=2\cdot3$). Then
$d_K=-40$ and $\theta=\sqrt{-10}$. The reduced quadratic forms of
discriminant $d_K$ are
\begin{equation*}
Q_1=[1,~0,~10]\quad\textrm{and}\quad Q_2=[2,~0,~5].
\end{equation*}
So we have
\begin{eqnarray*}
\theta_{Q_1}=\sqrt{-10},~u_{Q_1}=\begin{pmatrix}1&0\\0&1\end{pmatrix}\quad\textrm{and}\quad
\theta_{Q_2}=\sqrt{-10}/2,~u_{Q_2}=\begin{pmatrix}2&-3\\3&4\end{pmatrix}.
\end{eqnarray*}
Furthermore, one can compute the group $W_{6,\theta}/\{\pm1_2\}$
easily and the result is as follows:
\begin{eqnarray*}
W_{6,\theta}/\{\pm1_2\}=\bigg\{
\begin{pmatrix}1&0\\0&1\end{pmatrix},
\begin{pmatrix}1&2\\1&1\end{pmatrix},
\begin{pmatrix}1&4\\2&1\end{pmatrix},
\begin{pmatrix}1&0\\3&1\end{pmatrix},
\begin{pmatrix}1&2\\4&1\end{pmatrix},
\begin{pmatrix}1&4\\5&1\end{pmatrix},
\begin{pmatrix}3&2\\1&3\end{pmatrix},
\begin{pmatrix}3&4\\2&3\end{pmatrix}
\bigg\}.
\end{eqnarray*}
Thus the class polynomial is
\begin{eqnarray*}
&&\mathrm{min}(g_{(0,1/6)}^{12}(\theta),K)=
\prod_{r=1}^2\prod_{\alpha\in
W_{6,\theta}/\{\pm1_2\}}(X-g_{(0,1/6)\alpha
u_{Q_r}}^{12}(\theta_{Q_r}))\\
&=&(X-g_{(0,1/6)}^{12}(\sqrt{-10}))
(X-g_{(1/6,1/6)}^{12}(\sqrt{-10}))
(X-g_{(2/6,1/6)}^{12}(\sqrt{-10}))\\
&&(X-g_{(3/6,1/6)}^{12}(\sqrt{-10}))
(X-g_{(4/6,1/6)}^{12}(\sqrt{-10}))
(X-g_{(5/6,1/6)}^{12}(\sqrt{-10}))\\
&&(X-g_{(1/6,3/6)}^{12}(\sqrt{-10}))
(X-g_{(2/6,3/6)}^{12}(\sqrt{-10}))
(X-g_{(3/6,4/6)}^{12}(\sqrt{-10}/2))\\
&&(X-g_{(5/6,1/6)}^{12}(\sqrt{-10}/2))
(X-g_{(1/6,4/6)}^{12}(\sqrt{-10}/2))
(X-g_{(3/6,1/6)}^{12}(\sqrt{-10}/2))\\
&&(X-g_{(5/6,4/6)}^{12}(\sqrt{-10}/2))
(X-g_{(1/6,1/6)}^{12}(\sqrt{-10}/2))
(X-g_{(5/6,3/6)}^{12}(\sqrt{-10}/2))\\
&&(X-g_{(1/6,0)}^{12}(\sqrt{-10}/2))\\
&=&X^{16}+20560X^{15}-1252488X^{14}-829016560X^{13}-8751987701092X^{12}\\
&&+217535583987600X^{11}+181262520621110344X^{10}+43806873084101200X^9\\
&&-278616280004972730X^8+139245187265282800X^7-8883048242697656X^6\\
&&+352945014869040X^5+23618989732508X^4-1848032773840X^3+49965941112X^2\\
&&-425670800X+1,
\end{eqnarray*}
which shows that $g_{(0,1/6)}^{12}(\theta)$ is also a unit.
\end{example}
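The polynomial above can also be checked numerically, reusing the sketch
\texttt{siegel} from the introduction (ours; double precision cannot recover
the largest coefficients exactly, as they exceed $2^{53}$, but the constant
term, i.e.\ the product of the sixteen conjugates, is easily confirmed):
\begin{verbatim}
# Confirm numerically that the sixteen conjugates multiply to 1 (the
# constant term above), so g_{(0,1/6)}^{12}(theta) is a unit; recovering
# all coefficients exactly would need higher precision (e.g. mpmath).
import numpy as np

tau1 = 1j * 10 ** 0.5            # theta = sqrt(-10)
tau2 = tau1 / 2                  # theta_{Q_2} = sqrt(-10)/2
pairs = [((0, 1), tau1), ((1, 1), tau1), ((2, 1), tau1), ((3, 1), tau1),
         ((4, 1), tau1), ((5, 1), tau1), ((1, 3), tau1), ((2, 3), tau1),
         ((3, 4), tau2), ((5, 1), tau2), ((1, 4), tau2), ((3, 1), tau2),
         ((5, 4), tau2), ((1, 1), tau2), ((5, 3), tau2), ((1, 0), tau2)]
roots = [siegel(r / 6, s / 6, tau) ** 12 for (r, s), tau in pairs]
print(np.prod(roots))            # expect approximately 1
\end{verbatim}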
\section{\bf Introduction}
Throughout this paper, let $\mathbb{R}^n_{+}=\{x\in \mathbb{R}^n;x\geq0\}$, $\mathbb{R}^n_{-}=\{x\in \mathbb{R}^n;x\leq0\}$, $\mathbb{R}^n_{++}=\{x\in \mathbb{R}^n;x>0\}$, $e=(1,1,\cdots,1)^T$ and $x^{[m]} = (x_1^m, x_2^m,\cdots, x_n^m)^T$ for $x = (x_1, x_2,\cdots, x_n)^T$, where $x^T$ is the transpose of a vector $x$ and $x\geq0$ ($x>0$) means $x_i\geq0$ ($x_i>0$) for all $i\in\{1,2,\cdots,n\}$.
As a natural extension of the concept of matrices, an $m$-order $n$-dimensional tensor $\mathcal{A}$ consists of $n^m$ elements in the real field $\mathbb{R}$:
$$\mathcal{A} = (a_{i_1\cdots i_m}),\ \ \ \ \ a_{i_1\cdots i_m} \in \mathbb{R},\ \ i_1,i_2,\cdots,i_m=1,2,\cdots, n.$$
For an element $x = (x_1, x_2,\cdots, x_n)^T\in \mathbb{R}^n$ or $\mathbb{C}^n$, $\mathcal{A}x^m$ is defined by \begin{equation}\label{eq:11}\mathcal{A}x^m=\sum_{i_1,i_2,\cdots,i_m=1}^na_{i_1i_2\cdots i_m}x_{i_1}x_{i_2}\cdots x_{i_m};\end{equation}
$\mathcal{A}x^{m-1}$ is a vector in $\mathbb{R}^n$ (or $\mathbb{C}^n$) with its ith component defined by
\begin{equation}\label{eq:12}(\mathcal{A}x^{m-1})_i=\sum_{i_2,\cdots,i_m=1}^na_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m}\mbox{ for } i=1,2,\ldots,n.\end{equation}
An $m$-order $n$-dimensional tensor $\mathcal{A}$ is said to be {\em symmetric} if its entries $a_{i_1\cdots i_m}$ are invariant for any permutation of the indices. Clearly, each $m$-order $n$-dimensional symmetric tensor $\mathcal{A}$ defines a homogeneous polynomial $\mathcal{A}x^m$ of degree $m$ with $n$ variables and vice versa.
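For the examples below it is convenient to record how (\ref{eq:11}) and
(\ref{eq:12}) are evaluated in practice; a minimal sketch (ours), with
$\mathcal{A}$ stored as an $m$-way array:
\begin{verbatim}
# Evaluate A x^{m-1} and A x^m for a tensor A stored as a numpy array
# of shape (n, ..., n) with m axes.
import numpy as np

def apply_tensor(A, x):
    v = A
    for _ in range(A.ndim - 1):
        v = v @ x                # contract the last index with x
    return v, float(x @ v)       # (A x^{m-1}, A x^m)
\end{verbatim}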
For a given $m$-order $n$-dimensional symmetric tensor $\mathcal{A}$, we consider a constrained optimization problem of the form:
\begin{equation}\label{eq:13} \begin{aligned}
\min &\ \frac1m\mathcal{A}x^m\\
s.t. &\ x^Tx^{[m-1]} = 1\\
&\ x\in \mathbb{R}^n_+.
\end{aligned}\end{equation}
Then the Lagrange function of the problem (\ref{eq:13}) is given clearly by
\begin{equation}\label{eq:14} L(x,\lambda,y)=\frac1m\mathcal{A}x^m+\frac1m\lambda(1-x^Tx^{[m-1]})-x^Ty\end{equation}
where $x,y\in \mathbb{R}^n_+$, $\frac{\lambda}m\in\mathbb{R}$ is the Lagrange multiplier of the equality constraint and $y$ is the Lagrange multiplier of the non-negativity constraint. So the solution $x$ of the problem (\ref{eq:13}) satisfies the following conditions:
\begin{align}
\mathcal{A}x^{m-1}-\lambda x^{[m-1]}-y&=0\label{eq:15}\\
1-x^Tx^{[m-1]} &= 0\label{eq:16}\\
x^Ty &= 0\label{eq:17}\\
x,y&\in \mathbb{R}^n_+.\label{eq:18}
\end{align}
The equation (\ref{eq:16}) means that $\sum\limits_{i=1}^nx_i^m = 1$. It follows from the equations (\ref{eq:15}), (\ref{eq:17}) and (\ref{eq:18}) that $$\begin{aligned}x^Ty=x^T\mathcal{A}x^{m-1}-\lambda x^Tx^{[m-1]}&=0\\ x\geq 0, \mathcal{A}x^{m-1}-\lambda x^{[m-1]}=y&\geq0,\end{aligned}$$ and hence,
\begin{equation}\label{eq:19}\begin{cases} \mathcal{A}x^m=\lambda x^Tx^{[m-1]}\\
\mathcal{A}x^{m-1}-\lambda x^{[m-1]}\geq0\\ x\geq 0.\end{cases}\end{equation}
Following Qi \cite{LQ1} ($H$-eigenvalue of the tensor $\mathcal{A}$) and Seeger \cite{S99} (Pareto eigenvalue of the matrix $A$), for an $m$-order $n$-dimensional tensor $\mathcal{A}$, a real number $\lambda$ is called a {\em Pareto $H$-eigenvalue} of the tensor $\mathcal{A}$ if there exists a non-zero vector $x\in \mathbb{R}^n$ satisfying the system (\ref{eq:19}). The non-zero vector $x$ is called a {\em Pareto $H$-eigenvector} of the tensor $\mathcal{A}$ associated to $\lambda$.
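In floating point, the system (\ref{eq:19}) can be tested up to a tolerance;
a hedged sketch (ours), built on \texttt{apply\_tensor} above:
\begin{verbatim}
# Decide numerically whether (lam, x) satisfies the defining system of a
# Pareto H-eigenpair: x >= 0 nonzero, A x^{m-1} - lam x^{[m-1]} >= 0, and
# complementarity x^T (A x^{m-1} - lam x^{[m-1]}) = 0.
import numpy as np

def is_pareto_H(A, lam, x, tol=1e-9):
    Ax, _ = apply_tensor(A, x)
    y = Ax - lam * x ** (A.ndim - 1)
    return (np.all(x >= -tol) and np.any(x > tol)
            and np.all(y >= -tol) and abs(x @ y) <= tol)
\end{verbatim}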
Similarly, for a given $m$-order $n$-dimensional symmetric tensor $\mathcal{A}$, we consider another constrained optimization problem of the form ($m\geq 2$):
\begin{equation}\label{eq:110} \begin{aligned}
\min &\ \frac1m\mathcal{A}x^m\\
s.t. &\ x^Tx = 1\\
&\ x\in \mathbb{R}^n_+.
\end{aligned}\end{equation}
Obviously, when $x\in \mathbb{R}^n$, $x^Tx = 1$ if and only if $(x^Tx)^{\frac{m}2} = 1$.
The corresponding Lagrange function may be written in the form
$$ L(x,\mu,y)=\frac1m\mathcal{A}x^m+\frac1m\mu(1-(x^Tx)^{\frac{m}2})-x^Ty.$$
So the solution $x$ of the problem (\ref{eq:110}) satisfies the conditions:
$$ \mathcal{A}x^{m-1}-\mu(x^Tx)^{\frac{m}2-1} x-y=0,\
1-(x^Tx)^{\frac{m}2} = 0,\
x^Ty = 0,\
x,y \in \mathbb{R}^n_+.
$$
Then we also have $\sum\limits_{i=1}^nx_i^2 = 1$ and
\begin{equation}\label{eq:111}\begin{cases} \mathcal{A}x^m=\mu (x^Tx)^{\frac{m}2} \\
\mathcal{A}x^{m-1}-\mu (x^Tx)^{\frac{m}2-1}x \geq0\\ x\geq 0.\end{cases}\end{equation}
Following Qi \cite{LQ1} ($Z$-eigenvalue of the tensor $\mathcal{A}$) and Seeger \cite{S99} (Pareto eigenvalue of the matrix $A$), for an $m$-order $n$-dimensional tensor $\mathcal{A}$, a real number $\mu$ is said to be a {\em Pareto $Z$-eigenvalue} of the tensor $\mathcal{A}$ if there is a non-zero vector $x\in \mathbb{R}^n$ satisfying the system (\ref{eq:111}). The non-zero vector $x$ is called a {\em Pareto $Z$-eigenvector} of the tensor $\mathcal{A}$ associated to $\mu$.
So the constrained optimization problems (\ref{eq:13}) and (\ref{eq:110}) for homogeneous polynomials may be solved by means of the Pareto $H$-eigenvalue (\ref{eq:19}) and the Pareto $Z$-eigenvalue (\ref{eq:111}) of the corresponding tensor, respectively. It is an interesting problem to compute the Pareto $H$-eigenvalues ($Z$-eigenvalues) of a higher order tensor.\\
When $m=2$, both the Pareto $H$-eigenvalue and the Pareto $Z$-eigenvalue of an $m$-order $n$-dimensional tensor obviously reduce to the Pareto eigenvalue of a matrix. The concept of Pareto eigenvalue was first introduced and used by Seeger \cite{S99} for studying the equilibrium processes defined by linear complementarity conditions. For more details, also see Hiriart-Urruty and Seeger \cite{HS10}.\\
In this paper, we will study the properties of the Pareto $H$-eigenvalue ($Z$-eigenvalue) of a higher order tensor $\mathcal{A}$. It will be proved that a real number $\lambda$ is a Pareto $H$-eigenvalue ($Z$-eigenvalue) of $\mathcal{A}$ if and only if $\lambda$ is an $H^{++}$-eigenvalue ($Z^{++}$-eigenvalue) of some $|N|$-dimensional principal sub-tensor of $\mathcal{A}$ with corresponding $H$-eigenvector ($Z$-eigenvector) $w$ and
$$\sum\limits_{i_2,\cdots ,i_m\in N}a_{ii_2\cdots i_m}w_{i_2}w_{i_3}\cdots w_{i_m}\geq0\mbox{ for }i\in \{1,2,\cdots,n\}\setminus N.$$
So we may calculate some Pareto $H$-eigenvalue ($Z-$eigenvalue) of a higher order tensor by means of $H^{++}$-eigenvalue ($Z^{++}$-eigenvalue) of the lower dimensional tensors. What's more, we will show that \begin{equation}\label{eq:112}\min\limits_{x\geq0 \atop \|x\|_m=1 }\mathcal{A}x^m=\min\{\mu; \mu \mbox{ is Pareto $H$-eigenvalue of }\mathcal{A}\}\end{equation}
\begin{equation}\label{eq:113}\min\limits_{x\geq0 \atop \|x\|_2=1 }\mathcal{A}x^m=\min\{\mu; \mu \mbox{ is Pareto $Z$-eigenvalue of }\mathcal{A}\}.\end{equation} Therefore, we may solve the constrained minimization problem for homogeneous polynomial and test the (strict) copositivity of a symmetric tensor $\mathcal{A}$ with the help of computing the Pareto $H$-eigenvalue (or Pareto $Z$-eigenvalue) of a symmetric tensor.
As a corollary, a symmetric tensor $\mathcal{A}$ is copositive if and only if every Pareto $H$-eigenvalue ($Z-$eigenvalue) of $\mathcal{A}$ is non-negative and $\mathcal{A}$ is strictly copositive if and only if every Pareto $H$-eigenvalue ($Z-$eigenvalue) of $\mathcal{A}$ is positive.
\section{\bf Preliminaries and Basic facts}
Let $\mathcal{A}$ be an $m$-order $n$-dimensional symmetric tensor. A number $\lambda\in \mathbb{C}$ is called an {\em eigenvalue of $\mathcal{A}$} if there exists a nonzero vector $x\in \mathbb{C}^n$ satisfying
\begin{equation}\label{eq:22}\mathcal{A}x^{m-1}=\lambda x^{[m-1]}, \end{equation}
where $x^{[m-1]}=(x_1^{m-1},\cdots , x_n^{m-1})^T$, and call $x$ an {\em eigenvector} of $\mathcal{A}$ associated with the eigenvalue $\lambda$. We call such an eigenvalue {\em $H$-eigenvalue} if it is real and has a real eigenvector $x$, and call such a real eigenvector $x$ an {\em H-eigenvector}.
These concepts were first introduced by Qi \cite{LQ1} for higher order symmetric tensors, where the existence of the eigenvalues and some of their applications were also studied. Lim \cite{LL} independently introduced these concepts and obtained existence results using a variational approach. Qi \cite{LQ1, LQ2, LQ3} extended some nice properties of matrices to higher order tensors. Subsequently,
this topic has attracted the attention of many mathematicians
from different disciplines. For various studies and applications, see Chang \cite{C09}, Chang, Pearson and Zhang \cite{CPT1},
Chang, Pearson and Zhang \cite{CPT}, Hu, Huang and Qi \cite{HHQ}, Hu and Qi \cite{HQ}, Ni, Qi, Wang and Wang \cite{NQWW}, Ng, Qi and Zhou \cite{NQZ}, Song and Qi \cite{SQ13,SQ10}, Yang and Yang \cite{YY10,YY11}, Zhang \cite{TZ}, Zhang and Qi \cite{ZQ}, Zhang, Qi and Xu \cite{ZQX} and references cited therein.\\
A number $\mu\in \mathbb{C}$ is said to be an {\em $E$-eigenvalue of $\mathcal{A}$} if there exists a nonzero
vector $x\in \mathbb{C}^n$ such that
\begin{equation}\label{eq:23}\mathcal{A}x^{m-1}=\mu x (x^Tx)^{\frac{m-2}2}.\end{equation} Such a nonzero
vector $x\in \mathbb{C}^n$ is called an {\em $E$-eigenvector} of $\mathcal{A}$ associated with $\mu$.
If $x$ is real, then $\mu$ is also real. In this case, $\mu$ and $x$ are called a {\em $Z$-eigenvalue} of $\mathcal{A}$ and
a {\em $Z$-eigenvector} of $\mathcal{A}$ (associated with $\mu$), respectively. Qi \cite{LQ1, LQ2, LQ3} first introduced and used these concepts and showed that if $\mathcal{A}$ is regular, then a complex number is an $E$-eigenvalue of a higher order symmetric tensor if and only if it is a root of the corresponding $E$-characteristic polynomial. Also see Hu and Qi \cite{HQ13}, Hu, Huang, Ling and Qi \cite{HHLQ}, Li, Qi and Zhang \cite{LQZ} for more details. \\
In the homogeneous
polynomial $\mathcal{A}x^m$ defined by (\ref{eq:11}), if we let some (but not all) $x_i$ be zero, then we have a homogeneous
polynomial with fewer variables, which defines a lower dimensional tensor. We call such a lower dimensional
tensor a {\em principal sub-tensor} of $\mathcal{A}$. The concept was first introduced and used by Qi \cite{LQ1} for higher order symmetric tensors.\\
Recently, Qi \cite{LQ4} introduced and used the following concepts for studying the properties of hypergraphs. An $H$-eigenvalue $\lambda$ of $\mathcal{A}$ is said to be (i) an {\em $H^+$-eigenvalue of $\mathcal{A}$}, if its $H$-eigenvector $x\in \mathbb{R}^n_+$; (ii) an {\em $H^{++}$-eigenvalue of $\mathcal{A}$}, if its $H$-eigenvector $x\in \mathbb{R}^n_{++}$. Similarly, we introduce the concepts of $Z^+$-eigenvalue and $Z^{++}$-eigenvalue. A $Z$-eigenvalue $\mu$ of $\mathcal{A}$ is said to be (i) a {\em $Z^+$-eigenvalue of $\mathcal{A}$}, if its $Z$-eigenvector $x\in \mathbb{R}^n_+$; (ii) a {\em $Z^{++}$-eigenvalue of $\mathcal{A}$}, if its $Z$-eigenvector $x\in \mathbb{R}^n_{++}$.
\section{\bf Pareto $H$-eigenvalue and Pareto $Z$-eigenvalue}
Let $N$ be a subset of the index
set $\{1, 2, \cdots, n\}$ and $\mathcal{A}$ be a tensor of order $m$ and dimension $n$. We denote the principal sub-tensor of $\mathcal{A}$ by $\mathcal{A} ^N$ which is obtained by homogeneous polynomial $\mathcal{A}x^m$ for all $x=(x_1, x_2, \cdots, x_n)^T$ with $x_i=0$ for $i\in \{1, 2, \cdots, n\}\setminus N$. The symbol $|N|$ denotes the cardinality of $N$. So, $\mathcal{A} ^N$ is a tensor of order $m$ and dimension $|N|$ and the principal sub-tensor $\mathcal{A}^N$ is just $\mathcal{A}$ itself when $N=\{1, 2, \cdots, n\}$.
\begin{thm} \label{th:31} Let $\mathcal{A}$ be an $m$-order and $n$-dimensional tensor.
A real number $\lambda$ is a Pareto $H$-eigenvalue of $\mathcal{A}$ if and only if there exists a nonempty subset $N\subseteq \{1, 2, \cdots, n\}$ and a vector $w\in\mathbb{R}^{|N|}$
such that
\begin{align}
\mathcal{A}^{N}w^{m-1}&=\lambda w^{[m-1]}\label{eq:31},\ \ w\in\mathbb{R}^{|N|}_{++}\\
\sum\limits_{i_2,\cdots ,i_m\in N}a_{ii_2\cdots i_m}w_{i_2}w_{i_3}\cdots w_{i_m}&\geq0\mbox{ for }i\in \{1,2,\cdots,n\}\setminus N\label{eq:32}
\end{align}
In such a case, the vector $y\in\mathbb{R}^{n}_+$ defined by
\begin{equation}\label{eq:33} y_i=\begin{cases} w_i, \ \ i\in N\\
0,\ \ i\in \{1,2,\cdots,n\}\setminus N\end{cases}\end{equation}
is a Pareto $H$-eigenvector of $\mathcal{A}$ associated to the real number $\lambda$.
\end{thm}
\begin{proof} First we show the necessity. Let the real number $\lambda$ be a Pareto $H$-eigenvalue of $\mathcal{A}$ with a corresponding Pareto $H$-eigenvector $y$. Then by the definition (\ref{eq:19}) of the Pareto $H$-eigenvalue, the Pareto $H$-eigenpair $(\lambda,y)$ may be rewritten in the form
\begin{equation}\label{eq:34}\begin{aligned} y^T(\mathcal{A}y^{m-1}-\lambda y^{[m-1]})=&0\\
\mathcal{A}y^{m-1}-\lambda y^{[m-1]}\geq&0\\ y\geq& 0\end{aligned}\end{equation}
and hence \begin{align} \sum_{i=1}^ny_i(\mathcal{A}y^{m-1}-\lambda y^{[m-1]})_i=&0\label{eq:35}\\
(\mathcal{A}y^{m-1}-\lambda y^{[m-1]})_i\geq&0,\ \mbox{ for } i=1,2,\ldots,n\label{eq:36}\\
y_i\geq& 0,\ \mbox{ for } i=1,2,\ldots,n.\label{eq:37}
\end{align}
Combining the equation (\ref{eq:35}) with (\ref{eq:36}) and (\ref{eq:37}), we have
\begin{equation}\label{eq:38}y_i(\mathcal{A}y^{m-1}-\lambda y^{[m-1]})_i=0,\ \mbox{ for all } i\in \{1,2,\ldots,n\}.\end{equation}
Take $N=\{i\in \{1,2,\ldots,n\}; y_i>0\}$. Let the vector $w\in\mathbb{R}^{|N|}$ be defined by $$w_i=y_i\mbox{ for all } i\in N.$$
Clearly, $w\in\mathbb{R}^{|N|}_{++}$. Combining the equation (\ref{eq:38}) with the fact that $y_i>0$ for all $i\in N$, we have
$$(\mathcal{A}y^{m-1}-\lambda y^{[m-1]})_i=0,\ \mbox{ for all } i\in N,$$ and so
$$\mathcal{A}^{N}w^{m-1}=\lambda w^{[m-1]},\ \ w\in\mathbb{R}^{|N|}_{++}.$$
It follows from the equation (\ref{eq:36}) and the fact that $y_i=0$ for all $i\in \{1,2,\cdots,n\}\setminus N$ that $$(\mathcal{A}y^{m-1})_i\geq0,\ \mbox{ for all } i\in \{1,2,\cdots,n\}\setminus N.$$
By the definition (\ref{eq:12}) of $\mathcal{A}y^{m-1}$, the conclusion (\ref{eq:32}) holds.
Now we show the sufficiency. Suppose that there exists a nonempty subset $N\subseteq \{1, 2, \cdots, n\}$ and a vector $w\in\mathbb{R}^{|N|}$
satisfying (\ref{eq:31}) and (\ref{eq:32}). Then the vector $y$ defined by (\ref{eq:33}) is a non-zero vector in $\mathbb{R}^{n}_+$ such that $(\lambda,y)$ satisfies (\ref{eq:34}). The desired conclusion follows.
\end{proof}
Using the same proof techniques as in the proof of Theorem \ref{th:31}, with appropriate changes in the
inequalities or equalities ($y^{[m-1]}$ is replaced by $(y^Ty)^{\frac{m-2}2} y$ and so on), we can obtain the following conclusions about the Pareto $Z$-eigenvalue of $\mathcal{A}$.
\begin{thm} \label{th:32} Let $\mathcal{A}$ be an $m$-order and $n$-dimensional tensor.
A real number $\mu$ is a Pareto $Z$-eigenvalue of $\mathcal{A}$ if and only if there exists a nonempty subset $N\subseteq \{1, 2, \cdots, n\}$ and a vector $w\in\mathbb{R}^{|N|}$
such that
\begin{align}
\mathcal{A}^{N}w^{m-1}&=\mu (w^Tw)^{\frac{m-2}2} w\label{eq:39},\ \ w\in\mathbb{R}^{|N|}_{++}\\
\sum\limits_{i_2,\cdots ,i_m\in N}a_{ii_2\cdots i_m}w_{i_2}w_{i_3}\cdots w_{i_m}&\geq0\mbox{ for }i\in \{1,2,\cdots,n\}\setminus N\label{eq:310}
\end{align}
In such a case, the vector $y\in\mathbb{R}^{n}_+$ defined by
\begin{equation}\label{eq:311} y_i=\begin{cases} w_i, \ \ i\in N\\
0,\ \ i\in \{1,2,\cdots,n\}\setminus N\end{cases}\end{equation}
is a Pareto $Z$-eigenvector of $\mathcal{A}$ associated to the real number $\mu$.
\end{thm}
Following Theorems \ref{th:31} and \ref{th:32}, the following results are obvious.
\begin{cor} \label{co:33} Let $\mathcal{A}$ be an $m$-order and $n$-dimensional tensor.
If a real number $\lambda$ is a Pareto $H$-eigenvalue ($Z$-eigenvalue) of $\mathcal{A}$, then $\lambda$ is an $H^{++}$-eigenvalue ($Z^{++}$-eigenvalue, respectively) of some $|N|$-dimensional principal sub-tensor of $\mathcal{A}$.
\end{cor}
Since the definition of an $H^{+}$-eigenvalue ($Z^{+}$-eigenvalue) $\lambda$ of $\mathcal{A}$ means that $\mathcal{A}x^{m-1}-\lambda x^{[m-1]}=0$ ($\mathcal{A}x^{m-1}-\lambda (x^Tx)^{\frac{m}2-1} x=0$, respectively) for some non-zero vector $x\geq0$, the following conclusions are trivial.
\begin{prop} \label{pr:34} Let $\mathcal{A}$ be an $m$-order and $n$-dimensional tensor. Then
\begin{itemize}
\item[(i)] each $H^{+}$-eigenvalue ($Z^{+}$-eigenvalue) of $\mathcal{A}$ is a Pareto $H$-eigenvalue ($Z$-eigenvalue, respectively) of $\mathcal{A}$;
\item[(ii)] the Pareto $H$-eigenvalues ($Z$-eigenvalues) of a diagonal tensor $\mathcal{A}$ coincide with its diagonal
entries. In particular, an $n$-dimensional diagonal tensor may have at most
$n$ distinct Pareto $H$-eigenvalues ($Z$-eigenvalues).
\end{itemize}
\end{prop}
It follows from the above results that some Pareto $H$-eigenvalues ($Z$-eigenvalues) of a higher order tensor may be calculated by means of $H^{++}$-eigenvalues ($Z^{++}$-eigenvalues, respectively) of lower dimensional tensors.
\begin{ex} \label{ex:1} Let $\mathcal{A}$ be a $4$-order and $2$-dimensional tensor. Suppose that $a_{1111}=1, a_{2222}=2$, $a_{1122}+a_{1212}+a_{1221}=-1$, $a_{2121}+a_{2112}+a_{2211}=-2$, and other $a_{i_1i_2i_3i_4}=0$. Then $$\mathcal{A}x^4=x_1^4+2x_2^4-3x_1^2x_2^2$$ $$\mathcal{A}x^3=\left(\begin{aligned}x_1^3-&x_1x_2^2\\2x_2^3-&2x_1^2x_2\end{aligned}\right)$$
When $N=\{1, 2\}$, the principal sub-tensor $\mathcal{A}^N$ is just $\mathcal{A}$ itself. $\lambda_1=0$ is an $H^{++}$-eigenvalue of $\mathcal{A}$ with a corresponding eigenvector $x^{(1)}=(\frac{\sqrt[4]{8}}2,\frac{\sqrt[4]{8}}2)^T$, and so it follows from Theorem \ref{th:31} that $\lambda_1=0$ is a Pareto $H$-eigenvalue with Pareto $H$-eigenvector $x^{(1)}=(\frac{\sqrt[4]{8}}2,\frac{\sqrt[4]{8}}2)^T$.
$\lambda_2=0$ is a $Z^{++}$-eigenvalue of $\mathcal{A}$ with a corresponding eigenvector $x^{(2)}=(\frac{\sqrt2}2,\frac{\sqrt2}2)^T$, and so it follows from Theorem \ref{th:32} that $\lambda_2=0$ is a Pareto $Z$-eigenvalue of $\mathcal{A}$ with Pareto $Z$-eigenvector $x^{(2)}=(\frac{\sqrt2}2,\frac{\sqrt2}2)^T$.
When $N=\{1\}$, the $1$-dimensional principal sub-tensor $\mathcal{A}^{N}=1$. Obviously, $\lambda_3=1$ is both an $H^{++}$-eigenvalue and a $Z^{++}$-eigenvalue of $\mathcal{A}^{N}$ with a corresponding eigenvector $w=1$ and $a_{2111}w^3=0$, and hence it follows from Theorems \ref{th:31} and \ref{th:32} that $\lambda_3=1$ is both a Pareto $H$-eigenvalue and a Pareto $Z$-eigenvalue of $\mathcal{A}$ with a corresponding eigenvector $x^{(3)}=(1,0)^T$.
Similarly, when $N=\{2\}$, the $1$-dimensional principal sub-tensor $\mathcal{A}^{N}=2$. Clearly, $\lambda_4=2$ is both an $H^{++}$-eigenvalue and a $Z^{++}$-eigenvalue of $\mathcal{A}^{N}$ with a corresponding eigenvector $w=1$ and $a_{1222}w^3=0$, and so $\lambda_4=2$ is both a Pareto $H$-eigenvalue and a Pareto $Z$-eigenvalue of $\mathcal{A}$ with a corresponding eigenvector $x^{(4)}=(0,1)^T$.
\end{ex}
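The assertions of Example \ref{ex:1} are easy to confirm with the sketches
above (an illustration of ours; note that only the index sums
$a_{1122}+a_{1212}+a_{1221}$ and $a_{2121}+a_{2112}+a_{2211}$ enter
$\mathcal{A}x^3$, so any entry assignment with the stated sums works):
\begin{verbatim}
# Numerical verification of the Pareto H-eigenpairs in the example.
import numpy as np

A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0], A[1, 1, 1, 1] = 1.0, 2.0
A[0, 0, 1, 1] = A[0, 1, 0, 1] = A[0, 1, 1, 0] = -1 / 3   # sum = -1
A[1, 1, 0, 0] = A[1, 0, 1, 0] = A[1, 0, 0, 1] = -2 / 3   # sum = -2
c = 8 ** 0.25 / 2
for lam, x in [(0.0, np.array([c, c])),
               (1.0, np.array([1.0, 0.0])),
               (2.0, np.array([0.0, 1.0]))]:
    assert is_pareto_H(A, lam, x)
\end{verbatim}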
\begin{ex} \label{ex:2} Let $\mathcal{A}$ be a $3$-order and $2$-dimensional tensor. Suppose that $a_{111}=1, a_{222}=2$, $a_{122}=a_{212}=a_{221}=\frac13$, and $a_{112}=a_{121}=a_{211}=-\frac23$. Then $$\mathcal{A}x^3=x_1^3+x_1x_2^2-2x_1^2x_2+2x_2^3$$ $$\mathcal{A}x^2=\left(\begin{aligned}x_1^2&+\frac13x_2^2-\frac43x_1x_2\\2x_2^2&+\frac23x_1x_2-\frac23x_1^2\end{aligned}\right)$$
When $N=\{1\}$, the $1$-dimensional principal sub-tensor $\mathcal{A}^{N}=1$. Obviously, $\lambda_1=1$ is both an $H^{++}$-eigenvalue and a $Z^{++}$-eigenvalue of $\mathcal{A}^{N}$ with a corresponding eigenvector $w=1$ and $a_{211}w^2=-\frac23<0$, and so $\lambda_1=1$ is neither a Pareto $H$-eigenvalue nor a Pareto $Z$-eigenvalue of $\mathcal{A}$.
When $N=\{2\}$, the $1$-dimensional principal sub-tensor $\mathcal{A}^{N}=2$. Clearly, $\lambda_2=2$ is both an $H^{++}$-eigenvalue and a $Z^{++}$-eigenvalue of $\mathcal{A}^{N}$ with a corresponding eigenvector $w=1$ and $a_{122}w^2=\frac13>0$, and so $\lambda_2=2$ is both a Pareto $H$-eigenvalue and a Pareto $Z$-eigenvalue of $\mathcal{A}$ with a corresponding eigenvector $x^{(2)}=(0,1)^T$. But $\lambda=2$ is neither an $H^+$-eigenvalue nor a $Z^+$-eigenvalue of $\mathcal{A}$.
\end{ex}
\begin{rem}
Example \ref{ex:2} reveals that a Pareto $H$-eigenvalue ($Z$-eigenvalue) of a tensor $\mathcal{A}$ need not be an $H^+$-eigenvalue ($Z^{+}$-eigenvalue), even when $\mathcal{A}$ is symmetric.
\end{rem}
\section{\bf Constrained minimization and Pareto eigenvalue}
Let $\mathcal{A}$ be a symmetric tensor of order $m$ and dimension $n$ and $\|x\|_k=(|x_1|^k+|x_2|^k+\cdots+|x_n|^k)^{\frac1k} $ for $k\geq1$. Denote by $e^{(i)}=(e^{(i)}_1,e^{(i)}_2,\cdots,e^{(i)}_n)^T$ the ith unit vector in $\mathbb{R}^n$, i.e.,
$$e^{(i)}_j=\begin{cases}1 &\mbox{ if }i=j\\ 0 & \mbox{ if }i\neq j\end{cases}\mbox{ for }i,j\in\{1,2,\cdots,n\}.$$
We consider the constrained minimization problem \begin{equation}
\gamma(\mathcal{A})=\min\{\mathcal{A}x^m;\ x\geq0\mbox{ and }\|x\|_m=1 \}.\label{eq:41} \end{equation}
\begin{thm} \label{th:41} Let $\mathcal{A}$ be an $m$-order and $n$-dimensional symmetric tensor. If $$\lambda(\mathcal{A})=\min\{\lambda; \lambda \mbox{ is a Pareto $H$-eigenvalue of }\mathcal{A}\},$$ then $\gamma(\mathcal{A})=\lambda(\mathcal{A})$.
\end{thm}
\begin{proof}
Let $\lambda$ be a Pareto $H$-eigenvalue of $\mathcal{A}$. Then there exists a non-zero vector $y\in\mathbb{R}^n$ such that
$$ \mathcal{A}y^m=\lambda y^Ty^{[m-1]},\ y\geq0,$$
and so
\begin{equation} \mathcal{A}y^m=\lambda \sum_{i=1}^ny_i^m=\lambda \|y\|_m^m\mbox{ and }\|y\|_m>0.\label{eq:42}\end{equation}
Then we have $$\lambda=\mathcal{A}(\frac{y}{\|y\|_m})^m\mbox{ and } \|\frac{y}{\|y\|_m}\|_m=1.$$
From (\ref{eq:41}), it follows that $\gamma(\mathcal{A}) \leq\lambda.$ Since $\lambda$ is arbitrary,
we have $$\gamma(\mathcal{A})\leq\lambda(\mathcal{A}).$$
Now we show $\gamma(\mathcal{A})\geq\lambda(\mathcal{A}).$ Let $S=\{x\in\mathbb{R}^n; x\geq0\mbox{ and }\|x\|_m=1\}.$ It follows from the continuity of the homogeneous polynomial $\mathcal{A}x^m$ and the compactness of the set $S$ that there exists a $v\in S$ such that
\begin{equation}\gamma(\mathcal{A})=\mathcal{A}v^m,\ v\geq0,\ \|v\|_m=1.\label{eq:43}\end{equation}
Let $g(x)=\mathcal{A}x^m-\gamma(\mathcal{A})x^Tx^{[m-1]}$ for all $x\in\mathbb{R}^n$. We claim that for all $x\geq0,$ $g(x)\geq0.$ Suppose not; then there exists a non-zero vector $y\geq0$ such that $$g(y)=\mathcal{A}y^m-\gamma(\mathcal{A})\sum_{i=1}^ny_i^m<0,$$ and hence $\gamma(\mathcal{A})\leq\mathcal{A}(\frac{y}{\|y\|_m})^m<\gamma(\mathcal{A}), $ a contradiction. Thus we have
\begin{equation} g(x)=\mathcal{A}x^m-\gamma(\mathcal{A})x^Tx^{[m-1]}\geq0\mbox{ for all }x\in \mathbb{R}^n_+.\label{eq:44}\end{equation}
For each $i\in\{1,2,\cdots,n\}$, we define a one-variable function $$f(t)=g(v+t e^{(i)})\mbox{ for all }t\in\mathbb{R}^1.$$ Clearly, $f(t)$ is continuous and $v+t e^{(i)}\in \mathbb{R}^n_+$ for all $t\geq 0.$ It follows from (\ref{eq:43}) and (\ref{eq:44}) that $$f(0)=g(v)=0 \mbox{ and } f(t)\geq0\mbox{ for all }t\geq0.$$
From the necessary condition for an extremum of a one-variable function, it follows that the right-hand derivative $f'_+(0)\geq0$, and hence
$$\begin{aligned}f'_+(0)=(e^{(i)})^T\nabla g(v) =&m(e^{(i)})^T(\mathcal{A}v^{m-1}-\gamma(\mathcal{A}) v^{[m-1]})\\
=&m(\mathcal{A}v^{m-1}-\gamma(\mathcal{A}) v^{[m-1]})_i\geq0.\end{aligned}$$ So we have $$(\mathcal{A}v^{m-1}-\gamma(\mathcal{A}) v^{[m-1]})_i\geq0, \mbox{ for } i\in\{1,2,\cdots,n\}.$$
Therefore, we obtain
\begin{align} f(0)=g(v)=\mathcal{A}v^m-\gamma(\mathcal{A}) v^Tv^{[m-1]}=&0\label{eq:45} \\
\mathcal{A}v^{m-1}-\gamma(\mathcal{A}) v^{[m-1]}\geq&0\label{eq:46}\\v\geq&0\nonumber
\end{align}
Namely, $\gamma(\mathcal{A})$ is a Pareto $H$-eigenvalue of $\mathcal{A}$, and hence $\gamma(\mathcal{A})\geq\lambda(\mathcal{A}),$ as required. \end{proof}
It follows from the proof of the inequality $\gamma(\mathcal{A})\geq\lambda(\mathcal{A})$ in Theorem \ref{th:41} that $\gamma(\mathcal{A})$ is a Pareto $H$-eigenvalue of $\mathcal{A}$, which implies the existence of a Pareto $H$-eigenvalue of a symmetric tensor $\mathcal{A}$.
\begin{thm} \label{th:42} If an $m$-order and $n$-dimensional tensor $\mathcal{A}$ is symmetric, then $\mathcal{A}$ has at least one Pareto $H$-eigenvalue $\gamma(\mathcal{A})=\min\limits_{ x\geq0 \atop \|x\|_m=1 }\mathcal{A}x^m$.
\end{thm}
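Although Theorem \ref{th:42} is purely existential, $\gamma(\mathcal{A})$
can be estimated in practice. A hedged sketch (ours): the problem is
non-convex, so a multistart local search only returns an upper estimate of
$\gamma(\mathcal{A})$, that is, of the least Pareto $H$-eigenvalue:
\begin{verbatim}
# Multistart local minimization of A x^m over {x >= 0, ||x||_m = 1};
# heuristic only: the returned value is an upper estimate of gamma(A).
import numpy as np
from scipy.optimize import minimize

def gamma_estimate(A, trials=50, seed=0):
    m, n = A.ndim, A.shape[0]
    rng = np.random.default_rng(seed)

    def f(z):
        x = np.abs(z)                            # enforce x >= 0
        x = x / (np.linalg.norm(x, m) + 1e-300)  # enforce ||x||_m = 1
        return apply_tensor(A, x)[1]

    return min(minimize(f, rng.random(n), method="Nelder-Mead").fun
               for _ in range(trials))
\end{verbatim}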
Since $(x^Tx)^{\frac{m}2}=\|x\|_2^m$, using the same proof techniques as in the proof of Theorem \ref{th:41}, with appropriate changes in the
inequalities or equalities ($x^Tx^{[m-1]}$ and $y^{[m-1]}$ are respectively replaced by $(x^Tx)^{\frac{m}2}$ and $(y^Ty)^{\frac{m-2}2} y$), we can obtain the following conclusions about the Pareto $Z$-eigenvalue of a symmetric tensor $\mathcal{A}$.
\begin{thm} \label{th:43} Let $\mathcal{A}$ be an $m$-order and $n$-dimensional symmetric tensor. Then $\mathcal{A}$ has at least one Pareto $Z$-eigenvalue $\mu(\mathcal{A})=\min\limits_{x\geq0 \atop \|x\|_2=1 }\mathcal{A}x^m$. What's more,
\begin{equation}\label{eq:47}\mu(\mathcal{A})=\min\{\mu; \mu \mbox{ is Pareto $Z$-eigenvalue of }\mathcal{A}\}.\end{equation}
\end{thm}
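The minima in Theorems \ref{th:42} and \ref{th:43} can be approximated numerically. The following Python sketch is an illustration only: it assumes NumPy and SciPy are available, and the diagonal test tensor is chosen purely for demonstration. For this tensor one has $\gamma(\mathcal{A})=1$ (since $\mathcal{A}x^4=\|x\|_4^4$) and $\mu(\mathcal{A})=\frac12$ (attained at $x=(\frac{1}{\sqrt2},\frac{1}{\sqrt2})^T$).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def tensor_apply(A, x):
    # evaluate the homogeneous polynomial A x^m by contracting
    # the m-order tensor A with x in every slot
    T = A
    for _ in range(A.ndim):
        T = np.tensordot(T, x, axes=([0], [0]))
    return float(T)

def pareto_min(A, norm_m, restarts=20, seed=0):
    # approximate min of A x^m over {x >= 0, ||x||_norm_m = 1}
    n = A.shape[0]
    cons = ({'type': 'eq',
             'fun': lambda x: np.sum(np.abs(x) ** norm_m) - 1.0},)
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(restarts):
        x0 = rng.random(n)
        x0 /= np.sum(x0 ** norm_m) ** (1.0 / norm_m)
        res = minimize(lambda x: tensor_apply(A, x), x0,
                       bounds=[(0.0, None)] * n, constraints=cons)
        if res.success:
            best = min(best, res.fun)
    return best

m, n = 4, 2
A = np.zeros((n,) * m)                 # diagonal test tensor (illustrative)
A[0, 0, 0, 0] = A[1, 1, 1, 1] = 1.0

print(pareto_min(A, norm_m=m))         # gamma(A), Theorem th:42; expect 1.0
print(pareto_min(A, norm_m=2))         # mu(A),    Theorem th:43; expect 0.5
\end{verbatim}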
In 1952, Motzkin \cite{TSM} introduced the concept of copositive matrices, which plays an important role in applied mathematics and graph theory. A real symmetric matrix $A$ is said to
be (i) {\em copositive} if $x\geq 0$ implies $x^TAx\geq0$; (ii) {\em strictly copositive} if $x\geq 0$ and $x\neq0$ implies $x^TAx>0$. Recently, Qi \cite{LQ5} extended this concept to higher order symmetric tensors and showed that copositive tensors enjoy nice properties analogous to those of copositive matrices.
Let $\mathcal{A}$ be a real symmetric tensor of order $m$ and dimension $n$. $\mathcal{A}$ is said to be \begin{itemize}
\item[(i)] {\em copositive } if $\mathcal{A}x^m\geq0$ for all $x\in \mathbb{R}^n_+$; \item[(ii)] {\em strictly copositive} if $\mathcal{A}x^m>0$ for all $x\in \mathbb{R}^n_+\setminus\{0\}$.\end{itemize}
Let $\|\cdot\|$ denote any norm on $\mathbb{R}^n$. Obviously, we have the following equivalent characterization of the (strict) copositivity of a symmetric tensor with respect to any norm on $\mathbb{R}^n$; see Song and Qi \cite{SQ} for a detailed proof.
\begin{lem}(Song and Qi \cite{SQ}) \label{le:44} Let $\mathcal{A}$ be a symmetric tensor of order $m$ and dimension $n$. Then we have
\begin{itemize}
\item[(i)] $\mathcal{A}$ is copositive if and only if $\mathcal{A}x^m\geq0$ for all $x\in \mathbb{R}^n_+$ with $\|x\|=1$;
\item[(ii)] $\mathcal{A}$ is strictly copositive if and only if $\mathcal{A}x^m>0$ for all $x\in \mathbb{R}^n_+$ with $\|x\|=1$;
\end{itemize}
\end{lem}
As immediate consequences of the above results, we obtain the following characterizations of copositive (strictly copositive) tensors.
\begin{cor}\label{co:45} Let $\mathcal{A}$ be an $m$-order, $n$-dimensional symmetric tensor. Then \begin{itemize}
\item[(a)] $\mathcal{A}$ always has a Pareto $H$-eigenvalue. $\mathcal{A}$ is copositive (strictly copositive) if and only
if all of its Pareto $H$-eigenvalues are nonnegative (positive, respectively).
\item[(b)] $\mathcal{A}$ always has a Pareto $Z$-eigenvalue. $\mathcal{A}$ is copositive (strictly copositive) if and only
if all of its Pareto $Z$-eigenvalues are nonnegative (positive, respectively).\end{itemize}
\end{cor}
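Corollary \ref{co:45} immediately suggests a numerical copositivity test: $\mathcal{A}$ is copositive exactly when $\gamma(\mathcal{A})\geq 0$, and strictly copositive exactly when $\gamma(\mathcal{A})>0$. A minimal sketch along the lines of the previous listing follows; it is a heuristic only, since local minimization need not find the global minimum.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def is_copositive(A, tol=1e-8, restarts=20, seed=1):
    # heuristic test: estimate gamma(A) = min A x^m over
    # {x >= 0, sum x_i^m = 1} and check its sign (Corollary co:45)
    m, n = A.ndim, A.shape[0]
    def f(x):
        T = A
        for _ in range(m):
            T = np.tensordot(T, x, axes=([0], [0]))
        return float(T)
    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x ** m) - 1.0},)
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(restarts):
        x0 = rng.random(n)
        x0 /= np.sum(x0 ** m) ** (1.0 / m)
        res = minimize(f, x0, bounds=[(0.0, None)] * n, constraints=cons)
        if res.success:
            best = min(best, res.fun)
    return best >= -tol
\end{verbatim}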
Now we give an example of solving the constrained minimization problem for a homogeneous polynomial and of testing the (strict) copositivity of a symmetric tensor $\mathcal{A}$ with the help of the above results.
\begin{ex} \label{ex:3} Let $\mathcal{A}$ be a $4$-order and $2$-dimensional tensor. Suppose that $a_{1111}=a_{2222}=1$, $a_{1112}=a_{1211}=a_{1121}=a_{2111}=t$, and all other $a_{i_1i_2i_3i_4}=0$. Then $$\mathcal{A}x^4=x_1^4+x_2^4+4tx_1^3x_2\quad\mbox{and}\quad \mathcal{A}x^3=\left(\begin{aligned}x_1^3+&3tx_1^2x_2\\x_2^3+&tx_1^3\end{aligned}\right).$$
When $N=\{1, 2\}$, the principal sub-tensor $\mathcal{A}^N$ is just $\mathcal{A}$ itself. $\lambda_1=1+\sqrt[4]{27}t$ is an $H^{++}$-eigenvalue of $\mathcal{A}$ with a corresponding eigenvector $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$. Then it follows from Theorems \ref{th:31} and \ref{th:32} that $\lambda_1=1+\sqrt[4]{27}t$ is a Pareto $H$-eigenvalue with Pareto $H$-eigenvector $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$.
When $N=\{1\}$, the $1$-dimensional principal sub-tensor is $\mathcal{A}^{N}=1$. Obviously, $\lambda_2=1$ is both an $H^{++}$-eigenvalue and a $Z^{++}$-eigenvalue of $\mathcal{A}^{N}$ with a corresponding eigenvector $w=1$, and $a_{2111}w^3=t$. Then, when $t>0$, it follows from Theorems \ref{th:31} and \ref{th:32} that $\lambda_2=1$ is both a Pareto $H$-eigenvalue and a Pareto $Z$-eigenvalue of $\mathcal{A}$ with a corresponding eigenvector $x^{(2)}=(1,0)^T$; when $t<0$, $\lambda_2=1$ is neither a Pareto $H$-eigenvalue nor a Pareto $Z$-eigenvalue of $\mathcal{A}$.
Similarly, when $N=\{2\}$, the $1$-dimensional principal sub-tensor is $\mathcal{A}^{N}=1$. Clearly, $\lambda_3=1$ is both an $H^{++}$-eigenvalue and a $Z^{++}$-eigenvalue of $\mathcal{A}^{N}$ with a corresponding eigenvector $w=1$ and $a_{1222}w^3=0$, and so $\lambda_3=1$ is both a Pareto $H$-eigenvalue and a Pareto $Z$-eigenvalue of $\mathcal{A}$ with a corresponding eigenvector $x^{(3)}=(0,1)^T$.
So the following conclusions are easily obtained:
\begin{itemize}
\item[(i)] Let $t<-\frac{1}{\sqrt[4]{27}}$. Then $\lambda_1=1+\sqrt[4]{27}t<0$ and $\lambda_3=1$ are Pareto $H$-eigenvalues of $\mathcal{A}$ with Pareto $H$-eigenvectors $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$ and $x^{(3)}=(0,1)^T$, respectively. It follows from Theorems \ref{th:41} and \ref{th:42} that $$\gamma(\mathcal{A})=\min\limits_{x\geq0 \atop \|x\|_4=1}\mathcal{A}x^4=\min\{\lambda_1,\lambda_3\}=1+\sqrt[4]{27}t<0.$$ The polynomial $\mathcal{A}x^4$ attains its minimum value at $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$. It follows from Corollary \ref{co:45} that $\mathcal{A}$ is not copositive.
\item[(ii)] Let $t=-\frac{1}{\sqrt[4]{27}}$. Then $\lambda_1=1+\sqrt[4]{27}t=0$ and $\lambda_3=1$ are Pareto $H$-eigenvalues of $\mathcal{A}$ with Pareto $H$-eigenvectors $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$ and $x^{(3)}=(0,1)^T$, respectively. It follows from Theorems \ref{th:41} and \ref{th:42} that $$\gamma(\mathcal{A})=\min\limits_{x\geq0 \atop \|x\|_4=1}\mathcal{A}x^4=\min\{\lambda_1,\lambda_3\}=0.$$ The polynomial $\mathcal{A}x^4$ attains its minimum value at $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$. It follows from Corollary \ref{co:45} that $\mathcal{A}$ is copositive.
\item[(iii)] Let $0>t>-\frac{1}{\sqrt[4]{27}}$. Clearly, $0<1+\sqrt[4]{27}t<1$. Then $\lambda_1=1+\sqrt[4]{27}t$ and $\lambda_3=1$ are Pareto $H$-eigenvalues of $\mathcal{A}$. It follows from Theorems \ref{th:41} and \ref{th:42} that $$\gamma(\mathcal{A})=\min\limits_{x\geq0 \atop \|x\|_4=1}\mathcal{A}x^4=\min\{\lambda_1,\lambda_3\}=1+\sqrt[4]{27}t>0.$$ The polynomial $\mathcal{A}x^4$ attains its minimum value at $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$. It follows from Corollary \ref{co:45} that $\mathcal{A}$ is strictly copositive.
\item[(iv)] Let $t=0$. Then $\lambda_1=\lambda_2=\lambda_3=1$ are Pareto $H$-eigenvalues of $\mathcal{A}$ with Pareto $H$-eigenvectors $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$, $x^{(2)}=(1,0)^T$ and $x^{(3)}=(0,1)^T$, respectively. It follows from Theorems \ref{th:41} and \ref{th:42} that $$\gamma(\mathcal{A})=\min\limits_{x\geq0 \atop \|x\|_4=1}\mathcal{A}x^4=\min\{\lambda_1,\lambda_2,\lambda_3\}=1>0.$$
The polynomial $\mathcal{A}x^4$ attains its minimum value at $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$ or $x^{(2)}=(1,0)^T$ or $x^{(3)}=(0,1)^T$. It follows from Corollary \ref{co:45} that $\mathcal{A}$ is strictly copositive.
\item[(v)] Let $t>0$. Then $\lambda_1=1+\sqrt[4]{27}t$ and $\lambda_2=\lambda_3=1$ are Pareto $H$-eigenvalues of $\mathcal{A}$ with Pareto $H$-eigenvectors $x^{(1)}=(\sqrt[4]{\frac34},\sqrt[4]{\frac14})^T$, $x^{(2)}=(1,0)^T$ and $x^{(3)}=(0,1)^T$, respectively. It follows from Theorems \ref{th:41} and \ref{th:42} that $$\gamma(\mathcal{A})=\min\limits_{x\geq0 \atop \|x\|_4=1}\mathcal{A}x^4=\min\{\lambda_1,\lambda_2,\lambda_3\}=1>0.$$
The polynomial $\mathcal{A}x^4$ attains its minimum value at $x^{(2)}=(1,0)^T$ or $x^{(3)}=(0,1)^T$. It follows from Corollary \ref{co:45} that $\mathcal{A}$ is strictly copositive.
\end{itemize}
\end{ex}
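The five cases above are easy to double-check numerically. The sketch below (assuming NumPy; the grid resolution is arbitrary) parametrizes the feasible set $\{x\geq0,\ \|x\|_4=1\}$ by $x_1=s^{1/4}$, $x_2=(1-s)^{1/4}$ with $s\in[0,1]$ and compares the resulting minimum of $\mathcal{A}x^4$ with the value $\min\{1,\ 1+\sqrt[4]{27}\,t\}$ obtained in the example.
\begin{verbatim}
import numpy as np

def gamma_example(t, num=200001):
    # on {x >= 0, x1^4 + x2^4 = 1} write x1 = s^(1/4), x2 = (1-s)^(1/4)
    s = np.linspace(0.0, 1.0, num)
    x1, x2 = s ** 0.25, (1.0 - s) ** 0.25
    vals = x1**4 + x2**4 + 4.0 * t * x1**3 * x2   # = A x^4 in Example ex:3
    return vals.min()

for t in (-1.0, -27.0 ** -0.25, -0.3, 0.0, 0.5):
    print(t, gamma_example(t), min(1.0, 1.0 + 27.0 ** 0.25 * t))
\end{verbatim}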
\bibliographystyle{amsplain}
\section{Introduction}\label{sec-1}
A spectral measure is a projection operator-valued measure defined on a measurable space,
where a projection operator means an orthogonal projection operator on some Hilbert space.
Given a bounded measurable function $f$ and a spectral measure $\pi$ on a measurable space $(\mathfrak{X},\mathscr{F})$,
one can use the usual idea of Lebesgue integration to define an integral $\int_{\mathfrak{X}}fd\pi$,
which is a bounded linear operator on the Hilbert space associated with the spectral measure $\pi$ (see, e.g. \cite{parth}).
Such integrals are usually known as spectral integrals, which play an important role in the theory of operators \cite{helson, parth}.
Generalized functions (functionals) are continuous linear functionals on fundamental function spaces.
For instance, Schwartz generalized functions are continuous linear functionals on the Schwartz rapidly decreasing function space \cite{gel-1},
and Hida generalized functionals are continuous linear functionals on the Hida testing functional space \cite{hida,huang,kuo,obata}.
As is well known, generalized functions (functionals) have wide application in mathematical physics
(see, e.g. \cite{albe,barhoumi,benth,ji,huyz,lee,nunno} and references therein).
Given a generalized function (functional) $\Phi$ and a spectral measure $\pi$ on an appropriate measurable space $(\mathfrak{X},\mathscr{F})$,
one natural question arises: how can one define an integral $\int_{\mathfrak{X}}\Phi d\pi$ both reasonably and rigorously?
Such integrals are of physical significance \cite{accardi}.
However, the usual idea of Lebesgue integration does not work in this case,
which suggests that a new method is needed to define such an integral.
Accardi \textit{et al.} \cite{accardi} considered such a question in the context of Hilbert space theory.
In 2005, by using Hida's white noise analysis \cite{hida}, Wang \textit{et al.} \cite{wang-huang} introduced integrals
of Schwartz generalized functions with respect to a spectral measure on the Borel space $(\mathbb{R},\mathscr{B}(\mathbb{R}))$,
which were continuous linear operators from the space of Hida testing functionals to the space of Hida generalized functionals.
In particular, they gave a rigorous definition to the delta function $\delta(Q)$ of an observable $Q$ used in the physical literature.
It is known \cite{hida} that Hida's white noise analysis is essentially a theory of infinite dimensional stochastic analysis on
functionals of Brownian motion (also known as Gaussian white noise functionals).
Bernoulli functionals (also known as Rademacher functionals) are measurable functions defined on the Bernoulli space,
and can be viewed as functionals of Bernoulli process.
Much attention has been paid to Bernoulli functionals in the past fifteen years (see, e.g.\cite{krok,nourdin,privault,zheng} and references therein).
In 2014, by using the canonical orthonormal basis for the space of square integrable Bernoulli functionals,
Wang and Zhang \cite{wang-zhang} constructed a Gel'fand triple over the Bernoulli space,
which actually introduced Bernoulli generalized functionals.
The following year, Wang and Chen \cite{wang-chen-1} obtained a characterization theorem
for generalized functionals of discrete-time normal martingale via the Fock transform,
which covers the case of Bernoulli generalized functionals.
Recently, they have further considered operators acting on generalized functionals of discrete-time normal martingale \cite{wang-chen-2}.
Let $\mathcal{S}\subset \mathcal{L}^2 \subset \mathcal{S}^*$ be the Gel'fand triple over the Bernoulli space (see Section~\ref{sec-2} for details),
where elements of $\mathcal{S}^*$ are called Bernoulli generalized functionals.
In this paper, motivated by the work of \cite{wang-huang,wang-chen-2}, we would like to define integrals of Bernoulli generalized functionals
(namely elements of $\mathcal{S}^*$) with respect to a spectral measure in the framework of $\mathcal{S}\subset \mathcal{L}^2 \subset \mathcal{S}^*$,
and examine their fundamental properties.
The paper is organized as follows. Section~\ref{sec-2} describes the Gel'fand triple $\mathcal{S}\subset \mathcal{L}^2 \subset \mathcal{S}^*$
over the Bernoulli space, which is the framework where we work.
In Section~\ref{sec-3}, we prove a technical theorem on the regularity of continuous linear operators from the space $\mathcal{S}$ of Bernoulli testing functionals to the space $\mathcal{S}^*$ of Bernoulli generalized functionals.
In Section~\ref{sec-4}, we first introduce a notion of $\mathcal{S}$-smooth spectral measures on the Bernoulli space, where $\mathcal{S}$ refers to
the space of Bernoulli testing functionals, and then, with the 2D-Fock transform as the main tool, we define
integrals of Bernoulli generalized functionals (namely elements of $\mathcal{S}^*$) with respect to an $\mathcal{S}$-smooth spectral measure,
which are actually continuous linear operators from $\mathcal{S}$ to $\mathcal{S}^*$.
We examine fundamental properties of these integrals, and establish a convergence theorem for sequences of these integrals.
Finally, in Section~\ref{sec-5}, we show an example of an $\mathcal{S}$-smooth spectral measure and Bernoulli generalized functionals that
are integrable with respect to this spectral measure. Some further results are also obtained therein.
\vskip 2mm
\textbf{Notation and conventions.} Throughout, $\mathbb{N}$ always denotes the set of all non-negative integers.
Unless otherwise stated, letters $j$, $k$ and $n$ stand for nonnegative integers.
We denote by $\Gamma$ the finite power set of $\mathbb{N}$, namely
$\Gamma=\{\sigma \mid \sigma\subset \mathbb{N}, \# \sigma<\infty\}$
where $\# \sigma$ means the cardinality of $\sigma$ as a set.
\section{Gel'fand triple over Bernoulli space}\label{sec-2}
This section describes the Gel'fand triple over the Bernoulli space, which is the framework where we work.
Recall that $\mathbb{N}$ denotes the set of all nonnegative integers.
Let $\Sigma=\{-1,1\}^{\mathbb{N}}$ be the set of all mappings $\omega\colon \mathbb{N} \rightarrow \{-1,1\}$, and
$(\zeta_n)_{n\geq 0}$ the sequence of canonical projections on $\Sigma$ given by
\begin{equation}\label{eq-2-1}
\zeta_n(\omega)=\omega(n),\quad \omega\in \Sigma.
\end{equation}
Denote by $\mathscr{A}$ the $\sigma$-field on $\Sigma$ generated by the sequence $(\zeta_n)_{n\geq 0}$.
Let $(\theta_n)_{n\geq 0}$ be a given sequence of positive numbers with the property that $0 < \theta_n < 1$ for all $n\geq 0$.
It is known \cite{privault} that there exists a unique probability measure $\mu$ on $\mathscr{A}$ such that
\begin{equation}\label{eq-2-2}
\mu\circ(\zeta_{n_1}, \zeta_{n_2}, \cdots, \zeta_{n_k})^{-1}\big\{(\epsilon_1, \epsilon_2, \cdots, \epsilon_k)\big\}
=\prod_{j=1}^k \theta_{n_j}^{\frac{1+\epsilon_j}{2}}(1-\theta_{n_j})^{\frac{1-\epsilon_j}{2}}
\end{equation}
for $n_j\in \mathbb{N}$, $\epsilon_j\in \{-1,1\}$ ($1\leq j \leq k$) with $n_i\neq n_j$ when $i\neq j$
and $k\in \mathbb{N}$ with $k\geq 1$. Then we come to a probability measure space $(\Sigma, \mathscr{A}, \mu)$,
which is referred to as the \textbf{Bernoulli space}. Measurable functions (complex-valued random variables)
on $(\Sigma, \mathscr{A}, \mu)$ are usually known as \textbf{Bernoulli functionals}.
Let $(Z_n)_{n\geq 0}$ be the sequence of independent random variables on $(\Sigma, \mathscr{A}, \mu)$ defined by
\begin{equation}\label{eq-2-3}
Z_n = \frac{\zeta_n + 1 - 2\theta_n}{2\sqrt{\theta_n(1-\theta_n)}},\quad n\geq0.
\end{equation}
Clearly, for each $n\geq 0$, $Z_n$ has a probability distribution of the following form
\begin{equation}\label{eq-2-4}
\mu\left\{Z_n = \sqrt{(1-\theta_n)/\theta_n}\right\}=\theta_n,\quad
\mu\left\{Z_n = -\sqrt{\theta_n/(1-\theta_n)}\right\}=1-\theta_n.
\end{equation}
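The normalization in (\ref{eq-2-3}) makes each $Z_n$ a centered random variable with unit variance; indeed, from (\ref{eq-2-4}) one computes $\mathbb{E}Z_n=0$ and $\mathbb{E}Z_n^2=\theta_n\cdot\frac{1-\theta_n}{\theta_n}+(1-\theta_n)\cdot\frac{\theta_n}{1-\theta_n}=1$. The following Python sketch confirms this empirically; it is an illustration only, assuming NumPy, and the values of $\theta_n$ are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
samples = 200000
theta = np.array([0.2, 0.5, 0.7, 0.35, 0.9])     # any 0 < theta_n < 1

# zeta_n = +1 with probability theta_n and -1 otherwise, cf. (2.2)
zeta = np.where(rng.random((samples, theta.size)) < theta, 1.0, -1.0)
Z = (zeta + 1.0 - 2.0 * theta) / (2.0 * np.sqrt(theta * (1.0 - theta)))

print(Z.mean(axis=0))   # approximately 0 in each column: E[Z_n] = 0
print(Z.var(axis=0))    # approximately 1 in each column: E[Z_n^2] = 1
\end{verbatim}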
Let $\mathcal{L}^2\equiv \mathcal{L}^2(\Sigma, \mathscr{A}, \mu)$ be the space of square integrable Bernoulli functionals.
We denote by $\langle\cdot,\cdot\rangle$ the usual inner product in space $\mathcal{L}^2$
given by
\begin{equation}\label{eq-2-5}
\langle\xi,\eta\rangle=\int_{\Sigma}\overline{\xi}\eta d\mu,\quad \xi,\, \eta \in \mathcal{L}^2,
\end{equation}
and by $\|\cdot\|$ the corresponding norm.
It is known that $(Z_n)_{n\geq 0}$ has the chaotic representation property \cite{privault}.
Thus $\mathcal{L}^2$ has an orthonormal basis of the form $\{Z_{\sigma}\mid \sigma \in \Gamma\}$,
where $Z_{\emptyset}=1$ and
\begin{equation}\label{eq-2-6}
Z_{\sigma} = \prod_{i\in \sigma}Z_i,\quad \text{$\sigma \in \Gamma$, $\sigma \neq \emptyset$},
\end{equation}
where $\Gamma$ is the finite power set of $\mathbb{N}$ (see Section~\ref{sec-1} for the definition of $\Gamma$).
Clearly, as a complex Hilbert space, $\mathcal{L}^2$ is infinite-dimensional and separable.
In what follows, we call $\{Z_{\sigma}\mid \sigma \in \Gamma\}$ the \textbf{canonical orthonormal basis} for $\mathcal{L}^2$.
\begin{lemma}\label{lem-2-1}\cite{wang-chen-1}
Let $\sigma \mapsto \lambda_{\sigma}$ be the positive integer-valued function on $\Gamma$ given by
\begin{equation}\label{eq-2-7}
\lambda_{\sigma}
=\left\{
\begin{array}{ll}
1, & \hbox{$\sigma=\emptyset$, $\sigma\in \Gamma$;} \\
\Pi_{k\in \sigma}(1+k), & \hbox{$\sigma\neq\emptyset$, $\sigma\in \Gamma$.}
\end{array}
\right.
\end{equation}
Then the series $\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-r}$ converges for all real number $r>1$, and moreover,
it holds true that
\begin{equation}\label{eq-2-8}
\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-r} \leq \exp\Big[\sum_{n=1}^{\infty}n^{-r}\Big].
\end{equation}
\end{lemma}
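Since each $\sigma\subset\{0,1,\cdots,n\}$ contributes the factorized term $\prod_{k\in\sigma}(1+k)^{-r}$, the corresponding partial sums of the series in Lemma~\ref{lem-2-1} admit the closed form $\sum_{\sigma\subset\{0,\dots,n\}}\lambda_\sigma^{-r}=\prod_{k=0}^{n}\big(1+(1+k)^{-r}\big)$. The sketch below (an illustration only, assuming NumPy) checks this identity by brute-force enumeration and compares the values with the bound (\ref{eq-2-8}).
\begin{verbatim}
import numpy as np
from itertools import combinations

def lam(sigma):
    # lambda_sigma = prod_{k in sigma} (1 + k), with lambda_emptyset = 1
    out = 1
    for k in sigma:
        out *= 1 + k
    return out

def partial_sum(n, r):
    # brute force: sum lambda_sigma^{-r} over all subsets of {0, ..., n}
    total = 0.0
    for size in range(n + 2):
        for sigma in combinations(range(n + 1), size):
            total += lam(sigma) ** (-r)
    return total

r = 2.0
bound = np.exp(sum(k ** (-r) for k in range(1, 10000)))  # approximates (2.8)
for n in (4, 8, 12):
    closed = np.prod([1.0 + (1 + k) ** (-r) for k in range(n + 1)])
    print(n, partial_sum(n, r), closed, bound)
\end{verbatim}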
Using the function $\sigma\mapsto \lambda_{\sigma}$ introduced above, we can construct a chain of Hilbert spaces
consisting of Bernoulli functionals as follows.
For $\sigma\in \Gamma$, we use $|Z_{\sigma}\rangle\!\langle Z_{\sigma}|$ to mean the Dirac operator associated with
the basis vector $Z_{\sigma}$, which is a $1$-dimensional projection operator on $\mathcal{L}^2$.
Then the countable family $\{\,|Z_{\sigma}\rangle\!\langle Z_{\sigma}|\,\}_{\sigma\in \Gamma}$ forms
a resolution of identity on $\mathcal{L}^2$.
For $p \geq 0$, let $\mathcal{S}_p$ be the domain of the operator
$A_p = \sum_{\sigma\in \Gamma}\lambda_{\sigma}^p|Z_{\sigma}\rangle\!\langle Z_{\sigma}|$, namely
\begin{equation}\label{eq-2-9}
\mathcal{S}_p = \mathrm{Dom}\, A_p
=\Big\{\xi\in \mathcal{L}^2 \Bigm| \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{2p}|\langle Z_{\sigma}, \xi\rangle|^2<\infty\Big\}.
\end{equation}
It is easy to verify that $\mathcal{S}_p$ becomes a Hilbert space with the inner product $\langle\cdot,\cdot\rangle_p$ given by
\begin{equation}\label{eq-2-10}
\langle\xi,\eta\rangle_p = \langle A_p\xi, A_p\eta\rangle
= \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{2p}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\sigma},\eta\rangle,\quad \xi,\, \eta\in\mathcal{S}_p.
\end{equation}
We denote by $\|\cdot\|_p$ the norm induced by $\langle\cdot,\cdot\rangle_p$, which obviously satisfies the following relations
\begin{equation}\label{eq-2-11}
\|\xi\|_p^2 = \|A_p\xi\|^2 = \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{2p}|\langle Z_{\sigma}, \xi\rangle|^2,\quad \xi\in\mathcal{S}_p.
\end{equation}
\begin{lemma}\cite{wang-chen-1} \label{lem-2-2}
Let $p \geq 0$ be given. Then $\{Z_{\sigma} | \sigma \in \Gamma\} \subset \mathcal{S}_p$ and, moreover, the system
$\{\lambda_{\sigma}^{-p}Z_{\sigma}\mid \sigma\in \Gamma\} $ forms an orthonormal basis for $\mathcal{S}_p$.
\end{lemma}
It is not hard to show that the norms $\{\|\cdot\|_p \mid p\geq 0\}$ are compatible.
This, together with the fact $\lambda_{\sigma}\geq 1$ for all $\sigma\in \Gamma$,
implies that $\|\cdot\|_p\leq \|\cdot\|_q$
and $\mathcal{S}_q\subset \mathcal{S}_p$ whenever $0 \leq p \leq q$. Consequently, we get a chain of Hilbert spaces of Bernoulli functionals
as follows:
\begin{equation}\label{eq-2-12}
\cdots \subset \mathcal{S}_{p+1} \subset \mathcal{S}_p \subset \cdots \subset \mathcal{S}_0 =\mathcal{L}^2.
\end{equation}
We put
\begin{equation}\label{eq-2-13}
\mathcal{S} = \bigcap_{p=0}^{\infty}\mathcal{S}_p
\end{equation}
and endow it with the topology generated by the norm sequence $\big(\|\cdot\|_p\big)_{p \geq 0}$.
Note that, for each $p \geq 0$, $\mathcal{S}_p$ is just the completion of $\mathcal{S}$ with respect to norm $\|\cdot\|_p$.
Thus $\mathcal{S}$ is a countably-Hilbert space. The next lemma, however, shows that $\mathcal{S}$ even has a much
better property.
\begin{lemma}\cite{wang-chen-1} \label{lem-2-3}
The space $\mathcal{S}$ is a nuclear space, namely for any $p\geq 0$, there exists $q > p$ such that the inclusion mapping
$i_{pq} \colon \mathcal{S}_q \rightarrow \mathcal{S}_p$ defined by $i_{pq}(\xi) = \xi$ is a Hilbert-Schmidt operator.
\end{lemma}
For $p \geq 0$, we denote by $\mathcal{S}_p^*$ the dual of $\mathcal{S}_p$ and $\|\cdot\|_{-p}$ the norm of $\mathcal{S}_p^*$.
Then $\mathcal{S}_p^*\subset \mathcal{S}_q^*$ and $\|\cdot\|_{-p}\geq \|\cdot\|_{-q}$ whenever $0 \leq p \leq q$.
The lemma below is then an immediate consequence of the general theory of countably-Hilbert spaces (see \cite{gel-1,becnel}).
\begin{lemma}\label{lem-2-4}
Let $\mathcal{S}^*$ be the dual of $\mathcal{S}$ and endow it with the strong topology. Then
\begin{equation}\label{eq-2-14}
\mathcal{S}^* = \bigcup_{p=0}^{\infty}\mathcal{S}^*_p
\end{equation}
and, moreover, the inductive limit topology on $\mathcal{S}^*$ given by space sequence $\{\mathcal{S}^*_p\}_{p\geq 0}$ coincides
with the strong topology.
\end{lemma}
By identifying $\mathcal{L}^2$ with its dual, one naturally comes to a Gel'fand triple of the following form
\begin{equation}\label{eq-2-15}
\mathcal{S}\subset \mathcal{L}^2 \subset \mathcal{S}^*,
\end{equation}
which is referred to as the \textbf{Gel'fand triple over the Bernoulli space} $(\Sigma,\mathscr{A},\mu)$.
By convention, elements of $\mathcal{S}^*$ are called \textbf{Bernoulli generalized functionals},
while elements of $\mathcal{S}$ are called \textbf{Bernoulli testing functionals}.
\begin{lemma}\label{lem-2-5}\cite{wang-chen-1}
The system $\{Z_{\sigma} \mid \sigma \in \Gamma\}$ is contained in $\mathcal{S}$ and, moreover, it forms a
basis for $\mathcal{S}$ in the sense that
\begin{equation}\label{eq-2-16}
\xi = \sum_{\sigma\in \Gamma}\langle Z_{\sigma}, \xi\rangle Z_{\sigma},\quad \xi\in \mathcal{S},
\end{equation}
where $\langle\cdot,\cdot\rangle$ is the inner product of $\mathcal{L}^2$ and the series converges in the topology of $\mathcal{S}$.
\end{lemma}
We denote by $\langle\!\langle\cdot,\cdot\rangle\!\rangle$ the canonical bilinear form (also known as pairing) on $\mathcal{S}^*\times \mathcal{S}$, namely
\begin{equation}\label{eq-2-17}
\langle\!\langle\Phi,\xi\rangle\!\rangle = \Phi(\xi),\quad \Phi\in \mathcal{S}^*,\, \xi\in \mathcal{S},
\end{equation}
where $\Phi(\xi)$ means the value of the functional $\Phi$ at $\xi$. Note that $\langle\cdot,\cdot\rangle$ denotes the inner product of $\mathcal{L}^2$,
which is different from $\langle\!\langle\cdot,\cdot\rangle\!\rangle$.
\begin{lemma}\cite{wang-lin}\label{lem-2-6}
Let $\Phi\in \mathcal{S}^*$ be given. Then, for $p\geq 0$, $\Phi\in \mathcal{S}^*_p$ if and only if $\Phi$ satisfies that
\begin{equation}\label{eq-2-18}
\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2p}|\langle\!\langle \Phi, Z_{\sigma}\rangle\!\rangle|^2<\infty.
\end{equation}
In that case $\|\Phi\|_{-p}^2 = \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2p}|\langle\!\langle \Phi, Z_{\sigma}\rangle\!\rangle|^2$.
\end{lemma}
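To make Lemma~\ref{lem-2-6} concrete, consider (purely for illustration) the generalized functional $\Phi$ with constant Fock coefficients $\langle\!\langle\Phi, Z_{\sigma}\rangle\!\rangle=1$ for all $\sigma\in\Gamma$. Then $\|\Phi\|_{-p}^2=\sum_{\sigma\in\Gamma}\lambda_{\sigma}^{-2p}$, which converges for $p>\frac12$ by Lemma~\ref{lem-2-1} and diverges for $p\leq\frac12$, since the partial products $\prod_{k=0}^n\big(1+(1+k)^{-2p}\big)$ then grow without bound. The sketch below (assuming NumPy) exhibits this threshold numerically.
\begin{verbatim}
import numpy as np

def partial_norm_sq(p, n):
    # partial sum of ||Phi||_{-p}^2 = sum_sigma lambda_sigma^{-2p}
    # over subsets of {0, ..., n}, via the product factorization
    return np.prod([1.0 + (1.0 + k) ** (-2.0 * p) for k in range(n + 1)])

for p in (0.4, 0.6, 1.0):
    print(p, [round(partial_norm_sq(p, n), 4) for n in (10, 100, 1000, 10000)])
# p = 0.4: the partial sums keep growing, so Phi lies in no such S_p^*
# p = 0.6 and p = 1.0: the partial sums stabilize, so ||Phi||_{-p} is finite
\end{verbatim}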
\section{Technical theorem}\label{sec-3}
In this section, we establish a technical theorem about the regularity of
operators acting on Bernoulli functionals, which will be used to prove our main results.
We keep using the notions and notation fixed in previous sections.
\begin{definition}\cite{wang-chen-2}\label{def-3-1}
For an operator $\mathsf{T}\colon \mathcal{S} \rightarrow \mathcal{S}^*$, its 2D-Fock transform is the function
$\widehat{\mathsf{T}}$ on $\Gamma\times \Gamma$ given by
\begin{equation}\label{eq-3-1}
\widehat{\mathsf{T}}(\sigma,\tau)=\langle\!\langle \mathsf{T}Z_{\sigma},Z_{\tau}\rangle\!\rangle,\quad \sigma,\, \tau \in \Gamma.
\end{equation}
\end{definition}
Continuous linear operators from $\mathcal{S}$ to $\mathcal{S}^*$ are completely determined by their 2D-Fock transforms.
More precisely, if $\mathsf{T}_1$, $\mathsf{T}_2\colon \mathcal{S}\rightarrow \mathcal{S}^*$ are continuous linear operators,
then $\mathsf{T}_1=\mathsf{T}_2$
if and only if their 2D-Fock transforms are the same, namely $\widehat{\mathsf{T}_1}=\widehat{\mathsf{T}_2}$.
The following lemma offers a useful characterization of
continuous linear operators from $\mathcal{S}$ to $\mathcal{S}^*$ via their 2D-Fock transforms.
\begin{lemma}\cite{wang-chen-2}\label{lem-3-1}
A function $G$ on $\Gamma\times \Gamma$ is the 2D-Fock transform of a continuous linear operator $\mathsf{T}\colon \mathcal{S} \rightarrow \mathcal{S}^*$
if and only if it satisfies that
\begin{equation}\label{eq-3-2}
|G(\sigma, \tau)|\leq C\lambda_{\sigma}^p\lambda_{\tau}^p,\quad \sigma,\, \tau \in \Gamma
\end{equation}
for some constants $C\geq 0$ and $p\geq 0$.
\end{lemma}
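On a finite truncation, Lemma~\ref{lem-3-1} says that any kernel $G$ obeying the growth bound (\ref{eq-3-2}) determines an operator through the matrix $\big[G(\sigma,\tau)\big]$ in the basis $\{Z_{\sigma}\}$. The sketch below is an illustration only, assuming NumPy; the kernel is an arbitrary admissible choice with $C=1$ and $p=1$. It assembles this matrix over the subsets of $\{0,1\}$ and applies it to truncated coefficients of a testing functional.
\begin{verbatim}
import numpy as np

subsets = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def lam(sigma):
    out = 1
    for k in sigma:
        out *= 1 + k
    return out

def G(sigma, tau):
    # admissible kernel: |G(sigma,tau)| <= lam(sigma) * lam(tau), i.e. C = p = 1
    return lam(sigma) * lam(tau) / (1.0 + len(sigma ^ tau))

# matrix of T in the basis {Z_sigma}: entry (tau, sigma) is <<T Z_sigma, Z_tau>>
M = np.array([[G(s, t) for s in subsets] for t in subsets])

c = np.array([1.0, 0.5, -0.25, 0.0])   # illustrative coefficients <Z_sigma, xi>
print(M @ c)                           # truncated Fock coefficients of T xi
\end{verbatim}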
For $p\geq 0$, we denote by $\mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*)$ the Banach space of all bounded linear operators
from $\mathcal{S}_p$ to $\mathcal{S}_p^*$ and by $\|\cdot\|_{\mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*)}$ the usual operator norm
in $\mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*)$, which is given by
\begin{equation}\label{eq-3-3}
\|\mathsf{T}\|_{\mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*)}
= \sup\{ \|\mathsf{T}\xi\|_{-p} \mid \xi\in \mathcal{S}_p,\, \|\xi\|_p = 1 \},\quad \mathsf{T}\in \mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*).
\end{equation}
Note that $\mathcal{S}$ is dense in $\mathcal{S}_p$.
Thus, for each bounded linear operator $\mathsf{A} \colon (\mathcal{S}, \|\cdot\|_p)\rightarrow \mathcal{S}_p^*$, there exists
a unique bounded linear operator $\widetilde{\mathsf{A}}\in \mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*)$ such that
$\widetilde{\mathsf{A}}\xi = \mathsf{A}\xi$,\ $\forall\, \xi\in \mathcal{S}$ and
\begin{equation*}
\|\widetilde{\mathsf{A}}\|_{\mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*)}
= \sup\{\|\mathsf{A}\xi\|_{-p} \mid \xi\in \mathcal{S},\, \|\xi\|_p = 1\}.
\end{equation*}
The operator $\widetilde{\mathsf{A}}$ is usually known as the norm-keeping extension of the operator $\mathsf{A}$ to $\mathcal{S}_p$.
The next theorem is a result about the regularity of continuous linear operator from $\mathcal{S}$ to $\mathcal{S}^*$,
which will play an important role in proving our main results.
\begin{theorem}\label{thr-3-2}
Let $\mathsf{T}\colon \mathcal{S}\rightarrow \mathcal{S}^*$ be a continuous linear operator.
Suppose that $\mathsf{T}$ satisfies
\begin{equation}\label{eq-3-4}
|\widehat{\mathsf{T}}(\sigma,\tau)|
\leq C\lambda_{\sigma}^p\lambda_{\tau}^p,\quad \sigma,\, \tau \in \Gamma
\end{equation}
for some constants $C\geq 0$ and $p\geq 0$. Then, for $q> p+\frac{1}{2}$, there exists a unique $\widetilde{\mathsf{T}} \in \mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)$
such that
\begin{equation}\label{eq-3-5}
\big\|\widetilde{\mathsf{T}}\big\|_{\mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)}\leq C\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(q-p)}
\end{equation}
and $\widetilde{\mathsf{T}}\xi =\mathsf{T}\xi$\ for all\ $\xi\in \mathcal{S}$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem-2-1}, we know that $\sum_{\tau\in \Gamma}\lambda_{\tau}^{-2(q-p)}<\infty$ since $2(q-p)>1$.
Let $\sigma\in \Gamma$ be given. Then, $\mathsf{T}Z_{\sigma}\in \mathcal{S}^*$ and, by using the assumption (\ref{eq-3-4}),
we find
\begin{equation*}
\sum_{\tau\in \Gamma}\lambda_{\tau}^{-2q}|\langle\!\langle \mathsf{T} Z_{\sigma}, Z_{\tau} \rangle\!\rangle|^2
= \sum_{\tau\in \Gamma}\lambda_{\tau}^{-2q}|\widehat{\mathsf{T}} (\sigma, \tau)|^2
\leq C^2\lambda_{\sigma}^{2p}\sum_{\tau\in \Gamma}\lambda_{\tau}^{-2(q-p)}
<\infty,
\end{equation*}
which, together with Lemma~\ref{lem-2-6}, implies that $\mathsf{T}Z_{\sigma}\in \mathcal{S}_q^*$ and
\begin{equation*}
\|\mathsf{T}Z_{\sigma}\|_{-q}^2
= \sum_{\tau\in \Gamma}\lambda_{\tau}^{-2q}|\langle\!\langle \mathsf{T} Z_{\sigma}, Z_{\tau} \rangle\!\rangle|^2
\leq C^2\lambda_{\sigma}^{2p}\sum_{\tau\in \Gamma}\lambda_{\tau}^{-2(q-p)}.
\end{equation*}
Now take $\xi\in \mathcal{S}$. Then $\sum_{\sigma\in \Gamma}\langle Z_{\sigma}, \xi\rangle \mathsf{T}Z_{\sigma}$ is a series in $\mathcal{S}_q^*$.
And, by using the above inequality, we have
\begin{equation*}
\begin{split}
\sum_{\sigma\in \Gamma}\|\langle Z_{\sigma}, \xi\rangle \mathsf{T}Z_{\sigma}\|_{-q}
& \leq \Big[\sum_{\sigma\in \Gamma} \lambda_{\sigma}^{2q}|\langle Z_{\sigma}, \xi\rangle|^2\Big]^{\frac{1}{2}}
\Big[\sum_{\sigma\in \Gamma} \lambda_{\sigma}^{-2q}\|\mathsf{T}Z_{\sigma}\|_{-q}^2\Big]^{\frac{1}{2}}\\
& \leq \|\xi\|_q \Big[\sum_{\sigma\in \Gamma} \lambda_{\sigma}^{-2q}C^2\lambda_{\sigma}^{2p}\sum_{\tau\in \Gamma}\lambda_{\tau}^{-2(q-p)}\Big]^{\frac{1}{2}}\\
& = C \Big[\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(q-p)}\Big] \|\xi\|_q,
\end{split}
\end{equation*}
which implies that the series $\sum_{\sigma\in \Gamma}\langle Z_{\sigma}, \xi\rangle \mathsf{T}Z_{\sigma}$ converges in $\mathcal{S}_q^*$,
hence its sum $\sum_{\sigma\in \Gamma}\langle Z_{\sigma}, \xi\rangle \mathsf{T}Z_{\sigma}$ belongs to $\mathcal{S}_q^*$.
On the other hand, by Lemma~\ref{lem-2-5} and the continuity of $\mathsf{T}\colon \mathcal{S} \rightarrow \mathcal{S}^*$, we can get
\begin{equation*}
\mathsf{T}\xi = \sum_{\sigma\in \Gamma}\langle Z_{\sigma}, \xi\rangle \mathsf{T}Z_{\sigma}.
\end{equation*}
Thus $\mathsf{T}\xi\in \mathcal{S}_q^*$ and
\begin{equation*}
\|\mathsf{T}\xi\|_{-q}
\leq \sum_{\sigma\in \Gamma}\|\langle Z_{\sigma}, \xi\rangle \mathsf{T}Z_{\sigma}\|_{-q}
\leq C \Big[\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(q-p)}\Big] \|\xi\|_q,
\end{equation*}
which, together with the arbitrariness of $\xi\in \mathcal{S}$, implies that $\mathsf{T}$ is a bounded linear operator from $(\mathcal{S},\|\cdot\|_q)$
to $\mathcal{S}_q^*$. Therefore, there exists a unique $\widetilde{\mathsf{T}} \in \mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)$
such that $\widetilde{\mathsf{T}}\xi =\mathsf{T}\xi$ for all $\xi\in \mathcal{S}$ and
\begin{equation*}
\big\|\widetilde{\mathsf{T}}\big\|_{\mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)}
= \sup\{\|\mathsf{T}\xi\|_{-q} \mid \xi \in \mathcal{S},\, \|\xi\|_q=1\}
\leq C \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(q-p)}.
\end{equation*}
This completes the proof.
\end{proof}
\section{Spectral integrals of Bernoulli generalized functionals}\label{sec-4}
In the present section, we define integrals of Bernoulli generalized functionals with respect to a spectral measure on the Bernoulli space
and examine their fundamental properties.
We continue to use the notation fixed in previous sections.
Additionally, we denote by $\mathfrak{P}(\mathcal{L}^2)$ the set of all projection operators on $\mathcal{L}^2$, which is a subset of the Banach algebra $\mathfrak{B}(\mathcal{L}^2)$
of all bounded linear operators on $\mathcal{L}^2$,
and by $\mathfrak{L}(\mathcal{S},\mathcal{S}^*)$ the space of all continuous linear operators
from $\mathcal{S}$ to $\mathcal{S}^*$.
Recall that the Bernoulli space $(\Sigma, \mathscr{A}, \mu)$ is actually a probability measure space.
This naturally leads to the next definition.
\begin{definition}\label{def-4-1}
A mapping $\pi\colon \mathscr{A}\rightarrow \mathfrak{P}(\mathcal{L}^2)$ is called a spectral measure
on $(\Sigma, \mathscr{A}, \mu)$ if it satisfies the following two requirements:
\begin{enumerate}
\item[(1)] $\pi(\Sigma)=I$, where $I$ denotes the identity operator on $\mathcal{L}^2$;
\item[(2)] For each sequence $(E_n)_{n\geq 1}\subset \mathscr{A}$ with $E_m \cap E_n = \emptyset$ when $m\neq n$, it holds true that
\begin{equation}\label{eq-4-1}
\pi\Big(\bigcup_{n=1}^{\infty}E_n\Big) = \sum_{n=1}^{\infty}\pi(E_n),
\end{equation}
\end{enumerate}
where the operator series on the right-hand side converges strongly, namely in the strong operator topology of $\mathfrak{B}(\mathcal{L}^2)$.
\end{definition}
A spectral measure admits many interesting properties. The next lemma just shows the most striking one,
which is well known in the theory of functional analysis (see, e.g. \cite{parth}).
\begin{lemma}\label{lem-4-1}
If $\pi\colon \mathscr{A} \rightarrow \mathfrak{P}(\mathcal{L}^2)$ is a spectral measure on $(\Sigma,\mathscr{A},\mu)$, then for all
$E_1$, $E_2\in \mathscr{A}$ it holds true that
\begin{equation}\label{eq-4-2}
\pi(E_1\cap E_2)=\pi(E_1)\pi(E_2),
\end{equation}
where $\pi(E_1)\pi(E_2)$ just means the usual composition of operators $\pi(E_1)$ and $\pi(E_2)$.
\end{lemma}
Let $\pi\colon \mathscr{A} \rightarrow \mathfrak{P}(\mathcal{L}^2)$ be a spectral measure on $(\Sigma,\mathscr{A},\mu)$. Then,
for fixed $\xi$, $\eta\in \mathcal{L}^2$, the function
\begin{equation*}
E\mapsto \langle \pi(E)\xi, \eta\rangle
\end{equation*}
defines a complex-valued measure on the measurable space $(\Sigma,\mathscr{A})$.
In particular, for $\sigma$, $\tau\in \Gamma$, the function
$E\mapsto \langle \pi(E)Z_{\sigma}, Z_{\tau}\rangle$ is a complex-valued measure on $(\Sigma,\mathscr{A})$.
\begin{definition}\label{def-4-2}
A spectral measure $\pi\colon \mathscr{A} \rightarrow \mathfrak{P}(\mathcal{L}^2)$ on $(\Sigma,\mathscr{A},\mu)$ is said
to be $\mathcal{S}$-smooth if for each pair $(\sigma,\tau)\in \Gamma\times \Gamma$ there exists a Bernoulli testing functional
$\phi_{\sigma,\tau}^{\pi}\in \mathcal{S}$ such that
\begin{equation}\label{eq-4-3}
\langle \pi(E) Z_{\sigma}, Z_{\tau}\rangle = \int_E \phi_{\sigma,\tau}^{\pi}d\mu,\quad \forall\, E\in \mathscr{A}.
\end{equation}
In that case, $\phi_{\sigma,\tau}^{\pi}$ is called the numerical spectral density of $\pi$ associated with $(\sigma, \tau)\in \Gamma\times \Gamma$.
\end{definition}
For an integer $n\geq 0$, we write $\Gamma\!_n=\{\sigma \mid \sigma\subset \mathbb{N}_n\}$, where $\mathbb{N}_n=\{0,1,\cdots, n\}$.
Obviously, $\Gamma\!_n\subset \Gamma\!_{n+1}\subset \Gamma$ for all $n\geq 0$, and $\bigcup_{n=0}^{\infty}\Gamma\!_n=\Gamma$.
Note that $\Gamma\!_n$ has exactly $2^{n+1}$ elements.
\begin{proposition}\label{prop-4-2}
Let $\pi\colon \mathscr{A} \rightarrow \mathfrak{P}(\mathcal{L}^2)$ be an $\mathcal{S}$-smooth spectral measure on $(\Sigma,\mathscr{A},\mu)$
and $\phi_{\sigma,\tau}^{\pi}$ its numerical spectral density
associated with $(\sigma,\tau)\in \Gamma\times \Gamma$. Then, for all $n\geq 0$ and all $\xi\in \mathcal{L}^2$, it holds true that
\begin{equation}\label{eq-4-4}
\sum_{\sigma, \tau \in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle \phi_{\sigma,\tau}^{\pi}\geq 0\quad
\mbox{$\mu$-a.e. in $\Sigma$},
\end{equation}
where $\sum_{\sigma, \tau \in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle \phi_{\sigma,\tau}^{\pi}$ is viewed as a function on $\Sigma$.
\end{proposition}
\begin{proof}
Let $n\geq 0$ and $\xi\in \mathcal{L}^2$ be given. Then, for any $E\in \mathscr{A}$, by using the fact of $\pi(E)$ being a projection operator on $\mathcal{L}^2$ we have
\begin{equation*}
\begin{split}
\int_E\Big(\sum_{\sigma, \tau \in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle \phi_{\sigma,\tau}^{\pi}\Big)d\mu
& = \sum_{\sigma, \tau \in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle \int_E \phi_{\sigma,\tau}^{\pi}d\mu\\
& = \sum_{\sigma, \tau \in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle \langle \pi(E) Z_{\sigma}, Z_{\tau}\rangle\\
& = \Big\langle \pi(E) \sum_{\sigma\in \Gamma\!_n}\langle Z_{\sigma}, \xi\rangle Z_{\sigma}, \sum_{\tau \in \Gamma\!_{n}}\langle Z_{\tau}, \xi\rangle Z_{\tau}\Big\rangle\\
& = \Big\|\pi(E) \sum_{\sigma\in \Gamma\!_n}\langle Z_{\sigma}, \xi\rangle Z_{\sigma}\Big\|^2\\
& \geq 0,
\end{split}
\end{equation*}
which together with the arbitrariness of $E\in \mathscr{A}$ implies that
\begin{equation*}
\sum_{\sigma, \tau \in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle \phi_{\sigma,\tau}^{\pi}\geq 0\quad
\mbox{$\mu$-a.e. in $\Sigma$},
\end{equation*}
namely, as a function on $\Sigma$,
$\sum_{\sigma, \tau \in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle \phi_{\sigma,\tau}^{\pi}$
takes nonnegative values at almost all points in $\Sigma$.
\end{proof}
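Proposition~\ref{prop-4-2} can be visualized for the canonical spectral measure of Section~\ref{sec-5}, whose numerical spectral density in the symmetric case is $\phi_{\sigma,\tau}^{\pi_0}=Z_{\sigma\bigtriangleup\tau}$: there the sum in (\ref{eq-4-4}) collapses pointwise to $\big|\sum_{\sigma}\langle Z_{\sigma},\xi\rangle Z_{\sigma}(\omega)\big|^2\geq 0$. A Monte Carlo sketch (an illustration only, assuming NumPy, with arbitrary real coefficients):
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, samples = 3, 100000
zeta = np.where(rng.random((samples, n)) < 0.5, 1.0, -1.0)  # theta_n = 1/2

subsets = [frozenset(c) for size in range(n + 1)
           for c in combinations(range(n), size)]

def Z(sigma):
    # Z_sigma(omega) = prod_{k in sigma} zeta_k(omega) (symmetric case)
    out = np.ones(samples)
    for k in sigma:
        out = out * zeta[:, k]
    return out

c = rng.standard_normal(len(subsets))   # arbitrary coefficients <Z_sigma, xi>
quad = sum(c[i] * c[j] * Z(s ^ t)
           for i, s in enumerate(subsets) for j, t in enumerate(subsets))
print(quad.min())   # nonnegative up to rounding: quad = (sum_i c_i Z_i)^2
\end{verbatim}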
\begin{definition}\label{def-4-3}
Let $\pi\colon \mathscr{A} \rightarrow \mathfrak{P}(\mathcal{L}^2)$ be an $\mathcal{S}$-smooth spectral measure on $(\Sigma,\mathscr{A},\mu)$
and $\phi_{\sigma,\tau}^{\pi}$ its numerical spectral density
associated with $(\sigma,\tau)\in \Gamma\times \Gamma$. A Bernoulli generalized functional $\Phi \in \mathcal{S}^*$ is said to be integrable with respect to $\pi$ if there exist constants $C\geq 0$ and $p\geq 0$ such that
\begin{equation}\label{eq-4-5}
\big|\big\langle\!\big\langle \Phi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\big|\leq C\lambda_{\sigma}^p\lambda_{\tau}^p,\quad
\forall\, (\sigma,\tau)\in \Gamma\times \Gamma.
\end{equation}
In that case, by Lemma~\ref{lem-3-1}, there exists a unique operator $\mathsf{T}_{\Phi,\pi}\in \mathfrak{L}(\mathcal{S},\mathcal{S}^*)$ such that
\begin{equation}\label{eq-4-6}
\widehat{\mathsf{T}_{\Phi,\pi}}(\sigma, \tau)
= \big\langle\!\big\langle \Phi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle,\quad
\forall\, (\sigma,\tau)\in \Gamma\times \Gamma.
\end{equation}
We call $\mathsf{T}_{\Phi,\pi}$ the spectral integral of $\Phi$ with respect to $\pi$ and write $\int_{\Sigma}\Phi d\pi=\mathsf{T}_{\Phi,\pi}$.
\end{definition}
In the rest of the present section, we always assume that $\pi\colon \mathscr{A} \rightarrow \mathfrak{P}(\mathcal{L}^2)$ is
a fixed $\mathcal{S}$-smooth spectral measure on $(\Sigma,\mathscr{A},\mu)$
and $\phi_{\sigma,\tau}^{\pi}$ its numerical spectral density associated with $(\sigma,\tau)\in \Gamma\times \Gamma$.
Thus, if a Bernoulli generalized functional $\Phi$ is integrable with respect to $\pi$,
then its spectral integral $\int_{\Sigma} \Phi d\pi$ is a continuous linear operator from $\mathcal{S}$ to $\mathcal{S}^*$,
namely $\int_{\Sigma} \Phi d\pi\in \mathfrak{L}(\mathcal{S},\mathcal{S}^*)$, and satisfies that
\begin{equation*}
\widehat{\int_{\Sigma} \Phi d\pi}(\sigma,\tau)
= \big\langle\!\big\langle \big(\int_{\Sigma} \Phi d\pi\big)Z_{\sigma}, Z_{\tau}\big\rangle\!\big\rangle
= \big\langle\!\big\langle \Phi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle,\quad (\sigma,\tau)\in \Gamma\times \Gamma.
\end{equation*}
\begin{remark}\label{rem-4-1}
Let $\varphi\in \mathcal{L}^2$ be a bounded function on $\Sigma$ and $\int_{\Sigma}\varphi d\pi$ be the usual spectral integral
of $\varphi$ with respect to $\pi$, which is a bounded linear operator
on $\mathcal{L}^2$. Suppose that $\mathsf{R}_0\varphi$ is integrable with respect to $\pi$
in the sense of Definition~\ref{def-4-3}, where
$\mathsf{R}_0\colon \mathcal{L}^2\rightarrow (\mathcal{L}^2)^*$ denotes the Riesz mapping.
Then, the spectral integral $\int_{\Sigma} \mathsf{R}_0\varphi d\pi$ in the sense of Definition~\ref{def-4-3}
satisfies the identity
\begin{equation*}
\int_{\Sigma} \mathsf{R}_0\varphi d\pi = \mathsf{R}_0\int_{\Sigma} \varphi d\pi,
\end{equation*}
where $\mathsf{R}_0\int_{\Sigma} \varphi d\pi$ means the composition of operators $\mathsf{R}_0$
and $\int_{\Sigma} \varphi d\pi$. This justifies our Definition~\ref{def-4-3}.
\end{remark}
\begin{theorem}\label{thr-4-3}
Let $\Phi$, $\Psi \in \mathcal{S}^*$ be integrable with respect to $\pi$. Then, for all $\alpha$, $\beta\in \mathbb{C}$,
$\alpha \Phi + \beta \Psi$ remains integrable with respect to $\pi$, and moreover it holds true that
\begin{equation}\label{eq-4-7}
\int_{\Sigma}\big(\alpha \Phi + \beta \Psi\big)d\pi = \alpha\int_{\Sigma} \Phi d\pi + \beta\int_{\Sigma}\Psi d\pi.
\end{equation}
\end{theorem}
\begin{proof}
It follows from the integrability of $\Phi$ and $\Psi$ that there exist nonnegative constants $C_1$, $C_2$, $p_1$ and $p_2$ such that
\begin{equation*}
\big|\big\langle\!\big\langle \Phi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\big|\leq C_1\lambda_{\sigma}^{p_1}\lambda_{\tau}^{p_1}\quad
\mbox{and}\quad
\big|\big\langle\!\big\langle \Psi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\big|\leq C_2\lambda_{\sigma}^{p_2}\lambda_{\tau}^{p_2},
\quad \forall\, (\sigma,\tau)\in \Gamma\times \Gamma.
\end{equation*}
Take $p\geq \max\{p_1,p_2\}$. Then, since $\lambda_{\sigma}\geq 1$ for all $\sigma\in\Gamma$, the above inequalities give the bound
\begin{equation*}
\big|\big\langle\!\big\langle \alpha \Phi + \beta \Psi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\big|
\leq (|\alpha|C_1 + |\beta|C_2)\lambda_{\sigma}^p\lambda_{\tau}^p,\quad \forall\, (\sigma,\tau)\in \Gamma\times \Gamma,
\end{equation*}
which means that $\alpha \Phi + \beta \Psi$ is integrable with respect to $\pi$.
For all $(\sigma,\tau)\in \Gamma\times \Gamma$, a straightforward calculation yields
\begin{equation*}
\begin{split}
\widehat{\int_{\Sigma}\big(\alpha \Phi + \beta \Psi\big)d\pi}(\sigma,\tau)
& = \big\langle\!\big\langle \alpha \Phi + \beta \Psi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\\
& = \alpha\big\langle\!\big\langle \Phi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle
+ \beta \big\langle\!\big\langle\Psi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\\
& = \alpha \widehat{\int_{\Sigma}\Phi d\pi}(\sigma,\tau) + \beta \widehat{\int_{\Sigma}\Psi d\pi}(\sigma,\tau)\\
& = \widehat{\Big[\alpha \int_{\Sigma}\Phi d\pi + \beta \int_{\Sigma}\Psi d\pi\Big]}(\sigma,\tau),
\end{split}
\end{equation*}
which implies that $\int_{\Sigma}\big(\alpha \Phi + \beta \Psi\big)d\pi = \alpha\int_{\Sigma} \Phi d\pi + \beta\int_{\Sigma}\Psi d\pi$,
see the comments after Definition~\ref{def-3-1}.
\end{proof}
For $\xi\in \mathcal{S}$, we use $\xi\geq 0$ to mean that $\xi(\omega)\geq 0$ for $\mu$-a.a. $\omega\in \Sigma$.
For $\Phi\in \mathcal{S}^*$, we use $\Phi\geq 0$ to mean that $\langle\!\langle \Phi, \xi\rangle\!\rangle\geq 0$ for all $\xi \in \mathcal{S}$ with $\xi\geq 0$.
In that case, we also say that $\Phi$ is a positive Bernoulli generalized functional.
\begin{theorem}\label{thr-4-4}
Let $\Phi \in \mathcal{S}^*$ be integrable with respect to $\pi$. Suppose that $\Phi\geq 0$. Then, for all $\xi \in \mathcal{S}$,
it holds true that
\begin{equation}\label{eq-4-8}
\big\langle\!\big\langle \big(\!\int_{\Sigma} \Phi d\pi\big)\overline{\xi}, \xi\big\rangle\!\big\rangle \geq 0.
\end{equation}
\end{theorem}
\begin{proof}
Let $\xi \in \mathcal{S}$ be given. For each positive integer $n\geq 1$, we set $\xi_n = \sum_{\sigma\in \Gamma\!_n}\langle Z_{\sigma}, \xi\rangle Z_{\sigma}$.
Then it is easy to see that
\begin{equation*}
\overline{\xi_n} = \sum_{\sigma\in \Gamma\!_n}\langle Z_{\sigma}, \overline{\xi}\rangle Z_{\sigma}.
\end{equation*}
By applying Lemma~\ref{lem-2-5}, we know that $\xi_n$ converges to $\xi$ in the topology of $\mathcal{S}$ as $n\rightarrow \infty$.
Similarly, $\overline{\xi_n}$ converges to $\overline{\xi}$ in the topology of $\mathcal{S}$. Thus
\begin{equation*}
\big\langle\!\big\langle \big(\!\int_{\Sigma} \Phi d\pi\big)\overline{\xi}, \xi\big\rangle\!\big\rangle
= \lim_{n\to \infty} \big\langle\!\big\langle \big(\!\int_{\Sigma} \Phi d\pi\big)\overline{\xi_n}, \xi_n\big\rangle\!\big\rangle.
\end{equation*}
For each $n\geq 1$, $\sum_{\sigma,\tau\in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle
\phi_{\sigma,\tau}^{\pi}$ obviously belongs to $\mathcal{S}$, and moreover, by Proposition~\ref{prop-4-2}, we further know that
\begin{equation*}
\sum_{\sigma,\tau\in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle
\phi_{\sigma,\tau}^{\pi} \geq 0,
\end{equation*}
which, together with the assumption $\Phi\geq 0$, gives
\begin{equation*}
\big\langle\!\big\langle\Phi,\sum_{\sigma,\tau\in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle
\phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle
\geq 0.
\end{equation*}
On the other hand, for each $n\geq 1$, by a careful examination we find
\begin{equation*}
\begin{split}
\big\langle\!\big\langle \big(\!\int_{\Sigma} \Phi d\pi\big)\overline{\xi_n}, \xi_n\big\rangle\!\big\rangle
&= \sum_{\sigma,\tau\in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle
\widehat{\int_{\Sigma}\Phi d\pi}(\sigma, \tau)\\
&= \sum_{\sigma,\tau\in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle
\big\langle\!\big\langle\Phi, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\\
& = \big\langle\!\big\langle\Phi,\sum_{\sigma,\tau\in \Gamma\!_n}\overline{\langle Z_{\sigma}, \xi\rangle}\langle Z_{\tau}, \xi\rangle
\phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle.
\end{split}
\end{equation*}
Thus $\big\langle\!\big\langle \big(\!\int_{\Sigma} \Phi d\pi\big)\overline{\xi_n}, \xi_n\big\rangle\!\big\rangle\geq 0$
for all $n\geq 1$, which directly leads to the desired result as follows
\begin{equation*}
\big\langle\!\big\langle \big(\!\int_{\Sigma} \Phi d\pi\big)\overline{\xi}, \xi\big\rangle\!\big\rangle
= \lim_{n\to \infty} \big\langle\!\big\langle \big(\!\int_{\Sigma} \Phi d\pi\big)\overline{\xi_n}, \xi_n\big\rangle\!\big\rangle
\geq 0.
\end{equation*}
This completes the proof.
\end{proof}
A family $\{\Phi_{\alpha} \mid \alpha\in \Lambda\}$ of Bernoulli generalized functionals is said to be uniformly integrable
with respect to $\pi$ if there exist constants $C\geq 0$ and $p\geq 0$ such that
\begin{equation*}
\sup_{\alpha \in \Lambda} \big|\big\langle\!\big\langle \Phi_{\alpha}, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\big|\leq C\lambda_{\sigma}^p\lambda_{\tau}^p,\quad
\forall\, (\sigma,\tau)\in \Gamma\times \Gamma.
\end{equation*}
The next result establishes a convergence theorem for spectral integrals of Bernoulli generalized functionals.
\begin{theorem}\label{thr-4-5}
Let $(\Phi_n)_{n\geq 1}\subset \mathcal{S}^*$ be a sequence of Bernoulli generalized functionals. Suppose that the following conditions are satisfied:
\begin{enumerate}
\item[(1)] $\Phi_n$ converges weakly to $\Phi_0\in \mathcal{S}^*$ as $n\rightarrow \infty$, namely
$\lim_{n\to \infty}\langle\!\langle\Phi_n, \xi\rangle\!\rangle = \langle\!\langle\Phi_0, \xi\rangle\!\rangle $ for all $\xi\in \mathcal{S}$;
\item[(2)] $(\Phi_n)_{n\geq 1}$ is uniformly integrable with respect to $\pi$.
\end{enumerate}
Then $\Phi_0$ is also integrable with respect to $\pi$. Moreover, for all $\xi\in \mathcal{S}$,
$(\int_{\Sigma}\Phi_nd\pi)\xi$ converges strongly to $(\int_{\Sigma}\Phi_0d\pi)\xi$ as $n\rightarrow \infty$.
\end{theorem}
\begin{proof}
By the uniform integrability of $(\Phi_n)_{n\geq 1}$, there exist constants $C\geq 0$ and $p\geq 0$ such that
\begin{equation}\label{eq-4-9}
\sup_{n\geq 1} \big|\big\langle\!\big\langle \Phi_n, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\big|\leq C\lambda_{\sigma}^p\lambda_{\tau}^p,\quad
\forall\, (\sigma,\tau)\in \Gamma\times \Gamma.
\end{equation}
On the other hand, since $\Phi_n$ converges weakly to $\Phi_0$ as $n\rightarrow \infty$,
we have
\begin{equation}\label{eq-4-10}
\lim_{n\to \infty}\big\langle\!\big\langle \Phi_n, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle
= \big\langle\!\big\langle \Phi_0, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle
\end{equation}
for all $(\sigma,\tau)\in \Gamma\times \Gamma$. Thus
$\big|\big\langle\!\big\langle \Phi_0, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\big|\leq C\lambda_{\sigma}^p\lambda_{\tau}^p$,
$\forall\, (\sigma,\tau)\in \Gamma\times \Gamma$, which implies that $\Phi_0$ is integrable with respect to $\pi$.
Now consider the operators $\int_{\Sigma}\Phi_n d\pi$, $n\geq 0$. Since $\Phi_0$ satisfies the same bound as the $\Phi_n$, $n\geq 1$, we have
\begin{equation*}
\sup_{n\geq 0}\Big|\widehat{\int_{\Sigma}\Phi_n d\pi}(\sigma,\tau)\Big|
= \sup_{n\geq 0}\big|\big\langle\!\big\langle \Phi_n, \phi_{\sigma,\tau}^{\pi}\big\rangle\!\big\rangle\big|
\leq C\lambda_{\sigma}^p\lambda_{\tau}^p,\quad \forall\, (\sigma,\tau)\in \Gamma\times \Gamma.
\end{equation*}
Take $q>p+\frac{1}{2}$. Then, by Theorem~\ref{thr-3-2}, there exists a sequence
$\mathsf{T}_n\in \mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)$, $n\geq 0$, such that
\begin{equation}\label{eq-4-11}
\mathsf{T}_n\xi = \Big(\int_{\Sigma}\Phi_n d\pi\Big)\xi,\quad \forall\,\xi\in \mathcal{S},\, n\geq 0
\end{equation}
and
\begin{equation}\label{eq-4-12}
\sup_{n\geq 0}\|\mathsf{T}_n\|_{\mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)}\leq C\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(q-p)}.
\end{equation}
Next we show that $\mathsf{T}_n\xi\rightarrow \mathsf{T}_0\xi$ in the norm $\|\cdot\|_{-q}$ of $\mathcal{S}_q^*$ for each $\xi \in \mathcal{S}_q$.
In view of (\ref{eq-4-12}) and the fact that $\{Z_{\sigma} \mid \sigma\in \Gamma\}$ is total in $\mathcal{S}_q$,
it suffices to prove that $\mathsf{T}_nZ_{\sigma}\rightarrow \mathsf{T}_0Z_{\sigma}$ in the norm $\|\cdot\|_{-q}$ of $\mathcal{S}_q^*$ for each $\sigma \in \Gamma$.
Let $\sigma \in \Gamma$ be given. Then, by Lemma~\ref{lem-2-6}, we have
\begin{equation*}
\|\mathsf{T}_nZ_{\sigma}-\mathsf{T}_0Z_{\sigma}\|_{-q}^2
= \sum_{\tau\in \Gamma}\lambda_{\tau}^{-2q}|\langle\!\langle \mathsf{T}_nZ_{\sigma}-\mathsf{T}_0Z_{\sigma}, Z_{\tau}\rangle\!\rangle|^2,\quad n\geq 1.
\end{equation*}
For each $\tau\in \Gamma$, it follows from (\ref{eq-4-11}) and (\ref{eq-4-10}) that
\begin{equation*}
\lim_{n\to \infty} \lambda_{\tau}^{-2q}|\langle\!\langle \mathsf{T}_nZ_{\sigma}-\mathsf{T}_0Z_{\sigma}, Z_{\tau}\rangle\!\rangle|^2
= \lim_{n\to \infty}\lambda_{\tau}^{-2q}|\langle\!\langle \Phi_n, \phi_{\sigma, \tau}^{\pi}\rangle\!\rangle
- \langle\!\langle \Phi_0, \phi_{\sigma, \tau}^{\pi}\rangle\!\rangle|^2
= 0.
\end{equation*}
On the other hand, we note that $\sum_{\tau\in \Gamma}4C^2\lambda_{\sigma}^{2p}\lambda_{\tau}^{-2(q-p)}
= 4C^2\lambda_{\sigma}^{2p}\sum_{\tau\in \Gamma}\lambda_{\tau}^{-2(q-p)} <\infty$, and by (\ref{eq-4-9}) we have
\begin{equation*}
\sup_{n\geq 1}\lambda_{\tau}^{-2q}|\langle\!\langle \mathsf{T}_nZ_{\sigma}-\mathsf{T}_0Z_{\sigma}, Z_{\tau}\rangle\!\rangle|^2
= \sup_{n\geq 1}\lambda_{\tau}^{-2q}|\langle\!\langle \Phi_n, \phi_{\sigma, \tau}^{\pi}\rangle\!\rangle
- \langle\!\langle \Phi_0, \phi_{\sigma, \tau}^{\pi}\rangle\!\rangle|^2
\leq 4C^2\lambda_{\sigma}^{2p}\lambda_{\tau}^{-2(q-p)},\ \ \tau\in \Gamma.
\end{equation*}
Thus, by the dominated convergence theorem, we come to
\begin{equation*}
\lim_{n\to \infty} \|\mathsf{T}_nZ_{\sigma}-\mathsf{T}_0Z_{\sigma}\|_{-q}^2
= \lim_{n\to \infty}\sum_{\tau\in \Gamma}\lambda_{\tau}^{-2q}|\langle\!\langle \mathsf{T}_nZ_{\sigma}-\mathsf{T}_0Z_{\sigma}, Z_{\tau}\rangle\!\rangle|^2
=0,
\end{equation*}
which implies that $\mathsf{T}_nZ_{\sigma}\rightarrow \mathsf{T}_0Z_{\sigma}$ in the norm $\|\cdot\|_{-q}$ of $\mathcal{S}_q^*$.
Finally, for any $\xi \in \mathcal{S}$, in view of (\ref{eq-4-11}), we have
\begin{equation*}
\lim_{n\to \infty}\Big(\int_{\Sigma}\Phi_n d\pi\Big)\xi = \lim_{n\to \infty}\mathsf{T}_n\xi = \mathsf{T}_0\xi = \Big(\int_{\Sigma}\Phi_0 d\pi\Big)\xi
\end{equation*}
in the norm $\|\cdot\|_{-q}$, which implies that $\Big(\int_{\Sigma}\Phi_n d\pi\Big)\xi$ converges to $\Big(\int_{\Sigma}\Phi_0 d\pi\Big)\xi$
in the strong topology of $\mathcal{S}^*$ as $n\rightarrow \infty$.
\end{proof}
\section{Example and further results}\label{sec-5}
In the final section, we show an example of an $\mathcal{S}$-smooth spectral measure and Bernoulli generalized functionals that are integrable
with respect to this spectral measure. Some further results are also obtained.
Throughout this section, we further assume that the Bernoulli space $(\Sigma, \mathscr{A},\mu)$ is symmetric,
namely the sequence $(\theta_n)_{n\geq 0}$ that defines the measure $\mu$ (see (\ref{eq-2-2}) in Section~\ref{sec-2})
satisfies the following requirements
\begin{equation*}
\theta_n=\frac{1}{2},\quad \forall\, n\geq 0.
\end{equation*}
In this case, one has $Z_n =\zeta_n$, $n\geq 0$, which implies that $Z_{\sigma}^2=1$ for all $\sigma\in \Gamma$.
For details about $Z_n$ and $\zeta_n$, see (\ref{eq-2-3}) and (\ref{eq-2-1}) in Section~\ref{sec-2}.
As in previous sections, $\mathfrak{L}(\mathcal{S},\mathcal{S}^*)$ denotes the space of all continuous linear operators from $\mathcal{S}$ to $\mathcal{S}^*$,
and, for $p\geq 0$, $\mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*)$ denotes the Banach space of all bounded linear operators
from $\mathcal{S}_p$ to $\mathcal{S}_p^*$. Note that a linear operator $\mathsf{T}\colon \mathcal{S}_p\rightarrow \mathcal{S}_p^*$
is bounded if and only if it is continuous.
For each $E\in \mathscr{A}$, by putting $\pi_0(E)\xi = \mathbf{1}_{E}\xi$, $\xi\in \mathcal{L}^2$,
we get a projection operator $\pi_0(E)$ on $\mathcal{L}^2$, where $\mathbf{1}_{E}$ denotes the indicator of $E$
and $\mathbf{1}_{E}\xi$ means the usual product of functions $\mathbf{1}_{E}$ and $\xi$ on $\Sigma$.
It can be shown that the mapping $E\mapsto \pi_0(E)$ defines a spectral measure $\pi_0$
on the Bernoulli space $(\Sigma, \mathscr{A},\mu)$,
which we call \textbf{the canonical spectral measure} on $(\Sigma, \mathscr{A},\mu)$.
\begin{theorem}\label{thr-5-1}
The canonical spectral measure $\pi_0$ is $\mathcal{S}$-smooth and its numerical spectral density $\phi_{\sigma,\tau}^{\pi_0}$
associated with $(\sigma,\tau)\in \Gamma\times \Gamma$ takes the following form
\begin{equation}\label{eq-5-1}
\phi_{\sigma,\tau}^{\pi_0}= Z_{\sigma\bigtriangleup \tau},
\end{equation}
where $\sigma\bigtriangleup \tau = (\sigma\setminus \tau)\cup (\tau\setminus \sigma)$ and
$Z_{\sigma\bigtriangleup \tau}$ is the corresponding basis vector of the canonical orthonormal basis for $\mathcal{L}^2$ (see (\ref{eq-2-6}) for details).
\end{theorem}
\begin{proof}
Take $(\sigma,\tau)\in \Gamma\times\Gamma$. Then $\sigma\bigtriangleup \tau\in \Gamma$, which together with Lemma~\ref{lem-2-5}
implies that $Z_{\sigma\bigtriangleup \tau}\in \mathcal{S}$.
On the other hand, by (\ref{eq-2-6}) and the property that $Z_{\gamma}^2=1$ for $\gamma\in \Gamma$, we have
\begin{equation*}
Z_{\sigma}Z_{\tau}=\Big(\prod_{k\in \sigma\setminus \tau}Z_k\Big)\Big(\prod_{k\in \sigma\cap \tau}Z_k\Big)^2\Big(\prod_{k\in \tau\setminus \sigma}Z_k\Big)
= Z_{\sigma\bigtriangleup \tau}Z_{\sigma\cap \tau}^2
= Z_{\sigma\bigtriangleup \tau},
\end{equation*}
which together with the definition of $\pi_0$ gives
\begin{equation*}
\langle \pi_0(E)Z_{\sigma}, Z_{\tau}\rangle
= \int_{\Sigma} \mathbf{1}_{E}Z_{\sigma}Z_{\tau}d\mu
= \int_E Z_{\sigma}Z_{\tau}d\mu
= \int_E Z_{\sigma\bigtriangleup \tau}d\mu,\quad \forall\, E\in \mathscr{A}.
\end{equation*}
Therefore, $\pi_0$ is $\mathcal{S}$-smooth and its numerical spectral density $\phi_{\sigma,\tau}^{\pi_0}$
associated with $(\sigma,\tau)\in \Gamma\times \Gamma$ is exactly $Z_{\sigma\bigtriangleup \tau}$.
\end{proof}
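As a sanity check on Theorem~\ref{thr-5-1} (an illustration only, assuming NumPy): with $E=\{\omega : \zeta_0(\omega)=1\}$ one has $\mathbf{1}_E=(1+\zeta_0)/2$, so orthonormality of $\{Z_\gamma\}$ gives $\int_E Z_\gamma\,d\mu = \tfrac12\big(\delta_{\gamma,\emptyset} + \delta_{\gamma,\{0\}}\big)$. The sketch compares a Monte Carlo estimate of $\langle \pi_0(E)Z_\sigma, Z_\tau\rangle$ with this closed form evaluated at $\gamma=\sigma\bigtriangleup\tau$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, samples = 3, 400000
zeta = np.where(rng.random((samples, n)) < 0.5, 1.0, -1.0)  # theta_n = 1/2

def Z(sigma):
    out = np.ones(samples)
    for k in sigma:
        out = out * zeta[:, k]
    return out

E = (zeta[:, 0] == 1.0)                    # the event {zeta_0 = 1}

for sigma, tau in [(set(), {0}), ({0, 1}, {1}), ({1}, {2}), ({0, 2}, {2})]:
    lhs = (E * Z(sigma) * Z(tau)).mean()   # <pi_0(E) Z_sigma, Z_tau>
    gamma = set(sigma) ^ set(tau)          # sigma symmetric-difference tau
    rhs = 0.5 * ((gamma == set()) + (gamma == {0}))
    print(sigma, tau, round(lhs, 3), rhs)
\end{verbatim}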
\begin{theorem}\label{thr-5-2}
Every $\Phi\in \mathcal{S}^*$ is integrable with respect to the canonical spectral measure $\pi_0$.
\end{theorem}
\begin{proof}
Let $\Phi\in \mathcal{S}^*$ be given. Then, there is some $p\geq 0$ such that $\Phi\in \mathcal{S}_p^*$. For all $(\sigma,\tau)\in \Gamma\times \Gamma$,
we have
\begin{equation*}
|\langle\!\langle \Phi, \phi_{\sigma,\tau}^{\pi_0}\rangle\!\rangle|
= |\langle\!\langle \Phi, Z_{\sigma\bigtriangleup\tau}\rangle\!\rangle|
\leq \|\Phi\|_{-p}\|Z_{\sigma\bigtriangleup\tau}\|_p,
\end{equation*}
which together with $\|Z_{\sigma\bigtriangleup\tau}\|_p= \lambda_{\sigma\bigtriangleup\tau}^p\leq \lambda_{\sigma}^p\lambda_{\tau}^p$ yields
\begin{equation*}
|\langle\!\langle \Phi, \phi_{\sigma,\tau}^{\pi_0}\rangle\!\rangle|\leq \|\Phi\|_{-p}\lambda_{\sigma}^p\lambda_{\tau}^p.
\end{equation*}
Therefore, by definition, $\Phi$ is integrable with respect to $\pi_0$.
\end{proof}
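Concretely, by Theorem~\ref{thr-5-1} the 2D-Fock transform of $\int_{\Sigma}\Phi\, d\pi_0$ is $(\sigma,\tau)\mapsto \widehat{\Phi}(\sigma\bigtriangleup\tau)$, so on a truncation the operator is represented by a ``symmetric-difference matrix'' built from the Fock coefficients of $\Phi$. A sketch (assuming NumPy; the coefficients are illustrative):
\begin{verbatim}
import numpy as np

subsets = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

# illustrative Fock coefficients Phi_hat(sigma) = <<Phi, Z_sigma>>
phi_hat = {frozenset(): 1.0, frozenset({0}): 2.0,
           frozenset({1}): -1.0, frozenset({0, 1}): 0.5}

# entry (tau, sigma) of the truncated operator equals
# Phi_hat(sigma symmetric-difference tau)
M = np.array([[phi_hat[s ^ t] for s in subsets] for t in subsets])
print(M)
# M is symmetric, with the constant Phi_hat(emptyset) on the diagonal,
# because the symmetric difference of sigma with itself is the empty set
\end{verbatim}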
\begin{remark}\label{rem-5-1}
Recall that $\mathcal{S}$ is dense in $\mathcal{S}_p$ for each $p\geq 0$. Thus,
if $\mathsf{T}\colon (\mathcal{S}, \|\cdot\|_p)\rightarrow \mathcal{S}_p^*$ is a bounded linear operator,
then there exists a unique $\widetilde{\mathsf{T}}\in \mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*)$ such that
$\widetilde{\mathsf{T}}\xi = \mathsf{T}\xi$ for all $\xi\in \mathcal{S}$
and
\begin{equation*}
\big\|\widetilde{\mathsf{T}}\big\|_{\mathfrak{L}(\mathcal{S}_p,\mathcal{S}_p^*)}
= \sup\{\,\|\mathsf{T}\xi\|_{-p} \,\mid\, \xi\in \mathcal{S},\, \|\xi\|_p=1\,\}.
\end{equation*}
In that case, we identify $\mathsf{T}$ with $\widetilde{\mathsf{T}}$, namely $\mathsf{T}=\widetilde{\mathsf{T}}$.
\end{remark}
According to Theorems~\ref{thr-5-1} and \ref{thr-5-2}, integration with respect to the canonical spectral measure $\pi_0$ defines a linear
mapping $\Phi\mapsto \int_{\Sigma}\Phi d\pi_0$ from $\mathcal{S}^*$ to $\mathfrak{L}(\mathcal{S},\mathcal{S}^*)$.
The next theorem shows that this mapping is continuous.
\begin{theorem}\label{thr-5-3}
Let $p\geq 0$ and $\Phi\in \mathcal{S}^*_p$ be given. Then, for $q>p+\frac{1}{2}$, $\int_{\Sigma}\Phi d\pi_0 \in \mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)$ and
moreover
\begin{equation}\label{eq-5-2}
\Big\|\int_{\Sigma}\Phi d\pi_0\Big\|_{\mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)}
\leq \Big[\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(q-p)}\Big]\|\Phi\|_{-p}.
\end{equation}
\end{theorem}
\begin{proof}
Write $\mathsf{T}=\int_{\Sigma}\Phi d\pi_0$. Then, from the proof of Theorem~\ref{thr-5-2}, we find that
\begin{equation*}
|\widehat{\mathsf{T}}(\sigma,\tau)|
= |\langle\!\langle \Phi, \phi_{\sigma,\tau}^{\pi_0}\rangle\!\rangle|
\leq \|\Phi\|_{-p}\lambda_{\sigma}^p\lambda_{\tau}^p, \quad \forall\, (\sigma,\tau)\in \Gamma\times \Gamma.
\end{equation*}
Consequently, by Theorem~\ref{thr-3-2} and Remark~\ref{rem-5-1}, we know that $\mathsf{T}\in \mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)$ and
\begin{equation*}
\|\mathsf{T}\|_{\mathfrak{L}(\mathcal{S}_q,\mathcal{S}_q^*)}
\leq \|\Phi\|_{-p}\Big[\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(q-p)}\Big]
=\Big[\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(q-p)}\Big]\|\Phi\|_{-p}.
\end{equation*}
This completes the proof.
\end{proof}
For Bernoulli generalized functionals $\Phi$, $\Psi\in \mathcal{S}^*$, their convolution $\Phi\ast\Psi\in \mathcal{S}^*$ is defined by
\begin{equation}\label{eq-5-3}
\widehat{\Phi\ast\Psi}(\sigma) = \widehat{\Phi}(\sigma) \widehat{\Psi}(\sigma),\quad \sigma\in \Gamma,
\end{equation}
where $\widehat{\Phi}$ is the Fock transform of $\Phi$, which is defined by $\widehat{\Phi}(\sigma)= \langle\!\langle \Phi, Z_{\sigma}\rangle\!\rangle$,
$\sigma\in \Gamma$.
Similarly, for operators $\mathsf{T}_1$, $\mathsf{T}_2\in \mathfrak{L}(\mathcal{S},\mathcal{S}^*)$,
their convolution $\mathsf{T}_1\ast \mathsf{T}_2\in \mathfrak{L}(\mathcal{S},\mathcal{S}^*)$ is determined by
\begin{equation}\label{eq-5-4}
\widehat{\mathsf{T}_1\ast \mathsf{T}_2}(\sigma,\tau) = \widehat{\mathsf{T}_1}(\sigma,\tau) \widehat{\mathsf{T}_2}(\sigma,\tau),\quad
(\sigma,\tau)\in \Gamma\times \Gamma.
\end{equation}
See \cite{wang-chen-1} and \cite{wang-chen-2} for details about convolutions of generalized functionals of discrete-time normal martingales,
which include Bernoulli generalized functionals as a special case,
and about convolutions of operators on these functionals, respectively.
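As a simple illustration of (\ref{eq-5-3}), assume, as is standard for the canonical basis, that $\{Z_{\sigma}\}_{\sigma\in\Gamma}$ is orthonormal, so that $\widehat{Z_{\alpha}}(\sigma)=\langle\!\langle Z_{\alpha}, Z_{\sigma}\rangle\!\rangle=\delta_{\alpha,\sigma}$. Then
\begin{equation*}
\widehat{Z_{\alpha}\ast Z_{\beta}}(\sigma)=\delta_{\alpha,\sigma}\delta_{\beta,\sigma},\quad \sigma\in \Gamma,
\end{equation*}
so that $Z_{\alpha}\ast Z_{\alpha}=Z_{\alpha}$, while $Z_{\alpha}\ast Z_{\beta}=0$ whenever $\alpha\neq\beta$.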
\begin{theorem}
For all $\Phi$, $\Psi\in \mathcal{S}^*$, it holds true that
\begin{equation}\label{eq-5-5}
\int_{\Sigma}\Phi\ast \Psi d\pi_0 = \Big(\int_{\Sigma}\Phi d\pi_0\Big)\ast \Big(\int_{\Sigma}\Psi d\pi_0\Big).
\end{equation}
\end{theorem}
\begin{proof}
For all $(\sigma,\tau)\in \Gamma\times \Gamma$, in view of $\phi_{\sigma,\tau}^{\pi_0}=Z_{\sigma\bigtriangleup \tau}$, we have
\begin{equation*}
\widehat{\int_{\Sigma}\Phi\ast \Psi d\pi_0}(\sigma,\tau)
= \langle\!\langle \Phi\ast \Psi, Z_{\sigma\bigtriangleup\tau}\rangle\!\rangle
= \widehat{\Phi\ast \Psi}(\sigma\bigtriangleup\tau)
= \widehat{\Phi}(\sigma\bigtriangleup\tau)\widehat{\Psi}(\sigma\bigtriangleup\tau)
= \langle\!\langle \Phi, Z_{\sigma\bigtriangleup\tau}\rangle\!\rangle \langle\!\langle \Psi, Z_{\sigma\bigtriangleup\tau}\rangle\!\rangle
\end{equation*}
and
\begin{equation*}
\widehat{\Big(\int_{\Sigma}\Phi d\pi_0\Big)\ast\Big(\int_{\Sigma}\Psi d\pi_0\Big)}(\sigma,\tau)
= \widehat{\Big(\int_{\Sigma}\Phi d\pi_0\Big)}(\sigma,\tau) \widehat{\Big(\int_{\Sigma}\Psi d\pi_0\Big)}(\sigma,\tau)
= \langle\!\langle \Phi, Z_{\sigma\bigtriangleup\tau}\rangle\!\rangle \langle\!\langle \Psi, Z_{\sigma\bigtriangleup\tau}\rangle\!\rangle,
\end{equation*}
which implies that $\int_{\Sigma}\Phi\ast \Psi d\pi_0=\big(\int_{\Sigma}\Phi d\pi_0\big)\ast\big(\int_{\Sigma}\Psi d\pi_0\big)$.
\end{proof}
For Bernoulli generalized functionals $\Phi$, $\Psi\in \mathcal{S}^*$, in the spirit of \cite{wang-zhang}, one can define the Wick product
$\Phi\diamond\Psi$, which belongs to $\mathcal{S}^*$ and satisfies
\begin{equation}\label{eq-5-6}
\widehat{\Phi\diamond\Psi}(\sigma) = \sum_{\tau\subset \sigma}\widehat{\Phi}(\tau)\widehat{\Psi}(\sigma\setminus\tau),\quad \sigma \in \Gamma,
\end{equation}
where $\widehat{\Upsilon}$ denotes the Fock transform of a Bernoulli generalized functional $\Upsilon$,
and $\sum_{\tau\subset \sigma}$ denotes summation over all subsets of $\sigma$. Comparing (\ref{eq-5-6}) and (\ref{eq-5-3}), one can see that
the Wick product $\Phi\diamond\Psi$ differs greatly from the convolution $\Phi\ast\Psi$.
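For instance, on a singleton $\sigma=\{k\}$, formula (\ref{eq-5-6}) gives
\begin{equation*}
\widehat{\Phi\diamond\Psi}(\{k\})=\widehat{\Phi}(\emptyset)\widehat{\Psi}(\{k\})+\widehat{\Phi}(\{k\})\widehat{\Psi}(\emptyset),
\end{equation*}
while (\ref{eq-5-3}) gives $\widehat{\Phi\ast\Psi}(\{k\})=\widehat{\Phi}(\{k\})\widehat{\Psi}(\{k\})$; already here the two products combine the Fock coefficients in entirely different ways.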
The next proposition further shows that their spectral integrals can have quite different regularity.
\begin{proposition}
Let $\Phi$, $\Psi\in \mathcal{S}_p^*$ be Bernoulli generalized functionals with $p\geq 0$. Then
\begin{equation}\label{eq-5-7}
\int_{\Sigma}\Phi\diamond\Psi d\pi_0\in \mathfrak{L}(\mathcal{S}_{p+2},\mathcal{S}_{p+2}^*),\quad
\mbox{while}\quad \int_{\Sigma}\Phi\ast\Psi d\pi_0 \in \mathfrak{L}(\mathcal{S}_{2p+1},\mathcal{S}_{2p+1}^*).
\end{equation}
\end{proposition}
\begin{proof}
According to Lemma~\ref{lem-2-6}, we have
\begin{equation*}
\sum_{\tau\in \Gamma}\lambda_{\tau}^{-2p}\big|\widehat{\Phi}(\tau)\big|^2
= \sum_{\tau\in \Gamma}\lambda_{\tau}^{-2p}|\langle\!\langle \Phi, Z_{\tau}\rangle\!\rangle|^2
= \|\Phi\|_{-p}^2
< \infty.
\end{equation*}
Similarly, we also have $\sum_{\tau\in \Gamma}\lambda_{\tau}^{-2p}\big|\widehat{\Psi}(\tau)\big|^2= \|\Psi\|_{-p}^2<\infty$.
Using these relations, together with the multiplicativity $\lambda_{\sigma}=\lambda_{\tau}\lambda_{\sigma\setminus\tau}$ for $\tau\subset\sigma$, we find
\begin{equation*}
\begin{split}
\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(p+1)}|\langle\!\langle \Phi\diamond\Psi, Z_{\sigma}\rangle\!\rangle|^2
& = \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2(p+1)}\big|\widehat{\Phi\diamond\Psi}(\sigma)\big|^2\\
& = \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2}
\Big|\sum_{\tau\subset \sigma}\lambda_{\tau}^{-p}\widehat{\Phi}(\tau)\lambda_{\sigma\setminus\tau}^{-p}\widehat{\Psi}(\sigma\setminus\tau)\Big|^2\\
&\leq \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2}
\Big[\sum_{\tau\subset \sigma}\lambda_{\tau}^{-2p}|\widehat{\Phi}(\tau)|^2\Big]
\Big[\sum_{\tau\subset \sigma}\lambda_{\sigma\setminus\tau}^{-2p}|\widehat{\Psi}(\sigma\setminus\tau)|^2\Big]\\
&\leq \|\Phi\|_{-p}^2 \|\Psi\|_{-p}^2\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2}\\
& < \infty,
\end{split}
\end{equation*}
which, together with Lemma~\ref{lem-2-6}, implies that $\Phi\diamond\Psi\in \mathcal{S}_{p+1}^*$. Thus, by Theorem~\ref{thr-5-3} (applied with $q=p+2>(p+1)+\frac{1}{2}$), we obtain
$\int_{\Sigma}\Phi\diamond\Psi d\pi_0\in \mathfrak{L}(\mathcal{S}_{p+2},\mathcal{S}_{p+2}^*)$.
Next, we prove the second relation of (\ref{eq-5-7}). In fact, we have
\begin{equation*}
\begin{split}
\sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-4p}|\langle\!\langle \Phi\ast\Psi, Z_{\sigma}\rangle\!\rangle|^2
& = \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-4p}\big|\widehat{\Phi\ast\Psi}(\sigma)\big|^2\\
& = \sum_{\sigma\in \Gamma}\lambda_{\sigma}^{-2p}\big|\widehat{\Phi}(\sigma)\big|^2\lambda_{\sigma}^{-2p}\big|\widehat{\Psi}(\sigma)\big|^2\\
& \leq \|\Phi\|_{-p}^2\|\Psi\|_{-p}^2,
\end{split}
\end{equation*}
which, together with Lemma~\ref{lem-2-6}, implies that $\Phi\ast\Psi\in \mathcal{S}_{2p}^*$. Applying Theorem~\ref{thr-5-3} with $q=2p+1>2p+\frac{1}{2}$ then gives
$\int_{\Sigma}\Phi\ast\Psi d\pi_0 \in \mathfrak{L}(\mathcal{S}_{2p+1},\mathcal{S}_{2p+1}^*)$.
\end{proof}
\section*{Acknowledgement}
The authors are extremely grateful to the referees for their valuable comments and suggestions for improving
the first version of the present paper.
This work is supported by National Natural Science Foundation of China (Grant No. 11861057).
\section{Introduction}
\thispagestyle{empty}
In 1924 Banach and Tarski~\cite{Banach-Tarski-original} decompose a solid ball into five pieces, and reassemble them into two balls using rotations. That is now called the Banach-Tarski paradox. Von Neumann~\cite{Neumann1929} observes that the reason for this phenomenon is that the group of rotations of $\R^3$ admits a free subgroup. He introduces the concept of amenable groups. Tarski~\cite{Tarski1938} later proves amenability to be the only obstruction to the existence of ``paradoxical'' decompositions (like the one in Banach-Tarski's article~\cite{Banach-Tarski-original}) of the action of the group on itself by multiplication, as well as any free actions of the group. One way to prove the result of Banach-Tarski is to see it as an almost everywhere free action of $SO_3(\R)$ and correct for the countable set where it is not (see e.g. Wagon~\cite[Cor.~3.10]{Banach-Tarski}).
The original definition of amenability of a group $G$ is the existence of an invariant mean. A mean is a normalised positive linear functional on $l^\infty(G)$. It is called invariant if it is preserved by translation on the argument. Groups that contain free subgroups are non-amenable. It is proven by Ol'shanskii in 1980~\cite{Olshanskii1980} that it is also possible for a non-amenable group to not have a free subgroup. Adyan~\cite{MR682486} shows in 1982 that all Burnside groups of a large enough odd exponent (which are known to be infinite by result of Novikov and Adyan from 1968~\cite{adyannovakov}) are non-amenable. Clearly they do not contain free subgroups. For more information and properties of amenability, see~\cite{bartholdi},\cite{article},\cite{greenleaf},\cite{Banach-Tarski}.
It is worth noting that despite the existence of a large number of equivalent definitions of amenability, to our knowledge until recently all examples of non-amenable groups without free subgroups are proven (Ol'shanskii~\cite{Olshanskii1980}, Adyan~\cite{MR682486}, Ol'shanskii~\cite{0036-0279-35-4-L13}, Ol'shanskii-Sapir~\cite{finpresnonam}) to be such using the co-growth criterion. See Grigorchuk~\cite{Gri77} for the announcement of the criterion, or~\cite{grigcogreng} for a full proof. For other proofs, see Cohen~\cite{cogr1}, Szwarc~\cite{cogr3}. The criterion is closely related to Kesten's criterion in terms of probability of return to the origin~\cite{kesten}.
Monod constructs in \cite{h-main} a class of groups of piecewise projective homeomorphisms $H(A)$ (where $A$ is a subring of $\R$). By comparing the action of $H(A)$ on the projective line $\P(\R)$ with that of $PSL_2(A)$, he proves that it is non-amenable for $A\neq\Z$ and without free subgroups for all $A$. This can be used to obtain non-amenable subgroups with additional properties. In particular, Lodha~\cite{Lodha2014} proves that a certain subgroup of $H(\Z[\frac{\sqrt{2}}{2}])$ is of type $F_\infty$ (in other words, such that there is a connected CW complex $X$ which is aspherical and has finitely many cells in each dimension such that $\pi_1(X)$ is isomorphic to the group). That subgroup was constructed earlier by Moore and Lodha~\cite{lhodafinpres} as an example of a group that is non-amenable, without free subgroup and finitely presented. It has three generators and only $9$ defining relations (compare to the previous example by Ol'shanskii-Sapir~\cite{finpresnonam} with $10^{200}$ relations). This subgroup is the first example of a group of type $F_\infty$ that is non-amenable and without a free subgroup. Later, Lodha~\cite{lhoda-tarski} also proves that the Tarski numbers (the minimal number of pieces needed for a paradoxical decomposition) of all the groups of piecewise projective homeomorphisms are bounded by $25$.
It is not known whether the group $H(\Z)$ of piecewise projective homeomorphisms in the case $A=\Z$ defined by Monod is amenable. One of the equivalent conditions for amenability is the existence of a non-degenerate measure with trivial Poisson boundary (see Kaimanovich-Vershik~\cite{kaimpoisson}, Rosenblatt~\cite{rosenblatt}). This measure can be chosen to be symmetric. It is also known that amenable groups can have measures with non-trivial boundary. In a recent result Frisch-Hartman-Tamuz-Vahidi-Ferdowsi~\cite{choquet-deny} describe an algebraic necessary and sufficient condition for a group to admit a measure with non-trivial boundary. In the present paper we give sufficient conditions for non-triviality of the Poisson boundary on $H(\Z)$. There are several equivalent ways to define the Poisson boundary (see Kaimanovich-Vershik~\cite{kaimpoisson}). Consider a measure $\mu$ on a group $G$ and the random walk it induces by multiplication on the left. It determines an associated Markov measure $P$ on the trajectory space $G^\N$.
\begin{defi}\label{poisson}
Consider the following equivalence relation on $G^\N$: two trajectories $(x_0,x_1,\dots)$ and $(y_0,y_1,\dots)$ are equivalent if and only if there exist $i_0\in\N$ and $k\in\Z$ such that for every $i>i_0$ we have $x_i=y_{i+k}$. In other words, the trajectories coincide after a certain time instant, up to a time shift. The \textit{Poisson boundary} (also called \textit{Poisson-Furstenberg boundary}) of $\mu$ on $G$ is the quotient of $(G^\N,P)$ by the measurable hull of this equivalence relation.
\end{defi}
Note that if the support of the measure does not generate $G$, in which case we say that the measure is \textit{degenerate}, this defines the boundary on the subgroup generated by the support of the measure rather than on $G$. For a more recent survey on results concerning the Poisson boundary, see~\cite{Erschler2010}.
Kim, Koberda and Lodha have shown in~\cite{chainhomeo} that $H(\Z)$ contains Thompson's group $F$ as a subgroup. This group is the group of orientation-preserving piecewise linear self-homeomorphisms of the closed unit interval with dyadic slopes, with a finite number of break points, all break points being dyadic numbers (see Cannon-Floyd-Parry~\cite{thomsoncfp} or Meier's book~\cite[Ch.~10]{meier} for details and properties). It is not known whether it is amenable, which is a celebrated open question. Kaimanovich~\cite{kaimanovichthompson} and Mishchenko~\cite{mischenko2015} prove that the Poisson boundary on $F$ is not trivial for finitely supported non-degenerate measures. They study the induced walk on the dyadic numbers in their proofs. However, there exist non-degenerate symmetric measures on $F$ for which the induced walk has trivial boundary as proven by Juschenko and Zheng~\cite{juszheng}. The results of the current article are inspired by the paper of Kaimanovich. It is not hard to prove that $H(\Z)$ is not finitely generated (see Remark~\ref{fingen}), so we will consider measures the support of which is not necessarily finite.
Our main result is as follows. Consider the group $H(\Z)$ of piecewise projective homeomorphisms, as defined by Monod~\cite{h-main}, in the case $A=\Z$. For $g\in H(\Z)$ denote by $Br(g)$ the number of \textit{break points} of $g$, that is, the endpoints of the pieces in its piecewise definition. We will say that a measure $\mu$ on a subgroup of $H(\Z)$ has \textit{finite first break moment} if the expected number of break points $\mathbb{E}[Br]$ is finite. A group $H$ is called \textit{locally solvable} if all finitely generated subgroups are solvable. Then
\begin{thm}\label{main}
For any subgroup $H$ of $H(\Z)$ which is not locally solvable and any measure $\mu$ on $H$ with finite first break moment $\mathbb{E}[Br]$ and such that the support of $\mu$ generates $H$ as a semigroup, the Poisson boundary of $(H,\mu)$ is non-trivial.
\end{thm}
For a measure $\mu$ on a finitely generated group, we say that $\mu$ has \textit{finite first moment} if the word length over any finite generating set has finite first moment with respect to $\mu$. This is well defined as word lengths over different finite generating sets are bilipschitz, and in particular the finiteness of the first moment does not depend on the choice of generating set. We remark (see Remark~\ref{brfin}) that any measure $\mu$ on a finitely generated subgroup $H$ of $H(\Z)$ that has finite first moment also has finite expected number of break points. Therefore by Theorem~\ref{main} if $\mu$ is a measure on a non-solvable finitely generated subgroup $H$ such that the support of $\mu$ generates $H$ as a semigroup and $\mu$ has finite first moment, the Poisson boundary of $(H,\mu)$ is non-trivial. Furthermore, in the other case we will show (Lemma~\ref{mineps}) that so long as $H$ is not abelian, we can construct a symmetric non-degenerate measure with finite $1-\varepsilon$ moment and non-trivial Poisson boundary.
The structure of the paper is as follows. In Section~\ref{prelim}, given a fixed $s\in\R$, to every element $g\in H(\Z)$ we associate (see Definition~\ref{confdef}) a configuration $C_g$. Each configuration is a function from the orbit of $s$ into $\Z$. The value of a configuration $C_g$ at a given point of the orbit of $s$ represents the slope change at that point of the element $g$ to which it is associated. There is a natural quotient map of the boundary on the group into the boundary on the configuration space. The central idea of the paper is to show that under certain conditions, the value of the configuration at a given point of the orbit of $s$ almost always stabilises. If that value is not fixed, this then implies non-triviality of the boundary on the configuration space, and thus non-triviality of the Poisson boundary on the group. These arguments bear resemblance to Kaimanovich's article on Thompson's group~\cite{kaimanovichthompson}, but we would like to point out that the action on $\R$ considered in the present article is different.
In Section~\ref{sectfour} we obtain the first result for non-triviality of the Poisson boundary (see Lemma~\ref{constr}). Measures satisfying the assumptions of that lemma do not necessarily have finite first break moment. In Section~\ref{thompsect} we study copies of Thompson's group $F$ in $H(\Z)$. Building on the results from it, in Section~\ref{schreier} we obtain transience results (see Lemma~\ref{algtho}) which we will need to prove Theorem~\ref{main}. In Section~\ref{anothersuff} we prove Lemma~\ref{ltwo} which is the main tool for proving non-triviality of the Poisson boundary. In the particular case of Thompson's group, the lemma already allows us to answer a question by Kaimanovich~\cite[7.A]{kaimanovichthompson}:
\begin{cor}\label{finfirstthomp}
Any measure on Thompson's group $F$ that has finite first moment and the support of which generates $F$ as a semigroup has non-trivial Poisson boundary.
\end{cor}
We mention that the arguments of Lemma~\ref{ltwo} could also be applied for the action and configurations considered in Kaimanovich's article, giving an alternative proof of the corollary. Combining the lemma with the transience results from Section~\ref{schreier} we obtain non-triviality of the Poisson boundary under certain conditions (see Lemma~\ref{algone}), which we will use to prove the main result. As the negation of those conditions passes to subgroups, it suffices to show that if $H$ is finitely generated and does not satisfy them, it is then solvable, which we do in Section~\ref{algsec}. Remark that the theorem generalises the result of Corollary~\ref{finfirstthomp}. In Section~\ref{last} we give an additional remark on the case of finite $1-\varepsilon$ moment.
\section*{Acknowledgements}
I would like to offer my special thanks to my thesis director, Anna Erschler. Discussions with Laurent Bartholdi have been extremely helpful, and they inspired me to consider the case of finite first moment. I would also like to thank Vadim Kaimanovich for his remarks on the preliminary version of the paper. I am also grateful to Dmytro Savchuk for the question that lead to Remark~\ref{grapham}.
\section{Preliminaries}\label{prelim1}
\subsection{$\Pl$ and $H(\Z)$}
The projective special linear group $PSL_2(\R)$ is defined as $SL_2(\R)/\{Id,-Id\}$, which is the natural quotient that describes the linear actions on the projective space $\P(\R)$. As the latter can be defined as $\mathbb{S}^1/(x\sim-x)$, we can think of it as a circle for understanding the dynamics of the action of the projective group. Remark that it is commonly understood as the boundary of the hyperbolic plane. In this paper we will not be interested in the interior of the hyperbolic plane, since we define $H(A)$ piecewise on $\P(\R)$. An element $h\in PSL_2(\R)$ is called:
\begin{enumerate}
\item \textbf{Hyperbolic} if $|tr(h)|>2$ (or equivalently, $tr(h)^2-4>0$). In this case a calculation shows that $h$ has two fixed points in $\P(\R)$. One of the points is attractive and the other repulsive for the dynamic of $h$, meaning that starting from any point and multiplying by $h$ (respectively $h^{-1}$) we get closer to the attractive (resp. the repulsive) fixed point.
\item \textbf{Parabolic} if $|tr(h)|=2$. In this case $h$ has exactly one ``double'' fixed point. We can identify $\P(\R)$ with $\R\cup\{\infty\}$ in such a way that the fixed point is $\infty$, in which case $h$ becomes a translation on $\R$. We will go into detail about the identification below.
\item \textbf{Elliptic} if $|tr(h)|<2$. Then $h$ has no fixed points in $\P(\R)$ and is conjugate to a rotation. If we consider it as an element of $PSL_2(\C)$, we can see that it has two fixed points in $\P(\C)$ that are outside $\P(\R)$.
\end{enumerate}
Consider an element $\begin{pmatrix}x\\y\end{pmatrix}\in\R^2\setminus\{0\}$. If $y\neq 0$, identify it with $\frac{x}{y}$, otherwise with $\infty$. This clearly descends to $\P(\R)$, and the action of $PSL_2(\R)$ becomes $\begin{pmatrix}a&b\\c&d\end{pmatrix}. x=\frac{ax+b}{cx+d}$. The conventions for infinity are $\begin{pmatrix}a&b\\c&d\end{pmatrix}(\infty)=\frac{a}{c}$ if $c\neq0$ and $\infty$ otherwise, and if $c\neq 0$, $\begin{pmatrix}a&b\\c&d\end{pmatrix}.(-\frac{d}{c})=\infty$. Note that by conjugation we can choose any point to be the infinity.
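For example, the element represented by $\begin{pmatrix}2&1\\1&1\end{pmatrix}$ has trace $3$ and is therefore hyperbolic; it acts by $x\mapsto\frac{2x+1}{x+1}$, and its fixed points solve $x^2-x-1=0$, giving $\frac{1\pm\sqrt{5}}{2}$. A computation of the derivative at these points shows that $\frac{1+\sqrt{5}}{2}$ is the attractive fixed point and $\frac{1-\sqrt{5}}{2}$ the repulsive one.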
Let us now look into the groups defined by Monod~\cite{h-main}. We define $\Gamma$ as the group of all homeomorphisms of $\R\cup\{\infty\}$ that are piecewise in $PSL_2(\R)$ with a finite number of pieces. Take a subring $A$ of $\R$. We define $\Gamma(A)$ to be the subgroup of $\Gamma$ the elements of which are piecewise in $PSL_2(A)$ and the extremities of the intervals are in $P_A$, the set of fixed points of hyperbolic elements of $PSL_2(A)$.
\begin{defi}
The group of piecewise projective homeomorphisms $H(A)$ is the subgroup of $\Gamma(A)$ formed by the elements that fix infinity.
\end{defi}
It can be thought of as a group of homeomorphisms of the real line, and we will use the same notation in both cases. We will write $G=H(\Z)$ for brevity. Note in particular that $\infty\notin P_\Z$. This means that the germs around $+\infty$ and $-\infty$ are the same for every element of $G$. The only elements in $\Pl$ that fix infinity are
\begin{equation}\label{agrp}
\left\{\left(\alpha_n=\begin{pmatrix}1&n\\0&1\end{pmatrix}\right)_{n\in\Z}\right\}= G\cap \Pl.
\end{equation}
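Indeed, with the conventions above, $\alpha_n$ acts on $\R$ as the translation $x\mapsto x+n$, and these are the only elements of $\Pl$ fixing $\infty$: the condition $\begin{pmatrix}a&b\\c&d\end{pmatrix}(\infty)=\infty$ forces $c=0$, hence $ad=1$ and $a=d=\pm1$ in $\Pl$.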
Fix $g\in G$ and let its germ at infinity (on either side) be $\alpha_n$. Then $g\alpha_{-n}$ has finite support. The set of elements $\bar{G}\subset G$ that have finite support is clearly a subgroup, and therefore if we denote $\A=\{\alpha_n,n\in\Z\}$, we have
$$G=\bar{G}\A.$$
For the purposes of this article, we also need to define:
\begin{defi}\label{tildeg}
Consider the elements of $\Gamma$ that fix infinity and are piecewise in $\Pl$. We call the group formed by those elements the \textit{piecewise $\Pl$ group}, and denote it as $\G$.
\end{defi}
Remark that in an extremity $\gamma$ of the piecewise definition of an element $g\in\G$, the left and right germs $g(\gamma-0)$ and $g(\gamma+0)$ have a common fixed point. Then $g(\gamma+0)^{-1}g(\gamma-0)\in \Pl$ fixes $\gamma$. Therefore the extremities are in $P_\Z\cup\Q\cup\{\infty\}$, that is in the set of fixed points of any (not necessarily hyperbolic) elements of $\Pl$. In other words, the only difference between $\G$ and $G=H(\Z)$ is that $\G$ is allowed to have break points in $\Q\cup\{\infty\}$, that is in the set of fixed points of parabolic elements. Clearly, $G\leq\G$. This allows us to restrict elements, which we will need in Section~\ref{algsec}:
\begin{defi}\label{restr}
Let $f\in\G$, and $a,b\in\R$ such that $f(a)=a$ and $f(b)=b$. The function $f\restriction_{(a,b)}$ defined by $f\restriction_{(a,b)}(x)=f(x)$ for $x\in(a,b)$ and $f\restriction_{(a,b)}(x)=x$ otherwise is called a restriction.
\end{defi}
Remark that $f\restriction_{(a,b)}\in\G$. The idea of this definition is that we extend the restricted function by the identity to obtain an element of $\G$.
The subject of this paper is $G$; however, in order to be able to apply results from previous sections in Section~\ref{algsec}, we will prove several lemmas for $\G$. The corresponding results for $G$ follow immediately from the fact that it is a subgroup.
\subsection{Random walks}
Throughout this article, for a measure $\mu$ on a group $H$ we will consider the random walk by multiplication on the left. That is the walk $(x_n)_{n\in\N}$ where $x_{n+1}=y_nx_n$ and the increments $y_n$ are sampled by $\mu$. In other words, it is the random walk defined by the kernel $p(x,y)=\mu(yx^{-1})$. Remark that for walks on groups it is standard to consider the walk by multiplication on the right. In this article the group elements are homeomorphisms on $\R$ and as such they have a natural action on the left on elements of $\R$, which is $(f,x)\mapsto f(x)$.
We will use Definition~\ref{poisson} as the definition of Poisson boundary. For completeness' sake we also mention its description in terms of harmonic functions. For a group $H$ and a probability measure $\mu$ on $H$ we say that a function $f$ on $H$ is \textit{harmonic} if for every $g\in H$, $f(g)=\sum_{h\in H}f(hg)\mu(h)$. For a non-degenerate measure, the $L^\infty$ space on the Poisson boundary is isomorphic to the space of bounded harmonic functions on $H$, and the exact form of that isomorphism is given by a classical result called the \textit{Poisson formula}. In particular, non-triviality of the Poisson boundary is equivalent to the existence of non-trivial bounded harmonic functions.
We recall the entropy criterion for triviality of the Poisson boundary.
\begin{defi}
Consider two measures $\mu$ and $\lambda$ on a discrete group $H$. We denote $\mu*\lambda$ their \textit{convolution}, defined as the image of their product by the multiplication function. Specifically:
$$\mu*\lambda(A)=\int\mu(Ah^{-1})d\lambda(h).$$
\end{defi}
Remark that $\mu^{*n}$ gives the probability distribution for $n$ steps of the walk, starting at the neutral element. For a probability measure $\mu$ on a countable group $H$ we denote $H(\mu)$ its \textit{entropy}, defined by
$$H(\mu)=\sum_{g\in H}-\mu(g)\log{\mu(g)}.$$
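For instance, if $\mu$ is uniform on a finite set $S\subset H$, then
$$H(\mu)=\sum_{g\in S}-\frac{1}{|S|}\log{\frac{1}{|S|}}=\log{|S|};$$
in particular, every finitely supported measure has finite entropy.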
One of the main properties of entropy is that the entropy of a product of measures is not greater than the sum of their entropies. Combining that with the fact that taking the image of a measure under a function does not increase its entropy, we obtain $H(\mu*\lambda)\leq H(\mu) +H(\lambda)$. Avez~\cite{avez72} introduces the following definition:
\begin{defi}
The \textit{entropy of random walk} (also called \textit{asymptotic entropy}) of a measure $\mu$ on a group $H$ is defined as $\lim_{n\rightarrow\infty}\frac{H(\mu^{*n})}{n}$.
\end{defi}
\begin{thm}[Entropy Criterion (Kaimanovich-Vershik~\cite{kaimpoisson}, Derriennic~\cite{derast})]\label{entropy}
Let $H$ be a countable group and $\mu$ a non-degenerate probability measure on $H$ with finite entropy. Then the Poisson boundary of $(H,\mu)$ is trivial if and only if the asymptotic entropy of $\mu$ is equal to zero.
\end{thm}
\section{Some properties of groups of piecewise projective homeomorphisms}\label{prelim}
In Subsection~\ref{slopechange} we study $P_\Z$ and the group action locally around points of it. In Subsection~\ref{confsect}, using the results from the first subsection, to each element $g\in\G$ we associate a configuration $C_g$. We then also describe how to construct an element with a specific associated configuration.
\subsection{Slope change points in $G=H(\Z)$}\label{slopechange}
Let $g$ be a hyperbolic element of $\Pl$. Let it be represented by $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and denote $tr(g)=a+d$ its trace. Then its fixed points are $\frac{a-d\pm\sqrt{tr(g)^2-4}}{2c}$. As the trace is an integer greater than $2$ in absolute value, these numbers are never rational. Furthermore, it is worth noting that $\Q(\sqrt{tr(g)^2-4})$ is stable by $\Pl$ and therefore by $\G$ (and $G$). If we enumerate all prime numbers as $(p_i)_{i\in\N}$, we have, for $I\neq J\subset\N$ finite, $\Q(\sqrt{\prod_{i\in I}p_i})\cap\Q(\sqrt{\prod_{i\in J}p_i})=\Q$. We just mentioned that $P_\Z\cap\Q=\emptyset$ so we have
$$P_\Z=\bigsqcup_{I\subset\N\mbox{ finite}}\left(P_\Z\bigcap\Q\left(\sqrt{\prod_{i\in I}p_i}\right)\right)$$
where each set in the decomposition is stable by $\G$. Note also that the fixed points of parabolic elements of $\Pl$ are rational. This actually completely characterizes the set $P_\Z$, as we will now show that $P_\Z\bigcap\Q\left(\sqrt{\prod_{i\in I}p_i}\right)=\Q\left(\sqrt{\prod_{i\in I}p_i}\right)\setminus\Q$:
\begin{lemma}\label{all}
Take any $s\in\Q(\sqrt{k})\setminus\Q$ for some $k\in\N$. Then $s\in P_\Z$.
\end{lemma}
Remark that $k$ is not a perfect square, as $\Q(\sqrt{k})\setminus\Q$ has to be non-empty.
\begin{proof}
Note first that to have $\sqrt{tr^2-4}\in\Q(\sqrt{k})$ for some matrix it suffices to find integers $x\geq 2$ and $y$ such that $x^2-ky^2=1$. Indeed, any matrix with trace $2x$ will then satisfy this, for example $\begin{pmatrix}x&x^2-1\\1&x\end{pmatrix}$. This is known as Pell's equality, and has infinitely many solutions for any $k$ that is not a square (see Mordell's book~\cite[Ch.~8]{mordell}).
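For example, for $k=2$ the smallest solution with $x\geq2$ is $x=3$, $y=2$ (as $3^2-2\cdot 2^2=1$); the matrix $\begin{pmatrix}3&8\\1&3\end{pmatrix}$ then has trace $6$ and $\sqrt{tr^2-4}=\sqrt{32}=4\sqrt{2}\in\Q(\sqrt{2})$, and its fixed points are $\pm2\sqrt{2}$.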
Write $s=\frac{p}{q}+\frac{p'}{q'}\sqrt{k}$ for some integers $p,q,p',q'$. Applying Pell's equality for $(p'q'q^2)^2k$, we obtain integers $x$ and $a$ such that $x^2-a^2(p'q'q^2)^2k=1$. In other words, $x^2-y^2k=1$ for $y=p'q'q^2a$. We construct $\begin{pmatrix}x+q'^2pqa&b\\q'^2q^2a&x-q'^2pqa\end{pmatrix}$ where $b=\frac{x^2-q'^4p^2q^2a^2-1}{q'^2q^2a}=p'^2q^2ak-q'^2p^2a\in a\Z$. The matrix has $s$ for a fixed point, and $s$ is not rational, therefore the matrix is a hyperbolic element of $\Pl$.
\end{proof}
\begin{remark}\label{fingen}
The break points of a finite number of elements of $H(\Z)$ are all contained in the sets $\Q(\sqrt{k})$ for finitely many values of $k$, so Lemma~\ref{all} implies that $H(\Z)$ is not finitely generated.
\end{remark}
In order to define configurations, we wish to study the slope changes at elements of $P_\Z$. Consider $g\in\G$ and $s\in P_\Z$ such that $g(s+0)\neq g(s-0)$. Then it is easy to see that $f=g(s-0)^{-1}g(s+0)\in \Pl$ fixes $s$. Therefore, in order to study the slope changes we need to understand the stabiliser of $s$ in $\Pl$. We prove:
\begin{lemma}\label{cyclic}
Fix $s\in\P(\R)$. The stabiliser $St_s$ of $s$ in $\Pl$ is either isomorphic to $\Z$ or trivial.
\end{lemma}
\begin{proof}
Assume that $St_s$ is not trivial, and let $f\in St_s$ be different from the identity. Clearly, $f$ is not elliptic. If $f$ is hyperbolic, $s\in P_\Z$, and if $f$ is parabolic, $s\in\Q\cup\{\infty\}$. We distinguish three cases, that is $s\in P_\Z$, $s=\infty$ and $s\in\Q$.
We first assume $s\in P_\Z$. Let $s=r+r'\sqrt{k}$ with $r,r'\in\Q$ and $k\in\Z$. Note that the calculations in the beginning of the section yield that for every element $f$ in $St_s$ that is not the identity, $f$ is hyperbolic and the other fixed point of $f$ is $\bar{s}=r-r'\sqrt{k}$. Let $i=\begin{pmatrix}\frac{1}{2}&-\frac{r+r'\sqrt{k}}{2}\\\frac{1}{r'\sqrt{k}}&1-\frac{r}{r'\sqrt{k}}\end{pmatrix}\in PSL_2(\R)$ and consider the conjugation of $St_s$ by $i$. By choice of $i$ we have that $i(s)=0$ and $i(\bar{s})=\infty$. Therefore the image of $St_s$ is a subgroup of the elements of $PSL_2(\R)$ that have zeros on the secondary diagonal. Furthermore, calculating the image of an example matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$, for $tr=a+d$ the trace of the matrix, we get
\begin{equation}\label{cyc}
i\begin{pmatrix}a&b\\c&d\end{pmatrix}i^{-1}=\begin{pmatrix}\frac{tr+\sqrt{tr^2-4}}{2}&0\\0&\frac{tr-\sqrt{tr^2-4}}{2}\end{pmatrix}
\end{equation}
Thus to understand the image of $St_s$ we just need to study the elements of the form $\frac{x+y\sqrt{k}}{2}$ with $x^2-ky^2=4$. This appears in a generalized form of Pell's equation, and those elements are known~\cite[Ch.~8]{mordell} to be powers of a fundamental solution (which is also true for the classic Pell equation if you identify a solution $x^2-y^2k=1$ with a unit element $x+y\sqrt{k}$ in $\Z[\sqrt{k}]$). This proves that the image of $St_s$ by this conjugation, which is isomorphic to $St_s$, is a subgroup of a group isomorphic to $\Z$. $St_s$ is then also isomorphic to $\Z$. The matrix with the fundamental solution in the upper left corner defines a canonical generator for the group of elements of the form seen in (\ref{cyc}), and its smallest positive power in the image of $St_s$ defines a canonical generator for $St_s$.
Assume now $s=\infty$. As we described in (\ref{agrp}), the stabiliser of $\infty$ is $(\alpha_n)_{n\in\Z}$, which is trivially isomorphic to $\Z$.
Lastly, assume that $s=\frac{p}{q}\in\Q$ with $p$ and $q$ co-prime. There exist $m$ and $n$ such that $pm+qn=1$. Then $i=\begin{pmatrix}m&n\\-q&p\end{pmatrix}\in \Pl$ verifies $i(s)=\infty$. Thus the conjugation by $i$ defines an injection from the subgroup that fixes $s$ into $St_\infty=\A$. We observe that non-trivial subgroups of $\Z$ are isomorphic to $\Z$, which concludes the proof.\end{proof}
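As a concrete example, take $s=\sqrt{2}$. The element $\begin{pmatrix}3&4\\2&3\end{pmatrix}\in\Pl$ is hyperbolic and fixes $\pm\sqrt{2}$; under the conjugation sending $\sqrt{2}\mapsto0$ and $-\sqrt{2}\mapsto\infty$ as above, it corresponds to the fundamental unit $3+2\sqrt{2}$ of $\Z[\sqrt{2}]$ (here $x=3$, $y=2$ is the fundamental solution of $x^2-2y^2=1$), so, up to this identification, it generates $St_{\sqrt{2}}$.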
Having an isomorphism between $St_s$ (for $s\in P_\Z$) and $\Z$ will be useful to us, so we wish to know its exact form. We prove:
\begin{lemma}\label{log}
Let $s\in P_\Z$. There exists $\phi_s\in\R^+$ that remains constant on the orbit $Gs$ of $s$ such that $f\mapsto\log_{\phi_s}(f'(s))$ defines an isomorphism between $St_s$ and $\Z$.
\end{lemma}
\begin{proof}
The derivative on the fixed point is multiplicative. Therefore for a fixed $s$, this follows from Lemma~\ref{cyclic} and the fact that subgroups of $\Z$ are isomorphic to $\Z$ (or trivial, which is impossible here). What we need to prove is that $\phi$ remains constant on $Gs$. Fix $s$ and consider $s'\in Gs$. Let $j\in \Pl$ be such that $j(s)=s'$. Then the conjugation by $j$ defines a bijection between $St_s$ and $St_{s'}$. Calculating the derivative on an element $f\in St_s$ we get $(jfj^{-1})'(s')=j'(s)(j^{-1})'(j(s))f'(s)=f'(s)$, which proves the result.
\end{proof}
We further denote $\psi:\A\rightarrow\Z$ (see \ref{agrp}) the map that associates $n$ to $\alpha_n$, and $\psi_r$ the conjugate map for any $r\in\Q$. Remark that this is well defined by Lemma~\ref{cyclic} and the fact that conjugation in $\Z$ is trivial.
\subsection{Configurations}\label{confsect}
Fix $s\in P_\Z$ and let $\phi=\phi_s$ be given by Lemma~\ref{log}. By the isomorphism it defines, there exists an element $g_s$ that fixes $s$, such that $g_s'(s)=\phi_s$. As $s\notin\Q$, $g_s$ is hyperbolic. We associate to each element of the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}) a configuration representing the changes of slope at each point of the orbit $\G s=Gs$ of $s$, precisely:
\begin{defi}\label{confdef}To $g\in\G$ we assign $C_g:Gs\rightarrow\Z$ by
$$C_g(\gamma)=\log_\phi(g'(\gamma+0)g'(\gamma-0)^{-1}).$$
\end{defi}
Note that by choice of $\phi$ this value is well defined: indeed, $g(\gamma-0)^{-1}g(\gamma+0)\in \Pl$ fixes $\gamma$, and is therefore in $St_\gamma$.
Remark that by definition of $\G$ each configuration in the image of the association has a finite support. Remark also that the configuration ignores information about the changes in slope outside the orbit of $s$. For $s\in\Q$ we further denote $C_g(\gamma)=\psi_\gamma(g(\gamma-0)^{-1}g(\gamma+0))$, which will have similar properties. In the rest of the paper we will consider $s\in P_\Z$ unless otherwise specified. For completeness' sake, remark also that $G=H(\Z)\leq\G$ and the orbits of $G$ and $\G$ on $s$ are the same (as they are both the same as the orbit of $\Pl$) and therefore Definition~\ref{confdef} could be done directly for $G$, and what we would obtain is the same as restricting from the current definition.
\begin{lemma}\label{unit}\label{hs}
For every $s\in P_\Z$, there exists an element $h_s\in G$ such that $h_s(s-0)^{-1}h_s(s+0)=g_s$ and all other slope changes of $h_s$ are outside $Gs$. In particular, $C_{h_s}=\delta_s$.
\end{lemma}
\begin{proof}
Fix $s\in P_\Z$ and let $k=k_s$ be the unique square-free integer such that $s\in\Q(\sqrt{k})$. We will construct $h_s$ such that $h_s(s)=s$. Note that in that case we have $C_{h_s^{-1}}=-\delta_s$. This implies that if we construct an element $\tilde{h}_s$ that verifies $\tilde{h}_s(s-0)^{-1}\tilde{h}_s(s+0)=g_s^{\pm1}$ and all other slope changes are outside $Gs$, choosing $h_s=\tilde{h}_s^{\pm1}$ gives the result. In other words, we can replace $g_s$ with $g_s^{-1}$. Seen as a function on $\R$, $g_s$ is defined at all points but $-\frac{d}{c}$. It is then continuous in an interval around $s$. Moreover, if the interval is small enough, $s$ is the only fixed point in it. Therefore for some $\varepsilon$, either $g_s(x)>x$ for every $x\in(s,s+\varepsilon)$, or $g_s(x)<x$ in that interval. As we have the right to replace it with its inverse, without loss of generality we assume that $g_s$ is greater than the identity in a right neighbourhood of $s$.
Write $s=r+r'\sqrt{k}$ with $r,r'\in\Q$. Then the other fixed point of $g_s$ is its conjugate $\bar{s}=r-r'\sqrt{k}$. Remark that it is impossible for $-\frac{d}{c}$ to be between $s$ and $\bar{s}$ as the function $g_s$ is increasing where it is continuous and has the same limits at $+\infty$ and $-\infty$ (see Figure~\ref{trivplot}). If $r'<0$, $g_s$ is greater than the identity in $(s,\bar{s})$ as it is continuous there. In that case, it is smaller than the identity to the left of the fixed points, but as it is increasing and has a finite limit at $-\infty$, this implies (see Figure~\ref{trivplot}) that $-\frac{d}{c}<s$. Similarly, if $s>\bar{s}$, $g_s$ is increasing and greater than the identity to the right of $s$, but has a finite limit at $+\infty$, so $-\frac{d}{c}>s$.
\begin{figure}
\centering
\begin{minipage}{8cm}\centering\caption{Graphs of $g_s$ and the identity}\label{trivplot}
\begin{tikzpicture}
\begin{axis}[xmin=-4,xmax=4,ymin=-4,ymax=4,axis lines = middle, legend pos = south west,xtick={-10},ytick={17}]
\addplot[domain=-3.8:3.8,color=black]{x};
\addlegendentry{$Id$}
\addplot[color=blue,samples=100,domain=-4:-2,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)};
\addlegendentry{$g_s$}
\addplot[color=red,samples=100,domain=-4:2,restrict y to domain=-4:4]{(2*x-3)/(2-x)};
\addplot[color=red,samples=100,domain=2:4,restrict y to domain=-4:4]{(2*x-3)/(2-x)};
\addlegendentry{$g_s^{-1}$}
\addplot[color=blue,samples=100,domain=-2:4,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)};
\node[label={-30:{$s$}},circle,fill,inner sep=1pt] at (axis cs:-1.732,-1.732) {};
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}{8cm}\centering\caption{Graphs of $g_s$ and $j_s$}\label{plot}
\begin{tikzpicture}
\begin{axis}[xmin=-4,xmax=4,ymin=-4,ymax=4,axis lines = middle, legend pos = south west,xtick={-10},ytick={17}]
\addplot[domain=-3.8:3.8,color=black]{x};
\addlegendentry{$Id$}
\addplot[color=blue,samples=100,domain=-4:-2,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)};
\addlegendentry{$g_s$}
\addplot[color=red,samples=100,domain=-4:0,restrict y to domain=-4:4]{(2*x-3)/(2-x)};
\addlegendentry{$g_s^{-1}$}
\addplot[samples=100,domain=0:4,restrict y to domain=-4:4,densely dotted,thick]{(4*x-1)/x};
\addlegendentry{$j_s$}
\addplot[color=blue,samples=100,domain=-2:4,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)};
\addplot[color=red,samples=150,domain=0:2,restrict y to domain=-4:4]{(2*x-3)/(2-x)};
\addplot[color=red,samples=100,domain=2:4,restrict y to domain=-4:4]{(2*x-3)/(2-x)};
\addplot[samples=100,domain=-4:0,restrict y to domain=-4:4,densely dotted,thick]{(4*x-1)/x};
\node[label={-1:{$\bar{t}$}},circle,fill,inner sep=1pt] at (axis cs:0.268,0.268) {};
\node[label={110:{$\tilde{s}$}},circle,fill,inner sep=1pt] at (axis cs:0.414,1.5858) {};
\end{axis}
\end{tikzpicture}
\end{minipage}
\end{figure}
We will find a hyperbolic element $j_s$ verifying: the larger fixed point $t$ of $j_s$ is not in $Gs$ and $t>-\frac{d}{c}$, while the smaller fixed point $\bar{t}$ is between $s$ and $\bar{s}$, and $j_s$ is greater than the identity between $\bar{t}$ and $t$. If $r'<0$ consider the interval $(\bar{t},\bar{s})$. At its infimum, $j_s$ has a fixed point while $g_s$ is greater than the identity, and at its supremum the inverse is true. By the intermediate value theorem, there exists $\tilde{s}$ in that interval such that $j_s(\tilde{s})=g_s(\tilde{s})$ (see Figure~\ref{plot}). If $r'>0$, consider the interval $(s,-\frac{d}{c})$. At its infimum, $g_s$ is fixed and therefore smaller than $j_s$, and at its supremum $g_s$ diverges towards $+\infty$ while $j_s$ has a finite limit. Again by the intermediate value theorem, there exists $\tilde{s}$ in that interval where $g_s$ and $j_s$ agree. As $-\frac{d}{c}<t$ by hypothesis, in both cases we have $s<\tilde{s}<t$. We then define
\begin{equation*}h_s(x)=\begin{cases}
x & x\leq s \\
g_s(x) & s\leq x\leq\tilde{s}\\
j_s(x) & \tilde{s}\leq x\leq t \\
x & t\leq x \\
\end{cases}
\end{equation*}
Thus it would suffice to prove that we can construct $j_s$ that verifies those properties and such that $\tilde{s}\notin Gs$. Note that $\tilde{s}$ is a fixed point of $g_s^{-1}j_s$, so to prove that it is not in $Gs$ it will suffice to study the trace of the latter. Remark that in this definition $h_s$ is strictly greater than the identity in an open interval, and equal to it outside (this is with the assumption on $g_s$, in the general case $h_s$ has its support in an open interval, and is either strictly greater than the identity on the whole interval, or strictly smaller).
Write $r=\frac{p}{q}$. By Bezout's identity, there are integers $\tilde{m}$ and $\tilde{n}$ such that $q\tilde{n}-p\tilde{m}=1$. Then the matrix $i=\begin{pmatrix}\tilde{n}&p\\\tilde{m}&q\end{pmatrix}\in \Pl$ verifies $i.0=\frac{p}{q}$. Taking $\tilde{j}_s=i^{-1}j_si$ it suffices to find $\tilde{j}_s$ with fixed points outside $Gs$, the smaller one being close enough to $0$, and the greater one large enough. Remark that the only information we have on $g_s$ is its trace, so this does not complicate the computations for $\tilde{s}$.
We will define $\tilde{j}_s$ in the form $\begin{pmatrix}x'+ma'&n^2l_sa'-m^2a'\\a'&x'-ma'\end{pmatrix}$ where $x'^2-n^2a'^2l_s=1$. Its fixed points are $m\pm n\sqrt{l_s}$. By choosing $m$ arbitrarily large, the second condition will be satisfied. Note $ig_s^{-1}i^{-1}=\begin{pmatrix}\tilde{a}&\tilde{b}\\\tilde{c}&\tilde{d}\end{pmatrix}$ and $tr(g_s)^2-4=o^2k$. Calculating the trace of $g_s^{-1}j_s$ we get $tr(g_s)x'+a'\tilde{b}+mz_1+nz_2$ with $z_1,z_2\in\Z$. Then, admitting that $n$ divides $x'-1$ (which will be seen in the construction of $x'$) we obtain for some $z_i\in \Z$, $i\in\N$:
\begin{equation}\label{moche}
\begin{split}
tr(g_s^{-1}j_s)^2-4&=mz_3+nz_4+a'^2\tilde{b}^2+2a'\tilde{b}x'tr(g_s)+x'^2tr(g_s)^2-tr(g_s)^2+tr(g_s)^2-4\\
&=mz_3+nz_5+a'^2\tilde{b}^2+2a'\tilde{b}tr(g_s)+n^2a'^2l_str(g_s)^2+o^2k\\
&=mz_3+nz_6+a'^2\tilde{b}^2+2a'\tilde{b}tr(g_s)+o^2k.
\end{split}
\end{equation}
Take a prime $p_s$ that is larger than $k$ and $\tilde{b}(tr(g_s)+2)$. There is an integer $a''<p_s$ such that $\tilde{b}(tr(g_s)+2)a''\equiv-1\mod{p_s}$. Take $a'=o^2ka''$. Then
$$a'^2\tilde{b}^2+2a'\tilde{b}tr(g_s)+o^2k=o^2k\big(\tilde{b}(tr(g_s)+2)a''+1\big)\big(\tilde{b}(tr(g_s)-2)a''+1\big).$$
As $\Z/p_s\Z$ is a field, clearly $\tilde{b}(tr(g_s)-2)a''\not\equiv-1\mod{p_s}$. As $\tilde{b}(tr(g_s)+2)a''<p_s^2$, the product is divisible by $p_s$ but not $p_s^2$. We will choose $m$ and $n$ divisible by $p_s^2$, which will then ensure that the value in (\ref{moche}) is divisible by $p_s$ but not $p_s^2$, proving that $\tilde{s}\notin Gs$.
All that is left is choosing $n$ and $m$. As we just noted, we need them to be multiples of $p_s^2$. Aside from that $n$ needs to satisfy $x'^2-n^2a'^2l_s=1$, $l_s$ must not be a square times $k$ and we need to be able to make $m-n\sqrt{l_s}$ arbitrarily small. Write $m=p_s^2m'$ and $n=p_s^2n'$. Then $m'$ can be anything so long as $m-n\sqrt{l_s}$ becomes arbitrarily small. In other words, we are only interested in the fractional part of $n'\sqrt{l_s}$. We choose $x'=n'^2a'^2p_s^5-1$ and will prove that the conditions are satisfied for $n'$ large enough. Then $x'^2-n^2a'^2l_s=1$ is satisfied for $l_s=p_s(n'^2a'^2p_s^5-2)$. In particular, $p_s$ divides $l_s$ but its square does not, so $l_s$ is not equal to a square times $k$. Moreover, $\sqrt{l_s}=\sqrt{(n'a'p_s^3)^2-2p_s}$ and as the derivative of the square root is strictly decreasing, $\sqrt{(n'a'p_s^3)^2-2p_s}-n'a'p_s^3\rightarrow0$ for $n'\rightarrow\infty$. Its fractional part then clearly converges towards $1$, which concludes the proof.
For a product inside the group $\G$, by the chain rule we have
$$(g_2g_1)'(\gamma)=g_2'(g_1(\gamma))g_1'(\gamma)$$
and thus
\begin{equation}\label{der}C_{g_2g_1}(\gamma)=C_{g_1}(\gamma)+C_{g_2}(g_1(\gamma))\end{equation}
That gives us a natural action of $\G$ on $\Z^{Gs}$ by the formula $(g,C)\rightarrow C_g+S^gC$ where $S^gC(\gamma)=C(g(\gamma))$. It is easy to check that it also remains true for $s\in\Q$.
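In particular, applying (\ref{der}) to $g^{-1}g=Id$ and using $C_{Id}\equiv0$ yields the formula for inverses
\begin{equation*}
C_{g^{-1}}(\gamma)=-C_{g}(g^{-1}(\gamma)),\quad \gamma\in Gs,
\end{equation*}
which was already used implicitly in the proof of Lemma~\ref{hs}: if $C_{h}=\delta_s$ and $h(s)=s$, then $C_{h^{-1}}=-\delta_s$.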
\begin{lemma}\label{nostable}
There is no configuration $C:Gs\rightarrow\Z$ such that $C=C_{h_s}+S^{h_s}C$.
\end{lemma}
Indeed, applying (\ref{der}) and taking the value at $s$, we would get $C(s)=C_{h_s}(s)+C(h_s(s))=1+C(s)$, since $h_s(s)=s$ and $C_{h_s}(s)=1$, which is a contradiction.
Consider $g$ and $h$ such that $C_g=C_h$. We have $C_{Id}=C_{g^{-1}}+S^{g^{-1}}C_g$ and thus $C_{hg^{-1}}=C_{g^{-1}}+S^{g^{-1}}C_h=C_{Id}=0$. We denote
$$H_s=\{g\in G:C_g=0\}.$$
Then:
\begin{lemma}\label{generate}
The element $h_s$ and the subgroup $H_s$ generate $G$ for every $s\in P_\Z$.
\end{lemma}
\begin{proof}We show for $g\in G$ by induction on $\|C_g\|_1=\sum_{x\in Gs}|C_g(x)|$ that it is in the group generated by $\{h_s\}\cup H_s$. The base is for $\|C_g\|_1=0$, in which case we have $C_g\equiv 0$, that is $g\in H_s$. We take $g\in G$ and assume that every element with smaller $l^1$ measure of its configuration is in the group generated by $\{h_s\}\cup H_s$. We take any $\alpha\in\supp(C_g)$. Without loss of generality, we can assume that $C_g(\alpha)>0$. As $g(\alpha)\in Gs$, by Lemma~\ref{trace} there exists $h\in H_s$ such that $h(s)=g(\alpha)$ and $C_h=0$. Let $\tilde{g}=hh_sh^{-1}$. As $h_s\in\{h_s\}\cup H_s$, we have $\tilde{g}\in\langle \{h_s\}\cup H_s\rangle$. Applying the composition formula~(\ref{der}) we obtain $C_{\tilde{g}}(x)=0$ for $x\neq g(\alpha)$ and $C_{\tilde{g}}(g(\alpha))=1$. We consider $\bar{g}=\tilde{g}^{-1}g$. For $x\neq \alpha$, by the composition formula (\ref{der}) we get $C_{\bar{g}}(x)=C_g(x)$, and at $\alpha$ we have $C_{\bar{g}}(\alpha)=C_g(\alpha)-1$. By the induction hypothesis we then have $\bar{g}\in\langle \{h_s\}\cup H_s\rangle$, and as $\tilde{g}$ is also included in this set, so is $g$.\end{proof}
\begin{lemma}\label{trace}
For any $g\in \Pl$ and $\gamma\in\R$ there exists $h\in H_s$ such that $g(\gamma)=h(\gamma)$.
\end{lemma}
\begin{proof}By Monod's construction in~\cite[Proposition~9]{h-main}, we know that we can find $h\in G$ that agrees with $g$ at $\gamma$: it is of the form $q^{-1}g$, where $q=\begin{pmatrix}a&b+ra\\c&d+rc\end{pmatrix}$, on the interval between its fixed points that contains infinity, and the identity otherwise. To have this result, what is required is that either $r$ or $-r$ (depending on the situation) be large enough. Clearly, $C_h\equiv0$ would follow from the fixed points of $q$ being outside $Gs$ (as neither of them is infinity). In particular, it is enough to prove that for arbitrarily large $r$, the fixed points of $\begin{pmatrix}a&b+ra\\c&d+rc\end{pmatrix}$ are outside $\Q(\sqrt{k})$. The trace of that matrix is $(a+d)+rc$. Let $p$ be a large prime number that does not divide $2$, $k$ or $c$. As $c$ and $p^2$ are co-prime, there exists $r_0$ such that $a+d+r_0c\equiv p+2\pmod{p^2}$. Then for every $i\in\Z$, we have $(a+d+(r_0+p^2i)c)^2-4\equiv 4p\pmod{p^2}$. As $p$ and $4$ are co-prime, this implies that for each $r=r_0+p^2i$ the fixed points of that matrix are not in $\Q(\sqrt{k})$, as $p$ does not divide $k$.\end{proof}
\section{Convergence condition}\label{sectfour}
Fix $s\in P_\Z$ and let us use the notations from Subsection~\ref{confsect}. For a measure $\mu$ on $\G$ we denote $C_\mu=\bigcup_{g\in\supp(\mu)}\supp(C_g)$ its ``support'' on $Gs$. That is, $C_\mu\subset Gs$ is the set of points at which at least one element of the support of $\mu$ (in the classical sense) changes slope. We thus obtain the first result
\begin{lemma}\label{base}
Consider the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}). Let $\mu$ be a measure on a subgroup of $\G$ such that $C_\mu$ is transient with respect to $\mu$ for the natural action of $\G$ on $\R$ and $h_s$ is in the semigroup generated by $\supp(\mu)$. Then the Poisson boundary of $\mu$ on the subgroup is not trivial.
\end{lemma}
\begin{proof}Consider a random walk $(g_n)$ with $g_{n+1}=h_ng_n$, where the increments $h_n$ are sampled independently according to $\mu$. For a fixed $\gamma\in Gs$ we have
$$C_{g_{n+1}}(\gamma)=C_{g_n}(\gamma)+C_{h_n}(g_n(\gamma))$$
By the transience hypothesis, $g_n(\gamma)$ almost surely leaves $C_\mu$ after some finite time, so the increments $C_{h_n}(g_n(\gamma))$ are eventually zero and $C_{g_n}(\gamma)$ stabilises. In other words, $C_{g_n}$ converges pointwise towards a limit $C_\infty$. This defines a hitting measure on $\Z^{Gs}$ that is a quotient of $\mu$'s Poisson boundary. Moreover, it is $\mu$-invariant by the natural action on $\Z^{Gs}$. It remains to see that it is not trivial. Assume the opposite, which is that there exists a configuration $C$ such that for almost all walks, the associated configuration $C_{g_n}$ converges pointwise to $C$. By hypothesis there are elements $h_1,\dots,h_m$ with positive probability such that $h_mh_{m-1}\dots h_1=h_s$. There is a strictly positive probability for a random walk to start with $h_mh_{m-1}\dots h_1$. Applying~(\ref{der}) we get $C=C_{h_s}+S^{h_s}C$, which is contradictory to Lemma~\ref{nostable}.\end{proof}
This lemma, along with Lemma~\ref{generate} implies:
\begin{lemma}\label{constr}
Fix $s\in P_\Z$. Let $\mu$ be a measure on $G=H(\Z)$ that satisfies the following conditions:
\begin{enumerate}
\item[(i)] the element $h_s$ belongs to the support of $\mu$;
\item[(ii)] the intersection of the support of $\mu$ with the complement of $H_s$ is finite;
\item[(iii)] the action of $\mu$ on the orbit of $s$ is transient.
\end{enumerate}
Then the Poisson boundary of $\mu$ is non-trivial.
\end{lemma}
We will now show how measures satisfying these assumptions can be constructed. Remark that the question of existence of a measure with non-trivial boundary has already been solved by Frisch-Hartman-Tamuz-Vahidi-Ferdowsi~\cite{choquet-deny}. In our case, notice that $\A\subset H_s$ (see (\ref{agrp})), and it is isomorphic to $\Z$. We can then use a measure on $\A$ to ensure transience of the induced walk on the orbit. To prove that, we use a lemma from Baldi-Lohoué-Peyrière~\cite{var} (see also Woess~\cite[Section~2.C,3.A]{woess2000random}). Here we formulate a stronger version of the lemma, as proven by Varopoulos~\cite{Varopoulis1983}:
\begin{lemma}[Comparison lemma]\label{var}
Let $P_1(x,y)$ and $P_2(x,y)$ be doubly stochastic kernels on a countable set $X$ and assume that $P_2$ is symmetric. Assume that there exists $\varepsilon> 0$ such that
$$P_1(x,y)\geq\varepsilon P_2(x,y)$$
for any $x,y$. Then
\begin{enumerate}
\item For any $0\leq f\in l^2(X)$
$$\sum_{n\in\N}\langle P_1^nf,f\rangle\leq \frac{1}{\varepsilon}\sum_{n\in\N}\langle P_2^nf,f\rangle.$$
\item If $P_2$ is transient then so is $P_1$ (for any point $x\in X$, it follows from (1) applied to $f=\delta_x$).
\end{enumerate}
\end{lemma}
Here, doubly stochastic means that both the kernel and its transpose are Markov kernels. It is in particular the case for $P(x,y)=\mu(yx^{-1})$ for some measure on a group (as the transpose is $(x,y)\mapsto\mu(xy^{-1})$).
\begin{remark}\label{gen}
If $\lambda$ is a transient measure on $\A$ and $\mu$ satisfies conditions (i) and (ii) of Lemma~\ref{constr}, then the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}) implies that $\varepsilon\lambda+(1-\varepsilon)\mu$ satisfies all the conditions of the lemma for any $0<\varepsilon<1$. In other words, choosing $\lambda$ and $\mu$ symmetric, this is a way to construct non-degenerate symmetric measures on $G$ with non-trivial Poisson boundary.
\end{remark}
For completeness' sake, we show that there exist measures positive on all of $G$ that have non-trivial boundary.
\begin{lemma}
Let $\mu$ be a measure on a group $H$ with finite entropy and non-zero asymptotic entropy and which generates $H$ as a semigroup. Then there exists a measure $\tilde{\mu}$ with support equal to $H$ that also has finite entropy and non-zero asymptotic entropy. Furthermore, if $\mu$ is symmetric, so is $\tilde{\mu}$.
\end{lemma}
\begin{proof}
Define $\tilde{\mu}=\frac{1}{e}\sum_{i\in\N}\frac{\mu^{*i}}{i!}$. By a result of Kaimanovich~\cite[Corollary~to~Theorem~4]{entrlemma} we get
$$h(H,\tilde{\mu})=h(H,\mu)\sum_{i\in\N}\frac{i}{ei!}=h(H,\mu),$$
since $\sum_{i\in\N}\frac{i}{ei!}=\frac{1}{e}\sum_{i\geq 1}\frac{1}{(i-1)!}=1$.
Moreover, as the entropy of $\tilde{\mu}^{*n}$ is not smaller than the entropy of $\tilde{\mu}$, finite asymptotic entropy implies finite entropy.
\end{proof}
From this lemma and the entropy criterion Theorem~\ref{entropy} it follows that to have a measure positive on all of $G$ with non-trivial boundary it suffices to construct a measure verifying the conditions of Lemma~\ref{constr} with finite asymptotic entropy, which we can achieve with the construction presented in Remark~\ref{gen}.
\section{Thompson's group as a subgroup of $G=H(\Z)$}\label{thompsect}
In~\cite{chainhomeo} Kim, Koberda and Lodha show that any two increasing homeomorphisms of $\R$ the supports of which form a 2-chain (as they call it) generate, up to taking a power of each, a group isomorphic to Thompson's group $F$. Let us give the exact definition of this term. For a homeomorphism $f$ of $\R$ we call its support $\supp(f)$ the set of points $x$ where $f(x)\neq x$. Remark that we do not define the closure of that set as support, as it is sometimes done. Consider four real numbers $a,b,c,d$ with $a<b<c<d$. Take two homeomorphisms $f$ and $g$ such that $\supp(f)=(a,c)$ and $\supp(g)=(b,d)$. In that case we say that their supports form a 2-chain, and the homeomorphisms generate a 2-prechain group. In other words, two homeomorphisms generate a 2-prechain group if their supports are open intervals that intersect each other but neither is contained in the other.
Clearly, there exist many such pairs in $G$. We will give a simple example. Fix $s$ and find positive rational numbers $\tilde{r}$ and $\tilde{r}'$ such that $\tilde{r}<s<\tilde{r}+\tilde{r}'\sqrt{p_s}<t$, where $(s,t)$ is the support of the element $h_s$ constructed in the proof of Lemma~\ref{hs}. Recall that $p_s$ is a prime larger than $k$. Then choose a hyperbolic element $\tilde{g}_s$ that fixes $\tilde{r}+\tilde{r}'\sqrt{p_s}$ and define
\begin{equation*}\tilde{h}_s(x)=\begin{cases}
\tilde{g}_s(x) & \tilde{r}-\tilde{r}'\sqrt{p_s}\leq x\leq\tilde{r}+\tilde{r}'\sqrt{p_s} \\
x & \mbox{otherwise.} \\
\end{cases}
\end{equation*}
By definition of $\tilde{r}$ and $\tilde{r}'$, $\tilde{h}_s$ and $h_s$ clearly form a 2-prechain, and thus up to a power they generate a copy of Thompson's group (see~\cite[Theorem~3.1]{chainhomeo}). We will denote $\mathfrak{a}_s$ the action $F\curvearrowright\R$ this defines. To obtain the convergence results, we need to prove that the induced random walks on the Schreier graphs of certain points are transient. By the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}) it would suffice to prove it for the simple random walk on the graph, which is why we will study its geometry. In the dyadic representation of Thompson's group, the geometry of the Schreier graph on dyadic numbers has been described by Savchuk~\cite[Proposition~1]{slav10}. It is a tree quasi-isometric to a binary tree with rays attached at each point (see Figure~\ref{sav}), which implies transience of the simple random walk. For a different proof of transience see Kaimanovich~\cite[Theorem~14]{kaimanovichthompson}. We will see that the Schreier graph has similar geometry in the case of $\mathfrak{a}_s$ (see Figure~\ref{treefig}).
\begin{lemma}\label{tree-old}
Consider two homeomorphisms $f$ and $g$ of $\R$ the supports of which are $\supp(f)=(a,c)$ and $\supp(g)=(b,d)$ with $a<b<c<d$. Denote by $H$ the group generated by $f$ and $g$. Then the simple random walk on the Schreier graph of $H$ on the orbit of $b$ is transient.
\end{lemma}
\begin{proof}
Up to replacing $f$ or $g$ with its inverse, we can assume without loss of generality that $f(x)>x$ for $x\in\supp(f)$ and $g(x)>x$ for $x\in\supp(g)$. Denote by $\Gamma$ the Schreier graph of $H$ on the orbit of $b$. The vertices of this graph are the points of the orbit $Hb$ of $b$ by $H$, and two points are connected by an edge if and only if $f$, $g$, $f^{-1}$ or $g^{-1}$ sends one point into the other. Denote by $\tilde{\Gamma}$ the subgraph defined by the vertices that belong to the closed interval $[b,c]$. At every point $x$ of $\Gamma$ such that $x\notin[b,c]$, in a neighbourhood $(x-\varepsilon,x+\varepsilon)$ of $x$, one of the two elements $f$ and $g$ acts trivially, and the other one is strictly greater than the identity map. Without loss of generality, let $f$ act trivially. Let $i_0$ be the largest integer such that $g^{i_0}(x)\in[b,c]$. Then the set of points $(g^i(x))_{i\geq i_0}$ is a ray that starts at an element of $\tilde{\Gamma}$. As the simple random walk on $\Z$ is recurrent (see~\cite[Chapter~3,~Theorem~2.3]{durrett2005probability}), the walk always returns to $\tilde{\Gamma}$ in finite time, and that part of the graph ($\tilde{\Gamma}$) is what we need to study.
Replacing, if necessary, $f$ or $g$ by its power, we can assume that $g^{-1}(c)<f(b)$. Denote $A=[b,g^{-1}(c)]=g^{-1}([b,c])$, $B=[f(b),c]=f([b,c])$ and $C=(g^{-1}(c),f(b))=[b,c]\setminus(A\cup B)$. Consider $x\in\tilde{\Gamma}$ with $x\neq b$ and $x\notin C$. Consider a reduced word $c_nc_{n-1}\dots c_1$ with $c_i\in\{f^{\pm1},g^{\pm1}\}$ that describes a path in $\tilde{\Gamma}$ from $b$ to $x$. In other words $c_nc_{n-1}\dots c_1(b)=x$ and the suffixes of that word satisfy $c_ic_{i-1}\dots c_1(b)\in\tilde{\Gamma}$ for every $i\leq n$. The fact that the word is reduced means that $c_i\neq c_{i+1}^{-1}$ for every $i$. We claim that if $x\in A$, this word ends with $g^{-1}=c_n$, and if $x\in B$, $c_n=f$.
We prove the latter statement by induction on the length $n$ of the word. If the word has length one, it must be $g$, since $f$ fixes $b$ and $g^{-1}(b)\notin[b,c]$. As $g(b)\in B$, this gives the base for the induction.
Assume that the result is true for any reduced word of length strictly less than $n$ whose suffixes, when applied to $b$, stay in $[b,c]$. We will now prove it for $x=c_nc_{n-1}\dots c_1(b)$. We denote $y=c_{n-1}c_{n-2}\dots c_1(b)$ the point just before $x$ in that path. We first consider the case $x\in B$ (as we will see from the proof, the other case is equivalent). We distinguish three cases: $y\in A$, $y\in B$ and $y\in C$.
If $y\in A$, by induction hypothesis we have $c_{n-1}=g^{-1}$. As the word is reduced we thus have $c_n\neq g$. However, from $y\in A$ and $x\in B$ we have $y<x$. Therefore, $c_n\notin\{f^{-1},g^{-1}\}$, and the only possibility left is $c_n=f$.
If $y\in B$, by induction hypothesis we have $c_{n-1}=f$. Therefore, as the word is reduced, $c_n\neq f^{-1}$. From $g^{-1}(c)<f(b)$ it follows that $g(B)\cap[b,c]=\emptyset$. As $x\in B$, this implies that $c_n\neq g$. Similarly, $g^{-1}(B)\subset A$, therefore $c_n\neq g^{-1}$. The only possibility left is $c_n=f$.
If $y\in C$, consider the point $y'=c_{n-2}\dots c_1(b)$. If $y'\in A$, by the induction hypothesis $c_{n-2}=g^{-1}$, and thus $c_{n-1}\neq g$. As $y>y'$, this implies that $c_{n-1}=f$. However, $f(A)\subset B$, which contradicts $y\in C$. In a similar way, we obtain a contradiction for $y'\in B$. Finally, if $y'\in C$, then $y=c_{n-1}(y')$ cannot lie in $C$, since $f(C)\subset B$ and $g^{-1}(C)\subset A$, while both $f^{-1}(C)$ and $g(C)$ lie outside $[b,c]$. Therefore the case $y\in C$ is impossible.
This completes the induction. Remark that we also obtained $\tilde{\Gamma}\cap C=\emptyset$, so the result holds for all points of $\tilde{\Gamma}$. In particular, if two paths in $\tilde{\Gamma}$ described by reduced words arrive at the same point, the last letter in those words is the same, which implies that $\tilde{\Gamma}$ is a tree. Remark also that the result implies that $c\notin\tilde{\Gamma}$ as $c\in B$ and $f^{-1}(c)=c$.
Moreover, for a vertex $x\in A$, we have that $f(x)$, $g(x)$ and $g^{-1}(x)$ also belong to $\tilde{\Gamma}$. Similarly, for $x\in B$, $g^{-1}(x)$, $f(x)$ and $f^{-1}(x)$ are in $\tilde{\Gamma}$. Therefore every vertex aside from $b$ has three different neighbours. The simple random walk on the tree $\tilde{\Gamma}$ is thus transient.
\end{proof}
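The transience in Lemma~\ref{tree-old} can also be observed numerically. The following sketch is a pure illustration, not used anywhere in the arguments: we choose a toy pair of increasing piecewise-linear homeomorphisms with $\supp(f)=(0,2)$ and $\supp(g)=(1,3)$, so that $a<b<c<d$ as in the lemma, and simulate the simple random walk on the orbit of $b=1$; exact rational arithmetic keeps the orbit points exact.
\begin{verbatim}
# Illustration of Lemma "tree-old": simple random walk on the orbit of b = 1
# for toy increasing PL homeomorphisms f, g with supp(f)=(0,2), supp(g)=(1,3).
import random
from fractions import Fraction as Fr

def f(x):      # fixes 0 and 2, f(x) > x on (0, 2)
    if Fr(0) <= x <= Fr(2, 3):
        return 2 * x
    if Fr(2, 3) < x < Fr(2):
        return x / 2 + 1
    return x

def f_inv(x):
    if Fr(0) <= x <= Fr(4, 3):
        return x / 2
    if Fr(4, 3) < x < Fr(2):
        return 2 * (x - 1)
    return x

def g(x):      # fixes 1 and 3, g(x) > x on (1, 3)
    if Fr(1) <= x <= Fr(5, 3):
        return 2 * x - 1
    if Fr(5, 3) < x < Fr(3):
        return x / 2 + Fr(3, 2)
    return x

def g_inv(x):
    if Fr(1) <= x <= Fr(7, 3):
        return (x + 1) / 2
    if Fr(7, 3) < x < Fr(3):
        return 2 * (x - Fr(3, 2))
    return x

random.seed(0)
x = b = Fr(1)
returns, steps = 0, 2000
for _ in range(steps):
    x = random.choice([f, f_inv, g, g_inv])(x)   # one step of the simple walk
    returns += (x == b)
print("visits to b in", steps, "steps:", returns)  # few visits: the walk escapes
\end{verbatim}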
By the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}), this implies transience on the Schreier graph of $s$ for any measure on $G$ such that $h_s$ and $\bar{h}_s$ are in the semigroup generated by the support of the measure. If the support of a given measure generates $G$ as a semigroup, conditions $(i)$ and $(iii)$ in Lemma~\ref{base} are then automatically satisfied. In particular, any measure $\mu$ on $G$ that generates it as a semigroup and such that there exists $s$ for which $\supp(\mu)\cap(G\setminus H_s)$ is finite has a non-trivial Poisson boundary.
In the proof of Lemma~\ref{tree-old} we obtained a description of the graph of $\mathfrak{a}_s$, which is similar to the one by Savchuk~\cite{slav10} in the case of the dyadic action:
\begin{remark}\label{tree}
Consider two homeomorphisms $f$ and $g$ of $\R$ the supports of which are $\supp(f)=(a,c)$ and $\supp(g)=(b,d)$ with $a<b<c<d$. Denote $H$ the group generated by $f$ and $g$. Then the Schreier graph of $H$ on the orbit of $b$ is described in Figure~\ref{treefig} (solid lines are labelled by $f$ and dashed lines by $g$).
\begin{figure}[!h]\caption{Schreier graph of $\mathfrak{a}_s$}\label{treefig}\centering\begin{tikzpicture}[-stealth]
\tikzset{node/.style={circle,draw,inner sep=0.7,fill=black}}
\tikzset{every loop/.style={min distance=8mm,in=55,out=125,looseness=10}}
\tikzstyle{level 1}=[level distance=2.4cm,sibling distance=3cm]
\tikzstyle{level 2}=[level distance=2.4cm,sibling distance=12mm]
\tikzstyle{level 3}=[level distance=1.5cm,sibling distance=5mm]
\tikzstyle{level 4}=[level distance=1cm,sibling distance=5mm]
\node[node,label=below:{$b$}](0){}
child[grow=left,<-]{node[node](-1){}}
child[grow=right,->]{node[node]{}
child[grow=south west,<-,dashed]{node[node]{}
child[grow=south west,<-,dashed]{node[node]{}
child[grow=south west,<-,dashed]{}
child[grow=south east,->,solid]{}
child[grow=left,<-,solid,level distance=1cm]{node[node](1){} child[grow=left,level distance=1cm]{node[node](8){}}}}
child[grow=south east,->,solid]{node[node]{}
child[grow=south west,<-,dashed]{}
child[grow=south east,->,solid]{}
child[grow=right,->,dashed,level distance=0.2cm]{node[node](9){} child[grow=right,level distance=0.2cm]{node[node](10){}}}}
child[grow=left,<-,solid,level distance=1.5cm]{node[node](2){} child[grow=left,level distance=1.5cm]{node[node](11){}}}}
child[grow=south east,->,solid]{node[node]{}
child[grow=south west,<-,dashed]{node[node]{}
child[grow=south west,<-,dashed]{}
child[grow=south east,->,solid]{}
child[grow=left,<-,solid,level distance=0.2cm]{node[node](3){} child[grow=left,level distance=0.2cm]{node[node](3b){}}}}
child[grow=south east,->,solid]{node[node]{}
child[grow=south west,<-,dashed]{}
child[grow=south east,->,solid]{}
child[grow=right,->,dashed,level distance=1cm]{node[node](4){} child[grow=right,level distance=1cm]{node[node](4b){}}}}
child[grow=right,->,dashed,level distance=1.5cm]{node[node](6){} child[grow=right,level distance=1.5cm]{node[node](6b){}}}}
child[grow=right,->,dashed,level distance=2.4cm]{node[node](5){} child[grow=right, level distance=2.4cm]{node[node](7){}}}
};
\draw (0) edge[loop above,dashed] (0);
\draw (-1) edge[loop above,dashed] (-1);
\draw (1) edge[loop above,dashed] (1);
\draw (2) edge[loop above,dashed] (2);
\draw (3) edge[loop above,dashed] (3);
\draw (3b) edge[loop above,dashed] (3b);
\draw (4) edge[loop above] (4);
\draw (4b) edge[loop above] (4b);
\draw (5) edge[loop above] (5);
\draw (6) edge[loop above] (6);
\draw (6b) edge[loop above] (6b);
\draw (7) edge[loop above] (7);
\draw (8) edge[loop above,dashed] (8);
\draw (9) edge[loop above] (9);
\draw (10) edge[loop above] (10);
\draw (11) edge[loop above,dashed] (11);
\end{tikzpicture}\end{figure}
\end{remark}
\begin{proof}
In the proof of Lemma~\ref{tree-old} we have shown that every vertex $x\in\tilde{\Gamma}$ other than $b$ has exactly three different neighbours in $\tilde{\Gamma}$. We also proved that $\tilde{\Gamma}$ is a tree; it is therefore a binary tree. Furthermore, if $x\in A$, it is equal to $g^{-1}(y)$ where $y$ is closer to $b$ than $x$ (in the graph), and if $x\in B$, $x=f(y)$ where $y$ is again closer to $b$. We think of $y$ as the parent of $x$. Then every vertex $x$ has two children: the left child $g^{-1}(x)$ and the right child $f(x)$. Furthermore, if $x$ is a left child, then $x\in A$ and $f^{-1}(x)\notin\tilde{\Gamma}$. Similarly, if $x$ is a right child, then $g(x)\notin\tilde{\Gamma}$.\end{proof}
Compare to the Schreier graph of the dyadic action as described by Savchuk~\cite[Proposition~1]{slav10} (see Figure~\ref{sav}).
\begin{figure}[!h]\centering\caption{Schreier graph of the dyadic action of $F$ for the standard generators}\label{sav}\begin{tikzpicture}[-stealth]
\tikzset{no edge/.style={edge from parent/.append style={draw=none}}}
\tikzset{node/.style={circle,draw,inner sep=0.7,fill=black}}
\tikzset{every loop/.style={min distance=8mm,in=55,out=125,looseness=10}}
\tikzstyle{level 1}=[level distance=2.4cm,sibling distance=3cm]
\tikzstyle{level 2}=[level distance=2.4cm,sibling distance=12mm]
\tikzstyle{level 3}=[level distance=1.5cm,sibling distance=5mm]
\tikzstyle{level 4}=[level distance=1cm,sibling distance=5mm]
\node[node,label=south west:{$\frac{3}{4}$}](34){}
child[grow=left,<-,level distance=2.4cm]{[no edge] node[node,label=below:{$\frac{7}{8}$}](78){}child[grow=left,<-,level distance=2.4cm]{[no edge] node[node,label=below:{$\frac{15}{16}$}](1516){}}}
child[grow=right,level distance=2.4cm,dashed]{node[node,label=below:{$\frac{1}{2}$}](12){}child[grow=right,level distance=2.4cm,dashed]{node[node,label=below:{$\frac{1}{4}$}](14){}}}
child[grow=down,->,level distance=1.5cm]{node[node,label=north west:{$\frac{5}{8}$}]{}
child[grow=south west,<-,dashed,level distance=1.2cm]{node[node,label=right:{$\frac{13}{16}$}](1316){}
child[grow=left,<-,level distance=2cm]{[no edge] node[node](1316a){}child[grow=left,<-,level distance=2cm]{[no edge] node[node](1316b){}}}
child[grow=south west,->,solid,level distance=1.2cm]{node[node,label=above:{$\frac{11}{16}$}](1116){}
child[grow=south west,<-,dashed,level distance=7.5mm]{node[node,label=north west:{$\frac{27}{32}$}](2732){}
child[grow=left,<-,level distance=1.2cm]{[no edge] node[node](2732a){}child[grow=left,<-,level distance=1.2cm]{[no edge] node[node](2732b){}}}
child[grow=south west,->,solid,level distance=7.5mm]{node[node,label=left:{$\frac{23}{32}$}](2332){}
child[grow=south west,<-,dashed,level distance=1cm]{}
child[grow=south east,->,solid,level distance=1cm]{}
child[grow=right,->,dashed,level distance=6mm]{node[node](1){} child[grow=right,level distance=6mm]{node[node](8){}}}}}
child[grow=south east,->,solid,level distance=1.5cm]{node[node,label=left:{$\frac{19}{32}$}]{}
child[grow=south west,<-,dashed,level distance=1cm]{}
child[grow=south east,->,solid,level distance=1cm]{}
child[grow=right,->,dashed,level distance=0.2cm]{node[node](9){} child[grow=right,level distance=0.2cm]{node[node](10){}}}}
child[grow=right,->,dashed,level distance=1cm]{node[node](2){} child[grow=right,level distance=1cm]{node[node](11){}}}}}
child[grow=south east,->,solid,level distance=2.4cm]{node[node,label=north east:{$\frac{9}{16}$}]{}
child[grow=south west,<-,dashed,level distance=7.5mm]{node[node,label=north west:{$\frac{25}{32}$}](2532){}
child[grow=left,<-,level distance=0.6cm]{[no edge] node[node](2532a){}child[grow=left,<-,level distance=0.6cm]{[no edge] node[node](2532b){}}}
child[grow=south west,->,solid,level distance=7.5mm]{node[node,label=left:{$\frac{21}{32}$}](2332){}
child[grow=south west,<-,dashed,level distance=1cm]{}
child[grow=south east,->,solid,level distance=1cm]{}
child[grow=right,->,dashed,level distance=0.6cm]{node[node](3){} child[grow=right,level distance=0.6cm]{node[node](3b){}}}}}
child[grow=south east,->,solid,level distance=1.5cm]{node[node,label=north east:{$\frac{17}{32}$}]{}
child[grow=south west,<-,dashed,level distance=1cm]{}
child[grow=south east,->,solid,level distance=1cm]{}
child[grow=right,->,dashed,level distance=1cm]{node[node](4){} child[grow=right,level distance=1cm]{node[node](4b){}}}}
child[grow=right,->,dashed,level distance=1.5cm]{node[node](6){} child[grow=right,level distance=1.5cm]{node[node](6b){}}}}
child[grow=right,->,dashed,level distance=2.4cm]{node[node](5){} child[grow=right, level distance=2.4cm]{node[node](7){}}}
};
\draw (1516) edge[bend right=10] (78);
\draw (78) edge[bend right=10] (34);
\draw (1516) edge[bend left=10,dashed] (78);
\draw (78) edge[bend left=10,dashed] (34);
\draw (1316b) edge[bend right=10] (1316a);
\draw (1316a) edge[bend right=10] (1316);
\draw (1316b) edge[bend left=10,dashed] (1316a);
\draw (1316a) edge[bend left=10,dashed] (1316);
\draw (2732b) edge[bend right=10] (2732a);
\draw (2732a) edge[bend right=10] (2732);
\draw (2732b) edge[bend left=10,dashed] (2732a);
\draw (2732a) edge[bend left=10,dashed] (2732);
\draw (2532b) edge[bend right=10] (2532a);
\draw (2532a) edge[bend right=10] (2532);
\draw (2532b) edge[bend left=10,dashed] (2532a);
\draw (2532a) edge[bend left=10,dashed] (2532);
\draw (12) edge[loop above] (12);
\draw (14) edge[loop above] (14);
\draw (1) edge[loop above,min distance=6mm,in=55,out=125,looseness=10] (1);
\draw (2) edge[loop above] (2);
\draw (3) edge[loop above,min distance=6mm,in=55,out=125,looseness=10] (3);
\draw (3b) edge[loop above,min distance=6mm,in=55,out=125,looseness=10] (3b);
\draw (4) edge[loop above] (4);
\draw (4b) edge[loop above] (4b);
\draw (5) edge[loop above] (5);
\draw (6) edge[loop above] (6);
\draw (6b) edge[loop above] (6b);
\draw (7) edge[loop above] (7);
\draw (8) edge[loop above,min distance=6mm,in=55,out=125,looseness=10] (8);
\draw (9) edge[loop above] (9);
\draw (10) edge[loop above] (10);
\draw (11) edge[loop above] (11);
\end{tikzpicture}\end{figure}
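For completeness, the local structure of Figure~\ref{sav} can be reproduced directly. The sketch below is our own illustration: the standard generators $x_0,x_1$ of $F$ acting on $[0,1]$ are encoded as lists of affine pieces, and a breadth-first search produces the vertices of the Schreier graph within a small distance of $3/4$, using exact dyadic arithmetic.
\begin{verbatim}
# Orbit ball of 3/4 under the standard generators x0, x1 of Thompson's group F.
from collections import deque
from fractions import Fraction as Fr

# a map is a list of pieces (lo, hi, a, b), meaning x -> a*x + b on [lo, hi]
X0 = [(Fr(0), Fr(1, 2), Fr(1, 2), Fr(0)),
      (Fr(1, 2), Fr(3, 4), Fr(1), Fr(-1, 4)),
      (Fr(3, 4), Fr(1), Fr(2), Fr(-1))]
X1 = [(Fr(0), Fr(1, 2), Fr(1), Fr(0)),
      (Fr(1, 2), Fr(3, 4), Fr(1, 2), Fr(1, 4)),
      (Fr(3, 4), Fr(7, 8), Fr(1), Fr(-1, 8)),
      (Fr(7, 8), Fr(1), Fr(2), Fr(-1))]

def apply_map(pieces, x):
    for lo, hi, a, b in pieces:
        if lo <= x <= hi:
            return a * x + b
    return x

def inverse(pieces):   # invert every affine piece
    return [(a * lo + b, a * hi + b, 1 / a, (-b) / a) for lo, hi, a, b in pieces]

gens = [X0, inverse(X0), X1, inverse(X1)]
start, depth = Fr(3, 4), 3
seen, queue = {start}, deque([(start, 0)])
while queue:
    v, d = queue.popleft()
    if d == depth:
        continue
    for m in gens:
        w = apply_map(m, v)
        if w not in seen:
            seen.add(w)
            queue.append((w, d + 1))
print(sorted(seen))   # the dyadic vertices within distance 3 of 3/4
\end{verbatim}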
\section{Schreier graphs of finitely generated subgroups of $H(\Z)$ and $\G$}\label{schreier}
We will build on the result from Remark~\ref{tree}. In a more general case, the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}) implies that the existence of a regular subtree (like $\tilde{\Gamma}$) is enough to ensure transience on the Schreier graph. To obtain such a tree, we only need the assumptions of the remark inside the closed interval $[b,c]$. We will now prove a lemma that ensures transience while allowing the graph to be more complicated outside $[b,c]$. This will help us understand subgroups of $G$ for which the supports of their generators are not necessarily single intervals.
\begin{lemma}\label{algtho}
Let $f,g$ be homeomorphisms of $\R$ and assume that there exist $b<c$ such that $g(b)=b$, $f(c)=c$, $(b,c]\subset\supp(g)$ and $[b,c)\subset\supp(f)$. Assume also that there exists $s\in\R$ with $s\leq b$ such that for some $n\in\Z$, $f^n(s)\in[b,c]$. Let $H$ be the subgroup of the group of homeomorphisms of $\R$ generated by $f$ and $g$. Then the simple random walk on the Schreier graph $\Gamma$ of $H$ on the orbit of $s$ is transient.
\end{lemma}
\begin{proof}
Without loss of generality, $f(x)>x$ and $g(x)>x$ for $x\in(b,c)$ (as well as at the endpoint that each of them does not fix). In that case clearly $n\geq0$. We will apply the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}) with $P_1$ defined on $\Gamma$ as the kernel of the simple random walk of $H$ on $\Gamma$. In other words, $P_1(x,f(x))=P_1(x,f^{-1}(x))=P_1(x,g(x))=P_1(x,g^{-1}(x))=\frac{1}{4}$ for every $x\in\Gamma$. Let us now define $P_2$. Let $a$ be the largest fixed point of $f$ that is smaller than $b$, and $d$ the smallest fixed point of $g$ that is larger than $c$. For $x\in(a,b)$ we define $n(x)=\min\{n:f^n(x)\in[b,c]\}$. Similarly, for $x\in(c,d)$, we define $m(x)=\min\{m:g^{-m}(x)\in[b,c]\}$. We define
\begin{equation*}\begin{minipage}{8.2cm}
$P_2(x,f(x))=\begin{cases}
\frac{1}{4} & x\in[b,c] \\
\frac{1}{4} & x\in(a,b)\mbox{ and }n(x)\mbox{ is odd}\\
\frac{3}{4} & x\in(a,b)\mbox{ and }n(x)\mbox{ is even}\\
0 & \mbox{otherwise.}\\
\end{cases}$\end{minipage}\begin{minipage}{8.3cm}
$P_2(x,f^{-1}(x))=\begin{cases}
\frac{1}{4} & x\in[b,c] \\
\frac{3}{4} & x\in(a,b)\mbox{ and }n(x)\mbox{ is odd}\\
\frac{1}{4} & x\in(a,b)\mbox{ and }n(x)\mbox{ is even}\\
0 & \mbox{otherwise.}\\
\end{cases}$\end{minipage}
\end{equation*}
\begin{equation*}\begin{minipage}{8.2cm}
$P_2(x,g(x))=\begin{cases}
\frac{1}{4} & x\in[b,c] \\
\frac{3}{4} & x\in(c,d)\mbox{ and }m(x)\mbox{ is odd}\\
\frac{1}{4} & x\in(c,d)\mbox{ and }m(x)\mbox{ is even}\\
0 & \mbox{otherwise.}\\
\end{cases}$\end{minipage}\begin{minipage}{8.3cm}
$P_2(x,g^{-1}(x))=\begin{cases}
\frac{1}{4} & x\in[b,c] \\
\frac{1}{4} & x\in(c,d)\mbox{ and }m(x)\mbox{ is odd}\\
\frac{3}{4} & x\in(c,d)\mbox{ and }m(x)\mbox{ is even}\\
0 & \mbox{otherwise.}\\
\end{cases}$\end{minipage}
\end{equation*}
Of course, we have $P_2(x,y)=0$ otherwise. This clearly defines a stochastic kernel (as the sum of probabilities at each $x$ is $1$), and it follows directly from the definition that it is symmetric, and hence doubly stochastic.
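As a sanity check (the computation is not needed later), consider $x\in(a,b)$ with $n(x)$ odd; the even case and the case $x\in(c,d)$ are symmetric. Then $n(f(x))=n(x)-1$ and $n(f^{-1}(x))=n(x)+1$ are even (with $f(x)\in[b,c]$ when $n(x)=1$), whence
\begin{equation*}
P_2(f(x),x)=\tfrac{1}{4}=P_2(x,f(x)),\qquad P_2(f^{-1}(x),x)=\tfrac{3}{4}=P_2(x,f^{-1}(x)),
\end{equation*}
while the total outgoing mass at $x$ is $\frac{1}{4}+\frac{3}{4}=1$.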
We now check that it is transient, similarly to Lemma~\ref{tree-old}. Indeed, take a point $x\in[f(b),c]$ (respectively $x\in[b,g^{-1}(c)]$). Consider the subgraph $\tilde{\Gamma}(x)$ of the vertices of the form $c_nc_{n-1}\dots c_1(x)$ with $c_ic_{i-1}\dots c_1(x)\in [b,c]$ for every $i$ and $c_1\in\{f^{-1},g^{-1}\}$ (respectively $c_1\in\{g,f\}$). As in Lemma~\ref{tree-old}, $\tilde{\Gamma}(x)$ is a binary tree. Moreover, the graph $\bar{\Gamma}(x)$ defined by the vertices of the form $\tilde{c}^n(y)\in\Gamma$ with $\tilde{c}\in\{g,f^{-1}\}$, $n\in\N$ and $y\in\tilde{\Gamma}(x)$ has the same structure as the graph in Lemma~\ref{tree-old}. In particular, the simple random walk on it is transient. Take any $y\in\Gamma\cap(a,d)$. Then either $f^n(y)\in[f(b),c]$ for some $n$, or $g^{-n}(y)\in[b,g^{-1}(c)]$. In either case, there is $x$ such that $y$ belongs to $\bar{\Gamma}(x)$. By the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}), we have $\sum_{n\in\N}\langle P_2^n\delta_y,\delta_y\rangle<\infty$. Therefore $P_2$ is transient. We apply Lemma~\ref{var} again with $P_1\geq\frac{1}{3}P_2$, which concludes the proof.
\end{proof}
Remark that with this result we can apply the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}) to obtain transience for a random walk induced by a measure on a subgroup of the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}), the support of which contains two such elements and generates that subgroup as a semigroup.
For the sake of completeness, we will also consider amenability of Schreier graphs of subgroups of $\G$. A locally finite graph is called amenable if for every $\varepsilon>0$ there exists a finite set of vertices $S$ such that $|\partial S|/|S|<\varepsilon$, where $\partial S$ is the set of vertices adjacent to $S$. This closely mirrors F{\o}lner's criterion for amenability of groups. In particular, a finitely generated group is amenable if and only if its Cayley graph is. In his article, Savchuk~\cite{slav10} shows that the Schreier graph of the dyadic action of Thompson's group $F$ is amenable; he also mentions that this was already noted in private communication between Monod and Glasner. The amenability of the graph comes from the fact that sets with small boundary can be found in the rays (see Figure~\ref{sav}). We will prove that for finitely generated subgroups of $\G$ we can find sets quasi-isometric to rays.
\begin{remark}Consider a point $s\in\R$ and a finitely generated subgroup $H$ of the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}). Let $a=\sup(Hs)$. Let $S$ be a finite generating set and consider the Schreier graph $\Gamma$ defined by the action of $H$ on $Hs$. Then there is $b<a$ such that the restriction of $\Gamma$ to $(b,a)$ is a union of subgraphs quasi-isometric to rays.
\end{remark}
\begin{proof}As all elements of $H$ are continuous (when seen as functions on $\R$), they all fix $a$. Therefore they admit left germs at $a$. By definition, the germs belong to the stabiliser $St_a$ of $a$ in $PSL_2(\Z)$.
By Lemma~\ref{cyclic}, $St_a$ is cyclic. Let $h\in PSL_2(\Z)$ be a generator of $St_a$. Then the left germ at $a$ of any element $s_i\in S$ is equal to $h^{n_i}$ for some $n_i\in\Z$. Up to replacing $h$ with $h^{\gcd(n_i:s_i\in S)}$, we can assume that there exists $g\in H$ such that the left germ at $a$ of $g$ is $h$. Let $(b,a)$ be a small enough left neighbourhood such that the restrictions of all elements of $S\cup\{g\}$ to $(b,a)$ are equal to their left germs at $a$. For example, one can choose $b$ to be the largest break point of an element of $S\cup\{g\}$ that is smaller than $a$.
Consider the following equivalence relation on $Hs\cap(b,a)$: $x\sim y$ if and only if there exists $n\in\Z$ such that $h^n(x)=y$. As the restriction of $h$ to $(b,a)$ is an increasing function, an equivalence class is of the form $(h^n(x))_{n\in\N}$ for some $x\in(b,a)$. We will prove that this set is quasi-isometric to a ray (when seen as a subgraph of $\Gamma$). By the definition of $b$, it is preserved by the elements of $S$. Furthermore, the graph distance $d$ on it is bi-Lipschitz to the standard distance $d'$ on $\N$. Indeed, on the one hand, we have $d\geq\frac{1}{\max(|n_i|:s_i\in S)}d'$. On the other hand, $d\leq|g|d'$, where $|g|$ is the word length of $g$. This proves the result.
\end{proof}
This implies:
\begin{remark}\label{grapham}Consider a point $s\in\R$ and a finitely generated subgroup $H<\G$. The Schreier graph defined by the action of $H$ on $Hs$ is amenable.
\end{remark}
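Indeed, the F{\o}lner sets can be taken inside the ray-like pieces. With the notation of the preceding proof, fix an equivalence class $(h^n(x))_{n\in\N}$ and let $S_m=\{x,h(x),\dots,h^{m-1}(x)\}$. Every generator $s_i$ acts on the class as the power $h^{n_i}$, so all neighbours of points of $S_m$, except for boundedly many near the two extremities, lie in $S_m$, whence
\begin{equation*}
\frac{|\partial S_m|}{|S_m|}\leq\frac{C}{m}\xrightarrow[m\to\infty]{}0,
\end{equation*}
where $C$ depends only on $\max_i|n_i|$ and the number of generators.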
\section{Convergence conditions based on expected number of break points}\label{anothersuff}
The aim of this section is to describe sufficient conditions for convergence similar to Theorem~\ref{base} that do not assume leaving $C_\mu$ (which is potentially infinite). The ideas presented are similar to the arguments used in studies of measures with finite first moment on wreath products (see Kaimanovich~\cite[Theorem~3.3]{Kaimanovich1991}, Erschler~\cite[Lemma~1.1]{erschler2011}). Consider the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}) and a measure $\mu$ on it. We think of the measure as one that could be positive on every element of $\G$. Fix $s\in P_\Z\cup\Q$ and denote, for $g\in\G$, $A_g=\supp(C_g)$ (for $s\in\Q$, see the discussion after Definition~\ref{confdef} and after the proof of Lemma~\ref{log}). Take $x\in Gs$ and consider a random walk $(g_n)_{n\in\N}$ with increments $h_n$, that is, $g_{n+1}=h_ng_n$. Then by (\ref{der}),
$$C_{g_n}(x)\neq C_{g_{n+1}}(x)\iff g_n(x)\in A_{h_n}.$$
In other words, $C_{g_n}(x)$ converges if and only if $g_n(x)\in A_{h_n}$ only for a finite number of values of $n$. For a fixed $n$, the probability that $g_n(x)$ belongs to $A_{h_n}$ is
$$\langle p^{*n}\delta_x,\sum_{h\in\G}\mu(h)\chi_{A_h}\rangle$$
where $p$ is the induced kernel on $Gs$. Taking the sum over $n$ we get:
\begin{lemma}\label{conv}
Fix $\mathfrak{o}\in Gs$. For a random walk $g_n$ on $\G$ with law $\mu$, the value $C_{g_n}(\mathfrak{o})$ converges with probability $1$ if and only if
$$\sum_{n\in\N}\langle p^{*n}\delta_\mathfrak{o},\sum_{h\in\G}\mu(h)\chi_{A_h}\rangle<\infty$$
where $p$ is the induced kernel on $Gs$.
\end{lemma}
We define $f_\mu$ as
\begin{equation}\label{fmu}
f_\mu=\sum_{h\in\G}\mu(h)\chi_{\supp(C_h)}
\end{equation}
and show that it suffices for $f_\mu$ to be in $l^1$ and for the induced walk to be transient:
\begin{lemma}\label{ltwo}
Let $s\in P_\Z\cup\Q$ be fixed. Take a measure $\mu$ on $\G$ such that the induced random walk on the Schreier graph on $Gs$ is transient and $f_\mu\in l^1(Gs)$ (as defined in (\ref{fmu})). Then for a random walk $g_n$ on $\G$ with law $\mu$, the associated configuration $C_{g_n}$ converges pointwise with probability $1$.
\end{lemma}
Remark in particular that $\mathbb{E}[Br]<\infty$ implies $f_\mu\in l^1(Gs)$, where $Br(g)$ is the number of break points of $g$. Indeed, for any fixed $s$, $\|f_\mu\|_1$ is the expected number of break points inside the orbit $Gs$, which is smaller than the total expected number of break points. This is, of course, also true for measures on $H(\Z)$ as $H(\Z)\leq\G$.
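In more detail (interchanging the two summations is legitimate as all terms are non-negative, and every point of $\supp(C_h)$ is a break point of $h$):
\begin{equation*}
\|f_\mu\|_{l^1(Gs)}=\sum_{x\in Gs}\sum_{h\in\G}\mu(h)\chi_{\supp(C_h)}(x)=\sum_{h\in\G}\mu(h)\bigl|\supp(C_h)\cap Gs\bigr|\leq\sum_{h\in\G}\mu(h)Br(h)=\mathbb{E}[Br].
\end{equation*}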
\begin{proof}
Fix a point $\mathfrak{o}$ in the Schreier graph on $Gs$. We denote by $p$ the induced kernel on $Gs$ and write $f=f_\mu$. We have
\begin{equation}\label{ltwosum}
\sum_{n\in\N}\langle p^{*n}\delta_\mathfrak{o},f\rangle=\sum_{n\in\N}\sum_{x\in Gs}p^{*n}(\mathfrak{o},x)f(x)=\sum_{x\in Gs}f(x)\sum_{n\in\N}p^{*n}(\mathfrak{o},x)
\end{equation}
where we will have the right to interchange the order of summation if we prove that the right-hand side is finite. We write $p^{*n}(\mathfrak{o},x)=\check{p}^{*n}(x,\mathfrak{o})$ where $\check{p}$ is the inverse kernel of $p$. Let $\check{P}(x,y)$ be the probability that a random walk (with law $\check{p}$) starting at $x$ visits $y$ at least once. Then $\sum_{n\in\N}\check{p}^{*n}(x,y)=\check{P}(x,y)\sum_{n\in\N}\check{p}^{*n}(y,y)$. Indeed, $\sum_{n\in\N}\check{p}^{*n}(x,y)$ is the expected number of visits to $y$ of a walk starting at $x$, and a random walk that starts from $x$ and visits $y$ exactly $k$ times is the same as the concatenation of a walk that goes from $x$ to $y$ and a walk that starts from $y$ and visits it $k$ times. Thus
\begin{equation}\label{ltwoinv}
\sum_{n\in\N}p^{*n}(\mathfrak{o},x)=\sum_{n\in\N}\check{p}^{*n}(x,\mathfrak{o})=\check{P}(x,\mathfrak{o})\sum_{n\in\N}\check{p}^{*n}(\mathfrak{o},\mathfrak{o})\leq\sum_{n\in\N}\check{p}^{*n}(\mathfrak{o},\mathfrak{o}).
\end{equation}
Then if we denote $c(p,\mathfrak{o})=\sum_{n\in\N}p^{*n}(\mathfrak{o},\mathfrak{o})$,
\begin{equation}\label{ltwofin}
\sum_{x\in Gs}f(x)\sum_{n\in\N}p^{*n}(\mathfrak{o},x)\leq c(p,\mathfrak{o})\|f\|_1<\infty.
\end{equation}
Applying Lemma~\ref{conv} we obtain the result.
\end{proof}
Combining this result with the result of Lemma~\ref{algtho} which gives transience of the induced random walk on $Gs$ under certain conditions, we obtain:
\begin{lemma}\label{algone}
Consider the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}). Let $H$ be a subgroup of $\G$. Assume that there exist $b<c$ such that $g(b)=b$, $f(c)=c$, $(b,c]\subset\supp(g)$ and $[b,c)\subset\supp(f)$ for some $f,g\in H$ (see Figure~\ref{alto} on page~\pageref{alto}). Assume also that there exist $s\in P_\Z\cup\Q$ and $\varepsilon_s>0$ with $s\leq b$ such that for some $n\in\Z$, $f^n(s)\in[b,c]$, and also $g(s-\varepsilon)=s-\varepsilon$ and $g(s+\varepsilon)\neq s+\varepsilon$ for every $0<\varepsilon\leq\varepsilon_s$. Then for any $\mu$ on $H$ with finite first break moment ($\mathbb{E}[Br]<\infty$) such that $\supp(\mu)$ generates $H$ as a semigroup, the Poisson boundary of $\mu$ on $H$ is non-trivial.
\end{lemma}
\begin{proof}
By Lemma~\ref{algtho}, the simple random walk on the Schreier graph of $s$ by $\langle f,g\rangle$ is transient. By the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}), as the support of $\mu$ generates $H$ as a semigroup, the random walk induced by $\mu$ on the Schreier graph of $s$ is then transient. Applying Lemma~\ref{ltwo}, the associated configurations converge as $\mu$ has finite first break moment. However, by the hypothesis on $s$, $g(s)=s$ and $C_g(s)\neq 0$. Therefore, as $g\in H$, the limit configuration cannot be singular. Thus the Poisson boundary of $\mu$ on $H$ is non-trivial.
\end{proof}
For finitely generated subgroups of $\G$, from Lemma~\ref{ltwo} we have:
\begin{remark}\label{brfin}
The number of break points is subadditive with respect to multiplication, that is, $Br(gh)\leq Br(g)+Br(h)$. In particular, if a measure $\mu$ has finite first moment, then it has finite first break moment.
\end{remark}
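Explicitly, a break point of $gh$ is either a break point of $h$ or the preimage under $h$ of a break point of $g$, so that $Br(gh)\leq Br(g)+Br(h)$. Consequently, if the support of $\mu$ generates a subgroup with finite generating set $S$, then for every $g$ in that subgroup
\begin{equation*}
Br(g)\leq|g|\max_{t\in S}Br(t),\qquad\text{hence}\qquad\mathbb{E}[Br]\leq\Bigl(\max_{t\in S}Br(t)\Bigr)\int|g|\,d\mu(g)<\infty
\end{equation*}
whenever $\mu$ has finite first moment.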
\begin{cor}\label{firstfin}
Consider a measure $\mu$ on $\G$, the support of which generates a finitely generated subgroup, and such that $\mu$ has a finite first moment on that subgroup. Assume that there exists $s\in P_\Z$ such that the random walk on the Schreier graph on $Gs$ of this subgroup is transient. Then, for almost all random walks on $\G$ with law $\mu$, the associated configuration converges pointwise.
\end{cor}
\begin{proof}
Follows from Remark~\ref{brfin} and Lemma~\ref{ltwo}.\end{proof}
In such cases it is enough to prove that the associated limit configuration is not always the same, which can require case-specific arguments. We already have it in the case of Thompson's group:
\begin{proof}[Proof of Corollary~\ref{finfirstthomp}]Fix $s\in P_\Z$ and consider the action $\mathfrak{a}_s$ of Thompson's group $F$ on $\R$ as defined in Section~\ref{thompsect}. Take a measure $\mu$ on $F$ with finite first moment, the support of which generates $F$ as a semigroup. From Lemma~\ref{tree-old} and the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}), the walk $\mu$ induces on the orbit of $s$ is transient. Applying Corollary~\ref{firstfin}, this implies that the associated configuration stabilises, and by Lemma~\ref{nostable}, it cannot always converge towards the same point. Therefore the Poisson boundary of $\mu$ is not trivial.\end{proof}
We remark that arguments similar to the ones in this section can also be made for the action of Thompson's group considered in Kaimanovich's article~\cite{kaimanovichthompson}.
In a more general case, we can use the stronger form of the comparison Lemma~\ref{var}, due to Varopoulos, in order to prove that if the transient walk escapes quickly enough, we also have the result for $f_\mu\in l^2(Gs)$ (and not necessarily in $l^1$):
\begin{lemma}\label{ltwoforreal}
Fix $s\in P_\Z$. Consider a measure $\mu_0$ such that $\tilde{f}=f_{\mu_0}\in l^2(Gs)$. Consider $\lambda$ on $H_s$ such that $\sum_{n\in\N}\langle\lambda^{*n}\tilde{f},\tilde{f}\rangle<\infty$. Let $\mu=\varepsilon\lambda+(1-\varepsilon)\mu_0$ with $0<\varepsilon<1$. Then for almost all random walks on $G$ with law $\mu$, the associated configuration converges pointwise.
\end{lemma}
\begin{proof}
Clearly, $f_\mu=(1-\varepsilon)\tilde{f}$. Then by the comparison Lemma~\ref{var} we get:
$$\sum_{n\in\N}\langle\mu^{*n}f_\mu,f_\mu\rangle<\frac{1}{\varepsilon(1-\varepsilon)^2}\sum_{n\in\N}\langle\lambda^{*n}\tilde{f},\tilde{f}\rangle<\infty.$$
Denote $f=f_\mu$. Consider $x\in P_\Z$ such that it is possible for the value of the associated configuration at $x$ to change. In other words, there is $n_0\in\N$ and $y\in P_\Z$ such that $x\in\supp(\mu^{*n_0})y$ and $f(y)>0$. Denote by $p$ the probability to reach $x$ from $y$. Then $\sum_{n\in\N}\langle\mu^{*n}\delta_y,f\rangle>p\sum_{n\in\N}\langle\mu^{*n+n_0}\delta_x,f\rangle$. In particular, if the first is finite, so is the second. However, we clearly have $\sum_{n\in\N}\langle\mu^{*n}\delta_y,f\rangle<\frac{1}{f(y)}\sum_{n\in\N}\langle\mu^{*n}f,f\rangle$ which concludes the proof.
\end{proof}
In particular, if for some $s$ the associated limit configurations cannot all be stabilised by all the elements of $\langle\supp(\mu)\rangle$, we obtain a non-trivial boundary.
\begin{cor}
Fix $s\in P_\Z$. Consider a measure $\mu_0$ such that $h_s\in\supp(\mu_0)^{*n_0}$ for some $n_0$ and $\tilde{f}=f_{\mu_0}\in l^2(Gs)$. Consider $\lambda$ on $H_s$ such that $\sum_{n\in\N}\langle\lambda^{*n}\tilde{f},\tilde{f}\rangle<\infty$. Let $\mu=\varepsilon\lambda+(1-\varepsilon)\mu_0$ with $0<\varepsilon<1$. Then the Poisson boundary of $\mu$ on the subgroup generated by its support is non-trivial.
\end{cor}
\begin{proof}
Follows from Lemma~\ref{ltwoforreal} and Lemma~\ref{nostable}.
\end{proof}
Remark that there always exists a symmetric measure $\lambda$ satisfying those assumptions as $\A\subset H_s$ ($\A$ was defined in (\ref{agrp})).
\begin{figure}
\centering
\begin{minipage}{8cm}\centering\caption{Graphs of $f$ and $g$ and positions of $b$ and $c$}\label{alto}
\begin{tikzpicture}
\begin{axis}[xmin=-4,xmax=4,ymin=-4,ymax=4,axis lines = middle, legend pos = south west,xtick={-10},ytick={17}]
\addplot[domain=-3.8:3.8,color=black]{x};
\addlegendentry{$Id$}
\addplot[color=blue,samples=100,domain=0:2.5,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)};
\addlegendentry{$f$}
\addplot[samples=100,domain=0:2.5,restrict y to domain=-4:4,densely dotted,thick]{(4*x-1)/x};
\addlegendentry{$g$}
\node[label={-1:{$b$}},circle,fill,inner sep=1pt] at (axis cs:0.268,0.268) {};
\node[label={110:{$c$}},circle,fill,inner sep=1pt] at (axis cs:1.732,1.732) {};
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}{8cm}\centering\caption{Graphs of $f$ and $g$ in $(a,b')$}\label{alte}
\begin{tikzpicture}
\begin{axis}[xmin=-4,xmax=4,ymin=-4,ymax=4,axis lines = middle, legend pos = south west,xtick={-10},ytick={17}]
\addplot[domain=-3.8:3.8,color=black]{x};
\addlegendentry{$Id$}
\addplot[samples=100,domain=0.4365:1.732,restrict y to domain=-4:4,densely dotted,thick]{(2*x+3)/(x+2)};
\addlegendentry{$f$}
\addplot[color=blue,samples=100,domain=0.382:2.618,restrict y to domain=-4:4,dashed,thick]{(3*x-1)/x};
\addlegendentry{$g$}
\addplot[samples=100,domain=0.382:0.4365,restrict y to domain=-4:4,densely dotted,thick]{(8*x-3)/(3*x-1)};
\node[label={-2:{$a$}},circle,fill,inner sep=1pt] at (axis cs:0.382,0.382) {};
\node[label={-30:{$b$}},circle,fill,inner sep=1pt] at (axis cs:1.732,1.732) {};
\node[label={-30:{$b'$}},circle,fill,inner sep=1pt] at (axis cs:2.618,2.618) {};
\end{axis}
\end{tikzpicture}
\end{minipage}
\end{figure}
\section{An algebraic lemma and proof of the main result}\label{algsec}
Consider the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}). Take a subgroup $H$ of $\G$. In Lemma~\ref{algone} we proved that if there are $f,g\in H$ and $b,c,s\in\R$ that satisfy certain assumptions, for every measure $\mu$ on $H$ the support of which generates $H$ as a semigroup and that has finite first break moment $\mathbb{E}[Br]$, $(H,\mu)$ has non-trivial Poisson boundary. To prove the main result (Theorem~\ref{main}) we will study subgroups that do not contain elements satisfying those assumptions.
\begin{lemma}\label{algthree}
Let $H=\langle h_1,\dots,h_k\rangle$ be a finitely generated subgroup of $\G$. Then either $H$ is solvable, or the assumptions of Lemma~\ref{algone} are satisfied for some $f,g\in H$, $b,c,s\in\R$.
\end{lemma}
We recall that for $f\in\G$ and $a,b\in\R$ such that $f(a)=a$ and $f(b)=b$, we defined (see Definition~\ref{restr}) $f\restriction_{(a,b)}\in\G$ by $f\restriction_{(a,b)}(x)=f(x)$ for $x\in(a,b)$ and $f\restriction_{(a,b)}(x)=x$ otherwise.
\begin{proof}
We first check that with the appropriate assumptions on $(f,g,b,c)$, $s$ always exists:
\begin{lemma}\label{algtwo}
Let $H$ be a subgroup of $\G$. Assume that there exist $b<c$ such that $g(b)=b$, $f(c)=c$, $(b,c]\subset\supp(g)$ and $[b,c)\subset\supp(f)$ for some $f,g\in H$. Then there exist $f',g',b',c'$ and $s$ that satisfy the assumptions of Lemma~\ref{algone}.
\end{lemma}
The assumptions of the lemma are illustrated in Figure~\ref{alto}. Recall that we defined $\supp(f)=\{x\in\R:f(x)\neq x\}$.
\begin{proof}
Without loss of generality assume that $b$ is minimal among all $b$ for which there exists $c$ such that either $(f,g,b,c)$ or $(g,f,b,c)$ satisfy the assumptions of this lemma. We can assume without loss of generality that $f(x)>x$ and $g(x)>x$ for $x\in(b,c)$ (otherwise, we can replace either or both with their inverse). Let $a$ be the largest fixed point of $f$ that is smaller than $b$.
By minimality of $b$ we clearly have that $g(a)=a$. The stabiliser $St_a$ of $a$ in $\Pl$ is cyclic by Lemma~\ref{cyclic}. Therefore there exist $k$ and $l$ such that $f^k(x)=g^l(x)$ for $x\in(a,a+\varepsilon)$ for some $\varepsilon>0$. Take $(f',g')=(f,f^{-k}g^l)$. By our assumption, $f^k$ and $g^l$ are strictly greater than the identity function in $(b,c)$. As they are continuous and each fixes one end of the interval, by the intermediate value theorem there exists $b'\in(b,c)$ such that $f^k(b')=g^l(b')$. Then $(f',g')$ and $(b',c)$ satisfy the assumptions of this lemma. Furthermore, $f^{-k}g^l$ is the identity in a small enough right neighbourhood of $a$, which implies that there exists an element $s$ that satisfies the assumptions of Lemma~\ref{algone}.
\end{proof}
We now assume that the assumptions of Lemma~\ref{algone}, and therefore also the assumptions of Lemma~\ref{algtwo}, are not satisfied by any couple of elements in $H$. We will prove that $H$ is solvable. For any element $g\in\G$, its support $\supp(g)$ is a finite union of (not necessarily finite) open intervals. We denote the intervals in the support of $h_i$ by $I^i_j=(a^i_j,b^i_j)$ for $j<r_i$, where $r_i$ is the number of intervals in the support of $h_i$. In terms of those intervals, the negation of the assumptions of Lemma~\ref{algtwo} means that for every $(i,j)$ and $(i',j')$, either $I^i_j\cap I^{i'}_{j'}=\emptyset$, or $I^i_j\subset I^{i'}_{j'}$, or $I^{i'}_{j'}\subset I^i_j$. We further check that if the inclusion is strict, it must be strict at both extremities. Specifically:
\begin{lemma}\label{algbonus}
Let $H$ be a subgroup of $\G$. Assume that there exist $a<b<b'\in\R\cup\{-\infty\}$ such that $f(a)=g(a)=a$, $f(b)=b$, $g(b')=b'$, $(a,b)\subset\supp(f)$ and $(a,b')\subset\supp(g)$ for some $f,g\in H$ (see Figure~\ref{alte}). Then the assumptions of Lemma~\ref{algtwo} are satisfied by some elements of the group.
\end{lemma}
\begin{proof}
In a small enough right neighbourhood of $a$ there are no break points of $f$ and $g$. Let $c$ be a point in that neighbourhood. Clearly, $a<c<b$. Without loss of generality, we can assume that $f(x)>x$ for $x\in(a,b)$, and idem for $g$ (otherwise, we can replace them with their inverse). For some $k\in\N$, $f^{-k}(b)<c$. Denote $g'=f^{-k}gf^k$. Consider the elements $g'$ and $g^{-1}g'$. As the stabiliser of $a$ in $\Pl$ is cyclic (by Lemma~\ref{cyclic}), $g^{-1}g'(x)=x$ for $x\in(a,f^{-k}(c))$. However, $g^{-1}g'(x)=g^{-1}(x)$ for $x\in(f^{-k}(b),b)$, and in particular $g^{-1}g'(x)\neq x$ in that interval. Let $c'$ be the largest fixed point of $g^{-1}g'$ that is smaller than $f^{-k}(b)$. Consider now $g'$. It is the conjugate of $g$, therefore it is different from the identity in $(a,f^{-k}(b))$ and fixes $f^{-k}(b)<c$. Clearly, $c'<f^{-k}(b)$. Then $g',g^{-1}g'$ and $c',f^{-k}(b)$ satisfy the assumptions of Lemma~\ref{algtwo}. Observe that the same arguments can be used for two elements with supports $(a,b)$ and $(a',b)$ with $a\neq a'$.
\end{proof}
Consider the natural extension of the action of $\G$ on $\R\cup\{+\infty,-\infty\}$, which is that every element of $\G$ fixes both $-\infty$ and $+\infty$. We make the convention that $+\infty$ is considered to be a break point of $f\in\G$ if and only if for every $M\in\R$ there is $x>M$ such that $f(x)\neq x$ (and idem for $-\infty$). In other words, if the support of an element is equal to an interval $(a,b)$, $a$ and $b$ are break points even if one or both are infinite. We now prove that $H$ is solvable by induction on the number of different orbits of $H$ on $\R\cup\{\pm\infty\}$ that contain non-trivial break points of elements of $H$. Remark that the number of orbits of $H$ that contain non-trivial break points of elements of $H$ is the same as the number of orbits that contain non-trivial break points of $h_1,\dots,h_k$. In particular, it is finite.
Consider all maximal (for inclusion) intervals $I^i_j$ over all couples $(i,j)$. We denote them $I_1,I_2,\dots,I_n$. By our hypothesis, they do not intersect each other. We denote $h_i^j=h_i\restriction_{I_j}$ and $H_j=\langle h_1^j,h_2^j,\dots,h_k^j\rangle$ for every $j\leq n$. As the intervals $I_j$ do not intersect each other, $H$ is a subgroup of the Cartesian product of the $H_j$:
\begin{equation}\label{maxint}H\leq\prod_{j=1}^n H_j.\end{equation}
Moreover, for every $j$, the number of orbits with non-trivial break points of $H_j$ is not greater than that of $H$. Indeed, the orbits with break points of $H_j$ inside $I_j$ coincide with those of $H$, and it has only two other orbits containing break points, which are the singletons containing the end points of $I_j$. We just need to prove that $H$ has at least two other orbits containing non-trivial break points. If $I_j=I^{i'}_{j'}$, then the supremum and infimum of the support of $h_{i'}$ are break points, and by definition of $I_j$ their orbits by $H$ do not intersect the interior of $I_j$. The convention we chose ensures that our arguments are also correct if one or both of the end points are infinite. It is thus sufficient to prove the induction step for $H_j$ for every $j$. Therefore without loss of generality we can assume $n=1$. Remark that in this case the end points of $I_1$ are both non-trivial break points, and both clearly have trivial orbits.
We denote $(a,b)=I=I_1$. Consider the germs $g_i\in St_a$ of $h_i$ at a right neighbourhood of $a$. As $St_a$ is cyclic, there exist $m_i\in\Z$ such that $\prod_i g_i^{m_i}$ generates a subgroup of $St_a$ that contains $g_i$ for all $i$. Specifically, the image in $\Z$ of this product is the greatest common divisor of the images in $\Z$ of $g_i$. We denote $h=\prod_i h_i^{m_i}$ and let, for every $i$, $n_i$ satisfy $(\prod_i g_i^{m_i})^{n_i}=g_i$. For every $i\leq k$, we consider $h'_i=h_ih^{-n_i}$.
Clearly, $H=\langle h,h'_1,h'_2,\dots,h'_k\rangle$, and there exists $\varepsilon$ such that for every $i$, $\supp(h'_i)\subset(a+\varepsilon,b-\varepsilon)$ (as the assumptions of Lemma~\ref{algbonus} are not satisfied by $h,h'_i$). Consider the set of elements $h^{-l}h'_ih^l$ for $i\leq k$, $l\in\Z$, and their supports. They are all elements of $H$. Furthermore, there is a power $n$ such that $h^n(a+\varepsilon)>b-\varepsilon$. Therefore, for every point $x\in(a,b)$, the number of elements of that set that contain $x$ in their support is finite. Considering the intervals that define those supports, we can therefore choose a maximal one (for the inclusion). Let $x_0$ be the lower bound of a maximal interval. By our assumption, $x_0$ is then not contained in the support of any of those elements, and neither is $x_l=h^l(x_0)$ for $l\in\Z$. We denote ${h'}_i^j=h^jh'_ih^{-j}\restriction_{(x_0,x_1)}$. For $i\leq k$, let $J_i$ be the set of $j\in\Z$ such that ${h'}_i^j\neq Id$. Then $H$ is a subgroup of
\begin{equation}\label{wreath}
\left\langle h,\bigcup_{i\leq k}\bigcup_{j\in J_i}{h'}_i^j\right\rangle\cong\langle h\rangle\wr\left\langle\bigcup_{i\leq k}\bigcup_{j\in J_i}{h'}_i^j\right\rangle.
\end{equation}
For a group $F$, $\Z\wr F$ denotes the wreath product of $\Z$ with $F$. It is a group whose elements are pairs $(n,f)$ with $n\in\Z$ and $f\in\prod_{k\in\Z}F$ of finite support. Writing the lamp coordinate additively, the group multiplication is defined as $(n,f)(n',f')=(n+n',T^{n'}f+f')$, where $T^{n'}f(k)=f(k-n')$. It is a well-known property of wreath products that if $F$ is solvable, so is $\Z\wr F$.
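For concreteness, the following minimal sketch (our illustration; we take $F=\Z$, written additively, and store the finitely supported $f$ as a dictionary) implements this multiplication law and shows that the two kinds of generators do not commute.
\begin{verbatim}
# Wreath product Z wr F with F = Z written additively; lamps stored sparsely.
def shift(f, n):            # (T^n f)(k) = f(k - n)
    return {k + n: v for k, v in f.items()}

def add(f, g):              # pointwise sum, dropping zero lamps
    h = dict(f)
    for k, v in g.items():
        h[k] = h.get(k, 0) + v
        if h[k] == 0:
            del h[k]
    return h

def mul(x, y):              # (n, f)(n', f') = (n + n', T^{n'} f + f')
    (n, f), (m, g) = x, y
    return (n + m, add(shift(f, m), g))

a = (1, {})                 # generator of the active copy of Z
t = (0, {0: 1})             # a single lamp at position 0
print(mul(a, t))            # (1, {0: 1})
print(mul(t, a))            # (1, {1: 1})  -- the generators do not commute
\end{verbatim}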
Denote $H'=\langle\bigcup_{i\leq k}\bigcup_{j\in J_i}{h'}_i^j\rangle$. The non-trivial break points and supports of ${h'}_i^j$ are contained in $(x_0,x_1)$, and they fix that interval. Therefore the orbits that contain those break points are the same in relation to $\langle h,H'\rangle$ and to $H'$. On the other hand, $\langle h,H'\rangle$ and $H$ act the same way locally, which means that they have the same orbits. Those two facts imply that $H'$ has at least two fewer orbits containing non-trivial break points than $H$ (as it does not have non-trivial break points in the orbits of the end points of $I$). That group also does not contain elements that satisfy the assumptions of Lemma~\ref{algtwo}. Indeed, assume that there are two words on $\bigcup_{i\leq k}\bigcup_{j\in J_i}{h'}_i^j$ and $a,b\in\R$ that satisfy those assumptions. Their supports are also contained in $(x_0,x_1)$, therefore so are $a$ and $b$. Then the same words in $\bigcup_{i\leq k}\bigcup_{j\in J_i}h'_i$ are equal inside $(a,b)$, and they satisfy the conditions of Lemma~\ref{algtwo}. However, the $h'_i$ are elements of $H$, and this contradicts our assumptions.
This provides the induction step. The induction basis is the trivial group, which is solvable. Therefore $H$ is solvable.
\end{proof}
We can now prove the main result, that is that for any subgroup $H$ of $H(\Z)$ which is not locally solvable and any measure $\mu$ on $H$ such that the support of $\mu$ generates $H$ as a semigroup and has finite first break moment $\mathbb{E}[Br]$, the Poisson boundary of $(H,\mu)$ is non-trivial.
\begin{proof}[Proof of Theorem~\ref{main}]Fix $H$ and take $\mu$ on $H$ with finite first break moment and the support of which generates $H$ as a semigroup. We distinguish two cases.
Assume first that there exist $f,g\in H$ and $b,c,s\in\R$ that satisfy the assumptions of Lemma~\ref{algone}. By the result of the lemma, the Poisson boundary of $(H,\mu)$ is non-trivial.
We now assume that no such $f,g,b,c,s$ exist and will prove that $H$ is locally solvable. Any finitely generated subgroup $\widetilde{H}$ of $H$ clearly also does not contain such $f$ and $g$ for any $b,c,s\in\R$. Furthermore, $H(\Z)$ is a subgroup of the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}), and thus $\widetilde{H}$ is a subgroup of $\G$. Therefore by Lemma~\ref{algthree} we obtain that $\widetilde{H}$ is solvable, which proves that $H$ is locally solvable.\end{proof}
\section{A remark on the case of finite $1-\varepsilon$ moment}\label{last}
Remark that in the proof of Lemma~\ref{algthree}, for a finitely generated subgroup that does not satisfy the assumptions of Lemma~\ref{algone}, we obtained more than solvability. If the subgroup is also non-abelian, we have proven that it contains a wreath product of $\Z$ with another subgroup (see (\ref{wreath})). In particular, it is not virtually nilpotent, which implies (as it is finitely generated) that there exists a measure on it with non-trivial boundary by a recent result of Frisch-Hartman-Tamuz-Vahidi-Ferdowski~\cite{choquet-deny}. Furthermore, it is known that on the wreath product $\Z\wr\Z$ it is possible to obtain a measure with finite $1-\varepsilon$ moment and non-trivial Poisson boundary for every $\varepsilon>0$ (see Lemma~\ref{wreathnontriv} and the discussion before and after it). The same arguments can be used in $\G$:
\begin{lemma}\label{mineps}
For every finitely generated subgroup $H=\langle h_1,\dots,h_k\rangle$ of $\G$ that is not abelian and every $\varepsilon>0$ there exists a symmetric non-degenerate measure $\mu$ on $H$ with non-trivial Poisson boundary such that $\int_H |g|^{1-\varepsilon}d\mu(g)<\infty$, where $|g|$ is the word length of $g$.
\end{lemma}
We recall that every measure on an abelian group has trivial Poisson boundary (see Blackwell~\cite{blackwell1955}, Choquet-Deny~\cite{ChoquetDeny}).
\begin{proof}
As there is always a non-degenerate symmetric measure with finite first moment, we can assume that the assumptions of Lemma~\ref{algone} are not satisfied in $H$. We will use the results on the structure of $H$ seen in the proof of Lemma~\ref{algthree}. It is shown (see (\ref{maxint})) that $H$ is a subgroup of a Cartesian product $\prod_{j=1}^n H_j$. Specifically, there exist disjoint intervals $I_1,I_2,\dots,I_n$ such that the supports of elements of $H$ are included in the union of those intervals. Taking $h_i^j=h_i\restriction_{I_j}$ to be the restriction to one of those intervals (as defined in Definition~\ref{restr}), the group $H_j$ is then equal to $\langle h_1^j,h_2^j,\dots,h_k^j\rangle$. For any $j$, consider the composition of the inclusion of $H$ in $\prod_{j=1}^n H_j$ and the projection of $\prod_{j=1}^n H_j$ onto $H_j$. Then $H_j$ is the quotient of $H$ by the kernel of this composition, which is equal to $\{h\in H:h\restriction_{I_j}=Id\}$.
We can therefore separately define measures on $H_j$ and on the kernel, and the Poisson boundary of their sum has the Poisson boundary of the measure on $H_j$ as a quotient. In particular, it suffices to show that for some $j$ we can construct a measure on $H_j$ with non-trivial boundary satisfying the conditions of the lemma. As $H$ is non-abelian, so is at least one $H_j$. Without loss of generality, let that be $H_1$. In the proof of Lemma~\ref{algthree} we have shown (see (\ref{wreath})) that in $H_1$ there are elements $h^1$ and ${h^1}'_j$ for $j=1,2,\dots,k$ such that $H_1=\langle{h^1},{h^1}'_1,{h^1}'_2,\dots,{h^1}'_k\rangle$ is isomorphic to a subgroup of the wreath product of $\langle h^1\rangle$ with a group $H'$ defined by the rest of the elements. Remark that $H_1$ not being abelian implies that $H'$ is not trivial. Furthermore, considering the group morphism of $H_1$ into $\Z\wr H'$, we see that the image of $h^1$ is the generator $(1,0)$ of the active group, while for every $j$, the image of ${h^1}'_j$ is of the form $(0,f_j)$ where $f_j$ has finite support. The following result is essentially due to Kaimanovich and Vershik~\cite[Proposition~6.1]{kaimpoisson},\cite[Theorem~1.3]{Kai83}, and has been studied in a more general context by Bartholdi and Erschler~\cite{Bartholdi2017}:
\begin{lemma}\label{wreathnontriv}
Consider the wreath product $\Z\wr H'$ where $H'$ is not trivial, and let $\mu$ be a measure on it such that the projection of $\mu$ on $\Z$ gives a transient walk and the projection of $\mu$ on ${H'}^\Z$ is finitary and non-trivial. Then the Poisson boundary of $\mu$ is not trivial.
\end{lemma}
In the article of Kaimanovich and Vershik, it is assumed that the measure is finitary, and the acting group is $\Z^k$ for $k\geq3$, which assures transience. The proof remains unchanged with our assumptions. Remark that those results have also been generalised in the case of a measure with finite first moment that is transient on the active group, see Kaimanovich~\cite[Theorem~3.3]{Kaimanovich1991},\cite[Theorem~3.6.6]{Kaimanovich2007PoissonBO}, Erschler~\cite[Lemma~1.1]{erschler2011}.
\begin{proof}
Take a random walk $(g_n)_{n\in\N}$ on $\Z\wr H'$ with law $\mu$. Let $p$ be the projection of the wreath product onto the factor isomorphic to $H'$ that has index $0$ in ${H'}^\Z$. By the assumptions of the lemma, $p(g_n)$ stabilises, and its limit is not almost surely the same. This provides a non-trivial quotient of the Poisson boundary of $\mu$.
\end{proof}
All that is left is to construct a measure that satisfies the assumptions of Lemma~\ref{wreathnontriv}. Consider a symmetric measure $\mu_1$ on $\langle h^1\rangle$ that has finite $1-\varepsilon$ moment and induces a transient walk. Let $\mu_2$ be defined by being symmetric and by $\mu_2({h^1}'_j)=\frac{1}{2k}$ for every $j$. Then $\mu=\frac{1}{2}(\mu_1+\mu_2)$ is a measure on $H_1$ with non-trivial Poisson boundary by Lemma~\ref{wreathnontriv}.
\end{proof}
\bibliographystyle{plain}
We consider the correction of errors from nucleotide sequences produced by next-generation targeted amplicon sequencing. The next-generation sequencing (NGS) platforms can provide a great deal of sequencing data thanks to their high throughput, but the associated error rates often tend to be high. Denoising in high-throughput sequencing has thus become a crucial process for boosting the reliability of downstream analyses.
Our methodology, named \acl, is derived from a general setting of reconstructing finite-valued source data corrupted by a discrete memoryless channel and effectively corrects substitution and homopolymer indel errors, the two major types of sequencing errors in most high-throughput targeted amplicon sequencing platforms. Our experimental studies with real and simulated datasets suggest that the proposed \acl~not only outperforms existing alternatives in terms of error-correction capability and time efficiency, but also boosts the reliability of downstream analyses. Further, the flexibility of \acl~enables its robust application to different sequencing platforms and analysis pipelines by simple updates of the noise model.
\acl~is available at \href{http://data.snu.ac.kr/pub/dude-seq}{http://data.snu.ac.kr/pub/dude-seq}.
\section*{Author Summary}
Next-generation sequencing (NGS) has already become a fundamental means for understanding a variety of biological processes in living organisms, creating numerous academic and practical opportunities. The success of NGS can largely be attributed to the low cost and high throughput of mainstream NGS technology, but that inevitably incurs a sacrifice in robustness measured in terms of error rates in sequenced reads. Denoising in NGS is thus a crucial component in many NGS analysis pipelines in order to ensure the reliability and quality of sequencing results. In this paper, we propose a new denoising algorithm named \acl, which possesses flavors connected to existing denoising methodologies such as $k$-mer based, multiple sequence alignment-based, and statistical error model-based techniques, while effectively overcoming their limitations.
As the sequencing coverage becomes deeper, context-counting vectors can accumulate more probable contexts and the robustness of denoising normally improves; hence, we focus on targeted amplicon sequencing.
Our thorough evaluation efforts lead us to conclude that the proposed \acl~algorithm is effective in removing substitution errors and homopolymer errors that frequently occur in applications of NGS for targeted amplicon sequencing. We also anticipate that the flexibility of \acl~will make it a versatile building block of other NGS pipelines that need efficiency and robustness for large-scale sequence processing, such as the denoising workflow involved in the emerging nanopore sequencing technology.
\section*{Introduction}
A new generation of high-throughput, low-cost sequencing technologies, referred to as \emph{next-generation sequencing} (NGS) technologies~\cite{metzker2010sequencing}, is reshaping biomedical research, including large-scale comparative and evolutionary studies~\cite{astbury1961molecular,bateson1894materials,riesenfeld2004metagenomics}. Compared with automated Sanger sequencing, NGS platforms produce significantly shorter reads in large quantities, posing various new computational challenges~\cite{pop2008bioinformatics}.
There are several DNA sequencing methodologies that use NGS~\cite{shendure2008next,goodwin2016coming}, including whole genome sequencing (WGS), chromatin immunoprecipitation (ChIP) sequencing, and targeted sequencing. WGS is used to analyze the genome of an organism to capture all variants and identify potential causative variants; it is also used for \textit{de novo} genome assembly. ChIP sequencing identifies genome-wide DNA binding sites for transcription factors and other proteins.
Targeted sequencing (\eg, exome sequencing and amplicon sequencing), the focus of this paper, is a cost-effective method that enables researchers to focus on investigating areas of interest that are likely to be involved in a particular phenotype.
According to previous studies~\cite{bamshad2011exome,jamuar2015clinical}, targeted sequencing often results in the complete coverage of exons of disease-related genes, while alternative methods result in approximately 90--95\% coverage. Hence, in clinical settings, researchers tend to rely on targeted sequencing for diagnostic evaluations.
To detect sequences based on fluorescent labels at the molecular level, NGS technologies normally rely on imaging systems requiring templates that are amplified by emulsion polymerase chain reaction (PCR) or solid-phase amplification~\cite{metzker2010sequencing}. These amplification and imaging processes can generate erroneous reads, the origin of which can be traced to the incorrect determination of homopolymer lengths, the erroneous insertion/deletion/substitution of nucleotide bases, and PCR chimeras~\cite{shendure2008next}. Substitution errors dominate for many platforms, including Illumina, while homopolymer errors, manifested as insertions and deletions (indels), are also abundant for 454 pyrosequencing and Ion Torrent.
Erroneous reads must be properly handled because they complicate downstream analyses (\eg, variant calling and genome assembly), often lowering the quality of the whole analysis pipeline~\cite{goodwin2016coming}. Soft clipping, in which 3'-ends of a read are trimmed based on the quality scores of individual bases, may be the simplest approach, but it results in a loss of information~\cite{yang2013survey}. More sophisticated methods focus on detecting and correcting errors in sequence data~\cite{ilie2011hitec,kao2011echo,kelley2010quake,qu2009efficient,salmela2010correction,salmela2011correcting,schroder2009shrec,wijaya2009recount,yang2011repeat,yang2010reptile}. Given the widespread use of Illumina sequencing platforms, most error-correction algorithms have targeted substitution errors~\cite{yang2013survey}.
As summarized in recent reviews~\cite{yang2013survey,laehnemann2015denoising}, current error-correction methods for NGS data can be categorized as follows: $k$-mer (\ie, oligonucleotides of length $k$) frequency/spectrum-based, multiple sequence alignment (MSA)-based, and statistical error model-based methods.
The idea behind $k$-mer-based methods~\cite{kelley2010quake,yang2010reptile,medvedev2011error,nikolenko2013bayeshammer,greenfield2014blue,lim2014trowel} is to create a list of ``trusted'' $k$-mers from the input reads and correct untrusted $k$-mers based on a consensus represented by this spectrum. In addition to the length of the $k$-mer, coverage ($k$-mer occurrence) information is important to determine trusted $k$-mers. Under the assumption that errors are rare and random and that coverage is uniform, for sufficiently large $k$, it is reasonable to expect that most errors alter $k$-mers to nonexistent ones in a genome. Thus, for high-coverage genome sequences obtained by NGS, we may identify suspicious $k$-mers and correct them based on a consensus.
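As an illustration of the $k$-mer spectrum idea, the following minimal Python sketch counts $k$-mers over a set of reads and flags low-coverage positions; the values of $k$ and the coverage threshold are hypothetical and not tied to any particular published tool:

\begin{verbatim}
from collections import Counter

def kmer_spectrum(reads, k=15):
    """Count every k-mer occurring in the input reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def untrusted_positions(read, counts, k=15, min_cov=3):
    """Start positions of k-mers whose coverage is below min_cov;
    these are candidates for consensus-based correction."""
    return [i for i in range(len(read) - k + 1)
            if counts[read[i:i + k]] < min_cov]
\end{verbatim}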
MSA-based methods~\cite{salmela2011correcting,kao2011echo,bragg2012fast} work by aligning related sequences according to their similarities and correcting aligned reads, usually based on a consensus in an alignment column, using various techniques. This alignment-based scheme is inherently well suited for correcting indel errors. Early methods suffered from computational issues, but recent approaches utilize advanced indexing techniques to expedite the alignments.
In statistical error model-based methods~\cite{meacham2011identification,yin2013premier,schulz2014fiona}, a statistical model is developed to capture the sequencing process, including error generation. In this regard, an empirical confusion model is often created from datasets, exploiting the information obtained from, \eg, alignment results, Phred quality scores (a measure of the quality of nucleobases generated by automated DNA sequencing)~\cite{ewing1998base}, or other parameters.
While the above methods often exhibit good performance for various platforms, they also have several limitations. First, $k$-mer-based schemes tend to be unsuitable when the coverage is expected to vary over the queried sequences, as in transcriptomics, metagenomics, heterogeneous cell samples, or pre-amplified libraries~\cite{laehnemann2015denoising}. Second, MSA-based methods, which do not suffer from the above issue related to non-uniform coverage, often require the application of heuristic and sophisticated consensus decision rules for the aligned columns, and such rules may be sensitive to specific applications or sequencing platforms.
Third, statistical error model-based methods typically use computationally expensive schemes (\eg, expectation-maximization) owing to additional stochastic modeling assumptions for the underlying DNA sequences. Moreover, little attention is given to the validity and accuracy of such modeling assumptions, let alone to theoretical analysis of whether near-optimum or sound error-correction performance is attained. Finally, many existing schemes applying the three methods often return only representative (consensus) denoised sequences created by merging input sequences; hence, the number of sequences is often not preserved after denoising. In some applications, this may result in inconsistencies in downstream analyses.
To address these limitations, many existing tools combine the three methods in a complementary manner to improve performance~\cite{yang2013survey,laehnemann2015denoising}.
In this paper, as an alternative, we applied an algorithm called Discrete Universal DEnoiser (DUDE)~\cite{weissman2005universal} for accurate DNA sequence denoising. DUDE was developed for a general setting of reconstructing sequences with finite-valued components (source symbols) corrupted by a
noise mechanism that corrupts each source symbol independently and statistically identically. In the DNA denoising literature, such a noise model is equivalent to the confusion matrix commonly used in statistical error model-based methods. As demonstrated in the original paper~\cite{weissman2005universal}, DUDE exhibits a rigorous performance guarantee in the following setting:
even when no stochastic modeling assumptions are made for the underlying clean source data, with only the assumption of a \emph{known} noise mechanism,
DUDE is shown to universally attain the optimum denoising performance
for \emph{any} source data as the data size increases. We note that the above setting of DUDE naturally fits the setting of DNA sequence denoising, \ie, it is difficult to establish accurate stochastic models for clean DNA sequences, but it is simple and fairly realistic to assume noise models (\ie, confusion matrices) for sequencing devices based on reference sequences.
The DUDE algorithm, which will be explained in detail in the next section, combines, in a single scheme, elements of all three representative methods mentioned above. Specifically, DUDE works with double-sided contexts of a fixed size that are analogous to $k$-mers. Moreover, like MSA, DUDE applies a denoising decision rule to each noisy symbol based on information aggregated over certain positions in the reads. However, unlike MSA, which makes a decision based on the information collected from the symbols in the same aligned column, DUDE makes a decision using the information collected from positions with the same double-sided context. Finally, the denoising decision rule of DUDE utilizes information from the assumed noise model, as in most statistical error model-based methods, but does not assume any stochastic model on the underlying sequence, thus resulting in a computationally efficient method. The method of incorporating the noise model is also simple, making it easy to flexibly apply DUDE to different sequencing platforms by simply changing the confusion matrix model in the algorithm.
Given the above unique nature of the DUDE algorithm, we show in our experiments that it outperforms other state-of-the-art schemes, particularly for applications to targeted amplicon sequencing.
Specifically, among the applicable areas of targeted amplicon sequencing (\eg, cancer gene, 16S rRNA, plant, and animal sequencing~\cite{schirmer2015insight}), we used 16S rRNA benchmark datasets obtained with different library preparation methods and DNA polymerases to confirm the robustness of our algorithm across various sequencing preparation methods.
Targeted amplicon sequencing datasets often have deeper sequencing coverage than WGS or ChIP datasets, which frequently makes conventional $k$-mer-based techniques suffer from the amplification bias problem~\cite{yan2016coverage}. By contrast, for \acl, as the sequencing coverage becomes deeper, context-counting vectors can accumulate more probable contexts, and the robustness of denoising typically improves.
We apply two versions of DUDE separately for substitution and homopolymer errors, the two major types of sequencing error. For substitution errors, our approach directly utilizes the original DUDE with appropriate adaptation to DNA sequences and is applicable to reads generated by any sequencing platform. For homopolymer errors, however, we do not apply the original DUDE, which was developed in a framework that does not cover errors of the homopolymer type. To correct homopolymer errors, we therefore adopt a variant of DUDE for general-output channels~\cite{dembo2005universal}. Our homopolymer-error correction is applicable to cases in which base-called sequences and the underlying flowgram intensities are available (\eg, pyrosequencing and Ion Torrent). For brevity, we refer to both of these DUDE-based approaches as \acl; the correction type will be clear from the context.
\section*{Discrete Universal DEnoiser (DUDE)}\label{sec:related_work}
In this section, we formally introduce the DUDE algorithm along with its notation and its connection to DNA sequence denoising. Fig~\ref{fig:general-setting} shows the concrete setting of the discrete denoising problem.
We denote the underlying source data as $\{x_i\}$ and assume each component takes values in some finite set $\mcX$. The resulting noisy version of the source corrupted by a noise mechanism is denoted as $\{Z_i\}$, and its components take values in, again, some finite set $\mcZ$. As mentioned in the Introduction, DUDE assumes that the noise mechanism injects noise that is independent and statistically identical across symbols; such a mechanism is often referred to as a Discrete Memoryless Channel (DMC) in information theory.
The DMC is completely characterized by the channel transition matrix, also known as the confusion matrix, $\mathbf\Pi\in\mathbb{R}^{|\mcX|\times|\mcZ|}$, of which the $(x,z)$-th element, $\Pi(x,z)$, stands for $\text{Pr}(Z_i=z|x_i=x)$, \ie, the conditional probability that the noisy symbol takes value $z$, given that the original source symbol is $x$. We denote random variables with uppercase letters and the individual samples of random variables or deterministic symbols with lowercase letters. Thus, the underlying source data, which are treated by DUDE as an individual sequence (and not a stochastic process), are denoted by the lowercase $\{x_i\}$, and the noise-corrupted sequence, \ie, a sequence of random variables, is denoted by the uppercase $\{Z_i\}$.
Furthermore, throughout this paper, we generally denote a sequence ($n$-tuple) as $a^n=(a_1,\ldots,a_n)$, and $a_i^j$ refers to the subsequence $(a_i,\ldots,a_j)$.
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{Figure1}
\caption{{\bf The general setting of discrete denoising.}}
\label{fig:general-setting}
\end{figure}
As shown in Fig~\ref{fig:general-setting}, a discrete denoiser observes the entire noisy data $Z^n$ and reconstructs the original data with $\hat{X}^n=(\hat{X}_1(Z^n),\ldots,\hat{X}_n(Z^n))$. The goodness of the reconstruction by a discrete denoiser $\hat{X}^n$ is measured by the average loss,
\begin{equation}
L_{\hat{X}^n}(x^n,Z^n) = \frac{1}{n}\sum_{i=1}^n\Lambda(x_i,\hat{X}_i(Z^n)),\label{eq:avg_loss}
\end{equation}
where $\Lambda(x_i,\hat{x}_i)$ is a single-letter loss function that measures the loss incurred by estimating $x_i$ with $\hat{x}_i$ at location $i$. The loss function can also be represented with a loss matrix $\mathbf{\Lambda}\in\mathbb{R}^{|\mcX|\times|\hat{\mcX}|}$.
DUDE in \cite{weissman2005universal} is a two-pass algorithm whose complexity is linear in the data size $n$. During the first pass, given the realization of the noisy sequence $z^n$, the algorithm collects the statistics vector
\be
\mathbf{m}(z^n, l^k,r^k)[a] = \big|\{i: k+1\leq i \leq n-k, z_{i-k}^{i+k} = l^kar^k\}\big|, \nonumber
\ee
for all $a\in\mcZ$, which is the count of the occurrences of the symbol $a\in\mcZ$ along the noisy sequence $z^n$ with the \emph{double-sided context} $(l^k, r^k)\in\mcZ^{2k}$. Note that $\mathbf{m}$ is similar to the counts across the aligned columns used for simple majority voting in MSA-based denoising methods. However, in DUDE, the count is collected regardless of whether the positions in the reads are aligned; what matters is whether the position has the same context. Additionally, the context length $k$ is analogous to the $k$-mer length. Once the $\mathbf{m}$ vector is collected, for the second pass, DUDE then applies the rule
\be
\hat{X}_i(z^n) =\arg\min_{\hat{x}\in\mcX}\mathbf{m}^T(z^n,z_{i-k}^{i-1},z_{i+1}^{i+k})\mathbf{\Pi}^{-1}[\lambda_{\hat{x}}\odot \pi_{z_i}]\label{eq:dude_rule}
\ee
for each $k+1\leq i\leq n-k$, where $\pi_{z_i}$ is the $z_i$-th column of the channel matrix $\mathbf{\Pi}$, and $\lambda_{\hat{x}}$ is the $\hat{x}$-th column of the loss matrix $\mathbf{\Lambda}$. Furthermore, $\odot$ stands for the element-wise product operator for two vectors. The intuitive explanation of (\ref{eq:dude_rule}) is as follows: when we rearrange the right-hand side of (\ref{eq:dude_rule}), we obtain
\be
(\ref{eq:dude_rule})&=&\arg\min_{\hat{x}\in\mcX}\lambda_{\hat{x}}^T\big \{\pi_{z_i}\odot\mathbf{\Pi}^{-T}\mathbf{m}^T(z^n,z_{i-k}^{i-1},z_{i+1}^{i+k})\big\}\label{eq:dude_rule_2},
\ee
and we can show that $\pi_{a}\odot\mathbf{\Pi}^{-T}\mathbf{m}^T(z^n,l^k,r^k)$ approximates the empirical count vector of the underlying \emph{clean} symbol at the middle location that resulted in the noisy context $l^kar^k$.
Thus, the denoising rule (\ref{eq:dude_rule}), re-expressed in (\ref{eq:dude_rule_2}), finds a reconstruction symbol $\hat{x}$ that minimizes the expected loss with respect to the \emph{empirical estimate} (obtained by utilizing the inverse of $\mathbf{\Pi}$) of the count vector
of the underlying $x_i$ given the noisy context $z_{i-k}^{i+k}$. At a high level, DUDE is not a simple majority voting rule based on $\mathbf{m}$; instead, it incorporates the DMC model $\mathbf{\Pi}$ (the confusion matrix) and the loss function $\mathbf{\Lambda}$ to obtain a more accurate estimate of the clean source symbol. For more detailed and rigorous arguments on the intuition behind (\ref{eq:dude_rule}), we refer readers to the original paper~\cite[Section IV-B]{weissman2005universal}.
Note that formula (\ref{eq:dude_rule}) assumes $\mcX=\mcZ=\hat{\mcX}$ and that $\mathbf{\Pi}$ is invertible for simplicity, but Weissman et al.~\cite{weissman2005universal} deal with more general cases as well. The form of (\ref{eq:dude_rule}) also shows that DUDE is a sliding window denoiser with window size $2k+1$; \ie, DUDE returns the same denoised symbol at all locations with the same value $z_{i-k}^{i+k}$. DUDE is guaranteed to attain the optimum performance among the sliding window denoisers with the same window size as the observation length $n$ increases. For more details on the theoretical performance analyses, see Weissman et al.~\cite[Section V]{weissman2005universal}.
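To make the two-pass structure concrete, the following minimal Python sketch implements (\ref{eq:dude_rule}) under the simplifying assumptions above ($\mcX=\mcZ=\hat{\mcX}$ with symbols encoded as $0,\ldots,M-1$, and an invertible $\mathbf{\Pi}$); it is an illustration of the published rule rather than a performance-tuned implementation:

\begin{verbatim}
import numpy as np
from collections import defaultdict

def dude(z, Pi, Lam, k):
    """Two-pass DUDE: z is a list of symbol indices, Pi the M-by-M
    channel (confusion) matrix, Lam the M-by-M loss matrix."""
    n, M = len(z), Pi.shape[0]
    Pi_inv = np.linalg.inv(Pi)
    # First pass: count the middle symbol for each double-sided context.
    m = defaultdict(lambda: np.zeros(M))
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        m[ctx][z[i]] += 1
    # Second pass: apply the denoising rule at every interior position.
    x_hat = list(z)
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        scores = [m[ctx] @ Pi_inv @ (Lam[:, xh] * Pi[:, z[i]])
                  for xh in range(M)]
        x_hat[i] = int(np.argmin(scores))
    return x_hat
\end{verbatim}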
The original DUDE dealt exclusively with the case of finite $|\mcX|$ and $|\mcZ|$. Dembo and Weissman~\cite{dembo2005universal} generalized DUDE to the case of discrete-input, general-output channels; the noisy outputs need not take values in some finite set but can be continuous-valued. As in \cite{weissman2005universal}, the memoryless noisy channel model, which is characterized in this case by the set of densities $\{f_{x}\}_{x\in\mcX}$, was assumed to be known. As shown in \cite[Fig~1]{dembo2005universal}, the crux of the argument is to apply a scalar quantizer $Q(\cdot)$ to each continuous-valued noisy output $\{Y_i\}$ and to derive a virtual DMC, $\mathbf{\Gamma}\in\mathbb{R}^{|\mcX|\times|\mcZ|}$, between the discrete input $\{X_i\}$ and the quantized (hence, discrete) output $\{Z_i\}$. Such $\mathbf{\Gamma}$ can be readily obtained from the knowledge of $\{f_{x}\}_{x\in\mcX}$ by evaluating the following integral for each $(x,z)$: $\Gamma(x,z) = \int_{y:Q(y)=z}f_x(y)dy$.
Once the virtual DMC is obtained, the rest of the algorithm in \cite{dembo2005universal} proceeds similarly to the original DUDE; specifically, it obtains the statistics vector $\mathbf{m}$ for the quantized noisy outputs $\{Z_i\}$ during the first pass and then applies a sliding window denoising rule similar to (\ref{eq:dude_rule}), which depends on the statistics vector $\mathbf{m}$, the virtual DMC $\mathbf{\Gamma}$, $\{f_x\}_{x\in\mathcal X}$, and the noisy sequence $Y^n$, during the second pass. A concrete denoising rule can be found in \cite[Eqs. (16), (19), and (20)]{dembo2005universal}.
In \cite{dembo2005universal}, a formal analysis of the generalized DUDE shows that it attains the optimum denoising performance among sliding window denoisers with the same window size that base their denoising decisions on the original continuous-valued outputs $Y^n$. We refer readers to the paper for more details. In the next section, we show how we adopt this generalized DUDE in our \acl~to correct homopolymer errors in DNA sequencing.
\section*{\acl: DUDE for DNA Sequence Denoising}
\subsection*{Substitution Errors.}
As described in the previous section, the setting of the original DUDE algorithm naturally aligns with the setting of substitution-error correction in DNA sequence denoising. We can set $\mcX=\mcZ= \{\texttt{A},\texttt{C},\texttt{G},\texttt{T}\}$ and the loss function as the Hamming loss, namely, $\Lambda(x,\hat{x})=0$ if $x=\hat{x}$, and $\Lambda(x,\hat{x})=1$ otherwise. Then, the two-pass sliding window procedure of DUDE for collecting the statistics vector $\mathbf{m}$ and the actual denoising can be directly applied, as shown in the toy example in Fig \ref{fig:context-def}. Before we formally describe our DUDE-Seq for substitution-error correction, however, we need to address some subtle points.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\linewidth]{Figure2}
\begin{spacing}{\mylinespacing}
\caption{{\bf A sliding window procedure of DUDE-Seq with context size $\mathbf{k = 3}$.} During the first pass, \acl~updates $\mathbf{m}(z^{n},l^3,r^3)$ for the encountered double-sided contexts $(l^3,r^3)$. Then, for the second pass, \acl~uses the obtained $\mathbf{m}(z^{n},l^3,r^3)$ and (\ref{eq:dude_rule}) for the denoising.}
\label{fig:context-def}
\end{spacing}
\end{figure}
First, the original DUDE in (\ref{eq:dude_rule}) assumes that the DMC matrix $\mathbf{\Pi}$ is known beforehand, but in real DNA sequence denoising, we need to estimate $\mathbf{\Pi}$ for each sequencing device. As described in detail in the Experimental Results section, we performed this estimation following the typical process for obtaining the empirical confusion matrix, \ie, we aligned the predefined reference sequence and its noise-corrupted sequence, determined the ratio of substitution errors, and obtained the estimated $\mathbf{\Pi}$.
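A minimal sketch of this estimation step is given below; it assumes the pairwise alignments are already available as equal-length (reference, read) string pairs, and the function and variable names are ours:

\begin{verbatim}
import numpy as np

BASES = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def estimate_pi(alignments):
    """Estimate the DMC matrix Pi by counting substitution ratios over
    (reference, read) pairwise alignments; gap and ambiguous positions
    are skipped, since Pi models substitutions only."""
    counts = np.zeros((4, 4))
    for ref, read in alignments:
        for x, z in zip(ref, read):
            if x in BASES and z in BASES:
                counts[BASES[x], BASES[z]] += 1
    return counts / counts.sum(axis=1, keepdims=True)  # row-normalize
\end{verbatim}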
Second, the original DUDE assumes that the noise mechanism is memoryless, \ie, the error rate does not depend on the location of a base within the sequence.
In contrast, for real sequencing devices, the actual error rate, namely, the conditional probability Pr$(Z_i=z|X_i=x)$, may not always be the same for all location indices $i$. For example, for Illumina sequencers, the error rate tends to increase towards the ends of reads, as pointed out in \cite{laehnemann2015denoising}. In our DUDE-Seq, however, we still treat the substitution error mechanism as a DMC and therefore use the single estimated $\mathbf{\Pi}$ obtained as above, which is essentially the same as that obtained using the \emph{average} error rate matrix. Our experimental results show that such an approach still yields very competitive denoising results. Third, the optimality of the original DUDE relies on the stationarity of the underlying clean sequence, thus requiring a very large observation sequence length $n$ to obtain a reliable statistics vector $\mathbf{m}$. In contrast, most sequencing devices generate multiple short reads of lengths 100--200. Hence, in DUDE-Seq, we combined all statistics vectors collected from multiple short reads to generate a single statistics vector $\mathbf{m}$ to use in (\ref{eq:dude_rule}).
Addressing the above three points, a formal summary of \acl~for substitution errors is given in Algorithm~1. Note that the pseudocode in Algorithm~1 skips those bases whose Phred quality scores are higher than a user-specified threshold and invokes \acl~only for the bases with low quality scores (lines 10--14). This is in accord with common practice in sequence preprocessing and is not a specific property of the \acl~algorithm. Furthermore, for simplicity, we denote $z^n$ as the entire noisy DNA sequence, and $\mathbf{m}^T(z^n,z_{i-k}^{i-1},z_{i+1}^{i+k})$ represents the aggregated statistics vector obtained as described above.
\begin{algorithm*}[!ht]
\caption{The \emph{DUDE-Seq} for substitution errors}\label{dude_alg}
\begin{algorithmic}[1]
\small
\Require Observation $z^n$, Estimated DMC matrix $\mathbf{\Pi}\in\mathbb{R}^{4\times4}$, Hamming loss $\mathbf{\Lambda}\in\mathbb{R}^{4\times4}$, Context size $k$, Phred quality score $Q^n$
\Ensure The denoised sequence $\hat{X}^n$
\State Define $\mathbf{m} (z^{n}, l^{k}, r^{k})\in\mathbb{R}^{4}$ for all $(l^k,r^k)\in\{\texttt{A}, \texttt{C}, \texttt{G}, \texttt{T}\}^{2k}$.
\State Initialize $\mathbf{m} (z^{n}, l^{k}, r^{k})[a]=0$ for all $(l^k,r^k)\in\{\texttt{A}, \texttt{C}, \texttt{G}, \texttt{T}\}^{2k}$ and for all $a\in\{\texttt{A}, \texttt{C}, \texttt{G}, \texttt{T}\}$
\For {$i \leftarrow k+1,\ldots, n-k$}
\Comment \textsf{First pass}
\State $\mathbf{m} (z^{n}, z_{i-k}^{i-1}, z_{i+1}^{i+k})[z_{i}] = \mathbf{m} (z^{n}, z_{i-k}^{i-1}, z_{i+1}^{i+k})[z_{i}] +1$
\Comment \textsf{Update the count statistics vector}
\EndFor
\For {$i \leftarrow 1,\ldots, n$}
\Comment \textsf{Second pass}
\If {$i\leq k$ \texttt{or} $i\geq n-k+1$}
\State $\hat{X}_i=z_i$
\Else
\If {$Q_i > \text{threshold}$}
\Comment \textsf{Quality score}
\State $\hat{X}_i=z_i$
\Else
\State $\hat{X}_i(z^n) =\argmin\limits_{\hat{x}\in\{\texttt{A},\texttt{C},\texttt{G},\texttt{T}\}}\mathbf{m}^T(z^n,z_{i-k}^{i-1},z_{i+1}^{i+k})\mathbf{\Pi}^{-1}[\lambda_{\hat{x}}\odot \pi_{z_i}]$
\Comment \textsf{Apply the denoising rule}
\EndIf
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm*}
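Reusing the dude() sketch above, Algorithm~1 can be approximated for a single read as follows; note that in \acl~proper, the statistics vector is aggregated over all reads before the second pass, and the quality threshold here is simply a user-set parameter:

\begin{verbatim}
import numpy as np

BASES = 'ACGT'

def dude_seq_substitution(read, quals, Pi, k=5, q_threshold=30):
    """Single-read sketch of Algorithm 1: denoise with dude() and keep
    bases whose Phred quality exceeds the threshold unchanged."""
    z = [BASES.index(b) for b in read]
    Lam = 1.0 - np.eye(4)  # Hamming loss
    x_hat = dude(z, Pi, Lam, k)
    return ''.join(BASES[x] if q <= q_threshold else b
                   for b, q, x in zip(read, quals, x_hat))
\end{verbatim}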
\subsubsection*{Remarks.}
\begin{enumerate}
\item
Incorporating flanking sequences in DUDE-Seq is quite straightforward; we can simply use the one-sided contexts $l^{2k}$ or $r^{2k}$ once DUDE-Seq reaches the flanking regions. In our experiments, however, we did not perform such a modification (lines 7--8 of Algorithm~1) since we normally used small $k$ values (around $k=5$). As demonstrated in our experimental results, the effect of such small flanking regions on the final denoising results is not significant, and we can achieve satisfactory results without considering flanking regions. However, in general, should larger values of $k$ be needed, we can easily modify the algorithm to incorporate one-sided contexts in the flanking regions, and such a modification will clearly improve the final denoising result.
\item
\acl~does not need to consider reverse complements of the input sequences to collect $\mathbf{m}$'s, since forward and reverse reads are handled separately in our experiments. Reverse complements are typically considered when we need to handle double-stranded sequences without knowing whether each read corresponds to the forward or reverse strand.
\end{enumerate}
\subsection*{Homopolymer Errors.}
Homopolymer errors, particularly in pyrosequencing, occur while handling the observed flowgram, and a careful understanding of the error injection procedure is necessary to correct these errors. As described in \cite{quince2011removing}, in pyrosequencing, the light intensities, \ie, the flowgram, corresponding to a fixed order of the four DNA bases $\{\texttt{T}, \texttt{A}, \texttt{C}, \texttt{G}\}$ are sequentially observed. The intensity value increases as the number of consecutive nucleotides (\ie, homopolymers) of each DNA base increases, and the standard base-calling procedure rounds the continuous-valued intensities to the closest integers. For example, when the observed light intensities for two frames of DNA bases are $[0.03\; 1.03\; 0.09\; 0.12;\, 1.89\; 0.09\; 0.09\; 1.01]$, the corresponding rounded integers are $[0\; 1\; 0\; 0;\, 2\; 0\; 0\; 1]$; hence, the resulting sequence is \texttt{ATTG}. Insertion and deletion errors arise because the observed light intensities do not perfectly match the actual homopolymer lengths; thus, the rounding procedure may result in the insertion or deletion of DNA symbols. In fact, the distribution of the intensities $f$ given the actual homopolymer length $N$, $\{P(f|N)\}$, can be obtained for each sequencing device, and Fig~\ref{fig:cont-disc-ch} shows typical distributions for various lengths.
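The base-calling step can be made concrete with the following minimal sketch, which reproduces the rounding example above (the fixed \texttt{TACG} flow order is as stated in the text; the rest is illustrative):

\begin{verbatim}
FLOW_ORDER = 'TACG'

def basecall(flowgram):
    """Round each flowgram intensity to the nearest integer and expand
    the corresponding base into a homopolymer of that length."""
    return ''.join(FLOW_ORDER[i % 4] * int(round(f))
                   for i, f in enumerate(flowgram))

# The example from the text: two flow cycles yield 'ATTG'.
print(basecall([0.03, 1.03, 0.09, 0.12, 1.89, 0.09, 0.09, 1.01]))
\end{verbatim}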
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\linewidth]{Figure3}
\caption{{\bf Conditional intensity distributions for $N = 0,1,2,3$.}}
\label{fig:cont-disc-ch}
\end{figure}
Exploiting the fact that the order of DNA bases is always fixed at $\{\texttt{T}, \texttt{A}, \texttt{C}, \texttt{G}\}$, we can apply the setting of the generalized DUDE in \cite{dembo2005universal} to correct homopolymer errors as follows. Because we know the exact DNA base corresponding to each intensity value, the goal is the correct estimation of homopolymer lengths from the observed intensity values. Hence, we can interpret the intensity distributions $\{P(f|N)\}$ as memoryless noisy channel models with a continuous output, where the channel input is the homopolymer length $N$. We set the upper bound of $N$ to 9 according to the convention commonly used for handling flowgram distributions in the targeted amplicon sequencing literature~\cite{quince2011removing,bragg2013shining,fichot2013microbial}.
When the usual rounding function
\be
Q_R(f) = \argmin_{i\in\{0,\ldots,9\}}|i-f|\label{eq:rounding}
\ee is used as the scalar quantizer, as mentioned above, the virtual DMC $\mathbf{\Gamma}\in\mathbb{R}^{10\times 10}$ can be obtained by calculating the integral
\be
\Gamma(i,j) = \int_{j-0.5}^{j+0.5}P(f|i)df\label{eq:gamma}
\ee
for each $0\leq i \leq 9, \ 1\leq j \leq 9$ and $\Gamma(i,0) = \int_{0}^{0.5}P(f|i)df$.
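Assuming the flowgram densities $P(f|N)$ are available as callable functions (as they would be once fitted from device data), this integration can be sketched as follows:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def virtual_dmc(densities, n_max=9):
    """Compute Gamma(i, j) by integrating P(f|i) over the quantization
    bin of j; densities is a list of callables indexed by the true
    homopolymer length i, and the j = 0 bin starts at 0."""
    G = np.zeros((n_max + 1, n_max + 1))
    for i in range(n_max + 1):
        G[i, 0] = quad(densities[i], 0.0, 0.5)[0]
        for j in range(1, n_max + 1):
            G[i, j] = quad(densities[i], j - 0.5, j + 0.5)[0]
    return G
\end{verbatim}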
With this virtual DMC model, we apply a scheme inspired by the generalized DUDE to correctly estimate the homopolymer lengths, which results in correcting the insertion and deletion errors.
That is, we set $\mcX=\mcZ=\{0,1,\ldots,9\}$ and again use the Hamming loss $\mathbf{\Lambda}\in\mathbb{R}^{10\times 10}$. With this setting, we apply $Q_R(f)$ to each $f_i$ to obtain the quantized discrete output $z_i$ and obtain the count statistics vector $\mathbf{m}$ from $z^n$ during the first pass. Then, for the second pass, instead of applying the more involved denoising rule in \cite{dembo2005universal}, we employ the same rule as (\ref{eq:dude_rule}) with $\mathbf{\Gamma}$ in place of $\mathbf{\Pi}$ to obtain the denoised sequence of integers $\hat{X}^n$ based on the quantized noisy sequence $Z^n$. Although this scheme is potentially suboptimal compared to the generalized DUDE, we use it because its implementation is easier and its running time is faster than that of the generalized DUDE.
Once we obtain $\hat{X}^n$, from the knowledge of the DNA base for each $i$, we can reconstruct the homopolymer error-corrected DNA sequence $\hat{D}$ (the length of which may not necessarily be equal to $n$). Algorithm~2 summarizes the pseudo-code of \acl~for homopolymer-error correction.
\begin{algorithm*}[!ht]
\small
\caption{The \emph{DUDE-Seq} for homopolymer errors}\label{dude_alg_homopolymer}
\begin{algorithmic}[1]
\Require Flowgram data $f^n$, Flowgram densities $\{P(f|N)\}_{N=0}^9$, Hamming loss $\mathbf{\Lambda}\in\mathbb{R}^{10\times10}$, Context size $k$
\Ensure The denoised sequence $\hat{D}$
\State Let $Q_R(f)$ be the rounding quantizer in Eq. (4) of the main text
\State Let $\texttt{Base}(i)\in\{\texttt{T},\texttt{A},\texttt{C},\texttt{G}\}$ be the DNA base corresponding to $f_i$
\State Define $\mathbf{m} (f^{n}, l^{k}, r^{k})\in\mathbb{R}^{10}$ for all $(l^k,r^k)\in\{0,1,\ldots,9\}^{2k}$.
\State Initialize $\mathbf{m} (f^{n}, l^{k}, r^{k})[a]=0$ for all $(l^k,r^k)\in\{0,1,\ldots,9\}^{2k}$ and for all $a\in\{0,1,\ldots,9\}$
\State Let $\hat{D}=\phi$, $I=0$
\For {$i\leftarrow0,\ldots,9$}
\For {$j\leftarrow0,\ldots,9$}
\State Compute $\Gamma(i,j)$ following Eq. (5) of the main text
\Comment \textsf{Computing the virtual DMC $\mathbf{\Gamma}$}
\EndFor
\EndFor
\For {$i\leftarrow1,\ldots,n$} Obtain $z_i = Q_R(f_i)$
\Comment \textsf{Note $z_i\in\{0,\ldots,9\}$}
\EndFor
\For {$i \leftarrow k+1,\ldots, n-k$}
\Comment \textsf{First pass}
\State $\mathbf{m} (f^{n}, z_{i-k}^{i-1}, z_{i+1}^{i+k})[z_{i}] = \mathbf{m} (f^{n}, z_{i-k}^{i-1}, z_{i+1}^{i+k})[z_{i}] +1$
\EndFor
\For {$i \leftarrow 1,\ldots, n$}
\Comment \textsf{Second pass}
\If {$i\leq k$ \texttt{or} $i\geq n-k+1$} $\hat{X}_i(f^n)=z_i$
\Else
\State $\hat{X}_i(f^n) =\argmin\limits_{\hat{x}\in\mcX}\mathbf{m}^T(f^n,z_{i-k}^{i-1},z_{i+1}^{i+k})\mathbf{\Gamma}^{-1}[\lambda_{\hat{x}}\odot \gamma_{z_i}]$
\Comment \textsf{Note $\hat{X}_i(f^n)\in\{0,\ldots,9\}$}
\EndIf
\If {$\hat{X}_i(f^n)\geq1$}
\For {$j\leftarrow 1,\ldots,\hat{X}_i(f^n)$} $\hat{D}_{I+j}= \texttt{Base}(i)$ \Comment \textsf{Reconstructing the DNA sequence}
\EndFor
\EndIf
\State $I\leftarrow I+ \hat{X}_i(f^n)$
\EndFor
\end{algorithmic}
\end{algorithm*}
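For completeness, a compact sketch of this homopolymer pipeline, reusing the dude() function and the flow-order convention from the earlier sketches (variable names are ours), is:

\begin{verbatim}
import numpy as np

def dude_seq_homopolymer(flowgram, Gamma, k=2):
    """Sketch of Algorithm 2: quantize the flowgram, denoise the integer
    sequence against the virtual DMC Gamma, then expand homopolymers."""
    z = [min(9, max(0, int(round(f)))) for f in flowgram]
    Lam = 1.0 - np.eye(10)            # Hamming loss on {0, ..., 9}
    lengths = dude(z, Gamma, Lam, k)  # dude() from the earlier sketch
    return ''.join('TACG'[i % 4] * n for i, n in enumerate(lengths))
\end{verbatim}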
\section*{Experimental Results}\label{sec:experiments}
\subsection*{Setup.}
We used both real and simulated NGS datasets and compared the performance of \acl~with that of several state-of-the-art error correction methods. The list of alternative tools used for comparison and the rationale behind our choices are described in the next subsection. When the flowgram intensities of base-calling were available, we corrected both homopolymer and substitution errors; otherwise, we corrected only substitution errors. The specifications of the machine we used for the analysis are as follows: Ubuntu 12.04.3 LTS, 2$\times$ Intel Xeon X5650 CPUs, 64 GB main memory, and 2 TB HDD.
\acl~has a single hyperparameter, the context size $k$, that needs to be determined. Similar to the popular $k$-mer-based schemes, there is no analytical method for selecting the best $k$ for a finite data size $n$, except for the asymptotic order result of $k|\mcX|^{2k}=o(n/\log n)$ in \cite{weissman2005universal}; a heuristic rule of thumb is to try values between 2 and 8.
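One way to turn the asymptotic condition into a finite-$n$ sanity check, which is our own heuristic reading rather than a prescription from \cite{weissman2005universal}, is to take the largest $k$ satisfying $k|\mcX|^{2k}\leq n/\log n$:

\begin{verbatim}
import math

def max_context_size(n, alphabet_size=4):
    """Largest k with k * alphabet_size**(2k) <= n / log(n), a finite-n
    proxy for the asymptotic condition in the DUDE paper."""
    k = 1
    while (k + 1) * alphabet_size ** (2 * (k + 1)) <= n / math.log(n):
        k += 1
    return k

print(max_context_size(10 ** 8))  # ~100M noisy bases -> k = 5
\end{verbatim}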
Furthermore, as shown in Eq. (\ref{eq:dude_rule}), the two adjustable matrices, $\boldsymbol{\Lambda}$ and $\boldsymbol{\Pi}$, are required for \acl. The loss $\boldsymbol{\Lambda}$ used for both types of errors is the Hamming loss.
According to Marinier~\etal~\cite{marinier2015pollux}, adjusting the sequence length by one can correct most homopolymer errors, which justifies our use of the Hamming loss in \acl. In our experiments, the use of other types of loss functions did not result in any noticeable performance differences.
The DMC matrix $\boldsymbol{\Pi}$ for substitution errors is empirically determined by aligning each sampled read to its reference sequence, as in \cite{quince2011removing}. Fig~\ref{fig:results-pi} shows the non-negligible variation in the empirically obtained $\mathbf{\Pi}$'s across the sequencing platforms, where each row corresponds to the true signal $x$ and each column corresponds to the observed noisy signal $z$. In this setting, each cell represents the conditional probability $P(z|x)$. In our experiments, datasets P1--P8 used the $\mathbf{\Pi}$ for GS FLX, Q19--Q31 used the $\mathbf{\Pi}$ for Illumina, and S5 and A5 used the $\mathbf{\Pi}$ for simulated data. The details of each dataset are explained in the following sections.
\begin{figure}[!h]
\begin{adjustwidth}{-2in}{0in}
\centering
\includegraphics[width=\linewidth]{Figure4}
\begin{spacing}{\mylinespacing}
\caption{{\bf Adjustable DMC matrix $\mathbf{\Pi}$ of \acl.} Empirically obtained $\mathbf{\Pi}$'s for different sequencing platforms (colors are on a log scale).}
\label{fig:results-pi}
\end{spacing}
\end{adjustwidth}
\end{figure}
To evaluate the results, we used the Burrows-Wheeler Aligner (BWA)~\cite{li2009fast} and SAMtools~\cite{li2009sequence}. We aligned all reads to their reference genome using BWA with the following parameters: [minimum seed length: 19, matching score: 1, mismatch penalty: 4, gap open penalty: 6, gap extension penalty: 1].
After the mapped regions were determined using BWA in SAM format, we chose uniquely mapped pairs using SAMtools. The Compact Idiosyncratic Gapped Alignment Report (CIGAR) string and MD
tag (string for mismatching positions) for each of the resultant pairs in the SAM file were reconstructed to their pairwise alignments using sam2pairwise~\cite{lafave_2014_11377}.
\subsection*{Evaluation Metric.}
As a performance measure, we define the per-base error rate of a tool after denoising as
\begin{align}
e_\text{tool} = \frac{\text{\# mismatched bases}}{\text{\# aligned bases}},\label{eq:error_rate_def}
\end{align}
in which `\# aligned bases' represents the number of mapped bases (\ie, matches and mismatches) after mapping each read to its reference sequence, and `\# mismatched bases' represents the number of erroneous bases (\ie, insertions, deletions, and substitutions) among the aligned bases.
We also employ an alternative definition that adjusts the error rate by incorporating the degree of alignment. To this end, we define the \emph{relative gain} in the number of aligned bases after denoising by a tool over the raw data as
\begin{align}
g(a_\text{tool}) = \frac{\text{\# aligned bases after denoising}-\text{\# aligned bases in raw}}{\text{\# aligned bases in raw}}. \label{eq:rel_gain_align}
\end{align}
Based on this, the adjusted error rate $\hat{e}_\text{tool}$ of a denoising tool is defined as follows:
\begin{align}
\hat{e}_{\text{tool}} = (1+g(a_\text{tool})) \times e_{\text{tool}} - g(a_\text{tool}) \times e_{\text{raw}},\label{eq:error_hat}
\end{align}
where $e_\text{tool}$ and $e_\text{raw}$ represent the (unadjusted) error rates of the denoised data and the raw data, respectively. In other words, (\ref{eq:error_hat}) is a weighted average of $e_\text{tool}$ and $e_\text{raw}$, in which the weights are determined by the number of aligned bases of a tool relative to the raw sequence. We believe $\hat{e}_{\text{tool}}$ is a fairer measure, as it penalizes the error rate of a denoiser that aligns only a small number of bases. The relative gain of the adjusted error rate over the raw data is then defined as
\begin{align}
g(\hat{e}_{\text{tool}}) = \frac{e_{\text{raw}} - \hat{e}_{\text{tool}}}{e_{\text{raw}}},\label{eq:rel_gain_error}
\end{align}
which we use to evaluate denoiser performance.
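Putting these definitions together, the computation from raw alignment counts can be sketched as follows (argument names are ours):

\begin{verbatim}
def adjusted_error_gain(mis_tool, aln_tool, mis_raw, aln_raw):
    """Relative gain of the adjusted error rate over raw data, combining
    the per-base error rate, the relative alignment gain, and the
    adjusted error rate defined above."""
    e_tool, e_raw = mis_tool / aln_tool, mis_raw / aln_raw
    g_a = (aln_tool - aln_raw) / aln_raw      # relative alignment gain
    e_hat = (1 + g_a) * e_tool - g_a * e_raw  # adjusted error rate
    return (e_raw - e_hat) / e_raw
\end{verbatim}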
To evaluate clustering results, we employ the measure of concordance (MoC)~\cite{pfitzner2009characterization}, a popular similarity measure for pairs of clusterings.
For two clusterings $P$ and $Q$ with $I$ and $J$ clusters, respectively, the MoC is defined as
\begin{align}
\text{MoC} (P,Q) = \frac{1}{\sqrt{IJ}-1} \left (\sum_{i=1}^{I} \sum_{j=1}^{J} \frac{f_{ij}^{2}}{p_{i} q_{j}}-1 \right )
\end{align}
where $f_{ij}$ is the number of objects common to clusters $P_i$ and $Q_j$, and $p_i$ and $q_j$ are the numbers of objects in clusters $P_i$ and $Q_j$, respectively.
A MoC of one or zero represents perfect or no concordance, respectively, between the two clusterings.
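A direct transcription of the MoC into Python (for clusterings given as label arrays over the same set of objects) may look as follows:

\begin{verbatim}
import numpy as np

def moc(labels_p, labels_q):
    """Measure of concordance between two clusterings of the same objects."""
    labels_p, labels_q = np.asarray(labels_p), np.asarray(labels_q)
    P, Q = np.unique(labels_p), np.unique(labels_q)
    I, J = len(P), len(Q)
    if I == 1 and J == 1:
        return 1.0  # both clusterings are trivial and identical
    s = 0.0
    for p in P:
        in_p = labels_p == p
        for q in Q:
            f = np.sum(in_p & (labels_q == q))
            s += f ** 2 / (in_p.sum() * np.sum(labels_q == q))
    return (s - 1.0) / (np.sqrt(I * J) - 1.0)
\end{verbatim}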
\subsection*{Software Chosen for Comparison.}
It is impossible to compare the performance of DUDE-Seq with that of all other schemes. Hence, we selected representative baselines using the following reasoning.
\begin{enumerate}
\item We included tools that represent the different principles outlined in the Introduction, namely, $k$-mer-based (Trowel, Reptile, BLESS, and fermi), MSA-based (Coral), and statistical error model-based (AmpliconNoise) methods.
\item We considered the recommendations of~\cite[Table 2]{laehnemann2015denoising} to choose baseline tools that are competitive for different scenarios, \ie, for 454 pyrosequencing data (AmpliconNoise), non-uniform coverage data such as metagenomics data (Trowel, fermi, Reptile), data dominated by substitution errors such as Illumina data (Trowel, fermi, Reptile), and data with a high prevalence of indel errors (Coral).
\item For multiple $k$-mer-based tools, we chose those that use different main approaches/data structures: BLESS ($k$-mer spectrum-based/hash table and Bloom filter), fermi ($k$-mer spectrum and frequency-based/hash table and suffix array), Trowel ($k$-mer spectrum-based/hash table), and Reptile ($k$-mer frequency and Hamming graph-based/replicated sorted $k$-mer list).
\item The selected tools were developed quite recently: Trowel and BLESS (2014), fermi (2012), Coral and AmpliconNoise (2011), and Reptile (2010).
\item We mainly chose tools that return read-by-read denoising results to make fair error-rate comparisons with DUDE-Seq. We excluded tools that return a substantially reduced number of reads after error correction (caused by filtering or forming consensus clusters). Examples of excluded tools are Acacia, ALLPATHS-LG, and SOAPdenovo.
\item We also excluded some recently developed tools that require additional mandatory information (\eg, the size of the genome of the reference organism) beyond the common setting of DNA sequence denoising in order to make fair error-rate comparisons. Examples of excluded tools are Fiona, Blue, and Lighter. Incorporating tools that require such additional information into the DUDE-Seq framework and comparing with the excluded tools would be another future direction.
\end{enumerate}
\subsection*{Real Data: 454 Pyrosequencing.}
Pyrosequenced 16S rRNA genes are commonly used to characterize microbial communities because the method yields relatively longer reads than those of other NGS technologies~\cite{reeder2010rapid}. Although 454 pyrosequencing is gradually being phased out, we tested \acl~with 454 pyrosequencing data for the following reasons: (1) the \acl~methodology for correcting homopolymeric errors in 454 sequencing data is equally applicable to other sequencing technologies that produce homopolymeric errors, such as Ion Torrent; (2) using pyrosequencing data allows us to exploit existing (experimentally obtained) estimates of the channel transition matrix $\mathbf{\Gamma}$ (\eg, \cite{quince2011removing}), which is required for denoising noisy flowgrams by \acl~(see Algorithm 2); (3) in the metagenomics literature, widely used standard benchmarks consist of datasets generated by pyrosequencing.
In metagenome analysis~\cite{schloss2005introducing}, grouping reads and assigning them to operational taxonomic units (OTUs) (\ie, binning) are essential processes, given that the majority of microbial species have not been taxonomically classified.
By OTU binning, we can computationally identify closely related genetic groups of reads at a desired level of sequence difference.
However, owing to erroneous reads, nonexistent OTUs may be obtained, resulting in the common problem of overestimating the ground truth OTUs. Such overestimation is a bottleneck in the overall microbiome analysis; hence, removing errors in reads before they are assigned to OTUs is a critical issue~\cite{quince2011removing}. With this motivation, in some of our experiments below, we used the difference between the number of assigned OTUs and the ground truth number of OTUs as a proxy for denoising performance; the number of OTUs was determined using UCLUST~\cite{edgar2010search} at an identity threshold of 0.97, which corresponds to species-level assignment.
We tested the performance of \acl~on the eight datasets used in~\cite{quince2011removing}, which are mixtures drawn from a library of 94 environmental clones from a eutrophic lake (Priest Pot), amplified using primers 787f and 1492r. Dataset P1 has 90 clones mixed in proportions differing by two orders of magnitude, while P2 has 23 clones mixed in equal proportions. Datasets P3--P5 and P6--P8 contain 87-clone mock communities mixed in even and uneven proportions, respectively. In all datasets, both homopolymer and substitution errors exist, and the flowgram intensity values as well as the distributions are available~\cite{quince2011removing}. Therefore, \acl~tries to correct both types of errors using the empirically obtained $\mathbf{\Pi}$ and the flowgram intensity distributions $\{P(f|N)\}$.
We first show the effect of $k$ on the performance of \acl~in Fig~\ref{fig:results-k}. The vertical axis shows the ratio between the number of OTUs assigned after denoising with \acl~and the ground truth number of OTUs for the P1, P2, and P8 datasets. The horizontal axis shows the $k$ values used for correcting the substitution errors (\ie, for Algorithm~1), and the color-coded curves correspond to different $k$ values used for homopolymer-error correction (\ie, for Algorithm~2). As shown in the figure, correcting homopolymer errors (\ie, with $k=2$ for Algorithm~2) always enhanced the results in terms of the number of OTUs in comparison to correcting substitution errors alone (\ie, Algorithm~1 alone). We observe that $k=5$ for Algorithm~1 and $k=2$ for Algorithm~2 produce the best results in terms of the number of OTUs. Larger $k$ values work better for substitution errors owing to the smaller alphabet size of the data, \ie, 4, compared to that of homopolymer errors, \ie, 10. Motivated by this result, we fixed the context sizes of substitution-error correction and homopolymer-error correction to $k=5$ and $k=2$, respectively, for all subsequent experiments.
\begin{figure}[!h]
\begin{adjustwidth}{-2in}{0in}
\centering
\includegraphics[width=\linewidth]{Figure5}
\begin{spacing}{\mylinespacing}
\caption{{\bf Hyperparameter $k$ of \acl.} Effects of varying context size $k$ [$k1$ is for Algorithm 1 (substitution-error correction) and $k2$ is for Algorithm 2 (homopolymer-error correction); data:~\cite{quince2011removing}].}
\label{fig:results-k}
\end{spacing}
\end{adjustwidth}
\end{figure}
In Fig~\ref{fig:results-pyrosequencing}(a), we report a more direct analysis of error correction performance.
We compared the performance of \acl~with that of Coral~\cite{salmela2011correcting}, an MSA-based state-of-the-art scheme. Coral aligns multiple reads by exploiting the $k$-mer neighborhood of each base read and produces read-by-read correction results for pyrosequencing datasets, similar to \acl. Furthermore, as a baseline, we also present the error rates for the original, uncorrected sequences (labeled `Raw'). We did not include the results of AmpliconNoise~\cite{quince2011removing}, a state-of-the-art scheme for 454 pyrosequencing data, in the performance comparison because it does not provide read-by-read correction results, making a fair comparison of the per-base error correction performance with \acl~difficult.
We observed that \acl(1+2), which corrects both substitution errors and homopolymer errors, always outperforms Coral, and the relative error reduction of \acl(1+2) with respect to `Raw' (without any denoising) was up to 23.8\%. Furthermore, homopolymer-error correction further drives down the error rates obtained by substitution-error correction alone; hence, \acl(1+2) always outperforms \acl(1).
\begin{figure}[!h]
\begin{adjustwidth}{-2in}{0in}
\centering
\includegraphics[width=\linewidth]{Figure6}
\begin{spacing}{\mylinespacing}
\caption{{\bf Comparison of read correction performance on eight real 454 pyrosequencing datasets (labeled P1--P8; \cite{quince2011removing}).} {[parameters: $k=5$ (Algorithm 1) and $k=2$ (Algorithm 2) for \acl; $(s_{\text{PyroNoise}},c_{\text{PyroNoise}},s_{\text{SeqNoise}},c_{\text{SeqNoise}})=(60,0.01,25,0.08)$ for AmpliconNoise; $(k,mr,mm,g)=(21,2,2,3)$ for Coral]}: (a) Per-base error rates [1 and 2 represent substitution-error correction (Algorithm~1) and homopolymer-error correction (Algorithm~2), respectively.]
(b) Measure of concordance (MoC), a similarity measure for pairs of clusterings. (c) Running time (the type and quantity of processors used in each case are shown in the legend).}
\label{fig:results-pyrosequencing}
\end{spacing}
\end{adjustwidth}
\end{figure}
In Fig~\ref{fig:results-pyrosequencing}(b), we compare the error correction performance of three schemes, AmpliconNoise, Coral, and \acl, in terms of the MoC. AmpliconNoise assumes a certain statistical model on the DNA sequence and runs an expectation-maximization algorithm for denoising.
Here, the two clusterings in the comparison are the golden OTU clusterings and the clusterings returned by the denoisers.
We observe that for all eight datasets, the number of OTUs generated by \acl~is consistently closer to the ground truth, providing higher MoC values than those of the other two schemes.
Furthermore, Fig~\ref{fig:results-pyrosequencing}(c) compares the running times of the three schemes for the eight datasets. We can clearly see that \acl~is substantially faster than the other two. In particular, we stress that the running time of \acl, even when implemented and executed on a single CPU, is two orders of magnitude shorter than that of parallelized AmpliconNoise run on four powerful GPUs.
We believe that this substantial speed advantage over state-of-the-art schemes is a compelling reason for the adoption of \acl~in microbial community analysis.
\subsection*{Real Data: Illumina Sequencing.}\label{sec:real_illumina}
Illumina platforms, such as GAIIx, MiSeq, and HiSeq, are currently ubiquitous in genome analysis. These platforms intrinsically generate paired-end reads (forward and reverse reads) owing to their relatively short reads compared to those obtained by automated Sanger sequencing~\cite{bartram2011generation}. Merging the forward and reverse reads from paired-end sequencing yields elongated reads (\eg, $2\times300$ bp for MiSeq) that improve the performance of downstream pipelines~\cite{magovc2011flash}.
Illumina platforms primarily inject substitution errors. A realistic error model is not the DMC, though, as Illumina error rates tend to increase from the beginning to the end of reads. Thus, the assumptions under which DUDE was originally developed do not exactly apply to the error model of Illumina. In our experiments with \acl, however, we still used the empirically obtained DMC model $\mathbf{\Pi}$ in Fig~\ref{fig:results-pi}, which was computed by \emph{averaging} all error rates across different Illumina platforms.
\textcolor{\sryoonrevisioncolor}{
In our experiments, we used 13} real Illumina datasets (named Q19--Q31) reported \textcolor{\editagecolor}{previously}~\cite{schirmer2015insight}, \textcolor{\sryoonrevisioncolor}{\textcolor{\editagecolor}{including} sequencing results from} four organisms (\textit{Anaerocellum thermophilum Z-1320 DSM 6725}, \textit{Bacteroides thetaiotaomicron VPI-5482}, \textit{Bacteroides vulgatus ATCC 8482}, and \textit{Caldicellulosiruptor saccharolyticus DSM 8903}) target\textcolor{\editagecolor}{ing} two hypervariable regions\textcolor{\editagecolor}{,} V3 and V4\textcolor{\editagecolor}{,} using different configurations (\textcolor{\tsmrevisioncolor}{\textcolor{\editagecolor}{s}ee the caption for Table~\ref{tab:illumina-dataset} and Fig~\ref{fig:results-illumina} for details). }
To \textcolor{\editagecolor}{examine} how the number of reads in a dataset affects denoising performance, we derived 10 subsets from the original datasets by randomly subsampling 10,000 to 100,000 reads in \textcolor{\editagecolor}{increments} of 10,000 reads. In addition to Coral, we compared the performance of \acl~with \textcolor{\editagecolor}{that of} BLESS~\cite{heo2014bless}, fermi~\cite{li2012exploring}, and Trowel~\cite{lim2014trowel}, \textcolor{\sryoonrevisioncolor}{which are \textcolor{\tsmrevisioncolor}{representative} $k$-mer\textcolor{\editagecolor}{-}based state-of-the-art tools.}
BLESS corrects ``weak'' $k$-mers that exist between consecutive ``solid'' $k$-mers, assuming that a weak $k$-mer has only one error. \textcolor{\tsmrevisioncolor}{F}ermi corrects sequencing errors in underrepresented $k$-mers using a heuristic cost function based on quality scores and does not rely on a $k$-mer occurrence threshold.
Trowel does not use a coverage threshold for its $k$-mer spectrum and iteratively boosts the quality values of bases after making corrections with $k$-mers that have high quality values.
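To make the shared idea behind these $k$-mer spectrum methods concrete, the toy sketch below classifies $k$-mers as ``solid'' or ``weak'' by a simple coverage threshold; the actual tools refine this idea in different ways, and this sketch is not the algorithm of any one of them.
\begin{verbatim}
# Toy illustration of a k-mer spectrum: k-mers seen at least
# `threshold` times are "solid"; the rest are "weak" (candidate errors).
from collections import Counter

def kmer_spectrum(reads, k):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def weak_kmers(reads, k, threshold=2):
    counts = kmer_spectrum(reads, k)
    return {km for km, c in counts.items() if c < threshold}

reads = ["ACGTACGT", "ACGTACGT", "ACGTACGA"]
print(weak_kmers(reads, 4))  # -> {'ACGA'}, the k-mer at the mismatch
\end{verbatim}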
\setlength{\tabcolsep}{13pt}
\ctable[
caption = {\textcolor{\srycolor}{\bf Details of the Illumina datasets~\cite{schirmer2015insight} used for our experiments shown in Fig~\ref{fig:results-illumina}}},
label = {tab:illumina-dataset},
doinside = \small,
pos = !ht,
]
{cccccc}
{
\tnote[]{\scriptsize{Taqs: HiFI Kapa (HF), Q5 neb (Q5); Organisms: Anaerocellum thermophilum Z-1320 DSM 6725 (AT), Bacteroides thetaiotaomicron VPI-5482 (BT), Bacteroides vulgatus ATCC 8482 (BV), Caldicellulosiruptor saccharolyticus DSM 8903 (CS), Herpetosiphon aurantiacus ATCC 23779 (HA), Rhodopirellula baltica SH 1 (RBS), Leptothrix cholodnii SP-6 (LC)}}
}
{
\toprule
dataset & \multirow{2}[0]{*}{region} & \multirow{2}[0]{*}{sequencer} & \multirow{2}[0]{*}{Taq} & \multirow{2}[0]{*}{organism} & \textcolor{\srycolor}{forward \& reverse} \\
ID & & & & & primer \\
\midrule
Q19 & V4 & MiSeq2 & Q5 & AT & 515 \& 805RA \\
Q20 & V4 & MiSeq2 & Q5 & BT & 515 \& 805RA \\
Q21 & V4 & MiSeq2 & Q5 & BV & 515 \& 805RA \\
Q22 & V4 & MiSeq2 & Q5 & CS & 515 \& 805RA \\
Q23 & V4 & MiSeq2 & HF & AT & 515 \& 805RA \\
Q24 & V4 & MiSeq2 & HF & BT & 515 \& 805RA \\
Q25 & V4 & MiSeq2 & HF & BV & 515 \& 805RA \\
Q26 & V4 & MiSeq2 & HF & CS & 515 \& 805RA \\
Q27 & V3/V4 & MiSeq1 & Q5 & AT & 314f \& 806rcb \\
Q28 & V3/V4 & MiSeq1 & Q5 & BT & 314f \& 806rcb \\
Q29 & V3/V4 & MiSeq1 & Q5 & BV & 314f \& 806rcb \\
Q30 & V3/V4 & MiSeq1 & Q5 & CS & 314f \& 806rcb \\
Q31 & V3/V4 & MiSeq1 & HF & AT & 314f \& 806rcb \\
\bottomrule
}
\begin{figure}[!h]
\begin{adjustwidth}{-2in}{0in}
\centering
\includegraphics[width=0.95\linewidth]{Figure7}
\begin{spacing}{\mylinespacing}
\caption{{\bf Comparison of reads correction performance on real Illumina datasets (labeled Q19--Q26; \textcolor{\sryoonrevisioncolor}{see Table~\ref{tab:illumina-dataset} for more details}).} [parameters: $(k,mr,mm,g)=(21,1,1,1000)$ for Coral; $k=21$ for Trowel; $(k,O,C,s)=(21,3,0.3,5)$ for fermi; $k=5$ for \acl; \textcolor{\sryoonrevisioncolor}{no BLESS result shown since it did not work on these data] [Organisms: \textit{Anaerocellum thermophilum Z-1320 DSM 6725} (Q19 and Q23), \textit{Bacteroides thetaiotaomicron VPI-5482} (Q20 and Q24), \textit{Bacteroides vulgatus ATCC 8482} (Q21 and Q25), \textit{Caldicellulosiruptor saccharolyticus DSM 8903} (Q22 and Q26)] [Q19--Q22: MiSeq (Library: nested single index, Taq: Q5 neb, Primer: 515 \& 805RA)] [Q23--Q26: MiSeq (Library: NexteraXT, Taq: HiFI Kapa, Primer: 515 \& 805RA)]}}
\label{fig:results-illumina}
\end{spacing}
\end{adjustwidth}
\end{figure}
Fig~\ref{fig:results-illumina} shows the per-base \textcolor{\tsmrevisioncolor}{error rates, defined in (\ref{eq:error_rate_def}),}
\textcolor{\sryoonrevisioncolor}{\textcolor{\editagecolor}{for} the tools under comparison
} \textcolor{\editagecolor}{using} the \textcolor{\tsmrevisioncolor}{first} eight datasets (Q19--Q26) \textcolor{\sryoonrevisioncolor}{and their subsets created as described above (\textcolor{\tsmrevisioncolor}{thus,} a total of 80 datasets per tool).}
\textcolor{\proofcolor}{BLESS did not run \textcolor{\editagecolor}{successfully} on these datasets, \textcolor{\editagecolor}{and} hence its result\textcolor{\editagecolor}{s are} not shown.}
\textcolor{\sryoonrevisioncolor}{\textcolor{\tsmrevisioncolor}{First, w}e can confirm that \acl~is effective in reducing substitution error\textcolor{\tsmrevisioncolor}{s for \textcolor{\editagecolor}{data obtained using} the Illumina} \textcolor{\tsmrevisioncolor}{platform} in all tested cases \textcolor{\tsmrevisioncolor}{of targeted amplicon sequencing, \textcolor{\proofcolor}{with} relative error rate reductions of} 6.40--49.92\%\textcolor{\proofcolor}{,} compared to the `Raw' sequences. \textcolor{\tsmrevisioncolor}{Furthermore, among the tools \textcolor{\editagecolor}{included in the comparison},} \acl~\textcolor{\proofcolor}{produced} the best results for the largest number of datasets. For Q24 and Q25, fermi was most effective\textcolor{\editagecolor}{,} but was outperformed by \acl~in many other cases. Coral was able to denoise to some extent but was \textcolor{\bhlploscolor}{inferior} to \acl~and fermi. Trowel gave unsatisfactory results in this experiment.}
\textcolor{\tsmrevisioncolor}{Before presenting our next result\textcolor{\editagecolor}{s}, we note that while the error rate defined in (\ref{eq:error_rate_def}) is widely used for DNA sequence denoising research as a performance measure, it \textcolor{\editagecolor}{is occasionally} misleading and \textcolor{\editagecolor}{can}not \textcolor{\editagecolor}{be used to} fairly evaluate the performance of denoisers. \textcolor{\editagecolor}{This} is because only errors \textcolor{\editagecolor}{at} aligned bases are counted in the error rate calculation; hence, a poor denoiser may significantly reduce the number of aligned bases, potentially further corrupting the noisy sequence, \textcolor{\proofcolor}{but it} can have \textcolor{\proofcolor}{a} low error rate calculated as in (\ref{eq:error_rate_def}). In our experiments with the datasets Q27--Q31, we \textcolor{\editagecolor}{detected} \textcolor{\proofcolor}{a} large variance \textcolor{\editagecolor}{in} the number of aligned bases across different denoising tools\textcolor{\proofcolor}{;} thus, it was \textcolor{\proofcolor}{difficult} to make a fair comparison of the performance of different tools with (\ref{eq:error_rate_def}).
}
\textcolor{\proofcolor}{We note that \textcolor{\editagecolor}{in} the experiments presented in Fig~\ref{fig:results-pyrosequencing}(a) and Fig~\ref{fig:results-illumina}\textcolor{\editagecolor}{,} such \textcolor{\editagecolor}{a} large variance \textcolor{\editagecolor}{was not detected}.}
\textcolor{\bhlplosrevisioncolor}{To alleviate this issue, we employ the alternative definition of the per-base error rate of a tool in Eq.~(\ref{eq:error_hat}).}
\textcolor{\sryoonrevisioncolor}{
\textcolor{\tsmrevisioncolor}{Fig~\ref{fig:results-weighted-gain} shows the results \textcolor{\editagecolor}{obtained for} 100,000-read subsets of each of the Q19--Q31 datasets\textcolor{\tsmrevisioncolor}{, i.e., \textcolor{\editagecolor}{all} datasets,} for \acl~and the four alternative denoisers. \textcolor{\proofcolor}{Because} the datasets Q27--Q31 had two subsets of 100,000 reads, we used a total of 18 datasets to draw Fig~\ref{fig:results-weighted-gain}\textcolor{\proofcolor}{,} one each from Q19--Q26 \textcolor{\proofcolor}{and} two each from Q27--Q31. As mentioned \textcolor{\editagecolor}{previously}, BLESS could not \textcolor{\editagecolor}{run} successfully on Q19--Q26\textcolor{\proofcolor}{;} hence, there are only 10 points for BLESS in the plots.}
Fig~\ref{fig:results-weighted-gain}(a), (b), and (c) present the distribution of $g(\hat{e}_{\text{tool}})$, $g(a_{\text{tool}})$, and running time\textcolor{\editagecolor}{s} for each tool, respectively. For each distribution, the average value is marked with a solid circle. \textcolor{\tsmrevisioncolor}{\textcolor{\editagecolor}{As shown in} Fig~\ref{fig:results-weighted-gain}(b), we clearly see that Coral and Trowel show \textcolor{\proofcolor}{a} large variance \textcolor{\proofcolor}{in} the number of aligned bases. For example, Coral only aligns 30\% of bases compared to \textcolor{\proofcolor}{the} raw sequence after denoising for some dataset\textcolor{\proofcolor}{s}. With the effect of this variance \textcolor{\proofcolor}{in} aligned bases adjusted, Fig~\ref{fig:results-weighted-gain}(a) shows that \acl~\textcolor{\proofcolor}{produces} the highest average $g(\hat{e}_{\text{tool}})$, \ie, 19.79\%, among all \textcolor{\proofcolor}{the} compared tools. Furthermore,} the variability of $g(a_{\text{tool}})$ was the smallest for \acl\textcolor{\editagecolor}{, as shown} in Fig~\ref{fig:results-weighted-gain}(b), suggesting its robustness.
\textcolor{\tsmrevisioncolor}{
Finally, \textcolor{\proofcolor}{in} Fig~\ref{fig:results-weighted-gain}(c), we observe that the running time\textcolor{\editagecolor}{s were significantly shorter for} \acl~and Trowel than \textcolor{\editagecolor}{for} Coral and fermi. Overall, we can conclude that DUDE-Seq is the most robust tool\textcolor{\editagecolor}{,} \textcolor{\proofcolor}{with a} fast running time and the highest average accuracy after denoising.}
}
\begin{figure}[!h]
\begin{adjustwidth}{-2in}{0in}
\centering
\includegraphics[width=\linewidth]{Figure8}
\begin{spacing}{\mylinespacing}
\caption{{\bf Performance comparison.} (a) Relative gain of adjusted error rates over `Raw' data (Eq.~\ref{eq:rel_gain_error}). (b) Relative gain of aligned bases (Eq.~\ref{eq:rel_gain_align}). (c) Running time on real Illumina datasets (labeled Q19--Q31; \textcolor{\sryoonrevisioncolor}{see the caption for Fig~\ref{fig:results-illumina}).} [parameters: $\text{kmerlength}=21$ for BLESS; $(k,mr,mm,g)=(21,1,1,1000)$ for Coral; $k=21$ for Trowel; $(k,O,C,s)=(21,3,0.3,5)$ for fermi; $k=5$ for \acl] \textcolor{\sryoonrevisioncolor}{[BLESS did not work on Q19--Q26]}}
\label{fig:results-weighted-gain}
\end{spacing}
\end{adjustwidth}
\end{figure}
\textcolor{\tsmrevisioncolor}{In summary,} we observe from Fig~\ref{fig:results-illumina} and Fig~\ref{fig:results-weighted-gain} that \acl~robustly outperforms the competing schemes \textcolor{\editagecolor}{for} \textcolor{\sryoonrevisioncolor}{most of the datasets tested.} We specifically emphasize that \acl~shows a strong performance\textcolor{\editagecolor}{, even though} the DMC assumption does not hold for the sequencer.
We believe that th\textcolor{\proofcolor}{e} \textcolor{\editagecolor}{better performance} of \acl~\textcolor{\editagecolor}{relative to other} state-of-the-art algorithms (\textcolor{\sryoonrevisioncolor}{based on MSA or $k$-mer spectra}) on real Illumina datasets strongly demonstrates the competitiveness of \acl~as a general DNA sequence denoiser \textcolor{\sryoonrevisioncolor}{for targeted amplicon sequencing}.
\subsection*{Experiments on Simulated Data.}
\textcolor{\tsmcolor}{\textcolor{\editagecolor}{W}e \textcolor{\editagecolor}{performed} more detailed experiments using \textcolor{\tsmcolorfinal}{Illumina simulators} in order to further highlight the strong denoising performance of \acl, including the effect\textcolor{\editagecolor}{s} on downstream analyses.}
\textcolor{\tsmcolor}{Fig~\ref{fig:results-varying-error}(a) shows the results obtained using the Grinder simulator~\cite{angly2012grinder} and \textcolor{\editagecolor}{a} comparison
with Coral.
\textcolor{\proofcolor}{Trowel and Reptile require quality scores as input, which are provided by the GemSIM simulator, but not by the Grinder simulator; hence, we could not include Trowel and Reptile in Fig~\ref{fig:results-varying-error}(a).}
We generated nine synthetic datasets of forward reads that had error rates at the end of the sequence varying from 0.2\% to 1.0\%, as denoted \textcolor{\editagecolor}{on} the horizontal axis. For all cases, the error rate at the beginning of the sequence was 0.1\%. We again used the \emph{average} DMC model for the entire sequence for \acl. }
\textcolor{black}{Note that the error rates for the `Raw' data, \ie, the red bars, match the average of the error rates at the beginning and the end of the sequence.} \textcolor{\tsmcolor}{From the figure, \textcolor{\editagecolor}{consistent} with the real datasets \textcolor{\editagecolor}{analyzed} in Section \ref{sec:real_illumina}, we clearly see that \acl~significantly outperforms Coral \textcolor{\editagecolor}{for all} tested error rates.}
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\linewidth]{Figure9}
\begin{spacing}{\mylinespacing}
\caption{{\bf Reads correction performance on simulated datasets.} {[parameters: $k=5$ for \acl; $k=10$ for Trowel; $(k,mr,mm,g)=(21,1,1,1000)$ for Coral; optimal values set by tool \texttt{seq-analy} for Reptile; \textcolor{\sryoonrevisioncolor}{$(k,O,C,s)=(21,3,0.3,5)$ for fermi}]}: (a) Varying error rates using the Grinder simulator~\cite{angly2012grinder}. (b) Varying reads composition using the GemSIM simulator~\cite{mcelroy2012gemsim} (values on top of each bar represent the error rates).}
\label{fig:results-varying-error}
\end{spacing}
\end{figure}
\textcolor{\editagecolor}{T}o evaluate the performance of \acl~for paired-end reads, we generated datasets\textcolor{\editagecolor}{, shown} in \tablename~\ref{tab:illumina-data}\textcolor{\editagecolor}{,} with the GemSIM sequencing data simulator~\cite{mcelroy2012gemsim}. As shown in the table, we used 23 public reference sequences~\cite{quince2011removing} \textcolor{\editagecolor}{to generate} the dataset A5 and a single reference sequence for S5. \textcolor{black}{We used the error model v5 that has error rate\textcolor{\editagecolor}{s} of 0.28\% for forward reads and 0.34\% for reverse reads.}
In Fig~\ref{fig:results-varying-error}(b), in addition to \acl, Coral, \textcolor{\tsmrevisioncolor}{fermi}, {and Trowel}, we included the result\textcolor{\editagecolor}{s obtained using} Reptile~\cite{yang2010reptile}, another $k$-mer spectrum\textcolor{\editagecolor}{-}based method that outputs read-by-read denoising results.
We again observe from the figure that \acl~outperforms \textcolor{\tsmrevisioncolor}{the alternatives} \textcolor{\proofcolor}{by} significant margins.
\setlength{\tabcolsep}{10pt}
\ctable[
pos = !ht,
caption = {\textcolor{\bhlcolor}{\bf Details of the public data \cite{kwon2014casper} used for our experiments on simulated data \textcolor{\srycolor}{shown in \tablename~\ref{tab:paired-end-merging-result}}}},
label = tab:illumina-data,
doinside = \scriptsize
]{crrcccc}{
\tnote[$\sharp$]{\small{Error model v5 (forward rate 0.28\%, reverse 0.34\%)}}
}{
\toprule
dataset & \# total & \multicolumn{1}{c}{\multirow{2}[0]{*}{\# refs}} & fragment & read & overlap & simulator (error model) \\
ID & reads & \multicolumn{1}{c}{} & length & length & length & or sequencer used \\
\midrule
S5 & 1,000,000 & 1 & 160 & 100 & 40 & GemSIM (v5$^\sharp$) \\
A5 & 1,000,000 & 23 & 160--190 & 100 & 10--40 & GemSIM (v5$^\sharp$) \\
\bottomrule
\vspace{-0.8em}
}
In \tablename~\ref{tab:paired-end-merging-result}, we show that the error-corrected reads produced by \acl~can also improve the \textcolor{\editagecolor}{performance of} downstream pipeline\textcolor{\editagecolor}{s}, such as paired-end merging. We applied four different paired-end merging schemes, CASPER~\cite{kwon2014casper}, COPE~\cite{liu2012cope}, FLASH~\cite{magovc2011flash}, and PANDAseq~\cite{masella2012pandaseq}, for the two datasets A5 and S5 in \tablename~\ref{tab:illumina-data}.
The metrics are defined as usual. \textcolor{\proofcolor}{A} true positive (TP) is defined as a merge with correct mismatch resolution in the overlap region, and a false positive (FP) is defined as a merge with incorrect mismatch resolution in the overlap region. Furthermore, a false negative (FN) is a merge that escapes detection, and a true negative (TN) is defined as \textcolor{\editagecolor}{a} correct prediction \textcolor{\editagecolor}{for} reads that do not truly overlap. The accuracy and F1 score are computed based on the above metrics \cite{witten2005data}.
\textcolor{black}{For each dataset, we compared the results for \textcolor{\editagecolor}{four} cases: performing paired-end merging without any denoising, after correcting errors with Coral, after correcting errors with fermi, and after correcting errors with \acl.
Reptile and Trowel were not included in this experiment \textcolor{\proofcolor}{because} \textcolor{\editagecolor}{they were generally outperformed by} Coral and fermi\textcolor{\proofcolor}{,} as shown in Fig~\ref{fig:results-varying-error}(b).
The accuracy and F1 score results show that correcting errors with \acl~consistently yields better paired-end merging performance, not only compared to the \textcolor{\editagecolor}{case with} no denoising, but also \textcolor{\proofcolor}{compared} to the \textcolor{\editagecolor}{cases with} Coral and fermi \textcolor{\proofcolor}{error correct\textcolor{\editagecolor}{ion}}. This result highlights the potential \textcolor{\editagecolor}{application of} \acl~for boosting the performance \textcolor{\editagecolor}{of} downstream DNA sequence \textcolor{\editagecolor}{analyses}.}
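For concreteness, accuracy and the F1 score follow their usual definitions; e.g., for the CASPER row of dataset S5 in \tablename~\ref{tab:paired-end-merging-result} (TP$\,=997{,}303$, FP$\,=2{,}697$, FN$\,=0$, and TN$\,=0$, assuming every read pair in S5 truly overlaps), the sketch below reproduces the reported values.
\begin{verbatim}
# Standard accuracy and F1 computed from the merge counts defined above.
def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

print(round(accuracy(997303, 2697, 0, 0), 3))  # -> 0.997
print(round(f1(997303, 2697, 0), 3))           # -> 0.999
\end{verbatim}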
\setlength{\tabcolsep}{8pt}
\ctable[
pos= !ht,
caption = {\textcolor{\bhlcolor}{\bf Paired-end reads merging performance statistics {[parameters: $k=5$ for \acl; $(k,mr,mm,g)=(21,1,1,1000)$ for Coral; $(k,O,C,s)=(21,3,0.3,5)$ for fermi]}}},
label = {tab:paired-end-merging-result},
doinside = \scriptsize,
]{lcrrrrrr}{
}{
\toprule
\multicolumn{1}{c}{tool} & \multicolumn{1}{c}{dataset} & \multicolumn{1}{c}{\# merges} & \multicolumn{1}{c}{TP} & \multicolumn{1}{c}{FP} & \multicolumn{1}{c}{FN} & \multicolumn{1}{c}{accuracy} & \multicolumn{1}{c}{$F_1$} \\
\midrule
\multicolumn{1}{l}{CASPER} & \multirow{4}[0]{*}{\shortstack{S5}} & 1,000,000 & 997,303 & 2,697 & \textbf{0} & 0.997 & 0.999 \\
\multicolumn{1}{l}{COPE} & & 974,219 & 961,366 & 12,853 & 25,781 & 0.961 & 0.980 \\
\multicolumn{1}{l}{FLASH} & & 999,921 & 977,431 & 22,490 & 79 & 0.977 & 0.989 \\
\multicolumn{1}{l}{PANDAseq} & & 999,947 & 976,807 & 23,140 & 53 & 0.977 & 0.988 \\
\midrule
\multicolumn{1}{l}{CASPER} & \multirow{4}[0]{*}{\shortstack{S5\\w/ Coral}} & 1,000,000 & 997,510 & 2,490 & \textbf{0} & 0.998 & 0.999 \\
\multicolumn{1}{l}{COPE} & & 975,803 & 963,717 & 12,086 & 24,197 & 0.964 & 0.982 \\
\multicolumn{1}{l}{FLASH} & & 999,942 & 978,835 & 21,107 & 58 & 0.979 & 0.989 \\
\multicolumn{1}{l}{PANDAseq} & & 999,949 & 978,270 & 21,679 & 51 & 0.978 & 0.989 \\
\midrule
\multicolumn{1}{l}{CASPER} & \multirow{4}[0]{*}{\shortstack{S5\\w/ fermi}} & 1,000,000 & 997,356 & 2,644 & \textbf{0} & 0.997 & 0.999 \\
\multicolumn{1}{l}{COPE} & & 994,025 & 969,451 & 24,574 & \textbf{5,975} & 0.969 & 0.984 \\
\multicolumn{1}{l}{FLASH} & & 999,933 & 972,025 & 27,908 & 67 & 0.972 & 0.986 \\
\multicolumn{1}{l}{PANDAseq} & & 999,952 & 971,567 & 28,385 & 48 & 0.972 & 0.986 \\
\midrule
\multicolumn{1}{l}{CASPER} & \multirow{4}[0]{*}{\shortstack{S5\\w/ \acl}} & 1,000,000 & \textbf{999,320} & \textbf{680} & \textbf{0} & \textbf{0.999} & \textbf{1.000} \\
\multicolumn{1}{l}{COPE} & & 987,238 & \textbf{983,639} & \textbf{3,599} & 12,762 & \textbf{0.984} & \textbf{0.992} \\
\multicolumn{1}{l}{FLASH} & & 999,958 & \textbf{992,915} & \textbf{7,043} & \textbf{42} & \textbf{0.993} & \textbf{0.996} \\
\multicolumn{1}{l}{PANDAseq} & & 999,949 & \textbf{991,146} & \textbf{8,803} & \textbf{51} & \textbf{0.991} & \textbf{0.996} \\
\midrule
\multicolumn{1}{l}{CASPER} & \multirow{4}[0]{*}{\shortstack{A5}} & 999,973 & 997,202 & 2,771 & 27 & 0.997 & \textbf{0.999} \\
\multicolumn{1}{l}{COPE} & & 924,634 & 915,981 & 8,653 & 75,366 & 0.916 & 0.956 \\
\multicolumn{1}{l}{FLASH} & & 999,578 & 977,355 & 22,223 & 422 & 0.977 & 0.989 \\
\multicolumn{1}{l}{PANDAseq} & & 999,122 & 978,720 & 20,402 & 878 & 0.979 & 0.989 \\
\midrule
\multicolumn{1}{l}{CASPER} & \multirow{4}[0]{*}{\shortstack{A5\\w/ Coral}} & 999,974 & 995,899 & 4,075 & \textbf{26} & 0.996 & 0.998 \\
\multicolumn{1}{l}{COPE} & & 927,757 & 918,733 & 9,024 & 72,243 & 0.919 & 0.958 \\
\multicolumn{1}{l}{FLASH} & & 999,742 & 978,814 & 20,928 & \textbf{258} & 0.979 & 0.989 \\
\multicolumn{1}{l}{PANDAseq} & & 999,351 & 979,899 & 19,452 & 649 & 0.980 & 0.990 \\
\midrule
\multicolumn{1}{l}{CASPER} & \multirow{4}[0]{*}{\shortstack{A5\\w/ fermi}} & 999,969 & 997,288 & 2,681 & 31 & 0.997 & \textbf{0.999} \\
\multicolumn{1}{l}{COPE} & & 939,986 & 923,252 & 16,734 & 60,014 & 0.923 & 0.960 \\
\multicolumn{1}{l}{FLASH} & & 999,732 & 974,903 & 24,829 & 268 & 0.975 & 0.987 \\
\multicolumn{1}{l}{PANDAseq} & & 999,328 & 975,320 & 24,008 & 672 & 0.975 & 0.988 \\
\midrule
\multicolumn{1}{l}{CASPER} & \multirow{4}[0]{*}{\shortstack{A5\\w/ \acl}} & 999,971 & \textbf{998,078} & \textbf{1,893} & 29 & \textbf{0.998} & \textbf{0.999} \\
\multicolumn{1}{l}{COPE} & & 943,531 & \textbf{939,555} & \textbf{3,976} & \textbf{56,469} & \textbf{0.940} & \textbf{0.969} \\
\multicolumn{1}{l}{FLASH} & & 999,638 & \textbf{989,860} & \textbf{9,778} & 362 & \textbf{0.990} & \textbf{0.995} \\
\multicolumn{1}{l}{PANDAseq} & & 999,354 & \textbf{989,250} & \textbf{10,104} & \textbf{646} & \textbf{0.989} & \textbf{0.995} \\
\bottomrule
}
\section*{Discussion}
\textcolor{black}{
Our experimental results show that \acl~can robustly outperform $k$-mer\textcolor{\editagecolor}{-}based, MSA\textcolor{\proofcolor}{-}based, and statistical error model\textcolor{\editagecolor}{-}based schemes \textcolor{\editagecolor}{for} both real-world datasets, such as 454 pyrosequencing and Illumina \textcolor{\editagecolor}{data}, and simulated datasets, \textcolor{\tsmrevisioncolor}{particularly for targeted amplicon sequencing.}
This performance advantage in denoising further allowed us to obtain improved results in downstream analysis tasks, such as OTU binning and paired-end merging. Furthermore, the time demand of \acl-based OTU binning is order(s) of magnitude lower than that of the current state-of-the-art \textcolor{\proofcolor}{schemes}. We also demonstrated \textcolor{\srycolorfinal}{the robustness and flexibility of \acl~by showing} that a simple change \textcolor{\proofcolor}{in the} $\mathbf{\Pi}$ matrix is enough to apply the exact same \acl~to data \textcolor{\editagecolor}{obtained using} different sequencing platforms.} \textcolor{black}{In particular, we experimentally showed that even when the memoryless channel assumption does not hold, as in Illumina \textcolor{\editagecolor}{data}, \acl~still solidly outperforms state-of-the-art schemes.}
\textcolor{black}{The sliding window nature of \acl~resemble\textcolor{\editagecolor}{s} the popular $k$-mer\textcolor{\editagecolor}{-}based schemes in the literature. However, while all existing $k$-mer\textcolor{\editagecolor}{-}based schemes rely on heuristic threshold \textcolor{\editagecolor}{selection} for determining errors in the reads\textcolor{\proofcolor}{,} regardless of the error model of the sequencing platform, \acl~applies an analytic denoising rule that explicitly takes the error model $\mathbf{\Pi}$ into account. Therefore, even for \textcolor{\editagecolor}{identical} noisy reads $z^n$, DUDE-Seq may result in different denoised sequences\textcolor{\proofcolor}{,} depending on the $\mathbf{\Pi}$'s of different sequencing platforms, whereas the $k$-mer\textcolor{\editagecolor}{-}based scheme will always result in the exact same denoised sequence.
The performance gains reported in this paper compared to state-of-the-art baselines, including \textcolor{\proofcolor}{those for} $k$-mer\textcolor{\editagecolor}{-}based schemes, substantiate the competitiveness of our method for \textcolor{\tsmrevisioncolor}{targeted amplicon sequencing.}}
\textcolor{black}{
Another advantage of \acl~is its read-by-read error-correction capability. This feature is important for a number of bioinformatics tasks\textcolor{\proofcolor}{,} including \emph{de novo} sequencing, metagenomics, resequencing, targeted resequencing, and transcriptome sequencing, which typically require \textcolor{\editagecolor}{the} extract\textcolor{\editagecolor}{ion of} subtle information from small variants in each read. In addition to the types of tasks presented in this paper (\textcolor{\proofcolor}{\ie,} per-base error correction, OTU binning, and paired-end merging), we plan to apply \acl~to additional tasks\textcolor{\proofcolor}{,} as mentioned above.}
Additional avenues for further investigation include the procedure for estimating the noise mechanism represented by $\mathbf{\Pi}$, which is currently empirically determined by aligning each read to the reference sequence and is \textcolor{\proofcolor}{therefore} sensitive to read mapping and alignment. For more robust estimation, we may employ an expectation-maximization-based algorithm, as was recently proposed for estimating substitution emissions for the data \textcolor{\editagecolor}{obtained using} nanopore technology~\cite{jain2015improved}.
Considering uncertainties in $\mathbf{\Pi}$ may also be helpful; hence, it may be useful to investigate the relevance of the framework in \cite{gemelos2006algorithms}.
Additionally, it will likely be fruitful to utilize the information \textcolor{\editagecolor}{in} Phred quality scores \textcolor{\editagecolor}{to make} decisions about noisy bases and \textcolor{\editagecolor}{to} fine-tun\textcolor{\editagecolor}{e} the objective loss function in our approach.
\textcolor{black}{Using a lossy compressed version of the quality scores \textcolor{\editagecolor}{is} one possible direction \textcolor{\proofcolor}{for} boost\textcolor{\proofcolor}{ing} the inferential performance of some downstream applications\textcolor{\editagecolor}{,} as shown in~\cite{ochoa2015effect}.}
\textcolor{black}{Furthermore, particularly for the homopolymer error correction, there are several hyperparameters whose choices can be experimented with in the future \textcolor{\proofcolor}{to} potentially \textcolor{\proofcolor}{achieve} substantial performance boosts. \textcolor{\proofcolor}{Examples include} the choice of alphabet size (in lieu of the current value of 10), the choice of the loss function that may be proportional to the difference between the true and estimated value of $N$ (in lieu of the current Hamming loss), and the choice of quantization (in lieu of (\ref{eq:rounding})).
Moreover, we may apply the full generalized DUDE in \cite{dembo2005universal} for homopolymer error correction to \textcolor{\editagecolor}{determine} if better performance can be achieved at the cost of increased complexity.}
\textcolor{\bhlploscolor}{Applying \acl~to other types of sequencing technology with homopolymer errors (\eg, Ion Torrent) would also be possible \textcolor{\editageploscolor}{as} long as we can acquire \textcolor{\bhlplosrevisioncolor}{flow (\eg, ionogram)} density distributions to estimate $\mathbf{\Gamma}$.
Currently, there exists no public data repository that \textcolor{\editageploscolor}{includes} such information for Ion Torrent, and thus existing Ion Torrent denoisers often ignore homopolymer errors or rely on simplistic noise modeling and iterative updates that unrealistically limit the maximum length of homopolymer errors that can be handled, let alone computational efficiency~\cite{bragg2013shining}.
}
\textcolor{\tsmcolor}{Finally, we plan to test \acl~on several other sequencing platforms\textcolor{\proofcolor}{,} such as PacBio and Oxford Nanopore, which tend to result in longer and noisier sequences, to further substantiate the robustness and effectiveness of our algorithm. \textcolor{\tsmrevisioncolor}{Applying the recently developed deep neural network\textcolor{\proofcolor}{-}based Neural DUDE algorithm \cite{MooMinLeeYoo16} to DNA sequence denoising beyond targeted amplicon sequencing could be another fruitful direction.}}
\section*{Supporting Information}
\paragraph*{S1 Fig.}\label{S1_Fig}
\textcolor{\srycolor}{
{\bf \acl~web interface.}
This is a screenshot of the website accompanying the proposed \acl~method (\href{http://data.snu.ac.kr/pub/dude-seq}{http://data.snu.ac.kr/pub/dude-seq}). For users who prefer a graphical user interface, this website provides a web-based execution environment for \acl. Through this screen, a user can specify the parameters for each of the two error types (in the figure, \acl~(1) stands for the substitution error correction described in Algorithm~1 and \acl~(2) stands for the homopolymer error correction shown in Algorithm~2), and upload the input file of her choice. The \acl~process starts automatically by clicking the ``SUBMIT'' button. For advanced users who prefer batch processing, the source code of \acl~is also available at \href{http://github.com/datasnu/dude-seq}{http://github.com/datasnu/dude-seq}.
}
\paragraph*{S2 Fig.}\label{S2_Fig}
\textcolor{\srycolor}{
{\bf Website output: sequence complexity.}
The \acl~website provides analysis results from applying the DUST algorithm~\cite{morgulis2006fast} and block-entropy to the outputs from denoising by \acl. The DUST algorithm masks low-complexity regions that have a highly biased nucleotide distribution, based on counting 3-mer frequencies in 64-base windows. \textcolor{\bhlplosrevisioncolor}{The DUST score is computed based on how often different trinucleotides occur as follows:
\begin{equation}
\text{score} = \sum_{i=1}^{k} \frac{n_i(n_i-1)(w-2)s}{2(l-1)l} \nonumber
\end{equation}
where $k=4^3$ is the trinucleotide size, $w=64$ is the window size, $n_i$ is the number of occurrences of word $i$ in a window, $l$ is the number of possible words in a window, and $s$ is the scaling factor. The score is scaled from 0 to 100, and a high score implies a low-complexity metagenome.}
The block-entropy is calculated using Shannon's diversity index~\cite{shannon2001mathematical}. \textcolor{\bhlplosrevisioncolor}{The block-entropy evaluates the entropy of the trinucleotides in a sequence as follows:
\begin{equation}
\text{entropy} = -\sum_{i=1}^{k} \big(\frac{n_i}{l}\big) \log_k \big(\frac{n_i}{l}\big) \nonumber
\end{equation}
where $k=4^3$ is the trinucleotide size, $n_i$ is the number of occurrences of word $i$ in a window, and $l$ is the number of possible words in a window. The entropy is also scaled from 0 to 100, and a low entropy implies a low-complexity metagenome.}
}
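For clarity, the sketch below transcribes the two formulas above for a single window; the value of the scaling factor $s$ and the exact mapping of the entropy onto the 0--100 range are not specified above, so the choices here ($s=1$ and multiplication by 100) are assumptions.
\begin{verbatim}
# Direct transcription of the DUST-score and block-entropy formulas
# above for one window (w = 64, trinucleotide words).
from collections import Counter
from math import log

def window_stats(seq, w=64, s=1.0):
    win = seq[:w]
    l = len(win) - 2                   # number of possible 3-mer words
    counts = Counter(win[i:i + 3] for i in range(l))
    k = 4 ** 3
    score = sum(n * (n - 1) * (w - 2) * s / (2.0 * (l - 1) * l)
                for n in counts.values())
    entropy = -sum((n / l) * log(n / l, k) for n in counts.values())
    return score, 100.0 * entropy      # assumed 0..100 scaling

# repetitive window: higher score, lower entropy than a random window
print(window_stats("ACGT" * 16))
\end{verbatim}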
\paragraph*{S3 Fig.}\label{S3_Fig}
\textcolor{\srycolor}{
{\bf Website output: tag sequence probability.}
Another output from the \acl~website is the tag sequence probability of reads~\cite{schmieder2010tagcleaner}. This is to reveal the existence of artifacts at the ends, \ie, adapter or barcode sequences at the 5'- or 3'-end.
}
\paragraph*{S4 Fig.}\label{S4_Fig}
\textcolor{\srycolor}{
{\bf Website output: sequence duplication.} The accompanying website also carries out sequence duplication analysis based on the denoised outputs from \acl, in order to reveal artificial duplicates. As shown in the figure, five types of duplication statistics~\cite{schmieder2011quality} are provided: exact duplicates, 5' duplicates, 3' duplicates, exact duplicates with the reverse complement of another sequence, and 5' or 3' duplicates with the reverse complement of another sequence.
}
\section*{Acknowledgments}
This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science, ICT and Future Planning) [No. 2014M3A9E2064434], in part by a grant of the Korea Health Technology R\&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health \& Welfare, Republic of Korea [HI14C3405030014 and HI15C3224], and in part by the Brain Korea 21 Plus Project in 2017.
\nolinenumbers
\newcommand{\noop}[1]{}
\noindent Contraction theory is a methodology for assessing the global convergence of trajectories of a dynamical system to each other instead of convergence to a pre-specified attractor.
Given a vector norm
with its induced matrix norm, the \textit{logarithmic norm} of a linear operator $A$ is defined as the directional derivative of the matrix norm in the direction of $A$ and evaluated at the identity matrix, \cite{Lewis1949,
Hartman1961}. This definition can be extended to any nonlinear operator using the notion of \textit{logarithmic Lipschitz constant} \cite{Soderlind}. Logarithmic Lipschitz constants of a nonlinear vector field or the logarithmic norm of the Jacobian of the vector field can characterize the contraction property of a nonlinear system.
Studying contractivity of ordinary differential equations (ODEs) and reaction-diffusion partial differential equations using logarithmic norms and logarithmic Lipschitz constants is a well-established research topic (see e.g. \cite{dahlquist, Demidovich1961, Demidovich1967, Yoshizawa1966,
Yoshizawa1975, Soderlind2, Loh_Slo_98, arcak2011_Automatica, Simpson-Porco_Bullo_2014, russo_dibernardo_sontag13, contraction_survey_CDC14, Coogan_Arcak_2015, Margaliot_Sontag_Tuller, 2016_Aminzare_Sontag, Coogan_2019, CV_J_Bullo_2020, D_J_Bullo_2021}). However, there have been relatively few attempts to study the contractivity of non-deterministic systems and, in particular, of Itô stochastic differential equations (SDEs).
In \cite{slotine_stochastic,Han_Chung_2021} contraction theory is studied using stochastic Lyapunov functions and incremental stability.
In \cite{2013_Tabareau_Slotine,LG-TM-MM-2021,newman_2018} contraction theory is studied for random dynamical systems.
In \cite{2013_Pham_Slotine,2019_Bouvrieand_Slotine} contractivity is generalized to Riemannian metrics and Wasserstein norms, respectively.
In \cite{Dani_Chung_Hutchinson_2015}, stochastic contraction is studied for Poisson shot noise and finite-measure Lévy noise.
This work takes a step forward and extends contraction theory to SDEs using generalized forms of logarithmic norms and logarithmic Lipschitz constants.
Stochastic contraction theory can be used to study the stability of SDEs and to characterize the synchronization behavior of networks of nonlinear and noisy systems. Synchronization induced by common noise has been observed experimentally
and confirmed theoretically
in many networks of nonlinear dynamical systems without mutual coupling. Indeed, this kind of synchronization is equivalent to the stochastic contraction of SDEs that we study in Section \ref{sec:noise-induced:contrac} below. Therefore, extending contraction theory to SDEs can be beneficial for characterizing networks' synchronization.
In \cite{Ahmad_Raha_2010}, the authors introduced stochastic logarithmic norms and used them to study the stability properties of linear SDEs. Analog to the deterministic version, stochastic logarithmic norms are proper tools for characterizing the contractivity of linear SDEs, but they are not directly applicable to nonlinear SDEs. Our main contributions are to generalize the notion of logarithmic Lipschitz constants, which generalize the logarithmic norms to nonlinear operators, and use them to study the contractivity of nonlinear SDEs.
The remainder of the paper is organized as follows.
Section~\ref{sec:background} reviews logarithmic Lipschitz constants of deterministic nonlinear operators and contraction properties of ODEs.
Sections~\ref{sec:main-definitions} and ~\ref{sec:main-results} contain the definition of stochastic logarithmic Lipschitz constants and main results on characterizing the stochastic contractivity of SDEs.
Section~\ref{sec:noise-induced:contrac} discusses how noise can induce stochastic contractivity and synchronization and illustrates the results in a numerical example.
Section~\ref{sec:conclusion} is the conclusion and discussion.
Some of the proofs are given in an Appendix, Section~\ref{sec:appendix}.
\section{Background}\label{sec:background}
In this section we review the definitions of logarithmic norms and logarithmic Lipschitz constants and explain how they are helpful to study contraction properties of ODEs.
\begin{definition}\textit{(\textbf{Logarithmic norm})} Let $(\mathcal X,\|\cdot\|_{\mathcal X})$ be a finite dimensional normed vector space over $\mathbb{R}$ or $\mathbb{C}$. The space $\mathcal{L}(\mathcal X,\mathcal X)$ of linear transformations $A\colon \mathcal X \to \mathcal X$ is also a normed vector space with the induced operator norm $\|A\|_{\mathcal X\to \mathcal X}=\sup_{\|x\|_{\mathcal X}=1}\|Ax\|_{\mathcal X}.$ The \textit{logarithmic norm} (or matrix measure) of $A$ induced by $\|\cdot\|_{\mathcal X}$ is defined as the directional derivative of the matrix norm,
\begin{equation*}
\mu_{\mathcal X}[A]\;=\;\displaystyle\lim_{h\to 0^+}\frac{1}{h}\left(\|I+hA\|_{\mathcal X\to \mathcal X}-1\right),
\end{equation*}
where $I$ is the identity operator on $\mathcal X$.
\end{definition}
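For the common choices $\|\cdot\|_1$, $\|\cdot\|_2$, and $\|\cdot\|_\infty$, the logarithmic norm has well-known closed forms: $\mu_1[A]=\max_j\big(a_{jj}+\sum_{i\neq j}|a_{ij}|\big)$, $\mu_2[A]=\lambda_{\max}\big((A+A^\top)/2\big)$, and $\mu_\infty[A]=\max_i\big(a_{ii}+\sum_{j\neq i}|a_{ij}|\big)$. A short numerical sketch (in Python, for illustration only):
\begin{verbatim}
# Closed-form logarithmic norms of a matrix A for the 1-, 2-, and
# infinity-norms (standard formulas).
import numpy as np

def mu_1(A):
    A = np.asarray(A, float)
    return max(A[j, j] + sum(abs(A[i, j]) for i in range(A.shape[0])
               if i != j) for j in range(A.shape[1]))

def mu_2(A):
    A = np.asarray(A, float)
    return np.linalg.eigvalsh((A + A.T) / 2).max()

def mu_inf(A):
    A = np.asarray(A, float)
    return max(A[i, i] + sum(abs(A[i, j]) for j in range(A.shape[1])
               if j != i) for i in range(A.shape[0]))

A = np.array([[-2.0, 1.0], [0.0, -1.0]])
print(mu_1(A), mu_2(A), mu_inf(A))  # -> 0.0, approx -0.793, -1.0
\end{verbatim}
Note that the same matrix can have logarithmic norms of different signs under different norms, which is why the choice of norm matters in contraction arguments.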
\begin{definition}\label{def:LLC}\textit{(\cite{Soderlind}, \textbf{Logarithmic Lipschitz constants})} Assume $F\colon \mathcal Y\subseteq \mathcal X \to \mathcal X$ is an arbitrary function. Two generalizations of the logarithmic norms are
the strong least upper bound (s-lub) and least upper bound (lub) \textit{logarithmic Lipschitz constants}, which are respectively defined by
{\small{\begin{align*}\label{defM+}
&M_{\mathcal X}^{+}[F]=\displaystyle\sup_{u\neq v\in \mathcal Y}\lim_{h\to0^{+}}\frac{1}{h}\left(\frac{\|u-v+h(F(u)-F(v))\|_{\mathcal X}}{\|u-v\|_{\mathcal X}}-1\right),\\
&M_{\mathcal X}[F]=\displaystyle\lim_{h\to0^{+}}\sup_{u\neq v\in \mathcal Y}\frac{1}{h}\left(\frac{\|u-v+h(F(u)-F(v))\|_{\mathcal X}}{\|u-v\|_{\mathcal X}}-1\right).
\end{align*}}}
\end{definition}
\begin{proposition}\textit{(\cite{Soderlind, aminzare-sontag}, \textbf{Some properties of logarithmic Lipschitz constants})}
\noindent $M_{\mathcal X}^+$ and $M_{\mathcal X}$ are sub-linear, i.e., for $F, F^i\colon \mathcal Y\to \mathcal X,$
and $\alpha\geq0$
(similar properties hold for $M_{\mathcal X}$):
\begin{itemize}
\item $M_{\mathcal X}^+[F^1+F^2]\leq M_{\mathcal X}^+[F^1]+M_{\mathcal X}^+[F^2]$,
\item$M_{\mathcal X}^{+}[\alpha F]=\alpha M_{\mathcal X}^{+}[F],$ and
\item $M_{\mathcal X}^+[F]\leq M_{\mathcal X}[F].$
\end{itemize}
\end{proposition}
\begin{proposition}\label{prop:logarithmic:constants:norm}\textit{(\cite{Soderlind2}, \textbf{Relationship between logarithmic Lipschitz constants and logarithmic norms})}
For finite dimensional space $\mathcal X$, the (lub) logarithmic Lipschitz constant $M_{\mathcal X}$ generalizes the logarithmic norm $\mu_{\mathcal X}$, i.e., for any matrix $A$, $M_{\mathcal X} [A]=\mu_{\mathcal X}[A].$ Furthermore, by the definitions,
$M_{\mathcal X}[A] = M^+_{\mathcal X}[A] =\mu_{\mathcal X}[A].$
Let $\mathcal Y$ be a connected subset of $\mathcal X$. For a globally Lipschitz and continuously differentiable function $F\colon \mathcal Y\to \mathbb{R}^n$,
$
\sup_{x\in \mathcal Y}\mu_{\mathcal X}[J_F(x)]\leq M_{\mathcal X}[F],
$
where $J_F$ denotes the Jacobian of $F$. Moreover, if $\mathcal Y$ is a convex subset of $\mathcal X$, then
\[
\displaystyle\sup_{x\in \mathcal Y}\mu_{\mathcal X}[J_F(x)]= M_{\mathcal X}[F].
\]
\end{proposition}
\begin{definition} \textit{(\textbf{Contractive ODE})} Consider
\begin{align}\label{ODE}
\dot x =F(x,t),
\end{align}
where $x\in \mathcal Y\subset \mathbb{R}^n$ is an $n-$dim vector describing the state of the system, $t\in[0,\infty)$ is the time, and $F$ is an $n-$dim nonlinear vector field. Assume that $\mathcal Y$ is convex and $F$ is continuously differentiable on $x$ and continuous on $t$.
The system \eqref{ODE} is called contractive if there exists $c>0$ such that for any two solutions $X$ and $Y$ that remain in $\mathcal Y$, and $t\geq0$,
$\|X(t)-Y(t)\| \leq e^{-ct}\|X(0)-Y(0)\|.$
\end{definition}
\noindent In the following theorem and corollary, we find a value for the \textit{contraction rate} $c$ using logarithmic Lipschitz constant of the vector field $F$ and the logarithmic norm of the Jacobian of $F$ induced by a norm $\|\cdot\|_{\mathcal X}$ on $\mathbb{R}^n$.
\begin{theorem}\textit{(\cite[Proposition 3]{contraction_survey_CDC14}, \textbf{Contractivity of ODEs using logarithmic Lipschitz constants})}\label{thm:contractivity}
For a given norm $\|\cdot\|_{\mathcal X}$ and any two trajectories $X(t)$ and $Y(t)$ of \eqref{ODE} and any $t\geq0$ the following inequality holds
\[\|X(t)-Y(t)\|_{\mathcal X} \leq e^{M_{\mathcal X}^+[F] t}\|X(0)-Y(0)\|_{\mathcal X}.\]
In particular, if $M_{\mathcal X}^+[F]<0$, then \eqref{ODE} is contractive.
\end{theorem}
\begin{corollary} \textit{(\textbf{Contractivity of ODEs using logarithmic norms})}
Under the conditions of Theorem \ref{thm:contractivity}, if $\mathcal Y$ is connected and
$\sup_{(x,t)}\mu_{\mathcal X}[J_F(x,t)] \leq -c$, for some constant $c>0$ and some norm $\|\cdot\|_{\mathcal X}$, then \eqref{ODE} is contractive.
\end{corollary}
\section{Stochastic Logarithmic Lipschitz Constants}\label{sec:main-definitions}
In this section we generalize the definition of logarithmic Lipschitz constants given in Definition~\ref{def:LLC}.
\begin{notation}\label{notation}
We will use the following notations for the rest of the paper.
\begin{itemize}
\item $(\mathcal X,\|\cdot\|_{\mathcal X})$ is a normed space over $\mathbb{R}^n$ and $ \mathcal Y\subseteq \mathcal X$.
\item $F\colon \mathcal Y\to \mathbb{R}^n$ is a vector field with components $F_i$.
\item $G$ is an $n\times d$ matrix of continuously differentiable column vectors $G_{j}\colon \mathcal Y \to \mathbb{R}^n$, for $j=1,\ldots,d$.
\item $W(t)$ is a $d-$dim Wiener process with components $W_j$.
\item $\Delta W_j :=W_j(t+h)-W_j(t)= \int_t^{t+h} dW_j(s)$ and $\Delta W = (\Delta W_1, \ldots, \Delta W_d)^\top$
\item $\Delta W^2_{i,j} := \int_t^{t+h} dW_i(s) \int_t^s dW_j(s')$ and $\Delta W^2$ is a $d\times d$ matrix with components $\Delta W^2_{i,j}$.
\item For $k=1,\ldots,d$, $L_k := \sum_{l=1}^n G_{lk}\frac{\partial}{\partial x_l}$.
\item $\mathcal{M}_{F,G }^{(h,W)}$ is an $n-$dim function on $\mathcal Y$ with components:
\begin{align}\label{mcFG}
hF_i+ \sum_{j=1}^dG_{ij} \Delta W_j+ \sum_{j,k=1}^d L_k G_{ij} \Delta W^2_{j,k}.
\end{align}
\end{itemize}
\end{notation}
Let $\Delta_{jk}= \Delta W_j\Delta W_k+\Delta W^2_{j,k} -\Delta W^2_{k,j}$. Using \cite[Equation~(15.5.26)]{CG:09},
$\Delta W^2_{j,k} = \frac{1}{2} (\Delta_{jk}-h\delta_{jk}),$ where $\delta_{jk}$ is the Kronecker delta. A straightforward calculation shows that
\[\mathcal{M}_{F,G }^{(h,W)} = h\Big(F - \frac{1}{2}\sum_{j=1}^dJ_{G_j} G_j\Big)+\sum_{j=1}^d G_j\Delta W_j + \mathcal{R}, \]
where $\mathcal{R}$ is the vector with components $\mathcal{R}_i = \frac{1}{2}\sum_{j,k=1}^d L_k G_{ij} \Delta_{jk}$.
\begin{definition}\textit{(\textbf{Stochastic logarithmic Lipschitz constants})}
The s-lub and lub \textit{stochastic logarithmic Lipschitz constants} of $F$ and $G$ in the $l$-th mean and induced by $\|\cdot\|_{\mathcal X}$ are respectively:
{\small{\begin{align*}
&\mathcal M^{+}_{\mathcal X,l}[F,G]=\displaystyle\sup_{t}\sup_{u\neq v\in \mathcal Y}\lim_{h\to0^{+}}\frac{1}{h}\times\\
&\Big(\mathbb{E}\;\frac{\|u-v+\mathcal{M}_{F,G }^{(h,W)}(u) -\mathcal{M}_{F,G }^{(h,W)} (v)\|^l_{\mathcal X}}{\|u-v\|^l_{\mathcal X}}-1\Big),\\
&\mathcal M_{\mathcal X,l}[F,G]=\displaystyle\sup_{t}\lim_{h\to0^{+}}\sup_{u\neq v\in \mathcal Y}\frac{1}{h}\times\\
&\Big(\mathbb{E}\;\frac{\|u-v+\mathcal{M}_{F,G }^{(h,W)} (u) -\mathcal{M}_{F,G }^{(h,W)} (v)\|^l_{\mathcal X}}{\|u-v\|^l_{\mathcal X}}-1\Big),
\end{align*}}}
where $\mathbb{E}$ denotes the expected value.
\end{definition}
In \cite{Ahmad_Raha_2010} the authors introduced the notion of stochastic logarithmic norm which is a special case of $\mathcal M_{\mathcal X, l}[F,G]$ with linear $F$ and $G_j$, i.e., $F(u) = Au$ and $G_j(u) = B_ju$ for square matrices $A, B_j$s.
\begin{proposition} \textit{(\textbf{Some properties of stochastic logarithmic Lipschitz constants})} \label{prop:properties:stochastic:constants}
Let $\alpha>0$ be a constant, $F, F^1$, and $F^2$ be vector functions as described in Notation \ref{notation}, and $G, G^1,$ and $G^2$ be matrices as described in Notation \ref{notation}. The following statements hold.
\noindent 1. For a zero matrix $G$, $\mathcal M_{\mathcal X, l}^{+}[F,0] = l M_{\mathcal X}^{+}[F]$.
\noindent 2. $\mathcal M_{\mathcal X, l}^{+}[F,G] \leq \mathcal M_{\mathcal X, l}[F,G]. $
\noindent 3. Unlike the deterministic ones, the stochastic logarithmic Lipschitz constants are not sub-linear. However, they satisfy:
\begin{itemize}
\item $\mathcal M_{\mathcal X, l}^{+}[\alpha F,\sqrt\alpha G] =\alpha \mathcal M_{\mathcal X, l}^{+}[ F, G], \text{and}$
\item $\mathcal M_{\mathcal X, l}^{+}[ F^1+F^2, G^1+G^2] \\\leq \mathcal M_{\mathcal X, l}^{+}\Big[ F^1, \frac{G^1+G^2}{\sqrt 2}\Big] + \mathcal M_{\mathcal X, l}^{+}\Big[ F^2, \frac{G^1+G^2}{\sqrt 2}\Big].$
\end{itemize}
Similar properties hold for $\mathcal M_{\mathcal X, l}$.
\end{proposition}
\noindent A proof is given in Appendix, Section \ref{sec:appendix}.
\section{Contraction Properties of SDEs}\label{sec:main-results}
In this section we first define stochastic contractivity and then provide conditions that guarantee contractivity in SDEs.
Consider
\begin{align}\label{SDE:eq}
dX(t) = F(X(t))dt + G(X(t)) dW(t),
\end{align}
where all the terms are as defined in Notation \ref{notation}. Furthermore, we assume that $F$ and $G$ satisfy the Lipschitz and growth conditions: $\exists K_1,K_2>0$ such that $\forall x,y$:
$\|F(x)-F(y)\|+\|G(x)-G(y)\|\leq K_1\|x-y\|,$ and
$\|F(x)\|^2+\|G(x)\|^2\leq K_2(1+\|x\|^2)$,
\noindent where $\|\cdot\|$ denotes the Euclidean norm. Note that for the matrix $G$, $\|G\|^2= \sum_{i,j}|G_{ij}|^2$. Under these conditions, for any given initial condition $X(0)$, the SDE has (with probability one) a unique non-anticipating solution, i.e., at each time $t$ the solution is independent of the future increments of the Wiener process; see \cite[Chapter 4]{CG:09}.
\begin{definition}[\textbf{Stochastic contraction}]\label{def:stochastic:contraction}
An SDE described by \eqref{SDE:eq}
is $l-$th moment contractive if there exists a constant $c>0$ such that for any solutions $X(t)$ and $Y(t)$ with initial conditions $X(0)$ and $Y(0)$, and $\forall t\geq0$,
\begin{equation}\label{eq:moment:stability}
\mathbb{E} \|X(t) - Y(t)\|_{\mathcal X}^l \;\leq\; \mathbb{E} \|X(0) - Y(0)\|_{\mathcal X}^l \;\; e^{-c t}.
\end{equation}
\end{definition}
\begin{theorem}\label{thm:stochastic:contractivity}\textbf{\textit{(Stochastic contraction based on stochastic logarithmic Lipschitz constants)}}
For any two solutions $X(t)$ and $Y(t)$ of \eqref{SDE:eq} and $\forall t\geq0$,
\begin{equation}\label{eq:contractivity:1}
\mathbb{E} \|X(t) - Y(t)\|_{\mathcal X}^l \;\leq\; \mathbb{E} \|X(0) - Y(0)\|_{\mathcal X}^l \;\; e^{\mathcal M^{+}_{\mathcal X,l}[F,G] t}.
\end{equation}
Moreover, if $\mathcal M^{+}_{\mathcal X,l}[F,G]\leq -c$ for some $c>0$, \eqref{SDE:eq} becomes $l-$th moment stochastically contractive.
\end{theorem}
\begin{proof}
If $\mathbb{E} \|X(t) - Y(t)\|_{\mathcal X}^l=0$, then \eqref{eq:contractivity:1} holds. Therefore, we assume that $\mathbb{E} \|X(t) - Y(t)\|_{\mathcal X}^l\neq0$.
Using Milstein algorithm \cite[Chapter 15]{CG:09} any solution $X(t)$ can be approximated by
\begin{equation*}
X(t+h) = X(t) + \mathcal{M}_{F,G }^{(h,W)} (X(t)).
\end{equation*}
\noindent By subtracting Milstein approximations of $X$ and $Y$, we get
\begin{align}\label{equ:difference}
\begin{split}
X(t+h)-Y(t&+h) = X(t) -Y(t)\\
&+\mathcal{M}_{F,G }^{(h,W)} (X(t))-\mathcal{M}_{F,G }^{(h,W)} (Y(t)).
\end{split}
\end{align}
Taking $\|\cdot\|_{\mathcal X}$, raising to the power $l$, taking the expected value $\mathbb{E}$, subtracting $\mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l$ from both sides, dividing by $h$, and taking the limit as $h\to0^+$, we get (to fit the equations, we drop some of the $(t)$ arguments):
{\small{\begin{align*}
&\lim_{h\to0^+}\frac{1}{h}\left\{\mathbb{E}\|X(t+h) -Y(t+h)\|_{\mathcal X}^l - \mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l\right\}\\
&=\lim_{h\to0^+}\frac{1}{h}\Big\{\mathbb{E}\|X(t)-Y(t)+\mathcal{M}_{F,G }^{(h,W)}(X) -\mathcal{M}_{F,G }^{(h,W)} (Y)\|^l_{\mathcal X}\Big. \\
&\qquad\qquad\quad\Big.- \mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l\Big\}\\
&=\lim_{h\to0^+}\frac{1}{h}
\Big\{\frac{\mathbb{E}\|X-Y+\mathcal{M}_{F,G }^{(h,W)}(X) -\mathcal{M}_{F,G }^{(h,W)} (Y)\|^l_{\mathcal X}}{ \mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l}-1\Big\} \\
&\qquad\qquad\qquad\times\mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l\\
&\leq \mathcal M^{+}_{\mathcal X,l}[F,G]\; \mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l,
\end{align*}
}}
where the last inequality holds by the definition of $\mathcal M^{+}_{\mathcal X,l}[F,G]$ and the fact that $X(t)-Y(t)$ is non-anticipating, and hence, independent of the Wiener increment $dW$.
The first term of the above relationships is the upper Dini derivative of $\mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l$. Hence,
\begin{align*}
D^+\mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l \leq \mathcal M^{+}_{\mathcal X,l}[F,G]\; \mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l.
\end{align*}
Applying the comparison lemma \cite[Lemma 3.4]{HKK:02}, $\forall t\geq0$:
\begin{align*}
\mathbb{E}\|X(t) -Y(t)\|_{\mathcal X}^l \leq \mathbb{E}\|X(0) -Y(0)\|_{\mathcal X}^le^{\mathcal M^{+}_{\mathcal X,l}[F,G] t},
\end{align*}
which is the desired result.
\end{proof}
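As an illustration of the approximation used in the proof, for a scalar state and a single Wiener process ($n=d=1$) the increment $\mathcal{M}_{F,G}^{(h,W)}$ reduces to the classical Milstein update, since $\Delta W^2_{1,1}=\frac{1}{2}(\Delta W^2-h)$ and $L_1 G = G\,G'$. A minimal sketch of one step (Python, for illustration):
\begin{verbatim}
# One Milstein step for a scalar SDE dX = F(X) dt + G(X) dW, with dG
# the derivative of G; parameters below are arbitrary illustrations.
import numpy as np

def milstein_step(x, F, G, dG, h, rng):
    dW = rng.normal(0.0, np.sqrt(h))
    return x + F(x) * h + G(x) * dW + 0.5 * G(x) * dG(x) * (dW**2 - h)

rng = np.random.default_rng(0)
x = 1.0
for _ in range(1000):  # dX = -X dt + 0.5 X dW on [0, 1]
    x = milstein_step(x, lambda x: -x, lambda x: 0.5 * x,
                      lambda x: 0.5, 1e-3, rng)
print(x)
\end{verbatim}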
In this work, inspired by \textit{common} noise-induced synchronization, we assume that all the trajectories realize the \textit{same} Wiener process $W$ and therefore \eqref{equ:difference} is a valid equation.
Studying stochastic contractivity for the trajectories driven by distinct Wiener processes is a topic of future investigations.
The next proposition gives an upper bound for $\mathcal M^{+}_{\mathcal X,l}[F,G]$ based on the deterministic logarithmic norms of $J_F$ and $J_{G_j}$, $j=1,\ldots,d$. The upper bound makes the result of Theorem \ref{thm:stochastic:contractivity} more applicable, since computing deterministic logarithmic norms induced by some norms, such as $L^p$ norms and weighted $L^p$ norms for $p=1,2,\infty$, is straightforward.
\begin{proposition}\textbf{\textit{\small(Relationship between deterministic and stochastic logarithmic Lipschitz constants)}}\label{prop:relation:stochM:M}
Let $F$, $G$, and $W$ be as described in Notation \ref{notation}. Then
\begin{align}\label{eq1:relation:stochM:M}
\begin{split}
\mathcal M^{+}_{\mathcal X,l}[F,G] &\leq lM^+_{\mathcal X}\Big[F-\frac{1}{2}\sum_jJ_{G_j} G_j\Big]\\
&+\frac{l}{\sqrt{2\pi}}\sum_j (M^+_{\mathcal X}[G_j] +M^+_{\mathcal X}[-G_j]).
\end{split}
\end{align}
Furthermore, if $F$ and $G_j$s are continuously differentiable and $\mathcal Y$ is convex, then the following inequality holds.
\begin{align}\label{eq2:relation:stochM:M}
\begin{split}
\mathcal M^{+}_{\mathcal X,l}[&F,G] \leq l\sup_x\mu_{\mathcal X}\Big[J_{F-\frac{1}{2}\sum_jJ_{G_j}G_j}(x)\Big]\\
&+ \frac{l}{\sqrt{2\pi}}\sum_j(\sup_x\mu_{\mathcal X}[J_{G_j}(x)] +\sup_x\mu_{\mathcal X}[-J_{G_j}(x)]).
\end{split}
\end{align}
\end{proposition}
\noindent See Appendix \ref{sec:appendix} for a proof.
\begin{corollary}\textbf{\textit{(Stochastic contraction based on deterministic logarithmic norms)}}\label{cor:contractivity:1}
Under the conditions of Proposition \ref{prop:relation:stochM:M}, if there exists $c>0$ such that the right hand side of \eqref{eq2:relation:stochM:M} is upper bounded by $-c$,
then \eqref{SDE:eq} is $l-$th moment stochastically contractive.
\end{corollary}
\begin{proof}
Since the right hand side of \eqref{eq2:relation:stochM:M} is bounded by $-c$, so is its left hand side, i.e.,
$\mathcal M^{+}_{\mathcal X,l}[F,G]\leq -c$.
Therefore, by Theorem \ref{thm:stochastic:contractivity}, system \eqref{SDE:eq} is stochastically contractive.
\end{proof}
\section{Noise-induced contractivity and synchronization}\label{sec:noise-induced:contrac}
In this section we show how a multiplicative noise can be beneficial for a system and make it contractive.
Suppose $\dot x = F(x)$
is not contractive, that is, for any given norm $\|\cdot\|_{\mathcal X}$, $\sup_{x}\mu_{\mathcal X}[J_F(x)]\geq 0$. Corollary \ref{cor:contractivity:1} suggests that for appropriate choices of the noise term $G$ and norm $\|\cdot\|_{\mathcal X}$, the underlying stochastic system $dx = F(x)dt +G(x)dW$ may become stochastically contractive.
The reason is that there might exist $G$ and $\|\cdot\|_{\mathcal X}$ such that for any $x$,
$\mu_{\mathcal X}[J_{F-\frac{1}{2}\sum_jJ_{G_j}G_j} (x)]$ becomes a small enough negative number.
Note that by sub-additivity of the logarithmic norms,
$0=\mu_{\mathcal X}[J_{G_j}-J_{G_j}]\leq \mu_{\mathcal X}[J_{G_j}]+\mu_{\mathcal X}[-J_{G_j}].$
Hence, the last sum on the right hand side of \eqref{eq2:relation:stochM:M} is always non-negative. Therefore, the first term must be small enough such that the sum becomes negative.
For example, for a linear diffusion term, i.e., $G_j(x) = \sigma_j x$, $\sigma_j>0$:
\[\mu_{\mathcal X}[J_{G_j}]+\mu_{\mathcal X}[-J_{G_j}] = \sigma_j (\mu_{\mathcal X}[I]+\mu_{\mathcal X}[-I]) =0,\]
and by sub-additivity of logarithmic norms:
\begin{align}\label{sub-additivity}
\begin{split}
\mu_{\mathcal X}\left[J_{F-\frac{1}{2}\sum_jJ_{G_j}{G_j}}\right] &= \mu_{\mathcal X}\Big[J_F-\frac{1}{2}\sum_j\sigma_j^2I\Big] \\
&\leq \mu_{\mathcal X}\left[J_F\right]-\frac{1}{2}\sum_j\sigma_j^2.
\end{split}
\end{align}
For some large $\sigma_j$s, $\mu_{\mathcal X}[J_F]-\frac{1}{2}\sum_j\sigma_j^2$ becomes negative, and hence, the SDE becomes stochastically contractive.
Intuitively, since we assumed all the trajectories sense the same Wiener process, the noise plays the role of a common external force to all the trajectories. Therefore, for a strong enough noise, the trajectories converge to each other.
See Example \ref{example:theorem1} below.
Equation~\eqref{sub-additivity} guarantees that linear multiplicative stochastic terms do not destroy the contraction properties of contraction systems, no matter how large the perturbations are.
Note that Corollary \ref{cor:contractivity:1} argues that multiplicative noise may aid contractivity. For an additive noise, i.e., for a state-independent noise term $G_j(x)\equiv a$, $\mu_{\mathcal X}[J_{G_j}]=\mu_{\mathcal X}[-J_{G_j}]=0$ and $\mu_{\mathcal X}[J_{F-\frac{1}{2}J_{G_j}{G_j}}] = \mu_{\mathcal X}[J_F]$. Therefore, $\mathcal M^{+}_{\mathcal X,l}[F,G] \leq \mu_{\mathcal X}[J_F]$ and $\mu_{\mathcal X}[J_F]\geq0$ do not give any information on the sign of $\mathcal M^{+}_{\mathcal X,l}$, and hence, on the contractivity of the SDE.
\begin{example}\label{example:theorem1}
We consider the Van der Pol oscillator subject to a multiplicative noise
\begin{align}\label{eq:vanderpol}
\begin{split}
dx &= \left(x - \frac{1}{3} x^3 -y \right)dt +\sigma g_1(x) dW,\\
dy &= x dt +\sigma g_2(y) dW,
\end{split}
\end{align}
where we assume that the Wiener process is one dimensional, $d=1$.
The state of the oscillator is denoted by $X=(x,y)^\top$ which its change of rate is described by $F=(x - \frac{1}{3} x^3 -y , x)^\top$. The noise of the system is described by the column vector $G(x,y) = (\sigma g_1(x), \sigma g_2(y))^\top$.
A simple calculation shows that the Jacobian of $F$ evaluated at the origin is not Hurwitz, i.e., its eigenvalues do not have negative real parts. Therefore, the deterministic Van der Pol is not contractive with respect to any norm.
Figure~\ref{fig:vandepol:no-noise} depicts two trajectories $(x_1,y_1)^\top$ and $(x_2,y_2)^\top$ of \eqref{eq:vanderpol} in the absence of noise which do not converge.
In Figure~\ref{fig:vandepol:additive-noise}, an additive noise $g_1(x)=g_2(y)=1$ with intensity $\sigma=0.35$ is added. We observe that the trajectories still do not converge. As discussed above, our result in Corollary \ref{cor:contractivity:1} does not guarantee noise-induced contractivity in the case of additive noise.
In Figure~\ref{fig:vandepol:with-noise}, a state-dependent multiplicative noise
$(g_1(x), g_2(y))=(1+4x, 1+4y)$ with noise intensity $\sigma=0.35$ is added and two trajectories with initial conditions $(1,-1)^\top$ and $(2,-2)^\top$ are plotted. We observe that the two trajectories converge to each other. A simple calculation shows that $\mu_2[J_G]+ \mu_2[-J_G] = 4\sigma -4\sigma=0$, where $\mu_2[A] = \frac{1}{2} \max\lambda(A+A^\top)$ is the logarithmic norm induced by $L^2$ norm and $\max\lambda$ denotes the largest eigenvalue. Also,
\begin{align*}
\sup_{(x,y)}\mu_2[J_{F-\frac{1}{2}J_GG}(x,y)] &= \sup_{(x,y)}\max\{1-x^2-8\sigma^2, -8\sigma^2\}\\
&=1-8\sigma^2.
\end{align*}
By Corollary \ref{cor:contractivity:1},
$\mathcal M^{+}_{\mathcal X,2}[F,G]\leq 2 (1-8\sigma^2)$. Therefore, for $\sigma>\frac{1}{\sqrt 8} \approx 0.35$, $\mathcal M^{+}_{\mathcal X,l}[F,G]<0$ and the system becomes $l-$th moment stochastically contractive for any $l\geq1$.
Figure \ref{fig:expected-value} shows the mean square difference of the two solutions plotted in Figure~\ref{fig:vandepol:with-noise} over 5000 simulations, which converges to zero, as expected.
\end{example}
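For readers who wish to experiment with this example, the following Euler--Maruyama simulation is a minimal sketch (illustrative Python, not part of the formal development; the step size and seed are arbitrary choices) of \eqref{eq:vanderpol} with the multiplicative noise above. Both trajectories are driven by the same Wiener increments, matching the common-noise assumption.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dt, T, sigma = 1e-3, 50.0, 0.36        # sigma above 1/sqrt(8) ~ 0.354
n_steps = int(T / dt)

X = np.array([[1.0, -1.0], [2.0, -2.0]])  # the two initial conditions
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()   # common noise increment
    for i in range(2):
        x, y = X[i]
        X[i, 0] = x + (x - x**3 / 3 - y) * dt + sigma * (1 + 4 * x) * dW
        X[i, 1] = y + x * dt + sigma * (1 + 4 * y) * dW

print(np.linalg.norm(X[0] - X[1]))  # distance shrinks for large sigma
\end{verbatim}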
\begin{figure}
\centering
\subfigure[No contraction in deterministic system]{
\includegraphics[width=0.36\textwidth]{vanderpol_contraction_noise_0}
\label{fig:vandepol:no-noise}
}
\subfigure[No contraction with additive noise]{
\includegraphics[width=0.36\textwidth]{vanderpol_no_contraction_additive_noise}
\label{fig:vandepol:additive-noise}
}
\subfigure[Multiplicative noise-induced contraction]{
\includegraphics[width=0.36\textwidth]{vanderpol_contraction_noise035}
\label{fig:vandepol:with-noise}
}
\subfigure[The mean square of difference of two solutions]{
\includegraphics[width=0.39\textwidth]{expected-value-vanderpol}
\label{fig:expected-value}
}
\caption{Contraction behavior of van der Pol oscillator given in Example \ref{example:theorem1}.
(a) Two trajectories of the deterministic oscillator are plotted to show the system is not contractive.
(b) An additive noise ($g_1(x)=g_2(y)=1$) with intensity $\sigma=0.35$ is added but does not make the system contractive.
(c) A multiplicative noise ($g_1(x)=1+4x, g_2(y)=1+4y$) with intensity $\sigma=0.35$ is added which makes the system contractive.
(d) The mean square difference of two solutions over 5000 simulations is shown.
}
\end{figure}
Now consider a network of $N$ isolated nonlinear systems driven by a common multiplicative noise, i.e., the only interaction between the systems is through the common noise. The dynamics of such a network are described by the following SDEs. For $i=1,\ldots, N,$
\begin{align}\label{eq:synchronization}
d X_i = F(X_i) dt +\sigma G(X_i) dW,
\end{align}
with initial conditions $X_i(0)=X_{i0}$.
Then \eqref{eq:synchronization} stochastically synchronizes if for any $i,j$, $\mathbb{E}\|X_i(t)-X_j(t)\|_{\mathcal X}^l\to0$ as $t\to\infty$. Since all the $X_i$ are solutions of the same SDE driven by the same Wiener process, synchronization follows whenever $d X = F(X) dt +\sigma G(X) dW$ is stochastically contractive.
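To make this concrete, note that each pair $(X_i, X_j)$ consists of two solutions of the same SDE driven by the same Wiener process. Hence, if $c:=\mathcal M^{+}_{\mathcal X,l}[F,\sigma G]<0$, the contraction estimate (a sketch; the exact constants depend on the definition of stochastic contractivity used above) reads
\[
\mathbb{E}\|X_i(t)-X_j(t)\|^l_{\mathcal X} \;\leq\; e^{c\,t}\,\mathbb{E}\|X_{i0}-X_{j0}\|^l_{\mathcal X},
\]
so all pairwise $l$-th moments decay at a common exponential rate.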
\section{Conclusions}\label{sec:conclusion}
Deterministic logarithmic Lipschitz constants generalize classical logarithmic norms to nonlinear operators. These constants are proper tools to characterize the contraction properties of ODEs. In this work, we introduced the notions of \textit{stochastic} logarithmic Lipschitz constants and used them to extend contraction theory to a class of SDEs. Unlike some logarithmic norms, stochastic (or deterministic) logarithmic Lipschitz constants are not straightforward to compute.
Therefore, to make our theory more applicable, we found some relationships between stochastic logarithmic Lipschitz constants and logarithmic norms.
We discussed how multiplicative noise could aid contractivity and foster stochastic synchronization in nonlinear dynamical systems.
In this paper, we assumed that a common Wiener process drives all the trajectories. Studying contractivity (respectively, network synchronization) in the case where distinct and independent Wiener processes drive the trajectories (respectively, the nonlinear dynamical systems) is a topic of future investigation.
In this case, we need to define an ``approximate'' contraction in the sense that the trajectories exponentially enter a tube and stay there but do not necessarily converge. See \cite{slotine_stochastic}
(respectively, \cite{2021_Aminzare_Srivastava_Cybernetic}) for this type of contractivity (respectively, synchronization), both of which are based on stochastic Lyapunov functions.
Proposition \ref{prop:relation:stochM:M} provides a mechanism to characterize stochastic contractivity in a class of nonlinear SDEs and understand stochastic synchronization in networks driven by common noise. Generalizing this result to the case of independent Wiener increments is another topic of future investigations.
\section*{ACKNOWLEDGMENT}
The author would like to thank Michael Margaliot for his comments that improved this paper's exposition. This work is supported by the Simons Foundation grant $712522.$
\section{Appendix}\label{sec:appendix}
\noindent\textbf{Proof of Proposition~\ref{prop:properties:stochastic:constants}.}
\noindent{1.} For $h>0$, let $\Omega(h) = \frac{\|u-v+hF(u) -hF (v)\|_{\mathcal X}}{\|u-v\|_{\mathcal X}}$.
Using the equality $\Omega^l-1= (\Omega-1)(\Omega^{l-1}+\cdots+1)$ and the fact that $\lim_{h\to0}\Omega(h)=1$, we have,
{\small{
\begin{align*}
&\mathcal M^{+}_{\mathcal X,l}[F, 0]\\
&=\displaystyle\sup_{u\neq v\in \mathcal Y}\lim_{h\to0^{+}}\frac{1}{h}
\left(\frac{\|u-v+h(F(u) -F (v))\|^l_{\mathcal X}}{\|u-v\|^l_{\mathcal X}}-1\right)\\
&=\displaystyle\sup_{u\neq v\in \mathcal Y}\lim_{h\to0^{+}}\frac{l}{h}
\left(\frac{\|u-v+h(F(u) -F (v))\|_{\mathcal X}}{\|u-v\|_{\mathcal X}}-1\right) \\
&= l M_{\mathcal X}^{+}[F].
\end{align*}
}}
\noindent{2.} The proof is straightforward from the definition of $\mathcal M^{+}_{\mathcal X,l}$ and $\mathcal M_{\mathcal X,l}$.
\noindent{3.} By the definition of $\mathcal{M}_{F,G }^{(h,W)}$ given in Notation \ref{notation},
\[\mathcal M_{\alpha F,\sqrt\alpha G}^{(h,W)} = \mathcal M_{ F, G}^{(\alpha h,\sqrt\alpha W)}.\]
Using the fact that $W$ is of order $\sqrt h$, and therefore, $\sqrt \alpha W$ is of order $\sqrt{\alpha h}$, we have:
{\small{\begin{align*}
&\mathcal M^{+}_{\mathcal X,l}[\alpha F,\sqrt\alpha G]=\displaystyle\sup_{u\neq v\in \mathcal Y}\lim_{h\to0^{+}}\frac{1}{h}\times\\
&\left(\mathbb{E}\;\frac{\|u-v+\mathcal M_{ F, G}^{(\alpha h,\sqrt\alpha W)}(u) -\mathcal M_{ F, G}^{(\alpha h,\sqrt\alpha W)} (v)\|^l_{\mathcal X}}{\|u-v\|^l_{\mathcal X}}-1\right)\\
&=\displaystyle\sup_{u\neq v\in \mathcal Y}\lim_{h\to0^{+}}\frac{\alpha}{\alpha h}\times\\
&\left(\mathbb{E}\;\frac{\|u-v+\mathcal M_{ F, G}^{(\alpha h,\sqrt\alpha W)}(u) -\mathcal M_{ F, G}^{(\alpha h,\sqrt\alpha W)} (v)\|^l_{\mathcal X}}{\|u-v\|^l_{\mathcal X}}-1\right)\\
&=\alpha \mathcal M_{\mathcal X, l}^{+}[ F, G].
\end{align*}
}}
\noindent The second inequality in part 3 can be obtained by the definition of $\mathcal M^{+}_{\mathcal X,l}$ and the following equality.
\[2\mathcal M_{F^1+F^2, G^1+G^2}^{(h,W)} = \mathcal M_{ F^1, \frac{G^1+G^2}{\sqrt 2}}^{(2 h,\sqrt 2 W)}+ \mathcal M_{ F^2, \frac{G^1+G^2}{\sqrt 2}}^{(2 h,\sqrt 2 W)}.\]
\oprocend
\textbf{Proof of Proposition~\ref{prop:relation:stochM:M}.}
For fixed $u, v, F, G,$ and $h>0$, define $\mathcal K(h)$ and $\mathcal K_l(h)$ as follows:
\begin{align*}
\mathcal K(h) &:= \frac{\|u-v+\mathcal{M}_{F,G}^{(h,W)}(u) -\mathcal{M}_{F,G}^{(h,W)} (v)\|_{\mathcal X}}{\|u-v\|_{\mathcal X}}, \\
\mathcal K_l(h) &:= \mathbb{E}\;\frac{\|u-v+\mathcal{M}_{F,G}^{(h,W)}(u) -\mathcal{M}_{F,G}^{(h,W)} (v)\|^l_{\mathcal X}}{\|u-v\|^l_{\mathcal X}}\\
&=\mathbb{E}\; \mathcal K(h)^l.
\end{align*}
Note that $\mathcal M^{+}_{\mathcal X,l}[F,G] = \sup_{u\neq v} D^+\mathcal K_l(0)$. A simple calculation shows that the upper Dini derivative of $\mathcal K_l$ evaluated at $h=0$ is equal to
$
l \;\mathbb{E}(\mathcal K(h)^{l-1} D^+\mathcal K(h))|_{h=0} = l\;\mathbb{E} (D^+\mathcal K(0)),
$
since $\mathcal K(0)=1$ and, by the Dominated Convergence Theorem, the limit in the definition of $D^+$ and the expected value can be exchanged.
Therefore, by the definition of upper Dini derivative:
{\small{
\begin{align*}
&D^+\mathcal K(0)\\
&= \lim_{h\to 0^+} \frac{1}{h} \left\{\frac{\|u-v+\mathcal{M}_{F,G}^{(h,W)}(u) -\mathcal{M}_{F,G}^{(h,W)} (v)\|_{\mathcal X}}{\|u-v\|_{\mathcal X}} -1 \right\}\\
&\leq \lim_{h\to 0^+} \frac{1}{2h} \left\{\frac{\|u-v+\mathcal M_{F,G}^{(2h,0)}(u) -\mathcal M_{F,G}^{(2h,0)} (v)\|_{\mathcal X}}{\|u-v\|_{\mathcal X}} -1 \right\}\\
&+\lim_{h\to 0^+} \frac{1}{2h} \left\{\frac{\|u-v+\mathcal M_{F,G}^{(0,2W)}(u) -\mathcal M_{F,G}^{(0,2W)} (v)\|_{\mathcal X}}{\|u-v\|_{\mathcal X}} -1 \right\}\\
&\leq M^+_{\mathcal X} \Big[F-\frac{1}{2}\sum_jJ_{G_j} G_j\Big] \\
&+\lim_{h\to 0^+} \frac{1}{2h}\left\{\frac{\|u-v+ 2\sum_j(G_j(u)-G_j(v))\Delta W_j\|_{\mathcal X}}{\|u-v\|_{\mathcal X}} -1 \right\}.
\end{align*}
}}
The first inequality is obtained by writing $\mathcal{M}_{F,G}^{(h,W)}$ as $ \mathcal M_{F,G}^{(h,0)}+\mathcal M_{F,G}^{(0,W)}$ and applying the triangle inequality to $\|u-v+A+B\|_{\mathcal X}=\big\|\tfrac{1}{2}(u-v+2A)+\tfrac{1}{2}(u-v+2B)\big\|_{\mathcal X}$.
The second inequality is obtained using the following expressions:
$\mathcal M_{F,G}^{(h,0)}= h\big(F - \frac{1}{2}\sum_jJ_{G_j} G_j\big)$ and
$\mathcal M_{F,G}^{(0,W)}=\sum_j G_j\Delta W_j + \mathcal{R}$.
The term $\mathcal{R}(u)- \mathcal{R}(v)$ is omitted from the last term because $\mathcal{R}$ contains a factor of $h^2$ and therefore vanishes in the limit $h\to0$.
The Wiener increment satisfies
\[\Delta W_j= W_j(t+h) - W_j(t) =\int_{t}^{t+h} \xi_j(s)ds = h\xi_j(\tilde s_j),\]
where $\xi_j$ is standard normally distributed, $\xi_j\sim \mathcal{N}(0,1)$, and $t<\tilde s_j<t+h$.
Since $\xi_j$ is positive or negative, each with probability $\frac{1}{2}$, the last term of the above relationship becomes:
{\small{
\begin{align*}
&\lim_{h\to 0^+} \frac{1}{2h}\left\{\frac{\|u-v+ 2h\sum_j(G_j(u)-G_j(v))\xi_j(\tilde s_j)\|_{\mathcal X}}{\|u-v\|_{\mathcal X}} -1 \right\}\\
&\leq M^+_{\mathcal X} \Big[\sum_j\xi_j G_j\Big]\leq \sum_jM^+_{\mathcal X} [\xi_j G_j]\\
&= \frac{1}{2}\sum_j\left(M^+_{\mathcal X} [|\xi_j|G_j]+ M^+_{\mathcal X} [-|\xi_j|G_j]\right)\\
&= \frac{1}{2} \sum_j |\xi_j| \left(M^+_{\mathcal X} [G_j]+ M^+_{\mathcal X} [-G_j]\right).
\end{align*}
}}
Therefore, taking expectations on both sides, we get
\begin{align*}
l\;\mathbb{E} \Big(D^+\mathcal K(0)\Big)&\leq l\, M^+_{\mathcal X} \Big[F-\frac{1}{2}\sum_jJ_{G_j} G_j\Big] \\
&+ \frac{l}{2} \sum_j \mathbb{E} |\xi_j| \left(M^+_{\mathcal X} [G_j]+ M^+_{\mathcal X} [-G_j]\right).
\end{align*}
Equation~\eqref{eq1:relation:stochM:M} is obtained by plugging in $\mathbb{E} |\xi_j| = \sqrt{\frac{2}{\pi}}$.
\noindent Equation~\eqref{eq2:relation:stochM:M} holds by Proposition~\ref{prop:logarithmic:constants:norm}.
\oprocend
\section{The DataScope Approach}
\label{sec:approach}
We summarize our main theoretical contribution in \autoref{sec:approach-overview}, followed by the characteristics of ML pipelines to which these results are applicable (\autoref{sec:approach-characteristics}).
We further discuss how we can approximate many real-world pipelines as \textit{canonical pipelines} to make them compatible with our algorithmic approach (\autoref{sec:approach-approx}).
We defer the details of our (non-trivial) theoretical results to \autoref{sec:framework}.
\subsection{Overview}
\label{sec:approach-overview}
The key technical contribution of this paper is a novel algorithmic framework that covers a large sub-family of ML pipelines whose KNN Shapley can be computed in \textsf{PTIME}. We call these pipelines \textit{canonical pipelines}.
\begin{theorem} \label{thm:shapley-using-counting-oracle}
Let $\mathcal{D}_{tr}$ be a set of $n$ training tuples, $f$ be an ML pipeline over $\mathcal{D}_{tr}$, and $\mathcal{A}_{knn}$ be a $K$-nearest neighbor classifier. If $f$ can be expressed as an Additive Decision Diagram (ADD) with polynomial size, then computing
\begin{small}
\[
\varphi_i = \frac{1}{n} \sum_{S \subseteq \mathcal{D}_{tr} \backslash \{t_i\}} {n - 1 \choose |S|}^{-1} \left(
u \circ \mathcal{A}_{knn} \circ f (S \cup \{t_i\}) -
u \circ \mathcal{A}_{knn} \circ f (S)
\right)
\]
\end{small}
is in \textsf{PTIME} for all additive utilities $u$.
\label{theorem:main}
\end{theorem}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/canonical-pipelines.pdf}
\vspace{-2em}
\caption{Three types of canonical pipelines over which
Shapley values can be computed in PTIME.}
\vspace{2em}
\label{fig:patterns}
\end{figure}
We leave the details of this theorem to \autoref{sec:framework}. It provides a sufficient condition under which we can compute Shapley values for KNN classifiers over ML pipelines, and we can instantiate this general framework with concrete types of ML pipelines.
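To appreciate the cost that the theorem avoids, the following sketch (illustrative Python; \texttt{utility} is a hypothetical stand-in for $u \circ \mathcal{A}_{knn} \circ f$) evaluates the defining formula directly by enumerating all subsets, which is exponential in $n$:
\begin{verbatim}
from itertools import combinations
from math import comb

def shapley(n, utility):
    # Brute-force Shapley values for training tuples {0, ..., n-1}.
    # utility(S) maps a frozenset of tuple ids to a number.
    phi = [0.0] * n
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        for k in range(len(rest) + 1):
            for S in combinations(rest, k):
                S = frozenset(S)
                marginal = utility(S | {i}) - utility(S)
                phi[i] += marginal / (n * comb(n - 1, k))
    return phi

# Toy additive utility: a subset is worth its number of "good" tuples.
good = {0, 2}
print(shapley(3, lambda S: len(S & good)))  # [1.0, 0.0, 1.0]
\end{verbatim}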
\subsection{Canonical ML Pipelines}
\label{sec:approach-characteristics}
As a prerequisite for an efficient Shapley-value computation over pipelines, we need to understand how the removal of an input tuple $t_i$ from $\mathcal{D}_{tr}$ impacts the featurised training data $f(\mathcal{D}_{tr})$ produced by the pipeline. In particular, we need to be able to reason about the difference between $f(\mathcal{D}_{tr})$ and $f(\mathcal{D}_{tr} \setminus \{t_i\})$, which requires us to understand the \textit{data provenance}~\cite{green2007provenance,cheney2009provenance} of the pipeline $f$. In the following, we summarise three common types of pipelines (illustrated in \autoref{fig:patterns}), to which we refer as {\em canonical pipelines}. We will formally prove that Shapley values over these pipelines can be computed in PTIME in \autoref{sec:framework}.
\inlinesection{Map pipelines} are a family of pipelines that satisfy the condition in \autoref{theorem:main}, in which the feature extraction $f$ has the following property: each input training tuple $t_i$ is transformed into a unique output training example $t_i'$ with a tuple-at-a-time transformation function $h_f$: $t_i \mapsto t_i' = h_f(t_i)$. Map pipelines are the standard case for supervised learning, where each tuple of the input data is encoded as a feature vector for the model's training data. The provenance polynomial for the output $t_i'$ is $p(t_i') = a_i$ in this case, where $a_i$ denotes the presence of $t_i$ in the input to the pipeline~$f$.
\inlinesection{Fork pipelines} are a superset of Map pipelines, in which we require that for each output example $t_j$, there exists a \textit{unique}
input tuple $t_i$ such that $t_j$ is generated by applying a tuple-at-a-time transformation function $h_f$ over $t_i$: $t_j = h_f(t_i)$. As illustrated in \autoref{fig:patterns}(b), the output examples $t_1$ and $t_2$ are both generated from the input example $t_1$. Fork pipelines also satisfy the condition in \autoref{theorem:main}. Fork pipelines typically originate from data augmentation operations for supervised learning, where multiple variants of a single tuple of the input data are generated (e.g., various rotations of an image in computer vision), and each copy is encoded as a feature vector for the model's training data. The provenance polynomial for an output $t_j$ is again $p(t_j) = a_i$ in this case, where $a_i$ denotes the presence of $t_i$ in the input to the pipeline~$f$.
\inlinesection{One-to-Many Join pipelines} are a superset of Fork pipelines, which rely on the star-schema structure of the relational inputs. Given the relational inputs $\mathcal{D}_e$ (``fact table'') and $\mathcal{D}_s$ (``dimension table''), we require that, for each output example $t_k$, there exist \textit{unique} input tuples $t_i \in \mathcal{D}_e$ and $t_j \in \mathcal{D}_s$ such that $t_k$ is generated by applying a tuple-at-a-time transformation function $h_f$ over the join pair $(t_i, t_j)$: $t_k = h_f(t_i, t_j)$. One-to-Many Join pipelines also satisfy the condition in \autoref{theorem:main}. Such pipelines occur when we have multiple input datasets in supervised learning, with the ``fact'' relation holding data for the entities to classify (e.g., emails in a spam detection scenario), and the ``dimension'' relations holding additional side data for these entities, which might result in additional helpful features.
The provenance polynomial for an output $t_k$ is $p(t_k) = a_i \cdot a_j$ in this case, where $a_i$ and $a_j$ denote the presence of $t_i$ and $t_j$ in the input to the pipeline~$f$. Note that the polynomial states that both $t_i$ and $t_j$ must be present in the input at the same time (otherwise, no join pair can be formed from them).
\inlinesection{Discussion.} We note that this classification of pipelines assumes that the relational operations applied by the pipeline are restricted to the positive relational algebra (SPJU: Select, Project, Join, Union), where the pipeline applies no aggregations and joins the input data according to the star schema. In our experience, this covers many real-world use cases in modern ML infrastructures, where the ML pipeline consumes pre-aggregated input data from so-called ``feature stores,'' which is naturally modeled in a star schema. Furthermore, real-world pipelines operate on relational datasets using dataframe semantics~\cite{petersohn13towards}, where unions and projections do not deduplicate their results, which (together with the absence of aggregations) has the effect that no additions are present in the provenance polynomials of the outputs of our discussed pipeline types. This pipeline model has also been proven helpful for interactive data distribution debugging~\cite{grafberger2022data,grafberger2021mlinspect}.
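For illustration, the following sketch (hypothetical helper functions, not DataScope's API) materializes the provenance polynomials of the three canonical pipeline types as monomials, i.e., sets of input-tuple identifiers that must be jointly present; consistent with the discussion above, no additions occur:
\begin{verbatim}
def map_provenance(inputs):
    # each input tuple t_i yields one output with polynomial a_i
    return [{i} for i in range(len(inputs))]

def fork_provenance(inputs, copies=2):
    # data augmentation: every copy of t_i inherits polynomial a_i
    return [{i} for i in range(len(inputs)) for _ in range(copies)]

def join_provenance(fact, dim, key):
    # star-schema join: the output for (t_i, t_j) carries a_i * a_j
    return [{("fact", i), ("dim", j)}
            for i, f in enumerate(fact)
            for j, d in enumerate(dim)
            if f[key] == d[key]]

print(join_provenance([{"k": 1}], [{"k": 1}, {"k": 2}], "k"))
\end{verbatim}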
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{figures/pipeline-categories.pdf}
\vspace{-1em}
\caption{A majority of real-world ML pipelines~\cite{psallidas2019data}
either already exhibit a canonical map-fork pipeline pattern, or are easily convertible to it using our approximation scheme.}
\vspace{2em}
\label{fig:coverage}
\end{figure}
\subsection{Approximating Real-World ML Pipelines}
\label{sec:approach-approx}
In practice, an ML pipeline $f$ and its corresponding ML model $\mathcal{A}$ will often not directly give us a canonical pipeline whose Shapley value can be computed in \textsf{PTIME}. The reasons for this are twofold: $(i)$~there might be no technique known to compute the Shapley value in \textsf{PTIME} for the given model; $(ii)$~the estimator/transformer operations for feature encoding in the pipeline require global aggregations (e.g., to compute the mean of an attribute for normalising it). In such cases, each output depends on the whole input, and the pipeline does not fit into one of the canonical pipeline types that we discussed earlier.
As a consequence, we \textit{approximate} an ML pipeline into a canonical pipeline in two ways.
\inlinesection{Approximating~$\mathcal{A}$.} The first approximation follows various previous efforts, summarized as KNN~Shapley before, and has been shown to work well, if not better, in a diverse range of scenarios. In this step, we approximate the pipeline's ML model with a KNN classifier
\[
\mathcal{A} \mapsto \mathcal{A}_{knn}.
\]
\inlinesection{Approximating the estimator/transformer steps in $f$.} In terms of the pipeline operations, we have to deal with the global aggregations applied by the estimators for feature encoding. Common feature encoding and dimensionality reduction techniques are often based on a \texttt{reduce-map} pattern over the data:
\[
op(\mathcal{D}) = \texttt{map}(\texttt{reduce}(\mathcal{D}), \mathcal{D}).
\]
During the \texttt{reduce} step, the estimator computes some global statistics over the dataset; e.g., the estimator for \texttt{MinMaxScaling} computes the minimum and maximum of an attribute, and the \texttt{TFIDF} estimator computes the inverse document frequencies of terms. The estimator then generates a transformer, which applies the \texttt{map} step to the data, transforming the input dataset based on the computed global statistics, e.g., to normalize each data example based on the computed minimum and maximum values in the case of \texttt{MinMaxScaling}.
The global aggregation conducted by the \texttt{reduce} step is often the key reason that we cannot compute Shapley values in \textsf{PTIME} over a given pipeline: such a global aggregation requires us to enumerate all possible subsets of data examples, each of which corresponds to a potentially different global statistic. Fortunately, we also observe, and will validate empirically later, that the results of these global aggregations are relatively stable across different subsets of the data, especially in cases where what we want to compute is the \textit{difference} when a single example is added or removed. The approximation that we conduct is to reuse the result of the \texttt{reduce} step computed over the whole dataset $\mathcal{D}_{tr}$ for a subset $\mathcal{D} \subset \mathcal{D}_{tr}$:
\begin{equation} \label{eq:conditional-op-conversion}
op(\mathcal{D}) = \texttt{map}(\texttt{reduce}(\mathcal{D}), \mathcal{D}) \mapsto
op^*(\mathcal{D}) = \texttt{map}(\texttt{reduce}(\mathcal{D}_{tr}), \mathcal{D}).
\end{equation}
In the case of scikit-learn, this means that we reuse the transformer generated by fitting the estimator on the whole dataset. Once all estimators $op$ in an input pipeline are transformed into their approximate variant $op^*$, a large majority of realistic pipelines become canonical pipelines of a \texttt{map} or \texttt{fork} pattern.
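As a concrete illustration of \autoref{eq:conditional-op-conversion} in scikit-learn (a toy sketch; the data and the chosen subset are made up), we fit \texttt{MinMaxScaler} once on the full training set and reuse the resulting transformer on a subset, instead of refitting it on the subset:
\begin{verbatim}
import numpy as np
from sklearn.preprocessing import MinMaxScaler

D_tr = np.array([[0.0], [5.0], [10.0]])
D = D_tr[:2]                          # a subset of the training set

# Exact semantics: refit (reduce + map) on every subset D.
exact = MinMaxScaler().fit_transform(D)

# Approximate semantics: reduce once over D_tr, then map over D.
scaler = MinMaxScaler().fit(D_tr)
approx = scaler.transform(D)

print(exact.ravel(), approx.ravel())  # [0. 1.] vs. [0. 0.5]
\end{verbatim}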
\inlinesection{Statistics of Real-world Pipelines.} A natural question is how common these families of pipelines are in practice. \autoref{fig:coverage} illustrates a case study that we conducted over 500K real-world pipelines provided by Microsoft~\cite{psallidas2019data}. We divide pipelines into three categories: (1) ``pure'' map/fork pipelines, based on our definition of canonical pipelines; (2) ``conditional'' map/fork pipelines, which contain a reduce operator that can be effectively approximated using the scheme we just described; and (3) other pipelines, which contain complex operators that cannot be approximated. We see that a vast majority of the pipelines we encountered in our case study fall into the first two categories, which we can effectively approximate using our canonical pipelines framework.
\paragraph*{\underline{Discussion: What if These Two Approximations Fail?}}
Computing Shapley values for a generic pipeline
$(\mathcal{A}, f)$ is \#\textsf{P}-hard, and by
approximating it into
$(\mathcal{A}_{knn}, f^*)$, we obtain an
efficient \textsf{PTIME} solution. This drastic improvement
on complexity also means that \textit{we should
expect that there exist scenarios under which
this $(\mathcal{A}, f) \mapsto (\mathcal{A}_{knn}, f^*)$
approximation is not a good approximation.}
{\em How often would this failure case happen in
practice?}
When the training set is large,
as illustrated in many previous studies focusing on the KNN proxy, we are confident that the $\mathcal{A} \mapsto \mathcal{A}_{knn}$ approximation should work well in many practical scenarios except those
relying on some very strong global
properties that KNN does not model (e.g., global population balance).
As for the $f \mapsto f^*$ approximation, we expect the failure cases to be rare, especially when the training set is large.
In our experiments, we have empirically verified these two beliefs, which were also backed up by previous empirical results on KNN Shapley~\cite{Jia2021-zf}.
{\em What should we do when such a failure case
happens?} Nevertheless, we should expect that such failure cases can happen in practice.
In such situations, we resort to the Monte Carlo baseline,
which is orders of magnitude slower but
provides a backup alternative.
It is an interesting future direction to further
explore the limitations of both approximations and
develop more efficient Monte Carlo methods.
\paragraph*{\underline{Approximating Additive Utilities: Equalized Odds Difference}}
We show how slightly more complex utilities can also be represented as additive, with a small approximation similar to the one described above. We demonstrate this using the ``equalized odds difference'' utility, a measure of (un)fairness commonly used in research~\cite{moritz2016equality,barocas-hardt-narayanan} that we also use in our experiments. It is defined as follows:
\begin{equation} \label{eq:eqodds-diff}
u (\mathcal{D}_{tr}, \mathcal{D}_{val}) := \max\{ TPR_{\Delta}(\mathcal{D}_{tr}, \mathcal{D}_{val}), FPR_{\Delta}(\mathcal{D}_{tr}, \mathcal{D}_{val}) \}.
\end{equation}
Here, $TPR_{\Delta}$ and $FPR_{\Delta}$ are the \emph{true positive rate difference} and the \emph{false positive rate difference}, respectively. We assume that each tuple $t_{tr} \in f(\mathcal{D}_{tr})$ and $t_{val} \in f(\mathcal{D}_{val})$ has a sensitive feature $g$ (e.g., ethnicity) with values taken from a finite set
$\{G_1, G_2, \ldots \}$, which allows us to partition the dataset into \emph{sensitive groups}. We define $TPR_{\Delta}$ and $FPR_{\Delta}$ respectively as
\begin{equation} \label{eq:tpr-diff}
\begin{split}
TPR_{\Delta}(\mathcal{D}_{tr}, \mathcal{D}_{val}) &:=
\max_{G_i \in G} TPR_{G_i}(\mathcal{D}_{tr}, \mathcal{D}_{val})
-
\min_{G_j \in G} TPR_{G_j}(\mathcal{D}_{tr}, \mathcal{D}_{val}), \ \textrm{and} \\
FPR_{\Delta}(\mathcal{D}_{tr}, \mathcal{D}_{val}) &:=
\max_{G_i \in G} FPR_{G_i}(\mathcal{D}_{tr}, \mathcal{D}_{val})
-
\min_{G_j \in G} FPR_{G_j}(\mathcal{D}_{tr}, \mathcal{D}_{val}).
\end{split}
\end{equation}
For some sensitive group $G_i$, we define $TPR_{G_i}$ and $FPR_{G_i}$ respectively as:
\begin{equation*}
\begin{split}
TPR_{G_i}(\mathcal{D}_{tr}, \mathcal{D}_{val}) &:= \frac{\sum_{t_{val} \in f(\mathcal{D}_{val})} \mathbbm{1}\{ (\mathcal{A} \circ f (\mathcal{D}_{tr}))(t_{val}) = 1 \} \mathbbm{1} \{ y(t_{val}) = 1 \} \mathbbm{1} \{ g(t_{val}) = G_i \} }{|\{ t_{val} \in \mathcal{D}_{val} \ : \ y(t_{val}) = 1 \wedge g(t_{val}) = G_i \}|}, \ \textrm{and} \\
FPR_{G_i}(\mathcal{D}_{tr}, \mathcal{D}_{val}) &:= \frac{\sum_{t_{val} \in f(\mathcal{D}_{val})} \mathbbm{1}\{ (\mathcal{A} \circ f (\mathcal{D}_{tr}))(t_{val}) = 1 \} \mathbbm{1} \{ y(t_{val}) = 0 \} \mathbbm{1} \{ g(t_{val}) = G_i \} }{|\{ t_{val} \in \mathcal{D}_{val} \ : \ y(t_{val}) = 0 \wedge g(t_{val}) = G_i \}|}
\end{split}
\end{equation*}
For a given training dataset $\mathcal{D}_{tr}$, we can determine whether $TPR_{\Delta}$ or $FPR_{\Delta}$ is going to be the dominant metric in \autoref{eq:eqodds-diff}. Similarly, given that choice, we can determine the pair of sensitive groups $(G_{max}, G_{min})$ that ends up being selected as maximal and minimal in \autoref{eq:tpr-diff}. Similarly to the conversion shown in \autoref{eq:conditional-op-conversion}, we can treat these two steps as a \texttt{reduce} operation over the whole dataset. Then, if we assume that this intermediate result remains stable over subsets of $\mathcal{D}_{tr}$, we can approximately represent the equalized odds difference utility as an additive utility.
As an example, let us assume that we have determined that $TPR_\Delta$ dominates over $FPR_\Delta$, and similarly that the pair of sensitive groups $(G_{max}, G_{min})$ will end up being selected in \autoref{eq:tpr-diff}. Then, our tuple-wise utility $u_T$ and the scaling factor $w$ become
\begin{align*}
u_T(y_{pred}, t_{val}) &:= TPR_{G_{max},T}(y_{pred}, t_{val}) - TPR_{G_{min},T}(y_{pred}, t_{val}), \\
w &:= 1/|\{ t_{val} \in \mathcal{D}_{val} \ : \ y(t_{val}) = 1 \wedge g(t_{val}) = G_i \}|,
\end{align*}
where
\begin{equation*}
TPR_{G_i,T}(y_{pred}, t_{val}) := \mathbbm{1}\{ y_{pred} = 1 \} \mathbbm{1} \{ y(t_{val}) = 1 \} \mathbbm{1} \{ g(t_{val}) = G_i \}.
\end{equation*}
A similar approach can be taken to define $u_T$ and $w$ for the case when $FPR_\Delta$ dominates over $TPR_\Delta$.
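For concreteness, the following sketch (illustrative function names, not DataScope's API) computes the full utility in \autoref{eq:eqodds-diff}; once the dominant metric and the pair $(G_{max}, G_{min})$ are fixed by the reduce step described above, its numerators decompose into the per-tuple indicator terms $u_T$:
\begin{verbatim}
import numpy as np

def rate_diff(y_pred, y_true, group, positive):
    # max-min gap of TPR (positive=1) or FPR (positive=0) over groups
    rates = []
    for g in np.unique(group):
        mask = (y_true == positive) & (group == g)
        rates.append(np.sum((y_pred == 1) & mask) / np.sum(mask))
    return max(rates) - min(rates)

def eq_odds_diff(y_pred, y_true, group):
    return max(rate_diff(y_pred, y_true, group, 1),   # TPR_Delta
               rate_diff(y_pred, y_true, group, 0))   # FPR_Delta

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "b", "b", "b"])
print(eq_odds_diff(y_pred, y_true, group))            # 0.5
\end{verbatim}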
\section{Conclusion}
We present
\texttt{Ease.ML/DataScope}, the first system that efficiently computes Shapley values of training examples over an \emph{end-to-end} ML pipeline.
Our core contribution is a
novel algorithmic framework that computes Shapley values over a specific family of ML pipelines that we call \textit{canonical pipelines}, connecting decades of research on relational data provenance with recent advances in machine learning.
For many subfamilies of canonical pipelines, computing Shapley values is in \textsf{PTIME}, in contrast to the exponential complexity of computing Shapley values in general.
We conduct extensive experiments illustrating different use cases and utilities.
Our results show that \texttt{DataScope} is up to four orders of magnitude faster than state-of-the-art Monte Carlo-based methods, while being comparably, and often even more, effective in data debugging.
\section{Additional Experiments}
\autoref{fig:exp-label-repair-accuracy-map-logreg-1k} to \autoref{fig:exp-label-repair-accuracy-fork-xgb-1k} show the label repair results when optimizing for accuracy;
\autoref{fig:exp-label-repair-fairness-map-logreg-1k} to \autoref{fig:exp-label-repair-fairness-fork-xgb-1k} show the corresponding results when optimizing for fairness.
\begin{figure*}
\centering
\def\vscenario{label-repair}
\def\vtrainsize{1k}
\def\vrepairgoal{accuracy}
\def\vproviders{0}
\def\vmodel{logreg}
\makeatletter
\def\vdataset{UCI}
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=\vscenario/trainsize=\vtrainsize/repairgoal=\vrepairgoal/providers=\vproviders/model=\vmodel/dataset=\vdataset/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\def\vdataset{TwentyNewsGroups}
\@for\vpipeline:={tf-idf,tolower-urlremove-tfidf}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=\vscenario/trainsize=\vtrainsize/repairgoal=\vrepairgoal/providers=\vproviders/model=\vmodel/dataset=\vdataset/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\def\vdataset{FashionMNIST}
\@for\vpipeline:={gauss-blur,hog-transform}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=\vscenario/trainsize=\vtrainsize/repairgoal=\vrepairgoal/providers=\vproviders/model=\vmodel/dataset=\vdataset/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over various combinations of datasets (1k samples) and map pipelines. We optimize for accuracy. The model is logistic regression.}
\label{fig:exp-label-repair-accuracy-map-logreg-1k}
\end{figure*}
\begin{figure*}
\centering
\def\vscenario{label-repair}
\def\vtrainsize{1k}
\def\vrepairgoal{accuracy}
\def\vproviders{0}
\def\vmodel{knn}
\makeatletter
\def\vdataset{UCI}
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=\vscenario/trainsize=\vtrainsize/repairgoal=\vrepairgoal/providers=\vproviders/model=\vmodel/dataset=\vdataset/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\def\vdataset{TwentyNewsGroups}
\@for\vpipeline:={tf-idf,tolower-urlremove-tfidf}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=\vscenario/trainsize=\vtrainsize/repairgoal=\vrepairgoal/providers=\vproviders/model=\vmodel/dataset=\vdataset/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\def\vdataset{FashionMNIST}
\@for\vpipeline:={gauss-blur,hog-transform}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=\vscenario/trainsize=\vtrainsize/repairgoal=\vrepairgoal/providers=\vproviders/model=\vmodel/dataset=\vdataset/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over various combinations of datasets (1k samples) and map pipelines. We optimize for accuracy. The model is K-nearest neighbor.}
\label{fig:exp-label-repair-accuracy-map-knn-1k}
\end{figure*}
\begin{figure*}
\centering
\def\vscenario{label-repair}
\def\vtrainsize{1k}
\def\vrepairgoal{accuracy}
\def\vproviders{100}
\def\vmodel{logreg}
\makeatletter
\def\vdataset{UCI}
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=\vscenario/trainsize=\vtrainsize/repairgoal=\vrepairgoal/providers=\vproviders/model=\vmodel/dataset=\vdataset/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\def\vdataset{TwentyNewsGroups}
\@for\vpipeline:={tf-idf,tolower-urlremove-tfidf}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=\vscenario/trainsize=\vtrainsize/repairgoal=\vrepairgoal/providers=\vproviders/model=\vmodel/dataset=\vdataset/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\def\vdataset{FashionMNIST}
\@for\vpipeline:={gauss-blur,hog-transform}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=\vscenario/trainsize=\vtrainsize/repairgoal=\vrepairgoal/providers=\vproviders/model=\vmodel/dataset=\vdataset/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over various combinations of datasets (1k samples) and fork pipelines. We optimize for accuracy. The model is logistic regression.}
\label{fig:exp-label-repair-accuracy-fork-logreg-1k}
\end{figure*}
\begin{figure*}
\centering
\makeatletter
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=accuracy/providers=100/model=knn/dataset=UCI/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\@for\vpipeline:={tf-idf,tolower-urlremove-tfidf}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=accuracy/providers=100/model=knn/dataset=TwentyNewsGroups/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\@for\vpipeline:={gauss-blur,hog-transform}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=accuracy/providers=100/model=knn/dataset=FashionMNIST/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over various combinations of datasets (1k samples) and fork pipelines. We optimize for accuracy. The model is K-nearest neighbor.}
\label{fig:exp-label-repair-accuracy-fork-knn-1k}
\end{figure*}
\begin{figure*}
\centering
\makeatletter
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=accuracy/providers=100/model=xgb/dataset=UCI/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\@for\vpipeline:={tf-idf,tolower-urlremove-tfidf}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=accuracy/providers=100/model=xgb/dataset=TwentyNewsGroups/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\@for\vpipeline:={gauss-blur,hog-transform}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=accuracy/providers=100/model=xgb/dataset=FashionMNIST/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over various combinations of datasets (1k samples) and fork pipelines. We optimize for accuracy. The model is XGBoost.}
\label{fig:exp-label-repair-accuracy-fork-xgb-1k}
\end{figure*}
\begin{figure*}
\centering
\makeatletter
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\@for\vutility:={acc,eqodds}\do{
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=fairness/providers=0/model=logreg/dirtybias=0.0/dataset=FolkUCI/pipeline=\vpipeline/utility=\vutility/report.figure.pdf}
\end{subfigure}
}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{myyellow}\addlegendentry{DataScope Interactive}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over the FolkUCI dataset (1k samples) and map pipelines. We optimize for fairness. The model is logistic regression.}
\label{fig:exp-label-repair-fairness-map-logreg-1k}
\end{figure*}
\begin{figure*}
\centering
\makeatletter
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\@for\vutility:={acc,eqodds}\do{
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=fairness/providers=0/model=knn/dirtybias=0.0/dataset=FolkUCI/pipeline=\vpipeline/utility=\vutility/report.figure.pdf}
\end{subfigure}
}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{myyellow}\addlegendentry{DataScope Interactive}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over the FolkUCI dataset (1k samples) and map pipelines. We optimize for fairness. The model is K-nearest neighbor.}
\label{fig:exp-label-repair-fairness-map-knn-1k}
\end{figure*}
\begin{figure*}
\centering
\makeatletter
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\@for\vutility:={acc,eqodds}\do{
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=fairness/providers=100/model=logreg/dirtybias=0.0/dataset=FolkUCI/pipeline=\vpipeline/utility=\vutility/report.figure.pdf}
\end{subfigure}
}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{myyellow}\addlegendentry{DataScope Interactive}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over the FolkUCI dataset (1k samples) and fork pipelines. We optimize for fairness. The model is logistic regression.}
\label{fig:exp-label-repair-fairness-fork-logreg-1k}
\end{figure*}
\begin{figure*}
\centering
\makeatletter
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\@for\vutility:={acc,eqodds}\do{
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=fairness/providers=100/model=knn/dirtybias=0.0/dataset=FolkUCI/pipeline=\vpipeline/utility=\vutility/report.figure.pdf}
\end{subfigure}
}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{myyellow}\addlegendentry{DataScope Interactive}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over the FolkUCI dataset (1k samples) and fork pipelines. We optimize for fairness. The model is K-nearest neighbor.}
\label{fig:exp-label-repair-fairness-fork-knn-1k}
\end{figure*}
\begin{figure*}
\centering
\makeatletter
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\@for\vutility:={acc,eqodds}\do{
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=fairness/providers=100/model=xgb/dirtybias=0.0/dataset=FolkUCI/pipeline=\vpipeline/utility=\vutility/report.figure.pdf}
\end{subfigure}
}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{myyellow}\addlegendentry{DataScope Interactive}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over the FolkUCI dataset (1k samples) and fork pipelines. We optimize for fairness. The model is XGBoost.}
\label{fig:exp-label-repair-fairness-fork-xgb-1k}
\end{figure*}
\section{Experimental Evaluation}
\label{sec:evaluation}
We evaluate the performance of
\texttt{DataScope}\xspace when applied to data
debugging and repair. In this section, we present the empirical study we conducted with the goal of evaluating both quality and speed.
\subsection{Experimental Setup}
\inlinesection{Hardware and Platform.}
All experiments were conducted on Amazon AWS c5.metal instances with a 96-core Intel(R) Xeon(R) Platinum 8275CL 3.00 GHz CPU and 192 GB of RAM. We ran each experiment in single-threaded mode.
\begin{table}[t!]
\centering
\scalebox{1}{
\begin{tabular}{l|c|cc}
\hline
\textbf{Dataset} & \textbf{Modality} &
\textbf{\# Examples} &
\textbf{\# Features}
\\
\hline
\at{UCIAdult}~\cite{kohavi1996scaling} & tabular & $49K$ & $14$ \\
\at{Folktables}~\cite{ding2021retiring} & tabular & $1.6M$ & $10$ \\
\at{FashionMNIST}~\cite{xiao2017online} & image & $14K$ & (Image) $28 \times 28$ \\
\at{20NewsGroups}~\cite{joachims1996probabilistic} & text & $1.9K$ & (Text) $20K$ after TF-IDF \\
\hline \at{Higgs}~\cite{baldi2014searching} & tabular & $11M$ & $28$ \\
\hline
\end{tabular}
}
\caption{Dataset characteristics.}
\label{tbl:datasets}
\vspace{2em}
\end{table}
\inlinesection{Datasets.}
We assemble a collection of widely used datasets with diverse modalities (i.e., tabular, text, and image datasets).
\autoref{tbl:datasets} summarizes the datasets that we used.
\noindent
\textbf{(Tabular Datasets)} \at{UCI Adult} is a tabular dataset from the US census data~\cite{kohavi1996scaling}. We use the binary classification variant where the goal is to predict whether the income of a person is above or below \$50K. One of the features is `sex,' which we use as a \emph{sensitive attribute} to measure group fairness with respect to male and female subgroups. A very similar dataset is \at{Folktables}, which was developed to redesign and extend the original \at{UCI Adult} dataset with various aspects interesting to the fairness community~\cite{ding2021retiring}. We use the `income' variant of this dataset, which also has a `sex' feature and has a binary label corresponding to the \$50K income threshold. Another tabular dataset that we use for large-scale experiments is the \at{Higgs} dataset, which has $28$ features that represent physical properties of particles in an accelerator~\cite{baldi2014searching}. The goal is to predict whether the observed signal produces Higgs bosons or not.
\noindent
\textbf{(Non-tabular Datasets)}
We used two non-tabular datasets. One is \at{FashionMNIST}, which contains $28\times 28$ grayscale images of 10 different categories of fashion items~\cite{xiao2017online}. To construct a binary classification task, we take only images of the classes `shirt' and `T-shirt.'
We also use \at{TwentyNewsGroups}, which is a dataset with text obtained from newsgroup posts categorized into 20 topics~\cite{joachims1996probabilistic}. To construct a binary classification task, we take only two newsgroup categories, `\at{sci.med}' and `\at{comp.graphics}.' The task is to predict the correct category for a given piece of text.
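For reference, this two-category subset can be loaded directly via \texttt{sklearn}; a minimal sketch (the variable names are ours):
\begin{verbatim}
from sklearn.datasets import fetch_20newsgroups

# Restrict the corpus to the two categories of our binary task.
categories = ["sci.med", "comp.graphics"]
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)
\end{verbatim}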
\inlinesection{Feature Processing Pipelines.}
We obtained a dataset with about $500K$ machine learning workflow instances from internal Microsoft users~\cite{psallidas2019data}. Each workflow consists of a dataset, a feature extraction pipeline, and an ML model. We identified a handful of the most representative pipelines and translated them to \texttt{sklearn} pipelines. We list the pipelines used in our experiments in \autoref{tbl:pipelines}.
\begin{table}[t!]
\centering
\scalebox{1}{
\begin{tabular}{l|ccc}
\hline
\thead{Pipeline} & \thead{Dataset \\ Modality} & \thead{w/ \\ Reduce} & \thead{Operators} \\
\hline
\at{Identity} & tabular & false & $\emptyset$ \\
\at{Standard Scaler} & tabular & true & $\mathtt{StandardScaler}$ \\
\at{Logarithmic Scaler} & tabular & true & $\mathtt{Log1P} \circ \mathtt{StandardScaler}$ \\
\at{PCA} & tabular & true & $\mathtt{PCA}$ \\
\at{Missing Indicator + KMeans} & tabular & true & $\mathtt{MissingIndicator} \oplus \mathtt{KMeans}$ \\
\hline
\at{Gaussian Blur} & image & false & $\mathtt{GaussBlur}$ \\
\at{Histogram of Oriented Gradients} & image & false & $\mathtt{HogTransform}$ \\
\hline
\at{TFIDF} & text & true & $\mathtt{CountVectorizer} \circ \mathtt{TfidfTransformer}$ \\
\multirow{2}{*}{\at{Tolower + URLRemove + TFIDF}} &
\multirow{2}{*}{text} &
\multirow{2}{*}{false} &
$\begin{array}{cc}
& \mathtt{TextToLower} \circ \mathtt{UrlRemover} \\
& \circ \mathtt{CountVectorizer} \circ \mathtt{TfidfTransformer}
\end{array}$ \\
\hline
\end{tabular}
}
\caption{Feature extraction pipelines used in experiments.}
\label{tbl:pipelines}
\vspace{2em}
\end{table}
As \autoref{tbl:pipelines} shows, we used pipelines of varying complexity. The data modality column indicates which types of datasets we applied each pipeline to. Some pipelines are pure map pipelines, while others implicitly require a reduce operation. \autoref{tbl:pipelines} also shows the operators contained in each pipeline. They are combined either using a composition symbol $\circ$, i.e., operators are applied in sequence, or a concatenation symbol $\oplus$, i.e., operators are applied in parallel and their output vectors are concatenated. Some operators are taken directly from \texttt{sklearn} ($\mathtt{StandardScaler}$, $\mathtt{PCA}$, $\mathtt{MissingIndicator}$, $\mathtt{KMeans}$, $\mathtt{CountVectorizer}$, and $\mathtt{TfidfTransformer}$), while others require customized implementations: (1) $\mathtt{Log1P}$, using the \texttt{log1p} function from \texttt{numpy}; (2) $\mathtt{GaussBlur}$, using the \texttt{gaussian\_filter} function from \texttt{scipy}; (3) $\mathtt{HogTransform}$, using the \texttt{hog} function from \texttt{skimage}; (4) $\mathtt{TextToLower}$, using Python's built-in \texttt{str.lower} method; and (5) $\mathtt{UrlRemover}$, using a simple regular expression.
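To make the composition symbols concrete, the following is a minimal \texttt{sklearn} sketch of two of the tabular pipelines; the estimator settings shown are illustrative assumptions, not necessarily the exact ones used in our experiments:
\begin{verbatim}
import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.impute import MissingIndicator
from sklearn.cluster import KMeans

# Logarithmic Scaler: Log1P composed with StandardScaler
# (the sequential composition symbol).
log_scaler = Pipeline([
    ("log1p", FunctionTransformer(np.log1p)),
    ("scale", StandardScaler()),
])

# Missing Indicator + KMeans: outputs of the two operators are
# concatenated (the parallel concatenation symbol).
mi_kmeans = FeatureUnion([
    ("missing", MissingIndicator(features="all")),
    ("kmeans", KMeans(n_clusters=8)),  # n_clusters is an assumption
])
\end{verbatim}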
\underline{\emph{Fork Variants}:}
We also create a ``fork'' version of each of the above pipelines by prepending a $\mathtt{DataProvider}$ operator.
It simulates distinct data providers, each of which contributes a portion of the data. The original dataset is split into a given number of groups (we set this number to $100$ in our experiments).
We compute importance for each group, and we conduct data repairs on entire groups at once.
\inlinesection{Models.}
We use three machine learning models as the downstream ML model following the feature extraction pipelines: \at{XGBoost}, \at{Logistic Regression}, and \at{K-Nearest Neighbor}. We use the \texttt{LogisticRegression} and \texttt{KNeighborsClassifier} implementations provided by the \texttt{sklearn} package. We use the default hyper-parameter values, except that we set \texttt{max\_iter} to 5,000 for \at{Logistic Regression} and \texttt{n\_neighbors} to $1$ for \at{K-Nearest Neighbor}.
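A sketch of the corresponding model construction (the XGBoost import assumes the \texttt{xgboost} package with default settings):
\begin{verbatim}
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

models = {
    "logreg": LogisticRegression(max_iter=5000),  # defaults otherwise
    "knn": KNeighborsClassifier(n_neighbors=1),
    "xgb": XGBClassifier(),
}
\end{verbatim}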
\inlinesection{Data Debugging Methods.}
We apply different data debugging methods and compare them based on their effect on model quality and the computation time that they require:
\begin{itemize}[leftmargin=*]
\item \underline{\at{Random}} --- We measure importance with a random number and thus apply data repairs in random order.
\item \underline{\at{TMC Shapley x10} and \at{TMC Shapley x100}} --- We express importance as Shapley values computed using the Truncated Monte-Carlo (TMC) method~\cite{ghorbani2019data}, with 10 and 100 Monte-Carlo iterations, respectively (a minimal sketch of the method is given after this list). We then follow the computed importance in ascending order to repair data examples.
\item \underline{\at{DataScope}} --- This is our $K$-nearest-neighbor based method for efficiently computing the Shapley value. We then follow the computed importance in ascending order to repair data examples.
\item \underline{\at{DataScope Interactive}} --- While the above methods compute importance scores only once at the beginning of the repair, the speed of \at{DataScope} allows us to \textit{recompute} the importance after \emph{each} data repair. We call this strategy \at{DataScope Interactive}.
\end{itemize}
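For concreteness, here is a minimal sketch of the TMC method of~\cite{ghorbani2019data}; \texttt{utility} is an assumed callback that retrains the model on the given subset of training indices and returns its validation score:
\begin{verbatim}
import numpy as np

def tmc_shapley(n, utility, n_iters=10, tolerance=1e-3):
    # Truncated Monte-Carlo Shapley: for each random permutation,
    # accumulate marginal utility contributions, truncating once the
    # running score gets close to the score on the full dataset.
    phi = np.zeros(n)
    full_score = utility(np.arange(n))
    for it in range(1, n_iters + 1):
        perm = np.random.permutation(n)
        prev_score = utility(np.array([], dtype=int))
        for j, i in enumerate(perm):
            if abs(full_score - prev_score) < tolerance:
                new_score = prev_score  # truncation
            else:
                new_score = utility(perm[: j + 1])
            phi[i] += (new_score - prev_score - phi[i]) / it  # running mean
            prev_score = new_score
    return phi
\end{verbatim}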
\inlinesection{Protocol.}
In most of our experiments (unless explicitly stated otherwise), we simulate importance-driven data repair scenarios performed on a given \emph{training dataset}. In each experimental run, we select a dataset, pipeline, model, and a data repair method.
We compute the importance using the utility
defined over a \emph{validation set}.
Training data repairs are conducted one unit at a time until all units are examined. The order of units is determined by the specific repair method. We divide the range between $0\%$ and $100\%$ of data examined into $100$ checkpoints. At each checkpoint, we measure the quality of the given model on a separate \emph{test dataset} using some metric (e.g., accuracy). For importance-based repair methods, we also measure the time spent on computing importance.
We repeat each experiment $10$ times and report the median as well as the $90$-th percentile range (either shaded or with error bars).
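This protocol can be summarized by the following sketch, where \texttt{fit\_and\_score} is an assumed callback that retrains the pipeline and model on the (partially repaired) labels and returns the test metric:
\begin{verbatim}
import numpy as np

def label_repair_curve(importance, y_dirty, y_clean,
                       fit_and_score, n_checkpoints=100):
    # Repair units in ascending importance order; record the test
    # metric at evenly spaced checkpoints.
    order = np.argsort(importance)  # most harmful (lowest value) first
    y = y_dirty.copy()
    marks = np.linspace(0, len(order), n_checkpoints + 1, dtype=int)
    scores, done = [], 0
    for m in marks:
        y[order[done:m]] = y_clean[order[done:m]]
        done = m
        scores.append(fit_and_score(y))
    return scores
\end{verbatim}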
\subsection{Results}
Following the protocol of~\cite{Li2021-sg,Jia2021-zf}, we start by flipping a certain fraction of the labels in the training dataset.
We then use a given data debugging method to go through the dataset and repair labels by replacing each examined label with the correct one.
As we progress through the dataset, we measure the model quality on a separate test dataset using a metric such as accuracy or equalized odds difference (a commonly used fairness metric). Our goal is to achieve the best possible quality while examining the least possible amount of data.
Depending on whether the pipeline is an original one or its fork variant, we take slightly different approaches to label corruption and repair. For original pipelines, each label is flipped with some probability (by default $50\%$). Importance is computed for individual data examples, and repairs are performed independently as well.
For fork variants, data examples are divided into groups corresponding to their respective data providers. By default, we set the number of data providers to $100$. Each label inside a single group is flipped with a fixed probability; however, this probability differs across data providers (ranging from $0\%$ to $100\%$). Importance is computed for individual providers, and when a provider is selected for repair, all of its labels get repaired.
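A sketch of the fork-variant corruption procedure (binary labels and a random provider assignment are simplifying assumptions; the helper name is ours):
\begin{verbatim}
import numpy as np

def corrupt_labels_by_provider(y, n_providers=100, seed=0):
    # Assign each example to a provider, then flip each provider's
    # labels with a rate that ramps from 0% to 100% across providers.
    rng = np.random.default_rng(seed)
    provider = rng.integers(0, n_providers, size=len(y))
    rates = np.linspace(0.0, 1.0, n_providers)
    flip = rng.random(len(y)) < rates[provider]
    y_dirty = np.where(flip, 1 - y, y)  # labels in {0, 1}
    return y_dirty, provider
\end{verbatim}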
\begin{figure*}
\centering
\makeatletter
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=accuracy/providers=0/model=xgb/dataset=UCI/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\@for\vpipeline:={tf-idf,tolower-urlremove-tfidf}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=accuracy/providers=0/model=xgb/dataset=TwentyNewsGroups/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\@for\vpipeline:={gauss-blur,hog-transform}\do{
\begin{subfigure}[b]{.32\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=accuracy/providers=0/model=xgb/dataset=FashionMNIST/pipeline=\vpipeline/report.figure.pdf}
\end{subfigure}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over various combinations of datasets (1k samples) and map pipelines. We optimize for accuracy. The model is XGBoost.}
\label{fig:exp-label-repair-accuracy-map-xgb-1k}
\vspace{2em}
\end{figure*}
\inlinesection{Improving Accuracy with Label Repair.}
In this set of experiments, we aim to improve accuracy as much as possible while examining as few labels as possible.
We show the case of XGBoost in \autoref{fig:exp-label-repair-accuracy-map-xgb-1k}
and leave the other scenarios (logistic regression, $K$-nearest neighbor, original pipelines, and fork variants) to the Appendix (\autoref{fig:exp-label-repair-accuracy-map-logreg-1k} to \autoref{fig:exp-label-repair-accuracy-fork-xgb-1k}). Figures differ with respect to the target model used (XGBoost, logistic regression, or $K$-nearest neighbor) and the type of pipeline (either map or fork). In each figure, we show results for different pairs of dataset and pipeline, and we measure the performance of the target model as well as the time it took to compute the importance scores.
We see that \texttt{DataScope}\xspace is significantly faster than TMC-based methods. The speed-up is on the order of $100\times$ to $1,000\times$ for models such as logistic regression. For models requiring slightly longer training time (e.g., XGBoost), the speed-up can be up to $10,000\times$.
In terms of quality, we see that \texttt{DataScope}\xspace is comparable with or better than the TMC-based methods (mostly for the logistic regression model), with both outperforming the \at{Random} repair method. In certain cases, \texttt{DataScope}\xspace, despite its orders-of-magnitude speed-up, also clearly dominates the TMC-based methods, especially when the pipelines produce high-dimensional features (such as the text-based pipelines used for the \at{20NewsGroups} dataset and the image-based pipelines used for the \at{FashionMNIST} dataset).
\begin{figure*}
\centering
\makeatletter
\@for\vpipeline:={identity,std-scaler,log-scaler,pca,mi-kmeans}\do{
\@for\vutility:={acc,eqodds}\do{
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{figures/scenario=label-repair/trainsize=1k/repairgoal=fairness/providers=0/model=xgb/dirtybias=0.0/dataset=FolkUCI/pipeline=\vpipeline/utility=\vutility/report.figure.pdf}
\end{subfigure}
}
}
\makeatother
\begin{subfigure}[c][][c]{\linewidth}
\begin{center}
\vspace{10pt}
\begin{tikzpicture}
\begin{customlegend}[legend columns=-1,legend style={column sep=5pt}]
\addlegendimage{myblue}\addlegendentry{Random}
\addlegendimage{myred}\addlegendentry{DataScope}
\addlegendimage{myyellow}\addlegendentry{DataScope Interactive}
\addlegendimage{mygreen}\addlegendentry{TMC Shapley x10}
\addlegendimage{mypink}\addlegendentry{TMC Shapley x100}
\end{customlegend}
\end{tikzpicture}
\end{center}
\end{subfigure}
\caption{Label Repair experiment results over the FolkUCI dataset (1k samples) and map pipelines. We optimize for fairness. The model is XGBoost.}
\label{fig:exp-label-repair-fairness-map-xgb-1k}
\vspace{2em}
\end{figure*}
\inlinesection{Improving Accuracy and Fairness.}
We then explore the relationship between accuracy and fairness when performing label repairs.
\autoref{fig:exp-label-repair-fairness-map-xgb-1k} shows the results for XGBoost over original pipelines, and we leave other scenarios to the Appendix (\autoref{fig:exp-label-repair-fairness-map-logreg-1k} to \autoref{fig:exp-label-repair-fairness-fork-xgb-1k}). In these experiments, we only use the two tabular datasets \at{UCI Adult} and \at{Folktables}, which have a `sex' feature that we use to compute group fairness using the \emph{equalized odds difference}, one of the most commonly used fairness metrics~\cite{moritz2016equality}.
We use equalized odds difference as
the utility function for both \texttt{DataScope}\xspace and TMC-based methods.
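For a binary sensitive attribute, this metric can be computed as in the following sketch (this follows the standard definition and is not necessarily our exact implementation; it assumes both groups contain positives and negatives):
\begin{verbatim}
import numpy as np

def equalized_odds_difference(y_true, y_pred, group):
    # Larger of the TPR and FPR gaps between the two groups.
    def rates(mask):
        tpr = np.mean(y_pred[mask & (y_true == 1)])
        fpr = np.mean(y_pred[mask & (y_true == 0)])
        return tpr, fpr
    tpr0, fpr0 = rates(group == 0)
    tpr1, fpr1 = rates(group == 1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))
\end{verbatim}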
We first see that being able to debug specifically for fairness is important --- the left panel of \autoref{fig:exp-label-repair-fairness-map-xgb-1k} illustrates the behavior when optimizing for accuracy, whereas the right panel illustrates the behavior when optimizing for fairness.
In this example, even the 100\% clean dataset is unfair.
When optimizing for accuracy, we see that the unfairness of the model can also increase. On the other hand, when taking fairness into consideration, \at{DataScope Interactive} is able to maintain fairness while improving accuracy significantly --- the unfairness increase of \at{DataScope Interactive} only happens at the very end of the cleaning process, when all other ``fair'' data examples have already been cleaned. This is likely due to the way the equalized odds difference utility is approximated in \at{DataScope}. When computing the utility, we first fix the choice of groups $G_i$ and $G_j$ in \autoref{eq:tpr-diff}, as well as the choice between $TPR_\Delta$ and $FPR_\Delta$ in \autoref{eq:eqodds-diff}; only then do we compute the Shapley value. We assume that these choices remain stable over the entire process of label repair. However, if these choices ought to change, only \at{DataScope Interactive} is able to make the necessary adjustment, because it recomputes the Shapley value after every repair.
In terms of speed, \texttt{DataScope}\xspace significantly outperforms TMC-based methods --- on the order of $100\times$ to $1,000\times$ for models like logistic regression, and up to $10,000\times$ for XGBoost.
In terms of quality, \texttt{DataScope}\xspace is comparable to TMC-based methods, while \at{DataScope Interactive}, in certain cases, dramatically outperforms both \texttt{DataScope}\xspace and the TMC-based methods.
\at{DataScope Interactive} achieves much better fairness (measured by equalized odds difference; the lower, the better) while maintaining similar, if not better, accuracy compared with the other methods.
When optimizing for fairness, we observe that non-interactive methods sometimes suffer in pipelines that use standard scalers.
A likely explanation is that importance scores do not remain stable over the course of the data repair process. Because equalized odds difference is a non-trivial measure, a score that is informative at the beginning of the process may point in the wrong direction after some portion of the labels has been repaired. As a result, being able to recompute data importance frequently, which is enabled by our efficient algorithm, is crucial to effectively navigate and balance accuracy and fairness.
\inlinesection{Scalability.} We now evaluate the quality and speed of \texttt{DataScope}\xspace for larger training datasets. We test the runtime for various sizes of the training set ($10k$--$1M$), the validation set ($100$--$10k$), and the number of features ($100$--$1k$). As expected, the impact of the training set size and the validation set size is roughly linear. Furthermore, we see that even for large datasets, \texttt{DataScope}\xspace can compute Shapley scores in minutes.
When integrated into an interactive data repair workflow, this could have a dramatic impact on the productivity of data scientists. We have clearly observed that Monte Carlo approaches do improve the effectiveness of importance-based data debugging. However, given their lengthy runtime, one could argue that many users would likely prefer not to wait and would consequently end up opting for the random repair approach. What \texttt{DataScope}\xspace offers is a viable alternative that is almost as readily attainable as random repair while providing the significant gains of Shapley-based importance.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.30\linewidth}
\includegraphics[width=\linewidth]{figures/compute-time/report.figure-trainsize.pdf}
\end{subfigure}
\begin{subfigure}[b]{.30\linewidth}
\includegraphics[width=\linewidth]{figures/compute-time/report.figure-valsize.pdf}
\end{subfigure}
\begin{subfigure}[b]{.30\linewidth}
\includegraphics[width=\linewidth]{figures/compute-time/report.figure-numfeatures.pdf}
\end{subfigure}
\caption{Scalability of \texttt{DataScope}\xspace}
\label{fig:exp-compute-time}
\vspace{2em}
\end{figure}
\section{Algorithm Framework: KNN Shapley Over Data Provenance}
\label{sec:framework}
In this section, we describe our method for efficiently computing the Shapley value as defined in \autoref{eq:shap-basic} over the $K$-nearest neighbor accuracy utility as defined in \autoref{eq:knn-acc-definition}. To start off, we need to transform the original Shapley value formula to better fit our setting, where the set of players $\mathcal{X}$ corresponds to variables associated with source data tuples. These tuples then get passed through a feature processing pipeline $f$ to produce a training dataset $\mathcal{D}_{tr}$. According to the semantics described in \autoref{sec:end-to-end-ml-pipelines}, each dataset represents a set of candidate datasets, each one defined by a distinct value assignment $v \in \mathcal{V}_{\mathcal{X}}$.
Therefore, we replace the sum over sets $S$ of active players from \autoref{eq:shap-basic} with a sum over value assignments and obtain the following definition of the Shapley value for the source tuple associated with variable $x_i$:
\begin{equation} \label{eq:shap-pipeline}
\resizebox{0.9\hsize}{!}{$
\varphi_i = \frac{1}{|\mathcal{X}|} \sum_{v \in \mathcal{V}_{\mathcal{X} \setminus \{ x_i \}}}
\frac{u(\mathcal{D}_{tr}[v[x_i \gets 1]]) - u(\mathcal{D}_{tr}[v[x_i \gets 0]])}
{\binom{|\mathcal{X}|-1}{|\mathrm{supp}(v)|}}
$}
\end{equation}
In the above formula, the expression $v[x_i \gets X]$ denotes the same value assignment as $v$, except that we enforce $v(x_i) = X$ for some constant $X$. Also, the support $\mathrm{supp}(v)$ of a value assignment $v$ is the subset of $\mathcal{X}$ consisting of the variables that are assigned the value $1$ by $v$.
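For example, if $\mathcal{X} = \{x_1, x_2, x_3\}$ and $v$ assigns $x_1 \mapsto 0$, $x_2 \mapsto 1$, $x_3 \mapsto 0$, then $\mathrm{supp}(v) = \{x_2\}$, whereas $v[x_1 \gets 1]$ agrees with $v$ everywhere except that $x_1 \mapsto 1$, and thus has support $\{x_1, x_2\}$.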
Observing \autoref{eq:shap-pipeline}, we can notice that the time complexity is still exponential in the number of variables. Our goal in this section is to compute $\varphi_i$ in polynomial time. In \autoref{sec:shapley-with-oracle}, we describe a strategy to partition that large sum and push the exponential computation into a so-called \emph{counting oracle}. Then, in \autoref{sec:oracle-with-add}, we describe a method for computing that oracle using the decision diagrams described in \autoref{sec:additive-decision-diagrams}. We then show in \autoref{sec:add-for-pipeline} that polynomial-time algorithms are indeed possible for the classes of feature processing pipelines described in \autoref{sec:representing-pipelines}. Finally, in \autoref{sec:special-case-1nn}, we describe some further optimizations for the special case of the KNN model with $K=1$.
\subsection{PTIME Shapley Algorithm with PTIME Counting Oracle} \label{sec:shapley-with-oracle}
Starting from \autoref{eq:shap-pipeline} and plugging in the KNN accuracy utility defined in \autoref{eq:knn-acc-definition}, we can expand the expression for computing $\varphi_i$ as follows:
\begin{equation}
\begin{split}
\varphi_i = \frac{1}{N}
\sum_{v \in \mathcal{V}_{\mathcal{X} \setminus \{ x_i \}}}
\sum_{\alpha=1}^{N}
& \mathbbm{1} \{ \alpha = |\mathrm{supp}(v)| \}
\binom{N-1}{\alpha}^{-1} \\
\cdot \sum_{t, t' \in \mathcal{D}_{tr}}
& \mathbbm{1} \{ t = \mathrm{top}_K \mathcal{D}_{tr}[v[x_i \gets 0]] \} \\
\cdot & \mathbbm{1} \{ t' = \mathrm{top}_K \mathcal{D}_{tr}[v[x_i \gets 1]] \} \\
\cdot \sum_{\gamma, \gamma' \in \Gamma}
& \mathbbm{1} \{ \gamma = \mathrm{tally}_{t} \mathcal{D}_{tr}[v[x_i \gets 0]] \} \\
\cdot & \mathbbm{1} \{ \gamma' = \mathrm{tally}_{t'} \mathcal{D}_{tr}[v[x_i \gets 1]] \} \\
\cdot & \mathrm{acc}_{\Delta} (\gamma, \gamma')
\end{split}
\label{eq:shap-transform-1}
\end{equation}
Here, the innermost accuracy gain function is formally defined as
$\mathrm{acc}_{\Delta} (\gamma, \gamma') := \mathrm{acc}(\gamma') - \mathrm{acc}(\gamma)$
where
$\mathrm{acc}(\gamma) := \mathbbm{1}\{ \ell(t_v) = \mathrm{argmax}_{c \in C} \gamma \}$.
The correctness of \autoref{eq:shap-transform-1} follows from the observation that for any value assignment $v \in \mathcal{V}_{\mathcal{X} \setminus \{ x_i \}}$, exactly one combination of $\alpha$, $t$, $t'$, $\gamma$, and $\gamma'$ satisfies all the indicator predicates: there is a single tuple $t$ that is the $K$-th most similar tuple when $v(x_i)=0$, and similarly a single $t'$ when $v(x_i)=1$. Given these \emph{boundary tuples} $t$ and $t'$, the \emph{tally vectors} $\gamma$ and $\gamma'$ are likewise uniquely determined by $\mathcal{D}_{tr}[v[x_i \gets 0]]$ and $\mathcal{D}_{tr}[v[x_i \gets 1]]$, respectively. Hence, each summand of \autoref{eq:shap-pipeline} is counted exactly once.
If we reshuffle the sums in \autoref{eq:shap-transform-1} by pushing them outside, we can isolate the sum over value assignments, together with all the predicates, and obtain the expression for the \emph{counting oracle}, defined as follows:
\begin{equation}
\resizebox{0.9\hsize}{!}{$
\begin{split}
\omega_{t, t'} (\alpha, \gamma, \gamma') :=
\sum_{v \in \mathcal{V}_{\mathcal{X} \setminus \{ x_i \}}}
& \mathbbm{1} \{ \alpha = |\mathrm{supp}(v)| \} \\
\cdot & \mathbbm{1} \{ t = \mathrm{top}_K \mathcal{D}_{tr}[v[x_i \gets 0]] \} \\
\cdot & \mathbbm{1} \{ t' = \mathrm{top}_K \mathcal{D}_{tr}[v[x_i \gets 1]] \} \\
\cdot & \mathbbm{1} \{ \gamma = \mathrm{tally}_t \mathcal{D}_{tr}[v[x_i \gets 0]] \} \\
\cdot & \mathbbm{1} \{ \gamma' = \mathrm{tally}_{t'} \mathcal{D}_{tr}[v[x_i \gets 1]] \}
\end{split}
$}
\label{eq:counting-oracle}
\end{equation}
By isolating the exponential part of the computation into a counting oracle $\omega_{t, t'}$, we can obtain the following simplified formula for computing the Shapley value:
\begin{equation}
\resizebox{0.9\hsize}{!}{$
\varphi_i = \frac{1}{N}
\sum_{t, t' \in \mathcal{D}_{tr}}
\sum_{\alpha=1}^{N}
\binom{N-1}{\alpha}^{-1}
\sum_{\gamma, \gamma' \in \Gamma}
\mathrm{acc}_{\Delta} (\gamma, \gamma')
\omega_{t, t'} (\alpha, \gamma, \gamma')
$}
\label{eq:shap-main}
\end{equation}
In the above formula, we can notice that each sum ranges over a number of elements that is polynomial in the size of the data $|\mathcal{D}_{tr}|$ and the number of variables $|\mathcal{X}|$. Thus, computing the above expression in polynomial time hinges on being able to compute the oracle $\omega_{t, t'}$ in polynomial time. This observation is summarized in the following theorem:
\begin{theorem} \label{thm:counting-oracle}
If each counting oracle $\omega_{t, t'}$ can be computed in polynomial time with respect to the size of data, then the Shapley value $\varphi_i$ for the KNN accuracy utility can also be computed in polynomial time with respect to the size of data.
\end{theorem}
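The reduction translates directly into code. The following is a sketch of evaluating \autoref{eq:shap-main}, assuming a callable \texttt{oracle} implementing $\omega_{t, t'}$ and iterables over boundary tuples and tallies:
\begin{verbatim}
from math import comb

def shapley_from_oracle(N, tuples, tallies, acc_delta, oracle):
    # Direct evaluation of the simplified Shapley formula. Support
    # sizes alpha are restricted to the feasible range 0..N-1;
    # infeasible combinations have oracle value 0 and add nothing.
    phi = 0.0
    for t in tuples:
        for t_p in tuples:
            for alpha in range(N):
                w = 1.0 / comb(N - 1, alpha)
                for g in tallies:
                    for g_p in tallies:
                        phi += (w * acc_delta(g, g_p)
                                * oracle(t, t_p, alpha, g, g_p))
    return phi / N
\end{verbatim}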
\subsection{PTIME Counting Oracle using PSPACE ADDs} \label{sec:oracle-with-add}
Our goal is to efficiently compute the counting oracle $\omega_{t, t'}$ as defined in \autoref{eq:counting-oracle}. Our approach is to model this problem using an Additive Decision Diagram (ADD). ADDs represent Boolean functions $\phi : \mathcal{V}_{\mathcal{X}} \rightarrow \mathcal{E} \cup \{\infty\}$ that map value assignments $v \in \mathcal{V}_{\mathcal{X}}$ to elements of some set $\mathcal{E}$ or to a special invalid element $\infty$. Since we are free to design the set $\mathcal{E}$ based on our problem, we define it as $\mathcal{E} := \{1,...,|\mathcal{X}|\} \times \Gamma \times \Gamma$, where $\Gamma$ is the set of label tally vectors defined in \autoref{eq:tally-vector-set}. We introduce the function $\phi_{t, t'} : \mathcal{V}_{\mathcal{X} \setminus \{ x_i \}} \rightarrow \mathcal{E} \cup \{\infty\}$, which we formally define as follows:
\begin{equation} \label{eq:oracle-add-function}
\begin{split}
\phi_{t, t'}(v) &:= \begin{cases}
\infty, & \mathrm{if} \ t \not\in \mathcal{D}\big[v[x_i \gets 0]\big], \\
\infty, & \mathrm{if} \ t' \not\in \mathcal{D}\big[v[x_i \gets 1]\big], \\
(\alpha, \gamma, \gamma' ), & \mathrm{otherwise} \\
\end{cases} \\
\alpha &:= |\mathrm{supp}(v)| \\
\gamma &:= \mathrm{tally}_{t} \mathcal{D}\big[v[x_i \gets 0]\big] \\
\gamma' &:= \mathrm{tally}_{t'} \mathcal{D}\big[v[x_i \gets 1]\big]
\end{split}
\end{equation}
If we can construct an ADD with a root node $n_{t, t'}$ that computes $\phi_{t, t'}(v)$ exactly as defined in the above expression, then the following equality holds:
\begin{equation}
\omega_{t, t'} (\alpha, \gamma, \gamma') = \mathrm{count}_{(\alpha, \gamma, \gamma')} (n_{t, t'})
\end{equation}
Given that, as discussed in \autoref{sec:additive-decision-diagrams}, the complexity of model counting is $O(|\mathcal{N}| \cdot |\mathcal{E}|)$, and since the size of $\mathcal{E}$ is polynomial in the size of the data, we can derive the following theorem:
\begin{theorem} \label{thm:decision-diagram}
If we can represent $\phi_{t, t'}$ as defined in \autoref{eq:oracle-add-function} with an ADD of size polynomial in $|\mathcal{X}|$ and $|\mathcal{D}_{tr}|$, then we can compute the counting oracle $\omega_{t, t'}$ in time polynomial in $|\mathcal{X}|$ and $|\mathcal{D}_{tr}|$.
\end{theorem}
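For intuition, model counting over an ADD is a bottom-up aggregation. A rough sketch follows, assuming each node exposes \texttt{low}/\texttt{high} children (\texttt{None} at the leaves) and additive edge labels \texttt{a\_low}/\texttt{a\_high}; scalar labels are shown for simplicity, while the tuple-valued labels in $\mathcal{E}$ would be added componentwise:
\begin{verbatim}
INF = object()  # sentinel for the invalid label

def model_counts(node, cache=None):
    # Returns {accumulated label: number of assignments} for the
    # sub-diagram rooted at `node`. With memoization over shared
    # nodes, this runs in O(|N| * |E|) time.
    if cache is None:
        cache = {}
    if node is None:
        return {0: 1}
    if id(node) in cache:
        return cache[id(node)]
    out = {}
    for child, a in ((node.low, node.a_low), (node.high, node.a_high)):
        if a is INF:
            continue  # invalid branch contributes nothing
        for label, c in model_counts(child, cache).items():
            out[label + a] = out.get(label + a, 0) + c
    cache[id(node)] = out
    return out
\end{verbatim}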
\subsection{Constructing PSPACE ADDs for ML Pipelines} \label{sec:add-for-pipeline}
\begin{algorithm}[t!]
\scriptsize
\caption{Compiling a provenance-tracked dataset into ADD.} \label{alg:compile-dataset-to-add}
\begin{algorithmic}[1]
\Function{CompileADD}{}
\Inputs
\Input{$\mathcal{D}$, provenance-tracked dataset;}
\Input{$\mathcal{X}$, set of variables;}
\Input{$t$, boundary tuple;}
\Outputs
\Output{$\mathcal{N}$, nodes of the compiled ADD;}
\Begin
\State $\mathcal{N} \gets \{\}$
\State $\mathcal{P} \gets \{ (x_1, x_2) \in \mathcal{X} \times \mathcal{X} \ : \ \exists t' \in \mathcal{D}, x_1 \in p(t') \ \wedge \ x_2 \in p(t') \}$
\State $\mathcal{X}_{L} \gets $ \Call{GetLeafVariables}{$\mathcal{P}$} \label{alg:cmp:line:leaves}
\For{$\mathcal{X}_{C} \in $ \Call{GetConnectedComponents}{$\mathcal{P}$}} \label{alg:cmp:line:conn-cmp}
\State $\mathcal{N}' \gets $ \Call{ConstructADDTree}{$\mathcal{X}_C \setminus \mathcal{X}_L$} \label{alg:cmp:line:add-tree}
\State $\mathcal{X}' \gets \mathcal{X}_C \setminus \mathcal{X}_L$
\State $\mathcal{D}' \gets \{ t' \in \mathcal{D} \ : \ p(t') \cup \mathcal{X}_C \neq \emptyset \}$
\For{$v \in \mathcal{V}_{\mathcal{X}'}$}
\State $\mathcal{N}_C \gets $ \Call{ConstructADDChain}{$\mathcal{X}_C \cap \mathcal{X}_L$}
\For{$n \in \mathcal{N}_C$}
\State $v' \gets v \cup \{ x(n) \rightarrow 1 \}$
\State $a_H(n) \gets |\{ t' \in \mathcal{D}' \ : \ \mathrm{eval}_{v'} p(t') = 1 \ \wedge \ \sigma(t') \geq \sigma(t) \}|$
\EndFor
\State $\mathcal{N}' \gets $ \Call{AppendToADDPath}{$\mathcal{N}'$, $\mathcal{N}_C$, $v$} \label{alg:cmp:line:append-path}
\EndFor
\State $\mathcal{N} \gets $ \Call{AppendToADDRoot}{$\mathcal{N}$, $\mathcal{N}'$}
\EndFor
\For{$x' \in p(t)$}
\For{$n \in \mathcal{N}$ \textbf{where} $x(n) = x'$}
\State $a_L(n) \gets \infty$
\EndFor
\EndFor
\Return $\mathcal{N}$
\EndFunction
\end{algorithmic}
\end{algorithm}
In this section we describe the procedure for constructing a ADD for a given dataset $\mathcal{D}$ made up of tuples that are annotated with provenance polynomials. The main procedure \textsc{CompileADD} is defined in \autoref{alg:compile-dataset-to-add}. Invoking \textsc{CompileADD}($\mathcal{D}$, $\mathcal{X}$, $t$) constructs an ADD with node set $\mathcal{N}$ that computes the following function:
\begin{equation} \label{eq:oracle-add-function-single}
\phi_{t}(v) := \begin{cases}
\infty, & \mathrm{if} \ t \not\in \mathcal{D}[v], \\
\mathrm{tally}_t \mathcal{D}[v] & \mathrm{otherwise} \\
\end{cases}
\end{equation}
To construct the function defined in \autoref{eq:oracle-add-function}, we need to invoke \textsc{CompileADD} once more by passing $t'$ instead of $t$ in order to obtain another diagram $\mathcal{N}'$. The final diagram is obtained according to the expression defined as $\mathcal{N}[x_i \gets 0] + \mathcal{N}'[x_i \gets 1]$. The size of the resulting diagram will still be bounded by $O(|\mathcal{D}|)$. \todo{Missing $\alpha$.}
We can now examine different types of ML pipelines and see how their structures are reflected onto the ADD's.
\inlinesection{One-to-Many Join Pipeline}
In a \emph{star} database schema, this corresponds to a \emph{join} between a \emph{fact} table and a \emph{dimension} table, where each tuple from the dimension table can be joined with multiple tuples from the fact table. It can be represented by an ADD similar to the one in \autoref{fig:example-add-structure}.
\begin{corollary}
For the $K$-NN accuracy utility and a one-to-many \emph{join} pipeline, which takes as input two datasets, $\mathcal{D}_F$ and $\mathcal{D}_D$, of total size $|\mathcal{D}_F| + |\mathcal{D}_D| = N$ and outputs a joined dataset of size $O(N)$, the Shapley value can be computed in $O(N^4)$ time.
\end{corollary}
\inlinesection{Fork Pipeline}
The key characteristic of a pipeline $f$ that contains only \emph{fork} or \emph{map} operators is that the resulting dataset $f(\mathcal{D})$ has provenance polynomials with only a single variable. This is due to the absence of joins, which are the only operator that results in provenance polynomials with a combination of variables.
\begin{corollary}
For the $K$-NN accuracy utility and a \emph{fork} pipeline, which takes as input a dataset of size $N$ and outputs a dataset of size $M$, the Shapley value can be computed in $O(M^2 N^2)$ time.
\end{corollary}
\inlinesection{Map Pipeline}
A \emph{map} pipeline is similar to \emph{fork} pipeline in the sense that every provenance polynomial contains only a single variable. However, each variable now can appear in a provenance polynomial of \emph{at most} one tuple, in contrast to \emph{fork} pipeline where a single variable can be associated with \emph{multiple} tuples. This additional restriction results in the following corollary:
\begin{corollary}
For the $K$-NN accuracy utility and a \emph{map} pipeline, which takes as input a dataset of size $N$, the Shapley value can be computed in $O(N^2)$ time.
\end{corollary}
\subsection{Otpimizations for the 1-Nearest Neighbor Model} \label{sec:special-case-1nn}
We examine the special case when $K=1$.
Since for each $v \in \mathcal{V}_{\mathcal{X}}$, there is always \emph{exactly} one tuple that is most similar to $t_{v}$.
We now consider how to leverage this observation to construct the counting oracle.
Let $\phi_t(v)$ represent the event when $t$ is the top-$1$ tuple:
\begin{equation} \label{eq:top-1-condition-map-single}
\phi_t :=
p(t) \wedge
\bigwedge_{
\substack{
t' \in f(\mathcal{D}) \\
\sigma(t') > \sigma(t)
}
} \neg p(t').
\end{equation}
By definition, $\phi_t(v) := \mathrm{eval}_v (\phi_t)$. We can notice that, for \autoref{eq:top-1-condition-map-single} to be \emph{true} (i.e. for tuple $t$ to be the top-$1$), all tuples $t'$ where $\sigma(t') > \sigma(t)$ need to be \emph{absent} from the pipeline output. Hence, for a given value assignment, all provenance polynomials that control those tuples need to evaluate to $0$.
We now construct the function
$$\phi_{t, t'}(v) := \phi_t(v[x_i \gets 0]) \wedge \phi_{t'}(v[x_i \gets 1]),$$
which is \emph{true} if $t$ is the top-$1$ tuple when $x_i \gets 0$ and $t'$ is the top-$1$ tuple when $x_i \gets 1$. This corresponds to the condition that our counting oracle counts models for. Expanding $\phi_{t, t'}(v)$, we obtain
\begin{equation} \label{eq:top-1-condition-map}
\resizebox{0.9\hsize}{!}{$
\phi_{t, t'} :=
\Big(
\neg x_i \wedge
p(t) \wedge
\bigwedge_{\substack{
t'' \in f(\mathcal{D}) \\
\sigma(t'') > \sigma(t)
}} \neg p(t'')
\Big)
\wedge
\Big(
x_i \wedge
p(t') \wedge
\bigwedge_{\substack{
t'' \in f(\mathcal{D}) \\
\sigma(t'') > \sigma(t')
}} \neg p(t'')
\Big).
$}
\end{equation}
Similarly, $\phi_{t, t'}(v) := \mathrm{eval}_v (\phi_{t, t'})$. It can only be \emph{true} if $p(t') \iff x_i$ and $\sigma(t) < \sigma(t')$. As a result, all provenance polynomials controlling tuples with a higher similarity score than that of $t$ need to evaluate to $0$. Therefore, the only polynomials that can be allowed to evaluate to $1$ are those corresponding to tuples with similarity score below $t$. Based on these observations, we can express the counting oracle for different types of ML pipelines.
\inlinesection{Map Pipeline}
In a \emph{map} pipeline, the provenance polynomial for each tuple $t \in f(\mathcal{D})$ is defined by a single distinct variable $x \in \mathcal{X}$. Given our observation about $p(t') \iff x_i$, for a \emph{map} pipeline we can translate it to $p(t') = x_i$. Furthermore, if we observe the definition of the counting oracle from \autoref{eq:counting-oracle}, we can see that each oracle $\omega_{t, t'}$ counts the value assignments that result in support size $\alpha$ and label tally vectors $\gamma$ and $\gamma'$.
Given our observation about the provenance polynomials that are allowed to be set to $1$, we can easily construct an expression for counting valid value assignments. Namely, we have to choose exactly $\alpha$ variables out of the set $\{t'' \in \mathcal{D} \ : \ \sigma(t'') < \sigma(t) \}$, which corresponds to tuples with lower similarity than that of $t$. This can be constructed using a \emph{binomial coefficient}. Furthermore, when $K=1$, the label tally $\gamma$ is entirely determined by the top-$1$ tuple $t$. The same observation goes for $\gamma'$ and $t'$. To denote this, we define a constant $\Gamma_L$ that represents a tally vector with all values $0$ and only the value corresponding to label $L$ being set to $1$. We thus need to fix $\gamma$ to be equal to $\Gamma_{\ell (t)}$ (and the same for $\gamma'$). Finally, as we observed earlier, when computing $\omega_{t, t'}$ for $K=1$, the provenance polynomial of the tuple $t'$ must equal $x_i$. With these notions, we can define the counting oracle as
\begin{equation}
\resizebox{0.9\hsize}{!}{$
\omega_{t, t'} (\alpha, \gamma, \gamma') =
\binom{|\{t'' \in \mathcal{D} \ : \ \sigma(t'') < \sigma(t) \}|}{\alpha}
\mathbbm{1} \{ p(t')=x_i \}
\mathbbm{1} \{ \gamma = \Gamma_{\ell(t)} \}
\mathbbm{1} \{ \gamma' = \Gamma_{\ell(t')} \}.
$}
\label{eq:oracle-1nn}
\end{equation}
Note that we always assume $\binom{a}{b}=0$ for all $a < b$. We can prove the following corollary about \emph{map} pipelines:
\begin{corollary}
For the $1$-NN accuracy utility and a \emph{map} pipeline, which takes as input a dataset of size $N$, the Shapley value can be computed in $O(N \log N)$ time.
\end{corollary}
\inlinesection{Fork Pipeline}
As we have noted in the general case of $K$-NN, both \emph{map} and \emph{fork} pipelines result in polynomials made up of only one variable. The difference is that in \emph{map} pipelines each variable is associated with at most one polynomial, while in \emph{fork} pipelines it can be associated with multiple polynomials. However, when $K=1$, this difference vanishes when it comes to Shapley value computation:
\begin{corollary}
For the $1$-NN accuracy utility and a \emph{fork} pipeline, which takes as input a dataset of size $N$, the Shapley value can be computed in $O(N \log N)$ time.
\end{corollary}
\section{Algorithm Framework: KNN Shapley Over Data Provenance}
\label{sec:framework}
We now provide details for our
theoretical results that are
mentioned in \autoref{sec:approach}.
We present an algorithmic framework that efficiently computes the Shapley value over the KNN accuracy utility (defined in \autoref{eq:model-acc-definition} when $\mathcal{A}$ is the KNN model).
Our framework is based on the following key ideas: (1) the computation can be reduced to computing a set of \emph{counting oracles}; (2) we can develop PTIME algorithms to compute such counting oracles for the canonical ML pipelines, by translating their \emph{provenance polynomials} into an Additive Decision Diagram (ADD).
\subsection{Counting Oracles}
We now unpack \autoref{theorem:main}.
Using the notations of data provenance introduced in \autoref{sec:prelim},
we can rewrite the definition of the Shapley value as follows, computing
the value of tuple $t_i$, with
the corresponding variable $a_i \in A$:
\begin{equation} \label{eq:shap-pipeline}
\varphi_i = \frac{1}{|A|} \sum_{v \in \mathcal{V}_{A \setminus \{ a_i \}}}
\frac{u(\mathcal{D}_{tr}^f[v[a_i \gets 1]]) - u(\mathcal{D}_{tr}^f[v[a_i \gets 0]])}
{\binom{|A|-1}{|\mathrm{supp}(v)|}}.
\end{equation}
Here, $v[a_i \gets X]$ represents the same value assignment as $v$, except that we enforce $v(a_i) = X$ for some constant $X$. Moreover, the support $\mathrm{supp}(v)$ of a value assignment $v$ is the subset of variables in $A$ that are assigned value $1$ according to $v$.
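For intuition, the following minimal Python sketch evaluates \autoref{eq:shap-pipeline} by brute force. The helper \texttt{utility} is a hypothetical black box that returns $u(\mathcal{D}_{tr}^f[v])$ for an assignment given as a dictionary from variable names to $0$/$1$; the enumeration is exponential in $|A|$ and serves only as a ground-truth reference on toy inputs.
\begin{Python}[frame=none,numbers=none]
from itertools import product
from math import comb

def shapley_brute_force(i, variables, utility):
    # Ground-truth evaluation of the rewritten Shapley definition
    # by enumerating every assignment over the variables other
    # than a_i; exponential, for small toy inputs only.
    others = [a for a in variables if a != i]
    n = len(variables)
    phi = 0.0
    for bits in product([0, 1], repeat=len(others)):
        v = dict(zip(others, bits))
        gain = utility({**v, i: 1}) - utility({**v, i: 0})
        phi += gain / comb(n - 1, sum(bits))
    return phi / n
\end{Python}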
\inlinesection{Nearest Neighbor Utility.}
When the downstream classifier is
a K-nearest neighbor classifier, we have
additional structure of
the utility function $u(-)$ that we
can take advantage of.
Given a data example $t_{val}$ from the validation dataset, the hyperparameter $K$ controlling the size of the neighborhood and the set of class labels $\mathcal{Y}$, we formally define the KNN utility $u_{t_{val}, K, \mathcal{Y}}$ as follows.
Given the transformed training set
$\mathcal{D}_{tr}^f$,
let $\sigma$ be a scoring
function that computes, for each tuple
$t \in \mathcal{D}_{tr}^f$, its
similarity with
the validation example $t_{val}$:
$\sigma(t, t_{val})$. In the following,
we often write $\sigma(t)$ whenever
$t_{val}$ is clear from the context.
We also omit $\sigma$ when the scoring
function is clear from the context.
Given this scoring function $\sigma$,
the KNN utility can be defined as follows:
\begin{equation}
u_{t_{val}, K, \mathcal{Y}} (\mathcal{D}) :=
u_T
\left( \mathrm{argmax}_{y \in \mathcal{Y}} \Big(
\mathrm{tally}_{y, \mathrm{top}_K \mathcal{D}} (\mathcal{D})
\Big),
t_{val}
\right)
\label{eq:knn-acc-definition}
\end{equation}
where $\mathrm{top}_K \mathcal{D}$ returns the tuple $t$ that ranks at the $K$-th spot when all tuples in $\mathcal{D}$ are ordered by decreasing similarity $\sigma$. Given this tuple $t$ and a class label $y \in \mathcal{Y}$, the $\mathrm{tally}_{y, t}$ operator returns the number of tuples with a similarity score greater than or equal to that of $t$ that have label $y$. We assume a standard majority voting scheme where the predicted label is selected to be the one with the greatest tally ($\arg\max_y$). The accuracy is then computed by simply comparing the predicted label with the label of the validation tuple $t_{val}$.
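To make this definition concrete, the following Python sketch evaluates the KNN utility for a single validation example, assuming the standard accuracy case in which $u_T$ is the $0$/$1$ indicator of a correct prediction; the input representation (a list of similarity/label pairs) is hypothetical.
\begin{Python}[frame=none,numbers=none]
def knn_utility(scored_data, K, y_val):
    # scored_data: list of (similarity, label) pairs for the
    # tuples of the transformed dataset.
    ranked = sorted(scored_data, key=lambda pair: -pair[0])
    if not ranked:
        return 0
    boundary_sim = ranked[min(K, len(ranked)) - 1][0]
    tally = {}
    for sim, label in ranked:
        if sim >= boundary_sim:      # tally over the top-K
            tally[label] = tally.get(label, 0) + 1
    predicted = max(tally, key=tally.get)  # majority vote
    return int(predicted == y_val)
\end{Python}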
Plugging the KNN accuracy utility into \autoref{eq:shap-pipeline}, we can expand the expression for computing $\varphi_i$ as
\begin{equation}
\begin{split}
\varphi_i = \frac{1}{|A|}
\sum_{v \in \mathcal{V}_{A \setminus \{ a_i \}}}
\sum_{\alpha=0}^{|A|-1}
& \mathbbm{1} \{ \alpha = |\mathrm{supp}(v)| \}
\binom{|A|-1}{\alpha}^{-1} \\
\cdot \sum_{t, t' \in \mathcal{D}_{tr}^f}
& \mathbbm{1} \{ t = \mathrm{top}_K \mathcal{D}_{tr}^f[v[a_i \gets 0]] \} \\
\cdot & \mathbbm{1} \{ t' = \mathrm{top}_K \mathcal{D}_{tr}^f[v[a_i \gets 1]] \} \\
\cdot \sum_{\gamma, \gamma' \in \Gamma}
& \mathbbm{1} \{ \gamma = \mathrm{tally}_{t} \mathcal{D}_{tr}^f[v[a_i \gets 0]] \} \\
\cdot & \mathbbm{1} \{ \gamma' = \mathrm{tally}_{t'} \mathcal{D}_{tr}^f[v[a_i \gets 1]] \} \\
\cdot & u_{\Delta} (\gamma, \gamma')
\end{split}
\label{eq:shap-transform-1}
\end{equation}
where $\mathrm{tally}_{t} \mathcal{D} = (\mathrm{tally}_{y_1, t} \mathcal{D}, \ldots, \mathrm{tally}_{y_{|\mathcal{Y}|}, t} \mathcal{D})$
returns a tally vector $\gamma \in \Gamma \subset \mathbb{N}^{|\mathcal{Y}|}$ consisting of the tallied occurrences of each class label $y \in \mathcal{Y}$ among tuples with similarity to $t_{val}$ greater than or equal to that of the boundary tuple $t$.
Let $\Gamma$ denote the set of all possible tally vectors (corresponding to all possible label ``distributions'' over the top-$K$).
Here, the innermost utility gain function is formally defined as
$u_{\Delta} (\gamma, \gamma') := u_{\Gamma}(\gamma') - u_{\Gamma}(\gamma)$, where $u_{\Gamma}$ is defined as
\[
u_{\Gamma}(\gamma) := u_T(\mathrm{argmax}_{y \in \mathcal{Y}} \gamma, t_{val} ).
\]
Intuitively,
$u_{\Delta} (\gamma, \gamma')$
measures the utility
difference between
two different label distributions (i.e., tallies)
of top-$K$ examples: $\gamma$ and $\gamma'$.
$u_T(y, t_{val})$ is the tuple-wise utility for a KNN prediction (i.e., $\mathrm{argmax}_{y \in \mathcal{Y}} \gamma$) and validation tuple $t_{val}$, which is the building block of the \emph{additive utility}.
The correctness of \autoref{eq:shap-transform-1} comes from the observation that for each $v \in \mathcal{V}_{A \setminus \{ a_i \}}$, exactly one combination of values satisfies all indicator functions $\mathbbm{1}$. Namely, there is a single $t$ that is the $K$-th most similar tuple when $v(a_i)=0$, and similarly, a single $t'$ when $v(a_i)=1$. Given those \emph{boundary tuples} $t$ and $t'$, the same goes for the \emph{tally vectors}:
given $\mathcal{D}_{tr}^f[v[a_i \gets 0]]$ and $\mathcal{D}_{tr}^f[v[a_i \gets 1]]$,
there exists a unique
$\gamma$ and $\gamma'$.
We can now define the following \emph{counting oracle} that computes the sum over value assignments, along with all the predicates:
\begin{equation}
\begin{split}
\omega_{t, t'} (\alpha, \gamma, \gamma') :=
\sum_{v \in \mathcal{V}_{A \setminus \{ a_i \}}}
& \mathbbm{1} \{ \alpha = |\mathrm{supp}(v)| \} \\
\cdot & \mathbbm{1} \{ t = \mathrm{top}_K \mathcal{D}_{tr}^f[v[a_i \gets 0]] \} \\
\cdot & \mathbbm{1} \{ t' = \mathrm{top}_K \mathcal{D}_{tr}^f[v[a_i \gets 1]] \} \\
\cdot & \mathbbm{1} \{ \gamma = \mathrm{tally}_t \mathcal{D}_{tr}^f[v[a_i \gets 0]] \} \\
\cdot & \mathbbm{1} \{ \gamma' = \mathrm{tally}_{t'} \mathcal{D}_{tr}^f[v[a_i \gets 1]] \}.
\end{split}
\label{eq:counting-oracle}
\end{equation}
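A brute-force reading of \autoref{eq:counting-oracle} may help: the sketch below groups all assignments by $(\alpha, \gamma, \gamma')$, treating \texttt{boundary} and \texttt{tally} as hypothetical black boxes that evaluate $\mathrm{top}_K$ and $\mathrm{tally}$ on $\mathcal{D}_{tr}^f[v]$ and return hashable values. It is exponential and only meant to make the definition concrete.
\begin{Python}[frame=none,numbers=none]
from itertools import product

def oracle_brute_force(i, variables, boundary, tally, t, t_p):
    counts = {}
    others = [a for a in variables if a != i]
    for bits in product([0, 1], repeat=len(others)):
        v = dict(zip(others, bits))
        v_lo, v_hi = {**v, i: 0}, {**v, i: 1}
        # Keep only assignments with the right boundary tuples.
        if boundary(v_lo) != t or boundary(v_hi) != t_p:
            continue
        key = (sum(bits), tally(t, v_lo), tally(t_p, v_hi))
        counts[key] = counts.get(key, 0) + 1
    return counts   # counts[(alpha, gamma, gamma_p)] = omega
\end{Python}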
Using counting oracles and writing $N := |A|$, we can simplify \autoref{eq:shap-transform-1} as:
\begin{equation}
\varphi_i = \frac{1}{N}
\sum_{t, t' \in \mathcal{D}_{tr}^f}
\sum_{\alpha=0}^{N-1}
\binom{N-1}{\alpha}^{-1}
\sum_{\gamma, \gamma' \in \Gamma}
u_{\Delta} (\gamma, \gamma')
\omega_{t, t'} (\alpha, \gamma, \gamma').
\label{eq:shap-main}
\end{equation}
We see that the computation of $\varphi_i$ will be in \textsf{PTIME} if we can compute the counting oracles $\omega_{t, t'}$ in \textsf{PTIME} (ref. \autoref{theorem:main}).
As we will demonstrate next, this is indeed the case for the canonical pipelines that we focus on in this paper.
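Assuming the oracles are available, evaluating \autoref{eq:shap-main} amounts to a direct triple summation, as in the following sketch; the helpers \texttt{oracle} and \texttt{u\_delta} are hypothetical stand-ins for $\omega_{t,t'}$ and $u_\Delta$.
\begin{Python}[frame=none,numbers=none]
from math import comb

def shapley_from_oracles(tuples, num_vars, oracle, u_delta):
    # oracle(t, t_p) returns a dict mapping each triple
    # (alpha, gamma, gamma_p) to omega_{t,t'}(alpha, gamma, gamma_p).
    phi = 0.0
    for t in tuples:
        for t_p in tuples:
            for (alpha, g, g_p), cnt in oracle(t, t_p).items():
                phi += u_delta(g, g_p) * cnt / comb(num_vars - 1, alpha)
    return phi / num_vars
\end{Python}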
\subsection{Counting Oracles for Canonical Pipelines}
We start by discussing how to compute the counting oracles using ADD's in general.
We then study the canonical ML pipelines in particular and develop \textsf{PTIME} algorithms for them.
\subsubsection{Counting Oracle using ADD's} \label{sec:oracle-with-add}
We use Additive Decision Diagrams (ADD's) to compute the counting oracle $\omega_{t, t'}$ (\autoref{eq:counting-oracle}).
An ADD represents a function over Boolean inputs $\phi : \mathcal{V}_{A} \rightarrow \mathcal{E} \cup \{\infty\}$ that maps value assignments $v \in \mathcal{V}_{A}$ to elements of some set $\mathcal{E}$ or to a special invalid element $\infty$ (see \autoref{sec:additive-decision-diagrams} for more details).
For our purpose, we define
$\mathcal{E} := \{1,...,|A|\} \times \Gamma \times \Gamma$, where $\Gamma$ is the set of label tally vectors.
We then define a function over Boolean inputs $\phi_{t, t'} : \mathcal{V}_{A \setminus \{ a_i \}} \rightarrow \mathcal{E} \cup \{\infty\}$ as follows:
\begin{equation} \label{eq:oracle-add-function}
\begin{split}
\phi_{t, t'}(v) &:= \begin{cases}
\infty, & \mathrm{if} \ t \not\in \mathcal{D}\big[v[a_i \gets 0]\big], \\
\infty, & \mathrm{if} \ t' \not\in \mathcal{D}\big[v[a_i \gets 1]\big], \\
(\alpha, \gamma, \gamma' ), & \mathrm{otherwise}, \\
\end{cases} \\
\alpha &:= |\mathrm{supp}(v)|, \\
\gamma &:= \mathrm{tally}_{t} \mathcal{D}\big[v[a_i \gets 0]\big], \\
\gamma' &:= \mathrm{tally}_{t'} \mathcal{D}\big[v[a_i \gets 1]\big].
\end{split}
\end{equation}
If we can construct an ADD with a root node $n_{t, t'}$ that computes $\phi_{t, t'}(v)$,
then the following equality holds:
\begin{equation}
\omega_{t, t'} (\alpha, \gamma, \gamma') = \mathrm{count}_{(\alpha, \gamma, \gamma')} (n_{t, t'}).
\end{equation}
Given that
the complexity of model counting is $O(|\mathcal{N}| \cdot |\mathcal{E}|)$ (see \autoref{eq:dd-count-recursion}) and the size of $\mathcal{E}$ is polynomial in the size of data, we have
\begin{theorem} \label{thm:decision-diagram}
If we can represent the $\phi_{t, t'}(v)$ in \autoref{eq:oracle-add-function} with an ADD of size polynomial in $|A|$ and $|\mathcal{D}_{tr}^f|$, we can compute the counting oracle $\omega_{t, t'}$ in time polynomial of $|A|$ and $|\mathcal{D}_{tr}^f|$.
\end{theorem}
\subsubsection{Constructing Polynomial-size ADD's for ML Pipelines} \label{sec:add-for-pipeline}
\begin{algorithm}[t!]
\caption{Compiling a provenance-tracked dataset into ADD.} \label{alg:compile-dataset-to-add}
\begin{algorithmic}[1]
\Function{CompileADD}{}
\Inputs
\Input{$\mathcal{D}$, provenance-tracked dataset;}
\Input{$A$, set of variables;}
\Input{$t$, boundary tuple;}
\Outputs
\Output{$\mathcal{N}$, nodes of the compiled ADD;}
\Begin
\State $\mathcal{N} \gets \{\}$
\State $\mathcal{P} \gets \{ (x_1, x_2) \in A \times A \ : \ \exists t \in \mathcal{D}, x_1 \in p(t) \ \wedge \ x_2 \in p(t) \}$
\State $A_{L} \gets $ \Call{GetLeafVariables}{$\mathcal{P}$} \label{alg:cmp:line:leaves}
\For{$A_{C} \in $ \Call{GetConnectedComponents}{$\mathcal{P}$}} \label{alg:cmp:line:conn-cmp}
\State $\mathcal{N}' \gets $ \Call{ConstructADDTree}{$A_C \setminus A_L$} \label{alg:cmp:line:add-tree}
\State $A' \gets A_C \setminus A_L$
\State $\mathcal{D}' \gets \{ t' \in \mathcal{D} \ : \ p(t') \cap A_C \neq \emptyset \}$
\For{$v \in \mathcal{V}_{A'}$}
\State $\mathcal{N}_C \gets $ \Call{ConstructADDChain}{$A_C \cap A_L$}
\For{$n \in \mathcal{N}_C$}
\State $v' \gets v \cup \{ x(n) \rightarrow 1 \}$
\State $w_H(n) \gets |\{ t' \in \mathcal{D}' \ : \ \mathrm{eval}_{v'} p(t') = 1 \ \wedge \ \sigma(t') \geq \sigma(t) \}|$
\EndFor
\State $\mathcal{N}' \gets $ \Call{AppendToADDPath}{$\mathcal{N}'$, $\mathcal{N}_C$, $v$} \label{alg:cmp:line:append-path}
\EndFor
\State $\mathcal{N} \gets $ \Call{AppendToADDRoot}{$\mathcal{N}$, $\mathcal{N}'$}
\EndFor
\For{$x' \in p(t)$}
\For{$n \in \mathcal{N}$ \textbf{where} $x(n) = x'$}
\State $w_L(n) \gets \infty$
\EndFor
\EndFor
\Return $\mathcal{N}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\autoref{alg:compile-dataset-to-add} presents our main procedure \textsc{CompileADD} that constructs an ADD for a given dataset $\mathcal{D}$ made up of tuples annotated with provenance polynomials.
Invoking \textsc{CompileADD}($\mathcal{D}$, $A$, $t$) constructs an ADD with node set $\mathcal{N}$ that computes
\begin{equation} \label{eq:oracle-add-function-single}
\phi_{t}(v) := \begin{cases}
\infty, & \mathrm{if} \ t \not\in \mathcal{D}[v], \\
\mathrm{tally}_t \mathcal{D}[v], & \mathrm{otherwise}. \\
\end{cases}
\end{equation}
We provide a more detailed description of \autoref{alg:compile-dataset-to-add} in \autoref{sec:apx-alg-compile-dataset-to-add-details}.
To construct the function defined in \autoref{eq:oracle-add-function}, we need to invoke \textsc{CompileADD} once more by passing $t'$ instead of $t$ in order to obtain another diagram $\mathcal{N}'$. The final diagram is obtained by $\mathcal{N}[a_i \gets 0] + \mathcal{N}'[a_i \gets 1]$. The size of the resulting diagram will still be bounded by $O(|\mathcal{D}|)$.
We can now examine different types of canonical pipelines and see how their structures are reflected in the ADD's.
As we will see, for each canonical pipeline we can construct an ADD of polynomial size, and therefore, by \autoref{thm:decision-diagram}, the computation of the corresponding counting oracles is in \textsf{PTIME}.
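Before going through the individual pipeline types, the following hypothetical example illustrates the shape of the provenance polynomials each one produces, writing every polynomial as a list of monomials (sets of variables); all tuple and variable names are made up.
\begin{Python}[frame=none,numbers=none]
# map: one variable controls at most one output tuple
map_prov  = {"t1": [{"a1"}], "t2": [{"a2"}]}
# fork: one variable may control several output tuples
fork_prov = {"t1": [{"a1"}], "t2": [{"a1"}], "t3": [{"a2"}]}
# one-to-many join: fact variable times dimension variable
join_prov = {"t1": [{"a1", "a3"}], "t2": [{"a2", "a3"}]}
\end{Python}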
\inlinesection{One-to-Many Join Pipeline.}
In a \emph{star} database schema, this corresponds to a \emph{join} between a \emph{fact} table and a \emph{dimension} table, where each tuple from the dimension table can be joined with multiple tuples from the fact table. It can be represented by an ADD similar to the one in \autoref{fig:example-add-structure}.
\begin{corollary} \label{col:complexity-knn-join}
For the $K$-NN accuracy utility and a one-to-many \emph{join} pipeline, which takes as input two datasets, $\mathcal{D}_F$ and $\mathcal{D}_D$, of total size $|\mathcal{D}_F| + |\mathcal{D}_D| = N$ and outputs a joined dataset of size $O(N)$, the Shapley value can be computed in $O(N^4)$ time.
\end{corollary}
We present the proof in \autoref{sec:apx-complexity-knn-join-proof} in the appendix.
\inlinesection{Fork Pipeline.}
The key characteristic of a pipeline $f$ that contains only \emph{fork} or \emph{map} operators is that the resulting dataset $f(\mathcal{D})$ has provenance polynomials with only a single variable. This is due to the absence of joins, which are the only operators that result in provenance polynomials combining multiple variables.
\begin{corollary} \label{col:complexity-knn-fork}
For the $K$-NN accuracy utility and a \emph{fork} pipeline, which takes as input a dataset of size $N$ and outputs a dataset of size $M$, the Shapley value can be computed in $O(M^2 N^2)$ time.
\end{corollary}
We present the proof in \autoref{sec:apx-complexity-knn-fork-proof} in the appendix.
\inlinesection{Map Pipeline.}
A \emph{map} pipeline is similar to a \emph{fork} pipeline in the sense that every provenance polynomial contains only a single variable. However, each variable can now appear in the provenance polynomial of \emph{at most} one tuple, in contrast to a \emph{fork} pipeline, where a single variable can be associated with \emph{multiple} tuples. This additional restriction results in the following corollary:
\begin{corollary} \label{col:complexity-knn-map}
For the $K$-NN accuracy utility and a \emph{map} pipeline, which takes as input a dataset of size $N$, the Shapley value can be computed in $O(N^2)$ time.
\end{corollary}
We present the proof in \autoref{sec:apx-complexity-knn-map-proof} in the appendix.
\subsection{Special Case: 1-Nearest-Neighbor Classifiers} \label{sec:special-case-1nn}
We can significantly reduce the time complexity for 1-NN classifiers, an important special case of $K$-NN classifiers that is commonly used in practice.
For each validation
tuple
$t_{val}$, there is always \emph{exactly} one tuple that is most similar to $t_{val}$.
Below we illustrate how to leverage this observation to construct the counting oracle.
In the following, we
assume that $a_i$
is the variable corresponding
to the tuple for which we hope to compute the Shapley value.
Let $\phi_t$ represent the event when $t$ is the top-$1$ tuple:
\begin{equation} \label{eq:top-1-condition-map-single}
\phi_t :=
p(t) \wedge
\bigwedge_{
\substack{
t' \in f(\mathcal{D}_{tr}) \\
\sigma(t') > \sigma(t)
}
} \neg p(t').
\end{equation}
For \autoref{eq:top-1-condition-map-single} to be \emph{true} (i.e. for tuple $t$ to be the top-$1$), all tuples $t'$ where $\sigma(t') > \sigma(t)$ need to be \emph{absent} from the pipeline output. Hence, for a given value assignment $v$, all provenance polynomials that control those tuples, i.e., $p(t')$, need to evaluate to \textsf{false}.
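A minimal sketch of this check, with hypothetical helpers \texttt{sim} (mapping tuples to similarity scores) and \texttt{prov\_eval} (evaluating a provenance polynomial under an assignment), reads:
\begin{Python}[frame=none,numbers=none]
def is_top1(t, sim, prov_eval, v):
    # t is the top-1 tuple iff p(t) holds and every tuple with
    # a strictly higher similarity score is absent.
    if not prov_eval(t, v):
        return False
    return all(not prov_eval(u, v) for u in sim if sim[u] > sim[t])
\end{Python}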
We now construct the event
\[
\phi_{t, t'} := \phi_t[a_i/\textsf{false}] \wedge \phi_{t'}[a_i/\textsf{true}],
\]
where $\phi_t[a_i/\textsf{false}]$ means substituting
all appearances of $a_i$ in $\phi_t$
with \textsf{false}. This event happens
only if $t$ is the top-$1$ tuple
when $a_i$ is \textsf{false} and $t'$ is the top-$1$ tuple when $a_i$ is \textsf{true}. This is exactly the condition for which our counting oracle counts models. Expanding $\phi_{t, t'}$, we obtain
\begin{equation} \label{eq:top-1-condition-map}
\phi_{t, t'} :=
\Big(
p(t) \wedge
\bigwedge_{\substack{
t'' \in f(\mathcal{D}_{tr}) \\
\sigma(t'') > \sigma(t)
}} \neg p(t'')
\Big)[a_i/\textsf{false}]
\wedge
\Big(
p(t') \wedge
\bigwedge_{\substack{
t'' \in f(\mathcal{D}_{tr}) \\
\sigma(t'') > \sigma(t')
}} \neg p(t'')
\Big)[a_i/\textsf{true}].
\end{equation}
Note that $\phi_{t, t'}$ can only be \emph{true} if $p(t')$ is true
when $a_i$ is \textsf{true}
and $\sigma(t) < \sigma(t')$. As a result, all provenance polynomials corresponding to tuples with a higher similarity score than that of $t$ need to evaluate to \textsf{false}. Therefore, the only polynomials that can be allowed to evaluate to \textsf{true} are those corresponding to tuples with a lower similarity score than that of $t$. Based on these observations, we can express the counting oracle for different types of ML pipelines.
\inlinesection{Map Pipeline.}
In a \emph{map} pipeline, the provenance polynomial for each tuple $t \in f(\mathcal{D}_{tr})$ is defined by a single distinct variable $a_t \in A$.
Furthermore, from the definition of the counting oracle (\autoref{eq:counting-oracle}), we can see that each $\omega_{t, t'}$ counts the value assignments that result in support size $\alpha$ and label tally vectors $\gamma$ and $\gamma'$.
Given our observation about the provenance polynomials that are allowed to be set to \textsf{true}, we can construct an expression for counting valid value assignments. Namely, we have to choose exactly $\alpha$ variables out of those corresponding to the set $\{t'' \in f(\mathcal{D}_{tr}) \ : \ \sigma(t'') < \sigma(t) \}$ of tuples with lower similarity than that of $t$, which is expressed by a \emph{binomial coefficient}. Furthermore, when $K=1$, the label tally $\gamma$ is entirely determined by the top-$1$ tuple $t$; the same observation goes for $\gamma'$ and $t'$. To denote this, we define a constant $\Gamma_L$, parameterized by some label $L$, that represents a tally vector with all values $0$ except the value corresponding to label $L$, which is set to $1$. We thus need to fix $\gamma$ to be equal to $\Gamma_{y(t)}$ (and likewise $\gamma'$ to $\Gamma_{y(t')}$). Finally, as we observed earlier, when computing $\omega_{t, t'}$ for $K=1$, the provenance polynomial of the tuple $t'$ must equal $a_i$. With these notions, we can define the counting oracle as
\begin{equation}
\omega_{t, t'} (\alpha, \gamma, \gamma') =
\binom{|\{t'' \in f(\mathcal{D}_{tr}) \ : \ \sigma(t'') < \sigma(t) \}|}{\alpha}
\mathbbm{1} \{ p(t')=a_i \}
\mathbbm{1} \{ \gamma = \Gamma_{y(t)} \}
\mathbbm{1} \{ \gamma' = \Gamma_{y(t')} \}.
\label{eq:oracle-1nn}
\end{equation}
Note that we always assume $\binom{a}{b}=0$ for all $a < b$. Given this,
we can prove the following corollary about \emph{map} pipelines:
\begin{corollary} \label{col:complexity-1nn-map}
For the $1$-NN accuracy utility and a \emph{map} pipeline, which takes as input a dataset of size $N$, the Shapley value can be computed in $O(N \log N)$ time.
\end{corollary}
We present the proof in \autoref{sec:apx-complexity-1nn-map-proof} in the appendix.
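For concreteness, the following sketch evaluates \autoref{eq:oracle-1nn} directly; \texttt{sim}, \texttt{label}, and \texttt{prov\_is\_ai} are hypothetical helpers that map tuples to similarity scores and class labels and test whether $p(t')$ equals $a_i$.
\begin{Python}[frame=none,numbers=none]
from math import comb

def one_hot(label, classes):
    # The tally vector Gamma_L: zero everywhere except at L.
    return tuple(int(c == label) for c in classes)

def oracle_1nn_map(alpha, gamma, gamma_p, t, t_p,
                   sim, label, classes, prov_is_ai):
    lower = sum(1 for u in sim if sim[u] < sim[t])
    if not prov_is_ai(t_p):
        return 0
    if gamma != one_hot(label[t], classes):
        return 0
    if gamma_p != one_hot(label[t_p], classes):
        return 0
    return comb(lower, alpha)   # 0 whenever alpha > lower
\end{Python}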
\inlinesection{Fork Pipeline.}
As we noted, both \emph{map} and \emph{fork} pipelines result in polynomials made up of only one variable. The difference is that in a \emph{map} pipeline each variable is associated with at most one polynomial, whereas in a \emph{fork} pipeline it can be associated with multiple polynomials. However, for 1-NN classifiers, this difference vanishes when it comes to Shapley value computation:
\begin{corollary} \label{col:complexity-1nn-fork}
For the $1$-NN accuracy utility and a \emph{fork} pipeline, which takes as input a dataset of size $N$, the Shapley value can be computed in $O(N \log N)$ time.
\end{corollary}
We present the proof in \autoref{sec:apx-complexity-1nn-fork-proof} in the appendix.
\section{Implementation}
We integrate\footnote{\url{https://github.com/schelterlabs/arguseyes/blob/datascope/arguseyes/refinements/_data_valuation.py}} our Shapley value computation approach into the {\em ArgusEyes}~\cite{schelter2022screening} platform. ArgusEyes leverages the {\em mlinspect}~\cite{grafberger2022data,grafberger2021mlinspect} library to execute and instrument Python-based ML pipelines that combine code from popular data science libraries such as pandas, scikit-learn and keras. During execution, mlinspect extracts a ``logical query plan'' of the pipeline operations, modeling them as a dataflow computation with relational operations (e.g., originating from pandas) and ML-specific operations (e.g., feature encoders) which are treated as extended projections. Furthermore, ArgusEyes captures and materialises the relational inputs of the pipeline (e.g., CSV files read via pandas) and the numerical outputs of the pipeline (e.g., the labels and feature matrix for the training data).
The existing abstractions in mlinspect map well to our pipeline model from \autoref{sec:problem-formal}, where we postulate that a pipeline maps a set of relational inputs $\mathcal{D}_{tr} = \{ \mathcal{D}_{e}, \mathcal{D}_{s_1},\dots,\mathcal{D}_{s_k} \}$ to vectorised labeled training examples $\{z_i = (x_i, y_i)\}$ for a subsequent ML model. Furthermore, mlinspect has built-in support for computing polynomials for the why-provenance~\cite{green2007provenance} of its outputs. This provenance information allows us to match a pipeline to its canonical counterpart as defined in \autoref{sec:approach-characteristics} and apply our techniques from \autoref{sec:framework} to compute Shapley values for the input data.
\autoref{lst:arguseyes} depicts a simplified implementation of data valuation for map pipelines in ArgusEyes. As discussed, the ArgusEyes platform executes the pipeline and extracts and materialises its relational \texttt{inputs}, its numerical \texttt{outputs} and the corresponding \texttt{provenance} polynomials. First, we compute Shapley values for the labeled rows $\{z_i = (x_i, y_i)\}$ of the training feature matrix produced by the pipeline, based on our previously published efficient KNN Shapley method~\cite{Jia2019-kz} (lines~10-13). Next, we retrieve the materialised relational input \texttt{fact\_table} for $\mathcal{D}_e$ (the ``fact table'' in cases of multiple inputs in a star schema), as well as the provenance polynomials \texttt{provenance\_fact\_table} for $\mathcal{D}_e$ and \texttt{provenance\_X\_train} for the training samples $\{z_i\}$~(lines~15-17). Finally, we annotate the rows of the \texttt{fact\_table} with a new column \texttt{shapley\_value} where we store the computed Shapley value for each input row. We assign the values by matching the provenance polynomials of $\mathcal{D}_e$ and $\{z_i = (x_i, y_i)\}$~(lines~19-24).
\begin{Python}[frame=none,captionpos=b,texcl=true,numbers=left,xleftmargin=.25in,caption={Simplified implementation of data valuation for map pipelines in ArgusEyes.},label={lst:arguseyes}]
class DataValuation():
    def compute(self, inputs, outputs, provenance):
        # Access captured pipeline outputs
        # (train/test features and labels)
        X_train = outputs[Output.X_TRAIN]
        X_test = outputs[Output.X_TEST]
        y_train = outputs[Output.Y_TRAIN]
        y_test = outputs[Output.Y_TEST]
        # Compute Shapley values via KNN approximation
        shapley_values = self._shapley_knn(
            X_train, y_train, self.k,
            X_test[:self.num_test_samples, :],
            y_test[:self.num_test_samples, :])
        # Input data and provenance
        fact_table = inputs[Input.FACT_TABLE]
        provenance_fact_table = provenance[Input.FACT_TABLE]
        provenance_X_train = provenance[Output.X_TRAIN]
        # Annotate input tuples with their Shapley values
        for polynomial, shapley_value in \
                zip(provenance_X_train, shapley_values):
            for entry in polynomial:
                if entry.input_id == fact_table.input_id:
                    row = provenance_fact_table.row_id(entry.tuple_id)
                    fact_table.at[row, 'shapley_value'] = shapley_value
\end{Python}
We provide an executable end-to-end example for data valuation over complex pipelines in the form of a Jupyter notebook at \textcolor{blue}{\url{https://github.com/schelterlabs/arguseyes/blob/datascope/arguseyes/example_pipelines/demo-shapley-pipeline.ipynb}}, which shows how to leverage ArgusEyes to identify mislabeled samples in a computer vision pipeline\footnote{\url{https://github.com/schelterlabs/arguseyes/blob/datascope/arguseyes/example_pipelines/product-images.py}} on images of fashion products.
\section{Introduction}
The last decade has witnessed the rapid advancement of machine learning (ML), and with it the advancement of \textit{machine learning systems}~\cite{Ratner2019-kt}. Thanks to these advancements, training a machine learning model has never been easier for practitioners --- distributed learning over hundreds of devices~\cite{Liu2020-hb,Li2020-xr,Gan2021-kk, Sergeev2018-om, Jiang2020-qu} and tuning hyper-parameters and selecting the best model~\cite{noauthor_undated-ap, Zoph2016-bz,Feurer2015-cl} have all become much more systematic and less mysterious.
Moreover, all major cloud service providers now support AutoML and other model training and serving services.
\vspace{0.5em}
\noindent
{\bf \em Data-centric Challenges and Opportunities.} Despite these great advancements, a new collection of challenges has started to emerge in building better machine learning applications.
One observation getting great attention recently is that
\textit{the quality of
a model is often a reflection
of the quality of the underlying
training data}. As a result,
often the most practical and
efficient way of improving
ML model quality is to
improve data quality.
Consequently, researchers have recently studied how to conduct
data cleaning~\cite{Krishnan_undated-mf,karlas2020nearest}, data debugging~\cite{koh2017understanding, koh2019accuracy,ghorbani2019data,jia2019towards,Jia2021-zf, Jia2019-kz}, and data acquisition~\cite{Ratner2017-aw}, specifically for
the purpose of improving an ML model.
\vspace{0.3em}
\noindent
{\bf \em Data Debugging via Data Importance.}
In this paper, we focus on
the fundamental problem of
reasoning about the
\textit{importance of
training examples with respect to some utility functions
(e.g., validation accuracy and fairness) of the trained ML model.} There have been intensive
recent interests to develop methods for
reasoning about data importance. These
efforts can be categorized into two
different views. The \textit{Leave-One-Out (LOO)}
view of this problem tries to calculate,
given a training set $\mathcal{D}$,
the importance of a data example $x \in \mathcal{D}$ modeled as the \emph{utility} decrease after removing this data example: $U(\mathcal{D}) - U(\mathcal{D} \backslash x)$.
To scale up this process over a large dataset,
researchers have been developing approximation
methods such as \textit{influence function}
for a diverse set of ML models~\cite{koh2017understanding}.
On the other hand, the \textit{Expected-Improvement (ExpI)} view
of this problem tries to
calculate such a utility decrease over
\textit{all possible subsets of $\mathcal{D}$}.
Intuitively, this line of work models data
importance as an ``expectation'' over
all possible subsets/sub-sequences of $\mathcal{D}$, instead of trying to reason about it solely on a single
training set.
One particularly popular approach is
to use Shapley value~\cite{ghorbani2019data,jia2019towards,Jia2021-zf},
a concept in game theory that has been
applied to data importance
and data valuation~\cite{Jia2019-kz}.
\vspace{0.3em}
\noindent
{\bf \em Shapley-based Data Importance.}
In this paper, we do not champion one
view over the other (i.e., LOO vs. ExpI).
We restrict our scope to Shapley-based methods, since
previous work has shown applications
that can only be supported by such
methods, owing to the favorable
properties guaranteed by the Shapley value.
Furthermore, taking expectations
can sometimes provide a more reliable
importance measure~\cite{Jia2019-kz} than
simply relying on a single dataset.
Nevertheless,
we believe that it is important
for future ML systems to support both
and we hope that this paper
can inspire future research in data importance for both the LOO and ExpI views.
One key challenge of Shapley-based
data importance is its computational
complexity --- in the worst case,
it needs to enumerate \textit{exponentially}
many subsets. There have been different
ways to \emph{approximate} this computation, either
with MCMC~\cite{ghorbani2019data} and group testing~\cite{jia2019towards} or
proxy models such as K-nearest neighbors (KNN)~\cite{Jia2021-zf}.
One surprising result is that
Shapley-based data importance
can be calculated efficiently (in
\textit{polynomial} time) for
KNN classifiers~\cite{Jia2021-zf}, and
using this as a proxy for
other classifiers performs well
over a diverse range of tasks~\cite{Jia2019-kz}.
\vspace{0.3em}
\noindent
{\bf \em Data Importance over Pipelines.}
Existing methods for computing Shapley values~\cite{ghorbani2019data,jia2019towards,Jia2021-zf,Jia2019-kz} are designed to directly operate on a single numerical input dataset for an ML model, typically in matrix form. However, in real-world ML applications, this data is typically generated on the fly from multiple data sources with an ML pipeline. Such pipelines often take multiple datasets as input, and transform them into a single numerical input dataset with relational operations (such as joins, filters, and projections) and common feature encoding techniques, often based on nested estimator/transformer pipelines, which are integrated into popular ML libraries such as scikit-learn~\cite{pedregosa2011scikit}, SparkML~\cite{meng2016mllib} or Google~TFX~\cite{baylor2017tfx}. It is an open problem how to apply Shapley-value computation in such a setup.
\autoref{lst:example} shows a toy example of such an end-to-end ML pipeline, which includes relational operations from pandas for data preparation (lines~3-9), a nested estimator/transformer pipeline for encoding numerical, categorical, and textual attributes as features (lines~12-16), and an ML model from scikit-learn (line~18). The code loads the data, splits it temporally into training and test datasets, `fits' the pipeline to train the model, and evaluates the predictive quality on the test dataset. This leads us to the key question we pose in this work:
\begin{quote}
\textit{Can we efficiently
compute Shapley-based data importance
over such an end-to-end ML pipeline with \underline{both} data processing and
ML training?}
\end{quote}
\vspace{0.3em}
\noindent
{\bf \em Technical Contributions.}
We present
\texttt{Ease.ML}/\texttt{DataScope}\xspace, the first
system
that efficiently computes
and approximates Shapley
value over end-to-end ML pipelines.
\texttt{DataScope}\xspace takes as input
an ML pipeline (e.g.,
a \texttt{sklearn} pipeline)
and a given utility function,
and outputs
the importance, measured
as the Shapley value, of each input tuple of the ML pipeline.
\autoref{lst:example} (lines 21-25) gives a simplified illustration of this core functionality provided by \texttt{DataScope}\xspace.
A user points \texttt{DataScope}\xspace to the pipeline code, and \texttt{DataScope}\xspace executes the pipeline, extracts the input data, which is annotated with the corresponding Shapley value per input tuple. The user could then, for example, retrieve and inspect the least useful input tuples.
We present several use cases of how these importance values can be used, including label denoising and
fairness debugging, in \autoref{sec:evaluation}.
We make the following contributions in developing \texttt{DataScope}\xspace.
\vspace{0.3em}
\textbf{Our first technical contribution}
is to jointly analyze Shapley-based
data importance together with a \textit{feature extraction pipeline}. To the best of our knowledge,
this is the first time that
these two concepts are
analyzed together.
We first show
that we can develop a \textit{PTIME algorithm}
given a counting oracle relying on data provenance.
We then show that,
for a collection of
``canonical pipelines'', which covers many real-world
pipelines~\cite{psallidas2019data} (see \autoref{tbl:pipelines} in \autoref{sec:evaluation} for examples),
this counting oracle itself can be implemented
in polynomial time. This provides
an efficient algorithm
for computing Shapley-based
data importance
over these ``canonical pipelines''.
\vspace{0.3em}
\textbf{Our second technical contribution}
is to understand and further adapt
our technique in the context of real-world ML pipelines.
We identify scenarios, drawn from a corpus of 500K real-world ML pipelines~\cite{psallidas2019data}, where our techniques
cannot be directly applied to obtain PTIME algorithms.
We introduce
a set of simple yet effective
approximations and optimizations
to further improve the performance on these scenarios.
\vspace{0.3em}
\textbf{Our third technical contribution}
is an extensive empirical study of
\texttt{DataScope}\xspace. We show that for a
diverse range of ML pipelines,
\texttt{DataScope}\xspace
provides effective approximations to support
a range of applications to improve
the accuracy and fairness
of an ML pipeline.
Compared with strong
state-of-the-art methods based on
Monte Carlo sampling,
\texttt{DataScope}\xspace can be up to four orders of magnitude faster while being comparably, and often even more, effective in data debugging.
\section{Preliminaries}
\input{problem}
\input{approach}
\input{framework}
\input{evaluation}
\input{related}
\input{conclusion}
\bibliographystyle{ACM-Reference-Format}
\section{Preliminaries}
\label{sec:prelim}
In this section we describe several concepts from existing research that we use as the basis for our contributions. Specifically, (1) we present the definition of machine learning pipelines and their semantics, and (2) we describe decision diagrams as a tool for the compact representation of Boolean functions.
\subsection{End-to-end ML Pipelines} \label{sec:end-to-end-ml-pipelines}
An end-to-end ML application consists
of two components: (1) a feature
extraction pipeline, and (2) a
downstream ML model. To conduct
a joint analysis over one such
end-to-end application,
we need a precise definition, which we defer to \autoref{sec:problem-formal}.
One important component of our
analysis relies on the \textit{provenance} of
the feature extraction pipeline,
which we discuss next.
\begin{figure}
\centering
\resizebox{0.8\columnwidth}{!}{%
\begin{tikzpicture}[align=center, node distance=5mm and 7mm, line width=0.8pt]
\tikzstyle{free} = [inner sep=5pt]
\tikzstyle{box} = [draw, rectangle, fill=myyellow, inner sep=5pt, minimum width=2cm, minimum height=1cm]
\node[free] (join) {$\bowtie$};
\node[free] (dataset1) [above=of join] {$\mathcal{D}_1$};
\node[free] (dataset2) [below=of join] {$\mathcal{D}_2$};
\node[box] (o1) [right=of join] {Missing Value \\ Filter};
\node[box] (o2) [right=of o1] {Data \\ Augmentation};
\node[box] (o3) [right=of o2] {Min-Max \\ Scaler};
\node[box] (model) [right=of o3] {ML Model};
\node[free] (val_dataset) [above=of model] {$\mathcal{D}_{val}$};
\node[free] (acc) [below=of model] {$u$};
\draw[-stealth] (dataset1) -- (join);
\draw[-stealth] (dataset2) -- (join);
\draw[-stealth] (join) -- node[above] {$\mathcal{D}_{tr}$} (o1);
\draw[-stealth] (o1) -- (o2);
\draw[-stealth] (o2) -- (o3);
\draw[-stealth] (o3) -- (model);
\draw[-stealth] (val_dataset) -- (model);
\draw[-stealth] (model) -- (acc);
\end{tikzpicture}
}
\vspace{-2em}
\caption{Representative example of an ML pipeline.}
\vspace{2em}
\label{fig:example-pipeline}
\end{figure}
\inlinesection{Provenance Tracking.}
Input examples (tuples) in $\mathcal{D}_{tr}$
are transformed by a feature processing pipeline before being turned into a
processed training dataset $\mathcal{D}_{tr}^f := f(\mathcal{D}_{tr})$, which is directly used to train the model. To compute the importance
of examples in $\mathcal{D}_{tr}$,
it is useful to relate the presence of tuples in $\mathcal{D}_{tr}^f$ to the presence of tuples in $\mathcal{D}_{tr}$. In other words, we need to know the \emph{provenance} of training tuples. In this paper, we
rely on the well-established theory of provenance semirings \cite{green2007provenance} to
describe such provenance.
We associate a variable $a_t \in A$ with every tuple $t$ in the training dataset $\mathcal{D}_{tr}$.
We define \emph{value assignments} $v : A \rightarrow \mathbb{B}$ to
describe whether a given tuple $t$
appears in $\mathcal{D}_{tr}$ --- by
setting $v(a_t)=0$, we ``exclude'' $t$
from $\mathcal{D}_{tr}$ and by setting
$v(a_t)=1$, we ``include''
$t$ in $\mathcal{D}_{tr}$.
Let $\mathcal{V}_A$
be the set of all possible such
value assignments ($|\mathcal{V}_A| = 2^{|A|}$).
We use
\[
\mathcal{D}_{tr}[v] = \{t \in \mathcal{D}_{tr} | v(a_t) \ne 0\}
\]
to denote a subset of training
examples, only containing tuples
$t$ whose corresponding variable in $a_t$ is
set to 1 according to $v$.
To describe the association between
$\mathcal{D}_{tr}$ and its transformed
version $\mathcal{D}_{tr}^f$, we
annotate each potential tuple in
$\mathcal{D}_{tr}^f$ with an attribute $p : \mathbb{D} \rightarrow \mathbb{B}[A]$ containing its \emph{provenance polynomial}~\cite{green2007provenance}, which is a logical formula with variables in $A$ and binary coefficients (e.g., $a_1 + a_2 \cdot a_3$) --- under a value assignment, $p(t)$ evaluates to true exactly when tuple $t$ appears in the corresponding transformed dataset. For such polynomials, an \emph{addition} corresponds to a \emph{union} operator in the ML pipeline, and a \emph{multiplication} corresponds to a \emph{join} operator.
\autoref{fig:patterns}
illustrates some examples
of the association between
$a_t$ and $p(t)$.
Given a value assignment $v \in \mathcal{V}_{A}$, we can define an evaluation function $\mathrm{eval}_v (\phi)$ that returns the \emph{evaluation} of a
provenance polynomial $\phi$
under the assignment $v$. We can then obtain the corresponding \emph{transformed dataset} by evaluating the provenance polynomials of all tuples, as follows:
\begin{equation} \label{eq:candidate-dataset}
\mathcal{D}_{tr}^f[v] := \{ t \in \mathcal{D}_{tr}^f \ | \ \mathrm{eval}_v (p(t)) \neq 0 \}
\end{equation}
Intuitively, $\mathcal{D}_{tr}^f[v]$
corresponds to the result of applying
the feature transformation $f$
over a subset of training examples
that only contains tuples $t$ whose
corresponding variable $a_t$ is
set to 1.
Using this approach, given a feature processing pipeline $f$ and a value assignment $v$, we can obtain the
transformed training set $\mathcal{D}_{tr}^f[v] = f(\mathcal{D}_{tr}[v])$.
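As a minimal illustration of these semantics, a provenance polynomial can be stored as a list of monomials (each a set of variables), under which evaluation and the construction of $\mathcal{D}_{tr}^f[v]$ become one-liners; this representation is a sketch, not the implementation used in our system.
\begin{Python}[frame=none,numbers=none]
# a1 + a2*a3 is stored as [{"a1"}, {"a2", "a3"}]: addition
# mirrors union, multiplication mirrors join.

def eval_polynomial(p, v):
    # eval_v(p): true iff some monomial has all variables set to 1.
    return any(all(v[a] == 1 for a in m) for m in p)

def transformed_subset(output_tuples, prov, v):
    # D_tr^f[v]: output tuples whose polynomial is true under v.
    return [t for t in output_tuples if eval_polynomial(prov[t], v)]
\end{Python}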
\subsection{Additive Decision Diagrams (ADD's)} \label{sec:additive-decision-diagrams}
\inlinesection{Knowledge Compilation.}
Our approach of computing the Shapley value will rely upon being able to construct functions over Boolean inputs $\phi : \mathcal{V}_A \rightarrow \mathcal{E}$, where $\mathcal{E}$ is some finite \emph{value set}. We require an elementary algebra with $+$, $-$, $\cdot$ and $/$ operations to be defined for this value set. Furthermore, we require this value set to contain a \emph{zero element} $0$, as well as an \emph{invalid element} $\infty$ representing an undefined result (e.g. a result that is out of bounds).
We then need to count the number of value assignments $v \in \mathcal{V}_A$ such that $\phi(v) = e$, for some specific value $e \in \mathcal{E}$. This is referred to as the \emph{model counting} problem, which is \#\textsf{P}-complete for arbitrary logical formulas~\cite{Valiant1979-jp,arora2009computational}. For example, if $A = \{a_1, a_2, a_3\}$, we can define $\mathcal{E} = \{0,1,2,3, \infty\}$ to be a value set and a function $\phi(v) := v(a_1) + v(a_2) + v(a_3)$ corresponding to the number of variables in $A$ that are set to $1$ under some value assignment $v \in \mathcal{V}_A$.
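A brute-force model counter makes the problem statement concrete, and also shows the exponential blow-up that knowledge compilation is designed to avoid:
\begin{Python}[frame=none,numbers=none]
from itertools import product

def model_count(phi, variables, e):
    # Counts assignments v with phi(v) == e by exhaustive search.
    return sum(1 for bits in product([0, 1], repeat=len(variables))
               if phi(dict(zip(variables, bits))) == e)

# The example from the text: phi counts how many variables are set.
phi = lambda v: v["a1"] + v["a2"] + v["a3"]
assert model_count(phi, ["a1", "a2", "a3"], 2) == 3
\end{Python}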
\emph{Knowledge compilation}~\cite{cadoli1997survey} has been developed as a well-known approach to tackle this model counting problem.
It was also successfully applied to various problems in data management~\cite{Jha2011}.
One key result from this line of work is that, if we can construct certain polynomial-size data structures to represent our logical formula, then we can perform model counting in polynomial time. Among the most notable of such data structures are \emph{decision diagrams}, specifically binary decision diagrams~\cite{lee1959bdd,bryant1986bdd} and their various derivatives~\cite{bahar1997algebric,sanner2005affine, lai1996formal}.
For our purpose in this paper, we use the \emph{additive decision diagrams} (ADD), as detailed below.
\vspace{-0.5em}
\inlinesection{Additive Decision Diagrams (ADD).}
We define a simplified version of the \emph{affine algebraic decision diagrams}~\cite{sanner2005affine}.
An ADD is a directed acyclic graph defined over a set of nodes $\mathcal{N}$ and a special \emph{sink node} denoted $\boxdot$. Each node $n \in \mathcal{N}$ is associated with a variable $a(n)\in A$.
Each node has two outgoing edges,
$c_L(n)$ and $c_H(n)$, that point to its \emph{low} and \emph{high} child nodes, respectively.
For a value assignment $v$, the low/high edge corresponds to $v(a(n))=0$/$v(a(n))=1$. Furthermore, each low/high edge carries an increment $w_L(n)$/$w_H(n)$ that takes values in $\mathcal{E}$.
Note that each node $n \in \mathcal{N}$ represents the root of a subgraph and defines a Boolean function. Given some value assignment $v \in \mathcal{V}_{A}$ we can evaluate this function by constructing a path starting from $n$ and at each step moving towards the low or high child depending on whether the corresponding variable is assigned a $0$ or $1$. The value of the function is the result of adding all the edge increments together.
\autoref{fig:example-add-structure} presents an example ADD with one path highlighted in red.
Formally, we can define the evaluation of the function defined by the node $n$ as follows:
\begin{equation} \label{eq:dd-eval-definition}
\mathrm{eval}_v (n) := \begin{cases}
0, & \mathrm{if} \ n = \boxdot, \\
w_L(n) + \mathrm{eval}_v (c_L(n)), & \mathrm{if} \ v(a(n)) = 0, \\
w_H(n) + \mathrm{eval}_v (c_H(n)), & \mathrm{if} \ v(a(n)) = 1. \\
\end{cases}
\end{equation}
In our work we focus specifically on ADD's that are \emph{full} and \emph{ordered}. A diagram is full if every path from root to sink encounters every variable in $A$ exactly once. On top of that, an ADD is ordered when on each path from root to sink variables always appear in the same order. For this purpose, we define $\pi : A \rightarrow \{1,...,|A|\}$ to be a permutation of variables that assigns each variable $a \in A$ an index.
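The evaluation semantics can be sketched in a few lines of Python (a minimal illustration under our own hypothetical \texttt{Node} representation; increments default to zero, matching the dashed low edges in \autoref{fig:example-add-structure}):
\begin{lstlisting}[language=Python]
# Minimal sketch of an ADD node and the eval recursion above.
class Node:
    def __init__(self, var, lo, hi, w_lo=0, w_hi=0):
        self.var = var                      # a(n)
        self.lo, self.hi = lo, hi           # c_L(n), c_H(n)
        self.w_lo, self.w_hi = w_lo, w_hi   # edge increments in E

SINK = None  # the sink node

def eval_v(n, v):
    if n is SINK:
        return 0
    if v[n.var] == 0:
        return n.w_lo + eval_v(n.lo, v)
    return n.w_hi + eval_v(n.hi, v)

# A two-variable chain computing v(a1) + v(a2):
n2 = Node("a2", SINK, SINK, w_hi=1)
n1 = Node("a1", n2, n2, w_hi=1)
print(eval_v(n1, {"a1": 1, "a2": 1}))  # -> 2
\end{lstlisting}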
\begin{figure
\centering
\begin{subfigure}[t]{.45\linewidth}
\centering
\caption{ADD}\label{fig:example-add-structure}
\begin{tikzpicture}[align=center, node distance=5mm and 5mm, line width=0.8pt]
\tikzstyle{free} = [inner sep=5pt]
\tikzstyle{var} = [draw, rectangle, inner sep=2pt, minimum width=4mm, minimum height=4mm]
\tikzstyle{root} = [draw, rectangle, inner sep=2pt, minimum width=4mm, minimum height=4mm]
\tikzstyle{path} = [draw=myred]
\node[var] (x11-1) {};
\draw (x11-1 -| -2,0) node[anchor=west] {$a_{1,1}$};
\node[var] (x21-1) [below left=of x11-1] {};
\draw[-stealth, dashed] (x11-1) to [bend right] (x21-1);
\node[var] (x21-2) [below right=of x11-1] {};
\draw[-stealth, path] (x11-1) to [bend left] (x21-2);
\draw (x21-1 -| -2,0) node[anchor=west] {$a_{2,1}$};
\node[var] (x22-1) [below=of x21-1] {};
\draw[-stealth, dashed] (x21-1) to [bend right] (x22-1);
\draw[-stealth] (x21-1) to [bend left] (x22-1);
\node[var] (x22-2) [below=of x21-2] {};
\draw[-stealth, dashed, path] (x21-2) to [bend right] (x22-2);
\draw[-stealth] (x21-2) to [bend left] node[right] {$+1$} (x22-2);
\draw (x22-1 -| -2,0) node[anchor=west] {$a_{2,2}$};
\node[var] (x12-1) [below right=of x22-1] {};
\draw[-stealth, dashed] (x22-1) to [bend right] (x12-1);
\draw[-stealth] (x22-1) to [bend left] (x12-1);
\draw[-stealth, dashed] (x22-2) to [bend right] (x12-1);
\draw[-stealth, path] (x22-2) to [bend left] node[right] {$+1$} (x12-1);
\draw (x12-1 -| -2,0) node[anchor=west] {$a_{1,2}$};
\node[var] (x23-1) [below left=of x12-1] {};
\draw[-stealth, dashed] (x12-1) to [bend right] (x23-1);
\node[var] (x23-2) [below right=of x12-1] {};
\draw[-stealth, path] (x12-1) to [bend left] (x23-2);
\draw (x23-1 -| -2,0) node[anchor=west] {$a_{2,3}$};
\node[root] (xs) [below right=of x23-1] {$\cdot$};
\draw[-stealth, dashed] (x23-1) to [bend right] (xs);
\draw[-stealth] (x23-1) to [bend left] (xs);
\draw[-stealth, dashed] (x23-2) to [bend right] (xs);
\draw[-stealth, path] (x23-2) to [bend left] node[right] {$+1$} (xs);
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}[t]{.45\linewidth}
\centering
\caption{Uniform ADD}\label{fig:example-uniform-add}
\begin{tikzpicture}[align=center, node distance=5mm and 5mm, line width=0.8pt]
\tikzstyle{free} = [inner sep=5pt]
\tikzstyle{var} = [draw, rectangle, inner sep=2pt, minimum width=4mm, minimum height=4mm]
\tikzstyle{root} = [draw, rectangle, inner sep=2pt, minimum width=4mm, minimum height=4mm]
\node[var] (x1) {};
\draw (x1 -| -1,0) node[anchor=west] {$a_{1}$};
\node[var] (x2) [below=of x1] {};
\draw[-stealth, dashed] (x1) to [bend right] (x2);
\draw[-stealth] (x1) to [bend left] node[right] {$+5$} (x2);
\draw (x2 -| -1,0) node[anchor=west] {$a_{2}$};
\node[var] (x3) [below=of x2] {};
\draw[-stealth, dashed] (x2) to [bend right] (x3);
\draw[-stealth] (x2) to [bend left] node[right] {$+5$} (x3);
\draw (x3 -| -1,0) node[anchor=west] {$a_{3}$};
\node[root] (xs) [below=of x3] {$\cdot$};
\draw[-stealth, dashed] (x3) to [bend right] (xs);
\draw[-stealth] (x3) to [bend left] node[right] {$+5$} (xs);
\end{tikzpicture}
\end{subfigure}
\vspace{-1em}
\caption{(a) An ordered and full ADD for computing $\phi(v) := v(a_{1,1}) \cdot \big( v(a_{2,1}) + v(a_{2,2}) \big) + v(a_{1,2}) \cdot v(a_{2,3}).$
(b) A uniform ADD
for computing $\phi(v) := 5 \cdot (v(a_1) + v(a_2) + v(a_3))$.
}
\vspace{2em}
\end{figure}
\inlinesection{Model Counting.}
We define a model counting operator
\begin{equation} \label{eq:dd-count-definition}
\mathrm{count}_e (n) := \Big| \Big\{ v \in \mathcal{V}_{A [\leq \pi(a(n))]} \ | \ \mathrm{eval}_v (n) = e \Big\} \Big|,
\end{equation}
where $A [\leq \pi(a(n))]$ is the subset of variables in $A$ that includes $a(n)$ and all variables that come before it in the permutation $\pi$.
For an ordered and full ADD, $\mathrm{count}_e (n)$ satisfies the following recursion:
\begin{equation} \label{eq:dd-count-recursion}
\begin{split}
\mathrm{count}_e (n) := \begin{cases}
1, & \mathrm{if} \ n = \boxdot \ \mathrm{and} \ e = 0, \\
0, & \mathrm{if} \ e = \infty, \ \mathrm{or} \ n = \boxdot \ \mathrm{and} \ e \neq 0, \\
\mathrm{count}_{e - w_L(n)} (c_L(n)) + \mathrm{count}_{e - w_H(n)} (c_H(n)), & \mathrm{otherwise}. \\
\end{cases}
\end{split}
\end{equation}
The above recursion can be implemented as a dynamic program with computational complexity $O(|\mathcal{N}| \cdot |\mathcal{E}|)$.
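The following Python sketch illustrates this dynamic program, reusing the hypothetical \texttt{Node}/\texttt{SINK} representation from above; memoizing on $(n, e)$ pairs yields the stated bound:
\begin{lstlisting}[language=Python]
# Minimal sketch of the memoized counting recursion. INVALID stands
# in for the invalid element; memoization on (node, value) pairs
# gives O(|N| * |E|) work overall.
INVALID = float("inf")

def count(n, e, memo=None):
    memo = {} if memo is None else memo
    if e == INVALID:
        return 0
    if n is SINK:
        return 1 if e == 0 else 0
    key = (id(n), e)
    if key not in memo:
        memo[key] = (count(n.lo, e - n.w_lo, memo)
                     + count(n.hi, e - n.w_hi, memo))
    return memo[key]

# With the two-variable chain from above, two assignments sum to 1:
print(count(n1, 1))  # -> 2
\end{lstlisting}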
In special cases when the ADD is structured as a chain with one node per variable, all low increments equal to zero and all high increments equal to some constant $E \in \mathcal{E}$, we can perform model counting in constant time. We call this a \emph{uniform} ADD, and \autoref{fig:example-uniform-add} presents an example.
The $\mathrm{count}_e$ operator for a uniform ADD can be defined as
\begin{equation} \label{eq:dd-count-uniform}
\mathrm{count}_{e} (n) := \begin{cases}
\binom{\pi(a(n))}{ e / E}, & \mathrm{if} \ e \ \mathrm{mod} \ E = 0, \\
0, & \mathrm{otherwise}. \\
\end{cases}
\end{equation}
Intuitively, if we observe the uniform ADD shown in \autoref{fig:example-uniform-add}, we see that the result of an evaluation must be a multiple of $5$. For example, to evaluate to $10$, the evaluation path must pass a \emph{high} edge exactly twice. Therefore, in a $3$-node ADD with root node $n_R$, the result of $\mathrm{count}_{10} (n_R)$ will be exactly $\binom{3}{2}$.
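As a sketch, this constant-time counting rule can be written as follows (here \texttt{k} plays the role of $\pi(a(n))$, the number of variables the node covers):
\begin{lstlisting}[language=Python]
# Minimal sketch of the uniform-ADD counting rule from above.
from math import comb

def uniform_count(k, e, E):
    return comb(k, e // E) if e % E == 0 else 0

print(uniform_count(3, 10, 5))  # -> C(3, 2) = 3
\end{lstlisting}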
\inlinesection{Special Operations on ADD's.}
Given an ADD with node set $\mathcal{N}$, we define two operations that will become useful later on when constructing diagrams for our specific scenario (a minimal sketch follows the list):
\begin{enumerate}[leftmargin=*]
\item \emph{Variable restriction}, denoted as $\mathcal{N}[a_i \gets V]$, which restricts the domain of variables $A$ by forcing the variable $a_i$ to be assigned the value $V$. This operation removes every node $n \in \mathcal{N}$ where $a(n) = a_i$ and rewires all incoming edges to point to the node's high or low child depending on whether $V=1$ or $V=0$.
\item \emph{Diagram summation}, denoted as $\mathcal{N}_1 + \mathcal{N}_2$, where $\mathcal{N}_1$ and $\mathcal{N}_2$ are two ADD's over the same (ordered) set of variables $A$.
It starts from the respective root nodes $n_1$ and $n_2$ and produces a new node $n := n_1 + n_2$. We then apply the same operation to child nodes. Therefore, $c_L(n_1 + n_2) := c_L(n_1) + c_L(n_2)$ and $c_H(n_1 + n_2) := c_H(n_1) + c_H(n_2)$. Also, for the increments, we can define $w_L(n_1 + n_2) := w_L(n_1) + w_L(n_2)$ and $w_H(n_1 + n_2) := w_H(n_1) + w_H(n_2)$.
\end{enumerate}
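The sketch below illustrates both operations over the hypothetical \texttt{Node}/\texttt{SINK} representation from earlier. For restriction, the definition above leaves implicit what happens to the increment on the removed node's outgoing edge; we return it as a pending weight to be folded into the rewired incoming edges:
\begin{lstlisting}[language=Python]
def restrict(n, a_i, V):
    # N[a_i <- V]: drop nodes testing a_i and rewire callers to the
    # chosen child. Returns (node, pending), where pending is the
    # increment of the removed edge.
    if n is SINK:
        return SINK, 0
    if n.var == a_i:
        child, w = (n.hi, n.w_hi) if V == 1 else (n.lo, n.w_lo)
        sub, extra = restrict(child, a_i, V)
        return sub, w + extra
    lo, p_lo = restrict(n.lo, a_i, V)
    hi, p_hi = restrict(n.hi, a_i, V)
    return Node(n.var, lo, hi, n.w_lo + p_lo, n.w_hi + p_hi), 0

def add_sum(n1, n2, memo=None):
    # N1 + N2 for full ordered ADD's over the same variable order:
    # merge nodes pairwise and add the corresponding edge increments.
    # Memoizing on node pairs keeps the number of created nodes per
    # variable bounded by the product of the operand diameters.
    memo = {} if memo is None else memo
    if n1 is SINK and n2 is SINK:   # full + ordered: sinks align
        return SINK
    key = (id(n1), id(n2))
    if key not in memo:
        memo[key] = Node(n1.var,
                         add_sum(n1.lo, n2.lo, memo),
                         add_sum(n1.hi, n2.hi, memo),
                         n1.w_lo + n2.w_lo,
                         n1.w_hi + n2.w_hi)
    return memo[key]
\end{lstlisting}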
\section{Data Importance over ML Pipelines}
\label{sec:problem}
We first recap the problem of computing data importance for ML pipelines in \autoref{sec:problem-importance}, formalise the problem in \autoref{sec:problem-formal}, and outline core technical efficiency and scalability issues afterwards.
We will describe the \texttt{DataScope}\xspace approach
in \autoref{sec:approach}
and our theoretical framework in
\autoref{sec:framework}.
\subsection{Data Importance for ML Pipelines}
\label{sec:problem-importance}
In real-world ML, one often encounters data-related problems in the input training set (e.g., wrong labels, outliers, biased samples) that lead to sub-optimal quality of the user's model. As illustrated in previous work~\cite{koh2017understanding, koh2019accuracy,ghorbani2019data,jia2019towards,Jia2021-zf, Jia2019-kz}, many data debugging and understanding problems hinge on the following fundamental question:
\begin{quote}
\em Which data examples in the training set are most important for the model utility?
\end{quote}
A common approach is to model this problem as computing the {\em Shapley value} of each data example as a measure of its importance to a model, which has been applied to a wide range of use cases~\cite{ghorbani2019data,jia2019towards,Jia2021-zf, Jia2019-kz}.
However, this line of work focused solely on ML model training but ignored the \textit{data pre-processing pipeline} prior to model training, which includes steps such as feature extraction, data augmentation,
etc. This significantly limits its applications to real-world scenarios, most of which consist
of a non-trivial data processing pipeline~\cite{psallidas2019data}.
In this paper, we take the first step in applying
Shapley values to debug end-to-end ML pipelines.
\subsection{Formal Problem Definition}
\label{sec:problem-formal}
We first formally define the core technical problem.
\inlinesection{ML Pipelines.} Let $\mathcal{D}_{{e}}$ be an input training set for a machine learning task, potentially accompanied by additional relational side datasets $\mathcal{D}_{s_1},\dots,\mathcal{D}_{s_k}$. We assume the data to be in a \emph{star} database schema, where each tuple from a side dataset $\mathcal{D}_{s_i}$ (the ``dimension'' tables) can be joined with multiple tuples from $\mathcal{D}_{{e}}$ (the ``fact'' table).
Let $f$ be a feature extraction pipeline that transforms the relational inputs $\mathcal{D}_{tr} = \{ \mathcal{D}_{e}, \mathcal{D}_{s_1},\dots,\mathcal{D}_{s_k} \}$ into a set of training tuples $\{t_i = (x_i, y_i)\}_{i \in [m]}$ made up of feature and label pairs that the ML training algorithm $\mathcal{A}$ takes as input.
Note that $\mathcal{D}_{e}$ represents \texttt{train\_data} in our toy example in \autoref{lst:example}, $\mathcal{D}_{s}$ represents \texttt{side\_data}, while $f$ refers to the data preparation operations from lines 6-14, and the model $\mathcal{A}$ corresponds to the support vector machine \texttt{SVC} from line~16.
After feature extraction and training, we obtain an ML model:
\[
\mathcal{A} \circ f (\mathcal{D}_{tr}).
\]
We can measure the \emph{quality} of this model in various ways, e.g., via validation accuracy and a fairness metric. Let $\mathcal{D}_{v}$ be a given set of relational validation data with the same schema as $\mathcal{D}_{e}$. Applying $f$ to $\mathcal{D}_{val} = \{ \mathcal{D}_{v}, \mathcal{D}_{s_1},\dots,\mathcal{D}_{s_k} \}$ produces a set of validation tuples $\{t_i = (\tilde{x}_i,
\tilde{y}_i)\}_{i \in [p]}$ made up of feature and label pairs, on which we can derive predictions with our trained model $\mathcal{A} \circ f (\mathcal{D}_{tr})$. Based on this, we define a utility function $u$, which measures the performance of the predictions:
\[
u (\mathcal{A} \circ f (\mathcal{D}_{tr}), f(\mathcal{D}_{val})) \mapsto [0, 1].
\]
For readability, we use the following notation in cases where the model $\mathcal{A}$ and pipeline $f$ are clear from context:
\begin{equation} \label{eq:model-acc-definition}
u (\mathcal{D}_{tr}, \mathcal{D}_{val}) := u (\mathcal{A} \circ f (\mathcal{D}_{tr}), f(\mathcal{D}_{val}))
\end{equation}
\inlinesection{Additive Utilities.}
In this paper, we focus on \textit{additive utilities} that cover the most important set of utility functions in practice (e.g., validation loss, validation accuracy, various fairness metrics, etc.).
A utility function $u$ is \textit{additive} if there exists a \emph{tuple-wise} utility $u_T$ such that $u$ can be rewritten as
\begin{equation} \label{eq:additive-utility}
u (\mathcal{D}_{tr}, \mathcal{D}_{val})
=
w \cdot
\sum_{t_{val} \in f(\mathcal{D}_{val})}
u_T \bigg( \Big(\mathcal{A} \circ f (\mathcal{D}_{tr})\Big)(t_{val}), t_{val} \bigg).
\end{equation}
Here, $w$ is a scaling factor only relying on $\mathcal{D}_{val}$. The tuple-wise utility $u_{T} : (y_{pred}, t_{val}) \mapsto [0, 1]$ takes a validation tuple $t_{val} \in \mathcal{D}_{val}$ as well as a class label $y_{pred} \in \mathcal{Y}$ predicted by the model for
$t_{val}$.
It is easy to see that
popular utilities such as validation accuracy are all additive, e.g., the accuracy utility is simply defined by plugging $u_T(y_{pred}, (x_{val}, y_{val})) := \mathbbm{1} \{y_{pred} = y_{val}\}$ into \autoref{eq:additive-utility}.
\inlinesection{Example: False Negative Rate as an Additive Utility.} Apart from accuracy, which represents a trivial example of an additive utility, we can show how some more complex utilities are also additive and can therefore be decomposed according to \autoref{eq:additive-utility}. As an example, we use the \emph{false negative rate (FNR)}, which can be defined as follows:
\begin{equation}
u(\mathcal{D}_{tr}, \mathcal{D}_{val}) :=
\frac{\sum_{t_{val} \in f(\mathcal{D}_{val})} \mathbbm{1}\{ (\mathcal{A} \circ f(\mathcal{D}_{tr}))(t_{val}) = 0 \} \mathbbm{1} \{ y(t_{val}) = 1 \} }{|\{ t_{val} \in \mathcal{D}_{val} \ : \ y(t_{val}) = 1 \}|}.
\end{equation}
In the above expression we can see that the denominator only depends on $\mathcal{D}_{val}$, which means it can be interpreted as the scaling factor $w$. We can easily see that the expression in the numerator neatly fits the structure of \autoref{eq:additive-utility} as long as we define $u_T$ as $u_T (y_{pred}, (x_{val}, y_{val})) := \mathbbm{1} \{ y_{pred} = 0 \} \mathbbm{1} \{ y_{val} = 1 \}$. Similarly, we are able to easily represent various other utilities, including: false positive rate, true positive rate (i.e. recall), true negative rate (i.e. specificity), etc. We describe
an additional example in \autoref{sec:approach-approx}.
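A small Python sketch of how such tuple-wise utilities plug into the additive decomposition (the helper names are our own):
\begin{lstlisting}[language=Python]
# Minimal sketch: tuple-wise utilities u_T plugged into the additive
# decomposition of the utility, with w the scaling factor.
def u_T_accuracy(y_pred, y_val):
    return 1 if y_pred == y_val else 0

def u_T_fnr(y_pred, y_val):
    # Counts a validation tuple iff a positive is predicted negative.
    return 1 if (y_pred == 0 and y_val == 1) else 0

def additive_utility(preds, labels, u_T, w):
    return w * sum(u_T(p, y) for p, y in zip(preds, labels))

preds, labels = [1, 0, 0, 1], [1, 1, 0, 0]
acc = additive_utility(preds, labels, u_T_accuracy, 1 / len(labels))
fnr = additive_utility(preds, labels, u_T_fnr, 1 / sum(labels))
print(acc, fnr)  # -> 0.5 0.5
\end{lstlisting}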
\inlinesection{Shapley Value.} The Shapley value, denoting the importance of an input tuple $t_i$ for the ML pipeline, is defined as
\[
\varphi_i = \frac{1}{|\mathcal{D}_{tr}|} \sum_{S \subseteq \mathcal{D}_{tr} \backslash \{t_i\}} {|\mathcal{D}_{tr}| - 1 \choose |S|}^{-1} \left(
u (S \cup \{t_i\}, \mathcal{D}_{val}) -
u (S, \mathcal{D}_{val})
\right).
\]
Intuitively, the {\em importance} of $t_i$ over a subset $S \subseteq \mathcal{D}_{tr} \backslash \{t_i\}$ is measured as the difference between the utility $u \circ \mathcal{A} \circ f (S \cup \{t_i\})$ \textit{with} $t_i$ and the utility $u \circ \mathcal{A} \circ f (S)$ \textit{without} $t_i$.
The Shapley value averages these differences over all possible subsets $S \subseteq \mathcal{D}_{tr} \backslash \{t_i\}$, which gives it a range of desirable properties that significantly benefit data debugging tasks, often leading to more effective data debugging mechanisms compared to leave-one-out methods.
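For reference, a brute-force Python sketch of this definition (exponential in $|\mathcal{D}_{tr}|$ and thus only usable on toy inputs; \texttt{utility} stands in for $u(S, \mathcal{D}_{val})$, e.g., validation accuracy of a model trained on $S$):
\begin{lstlisting}[language=Python]
# Brute-force sketch of the Shapley definition above.
from itertools import combinations
from math import comb

def shapley(D_tr, utility):
    n = len(D_tr)
    phi = {}
    for t in D_tr:
        rest = [s for s in D_tr if s != t]
        total = 0.0
        for k in range(n):
            for S in combinations(rest, k):
                gain = utility(set(S) | {t}) - utility(set(S))
                total += gain / comb(n - 1, k)
        phi[t] = total / n
    return phi

# Toy utility: fraction of a "useful" subset present in S.
useful = {"t1", "t3"}
u = lambda S: len(S & useful) / len(useful)
print(shapley(["t1", "t2", "t3"], u))  # t1, t3 -> 0.5; t2 -> 0.0
\end{lstlisting}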
\subsection{Prior Work and Challenges}
All previous research focuses on the scenario in which there is no ML pipeline $f$ (i.e., one directly works with the vectorised training examples $\{t_i\}$). Even in this case, computing Shapley values is tremendously difficult since their complexity for general ML models is \texttt{\#P}-hard. To accommodate this computational challenge, previous work falls into two categories:
\begin{enumerate}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt, leftmargin=*]
\item {\em Monte Carlo Shapley}: One natural line of efforts tries to estimate Shapley value with Markov Chain Monte Carlo (MCMC) approaches. This includes vanilla Monte Carlo sampling, group testing~\cite{jia2019towards,Zhou2014-ca}, and truncated Monte Carlo sampling~\cite{ghorbani2019data}.
\item {\em KNN Shapley}: Even the most efficient Monte Carlo Shapley methods need to train multiple ML models (i.e., evaluate $\mathcal{A}$ multiple times) and thus exhibit long running time for datasets of modest sizes. Another line of research proposes to approximate the model $\mathcal{A}$ using a simpler proxy model. Specifically, previous work shows that Shapley values can be computed over K-nearest neighbors (KNN) classifiers in PTIME~\cite{Jia2019-kz} and using KNN classifiers as a proxy is very effective in various real-world scenarios~\cite{Jia2021-zf}.
\end{enumerate}
In this work, we face an even harder problem given the presence of an ML pipeline $f$ in addition to the model $\mathcal{A}$. Nevertheless, as a baseline, it is important to realize that all Monte Carlo Shapley approaches~\cite{ghorbani2019data,jia2019towards} can be directly extended to support our scenario. This is because most, if not all, Monte Carlo Shapley approaches operate on {\em black-box functions} and thus, can be used directly to handle an end-to-end pipeline $\mathcal{A} \circ f$.
\inlinesection{Core Technical Problem.} Despite the existence of such a Monte Carlo baseline, there remain tremendous challenges with respect to scalability and speed --- in our experiments in \autoref{sec:evaluation}, it is not uncommon for such a Monte Carlo baseline to take a full hour to compute Shapley values even on a small dataset with only 1,000 examples. To bring data debugging and understanding into practice, we are in dire need of a more efficient and scalable alternative. Without an ML pipeline, using a KNN proxy model has been shown to be orders of magnitude faster than its Monte Carlo counterpart~\cite{Jia2019-kz} while being equally, if not more, effective on many applications~\cite{Jia2021-zf}.
As a consequence, we focus on the following question: {\em Can we similarly use a KNN classifier as a proxy when dealing with end-to-end ML pipelines}? Today's KNN Shapley algorithm heavily relies on the structure of the KNN classifier. The presence of an ML pipeline will drastically change the underlying algorithm and time complexity --- in fact, for many ML pipelines, computation of Shapley value is \texttt{\#P}-hard even for KNN classifiers.
\section{Proofs and Details}
\subsection{Proof of \autoref{thm:decision-diagram}} \label{sec:apx-decision-diagram-proof}
\inlinesection{Model Counting for ADD's.} We start off by proving that \autoref{eq:dd-count-recursion} correctly performs model counting.
\begin{lemma} \label{lem:add-count-correct}
For a given node $n \in \mathcal{N}$ of an ADD and a given value $e \in \mathcal{E}$, \autoref{eq:dd-count-recursion} correctly computes $\mathrm{count}_e (n)$ which returns the number of assignments $v \in \mathcal{V}_{A}$ such that $\mathrm{eval}_v (n) = e$. Furthermore, when computing $\mathrm{count}_e (n)$ for any $n \in \mathcal{N}$, the number of computational steps is bounded by $O(|\mathcal{N}| \cdot |\mathcal{E}|)$.
\end{lemma}
\begin{proof}
We will prove this by induction on the structure of the recursion.
\emph{(Base case.)} Based on \autoref{eq:dd-eval-definition}, when $n = \boxdot$ we get $\mathrm{eval}_v(n) = 0$ for all $v$. Furthermore, when $n = \boxdot$, the corresponding variable set is empty, so there is exactly one (empty) value assignment. Hence, the model count equals $1$ only for $e=0$ and equals $0$ otherwise, which is reflected in the base cases of \autoref{eq:dd-count-recursion}.
\emph{(Inductive step.)} Because our ADD is ordered and full, both $c_L(n)$ and $c_H(n)$ are associated with the same variable, which is the predecessor of $a(n)$ in the permutation $\pi$. Based on this and the induction hypothesis, we can assume that
\begin{gather} \label{eq:count-proof-induction-components}
\begin{split}
\mathrm{count}_{e - w_L(n)} (c_L(n)) &= \Big|\Big\{ v \in \mathcal{V}_{A [\leq a(c_L(n))] } \ | \ \mathrm{eval}_v (c_L(n)) = e - w_L(n) \Big\}\Big| \\
\mathrm{count}_{e - w_H(n)} (c_H(n)) &= \Big|\Big\{ v \in \mathcal{V}_{A [\leq a(c_H(n))] } \ | \ \mathrm{eval}_v (c_H(n)) = e - w_H(n) \Big\}\Big|
\end{split}
\end{gather}
We would like to compute $\mathrm{count}_e (n)$ as defined in \autoref{eq:dd-count-definition}. It computes the size of a set defined over possible value assignments to variables in $A [\leq a(n)]$. The set of value assignments can be partitioned into two distinct sets: one where $a(n) \gets 0$ and one where $a(n) \gets 1$. We thus obtain the following expression:
\begin{align}
\begin{split}
\mathrm{count}_e (n) :=
& \Big|\Big\{ v \in \mathcal{V}_{A [\leq a(n)]} \big[ a(n) \gets 0 \big] \ | \ \mathrm{eval}_v (n) = e \Big\}\Big| \\
+ \
& \Big|\Big\{ v \in \mathcal{V}_{A [\leq a(n)]} \big[ a(n) \gets 1 \big] \ | \ \mathrm{eval}_v (n) = e \Big\}\Big|
\end{split}
\end{align}
Based on \autoref{eq:dd-eval-definition}, we can transform the $\mathrm{eval}_v (n)$ expressions as such:
\begin{align}
\begin{split}
\mathrm{count}_e (n) :=
& \Big|\Big\{ v \in \mathcal{V}_{A [\leq a(c_L(n))]} \ | \ w_L(n) + \mathrm{eval}_v (c_L(n)) = e \Big\}\Big| \\
+ \
& \Big|\Big\{ v \in \mathcal{V}_{A [\leq a(c_H(n))]} \ | \ w_H(n) + \mathrm{eval}_v (c_H(n)) = e \Big\}\Big|
\end{split}
\end{align}
Finally, we can notice that the set size expressions are equivalent to those in \autoref{eq:count-proof-induction-components}. Therefore, we can obtain the following expression:
\begin{equation}
\mathrm{count}_e (n) := \mathrm{count}_{e - w_L(n)} (c_L(n)) + \mathrm{count}_{e - w_H(n)} (c_H(n))
\end{equation}
which is exactly the recursive step in \autoref{eq:dd-count-recursion}. This concludes our inductive proof and we move onto proving the complexity bound.
\emph{(Complexity.)} This is trivially proven by observing that since $\mathrm{count}$ has two arguments, we can maintain a table of results obtained for each $n \in \mathcal{N}$ and $e \in \mathcal{E}$. Therefore, we know that we will never need to perform more than $O(|\mathcal{N}| \cdot |\mathcal{E}|)$ invocations of $\mathrm{count}_e (n)$.
\end{proof}
\inlinesection{ADD Construction.} Next, we prove that the size of an ADD resulting from \emph{diagram summation} as defined in \autoref{sec:additive-decision-diagrams} is linear in the number of variables.
The size of the diagram resulting from a sum of two diagrams with node sets $\mathcal{N}_1$ and $\mathcal{N}_2$ can be loosely bounded by $O(|\mathcal{N}_1| \cdot |\mathcal{N}_2|)$, assuming that its nodes come from a combination of every possible pair of operand nodes. However, given the much narrower assumptions we made in the definition of the node sum operator, we can make this bound considerably tighter. For this we define the \emph{diameter} of an ADD as the maximum number of nodes associated with any single variable. Formally we can write:
\begin{equation}
\mathrm{diam}(\mathcal{N}) := \max_{a_i \in A} \big| \{ n \in \mathcal{N} : a(n) = a_i \} \big|
\end{equation}
We can immediately notice that the size of any ADD with set of nodes $\mathcal{N}$ and variables $A$ is bounded by $O(|A| \cdot \mathrm{diam}(\mathcal{N}))$. We can use this fact to prove a tighter bound on the size of an ADD resulting from a sum operation:
\begin{lemma}
Given two full ordered ADD's with nodes $\mathcal{N}_1$ and $\mathcal{N}_2$, both defined over the set of variables $A$, the number of nodes in $\mathcal{N}_1 + \mathcal{N}_2$ is bounded by $O(|A| \cdot \mathrm{diam}(\mathcal{N}_1) \cdot \mathrm{diam}(\mathcal{N}_2))$.
\end{lemma}
\begin{proof}
It is sufficient to show that $\mathrm{diam} (\mathcal{N}_1 + \mathcal{N}_2) = O(\mathrm{diam} (\mathcal{N}_1) \cdot \mathrm{diam} (\mathcal{N}_2))$. This is a direct consequence of the fact that for full ordered ADD's the node sum operator is defined only for nodes associated with the same variable. Since the only way to produce new nodes is by merging one node in $\mathcal{N}_1$ with one node in $\mathcal{N}_2$, and given that we can only merge nodes associated with the same variable, the number of nodes associated with any single variable in the resulting ADD is at most the product of the corresponding numbers of nodes in the constituent ADD's. Since the diameter is simply the upper bound on the number of nodes associated with any single variable, the diameter of the resulting ADD cannot be larger than the product of the diameters of the constituent ADD's.
\end{proof}
\inlinesection{Computing the Oracle using ADD's.} Finally, we prove the correctness of \autoref{thm:decision-diagram}.
\begin{lemma} \label{lem:oracle-as-add-count}
Given an Additive Decision Diagram with root node $n_{t, t'}$ that computes the Boolean function $\phi_{t, t'}(v)$ as defined in \autoref{eq:oracle-add-function}, the counting oracle $\omega_{t, t'} (\alpha, \gamma, \gamma')$ defined in \autoref{eq:counting-oracle} can be computed as:
\begin{equation}
\omega_{t, t'} (\alpha, \gamma, \gamma') := \mathrm{count}_{(\alpha, \gamma, \gamma')} (n_{t, t'})
\end{equation}
\end{lemma}
\begin{proof}
Let us define $\mathcal{D}[\geq_\sigma t] \subseteq \mathcal{D}$ as the set of tuples with similarity higher than or equal to that of $t$, formally $\mathcal{D}[\geq_\sigma t] := \{ t' \in \mathcal{D} : \sigma(t') \geq \sigma(t) \}$. Similarly to $\mathcal{D}$, the semantics of $\mathcal{D}[\geq_\sigma t]$ is also that of a set of possible candidate sets. Given a value assignment $v$, we can obtain $\mathcal{D}[\geq_\sigma t][v]$ from $\mathcal{D}[v]$. For convenience, we also define $\mathcal{D}[\geq_\sigma t][y]$ as the subset of $\mathcal{D}[\geq_\sigma t]$ with only tuples that have label $y$. Given these definitions, we can define several equivalences. First, for $\mathrm{top}_K$ we have:
\begin{equation}
\Big( t = \mathrm{top}_K \mathcal{D}[v] \Big) \iff
\Big(t \in \mathcal{D}[v] \wedge \big| \mathcal{D}[\geq_\sigma t][v] \big| = K \Big)
\end{equation}
In other words, for $t$ to be the tuple with the $K$-th highest similarity in $\mathcal{D}[v]$, it needs to be a member of $\mathcal{D}[v]$ and the number of tuples with similarity greater than or equal to that of $t$ has to be exactly $K$. Similarly, we can define the equivalence for $\mathrm{tally}_{t}$:
\begin{equation} \label{eq:tally-dataset-equivalence-step}
\Big( \gamma = \mathrm{tally}_t \mathcal{D}[v] \Big) \iff
\Big( \forall y \in \mathcal{Y}, \gamma_{y} = \big| \mathcal{D}[ \geq_\sigma t][ y][ v] \big| \Big)
\end{equation}
This is simply an expression that partitions the set $\mathcal{D}[\geq_\sigma t][v]$ based on $y$ and tallies them up. The next step is to define an equivalence for $( t = \mathrm{top}_K \mathcal{D}[v] ) \wedge ( \gamma = \mathrm{tally}_t \mathcal{D}[v] )$. Since the entries of $\gamma$ sum to $K$, the condition $( \forall y \in \mathcal{Y}, \gamma_{y} = |\mathcal{D}[\geq_\sigma t][y][v]| )$ already implies $(|\mathcal{D}[\geq_\sigma t][v]| = K)$, which makes the latter condition redundant. Hence, we can obtain:
\begin{equation}
\Big( t = \mathrm{top}_K \mathcal{D}[v] \Big) \wedge
\Big( \gamma = \mathrm{tally}_t \mathcal{D}[v] \Big) \iff
\Big(t \in \mathcal{D}[v] \Big) \wedge
\Big( \forall y \in \mathcal{Y}, \gamma_{y} = \big| \mathcal{D}[\geq_\sigma t][y][v] \big| \Big)
\end{equation}
According to \autoref{eq:tally-dataset-equivalence-step}, we can reformulate the right-hand side of the above equivalence as:
\begin{equation}
\Big( t = \mathrm{top}_K \mathcal{D}[v] \Big) \wedge
\Big( \gamma = \mathrm{tally}_t \mathcal{D}[v] \Big) \iff
\Big(t \in \mathcal{D}[v] \Big) \wedge
\Big( \gamma = \mathrm{tally}_t \mathcal{D}[v] \Big)
\end{equation}
We can construct a similar expression for $t'$ and $v[a_i = 1]$ so we cover four out of five predicates in \autoref{eq:counting-oracle}. The remaining one is simply the support of the value assignment $v$, which we will leave intact. This leaves us with the following equation for the counting oracle:
\begin{gather}
\begin{split}
\omega_{t, t'} (\alpha, \gamma, \gamma') :=
\sum_{v \in \mathcal{V}_{A}[a_i \gets 0]}
& \mathbbm{1} \{ \alpha = |\mathrm{supp}(v)| \} \\
& \mathbbm{1} \{ t \in f(\mathcal{D}[v]) \}
\mathbbm{1} \{ t' \in f(\mathcal{D}[v[a_i \gets 1]]) \} \\
& \mathbbm{1} \{ \gamma = \mathrm{tally}_t \mathcal{D}[v] \}
\mathbbm{1} \{ \gamma' = \mathrm{tally}_{t'} \mathcal{D}[v[a_i \gets 1]] \}
\end{split} \label{eq:counting-oracle-dd-redef}
\end{gather}
We can use the Boolean function $\phi_{t, t'}(v)$ in \autoref{eq:oracle-add-function} to simplify the above equation. Notice that the conditions $t \in f(\mathcal{D}[v])$ and $t' \in f(\mathcal{D}[v[a_i \gets 1]])$ are embedded in the definition of $\phi_{t, t'}(v)$ which will return $\infty$ if those conditions are not met. When the conditions are met, $\phi_{t, t'}(v)$ returns exactly the same triple $(\alpha, \gamma, \gamma')$. Therefore it is safe to replace the five indicator functions in the above formula with a single one as such:
\begin{equation}
\omega_{t, t'} (\alpha, \gamma, \gamma') :=
\sum_{v \in \mathcal{V}_{A}[a_i \gets 0]}
\mathbbm{1} \{ (\alpha, \gamma, \gamma') = \phi_{t, t'}(v) \}
\end{equation}
Given our assumption that $\phi_{t, t'}(v)$ can be represented by an ADD with a root node $n_{t, t'}$, the above formula is exactly the model counting operation:
\begin{equation}
\omega_{t, t'} (\alpha, \gamma, \gamma') := \mathrm{count}_{(\alpha, \gamma, \gamma')} (n_{t, t'})
\end{equation}
\end{proof}
\begin{theorem} \label{thm:decision-diagram-appendix}
If we can represent the Boolean function $\phi_{t, t'}(v)$ defined in \autoref{eq:oracle-add-function} with an Additive Decision Diagram of size polynomial in $|\mathcal{D}|$ and $|f(\mathcal{D})|$, then we can compute the counting oracle $\omega_{t, t'}$ in time polynomial in $|\mathcal{D}|$ and $|f(\mathcal{D})|$.
\end{theorem}
\begin{proof}
This theorem follows from the two previously proved lemmas: \autoref{lem:add-count-correct} and \autoref{lem:oracle-as-add-count}. Namely, as a result of \autoref{lem:oracle-as-add-count} we claim that model counting of the Boolean function $\phi_{t, t'}(v)$ is equivalent to computing the oracle result. On top of that, as a result of \autoref{lem:add-count-correct} we know that we can perform model counting in time linear in the size of the decision diagram. Hence, if our function $\phi_{t, t'}(v)$ can be represented with a decision diagram of size polynomial in the size of data, then we can conclude that computing the oracle result can be done in time polynomial in the size of data.
\end{proof}
\subsection{Proof of \autoref{col:complexity-knn-join}} \label{sec:apx-complexity-knn-join-proof}
\begin{proof}
This follows from the observation that in \autoref{alg:compile-dataset-to-add}, each connected component $A_C$ will be made up of one variable corresponding to the dimension table and one or more variables corresponding to the fact table. Since the fact table variables will be categorized as ``leaf variables'', the expression $A_C \setminus A_L$ in Line \ref{alg:cmp:line:add-tree} will contain only a single element -- the dimension table variable. Consequently, the ADD tree in $\mathcal{N}'$ will contain a single node. On the other hand, the $A_C \cap A_L$ expression will contain all fact table variables associated with that single dimension table variable. That chain will be added to the ADD tree twice, once for each of the two outgoing branches of the single tree node. Hence, the ADD segment will be made up of two fact table variable chains stemming from a single dimension table variable node. There will be $O(|\mathcal{D}_D|)$ partitions in total. Given that the fact table variables are partitioned, the cumulative size of their chains will be $O(|\mathcal{D}_F|)$. Therefore, the total size of the ADD with all partitions joined together is bounded by $O(|\mathcal{D}_D|+|\mathcal{D}_F|) = O(N)$.
Given this fact and combining it with \autoref{thm:decision-diagram}, we know that the counting oracle can be computed in $O(N)$ time. Finally, given \autoref{thm:shapley-using-counting-oracle} and the structure of \autoref{eq:shap-main}, we can observe that the counting oracle is invoked $O(N^3)$ times. As a result, we can conclude that the total complexity of computing the Shapley value is $O(N^4)$.
\end{proof}
\subsection{Proof of \autoref{col:complexity-knn-fork}} \label{sec:apx-complexity-knn-fork-proof}
\begin{proof}
The key observation here is that, since all provenance polynomials contain only a single variable, there is no interdependency between them, which means that the connected components returned in Line \ref{alg:cmp:line:conn-cmp} of \autoref{alg:compile-dataset-to-add} will each contain a single variable. Therefore, the size of the resulting ADD will be $O(N)$. Consequently, similar to the proof of the previous corollary, the counting oracle can be computed in $O(N)$ time. In this case, the size of the output dataset is $O(M)$, which means that \autoref{eq:shap-main} will invoke the oracle $O(M^2 N)$ times. Therefore, the total time complexity of computing the Shapley value will be $O(M^2 N^2)$.
\end{proof}
\subsection{Proof of \autoref{col:complexity-knn-map}} \label{sec:apx-complexity-knn-map-proof}
\begin{proof}
There are two arguments we need to make which will result in the reduction of complexity compared to fork pipelines. The first argument is that, given that each variable can appear in the provenance polynomial of at most one tuple, having its value set to $1$ can result in either zero or one tuple contributing to the top-$K$ tally. It will be one if that tuple is more similar than the boundary tuple $t$ and it will be zero if it is less similar. Consequently, our ADD will have a chain structure with high-child increments being either $0$ or $1$. If we partition the ADD into two chains, one with all increments $1$ and another with all increments $0$, then we end up with two uniform ADD's. As shown in \autoref{eq:dd-count-uniform}, model counting of uniform ADD's can be achieved in constant time. The only difference here is that, since we have to account for the support size of each model, computing the oracle $\omega_{t, t'} (\alpha, \gamma, \gamma')$ for a given $\alpha$ will require us to account for the different possible ways to split $\alpha$ across the two ADD's. However, since the tuple $t$ needs to be the boundary tuple, which means it is the $K$-th most similar, there need to be exactly $K-1$ variables from the ADD with increments $1$ that are set to $1$. This gives us a single possible distribution of $\alpha$ across the two ADD's. Hence, the oracle can be computed in constant time.
As for the second argument, we need to make a simple observation. For map pipelines, given a boundary tuple $t$ and a tally vector $\gamma$ corresponding to the variable $a_i$ being assigned the value $0$, we know that setting this variable to $1$ can introduce at most one tuple to the top-$K$. That could only be the single tuple associated with $a_i$. If this tuple has a lower similarity score than $t$, there will be no change in the top-$K$. On the other hand, if it has a higher similarity, then it will become part of the top-$K$ and it will evict exactly $t$ from it. Hence, there is a unique tally vector $\gamma'$ resulting from $a_i$ being assigned the value $1$. This means that instead of computing the counting oracle $\omega_{t, t'} (\alpha, \gamma, \gamma')$, we can compute the oracle $\omega_t (\alpha, \gamma)$. This means that, in \autoref{eq:shap-main}, we can eliminate the iteration over $t'$, which saves us an order of $O(N)$ in complexity.
As a result, \autoref{eq:shap-main} will make $O(N^2)$ invocations to the oracle which can be computed in constant time. Hence, the final complexity of computing the Shapley value will be $O(N^2)$.
\end{proof}
\subsection{Proof of \autoref{col:complexity-1nn-map}} \label{sec:apx-complexity-1nn-map-proof}
\begin{proof}
We start off by plugging in the oracle definition from \autoref{eq:oracle-1nn} into the Shapley value computation \autoref{eq:shap-main}:
\begin{gather}
\begin{split}
\varphi_i = \frac{1}{N}
\sum_{t, t' \in f(\mathcal{D})}
\sum_{\alpha=1}^{N}
\binom{N-1}{\alpha}^{-1}
\sum_{\gamma, \gamma' \in \Gamma}
u_{\Delta} (\gamma, \gamma')
& \binom{|\{t'' \in \mathcal{D} \ : \ \sigma(t'') < \sigma(t) \}|}{\alpha} \\
& \mathbbm{1} \{ p(t')=a_i \} \\
& \mathbbm{1} \{ \gamma = \Gamma_{y(t)} \}
\mathbbm{1} \{ \gamma' = \Gamma_{y(t')} \}
\end{split}
\end{gather}
As we can see, the oracle imposes hard constraints on the tuple $t'$ and tally vectors $\gamma$ and $\gamma'$. We will replace the tally vectors with their respective constants and the tuple $t'$ we will denote as $t_i$ because it is the only tuple associated with $a_i$. Because of this, we can remove the sums that iterate over them:
\begin{gather}
\varphi_i = \frac{1}{N}
\sum_{t \in f(\mathcal{D})}
\sum_{\alpha=1}^{N}
\binom{N-1}{\alpha}^{-1}
u_{\Delta} (\Gamma_{y(t)}, \Gamma_{y(t_i)})
\binom{|\{t'' \in \mathcal{D} \ : \ \sigma(t'') < \sigma(t) \}|}{\alpha}
\end{gather}
We can significantly simplify this equation by assuming that the tuples in $f(\mathcal{D})$ are sorted by decreasing similarity. We then obtain:
\begin{equation}
\varphi_i = \frac{1}{N}
\sum_{j = 1}^{N}
\sum_{\alpha=1}^{N}
\binom{N-1}{\alpha}^{-1}
u_{\Delta} (\Gamma_{y(t_j)}, \Gamma_{y(t_i)})
\binom{N-j}{\alpha}
\end{equation}
We shuffle the sums a little by multiplying $\frac{1}{N}$ with $\binom{N-1}{\alpha}^{-1}$ and expanding the $u_{\Delta}$ function according to its definition. We also alter the limit of the innermost sum because $\alpha \leq N - j$. Thus, we obtain:
\begin{equation}
\varphi_i =
\sum_{j = 1}^{N}
\Big(
\mathbbm{1} \{ y(t_i) = y(t_v) \} -
\mathbbm{1} \{ y(t_j) = y(t_v) \}
\Big)
\sum_{\alpha=1}^{N-j}
\binom{N}{\alpha}^{-1}
\binom{N-j}{\alpha}
\end{equation}
The innermost sum in the above equation can be simplified by applying the so-called Hockey-stick identity \cite{ross1997generalized}. Specifically, $\binom{N}{\alpha}^{-1}\binom{N-j}{\alpha}$ becomes $\binom{N}{j}^{-1}\binom{N-\alpha}{j}$. Then, $\sum_{\alpha=1}^{N-j}\binom{N}{j}^{-1}\binom{N-\alpha}{j}$ becomes $\binom{N}{j}^{-1}\binom{N}{j+1}$. Finally, we obtain the following formula:
\begin{equation}
\varphi_i =
\sum_{j = 1}^{N}
\Big(
\mathbbm{1} \{ y(t_i) = y(t_v) \} -
\mathbbm{1} \{ y(t_j) = y(t_v) \}
\Big)
\frac{N-j}{j+1}
\end{equation}
As we can see, the above formula can be computed in $O(N)$ iterations. Therefore, given that we still need to sort the dataset beforehand, the overall complexity of computing the Shapley value amounts to $O(N \log N)$.
\end{proof}
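For illustration, the closed form above translates into a single $O(N)$ pass after sorting (a minimal sketch under our own naming; \texttt{y\_sorted} holds the labels of $f(\mathcal{D})$ ordered by decreasing similarity, \texttt{y\_i} is $y(t_i)$, and \texttt{y\_val} is the validation label $y(t_v)$):
\begin{lstlisting}[language=Python]
# Sketch of the closed-form 1NN map Shapley value; sorting dominates
# the overall cost at O(N log N).
def shapley_1nn_map(y_sorted, y_i, y_val):
    N = len(y_sorted)
    phi = 0.0
    for j in range(1, N + 1):  # 1-based index, as in the formula
        diff = (y_i == y_val) - (y_sorted[j - 1] == y_val)
        phi += diff * (N - j) / (j + 1)
    return phi
\end{lstlisting}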
\subsection{Proof of \autoref{col:complexity-1nn-fork}} \label{sec:apx-complexity-1nn-fork-proof}
\begin{proof}
We will prove this by reducing the problem of Shapley value computation in fork pipelines to the one of computing it for map pipelines. Let us have two tuples $t_{j,1}, t_{j,2} \in f(\mathcal{D})$, both associated with some variable $a_j \in A$. That means that $p(t_{j,1}) = p(t_{j,2})$. If we examine \autoref{eq:top-1-condition-map-single}, we notice that it will surely evaluate to false if either $\sigma(t_{j,1}) > \sigma(t)$ or $\sigma(t_{j,2}) > \sigma(t)$. The same observation holds for \autoref{eq:top-1-condition-map}.
Without loss of generality, assume $\sigma(t_{j,1}) > \sigma(t_{j,2})$. Then, $\sigma(t_{j,1}) > \sigma(t)$ implies $\sigma(t_{j,2}) > \sigma(t)$. As a result, we only ever need to check the former condition without paying attention to the latter. The outcome of this is that for all sets of tuples associated with the same variable, it is safe to ignore all of them except the one with the highest similarity score, and we will nevertheless obtain the same oracle result. Since we transformed the problem to the one where for each variable we have to consider only a single associated tuple, then we have effectively reduced the problem to the one of computing Shapley value for map pipelines. Consequently, we can apply the same algorithm and will end up with the same time complexity.
\end{proof}
\subsection{Details of \autoref{alg:compile-dataset-to-add}} \label{sec:apx-alg-compile-dataset-to-add-details}
In this section we examine the method of compiling a provenance-tracked dataset $f(\mathcal{D})$ that results from a pipeline $f$. The crux of the method is defined in \autoref{alg:compile-dataset-to-add}, which is an algorithm that takes a dataset $\mathcal{D}$ with provenance tracked over a set of variables $A$ and a boundary tuple $t \in \mathcal{D}$. The result is an ADD that computes the following function:
\begin{equation} \label{eq:oracle-add-function-single}
\phi_{t}(v) := \begin{cases}
\infty, & \mathrm{if} \ t \not\in \mathcal{D}[v], \\
\mathrm{tally}_t \mathcal{D}[v] & \mathrm{otherwise}. \\
\end{cases}
\end{equation}
Assuming that all provenance polynomials are single conjunctions of variables, and that the tally is always a sum over those polynomials, the algorithm tries to perform factoring by determining if there are any variables that can be isolated. This is achieved by first extracting variables that appear only once (Line \ref{alg:cmp:line:leaves}) and then separating the total sum into components that do not share any variables (Line \ref{alg:cmp:line:conn-cmp}). Then, for the variables that cannot be isolated (because they appear in polynomials of multiple tuples together with multiple different variables), we form a group which will be treated as one binary vector; based on the value of that vector we take a specific path in the tree. We thus take the group of variables and call the \textsc{\small ConstructADDTree} function to construct an ADD tree (Line \ref{alg:cmp:line:add-tree}).
Every path in this tree corresponds to one value assignment to the variables in that tree. Then, for every path we call \textsc{\small ConstructADDChain} to build a chain made up of the isolated variables and call \textsc{\small AppendToADDPath} to append it to the leaf of that path (Line \ref{alg:cmp:line:append-path}). For each variable in the chain we also define an increment given by the number of tuples that are more similar than the boundary tuple $t$ and also have their provenance polynomial ``supported'' by the path. We thus construct a segment of the final ADD made up of different components. We append this segment to the final ADD using the \textsc{\small AppendToADDRoot} function. We do not explicitly define these functions, but we illustrate their functionality in \autoref{fig:example-add-compilation-functions} and sketch the surrounding partitioning logic below.
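The following Python sketch illustrates the variable-partitioning step of this compilation (our own simplification: provenance is given as a map from tuples to the variable sets of their single-conjunction polynomials; the actual diagram construction helpers are only outlined in \autoref{fig:example-add-compilation-functions}):
\begin{lstlisting}[language=Python]
# Sketch of the partitioning logic of the compilation algorithm.
from collections import Counter

def leaf_variables(polys):
    # Variables that appear in exactly one polynomial.
    cnt = Counter(a for vs in polys.values() for a in vs)
    return {a for a, c in cnt.items() if c == 1}

def connected_components(polys):
    # Variables are connected if they co-occur in some polynomial
    # (a simple union-find, without path compression).
    parent = {a: a for vs in polys.values() for a in vs}
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for vs in polys.values():
        vs = list(vs)
        for b in vs[1:]:
            parent[find(vs[0])] = find(b)
    comps = {}
    for a in parent:
        comps.setdefault(find(a), set()).add(a)
    return list(comps.values())

def compile_sketch(polys):
    A_L = leaf_variables(polys)
    segments = []
    for A_C in connected_components(polys):
        tree_vars = A_C - A_L    # grouped into an ADD tree
        chain_vars = A_C & A_L   # isolated; appended as chains
        segments.append((sorted(tree_vars), sorted(chain_vars)))
    return segments
\end{lstlisting}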
\begin{figure*}[!ht]
\centering
\begin{tikzpicture}[align=center, node distance=6mm and 2mm, line width=0.8pt]
\tikzstyle{invisible} = [minimum width=0, minimum height=4.5mm]
\tikzstyle{free} = [inner sep=5pt]
\tikzstyle{var} = [draw, rectangle, inner sep=2pt, minimum width=4mm, minimum height=4mm]
\tikzstyle{root} = [draw, rectangle, inner sep=2pt, minimum width=4mm, minimum height=4mm]
\tikzstyle{path} = [draw=myred]
\begin{scope}[local bounding box=n1]
\node[var] (n11) {};
\draw (n11 -| -3,0) node[anchor=west] {$a_{1}$};
\node[var] (n12) [below left=6mm and 8mm of n11] {};
\draw[-stealth, dashed] (n11) to [bend right] (n12);
\node[var] (n13) [below right=6mm and 8mm of n11] {};
\draw[-stealth] (n11) to [bend left] (n13);
\draw (n12 -| -3,0) node[anchor=west] {$a_{2}$};
\node[var] (n14) [below left=of n12] {};
\draw[-stealth, dashed] (n12) to [bend right] (n14);
\node[var] (n15) [below right=of n12] {};
\draw[-stealth] (n12) to [bend left] (n15);
\node[var] (n16) [below left=of n13] {};
\draw[-stealth, dashed] (n13) to [bend right] (n16);
\node[var] (n17) [below right=of n13] {};
\draw[-stealth] (n13) to [bend left] (n17);
\draw (n14 -| -3,0) node[anchor=west] {$a_{3}$};
\node[invisible] (n15n16) [shift={($(n15.east)!0.5!(n16.west)$)}] {};
\end{scope}
\node[free] (n1-cap) [above=of n11] {\small $\mathcal{N}_1 = $ \textsc{ConstructADDTree}$(\{a_1, a_2, a_3\})$};
\begin{scope}[local bounding box=n2, shift={($(n11.east)+(8cm,0)$)}]
\node[var] (n21) {};
\draw (n21 -| -0.9,0) node[anchor=west] {$a_{4}$};
\node[var] (n22) [below=of n21] {};
\draw[-stealth, dashed] (n21) to [bend right] (n22);
\draw[-stealth] (n21) to [bend left] (n22);
\draw (n22 -| -0.9,0) node[anchor=west] {$a_{5}$};
\node[var] (n23) [below=of n22] {};
\draw[-stealth, dashed] (n22) to [bend right] (n23);
\draw[-stealth] (n22) to [bend left] (n23);
\draw (n23 -| -0.9,0) node[anchor=west] {$a_{6}$};
\end{scope}
\node[free] (n2-cap) [above=of n21] {\small $\mathcal{N}_2 = $ \textsc{ConstructADDChain}$(\{a_4, a_5, a_6\})$};
\begin{scope}[local bounding box=n3, shift={($(n15n16.south)+(0,-2cm)$)}]
\node[var] (n31) {};
\draw (n31 -| -3,0) node[anchor=west] {$a_{1}$};
\node[var] (n32) [below left=6mm and 8mm of n31] {};
\draw[-stealth, dashed] (n31) to [bend right] (n32);
\node[var] (n33) [below right=6mm and 8mm of n31] {};
\draw[-stealth] (n31) to [bend left] (n33);
\draw (n32 -| -3,0) node[anchor=west] {$a_{2}$};
\node[var] (n34) [below left=of n32] {};
\draw[-stealth, dashed] (n32) to [bend right] (n34);
\node[var] (n35) [below right=of n32] {};
\draw[-stealth] (n32) to [bend left] (n35);
\node[var] (n36) [below left=of n33] {};
\draw[-stealth, dashed] (n33) to [bend right] (n36);
\node[var] (n37) [below right=of n33] {};
\draw[-stealth] (n33) to [bend left] (n37);
\draw (n34 -| -3,0) node[anchor=west] {$a_{3}$};
\node[var] (n38) [below right=of n36] {};
\draw[-stealth] (n36) to [bend left] (n38);
\draw (n38 -| -3,0) node[anchor=west] {$a_{4}$};
\node[var] (n39) [below=of n38] {};
\draw[-stealth, dashed] (n38) to [bend right] (n39);
\draw[-stealth] (n38) to [bend left] (n39);
\draw (n39 -| -3,0) node[anchor=west] {$a_{5}$};
\node[var] (n310) [below=of n39] {};
\draw[-stealth, dashed] (n39) to [bend right] (n310);
\draw[-stealth] (n39) to [bend left] (n310);
\draw (n310 -| -3,0) node[anchor=west] {$a_{6}$};
\end{scope}
\node[free] (n3-cap) [above=of n31] {\small \textsc{AppendToADDPath}$(\mathcal{N}_1, \mathcal{N}_2, \{ a_1 \rightarrow 1, a_2 \rightarrow 0, a_3 \rightarrow 1 \})$};
\begin{scope}[local bounding box=n3, shift={($(n31.east)+(8cm,0)$)}]
\node[var] (n41) {};
\draw (n41 -| -3,0) node[anchor=west] {$a_{1}$};
\node[var] (n42) [below left=6mm and 8mm of n41] {};
\draw[-stealth, dashed] (n41) to [bend right] (n42);
\node[var] (n43) [below right=6mm and 8mm of n41] {};
\draw[-stealth] (n41) to [bend left] (n43);
\draw (n42 -| -3,0) node[anchor=west] {$a_{2}$};
\node[var] (n44) [below left=of n42] {};
\draw[-stealth, dashed] (n42) to [bend right] (n44);
\node[var] (n45) [below right=of n42] {};
\draw[-stealth] (n42) to [bend left] (n45);
\node[var] (n46) [below left=of n43] {};
\draw[-stealth, dashed] (n43) to [bend right] (n46);
\node[var] (n47) [below right=of n43] {};
\draw[-stealth] (n43) to [bend left] (n47);
\draw (n44 -| -3,0) node[anchor=west] {$a_{3}$};
\node[invisible] (n45n46) [shift={($(n45.east)!0.5!(n46.west)$)}] {};
\node[var] (n48) [below=of n45n46] {};
\draw[-stealth] (n44) to [bend left=15] (n48);
\draw[-stealth, dashed] (n44) to [bend right=15] (n48);
\draw[-stealth] (n45) to [bend left=15] (n48);
\draw[-stealth, dashed] (n45) to [bend right=15] (n48);
\draw[-stealth] (n46) to [bend left=15] (n48);
\draw[-stealth, dashed] (n46) to [bend right=15] (n48);
\draw[-stealth] (n47) to [bend left=15] (n48);
\draw[-stealth, dashed] (n47) to [bend right=15] (n48);
\draw (n48 -| -3,0) node[anchor=west] {$a_{4}$};
\node[var] (n49) [below=of n48] {};
\draw[-stealth, dashed] (n48) to [bend right] (n49);
\draw[-stealth] (n48) to [bend left] (n49);
\draw (n49 -| -3,0) node[anchor=west] {$a_{5}$};
\node[var] (n410) [below=of n49] {};
\draw[-stealth, dashed] (n49) to [bend right] (n410);
\draw[-stealth] (n49) to [bend left] (n410);
\draw (n410 -| -3,0) node[anchor=west] {$a_{6}$};
\end{scope}
\node[free] (n4-cap) [above=of n41] {\small \textsc{AppendToADDRoot}$(\mathcal{N}_1, \mathcal{N}_2)$};
\end{tikzpicture}
\caption{An example of the ADD compilation functions.
}
\label{fig:example-add-compilation-functions}
\end{figure*}
\section{Related Work}
\inlinesection{Analyzing ML models.}
Understanding model predictions and handling problems when they arise has been an important area since the early days. In recent years, this topic has gained even more traction and is better known under the terms explainability and interpretability~\cite{adadi2018peeking,guidotti2018survey,gilpin2018explaining}. Typically, the goal is to understand why a model is making some specific prediction for a data example. Some approaches use surrogate models to produce explanations~\cite{ribeiro2016should, krishnan2017palm}. In computer vision, saliency maps have gained prominence~\cite{zeiler2014visualizing, shrikumar2017learning}. Saliency maps can be seen as a type of a \emph{feature importance} approach to explainability, although they also aim at interpreting the internals of a model. Other feature importance frameworks have also been developed~\cite{sundararajan2017axiomatic,covert2021explaining}, and some even focus on Shapley-based feature importance~\cite{lundberg2017unified}.
Another approach to model interpretability can be referred to as \emph{data importance} (i.e. using training data examples to explain predictions). This can have broader applications, including data valuation~\cite{pei2020datapricing}. One important line of work expresses data importance in terms of \emph{influence functions}~\cite{basu2020second, koh2019accuracy, koh2017understanding, sharchilev2018finding}. Another line of work expresses data importance using Shapley values. Some apply Monte Carlo methods for efficiently computing them~\cite{ghorbani2019data} and some take advantage of the KNN model~\cite{jia2019towards,Jia2019-kz}. The KNN model has also been used for computing an entropy-based importance method targeted specifically at the data-cleaning application~\cite{karlas2020nearest}. In general, all of the mentioned approaches focus on interpreting a single model, without the ability to efficiently analyze it in the context of an end-to-end ML pipeline.
\inlinesection{Analyzing Data Processing Pipelines.}
For decades, the data management community has been studying how to analyze data importance for data-processing pipelines through various forms of query analysis. Some broader approaches include: causal analysis of interventions to queries~\cite{meliou2014causality}, and reverse data management~\cite{meliou2011reverse}. Methods also differ in terms of what is the target of their explanation, that is, with respect to what is the query output being explained. Some methods target queried relations~\cite{kanagal2011sensitivity}. Others target specific predicates that make up a query~\cite{roy2015explaining, roy2014formal}. Finally, some methods target specific tuples in input relations~\cite{meliou2012tiresias}.
A prominent line of work employs \emph{data provenance} as a means to analyze data processing pipelines~\cite{buneman2001and}. The \emph{provenance semiring} represents a theoretical framework for dealing with provenance~\cite{green2007provenance}. This framework gives us a theoretical foundation to develop algorithms for analyzing ML pipelines. This analysis typically requires us to operate in an exponentially large problem space. This can be made manageable by transforming provenance information to decision diagrams through a process known as \emph{knowledge compilation}~\cite{Jha2011}. However, this framework is not guaranteed to lead us to tractable solutions. Some types of queries have been shown to be quite difficult~\cite{amsterdamer2011provenance, amsterdamer2011limitations}. In this work, we demonstrate the tractability of a concrete class of pipelines comprised of both a set of feature extraction operators and an ML model.
Recent research under the umbrella of ``mlinspect''~\cite{grafberger2022data,grafberger2021mlinspect,schelter2022screening} details how to compute data provenance over end-to-end ML pipelines similar to the ones in the focus of this work, based on lightweight code instrumentation.
\inlinesection{Joint Analysis of End-to-End Pipelines.}
Joint analysis of machine learning pipelines is a relatively novel but nevertheless important field~\cite{polyzotis2017data}. Systems such as Data X-Ray can debug data processing pipelines by finding groups of data errors that might have the same cause~\cite{wang2015xray}. Some work has been done in the area of end-to-end pipeline compilation to tensor operations~\cite{nakandala2020tensor}. A notable piece of work leverages influence functions as a method for analyzing pipelines comprising a model and a post-processing query~\cite{wu2020complaint}. This work also leverages data provenance as a key ingredient. In general, there are indications that data provenance is going to be a key ingredient of future ML systems~\cite{agrawal2020cloudy,schelter2022screening}, something that our own system depends upon.
With the rapid development in networking technologies, distributed optimization over multi-agent networks has been a heated research topic during the last decade, where agents aim to collaboratively minimize the sum of local functions possessed by each agent through local communication. Compared with centralized ones, distributed algorithms allow more flexibility and scalability due to its capability of breaking large-scale problems into sequences of smaller ones. In view of this, distributed algorithms are inherently robust to environment uncertainties and communication failures and are widely adopted in power grids~\cite{braun2016distributed}, sensor networks \cite{dougherty2016extremum} and vehicular networks \cite{mohebifard2018distributed}.
One of the most commonly used algorithms for distributed optimization is Decentralized Gradient Descent (DGD), which requires diminishing step-sizes to ensure optimality \cite{nedic2009distributed}. To overcome this challenge, Xu et al.~\cite{xu2015augmented} replaced the local gradient with an estimated global gradient based on dynamic average consensus \cite{kia2019tutorial} and then proposed a gradient tracking method for the distributed optimization problem. Recently, Pu et al. \cite{pu2020push} and Xin and Khan \cite{xi2018linear} devised a modified gradient-tracking algorithm called the Push-Pull/AB algorithm for consensus-based distributed optimization, which can be applied to general directed graphs, including undirected graphs as a special case.
\\ \indent The above conventional distributed algorithms require each agent to exchange its state information with neighbouring agents, which is not desirable if the participating agents have sensitive and private information, as the transmitted information is
at risk of being intercepted by adversaries. By hacking into communication links, an adversary may have access to all conveyed messages, and potentially obtain the private information of each agent by adopting an attack algorithm. The theoretical analysis of privacy disclosure in distributed optimization is presented by Mandal\cite{mandal2016privacy}, where the parameters of cost functions and generation power can be correctly inferred by an eavesdropper in the economic dispatch problem. As the number of privacy leakage events is increasing, there is an urgent need to preserve privacy of each agent in distributed systems.
\\ \indent For privacy preservation in distributed optimization, there have been several research results. Wang \cite{wang2019privacy} proposed a privacy-preserving average consensus algorithm in which the state of an agent is decomposed into two substates. Zhang et al. \cite{zhang2018enabling} and Lu et al. \cite{lu2018privacy}
combined existing distributed optimization approaches with the partially homomorphic cryptography. However, these approaches suffer from high computation complexity and communication cost which may be inapplicable for systems with limited resources. As an appealing alternative, differential privacy has attracted much attention in light of its rigorous mathematical framework, proven security properties, and easy implementation \cite{nozari2017differentially}. The main idea of differential private approaches is noise perturbation, leading to a tradeoff between privacy and accuracy.
Huang et al. \cite{huang2012differentially} devised a differential private distributed optimization algorithm by adding Laplacian noise on transmitted message with a decaying stepsize, resulting in a low convergence rate. A constant stepsize is achieved by Ding et al. \cite{ding2018consensus,ding2021differentially} where linear convergence is enjoyed by gradient tracking method and differential privacy is achieved by perturbing states.
None of the aforementioned approaches, however, is suitable for directed graphs with weak topological restrictions, which are more practical in real applications. In practice, the information flows among sensors may not be bidirectional due to different communication ranges, e.g., in the coordinated vehicle control problem \cite{ghabcheloo2005coordinated} and the economic dispatch problem \cite{yang2013consensus}. To address privacy leakage in distributed optimization for agents interacting over unbalanced graphs, Mao et al. \cite{mao2020privacy} designed a privacy-preserving algorithm based on the push-gradient method with a decaying stepsize, which was applied in a case study to the economic dispatch problem. Nevertheless, the algorithm in \cite{mao2020privacy} lacked a formal privacy notion and cannot achieve differential privacy.\\
\indent All the above motivates us to further develop a differentially private distributed optimization algorithm over directed graphs. Inspired by \cite{wang2019privacy}, a novel differentially private distributed optimization approach based on state decomposition is proposed for agents communicating over directed networks. Under the proposed state decomposition mechanism, Laplace noise is added to the gradient state, and the global gradient is still tracked after state decomposition.
The main contributions of this paper are summarized as follows:
\begin{enumerate}
\item We propose a state-decomposition based gradient tracking approach (\textbf{SD-Push-Pull}) for distributed optimization over unbalanced directed networks, where the gradient state of each agent is decomposed into two substates to maintain the privacy of all agents. Specifically, one substate replacing the role of the original state is communicated with neighboring agents, while the other substate is kept private.
Compared to the privacy-preserving approaches in \cite{huang2012differentially} and \cite{ding2018consensus}, our proposed approach can be applied to more general and practical networks.
\item Different from the privacy notion in \cite{wang2019privacy} and \cite{ding2018consensus}, we adopt the definition of differential privacy, which ensures the privacy of agents regardless of any auxiliary information that an adversary may have and enjoys a rigorous formulation. In addition, we prove that the proposed SD-Push-Pull algorithm can achieve $(\epsilon, \delta)$-differential privacy (\textbf{Theorem 1}).
\item We analyze the convergence performance of the proposed SD-Push-Pull algorithm for strongly convex local cost functions. The results show that the SD-Push-Pull algorithm converges to a neighborhood of the optimal solution in expectation exponentially fast under a constant stepsize policy (\textbf{Theorem 2}). Moreover, our results reveal a tradeoff between the privacy level and the optimization accuracy (\textbf{Remark 1}).
\end{enumerate}
\textit{Notations:} In this paper, $\mathbb{N}$ and $\mathbb{R}$ represent the sets of natural numbers and real numbers, respectively. $\mathbf{1}_n\in\mathbb{R}^n$ and $\mathbf{I}_n\in\mathbb{R}^{n\times n}$ represent the vector of ones and the identity matrix, respectively. The spectral radius of a matrix $\mathbf{A}$ is denoted by $\rho(\mathbf{A})$. For a given constant $\theta>0$, Lap($\theta$) is the Laplace distribution with probability density function $p_\theta(x) =\frac{1}{2\theta}e^{-\frac{|x|}{\theta}}$. In addition, $\mathbb{E}(x)$ and $P(x)$ denote the expectation and the probability distribution of a random variable $x$, respectively.
\section{PRELIMINARIES AND PROBLEM FORMULATION}
\subsection{Network Model}
We consider a group of agents which communicate with each other over a directed graph. The directed graph is denoted as a pair $G\triangleq (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ denotes the agent set and $\mathcal{E}\subset \mathcal{V} \times \mathcal{V}$ denotes the edge set. A communication link from agent $i$ to agent $j$ is denoted by $(j,i)\in \mathcal{E}$, indicating that agent $i$ can send messages to agent $j$. Given a nonnegative matrix $\mathbf{M}=[m_{ij}]\in\mathbb{R}^{n\times n}$, the directed graph induced by $\mathbf{M}$ is denoted by $\mathcal{G}_{\mathbf{M}}\triangleq (\mathcal{N}, \mathcal{E}_{\mathbf{M}})$, where $\mathcal{N}=\{1,2,\ldots,n\}$ and $(j,i)\in \mathcal{E}_{\mathbf{M}}$ if and only if $m_{ij}>0$.
The agents who can directly send messages to agent $i$ are called in-neighbours of agent $i$, and the set of these agents is denoted by $N_{\mathbf{M},i}^{in}=\{j\in \mathcal{N}\mid (i,j)\in \mathcal{E}_{\mathbf{M}}\}$. Similarly, the agents who can directly receive messages from agent $i$ are called out-neighbours of agent $i$, and the set of these agents is denoted by $N_{\mathbf{M},i}^{out}=\{j\in \mathcal{N}\mid (j,i)\in \mathcal{E}_{\mathbf{M}}\}$.
\subsection{Differential Privacy}
\begin{definition}\label{df1}
(Adjacency \cite{ding2018consensus}) Given $\delta>0$ and two function sets $\mathcal{S}^{(1)}=\{f_i^{(1)}\}^n_{i=1}$ and $\mathcal{S}^{(2)}=\{f_i^{(2)}\}^n_{i=1}$, $\mathcal{S}^{(1)}$ and $\mathcal{S}^{(2)}$ are $\delta$-adjacent if there exists some $i_0 \in\{1,2,\ldots,n\}$ such that
$$f_i^{(1)}=f_i^{(2)}, \forall i\neq i_0, D(f_{i_0}^{(1)},f_{i_0}^{(2)})\leq \delta,$$
where $D(f_{i_0}^{(1)},f_{i_0}^{(2)})$ represents the distance between two functions $f_{i_0}^{(1)}$ and $f_{i_0}^{(2)}$.
\end{definition}
Definition \ref{df1} implies that two function sets are adjacent only if one agent changes its objective function.
In this paper, we assume there exists an eavesdropper with sufficient background knowledge in the network, who has access to all transmitted data by eavesdropping on the communications among the agents. Moreover, the eavesdropper can obtain arbitrary auxiliary information to infer private information. Under this circumstance, we introduce the definition of differential privacy as follows:
\begin{definition}\label{df2}
(Differential privacy \cite{dwork2006calibrating}) Given $\delta,\epsilon>0$, for any $\delta$-adjacent function sets $\mathcal{S}^{(1)}$ and $\mathcal{S}^{(2)}$ and any observation set $\mathcal{O}\subseteq \text{Range}(\mathcal{A})$, a randomized algorithm $\mathcal{A}$ is $\epsilon$-differentially private if
$$P\{\mathcal{A}(\mathcal{S}^{(1)})\in \mathcal{O}\}\leq e^\epsilon P\{\mathcal{A}(\mathcal{S}^{(2)})\in\mathcal{O}\},$$
where $\text{Range}(\mathcal{A})$ denotes the output codomain of $\mathcal{A}$.
\end{definition}
Definition \ref{df2} illustrates that a random mechanism is differentially private if its outputs are nearly statistically identical over two similar inputs that differ in only one element. Hence, an eavesdropper cannot distinguish whether a participant's data is in the database based on the output of the mechanism.
Here, a smaller $\epsilon$ represents a higher level of privacy, since the eavesdropper has less chance to distinguish sensitive information of each agent from the observations. Nevertheless, a high privacy level will sacrifice the accuracy of the optimization algorithm. Hence, the constant $\epsilon$ determines a tradeoff between the privacy level and the accuracy.
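To build intuition for Definition \ref{df2}, the following Python snippet (an illustrative sketch, not part of the development below) checks the classical Laplace-mechanism bound: if two adjacent scalar inputs differ by at most $\delta$ and Lap($\theta$) noise is added, the two output densities differ pointwise by a factor of at most $e^{\delta/\theta}$.
\begin{verbatim}
# Illustrative sketch of the Laplace mechanism behind Definition 2.
# Adjacent scalar inputs 0 and delta are perturbed by Lap(theta);
# the ratio of their output densities is at most exp(delta/theta).
import numpy as np

delta, theta = 1.0, 2.0

def lap_pdf(x, mu):
    # density of mu + Lap(theta) evaluated at x
    return np.exp(-np.abs(x - mu) / theta) / (2.0 * theta)

xs = np.linspace(-10.0, 10.0, 1001)
ratio = lap_pdf(xs, 0.0) / lap_pdf(xs, delta)
assert np.all(ratio <= np.exp(delta / theta) + 1e-12)
\end{verbatim}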
\subsection{Problem Formulation}
Consider an optimization problem in a multi-agent system of $n$ agents. Each agent has a private cost function $f_i$, which is known only to agent $i$ itself. All the participating agents aim to minimize the global objective function
\begin{equation}\label{pro1}
\min\limits_{x\in\mathbb{R}^{p}}\sum\limits_{i=1}^{n}f_{i}(x)
\end{equation}
where $x$ is the global decision variable.
To solve Problem \eqref{pro1}, assume each agent $i$ maintains a local copy $x_i \in \mathbb{R}^{p}$ of the decision variable and an auxiliary variable $y_i \in \mathbb{R}^{p}$ tracking the average gradient. Then we can rewrite Problem \eqref{pro1} as a local optimization problem for each agent with an added consensus constraint as follows:
\begin{equation}\label{pro2}
\begin{split}
\min_{x_i \in \mathbb{R}^p} \quad & \sum_{i=1}^n f_i(x_i) \\
\text{s.t.} \quad & x_i = x_j, \quad \forall i,j, \\
\end{split}
\end{equation}
where $x_i$ is the local decision variable of agent $i$.
Let
$$\begin{aligned}
&\mathbf{x}:=[x_1,x_2,\ldots,x_n]^\top
\in\mathbb{R}^{n\times p},\\
&\mathbf{y}:=[y_1,y_2,\ldots,y_n]^\top
\in\mathbb{R}^{n\times p}.
\end{aligned}
$$
Denote $F(x)$ as an aggregate objective function of the local variables, i.e., $F(x)=\sum_{i=1}^n f_i(x_i)$.
With respect to the objective function in Problem \eqref{pro1}, we assume the following strong convexity and smoothness conditions.
\begin{assumption}\label{asp}
Each objective function $f_i$ is $\mu-$strongly convex with $L-$Lipschitz continuous gradients, i.e., for any $x_1$, $x_2\in\mathbb {R}^p$,
$$\begin{aligned}
&\langle\nabla f_i(x_1)- \nabla f_i(x_2), x_1-x_2\rangle \geq \mu||x_1-x_2||^2,\\
&||\nabla f_i(x_1)- \nabla f_i(x_2)||\leq L||x_1-x_2||.\\
\end{aligned}
$$
\end{assumption}
\vspace*{2mm}
Under Assumption \ref{asp}, Problem \eqref{pro1} has a unique optimal solution $x^\star\in\mathbb{R}^{1\times p}$ \cite{pu2018push}.
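As a concrete illustration of Assumption \ref{asp} (anticipating the ridge-regression example in the simulation section), the following sketch verifies the strong convexity and smoothness constants of a quadratic local cost; the data are arbitrary placeholders.
\begin{verbatim}
# Illustrative check of Assumption 1 for the quadratic local cost
# f_i(x) = (u_i^T x - v_i)^2 + rho*||x||^2: its Hessian equals
# 2*u_i u_i^T + 2*rho*I, so mu = 2*rho and L = 2*(||u_i||^2 + rho).
import numpy as np

p, rho = 10, 0.01
u_i = np.random.default_rng(0).uniform(-1.0, 1.0, p)
H = 2.0 * np.outer(u_i, u_i) + 2.0 * rho * np.eye(p)
eigs = np.linalg.eigvalsh(H)
mu, L = 2.0 * rho, 2.0 * (u_i @ u_i + rho)
assert eigs.min() >= mu - 1e-9 and eigs.max() <= L + 1e-9
\end{verbatim}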
\section{PRIVATE GRADIENT TRACKING ALGORITHM VIA STATE DECOMPOSITION}
In this section, we propose a state-decomposition based gradient tracking method (SD-Push-Pull) for distributed optimization over directed graphs, which is described in Algorithm \ref{alg1}.
The main idea is to let each agent decompose its gradient state $y_{i}$ into two substates $y^\alpha_{i}$ and $y^\beta_{i}$. The substate $y^\alpha_{i}$ is used in the communication with other agents while $y^\beta_{i}$ is never shared with other agents except for agent $i$ itself, so the substate $y^\beta_{i}$ is imperceptible to the neighbouring agents of agent $i$.
\begin{breakablealgorithm}
\caption{SD-Push-Pull}
\label{alg1}
\begin{algorithmic}
\State \textbf {Step 1}. Initialization:
\begin{enumerate}
\item Agent $i\in \mathcal{N}$ chooses in-bound mixing/pulling weights $R_{ij}\geq 0$ for all $j\in N^{in}_{R,i}$, out-bound pushing weights $\tilde C_{li}\geq 0$ for all $l\in N^{out}_{\tilde C,i}$, and the two substate weights $\alpha_i, \beta_i \in (0,1)$.
\item Agent $i\in \mathcal{N}$ picks any $x_{i,0}, \xi_i\in \mathbb{R}^p$, $\theta_i \in \mathbb{R}_{++}$, and initializes $y_{i,0}^\alpha=\nabla f_i^\alpha(x_{i,0})=\xi_i$, $y_{i,0}^\beta=\nabla f_i^\beta(x_{i,0})=\nabla f_i(x_{i,0})-\xi_i$.
\item The step size $\eta>0$ is known to each agent.
\end{enumerate}
\textbf {Step 2.} At iteration $k=0,1,2,\ldots$\\
\begin{enumerate}
\item Agent $i\in \mathcal{N}$ pulls $(x_{j,k}-\eta y_{j,k}^\alpha)$ from each $j\in N^{in}_{R,i}$.
\item Agent $i\in \mathcal{N}$ pushes $\tilde C_{li}y_{i,k}^\alpha$ to each $l\in N^{out}_{\tilde C,i}$.
\item Agent $i\in \mathcal{N}$ updates $x_{i,k+1}$ through
\begin{equation}\label{d1}
x_{i,k+1}=\sum_{j=1}^n R_{ij}(x_{j,k}-\eta y_{j,k}^\alpha).
\end{equation}
\item Agent $i\in \mathcal{N}$ draws a random vector $s_{i,k}$ consisting of $p$ Laplace noise components independently drawn from Lap($\theta_i$) and updates $\nabla f_i^\alpha(x_{i,k+1})$, $\nabla f_i^\beta(x_{i,k+1})$ as follows:
\begin{equation}\label{der}
\begin{aligned}
&\nabla f_i^\alpha(x_{i,k+1})=\nabla f_i^\alpha(x_{i,k})+s_{i,k},\\
&\nabla f_i^\beta(x_{i,k+1})=\nabla f_i(x_{i,k+1})-\nabla f_i^\alpha(x_{i,k+1}).
\end{aligned}
\end{equation}
\item Agent $i\in \mathcal{N}$ updates $y_{i,k+1}^\alpha$ and $y_{i,k+1}^\beta$ as follows:
\begin{equation}\label{d2}
\begin{aligned}
&y_{i,k+1}^\alpha=\sum_{j=1}^N \tilde C_{ij}y_{j,k}^\alpha+\beta_i y_{i,k}^\beta\\
&\qquad \qquad \qquad \qquad \qquad+\nabla f_i^\alpha(x_{i,k+1})-\nabla f_i^\alpha(x_{i,k}),\\
&y_{i,k+1}^\beta=\alpha_i y_{i,k}^\alpha+(1-\beta_i) y_{i,k}^\beta\\
&\qquad \qquad \qquad \qquad \qquad+\nabla f_i^\beta(x_{i,k+1})-\nabla f_i^\beta(x_{i,k}).
\end{aligned}
\end{equation}
\end{enumerate}
\end{algorithmic}
\end{breakablealgorithm}
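For concreteness, the following NumPy sketch mirrors one run of Algorithm \ref{alg1}. It is illustrative only: the weight matrices (assumed to satisfy Assumption \ref{asp1} below), the substate weights, the noise scales, and the stacked gradient oracle \texttt{grad\_f} are placeholders supplied by the caller.
\begin{verbatim}
# Minimal NumPy sketch of Algorithm 1 (SD-Push-Pull); illustrative.
# R: (n,n) row-stochastic pulling weights; C_tilde: (n,n) pushing
# weights; alpha, beta, theta: length-n arrays; grad_f(X): (n,p)
# matrix stacking the local gradients of f_i at the rows of X.
import numpy as np

def sd_push_pull(R, C_tilde, alpha, beta, theta, grad_f, x0, eta, K):
    n, p = x0.shape
    rng = np.random.default_rng(0)
    x = x0.copy()
    xi = rng.normal(size=(n, p))        # arbitrary initial split xi_i
    g_a = xi.copy()                     # gradients nabla f_i^alpha
    g_b = grad_f(x) - xi                # gradients nabla f_i^beta
    y_a, y_b = g_a.copy(), g_b.copy()   # substates y^alpha_0, y^beta_0
    for _ in range(K):
        x_new = R @ (x - eta * y_a)     # pull/mixing update
        s = rng.laplace(scale=theta[:, None], size=(n, p))
        g_a_new = g_a + s               # perturbed alpha-gradient
        g_b_new = grad_f(x_new) - g_a_new
        y_a_new = C_tilde @ y_a + beta[:, None] * y_b + g_a_new - g_a
        y_b_new = (alpha[:, None] * y_a + (1.0 - beta)[:, None] * y_b
                   + g_b_new - g_b)     # private substate update
        x, y_a, y_b = x_new, y_a_new, y_b_new
        g_a, g_b = g_a_new, g_b_new
    return x
\end{verbatim}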
Denote $$\begin{aligned}
&\mathbf{R}:=[R_{ij}], \quad \mathbf{\Lambda_\alpha}:=diag(\alpha_1,\alpha_2,\ldots,\alpha_n), \\
&\mathbf{\tilde C}:=[\tilde C_{ij}],\quad \mathbf{\Lambda_\beta}:=diag(\beta_1,\beta_2,\ldots,\beta_n), \\
&\mathbf{\xi}:=[\xi_1, \xi_2,\ldots,\xi_n]^\top, \quad \mathbf{s}_k:=[s_{1,k},s_{2,k}\ldots, s_{n,k}]^\top,\\
&\mathbf{y}_k:=[y_{1,k}^\alpha,\ldots,y_{n,k}^\alpha,y_{1,k}^\beta,\ldots,y_{n,k}^\beta]^\top,\\
&\nabla F(\mathbf{x})=[\nabla f_1(x_1),\ldots,\nabla f_n(x_n)]^\top,\\
&\nabla {\tilde F}(\mathbf{x})=[\nabla f_1^\alpha(x_1),\ldots,\nabla f_n^\alpha(x_n),\nabla f_1^\beta(x_1),\ldots,\nabla f_n^\beta(x_n)]^\top,\\
&\mathbf{ C}:=\begin{bmatrix}
\mathbf{\tilde C}&\mathbf{\Lambda_\beta}\\
\mathbf{\Lambda_\alpha}&\bf{I}_n-\mathbf{\Lambda_\beta}\\
\end{bmatrix},\quad \mathbf{T}:=\begin{bmatrix}
\mathbf{I}_n & \mathbf{0}_n
\end{bmatrix}.\\
\end{aligned}
$$
Algorithm \ref{alg1} can be rewritten
in a matrix form as follows:
\begin{subequations}\label{eqalg}
\begin{align}
&\mathbf{x}_{k+1}=\mathbf{R}(\mathbf{x}_k-\eta \mathbf{T} \mathbf{y}_k),\\
&\mathbf{y}_{k+1}=\mathbf{C}\mathbf{y}_k+\nabla {\tilde F}(\mathbf{x}_{k+1})-\nabla {\tilde F}(\mathbf{x}_{k}),
\end{align}
where $\mathbf{x}_0$ is arbitrary and $\mathbf{y}_0=[\mathbf{\xi}^\top, (\nabla F(\mathbf{x}_0)-\mathbf{\xi})^\top]^\top.$
\end{subequations}
\begin{assumption}\label{asp1}
The matrix $\mathbf{R}\in\mathbb{R}^{n\times n}$ is a nonnegative row-stochastic matrix and $\mathbf{C}\in\mathbb{R}^{2n\times 2n}$ is a nonnegative column-stochastic matrix, i.e., $\mathbf{R1}_n=\mathbf{1}_n$ and $\mathbf{1}_{2n}^\top \mathbf{C}=\mathbf{1}_{2n}^\top$. Moreover, the diagonal entries of $\mathbf{R}$ and $\mathbf{C}$ are positive, i.e., $R_{ii}>0, \tilde C_{ii}>0, \forall i\in \mathcal{N}$.
\end{assumption}
{\color{black} Assumption \ref{asp1} can be satisfied by properly designing the weights in $\mathbf{R}$ and $\mathbf{C}$ by each agent locally. For instance, each agent may choose $R_{ij}=\frac{1}{| N^{in}_{R,i}|+c_R}$ for some constant $c_R>0$ for all $j\in N^{in}_{R,i}$ and let $R_{ii}=1-\sum_{j\in N^{in}_{R,i}}R_{ij}$. Similarly, agent $i$ may choose $\alpha_i=\epsilon$ and $\tilde C_{li}=\frac{1-\epsilon}{| N^{out}_{\tilde C,i}|+c_C}$ for some constants $0<\epsilon<1$ and $c_C>0$ for all $l\in N^{out}_{\tilde C,i}$, and let $\tilde C_{ii}=1-\epsilon-\sum_{l\in N^{out}_{\tilde C,i}} \tilde C_{li}$. Such a choice of weights renders $\mathbf{R}$ row-stochastic and $\mathbf{C}$
column-stochastic, thus satisfying Assumption \ref{asp1}. }
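This local construction is summarized in the following sketch (our own variable names; each agent is assumed to know its in-neighbour and out-neighbour lists), which produces inputs usable in the earlier Algorithm~\ref{alg1} sketch:
\begin{verbatim}
# Sketch of the local weight design above. in_nb[i] / out_nb[i] are
# the in-/out-neighbour index lists of agent i (excluding i itself).
# Returns R (row-stochastic) and C_tilde, alpha such that the stacked
# 2n x 2n matrix C is column-stochastic.
import numpy as np

def build_weights(in_nb, out_nb, eps=0.01, c_R=1.0, c_C=1.0):
    n = len(in_nb)
    R = np.zeros((n, n))
    C_tilde = np.zeros((n, n))
    alpha = np.full(n, eps)
    for i in range(n):
        for j in in_nb[i]:
            R[i, j] = 1.0 / (len(in_nb[i]) + c_R)
        R[i, i] = 1.0 - R[i].sum()
        for l in out_nb[i]:
            C_tilde[l, i] = (1.0 - eps) / (len(out_nb[i]) + c_C)
        C_tilde[i, i] = 1.0 - eps - C_tilde[:, i].sum()
    return R, C_tilde, alpha
\end{verbatim}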
Since $\mathbf{C}$ is column-stochastic,
$$\mathbf{1}_{2n}^\top \mathbf{y}_{k+1}=\mathbf{1}_{2n}^\top \mathbf{y}_{k}+\mathbf{1}_{2n}^\top \nabla {\tilde F}(\mathbf{x}_{k+1})-\mathbf{1}_{2n}^\top \nabla {\tilde F}(\mathbf{x}_{k}). $$
From equation \eqref{der} and $\nabla f_i^\beta(x_{i,0})=\nabla f_i(x_{i,0})-\nabla f_i^\alpha(x_{i,0}), \forall i\in \mathcal{N}$, we have by induction that
\begin{equation}\label{sumy}
\frac{1}{n}\mathbf{1}_{2n}^\top \mathbf{y}_{k}=\frac{1}{n}\mathbf{1}_{n}^\top \nabla F(\mathbf{x}_{k}).
\end{equation}
Relation \eqref{sumy} shows that under the proposed state decomposition mechanism in Algorithm \ref{alg1}, the average gradient $\mathbf{1}_{n}^\top \nabla F(\mathbf{x}_{k})/n$ is still tracked through $\mathbf{y}$-update.
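Continuing the sketch after Algorithm \ref{alg1}, relation \eqref{sumy} can be monitored numerically inside the iteration loop, assuming the supplied weights satisfy Assumption \ref{asp1}:
\begin{verbatim}
# Numerical check of the gradient-tracking identity: at every
# iteration the substates of y sum, column-wise, to the stacked
# local gradients. (Continues the sd_push_pull sketch; place at
# the end of its loop body.)
assert np.allclose((y_a + y_b).sum(axis=0), grad_f(x).sum(axis=0))
\end{verbatim}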
\begin{assumption}\label{asp2}
The graphs $\mathcal{G}_\mathbf{R}$ and $\mathcal{G}_\mathbf{C^\top}$ induced by matrices $\mathbf{R}$ and $\mathbf{C}$ contain at least one spanning tree. In addition, there exists at least one agent that is a root of spanning trees for both $\mathcal{G}_\mathbf{R}$ and $\mathcal{G}_\mathbf{C^\top}$.
\end{assumption}
{\color{black} Assumption \ref{asp2} is weaker than the assumptions in most previous works (e.g., \cite{xi2018linear}, \cite{nedic2017achieving}, \cite{xin2018linear}), where the graphs $\mathcal{G}_\mathbf{R}$ and $\mathcal{G}_\mathbf{C^\top}$ are assumed to be strongly connected. The relaxed assumption on the graph topology enables us to design the graphs $\mathcal{G}_\mathbf{R}$ and $\mathcal{G}_\mathbf{C^\top}$ more flexibly. A similar assumption is adopted in \cite{pu2018push}, \cite{pu2020robust}.}
\begin{lemma}[\cite{horn2012matrix}]\label{lem1}
Under Assumptions \ref{asp1} and \ref{asp2}, the matrix $\mathbf{R}$ has a unique nonnegative left eigenvector $u^\top$ (w.r.t.\ the eigenvalue $1$) with $u^\top \mathbf{1}_n=n$, and the matrix $\mathbf{C}$ has a unique nonnegative right eigenvector $v$ (w.r.t.\ the eigenvalue $1$) with $ \mathbf{1}_{2n}^\top v=n$.
\end{lemma}
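Numerically, $u$ and $v$ can be extracted from an eigendecomposition; the sketch below (illustrative, with the normalizations of Lemma \ref{lem1}) computes them for given $\mathbf{R}$ and the stacked matrix $\mathbf{C}$:
\begin{verbatim}
# Sketch: compute the eigenvectors u (left, of R) and v (right, of C)
# from Lemma 1, normalized so that u^T 1_n = n and 1_{2n}^T v = n.
import numpy as np

def lemma1_eigenvectors(R, C):
    n = R.shape[0]
    wR, VR = np.linalg.eig(R.T)            # left eigenvectors of R
    u = np.real(VR[:, np.argmin(np.abs(wR - 1.0))])
    u = u * n / u.sum()
    wC, VC = np.linalg.eig(C)              # right eigenvectors of C
    v = np.real(VC[:, np.argmin(np.abs(wC - 1.0))])
    v = v * n / v.sum()
    return u, v
\end{verbatim}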
\section{Differential Privacy Analysis}
In this section, we analyze the differential privacy property of SD-Push-Pull. The worst case is considered, in which the eavesdropper knows all the parameters, including $\mathbf{R}, \mathbf{C}, \mathbf{x}_0, \mathbf{\xi}, \{f_i\}_{i\neq i_0}$. The observation $\mathcal{O}_k$ denotes the messages transmitted between agents, where $\mathcal{O}_k=\{x_{i,k}-\eta y_{i,k}^\alpha, \tilde C_{ji}y_{i,k}^\alpha\mid \forall i,j \in{\mathcal{N}}\}$.
Before analyzing the differential privacy performance, we first make the $\delta$-adjacent function sets of Definition \ref{df1} precise. In order to guarantee the convergence of SD-Push-Pull, we assume that $f_{i_0}^{(1)}$ and $f_{i_0}^{(2)}$ in Definition \ref{df1} satisfy Assumption \ref{asp3}.
\begin{assumption}\label{asp3}
For any $x,x'\in\mathbb{R}^p,$
\begin{equation}\label{dis}
\nabla f_{i_0}^{(1)}(x)-\nabla f_{i_0}^{(1)}(x')=\nabla f_{i_0}^{(2)}(x)-\nabla f_{i_0}^{(2)}(x')
\end{equation}
\end{assumption}
{\color{black}Assumption \ref{asp3} is a common assumption in related work (e.g., \cite{ding2018consensus},\cite{ding2021differentially}); it guarantees that both $f_{i_0}^{(1)}$ and $f_{i_0}^{(2)}$ have the properties in Assumption \ref{asp}. An example satisfying Assumption \ref{asp3} is a pair of functions $f_{i_0}^{(1)}$ and $f_{i_0}^{(2)}$ with the same Hessian matrix.} In light of equation \eqref{dis}, we define the distance function as
$$D(f_{i_0}^{(1)}, f_{i_0}^{(2)})=\big|\big|\nabla f_{i_0}^{(1)}(x_{i_0,0})-\nabla f_{i_0}^{(2)}(x_{i_0,0})\big|\big|_1,
$$
where this distance does not depend on $x_{i_0,0}$ in view of equation \eqref{dis}.
\begin{theorem}
Under Assumption \ref{asp3}, let $\epsilon_i=\beta_i \delta/\theta_i$, where $\delta$ is defined in Definition \ref{df2}. SD-Push-Pull preserves $\epsilon_i$-differential privacy of each agent $i$'s cost function at the $t$-th iteration, for any $t\in\mathbb{N}$.
\end{theorem}
\begin{proof}
Consider any pair of $\delta-$adjacent function sets $\mathcal{S}^{(1)}$ and $\mathcal{S}^{(2)}$.
In view of the dynamics \eqref{d1} and \eqref{d2}, the observation $\mathcal{O}_k^{(l)}$ depends on the function set $\mathcal{S}^{(l)}$ and the random variable $\mathbf{s}_{k}^{(l)}$, $l\in\{1,2\}$. In order that $\mathcal{S}^{(1)}$ and $\mathcal{S}^{(2)}$ generate identical observations, it is necessary to guarantee that $\forall k \in \mathbb{N}, i\in\mathcal{N}$, $ y_{i,k}^{\alpha,(1)}= y_{i,k}^{\alpha,(2)}$.
Hence, for any $i\neq i_0$, $s_{i,k}^{(1)}=s_{i,k}^{(2)}$,
and for $i_0$,
\begin{equation}\label{es}
\Delta s_{i_0,k}=-\beta_{i_0} \Delta y_{i_0,k}^\beta,
\end{equation}
where $\Delta s_{i_0,k}=s_{i_0,k}^{(1)}-s_{i_0,k}^{(2)}$ and $\Delta y_{i_0,k}^\beta=y_{i_0,k}^{\beta,(1)}-y_{i_0,k}^{\beta,(2)}$.
In light of equation \eqref{d2}, we have
\begin{equation}
\Delta y_{i_0,k}^\beta=(1-\beta_{i_0})\Delta y_{i_0,k-1}^\beta-\Delta s_{i_0,k-1}
\end{equation}
From \eqref{es}, we can obtain
\begin{equation}
\begin{aligned}
&\Delta y_{i_0,k}^\beta=(1-\beta_{i_0})\Delta y_{i_0,k-1}^\beta+\beta_{i_0} \Delta y_{i_0,k-1}^\beta\\
&=\Delta y_{i_0,k-1}^\beta=\cdots=\Delta y_{i_0,0}^\beta.
\end{aligned}
\end{equation}
Since $y_{i_0,0}^{\beta,(l)}=\nabla f_{i_0}^{(l)}(x_{i_0,0})-\xi_{i_0}$ for $l\in\{1,2\}$,
\begin{equation}
\Delta y_{i_0,0}^\beta=\nabla f_{i_0}^{(1)}(x_{i_0,0})-\nabla f_{i_0}^{(2)}(x_{i_0,0}).
\end{equation}
Hence, we have
\begin{equation}
\vspace*{-2mm}
\begin{aligned}
||\Delta s_{i_0,k}||_1=\beta_{i_0}||\Delta y_{i_0,0}^\beta||_1=&\beta_{i_0} ||\nabla f_{i_0}^{(1)}(x_{i_0,0})-\nabla f_{i_0}^{(2)}(x_{i_0,0})||_1\\
\leq& \beta_{i_0} \delta.
\end{aligned}
\end{equation}
Next, from Lemma 2 in \cite{huang2015differentially}, we can obtain that
\begin{equation}
\vspace*{-2mm}
\begin{aligned}
\frac{P\{\mathcal{A}(\mathcal{S}^{(1)})\in \mathcal{O}\}}{P\{\mathcal{A}(\mathcal{S}^{(2)})\in\mathcal{O}\}}&=\prod\limits_{i=1}^n\prod\limits_{l=1}^p\frac{P\big([s_{i,k}^{(1)}]_l\big)}{P\big([s_{i,k}^{(2)}]_l\big)}\\
&\leq \prod\limits_{l=1}^p\exp\Big(
\frac{\Big|[s_{i_0,k}^{(1)}]_l-[s_{i_0,k}^{(2)}]_l\Big|}{\theta_{i_0}}\Big)\\
&=\exp\Big(\frac{||\Delta s_{i_0,k}||_1}{\theta_{i_0}}\Big)\leq \exp(\frac{\beta_{i_0}\delta}{\theta_{i_0}}),
\end{aligned}
\end{equation}
where $[s_{i,k}^{(1)}]_l$ and $[s_{i,k}^{(2)}]_l$ denote the $l$th elements of $s_{i,k}^{(1)}$ and $s_{i,k}^{(2)}$, respectively. This completes the proof.
\end{proof}
\section{Convergence Analysis}
In this section, we analyze the convergence performance of the proposed SD-Push-Pull algorithm. For the sake of analysis, we now rewrite Eqs. \eqref{eqalg}, based on equation \eqref{der}, as follows.
\begin{subequations}\label{eqc}
\begin{align}\label{eqc1}
&\mathbf{x}_{k+1}=\mathbf{R}(\mathbf{x}_k-\eta \mathbf{T} \mathbf{y}_k),\\
&\mathbf{y}_{k+1}=\mathbf{C}\mathbf{y}_k+[\mathbf{s}_k^\top, (\nabla F(\mathbf{x}_{k+1})-\nabla F(\mathbf{x}_{k})-\mathbf{s}_k)^\top]^\top.
\end{align}
\end{subequations}
Next, we define the following variables:
$$\bar x_k:=\frac{1}{n}u^\top \mathbf{x}_k,\qquad \bar y_k:=\frac{1}{n}\mathbf{1}_{2n}^\top \mathbf{y}_k.$$
The main idea of our strategy is to bound $\mathbb{E}[||\bar x_{k+1}-x^\star||_2]$, $\mathbb{E}[||\mathbf{x}_{k+1}-\mathbf{1}_n\bar x_{k+1}||_R]$ and $\mathbb{E}[||\mathbf{y}_{k+1}-v\bar y_{k+1}||_C]$ in terms of linear combinations of their previous values, where $||\cdot||_R$ and $||\cdot||_C$ are specific norms to be defined later. By establishing a linear system of inequalities, we can then derive the convergence result.
\begin{definition}
Given an arbitrary vector norm $||\cdot||$, for any $\mathbf{x}\in\mathbb{R}^{n\times p}$, we define
$$||\mathbf{x}||:=\Big|\Big|\Big[||\mathbf{x}^{(1)}||,||\mathbf{x}^{(2)}||,\ldots, ||\mathbf{x}^{(p)}||\Big]\Big|\Big|_2
$$
where $\mathbf{x}^{(1)},\mathbf{x}^{(2)},\ldots,\mathbf{x}^{(p)}\in\mathbb{R}^n$ are columns of $\mathbf{x}$.
\end{definition}
\subsection{Preliminary Analysis}
From Eqs. \eqref{eqc} and Lemma \ref{lem1}, we can obtain
\begin{equation}\label{p1}
\bar x_{k+1}=\frac{1}{n}u^\top \mathbf{R}(\mathbf{x}_k-\eta \mathbf{T} \mathbf{y}_k)=\bar x_{k}-\frac{\eta}{n}u^\top\mathbf{T} \mathbf{y}_k,
\end{equation}
and
\begin{equation}\label{p2}
\begin{aligned}
\bar y_{k+1}=&\frac{1}{n}\mathbf{1}_{2n}^\top (\mathbf{C}\mathbf{y}_k+[\mathbf{s}_k^\top, (\nabla F(\mathbf{x}_{k+1})-\nabla F(\mathbf{x}_{k})-\mathbf{s}_k)^\top]^\top)\\
=&\bar y_{k}+\frac{1}{n}\mathbf{1}_{2n}^\top[\mathbf{s}_k^\top, (\nabla F(\mathbf{x}_{k+1})-\nabla F(\mathbf{x}_{k})-\mathbf{s}_k)^\top]^\top.\\
\end{aligned}
\end{equation}
Furthermore, let us define $$g_k:=\frac{1}{n}\sum_{i=1}^n \nabla f_i(\bar x_{k}). $$
Then, from equation \eqref{p1}
\begin{equation}\label{m1}
\begin{aligned}
\bar x_{k+1}=&\bar x_{k}-\frac{\eta}{n}u^\top\mathbf{T} (\mathbf{y}_k-v\bar y_k+v\bar y_k)\\
=&\bar x_{k}-\frac{\eta}{n}u^\top\mathbf{T}v\bar y_k-\frac{\eta}{n}u^\top\mathbf{T}(\mathbf{y}_k- v\bar y_k)\\
=&\bar x_{k}-\eta'g_k-\eta'(\bar y_k-g_k)-\frac{\eta}{n}u^\top\mathbf{T}(\mathbf{y}_k- v\bar y_k),
\end{aligned}
\end{equation}
where $$\eta':=\frac{\eta}{n}u^\top\mathbf{T}v.$$
Based on Lemma \ref{lem1} and equation \eqref{p1}, we obtain
\begin{equation}\label{m2}
\begin{aligned}
\mathbf{x}_{k+1}-&\mathbf{1}_{n}\bar{x}_{k+1}=\mathbf{R}(\mathbf{x}_k-\eta \mathbf{T} \mathbf{y}_k)-\mathbf{1}_{n}\bar x_{k}+\frac{\eta}{n}\mathbf{1}_{n}u^\top\mathbf{T} \mathbf{y}_k\\
=&\mathbf{R}(\mathbf{x}_k-\mathbf{1}_{n}\bar x_{k})-(\mathbf{R}-\frac{\mathbf{1}_{n}u^\top}{n})\eta\mathbf{T} \mathbf{y}_k\\
=&(\mathbf{R}-\frac{\mathbf{1}_{n}u^\top}{n})(\mathbf{x}_k-\mathbf{1}_{n}\bar x_{k})-(\mathbf{R}-\frac{\mathbf{1}_{n}u^\top}{n})\eta\mathbf{T} \mathbf{y}_k.
\end{aligned}
\end{equation}
In addition, from equation \eqref{p2}, we have
\begin{equation}\label{m3}
\begin{aligned}
\mathbf{y}_{k+1}-&v\bar{y}_{k+1}=\mathbf{C}\mathbf{y}_k-v\bar{y}_{k}\\&+(\mathbf{I}_{2n}-\frac{v\mathbf{1}_{2n}^\top}{n})[\mathbf{s}_k^\top, (\nabla F(\mathbf{x}_{k+1})-\nabla F(\mathbf{x}_{k})-\mathbf{s}_k)^\top]^\top\\
=&(\mathbf{C}-\frac{v\mathbf{1}_{2n}^\top}{n})(\mathbf{y}_k-v\bar{y}_{k})\\
&+(\mathbf{I}_{2n}-\frac{v\mathbf{1}_{2n}^\top}{n})[\mathbf{s}_k^\top, (\nabla F(\mathbf{x}_{k+1})-\nabla F(\mathbf{x}_{k})-\mathbf{s}_k)^\top]^\top.\\
\end{aligned}
\end{equation}
Denote $\mathcal{F}_k$ as the $\sigma$-algebra generated by
$\{\mathbf{s}_0,\ldots, \mathbf{s}_{k-1}\}$, and define $\mathbb{E}[\cdot | \mathcal{F}_k]$ as the conditional expectation given $\mathcal{F}_k$.
\subsection{Supporting lemmas}
We next prepare a few useful supporting lemmas for further convergence analysis.
\begin{lemma}\label{lemma2}
Under Assumption \ref{asp}, there holds
$$\begin{aligned}
&||\bar y_k-g_k||_2\leq \frac{L}{\sqrt{n}}||\mathbf{x}_k-\mathbf{1}_n \bar x_k||_2,\\
&||g_k||_2\leq L||\bar x_k-x^\star||_2.\\
\end{aligned}$$
\end{lemma}
\vspace*{2mm}
\begin{proof}
In view of Assumption \ref{asp},
$$\begin{aligned}
||\bar y_k-g_k||_2=&\frac{1}{n}||\mathbf{1}_{n}^\top\nabla F(\mathbf{x}_k)-\mathbf{1}_{n}^\top\nabla F(\mathbf{1}_n{\bar x}_k)||_2\\
\leq& \frac{L}{n}\sum_{i=1}^n||x_{i,k}-\bar x_k||_2\leq \frac{L}{\sqrt{n}}||\mathbf{x}_k-\mathbf{1}_n{\bar x}_k||_2,
\end{aligned}$$
and
$$\begin{aligned}
||g_k||_2=&\frac{1}{n}||\mathbf{1}_{n}^\top\nabla F(\mathbf{1}_n{\bar x}_k)-\mathbf{1}_{n}^\top\nabla F(\mathbf{1}_n{x}^\star)||_2 \\
\leq& \frac{L}{n}\sum_{i=1}^n||\bar x_k-x^\star||_2=L ||\bar x_k-x^\star||_2,
\end{aligned}$$
where the first equality uses the optimality condition $\mathbf{1}_{n}^\top\nabla F(\mathbf{1}_n x^\star)=0$.
\end{proof}
\begin{lemma}\label{lemma3}
(Adapted from Lemma 10 in \cite{qu2017harnessing}) Let $f:=\frac{1}{n}\sum_{i=1}^n f_i$. Under Assumption \ref{asp}, for any $x\in \mathbb{R}^p$ and $0<\theta<2/\mu,$ we have
$$||x-\theta \nabla f(x)-x^\star||_2\leq
\tau ||x-x^\star||_2,$$
where $\tau=\max (|1-\mu \theta|,|1-L \theta|)$.
\end{lemma}
\begin{lemma}\label{lemma4}
(Adapted from Lemma 3 and Lemma 4 in \cite{pu2020push}) Suppose Assumptions \ref{asp1} and \ref{asp2} hold. There exist vector norms $||\cdot||_R$ and $||\cdot||_{C}$ such that $\sigma_R:=||\mathbf{R}-\frac{\mathbf{1}_n u^\top}{n}||_R<1$, $\sigma_C:=||\mathbf{C}-\frac{v \mathbf{1}_{2n}^\top }{n}||_C<1$, and $\sigma_R$ and $\sigma_C$ can be made arbitrarily close to the spectral radii $\rho(\mathbf{R}-\mathbf{1}_n u^\top/n)<1$ and $\rho(\mathbf{C}-v \mathbf{1}_{2n}^\top /n)<1$, respectively.
\end{lemma}
The following two lemmas are also taken from \cite{pu2018push}.
\begin{lemma}\label{lemma5}
Given an arbitrary norm $||\cdot||$, for $\mathbf{W}\in\mathbb{R}^{m\times n}$ and $\mathbf{x}\in\mathbb{R}^{n\times p}$, we have $||\mathbf{Wx}||\leq ||\mathbf{W}||||\mathbf{x}||$. For any $w \in\mathbb{R}^{n\times 1}$ and $x\in\mathbb{R}^{1\times p}$, we have $||wx||=||w||||x||_2$.
\end{lemma}
\begin{lemma}\label{lemma6}
There exist constants $\delta_{C,R},\delta_{C,2},\delta_{R,C},\delta_{R,2}>0$ such that $||\cdot||_C\leq \delta_{C,R}||\cdot||_R$, $||\cdot||_C\leq \delta_{C,2}||\cdot||_2$, $||\cdot||_R\leq \delta_{R,C}||\cdot||_C$, and $||\cdot||_R\leq \delta_{R,2}||\cdot||_2$. Moreover, with a proper rescaling of the norms $||\cdot||_R$ and $||\cdot||_C$, we have $||\cdot||_2\leq ||\cdot||_R$ and $||\cdot||_2\leq ||\cdot||_C$.
\end{lemma}
\begin{lemma}\label{lemma8}
(Lemma 5 in \cite{pu2020distributed}) Consider a nonnegative, irreducible matrix $\mathbf{M}= [m_{ij}] \in \mathbb{R}^{3\times3}$ whose diagonal elements satisfy $m_{11}, m_{22}, m_{33}<\lambda^\star$ for some $\lambda^\star > 0$. A necessary and sufficient condition for $\rho(\mathbf{M}) < \lambda^\star$ is $\det(\lambda^\star\mathbf{I}-\mathbf{M})>0$.
\end{lemma}
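The determinant test of Lemma \ref{lemma8} is straightforward to verify numerically, as in the following sketch with arbitrary illustrative data:
\begin{verbatim}
# Sketch: Lemma 8 on a random nonnegative 3x3 matrix. The test
# det(lam*I - M) > 0 agrees with the spectral condition rho(M) < lam.
import numpy as np

rng = np.random.default_rng(1)
M = rng.uniform(0.0, 0.3, size=(3, 3))  # nonnegative, irreducible a.s.
lam = 1.0
assert np.all(np.diag(M) < lam)
det_test = np.linalg.det(lam * np.eye(3) - M) > 0
rho_test = np.max(np.abs(np.linalg.eigvals(M))) < lam
assert det_test == rho_test
\end{verbatim}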
\subsection{Main results}
The following critical lemma establishes a linear system of inequalities that bounds $\mathbb{E}[||\bar x_{k+1}-x^\star||_2]$, $\mathbb{E}[||\mathbf{x}_{k+1}-\mathbf{1}_n\bar x_{k+1}||_R]$, and $\mathbb{E}[||\mathbf{y}_{k+1}-v\bar y_{k+1}||_C]$.
\begin{lemma}\label{mainl}
Under Assumptions \ref{asp}--\ref{asp2}, when $\eta'<2/(\mu+L)$, we have the following linear system of inequalities:
\begin{equation}\label{ine}
\begin{bmatrix}
\mathbb{E}[||\bar x_{k+1}-x^\star||_2]\\
\mathbb{E}[||\mathbf{x}_{k+1}-\mathbf{1}_n\bar x_{k+1}||_R]\\
\mathbb{E}[||\mathbf{y}_{k+1}-v\bar y_{k+1}||_C]
\end{bmatrix}\leq \mathbf{A}\begin{bmatrix}
\mathbb{E}[||\bar x_{k}-x^\star||_2]\\
\mathbb{E}[||\mathbf{x}_{k}-\mathbf{1}_n\bar x_{k}||_R]\\
\mathbb{E}[||\mathbf{y}_{k}-v\bar y_{k}||_C]
\end{bmatrix}+\mathbf{B},
\end{equation}
where the inequality is taken component-wise, and elements of the transition matrix $\mathbf{A} = [a_{ij}]$ and the vector $\mathbf{B}$ are given by:
\begin{equation}
\mathbf{A}=\begin{bmatrix}
1-\eta' \mu & c_1\eta & c_2\eta\\
c_3\eta & \sigma_R+c_4\eta & c_5 \eta\\
c_6 \eta & c_7+c_8\eta &\sigma_C+c_9\eta\\
\end{bmatrix},
\end{equation}
and \begin{equation}
\mathbf{B}=\begin{bmatrix}
0& 0&b_1\sqrt{2p\sum_{i=1}^n\theta_i^2}
\end{bmatrix}^\top,
\end{equation}
respectively, where $c_0:=||\mathbf{I}_{2n}-\frac{v\mathbf{1}_{2n}^\top}{n}||_2$ and the constants $c_i$ and $b_1$ are given by
\begin{equation*}
\begin{aligned}
&c_1=u^\top\mathbf{T}v\frac{L}{n\sqrt{n}},\quad c_2=\frac{||\mathbf{T}^\top u||}{n},\quad c_3=\sigma_R||\mathbf{T}v||_R L,\\
&c_4=\sigma_R||\mathbf{T}v||_R\frac{L}{\sqrt{n}},\quad c_5=\sigma_R\delta_{R,C}||\mathbf{T}||_R,\\
&c_6=c_0 \delta_{C,2}||\mathbf{RT}||_2 ||v||_2 L^2,\quad
c_7=c_0\delta_{C,2}L||\mathbf{R}-\mathbf{I}_n||_2\\
&c_8=||\mathbf{RT}||_2||v||_2\frac{L}{\sqrt{n}},\quad
c_9=c_0\delta_{C,2}L||\mathbf{RT}||_2,\\
&b_1=2c_0\delta_{C,2}L.\\
\end{aligned}
\end{equation*}
\end{lemma}
\vspace*{2mm}
\begin{proof}
See Appendix \ref{ap1}.
\end{proof}
The following theorem shows the convergence properties for the SD-Push-Pull algorithm in \eqref{eqalg}.
\begin{theorem}\label{th1}
Suppose Assumptions \ref{asp}--\ref{asp2} hold and the stepsize $\eta$ satisfies
$$\eta\leq\min\big\{\frac{1-\sigma_R}{2c_4}, \frac{1-\sigma_C}{2c_9},\frac{2d_3}{d_2+\sqrt{d_2^2+4d_1d_3}}
\big\},$$
where $d_1,d_2,d_3$ are defined in \eqref{ds}. Then $\sup_{l\geq k} \mathbb{E}[||\bar x_{l}-x^\star||_2]$ and $\sup_{l\geq k}\mathbb{E}[||\mathbf{x}_{l}-\mathbf{1}_n\bar x_{l}||_R]$ converge to $\limsup_{k\rightarrow\infty} \mathbb{E}[||\bar x_{k}-x^\star||_2]$ and $\limsup_{k\rightarrow\infty} \mathbb{E}[||\mathbf{x}_{k}-\mathbf{1}_n\bar x_{k}||_R]$, respectively, at the linear rate $\mathcal{O}(\rho(\mathbf{A})^k)$. In addition,
\begin{equation}\label{eqf}
\begin{aligned}
&\limsup_{k\rightarrow\infty} \mathbb{E}[||\bar x_{k}-x^\star||_2]\leq [\mathbf{(I-A)^{-1}B}]_1,\\
&\limsup_{k\rightarrow\infty} \mathbb{E}[||\mathbf{x}_{k}-\mathbf{1}_n\bar x_{k}||_R]\leq [\mathbf{(I-A)^{-1}B}]_2,\\
\end{aligned}
\end{equation}
where $[\mathbf{(I-A)^{-1}B}]_i$ denotes the $i$th element of the vector $\mathbf{(I-A)^{-1}B}$. Their specific forms are given in \eqref{bound1} and \eqref{bound2}, respectively.
\end{theorem}
\begin{proof} In view of Lemma \ref{mainl}, by induction we have
\begin{equation}\label{eqi}
\begin{bmatrix}
\mathbb{E}[||\bar x_{k}-x^\star||_2]\\
\mathbb{E}[||\mathbf{x}_{k}-\mathbf{1}_n\bar x_{k}||_R]\\
\mathbb{E}[||\mathbf{y}_{k}-v\bar y_{k}||_C]
\end{bmatrix}\leq \mathbf{A}^k
\begin{bmatrix}
\mathbb{E}[||\bar x_{0}-x^\star||_2]\\
\mathbb{E}[||\mathbf{x}_{0}-\mathbf{1}_n\bar x_{0}||_R]\\
\mathbb{E}[||\mathbf{y}_{0}-v\bar y_{0}||_C]
\end{bmatrix}+\sum_{l=0}^{k-1}\mathbf{A}^l\mathbf{B}.
\end{equation}
From equation \eqref{eqi}, we can see that if $\rho(\mathbf{A})<1$, then $\sup_{l\geq k} \mathbb{E}[||\bar x_{l}-x^\star||_2]$, $\sup_{l\geq k}\mathbb{E}[||\mathbf{x}_{l}-\mathbf{1}_n\bar x_{l}||_R]$ and $\sup_{l\geq k}\mathbb{E}[||\mathbf{y}_{l}-v\bar y_{l}||_C]$ all converge to a neighborhood of $0$ at the linear rate $\mathcal{O}(\rho(\mathbf{A})^k)$.
In view of Lemma \ref{lemma8}, it suffices to ensure $a_{11},a_{22}, a_{33}<1$ and $\det(\mathbf{I-A})>0$, or
\begin{equation}\label{det}\begin{aligned}
&\text{det}(\mathbf{I-A})=(1-a_{11})(1-a_{22})(1-a_{33})-a_{12}a_{23}a_{31}\\
&-a_{13}a_{21}a_{32}-(1-a_{22})a_{13}a_{31}-(1-a_{11})a_{23}a_{32}\\
&-(1-a_{33})a_{12}a_{21}>\frac{1}{2}(1-a_{11})(1-a_{22})(1-a_{33}),
\end{aligned}\end{equation}
which is equivalent to
\begin{equation}\label{eqd}\begin{aligned}
\frac{1}{2}(1-&a_{11})(1-a_{22})(1-a_{33})-c_1c_5c_6\eta^3\\&-c_2c_3\eta^2(c_7+c_8\eta)-(1-a_{22})c_2c_6\eta^2\\
&-(1-a_{33})c_1c_3\eta^2>0.
\end{aligned}
\end{equation}
Next, we give some sufficient conditions under which $a_{11}, a_{22}, a_{33} < 1$ and relation \eqref{eqd} hold true.
First, $a_{11} < 1$ is ensured by choosing $ \eta'\leq 2/(\mu+L)$. In addition, $a_{22},a_{33}<1$ are ensured by choosing
\begin{equation}\label{eqr}
1-a_{22}\geq\frac{1-\sigma_R}{2},\quad 1-a_{33}\geq\frac{1-\sigma_C}{2},
\end{equation}
requiring
\begin{equation}\label{eqeta}
\eta \leq \min\big\{\frac{1-\sigma_R}{2c_4}, \frac{1-\sigma_C}{2c_9}\big\}.
\end{equation}
Second, in view of relation \eqref{eqr}, $a_{22}>\sigma_R,$ and $a_{33}>\sigma_C,$ we have
\begin{equation}\begin{aligned}
\frac{1}{2}(1-&a_{11})(1-a_{22})(1-a_{33})-c_1c_5c_6\eta^3-c_2c_3\eta^2(c_7+c_8\eta)\\&-(1-a_{22})c_2c_6\eta^2-(1-a_{33})c_1c_3\eta^2\\
&>\frac{1}{2}(1-a_{11})\frac{1-\sigma_R}{2}\frac{1-\sigma_C}{2}-(c_1c_5c_6+c_2c_3c_8)\eta^3\\
&-c_2c_3c_7\eta^2-(1-\sigma_R)c_2c_6\eta^2-(1-\sigma_C)c_1c_3\eta^2. \end{aligned}
\end{equation}
Then, relation \eqref{eqd} is equivalent to
$$d_1 \eta^2+ d_2\eta-d_3<0,$$
where
\begin{equation}\label{ds}
\begin{aligned}
&d_1:=c_1c_5c_6+c_2c_3c_8,\\
&d_2:=c_2c_3c_7+(1-\sigma_R)c_2c_6+(1-\sigma_C)c_1c_3,\\
&d_3:=\frac{1}{8}u^\top\mathbf{T}v\mu(1-\sigma_R)(1-\sigma_C).\\
\end{aligned}
\end{equation}
Hence, a sufficient condition for $\det(\mathbf{I-A})>0$ is
\begin{equation}\label{eqeta2}
\eta\leq\frac{2d_3}{d_2+\sqrt{d_2^2+4d_1d_3}}.
\end{equation}
Relation \eqref{eqeta} and \eqref{eqeta2} yield the final bound on the stepsize $\eta$.
Moreover, in light of \eqref{det} and \eqref{eqr}, we can obtain from \eqref{eqi} that
\begin{equation} \label{bound1}
\begin{aligned}
&[ \mathbf{(I-A)^{-1}B}]_1\\
&=\Big[(a_{12}a_{23}+a_{13}(1-a_{22}))b_1\sqrt{2p\sum_{i=1}^n\theta_i^2}\Big]\frac{1}{\text{det}(\mathbf{I-A})}\\
&\leq \frac{8b_1(c_1c_5\eta^2+c_2\eta(1-\sigma_R))\sqrt{2p\sum_{i=1}^n\theta_i^2}}{u^\top\mathbf{T}v\eta\mu(1-\sigma_R)(1-\sigma_C)},
\end{aligned}
\end{equation}
and
\begin{equation}\label{bound2}
\begin{aligned}
&[ \mathbf{(I-A)^{-1}B}]_2\\
&=\Big[(a_{13}a_{21}+a_{23}(1-a_{11}))b_1\sqrt{2p\sum_{i=1}^n\theta_i^2}\Big]\frac{1}{\text{det}(\mathbf{I-A})}\\
&\leq \frac{8b_1(c_2c_3+c_5u^\top\mathbf{T}v\mu)\eta^2\sqrt{2p\sum_{i=1}^n\theta_i^2}}{u^\top\mathbf{T}v\eta\mu(1-\sigma_R)(1-\sigma_C)}.
\end{aligned}
\end{equation}
\end{proof}
\begin{remark}
When $\eta$ is sufficiently small, it can be shown that the linear rate indicator satisfies $\rho(\mathbf{A})\simeq 1-\eta'\mu$. From Theorem \ref{th1}, it is worth noting that the upper bounds in \eqref{bound1} and \eqref{bound2} are functions of $\eta$, $\theta_i, \forall i\in\mathcal{N}$, and other problem parameters, and they are increasing in $\theta_i$. Fixing the system parameters and the privacy level $(\epsilon_i, \delta)$, $\theta_i$ can be written as $\theta_i=\beta_i\delta/\epsilon_i$. Hence, the optimization error is of order $O(\frac{1}{\epsilon_i})$ for small $\epsilon_i$. As $\epsilon_i$ converges to $0$, that is, for complete privacy of each agent, the accuracy becomes arbitrarily bad.
\end{remark}
\section{SIMULATIONS}
In this section, we illustrate the effectiveness of SD-Push-Pull.
Consider a network containing $n=5$ agents, shown in Fig. \ref{digraph}. The optimization problem is the ridge regression problem, i.e.,
\begin{equation}
\min\limits_{x\in\mathbb{R}^p} f(x) =\frac{1}{n}\sum\limits_{i=1}^n f_i(x)=\frac{1}{n}\sum\limits_{i=1}^n\Big((u_i^\top x-v_i)^2+\rho ||x||^2_2\Big)\end{equation}
where $\rho>0$ is a penalty parameter. Each agent $i$ has a private sample $(u_i,v_i)$, where $u_i\in\mathbb{R}^p$ denotes the features and $v_i\in\mathbb{R}$ denotes the observed output. The vector $u_i\in[-1,1]^p$ is drawn from the uniform distribution. The observed output $v_i$ is then generated according to $v_i=u_i^\top \tilde x_i +\gamma_i$, where $\tilde x_i $ is evenly located in $[0,10]^p$ and $\gamma_i\sim \mathcal{N}(0,5)$. In terms of the above parameters, problem \eqref{pro1} has the unique solution $x^\star=(\sum_{i=1}^n[u_i u_i^\top+n\rho \mathbf{I}])^{-1}\sum_{i=1}^n[u_i u_i^\top]\tilde x_i$.
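The data generation and the closed-form optimum can be reproduced as in the sketch below (our reading of the setup: each $\tilde x_i$ is a constant vector with values evenly spaced in $[0,10]$, and $\gamma_i$ has variance $5$):
\begin{verbatim}
# Sketch of the simulation setup: ridge-regression data for n = 5
# agents and the closed-form optimum x_star used in the residuals.
import numpy as np

n, p, rho = 5, 10, 0.01
rng = np.random.default_rng(0)
U = rng.uniform(-1.0, 1.0, size=(n, p))            # features u_i
X_tilde = np.linspace(0.0, 10.0, n)[:, None] * np.ones((n, p))
V = np.einsum('ij,ij->i', U, X_tilde) \
    + rng.normal(0.0, np.sqrt(5.0), n)             # outputs v_i

A = sum(np.outer(U[i], U[i]) + n * rho * np.eye(p) for i in range(n))
b = sum(np.outer(U[i], U[i]) @ X_tilde[i] for i in range(n))
x_star = np.linalg.solve(A, b)
\end{verbatim}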
\vspace*{-2mm}
\begin{figure}[htp]
\centering
\includegraphics[width=0.20\textwidth]{digraph1.png}
\caption{A digraph of 5 agents. }
\vspace*{-2mm}
\label{digraph}
\end{figure}
The weights of the two substates, $\alpha_i$ and $\beta_i$, are set to $0.01$ and $0.5$, respectively, for each agent $i\in\{1,2,3,4,5\}$. The matrices $\mathbf{R}$ and $\mathbf{C}$ are designed as follows: for any agent $i$, $R_{ij}=\frac{1}{|\mathcal{N}_{\mathbf{R},i}^{in}|+1}$ for $j\in \mathcal{N}_{\mathbf{R},i}^{in}$ and $R_{ii}=1-\sum_{j\in \mathcal{N}_{\mathbf{R},i}^{in}} R_{ij}$; for any agent $i$, $C_{li}=\frac{1-\alpha_i}{|\mathcal{N}_{\mathbf{C},i}^{out}|+1}$ for all $l\in \mathcal{N}_{\mathbf{C},i}^{out}$ and $C_{ii}=1-\alpha_i-\sum_{l\in \mathcal{N}_{\mathbf{C},i}^{out}} C_{li}$.
Assume $\epsilon_i=\epsilon, \forall i\in\{1,2,3,4,5\}$, and $\delta=10$. To investigate the dependence of the algorithm's accuracy on the differential privacy level, we compare the performance of SD-Push-Pull for three cases, $\epsilon=1$, $\epsilon=5$ and $\epsilon=10$, in terms of the normalized residual $\frac{1}{n}\mathbb{E}\Big[\sum\limits_{i=1}^5\frac{||x_{i,k}-x^\star||_2^2}{||x_{i,0}-x^\star||_2^2}\Big]$. The results are depicted in Fig. \ref{per}, which shows that SD-Push-Pull achieves suboptimality and that the constant $\epsilon$ determines a tradeoff between the privacy level and the optimization accuracy.
\vspace*{-2mm}
\begin{figure}[htp]
\vspace*{-2mm}
\centering
\includegraphics[width=0.4\textwidth]{simu2}
\vspace*{-2mm}
\caption{Evolution of the normalized residual under different settings of the privacy level. The expected
residuals are approximated by averaging over 50 simulation runs. Dimension $p= 10$, stepsize $\eta = 0.01$ and penalty parameter $\rho = 0.01$.}
\label{per}
\vspace*{-2mm}
\end{figure}
\vspace*{-2mm}
\section {CONCLUSION AND FUTURE WORK}
In this paper, we considered a distributed optimization problem with differential privacy in the scenario where the network is abstracted as an unbalanced directed graph. We proposed a state-decomposition-based differentially private distributed optimization algorithm (SD-Push-Pull). In particular, a state decomposition mechanism was adopted to guarantee the differential privacy of individuals' sensitive information. In addition, we proved that each agent reaches a neighborhood of the optimum in expectation exponentially fast under a constant stepsize policy. Moreover, we showed that the constants $(\epsilon,\delta)$ determine a tradeoff between the privacy level and the optimization accuracy. Finally, a numerical example was provided that demonstrates the effectiveness of SD-Push-Pull. Future work includes improving the accuracy of the optimization and considering optimization problems with constraints.
\section{APPENDIX}
\subsection{Proof of Lemma \ref{mainl}}\label{ap1}
The three inequalities embedded in \eqref{ine} come from \eqref{m1}, \eqref{m2} and \eqref{m3}, respectively.
\textit{First inequality}: By Lemma \ref{lemma2}, Lemma \ref{lemma3} and \ref{lemma6}, we can obtain from \eqref{m1} that
\begin{equation*}
\begin{aligned}
&\mathbb{E}[||\bar x_{k+1}-x^\star||_2|\mathcal{F}_k]\\
&= \mathbb{E}[||\bar x_{k}-\eta'g_k-x^\star-\eta'(\bar y_k-g_k)\\&
\qquad \qquad \qquad\qquad\qquad \qquad\qquad-\frac{\eta}{n}u^\top\mathbf{T}(\mathbf{y}_k- v\bar y_k)||_2|\mathcal{F}_k]\\
&\leq (1-\eta' \mu)||\bar x_{k}-x^\star||_2+\frac{\eta'L}{\sqrt{n}}||\mathbf{x}_k-\mathbf{1}_n{\bar x}_k||_2\\
&\qquad \qquad \qquad \qquad \qquad \qquad \qquad+\frac{\eta||u^\top\mathbf{T}||_2}{n}||\mathbf{y}_k- v\bar y_k||_2\\
&\leq (1-\eta' \mu)||\bar x_{k}-x^\star||_2+\frac{\eta'L}{\sqrt{n}}||\mathbf{x}_k-\mathbf{1}_n{\bar x}_k||_2\\
&\qquad \qquad \qquad \qquad \qquad \qquad \qquad+\frac{\eta||u^\top\mathbf{T}||_2}{n}||\mathbf{y}_k- v\bar y_k||_C.\\
\end{aligned}
\end{equation*}
Taking full expectation on both sides of the inequalities completes the proof.
\textit{Second inequality:} By relation \eqref{m2}, Lemma \ref{lemma5} and Lemma \ref{lemma6}, we can obtain
\begin{equation*}
\begin{aligned}
&\mathbb{E}[||\mathbf{x}_{k+1}-\mathbf{1}_n\bar x_{k+1}||_R|\mathcal{F}_k]\\
&\leq \sigma_R ||\mathbf{x}_{k}-\mathbf{1}_n\bar x_{k}||_R+\sigma_R\eta||\mathbf{T}||_R||\mathbf{y}_k||_R\\
&\leq \sigma_R ||\mathbf{x}_{k}-\mathbf{1}_n\bar x_{k}||_R+\sigma_R\eta||\mathbf{T}||_R||\mathbf{y}_k-v\bar y_k||_R\\
&\qquad \qquad \qquad \qquad \qquad+\sigma_R\eta||\mathbf{T}v||_R||\bar y_k||_2.\\
\end{aligned}
\end{equation*}
In view of Lemma \ref{lemma2},
\begin{equation}\label{bary}
||\bar y_k||_2\leq \frac{L}{\sqrt{n}}||\mathbf{x}_k-\mathbf{1}_n \bar x_k||_2+L||\bar x_k-x^\star||_2.
\end{equation}
Thus, we have
$$\begin{aligned}
&\mathbb{E}[||\mathbf{x}_{k+1}-\mathbf{1}_n\bar x_{k+1}||_R|\mathcal{F}_k]\\
&\leq \sigma_R ||\mathbf{x}_{k}-\mathbf{1}_n\bar x_{k}||_R+\sigma_R\eta||\mathbf{T}||_R||\mathbf{y}_k-v\bar y_k||_R\\
&+\sigma_R\eta||\mathbf{T}v||_R\Big(\frac{L}{\sqrt{n}}||\mathbf{x}_k-\mathbf{1}_n \bar x_k||_2+L||\bar x_k-x^\star||_2\Big)\\
&\leq (\sigma_R+\sigma_R\eta||\mathbf{T}v||_R\frac{L}{\sqrt{n}})||\mathbf{x}_{k}-\mathbf{1}_n\bar x_{k}||_R\\
&+\sigma_R\eta\delta_{R,C}||\mathbf{T}||_R||\mathbf{y}_k-v\bar y_k||_C+\sigma_R\eta||\mathbf{T}v||_RL||\bar x_k-x^\star||_2.
\end{aligned}
$$
Again, taking full expectation on both sides of the inequalities completes the proof.
\textit{Third inequality:} It follows from \eqref{m3}, Lemma \ref{lemma5} and Lemma \ref{lemma6} that
\begin{equation*}
\begin{aligned}
&\mathbb{E}[||\mathbf{y}_{k+1}-v\bar y_{k+1}||_C|\mathcal{F}_k]\\
&\leq \sigma_C ||\mathbf{y}_{k}-v\bar y_{k}||_C+c_0\delta_{C,2}L||\mathbf{x}_{k+1}-\mathbf{x}_{k}||_2\\
&\qquad \qquad \qquad \qquad \qquad \qquad+2c_0\delta_{C,2}L\mathbb{E}[||\mathbf{s}_k||_2]\\
&\leq \sigma_C ||\mathbf{y}_{k}-v\bar y_{k}||_C+c_0\delta_{C,2}L||\mathbf{x}_{k+1}-\mathbf{x}_{k}||_2\\
&\qquad \qquad \qquad \qquad \qquad \qquad+2c_0\delta_{C,2}L\sqrt{\mathbb{E}[||\mathbf{s}_k||_2^2]}\\
&= \sigma_C ||\mathbf{y}_{k}-v\bar y_{k}||_C+2c_0\delta_{C,2}L\sqrt{2p\sum_{i=1}^n\theta_i^2}\\
&+c_0\delta_{C,2}L||(\mathbf{R}-\mathbf{I})(\mathbf{x}_{k}-\mathbf{1}_n\bar x_k)-\eta\mathbf{RT}\mathbf{y}_k||_2\\
&\leq \sigma_C ||\mathbf{y}_{k}-v\bar y_{k}||_C+2c_0\delta_{C,2}L\sqrt{2p\sum_{i=1}^n\theta_i^2}\\
&+c_0\delta_{C,2}L(||\mathbf{R}-\mathbf{I}||_2||\mathbf{x}_{k}-\mathbf{1}_n\bar x_k||_2+\eta||\mathbf{RT}||_2||\mathbf{y}_k-v\bar y_k||_2)\\
&+c_0\delta_{C,2}L\eta||\mathbf{RT}||_2||v||_2||\bar y_k||_2,
\end{aligned}
\end{equation*}
where the second inequality follows from Jensen's inequality.
Then, from \eqref{bary}, we have
\begin{equation*}
\begin{aligned}
&\mathbb{E}[||\mathbf{y}_{k+1}-v\bar y_{k+1}||_C|\mathcal{F}_k]\\
&\leq \sigma_C ||\mathbf{y}_{k}-v\bar y_{k}||_C+2c_0\delta_{C,2}L\sqrt{2p\sum_{i=1}^n\theta_i^2}\\
&+c_0\delta_{C,2}L(||\mathbf{R}-\mathbf{I}||_2||\mathbf{x}_{k}-\mathbf{1}_n\bar x_k||_2+\eta||\mathbf{RT}||_2||\mathbf{y}_k-v\bar y_k||_2)\\
&+c_0\delta_{C,2}L\eta||\mathbf{RT}||_2||v||_2(\frac{L}{\sqrt{n}}||\mathbf{x}_k-\mathbf{1}_n \bar x_k||_2+L||\bar x_k-x^\star||_2)\\
&=(\sigma_C+\eta c_0\delta_{C,2}L||\mathbf{RT}||_2)||\mathbf{y}_{k}-v\bar y_{k}||_C\\&+2c_0\delta_{C,2}L\sqrt{2p\sum_{i=1}^n\theta_i^2}+\eta c_0\delta_{C,2}L^2||\mathbf{RT}||_2||v||_2||\bar x_k-x^\star||_2\\
&+c_0\delta_{C,2}L(||\mathbf{R}-\mathbf{I}||_2+\eta||\mathbf{RT}||_2||v||_2\frac{L}{\sqrt{n}})||\mathbf{x}_k-\mathbf{1}_n \bar x_k||_2.\\
\end{aligned}
\end{equation*}
Taking full expectation yields the desired result.
\bibliographystyle{unsrt}
\section*{Acknowledgement}}{}
\newenvironment{romenumerate}[1][-10pt]
\addtolength{\leftmargini}{#1}\begin{enumerate
\renewcommand{\labelenumi}{\textup{(\roman{enumi})}}%
\renewcommand{\theenumi}{\textup{(\roman{enumi})}}%
}{\end{enumerate}}
\newcounter{oldenumi}
\newenvironment{romenumerateq
{\setcounter{oldenumi}{\value{enumi}}
\begin{romenumerate} \setcounter{enumi}{\value{oldenumi}}}
{\end{romenumerate}}
\newcounter{thmenumerate}
\newenvironment{thmenumerate}
{\setcounter{thmenumerate}{0}%
\renewcommand{\thethmenumerate}{\textup{(\roman{thmenumerate})}}%
\def\item{\pa
\refstepcounter{thmenumerate}\textup{(\roman{thmenumerate})\enspace}}
}
{}
\newcounter{xenumerate}
\newenvironment{xenumerate}
{\begin{list}
{\upshape(\roman{xenumerate})}
{\setlength{\leftmargin}{0pt}
\setlength{\rightmargin}{0pt}
\setlength{\labelwidth}{0pt}
\setlength{\itemindent}{\labelsep}
\setlength{\topsep}{0pt}
\usecounter{xenumerate}} }
{\end{list}}
\newcommand\xfootnote[1]{\unskip\footnote{#1}$ $}
\newcommand\pfitem[1]{\par(#1):}
\newcommand\pfitemx[1]{\par#1:}
\newcommand\pfitemref[1]{\pfitemx{\ref{#1}}}
\newcommand\pfcase[2]{\smallskip\noindent\emph{Case #1: #2} \noindent}
\newcommand\step[2]{\smallskip\noindent\emph{Step #1: #2} \noindent}
\newcommand\stepx{\smallskip\noindent\refstepcounter{steps}%
\emph{Step \arabic{steps}:}\noindent}
\newcommand{\refT}[1]{Theorem~\ref{#1}}
\newcommand{\refTs}[1]{Theorems~\ref{#1}}
\newcommand{\refC}[1]{Corollary~\ref{#1}}
\newcommand{\refL}[1]{Lemma~\ref{#1}}
\newcommand{\refLs}[1]{Lemmas~\ref{#1}}
\newcommand{\refR}[1]{Remark~\ref{#1}}
\newcommand{\refS}[1]{Section~\ref{#1}}
\newcommand{\refSs}[1]{Sections~\ref{#1}}
\newcommand{\refSS}[1]{Section~\ref{#1}}
\newcommand{\refP}[1]{Problem~\ref{#1}}
\newcommand{\refD}[1]{Definition~\ref{#1}}
\newcommand{\refE}[1]{Example~\ref{#1}}
\newcommand{\refF}[1]{Figure~\ref{#1}}
\newcommand{\refApp}[1]{Appendix~\ref{#1}}
\newcommand{\refApps}[1]{Appendices~\ref{#1}}
\newcommand{\refTab}[1]{Table~\ref{#1}}
\newcommand{\refand}[2]{\ref{#1} and~\ref{#2}}
\newcommand\nopf{\qed}
\newcommand\noqed{\renewcommand{\qed}{}}
\newcommand\qedtag{\eqno{\qed}}
\DeclareMathOperator*{\sumx}{\sum\nolimits^{*}}
\DeclareMathOperator*{\sumxx}{\sum\nolimits^{**}}
\newcommand{\sum_{i=0}^\infty}{\sum_{i=0}^\infty}
\newcommand{\sum_{j=0}^\infty}{\sum_{j=0}^\infty}
\newcommand{\sum_{j=1}^\infty}{\sum_{j=1}^\infty}
\newcommand{\sum_{k=0}^\infty}{\sum_{k=0}^\infty}
\newcommand{\sum_{k=1}^\infty}{\sum_{k=1}^\infty}
\newcommand{\sum_{m=0}^\infty}{\sum_{m=0}^\infty}
\newcommand{\sum_{m=1}^\infty}{\sum_{m=1}^\infty}
\newcommand{\sum_{m=0}^k}{\sum_{m=0}^k}
\newcommand{\sum_{m=0}^\ell}{\sum_{m=0}^\ell}
\newcommand{\sum_{n=0}^\infty}{\sum_{n=0}^\infty}
\newcommand{\sum_{n=1}^\infty}{\sum_{n=1}^\infty}
\newcommand{\sum_{i=1}^k}{\sum_{i=1}^k}
\newcommand{\sum_{i=1}^m}{\sum_{i=1}^m}
\newcommand{\sum_{i=1}^n}{\sum_{i=1}^n}
\newcommand{\sum_{j=1}^n}{\sum_{j=1}^n}
\newcommand{\sum_{k=1}^n}{\sum_{k=1}^n}
\newcommand{\prod_{i=1}^k}{\prod_{i=1}^k}
\newcommand{\prod_{i=1}^m}{\prod_{i=1}^m}
\newcommand\set[1]{\ensuremath{\{#1\}}}
\newcommand\bigset[1]{\ensuremath{\bigl\{#1\bigr\}}}
\newcommand\Bigset[1]{\ensuremath{\Bigl\{#1\Bigr\}}}
\newcommand\biggset[1]{\ensuremath{\biggl\{#1\biggr\}}}
\newcommand\lrset[1]{\ensuremath{\left\{#1\right\}}}
\newcommand\xpar[1]{(#1)}
\newcommand\bigpar[1]{\bigl(#1\bigr)}
\newcommand\Bigpar[1]{\Bigl(#1\Bigr)}
\newcommand\biggpar[1]{\biggl(#1\biggr)}
\newcommand\lrpar[1]{\left(#1\right)}
\newcommand\bigsqpar[1]{\bigl[#1\bigr]}
\newcommand\Bigsqpar[1]{\Bigl[#1\Bigr]}
\newcommand\biggsqpar[1]{\biggl[#1\biggr]}
\newcommand\lrsqpar[1]{\left[#1\right]}
\newcommand\xcpar[1]{\{#1\}}
\newcommand\bigcpar[1]{\bigl\{#1\bigr\}}
\newcommand\Bigcpar[1]{\Bigl\{#1\Bigr\}}
\newcommand\biggcpar[1]{\biggl\{#1\biggr\}}
\newcommand\lrcpar[1]{\left\{#1\right\}}
\newcommand\abs[1]{|#1|}
\newcommand\bigabs[1]{\bigl|#1\bigr|}
\newcommand\Bigabs[1]{\Bigl|#1\Bigr|}
\newcommand\biggabs[1]{\biggl|#1\biggr|}
\newcommand\lrabs[1]{\left|#1\right|}
\def\rompar(#1){\textup(#1\textup)}
\newcommand\xfrac[2]{#1/#2}
\newcommand\xpfrac[2]{(#1)/#2}
\newcommand\xqfrac[2]{#1/(#2)}
\newcommand\xpqfrac[2]{(#1)/(#2)}
\newcommand\parfrac[2]{\lrpar{\frac{#1}{#2}}}
\newcommand\bigparfrac[2]{\bigpar{\frac{#1}{#2}}}
\newcommand\Bigparfrac[2]{\Bigpar{\frac{#1}{#2}}}
\newcommand\biggparfrac[2]{\biggpar{\frac{#1}{#2}}}
\newcommand\xparfrac[2]{\xpar{\xfrac{#1}{#2}}}
\newcommand\innprod[1]{\langle#1\rangle}
\newcommand\expbig[1]{\exp\bigl(#1\bigr)}
\newcommand\expBig[1]{\exp\Bigl(#1\Bigr)}
\newcommand\explr[1]{\exp\left(#1\right)}
\newcommand\expQ[1]{e^{#1}}
\def\xexp(#1){e^{#1}}
\newcommand\ceil[1]{\lceil#1\rceil}
\newcommand\floor[1]{\lfloor#1\rfloor}
\newcommand\lrfloor[1]{\left\lfloor#1\right\rfloor}
\newcommand\frax[1]{\{#1\}}
\newcommand\setn{\set{1,\dots,n}}
\newcommand\nn{[n]}
\newcommand\ntoo{\ensuremath{{n\to\infty}}}
\newcommand\Ntoo{\ensuremath{{N\to\infty}}}
\newcommand\asntoo{\text{as }\ntoo}
\newcommand\ktoo{\ensuremath{{k\to\infty}}}
\newcommand\mtoo{\ensuremath{{m\to\infty}}}
\newcommand\stoo{\ensuremath{{s\to\infty}}}
\newcommand\ttoo{\ensuremath{{t\to\infty}}}
\newcommand\xtoo{\ensuremath{{x\to\infty}}}
\newcommand\bmin{\wedge}
\newcommand\norm[1]{\|#1\|}
\newcommand\bignorm[1]{\bigl\|#1\bigr\|}
\newcommand\Bignorm[1]{\Bigl\|#1\Bigr\|}
\newcommand\downto{\searrow}
\newcommand\upto{\nearrow}
\newcommand\half{\tfrac12}
\newcommand\thalf{\tfrac12}
\newcommand\punkt{.\spacefactor=1000}
\newcommand\iid{i.i.d\punkt}
\newcommand\ie{i.e\punkt}
\newcommand\eg{e.g\punkt}
\newcommand\viz{viz\punkt}
\newcommand\cf{cf\punkt}
\newcommand{a.s\punkt}{a.s\punkt}
\newcommand{a.e\punkt}{a.e\punkt}
\renewcommand{\ae}{\vu}
\newcommand\whp{w.h.p\punkt}
\newcommand\ii{\mathrm{i}}
\newcommand{\longrightarrow}{\longrightarrow}
\newcommand\dto{\overset{\mathrm{d}}{\longrightarrow}}
\newcommand\pto{\overset{\mathrm{p}}{\longrightarrow}}
\newcommand\asto{\overset{\mathrm{a.s.}}{\longrightarrow}}
\newcommand\eqd{\overset{\mathrm{d}}{=}}
\newcommand\neqd{\overset{\mathrm{d}}{\neq}}
\newcommand\op{o_{\mathrm p}}
\newcommand\Op{O_{\mathrm p}}
\newcommand\bbR{\mathbb R}
\newcommand\bbC{\mathbb C}
\newcommand\bbN{\mathbb N}
\newcommand\bbT{\mathbb T}
\newcommand\bbQ{\mathbb Q}
\newcommand\bbZ{\mathbb Z}
\newcommand\bbZleo{\mathbb Z_{\le0}}
\newcommand\bbZgeo{\mathbb Z_{\ge0}}
\newcounter{CC}
\newcommand{\CC}{\stepcounter{CC}\CCx}
\newcommand{\CCx}{C_{\arabic{CC}}}
\newcommand{\CCdef}[1]{\xdef#1{\CCx}}
\newcommand{\CCname}[1]{\CC\CCdef{#1}}
\newcommand{\CCreset}{\setcounter{CC}0}
\newcounter{cc}
\newcommand{\cc}{\stepcounter{cc}\ccx}
\newcommand{\ccx}{c_{\arabic{cc}}}
\newcommand{\ccdef}[1]{\xdef#1{\ccx}}
\newcommand{\ccname}[1]{\cc\ccdef{#1}}
\newcommand{\ccreset}{\setcounter{cc}0}
\renewcommand\Re{\operatorname{Re}}
\renewcommand\Im{\operatorname{Im}}
\newcommand\E{\operatorname{\mathbb E{}}}
\renewcommand\P{\operatorname{\mathbb P{}}}
\newcommand\Var{\operatorname{Var}}
\newcommand\Cov{\operatorname{Cov}}
\newcommand\Corr{\operatorname{Corr}}
\newcommand\Exp{\operatorname{Exp}}
\newcommand\Po{\operatorname{Po}}
\newcommand\Bi{\operatorname{Bi}}
\newcommand\Bin{\operatorname{Bin}}
\newcommand\Be{\operatorname{Be}}
\newcommand\Ge{\operatorname{Ge}}
\newcommand\NBi{\operatorname{NegBin}}
\newcommand\Res{\operatorname{Res}}
\newcommand\fall[1]{^{\underline{#1}}}
\newcommand\rise[1]{^{\overline{#1}}}
\newcommand\supp{\operatorname{supp}}
\newcommand\sgn{\operatorname{sgn}}
\newcommand\Tr{\operatorname{Tr}}
\newcommand\ga{\alpha}
\newcommand\gb{\beta}
\newcommand\gd{\delta}
\newcommand\gD{\Delta}
\newcommand\gf{\varphi}
\newcommand\gam{\gamma}
\newcommand\gG{\Gamma}
\newcommand\gk{\varkappa}
\newcommand\gl{\lambda}
\newcommand\gL{\Lambda}
\newcommand\go{\omega}
\newcommand\gO{\Omega}
\newcommand\gs{\sigma}
\newcommand\gss{\sigma^2}
\newcommand\gS{\Sigma}
\newcommand\gth{\theta}
\newcommand\eps{\varepsilon}
\newcommand\ep{\varepsilon}
\newcommand\bJ{\bar J}
\newcommand\cA{\mathcal A}
\newcommand\cB{\mathcal B}
\newcommand\cC{\mathcal C}
\newcommand\cD{\mathcal D}
\newcommand\cE{\mathcal E}
\newcommand\cF{\mathcal F}
\newcommand\cG{\mathcal G}
\newcommand\cH{\mathcal H}
\newcommand\cI{\mathcal I}
\newcommand\cJ{\mathcal J}
\newcommand\cK{\mathcal K}
\newcommand\cL{{\mathcal L}}
\newcommand\cM{\mathcal M}
\newcommand\cN{\mathcal N}
\newcommand\cO{\mathcal O}
\newcommand\cP{\mathcal P}
\newcommand\cQ{\mathcal Q}
\newcommand\cR{{\mathcal R}}
\newcommand\cS{{\mathcal S}}
\newcommand\cT{{\mathcal T}}
\newcommand\cU{{\mathcal U}}
\newcommand\cV{\mathcal V}
\newcommand\cW{\mathcal W}
\newcommand\cX{{\mathcal X}}
\newcommand\cY{{\mathcal Y}}
\newcommand\cZ{{\mathcal Z}}
\newcommand\ett[1]{\boldsymbol1_{#1}}
\newcommand\bigett[1]{\boldsymbol1\bigcpar{#1}}
\newcommand\Bigett[1]{\boldsymbol1\Bigcpar{#1}}
\newcommand\etta{\boldsymbol1}
\newcommand\smatrixx[1]{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)}
\newcommand\limn{\lim_{n\to\infty}}
\newcommand\limN{\lim_{N\to\infty}}
\newcommand\qw{^{-1}}
\newcommand\qww{^{-2}}
\newcommand\qq{^{1/2}}
\newcommand\qqw{^{-1/2}}
\newcommand\qqq{^{1/3}}
\newcommand\qqqb{^{2/3}}
\newcommand\qqqw{^{-1/3}}
\newcommand\qqqbw{^{-2/3}}
\newcommand\qqqq{^{1/4}}
\newcommand\qqqqc{^{3/4}}
\newcommand\qqqqw{^{-1/4}}
\newcommand\qqqqcw{^{-3/4}}
\newcommand\intii{\int_{-1}^1}
\newcommand\intoi{\int_0^1}
\newcommand\intoo{\int_0^\infty}
\newcommand\intoooo{\int_{-\infty}^\infty}
\newcommand\oi{[0,1]}
\newcommand\ooo{[0,\infty)}
\newcommand\ooox{[0,\infty]}
\newcommand\oooo{(-\infty,\infty)}
\newcommand\setoi{\set{0,1}}
\newcommand\dtv{d_{\mathrm{TV}}}
\newcommand\dd{\,\mathrm{d}}
\newcommand\ddx{\mathrm{d}}
\newcommand\ddxx{\frac{\ddx}{\ddx x}}
\newcommand{probability generating function}{probability generating function}
\newcommand{moment generating function}{moment generating function}
\newcommand{characteristic function}{characteristic function}
\newcommand{uniformly integrable}{uniformly integrable}
\newcommand\rv{random variable}
\newcommand\lhs{left-hand side}
\newcommand\rhs{right-hand side}
\newcommand\gnp{\ensuremath{G(n,p)}}
\newcommand\gnm{\ensuremath{G(n,m)}}
\newcommand\gnd{\ensuremath{G(n,d)}}
\newcommand\gnx[1]{\ensuremath{G(n,#1)}}
\newcommand\etto{\bigpar{1+o(1)}}
\newcommand\GW{Galton--Watson}
\newcommand\GWt{\GW{} tree}
\newcommand\cGWt{conditioned \GW{} tree}
\newcommand\GWp{\GW{} process}
\newcommand\tX{{\widetilde X}}
\newcommand\tY{{\widetilde Y}}
\newcommand\spann[1]{\operatorname{span}(#1)}
\newcommand\tn{\cT_n}
\newcommand\tnv{\cT_{n,v}}
\newcommand\tnV{\cT_{n,V}}
\newcommand\xT{\mathfrak T}
\newcommand\rea{\Re\ga}
\newcommand\rga{{\Re\ga}}
\newcommand\rgb{{\Re\gb}}
\newcommand\wgay{{-\ga-\frac12}}
\newcommand\qgay{{\ga+\frac12}}
\newcommand\rqgay{{\rga+\frac12}}
\newcommand\ex{\mathbf e}
\newcommand\hX{\widehat X}
\newcommand\sgt{simply generated tree}
\newcommand\sgrt{simply generated random tree}
\newcommand\hh[1]{d(#1)}
\newcommand\WW{\widehat W}
\newcommand\coi{C\oi}
\newcommand\out{\gd^+}
\newcommand\zne{Z_{n,\eps}}
\newcommand\ze{Z_{\eps}}
\newcommand\gatoo{\ensuremath{\ga\to\infty}}
\newcommand\rtoo{\ensuremath{r\to\infty}}
\newcommand\Yoo{Y_\infty}
\newcommand\bes{R}
\newcommand\tex{\tilde{\ex}}
\newcommand\tbes{\tilde{\bes}}
\newcommand\Woo{W_\infty}
\newcommand{m_1}{m_1}
\newcommand{\tilde m_1}{\tilde m_1}
\newcommand{B^{(3)}}{B^{(3)}}
\newcommand{r^{1/2}}{r^{1/2}}
\newcommand\coo{C[0,\infty)}
\newcommand\coT{\ensuremath{C[0,T]}}
\newcommand\expx[1]{e^{#1}}
\newcommand\gdtau{\gD\tau}
\newcommand\ygam{Y_{(\gam)}}
\newcommand\EE{V}
\newcommand\pigsqq{\sqrt{2\pi\gss}}
\newcommand\pigsqqw{\frac{1}{\sqrt{2\pi\gss}}}
\newcommand\gapigsqqw{\frac{(\ga-\frac12)\qw}{\sqrt{2\pi\gss}}}
\newcommand\gdd{\frac{\gd}{2}}
\newcommand\gdq{\frac{\gd}{4}}
\newcommand\raisetagbase{\raisetag{\baselineskip}}
\newcommand\eit{e^{\ii t}}
\newcommand\emit{e^{-\ii t}}
\newcommand\tgf{\widetilde\gf}
\newcommand\txi{\tilde\xi}
\newcommand\intT{\frac{1}{2\pi}\int_{-\pi}^\pi}
\newcommand\intpi{\int_{-\pi}^\pi}
\newcommand\Li{\operatorname{Li}}
\newcommand\xq{\setminus\set{\frac12}}
\newcommand\xo{\setminus\set{0}}
\newcommand\tqn{t/\sqrt n}
\newcommand\intpm[1]{\int_{-#1}^{#1}}
\newcommand\gnaxt{g_n(\ga,x,t)}
\newcommand\gaxt{g(\ga,x,t)}
\newcommand\gssx{\frac{\gss}2}
\newcommand\tq{\tilde q}
\newcommand\gao{\ga_0}
\newcommand\ppp{\cP_1}
\newcommand\tpsi{\widetilde\psi}
\newcommand\tgD{\tilde\gD}
\newcommand\xinn{\xi_{n-1,N}}
\newcommand\zzn{\frac12+\ii y_n}
\newcommand\tgdn{\tgD_N}
\newcommand\xgdn{\gD^*_N}
\newcommand\intt{\int_0^T}
\newcommand\act{|\cT|}
\newcommand\gna{g_{n,\ga}}
\newcommand\hna{h_{n,\ga}}
\newcommand\hnax{\hna^*}
\newcommand\doi{D_{1}}
\newcommand\cHoi{\cH(\doi)}
\newcommand\db{D}
\newcommand\dbm{D_-}
\newcommand\dbmx{\widehat D}
\newcommand\dbmB{D_-^B}
\newcommand\dbmxB{\widehat{D}^B}
\newcommand\DB{D^B}
\newcommand\bDB{\overline{\DB}}
\newcommand\Dbad{D_0}
\newcommand\Dbadm{D_-}
\newcommand\Dbadx{D^*}
\newcommand\OHD{O_{\cH(\Dbad)}}
\newcommand\OHDx{O_{\cH(\Dbadx)}}
\newcommand\Dgdx{D}
\newcommand\DEgdx{D^\gd}
\newcommand\VZ{V}
\newcommand\GDD{$\gD$-domain}
\newcommand\gdaf{\gda{} function}
\newcommand\gda{$\gD$-analytic}
\newcommand\xxl{^{(\ell)}}
\newcommand\xxll{^{(\ell_1,\ell_2)}}
\newcommand\xxllo{^{(\ell,\ell)}}
\newcommand\xxm{^{(m)}}
\newcommand\yyll{{\ell_1,\ell_2}}
\newcommand\ctn{\cT_n}
\newcommand\oz[1]{o\bigpar{|1-z|^{#1}}}
\newcommand\Oz[1]{O\bigpar{|1-z|^{#1}}}
\newcommand\Ozo[1]{O\bigpar{|1-z|^{#1}+1}}
\newcommand\Ozz[2]{O\bigpar{|1-z|^{#1}+|1-z|^{#2}}}
\newcommand\kk{\kappa}
\newcommand\kkk{\chi}
\newcommand\kkkx{\chi^*}
\newcommand\REA{\Re A}
\newcommand\yz{\bigpar{y(z)}}
\newcommand\gax{\ga'}
\newcommand\bga{\overline\ga}
\newcommand\QX{\mathbf{X}}
\newcommand\NNo{\set{0,1,2,\dots}}
\newcommand\dF{\widecheck F}
\newcommand\zddz{\vartheta}
\newcommand\ddz{\frac{\ddx}{\dd z}}
\newcommand\intqq{\int_{\frac12-\ii\infty}^{\frac12+\ii\infty}}
\newcommand\Igaz{I(\ga;z)}
\newcommand\Sgaz{S(\ga;z)}
\newcommand\gaoo{\ga_\infty}
\newcommand\gthx{\theta_1}
\newcommand\gthy{\theta_2}
\newcommand\vpx{\varpi}
\newcommand\xxi{\tau}
\newcommand\lir{\rho}
\newcommand\lirr{\check\rho}
\newcommand\muu{\mu'}
\newcommand\On[1]{O\bigpar{n^{#1}}}
\newcommand\ci{c^{(1)}}
\newcommand\cii{c^{(2)}}
\newcommand\ciii{c^{(3)}}
\newcommand\civ{c^{(4)}}
\newcommand\CS{Cauchy--Schwarz}
\newcommand\CSineq{\CS{} inequality}
\newcommand\citex{\REM}
\newcommand\refx[1]{\texttt{[#1]}}
\newcommand\xref[1]{\texttt{(#1)}}
\hyphenation{Upp-sala}
\begin{document}
\begin{abstract}
We study the additive functional $X_n(\ga)$ on \cGWt{s} given, for arbitrary complex~$\ga$, by summing the $\ga$th power of all subtree sizes. Allowing complex~$\ga$ is advantageous, even for the study of real~$\ga$, since it allows us to use powerful results from the theory of analytic functions in the proofs.
For $\rea < 0$, we prove that $X_n(\ga)$, suitably normalized, has a complex normal limiting distribution; moreover, as processes in~$\ga$, the weak convergence holds in the space of analytic functions in the left half-plane. We establish, and prove similar process-convergence extensions of, limiting distribution results for~$\ga$ in various regions of the complex plane. We focus mainly on the case where $\rea > 0$, for which $X_n(\ga)$, suitably normalized, has a limiting distribution that is \emph{not} normal but does not depend on the offspring distribution~$\xi$ of the \cGWt, assuming only that $\E \xi = 1$ and $0 < \Var \xi < \infty$. Under a weak extra moment assumption on~$\xi$, we prove that the convergence extends to moments, ordinary and absolute and mixed, of all orders.
At least when $\rea > \frac12$, the limit random variable $Y(\ga)$ can be expressed as a function of a normalized Brownian excursion.
\end{abstract}
\maketitle
\section{Introduction and main results}\label{S:intro}
In the study of random trees, one important part is the study of
\emph{additive functionals}. These are functionals of rooted trees
of the type
\begin{equation}\label{F}
F(T):=\sum_{v\in T} f(T_v),
\end{equation}
where $v$ ranges over all nodes of the tree $T$, $T_v$ is the subtree
consisting of $v$ and all its descendants, and $f$ is a given functional of
trees, often called the \emph{toll function}.
Equivalently, additive functionals may be defined by the recursion
\begin{equation}\label{F2}
F(T):=f(T)+\sum_{i=1}^d F(T_{v(i)}),
\end{equation}
where $d$ is the degree of the root $o$ of $T$ and $v(1),\dots,v(d)$ are the
children of $o$.
(All trees in this paper are rooted.)
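For instance, the constant toll $f(T)\equiv1$ gives $F(T)=|T|$, while the
toll $f(T)=\ett{|T|=k}$ makes $F(T)$ the number of subtrees $T_v$, $v\in T$,
of order $k$.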
We are mainly interested in the case when $T=\tn$ is some random tree of
order $|\tn|=n$, and we study asymptotics of $F(\tn)$ as \ntoo.
Such problems have been studied by many authors, for different classes of
functionals $f$ and different classes of random trees $\cT_n$;
some examples are
\cite{
HwangR,
FillK04,
FillFK,
FillK05,
SJ296,
Wagner15,
SJ285,
Delmas18,
RWagner19,
SJ347,
Delmas20,
Caracciolo20}.
In the present paper we consider the case where
the toll function is $f_\ga(T):=|T|^\ga$ for some constant $\ga$, and
$\tn$ is a \cGWt,
defined by some offspring distribution $\xi$ with $\E\xi=1$ and
$0<\Var\xi<\infty$;
see \refS{SSGW}
for definitions and note that this includes for example uniformly random
labelled trees, ordered trees, and binary trees.
(We use these standing assumptions on $\tn$ and~$\xi$ throughout the
paper, whether said explicitly or not.)
Some previous papers dealing with this situation, in varying generality, are
\cite{
FillK04,
FillFK,
Delmas18,
Delmas20,
Caracciolo20}.
We denote the corresponding additive functional \eqref{F} by $F_\ga$;
thus $F_\ga(T)$ is the sum of the $\ga$th power of all subtree sizes for $T$.
We also introduce the following notation:
\begin{align}
X_n(\ga)&:=F_\ga(\tn):=\sum_{v\in\tn}|\tnv|^\ga,\label{Xn}
\\
\tX_n(\ga)&:=X_n(\ga)-\E X_n(\ga).\label{tX}
\end{align}
Note that for $\ga=0$, we trivially have $X_n(0)=F_0(\tn)=n$.
The case $\ga=1$ yields, as is well known,
the \emph{total pathlength},
see \refE{Ega=1}.
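For orientation, consider two deterministic extremes: if $T$ is a path of
order $n$, then the subtree orders are $1,\dots,n$ and
$F_\ga(T)=\sum_{k=1}^n k^\ga$, while if $T$ is a star (a root with $n-1$
leaves), then $F_\ga(T)=n^\ga+n-1$. The results below show that the typical
order of $X_n(\ga)$ for the random tree $\tn$ lies between these extremes;
see \refT{TE}.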
Previous papers have studied
the case when~$\ga$ is real, but
we consider these variables for arbitrary complex~$\ga$.
This is advantageous, even for the study of real~$\ga$,
since it allows us to use powerful results from the
theory of analytic functions in the proofs.
We also find new phenomena for non-real $\ga$ (for example \refT{TEgd}).
Note that $X_n(\ga)$ and $\tX_n(\ga)$ are random
entire functions of $\ga$, for any given $n$. [The expectation in
\eqref{tX} exists because, for a given $n$, the variable $X_n(\ga)$ takes
only a finite number of different values.]
We begin with the case $\rea<0$, where $X_n(\ga)$ is asymptotically normal
as an easy consequence of \cite[Theorem~1.5 and Remark~1.6]{SJ285}.
More precisely, the following holds.
(Proofs of this and other theorems stated here are given later.)
We say that a complex random variable $\zeta$ is \emph{normal}
if $(\Re\zeta,\Im\zeta)$ has a two-dimensional normal distribution.
(See \cite[Section 1.4]{SJIII}, and note that a real normal variable is a
special case.)
\begin{theorem}\label{T<0}
Let $\tn$ be a \cGWt{} defined by an offspring distribution $\xi$ with
$\E\xi=1$ and $0<\gss:=\Var\xi<\infty$.
Then there exists a family of centered complex
normal random variables $\hX(\ga)$,
$\rea<0$, such that, as \ntoo,
\begin{equation}\label{t0}
n^{-1/2}\tX_n(\ga)= \frac{X_n(\ga)-\E X_n(\ga)}{\sqrt n}\dto \hX(\ga),
\qquad \rea<0.
\end{equation}
Moreover, $\hX(\ga)$ is a (random) analytic function of $\ga$, and
the convergence \eqref{t0} holds in the space $\cH(H_-)$ of analytic
functions in the left half-plane $H_-:=\set{\ga:\rea<0}$.
Furthermore,
\begin{equation}\label{t0symm}
\overline{\hX(\ga)}=\hX(\overline{\ga}),
\qquad \ga\in H_-.
\end{equation}
The covariance function $\E\bigpar{\hX(\ga)\hX(\gb)}$ is an analytic
function of two variables $\ga,\gb\in H_-$, and, as \ntoo,
\begin{equation}\label{t0cov}
n\qw \Cov\bigpar{X_n(\ga),X_n(\gb)}\to \E\bigsqpar{\hX(\ga)\hX(\gb)},
\qquad \ga,\gb\in H_-.
\end{equation}
\end{theorem}
The convergence in $\cH(H_-)$
means uniform convergence on compact sets and
implies joint convergence for different $\ga$
in \eqref{t0}; see \refSS{SSanalytic}.
The distribution of the limit $\hX(\ga)$ depends on the offspring
distribution $\xi$ in a rather complicated way.
Since the variables $\hX(\ga)$ are complex normal,
and \eqref{t0symm} holds,
the joint distribution of all $\hX(\ga)$ is determined by the covariance
function $\E\bigpar{\hX(\ga)\hX(\gb)} $, $\ga,\gb\in H_-$. We give a formula
for this in \eqref{r0cov}, but we do not know any simple way to evaluate it.
In most parts of the paper we assume $\rea>0$.
We introduce a normalization that will turn out to be correct
for $\rea>0$ and define
\begin{align}
Y_n(\ga)&:=n^{-\ga-\frac12}X_n(\ga),\label{Yn}
\\
\tY_n(\ga)&:=n^{-\ga-\frac12}\tX_n(\ga)=Y_n(\ga)-\E Y_n(\ga)
.\label{tY}
\end{align}
Then the following holds.
\begin{theorem}\label{T1}
There exists a family of complex
random variables $\tY(\ga)$, $\rea>0$, such that if
$\tn$ is a \cGWt{} defined by an offspring distribution $\xi$ with
$\E\xi=1$ and $0<\gss:=\Var\xi<\infty$, then, as \ntoo,
\begin{equation}\label{t1}
\gs n^{-\ga-\frac12}\tX_n(\ga)= \gs\tY_n(\ga)\dto \tY(\ga),
\qquad \rea>0.
\end{equation}
Moreover, $\tY(\ga)$ is a (random) analytic function of $\ga$, and
the convergence \eqref{t1} holds in the space $\cH(H_+)$ of analytic
functions in the right half-plane $H_+:=\set{\ga:\rea>0}$.
\end{theorem}
Here $\tY(\ga)$ is \emph{not} normal. [In fact, it follows from \eqref{Y}
and \eqref{tx1} below that if $\ga>\frac12$, then $\tY(\ga)$ is bounded below.]
On the other hand, note that the family $\tY(\ga)$ does \emph{not} depend on the
offspring
distribution $\xi$; it is the same for all \cGWt{s} satisfying our
conditions
$\E\xi=1$ and $0<\gss<\infty$, and thus the asymptotics of $\tX_n$ depends
on $\xi$ only through the scaling factor $\gs$.
Hence, we have universality of the limit when $\rea>0$, but not when $\rea<0$.
We can add moment convergence to \refT{T1}, at least
provided we add a weak extra moment assumption.
\begin{theorem}\label{T1mom}
Assume, in addition to the conditions on $\xi$ in \refT{T1}, that
$\E\xi^{2+\gd}<\infty$ for some $\gd>0$.
Then, the limit \eqref{t1} holds with all moments,
ordinary and absolute.
In other words, if\/ $\rga>0$, then $\E|\tY(\ga)|^r<\infty$ for every
$r<\infty$;
furthermore, for any integer $\ell\ge1$,
\begin{equation}\label{t1mom}
n^{-\ell(\qgay)}\E\bigsqpar{\tX_n(\ga)^\ell}
= \E\bigsqpar{\tY_n(\ga)^\ell}
\to \gs^{-\ell}\E\bigsqpar{ \tY(\ga)^\ell},
\qquad \rea>0
,\end{equation}
and similarly for absolute moments and mixed moments of $\tX_n(\ga)$
and $\overline{\tX_n(\ga)}$.
Moreover, for each fixed $\ell$, \eqref{t1mom}
and its analogues for absolute moments and mixed moments
hold uniformly for $\ga$ in
any fixed compact subset of $H_+$;
the limit $\E\tY(\ga)^\ell$ is an analytic
function of $\ga\in H_+$ while absolute moments and
mixed moments of $\tY(\ga)$ and $\overline{\tY(\ga)}$
are continuous functions of $\ga\in H_+$.
\end{theorem}
The result extends to joint moments for several $\ga\in H_+$.
The moments of
$\tY(\ga)$ may be computed by \eqref{Y} and the recursion
formula
\eqref{kk1}--\eqref{kk2} below.
Note that $\tY(\ga)$ is centered: $\E\tY(\ga)=0$; this follows, \eg, by the case $\ell=1$ of \eqref{t1mom}.
See also
\refR{Rcent} and \refE{EVar}.
\begin{remark}\label{RT1mom}
We conjecture that \refT{T1mom} holds also without the extra moment condition.
Note that even without that condition, \eqref{t1mom} holds for
$\ga\neq\frac12$ as a simple consequence of \refT{TXmom} below.
The case $\ga=\frac12$ is more complicated, but has been treated directly in
the special case $\xi\sim\Bi(2,\frac12)$ (binary trees)
by \cite{FillK04}; that special case satisfies
$\E \xi^r<\infty$ for every $r$,
but it seems likely that the proof in \cite{FillK04}
can be adapted to the general case by arguments similar to those in \refS{Smom}.
However, we have not pursued this and leave it as an open problem.
See also \cite{Caracciolo20}.
\end{remark}
Theorems \ref{T<0} and \ref{T1} are stated for the centered variables
$\tX_n(\ga)$.
We obtain results for $X_n(\ga)$ by combining Theorems \ref{T<0}--\ref{T1}
with the
asymptotics for the expectation $\E X_n(\ga)$ given in the next theorem, but
we first need more notation.
Let $\cT$ be the \GWt{} (without conditioning) defined by the offspring
distribution $\xi$; see \refSS{SSGW}.
It follows from \eqref{pct} that $f_\ga(\cT)=|\cT|^\ga$ has a finite
expectation if and only if $\rea<\frac12$, and we define
\begin{equation}
\label{mu}
\mu(\ga):=\E f_\ga(\cT)=\E|\cT|^\ga
=\sum_{n=1}^\infty n^\ga\P(|\cT|=n),
\qquad \rea<\tfrac12.
\end{equation}
This is an analytic function in the half-plane $\rea<\frac12$. Note that
$\mu(\ga)$ depends on the offspring distribution $\xi$, although we do not
show this in the notation.
Note also that $\mu(\ga)$ has a singularity at $\ga=\frac12$; in fact, it
is easily seen from \eqref{pct} that
\begin{equation}
\label{aaa}
\mu(\ga)\sim \frac{(2\pi\gss)\qqw}{\frac12-\ga}, \qquad
\text{as $\ga\upto\tfrac12$}.
\end{equation}
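[A sketch of \eqref{aaa}: by \eqref{mu} and \eqref{pct} (take span $h=1$
for simplicity), $\mu(\ga)$ behaves like
$(2\pi\gss)\qqw\sum_{n\ge1}n^{\ga-3/2}=(2\pi\gss)\qqw\zeta(\frac32-\ga)$,
and $\zeta(s)\sim(s-1)\qw$ as $s\downto1$.]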
\begin{remark}\label{Rmua}
In \refS{Smua} (\refT{TM}),
we show by a rather complicated argument
that although $\mu(\ga) \to \infty$ as $\ga \upto \frac12$ (see \eqref{aaa}),
$\mu(\ga)$ has a continuous extension to all other points on the line
$\rea = \frac12$.
\end{remark}
It is shown by \citet{Aldous-fringe}
that if we construct
a random fringe tree $\tnV$ by first choosing a random
\cGWt{} $\cT_n$ as above, and then a random node $V$ in the tree,
then $\tnV$ converges in distribution as \ntoo{}
to the random \GWt{} $\cT$.
This was sharpened in \cite[Theorem 7.12]{SJ264}
to the corresponding ``quenched'' result:
the conditional distribution of $\tnV$ given $\cT_n$ converges in
probability to the distribution of $\cT$.
As a consequence (see \refS{Sfringe}), we obtain the following
results, which show the central role of $\mu(\ga)$ in the study of
$X_n(\ga)$.
\begin{theorem}\label{Tfringe}
\begin{thmenumerate}
\item \label{TfringeE}
If\/ $\rea\le0$, then as \ntoo,
\begin{equation}
\label{tfringe}
\E X_n(\ga) = \mu(\ga) n + o(n).
\end{equation}
\item \label{TfringeP}
If\/ $\rea \le 0$, then $X_n(\ga)/n \pto\mu(\ga)$.
\end{thmenumerate}
\end{theorem}
The following theorem improves and extends the estimate \eqref{tfringe}; in
particular, note that [in parts \ref{TE<0} and \ref{TE-}]
the error term in \eqref{tfringe} is improved to
$o\bigpar{n\qq}$ for $\rea<0$ and $O\bigpar{n\qq}$ for $\rea=0$.
\begin{theorem}\label{TE}
The following estimates hold as \ntoo, in all cases uniformly for $\ga$ in
compact subsets of the indicated domains.
\begin{romenumerate}
\item \label{TE<0}
If\/ $\rea<0$, then
\begin{equation}\label{te<0}
\E X_n(\ga)
=\mu(\ga)n+o\bigpar{n\qq}.
\end{equation}
\item \label{TE-}
If\/ $-\frac12<\rea<\frac12$, then
\begin{equation}\label{te-}
\E X_n(\ga)
=\mu(\ga)n+\frac{1}{\sqrt{2}\gs}\frac{\gG(\ga-\frac12)}{\gG(\ga)}\,n^\qgay
+o\bigpar{n^{(\rea)_++\frac12}}.
\end{equation}
\item \label{TE+}
If\/ $\rea>\frac12$, then
\begin{equation}\label{te+}
\E X_n(\ga)
=\frac{1}{\sqrt{2}\gs}\frac{\gG(\ga-\frac12)}{\gG(\ga)}n^\qgay
+ o\bigpar{n^\qgay}.
\end{equation}
\item \label{TE=}
If\/ $\ga=\frac12$, then
\begin{equation}\label{te=}
\E X_n(1/2)
=
\frac{1}{\sqrt{2\pi\gss}} n \log n + o\bigpar{n\log n}.
\end{equation}
\end{romenumerate}
\end{theorem}
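Note that the leading coefficient in \eqref{te+} equals $\gs\qw\kk_1(\ga)$,
where $\kk_1(\ga):=\gG(\ga-\frac12)/\bigpar{\sqrt2\,\gG(\ga)}$ is the
limiting mean appearing in \eqref{kk1} below; this is consistent with the
case $\ell=1$ of \eqref{mtx1}.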
\begin{remark}\label{RTE1/2}
As shown in \refT{T1/2}\ref{T1/2E}, the estimate~\eqref{te-} holds also for
$\ga = \frac12 + \ii y$, $y \neq 0$, where $\mu(\ga)$ is the continuous
extension
described in \refR{Rmua}.
\end{remark}
Theorems \ref{T<0} and \ref{TE}\ref{TE<0} together yield the following
variant of \eqref{t0}.
\begin{theorem}
\label{TX<0}
If\/ $\rea<0$, then, as \ntoo,
\begin{equation}
\frac{ X_n(\ga)-n\mu(\ga)}{\sqrt n}
\dto \hX(\ga).
\end{equation}
Moreover, this holds in the space $\cH(H_-)$.
\end{theorem}
Similarly, Theorems \ref{T1} and \ref{TE} [parts \ref{TE+} and \ref{TE-}]
yield the following.
We define, for $\rea>0$ and $\ga\neq\frac12$,
the complex random variable
\begin{equation}
\label{Y}
Y(\ga):=\tY(\ga)+\frac{1}{\sqrt{2}}\frac{\gG(\ga-\frac12)}{\gG(\ga)}.
\end{equation}
\begin{theorem}
\label{TX}
\begin{thmenumerate}
\item \label{TX>}
If\/ $\rea>\frac12$, then, as \ntoo,
\begin{equation}\label{tx1}
Y_n(\ga):= n^{\wgay} X_n(\ga)
\dto \gs\qw Y(\ga).
\end{equation}
\item \label{TX<}
If\/ $0<\rea<\frac12$, then, as \ntoo,
\begin{equation}\label{tx<}
n^{\wgay}\bigsqpar{ X_n(\ga)-n\mu(\ga)}
\dto \gs\qw Y(\ga).
\end{equation}
\end{thmenumerate}
Moreover, in both cases,
this holds in the space $\cH(D)$ for the indicated domain $D$.
\end{theorem}
\begin{remark}\label{RT1/2X}
As shown in \refT{T1/2}\ref{T1/2X}, the limit result~\eqref{tx<} holds also
for $\ga = \frac12 + \ii y$, $y \neq 0$, where $\mu(\ga)$ is the continuous
extension of \refR{Rmua}.
\end{remark}
We can add moment convergence to \refT{TX}, too.
\begin{theorem}\label{TXmom}
The limits \eqref{tx1} and \eqref{tx<} hold with all moments,
for $\rga>\frac12$, and $0<\rga<\frac12$, respectively.
In other words, for any integer $\ell\ge1$,
if\/ $\rga>\frac12$, then
\begin{equation}\label{mtx1}
\E X_n(\ga)^\ell
= \gs^{-\ell}\E Y(\ga)^\ell n^{\ell(\qgay)}
+ o\bigpar{n^{\ell(\qgay)}}
, \end{equation}
and if\/ $0<\rga<\frac12$, then
\begin{equation}\label{mtx<}
\E\bigsqpar{ X_n(\ga)-n\mu(\ga)}^\ell
= \gs^{-\ell} \E Y(\ga)^\ell n^{\ell(\qgay)}
+ o\bigpar{n^{\ell(\qgay)}}
. \end{equation}
Moreover, in both cases, the moments
$\kk_\ell=\kk_\ell(\ga):=\E Y(\ga)^\ell$ are given by the recursion formula
\begin{align}\label{kk1}
\kk_1&=\frac{\gG(\ga-\frac12)}{\sqrt2\,\gG(\ga)},
\intertext{and, for $\ell\ge2$, with $\gax:=\ga+\frac12$,}
\kk_\ell&=
\frac{\ell\gG(\ell\gax-1)}{\sqrt2\,\gG(\ell\gax-\frac12)}\kk_{\ell-1}
+\frac{1}{4\sqrt\pi}\sum_{j=1}^{\ell-1}\binom{\ell}{j}
\frac{\gG(j\gax-\frac12)\gG((\ell-j)\gax-\frac12)}{\gG(\ell\gax-\frac12)}
\kk_j\kk_{\ell-j}.
\label{kk2}
\end{align}
\end{theorem}
The result extends to joint moments; see \refSS{SSmom-mix}.
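As an illustration of \eqref{kk1}--\eqref{kk2} (a consistency check only),
take $\ga=1$, so that $\gax=\frac32$. Then \eqref{kk1} gives
$\kk_1=\gG(\frac12)/\bigpar{\sqrt2\,\gG(1)}=\sqrt{\pi/2}$, and \eqref{kk2}
with $\ell=2$ gives
\begin{equation*}
\kk_2
=\frac{2\gG(2)}{\sqrt2\,\gG(\frac52)}\kk_1
+\frac{1}{4\sqrt\pi}\binom{2}{1}\frac{\gG(1)^2}{\gG(\frac52)}\kk_1^2
=\frac43+\frac13=\frac53.
\end{equation*}
Since $Y(1)=2\intoi\ex(t)\dd t$ by \eqref{y1} below, this agrees with the
classical values $\E\intoi\ex(t)\dd t=\sqrt{\pi/8}$ and
$\E\bigpar{\intoi\ex(t)\dd t}^2=\frac5{12}$ for the Brownian excursion area.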
\begin{remark}\label{RFillK04}
For the case of random binary trees [the case $\xi\sim\Bi(2,\frac12)$]
and real $\ga$,
\refTs{TX} and \ref{TXmom} were shown already by \citet{FillK04}, by
the method used here in \refS{Smom}
to show \refT{TXmom}
(namely, singularity analysis of generating functions and the method of
moments).
Recently (and independently),
the case of uniformly random ordered trees
[$\xi\sim\Ge(\frac12)$, in connection with a study of Dyck paths] has been
shown (also by such methods)
by \citet{Caracciolo20}, and they have extended their result to general
$\xi$, at least when $\xi$ has a finite exponential moment
[personal communication].
\end{remark}
\begin{remark}\label{RDelmas2}
\refT{TX}\ref{TX>} has also been shown
by \citet{Delmas18} (for $\ga>1$, or for full binary trees)
and \citet{Delmas20} (in general).
(They consider only real $\ga$, but their results extend immediately to
complex $\ga$.)
The results in these papers are more general and allow more general
toll functions,
and they show how the result can be formulated in an
interesting way as convergence of random measures defined by the trees;
moreover, they consider also
more general \cGWt{s}, where $\Var(\xi)$ may be infinite
provided $\xi$ belongs to the domain of attraction of a stable distribution.
We do not consider such extensions here.
\end{remark}
\begin{remark}\label{Rcent}
Centered moments $\E\tY(\ga)^k$ can as always be found from the ordinary
moments given by the recursion above.
Alternatively, \cite[Proposition 3.9]{FillK04}
gives a (more complicated) recursion formula for the centered moments that
yields them directly. [The formula there is given for real $\ga$, but it
extends to complex $\ga$ with $\rga>0$ by the same proof or by analytic
continuation.
Note also the different normalizations: $Y$ there is our $\sqrt2 Y(\ga)$.]
Another formula for centered moments is given by
\cite[Proposition 7]{Caracciolo20}
[again with a different normalization: $x_p$ there is our $2\qqw Y(p)$].
\end{remark}
\begin{example}
\label{EVar}
Consider for simplicity real $\ga>0$.
It follows from
\eqref{kk1}--\eqref{kk2} that
\begin{align}\label{evar}
\E \tY(\ga)^2&
= \Var Y(\ga)
= \kk_2-\kk_1^2
\notag\\&
=\frac{\gG(2\ga)\gG(\ga-\frac12)}{\gG(2\ga+\frac12)\gG(\ga)}
+\frac{\gG(\ga-\frac12)^2}{4\sqrt\pi\gG(2\ga+\frac12)}
-\frac{\gG(\ga-\frac12)^2}{2\gG(\ga)^2}
,\qquad \ga\neq\tfrac12
.\end{align}
Moreover, the moments of $\tY(\ga)$ (which do not depend on $\xi$) are
continuous functions of $\ga$ by \refT{T1mom}, and thus we can obtain
the variance $\Var\tY(\frac12)$ by taking the limit of \eqref{evar} as
$\ga\to\frac12$.
A simple calculation using Taylor and Laurent expansions of $\gG(z)$
yields, \cf{} \cite[Remark 3.6(c)(iv)]{FillK04},
\begin{align}
\E\tY(\tfrac12)^2= \Var \tY(\tfrac12) =
\frac{4\log 2}{\pi}-\frac{\pi}4.
\end{align}
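(Numerically, $\frac{4\log2}{\pi}-\frac{\pi}4\approx0.0971$.)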
Higher moments of $\tY(\frac12)$ can be calculated in the same way.
The moments of $\tY(\frac12)$ were originally found in
\cite[Proposition~3.8 and Theorem~3.10(b)]{FillK04}, and given by a recursion
there.
[Note again that~$Y$ there is our $\sqrt2 Y(\frac12)$.]
See
\cite[Proposition 7 and Table 3]{Caracciolo20}
for another formula and explicit expressions up to order 5
(again with a different normalization).
\end{example}
Theorems \ref{T<0} and \ref{T1},
or \ref{TX<0} and \ref{TX},
show that the asymptotic distribution exhibits a phase transition at
$\rea=0$.
\begin{remark}\label{Riy}
We do not know how to bridge the gap between the two cases
$\rea<0$ and $\rea>0$.
Moreover, we do not know the asymptotic distribution, if any, when $\rea=0$
(excepting the trivial case $\ga=0$ when $X_n(0)=n$ is deterministic),
although we note that \refT{Tfringe}\ref{TfringeP} yields a weaker result
on convergence in probability.
However,
we conjecture that $(n\log n)\qqw \tX_n(\ii t)$ converges in distribution to
a symmetric complex normal distribution, for any $t\neq0$.
\end{remark}
\begin{problem}\label{Piy}
Does $X_n(\ii t)$ have an asymptotic distribution, after suitable normalization,
for (fixed and real) $t \neq 0$? If so, what is it?
\end{problem}
\begin{remark}\label{R0}
For real $\ga\downto0$, \eqref{kk1}--\eqref{kk2} show that
$\E Y(\ga)^2\to0$,
and thus $Y(\ga)\pto0$. [See also~\eqref{evar}.]
As remarked in \cite[Remark 3.6(e)]{FillK04}, one can use
\eqref{kk1}--\eqref{kk2} and the method of moments to show that
\begin{align}\label{r0}
\ga\qqw Y(\ga)\dto N(0,2-2\log 2),
\qquad \ga\downto0.
\end{align}
If we consider complex $\ga$ with $\rga>0$, and let $\ga\to 0$
from various different
directions, then $\ga\qqw Y(\ga)$ converges in
distribution to various different limits, each of which has a certain
complex normal distribution;
see \refApp{A0}.
If we
instead let
$\ga\to \ii t$ with
$t\neq0$ real, then \eqref{kk1}--\eqref{kk2} imply that the (complex)
moments $\E Y(\ga)^\ell$ converge. However, the absolute moment
$\E|Y(\ga)|^2\to\infty$ by a similar calculation; see \eqref{e|2|}.
It can be shown, again by the method of moments, that in this case,
$(\rga)\qq Y(\ga)$ converges in distribution to a symmetric complex normal
distribution; see
\refApp{Ait}.
As a consequence, the imaginary axis
is a.s\punkt{} a natural boundary for the random analytic functions $Y(\cdot)$
and $\tY(\cdot)$,
\ie, they have no analytic extension to any
larger domain; see again \refApp{Ait} for details.
\end{remark}
Theorems \ref{TE} and \ref{TX}
show another phase transition at $\rea=\frac12$;
this phase transition comes from the behavior of the mean $\E X_n(\ga)$,
while the fluctuations $\tX_n(\ga)$ vary analytically by \refT{T1}.
To be precise, there is a singularity at $\ga=\frac12$, as shown by
\eqref{aaa} together with \eqref{te-} or \eqref{tx<}.
For non-real $\ga$ on the line $\rea=\frac12$, the situation is more
complicated.
As said in Remarks \ref{Rmua}, \ref{RTE1/2}, and \ref{RT1/2X},
the results for $\rea<\frac12$ extend continuously to
$\Re\ga=\frac12$, $\ga\neq\frac12$.
Moreover, the next theorem (\refT{TEgd}) shows that
if we add a weak moment
assumption on $\xi$,
then
we can extend Theorems~\ref{TE} and~\ref{TX} \emph{analytically}
across the line $\rea=\frac12$,
and also refine the result at the exceptional case $\ga=\frac12$.
[The results now depend on~$\xi$ through more than just
$\gss$, see \eqref{tegd-c}.]
Hence, assuming a higher moment, there is a singularity at $\ga=\frac12$ but
no other singularities at the line $\rea=\frac12$.
However, in general (without higher moments), $\mu(\ga)$ \emph{cannot} be
extended analytically across the line $\rea = \frac12$, see \refT{Tbad};
hence, in general the entire line $\rea=\frac12$ is
a singularity---in other words, a phase transition.
\begin{theorem}
\label{TEgd}
Suppose that $\E \xi^{2+\gd}<\infty$ for some $\gd\in(0,1]$.
Then:
\begin{romenumerate}
\item \label{TEgdmu}
$\mu(\ga)$ can be analytically continued to a meromorphic function in
$\rea<\frac12+\gdd$, with a single pole at $\ga=\frac12$ with residue
$-1/\sqrt{2\pi\gss}$.
\item \label{TEgd-}
Using this extension of $\mu(\ga)$, \eqref{te-} holds, uniformly on compact
sets,
for $-\frac12<\rea<\frac12+\gdd$ with $\ga\neq\frac12$.
\item \label{TEgd=}
For some constant $c$ (depending on the offspring distribution),
\begin{equation}\label{tegd=}
\E X_n(\tfrac12)
=
\frac{1}{\sqrt{2\pi\gss}} n \log n + cn + o(n).
\end{equation}
\end{romenumerate}
\end{theorem}
\begin{remark}\label{Rhighmom}
If $\xi$ has higher moments, then $\mu(\ga)$ can be continued even further:\ see \refT{TH}.
In particular, if $\xi$ has finite moments of all orders, then $\mu(\ga)$
can be continued to a meromorphic function in the entire complex plane
$\bbC$, with poles at $j+\frac12$, $j=0,1,2,\dots$
\hspace{-.07in}
(or possibly a subset
thereof).
\end{remark}
\begin{theorem}
\label{TXgd}
Suppose that $\E \xi^{2+\gd}<\infty$ for some $\gd\in(0,1]$.
Then:
\begin{romenumerate}
\item \label{TXgd-}
The limit in distribution \eqref{tx<} holds for all $\ga\in
\Dgdx:=\set{\ga\neq\frac12:0<\rea<\frac12+\gdd}$;
moreover \eqref{tx<} holds in
$\cH(\Dgdx)$.
\item \label{TXgd=}
For some constant $c$ (depending on the offspring distribution),
\begin{equation}\label{txgd=}
n^{-1}\Bigsqpar{ X_n(\tfrac12)-\pigsqqw n\log n }
\dto \gs\qw \tY(\tfrac12)+c.
\end{equation}
\end{romenumerate}
\end{theorem}
The constants $c$ in \eqref{tegd=} and \eqref{txgd=} are equal.
The proof yields the formula \eqref{tegd-c}.
\begin{remark}
The phase transitions at $\rea=0$ and $\rea=\frac12$ can be explained as
follows.
Consider for simplicity real $\ga$, when all terms in \eqref{F} are
positive.
The expected number of subtrees $\tnv$ of order $k$ is
roughly $n\P(|\cT|=k)=\Theta(n k^{-3/2})$,
by \cite[Theorem 7.12]{SJ264} (see \refS{Sfringe})
and \eqref{pct}.
Hence, if
$\ga>\frac12$, $\E X_n(\ga)$ is dominated by the rather few large
$\tnv$ of size $\Theta(n)$; there are roughly $\Theta(n\qq)$ such trees,
which explains the order $n^\qgay$ of $\E X_n(\ga)$.
For $\ga<\frac12$, $\E X_n(\ga)$ is dominated by the small
subtrees $\tnv$, of size
$O(1)$, and this yields the linear behavior of $\E X_n(\ga)$ in \refT{TE}.
For $\ga<0$, the fluctuations, too, are dominated by the small subtrees (as
shown in the proof of \cite[Theorem~1.5]{SJ285});
there are $\approx n$ of these,
and they are only weakly dependent on each other,
and as a result $X_n(\ga)$ has an asymptotic normal distribution with the
usual scaling.
For $0<\ga<\frac12$, on the other hand,
the mean $\E X_n(\ga)$ is dominated by the small subtrees as just said,
but
fluctuations are dominated by the large subtrees
of order $\Theta(n)$.
(To see this, note that for $\ga>0$ and $\eps>0$, the
contribution to $X_n(\ga)$ from subtrees of order $\le\eps n$ has
variance $O\bigpar{\eps^{2\ga}n^{2\ga+1}}$
by \cite[Theorem 6.7]{SJ285}.)
Hence, we
have the same asymptotic behavior of $\tX_n(\ga)$ as for larger $\ga$.
The large subtrees are more
strongly dependent on each other, and lead to a non-normal limit; on the
other hand, asymptotically they do not depend on details in the offspring
distribution.
\end{remark}
At least when $\rea>\frac12$, the limit random variable $Y(\ga)$ can be
expressed as a function of a normalized Brownian excursion
$(\ex(t))$. [Recall
that $(\ex(t))$ is a random continuous function on $\oi$; see, \eg{}, \cite{RY}
for a definition.]
For a function $f$ defined on an interval, define
\begin{equation}
\label{m}
m(f;s,t):=\inf_{u\in[s,t]}f(u).
\end{equation}
The general representation formula for $\rea>\frac12$ is a little bit
complicated, and
we give three closely related versions \eqref{wa0}--\eqref{wa2}, where the
first two are related by mirror symmetry and the third, symmetric, formula
is the average of the two preceding. (See further the proof,
which also gives a fourth formula \eqref{richard}.
The representations \eqref{wa2} and \eqref{wb} were stated in
\cite[(4.2)--(4.3), see also Examples 4.6 and 4.7]{SJ197};
the present paper gives, after a long delay, the proof promised there.)
Note that the integrals in \eqref{wa0}--\eqref{wa2} converge (absolutely) a.s\punkt{}
when $\rea>\frac12$,
since $\ex(t)$ is a.s\punkt{}
H\"older($\gam$)-continuous for every $\gam<\frac12$,
and thus, \eg, $|\ex(t)-m(\ex;s,t)|\le C(t-s)^{\gam}$ for
some random constant $C$.
(This well-known fact follows \eg{} from the corresponding fact for Brownian
motion together with the construction of $\ex$ from the excursions of the
Brownian motion, see \cite[Theorem I.(2.2) and Chapter XII.2--3]{RY}.)
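[In detail: these bounds show that the double integrands in
\eqref{wa0}--\eqref{wa2} are $O\bigpar{(t-s)^{\rga-2+\gam}}$, and
$\iint_{0<s<t<1}(t-s)^{\rga-2+\gam}\dd s\dd t<\infty$ if and only if
$\rga-2+\gam>-1$, \ie, $\gam>1-\rga$; when $\rga>\frac12$ we may choose such
a $\gam<\frac12$. The single integrals converge since $\ex$ is bounded and
$\rga-1>-1$.]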
We also give a simpler expression \eqref{wb} valid for $\rea>1$. [The
integral in \eqref{wb} diverges for $\rea\le1$.]
\begin{theorem}
\label{Tbrown}
\begin{thmenumerate}
\item \label{Tbrown>1/2}
If\/ $\rea>\frac12$, then, jointly for all such $\ga$,
\begin{align}
Y(\ga)&\eqd
2\ga \int^{1}_{0}\!t^{\ga - 1}\ex(t)\dd t
\notag
\\&\hskip4em
{}- 2\ga (\ga - 1) \iint\limits_{0<s<t<1} (t - s)^{\ga - 2}
\bigsqpar{{\ex(t)}-{m(\ex;s,t)}}\dd s \dd t
\label{wa0}
\\
&=
2\ga \int^{1}_{0}\!(1-t)^{\ga - 1}\ex(t)\dd t
\notag
\\& \hskip4em
{} - 2\ga (\ga - 1) \iint\limits_{0<s<t<1} (t - s)^{\ga - 2}
\bigsqpar{{\ex(s)}-{m(\ex;s,t)}}\dd s \dd t
\label{wa1}
\\
&=
\ga \int^1_0 \left[ t^{\ga - 1} + (1 - t)^{\ga - 1} \right]\ex(t)\dd t
\notag
\\&\hskip1em
{} - \ga (\ga - 1) \iint\limits_{0 < s < t < 1}
(t - s)^{\ga - 2} \left[ \ex(s) + \ex(t) - 2 m(\ex;s,t) \right]\dd s\dd t.
\label{wa2}
\end{align}
\item \label{Tbrown>1}
If\/ $\rea>1$, we have also the simpler representation
\begin{equation}\label{wb}
Y(\ga)\eqd2\ga (\ga - 1) \iint\limits_{0 < s < t < 1}
(t - s)^{\ga - 2} m(\ex;s,t) \dd s\dd t.
\end{equation}
\end{thmenumerate}
\end{theorem}
\begin{example}\label{Ega=1}
For $\ga=1$, \eqref{wa0}--\eqref{wa2} reduce to
\begin{equation}\label{y1}
Y(1)=2\intoi\!\ex(t)\dd t,
\end{equation}
twice the \emph{Brownian excursion area}.
In fact, with $\hh{v}$ denoting the depth of a given
node~$v$, it is easy to see that
\begin{equation}\label{xn1}
X_n(1)=\sum_{v\in\tn}|\tnv|=\sum_{v\in\tn}(\hh{v}+1)
=n+\sum_{v\in\tn}\hh{v},
\end{equation}
\ie, $n$ plus the \emph{total pathlength}. The convergence of the
total pathlength, suitably rescaled, to the Brownian excursion area was
shown by \citet{AldousII,AldousIII}, see also \cite{SJ146}.
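[To verify the middle equality in \eqref{xn1}, note that
$\sum_{v\in\tn}|\tnv|$ counts the pairs $(v,w)$ of nodes with $w\in\tnv$,
and for each fixed $w$ the nodes $v$ with $w\in\tnv$ are precisely the
$\hh{w}+1$ ancestors of $w$, including $w$ itself.]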
The Brownian excursion area has been studied by many authors in various
contexts, for example
\cite{Lou:kac,Lou,Takacs:Bernoulli,Takacs:rooted,Takacs:binary,Spencer,FPV,FL:Airy,SJ133},
see also \cite{SJ201} and the further references there.
Furthermore, for $\ga=2$, \eqref{wb} reduces to
\begin{equation}\label{y2}
Y(2)\eqd 4 \iint\limits_{0 < s < t < 1}\!m(\ex;s,t) \dd s\dd t.
\end{equation}
This too was studied in \cite{SJ146}, where $Y(2)$ was denoted $\eta$.
Moreover, the random variable
$P(\ctn)$ there equals $X_n(1)-n$,
$Q(\ctn)$ equals $X_n(2)-n^2$,
and the Wiener index $W(\ctn)=nP(\ctn)-Q(\ctn)$ equals $nX_n(1)-X_n(2)$.
Hence, the limit theorem \cite[Theorem 3.1]{SJ146} follows from
\refTs{TX} and \ref{Tbrown}.
Moreover, as noted by \cite{FillK04},
\refT{TXmom} yields for $\ga=1$
a recursion formula for the moments of the Brownian excursion area,
which is equivalent to the formulas given by
\cite{Takacs:Bernoulli,Takacs:rooted,Takacs:binary,FPV,FL:Airy},
see also \cite[Section 2]{SJ201}.
Similarly, also noted by \cite{FillK04},
\refT{TXmom} yields for $\ga=2$ the recursion formula for moments of $Y(2)$
given in \cite{SJ146}. More generally, the recursion in \cite{SJ146}
for mixed moments of $Y(1)$ and $Y(2)$ follows from \refT{Tmix} below.
\end{example}
\begin{remark}\label{Rdelmas}
For $\ga>\frac12$,
a different (but equivalent)
representation of the limit $Y(\ga)$ as a function of a
Brownian excursion~$\ex$ is given by
\citet[(1.10) and (2.6)]{Delmas18}.
That representation can also be written as a functional of the Brownian
continuum random tree; see
\citet[Theorem 1.1]{Delmas20}.
\end{remark}
\begin{remark}\label{Rrepr}
As demonstrated in \refS{Spf2}, it follows from the proof of \refT{T1} given in that section
that there exists a representation of
$Y(\ga)$ as a (measurable) functional of~$\ex$ also for $0<\rga\le\frac12$.
However, this is only an existence statement, and we do not know
any explicit representation.
More precisely, there exists a measurable function
$\Psi: H_+\times\coi \to\bbC$ such that
\begin{align}
\label{rrepr}
Y(\ga) = \Psi(\ga,\ex),
\qquad \rga >0,
\end{align}
where $\ex$ as above is a Brownian excursion.
Moreover, $\Psi(\ga,f)$ is an analytic function of
$\ga\in H_+$ for every $f\in\coi$.
For $\rga>\frac12$, $\Psi(\ga,\ex)$ is a.s\punkt{} given by the formulas
\eqref{wa0}--\eqref{wa2}, and for $\rga>1$ also by \eqref{wb}.
Hence, in principle, $\Psi(\ga,\ex)$ is given by an analytic extension of
\eqref{wb} to all $\ga\in H_+$, and such an extension (necessarily unique)
exists a.s.
(Note that for $\rga<1$, the double integrals in \eqref{wa0}--\eqref{wa2} do
not converge for every function $\ex\in\coi$, so we can
only claim existence of the extension a.s.)
We concede that the existence of an analytic extension $\Psi(\ga,\ex)$ gives a
``representation'' of $Y(\ga)$ only in a rather abstract sense.
\end{remark}
\begin{problem}
Find an explicit representation for $Y(\ga)$ as a function of~$\ex$
for $0<\rga<\frac12$, or even for $0<\ga<\frac12$.
\end{problem}
Finally,
we consider real $\ga$ and let $\ga\to\infty$.
We show the following asymptotic result, yielding a limit of the limit in
\refT{TX}; this improves a result in \cite{FillK04}, which shows
the existence of such a limit and establishes \eqref{EZk}.
Let $B(t)$, $t\ge0$, be a standard Brownian motion, and let
\begin{equation}\label{S}
S(t):=\sup_{s\in[0,t]} B(s)
\end{equation}
be the corresponding supremum process.
\begin{theorem}
\label{Tlol}
As $\ga\to+\infty$
along the real axis,
we have
$\ga\qq Y(\ga)\dto \Yoo$, where $\Yoo$ is a random variable with
the representation
\begin{equation}\label{Z}
\Yoo = \intoo e^{-t} S(t)\dd t,
\end{equation}
and moments
\begin{equation}\label{EZk}
\E \Yoo^k = 2^{-k/2}\sqrt{k!},
\qquad k\ge0,
\end{equation}
and more generally, for real or complex $r$,
\begin{equation}\label{EZr}
\E \Yoo^r = 2^{-r/2}\sqrt{\gG(r+1)},
\qquad \Re r>-1.
\end{equation}
\end{theorem}
Further representations of $\Yoo$ are given in \eqref{yoo} and \eqref{ytau}.
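As a quick check of \eqref{Z} against \eqref{EZk} with $k=1$: for each
fixed $t$ we have $S(t)\eqd|B(t)|$ by the reflection principle, so
$\E S(t)=\sqrt{2t/\pi}$ and
$\E\Yoo=\intoo e^{-t}\sqrt{2t/\pi}\dd t=\sqrt{2/\pi}\,\gG(\frac32)=2\qqw$,
in agreement with $2^{-1/2}\sqrt{1!}$.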
\begin{remark}\label{R'}
Since convergence in the space $\cH(D)$ (for a domain $D\subseteq\bbC$)
of a sequence of analytic functions implies convergence of their
derivatives, the results above imply corresponding results for $X_n'(\ga)$
and $Y_n'(\ga)$ (and also for higher derivatives).
Note that $X_n'(\ga)$ is the additive functional given by the toll function
$\frac{\ddx}{\ddx\ga}f_\ga(T)=|T|^\ga\log|T|$. In particular, we have
\begin{align}\label{r'}
X_n'(0) = \sum_{v\in \cT_n}\log|\cT_{n,v}|
=\log\prod_{v\in \cT_n}|\cT_{n,v}|,
\end{align}
which is known as the \emph{shape functional}, see \eg{}
\cite{Fill96,MM98}.
Unfortunately, because of the phase transition at $\rea=0$,
most of our results do not include $0$ in their domains.
The exception is \refT{TE}\ref{TE-}, which implies
\begin{align}\label{x'0}
\E X_n'(0) = \mu'(0) n + o\bigpar{n\qq\log n},
\end{align}
where the error term is obtained from \eqref{te-} and Cauchy's estimates
using the circle $|z|=1/\log n$.
More precise estimates of $\E X_n'(0)$ have been proved by
\cite{Fill96,FillK04,FillFK}
[random binary trees, the case $\xi\sim\Bi(2,\frac12)$],
and
\cite{MM98} (general $\xi$ with an exponential moment);
furthermore, these papers also give results for the variance
(which is of order $n\log n$).
Moreover, asymptotic normality of $X_n'(0)$ has been shown in special cases
by
\citet{Pittel} [random labelled trees, the case $\xi\sim\Po(1)$],
\citet{FillK04} [random binary trees, the case $\xi\sim\Bi(2,\half)$],
and
\citet{Caracciolo20} [random ordered trees, the case $\xi\sim\Ge(\half)$].
We have been able to extend this to general $\xi$, assuming
$\E \xi^{2+\gd}<\infty$ for some $\gd>0$, by suitable modifications
of the arguments in \refS{Smom} (we might provide details in future work).
It seems to be an open problem to show asymptotic normality
of $X_n'(0)$
for arbitrary $\xi$ with $0<\Var\xi<\infty$ (and $\E\xi=1$, as always).
Note that although the asymptotic normality of $X_n'(0)$ does not follow
from the results in the present paper,
it fits well together with \refT{T<0} which shows that
$X_n(\ga)$ is asymptotically normal for every $\ga<0$.
\end{remark}
The contents of the paper are as follows.
\refS{Sprel} contains some preliminaries.
\refS{Sfringe} gives the simple proof of \refT{Tfringe}.
\refS{Stight} shows two lemmas on tightness,
and \refS{S<0} then gives a short proof of \refT{T<0}.
\refS{SE} is a detailed study of the expectation $\E X_n(\ga)$.
\refS{Sbrown} treats
convergence to Brownian excursion and functions thereof.
\refS{Spf2} gives some remaining proofs.
\refS{Slim} discusses the limit as real $\ga\to+\infty$.
\refSs{Smua} and \ref{S:bad} give proofs and a counterexample, respectively,
for the case $\rea=\frac12$.
\refS{Smom} studies moments and gives proofs of \refTs{T1mom} and \ref{TXmom}.
This section uses a method different from that of the previous sections; the two
methods complement each other and combine in the proof of \refT{T1mom}.
Finally,
\refApp{Amuexamples}
discusses calculation of $\mu(\ga)$ and gives some examples of it;
\refApp{Apoly} gives a proof of a technical lemma in \refS{Smom},
together with some background on polylogarithms used in the proof;
\refApps{A0} and \ref{Ait}
give proofs of the additional results claimed in \refR{R0}.
\begin{acks}
We are grateful to Nevin Kapur for his contributions to \refS{Smom}; Kapur
also coauthored the related unpublished manuscript~\cite{FillK03}.
The present paper was originally conceived as a joint work including him.
We are also grateful to Lennart Bondeson for helpful comments on the topic of~\refR{Rid}.
\end{acks}
\section{Preliminaries and notation}\label{Sprel}
\subsection{Conditioned Galton--Watson trees}\label{SSGW}
Given a non-negative integer-valued random variable $\xi$,
with distribution $\cL(\xi)$,
the \emph{\GWt}
$\cT$
with offspring distribution $\cL(\xi)$ is constructed recursively by
starting with a root and
giving each node a number of children that is a new copy of $\xi$,
independent of the numbers of children of the other nodes.
Obviously,
only the distribution $\cL(\xi)$
of $\xi$ matters; we abuse language and say also
that $\cT$ has offspring distribution $\xi$.
Furthermore, let $\tn$ be $\cT$ conditioned on having
exactly $n$ nodes; this is called a \emph{\cGWt}.
(We consider only $n$ such that $\P(|\cT|=n)>0$.)
We assume that $\P(\xi=0)>0$, since otherwise the tree $\cT$ is a.s\punkt{} infinite.
In fact, we consider here only the \emph{critical} case $\E\xi=1$;
in this case
$\cT$ is a.s\punkt{} finite (provided $\P(\xi\neq1)>0$).
It is well known that in most cases, but not all,
a \cGWt{} with an offspring distribution $\xi'$ with an expectation
$\E\xi'\neq1$ is equivalent to a \cGWt{} with another offspring distribution
$\xi$ satisfying $\E\xi=1$, so this is only a minor restriction.
See \eg{}
\cite[Section~4]{SJ264} for details.
We also assume $0<\Var\xi<\infty$ (but usually no higher moment assumptions).
\begin{remark}
More generally,
a \emph{simply generated random tree} $T_n$ defined by a given
sequence of non-negative weights
$(\phi_k)_0^\infty$ is a random ordered tree with $n$ nodes such
that for every ordered tree $T$ with $|T|=n$, the probability $\P(T_n=T)$ is
proportional to $\prod_{v\in T} \phi_{\out(v)}$,
where $\out(v)$ denotes the outdegree of $v$, see \eg{} \cite{MM} or
\cite[Section 1.2.7]{Drmota}. Every \cGWt{} is a \sgrt, and the converse
holds under a weak condition. In particular, if the generating function
$\Phi(z):=\sum_{k=0}^\infty\phi_k z^k$ has a positive radius of convergence $R$ and
there exists $\tau$ with $0<\tau<R$ and $\tau\Phi'(\tau)=\Phi(\tau)$
(which is a common assumption in studies of \sgrt{s}), then the \sgrt{}
$T_n$ equals a \cGWt{} defined by a suitable $\xi$ with $\E\xi=1$;
furthermore, this $\xi$ has finite moment generating function
$\E e^{t\xi} < \infty$ at some $t > 0$,
and thus
finite moments of all orders. Again, see \eg{}
\cite[Section~4]{SJ264} for details.
\end{remark}
Let $\xi_1,\xi_2,\dots$ be independent copies of $\xi$ and define
\begin{equation}
\label{Sn}
S_n:=\sum_{i=1}^n \xi_i.
\end{equation}
It is well known (see \citet{Otter}, or \cite[Theorem 15.5]{SJ264} and the
further references given there)
that for any $n\ge1$,
\begin{equation}\label{ptk}
{\P(|\cT|=n)}=\frac{1}n\P(S_n=n-1).
\end{equation}
In particular, \eqref{mu} can be written
\begin{equation}
\label{mua}
\mu(\ga)=\sum_{n=1}^\infty n^{\ga-1}\P(S_n=n-1),
\qquad \rea<\tfrac12.
\end{equation}
For some examples where exact (and in one case rational)
values of $\mu(\ga)$ can be computed when~$\ga$ is a negative integer, see
\refApp{Amuexamples}.
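Note also the consistency check
$\mu(0)=\sum_{n=1}^\infty n\qw\P(S_n=n-1)=\sum_{n=1}^\infty\P(|\cT|=n)=1$,
using \eqref{ptk} and the fact that $\cT$ is a.s\punkt{} finite.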
Recall that the
\emph{span} of an integer-valued random variable $\xi$,
denoted $\spann\xi$,
is the largest integer $h$ such that $\xi\in a+h\bbZ$
a.s\punkt{} for some $a\in\bbZ$; we consider only $\xi$ with $\P(\xi=0)>0$
and then the span is the largest integer $h$ such that $\xi/h\in\bbZ$ a.s.,
\ie, the greatest common divisor of \set{n:\P(\xi=n)>0}.
(Typically, $h=1$, but we have for example $h=2$ in the case of full binary
trees, when $\xi\in\set{0,2}$.)
The local limit theorem for discrete random variables
can in our setting be stated as follows;
see, \eg{},
\cite[Theorem 1.4.2]{Kolchin} or
\cite[Theorem VII.1]{Petrov}.
\begin{lemma}[Local limit theorem]\label{LLT}
Suppose that $\xi$ is an integer-valued random variable with
$\P(\xi=0)>0$,
$\E\xi=1$,
$0<\gss:=\Var\xi<\infty$, and span $h$.
Then, as \ntoo, uniformly in all $m\in h\bbZ$,
\begin{equation}\label{llt}
\P(S_n=m)=\frac{h}{\sqrt{2\pi\gss n}} \Bigsqpar{e^{-(m-n)^2/(2n\gss)}+o(1)}.
\end{equation}
\nopf
\end{lemma}
In particular, for any fixed $\ell\in\bbZ$, as \ntoo{} with
$n\equiv \ell \pmod h$,
\begin{equation}\label{snn}
\P(S_n=n-\ell)\sim\frac{h}{\sqrt{2\pi\gss }}\,n\qqw.
\end{equation}
Combining \eqref{ptk} and \eqref{snn} we see that
\begin{equation}\label{pct}
\P(|\cT|=n)\sim\frac{h}{\sqrt{2\pi\gss }}\,n^{-3/2}
\end{equation}
as \ntoo{} with $n\equiv1\pmod h$.
[The probability is 0 when $n\not\equiv1\pmod h$.]
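For example, if $\xi\sim\Po(1)$ (so that $\tn$ is a uniformly random
labelled rooted tree), then $S_n\sim\Po(n)$ and $h=1$, and \eqref{ptk} gives
$\P(|\cT|=n)=n\qw e^{-n}n^{n-1}/(n-1)!=n^{n-1}e^{-n}/n!$,
the Borel distribution; Stirling's formula then recovers \eqref{pct} with
$\gss=1$.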
We will for simplicity assume
in some proofs below
that the span of $\xi$ equals 1;
then \eqref{pct} is valid as $\ntoo$ without restriction.
However, this is just for convenience, and the results hold also for $h>1$,
using standard modifications of the arguments. (We leave these to the
reader, but sometimes give a hint.)
\subsection{Random analytic functions}\label{SSanalytic}
For a domain (non-empty open connected set) $D\subseteq\bbC$, let $\cH(D)$
denote the
space of all analytic functions on $D$, equipped with the usual topology of
uniform convergence on compact sets; this is a topological vector space
with the topology given by the seminorms $p_K(f):=\sup_{z\in K}|f(z)|$,
with $K$ ranging over all compact subsets of $D$.
The space $\cH(D)$ is a Fr\'echet space, \ie{}, a locally convex space with a
topology that can be defined by a complete translation-invariant metric,
and it has (by Montel's theorem on normal families) the property that
every closed bounded subset is compact,
see \eg{} \cite[\S1.45]{Rudin-FA}
or \cite[Example 10.II and Theorem 14.6]{Treves}.
Furthermore, $\cH(D)$ is separable.
$\cH(D)$ is thus a Polish space (\ie, a complete separable metric space).
We equip $\cH(D)$ with its Borel $\gs$-field, and note that this is
generated by the point evaluations $f\mapsto f(z)$, $z\in D$.
[This can be seen by choosing
an increasing sequence $(K_i)$ of compact sets with $D=\bigcup_i K_i$,
and a countable dense subset $(f_j)$ of $\cH(D)$, and noting that
then the sets
$U_{i,j,n}:=\set{f:p_{K_i}(f-f_j)<1/n}$ form a countable basis of the
topology of $\cH(D)$; furthermore, each $U_{i,j,n}$ belongs to the
$\gs$-field generated by the point evaluations. We omit the standard details.]
It follows from this and the monotone class theorem that the distribution of
a random function $f$ in $\cH(D)$ is determined by its finite-dimensional
distributions (\ie, the distributions of finite sets of point evaluations).
We can use the general theory in
\eg{} Billingsley \cite{Billingsley} or Kallenberg \cite{Kallenberg}
for convergence in distribution of random
functions in $\cH(D)$.
In particular, recall that a sequence $(X_n)$ of random variables in a
metric space $\cS$ is \emph{tight} if for every $\eps>0$, there exists
a compact subset $K\subseteq\cS$ such that $\P(X_n\in K)>1-\eps$ for
every $n$.
Prohorov's theorem \cite[Theorems 6.1--6.2]{Billingsley},
\cite[Theorem 16.3]{Kallenberg} says that
in a Polish space, a sequence $X_n$ is tight if and only if
the corresponding sequence of distributions $\cL(X_n)$ is relatively
compact, \ie, each subsequence has a subsubsequence that converges in
distribution.
It is easy to characterize tightness in $\cH(D)$ in terms of tightness of
real-valued random variables.
\begin{lemma} \label{Lhd}
Let $D$ be a domain in $\bbC$, and let $(X_n(z))$ be a sequence
of random analytic functions on $D$. Then the following are equivalent.
\begin{romenumerate}
\item \label{Lhdt}
The sequence $(X_n(z))$ is tight in $\cH(D)$.
\item \label{LhdK}
The sequence
$(\sup_{z\in K}|X_n(z)|)$ is tight for every compact $K\subset D$.
\item \label{LhdB}
The sequence
$(\sup_{z\in B}|X_n(z)|)$ is tight for every closed disc $B\subset D$.
\end{romenumerate}
\end{lemma}
\begin{proof}
This proof is an easy exercise that we include for completeness.
\ref{Lhdt}$\implies$\ref{LhdK}$\implies$\ref{LhdB} is trivial.
\ref{LhdB}$\implies$\ref{Lhdt}. Assume
that \ref{LhdB} holds and choose a
sequence of closed discs $B_j\subset D$, $j\ge1$,
such that the interiors $B_j^\circ$
cover $D$.
Let $\eps>0$. Then, by \ref{LhdB}, for each $j$ there exists
$M_j<\infty$ such that $\P(\sup_{z\in B_j}|X_n(z)|>M_j)<2^{-j}\eps$.
Let $L:=\set{f\in\cH(D):\sup_{z\in B_j}|f(z)|\le M_j\textrm{\ for all~$j$}}$.
Each compact subset $K$ of $D$ is covered by a finite collection of open
discs $B_j^\circ$, and it follows that there exists $M_K<\infty$ such that
if $f\in L$, then $p_K(f):=\sup_{z\in K}|f(z)|\le M_K$.
In other words, $\sup_{f\in L}p_K(f)<\infty$ for each compact $K\subset D$,
which
says that $L$ is bounded in $\cH(D)$, because the topology is defined
by the seminorms $p_K$
\cite[Proposition 14.5]{Treves}.
Moreover, $L$ is a closed set in $\cH(D)$, and thus
$L$ is compact in $\cH(D)$ by the Montel property mentioned above.
Furthermore, $\P(X_n\notin L)< \sum_{j=1}^\infty 2^{-j}\eps=\eps$.
\end{proof}
This leads to the following simple sufficient condition.
\begin{lemma}
\label{LL1}
Let $D$ be a domain in $\bbC$ and let $(X_n(z))$ be a sequence of random
analytic functions in $\cH(D)$.
Suppose that there exists a function $\gam:D\to(0,\infty)$, bounded on
each compact subset of $D$, such that
$\E|X_n(z)|\le \gam(z)$
for every $z\in D$. Then the sequence $(X_n)$ is tight in $\cH(D)$.
\end{lemma}
\begin{proof}
Let $B\subset D$ be a closed disc. There exists a circle $\gG\subset D$
such that $B$ lies in the interior of $\gG$.
If $f\in\cH(D)$, then the value $f(z)$ at a
point inside $\gG$ can be
expressed by a
Poisson integral
$\int_\gG P(z,w) f(w)|\ddx w|$
over the circle $\gG$, where $P$ is the
Poisson kernel.
(This is because analytic functions are harmonic.
See \eg{} \cite[11.4, 11.12, and 11.13]{Rudin-RealAndComplex}.)
Furthermore, the Poisson kernel is continuous, and
thus bounded by some constant $\CC$ for all $z\in B$ and $w\in \gG$.
Consequently, for every $f\in\cH(D)$ we have
\begin{equation}
\sup_{z\in B} |f(z)|\le \CCx \int_\gG |f(w)|\,|\ddx w|.
\end{equation}
Applying this to $X_n(z)$ and taking the expectation, we obtain
\begin{equation}
\begin{split}
\E \sup_{z\in B} |X_n(z)|
&
\le \CCx \E \int_\gG |X_n(w)|\,|\ddx w|
= \CCx \int_\gG \E|X_n(w)|\,|\ddx w|
\\&
\le \CCx \int_\gG \gam(w)\,|\ddx w|<\infty.
\end{split}
\end{equation}
Hence the sequence $(X_n)$ satisfies \refL{Lhd}\ref{LhdB} (by Markov's
inequality), and the
conclusion follows by \refL{Lhd}.
\end{proof}
We shall also use the following, which again uses properties of analytic
functions.
\begin{lemma}\label{Lsub}
Let $D$ be a domain in $\bbC$ and let $E$ be a subset of $D$ that has a
limit point in $D$. (I.e., there exists a sequence $z_n\in E$ of distinct
points and $z_\infty\in D$ such that $z_n\to z_\infty$.)
Suppose that $(X_n)$ is a tight sequence of random elements of $\cH(D)$ and
that there exists
a family of random variables $\set{Y_z:z\in E}$ such that for each $z\in E$,
$X_n(z)\dto Y_z$ and, moreover, this holds jointly for any finite set of
$z\in E$.
Then $X_n\dto Y$ in $\cH(D)$, for some random function
$Y(z)\in\cH(D)$.
Furthermore,
$Y(z)\eqd Y_z$, jointly for any finite set of $z\in E$.
That is, $Y$ restricted to $E$ and $(Y_z)$ have the same finite-dimensional
distributions, and thus have the same distribution as random elements of
$\bbC^E$.
\end{lemma}
\begin{proof}
It suffices to consider the case when $E=\set{z_1,z_2,\dots}$ with $z_n\to
z_\infty\in D$.
The result then is a special case of
\citet[Lemma 7.1]{SJ185}; in the notation there we take $\cS_1=\cH(D)$,
$\cS_2=\bbC^E$ and let $\phi$ be the obvious restriction map
$f(z)\mapsto (f(z_i))_{i=1}^{\infty}$; note that $\phi$ is injective by the
standard uniqueness for analytic functions. The assumption of joint
convergence $X_n(z)\dto Y_z$ for any finite subset of $E$ is equivalent to
the convergence $\phi(X_n)\dto (Y_{z_i})$ in $\bbC^E$, since this space has
the product topology
\cite[p.~19]{Billingsley}.
The conclusion follows from
\cite[Lemma 7.1]{SJ185}.
\end{proof}
\begin{remark}
\refL{Lsub} may fail
if we do not assume joint convergence; \ie, if only
$X_n(z)\dto Y_z$ for each $z\in E$ separately. For a counterexample,
let $D=\bbC$ and $E=\set{z:|z|=1}$; further, let
$U$ be uniformly distributed on the unit circle \set{z:|z|=1},
let $X_{2n}(z):=U$ (a constant function) and $X_{2n+1}(z):=Uz$.
Then $X_n(z)\dto U$ for each fixed $z\in E$,
and
$(X_n)$ is tight in $\cH(D)$ by \refL{LL1} with $\gamma(z) := \max\{1,|z|\}$,
but $X_n$ does not converge in
$\cH(\bbC)$; for example, $X_n(0)$ does not converge in distribution.
We do not know whether it would be sufficient to assume $X_n(z)\dto Y_z$ for
each $z\in E$ separately in the case when $E$ contains a non-empty open set.
\end{remark}
\subsection{Dominated convergence}\label{SSdom}
To show uniformity in $\ga$ of various estimates, we use the following
simple, but perhaps not so well known, version of Lebesgue's dominated
convergence theorem.
\begin{lemma}
\label{Ldom}
Let $\cA$ be an arbitrary index set.
Suppose that, for $\ga\in\cA$ and $n\ge1$, $f_{\ga,n}(x)$ are measurable
functions on a measure space $(\cS,\cF,\mu)$, and that
for a.e\punkt{} fixed $x\in\cS$, we have
$f_{\ga,n}(x)\to g_\ga(x)$ as \ntoo, uniformly in $\ga\in\cA$.
Suppose furthermore that
$h(x)$ is an integrable function on $\cS$, such that $|f_{\ga,n}(x)|\le
h(x)$ a.e\punkt{} for each $\ga$ and $n$. Then
$\int_{\cS} f_{\ga,n}(x)\dd\mu(x) \to \int_{\cS} g_{\ga}(x)\dd \mu(x)$ as
\ntoo, uniformly in $\ga\in\cA$.
\end{lemma}
\begin{proof}
Note first that the assumptions imply $|g_{\ga}(x)|\le h(x)$ a.e\punkt{} for
each $\ga$;
hence,
$\bigabs{f_{\ga,n}(x)-g_{\ga}(x)}\le 2h(x)$ a.e.
Let $\ga_n$ be an arbitrary sequence of elements of $\cA$. Then
$\int_{\cS} \bigpar{f_{\ga_n,n}(x)-g_{\ga_n}(x)}\dd\mu(x) \to 0$ as \ntoo{} by
the standard dominated convergence theorem. The result follows.
\end{proof}
\begin{remark}\label{Rdom}
Suppose that the assumptions of \refL{Ldom} hold, and furthermore
that $\cA$ is an open
set in the complex plane and that $g_\ga(x)$ is
an analytic function of $\ga$
for every $x\in\cS$, and jointly measurable in $\ga$ and $x$.
Then the limit $G(\ga):=\int_{\cS} g_\ga(x)\dd\mu(x)$
is an analytic function of $\ga\in\cA$.
To see this, note again
that the assumptions imply $|g_\ga(x)|\le h(x)$ a.e\punkt{} for
each $\ga$. It follows by dominated convergence that $G(\ga)$ is a
continuous function of $\ga$, and by Fubini's theorem that the line integral
of $G(\ga)$ around the boundary of any closed triangle inside $\cA$ is 0;
hence $G(\ga)$ is analytic by Morera's theorem.
\end{remark}
\subsection{Further notation}
We denote the distance between two nodes $v$ and $w$ in a tree by $d(v,w)$.
Furthermore, we let $\hh{v}:=d(v,o)$ denote the distance from $v$ to the
root $o$; this is usually called the \emph{depth} of $v$.
For two nodes $v,w$ of a rooted tree $T$, $v\prec w$ means that $w$ is a
descendant of $v$. Thus, $w\in T_v\iff w\succeq v$.
Furthermore, $v\land w$ denotes the last common ancestor of $v$ and $w$.
Thus,
\begin{equation}
\label{min}
u\preceq v\land w \iff (u\preceq v) \land (u\preceq w).
\end{equation}
For real numbers $x$ and $y$, $x\land y$ is another notation for
$\min\xpar{x,y}$.
Furthermore, $x_+:=\max(x,0)$ and $x_-:=-\min(x,0)$.
Unspecified limits are as \ntoo.
$C, C_1,\dots$ and $c,c_1,\dots$
denote positive constants (typically with large and small values, respectively), not necessarily the same at
different places. The constants may depend on the offspring distribution
$\xi$; they may also depend on other parameters that are indicated
as arguments.
\section{The case $\rea\le0$, convergence in probability}
\label{Sfringe}
\begin{proof}[Proof of \refT{Tfringe}]
By \eqref{Xn},
recalling that $V$ is a random node in $\tn$,
\begin{equation}\label{teffa}
\E\bigpar{f_\ga(\tnV)\mid \tn}
=\frac{1}n \sum_{v\in\tn}|\tnv|^\ga
= \frac{1}{n} X_n(\ga),
\end{equation}
and consequently
\begin{equation}\label{teffb}
\E f_\ga(\tnV)
= \frac{1}{n} \E X_n(\ga).
\end{equation}
The random trees defined in \refS{S:intro} may be regarded as random
elements of the countable discrete set $\xT$ of finite ordered rooted trees.
As noted just before the statement of~\refT{Tfringe} in \refS{S:intro},
\citet{Aldous-fringe} shows that $\tnV\dto\cT$, as
random elements of $\xT$.
If $\rea\le0$, then $f_\ga$ is a bounded function on $\xT$, trivially
continuous since $\xT$ is discrete. Hence, it follows from \eqref{teffb} that
\begin{equation}
\frac{1}n \E X_n(\ga)=\E f_\ga(\tnV) \to \E f_\ga(\cT)=\mu(\ga),
\end{equation}
showing \eqref{tfringe}.
Similarly, by \cite[Theorem 7.12]{SJ264},
the conditional distribution of $\tnV$ given $\cT_n$ converges
(as a random element of the space of probability distributions on $\xT$)
in probability to the distribution of $\cT$, which by \eqref{teffa} yields
part \ref{TfringeP} of \refT{Tfringe}.
\end{proof}
\section{Tightness}\label{Stight}
Recall the notation at~\eqref{Xn}--\eqref{tX} and \eqref{Yn}--\eqref{tY}.
\begin{lemma}
\label{LL2}
\begin{thmenumerate}
\item \label{LL2<0}
For $\rea<0$ and all $n\ge1$,
$\E|\tX_n(\ga)|^2\le C(\ga) n$,
for some constant $C(\ga)=O\bigpar{1+|\rea|\qww}$; thus $C(\ga)$ is
bounded on each proper half-space \set{\ga:\Re\ga<-\eps<0}.
\item \label{LL2>0}
For $\rea>0$ and all $n\ge1$,
$\E|\tX_n(\ga)|^2\le C(\ga) n^{2\rea+1}$
and thus
$\E|\tY_n(\ga)|^2\le C(\ga)$,
for some constant $C(\ga)=O\bigpar{1+(\rea)\qww}$; thus $C(\ga)$ is
bounded on each proper half-space \set{\ga:\Re\ga>\eps>0}.
\end{thmenumerate}
\end{lemma}
\begin{proof} \CCreset
Recall the notation $f_{\alpha}(T) := |T|^{\alpha}$.
We apply \cite[Theorem 6.7]{SJ285} to
(the real and imaginary parts of)
the functional $f(T):=f_\ga(T)\cdot\ett{|T|\le n}$.
Since
$|f(\cT_k)|=|f_\ga(\cT_k)|=|k^\ga|=k^{\rea}$ for $k\le n$,
and $f(\cT_k)=0$ for $k>n$,
this yields
\begin{equation*}
\begin{split}
\bigpar{\E|\tX_n(\ga)|^2}^{1/2}
&\le \CC n\qq \Bigpar{\sup_{k\le n}k^{\rea} + \sum_{k=1}^n k^{\rea-1}}
\\&
\le
\begin{cases}
\CC(\ga) n^{\xfrac12}, & \rea<0, \CCdef{\CCneg}
\\
\CC(\ga) n^{\rea+\frac12}, &\rea>0,
\end{cases}
\end{split}
\end{equation*}
with
$\CCneg(\ga)=O\bigpar{1+|\rea|\qw}$ and
$\CCx(\ga)=O\bigpar{1+|\rea|\qw}$.
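Here the case split rests on elementary bounds, which we record for completeness:
\begin{equation*}
\sum_{k=1}^n k^{\rea-1}
\le 1+\int_1^n x^{\rea-1}\dd x
\le
\begin{cases}
1+|\rea|\qw, & \rea<0,
\\
1+\rea\qw n^{\rea}, & \rea>0,
\end{cases}
\end{equation*}
while $\sup_{k\le n}k^{\rea}\le1$ for $\rea<0$ and $\sup_{k\le n}k^{\rea}=n^{\rea}$
for $\rea>0$; squaring the resulting bounds yields the constants
$C(\ga)=O\bigpar{1+|\rea|\qww}$ in the statement.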
\end{proof}
\begin{lemma}
\label{LcH}
\begin{thmenumerate}
\item \label{LcH<0}
The family of random functions $n\qqw\tX_n(\ga)$ is tight in the space
$\cH(H_-)$.
\item \label{LcH>0}
The family of random functions $\tY_n(\ga):=n^{-\ga-\frac12} \tX_n(\ga)$ is tight in the space $\cH(H_+)$.
\end{thmenumerate}
\end{lemma}
\begin{proof}
This is an immediate consequence of Lemmas \ref{LL1} and \ref{LL2}
(and the \CSineq).
\end{proof}
\section{The case $\rea<0$}\label{S<0}
\begin{proof}[Proof of \refT{T<0}]
For a fixed real $\ga<0$,
\cite[Theorem 1.5]{SJ285} yields \eqref{t0} with
$\hX(\ga)\sim N\bigpar{0,\gam^2(\ga)}$ for some $\gam^2(\ga)\ge0$.
Furthermore, as remarked in \cite{SJ285},
\cite[Theorem 1.5]{SJ285} extends, by the Cram\'er--Wold device,
to joint convergence for several functionals.
By considering $\Re f_\ga$ and $\Im f_\ga$, we thus obtain \eqref{t0} for
complex $\ga\in H_-$; furthermore, we obtain joint convergence for any
finite set of (real or complex) such $\ga$.
The convergence in $\cH(H_-)$ now follows from Lemmas \ref{Lsub} and
\ref{LcH}\ref{LcH<0}.
The symmetry \eqref{t0symm} is now obvious, since the corresponding formula for
$X_n(\ga)$ follows trivially from the definition \eqref{Xn}.
Finally, \eqref{t0cov} follows from \cite[(1.16)]{SJ285} and polarization
(\ie, considering linear combinations).
\end{proof}
\begin{remark}\label{R0cov}
Furthermore, \cite[(1.17)]{SJ285} and polarization yields a formula
for the covariance function, for $\rea,\Re\gb<0$:
{\multlinegap=0pt
\begin{multline}\label{r0cov}
\E\bigpar{\hX(\ga)\hX(\gb)}=
\E\bigpar{f_\ga(\cT)\bigpar{F_\gb(\cT)-|\cT|\mu(\gb)}}
+
\E\bigpar{f_\gb(\cT)\bigpar{F_\ga(\cT)-|\cT|\mu(\ga)}}
\\
-\mu(\ga+\gb)+\bigpar{1-\gs\qww}\mu(\ga)\mu(\gb).
\end{multline}}
\end{remark}
\section{The mean}\label{SE}
\begin{lemma}
\label{LE}
For any complex $\ga$,
\begin{equation}
\label{le}
\E X_n(\ga)=n\sum_{k=1}^n \frac{\P(S_{n-k}=n-k)\P(S_k=k-1)}{\P(S_n=n-1)}k^{\ga-1}.
\end{equation}
\end{lemma}
\begin{proof}
By \cite[Lemma 5.1]{SJ285}, summing over $k$,
\begin{equation}
\E X_n(\ga) = \E F_\ga(\tn)=\sum_{k=1}^n n \frac{\P(S_{n-k}=n-k)}{\P(S_n=n-1)}\E f_{\ga,k}(\cT)
\end{equation}
where $f_{\ga,k}(T):=f_\ga(T)\ett{|T|=k}=k^\ga\ett{|T|=k}$
and thus, using \eqref{ptk},
\begin{equation}
\E f_{\ga,k}(\cT)=k^{\ga}\P(|\cT|=k)
=k^{\ga-1}\P(S_k=k-1).
\end{equation}
The result follows.
\end{proof}
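As a quick sanity check of \eqref{le}, take $\ga=0$; then $X_n(0)=n$
deterministically, so \eqref{le} is equivalent (using \eqref{ptk} again) to the
identity
\begin{equation*}
\sum_{k=1}^n \P(|\cT|=k)\,\P(S_{n-k}=n-k)=\P(S_n=n-1).
\end{equation*}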
We now prove \refT{TE}. We begin with part~\ref{TE<0}, which follows from
\cite{SJ285}, and part~\ref{TE+}, which is rather easy.
\begin{proof}[Proof of \refT{TE}\ref{TE<0}]
The estimate \eqref{te<0} is an instance of \cite[(1.13)]{SJ285}, and the
proof in \cite{SJ285} shows that the estimate holds uniformly in each
half-space $\rea<-\eps<0$.
\end{proof}
\begin{proof}[Proof of \refT{TE}\ref{TE+}]
We write \eqref{le} as
$\E X_n(\ga)=n^{\ga-\frac12}\sum_{k=1}^n \gna(k)$ where
\begin{equation}\label{gn}
\gna(k):=\frac{\P(S_{n-k}=n-k)\P(S_k=k-1)}{\P(S_n=n-1)}k^{\ga-1}n^{-\ga+\frac32}.
\end{equation}
Thus, converting the sum in \eqref{le} to an integral by letting
$k:=\ceil{x n}$,
\begin{equation}\label{pyret}
n^\wgay \E X_n(\ga)
=n\qw\sum_{k=1}^n \gna(k)
=\intoi \gna(\ceil{xn})\dd x.
\end{equation}
Assume for simplicity $\spann\xi=1$.
[Otherwise, replace $\ceil{xn}$ by $xn$ rounded upwards to the nearest
integer $k\equiv1\pmod {\spann\xi}$, and make minor modifications.]
For any fixed $x\in(0,1)$, it then follows from \eqref{snn}
that as \ntoo,
for any fixed $\ga$ and uniformly for $\ga$ in a compact set,
\begin{equation}\label{glim}
\gna(\ceil{xn})
\sim \frac{(n-nx)\qqw (nx)\qqw}{{\sqrt{2\pi\gss}}\,n\qqw}
(nx)^{\ga-1}n^{-\ga+\frac32}
= \frac{1}{\sqrt{2\pi\gss}}(1-x)\qqw x^{\ga-\frac{3}2}.
\end{equation}
Furthermore, \eqref{snn} similarly also implies that, for $n$ so large that
$\P(S_n=n-1)>0$,
\begin{equation}\label{gb}
\abs{ \gna(\ceil{xn})}
\le C (1-x)\qqw x^{-(\rea-\frac{3}2)_-}
\end{equation}
for some constant $C$
(depending on the offspring distribution, but not on $\ga$).
Since we assume $\rea>\frac12$,
the \rhs{} of \eqref{gb} is integrable, and thus dominated convergence and
\eqref{glim} yield, evaluating a beta integral,
\begin{align}
\intoi \gna(\ceil{xn})\dd x
&\to
\frac{1}{\sqrt{2\pi\gss}}
\intoi
x^{\ga-3/2}(1-x)\qqw\dd x
\notag\\&
= \frac{1}{\sqrt{2\pi\gss}}B\bigpar{\ga-1/2,1/2}
= \frac{1}{\sqrt{2\pi\gss}}\frac{\gG(\ga-1/2)\gG(1/2)}{\gG(\ga)}
\notag\\&
= \frac{1}{\sqrt{2}\gs}\frac{\gG(\ga-1/2)}{\gG(\ga)}
.
\end{align}
Moreover,
using \refL{Ldom},
this holds
uniformly for $\ga$ in each compact subset of $\set{\ga:\rea>\frac12}$.
The result follows by \eqref{pyret}.
\end{proof}
Before completing the proof of \refT{TE}, we give another lemma with a
related estimate for $\E X_n(\ga)$.
We define, compare \eqref{mu} and \eqref{mua}, for any complex $\ga$,
\begin{equation}
\label{mun}
\mu_n(\ga):=
\E\bigpar{|\cT|^\ga\ett{|\cT|\le n}}
=
\sum_{k=1}^n k^\ga\P(|\cT|=k)
=\sum_{k=1}^n k^{\ga-1}\P(S_k=k-1).
\end{equation}
\begin{lemma}\label{LEn}
If\/ $\rea>-\frac12$,
then, as \ntoo,
\begin{equation}\label{len}
\E X_n(\ga)
=n \mu_n(\ga)
+\frac{1}{\sqrt{2}\gs}\left[\frac{\gG(\ga-\frac12)}{\gG(\ga)}
-\frac{\pi\qqw}{\ga-\frac12}\right]
n^\qgay
+o\bigpar{n^{(\rea)_+ +\frac12}}.
\end{equation}
Moreover, this holds uniformly
for any compact set of $\ga$ with $\rea>-\frac12$.
\end{lemma}
\begin{remark}\label{Rat1/2}
For $\ga=\frac12$, the square bracket in \eqref{len} is interpreted by
continuity.
With $\psi(x):=\gG'(x)/\gG(x)$,
the value at $\frac12$ is easily found to be
$\pi\qqw\bigpar{\psi(1)-\psi(\frac12)}=(2\log2)\pi\qqw$,
using \cite[5.4.12--13]{NIST}.
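In detail, write $z:=\ga-\frac12$; then
$\gG(\ga-\frac12)=z\qw\bigpar{1+\psi(1)z+O(z^2)}$ and
$\gG(\ga)\qw=\pi\qqw\bigpar{1-\psi(\frac12)z+O(z^2)}$, so that
\begin{equation*}
\frac{\gG(\ga-\frac12)}{\gG(\ga)}-\frac{\pi\qqw}{\ga-\frac12}
=\pi\qqw\bigpar{\psi(1)-\psi(\tfrac12)}+O(z),
\qquad z\to0.
\end{equation*}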
\end{remark}
\begin{proof}
This time we use \eqref{le} and \eqref{mun} to obtain, with $\gna(k)$ as in
\eqref{gn}, \cf{}~\eqref{pyret},
\begin{equation}\label{winston}
\begin{split}
\E X_n(\ga)-n\mu_n(\ga)
& =n^{\ga-\frac12}\sum_{k=1}^n
\bigsqpar{\gna(k)-n^{\frac{3}2-\ga}k^{\ga-1}\P(S_k=k-1)},
\\&
=n^{\ga-\frac12}\sum_{k=1}^n \hna(k),
\end{split}
\end{equation}
where, see \eqref{gn},
\begin{equation}\label{emm}
\begin{split}
\hna(k)&:=\gna(k)-n^{\frac{3}2-\ga}k^{\ga-1}\P(S_k=k-1)
\\&\phantom:
=n^{\frac{3}2-\ga}k^{\ga-1}\P(S_k=k-1)
\lrsqpar{\frac{\P(S_{n-k}=n-k)}{\P(S_n=n-1)}-1}.
\end{split}
\end{equation}
We use once more \eqref{snn} and see that,
assuming for simplicity that $\xi$ has span 1,
for any fixed $x\in(0,1)$,
for any fixed $\ga$ and uniformly for $\ga$ in a compact set,
\begin{equation}\label{hlim}
\hna(\ceil{xn})\to
\frac{1}{\sqrt{2\pi\gss}} x^{\ga-\frac{3}2}
\lrsqpar{(1-x)\qqw-1}.
\end{equation}
Furthermore,
by \eqref{snn},
for all $n$, $k$, and $\ga$,
\begin{equation}\label{emm1}
n^{\frac{3}2-\ga}k^{\ga-1}\P(S_k=k-1)
=O\Bigpar{\Bigparfrac{k}{n}^{\rea-\frac{3}2}}.
\end{equation}
If $1\le k\le n/2$, then by \cite[Lemma 5.2(i)]{SJ285},
\begin{equation}\label{emm2}
\begin{split}
{\frac{\P(S_{n-k}=n-k)}{\P(S_n=n-1)}-1}
=O\parfrac{k}{n}+o\bigpar{n\qqw}
,
\end{split}
\end{equation}
and if $n/2<k\le n$,
then by \cite[Lemma 5.2(ii)]{SJ285},
\begin{equation}\label{emm3}
\begin{split}
{\frac{\P(S_{n-k}=n-k)}{\P(S_n=n-1)}-1}
=O\lrpar{\frac{n\qq}{(n-k+1)\qq}}.
\end{split}
\end{equation}
For $k\ge n\qq$, the bound in \eqref{emm2} is $O(k/n)$.
Let $\hnax(k):=\hna(k)\ett{k\ge n\qq}$,
and fix $\ga$ with $\Re\ga>-\frac12$.
Then,
combining \eqref{emm} and \eqref{emm1}--\eqref{emm3},
for all $n$ and $x\in(0,1)$,
\begin{equation}\label{emm5}
\hnax(\ceil{xn})
=
O\bigpar{x^{-(\rea-\frac12)_-}+(1-x)\qqw}.
\end{equation}
This bound is integrable, and thus dominated convergence and \eqref{hlim} yield
\begin{equation}\label{emma}
\begin{split}
n\qw\sum_{n^{1/2} \leq k \leq n}\hna(k)
&=\intoi \hnax(\ceil{xn})\dd x
\\&
\to \frac{1}{\sqrt{2\pi\gss}}
\intoi
x^{\ga-\frac{3}2}\lrsqpar{(1-x)\qqw-1} \dd x.
\end{split}
\end{equation}
The
integral on the \rhs{} of
\eqref{emma} converges for any $\ga$ with $\rea>-\frac12$, and defines an
analytic function in that region. If $\rea>\frac12$, we have
\begin{equation}\label{sofie}
\begin{split}
\intoi
x^{\ga-\frac{3}2}\lrsqpar{(1-x)\qqw-1}\dd x
&=
\intoi x^{\ga-\frac{3}2}(1-x)\qqw \dd x
-
\intoi x^{\ga-\frac{3}2} \dd x
\\&
=B\bigpar{\ga-\tfrac12,\tfrac12}-\frac{1}{\ga-\frac12}
\\&
=\frac{\gG(\ga-\tfrac12)\gG(\tfrac12)}{\gG(\ga)}-\frac{1}{\ga-\frac12}.
\end{split}
\raisetag{\baselineskip}
\end{equation}
The \rhs{} in \eqref{sofie} is analytic for $\rea>-\frac12$ (with a removable
singularity at $\ga=\frac12$), and thus by analytic continuation,
\eqref{sofie} holds as soon as
$\rea>-\frac12$.
By combining \eqref{winston}, \eqref{emma}, and
\eqref{sofie}, we obtain the main terms in \eqref{len}.
However, it remains to show that the terms with $k<n\qq$ in \eqref{winston}
are negligible.
For this we use again \eqref{emm1} and \eqref{emm2} and obtain
\begin{equation}\label{anna}
\begin{split}
\Bigabs{\sum_{k<\sqrt n} \hna(k)}
&
\le C \sum_{k<\sqrt n} \Bigparfrac{k}{n}^{\rea-\frac12}
+o\bigpar{n\qqw}
\sum_{k<\sqrt n} \Bigparfrac{k}{n}^{-(\rea)_--\frac32}
\\&
\le C_1(\ga) n^{\frac12(\rea+\frac12)+\frac12-\rea}
+o\bigpar{n^{1+(\rea)_-}}
\\&
=o\bigpar{n^{1+(\rea)_-}}.
\end{split}
\end{equation}
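For the final equality, note that the exponent in the first term satisfies
\begin{equation*}
\tfrac12\bigpar{\rea+\tfrac12}+\tfrac12-\rea
=\tfrac34-\tfrac{\rea}2
<1+(\rea)_-,
\end{equation*}
both for $\rea\ge0$ (when the \rhs{} is $1$) and for $-\frac12<\rea<0$
(when the inequality reduces to $\rea<\frac12$).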
This shows that the contribution to \eqref{winston} for $k<n\qq$ is
$o\bigpar{n^{\rea+\frac12+(\rea)_-}}=o\bigpar{n^{(\rea)_+ +\frac12}}$,
which completes the proof of \eqref{len}.
Moreover, the estimates \eqref{hlim},
\eqref{emm5}, and~\eqref{anna} hold uniformly in
any compact subset of $\set{\ga:\rea>-\frac12}$, which
using \refL{Ldom} gives the uniformity in \eqref{len}.
\end{proof}
\begin{proof}[Proof of \refT{TE}\ref{TE-}]
Assume again for simplicity that $\xi$ has span 1. Then
\eqref{snn} yields
\begin{equation}\label{skk}
\P(S_k=k-1)=\frac{1}{\sqrt{2\pi\gss}} k^{-\frac12}(1+\eps_k),
\end{equation}
with $\eps_k\to0$ as \ktoo,
and thus, using dominated convergence, for \mbox{$\rea<\frac12$},
\begin{equation}\label{magnus}
\begin{split}
n^{\frac12-\ga} \bigsqpar{\mu(\ga)-\mu_n(\ga)}
&=
n^{\frac12-\ga} \sum_{k=n+1}^\infty k^{\ga-1}\P(S_k=k-1)
\\&
=
\frac{1}{\sqrt{2\pi\gss}}
\int_1^\infty \parfrac{\ceil{xn}}{n}^{\ga-\frac{3}2}(1+\eps_{\ceil{xn}})\dd x
\\&
\to
\frac{1}{\sqrt{2\pi\gss}}
\int_1^\infty x^{\ga-\frac{3}2}\dd x
=
\frac{1}{\sqrt{2\pi\gss}(\frac12-\ga)}.
\end{split}
\end{equation}
Moreover, by \refL{Ldom}, \eqref{magnus} holds uniformly in every half-plane
$\rea<b<\frac12$.
The result follows by combining \eqref{len} and \eqref{magnus}.
\end{proof}
\begin{proof}[Proof of \refT{TE}\ref{TE=}]
By \eqref{mun} and \eqref{skk}, again assuming $\spann\xi=1$,
\begin{equation}\label{aannc}
\mu_n(\tfrac12)
=\sum_{k=1}^n \frac{1}{\sqrt{2\pi\gss}} k^{-1}(1+\eps_k)
= \frac{1}{\sqrt{2\pi\gss}} \log n+o(\log n),
\end{equation}
and the result follows from \refL{LEn}.
\end{proof}
\begin{proof}[Proof of \refT{TX<0}]
This follows, as said in the introduction,
immediately from Theorems~\ref{T<0}
and \ref{TE}\ref{TE<0}.
\end{proof}
\subsection{Extensions assuming higher moments}
We first prove \refT{TEgd} where we assume $\E\xi^{2+\gd}<\infty$ for some
$\gd\in(0,1]$.
For an example (without higher moments) where $\mu(\ga)$ \emph{cannot} be
extended analytically across the line $\ga = \frac12$, see \refT{Tbad} in
\refS{S:bad}.
\begin{proof}[Proof of \refT{TEgd}]
Assume again for simplicity that $\spann\xi=1$.
Then
the assumption $\E\xi^{2+\gd}<\infty$ implies that
\eqref{snn} can be improved to
\begin{equation}\label{rrn}
\P(S_n=n-1)=\frac{1}{\sqrt{2\pi\gss}} n\qqw+ r(n),
\end{equation}
with
\begin{equation}\label{rn}
r(n)=O\bigpar{n^{-\frac12-\frac{\gd}{2}}},
\end{equation}
see \cite[Theorem 6.1]{Ibragimov66},
\cite[Theorems 4.5.3 and 4.5.4]{IbragimovLinnik}.
\pfitemref{TEgdmu}
Consequently, with~$\zeta(\cdot)$ denoting the Riemann zeta function,
\eqref{mua} yields
\begin{equation}\label{puh}
\mu(\ga)=\frac{1}{\sqrt{2\pi\gss}}\sum_{n=1}^\infty n^{\ga-\frac32}+ \sum_{n=1}^\infty n^{\ga-1}r(n)
= \frac{\zeta(\frac32-\ga)}{\sqrt{2\pi\gss}}+ \sum_{n=1}^\infty n^{\ga-1}r(n),
\end{equation}
where the final sum by \eqref{rn} converges and is analytic
in $\ga$
for
\mbox{$\rea<\frac12+\gdd$}.
It is well known that the Riemann
zeta function can be extended to a meromorphic function in the complex plane,
with a single pole at~$1$ with residue~$1$.
The result follows.
[If $\spann\xi>1$, we use the Hurwitz zeta function
\cite[\S25.11]{NIST} instead of the Riemann zeta function.]
\pfitemref{TEgd-}
Let $\DEgdx:=\set{\ga\neq\frac12:-\frac12<\rea<\frac12+\gdd}$,
$\DEgdx_-:=\set{\ga\in \DEgdx:\rea<\frac12}$, and
$\DEgdx_+:=\set{\ga\in \DEgdx:\rea>\frac12}$.
Furthermore, fix a compact subset $K$ of $\DEgdx$.
Define, for $k\ge2$ and for $k=1$ and $\rea>\frac12$,
\begin{align}
\label{aka}
a(k,\ga)&:=k^{\ga-\frac32}
-(\ga-\tfrac12)\qw\bigsqpar{k^{\ga-\frac12}-(k-1)^{\ga-\frac12}},
\\
b(k,\ga)&:=\pigsqqw a(k,\ga)+k^{\ga-1} r(k).\label{bka}
\end{align}
Note that
for $k\ge2$ and $\ga\in K$,
by a Taylor expansion,
\begin{align}\label{akao}
a(k,\ga)&=O\bigpar{k^{\rea-\frac52}},
\end{align}
where the implied constant depends only on~$K$,
and thus, using also \eqref{rn},
\begin{equation}\label{bkb}
b(k,\ga) = O\bigpar{k^{\rea-\frac32-\gdd}}.
\end{equation}
By \eqref{rrn}, \eqref{aka}, and \eqref{bka},
\begin{equation}\label{sb}
k^{\ga-1}\P(S_k=k-1)=\gapigsqqw\bigsqpar{k^{\ga-\frac12}-(k-1)^{\ga-\frac12}}
+b(k,\ga),
\end{equation}
where either $k\ge2$, or $k=1$ and $\ga\in \DEgdx_+$.
It follows from \eqref{akao} that $\sum_{k=2}^\infty a(k,\ga)$ converges for
$\ga\in \DEgdx$ and defines an analytic function there. Furthermore, if $\ga\in
\DEgdx_-$, then, summing the telescoping sum,
\begin{equation}
\sum_{k=2}^\infty a(k,\ga)
=\zeta\bigpar{\tfrac32-\ga}-1+\bigpar{\ga-\tfrac12}\qw,
\end{equation}
and consequently, by \eqref{puh},
\begin{equation}\label{mux}
\begin{split}
\mu(\ga)=\pigsqqw\lrpar{\sum_{k=2}^\infty a(k,\ga)+1-\bigpar{\ga-\tfrac12}\qw}
+\sum_{k=1}^\infty k^{\ga-1}r(k).
\end{split}
\end{equation}
Both sides of \eqref{mux} are analytic in $\DEgdx$, so by analytic continuation,
\eqref{mux} holds for all $\ga\in \DEgdx$ (and also for $\rea\le-\frac12$).
In particular, for $\ga\in \DEgdx_+$, where $a(1,\ga)$ is defined,
\begin{equation}\label{mux+}
\begin{split}
\mu(\ga)=\pigsqqw{\sum_{k=1}^\infty a(k,\ga)}
+\sum_{k=1}^\infty k^{\ga-1}r(k)
=\sum_{k=1}^\infty b(k,\ga).
\end{split}
\end{equation}
We now analyze $\mu_n(\ga)$ further.
First, for $\ga\in K_-:=K\cap \DEgdx_-$,
using \eqref{sb} and \eqref{bkb},
\begin{equation}\label{elf-}
\begin{split}
\mu(\ga)-\mu_n(\ga)
&
=\sum_{k=n+1}^\infty k^{\ga-1}\P(S_k=k-1)
\\&
=\sum_{k=n+1}^\infty \biggpar{
\frac{(\ga-\frac12)\qw}{\pigsqq}
\bigsqpar{k^{\ga-\frac12}-(k-1)^{\ga-\frac12}}
+b(k,\ga)}
\\&
=-\frac{(\ga-\frac12)\qw}{\pigsqq} n^{\ga-\frac12}
+O\bigpar{n^{\rea-\frac{1}2-\gdd}}.
\end{split}
\end{equation}
Next, consider $\ga\in \DEgdx_+$. By \eqref{sb} and \eqref{mux+}, for $\ga\in
K_+:=K\cap \DEgdx_+$,
\begin{equation}\label{elf+}
\begin{split}
\mu_n(\ga)-\mu(\ga)
&=\sum_{k=1}^n \biggpar{
\frac{(\ga-\frac12)\qw}{\pigsqq}\bigsqpar{k^{\ga-\frac12}-(k-1)^{\ga-\frac12}}
+b(k,\ga)}
-\sum_{k=1}^\infty b(k,\ga)
\\&
=\gapigsqqw n^{\ga-\frac12} -\sum_{k=n+1}^\infty b(k,\ga)
\\&
=\frac{(\ga-\frac12)\qw}{\pigsqq} n^{\ga-\frac12}
+O\bigpar{n^{\rea-\frac{1}2-\gdd}}.
\end{split}
\raisetag{\baselineskip}
\end{equation}
We have obtained the same estimate for the two ranges in \eqref{elf-} and
\eqref{elf+}, and can combine them to obtain,
for $\ga\in K_-\cup K_+$,
\begin{equation}\label{elf}
\begin{split}
\mu_n(\ga)-\mu(\ga)
=\frac{(\ga-\frac12)\qw}{\pigsqq} n^{\ga-\frac12}
+O\bigpar{n^{\rea-\frac{1}2-\gdd}}.
\end{split}
\end{equation}
Furthermore, for each $n$, $\mu_n(\ga)-\mu(\ga)$ is a continuous function in
$\DEgdx$, and thus \eqref{elf} holds for $\ga\in \overline{K_-\cup K_+}$ by
continuity.
If $K$ is a closed disc, then $K=\overline{K_-\cup K_+}$, and thus
\eqref{elf} holds for $\ga\in K$.
In general, any compact $K\subset \DEgdx$ can be covered by a finite union of
closed discs $K_i\subset \DEgdx$,
and it follows that \eqref{elf}
holds uniformly in $\ga\in K$ for each compact $K\subset \DEgdx$.
Combining \eqref{len} and \eqref{elf}, we obtain
\eqref{te-}, uniformly on each compact $K\subset \DEgdx$.
\pfitemref{TEgd=}
By \eqref{rrn}--\eqref{rn}, \eqref{skk} holds with $\eps_k=O(k^{-\gd/2})$.
Consequently, \eqref{aannc} is improved to
\begin{equation}\label{annac}
\mu_n(\tfrac12)
=\sum_{k=1}^n \frac{1}{\sqrt{2\pi\gss}} k^{-1}(1+\eps_k)
= \frac{1}{\sqrt{2\pi\gss}} \log n+c_1+o(1),
\end{equation}
with $c_1=(2\pi\gss)\qqw\bigpar{\gamma+\sum_{k=1}^\infty \eps_k/k}$.
The result \eqref{tegd=} follows from \eqref{len}.
\end{proof}
\begin{remark}
The proof shows, using \refR{Rat1/2}, that the constant $c$
in \refT{TEgd}\ref{TEgd=} is given by
\begin{equation} \label{tegd-c}
c = \sum_{k = 1}^{\infty}\frac{1}{k}
\bigsqpar{k^{1/2} \P(S_k = k - 1) - \frac{1}{\sqrt{2 \pi \gss}}}
- \frac{1}{\sqrt{2 \pi \gss}} \psi\Bigpar{\frac12},
\end{equation}
where $\psi(\frac12) = - (2 \log 2 + \gam)$
\cite[5.4.13]{NIST}.
\end{remark}
We next show that \refT{TEgd}\ref{TEgdmu} extends
to the case $\gd>1$, at least if $\gd$ is an integer.
\begin{theorem}
\label{TH}
If\/ $\E\xi^k<\infty$ for an integer $k\ge3$, then
$\mu(\ga)$ can be continued as a meromorphic function in
$\rea<\frac{k-1}2$ with
simple poles at
$\set{\frac{1}2,\frac{3}2,\frac{5}2,\dots}$
(or possibly a subset of these points)
and no other poles.
\end{theorem}
Typically, all these points $\ell-\frac12$
(with $1\le \ell<k/2$) are poles; however
in special cases, $\mu(\ga)$ might be regular at some
of these points, see \refE{Eresidue0}.
\begin{proof}
Assume for simplicity that $\spann\xi=1$.
In this case,
see \cite[Theorem VII.13]{Petrov} (with slightly different notation),
\eqref{rrn} can be refined to
\begin{equation}\label{h1}
\P(S_n=n-1)=\frac{e^{-x^2/2}}{\sqrt{2\pi \gss n}}
\biggsqpar{1+\sum_{\nu=1}^{k-2}\tq_\nu(x) n^{-\nu/2}}
+o\bigpar{n^{-(k-1)/2}}
\end{equation}
where $x=-1/(\gs\sqrt n)$ and $\tq_\nu$ is a polynomial (independent of $n$)
whose coefficients depend on the cumulants of $\xi$
of order up to $\nu+2$,
see
\cite[VI.(1.14)]{Petrov} for details.
The polynomial $\tq_\nu$ is odd
if $\nu$ is odd, and is even if $\nu$ is even;
hence the term $\tq_\nu(x)n^{-\nu/2}$ is a polynomial in $n\qw$
for every $\nu$, and expanding $e^{-x^2/2}=e^{-1/(2\gss n)}$
into its Taylor series
and
rearranging,
we obtain from \eqref{h1}
\begin{equation}\label{h2}
\P(S_n=n-1)=
\sum_{j=0}^{\floor{k/2}-1} a_j n^{-j-\frac12}
+ r(n)
\end{equation}
with $r(n)=o\bigpar{n^{-(k-1)/2}}$,
for some coefficients $a_j$.
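(For the parity claim: if $\nu$ is odd, then $\tq_\nu(x)=x\,p(x^2)$ for some
polynomial $p$, so with $x=-1/(\gs\sqrt n)$,
\begin{equation*}
\tq_\nu(x)n^{-\nu/2}=-\gs\qw p\bigpar{1/(\gss n)}\,n^{-(\nu+1)/2},
\end{equation*}
which is a polynomial in $n\qw$ since $\nu+1$ is even; the case of even $\nu$
is similar.)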
Consequently \eqref{mua} yields, \cf{} \eqref{puh},
\begin{equation}
\mu(\ga)=
\sum_{j=0}^{\floor{k/2}-1} a_j \zeta\bigpar{\tfrac{3}2+j-\ga}
+\sum_{n=1}^\infty n^{\ga-1}r(n),
\end{equation}
where the final sum is analytic in $\rea<(k-1)/2$, which proves the result.
\end{proof}
\begin{remark}
The proof of \refT{TH} shows that the residue of $\mu(\ga)$ at
$\ga=j+\frac12$
(assumed to be less than $\frac{k-1}2$)
is $-a_j$, where $a_j$ is the coefficient in the expansion \eqref{h2}
and can be calculated from the cumulants
$\gk_2=\gss,\gk_3\dots,\gk_{2j+2}$ of $\xi$.
For example, see \refT{TEgd}\ref{TEgdmu},
the residue at $\frac12$ is
$-a_0=-1/\sqrt{2\pi\gss}$.
As another example,
a calculation (which we omit) shows that if $k>4$, the residue
at $\frac{3}2$ is
\begin{equation}\label{noso}
-a_1=
\frac{1}{\sqrt{2\pi\gss}}\Bigpar{
\frac{1}{2\gss} - \frac{\gk_3}{2\gs^{4}}
- \frac{\gk_4}{8\gs^{4}}
+ \frac{5\gk_3^2}{24\gs^6}}.
\end{equation}
\end{remark}
\begin{example}
Consider the case of uniformly random labelled trees, which is given by
$\xi\sim\Po(1)$.
In this case,
\begin{equation}
\begin{split}
\P(S_n=n-1)=\P(\Po(n)=n-1)
=\frac{n^{n-1}}{(n-1)!}e^{-n}
\end{split}
\end{equation}
which by Stirling's formula, see \eg{} \cite[5.11.1]{NIST}, has a
(divergent)
asymptotic expansion that can be written
\begin{equation}\label{bern}
\begin{split}
\sim (2\pi n)\qqw \exp\Bigpar{-\sum_{k=1}^\infty \frac{B_{2k}}{2k(2k-1)n^{2k-1}}}
\end{split}
\end{equation}
where $B_{2k}$ are the Bernoulli numbers.
Expanding the exponential in \eqref{bern}
(as a formal power series),
we obtain coefficients
$a_j$ such that for any integer~$J$ we have
\begin{equation}
\P(S_n=n-1)=\sum_{j=0}^J a_j n^{-j-\frac12} +o\bigpar{n^{-J-\frac12}},
\end{equation}
which is the same as \eqref{h2},
and it follows by the argument above that $\mu(\ga)$ has residue
$-a_j$ at $j+\frac12$.
For example, $a_0=(2\pi)\qqw$, see \eqref{rrn} and \eqref{puh},
and $a_1=-\frac{1}{12}(2\pi)\qqw$, showing that $\mu(\ga)$ has a pole with
residue $\frac{1}{12}(2\pi)\qqw$ at $\frac{3}2$.
(This agrees with \eqref{noso} since $\gk_k=1$ for every $k\ge1$.)
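Explicitly, with $\gss=\gk_3=\gk_4=1$, the parenthesized factor in \eqref{noso}
equals
\begin{equation*}
\frac12-\frac12-\frac18+\frac{5}{24}=\frac{1}{12},
\end{equation*}
so \eqref{noso} gives $-a_1=\frac{1}{12}(2\pi)\qqw$, as found above.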
\end{example}
\begin{example}\label{Eresidue0}
We construct an example where $\xi$ is bounded, so \refT{TH} applies for
every $k$ and $\mu(\ga)$ is meromorphic in the entire complex plane,
and furthermore $\mu(\ga)$ is regular at $\ga=\frac{3}{2}$.
We use three parameters $m$, $s$, and $A$, where $m\ge10$ is a fixed integer
(we may take $m=10$), $s\in[0,m)$, and $A$ is a large integer.
Let $\xi=\xi_{m,s,A}$ take the values $0,1,A,mA$ with the probabilities
\begin{align}
\P(\xi=mA)&=\frac{s}{2m^2A},\\
\P(\xi=A)&=\frac{1}{2A},\\
\P(\xi=1)&=\frac12-\frac{s}{2m},\\
\P(\xi=0)&=1-\P(\xi=1)-\P(\xi=A)-\P(\xi=mA).
\end{align}
Then $\E\xi=1$ and $\spann\xi=1$.
Keep $m$ and $s$ fixed, and let $A\to\infty$; then
\begin{align}
\gss&\sim\E\xi^2\sim \frac{sA}2+\frac{A}2 = \frac{1+s}{2} A,\label{xi2}\\
\gk_3&\sim\E\xi^3\sim \frac{smA^2}2+\frac{A^2}2 = \frac{1+sm}{2} A^2,
\label{xi3}\\
\gk_4&\sim\E\xi^4\sim \frac{sm^2A^3}2+\frac{A^3}2 = \frac{1+sm^2}{2} A^3.
\label{xi4}
\end{align}
Denote the parenthesized factor in \eqref{noso} by $f(m,s,A)$.
It follows from \eqref{xi2}--\eqref{xi4} that as $A\to\infty$ with fixed $m$
and $s$,
\begin{align}\label{fmsa}
f(m,s,A) = -\frac{1+sm^2}{4(1+s)^2}A + \frac{5(1+sm)^2}{12(1+s)^3}A+o(A)
=\bigpar{g(m,s)+o(1)}A,
\end{align}
where
\begin{align}\label{gms}
g(m,s):= -\frac{1+sm^2}{4(1+s)^2} + \frac{5(1+sm)^2}{12(1+s)^3}
=
\frac{5(1+sm)^2-3(1+s)(1+sm^2)}{12(1+s)^3}.
\end{align}
For $s=0$, the final numerator in \eqref{gms} is $2>0$, and thus $g(m,0)>0$.
For $s=1$, the final numerator is $5(1+m)^2-6-6m^2<0$, and thus
$g(m,1)<0$. Hence, by \eqref{fmsa}, we may choose a large $A$ such that
$f(m,0,A)>0$ and $f(m,1,A)<0$.
Then, by continuity, there exists $s\in(0,1)$ such that $f(m,s,A)=0$,
and \eqref{noso} shows that for the corresponding $\xi$, we have
the residue 0 at $\frac{3}2$, \ie, there is no pole there and
$\mu(\ga)$ is regular at $\frac{3}2$.
\end{example}
\section{Brownian representations}\label{Sbrown}
We use the well-known result by \citet{AldousII,AldousIII}
that represents a \cGWt{} asymptotically by a Brownian excursion $(\ex(t))$
in the
following way (under the conditions $\E\xi=1$ and $\gss:=\Var\xi<\infty$
that also we assume).
(See also \citet{LeGall} and \citet[Chapter 4.1]{Drmota}.)
Consider the \emph{depth-first walk} on the tree $\tn$; this is a walk
$v(1),\dots,\allowbreak
v(2n-1)$ on the nodes of $\tn$, where $v(1)=v(2n-1)$ is the
root $o$, and each time we come to a node, we proceed to the first unvisited
child of the node, if there is any, and otherwise to the parent.
For convenience, we also define $v(0)=v(2n)=o$.
We define $W_n(i):=\hh{v(i)}$, and extend $W_n$ to the interval $[0,2n]$ by
linear interpolation between the integers. Furthermore, we scale $W_n$ to a
function on $\oi$ by
\begin{equation}\label{WW}
\WW_n(t):=\gs n\qqw W_n(2nt).
\end{equation}
Then $\WW_n$ is a random continuous function on \oi, and is thus a random
element of the Banach space $\coi$. One of the main results of
\citet[Theorem 23 with Remark 2]{AldousIII}
is that, as random elements of $\coi$,
\begin{equation}\label{WW2e}
(\WW_n(t))\dto (2\ex(t)).
\end{equation}
We can think of $W_n(t)$ as the position of a worm that crawls on the edges
of the tree, visiting each edge twice (once in each direction).
We define $v(x)$ also for non-integer $x\in[0,2n]$ as
either $v(\floor x)$ or $v(\ceil x)$, choosing between these two
the node more distant from the root.
Thus,
\begin{equation}\label{hhvx}
\hh{v(x)}=\ceil{W_n(x)}.
\end{equation}
For a node $v$, let $i_v':=\min\set{i\ge1:v(i)=v}$ and
$i_v'':=\max\set{i\le2n-1:v(i)=v}$, \ie, the first and last times that $v$ is
visited (with $i_o'=1$ and $i_o''=2n-1$). Then the subtree
$\tnv$ is visited during the interval $[i_v',i_v'']$, and
$i_v''-i'_v=2(|\tnv|-1)$.
Let
\begin{equation}\label{jv}
J_v:=\set{x\in(0,2n):v(x)\succeq v}.
\end{equation}
Then $J_v=(i_v'-1,i_v''+1)$, and thus $J_v$ is an interval of length
\begin{equation}
\label{jvt}
|J_v|=i''_v-i'_v+2
=2|\tnv|.
\end{equation}
We can now prove \refT{Tbrown}.
When $\rea>1$, all four expressions \eqref{wa0}--\eqref{wb} are equivalent
by elementary calculus, so part \ref{Tbrown>1} follows from part \ref{Tbrown>1/2}.
Nevertheless, we begin with a straightforward proof of the
simpler part \ref{Tbrown>1}, and then show how part \ref{Tbrown>1/2} can be
proved by a similar, but more complicated, argument.
Since we have not yet proved convergence of $Y_n(\ga)$,
we state the result as the following two lemmas.
\begin{lemma}\label{LB1}
If\/ $\rea>1$, then $Y_n(\ga)\dto \gs\qw Y(\ga)$ as \ntoo,
with $Y(\ga)$ given
by \eqref{wb}.
Moreover, this holds jointly for any finite set of such $\ga$.
\end{lemma}
\begin{proof}
We assume $\rea>1$, and then \eqref{jvt} implies
\begin{equation}
(2|\tnv|)^\ga
= \iint\limits_{\substack{x,y\in J_v \\ x<y}} \ga(\ga-1)(y-x)^{\ga-2}\dd x \dd y
.
\end{equation}
Hence,
\begin{equation}\label{hamlet}
\begin{split}
2^\ga X_n(\ga)=
\sum_{v\in\tn} (2|\tnv|)^\ga & =
\iint\limits_{0<x<y<2n} \ga(\ga-1)(y-x)^{\ga-2}
\sum_{v\in\tn} \ett{x,y\in J_v}\dd x\dd y.
\end{split}
\end{equation}
Now, by \eqref{jv} and \eqref{min},
$x,y\in J_v\iff v \preceq v(x)\land v(y)$, and thus
\begin{equation}\label{ofelia}
\sum_{v\in\tn} \ett{x,y\in J_v}
=\#\set{v:v\preceq v(x)\land v(y)}
=\hh{v(x)\land v(y)}+1.
\end{equation}
Furthermore, from the construction of the depth-first walk,
\begin{equation}\label{hhvxy}
\hh{v(x)\land v(y)}=\ceil{m(W_n;x,y)},
\end{equation}
recalling the notation \eqref{m}.
[Actually, $m(W_n;x,y)$ is an integer except when
$v(x)$ is an ancestor of $v(y)$
or conversely.]
Combining \eqref{hamlet}--\eqref{hhvxy} and \eqref{WW} yields
\begin{align}
2^\ga X_n(\ga)
& = \iint\limits_{0<x<y<2n} \ga(\ga-1)(y-x)^{\ga-2} \bigsqpar{
\hh{v(x) \land v(y)}+1} \dd x\dd y
\notag\\
& = \iint\limits_{0<x<y<2n} \ga(\ga-1)(y-x)^{\ga-2} \bigsqpar{m(W_n;x,y)+O(1)}
\dd x\dd y
\notag\\
& = \ga(\ga-1) (2n)^\ga\iint\limits_{0<s<t<1} (t-s)^{\ga-2}
\bigsqpar{m(n^{1/2}\gs\qw \WW_n;s,t)+O(1)}\dd s\dd t.
\end{align}
Since $\WW_n\dto 2\ex$ in $C[0,1]$ by \eqref{WW2e},
and the integral below defines a
continuous functional on $\coi$ because $\iint(t-s)^{\ga-2} \dd s \dd t$ converges (absolutely),
it follows that
\begin{align}
\gs n^{-\ga-\frac12}X_n(\ga)
&=
\ga(\ga-1) \iint\limits_{0<s<t<1} (t-s)^{\ga-2} m(\WW_n;s,t)\dd s\dd t
+O\bigpar{n\qqw}
\notag\\&
\dto
\ga(\ga-1) \iint\limits_{0<s<t<1} (t-s)^{\ga-2}
m(2\ex;s,t)\dd s\dd t
=Y(\ga).
\end{align}
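(The continuity used here is elementary: $|m(f;s,t)-m(g;s,t)|\le\sup_{u\in\oi}|f(u)-g(u)|$
for all $0<s<t<1$, so for $\rea>1$ the map
\begin{equation*}
f\mapsto\iint\limits_{0<s<t<1} (t-s)^{\ga-2} m(f;s,t)\dd s\dd t
\end{equation*}
is Lipschitz on $\coi$, with Lipschitz constant at most
$\iint_{0<s<t<1}(t-s)^{\rea-2}\dd s\dd t=\bigpar{\rea(\rea-1)}\qw$.)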
In other words, recalling \eqref{Yn},
$\sigma Y_n(\ga)\dto Y(\ga)$.
Joint convergence for several $\ga$ follows by the same argument.
\end{proof}
\begin{lemma}\label{LB1/2}
If\/ $\rea>\frac12$, then $Y_n(\ga)\dto \gs\qw Y(\ga)$ as \ntoo,
with $Y(\ga)$ given
by \eqref{wa0}--\eqref{wa2}.
Moreover, this holds jointly for any finite set of such $\ga$.
\end{lemma}
\begin{proof}
Fix $\ga$ with $\rea>\frac12$.
We begin with a calculus fact (assuming only that $\rea > 0$).
For any $0 < a < b < \infty$,
\begin{equation}
(b - a)^{\ga} = \ga \int^b_a\!x^{\ga - 1}\dd x
- \ga (\ga - 1) \iint\limits_{0 < x < a < y < b} (y - x)^{\ga - 2}\dd x\dd y.
\end{equation}
We apply this to the interval $(a, b) = J_v$ in \eqref{jv} and obtain, using
\eqref{jvt},
\begin{equation*}
\begin{split}
(2|\tnv|)^\ga &
= \ga \int_{x \in J_v}\!x^{\ga - 1}\dd x
- \ga (\ga - 1) \iint\limits_{0<x<y,\,x\notin J_v,\,y\in J_v}
(y - x)^{\ga - 2}\dd x\dd y
\end{split}
\end{equation*}
and thus, summing over all nodes $v$ of $\tn$,
\begin{multline}
\label{1}
2^\ga\sum_v |\tnv|^\ga
= \ga \int^{2 n}_{0}\!x^{\ga - 1}\sum_{v\in \tn}\ett{x\in J_v}\dd x
\\
{} - \ga (\ga - 1) \iint\limits_{0 < x < y < 2 n} (y - x)^{\ga - 2}
\sum_{v\in \tn}\ett{x\notin J_v,\, y\in J_v}\dd x \dd y .
\end{multline}
Now, using \eqref{jv} and \eqref{hhvx},
\begin{equation}
\begin{split}
\sum_{v\in \tn}\ett{x\in J_v}
&=
\#\set{v: v(x) \succeq v}
= \hh{v(x)}+1
= \ceil{W_n(x)}+1
\end{split}
\end{equation}
and similarly, using also \eqref{min} and \eqref{hhvxy},
\begin{equation}
\begin{split}
\sum_{v\in \tn}\ett{x\notin J_v,\, y\in J_v}
&=
\#\set{v: v(x) \not\succeq v\text{ and }v(y) \succeq v}
\\
&=
\#\set{v: v(y) \succeq v}
-
\#\set{v: v(x)\land v(y) \succeq v}
\\&
= \hh{v(y)}-\hh{v(x)\land v(y)}
\\&
= \ceil{W_n(y)}-\ceil{m(W_n;x,y)}.
\end{split}
\end{equation}
Consequently, recalling the definitions \eqref{Xn} and \eqref{Yn}
of $X_n(\ga)$ and $Y_n(\ga)$,
\begin{multline}
\label{polonius}
2^\ga X_n(\ga)
= \ga \int^{2 n}_{0}\!x^{\ga - 1}\bigpar{\ceil{W_n(x)}+1}\dd x
\\
{} - \ga (\ga - 1) \iint\limits_{0 < x < y < 2 n} (y - x)^{\ga - 2}
\bigpar{\ceil{W_n(y)}-\ceil{m(W_n;x,y)}}\dd x \dd y
\end{multline}
and thus
\begin{multline}
\label{laertes}
Y_n(\ga)
= \ga \int^{1}_{0}\!t^{\ga - 1}n\qqw \bigpar{\ceil{W_n(2nt)}+1}\dd t
\\
{} - \ga (\ga - 1) \iint\limits_{0 < s < t < 1} (t - s)^{\ga - 2}
n\qqw \bigpar{\ceil{W_n(2nt)}-\ceil{m(W_n;2ns,2nt)}}\dd s \dd t .
\end{multline}
The first integral in \eqref{laertes} is no problem; it converges (in
distribution) by \eqref{WW} and \eqref{WW2e},
just as the integral at the end of the proof of \refL{LB1},
because $\int\!t^{\ga-1} \dd t$ converges (absolutely).
The second integral, however, is more difficult, since $\iint(t-s)^{\ga-2} \dd s \dd t$
diverges if $\rea\le 1$. We therefore use a truncation argument.
For $0 < \eps < 1$ we split $Y_n(\ga)=\zne(\ga) + \zne'(\ga)$, where
\begin{multline}
\zne(\ga)
:= \ga \int^{1}_{0}\!t^{\ga - 1}n\qqw \bigpar{\ceil{W_n(2nt)}+1}\dd t
\\
{} - \ga (\ga - 1) \iint\limits_{t-s>\eps} (t - s)^{\ga - 2}
n\qqw \bigpar{\ceil{W_n(2nt)}-\ceil{m(W_n;2ns,2nt)}}\dd s \dd t
\end{multline}
and
\begin{multline}\label{zne'}
\zne'(\ga):= \\
{} - \ga (\ga - 1) \iint\limits_{ 0<t-s<\eps}\!(t - s)^{\ga - 2}
n\qqw \bigpar{\ceil{W_n(2nt)}-\ceil{m(W_n;2ns,2nt)}}\dd s \dd t .
\end{multline}
For each fixed $\ga$ with $\rea>0$ and each fixed $0 < \eps < 1$,
\begin{multline}
\gs\zne(\ga)
= \ga \int^{1}_{0}\!t^{\ga - 1}\WW_n(t)\dd t
\\
{} - \ga (\ga - 1) \iint\limits_{t-s>\eps} (t - s)^{\ga - 2}
\bigpar{{\WW_n(t)}-{m(\WW_n;s,t)}}\dd s \dd t
+ O\bigpar{n\qqw}
\end{multline}
and thus, by \eqref{WW2e} and the continuous mapping theorem,
\begin{multline}\label{falstaff}
\gs\zne(\ga)
\dto
\ze(\ga):=
2\ga \int^{1}_{0}\!t^{\ga - 1}\ex(t)\dd t
\\
{} - 2\ga (\ga - 1) \iint\limits_{t-s>\eps} (t - s)^{\ga - 2}
\bigpar{{\ex(t)}-{m(\ex;s,t)}}\dd s \dd t
.
\end{multline}
We now use the assumption $\rea>\frac12$. We define $Y(\ga)$ by \eqref{wa0},
noting that the integrals converge, as said in \refS{S:intro}, because
$\ex(t)$ is H\"older($\gam$)-continuous for every $\gam<\frac12$.
This shows that
as $\eps\to0$,
\begin{equation}\label{york}
\ze(\ga)\to Y(\ga)
\end{equation}
a.s\punkt{} (and thus in distribution).
Furthermore, let $\gb$ be real with $\frac12<\gb<(\rea\land 1)$.
It follows from \eqref{zne'} that
\begin{equation}\label{henryv}
|\zne'(\ga)|
\le \frac{|\ga(\ga-1)|}{|\gb(\gb-1)|}\,\eps^{\rea-\gb}\zne'(\gb).
\end{equation}
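(Indeed, $|(t-s)^{\ga-2}|=(t-s)^{\rea-2}\le\eps^{\rea-\gb}(t-s)^{\gb-2}$
for $0<t-s<\eps$, and the integrand in \eqref{zne'} is nonnegative for real
$\gb\in(\frac12,1)$, so $\zne'(\gb)\ge0$.)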
Furthermore, by \eqref{laertes}, $\zne'(\gb)\le Y_n(\gb)$, and by
\refT{TE}\ref{TE+} we have $\E Y_n(\gb)=O(1)$.
Consequently, \eqref{henryv} implies
\begin{equation}\label{crispin}
\E|Y_n(\ga)-\zne(\ga)|=\E |\zne'(\ga)| = O\bigpar{\eps^{\rea-\gb}}.
\end{equation}
Consequently, $\zne(\ga)\pto Y_n(\ga)$ as $\eps\to0$ uniformly in $n$,
\ie, for any $\gd>0$,
$\sup_n\P(|Y_n(\ga)-\zne(\ga)|>\gd)\to0$.
This together with the facts \eqref{falstaff} and \eqref{york} imply
the result $\gs Y_n(\ga)\dto Y(\ga)$,
see \eg{} \cite[Theorem 4.2]{Billingsley}
or \cite[Theorem 4.28]{Kallenberg}.
Joint convergence for several $\ga$ follows by the same argument.
It remains to show that \eqref{wa1}--\eqref{wa2} are equal to $Y(\ga)$.
Let us temporarily denote these expressions by $Y^{(1)}(\ga)$ and
$Y^{(2)}(\ga)$.
Note that, a.s.,
\begin{align}
(\ga - 1) \iint\limits_{t-s>\eps} (t - s)^{\ga - 2}\ex(t)\dd s \dd t
&=
(\ga - 1) \int_\eps^1 \ex(t)\int_0^{t-\eps} (t - s)^{\ga - 2}\dd s \dd t
\notag\\& \hskip-2em
=
\int_\eps^1 \ex(t) \bigpar{t^{\ga - 1}-\eps^{\ga-1}} \dd t
\notag\\& \hskip-2em
=
\int_0^1 \ex(t) \bigpar{t^{\ga - 1}-\eps^{\ga-1}} \dd t +O\bigpar{\eps^{\rea}}
\end{align}
and hence
\begin{equation}
\ze(\ga)=
2\ga\eps^{\ga-1}\int_0^1 \ex(t) \dd t
+ 2\ga (\ga - 1) \iint\limits_{t-s>\eps} (t - s)^{\ga - 2}
{m(\ex;s,t)}\dd s \dd t
+O\bigpar{\eps^{\rea}}.
\end{equation}
Consequently, \eqref{york} yields the formula
\begin{equation}\label{richard}
Y(\ga)=
\lim_{\eps\to0}
\lrpar{
2\ga\eps^{\ga-1}\int_0^1 \ex(t) \dd t
+ 2\ga (\ga - 1) \iint\limits_{t-s>\eps} (t - s)^{\ga - 2}
{m(\ex;s,t)}\dd s \dd t }.
\end{equation}
If we replace $\ex(t)$ by the reflected $\ex(1-t)$, the \rhs{} of
\eqref{richard} is unchanged, while $Y(\ga)$ defined by \eqref{wa0} becomes
$Y^{(1)}(\ga)$ defined by \eqref{wa1}. Consequently,
$Y^{(1)}(\ga)=Y(\ga)$ a.s.
Furthermore, $Y^{(2)}(\ga)=[Y(\ga)+Y^{(1)}(\ga)]/2$, and thus also
$Y^{(2)}(\ga)=Y(\ga)$ a.s.
\end{proof}
\begin{remark}
For $\ga=k\ge2$ integer, an alternative argument uses the following identity,
obtained by extending \eqref{ofelia} to several nodes:
\begin{equation}
\begin{split}
\sum_{v\in\tn}|\tnv|^k
&=\sum_{v,v_1,\dots,v_k\in\tn}\ett{v_1,\dots,v_k\succeq v}
\\&
=2^{-k}\int_0^{2n}\dotsi\int_0^{2n}\bigpar{\hh{v(x_1)\land\dotsm\land v(x_k)}+1} \dd x_1 \dotsm \dd x_k
\\&
=2^{-k}k!\idotsint\limits_{0<x_1<\dots< x_k<2n}\ceil{m(W_n;x_1,x_k)}\dd
x_1\dotsm \dd x_k
+n^k
\\&
=n^{k}k(k-1)\iint\limits_{0<t_1<
t_k<1}\ceil{m(W_n;2nt_1,2nt_k)}(t_k-t_1)^{k-2}\dd t_1 \dd t_k
+n^k.
\end{split}
\end{equation}
This easily shows \refL{LB1} with \eqref{wb} in this case.
A similar, but simpler, argument yields \eqref{y1} for $k=1$, see \eqref{xn1}.
\end{remark}
\section{Proofs of \refT{T1} and remaining limit theorems}
\label{Spf2}
\begin{proof}[Proof of \refT{T1}]
\refT{T1} now follows from \refL{Lsub}, with $D=H_+$ and
$E=\set{\ga:\rea>1}$, using Lemmas \ref{LcH}\ref{LcH>0} and
\ref{LB1}.
\end{proof}
\begin{proof}[Proof of \refR{Rrepr}]
This is implicit in the proof above, but we add some details.
Let $D$ and $E$ be as in the proof of \refT{T1}.
(Alternatively, take $E:=[2,3]$.)
Let $\gf:\cH(D)\to C(E)$ be
the restriction mapping $f\mapsto f|_E$,
and let $\psi:\coi\to C(E)$ be the mapping
taking $\ex \in C[0, 1]$ to the element of $C(E)$ that maps $\alpha \in E$ to
the \rhs{} of~\eqref{wb};
both~$\gf$ and~$\psi$ are continuous and thus measurable.
Let also $Y$ denote the random function $Y(\ga)\in \cH(D)$.
The proof above (in particular, \refL{Lsub})
shows that $\gf(Y)\eqd \psi(\ex)$,
and thus we may assume
\begin{align}\label{pippi}
\gf(Y) = \psi(\ex)
\qquad\text{a.s.}
\end{align}
(The skeptical
reader might apply \cite[Corollary 6.11]{Kallenberg} for the
last step.)
Furthermore, $\gf$ is injective, and both $\cH(D)$ and $C(E)$ are Polish
spaces;
thus the range $R:=\gf(\cH(D))$ is a Borel set in $C(E)$, and the inverse
function $\gf\qw:R\to \cH(D)$ is measurable,
see \eg{} \cite[Theorem 8.3.7 and Proposition 8.3.5]{Cohn}.
By \eqref{pippi}, we have
$Y = \gf\qw\bigpar{\psi(\ex)}$ a.s.
Consequently,
\eqref{rrepr} holds with
\begin{align}
\label{emil}
\Psi(\ga,f):=
\begin{cases}
\gf\qw\bigpar{\psi(f)}(\ga) , & \psi(f)\in R,
\\
0,& \text{otherwise}.
\end{cases}
\end{align}
\end{proof}
\begin{proof}[Proofs of Theorems \ref{TX} and \ref{TXgd}]
These results
follow immediately from Theorem~\ref{T1}
and the estimates of $\E X_n(\ga)$ in
Theorems \ref{TE} and \ref{TEgd}.
\end{proof}
\begin{proof}[Proof of \refT{Tbrown}]
\refT{Tbrown} follows from \refT{TX}\ref{TX>} and
Lemmas \ref{LB1}--\ref{LB1/2}, comparing the limits.
More precisely, this yields equality in distribution jointly for any finite
number of $\ga$,
which implies equality jointly
for all $\ga$ since
the distribution of $Y(\ga)$ in $\cH(H_+)$
is determined by the finite-dimensional distributions,
see \refSS{SSanalytic}.
\end{proof}
\section{The limit as $\ga\to\infty$}\label{Slim}
We introduce more notation.
As above, $\ex(t)$, $t\in\oi$, is a normalized Brownian excursion, and
$m(\ex;s,t)$ is defined by \eqref{m}. We further define
\begin{align}\label{mm'}
m(s):=m\bigpar{\ex;s,\tfrac12},&&&
m'(s):=m\bigpar{\ex;\tfrac12,1-s}
\end{align}
for $0\le s\le\frac12$; for convenience we extend $m$ and $m'$ to continuous
functions on $\ooo$ by defining $m(s)=m'(s):=m(\frac12)=\ex(\frac12)$ for
$s>\frac12$.
Furthermore,
\begin{itemize}
\item
$(B(t))$ is a standard Brownian motion on $[0,\infty)$.
\item
$(S(t):=\sup_{s\in[0,t]}B(s))$ is the corresponding supremum process.
\item
$(\tau(a):=\min\set{t: B(t)=a}, a\ge0)$ is the corresponding family of
hitting times.
\item
$(\bes(t))$ is a three-dimensional Bessel process on $[0,\infty)$,
i.e.,\ $(\bes(t))\eqd(|B^{(3)}(t)|)$, where $(B^{(3)}(t)=\bigpar{B_1(t),B_2(t),B_3(t)})$
is a three-dimensional Brownian motion (so $B_1$, $B_2$, $B_3$ are
three independent copies of $B$).
It is well known that a.s.\ $\bes(0)=0$, $\bes(s)>0$ for all $s>0$ and
$\bes(s)\to\infty$ as $s\to\infty$ \cite[\S VI.3]{RY}.
\item
$J(t):=\inf_{s\ge t} \bes(s)$, $t\ge0$, is the future minimum of $\bes$.
By Pitman's theorem \cite[VI.(3.5)]{RY}, as stochastic processes in $\coo$ we have
\begin{equation}
\label{pitman}
(J(t)) \eqd (S(t)).
\end{equation}
\item
$(J'(t), t\ge0)$ is an independent copy of the stochastic process $(J(t))$.
Similarly, $(S'(t))$ is an independent copy of $(S(t))$ and $(\tau'(a))$ is an
independent copy of $(\tau(a))$.
\end{itemize}
For notational convenience, we also define,
using \eqref{wb},
for $r>-1$,
\begin{equation}\label{WY}
W_r:=\iint_{0<s<t<1}(t-s)^rm(\ex;s,t)\dd s\dd t =\frac1{2(r+1)(r+2)}Y(r+2).
\end{equation}
The assertion
$\ga\qq Y(\ga)\dto \Yoo$ in \refT{Tlol}
is thus equivalent to $r^{5/2}W_r\dto\frac12\Yoo$ as \rtoo.
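To see this equivalence, take $\ga=r+2$ in \eqref{WY}; then
\begin{equation*}
\ga\qq Y(\ga)=2(r+1)(r+2)^{3/2}\,W_r\sim 2r^{5/2}W_r,
\qquad \rtoo.
\end{equation*}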
\begin{lemma}\label{Lmm}
As $r\to\infty$ we have jointly (\ie, bivariately for sequences of processes)
$r^{1/2} m(x/r)\dto J(x)$ and
$r^{1/2} m'(x/r)\dto J'(x)$ in $\coT$, for any $T<\infty$.
\end{lemma}
\begin{remark}
Convergence in $\coT$ for every fixed $T$ is equivalent to
convergence in $\coo$,
see \eg{} \cite[Proposition 16.6]{Kallenberg},
so the conclusion may as well be stated as
joint convergence in distribution in $\coo$.
\end{remark}
\begin{proof}
Let us first consider $m$.
We use the representation, see \eg{} \cite[II.(1.5)]{Blum},
\begin{equation}
\label{e}
\ex(t)\eqd (1-t)\bes\Bigpar{\frac{t}{1-t}}
\end{equation}
as processes on $[0,1)$. Hence, using Brownian scaling,
for $x\in[0,r)$ we have,
as processes,
\begin{equation}\label{ex}
r^{1/2} \ex(x/r)\eqd \Bigpar{1-\frac xr} r^{1/2} \bes\Bigpar{\frac{x/r}{1-(x/r)}}
\eqd \Bigpar{1-\frac xr} \bes\Bigpar{\frac{x}{1-(x/r)}}
\end{equation}
and thus, for $x\in[0,r/2]$,
\begin{equation}\label{mr}
r^{1/2} m(x/r)
\eqd \min_{x\le t\le r/2} \Bigpar{1-\frac tr} \bes\Bigpar{\frac{t}{1-(t/r)}}.
\end{equation}
Recall that a.s\punkt{} $\bes(t)\to\infty$ as \ttoo. Hence,
given $T$, we can choose a (random) $T_1\ge T$
such that $\bes(t)\ge 2\sup_{u\in[T,2T]} \bes(u)$
for all $t\ge T_1$. It follows that if $r\ge 2T_1$ and $T_1\le t\le r/2$,
then, since $T\le T/(1-(T/r))\le 2T$,
\begin{equation}
\Bigpar{1-\frac tr} \bes\Bigpar{\frac{t}{1-(t/r)}}
\ge \tfrac12 \cdot2\sup_{u\in[T,2T]} \bes(u)
\ge
\bes\Bigpar{\frac{T}{1-(T/r)}}.
\end{equation}
Hence, if $x\le T$ and $r\ge 2T_1$,
the minimum in \eqref{mr} equals the minimum over $x\le t\le T_1$.
Furthermore,
as \rtoo,
since $\bes$ is continuous,
\begin{equation}\label{zmr}
\min_{x\le t\le T_1} \Bigpar{1-\frac tr} \bes\Bigpar{\frac{t}{1-(t/r)}}
\to
\min_{x\le t\le T_1} \bes\xpar{t}
=\min_{x\le t<\infty}\bes(t)
=J(x)
\end{equation}
uniformly for $x\in [0,T]$, \ie{} in $\coT$.
Consequently,
\begin{equation}
\min_{x\le t\le r/2} \Bigpar{1-\frac tr} \bes\Bigpar{\frac{t}{1-(t/r)}}
\asto
J(x)
\end{equation}
in \coT,
and
\eqref{mr} implies
\begin{equation}\label{mx}
r^{1/2} m(x/r) \dto J(x)
\qquad \text{in }
\coT,
\end{equation}
which proves the assertion about $m$.
By symmetry also
\begin{equation}\label{m'x}
r^{1/2} m'(x/r) \dto J(x)\eqd J'(x)
\qquad \text{in }
\coT,
\end{equation}
since $(\ex(1-t)) \eqd (\ex(t))$ and
thus
$(m'(t)) \eqd (m(t))$ (as random functions in $\coo$).
It remains to prove joint convergence to independent limits.
Let
\begin{align}\label{hm}
m_1(s):=m(\ex; s,r^{-2/3}), &&&
m_1'(s):=m(\ex; 1-r^{-2/3},1-s)
\end{align}
(for $r$ with $s\le r^{-2/3}\le\frac12$).
We may assume that the left and right sides of \eqref{ex} are equal,
and
then $m(x/r)=m_1(x/r)$ whenever the minimum in \eqref{mr} equals the minimum
over $t\in[x,r\qqq]$; in particular, this holds if $x\le T$ and $r\qqq\ge
T_1$, with $T_1$ as defined above. (For large $r$, this implies $r\ge 2r\qqq\ge 2T_1$.) Consequently,
\begin{equation}
\P\bigpar{m(x/r)=m_1(x/r)
\text{ for all }x\in[0,T]}
\ge \P\bigpar{T_1\le r\qqq} \to1
\end{equation}
as \rtoo. By symmetry, also
\begin{equation}\label{macbeth}
\P\bigpar{m'(x/r)=m_1'(x/r) \text{ for all }x\in[0,T]}
\to1.
\end{equation}
Next, we may assume $\bes(t)=|B^{(3)}(t)|$ and that equality holds
in \eqref{e}.
Define the modification
$\tbes(t):=|B^{(3)}(t)-B^{(3)}(1)|$ and the corresponding
$\tex(t):=(1-t)\tbes\bigpar{t/(1-t)}$
and
$\tilde m_1'(s):=m(\tex; 1-r^{-2/3},1-s)$.
Then $|\tbes(t)-\bes(t)|\le |B^{(3)}(1)|$ for all $t$, and thus
$|\tex(t)-\ex(t)|\le(1-t)|B^{(3)}(1)|$
and
$ |\tilde m_1'(s)-m_1'(s)|\le r^{-2/3}|B^{(3)}(1)|.$
Consequently, assuming $r\qqq\ge T$,
\begin{equation}\label{banquo}
\sup_{x\le T} |r^{1/2}\tilde m_1'(x/r)-r^{1/2} m_1'(x/r)|\le r^{-1/6}|B^{(3)}(1)| \pto0.
\end{equation}
Let $\rho$ denote the metric in \coT. By \eqref{macbeth} and \eqref{banquo},
\begin{equation}\label{cawdor}
\rho\bigpar{r^{1/2}\tilde m_1'(x/r),r^{1/2} m'(x/r)}\pto0
\end{equation}
as \rtoo. Thus by \eqref{m'x},
\begin{equation}\label{thm'x}
r^{1/2} \tilde m_1'(x/r) \dto J(x)
\qquad \text{in }
\coT.
\end{equation}
Now, for $x\le T$ and large $r$, $\tilde m_1'(x/r)$ depends only on $\tex(t)$
for $t\ge\frac12$, and thus on $\tbes(t)$ for $t\ge1$. However,
$\bigpar{\tbes(t)=|B^{(3)}(t)-B^{(3)}(1)|,\, t\ge1}$ is independent of
$\bigpar{\bes(t)=|B^{(3)}(t)|,\,t\le1}$,
and thus of
$\bigpar{\ex(t),\,t\le\frac12}$ and of
$\bigpar{m(s),\, s\le\frac12}$.
Consequently, we can combine \eqref{mx} and \eqref{thm'x} to
\begin{equation*}
\bigpar{r^{1/2} m(x/r), r^{1/2} \tilde m_1'(x/r)} \dto \bigpar{J(x),J'(x)}
\qquad \text{in } \coT\times\coT,
\end{equation*}
with independent limits $(J(x))$ and $(J'(x))$.
Finally, the result follows by using \eqref{cawdor} again.
\end{proof}
\begin{lemma}\label{LW}
As $r\to\infty$,
\begin{equation}
\label{a}
r^{5/2}W_r\dto \Woo:=\iint_{x,y>0} e^{-x-y}\bigpar{J(x)\bmin J'(y)}\dd x\dd y.
\end{equation}
\end{lemma}
\begin{proof}
Note first that for some constant $c$ (in fact, $c=\E|B^{(3)}(1)|=\sqrt{8/\pi}$),
$\E \bes(t) = ct\qq$. Hence, $\E J(x)\le\E \bes(x)= cx\qq$ and
\begin{equation*}
\E \iint_{x,y>0} e^{-x-y}\bigpar{J(x)\bmin J'(y)}\dd x\dd y
\le \iint_{x,y>0} e^{-x-y}cx\qq\dd x\dd y
<\infty.
\end{equation*}
Consequently, the double integral in \eqref{a} converges a.s.
If $s\le \frac12\le t$, then by \eqref{mm'}, $m(\ex;s,t)=m(s)\land m'(1-t)$.
Noting this, we define a truncated version of $W_r$ by,
for $r\ge 2T$ and substituting $s=x/r$ and $t=1-y/r$,
\begin{equation}\label{wrt}
\begin{split}
W_r^T&:=\iint_{\substack{0<s<T/r\\ 1-T/r<t<1}} (t-s)^rm(\ex; s,t)\dd s\dd t
\\
&\phantom: =r^{-2}\int_0^T\!\int_0^T\!
\Bigpar{1-\frac{y}{r}-\frac{x}{r}}^r\bigpar{m(x/r)\bmin m'(y/r)}
\dd x\dd y.
\end{split}
\end{equation}
Since $\bigpar{1-\frac{y}{r}-\frac{x}{r}}^r\to e^{-y-x}$ uniformly for
$x,y\in[0,T]$ as \rtoo,
it follows from \refL{Lmm} and the continuous mapping theorem that for each
fixed $T<\infty$, as \rtoo\ we have
\begin{equation}\label{wtoo}
\begin{split}
r^{5/2}W_r^T&
\dto \Woo^T :=\int_0^T\int_0^T
e^{-x-y}\bigpar{J(x)\bmin J'(y)} \dd x\dd y.
\end{split}
\end{equation}
Furthermore, $\Woo^T\to \Woo$ a.s\punkt{} as $T\to\infty$.
Moreover, by \eqref{e},
$\E\ex(t)=ct\qq(1-t)\qq$ and thus $\E m(\ex;s,t)\le \E \ex(s) \le c s^{1/2}$.
Hence, for $r\ge 2T>0$, and again with the substitutions $s=x/r$ and $t=1-(y/r)$, we have
\begin{equation}\label{wwrt}
\begin{split}
\E (W_r-W_r^T)&=\iint_{\set{T/r<s<t<1}\cup\set{0<s<t<1-T/r}} (t-s)^r
\E m(\ex;s,t)\dd s\dd t
\\
&\le r^{-2}\iint_{[0,r)^2\setminus[0,T]^2}
\Bigpar{1-\frac{y}{r}-\frac{x}{r}}_+^r c\parfrac{x}{r}^{1/2}
\dd x\dd y
\\
&\le cr^{-5/2}\iint_{[0,\infty)^2\setminus[0,T]^2}
e^{-x-y} x^{1/2}
\dd x\dd y.
\end{split}
\end{equation}
Hence,
\begin{equation}
\limsup_{\rtoo}\E|r^{5/2}W_r-r^{5/2}W_r^T | \to 0
\end{equation}
as $T\to\infty$. This shows, by \cite[Theorem 4.2]{Billingsley}
or \cite[Theorem 4.28]{Kallenberg} again,
that we can let $T\to\infty$ inside \eqref{wtoo} and obtain the conclusion
\eqref{a}.
\end{proof}
\begin{proof}[Proof of \refT{Tlol}]
By \eqref{WY}, \refL{LW} can be written
\begin{equation}
\ga\qq Y(\ga)\dto \Yoo:=2\Woo
\end{equation}
as \gatoo. We now give some equivalent expressions for the limit.
First, by \eqref{pitman},
\begin{equation}\label{yoo}
\begin{split}
\Yoo\eqd 2\intoo\!\intoo\!
e^{-x-y}\bigpar{S(x)\land S'(y)} \dd x\dd y.
\end{split}
\end{equation}
Secondly, note that $\tau(a)\le x\iff S(x)\ge a$; thus $\tau$ and $S$ are
inverses of each other.
Similarly, we may assume that $\tau'$ is the inverse of $S'$.
By Fubini's theorem,
\begin{align}
\Yoo
&\eqd2 \int_0^\infty\!\int_0^\infty\! e^{-x-y} \bigpar{S(x)\bmin S'(y)} \dd x\dd y
\notag\\
&= 2\iiint_{0\le s\le S(x) \land S'(y)} e^{-x-y} \dd s \dd x \dd y
\notag\\
&= 2\iiint_{ \tau(s)\le x,\, \tau'(s)\le y} e^{-x-y} \dd x \dd y \dd s
\notag\\
&= 2\int_0^\infty e^{-\tau(s)-\tau'(s)} \dd s.
\end{align}
However, $(\tau(s))$ and $(\tau'(s))$ are independent processes with independent
increments, and thus $(\tau(s)+\tau'(s))$ has independent increments.
Furthermore, for each fixed $s$,
$\tau(2s)-\tau(s)\eqd\tau(s)$ and is independent of $\tau(s)$, and
hence $\tau(s)+\tau'(s)\eqd\tau(2s)$.
It follows that the stochastic process $(\tau(s)+\tau'(s))$
equals in distribution $(\tau(2s))$.
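(Alternatively, for the one-dimensional marginals, \eqref{est} and independence
give $\E e^{-u\bigpar{\tau(s)+\tau'(s)}}=e^{-2s\sqrt{2u}}=\E e^{-u\tau(2s)}$
for all $u,s\ge0$.)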
Hence, we also have the representation
\begin{equation}\label{ytau}
\Yoo
\eqd 2\int_0^\infty e^{-\tau(2s)} \dd s
= \int_0^\infty e^{-\tau(s)} \dd s.
\end{equation}
The same Fubini argument in the opposite direction now gives
\begin{equation}
\begin{split}
\Yoo
&\eqd
\int_0^\infty e^{-\tau(s)} \dd s
=\iint_{ x \ge \tau(s)} e^{-x} \dd x \dd s
\\&
= \iint_{0\le s\le S(x)} e^{-x} \dd s \dd x
= \intoo e^{-x} S(x) \dd x.
\end{split}
\end{equation}
This shows \eqref{Z}.
It remains to calculate the moments of $\Yoo$.
For integer moments we use \eqref{ytau}.
Recall,
see \eg{} \cite[Proposition II.3.7 and Sections III.3--4]{RY},
that $\tau(s)$ is a stable process with stationary independent
increments and
\begin{equation}\label{est}
\E \expx{-s \tau(t)} = \expx{-t\sqrt{2s}}, \qquad s,t\ge0.
\end{equation}
Define $\gdtau(s,s'):=\tau(s')-\tau(s)$.
Then, by symmetry and the change of variables
$t_1=s_1$, $t_2=s_2-s_1$, \dots, $t_k=s_k-s_{k-1}$,
noting that the increments $\gdtau(s_{i-1},s_i)$ are independent
and $\gdtau(s_{i-1},s_i)\eqd\tau(t_i)$ (with $s_0=0$), we have
\begin{align}\label{zzbx}
\E \Yoo^k
&= k! \int_{0<s_1<s_2<\dots< s_k} \E e^{-\tau(s_1)-\dots-\tau(s_k)}
\dd s_1
\dotsm \dd s_k
\notag\\
&= k! \int_{0<s_1<s_2<\dots< s_k}
\E e^{-k\gdtau(0,s_1)-(k-1)\gdtau(s_1,s_2)- \dots-\gdtau(s_{k-1},s_k)}
\dd s_1 \dotsm \dd s_k
\notag\\
&= k! \int_{t_1,\dots,t_k>0}
\E e^{-k\tau(t_1)}\E e^{-(k-1)\tau(t_2)}\dotsm \E e^{-\tau(t_k)}
\dd t_1\dotsm \dd t_k
\notag\\
&= k! \int_{t_1,\dots,t_k>0}
e^{-t_1\sqrt{2k}-t_2\sqrt{2(k-1)}-\dots-t_k\sqrt2}
\dd t_1\dotsm \dd t_k
\notag\\
&= k! \prod_{j=1}^k \frac{1}{\sqrt{2j}}
= \frac{k!}{2^{k/2}(k!)^{1/2}}
= 2^{-k/2} k!^{1/2},
\end{align}
which is \eqref{EZk}.
In order to extend this to non-integer moments,
let
\begin{equation}
\label{ZZ}
Z:=\log\Yoo+\frac12\log2,
\end{equation}
and let $Z'$ be an independent copy of $Z$.
Then, for integer $k\ge1$,
\begin{equation}
\begin{split}
\E \bigpar{\bigpar{e^{Z+Z'}}^k}
=\E{e^{k(Z+Z')}}
=\bigpar{\E e^{kZ}}^2
= \bigpar{2^{k/2}\E\Yoo^k}^2=k!,
\end{split}
\end{equation}
and thus $\EE:=e^{Z+Z'}\sim\Exp(1)$, since an exponential distribution is
determined by its moments. Hence, for any real $r>-1$,
\begin{equation}\label{EIIr}
\begin{split}
\bigpar{\E e^{rZ}}^2
=\E e^{{rZ+rZ'}}
=
\E \EE^r
=\intoo x^r e^{-x}\dd x=\gG(r+1),
\end{split}
\end{equation}
and thus
$\E e^{rZ}=\sqrt{\gG(r+1)}$.
Since $e^{rZ}=2^{r/2}\Yoo^r$, \eqref{EZr} follows, for real~$r$.
Finally, \eqref{EZr} is extended to complex $r$ by analytic continuation,
or by \eqref{EIIr} again, now knowing that the expectations exist.
\end{proof}
\begin{remark}
The characteristic function $\gf_Z(t)$
of the random variable $Z$ in \eqref{ZZ}
is thus $\gG(1+\ii t)\qq$, which decreases exponentially as $t\to\pm\infty$;
hence
$Z$ has by Fourier inversion a continuous density
\begin{align}
f_Z(x)=\frac{1}{2\pi}\intoooo e^{-\ii tx}\gf_Z(t)\dd t
=\frac{1}{2\pi}\intoooo e^{-\ii tx}\gG(1+\ii t)\qq\dd t
,\end{align}
see \eg{}
\cite[Theorem XV.3.3]{FellerII}; furthermore, by a standard argument,
we may differentiate repeatedly under the integral sign, and thus
the density function $f_Z(x)$ is infinitely differentiable.
(In fact, it follows from Stirling's formula that $\gf_Z(t)=\gG(1+\ii t)\qq$
belongs to the Schwartz class $\cS(\bbR)$
of infinitely differentiable functions such
that every derivative decreases faster than $|x|^{-k}$ for any $k<\infty$;
hence $f_Z\in\cS(\bbR)$, see \cite[Theorem 25.1]{Treves}.)
Consequently, also $\Yoo$ is absolutely continuous, with a density $f_Y(x)$
that is
infinitely differentiable on $(0,\infty)$.
Results on the asymptotics of the density function $f_Y(x)$ of $\Yoo$ as
$x\to0$ and $x\to\infty$ are given in \cite{FillK04}.
\end{remark}
\begin{remark}\label{Rid}
$2\qq\Yoo$ has moments $\sqrt{k!}$, and it follows that if $\Yoo'$ is
an independent copy of $\Yoo$, then $2\Yoo\Yoo'$ has moments $k!$ and
$2\Yoo\Yoo'\sim\Exp(1)$. Hence, the distribution of
$2\qq\Yoo$ is a ``square root'' of $\Exp(1)$, in the sense of taking
products of independent variables.
Moreover, if we let $(\tau(s))$ be another stable subordinator, with
$\E e^{-s\tau(t)}=e^{-ts^\gam}$ ($0<\gam<1$) instead of \eqref{est},
then \eqref{ytau} defines by the same calculations a random
variable $\ygam$ with
\begin{equation}
\E \ygam^k=(k!)^{1-\gam}.
\end{equation}
In particular, choosing $\gam=1-(1/m)$, we obtain an $m^{\rm th}$ root of the
exponential distribution $\Exp(1)$.
Recalling that $\EE \sim \Exp(1)$ and taking logarithms, this shows that $\log
\EE$ is infinitely divisible, and thus the same holds for $-\log\EE$, which
has a Gumbel distribution.
This has been known for a long time, and a calculation shows that $-\log\EE$
has a L\'evy{} measure with a density $\sum_{j=1}^\infty e^{-jx}/x=x\qw(e^x-1)\qw$,
$x>0$; see, \eg{},
\cite[Examples 11.1 and 11.10]{Steutels}.
See also \cite[Example 7.2.3]{Bondis}.
\end{remark}
\section{Extensions to $\rea=\frac12$} \label{Smua}
\CCreset
In this section, we show the extensions to $\rea=\frac12$ claimed in
Remarks \ref{Rmua}, \ref{RTE1/2}, and \ref{RT1/2X}. These require
different methods from the ones used above.
Let $\gf(t):=\E e^{\ii t\xi}$ be the characteristic function{} of the offspring distribution
$\xi$. Furthermore, let $\txi:=\xi-\E\xi=\xi-1$, and denote its characteristic function{} by
\begin{equation}\label{tgf}
\tgf(t):=\E e^{\ii t\txi}=\emit\gf(t),
\qquad t\in\bbR.
\end{equation}
Since $\E\txi=0$ and $\E\txi^2=\gss$, we have
$\tgf(t)=1-\frac{\gss}2t^2+o(t^2)$; hence
\begin{equation}
\label{pyr}
\tgf(t)=1-\frac{\gss}2t^2\bigsqpar{1+\gam(t)},
\qquad t\in\bbR,
\end{equation}
for some continuous
function $\gam(t)$ on $\bbR$ such that $\gam(0)=0$.
We also let
\begin{equation}\label{rho}
\rho(t):=1-\tgf(t)=\frac{\gss}2t^2\bigsqpar{1+\gam(t)}.
\end{equation}
Since $\xi$ is integer-valued, $\gf$ and $\tgf$ are $2\pi$-periodic.
Note that
$\P(\txi=-1)>0$ and thus
$\tgf(t)\neq1$ if $0<|t|\le\pi$ [also when
$\spann\xi>1$]; hence \eqref{pyr} and continuity imply
\begin{equation}\label{fitr}
\Re\rho(t)=1-\Re\tgf(t)\ge\cc t^2, \ccdef\ccft
\qquad 0\le|t|\le\pi,
\end{equation}
for some $\ccx>0$.
Furthermore, if $\spann\xi=h\ge1$, then $\gf(\pm2\pi/h)=1$ but
$|\gf(t)|<1$ for $0<|t|<2\pi/h$, and it follows
similarly from \eqref{pyr} and continuity that
\begin{equation}\label{fir}
|\tgf(t)|=|\gf(t)|\le 1-\cc t^2 \le e^{-\ccx t^2}, \ccdef\ccf
\qquad 0\le|t|\le\pi/h.
\end{equation}
\begin{lemma}
\label{Lpyret}
If\/ $\rea<\frac12$, then
\begin{equation}\label{pyr1}
\mu(\ga) = \frac{1}{2\pi\gG(1-\ga)}\intpi\intoo
x^{-\ga}\frac{\gf(t)}{e^x-\tgf(t)}\dd x\dd t,
\end{equation}
where the double integral is absolutely convergent.
\end{lemma}
\begin{proof}
Let $\rea<\frac12$.
Fourier inversion and \eqref{mua} yield
\begin{equation}\label{pyr2}
\mu(\ga)
= \sum_{n=1}^\infty n^{\ga-1} \intT e^{-\ii(n-1)t}\gf(t)^n\dd t
= \sum_{n=1}^\infty n^{\ga-1} \intT e^{\ii t}\tgf(t)^n\dd t.
\end{equation}
Let $\spann\xi=h\ge1$.
It follows from the estimate \eqref{fir} that
\begin{equation}\label{pq}
\intpi|\tgf(t)|^n\dd t
={h} \int_{-\pi/h}^{\pi/h}|\tgf(t)|^n\dd t
\le {h} \intoooo e^{-\ccf n t^2}\dd t
= \CC n\qqw.\CCdef\CCpq
\end{equation}
Hence,
\begin{equation}\label{pq1}
\sum_{n=1}^\infty \intpi \bigabs{n^{\ga-1}\eit \tgf(t)^n}\dd t
\le\CCpq\sum_{n=1}^\infty n^{\rea-\frac32}<\infty.
\end{equation}
Thus we may interchange the order of summation and integration in \eqref{pyr2}
and obtain
\begin{equation}\label{pyr3}
\mu(\ga)
= \intT\eit\sum_{n=1}^\infty n^{\ga-1} \tgf(t)^n\dd t.
\end{equation}
The sum $\sum_{n=1}^\infty n^{\ga-1} \tgf(t)^n$ is known as the polylogarithm
$\Li_{1-\ga}(\tgf(t))$ \cite[\S25.12(ii)]{NIST}.
It can be expressed as an
integral \cite[25.12.11]{NIST}
by a standard argument, which we adapt as follows:
Since $\rea<\frac12<1$,
we have
$
n^{\ga-1}\gG(1-\ga)
= \intoo x^{-\ga} e^{-nx}\dd x
$
and thus \eqref{pyr3} yields
\begin{equation}
\gG(1-\ga)\mu(\ga)
= \intT\sum_{n=1}^\infty\intoo x^{-\ga}e^{-nx} \tgf(t)^n\eit \dd x\dd t.
\end{equation}
Again, this expression is absolutely convergent as a consequence of
\eqref{pq} and \eqref{pq1},
and thus we may again interchange the order of summation and integration
and obtain
\begin{equation}\label{pyr18}
\begin{split}
\gG(1-\ga)\mu(\ga)
= \intT\intoo x^{-\ga}\sum_{n=1}^\infty e^{-nx} \tgf(t)^n\eit \dd x\dd t
\\
= \intT\intoo x^{-\ga}\frac{e^{-x}\tgf(t)}{1- e^{-x} \tgf(t)}\eit \dd x\dd t.
\end{split}
\end{equation}
This yields \eqref{pyr1}, with absolute convergence.
\end{proof}
We next modify \eqref{pyr1} by ignoring terms that are analytic at
$\rea=\frac12$; more precisely, we ignore terms that are analytic in
$\doi:=\set{\ga:0<\Re \ga<1}$.
\begin{lemma}
\label{LP2}
There exists a function $h(\ga)\in\cH(\doi)$ such that
if $0<\rea<\frac12$, then
\begin{equation}\label{pyrr}
\mu(\ga)=
\frac{\gG(\ga)}{2\pi}\intpi \rho(t)^{-\ga}\dd t
+h(\ga).
\end{equation}
\end{lemma}
\begin{remark}\label{RP2}
Since $\rho(t)\neq0$ for $0<|t|\le\pi$, the integral
$\int_{t_0\le |t|\le\pi}\rho(t)^{-\ga}\dd t$ is an entire function of $\ga$
for any
$t_0\in(0,\pi]$, and thus the integral in \eqref{pyrr} can be replaced by
the integral over $|t|\le t_0$ for any such $t_0$.
\end{remark}
\begin{proof}[Proof of \refL{LP2}]
First, for $x\ge1$ and $\rea>0$, the integrand in \eqref{pyr1} is
$O(e^{-x})$ so the double integral over $\set{x\ge1,\,t\in(-\pi,\pi)}$
converges and defines an analytic function $h_1\in\cHoi$.
We may thus consider the integral for $0<x<1$ only.
Next, using \eqref{rho} and \eqref{fitr}, for $x>0$ we have
\begin{equation}\label{pl}
\bigabs{e^x-\tgf(t)}
\ge \Re \bigpar{e^x-\tgf(t)}
=e^x-1+\Re\rho(t)
\ge x+\ccft t^2.
\end{equation}
Hence, using $|\gf(t)-1|\le \CC t$
(since $\E\xi<\infty$),
\begin{equation}\label{pk}
\intpi\intoi
\Bigabs{ x^{-\ga}\frac{\gf(t)-1}{e^x-\tgf(t)}}\dd x\dd t
\le
\CC
\intpi\intoi
x^{-\rea}\frac{|t|}{x+t^2}\dd x\dd t.
\end{equation}
Now, for $0<x<1$,
\begin{equation}\label{pkb}
\int_0^\pi \frac{t}{x+t^2}\dd t
\le \int_0^{\sqrt x}\frac{t}x\dd t +\int_{\sqrt x}^\pi\frac{t}{t^2}\dd t
=\frac12+\log\pi-\log\sqrt x =O(1+|\log x|)
\end{equation}
and thus \eqref{pk} converges for $\rea<1$.
It follows that if we replace the numerator $\gf(t)$ by 1 in \eqref{pyr1}
(with $x<1$ only), then the difference is in $\cHoi$.
Similarly, for $0<x<1$ and $|t|\le\pi$,
\begin{equation}\label{fita}
\Bigabs{\frac{1}{e^x-\tgf(t)}-\frac{1}{x+1-\tgf(t)}}
\le \frac{e^x-1-x}{(x+\ccft t^2)^2}
\le 1,
\end{equation}
and we may thus also replace the denominator $e^x-\tgf(t)$ by
$x+1-\tgf(t)=x+\rho(t)$.
This yields
\begin{equation}
\mu(\ga) = \frac{1}{2\pi\gG(1-\ga)}\intpi\intoi
x^{-\ga}\frac{1}{x+\rho(t)}\dd x\dd t+h_2(\ga),
\end{equation}
with $h_2\in\cHoi$.
We now reintroduce $x\ge1$, noting that $\Re\rho(t)\ge0$ and thus,
for $\rea>0$,
\begin{equation}
\intpi\int_1^\infty \Bigabs{\frac{x^{-\ga}}{x+\rho(t)}}\dd x\dd t
\le 2\pi \int_1^\infty x^{-\rea-1}\dd x<\infty.
\end{equation}
Hence, for $\ga\in\doi$,
\begin{equation}
\mu(\ga) = \frac{1}{2\pi\gG(1-\ga)}\intpi\intoo
\frac{x^{-\ga}}{x+\rho(t)}\dd x\dd t+h(\ga),
\end{equation}
with $h\in\cHoi$, and \eqref{pyrr} follows by a standard beta integral:\ for
$0<\rea<1$ and $\rho\notin(-\infty,0]$ we have
\begin{equation}
\begin{split}
\intoo \frac{x^{-\ga}}{x+\rho}\dd x
&=\rho^{-\ga} \intoo \frac{x^{-\ga}}{x+1}\dd x
=\rho^{-\ga}B(1-\ga,\ga)
\\&
=\rho^{-\ga}\gG(1-\ga)\gG(\ga),
\end{split}
\end{equation}
where the first equality holds for all $\rho>0$ by a change of variables and
therefore for all $\rho\notin(-\infty,0]$ by analytic continuation.
\end{proof}
Recall the function $\gam(\ga)$ defined by \eqref{pyr}.
\begin{lemma}
\label{LG}
For any $a>0$ we have
\begin{equation}
\intoo \frac{|\gam(at)-\gam(t)|}t\dd t<\infty.
\end{equation}
\end{lemma}
\begin{proof}
By \eqref{pyr}, recalling $\E\txi=0$ and $\E\txi^2=\gss$, we have
\begin{equation}\label{fitb}
-\frac{\gss t^2}2\gam(t) = \tgf(t)-1+\frac{\gss t^2}2
=\E e^{\ii t\txi}-1-\E(\ii t\txi)-\frac12\E\xpar{\ii t\txi}^2.
\end{equation}
Define
\begin{align}
\psi_1(x)&:=e^{\ii x}-1-\ii x, \label{psi1}
\\
\psi_2(x)&:=e^{\ii x}-1-\ii x-\tfrac12(\ii x)^2.\label{psi2}
\end{align}
Then \eqref{fitb} implies
\begin{equation}
\gam(t)=-\frac{2}{\gss t^2}\E \psi_2(t\txi)
\end{equation}
and thus
\begin{align}
\gam(at)- \gam(t)
=\frac{2}{\gss t^2}\E \Bigsqpar{\psi_2(t\txi)-\frac{1}{a^2}\psi_2(at\txi)}.
\label{win1}
\end{align}
Fix $a>0$. Taylor's formula yields the standard estimate
$|\psi_2(x)|\le |x|^3$, and thus
\begin{equation}
|\psi_2(x)-a\qww\psi_2(ax)|\le C|x|^3.
\end{equation}
Furthermore,
$\psi_2(x)-a\qww\psi_2(ax)=\psi_1(x)-a\qww\psi_1(ax)$
by cancellation, and
$|\psi_1(x)|\le2|x|$ and thus
\begin{equation}
|\psi_1(x)-a\qww\psi_1(ax)|\le C|x|.
\end{equation}
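Here the cancellation is immediate from \eqref{psi1}--\eqref{psi2}: since $\psi_2(x)=\psi_1(x)+x^2/2$, the quadratic terms cancel,
\begin{equation*}
\psi_2(x)-a\qww\psi_2(ax)
=\Bigpar{\psi_1(x)+\frac{x^2}2}-\Bigpar{a\qww\psi_1(ax)+\frac{x^2}2}
=\psi_1(x)-a\qww\psi_1(ax).
\end{equation*}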
Consequently,
\begin{equation}\label{win2}
|\psi_2(x)-a\qww\psi_2(ax)|\le C\bigpar{|x|\land|x|^3}.
\end{equation}
Combining \eqref{win1} and \eqref{win2} we obtain,
for $t\neq0$,
\begin{equation}
|\gam(at)-\gam(t)|\le Ct^{-2} \E\bigpar{|t\txi|\land|t\txi|^3}.
\end{equation}
Hence,
\begin{equation*}
\begin{split}
\intoo\frac{|\gam(at)-\gam(t)|}t\dd t
&\le C \intoo \E\bigpar{|t\qww\txi|\land|\txi|^3}\dd t
\\&
= C \E\biggpar{\int_0^{|\txi|\qw}|\txi|^3\dd t
+ \int_{|\txi|\qw}^\infty t^{-2}|\txi|\dd t}
\\&
= C\E\bigpar{|\txi|^2+|\txi|^2} = 2 C \gss <\infty.
\end{split}
\qedhere
\end{equation*}
\end{proof}
\begin{remark}
\refL{LG} and its proof hold
with $\txi$ replaced by
any random variable $X$ with $\E X=0$ and
$\E X^2<\infty$.
\end{remark}
\begin{remark}
Note, in contrast, that the integral $\intoi |\gam(t)|t\qw \dd t$ may
diverge; hence some cancellation is essential in \refL{LG}.
In fact, it is not difficult to show, using similar arguments, that
$\intoi |\gam(t)|t\qw \dd t<\infty$ if and only if $\E \txi^2 {\log|\txi|}<\infty$.
(Since $\gam(t)\to-1$ as $t\to\infty$, we cannot here integrate to $\infty$.)
\end{remark}
The function $\mu(\ga)$ is defined by \eqref{mu} for $\rea<\frac12$. As
noted at~\eqref{aaa},
$\mu(\ga)\to\infty$ as $\ga\upto\frac12$. However, $\mu(\ga)$ has a
continuous extension to all other points on the line $\rea=\frac12$.
\begin{theorem}
\label{TM}
The function $\mu(\ga)$ has a continuous extension to the set
$\set{\ga:\rea\le\frac12}\setminus\set{\frac12}$.
\end{theorem}
\begin{proof}
For $0<s\le\pi$ and $\rea<\frac12$, let
\begin{equation}\label{f}
f_s(\ga):=\Bigpar{\frac{\gss}2}^\ga\int_{-s}^s\rho(t)^{-\ga}\dd t
=\int_{-s}^s t^{-2\ga}\bigsqpar{1+\gam(t)}^{-\ga}\dd t.
\end{equation}
Let $a>0$ and let $s_0:=\pi/(1\vee a)$.
Then, for $0<s\le s_0$, we have
\begin{equation}\label{fa}
f_{a s}(\ga)
= \int_{-as}^{as} t^{-2\ga}\bigsqpar{1+\gam(t)}^{-\ga}\dd t
= a^{1-2\ga}\int_{-s}^{s} t^{-2\ga}\bigsqpar{1+\gam(at)}^{-\ga}\dd t.
\end{equation}
Fix $B<\infty$ and let $\DB:=\set{\ga:0\le\rea<\frac12,\,|\Im\ga|\le B}$.
By \eqref{f} and \eqref{fa},
uniformly for $\ga\in \DB$, noting that $1+\gam(t)\neq0$ for $0 < |t|\le\pi$ by
\eqref{pyr}, we have
\begin{equation}\label{winn}
\begin{split}
\bigabs{a^{2\ga-1}f_{as}(\ga)-f_s(\ga)}
&\le
\int_{-s}^s\bigabs{(1+\gam(at))^{-\ga}-(1+\gam(t))^{-\ga}} |t^{-2\ga}|\dd t
\\&
\le C\int_{-s}^s|\gam(at)-\gam(t)| t^{-2\rea}\dd t
\\&
\le C\int_{0}^s|\gam(at)-\gam(t)| t^{-1}\dd t,
\end{split}
\end{equation}
which tends to 0 as $s\to0$ by \refL{LG}.
Let
\begin{equation}
F_s(\ga):=a^{2\ga-1}\bigpar{f_\pi(\ga)-f_{as}(\ga)}
-\bigpar{f_\pi(\ga)-f_s(\ga)}.
\end{equation}
We have just shown in \eqref{winn} that as $s\to0$ we have
\begin{equation}\label{Fs}
F_s(\ga)\to\bigpar{a^{2\ga-1}-1}f_\pi(\ga)
\end{equation}
uniformly in $\DB$.
For $s\in (0,s_0]$, $F_s(\ga)$ is an entire function, see \refR{RP2}, and
in particular continuous on $\bDB$.
Hence, the sequence $F_{1/n}(\ga)$, which is uniformly convergent on $\DB$
by \eqref{Fs}, is a Cauchy sequence in $C(\bDB)$, and thus converges
uniformly on $\bDB$ to some continuous limit.
Together with \eqref{Fs} again, this shows that
$\xpar{a^{2\ga-1}-1}f_\pi(\ga)$ has a continuous extension to $\bDB$.
This holds for any $a>0$. We now choose $a=e^{1/B}$; then $a^{2\ga-1}=e^{(2\ga-1)/B}$,
which equals $1$ only if $\ga=\frac12+\pi\ii k B$ for some integer $k$, and for
$\ga\in\bDB$ this forces $\ga=\frac12$. Hence $a^{2\ga-1}\neq1$
in $\bDB\xq$, and thus $f_{\pi}(\ga)$ has a continuous extension to $\bDB\xq$.
Since $B$ is arbitrary, this shows that
$f_\pi(\ga)$ has a continuous extension to
$\set{\ga:0\le\rea\le\frac12}\xq$.
Finally, the definition \eqref{f} shows that the same holds for
$\intpi \rho(t)^{-\ga}\dd t$, and the result follows by
\refL{LP2}.
\end{proof}
In the sequel, $\mu(\ga)$ is defined for $\rea=\frac12$, $\ga\neq\frac12$, as
this continuous extension.
\begin{theorem}
\label{T1/2}
\begin{thmenumerate}
\item \label{T1/2E}
The estimate \eqref{te-} in \refT{TE}\ref{TE-} holds also for
$\ga=\frac12+\ii y$, $y\neq0$.
Moreover, \eqref{te-} holds uniformly on compact subsets of
$\set{\ga:-\frac12<\rea\le\frac12}\xq$.
\item \label{T1/2X}
The limit result \eqref{tx<} in \refT{TX}\ref{TX<} holds also for
$\ga=\frac12+\ii y$, $y\neq0$.
Moreover, \eqref{tx<} holds
in the space $C(\dbmx)$ of continuous functions on
the set
$\dbmx:=\set{\ga:0<\rea\le\frac12}\xq$.
\end{thmenumerate}
\end{theorem}
The topology in $C(\dbmx)$ is defined by uniform convergence on compact
subsets of $\dbmx$.
\begin{proof}
Part \ref{T1/2X} follows by \refT{T1} and \ref{T1/2E}, so it suffices to
prove \ref{T1/2E}.
In this proof, let
$\db:=\set{\ga:0<\rea<\frac{3}4}$,
$\dbm:=\set{\ga:0<\rea<\frac12}$,
and, for $B>0$,
$\dbmB:=\set{\ga\in\dbm:|\Im\ga|\le B}$,
$\dbmxB:=\set{\ga\in\dbmx:|\Im\ga|\le B}$.
By \eqref{mua} and \eqref{mun}, for $\rea<\frac12$ we have
\begin{equation}
\mu(\ga)-\mu_n(\ga)=\sum_{k=n+1}^\infty k^{\ga-1}\P(S_k=k-1).
\end{equation}
Imitating the proof of \refL{Lpyret} we obtain, \cf{} \eqref{pyr18}, for
$\ga\in\dbm$,
\begin{equation}\label{pyrx0}
\begin{split}
\gG&(1-\ga)\bigpar{\mu(\ga)-\mu_n(\ga)}
= \intT\intoo x^{-\ga}\sum_{k=n+1}^\infty e^{-kx} \tgf(t)^k\eit \dd x\dd t
\\&
= \intT\intoo x^{-\ga}\frac{e^{-(n+1)x}\tgf(t)^{n+1}}{1- e^{-x} \tgf(t)}\eit
\dd x\dd t
\\&
= \intT\intoo x^{-\ga}\frac{e^{-nx}\tgf(t)^{n}}{e^{x}- \tgf(t)}\gf(t)
\dd x\dd t
\end{split}
\raisetag\baselineskip
\end{equation}
and thus, by the change of variables $x\mapsto x/n$, $t\mapsto\tqn$, we have
\begin{align}\label{pyrx1}
f_n(\ga)&:=
n^{\frac12-\ga}
\gG(1-\ga)\bigpar{\mu(\ga)-\mu_n(\ga)}
\\&\phantom:
=\frac{1 }{2\pi}
\intpm{\pi\sqrt n}\intoo x^{-\ga}
\frac{e^{-x}\tgf(t/\sqrt n)^{n}}{n[e^{x/n}- \tgf(\tqn)]} \gf(\tqn)\dd x\dd t.
\label{pyrx2}
\end{align}
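In the second equality of \eqref{pyrx0} we summed a geometric series; this is justified since $|e^{-x}\tgf(t)|\le e^{-x}<1$ for $x>0$, whence
\begin{equation*}
\sum_{k=n+1}^\infty e^{-kx} \tgf(t)^k
=\frac{e^{-(n+1)x}\tgf(t)^{n+1}}{1- e^{-x} \tgf(t)}.
\end{equation*}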
Denote the integrand in \eqref{pyrx2} by $\gnaxt$, and let
this define $\gnaxt$ for any $\ga\in\db$.
Note that for any fixed $\ga\in\db$, $x>0$, and $t\in\bbR$,
by \eqref{pyr},
\begin{equation}\label{kdu}
\gnaxt\to
x^{-\ga} \frac{e^{-x-\gssx t^2}}{x+\gssx t^2}
=:\gaxt.
\end{equation}
Furthermore, \eqref{kdu} trivially holds uniformly for $\ga\in \db$.
Note also that, by \eqref{fitr},
\begin{equation}\label{fitt}
\begin{split}
\bigabs{n\bigpar{e^{x/n}-\tgf(\tqn)}}
&\ge\Re \bigpar{n\bigpar{e^{x/n}-\tgf(\tqn)}}
\ge x+n\Re\bigpar{1-\tgf(\tqn)}
\\&\ge x+\ccft t^2.
\end{split}
\raisetag\baselineskip
\end{equation}
Let $h:=\spann\xi$.
If $h>1$,
consider first $t$ with $\pi\sqrt n/h <|t|\le\pi\sqrt n$.
For such $t$, \eqref{fitt} implies $\bigabs{e^{x/n}-\tgf(\tqn)}\ge c$,
and thus $|\gnaxt|\le C n\qw x^{-\rea}e^{-x}$.
Hence, the integral \eqref{pyrx2} restricted to $|t|>\pi\sqrt n/h$ is
$O\bigpar{n\qqw}$, uniformly in $\db$.
Next (for any $h$),
for $\ga\in\db$ and $|t|\le\pi\sqrt n/h$, \eqref{fir} and \eqref{fitt} yield
\begin{equation}
|\gnaxt|\le x^{-\rea}\frac{e^{-x-ct^2}}{x+ct^2}
\le \bigpar{1+x^{-3/4}}\frac{e^{-x-ct^2}}{x+ct^2}.
\end{equation}
The \rhs{} is integrable over
$(x,t)\in(0,\infty)\times((-\infty,-1)\cup(1,\infty))$; hence
the integral \eqref{pyrx2} restricted to
$1<|t|\le\pi\sqrt n/h$ converges by \refL{Ldom} uniformly on $\db$ to the
corresponding integral of $\gaxt$, which is an analytic function
$h_1(\ga)\in\cH(\db)$ by \refR{Rdom}.
Similarly, for $x\ge1$, using \eqref{fitt} again,
\begin{equation}
|\gnaxt|
\le x^{-\rea}\frac{e^{-x}}{x + c_1 t^2}
\le
e^{-x}
\end{equation}
and it follows by \refL{Ldom} and \refR{Rdom} that the integral \eqref{pyrx2}
restricted to $(x,t)\in(1,\infty)\times(-1,1)$ converges uniformly to an
analytic function $h_2(\ga)\in\cH(\db)$.
It remains to consider the integral in \eqref{pyrx2} over
$(x,t) \in Q:=(0, 1) \times(-1,1)$.
We modify this integral in several steps.
We first replace $e^{-x}$ by 1 in the numerator of $\gnaxt$; the absolute value
of the difference is bounded, using \eqref{fitt} again, by
\begin{equation}
x^{-\rea}\frac{1-e^{-x}}{x+ c_1 t^2}\le x^{-\rea}\le 1+x^{-3/4}
\end{equation}
and thus \refL{Ldom} and \refR{Rdom} show that the integral of the
difference over $(x,t)\in Q$ converges uniformly to an
analytic function $h_3(\ga)\in\cH(\db)$.
Similarly, we then replace $\tgf(\tqn)^n$ by 1
in the resulting integral;
the difference is by \eqref{fitt} and
\eqref{pyr}, using $|1-\tgf(\tqn)^n|\le n|1-\tgf(\tqn)|$,
bounded by
\begin{equation}
x^{-\rea}\frac{Ct^2}{x+c_1 t^2}\le Cx^{-\rea}\le C(1+x^{-3/4})
\end{equation}
and again the integral of the
difference over $Q$ converges uniformly to an
analytic function $h_4(\ga)\in\cH(\db)$.
Next, we replace in the denominator
$e^{x/n}-\tgf(\tqn)$ by $(x/n)+\rho(\tqn)$.
The resulting error is by \eqref{fita} bounded by
$x^{-\rea}\frac{1}n$,
so the error in the integral
over $Q$
is $O(n\qw)$, uniformly in $\ga\in\db$.
Similarly, $\gf(\tqn)=1+O(\tqn)$, so replacing the factor $\gf(\tqn)$ by 1
yields an error in the integral over $Q$ that is bounded, for $\ga\in\db$, by
\begin{equation}
\frac{C}{\sqrt n}\intii\intoi x^{-3/4}\frac{|t|}{x+t^2}\dd x\dd t
=O\bigpar{n\qqw},
\end{equation}
since the integral converges
by \eqref{pkb}.
Summarizing the development so far, we have shown that
\begin{equation}\label{pyrz}
\begin{split}
f_n(\ga)&
=\frac{1 }{2\pi}
\intii\intoi x^{-\ga}
\frac{1}{x+n\rho(\tqn)}\dd x\dd t +h_5(\ga)+o(1),
\end{split}
\end{equation}
uniformly in $\dbm$, for some $h_5(\ga)\in\cH(\db)$.
Define, for $a>0$ and $\ga\in\dbm$,
\begin{equation}\label{pyrw}
\begin{split}
F_{n,a}(\ga)
&:=
\intpm{1}\int_0^{1}
\frac{x^{-\ga}}{x+na\qww\rho(a\tqn)}\dd x\dd t
\\&\phantom:
=\intpm{1}\int_0^{1}
\frac{x^{-\ga}}{x+\gssx t^2\bigsqpar{1+\gam(a\tqn)}}\dd x\dd t ,
\end{split}
\end{equation}
noting that the integrals converge by \eqref{fitr} and the fact that
\begin{align}
\intpm{1}\int_0^{1} \frac{|x^{-\ga}|}{x+ t^2}\dd x\dd t
\le \pi \int_0^{1}\!x^{-\rea-\frac12}\dd x<\infty.
\end{align}
Thus, \eqref{pyrz} can be written, uniformly in $\dbm$,
\begin{equation}\label{pyro}
f_n(\ga)
=\frac{1 }{2\pi}F_{n,1}(\ga) +h_5(\ga)+o(1).
\end{equation}
Fix $a>1$.
Then,
for $\ga\in\dbm$ (and $n\ge a^2$, say),
using \refL{LG} we have
\begin{equation}
\begin{split}
\bigabs{F_{n,a}(\ga)-F_{n,1}(\ga)}
&
\le
\intpm{1}\intoi|x^{-\ga}|\frac{Ct^2\bigabs{\gam(a\tqn)-\gam(\tqn)}}
{(x+ct^2)^2}
\ddx x\dd t
\\&\le
C
\intpm{1}\bigabs{\gam(a\tqn)-\gam(\tqn)}\int_0^{1} x^{-1/2}
\frac{\ddx x}{x+t^2}\dd t
\\&
\le
C
\intpm{1}\frac{\bigabs{\gam(a\tqn)-\gam(\tqn)}}{|t|}\dd t
\\&
=
C
\intpm{1/\sqrt n}\frac{\bigabs{\gam(at)-\gam(t)}}{|t|}\dd t
\to0,
\end{split}
\raisetag{1.2\baselineskip}
\end{equation}
as \ntoo.
Moreover, by the change of variables
$x\mapsto a\qww x$, $t\mapsto a\qw t$,
\begin{equation}
\begin{split}
F_{n,a}(\ga)
=
a^{2\ga-1}
\intpm{a}\int_0^{a^2}
\frac{x^{-\ga}}{x+n\rho(\tqn)}\dd x\dd t ,
\end{split}
\end{equation}
which differs from $a^{2\ga-1}F_{n,1}(\ga)$ by an integral which,
using \refL{Ldom} and \refR{Rdom} again, converges uniformly to some
function $h_6(\ga)\in\cH(\db)$.
It follows that, uniformly for $\ga\in\dbm$,
\begin{equation}
\bigpar{a^{2\ga-1}-1}F_{n,1}(\ga)
=F_{n,a}(\ga)-F_{n,1}(\ga) -\bigpar{F_{n,a}(\ga)-a^{2\ga-1}F_{n,1}(\ga)}
\to -h_6(\ga)
.
\end{equation}
Consequently, \eqref{pyro} shows that
$\bigpar{a^{2\ga-1}-1}f_{n}(\ga)$ converges uniformly in $\dbm$
to some function
$h_7(\ga)\in\cH(\db)$, which, recalling the definition \eqref{pyrx1} of
$f_n(\ga)$,
shows that
\begin{equation}\label{fitw}
\bigpar{a^{2\ga-1}-1}n^{\frac12-\ga}\bigpar{\mu(\ga)-\mu_n(\ga)}
=h_8(\ga)+o(1),
\end{equation}
uniformly in $\dbmB$, for some function $h_8(\ga)\in\cH(\db)$
and every $B>0$.
By \eqref{magnus},
\begin{equation}\label{fitz}
h_8(\ga)=\bigpar{a^{2\ga-1}-1}\frac{1}{\sqrt{2\pi\gss}(\frac12-\ga)}
\end{equation}
for $\ga\in\dbm$, and thus
by analytic continuation
for $\ga\in \db\xq$.
By \refT{TM}, $\mu(\ga)$ is continuous on $\dbmx$, and so are $\mu_n(\ga)$
(which is an entire function) and $h_8(\ga)$.
Hence, by continuity, \eqref{fitw} holds
uniformly in every $\dbmxB$.
Finally, for any compact set $K\subset\dbmx$, we can choose $a>1$ such
that $a^{2\ga-1}\neq1$ on $K$, and then \eqref{fitw} and \eqref{fitz} show
that, uniformly for $\ga\in K$,
\begin{equation}
\label{magnusx}
n^{\frac12-\ga}\bigsqpar{\mu(\ga)-\mu_n(\ga)}=
\frac{1}{\sqrt{2\pi\gss}(\frac12-\ga)}+o(1)
\end{equation}
as \ntoo.
The result \eqref{te-}, uniformly on $K$, follows from \eqref{magnusx} and
\refL{LEn}.
This shows that \eqref{te-} holds uniformly on any compact subset of
$\dbmx$, and in particular on any compact subset of
$\set{\ga:\frac{1}{4}\le\rea\le\frac12}\xq$. Since \refT{TE}\ref{TE-} implies that
\eqref{te-} holds uniformly on any compact subset of
$\set{\ga:-\frac{1}{2}<\rea\le\frac14}$, it follows that it holds uniformly
on any compact subset of
$\set{\ga:-\frac{1}{2}<\rea\le\frac12}\xq$.
\end{proof}
\section{An example where $\mu(\ga)$ has no analytic extension} \label{S:bad}
\refT{TM} shows that $\mu(\ga)$ has a continuous extension to the line
$\rea=\frac12$, except at $\ga=\frac12$.
However, in general, $\mu$ cannot be extended analytically across this line;
in fact the derivative $\mu'(\ga)$
may diverge as $\ga$ approaches this line.
In particular, \refT{TEgd}\ref{TEgdmu} does not hold (in general) without
the extra moment assumption there.
\begin{theorem}
\label{Tbad}
There exists $\xi$ with $\E\xi=1$ and $0<\Var\xi<\infty$ such that for any
$\gao$ with $\Re\gao=\frac12$,
$
\limsup_{\ga\to\gao,\,\rea<\frac12}|\mu'(\ga)|=\infty$.
In particular, $\mu(\ga)$ has no analytic extension in a neighborhood of
any such $\gao$. In other words, the line $\rea=\frac12$ is a natural
boundary for $\mu(\ga)$.
\end{theorem}
We shall first prove three
lemmas.
Instead of working with $\mu(\ga)$ directly, we shall use \refL{LP2} (and,
for convenience, \refR{RP2}).
We define, for any function $\rho(t)$ and a complex $\ga$,
\begin{equation}\label{FF}
F(\rho;\ga):=\intii\rho(t)^{-\ga}\dd t.
\end{equation}
Note that if $\rho(t)\ge ct^2$ (as will be the case below), then this
integral is
finite for $\rea<\frac12$, at least, and defines an analytic function
there.
If $F(\rho;\ga)$ extends analytically to a larger domain, we will use the
same notation for the extension (even if the integral \eqref{FF} diverges).
If $\rho(t)=1-\E e^{\ii t\txi}$ as in \eqref{rho}, we also write $F(\xi;\ga)$.
By \refL{LP2} and \refR{RP2}, \refT{Tbad} follows if we prove the statement
with $\mu(\ga)$ replaced by $F(\xi;\ga)$.
We define in this section the domains
$\Dbad:=\set{\ga:\frac{1}4<\rea<\frac{3}4}$,
$\Dbadm:=\set{\ga:\frac{1}4<\rea<\frac{1}2}$
and
$\Dbadx:=\Dbad\xq$.
(These choices are partly for convenience; we could take $\Dbad$ larger.)
If $(g_N)$ is a sequence of functions in a domain~$D$,
we write $O_{\cH(D)}(g_N(\ga))$ for any sequence of functions
$f_N\in\cH(D)$ such that
$f_N(\ga)/g_N(\ga)$ is bounded on each compact $K\subset D$, uniformly in $N$.
(Often, $g_N(\ga)$ will not depend on $\ga$.)
We extend the definition to functions $g_{N,t}(\ga)$ and $f_{N,t}(\ga)$
depending also on an additional parameter~$t$,
requiring uniformity also in $t$.
It will be convenient to work with a restricted set of offspring
distributions $\xi$.
Let $\ppp$ be the set of all probability distributions $(p_k)_0^\infty$
on \set{0,1,2,\dots} such that
$p_0,p_1,p_2>0.1$, and if $\xi$ has the distribution $(p_k)_0^\infty$, then
$\E\xi=\sum_k kp_k=1$,
$\Var\xi=\sum_k (k-1)^2p_k=2$ and
$\E\xi^3=\sum_k k^3p_k<\infty$.
(The set $\ppp$ is clearly non-empty. A concrete example is
$(0.52,0.2,0.2,0,0,0.08)$.)
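Indeed, for this example a direct calculation gives
\begin{equation*}
\sum_k kp_k = 0.2+2\cdot0.2+5\cdot0.08=1,
\qquad
\sum_k (k-1)^2p_k = 0.52+0.2+16\cdot0.08=2.
\end{equation*}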
We write $\xi\in\ppp$ for $\cL(\xi)\in\ppp$.
If $\xi\in\ppp$, then $\gss=2$ and $\E\xi^3<\infty$, and thus
$\tgf(t)=1-t^2+O(t^3)$; hence $\rho(t)=t^2+O(t^3)$.
Moreover, since $\P(\txi=j)>0.1$ for $j = \pm 1$, we have
\begin{equation}\label{rer}
\Re\rho(t)=\Re\bigpar{1-\E e^{\ii t\txi}}
=\E\bigpar{1-\cos t\txi}
\ge 0.2(1-\cos t) \ge ct^2,
\end{equation}
for $|t|\le\pi$,
uniformly for all $\xi\in\ppp$.
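The middle inequality in \eqref{rer} uses $\P(\txi=1),\P(\txi=-1)>0.1$; the last inequality is the elementary bound
\begin{equation*}
1-\cos t = 2\sin^2(t/2)\ge 2(t/\pi)^2,
\qquad |t|\le\pi,
\end{equation*}
which follows from $\sin u\ge 2u/\pi$ for $0\le u\le\pi/2$; thus we may take $c=0.4/\pi^2$ in \eqref{rer}.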
\begin{lemma}
\label{Ldx}
If $\xi\in\ppp$, then
$F(\xi;\ga)$ extends to a function in $\cH(\Dbadx)$.
\end{lemma}
\begin{proof}
$\mu(\ga)\in\cH(\Dbadx)$ by
\refT{TEgd}\ref{TEgdmu} (or \refT{TH}), and the result follows
by \refL{LP2} and \refR{RP2}.
\end{proof}
\begin{lemma}
\label{LY}
If $\xi_N\in\ppp$ for $N \ge 1$ and $\xi_N\dto\xi$, then $F(\xi_N;\ga)\to
F(\xi;\ga)$ in $\cH(\Dbadm)$.
\end{lemma}
Note that we do not assume $\xi\in\ppp$.
(In fact, it is easy to see that the lemma extends to arbitrary $\xi_N$ and
$\xi$ with expectation 1 and finite, non-zero variance.)
\begin{proof}
Let $\rho(t):=1-\E e^{\ii t \txi}$ and $\rho_N(t):=1-\E e^{\ii t \txi_N}$,
where as usual $\txi:=\xi-1$ and $\txi_N:=\xi_N-1$.
Since $\xi_N\dto\xi$, $\rho_N(t)\to\rho(t)$ for every $t$.
\refL{Ldom} together with the estimate \eqref{rer} show that
$F(\xi_N;\ga)=F(\rho_N;\ga)\to F(\xi;\ga)$ uniformly on every compact subset
of $\Dbadm$.
\end{proof}
\begin{lemma}\label{LD}
If $\xi\in\ppp$ and $y\in\bbR\xo$,
then there exists a sequence $\xi_N\in\ppp$, $N\ge1$,
such that, as \Ntoo,
$\xi_N\dto\xi$
and
$\bigabs{\frac{\ddx}{\ddx \ga}F(\xi_N;\ga)}_{\ga=\frac12+\ii y}\to\infty$.
\end{lemma}
\begin{proof}
Let $a_N:=(\log N)\qqw$ and let $\xi_N$ have the distribution
\begin{equation}
\cL(\xi_N)=\cL(\xi)
+a_N\Bigsqpar{\frac{2}{N^2}\bigpar{\gd_N-N\gd_1+(N-1)\gd_0}
-\frac{N-1}N\bigpar{\gd_2-2\gd_1+\gd_0}}
\end{equation}
where $\gd_j$ is unit mass at $j$. Since $a_N\to0$, and
$\P(\xi=j)>0.1>0$ for $j=0,1,2$,
this is clearly a probability distribution if $N$ is large enough.
Furthermore, $\xi_N\dto\xi$ as \Ntoo, and
$\E\xi_N=\E\xi$, $\E\xi_N^2=\E\xi^2$ and $\E\xi_N^3<\infty$,
and thus $\xi_N\in\ppp$, provided $N$ is large enough.
(We assume in the rest of this proof that $N$ is large enough whenever
necessary, without further mention.
We can define $\xi_N$ arbitrarily for small $N$.)
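As a check on the claims $\E\xi_N=\E\xi$ and $\E\xi_N^2=\E\xi^2$, note that both bracketed combinations in the definition of $\cL(\xi_N)$ have total mass and mean zero, while their contributions to the second moment cancel:
\begin{equation*}
\frac{2}{N^2}\bigpar{N^2\cdot 1-N\cdot 1^2}
=2\,\frac{N-1}{N}
=\frac{N-1}{N}\bigpar{2^2-2\cdot 1^2}.
\end{equation*}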
Let $\gf_N(t):=\E e^{\ii t\xi_N}$
and, recalling \eqref{psi1}--\eqref{psi2},
\begin{equation}\label{gD1}
\begin{split}
\gD_N(t)
&
:=\gf_N(t)-\gf(t)
=\E e^{\ii t\xi_N} -\E e^{\ii t\xi}
\\&
=a_N\Bigsqpar{\frac{2}{N^2}\bigpar{e^{\ii Nt}-N\eit+N-1}
-\frac{N-1}N\bigpar{e^{2\ii t}-2\eit+1}}
\\&
=a_N\Bigsqpar{\frac{2}{N^2}\bigpar{\psi_1(Nt)-N\psi_1(t)}
-\frac{N-1}N\bigpar{(\ii t)^2+O(t^3)}}
\\&
=2a_NN\qww\bigpar{\psi_2(Nt)-N\psi_2(t)}
+O(a_Nt^3)
\\&
=2a_NN\qww\psi_2(Nt)+O(a_Nt^3),
\end{split}
\end{equation}
since $\psi_2(x)=O(x^3)$.
We further define
\begin{equation}\label{tpsi}
\tpsi(t):=2\frac{\psi_2(t)}{t^2}
=\frac{2e^{\ii t}-2-2\ii t+t^2}{t^2}
=2\frac{\psi_1(t)}{t^2}+1.
\end{equation}
Then $\tpsi$ is bounded and continuous on $\bbR$; moreover, $\tpsi(t)=O(t)$ as $t\to0$
(since $\psi_2(t)=O(|t|^3)$), and $\tpsi(t)=1+O(t\qw)$ as $|t|\to\infty$
(since $\psi_1(t)=O(|t|)$). Furthermore, \eqref{gD1} yields
\begin{equation}\label{gD2}
\gD_N(t)=a_Nt^2\tpsi(Nt)+O(a_Nt^3).
\end{equation}
In particular, $\gD_N(t)=O(a_Nt^2)$ for $|t|\le\pi$.
We further let $\tgf_N(t):=\E e^{\ii t\txi_N}$,
$\rho_N(t):=1-\tgf_N(t)$
and, using \eqref{gD2},
\begin{equation}\label{tgD}
\begin{split}
\tgD_N(t):=\rho_N(t)-\rho(t)=-\emit\gD_N(t)
=
-a_Nt^2\tpsi(Nt)+O(a_Nt^3).
\end{split}
\end{equation}
In particular,
\begin{equation}
\label{tgD2}
\tgD_N(t)=O(a_Nt^2),
\qquad |t|\le\pi.
\end{equation}
Let $\rho_0(t):=t^2$, and let $\gd(t):=\rho(t)-\rho_0(t)$.
Then $\gd(t)=O(t^3)$, since $\Var\xi=2$ and $\E\xi^3<\infty$.
The general formula, for any twice continuously differentiable function $f$,
\begin{equation*}
f(x+y+z)-f(x+y)-f(x+z)+f(x)
=
yz\intoi\intoi f''(x+sy+tz)\dd s\dd t
\end{equation*}
implies together with \eqref{rer} and \eqref{tgD2},
for $\ga\in \Dbad$,
\begin{equation}
\begin{split}
\bigabs{(\rho(t)+\tgD_N(t))^{-\ga}-\rho(t)^{-\ga}
&-\bigpar{(\rho_0(t)+\tgD_N(t))^{-\ga}-\rho_0(t)^{-\ga}}
}
\\
&\le
C|\tgD_N(t)|\, |\gd(t)|\, |\ga|\, |\ga+1|\, |t|^{-2(\rea+2)}
\\&
=O\bigpar{a_N |\ga|^2|t|^{1-2\rea}}.
\end{split}
\end{equation}
Hence, integrating over $t$ and recalling \eqref{FF},
\begin{equation}\label{DF1}
F(\rho+\tgD_N;\ga)- F(\rho;\ga)
-\bigpar{ F(\rho_0+\tgD_N;\ga)- F(\rho_0;\ga)}
=\OHD(a_N).
\end{equation}
Next, let $\xgdn(t):=-a_Nt^2\tpsi(Nt)$. Then $\tgdn(t)-\xgdn(t)=O(a_Nt^3)$ by
\eqref{tgD}, and thus, by the mean value theorem and \eqref{tgD2},
for $|t|\le\pi$,
\begin{equation*}
\begin{split}
\bigabs{
(\rho_0(t)+\tgD_N(t))^{-\ga}-(\rho_0(t)+\xgdn(t))^{-\ga}
}
&\le
C |\tgD_N(t)-\xgdn(t)|\, |\ga|\, |t|^{-2(\rea+1)}
\\&
=O\bigpar{a_N |\ga||t|^{1-2\rea}}.
\end{split}
\end{equation*}
Hence, by an integration,
\begin{equation}\label{DF2}
F(\rho_0+\tgD_N;\ga)- F(\rho_0+\xgdn;\ga)
=\OHD(a_N).
\end{equation}
Now consider
$F(\rho_0+\xgdn;\ga)- F(\rho_0;\ga)$.
Let
$\chi(t):=\ett{|t|>1}$.
Then, considering first $t>0$,
for $\ga\in \Dbadm$,
\begin{equation}\label{eleonora}
\begin{split}
\intoi
&\bigsqpar{(\rho_0(t)+\xgdn(t))^{-\ga}-\rho_0(t)^{-\ga}}\dd t
=
\intoi t^{-2\ga}\bigsqpar{\xpar{1-a_N\tpsi(Nt)}^{-\ga}-1}\dd t
\\&
=
\intoi t^{-2\ga}\bigsqpar{(1-a_N\tpsi(Nt))^{-\ga}-(1-a_N\chi(Nt))^{-\ga}}\dd
t
\\&\qquad\qquad{}
+\bigsqpar{(1-a_N)^{-\ga}-1}\int_{1/N}^1 t^{-2\ga}\dd t
\\&
=
N^{2\ga-1}
\int_0^N t^{-2\ga}\bigsqpar{(1-a_N\tpsi(t))^{-\ga}-(1-a_N\chi(t))^{-\ga}}\dd t
\\&\qquad\qquad{}
+\bigsqpar{(1-a_N)^{-\ga}-1}\frac1{1-2\ga}
\bigpar{1-N^{2\ga-1}}.
\end{split}
\raisetag{\baselineskip}
\end{equation}
Since $\tpsi(t)-\chi(t)=O\bigpar{|t|\land|t\qw|}$,
and $a_N\chi(t)=O(a_N)=o(1)$, with $\chi(t)=0$ for $0 \leq t<1$,
a Taylor expansion yields,
uniformly for $t\in\bbR$,
\begin{align}\label{D13}
&(1-a_N\tpsi(t))^{-\ga}-(1-a_N\chi(t))^{-\ga}
\notag\\
&\hskip2em
=\ga a_N\bigpar{\tpsi(t)-\chi(t)}
+\OHD\bigpar{a_N|\tpsi(t)-\chi(t)|(a_N|\tpsi(t)|+a_N\chi(t))}
\notag\\
&\hskip2em
=\ga a_N\bigpar{\tpsi(t)-\chi(t)}+\OHD\bigpar{a_N^2(|t|^2\land|t|\qw)}.
\end{align}
Using \eqref{D13}
and a Taylor expansion of $(1-a_N)^{-\ga}$ in \eqref{eleonora},
we obtain
for $\ga\in \Dbadm$,
\begin{multline}\label{erika}
\intoi
\bigsqpar{(\rho_0(t)+\xgdn(t))^{-\ga}-\rho_0(t)^{-\ga}}\dd t
\\
=
N^{2\ga-1}
\int_0^N t^{-2\ga}\ga a_N\bigpar{\tpsi(t)-\chi(t)}\dd t
-\frac{\ga a_N}{1-2\ga} N^{2\ga-1}
\\
+\OHDx\bigpar{a_N^2 N^{2\ga-1}}
+\OHDx(a_N).
\end{multline}
Furthermore,
using again $\tpsi(t)-\chi(t)=O\bigpar{|t\qw|}$,
\begin{equation}\label{D14}
\int_N^\infty\!t^{-2\ga}\bigpar{\tpsi(t)-\chi(t)}\dd t
=O\bigpar{N^{-2\rea}},
\end{equation}
so we may as well integrate to $\infty$ on the \rhs{} of \eqref{erika}.
For $\ga\in \Dbadm$, recalling \eqref{tpsi},
\begin{equation}\label{matt}
\begin{split}
\intoo\!t^{-2\ga}\bigpar{\tpsi(t)-\chi(t)}\dd t
&=
2\intoo \psi_1(t)t^{-2\ga-2}\dd t
+\intoi t^{-2\ga}\dd t.
\end{split}
\end{equation}
Furthermore, if $\ga\in \Dbadm$ and $\Re\zeta\ge0$,
then
\begin{equation}\label{per}
\intoo\!\bigpar{e^{-\zeta t}-1+\zeta t}t^{-2\ga-2} \dd t =
\zeta^{2\ga+1}\gG(-2\ga-1);
\end{equation}
the case $\zeta=1$ is well known \cite[5.9.5]{NIST},
the case $\zeta>0$ follows by a change of variables,
the case $\Re\zeta>0$ follows by analytic continuation,
and the case $\Re\zeta\ge0$ follows by continuity.
Recalling \eqref{psi1}, we take
$\zeta=-\ii$ in \eqref{per}, and obtain from \eqref{matt},
for $\ga\in \Dbadm$,
\begin{equation}\label{matta}
\begin{split}
\intoo\!t^{-2\ga}\bigpar{\tpsi(t)-\chi(t)}\dd t
&=
2(-\ii)^{2\ga+1}\gG(-2\ga-1)
+\frac{1}{1-2\ga}.
\end{split}
\end{equation}
Combining \eqref{erika}--\eqref{D14} and \eqref{matta},
we obtain (for $\alpha \in \Dbadm$)
\begin{multline}\label{ahm}
\intoi
\bigsqpar{(\rho_0(t)+\xgdn(t))^{-\ga}-\rho_0(t)^{-\ga}}\dd t
=
2\ga (-\ii)^{2\ga+1}\gG(-2\ga-1) a_N N^{2\ga-1}
\\
+\OHDx\bigpar{a_N^2 N^{2\ga-1}}
+\OHDx(a_N).
\end{multline}
The integral over $(-1,0)$ yields the same result with
$(-\ii)^{2\ga+1}$ replaced by $\ii^{2\ga+1}$, \eg{} by conjugating \eqref{ahm}
and $\ga$.
Consequently,
\begin{multline}\label{DF3}
F(\rho_0+\xgdn;\ga)-F(\rho_0;\ga)
=
2\ga \bigpar{\ii^{2\ga+1}+(-\ii)^{2\ga+1}}\gG(-2\ga-1) a_N N^{2\ga-1}
\\
+\OHDx\bigpar{a_N^2 N^{2\ga-1}}
+\OHDx(a_N).
\end{multline}
For convenience, we write
\begin{equation}
G(\ga):=2\ga\bigpar{\ii^{2\ga+1}+(-\ii)^{2\ga+1}}\gG(-2\ga-1)
=2\ga\bigpar{\ii e^{\ii\pi\ga}-\ii e^{-\ii\pi\ga}}\gG(-2\ga-1).
\end{equation}
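Since $\ii^{2\ga+1}=e^{\ii\pi(2\ga+1)/2}=\ii e^{\ii\pi\ga}$ and $(-\ii)^{2\ga+1}=-\ii e^{-\ii\pi\ga}$, this can also be written
\begin{equation*}
G(\ga)=-4\ga\sin(\pi\ga)\gG(-2\ga-1);
\end{equation*}
in particular, for $\ga=\frac12+\ii y$ we have $\sin(\pi\ga)=\cos(\ii\pi y)=\cosh(\pi y)$, which gives the expression for $G(\ga)$ used after \eqref{sjw} below.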
Combining \eqref{DF1}, \eqref{DF2}, and \eqref{DF3} yields,
for $\ga\in \Dbadm$,
\begin{equation}\label{sw}
\begin{split}
F(\rho_N;\ga)
=
F(\rho+\tgD_N;\ga)
&
=F(\rho;\ga)
+ a_N G(\ga) N^{2\ga-1}
\\&\qquad \qquad
+\OHDx\bigpar{a_N^2 N^{2\ga-1}}
+\OHDx(a_N) .
\end{split}
\end{equation}
By \refL{Ldx},
all terms in \eqref{sw} are analytic in $\Dbadx$, and thus \eqref{sw} holds for
$\ga\in \Dbadx$.
Note that if $f_N$ and $g_N$ are functions such that
$f_N(\ga)=\OHDx(g_N(\ga))$,
then $f_N(\ga)=g_N(\ga)h_N(\ga)$ with $h_N(\ga)=\OHDx(1)$. By Cauchy's
estimate, $h_N'(\ga)=\OHDx(1)$, and it follows that
$f'_N(\ga)=\OHDx(g_N(\ga))+\OHDx(g_N'(\ga))$.
Hence, taking derivatives in \eqref{sw} and then putting $\ga=\frac12+\ii y$
for a fixed $y\neq0$ yields
\begin{align}\label{sjw}
F'(\rho_N;\ga)
&
=F'(\rho;\ga)
+2 (\log N) a_N G(\ga) N^{2\ga-1}
+O\bigpar{a_N^2 \log N}
+O(a_N)
\notag\\&
=
2 G(\ga) (\log N) a_N N^{2\ii y}+O(1)
.
\end{align}
Since $G(\ga)=-4\ga (\cosh \pi y) \gG(-2-2y\ii)\neq0$, $|N^{2\ii y}|=1$ and
$a_N \log N=(\log N)\qq\to\infty$,
\eqref{sjw} shows that
$|F'(\xi_N;\frac12+\ii y)|=|F'(\rho_N;\frac12+\ii y)|\to\infty$
as \Ntoo.
\end{proof}
\begin{proof}[Proof of \refT{Tbad}]
Let $(y_n)_1^\infty$ be an enumeration of all non-zero rational numbers.
We shall construct sequences $x_n\in(\frac{1}4,\frac12)$ and $\xi_n\in\ppp$,
$n=1,2,\dots$,
such that, with $z_n:=x_n+\ii y_n\in \Dbadm$,
\begin{equation}\label{ind}
|F'(\xi_n;z_k)|>k, \qquad k=1,\dots,n,
\end{equation}
and,
furthermore, the total variation distance
\begin{equation}
\label{dtv}
\dtv(\xi_n,\xi_{n-1})<2^{-n}.
\end{equation}
We construct the sequences inductively. Suppose that $\xi_{n-1}$ is
constructed. (For $n=1$, we let $\xi_0$ be any element of $\ppp$.)
By \refL{LD}, there exists a sequence $\xi_{n-1,N}\in\ppp$ such that,
as \Ntoo,
$\xinn\dto\xi_{n-1}$ and $|F'(\xinn;\zzn)|\to\infty$.
By \refL{LY}, then $F(\xinn;\ga)\to F(\xi_{n-1};\ga)$ in $\cH(\Dbadm)$.
This implies
$F'(\xinn;\ga)\to F'(\xi_{n-1};\ga)$ in $\cH(\Dbadm)$, and in particular,
$F'(\xinn;z_k)\to F'(\xi_{n-1};z_k)$ for $1\le k\le n-1$.
Since \eqref{ind} holds for $n-1$ by the induction hypothesis,
it follows that
$|F'(\xinn;z_k)|>k$ for $1\le k\le n-1$ for all large $N$.
Furthermore, if we choose $N$ large enough, $|F'(\xinn;\zzn)|>n$ and
$\dtv(\xinn,\xi_{n-1})<2^{-n}$.
We choose a large $N$ such that these properties hold and let
$\xi_n:=\xinn$.
Then \eqref{ind} holds for $k=1,\dots,n-1$. Furthermore, since
$\xi_n\in\ppp$, $F(\xi_n;\ga)\in\cH(\Dbadx)$,
and thus $F'(\xi_n;\ga)$ is
continuous in $\Dbadx$. Hence $|F'(\xi_n;x+\ii y_n)|\to |F'(\xi_n;\zzn)|$ as
$x\to\frac12$, and we can choose $x_n\in(\frac{1}4,\frac12)$ with
$\frac12-x_n<\frac{1}n$ such that
$|F'(\xi_n;x_n+\ii y_n)|=|F'(\xi_n;z_n)|>n$; thus \eqref{ind} holds for $k=n$ as well.
This completes the construction of $x_n$ and $\xi_n$.
By \eqref{dtv}, the distributions $\cL(\xi_n)$ form a Cauchy sequence in
total variation distance, so there exists a random variable $\xi$ with
$\xi_n\dto\xi$. Clearly, $\xi$ is non-negative and integer-valued.
Moreover, since $\xi_n\in\ppp$ we have $\E\xi_n^2=\Var\xi_n+(\E\xi_n)^2=3$,
for every $n$, and thus the sequence $\xi_n$ is uniformly integrable, so
$\E\xi=\lim_\ntoo\E\xi_n=1$. Furthermore, by Fatou's lemma,
$\E\xi^2 \leq 3 <\infty$.
Note that $\xi$ does not necessarily belong to $\ppp$; in
fact, it is easily seen from \eqref{Finf} below that $\xi\notin\ppp$.
Nevertheless \eqref{rer} holds for every $\xi_n$ (with the same $c$) and thus
\eqref{rer} holds for $\xi$ too. In particular $\P(\xi\neq1)>0$ so $\Var\xi>0$.
\refL{LY} shows that $F(\xi_n;\ga)\to F(\xi;\ga)$ in $\cH(\Dbadm)$, and thus
$F'(\xi_n;\ga)\to F'(\xi;\ga)$ for every $\ga\in \Dbadm$.
Hence, \eqref{ind} implies
\begin{equation}\label{Finf}
|F'(\xi;z_k)|\ge k
\end{equation}
for every $k$.
Thus, $|F'(\xi;z_n)|\to\infty$ as \ntoo.
Now take any $y\in\bbR$ and let $\ga_0:=\frac12+\ii y$. There is an infinite
number of points $y_n$ in each neighborhood of $y$, so we can find a
subsequence
converging to $y$. Since $x_n\to\frac12$, it
follows that there is a subsequence of $z_n=x_n+\ii y_n$
that converges to $\ga_0$.
Suppose first that $y\neq0$, so $\ga_0\neq\frac12$.
Then it follows from \refL{LP2} (with \refR{RP2}) and
\refT{TM} that, as \ntoo{} along the subsequence,
\begin{equation}
\mu'(z_n)=\frac{1}{2\pi}\gG(z_n)F'(\xi;z_n)+O(1)
\end{equation}
and thus, by \eqref{Finf}, $|\mu'(z_n)|\to\infty$.
This proves the claim in
\refT{Tbad} for every $\ga_0$ with $\Re\ga_0=\frac12$ and
$\ga_0\neq\frac12$.
The case $\ga_0=\frac12$ follows easily,
either by noting that the set of $\ga_0$ for which the
claim holds is closed, or simply by \eqref{aaa}.
\end{proof}
\section{Moments}\label{Smom}
In this section we prove \refTs{T1mom} and \ref{TXmom} on
moments of $X_n(\ga)$ and of the limits $Y(\ga)$.
The section is largely based on \citet{FillK03} and \cite{FillK04}, and uses
the methods of \cite{FillFK}, also presented in \cite[Section VI.10]{FS}.
We assume for simplicity throughout this section that $\xi$ has span 1.
The general case follows by minor modifications of standard type.
\subsection{More notation and preliminaries}
\label{S:more_notation}
Recall that $\cT$ is the random \GWt{} defined by the offspring distribution
$\xi$.
Let $p_k:=\P(\xi=k)$ denote the values of the probability mass function
for~$\xi$,
and let $\Phi$ be its probability generating function:
\begin{align}\label{Phi}
\Phi(z):=\E z^\xi=\sum_{k=0}^\infty p_k z^k.
\end{align}
Similarly, let $q_n:=\P(|\cT|=n)$, and let~$y$ denote the corresponding
probability generating function:
\begin{align}\label{yz}
y(z):=\E z^{|\cT|} = \sum_{n=1}^\infty \P\bigpar{|\cT|=n}z^n
=\sum_{n=1}^\infty q_n z^n.
\end{align}
If $\cT$ has root degree $k$, denote the subtrees rooted at the children of
the root by $\cT_1,\dots,\cT_k$; note that, conditioned on $k$, these are
independent copies of $\cT$.
By conditioning on the root degree, we thus obtain the standard formula
\begin{align}\label{ma}
y(z) &
= \sum_{k=0}^\infty p_k \E [z^{1+|\cT_1|+\dotsm+|\cT_k|}]
=\sum_{k=0}^\infty p_k z \bigpar{\E [z^{|\cT|}]}^k
=z\sum_{k=0}^\infty p_k {y(z)}^k
\notag\\&\phantom:
=z\Phi\yz
.\end{align}
A \GDD{} is a complex domain of the type
\begin{align}\label{GDD}
\set{z:|z|<R,\, z\neq1,\,|\arg(z-1)|>\gth}
\end{align}
where $R>1$ and $0<\gth<\pi/2$, see \cite[Section VI.3]{FS}.
A function is \emph{\gda} if it is analytic in some \GDD{}
(or can be analytically continued to such a domain).
Under our standing assumptions $\E\xi=1$ and $0<\Var\xi<\infty$,
the generating function $y(z)$ is \gda; moreover,
as $z\to1$
in some \GDD,
\begin{align}\label{yz1}
y(z)=1-\sqrt2 \gs\qw(1-z)\qq+o\bigpar{|1-z|\qq}
,\end{align}
see \cite[Lemma A.2]{SJ167}.
This is perhaps more well-known if $\xi$ has some exponential moment, and then
\eqref{yz1} may be improved to a full asymptotic expansion, and in particular
\begin{align}\label{yz2}
y(z)=1-\sqrt2 \gs\qw(1-z)\qq+\Oz{},
\end{align}
see \eg{}
\cite[Theorem VI.6]{FS}.
In fact, \eqref{yz2} holds provided only $\E\xi^3<\infty$. This
follows easily from \eqref{ma}, see \refL{Lgdy}.
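For orientation, we record a standard explicit example: if $\xi$ has the geometric distribution $p_k=2^{-k-1}$, $k\ge0$, then $\Phi(z)=1/(2-z)$, and solving the quadratic equation that \eqref{ma} becomes in this case yields
\begin{equation*}
y(z)=1-\sqrt{1-z}.
\end{equation*}
Here $\E\xi=1$ and $\gss=\Phi''(1)=2$, so that \eqref{yz1} and \eqref{yz2} hold exactly, without error terms.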
In the present section, asymptotic estimates similar to \eqref{yz1} and
\eqref{yz2} should always be interpreted as holding when
$z\to1$ in a suitable \GDD, even when not said so explicitly; the domain may be different each time.
\begin{remark}
In most parts of the present section, we will only use the assumption
$\E\xi^2<\infty$ and the general \eqref{yz1}.
If we assume that $\E\xi^3<\infty$, and thus \eqref{yz2}
holds, then the error estimates below can be improved,
and explicit error estimates can be obtained in \refT{TXmom}; see \cite{FillK04}
where this is done in detail for a special $\xi$ using similar arguments.
In fact, it can be checked that if $\E\xi^3<\infty$, then
all $o$ terms in the proof below can be shown to be of (at most)
the same order as the bounds given in \cite{FillK04} for
the corresponding terms.
Further, when $\xi$ has an exponential moment,
a full asymptotic expansion of the mean is derived in
\cite[Section 5.2]{FillFK};
it seems possible that this can be extended to higher moments, but we have
not pursued this.
\end{remark}
In some formulas below, certain unspecified polynomials appear as
``error terms''.
(These are best regarded as polynomials in $1-z$.)
Let $\cP$ be the set of all polynomials, and, for any real $a$, let
\begin{align}\label{Pa}
\cP_a:=\set{P(z)\in\cP:\deg(P(z))<a}.
\end{align}
Note that if $a\le0$, then $\cP_a=\set0$, and thus terms in $\cP_a$ vanish
and can be ignored.
In the formulas below, a restriction of the type $P(z)\in\cP_{a}$, i.e.,
$\deg(P(z))<a$, will always be a triviality,
since higher powers of $1-z$ can be absorbed in an $O$ or $o$ term.
Recall that
the polylogarithm function is defined, for $\ga\in\bbC$, by
\begin{align}\label{Li}
\Li_\ga(z):=\sum_{n=1}^\infty n^{-\ga}z^n,
\qquad |z|<1;
\end{align}
see \cite[Section VI.8]{FS},
\cite[\S25.12]{NIST},
or \refApp{Apoly}.
It is well known that $\Li_{\ga}(z)$ is \gda; in fact, it can be
analytically continued
to $\bbC\setminus[1,\infty)$. Moreover,
if $\ga\notin\set{1,2,\dots}$, then,
as $z\to1$,
\begin{align}\label{li}
\Li_{\ga}(z) = \gG(1-\ga)(1-z)^{\ga-1} +P(z)+ O\bigpar{|1-z|^{\rga}},
\qquad P(z)\in\cP_{\rga},
\end{align}
see
\cite[Theorem VI.7]{FS}
or \cite{Flajolet1999},
where a complete asymptotic expansion is given;
see also \refApp{Apoly}.
In particular, if $\rga\le0$, then $P(z)$ vanishes and so \eqref{li}
simplifies.
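For example, if $0<\rga<1$, then $\cP_{\rga}$ contains only constants, and the constant is known explicitly: writing $w:=-\log z$, the classical singular expansion $\Li_\ga(e^{-w})=\gG(1-\ga)w^{\ga-1}+\zeta(\ga)+O(|w|)$ together with $w=(1-z)\bigpar{1+O(|1-z|)}$ yields
\begin{equation*}
\Li_{\ga}(z) = \gG(1-\ga)(1-z)^{\ga-1} +\zeta(\ga)+ O\bigpar{|1-z|^{\rga}},
\end{equation*}
so that $P(z)=\zeta(\ga)$ in \eqref{li} in this case.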
Recall also that the \emph{Hadamard product} $A(z)\odot B(z)$
of two power series
$A(z)=\sum_{n=0}^\infty a_n z^n$ and $B(z)=\sum_{n=0}^\infty b_n z^n$
is defined by
\begin{align}\label{hadamard}
A(z)\odot B(z) := \sum_{n=0}^\infty a_n b_n z^n.
\end{align}
As a simple example, for any complex $\ga$ and $\gb$,
\begin{align}\label{lili}
\Li_\ga(z)\odot\Li_\gb(z)=\Li_{\ga+\gb}(z).
\end{align}
We will use some results on Hadamard products,
essentially taken from \cite{FillFK}.
In the next lemma,
Part \ref{LFFKO}
is \cite[Propositions 9 and 10(i)]{FillFK}, and
\ref{LFFKo}
follows by the same arguments;
the
proof of $\gD$-analyticity of the Hadamard product
given for
\cite[Proposition 9]{FillFK} holds for any \gda{} functions.
(For the case $a+b+1\in\NNo$, see \cite{FillFK} and \cite{FS}.)
\begin{lemma}[\cite{FillFK}]\label{LFFK}
If $g(z)$ and $h(z)$ are \gda{}, then $g(z)\odot h(z)$ is \gda.
Moreover, suppose that $a$ and $b$ are real with $a+b+1\notin\NNo$; then the
following holds, as $z\to1$ in a suitable \GDD.
\begin{romenumerate}
\item \label{LFFKO}
If $g(z)=O(|1-z|^a)$ and $h(z)=O(|1-z|^b)$, then
\begin{align}
\label{lffkO}
g(z)\odot h(z)=P(z)+\Oz{a+b+1},
\qquad P(z)\in\cP_{a+b+1}.
\end{align}
\item \label{LFFKo}
If $g(z)=O(|1-z|^a)$ and $h(z)=o(|1-z|^b)$, then
\begin{align}
\label{lffko}
g(z)\odot h(z)=P(z)+\oz{a+b+1},
\qquad P(z)\in\cP_{a+b+1}.
\end{align}
\end{romenumerate}
\end{lemma}
The next lemma is a simplified version of \cite[Proposition 8]{FillFK};
that proposition
gives
(when $\ga,\gb,\ga+\gb\notin\bbZ$)
a complete asymptotic expansion,
and in particular a more explicit error term for our~\eqref{lih2}.
\begin{lemma}[\cite{FillFK}]\label{LIH2}
Suppose that $\rga+\Re\gb+1\notin\NNo$. Then, as $z\to1$ in a suitable \GDD,
\begin{multline}\label{lih2}
(1-z)^\ga\odot(1-z)^\gb
\\
= \frac{\gG(-\ga-\gb-1)}{\gG(-\ga)\gG(-\gb)} (1-z)^{\ga+\gb+1}
+P(z)
+ o\bigpar{|1-z|^{\Re\ga+\Re\gb+1}},
\\
P(z)\in\cP_{\rga+\rgb+1}
.\end{multline}
\end{lemma}
\begin{proof}
The case when none of $\ga,\gb,\ga+\gb$ is an integer is part of
\cite[Proposition 8]{FillFK}.
In general, we use arguments from \cite{FillFK}.
If neither $\ga$ nor $\gb$ is a non-negative integer,
the result follows easily from \eqref{li}, \eqref{lili}, and \refL{LFFK},
which then imply that
\begin{align}
& \gG(-\ga)(1-z)^\ga\odot\gG(-\gb)(1-z)^\gb
\notag\\&
=\bigpar{\Li_{\ga+1}(z)+ P_1(z)+ \oz{\rga}}
\odot
\bigpar{\Li_{\gb+1}(z)+ P_2(z)+ \oz{\rgb}}
\notag\\&
=\Li_{\ga+\gb+2}(z)+P_3(z)+\oz{\rga+\rgb+1}
\notag\\&
=\gG(-\ga-\gb-1)(1-z)^{\ga+\gb+1}+P_4(z)+\oz{\rga+\rgb+1},
\end{align}
where $P_i(z)$ are polynomials.
[Note that $P(z)\odot f(z)$ is a polynomial for any polynomial $P$ and
analytic $f$, and that we may assume $\deg(P_4(z))<\rga+\rgb+1$
by the comment after \eqref{Pa}.]
Finally, if $\ga$ is a non-negative integer, then $(1-z)^\ga$ is a polynomial
and thus the \lhs{} of \eqref{lih2} is a polynomial, so
\eqref{lih2} holds trivially [with $1/\gG(-\ga)=0$].
The same holds if $\gb$ is a non-negative integer.
\end{proof}
\subsection{Generating functions}
Let $(b_n)_1^\infty$ be a given sequence of constants and
consider the toll function $f(T):=b_{|T|}$ and the corresponding additive
functional $F(T)$ given by \eqref{F}.
We are mainly interested in the case $b_n=n^\ga$, but will also consider
$b_n=n^\ga-c$ below for a suitable constant $c$.
In the present subsection, $b_n$ can be arbitrary if we
regard the generating functions as formal power series; if we assume
$b_n=O(n^K)$ for some $K$, then the generating functions below converge and
are analytic at least in the unit disc.
We are interested in the random variable $F(\cT_n)$. We denote its
moments by
\begin{align}\label{mell}
m_n\xxl:=
\E[F(\cT_n)^\ell]
\end{align}
for integer $\ell\ge0$.
Define the generating functions
\begin{align}\label{Mell}
M_\ell(z):=\E \bigsqpar{F(\cT)^\ell z^{|\cT|}}
=\sum_{n=1}^\infty q_n \E \bigsqpar{F(\cT)^\ell z^{|\cT|}\mid |\cT|=n}
=\sum_{n=1}^\infty q_n m\xxl_n z^n
.\end{align}
Note that $M_0(z)=y(z)$, see \eqref{yz}.
The generating functions $M_\ell$ can be calculated recursively as follows,
using Hadamard products and the
generating function
\begin{align}\label{Bz}
B(z):=\sum_{n=1}^\infty b_n z^n.
\end{align}
\begin{lemma}
\label{LH}
For every $\ell\ge1$,
\begin{align}\label{lh}
M_\ell(z)
=
\frac{z y'(z)}{y(z)}
\sum_{m=0}^\ell \frac{1}{m!}\sumxx \binom{\ell}{\ell_0,\dots,\ell_m}
B(z)^{\odot\ell_0} \odot
\bigsqpar{zM_{\ell_1}(z)\dotsm M_{\ell_m}(z)\Phi\xxm\yz},
\end{align}
where $\sumxx$ is the sum over all $(m+1)$-tuples $(\ell_0,\dots,\ell_m)$ of
non-negative integers summing to $\ell$ such that
$1\le\ell_1,\dots,\ell_m<\ell$.
\end{lemma}
\begin{proof}
Condition on the root degree $k$ of $\cT$, and let $\cT_1,\dots,\cT_k$ be
the principal subtrees as at the beginning of \refS{S:more_notation}.
Then \eqref{F2} can be written
\begin{align}\label{lh1}
F(\cT)=f(\cT)+\sum_{i=1}^k F(\cT_i)
= b_{|\cT|}+\sum_{i=1}^k F(\cT_i).
\end{align}
Hence, the multinomial theorem yields the following, where for each $k$
we let $\sum$ denote the sum over all $(k+1)$-tuples $(\ell_0,\dots,\ell_k)$
summing to $\ell$ such that each $\ell_i\ge0$, and furthermore
$\cT_1,\dots,\cT_k$ are independent copies of $\cT$,
and $|\cT|$ is
$1+|\cT_1|+\dots+|\cT_k|$:
\begin{align}\label{lh2}
M_\ell(z)&
=
\sum_{k=0}^\infty p_k \E \Bigsqpar{z^{|\cT|} \Bigpar{b_{|\cT|}+\sum_{i=1}^k F(\cT_i)}^\ell}
\notag\\&
=\sum_{k=0}^\infty p_k \sum \binom{\ell}{\ell_0,\dots,\ell_k}
\E \Bigsqpar{z^{|\cT|} b_{|\cT|}^{\ell_0} F(\cT_1)^{\ell_1}\dotsm F(\cT_k)^{\ell_k}}
\notag\\&
=\sum_{k=0}^\infty p_k \sum \binom{\ell}{\ell_0,\dots,\ell_k}
B(z)^{\odot\ell_0}\odot\E \Bigsqpar{z^{|\cT|} F(\cT_1)^{\ell_1}\dotsm F(\cT_k)^{\ell_k}}
\notag\\&
=\sum_{k=0}^\infty p_k \sum \binom{\ell}{\ell_0,\dots,\ell_k}
B(z)^{\odot\ell_0}\odot\E \Bigsqpar{z\prod_{i=1}^k \bigpar{z^{|\cT_i|} F(\cT_i)^{\ell_i}}}
\notag\\&
=\sum_{k=0}^\infty p_k \sum \binom{\ell}{\ell_0,\dots,\ell_k}
B(z)^{\odot\ell_0}\odot \Bigsqpar{z\prod_{i=1}^k\E\bigsqpar{ z^{|\cT_i|} F(\cT_i)^{\ell_i}}}
\notag\\&
=\sum_{k=0}^\infty p_k \sum \binom{\ell}{\ell_0,\dots,\ell_k}
B(z)^{\odot\ell_0}\odot \Bigsqpar{z\prod_{i=1}^k M_{\ell_i}(z)}.
\end{align}
We consider the terms where $\ell_i=\ell$ for some $1\le i\le k$
separately.
In this case, $\ell_0=0$ and $\ell_j=0$ for $j\neq i$, and thus the combined
contribution of these $k$ terms is, recalling $M_0(z)=y(z)$ and \eqref{Phi},
\begin{align}\label{lh3}
\sum_{k=1}^\infty p_k k \bigsqpar{z M_\ell(z) y(z)^{k-1}}
=z M_\ell(z) \sum_{k=1}^\infty p_k k y(z)^{k-1}
=z M_\ell(z) \Phi'(y(z)).
\end{align}
Let
$\sumx$ denote the sum over the remaining terms, \ie, the terms with
$\ell_1,\dots,\ell_k<\ell$, and
define
\begin{align}\label{lh4}
R_\ell(z):=
\sum_{k=0}^\infty p_k \sumx \binom{\ell}{\ell_0,\dots,\ell_k}
B(z)^{\odot\ell_0}\odot \Bigsqpar{z\prod_{i=1}^k M_{\ell_i}(z)}.
\end{align}
Using \eqref{lh3}--\eqref{lh4}, we can write \eqref{lh2} as
\begin{align}\label{lh5}
M_\ell(z) = z\Phi'\yz M_\ell(z) + R_\ell(z).
\end{align}
Moreover, differentiating \eqref{ma} yields
\begin{align}\label{y'}
y'(z)=\Phi\yz+z\Phi'\yz y'(z)
\end{align}
and thus, using \eqref{ma} again,
\begin{align}\label{y'a}
\bigpar{1-z\Phi'\yz}y'(z)
= \Phi\yz = y(z)/z
.\end{align}
Hence, \eqref{lh5} yields
\begin{align}\label{lh6}
M_\ell(z) = \frac{R_\ell(z)}{1-z\Phi'\yz}
= \frac{zy'(z)}{y(z)}R_\ell(z).
\end{align}
Finally, in each term in the sum $\sumx$ in \eqref{lh4}, let $m\ge0$ be the
number of $\ell_1,\dots,\ell_k$ that equal 0.
By symmetry, we may assume that $\ell_1,\dots,\ell_m\ge1$ and
$\ell_{m+1}=\dots=\ell_k=0$, and multiply by the symmetry factor $\binom
km$. Thus,
\begin{align}\label{lh7}
R_\ell(z)&=
\sum_{k=0}^\infty p_k \sum_{m=0}^k\binom km \sumxx \binom{\ell}{\ell_0,\dots,\ell_m}
B(z)^{\odot\ell_0}\odot \Bigsqpar{z \Bigpar{\prod_{i=1}^m M_{\ell_i}(z)} y(z)^{k-m}}
\notag\\&
=
\sum_{m=0}^\infty\sumxx \binom{\ell}{\ell_0,\dots,\ell_m}
B(z)^{\odot\ell_0}\odot \Bigsqpar{z \Bigpar{\prod_{i=1}^m M_{\ell_i}(z)}
\sum_{k = m}^{\infty} p_k \binom km y(z)^{k-m}}
\notag\\&
=\sum_{m=0}^\infty\sumxx \binom{\ell}{\ell_0,\dots,\ell_m}
B(z)^{\odot\ell_0}\odot \Bigsqpar{z \Bigpar{\prod_{i=1}^m M_{\ell_i}(z)}
\frac{1}{m!}\Phi\xxm\bigpar{ y(z)}}
.\end{align}
The result \eqref{lh} follows from \eqref{lh6} and \eqref{lh7}, noting that
the sum $\sumxx$ is empty if $m>\ell$.
\end{proof}
\subsection{The mean}
For $\ell=1$, \eqref{lh} contains only the term $m=0$ and thus
$\ell_0=\ell=1$.
Hence, \refL{LH} yields, recalling \eqref{ma},
\begin{align}\label{sx1}
M_1(z)=\frac{zy'(z)}{y(z)}\cdot \bigpar{B(z)\odot \bigsqpar{z\Phi(y(z))}}
=\frac{zy'(z)}{y(z)}\cdot\bigpar{ B(z)\odot y(z)}
.\end{align}
Let us first consider the factor $zy'(z)/y(z)$.
It follows from \eqref{ma} that $y(z)=0$ implies $z=0$, and thus $z/y(z)$ is
analytic in any domain where $y(z)$ is.
Hence, $zy'(z)/y(z)$ is \gda, since $y(z)$ is.
Moreover, by Cauchy's estimates as in \cite[Theorem 6]{FillFK},
\eqref{yz1} implies, as $z\to1$,
\begin{align}\label{sx2}
y'(z)=2\qqw\gs\qw(1-z)\qqw+o\bigpar{|1-z|\qqw}.
\end{align}
Consequently,
\begin{align}\label{sx3}
\frac{z y'(z)}{y(z)}=2\qqw\gs\qw(1-z)\qqw+o\bigpar{|1-z|\qqw}.
\end{align}
We turn to the second factor $B(z)\odot y(z)$.
We consider first the case
\begin{align}
\label{bn1}
f(n)=
b_n=n^\ga,
\qquad n\ge1,
\end{align}
for some $\ga\in\bbC$;
then
$F=F_\ga$ and, by \eqref{Xn},
\begin{align}\label{bnf}
F(\ctn)=X_n(\ga).
\end{align}
By \eqref{bn1} and \eqref{Li}, $B(z)=\Li_{-\ga}(z)$, a
polylogarithm function, and thus \eqref{li} yields,
at least for $\rga>-1$,
\begin{align}
\label{Bo}
B(z) = \gG(1+\ga)(1-z)^{-\ga-1}+\oz{-\rga-1}.
\end{align}
Furthermore,
by the definitions,
\begin{align}\label{by}
B(z)\odot y(z)
= \sum_{n=1}^\infty b_n q_n z^n
=\sum_{n=1}^\infty q_n n^\ga z^n
=
\E \bigsqpar{|\cT|^\ga z^{|\cT|}}
.\end{align}
\begin{lemma}\label{LC1>}
Let $\rga>\frac12$ and let $b_n:=n^{\ga}$.
Then, as $z\to1$ in some \GDD,
\begin{align}\label{lc1>}
M_1(z) =
\frac{\gs\qww}{2\sqrt\pi}\gG\bigpar{\ga-\tfrac12}
(1-z)^{-\ga}
+\oz{-\rga}.
\end{align}
\end{lemma}
\begin{proof}
By \eqref{Bo} and \eqref{yz1} together with \refLs{LFFK} and \ref{LIH2},
and the fact that
$B(z)\odot1=0$,
\begin{align}\label{sixten}
B(z) \odot y(z )
&=
-\gG(1+\ga)(1-z)^{-\ga-1}\odot \sqrt2\gs\qw (1-z)\qq +\oz{-\rga+\frac12}
\notag\\&
=- 2\qq\gs\qw\frac{\gG(\ga-\half)}{\gG(-\frac12)}(1-z)^{-\ga+\half}
+\oz{-\rga+\frac12}.
\end{align}
The result follows by \eqref{sx1} and \eqref{sx3}.
\end{proof}
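To illustrate \eqref{lc1>}, take $\ga=1$; then $X_n(1)=\sum_{v\in\tn}|\tnv|$ is, up to an additive $n$, the total path length of $\tn$. Since $\gG(\frac12)=\sqrt\pi$, \eqref{lc1>} becomes
\begin{equation*}
M_1(z) = \tfrac12\gs\qww(1-z)\qw+\oz{-1},
\end{equation*}
and singularity analysis (as in \eqref{tm1} and \eqref{morm} below) then recovers the classical asymptotics $\E X_n(1)\sim\sqrt{\pi/2}\,\gs\qw n^{3/2}$ for the expected total path length.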
\subsection{The mean when $0<\rga<\frac12$}\label{SSmean<}
Consider now the case
$\rga<\frac12$.
If we still take $b_n=n^\ga$ as in \eqref{bn1},
then \eqref{by} and \eqref{pct} show that $B(z)\odot y(z)$ is continuous in
the closed unit disc,
and a comparison with \eqref{mu} yields
\begin{align}\label{by1}
(B\odot y)(1) = \E |\cT|^\ga = \mu(\ga)
.\end{align}
Hence, \eqref{sixten} cannot hold, since the \rhs{} tends to 0 as $z\to1$.
Actually, it follows from the arguments below that
the leading term in $B(z)\odot y(z)$ is the constant
$\mu(\ga)$, which by \eqref{sx1} and singularity analysis corresponds to the
fact that the leading term in \eqref{te-} is $\mu(\ga)n$.
We recall from \refS{S:intro} that when $\rga<\frac12$, we want to subtract
this term. In the present setting, we achieve this by
modifying \eqref{bn1} and
instead taking
\begin{align}
\label{bn2}
f(n)=b_n:=n^\ga-\mu(\ga)
.\end{align}
Then \eqref{bnf} is modified to
\begin{align}\label{bnf2}
F(\ctn)=
\sum_{v\in\tn}\bigsqpar{|\tnv|^\ga-\mu(\ga)}
=X_n(\ga)-\mu(\ga)n,
\end{align}
and
\eqref{by} is modified to
\begin{align}\label{by2}
B(z)\odot y(z)
=\sum_{n=1}^\infty q_n [n^\ga-\mu(\ga)] z^n
=
\E \bigsqpar{|\cT|^\ga z^{|\cT|}}-\mu(\ga)y(z)
.\end{align}
In particular,
\begin{align}\label{by0}
(B\odot y)(1)=\E|\cT|^\ga- \mu(\ga)=0.
\end{align}
\begin{lemma}\label{LC1<}
Let $0<\rga<\frac12$ and let $b_n:=n^{\ga}-\mu(\ga)$.
As $z\to1$ in some \GDD,
\begin{align}\label{lc1<}
M_1(z) =
\frac{\gs\qww}{2\sqrt\pi}\gG\bigpar{\ga-\tfrac12}
(1-z)^{-\ga}
+\oz{-\rga}.
\end{align}
\end{lemma}
\begin{proof}
We now have, by \eqref{bn2} and \eqref{li},
\begin{align}\label{Box}
B(z)&
=\Li_{-\ga}(z)-\mu(\ga)z(1-z)\qw
\notag\\&
= \gG(1+\ga)(1-z)^{-\ga-1}+\oz{-\rga-1},
\end{align}
just as in \eqref{Bo}.
Then, arguing
as for~\eqref{sixten}
using \eqref{yz1} and \refLs{LFFK} and \ref{LIH2}
now yields
\begin{align}\label{sixten2}
B(z)&\odot y(z )
=
-\gG(1+\ga)(1-z)^{-\ga-1}\odot \sqrt2\gs\qw (1-z)\qq +P_1(z)
+\oz{-\rga+\frac12}
\notag\\&
=- 2\qq\gs\qw\frac{\gG(\ga-\half)}{\gG(-\frac12)}(1-z)^{-\ga+\half}
+P_2(z)
+\oz{-\rga+\frac12},
\end{align}
where $P_1(z),P_2(z)\in\cP_{\frac12-\rga}$ and thus are constants.
Letting $z\to1$ in \eqref{sixten2}
shows that $P_2(z)=(B\odot y)(1)=0$, by \eqref{by0}.
Hence, the result in \eqref{sixten} holds in the present case too,
and the result follows again by \eqref{sx1} and \eqref{sx3}.
\end{proof}
\subsection{Higher moments}
In the remainder of this \refS{Smom}, we assume that $\rga>0$,
and that we have chosen $b_n$ by
\eqref{bn1} or \eqref{bn2}
so that
\begin{align}\label{bn12}
b_n:=
\begin{cases}
n^\ga,& \rga\ge\frac12,
\\
n^\ga-\mu(\ga),& 0<\rga<\frac12.
\end{cases}
\end{align}
In the present subsection we also assume $\rga\neq\frac12$.
We need one more general lemma.
\begin{lemma}
\label{LC}
Under our standing assumptions $\E\xi=1$ and\/ \mbox{$0<\Var\xi<\infty$},
the function $\Phi\xxm(y(\cdot))$ is \gda{} for every $m\ge0$, and as $z\to1$ in some \GDD,
\begin{align}\label{lc}
\Phi\xxm\yz
=
\begin{cases}
O(1), & m\le 2,
\\
\oz{1-\frac{m}{2}}, & m\ge3.
\end{cases}
\end{align}
\end{lemma}
\begin{proof}
As noted at the beginning of \refS{S:more_notation},
$y(z)$ is \gda.
It follows from~\eqref{yz1} that for
some \GDD{} $\gD_1$,
if $z\in\gD_1$
with $|1-z|\le\eps$, for some sufficiently small $\eps>0$, then
\begin{align}\label{ca1}
|y(z)|<1-c|1-z|\qq.
\end{align}
Moreover, the definition \eqref{yz} implies that $|y(z)|\le1$ for $|z|=1$
with strict inequality unless $z=1$.
Hence, by continuity, for some $\gd,\eta>0$, $|y(z)|\le 1-\eta$ when
$z\in\gD_1$, $|1-z|\ge\eps$, and $|z|\le1+\gd$.
It follows that \eqref{ca1} holds (with a new $c>0$) for all $z$ in the
\GDD{} $\gD_2:=\set{z\in\gD_1:|z|<1+\gd}$.
In particular, $|y(z)|<1$ in $\gD_2$ and thus $\Phi\xxm\yz$ is
analytic in $\gD_2$.
The assumption $\E\xi^2<\infty$ implies that $\Phi$, $\Phi'$ and $\Phi''$ are
bounded and continuous functions on the closed unit disc. Hence, \eqref{lc}
holds for $m\le2$.
Now suppose $m\ge3$.
Since $\Phi''$ is continuous, we have $\Phi''(z)-\Phi''(1)=o(1)$ as $z\to1$
with $|z|<1$. Hence it follows from Cauchy's estimates that
\begin{align}\label{ca2}
\Phi\xxm(z) = o\bigpar{(1-|z|)^{2-m}}
\qquad \text{as $z\to 1$ with $|z|<1$}
.\end{align}
The result \eqref{lc} for $m\ge3$ follows from \eqref{ca2} and \eqref{ca1}.
\end{proof}
\begin{lemma}\label{LC2}
Assume $\rga\in(0,\frac12)\cup(\frac12,\infty)$ and that \eqref{bn12} holds.
Then, for every $\ell\ge1$,
$M_\ell(z)$ is \gda, and
as $z\to1$ in some \GDD,
\begin{align}\label{lc2}
M_\ell(z) =
\kkk_\ell\gs^{-\ell-1}(1-z)^{-\ell(\ga+\frac12)+\frac12}
+\oz{-\ell(\rga+\frac12)+\frac12},
\end{align}
where
the constants $\kkk_\ell$ are given recursively by
\begin{align}\label{kkk1}
\kkk_1&
= \frac{1}{2\sqrt\pi}\gG\bigpar{\ga-\tfrac12},
\\\label{kkk2}
\kkk_\ell&=
2^{-3/2} \sum_{j=1}^{\ell-1} \binom{\ell}{j}\kkk_j\kkk_{\ell-j}
+ 2\qqw \ell \frac{\gG\bigpar{\ell(\ga+\frac12)-1}}
{\gG\bigpar{(\ell-1)(\ga+\frac12)-\frac12}}\kkk_{\ell-1}
.\end{align}
\end{lemma}
The \GDD{} may depend on $\ell$.
We write $\kkk_\ell$ in \eqref{kkk1}--\eqref{kkk2} as $\kkk_\ell(\ga)$
when we want to emphasize the dependence on $\ga$.
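For instance, the first step of the recursion, for $\ell=2$, reads
\begin{equation*}
\kkk_2
=2\qqw\kkk_1^2
+2\qq\,\frac{\gG(2\ga)}{\gG(\ga)}\,\kkk_1,
\qquad
\kkk_1=\frac{\gG(\ga-\frac12)}{2\sqrt\pi}.
\end{equation*}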
\begin{proof}
We use induction on $\ell$, based on \refL{LH}.
First, this shows that $M_\ell$ is \gda, using
the fact that~$B$ and, by \refL{LC}, $\Phi\xxm(y(\cdot))$ are,
together with \refL{LFFK}.
To show \eqref{lc2} by induction, we note that
the base case $\ell=1$ is \refLs{LC1>} and \ref{LC1<}.
Assume thus $\ell\ge2$, and
let $A:=-\ell(\ga+\frac12)+\frac12$
be the exponent of $1 - z$ in \eqref{lc2}; note that $\REA<-\frac12$.
Consider one of the terms in \eqref{lh}.
By the induction hypothesis and \refL{LC}, we have
\begin{align}\label{lc1qq}
& zM_{\ell_1}(z)\dotsm M_{\ell_m}(z)\Phi\xxm\yz
= O\bigpar{|1-z|^{-\sum_{i=1}^m \ell_i(\Re \ga+\frac12)+\frac{m}{2}}\Phi\xxm\yz}
\notag\\
&\hskip4em
=
\begin{cases}
O\bigpar{|1-z|^{-(\ell-\ell_0)(\Re \ga+\frac12)+\frac{m}2}}, & m\le 2,
\\
o\bigpar{|1-z|^{-(\ell-\ell_0)(\Re \ga+\frac12)+1}}, & m\ge3.
\end{cases}
\end{align}
Since $\ell - \ell_0 \ge m$, the exponent here is
\begin{align}
\label{exp_real_part}
-(\ell-\ell_0)(\rga+\tfrac12)+\frac{m\bmin 2}{2}
\le
-m(\rga+\tfrac12)+\frac{m\bmin 2}2
\le-m\,\rga\le0.
\end{align}
Furthermore,
\eqref{Bo} and \eqref{Box} show that,
for both $\rga>\frac12$ and $\rga<\frac12$,
\begin{align}\label{Boy}
B(z)=\Oz{-\rga-1}
\end{align}
and thus \refL{LFFK}
applies $\ell_0$ times and yields
\begin{align}\label{lc3}
&B(z)^{\odot\ell_0} \odot
\bigsqpar{zM_{\ell_1}(z)\dotsm M_{\ell_m}(z)\Phi\xxm\yz}
\notag\\
&\hskip4em
=
\begin{cases}
O\bigpar{|1-z|^{-(\ell-\ell_0)(\rga+\frac12)+\frac{m}2-\ell_0\rga}}, & m\le 2,
\\
o\bigpar{|1-z|^{-(\ell-\ell_0)(\rga+\frac12)+1-\ell_0\rga}}, & m\ge3.
\end{cases}
\end{align}
The exponent here is
\begin{align}\label{lc4}
-(\ell-\ell_0)(\rga+\tfrac12)+\frac{m\bmin2}2-\ell_0\rga
= -\ell(\rga+\tfrac12)+\frac{\ell_0+m\bmin2}2.
\end{align}
For $m\ge3$, this is at least $-\ell(\rga+\frac12)+1=\REA+\frac12$,
and thus the term is
\begin{align}\label{lc5}
\oz{\REA+\frac12}.
\end{align}
We will see that this contributes only to the error term in \eqref{lc2},
so such terms may be ignored.
Similarly, for every term with $m\le2$ and $\ell_0+m > 2$,
the exponent considered in
\eqref{lc4} is strictly larger than $\REA+\frac12$, and thus
such terms also satisfy \eqref{lc5} and may be ignored.
If $m=1$, then $\ell_1<\ell$, and thus $\ell_0\ge1$.
Hence, the only remaining terms to consider are
(1) $m=0$ and thus $\ell_0=\ell$;
(2) $m=1$ and $\ell_0=1$;
(3) $m=2$ and $\ell_0=0$.
Furthermore, also the term with $m=0$ can be ignored, since it is
\begin{align}\label{lc6}
B(z)^{\odot\ell} \odot \bigsqpar{z\Phi\yz}
&=
B(z)^{\odot\ell} \odot y(z)
\notag\\&
=
B(z)^{\odot\ell} \odot 1
+ B(z)^{\odot\ell} \odot\bigpar{y(z)-1}
,\end{align}
where $B(z)^{\odot\ell} \odot1$ vanishes
and $y(z)-1=\Oz{\frac12}=\oz{0}$ by \eqref{yz1};
hence \refL{LFFK}\ref{LFFKo} yields
\begin{align}\label{lc7}
B(z)^{\odot\ell} \odot \bigsqpar{z\Phi\yz}
=\oz{-\ell\,\rga}
=\oz{\REA+\frac12}.
\end{align}
Consequently, recalling
\eqref{lh7}, we have
\begin{align}\label{lc8}
R_\ell(z) = \ell B(z) \odot \bigsqpar{z M_{\ell-1}(z) \Phi'\yz}
&+ \frac12\sum_{j=1}^{\ell-1}\binom{\ell}{j} z M_j(z)M_{\ell-j}(z) \Phi''\yz
\notag\\
&+\oz{\REA+\frac12}
.\end{align}
Since $\Phi'$ is continuous in the unit disc with $\Phi'(1)=1$,
the induction hypothesis implies that
\begin{align}\label{lc9}
z M_{\ell-1}(z)\Phi'\yz =
\kkk_{\ell-1}\gs^{-\ell}(1-z)^{-(\ell-1)(\ga+\frac12)+\frac12}
+\oz{-(\ell-1)(\rga+\frac12)+\frac12}.
\end{align}
Hence, \eqref{Bo}, \eqref{Box}, and Lemmas~\ref{LFFK} and~\ref{LIH2} yield
\begin{multline}\label{lc10}
B(z)\odot\bigsqpar{z M_{\ell-1}(z)\Phi'\yz}
\\=
\kkk_{\ell-1}\gs^{-\ell}\frac{\gG\bigpar{\ell(\ga+\frac12)-1}}
{\gG\bigpar{(\ell-1)(\ga+\frac12)-\frac12}}(1-z)^{A+\frac12}
+\oz{\REA+\frac12}.
\end{multline}
Similarly, the induction hypothesis yields, using $\Phi''(1)=\gss$,
\begin{multline}\label{lc11}
\sum_{j=1}^{\ell-1}\binom{\ell}{j} z M_j(z)M_{\ell-j}(z) \Phi''\yz
\\
=\sum_{j=1}^{\ell-1}\binom{\ell}{j} \kkk_j\kkk_{\ell-j}\gs^{-\ell}(1-z)^{A+\frac12}
+\oz{\REA+\frac12}.
\end{multline}
The result \eqref{lc2} now follows from
\eqref{lh6}, \eqref{sx3}, and \eqref{lc8}--\eqref{lc11}.
\end{proof}
\subsection{Mixed moments. Proof of \refT{TXmom}}
\label{SSmom-mix}
We may extend \refT{TXmom} to mixed moments of $X_n(\ga_1),\dots,X_n(\ga_m)$,
for several given $\ga_1,\dots,\ga_m$,
using the
same arguments with only notational differences.
For convenience, define
\begin{align}\label{ebbe}
\QX_n(\ga):=
\begin{cases}
n^{-\ga-\frac12}X_n(\ga), & \rga>\frac12,
\\
n^{-\ga-\frac12}\bigpar{X_n(\ga)-\mu(\ga)n}, & 0<\rga<\frac12.
\end{cases}
\end{align}
We consider for simplicity only two different
values of $\ga$; the general case is similar but left to the reader.
\begin{theorem}\label{Tmix}
Let $\rga_1,\rga_2\in(0,\frac12)\cup(\frac12,\infty) $,
and write $\ga_i':=\ga_i+\frac12$.
Then, for any integers $\ell_1,\ell_2\ge0$
with $\ell_1+\ell_2\ge1$,
\begin{align}\label{tmix}
\gs^{\ell_1+\ell_2}\E\bigsqpar{\QX_n(\ga_1)^{\ell_1}\QX_n(\ga_2)^{\ell_2}}
\to
\E \bigsqpar{Y(\ga_1)^{\ell_1}Y(\ga_2)^{\ell_2}}
=\frac{\sqrt{2\pi}}{\gG(\ell_1\ga_1'+\ell_2\ga_2'-\frac12)}\kkk_{\ell_1,\ell_2},
\end{align}
where
$\kkk_{1,0}=\kkk_1(\ga_1)$
and
$\kkk_{0,1}=\kkk_1(\ga_2)$
are given by \eqref{kkk1},
and,
for $\ell_1+\ell_2\ge2$,
\begin{align}\label{kkkll}
\kkk_{\ell_1,\ell_2}&
=
2^{-3/2} \sum_{0<j_1+j_2<\ell_1+\ell_2}
\binom{\ell_1}{j_1}\binom{\ell_2}{j_2}\kkk_{j_1,j_2}\kkk_{\ell_1-j_1,\ell_2-j_2}
\notag\\&\qquad
+ 2\qqw \ell_1 \frac{\gG\bigpar{\ell_1\ga_1'+\ell_2\ga_2'-1}}
{\gG\bigpar{\ell_1\ga_1'+\ell_2\ga_2'-1-\ga_1}}\kkk_{\ell_1-1,\ell_2}
\notag\\&\qquad
+ 2\qqw \ell_2 \frac{\gG\bigpar{\ell_1\ga_1'+\ell_2\ga_2'-1}}
{\gG\bigpar{\ell_1\ga_1'+\ell_2\ga_2'-1-\ga_2}}\kkk_{\ell_1,\ell_2-1}
.\end{align}
\end{theorem}
\begin{proof}[Proof of \refTs{TXmom} and \ref{Tmix}]
For a given $\ga$, we continue to use the choice
\eqref{bn12} of $b_n$. This yields
\eqref{bnf} ($\rga\ge\frac12$) or \eqref{bnf2} ($\rga<\frac12$),
i.e., now writing $\dF_\ga$ for $F$,
\begin{align}\label{abba}
\dF_\ga(\ctn)=
\begin{cases}
X_n(\ga), & \rga\ge\frac12,
\\
{X_n(\ga)-\mu(\ga)n}, & 0<\rga<\frac12.
\end{cases}
\end{align}
Hence, in both cases,
$\QX_n(\ga)=n^{-\ga-\frac12}\dF_\ga(\ctn)$, and
\refT{TX} yields
\begin{align}\label{frater}
\QX_n(\ga)=n^{-\ga-\frac12}\dF_\ga(\ctn)\dto \gs\qw Y(\ga);
\end{align}
moreover, this holds jointly for any number of $\ga$
by the proof of \refT{TX}.
The asymptotic formula \eqref{lc2} yields,
by \eqref{Mell} and standard singularity analysis
\cite[Chapter VI]{FS},
\begin{align}\label{tm1}
q_n m_n\xxl = [z^n]M_\ell(z) \sim \kkk_\ell \gs^{-\ell-1}
\frac{1}{\gG \bigpar{\ell(\ga+\frac12)-\frac12}} n^{\ell(\ga+\frac12)-\frac32}.
\end{align}
Together with \eqref{pct} (with $h = 1$) for $q_n$, this yields
\begin{align}\label{morm}
m_n\xxl
\sim
\frac{\sqrt{2\pi}\kkk_\ell \gs^{-\ell}}
{\gG \bigpar{\ell(\ga+\frac12)-\frac12}} n^{\ell(\ga+\frac12)}.
\end{align}
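Concretely, the step from \eqref{tm1} to \eqref{morm} is division by the local asymptotic $q_n\sim(\sqrt{2\pi}\,\gs)\qw n^{-3/2}$ furnished by \eqref{pct}:
\begin{align*}
m_n\xxl
=\frac{q_n m_n\xxl}{q_n}
\sim \sqrt{2\pi}\,\gs\,n^{3/2}\cdot
\kkk_\ell \gs^{-\ell-1}\frac{n^{\ell(\ga+\frac12)-\frac32}}{\gG \bigpar{\ell(\ga+\frac12)-\frac12}}
=\frac{\sqrt{2\pi}\,\kkk_\ell \gs^{-\ell}}{\gG \bigpar{\ell(\ga+\frac12)-\frac12}}\,
n^{\ell(\ga+\frac12)}.
\end{align*}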
Recall that $m_n\xxl:=\E \dF_\ga(\ctn)^\ell$ by \eqref{mell}.
Hence,
\eqref{morm} can
be written as
\begin{align}\label{farf}
\gs^\ell \E \QX_n(\ga)^\ell\to
\frac{\sqrt{2\pi}}
{\gG \bigpar{\ell(\ga+\frac12)-\frac12}} \kkk_\ell
=:\kk_\ell
,\end{align}
where we thus denote the \rhs{} by $\kk_\ell $.
The recursion \eqref{kk1}--\eqref{kk2} then follows from
\eqref{kkk1}--\eqref{kkk2}.
This shows most parts of \refT{TXmom},
but it remains to show that the $\kk_\ell$'s (the limits of moments)
are the moments of the limit (in distribution) $Y(\ga)$ of $\gs \QX_n(\ga)$.
For real $\ga$, this follows from \eqref{farf} by a standard argument,
but for general complex $\ga$ we want to consider absolute moments, so we
postpone the proof of this, and first turn to \refT{Tmix}.
Define, in analogy with \eqref{mell}--\eqref{Mell},
\begin{align}
\label{mell2}
m_n\xxll&:=\E\bigsqpar{\dF_{\ga_1}(\ctn)^{\ell_1}\dF_{\ga_2}(\ctn)^{\ell_2}},
\\\label{Mell2}
M_{\ell_1,\ell_2}(z)
&:=\E \bigsqpar{\dF_{\ga_1}(\cT)^{\ell_1}\dF_{\ga_2}(\cT)^{\ell_2} z^{|\cT|}}
=\sum_{n=1}^\infty q_n m\xxll_n z^n
.\end{align}
It is straightforward to extend \refL{LH} to the following, valid for every $\ell,r\ge0$
with
\mbox{$\ell+r\ge1$}:
\begin{multline}\label{lhz}
M_{\ell,r}(z)
=
\frac{z y'(z)}{y(z)}
\sum_{m=1}^{\ell+r} \frac{1}{m!}\sumxx \binom{\ell}{\ell_0,\dots,\ell_m}
\binom{r}{r_0,\dots,r_m}
B_{\ga_1}(z)^{\odot\ell_0}
\\
\odot
B_{\ga_2}(z)^{\odot r_0} \odot
\bigsqpar{zM_{\ell_1,r_1}(z)\dotsm M_{\ell_m,r_m}(z)\Phi\xxm\yz},
\end{multline}
where $\sumxx$ is the sum over all pairs of
$(m+1)$-tuples $(\ell_0,\dots,\ell_m)$ and $(r_0,\dots,r_m)$
of non-negative integers that sum to $\ell$ and $r$, respectively,
such that $1\le\ell_i+r_i<\ell + r$ for every $i\ge1$.
Then, the inductive proof of \refL{LC2} is easily extended to show that
in some \GDD{} (possibly depending on $\ell_1$ and $\ell_2$)
\begin{align}\label{lc2ll}
M_{\ell_1,\ell_2}(z) =
\kkk_{\ell_1,\ell_2}\gs^{-\ell_1-\ell_2-1}(1-z)^{-\ell_1\ga_1'-\ell_2\ga_2'+\frac12}
+\oz{-\ell_1\rga_1'-\ell_2\rga_2'+\frac12},
\end{align}
with $\kkk_{\ell_1,\ell_2}$ given by \eqref{kkk1} and \eqref{kkkll}.
Singularity analysis yields, as for the special case \eqref{morm},
\begin{align}\label{varin}
\gs^{\ell_1+\ell_2} \E \bigsqpar{\QX_n(\ga_1)^{\ell_1}\QX_n(\ga_2)^{\ell_2}}
\to
\frac{\sqrt{2\pi}}
{\gG \bigpar{\ell_1\ga_1'+\ell_2\ga_2'-\frac12}} \kkk_{\ell_1,\ell_2}
=:\kk_{\ell_1,\ell_2}
.\end{align}
In particular, for any $\ga$ in the domain, we may take $\ga_1:=\ga$ and
$\ga_2:=\bga$. Then \eqref{varin} shows, in particular,
that for any integer $\ell\ge1$,
$\E\bigabs{\QX_n(\ga)}^{2\ell}=
\E \bigsqpar{\QX_n(\ga)^{\ell}\QX_n(\bga)^{\ell}}$
converges as $\ntoo$.
By a standard argument,
see \eg{} \cite[Theorems 5.4.2 and 5.5.9]{Gut},
this implies uniform
integrability of each smaller power of $\QX_n(\ga)$,
which together with \eqref{frater} implies
convergence of all lower moments to the moments of the limit $\gs\qw Y(\ga)$.
This completes the proof of
\eqref{mtx1}--\eqref{mtx<} with
\begin{align}
\label{tm2}
\kk_\ell:=\E Y(\ga)^\ell
=\frac{\sqrt{2\pi}}{\gG \bigpar{\ell(\ga+\frac12)-\frac12}}
\kkk_\ell.
\end{align}
Similarly, using H\"older's inequality, the sequence
$\QX_n(\ga_1)^{\ell_1}\QX_n(\ga_2)^{\ell_2}$ is uniformly integrable for
every fixed $\ga_1,\ga_2,\ell_1,\ell_2$, and \eqref{tmix} follows from
the joint convergence in~\eqref{frater} and~\eqref{varin}.
\end{proof}
\begin{example}\label{Emix11}
Taking $\ell_1=\ell_2=1$ in \eqref{kkkll}, we obtain,
with obvious notation
and using \eqref{kkk1},
\begin{align}\label{kkk11}
& \kkk_{1,1}(\ga,\gb)
= 2^{-\frac12}\kkk_1(\ga)\kkk_1(\gb)
+ 2^{-\frac12} \frac{\gG(\ga+\gb)}{\gG(\gb)}\kkk_1(\gb)
+ 2^{-\frac12} \frac{\gG(\ga+\gb)}{\gG(\ga)}\kkk_1(\ga)
\notag\\&
=
\frac{2^{-\frac{5}2}}{\pi}\gG(\ga-\tfrac12)\gG(\gb-\tfrac12)
+\frac{2^{-\frac{3}2}}{\sqrt\pi}\frac{\gG(\ga+\gb)\gG(\gb-\tfrac12)}{\gG(\gb)}
+\frac{2^{-\frac{3}2}}{\sqrt\pi}\frac{\gG(\ga+\gb)\gG(\ga-\tfrac12)}{\gG(\ga)}.
\end{align}
In particular, taking $\gb=\bga$ and using \eqref{tmix} and \eqref{varin},
\begin{align}\label{e|2|}
\E|Y(\ga)|^2&
= \kk_{1,1}(\ga,\bga)
=\frac{\sqrt{2\pi}}{\gG(2\rga+\frac12)}
\kkk_{1,1}(\ga,\bga)
\notag\\&
=
\frac{|\gG(\ga-\tfrac12)|^2}{4\sqrt{\pi}\gG(2\rga+\frac12)}
+\frac{\gG(2\rga)}{\gG(2\rga+\frac12)}\Re\frac{\gG(\ga-\tfrac12)}{\gG(\ga)}.
\end{align}
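As a quick numerical check, at the real point $\ga=1$ (where $Y(1)$ is real, so the absolute value is immaterial), using $\gG(\tfrac12)=\sqrt\pi$, $\gG(2)=1$ and $\gG(\tfrac52)=\tfrac{3\sqrt\pi}{4}$, formula \eqref{e|2|} evaluates to
\begin{align*}
\E|Y(1)|^2
=\frac{\pi}{4\sqrt{\pi}\cdot\frac{3\sqrt{\pi}}{4}}
+\frac{1}{\frac{3\sqrt{\pi}}{4}}\cdot\sqrt{\pi}
=\frac13+\frac43
=\frac53.
\end{align*}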
\end{example}
\begin{example}\label{Emix12}
As mentioned in \refE{Ega=1}, for
the case of joint moments of
$Y(1)$ and $Y(2)$, \refT{Tmix}
yields the recursion formula given in \cite{SJ146};
the method used there is related to the one used here,
but seems to apply only for integer $\ga$.
\end{example}
\begin{remark}\label{Runique}
The mixed moments of $Y(\ga)$ and $\overline{Y(\ga)}=Y(\bga)$
determine the distribution
of $Y(\ga)$ uniquely, for any $\ga\neq\frac12$ with $\rga>0$.
In fact, there exists $C(\ga)>0$ such that for every $\ell\ge1$,
\begin{align}\label{expb}
\E|Y(\ga)|^\ell \le C(\ga)^\ell \ell!,
\end{align}
and thus $(\Re Y(\ga),\Im Y(\ga))$ has a finite moment generating function
in a neighborhood of the origin. The estimate \eqref{expb} was shown for
real $\ga$ in \cite[Lemma 3.4]{FillK04} (with proof in \cite{FillK04v1});
the general case is similar, considering even $\ell$ and using induction
and \eqref{kkkll}.
The constant $C(\ga)$ in \eqref{expb}
can be taken uniformly bounded on compact subsets of
$H_+\setminus\set{\frac12}$.
Moreover, \eqref{expb} obviously implies the same estimate for
$\tY(\ga)=Y(\ga)-\E Y(\ga)$ [with $C(\ga)$ replaced by $2C(\ga)$], and then
we can argue using analyticity as in the proof of \refL{LU4} below and
conclude that \eqref{expb} holds also for $\tY(\frac12)$, which thus also is
determined by its moments, as noted in \cite{FillK04}.
\end{remark}
\subsection{Uniform estimates}\label{SSuniform}
In this \refS{Smom}, we have so far estimated moments for a fixed $\ga$, or
mixed moments for a fixed set of different $\ga$.
We turn to uniform estimates for $\ga$ in suitable sets.
This is rather straightforward
if $\rga$ stays away from $\frac12$. However, we want
uniformity also for $\rga$ approaching (or equalling) $\frac12$, and this is
more complicated.
For our proofs, we assume throughout the present subsection the
weak moment condition
\begin{align}\label{2+gd}
\E\xi^{2+\gd}<\infty,
\end{align}
for some $\gd>0$.
Throughout this subsection, $\gd$ is fixed;
we assume without loss of generality that $\gd\le1$.
\begin{problem}
Do \refLs{LU1}--\ref{LU4} and \refT{T1mom}
hold without the extra condition \eqref{2+gd}?
(Cf.~\refR{RT1mom}.)
\end{problem}
We begin with some preliminaries, starting with a standard estimate that we include for completeness.
\begin{lemma}\label{Lgd}
If \eqref{2+gd} holds with $0<\gd\le1$, then
\begin{align}\label{lgd}
\Phi(z)=z+\tfrac12\gss (1-z)^2 + O\bigpar{|1-z|^{2+\gd}},
\qquad |z|\le1.
\end{align}
\end{lemma}
\begin{proof}
Let $z=1-w$, with $|z|\le1$. Taylor's theorem yields the two estimates,
uniformly for $|z|\le1$ and $k\ge0$,
\begin{align}\label{ru1}
z^k &=(1-w)^k=1-kw + O\bigpar{k^2|w|^2}
=1-kw+\binom k2 w^2 + O\bigpar{k^2|w|^2},
\\\label{ru2}
z^k&=(1-w)^k=1-kw+\binom k2 w^2 + O\bigpar{k^3|w|^3} ,
\end{align}
and thus, taking a geometric mean of the $O$ terms in \eqref{ru1} and
\eqref{ru2},
\begin{align}\label{ru3}
z^k&=1-kw+\binom k2 w^2 + O\bigpar{k^{2+\gd}|w|^{2+\gd}}.
\end{align}
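Indeed, the remainder $R_k(w):=z^k-1+kw-\binom k2 w^2$ is, by \eqref{ru1} and \eqref{ru2}, simultaneously $O\bigpar{k^2|w|^2}$ and $O\bigpar{k^3|w|^3}$, and for $0\le\gd\le1$,
\begin{align*}
\min\bigpar{k^2|w|^2,\,k^3|w|^3}
\le\bigpar{k^2|w|^2}^{1-\gd}\bigpar{k^3|w|^3}^{\gd}
=k^{2+\gd}|w|^{2+\gd}.
\end{align*}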
Hence, \eqref{Phi} yields, using the assumption~\eqref{2+gd},
\begin{align}
\Phi(z)
= \sum_{k=1}^\infty p_k\Bigsqpar{1-kw+\binom k2 w^2 + O\bigpar{k^{2+\gd}|w|^{2+\gd}}}
= 1- w + \frac{\gss}{2} w^2 + O\bigpar{|w|^{2+\gd}},
\end{align}
which is \eqref{lgd}.
\end{proof}
This enables us to improve \eqref{yz1}.
\begin{lemma}\label{Lgdy}
If \eqref{2+gd} holds with $0<\gd\le1$, then, for $z$ in some \GDD,
\begin{align}\label{lgdy}
y(z)=1-\sqrt2 \gs\qw(1-z)\qq+\Oz{\frac12+\gdd}.
\end{align}
\end{lemma}
\begin{proof}
By \cite[Lemma A.2]{SJ167},
$y(z)$ is analytic in some \GDD{} $\gD$ such that
$|y(z)|<1$ for $z\in\gD$ and \eqref{yz1} holds as $z\to1$ in $\gD$.
To show the improvement \eqref{lgdy},
it suffices to consider $z\in\gD$ close to 1, since the estimate is trivial when
$|1-z|$ is bounded below.
Let $w:=1-y(z)$. By \eqref{yz1} we have
$|w|=\Theta\bigpar{|1-z|^{\frac12}}$.
The functional equation \eqref{ma} and \refL{Lgd} yield
\begin{align}
y(z)/z=\Phi\yz = y(z)+\frac{\gss}2 w^2 + O\bigpar{|w|^{2+\gd}}
= y(z)+\frac{\gss}2 w^2\bigsqpar{1+ O\bigpar{|w|^{\gd}}}
\end{align}
and thus, for $|1-z|$ small,
\begin{align}
\frac{\gss}2 w^2
=\frac{1-z}{z}y(z)\bigsqpar{1+ O\bigpar{|w|^{\gd}}}
=(1-z)\bigsqpar{1+ \Oz{\gd/2}}.
\end{align}
The result \eqref{lgdy} follows.
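Indeed, taking square roots in the last display and choosing the branch consistent with \eqref{yz1},
\begin{align*}
w=\sqrt2\,\gs\qw(1-z)^{\frac12}\bigpar{1+\Oz{\gd/2}}
=\sqrt2\,\gs\qw(1-z)^{\frac12}+\Oz{\frac12+\gdd},
\end{align*}
which is \eqref{lgdy} since $w=1-y(z)$.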
\end{proof}
We need also a uniform version of
\refL{LFFK}\ref{LFFKO}. We state it in a rather general form.
\begin{lemma}
\label{LU}
Let $\cI$ be an arbitrary index set, and suppose that $a_\iota,b_\iota$,
$\iota\in\cI$, are real numbers such that
$\sup_\cI|a_\iota|<\infty$, $\sup_\cI|b_\iota|<\infty$ and
$\sup_\cI(a_\iota+b_\iota+1)<0$.
Suppose that $g_\iota(z)$ and $h_\iota(z)$ are \gdaf{s} such that, in some
fixed \GDD{} $\gD$,
$g_\iota(z)=\Oz{a_\iota}$ and $h_\iota(z)=\Oz{b_\iota}$,
uniformly in $\iota$. Then
\begin{align}
g_\iota(z)\odot h_\iota(z) = \Oz{a_\iota+b_\iota+1},
\end{align}
in some fixed \GDD{} $\gD'$, uniformly in $\iota$.
\end{lemma}
\begin{proof}
This follows from the proof of \cite[Proposition 9]{FillFK},
taking there the same integration contour for all $\iota$.
\end{proof}
As a final preparation, we state a uniform version of a special case
of the asymptotic expansion of polylogarithms by \citet{Flajolet1999},
cf.~\eqref{li}.
A proof is given in \refApp{Apoly}.
\begin{lemma}
\label{LUL1}
For every \GDD{} $\gD$ and every compact set
$K\subset\bbC\setminus\set{1,2,\dots}$
we have
\begin{align}
\label{lul1}
\Li_{\ga}(z) = \gG(1-\ga)(1-z)^{\ga-1} + \Ozo{\rga}
\end{align}
uniformly for $z\in\gD$ and $\ga\in K$.
\end{lemma}
We continue to assume \eqref{bn12}.
We now denote the generating function \eqref{Bz} by $B_\ga(z)$; thus
\begin{align}\label{ululu}
B_\ga(z)=
\begin{cases}
\Li_{-\ga}(z), & \rga\ge\frac12,
\\
\Li_{-\ga}(z)-\mu(\ga)z(1-z)\qw, & \rga<\frac12.
\end{cases}
\end{align}
The following lemma is the central step to establishing uniformity in the
estimates above. (Cf.~\refLs{LC1>} and \ref{LC1<}.)
Note that the lemma does not hold for $\ga=\frac12$; it is easily seen from
\eqref{pct} that $B_{\xfrac12}(z)\odot y(z) =\Theta\bigpar{|\log|1-z||}$ as
$z\upto1$.
\begin{lemma}\label{LU1}
Assume that $\E\xi^{2+\gd}<\infty$.
Let $K$ be a compact subset of $\set{\ga:\rga>0}\setminus\set{\frac12}$.
Then,
\begin{align}\label{lu1}
B_\ga(z)\odot y(z) = \Oz{\frac12-\rga}
\end{align}
in some fixed \GDD,
uniformly for $\ga\in K$.
\end{lemma}
\begin{proof}
We consider three different cases, and therefore define
$K_1:=\set{\ga\in K:\rga\ge\frac12+\gdq}$,
$K_2:=\set{\ga\in K:\frac12\le \rga<\frac12+\gdq}$,
$K_3:=\set{\ga\in K: \rga<\frac12}$.
Estimates of the type $\Oz{a}$ below are valid
in some fixed \GDD, which may change from line to line.
\pfcase1{$\rga\ge\frac12+\gdq$.}
In this range, we have by \refL{LUL1}
\begin{align}\label{u1}
B_\ga(z)=\Li_{-\ga}(z) = \Oz{-\rga-1},
\end{align}
uniformly in $\ga\in K_1$.
Furthermore, $y(z)=1+\Oz{\frac12}$ by \eqref{yz1} (or \refL{Lgdy}),
and $\frac12-\rga\le -\gdq$ for $\ga\in K_1$.
Hence, \refL{LU} yields
\begin{align}\label{u2}
B_\ga(z)\odot y(z)
=
B_\ga(z)\odot \bigpar{y(z)-1}
=\Oz{\frac12-\rga},
\end{align}
uniformly in $\ga\in K_1$.
\pfcase{2 and 3}{$0<\rga<\frac12+\gdq$.}
We have, by \eqref{lgdy} and \eqref{li},
\begin{align}\label{u3}
y(z) = 1- c_1(1-z)\qq + \Oz{\frac12+\gdd}
= 1+ c_2\Li_{3/2}(z)+P(z) + \Oz{\frac12+\gdd},
\end{align}
where $P(z)$ is a polynomial that can be assumed to have degree less than
$\frac12+\gdd$, and thus $P(z)=C_1$, a constant.
Let
\begin{align}\label{u4}
h(z):=y(z)-c_2\Li_{3/2}(z)-1-C_1
=\Oz{\frac12+\gdd}.
\end{align}
Let $\zddz$ denote the differential operator $z\ddz$.
Note the identity
$\zddz(g_1(z) \odot g_2(z)) = \zddz g_1(z) \odot g_2(z)$
and that
$\zddz\Li_\ga(z)=\Li_{\ga-1}(z)$.
Thus,
\begin{align}\label{u5}
\zddz\bigpar{\Li_{-\ga}(z)\odot h(z)}
= {\zddz\Li_{-\ga}(z)\odot h(z)}
= \Li_{-\ga-1}(z)\odot h(z)
.\end{align}
We have $\Li_{-\ga-1}(z)=\Oz{-\rga-2}$ uniformly in $\ga\in K$ by \refL{LUL1},
which together with \eqref{u4} and \refL{LU} yields
\begin{align}\label{u6}
\zddz\bigpar{\Li_{-\ga}(z)\odot h(z)}
&=\Oz{-\rga-\frac12+\gdd}
,\end{align}
uniformly in $\ga\in K_2\cup K_3$.
Furthermore, by \refL{LUL1},
\begin{align}\label{u7}
\zddz\bigpar{ \Li_{-\ga}(z)\odot\Li_{3/2}(z)}&
=\zddz\Li_{-\ga+3/2}(z)
=\Li_{-\ga+1/2}(z)
\notag\\&
= \gG(\ga+\tfrac12) (1-z)^{-\ga-\frac12}+O\bigpar{|1-z|^{-\rga+\frac12}+1}
\end{align}
uniformly in $\ga\in K_2\cup K_3$.
The exponent $-\rga-\frac12+\gdd$ in \eqref{u6} lies in $[-1+\gdq,0)$,
and thus \eqref{u6} and \eqref{u7} yield, after division by $z$,
\begin{align}\label{u8}
\ddz\bigpar{\Li_{-\ga}(z) \odot y(z)}
&=
c_2\ddz\bigpar{\Li_{-\ga}(z) \odot \Li_{3/2}(z)}
+\ddz\bigpar{\Li_{-\ga}(z) \odot h(z)}
\notag\\&
=c_2 \gG(\ga+\tfrac12) (1-z)^{-\ga-\frac12}+\Oz{-\rga-\frac12+\gdd},
\end{align}
again uniformly in $\ga \in K_2 \cup K_3$.
We now consider Cases 2 and 3 separately.
\pfcase2{$\frac12\le \rga<\frac12+\gdq$.}
By integrating \eqref{u8} along a suitable contour, for example from 0 along
the negative real axis to $-|z|$ and then along the circle with radius $|z|$
to~$z$,
\begin{align}\label{u9}
B_\ga(z)\odot y(z)=
{\Li_{-\ga}(z) \odot y(z)}
=c_2 \gG(\ga-\tfrac12) (1-z)^{-\ga+\frac12}+ O(1),
\end{align}
uniformly in $\ga\in K_2$, which implies \eqref{lu1}.
\pfcase{3}{$0<\rga<\frac12$.}
Recall that now
\begin{align}
\label{u11}
B_\ga(z)\odot y(z)=\Li_{-\ga}(z)\odot y(z)-\mu(\ga)y(z),
\end{align}
see \eqref{ululu} and
\eqref{by2}.
The estimate \eqref{yz1} implies, in a smaller \GDD,
\begin{align}
\label{u12}
y'(z)=\Oz{-\frac12}.
\end{align}
Furthermore, $\mu(\ga)=O(1)$ on $K_3$, as a consequence of \refT{TM}.
Hence \eqref{u11}, \eqref{u8}, and \eqref{u12}
imply
\begin{equation}
\label{u13}
\ddz\bigpar{B_{\ga}(z) \odot y(z)}
=c_2 \gG(\ga+\tfrac12) (1-z)^{-\ga-\frac12}+\Oz{-((\rga+\frac12-\gdd)\vee\frac12)}.
\end{equation}
We now have $(B_\ga\odot y)(1)=0$ by \eqref{by0},
and thus \eqref{lu1} follows from \eqref{u13} by integration,
noting that the exponents in \eqref{u13} stay away from $-1$ for $\ga\in K_3$.
\end{proof}
\begin{lemma}
\label{LU2}
Assume that $\E\xi^{2+\gd}<\infty$.
Let $K$ be a compact subset of $\set{\ga:\rga>0}\setminus\set{\frac12}$.
Then,
with notations as in \eqref{Mell} and \eqref{Mell2},
for every $\ell\ge1$,
\begin{align}\label{lu2i}
M_\ell(z) = \Oz{-\ell(\rga+\frac12)+\frac12}
\end{align}
in some fixed \GDD{} (depending on $\ell$)
uniformly for all $\ga\in K$.
More generally,
\begin{align}\label{lu2ii}
M_{\ell_1,\ell_2}(z) = \Oz{-\ell_1\rga_1'-\ell_2\rga_2'+\frac12},
\end{align}
in some fixed \GDD{} (depending on $\ell_1,\ell_2$),
uniformly for all $\ga_1,\ga_2\in K$.
\end{lemma}
\begin{proof}
For \eqref{lu2i},
the case $\ell=1$ follows from \eqref{sx1}, \eqref{sx3}, and \refL{LU1}.
We then proceed by induction as in the proof of \refL{LC2}. [But the induction is now simpler; it
suffices to note that the exponent in \eqref{lc4} is at least $\REA+\frac12$.]
The proof of \eqref{lu2ii} is essentially the same, see the proof of
\refT{Tmix}.
\end{proof}
\begin{lemma}
\label{LU3}
Assume that $\E\xi^{2+\gd}<\infty$.
Let $K$ be a compact subset of $\set{\ga:\rga>0}\setminus\set{\frac12}$.
Then,
for every fixed $r>0$,
\begin{align}\label{lu3<}
\E\bigsqpar{|X_n(\ga)-n\mu(\ga)|^r} = O\bigpar{n^{r(\rqgay)}},
\end{align}
uniformly for all $\ga\in K$ with $\rga<\frac12$, and
\begin{align}\label{lu3>}
\E\bigsqpar{|X_n(\ga)|^r} = O\bigpar{n^{r(\rqgay)}},
\end{align}
uniformly for all $\ga\in K$ with $\rga\ge\frac12$.
\end{lemma}
\begin{proof}
Using the notation \eqref{abba}, \eqref{lu3<} and \eqref{lu3>} can be
combined as
\begin{align}\label{lu3a}
\E |\dF_\ga(\ctn)|^r = O\bigpar{n^{r(\rqgay)}},
\end{align}
uniformly in $\ga\in K$.
By H\"older's (or Lyapounov's) inequality, it suffice to prove \eqref{lu3a}
when $r=2\ell$, an even integer.
In this case,
we let
$\ga_1=\ga$, $\ga_2=\bga$ and
$\ell_1=\ell_2=\ell$;
then \eqref{mell2}--\eqref{Mell2} show that, using also \eqref{pct},
\begin{align}\label{lu3b}
\E |\dF_\ga(\ctn)|^{2\ell}&
=
\E \bigsqpar{\dF_\ga(\ctn)^{\ell}\dF_{\bga}(\ctn)^\ell}
=
m_n\xxllo
= q_n\qw [z^n] M_{\ell,\ell}(z)
\le C n^{3/2} [z^n] M_{\ell,\ell}(z),
\end{align}
and the desired result \eqref{lu3a} (with $r=2\ell$) follows
from \eqref{lu3b} and \eqref{lu2ii} by standard singularity analysis,
see \cite[Proof of Theorem VI.3, p.~390--392]{FS}.
\end{proof}
\begin{lemma}
\label{LU4}
Assume that $\E\xi^{2+\gd}<\infty$.
Let $K$ be a compact subset of $\set{\ga:\rga>0}$.
Then,
for every $r>0$,
\begin{align}\label{lu4}
\E\bigsqpar{|X_n(\ga)-\E X_n(\ga)|^r} = O\bigpar{n^{r(\rqgay)}},
\end{align}
uniformly for all $\ga\in K$.
\end{lemma}
\begin{proof}
It suffices to show this for $r\ge1$.
Let
$L^r$ be the Banach space of all complex random variables $X$ defined on
our underlying probability space such that
\begin{align}\label{normr}
\norm{X}_r:=\bigpar{\E|X|^r}^{1/r}<\infty.
\end{align}
\pfcase1{$\frac12\notin K$.}
In this case, \refL{LU3} applies and thus \eqref{lu3<} and \eqref{lu3>}
hold, uniformly for $\ga$ in the specified sets.
We may write these as
$\norm{X_n(\ga)-n\mu(\ga)}_r \le C n^{\rqgay}$
and $\norm{X_n(\ga)}_r\le C n^{\rqgay}$,
respectively. As is well known, for any (complex) random variable $X$,
\begin{align}\label{lu4a}
\norm{X-\E X}_r\le \norm{X}_r +|\E X| \le 2\norm{X}_r.
\end{align}
Hence we obtain in both cases, and thus uniformly for all $\ga\in K$,
\begin{align}\label{lu4b}
\norm{X_n(\ga)-\E X_n(\ga)}_r\le C n^{\rqgay},
\end{align}
which is equivalent to \eqref{lu4}.
\pfcase2{$\frac12\in K$.}
Consider first the special case $K_1:=\set{\ga\in \bbC:|\ga-\frac12|\le0.1}$
and let $K_2:=\partial K_1=\set{\ga\in \bbC:|\ga-\frac12|=0.1}$.
Then Case 1 applies to $K_2$.
Moreover,
recalling the notation \eqref{tY},
we can write \eqref{lu4} and \eqref{lu4b} as
\begin{align}\label{lu4c}
\norm{\tY_n(\ga)}_r \le C,
\end{align}
where $\tY_n(\ga)=n^{-\ga-\frac12}\bigpar{X_n(\ga)-\E X_n(\ga)}$ is, for each
$n\ge1$,
an $L^r$-valued analytic function of $\ga$.
[Recall that for a fixed~$n$, there are only finitely many choices for the
tree $\ctn$, and for each choice, \eqref{Xn} is an entire function of~$\ga$.]
The maximum modulus principle holds for Banach space valued analytic functions,
see \eg{} \cite[p.~230]{Dunford-Schwartz},
and thus, using \eqref{lu4c} for $K_2$,
\begin{align}
\sup_{\ga\in K_1}\norm{\tY_n(\ga)}_r
=
\sup_{\ga\in K_2}\norm{\tY_n(\ga)}_r
\le C
.\end{align}
Hence, \eqref{lu4c} holds uniformly for $\ga\in K_1$, and thus so does
\eqref{lu4}.
For a general compact set $K$, Case 1 applies to
$\set{\ga\in K:|\ga-\frac12|\ge 0.1}$, which together with the case $K_1$
just proved yields the result \eqref{lu4} uniformly for all $\ga\in K$.
\end{proof}
\begin{proof}[Proof of \refT{T1mom}]
We give the proof for ordinary moments, \ie, \eqref{t1mom}. The other cases
are similar, with mainly notational differences.
Let $\ell\ge1$ and choose $r:=\ell+1$.
First, consider a fixed $\ga$ with $\rga>0$.
Then
\refL{LU4} shows that
$\E|\tY_n(\ga)|^r=O(1)$, and thus the sequence
$\tY_n(\ga)^\ell$ is uniformly integrable, which
together with \eqref{t1} implies \eqref{t1mom}.
(See again \cite[Theorems 5.4.2 and 5.5.9]{Gut}.)
To show uniform convergence on compact sets of $\ga$,
consider first a convergent sequence $(\ga_k)$ in $H_+$ with
$\ga_k\to\gaoo\in H_+$ as \ktoo, and a sequence $n_k\to\infty$.
By \refT{T1}, $\tY_n(\ga)\dto \gs\qw\tY(\ga)$ in $\cH(H_+)$, and
by the Skorohod coupling theorem \cite[Theorem~4.30]{Kallenberg},
we may assume that a.s\punkt{} $\tY_n(\ga)\to \gs\qw\tY(\ga)$ in $\cH(H_+)$,
\ie, uniformly on compact sets. It then follows that
$\tY_{n_k}(\ga_k)\asto \gs\qw\tY(\gaoo)$ as \ktoo.
Furthermore,
\refL{LU4} applies to the compact set $\set{\ga_1, \ga_2, \dots} \cup \set{\gaoo}$, and
thus \eqref{lu4c} holds and shows that
$\E|\tY_{n_k}(\ga_k)|^r\le C$.
Hence, similarly to the case of a fixed $\ga$,
the sequence $\tY_{n_k}(\ga_k)^\ell$ is uniformly integrable, and
\begin{align}\label{ul1}
\E \tY_{n_k}(\ga_k)^\ell \to \gs^{-\ell}\E \tY(\gaoo)^\ell,
\qquad\text{as }\ktoo
.\end{align}
This holds for any sequence $n_k\to\infty$.
In particular, we may for each $k$,
using \eqref{t1mom} which we just have proved for each fixed $\ga$,
choose $n_k$ so large that
$\bigabs{ \E \tY_{n_k}(\ga_k)^\ell -\gs^{-\ell} \E \tY(\ga_k)^\ell}<1/k$
for each $k$. Then \eqref{ul1} implies
\begin{align}\label{ul2}
\E \tY(\ga_k)^\ell \to \E \tY(\gaoo)^\ell
\qquad\text{as }\ktoo
.\end{align}
Since this holds for any sequence $\ga_k\to\gaoo$, \eqref{ul2} shows that
$\E\tY(\ga)^\ell$ is a continuous function of $\ga\in H_+$.
Moreover, \eqref{ul1} and \eqref{ul2} show that for any convergent sequence
$(\ga_k)$ in $H_+$, and any $n_k\to\infty$,
\begin{align}\label{ul3}
\E \tY_{n_k}(\ga_k)^\ell- \gs^{-\ell}\E \tY(\ga_k)^\ell \to0.
\end{align}
Let $K\subset H_+$ be compact.
We claim that $ \E \tY_{n}(\ga)^\ell\to \gs^{-\ell}\E \tY(\ga)^\ell $
uniformly for $\ga\in K$.
Suppose not. Then there exists $\eps>0$, a subsequence
$n_k\to\infty$ and a sequence $(\ga_k)\in K$ such that
$\bigabs{\E \tY_{n_k}(\ga_k)^\ell-\gs^{-\ell} \E \tY(\ga_k)^\ell }>\eps$ for
every $k$.
Since $K$ is compact, we may by selecting a subsequence assume that
$\ga_k\to\gaoo$ for some $\gaoo\in K$.
But then \eqref{ul3} holds, which is a contradiction.
This shows the claimed uniform convergence on $K$.
Finally, $\E\tY(\ga)^\ell$ is an analytic function of $\ga\in H_+$ since it
is the uniform limit on compact sets of the sequence of analytic functions
$\E \tY_n(\ga)^\ell$.
\end{proof}
\subsection{Final remark}
\begin{remark}
In this \refS{Smom} we have only considered the case $\rga>0$. It seems likely
that similar arguments can be used to show moment convergence in
\refT{T<0} for $\rga<0$, but we have not pursued this, and we leave it as an
open problem.
\end{remark}
\section{Introduction}
\label{sec:intro}
Machine learning models have improved over time at prediction and classification, especially with the advances made in deep learning and availability of large amounts of data. These gains in predictive power have often been achieved using increasingly complex and \emph{black-box} models. This has led to significant interest in, and a proliferation of, \emph{explainers} that provide explanations for the predictions made by these black-box models. Given the crucial importance of these explainers it is imperative to understand what makes them reliable.
In this paper we focus on robustness aspect of reliability. An explainer should provide similar explanations for similar data inputs given a prediction model. Specifically, we investigate the connection between explainer robustness and the smoothness of the black-box function being explained.
We propose and formally define \emph{explainer astuteness} -- a property of explainers which captures the probability that a given method provides similar explanations to similar data points. We then provide a theoretical way to connect this explainer astuteness to the \emph{probabilistic Lipschitzness} of the black-box function that is being explained. Since probabilistic Lipschitzness is a measure of the probability that a function is smooth in a local neighborhood, our results demonstrate how the smoothness of the black-box function itself impacts the astuteness of the explainer. This implies that \emph{enforcing smoothness on black-box functions lends them to more robust explanations.}
\textbf{Related Work.}
A wide variety of explainers have been proposed in the literature \citep{guidotti2018survey,arrieta2020explainable}. Explainers can broadly be categorized as feature attribution or feature selection explainers. Feature attribution explainers provide continuous-valued importance scores to each of the input features, while feature selection explainers provide binary decisions on whether a feature is important or not. Some popular feature attribution explainers can be viewed through the lens of Shapley values such as SHAP \citep{lundberg2017unified}, LIME \citep{ribeiro2016should} and LIFT \citep{shrikumar2016not}. Some models such as CXPlain \citep{schwab2019cxplain}, PredDiff \citep{zintgraf2017visualizing} and feature ablation explainers \citep{lei2018distribution} calculate feature attributions by simulating individual feature removal, while other methods such as RISE \citep{petsiuk2018rise} calculate the mean effect of a feature's presence to attribute importance to it. In contrast, feature selection methods include individual selector approaches such as L2X \citep{chen2018learning} and INVASE \citep{yoon2018invase}, and group-wise selection approaches such as gI \citep{masoomi2020instance}.
While seemingly diverse, these models have been shown to have striking underlying similarities, for example \citet{lundberg2017unified} unify six different explainers under a single framework. Recently, \citet{covert2020explaining} went a step further and combined 25 existing methods under the overall class of \textit{removal-based explainers}.
Similarly, there has been a recent increase in research focused on analyzing the behaviour of these explainers themselves in ways similar to how classification models have been analyzed. Recent work has focused on dissecting various properties of explainers. \citet{yin2021faithfulness} propose stability and sensitivity as measures of faithfulness of explainers to the decision-making process of the black-box model and empirically demonstrate the usefulness of these measures. \citet{li2020learning} explore connections between local explainability and model generalization. \citet{ghorbani2019interpretation} test the robustness of explainers through systemic and adversarial perturbations. \citet{alvarez2018robustness} empirically show that robustness, in the sense that explainers should provide similar explanations for similar inputs, is a desirable property and how forcing this property yields better explanations. Recently, \citet{agarwal2021towards} explore the robustness of LIME \citep{ribeiro2016should} and SmoothGrad \citep{smilkov2017smoothgrad}, and prove that for these two methods their robustness is related to the maximum value of the gradient of the predictor function.
Our work is closely related to \citet{alvarez2018robustness} and \citet{agarwal2021towards} on explainer robustness. However, instead of enforcing explainers to be robust themselves \citep{alvarez2018robustness}, our theoretical results suggest that ensuring robustness of explanations also depends on the smoothness of the black-box function that is being explained. Our results are complementary to the results obtained by \citet{agarwal2021towards} in that our theorems cover a wider variety of explainers as compared to only Continuous LIME and SmoothGrad (see contributions below). We further relate robustness to probabilistic Lipschitzness of black-box models, which is a quantity that can be empirically estimated.
Additionally, there has been recent work estimating upper-bounds of Lipschitz constant for neural networks \citep{virmaux2018lipschitz, fazlyab2019efficient, gouk2021regularisation}, and enforcing Lipschitz continuity during neural networks training, with an eye towards improving classifier robustness \citep{gouk2021regularisation,aziznejad2020deep, fawzi2017robustness, alemi2016deep}. Our work provides crucial additional motivation for that line of research; i.e., it provides theoretical reasons to improve Lipschitzness of neural networks from the perspective of enabling more robust explanations.
\iffalse
Some papers have noted the relationship between Lipschitz and robustness (in the predictor)
* How good is your explanation? Algorithmic stability measures to assess the
quality of explanations for deep neural networks (looks like a workshop paper?)
Some papers have tried to improve explanation robustness by smoothing the predictor function:
\citep{leeRobustLocallyLinear2018}
\citep{alvarezmelisRobustInterpretabilitySelfExplaining2018}
explanations are not robust (salience methods): \citep{adebayoSanityChecksSaliency2018} (salience)
\citep{dombrowskiExplanationsCanBe2019} (salience + adversarial)
important papers:
\cite{agarwalUnificationRobustnessPerturbation2021a}
\cite{alvarez2018robustness}
\citep{dombrowskiExplanationsCanBe2019}
Section about other explainers?
\fi
\textbf{Contributions:}
\begin{itemize}[noitemsep, topsep=0pt]
\item We formalize and define \emph{explainer astuteness} which captures the probability that a given explainer provides similar explanations to similar points. This formalism allows us to theoretically analyze robustness properties of explainers.
\item We provide theoretical results that connect astuteness of explainers to the smoothness of the black-box function they are providing explanations on. \textbf{Our results suggest that smooth black-box functions result in explainers providing more astute explanations}. While this statement is intuitive, proving it is non-trivial and requires additional assumptions for different explainers (See Section~\ref{subsec:impl}).
\item Specifically we prove this result for astuteness of three classes of explainers: (1) Shapley value based (e.g. SHAP), (2) explainers that simulate mean effect of features (e.g. RISE), and (3) explainers that simulate individual feature removal (e.g. CXPlain). Formally, our theorems establish a lower bound on explainer astuteness that depends on the Lipschitzness of the black-box function and square root of data dimensionality. Figure~\ref{fig:my_label} summarizes this main contribution of our work.
\item We demonstrate experimentally that this lower bound indeed holds in practice by comparing the astuteness predicted by our theorems to the observed astuteness on simulated and real datasets. We also demonstrate experimentally that the same neural network when trained with Lipschitz constraints lends itself to more astute explanations compared to when it is trained with no constraints.
\end{itemize}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\columnwidth]{figures/fig_double_simplified.png}
\caption{In this figure we visualize the implication of our theoretical results. For a black-box prediction function that is locally Lipschitz with a constant $L_1$, the predictions for any two points $x, x'$ such that $d_p(x,x') \leq r$ are within $L_1d_p(x,x')$ distance from each other. Given such a prediction function, the explanation for the same data points are also expected to be within $\lambda_1 d_p(x,x')$ of each other where $\lambda_1 = CL_1\sqrt{d}$ where C is a constant. If we consider a second black-box function with $L_2 > L_1$ that results in $\lambda_2 > \lambda_1$, indicating that the explanations for this black-box function can actually end up being farther apart as compared to the first prediction function. This result implies that \textbf{locally smooth black-box functions lend themselves to more astute (i.e., robust) explanations.}}
\label{fig:my_label}
\end{figure*}
\section{Background and Notations}
\label{sec:background}
\subsection{Removal-based Feature Explainers}
As mentioned in Section~\ref{sec:intro}, there exist a wide variety of explainers.
Owing to this diversity, in this work, we concern ourselves with \textit{removal-based feature attribution explainers} as defined by \cite{covert2020explaining} (which showed 25 existing methods under this umbrella). Removal based feature attribution explainers are methods that define a feature's influence through the impact of removing it from a model and assigning continuous-valued scores to each feature signifying its importance. This includes popular approaches such as SHAP and SHAP variants including KernelSHAP, LIME, DeepLIFT \citep{lundberg2017unified}, mean effect based methods such as RISE \citep{petsiuk2018rise}, and individual effects based methods such as CXPlain \citep{schwab2019cxplain}, PredDiff \citep{zintgraf2017visualizing}, permutation tests \citep{strobl2008conditional}, and feature ablation explainers \citep{lei2018distribution}. All of these methods simulate feature removal either explicitly or implicitly. For example, SHAP explicitly considers effect of using subsets that include a feature as compared to the effect of removing that feature from the subset. RISE removes subsets of features while always keeping the feature that is being evaluated, and estimates the average effect of keeping that feature when other features are randomly removed. CXPlain explicitly considers the impact of removing a feature on the loss function used in training the predictor function.
\subsection{Notation}
\label{sec:notation}
We denote $d$-dimensional input data as $x \in \mathbb{R}^d$, drawn from a data distribution $\mathcal{D}$. The black-box predictor function is denoted by $f$, where $f(x)$ is the prediction given $x$; this function is assumed to have been trained on training samples from $\mathcal{D}$. The explainer is represented by a function $\phi$ where $\phi(x) \in \mathbb{R}^d$ is the feature attribution vector representing attributions for all features in $x$, while $\phi_i(x) \in \mathbb{R}$ is the attribution for the $i^{th}$ feature. To simulate the presence or absence of features in a given subset of features, we use an indicator vector $z \in \{0, 1\}^d$, where $z_i = 1$ when the $i^{th}$ feature is present in the subset. To indicate that we only use subsets where feature $z_i = 1$, we write $z_{+i}$; to indicate only using subsets where $z_i = 0$, we write $z_{-i}$. Lastly, the $p$-norm induced distance between any two points $x, x'$ is denoted by $d_p(x,x')=||x - x'||_p$, where $||.||_p$ is the $p$-norm.
\section{Explainer Astuteness}
\label{sec:analysis}
Our main interest is in defining a metric that can capture the difference in explanations provided by an explainer to points that are close to each other in the input space. The same question has been asked for classifiers. \cite{bhattacharjee2020non} came up with the concept of \textit{Astuteness} of classifiers, which captures the probability that similar points are assigned the same label by a classifier. Formally they provide the following definition:
\begin{definition}
\textit{Astuteness of Classifiers} \citep{bhattacharjee2020non}: The astuteness of a classifier $f$ over $\mathcal{D}$, denoted as $A_r(f,\mathcal{D})$ is the probability that $\forall x, x' \in \mathcal{D}$ such that $d(x,x') \leq r$ the classifier will predict the same label.
\small
\small\begin{equation}
A_r(f,\mathcal{D}) = \mathbb{P}_{x,x'\sim \mathcal{D}}[f(x) = f(x') | d(x,x') \leq r]
\end{equation}
\normalsize
\end{definition}
The obvious difference in trying to adapt this definition of astuteness to explainers is that explanations for nearby points do not have to be \textit{exactly} the same. Keeping this in mind, we propose and formalize \emph{explainer astuteness}, as the probability that the explainer assigns \emph{similar} explanations to similar points. The formal definition is as follows:
\begin{definition}
\label{def:exp_astuteness}
\textit{Explainer astuteness}:
The \emph{explainer astuteness} of an explainer $E$ over $\mathcal{D}$, denoted as $A_{r,\lambda}(E,\mathcal{D})$ is the probability that $\forall x,x' \in \mathcal{D}$ such that $d_p(x,x') \leq r$ the explainer $E$ will provide explanations $\phi(x),\phi(x')$ that are at most $\lambda \cdot d_p(x,x')$ away from each other, where $\lambda \geq 0$
\small\begin{equation}
A_{r,\lambda}(E,\mathcal{D}) = \mathbb{P}_{x,x' \sim \mathcal{D}} [d_p(\phi(x), \phi(x')) \leq \lambda \cdot d_p(x, x') \given[\Big] d_p(x,x') \leq r]
\label{eq:astuteness}
\end{equation}
\end{definition}
\normalsize
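In practice, explainer astuteness can be estimated by Monte Carlo over sampled pairs of nearby points. The following minimal sketch (Python/NumPy; the function \texttt{explain} stands in for any feature-attribution method, and all names are illustrative rather than part of any particular library) estimates the conditional probability in Eq.\eqref{eq:astuteness}:
\begin{verbatim}
import numpy as np

def estimate_astuteness(X, explain, lam, r, p=2, n_pairs=100000, seed=0):
    # Among sampled pairs with d_p(x, x') <= r, the fraction whose
    # explanations satisfy d_p(phi(x), phi(x')) <= lam * d_p(x, x').
    rng = np.random.default_rng(seed)
    pairs = rng.integers(0, len(X), size=(n_pairs, 2))
    hits = close = 0
    for i, j in pairs:
        dx = np.linalg.norm(X[i] - X[j], ord=p)
        if i == j or dx > r:
            continue  # condition on d_p(x, x') <= r
        close += 1
        dphi = np.linalg.norm(explain(X[i]) - explain(X[j]), ord=p)
        hits += dphi <= lam * dx
    return hits / max(close, 1)
\end{verbatim}
Sweeping $\lambda$ and recording the resulting estimates traces astuteness as a function of $\lambda$; this is the type of curve we report in Section~\ref{sec:experiments}.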
A critical observation about definition~\ref{def:exp_astuteness} is that it not only relates to the previously defined notion of classifier astuteness, but also connects to the concept of \emph{probabilistic Lipschitzness}. Probabilistic Lipschitzness captures the probability of a function being locally smooth given a radius $r$. It is specially useful for capturing a notion of smoothness of complicated neural network functions for which enforcing global and deterministic Lipschitzness is difficult. \cite{mangal2020probabilistic} formally defined probabilistic Lipschitzness as follows:
\begin{definition} \label{def:plips}
\textit{Probabilistic Lipschitzness} \citep{mangal2020probabilistic}: Given $0 \leq \alpha \leq 1$, $r \geq 0$, a function $f:\mathbb{X} \rightarrow \mathbb{R}$ is probabilistically Lipschitz with a constant $L \geq 0$ if
\small
\small\begin{equation}
\mathbb{P}_{x, x' \sim \mathcal{D}}[d_p(f(x),f(x')) \leq L \cdot d_p(x, x') \given[\Big] d_p(x, x') \leq r]\geq 1 - \alpha
\label{eq:prob_lip}
\end{equation}
\normalsize
\end{definition}
\subsection{Theoretical bounds of Astuteness}
A cursory comparison between Eq.\eqref{eq:astuteness} and Eq.\eqref{eq:prob_lip} hints at the two concepts being related to each other. In fact, explainer astuteness can be viewed as probabilistic Lipschitzness of the explainer when it is viewed as a function with a Lipschitz constant $\lambda$. However, a much more interesting question to explore is how the astuteness of explainers is connected to the Lipschitzness of the black-box model they are trying to explain. We introduce and prove the following theorems which provide theoretical bounds that connect the Lipschitz constant $L$ of the black-box model to the astuteness of various explainers including SHAP \citep{lundberg2017unified}, RISE \citep{petsiuk2018rise}, and methods that simulate individual feature removal such as CXPlain \citep{schwab2019cxplain}.
\subsubsection{Astuteness of SHAP}
SHAP \citep{lundberg2017unified} is one of the most popular feature attribution based explainers in use today. \citet{lundberg2017unified} unify 6 existing explanation approaches within the SHAP framework. Each of these explanation approaches (including LIME, DeepLIFT, and kernelSHAP) can be viewed as approximations of SHAP, since SHAP in its theoretical form is difficult to calculate. However, in this section we use the theoretical definition of SHAP to establish bounds on astuteness.
For a given data point $x \in \mathcal{X}$ and a prediction function $f$, the feature attribution provided by SHAP for the $i^{th}$ feature is given by:
\small\begin{equation}
\phi_i(x) = \sum_{z_{-i}} \frac{|z_{-i}|! (d - |z_{-i}| - 1)!}{d!}[f(x \odot z_{+i}) - f(x \odot z_{-i})]
\label{eq:shap}
\end{equation}
\normalsize
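For small $d$, Eq.\eqref{eq:shap} can be evaluated exactly by enumerating subsets; the sketch below (Python/NumPy; illustrative and not tied to any library, with the masking $x\odot z$ corresponding to a zero baseline) makes the computation concrete:
\begin{verbatim}
import itertools
import numpy as np
from math import factorial

def shap_exact(f, x):
    # Exact Shapley attributions per the formula above; O(2^d),
    # so feasible only for small d.
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        rest = [j for j in range(d) if j != i]
        for k in range(d):
            for S in itertools.combinations(rest, k):
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                x_minus = np.zeros(d)        # x masked to the subset S
                x_minus[list(S)] = x[list(S)]
                x_plus = x_minus.copy()      # S plus feature i
                x_plus[i] = x[i]
                phi[i] += w * (f(x_plus) - f(x_minus))
    return phi
\end{verbatim}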
Before moving on to the actual theorem, we introduce and prove the following Lemma which is necessary for the proof of Theorem~\ref{thm1}.
\begin{lemma}
\label{lem1}
If,
\small\begin{equation*}
\mathbb{P}_{x, x' \sim \mathcal{D}}[d_p(f(x),f(x')) \leq L \cdot d_p(x, x') \given[\Big] d_p(x, x') \leq r] \geq 1 - \alpha
\end{equation*}
\normalsize
then for $y=x \odot z_{+i}$, $y'= x' \odot z_{+i}$, i.e., $y,y' \in \bigcup_{k=1}^d\mathbb{N}_k$, where $\mathbb{N}_k=\{y \,|\, y \in \mathbb{R}^d, ||y||_0 = k, y_i \neq 0\}$,
\small\begin{equation*}
\mathbb{P}_{x, x' \sim \mathcal{D}}[d_p(f(y),f(y')) \leq L \cdot d_p(y,y') \given[\Big] d_p(y,y') \leq r] \geq 1 - \beta
\end{equation*}
\normalsize
where $\beta \geq \alpha$, assuming that the distribution $\mathcal{D}$ is defined for all $x$ and $y$; the equality is approached if the probability of sampling points from the set $\mathbb{N}_k$ approaches zero for $k=1,\ldots,d-1$ relative to the probability of sampling points from $\mathbb{N}_d$.
\end{lemma}
\begin{proof}
(Sketch, full proof in Appendix \ref{app:proof})
Assume $p_k$ is the probability of occurrence of the set $\mathbb{N}_k=\{x \,|\, x \in \mathbb{R}^d, ||x||_0 = k, x_i \neq 0\}$ in the input space and $\gamma_k$ is the probability of the set of points that violate Lipschitzness within $\mathbb{N}_k$. In the finite case, each set $\mathbb{N}_k$ can be mapped to a set $\mathbb{N}'_k$ of cardinality $2^{d-k}|\mathbb{N}_k|$ after masking with all possible $z_{+i}$. In probability terms, the probability of $\mathbb{N}'_k$ can be written as $p_k' = \frac{2^{d-k}p_k}{\sum_{j=1}^d 2^{d-j} p_j} = \frac{2^{-k}p_k}{\sum_{j=1}^d 2^{-j} p_j}$. Let $\beta$ be the proportion of points in \emph{all} $\mathbb{N}'_k$ that also violate Lipschitzness in their unmasked form; then $\beta$ can be written as
\small\begin{equation*}
\beta = \frac{\sum_{k=1}^d 2^{-k} p_k \gamma_k}{\sum_{j=1}^d 2^{-j} p_j}
\end{equation*}
\normalsize
Considering worse case $\beta$ requires solving the following equation,
\small\begin{equation}
\beta^* = \max_{\gamma_1, \ldots ,\gamma_d} \frac{\sum_{k=1}^d 2^{-k} p_k \gamma_k}{\sum_{j=1}^d 2^{-j} p_j}
\quad\text{s.t.}\quad \sum_{i=1}^d p_i \gamma_i = \alpha,\ 0 \leq \alpha \leq 1,\ 0 \leq \gamma_i \leq 1,\ i=1,\ldots,d
\end{equation}
\normalsize
The result of this maximization satisfies $\beta^* \geq \alpha$. In the specific case where $p_k \rightarrow 0$ for $k=1,\ldots,d-1$ (i.e., where the probability of sampling any $x$ with a 0-valued element is 0), $\beta \rightarrow \alpha$.
\end{proof}
\begin{theorem}
\label{thm1}
(Astuteness of SHAP) Consider a given $r \geq 0$ and $0 \leq \alpha \leq 1$, and a trained predictive function $f$ that is probabilistically Lipschitz with a constant $L$, radius $r$ measured using $d_p(.,.)$, and probability at least $1-\alpha$. Then for SHAP explainers we have astuteness $A_{r, \lambda} \geq 1 - \beta$ for $\lambda = 2\sqrt[p]{d}L$, where $\beta \geq \alpha$, and $\beta \rightarrow \alpha$ under the conditions specified in Lemma~\ref{lem1}.
\end{theorem}
\begin{proof}
Given input $x$ and another input $x'$ s.t.\ $d_p(x,x') \leq r$, and letting $\frac{|z_{-i}|! (d - |z_{-i}| - 1)!}{d!} = C_z$, using Eq.\eqref{eq:shap} we can write,
\normalsize
\small\begin{equation}
d_p(\phi_i(x) , \phi_i(x')) = ||\sum_{z_{-i}}C_z[f(x \odot z_{+i}) - f(x \odot z_{-i})] - \sum_{z_{-i}} C_{z} [f(x' \odot z_{+i}) - f(x' \odot z_{-i})]||_p
\end{equation}
\normalsize
Combining the two sums and re-arranging the R.H.S,
\normalsize
\small\begin{equation}
d_p(\phi_i(x) , \phi_i(x')) = ||\sum_{z_{-i}} C_z[f(x \odot z_{+i}) - f(x' \odot z_{+i}) + f(x' \odot z_{-i}) - f(x \odot z_{-i})]||_p
\end{equation}
\normalsize
Using the triangle inequality on the R.H.S.\ twice,
\small
\small\begin{equation}
\begin{split}
d_p(\phi_i(x) , \phi_i(x')) &\leq ||\sum_{z_{-i}} C_z[f(x \odot z_{+i}) - f(x' \odot z_{+i})]||_p + ||\sum_{z_{-i}} C_z[f(x' \odot z_{-i}) - f(x \odot z_{-i})]||_p \\
&\leq \sum_{z_{-i}} C_z||f(x \odot z_{+i}) - f(x' \odot z_{+i})||_p + \sum_{z_{-i}} C_z||f(x' \odot z_{-i}) - f(x \odot z_{-i})||_p
\end{split}
\label{eq:triangle}
\end{equation} \normalsize
We can replace each value inside the sums in Eq.\eqref{eq:triangle} with the maximum value across either sum. Doing so preserves the inequality in Eq.\eqref{eq:triangle}, since a sum of $n$ values is at most that maximum summed $n$ times. Without loss of generality let us assume this maximum is $|f(x \odot z^*_{+i}) - f(x' \odot z^*_{+i})|$ for some particular $z^*$. This gives us:
\small\begin{equation}
d_p(\phi_i(x) , \phi_i(x')) \leq ||f(x \odot z^*_{+i}) - f(x' \odot z^*_{+i})||_p \sum_{z_{-i}} C_z + ||f(x \odot z^*_{+i}) - f(x' \odot z^*_{+i})||_p \sum_{z_{-i}} C_z
\end{equation}
\normalsize
However, $\sum_{z_{-i}} C_z = \sum_{z_{-i}} \frac{|z_{-i}|! (d - |z_{-i}| - 1)!}{d!} = 1$, which gives us,
\small\begin{equation}
\begin{split}
d_p(\phi_i(x) , \phi_i(x')) \leq 2||f(x \odot z^*_{+i}) - f(x' \odot z^*_{+i})||_p = 2d_p(f(x \odot z^*_{+i}), f(x' \odot z^*_{+i}))
\end{split}
\label{eq:difference}
\end{equation}
\normalsize
Using the fact that $f$ is probabilistically Lipschitz with a given constant $L \geq 0$, that $d_p(x,x') \leq r$ and $d_p(x \odot z^*_{+i}, x' \odot z^*_{+i}) \leq d_p(x,x')$, and Lemma \ref{lem1}, we get:
\normalsize
\small\begin{equation*}
P[2d_p(f(x \odot z^*_{+i}),f(x' \odot z^*_{+i})) \leq 2L \cdot d_p(x,x')] \geq 1 - \beta
\end{equation*}
\normalsize
Since Eq.\eqref{eq:difference} establishes that $d_p(\phi_i(x) , \phi_i(x')) \leq 2d_p(f(x \odot z^*_{+i}),f(x' \odot z^*_{+i}))$, the below inequality can be now established:
\normalsize
\small\begin{equation}
\begin{split}
P[d_p(\phi_i(x) , \phi_i(x')) \leq 2L \cdot d_p(x,x') ] \geq 1 - \beta
\end{split}
\label{eq:perfeature}
\end{equation}
\normalsize
Note that Eq.\eqref{eq:perfeature} is true for each feature $i \in \{1, ..., d\}$. To conclude our proof, we note that
\normalsize
\small\begin{equation*}
\begin{split}
d_p(x,y) = \sqrt[p]{\sum_{i=1}^d|x_i - y_i|^p}\leq\sqrt[p]{d\max_i|x_i-y_i|^p} = \sqrt[p]{d}\max_id_p(x_i,y_i)
\end{split}
\end{equation*}
\normalsize
Utilizing this in Eq.\eqref{eq:perfeature}, without loss of generality assuming $d_p(\phi_i(x), \phi_i(x'))$ corresponds to the maximum, gives us:
\normalsize
\small\begin{equation}
\begin{split}
P[d_p(\phi(x),\phi(x')) \leq 2 \sqrt[p]{d} L \cdot d_p(x,x')] \geq 1 - \beta
\end{split}
\label{eq:forall}
\end{equation}
\normalsize
Since $ P[d_p(\phi(x),\phi(x')) \leq 2 \sqrt[p]{d} L \cdot d_p(x,x')]$ in Eq.\eqref{eq:forall} defines $A_{\lambda, r}$ for $\lambda = 2 \sqrt[p]{d}L$, this concludes the proof.
\end{proof}
\begin{corollary}
If the prediction function $f$ is locally deterministically $L-$Lipschitz ($\alpha=0$) at radius $r$ then Shapley explainers are $\lambda-$astute for radius $r \geq 0$ for $\lambda = 2 \sqrt[p]{d}L$
\end{corollary}
\begin{proof}
Note that definition~\ref{def:plips} reduces to the definition of deterministic Lipschitz if $\alpha = 0$. Which means Eq.\eqref{eq:forall} will be true with probability $1$. Which concludes the proof.
\end{proof}
\subsubsection{Astuteness of ``Remove Individual'' Explainers}
Within the framework of feature removal explainers, a sub-category is the explainers that work by removing a single feature from the set of all features and calculating feature attributions based on the change in prediction that results from removing that feature. This category includes Occlusion, CXPlain \citep{schwab2019cxplain}, PredDiff \citep{zintgraf2017visualizing}, permutation tests \citep{strobl2008conditional}, and feature ablation explainers \citep{lei2018distribution}.
``Remove individual'' explainers determine feature explanations for the $i^{th}$ feature by calculating the difference in prediction with and without that feature included for a given point $x$. Let $z_{-i}\in{0,1}^d$ represent a binary vector with $z_i=0$, then the explanation for feature $i$ can be written as:
\normalsize
\small\begin{equation}
\phi(x_i) = f(x) - f(x \odot z_{-i})
\label{eq:removal_exp}
\end{equation}
\normalsize
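As a sketch (Python/NumPy; illustrative names, with zero-masking standing in for feature removal), Eq.\eqref{eq:removal_exp} is a single loop over features:
\begin{verbatim}
import numpy as np

def remove_individual(f, x):
    # phi_i = f(x) - f(x with feature i zero-masked).
    fx = f(x)
    phi = np.empty(len(x))
    for i in range(len(x)):
        xm = x.copy()
        xm[i] = 0.0  # simulate x masked by z_{-i}
        phi[i] = fx - f(xm)
    return phi
\end{verbatim}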
\begin{theorem}
\label{thm3}
(Astuteness of Remove individual explainers) Consider a given $r\geq0$ and $0 \leq \alpha \leq 1$, and a trained predictive function $f$ that is locally probabilistically Lipschitz with a constant $L$, radius $r$ measured using $d_p(.,.)$, and probability at least $1-\alpha$. Then for Remove individual explainers, we have astuteness $A_{r,\lambda} \geq 1 - \alpha$, for $\lambda = 2\sqrt[p]{d}L$, where $d$ is the dimensionality of the data.
\end{theorem}
\begin{proof}
(Sketch, full proof in Appendix~\ref{app:proof}) By considering another point $x'$ such that $d_p(x,x') \leq r$ and using Eq.\eqref{eq:removal_exp}, we get,
\normalsize
\small\begin{equation}
d_p(\phi(x_i), \phi(x'_i)) = d_p(f(x) - f(x \odot z_{-i}), f(x') - f(x' \odot z_{-i}))
\end{equation}
\normalsize
then following the exact same steps as the proof of Theorem~\ref{thm1}, i.e., writing the right-hand side in terms of the $p$-norm, utilizing the triangle inequality, and applying the definition of probabilistic Lipschitzness, leads us to the desired result.
\end{proof}
\begin{corollary}
If the prediction function $f$ is locally $L-$Lipschitz at radius $r \geq 0$, then remove individual explanations are $\lambda-$astute for radius $r$ and $\lambda = 2\sqrt[p]{d}L$.
\end{corollary}
\begin{proof}
Same as proof for Corollary 2.1.
\end{proof}
\subsubsection{Astuteness of RISE}
RISE determines feature explanation for the $i^{th}$ feature by sampling subsets of features and then calculating the mean value of the prediction function when feature $i$ is included in the subset. RISE feature attribution for a given point $x$ and feature $i$ for a prediction function $f$ can be written as:
\normalsize
\small\begin{equation}
\phi_i(x) = \mathbb{E}_{p(z|z_i=1)}[f(x \odot z)]
\label{eq:rise}
\end{equation}
\normalsize
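In practice, the conditional expectation in Eq.\eqref{eq:rise} is estimated by Monte Carlo: sample random binary masks and average the predictions over those masks that keep feature $i$. A minimal sketch (Python/NumPy; illustrative names and default parameters):
\begin{verbatim}
import numpy as np

def rise_attributions(f, x, n_masks=2000, keep_prob=0.5, seed=0):
    # phi_i(x) ~ average of f(x * z) over sampled masks z with z_i = 1.
    rng = np.random.default_rng(seed)
    Z = (rng.random((n_masks, len(x))) < keep_prob).astype(float)
    preds = np.array([f(x * z) for z in Z])
    kept = np.maximum(Z.sum(axis=0), 1.0)  # number of masks with z_i = 1
    return (Z * preds[:, None]).sum(axis=0) / kept
\end{verbatim}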
The following theorem establishes the bound on $\lambda$ for \emph{explainer astuteness} of RISE in relation to the Lipschitzness of black-box prediction function.
\begin{theorem}
\label{thm2}
(Astuteness of RISE) Consider a given $r \geq 0$ and $0 \leq \alpha \leq 1$, and a trained predictive function $f$ that is locally deterministically Lipschitz with a constant $L$ ($\alpha=0$), radius $r$ measured using $d_p(.,.)$ and probability at least $1-\alpha$. Then for RISE explainer is $\lambda-$astute$ for radius $r4 and $\lambda = \sqrt[p]{d}L$.
\end{theorem}
\begin{proof}(Sketch, full proof in Appendix~\ref{app:proof})
Given input $x$ and another input $x'$ s.t. $d(x,x') \leq r$, using Eq.\eqref{eq:rise} we can write
\normalsize
\small\begin{equation}
\begin{split}
&d_p(\phi_i(x),\phi_i(x'))=d_p(\mathbb{E}_{p(z|z_i=1)}[f(x \odot z)],\mathbb{E}_{p(z|z_i=1)}[f(x' \odot z)]) \\
&=||\mathbb{E}_{p(z|z_i=1)}[f(x \odot z)]-\mathbb{E}_{p(z|z_i=1)}[f(x' \odot z)]||_p=||\mathbb{E}_{p(z|z_i=1)}[f(x \odot z) - f(x' \odot z)]||_p
\end{split}
\end{equation}
\normalsize
Using Jensen's inequality on the R.H.S., followed by the fact that $\mathbb{E}[f] \leq \max f$,
\small\begin{equation}
d_p(\phi_i(x),\phi_i(x')) \leq \max_z d_p(f(x \odot z),f(x' \odot z))
\label{eq:risep1}
\end{equation}
\normalsize
Using the fact that $f$ is deterministically Lipschitz and that $d_p(\phi(x), \phi(x')) \leq \sqrt[p]{d} \cdot \max_i d_p(\phi_i(x),\phi_i(x'))$ gives us,
\normalsize
\small\begin{equation}
P[ d_p(\phi(x),\phi(x')) \leq \sqrt[p]{d}L \cdot d_p(x,x')] \geq 1
\label{eq:riseall}
\end{equation}
\normalsize
Since $P[d_p(\phi(x),\phi(x')) \leq \sqrt[p]{d}L \cdot d_p(x,x')]$ defines $A_{\lambda,r}$ for $\lambda = \sqrt[p]{d}L$, this concludes the proof.
\end{proof}
\subsection{Implications}
\label{subsec:impl}
The above theoretical results all carry the same critical implication: explainer astuteness is lower bounded by the Lipschitzness of the prediction function. This means that black-box classifiers that are locally smooth (i.e., have a small $L$ at a given radius $r$) lend themselves to probabilistically more robust explanations.
This work thus provides theoretical support for enforcing smoothness of classifiers as a means of obtaining astute explanations.
Note that while this implication makes intuitive sense, proving it for specific explainers is non-trivial, as demonstrated by the three theorems above. The statement holds true for all three explainers when the classifier can be assumed to be deterministically Lipschitz; the conditions under which it remains true for probabilistic Lipschitzness vary in each case. For Theorem~\ref{thm1} we have to assume that the distribution $\mathcal{D}$ is defined over masked data in addition to the input data, and ideally the probability of sampling masked data is significantly smaller than the probability of sampling points with no value exactly equal to 0. For Theorem~\ref{thm3} the statement is true without additional assumptions. For Theorem~\ref{thm2} we can only prove the statement in the deterministic case.
\section{Experiments}
\label{sec:experiments}
To demonstrate the validity of our theoretical results, we perform a series of experiments. We train four different classifiers on each of five datasets, and then explain the decisions of these classifiers using three explainers.
We utilize three simulated datasets introduced by \citet{chen2018learning}, namely \emph{Orange Skin} (OS), \emph{Nonlinear Additive} (NA) and \emph{Switch}, and two real-world datasets from the UCI Machine Learning Repository \citep{asuncion2007uci}, namely \emph{Rice} \citep{cinar2019classification} and \emph{Telescope} \citep{ferenc2005magic}. Details for these datasets can be found in Appendix~\ref{app:datasets}.
For each dataset we train the following four classifiers; \textbf{2layer}: A two-layer MLP with ReLU activations. For simulated datasets each layer has $200$ neurons, while for the 2 real datasets we use $32$ neurons in each layer. \textbf{4layer}: A four-layer MLP with ReLU activations, with the same number of neurons per layer as \emph{2layer}. \textbf{linear}: A linear classifier. \textbf{svm}: A support vector machine with Gaussian kernel. The idea here is that each of these classifiers will have different Lipschitz behavior, and that can be used to lower bound the explainer astuteness when explaining each of these classifiers according to our theoretical results.
We evaluate 3 explainers here that are representative of the 3 theorems provided Section~\ref{sec:analysis}.
\textbf{SHAP} \citep{lundberg2017unified} serves as the representative of Theorem~\ref{thm1}. We use the gradient based approximation for the neural-network classifiers and the kernel SHAP approximation for SVM; both are included in the implementation provided by the authors\footnote{\url{https://github.com/slundberg/shap}}. \textbf{RISE} \citep{petsiuk2018rise} serves as the representative method for Theorem~\ref{thm2}. The implementation provided by the authors is primarily for image datasets\footnote{\url{https://github.com/eclique/RISE}}; we adapt it for tabular datasets. \textbf{CXPlain} \citep{schwab2019cxplain} serves as the representative method for Theorem~\ref{thm3}. We use the implementation provided by the authors\footnote{\url{https://github.com/d909b/cxplain}}.
\subsection{Effect of Lipschitz Constraints on Explainer Astuteness}
\citet{gouk2021regularisation} propose a way to constrain the Lipschitz constant of a neural network by performing constrained optimization during training. Using this method, we constrain the Lipschitz constant of each layer by adding a projection step during training: after each update, the weight matrices are projected onto a feasible set if they violate the constraints on the Lipschitz constant; the constraints can be controlled via a hyperparameter. We use this method to train a four-layer MLP with high, low, and no Lipschitz constraint. We then calculate the astuteness of each of our explainers for all three versions of this neural network. Figure~\ref{fig:experiments2} shows the results.
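A sketch of the projection step follows (PyTorch; our reading of \citet{gouk2021regularisation}, simplified: we compute the exact spectral norm of each dense layer, whereas the original method uses cheaper norm approximations, and the hyperparameter $c$ caps each layer's operator norm):
\begin{verbatim}
import torch

def project_lipschitz(model, c):
    # After each optimizer step, rescale any Linear layer whose
    # spectral norm exceeds c; for 1-Lipschitz activations (ReLU),
    # the product of per-layer norms bounds the network's constant.
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, torch.nn.Linear):
                s = torch.linalg.matrix_norm(m.weight, ord=2)
                if s > c:
                    m.weight.mul_(c / s)

# in the training loop (sketch):
#   loss.backward(); optimizer.step(); project_lipschitz(model, c)
\end{verbatim}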
The goal of this set of experiments is to demonstrate the relationship between the Lipschitz regularity of a neural network and the astuteness of explainers. As the \textit{same} neural network is trained on the \textit{same} data but with different levels of Lipschitz constraints enforced, the astuteness of explainers varies accordingly. In all cases in Figure~\ref{fig:experiments2} we see astuteness reaching $1$ for smaller values of $\lambda$ for the same neural network when it is highly constrained (lower Lipschitz constant $L$) versus less constrained or unconstrained. The results provide empirical evidence in support of the main conclusion that can be drawn from our work: i.e., enforcing Lipschitzness on classifiers lends them to more astute post-hoc explanations.
\vspace{-1em}
\subsection{Estimating Probabilistic Lipschitzness and Lower Bound for Astuteness}
\vspace{-1em}
\label{sec:estimatingL}
To demonstrate the connection between explainer astuteness and probabilistic Lipschitzness, as alluded to by our theory, we need to estimate probabilistic Lipschitzness for classifiers. In our experiments we achieve this by empirically estimating $\mathbb{P}_{x,x' \sim \mathcal{D}}$ (Eq.~\eqref{eq:prob_lip}) for a range of values of $L \in (0, 1)$ in increments of $0.1$. We do this for each classifier and each dataset $D$, and set $r$ to the median pairwise distance over all training points. According to Eq.~\eqref{eq:prob_lip}, this gives us an upper bound on $1-\alpha$, i.e., we can say that for a given $L, r$ the classifier is Lipschitz with probability at least $1 - \alpha$.
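A minimal sketch of this estimation procedure follows; the function and variable names are ours, and \texttt{f} is assumed to return the classifier output for a single point:
\begin{verbatim}
import numpy as np

def estimate_prob_lipschitz(f, X, L, r, n_pairs=10000, seed=0):
    # Empirically estimate P(|f(x)-f(x')| <= L * ||x-x'||)
    # over random pairs x, x' drawn from X with ||x-x'|| <= r.
    rng = np.random.default_rng(seed)
    hits, total = 0, 0
    while total < n_pairs:
        i, j = rng.integers(0, len(X), size=2)
        dist = np.linalg.norm(X[i] - X[j])
        if dist == 0 or dist > r:
            continue  # keep only distinct pairs within radius r
        total += 1
        hits += abs(f(X[i]) - f(X[j])) <= L * dist
    return hits / total
\end{verbatim}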
We can use these estimates of probabilistic Lipschitzness to predict the lower bound on astuteness given by our theorems. We do this by noting that our theorems imply that for $\lambda = CL\sqrt{d}$, explainer astuteness is at least $1 - \alpha$. This means we can simply multiply the range of Lipschitz constants $L$ by $C \sqrt{d}$, and for any $\lambda$ greater than or equal to that value we can guarantee that explainer astuteness is lower-bounded by $1 - \alpha$.
For each dataset--classifier--explainer combination we can plot two curves: one representing the predicted lower bound on explainer astuteness given a classifier, as described in the previous paragraph, and one showing the actual estimates of explainer astuteness using Definition~\ref{def:exp_astuteness}. According to our theoretical results, at a given $\lambda$ the estimated explainer astuteness should stay above the astuteness predicted from the Lipschitzness of the classifier. We show these curves in Appendix Figure~\ref{fig:sup_experiments} and summarize them in tabular form in Table~\ref{table:auc} to conserve space. The table shows the difference between the AUC under the estimated astuteness curves ($\mathbf{AUC}$) and the AUC under the predicted lower bound ($\mathbf{AUC_{lb}}$). This number captures the average gap above the lower bound over a range of $\lambda$ values. Note that the values are all positive, supporting our result as a lower bound.
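Concretely, assuming both curves are evaluated on a common grid of $\lambda$ values, the table entries can be computed as in the following sketch (variable names are ours):
\begin{verbatim}
import numpy as np

# lambdas: grid of lambda values; astuteness: estimated
# explainer astuteness; astuteness_lb: predicted lower bound
# at the same lambda values.
auc = np.trapz(astuteness, lambdas)
auc_lb = np.trapz(astuteness_lb, lambdas)
gap = auc - auc_lb   # the reported difference; should be >= 0
\end{verbatim}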
\begin{figure}[!h]
\centering
\includegraphics[width=1\columnwidth]{figures/nonlinear_lip.png}
\includegraphics[width=1\columnwidth]{figures/rice_lip.png}
\includegraphics[width=1\columnwidth]{figures/telescope_lip.png}
\caption{Regularizing the Lipschitzness of a neural network during training results in higher astuteness for the same value of $\lambda$. Higher regularization results in a lower Lipschitz constant \citep{gouk2021regularisation}. Astuteness reaches $1$ at smaller values of $\lambda$ with Lipschitz-regularized training, as expected from our theorems. The error bars represent results across 5 runs to account for randomness in training. See Figure~\ref{fig:experiments2_sup} in the Appendix for plots for all six datasets.}
\label{fig:experiments2}
\end{figure}
\begin{table*}[!h]
{\centering
\caption{$\mathbf{AUC - AUC_{lb}}(\downarrow)$. The observed AUC is lower bounded by the predicted AUC. The difference between the two is always $\geq 0$. The corresponding plots can be found in Appendix Figure~\ref{fig:sup_experiments}.}
\label{table:auc}
\scriptsize
\begin{center}
\setlength\tabcolsep{3.2pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
& \multicolumn{3}{c|}{2layer}
& \multicolumn{3}{c|}{4layer}
& \multicolumn{3}{c|}{linear}
& \multicolumn{3}{c|}{svm}\\
\midrule
\textbf{Datasets}
& \textbf{SHAP}
& \textbf{RISE}
& \textbf{CXPlain}
& \textbf{SHAP}
& \textbf{RISE}
& \textbf{CXPlain}
& \textbf{SHAP}
& \textbf{RISE}
& \textbf{CXPlain}
& \textbf{SHAP}
& \textbf{RISE}
& \textbf{CXPlain}
\\
\midrule
OS
& .585
& .477
& .551
& .489
& .415
& .426
& .043
& .017
& .043
& .761
& .628
& .732
\\
NA
& .359
& .289
& .318
& .285
& .216
& .244
& .452
& .391
& .474
& .742
& .653
& .708
\\
Switch
& .053
& .053
& .003
& .086
& .083
& .039
& .043
& .028
& .034
& .557
& .472
& .524
\\
Rice
& .249
& .142
& .229
& .292
& .131
& .252
& .258
& .165
& .241
& .426
& .347
& .413
\\
Telescope
& .324
& .213
& .317
& .345
& .244
& .333
& .223
& .149
& .211
& .501
& .439
& .504
\\
\bottomrule
\end{tabular}
\end{center}
}
\end{table*}
\vspace{-1em}
\section{Conclusion, Limitations and Broader Impact}
\vspace{-1em}
\label{sec:conclusion}
In this paper we formally defined \emph{explainer astuteness}, which captures the probability that a given explainer will assign similar explanations to similar points. We theoretically proved that explainer astuteness is proportional to the \emph{probabilistic Lipschitzness} of the black-box function being explained. As probabilistic Lipschitzness captures the local smoothness of a function, this result suggests that enforcing smoothness on black-box models can lend these models to more robust explanations. In terms of limitations, our experimental results suggest that the predicted lower bound can be tightened further.
From a broader societal impact perspective, we would like to make it clear that merely enforcing Lipschitzness on black-box classifiers should not be considered sufficient for making them transparent and interpretable. Our work is intended as a call to action for the field to concentrate more on improving black-box models for explainability purposes when they are conceptualized and trained, and it provides one of possibly many ways to achieve that goal.
\begin{ack}
This work was supported by the US Army Research Institute for the Behavioral and Social Sciences (ARI W911N16-1-0191), the National Institute of Health (NIH 2T32HL007427-41), by Award Numbers U01 HL089897, U01 HL089856, R01 HL124233 and R01 HL147326 from the National Heart, Lung, and Blood Institute, and by the FDA Center for Tobacco Products (CTP).
\end{ack}
|
2,869,038,153,884 | arxiv | \section{Introduction}
The {\it IJCAI--20 Proceedings} will be printed from electronic
manuscripts submitted by the authors. These must be PDF ({\em Portable
Document Format}) files formatted for 8-1/2$''$ $\times$ 11$''$ paper.
\subsection{Length of Papers}
All paper {\em submissions} must have a maximum of six pages, plus at most one for references. The seventh page cannot contain {\bf anything} other than references.
The length rules may change for final camera-ready versions of accepted papers and will differ between tracks. Some tracks may include only references in the last page, whereas others allow for any content in all pages. Similarly, some tracks allow you to buy a few extra pages should you want to, whereas others don't.
If your paper is accepted, please carefully read the notifications you receive, and check the proceedings submission information website\footnote{\url{https://proceedings.ijcai.org/info}} to know how many pages you can finally use. That website holds the most up-to-date information regarding paper length limits at all times. Please notice that if your track allows for a special references-only page, the {\bf references-only page(s) cannot contain anything other than references} (i.e.: do not write your acknowledgments on that page or you will be charged for it).
\subsection{Word Processing Software}
As detailed below, IJCAI has prepared and made available a set of
\LaTeX{} macros and a Microsoft Word template for use in formatting
your paper. If you are using some other word processing software, please follow the format instructions given below and ensure that your final paper looks as much like this sample as possible.
\section{Style and Format}
\LaTeX{} and Word style files that implement these instructions
can be retrieved electronically. (See Appendix~\ref{stylefiles} for
instructions on how to obtain these files.)
\subsection{Layout}
Print manuscripts two columns to a page, in the manner in which these
instructions are printed. The exact dimensions for pages are:
\begin{itemize}
\item left and right margins: .75$''$
\item column width: 3.375$''$
\item gap between columns: .25$''$
\item top margin---first page: 1.375$''$
\item top margin---other pages: .75$''$
\item bottom margin: 1.25$''$
\item column height---first page: 6.625$''$
\item column height---other pages: 9$''$
\end{itemize}
All measurements assume an 8-1/2$''$ $\times$ 11$''$ page size. For
A4-size paper, use the given top and left margins, column width,
height, and gap, and modify the bottom and right margins as necessary.
\subsection{Format of Electronic Manuscript}
For the production of the electronic manuscript, you must use Adobe's
{\em Portable Document Format} (PDF). A PDF file can be generated, for
instance, on Unix systems using {\tt ps2pdf} or on Windows systems
using Adobe's Distiller. There is also a website with free software
and conversion services: \url{http://www.ps2pdf.com}. For reasons of
uniformity, use of Adobe's {\em Times Roman} font is strongly suggested.
In \LaTeX2e{} this is accomplished by writing
\begin{quote}
\mbox{\tt $\backslash$usepackage\{times\}}
\end{quote}
in the preamble.\footnote{You may also want to use the package {\tt
latexsym}, which defines all symbols known from the old \LaTeX{}
version.}
Additionally, it is of utmost importance to specify the {\bf
letter} format (corresponding to 8-1/2$''$ $\times$ 11$''$) when
formatting the paper. When working with {\tt dvips}, for instance, one
should specify {\tt -t letter}.
\subsection{Title and Author Information}
Center the title on the entire width of the page in a 14-point bold
font. The title must be capitalized using Title Case. Below it, center author name(s) in 12-point bold font. On the following line(s) place the affiliations, each affiliation on its own line using 12-point regular font. Matching between authors and affiliations can be done using numeric superscripts. Optionally, a comma-separated list of email addresses follows the affiliation(s) line(s), using 12-point regular font.
\subsubsection{Blind Review}
In order to make blind reviewing possible, authors must omit their
names and affiliations when submitting the paper for review. In place
of names and affiliations, provide a list of content areas. When
referring to one's own work, use the third person rather than the
first person. For example, say, ``Previously,
Gottlob~\shortcite{gottlob:nonmon} has shown that\ldots'', rather
than, ``In our previous work~\cite{gottlob:nonmon}, we have shown
that\ldots'' Try to avoid including any information in the body of the
paper or references that would identify the authors or their
institutions. Such information can be added to the final camera-ready
version for publication.
\subsection{Abstract}
Place the abstract at the beginning of the first column 3$''$ from the
top of the page, unless that does not leave enough room for the title
and author information. Use a slightly smaller width than in the body
of the paper. Head the abstract with ``Abstract'' centered above the
body of the abstract in a 12-point bold font. The body of the abstract
should be in the same font as the body of the paper.
The abstract should be a concise, one-paragraph summary describing the
general thesis and conclusion of your paper. A reader should be able
to learn the purpose of the paper and the reason for its importance
from the abstract. The abstract should be no more than 200 words long.
\subsection{Text}
The main body of the text immediately follows the abstract. Use
10-point type in a clear, readable font with 1-point leading (10 on
11).
Indent when starting a new paragraph, except after major headings.
\subsection{Headings and Sections}
When necessary, headings should be used to separate major sections of
your paper. (These instructions use many headings to demonstrate their
appearance; your paper should have fewer headings.) All headings should be capitalized using Title Case.
\subsubsection{Section Headings}
Print section headings in 12-point bold type in the style shown in
these instructions. Leave a blank space of approximately 10 points
above and 4 points below section headings. Number sections with
arabic numerals.
\subsubsection{Subsection Headings}
Print subsection headings in 11-point bold type. Leave a blank space
of approximately 8 points above and 3 points below subsection
headings. Number subsections with the section number and the
subsection number (in arabic numerals) separated by a
period.
\subsubsection{Subsubsection Headings}
Print subsubsection headings in 10-point bold type. Leave a blank
space of approximately 6 points above subsubsection headings. Do not
number subsubsections.
\paragraph{Titled paragraphs.} You should use titled paragraphs if and
only if the title covers exactly one paragraph. Such paragraphs should be
separated from the preceding content by at least 3pt, and no more than
6pt. The title should be in 10pt bold font and ended with a period.
After that, a 1em horizontal space should follow the title before
the paragraph's text.
In \LaTeX{} titled paragraphs should be typeset using
\begin{quote}
{\tt \textbackslash{}paragraph\{Title.\} text} .
\end{quote}
\subsubsection{Acknowledgements}
You may include an unnumbered acknowledgments section, including
acknowledgments of help from colleagues, financial support, and
permission to publish. If present, acknowledgements must be in a dedicated,
unnumbered section appearing after all regular sections but before any
appendices or references.
Use
\begin{quote}
{\tt \textbackslash{}section*\{Acknowledgements\}}
\end{quote}
to typeset the acknowledgements section in \LaTeX{}.
\subsubsection{Appendices}
Any appendices directly follow the text and look like sections, except
that they are numbered with capital letters instead of arabic
numerals. See this document for an example.
\subsubsection{References}
The references section is headed ``References'', printed in the same
style as a section heading but without a number. A sample list of
references is given at the end of these instructions. Use a consistent
format for references. The reference list should not include unpublished
work.
\subsection{Citations}
Citations within the text should include the author's last name and
the year of publication, for example~\cite{gottlob:nonmon}. Append
lowercase letters to the year in cases of ambiguity. Treat multiple
authors as in the following examples:~\cite{abelson-et-al:scheme}
or~\cite{bgf:Lixto} (for more than two authors) and
\cite{brachman-schmolze:kl-one} (for two authors). If the author
portion of a citation is obvious, omit it, e.g.,
Nebel~\shortcite{nebel:jair-2000}. Collapse multiple citations as
follows:~\cite{gls:hypertrees,levesque:functional-foundations}.
\nocite{abelson-et-al:scheme}
\nocite{bgf:Lixto}
\nocite{brachman-schmolze:kl-one}
\nocite{gottlob:nonmon}
\nocite{gls:hypertrees}
\nocite{levesque:functional-foundations}
\nocite{levesque:belief}
\nocite{nebel:jair-2000}
\subsection{Footnotes}
Place footnotes at the bottom of the page in a 9-point font. Refer to
them with superscript numbers.\footnote{This is how your footnotes
should appear.} Separate them from the text by a short
line.\footnote{Note the line separating these footnotes from the
text.} Avoid footnotes as much as possible; they interrupt the flow of
the text.
\section{Illustrations}
Place all illustrations (figures, drawings, tables, and photographs)
throughout the paper at the places where they are first discussed,
rather than at the end of the paper.
They should be floated to the top (preferred) or bottom of the page,
unless they are an integral part
of your narrative flow. When placed at the bottom or top of
a page, illustrations may run across both columns, but not when they
appear inline.
Illustrations must be rendered electronically or scanned and placed
directly in your document. They should be cropped outside \LaTeX{}; otherwise, portions of the image could reappear during the post-processing of your paper. All illustrations should be understandable when printed in black and
white, although you can use colors to enhance them. Line weights should
be 1/2-point or thicker. Avoid screens and superimposing type on
patterns, as these effects may not reproduce well.
Number illustrations sequentially. Use references of the following
form: Figure 1, Table 2, etc. Place illustration numbers and captions
under illustrations. Leave a margin of 1/4-inch around the area
covered by the illustration and caption. Use 9-point type for
captions, labels, and other text in illustrations. Captions should always appear below the illustration.
\section{Tables}
Tables are considered illustrations containing data. Therefore, they should also appear floated to the top (preferably) or bottom of the page, and with the captions below them.
\begin{table}
\centering
\begin{tabular}{lll}
\hline
Scenario & $\delta$ & Runtime \\
\hline
Paris & 0.1s & 13.65ms \\
Paris & 0.2s & 0.01ms \\
New York & 0.1s & 92.50ms \\
Singapore & 0.1s & 33.33ms \\
Singapore & 0.2s & 23.01ms \\
\hline
\end{tabular}
\caption{Latex default table}
\label{tab:plain}
\end{table}
\begin{table}
\centering
\begin{tabular}{lrr}
\toprule
Scenario & $\delta$ (s) & Runtime (ms) \\
\midrule
Paris & 0.1 & 13.65 \\
& 0.2 & 0.01 \\
New York & 0.1 & 92.50 \\
Singapore & 0.1 & 33.33 \\
& 0.2 & 23.01 \\
\bottomrule
\end{tabular}
\caption{Booktabs table}
\label{tab:booktabs}
\end{table}
If you are using \LaTeX, you should use the {\tt booktabs} package, because it produces better tables than the standard ones. Compare Tables \ref{tab:plain} and~\ref{tab:booktabs}. The latter is clearly more readable for three reasons:
\begin{enumerate}
\item The styling is better thanks to using the {\tt booktabs} rulers instead of the default ones.
\item Numeric columns are right-aligned, making it easier to compare the numbers. Make sure to also right-align the corresponding headers, and to use the same precision for all numbers.
\item We avoid unnecessary repetition, both between lines (no need to repeat the scenario name in this case) as well as in the content (units can be shown in the column header).
\end{enumerate}
\section{Formulas}
IJCAI's two-column format makes it difficult to typeset long formulas. A usual temptation is to reduce the size of the formula by using the {\tt small} or {\tt tiny} sizes. This doesn't work correctly with the current \LaTeX{} versions, breaking the line spacing of the preceding paragraphs and title, as well as the equation number sizes. The following equation demonstrates the effects (notice that this entire paragraph looks badly formatted):
\begin{tiny}
\begin{equation}
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i
\end{equation}
\end{tiny}%
Reducing formula sizes this way is strictly forbidden. We {\bf strongly} recommend that authors split formulas into multiple lines when they don't fit in a single line. This is the easiest approach to typesetting those formulas and provides the most readable output:%
\begin{align}
x =& \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \nonumber\\
+ & \prod_{i=1}^n \sum_{j=1}^n j_i
\end{align}%
If a line is just slightly longer than the column width, you may use the {\tt resizebox} environment on that equation. The result looks better and doesn't interfere with the paragraph's line spacing: %
\begin{equation}
\resizebox{.91\linewidth}{!}{$
\displaystyle
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i
$}
\end{equation}%
This last solution may have to be adapted if you use different equation environments, but it can generally be made to work. Please notice that in any case:
\begin{itemize}
\item Equation numbers must be in the same font and size as the main text (10pt).
\item Your formula's main symbols should not be smaller than {\small small} text (9pt).
\end{itemize}
For instance, the formula
\begin{equation}
\resizebox{.91\linewidth}{!}{$
\displaystyle
x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j
$}
\end{equation}
would not be acceptable because the text is too small.
\section{Examples, Definitions, Theorems and Similar}
Examples, definitions, theorems, corollaries and similar must be written in their own paragraph. The paragraph must be separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. They must begin with the kind of item written in 10pt bold font followed by their number (e.g.: Theorem 1), optionally followed by a title/summary between parentheses in non-bold font and ended with a period. After that the main body of the item follows, written in 10 pt italics font (see below for examples).
In \LaTeX{} we strongly recommend defining environments for your examples, definitions, propositions, lemmas, corollaries and similar. This can be done in your \LaTeX{} preamble using \texttt{\textbackslash{}newtheorem} -- see the source of this document for examples. Numbering for these items must be global, not per-section (e.g.: Theorem 1 instead of Theorem 6.1).
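For instance, preamble lines such as
\begin{quote}
{\tt \textbackslash{}newtheorem\{theorem\}\{Theorem\}} \\
{\tt \textbackslash{}newtheorem\{example\}\{Example\}}
\end{quote}
define the globally numbered theorem and example environments used below.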
\begin{example}[How to write an example]
Examples should be written using the example environment defined in this template.
\end{example}
\begin{theorem}
This is an example of an untitled theorem.
\end{theorem}
You may also include a title or description using these environments as shown in the following theorem.
\begin{theorem}[A titled theorem]
This is an example of a titled theorem.
\end{theorem}
\section{Proofs}
Proofs must be written in their own paragraph separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. Proof paragraphs should start with the keyword ``Proof." in 10pt italics font. After that the proof follows in regular 10pt font. At the end of the proof, an unfilled square symbol (qed) marks the end of the proof.
In \LaTeX{} proofs should be typeset using the \texttt{\textbackslash{}proof} environment.
\begin{proof}
This paragraph is an example of what a proof looks like using the \texttt{\textbackslash{}proof} environment.
\end{proof}
\section{Algorithms and Listings}
Algorithms and listings are a special kind of figures. Like all illustrations, they should appear floated to the top (preferably) or bottom of the page. However, their caption should appear in the header, left-justified and enclosed between horizontal lines, as shown in Algorithm~\ref{alg:algorithm}. The algorithm body should be terminated with another horizontal line. It is up to the authors to decide whether to show line numbers or not, how to format comments, etc.
In \LaTeX{} algorithms may be typeset using the {\tt algorithm} and {\tt algorithmic} packages, but you can also use one of the many other packages for the task.
\begin{algorithm}[tb]
\caption{Example algorithm}
\label{alg:algorithm}
\textbf{Input}: Your algorithm's input\\
\textbf{Parameter}: Optional list of parameters\\
\textbf{Output}: Your algorithm's output
\begin{algorithmic}[1]
\STATE Let $t=0$.
\WHILE{condition}
\STATE Do some action.
\IF {conditional}
\STATE Perform task A.
\ELSE
\STATE Perform task B.
\ENDIF
\ENDWHILE
\STATE \textbf{return} solution
\end{algorithmic}
\end{algorithm}
\section*{Acknowledgments}
The preparation of these instructions and the \LaTeX{} and Bib\TeX{}
files that implement them was supported by Schlumberger Palo Alto
Research, AT\&T Bell Laboratories, and Morgan Kaufmann Publishers.
Preparation of the Microsoft Word file was supported by IJCAI. An
early version of this document was created by Shirley Jowell and Peter
F. Patel-Schneider. It was subsequently modified by Jennifer
Ballentine and Thomas Dean, Bernhard Nebel, Daniel Pagenstecher,
Kurt Steinkraus, Toby Walsh and Carles Sierra. The current version
has been prepared by Marc Pujol-Gonzalez and Francisco Cruz-Mencia.
\section{Introduction}
Combinatorial action spaces pose a challenging problem for AI agents, both from a computational and from an exploratory point of view. This is because (i) finding the best action may require iterating over all actions, an exponentially hard task, and (ii) absent prior knowledge, finding the best action requires testing all actions multiple times at each state \citep{brafman2002r}. While the exploratory task is of great importance, in this work we focus on the computational aspects of the problem. Our method can be seen as a natural application of Action Assembly Theory (AAT) \citep{greene2008action}. According to Greene, behavior is described by two essential processes: \textit{representation} and \textit{processing}. Representation refers to the way information is coded and stored in the mind, whereas processing refers to the mental operations performed to retrieve this information \citep{greene2008action}. Having good representations of information and an efficient processing procedure allows us to quickly exploit highly rewarding nuances of an environment upon first discovery.
In this work we propose the first computationally efficient algorithm (see Figure \ref{fig:algorithm}), called \textit{Sparse Imitation Learning\ (Sparse-IL)}, which is inspired by AAT and combines imitation learning with a Compressed Sensing (CS) retrieval mechanism to solve text-based games with combinatorial action spaces.
Our approach is composed of:
\textbf{(1) Encoder} - the encoder receives a state as input (Figure~\ref{fig:algorithm}). The state is composed of individual words represented by word embeddings that were previously trained on a large corpus of text. We train the encoder, using imitation learning, to generate a continuous action $\mathbf{a}_\text{SoE}$ (a dense representation of the action). The action $\mathbf{a}_\text{SoE}$ corresponds to a sum of word embeddings of the action that the agent intends to take, e.g., the embedding of the action `take egg' is the sum of the word embedding vectors of `take' and `egg'. As the embeddings capture a prior, i.e., similarity, over language, it enables improved generalization and robustness to noise when compared to an end-to-end approach.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/omp-network-horizontal}
\caption{The Sparse-IL\ algorithm.}\label{fig:algorithm}
\end{figure}
\textbf{(2) Retrieval Mechanism} - given a continuous vector $\mathbf{a}_\text{SoE}$, we reconstruct the $K$ best Bag-of-Words (BoW) actions $\mathbf{a}_\text{BoW}$, composed of up to $l=4$ words, from the continuous output of the encoder. We do this using an algorithm that we term Integer K-Orthogonal Matching Pursuit (IK-OMP). We then use a fitness function to score the actions, after which, the best action is fed into a language model to yield an action sentence $\mathbf{a}_\text{env}$ that can be parsed by the game.
\textbf{Main contributions:} We propose a computationally efficient algorithm called Sparse-IL that combines CS with imitation learning to solve natural language tasks with combinatorial action spaces. We show that IK-OMP, which we adapted from \cite{white2016generating} and \cite{lin2013kbest}, can be used to recover a BoW vector from a sum of the individual word embeddings in a computationally efficient manner, even in the presence of significant noise. We demonstrate that Sparse-IL\ can solve the
entire game of Zork1, for the first time, while considering a combinatorial action space of approximately 10 million actions, using noisy, imperfect demonstrations.
This paper is structured as follows: Section \ref{sec:relatedwork} details relevant related work. Section \ref{sec:problemsetting} provides an overview of the problem setting; that is, the text-based game of Zork and the challenges it poses. Section \ref{sec:compressed_sensing} provides an overview of CS algorithms and, in particular, our variant called IK-OMP. Section \ref{sec:imitationlearning} introduces our Sparse-IL\ algorithm. Finally, in Section \ref{sec: experiments} we present our empirical evaluations, which include experiments in the text-based game Zork1 highlighting the robustness of IK-OMP to noise and its computational efficiency, and showcasing the ability of Sparse-IL\ to solve both the `Troll Quest' and the entire game of Zork1.
\section{Related work}
\label{sec:relatedwork}
\textit{Combinatorial action spaces in text-based games:} Previous works have suggested approaches for solving text-based games \citep{he2016deep,yuan2018counting,zahavy2018learn,zelinka2018using,tao2018towards}. However, these techniques do not scale to combinatorial action spaces. For example, \cite{he2016deep} presented the DRRN, which requires each action to be evaluated by the network. This results in a total of $\bigO(|A|)$ forward passes. \cite{zahavy2018learn} proposed the Action-Elimination DQN, resulting in a smaller action set $\bigO(|A'|)$. However, this set may still be of exponential size.
\textit{CS and embeddings representation:} CS was originally introduced in the Machine Learning (ML) world by \cite{calderbank2009compressed}, who proposed the concept of compressed learning. That is, learning directly in the compressed domain, e.g. the embeddings domain in the Natural Language Processing (NLP) setting. The task of generating BoW from the sums of their word embeddings was first formulated by \cite{white2016generating}. A greedy approach, very similar to orthogonal matching pursuit (OMP), was proposed to iteratively find the words. However, this recovery task was only explicitly linked to the field of CS two years later in \cite{arora2018compressed}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/zork_console}
\caption{Zork1 example screen.}\label{fig:zork1_egg}
\end{figure}
\section{Problem setting}
\label{sec:problemsetting}
\paragraph{Zork - A text-based game:}
Text-based games \citep{cote2018textworld} are complex interactive games usually played through a command line terminal. An example from Zork1, a text-based game, is shown in Figure \ref{fig:zork1_egg}. In each turn, the player is presented with several lines of text which describe the state of the game, and the player acts by entering a text command. In order to cope with complex commands, the game is equipped with an interpreter which deciphers the input and maps it to in-game actions. For instance, in Figure~\ref{fig:zork1_egg}, the command ``climb the large tree'' is issued, after which the player receives a response. In this example, the response explains that up in the tree is a collectible item -- a jewel encrusted egg. The large, combinatorial action space is one of the main reasons Zork poses an interesting research problem. The actions are issued as free-text and thus the complexity of the problem grows exponentially with the size of the dictionary in use.
\textbf{Our setup:} In this work, we consider two tasks: the `Troll Quest' \citep{zahavy2018learn} and `Open Zork', i.e., solving the entire game. The `Troll Quest' is a sub-task within `Open Zork', in which the agent must enter the house, collect a lantern and sword, move a rug which reveals a trapdoor, open the trapdoor and enter the basement. Finally, in the basement, the agent encounters a troll which it must kill using the sword. An incorrect action at any stage may prevent the agent from reaching the goal, or even result in its death (termination).
In our setting, we consider a dictionary $D$ of $112$ unique words, extracted from a walk-through of actions which solve the game, i.e., a demonstrated sequence of actions (sentences) used to solve the game. We limit the maximal sentence length to $l=4$ words. Thus, the number of possible, unordered, word combinations is roughly $d^l/l!$, i.e., the dictionary size to the power of the maximal sentence length, divided by the number of possible permutations. This results in approximately 10 million possible actions.
\textbf{Markov Decision Process (MDP):} Text-based games can be modeled as Markov Decision Processes.
An MDP $\mathcal{M}$ is defined by the tuple $(S, A, R, P)$ \citep{sutton1998reinforcement}. In the context of text-based games, the states $\state$ are paragraphs representing the current observation. $\mathbf{a}_\text{env} \in A$ are the available discrete actions, e.g., all combinations of words from the dictionary up to a maximal given sentence length $l$. $R : S \times A \times S \mapsto \mathbb{R}$ is the bounded reward function, for instance collecting items provides a positive reward. $P : S \times A \times S \mapsto [0, 1]$ is the transition matrix, where $P(\state'|\state,\mathbf{a}_\text{env})$ is the probability of transitioning from $\state$ to $\state'$ given $\mathbf{a}_\text{env}$ was taken.
\textbf{Action Space:} While the common approach may be to consider a discrete action space, such an approach may be infeasible, as the complexity of solving the MDP is related to the size of the effective action space. Hence, in this work, we consider an alternative, continuous representation. As each action is a sentence composed of words, we represent each action using the sum of the embeddings of its tokens, or constituent words, denoted by $\mathbf{a}_\text{SoE}$ (Sum of Embeddings). A simple form of embedding is the BoW, which represents a word using a one-hot vector the size of the dictionary, in which the dictionary index of the word is set to $1$. Aside from the BoW embedding, there exist additional forms of embedding vectors, for instance Word2vec and GloVe, which encode the similarity between words (in terms of cosine distance). These embeddings are pre-trained using unsupervised learning techniques and, similarly to how convolutional neural networks enable generalization across similar states, word embeddings enable generalization across similar sentences, i.e., actions.
In this work, we utilize GloVe embeddings, pre-trained on the Wikipedia corpus. We chose GloVe over Word2vec, as there exist pre-trained embeddings in low dimensional space. The embedding space dimensionality is $m=50$, significantly smaller in dimension than the size $d$ of the dictionary $D$, $112$ in our experiments. Given the continuous representation of an action, namely the sum of embeddings of the sentence tokens $\mathbf{a}_\text{SoE} \in \reals^m$, the goal is to recover the corresponding discrete action $\mathbf{a}_\text{env}$, that is the tokens composing the sentence. These may be represented as a BoW vector $\mathbf{a}_\text{BoW} \in \integers^d$. Recovering the sentence from $\mathbf{a}_\text{BoW}$ requires prior information on the language model.
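To make the linear relation between the two representations explicit, consider the following illustrative sketch (the index map \texttt{idx} and the matrix \texttt{D} are hypothetical placeholders):
\begin{verbatim}
import numpy as np

# D: (50 x 112) matrix whose columns are the GloVe embeddings
# of the dictionary words; idx maps a word to its column index.
a_bow = np.zeros(112)
for word in ["take", "egg"]:
    a_bow[idx[word]] += 1
a_soe = D @ a_bow   # the continuous action representation
\end{verbatim}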
Provided a set of words, the goal of a \emph{language model} (the last element in Figure~\ref{fig:algorithm}, and a central piece in many important NLP tasks) is to output the most likely ordering which yields a grammatically correct sentence. In this paper, we use a rule-based approach with relatively simple rules. For example, given a verb and an object, the verb comes before the object, e.g., [`sword', `take'] $\mapsto$ `take sword'.
To conclude, we train a neural network $\text{E}_\theta (\state)$ to predict the sum of embeddings $\mathbf{a}_\text{SoE}$. Using CS (Section~\ref{sec:compressed_sensing}), we recover the BoW vector $\text{R}(\mathbf{a}_\text{SoE}) = \mathbf{a}_\text{BoW}$, i.e., the set of words which compose the sentence. Finally, a language model M converts $\mathbf{a}_\text{BoW}$ into a valid discrete-action, namely $\text{M}(\mathbf{a}_\text{BoW}) = \mathbf{a}_\text{env}$. The combined approach is as follows:
$\mathbf{a}_\text{env} = \text{M}(\text{R}(\text{E}(\state))) \enspace .$
\section{Compressed sensing}\label{sec:compressed_sensing}
This section provides some background on CS and sparse recovery, including practical recovery algorithms and theoretical recovery guarantees. In particular, we describe our variant of one popular reconstruction algorithm, OMP, that we refer to as Integer K-OMP (IK-OMP). The first modification allows exploitation of the integer prior on the sparse vector $\mathbf{a}_\text{BoW}$ and is inspired by \citet{white2016generating} and \citet{sparrer2015soft}. The second mitigates the greedy nature of OMP using beam search \citep{lin2013kbest}. In \cref{sec: cs experiments}, we experimentally compare different sparse recovery methods and demonstrate the superiority of introducing the integer prior and the beam search strategy.
\subsection{Sparse Recovery}
CS is concerned with recovering a high-dimensional $p$-sparse signal $\mathbf{x} \in \reals^d$ (the BoW vector $\mathbf{a}_\text{BoW}$ in our setting) from a low dimensional measurement vector $\yy \in \reals^m$ (the sum of embeddings vector $\mathbf{a}_\text{SoE}$). That is, given a dictionary $\mathbf{D} \in \reals^{m \times d}$:
\begin{align}
\min ||\mathbf{x}||_0 \enspace \text{subject to} \enspace \mathbf{D}\mathbf{x} = \yy.
\label{eq:cs_l0}
\end{align}
To ensure uniqueness of the solution of \eqref{eq:cs_l0}, the sensing matrix, or dictionary, $\mathbf{D}$
must fulfill certain properties. These are key to provide practical recovery guarantees as well. Well known such properties are the spark, or Kruskal rank \citep{donoho2003optimally}, and the Restricted Isometry Property (RIP) \citep{candes2005decoding}. Unfortunately, these are typically as hard to compute as solving the original problem \eqref{eq:cs_l0}. While the mutual-coherence (see Definition~\ref{def:coherence}) provides looser bounds, it is easily computable. Thus, we focus on mutual-coherence based results and note that Spark and RIP based guarantees may be found in \cite{elad2010book}.
\begin{definition}[\cite{elad2010book} Definition 2.3]\label{def:coherence}
The mutual coherence of a given matrix $\mathbf{D}$ is the largest absolute normalized inner product between different columns from $\mathbf{D}$. Denoting the $k$-th column in $\mathbf{D}$ by $\mathbf{d}_k$, it is given by
$\mu (\mathbf{D}) = \max_{1 \leq i, j \leq m, \enspace i \neq j} \frac{| \mathbf{d}_j^T \mathbf{d}_i |}{||\mathbf{d}_i||_2 ||\mathbf{d}_j||_2}.$
\end{definition}
The mutual-coherence characterizes the dependence between columns of the matrix $\mathbf{D}$. For a unitary matrix, columns are pairwise orthogonal, and as a result, the mutual-coherence is zero. For general matrices with more columns than rows ($m < d$), as in our case, $\mu$ is necessarily strictly positive, and we desire the smallest possible value so as to get as close as possible to the behavior exhibited by unitary matrices \citep{elad2010book}. This is illustrated in the following uniqueness theorem.
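Unlike the spark or the RIP constant, the mutual-coherence can be computed directly from the dictionary; a minimal NumPy sketch (our own, not from the cited references):
\begin{verbatim}
import numpy as np

def mutual_coherence(D):
    # Largest absolute normalized inner product between
    # distinct columns of D (the definition above).
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()
\end{verbatim}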
\begin{theorem}[\cite{elad2010book} Theorem 2.5]
\label{th:cs_unique}
If a system of linear equations $\mathbf{D}\mathbf{x}=\yy$ has a solution $\mathbf{x}$ obeying
$p < \frac{1}{2} \left( 1 + \frac{1}{\mu(\mathbf{D})} \right),$
where $p=||\mathbf{x}||_0$, this solution is the sparsest possible.
\end{theorem}
We now turn to discuss practical methods to solve \eqref{eq:cs_l0}.
\subsection{Recovery Algorithms}
The sparse recovery problem \eqref{eq:cs_l0} is non-convex due to the $\ell_0$-norm. Although it may be solved via combinatorial search, the complexity is exponential in the dictionary dimension $d$, and it has been proven that \eqref{eq:cs_l0} is, in general, NP-Hard \citep{elad2010book}.\\
One approach to solve \eqref{eq:cs_l0}, \textit{basis pursuit}, relaxes the $\ell_0$-minimization to its $\ell_1$-norm convex surrogate,
\begin{align} \label{eq:cs_l1}
\min ||\mathbf{x}||_1\enspace \text{s.t.} \enspace \mathbf{D}\mathbf{x} = \yy.
\end{align}
In the presence of noise, the condition $\mathbf{D}\mathbf{x}=\yy$ is replaced by $||\mathbf{D}\mathbf{x}-\yy||_2 \leq \epsilon$. The Lagrangian relaxation of this quadratic program is written, for some $\lambda>0,$ as
$\min ||\mathbf{x}||_1 + \lambda ||\yy-\mathbf{D}\mathbf{x}||_2,$
and is known as basis pursuit denoising (BPDN).
The above noiseless and noisy problems can be respectively cast as linear programming and second-order cone programming problems \citep{chen2001atomic}. They thus may be solved using techniques such as interior-point methods \citep{ben2001lectures, boyd2004convex}. In large-scale problems, dense sensing matrices often preclude the use of such methods. This motivated the search for simpler gradient-based algorithms for solving \eqref{eq:cs_l1}, such as the fast iterative shrinkage-thresholding algorithm (FISTA) \citep{beck2009fast}.
Alternatively, one may use greedy methods, broadly divided into \textit{matching pursuit} based algorithms, such as OMP \citep{blumensath2008gradient}, and \textit{thresholding} based methods, including iterative hard thresholding \citep{blumensath2009iterative}. The popular OMP algorithm proceeds by iteratively finding the dictionary column with the highest correlation to the signal residual, computed by subtracting the contribution of a partial estimate of $\mathbf{x}$ from $\yy$. The coefficients over the selected support set are then chosen so as to minimize the residual error. A typical halting criterion compares the residual to a predefined threshold.
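For reference, a minimal (unoptimized) NumPy sketch of OMP, assuming roughly unit-norm dictionary columns:
\begin{verbatim}
import numpy as np

def omp(D, y, eps=1e-6, max_atoms=None):
    # Greedily add the column most correlated with the
    # residual, then refit the coefficients on the selected
    # support by least squares.
    m, d = D.shape
    max_atoms = max_atoms or m
    support = []
    x = np.zeros(d)
    residual = y.astype(float).copy()
    while np.linalg.norm(residual) > eps and len(support) < max_atoms:
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:
            break
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x
\end{verbatim}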
\subsection{Recovery Guarantees}
Performance guarantees for both $\ell_1$-relaxation and greedy methods have been provided in the CS literature. In noiseless settings, under the conditions of Theorem~\ref{th:cs_unique}, the unique solution of \eqref{eq:cs_l0} is also the unique solution of \eqref{eq:cs_l1} \citep[Theorem 4.5]{elad2010book}. Under the same conditions, OMP with halting criterion threshold $\epsilon = 0$ is guaranteed to find the exact solution of \eqref{eq:cs_l0} \citep[Theorem 4.3]{elad2010book}. More practical results are given for the case where the measurements are contaminated by noise \citep{donoho2006stable,elad2010book}.
\subsection{Integer K-OMP (IK-OMP)}
\begin{algorithm}[H]
\caption{IK-OMP}\label{algo:beam_omp}
\begin{algorithmic}
\State \textbf{Input:} Measurement vector $\yy \in \reals^m$, dictionary $\mathbf{D} \in \reals^{m \times d}$, maximal number of words $L$ and beam width $K$
\State Initial solutions $\mathbf{X}^0 = [\mathbf{0}_d, \dots, \mathbf{0}_d]$
\For{$l = 1, L$}
\For{$i \in [1, \dots, K]$}
\State \textbf{Extend:} Append $\mathbf{X}_i^{l-1}\! +\! \textbf{1}_j,\! \forall j\!\! \in\! [1, ..., d]$ to $\mathbf{X}^{l-1}$
\EndFor
\State \textbf{Trim:} $\mathbf{X}^l= \text{K-}\argmin_{\mathbf{X}_i \in \mathbf{X}^{l-1}} ||\yy - \mathbf{D}\mathbf{X}_i||_2^2$
\EndFor
\State \textbf{return} $\mathbf{X}^L$
\end{algorithmic}
\end{algorithm}
\textbf{An Integer Prior:} While CS is typically concerned with the reconstruction of a sparse real-valued signal, in our BoW linear representation, the signal fulfills a secondary structure constraint besides sparsity. Its nonzero entries stem from a finite, or discrete, alphabet. Such prior information on the original signal appears in many communication scenarios \citep{candes2005error, axell2012spectrum}, where the transmitted data originates from a finite set.
\textbf{Beam Search OMP:} As OMP iteratively adds atoms to the recovered support, the choice of a new element in an iteration is blind to its effect on future iterations. Therefore, any mistakes, particularly in early iterations, may lead to large recovery errors. To mitigate this phenomenon, several methods have been proposed to amend the OMP algorithm.
To decrease the greediness of the greedy addition algorithm (which acts similarly to OMP), \citet{white2016generating} use a substitution-based method, also referred to as swapping \citep{andrle2006swapping} in the CS literature. Unfortunately, the computational complexity of this substitution strategy makes it impractical. \citet{elad2009plurality} combine several recovered sparse representations, to improve denoising, by randomizing the OMP algorithm. However, in our case, the sum of embeddings $\mathbf{a}_\text{SoE}$ represents a true sparse BoW vector $\mathbf{a}_\text{BoW}$, so combining several recovered vectors should not lead to the correct solution.
\textbf{IK-OMP:} We combine the integer-prior with the beam search strategy, and propose the IK-OMP (Algorithm~\ref{algo:beam_omp}). In the algorithm description, $\mathbf{1}_j$ is the vector with a single nonzero element at index $j$ and $\text{K-}\argmin$ denotes the $K$ elements with smallest value for the following expression. In this work, the selected BoW is the candidate which minimizes the reconstruction score.
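A direct (unoptimized) NumPy transcription of Algorithm~\ref{algo:beam_omp}, with a small de-duplication step added for clarity:
\begin{verbatim}
import numpy as np

def ik_omp(D, y, L, K):
    # Beam search over integer BoW vectors: at each of the L
    # steps, extend every candidate by one dictionary word and
    # keep the K candidates with smallest residual norm.
    m, d = D.shape
    beam = [np.zeros(d, dtype=int)]
    for _ in range(L):
        seen, candidates = set(), []
        for x in beam:
            for j in range(d):
                x_new = x.copy()
                x_new[j] += 1
                key = tuple(x_new)
                if key not in seen:
                    seen.add(key)
                    candidates.append(x_new)
        candidates.sort(key=lambda x: np.linalg.norm(y - D @ x))
        beam = candidates[:K]
    return beam   # K candidate BoW vectors
\end{verbatim}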
\begin{figure*}[t]
\centering
Troll Quest \hspace{7cm} Open Zork \\
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/troll_compressed_sensing}
\end{subfigure}%
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/troll_compressed_sensing_reward}
\end{subfigure}%
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/full_compressed_sensing}
\end{subfigure}%
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/full_compressed_sensing_reward}
\end{subfigure}%
\\
\hspace{0.3cm}
\begin{subfigure}{.43\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/troll_cs_legend}
\end{subfigure}%
$\enspace\enspace\enspace\enspace$
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/all_cs_legend}
\end{subfigure}%
\caption{\textbf{Compressed Sensing}: Comparison of the accuracy, and accumulated reward, of the various reconstruction algorithms on the `Troll Quest' and in `Open Zork'.
The SnR denotes the ratio between the norm of the original signal $\mathbf{a}_\text{SoE}$ and that of the added noise.}
\label{fig:full compressed sensing}
\vspace{-0.3cm}
\end{figure*}
\section{Imitation Learning}
\label{sec:imitationlearning}
In this section, we present our Sparse-IL\ algorithm and provide in-depth details regarding the design and implementation of each of its underlying components. We also detail the experiments of executing Sparse-IL\ on the entire game of Zork.
\textbf{Sparse Imitation Learning:} Our Sparse-IL\ architecture is composed of two major components: an Encoder $\text{E}_\theta (\state)$ and a Retrieval Mechanism (as seen in Figure~\ref{fig:algorithm}). Each component has a distinct role, and combining them enables a computationally efficient approach.
\textbf{The Encoder (E)} is a neural network trained to output the optimal action representation at each state. As we consider the task of imitation learning, this is performed by minimizing the $\ell_2$ loss between the Encoder's output $E_{\theta}(\state)$ and the embedding of the action provided by the expert $\mathbf{a}_\text{SoE}$.
\begin{figure*}[t]
\hspace{-0.5cm}
\parbox[t]{0.27\textwidth}{\null
\centering
\vspace{0.4cm}
\includegraphics[width=\linewidth]{figures/deep_cs_heatmap_full}
\caption{Difference in reconstruction accuracy between \textbf{Sparse-IL and DeepCS-2}. A higher value represents a higher reconstruction accuracy for Sparse-IL. DeepCS-2 fails when presented with several variants of the correct actions (synonyms).}
\label{fig:deepcs vs ik-omp accuracy comparison}
}
\hspace*{0.49cm}
\parbox[t]{0.73\textwidth}{\null
\centering
Open Zork\\
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/full_training_process}
\end{subfigure}%
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/full_training_snr}
\end{subfigure}%
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/full_training_random_actions}
\end{subfigure}%
\\
\begin{subfigure}{.6\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/all_imitation_legend}
\end{subfigure}%
\caption{\textbf{Sparse Imitation Learning:} Comparison of the accuracy of each reconstruction algorithm on an agent trained using imitation learning to solve the entire game. In the graph on the left, IK-OMP with $K=20$ and $K=112$ result in identical performance.}
\label{fig:openzork imitation learning}
}
\end{figure*}
In all of the learning experiments, the architecture we use is a convolutional neural network (CNN) suited to NLP tasks \citep{kim2014convolutional}. Due to the structure of the game, there exist long-term dependencies. Frame-stacking, a common approach in games \citep{mnih2015human}, tackles this issue by providing the network with the $N$ previous states. For the `Open Zork' task, we stack the previous 12 states, whereas for the `Troll Quest' we only provide the current frame.
\textbf{Retrieval Mechanism (R):} The output of the Encoder, $\text{E}_{\theta} (\state)$, is fed into a CS algorithm, such as IK-OMP. IK-OMP produces $K$ candidate actions, ${\mathbf{a}_\text{BoW}}_1, ..., {\mathbf{a}_\text{BoW}}_K$. These actions are fed into a fitness function which ranks them based on the reconstruction score $||\text{E}_{\theta} (\state) - \mathbf{D}{\mathbf{a}_\text{BoW}}_i||_2^2 , \enspace \forall i = 1, ..., K$ (see Section~\ref{sec:compressed_sensing}), and returns the optimal candidate. Other CS approaches, e.g., OMP and FISTA, return a single candidate action.
\section{Experiments}\label{sec: experiments}
In this section, we present our experimental results. We begin by analyzing our proposed CS method, namely IK-OMP, in \cref{sec: cs experiments}, and its ability to reconstruct the action when provided the sum of word embeddings $\mathbf{a}_\text{SoE}$. After evaluating our proposed method in a clean and analyzable scenario, we evaluate the entire system `Sparse Imitation Learning'\ on the full game of Zork (\cref{sec: il experiments}).
\subsection{Compressed Sensing}\label{sec: cs experiments}
In this section, we focus on comparing several CS approaches. To do so, we follow the set of commands, extracted from a walk-through of the game, required to solve Zork1, both in the `Troll Quest' and `Open Zork' domains. In each state $\state$, we take the ground-truth action $\mathbf{a}_\text{env} (\state)$, calculate the sum of word embeddings $\mathbf{a}_\text{SoE} (\state)$, add noise and test the ability of various CS methods to reconstruct $\mathbf{a}_\text{env} (\state)$. We compare the \emph{run-time} (Table~\ref{table:sparse comparison}), the \emph{reconstruction accuracy} (number of actions reconstructed correctly) and the \emph{reward gained} in the presence of noise (Figure~\ref{fig:full compressed sensing}). Specifically, the measured action is $\mathbf{a}_\text{mes} (\state) = \mathbf{a}_\text{SoE} (\state) + \epsilon$, where $\epsilon \sim N(0,1)$ is normalized based on the signal-to-noise ratio (SnR).
\begin{table}[H]
\caption{Runtime comparison.}\label{table:sparse comparison}
\centering
\begin{tabular}{|l|l|}
\hline
\\[-1em]
\textbf{Algorithm} & \textbf{Runtime} \\
\hline
\\[-1em]
OMP & 0.008 \\
\hline
\\[-1em]
IOMP, K=1 & 0.008 \\
\hline
\\[-1em]
IK-OMP, K=3 & 0.021 \\
\hline
\\[-1em]
IK-OMP, K=20 & 0.166 \\
\hline
\\[-1em]
IK-OMP, K=112 & 1.116 \\
\hline
\\[-1em]
FISTA &0.881 \\
\hline
\\[-1em]
DeepCS &0.347 \\
\hline
\end{tabular}
\end{table}
We compare four CS methods: the FISTA implementation of BP, OMP, IK-OMP (Algorithm~\ref{algo:beam_omp}) and a deep learning variant, described below, which we call DeepCS. The dictionary is composed of $d=112$ possible words which can be used in the game. The dimension of the embedding is $m=50$ (standard GloVe embeddings available online) and the sentence length is limited to at most $4$ words. This yields a total of $\approx$ 10 million actions, from which the agent must choose one at each step. It is important to note that while \emph{accuracy} and \emph{reward} might seem similar, an inaccurate reconstruction at an early stage results in an immediate failure, even when the accuracy over the entire trajectory seems high.
Clearly, as seen from Figure~\ref{fig:full compressed sensing}, OMP fails to reconstruct the true BoW vectors $\mathbf{a}_\text{BoW}$, even in the noiseless scenario. Indeed, the mutual-coherence (Definition~\ref{def:coherence}) is $\mu=0.97$ and from Theorem~\ref{th:cs_unique}, there is no guarantee that OMP can reconstruct a sparse vector for any sparsity $p>0$. However, our suggested approach, IK-OMP, is capable of correctly reconstructing the original action $\mathbf{a}_\text{BoW}$, even in the presence of relatively large noise. This gives evidence that the integer prior, in particular, and the beam search strategy significantly improve the sparse recovery performance.
\textbf{Deep Compressed Sensing:} Besides traditional CS methods, it is natural to test the ability of deep learning methods to perform this task. Here, we train a neural network to predict the BoW vector $\mathbf{a}_\text{BoW}$ which composes the continuous embedding vector. Our network is a multilayer perceptron (MLP) composed of two hidden layers with 100 neurons each. We use a sigmoid activation function to bound the outputs to $[0,1]$ and train the network using a binary cross-entropy loss. In these experiments, we denote by $T$ the threshold above which an output is selected; e.g., when $T=0.9$, all words which receive a weight above $0.9$ are selected.
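A PyTorch sketch of such a network follows; the hidden-layer activation is our own assumption, as only the sigmoid output is specified above:
\begin{verbatim}
import torch.nn as nn

# m = 50 (embedding dimension), d = 112 (dictionary size).
deep_cs = nn.Sequential(
    nn.Linear(50, 100), nn.ReLU(),      # hidden activation assumed
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 112), nn.Sigmoid(),  # per-word outputs in [0, 1]
)
loss_fn = nn.BCELoss()  # binary cross-entropy over the BoW vector
\end{verbatim}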
Our results (Figure~\ref{fig:compressed sensing deep cs openzork}) show that the DeepCS approach works when no noise is present; however, once noise is added, it is clear that DeepCS performs poorly compared to classic CS methods such as IK-OMP. We observed similar results in the Troll domain. Moreover, as DeepCS requires training a new model for each domain, it is data-specific and does not transfer easily, which is not the case with traditional CS methods.
\begin{figure}[H]
\centering
Open Zork \\
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/deep_cs_full_accuracy}
\end{subfigure}%
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/deep_cs_full_reward}
\end{subfigure}%
\\
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/deep_cs_full_legend}
\end{subfigure}%
\caption{\textbf{Compressed Sensing - DeepCS:} Comparison of the accuracy, and accumulated reward, of the DeepCS baselines, compared to the IK-OMP approach.}
\label{fig:compressed sensing deep cs openzork}
\end{figure}
\subsection{Imitation Learning}\label{sec: il experiments}
In an imitation learning setup, we are given a data set of state-action pairs $(\state, \mathbf{a}_\text{env})$, provided by an expert; the goal is to learn a policy that achieves the best performance possible. We achieve this by training the embedding network $\text{E}_\theta (\state)$ to imitate the demonstrated actions in the embedding space, namely $\mathbf{a}_\text{SoE}$, at each state $\state$, using the MSE between the predicted actions and those demonstrated. We consider three setups: (1) Perfect demonstrations, where we test errors due to architecture capacity and function approximation, (2) Gaussian noise, $\mathbf{a}_\text{mes} (\state) = \mathbf{a}_\text{SoE} (\state) + \epsilon$ (See Section~\ref{sec: cs experiments}), and (3) discrete-action noise, in which a random incorrect action is demonstrated with probability (w.p.) $p$.
This experiment can be seen as learning from demonstrations provided by an ensemble of sub-optimal experts.
Our results (Figure~\ref{fig:openzork imitation learning}) show that by combining CS with imitation learning techniques, we are capable of solving the entire game of Zork1, even in the presence of discrete-action noise. In all our experiments, IK-OMP outperforms the various baselines, including the end-to-end approach DeepCS-2 which is trained to predict the BoW embedding $\mathbf{a}_\text{BoW}$ directly from the state $\state$.
\textbf{Training:} Analyzing the training graph presents an interesting picture. It shows that during the training process, the output of the Encoder can be seen as a noisy estimate of $\mathbf{a}_\text{SoE}$. As training progresses, the effective noise decreases (i.e., the SnR increases), as seen by the increase in reconstruction performance.
\textbf{Generalization:} In Figure~\ref{fig:deepcs vs ik-omp accuracy comparison}, we present the generalization capabilities which our method Sparse-IL\ enjoys, due to the use of pre-trained unsupervised word embeddings. The heatmap shows two forms of noise. The first, as before, is the probability of receiving a bad demonstration, an incorrect action. The second, synonym probability, is the probability of being presented with a correct action, yet composed of different words, e.g., drop, throw and discard result in an identical action in the environment and have a similar meaning. These results clearly show that Sparse-IL outperforms DeepCS-2 in nearly all scenarios, highlighting the generalization improvement inherent in the embeddings.
\textbf{The benefit of meaningful embeddings:} In our approach, the Encoder $\text{E}_\theta$ is trained to predict the sum-of-embeddings $\mathbf{a}_\text{SoE}$. However, it can also be trained to directly predict the BoW vector $\mathbf{a}_\text{BoW}$. While this approach may work, it lacks the generalization ability which is apparent in embeddings such as GloVe, in which similar words receive similar embedding vectors.
Consider a scenario in which there are 4 optimal actions (e.g., `go north', `walk north', `run north' and `move north') and 1 sub-optimal action (e.g., `climb tree'). Each optimal action is demonstrated with probability $0.15$, and the sub-optimal action with probability $0.4$. In this example, the expected BoW representation would include `north' w.p. $0.6$, `climb' and `tree' w.p. $0.4$, and each of the remaining words w.p. $0.15$. On the other hand, since `go', `walk', `run' and `move' have similar meanings and in turn similar embeddings, the expected $\mathbf{a}_\text{SoE}$ is much closer to the optimal actions than to the sub-optimal one, and thus an imitation agent is less likely to make a mistake.
\section{Conclusion}\label{sec:conclusion}
We have presented a computationally efficient algorithm called Sparse Imitation Learning\ (Sparse-IL) that combines CS with imitation learning to solve text-based games with combinatorial action spaces. We proposed a CS algorithm variant of OMP, which we have called Integer K-OMP (IK-OMP), and demonstrated that it can deconstruct a sum of word embeddings into the individual BoW that make up the embedding, even in the presence of significant noise. In addition, IK-OMP is significantly more computationally efficient than the baseline CS techniques. When combining IK-OMP with imitation learning, our agent is able to solve both Troll Quest and the entire game of Zork1 for the first time; Zork1 contains a combinatorial action space of 10 million actions. Future work includes replacing the fitness function with a critic in order to further improve the learned policy, as well as testing the capabilities of the critic agent in cross-domain tasks.
\bibliographystyle{plainnat}
\section{Introduction}
While we might chuckle at a story like this, there is more here than just a joke. If we look carefully at what happened, we notice that the bartender of our story stumbled on his words \cite{aat_doug} while thinking about how to start the joke. Such cognitive-behavior breakdowns often occur when there are many actions to choose from, as described in the Action-Assembly Theory (AAT, \citet{aat}). According to Greene, behavior is described by two essential processes: representation and processing. Representation refers to the way information is coded and stored in the mind, whereas processing refers to the mental operations performed to retrieve this information \cite{greene2008action}. Having good representations of information and an efficient processing procedure allows us to quickly exploit highly rewarding nuances of an environment upon first discovery.
However, learning good representations and developing efficient action selection procedures is a major computational challenge for artificially intelligent agents that aim to solve sequential decision making problems. This is especially problematic when dealing with combinatorial action spaces, since the agent needs to explore all possible action interactions. Combinatorial action spaces are prevalent in many domains including natural language generation \citep{ranzato2015sequence,bahdanau2016actor}, finance \citep{moody2001learning,hull2003options,heaton2017deep}, industry 4.0 (e.g., IoT) \citep{preuveneers2017intelligent}, electric grids \citep{wen2015optimal} and more. For example, in text-based games \citep{cote2018textworld}, given a dictionary with $d$ entries (words), the number of possible sentences of length $l$ is $d^l$ (i.e., the size of the combinatorial action space).
Previous approaches have tackled the problem of large action spaces with varying success \citep{dulac2015deep,he2016deep,zahavy2018learn}; however, none has specifically considered combinatorial action spaces, and thus computational efficiency remains an open problem: (i) \citet{dulac2015deep} extend the Deep Deterministic Policy Gradients (DDPG - \citep{lillicrap2015continuous}) architecture to cope with discrete actions, yet their nearest-neighbour approach requires comparing with the entire action set - $\bigO(|A|)$ comparisons; (ii) \citet{he2016deep} present the Deep Reinforcement Relevance Network (DRRN) architecture, in which each action must be evaluated by the network, resulting in a total of $\bigO(|A|)$ forward passes; (iii) \citet{zahavy2018learn} propose the Action-Elimination Deep Q-Network (AE-DQN), which considers a scenario in which there are many \textit{invalid} actions; it utilizes an elimination signal provided by the environment to learn which actions may be \textit{eliminated}, thus reducing the effective number of actions. Since this is a Q-learning approach, the output layer contains $|A|$ neurons, and the final projection layer performs $\bigO(|A|)$ operations.
Clearly, these approaches do not scale to combinatorial action spaces due to computational inefficiency. In particular, all previous works ignore the underlying action structure: each action is a composition of basic building blocks.
In this work we propose the first computationally efficient reinforcement learning (RL) algorithm for solving problems with combinatorial action spaces. Motivated by AAT, our algorithm learns a model of the world that allows it to efficiently search for optimal actions. Specifically, our agent (Figure \ref{fig:algorithm}) is composed of:
\textbf{(1) Encoder} - We train an encoder which generates continuous actions (a dense representation of the action).
\textbf{(2) Representation} - We represent words using embeddings, previously trained on a large corpus of text. \textbf{(3) Retrieval Mechanism} - Recently, \citet{arora2018compressed} demonstrated sparse reconstruction \cite{elad2010book} of documents from the sum of common embedding schemes such as Word2vec \citep{mikolov2013efficient} and GloVe \citep{pennington2014glove}. Given a continuous vector $\mathbf{a}_\text{SoE}$, a sum of word embeddings, the goal of sparse reconstruction is to recover the Bag-of-Words (BoW) vector (a discrete vector representing the indices of the words from the dictionary which compose $\mathbf{a}_\text{SoE}$). Similarly, we propose to reconstruct discrete actions, composed of words, from the continuous output of the encoder. The benefit of our approach is that there exist \textit{computationally efficient algorithms} for sparse reconstruction which, under some assumptions on the structure of the dictionary (e.g., coherence - Definition \ref{def:coherence}), ensure perfect recovery of the sparse representation.
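To make the retrieval mechanism concrete, the following is a minimal numpy sketch of standard (textbook) OMP, the building block underlying the K-OMP variant introduced later; the unit-norm dictionary and the stopping tolerance are illustrative assumptions, and this is not our exact variant.
\begin{verbatim}
import numpy as np

def omp(D, y, k):
    # D: (dim, vocab) dictionary whose columns are unit-norm
    #    word embeddings; y: observed sum of embeddings a_SoE;
    #    k: maximum number of words to recover
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        # greedily pick the atom most correlated with the residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < 1e-8:
            break
    return support, coef  # support indices give the recovered BoW
\end{verbatim}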
Figure \ref{fig:algorithm} presents an overview of our architecture, called Encoder EVALuator SENSing nEtwork (EVAL-SENSE). This architecture comprises three distinct components: an Encoder and an Evaluator - collectively the \textit{World Model} (Section \ref{ref:encoderevaluator}) - and a Sparse Reconstruction algorithm - the \textit{retrieval mechanism} (Section \ref{sec:k-omp}). As seen in the figure, the encoder receives a state (in the NLP setting, this comprises a sentence detailing the current state of the game) and computes a sum of word embeddings. Using our novel sparse reconstruction algorithm K-OMP (see Section \ref{sec:k-omp}), we provide K candidate Bag-of-Words (BoW) vectors which are fed into the evaluator. The evaluator computes an expected value for each action candidate, and the action with the highest value is fed into a language model to yield a sentence that can be parsed by the game.
\footnotetext[1]{Global optimality can be ensured, however the tradeoff requires a cost of $\bigO(d^l)$ during the training phase.}
\footnotetext[2]{The complexity is the sub-set of valid actions as deemed by the elimination network. In the worst case, all actions are valid and the complexity is $\bigO(d^l)$.}
\section{Introduction}
\label{sec:introduction}
\input{introduction}
\section{Problem Statement}
\label{sec:problem_def}
\input{problem_def}
\section{Methodology}
\label{sec:methodology}
\input{methodology}
\section{Implementation Details}
\label{sec:implementation}
\input{implementation_details}
\section{Results}
\label{sec:results}
\input{results}
\section{Related Work}
\label{sec:related_work}
\input{related_work}
\section{Conclusions and Future Work}
\label{sec:conclusions}
\input{conclusions}
\bibliographystyle{ACM-Reference-Format}
\subsection{Topic Modeling for \\Candidate Space Segmentation}
\label{subsec:lda_clus}
Each candidate that could potentially be a fit for the position to be filled can be taken as a bag of skills, seniority, previous companies, previous positions, etc. (i.e. \emph{candidate properties}). We propose to utilize topic modeling to separate the potential candidates into candidate topics per position, where each topic is a distribution over the candidate properties and provides a means to do soft clustering (i.e. with overlapping candidates) to segment the candidate space. We call these soft clusters of candidates for each position \emph{intent clusters}, i.e. clusters of candidates for a specific intent that the user may have for the current session. To generate the intent clusters via topic modeling, we apply Latent Dirichlet Allocation \cite{blei_2003_lda} (LDA) in the current work.
\subsubsection{Latent Dirichlet Allocation for Intent Clusters}
\label{subsubsec:lda_detailed}
Originally applied on text modeling, LDA assumes the following generative process for each document \textbf{d} in a corpus \textbf{D}:
\begin{compact_numbered_enum}
\item Choose the number of words $\textit{N} \sim Poisson(\xi)$.
\item Choose the multinomial topic distribution for d, i.e. $\theta \sim Dir(\alpha)$.
\item For each word $w_n$ within $d$:
\begin{compact_numbered_enum}
\item Choose a topic $z_n \sim Multinomial(\theta)$, and,
\item Choose a word $w_n$ from $p(w_n | z_n, \beta)$, a multinomial probability conditioned on the topic $z_n$.
\end{compact_numbered_enum}
\end{compact_numbered_enum}
Therefore, each document $d$ is a sequence of N words denoted by $w_{1 \rightarrow N}$, and the corpus $D$ is defined as a collection of $\textit{M}$ documents $d_{1 \rightarrow M}$, with a probability of:
\begin{equation}
\textrm{\scriptsize{$p(D | \alpha, \beta) = \prod_{d=1}^M \int_{\theta_d} p(\theta_d | \alpha) \left( \prod_{n=1}^{N_d}\sum_{z_{dn}} p(z_{dn} | \theta_d)p(w_{dn} | z_{dn}, \beta)\right)\mathrm{d}\theta_d ~.$}}
\label{eq:lda_prob}
\end{equation}
LDA is a probabilistic graphical model with a three-level representation as given in Figure~\ref{fig:lda_plate}. The outer plate represents documents, and the inner plate represents the repeated choice of topics and words within a document. As indicated in Eq.~\ref{eq:lda_prob}, $\alpha$ and $\beta$ are corpus-level parameters sampled once in the process of corpus generation. The variables $\theta_{d}$ are document-level variables, sampled once per document. Finally, the variables $z_{dn}$ and $w_{dn}$ are word-level variables and are sampled once for each word in each document. Exact inference of LDA parameters is intractable \cite{blei_2003_lda}. Instead, variational inference \cite{blei_2003_lda} (which we also utilize for this work) and Markov chain Monte Carlo \cite{Jordan1999} methods are often used for approximate inference of LDA parameters.
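For illustration, a minimal sketch of this inference step using gensim's online variational LDA; the candidate token lists shown are hypothetical, not LinkedIn data.
\begin{verbatim}
from gensim import corpora
from gensim.models import LdaModel

# each "document" is one candidate's bag of property tokens
profiles = [["java", "spring", "ajax", "senior"],
            ["php", "css", "html", "junior"],
            ["python", "linux", "git", "senior"]]

dictionary = corpora.Dictionary(profiles)
corpus = [dictionary.doc2bow(p) for p in profiles]

# variational Bayes inference of the topics (intent clusters)
lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=2, alpha="auto", passes=10)
print(lda.show_topics())  # each topic: distribution over properties
\end{verbatim}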
\begin{figure}[htb]
\centering
\resizebox{2.2in}{!}{
\begin{tikzpicture}
\draw(0.8, 2.2) circle(0.4);
\node at (0.8, 2.2) {$\beta$};
\draw (-2.5, 1.5) rectangle (4.2, -0.8);
\node at (3.9, -0.5) {$M$};
\draw (-0.2, 1.2) rectangle (3.5, -0.5);
\node at (3.2, -0.2) {$N$};
\draw (-3.2, 0.5) circle (0.4);
\draw (-1.2, 0.5) circle (0.4);
\draw (0.8, 0.5) circle (0.4);
\draw (2.8, 0.5) circle (0.4);
\fill[color=gray] (2.8, 0.5) circle (0.4);
\draw[->](-2.8,0.5)--(-1.6,0.5);
\draw[->](-0.8,0.5)--(0.4,0.5);
\draw[->](1.2,0.5)--(2.4,0.5);
\draw[->](0.95,1.85)--(2.4,0.6);
\node at (-3.2, 0.5) {$\alpha$};
\node at (-1.2, 0.5) {$\theta$};
\node at (0.8, 0.5) {$z$};
\node at (2.8, 0.5) {$w$};
\end{tikzpicture}
}
\vspace{-10pt}
\caption{Graphical model representation of LDA.}\label{fig:lda_plate}
\end{figure}
In our application, each potential candidate is a document, the words within the document are the skills and seniority tokens extracted from that candidate's profile, and the topics generated are the intent clusters, which represent a distribution over skills, seniority, etc., i.e. \emph{candidate properties}. Table~\ref{table:example_clusters} presents an example set of clusters (topics, with the probabilities removed for presentational purposes) with the skills for the position \emph{Software Engineer}, generated from the candidate corpus at LinkedIn (utilizing those members that are Software Engineers as the documents). The last column is our manual interpretation of each cluster.
\begin{table}
\centering
\smaller
\caption{Example Clusters for Software Engineering}
\vspace{-4pt}
\begin{tabular}{|c|c|c|} \hline
Id & Skills & Interpretation \\ \hline
1 & ajax, java, spring framework, & J2EE \\
& android, javascript, xml & Developer \\ \hline
2 & php, javascript, html5, & Front-end \\
& ajax, css, html & Developer \\ \hline
3 & java, matlab, git, & Unix/Linux \\
& python, linux, unix & Developer \\ \hline
4 & android development, sql, & Android \\
& mysql, linux, php, html & Developer \\ \hline
\end{tabular}
\label{table:example_clusters}
\end{table}
Special care often needs to be given to the choice of the number of topics, taking into account how the selected value affects the quality of separation. As an example, Figure~\ref{fig:aic_swe} shows the \emph{Akaike Information Criterion} (AIC)\footnote{~ AIC estimates the loss of information in representing the process that originally generated the documents when we set a model variable (in this case the number of topics); hence the lower this metric, the better.} \cite{akaike_1974} results on the effect of changing the number of clusters for the position \emph{Software Engineer}. From the figure, the rate of drop for AIC seems to stall at around five clusters, which is therefore a potentially good number of intent clusters for segmenting software engineers (in our experiments, this value varies between 5 and 10 for other positions); a sketch of this computation follows Figure~\ref{fig:aic_swe}.
\begin{figure}[ht]
\centering
\includegraphics[width=2.1in]{./AIC.pdf}
\vspace{-10pt}
\caption{AIC vs. Number of Clusters for Software Engineer}
\label{fig:aic_swe}
\end{figure}
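As noted above, the following sketch outlines the AIC computation over candidate topic counts; gensim's variational bound stands in for the log-likelihood, and the number of free parameters is approximated as the number of topics times the vocabulary size - both common rough approximations rather than our exact procedure.
\begin{verbatim}
from gensim.models import LdaModel

def aic_curve(corpus, dictionary, topic_range):
    scores = {}
    for k in topic_range:
        lda = LdaModel(corpus=corpus, id2word=dictionary,
                       num_topics=k, passes=5)
        log_l = lda.bound(corpus)       # variational bound on log p(D)
        n_params = k * len(dictionary)  # rough parameter count
        scores[k] = 2 * n_params - 2 * log_l
    return scores  # pick k where the drop in AIC stalls
\end{verbatim}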
\subsubsection{Recommendation via Intent Clusters}
\label{subsubsec:utilizing_intent_clusters}
While the intent clusters for each position/title are generated and stored offline as a distribution over candidate properties, we need a way to ensure a ranked flow of candidates from each cluster at recommendation time. We propose to utilize each intent cluster\footnote{At the time of recommending the next candidate, based on the title that the user entered as a search query for the current session, we pick up the stored set of intent clusters for that title and serve candidates from them. We present the details of how we choose the next intent cluster to serve a candidate from at each step in the next section.}, and therefore the candidate properties that it represents a distribution over, as a meta-query to hit the in-house search engine at LinkedIn, called \emph{Galene} \cite{galene_engine}. Galene allows for arbitrary ranking functions to be applied to the set of candidates that match a specific query, and we utilize a ranking function similar to the one given in \cite{hathuc2015expertisesearch}, which is trained offline.
Even after applying a ranking function which is trained on a general set of users utilizing offline data, it is not guaranteed that we will take into account the distribution over candidate properties for each intent cluster which is the output of our topic modeling. Hence, we further personalize the ranking of the candidates returned by our search engine with a linear combination of each candidate and the intent cluster's distribution. Therefore, each candidate $c_m$ can be scored by each intent cluster $t_n$ for further ranking as:
\begin{equation}
\label{eq:linear_score}
matchScore(c_m | t_n) = \vec{w}_{t_n} \bullet \vec{w}_{c_m} = \sum_i w_{t_n,i} \cdot w_{c_m,i} ~,
\end{equation}
where $\vec{w}_{c_m}$ is a binary vector indicating whether the candidate $c_m$ has each specific property (skill, seniority, position, etc.), and $\vec{w}_{t_n}$ is a vector representing the distribution of the intent cluster $t_n$ over possible candidate properties. This formulation is equivalent to taking a weighted sum of the intersection (of properties) between the candidate and the intent cluster; therefore it measures the similarity (hence the name \emph{matchScore}) between a candidate returned by an intent cluster and the cluster itself (the higher the similarity, the higher the rank). After the offline ranking score \cite{hathuc2015expertisesearch} (which one may call \emph{offlineScore}) and \emph{matchScore} are calculated per candidate ($c_m$), our final ranking score takes a convex combination of the two:
\begin{equation}
\label{eq:convex_score}
score(c_m | t_n) = \alpha ~ matchScore(c_m | t_n) ~ + ~ (1 - \alpha) ~ \textrm{\emph{offlineScore}}(c_m | t_n) ~ ,
\end{equation}
and return the candidates in the descending order of their scores. We evaluate the choice of $\alpha$ and how it affects the recommendation quality in \S~\ref{sec:results}.
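A minimal numpy sketch of this scoring step follows; the weight vectors are illustrative, and the offline score is assumed to be precomputed per candidate.
\begin{verbatim}
import numpy as np

def match_score(w_candidate, w_cluster):
    # w_candidate: binary property-indicator vector of the candidate
    # w_cluster:   intent cluster weights over the same properties
    return float(np.dot(w_cluster, w_candidate))  # eq:linear_score

def final_score(w_candidate, w_cluster, offline_score, alpha):
    # convex combination of the online match score and the
    # offline-trained ranking score (eq:convex_score)
    return (alpha * match_score(w_candidate, w_cluster)
            + (1.0 - alpha) * offline_score)
\end{verbatim}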
Since any candidate may have properties that span over (intersect with) different intent clusters, it is possible that the same candidate can appear in the ranking of multiple intent clusters. However, the similarity score helps with getting the candidates with the highest match to the top, hence improving the distinctness of the intent clusters especially in the earlier ranks.
\subsection{Multi-Armed Bandits for \\Intent Cluster Selection}
\label{subsec:mab_clus}
The next step in our recommendation scheme is understanding the user's intent in the current session, i.e. which intent cluster(s) the user is most inclined towards. The intent clusters do help us in reducing the space of candidates to recommend; on the other hand, choosing the best intent cluster is an \emph{algorithm selection} problem \cite{kotthoff_2014}, and is also closely tied to the \emph{meta-learning} concept in Machine Learning \cite{aha_1992, miles_2008}.
In this work, we utilize the \emph{Multi-Armed Bandits} paradigm to select the best intent or set of intents for the current user. We assume that each intent cluster is an arm in the multi-armed bandit setting, and we aim to estimate the arm that returns the best overall candidates (i.e. the highest expected \emph{quality score})\footnote{~ While the arms, in the general case, are assumed to be independent of each other, we ignore this constraint within our work at this point. Obviously, the arms in our case are intent clusters which have overlapping candidates; however, as explained in \S~\ref{subsubsec:utilizing_intent_clusters}, we strive to increase the number of distinct elements in the earlier ranks. Furthermore, once the terms for the arms are selected via topic modeling, the ranking within, and serving a candidate from, each intent cluster is independent of one another.}. Multi-armed bandits are commonly utilized to deal with the \emph{explore-exploit} dilemma, and the framework is inspired by a setting where we have a number of slot machines (arms) in a casino which can be pulled one-by-one at each time step. The solution deals with how (how many times, and in which order) to play these arms in order to get the best rewards. In the traditional version of the problem, the user initially does not know which arm is the most beneficial. If we assume that each arm returns its rewards from a static distribution, the user needs a certain amount of time (via trial and error) to learn (\emph{explore}) these distributions. Once the distributions for the arms are learned, it is optimal for the user to always play the arm with the highest mean reward (\emph{exploit}) to get the highest utility. While the ultimate objective for a multi-armed bandit setting is to maximize the total rewards over a period of time via selecting the most beneficial arms, an arm selection strategy is often evaluated by the \emph{regret} metric. Regret is defined as the expected utility difference between a specific arm selection strategy and the strategy which always picks the best arm, and can be empirically calculated for a policy $p$ as:
\[
\textrm{regret}(p) = \frac{\sum_{t=1}^{T} \left( r^*_t - r^p_t \right)}{T} ~,
\]
where $r^*_t$ is the reward returned due to the decision of the optimal policy (i.e. the reward returned by the arm selected by the optimal policy) at time $t$, and $r^p_t$ is the reward returned due to the decision of policy $p$. In terms of expected rewards, the above formulation is equivalent to $\mathbb{E}[r^*] - \mathbb{E}[r^p]$.
Naturally, in selecting the set of best intent clusters (arms) to recommend candidates from (since candidates are recommended one-by-one, we choose a next intent/arm at each recommendation step, and get feedback from the user on the candidate served by the chosen arm), we also aim to minimize the regret, i.e. we want to pinpoint the most appropriate intent clusters as soon as possible within the session so that we can provide the most relevant results earlier in the ranks (this helps in improving Eq.~\ref{eq:p_opt} also, which is our main goal). To solve the regret minimization problem for multi-armed bandits, many algorithms have been proposed, where the most commonly used ones are based on \emph{Upper Confidence Bound} (UCB) \cite{auer_2002} on the mean rewards of arms (e.g. UCB1) and \emph{Thompson Sampling} \cite{thompson_1933, agrawal_2012}. While we do not provide the details of these algorithms here (see footnote\footnote{~ In simplified terms, UCB1 estimates a confidence interval (CI) on the expected rewards that would be received from an arm. If an arm has been pulled a small number of times, the CI would be wide, therefore the upper end (confidence bound, hence UCB) of this interval would be large. Since the algorithm chooses the next arm as the one with highest upper confidence bound, it motivates for exploration of less utilized arms. Thompson Sampling, on the other hand, aims to choose the next arm according to the probability that it has the highest expected reward among the other arms. Each time a reward is observed after the pull of an arm, the algorithm updates its belief on the mean rewards distribution of the arms.} for a simplified intuition), a performance comparison of the methodologies is presented in \S~\ref{sec:results}. Like UCB1 and Thompson Sampling, most of the arm selection policies assume a reward in the interval [0.0, 1.0]. This matches well with our application, since we can take the feedback as $0.0$ for \emph{Not Good}, and $1.0$ for the cases when the user gives the feedback as \emph{Good} for the recommended candidate.
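For concreteness, minimal sketches of the two arm-selection rules for binary feedback are given below; the Beta(1,1) priors and the exploration constant are standard textbook choices, not necessarily our production settings.
\begin{verbatim}
import math
import numpy as np

def thompson_select(successes, failures,
                    rng=np.random.default_rng()):
    # sample a mean-reward estimate from each arm's Beta posterior
    samples = [rng.beta(s + 1.0, f + 1.0)
               for s, f in zip(successes, failures)]
    return int(np.argmax(samples))

def ucb1_select(total_reward, pulls, t):
    # play every arm once, then pick the largest upper bound
    for arm, n in enumerate(pulls):
        if n == 0:
            return arm
    ucb = [total_reward[a] / pulls[a]
           + math.sqrt(2.0 * math.log(t) / pulls[a])
           for a in range(len(pulls))]
    return int(np.argmax(ucb))
\end{verbatim}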
\subsubsection{Variations of Multi-Armed Bandits Setting and Relation to Current Work}
There have been multiple variations on the multi-armed bandits setting that are application specific, most notably \emph{mortal bandits} \cite{chakrabarti_2008} (where each arm has a life-span, i.e. birth and death), and contextual bandits \cite{wang_2005, langford_2008} (where the arm rewards are dependent on the current context).
Our use case of multi-armed bandits aligns most closely with the contextual bandit algorithms. We utilize offline data to come up with a segmentation of the candidate space individually for each position; therefore, we utilize the position information as the context for the multi-armed bandits setting. Furthermore, our ranking model takes user features \cite{hathuc2015expertisesearch} into account as well (via Eq.~\ref{eq:convex_score}), hence also utilizing the user context. The main difference of our work compared to the contextual bandit framework is that the context (the user's intent regarding what kind of candidates s/he is looking for) remains the same within a session.
\subsection{Online Update of Intent Clusters}
\label{subsec:online_learning_term_weights}
One final effort we apply within our work is the improvement of intent clusters by utilizing the user feedback. As shown in Eq.~\ref{eq:linear_score}, we employ a linear scoring over the candidates returned by an intent cluster (i.e. \emph{matchScore}, utilized in Eq.~\ref{eq:convex_score}). Ideally, this formulation will give higher values to those candidates that the user would \emph{like}, and low values to the others. Therefore, we should be able to update the intent cluster vector (i.e. $\vec{w}_{t_n}$ into $\vec{w'}_{t_n}$) after receiving feedback for each recommended candidate as follows:
\[
\vec{w'}_{t_n} = \vec{w}_{t_n} - \eta \cdot (\vec{w}_{t_n} \bullet \vec{w}_{c_m} - y_{c_m}) \cdot \vec{w}_{c_m} ~,
\]
where $\eta$ is the \emph{learning rate}, $y_{c_m}$ is the feedback of the user to the latest candidate ($c_m$) recommended from the intent cluster $t_n$, and similar to the notation of Eq.~\ref{eq:linear_score}, $\vec{w}_{c_m}$ is the binary vector of the candidate over the possible properties. This is the update methodology that would be used by the Stochastic Gradient Descent algorithm if we were solving a regression problem (i.e. $\vec{w}_{t_n} \bullet \vec{w}_{c_m}$ to estimate $y_{c_m}$) while optimizing mean squared error.
The main problem with the above update formulation is in the semantics of the optimization. The linear score is never meant to \emph{estimate} the user response, but rather to rank the candidates due to their similarity with the intent cluster. In the light of these points, we employ the following update formulation (which is similar to the \emph{perceptron} algorithm of Rosenblatt \cite{rosenblatt_1958}):
\begin{equation}
\label{eq:weights_update_perceptron}
\vec{w'}_{t_n} = \vec{w}_{t_n} + \eta \cdot y_{c_m} \cdot \vec{w}_{c_m} ~.
\end{equation}
In essence, the above update strategy aims to maximize:
\[
\sum_{c_i \mid y_{c_i} > 0} \vec{w}_{t_n} \cdot \vec{w}_{c_i} - \sum_{c_i \mid y_{c_i} \leq 0} \vec{w}_{t_n} \cdot \vec{w}_{c_i}
\]
over $\vec{w}_{t_n}$ in an online manner, with the starting point (the initial $\vec{w}_{t_n}$, i.e. the intent cluster vector) coming from the offline learned topic as a prior (the effect of the prior is diminished with large $\eta$). Therefore, it has the effect of making the intent cluster as similar as possible (in terms of weights) to the positively rated candidates from that cluster (hence moving candidates similar to the good ones higher in the ranks), while making it less and less similar (according to $\eta$) to those candidates that the user deemed not a good fit (moving them lower in the ranks of the intent cluster). This updating scheme also brings a new interpretation to Eq.~\ref{eq:convex_score} and to $\alpha$ within the equation. Basically, Eq.~\ref{eq:convex_score} now presents a mixture (hence $\alpha$ being the \emph{mixture rate}) between an offline learned model (\emph{offlineScore}) and the online updated model (\emph{matchScore}), and $\alpha$ determines the amount of personalization we apply, since \emph{offlineScore} is the output of a model learned over all users, and \emph{matchScore} is updated within the session for a specific user (recruiter). Please note that $y_{c_m}$ should be 1.0 (good fit) or -1.0 (not a good fit) in the above formulation, and is not within the range [0.0, 1.0] as was the case in \S~\ref{subsec:mab_clus}.
Indeed, such an update scheme is commonly used in the \emph{Online Learning} domain \cite{shalev_shwartz_2011, kevin_murphy_book_online_learning, romero_2013_online_learning}, and is aimed at getting the recommendation model closer to the current application's restrictions, which, in our case, are the user's preferences. We demonstrate the benefits of this update methodology in \S~\ref{sec:results} (along with the effect of modifying $\alpha$ and $\eta$).
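A minimal sketch of this online update; the feedback encoding follows the description above, while the vectors and the learning rate are illustrative.
\begin{verbatim}
import numpy as np

def update_cluster(w_cluster, w_candidate, y, eta):
    # perceptron-style update (eq:weights_update_perceptron):
    # y = +1.0 for 'good fit', -1.0 for 'not a good fit'
    return w_cluster + eta * y * w_candidate
\end{verbatim}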
\subsection{Overall Flow}
\label{subsec:methodology_summary}
Figure~\ref{fig:rec_workflow} presents an overall summary of our methodology as a flow of events in the recommendation process. Our workflow starts with the user providing an uninformative query, which is a position/title to be filled (step 1 in the figure). Then, we reach into the candidate space that fits this title and segment the set of candidates into possibly overlapping \emph{intent clusters} (step 2), in our case via topic modeling (\S~\ref{subsubsec:lda_detailed}), which is performed offline (i.e. the topics/intent cluster properties for each title are predetermined and stored offline, and segmentation is done online by picking up the stored intent clusters for the user-entered title search query). The intent clusters are translated into a set of queries (step 3), which are utilized to rank the candidates according to the term weights (\S~\ref{subsubsec:utilizing_intent_clusters}).
Selecting the most appropriate intent cluster (step 4), according to the current preferences of the user, is achieved via multi-armed bandits (\S~\ref{subsec:mab_clus}). With each newly shown candidate from the chosen intent cluster (steps 5 and 6), we can utilize the feedback (step 7) of the user on that specific candidate in order to both update the multi-armed bandit parameters (e.g. mean rewards) (step 8), and the term weights (step 9) so that we can get a better ranking for each intent cluster (\S~\ref{subsec:online_learning_term_weights}).
This overall recommendation scheme has been implemented within the LinkedIn Talent Services products, and we present the results of our online experiments in \S~\ref{sec:results}.
\begin{figure}
\centering
\includegraphics[width=1.9in]{./model_summary.pdf}
\vspace{-10pt}
\caption{Recommendation System Flow. Numbers in the circles indicate the order of events in the flow.}
\label{fig:rec_workflow}
\end{figure}
\subsection{Topic Modeling in Recommender Systems}
To the best of our knowledge, our work is the first application of topic modeling within the domain of recruiting and candidate recommendation for hiring. Due to this fact, we will only explore a limited number of works on topic modeling for recommender systems within this section.
LDA and its variants are widely used in applications to extract topics from items and users in recommender systems, mainly for \emph{personalized document recommendation} \cite{Chang, Luostarinen}. Some sample work that diverges from this trend is given in \cite{LCARS} and \cite{Auralist}, which focus on social event recommendations and musical recommendations, respectively. The authors of \cite{LCARS} utilize LDA for event and venue recommendation, proposing a location and content-aware recommender system which gives consideration to both personal interest and local preference. Finally, \cite{Auralist} introduces \emph{Auralist} system which aims to inject serendipity, novelty and diversity into recommendations while limiting the impact on accuracy.
\subsection{Multi-Armed Bandits in \\Recommender Systems}
While there are several works which apply multi-armed bandits within the recommender systems domain, we would like to examine the most relevant previous literature more deeply. An earlier work \cite{radlinski_2008} focuses on the need for diversification of recommended documents, since the relevance scores of ranked items are not independent of each other. Their contribution is the \emph{ranked bandits} algorithm, which assigns a separate multi-armed bandit to each ranked object in the recommended sequence. This is strictly different from our methodology, since we aim to find the best ranking for the current user, and our arms are ranking algorithms, not documents. Therefore, choosing the next arm is equivalent in our case to choosing the remaining best item (i.e. next rank) from the arm. A successor paper is given in \cite{kohli_2013}, where the difference is in the reward function: \cite{radlinski_2008} assumes a positive reward for only the first relevant document, whereas \cite{kohli_2013} receives positive rewards for all clicked documents in the generated ranking, which is more suitable for our application.
The last work we would like to mention here is given in \cite{hofmann_2011} where the authors aim to apply a contextual bandit setting for recommending articles, where the utility of an arm is based on a feature vector that describes the query and the user properties. They also argue that a fully exploitative approach would only collect information of the top ranked documents, hence they propose an approach of interleaved lists where the recommendation shown to user comes from both an explorative and an exploitative list. This is similar to our proposed approach, however we utilize each ranking per session for both exploration and exploitation.
\subsection{Online Session Personalization in Recommender Systems}
Historically, content-neutral collaborative filtering and hybrid collaborative filtering systems have been popular with online services such as Amazon, eBay, and Spotify, and continue to be used for many applications. Although early collaborative filtering systems used memory-based methods based on item-item similarity metrics such as cosine similarity, most modern collaborative filtering systems use matrix factorization algorithms. The classical matrix factorization methods for collaborative filtering include Singular Value Decomposition (SVD), Principal Component Analysis (PCA), and Probabilistic Matrix Factorization (PMF). All of these methods suffer from the limitation that online gradient descent is impractical for large matrices, and most of these methods have been forced to respond to user actions through re-training.
Factorization machines \cite{rendle2010fm} are a popular new approach to personalized recommendation systems in the context of sparse data which generalize previous factorization methods for collaborative filtering. Although the initial publications on factorization machines addressed the problem solely in the context of offline learning, later work \cite{kitazawa2016, lu2013second_order_filtering} addresses methods of online learning which are applicable to factorization machines.
In \cite{gemmell2008folksonomies}, the authors present a different sort of online personalization system for free-form tag based collaborative filtering systems in which tags are assembled into clusters and the user's interests are modeled as a vector of cluster weights. The authors define the weight of each cluster as the ratio of the number of times a user has annotated a resource described by one or more tags in a cluster to the total number of annotations a user has made. As the user interacts with resources the weights can be recomputed trivially. The similarity with our work comes from the way the authors model a user's interaction with a large space of labels by clustering the labels and assigning a dynamically computed weight to the user's affinity for the entire cluster. However, their method of personalization is significantly different from our approach since we allow for both positive and negative feedback, and update our vector representation of the user's interests (via intent cluster updates) rather than merely rely on the number of times a user interacts with a resource from a cluster.
Linear models are also widely used in many recommendation systems when content aware features are desirable, but models trained on large enough sample sizes of user-item interactions to estimate the likelihood of users interacting with resources can lack personalization. Generalized linear mixed models (GLMix) can be used to combine a traditional globally trained linear model with individually trained regression coefficients \cite{zhang2016glmix}. Online personalization is achieved by retraining the model frequently to capture recent user-resource interactions.
\subsection{Offline Experiments} \label{sec:offl_exp}
We first evaluate our proposed methodology and the effect of the mixture rate ($\alpha$ in Eq.~\ref{eq:convex_score}), the learning rate ($\eta$ in Eq.~\ref{eq:weights_update_perceptron}), and the choice of MAB algorithm in an offline setting. For this purpose, we utilized the click logs from our \emph{Recruiter} application \cite{hathuc2015expertisesearch, hathuc2016talentsearch} over a period of 10 days within 2017. The dataset consists of a sampled set of user search instances along with a ranked list of candidates recommended and whether each candidate was clicked or not (search instances on the order of thousands were sampled for evaluation, with hundreds of thousands of candidates recommended). Each search instance is therefore a stream of candidates recommended to the recruiter for the same search query, which we re-rank offline using our proposed methodology, examining whether the methodology places the positively rated candidates higher. Figures \ref{fig:mrate} through \ref{fig:col_ts} present the results of our offline evaluations.
\begin{figure}[!t]
\centering
\includegraphics[width=2.85in]{./mrate-crop.pdf}
\vspace{-10pt}
\caption{Precision@25 over Mixture Rate $\alpha$ (Averaged over All Explored Learning Rates)}
\label{fig:mrate}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=2.85in]{./lrate-crop.pdf}
\vspace{-10pt}
\caption{Precision@25 over Learning Rate $\eta$ (Averaged over All Explored Mixture Rates)}
\label{fig:lrate}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=2.60in]{./col_ucb-crop.pdf}
\vspace{-10pt}
\caption{Precision@25 Colour Map over Learning and Mixture Rate Combinations for UCB1}
\label{fig:col_ucb}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=2.60in]{./col_ts-crop.pdf}
\vspace{-10pt}
\caption{Precision@25 Colour Map over Learning and Mixture Rate Combinations for Thompson Sampling (TS)}
\label{fig:col_ts}
\end{figure}
In Figures \ref{fig:mrate} and \ref{fig:lrate}, we demonstrate the precision (percentage of positively rated candidates\footnote{~ We modify the precision values with a constant multiplier due to company policy.}) over different mixture rates and learning rates for both the UCB1 and Thompson Sampling (TS) arm selection algorithms, focusing on the first 25 candidates recommended. Each point in the graphs represents an average over the other parameter; e.g. the precision value for learning rate $\eta = 0.01$ in Figure~\ref{fig:lrate} is calculated by varying the mixture rate over a specific set of values (those given on the x-axis of Figure~\ref{fig:mrate}, to be exact) and taking the average precision over these runs. We can see from the figures that the learning rate has a peaking behavior (at 0.05), and that mixing the offline score with the online updated \emph{matchScore} is necessary ($\alpha$=0 gives the worst results). However, further increasing the mixture rate ($\alpha$) reduces the precision, which indicates the need for balancing between the global offline learned model (\emph{offlineScore} in Eq.~\ref{eq:convex_score}) and the personalized, online updated model (\emph{matchScore} in Eq.~\ref{eq:convex_score}). Also, Thompson Sampling performs better overall compared to UCB1, and therefore it has been our choice for online deployment. Finally, to allow for a more granular examination of our results, we provide a heat map of all paired combinations of learning and mixture rates in Figures \ref{fig:col_ucb} and \ref{fig:col_ts}.
\begin{figure}[!t]
\centering
\includegraphics[width=2.9in]{./interested_perf_5-crop.pdf}
\vspace{-10pt}
\caption{Evolution of Positive Feedback Percentage over the Life Cycle of Sessions, with the Trend-line in Red (the x-axis gives the rank window of the recommendation, i.e. 5 $\rightarrow$ candidates recommended at ranks 1-5, 10 $\rightarrow$ ranks 6-10, 60 $\rightarrow$ ranks 56-60, etc.).}
\label{fig:interested_perf_5}
\end{figure}
\subsection{Online Results}
Next, we would like to present the results from our online deployment of the proposed methodology within 2017. The number of search sessions we have utilized for the evaluation is on the order of hundreds, where we received feedback for thousands of recommended candidates. The results are given in Figures \ref{fig:interested_perf_5} through \ref{fig:distinct_arm_conv}. In Figure~\ref{fig:interested_perf_5}, we present the precision results (modified similarly to the results presented in \S~\ref{sec:offl_exp}), averaged over all users, during the evolution of the search session (due to online learning), where we also provide the trend-line. The improvement in precision as we get more feedback is visible from the graph, which demonstrates the online learning capabilities of the proposed scheme.
We also examined the convergence behavior of the multi-armed bandits utilized to select intent clusters, as each search session progresses. Figure~\ref{fig:frequency_arm_conv} shows that, as expected, the utilization (pull) percentage of the most frequently used (pulled) arm (which represents an intent cluster) increases as we get more and more feedback within a search session (we calculated these statistics within a sliding window of 25 at each newly recommended candidate, i.e. looking at the utilized arms for the past 25 candidates at each rank). Finally, Figure~\ref{fig:distinct_arm_conv} (where the statistics are calculated using a sliding window of 25, similar to Figure~\ref{fig:frequency_arm_conv}) also supports our previous observation, where it can be noticed that the number of unique arms utilized to recommend a candidate gets lower as we get more feedback within the session (which means that \emph{exploration} over intent queries lessens as we learn the user preferences, and \emph{exploitation} kicks in more and more as the session progresses).
\begin{figure}[!t]
\centering
\includegraphics[width=2.3in]{./frequency_arm_conv.pdf}
\vspace{-10pt}
\caption{Percentage of Most Frequently Utilized Arm over the Life Cycle of Sessions}
\label{fig:frequency_arm_conv}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=2.3in]{./distinct_arm_conv.pdf}
\vspace{-10pt}
\caption{Number of Distinct Arms Utilized over the Life Cycle of Sessions}
\label{fig:distinct_arm_conv}
\end{figure}
\section{Introduction}
A polymer solution is in itself a system of inhomogeneous segment concentration, owing to the fact that monomers are joined by chemical bonds. The concentration is highest at the center of gravity, but decreases rapidly with increasing distance from the center. This inhomogeneity gives rise to a gradient of the Gibbs free energy between the inside of a coil and the outside, and this is the origin of the excluded volume effects. Our fundamental idea is thus that the excluded volume effects are manifested as a result of the wild inhomogeneity of polymer solutions.
On the basis of the above concept, the theory of the excluded volume effects\cite{Flory, Kazumi} leads us, in a natural fashion, to the following equation:
\begin{equation}
\alpha^{5}-\alpha^{3}=N^{2}\frac{V_{2}^{\,2}}{V_{1}}\left(1/2-\chi\right)\left(\frac{\beta}{\pi}\right)^{3}\iiint\left(G_{hill}^{\,2}-G_{valley}^{\,2}\right)dxdydz\label{1-1}
\end{equation}
with $N$ being the number of segments, $V$ the volume (the subscripts 1 and 2 signify solvent and segment, respectively), $\chi$ the enthalpy parameter defined by $\Delta H\propto\chi$, and $\beta=3/2\langle s^{2}\rangle_{0}$ ($\langle s^{2}\rangle_{0}$ denotes the mean square radius of gyration of an unperturbed chain).
In eq. (\ref{1-1}), $G$ is a function associated with segment concentration at the coordinate $(x, y, z)$ (the subscripts $hill$ and $valley$ signifying concentrated and dilute regions, respectively) and has the form
\begin{equation}
G(x,y,z)=\sum_{\{a,b,c\}}\exp\{-\beta[(x-a)^{2}+(y-b)^{2}+(z-c)^{2}]\}\label{1-2}
\end{equation}
Now an additional new term
\begin{equation}
J=\iiint\left(G_{hill}^{\,2}-G_{valley}^{\,2}\right)dxdydz\label{1-3}
\end{equation}
directly related with the concentration fluctuation has been introduced. The term $J$ is a direct manifestation of the wild inhomogeneity of polymer solutions. When $J$ diminishes, the excluded volume effects must also diminish accordingly.
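To make the structure of eqs. (\ref{1-1})-(\ref{1-3}) concrete, the following numpy sketch evaluates $G$ on a grid and estimates $J$ by a Riemann sum; the box size, grid resolution and segment configurations are illustrative assumptions, not the settings of the simulation reported below.
\begin{verbatim}
import numpy as np

def G(points, centers, beta):
    # eq. (2): sum of Gaussians centred on segment positions (a,b,c)
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * d2).sum(axis=1)

def J(centers_hill, centers_valley, beta, L=50.0, n=48):
    # eq. (3): Riemann-sum estimate of the fluctuation term
    ax = np.linspace(-L / 2, L / 2, n)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
    dV = (ax[1] - ax[0]) ** 3
    g_hill = G(pts, centers_hill, beta)
    g_valley = G(pts, centers_valley, beta)
    return float((g_hill ** 2 - g_valley ** 2).sum() * dV)
\end{verbatim}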
\section{Simulation}
In this report, we solve eq. (\ref{1-1}) as a function of molecular weights, modeling polystyrene solutions in carbon disulfide (PSt in CS$_{2}$). The employed physico-chemical parameters are listed in Table \ref{table1}.
\begin{table}[!htb]
\vspace{-2mm}
\caption{Basic parameters of polystyrene solution\label{table1}}
\begin{center}
\vspace*{-1.5mm}
\begin{tabular}{l l c r}\hline\\[-1.5mm]
& \hspace{10mm}parameters & notations & values \,\,\,\,\\[2mm]
\hline\\[-1.5mm]
polystyrene (PSt) & volume of a solvent (CS$_{2}$) & $V_{1}$ & \hspace{5mm}100 \text{\AA}$^{3}$\\[1.5mm]
& volume of a segment (C$_{8}$H$_{8}$) & $V_{2}$ & \hspace{5mm}165 \text{\AA}$^{3}$\\[1.5mm]
& Flory characteristic ratio & C$_{F}$ & \hspace{5mm}10 \,\,\,\,\,\,\,\\[1.5mm]
& mean bond length & $\bar{\ell}$ & \hspace{5mm}1.55 \text{\AA}\,\,\,\\[1.5mm]
& enthalpy parameter (25$^{\,\circ}$C) & $\chi$ & \hspace{5mm}0.4 \,\,\,\,\,\,\,\\[2mm]
\hline\\[-6mm]
\end{tabular}\\[6mm]
\end{center}
\end{table}
\begin{wrapfigure}[18]{r}{7.5cm}
\vspace{-6mm}
\includegraphics[width=7.5cm]{Fig-1.eps}
\vspace{-5mm}
\caption{Molecular weight dependence of $\alpha$: (a) $\text{M}_{w}=10^{4}$, (b) $\text{M}_{w}=10^{5}$ and (c) $\text{M}_{w}=10^{6}$.}\label{fig1}
\end{wrapfigure}
The simulation results are illustrated in Fig. \ref{fig1} for M$_{w}=10^{4}$, $10^{5}$ and $10^{6}$. It is seen that the molecular weight has a marked effect on the location of the swollen-to-unperturbed coil transition point $c^{*}$ ($\alpha=1$). With increasing M$_{w}$ (from (a) to (c)), $c^{*}$ shifts rapidly toward lower concentrations. This phenomenon is easy to understand: because of the form, $p(s)=(d/2\pi\langle s^{2}\rangle)^{d/2}\exp(-d\hspace{0.2mm}s^{2}/2\langle s^{2}\rangle)$, of the segment distribution around the center of gravity, chains tend to interpenetrate more deeply as M$_{w}$ increases. Thus, in the limit of an infinitely long chain, the fluctuation term $J$ goes to zero, and the excluded volume effects vanish at all finite concentrations. For this reason, it is only at zero concentration that the notion of the infinite chain has a sound physical basis in the study of the excluded volume effects. In contrast, for short chains, the excluded volume effects tend to survive in the high concentration region (curve (a)), suggesting that for chains shorter than M$_{w}=10^{4}$, the excluded volume effects may not disappear even at the melt state ($\bar{\phi}=1$).
\begin{wrapfigure}[14]{r}{7.5cm}
\vspace{-8.5mm}
\includegraphics[width=7.5cm]{Fig-2.eps}
\vspace{-5mm}
\caption{Molecular weight dependence of $\alpha$: (d) $\text{M}_{w}=10^{3}$ and (e) $\text{M}_{w}=10^{7}$.}\label{fig2}
\end{wrapfigure}
To examine the above inference, we have calculated the two extreme cases of M$_{w}=10^{3}$ and $10^{7}$; the results are shown in Fig. \ref{fig2}. As expected, for the short chain (M$_{w}=10^{3}$), the excluded volume effects never disappear in the entire concentration range from the dilution limit to the melt; the coil remains swollen even at the melt state. We know of course that no such short Gaussian chain having M$_{w}=10^{3}$ exists in reality, so our discussion is purely theoretical. Notwithstanding, the present results reveal that ``the ideal chain hypothesis at the melt state (ICHM)'' is not strictly true, but a practical law valid only for polymers having M$_{w}\gtrsim10^{4}$\cite{Cotton}. The swollen-to-unperturbed coil transition point should vary from $c^{*}=\infty$ to 0 as $\text{M}_{w}$ varies from 0 to $\infty$.
The above statement can be reinforced by re-examining the original Flory theory: The local free energy
\begin{equation}
\delta F_{mixing}=kT\left\{\log(1-v_{2})+\chi\hspace{0.2mm}v_{2}\right\}\delta n_{1}\label{1-4}
\end{equation}
stands for the difference in the Gibbs potential between the pure components (polymers and solvents) and the mixture. From the point of view of polymers as solutes, eq. (\ref{1-4}) represents the potential difference between the melt state and the solution. Integrating eq. (\ref{1-4}) over all volume elements in the system, then adding the elastic term, and minimizing the resultant total free energy, we are led to the known result, $\alpha^{5}-\alpha^{3}=C\,(1-\Theta/T)\,M^{1/2}$. It becomes clear that the classic theory identifies the melt state (pure polymer) with the standard state, and $\alpha$ is calculated as the deviation from the melt state. We thus realize that the classic theory postulates, as its premise, ideal chain behavior at the melt state. This may be the reason why, despite the fact that ICHM has been confirmed firmly by SANS experiments\cite{Cotton}, ICHM has raised so many questions and debates so far\cite{deGennes, deCloizeaux, Wittmer}.
Finally, we would like to emphasize that, aside from the problem of the standard state, the classic theory\cite{Flory} has extracted the correct and essential features of the excluded volume effects; i.e., it gives a complete description of the limiting case of $C=0$ for the generalized expression, eq. (\ref{1-1}).
\section{Introduction}\label{introduction}
{\bf Introduction} -- The search for the Higgs boson
is entering a critical phase. Data collected at the LHC
rules out the SM Higgs boson for a wide range of masses
and may suggest a Higgs boson with mass near $125$
GeV~\cite{ATLAS:2012ae, Chatrchyan:2012tx}.
Searches for a light SM Higgs in the still-relevant mass window rely primarily on
the $\gamma\gamma$ channel, though the $WW^*\to 2\ell 2\nu$ channel and
the golden channel, $ZZ^*\to 4\ell$, are also important.
So far very little attention has been given to the
$Z\gamma\to \ell\bar{\ell}\gamma$ channel~\cite{Cahn:1978nz}, although its event rate is
comparable to that of the golden channel for a light SM Higgs boson.
Nevertheless, this channel has the advantage that all final state particles
can be measured well, which carries several important implications:
1) the Higgs mass could be measured from the total invariant mass spectrum,
2) the spin of a putative signal can be determined by studying angular
correlations~\cite{arXiv:1010.2528}, and 3) the separation of signal from
background can be facilitated by employing full kinematic information, potentially
allowing searches with enhanced sensitivities.
For the golden channel in $ZZ^*\to 4\ell$ the above questions have been
studied extensively \cite{Matsuura:1991pj,Keung:2008ve, Gainer:2011xz},
but we are not aware of any detailed studies for the $Z\gamma$ channel.
Measurements of all four Higgs decay modes into electroweak bosons are
in fact very important in determining the electroweak quantum numbers of
a putative Higgs signal \cite{arXiv:1005.0872}. Furthermore, an electroweak
singlet scalar could easily have a branching fraction in the $Z\gamma$ mode
that is orders of magnitude larger than the SM
expectation~\cite{arXiv:1105.4587} which provides an important additional
incentive for studying this channel.
{\bf Kinematics} -- The kinematics of $Z\gamma\to \ell\bar{\ell}\gamma$ is
described by three angles, $\Theta$, $\theta$ and $\phi$, where
$\Theta$ may be taken to be the
angle describing the production of the $Z$ boson in the center of mass frame,
and $\theta$ and $\phi$ are the angles that describe the decay of the $Z$ to leptons,
as defined in more detail in~\cite{Gainer:2011xz}.
To accommodate events with jet radiation in the final state, we use the
momentum of the $Z\gamma$ system in the lab frame,
rather than the beam axis, to define $\Theta$.
\begin{figure}[t]
\includegraphics[scale=0.42, angle=0]{fig/feynDiag.pdf}
\caption{\label{fig1}{\em Feynman diagrams contributing to
$q\bar{q}\to \ell\bar{\ell}\gamma$ are shown in (a) and (b).}}
\end{figure}
The dominant irreducible background to the Higgs signal arises from initial
state radiation (ISR) and final state radiation (FSR) from Drell-Yan
production of a $Z$ boson; the diagrams describing these processes
are shown in Fig.~\ref{fig1} (a) and (b). The invariant mass of the
$Z\gamma$ system from FSR events is close to the $Z$ boson mass,
so this background is removed efficiently by imposing $m_{\ell\ell\gamma}>100$~GeV,
and we can focus on the ISR diagram and the
corresponding $u$-channel diagram for the rest of this analysis.
The signal and background cross sections were computed using the helicity basis
in~\cite{Hagiwara:1986vm}.
We now discuss some qualitative features of these
differential cross sections, in particular the $\Theta$ dependence of the signal
and background processes.
\begin{figure*}[ht]
\includegraphics[width=0.9\textwidth, angle=0]{fig/SinglyDist.pdf}
\caption{\label{fig:dist}{\em Signal (red, solid) and background (blue, dashed) distributions in $\cos \Theta$, $\phi$ and $\cos \theta$, with $\sqrt{\hat{s}} = m_h = 125$~GeV.}}
\end{figure*}
In the signal case, angular distributions follow from the fact that the Higgs is a scalar particle, and hence
only the decay angle $\theta$ has a nontrivial distribution:
\begin{equation}
\label{signal diff}
\frac1{N} \frac{d\sigma}{d\cos\Theta\, d\cos\theta\, d\phi} = (1+\cos^2\theta) \ .
\end{equation}
For the background distributions, the non-vanishing helicity
combinations are $(\lambda_1,\lambda_2)=(\pm, \mp), (0,\pm)$,
and $(\pm, \pm)$. The production angular distribution
exhibits a collinear singularity at $\cos\Theta=\pm 1$, which is seen by
examining the $t$-channel propagator in Fig.~\ref{fig1} (a),
\begin{equation}
\frac{1}{(k_{\bar{q}}-p_\gamma)^2} = -\frac{1}{2E_{\bar{q}} E_\gamma
(1-\cos\Theta)} \ ,
\end{equation}
while the $u$-channel propagator gives the collinear singularity at
$\cos\Theta=-1$. Thus the production angular distribution for the background
process is peaked at $\cos\Theta=\pm 1$, producing forward and backward photons.
The singularity
is removed by the $p_T$ cuts on the photon and leptons. Explicit calculations lead to
\begin{eqnarray}
\label{bkdg diff}
&&\frac1{N'}\frac{d\sigma}{d\cos\Theta\, d\cos\theta\, d\phi}= \nonumber \\
&& \ (g_r^2+g_\ell^2)(g_R^2+g_L^2) \, {\cal G}_1 + (g_r^2-g_\ell^2)(g_R^2-g_L^2) {\cal G}_2 \,,
\end{eqnarray}
with
\begin{eqnarray}
{\cal G}_1 &= & \left[ (m_{12}^4+\hat{s}^2)(3+\cos 2\theta)(4 \csc^2\Theta-2)\phantom{\sqrt{\hat{s}}} \right. \nonumber\\
&& +8 m_{12}^2\,\hat{s}\, \sin^2\theta (2+\cos2\phi) \nonumber \\
&& \left.+8\,m_{12} \sqrt{\hat{s}}\,\left(m_{12}^2+\hat{s}\right)\cot\Theta\, \sin2\theta\, \cos\phi\right] \, ,\\
{\cal G}_2&=&16 \csc\Theta \left[(m_{12}^4+\hat{s}^2) \cos\theta \cot\Theta\right. \nonumber\\
&& + \left. m_{12} \sqrt{\hat{s}}\left(m_{12}^2+\hat{s} \right)\ \sin\theta \cos\phi \,\right]\, ,
\end{eqnarray}
where $g_{L(\ell)}$ and $g_{R(r)}$ are the $Z$ couplings to left- and right-handed quarks (leptons).
In Fig.~\ref{fig:dist} we show the distributions in $\cos\Theta$, $\phi$, and $\cos\theta$ for a 125~GeV Higgs boson and a background process $d\bar{d}\to Z\gamma$ at $\sqrt{\hat{s}}=125$~GeV at the parton level.
These are modified after including the effects of parton distribution functions (PDFs) and detector acceptance and isolation cuts.
In particular, we note that $\cos \Theta$ is directly connected to the photon $p_T$ through
\begin{equation}
\cos\Theta = \sqrt{1-{4p_{\gamma T}^2\hat{s}}/{(\hat{s}-m_Z^2)^2}} \,.
\end{equation}
The $\cos\Theta$ distribution in Fig.~\ref{fig:dist} therefore implies that the $p_{\gamma T}$ distribution is peaked at zero for the background and at $(m_h^2-m_Z^2)/(2m_h)$ for the signal. However, it also follows that once a cut on $p_{\gamma T}$ is imposed,
very little additional sensitivity can be gained from the $\cos\Theta$ distribution.
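As an illustration of these kinematics, the short numpy sketch below rejection-samples the signal decay angle from eq.~(\ref{signal diff}) and inverts the $\cos\Theta$-$p_{\gamma T}$ relation above; the numerical values are illustrative.
\begin{verbatim}
import numpy as np

def sample_costheta_signal(n, rng=np.random.default_rng(0)):
    # rejection-sample cos(theta) from the density 1 + cos^2(theta)
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, 2.0) < 1.0 + c * c:  # envelope height 2
            out.append(c)
    return np.array(out)

def pt_gamma(cos_Theta, s_hat, m_Z=91.1876):
    # invert cos(Theta) = sqrt(1 - 4 pT^2 s / (s - mZ^2)^2)
    return ((s_hat - m_Z ** 2) / (2.0 * np.sqrt(s_hat))
            * np.sqrt(1.0 - cos_Theta ** 2))
\end{verbatim}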
{\bf Analysis and Results} --
We perform Monte Carlo simulations to obtain projections for the sensitivity
of this channel at the LHC using various analyses.
We consider Higgs masses of $120$, $125$, and $130$ GeV. Our simulations
are specific to the $8$ and $14$ TeV LHC. The existing $7$~TeV data has
very little sensitivity in this channel, so we do not report those results here.
To perform these Monte Carlo studies, we generate at least 50,000
events for each signal and background process using MadGraph 5~\cite{arXiv:1106.0522}.
The Higgs coupling to gluons and the $hZ\gamma$ vertex are implemented
as effective dimension five operators using the HEFT model provided by
MadGraph 5 and the FeynRules~\cite{Christensen:2008py} package.
For both signal and background, the processes
$p p \to Z\gamma$ and $p p \to Z\gamma + 1j$ are generated, using the
MLM matching scheme~\cite{MLM} implemented in MadGraph 5 and interfaced
with Pythia~6~\cite{Pythiaref}, with a matching scale of $25$~GeV.
Events are then passed to PGS 4~\cite{PGSref} using the CMS parameter card,
to model detector acceptance and smearing effects.
Since the energy and momentum resolution is crucial for this analysis,
we have compared the invariant mass resolution obtained from PGS~4 with
the one that is obtained when smearing parton level events by hand using
the CMS detector parameters~\cite{Bayatian:2006zz}, and found
good overall agreement.
We demand that each lepton or photon has
\begin{equation}
\label{eq:basicut}
|\eta| < 2.5 \quad {\rm and} \quad p_T > 15\ {\rm GeV}.
\end{equation}
The smearing results in the broadening of the lineshape in the total
invariant mass of the $Z\gamma$ system, $m_{\ell \ell \gamma}$, for the signal events. Therefore,
before performing more detailed analyses, we impose an invariant mass
cut, demanding that $m_{\ell\ell\gamma}$ be within $5$~GeV of its mean
value in signal events.
It is worth emphasizing that since subsequent analyses will effectively
reduce the range of invariant mass considered, the specific details
of this initial cut do not have a strong effect on the final value of
$S/\sqrt{B}$ obtained.
Note that this cut also effectively removes the background coming from
final-state radiation (FSR), which is characterized by $m_{\ell\ell\gamma} \sim M_Z$.
To determine the expected number of signal events at the $14$ TeV LHC, we obtain the
inclusive Higgs production cross section from \cite{arXiv:1101.0593}.
For the $8$ TeV LHC, we use the values given
in~\cite{Anastasiou:2012hx}.
The branching fraction for $h \to Z\gamma$ is found using
HDECAY~\cite{hep-ph/9704448}, while we use the PDG value ($6.73\%$) for the
branching fraction for a $Z$ decaying to leptons~\cite{FERMILAB-PUB-10-665-PPD}.
The background cross section is found by using MCFM \cite{arXiv:1007.3492,Campbell:2011bn} with
FSR photon radiation turned off.
\begin{figure}[t]
\includegraphics[scale=0.4, angle=0]{fig/fig2.pdf}
\caption{\label{fig2}{\em Exclusion limits at the 95\% confidence level on the Higgs production rate times branching fraction to $Z\gamma$ at the
$8$ TeV LHC with an integrated luminosity of 20 fb$^{-1}$. The green (yellow) band is the 1(2) $\sigma$ contour. The solid red line corresponds to the SM expectation.}}
\end{figure}
We perform three analyses, two of which are multivariate. The multivariate discriminants we use are based on the matrix elements of the signal and background processes. In the context of a maximum likelihood analysis, such a discriminant was used in the discovery of single top production \cite{arXiv:0803.0739}. For simplicity, we use a cut-based approach to determine our sensitivity with these multivariate discriminants.
We construct a discriminant using the fully differential cross sections computed for the signal and background processes to quantify the relative probability of a particular event being signal-like or background-like. We then determine an optimal cut on the discriminant to maximize the value for $S/\sqrt{B}$. In one analysis, we include PDF weights for the leading initial state for signal or background events ($gg$ or $q\bar{q}$ respectively). In the second multivariate analysis, we do not include
a weight from PDFs.
Labelling
the signal and background differential cross sections by $s(\mathbf{\Omega})$
and $b(\mathbf{\Omega})$, respectively, we consider the quantity
\begin{equation}
\label{eq:optcut}
D(\mathbf{\Omega})=\frac{s(\mathbf{\Omega})}{s(\mathbf{\Omega})
+ b(\mathbf{\Omega})} =
\bigg( 1 + \frac{s(\mathbf{\Omega})}{b(\mathbf{\Omega})}\bigg)^{-1}.
\end{equation}
Here, $\mathbf{\Omega}=\{x_1, x_2, \hat{s}, m_{\ell\bar{\ell}}, \Theta, \theta, \phi\}$ is the complete set of kinematic observables characterizing each event. When evaluating $D$ on a sample of pure signal events, the distribution is peaked toward $1$, while it is peaked toward $0$ for a pure background sample. For each Higgs mass, a cut on $D$ is determined by maximizing $S/\sqrt{B}$ of the events passing the cut.
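For concreteness, the cut optimization can be sketched in a few lines of Python; this is an illustrative outline only, and the array names, cross section inputs, and grid of cut values are our own choices rather than part of the actual analysis code:
\begin{verbatim}
import numpy as np

def best_cut(D_sig, D_bkg, sigma_s, sigma_b, lumi):
    # D_sig, D_bkg: discriminant values for signal and
    # background Monte Carlo events; sigma_s, sigma_b:
    # cross sections (fb); lumi: luminosity (fb^-1).
    best_sig, best_c = 0.0, None
    for c in np.linspace(0.0, 1.0, 201):
        S = sigma_s * lumi * np.mean(D_sig > c)
        B = sigma_b * lumi * np.mean(D_bkg > c)
        if B > 0 and S / np.sqrt(B) > best_sig:
            best_sig, best_c = S / np.sqrt(B), c
    return best_sig, best_c
\end{verbatim}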
One advantage of using the multivariate discriminant in a cut-based approach is that the relative normalization of the signal and background cross sections does not affect the final significance computed using $S/\sqrt{B}$. The drawback, on the other hand, is that we lose those events not passing the cut, which would not be the case if the signal and background matrix elements were used to construct the likelihood directly.
Our multivariate discriminants use the parton-level differential cross section, except for the Higgs propagator:
for the Higgs masses considered, the Higgs width is much narrower than the experimental resolution. In principle, one can deal with this issue by using transfer functions for the lepton momenta.
We take the simpler approach
of weighting each event with a Gaussian invariant mass distribution that is centered at the average invariant mass for
signal events. The width used in this Gaussian weighting is found
by scanning (in $20$ MeV increments) over potential values, from $100$ MeV to $5$ GeV, and selecting the value which maximizes the sensitivity of the analysis.
The third analysis uses the same Gaussian invariant mass weight, but no other kinematic information about the events. While one would expect a loss of sensitivity, this approach has the advantage of being less sensitive to higher order corrections that could modify the angular distributions that enter the multivariate analyses.
\begin{table}[t]
\begin{tabular}{| l | r | r | r |}
\hline
Higgs Mass~ &
Signal (fb) & Backg. (fb) & $S/\sqrt{B}$ ($20$ fb$^{-1}$) \\ \hline
$120$~GeV & $ 0.38 ~~ (0.45) $ & $ 32. ~~ (110) $ & $ 0.30 ~~ (0.19) $ \\ \hline
$125$~GeV & $ 0.61 ~~ (0.74) $ & $ 30. ~~ (100) $ & $ 0.50 ~~ (0.33) $ \\ \hline
$130$~GeV & $ 0.66 ~~ (0.86) $ & $ 23. ~~ (89.) $ & $ 0.62 ~~ (0.41) $ \\ \hline
\end{tabular}
\caption{
\emph{
The signal and background cross sections, as well as the significance after an optimal cut on the discriminant in Eq.~(\ref{eq:optcut}) in the invariant mass only analysis at the $8$ TeV LHC. In parentheses we also show the corresponding values for all events passing the $p_{T}$ and geometric acceptance cuts and lying within an invariant mass window of $10$ GeV centered on the Higgs mass, as described in the text.}
\label{table : analysis 8}
}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{| l | r | r | r |}
\hline
Higgs Mass~ &
Signal (fb) & Backg. (fb) & $S/\sqrt{B}$ ($100$ fb$^{-1}$) \\ \hline
$120$~GeV & $ 0.83 ~~ (1.0) $ & $ 36. ~~ (180) $ & $ 1.2 ~~ (0.78) $ \\ \hline
$125$~GeV & $ 1.3 ~~ (1.6) $ & $ 37. ~~ (160) $ & $ 2.0 ~~ (1.3) $ \\ \hline
$130$~GeV & $ 1.7 ~~ (2.1) $ & $ 40. ~~ (140) $ & $ 2.7 ~~ (1.8) $ \\ \hline
\end{tabular}
\caption{
\emph{Same as Tab.~\ref{table : analysis 8}, for the $14$~TeV LHC, with a luminosity of $100$~fb$^{-1}$.}
}\label{table: analysis 14}
\end{table}
We find the best values for $S/\sqrt{B}$ from the analysis in which the full differential cross sections and PDF weights are used. However, the sensitivity from this analysis is only $\sim 1 \%$ larger than that obtained
from the invariant mass only analysis. The smallness of this increase in sensitivity is due to the fact that the relatively hard $p_{\gamma T}$ cut leaves us without much additional sensitivity to $\Theta$, and the other angular variables are not as sensitive, especially given geometric acceptance and finite momentum resolution. We therefore quote results using the invariant mass only analysis, as they should be more robust with respect to systematic uncertainties. In particular, the $m_{\ell\ell\gamma}$ distribution is unaffected by jet radiation, so that corrections to the jet multiplicity and momentum distribution, which are only simulated at leading order in our analysis, will not reduce the sensitivity.
The signal and background cross sections after the optimal cut on $D$ from this invariant mass only analysis are listed in Table~\ref{table : analysis 8} for various Higgs masses at the $8$ TeV LHC. The expected significance with $20$ fb$^{-1}$ integrated luminosity is also provided.
Table~\ref{table: analysis 14} shows analogous information for the $14$ TeV LHC; here the expected significance with $100$ fb$^{-1}$ is shown.
\begin{figure}[t]
\includegraphics[scale=0.4, angle=0]{fig/fig3.pdf}
\caption{\label{fig3}{\em Exclusion limits at the 95\% confidence level on the Higgs production rate times branching fraction to $Z\gamma$ at the
$14$ TeV LHC with an integrated luminosity of $100$ fb$^{-1}$. The green (yellow) band is the 1(2) $\sigma$ contour. The solid red line corresponds to the SM expectation.}}
\end{figure}
In the absence of any signal, we also show the expected exclusion limits on the Higgs production rate in the gluon fusion channel, obtained using the CL$_s$ method \cite{CERN-OPEN-2000-205}, in Fig.~\ref{fig2} for the $8$~TeV LHC with $20$ fb$^{-1}$ of integrated luminosity and in Fig.~\ref{fig3} for the $14$ TeV LHC with $100$ fb$^{-1}$.
{\bf Conclusions} -- We have considered the possibility of searching for a light Higgs boson in its decays to $\ell\bar{\ell}\gamma$ final states via $Z\gamma$. This branching ratio is known precisely in the SM, and deviations from this rate are unambiguous signals of new physics that couples to the Higgs boson, or could even signal the presence of a Higgs imposter~\cite{arXiv:1105.4587}.
We have performed a detailed Monte Carlo study for the $8$ and $14$~TeV LHC. We find that branching ratios for the Higgs decay to $Z\gamma$ of several times the SM rate can be probed at $8$ TeV with $20$ fb$^{-1}$, while the SM rate can be probed at the $14$ TeV LHC with $100$ fb$^{-1}$. For Higgs masses of $125$~GeV and above, a measurement of the Higgs branching ratio to $Z\gamma$ is within reach of the $14$~TeV LHC.
We hope this work inspires experimental efforts in this particular search channel.
{\bf Acknowledgements} -- We benefitted from discussions with B. Auerbach, S. Chekanov, A. Kobach, H. Schellman, and M. Valesco. The model file for Higgs to Z$\gamma$ decays was prepared by R. Vega-Morales; K. Kumar aided in the construction of our figures. I.L. and P.S. acknowledge the hospitality from the CERN TH-LPCC summer institute on LHC physics, where part of this work was performed. This work was supported in part by the U.S. Department of Energy under
contract numbers DE-AC02-06CH11357, DE-FG02-91ER40684, and DE-FG02-84ER40173.
|
2,869,038,153,888 | arxiv | \section{Introduction}
\IEEEPARstart{A}{gricultural} food products naturally vary in their detailed internal structure. Individual samples can be analyzed to assess product quality, predict maturity state and minimize waste. To facilitate early detection of health risks, it is crucial to apply an inspection procedure capable of detecting foreign object inclusions\cite{chen2001multiresolution, kwon2008real, kotwaliwale2014x} and contaminations\cite{van2016segmentation, chuang2011automatic}. This task can be performed for an individual sample by a human expert. However, manual inspection cannot provide reasonable speed in high-throughput cases, such as product processing in a factory.
X-ray imaging is a widely used technique for non-destructive, in-line inspection of agricultural products\cite{el2019applications}. It is commonly applied to food samples while they are being processed on a conveyor belt on the factory floor. One of the important applications of X-ray imaging is the automatic detection of foreign object inclusions that might appear in food products. Examples of such objects are bone fragments in a chicken fillet, plastic debris and bones in fish, and infestation in fruits. One of the well-known approaches to foreign object detection is to acquire two dual-energy X-ray absorptiometry (DEXA)\cite{lopez2018rapid} projections with different values of the X-ray tube voltage. This method is commonly used in medical X-ray imaging for contrast agent detection and body composition analysis.
In-line foreign object detection in food samples on a conveyor belt poses three major challenges for DEXA analysis. Firstly, high-throughput X-ray acquisition leads to a significant noise level that greatly impacts the detection process. Noise reduction methods have to be applied to reduce this effect. Secondly, typical foreign objects in the food industry have X-ray attenuation properties similar to those of the food samples, resulting in low contrast between the foreign object and the main object. Contrast enhancing methods have to be applied to mitigate this effect. Thirdly, variation in the thickness of the main object causes ambiguities when detecting a foreign inclusion in a sample. For example, a thin sample of a chicken fillet with a bone fragment might yield very similar dual-energy measurements to a thicker fillet sample without a bone. Hence, a thickness correction method should be applied to mitigate this effect.
The main contribution of this article is a novel approach to DEXA image pre-processing. For low contrast foreign objects, it is crucial to analyze how the ratio between two projections acquired with different voltages depends on the thickness of the object. This effect is caused by the polychromatic spectrum of the X-ray tube and the unknown shape of the sample. The correlation between two intensities for different voltages is not the same for the defect and the product sample, and this difference can be utilized to distinguish between them. The goal of the DEXA pre-processing is to create an image where the main object has an average intensity of zero whereas the foreign object deviates significantly from zero.
This article presents a three step data processing methodology that uses the DEXA pre-processing model in combination with an active contour algorithm and parameterizable foreign object detection criteria. The methodology is optimized to achieve high detection rates on samples with foreign object inclusions and, perhaps more importantly for industrial applications, low false positive rates on samples without inclusions. Although the results are targeted towards bone fragments in chicken fillets, the processing methodology is generic and can be used to analyse the performance of various industrial scenarios of foreign object detection on a conveyor belt. The DEXA pre-processing is based on general concepts of X-ray measurements and can be applied to other materials. The active contour model is chosen for the segmentation in this work. However, the outcome of the pre-processing step can be used as an input for other segmentation algorithms, including machine learning based methods.
\section{Related work}
Non-destructive inspection of products is an important topic for the food industry. It is applied to a variety of food samples \cite{du2019x}, and every type of object has different details and typical defects to detect. X-ray imaging can be used for the detection of grain infection, fruit infestation, and bones in the fish and poultry industry. X-ray CT is of great interest since it enables volume reconstruction and detection of defects based on the internal structure of the product. However, this approach requires significant time for data acquisition and reconstruction. Discrete tomography based on limited-angle data \cite{pereira2017inline} can be introduced to balance acquisition time and reconstruction quality. This paper concentrates on the radiography approach since it provides the fastest inspection.
Foreign object detection based on a single projection has been studied for different types of food, such as poultry and apples. Multiple algorithms for fruit inspection rely on the shape knowledge that can be estimated with a certain accuracy \cite{van2019combination}. However, knowledge of the shape of the product is not necessary if the foreign object has significantly different X-ray absorption properties. Several algorithms can be applied to enhance foreign object detection by utilizing conventional filtering\cite{mery2011automated}, local contrast enhancement \cite{chen2001multiresolution} and local adaptive thresholding \cite{mathanker2010local}. This approach relies on the assumption that an absorption gradient on the border of the defect can be distinguished from gradients in the main object.
The addition of a second projection with a different tube voltage for better defect detection is a concept that is widely used in medicine. It is applied to body fat measurements \cite{vasan2018comparison} that determine the percentages of different tissues in the human body based on their absorption rates at different tube voltages. This problem may be similar to some types of food inspection (assessment of fat level), but defect detection focuses on small inclusions of different objects. Furthermore, in many applications, material identification is performed using dual-energy CT\cite{martin2015learning}. This approach is more accurate than DEXA because attenuation properties are analyzed for a small region of the internal structure of the object. If the data are limited to a single projection, the measured attenuation distribution depends on both material properties and the unknown shape of the object (thickness along the ray trajectory). In \cite{international2011iaea}, this effect was explained by beam hardening and corrected with a system calibration.
In the poultry industry, the addition of a laser has been considered to obtain a thickness profile of the studied object \cite{tao2000thickness, gleason2002automatic}. Knowledge of the exact thickness profile helps to predict the absorption distribution for the main object if it is homogeneous. Thus, the presence of the defect can be detected by simple thresholding after subtracting the expected distribution from the measured absorption signal. In this study, only X-ray equipment is used to perform detection, and no additional sources of information are used.
Active contour methods can be used for foreign object detection if there is a visible boundary separating a defect and the main object. The level-set methods of Osher and Sethian were used for fan bone location in chicken samples \cite{vachtsevanos2000fusion} based on the combination of X-ray and color images. The main downside of many active contour models is that they rely on edges to perform segmentation. With a high noise level, any edge information becomes unreliable since noise deviations are bigger than the natural absorption gradient that would be observed in a noiseless image. The Chan-Vese energy equation \cite{chan2001active} was used to partition an image into two phases without relying on edge detection. This algorithm is unsupervised and does not require any prior knowledge or machine learning to work.
In this work, the inspection procedure is evaluated based on the detection rate and not segmentation accuracy. Typical studies of such algorithms concentrate on the images with a defect and estimate the accuracy of segmentation. However, the detection rate is more important for most industry applications since the majority of the samples are expected to be without a defect. Such a study was performed, for example, in \cite{van2020nondestructive} for pear fruit inspection. Methods to distinguish between bones and no-bones for different patches of fish images were proposed in \cite{mery2011automated}. An algorithm with a good detection rate might show suboptimal segmentation accuracy for samples with a foreign object. However, a good inspection procedure requires a balance in performance on normal and defective samples.
\section{Methods}
\subsection{General methodology}
The product inspection procedure proposed in this paper consists of several stages. The corresponding flowchart is shown in Fig. \ref{flowchart}. Firstly, two X-ray images of the studied sample are obtained using two different voltages of the X-ray tube. These projections should be aligned, and darkfield and flatfield corrected. Then both images are combined into a single quotient using thickness correction pre-processing. The goal of this step is to create an image where pixels of the main object have close to zero values, and foreign object presence leads to a sufficiently large non-zero intensity. Segmentation is performed on this image to divide it into two phases with different mean intensities. This leads to a set of clusters corresponding to the regions of the foreign object inclusion. Properties of these clusters are used to decide whether the sample should be marked as containing a defect.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.8\linewidth]{diagram.png}
\caption{Diagram of the foreign object inspection procedure. The input is two projections of the sample acquired with different voltages of the tube. The blue curve on the images approximately shows the sample boundary. The segmentation image uses green for pixels that were incorrectly classified as a defect, red for foreign object pixels that were not detected, and yellow for detected pixels of the defect.}
\label{flowchart}
\end{figure*}
\subsection{Dual-energy projection pre-processing}
X-ray imaging can be used to create a projection of the studied sample. The value of every pixel in the resulting projection depends on the integral absorption of the object's matter along the corresponding trajectory. The main object and a foreign object absorb radiation differently, and this leads to differences in pixel intensities. However, the shape of the studied sample is not known in advance. Thus, the pixels in the object region do not have constant intensities, as their values depend on the sample thickness. Two images acquired under different voltages provide additional information since material absorption depends on the X-ray photon energy.
A dual-energy projection of a sample with a defect can be segmented as a 2-channel image. The X-ray absorption rate in a pixel depends on both material attenuation properties and the thickness of the object. The absorption rate is given by
\begin{equation}
M(x) = -\ln \frac{P(x)}{F(x)} = -\ln \frac{\int I_{0}(E) e^{-\kappa(E) L(x)} dE}{\int I_{0}(E) dE},
\label{absorp_eq}
\end{equation}
\noindent where $M(x)$ is absorption rate computed for the detector pixel $x$, $P$ and $F$ are projection and flatfield pixel intensities, $I_0(E)$ is a spectrum of the X-ray tube, $\kappa(E)$ is a material absorption curve, and $L(x)$ is a profile of thickness along the ray. The argument $x$ refers to a detector pixel, and every pixel has a corresponding X-ray beam trajectory from the source to this pixel. The absorption curve $\kappa$ does not depend on $x$ since the material is assumed to be homogeneous. If scattering is not considered, attenuation properties of the material are defined by X-ray absorption, and the attenuation rate can be calculated according to Eq. \ref{absorp_eq}.
If the tube spectrum is monochromatic, then $I_0(E) = I_0 \delta(E - E_0)$, where $\delta(x)$ is a Dirac delta function. Eq. \ref{absorp_eq} can be simplified:
\begin{equation}
M(x) = -\ln \frac{I_0 e^{-\kappa(E_0)L(x)}}{I_0} = \kappa(E_0) L(x).
\end{equation}
\noindent In this case, the two channels of the dual-energy image are linearly correlated. If a homogeneous material is scanned with two monochromatic beams of energies $E_1$ and $E_2$, the corresponding absorption rates are $M_1(x) = \kappa(E_1) L(x)$ and $M_2(x) = \kappa(E_2) L(x)$. The ratio between $M_1$ and $M_2$ is constant, does not depend on the thickness and is defined by the ratio of attenuation coefficients for two X-ray energies. As a result, two different materials can be easily separated using a dual-energy projection.
In most CT applications, a beam is usually polychromatic since it is produced by conventional X-ray tubes. In this case, the attenuation rate depends on material thickness according to Eq. \ref{absorp_eq}. If the thickness $L(x)$ is small, an effective attenuation coefficient $\kappa_{\textrm{eff}}$ can be computed as a first-order approximation\cite{heismann2003density}:
\begin{equation}
\kappa_{\textrm{eff}} = \frac{\int I_0(E) \kappa(E) dE}{\int I_0(E) dE}.
\end{equation}
However, the attenuation rate does not depend linearly on $L(x)$ in general. Thus, a ratio of attenuation rates is no longer a material characteristic; it depends on the thickness $L(x)$.
\begin{figure*}[]
\centering
\includegraphics[width=0.9\linewidth]{fig2.png}
\caption{Correlation between the absorption rates of skeletal muscle for X-ray tube voltages of 40 and 90 kV (a). The ratio between the attenuation rates as a function of thickness; the quotient is not constant due to the polychromatic spectrum (b).}
\label{nist_plots}
\end{figure*}
A nonlinear dependency of attenuation rate on material thickness is visible in the simulated data. For the simulation, the main object is assumed to be skeletal muscle with an attenuation curve taken from the NIST database (ICRU-44 report\cite{griffiths1989tissue}). The tungsten tube spectrum for the voltages of 40 and 90 kV is computed according to the TASMIP data\cite{boone1997accurate}. Fig. \ref{nist_plots}a shows how the attenuation rates for two different voltages of the tube correspond to each other. On this plot, thickness changes from 0.1~mm to 20 cm, and the attenuation rates are calculated according to Eq. \ref{absorp_eq}. The correlation between the two values is almost linear, as if both of them depended linearly on thickness. However, the ratio of the two attenuation rates changes with thickness, as shown in Fig.~\ref{nist_plots}b. This change is not significant, and therefore it is not visible on a correlation plot between the two intensities for different voltages. The ratio dependency on thickness can be calculated as
\begin{equation}
\frac{M_1(x)}{M_2(x)} = \frac{-\ln \int I_{1}(E) e^{-\kappa(E) L(x)} dE + \ln \int I_1(E) dE}{-\ln \int I_{2}(E) e^{-\kappa(E) L(x)} dE + \ln \int I_2(E) dE},
\end{equation}
where $I_1(E)$ and $I_2(E)$ are the tube spectra for the two different voltages.
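As an illustration, the attenuation rate and this ratio can be evaluated numerically. The following Python sketch assumes that the tube spectra and the attenuation curve are available as arrays tabulated on a common energy grid, e.g. from the TASMIP and NIST data used above:
\begin{verbatim}
import numpy as np

def attenuation_rate(I0, kappa, L):
    # I0: tube spectrum on an energy grid; kappa: linear
    # attenuation coefficient (1/cm) on the same grid;
    # L: thickness along the ray (cm).
    P = np.sum(I0 * np.exp(-kappa * L))  # attenuated beam
    F = np.sum(I0)                       # flatfield
    return -np.log(P / F)

# Ratio M1/M2 as a function of thickness:
# R = [attenuation_rate(I0_40kV, kappa, L) /
#      attenuation_rate(I0_90kV, kappa, L)
#      for L in thicknesses]
\end{verbatim}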
In a real scan, the nonlinear dependency is further complicated by the presence of noise. The mass attenuation function is usually unknown for many food industry materials. Therefore, the thickness dependency of the quotient values cannot be predicted beforehand and should be extracted from the data. Experimental measurement produces distributions of $M_1(x)$ and $M_2(x)$ for two different tube voltages. The quotient distribution $R(x)$ can be computed as
\begin{equation}
R(x) = \frac{M_1(x)}{M_2(x)},
\label{quotient_formula}
\end{equation}
and the thickness profile $L(x)$ is unknown. As shown in Fig. \ref{nist_plots}a, the attenuation rate $M(x)$ is almost proportional to $L(x)$. Thus, in a data-driven approach, the dependency of quotient values on thickness can be studied as a dependency of $R(x)$ on $M_2(x)$. It is further replaced by a polynomial approximation since a high noise level makes it impossible to recover the true dependency $R(M_2)$ from the data without any additional information.
The order of the polynomial chosen for the function approximation depends on the data quality. In the experimental data used in this work, a linear approximation of $R(M_2)$ is not sufficient and leads to significant discrepancies between the acquired data and the fit. Higher-order polynomials are prone to noise, and the fit does not always converge as a result. The quadratic approximation was chosen as a middle ground since it provides a sufficiently good representation of the data and has low noise sensitivity. This approximation is given by
\begin{equation}
R(x) = \frac{M_1(x)}{M_2(x)} \approx a M_2^2(x) + b M_2(x) + c,
\label{ratio_fit}
\end{equation}
\noindent where $M_1(x)$ and $M_2(x)$ are pixel values in the respective channels of the experimentally acquired projection, $a$, $b$, and $c$ are fit coefficients. The polynomial regression is performed for all pixels of the object simultaneously.
When the dependency of $R(x)$ on $M_2(x)$ is extracted from the data in the form of polynomial approximation, the effect of thickness dependency can be reduced. After a polynomial fit, the distribution of $R'(x)$ can be computed as
\begin{equation}
R'(x) = R(x) - a M_2^2(x) - b M_2(x) - c.
\label{corrected_quotient}
\end{equation}
$R'(x)$ is a corrected quotient distribution. If the sample consists of a homogeneous material, then $R'(x)$ is close to zero regardless of the thickness. However, the inclusion of a defect with different absorption properties affects both $R(x)$ and $R'(x)$. $R'(x)$ is easier to use for defect detection since the shape variation of the object does not significantly influence this distribution.
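A minimal Python sketch of this correction, assuming flatfield-corrected attenuation images \texttt{M1} and \texttt{M2} and a boolean mask selecting object pixels with sufficient absorption (all names are illustrative):
\begin{verbatim}
import numpy as np

def corrected_quotient(M1, M2, mask):
    # Quotient R = M1/M2 on object pixels, quadratic fit
    # of R against M2, and subtraction of the fit to
    # obtain the corrected quotient R' defined above.
    R = np.zeros_like(M1)
    R[mask] = M1[mask] / M2[mask]
    a, b, c = np.polyfit(M2[mask], R[mask], deg=2)
    Rp = np.zeros_like(R)
    Rp[mask] = R[mask] - (a * M2[mask]**2
                          + b * M2[mask] + c)
    return Rp
\end{verbatim}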
\subsection{Pre-processing of the experimental data}
A sample of a chicken fillet containing a fan bone was scanned using a CMOS detector with a CsI(Tl) scintillator (Dexela1512NDT)\cite{Flexray2020}. The X-ray source was a microfocus X-ray tube with voltages of 40 and 90 kV. The piece of fillet was wrapped in a plastic bag and placed on a holder. This experimental setup imitates a top view similar to the typical data from a conveyor belt.
The same sample was measured with different exposure times to illustrate the impact of the detector noise. Fig. \ref{dual_lowexp}a shows a quotient image computed as a division of two projections acquired with an exposure time of 0.5 seconds. Fig. \ref{dual_lowexp}b plots the thickness dependency of the quotient values for the experimental data, corresponding to Fig. \ref{nist_plots}b for the simulated data. Values of $M_2(x)$ are used instead of $L(x)$ since the thickness profile of the object is unknown. Pixels of the defect are marked with a different color to highlight that the noise variance is bigger than the difference in spectral properties between the sample and the defect. Nevertheless, the bone can be located by a human expert based on the quotient image since the defect pixels are located near each other and form a region. If the same product is scanned with a higher exposure time, the level of statistical noise becomes lower, and the foreign object is easier to locate. Figs. \ref{dual_highexp}a and \ref{dual_highexp}b show the quotient image and quotient plot for the measurement with an exposure time of 5~s. The high noise case is more difficult, and it will be the main focus of the next subsections.
\begin{figure*}[]
\centering
\includegraphics[width=0.75\linewidth]{fig3.png}
\caption{Sample scan with low exposure (0.5~s per projection): quotient image computed as a division of two projections (a) and the dependency of quotient values on the single projection intensity (b).}
\label{dual_lowexp}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.75\linewidth]{fig4.png}
\caption{Sample scan with high exposure (5~s per projection): quotient image computed as a division of two projections (a) and the dependency of quotient values on the single projection intensity (b).}
\label{dual_highexp}
\end{figure*}
Even if the foreign object is visible to a human expert, its detection might still be non-trivial for a classical algorithm. Thickness dependency of the ratio values leads to natural gradients in the quotient image. The introduction of the corrected quotient $R'(x)$ helps to reduce this effect and makes the image easier to segment. The thickness correction is shown in Figs. \ref{pre-processing}a and \ref{pre-processing}b, which show the quotient and the corrected image. After this procedure, the quotient value is close to zero in most pixels corresponding to the main object. Deviation from zero might be caused by detector noise, systematic errors of the experimental setup, and different defects. The presence of a foreign object changes a pixel value, and the difference depends on its thickness. The main task of the segmentation algorithm applied to Fig. \ref{pre-processing}b is to locate large clusters of nonzero pixels while excluding noisy outliers. With significant noise influence, foreign object location is difficult to perform on a pixel level. Thus, it is important to use spatial information.
The largest noise level in the quotient image is usually found near the edges of the main object. In those regions, the quotient intensity is calculated as a ratio of two small numbers, which leads to a high relative error. This effect can be reduced to improve detection accuracy. It can be assumed that the variance of the image values is mainly defined by the statistical noise and depends on the thickness. Therefore, a standard deviation for different thickness values should be computed from the data. The simplest approach to this problem is to divide the image into several regions corresponding to different bins of the $M_2$ intensity. If the intensity values start from $m$ and the bin size is $\Delta$, the intensity bins are $[m + i \Delta, m + (i+1) \Delta], i \in \mathbb{N}$. For the $i$-th intensity bin, the mean value $\overline{R'_i}$ and standard deviation $\sigma_i$ can be computed from the values of $R'(x), \forall x: M_2(x) \in [m+i\Delta; m+(i+1)\Delta]$. Then the normalized corrected quotient $N(x)$ can be calculated as
\begin{equation}
\begin{aligned}
N(x) {} &= \frac{R'(x) - \overline{R'_i}}{\sigma_i}, \\
&\forall x: M_2(x) \in [m+i\Delta; m+(i+1)\Delta].
\end{aligned}
\label{normalied_quotient}
\end{equation}
Fig. \ref{pre-processing}c shows the normalized quotient based on the corrected image in Fig. \ref{pre-processing}b.
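A minimal Python sketch of this per-bin normalization, with \texttt{Rp} the corrected quotient, \texttt{M2} the single-projection attenuation image, and \texttt{mask} the object pixels (names are illustrative):
\begin{verbatim}
import numpy as np

def normalize_quotient(Rp, M2, mask, delta=0.1):
    # Bin object pixels by M2 intensity (bin width delta),
    # then subtract the per-bin mean and divide by the
    # per-bin standard deviation, as defined above.
    vals, m2 = Rp[mask], M2[mask]
    bins = np.floor((m2 - m2.min()) / delta).astype(int)
    out = np.zeros_like(vals)
    for i in np.unique(bins):
        sel = bins == i
        sigma = vals[sel].std()
        if sigma > 0:
            out[sel] = (vals[sel] - vals[sel].mean()) / sigma
    N = np.zeros_like(Rp)
    N[mask] = out
    return N
\end{verbatim}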
\begin{figure*}[]
\centering
\includegraphics[width=0.9\linewidth]{fig5.png}
\caption{Stages of the scan data pre-processing: quotient image computed as a division of two channels as in Eq. \ref{quotient_formula} (a), quotient after thickness correction defined by Eq. \ref{corrected_quotient} (b), quotient after correction and normalization computed according to Eq. \ref{normalied_quotient} (c). The sample is a chicken fillet with a fan bone scanned with an exposure time of 0.5~s.}
\label{pre-processing}
\end{figure*}
\subsection{Quotient segmentation}
Thickness correction pre-processing described in the previous section transforms two-channel dual-energy projections into a single normalized quotient. In this work, an active contour model without edges is used to achieve good segmentation quality with a high noise level. The Chan-Vese method is a variational segmentation algorithm inspired by the Mumford–Shah model. It separates two homogeneous phases in the image by minimizing the energy functional over the phase boundary and their average values
\begin{equation}
\begin{aligned}
F ={} & \lambda_1 \int_{\Omega_1} (N(x) - c_1)^2 dx + \lambda_2 \int_{\Omega_2} (N(x) - c_2)^2 dx + \\
& + \mu |\partial \Omega_1| + \nu |\Omega_1|,
\end{aligned}
\label{cv_energy}
\end{equation}
where $\Omega_1$ and $\Omega_2$ are the regions segmented as object and background, $c_1$ and $c_2$ are the average pixel values in these regions, $|\partial \Omega_1|$ is the boundary length of $\Omega_1$, and $|\Omega_1|$ is the area of $\Omega_1$. In the foreign object location problem, the background refers to the main object, and the object refers to the foreign object. The minimization problem is solved by applying the level-set technique: the phase boundary is defined as the zero-level of a level-set function. The values of $c_1$ and $c_2$ are recalculated on every step depending on the current phase boundary.
The segmentation outcome is implicitly controlled by the ratios between $\lambda_1$, $\lambda_2$, $\mu$ and $\nu$. The first two terms favor a similarity between pixel values and region average intensity regardless of the spatial properties. The last two terms mitigate the effect of noisy pixels on the segmentation. Low values of $\mu$ and $\nu$ would transform the segmentation into thresholding with minimal removal of outliers (Fig. \ref{segmentation_examples}a). Different values of the penalty weights lead to different boundary detection, noise sensitivity, and overall accuracy of the algorithm. Examples of such effects are shown in Figs. \ref{segmentation_examples}b and \ref{segmentation_examples}c. The biggest strength of the active contour approach is that its parameters have an interpretation and can be related to the image properties, such as object intensities and noise distribution.
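For readers who wish to experiment, the scikit-image library ships an edge-free Chan-Vese implementation. It exposes the length penalty $\mu$ but not the area term $\nu$, so the following sketch only approximates the energy in Eq. \ref{cv_energy}; the implementation actually used in this work is described in the Results section:
\begin{verbatim}
import numpy as np
from skimage.segmentation import chan_vese

def segment_defects(N, T_init=5.0, mu=4.0):
    # Two-phase Chan-Vese segmentation of the normalized
    # quotient N. The level set is initialized by simple
    # thresholding at T_init, as in the text below.
    init = np.where(N >= T_init, 1.0, -1.0)
    return chan_vese(N, mu=mu, lambda1=1.0, lambda2=1.0,
                     tol=1e-4, init_level_set=init)
\end{verbatim}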
\begin{figure*}[]
\centering
\includegraphics[width=0.9\linewidth]{fig6.png}
\caption{Examples of segmentations with different penalty weights applied to the pre-processed quotient shown in Fig. \ref{pre-processing}c. Panel (a) shows a segmentation with low penalty weights ($\mu = 1$, $\nu = 1$), where many noisy outliers are marked as bone fragments. This effect can be reduced by increasing the weights, as shown in panel (b), corresponding to $\mu = 5$, $\nu = 1$. High penalties, such as $\mu = 20$, $\nu = 5$ in panel (c), might lead to a conservative segmentation that excludes a significant fraction of the bone.}
\label{segmentation_examples}
\end{figure*}
As a variational algorithm, the Chan-Vese method has a few implicit parameters that influence the iterative process. The initial state of the level-set function in the described implementation is defined by simple thresholding
\begin{equation}
\phi(x) =
\begin{cases}
1 & \text{if}\quad N(x) \geq T_{init}\\
0 & \text{if}\quad N(x) < T_{init} \\
\end{cases},
\end{equation}
where $T_{init}$ corresponds to a significant deviation from zero. Thus, pixels with values higher than the threshold are likely to be part of the defect region. On every iteration of the segmentation algorithm, the level-set is recalculated to better minimize the segmentation energy. If the increment norm is smaller than a certain tolerance value, the algorithm has converged. The iterative process is also stopped if it takes more iterations than a certain maximal number. These parameters mainly influence the speed and convergence of the method and define the final segmentation in case there are multiple local minima.
Accuracy of the segmentation can be evaluated if a ground truth (correct segmentation of the input) is known for every sample. In this paper, the F1-score is chosen as the accuracy metric and is calculated according to the formula
\begin{equation}
\textrm{F1-score} = \frac{\textrm{TP}}{\textrm{TP} + 0.5 (\textrm{FP} + \textrm{FN})},
\label{f1_formula}
\end{equation}
\noindent where TP is the number of True Positive pixels (pixels of the defect that were correctly identified), FP the number of False Positive pixels (pixels of the main object that were falsely classified as a defect), and FN the number of False Negative pixels (pixels of the defect that were missed). This metric is commonly used in papers about foreign object segmentation. However, it does not evaluate performance on the samples without a foreign object.
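A direct implementation of this score from boolean defect masks (a Python sketch; it assumes at least one positive pixel exists so the denominator is nonzero):
\begin{verbatim}
import numpy as np

def f1_score(pred, truth):
    # Pixel-level F1-score from boolean defect masks.
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    return tp / (tp + 0.5 * (fp + fn))
\end{verbatim}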
\subsection{Post-processing and foreign object detection}
The main challenge for the active contour segmentation lies in images without defects. A neighborhood of a noisy pixel with a significant deviation from zero can be considered as a part of the defect region, even if there is no foreign object in the sample. This problem can be solved by adjusting the penalty weighting coefficients $\mu$ and $\nu$. If a region $\Omega_{noisy}$ with mean intensity $N_{noisy}$ is considered as a part of the main object, the energy is increased by approximately $(N_{noisy} - c_{main})^2|\Omega_{noisy}|$ (for $\lambda_2 = 1$) since the mean intensities are different. This energy increment can be avoided if this region is classified as a foreign object. In this case, the energy is modified by the penalty terms $\mu|\partial\Omega_{noisy}| + \nu|\Omega_{noisy}|$. The decision about including or excluding the region $\Omega_{noisy}$ is based on the ratio between these two terms. It is important to note that the Chan-Vese algorithm is an iterative method. Therefore, the result depends not only on the energy terms but also on the initial level-set, regularization parameters, and convergence speed, among other things. Nevertheless, the penalty weights should be raised to a certain level to exclude noisy clusters in most cases.
If a noise level is sufficiently high, fine-tuning parameters for all types of inputs is a challenging problem. Significant noise fluctuations in the samples without foreign objects require penalty weights to be high. On the other hand, the accuracy of defect segmentation becomes worse since many pixels on the foreign object boundary are included in the main object. A post-processing procedure is introduced as an additional way to exclude noisy pixels from the foreign object region and make the algorithm more robust. A segmented defect region can be divided into clusters of neighboring pixels. For each cluster, the mean intensity and size can be computed. Segmentation quality can be enhanced if a certain threshold on the cluster size is set, and small clusters are ignored. The main reason for employing this strategy is the existence of noisy pixels with an intensity that is several times higher than the average defect intensity. If penalty weights are adjusted to exclude such pixels, small foreign objects might be excluded as well.
In the proposed methodology, post-processing is the removal of segmented clusters with a size smaller than a certain number of pixels. If no clusters remain after post-processing, the sample is marked as normal. Otherwise, it is considered that the sample contains a defect. For every sample in the experimental dataset, it is known whether it contains a foreign object or not. Thus, it is possible to compute a confusion matrix and F1-score for the inspection procedure. In this case, accuracy is measured on a sample level, unlike the segmentation precision. These metrics are more important for the algorithm performance evaluation since they include all possible cases. If the segmentation is fine-tuned to achieve the best segmentation accuracy, it might become too sensitive to noise. Therefore, due to noise fluctuations, it will classify normal samples as containing a defect.
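A minimal Python sketch of this post-processing and the final decision, using SciPy's connected-component labelling; the cluster size threshold of 30 pixels used in our experiments is given in the Results section:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def contains_defect(seg, min_pixels=30):
    # Label connected clusters in the boolean segmentation
    # mask, drop clusters below min_pixels, and report
    # whether any cluster survives.
    labels, n = ndimage.label(seg)
    sizes = np.bincount(labels.ravel())[1:]  # skip background
    return bool(np.any(sizes >= min_pixels))
\end{verbatim}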
\section{Results}
\subsection{Dataset description}
The thickness correction procedure was tested on scans of chicken fillets on a conveyor belt. The images were acquired with a line detector since these are commonly used in industrial setups. The majority of the fillet samples contained a bone that should be detected as a foreign object. Every sample was scanned four times with different positions on the belt. There are 100 images with a fan bone, 100 images with a large rib bone, and 96 images with a small rib bone. In 192 scans the chicken fillet does not contain a foreign object. These types of bone differ in average size, shape, and typical position in a fillet. The dataset was semi-manually segmented to create an approximate ground truth for accuracy estimation.
The biggest challenge of the dataset is the small difference in attenuation curves between the chicken fillet and the bone. Both are organic materials and contain almost the same chemical elements in different ratios. As a result, the dependency of the attenuation coefficients on energy is similar, and therefore the efficiency of spectral imaging is limited. The difference in spectral properties is visible but of the same order of magnitude as the noise level.
\begin{table}[]
\caption{Comparison of mean pixel intensity and foreign object sizes for different types of bone}
\label{stat_table}
\begin{tabular}{|l|l|l|}
\hline
Bone class & Pixel value & Bone size \\ \hline
Fan bone & 3.7 $\pm$ 2.6 & 370 $\pm$ 120 \\ \hline
Large rib bone & 3.1 $\pm$ 2.5 & 290 $\pm$ 70 \\ \hline
Small rib bone & 3.0 $\pm$ 2.6 & 160 $\pm$ 35 \\ \hline
\end{tabular}
\end{table}
The ground truth for the dataset was used to calculate the average properties of the different bone types. Table \ref{stat_table} compares, for every bone class, the mean pixel values after thickness correction and normalization and the bone sizes.
\subsection{Thickness correction}
The described pre-processing was applied to the dataset to compute normalized corrected quotient images. The thickness dependency for every sample was fitted with a second-degree polynomial. If the absorption rate in a pixel was lower than a threshold of 0.2, the pixel was ignored both in the fit and in the division. The size of the intensity bin $\Delta$ was set to 0.1. Fig. \ref{dataset_examples} shows projections of different samples from the experimental dataset, the corresponding normalized corrected quotient for every case, the ground truth, and the thickness dependency plot.
The dataset contains a variety of examples with different types and sizes of bone fragments and normal samples without defects. Cases (a)--(d) show the effect of thickness correction on samples with a foreign object. The defect might be visible on a single projection, even if it has a small size, such as the shattered rib bone in sample (d). However, the contrast on a single projection depends significantly on the exact location of the foreign object. If the shape of the main object causes intensity gradients near the defect, it might be missed without additional information. The main benefit of thickness correction lies in removing intensity changes corresponding to the main object and highlighting the defect location. The thickness dependency plot shows that quotient values corresponding to the foreign object often have a similar deviation from zero as noisy outliers. This corresponds to the low exposure scanning procedure discussed previously.
Example (e) illustrates the main advantage of using thickness correction for DEXA data. Both projections contain a region with a well-visible border that is not a foreign object. However, the correlation between the images does not correspond to a material with attenuation properties different from the main object. In the corrected quotient, this region looks the same as other parts of the sample. The thickness dependency plot does not contain any significant outliers either. Such intensity changes might appear on samples with and without foreign objects. Therefore, it is crucial to remove them in order to prevent a high false positive rate.
Some projections in the dataset contain systematic experimental effects that can look like a foreign object after thickness correction. In sample (f), a set of pixels near the border has a high quotient value after correction. However, no bone is present in the object in this case. Automatic data acquisition might lead to misalignment artifacts, small movements of the object between scans, and changes of shape. These artifacts have quotient values similar to real foreign objects and might be recognized as such.
\begin{figure*}[]
\centering
\includegraphics[width=0.8\linewidth]{examples.png}
\caption{Different samples from the experimental dataset. For every object, two projections acquired with different voltages, the normalized corrected quotient, the segmented image, and the thickness dependency plot are shown. Sample (a) contains a fan bone, sample (b) a large rib bone, and samples (c) and (d) small rib bones. Samples (e) and (f) do not contain defects. Boundaries of the samples are approximately drawn as blue curves, but they are not used during the inspection procedure. The defect location is marked in orange and corresponds to the ground truth images from the dataset. The ground truth in sample (d) is partially incorrect and does not include the second part of the shattered bone.}
\label{dataset_examples}
\end{figure*}
\subsection{Segmentation accuracy}
Normalized quotient images were used as input for the Chan-Vese segmentation algorithm. The method implementation was based on the C++ code by Pascal Getreuer \cite{getreuer2012chan}. Furthermore, a Python wrapper was written and used as an interface. In the results, the default algorithm parameters are: fit weights $\lambda_1 = \lambda_2 = 1$, time step $dt = 1$, convergence tolerance $tol = 10^{-4}$, maximal number of iterations $N_{max} = 200$, Heaviside regularization $\epsilon = 1$, curvature regularization $\eta = 10^{-8}$, and initial level-set threshold $T_{init} = 5$. Accuracy of the algorithm is studied for different values of $\mu$ and $\nu$ since they significantly influence the outcome and have a geometrical interpretation. The post-processing pixel count threshold is set to 30 pixels. This means that defect clusters containing fewer than 30 neighboring pixels were removed from the final segmentation.
In this subsection, the segmentation accuracy is estimated on a pixel level for the samples containing a foreign object. For every object, the resulting segmentation is compared with the ground truth to count the number of true positive, false positive, and false negative pixels. The F1-score is calculated according to Eq. \ref{f1_formula}. The values of the F1-score are shown in Fig.~\ref{f1scores}a for different combinations of penalty weights. The value of the metric is averaged over all images with foreign objects. The best segmentation accuracy is achieved with $\mu=14$ and $\nu=2$.
\begin{figure*}[]
\centering
\includegraphics[width=0.8\linewidth]{fig8.png}
\caption{Dependency of F1-score on length penalty $\mu$ and area penalty $\nu$ for different tasks: image segmentation (a) and foreign object detection (b). For segmentation, the F1-score is computed using a ground truth segmentation known for every sample and averaged over all images from the dataset containing a defect. For detection, the metric is calculated on a sample level for the entire dataset consisting of the objects with and without a bone.}
\label{f1scores}
\end{figure*}
As explained in Section~4, the dataset contains different types of bone as a defect. Values in Fig.~\ref{f1scores}a are averaged over all defect types. Thus, the figure does not show how the defect class affects segmentation accuracy. The corresponding results for every type of bone are shown in Fig.~\ref{class_comparison}. The dependencies of the F1-score on the penalty weights are similar for all defects. Every bone class has a combination of Chan-Vese parameters that achieves the best segmentation accuracy for that defect type, and these parameters might differ from each other. However, the best instances for a single defect class also perform well for the whole dataset, as shown in Table~\ref{comparison_table}. Thus, all types of defects can be segmented with the same algorithm settings. Different ratios of bone types in the dataset would affect the algorithm performance, but not significantly.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\linewidth]{class_comparison.png}
\caption{Defect segmentation F1-score for every class of defect in the dataset: fan bone, large rib bone, small rib bone. The F1 value is averaged over all samples with a corresponding defect type.}
\label{class_comparison}
\end{figure*}
\begin{table}[]
\caption{Comparison of the best Chan-Vese parameters for different classes of defects. F1-score is separately calculated for all images with the same bone class and for all samples with a defect.}
\label{comparison_table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Defect class & $\mu$ & $\nu$ & Single class F1 & F1 for all defect classes \\ \hline
Fan bone & 20 & 2 & 77$\%$ & 72$\%$ \\ \hline
Large rib bone & 14 & 2 & 75$\%$ & 74$\%$ \\ \hline
Small rib bone & 14 & 2 & 70$\%$ & 74$\%$ \\ \hline
\end{tabular}
\end{table}
\subsection{Detection rate}
As highlighted previously, the detection procedure should also be evaluated on the samples without a foreign object. The decision-making based on the segmentation and post-processing is tested on the whole dataset: 296 images with different types of bone and 192 images without a defect. The test results contain the number of images with a bone where the presence of a defect was detected and the number of boneless images that were correctly identified as empty. These values correspond to the true positive and true negative cases. The algorithm accuracy is evaluated using the F1-score.
Fig. \ref{f1scores}b shows the dependency of the F1-score on the Chan-Vese energy equation penalties $\mu$ and $\nu$. High accuracy (more than 90\%) can be obtained with multiple combinations of parameters; the best value of the F1-score is achieved with $\mu=4$ and $\nu=2$. The confusion matrix for this instance of the inspection procedure is shown in Table \ref{confusion_matrix}. The algorithm correctly marks 97\% of the normal samples as not containing a foreign object. Chicken fillets with a bone are successfully identified in 92\% of cases. On average, segmentations with the best detection rate converge after 150 ms.
{
\begin{table}
\caption{Confusion matrix for the inspection procedure with Chan-Vese parameters $\mu=4$ and $\nu=2$}
\label{confusion_matrix}
\begin{tabular}{@{}cc cc@{}}
\multicolumn{1}{c}{} &\multicolumn{1}{c}{} &\multicolumn{2}{c}{Predicted} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{Defective} &
\multicolumn{1}{c}{Normal} \\
\cline{2-4}
\multirow[c]{2}{*}{\rotatebox[origin=tr]{90}{Actual}}
& Defective & 271 & 25 \\[1.5ex]
& Normal & 5 & 187 \\
\cline{2-4}
\end{tabular}
\end{table}
}
\section{Discussion}
\subsection{Thickness correction}
The thickness correction pre-processing is performed on the experimental data and does not rely on prior knowledge about the samples. The dependency of the quotient value on the thickness of the main object material is estimated as an average function for pixels of the projection. The theoretical foundation of the pre-processing implies that this function can be predicted with sufficient knowledge of the inspection system. As a result, it can be used to construct more precise measurement systems and achieve better contrast between the main and foreign objects. However, in the scope of this article, a heuristic approach was chosen to show the applicability of this method to a wide range of data.
One of the major downsides of the quotient images after thickness correction is the resulting significant level of noise. If the noise in single-energy projections follows a Gaussian distribution, the quotient noise has a ratio distribution. Thus, significant outliers from the mean value are more likely than for a Gaussian. The exact properties of the resulting distribution depend on the mean value and variance of both original distributions. In practice, this means that strong noise appears frequently, especially in the boundary regions of the image.
Several approaches were considered to mitigate the high noise level. Firstly, normalization is applied to the corrected quotient. On the boundary, large deviations from zero are divided by the correspondingly large standard deviation. A downside of this procedure is that the variances are computed under the assumption that the distribution is Gaussian. Secondly, not all boundary regions are used for the quotient computations. If a pixel value is less than a certain threshold, the pixel is considered to be a part of the background and ignored. This boundary cut might remove some parts of the bone from the quotient if it is located near the main object border. Nevertheless, a smaller number of noise outliers leads to a better segmentation in general.
In this work, the normalized corrected quotient images are used as input for a segmentation algorithm. However, this does not mean that other combinations of the two projection channels should not be considered. In the data-driven approach, the quotient is used as a simple combination of the two channels that has an additional meaning for a monochromatic beam. With more knowledge about the inspection system and product materials, a better combination of the two channels might be constructed. The main focus of the pre-processing procedure is to remove the thickness dependency; additional steps can be considered to improve defect contrast. Normalization of the quotient can be viewed as an implicit introduction of a Gaussian noise model into the active contour framework.
\subsection{Active contour segmentation}
The Chan-Vese method operates well even with noisy data, and a high noise level is inevitable for the conveyor belt product inspection. The energy that is minimized over two phases in the image is a formal way of defining a connected cluster of points in the presence of high noise. Thus, the segmentation algorithm determines bone borders consistently based on the objective criteria. At the same time, ground truth made by a human operator might be more subjective. When a discrepancy between the segmentation and ground truth happens, it can be caused by many factors. On one hand, the Chan-Vese method might not converge or reach a local minimum, and thickness correction might produce a very noisy input. On the other hand, the ground truth in a single sample can be inconsistent with other data.
The active contour models contain several parameters that do not have a physical meaning. The maximal number of iterations, the convergence tolerance, and the time step influence the speed of the method and the resulting segmentation. The optimal choices of these parameters balance computational speed and algorithm accuracy. For the detection procedure, it is crucial to get the result as fast as possible. Therefore, a large time step and a relaxed convergence tolerance can be considered.
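A minimal sketch of such a segmentation call is shown below, using the Chan-Vese implementation from \texttt{scikit-image}. Note that this implementation exposes a length penalty \texttt{mu} but no separate area penalty, so it only approximates the energy used in this work; the parameter values are placeholders rather than our tuned settings, and recent versions name the iteration cap \texttt{max\_num\_iter} (older releases use \texttt{max\_iter}).
\begin{verbatim}
import numpy as np
from skimage.segmentation import chan_vese

def segment_defect(quotient_img, mu=0.25, tol=1e-3, max_iter=200, dt=0.5):
    """Two-phase Chan-Vese segmentation of a corrected quotient image."""
    img = quotient_img.astype(float)
    # Rescale to [0, 1]; Chan-Vese assumes a bounded intensity range.
    img = (img - img.min()) / max(np.ptp(img), 1e-12)
    mask = chan_vese(img, mu=mu, tol=tol, max_num_iter=max_iter, dt=dt,
                     init_level_set='checkerboard')
    return mask  # boolean array: True inside the segmented phase
\end{verbatim}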
Fig. \ref{f1scores} shows that good accuracy can be achieved with a range of algorithm parameters. The best penalty weight pair does not lead to a significantly better F1-score than its neighborhood in the parameter space. Thus, a search for the optimal inspection settings converges quickly. In these plots, the grid step for $\mu$ and $\nu$ was set to 2. A smaller step was not chosen to prevent overfitting to the experimental data. The form of the F1-score dependency on $\mu$ and $\nu$ implies that a similar result can be achieved with a broader dataset.
\subsection{Defect detection}
The active contour model in the described methodology is defined for two homogeneous phases according to Eq. \ref{cv_energy}. This means that the method is well-suited for cases when a single defect is present in the main object. Two foreign objects of significantly different classes (e.g. a bone and a plastic piece) might be segmented incorrectly, since both should be marked as the defect phase while their average intensities differ significantly. This problem can be solved with a change of the active contour energy and the introduction of several level-set functions. However, in practice, it is unlikely that multiple defects of different classes appear in one sample, since even a single foreign object is expected to appear rarely. At the same time, the introduction of more defect types in the energy equation might decrease the detection accuracy, since the majority of samples contain no foreign objects. In the experimental dataset, some images contain a shattered bone, and both pieces are segmented properly since they correspond to the same defect class.
The Chan-Vese algorithm was chosen as the segmentation method in this paper because it performed better than other well-known techniques, such as thresholding and watershed algorithms. However, supervised machine learning methods\cite{ruger2019automated} can be considered as an alternative to classical algorithms. With a relatively small dataset, such as the one used in this study, it is possible to train a neural network that achieves comparable and sometimes better accuracy. Nevertheless, unsupervised algorithms provide explainable results and can be used to improve the experimental setup. With the thickness correction pre-processing, every pixel value can be computed using the physical properties of the materials, the tube parameters, and the detector model. The penalty weights can be interpreted as a balance between the defect signal and the noise level of the image. This information can be used to improve the scanning protocol, evaluate the cost efficiency of different detectors for a certain task, and estimate the size limits of the detectable foreign objects. Machine learning lacks this level of result explainability and should be applied to an already optimized experimental setup.
The described methodology is not limited to the food industry. The main novelty of the inspection procedure is the thickness correction pre-processing. The effect of thickness on quotient values is relevant for any dual-energy single projection measurement. The correction is not necessary if the defect has significantly different attenuation properties (e.g. detection of metal pieces in luggage). Nevertheless, the thickness is important to account for if the data contain a high noise level and low-contrast foreign objects.
The detection task with optimal parameters achieves 95\% accuracy on the experimental dataset. Five samples out of 192 were misclassified as containing a bone. Some of these can be attributed to systematic experimental errors, such as misalignment or sample deformation. In other cases, a noisy cluster is not segmented as a defect if the convergence tolerance and the maximal number of iterations are changed. In 25 samples out of 296, a bone is not detected. It is theoretically possible to achieve 100\% accuracy for defect detection on this limited dataset. The main limiting factor is detector noise, which requires strict length and area penalties for the segmentation. Furthermore, different materials present in the factory environment, such as blood droplets or poultry fat, can be recognized as a foreign object and lead to false positives.
It is important to note that the best Chan-Vese parameters for defect detection are different from those that provide the best segmentation accuracy. For a binary outcome, it is not important how precisely the bone is located on the image. The segmentation method often marks only the central part of the bone and ignores its boundary. At the same time, the best detection parameters lead to better performance in difficult cases: the presence of small bones that are indistinguishable from noise, and significant noise fluctuations that look similar to small bones. Furthermore, the execution time is lower for the detection procedure, which is important in the industrial environment.
\section{Conclusion}
Thickness correction pre-processing proposed in this work enables accurate detection of foreign objects in the dual-energy projections of the conveyor belt samples. The described methodology does not rely on a good contrast between a defect and the main object on a single projection. Instead, it utilizes the difference in attenuation properties that can be detected with a dual-energy acquisition. The active contour segmentation algorithm allows analyzing data with a significant noise level if a proper energy model is chosen. The performance of the inspection is evaluated based on the ability to distinguish samples with and without a foreign object. It was shown that 97\% of samples without a defect can be correctly identified while maintaining a 95\% accuracy of the foreign object detection on the experimental dataset. The proposed approach does not require prior knowledge about the samples, and necessary material properties are extracted directly from the projections. The methodology is tested on bone detection in chicken filets. However, the thickness correction procedure does not rely on any specific properties of this problem and can be extended for other foreign object detection tasks. Different reasons for segmentation imperfection and possible ways to improve the current implementation are discussed.
\section*{Acknowledgment}
The authors acknowledge financial support from the Netherlands Organisation for Scientific Research (NWO), project number 639.073.506. The authors would like to thank Meyn Food Processing Technology B.V. for providing the experimental dataset and Sophia Bethany Coban for assistance with laboratory data acquisition.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
|
2,869,038,153,889 | arxiv | \section{Introduction}\label{intro}
Inelastic collisions are processes in which a part of the initial kinetic energy
of the colliding bodies is distributed into the internal degrees of freedom.
Although it is commonly believed that the relative speed of two colliding bodies
decreases after a collision,
an event in which the relative rebound speed becomes larger
than the relative colliding speed does not contradict
the law of energy conservation.
Such an anomalous impact is prohibited by the second law of thermodynamics,
according to which the energy does not return
to the macroscopic degrees of freedom
once a part of it has been distributed into the microscopic ones.
For classical colliding bodies at the initial temperature $T$,
the second law may be written as
\begin{equation}\label{eq1}
\frac{1}{2} \mu {\tilde V}^{2}+O(T)\ge \frac{1}{2} \mu {\tilde V}^{'2}
\end{equation}
where $\mu$ is the reduced mass of the colliding bodies,
${\tilde V}$ and ${\tilde V}^{'}$ are respectively
the relative colliding speed and the relative rebound speed.
~\cite{maes-tasaki}
The second term in the left hand side is negligible
in the case of macroscopic bodies. Thus, in the case of
head-on collisions of macroscopic bodies,
the restitution coefficient,
$e \equiv {\tilde V}^{'}/{\tilde V}$,
becomes less than unity~\cite{tasaki-jsp} while
the restitution coefficient projected
into the normal direction of the collision
can exceed unity in the case of oblique collisions.
~\cite{louge,kuninaka_prl}
The second term on the left-hand side of eq.~(\ref{eq1}), however,
is not negligible for
small systems. In this paper we investigate the statistical properties of
``super-elastic'' collisions of small colliding bodies,
in which $e$ is larger than unity in head-on collisions.
``Super-elastic'' collisions may be related to the fluctuation
theorem~\cite{evans,evans_cohen_morris,gallavotti,crooks,PT_FT}
in which the probability of the entropy production is related
to the entropy absorption as
$\frac{P(S^{\tau}=A)}{P(S^{\tau}=-A)} = \exp\left(\tau A\right)$,
where $P(S^{\tau}=A)$ is the probability distribution of observing
the time averaged entropy production $S^{\tau}$ during time interval
$\tau$ lies in the range $A$ to $A+dA$.
Thus, it might be important to study the relation between
the fluctuation theorem and ``super-elastic'' collisions
due to large thermal fluctuations.
There are numerical and theoretical studies for coalescence,
scattering, and fragmentation of colliding nanoclusters.
~\cite{kalweit, full, clus_col} Most of the low-speed collisions of
nanoclusters cause coalescence.
However, some stable clusters such as fullerene can keep their form
in collisions. ~\cite{full}
Therefore, we focus on the properties of small stable clusters for which
the interaction between two clusters is dominated
by a repulsive force.
In this paper, we perform the molecular dynamics simulation of
colliding repulsive clusters to investigate the effect of
thermal fluctuations. The organization of this paper is as follows.
In the next section, we introduce our model. In \S \ref{results},
we investigate the relation between the restitution coefficient
and colliding speed and that between the compressive force and
the deformation. We also compare our numerical results
with the fluctuation theorem.
Sections \ref{discussion} and \ref{conclusion} are devoted to
the discussion and the conclusions of our results, respectively.
\section{Model}\label{model}
\begin{wrapfigure}{l}{6.6cm}
\begin{center}
\includegraphics[width=.25\textwidth]{fig2.eps}
\end{center}
\caption{Numerical model of colliding clusters. Each of them is composed of
682 ``atoms'' which are bound together by the Lennard-Jones potential.}
\label{fig1}
\end{wrapfigure}
Let us introduce our numerical model. Our model is composed of
two identical clusters. Each of them is spherically cut from
a face-centered cubic (FCC) lattice and consists of $682$ ``atoms''.
The clusters have facets due to the small number of ``atoms''
(Fig.~\ref{fig1}). All the ``atoms'' in each cluster are bound
together by the Lennard-Jones potential $U(r_{ij})$ as
\begin{equation}
U(r_{ij})=4\epsilon\left\{\left(\frac{\sigma}{r_{ij}}\right)^{12}-
\left(\frac{\sigma}{r_{ij}}\right)^{6}\right\},
\end{equation}
where $r_{ij}$ is the distance between two ``atoms'', $i$ and $j$, in
one cluster. $\epsilon$ is the energy constant and
$\sigma$ is the lattice constant.
When we regard the ``atom'' as argon, the values of the constants become
$\epsilon=1.65\times10^{-21}\mathrm{J}$ and $\sigma=3.4$\AA,
respectively.~\cite{rieth}
Henceforth, we label the upper and the lower clusters as
cluster $C^{u}$ and cluster $C^{l}$, respectively.
The interaction between the atom $k$ on the lower surface of $C^{u}$
and the atom $l$ on the upper surface of $C^{l}$ is assumed to be
the repulsive potential $R(r_{kl})=4\epsilon(\sigma/r_{kl})^{12}$,
where $r_{kl}$ is the distance between the atoms $k$ and $l$.
To reduce computational costs, we introduce the cut-off length $\sigma_{c}$
of the Lennard-Jones interaction as $\sigma_{c}=2.5 \sigma$.
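As an aside, the pair interaction used here is simple enough to state in a few lines of code. The following reduced-unit ($\epsilon=\sigma=m=1$) Python sketch is ours and only illustrates the truncated potential; it is not the production code of this work.
\begin{verbatim}
SIGMA_C = 2.5   # cut-off length in units of sigma

def lj_pair(r):
    """Lennard-Jones potential and force magnitude at separation r
    (reduced units), truncated at the cut-off SIGMA_C."""
    if r >= SIGMA_C:
        return 0.0, 0.0
    inv6 = r ** -6
    u = 4.0 * (inv6 ** 2 - inv6)                # U(r)
    f = 24.0 * (2.0 * inv6 ** 2 - inv6) / r     # -dU/dr
    return u, f
\end{verbatim}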
The procedure of our simulation is as follows.
As the initial condition of simulation,
the centers of mass of $C^{u}$ and $C^{l}$ are placed
along the $z$-axis with the separation $\sigma_{c}$ between them.
The initial velocities of the ``atoms'' in both $C^{u}$ and $C^{l}$
obey the Maxwell-Boltzmann distribution at the initial temperature $T$.
The initial temperature is set to $T=0.01 \epsilon$ or $T=0.02 \epsilon$
in most of our simulations.
Sample averages are taken over different sets of initial velocities
drawn from the Maxwell-Boltzmann velocity distribution of the ``atoms''.
To equilibrate the clusters, we adopt the velocity scaling
method~\cite{haile,andersen} for
$2000$ steps at the initial stage of simulations.
We have checked the equilibration of the total energy
in the initial relaxation process.
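The initialization and equilibration steps can be summarized by the following schematic Python fragment (reduced units with $k_{B}=m=1$; the function names are ours):
\begin{verbatim}
import numpy as np

def init_velocities(n_atoms, T, rng=None):
    """Draw velocities from the Maxwell-Boltzmann distribution at
    temperature T and remove the drift of the center of mass."""
    rng = rng or np.random.default_rng()
    v = rng.normal(scale=np.sqrt(T), size=(n_atoms, 3))
    return v - v.mean(axis=0)      # zero total momentum

def velocity_scaling_step(v, T_target):
    """One velocity-scaling thermostat step: rescale v so that the
    kinetic temperature matches T_target exactly."""
    T_now = (v ** 2).sum() / (3 * len(v))
    return v * np.sqrt(T_target / T_now)
\end{verbatim}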
After the equilibration,
we give translational velocities and the macroscopic rotations to $C^{u}$ and $C^{l}$
to make them collide against each other.
The relative speed of impact
ranges from ${\tilde V}=0.02 \sqrt{\epsilon/m}$ to
${\tilde V}=0.07 \sqrt{\epsilon/m}$,
which is less than the thermal velocity of one ``atom'',
defined by $\sqrt{T/m}$, where $m$ is the mass of an ``atom''.
Numerical integration of the equation of motion for each atom
is carried out by the second order symplectic integrator with
the time step $dt=1.0 \times 10^{-2} \sigma/\sqrt{\epsilon/m}$.
The rate of energy conservation, $|E(t)-E_{0}|/|E_{0}|$,
is kept within $10^{-5}$,
where $E_{0}$ is the initial energy of the system and $E(t)$ is the
energy at time $t$.
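A standard realization of a second-order symplectic integrator is the velocity Verlet algorithm; the following one-step sketch (ours, not the production code) uses a generic force callback and \texttt{numpy} arrays:
\begin{verbatim}
def velocity_verlet(x, v, force, dt=1.0e-2, m=1.0):
    """One step of the second-order symplectic (velocity Verlet)
    integrator for positions x and velocities v."""
    f = force(x)
    v_half = v + 0.5 * dt * f / m
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * force(x_new) / m
    return x_new, v_new
\end{verbatim}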
We let the angle around the $z$-axis, $\theta^{z}$, be $\theta^{z}=0$
when the two clusters are located mirror-symmetrically
with respect to $z=0$.
In most of our simulations, we set $\theta^{z}=0$
as the initial condition.
Let us comment on the dependence of
the numerical results on the initial relative angle $\theta^{z}$.
From our impact simulations for
$\theta^{z}_{i}=\pi i/18 \hspace{1mm} (i=1,...,9)$ at
$T=0.02\epsilon$ we have confirmed that the initial orientation
does not significantly
affect the restitution coefficient.
\section{Results}\label{results}
\begin{figure}[th]
\begin{center}
\begin{minipage}{0.47\textwidth}
\includegraphics[width=0.8\textwidth]{fig3.eps}
\caption{Relation between colliding speed and restitution coefficient.
The solid and broken lines are results of the quasi-static theory.}
\label{fig2}
\end{minipage}
\hspace*{3mm}
\begin{minipage}{0.47\textwidth}
\includegraphics[width=0.8\textwidth]{fig4.eps}
\caption{Relation between compressive force and deformation. Cross
points and error bars are the average and the standard deviation of
$10$ numerical results. The error bars are hardly visible due to the small
standard deviations.
The solid line is the result of Hertzian contact theory
for two spheres pressed against each other.
}
\label{fig3}
\end{minipage}
\end{center}
\end{figure}
Figure~\ref{fig2} shows the relation between the relative speed of
impact ${\tilde V}/ \sqrt{\epsilon/m}$ and the restitution coefficient $e$.
Cross points and error bars are the average and the standard deviation
of $100$ samples for each value on the $x$-axis.
From this result, we confirm that the restitution coefficient $e$
decreases with increasing colliding speed
${\tilde V} / \sqrt{\epsilon/m}$. When the colliding speed is
${\tilde V} = 0.02 \sqrt{\epsilon/m}$ at $T=0.02\epsilon$,
the average of $e$ becomes $1.04$,
which is slightly larger than unity.
It is interesting that our result can be
fitted by the quasi-static theory
of low-speed impacts,
$1-e\propto {\tilde V}^{1/5}$,~\cite{kuwabara,brilliantov96,morgado}
if the restitution coefficient
at ${\tilde V}=0$ is replaced by a constant larger than unity.
Indeed, the solid and the broken lines in Fig.~\ref{fig2} are
fitting curves of
$e=\alpha_{1}-\alpha_{2} \left({\tilde V}/ \sqrt{\epsilon/m}\right)^{1/5}$,
where $\alpha_{1}$ and $\alpha_{2}$ depend on the material constants of the
colliding bodies.
To exclude the possibility of an accidental agreement between the fitting curve
and the data, we also check the validity of Hertzian contact theory
in our system.
According to the theory, when two elastic spheres are pressed
against each other, the relation between the deformation $h$ and the compressive
force $P$ is described as $h = D P^{2/3}$ with
$D = (3/2)^{2/3} ((1-\nu^{2})/E)^{2/3} (R/2)^{-1/3}$,
where $\nu$, $E$ and $R$ are Poisson's ratio, Young's modulus and the
radius of the spheres, respectively.
~\cite{hertz,landau}
Poisson's ratio and Young's modulus of the model
can be calculated from the elastic constants of the Lennard-Jones crystal.
Adopting these material constants,
we confirm that our numerical result for the compression of
two clusters is well described by the Hertzian contact theory
without any
fitting parameters between $P=81.84\epsilon/\sigma$
and $P=136.4\epsilon/\sigma$ at $T=0.03\epsilon$ (Fig.~\ref{fig3}).
For smaller compression a finite residual deformation seems to remain,
while for larger compression we observe too strong a repulsion
because of the existence of local plastic deformations.
Thus, we conclude that the relation
between the impact speed and the restitution coefficient
is characterized by the quasi-static theory of impact processes.
Finally, we compare our numerical results with the fluctuation relation
for impact phenomena, which is a kind of fluctuation theorem
based on the probability distribution
of the macroscopic energy loss.
In the case of impact phenomena,
the fluctuation relation may be written as follows~\cite{hal}:
\begin{equation}\label{FR}
\exp(W(X_{0}, X_{1})/T) P(W(X_{0}, X_{1}))
=P(W({\bar X_{1}}, {\bar X_{0}})),
\end{equation}
where $X_{i} (i=0,1)$ are the macroscopic variables at
initial and final states, respectively,
while ${\bar X_{i}}$ are the states obtained by reversing
all the momenta in $X_{i}$.
$P(W(X_{0}, X_{1}))$ is the probability distribution of
$W(X_{0},X_{1})$ which is the macroscopic energy loss during
the transition from $X_{0}$ to $X_{1}$ defined as
\begin{eqnarray}
W(X_{0}, X_{1})=\sum_{i=C^{u},C^{l}}
\left[\frac{1}{2} M(V_{i}^{2}-V_{i}^{'2})
+\frac{1}{2}I(\omega_{i}^{2}-\omega_{i}^{'2})\right]\nonumber \\
+R(r)-R(r^{'}).
\end{eqnarray}
Here, $M$ is the total mass of one cluster.
$\omega_{i}$ and $V_{i}$ are, respectively,
the angular velocity and the speed of the center of mass
of cluster $i$ in the initial state,
while $\omega_{i}^{'}$ and $V_{i}^{'}$ are those in the
final state.
$I$ is the moment of inertia of a cluster. $R(r)$ represents
the repulsive potential, where $r$ is the distance
between the centers of mass of the two clusters.
In our simulation, we first equilibrate the two clusters
at $T=0.02 \epsilon$ and collide them against each other with
the initial conditions ${\tilde V}=0.02 (\epsilon/m)^{1/2}$,
$\omega_{i}=10^{-6} (\epsilon/m)^{1/2}/\sigma$, and $r=12.02\sigma$.
Here we do not use the initial macroscopic rotations
induced by the thermal fluctuations,
whose average is $9.5 \times 10^{-7}(\epsilon/m)^{1/2}/\sigma$
and standard deviation is $4.9 \times 10^{-7}(\epsilon/m)^{1/2}/\sigma$, as they are.
Instead, for simplicity of our simulations,
we give an initial $\omega$ whose value is approximately equal to
$\sqrt{\langle \omega^2 \rangle}$, without taking into account
the fluctuations of the initial rotations.
We calculate $W \equiv W(X_{0},X_{1})$
from the initial and the final macroscopic energies,
defining the final state as
the state at which the internal energy of the clusters remains constant
in time.
After the collision,
we equilibrate the clusters at the initial temperature,
and reverse the translational velocities to make them collide
with each other again. At the termination of the second collision,
we calculate ${\bar W} \equiv W({\bar X_{1}},{\bar X_{0}})$.
We obtain the probability distributions
of $W$ and $\bar{W}$
from $5000$ samples. On the basis of these probability distributions,
we investigate whether the relation~(\ref{FR}) holds in this system.
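Schematically, the comparison can be carried out as in the following sketch, which estimates $\ln [P(\bar{W})/P(W)]$ from histograms of the two samples on a common grid and compares it with $W/T$; the binning choices are ours.
\begin{verbatim}
import numpy as np

def fluctuation_check(W, W_bar, T, n_bins=40):
    """Estimate ln[P(W_bar)/P(W)] on a common grid of W values.

    The fluctuation relation predicts that it equals W/T.  Bins
    without samples in either histogram are dropped."""
    lo = min(W.min(), W_bar.min())
    hi = max(W.max(), W_bar.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    pW, _ = np.histogram(W, bins=bins, density=True)
    pWb, _ = np.histogram(W_bar, bins=bins, density=True)
    centers = 0.5 * (bins[:-1] + bins[1:])
    ok = (pW > 0) & (pWb > 0)
    return centers[ok], np.log(pWb[ok] / pW[ok]), centers[ok] / T
\end{verbatim}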
\begin{figure}[h]
\begin{center}
\includegraphics[width=.7\textwidth]{fig5.eps}
\end{center}
\caption{(a) Numerical relation between $\ln [P(\bar{W})/P(W)]$ and $W$ for
the initial temperature $T=0.02\epsilon$.
Cross points are numerical results, while
the solid line follows from the fluctuation relation for inelastic
impacts, eq.~(\ref{FR}).
(b) The probability distributions of $W$ and ${\bar W}$ obtained by our
simulation. }
\label{fig4}
\end{figure}
Figure~\ref{fig4}(a) shows the relation
between $W$ and
$\ln [P(\bar{W})/P(W)]$.
The solid line corresponds to $W/T$ at $T=0.02\epsilon$.
We find that our simulation data are nearly consistent with
the line in the range $-0.01\epsilon < W < 0.02\epsilon$.
Thus, the relation~(\ref{FR}) holds
in a restricted range of $W$ in our simulations.
Here we should comment on the region in which our simulation
results are consistent with the fluctuation relation. We show both
$P(\bar{W})$ and $P(W)$
in Fig.~\ref{fig4}(b). The two distributions overlap mostly
in the range $-0.02\epsilon<W<0.02\epsilon$.
In the range $|W| > 0.02 \epsilon$,
the values of $\ln [P(\bar{W})/P(W)]$
are calculated from only a few samples.
Thus, our data and the relation~(\ref{FR}) show good agreement
only in the range $-0.01\epsilon<W<0.01\epsilon$,
while the discrepancy becomes large outside this region.
Since the effect of rotations is small,
$W$ is approximately proportional to $1-e^2$.
Thus, events with the negative $W$ almost correspond
to ``super-elastic'' collisions.
\section{Discussion}\label{discussion}
Let us discuss our results.
While our model mimics impact phenomena of
small systems subject to large thermal fluctuations,
we should note that our model may not be adequate
for the description of most realistic collisions of nanoclusters,
where the adhesive interaction between
clusters often prevents rebound in low-speed impacts.
Thus, the ``atoms'' in our model may be regarded as coarse-grained
units of crystalline solids, for which
the dominant interaction between clusters is repulsive.
We also expect that our model describes
idealized collisions of stable fullerenes~\cite{full}
or of charged clusters in which the
attractive interaction between clusters is weak\cite{adh}.
As an additional remark, we should point out that
it may be difficult to control the colliding speed and
the initial rotation of a cluster in actual situations,
because the macroscopic motion of a cluster is also affected
by thermal fluctuations.
\section{Conclusion}\label{conclusion}
In conclusion, we have performed molecular dynamics simulations
to investigate the behavior of colliding clusters and
its relation to the fluctuation theorem.
The results of our simulations have revealed that the relation between
the colliding speed and the restitution coefficient can be described
by the quasi-static theory of inelastic impacts. In addition,
on the basis of the distribution function of the macroscopic energy
loss during a collision, we have shown that our numerical results
can be explained by the fluctuation relation for inelastic impacts.
\section*{Acknowledgements}
We would like to thank H. Tasaki for valuable comments.
We would also like to thank M. Matsushita
for carefully reading the manuscript and giving helpful advice.
Parts of numerical computation in this work were carried out
at Yukawa Institute Computer Facility.
This study is partially supported by the Grant-in-Aid of
Ministry of Education, Science and Culture, Japan (Grant No. 18540371).
|
2,869,038,153,890 | arxiv | \section{Introduction and context}
\label{SEC:Intro}
Mathematical Software, i.e. tools for effectively computing mathematical objects, is a broad discipline: the objects in question may be expressions such as polynomials or logical formulae, algebraic structures such as groups, or even mathematical theorems and their proofs. In recent years there have been examples of software that acts on such objects being improved through the use of artificial intelligence techniques. For example, \cite{KUV15} uses a Monte-Carlo tree search to find the representations of polynomials that are most efficient to evaluate; \cite{LGPC16b} uses a machine learnt branching heuristic in a SAT-solver for formulae in Boolean logic; \cite{GHK20} uses pattern matching to determine whether a pair of elements from a specified group are conjugate; and \cite{ACEISU16} uses deep neural networks for premise selection in automated theorem proving. See the survey article \cite{England2018} in the proceedings of ICMS 2018 for more examples.
Machine Learning (ML), that is statistical techniques to give computer systems the ability to \emph{learn} rules from data, may seem unsuitable for use in mathematical software since ML tools can only offer probabilistic guidance, when such software prizes exactness. However, none of the examples above risked the correctness of the end-result in their software. They all used ML techniques to make non-critical choices or guide searches: the decisions of the ML carried no risk to correctness, but did offer substantial increases in computational efficiency.
All mathematical software, no matter the mathematical domain, will likely involve such choices, and our thesis is that in many cases an ML technique could make a better choice than a human user, a so-called magic constant \cite{Carette2004}, or a traditional human-designed heuristic.
\subsection*{Contribution and outline}
In Section \ref{SEC:OurWork} we briefly survey our recent work applying ML to improve an algorithm in a computer algebra system which acts on sets of polynomials. We describe how we proposed a more appropriate definition of model accuracy and used this to improve the selection of hyper-parameters for ML models; and a new technique for identifying features of the input polynomials suitable for ML.
These advances can be applied beyond our immediate application: the feature identification to any situation where the input is a set of polynomials, and the hyper-parameter selection to any situation where we are seeking to take a choice that minimises a computation time. Hence we saw value in packaging our techniques into a software pipeline so that they may be used more widely. Here, by pipeline we refer to a succession of computing tasks that can be run as one task. The software is freely available as a Zenodo repository here: \url{https://doi.org/10.5281/zenodo.3731703}
We describe the software pipeline and its functionality in Section \ref{SEC:Pipeline}. Then in Section \ref{SEC:NewData} we describe its application on a dataset we had not previously studied.
\section{Brief survey of our recent work}
\label{SEC:OurWork}
Our recent work has been using ML to select the variable ordering to use for calculating a cylindrical algebraic decomposition relative to a set of polynomials.
\subsection{Cylindrical algebraic decomposition}
\label{SUBSEC:CAD}
A \emph{Cylindrical Algebraic Decomposition} (CAD) is a \emph{decomposition} of ordered $\mathbb{R}^n$ space into cells arranged \emph{cylindrically}, meaning the projections of cells all lie within cylinders over a CAD of a lower dimensional space. All these cells are (semi)-algebraic meaning each can be described with a finite sequence of polynomial constraints. A CAD is produced for either a set of polynomials, or a logical formula whose atoms are polynomial constraints. It may be used to analyse these objects by finding a finite sample of points to query and thus understand the behaviour over all $\mathbb{R}^n$. The most important application of CAD is to perform Quantifier Elimination (QE) over the reals. I.e. given a quantified formula, a CAD may be used to find an equivalent quantifier free formula\footnote{E.g. QE would transform $\exists x, ax^2 + b x + c = 0 \land a \neq 0$ into the equivalent $b^2 - 4ac \geq 0$.\label{fn1}}.
CAD was introduced in 1975 \cite{Collins1975} and is still an active area of research. The collection \cite{CJ98} summarises the work up to the mid-90s while the background section of \cite{EBD20}, for example, includes a summary of progress since. QE has numerous applications in science \cite{BDEEGGHKRSW20}, engineering \cite{Sturm2006}, and even the social sciences \cite{MDE18}.
CAD requires an ordering of the variables. QE imposes that the ordering match the quantification of variables, but variables in blocks of the same quantifier and the free variables can be swapped\footnote{In Footnote \ref{fn1} we must decompose $(x,a,b,c)$-space with $x$ last, but the other variables can be in any order. Using $a \prec b \prec c$ requires 27 cells but $c \prec b \prec a$ requires 115\label{fn2}.}. The ordering can have a great effect on the time / memory use of CAD, the number of cells, and even the underlying complexity \cite{BD07}. Human designed heuristics have been developed to make the choice \cite{DSS04}, \cite{Brown2004}, \cite{BDEW13}, \cite{EBDW14} and are used in most implementations.
The first application of ML to the problem was in 2014 when a support vector machine was trained to choose which of these heuristics to follow \cite{HEWDPB14}, \cite{HEWBDP19}. The machine learned choice did significantly better than any one heuristic overall.
\subsection{Recent work on ML for CAD variable ordering}
The present authors revisited these experiments in \cite{EF19}, but this time using ML to predict the ordering directly (because there were many problems where none of the human-made heuristics made good choices; and although the number of orderings increases exponentially with the number of variables, the current scope of CAD applications means this is not restrictive). We also explored a more diverse selection of ML methods available in the Python library \texttt{scikit-learn} (\texttt{sklearn}) \cite{SciKitLearn2011}. All the models tested outperformed the human made heuristics.
The ML models learn not from the polynomials directly, but from features: properties which evaluate to a floating point number for a specific polynomial set. In \cite{HEWDPB14} and \cite{EF19} only a handful of features were used (measures of degree and frequency of occurrence for variables). In \cite{FE19} we developed a new feature generation procedure which used combinations of basic functions (average, sign, maximum) evaluated on the degrees of the variables in either one polynomial or the whole system. This allowed for substantially more features and improved the performance of all ML models. The new features could be used for any ML application where the input is a set of polynomials.
The natural metric for judging a CAD variable ordering is the corresponding CAD runtime: in the work above models were trained to pick the ordering which minimises this for a given input. However, this meant the training did not distinguish between any non-optimal ordering even though the difference between these could be huge. This led us to a new definition of accuracy in \cite{FE20}: to picking an ordering which leads to a runtime within $x\%$ of the minimum possible.
We then wrote a new version of the \texttt{sklearn} procedure which uses cross-validation to select model hyper-parameters to minimise the total CAD runtime of its choices, rather than maximise the number of times the minimal ordering is chosen. This also improved the performance of all ML models in the experiments of \cite{FE20}. The new definition and procedure are suitable for any situation where we are seeking to take a choice that minimises a computation time.
\section{Software pipeline}
\label{SEC:Pipeline}
The input to our pipeline is given by two distinct datasets used for training and testing, respectively. An individual entry in the dataset is a set of polynomials that represents an input to a symbolic computation algorithm, in our case CAD. The output is a corresponding sequence of variable ordering suggestions for each set of polynomials in the testing dataset.
The pipeline is fully automated: it generates and uses the CAD runtimes for each set of polynomials under each admissible variable ordering; uses the runtimes from the training dataset to select the hyper-parameters with cross-validation and tune the parameters of the model; and evaluates the performance of those classifiers (along with some other heuristics for the problem) for the sets of polynomials in the testing dataset.
We describe these key steps in the pipeline below. Each of the numbered stages can be individually marked for execution or not in a run of the pipeline (avoiding duplication of existing computation). The code for this pipeline, written all in Python, is freely available at: \url{https://doi.org/10.5281/zenodo.3731703}.
\subsection*{I. Generating a model using the training dataset}
\subsubsection*{(a) Measuring the CAD runtimes:}
The CAD routine is run for each set of polynomials in the training dataset. The runtimes for all possible variable orderings are stored in a different file for each set of polynomials. If the runtime exceeds a pre-defined timeout, the value of the timeout is stored instead.
\subsubsection*{(b) Polynomial data parsing:}
The training dataset is first converted to a format that is easier to process into features. For this purpose, we chose the format given by the \texttt{terms()} method from the \texttt{Poly} class located in the \texttt{sympy} package for symbolic computation in Python.
Here, each monomial is defined by a tuple, containing another tuple with the degrees of each variable, and a value defining the monomial coefficient. The polynomials are then defined by lists of monomials given in this format, and a point in the training dataset consists of a list of polynomials. For example, one entry in the dataset is the set $\{235 x_1+42 x_2^2, 2 x_1^2 x_3-1\}$ which is represented as
\begin{equation*}
\left[\left[\left((1,0,0),235\right),\left((0,2,0),42\right)\right],\left[\left((2,0,1),2\right),\left((0,0,0),-1\right)\right]\right].
\end{equation*}
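This representation can be reproduced directly with \texttt{sympy}; the short fragment below (ours, for illustration) generates exactly the entry shown above.
\begin{verbatim}
from sympy import symbols, Poly

x1, x2, x3 = symbols('x1 x2 x3')
polys = [235*x1 + 42*x2**2, 2*x1**2*x3 - 1]

# Each polynomial becomes a list of (exponent-tuple, coefficient) pairs.
entry = [Poly(p, x1, x2, x3).terms() for p in polys]
print(entry)
# [[((1, 0, 0), 235), ((0, 2, 0), 42)],
#  [((2, 0, 1), 2), ((0, 0, 0), -1)]]
\end{verbatim}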
All the data points in the training dataset are then collected into a single file called \texttt{terms\_train.txt} after being placed into this format. Subsequently, the file \texttt{y\_train.txt} is created storing the index of the variable ordering with the minimum computing times for each set of polynomials, using the runtimes measured in Step I(a).
\subsubsection*{(c) Feature generation:}
Here each set of polynomials in the training dataset is processed into a fixed length sequence of floating point numbers, called features, which are the actual data used to train the ML models in \texttt{sklearn}. This is done with the following steps:
\begin{enumerate}[i.]
\item {\bf Raw feature generation}\\
We systematically consider applying all meaningful combinations of the functions \texttt{average}, \texttt{sign}, \texttt{maximum}, and \texttt{sum} to polynomials with a given number of variables. This generates a large set of feature descriptions as proposed in \cite{FE19}. The new format used to store the data described above allows for an easy evaluation of these features. An example of computing such features is given in Figure \ref{fig:featcomp}.
In \cite{FE19} we described how the method provides $1728$ possible features for polynomials constructed with three variables, for example. This step generates the full set of feature descriptions, saved in a file called \texttt{features\_descriptions.txt}, and the corresponding values of the features on the training dataset, saved in a file called \texttt{features\_train\_raw.txt}. A minimal sketch of evaluating one such feature is given after this list.
\begin{figure}[!ht]
\hfill
\begin{center}
\includegraphics[width=4.8in,trim={0cm 0cm 0cm 0cm},clip]{Feature_selection.jpg}
\end{center}
\caption{Generating feature $\text{av}_p\left( \text{max}_m\left(d_1^{m,p}\right)\right)$ from data stored in the format of Section I(b). Here $d_1^{m,p}$ denotes the degree of variable $x_1$ in polynomial number $p$ and monomial number $m$, and $\text{av}_p$ denotes the average function computed for all polynomials \cite{FE19}. \label{fig:featcomp}}
\end{figure}
\item {\bf Feature simplification}\\
After computing the numerical values of the features in Step I(c)i this step will remove those features that are constant or repetitive for the dataset in question, as described in \cite{FE19}. The descriptions of the remaining features are saved in a new file called \texttt{features\_descriptions\_final.txt}. \\
\item {\bf Final feature generation}\\
The final set of features is computed by evaluating the descriptions in \texttt{features\_descriptions\_final.txt} for the training dataset. Even though these were already evaluated in Step I(c)i we repeat the evaluation for the final set of feature descriptions. This is to allow the possibility of users entering alternative features manually and skipping steps i and ii. As noted above, any of the named steps in the pipeline can be selected or skipped for execution in a given run. The final values of the features are saved in a new file called \texttt{features\_train.txt}.
\end{enumerate}
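As promised in Step I(c)i, here is a minimal sketch (ours) of how one of the generated features, $\text{av}_p\left( \text{max}_m\left(d_1^{m,p}\right)\right)$ from Figure \ref{fig:featcomp}, can be evaluated on an entry stored in the format of Section I(b):
\begin{verbatim}
def feature_avg_max_deg(entry, var=0):
    """Evaluate av_p(max_m(d_var)): the maximum degree of variable
    `var` over the monomials of each polynomial, averaged over all
    polynomials of the entry."""
    per_poly = [max(mono[0][var] for mono in poly) for poly in entry]
    return sum(per_poly) / len(per_poly)

entry = [[((1, 0, 0), 235), ((0, 2, 0), 42)],
         [((2, 0, 1), 2), ((0, 0, 0), -1)]]
print(feature_avg_max_deg(entry, var=0))   # (1 + 2) / 2 = 1.5
\end{verbatim}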
\subsubsection*{(d) Machine learning classifier training:}
\begin{enumerate}[i.]
\item{\bf Fitting the model hyperparameters by cross-validation}\\
The pipeline can apply four of the most commonly used deterministic ML models (see \cite{EF19} for details), using the implementations in \texttt{sklearn} \cite{SciKitLearn2011}.
\begin{itemize}
\item The K-Nearest Neighbors (KNN) classifier
\item The Multi-Layer Perceptron (MLP) classifier
\item The Decision Tree (DT) classifier
\item The Support Vector Machine (SVM) classifier
\end{itemize}
Of course, additional models in \texttt{sklearn} and its extensions could be included with relative ease.
The pipeline can use two different methods for fitting the hyperparameters via a cross-validation procedure on the training set, as described in \cite{FE20}:
\begin{itemize}
\item Standard cross-validation: maximizing the prediction accuracy (i.e. the number of times the model picks the optimum variable ordering).
\item Time-based cross-validation: minimizing the CAD runtime (i.e. the time taken to compute CADs with the model's choices).
\end{itemize}
Both methods tune the hyperparameters with cross-validation using the routine \texttt{RandomizedSearchCV} from the \texttt{sklearn} package in Python (for the latter, an adapted version we wrote); a sketch of the runtime-based scoring idea is given after this list. The cross-validation results (i.e. the choice of hyperparameters) are saved in a file \texttt{hyperpar\_D**\_**\_T**\_**.txt}, where \texttt{D**\_**} is the date and \texttt{T**\_**} denotes the time when the file was generated.
\item {\bf Fitting the parameters}\\
The parameters of each model are subsequently fitted using the standard \texttt{sklearn} algorithms for the chosen set of hyperparameters. These are saved in a file called \texttt{par\_D**\_**\_T**\_**.txt}.
\end{enumerate}
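The key idea of the time-based cross-validation can be sketched with the standard \texttt{sklearn} API as follows. A custom scoring callable returns minus the total CAD runtime of the predicted orderings, looked up in the table measured in Step I(a); the problem index is carried in the first feature column and stripped off by a small wrapper before fitting. This is an illustrative reconstruction, not the adapted routine used in the pipeline, and the file name and parameter ranges are placeholders.
\begin{verbatim}
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin, clone
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

# runtimes[i, j] = CAD runtime of training problem i under ordering j.
runtimes = np.loadtxt('runtimes_train.txt')

class DropIndex(BaseEstimator, ClassifierMixin):
    """Ignore the first feature column (the problem index) when
    fitting and predicting; the scorer uses it to look up runtimes."""
    def __init__(self, base=None):
        self.base = base
    def fit(self, X, y):
        self.model_ = clone(self.base).fit(X[:, 1:], y)
        return self
    def predict(self, X):
        return self.model_.predict(X[:, 1:])

def neg_total_runtime(estimator, X, y):
    """Score = minus the total CAD time of the chosen orderings, so
    that maximising the score minimises the runtime."""
    idx = X[:, 0].astype(int)
    return -runtimes[idx, estimator.predict(X)].sum()

search = RandomizedSearchCV(
    DropIndex(DecisionTreeClassifier()),
    {'base__max_depth': list(range(2, 20))},
    n_iter=10, scoring=neg_total_runtime, cv=5)
# search.fit(np.column_stack([np.arange(len(X)), X]), y)
\end{verbatim}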
\subsection*{II. Predicting the CAD variable orderings using the testing dataset}
The models in Step I are then evaluated according to their choices of variable orderings for the sets of polynomials in the testing dataset. The steps below are listed without detailed description as they are performed similarly to Step I for the testing dataset.
\subsubsection*{(a) Polynomial data parsing:}
The values generated are saved in a new file called \texttt{terms\_test.txt}.
\subsubsection*{(b) Feature generation:}
The final set of features is computed by evaluating the descriptions from Step I(c)ii for the testing dataset. These values are saved in a new file called \texttt{features\_test.txt}.
\subsubsection*{(c) Predictions using ML:}
Predictions on the testing dataset are generated using the models trained in Step I(d). Each model is run on the features from Step II(b), and the predictions are stored in a file called \texttt{y\_D**\_**\_T**\_**\_test.txt}.
\subsubsection*{(d) Predictions using human-made heuristics:}
In our prior papers \cite{EF19}, \cite{FE19}, \cite{FE20} we compared the performance of the ML models with the human-designed heuristics in \cite{Brown2004} and \cite{DSS04}. For details on how these are applied see \cite{EF19}. Their choices are saved in two files entitled \texttt{y\_brown\_test.txt} and \\ \texttt{y\_sotd\_test.txt}, respectively.
\subsubsection*{(e) Comparative results:}
Finally, in order to compare the performance of the proposed pipeline, we must measure the actual CAD runtimes on the testing dataset. The results of the comparison are saved in a file with the template:\\ \texttt{comparative\_results\_D**\_**\_T**\_**.txt}.
\subsection*{Adapting the pipeline to other algorithms}
The pipeline above was developed for choosing the variable ordering for the CAD implementation in Maple's Regular Chains Library \cite{CMXY09}, \cite{CM16}. But it could be used to pick the variable ordering for other procedures which take sets of polynomials as input by changing the calls to CAD in Steps I(a) and II(e) to that of another implementation / algorithm. Step II(d) would also have to be edited to call an appropriate competing heuristic.
\section{Application of pipeline to new dataset}
\label{SEC:NewData}
The pipeline described in the previous section makes it easy for us to repeat our past experiments (described in Section \ref{SEC:OurWork}) for a new dataset. All that is needed to do is replace the files storing the polynomials and run the pipeline.
To demonstrate this we test the proposed pipeline on a new dataset of randomly generated polynomials. We are not suggesting that it is appropriate to test classifiers on random data: we simply mean to demonstrate the ease with which the experiments in \cite{EF19}, \cite{FE19}, \cite{FE20} that originally took many man-hours can be repeated with just a single code execution.
The randomly generated parameters are: the degrees of the three variables in each polynomial term, the coefficient of each term, the number of terms in a polynomial and the number of polynomials in a set. The means and standard deviations of these parameters were extracted from the problems in the \texttt{nlsat} dataset\footnote{\url{https://cs.nyu.edu/~dejan/nonlinear/}}, which was used in our previous work \cite{EF19} so that the dataset is of a comparable scale.
We generated $7500$ sets of random polynomials, where $5000$ were used for training, and the remaining $2500$ for testing.
The results of the proposed processing pipeline, including the comparison with the existing human-made heuristics, are given in Table \ref{tab:1}. The prediction time is the time taken for the classifier or heuristic to make its predictions for the problems in the testing set. The total time adds to this the time for the actual CAD computations using the suggested orderings. We do not report the training time of the ML as this is a cost paid only once in advance. The virtual solvers are those which always make the best/worst choice for a problem (in zero prediction time) and are useful to show the range of possible outcomes. We note that further details on our experimental methodology are given in \cite{EF19}, \cite{FE19}, \cite{FE20}.
As with the tests on the original dataset \cite{EF19}, \cite{FE19}, the ML classifiers outperformed the human made heuristics, but for this dataset the difference compared to the Brown heuristic was marginal. We used a lower CAD timeout, which may benefit the Brown heuristic, as past analysis shows that when it makes sub-optimal choices these tend to be much worse. We also note that the relative performance of the Brown heuristic fell significantly when used on problems with more than three variables in \cite{FE20}. The results for the sotd heuristic are poor because it had a particularly long prediction time on this random dataset. We note that there is scope to parallelize sotd, which may make it more competitive.
\begin{table}[t]
\caption{The comparative performance of DT, KNN, MLP, SVM, the Brown and sotd heuristics on the testing data for our randomly generated dataset. A random prediction, and the virtual best (VB) and virtual worst (VW) predictions are also included.}\label{tab:1}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
 & DT & KNN & MLP & SVM & \texttt{Brown} & \texttt{sotd} & rand & VB & VW \\
\hline \textbf{Prediction time (s)} & $ 4.8\times 10^{-4} $ & $ 0.68 $ & $ 2.8\times 10^{-4} $ & $ 0.99 $ & $ 53.01 $ & $15\,819$ &  &  &  \\
\hline \textbf{Total time (s)} & $ 6\,548 $ & $ 6\,610 $ & $ 6\,548 $ & $ 6\,565 $ & $ 6\,614 $ & $ 22\,313 $ & $ 16\,479 $ & $5\,610$ & $25\,461$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusions}
We presented our software pipeline for training and testing ML classifiers that select the variable ordering to use for CAD, and described the results of an experiment applying it to a new dataset.
The purpose of the experiment in Section \ref{SEC:NewData} was to demonstrate that the pipeline can easily train classifiers that are competitive on a new dataset with almost no additional human effort, at least for a dataset of a similar scale (we note that the code is designed to work on higher degree polynomials but has only been tested on datasets in 3 and 4 variables so far). The pipeline makes it possible for a user to easily tune the CAD variable ordering classifiers to their particular application area.
\newpage
Further, with only a little modification, as noted at the end of Section \ref{SEC:Pipeline}, the pipeline could be used to select the variable ordering for alternative algorithms that act on sets of polynomials and require a variable ordering. We thus expect the pipeline to be a useful basis for future research and plan to experiment with its use on such alternative algorithms in the near future.
\subsubsection*{Acknowledgements}
This work is funded by EPSRC Project EP/R019622/1: \emph{Embedding Machine Learning within Quantifier Elimination Procedures}. We thank the anonymous referees for their comments.
\bibliographystyle{splncs04}
|
2,869,038,153,891 | arxiv | \section{Introduction}
In recent years the multiplier ideals and the log canonical
threshold have played an important role in higher dimensional
birational geometry (see e.g. \cite{Laz}). These are invariants of
singularities in characteristic zero, that can be defined in terms
of log resolutions of singularities. Suppose for simplicity that $X$
is a smooth variety over a field of characteristic zero, and that
${\mathfrak a}\subseteq\ensuremath{\mathcal O}_X$ is a coherent sheaf of ideals. The multiplier
ideal associated to the pair $(X,{\mathfrak a})$ and to a non-negative real
number $t$ is denoted by $\ensuremath{\mathcal J}({\mathfrak a}^t)$. If $t_1<t_2$, then
$\ensuremath{\mathcal J}({\mathfrak a}^{t_2})\subseteq\ensuremath{\mathcal J}({\mathfrak a}^{t_1})$, and $\ensuremath{\mathcal J}({\mathfrak a}^t)=\ensuremath{\mathcal O}_X$
for $0<t\ll 1$. The smallest $t$ such that $\ensuremath{\mathcal J}({\mathfrak a}^t)\neq\ensuremath{\mathcal O}_X$ is
the \emph{log canonical threshold} $\lct({\mathfrak a})$.
On the other hand, in positive characteristic one can define
invariants using the Frobenius morphism. Specifically, Hara and the
second author introduced in \cite{HY} a notion of tight closure for
pairs, and corresponding (generalized) test ideals $\tau({\mathfrak a}^t)$.
Suppose that we have a pair $(X,{\mathfrak a})$ and $t\in\ensuremath{\mathbb R}_+$, where $X$
is a smooth variety over a field of characteristic zero. If we
denote by ${\mathfrak b}_p$ the reduction mod $p$ of the ideal ${\mathfrak b}$, it was
proved in \cite{HY} that
\begin{equation}\label{eq0}
\ensuremath{\mathcal J}({\mathfrak a}^t)_p=\tau({\mathfrak a}_p^t)
\end{equation}
for all primes $p\gg 0$ (depending on
$t$).
\par
In the same vein,
Takagi and Watanabe defined in
positive characteristic \cite{TW} the \emph{F-pure threshold}
$\fpt({\mathfrak a})$. When the ambient variety is nonsingular and $F$-finite
(that is, the Frobenius morphism $F\colon X\to X$ is finite), this
can be described as the smallest $t$ such that
$\tau({\mathfrak a}^t)\neq\ensuremath{\mathcal O}_X$. The formula (\ref{eq0}) can then be
reinterpreted as saying that
\begin{equation}\label{eq00}
\lim_{p\to\infty}\fpt({\mathfrak a}_p)=\lct({\mathfrak a}).
\end{equation}
The above shows the close connection between multiplier and test
ideals. In fact, more is true. Multiplier ideals satisfy several
subtle properties, such as the Restriction Theorem, the
Subadditivity and the Summation Theorems, and Skoda's Theorem (see
\cite{Laz}). One common feature of these results is that they all
rely on applications of vanishing theorems. As it was pointed out in
\cite{HY}, \cite{HT} and \cite{Ta}, all these results have similar
statements for test ideals, with substantially easier proofs.
On the other hand, multiplier ideals enjoy several other properties,
that follow simply from the description in terms of resolutions of
singularities. In this note we concentrate on these properties, and
show that essentially all these fail for test ideals.
\bigskip
Our basic ingredient is the description of test ideals from
\cite{BMS1}, which holds when the ambient variety is nonsingular and
$F$-finite. Therefore we will always make this assumption. Our main
result is a positive one: under mild assumptions, every ideal is a
test ideal.
\begin{thm}\label{thm1}
Suppose that $R$ is a ring of characteristic $p>0$, such that $R$ is
a finitely generated free module over $R^p$. For every ideal $I$ in
$R$, there is $f\in R$ and $c>0$ such that $I=\tau(f^c)$.
\end{thm}
Note that the theorem applies when $R$ is a local regular $F$-finite
ring, or when $R=k[x_1,\ldots,x_n]$, where $[k\colon k^p]<\infty$.
As we will see, both $f$ and $c$ in the theorem can be
explicitly determined. Moreover, if $I$ is ${\mathfrak m}$-primary, for some
maximal ideal ${\mathfrak m}$, then we show that we may write also
$I=\tau({\mathfrak a}^{c'})$ for some ${\mathfrak m}$-primary ideal ${\mathfrak a}$ and some
$c'>0$.
Note that Theorem~\ref{thm1} contrasts with the situation for
multiplier ideals. In that case, as an immediate consequence of the
definition one shows that every multiplier ideal is integrally
closed. Moreover, as it was recently shown in \cite{LL}, there are
more subtle conditions involving the local syzygies, that are
satisfied by all multiplier ideals.
In \cite{ELSV} one shows that whenever one writes an ideal $I$ as a
multiplier ideal, then one can prove an effective uniform Artin-Rees
theorem for $I$. The main ingredient in that proof is a basic
property of multiplier ideals that follows from the definition via
resolutions. As we show in Example~\ref{AR} below, this property
fails in the case of test ideals, and therefore it seems that
Theorem~\ref{thm1} does not have similar consequences in the
direction of uniform Artin-Rees statements.
We give several examples to illustrate that basic properties of
multiplier ideals, which easily follow from the definition via log
resolutions, can fail in the case of test ideals:
\begin{enumerate}
\item[i)] We show that it can happen that for a (principal) ideal
${\mathfrak a}$, we can have the ideal $\tau({\mathfrak a}^c)$ non-radical, where
$c=\fpt({\mathfrak a})$ (see Example~\ref{counterex}).
\item[ii)] We give an example of a (principal) ideal ${\mathfrak a}$ with $c=\fpt({\mathfrak a})$
such that $\tau({\mathfrak a}^c)$ is ${\mathfrak m}$-primary for a maximal ideal
${\mathfrak m}$, but such that $\fpt({\mathfrak a})<\fpt({\mathfrak a}+{\mathfrak m}^{\ell})$ for all
$\ell\gg 0$ (see Example~\ref{counterex}).
\item[iii)] We show that the analogue of
the Generic Restriction Theorem for multiplier ideals can fail in
the case of test ideals (see Example~\ref{semicont1}). However, we
will prove that the $F$-pure thresholds satisfy the same
semicontinuity property as the log canonical thresholds.
\end{enumerate}
The paper is structured as follows. In \S 2 we review the
definitions of multiplier and generalized test ideals, and some
basic properties. In particular, we recall the description of test
ideals in the case of a regular $F$-finite ring from \cite{BMS1},
which we will systematically use. In \S 3 we prove
Theorem~\ref{thm1} above. The next section is devoted to various
examples, including the ones mentioned above, while in the last
section we prove the semicontinuity result for $F$-pure thresholds.
\section{Preliminaries}
We first recall the definition of multiplier ideals (for details see
\cite{Laz}, \S 9). For a real number $u$, we denote by $\lceil
u\rceil $ the smallest integer $\geq u$. Similarly, $\lfloor
u\rfloor$ is the largest integer $\leq u$. This notation is extended
to divisors with real coefficients, in which case we apply it to
each coefficient.
Let $X$ be a $\ensuremath{\mathbb Q}$-Gorenstein normal variety over a field of
characteristic zero, $Y \subsetneq X$ a proper closed subscheme
defined by an ideal sheaf ${\mathfrak a} \subseteq \ensuremath{\mathcal O}_X$, and $t \ge 0$ a
real number. Suppose that $\pi \colon \widetilde{X} \to X$ is a log
resolution of the pair $(X,Y)$ such that ${\mathfrak a}\ensuremath{\mathcal O}_{\widetilde{X}} =
\ensuremath{\mathcal O}_{\widetilde{X}}(-F)$, and let $K_{\widetilde{X}/X}$ denote the
discrepancy divisor. Then the \textit{multiplier ideal}
$\ensuremath{\mathcal J}({\mathfrak a}^t)$ is defined by
\[
\ensuremath{\mathcal J}({\mathfrak a}^t) = \pi_{*}\ensuremath{\mathcal O}_{\widetilde{X}}
\bigg(\lceil K_{\widetilde{X}/X} - t F \rceil \bigg) \subseteq
\ensuremath{\mathcal O}_{X}.
\]
This is an ideal of $\ensuremath{\mathcal O}_X$ that does not depend on the choice of
the log resolution.
One says that $X$ has \emph{log terminal singularities} at $x\in X$
if $x$ does not lie in the support of $\ensuremath{\mathcal J}({\mathfrak a}^t)$ for $0<t\ll 1$.
In this case one defines the \textit{log canonical threshold} of
${\mathfrak a}$ at $x$, denoted by $\lct_x({\mathfrak a})$, to be
\[
\lct_x({\mathfrak a}) = \sup \{ s \in \ensuremath{\mathbb R}_{\ge 0} \mid
x\,\text{is not in the support of}\,\ensuremath{\mathcal J}({\mathfrak a}^s)\}.
\]
For the purpose of this paper, it is enough to restrict ourselves to
the case when the variety $X$ is nonsingular (hence, in particular,
$X$ has log terminal singularities at every point). It is easy to
see starting from definition that $\ensuremath{\mathcal J}({\mathfrak a}^{t_1})\subseteq
\ensuremath{\mathcal J}({\mathfrak a}^{t_2})$ if $t_1>t_2$. Moreover, given any $t\geq 0$, there
is a positive $\varepsilon$ such that
$\ensuremath{\mathcal J}({\mathfrak a}^t)=\ensuremath{\mathcal J}({\mathfrak a}^{t+\varepsilon})$. Following \cite{ELSV}, we
call $\lambda>0$ a \emph{jumping number} of ${\mathfrak a}$ if
$\ensuremath{\mathcal J}({\mathfrak a}^{\lambda})\neq\ensuremath{\mathcal J}({\mathfrak a}^{t})$ for every $t<\lambda$. With
the notation in the definition of multiplier ideals, it follows
easily that if we write $F=\sum_ia_iE_i$, then for every jumping
number $\lambda$ of ${\mathfrak a}$, there is $i$ such that $a_i\lambda$ is
an integer. In particular, the jumping numbers are rational and they
form a discrete set.
The smallest jumping number of ${\mathfrak a}$ is the \emph{log canonical
threshold} $\lct({\mathfrak a})$. It is clear that we can define local
versions of the jumping numbers at every $x\in X$. In this case, the
smallest jumping number is precisely $\lct_x({\mathfrak a})$. In fact, it is
easy to see that $\lct({\mathfrak a})=\min_{x\in X}\lct_x({\mathfrak a})$.
\bigskip
We now turn to the positive characteristic setting. Let $R$ be a
Noetherian ring containing a field of characteristic $p>0$. The ring
$R$ is called \textit{$F$-finite} if $R$ is a finitely generated
module over its subring $R^p = \{a^p \in R\,:\, a \in R\}$. If $J$
is an ideal in $R$, then $J^{[p^e]}$ denotes the ideal
$(u^{p^e}\colon u\in J)$.
We recall first the notion of \textit{generalized test ideals},
introduced by Hara and the second author in \cite{HY}. We denote by
$R^{\circ}$ the complement of all minimal prime ideals of $R$.
\begin{defn} \label{hy-tau}
Let ${\mathfrak a}$ be an ideal such that ${\mathfrak a} \cap R^{\circ} \ne
\emptyset$. Let $t \ge 0$ be a real number. For any ideal $I$ of
$R$, the \textit{${\mathfrak a}^t$-tight closure} of $I$, denoted by
$I^{*{\mathfrak a}^t}$, is defined to be the ideal of $R$ consisting of all
elements $z \in R$ for which
there exists $c \in R^{\circ}$ such that
\[
cz^q {\mathfrak a}^{\lceil t q \rceil} \subseteq I^{[q]}
\]
for all large $q=p^e$. Here we denote by $\lceil x\rceil$ the
smallest integer $\geq x$.
\par
Assume that $R$ is excellent and reduced. Given a real number $t \ge
0$, one defines the \emph{generalized test ideal} $\tau({\mathfrak a}^t)$ by
\[
\tau({\mathfrak a}^t) = \bigcap_{I \subseteq R} I \colon I^{*{\mathfrak a}^t},
\]
where $I$ runs through all ideals of $R$. In the case of a principal
ideal ${\mathfrak a}=(f)$, we simply write $\tau(f^t)$.
\end{defn}
\par
Blickle, Smith and the first author gave in \cite{BMS1} a different
description of generalized test ideals in the case of an $F$-finite
regular ring $R$. We briefly recall this description here, in the
special case when $R$ is free and finitely generated over $R^p$.
Note that this condition holds, for example, when $R$ is an
$F$-finite regular local ring, or when $R=k[x_1,\ldots,x_n]$ and
$[k\colon k^p]<\infty$.
It follows from our assumption that for every $p^e$, with $e\geq 1$,
$R$ is free over $R^{p^e}=\{a^{p^e}\colon a\in R\}$. For every such
$e$, let us fix a basis $u_1,\ldots,u_N$ of $R$ over $R^{p^e}$.
Given any ideal ${\mathfrak b}$ of $R$, we choose generators $h_1,\ldots,h_s$
of ${\mathfrak b}$. If we write for every $i$
\[
h_i = \sum_{j=1}^N a_{ij}^{p^e}u_j,
\]
with $a_{ij} \in R$, then we put
\[
{\mathfrak b}^{[1/p^e]} = (a_{ij}\colon 1\le i \le s,\, 1 \le j \le N).
\]
In fact, ${\mathfrak b}^{[1/p^e]}$ is the unique smallest ideal $J$ (with
respect to inclusion) such that ${\mathfrak b}\subseteq J^{[p^e]}$. In
particular, ${\mathfrak b}^{[1/p^e]}$ does not depend on the choice of
generators for ${\mathfrak b}$, or on the choice of basis for $R$ over
$R^{p^e}$.
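This construction is easy to carry out in small examples. The following short
\texttt{Python} sketch (ours, not from \cite{BMS1}; it assumes that the
coefficients lie in the prime field $\ensuremath{\mathbb F}_p$, where every scalar is its own
$p$-th root) computes generators of $(f)^{[1/p]}$ for a principal ideal by
splitting each term $c\,x^a$ as $(c\,x^{\lfloor a/p\rfloor})^p x^{a\bmod p}$
and collecting the root parts according to the basis monomial:
\begin{verbatim}
def pth_root_ideal(f, p):
    # f: dict mapping exponent tuples to coefficients in F_p (c^{1/p} = c)
    gens = {}
    for exps, c in f.items():
        basis = tuple(a % p for a in exps)   # basis monomial u_j of R over R^p
        root = tuple(a // p for a in exps)   # exponent of the p-th root part
        d = gens.setdefault(basis, {})
        d[root] = (d.get(root, 0) + c) % p
    roots = ({m: c for m, c in g.items() if c} for g in gens.values())
    return [g for g in roots if g]

# f = x^2 + y^5 + z^5 over F_2; exponent tuples are (a_x, a_y, a_z)
f = {(2, 0, 0): 1, (0, 5, 0): 1, (0, 0, 5): 1}
print(pth_root_ideal(f, 2))
# [{(1,0,0): 1}, {(0,2,0): 1}, {(0,0,2): 1}], i.e. (f)^{[1/2]} = (x, y^2, z^2)
\end{verbatim}
On $f=x^2+y^5+z^5$ over $\ensuremath{\mathbb F}_2$ it returns the generators $x$, $y^2$, $z^2$,
in agreement with Example~\ref{taunotint} below.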
Suppose now that ${\mathfrak a}$ is an ideal in $R$ and that $t$ is a
positive real number. For every $e\geq 1$ we have the inclusion
\[
\left({\mathfrak a}^{\lceil tp^e\rceil}\right)^{[1/p^e]} \subseteq
\left({\mathfrak a}^{\lceil tp^{e+1}\rceil}\right)^{[1/p^{e+1}]}.
\]
Since $R$ is Noetherian, these ideals stabilize for $e\gg 0$, and
the limit was taken as the definition of $\tau({\mathfrak a}^t)$ in \emph{loc.~cit.}, the equivalence with the definition from \cite{HY} being
proved in \emph{ibid.}, Proposition~2.22.
\bigskip
We now recall the definition of $F$-jumping exponents, that is
analogous to that of jumping numbers for multiplier ideals. We
assume that $R$ is a regular $F$-finite ring. Note that if $t < t'$,
then $\tau({\mathfrak a}^t) \supseteq \tau({\mathfrak a}^{t'})$. Moreover, for every
$t$ there exists $\varepsilon >0$ such that
$\tau({\mathfrak a}^{t})=\tau({\mathfrak a}^{t'})$ for every $t' \in
[t,t+\varepsilon)$.
\begin{defn} \label{f-jump}
A positive real number $\lambda$ is called an \textit{$F$-jumping
exponent} of ${\mathfrak a}$ if $\tau({\mathfrak a}^\lambda) \ne \tau({\mathfrak a}^{t})$ for
every $t < \lambda$. It is also convenient to adopt the convention
that $0$ is an $F$-jumping exponent.
\end{defn}
The smallest positive $F$-jumping exponent of ${\mathfrak a}$ is the
\emph{$F$-pure threshold} $\fpt({\mathfrak a})$. This notion was introduced
in a more general setting by Takagi and Watanabe in \cite{TW}, as an
analogue of the log canonical threshold.
When $(R,{\mathfrak m})$ is an $F$-finite regular local ring, the $F$-pure
threshold has the following alternative description (see \cite{BMS1}
or \cite{MTW}). Given an ideal ${\mathfrak a}\subseteq{\mathfrak m}$ and $e\geq 1$, we
denote by $\nu(e)$ the largest integer $r$ such that
${\mathfrak a}^r\not\subseteq{\mathfrak m}^{[p^e]}$ (we put $\nu(e)=0$ if there is no
such $r$). We then have
\begin{equation}\label{fpt}
\fpt({\mathfrak a})=\sup_e\frac{\nu(e)}{p^e}.
\end{equation}
It follows that given a nonnegative integer $c$, we have
$\fpt({\mathfrak a})\leq c$ if and only if ${\mathfrak a}^{\lfloor
cp^e\rfloor+1}\subseteq {\mathfrak m}^{[p^e]}$ for every $e$.
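The following brute-force \texttt{Python} sketch (ours; a polynomial is
stored as a dictionary of exponent tuples, and reduction modulo the monomial
ideal ${\mathfrak m}^{[p^e]}$ simply drops every monomial having some exponent $\geq p^e$)
computes $\nu(e)$ for a principal ideal ${\mathfrak a}=(f)$:
\begin{verbatim}
def nu(f, p, e):
    q = p**e
    nvars = len(next(iter(f)))
    def trunc(poly):   # drop monomials lying in (x_1^q, ..., x_n^q)
        return {m: c for m, c in poly.items() if c and all(a < q for a in m)}
    def mul(a, b):
        out = {}
        for m1, c1 in a.items():
            for m2, c2 in b.items():
                m = tuple(s + t for s, t in zip(m1, m2))
                out[m] = (out.get(m, 0) + c1 * c2) % p
        return trunc(out)
    power, r = {(0,) * nvars: 1}, -1   # start from f^0 = 1
    while power:   # power is f^(r+1) reduced; empty <=> f^(r+1) in m^[q]
        r += 1
        power = mul(power, f)
    return r

f = {(2, 0): 1, (0, 5): 1}             # f = x^2 + y^5 over F_2
for e in range(1, 7):
    print(e, nu(f, 2, e))              # prints nu(e) = 2^(e-1) - 1
\end{verbatim}
For this $f$ one gets $\nu(e)=2^{e-1}-1$, so that $\nu(e)/2^e$ increases to
$\fpt(f)=1/2$, in agreement with Proposition~\ref{concrete} below.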
\bigskip
Rationality and discreteness of $F$-jumping exponents is more subtle
in positive characteristic. Both properties have been proved in
\cite{BMS1} for an arbitrary ideal in a regular ring that is
essentially of finite type over an $F$-finite field, and for a
principal ideal in any $F$-finite regular ring in \cite{BMS2}.
We will be especially interested in the case when ${\mathfrak a}=(f)$ is a
principal ideal in an $F$-finite regular ring. In this case, Skoda's
Theorem (see Theorem~4.1 in \cite{HT} or Proposition~2.25 in
\cite{BMS1}) implies that for every $t \ge 1$ we have $\tau(f^t) = f
\cdot \tau(f^{t-1})$. Therefore the set of $F$-jumping exponents of
$f$ is periodic with period one, hence it is enough to describe the
$F$-jumping exponents in the interval $(0,1]$. As we have mentioned,
this is a finite set.
\section{Any ideal in an $F$-finite regular local ring is a test ideal}
Throughout this section we assume that $R$ is a regular, $F$-finite
ring. By a theorem of Kunz \cite{Kunz}, this is equivalent to $R$
being finitely generated and projective over $R^p$. We will assume
that moreover, $R$ is free over $R^p$. This holds, for example, if
$R$ is also local, or if $R=k[x_1,\ldots,x_n]$, where $[k\colon
k^p]<\infty$. The following is the main result of this section.
\begin{thm}\label{thm2}
Let $R$ be a regular ring of characteristic $p>0$, such that $R$ is
a finitely generated, free module over $R^p$.
\begin{enumerate}
\item[1)] For every ideal $I$ in $R$, there are $f\in R$ and $c>0$ such
that $I=\tau(f^c)$.
\item[2)] Moreover, if ${\mathfrak m}$ is a maximal ideal in $R$, and if
$I$ is ${\mathfrak m}$-primary, then we can find an ${\mathfrak m}$-primary ideal
${\mathfrak b}$ and $c'>0$ such that $I=\tau({\mathfrak b}^{c'})$.
\end{enumerate}
\end{thm}
Suppose that $R$ satisfies the hypothesis of the theorem, and let
$N={\rm rk}_{R^p}(R)$. It is clear that $N=1$ if and only if
$\dim(R)=0$, in which case Theorem~\ref{thm2} is trivial. We will
henceforth assume $N>1$. Note that if $e\geq 1$, then $R$ is free
over $R^{p^e}$ of rank $N^e$.
The first assertion in Theorem~\ref{thm2} follows from the more
precise statement below.
\begin{prop} \label{Precise}
Let $R$ be a ring of characteristic $p>0$ that is free and finitely
generated over $R^p$, with ${\rm rk}_{R^p}(R)=N$. Let
$I=(z_1,\ldots,z_{\mu})$ be an ideal of $R$, and fix $e_0\geq 1$
such that $N^{e_0}\geq\mu$. If $g_1,\ldots,g_{N^{e_0}}$ is a basis
of $R$ over $R^{p^{e_0}}$, and if we put
\[
f=\sum_{i=1}^{\mu} z_i^{p^{e_0}} g_i \in R, \qquad c=\frac{1}{p^{e_0}} \in \ensuremath{\mathbb Q},
\]
then
\[
\tau(f^c)=I.
\]
\end{prop}
\begin{proof}
We use the description of $\tau(f^c)$ from \cite{BMS1}. If $e\geq
e_0$, then we have a basis of $R$ over $R^{p^e}$ given by
\[
\{g_{i_1}g_{i_2}^p \cdots g_{i_{e-e_0+1}}^{p^{e-e_0}}\mid 1\leq
i_1,\ldots,i_{e-e_0+1}\leq N\}.
\]
Since we can write
$f^{p^{e-e_0}}=\sum_{i=1}^{\mu}z_i^{p^e}g_i^{p^{e-e_0}}$, it follows
that
\[
\left(f^{\lceil cp^e\rceil}\right)^{[1/p^e]}=
\left(f^{p^{e-e_0}}\right)^{[1/p^e]}=(z_1,\ldots,z_{\mu})=I.
\]
Since this is true for every $e\geq e_0$, we deduce $\tau(f^c)=I$.
\end{proof}
\par \vspace{2mm}
We turn now to the second assertion in Theorem~\ref{thm2} (this
answers positively a question raised by Kei-ichi Watanabe). The
assertion is a consequence of 1), together with the more general
statement below. Recall that by Corollary~2.16 in \cite{BMS1}, for
every $f$ and every $c$ there is $\varepsilon>0$ such that
$\tau(f^c)=\tau(f^{c+\varepsilon})$.
\begin{prop} \label{mprimary}
Let $R$ be a regular $F$-finite ring, and ${\mathfrak m}$ a maximal ideal in
$R$. Suppose that $f\in R$ and $c>0$ are such that $I:=\tau(f^c)$ is
${\mathfrak m}$-primary. If we fix $\varepsilon
> 0$ such that $I=\tau(f^{c+\varepsilon})$, and if $r$ is such that
${\mathfrak m}^r\subseteq I$, then for every positive integer $\ell$ with
$\ell \varepsilon \ge r+\codim({\mathfrak m})-1$, we have
\[
I=\tau((fR+{\mathfrak m}^{\ell})^{c+\varepsilon}).
\]
\end{prop}
\begin{proof}
We put ${\mathfrak a}_{\ell} = fR+{\mathfrak m}^{\ell}$. Note that we clearly have $I
=\tau(f^{c+\varepsilon}) \subseteq
\tau({\mathfrak a}_{\ell}^{c+\varepsilon})$.
\par \vspace{2mm}
On the other hand, by Takagi's Summation Theorem (see Theorem~3.1 in
\cite{Ta}), we have
\[
\tau({\mathfrak a}_{\ell}^{c+\varepsilon})
\subseteq \sum_{\lambda+\nu=c+\varepsilon}
\tau(f^{\lambda})\cdot\tau({\mathfrak m}^{\ell\nu}) \subseteq \tau(f^c) +
\tau({\mathfrak m}^{\ell\varepsilon}).
\]
For the second inclusion we used the fact that if $\lambda\ge c$,
then $\tau(f^{\lambda})\subseteq\tau(f^c)$, and otherwise we have
$\nu\geq\varepsilon$, hence $\tau({\mathfrak m}^{\ell\nu})\subseteq
\tau({\mathfrak m}^{\ell\varepsilon})$.
\par \vspace{2mm}
Since $\ell \varepsilon \ge r + d -1$, where $d=\codim({\mathfrak m})$, and
since $\tau({\mathfrak m}^{\alpha})= {\mathfrak m}^{\lfloor \alpha\rfloor-d+1}$ for
every $\alpha\geq d-1$, it follows that
\[
\tau({\mathfrak m}^{\ell\varepsilon})
\subseteq {\mathfrak m}^r\subseteq I.
\]
Therefore $\tau({\mathfrak a}_{\ell}^{c+\varepsilon})\subseteq I$, which
completes the proof of the proposition.
\end{proof}
\bigskip
Let $I$ be an ideal of a ring $R$. Recall that the \textit{integral
closure} of $I$, denoted $\overline{I}$, is the ideal of $R$
consisting of all $z$ that satisfy an equation $f(z)=0$ for some
\[
f(X) = X^n + a_1 X^{n-1} + \cdots + a_n \quad (a_i \in I^i).
\]
The ideal $I$ is \textit{integrally closed} if $I=\overline{I}$. It
is an immediate consequence of the definition that all multiplier
ideals are integrally closed (see \cite{Laz}, Corollary~9.6.13).
In positive characteristic, the generalized test ideal
$\tau({\mathfrak a}^t)$ is integrally closed for every $t \in \ensuremath{\mathbb R}_{\ge 0}$
if ${\mathfrak a}$ is generated by monomials in a polynomial ring (in fact,
in this case, the test ideals are given by the same formula as the
multiplier ideals in characteristic zero, see Theorem~6.10 in
\cite{HY}). More precisely, if the ideal ${\mathfrak a}$ is generated by
monomials in a polynomial ring, then
\[
\tau({\mathfrak a}^t)
= \left\{x^u \in R \mid u+(1,1,\ldots,1) \in {\rm Int}(t\cdot
P({\mathfrak a})) \right\},
\]
where $P({\mathfrak a})$ is the Newton polyhedron associated to ${\mathfrak a}$.
We mention that in dimension two, Lipman and Watanabe \cite{LW} and
Favre and Jonsson \cite{FJ} independently proved that every
integrally closed ideal is the multiplier ideal of some ideal. There
was some belief that such a result would be true in higher
dimensions. However, recent work of Lazarsfeld and Lee \cite{LL}
shows that in fact multiplier ideals must also satisfy some
strong properties in terms of their local syzygies, making it possible to give
examples in dimension $\geq 3$ of integrally closed ideals that are
not multiplier ideals.
However, as Theorem~\ref{thm2} clearly shows, the situation for test
ideals in positive characteristic is drastically different. Since
any ideal is a test ideal, in particular we get many non-integrally
closed test ideals. Here is a concrete such example.
\begin{exam}\label{taunotint}
Let $R = \ensuremath{\mathbb F}_2[[x,y,z]]$ and $f = x^2 +y^5+z^5$. It follows from
Proposition~\ref{Precise} that $\tau(f^{1/2})=(x,y^2,z^2)$, hence it
is \textit{not} integrally closed. In fact, we will see in
Proposition~\ref{concrete} below that $f$ has no jumping numbers in
$(1/2, 1)$. It follows that we may apply Proposition~\ref{mprimary}
with $\varepsilon=5/11$ and $r=3$ to deduce that if
${\mathfrak a}=(f)+(x,y,z)^{11}$, then $\tau({\mathfrak a}^{21/22})=(x,y^2,z^2)$.
\end{exam}
\begin{remark} \label{twodim}
Suppose that $(R,{\mathfrak m})$ is a two-dimensional excellent Gorenstein
$F$-rational local domain of characteristic $p>0$. If ${\mathfrak a}
\subseteq R$ is an ${\mathfrak m}$-primary integrally closed ideal, and if
${\mathfrak b}$ is its minimal reduction, then $\tau({\mathfrak a}) = {\mathfrak b} : {\mathfrak a}$,
hence $\tau({\mathfrak a})$ is integrally closed. See \cite[Theorem 3.1]{HWY}
and \cite[Theorem 5.1]{HY}.
\end{remark}
\begin{remark} \label{poly}
In the case of a polynomial ring we do not need the assumption that
the ring is $F$-finite. More precisely, if $R=k[x_1,\ldots,x_n]$ is
a polynomial ring over a field $k$ of positive characteristic, then
every ideal $I$ in $R$ can be expressed as a generalized test ideal.
To see this, write $I=(z_1,\ldots,z_{\mu})$, and let $k_0$ be the
subfield of $k$ generated over the prime field $\ensuremath{\mathbb F}_p$ by the
coefficients of $z_1,\ldots,z_{\mu}$. Since $k_0$ is an extension of
finite type of a perfect field, it follows that $k_0$ is $F$-finite.
Therefore $S=k_0[x_1,\ldots,x_n]$ is also $F$-finite, and we may
apply Theorem~\ref{thm2} for $S$ to find $f\in S$ and $c\in\ensuremath{\mathbb Q}$
such that $\tau((fS)^c)=(z_1,\ldots,z_{\mu})S$. Since $R$ is free
over $S$, one can easily see that $\tau((fS)^c)R=\tau((fR)^c)$,
hence $I=\tau((fR)^c)$.
\end{remark}
It would be interesting to determine also in the singular case those
ideals that can be written as generalized test ideals. We end this
section with the following question of Shunsuke Takagi.
\begin{quest}\label{q-takagi}
Is the analogue of Theorem~\ref{thm2} true if we only assume that
the ring is strongly $F$-regular?
\end{quest}
\section{Miscellaneous examples}
In this section we give several examples to show that the analogues
of several basic properties of multiplier ideals (which follow
easily from definition) fail for test ideals. We start by describing
the questions we will consider.
\begin{quest} \label{questions}
Let $(R,{\mathfrak m})$ be an $F$-finite regular local ring of characteristic
$p>0$ with $d=\dim R \ge 1$. Let $f$ be a nonzero element of $R$,
and set $c=\fpt(f)$. Given $t > 0$, we put
$\tau(f^{t-})=\tau(f^{t-\varepsilon})$ for $0 < \varepsilon \ll 1$
(note that this is well-defined, since the $F$-jumping exponents of
$f$ are discrete; see \cite{BMS1}).
\begin{enumerate}
\item[1)] Is the ideal $\tau(f^c)$ radical?
\item[2)] Suppose that $\tau(f^c)$ is ${\mathfrak m}$-primary.
Is there an ${\mathfrak m}$-primary ideal ${\mathfrak b}$ such that $f\in {\mathfrak b}$ and
$\fpt(f)=\fpt({\mathfrak b})$?
\item[3)] Does the inclusion
\[
{\mathfrak b}^m \cdot \tau(f^{t-}) \cap \tau(f^t) \subseteq {\mathfrak b}^{m-d} \cdot \tau(f^t)
\]
hold for every $m\geq d$ and every $t>0$?
\item[4)] Does the analogue of the Generic Restriction Theorem for
multiplier ideals (see Theorem~\ref{thm30} below) hold for
generalized test ideals?
\end{enumerate}
\end{quest}
We recall the argument for 1) and 2) in the case of multiplier
ideals. Suppose that ${\mathfrak a}$ is a nonzero ideal sheaf on the
nonsingular variety $X$ (over an algebraically closed field of
characteristic zero). Let $\pi\colon \widetilde{X}\to X$ be a log
resolution of the pair $(X,V({\mathfrak a}))$. If
${\mathfrak a}\ensuremath{\mathcal O}_{\widetilde{X}}=\ensuremath{\mathcal O}(-F)$, we write
\[
F=\sum_{i=1}^ra_iE_i,\qquad K_{\widetilde{X}/X}=\sum_{i=1}^rk_iE_i.
\]
Suppose that $c=\lct({\mathfrak a})$, hence $c=\min_i\frac{k_i+1}{a_i}$.
The analogue of 1) above holds since $\ensuremath{\mathcal J}({\mathfrak a}^c)$ is the radical
ideal corresponding to $\cup_i\pi(E_i)$, the union being over those
$i$ such that $c=\frac{k_i+1}{a_i}$. Moreover, suppose that $x\in X$
is a closed point corresponding to the ideal ${\mathfrak m}$. If
$\ensuremath{\mathcal J}({\mathfrak a}^c)$ is ${\mathfrak m}$-primary, it follows that there is a divisor
$E_i$ lying over $x$, such that $c=\frac{k_i+1}{a_i}$. In this case,
for every $\ell>a_i$, we have ${\rm ord}_{E_i}({\mathfrak a})={\rm
ord}_{E_i}({\mathfrak a}+{\mathfrak m}^{\ell})$. Therefore $c\geq
\lct({\mathfrak a}+{\mathfrak m}^{\ell})$, and we get the assertion in 2), since the
reverse inequality is trivial.
The motivation for the question in 3) comes from its relevance to
uniform Artin-Rees results. The corresponding statement for
multiplier ideals is Theorem~3.1 in \cite{ELSV}. The proof uses only
the definition via log resolutions and Skoda's Theorem (which also
holds in the setting of test ideals). It is used to give an
effective uniform Artin-Rees statement for every ideal that can be
written as a multiplier ideal. Therefore, in light of our
Theorem~\ref{thm2}, a positive answer to 3) would have had very
strong consequences. It is conceivable that some weaker version of
3) might still hold, enough to give effective uniform Artin-Rees for
\textit{every} ideal in positive characteristic.
\bigskip
Our main source of counterexamples to the above questions is the
following proposition, giving a formula for all the test ideals of a
certain class of principal ideals.
\begin{prop} \label{concrete}
Let $p$ be a prime number, $n$ a positive integer, and let $R =
\ensuremath{\mathbb F}_p[[x_0,x_1,\ldots,x_n]]$ be a formal power series ring over
$\ensuremath{\mathbb F}_p =\ensuremath{\mathbb Z}/p\ensuremath{\mathbb Z}$. For any nonnegative integers
$\ell_1,\ldots,\ell_n$, we set
\[
f = x_0^p+x_1^{\ell_1 p+1} + \cdots + x_n^{\ell_n p+1}
\quad \text{and} \quad I = (x_0,x_1^{\ell_1},\ldots,x_n^{\ell_n}).
\]
Then
\[
\tau(f^t) = \left\{
\begin{array}{cl}
R, & (0 \le t < \frac{~1~}{p}); \\[2mm]
I, & (\frac{~1~}{p} \le t < \frac{~2~}{p}); \\[2mm]
\vdots & \vdots \\[2mm]
I^{p-1}, & (\frac{p-1}{p} \le t < 1); \\[2mm]
fR, & (t=1).
\end{array}\right.
\]
In particular,
\begin{enumerate}
\item $\fpt(f) = \frac{~1~}{p}$ and $\tau(f^{\fpt(f)}) = I$.
\item For every $t \in \ensuremath{\mathbb R}_{\ge 0}$, we have
\[
\tau(f^{t}) = f^{\lfloor t \rfloor} \,
I^{\lfloor p (t-\lfloor t \rfloor) \rfloor},
\]
where $\lfloor \alpha \rfloor$ denotes the largest integer
$\leq\alpha$.
\item The set of $F$-jumping exponents of $f$ is $\frac{~1~}{p}\,\ensuremath{\mathbb Z}_{\ge 0}$.
\end{enumerate}
\end{prop}
\begin{proof}
It is enough to show that $\tau(f^t) = I^r$ for
$t\in\left[\frac{r}{p},\frac{r+1}{p}\right)$ and for every
$r=0,1,\ldots,p-1$. The other assertions follow from this and
Skoda's Theorem. First, we show the following
\begin{flushleft}
{\bf Claim 1:} $\tau(f^{r/p}) = I^r$.
\end{flushleft}
Since we have
\begin{eqnarray*}
f^{\lceil (r/p)p^e \rceil} &= & f^{rp^{e-1}} =
\left(x_0^{p^e}+x_1^{\ell_1p^e+p^{e-1}}+\cdots
+ x_n^{\ell_np^e+p^{e-1}}\right)^r \\
& = & \sum_{\stackrel{\scriptstyle i_0,\ldots,i_n \ge 0}{i_0 +
\cdots + i_n = r}} \frac{r!}{i_0!i_1!\cdots i_n!}\; \left(x_0^{i_0}
x_1^{\ell_1i_1} \cdots x_n^{\ell_n i_n} \right)^{p^e} \;
x_1^{i_1p^{e-1}}\cdots x_n^{i_np^{e-1}}
\end{eqnarray*}
and since $\left\{\frac{r!}{i_0!i_1!\cdots i_n!}\;
x_1^{i_1p^{e-1}}\cdots x_n^{i_np^{e-1}}\right\}$ is part of a free
basis of $R$ over $R^{p^e}$, we obtain that
\[
\left(f^{\lceil(r/p)p^e \rceil} \right)^{[1/p^e]}=
(x_0,x_1^{\ell_1},\ldots,x_n^{\ell_n})^r.
\]
Since this holds for every $e\geq 1$, we get our claim.
\par \vspace{2mm}
In order to prove that $\tau(f^t) = I^r$ when $\frac{r}{~p~} < t <
\frac{r+1}{~p~}$, we put $t=\frac{r+1}{~p~}-\varepsilon$, $0 <
\varepsilon < \frac{1}{~p~}$. It follows from Claim~1 that it is
enough to show that $I^r\subseteq\tau(f^t)$. We fix a sufficiently
large integer $e$ such that $s:=\lfloor \varepsilon p^e \rfloor \ge
1$. We have
\begin{eqnarray*}
f^{\lceil tp^e \rceil}
&=& \left(x_0^p+x_1^{\ell_1p+1} +
\cdots + x_n^{\ell_n p+1} \right)^{(r+1)p^{e-1}-s} \\
&=& \sum_{\stackrel{\scriptstyle a_0,\ldots,a_n \ge 0}{a_0 + \cdots
+ a_n =(r+1)p^{e-1} -s}}
\!\!\!\!\!\frac{((r+1)p^{e-1}-s)!}{a_0!a_1!\cdots a_n!} \;\;
x_0^{pa_0} x_1^{(\ell_1p+1)a_1}\cdots x_n^{(\ell_np+1)a_n}.
\end{eqnarray*}
\par
In order to complete the proof, it is enough to show that for every
$(n+1)$-tuple of nonnegative integers $(i_0,i_1,\ldots,i_n)$ such
that $i_0+i_1+\cdots + i_n =r$, we have
\[
y:=x_0^{i_0}x_1^{\ell_1i_1}\cdots x_n^{\ell_ni_n}\in \left(f^{\lceil
tp^e\rceil}\right)^{[1/p^e]}.
\]
If we put $a_0 = (i_0+1)p^{e-1}-s$, $a_j = i_jp^{e-1}$ for
$j=1,\ldots,n$, then we have
\[
a_0,a_1,\ldots, a_n \ge 0, \quad a_0+a_1+\cdots + a_n =
(r+1)p^{e-1}-s
\]
and
\[
x_0^{pa_0}x_1^{(\ell_1p+1)a_1}\cdots x_n^{(\ell_np+1)a_n}
= \left(x_0^{i_0}x_1^{\ell_1i_1}\cdots x_n^{\ell_ni_n} \right)^{p^e}
\; x_0^{p^e-sp} x_1^{i_1p^{e-1}}\cdots x_n^{i_np^{e-1}}.
\]
Therefore it is enough to prove the claim below. Note that the claim
implies that $f^{\lceil tp^e\rceil}$ can be written as $y_1^{p^e}
g_1 + \cdots + y_{\mu}^{p^e} g_{\mu}$, such that
$I^r=(y_1,\ldots,y_{\mu})$ and $\{g_1,\ldots,g_{\mu}\}$ is part of a
free basis of $R$ over $R^{p^e}$.
\begin{flushleft}
{\bf Claim 2:}
\begin{enumerate}
\item $\frac{((r+1)p^{e-1}-s)!}{a_0!a_1!\cdots a_n!} \not \equiv 0 \pmod{p}$.
\vspace{2mm}
\item Let $b_0,b_1,\ldots,b_n \ge 0$ be integers
with $b_0+b_1 + \cdots + b_n = (r+1)p^{e-1} -s$. If there exist
$t_0,t_1,\ldots,t_n \in \ensuremath{\mathbb Z}$ such that
\[
pb_0 - pa_0 = t_0p^e,\qquad (\ell_j p+1)(b_j-a_j) = t_j p^e \; (j=1,\ldots,n),
\]
then $b_0=a_0$, $b_1=a_1,\ldots, b_n=a_n$.
\end{enumerate}
\end{flushleft}
\par
In order to prove (1), we use the fact that for every integer $N$,
the order of $p$ in $N!$ is $\sum_{m\geq 1}\lfloor N/p^m\rfloor$.
Note that if $1\leq m\leq e-1$, then we have
\[
\lfloor(a_0+a_1+\cdots + a_n)/p^m\rfloor =\lfloor
a_0/p^m\rfloor+\sum_{j=1}^ni_jp^{e-1-m}=\sum_{j=0}^n\lfloor
a_j/p^m\rfloor.
\]
On the other hand, $a_0+a_1+\cdots+a_n<p^e$. This shows that the
order of $p$ in $\frac{((r+1)p^{e-1}-s)!}{a_0!a_1!\cdots a_n!}$ is
zero.
\par \vspace{2mm}
We now prove (2). Since ${\rm gcd}(p,\ell_j p+1)=1$, we have
$p^e\mid (b_j-a_j)$ for every $1\leq j\leq n$. Therefore we can
write $b_j-a_j = u_jp^e$ for every $j$ as above, and suitable
$u_j\in\ensuremath{\mathbb Z}$. Using $b_j = (i_j+pu_j)p^{e-1} \ge 0$, we deduce $i_j
+ pu_j \ge 0$, hence $u_j \ge 0$ (recall that
$i_0+\cdots + i_n =r < p$). On the
other hand, since $b_0 = (i_0+1+t_0)p^{e-1}-s \ge 0$, we get
$i_0+1+t_0 > 0$ and thus $t_0 \ge -i_0 > -p$. Moreover, $a_0+\cdots
+ a_n = b_0 + \cdots + b_n$ yields $(u_1+\cdots + u_n)p+t_0 =0$.
Since $t_0>-p$ and each $u_j\geq 0$, this forces $u_j=0$ for all $j$ and $t_0=0$, hence $a_j=b_j$ for all $j$. This completes the proof of
Claim~2, and also the proof of the proposition.
\end{proof}
\begin{exam} \label{counterex}
Let $R = \ensuremath{\mathbb F}_2[\negthinspace[ x,y,z]\negthinspace]$, $f = x^2+y^5+z^5$, and
put ${\mathfrak a}_N = (f) +(x,y,z)^N$ for every $N \ge 1$.
\begin{enumerate}
\item $\fpt(f) = \frac{1}{~2~}$.
\item $\tau(f^{\fpt(f)}) = (x,y^2,z^2)$ is an ${\mathfrak m}$-primary ideal,
but it is \textit{not} radical (hence this gives a counterexample to
1) in Question~\ref{questions}).
\item $\fpt({\mathfrak a}_N) > \fpt(f)=\frac{1}{~2~}$ for every $N \ge 1$
(hence this gives a counterexample to 2) in Question~\ref{questions}).
\end{enumerate}
\end{exam}
\begin{proof}
(1) and (2) follow from Proposition~\ref{concrete}. In order to see
that (3) indeed says that we get a counterexample to 2) in
Question~\ref{questions}, note that if ${\mathfrak b}$ is an ${\mathfrak m}$-primary
ideal containing $f$, then there is $N\geq 1$ such that ${\mathfrak a}_N
\subseteq{\mathfrak b}$. Hence $\fpt({\mathfrak b})\geq\fpt({\mathfrak a}_N)>\fpt(f)$.
\par
It is enough to prove the assertion in (3) for every $N=2^{e-2}$,
where $e\geq 5$. We show that in this case ${\mathfrak a}_N^{2^{e-1}}\not
\subseteq (x^{2^e},y^{2^e}, z^{2^e})$, hence $\tau({\mathfrak a}_N^{1/2})=R$.
Consider
\[
h:= f^{2^{e-1}-4}x^Ny^Nz^{2N}\in {\mathfrak a}_N^{2^{e-1}}.
\]
If $a=2(2^{e-1}-4-2^{e-3})+2^{e-2}=2^e-8$, $b=5\cdot
2^{e-3}+2^{e-2}=7\cdot 2^{e-3}$, and $c=2^{e-1}$, then the monomial
$x^ay^bz^c$ is not in $(x^{2^e},y^{2^e}, z^{2^e})$, and its
coefficient in $h$ is ${{2^{e-1}-4}\choose 2^{e-3}}$. In order to
show that this coefficient is nonzero, we compute the order of $2$
in ${{2^{e-1}-4}\choose 2^{e-3}}$. This order is equal to
\[
\sum_{i=1}^{e-2}\left(\lfloor (2^{e-1}-4)/2^i\rfloor -\lfloor
2^{e-3}/2^i\rfloor-\lfloor (2^{e-1}-4-2^{e-3})/2^i\rfloor\right)
\]
\[
=\lfloor(2^{e-1}-4)/2^{e-2}\rfloor-\lfloor
(2^{e-1}-4-2^{e-3})/2^{e-2}\rfloor=1-1=0.
\]
This concludes the proof of (3).
\end{proof}
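The parity of the binomial coefficient in the proof above is also easy to
confirm by brute force; the following one-line \texttt{Python} check (ours)
verifies that ${{2^{e-1}-4}\choose 2^{e-3}}$ is odd for the first several
values of $e$:
\begin{verbatim}
from math import comb

for e in range(5, 13):
    print(e, comb(2**(e - 1) - 4, 2**(e - 3)) % 2)   # prints 1: odd
\end{verbatim}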
\begin{remark} \label{Schwede}
Karl Schwede \cite{Sch} has recently introduced the notion of
\textit{sharp $F$-purity}. He proved that
if $c= \fpt(f) < 1$ is such that the denominator of $c$ is not
divisible by $p$, then the ideal $\tau(f^c)$ is radical;
see Corollary 4.3 and Remark 5.5 in \emph{loc. cit}. It would be
very interesting to see whether assuming that the denominators of
the jumping numbers of $f$ are not divisible by $p$ would imply
other good properties of the generalized test ideals of $f$.
\end{remark}
We consider now the third problem in Question~\ref{questions}.
\begin{exam}\label{AR}
Let $p$ be a prime, $R=\ensuremath{\mathbb F}_p[\negthinspace[ x,y]\negthinspace]$ and
$f=x^p+y^{\ell p+1}$, for some $\ell\geq 3$. It follows from
Proposition~\ref{concrete} that $\fpt(f)=1/p$ and
$\tau(f^{1/p})=(x,y^{\ell})$. If we take ${\mathfrak b}=(y)$ and $t=1/p$,
then we see that
\[
{\mathfrak b}^{\ell}\cdot\tau(f^{t-}) \cap \tau(f^{t}) ={\mathfrak b}^{\ell}\cap
(x,y^{\ell})=(y^{\ell})\not\subseteq
{\mathfrak b}^{\ell-2}\cdot\tau(f^t)=(y^{\ell-2})\cdot (x,y^{\ell}),
\]
giving thus a counterexample to 3) in Question~\ref{questions}.
\end{exam}
\bigskip
We conclude this section with a discussion of the analogue of the
Generic Restriction Theorem for multiplier ideals in the
characteristic $p$ setting. Let us recall the result in
characteristic zero (see \cite{Laz}, Theorem~9.5.35 and
Example~9.5.37).
\begin{thm}\label{thm30}
Let $f\colon X\to S$ be a smooth surjective morphism of nonsingular
complex algebraic varieties. If ${\mathfrak a}$ is a sheaf of ideals on $X$,
then there is an open subset $U\subseteq S$ such that
$$\mathcal{J}(X,{\mathfrak a}^c)\cdot\mathcal{O}_{X_s}
=\mathcal{J}(X_s,({\mathfrak a}\cdot\mathcal{O}_{X_s})^c)$$ for every $s\in
U$ and every positive $c$ {\rm (}here $X_s$ denotes the fiber
$f^{-1}(s)${\rm )}.
\end{thm}
We show now that the analogue of this result fails for test ideals.
Suppose, for simplicity, that $k$ is an algebraically closed field
of positive characteristic, and consider $f\in
R=k[x_1,\ldots,x_n,y]$. Let us denote by $\{u_j\}_j$ the monomials
$x_1^{a_1}\cdots x_n^{a_n}$, where $0\leq a_i\leq p-1$ for every
$i$. We write
\begin{equation}\label{eq1}
f=\sum_{i=0}^{p-1}y^i\sum_ju_jg_{ij}(x,y)^p,
\end{equation}
for some $g_{ij}\in R$. Arguing as in the proof of
Proposition~\ref{Precise}, we see that
\begin{equation}
\tau(f^{1/p})=(f)^{[1/p]}= (g_{ij}(x,y)\mid i,j).
\end{equation}
\par
On the other hand, let us put $f_{\lambda}(x):=f(x,\lambda)\in
k[x_1,\ldots,x_n]$ for every $\lambda\in k$. Note that we have
\begin{equation}\label{eq2}
f_{\lambda}=\sum_j u_j\sum_{i=0}^{p-1}g_{ij}(x,\lambda)^p\lambda^i,
\end{equation}
hence we deduce
\begin{equation}
\tau(f_{\lambda}^{1/p})=(f_{\lambda})^{[1/p]}=
\left(\sum_{i=0}^{p-1}\lambda^{i/p}g_{ij}(x,\lambda)\mid j\right).
\end{equation}
\begin{exam}\label{semicont1}
Consider $f\in k[x_1,x_2,y]$ given by $f(x_1,x_2,y)=x_1^p+x_2^py$.
The above discussion implies that $\tau(f^{1/p})=(x_1,x_2)$, while
for every $\lambda\in k$ we have
$\tau(f_{\lambda}^{1/p})=(x_1+\lambda^{1/p}x_2)$. This gives a
negative answer to 4) in Question~\ref{questions}.
\end{exam}
The main application of Theorem~\ref{thm30} is to prove the
semicontinuity of log canonical thresholds. In spite of the above
example, we will see in the next section that the analogous result
for $F$-pure thresholds holds.
\section{Semicontinuity of $F$-pure thresholds. }
The following theorem is the analogue of the Semicontinuity Theorem
for log canonical thresholds (see \cite{Laz}, Example~9.5.41).
\begin{thm}\label{thm3}
Let $f\colon R\to S$ be an algebra homomorphism between two
$k$-algebras of finite type, where $k$ is a field of characteristic
$p$, with $[k\colon k^p]<\infty$. We assume that all fibers of $f$
are nonsingular, of pure dimension $d$. Let $\phi\colon S\to R$ be a
ring homomorphism such that $\phi\circ f= {\rm id}_R$, and for every
${\mathfrak q}\in {\rm Spec}(R)$, we put ${\mathfrak q}'=\phi^{-1}({\mathfrak q})$. For every
ideal ${\mathfrak a}$ in $S$ such that ${\mathfrak a}\subseteq{\mathfrak q}'$ for all ${\mathfrak q}\in
{\rm Spec}(R)$, and for every nonnegative $c$, the set
$$\{{\mathfrak q}\in {\rm Spec}(R)\mid \fpt({\mathfrak a} \cdot S_{{\mathfrak q}'}/{\mathfrak q} S_{{\mathfrak q}'})\geq c\}$$
is open in ${\rm Spec}(R)$.
\end{thm}
\begin{proof}
Note that for every ${\mathfrak q}\in {\rm Spec}(R)$ we have $[k({\mathfrak q})\colon
k({\mathfrak q})^p]<\infty$, hence the ring $S_{{\mathfrak q}'}/{\mathfrak q} S_{{\mathfrak q}'}$ is
$F$-finite and regular.
Consider a surjective morphism of $R$-algebras $g\colon
T=R[x_1,\ldots,x_n]\to S$. We claim that we may replace $S$ by
$R[x_1,\ldots,x_n]$. Indeed, it follows from Proposition~3.6 in
\cite{BMS1} that if we write ${\mathfrak a}={\mathfrak b}/{\rm ker}(g)$ and
${\mathfrak q}'={\mathfrak q}''/{\rm ker}(g)$, then
$$\fpt ({\mathfrak a} \cdot S_{{\mathfrak q}'}/{\mathfrak q} S_{{\mathfrak q}'})+n-d=\fpt({\mathfrak b} \cdot T_{{\mathfrak q}''}/{\mathfrak q}
T_{{\mathfrak q}''}).$$ This proves our claim. Moreover, note that if
$\phi\colon S=R[x_1,\ldots,x_n]\to R$ is given by $\phi(x_i)=b_i$,
then we may consider the automorphism of $R$-algebras $\rho\colon
S\to S$ given by $\rho(x_i)=x_i+b_i$. After replacing ${\mathfrak a}$ by
$\rho({\mathfrak a})$, we may also assume that $\phi(x_i)=0$ for every $i$.
We see that for every ${\mathfrak q}\in {\rm Spec}(R)$, we are interested in
the $F$-pure threshold of ${\mathfrak a}\cdot
k({\mathfrak q})[x_1,\ldots,x_n]_{(x_1,\ldots,x_n)}$, that we denote by
$\fpt_0\left({\mathfrak a}\cdot k({\mathfrak q})[x_1,\ldots,x_n]\right)$.
Let us choose generators $g_1,\ldots,g_m$ for ${\mathfrak a}$, and let
$D=\max_i\{\deg(g_i)\}$. It follows from Proposition~3.8 in
\cite{BMS1} that there is $N=N(D,n,m)$ such that the denominator of
every $F$-jumping exponent of an ideal of the form ${\mathfrak a} \cdot
k({\mathfrak q})[x_1,\ldots,x_n]$ (for ${\mathfrak q}\in {\rm Spec}(R)$) is $\leq N$.
Note that $\fpt_0\left({\mathfrak a}\cdot k({\mathfrak q})[x_1,\ldots,x_n]\right)$ is
an $F$-jumping exponent of ${\mathfrak a}\cdot k({\mathfrak q})[x_1,\ldots,x_n]$
(though it might be larger than the $F$-pure threshold of this
ideal). Using also the fact that the $F$-pure threshold of an ideal
in a regular ring of dimension $n$ is $\leq n$, we deduce that the
set
\[
\{\fpt_0({\mathfrak a} \cdot k({\mathfrak q})[x_1,\ldots,x_n])\mid {\mathfrak q}\in {\rm
Spec}(R)\}
\]
is finite.
In particular, in order to prove the theorem, we may choose the
largest element $c'$ in the above set, with $c'<c$. It is enough to
show that the set
$$A_{c'}:=\{{\mathfrak q}\in {\rm Spec}(R)\mid \fpt_0({\mathfrak a}\cdot
k({\mathfrak q})[x_1,\ldots,x_n])\leq c'\}$$ is closed. Using the description
of the $F$-pure threshold in (\ref{fpt}) in \S 2, we see that
$A_{c'}=\cap_{e\geq 1}A_{c',e}$, where
$$A_{c',e}=\{{\mathfrak q}\mid {\mathfrak a}^{\lfloor c'p^e\rfloor+1}\subseteq
(x_1^{p^e},\ldots,x_n^{p^e})\,{\rm in}\,k({\mathfrak q})[x_1,\ldots,x_n]\}.$$
Note that if we consider all $g^{\ell}:=g_1^{\ell_1}\cdots
g_m^{\ell_m}$, with $\sum_{i}\ell_i=\lfloor c'p^e\rfloor+1$, then
$A_{c',e}$ is the set of primes ${\mathfrak q}$ containing all the
coefficients of monomials not in $(x_1^{p^e},\ldots,x_n^{p^e})$, in
all $g^{\ell}$ as above. Therefore each $A_{c',e}$ is a closed
subset of ${\rm Spec}(R)$, hence $A_{c'}$ is closed, too.
\end{proof}
\medskip
{\bf Acknowledgements}. We are indebted to Shunsuke Takagi and to
Kei-ichi Watanabe for inspiring discussions.
\section{\label{sec:intro}Introduction}
Quantum mechanics is based on the postulate that all physical observables must correspond to the real eigenvalues of quantum mechanical operators.
For a long time this assertion had been considered to be equivalent to the requirement of the Hermiticity of the operators.
The situation has changed after the seminal work \cite{ref:bender1998} of Bender and Boettcher, who discovered a wide class of non-Hermitian Hamiltonians exhibiting entirely real-valued spectra.
A number of intriguing properties are related to the non-Hermitian Hamiltonians possessing parity-time ($\mathcal{PT}$) symmetry that is the symmetry with respect to the simultaneous coordinate and time reversal.
For instance, a system described by the Hamiltonian $\hat{H}=\frac{\hat{\vb{p}}^2}{2m} + V(\vb{r}) \neq \hat H^\dag$ is $\mathcal{PT}$-symmetric if the complex potential $V(\vb{r})$ satisfies the condition $V(\vb{r})=V^{*}(-\vb{r})$, where $^\dag$ and $^\ast$ denote Hermitian and complex conjugation, respectively.
A couple of important features of the $\mathcal{PT}$-symmetric Hamiltonians are worth mentioning~\cite{ref:elganainy2018,ref:zyablovsky2014,ref:wu2019}. First, their eigenfunctions corresponding to the real eigenvalues are not orthogonal.
Second, such systems can undergo a phase transition from the $\mathcal{PT}$-symmetric to the $\mathcal{PT}$-symmetry-broken state when the system's parameters pass through an exceptional point.
The transfer of the $\mathcal{PT}$ symmetry concept from quantum mechanics to optics is straightforward due to the similarity of the Schr\"odinger and diffraction equations \cite{ref:elganainy2007,ref:makris2008,ref:elganainy2018}.
Photonic $\mathcal{PT}$-symmetric structures are implemented by combining absorbing and amplifying spatial regions to ensure a complex refractive index $n(\vb{r})=n^*(-\vb{r})$ that substitutes the quantum-mechanical complex potential $V$.
A possibility of the experimental investigation of the $\mathcal{PT}$-symmetric structures certainly heats up the interest to this subject in optics \cite{ref:ruter2010a,ref:feng2013,Kremer2019} in order to apply these systems for sensing~\cite{Hodaei2017,Chen2017}, lasing, and coherent perfect absorption (anti-lasing) \cite{Sun2014,Wong2016}.
It was Purcell who revealed that a spontaneous emission rate is not an intrinsic property of the emitter, but is proportional to the local density of modes (density of photonic states) in the vicinity of the transition frequency~\cite{ref:purcell1946}.
In other words, the spontaneous emission rate is determined by an environment.
The phenomenon of spontaneous emission enhancement owing to the influence of the environment is now known as the Purcell effect. The enhancement is defined as the ratio of the spontaneous emission rate in the system under consideration to that in free space~\cite{gaponenko2010}.
With the development of nanotechnology, nanophotonics opens up new avenues for engineering spontaneous emission of quantum emitters in specific surrounding media~\cite{ref:klimov2001,ref:hughes2004,ref:anger2006,ref:kolchin2015,ref:karabchevsky2016,Su2019} including non-Hermitian media.
Investigation of the spontaneous emission of a dipole emitter inside a $\mathcal{PT}$-symmetric planar cavity has recently been performed by Akbarzadeh et~al. in Ref.~\cite{ref:akbarzadeh2019}.
The authors have found suppression of the spontaneous relaxation rate of a two-level atom below the vacuum level.
A general theory of the spontaneous emission at the exceptional points of non-Hermitian systems was developed in Ref.~\cite{Pick2017} and revealed finite enhancement factors.
A number of methods including numerical techniques~\cite{ref:taflove2013} have been developed for calculation of the Purcell factor of dipole and quadrupole emitters in various environments.
The most general one is based on the calculation of Green's dyadics $\hat G({\bf r}, {\bf r}_0)$. Since the photonic local density of states is proportional to the imaginary part of the dyadic ${\rm Im}\hat G({\bf r}_0, {\bf r}_0)$~\cite{ref:novotny2012}, the purely quantum phenomenon of spontaneous emission can be reduced to the problem of classical electrodynamics. The Purcell factor $F_p = P/P_0$ can be written in terms of the powers $P$ and $P_0$ emitted by a source in an environment and in the free space, respectively.
This approach is widely adopted and can be exploited, e.g., for description of the spontaneous relaxation of molecules in absorbing planar cavities~\cite{ref:tomas1997}, explanation of the surface-enhanced Raman scattering \cite{Maslovski2019}, finding anomalous Purcell factor scaling in hyperbolic metamaterials~\cite{Wang2019}, and many others.
The Purcell factor can be calculated separately for each of the discrete scattering channels.
Driven by the demands of photonic integrated circuitry (PIC), which offers chip-scale miniaturization of actual devices and transfers academic knowledge to industry, research has recently accelerated towards the utilization of important optical phenomena in integrated photonic devices, as summarised in the recent review on on-chip nanophotonics and future challenges~\cite{ref:karabchevsky2020}.
For instance, just a couple of years ago, the modal Purcell factor for the basic element of PICs, the planar waveguide, was introduced within the scattering matrix formalism~\cite{ref:ivanov2017}.
A year later, another approach, based on the application of the reciprocity theorem, was developed and successfully exploited in a ring resonator configuration~\cite{ref:schulz2018}.
Here, we generalize the reciprocity-theorem formalism to the case of non-Hermitian systems with non-orthogonal modes and
define the modal Purcell factor for a point-source emitter placed in the vicinity of such systems.
We examine the developed theory by studying the influence of the non-Hermiticity and the non-orthogonality on the spontaneous
emission rate of a point-source emitter placed near coupled $\mathcal{PT}$-symmetric waveguide systems. We show
analytically, utilizing the coupled mode approach, and verify numerically, using a finite-difference frequency-domain (FDFD) based mode solver,
that although $\mathcal{PT}$-symmetric systems
are known to exhibit Purcell factor enhancement near an exceptional point, as reported in~\cite{Pick2017},
in principle no change in the modal Purcell factor occurs for the $\mathcal{PT}$-symmetric coupled-waveguide system
even near the exceptional point, where the supermodes coalesce, leading to infinite values of the Petermann factor.
The rest of the paper is organized in the following way.
In Section~\ref{sec:method}, we formulate a method for the Purcell factor calculation based on the reciprocity approach that accounts for the modes non-orthogonality.
In Section~\ref{sec:cmt}, we probe the developed formalism by considering a $\mathcal{PT}$-symmetric coupled-waveguide
system in terms of the coupled mode approach and reveal no dependence of the modal Purcell factor on the non-Hermiticity.
In Section~\ref{sec:results}, we show the proof-of-concept calculations of the Purcell factor for the system demonstrated in Fig.~\ref{fig:wg}
and reveal an agreement with the results obtained using the coupled-mode approach.
Finally, Section~V concludes the paper.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/Figure1.png}
\caption{Schematics of the $\mathcal{PT}$-symmetric system: Gain (refractive index $n_r=n_{\rm{co}} + \mathrm{i}\gamma$)
and Loss ($n_l=n_{\rm{co}} - \mathrm{i}\gamma$) waveguides of the width $w$ and height $h$ are embedded in the dielectric medium
with index of $n_{cl}$.
Waveguides are separated by the distance $g$.
Modes propagate in the $\hat{z}$ direction.}
\label{fig:wg}
\end{figure}
\section{\label{sec:method} Modal Purcell factor for non-Hermitian waveguides}
\subsection{Reciprocity approach}
Utilizing the reciprocity approach (a method for calculating the power $P$ emitted by a current source into a particular propagating mode leaving an open optical system), we normalize this power by the power $P_0$ radiated into free space to find the so-called \textit{modal Purcell factor}~$F_p$.
We consider an emitting current source (current density distribution $\vb{J}_1$) situated inside a coupled waveguide system with two exit ports at $z_1$ and $z_n$ ~\cite{ref:schulz2018}.
For brevity, we introduce a 4-component vector joining transverse electric and magnetic fields as
\begin{equation}
\ket{\psi(z)}=\mqty(\vb{E}_{t}(x,y,z) \\ \vb{H}_{t}(x,y,z)).
\label{eq:dirac_notation}
\end{equation}
In this way we can describe the fields of guiding (and leaking) modes.
For the $i$th mode we write
\begin{equation}
\ket{M_i(z)}=\mqty(\vb{E}_{t,i}(x, y, z) \\ \vb{H}_{t,i}(x, y, z))=\ket{i}\mathrm{e}^{-\mathrm{i}\beta_iz},
\label{eq:Mode_i}
\end{equation}
where
\begin{equation}
\ket{i}=\mqty(\vb{e}_{t,i}(x, y) \\ \vb{h}_{t,i}(x, y))
\label{eq:mode_i}
\end{equation}
and
\begin{equation}
\mqty(\vb{E}_{t,i}(x, y, z) \\ \vb{H}_{t,i}(x, y, z)) =
\mqty(\vb{e}_{t,i}(x, y) \\ \vb{h}_{t,i}(x, y))\mathrm{e}^{-\mathrm{i}\beta_iz}.
\end{equation}
Here we define the inner product as a cross product of the bra-electric and ket-magnetic fields integrated over the cross-section $z={\rm const}$:
\begin{equation}
\braket{\phi_1}{\phi_2} \equiv \intop
\left( \vb{E}_{1}\times\vb{H}_{2} \right) \cdot \vu{z} \dd x\dd y.
\label{eq:inner_product}
\end{equation}
Such a definition is justified by the non-Hermitian system we explore.
In the above and following relations we can drop the $t$ subscripts because the $z$
component of the vector products depends only on the transverse components.
It is well known that the modes of Hermitian systems are orthogonal in the sense
\begin{equation}
\intop \left( \vb{e}_{i}\times\vb{h}_{j}^{*} \right) \cdot \vu{z}\dd x\dd y \sim \delta_{ij},
\label{eq:orth_conventional}
\end{equation}
where $\delta_{ij}$ is the Kronecker delta.
However, the loss and gain channels of the non-Hermitian waveguide break the orthogonality of the modes.
In this case, one should use a non-conjugate inner product~\cite{ref:snyder1984,ref:svendsen2013,ref:wu2019} bringing us to the orthogonality relationship
\begin{equation}
\braket{i}{j} = \intop \left( \vb{e}_{i}\times\vb{h}_{j} \right) \cdot \vu{z} \dd x \dd y = 2 N_i \delta_{ij},
\label{eq:orth}
\end{equation}
where $N_i$ is a normalization parameter.
It is worth noting that a redefinition of the inner product is also required in non-Hermitian quantum mechanics.
It appears that left and right eigenvectors of non-Hermitian operators obey the so-called biorthogonality relations.
Discussion of quantum mechanics based on biorthogonal states is given in
\cite{ref:weigert2003,ref:mostafazadeh2010,ref:moiseyev2011a,ref:brody2016}.
The fields excited by the current source $\vb{J}_1$ at the cross-section of exit ports can be expanded into a set of modes as follows
\begin{align}
\ket{\psi_1(z_1)}&=\sum_{i} A_{i,z_1} \ket{i,z_1}, \nonumber \\
\ket{\psi_1(z_n)}&=\sum_{i} A_{-i,z_n} \ket{-i,z_n}.
\label{eq:expansion}
\end{align}
Here $A_{i,z_1}$ and $A_{-i,z_n}$ are the amplitudes of the modes propagating forward to port $z_1$ and backward to port $z_n$, respectively,
$\ket{i,z_1}$, $\ket{-i,z_n}$ are respectively eigenmodes of ports $z_1$ and $z_n$ propagating
from the cavity.
In our notation, the Lorentz reciprocity theorem
\begin{equation}
\intop_{\delta V} \left( \vb{E}_{1}\times\vb{H}_{2} - \vb{E}_{2}\times\vb{H}_{1} \right)
\cdot \vu{z} \dd x \dd y
= \intop_{V} \left(\vb{E}_{2} \cdot \vb{J}_{1} - \vb{E}_{1} \cdot \vb{J}_{2} \right) \dd V
\label{eq:reciprocity}
\end{equation}
should be rewritten as
\begin{multline}
\braket{\psi_1(z_1)}{\psi_2(z_1)}-\braket{\psi_2(z_1)}{\psi_1(z_1)}\\
-\braket{\psi_1(z_n)}{\psi_2(z_n)}+\braket{\psi_2(z_n)}{\psi_1(z_n)}\\
= \intop_{V}\left( \vb{E}_{2}\cdot\vb{J}_{1}-\vb{E}_{1}\cdot\vb{J}_{2} \right)\dd V,
\label{eq:reciprocity1}
\end{multline}
where $\delta V$ is the surface enclosing the cavity volume $V$ between two planes $z=z_1$ and $z=z_n$.
In Eq.~\eqref{eq:reciprocity1}, ${\bf J}_1$ and $\ket{\psi_1}$ are defined above, while the source ${\bf J}_2$ and the fields $\ket{\psi_2}$ produced by it can be chosen as we need.
Let the source current ${\bf J}_2$, located outside the volume $V$ (so that ${\bf J}_2 = 0$ inside $V$), excite a single mode $\ket{-k,z_1}$.
In general, this mode is scattered by the cavity $V$ and creates the set of transmitted and reflected modes as discussed in \cite{ref:schulz2018}.
In our case the cavity is a tiny volume ($z_1\approx z_n$) of the waveguide embracing the source ${\bf J}_1$.
Therefore, the field just passes the waveguide without reflection and we get
\begin{align}
\ket{\psi_{2}(z_1)} &= B_{-k, z_1} \ket{-k,z_1},
\label{eq:psi2z1}\\
\ket{\psi_{2}(z_n)} &= B_{-k, z_n} \ket{-k,z_n}.
\label{eq:psi2zn}
\end{align}
Forward and backward transverse modal fields $\vb{e}_{t,i}$ and $\vb{h}_{t,i}$
($\vb{e}_{t,i}\cdot\vu{z} = 0$) used in Eq.~(\ref{eq:reciprocity1}) satisfy the symmetry relations
\begin{equation}
\vb{e}_{t,-i}=\vb{e}_{t,i}, \qquad\vb{h}_{t,-i}=-\vb{h}_{t,i}
\label{eq:forward_backward}
\end{equation}
both in the case of Hermitian and non-Hermitian ports.
This means that the inner product of modes also meets the symmetry relations for its bra- and ket-parts: $\braket{i}{j}=\braket{-i}{j}$ and $\braket{i}{j}=-\braket{i}{-j}$.
Adding the orthogonality conditions~(\ref{eq:orth}), one straightforwardly derives
\begin{align}
\braket{\psi_1(z_1)}{\psi_2(z_1)} &= - \braket{\psi_2(z_1)}{\psi_1(z_1)} = - 2 A_{k,z_1} B_{-k,z_1} N_{k,z_1}, \nonumber \\
\braket{\psi_1(z_n)}{\psi_2(z_n)} &= \braket{\psi_2(z_n)}{\psi_1(z_n)} = 0,
\end{align}
where $N_{k,z_1}$ is the norm of the mode $\ket{k,z_1}$ as defined in \eqref{eq:orth}.
These inner products in the general case of reflection and transmission of the reciprocal mode by a cavity are given in Appendix.
By substituting these equations into Eq.~\eqref{eq:reciprocity1}, we arrive at the amplitude $A_{k,z_1}$ of the mode excited by the source current $\vb{J}_1$
\begin{equation}
A_{k, z_1} = -\frac{1}{4 B_{-k, z_1} N_{k, z_1}}
\intop_{V} \vb{E}_{2,-k} \cdot \vb{J}_{1} \dd V,
\label{eq:minus4ab}
\end{equation}
where $\vb{E}_{2,-k} = B_{-k, z_1}\vb{e}_{-k}(x,y)\mathrm{e}^{\mathrm{i} \beta_k (z-z_1)}$ is the electric field created by the excitation of the system with reciprocal mode $\ket{-k,z_1}$ at the port $z_1$.
\subsection{Purcell factor}
As an emitter we consider a point dipole oscillating at the circular frequency $\omega$ and having the current density distribution
\begin{equation}
\vb{J}_{1}\left( \vb{r} \right) =
\mathrm{i}\omega\vb{p}\delta\left( \vb{r}-\vb{r}_0 \right),
\label{eq:dipole_current}
\end{equation}
where $\vb{p}$ is the dipole moment of the emitter and $\vb{r}_0$ is its position.
Then we are able to carry out the integration in Eq.~\eqref{eq:minus4ab} and obtain
\begin{equation}
A_{k,z_1}=-\frac{\mathrm{i}\omega}{4 B_{-k, z_1}N_{k,z_1}} \vb{E}_{2,-k}
\left( \vb{r}_0 \right) \cdot \vb{p}.
\label{eq:Akz1}
\end{equation}
Here we observe a dramatic difference compared to the Hermitian case considered in Ref.~\cite{ref:schulz2018}. This difference appears due to the fact that now the expansion coefficients $A_{k, z_1}$ are not directly related to the powers carried by the modes. Finding a power carried by a specific mode is a challenge. To circumvent this challenge, we propose a calculation of the total power carried by the set of modes as we describe below.
The power emitted by the current source $\vb{J}_1$ into the port $z_1$ can be written as
\begin{equation}
P = \frac{1}{2} \mathrm{Re}\intop_{z=z_1} \left( \vb{E}_{1} \times \vb{H}^{*}_{1} \right) \cdot
\vu{z} \dd x \dd y = \frac{1}{2} \mathrm{Re}\braket{\psi_1(z_1)}{\psi_1^\ast(z_1)},
\label{eq:power1}
\end{equation}
where $\ket{\psi^\ast}=(\vb{E}_t^*,\vb{H}_t^*)^T$.
Expanding the electromagnetic fields $\ket{\psi_1(z_1)}$ according to Eq.~(\ref{eq:expansion})
we represent the power transmitted through the port Eq. (\ref{eq:power1}) as follows
\begin{equation}
P = \mathrm{Re}\sum_{k,l}A_{k, z_1} A^{*}_{l, z_1} P_{kl},
\label{eq:power1_expand}
\end{equation}
where $P_{kl}$ is the so-called cross-power, equal to the Hermitian inner product of the modal fields
\begin{equation}
P_{kl,z_1} = \frac{1}{2} \braket{k,z_1}{l,z_1^*} =
\frac{1}{2}\intop_{z=z_1}\left( \vb{e}_{k,z_1} \times \vb{h}^{*}_{l,z_1} \right)\vdot\vu{z}\dd x\dd y.
\label{eq:cross_power}
\end{equation}
For $k = l$ the cross-power reduces to the mode power $P_k = P_{kk}$. By considering the expansion coefficients \eqref{eq:Akz1} we rewrite the power (\ref{eq:power1_expand}) in terms of the reciprocal fields $\vb{E}_{2,-k}$ as
\begin{multline}
P = \frac{\omega^{2}}{16}
\Re
\sum_{k,l}
\frac{
( \vb{E}_{2,-k}\left( \vb{r}_0 \right)\cdot \vb{p} )
( \vb{E}_{2,-l}^{*}\left( \vb{r}_0 \right)\cdot \vb{p}^\ast )
}
{B_{-k}B_{-l}^{*} N_{k} N_{l}^*}
P_{kl}
\\
=\frac{\omega^{2}}{16}
\Re
\sum_{k,l}
\frac{
(\vb{e}_{-k}\left( x_0, y_0 \right)\cdot \vb{p} )
( \vb{e}_{-l}^{*}\left( x_0, y_0 \right)\cdot \vb{p}^\ast )
}
{N_{k} N_{l}^*}
P_{kl}.
\label{eq:power_final}
\end{multline}
The last equality is the consequence of the substitution of $\vb{E}_{2,-k}$ at the emitter position ${\bf r}_0 = (x_0, y_0, z_0)$ and taking into account negligible dimensions of the cavity $z_1\approx z_n\approx z_0$.
Note that here we dropped $z_1$ subscripts.
In order to find the Purcell factor we divide Eq. (\ref{eq:power_final}) by the power emitted by the same dipole into the free space
\begin{equation}
P_{0} = \frac{\mu_{0}}{12\pi c} \omega^{4} |p|^{2},
\label{eq:P0}
\end{equation}
where $\mu_0$ is the vacuum permeability and $c$ is the speed of light in vacuum.
The dipole moment, located in the $xy$ plane, can be presented using the unit vector $\hat {\bf p}$ as follows
\begin{equation}
\vb{p}=p\vu{p},
\end{equation}
therefore,
\begin{equation}
\vb{E}_{2,-k}(\vb{r}_0)\cdot\vb{p}=\vb{E}_{2,-k}(\vb{r}_0)\cdot\vu{p}p=E_{p,k}(\vb{r}_0)p.
\label{eq:Edotp}
\end{equation}
Here $E_{p,k}$ denotes projection of the vector $\vb{E}_{2,-k}$ onto the dipole orientation vector $\vu{p}$
\begin{equation}
E_{p,k}=\vb{E}_{2,-k}\cdot\vu{p}.
\label{eq:cos_alpha}
\end{equation}
Then the Purcell factor reads
\begin{equation}
F_{p} = \frac{P}{P_0}
=\frac{3\pi c}{4\omega^{2} \mu_{0}} \mathrm{Re}
\sum_{k,l}
\frac{
e_{p,k}\left( x_0, y_0 \right)
e_{p,l}^{*}\left( x_0, y_0 \right)
}{N_{k} N_{l}^*}P_{kl}.
\label{eq:Fpurcell}
\end{equation}
It is convenient to rewrite Eq. \eqref{eq:Fpurcell} through the normalized fields as
\begin{equation}
F_{p}=
\frac{3\pi c}{4\omega^{2} \mu_{0}}\sum_{kl}\hat{e}_{p,k}\hat{e}^*_{p,l}K_{kl}\hat{P}_{kl},
\label{eq:Fp_normed}
\end{equation}
where we have introduced power-normalized modal electric fields
\begin{equation}
\hat{\vb{e}}_{2, i} = \frac{\vb{e}_{2,i}}{\sqrt{P_{i}}}
\label{eq:e_power_normalized}
\end{equation}
and normalized cross-power coefficients
\begin{equation}
\hat{P}_{kl} = \frac{1}{\sqrt{P_kP_l}}P_{kl}.
\end{equation}
Here we generalize the well-known Petermann factor~\cite{ref:petermann1979}
\begin{equation}
K_i = K_{ii}
\label{eq:Petermann_factor}
\end{equation}
defining cross-mode Petermann factor
\begin{equation}
K_{kl} = \frac{P_k}{N_k}\frac{P_l}{N_l^*}=
\frac{\Re\braket{k}{k^*}}{\braket{k}}\frac{\Re\braket{l}{l^*}}{\braket{l}^*}.
\label{eq:cross_Petermann_factor}
\end{equation}
It should be noticed that the Petermann factor is often related to the mode non-orthogonality~\cite{Pick2017,ref:siegman1989,ref:berry2003,ref:yoo2011}, being obviously equal to unity for Hermitian systems owing to the coincidence of the norm $N_i$ and the power $P_i$ in this case.
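To make the role of the two norms explicit, consider the following minimal
\texttt{Python} sketch (ours; a two-component complex vector stands in for the
transverse modal fields, so that the integrals in Eqs.~\eqref{eq:orth} and
\eqref{eq:cross_power} reduce to dot products):
\begin{verbatim}
import numpy as np

def petermann(m):                       # m: two-component stand-in for the fields
    N = 0.5 * (m @ m)                   # non-conjugate norm N_i
    P = 0.5 * np.real(m @ np.conj(m))   # modal power P_i (Hermitian product)
    return P**2 / abs(N)**2             # Petermann factor K_i = K_ii

print(petermann(np.array([1.0, 1.0])))                    # real fields: K = 1
print(petermann(np.array([1.0, np.exp(1j*np.pi/3)])))     # complex fields: K = 4
\end{verbatim}
For real field profiles the two norms coincide and $K=1$, whereas a relative
phase between the components makes $|N_i|$ smaller than $P_i$ and pushes $K$
above unity.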
The modal Purcell factor can be naturally divided into two parts, the first of which is the sum of all diagonal $(k=l)$ terms, while the second part is the sum of non-diagonal $(k\neq l)$ terms:
\begin{equation}
F_p = F_{p, \mathrm{diag}} + F_{p,\mathrm{non-diag}} =
\sum_kF_{p, k}+\sum_{k\neq l}F_{p, kl},
\label{eq:Fp_two_terms}
\end{equation}
where
\begin{align}
F_{p, i}&=\frac{3\pi c}{4\omega^{2} \mu_{0}}
\abs{\hat{e}_{p,i}}^2 K_{i},
\label{eq:Fp_diag}\\
F_{p, kl}&=\frac{3\pi c}{4\omega^{2} \mu_{0}}
\hat{e}_{p,k}\hat{e}^*_{p,l} K_{kl}\hat{P}_{kl}.
\label{eq:Fp_non-diag}
\end{align}
In the Hermitian case, the non-diagonal terms \eqref{eq:Fp_non-diag} reduce to zero due to the regular orthogonality of the modes expressed by $\hat{P}_{kl}=\delta_{kl}$.
That is why the Purcell factor \eqref{eq:Fp_normed} applied to Hermitian systems coincides with the expression in Ref.~\cite{ref:schulz2018}.
\section{\label{sec:cmt} Modal Purcell factor within the Coupled Mode Theory}
To get some insight on the behavior of the modal Purcell factor, let us analyze the system of two coupled waveguides using the coupled mode theory as adopted in $\mathcal{PT}$-symmetry related literature.
We express the total field at the port $z_1$ in the coupled system in terms of the modes
$\ket{g}$ and $\ket{l}$ of isolated gain and loss waveguides
with corresponding $z$-dependent amplitudes $g$ and $l$ as
\begin{equation}
\ket{\psi_1}=g(z)\ket{g}+l(z)\ket{l}.
\label{eq:gl_ansatz}
\end{equation}
We assume that the overlap between the modes of the isolated waveguides is negligible
(the weak coupling condition); therefore, the modes are orthogonal and normalized as follows
\begin{align}
\braket{g}{l}&=\braket{g}{l^*}=0,\\
\braket{g}&=\braket{l}=1.
\label{eq:gl_norm}
\end{align}
One more assumption is introduced for the sake of simplicity:
\begin{equation}
\braket{g}{g^*}=\braket{l}{l^*}=1.
\end{equation}
It implies that the Hermitian norms of the isolated modes
are equal to the non-Hermitian norms or, in other words,
the Petermann factors for the modes equal unity.
The $\mathcal{PT}$ operator converts the mode of the isolated lossy waveguide into the mode of the isolated gain waveguide and vice versa, that is
\begin{subequations}
\begin{align}
\mathcal{PT}\ket{g} &= \ket{l},\\
\mathcal{PT}\ket{l} &= \ket{g}.
\end{align}
\label{eq:PTgl}
\end{subequations}
Spatial evolution of amplitudes is governed by the system of coupled equations
\begin{equation}
\mathrm{i}\dv{z}\mqty[g\\l] =
\mqty[\Re(\beta+\delta)-\mathrm{i}\alpha/2&&\kappa\\
\kappa&&\Re(\beta+\delta)+\mathrm{i}\alpha/2]\mqty[g\\l],
\end{equation}
where $\beta$ is a propagation constant, $\kappa$ is a coupling coefficient,
$\delta$ is a correction to the propagation constant, $\alpha$ is an effective gain (or loss).
It can be shown that due to the weak coupling and relations \eqref{eq:PTgl}
the coupling constant $\kappa$ is real \cite{ref:chuang1987,ref:elganainy2007}.
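The spectrum of this $2\times2$ coupled-mode matrix already displays the
exceptional point. A minimal numerical sketch (ours; $\beta_0=\Re(\beta+\delta)$,
and the parameter values are purely illustrative):
\begin{verbatim}
import numpy as np

beta0, kappa = 1.0, 0.05
for alpha in (0.05, 0.10, 0.15):         # the EP sits at alpha = 2*kappa
    H = np.array([[beta0 - 1j*alpha/2, kappa],
                  [kappa, beta0 + 1j*alpha/2]])
    print(alpha, np.round(np.linalg.eigvals(H), 5))
# alpha < 2*kappa: real pair beta0 +/- sqrt(kappa^2 - alpha^2/4)
# alpha = 2*kappa: the eigenvalues coalesce (exceptional point)
# alpha > 2*kappa: complex-conjugate pair (broken PT symmetry)
\end{verbatim}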
\subsection{$\mathcal{PT}$-symmetric regime}
In $\mathcal{PT}$-symmetric regime, the system has the supermodes of the form
\begin{equation}
\ket{1,2}=\ket{g}\pm\mathrm{e}^{\pm\mathrm{i}\theta}\ket{l}
\label{eq:eigmodes_sym}
\end{equation}
with corresponding eigenvalues
\begin{equation}
\beta_{1,2}=\Re(\beta+\delta)\pm\kappa\cos\theta,
\label{eq:eigvals_sym}
\end{equation}
where $\sin\theta=\alpha/2\kappa$.
To find the modal Purcell factor in terms of coupled modes we substitute
the modes in the form \eqref{eq:eigmodes_sym}
into expression \eqref{eq:Fp_normed}.
Then the quantities $K_{kl}$ and $\hat{P}_{kl}$ can be written in the closed form as
\begin{subequations}
\begin{align}
K_{1}=\frac{\left(\Re\braket{1}{1^*}\right)^2}{\abs{\braket{1}}^2}
&=\frac{2}{1+\cos2\theta},\\
K_{2}=\frac{\left(\Re\braket{2}{2^*}\right)^2}{\abs{\braket{2}}^2}
&=\frac{2}{1+\cos2\theta},\\
K_{12}=\frac{\Re\braket{1}{1^*}}{\braket{1}}
\frac{\Re\braket{2}{2^*}}{\braket{2}^*}
&=\frac{2(1+\mathrm{e}^{-\ii2\theta})^2}{(1+\cos2\theta)^2},\\
K_{21}=\frac{\Re\braket{2}{2^*}}{\braket{2}}
\frac{\Re\braket{1}{1^*}}{\braket{1}^*}
&=\frac{2(1+\mathrm{e}^{+\ii2\theta})^2}{(1+\cos2\theta)^2},
\end{align}
\label{eq:K_kl_sym}
\end{subequations}
\begin{subequations}
\begin{align}
\hat{P}_{12}
=\frac{\braket{1}{2^*}}{\sqrt{\braket{1}{1^*}\braket{2}{2^*}}}
&=\frac{1}{2}(1-\mathrm{e}^{\ii2\theta}),\\
\hat{P}_{21}
=\frac{\braket{2}{1^*}}{\sqrt{\braket{1}{1^*}\braket{2}{2^*}}}
&=\frac{1}{2}(1-\mathrm{e}^{-\ii2\theta}).
\end{align}
\label{eq:P_kl_sym}
\end{subequations}
Normalized field projections $\hat{e}_{p,k}$ in
the basis of isolated modes read
\begin{subequations}
\begin{align}
\hat{e}_{p,1}&=\frac{1}{\sqrt{\frac12\bra{1}\ket{1^*}}}(\hat{e}_{p,g}+\mathrm{e}^{\mathrm{i}\theta}\hat{e}_{p,l})
=\hat{e}_{p,g}+\mathrm{e}^{\mathrm{i}\theta}\hat{e}_{p,l},\\
\hat{e}_{p,2}&=\frac{1}{\sqrt{\frac12\bra{2}\ket{2^*}}}(\hat{e}_{p,g}-\mathrm{e}^{-\mathrm{i}\theta}\hat{e}_{p,l})
=\hat{e}_{p,g}-\mathrm{e}^{-\mathrm{i}\theta}\hat{e}_{p,l}.
\label{eq:ep12_sym}
\end{align}
\end{subequations}
In the above expressions $\hat{e}_{p,g}$ and $\hat{e}_{p,l}$ denote projections of the fields of
backward-propagating isolated modes onto dipole orientation.
If the emitter dipole moment is perpendicular to $\hat{z}$,
projections of backward-propagating modal fields are equal to
projections of forward-propagating ones.
Performing calculation of the modal Purcell factor \eqref{eq:Fp_normed} using
relations (\ref{eq:K_kl_sym}-\ref{eq:P_kl_sym}) we obtain
\begin{equation}
F_{p}=F_{p,\mathrm{diag}}+F_{p,\mathrm{non-diag}}=
\frac{6\pi c}{\omega^{2} \mu_{0}}
(\abs{\hat{e}_{p,g}}^2+\abs{\hat{e}_{p,l}}^2).
\label{eq:Fp_sym}
\end{equation}
Diagonal and non-diagonal terms separately take the form
\begin{subequations}
\begin{align}
F_{p,\mathrm{diag}}&=\frac{3\pi c}{4\omega^{2} \mu_{0}}
\frac{4}{1+\cos2\theta}(\abs{\hat{e}_{p,g}}^2+\abs{\hat{e}_{p,l}}^2),\\
F_{p,\mathrm{non-diag}}&=-\frac{3\pi c}{4\omega^{2} \mu_{0}}
\frac{2(1-\cos2\theta)}{1+\cos2\theta}(\abs{\hat{e}_{p,g}}^2+\abs{\hat{e}_{p,l}}^2).
\end{align}
\label{eq:Fp_terms_sym}
\end{subequations}
Curiously, although both the diagonal and non-diagonal terms \eqref{eq:Fp_terms_sym}
are singular at the EP, which corresponds to $\theta_{EP}=\pi/2$ and $\cos2\theta_{EP}=-1$,
the singularities cancel each other, making the modal Purcell factor finite and independent of $\theta$.
The modal Purcell factor \eqref{eq:Fp_sym} depends solely on the mode profiles of the isolated modes in the $\mathcal{PT}$-symmetric regime.
Below we show
that a similar conclusion holds when the $\mathcal{PT}$ symmetry is broken.
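The cancellation can also be verified symbolically. In the following minimal sketch (in Python/SymPy), the common prefactor $3\pi c/4\omega^{2}\mu_{0}$ and the field factor $\abs{\hat{e}_{p,g}}^2+\abs{\hat{e}_{p,l}}^2$ in \eqref{eq:Fp_terms_sym} are divided out, so only the $\theta$-dependence is checked:
\begin{verbatim}
# Sketch: the theta-dependence of the two terms in Eq. (eq:Fp_terms_sym)
# cancels in the sum (common prefactors divided out).
import sympy as sp

th = sp.symbols('theta', real=True)
F_diag = 4 / (1 + sp.cos(2*th))
F_nondiag = -2*(1 - sp.cos(2*th)) / (1 + sp.cos(2*th))
print(sp.simplify(F_diag + F_nondiag))   # -> 2, independent of theta
\end{verbatim}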
\subsection{Broken $\mathcal{PT}$ symmetry regime}
In the $\mathcal{PT}$-broken regime, supermodes of the system of coupled waveguides take the form
\begin{equation}
\ket{1,2}=\ket{g}+\mathrm{i}\mathrm{e}^{\mp\theta}\ket{l},
\label{eq:eigmodes_broken}
\end{equation}
while eigenvalues read
\begin{equation}
\beta_{1,2}=\Re(\beta+\delta)\pm\mathrm{i}\kappa\sinh\theta,
\label{eq:eigvals_broken}
\end{equation}
where $\cosh\theta=\alpha/2\kappa$.
Calculating
\begin{subequations}
\begin{align}
K_{1}&=\coth^2{\theta},\\
K_{2}&=\coth^2{\theta},\\
K_{12}&=-\coth^2{\theta},\\
K_{21}&=-\coth^2{\theta},
\end{align}
\end{subequations}
\begin{subequations}
\begin{align}
\hat{P}_{12}&=\frac{1}{\cosh\theta},\\
\hat{P}_{21}&=\frac{1}{\cosh\theta},
\end{align}
\end{subequations}
\begin{subequations}
\begin{align}
\hat{e}_{p,1}&=
\frac{1}{\sqrt{\frac12(1+\mathrm{e}^{-2\theta})}}(\hat{e}_{p,g}+\mathrm{i}\mathrm{e}^{-\theta}\hat{e}_{p,l}),\\
\hat{e}_{p,2}&=
\frac{1}{\sqrt{\frac12(1+\mathrm{e}^{2\theta})}}(\hat{e}_{p,g}+\mathrm{i}\mathrm{e}^{\theta}\hat{e}_{p,l})
\end{align}
\label{eq:ep12_broken}
\end{subequations}
we straightforwardly derive the diagonal and non-diagonal terms
\begin{subequations}
\begin{multline}
F_{p,\mathrm{diag}}=\\
\frac{3\pi c}{4\omega^{2} \mu_{0}}
\frac{2\cosh\theta}{\sinh^2\theta}
\left((\abs{\hat{e}_{p,g}}^2+\abs{\hat{e}_{p,l}}^2)\cosh\theta-
2\Im(\hat{e}_{p,g}^*\hat{e}_{p,l})\right),
\end{multline}
\begin{multline}
F_{p,\mathrm{non-diag}}=\\
-\frac{3\pi c}{4\omega^{2} \mu_{0}}
\frac{2}{\sinh^2\theta}
\left(\abs{\hat{e}_{p,g}}^2+\abs{\hat{e}_{p,l}}^2-
2\cosh\theta\Im(\hat{e}_{p,g}^*\hat{e}_{p,l})\right)
\end{multline}
\end{subequations}
as well as the modal Purcell factor
\begin{equation}
F_{p}=F_{p,\mathrm{diag}}+F_{p,\mathrm{non-diag}}=
\frac{6\pi c}{\omega^{2} \mu_{0}}
(\abs{\hat{e}_{p,g}}^2+\abs{\hat{e}_{p,l}}^2).
\label{eq:Fp_broken}
\end{equation}
The main result of this section is that although the diagonal and non-diagonal terms
of the modal Purcell factor diverge at the EP,
the modal Purcell factor itself does not exhibit singular behavior
when approaching the EP from either side.
Although we do not carry out a rigorous analysis of the behavior at the EP
accounting for the degeneracy of the modes, as was done in Ref.~\cite{Pick2017}, the developed approach leads to the well-defined expressions \eqref{eq:Fp_sym}~and~\eqref{eq:Fp_broken} for $F_p$ at the exceptional point.
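The same cancellation can be checked in the broken regime. A minimal SymPy sketch, with the prefactor $3\pi c/4\omega^{2}\mu_{0}$ divided out and $S=\abs{\hat{e}_{p,g}}^2+\abs{\hat{e}_{p,l}}^2$, $I=\Im(\hat{e}_{p,g}^*\hat{e}_{p,l})$ treated as free real symbols:
\begin{verbatim}
# Sketch: in the broken-PT regime the theta-dependence of the diagonal
# and non-diagonal terms also cancels in the sum (prefactor divided out).
import sympy as sp

th, S, I = sp.symbols('theta S I', real=True)
F_diag = 2*sp.cosh(th)/sp.sinh(th)**2 * (S*sp.cosh(th) - 2*I)
F_nondiag = -2/sp.sinh(th)**2 * (S - 2*sp.cosh(th)*I)
print(sp.simplify(F_diag + F_nondiag))   # -> 2*S, theta drops out
\end{verbatim}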
\section{\label{sec:results}Numerical Results}
In this section we probe the theory developed in the previous section
by analyzing numerically an optical system consisting of two coupled rectangular waveguides separated by a distance
$g$, as schematically shown in Fig.~\ref{fig:wg}.
The complex refractive indices of the left (Loss) and right (Gain) waveguides are $n_l=n_{\rm{co}} - \mathrm{i}\gamma$ and $n_r=n_{\rm{co}} + \mathrm{i}\gamma$, respectively, to satisfy the $\mathcal{PT}$-symmetry condition $n(x) = n^\ast(-x)$, where $n_{\rm{co}}$ is the refractive index and $\gamma>0$ is the gain/loss (non-Hermiticity) parameter.
The waveguides are embedded in a transparent ambient medium with refractive index $n_{\rm{cl}}$. Light propagates in the $z$-direction.
To characterize the system numerically we use VPIphotonics Mode Designer\texttrademark\ finite difference mode solver in the frequency domain \cite{ref:vpimd}.
We take the parameters of the waveguide coupler as $g=0.8$ $\mu$m, $h=0.2$ $\mu$m, $n_\mathrm{cl}=1.444$, and $n_\mathrm{co}=3.478$ in order to limit the number of the system's modes.
The refractive indices of the cladding and core correspond to those of $\text{SiO}_2$ and $\text{Si}$ at the wavelength $1.55~\mu\rm{m}$.
With these parameters the coupler supports only two quasi-TE supermodes at this wavelength.
The modes are visualized in Figs.~\ref{fig:modes_below_ep}~and~\ref{fig:modes_above_ep}.
In the $\mathcal{PT}$-symmetric state, both the first and the second supermodes have a symmetric distribution of the magnitude of the electric field $\abs{E_x}$ over the loss and gain waveguides, ensuring a balance of gain and loss [Figs.~\ref{fig:modes_below_ep}(a) and (b)].
The modes can be associated with the eigenvalues of the scattering matrix, which are known to be unimodular, $\abs{s_{1,2}}=1$, and correspond to propagating waves of the form $s_{1,2}=\exp(-\mathrm{i}\beta_{1,2}z)$.
Since in the Hermitian limit $\gamma=0$ the fields of the supermodes become real, possessing even and odd symmetry, we call the supermodes ``even'' and ``odd'' (in quotes) for convenience.
In the $\mathcal{PT}$-symmetry-broken regime, the fields of the supermodes behave completely differently.
According to Fig.~\ref{fig:modes_above_ep}, the field is concentrated either in the loss or in the gain waveguide.
Hence, the supermodes can be named the ``loss'' and ``gain'' modes.
In this case the supermodes are mirror reflections of each other with respect to the plane $x=0$.
The amplitude of the ``loss'' (``gain'') mode decreases (increases) during propagation, in accordance with the known property of the eigenvalues of the scattering matrix in the $\mathcal{PT}$-symmetry-broken state: $\abs{s_{1}}=1/\abs{s_{2}}$.
\begin{figure}[t!b!]
\includegraphics[width=\linewidth]{figures/Abs_Ex_below_EP.pdf}
\caption{
Distribution of the electric field component $E_x$ in the
$\mathcal{PT}$-symmetric regime ($\gamma=3.5\times10^{-4}$).
Distributions of $\abs{E_x}$ for (a) ``even'' and (b) ``odd'' supermodes.
}
\label{fig:modes_below_ep}
\end{figure}
\begin{figure}[t!b!]
\includegraphics[width=\linewidth]{figures/Abs_Ex_above_EP.pdf}
\caption{
Distribution of the electric field component $E_x$
in the broken-$\mathcal{PT}$-symmetric regime ($\gamma=8 \times 10^{-4}$).
Distributions of $\abs{E_x}$ for (a) ``loss'' and (b) ``gain'' supermodes.
}
\label{fig:modes_above_ep}
\end{figure}
The transition from the $\mathcal{PT}$-symmetric to the non-$\mathcal{PT}$-symmetric state occurs when a system parameter is varied. The transition is observed in the modal effective index of the coupled waveguides $n_{\rm eff} = {\rm Re}(n_{\rm eff}) + \mathrm{i}~{\rm Im}(n_{\rm eff})$.
When the gain/loss parameter $\gamma$ increases, the system passes from the regime of propagation ($\mathcal{PT}$-symmetric state) with two non-decaying supermodes to the regime of decay/amplification ($\mathcal{PT}$-symmetry-broken state) with modes whose refractive indices are $n_{\rm eff} = {\rm Re}(n_{\rm eff}) \pm \mathrm{i}~{\rm Im}(n_{\rm eff})$.
The curves in Fig.~\ref{fig:neff_3d} demonstrate this behavior. The non-$\mathcal{PT}$-symmetric phase emerges at the EP around $\gamma_\text{EP} = 4.21 \times 10^{-4}$.
\begin{figure}[t!b!]
\centering
\includegraphics[width=\linewidth]{figures/neff_3d.pdf}
\caption{Effective mode indices versus the non-Hermiticity parameter $\gamma$.
Black curves correspond to the real parts of the effective indices.
Red curves correspond to the imaginary parts of the effective indices.
Dashed (black and red) curves are related to the first supermode whereas dot-dashed ones related to the second supermode.}
\label{fig:neff_3d}
\end{figure}
The Petermann factor for the supermodes in the coupled $\mathcal{PT}$-symmetric waveguides depends on the non-Hermiticity parameter $\gamma$.
One can see in Fig.~\ref{fig:petermann} that the Petermann factors $K_{1,2}$ almost coincide for the two supermodes.
When $\gamma$ approaches $\gamma_\text{EP}$, $K_{1,2}$ become singular.
This singularity might be considered a consequence of the degeneracy of the modes of the $\mathcal{PT}$-symmetric system at the EP,
but a thorough analysis in Ref.~\cite{Pick2017} demonstrates that the peak value should be finite.
A similar result for the Petermann factor in a $\mathcal{PT}$-symmetric system was also observed in Ref.~\cite{ref:yoo2011}.
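In terms of the coupled-mode quantities, Eq.~\eqref{eq:K_kl_sym} gives $K_{1,2}=2/(1+\cos2\theta)=1/\big(1-(\alpha/2\kappa)^2\big)$ below the EP, which indeed diverges as $\alpha\to2\kappa$. A minimal numerical sketch of this prediction follows; the device-specific mapping $\gamma\mapsto\alpha$ is not reproduced here:
\begin{verbatim}
# Sketch: coupled-mode prediction K = 1/(1 - (alpha/2kappa)^2) for the
# Petermann factor below the EP; the EP sits at alpha/(2*kappa) = 1.
import numpy as np

ratio = np.linspace(0.0, 0.99, 12)     # alpha/(2*kappa)
K = 1.0 / (1.0 - ratio**2)
for r, k in zip(ratio, K):
    print(f"alpha/2kappa = {r:.2f}  ->  K = {k:7.2f}")
\end{verbatim}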
\begin{figure}[t!b!]
\centering
\includegraphics[width=0.9\linewidth]{figures/petermann_vs_gamma.pdf}
\caption{Petermann factors $K_1$ and $K_2$ respectively for the first and second supermodes of the $\mathcal{PT}$-symmetric coupled waveguides as functions of the non-Hermiticity parameter $\gamma$.
Parameters of the waveguide coupler: $g=0.8$ $\mu$m, $h=0.2$ $\mu$m, $n_\mathrm{cl}=1.444$,
and $n_\mathrm{co}=3.478$.
}
\label{fig:petermann}
\end{figure}
Bearing in mind the theory developed in the previous section, we now explore the Purcell factor $F_p$ as the enhancement factor of the spontaneous emission rate coupled to the pair of TE-like modes computed in Section II.
According to Eq.~\eqref{eq:Fpurcell}, the Purcell factor is defined by the fields of the reciprocal modes at the dipole position ($x_0$, $y_0$, $z_0\approx z_1 \approx z_n$).
In Fig.~\ref{fig:Fp_in_plane}, we show the Purcell factor for an $x$-oriented dipole as a function of
$x_0$~and~$y_0$ for different values of the parameter $\gamma$ (the imaginary part of the Gain waveguide refractive index $n_r$).
One can see in Fig.~\ref{fig:Fp_in_plane} that the modal Purcell factor is symmetric in
the (a) Hermitian regime as well as in the (b) $\mathcal{PT}$-symmetric and (c) $\mathcal{PT}$-symmetry-broken regimes.
The Purcell factor $F_p$ is less than 1, taking a maximum value of approximately 0.4 in the centers of the waveguides.
According to Fig.~\ref{fig:Fp_gamma_y_is_0}, the diagonal and non-diagonal terms have opposite signs and close absolute values. This explains the small values of the modal Purcell factor in spite of the enhancement of $F_{p,\mathrm{diag}}$ and $F_{p,\mathrm{non-diag}}$ and their divergence at the EP.
Such behavior agrees well with the result obtained in Section \ref{sec:method}
using the coupled-mode theory; namely, the numerically observed distribution of the modal Purcell factor is
similar in the Hermitian,
$\mathcal{PT}$-symmetric, and $\mathcal{PT}$-symmetry-broken regimes. The independence from the non-Hermiticity parameter $\gamma$, including at the exceptional point $\gamma_{EP}$, demonstrated in Fig.~\ref{fig:Fp_x_gamma}, also confirms the analytical predictions given by Eqs.~\eqref{eq:Fp_sym} and \eqref{eq:Fp_broken}.
\begin{figure}[t!b!]
\includegraphics[width=\linewidth]{figures/Fp.pdf}
\caption{Purcell factor distribution in the plane ($x$, $y$) (a) for the Hermitian system characterized by $\gamma = 0$, (b) in the $\mathcal{PT}$-symmetric phase ($\gamma=3.5\times 10^{-4}$), (c) in the broken-$\mathcal{PT}$-symmetric state ($\gamma=8\times 10^{-4}$).
Parameters of the waveguide coupler: $g=0.8$ $\mu$m, $h=0.2$ $\mu$m, $n_\mathrm{cl}=1.444$,
and $n_\mathrm{co}=3.478$.
}
\label{fig:Fp_in_plane}
\end{figure}
\begin{figure}[t!b!]
\centering
\includegraphics[width=0.9\linewidth]{figures/Fp_y_is_0.pdf}
\caption{Distribution of the Purcell factor (a) diagonal and (b) non-diagonal terms depending
on the emitter position $x_0$ at $y_0=0$ for different values of $\gamma$. Parameters of the coupled waveguide are given in the caption of Fig. \ref{fig:Fp_in_plane}.}
\label{fig:Fp_gamma_y_is_0}
\end{figure}
\begin{figure}[t!b!]
\centering
\includegraphics[width=\linewidth]{figures/Fp_x_gamma.pdf}
\caption{Distribution of the Purcell factor at the line
$y=0$ as function of the emitter position $x$ and non-Hermiticity parameter~$\gamma$. Parameters of the coupled waveguide are given in the caption of Fig. \ref{fig:Fp_in_plane}. }
\label{fig:Fp_x_gamma}
\end{figure}
It is known that a phase transition can also occur in entirely passive couplers, where the channels are either lossy or lossless.
The $\mathcal{PT}$ symmetry is then not exact \cite{ref:guo2009}.
We study a passive coupler with the same geometry as the coupler described previously in this paper.
In the passive coupler, the Gain waveguide is substituted with a lossless waveguide.
The imaginary part of the refractive index of the lossy waveguide is chosen to be $-2\gamma$.
For this choice of parameters,
the phase transition in the passive coupler occurs at the same point as in the original $\mathcal{PT}$-symmetric coupler.
This can be observed in Fig.~\ref{fig:neff_3d_passive}.
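Within coupled-mode theory this coincidence follows from the decomposition of the passive coupler's refractive indices,
\begin{equation*}
\{\,n_\mathrm{co},\; n_\mathrm{co}-2\mathrm{i}\gamma\,\}
= \big(n_\mathrm{co}-\mathrm{i}\gamma\big) + \{+\mathrm{i}\gamma,\,-\mathrm{i}\gamma\},
\end{equation*}
i.e., the passive coupler is the $\mathcal{PT}$-symmetric coupler with gain/loss $\pm\gamma$ superimposed on a uniform background loss (cf. Ref.~\cite{ref:guo2009}). The background loss merely shifts both supermode indices by a common imaginary constant and therefore affects neither the splitting nor the position of the EP; this argument is exact only at the level of coupled-mode theory.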
\begin{figure}[t!b!]
\centering
\includegraphics[width=\linewidth]{figures/passive/neff_3d.pdf}
\caption{Effective mode indices for the passive coupler versus the non-Hermiticity parameter $\gamma$.
Black curves correspond to real parts of the effective indices.
Red curves correspond to imaginary parts.
Dashed (black and red) curves are related to the first supermode whereas dot-dashed ones related to the second supermode. Parameters of the passive waveguide coupler: $g=0.8$ $\mu$m, $h=0.2$ $\mu$m, $n_\mathrm{cl}=1.444$,
and $n_\mathrm{co}=3.478$.}
\label{fig:neff_3d_passive}
\end{figure}
The Petermann factor is resonant at the exceptional point in the passive system as well (see Fig.~\ref{fig:petermann_passive}), and the modal Purcell factor, in analogy with the true $\mathcal{PT}$-symmetric system,
shows no dependence on the non-Hermiticity parameter $\gamma$, as confirmed by
Figs.~\ref{fig:Fp_in_plane_passive}~and~\ref{fig:Fp_x_gamma_passive}.
We have verified the results for the modal Purcell factor in the passive system by finite-difference time-domain (FDTD) simulations.
We have investigated the Purcell enhancement for an $x$-polarized dipole source placed in the center of the lossless waveguide at different values of $\gamma$.
In full agreement with the results obtained using the reciprocity approach, we have found almost no change in the Purcell factor in comparison to that in the Hermitian system.
The FDTD simulations were performed using an open-source software package~\cite{ref:oskooi2010}.
\begin{figure}[t!b!]
\centering
\includegraphics[width=0.9\linewidth]{figures/passive/petermann_vs_gamma.pdf}
\caption{Petermann factors $K_1$ and $K_2$ respectively for the first and second supermodes of the passive coupler
as functions of the non-Hermiticity parameter $\gamma$. Parameters of the guiding system are given in the caption of Fig. \ref{fig:neff_3d_passive}.}
\label{fig:petermann_passive}
\end{figure}
\begin{figure}[t!b!]
\includegraphics[width=\linewidth]{figures/passive/Fp.pdf}
\caption{Purcell factor distribution in the plane ($x$, $y$)
(a) below phase transition ($\gamma=3.5\times 10^{-4}$),
(b) above the phase transition ($\gamma=8\times 10^{-4}$).
Parameters of the guiding system are given in the caption of Fig. \ref{fig:neff_3d_passive}.
}
\label{fig:Fp_in_plane_passive}
\end{figure}
\begin{figure}[t!b!]
\centering
\includegraphics[width=\linewidth]{figures/passive/Fp_x_gamma.pdf}
\caption{Distribution of the Purcell factor at the line
$y=0$ as function of emitter position $x$ and non-Hermiticity parameter~$\gamma$.
Parameters of the guiding system are given in the caption of Fig. \ref{fig:neff_3d_passive}.}
\label{fig:Fp_x_gamma_passive}
\end{figure}
\section{Summary}
In this paper, we have reported on the investigation of the spontaneous emission rate enhancement for a point-source emitter in a $\mathcal{PT}$-symmetric system of coupled waveguides.
We have generalized the reciprocity technique proposed in Ref.~\cite{ref:schulz2018}, taking into account the non-orthogonality of the modes of the $\mathcal{PT}$-symmetric system.
We have shown analytically, using the coupled-mode approach, that the Purcell factor for the $\mathcal{PT}$-symmetric system of coupled waveguides does not depend on the non-Hermiticity, taking close values for the Hermitian and $\mathcal{PT}$-symmetric systems.
Even at the exceptional point, where the Petermann factor diverges due to the modes'
self-orthogonality, the modal Purcell factor remains finite and almost coincides with
that for the Hermitian system.
This behavior of the Purcell factor results from the interplay of the in-mode and cross-mode
terms, which themselves diverge at the EP but compensate each other.
This result is supported by the general theory of spontaneous emission near exceptional points developed in Ref.~\cite{Pick2017}, where a rigorous treatment of degeneracies shows that the Purcell factor remains finite even in gain-assisted systems.
\section{Acknowledgements}
We acknowledge Sergei Mingaleev for valuable comments and the VPIphotonics company for providing Mode Designer\texttrademark\ as mode-solving software. A.K. acknowledges Israel Innovation Authority KAMIN program Grant No. 69073. F.M. and A.N. thank the Belarusian Republican
Foundation for Fundamental Research (Project No. F18R-021).
\section{Introduction}
Evolutionary game theory, introduced in the 70s by Taylor and Jonker \cite{taylor1978evolutionary} and Maynard Smith \cite{smith1982evolution},
is a beautiful mix of biology and game theory.
Each player in a population interacts repeatedly with other players by playing a game,
obtaining a pay-off as a reward or punishment for the pure strategies or actions they used. We
can now consider an {\it evolutive} mechanism, where agents with higher pay-offs have more
offspring, or an {\it adaptive} one, where less successful players change strategies and
imitate the ones which performed better. This process is mathematically formalized using
systems of ordinary differential equations or difference equations. Essentially, we have an
equation describing the evolution of the proportion of players using each pure strategy,
usually in the form of a rate equation where the growth rate is proportional to the {\it fitness} of the strategy.
The so-called replicator equations (see \eqref{ReplicatorSystemIntro} below) are a famous and well-studied example of such systems. However,
observe that the fitness also evolves, since it depends on the distribution of the population over the strategies, so
interesting problems appear concerning the existence and stability of fixed points, and their
relationship with the Nash equilibria of the underlying game.
We refer the interested reader to the monographs \cite{cressman2013stability,hofbauer1998evolutionary,sandholm2010population} for details.
Similar mechanisms where players use mixed strategies are less frequent in the literature.
Recently, in \cite{pinasco2018game} we introduced an evolutive model for finitely many players, and we obtained
a system of ordinary differential equations describing the evolution of the mixed strategy of each agent.
The number of equations is then proportional to the number of players.
Thus, the limit of infinitely many players seems to be intractable in this way.
However, a simple hydrodynamic interpretation is possible, leading to a first order
partial differential equation modeling the strategy updates: we can think of the players as
immersed in a fluid, flowing in the simplex $\Delta$ of mixed strategies, following the drift induced by
the gradient of the fitness of the strategies given the distribution
of the whole population on this simplex.
\medskip
Let us present a brief outline of our model and the main results of this work; see Section
\S 2 for the precise definitions, notations, and previous results. We consider a population of
agents playing a finite, symmetric, zero sum game, with pay-off matrix $A$. Each player
starts with a given mixed strategy, i.e. a probability distribution on the set of pure strategies.
They are randomly matched in pairs, and each selects at random a pure strategy according to its
mixed strategy, which is then used to play the game. After the match, each player
changes its mixed strategy by adding a small quantity $h$ to the winning strategy and
reducing the losing one by the same amount. Some care is necessary in the definition of $h$
to avoid values greater than one, or less than zero. This is needed only near the boundary of
the simplex, so we replace the constant $h$ by a function $h(p)$ satisfying
$h(p)<dist(p,\partial \Delta)$. This adaptive, microscopic rule, first introduced and analyzed
in \cite{pinasco2018game} for finitely many players, induces a flow of the players in the
simplex, whose study is the main purpose of this paper.
Let us call $u_t^h$ the distribution of agents on the simplex.
We can think of $u_t^h(p)dp$ as the probability of finding a player whose strategy lies in a cube of
volume $dp$ centered at $p$.
Now, it is possible to describe the time evolution of $u_t^h$ with a Boltzmann-like equation,
whose collision part reflects the dynamics in the changes of strategies due to encounters.
This procedure is strongly inspired by the kinetic theory of rarefied gases and granular flows
and has been successfully implemented
to model a wide variety of phenomena in applied sciences (see e.g. \cite{arlotti2002generalized,bellomo2008modeling,bellomo2013complex,pareschi2013interacting,PPS,perez2018opinion} and
the surveys in Ref. \cite{naldi2010mathematical} for further details).
However, Boltzmann-like equations are challenging objects to study. Performing the so-called
grazing limit or quasi-invariant limit (see
\cite{degond1992fokker,desvillettes2001rigorous,desvillettes1992asymptotics,pareschi2013interacting}),
we can approximate them by a Fokker-Planck equation which is satisfied by $v_t= \lim_{h\to 0}
u_{t/h}^h$. Besides the intrinsic stochastic nature of the interactions, we can add a small
noise to the agents' movements. The Fokker-Planck equation then reads
$$
\frac{\partial v_t}{\partial t} + div(\mathcal{F}[v_t]v_t)
= \lambda \sum_{i,j=1}^d Q_{i,j} \frac{\partial^2 }{\partial p_i \partial p_j}(Gv_t),
$$
where $Q$ and $G$ depend on $v_t$ and the intensity of the noise, $\lambda\ge 0$ depends on the ratio of the noise to
the convection term, and the vector-field $\mathcal{F}[v_t]$, which depends on $v_t$, is given by
$$ \mathcal{F}_i[v_t](p)=h(p)[p_ie_i^TA\bar{p}(t) + \bar{p}_i(t)e_i^TAp],$$
with $ \bar{p}(t)=\int_\Delta p\,dv_t(p)$ the mean strategy at time $t$.
In particular, when $\lambda=0$, i.e., when the convection term dominates the noise, we obtain the first order, nonlocal, mean field equation
\begin{equation}\label{TransportEquIntro}
\frac{\partial v}{\partial t} + div(\mathcal{F}[v_t]v_t) = 0.
\end{equation}
Existence and uniqueness of solutions for \eqref{TransportEquIntro} follow by using the
classical ideas of \cite{braun1977vlasov,dobrushin1979vlasov,neunzert1974approximation}
(see also \cite{canizo2011well,golse2016dynamics}).
One of the main focuses of the paper is the study of the long time behavior of solutions to
\eqref{TransportEquIntro} and the stability of the stationary solutions of the form $v = \delta_p$,
where $\delta_p$ is the Dirac mass at $p$, which corresponds to the case where all the
players use the same mixed strategy $p$.
These issues are closely related to the behaviour of the integral curves of the vector-field $\mathcal{F}$.
It is worth noticing the close resemblance of $\mathcal{F}$ with the replicator equations
\begin{equation}\label{ReplicatorSystemIntro}
\frac{dp_i}{dt} = p_i(e_i^TAp - p^TAp).
\end{equation}
We can thus expect a relationship between the long-time behaviour of the solutions of
\eqref{TransportEquIntro} and of the solutions to the replicator equations
\eqref{ReplicatorSystemIntro}. A well-known result in evolutionary game theory, known as the
Folk Theorem (see \cite{hofbauer2003evolutionary} and Theorem \ref{FolkThm} below),
relates the long-time behaviour of the solution of the replicator equation to the Nash
equilibria of the zero-sum game with pay-off matrix $A$.
\medskip
Our first main theorem can be thought of as a generalization of the Folk Theorem.
Indeed, we prove that the following statements are equivalent:
\begin{itemize}
\item $p$ is a Nash equilibrium of the game,
\item $\delta_p$ is a stationary solution to \eqref{TransportEquIntro},
\item $p$ is an equilibrium of the replicator equations \eqref{ReplicatorSystemIntro},
\item $Ap=0$, where $A$ is the pay-off matrix of the game.
\end{itemize}
Our second main theorem states that if $v_t$ is a solution to \eqref{TransportEquIntro}, then
the mean strategy of the population, $$ \bar{p}(t) = \int_\Delta p\, dv_t,$$ is a solution of the
replicator equations \eqref{ReplicatorSystemIntro} as long as the support of $v_t$ stays in the set $\{h=c\}$. See Section \S
5 for the precise statements of both theorems.
Finally, we show some results about the asymptotic behaviour of the solution $v_t$ of
\eqref{LimitEqq} and its relationship with the game with pay-off matrix $A$.
In the simplest case, namely that of a two-strategies game, we can precisely describe
the asymptotic behavior of $v_t$. Then, we turn our attention to symmetric games with an
arbitrary number of strategies. Following \cite{sandholm2010population}, a zero-sum
symmetric game has no stable interior equilibria, and periodic solutions to the replicator
equations appear, as in the classical rock-paper-scissor game. We will show that if all the
trajectories of the replicator equations are periodic orbits, then $v_t$ is also periodic.
\bigskip
Let us briefly compare our work with others dealing with similar issues. To our knowledge, the first
work dealing with evolutionary game theory for mixed strategies is due to Boccabella,
Natalini and Pareschi \cite{BNP}. They considered an integro-differential equation modeling
the evolution of a population on the simplex of mixed strategies following the idea behind
the replicator equations. That is, the population at a fixed strategy $p$ will increase
(respectively, decrease) if the expected pay-off given the distribution of agents in the
simplex is greater than (resp., lower than) the mean pay-off of the population. A full analysis
of the stability and convergence to equilibria is performed for binary games. Let us remark
that
their dynamics inherit several properties of the replicator equations, and if some mixed strategy $p$
has zero mass in the initial datum, the solution will remain equal to zero forever. So, agents cannot learn
the optimal way to play, and they face extinction or not depending on the
mixed strategies which are present in the initial datum. In some sense, this is equivalent to
considering each mixed strategy as a pure one in a new game. The mathematical theory for
infinitely many pure strategies was developed by Oechssler and Riedel in
\cite{oechssler2001evolutionary}, and studied later by Cressman
\cite{cressman2005stability}, Ackleh and Cleveland \cite{cleveland2013evolutionary}, and
also Mendoza-Palacios and Hern{\'a}ndez-Lerma \cite{mendoza2015evolutionary}.
Finally, we can also cite \cite{albi2019boltzmann,marsan2016stochastic,salvarani2019kinetic,tosin2014kinetic}, where
binary (or more general) games have pay-offs depending on some real parameters, and the authors
study the distribution of players on this space of parameters (say, wealth, velocity, opinion,
among many other characteristics). The strategy played in the binary game is
selected using partial or total knowledge of the global distribution of agents. For example,
we can assume that $x$ represents the wealth of an agent, and in the game the agents will
cooperate or not depending on the median of the society; if they cooperate, a fraction of
wealth is transferred from the richer to the poorer, while if they do not
cooperate, no changes occur.
\bigskip
The paper is organized as follows. In Section \S 2 we first recall some basic results concerning
game theory, the replicator equations and measure theory to make the paper self-contained.
We then present in Section \S 3 the model we are concerned with, and we deduce in
Section \S 4 the partial differential equations allowing us to study the long-time behaviour of the
system. In Section \S 5 we prove the generalization of the Folk Theorem. Section \S 6 is
devoted to the study of the asymptotic behaviour and the stability of the stationary solutions
to the mean-field equation \eqref{TransportEquIntro} and their relationship with the
replicator equations, and we present agent-based and numerical solutions of the
model. The lengthy or technical proofs of existence and uniqueness of solutions of the relevant
equations are postponed to the Appendix for better readability
of the paper.
\section{Preliminaries. }
\subsection{Preliminaries on game theory}
We briefly recall some basics of game theory and refer to the excellent references
available on general game theory for more details (e.g., \cite{laraki2019mathematical}).
Since we are concerned in this paper with two-player, symmetric, zero-sum, finite games in
normal form, we will limit our exposition to this setting.
A two-player finite game in normal form consists of two players named I and II, two finite
sets $S^1=\{s_1,\ldots,s_{d_1}\}$ and $S^2=\{\tilde s_1,\ldots,\tilde s_{d_2}\}$, and two
matrices $A=(a_{ij})_{1\le i\le d_1,1\le j\le d_2}$, $B=(b_{ij})_{1\le i\le d_1,1\le j\le d_2} \in
\mathbb{R}^{d_1\times d_2}$. The elements of $S^1$ (respectively, $S^2$) are the pure strategies of
the first (resp., second) player and model the actions he can choose. Once both players
have chosen a pure strategy each, say I chose $s_i\in S^1$ and II chose $\tilde s_j\in S^2$,
then I receives the pay-off $u^1(s_i,\tilde s_j):=a_{ij}$ and II receives $u^2(s_i,\tilde
s_j)=:b_{ij}$. Thus the pay-off received by each player depends on the pure strategies chosen
by the players and on their pay-off functions $u^1$ and $u^2$. It is convenient to identify
$s_i$ with the $i$-th canonical vector $e_i$ of $\mathbb{R}^{d_1}$, and $\tilde s_j$ with the $j$-th
canonical vector $e_j$ of $\mathbb{R}^{d_2}$. This way the pay-offs of I and II are
$$ u^1(s_i,\tilde s_j)=e_i^TAe_j=a_{ij},\qquad u^2(s_i,\tilde s_j)=e_i^TBe_j=b_{ij}. $$
The game is said:
\begin{itemize}
\item symmetric, when I and II are indistinguishable both from the point of view of the sets
of actions available and the pay-off functions: \begin{itemize} \item[] $S^1=S^2=:S$, and
\item[] $u^1(s,\tilde s)=u^2(\tilde s,s)$ for any $(s,\tilde s)\in S\times S$.\end{itemize}
This last equality means that $A=B^T$.
\item zero-sum if $u^1+u^2=0$, i. e. $B=-A$. This means that the gain of one player is
exactly the loss of the other one.
\end{itemize}
Notice in particular that the game is symmetric and zero-sum if and only if $A^T=-A$, in other
words, the matrix $A$ is antisymmetric.
We illustrate the above definitions with the popular Rock-Paper-Scissor game.
This is a two-players zero-sum game with pure strategies
$$S^1=S^2=\{Rock, Paper, Scissor\}=\{e_1,e_2,e_3\}$$
(we identifed as before each pure strategy with the canonical vectors of $\mathbb{R}^3$.)
We then define the pay-off matrix $A$ of the first player as
\begin{equation}\label{matrizRPS}
A=\left(\begin{array}{rrr} 0 & -a & b\\ b& 0& -a\\ -a & b & 0\end{array}\right)
\end{equation}
where $a$, $b >0$, and the pay-off matrix of the second player is $B=-A$, the game being
zero-sum by definition. For instance, if I plays Paper and II plays Rock then I earns $a_{21}=b$
(so II loses $b$), and if I plays Scissor and II plays Rock then I earns $a_{32}=-a$ (and thus II gains
$a$). When $a=b$, the matrix $A$ is antisymmetric, meaning that the game is symmetric.
A central concept in game theory is that of Nash equilibrium. A Nash equilibrium is a pair
$(s^*,\tilde s^*)\in S^1 \times S^2$ such that neither player has an incentive to change its
action given the action of the other player: the following hold simultaneously,
$$ u^1(s^*,\tilde s^*)\ge u^1(s,\tilde s^*) \qquad \text{for any $s\in S^1$, $s\neq s^*$, }$$
and
$$ u^2(s^*,\tilde s^*)\ge u^2(s^*,\tilde s) \qquad \text{for any $\tilde s\in S^2$,
$\tilde s\neq \tilde s^*$. }$$ A Nash equilibrium thus models a status quo situation. When
the above inequalities are strict, the Nash equilibrium is said to be strict. However, a Nash
equilibrium in pure strategies does not always exist, as can be seen for instance in the
Rock-Paper-Scissor game \eqref{matrizRPS}: by symmetry, if both players are playing
some fixed pure strategies, at least one of them has an incentive to change to another strategy.
The main mathematical issue here is the lack of convexity of the strategy spaces $S^1$ and
$S^2$. This motivates the introduction of mixed strategies as probability measures over the
set of pure strategies: a mixed strategy for I is a vector $p=(p_1,\ldots,p_{d_1})$ where $p_i$
is the probability to play the i-th pure strategy $s_i$. Thus $p_i\in [0,1]$ and $\sum_i p_i=1$.
Remember that we identify the pure strategies with the canonical vectors of $\mathbb{R}^{d_1}$. Let
$$\Delta_1=\{p=(p_1,\ldots,p_{d_1})\in\mathbb{R}^{d_1}:\, p_1,\ldots,p_{d_1}\ge 0,\, \sum_i p_i=1 \} $$
be the simplex in $\mathbb{R}^{d_1}$, and similarly we denote $\Delta_2$ the simplex in
$\mathbb{R}^{d_2}$. We extend the pay-off functions $u^1$ and $u^2$ to $\Delta_1\times \Delta_2$
as expected pay-offs in the following way: for $(p,\tilde p)\in \Delta_1\times \Delta_2$,
$$ u^1(p,\tilde p) = \sum_{i,j} p_i\tilde p_j u^1(s_i,\tilde s_j)
= \sum_{ij} p_i\tilde p_j a_{ij} = p^TA\tilde p $$ and similarly,
$$ u^2(p,\tilde p) = p^TB\tilde p. $$
We can then extend the notion of Nash equilibrium to mixed strategies, saying that
$(p^*,\tilde p^*)\in \Delta_1\times \Delta_2$ is a Nash equilibrium if at the same time
$$ u^1(p^*,\tilde p^*)\ge u^1(p,\tilde p^*) \qquad \text{for any $p\in \Delta_1$,
$p\neq p^*$, }$$
and
$$ u^2(p^*,\tilde p^*)\ge u^2(p^*,\tilde p) \qquad \text{for any $\tilde p\in \Delta_2$,
$\tilde p\neq \tilde p^*$. }$$ Nash's celebrated theorem states that a finite game in
normal form always has a Nash equilibrium in mixed strategies; we refer to
\cite{geanakoplos2003nash} for a simple proof. Moreover, when the game is symmetric (so
that $S:=S^1=S^2$) there always exists a Nash equilibrium of the form
$(p^*,p^*)\in\Delta\times\Delta$. For instance, in the symmetric Rock-Paper-Scissor game
\eqref{matrizRPS} with $a=b$, the unique symmetric Nash equilibrium is $p^*=(1/3,1/3,1/3)$.
This means that no player has an incentive to deviate when both choose their actions with
equiprobability. Of course, for zero-sum games, the existence of an equilibrium goes back to
Von Neumann's Minimax Theorem.
\subsection{The replicator equations}\label{subseccionreplicador}
The concept of Nash equilibrium is quite static and computationally challenging. Various
dynamical processes have been studied to model the process of learning, that is, the discovery of a
Nash equilibrium by an individual. A popular one is a system of ordinary differential
equations known as the replicator equations, introduced by Taylor and Jonker in
\cite{taylor1978evolutionary} (see also \cite{schuster1983replicator,smith1982evolution}).
Consider a large population of individuals randomly matched in pairs to play a two-player
symmetric game with pay-off matrix $A\in \mathbb{R}^{d\times d}$. The players are divided into $d$
groups according to the pure strategy they use. Let $p(t)=(p_1(t),\ldots,p_d(t))$, where
$p_i(t)$ is the proportion of individuals playing the $i$-th pure strategy $e_i$ at time $t$. We
want to write down an equation for $p_i'(t)$, $i=1,\ldots,d$, modelling the fact that
individuals playing a strategy with high fitness are favored and produce more
offspring than individuals playing a low-fitness strategy.
In the case of the replicator equation, the fitness of an individual playing the $i$-th strategy
is defined as the difference between the expected pay-off received against a randomly
selected individual, and the expected pay-off received by a randomly selected individual
playing against another randomly selected individual. Thus, the fitness of the $i$-th strategy
$e_i$ is $e_i^TAp(t)-p(t)^TAp(t)$. Notice that the fitness depends on the distribution of
strategies in the whole population and changes in time. We then assume that agents with
positive (respectively, negative) fitness have a reproductive advantage (resp., disadvantage),
leading to their reproduction (resp., death) at a rate proportional to their fitness. We thus
arrive at the replicator equations,
\begin{equation}\label{ReplicatorSystem}
\frac{d}{dt}p_i=p_i(e_i^TAp-p^TAp) \qquad i=1,\ldots,d.
\end{equation}
It is easily shown that if $p(0)\in \Delta$ then $p(t)\in \Delta$ for all times
$t\ge 0$.
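As an illustration, the following minimal sketch (in Python) integrates \eqref{ReplicatorSystem} for the Rock-Paper-Scissor matrix \eqref{matrizRPS} with $a=b=1$ by an explicit Euler scheme; the step size is illustrative, and the product $p_1p_2p_3$, a constant of motion for this game, is tracked as a sanity check:
\begin{verbatim}
# Sketch: Euler integration of the replicator equations for symmetric
# Rock-Paper-Scissor (a = b = 1); p* = (1/3,1/3,1/3) satisfies A p* = 0.
import numpy as np

A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
assert np.allclose(A @ np.full(3, 1/3), 0.0)   # interior Nash equilibrium

p, dt = np.array([0.6, 0.3, 0.1]), 1e-4
for _ in range(200_000):
    p = p + dt * p * (A @ p - p @ A @ p)       # replicator step
print(p, p.prod())   # stays in the simplex; p1*p2*p3 stays near 0.018
\end{verbatim}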
There is a strong connection between the rest points of the system \eqref{ReplicatorSystem}
and the Nash equilibria of the game with pay-off matrix $A$, as stated in the so-called {\it Folk
Theorem}:
\begin{theorem}\label{FolkThm}
Let us consider a two-player symmetric game in normal form with finitely many pure
strategies. Then the following hold:
\begin{enumerate}
\item if $(p,p)\in\Delta\times\Delta$ is a symmetric Nash equilibrium, then $p$ is a rest
point of \eqref{ReplicatorSystem},
\item if $(p,p)$ is a strict Nash equilibrium, then it is an asymptotically stable rest point of
\eqref{ReplicatorSystem},
\item if the rest point $p$ is the limit as $t\to +\infty$ of a trajectory lying in the interior of $\Delta$ then $(p,p)$ is a Nash equilibrium,
\item if the rest point $p$ is stable then $(p,p)$ is a Nash equilibrium.
\end{enumerate}
Moreover, none of the converse statements holds.
\end{theorem}
We refer to the surveys \cite{hofbauer2003evolutionary,mikekisz2008evolutionary} for a
proof and related results on the replicator equations.
\subsection{Preliminaries on probability measures and transport equations. }
We denote by $\mathcal{M}({\Delta})$ the space of bounded Borel measures on ${\Delta}$ and by $\mathcal{P}({\Delta})$ the convex
cone of probability measures on ${\Delta}$.
We denote by $\|.\|_{TV}$ the total variation norm on $\mathcal{M}({\Delta})$ defined as
$$ \|\mu\|_{TV} = \sup_{\|\phi\|_\infty\le 1} \int_{{\Delta}} \phi\,d\mu. $$
However, the total variation norm will be too strong for our purpose and it will be more
convenient to work with the weak*-convergence. We say that a sequence $(\mu_n)_n\subset
\mathcal{P}({\Delta})$ converges weak* to $\mu\in \mathcal{P}({\Delta})$ if
$$ \int_{\Delta} \phi\,d\mu_n \to \int_{\Delta} \phi\,d\mu \qquad \text{for any $\phi\in C({\Delta})$. }$$
It is well-known that, since $\Delta$ is compact, $\mathcal{P}({\Delta})$ is compact for the
weak* topology. The weak* topology can be metrized in many ways. It will be convenient to
consider the Monge-Kantorovich or Wasserstein distance on $\mathcal{P}({\Delta})$ defined as
$$W_1(u,v):=\sup \, \left(\int_{\Delta} \varphi(p)\,du(p) - \int_{\Delta} \varphi(p)\,dv(p)\right), $$
where the supremum is taken over all the Lipschitz functions $\varphi$ with Lipschitz
constant $Lip(\varphi)\leq 1$. It is known that $W_1$ is indeed a distance that metrizes
the weak* topology, see \cite{villanioptimal}.
We will work in this paper with first order partial differential equations of the form
\begin{equation}\label{TransportEq}
\partial_t \mu_t + \text{div}(v(x)\mu_t) = 0 \qquad \text{in $\mathbb{R}^d$}
\end{equation}
with a given initial condition $\mu_0\in P(\mathbb{R}^d)$ and
where $v:\mathbb{R}^d\to \mathbb{R}^d$ is a given vector-field, usually assumed to be bounded and globally Lipschitz. A solution to this equation is $\mu\in C([0,+\infty),P(\mathbb{R}^d))$ satisfying
$$ \int_{\mathbb{R}^d} \phi\,d\mu_t = \int_{\mathbb{R}^d} \phi\,d\mu_0
+ \int_0^t \int_{\mathbb{R}^d} v(x)\cdot\nabla\phi(x)\,d\mu_s(x)ds \qquad \text{for any $\phi\in C^\infty_c(\mathbb{R}^d)$.}$$
Let $T_t:\mathbb{R}^d\to \mathbb{R}^d$ be the flow of $v$ defined for any $x\in\mathbb{R}^d$ by
\begin{eqnarray*}
\frac{d}{dt} T_t(x) & =& v(T_t(x)) \qquad \text{for any $t\in\mathbb{R}$,} \\
\qquad T_{t=0}(x)& =& x.
\end{eqnarray*}
It is well-known (see e.g. \cite{villanioptimal}) that equation \eqref{TransportEq} has a
unique solution given by $ \mu_t = T_t\sharp \mu_0$, the push-forward of $\mu_0$ by
$T_t$. This means that $\int_{\mathbb{R}^d} \phi\,d\mu_t= \int_{\mathbb{R}^d} \phi(T_t(x))\,d\mu_0(x)$ for
any $\phi$ bounded and measurable. This result, which is simply a restatement of the
standard method of characteristics, can be generalized to deal with equations with a
non-autonomous vector-field $v(t,x)$ assumed to be continuous and globally Lipschitz in $x$,
uniformly in $t$.
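As a concrete illustration of the push-forward representation, the following minimal sketch (in Python) approximates $\mu_0$ by an empirical measure and transports its atoms along the characteristics; the vector-field used here is illustrative and not one arising from the model:
\begin{verbatim}
# Sketch of mu_t = T_t # mu_0: the atoms of a discrete approximation of
# mu_0 are moved along the flow of v (here the illustrative field v = -x).
import numpy as np

def v(x):
    return -x                         # globally Lipschitz toy field

rng = np.random.default_rng(0)
atoms = rng.normal(size=(1000, 2))    # atoms of the empirical measure
dt = 1e-2
for _ in range(100):                  # integrate dX = v(X) dt up to t = 1
    atoms = atoms + dt * v(atoms)
print(atoms.mean(axis=0), atoms.std())   # observables of mu_t
\end{verbatim}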
\section{Description of the model}
We consider a population of agents. Two randomly selected agents meet, they play a game,
and then update their strategy taking into account the outcome of the game. The game
played during an interaction is always the same. It is a two-player, zero-sum game with a set
$\{s_1,\ldots, s_d\}$ of pure strategies and whose pay-off is given by a matrix
$A=(a_{lm})_{1\le l,m\le d}\in\mathbb{R}^{d\times d}$. We will assume the game is symmetric, i.e.
$A^T=-A$, and without loss of generality we take $|a_{lm}|\le 1$ for any $l,m=1, \ldots, d$.
Each agent $i$ has a mixed strategy $p=(p_1,\dots,p_d)\in{\Delta}$. Here $p_l$ is the probability
that agent $i$ plays the $l$-th pure strategy $s_l$.
When two agents $i$ and $j$ meet and play the game, they update their respective
strategies using a myopic rule: both agents increase by $\delta h(p)>0$ the probability of
playing the winning strategy and decrease by the same amount the losing one. Here,
$\delta$ is a small positive parameter, and $ h(p)$ is a positive function of $p$, ensuring
that the updated strategy $p^*$ remains in ${\Delta}$. For instance, we can take
\begin{align}\label{defdelta}
h (p) := \min\Big\{ \prod_{i=1}^d p_i, c\Big\}.
\end{align}
where $c<1$ is a positive constant, and hence $ h (p)\to 0$ as $p\to \partial {\Delta}$.
More precisely, if the pure strategies $s_l$
and $s_m$ were played, agent $i$ only updates the probabilities $p_l$, $p_m$ to $p^*_{l}$,
$p^*_m$ as follows
\begin{equation}\label{UpDateProba1}
\begin {split}
p^*_l & =
p_l+a_{lm}\delta h(p) \\
p^*_m & =
p_m -a_{lm}\delta h(p),
\end{split}
\end{equation}
Agent $j$ updates the probabilities $\tilde p_l$, $\tilde p_m$ in the same way. Notice that
probabilities are raised/lowered proportionally to the gain/loss $a_{lm}\delta$.
To model the choice made by agent $i$ of which pure strategy to play, we fix a random
variable $\zeta$ uniformly distributed in $[0,1]$ and then consider the random vector $
f(\zeta; p) = ( f_1(\zeta; p),\ldots, f_d(\zeta; p))$ where
\begin{equation}\label{definicionf}
f_i(\zeta; p):= \begin{cases}
1&\text{ if } \sum_{j<i}p_j\leq \zeta <\sum_{j\leq i}p_j,\\
0 & \text{ otherwise.}
\end{cases}
\end{equation}
Notice that $f_i(\zeta; p)=1$ with probability $p_i$.
Agent $j$ fixes in the same way a random variable $\tilde\zeta$ uniformly distributed in $[0,1]$.
Then $f(\zeta,p)^TA f(\tilde \zeta,\tilde p)\in [-1,1]$ is the pay-off of $i$ when playing against $j$ (recall that the coefficient of $A$ belongs to $[-1,1]$).
We can thus rewrite the updating rule \eqref{UpDateProba1} as
\begin{equation*}
p_i^*=
\begin{cases}
p_i+\delta h (p) f(\zeta,p)^TA f(\tilde \zeta,\tilde p)& \text{ if } f_i(\zeta,p)=1 \text{ and } f_i(\tilde \zeta,\tilde p)=0,\\
p_i- \delta h (p) f(\zeta,p)^TA f(\tilde \zeta,\tilde p) &\text{ if } f_i(\zeta,p)=0 \text{ and } f_i(\tilde \zeta,\tilde p)=1,\\
p_i & \text{ otherwise }
\end{cases}
\end{equation*}
We can also add a small noise to $p_i^*$
in the following way. We fix $r>0$ small enough so that $\delta+r<1$, and a smooth function
$G$ such that $G(p)\leq p_i$ for any $p\in{\Delta}$ and any $i$, e.g. $G(p)= h (p)$.
We then consider a uniformly distributed random vector $q$ in ${\Delta}$.
The additive random noise is then taken as $r(q_i-1/d)G(p)$.
\medskip
We thus arrive at the following interaction rule:
\begin{defi} Consider an agent with strategy $p$ interacting with an agent with strategy $\tilde p$ through
the game defined by the matrix $A$. They update their strategies from $p$ to $p^*$, and
$\tilde p $ to $\tilde p^*$, as follows
\begin{equation}\label{interaccion}
\begin{split}
& p^*= p+ \delta h (p) f(\zeta,p)^TA f(\tilde \zeta,\tilde p) \Big(f(\zeta,p)-f(\tilde \zeta,\tilde p)\Big) + r(q -\vec{1}/d )G(p),\\
& \tilde p^*= \tilde p+ \delta h (\tilde p) f(\zeta,p)^TA f(\tilde \zeta,\tilde p) \Big(f(\zeta,p)-f(\tilde \zeta,\tilde p)\Big)
+ r(\tilde q - \vec{1}/d )G(\tilde p),
\end{split}
\end{equation}
where $\vec{1}=(1,\ldots,1)\in {\mathbb{R}^d}$.
\end{defi}
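For concreteness, the following minimal sketch (in Python) implements one interaction \eqref{interaccion} without noise ($r=0$) for the Rock-Paper-Scissor matrix with $a=b=1$; the values of $c$ and $\delta$ are illustrative:
\begin{verbatim}
# Sketch of one noiseless interaction (r = 0): sample pure strategies
# via f of Eq. (definicionf), play the game, update via Eq. (interaccion).
import numpy as np

A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
c, delta = 0.25, 0.05
rng = np.random.default_rng(1)

def f(zeta, p):                       # one-hot pure strategy drawn from p
    i = min(np.searchsorted(np.cumsum(p), zeta, side='right'), len(p)-1)
    e = np.zeros(len(p)); e[i] = 1.0
    return e

def interact(p, q):
    h = min(p.prod(), c)              # h(p) of Eq. (defdelta)
    fp, fq = f(rng.uniform(), p), f(rng.uniform(), q)
    payoff = fp @ A @ fq              # pay-off of the first player
    return p + delta * h * payoff * (fp - fq)

p_new = interact(np.array([0.5, 0.3, 0.2]), np.array([1/3, 1/3, 1/3]))
print(p_new, p_new.sum())             # stays in the simplex, sums to 1
\end{verbatim}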
Let us remark that $p^*$ and $\tilde p^*$ are random variables. Indeed, there are two
sources of randomness in the updating rule. First, there is the presence of the random
variables $\zeta$ and $\tilde \zeta$, which model the fact that the players choose the pure
strategy they are about to play at random using their mixed strategies $p$, $\tilde p$. The second
source of randomness is the noise $r(q -\vec{1}/d )G(p)$.
Let us verify now that $p^*$ remains in the simplex $ {\Delta}$.
\begin{lem}
The strategy $p^*$ belongs to ${\Delta}$.
\end{lem}
\begin{proof} Starting from
$$
p^*= p+ \delta h (p) f(\zeta,p)^TA f(\tilde \zeta,\tilde p) \Big(f(\zeta,p)-f(\tilde \zeta,\tilde p)\Big) + r(q -\vec{1}/d )G(p),
$$
we have, for any $i=1,\dots,d$, using that $h(p)\le\prod_j p_j\le p_k$ and $G(p)\le p_k$ for every $k$,
$$ p_i^*\le p_i+\delta h (p)+rG(p)\le p_i + (1-p_i)(\delta +r) \le 1, $$
$$ p_i^*\geq p_i-\delta h (p)-rG(p)\geq p_i (1-\delta -r) \geq 0. $$
To conclude the proof, let us show that $\sum_{i=1}^dp_i^*=1$. Since $f(\zeta,p)$ and
$f(\tilde \zeta,\tilde p)$ are vectors whose components are all equal to zero but one which
is equal to one, we have $$\sum_{i=1}^d \Big[f_i(\zeta,p)-f_i(\tilde \zeta,\tilde p)\Big]=0,$$
and, since $q\in {\Delta}$, we have
$$
\sum_{i=1}^d (q_i -1/d)=0.
$$
Then
\begin{eqnarray*}
\sum_{i=1}^dp_i^*
= \sum_{i=1}^dp_i+\delta h (p) f(\zeta,p)^TA f(\tilde \zeta,\tilde p) \sum_{i=1}^d\Big(f_i(\zeta,p)-f_i(\tilde \zeta,\tilde p)\Big)+
rG(p)\sum_{i=1}^d (q_i -1/d)
= 1.
\end{eqnarray*}
\end{proof}
\section{A Boltzmann-like equation and its grazing limit. }
We now consider an infinite population of agents interacting through the game defined by
the matrix $A$ and updating their strategies after each interaction according to the rule
\eqref{interaccion}. We denote by $u_t$ the distribution of agents on the simplex of mixed
strategies at time $t$. Notice that $u_t$ is thus a probability measure on ${\Delta}$. Intuitively, if
this probability measure is regular enough, we can identify $u_t$ with its density, and $u_t(p)dp$
is roughly the proportion of agents whose strategy belongs to a neighborhood of volume
$dp$ around $p$.
\subsection{Boltzmann-type equation}
Let us find an equation describing the time evolution of $u_t$. However, since $u_t$ is a
measure, we can only hope to find an equation in weak form, that is, an equation for each observable
$\int_{\Delta} \varphi(p)\, du_t(p)$, with $\varphi\in C({\Delta},\mathbb{R})$. Observe that this integral is the
mean value at time $t$ of some macroscopic quantity. For instance if $\varphi\equiv 1$ then
$\int_{\Delta} \varphi(p)\, du_t(p)=u_t({\Delta})$ is the total mass of $u_t$, which should be
conserved. If $\varphi(p)=p$ then $\int_{\Delta} \varphi(p)\, du_t(p)=\int_{\Delta} p\,du_t(p)$ is the
mean strategy in the population. We will see later that it is strongly related to the replicator
equation \eqref{ReplicatorSystem}.
We can also think of $u_t$ as the law of the stochastic process $P_t$ giving the strategy of
an arbitrary agent. Then $\int_{\Delta} \varphi(p)\, du_t(p)=\mathbb{E}[\varphi(P_t)]$ is the expected value of
$\varphi(P_t)$.
We assume that interactions take place following a unit rate Poisson process. Notice that if
the Poisson process has constant rate, we can always assume that the rate is one up to a
linear time re-scaling. Then it is standard to show that
\begin{equation}\label{boltzmannconbeta}
\frac{d}{dt} \int_{\Delta} \varphi(p) \,du_t (p) =
\int_{ {\Delta}^2} \mathbb{E}[\varphi(p^*)-\varphi(p)] \, du_t(p)du_t(\tilde p),
\end{equation}
see the book \cite{pareschi2013interacting} for more details.
The following result states the existence and uniqueness of solutions to this equation.
\begin{theorem}\label{existenciaboltzmann}
For any initial condition $u_0\in \mathcal{P}({\Delta})$ there exists a unique
$u\in C([0,\infty),\mathcal{P}({\Delta})) \cap C^1((0,\infty),\mathcal{M}({\Delta})) $ satisfying
$$
\int_{\Delta} \varphi(p) \, du_t(p) = \int_{\Delta} \varphi(p) \, du_0(p)
+\int_0^t \int_{ {\Delta}^2} \mathbb{E}[\varphi(p^*)-\varphi(p)] \, du_s(p)du_s(\tilde p)ds
$$
for any test-function $\varphi\in C({\Delta})$.
\end{theorem}
\noindent The proof of this result is classical and mainly based on the Banach fixed-point
theorem. It can also be proved by viewing \eqref{boltzmannconbeta} as an ordinary differential
equation in a Banach space following Bressan's insight \cite{bressan2005notes} (see also
\cite{alonso2016cauchy}). For the reader's convenience we provide the main steps of the proof
in the Appendix.
\subsection{Grazing limit}
We fix an initial condition $u_0\in \mathcal{P}({\Delta})$ and denote $u^\delta$ the solution of \eqref{boltzmannconbeta}
given by Theorem \ref{existenciaboltzmann} corresponding to interaction rules \eqref{interaccion}.
Notice that when $\delta \simeq 0$, $|p^*-p|\simeq 0$, so that
$\varphi(p^*)-\varphi(p)\simeq (p^*-p)\cdot\nabla\varphi(p) + \frac12 (p^*-p)^TD^2\varphi(p)(p^*-p)$.
Taking the expected value we thus obtain
$$ \mathbb{E}[\varphi(p^*)-\varphi(p)]
\simeq \mathbb{E}[p^*-p]\cdot\nabla\varphi(p) + \frac12 \sum_{i,j=1}^d\mathbb{E}[(p^*-p)_i(p^*-p)_j]\,\partial_{ij}\varphi(p). $$
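Let us sketch the first moment computation. The noise term has zero mean since $\mathbb{E}[q]=\vec 1/d$, and, using $\mathbb{E}[f_l(\zeta,p)]=p_l$ together with the independence of $\zeta$ and $\tilde\zeta$,
\begin{align*}
\mathbb{E}[p_i^*-p_i] &= \delta h(p)\sum_{l,m=1}^d p_l\,\tilde p_m\, a_{lm}\,(\delta_{il}-\delta_{im})
= \delta h(p)\Big(p_i\, e_i^TA\tilde p-\tilde p_i \sum_{l=1}^d p_l\, a_{li}\Big)\\
&= \delta h(p)\big(p_i\, e_i^TA\tilde p+\tilde p_i\, e_i^TA p\big),
\end{align*}
where the last equality uses $A^T=-A$; integrating against $du_t(\tilde p)$ replaces $\tilde p$ by the mean strategy and produces, at order $\delta$, the drift appearing below.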
Considering the new time scale $\tau = \delta t$,
we obtain that we can approximate \eqref{boltzmannconbeta}
when $\delta \simeq 0$ by a local equation of the form
\begin{equation}\label{LimitEq}
\partial_\tau v+\text{div}(\mathcal{F}[v]\,v)=\frac{\lambda }{2} \sum_{i,j=1}^dQ_{ij}\partial_{ij}(G^2\, v).
\end{equation}
Here $\lambda := r^2 /\delta$, the vector-field $\mathcal{F}[v]$ has components
\begin{equation}\label{defcampo}
\begin{split}
\mathcal{F}_i[v_\tau ](p ) & = \sum_{k=1}^d h (p) a_{ik}(p_i\bar p_k(\tau ) +p_k\bar p_i(\tau )) \\
& = h (p) (p_i e_i^TA \bar p(\tau) + \bar p_i(\tau)e_i^T A p),
\end{split}
\end{equation}
where
$$\bar p(\tau )=\int_{\Delta} p\,dv_\tau (p)$$
is the mean strategy at time $\tau $,
and the diffusion coefficient $Q_{ij}$ is the covariance matrix of the law $\theta$ of the uniform random vector $q$ on ${\Delta}$, namely
\begin{equation}\label{defQ}
Q_{ij}:= \int_{\Delta} (q_i -1/d )(q_j -1/d )\,d\theta(q).
\end{equation}
We say that $v\in C([0, \infty), \mathcal{P}({\Delta}))$ is a solution to equation \eqref{LimitEq} if
\begin{equation}\label{DefWeakSol}
\begin{split}
& \int_{\Delta} \varphi(p)\,dv_t(p) - \int_{\Delta} \varphi(p)\,du_0\\
& \quad = \int_0^t \int_{{\Delta}}\nabla\varphi(p)\cdot \mathcal{F}[v_s](p)\,dv_s(p)
+\frac{\lambda}{2}\sum_{i,j=1}^d Q_{ij} \int_{{\Delta} } \partial_{ij}\varphi(p) G^2(p) \, dv_s(p) ds
\end{split}
\end{equation}
for any $\varphi\in C^2({\Delta})$.
The above procedure is relatively well-known in the literature as the grazing limit. It was
introduced in the context of socio- and econophysics modelling by Toscani
\cite{toscani2006kinetic}; see also \cite{pareschi2013interacting}. It can be rigorously
justified, yielding the following theorem:
\begin{theorem}\label{grazing}
Given an initial condition $u_0\in\mathcal{P}({\Delta})$, let $u^\delta$ be the solution to
equation \eqref{boltzmannconbeta} given by Theorem \ref{existenciaboltzmann}
corresponding to interaction rule \eqref{interaccion}.
Assume that, as $\delta,r\to 0$, we have $r^2 /\delta\to\lambda$. Let $\tau=\delta t$ and
define $u^\delta_\tau:=u^\delta_t$. Then there exists $v\in C([0, \infty), \mathcal{P}({\Delta}))$
such that, as $\delta\to 0$ up to a subsequence, $u^\delta \to v$ in
$C([0,T],\mathcal{P}({\Delta}))$ for any $T>0$. Moreover, $v$ is a weak solution to equation
\eqref{LimitEq} in the sense of \eqref{DefWeakSol}.
If, instead, $r^2/\delta^\alpha \to \lambda >0$ for some $\alpha\in(0,1)$, then, re-scaling
time by
$\tau=\delta^\alpha t$, we obtain that $u^\delta\to v $ as before with $v$ a solution to
\begin{equation*}
\frac{d}{d\tau}v=\frac{\lambda }{2} \sum_{i,j=1}^dQ_{ij}\partial_{ij}(G^2\, v) .
\end{equation*}
On the other hand, if $r^2/\delta \to 0$, then, re-scaling time, we obtain that $u^\delta\to v
$ with $v$ a solution to
\begin{equation}\label{LimitEqq}
\partial_\tau v+\text{div}(\mathcal{F}[v]\,v)=0.
\end{equation}
\end{theorem}
In the rest of the paper we will focus on this last case, corresponding to the pure transport
equation \eqref{LimitEqq}. Observe that this is a first order, nonlocal, mean field equation,
and following a classical strategy going back at least to Neunzert and Wik
\cite{neunzert1974approximation} (see also
\cite{braun1977vlasov,canizo2011well,dobrushin1979vlasov}), we can prove directly the
well-posedness of equation \eqref{LimitEqq}.
Given $v\in C([0,+\infty),P({\Delta}))$ we denote by $T_t^v$ the flow of the
vector field $\mathcal{F}[v(t)](x)$, namely
$$ \frac{d}{dt} T_t^v(x) = \mathcal{F}[v(t)](T_t^v(x)), \qquad T_{t=0}^v(x)=x. $$
It can be proved (see the Appendix) that $T_t^v(x)\in{\Delta}$ for any $t\ge 0$.
The result is the following:
\begin{theorem}\label{teotransporte}
For any initial condition $u_0\in P({\Delta})$, equation \eqref{LimitEqq} has a unique solution $u$
in $C([0,+\infty),P({\Delta}))$. This solution satisfies
$u(t)=T_t^u\sharp u_0$ for any $t\ge 0$.
Moreover, the solutions depend continuously on the initial conditions. Indeed, there exists a
continuous function $r:[0,+\infty)\to [0,+\infty)$ with $r(0)=1$ such that for any pair of
solutions $v^{(1)}$ and $v^{(2)} $ to equation \eqref{LimitEqq} there holds
\begin{equation*}
W_1(v^{(1)}(t),v^{(2)}(t))\leq r(t) W_1(v^{(1)}(0),v^{(2)}(0)).
\end{equation*}
\end{theorem}
The proofs of Theorems \ref{grazing} and \ref{teotransporte} can be found in the Appendix.
\section{Relationships between the mean-field equation, the replicator equations, and the game. }
In this section we study the relationships between:
\begin{itemize}
\item solutions $v\in C([0,+\infty),\mathcal{P}(\Delta))$ to
the mean-field equation \eqref{LimitEqq}, or in weak form,
\begin{equation}\label{TransportEqu}
\frac{d}{dt} \int_{\Delta} \phi\,dv_t = \int_{\Delta} \mathcal{F}[v_t]\cdot\nabla\phi\,dv_t \qquad \text{for any $\phi\in C^1({\Delta})$,}
\end{equation}
where the vector-field $\mathcal{F}[v]$ is given by \eqref{defcampo},
\item solutions to the replicator equations \eqref{ReplicatorSystem} $$
\frac{d}{dt}p_i=p_i((Ap)_i-p^TAp) \qquad i=1,\ldots,d,
$$
\item Nash equilibria of the symmetric zero-sum game with pay-off matrix $A$.
\end{itemize}
We first relate the stationary solutions of the mean-field equation \eqref{LimitEqq} of the
form $\delta_q$, with $q$ an interior point, to the Nash equilibria of the game. Indeed, we
will prove that $\delta_q$ is a stationary solution if and only if $q$ is a Nash equilibrium.
We then show that the mean strategy of the population satisfies the replicator equations.
Finally, we will study the case of a two-strategies game, where we can precisely describe
the asymptotic behavior of $v_t$, and then we show that, for generalizations of the
classical rock-paper-scissor where the trajectories of the replicator equations are closed
orbits, $v_t$ is also periodic.
\subsection{Nash equilibria and stationary solutions. }
Given a probability measure $v\in\mathcal{P}({\Delta})$, we will slightly abuse notation by considering $v$ as the time-continuous function
from $[0,+\infty)$ to $\mathcal{P}({\Delta})$ constantly equal to $v$.
\begin{defi}\label{definicionestransporte}
We say that $v\in \mathcal{P}({\Delta})$ is an equilibrium or stationary solution of the transport
equation \eqref{TransportEqu} if it is a
solution of \eqref{TransportEqu}, that is, if
\begin{equation}\label{DeltaEquilibrium}
\int_{\Delta} \mathcal{F}[v](p)\cdot\nabla\phi(p)\,dv(p) = 0 \qquad \text{for all $\phi\in C^1({\Delta})$.}
\end{equation}
\end{defi}
We will mainly be interested in the case of equilibrium of the form $v=\delta_q$ for some
interior point $q\in{\Delta}$.
\begin{theorem}\label{teoequilibrio}
Let $q$ be an interior point of ${\Delta}$. The following statements are equivalent:
\begin{enumerate}
\item\label{equilibrioreplicador} $q$ is an equilibrium of the replicator equations \eqref{ReplicatorSystem},
\item\label{equilibriomodelo} $\delta_q$ is an equilibrium of equation
\eqref{TransportEqu} in the sense of definition \ref{definicionestransporte},
\item\label{equilibrioautovector} $q$ belongs to the null space of the matrix $A$,
\item\label{equilibrionash} $q$ is a Nash equilibrium of the symmetric zero-sum game with pay-off matrix $A$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let us first rewrite condition \eqref{DeltaEquilibrium} for $\delta_q$ to be an equilibrium of
equation \eqref{TransportEqu}. First notice that, for $v=\delta_q$, the mean strategy is
obviously $q$. Then from definition \eqref{defcampo} of the vector-field $\mathcal{F}_i[\delta_q]$ we
have for any $p\in {\Delta}$ and any time $t\ge 0$ that
$$ \mathcal{F}_i[\delta_q](p,t)= \sum_{k=1}^d h (p) a_{ik}(p_i q_k +p_kq_i). $$
In particular with $p=q$ we have
$$ \mathcal{F}_i[\delta_q](q,t)= \sum_{k=1}^d h (q) a_{ik}(q_i q_k +q_kq_i)
=2 h (q)q_ie_i^TAq =2 h (q)q_i(e_i^TAq-q^TAq) $$
where we used that $A$ is antisymmetric so that $q^TAq=0$.
We can now rewrite condition \eqref{DeltaEquilibrium}.
Notice that
$$ \int_{\Delta} \mathcal{F}[\delta_q]\cdot\nabla\phi\,d\delta_q = \mathcal{F}[\delta_q](q)\cdot\nabla\phi(q)
= 2 h (q)\sum_{i=1}^d q_i(e_i^TAq-q^TAq) \partial_i\phi(q). $$
Recall that $q$ is an interior point of ${\Delta}$ so that $h(q)\neq 0$. We thus obtain that
$\delta_q$ is an equilibrium of equation \eqref{TransportEqu} if and only if
\begin{equation}\label{DeltaEquilibrium2}
\sum_{i=1}^d q_i(e_i^TAq-q^TAq) \partial_i\phi(q) = 0 \qquad \text{for all $\phi\in C^1({\Delta})$.}
\end{equation}
We can now easily prove that statements (1) and (2) are equivalent. Indeed if $q$ is an
equilibrium of the replicator equations then $q_i((Aq)_i-q^TAq)=0$ for any $i=1,\ldots,d$ and
\eqref{DeltaEquilibrium2} holds. On the other hand if $\delta_q$ is an equilibrium of the
equation \eqref{TransportEqu}, i.e., condition \eqref{DeltaEquilibrium2} holds for any $\phi\in
C^1({\Delta})$, then taking $\phi(p)=p_i$ we obtain $q_i(e_i^TAq-q^TAq)=0$ for any $i=1,\ldots,d$,
so we get that $q$ is an equilibrium of the replicator equations.
We can also prove that (1) and (3) are equivalent. Indeed if (1) holds, then
$q_i(e_i^TAq-q^TAq)=0$ for any $i=1,\ldots,d$. Since $q^TAq=0$ and $q_i>0$ for any
$i=1,\ldots,d$, $q$ being an interior point, the previous equality can be rewritten as
$e_i^TAq=0$ for any $i=1,\ldots,d$, i.e., $Aq=0$. This proves that (1) implies (3). On the other
hand, if $Aq=0$, then $e_i^TAq=0$ for any $i=1,\ldots,d$, so that $q_i(e_i^TAq-q^TAq)=0$ for
any $i$ and (1) holds.
It remains to show that $Aq=0$ if and only if $q$ is a Nash equilibrium.
Suppose that $Aq=0$. Then for any $p\in {\Delta}$,
$$p^TAq=p\cdot \vec{0}=0=q^TAq. $$
Thus, playing any other strategy than $q$ against $q$ does not increase the pay-off. This
means that $q$ is a Nash equilibrium. Let us now assume that $q$ is a Nash equilibrium and
let us prove that $(Aq)_i=0$ for any $i=1,\ldots,d$. If $(Aq)_i>0$ then $e_i^{T}Aq>0=q^TAq$
contradicting that $q$ is a Nash equilibrium. If $(Aq)_i<0$ then recalling that $A$ is
antisymmetric,
$$0=q^TAq=\sum_{k=1}^dq_k(Aq)_k=q_i(Aq)_i+\sum_{k\neq i}q_k(Aq)_k. $$
Since $q_k>0$ for any $k=1,\ldots,d$, there must exist some $l\in \{1,\ldots,d\}$ such that
$(Aq)_l>0$, and this is not possible.
The proof is finished.
\end{proof}
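The equivalence between statements \eqref{equilibrioreplicador} and \eqref{equilibrioautovector} is easy to check numerically. Below is a minimal Python sketch (ours; the helper name is an assumption) for the rock-paper-scissors pay-off matrix used later in this section:
\begin{verbatim}
import numpy as np

# Rock-paper-scissors pay-off matrix (antisymmetric).
A = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])

def replicator_rhs(p):
    # right-hand side p_i((Ap)_i - p^T A p) of the replicator equations
    Ap = A @ p
    return p * (Ap - p @ Ap)

q = np.ones(3) / 3  # interior point
print(np.allclose(A @ q, 0))              # True: q is in the null space of A
print(np.allclose(replicator_rhs(q), 0))  # True: q is a replicator equilibrium
\end{verbatim}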
\subsection{Evolution of the mean strategy and the replicator equations.}
The method of characteristics yields that the solution to equation \eqref{TransportEqu} is
\begin{equation}\label{SolTransportEqu}
v_t = T_t\sharp v_0
\end{equation}
where $T_t$ is the flow of the vector-field $\mathcal{F}[v_t](p)$. This vector-field is the same as the
one in the replicator equations but with the mean-strategy $\bar p(t)$ depending on the
distribution $v_t$ of strategies. The next result shows that $\bar p(t)$ satisfies (up to a
constant) the replicator equations.
\begin{theorem}\label{dinamicapromedio}
Consider a solution $v\in C([0,\infty),\mathcal{P}({\Delta}))$ of the transport equation
\eqref{TransportEqu} staying away from the boundary $\partial{\Delta}$ of ${\Delta}$ up to some time $T>0$, i.e.,
\begin{equation}\label{HipDinProm}
\operatorname{dist}\Big(\operatorname{supp}(v_t),\partial{\Delta}\Big)\ge c^{1/d} \qquad 0\le t\le T,
\end{equation}
where $c$ is defined in \eqref{defdelta}.
Then the mean strategy $\bar p(t) = \int_{\Delta} p\,dv_t(p)$ is a solution of the replicator equation:
\begin{eqnarray}\label{EDOMeanStrategy}
\frac{d}{dt} \bar p_i(t) = 2c \bar p_i(t) e_i^T A\bar p(t) \qquad i=1,\ldots,d.
\end{eqnarray}
\end{theorem}
\noindent Notice that \eqref{EDOMeanStrategy} is not exactly the replicator system due to the
constant $2c$, but becomes so after the time-scale change $\tau=2ct$.
\begin{proof}
Let $\mathcal{T}_{s,t}$ be the flow of the vector-field $\mathcal{F}[v](t,x)$ i.e.
$$ \frac{d}{dt}\mathcal{T}_{s,t}(p) = \mathcal{F}[v](\mathcal{T}_{s,t}(p),t), \qquad \mathcal{T}_{s,s}(p)=p. $$
We also let $\mathcal{T}_t(p)=\mathcal{T}_{0,t}(p)$ and denote $\mathcal{T}^i_t(p)$, $i=1,\ldots,d$, its components.
Then $v_t = \mathcal{T}_t\sharp v_0$. It follows that for any $i=1,\ldots,d$,
\begin{eqnarray*}
\bar p_i(t) = \int_{\Delta} p_i \, dv_t(p) = \int_{\Delta} \mathcal{T}^i_t(p) \, dv_0(p),
\end{eqnarray*}
so that
\begin{eqnarray*}
\frac{d}{dt} \bar p_i
& = & \int_{\Delta} \frac{d}{dt} \mathcal{T}^i_t(p) \, dv_0(p)
= \int_{\Delta} \mathcal{F}_i[v](\mathcal{T}_{t}(p),t)\, dv_0(p) \\
& = & \sum_{k=1}^d \int_{\Delta} h \Big(\mathcal{T}_t(q)\Big) a_{ik}\Big(\mathcal{T}^i_t(q)\bar p_k(t) +\mathcal{T}^k_t(q)\bar p_i(t)\Big) \, dv_0(q) \\
& = & \sum_{k=1}^d \int_{\Delta} h (p) a_{ik} (p_i\bar p_k(t) +p_k\bar p_i(t) ) \, dv_t(p),
\end{eqnarray*}
where we used once again that $v_t = \mathcal{T}_t\sharp v_0$.
According to assumption \eqref{HipDinProm} we have for any $p$ in the support of $v_t$ that
$p_i\ge c^{1/d}$ for $i=1,\ldots,d$, which implies that $h(p)=c$. Thus,
\begin{align*}
\frac{1}{c}\frac{d}{dt} \bar p_i(t)
=& \sum_{k=1}^d \int_{\Delta} a_{ik} (p_i\bar p_k(t) +p_k\bar p_i(t) ) \, dv_t(p) \\
= & 2\sum_{k=1}^d a_{ik} \bar p_i(t)\bar p_k(t)
\end{align*}
which is \eqref{EDOMeanStrategy}.
\end{proof}
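Theorem \ref{dinamicapromedio} can be illustrated numerically by evolving finitely many strategies under the vector-field $\mathcal{F}$, with $h\equiv c$ on the support as in \eqref{HipDinProm}, and comparing the empirical mean with the solution of \eqref{EDOMeanStrategy}. A minimal Python sketch (ours; a simple explicit Euler discretization, with $c$ taken small so that the particles plausibly remain in the set $\{h=c\}$):
\begin{verbatim}
import numpy as np

A = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])
c = 0.02
rng = np.random.default_rng(0)

# particles concentrated near the interior point (1/3, 1/3, 1/3)
P = np.ones((200, 3)) / 3 + 0.02 * rng.standard_normal((200, 3))
P /= P.sum(axis=1, keepdims=True)
mbar = P.mean(axis=0).copy()  # mean evolved by the rescaled replicator

dt = 1e-3
for _ in range(5000):
    m = P.mean(axis=0)
    # on {h = c}: F_i[v](p) = c (p_i (A m)_i + m_i (A p)_i)
    P += dt * c * (P * (A @ m) + m * (P @ A.T))
    # rescaled replicator for the mean: d/dt mbar_i = 2c mbar_i (A mbar)_i
    mbar += dt * 2 * c * mbar * (A @ mbar)

print(np.abs(P.mean(axis=0) - mbar).max())  # tiny: mean follows replicator
\end{verbatim}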
\subsection{Two strategies games.}
Let us consider the case of a symmetric game with two strategies. The pay-off matrix $A$ is then
$$ A = \begin{pmatrix} 0 & b \\ -b & 0 \end{pmatrix} $$
for some $b\in \mathbb{R}$. Notice that if $b>0$ (respectively, $b<0$) then the first (respectively, second) strategy
strictly dominates the other.
We thus expect that all agents end up playing the dominant strategy, except those initially playing
exclusively the losing strategy, who cannot move due to the presence of $h$ in the interaction rule.
Let $v_t$ be the solution to the transport equation \eqref{LimitEqq} and let $\mu_t$ be the distribution of $p_1$ (i.e., $\mu_t$ is the first marginal of $v_t$).
This means that $v_t(B\times [0,1])=\mu_t(B) $ for any Borel set $B\subset [0,1]$, which can be rewritten as
$$ \iint \phi(p_1) \,dv_t(p_1,p_2) = \int \phi(p_1)\,d\mu_t(p_1) $$
for any measurable non-negative function $\phi:[0,1]\to\mathbb{R}$.
\begin{theorem}\label{Thm2strategies}
Assume that $b>0$ and write the initial condition as
$$ \mu_0 = (1-a)\delta_0+a\tilde \mu_0$$
where $a\in [0,1]$ and $\tilde \mu_0$ is a probability measure on $[0,1]$ such that $\tilde \mu_0(\{0\})=0$.
Then
$$\lim_{t\to +\infty}\mu_t = (1-a)\delta_0 + a\delta_1.$$
In the same way, if $b<0$ and $\mu_0 = (1-a)\delta_1+a\tilde \mu_0$ where $\tilde \mu_0(\{1\})=0$ and
$a\in [0,1]$, then $\mu_t\to (1-a)\delta_1+a\delta_0$ as $t\to +\infty$.
\end{theorem}
\begin{proof}
Let us assume that $b>0$ (the proof when $b<0$ is completely analogous).
It follows from \eqref{LimitEqq} that $\mu_t$ satisfies the following equation: for any $\phi\in C^1([0,1])$,
\begin{equation}\label{Equp1}
\frac{1}{b} \frac{d}{dt} \int_0^1 \phi\,d\mu_t
= \int_0^1 \phi'(p_1) v[\mu_t](p_1) \,d\mu_t(p_1),
\end{equation}
where for any probability measure $\mu$ on $[0,1]$ the vector-field $v[\mu]$ is defined as
$$ v[\mu](p_1) = \underbrace{\min\{p_1(1-p_1),c\}}_{h(p_1)} (p_1+\bar p_1 - 2p_1\bar p_1),$$
where $$ \bar p_1 = \int_0^1 p_1\,d\mu(p_1). $$
Notice first that since $p_1,\bar p_1\in [0,1]$ we have
\begin{equation}\label{Claim2}
v[\mu](p_1) \ge h(p_1)(p_1^2+\bar p_1^2 - 2p_1\bar p_1) = h(p_1) (p_1-\bar p_1)^2.
\end{equation}
Hence, it follows that $v[\mu]\ge 0$. Another consequence is that
\begin{equation}\label{Claim}
v[\mu] = 0 \text{ $\mu$-a.e.} \qquad \Leftrightarrow \qquad \mu=\alpha\delta_0 + (1-\alpha)\delta_1,\,\alpha \in [0,1].
\end{equation}
Indeed, if $v[\mu](p_1)=0$ then $p_1\in\{0,1,\bar p_1\}$ by \eqref{Claim2}.
Thus if $v[\mu] = 0$ $\mu$-a.e. then $\mu = \alpha \delta_0+\beta \delta_1 + \gamma\delta_{c_0}$ for some $c_0\in (0,1)$
and $\alpha,\beta,\gamma\ge 0$, $\alpha+\beta+\gamma=1$. If $\gamma>0$ then, since $h(c_0)\neq 0$,
$v[\mu](c_0)=0$ gives $c_0=\bar p_1$ by \eqref{Claim2} and then $0=v[\mu](c_0)=h(c_0)(c_0+c_0-2c_0^2)$,
i.e., $c_0=0$ or $c_0=1$, which is absurd; hence $\gamma=0$.
Let us recall that $\mu_t=T_t\sharp\mu_0$
where $T_t$ is the flow of $v[\mu_t](p_1)$, and also that $v[\mu_t](p_1)\ge 0$. Thus for
any $x$ in the support of $\mu_0$, $T_t(x)$ is non-decreasing and bounded by 1 and thus
converges to some $T_\infty(x)$. Then $\mu_t\to \mu_\infty:=T_\infty\sharp\mu_0$.
Moreover, $v[\mu_\infty]=0$ $\mu_\infty$-a.e. and thus
$\mu_\infty=(1-\alpha)\delta_0+\alpha\delta_1$ for some $\alpha\in [0,1]$ by
\eqref{Claim}.
To conclude, we have to show that
\begin{equation}\label{Claim3}
\mu_\infty(\{0\})=\mu_0(\{0\}).
\end{equation}
Let us take a smooth non-increasing function $\phi:[0,1]\to [0,1]$
such that $\phi=1$ in $[0,1/n]$ and $\phi=0$ in $[2/n,1]$, $n\in \mathbb{N}$. Then
$$ \int_0^1 \phi\,d\mu_t - \int_0^1 \phi\,d\mu_0 = \int_0^t\int_0^1 \phi'(p_1)v[\mu_s](p_1)\,d\mu_s(p_1)\,ds \le 0. $$
Letting $t\to +\infty$ we obtain
$$ \int_0^1 \phi\,d\mu_\infty \le \int_0^1 \phi\,d\mu_0 $$
and then $ \mu_\infty([0,1/n])\le \mu_0([0,2/n])$. Letting $n\to +\infty$ gives $\mu_\infty(\{0\})\le \mu_0(\{0\})$.
To prove the converse inequality recall that $\mu_t=T_t\sharp\mu_0$ where $T_t$ is the flow of $v[\mu_t](p_1)$.
For any $\phi\in C([0,1])$ we thus have
$$ \int_0^1 \phi\,d\mu_t = \int_0^1 \phi(T_t(p_1))\,d\mu_0(p_1)
= (1-a)\phi(0) + a \int_0^1 \phi(T_t(p_1))\,d\tilde \mu_0(p_1) $$
Letting $t\to +\infty$ we obtain $ \int_0^1 \phi\,d\mu_\infty\ge (1-a)\phi(0)$ for any nonnegative continuous function $\phi$. We deduce that $\mu_\infty(\{0\})\ge 1-a$.
This proves \eqref{Claim3}.
We conclude that $\mu_t \to (1-a)\delta_0 + a\delta_1$, and this finishes the proof.
\end{proof}
Given an initial condition $\mu_0$, the distribution $\mu_t$ of $p_1$ is the unique solution (see \eqref{Equp1}) of
$$ \frac{1}{b} \partial_t\mu_t + \partial_{p_1}\Big( v[\mu_t](p_1)\, \mu_t \Big) = 0.$$
In particular if $\mu_0$ is a convex combination of Dirac masses like e.g. $\mu_0 = \frac1N
\sum_{i=1}^N \delta_{p_1^i(0)}$ with $p_1^1(0),\ldots,p_1^N(0)\in [0,1]$, then $\mu_t =
\frac1N \sum_{i=1}^N \delta_{p_1^i(t)}$ where $p_1^1(t),\ldots,p_1^N(t)$ are the solutions
of the system
\begin{equation} \label{Syst2strategies}
\begin{split}
\frac{d}{dt} p_1^i(t)
& = v[\mu_t](p_1^i(t)) \\
& = \min\{p_1^i(1-p_1^i),c\} (p_1^i(t)+\bar p_1 - 2p_1^i(t)\bar p_1)
\qquad i=1,\ldots,N,
\end{split}
\end{equation}
where
$$ \bar p_1 = \frac1N \sum_{i=1}^N p_1^i(t). $$
We solved numerically the system \eqref{Syst2strategies} in the time interval $[0,T]$ using a Runge-Kutta scheme of order 4 with time step $0.1$ and the following parameter values:
\begin{equation}\label{2strategies_Param}
b=1, \qquad c=0.1, \qquad N=1000, \qquad T=400,
\end{equation}
and taking as initial condition
\begin{equation}\label{2strategies_CondIni}
\begin{split}
& p_1^1(0)=\cdots=p_1^{300}(0)=0, \\
& \text{$p_1^k(0)$, $k=301,\ldots,N$, uniformly and independently distributed in $[0,0.3]$. }
\end{split}
\end{equation}
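For concreteness, here is a minimal Python sketch of this computation (ours, not the exact code used to produce the figure below):
\begin{verbatim}
import numpy as np

b, c, N, T, dt = 1.0, 0.1, 1000, 400.0, 0.1
rng = np.random.default_rng(0)
p = np.zeros(N)
p[300:] = rng.uniform(0.0, 0.3, size=N - 300)  # initial condition above

def rhs(p):
    # right-hand side of the system for p_1^1, ..., p_1^N
    pbar = p.mean()
    h = np.minimum(p * (1.0 - p), c)
    return b * h * (p + pbar - 2.0 * p * pbar)

for _ in range(int(T / dt)):  # classical RK4 steps
    k1 = rhs(p)
    k2 = rhs(p + 0.5 * dt * k1)
    k3 = rhs(p + 0.5 * dt * k2)
    k4 = rhs(p + dt * k3)
    p += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# about 30% of the mass stays at 0, the rest accumulates near 1
print((p < 1e-3).mean(), (p > 1 - 1e-3).mean())
\end{verbatim}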
We show in Figure \ref{Fig_2strategies} the resulting evolution of the distribution of the
$p_1^k(t)$, $k=1,\ldots,N$. We can see that $p_1^1(t)=\cdots=p_1^{300}(t)=0$ for any $t$,
resulting in the Dirac mass $\frac{3}{10}\delta_0$. The other $p_1^k (t)$,
$k=301,\ldots,N$, move to the right until reaching 1, thus building up progressively the
Dirac mass $\frac{7}{10}\delta_1$, in complete agreement with Theorem
\ref{Thm2strategies}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{2strategies.png}
\caption{Time evolution of the distribution of $p^1_1,\ldots,p^N_1$, solutions of \eqref{Syst2strategies}
with parameter values \eqref{2strategies_Param} and initial condition \eqref{2strategies_CondIni}.}
\label{Fig_2strategies}
\end{figure}
\subsection{Periodic solutions for Rock-Paper-Scissors-like games.}
We now consider the case of a game for which the solutions of the replicator equations
are periodic orbits. We have in mind the classic Rock-Paper-Scissors game whose pay-off matrix is
$$ A = \begin{pmatrix} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}. $$
In this game strategies dominate each other in a cyclic way, as $1\to 2\to 3\to 1$.
This can be generalized to a game with an odd number $d$ of strategies by considering the pay-off matrix
\begin{equation}\label{PayOffPPT}
A:= \begin{pmatrix}
0 & a_1 & a_2 & \cdots & \cdots & a_{d-1} \\
a_{d-1} & 0 & a_1 & a_2 & \cdots & a_{d-2} \\
a_{d-2} & a_{d-1} & 0 & a_1 & \cdots & a_{d-3} \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
a_1 & a_2 & \cdots & \cdots & a_{d-1} & 0
\end{pmatrix}
\end{equation}
with $a_k=(-1)^{k-1}$, $k=1,\ldots,d-1$.
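A minimal Python sketch (ours) building this circulant matrix and checking that it is antisymmetric and that the barycenter $\frac{1}{d}\vec{1}$ lies in its null space:
\begin{verbatim}
import numpy as np

def rps_payoff(d):
    # circulant pay-off matrix with a_k = (-1)^(k-1), k = 1, ..., d-1
    A = np.zeros((d, d))
    for i in range(d):
        for k in range(1, d):
            A[i, (i + k) % d] = (-1) ** (k - 1)
    return A

A5 = rps_payoff(5)
print(np.allclose(A5, -A5.T))                  # antisymmetric (d odd)
print(np.allclose(A5 @ (np.ones(5) / 5), 0.))  # N is a Nash equilibrium
\end{verbatim}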
It is known that interior trajectories for the replicator equations for these games are closed
periodic orbits enclosing the unique interior Nash equilibrium $N:=\frac{1}{d}\vec{1}$ (see
e.g. \cite{sandholm2010population}). In particular
\begin{equation}\label{Value}
N^T A N = 0.
\end{equation}
Moreover (see Theorem 7.6.4 in \cite{hofbauer1998evolutionary}) if $p(t)$ is such a
trajectory then its temporal mean converges to $N$:
$$ \lim_{t\to +\infty} \frac{1}{t}\int_0^t p(s)\,ds = N. $$
It follows that
\begin{equation} \label{TempMean}
\frac{1}{T}\int_0^T p(s)\,ds = N
\end{equation}
where $T$ is the period of the trajectory $p(t)$.
Our next result states that the solutions of the mean-field equation \eqref{LimitEqq} supported in $\{h=c\}$ are also periodic.
\begin{theorem}
Consider a solution $v_t$ to equation \eqref{LimitEqq} with initial condition $v_0$ such that
$v_t$ is supported in $\{h=c\}$ for any $t\ge 0$.
If the initial mean strategy is different from $N$ then there exists $T>0$ such that
$v_{t+T}=v_t$ for any $t\ge 0$.
\end{theorem}
\begin{proof}
Consider a solution $v_t$ to equation \eqref{LimitEqq} with initial condition $v_0$.
Then $v_t =\mathcal{T}_t\sharp v_0$ where $\mathcal{T}_t$ is the flow of $\mathcal{F}[v_t](p)$.
Thus to prove that $v_t$ is periodic, it is enough to prove that all the trajectories $t\mapsto\mathcal{T}_t(p)$,
$p\in \operatorname{supp}(v_0)$, are periodic with the same period.
Let $p(t)=\mathcal{T}_t(p)$ be such a trajectory.
Since $h(p(t))=c$ for any $t$,
\begin{eqnarray*}
\frac{d}{dt} p(t)
& = & \mathcal{F}[v_t](p(t)) \\
& = & c(B(t)+C(t)A)p(t)
\end{eqnarray*}
where
$$ B(t)=\operatorname{diag}\bigl((A m(t))_1,\ldots,(A m(t))_d\bigr), \qquad C(t)= \operatorname{diag}\bigl(m_1(t),\ldots,m_d(t)\bigr), $$
$$ m(t)=\int_\Delta p\,dv_t(p). $$
Thus
$$ p(t) = \exp\Big(c\int_0^t B(s)+C(s)A\,ds\Big)p(0). $$
According to Theorem \ref{dinamicapromedio},
$m$ is a solution to the replicator equations for $A$ and thus is periodic.
We denote its period by $T$.
By \eqref{TempMean} we deduce that
$$\frac{1}{T}\int_0^T m(s)\,ds = N. $$
Thus
$$\frac{1}{T}\int_0^T A m(s)\,ds = A\Big( \frac{1}{T}\int_0^T m(s)\,ds \Big) = AN=0. $$
Recalling that $N=\frac{1}{d}\vec{1}$, we also have $\frac{1}{T}\int_0^T B(s)\,ds=\operatorname{diag}(AN)=0$ and $\frac{1}{T}\int_0^T C(s)\,ds=\frac{1}{d}\,\mathrm{Id}$, so that
$$ \frac{1}{T} \int_0^T B(s)+C(s)A\,ds = \Big(\frac{1}{T} \int_0^T C(s)\,ds\Big)A = \frac{1}{d} A. $$
We deduce that
$$ p(T) = \exp\Big(\frac{cT}{d} A\Big)p(0). $$
The matrix $R:=\exp\Big(\frac{cT}{d} A\Big)$ is orthogonal (being $A$ antisymmetric, $\exp(A)^T=\exp(
A^T) = \exp(-A) = (\exp A)^{-1}$) and has determinant $\exp\Big(\operatorname{Tr}\Big(\frac{cT}{d} A\Big)\Big)=1$.
Thus $R\in SO(d)$.
Moreover, $RN=(\mathrm{Id}+\frac{cT}{d} A + \cdots )N = N$ so we get that $R$ is a rotation around the
line $(ON)$. Since this line is perpendicular to the plane $\{p_1+ \dots +p_d=1\}$, the
matrix $R$ is a rotation in this plane fixing $N$.
We thus obtain that any trajectory $p(t)$ starting from $p(0)\in \text{supp}(v_0)$ satisfies
$$ p(T)=Rp(0) $$
where $T$ is the period of $m$ and $R$ is a rotation in $\{p_1+ \dots +p_d=1\}$ fixing $N$.
It follows in particular that $v_T=R\sharp v_0$ and $m(T)=Rm(0)$. Since $m(T)=m(0)$ by
definition of $T$, we obtain that $m(0)$ is another fixed point of $R$. Thus if $N\neq
m(0)$ then $R=\mathrm{Id}$, so that $v_T=v_0$. Since the solution to the mean-field equation
\eqref{LimitEqq} with a given initial condition is unique, we deduce that $v_{t+T}=v_t$ for
any $t\ge 0$.
The proof is finished.
\end{proof}
\begin{remark}
Let us note that, given a ball $B$ centered at $N$, any trajectory of the replicator equations
starting at some $p\in B\cap \{p_1+ \dots +p_d=1\}$ cannot reach the boundary of
$\Delta$, since whenever a coordinate of $p(t)$ is equal to zero, it remains zero for every
$t$. So, choosing $B$ sufficiently small, all the trajectories remain in the set $\{ h=c\}$.
Assuming that $\operatorname{supp}(v_0)\subset B$, we get that $\operatorname{supp}(v_t)\subset\{ h=c\}$.
\end{remark}
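The two facts used in the proof above, closed interior orbits and temporal mean equal to $N$, can be observed numerically for the Rock-Paper-Scissors replicator equations. A minimal Python sketch (ours); recall that $p_1p_2p_3$ is a classical invariant of these dynamics, so its conservation witnesses that the orbit is closed, while the printed temporal mean is only approximately $N$ because we truncate at a non-integer number of periods:
\begin{verbatim}
import numpy as np

A = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])
p = np.array([0.5, 0.3, 0.2])  # interior initial condition
dt, steps = 1e-3, 200_000
mean = np.zeros(3)
print(np.prod(p))              # invariant p1*p2*p3 at time 0
for _ in range(steps):         # explicit midpoint scheme for dp/dt = p*(Ap)
    k1 = p * (A @ p)
    pm = p + 0.5 * dt * k1
    p = p + dt * pm * (A @ pm)
    mean += p * dt
print(np.prod(p))              # nearly unchanged: closed orbit
print(mean / (steps * dt))     # temporal mean close to N = (1/3, 1/3, 1/3)
\end{verbatim}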
\section{Appendix}
\subsection{Existence of solution to the Boltzmann-like equation\label{existenciaboltzmannapendice}:
proof of Theorem \ref{existenciaboltzmann}.}
Given an initial condition $u_0\in \mathcal{P}({\Delta})$ we want to prove that there exists a
unique $u\in C([0,\infty),\mathcal{P}({\Delta}))$ such that
\begin{equation}\label{teoremaboltzmann10}
\begin{split}
\int_{\Delta} \varphi(p) \, du_t(p)=&\int_{\Delta} \varphi(p) \, du_0(p)
+\int_0^t\int_{{\Delta}^2} \mathbb{E}[\varphi(p^*)-\varphi(p)] \, du_s(p)du_s(\tilde p)ds
\end{split}
\end{equation}
for any $\varphi\in C({\Delta}).$
We split the proof into two steps.
\begin{step}
There is a unique $u\in C([0,\infty),M({\Delta}))$ satisfying \eqref{teoremaboltzmann10}.
\end{step}
Recall that $M({\Delta})$ denotes the space of finite Borel measures on ${\Delta}$ that we endow,
unless otherwise stated, with the total-variation norm.
\begin{proof} Given $u,v\in M({\Delta})$ we define a finite measure $ Q(u,v)$ on ${\Delta}$ by
\begin{align*}
\bigl( Q(u,v),\varphi\bigr)
:=&\, \frac12\int_{ {\Delta}^2} \mathbb{E}[\varphi(p^*)-\varphi(p)]\, du(p)dv(\tilde p)
+\,\frac12 \int_{ {\Delta}^2} \mathbb{E}[\varphi(p^*)-\varphi(p)]\, dv(p)du(\tilde p)
\end{align*}
for $\varphi\in C({\Delta})$. We also let $Q(u):=Q(u,u)$.
For $u\in C([0,+\infty),M({\Delta}))$ we then define a map $J(u):[0,+\infty)\to M({\Delta})$
by
$$ J(u)_t:=u_0+\int_0^tQ(u_s)\,ds, \qquad t\ge 0, $$
that is,
$$ (J(u)_t,\varphi):=(u_0,\varphi)+\int_0^t (Q(u_s),\varphi)\,ds \qquad
\text{for any $\varphi\in C({\Delta})$.} $$
We thus look for a fixed point of $J$ in $C([0,+\infty),M({\Delta}))$.
We will apply the Banach fixed-point theorem to $J$ in the complete metric space
\begin{equation*}
\mathcal{A}:=\{u\in C([0,T],M({\Delta})):\, u(0)=u_0 \text{ and } \max_{0\leq s\leq T}\|u_s\|_{TV} \leq 2 \}
\end{equation*}
where $T\in (0,1/8)$ and $\mathcal{A}$ is endowed with the distance $\|u-v\|:=\max_{0\leq s\leq T}\|u_s-v_s\|_{TV}$.
Let us verify that $J(\mathcal{A})\subseteq \mathcal{A}$.
First notice that
\begin{equation}\label{cotaQ}
\|Q(u,v)\|_{TV} \leq 2\|u\|_{TV}\|v\|_{TV}.
\end{equation}
Moreover, $Q$ is bilinear so that $Q(u)-Q(v)=Q(u+v,u-v)$ and then
\begin{equation}\label{contraccion}
\|Q(u)-Q(v)\|_{TV}\leq 2 \|u+v\|_{TV}\|u-v\|_{TV}.
\end{equation}
Now for any $u\in \mathcal{A}$ and any $t\in [0,T]$,
\begin{align*}
\|J(u)_t\|_{TV} &
\leq \|u_0\|_{TV}+\int_0^t\|Q(u_s)\|_{TV}\, ds \\
& \leq 1+2T\max_{0\leq s\leq T} \|u_s\|_{TV}^2 \\
& \leq 1+8T
\leq 2.
\end{align*}
Moreover, for $0\le s\le t\le T$,
$$ \|J(u)_t-J(u)_s\|_{TV}
\le \int_s^t \|Q(u_\tau)\|_{TV}\,d\tau
\le 2\int_s^t \|u_\tau\|^2_{TV}\,d\tau
\le 8|t-s|,
$$
and we deduce the continuity of $J(u)_t$ in $t$.
Thus $J(\mathcal{A})\subset \mathcal{A}$.
Now, for any $u,v\in \mathcal{A}$, using \eqref{contraccion},
\begin{align*}
\|J(u)_t-J(v)_t\|_{TV} & \leq \int_0^t\|Q(u_s)-Q(v_s)\|_{TV}\, ds \\
& \leq \int_0^t2\| u_s+v_s \|_{TV}\| u_s-v_s \|_{TV}\, ds\\
& \leq 8T\| u-v \|,
\end{align*}
so that
$$ \|J(u)-J(v)\|\leq 8T\| u-v \|. $$
Thus, choosing $T<1/8$, we deduce that $J$ is a strict contraction from $\mathcal{A}$ to
$\mathcal{A}$ and therefore has a unique fixed point. Repeating the argument on $[T,2T],
\ldots$, we obtain a unique $u\in C([0,+\infty),M({\Delta}))$ satisfying
\eqref{teoremaboltzmann10}.
\end{proof}
Notice that it is not a priori obvious that $J(u)_t\ge 0$ if $u_t\ge 0$, $t\ge 0$. We verify
that $u_t\ge 0$ in an indirect way in the next Step following ideas from
\cite{cercignani2013mathematical}.
\begin{step}
$u_t$ is a probability measure on ${\Delta}$ for any $t\ge 0$ where $u$ is given by the previous Step.
\end{step}
\begin{proof}
We first verify that $u_t$ is a non-negative measure for any $t\ge 0$ i.e.
$$(u_t,\varphi)\geq 0 \qquad \text{for any $\varphi\in C({\Delta})$, $\varphi\geq 0$.} $$
Given $u,v\in M({\Delta})$ we define the measure $Q_+(u,v)$ by
$$ \bigl(Q_+(u,v),\varphi\bigr):=
\frac12\int_{{\Delta}^2}\mathbb{E}[\varphi(p^*)] \,\bigl(du(p)dv(\tilde p)+dv(p)du(\tilde p)\bigr), \qquad \varphi\in C({\Delta}), $$
and $Q_+(v):=Q_+(v,v)$.
Notice that $Q_+(u,v)$ is non-negative if both $u$ and $v$ are non-negative. Moreover,
\begin{equation}\label{relacionQQmas}
Q(u)=Q_+(u)-(u,1)\,u
\end{equation}
(in particular $Q(u)=Q_+(u)-u$ whenever $(u,1)=1$),
and
\begin{equation}\label{desigualdadparainduccion}
Q_+(u)\geq Q_+(v)\geq 0 \qquad \text{if $u\geq v\geq 0$}
\end{equation}
since $Q_+(u)-Q_+(v)=Q_+(u+v,u-v)\geq 0$.
The idea of the proof consists in finding $v\in C([0,\infty),\mathcal{P}({\Delta}))$ (continuity with
respect to the total variation norm) such that \begin{equation}\label{vbuscada}
v_t=e^{-t}u_0+\int_0^te^{s-t}Q_+(v_s)\, ds.
\end{equation}
Indeed in that case, using \eqref{relacionQQmas} and the fact that $(v_t,1)=1$,
\begin{align*}
\frac{d}{dt}v_t
&=-e^{-t}u_0-\int_0^te^{s-t}Q_+(v_s)\, ds+Q_+(v_t) \\
& =Q_+(v_t)-v_t\\
&=Q(v_t).
\end{align*}
Thus, $v_t$ verifies \eqref{teoremaboltzmann10} so that $v=u$ since $u$ is the unique solution of \eqref{teoremaboltzmann10}.
To obtain $v$ satisfying \eqref{vbuscada} we consider the sequence
$v^{(n)}\in C([0,+\infty),M({\Delta}))$, $n\in\mathbb{N}$, defined by
$v^{(0)}:=0$ and
$$v_t^{(n)}:=e^{-t}u_0+\int_0^te^{s-t}Q_+(v^{(n-1)}_s)\, ds.$$
Recalling that $u_0\ge 0$ and using \eqref{desigualdadparainduccion} it is easily seen that
$v^{(n)}_t\geq v^{(n-1)}_t\ge 0$.
Also notice that, taking $\varphi=1$ in the definitions of $Q$ and $Q_+$, we have
$$ (Q(u),1)=0 \qquad \text{and} \qquad (Q_+(u),1)=(u,1)^2 \qquad \text{for any $u\in M({\Delta})$.} $$
It follows that the function $a_n(t):=(v^{(n)}_t,1)$ satisfies the ordinary differential equation
$$ \begin{cases}
\frac{d}{dt} a_n(t)
= \bigl(Q_+(v_t^{(n-1)}),1\bigr) - a_n(t)
= a_{n-1}(t)^2 - a_n(t), \\
a_n(0) = (u_0,1)=1,
\end{cases}$$
or, in integral form, $a_n(t)=e^{-t}+\int_0^t e^{s-t}a_{n-1}(s)^2\,ds$. Starting from
$a_0\equiv 0$, an easy induction shows that $0\le a_n(t)\leq 1$ for any $n$ and $t$, while
the monotonicity of the $v^{(n)}_t$ gives that $(a_n(t))_n$ is non-decreasing. Passing to
the limit in the integral form, $a(t):=\lim_{n\to+\infty} a_n(t)$ satisfies $a'=a^2-a$ with
$a(0)=1$, whose unique solution is $a\equiv 1$. Thus for any $t\ge 0$, $(v_t^{(n)})_n$ is a
non-decreasing sequence of non-negative measures on ${\Delta}$ whose total masses increase
to $1$. We can then define a probability measure $v_t$ on ${\Delta}$ by
$$ (v_t,\phi):=\lim_{n\to +\infty} (v^{(n)}_t,\phi) \qquad \phi\in C({\Delta}). $$
In fact the convergence of $v^{(n)}$ to $v$ is uniform in $t\in [0,T]$ for any $T>0$, and thus
$v$ is continuous in $t$. This follows from the Arzela-Ascoli theorem. Indeed, since
$\|v_t^{(n)}\|_{TV}\le 1$, we only need to prove that the sequence $(v_t^{(n)})_n$ is uniformly
equicontinuous. We have
\begin{eqnarray*}
\|v_{t+h}^{(n)}-v_t^{(n)}\|_{TV}
& \le & |e^{-(t+h)}-e^{-t}|\|u_0\|_{TV}
+ \int_t^{t+h} e^{s-(t+h)} \|Q_+(v_s^{(n-1)})\|_{TV}\,ds \\
&& + \int_0^t |e^{s-(t+h)}-e^{s-t}| \|Q_+(v_s^{(n-1)})\|_{TV}\,ds.
\end{eqnarray*}
In view of \eqref{cotaQ} and recalling that $\|v_s^{(n-1)}\|_{TV}\le 1$ we have
$\|Q_+(v_s^{(n-1)})\|_{TV}\le \|Q(v_s^{(n-1)})\|_{TV} + \|v_s^{(n-1)}\|_{TV}^2\le 3$.
The uniform equi-continuity follows easily.
This ends the proof.
\end{proof}
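The scalar recursion for the masses $a_n$ appearing in the proof is easy to iterate numerically. The following Python sketch (ours) checks on a grid, via the trapezoidal rule, that the $a_n$ are non-decreasing, bounded by $1$ up to quadrature error, and increase to $1$:
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 5.0, 2001)
a = np.zeros_like(t)  # a_0 = 0
for n in range(1, 40):
    # a_n(t) = e^{-t} + int_0^t e^{s-t} a_{n-1}(s)^2 ds
    g = np.exp(t) * a ** 2
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
    a_new = np.exp(-t) * (1.0 + integral)
    assert np.all(a_new >= a - 1e-12)      # non-decreasing in n
    assert np.all(a_new <= 1 + 1e-3)       # <= 1 up to quadrature error
    a = a_new
print(a.min())  # close to 1: the total masses increase to 1
\end{verbatim}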
To conclude the proof of Theorem \ref{existenciaboltzmann}, we verify that $u\in
C^1((0,+\infty),\mathcal{M}({\Delta}))$ with $\partial_tu_t=Q(u_t)$. Indeed, recalling that
$u_t=J(u)_t$, we have
$$ \frac{u_{t+h}-u_t}{h}-Q(u_t)
= \frac1h \int_t^{t+h}Q(u_s)\,ds-Q(u_t)
= \frac1h\int_t^{t+h} Q(u_s)-Q(u_t)\,ds. $$
Using \eqref{contraccion} together with $\|u_t\|_{TV}=1$ we obtain
\begin{align*}
\Big{\|}\frac{u_{t+h}-u_t}{h}-Q(u_t)\Big\|_{TV}
\leq & \frac2h \int_t^{t+h} \|u_s+u_t\|_{TV}\|u_s-u_t\|_{TV}\,ds\\
\leq & \frac4h \int_t^{t+h} \|u_s-u_t\|_{TV}\,ds
\end{align*}
which goes to 0 as $h\to 0$ by the Dominated Convergence Theorem since $u_s$ is
continuous in $s$ for the total variation norm.
\subsection{Grazing limit: proof of Theorem \ref{grazing}.}
The proof consists of two main steps. First, approximating the difference
$\varphi(p^*)-\varphi(p)$ in the Boltzmann-like equation \eqref{boltzmannconbeta}
by a second-order Taylor expansion, we obtain that $u_\delta $ is an approximate solution of
\eqref{LimitEq} in the sense of \eqref{DefWeakSol}. Then we apply the Arzela-Ascoli Theorem to
deduce that a subsequence of the $u_\delta$ converges to a solution of \eqref{DefWeakSol}.
Before beginning the proof we need the following lemma, which gives the expected value
of $f(\zeta,p)^TA f(\tilde \zeta,\tilde p)\Big(f_i(\zeta,p)-f_i(\tilde \zeta,\tilde p)\Big)$, where
the random vector $ f(\zeta, p) = ( f_1(\zeta, p),\ldots, f_d(\zeta, p))$ is defined in
\eqref{definicionf}.
\begin{lem} For any $i=1,\dots,d$, any $p,\tilde p\in{\Delta}$ and any
independent random variables $\zeta,\tilde\zeta$ uniformly distributed in $[0,1]$, there holds
\begin{equation}\label{Acumulada1}
\mathbb{E}\Big[ f(\zeta,p)^TA f(\tilde \zeta,\tilde p)\Big(f_i(\zeta,p)-f_i(\tilde \zeta,\tilde p)\Big) \Big]
=\sum_{k=1}^d a_{ik}(p_i \tilde p_{k} + p_k\tilde p_{i} ).
\end{equation}
\end{lem}
\begin{proof}
Let us denote $f_i=f_i(\zeta,p)$ and $\tilde f_i=f_i(\tilde \zeta,\tilde p)$.
The proof is based on the following two properties of the $f_i$:
$$ f_if_j=\delta_{ij}f_i\qquad \text{and} \qquad f_i^2=f_i,$$
which follow from the definition of $f(\zeta,p)$.
We write
\begin{equation}\label{Grazing1}
\begin{split}
f(\zeta,p)^TA f(\tilde \zeta,\tilde p)\Big(f_i(\zeta,p)-f_i(\tilde \zeta,\tilde p)\Big)
& = \sum_{m,n=1}^d a_{mn}f_m\tilde f_n\Big(f_i-\tilde f_i\Big) \\
& = \sum_{n=1}^d a_{in}f_i \tilde f_n- \sum_{m=1}^d a_{mi}f_m\tilde f_i.
\end{split}
\end{equation}
Since $\zeta$ and $\tilde\zeta$ are independent, so are $f_i(\zeta,p)$ and
$f_j(\tilde \zeta,\tilde p)$ for any $i,j=1,\ldots,d$.
Moreover $\mathbb{E}[f_i(\zeta,p)]=p_i$ since $f_i(\zeta,p)=1$ with probability $p_i$ and
$f_i(\zeta,p)=0$ with probability $1-p_i$.
Taking the expectation in \eqref{Grazing1} we thus obtain
$$ \mathbb{E}\Big[ f(\zeta,p)^TA f(\tilde \zeta,\tilde p)\Big(f_i(\zeta,p)-f_i(\tilde \zeta,\tilde p)\Big) \Big]
=\sum_{n=1}^d a_{in}p_i \tilde p_n - \sum_{m=1}^d a_{mi}p_m \tilde p_i. $$
We deduce \eqref{Acumulada1} recalling that $a_{ij}=-a_{ji}$, $A$ being antisymmetric.
The proof is finished.
\end{proof}
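The identity \eqref{Acumulada1} can also be checked by direct Monte Carlo simulation. In the Python sketch below (ours) we take $f(\zeta,p)=e_j$ when $\zeta$ falls in the $j$-th interval of the partition of $[0,1]$ induced by the cumulative sums of $p$, a construction consistent with the properties of \eqref{definicionf} used in the proof:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])
p, q = np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.2, 0.6])
i, M = 0, 200_000

def f(zeta, prob):
    # f(zeta, prob) = e_j when zeta falls in the j-th interval of [0,1]
    e = np.zeros(len(prob))
    e[np.searchsorted(np.cumsum(prob), zeta)] = 1.0
    return e

acc = 0.0
for _ in range(M):
    F, Ft = f(rng.uniform(), p), f(rng.uniform(), q)
    acc += (F @ A @ Ft) * (F[i] - Ft[i])
rhs = sum(A[i, k] * (p[i] * q[k] + p[k] * q[i]) for k in range(3))
print(acc / M, rhs)  # the two values agree up to Monte Carlo error
\end{verbatim}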
We are now in position to prove Theorem \ref{grazing}.
We split the proof into several Steps.
The first one states that $u_\delta $ is an approximate solution of \eqref{LimitEq} in the sense of \eqref{DefWeakSol}.
\begin{step}
For any $\varphi\in C^3({\Delta})$,
\begin{equation}\label{Step1}
\begin{split}
& \int_{\Delta} \varphi(p)\,du_\delta (p,\tau)
- \int_{\Delta} \varphi(p)\, du_{\delta }(p,0)\\
& = \int_0^\tau \int_{{\Delta}} \nabla\varphi(p) \mathcal{F}[u_\delta(s)](p)\, du_\delta (p,s)\, ds
+\frac{r^2}{2\delta} \int_0^\tau \int_{{\Delta} } \sum_{i,j=1}^dQ_{ij}\partial_{ij}\varphi(p) G(p)^2 \, du_\delta (p,s)\, ds \\
&+ \int_0^\tau Error(s,\delta)\,ds
\end{split}
\end{equation}
where
\begin{equation}\label{GrazingStep1Error}
|Error(s,\delta)|\le \frac{C}{\delta} \|D^3\varphi \|_\infty (\delta^3+r^3)
+C \|D^2\varphi\|_\infty\, \delta,
\end{equation}
and the constant $C$ is independent of $t$, $\phi$, $\delta$ and $r$.
\end{step}
\begin{proof}
First, for any test-function $\varphi$ we have
\begin{align*}
\frac{d}{d\tau} \int_{\Delta} \varphi(p) \,du_\delta (p,\tau)
=&\frac1\delta\frac{d}{dt} \int_{\Delta} \varphi(p) \,du (p,t) \\
=& \frac1\delta \int_{{\Delta}^2} \mathbb{E}\Big[\varphi(p^*)-\varphi(p)\Big]
du (p,t)du (\tilde p,t).
\end{align*}
Performing a Taylor expansion up to the second order we have
$$\varphi(p^*)-\varphi(p)=\sum_{i=1}^d\partial_i\varphi(p)(p_i^*-p_i)+
\frac{1}{2}\sum_{i,j=1}^d\partial_{ij}\varphi(p)(p_i^*-p_i)(p_j^*-p_j)+R(p^*,p) $$
where
\begin{equation}\label{ErrorTerm}
|R(p^*,p)|\leq \frac16 \|D^3\varphi \|_\infty |p^*-p|^3.
\end{equation}
Thus,
\begin{equation}\label{Grazing20}
\begin{split}
& \int_{{\Delta}^2} \mathbb{E}\Big[\varphi(p^*)-\varphi(p)\Big] du (p,t)du (\tilde p,t) \\
& \qquad = \int_{{\Delta}^2} \sum_{i=1}^d\partial_i\varphi(p)\mathbb{E}\Big[p_i^*-p_i\Big] \\
& \qquad \quad +\frac{1}{2} \sum_{i,j=1}^d\partial_{ij}\varphi(p)
\mathbb{E}\Big[(p_i^*-p_i)(p_j^*-p_j)\Big] du (p,t)du (\tilde p,t) \\
& \qquad \quad + \int_{{\Delta}^2} \mathbb{E}\Big[R(p^*,p)\Big] du (p,t)du (\tilde p,t).
\end{split}
\end{equation}
We examine each term in the right hand side. In view of the interaction rule
\eqref{interaccion}, namely
\begin{eqnarray*}
p^* - p = \delta h (p) f(\zeta,p)^TA f(\tilde \zeta,\tilde p) \Big(f(\zeta,p)-f(\tilde \zeta,\tilde p)\Big) + r(q -\vec{1}/d )G(p),
\end{eqnarray*}
we have
\begin{eqnarray*}
\mathbb{E}[p_i^* - p_i] =
\delta h (p) \mathbb{E}\Big[f(\zeta,p)^TA f(\tilde \zeta,\tilde p) \Big(f_i(\zeta,p)-f_i(\tilde \zeta,\tilde p)\Big)\Big]
+ \mathbb{E}[r(q_i -1/d )G(p)].
\end{eqnarray*}
Since $q$ is uniformly distributed on ${\Delta}$, we have $\mathbb{E}[q_i]=1/d$. So,
in view of \eqref{Acumulada1} in the previous Lemma, we obtain
\begin{equation*}
\mathbb{E}[p_i^* - p_i] =
\delta h (p) \sum_{k=1}^d a_{ik}(p_i \tilde p_{k} + p_k\tilde p_{i} ).
\end{equation*}
By integrating we get
\begin{equation}\label{Grazing10}
\begin{split}
& \int_{{\Delta}^2} \sum_{i=1}^d\partial_i\varphi(p)\mathbb{E}\Big[p_i^*-p_i\Big]
du (p,t)du (\tilde p,t) \\
& = \delta \int_{{\Delta}} h (p) \sum_{i=1}^d\partial_i\varphi(p)
\sum_{k=1}^d a_{ik}(p_i \bar p_k(t) + p_k\bar p_i(t))du (p,t) \\
& = \delta \int_{{\Delta}} \nabla\varphi(p) \mathcal{F}[u_t](p)\, du (p,t),
\end{split}
\end{equation}
where the vector-field $\mathcal{F}$ is defined in \eqref{defcampo}.
We now study $\mathbb{E}\Big[(p_i^*-p_i)(p_j^*-p_j)\Big]$.
In view of the interaction rule,
\begin{align*}
(p_i^*-p_i)(p_j^*-p_j)
=&\Big[\delta h (p) f(\zeta)^TA f(\tilde \zeta) \Big(f_i(\zeta)-f_i(\tilde \zeta)\Big) + r(q_i -1/d )G(p)\Big]\\
&\quad \times \Big[\delta h (p) f(\zeta)^TA f(\tilde \zeta) \Big(f_j(\zeta)-f_j(\tilde \zeta)\Big) + r(q_j -1/d )G(p)\Big] \\
=&\Big(\delta h (p) f(\zeta)^TA f(\tilde \zeta)\Big)^2 \Big(f_i(\zeta)-f_i(\tilde \zeta)\Big)\Big(f_j(\zeta)-f_j(\tilde \zeta)\Big)\\
&+ \delta h (p)f(\zeta)^TA f(\tilde \zeta) \Big(f_i(\zeta)-f_i(\tilde \zeta)\Big) r(q_j -1/d )G(p)\\
&+\delta h (p)f(\zeta)^TA f(\tilde \zeta) \Big(f_j(\zeta)-f_j(\tilde \zeta)\Big) r(q_i -1/d )G(p)\\
&+ r^2(q_i -1/d )(q_j -1/d )G(p)^2.
\end{align*}
The second and third terms have zero expected value since $q$ is independent of $\zeta$
and $\tilde \zeta$ and $\mathbb{E}[q_i]=1/d$. Moreover, in view of the definition
\eqref{defQ} of $Q$, the expectation of the last term is $r^2G(p)^2 Q_{ij}$. Lastly, since
$|f_i(\zeta,p)|\le 1$ for any $i=1,\ldots,d$ and any $p\in{\Delta}$, the expectation of the first
term can be bounded by $C\delta^2$ for a constant $C$ depending only on $c$ and the
coefficients of $A$. Thus
\begin{equation*}
\begin{split}
\mathbb{E}\Big[(p_i^*-p_i)(p_j^*-p_j)\Big]
& = r^2G(p)^2 Q_{ij} + O(\delta^2).
\end{split}
\end{equation*}
By integrating,
\begin{equation} \label{Grazing2}
\begin{split}
& \int_{{\Delta}^2} \sum_{i,j=1}^d\partial_{ij}\varphi(p)
\mathbb{E}\Big[(p_i^*-p_i)(p_j^*-p_j)\Big] du (p,t)du (\tilde p,t) \\
& = r^2 \int_{{\Delta}^2}G(p)^2 \sum_{i,j=1}^d\partial_{ij}\varphi(p) Q_{ij} du (p,t)
+ O(\delta^2)\|D^2\varphi\|_\infty.
\end{split}
\end{equation}
It remains to bound the error term
$$ \int_{{\Delta}^2} \mathbb{E}\Big[R(p^*,p)\Big] du (p,t)du (\tilde p,t)
\le \frac16 \|D^3\varphi \|_\infty \int_{{\Delta}^2} \mathbb{E}\Big[|p^*-p|^3\Big] du (p,t) du
(\tilde p,t). $$ Using that $(a+b)^3\le C(a^3+b^3)$ for any $a,b\ge 0$, $|G(p)|\le 1$,
$| h (p)|\le c$, and $|f(\zeta,p)^TA f(\tilde \zeta,\tilde p)| \le \sum_{i,j=1}^d
|a_{ij}|f_i(\zeta,p)f_j(\tilde \zeta,\tilde p) \le \sum_{i,j=1}^d |a_{ij}|$, we have
\begin{equation*}
\begin{split}
|p^*-p|^3
& \le
\Big| \delta h (p) f(\zeta,p)^TA f(\tilde \zeta,\tilde p) \Big(f(\zeta,p)-f(\tilde \zeta,\tilde p)\Big) + r(q -\vec{1}/d )G(p) \Big|^3 \\
& \le C \delta^3 c^3 \Big(\sum_{i,j=1}^d |a_{ij}|\Big)^3 + Cr^3
\le C(\delta^3+r^3).
\end{split}
\end{equation*}
Thus,
\begin{equation}\label{BoundError}
\begin{split}
\int_{{\Delta}^2} \mathbb{E}\Big[R(p^*,p)\Big] du (p,t)du (\tilde p,t)
\le C \|D^3\varphi \|_\infty (\delta^3+r^3).
\end{split}
\end{equation}
By inserting \eqref{Grazing10}, \eqref{Grazing2} and \eqref{BoundError} into
\eqref{Grazing20}, we obtain \eqref{Step1}.
\end{proof}
We now verify that the sequence $(u_\delta)$ satisfies the assumptions of the Arzela-Ascoli
Theorem, namely boundedness and uniform equicontinuity. The proof is based on the
previous Step. Since the error term \eqref{GrazingStep1Error} involves $\|\varphi\|_{C^3}$, the
Wasserstein distance is of no use. Instead we will use the norm $\|u\|_{sup}$, $u\in
\mathcal{P}({\Delta})$, defined by
\begin{equation}\label{normamedida3}
\|u\|_{sup}:=\sup \Big\{ \int_{\Delta}\varphi(p)\,du(p) \text{ such that } \varphi\in C^3({\Delta},\mathbb{R}) \text{ and } \|\varphi\|_3 \leq 1\Big\},
\end{equation}
where
\begin{equation}\label{norma3}
\|\varphi\|_3:=\sum_{|\alpha|\leq 3}\|\partial^\alpha\varphi\|_\infty.
\end{equation}
According to \cite{gabetta1995metrics} this norm metrizes the weak convergence in $\mathcal{P}({\Delta})$.
\begin{step}\label{HypAA}
For any $\tau\in [0,T]$ and any $\delta>0$,
\begin{equation}\label{GrazingUnif}
\|u_\delta(\cdot,\tau)\|_{sup}\le 1.
\end{equation}
Moreover there exists a constant $K>0$ such that for any $\tau,\tau'\in [0,T]$ and
any $r,\delta>0$ small,
\begin{equation}\label{GrazingEqui}
\| u_\delta (\cdot,\tau)- u_{\delta }(\cdot,\tau')\|_{sup}\leq K \, |\tau-\tau'|.
\end{equation}
\end{step}
\begin{proof}
First, for any $\varphi\in C^3({\Delta})$ with $\|\varphi\|_{3}\le 1$, recalling that
$u_\delta(\cdot,\tau)$ is a probability measure, we clearly have
$$ \int_{\Delta}\varphi(p)\,du_\delta(p,\tau)
\leq\int_{\Delta} |\varphi(p)|\,du_\delta(p,\tau)
\le \int_{\Delta} \,du_\delta(p,\tau) = 1. $$
We deduce \eqref{GrazingUnif} taking the supremum over all such $\varphi$.
To prove \eqref{GrazingEqui} we write, using \eqref{Step1}, that
for any $\varphi\in C^3({\Delta})$ with $\|\varphi\|_{3}\le 1$,
\begin{equation}\label{Grazing30}
\begin{split}
& \int_{\Delta} \varphi(p)\,du_\delta (p,\tau)
- \int_{\Delta} \varphi(p)\,du_\delta (p,\tau') \\
& = \int_{\tau'}^\tau \int_{{\Delta}} \nabla\varphi(p) \mathcal{F}[u_\delta(s)](p)\, du_\delta (p,s)\, ds
+\frac{r^2}{2\delta} \int_{\tau'}^\tau \int_{{\Delta} } \sum_{i,j=1}^dQ_{ij}\partial_{ij}\varphi(p) G(p)^2 \, du_\delta (p,s) ds \\
&+ \int_{\tau'}^\tau Error(s,\delta)ds
\end{split}
\end{equation}
Since ${\Delta}$ is bounded we have $|\mathcal{F}[u_\delta(s)](p)|\le C$. Thus
\begin{equation*}
\begin{split}
& \Big| \int_{\Delta} \varphi(p)\,du_\delta (p,\tau)
- \int_{\Delta} \varphi(p)\,du_\delta (p,\tau') \Big| \\
& \le C\Big( \|\nabla\varphi\|_\infty + (r^2/\delta) \|D^2\varphi \|_\infty
+ \frac1\delta \|D^3\varphi \|_\infty (\delta^3+r^3)\Big) |\tau-\tau'| \\
& \le C(1+r^2/\delta)|\tau-\tau'|.
\end{split}
\end{equation*}
Since we assumed that $r^2/\delta\to \lambda$, we obtain \eqref{GrazingEqui}.
\end{proof}
We fix $T>0$. The space $\mathcal{P}({\Delta})$ is compact for the weak convergence (and so for the norm
$\|\cdot\|_{sup}$). In view of the previous Step we can apply the Arzela-Ascoli Theorem to the
sequence of continuous functions $u_\delta: [0,T]\to\mathcal{P}({\Delta})$ to obtain that a
subsequence converges uniformly as $\delta \to 0$.
A diagonal argument shows in fact that a subsequence, that we still denote $(u_\delta)$, converges
in $C([0,T],\mathcal{P}({\Delta}))$ for any $T>0$ to some $v\in C([0,+\infty),\mathcal{P}({\Delta}))$.
We now verify that $v$ satisfies \eqref{DefWeakSol}.
\begin{step}\label{GrazingLimit}
$v$ satisfies \eqref{DefWeakSol}.
\end{step}
\begin{proof}
We need to pass to the limit in \eqref{Step1} as $\delta\to 0$. We fix some $\varphi\in
C^3({\Delta})$ and $t>0$, and recall that $r^2/\delta \to \lambda$. Then it is easily seen that the last
term in the right-hand side of \eqref{Step1} can be bounded by $ C\|\varphi\|_{C^3}t (\delta +
r\cdot r^2/\delta) \to 0$. Let us pass to the limit in the first term in the right-hand side. For a fixed
$s\in [0,t]$, we write
\begin{equation}\label{Grazing100}
\begin{split}
& \int_{{\Delta}} \nabla\varphi(p) \mathcal{F}[u_\delta(s)](p)\, du_\delta (p,s) \\
&\qquad = \int_{{\Delta}} \nabla\varphi(p) (\mathcal{F}[u_\delta(s)](p)-\mathcal{F}[v(s)](p))\, du_\delta (p,s) \\
&\qquad \quad + \int_{{\Delta}} \nabla\varphi(p) \mathcal{F}[v(s)](p)\, du_\delta (p,s).
\end{split}
\end{equation}
Notice that
$$ \mathcal{F}_i[u_\delta(s)](p)-\mathcal{F}_i[v(s)](p)
= h(p)\bigl(p_ie_i^T A m^\delta(s)+m^\delta_i(s)e_i^TAp\bigr), $$
where
$$ m^\delta(s)=\int_{\Delta} \tilde p\,d\bigl(u_\delta(s)-v(s)\bigr)(\tilde p). $$
Since $u_\delta(s)\to v(s)$ weakly uniformly in $s\in [0,t]$, we also have that
$$W_1(u_\delta(s),v(s))\to 0$$ uniformly in $s\in [0,t]$. Thus $m^\delta(s)\to 0$ and then
$\mathcal{F}[u_\delta(s)](p)\to\mathcal{F}[v(s)](p)$ uniformly in $p\in {\Delta}$. We deduce
that the first term in the right-hand side of \eqref{Grazing100} goes to 0. The second term
converges to $\int_{\Delta} \nabla\varphi(p)\mathcal{F}[v(s)](p)\,dv(p,s)$ since
$\nabla\varphi$ and $\mathcal{F}[v(s)]$ are Lipschitz.
Moreover,
$$ \Big| \int_{{\Delta}} \nabla\varphi(p) \mathcal{F}[u_\delta(s)](p)\, du_\delta (p,s) \Big|
\le \|\nabla\varphi\|_\infty \|\mathcal{F}[u_\delta(s)]\|_\infty
\le C.$$
We then conclude that
$$ \int_0^t \int_{{\Delta}} \nabla\varphi(p) \mathcal{F}[u_\delta(s)](p)\, du_\delta (p,s)\, ds
\to \int_0^t \int_{{\Delta}} \nabla\varphi(p) \mathcal{F}[v(s)](p)\, dv(p,s)\, ds
$$
applying the Dominated Convergence Theorem. We can prove in the same way that the second
term in the right hand side of \eqref{Step1} converges to
$$ \lambda \int_0^t \int_{{\Delta} } \sum_{i,j=1}^dQ_{ij}\partial_{ij}\varphi(p) G(p)^2
\, dv(p,s)\, ds. $$
\end{proof}
To conclude the proof it remains to study the case where $r^2/\delta^\alpha\to \lambda >0$
for some $\alpha\in (0,1)$. In that case we rescale time considering $\tau=\delta^\alpha t$.
Reasoning as before we can write
\begin{equation*}
\begin{split}
& \int_{\Delta} \varphi(p)\,du_\delta (p,\tau)- \int_{\Delta} \varphi(p)\, du_{\delta }(p,\tau') \\
&\qquad =\int_{\tau '}^\tau \frac{d}{ds} \int_{\Delta} \varphi(p)\,du_\delta (p,s)\,ds \\
&\qquad =\frac{1}{\delta^\alpha} \int_{\tau'}^\tau \int_{{\Delta}^2} \mathbb{E}[\varphi(p^*)-\varphi(p)]\,
du_\delta (p,s)\,du_\delta (\tilde p,s)\,ds \\
& \qquad= \delta^{1-\alpha} \int_{\tau'}^\tau \int_{\Delta} \nabla \varphi(p)\, \mathcal{F}[u_\delta(s)](p)
\,du_\delta (p,s)\,ds \\
&\qquad\quad +\frac{r^2}{2\delta^\alpha} \int_{\tau'}^\tau \int_{{\Delta} } \sum_{i,j=1}^dQ_{ij}\partial_{ij}\varphi(p) G(p)^2 \, du_\delta (p,s)\,ds \\
& \qquad \quad +\frac{1}{\delta^\alpha}\int_{\tau'}^\tau \int_{{\Delta}^2} \mathbb{E}[R(p^*,p)]\, du_\delta (p,s)\,du_\delta (\tilde p,s)\,ds
+ O\bigl(\delta^{2-\alpha}\bigr)\|D^2\varphi\|_\infty.
\end{split}
\end{equation*}
In view of \eqref{BoundError} and recalling that we assume $r^2/\delta^\alpha\to \lambda$,
the last two terms can be bounded by
$$ C \|\varphi\|_{C^3} (\delta^{3-\alpha}+r\cdot r^2/\delta^\alpha)
+ C\delta^{2-\alpha} \|D^2\varphi\|_\infty \le C \|\varphi\|_{C^3} (\delta+r). $$ It follows that
Step \ref{HypAA} still holds and thus, applying Arzela-Ascoli Theorem, we obtain that a
subsequence, that we still denote $(u_\delta)$, converges in $C([0,T],\mathcal{P}({\Delta}))$ for
any $T>0$ to some $v\in C([0,+\infty),\mathcal{P}({\Delta}))$. Passing to the limit $\delta\to 0$
as in Step \ref{GrazingLimit}, we deduce that $v$ satisfies
\begin{align*}
\int_{\Delta} \varphi(p) \,dv (p,\tau) = & \int_{\Delta} \varphi(p)\, dv(p,\tau') +\int_{\tau'}^\tau
\frac{\lambda}{2 } \int_{{\Delta} } \sum_{i,j=1}^dQ_{ij}\partial_{ij}\varphi(p) G(p)^2 \, dv (p,s)ds
\end{align*}
which is the weak formulation of
\begin{equation*}
\frac{d}{d\tau}v=\frac{\lambda }{2} \sum_{i,j=1}^dQ_{ij}\partial_{ij}(G^2\, v).
\end{equation*}
This concludes the proof of Theorem \ref{grazing}.
\subsection{Well-posedness of equation \eqref{LimitEqq}. Proof of Theorem \ref{teotransporte}.}
Let us fix some $v\in C([0,+\infty),\mathcal{P}({\Delta}))$ and denote $\mathcal{F}(t,p):=\mathcal{F}[v(t)](p)$, namely
$$ \mathcal{F}_i(t,p)= h (p) (p_i e_i^TA \bar p(t) + \bar p_i(t)e_i^T A p), \qquad
\bar p(t)=\int_{\Delta} p\,dv_t(p). $$ Notice that $\vec{1}:=(1,\ldots,1)\in\mathbb{R}^d$ is normal to ${\Delta}$ and
that for any $p\in{\Delta}$,
$$ \vec{1}\cdot\mathcal{F}(t,p) = \sum_{i=1}^d \mathcal{F}_i(t,p)
= h (p)\bigl(p\cdot A\bar p(t)+\bar p(t)\cdot Ap\bigr) = 0 $$
since $A$ is antisymmetric, and
$$ \mathcal{F}(t,p)=0 \qquad \text{for any $p\in \partial{\Delta}$} $$
since $ h (p)=0$ if $p\in\partial{\Delta}$. It follows that any integral curve of $\mathcal{F}(t,p)$
starting from a point in ${\Delta}$ stays in the compact set ${\Delta}$ forever. Let us denote by $T_{s,t}^v$
the flow of the vector-field $\mathcal{F}(t,p)$, namely
$$ \frac{d}{dt}T_{s,t}^v(p) = \mathcal{F}[v(t)](T_{s,t}^v(p)), \qquad
T_{s,s}^v(p)=p. $$
We also let $T_t^v(p):=T_{0,t}^v(p)$.
We then have
\begin{lem}There holds
$$ T_t^v(p)\in{\Delta} \qquad \text{for any $p\in{\Delta}$ and any $t\ge 0$.} $$
\end{lem}
The method of characteristics allows to show in a standard way that the equation
$$\frac{d}{dt}u+\text{div}(\mathcal{F}[v(t)](p) u)=0$$
with initial condition $u_0\in \mathcal{P}({\Delta})$ has a unique solution in $C([0,+\infty),\mathcal{P}({\Delta}))$ given by
$u(t)=T^v_t\sharp u_0$.
Thus proving the existence of a unique solution to
\begin{equation}\label{FixePoint20}
\frac{d}{dt}v+\text{div}(\mathcal{F}[v(t)](p) v)=0
\end{equation}
for a given initial condition $v_0\in \mathcal{P}({\Delta})$ is equivalent to proving the existence of a solution to the fixed-point equation
\begin{equation}\label{FixedPoint10}
v(t)=T_t^v\sharp v_0.
\end{equation}
This can be done applying the Banach fixed-point theorem to the map $\Gamma(v)_t:=
T_t^v\sharp v_0$ in the complete metric space $ \{v\in C([0,T],\mathcal{P}({\Delta})):\, v(0)=v_0\} $ for a small
enough $T>0$. The details of the proof are an adaptation of
e.g. \cite{canizo2011well}. The adaptation is almost straightforward, noticing that
$\mathcal{F}[v(t)](p)$ is bounded and Lipschitz in $p$ uniformly in $t$ for a given $v\in
C([0,T],\mathcal{P}({\Delta}))$. We can then repeat the process on $[T,2T]$, $[2T,3T]$, \ldots{} to obtain $v\in
C([0,+\infty),\mathcal{P}({\Delta}))$ satisfying \eqref{FixedPoint10}, which is thus the unique solution to
\eqref{FixePoint20} with initial condition $v_0$. The continuity with respect to initial
conditions can also be obtained following the proof of \cite{canizo2011well}.
\section{Introduction}\label{sec1}
Generalized linear mixed models (GLMMs) have become a popular and very
useful class of statistical models. See, for example, \citet{Jia07},
McCulloch, Searle and Neuhaus (\citeyear{McCSeaNeu08}) for some wide-ranging accounts of
GLMMs with theory and applications. In the earlier years after GLMM was
introduced, one of the biggest challenges in inference about these
models was computation of the maximum likelihood estimators (MLEs).
As is well known, the likelihood function under a GLMM typically
involves integrals that cannot be computed analytically. The
computational difficulty was highlighted by the infamous salamander
mating data, first introduced by McCullagh and Nelder [(\citeyear{McCNel83}), Section 14.5].
A mixed logistic model, which is a special case of GLMM, was proposed
for the salamander data that involved crossed random effects for the
female and male animals. However, due to the fact that the random effects
are crossed, the likelihood function involves a high-dimensional integral
that not only does not have an analytic expression, but is also difficult
to evaluate numerically [e.g., \citet{Jia07}, Section 4.4.3]. For years, the
salamander data has been a driving force for the computational
developments in GLMM. Virtually every numerical procedure that was
proposed used this data as a ``gold standard'' to evaluate, or
demonstrate, the procedure. See, for example, \citet{KarZeg92},
\citet{BreCla93}, \citet{DruMcC93}, \citet{McC94},
\citet{BreLin95}, \citet{LinBre96}, \citet{Jia98}, \citet{BooHob99},
\citet{JiaZha01}, \citet{SutRao03}, and
\citet{Tor12}.
\subsection{A theoretical challenge and an open problem}\label{sec1.1}
To illustrate the numerical difficulty as well as a theoretical
challenge, which is the main objective of the current paper, let us
begin with an example.
\begin{example}\label{ex1}
A mixed logistic model was proposed by \citet{BreCla93} for the salamander data, and has since been used [e.g.,
\citet{BreLin95,LinBre96,Jia98}]. Some
alternative models, but only in terms of reparametrizations, have been
considered [e.g., \citet{BooHob99}]. \citet{JiaZha01} noted
that some of these models have ignored the fact that a group of
salamanders were used in both the summer experiment and one of the fall
experiments; in other words, there were replicates for some of the
pairs of female and male animals. Nevertheless, all of these models
are special cases of the following, more general setting. Suppose
that, given the random effects $u_{i}, v_{j}, (i,j)\in S$, where $S$
is a subset of ${\cal I}=\{(i,j)\dvtx1\leq i\leq m, 1\leq j\leq n\}$, binary
responses $y_{ijk}$, $(i,j)\in S$, $k=1,\ldots,c_{ij}$ are conditionally
independent such that, with $p_{ijk}=\mathrm{ P}(y_{ijk}=1|u,v)$, we have
$\operatorname{ logit}(p_{ijk})=x_{ijk}'\beta+u_{i}+v_{j}$, where
$\operatorname{ logit}(p)
=\log\{p/(1-p)\}, p\in(0,1)$, $x_{ijk}$ is a known vector of covariates,
$\beta$ is an unknown vector of parameters, and $u, v$ denote all the random
effects $u_{i}$ and $v_{j}$ that are involved. Here $c_{ij}$ is the number
of replicates for the $(i,j)$ cell. Without loss of generality, assume that
$S$ is an irreducible subset of ${\cal I}$ in that $m, n$ are the smallest
positive integers such that $S\subset{\cal I}$. Furthermore, suppose that
the random effects $u_{i}$'s and $v_{j}$'s are independent with
$u_{i}\sim
N(0,\sigma^{2})$ and $v_{j}\sim N(0,\tau^{2})$, where $\sigma^{2},
\tau^{2}$ are unknown variances. One may think of the random effects
$u_{i}$ and $v_{j}$ as corresponding to the female and male animals, as
in the salamander problem. In fact, for the salamander data, $c_{ij}=2$
for half of the pairs $(i,j)$, and $c_{ij}=1$ for the rest of the pairs.
It can be shown [e.g., \citet{Jia07}, page~126; also see Section~\ref{sec4} in the
sequel] that the log-likelihood function for estimating $\beta,\sigma^{2},
\tau^{2}$ involves an integral of dimension $m+n$, which, in particular,
increases with the sample size, and the integral cannot be further
simplified.
\end{example}
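To make the computational difficulty concrete, the following Python sketch (ours; plain Monte Carlo over the $m+n$ random effects, meant as an illustration rather than an efficient method) approximates the log-likelihood in the special case $x_{ijk}'\beta=\mu$, $c_{ij}=1$, $S={\cal I}$, $\sigma^{2}=\tau^{2}=1$ considered below:
\begin{verbatim}
import numpy as np

def loglik_mc(mu, y, R=2000):
    # Monte Carlo approximation of the log-likelihood of the crossed
    # mixed logistic model logit(p_ij) = mu + u_i + v_j, u_i, v_j ~ N(0,1):
    # average the conditional likelihood over R draws of (u, v).
    rng = np.random.default_rng(0)  # common random numbers across mu values
    m, n = y.shape
    vals = np.empty(R)
    for r in range(R):
        eta = mu + rng.standard_normal(m)[:, None] \
                 + rng.standard_normal(n)[None, :]
        vals[r] = np.sum(y * eta - np.log1p(np.exp(eta)))
    amax = vals.max()  # log-sum-exp for numerical stability
    return amax + np.log(np.mean(np.exp(vals - amax)))

rng = np.random.default_rng(1)
m = n = 20
u, v = rng.standard_normal(m)[:, None], rng.standard_normal(n)[None, :]
y = (rng.uniform(size=(m, n)) < 1 / (1 + np.exp(-(0.5 + u + v)))).astype(float)
grid = np.linspace(-0.5, 1.5, 9)
# crude grid search for the MLE (illustration only)
print(grid[np.argmax([loglik_mc(t, y) for t in grid])])
\end{verbatim}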
The fact that the random effects are crossed, as in Example~\ref{ex1}, presents
not only a computational challenge but also a theoretical one, that is,
to prove that the MLE is consistent in such a model. In contrast,
the situation is very different if the GLMM has clustered, rather
than crossed, random effects. For example, consider the following.
\begin{example}\label{ex2}
Suppose that, given the random effects $u_{1},
\ldots,u_{m}$, binary responses $y_{ij}, i=1,\ldots,m, j=1,\ldots,n_{i}$
are conditionally independent such that, with $p_{ij}=\mathrm{ P}(y_{ij}
=1|u)$, we have $\operatorname{ logit}(p_{ij})=x_{ij}'\beta+u_{i}$, where
$x_{ij}$ is a vector of known covariates, $\beta$ a vector of unknown
coefficients, and $u=(u_{i})_{1\leq i\leq m}$. Furthermore, suppose
that the $u_{i}$'s are independent with $u_{i}\sim N(0,\sigma^{2})$,
where $\sigma^{2}$ is unknown. It is easy to show that the
log-likelihood function for estimating $\beta,\sigma^{2}$ only
involves one-dimensional integrals. Not only that, a major theoretical
advantage of this case is that the log-likelihood can be expressed as
a sum of independent random variables. In fact, this is a main
characteristic of GLMMs with clustered random effects. Therefore,
limit theorems for sums of independent random variables [e.g., \citet{Jia10}, Chapter
6]
can be utilized to obtain asymptotic properties of the MLE.
\end{example}
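For comparison, a minimal Python sketch (ours) of the likelihood computation in this clustered case shows that each cluster contributes a one-dimensional integral, which can be evaluated, for example, by Gauss-Hermite quadrature:
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermgauss

def loglik_clustered(mu, sigma, clusters, deg=30):
    # log-likelihood of logit(p_ij) = mu + u_i, u_i ~ N(0, sigma^2):
    # a sum over clusters of 1-d integrals, via Gauss-Hermite quadrature
    x, w = hermgauss(deg)  # nodes/weights for the weight exp(-x^2)
    total = 0.0
    for y in clusters:     # y: 0/1 responses of one cluster
        eta = mu + np.sqrt(2.0) * sigma * x[:, None]  # u = sqrt(2)*sigma*x
        cond = np.prod(np.where(y, 1 / (1 + np.exp(-eta)),
                                   1 / (1 + np.exp(eta))), axis=1)
        total += np.log(np.sum(w * cond) / np.sqrt(np.pi))
    return total

rng = np.random.default_rng(0)
clusters = [(rng.uniform(size=5) <
             1 / (1 + np.exp(-(0.3 + rng.standard_normal())))).astype(int)
            for _ in range(50)]
print(loglik_clustered(0.3, 1.0, clusters))
\end{verbatim}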
Generally speaking, the classical approach to proving consistency of
the MLE [e.g., \citet{LehCas98}, Chapter 6; \citet{Jia10}] relies on
asymptotic theory for sum of random variables, independent or not.
However, one cannot express the log-likelihood in Example~\ref{ex1} as a sum
of random variables with manageable properties. For this reason, it
is very difficult to tackle asymptotic behavior of the MLE in the
salamander problem, or any GLMM with crossed random effects, assuming
that the numbers of random effects in all of the crossed factors
increase. In fact, the problem is difficult to solve even for the
simplest case, as stated in the open problem below.
\begin{quote}
{Open problem} [\textit{e.g., Jiang} (\citeyear{Jia10}), \textit{page 541}]:\vspace*{-2pt}
\textit{Suppose that $x_{ijk}'\beta=\mu$, an unknown parameter, $c_{ij}=1$ for all
$i,j$, $S={\cal I}$, and $\sigma^{2}, \tau^{2}$ are known, say, $\sigma^{2}
=\tau^{2}=1$ in Example~\ref{ex1}. Thus, $\mu$ is the only unknown parameter.
Suppose that $m, n\rightarrow\infty$. Is the MLE of $\mu$ consistent?}
\end{quote}
It was claimed [\citet{Jia10}, pages~541, 550] that even for this seemingly
trivial case, the answer was not known but expected to be anything but
trivial.
\subsection{Origination of the open problem}\label{sec1.2}
The problem regarding consistency of the MLE in GLMMs with crossed random
effects began to draw attention in early~1997. It remained unsolved over
the past 15 years, and was twice cited as an open problem in the
literature, first in Jiang [(\citeyear{Jia07}), page~173] and later in Jiang [(\citeyear{Jia10}),
page~541]. The latter also provided the following supporting evidence for a
positive answer [\citet{Jia10}, page~550].
Let $k=m\wedge n$. Consider a subset of the data, $y_{ii}, i=1,\ldots,k$.
Note that the subset is a sequence of i.i.d. random variables. It follows,
by the standard arguments, that the MLE of $\mu$ based on the subset,
denoted by $\tilde{\mu}$, is consistent. Let $\hat{\mu}$ denote the MLE
of $\mu$ based on the full data, $y_{ij}, i=1,\ldots,m, j=1,\ldots,n$. The
point is that even the MLE based on a subset of the data, $\tilde{\mu}$,
is consistent; and if one has more data (information), one is expected to
do better. Therefore, $\hat{\mu}$ has to be consistent as well.
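In the setting of the open problem the subset estimator $\tilde{\mu}$ is essentially explicit: the $y_{ii}$ are i.i.d. Bernoulli with success probability $p_{0}(\mu)=\mathrm{E}\{h(\mu+\xi)\}$, where $h(x)=e^{x}/(1+e^{x})$ and $\xi\sim N(0,2)$ (the notation of Section~\ref{sec2} below), and $p_{0}$ is strictly increasing, so that $\tilde{\mu}$ solves $p_{0}(\mu)=\bar{y}$ whenever $\bar{y}\in(0,1)$. A minimal Python sketch (ours):
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermgauss

x, w = hermgauss(40)

def p0(mu):
    # p0(mu) = E h(mu + xi), xi ~ N(0,2), by Gauss-Hermite (xi = 2x)
    return np.sum(w / (1 + np.exp(-(mu + 2.0 * x)))) / np.sqrt(np.pi)

def subset_mle(ybar, lo=-20.0, hi=20.0, tol=1e-10):
    # solve p0(mu) = ybar by bisection; p0 is strictly increasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p0(mid) < ybar else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
mu_true, k = 0.7, 5000
xi = np.sqrt(2.0) * rng.standard_normal(k)
y = (rng.uniform(size=k) < 1 / (1 + np.exp(-(mu_true + xi)))).astype(float)
print(subset_mle(y.mean()))  # consistent: close to mu_true = 0.7
\end{verbatim}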
\subsection{The rest of the paper}\label{sec1.3}
In Section~\ref{sec2}, we give a positive answer to the open problem as well as the
proof. Surprisingly, the proof is fairly short, thanks to a new,
nonstandard technique that we introduce, known as the \textit{subset
argument}. Using this argument, we are able to establish both Cram\'{e}r
(\citeyear{Cra46}) and \citet{Wal49} types of consistency results for the MLE. It is
fascinating that a 15-year-old problem can be solved in such
a simple way. The new technique may be useful well beyond solving the
open problem---for proving consistency of the MLE in cases of dependent
observations. We consider some applications of the subset argument in
Section~\ref{sec3} regarding consistency of the MLE in a general GLMM. An example
is used in Section~\ref{sec4} to further illustrate the new technique. Remark and
discussion on a number of theoretical and practical issues are offered
in Section~\ref{sec5}.
\section{Answer to open problem}\label{sec2}
Throughout this section, we focus on the open problem stated in Section
\ref{sec1}. Let $\mu$ denote the true parameter.
\begin{theorem}[(Cram\'{e}r consistency)]\label{th1}
There is, with probability tending
to one, a root to the likelihood equation, $\hat{\mu}$, such that
$\hat{\mu}\stackrel{\mathrm{ P}}{\longrightarrow}\mu$.
\end{theorem}
\begin{pf}
The idea was actually hinted in Jiang [(\citeyear{Jia10}), page~550] as
``evidence'' that supports a positive answer (see the last paragraph of
Section~\ref{sec1.2} of the current paper). Basically, the idea suggests that,
perhaps, one could use the fact that the MLE based on the subset data is
consistent to argue that the MLE based on the full data is also
consistent. The question is how to execute the idea. Recall that, in the
original proof of Wald [(\citeyear{Wal49}); also see \citet{Wol49}], the focus was on
the likelihood ratio $p_{\theta}(y)/p_{\theta_{0}}(y)$, and showing that
the ratio converges to zero outside any (small) neighborhood of
$\theta_{0}$, the true parameter vector. Can we execute the subset idea
in terms of the likelihood ratio? This leads to consideration of the
relationship between the likelihood ratio under the full data and that
under the subset data. It is in this context that the following
\textit{subset inequality} (\ref{eq2}) is derived (see Section~\ref{sec5.1} for further
discussion), which is the key to the proof.
Let $y_{[1]}$ denote the (row) vector of $y_{ii}, i=1,\ldots,m\wedge n$,
and $y_{[2]}$ the (row) vector of the rest of the $y_{ij}, i=1,\ldots,m,
j=1,\ldots,n$. Let $p_{\mu}(y_{[1]},y_{[2]})$ denote the probability mass
function (p.m.f.) of $(y_{[1]},y_{[2]})$, $p_{\mu}(y_{[1]})$ the p.m.f. of
$y_{[1]}$,
\begin{equation}\label{eq1}
p_{\mu}(y_{[2]}|y_{[1]})=\frac{p_{\mu}(y_{[1]},
y_{[2]})}{p_{\mu}(y_{[1]})}
\end{equation}
the conditional p.m.f. of $y_{[2]}$ given $y_{[1]}$, and $\mathrm{
P}_{\mu}$
the probability distribution, respectively, when $\mu$ is the true
parameter. For any $\varepsilon>0$, we have
\begin{eqnarray}\label{eq2}
\mathrm{ P}_{\mu}\bigl\{p_{\mu}(y_{[1]},y_{[2]})
\leq p_{\mu+\varepsilon}(y_{[1]}, y_{[2]})|y_{[1]}\bigr
\}&=&\mathrm{ P}_{\mu} \biggl\{\frac{p_{\mu
+\varepsilon}(y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}\geq1\Big\vert
y_{[1]} \biggr\}
\nonumber
\\
&\leq&\mathrm{ E} \biggl\{\frac{p_{\mu+\varepsilon}(y_{[1]},
y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}\Big\vert y_{[1]}
\biggr\}
\nonumber\\
&=&\sum_{y_{[2]}}\frac{p_{\mu+\varepsilon}(y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},
y_{[2]})}p_{\mu}(y_{[2]}|y_{[1]})
\\
&=&\sum_{y_{[2]}}\frac{p_{\mu+\varepsilon}(y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]})}
\nonumber
\\
&=&\frac{p_{\mu+\varepsilon}(y_{[1]})}{p_{\mu}(y_{[1]})},\nonumber
\end{eqnarray}
using (\ref{eq1}). A more general form of (\ref{eq2}) is given in Section~\ref{sec5.1}.
On the other hand, by the standard asymptotic arguments [e.g., \citet{Jia10}, page~9],
it can be shown that the likelihood ratio $p_{\mu
+\varepsilon}(y_{[1]})/p_{\mu}(y_{[1]})$ converges to zero in probability,
as $m\wedge n\rightarrow\infty$. Here we use the fact that the components
of $y_{[1]}$, $y_{ii}, 1\leq i\leq m\wedge n$ are independent Bernoulli
random variables. It follows that, for any $\eta>0$, there is $N_{\eta}
\geq1$ such that, with probability $\geq1-\eta$, we have $\zeta_{N}=
\mathrm{ P}_{\mu}\{p_{\mu}(y_{[1]},y_{[2]})\leq p_{\mu+\varepsilon}(y_{[1]},
y_{[2]})|y_{[1]}\}\leq\gamma^{m\wedge n}$ for some $0<\gamma<1$, if
$m\wedge n\geq N_{\eta}$. The argument shows that $\zeta_{N}=O_\mathrm{
P}(\gamma^{m\wedge n})$, hence converges to $0$ in probability. It
follows, by the dominated convergence theorem, that $\mathrm{ E}_{\mu}(
\zeta_{N})=\mathrm{ P}_{\mu}\{p_{\mu}(y_{[1]},y_{[2]})\leq p_{\mu
+\varepsilon}(y_{[1]},y_{[2]})\}\rightarrow0$. Similarly, we have
$\mathrm{
P}_{\mu}\{p_{\mu}(y_{[1]},y_{[2]})\leq p_{\mu-\varepsilon}(y_{[1]},
y_{[2]})\}\rightarrow0$. The rest of the proof follows by the standard
arguments [e.g., \citet{Jia10}, pages~9--10].
\end{pf}
The result of Theorem~\ref{th1} is usually referred to as Cram\'{e}r-type
consistency [Cram\'{e}r (\citeyear{Cra46})], which states that a root to the likelihood
equation is consistent. However, it does not always imply that the MLE,
which by definition is the (global) maximizer of the likelihood function,
is consistent. A stronger result is called Wald-type consistency
[\citet{Wal49}; also see \citet{Wol49}], which states that the MLE is consistent.
Note that the limiting process in Theorem~\ref{th1} is $m, n\rightarrow\infty$,
or, equivalently, $m\wedge n\rightarrow\infty$ (see Section~\ref{sec5.4} for
discussion). With a slightly more restrictive limiting process, the
Wald-consistency can actually be established, as follows.
\begin{theorem}[(Wald consistency)]\label{th2}
If $(m\wedge n)^{-1}\log(m\vee n)
\rightarrow0$ as $m,n\rightarrow\infty$, then the MLE of $\mu$ is
consistent.
\end{theorem}
\begin{pf}
Define $p_{0}(\lambda)=\mathrm{ E}\{h(\lambda+\xi)\}$, where
$h(x)=e^{x}/(1+e^{x})$ and $\xi\sim N(0,2)$. Write $p_{0}=p_{0}(\mu)$.
For any integer $k$, divide the interval $[k,k+1)$ by
$\lambda_{k,j}=k+\delta(mn)^{-1}(m\wedge n)j$, $j=1,\ldots,J$, where $J=
[mn/\delta(m\wedge n)]$ and $0<\delta<1-p_{0}$. It is easy to show that
$|(\partial/\partial\mu)\log p_{\mu}(y_{[1]},y_{[2]})|\leq mn$ uniformly
for all $\mu$. Thus, for any $\lambda\in[k,k+1)$, there is $1\leq j\leq
J$, such that $\log p_{\lambda}(y_{[1]},y_{[2]})-\log p_{\lambda_{k,j}}(
y_{[1]},y_{[2]})=\{(\partial/\partial\mu)\log p_{\mu}(y_{[1]},y_{[2]})
|_{\mu=\tilde{\lambda}}\}(\lambda-\lambda_{k,j})\leq\delta(m\wedge n)$,
where $\tilde{\lambda}$ lies between $\lambda$ and $\lambda_{k,j}$. It
follows that
\[
\sup_{\lambda\in[k,k+1)}\frac{p_{\lambda}(y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},
y_{[2]})}\leq e^{\delta(m\wedge n)}\max_{1\leq j\leq J}
\frac{p_{\lambda_{k,
j}}(y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}.
\]
Therefore, by the subset argument [see (\ref{eq2})], we have
\begin{eqnarray}\label{eq3}
&&\mathrm{ P}_{\mu} \biggl\{\sup_{\lambda\in[k,k+1)}
\frac{p_{\lambda}(
y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}>1\Big\vert y_{[1]} \biggr\}
\nonumber
\\
&&\qquad\leq\sum_{j=1}^{J}\mathrm{
P}_{\mu} \biggl\{\frac{p_{\lambda_{k,j}}(
y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}>e^{-\delta(m\wedge n)}\Big\vert
y_{[1]} \biggr\}
\\
&&\qquad \leq e^{\delta(m\wedge n)}\sum_{j=1}^{J}
\frac{p_{\lambda_{k,
j}}(y_{[1]})}{p_{\mu}(y_{[1]})}.\nonumber
\end{eqnarray}
On the other hand, we have $0\leq1-p_{0}(\lambda)=\mathrm{ E}\{1+\exp
(\lambda
+\xi)\}^{-1}\leq \break e^{-\lambda}\mathrm{ E}(e^{-\xi})=e^{1-\lambda}$; and,
similarly, $0\leq p_{0}(\lambda)\leq e^{1+\lambda}$. Let ${\cal
A}_{\delta}
=\{|\Delta|\leq\delta\}$ with $\Delta=(m\wedge n)^{-1}\sum_{i=1}^{m\wedge
n}y_{ii}-p_{0}$. If $k \geq1$, then, for any $1\leq j\leq J$, write $p_{1}
=p_{0}(\lambda_{k,j})$. We have, on ${\cal A}_{\delta}$,
\begin{eqnarray*}
\frac{p_{\lambda_{k,j}}(y_{[1]})}{p_{\mu}(y_{[1]})}&=& \biggl\{ \biggl(\frac
{p_{1}}{p_{0}} \biggr)^{p_{0}+\Delta}
\biggl(\frac{1-p_{1}}{1-p_{0}} \biggr)^{1-p_{0}-\Delta} \biggr\}^{m\wedge n}
\\
&\leq&\bigl\{a_{\delta}^{-1}(1-p_{1})^{1-p_{0}-\delta}
\bigr\}^{m\wedge n}
\\
&\leq&\bigl[a_{\delta}^{-1}\exp\bigl\{(1-\lambda_{k,j})
(1-p_{0}-\delta)\bigr\} \bigr]^{m\wedge n}
\\
&\leq&\exp \bigl[\bigl\{1-p_{0}-\delta-\log a_{\delta}-(1-p_{0}-
\delta)k\bigr\} (m\wedge n) \bigr],
\end{eqnarray*}
where $a_{\delta}=\inf_{|x|\leq\delta}p_{0}^{p_{0}+x}(1-p_{0})^{1
-p_{0}-x}>0$. It follows, by (\ref{eq3}), that
\begin{eqnarray*}
&&\mathrm{ P}_{\mu} \biggl\{\sup_{\lambda\in[k,k+1)}
\frac{p_{\lambda}(
y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}>1\Big\vert y_{[1]} \biggr\}
\\
&&\qquad \leq\frac{mn}{\delta(m\wedge n)}\exp\bigl[\bigl\{1-p_{0}-\log
a_{\delta}-(1-p_{0} -\delta)k\bigr\}(m\wedge n)\bigr]
\end{eqnarray*}
on ${\cal A}_{\delta}$, or, equivalently, that
\begin{eqnarray}\label{eq4}
&&\mathrm{ P}_{\mu} \biggl\{\sup_{\lambda\in[k,k+1)}
\frac{p_{\lambda}(
y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}>1, |\Delta|\leq\delta\Big\vert y_{[1]} \biggr\}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad \leq\frac{mn}{\delta(m\wedge n)}\exp\bigl[\bigl\{1-p_{0}-\log
a_{\delta}-(1-p_{0} -\delta)k\bigr\}(m\wedge n)
\bigr]1_{{\cal A}_{\delta}}.
\end{eqnarray}
Note that ${\cal A}_{\delta}\in{\cal F}(y_{[1]})$. By taking expectations
on both sides of (\ref{eq4}), it follows that the unconditional probability
corresponding to the left side is bounded by the right side without
$1_{{\cal A}_{\delta}}$, for $k=1,2,\ldots.$ Therefore, we have
\begin{eqnarray}\label{eq5}
&&\mathrm{ P}_{\mu} \biggl\{\sup_{\lambda\in[k,k+1)}\frac{p_{\lambda}(y_{[1]},
y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}>1
\mbox{ for some } k\geq K, |\Delta|\leq\delta \biggr\}
\nonumber
\\
&&\qquad\leq\sum_{k=K}^{\infty}\mathrm{
P}_{\mu} \biggl\{\sup_{\lambda\in
[k,k+1)}\frac
{p_{\lambda}(y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}>1,|\Delta|\leq
\delta \biggr\}
\nonumber
\\
&&\qquad\leq\frac{mn}{\delta(m\wedge n)}\exp\bigl\{(1-p_{0}-\log a_{\delta
}) (m
\wedge n)\bigr\}\sum_{k=K}^{\infty}e^{-(1-p_{0}-\delta)(m\wedge n)k}
\nonumber\\
&&\qquad=\frac{mn}{\delta(m\wedge n)}\exp\bigl\{(1-p_{0}-\log a_{\delta}) (m
\wedge n)\bigr\}\frac{e^{-(1-p_{0}-\delta)(m\wedge n)K}}{1-e^{-(1-p_{0}-\delta
)(m\wedge
n)}}\\
&&\qquad= \bigl\{1-e^{-(1-p_{0}-\delta)(m\wedge n)} \bigr\}^{-1}\nonumber\\
&&\qquad\quad{}\times\exp\bigl[ -(m\wedge n)\bigl
\{(1-p_{0}-\delta)K-1+p_{0}+\log a_{\delta}\nonumber\\
&&\hspace*{80pt}\qquad\quad{}-(m\wedge n)^{-1}\log(m\vee
n)+(m\wedge n)^{-1}\log\delta\bigr\}\bigr].\nonumber
\end{eqnarray}
Thus, if we choose $K$ such that $(1-p_{0}-\delta)K-1+p_{0}+\log
a_{\delta}\geq1$, then, for large $m\wedge n$, the probability on the
left side of (\ref{eq5}) is bounded by $2e^{-(m\wedge n)/2}$. On the other
hand, we have $\mathrm{ P}_{\mu}({\cal A}_{\delta}^{c})\rightarrow0$, as
$m\wedge n\rightarrow\infty$. Thus, we have
\begin{eqnarray}\label{eq6}
&&\mathrm{ P} \biggl\{\frac{p_{\lambda}(y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},
y_{[2]})}>1 \mbox{ for some } \lambda
\geq K \biggr\}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad\leq 2e^{-(m\wedge n)/2}+\mathrm{ P}\bigl({\cal A}_{\delta
}^{c}
\bigr)\longrightarrow0
\end{eqnarray}
as $m\wedge n\rightarrow\infty$. Similarly, the left side of (\ref{eq6}), with the
words ``$\lambda\geq K$'' replaced by ``$\lambda\leq-K$,'' goes to zero,
as $m\wedge n\rightarrow\infty$, if $K$ is chosen sufficiently large.
On the other hand, again by the subset argument, it can be shown (see
the supplementary material [\citet{supp}]) that for any $\varepsilon>0$ and $K>|\mu
|+\varepsilon$,
we have
\begin{eqnarray}\label{eq7}
P_{\mu} \biggl\{\sup_{\lambda\in[-K,\mu-\varepsilon)\cup(\mu+\varepsilon,K]} \frac{p_{\lambda}(y_{[1]},y_{[2]})}{p_{\mu}(y_{[1]},y_{[2]})}>1 \biggr\} &
\longrightarrow&0
\end{eqnarray}
as $m, n\rightarrow\infty$. The consistency of the MLE then follows by
combining (\ref{eq7}) with the previously proved results.
\end{pf}
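The function $p_{0}(\cdot)$ used in the proof above is a univariate
Gaussian average and can be evaluated essentially to machine accuracy.
A minimal Python sketch using Gauss--Hermite quadrature (the quadrature
order of 40 is an arbitrary choice):
\begin{verbatim}
# Sketch: p0(lambda) = E h(lambda + xi), xi ~ N(0, 2), via
# Gauss-Hermite quadrature.
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(40)

def p0(lam):
    # E f(xi) = pi^{-1/2} sum_i w_i f(sqrt(2)*sigma*x_i), sigma^2 = 2
    z = lam + 2.0 * nodes       # sqrt(2) * sqrt(2) = 2
    return np.sum(weights / (1.0 + np.exp(-z))) / np.sqrt(np.pi)

print(p0(0.0))                  # = 0.5, by symmetry of h and xi
\end{verbatim}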
\section{Beyond}\label{sec3}
We consider a few more applications of the subset argument, introduced
in the previous section. All applications concern a general GLMM,
whose definition is given below for the sake of completeness [see, e.g., \citet{Jia07} for further details].
(i) Suppose that, given a vector $u$ of random effects, responses
$y_{1},\ldots,y_{N}$ are conditionally independent with conditional
density function,\vspace*{1pt} with respect to a $\sigma$-finite measure $\nu$,
given by the exponential family $f_{i}(y_{i}|u)=\exp[a_{i}^{-1}(
\phi)\{y_{i}\xi_{i}-b(\xi_{i})\}+c_{i}(y_{i},\phi)]$, where $\phi$
is a dispersion parameter (which in some cases is known), and $b(\cdot),
a_{i}(\cdot), c_{i}(\cdot,\cdot)$ are known, continuously differentiable
functions with respect to $\xi_{i}$ and $\phi$. The natural parameter
of the conditional exponential family, $\xi_{i}$, is therefore
associated with the conditional mean, $\mu_{i}=\mathrm{ E}(y_{i}|u)$,
according to the properties of the exponential family [e.g., \citet{McCNel83}, Section 2.2.2]. (ii) Furthermore, suppose that $\mu_{i}$
satisfies $g(\mu_{i})=x_{i}'\beta+z_{i}'u$, where $x_{i}, z_{i}$ are
known vectors, $\beta$ is a vector of unknown parameters, and $g(\cdot)$
is a link function. (iii) Finally, assume that $u\sim N(0,G)$, where the
covariance matrix $G$ may depend on a vector $\varphi$ of dispersion
parameters.
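For concreteness, the following Python sketch simulates binary
responses from one special case of (i)--(iii): a logit link, an
intercept only, and two crossed Gaussian random-effect factors. The
sizes and variance components are illustrative values.
\begin{verbatim}
# Sketch: simulate from a crossed-effects logistic GLMM,
# logit{E(y_ij | u, v)} = mu + u_i + v_j; all settings illustrative.
import numpy as np

rng = np.random.default_rng(7)
m, n = 30, 40
mu, sigma, tau = 0.3, 1.0, 0.8
u = rng.normal(0.0, sigma, m)        # row random effects
v = rng.normal(0.0, tau, n)          # column random effects
eta = mu + u[:, None] + v[None, :]   # m x n linear predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
print(y.shape, y.mean())
\end{verbatim}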
It is typically possible to find a subset of the data whose elements are
independent, in some sense, under a general GLMM. For example, under the so-called ANOVA
GLMM [e.g., \citet{Lin97}], a subset of independent data can always be found.
Here an ANOVA GLMM satisfies $g(\mu)=X\beta+Z_{1}u_{1}+\cdots+Z_{s}u_{s}$,
where $\mu=(\mu_{i})_{1\leq i\leq N}$, $g(\mu)=[g(\mu_{i})]_{1\leq i\leq
N}$, $X=(x_{i}')_{1\leq i\leq N}$, $Z_{r}=(z_{ir}')_{1\leq i\leq N},
1\leq
r\leq s$, are known matrices, $u_{r}, 1\leq r\leq s$ are vectors of
independent random effects, and $u_{1},\ldots,u_{s}$ are independent.
Examples~\ref{ex1} and~\ref{ex2} are special cases of the ANOVA GLMM. Note that in both
examples the responses are indexed by $(i,j)$, instead of $i$, but this
difference is trivial. Nevertheless, the ``trick'' is to select a subset,
or more than one subset if necessary, with the following desirable
properties: (I) the subset(s) can be divided into independent
clusters with the number(s) of clusters increasing with the
sample size; and (II) the combination of the subset(s) jointly
identifies all the unknown parameters. More specifically, let $y_{i}^{(a)},
i=1,\ldots,N_{a}$ be the $a$th subset of the data, $1\leq a \leq b$,
where $b$ is a fixed positive integer. Suppose that, for each $a$,
there is a partition, $\{1,\ldots,N_{a}\}=\bigcup_{j=1}^{m_{a}}S_{a,j}$.
Let $y_{a,j}=[y_{i}^{(a)}]_{i\in S_{a,j}}$, and $p_{\theta}(y_{a,j})$ be
the probability density function (p.d.f.) of $y_{a,j}$, with respect to the
measure $\nu$ (or the product measure induced by $\nu$ if $y_{a,j}$ is
multivariate), when $\theta$ is the true parameter vector. Let $\Theta$
denote the parameter space, and $\theta_{0}$ the true parameter vector.
Then, (I) and (II) can be formally stated as follows:
\begin{longlist}[(A1)]
\item[(A1)] $y_{a,j}, 1\leq j\leq m_{a}$ are independent with $m_{a}
\rightarrow\infty$ as $N\rightarrow\infty, 1\leq a\leq b$;
\item[(A2)] for every $\theta\in\Theta\setminus\{\theta_{0}\}$, we have
\[
\min_{1\leq a\leq b}\limsup_{N\rightarrow\infty}\frac{1}{m_{a}} \sum
_{j=1}^{m_{a}}\mathrm{ E}_{\theta_{0}} \biggl[\log
\biggl\{\frac
{p_{\theta}(y_{a,j})}{p_{\theta_{0}}(y_{a,j})} \biggr\} \biggr]< 0.
\]
\end{longlist}
Note that (A2) controls the average Kullback--Leibler information
[\citet{KulLei51}]; thus, the inequality always holds if $<$ is
replaced by $\leq$.
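When, as in Examples~\ref{ex1} and~\ref{ex2}, the clusters of a subset
reduce to Bernoulli variables, the expectation in (A2) is minus an
explicit Kullback--Leibler divergence and can be inspected directly; a
short sketch (the success probabilities are illustrative):
\begin{verbatim}
# Sketch: E_{theta0} log{p_theta(y)/p_theta0(y)} for a Bernoulli
# cluster equals -KL(p0 || p1); probabilities are illustrative.
import numpy as np

def neg_kl(p0, p1):
    return (p0 * np.log(p1 / p0)
            + (1.0 - p0) * np.log((1.0 - p1) / (1.0 - p0)))

print(neg_kl(0.55, 0.55))   # 0: theta not identified by this subset
print(neg_kl(0.55, 0.70))   # < 0: (A2) holds for such a theta
\end{verbatim}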
\subsection{Finite parameter space}\label{sec3.1}
Let us first consider a simpler case by assuming that $\Theta$ is finite.
Although the assumption may seem restrictive, it is not totally
unrealistic. For example, any computer system only allows a finite number
of digits. This means that the parameter space that is practically stored
in a computer system is finite. Using the subset argument, it is fairly
straightforward to prove the following (see the supplementary material [\citet{supp}]).
\begin{theorem}\label{th3}
Under assumptions \textup{(A1)} and \textup{(A2)}, if, in
addition,
\begin{longlist}[(A3)]
\item[(A3)] for every $\theta\in\Theta\setminus\{\theta_{0}\}$, we have
\[
\frac{1}{m_{a}^{2}}\sum_{j=1}^{m_{a}}
\operatorname{ var}_{\theta
_{0}} \biggl[ \log \biggl\{\frac{p_{\theta}(y_{a,j})}{p_{\theta_{0}}(y_{a,j})} \biggr
\} \biggr]\longrightarrow 0, \qquad 1\leq a\leq b,
\]
then $\mathrm{ P}_{\theta_{0}}(\hat{\theta}=\theta_{0})\rightarrow1$, as
$N\rightarrow\infty$, where $\hat{\theta}$ is the MLE of $\theta$.
\end{longlist}
\end{theorem}
\subsection{Euclidean parameter space}\label{sec3.2}
We now consider the case that $\Theta$ is a convex subspace of $R^{d}$,
the $d$-dimensional Euclidean space, in the sense that $\theta_{1},
\theta_{2}\in\Theta$ implies $(1-t)\theta_{1}+t\theta_{2}\in\Theta$ for
every $t\in(0,1)$. In this case, we need to strengthen assumptions
(A2), (A3) to the following:
\begin{longlist}[(B3)]
\item[(B2)] $\theta_{0}\in\Theta^\mathrm{ o}$, the interior of $\Theta$,
and there is $0<M<\infty$ [same as in (B3) below] such that, for
every $\varepsilon>0$, we have
\begin{eqnarray}\label{eq8}
\limsup_{N\rightarrow\infty}\sup_{\theta\in\Theta,\varepsilon\leq
|\theta-\theta_{0}|\leq M}\min_{1\leq a\leq b}\frac{1}{m_{a}}
\sum_{j=1}^{m_{a}}\mathrm{ E}_{\theta_{0}}
\biggl[\log \biggl\{\frac
{p_{\theta}(y_{a,j})}{p_{\theta_{0}}(y_{a,j})} \biggr\} \biggr]<0.
\end{eqnarray}
\item[(B3)] There are positive constant sequences $s_{N}, s_{a,N},
1\leq
a\leq b$ such that
\begin{eqnarray}\label{eq9}
\sup_{\theta\in\Theta,|\theta-\theta_{0}|\leq M}\max_{1\leq c\leq d} \biggl\llvert \frac{\partial}{\partial\theta_{c}}
\log\bigl\{p_{\theta}(y)\bigr\}\biggr\rrvert = O_\mathrm{
P}(s_{N})
\end{eqnarray}
with $\log(s_{N})/\min_{1\leq a\leq b}m_{a}\rightarrow0$,
where $p_{\theta}(y)$ is the p.d.f. of $y=(y_{i})_{1\leq i\leq N}$ given
that $\theta=(\theta_{c})_{1\leq c\leq d}$ is the true parameter
vector,
\begin{eqnarray}\label{eq10}
\sup_{\theta\in\Theta,|\theta-\theta_{0}|\leq M}\frac
{1}{m_{a}}\sum_{j=1}^{m_{a}}
\max_{1\leq c\leq d}\biggl\llvert \frac
{\partial}{\partial\theta_{c}}\log\bigl
\{p_{\theta}(y_{a,j})\bigr\}\biggr\rrvert = o_\mathrm{
P}(s_{a,N})
\end{eqnarray}
with $\log(s_{a,N})/m_{a}\rightarrow0$; and (for the same $s_{a,N}$)
\begin{eqnarray}\label{eq11}\qquad
\sup_{\theta\in\Theta,|\theta-\theta_{0}|\leq M}\frac{s_{a,N}^{d
-1}}{m_{a}^{2}}\sum_{j=1}^{m_{a}}
\operatorname{ var}_{\theta_{0}} \biggl[\log \biggl\{\frac{p_{\theta}(y_{a,j})}{p_{\theta_{0}}(y_{a,j})} \biggr\}
\biggr] \longrightarrow0,\qquad 1\leq a\leq b.
\end{eqnarray}
\end{longlist}
\begin{theorem}\label{th4}
Under assumptions \textup{(A1), (B2)} and
\textup{(B3)},
there is, with probability $\rightarrow1$, a root to the likelihood
equation, $\hat{\theta}$, such that $\hat{\theta}\stackrel{\mathrm{
P}}{\longrightarrow}\theta_{0}$, as $N\rightarrow\infty$.
\end{theorem}
\begin{pf}
Aside from the use of the subset argument, the lines of the
proof are similar to, for example, the standard arguments of Lehmann and
Casella [(\citeyear{LehCas98}), the beginning part of the proof of Theorem 5.1], although
some details are more similar to \citet{Wol49}. We outline the key steps
below and defer the details to the supplementary material [\citet{supp}]. Once again, the
innovative part is the consideration of the conditional probability given
the subset data and, most importantly, the subset inequality (\ref{eq15}) in the
sequel.
For any $\varepsilon>0$, assume, without loss of generality,
that $\{\theta\dvtx |\theta-\theta_{0}|\leq\varepsilon\}\subset\Theta$ and
$C_{\varepsilon}=\{\theta\in R^{d}\dvtx|\theta_{c}-\theta_{0c}|\leq
\varepsilon, 1\leq c\leq d\}\subset\{\theta\in\Theta\dvtx |\theta
-\theta_{0}|\leq M\}$. Essentially, all we need to show is that, as
$N\rightarrow\infty$,
\begin{equation}\label{eq12}
P(\varepsilon)\equiv\mathrm{ P}_{\theta_{0}} \Bigl\{p_{\theta_{0}}(y)\leq
\sup_{\theta\in\partial C_{\varepsilon}}p_{\theta}(y) \Bigr\} \longrightarrow 0,
\end{equation}
where $\partial C_{\varepsilon}$ is the boundary of $C_{\varepsilon}$, which
consists of $\theta\in C_{\varepsilon}$ such that $|\theta_{c}-\theta_{0c}|
=\varepsilon$ for some $1\leq c\leq d$. Define
\[
S_{N,a}(\theta)=\frac{1}{m_{a}}\sum_{j=1}^{m_{a}}
\mathrm{ E}_{\theta_{0}} \biggl[\log \biggl\{\frac{p_{\theta}(y_{a,
j})}{p_{\theta_{0}}(y_{a,j})} \biggr\}
\biggr], \qquad 1\leq a\leq b,
\]
and $I_{N}(\theta)=\min\{1\leq a\leq b\dvtx S_{N,a}(\theta)=\min_{1\leq
a'\leq
b}S_{N,a'}(\theta)\}$. Then, $\partial C_{\varepsilon}=\break\bigcup_{a=1}^{b}\partial
C_{\varepsilon}\cap\Theta_{N,a}$, where $\Theta_{N,a}=\{\theta\in\Theta:
I_{N}(\theta)=a\}$. Thus, we have
\begin{equation}\label{eq13}
P(\varepsilon)\leq\sum_{a=1}^{b}
\mathrm{ P}_{\theta_{0}} \Bigl\{ p_{\theta_{0}}(y) \leq\sup_{\theta\in\partial C_{\varepsilon}\cap\Theta_{N,a}}p_{\theta}(y)
\Bigr\}.
\end{equation}
For a fixed $1\leq a\leq b$, let $\delta$ be a small, positive number
to be determined later, and $K=[e^{\delta m_{a}}]+1$. For any $l=(l_{1},
\ldots,l_{d})$, where $0\leq l_{c}\leq K-1, 1\leq c\leq d$, select a point
$\theta_{l}$ from the subset $\{\theta\dvtx \theta_{0c}-\varepsilon
+2\varepsilon
l_{c}/K\leq\theta_{c}\leq\theta_{0c}-\varepsilon+2\varepsilon
(l_{c}+1)/K, 1\leq
c\leq d\}\cap\partial C_{\varepsilon}\cap\Theta_{N,a}$, if the latter
is not
empty; otherwise, do not select. Let $D$ denote the collection of all such
points. Also let $B$ denote the left side of~(\ref{eq9}). It can be shown that
\begin{eqnarray}\label{eq14}
&&\mathrm{ P}_{\theta_{0}} \Bigl\{p_{\theta_{0}}(y)\leq\sup_{\theta\in
\partial
C_{\varepsilon}\cap\Theta_{N,a}}p_{\theta}(y)
\Bigr\}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad\leq\mathrm{ P}_{\theta_{0}} \biggl\{\exp \biggl(\frac{2d\varepsilon
B}{K} \biggr)
>2 \biggr\}+\mathrm{ P}_{\theta_{0}} \Bigl\{p_{\theta_{0}}(y)\leq2
\max_{\theta
\in D}p_{\theta}(y) \Bigr\}.
\end{eqnarray}
We now apply the subset argument. Let $y_{[1]}$ denote the combined vector
of $y_{a,j}, 1\leq j\leq m_{a}$, and $y_{[2]}$ the vector of the rest of
$y_{1},\ldots,y_{N}$. Then,\vadjust{\goodbreak} similar to the argument of (\ref{eq2}), we have, for
any $\theta\in D$,
\begin{equation}\label{eq15}
\mathrm{ P}_{\theta_{0}}\bigl\{p_{\theta_{0}}(y)\leq2p_{\theta}(y)|y_{[1]}
\bigr\} \leq 2\frac{p_{\theta}(y_{[1]})}{p_{\theta_{0}}(y_{[1]})}.
\end{equation}
Using this result, it can be shown that $\mathrm{ P}_{\theta_{0}}\{
p_{\theta_{0}}(y)\leq2\max_{\theta\in D}p_{\theta}(y)|y_{[1]}\}
=o_\mathrm{
P}(1)$. From here, (\ref{eq12}) can be established.
\end{pf}
Again, Theorem~\ref{th4} is a Cram\'{e}r-consistency result. On the other hand,
Wald-consistency can be established under additional assumptions that
control the behavior of the likelihood function in a neighborhood of
infinity. For example, the following result may be viewed as an
extension of Theorem~\ref{th2}. The proof is given in the supplementary material [\citet{supp}].
Once again, the subset argument plays a critical role in the proof.
For simplicity, we focus on the case of discrete responses, which is
typical for GLMMs. In addition, we assume the following. For any
$0\leq v<w$, define $S_{d}[v,w)=\{x\in R^{d}\dvtx v\leq|x|<w\}$ and write,
in short, $S_{d}(k)=S_{d}[k,k+1)$ for $k=1,2,\ldots.$
\begin{longlist}[(C1)]
\item[(C1)] There are sequences of constants, $b_{k}, c_{N}\geq1$, and
random variables, $\zeta_{N}$, where $c_{N}, \zeta_{N}$ do not depend on
$k$, such that $\zeta_{N}=O_\mathrm{ P}(1)$ and
\begin{eqnarray*}
\sup_{\theta\in\Theta\cap S_{d}[k-1,k+2)}\max_{1\leq
c\leq d}\biggl\llvert \frac{\partial}{\partial\theta_{c}}\log
\bigl\{p_{\theta}(y)\bigr\} \biggr\rrvert \leq b_{k}c_{N}
\zeta_{N},\qquad k=1,2,\ldots
\end{eqnarray*}
\item[(C2)] There is a subset of independent data vectors, $y_{(j)},
1\leq j\leq m_{N}$ [not necessarily among those in (A1)] so that:
(i) $\mathrm{ E}_{\theta_{0}}|\log\{p_{j,\theta_{0}}(y_{(j)})\}|$ is bounded,
$p_{j,\theta}(\cdot)$ being the p.m.f. of $y_{(j)}$ under $\theta$; (ii)
there is a sequence of positive constants, $\gamma_{k}$, with
$\lim_{k\rightarrow\infty}\gamma_{k}=\infty$, and a subset ${\cal
T}_{N}$ of possible values of $y_{(j)}$, such that for every $k\geq1$
and $\theta\in\Theta\cap S_{d}(k)$, there is $t\in{\cal T}_{N}$ satisfying
$\max_{1\leq j\leq m_{N}}\log\{p_{j,\theta}(t)\}\leq-\gamma_{k}$;
(iii) $\inf_{t\in{\cal T}_{N}}m_{N}^{-1}\sum_{j=1}^{m_{N}}p_{j,\theta_{0}}
(t)\geq\rho$ for some constant $\rho>0$; and (iv) $|{\cal T}_{N}|/m_{N}=
o(1)$, and $c_{N}^{d}\sum_{k=K}^{\infty}k^{d_{1}}b_{k}^{d}e^{-\delta
m_{N}\gamma_{k}}=o(1)$ for some $K\geq1$ and $\delta<\rho$, where $d_{1}=
d1_{(d>1)}$.
\end{longlist}
It is easy to verify that the new assumptions (C1), (C2) are
satisfied in the case of Theorem~\ref{th2} for the open problem (see
the supplementary material [\citet{supp}]). Another example is considered in the next section.
\begin{theorem}\label{th5}
Suppose that \textup{(A1)} holds; \textup{(B2), (B3)} hold for
any fixed $M>0$ (instead of some $M>0$), and with the $s_{a,N}^{d-1}$ in
(\ref{eq11}) replaced by $s_{a,N}^{d}$. In addition, suppose that \textup{(C1)},
\textup{(C2)} hold. Then, the MLE of $\theta_{0}$ is consistent.
\end{theorem}
\section{Example}\label{sec4}
Let us consider a special case of Example~\ref{ex1} with $x_{ijk}'\beta=\mu$,
but $\sigma^{2}$ and $\tau^{2}$ unknown. We change the notation slightly,
namely, $y_{i,j,k}$ instead of $y_{ijk}$. Suppose that\vadjust{\goodbreak} $S=S_{1}\cup S_{2}$
such that $c_{ij}=r, (i,j)\in S_{r}$, $r=1,2$ (as in the case of the
salamander data). We use two subsets to jointly identify all the
unknown parameters. The first subset is similar to that used in the
proofs of Theorems~\ref{th1} and~\ref{th2}, namely, $y_{i,i}=(y_{i,i,k})_{k=1,2}, (i,
i)\in S_{2}$. Let $m_{1}$ be the total number of such $(i,i)$'s, and
assume that $m_{1}\rightarrow\infty$, as $m,n\rightarrow\infty$. Then,
the subset satisfies (A1). Let $\theta=(\mu,\sigma^{2},\tau^{2})'$.
It can be shown that the sequence $y_{i,i}, (i,i)\in S_{2}$ is a sequence
of i.i.d. random vectors with the probability distribution, under
$\theta$,
given by
\begin{equation}\label{eq16}
p_{\theta}(y_{i,i})=\mathrm{ E} \biggl[\frac{\exp\{y_{i,i,\cdot}(\mu
+\xi)\}}{\{1+\exp(\mu+\xi)\}^{2}}
\biggr],
\end{equation}
where $\xi\sim N(0,\psi^{2})$, with $\psi^{2}=\sigma^{2}+\tau^{2}$, and
$y_{i,i,\cdot}=y_{i,i,1}+y_{i,i,2}$. By the strict concavity of the
logarithm, we have
\begin{equation}\label{eq17}
\mathrm{ E}_{\theta_{0}} \biggl[\log \biggl\{\frac{p_{\theta}(y_{i,
i})}{p_{\theta_{0}}(y_{i,i})} \biggr\}
\biggr]< 0
\end{equation}
unless $p_{\theta}(y_{i,i})/p_{\theta_{0}}(y_{i,i})$ is a constant a.s.
$\mathrm{ P}_{\theta_{0}}$, and that constant must be one because both $p_{\theta}$
and $p_{\theta_{0}}$ are probability distributions. It is easy to
show that the probability distribution of (\ref{eq16}) is completely determined
by the function $M(\vartheta)=[M_{r}(\vartheta)]_{r=1,2}$, where
$M_{r}(\vartheta)=\mathrm{ E}\{h_{\vartheta}^{r}(\zeta)\}$ with
$\vartheta=(\mu,
\psi)'$, $h_{\vartheta}(\zeta)=\exp(\mu+\psi\zeta)/\{1+\exp(\mu+\psi
\zeta)
\}$, and $\zeta\sim N(0,1)$. In other words, $p_{\theta}(y_{i,i})=
p_{\theta_{0}}(y_{i,i})$ for all values of $y_{i,i}$ if and only if
$M(\vartheta)=M(\vartheta_{0})$. \citet{Jia98} showed that the function
$M(\cdot)$ is injective [also see \citet{Jia07}, page~221]. Thus, (\ref{eq17}) holds
unless $\mu=\mu_{0}$ and $\psi^{2}=\psi_{0}^{2}$.
It remains to deal with a $\theta$ that satisfies $\mu=\mu_{0}$, $\psi^{2}
=\psi_{0}^{2}$, but $\theta\neq\theta_{0}$. For such a $\theta$, we use
the second subset, defined as $y_{i}=(y_{i,2i-1,1},y_{i,2i,1})'$ such
that $(i,2i-1)\in S$ and $(i,2i)\in S$. Let $m_{2}$ be the total number
of all such $i$'s, and assume that $m_{2}\rightarrow\infty$ as $m, n
\rightarrow\infty$. It is easy to see that (A1) is, again, satisfied
for the new subset. Note that any $\theta$ satisfying $\mu=\mu_{0}$ and
$\psi^{2}=\psi_{0}^{2}$ is completely determined by the parameter
$\gamma
=\sigma^{2}/\psi^{2}$. Furthermore, the new subset is a sequence of i.i.d.
random vectors with the probability distribution, under such a $\theta$,
given by
\begin{equation}\label{eq18}
p_{\gamma}(y_{i})=\mathrm{ E} \biggl[\frac{\exp\{y_{i,2i-1,1}(\mu_{0}+X)\}}{1+
\exp(\mu_{0}+X)}\cdot
\frac{\exp\{y_{i,2i,1}(\mu_{0}+Y)\}}{1+\exp(\mu_{0}+
Y)} \biggr],
\end{equation}
where $(X,Y)$ has the bivariate normal distribution with $\operatorname
{ var}(X)=
\operatorname{ var}(Y)=\psi_{0}^{2}$ and $\operatorname{
cor}(X,Y)=\gamma$. Similar to (\ref{eq17}),
we have
\begin{equation}\label{eq19}
\mathrm{ E}_{\gamma_{0}} \biggl[\log \biggl\{\frac
{p_{\gamma}(y_{i})}{p_{\gamma_{0}}(y_{i})} \biggr\}
\biggr]< 0
\end{equation}
unless $p_{\gamma}(y_{i})=p_{\gamma_{0}}(y_{i})$ for all values of $y_{i}$.
Consider (\ref{eq18}) with $y_{i}=(1,1)$ and let $\mathrm{ P}_{\gamma}$ denote the
probability distribution of $(X,Y)$ with the correlation coefficient
$\gamma$. By Fubini's theorem, it can be shown that
\begin{eqnarray}\label{eq20}\qquad
p_{\gamma}(1,1)=\int_{0}^{\infty}\int
_{0}^{\infty}P_{\gamma}\bigl\{X\geq
\operatorname{ logit}(s)-\mu_{0},Y\geq\operatorname{ logit}(t)-
\mu_{0}\bigr\}\,ds\,dt.
\end{eqnarray}
Hereafter, we defer the detailed derivations to the supplementary material [\citet{supp}].
By Slepian's inequality [e.g., \citet{Jia10}, pages~157--158], the integrand on
the right side of (\ref{eq20}) is strictly increasing with $\gamma$, hence so is
the integral. Thus, if $\gamma\neq\gamma_{0}$, at least we have
$p_{\gamma}(1,1)\neq p_{\gamma_{0}}(1,1)$, hence (\ref{eq19}) holds.
In summary, for any $\theta\in\Theta, \theta\neq\theta_{0}$, we must
have either (\ref{eq17}) or (\ref{eq19}) hold. Therefore, by continuity, assumption
(B2) holds, provided that true variances, $\sigma_{0}^{2}, \tau_{0}^{2}$
are positive. Note that, in the current case, the expectations involved
in (B2) do not depend on either $j$ or $N$, the total sample size.
To verify (B3), it can be shown that $|(\partial/\partial\mu)\log
\{
p_{\theta}(y)\}|\leq N$. Furthermore, we have $|(\partial/\partial
\sigma^{2})\log\{p_{\theta}(y)\}|\vee|(\partial/\partial\tau^{2})\log\{
p_{\theta}(y)\}|\leq(A+C+1)N$ in a neighborhood of $\theta_{0}$, ${\cal
N}(\theta_{0})$. Therefore, (\ref{eq9}) holds with $s_{N}=N$.
As for (\ref{eq10}), it is easy to show that the partial derivatives involved
are uniformly bounded for $\theta\in{\cal N}(\theta_{0})$. Thus, (\ref{eq10})
holds for any $s_{a,N}$ such that $s_{a,N}\rightarrow\infty$, $a=1,2$.
Furthermore, the left side of (\ref{eq11}) is bounded by $c_{a}s_{a,N}^{2}/m_{a}$
for some constant $c_{a}>0$, $a=1,2$ (note that $d=3$ in this case). Thus,
for example, we may choose $s_{a,N}=\sqrt{m_{a}/\{1+\log(m_{a})\}}$, $a=1,
2$, to ensure that $\log(s_{a,N})/m_{a}\rightarrow0$, $a=1,2$, and (\ref{eq11})
holds.
In conclusion, all the assumptions of Theorem~\ref{th4} hold provided that
$\sigma_{0}^{2}>0$, $\tau_{0}^{2}>0$, and $(m_{1}\wedge m_{2})^{-1}\log(
N)\rightarrow0$.
Similarly, the conditions of Theorem~\ref{th5} can be verified. Essentially,
what is new is to check assumptions (C1) and (C2). See
the supplementary material [\citet{supp}].
\section{Discussion}\label{sec5}
\subsection{Remark on subset argument}\label{sec5.1}
In proving a number of results, we have demonstrated the usefulness
of the subset argument. In principle, the method allows one to
argue consistency of the MLE in any situation of dependent data, not
necessarily under a GLMM, provided that one can identify some suitable
subset(s) of the data whose asymptotic properties are easier to
handle, such as collections of independent random vectors. The
connection between the full data and subset data is made by the
subset inequality, which, in a more general form, is a consequence
of the martingale property of the likelihood-ratio [e.g., \citet{Jia10},
pages~244--246]: suppose that $Y_{1}$ is a subvector of a random vector
$Y$. Let $p_{\theta}(\cdot)$ and $p_{1,\theta}(\cdot)$ denote the p.d.f.'s
of $Y$ and $Y_{1}$, respectively, with respect to a $\sigma$-finite
measure $\nu$, under the parameter vector $\theta$. For simplicity,
suppose that $p_{\theta_{0}}, p_{1,\theta_{0}}$ are positive a.e. $\nu$,
and $\lambda(\cdot)$ is a positive, measurable function. Then, for any
$\theta$, we have
\begin{eqnarray*}
\mathrm{ P}_{\theta_{0}}\bigl\{p_{\theta_{0}}(Y)\leq\lambda(Y_{1})
p_{\theta}(Y)|Y_{1}\bigr\}\leq\lambda(Y_{1})
\frac{p_{1,\theta}(Y_{1})}{p_{1,
\theta_{0}}(Y_{1})}\qquad\mbox{a.e. } \nu,
\end{eqnarray*}
where $\mathrm{ P}_{\theta_{0}}$ denotes the probability distribution
corresponding to $p_{\theta_{0}}$.
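The inequality is straightforward to check by simulation. The sketch
below uses a deliberately simple setting, $Y=(Y_{1},Y_{2})$ i.i.d.
$N(\theta,1)$ with $\lambda(\cdot)\equiv1$ (all numerical values
illustrative), and compares the conditional probability on the left
with the likelihood-ratio bound on the right:
\begin{verbatim}
# Sketch: verify the subset inequality for Y = (Y1, Y2) iid
# N(theta, 1), theta0 = 0, theta = 1, lambda(.) = 1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
theta0, theta = 0.0, 1.0
y1 = 0.4                                 # condition on a subset value
y2 = rng.normal(theta0, 1.0, 1_000_000)  # Y2 | Y1 under theta0
lhs = np.mean(norm.logpdf(y1, theta0) + norm.logpdf(y2, theta0)
              <= norm.logpdf(y1, theta) + norm.logpdf(y2, theta))
rhs = norm.pdf(y1, theta) / norm.pdf(y1, theta0)
print(lhs, rhs, lhs <= rhs)              # lhs ~ 0.27 <= rhs ~ 0.90
\end{verbatim}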
\subsection{Quantifying the information loss}\label{sec5.2}
On the other hand, the subset argument merely provides a method of proof
for the consistency of the full-data MLE---it by no means suggests the
subset-data MLE as a replacement for the full-data MLE. In fact, there is
an information loss if such a replacement takes place. To quantify the
information loss, assume the regularity conditions for exchanging the
order of differentiation and integration. Then, the Fisher information
matrix based on the full data can be expressed as
\begin{eqnarray*}
I_\mathrm{ f}(\theta)&=&-\mathrm{ E}_{\theta} \biggl\{
\frac{\partial^{2}}{\partial
\theta\,\partial\theta'}\log p_{\theta}(y) \biggr\}
\\
&=&\mathrm{ E}_{\theta} \biggl[ \biggl\{\frac{\partial}{\partial\theta}\log
p_{\theta}(y) \biggr\} \biggl\{\frac{\partial}{\partial\theta}\log p_{\theta}(y)
\biggr\}' \biggr]-\mathrm{ E}_{\theta} \biggl\{
\frac
{1}{p_{\theta}(y)} \frac{\partial^{2}}{\partial\theta\,\partial\theta'}p_{\theta}(y) \biggr\}
\\
&=&I_{\mathrm{ f},1}(\theta)-I_{\mathrm{ f},2}(\theta).
\end{eqnarray*}
Similarly, the information matrix based on the subset data can be expressed
as $I_\mathrm{ s}(\theta)=I_{\mathrm{ s},1}(\theta)-I_{\mathrm{
s},2}(\theta)$, where
$I_{\mathrm{ s},j}(\theta)$ is $I_{\mathrm{ f},j}(\theta)$ with $y$
replaced by
$y_{[1]}$, $j=1,2$ [$p_{\theta}(y_{[1]})$ denotes the p.d.f. (or
p.m.f.) of
$y_{[1]}$]. By conditioning on $y_{[1]}$, it can be shown that
$I_{\mathrm{
f},2}(\theta)=I_{\mathrm{ s},2}(\theta)$, while $I_{\mathrm{
f},1}(\theta)\geq
I_{\mathrm{ s},1}(\theta)$. It follows that
\begin{equation}\label{eq21}
I_\mathrm{ f}(\theta)\geq I_\mathrm{ s}(\theta)
\end{equation}
for all $\theta$. Here the inequality means that the difference between
the left side and the right side is a nonnegative definite matrix. Inequality (\ref{eq21}) suggests
that the information contained in the full data is no less than that
contained in the subset data, which, of course, is what one would expect.
Furthermore, the information loss is given by
\begin{equation}\label{eq22}
I_\mathrm{ f}(\theta)-I_\mathrm{ s}(\theta)=\mathrm{
E}_{\theta} \biggl[\operatorname{ Var}_{\theta} \biggl\{
\frac{\partial}{\partial\theta}\log p_{\theta}(y) \Big\vert y_{[1]} \biggr\}
\biggr],
\end{equation}
where $\operatorname{ Var}_{\theta}(\cdot|y_{[1]})$ denotes the conditional
covariance matrix given $y_{[1]}$ under $\theta$. The derivations of (\ref{eq21})
and (\ref{eq22}) are deferred to the supplementary material [\citet{supp}]. It is seen from (\ref{eq22})
that the information loss is determined by how much (additional) variation
there is in the score function, $(\partial/\partial\theta)\log p_{\theta}(
y)$, given the subset data $y_{[1]}$. In particular, if $y_{[1]}=y$, then
the score function is a constant vector given $y_{[1]}$ (and $\theta$);
hence $\operatorname{ Var}_{\theta}\{(\partial/\partial\theta)\log
p_{\theta}(y)|
y_{[1]}\}=0$, thus, there is no information loss. In general, of course,
the subset data $y_{[1]}$ is not chosen as $y$; therefore, there will be
some loss of information.
Nevertheless, the information contained in the subset data is usually
sufficient for identifying at least some of the parameters. Note that
consistency is a relatively weak asymptotic property in the sense that
various estimators, including those based on the subset data and, for
example, the method of moments estimator of \citet{Jia98}, are consistent,
even though they may not be asymptotically efficient. Essentially, for
the consistency property to hold, one needs that, in spite of the
potential information loss, the remaining information that the
estimator is able to utilize grows with the sample size. For example,
in the open problem (Sections~\ref{sec1} and~\ref{sec2}), the information contained in
$y_{ii}$ grows at the rate of $m\wedge n$, which is sufficient for
identifying $\mu$; in the example of Section~\ref{sec4}, the information contained
in $y_{i,i}$ grows in the order of $m_{1}$, which is sufficient for
identifying $\mu$ and $\psi^{2}=\sigma^{2}+\tau^{2}$, while the information
contained in $y_{i}$ grows at the rate of $m_{2}$, which is sufficient for
identifying $\gamma=\sigma^{2}/\psi^{2}$. The identification of the
``right'' subset in a given problem is usually suggested by the nature of
the parametrization. As mentioned (see the third paragraph of Section~\ref{sec3}),
a subset $y_{[1]}$ of independent data can always be found under the ANOVA
GLMM (e.g., starting with the first observation, $y_{1}$, one finds
the next observation such that it involves different random effects from
those related to $y_{1}$, and so on). If the $y_{[1]}$ is such that
$\liminf_{N\rightarrow\infty}\lambda_{\min}\{I_\mathrm{ s}(\theta)\}
=\infty$,
where $I_\mathrm{ s}(\theta)$ is as in (\ref{eq21}) and $\lambda_{\min}$
denotes the
smallest eigenvalue, the subset $y_{[1]}$ is sufficient for identifying all
the components of $\theta$; otherwise, more than one subset is needed in
order to identify all the parameters, as is shown in Section~\ref{sec4}.
\subsection{Note on computation of MLE}\label{sec5.3}
The subset argument offers a powerful tool for establishing consistency
of the MLE in GLMM with crossed random effects. Note that the idea has
not followed the traditional path of attempting to develop a
(computational) procedure to approximate the MLE. In fact, this might
explain why the computational advances over the past two decades [see,
e.g., \citet{Jia07}, Section 4.1 for an overview] had not led
to a major theoretical breakthrough for the MLE in GLMM in terms of
asymptotic properties. Note that the MLE based on the subset data is a
consistent estimator of the true parameter, and in that sense it is an
approximation to the MLE based on the full data (two consistent estimators
of the same parameter approximate each other). However, there is an
information loss, as discussed in the previous subsection [see (\ref{eq22})], so
one definitely wants to do better computationally.
One computational method that has been developed for computing the MLE
in GLMMs, including those with crossed random effects, is the Monte Carlo
EM algorithm [e.g., McCulloch (\citeyear{McC94,McC97}), \citet{BooHob99}]. Here,
however, we would like to discuss another, more recent, computational
advance, known as \textit{data cloning} [DC; \citet{LelDenLut07}, \citet{LelNadSch10}].
The DC uses the Bayesian computational approach for frequentist purposes.
Let $\pi$ denote the prior density function of $\theta$. Then, one has
the posterior,
\begin{equation}\label{eq23}
\pi(\theta|y)=\frac{p_{\theta}(y)\pi(\theta)}{p(y)},
\end{equation}
where $p(y)$ is the integral of the numerator with respect to $\theta$,
which does not depend on $\theta$. There are computational tools
using the Markov chain Monte Carlo for posterior simulation that
generate random variables from the posterior without having to compute
the numerator or denominator of (\ref{eq23}) [e.g., Gilks, Richardson and
Spiegelhalter (\citeyear{GiRiSp96});
\citet{Spietal04}]. Thus, we can assume that one can
generate random variables from the posterior. If the observations
$y$ were repeated independently from $K$ different individuals such
that all of these individuals result in exactly the same data, $y$,
denoted by $y^{(K)}=(y,\ldots,y)$, then the posterior based on $y^{(K)}$
is given by
\begin{equation}\label{eq24}
\pi_{K}\bigl\{\theta|y^{(K)}\bigr\}=\frac{\{p_{\theta}(y)\}^{K}\pi(\theta)}{\int
\{
p_{\theta}(y)\}^{K}\pi(\theta)\,d\theta}.
\end{equation}
\citet{LelDenLut07}, \citet{LelNadSch10} showed that, as $K$ increases, the
right side of (\ref{eq24}) converges to a multivariate normal distribution whose
mean vector is equal to the MLE, $\hat{\theta}$, and whose covariance
matrix is approximately equal to $K^{-1}I_\mathrm{ f}^{-1}(\hat{\theta})$.
Therefore, for large $K$, one can approximate the MLE by the sample mean
vector of, say, $\theta^{(1)},\ldots,\theta^{(B)}$ generated from the
posterior distribution (\ref{eq24}). Denote the sample mean by $\bar
{\theta}^{(\cdot)}$, and call it the DC MLE. Furthermore, $I_\mathrm{ f}^{-1}
(\hat{\theta})$ [see (\ref{eq21}), (\ref{eq22})] can be approximated by $K$ times the
sample covariance matrix of $\theta^{(1)},\ldots,\theta^{(B)}$. \citet{Tor12}
successfully applied the DC method to obtain the MLE for the
salamander-mating data.
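To make the mechanics of DC concrete, the following sketch applies it
to a toy model, $y_{i}$ i.i.d. $N(\theta,1)$, for which the exact MLE
is the sample mean; the posterior (\ref{eq24}) is sampled by a
hand-rolled random-walk Metropolis algorithm. The prior, proposal
scale, chain length and values of $K$ are all illustrative choices,
not a recommended implementation.
\begin{verbatim}
# Sketch: data cloning on y_i ~ N(theta, 1); the DC mean approaches
# the exact MLE (the sample mean) as the number of clones K grows.
import numpy as np

rng = np.random.default_rng(11)
y = rng.normal(1.5, 1.0, 50)

def log_target(theta, K):
    loglik = -0.5 * np.sum((y - theta) ** 2)   # up to a constant
    logprior = -0.5 * theta ** 2 / 100.0       # N(0, 100) prior
    return K * loglik + logprior

for K in [1, 10, 100]:
    theta, draws = 0.0, []
    for _ in range(20_000):
        prop = theta + rng.normal(0.0, 0.5 / np.sqrt(K))
        if np.log(rng.uniform()) < log_target(prop, K) - log_target(theta, K):
            theta = prop
        draws.append(theta)
    print(K, np.mean(draws[5_000:]), y.mean())  # DC mean -> MLE
\end{verbatim}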
Note that the DC MLE is an approximate, rather than exact, MLE, in the
sense that, as $K\rightarrow\infty$, the difference between $\bar
{\theta}^{(\cdot)}$ and the exact MLE vanishes. Because we have
established consistency of the exact MLE, it follows that
the DC MLE is a consistent estimator as long as the number
$K$ increases with the sample size. More precisely, it is shown
in the supplementary material [\citet{supp}] that, for every $\varepsilon,\delta>0$, there is
$N_{\varepsilon,\delta}$ such that for every $n\geq N_{\varepsilon
,\delta}$ and
$B\geq1$, there is $K(n,B)$ such that $\mathrm{ P}\{|\bar{\theta
}^{(\cdot)}
-\theta_{0}|\geq\varepsilon\}<\delta$, if $K\geq K(n,B)$, where $\theta_{0}$
is the true parameter vector. Note that, as far as consistency is
concerned, one does not need that $B$ goes to infinity. This makes sense
because, as $K\rightarrow\infty$, the posterior (\ref{eq24}) is becoming degenerate
[the asymptotic covariance matrix is $K^{-1}I_\mathrm{ f}^{-1}(\hat
{\theta})$];
thus, one does not need a large $B$ to ``average out'' the variation in
$\bar{\theta}^{(\cdot)}$. Thus, from an asymptotic point of view, the
result of the current paper provides a justification for the DC method.
More importantly, because $B,K$ are up to one's choice, one can make sure
that they are large enough so that there is virtually no information loss,
a concern raised earlier. In this regard, a reasonably large $B$ would
reduce the sampling variation and therefore improve the DC approximation,
and make the computation more efficient. See \citet{LelNadSch10} for
discussion on how to choose $B$ and $K$ from practical points of view.\vadjust{\goodbreak}
As for the prior $\pi$, \citet{LelNadSch10} only suggests that it
be chosen according to computational convenience and be proper (to avoid
improper posterior). Following the subset idea, an obvious choice for
the prior would be the multivariate normal distribution with mean vector
$\hat{\theta}_\mathrm{ s}$, the subset-data MLE, and covariance matrix
$I_\mathrm{ s}^{-1}(\hat{\theta}_\mathrm{ s})$ [defined above (\ref{eq21})].
Note that
$I_\mathrm{ s}(\theta)$ is much easier to evaluate than $I_\mathrm{
f}(\theta)$.
This would make the procedure more similar to the empirical Bayes than
the hierarchical one. Nevertheless, the DC only uses the Bayesian
computational tool, as mentioned.
\subsection{Regarding the limiting process}\label{sec5.4}
In some applications of GLMM, the estimation of the random effects
is of interest. There have also been developments in semiparametric
GLM and nonparametric ANOVA. In those cases, the random effects are
treated the same way as the fixed effects. As a result, the proof of the
consistency results in those cases usually impose constraints on the
ratio of the number of effects and number of observations falling in each
cluster [e.g., \citet{Che95,Jia99,WuLia04}, and Wang, Tsai and Qu (\citeyear{WanTsaQu12})].
A major difference exists, however, between the case of
clustered data (e.g., Example~\ref{ex2}) and that with crossed random effects
(e.g., Example~\ref{ex1}) in that, in the latter case, the data cannot be
divided into independent groups (with the number of groups increasing
with the sample size). Furthermore, the necessary constraints are very
different depending on the target of estimation. Consider, for example,
a very simple case of linear mixed model, $y_{ij}=\mu+u_{i}+v_{j}+e_{ij}$,
$i=1,\ldots,m, j=1,\ldots,n$, where the $u_{i}$'s and $v_{j}$'s are random
effects, and $e_{ij}$'s are errors. Assume, for simplicity, that all the
random effects and errors are i.i.d. $N(0,1)$, so that $\mu$ is the only
unknown parameter. Suppose that $n\rightarrow\infty$, while $m$ is fixed,
say, $m=1$. In this case, $\bar{y}_{1\cdot}=n^{-1}\sum_{j=1}^{n}y_{1j}=
\mu+u_{1}+\bar{v}_{\cdot}+\bar{e}_{1\cdot}$ is a consistent estimator of
the cluster mean, $\mu_{1}=\mu+u_{1}$. On the other hand, the MLE of
$\mu$, which is also $\bar{y}_{1\cdot}$, is inconsistent (because it
converges in probability to $\mu+u_{1}$, which is not equal to $\mu$
with probability one). Note that here the ratio of the number of effects
and number of observations in the cluster is $2/n$. Apparently, this is
sufficient for consistently estimating the mixed effect $\mu+u_{1}$, but
not the fixed effect $\mu$. One might suspect that the case $m=1$ is
somewhat extreme, as $\mu$ and $u_{1}$ are ``inseparable''; but it does
not matter. In fact, for any $m\geq1$, as long as it is fixed, the MLE
of $\mu$ is $\bar{y}_{\cdot\cdot}=(mn)^{-1}\sum_{i=1}^{m}\sum_{j=1}^{n}
y_{ij}=\mu+\bar{u}_{\cdot}+\bar{v}_{\cdot}+\bar{e}_{\cdot\cdot}$, which
converges in probability to $\mu+\bar{u}_{\cdot}$ as $n\rightarrow\infty$,
and $\mu+\bar{u}_{\cdot}\neq\mu$ with probability one. Thus, the only way
that the MLE of $\mu$ can be consistent is to have both $m$ and $n$ go to
$\infty$.
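The phenomenon is easy to see in simulation; the sketch below fixes a
realization of the $u_{i}$'s with $m=3$ and lets $n$ grow (all values
illustrative):
\begin{verbatim}
# Sketch: for fixed m, the MLE ybar_.. of mu under
# y_ij = mu + u_i + v_j + e_ij converges to mu + ubar_. , not to mu.
import numpy as np

rng = np.random.default_rng(5)
mu, m = 0.0, 3
u = rng.normal(size=m)              # one fixed realization of u_i
for n in [100, 10_000, 1_000_000]:
    v = rng.normal(size=n)
    e = rng.normal(size=(m, n))
    ybar = (mu + u[:, None] + v[None, :] + e).mean()
    print(n, ybar, mu + u.mean())   # ybar -> mu + ubar_.
\end{verbatim}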
The example also helps to explain why it is necessary to consider the
limiting process $m\wedge n\rightarrow\infty$, instead of something else,
in the open problem. The result of Theorem~\ref{th1} shows that $m\wedge
n\rightarrow\infty$ is also sufficient for the consistency of the MLE.
In fact, from the proof of Theorem~\ref{th1} it follows that, for large $m, n$,
we have with probability tending\vadjust{\goodbreak} to one that the conditional probability
that $p_{\mu}(y)\leq p_{\mu+\varepsilon}(y)$ given $y_{[1]}$ is bounded by
$\gamma^{m\wedge n}$ for some constant $0<\gamma<1$. The corresponding
upper bound under Theorem~\ref{th3} is $e^{-\lambda m_{a}}$ for some
constant $\lambda>0$, where $m_{a}$ is the number of independent
vectors in the subset $y_{[1]}$, and a similar result holds under
Theorem~\ref{th4} with the upper bound being $\exp[-\lambda m_{a}\{1+o(1)\}]$.
The assumption of Theorem~\ref{th3}, namely, (A1), makes sure that $m_{*}
=\min_{1\leq a\leq b}m_{a}\rightarrow\infty$ as the sample size
increases; the assumptions of Theorem~\ref{th4}, namely, (A1) and~(B3),
make sure that, in addition, the $o(1)$ in the above vanishes as
$m_{*}\rightarrow\infty$.
Although estimation of the random effects is not an objective of this
paper, in some cases this is of interest. For example, one may consider
estimating the conditional mean of $y_{ij}$ given $u_{i}$ in the open
problem (which may correspond to the conditional probability of successful
mating with the $i$th female in the salamander problem). As mentioned, the
data are not clustered in this case; in other words, all the data are in
the same cluster, so the ratio of the number of effects over the number of
observations is $(1+m+n)/mn=m^{-1}+n^{-1}+(mn)^{-1}$, which goes to zero
as $m\wedge n\rightarrow\infty$. It is easy to show that $\bar
{y}_{i\cdot}
=n^{-1}\sum_{j=1}^{n}y_{ij}$ is a consistent estimator of $\mathrm{
E}_{\mu}(
y_{ij}|u_{i})=\mathrm{ E}\{h(\mu+u_{i}+\eta)\}$, where $h(x)=e^{x}/(1+e^{x})$
and the (unconditional) expectation is with respect to $\eta\sim N(0,1)$,
$1\leq i\leq m$. Similarly, $\bar{y}_{\cdot j}=m^{-1}\sum_{i=1}^{m}y_{ij}$
is a consistent estimator of $\mathrm{ E}_{\mu}(y_{ij}|v_{j})=\mathrm{
E}\{h(\mu
+\xi+v_{j})\}$, where the (unconditional) expectation is with respect to
$\xi\sim N(0,1)$, $1\leq j\leq n$.
\section*{Acknowledgements}
The author wishes to thank all the researchers who have had the opportunity,
and interest, to discuss with the author about the open problem over the
past 15 years, especially those who have spent time thinking about a
solution.
The author is
grateful for the constructive comments from an Associate Editor and two
referees that have led to major improvements of the results and presentation.
\begin{supplement}[id=suppA]
\stitle{Supplementary material}
\slink[doi]{10.1214/13-AOS1084SUPP}
\sdatatype{.pdf}
\sfilename{aos1084\_supp.pdf}
\sdescription{The supplementary material is available online at
\href{http://anson.ucdavis.edu/\textasciitilde
jiang/glmmmle.suppl.pdf}{http://anson.ucdavis.edu/}\break
\href{http://anson.ucdavis.edu/\textasciitilde jiang/glmmmle.suppl.pdf}{\textasciitilde jiang/glmmmle.suppl.pdf}.}
\end{supplement}
The collapsar model
for a gamma ray burst invokes the presence of an
accretion torus around a newly born black hole (Woosley 1995;
Paczy\'nski 1998; MacFadyen \& Woosley 1999).
The accretion energy is being transferred to the jet that propagates through
the collapsar envelope and at some distance from the central engine is
responsible for producing gamma rays. This type of model is commonly
accepted as a mechanism for long gamma ray burst production, because
the whole event can last as long as the fallback material from the
collapsar envelope is available to fuel the accretion disk or torus.
However, one should bear in mind that the rotating torus
may form only when a substantial amount of specific angular momentum is
carried by the material
(see e.g. Lee \& Ramirez-Ruiz 2006 for a recent study of this problem).
This can be parameterized by the so-called
critical specific angular momentum value, which depends on the
mass of the black hole, i.e. $l_{\rm crit}=2 R_{\rm g} c$,
where $R_{\rm g}$ is the gravitational radius.
Because the black hole mass is not constant
during the collapsar evolution, but increases as the envelope
material accretes onto it, the critical angular
momentum will change in time. Consequently, the amount of the
rotating material, which was initially available for the torus
formation, may become insufficient at a later stage of the collapsar evolution.
Moreover, the spin of the black hole will be changed by
accretion. Whether the black hole can achieve a spin parameter close
to the maximum one, depends on the properties of the
accreted mass. While a large spin
($a \sim 0.9$) is thought to be a necessary condition
for the jet launching (Blandford \& Znajek 1977),
it may happen that not enough specific angular momentum
is being transferred to the black hole as its mass increases.
Another challenge for the collapsar model is due to the
effects of stellar wind, which removes from the Wolf-Rayet stars a
large fraction of angular momentum (Langer 1998).
However, the winds of massive stars are relatively weaker in
the low metallicity environment
(Abbott et al. 1982),
and possibly the GRB progenitor stars can rotate faster
than an average W-R star
(Vink 2007).
Here we address the question of whether the collapsing star envelope
contains enough specific angular momentum
in order to support the
formation of the torus. Furthermore, it will be
interesting to study the problem of spin up and spin down
of the newly born black hole, and we shall consider this in a follow up paper.
These two are the key properties needed to launch
the GRB jet for
an extended period of time.
Because the angular momentum distribution in the Wolf-Rayet stars
is unknown, we may pose this question also in a different way:
we want to check how much angular momentum has to be
initially present in the stellar envelope in order to
allow for a long GRB production.
In Section \ref{sec:model}, we describe the model of the initial conditions
and evolution of the collapsing star, adopting various prescriptions for the angular
momentum distribution and different scenarios of the accretion process.
In Section \ref{sec:results}, we present the results for the mass accreted onto the black hole
in total and through the torus. We also estimate the
resulting GRB durations, first in case of a constant $\dot m$ and then
by explicitly calculating the free fall velocity of the gas
accreting in the torus.
Finally in Section \ref{sec:diss},
we discuss the
resulting duration time of GRBs, as a function of the distribution of the
specific angular momentum in the progenitor star.
\section{Model}
\label{sec:model}
In the initial conditions, we use
the spherically symmetric model of the 25 $M_{\odot}$ pre-supernova
(Woosley \& Weaver 1995).
The same model was used by Proga et al. (2003) in their MHD simulation
of the collapsar.
Figure \ref{fig:ror} shows the density profile and the mass enclosed inside a given radius.
The Figure also shows the free fall timescale onto the enclosed mass,
corresponding to the radius.
\begin{figure}
\epsscale{.80}
\plotone{Fig1.ps}
\caption{The density and mass profiles in the pre-supernova model. The data are taken from
Woosley \& Weaver (1995), model No. S251S7B@14233.
The x-axis on the upper panel shows the free fall timescale corresponding to the radius shown on the lower panel's x-axis.}
\label{fig:ror}
\end{figure}
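The free fall axis of Figure \ref{fig:ror} can be reproduced with a few
lines of Python. The sketch below assumes the standard expression for
radial free fall from rest onto the enclosed mass,
$t_{\rm ff}=(\pi/2)[r^{3}/\{2GM(r)\}]^{1/2}$ (the paper does not spell
out its convention), and, since the tabulated model is not reproduced
here, an illustrative power-law density profile as a stand-in for the
S251S7B@14233 data:
\begin{verbatim}
# Sketch: free-fall timescale onto the mass enclosed within r;
# the power-law profile is an illustrative stand-in for the data.
import numpy as np

G = 6.674e-8                          # cgs
r = np.logspace(8.5, 10.5, 200)       # cm, illustrative range
rho = 1.0e7 * (r / r[0]) ** -2.5      # g/cm^3, illustrative profile
m_enc = np.concatenate(([0.0], np.cumsum(
    2.0 * np.pi * (rho[1:] * r[1:]**2 + rho[:-1] * r[:-1]**2)
    * np.diff(r))))                   # trapezoidal shell masses
t_ff = 0.5 * np.pi * np.sqrt(r[1:] ** 3 / (2.0 * G * m_enc[1:]))
print(t_ff[0], t_ff[-1])              # seconds
\end{verbatim}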
The angular momentum within a star or rotating torus may depend on
radius (see e.g. Woosley 1995, Jaroszy\'nski 1996, Daigne \&
Mochkovitch 1997
for various prescriptions).
Here we parameterize this distribution to be either a function of
the polar angle $\theta$ (simplest case; models {\bf A} and {\bf B}),
or a function of both
radius $r$ and $\theta$ (models {\bf C} and {\bf D}).
First, we assume the specific angular momentum to depend only on the
polar angle:
\begin{equation}
l_{\rm spec} = l_0 f(\theta).
\end{equation}
We consider two different functions:
\begin{equation}
f(\theta) = 1- |\cos \theta| ~~ {\rm (model ~ {\bf A})}
\label{eq:ft1}
\end{equation}
\begin{equation}
f(\theta) = \sin^{2}\theta ~~ {\rm (model ~ {\bf B})}
\label{eq:ft2}
\end{equation}
The rotation velocity is therefore given by:
\begin{equation}
v_{\varphi} = {l_{0} \over r \sin \theta} f(\theta)
\end{equation}
The normalization of this dependence is defined with respect to the
critical specific angular momentum for the seed black hole:
\begin{equation}
l_{0} = x l_{\rm crit}(M^{0}_{\rm BH}) = x \times 3.54 \times 10^{16} {M [M_{\odot}]
\over 2} ~~{\rm cm^{2}~s^{-1}}
\end{equation}
where $R_{\rm g} = 2GM^{0}_{\rm BH}/c^{2}$
is the Schwarzschild
radius (non-rotating black hole).
Second, we assume that the specific angular momentum will depend on the
polar angle, as well as on the radius in the envelope, as:
\begin{equation}
l_{\rm spec} = l_{0} g(r)f(\theta),
\end{equation}
We adopt the following functions:
\begin{equation}
l_{\rm spec} = x l_{\rm crit} ({r \over r_{\rm core}})^{2} \sin^{2}
\theta ~~ {\rm (model ~{\bf C})}
\label{eq:ft3}
\end{equation}
\begin{equation}
l_{\rm spec} = x \sqrt{8 G M_{\rm core} r} \sin^{2}\theta ~~ {\rm
(model ~{\bf D})}
\label{eq:ft4}
\end{equation}
The above model {\bf C} corresponds to the rotation with a constant
angular velocity $\Omega$,
while the model {\bf D} corresponds to a constant ratio
between the centrifugal and gravitational forces.
Note that the strong increase of $l_{\rm spec}$ with radius
will lead to a very fast rotation at large radii.
Therefore, a cut off may be required at some maximum value, $l_{\rm
max}$ (see below).
The normalization of all the models is chosen such that the specific angular momentum is
always equal to the critical value at $\theta = 90^{\circ}$, and at $r=r_{\rm core}$ if the
model depends on radius.
In Section \ref{sec:results}, we present the results of our calculations
considering a range of initial values of $x$.
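The four prescriptions can be collected in a few lines of Python; the
values of $x$, $M_{\rm core}$ and $r_{\rm core}$ below are illustrative
placeholders rather than those of the adopted pre-supernova model:
\begin{verbatim}
# Sketch: specific angular momentum prescriptions, models A-D;
# x, M_core and r_core are illustrative placeholder values.
import numpy as np

G, M_sun = 6.674e-8, 1.989e33
x, M_core, r_core = 2.0, 1.7 * M_sun, 1.0e9
l_crit = 3.54e16 * (M_core / M_sun) / 2.0   # = 2 R_g c for M_core

def l_spec(model, r, theta):
    if model == 'A':
        return x * l_crit * (1.0 - np.abs(np.cos(theta)))
    if model == 'B':
        return x * l_crit * np.sin(theta) ** 2
    if model == 'C':
        return x * l_crit * (r / r_core) ** 2 * np.sin(theta) ** 2
    if model == 'D':
        return x * np.sqrt(8.0 * G * M_core * r) * np.sin(theta) ** 2

for mod in 'ABCD':
    print(mod, l_spec(mod, 2.0 * r_core, np.pi / 3))  # cm^2/s
\end{verbatim}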
Initially, the mass of the black hole is given by the mass of the iron
core
of the star:
\begin{equation}
M_{BH}^{0} = M_{\rm core} = 4 \pi \int_{0}^{r_{\rm core}} \rho r^{2} dr.
\end{equation}
For a given $x$, a certain
fraction of the collapsar envelope mass, $M^{0}_{1}$,
carries a specific angular momentum smaller than critical $l_{\rm
crit}^{0} \equiv l_{\rm crit}(M_{\rm BH}^{0})$:
\begin{equation}
M^{0}_{1} = 2 \pi \int_{r_{\rm core}}^{r_{\rm max}}
\int_{0}^{\pi} \rho_{1} r^{2} \sin \theta d\theta dr
\end{equation}
where $\rho_{1} \equiv \rho (r,\theta)|_{l<l^{0}_{\rm crit}}$
is the density in the envelope where
the specific angular momentum is smaller than critical.
Here, the radius $r_{\rm max}$ is the size of the star.
Correspondingly, by $M^{0}_{2}$ we denote the fraction of the envelope
mass that carries the specific angular momentum larger or equal to the critical, with
$\rho_{2} \equiv \rho (r,\theta)|_{l \ge l^{0}_{\rm crit}}$,
and
the total envelope mass is $M^{0}_{\rm env} = M^{0}_{1} + M^{0}_{2}$.
Only the mass $M^{0}_{2}$ can form the torus around the black hole of
the mass $M^{0}_{\rm BH}$.
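For the $\theta$-only models {\bf A} and {\bf B}, the spherically
symmetric density cancels in the ratio $M^{0}_{2}/M^{0}_{\rm env}$,
which reduces to a solid-angle fraction. The short sketch below
(with a few illustrative values of $x$) reproduces, in particular,
the fractions $0.13$ and $0.36$ quoted for $x=1.15$ in
Section~\ref{sec:theta}:
\begin{verbatim}
# Sketch: initial torus mass fraction w0 = M2^0 / Menv^0 for the
# theta-only models; the envelope density cancels in the ratio.
import numpy as np

theta = np.linspace(0.0, np.pi, 200_001)
dth = theta[1] - theta[0]

def w0(f, x):
    fast = f(theta) >= 1.0 / x          # region with l >= l_crit^0
    return 0.5 * np.sum(np.sin(theta) * fast) * dth

fA = lambda t: 1.0 - np.abs(np.cos(t))  # model A
fB = lambda t: np.sin(t) ** 2           # model B
for x in [1.15, 2.0, 5.0]:
    print(x, w0(fA, x), w0(fB, x))      # A: 1 - 1/x, B: sqrt(1 - 1/x)
\end{verbatim}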
The above relations set up the initial conditions for the torus
formation in the collapsar, and $l_{\rm crit}$ is defined by the mass
of the iron core, $M_{\rm core}$. However,
as the collapse proceeds, the mass of the black hole will increase and
the critical specific angular momentum will be a function of the increasing
mass: $l_{\rm crit}(M_{\rm BH})$.
The main point of this work is to compute the mass of the progenitor
with $l>l_{\rm crit}$, taking into account this effect.
Below, we redefine $\rho_{1}$ and $\rho_{2}$, so that the
$l_{\rm crit} - M_{\rm BH}$ relation is taken into account.
To compute the mass of the envelope part that has high enough $l$ to
form a torus around a given BH, and to
estimate the time duration of the GRB powered by accretion,
we need to know the mass of this black hole.
A current $M_{\rm BH}$ depends on the mass of the seed
black hole and on the accretion scenario.
We approximate this scenario in the following way. We assume that
accretion is nonhomologous and the BH grows by accreting mass $\Delta
m^{\rm k}$, which is a function of the mass of a shell between the
radius $r_{\rm k}$ and $r_{\rm k}+\Delta r_{\rm k}$ (see e.g. Lin \&
Pringle 1990).
Formally, we perform the calculations of $M_{\rm BH}$ and
$\Delta m^{\rm k}$ iteratively:
\begin{equation}
M_{\rm BH}^{\rm k} = M_{\rm BH}^{\rm k-1}+\Delta m^{\rm k}
\end{equation}
where the increment of mass of the black hole is :
\begin{equation}
\Delta m^{\rm k} = 2 \pi \int_{r_{\rm k}}^{r_{\rm k}+\Delta r_{\rm k}}
\int_{0}^{\pi} \bar{\rho} r^{2} \sin \theta d\theta dr
\end{equation}
Here $\bar{\rho}$ depends on the accretion scenario (see below)
and contains the information of the specific angular momentum distribution.
The above two equations define an iterative procedure due to the
nonlinear dependence of $\bar{\rho}$ on $M_{\rm BH}$.
We start from the radius $r_{0} = r_{\rm core}$, i.e. that of an iron core.
We distinguish here three possible accretion scenarios: \\
(a) the accretion onto black hole proceeds at the same rate both from
the torus and from the gas close to the poles, with $l<l^{\rm k}_{\rm crit}$, i.e.
$\bar{\rho} \equiv \rho$ (and does not depend on $\theta$); \\
(b) the
envelope material with $l<l^{\rm k}_{\rm crit}$ falls on the black hole first.
Thus, until the polar funnel is evacuated completely, only this gas contributes to the
black hole mass, i.e. $\bar{\rho} \equiv \rho_{1}$. After that, the material
with $l>l^{\rm k}_{\rm crit}$ accretes, and $\bar{\rho} \equiv \rho_{2}$; \\
(c) the accretion proceeds only through the torus, and only this
material contributes to the black hole growth
i.e. $\bar{\rho} \equiv \rho_{2}$. In this case the rest of the envelope
material is kept aside until the torus is accreted.
The densities $\rho_{1}$ and
$\rho_{2}$, defined above,
depend on $l_{\rm crit}^{\rm k} \equiv l_{\rm crit}(M^{\rm k}_{\rm BH})$.
The above accretion scenarios are illustrated in Figure
\ref{fig:scheme}. The panel (a) shows the scenario of a uniform accretion,
in which the whole envelope material falls into black hole,
regardless of its specific angular momentum. The red color marks
the material with $l<l^{\rm k}_{\rm crit}$.
The blue colors mark the material with $l>l^{\rm k}_{\rm crit}$, and when the black hole is
small, this condition is satisfied for a larger range of $\theta$ (dark blue).
When the black hole increases, the material with $l>l^{\rm k}_{\rm crit}$ occupies narrower
$\theta$ range (light blue).
The panel (b) shows the scenario with two steps: first the material
with $l<l^{\rm k}_{\rm crit}$
accretes onto the black hole, increasing its mass; after this material is
exhausted, the material with $l>l^{\rm k}_{\rm crit}$ starts accreting. Because the black hole
mass has already increased, material with large $l$ is concentrated very close to the equator.
The panel (c) shows the scenario in which only the material with
$l>l^{\rm k}_{\rm crit}$ accretes.
\begin{figure}
\epsscale{.80}
\plotone{Fig2.eps}
\caption{The scheme of accretion scenarios. The red color
indicates the material with $l<l_{\rm crit}$. The
blue colors indicate the material with $l>l_{\rm crit}$:
darker for smaller black hole mass, and lighter for larger black hole mass.
Arrows indicate, which material is accreting and contributes to the black hole growth.}
\label{fig:scheme}
\end{figure}
In scenario {\it a} the mass accretion rate does not depend on
the specific angular momentum. This is a very special and
simple scenario. In reality, the accretion rate can depend
on the specific angular momentum.
For example, if an accreting torus produces a very powerful
outflow, the weakly rotating polar material could be expelled
and never reach the black hole (scenario {\it c}). This would also be a
special and extreme situation. It is more likely
that both the polar and disk material accrete but at different
rates. However, it is unclear what these rates are
and detailed simulations of MHD flows show that
the rates can depend on time. For example, there are periods
of time when
the polar material accretes faster than the disk material
and vice versa (e.g. Proga \& Begelman 2003). To bracket this more realistic
situation, we consider here another extreme and rather
artificial scenario {\it b}, which corresponds
to a somewhat 'reversed' scenario {\it c}. In this scenario,
initially the torus accretion rate is zero and accretion is dominated
by the polar material.
Only after the polar material is exhausted,
the torus accretion starts. We note that although
this scenario is quite extreme, it may be relevant if
jets in GRBs must be very clean and light because
in this scenario jets will be moving in the 'empty' polar funnels.
Due to the increasing mass of the black hole, the critical angular
momentum also increases, and as a result less and less material can
satisfy the condition for the torus formation ($l > l^{\rm k}_{\rm crit}$).
We stop the calculations, when there is no material with
$l>l^{\rm k}_{\rm crit}$,
i.e. able to form the torus:
\begin{equation}
w_{\rm k} = {M_{2}^{\rm k} \over M_{\rm env}^{\rm k}} =
{ 2 \pi \int^{r_{\rm max}}_{r_{\rm k+\Delta r}}
\int_{0}^{\pi} \rho_{2} r^{2} \sin \theta d\theta dr
\over
4 \pi \int^{r_{\rm max}}_{r_{\rm k+\Delta r}}
\rho r^{2} dr
} = 0.
\label{eq:kmax}
\end{equation}
Alternatively,
the iterations may be stopped earlier, for example if we impose a physical limit
based on the free fall timescale, or require the accretion rate to be
adequate to power the prompt GRB phase.
The duration of the GRB can then be estimated as the ratio between
the mass accreted through the torus and the accretion rate $\dot m$:
\begin{equation}
M^{\rm torus}_{\rm accr} = \sum_{\rm k=1}^{k_{\rm max}} M_{2}^{\rm k}
\end{equation}
\begin{equation}
\Delta t_{\rm GRB} = {M_{\rm accr}^{\rm torus} \over \dot m}
\end{equation}
where the number $k_{\rm max}$ is defined by Equation \ref{eq:kmax}.
Note that we assume here that the duration of the GRB prompt emission is
equal to the duration of the torus replenishment.
In principle, $\dot m$ may depend on time.
Here we take two approaches. First,
for simplicity, we assume a constant accretion rate
of a moderate value ($\dot m = 0.01-1.0$ M$_{\odot}$ s$^{-1}$,
see e.g. Popham, Woosley \& Fryer 1999; Janiuk et al. 2004).
Second, in more detailed calculations we determine the instantaneous
accretion rate during the iterations, set by the free
fall velocity of the gas in the torus.
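For orientation: with a constant $\dot m = 0.5\, M_{\odot}$ s$^{-1}$, a
torus-accreted mass of $4\, M_{\odot}$ corresponds to
$\Delta t_{\rm GRB} = 8$ s, and $20\, M_{\odot}$ to 40 s; these simple
conversions are consistent with the durations quoted in Section
\ref{sec:results}.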
\section{Results}
\label{sec:results}
\subsection{Models with the specific angular momentum dependent only on $\theta$}
\label{sec:theta}
Figure \ref{fig:fig3} shows the initial fraction of the envelope mass
which carries large enough angular
momentum to form the
rotating torus, $w_{0} \equiv M^{0}_{2}/M^{0}_{\rm env}$
(see Eq. \ref{eq:kmax})
for models {\bf A} and {\bf B}.
For instance, for the adopted function
$f(\theta)$ given by Eq. \ref{eq:ft1} (model {\bf A}),
and for the initial angular
momentum normalization of $x=1.15$, we obtain $w_{0}=0.13$.
This means that only 13\% of the total mass of the
envelope will initially be able to form the torus,
while the remaining 87\% of the
envelope mass will fall radially towards the black hole.
On the other hand, for $x>5$,
more than 75\% of the envelope mass will be rotating fast enough to
contribute to the torus formation.
Model {\bf B} gives systematically larger values of $w_{0}$:
for $x=1.15$ we have $w_{0}=0.36$, while for $x> 5$ we have
more than 85\% of the envelope mass able to form the torus.
\begin{figure}
\epsscale{.80}
\plotone{Fig3.ps}
\caption{The initial mass fraction of material with the angular
momentum $l>l_{\rm crit}$, as a function of the
initial normalization of the specific angular momentum distribution,
for model {\bf A} (solid squares) and model {\bf B} (open circles) of the
distribution function $f(\theta)$.
}
\label{fig:fig3}
\end{figure}
As we show below, these are only upper limits on the mass that can be
accreted through the torus, and hence on the GRB duration. These values will be much smaller
when we calculate the collapsar evolution with the increasing
$l^{\rm k}_{\rm crit}$ instead of $l^{\rm 0}_{\rm crit}$.
Figure \ref{fig:lc} shows $l^{\rm k}_{\rm crit}$, i.e. $l_{\rm crit}$ as a function of the
current radius $r_{\rm k}$, which is the inner radius of the collapsing
envelope in a subsequent step $k$.
The figure thus shows how the critical specific angular momentum changes
with time during the collapse, for an exemplary value of $x=7$.
The $l_{\rm crit}$ rises with time,
as the black hole accretes mass from the envelope, and thus corresponds to
the growing black hole mass. The
steepest rise occurs for the uniform accretion scenario {\it a}, in which
case by definition the result does not depend on the adopted distribution function
for the specific angular momentum, $f(\theta)$. Therefore both curves marked by a solid line
overlap.
Also, in scenario {\it a} the plotted
curves do not depend on $x$: neither their slope nor their
location changes. The value of $x$ influences only the maximum of
the curve, as for larger $x$ we have more material available to
form the torus. In particular, for $x=7$, the two overlapping
curves shown in the figure end at $r\sim 10^{13}$ cm.
For the scenario {\it c}, i.e. accretion of gas with $l>l^{\rm k}_{\rm crit}$,
$l_{\rm crit}$ rises less steeply with $r_{\rm k}$ than in scenario
{\it a}, because
now less material is contributing to the black hole mass.
In this case $f(\theta)$
affects the results, and
model {\bf A}
gives systematically smaller values of
$l_{\rm crit}$ than model {\bf B}.
For scenario {\it b}, the result is very sensitive to $x$,
and we can have either one or two phases of accretion: only the
polar inflow, or first the polar inflow
and then torus accretion.
The value of $x=7$ was chosen because in model {\bf A} no torus
is yet able to form,
and we have only phase 1,
while in model {\bf B} this value of $x$ is already large enough and
phase 2 occurs.
For phase 1 in scenario {\it b} (marked by the thinner lines in the figure),
i.e. when the material with $l<l^{\rm k}_{\rm crit}$ is accreting, the
dependence on
$f(\theta)$ is the following:
model {\bf A} adds more mass to the black hole and therefore
leads to larger values of $l_{\rm crit}$ than model {\bf B}.
For phase 2 of scenario {\it b} (present only
for model {\bf B} and marked by the thick line in the figure),
the evolution starts from the last $l^{\rm k}_{\rm crit}$ achieved at the end
of phase 1.
Then $l^{\rm k}_{\rm crit}$ increases, and ultimately reaches the final
solution of models {\bf B}{\it c} and {\bf A}{\it b},
because this $l_{\rm crit}$ corresponds to the black hole mass
that has increased maximally: either only through the torus, or first
through the polar funnels and then through the torus accretion.
All the curves in Figure \ref{fig:lc} exhibit a characteristic
evolution of their slopes, tracing the density distribution in
the progenitor star (see the top panel of Figure
\ref{fig:ror}). First, the fast rise is due to accretion of
the most dense inner shells of the stellar envelope. Then, the slope
flattens, as the density in the envelope decreases and the mass does not
grow very fast.
In the end, the slope of
$l^{\rm k}_{\rm crit} \equiv l_{\rm crit}(r_{\rm k}) \equiv
l_{\rm crit}(M_{\rm BH})$ rises again,
due to the larger
volume of the shells, but this rise depends on
the adopted scenario. In scenario {\it c} the sequence is the following:
increase of $l^{\rm k}_{\rm crit}$ $\rightarrow$ more accretion
$\rightarrow$ larger increase of $l^{\rm k}_{\rm crit}$ $\rightarrow$ less
accretion. In phase 1 of
scenario {\it b}, such an equilibrium is not established, because
the accretion onto the black hole proceeds through the polar funnels,
i.e. using the material with $l<l^{\rm k}_{\rm crit}$, so we
have: increase of $l^{\rm k}_{\rm crit}$ $\rightarrow$ more accretion
$\rightarrow$ further increase of $l^{\rm k}_{\rm crit}$, and for
large radii $r_{\rm k}$ the slope of
the curves shown in Fig. \ref{fig:lc} in this scenario is much steeper.
Phase 2 of
scenario {\it b} results in changes of the slope of $l^{\rm
k}_{\rm crit}$
similar to the other scenarios, {\it a} and {\it c} (provided that
phase 2 occurs).
\begin{figure}
\epsscale{.80}
\plotone{Fig4.ps}
\caption{The critical specific angular momentum
as a function of radius within which the envelope mass collapsed
onto BH. The results are shown for one exemplary value of the initial
normalization parameter, $x=l_{0}/l_{\rm crit} = 7$.
The models of the distribution function $f(\theta)$ are:
{\bf A} (solid squares) and {\bf B} (open circles),
and accretion scenarios are: {\it a} (solid line), {\it b} (short dashed line)
and {\it c} (long dashed line). The thin line for model {\bf B}{\it b}
represents the results from the phase 1 of accretion (through the
poles), while the thicker line {\bf B}{\it b} represents the results
from the phase 2 (torus accretion).
Note that for model {\bf A}{\it b} there is only a thin line, and
no phase 2, because for $x=7$ the torus does not form.
Note also that the solid lines (i.e. models {\bf A}{\it
a} and {\bf B}{\it a}) overlap.}
\label{fig:lc}
\end{figure}
In Figure \ref{fig:fig2}, we show the total mass accreted onto a black
hole,
and in Figure \ref{fig:fig2a} we show the mass accreted through the torus,
both
as a function of $x$. Figure \ref{fig:fig2a} can also be regarded as
showing
the estimated duration of a GRB, if the
accretion rate is constant.
The results are again presented for the 3 scenarios of accretion: {\it a},
{\it b}
and {\it c}
(marked by solid, short-dashed and long dashed lines, respectively),
as well as for the two prescriptions for the function
$f(\theta)$, models {\bf A} and {\bf B}
(marked by squares and circles, respectively).
The upper panels in Figs. \ref{fig:fig2} and \ref{fig:fig2a}
show the results obtained in the case of the maximum limit for the
free fall time set to $t_{\rm ff}^{\rm max}=1000$ s.
The values of the total accreted mass
are the largest for the scenario of the uniform accretion, {\it a}.
Depending on $x$,
the black hole mass is growing until
there is no further material with $l > l^{\rm k}_{\rm crit}$.
\begin{figure}
\epsscale{.80}
\plotone{Fig5.ps}
\caption{The total mass accreted onto a black hole,
as a function of the
initial normalization of the specific angular momentum distribution.
The models of the distribution function $f(\theta)$ are:
{\bf A} (solid squares) and {\bf B} (open circles),
and accretion scenarios are: {\it a} (solid line), {\it b} (short dashed line)
and {\it c} (long dashed line).
The upper panel shows the case when the mass accretion is limited by a
maximum free fall time of 1000 seconds, while the lower panel shows the
results for no limiting $t_{\rm ff}$ (cf. Fig. \ref{fig:ror}).}
\label{fig:fig2}
\end{figure}
\begin{figure}
\epsscale{.80}
\plotone{Fig6.ps}
\caption{The accreted mass with $l>l_{\rm crit}$,
as a function of the
initial normalization of the specific angular momentum distribution.
The models of the distribution function $f(\theta)$ are:
{\bf A} (solid squares) and {\bf B} (open circles),
and accretion scenarios are: {\it a} (solid line), {\it b} (short dashed line)
and {\it c} (long dashed line).
The upper panel shows the case when the mass accretion is limited by a
maximum free fall time of 1000 seconds, while the lower panel shows the
results for no limiting $t_{\rm ff}$ (cf. Fig. \ref{fig:ror}).}
\label{fig:fig2a}
\end{figure}
In scenario {\it b}, initially we add to the black hole mass
(thus increasing $l^{\rm k}_{\rm crit}$)
only the material with $l<l^{\rm k}_{\rm crit}$. For small values of $x$,
the total accreted mass is the same as for scenario {\it a}, because
the process of accretion lasts in both cases until $M^{\rm k}_{2} = 0$,
i.e. the envelope contains no
further material with $l>l^{\rm k}_{\rm crit}$.
However, for large $x$ ($\ge 7$ in model {\bf A} and $\ge 5$ in model
{\bf B}),
after the black hole has swallowed the whole funnel with
$l<l^{\rm k}_{\rm crit}$, there is still some material with large
specific angular momentum and phase 2 occurs.
The material accretes now through the torus, but only as long as it has
$l>l^{\rm k}_{\rm crit}$. Therefore, for large $x$,
the total mass which the black hole gains in scenario {\it b} is
less than in the scenario {\it a}.
Scenario {\it c} assumes that the BH accretes only the material
with $l>l^{\rm k}_{\rm crit}$. Now,
the total accreted mass can be either the same (for small $x$)
or smaller than in scenario {\it a}.
In Figure \ref{fig:fig2a}, we show the
accreted mass which had $l>l^{\rm k}_{\rm crit}$.
This represents the accretion through the torus, and may be
regarded as a direct measure of the GRB duration.
Scenario {\it a} results in a linear scaling of
$M_{\rm accr}^{\rm torus}$ with $x$:
\begin{equation}
M_{\rm accr}^{\rm torus} = \alpha x + \beta
\end{equation}
and with a linear fit we obtained
$\alpha \approx 0.83$, $\beta \approx -1.41$ in model {\bf A}, and $\alpha\approx
1.12$, $\beta \approx -1.55$ in model {\bf B}.
Scenario {\it b} predicts that the torus accretion is possible only
for large $x$, while for small $x$ the torus will
not form, as discussed above.
The scenario {\it c}, by definition,
predicts the torus accretion for any value of $x>1.0$.
Therefore, the amount of material
accreted with $l>l^{\rm k}_{\rm crit}$ is in this scenario
larger than
in scenario {\it a}, because the black hole mass grows more slowly.
Both scenarios {\it a} and {\it b} result, for large $x$, in a nonlinear
dependence of $M_{\rm accr}^{\rm torus}$ on $x$.
To sum up, in the models {\bf A} and {\bf B}, i.e. if the specific angular
momentum in the progenitor star depends only on the polar angle,
the total mass of material capable of forming the torus can only be a
small fraction of the envelope mass. Depending on the accretion scenario, it
is at most $\sim 3.5 M_{\odot}$, i.e. 15\% of the envelope mass,
for $l_{0} = 3 l^{0}_{\rm crit} = 10^{17}$ cm$^{2}$s$^{-1}$, and
between $\sim 7$ and $\sim 15 M_{\odot}$, i.e. 30\%-65\% of the
envelope mass, for $l_{0} = 10 l^{0}_{\rm crit} =
3.3 \times 10^{17}$ cm$^{2}$s$^{-1}$.
Note that we could proceed with larger $l_{0}$, but we
stopped our calculations at $x=10$,
because larger $x$ would already imply very fast rotation at the
equator. In the present section we did not assume any maximum limit
on the specific angular momentum; this will be taken into account in
the next section.
However, we considered the effects of the maximum limit on the free
fall timescale. As shown in the figures, the limit of
$t_{\rm ff}^{\rm max}=1000$ s plays an important role when $x>5$,
and in all the models and scenarios
the dependence of the accreted mass on $x$ is significantly weaker
than for the case of no limiting $t_{\rm ff}$.
For large $x$, in scenario {\it a} the total mass accreted onto BH is
constant with $x$
and equal to the fraction of the envelope mass enclosed within the radius
$r \approx 1.58 \times 10^{11}$ cm (see Fig. \ref{fig:ror}).
The mass accreted via torus is smaller than that, and for $x=10$
it reaches about 6 solar masses.
\subsection{Models with the specific angular momentum dependent on $r$ and $\theta$}
\label{sec:radius}
Now, we investigate how the total accreted mass and in consequence the
duration of the GRB will be affected if the angular velocity $\Omega$
in the collapsing star is constant or given by a fixed ratio of the
centrifugal to gravitational force (Equations \ref{eq:ft3} and \ref{eq:ft4},
respectively).
In these two models, {\bf C} and {\bf D}, the specific angular momentum is
a strong function of radius.
Therefore, if we do not impose any limit on the specific angular momentum,
$l_{\rm max}$, the outer shells of the star will always have a
substantial amount of specific angular momentum, larger than any
current critical
value. Consequently, the GRB will continue until the last shell is
accreted. However, this would imply a very fast rotation of the star in its outer layers.
This would be inconsistent with the progenitor models for GRBs, so
in a more realistic version of our modeling we impose a
maximum value of the specific angular momentum. This assumption will lead
to a shorter GRB event, as it will end when the increasing black hole
mass implies $l^{\rm end}_{\rm crit} > l_{\rm max}$.
\begin{figure}
\epsscale{.80}
\plotone{Fig7.ps}
\caption{The value of critical specific angular momentum
during an iteration, for one exemplary value of the initial
normalization parameter, $x=l_{0}/l_{\rm crit} = 0.05$.
The models of the distribution function $f(r,\theta)$ are:
{\bf C} (solid squares) and {\bf D} (open circles),
and accretion scenarios are: {\it a} (solid line) and {\it c} (dashed line).
}
\label{fig:figlccd}
\end{figure}
First, we investigate how
the black hole mass, $M^{\rm k}_{\rm BH}$, and critical specific
angular momentum, $l^{\rm k}_{\rm crit}$,
depend on the accretion scenario.
For the scenario {\it a} (i.e. uniform accretion), the black hole is
fed by the whole envelope regardless of the local specific angular momentum value.
The result is the same as in the cases explored in Section
\ref{sec:theta}:
the total accreted mass depends neither on
the distribution function ({\bf C} or {\bf D}) nor on the normalization ($x$).
In Figure \ref{fig:figlccd},
we show how the critical specific angular momentum
increases as the subsequent envelope shells accrete (i.e.
as a function of radius). The solid lines in this figure
overlap, and the curve is basically the same as in
Figure \ref{fig:lc} (for models {\bf A} and {\bf B}),
the only difference being that the maximum value reached in models
{\bf C} and {\bf D} can be larger (specifically, for
$\log l_{\rm max} \ge 17.3$).
The amount of the total
accreted mass in this case is constant
(Figure \ref{fig:figmcd}) and the value depends only on $l_{\rm max}$:
\begin{equation}
M_{\rm accr} = {l_{\rm max}c \over 4 G} - M_{\rm core}.
\label{eq:maccr}
\end{equation}
For our model of the star and $l_{\rm max} = 10^{17}$ cm$^{2}$s$^{-1}$
this gives
$M_{\rm accr} = 3.99 M_{\odot}$.
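This value can be checked directly: with
$c \approx 3.0\times 10^{10}$ cm s$^{-1}$ and
$G \approx 6.67\times 10^{-8}$ cm$^{3}$ g$^{-1}$ s$^{-2}$,
\begin{displaymath}
{l_{\rm max} c \over 4 G} \approx
{10^{17} \times 3.0\times 10^{10} \over 2.7\times 10^{-7}}\ {\rm g}
\approx 1.1\times 10^{34}\ {\rm g} \approx 5.7\, M_{\odot},
\end{displaymath}
and subtracting the $\sim 1.7\, M_{\odot}$ core indeed leaves
$\approx 4\, M_{\odot}$.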
If there is no cutoff
of $l_{\rm max}$ then simply the total envelope mass is accreted,
23.9 $M_{\odot}$
(cf. the bottom panel of Fig. \ref{fig:figmcd}).
\begin{figure}
\epsscale{.80}
\plotone{Fig8.ps}
\caption{
The total accreted mass
as a function of the
initial normalization of the specific angular momentum distribution.
The models of the distribution function $f(r)g(\theta)$ are:
{\bf C} (solid squares) and {\bf D} (open circles),
and accretion scenarios are: {\it a} (solid line) and {\it c} (dashed line).
The upper panel shows the case when the mass accretion is limited by a
maximum free fall time of 1000 seconds,
the middle panel shows the case of the specific angular momentum cut off to
$l_{\rm max} = 10^{17}$ cm$^{2}$s$^{-1}$, while the bottom panel shows the case of
no free fall time limit and no specific angular momentum cut off.
Note, that the solid lines for models {\bf C} and {\bf D} overlap.
}
\label{fig:figmcd}
\end{figure}
The situation becomes more complicated when we adopt the scenario {\it
c} (accretion of material with $l> l^{\rm k}_{\rm crit}$).
The accreted mass in this scenario depends both on the distribution and on
the normalization of the specific angular momentum in the
pre-collapse star.
For small $x$, the accreted mass is small, because
the process ends when $l<l^{\rm k}_{\rm crit}$ everywhere in the
envelope and $M^{\rm k}_{2}=0$.
The total mass accreted onto the black hole
is sensitive to the model distribution function.
In particular, the fact that this function strongly depends on the
radius means that the inner shells contain mostly material with
$l \ll l^{\rm k}_{\rm crit}$. Thus, only the more distant shells contribute
to the mass of the black hole
(see Fig. \ref{fig:figlccd}, dashed lines),
and the particular value of $r_{\rm k}$ for
which the black hole mass and $l^{\rm k}_{\rm crit}$ start rising
depends on $x$. Note that Fig. \ref{fig:figlccd} shows only the
results for $x=0.05$ (arbitrarily chosen value).
For large $x$, the accreted mass will
asymptotically reach the result for the scenario {\it a}
(see Figure \ref{fig:figmcd}, bottom panel), because
if $x = 1$ then the whole envelope material satisfies
$l \ge l_{\rm crit}$.
Therefore the
$l_{\rm crit}(M_{\rm BH})$ functions seen in Figure
\ref{fig:figlccd} will eventually overlap if
$x$ is close to 1.0.
Therefore in Figure \ref{fig:figmcd} we show
only the results for $x \le 1.0$, because these are the most
interesting:
for larger $x$ the mass accreted, both through the torus and in total,
approaches a constant value.
The smallest amount of the total accreted mass
is obtained when we impose a
cutoff limit on the specific angular momentum, $l_{\rm max}$. This is shown
in Fig. \ref{fig:figmcd} (middle panel) for the value
of $l_{\rm max}=10^{17}$ cm$^{2}$s$^{-1}$.
The total accreted mass is in this model $M_{\rm accr}^{\rm tot}\ll M_{\rm env}$,
and depends only very weakly on $x$.
The value of the $l_{\rm max}$ cut off has to be chosen carefully,
because if
$l_{\rm max} \ge {4G \over c}M_{\rm env} = 4.23\times 10^{17}$
cm$^{2}$s$^{-1}$, then
the accreted mass would be equal to the envelope mass (and equal to
that obtained with the uniform accretion scenario), for $x=1$.
Any value of $l_{\rm max}$ larger than the above value will
not affect the results.
The chosen form of the specific angular momentum distribution
(models {\bf C} or {\bf D}) only slightly
affects the results.
For $x \sim 0.01$, the model {\bf C} gives a larger value of accreted
mass, while for $x \ge 0.1$,
the model {\bf D} leads to somewhat larger $M_{\rm acc}^{\rm
tot}$. However, the results in this case are also affected by numerical
issues, because the
calculation finishes after a very small number of steps.
The accreted mass will be zero, and the
burst will not occur, if the normalization $x$ is very small.
The minimum value can be estimated for models {\bf C} and {\bf D} separately,
if we take the specific angular momentum to be everywhere smaller than critical:
\begin{equation}
x_{\rm min}^{\rm C} = \left({r_{\rm core} \over r_{\rm max}}\right)^{2} =
1.22\times 10^{-11}
\end{equation}
and
\begin{equation}
x_{\rm min}^{\rm D} = {4 \over c}\sqrt{G M_{\rm core} \over r_{\rm max}} = 2.6\times 10^{-4}.
\end{equation}
Now, we can
estimate the duration of the GRB by means of the mass accreted onto
the black hole with $l>l^{\rm k}_{\rm crit}$, i.e. via the torus.
In scenario {\it c}, this is, by definition,
the same as the total mass accreted. As can be seen from the bottom
panel in the
Figure \ref{fig:mcdtorus}, $M_{\rm acc}^{\rm torus} \ge 20 M_{\odot}$, and hence a GRB
duration on the order of $\sim$ 40 seconds, is possible only with the
model with no cut off on $l_{\rm max}$ and $x > 0.1$.
For both models {\bf C} and {\bf D}
the uniform accretion scenario {\it a} gives a slightly
smaller amount of mass accreted
through the torus, but the difference is visible only for $x < 0.1$.
The more physical models, with the specific angular momentum cut off at $l_{\rm max} =
10^{17}$ cm$^{2}$ s$^{-1}$, always give less than $\sim 4 M_{\odot}$ of mass
accreted via the torus, which corresponds to a GRB duration
of only about 8 seconds.
In scenario {\it a}, the mass accreted with $l > l^{\rm k}_{\rm crit}$ is
even smaller than that, especially for small $x$ ($ < 0.5$).
No mass will be accreted through the torus if $x\le 0.05$ (model {\bf C})
or $x\le 0.1$ (model {\bf D}).
In scenario {\it c}, the mass accreted with $l>l^{\rm k}_{\rm crit}$
is independent of $x$, if there is a cut off on $l_{\rm max}$ (cf. Eq.
\ref{eq:maccr} and note that this
mass is equivalent to the total mass accreted).
\begin{figure}
\epsscale{.80}
\plotone{Fig9.ps}
\caption{
The mass accreted with $l>l_{\rm crit}$
as a function of the
initial normalization of the specific angular momentum distribution.
The models of the distribution function $f(r)g(\theta)$ are:
{\bf C} (solid squares) and {\bf D} (open circles),
and accretion scenarios are: {\it a} (solid line) and {\it c} (dashed line).
The upper panel shows the case when the mass accretion is limited by a
maximum free fall time of 1000 seconds,
the middle panel shows the case of the specific angular momentum cut off to
$l_{\rm max} = 10^{17}$ cm$^{2}$s$^{-1}$,
while the bottom panel shows the case of no free fall time limit and
no specific angular momentum cut off.
}
\label{fig:mcdtorus}
\end{figure}
The accretion scenario {\it b}, i.e. accretion composed of
two steps, first through the polar funnel and then through the torus, is not discussed for models
{\bf C} and {\bf D}. This is because
only a very small fraction of the envelope, and only for very small $x$,
has $l<l_{\rm crit}$, so the results would not differ
much from scenario {\it c}.
Finally, we tested the models {\bf C} and {\bf D} with an upper limit
imposed on the free fall timescale, $t_{\rm ff}^{\rm max}=1000$ s.
To compare the two effects, in these tests
we did not assume any limit for the specific angular momentum $l_{\rm max}$.
As shown in the upper panels in the Figures \ref{fig:figmcd}
and \ref{fig:mcdtorus}, the mass accreted onto the black hole
(both in total and via the torus)
is now 3 times smaller than without the $t_{\rm ff}^{\rm max}$ limit.
Nevertheless, this effect is not as strong as the limit on the
progenitor rotation law imposed by $l_{\rm max}$.
To sum up, in the models {\bf C} and {\bf D}, i.e. if the specific
angular momentum in the pre-collapse star depends on $\theta$ and $r$
in such a way that either the angular velocity $\Omega$ is constant
or the ratio between the gravitational and centrifugal
forces is constant, the fraction of envelope material able to form a torus can be
much smaller, or much larger, than in models {\bf A} and {\bf B}.
The fraction of 100\% is possible if there is no limiting value on
the specific angular momentum (or, more specifically,
the limiting value exceeds
$4.23 \times 10^{17}$ cm$^{2}$s$^{-1}$ in our model),
and no limit for a free fall timescale.
However, in more physical modeling which accounts for such limits,
this fraction becomes very small:
for $t_{\rm ff}^{\rm max} = 1000$ s we obtain about 30\%
of the envelope accreted via torus, and
for
$l_{\rm max} = 10^{17}$ cm$^{2}$s$^{-1}$ we obtain at most 15\%.
\subsection{Duration of a GRB}
In Figure \ref{fig:mdotinst} we show the instantaneous accretion rate
during the iterations, i.e. as a function of the current radius.
As the figure shows, the accretion rate is the largest at the beginning of
the collapse, and equal to about 0.1 $M_{\odot}$ s$^{-1}$.
In model {\bf C}, for $x=0.05$ the condition for torus formation
is not satisfied initially (cf. Fig. \ref{fig:figlccd}),
so the accretion rate through the torus is zero. For the same $x$, in model
{\bf D} the torus is already formed near the equatorial plane,
and the accretion rate is about 0.03 $M_{\odot}$ s$^{-1}$.
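A minimal sketch of how such an instantaneous rate can be estimated from
the free fall timescale is given below (Python; the order-unity
prefactor and the function names are our illustrative assumptions, not
necessarily the exact prescription used in the calculations):
\begin{verbatim}
import math

G = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]

def t_ff(r, M_enclosed):
    # free-fall timescale from radius r [cm] onto mass M_enclosed [g];
    # the numerical prefactor (here pi/2/sqrt(2)) is an assumption
    return (math.pi / 2.0) * math.sqrt(r**3 / (2.0 * G * M_enclosed))

def mdot_inst(shell_mass, r, M_enclosed):
    # instantaneous accretion rate: shell mass over its free-fall time
    return shell_mass / t_ff(r, M_enclosed)
\end{verbatim}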
\begin{figure}
\epsscale{.80}
\plotone{Fig10.ps}
\caption{
The instantaneous mass accretion rate during an iteration, for the 4 models
and 2 chosen values of the
initial normalization of the specific angular momentum distribution.
The lower panel shows the models of the distribution function $f(\theta)$:
model {\bf A} (solid squares) and model {\bf B} (open circles) and
the upper panel shows the models of the distribution function $f(r)g(\theta)$:
{\bf C} (solid squares) and {\bf D} (open circles).
The accretion scenarios are: {\it a} (solid line) and {\it c} (dashed line).
}
\label{fig:mdotinst}
\end{figure}
The duration of a gamma ray burst depends on the accretion rate and
the total mass accreted via the torus onto the black hole.
If the torus is the power source for a GRB, the accretion rate,
although it need not be constant, cannot drop to a very small value either:
below, say, 0.01 solar masses per second the neutrino cooling in the
accreting torus may become inefficient.
Table \ref{table:tgrb1} shows the results of the computations
in which we have limited the iterations to the minimum accretion rate of $\dot m_{\rm min} = 0.01 M_{\odot}$ s$^{-1}$. The table summarizes the mass accreted via the torus for
all of our models of the progenitor rotation (i.e. {\bf A},
{\bf B}, {\bf C} and {\bf D}) as well as the two
accretion scenarios ({\it a} and {\it c}).
The mass accreted through the torus never exceeds
4.5 $M_{\odot}$, which implies that the limiting accretion rate
influences the results more strongly than the limit on the free-fall
time, and comparably to the limit on the progenitor rotation.
\begin{table}
\caption{
The mass accreted through the torus (in solar masses)
for various models
and accretion scenarios, under the assumption that the minimum accretion rate
is $\dot m_{\rm min}=0.01 M_{\odot} s^{-1}$}
\begin{center}
\begin{tabular}{l c c c r }
\hline\hline
x & $M^{torus}_{Aa}$ & $M^{torus}_{Ac}$ & $M^{torus}_{Ba}$ & $M^{torus}_{Bc}$ \\
\hline
2.0 & 0.35 & 0.78 & 0.78 & 1.37 \\
3.0 & 1.01 & 1.58 & 1.76 & 2.37 \\
4.0 & 1.62 & 2.12 & 2.61 & 2.94 \\
5.0 & 2.14 & 2.54 & 3.09 & 3.27 \\
6.0 & 2.53 & 2.78 & 3.38 & 3.48 \\
7.0 & 2.82 & 3.02 & 3.57 & 3.64 \\
8.0 & 3.02 & 3.19 & 3.73 & 3.78 \\
9.0 & 3.19 & 3.30 & 3.81 & 3.85 \\
10.0& 3.33 & 3.43 & 3.88 & 3.91 \\
\hline
\hline
x & $M^{torus}_{Ca}$ & $M^{torus}_{Cc}$ & $M^{torus}_{Da}$ & $M^{torus}_{Dc}$\\
\hline
0.05& 1.96 & 2.55 & 1.64 & 2.50 \\
0.1 & 2.93 & 3.18 & 3.41 & 3.53 \\
0.2 & 3.55 & 3.66 & 4.05 & 4.07 \\
0.3 & 3.80 & 3.85 & 4.21 & 4.23 \\
0.4 & 3.95 & 3.97 & 4.35 & 4.31 \\
0.5 & 4.04 & 4.09 & 4.39 & 4.35 \\
0.6 & 4.11 & 4.15 & 4.43 & 4.43 \\
0.7 & 4.17 & 4.21 & 4.45 & 4.45 \\
0.8 & 4.22 & 4.26 & 4.47 & 4.47 \\
0.9 & 4.27 & 4.30 & 4.48 & 4.48 \\
1.0 & 4.32 & 4.33 & 4.49 & 4.49 \\
\hline
\end{tabular}
\end{center}
\label{table:tgrb1}
\end{table}
Table \ref{table:tgrb2} summarizes the durations of the GRB prompt phase,
again for the four models and two accretion scenarios.
The results correspond to the total masses accreted via the torus
that are given in Table \ref{table:tgrb1}. The duration time
was calculated as $t_{\rm GRB} = M^{\rm torus}_{\rm acc} / \langle\dot m\rangle$,
where $\langle\dot m\rangle$ is the mean accretion rate during an iteration.
Note that because the minimum accretion rate was fixed at
0.01 solar masses per second, the average value is approximately equal to
$0.5\, \dot m_{\rm max}$.
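As a consistency check, for model {\bf A}{\it a} with $x=2$ Tables
\ref{table:tgrb1} and \ref{table:tgrb2} give
$\langle\dot m\rangle = 0.35\, M_{\odot}/7.0\ {\rm s} = 0.05\, M_{\odot}$
s$^{-1}$, i.e. $\dot m_{\rm max} \approx 0.1\, M_{\odot}$ s$^{-1}$,
consistent with the peak rates at the beginning of the collapse seen in
Figure \ref{fig:mdotinst}.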
\begin{table}
\caption{Duration of GRB (in seconds)
for various models
and accretion scenarios, under the assumption that the minimum accretion rate
is $\dot m_{\rm min}=0.01 M_{\odot} s^{-1}$}
\begin{center}
\begin{tabular}{l c c c r }
\hline\hline
x & $t_{Aa}$ & $t_{Ac}$ & $t_{Ba}$ & $t_{Bc}$ \\
\hline
2.0 & 7.00 & 15.23 & 11.17 & 19.75 \\
3.0 & 15.15 & 23.77 & 21.96 & 29.60 \\
4.0 & 21.96 & 28.86 & 30.72 & 34.68 \\
5.0 & 27.32 & 32.43 & 35.25 & 37.33 \\
6.0 & 31.01 & 34.08 & 37.80 & 38.93 \\
7.0 & 33.50 & 35.99 & 39.46 & 40.24 \\
8.0 & 35.28 & 37.34 & 40.91 & 41.49 \\
9.0 & 36.66 & 37.96 & 41.39 & 41.85 \\
10.0 & 37.70& 38.85 & 41.90 & 42.27 \\
\hline
\hline
x & $t_{Ca}$ & $t_{Cc}$ & $t_{Da}$ & $t_{Dc}$\\
\hline
0.05& 106.25 & 139.75 & 87.84 & 99.89 \\
0.1 & 126.99 & 147.09 & 48.21 & 49.95 \\
0.2 & 136.21 & 150.19 & 47.62 & 47.70 \\
0.3 & 139.29 & 150.11 & 47.08 & 47.29 \\
0.4 & 141.32 & 150.17 & 47.41 & 46.99 \\
0.5 & 142.43 & 151.77 & 47.44 & 46.98 \\
0.6 & 143.26 & 151.66 & 47.32 & 47.37 \\
0.7 & 144.21 & 142.71 & 47.38 & 47.38 \\
0.8 & 131.66 & 126.07 & 47.22 & 47.20 \\
0.9 & 117.38 & 114.70 & 47.30 & 47.31 \\
1.0 & 108.81 & 106.52 & 47.26 & 47.28 \\
\hline
\end{tabular}
\end{center}
\label{table:tgrb2}
\end{table}
The calculated durations of GRBs are the largest for models {\bf C},
because the average accretion rate in these models is the smallest.
Taking into account only the free fall timescale and
under the adopted assumptions for a minimum $\dot m$,
these models give at most $\sim 145$ s of the GRB prompt phase.
All the other models result in the GRB duration below 50 seconds.
\section{Discussion and conclusions}
\label{sec:diss}
The durations of GRBs range from less than 0.01 to a few hundred seconds
(for a review see e.g. Piran 2005), and the long duration
bursts,
$T_{90}> 2$ s, are supposed to originate from the collapse of a
massive rotating star. The collapsar model assumes
that the presence of a rotating
torus around a newly born black hole is a crucial element of the GRB
central engine for the entire duration of the burst.
In this work, we found that
some specific properties of the progenitor star are important in
order to support the existence of a torus, which consists of the
material with specific angular momentum larger than a critical value
$l>l_{\rm crit}$.
We studied how the initial distribution of specific angular momentum
inside the stellar envelope affects the burst duration, taking into
account the increase of the black hole mass
during the collapse process.
Following Woosley \& Weaver (1995),
we considered the model of a pre-supernova star that predicts the existence of
the $\sim 1.7 M_{\odot}$ iron core, which forms a black hole, and the
$\sim 24 M_{\odot}$ envelope. Therefore in the simplest approach, when
the mass available for accretion is the total envelope mass, and
the accretion rate is a constant value on the order of
$0.1-0.5 M_{\odot}$s$^{-1}$, the
central engine is able to operate for a time required to power a long
GRB (i.e. several tens to a couple of hundreds of seconds).
However, MacFadyen \& Woosley (1999) in their collapsar model show that
most of the initial accretion goes through
the rotating torus rather than through the polar funnel. Torus formation
is possible if
material with substantial specific angular momentum is present in the envelope,
both initially and throughout the event. In this
sense the GRB duration estimated in the uniform accretion scenario
is only an upper limit.
In our calculations, this upper limit is achieved only
if the specific angular momentum distribution in the
pre-supernova star is a strong function of radius (i.e.
$g(r)\sim r^{2}$ or $g(r)\sim \sqrt{r}$), and the inner parts have
$x = l/l_{\rm crit}(r_{\rm in}) \sim 1.0$
(for the initial black hole mass we have
$l_{\rm crit}^{0} \sim 3\times 10^{16}$ cm$^{2}$s$^{-1}$ in our model),
while the
outer parts of the star may have a specific angular momentum as large as
$l_{\rm max} \ge 4.23\times 10^{17}$ cm$^{2}$s$^{-1}$.
Both these conditions challenge the progenitor star models:
they require either a rigid rotation of the star, or a huge ratio of
the centrifugal
to gravitational force. The latter, if we want to keep the value
of $F_{\rm centr}/F_{\rm grav} \sim 0.02$ (as taken by
Proga et al. 2003), would lead to the mass accreted through the torus
being only a fraction of the envelope mass, and a
correspondingly shorter GRB duration (i.e. one should take $x=0.05$,
which implies
a GRB duration in our 'mass units' of 20-21 $M_{\odot}$, cf.
Fig. \ref{fig:mcdtorus}).
Furthermore, the progenitor star models
used in MacFadyen \& Woosley (1999; see also Proga et al. 2003) would rather
assume a limiting value for the specific angular momentum.
In our modeling we followed that work and calculated an exemplary
sequence of models with $l_{\rm max} = 10^{17}$
cm$^{2}$s$^{-1}$.
The results in this case are not promising for long GRB production:
the GRB duration in the accreted mass units does not exceed
4 $M_{\odot}$, which would give an event not longer than a hundred
seconds and only if the accretion rate is very small.
The models with the specific angular momentum distribution depending only
on the polar angle, $\theta$, also do not yield very long GRB
durations. In this case, the mass accreted with $l>l_{\rm crit}$ is of
the order of 15 $M_{\odot}$ if the specific angular momentum in
the progenitor star is about ten times the critical one (i.e. $x \sim 10$),
and the accretion proceeds only
through the torus (the latter finds support in MHD simulations
such as those performed by Proga et al. 2003).
If the accretion proceeds uniformly through the envelope,
the GRB duration drops to about 10 $M_{\odot}$ for the same value of $x$.
Finally, in the scenario when accretion proceeds first through the
poles and then through the torus (i.e. scenario {\it b}
as indicated by HD simulations performed by MacFadyen \& Woosley 1999),
there is no GRB for $x\le 5-7$ (depending on the shape of
$f(\theta)$), because all the material is swallowed by the black hole
during the first stage. For large $x$, the resulting GRB duration
is in between those of scenarios {\it a} and {\it c}.
We plan to consider other progenitor star models such as those
computed by Heger et al. (2005) to check how our conclusions
depend on specific radial and rotational profiles.
We also investigated the models in which the mass accreted onto BH
was limited by the free fall timescale or the minimum accretion rate.
In the case of the free fall time limited to 1000 seconds,
the mass accreted onto the black hole is much smaller than the total
envelope mass, reaching up to 8 $M_{\odot}$, and only for a very
fast rotation of the progenitor star.
Finally, the explicitly calculated durations of GRBs,
obtained after relaxing the assumption of a constant accretion rate,
were at most 30-150 seconds, depending on the model of the specific
angular momentum distribution and accretion scenario.
The accretion of non-rotating or slowly rotating
matter onto a black hole can also reduce the capability of
powering GRBs through the Blandford-Znajek mechanism.
In the estimations done by MacFadyen \& Woosley (1999) a posteriori,
i.e. using the analytical models of the accretion disk
to extend the calculations down to the
event horizon, the authors calculated
the evolution of the black hole mass and angular momentum.
The initial dimensionless angular momentum parameter
of the iron core is taken in their work to be either
$a_{\rm init}=0$ or $a_{\rm init}=0.5$.
However, the black hole changes its total mass
and angular momentum as it accretes matter (see Bardeen 1977 for
specific formulae).
In this way, if the specific angular momentum supply
is substantial, even starting from $a=0$ (a Schwarzschild black hole),
a finite amount of accreted mass makes it possible to obtain $a=1$.
On the other hand, the material
with very small specific angular momentum, which is present
in a collapsing star, will spin down the black hole.
The effect of the evolution of the black hole spin due to the
accretion (spin up) and the Blandford-Znajek mechanism (spin down)
has been studied in Moderski \& Sikora (1996).
Lee, Brown \& Wijers (2002) studied the case of GRBs produced after the progenitor star
has been spun up in a close binary system due to spiral-in and tidal coupling.
Recently, Volonteri et al. (2005) and
King \& Pringle (2006) calculated the spin evolution of supermassive
black holes due to random accretion episodes.
In our model, the black hole spin evolution
is not episodic, but a continuous process. The calculations of the
BH angular momentum evolution are left for future work.
Such calculations may possibly show that obtaining a large BH spin
parameter, $a\sim 0.9$, is rather difficult when a large fraction of
the envelope material has $l \ll l_{\rm crit}$.
\section*{Acknowledgments}
We thank Phil Armitage for useful comments.
This work was supported by NASA under grant NNG05GB68G.
\def\section{\@startsection{section}{1}{\z@}{1.5ex plus 0.5ex minus
1.2ex}{1.3ex plus .1ex}{\normalsize\bf}}
\def\subsection{\@startsection{subsection}{2}{\z@}{1.5ex plus 0.5ex minus
1.2ex}{1.3ex plus .1ex}{\normalsize\em}}
\def\@sect#1#2#3#4#5#6[#7]#8{\ifnum #2>\c@secnumdepth
\def\@svsec{}\else
\refstepcounter{#1}\edef\@svsec{\ifnum #2=1 \@sectname\fi
\csname the#1\endcsname.\hskip 1em }\fi
\@tempskipa #5\relax
\ifdim \@tempskipa>\z@
\begingroup #6\relax
\@hangfrom{\hskip #3\relax\@svsec}{\interlinepenalty \@M #8\par}
\endgroup
\csname #1mark\endcsname{#7}\addcontentsline
{toc}{#1}{\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}\fi
#7}\else
\def\@svsechd{#6\hskip #3\@svsec #8\csname #1mark\endcsname
{#7}\addcontentsline
{toc}{#1}{\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}\fi
#7}}\fi
\@xsect{#5}}
\def\@sectname{}
\def\thebibliography#1{\section*{{{\normalsize
\bf References }
\rule{0pt}{0pt}}\@mkboth
{REFERENCES}{REFERENCES}}\list
{{\arabic{enumi}.}}{\settowidth\labelwidth{{#1}}%
\leftmargin\labelwidth \frenchspacing
\advance\leftmargin\labelsep
\itemsep=-0.2cm
\usecounter{enumi}}
\def\hskip .11em plus .33em minus -.07em{\hskip .11em plus .33em minus -.07em}
\sloppy
\sfcode`\.=1000\relax}
\def\@cite#1#2{\unskip\nobreak\relax
\def\@tempa{$\m@th^{\hbox{\the\scriptfont0 #1}}$}%
\futurelet\@tempc\@citexx}
\def\@citexx{\ifx.\@tempc\let\@tempd=\@citepunct\else
\ifx,\@tempc\let\@tempd=\@citepunct\else
\let\@tempd=\@tempa\fi\fi\@tempd}
\def\@citepunct{\@tempc\edef\@sf{\spacefactor=\the\spacefactor\relax}\@tempa
\@sf\@gobble}
\def\citenum#1{{\def\@cite##1##2{##1}\cite{#1}}}
\def\citea#1{\@cite{#1}{}}
\newcount\@tempcntc
\def\@citex[#1]#2{\if@filesw\immediate\write\@auxout{\string\citation{#2}}\fi
\@tempcnta\z@\@tempcntb\m@ne\def\@citea{}\@cite{\@for\@citeb:=#2\do
{\@ifundefined
{b@\@citeb}{\@citeo\@tempcntb\m@ne\@citea\def\@citea{,}{\bf ?}\@warning
{Citation `\@citeb' on page \thepage \space undefined}}%
{\setbox\z@\hbox{\global\@tempcntc0\csname b@\@citeb\endcsname\relax}%
\ifnum\@tempcntc=\z@ \@citeo\@tempcntb\m@ne
\@citea\def\@citea{,}\hbox{\csname b@\@citeb\endcsname}%
\else
\advance\@tempcntb\@ne
\ifnum\@tempcntb=\@tempcntc
\else\advance\@tempcntb\m@ne\@citeo
\@tempcnta\@tempcntc\@tempcntb\@tempcntc\fi\fi}}\@citeo}{#1}}
\def\@citeo{\ifnum\@tempcnta>\@tempcntb\else\@citea\def\@citea{,}%
\ifnum\@tempcnta=\@tempcntb\the\@tempcnta\else
{\advance\@tempcnta\@ne\ifnum\@tempcnta=\@tempcntb \else \def\@citea{--}\fi
\advance\@tempcnta\m@ne\the\@tempcnta\@citea\the\@tempcntb}\fi\fi}
\def\abstract{\if@twocolumn
\section*{Abstract}
\else \small
\begin{center}
{ABSTRACT\vspace{-.5em}\vspace{0pt}}
\end{center}
\quotation
\fi}
\def\if@twocolumn\else\endquotation\fi{\if@twocolumn\else\endquotation\fi}
\def\fnum@figure{Fig. \thefigure}
\long\def\@makecaption#1#2{
\vskip 10pt
\setbox\@tempboxa\hbox{\small #1. #2}
\ifdim \wd\@tempboxa >\hsize
\small #1. #2\par
\else
\hbox to\hsize{\hfil\box\@tempboxa\hfil}
\fi}
\def\epsilon{\epsilon}
\defg_{\rm w}{g_{\rm w}}
\def\alpha_{\rm w}{\alpha_{\rm w}}
\defM_{\rm w}{M_{\rm w}}
\def{\rm tr}{{\rm tr}}
\defT_{\rm c}{T_{\rm c}}
\def \,\, \vcenter{\hbox{$\buildrel{\displaystyle <}\over\sim$}} \,\,{ \,\, \vcenter{\hbox{$\buildrel{\displaystyle <}\over\sim$}} \,\,}
\def \,\, \vcenter{\hbox{$\buildrel{\displaystyle >}\over\sim$}} \,\,{ \,\, \vcenter{\hbox{$\buildrel{\displaystyle >}\over\sim$}} \,\,}
\begin {document}
\begin {flushright}
UW-PT-94-13 \\
October 1994
\end {flushright}
\title{THE ELECTROWEAK PHASE TRANSITION, PART 1\\
Review of Perturbative Methods%
\footnote{
Talk presented at the conference Quarks `94: Vladimir, Russia, 1994.
This work was supported by the U.S. Department of Energy,
Grant No.\ DE-FG06-91ER40614.
}%
}
\author{
Peter Arnold\\
{\em Dept. of Physics, FM-15, Univ. of Washington, Seattle, WA 98115, USA}
}
\maketitle
\setlength{\baselineskip}{14pt}
\vspace{0.7in}
The goal of this talk, and the following one by Larry Yaffe, will be to
investigate the order and strength of the electroweak phase transition.
I will review what can be done with standard perturbative methods and
how such methods sometimes break down in cases of interest. In part 2,
Larry Yaffe will discuss the application of $\epsilon$ expansion techniques
to study those cases where standard perturbative methods fail.
The reason for studying the electroweak phase transition is that it
plays a crucial role in scenarios of electroweak baryogenesis.
Recall Sakharov's three conditions for any scenario of baryogenesis:
(1) baryon number violation, (2) C and CP violation, and (3) disequilibrium.
As I shall briefly review, standard electroweak theory provides
the required violation of baryon number. The standard model also
provides C and CP violation, though the strength of such violation
may not be sufficient to generate the observed baryon excess unless the
Higgs sector is non-minimal---an issue discussed in other talks at this
conference. As I shall discuss later, the role of the electroweak phase
transition is to provide the required disequilibrium, and its success in
this role depends on the order and strength of the transition.
In our talks, Larry and I will stick to a simple toy model: the minimal
standard model with a single doublet Higgs. I call this a toy model because
one probably needs to incorporate extra Higgs bosons into the theory to
have sufficient CP violation for electroweak baryogenesis. But multiple
Higgs models have all sorts of unknown parameters in them, which makes it
difficult to plot results in any simple way. It makes sense to first
refine one's techniques for studying the phase transition in the simpler
one-Higgs model. With a little bit of work, everything we do should be
straightforwardly extendable to more complicated models. For simplicity,
we shall also generally ignore $\sin^2\theta_{\rm w}$ and focus on pure
SU(2) gauge-Higgs theory.
\section{Lightning review of electroweak B violation}
I shall take a moment to quickly review baryon number (B) violation in
standard electroweak theory;%
\footnote{
For a sample of various reviews of the subject, try
ref.~\protect\citenum{b violation}.
For a review of electroweak baryogenesis, try
ref.~\protect\citenum{baryogenesis}.
}
the formula for the B violation rate
will later be relevant to motivating some of the important issues concerning
the electroweak phase transition.
\begin {figure}
\vbox
{%
\begin {center}
\leavevmode
\def\epsfsize #1#2{0.35#1}
\epsfbox [150 300 500 500] {anomaly.ps}
\end {center}
\caption
{%
\label {anomaly}
The triangle anomaly for the baryon number current in electroweak
theory.
}%
}%
\end {figure}
Baryon number violation is a bit strange in standard electroweak theory
because it can't happen perturbatively. All of the vertices in a Feynman
diagram conserve quark number; whenever a quark line enters a diagram,
it remains a quark line and must eventually leave again, thus conserving
baryon number. However, baryon number is violated {\it non}-perturbatively
due to the electroweak anomaly shown in fig.~\ref{anomaly}. This anomaly is
closely analogous to the usual axial anomaly of QCD or QED. In the
electroweak case, however, the
axial nature of the anomaly appears in the gauge couplings rather than
in the current. The formula for the anomaly is the same as
in the QCD case,
\begin {equation}
\partial_\mu j^\mu \sim g_{\rm w}^2 F \tilde F \,,
\end {equation}
except that the field strengths $F$ are for the weak SU(2) fields rather
than the gluon fields. Integrating both sides gives a formula for the
amount of baryon number violation in any process:
\begin {equation}
\Delta B \sim g_{\rm w}^2 \int d^4x \, F \tilde F \,.
\label {del B eq}
\end {equation}
\begin {figure}
\vbox
{%
\begin {center}
\leavevmode
\def\epsfsize #1#2{0.90#1}
\epsfbox [130 320 480 480] {sphaleron.ps}
\end {center}
\caption
{%
\label {sphaleron}
Qualitative picture of gauge configuration space for a B violating
transition.
}%
}%
\end {figure}
Note that, in order to get a $\Delta B$ of order 1, the field strengths
$F$ must be of order $1/g_{\rm w}$. So any process which violates baryon number
involves large, non-perturbative excursions away from the vacuum $F=0$.
Also note that large field strengths imply large energies, and so any
transition with $\Delta B \sim 1$ requires passing
through intermediate gauge field configurations with non-negligible energy.
This situation is depicted schematically in fig.~\ref{sphaleron}.
The horizontal
axis denotes the sequence of gauge field configurations a particular
process passes through when violating baryon number via
(\ref{del B eq}); the vertical axis denotes the potential energy of
those configurations. $E_0$ is the potential energy barrier separating
the initial gauge vacuum $A=0$ from the final gauge vacuum, which is just
a gauge transform. The configuration corresponding to the minimum
potential energy barrier for this process is known as the sphaleron.
At zero energy, the only way to get from one vacuum to the next, and so
produce B violation through the anomaly, is by quantum tunneling.
Because the barrier is non-perturbatively large, the probability for such
tunneling is exponentially suppressed and turns out to be
\begin {equation}
\hbox{rate} \sim e^{-4\pi/\alpha_{\rm w}} \sim 10^{-170} = \hbox{zero} \,.
\end {equation}
Imagine instead the early universe at temperatures large compared to the
barrier energy $E_0$. Such a hot thermal bath will excite states with
energies large compared to $E_0$, and these can cross the barrier classically
rather than tunneling beneath it. The B violation rate will {\it not} be
exponentially suppressed. Now consider an intermediate situation where the
universe is hot but the temperature is smaller than $E_0$. Then there's
still some chance that a random thermal fluctuation will have enough energy
to cross the barrier, and this probability is naively given by a simple
Boltzmann factor:
\begin {equation}
\hbox{rate} \sim e^{-\beta E_0} \sim e^{-\beta M_{\rm w}/\alpha_{\rm w}} \,,
\label {B rate}
\end {equation}
where $\beta$ is the inverse temperature. The estimate of $E_0$ above
may be understood from (1) the earlier observation that field strengths must be
order $1/g_{\rm w}$, which means energies are $1/g_{\rm w}^2$, and (2) the fact that
$E_0$ has dimensions of mass and $M_{\rm w}$ is the natural mass scale of
electroweak theory:
\begin {equation}
E_0 \sim M_{\rm w}/\alpha_{\rm w} \sim \hbox{a few TeV} \,.
\end {equation}
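For orientation: with $M_{\rm w} \approx 80$ GeV and
$\alpha_{\rm w} \approx 1/30$, this estimate gives
$M_{\rm w}/\alpha_{\rm w} \approx 2.4$ TeV (up to the order-one
coefficient I'm suppressing).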
All of this can be made more rigorous, and the numerical coefficients
in these equations can be deduced, but the simple parameter dependence
I am exhibiting here will be all that I'll use for this talk.
Note that my general sloppiness in writing equations extends even to
exponents: the last exponent in (\ref{B rate}) has some numerical
coefficient in it which I haven't bothered to show.
\section {Disequilibrium}
In GUT scenarios for baryogenesis, all the relevant physics occurs at
temperatures of order $10^{16}$ GeV and the expansion of the universe
directly provides the disequilibrium needed for baryogenesis.
In electroweak scenarios for baryogenesis, the relevant physics occurs
at temperatures of order the weak scale, when the universe is very much
older, expanding very much more slowly, and so very much closer to
equilibrium. Back of the envelope estimates show that the expansion is
then far too slow to produce the observed number of baryons.
But there is other physics that is taking place around the same time---namely,
the electroweak phase transition---and this transition can potentially supply
the needed element of disequilibrium. If the transition is second order,
the universe never departs significantly from equilibrium during the
transition. However, if it is first order (and
sufficiently strongly so), then it proceeds by the nucleation, expansion,
and percolation of bubbles of the new phase inside the old---a non-equilibrium
process. Each point in space will feel a non-equilibrium jolt as a bubble
wall sweeps by converting it from the symmetric phase ($\phi{=}0$) to
the asymmetric phase ($\phi\not=0$), and so baryogenesis in these scenarios
takes place on (or near) the bubble walls. Back of the envelope estimates
have shown that, for some models of the Higgs sector, one has a first-order
phase transition and can get in the
ballpark of postdicting the observed baryon-to-photon ratio
$n_{\rm B}/s \sim 10^{-10}$.
To sharpen these back of the envelope estimates, there are many interesting
but complicated problems one could study. How do you accurately compute
the bubble wall profile?\ the bubble wall velocity?\ the amount of baryogenesis
as the wall sweeps by? All of these non-equilibrium problems are complicated,
and so I'm going to focus instead on a simpler problem relevant to the
success or failure of electroweak baryogenesis scenarios.
\section{A simpler constraint on models}
After the phase transition is completed, and the universe settles down into
the new, asymmetric phase with $\phi\not=0$, it had better be the case
that baryon number violation is turned off. Otherwise, the universe will
simply relax back to its preferred equilibrium state of $B=0$ and all of
the baryogenesis that occurred during the phase transition will be washed
away. To turn off B violation, we need the rate $e^{-\beta E_0}$ to
be small compared to the expansion rate of the universe, which means
the exponent
\begin {equation}
\beta E_0 \sim M_{\rm w}/\alpha_{\rm w} T \sim g_{\rm w}\phi/\alpha_{\rm w} T
\end {equation}
\begin {figure}
\vbox
{%
\begin {center}
\leavevmode
\def\epsfsize #1#2{0.60#1}
\epsfbox [140 300 500 550] {boltzmann.ps}
\end {center}
\caption
{%
\label {figB}
The Boltzmann exponent for baryon number violation vs.\ the
zero-temperature Higgs mass.
}%
}%
\end {figure}%
must be {\it large} in the asymmetric phase just after the phase transition
is completed. In the minimal standard model, a leading-order calculation
of this exponent (which I will review in a moment) depends qualitatively
on the zero-temperature Higgs boson mass as shown in fig.~\ref{figB}.
I will explain why the exponent depends inversely on the Higgs mass, but
for the moment let's consider the consequences.
A comparison of the B violation rate to the expansion rate of the universe
was made by Shaposhnikov\cite{Shaposhnikov}
and later improved by Dine {\it et al.}\cite{Dine}
Using a leading-order perturbative calculation,%
\footnote{
``Leading'' order here means leading order after improvement by
resumming hard thermal loops (daisies).
}
the requirement that B
violation be turned off
puts a lower bound on the exponent and hence an upper bound on the Higgs
mass, as depicted in the figure.
For the minimal standard model, this bound on the Higgs mass is
roughly 35--40 GeV, which is ruled out by the experimental lower bound
of 60 GeV.
So minimal standard model baryogenesis appears to fail.
If one makes the Higgs sector more complicated, it is possible to evade
these bounds, which is yet another reason to study multiple Higgs models.
However, the situation is more complicated than I have just made it out
to be. As I shall discuss, the {\it leading}-order calculation used to
derive these constraints may be inadequate and higher-order corrections
may be crucial. But first, let me outline how the
leading-order calculation is made.
\section{The leading-order calculation}
Consider the classical Higgs potential:
\begin {equation}
V_0 \sim -\mu^2\phi^2 + \lambda\phi^4 \,.
\label {V0 eq}
\end {equation}
$V_0$ basically tells us the ``vacuum'' energy as a function of $\phi$,
and at zero temperature the ground-state is determined by minimizing it.
At finite temperature, the ground state is determined by minimizing the
free energy. At finite temperature, the system is not in vacuum---there
is a plasma of real, on-shell particles such as W's, quarks, and leptons,
and all contribute to the free energy. For the sake of pedagogy, let me
just focus on the contribution of W's in the plasma, and for the time
being let's ignore interactions. The free energy of an ideal
Bose gas is something you can easily look up in a graduate textbook:
\begin {equation}
\Delta F \sim T \int d^3 k \, \ln\left(1 - e^{-\beta E_k}\right) \,,
\end {equation}
where the relativistic energy is just
\begin {equation}
E_k = \sqrt{\vec k^2 + M_{\rm w}^2} \sim \sqrt{\vec k^2 + g^2\phi^2} \,.
\end {equation}
Note that the W mass is proportional to $g\phi$ in a background Higgs
field $\phi$, and so the W gas contribution $\Delta F$ to the free energy
is a function of $\phi$. At high temperature ($T \gg M_{\rm w}$), $\Delta F$
can be expanded in powers of $1/T$ to give
\begin {equation}
\Delta F \sim \# T^4 + \# M_{\rm w}^2 T^2 - \# M_{\rm w}^3 T + \cdots \,,
\label{W gas}
\end {equation}
where I haven't bothered showing the numerical coefficients \#.
Henceforth I won't bother writing in the \# signs either.
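For reference (and to pin down at least one set of those \#'s): for a
single bosonic degree of freedom, the standard high-temperature
expansion is
\begin {equation}
\Delta F = -{\pi^2 T^4 \over 90} + {M^2 T^2 \over 24}
- {M^3 T \over 12\pi} + \cdots \,,
\end {equation}
so all the suppressed coefficients in (\ref{W gas}) are $O(1)$ once the
W polarization and isospin states are counted.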
To get the total free energy, we just add the ``vacuum'' contribution
(\ref{V0 eq}) and the W gas contribution (\ref{W gas}):
\begin {equation}
F \sim V_0 + \Delta F \sim
\hbox{const.} + (-\mu^2+g^2 T^2)\phi^2 - g^3 \phi^3 T + \lambda\phi^4
+ \cdots \,.
\label {F eq}
\end {equation}
The $\phi$-independent term ``const.''\ doesn't affect the properties of
the transition and will be ignored. The $g^2 T^2 \phi^2$ term comes from
the $M_{\rm w}^2 T^2$ term of (\ref{W gas}) and is responsible for driving the
phase transition: it turns the curvature of the free energy at $\phi=0$ from
concave-down at $T=0$ to concave-up at sufficiently large $T$.
The cubic term $-g^3 \phi^3 T$ comes from the $-M_{\rm w}^3 T$ term and is
responsible for making the phase transition first-order, as depicted in
fig.~\ref{Vfig}, rather than second-order.
\begin {figure}
\vbox
{%
\begin {center}
\leavevmode
\def\epsfsize #1#2{0.51#1}
\epsfbox [150 260 500 530] {Vfig.ps}
\end {center}
\caption
{%
\label {Vfig}
The form of the free energy, as a function of $\phi$, for
different temperatures.
}%
}%
\end {figure}
We are now in a position to estimate the order of magnitude (or more
specifically, the parameter dependence) of quantities related to the phase
transition. Examine fig.~\ref{Vfig} and consider the critical temperature at
which the two ground states are degenerate, and consider the region of
$\phi$ which, roughly, encompasses the maximum and asymmetric minimum
of $F(\phi)$.
The only way for the free energy to have that shape is if the
quadratic, cubic, and quartic terms of (\ref{F eq}) all have the same
order of magnitude in that region of $\phi$. If the quadratic term
were negligible, it wouldn't curve up at the origin; if the cubic term
were negligible, it wouldn't curve down later; and if the quartic term
were negligible, it wouldn't be turning up again. So
\begin {equation}
(-\mu^2 + g^2 T^2)\phi^2 \sim g^3 \phi^3 T \sim \lambda \phi^4 \,.
\end {equation}
The last relation, in particular, then easily yields
\begin {equation}
\phi \sim {g^3\over\lambda} T \,.
\end {equation}
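As a sanity check on this scaling, one can minimize the free energy
numerically. The sketch below is my own illustration, with every
suppressed coefficient set to one, so agreement is only up to such
O(1) factors; the quadratic term is tuned slightly below its critical
value so that the asymmetric minimum is the global one:
\begin{verbatim}
import numpy as np

g, lam, T = 0.6, 0.03, 1.0        # illustrative couplings, lam << g^2
b, c = g**3 * T, lam              # cubic and quartic coefficients
a = 0.999 * b**2 / (4 * c)        # quadratic term just below T_c value

phi = np.linspace(0.0, 10 * b / c, 200001)
F = a*phi**2 - b*phi**3 + c*phi**4
print(phi[np.argmin(F)], g**3 * T / lam)  # 3.6 vs 7.2: phi ~ (g^3/lam) T
\end{verbatim}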
Now let's return to the rate of B violation in the asymmetric phase.
The Boltzmann exponent
$\beta E_0$ is then
\begin {equation}
{M_{\rm w}\over g^2 T} \sim {\phi\over gT}
\sim {g^2\over\lambda}
\sim {M^2({\rm W})_{T=0} \over m^2({\rm Higgs})_{T=0}} \,.
\label{B rate exponent}
\end {equation}
This is how one finds the inverse dependence on the Higgs mass that
was depicted in fig.~\ref{figB}.
\section{Review of finite-temperature formalism}
Our next goal will be to discuss the validity of the above leading-order
treatment, where the W bosons were treated as an ideal gas.
Discussing corrections due to interactions requires a little more careful
treatment of finite temperature calculations, and so in this section I
shall very briefly review the formalism of finite-temperature field theory.
Recall that the basic tool for studying equilibrium questions at finite
temperature is the partition function:
\begin {equation}
Z = {\rm tr} \, e^{-\beta H} \,.
\end {equation}
A path integral expression for the partition function may be easily found
by noting that $e^{-\beta H}$ is nothing but the time-evolution operator for
an imaginary amount of time $t = i\beta$. So, in the exact same way one
derives the path integral for infinite-time evolution to attack
zero-temperature problems, one may derive the completely analogous result
\begin {equation}
Z = \int [{\cal D}\phi] \exp\left[ -\int\nolimits_0^\beta d\tau
\int d^3 x {\cal L}_{\rm E}(\phi) \right] \,,
\end {equation}
where ${\cal L}_{\rm E}$ is the Euclidean action density.
The only difference is that the integral in the exponent is over Euclidean
time $\beta$ rather than over infinite real time. Also, the trace in
the partition function is implemented by requiring the ``initial'' state
be the same as the ``final'' state, which amounts to requiring the boundary
condition
\begin {equation}
\phi(\tau{=}0, \vec x) = \phi(\tau{=}\beta, \vec x)
\end {equation}
on the path integral.
Since the only difference between finite temperature and zero temperature is
the extent of Euclidean time, the only difference in Feynman
rules will be that, when integrating over internal momenta $k$, only the
frequencies $k_0 = 2\pi n T$ which are periodic in time $\beta$ are relevant.
So Fourier integrals get replaced by Fourier sums, and the only difference
in the Euclidean Feynman rules is the replacement
\begin {equation}
\int d^4 k \to T \sum_{k_0} \int d^3 k \,,
\qquad
k_0 = 2\pi n T \,,
\label {momentum integral}
\end {equation}
for integrations over internal momenta. Note for later the factor of
$T$ in front of the Fourier series sum, which makes the dimensions the
same as $d^4 k$.
Now consider what happens in the large temperature limit $\beta\to 0$.
In this case, the extent of the (Euclidean) temporal dimension shrinks to
zero. So the four-dimensional theory reduces to an effective
three-dimensional theory of the static ($k_0=0$) modes.
(More precisely, this reduction takes place if we are studying equilibrium
quantities at distance scales large compared to $\beta$.)
I should note that fermions turn out to have {\it anti}-periodic boundary
conditions and so cannot have any static modes.
As a result, fermions completely decouple from long-distance, equilibrium
physics in the high-temperature limit.
The introduction of temperatures into Feynman rules via
(\ref{momentum integral}) may seem a bit formal. However, it turns out
to have a fairly simple physical interpretation when one does actual
calculations. If one carries out the Euclidean frequency sum for a
simple one-loop integral, one finds%
\footnote{
For a review, try ref.~\citenum{Kapusta}.
}
\begin {equation}
\hbox{
\setlength{\unitlength}{0.12in}
\begin {picture}(45,10)
\thicklines
%
\put(2,5){\circle{4}}
\put(2,3){\line( 1,-1){2}}
\put(2,3){\line(-1,-1){2}}
\put(3,7.6){\vector(-1,0){2}}
\put(2,8){$k$}
%
\put(8.5,4.7){=}
%
\put(16,5){\circle{4}}
\put(16,3){\line( 1,-1){2}}
\put(16,3){\line(-1,-1){2}}
\put(14.7,4.7){$T{=}0$}
\put(17,7.6){\vector(-1,0){2}}
\put(16,8){$k$}
%
\put(20.5,4.7){+}
%
\put(24,4){$\displaystyle{
\int {d^3 k\over (2\pi)^3 2E_k} \, {1 \over e^{\beta E_k} - 1}
}$}
%
\put(40,4){\line( 1,-1){2}}
\put(40,4){\line(-1,-1){2}}
\put(40,4){\line(-3, 2){3.5}}
\put(40,4){\line( 3, 2){3.5}}
\put(37,6.8){\vector(3,-2){2}}
\put(41,5.4){\vector(3, 2){2}}
\put(36,6.8){$k$}
\put(43.4,6.8){$k$}
%
\put(45,3){.}
\end {picture}
}
\label {finite T display}
\end {equation}
The first term on the right-hand side denotes the zero-temperature result.
The second term---which contains all the temperature dependence---is nothing
more than the amplitude for the external particle to forward scatter off of
a real, physical particle present in the thermal bath:
the $1/(e^{\beta E_k}-1)$ is the Bose probability for finding such a particle,
and the $d^3 k/(2\pi)^3 2E_k$ is just the usual measure for phase space.
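This decomposition is easy to verify on the simplest example: for a
single propagator, the frequency sum (\ref{momentum integral}) gives
$T \sum_{k_0} (k_0^2+E_k^2)^{-1} = [1 + 2/(e^{\beta E_k}-1)]/2E_k$,
where the ``1'' is the zero-temperature piece and the Bose factor is
the forward-scattering piece. A quick numerical check (my own sketch):
\begin{verbatim}
import numpy as np

def matsubara_sum(E, T, nmax=200000):
    n = np.arange(-nmax, nmax + 1)
    return T * np.sum(1.0 / ((2*np.pi*n*T)**2 + E**2))

E, T = 1.0, 2.0
closed = (1.0 + 2.0/(np.exp(E/T) - 1.0)) / (2.0*E)
print(matsubara_sum(E, T), closed)   # both ~2.0415
\end{verbatim}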
\section{Loop Expansion Parameter}
\label{loop expansion section}
We're now in a position to discuss when leading-order calculations
are adequate at finite temperature. The basic cost of adding a loop
to a diagram at high temperature is\footnote{
More precisely, this is the cost once one has absorbed hard thermal
loops (daisies) into propagators, which is something I'll briefly
discuss later.
}
\begin {equation}
{g^2 T\over \hbox{physics scale}} \sim {g^2 T \over M_{\rm w}} \,.
\label {loop parameter}
\end {equation}
The $g^2$ is just the usual cost of extra coupling constants.
The factor of $T$ is the explicit factor of $T$ from the Fourier
sum (\ref{momentum integral}) associated with the additional loop
momentum.
But the cost of adding a loop should be dimensionless,
so $g^2 T$ must be divided by whatever gives the mass scale of the
problem---in this case, $M_{\rm w}$. Because of the factor of $T/M_{\rm w}$,
the loop expansion parameter is not necessarily small at high temperature!
The criterion that the loop expansion parameter be small, and that
therefore a perturbative expansion around mean-field theory be
useful, is known to condensed matter physicists as the
{\it Ginzburg} criterion.
The loop expansion parameter (\ref{loop parameter}) is very important,
so let's understand it in several different ways.
First, consider adding a loop with internal momentum $k$ to any
diagram, denoted by a shaded circle below. As discussed earlier,
the effect of finite temperature on loops is to incorporate the
physics of particles forward-scattering off of real particles in the plasma:
\def\blob{
\begin{picture}(2,2)(0.35,0)
\thicklines
\put(0,0){\circle{2}}
\thinlines
\put(-0.7,-0.7){\line(1,1){1.3}}
\put(-0.9,0.0){\line( 1, 1){1.0}}
\put( 0.9,0.0){\line(-1,-1){1.0}}
\put(-0.85,-0.4){\line( 1, 1){1.2}}
\put( 0.85, 0.4){\line(-1,-1){1.2}}
\end{picture}
}
\begin {equation}
\setlength{\unitlength}{0.15in}
\begin {picture}(40,15)
\thicklines
\put(5,11){\oval(4,2.5)[t]}
\put(4,11){\oval(2,2.5)[bl]}
\put(6,11){\oval(2,2.5)[br]}
\put(5,9.5){\blob}
\put(6,12.8){\vector(-1,0){2}}
\put(5,13.2){$k$}
\put(9.5,10.2){$\sim$}
\put(13,10){$\displaystyle{
\int {d^3 k\over (2\pi)^3 2E_k} \, {1 \over e^{\beta E_k} - 1}
}$}
\put(28,10){\blob}
\put(27.2,10.3){\line(-3,2){2.5}}
\put(28.8,10.3){\line( 3,2){2.5}}
\put(24.8,12.6){\vector(3,-2){2}}
\put(29.2,11.2){\vector(3, 2){2}}
\put(24,12.6){$k$}
\put(31.6,12.6){$k$}
\put(9.5,4.2){$\sim$}
\put(8.3,3.2){{\small large T}}
\put(13,4){$\displaystyle{
\int {d^3 k\over (2\pi)^3 2E_k} \, ~~~{T\over E_k}
}$}
\put(28,4){\blob}
\put(27.2,4.3){\line(-3,2){2.5}}
\put(28.8,4.3){\line( 3,2){2.5}}
\put(24.8,6.6){\vector(3,-2){2}}
\put(29.2,5.2){\vector(3, 2){2}}
\put(24,6.6){$k$}
\put(31.6,6.6){$k$}
\put(32,3){.}
\end {picture}
\end {equation}
The only temperature dependence in the last line is the explicit factor
of $T$, and so the loop expansion parameter is proportional to $g^2 T$
as before.
The rest of the expression in the last line must give something
determined by the mass scale of the problem (assuming the diagram is
sufficiently convergent in the ultraviolet) and is dominated by
$E_k \sim M_{\rm w}$.
So the origin of the large
$T/M_{\rm w}$ factor in the loop expansion parameter is simply the
divergent behavior of the Bose factor $1/(e^{\beta E_k}-1)$ as
$E_k \to 0$; there are a large number of low-energy bosons present
in a high-temperature plasma.
Let's understand the loop expansion parameter in yet another way.
As mentioned earlier, the high-temperature limit $\beta\to 0$
reduces the four-dimensional Euclidean theory to an effective
three-dimensional theory of the static ($k_0=0$) modes.
Restricting attention to the static modes, the integrand of the
path integral then has the form
\begin {equation}
e^{-S_{\rm E}}
= \exp\left[ -{1\over g^2} \int\nolimits_0^\beta d\tau
\int d^3 x {\cal L}_{\rm E} \right]
\to \exp\left[ - {1\over g^2 T} \int d^3x {\cal L}_{\rm E} \right] \,.
\label {3d reduction}
\end {equation}
In the first equality, I have normalized the fields so that the
coupling constant appears explicitly out in front of the action as
$1/g^2$. When I specialize to static field configurations, the
Euclidean time integration becomes trivial, giving a factor of $1/T$.
But now we see that $g^2$ always appears in the combination $g^2 T$.
Then dimensional analysis gives us the loop expansion parameter
(\ref{loop parameter}) as before.
Note that for pure, unbroken gauge theory (where there is no Higgs $\phi$
to give a mass to the W),
(\ref{3d reduction}) shows that the only scale in the theory would be
$g^2 T$ itself. If we were to study the physics at that scale,
then the loop expansion parameter given by the left-hand side of
(\ref{loop parameter}) would be order one; the physics is strongly-coupled
even though $g^2$ is small. This is known as the infrared
(or ``magnetic mass'') problem of high-temperature non-Abelian gauge
theory.
\section{Why life is not simple}
We're now finally in a position to discuss under what conditions
perturbation theory might be adequate to study the electroweak phase
transition. Our loop expansion parameter (\ref{loop parameter}) is
nothing other than the inverse of the B violation rate exponent
(\ref{B rate exponent}), and so we may borrow the earlier analysis
of the parameter dependence:\footnote{
For a more detailed discussion of the loop expansion, see
ref.~\citenum{Arnold&Espinosa}.
}
\begin {equation}
{g^2 T\over M_{\rm w}}
\sim {\lambda\over g^2}
\sim {m^2({\rm Higgs})_{T=0} \over M^2({\rm W})_{T=0}} \,.
\label {loop parameter 2}
\end {equation}
A basic result of this review, which should be remembered for Larry Yaffe's
talk on the $\epsilon$ expansion, is then:
\begin {center}
THE LOOP EXPANSION WORKS WHEN $\lambda\ll g^2$.
\end {center}
Now one can wonder how well perturbation theory is doing at the
upper bound $m$(Higgs) = 35 GeV that we earlier discussed for
electroweak baryogenesis in the minimal standard model. Is 35 GeV
small compared to the W mass of 80 GeV? The answer, of course, depends
on all the factors of 2 and $\pi$ that were left out of the
simple-minded ``$\sim$'' equalities presented in this review.
But there's a way to check it. One can (1) formally assume
$\lambda{\ll}g^2$, (2) explicitly compute the next-to-leading order
({\it i.e.}\ two-loop) correction to the free-energy $F(\phi)$, and then
(3) see if the correction is numerically large for a 35 GeV Higgs.
I should note in passing that from (\ref{loop parameter 2}) one sees
that my constant use of the high-temperature limit $T{\gg}M_{\rm w}$ is
justified provided $g^4{\ll}\lambda$. So the calculation actually
formally assumes
\begin {equation}
g^4 \ll \lambda \ll g^2 \,,
\end {equation}
where the first inequality is for the high-temperature expansion and
the second for the loop expansion.
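To get a feeling for the numbers, here is the arithmetic (my own, using
the tree-level relations $M_{\rm W} = gv/2$ and $m^2({\rm Higgs}) =
2\lambda v^2$ with $v = 246$ GeV; the ``$\sim$'' relations above drop
exactly these kinds of factors):
\begin{verbatim}
v = 246.0                      # Higgs vev in GeV (tree level)
m_h, m_w = 35.0, 80.0          # GeV
g2  = (2.0*m_w/v)**2           # from M_W = g v / 2     -> 0.42
lam = m_h**2/(2.0*v**2)        # from m_H^2 = 2 lam v^2 -> 0.010
print(lam/g2, (m_h/m_w)**2)    # 0.024 vs 0.19: O(1) factors matter
\end{verbatim}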
\begin {figure}
\vbox
{%
\begin {center}
\leavevmode
\def\epsfsize #1#2{0.51#1}
\epsfbox [150 250 500 550] {figa.ps}
\end {center}
\caption
{%
\label {figa}
The effective potential at the critical temperature for
$m_{\rm h}(0)$ = 35 GeV and $m_{\rm t}(0)$ = 110 GeV.
The dashed and solid lines
are the one-loop and two-loop results respectively.
[Why a 110 GeV top mass? Because this is an old graph.
But the results aren't particularly sensitive to $m_{\rm t}$.]
}%
}%
\end {figure}
The details may be found in refs.~\citenum{Arnold&Espinosa} and
\citenum{Bagnasco&Dine}, but the result of the calculation
is shown in fig.~\ref{figa}.
Both the one-loop and two-loop results for $F(\phi)$
are shown at the temperature that makes their minima degenerate. The
value of $\phi$ in the asymmetric ground state has only shifted by about
20\%. On the other hand, the height of the hump has shifted by almost a
factor of three! The moral is that for {\it some} quantities,
{(35 GeV)/(80 GeV)} is {\it not} small, and perturbation theory cannot
necessarily be trusted.%
\footnote{
There are a variety of caveats to this conclusion. First of all, I have
not shown you the corrections to any {\it physical} quantity. Beyond
leading-order, the exact value of the height of the hump (which becomes
complex), does not have any obvious physical interpretation, and the
expectation of $\phi$ is gauge-dependent. (The 2-loop result shown was
computed in Landau gauge.) There are plenty of examples in the world
where perturbation theory is under reasonable numerical control for
physical quantities but not for unphysical ones.
~~~~In addition, Farakos {\it et al.}\cite{Farakos} believe
that using the renormalization
group to improve some logarithmic corrections may bring perturbative
calculations under numerical control, though some of those authors believe
perturbation theory fails disastrously for different reasons, as discussed by
Shaposhnikov in this conference.
}
In particular, we have no way to know {\it a priori}
whether the B violation rate might not be a quantity which gets
substantial corrections. (Though the B violation rate exponent is
proportional to $\phi$ at leading order, this simple relation does not
hold beyond leading order.)
\section{More on the reduction to 3 dimensions}
When discussing the loop expansion parameter back in section
\ref{loop expansion section}, I assumed that the relevant
``physics scale'' for any diagram was determined by particle masses.
This is true if diagrams are ultraviolet (UV) convergent and so dominated
by their infrared behavior. However, here's a quadratically divergent
diagram, for which it's not true:
\begin {equation}
\matrix{
\buildrel k \over \longleftarrow \cr
\hbox{
\def\epsfsize #1#2{0.10#1}
\epsfbox[150 150 500 700]{eq1.ps}
}\cr
}
\qquad = \qquad \hbox{($T{=}0$ stuff)}
~+~ g^2 T^2
\label {Debye mass}
\end {equation}
[Note: as usual, I'm leaving out numerical coefficients and just showing the
order of magnitude of terms.]
As in (\ref{finite T display}), the finite-temperature piece of this
diagram comes from interactions with real particles of momentum $k$ in
the plasma. The quadratic divergence is then cut-off by $T$ for this
piece because there are no such particles with $k{\gg}T$. This gives
the result of order $g^2 T^2$ indicated above, and the diagram is dominated
by loop momenta $k$ of order $T$.
Because the important momentum scale for this diagram is $T$ (and not the
particle masses), this diagram is sensitive to the Euclidean temporal
direction. That is, $k_0 \not= 0$ modes are {\it not} suppressed in
UV divergent diagrams. But this is the usual story for the decoupling of
heavy degrees of freedom in field theory. At large distances compared
to $\beta=1/T$, the Euclidean time dimension decouples {\it except} for
renormalizations of the masses and couplings of the theory.
The $g^2 T^2$ contribution to (\ref{Debye mass}) is just the renormalization
of the scalar mass in matching the original four-dimensional theory to
the effective three-dimensional theory at long distances.
The way to systematically construct the effective three-dimensional theory,
and to relate its parameters to those of the more fundamental four-dimensional
theory, is to compute the effective interactions among the $k_0{=}0$ modes
generated by integrating out the $k_0{\not=}0$ modes.
For instance, diagrams like
\begin {equation}
\vcenter{
\hbox{
\def\epsfsize #1#2{0.10#1}
\epsfbox[150 150 500 700]{eq2a.ps}
}
}
\qquad \hbox{give} \qquad
m_3^2 = m_4^2 + g^2 T^2 ~~+~~ \hbox{higher-order} \,,
\end{equation}
where the double-lines indicate the non-static $k_0{\not=}0$ modes,
$m_4$ is the scalar mass in the fundamental four-dimensional theory,
and $m_3$ is the scalar mass in the effective three-dimensional theory.%
\footnote{
For more details of the reduction from four to three dimensions in the
particular context of the electroweak phase transition, see
ref.~\citenum{Farakos}.
}
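The $T^2$ behind this matching is easy to reproduce by hand: the
thermal piece of the one-loop mass shift is proportional to
$\int d^3k\,(2\pi)^{-3}\, n_{\rm B}(k)/k = T^2/12$ for a massless
boson, dominated by $k \sim T$. A quick numerical confirmation (my own
sketch; the precise coefficient in front of $g^2 T^2$ depends on the
field content and is suppressed here as usual):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T = 1.0
val, _ = quad(lambda k: k / (np.exp(k / T) - 1.0), 1e-9, 60.0 * T)
print(val / (2.0 * np.pi**2), T**2 / 12.0)  # both 0.08333...
\end{verbatim}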
A similar thing happens for the temporal polarization of the photon.
Diagrams like
\begin {equation}
\vcenter{
\hbox{
\def\epsfsize #1#2{0.22#1}
\epsfbox[150 300 500 500]{eq2b.ps}
}
}\
\qquad \hbox{give} \qquad
M_3^2(A_0) = g^2 T^2 ~~+~~ \cdots \,.
\label {debye mass}
\end{equation}
This is the Debye screening mass for static electric fields in a hot
plasma. Coupling constants also get contributions from the $k_0{\not=}0$
modes, such as
\begin {equation}
\vcenter{
\hbox{
\def\epsfsize #1#2{0.13#1}
\epsfbox[150 200 500 600]{eq2c.ps}
}
}
\qquad \hbox{giving} \qquad
\lambda_3 = \lambda_4(T) ~~+~~ \cdots \,,
\end{equation}
where $\lambda_4(T)$ is the four-dimensional coupling evaluated at
renormalization scale $T$.
There is also an effective $\phi^6$ interaction, which is a marginal
operator in three dimensions:
\begin {equation}
\vcenter{
\hbox{
\def\epsfsize #1#2{0.15#1}
\epsfbox[150 150 500 650]{eq2d.ps}
}
}
\qquad\qquad
\hbox{marginal.}
\end {equation}
Other interactions generated by integrating out the heavy modes are
irrelevant---that is, they decouple as powers of the physics scale over $T$:
\begin {equation}
\vcenter{
\hbox{
\def\epsfsize #1#2{0.13#1}
\epsfbox[150 150 500 650]{eq2e.ps}
}
}
\qquad\qquad
\hbox{irrelevant operators.}
\end {equation}
The most important point to be made about this matching of the
four-dimensional to an effective three-dimensional theory is that
the matching is {\it perturbative} in $g^2$ (and $\lambda$).
To integrate out the $k_0{\not=}0$ modes is to account for physics
whose scale is set by $T$ (the inverse size of the temporal dimension)
and not by particle masses $M$. The loop expansion parameter for this
integration is therefore
\begin {equation}
{g^2 T \over \hbox{scale}} ~~~\sim~~~ g^2 \,,
\label {match}
\end {equation}
and everything is under control. It is only when one goes to
solve the effective three-dimensional theory of the $k_0{=}0$ modes
that one encounters the infrared enhancements that gave the potentially large
loop expansion parameter of section \ref{loop expansion section}.
I should also make a few remarks about choice of scale in the
regularization of the three-dimensional theory. For simplicity, imagine
a simple UV momentum cut-off $\Lambda$. To get the effective
three-dimensional theory, we now integrate out not only the
$k_0{\not=}0$ modes but also all $|\vec k|{>}\Lambda$
for the $k_0{=}0$ modes. For the latter integration,
the loop expansion parameter is
\begin {equation}
{g^2 T \over \hbox{scale}} ~~~\sim~~~ {g^2 T \over \Lambda} \,.
\end {equation}
The natural choice for $\Lambda$ is $T$---the scale at which we're doing
the matching. If we picked $\Lambda$ to be very small, the matching
would no longer be well-behaved perturbatively. The moral is that
the matching is simple and perturbatively calculable provided one
chooses a renormalization scale of order $T$.
\begin{figure}
\setlength{\unitlength}{0.12in}
\begin {picture}(45,40)
\thicklines
%
\put(10,1){\line(1,0){1}}
\put(10,1){\line(0,1){34}}
\put(11,1){\line(0,1){34}}
\put(10,35){\line(-1,0){1}}
\put(11,35){\line( 1,0){1}}
\put(9,35){\line( 3,4){1.4}}
\put(12,35){\line(-3,4){1.4}}
\put(8.8,39){energy}
\put(9.2,37.7){scale}
%
\put(9,4){\line(1,0){4}}
\put(6,3.7){$g^2 T$}
\put(15,3.7){scale where loop expansion fails;}
\put(15,2.4){confinement scale of 3-dim.\ nonabelian gauge theory}
%
\put(9,19){\line(1,0){4}}
\put(6,18.7){$g T$}
\put(1,18.7){$(m_{A_0^{}})$}
\put(15,18.7){Integrate out $A_0$.}
%
\put(9,31){\line(1,0){4}}
\put(6,30.7){$T$}
\put(15,30.7){Integrate out 4th dimension.}
%
\put(20,35){(3+1) dim.\ theory of $A_\mu$, $\phi$, $\psi$}
\put(20,24){3 dim.\ theory of $A_0$, $\vec A$, $\phi$}
\put(20,14){3 dim.\ theory of $\vec A$, $\phi$}
%
\put( 6 ,10){\line(1,0){1.5}}
\put( 8.5,10){\line(1,0){1.5}}
\put(11 ,10){\line(1,0){1.5}}
\put(13.5,10){\line(1,0){1.5}}
\put(16 ,10){\line(1,0){1.5}}
\put(18.5,10){\line(1,0){1.5}}
\put(21 ,10){\line(1,0){1.5}}
\put(25,9.7){scale of phase transition}
\put(25,8.4){if $m$(Higgs) not too big}
\end {picture}
\caption {Hierarchy of scales and effective theories. (I have shown the
phase transition between $gT$ and $g^2T$. If the Higgs mass is light
enough so that $\lambda{\ll}g^3$, then it would instead be above the
$gT$ line.)}
\label {hierarchy fig}
\end {figure}
Fig.~\ref{hierarchy fig} shows the hierarchy of important energy scales present
at finite temperature. At scale $T$, we integrate out the fourth dimension
to get an effective 3-dimensional theory. In the process, $A_0$ picks up
a Debye screening mass (\ref{debye mass}) of order $gT$. So, to study
physics below $gT$, one should integrate out $A_0$ as well.
The final effective theory at large distances is then a 3-dimensional
gauge theory of $\vec A$ and the Higgs field $\phi$.
At the scale $g^2 T$, the loop expansion parameter becomes strong and
the theory is no longer perturbatively solvable.
If $\lambda{\ll}g^2$, the scale associated with the phase transition will
be larger than $g^2 T$ and one can apply perturbative techniques.
\section {Breakdown at $\phi\sim 0$%
\protect\footnote{
This section contains material I didn't have time to cover in my talk
in Vladimir.
}}
\begin {figure}
\vbox
{%
\begin {center}
\leavevmode
\setlength{\unitlength}{0.15in}
\begin {picture}(20,14)
\put(0,0){
\def\epsfsize #1#2{0.51#1}
\epsfbox [150 250 500 500] {break.ps}
}
\put(1.7,0.7){\vector(-1,1){1}}
\put(2.3,0.1){$\displaystyle{{g^2 T\over M} \,\, \vcenter{\hbox{$\buildrel{\displaystyle >}\over\sim$}} \,\, 1}$}
\put(8,9){$\displaystyle{
{g^2 T\over M} \sim {\lambda\over g^2} }$}
\end {picture}
\end {center}
\caption
{%
\label {break}
The uncertainties in $F(\phi)$ in different regions of $\phi$.
Perturbation theory is controlled by $\lambda/g^2$ in the region
of the hump and the asymmetric minimum, designated qualitatively
by the solid line, but it breaks down close to the origin.
The size of the problem region is small when $\lambda/g^2$ is
small.
}%
}%
\end {figure}
Assume for the moment that $\lambda/g^2$ is arbitrarily small.
When I made the original estimate (\ref{loop parameter 2}) that the
loop expansion parameter $g^2 T/M_{\rm w}$ is of order $\lambda/g^2$, I assumed
that $\phi$ was the same order of magnitude as its value in the
asymmetric ground state. However, since $M_{\rm w} \sim g\phi$, the loop
expansion parameter $g^2 T/M_{\rm w}$ must eventually get large as I approach
the symmetric ground-state $\phi{=}0$, no matter how small
$\lambda/g^2$ is. This situation is depicted in fig.~\ref{break}.
For small $\lambda/g^2$, there will be a small region around
$\phi{=}0$ where perturbation theory breaks down and our calculation of
the free energy is uncertain.
How big is this uncertainty? In particular, it propagates
into the computation of the critical temperature
$T_{\rm c}$ at which the two ground states become degenerate, and $T_{\rm c}$
affects every other property of the transition we might compute.
There's a simple way to estimate the magnitude
of our ignorance of the free energy $F$ in the symmetric phase.%
\footnote{
For more on this topic, try ref.~\citenum{gpy}.
}
In that
phase, we have an unbroken 3-dimensional gauge theory. As discussed in
section \ref{loop expansion section}, the only parameter of the theory is
then $g^2 T$. So, by dimensional analysis, the 3-dimensional action
density is then
\begin {equation}
{1\over V} \ln Z \sim (g^2 T)^3 \,,
\end {equation}
and so
\begin {equation}
\hbox{uncertainty in $F(0)$} ~~~\sim~~~ g^6 T^4 \,.
\end {equation}
Now compare this to the accuracy of a perturbative calculation in the
{\it asymmetric} phase, where the loop expansion parameter is small.
The uncertainty in $F(0)$ turns out to be comparable to the accuracy
of a {\it four}-loop calculation of $F$ in the asymmetric phase.
(For details on the power-counting, see
sec.~II of ref.~\citenum{Arnold&Espinosa}.)
So, in general, a perturbative treatment of the phase transition is
useful when $\lambda/g^2$ is small, but it is useful only up to a
certain order in $\lambda/g^2$.
\section {Summary}
The following are the elements of this talk that you need to remember
for Larry Yaffe's talk on the $\epsilon$ expansion.
\begin{itemize}
\item The hard part of studying the phase transition is solving a
three-dimensional theory of $\vec A$ and $\phi$.
\item That theory has a simple relationship to the original $d{=}4$
couplings if it is defined at a renormalization scale $\Lambda\sim T$:
\begin {eqnarray}
m_3^2 &\sim& m_4^2 + g^2 T^2 \,,
\\
g_3 &\sim& g_4(T) \,,
\\
\lambda_3 &\sim& \lambda_4(T) \,.
\end {eqnarray}
\item The theory can be studied with straightforward perturbation theory
(at least to some order) when
\begin {equation}
\lambda \ll g^2 \,.
\end {equation}
\end {itemize}
\section{Introduction}
Full-duplex transmission is the communication scheme where bidirectional communication is carried out over the same temporal and spectral resources~\cite{Ref1}-\cite{Ref11}. The main limitation impacting full-duplex transmission is managing the strong self-interference signal imposed by the transmit antenna on the receive antenna within the same transceiver. Throughout the literature, several combinations of passive and active self-interference cancellation schemes have been proposed~\cite{Ref1}-\cite{Ref7}, aiming to mitigate the self-interference signal below the noise level. However, the experimental results in~\cite{Ref1}-\cite{Ref5} have demonstrated that complete self-interference elimination is not possible in current full-duplex systems, mainly due to a combination of system imperfections, especially radio circuits' impairments.
In order to understand the system limitations, several recent publications~\cite{Ref10}-\cite{Ref13r} have considered the problem of full-duplex transmission to investigate the impact of radio circuit impairments on the system performance and explore the system bottlenecks. More specifically, the results in~\cite{Ref13} show that, due to the large power differential between the self-interference signal and the signal-of-interest, system nonlinearity becomes one of the main factors that limit self-interference mitigation capability. Generally, system nonlinearity introduces in-band nonlinear distortion to the transmitted and received signals. Most of the existing self-interference cancellation schemes ignore the nonlinearity effect, which limits the amount of cancellable self-interference power to the distortion level.
In this paper, we consider the problem of self-interference cancellation in full-duplex orthogonal frequency division multiplexing (OFDM) systems in the presence of four main radio impairments: (i) transmitter and receiver nonlinearity, (ii) transmitter and receiver oscillator phase noise, (iii) receiver Gaussian noise, and (iv) analog-to-digital converter (ADC) quantization noise. A digital-domain self-interference cancellation scheme that accounts for the transmitter and receiver nonlinearity effect is proposed. The proposed scheme increases the amount of cancellable self-interference power by suppressing the nonlinear distortion associated with the received self-interference signal.
Suppressing the nonlinear distortion requires the self-interference channel as well as nonlinearity coefficients to be estimated. However, due to the presence of the nonlinear distortion while the self-interference channel is being estimated, the channel estimation error will be distortion limited. To overcome this problem, we propose an iterative technique to jointly estimate the self-interference channel and the nonlinearity coefficients required to perform self-interference cancellation and distortion suppression. The performance of the proposed scheme is numerically investigated and compared against the case of a linear full-duplex system. The results show that after three to four iterations, the nonlinear distortion is significantly suppressed such that the proposed scheme achieves a performance that is less than 0.5dB off the performance of a linear full-duplex system.
The remainder of the paper is organized as follows. In Section II, the signal model is presented. The proposed scheme is introduced in Section III. Simulation results and discussions are presented in Section IV. Finally, Section V presents the conclusion.
\section{Signal Model}
Figure~\ref{Fig1Label} illustrates a block diagram for a full-duplex OFDM transceiver, where the transmitter and the receiver are operating simultaneously over the same carrier frequency. At the transmitter side, the base-band signal is modulated using an OFDM modulator and then up-converted to the carrier frequency $f_c$, then amplified using a power amplifier. The oscillator at the transmitter side is assumed to have a random phase error represented by $\phi^t(t)$. At the receiver side, the amplitude of the received signal is properly adjusted using a low-noise amplifier (LNA). The signal is then down-converted from the carrier frequency to the base-band. The down-conversion mixer is assumed to have a random phase error represented by $\phi^r(t)$. The base-band signal is then quantized and converted to the frequency domain using Fourier transform.
\begin{figure}[t]
\begin{center}
\noindent
\includegraphics[width=3.5in,trim= 0in 0in 0in 0in]{figure1.pdf}
\caption{Block diagram of a full-duplex OFDM transceiver.\label{Fig1Label}}
\end{center}
\end{figure}
In practical systems, the main sources of the system nonlinearity are the power amplifier at the transmitter side and the LNA at the receiver side. In this paper, we consider both the power amplifier and LNA nonlinearities. Generally, for any nonlinear block, the output signal $y$ can be written as a polynomial function of the input signal $x$ as follows~\cite{Ref14}
\begin{equation}\label{eq:1}
y = \sum_{m=0}^{M-1} \alpha_{m+1} x^{m+1} \text{.}
\end{equation}
It can be shown that for practical wireless systems~\cite{Ref14}, only the odd orders of the polynomial contribute to the in-band distortion. Furthermore, only a limited number of orders contribute to the distortion and higher orders could be neglected. In practical systems, the nonlinearity is typically characterized by the third-order intercept point (IP3), which is defined as the point at which the power of the third harmonic is equal to the power of the first harmonic~\cite{Ref15}. Accordingly, in this paper we limit our analysis to the third-order nonlinearity where the output of any nonlinear block can be simplified as
\begin{equation}\label{eq:2}
y = x+\alpha_3 x^3 \text{,}
\end{equation}
assuming a unity linear gain (i.e. $\alpha_1 = 1$).
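For illustration, the model~\eqref{eq:2}, applied at both the
transmitter and the receiver side, can be sketched in a few lines of
Python (the channel taps and $\alpha_3$ magnitudes here are
placeholders rather than values from this paper; for complex base-band
signals the in-band cubic term $x^3$ is implemented as $x|x|^2$):
\begin{verbatim}
import numpy as np

def nonlinear(x, alpha3):
    # Memoryless third-order model of eq. (2); for complex base-band
    # signals the in-band cubic term is x*|x|^2.
    return x + alpha3 * x * np.abs(x)**2

rng = np.random.default_rng(0)
x = (rng.standard_normal(1024) + 1j*rng.standard_normal(1024))/np.sqrt(2)
h = np.array([0.9, 0.1 + 0.05j])          # toy 2-tap SI channel
tx = nonlinear(x, 0.01)                   # power-amplifier distortion
rx = nonlinear(np.convolve(tx, h)[:1024], 0.005)  # LNA distortion
\end{verbatim}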
Following the block diagram in Figure~\ref{Fig1Label} and using the small-phase approximation $e^{j\phi}\approx 1+j\phi$ for $\phi \ll 1$, the base-band representation of the received signal at the ADC output can be written as
\begin{equation}\label{eq:3}
y_n = x_n^I*h_n^I + x_n^S*h_n^S + d_n + \phi_n + q_n + z_n \text{,}
\end{equation}
where '$*$' denotes convolution, $n$ is the sample index, $x^I$, $x^S$ are the transmitted self-interference signal and signal-of-interest respectively, $h^I$, $h^S$ are the self-interference and signal-of-interest channels, $d_n$ is the total transmitter and receiver nonlinear distortion, $\phi_n$ is the total phase noise, $q_n$ is the ADC quantization noise, and $z_n$ is the receiver Gaussian noise. The receiver Gaussian noise represents the noise inherent in the receiver circuits, and is usually specified by the circuit noise figure, which is implicitly a function of the LNA gain~\cite{Ref15}.
Using the nonlinearity model in~\eqref{eq:2}, and ignoring the nonlinearity associated with the signal of interest because of its small power compared to the self-interference signal, the total distortion $d_n$ can be written as
\begin{equation}\label{eq:34}
d_n = \underbrace{\alpha_3^t \left(x_n^I \right)^3 * h_n^I}_{\text{Transmitter nonlinearity}} + \underbrace{\alpha_3^r\left(x_n^I*h_n^I + \alpha_3^t \left(x_n^I \right)^3 * h_n^I \right)^3}_{\text{Receiver nonlinearity}} \text{,}
\end{equation}
where $\alpha_3^t$, $\alpha_3^r$ are the transmitter and receiver third-order nonlinearity coefficients. Expanding~\eqref{eq:34} we get
\begin{eqnarray}\label{eq:4}
d_n &=& \alpha_3^t \left(x_n^I \right)^3 * h_n^I + \alpha_3^r \left(x_n^I*h_n^I \right)^3 \nonumber \\
& & + 3 \alpha_3^t \alpha_3^r \left(x_n^I*h_n^I \right)^2 \left(\left(x_n^I \right)^3*h_n^I \right) \nonumber \\
& & + 3 \alpha_3^r \left(x_n^I*h_n^I \right) \left(\alpha_3^t \left(x_n^I \right)^3*h_n^I \right)^2 \nonumber \\
& & + \left(\alpha_3^t \left(x_n^I \right)^3*h_n^I \right)^3 \text{.}
\end{eqnarray}
According to~\eqref{eq:34}, the main difference between the transmitter and receiver nonlinearity is that the transmitter nonlinearity affects the signal only while the receiver nonlinearity affects both the signal and the wireless channel. Also it has to be noted that, although only 3$^{rd}$ order harmonics are considered at both transmitter and receiver sides, the coexistence of the transmitter and receiver nonlinearity introduces 5$^{th}$, 7$^{th}$, and 9$^{th}$ order harmonics (the 3$^{rd}$, 4$^{th}$, and 5$^{th}$ terms in~\eqref{eq:4}). The 7$^{th}$ and 9$^{th}$ order harmonics are much smaller than other harmonics, thus can be ignored. Accordingly, the distortion signal can be simplified as
\begin{eqnarray}\label{eq:5}
d_n &=& \alpha_3^t \left(x_n^I \right)^3 * h_n^I + \alpha_3^r \left(x_n^I*h_n^I \right)^3 \nonumber \\
& & + 3 \alpha_3^t \alpha_3^r \left(x_n^I*h_n^I \right)^2 \left(\left(x_n^I \right)^3*h_n^I \right) \text{.}
\end{eqnarray}
Finally, the received frequency-domain signal can be written as
\begin{equation}\label{eq:6}
Y_k = X_k^I H_k^I + X_k^S H_k^S + D_k + \Phi_k + Q_k + Z_k \text{,}
\end{equation}
where $k$ is the subcarrier index, and upper-case notation refers to the discrete Fourier transform (DFT) of the corresponding time-domain signals.
In order to show the significance of each noise term, the system is simulated using parameter values for a practical wireless transceiver~\cite{Ref16}. Figure~\ref{Fig2Label} shows the strength of each noise source at different received self-interference signal strengths. The results show that the nonlinear distortion is the main limiting factor, followed by the phase noise, then the receiver Gaussian noise and the quantization noise.
\begin{figure}[t]
\begin{center}
\noindent
\includegraphics[width=3.3in,trim= 0in 0in 0in 0in]{figure2.pdf}
\caption{Noise powers at different received self-interference signal strengths for the transceiver in~\cite{Ref16}.\label{Fig2Label}}
\end{center}
\end{figure}
\section{Self-interference cancellation with distortion suppression}
The results in Figure~\ref{Fig2Label} imply that eliminating the nonlinear distortion increases the self-interference mitigation capability. According to~\eqref{eq:5}, distortion elimination requires the knowledge of the self-interference channel ($h^I$) as well as the nonlinearity coefficients ($\alpha_3^t$, $\alpha_3^r$). In the proposed scheme, the self-interference channel is estimated using an orthogonal training sequence at the beginning of each transmission frame. The estimated channel, along with the knowledge of the self-interference signal ($x^I$), is then used to estimate the nonlinearity coefficients.
The main problem is that due to the presence of the distortion signal at the training time, the channel estimation error will be limited by the distortion signal, which impacts the estimation accuracy and thus the overall cancellation performance. To overcome this problem, we propose an iterative technique to jointly estimate the self-interference channel and the nonlinearity coefficients. The proposed technique consists of four main steps: (i) an initial estimate for the self-interference channel ($\hat{H}_k^I$) is obtained, (ii) the estimated channel is used to estimate the nonlinearity coefficients ($\alpha_3^t$, $\alpha_3^r$), (iii) the estimated coefficients are used to construct an estimate for the distortion signal $\hat{D}_k$, and (iv) the estimated distortion signal $\hat{D}_k$ is subtracted from the received signal. The four steps are then repeated for a number of iterations. An illustrative block diagram for the proposed iterative technique is shown in Figure~\ref{Fig3Label}.
After channel and nonlinearity coefficient estimation, the self-interference signal ($X_k^I \hat{H}_k^I$) and the distortion signal ($\hat{D}_k$) are subtracted from the received signal at each data OFDM symbol to construct the interference-free signal. In the following subsections, a detailed analysis of the channel and nonlinearity coefficient estimation techniques is presented.
\begin{figure}[t]
\begin{center}
\noindent
\includegraphics[width=3in,trim= 0in 0in 0in 0in]{figure3.pdf}
\caption{Block diagram for the iterative channel and nonlinearity coefficients estimation technique.\label{Fig3Label}}
\end{center}
\end{figure}
\subsection{Channel estimation}
It has to be noted that for the iterative technique in Figure~\ref{Fig3Label} to work properly, the mean square error of the channel estimation should be less than the distortion power; otherwise the performance will be limited by the channel estimation error and there will be no gain achieved by the iterative technique. The DFT-based channel estimation technique proposed in~\cite{Ref17} is one of the low-complexity channel estimation techniques that achieve a relatively small mean square error. In this technique, first, an estimate for the channel impulse response (CIR) is obtained using the least square (LS) estimator as follows
\begin{equation}\label{eq:7}
\hat{h}_n^{LS} = \mathsf{IDFT} \left\{\frac{Y_k}{X_k} \right\} \text{.}
\end{equation}
Then, by leveraging the fact that the channel information is contained in the first $L$ samples of the CIR, a better estimate for the channel is obtained by taking the first $L$ samples of $\hat{h}_n^{LS}$ while forcing other samples to zero as follows
\begin{equation}\label{eq:8}
\hat{h}_n = \left\{
\begin{array}{c l}
\hat{h}_n^{LS} & \text{,\ \ } 0 \leq n \leq L-1 \text{,}\\
0 & \text{,\ \ otherwise} \text{,}
\end{array}\right.
\end{equation}
then
\begin{equation}\label{eq:9}
\hat{H}_k = \mathsf{DFT} \left\{\hat{h}_n \right\} \text{.}
\end{equation}
By doing this, the estimation error is reduced by a factor of $\frac{L}{N}$, where $N$ is the number of subcarriers per OFDM symbol. The key challenge in such a technique is the choice of $L$. Since the cyclic prefix in practical systems is designed to be larger than the channel length, a good choice is to set $L$ equal to the cyclic prefix length.
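A minimal Python sketch of this estimator is given below (our
illustration; $X$ is the known frequency-domain training symbol and $L$
would be set to the cyclic prefix length as discussed above):
\begin{verbatim}
import numpy as np

def dft_channel_estimate(Y, X, L):
    # eqs. (7)-(9): LS estimate, truncate the CIR to the first L taps,
    # return to the frequency domain; error drops by a factor ~L/N.
    N = len(Y)
    h_ls = np.fft.ifft(Y / X)    # eq. (7)
    h_ls[L:] = 0.0               # eq. (8)
    return np.fft.fft(h_ls, N)   # eq. (9)
\end{verbatim}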
\subsection{Nonlinearity coefficients estimation}
At the self-interference training symbol, the signal-of-interest is not present. Therefore, Equation~\eqref{eq:3} can be written as
\begin{equation}\label{eq:10}
y_n = x_n^I*h_n^I + d_n + \phi_n + q_n + z_n \text{.}
\end{equation}
Since the transmitted self-interference signal $x_n^I$ and the self-interference channel $\hat{h}_n$ are now known, the problem in~\eqref{eq:10} can be recognized as a linear estimation problem with the unknown coefficients $[\alpha_3^t, \alpha_3^r, 3\alpha_3^t\alpha_3^r]$.
Rewriting~\eqref{eq:10} in a matrix form we get
\begin{equation}\label{eq:11}
\left[ \begin{array}{c} \bar{y}_1 \\ \bar{y}_2 \\:\\ \bar{y}_N \end{array} \right] =
\underbrace{\begin{bmatrix} A_1 & B_1 & C_1 \\ A_2 & B_2 & C_2\\:&:&: \\ A_N & B_N & C_N \end{bmatrix}}_{W}
\left[ \begin{array}{c} \alpha_3^t \\ \alpha_3^r \\ 3\alpha_3^t\alpha_3^r \end{array} \right]
+ \left[ \begin{array}{c} \eta_1 \\ \eta_2 \\:\\ \eta_N \end{array} \right] \text{,}
\end{equation}
where $\bar{y}_n=y_n-x_n^I*\hat{h}_n^I$, $\eta_n = \phi_n+q_n+z_n$, $A_n = (x_n^I)^3*\hat{h}_n^I$, $B_n = (x_n^I*\hat{h}_n^I)^3$, and $C_n = (x_n^I*\hat{h}_n^I)^2((x_n^I)^3*\hat{h}_n^I)$. Rewriting~\eqref{eq:11} in compact form, we get
\begin{equation}\label{eq:12}
\bar{y} = W \alpha + \eta \text{.}
\end{equation}
An estimate for the nonlinearity coefficients $\alpha$ can be found using the LS estimator as
\begin{equation}\label{eq:13}
\hat{\alpha} = \left(W^H W\right)^{-1} W^H \bar{y} \text{.}
\end{equation}
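In practice, \eqref{eq:13} would be evaluated with a least-squares
solver rather than by forming the inverse explicitly. The following
sketch is illustrative only, with synthetic random signals standing in
for the $A_n$, $B_n$, $C_n$ of~\eqref{eq:11}:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
N = 256
A, B, C = (rng.standard_normal(N) + 1j*rng.standard_normal(N)
           for _ in range(3))
y_bar = 0.01*A + 0.005*B + 3*0.01*0.005*C  # noiseless eq. (12)
W = np.stack([A, B, C], axis=1)            # N x 3 basis matrix
alpha, *_ = np.linalg.lstsq(W, y_bar, rcond=None)
print(alpha)   # ~[0.01, 0.005, 1.5e-4]
\end{verbatim}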
The main problem with the LS estimator is that the matrix $W$ is often ill-conditioned, thus the matrix inversion will incur numerical errors. To overcome this problem, we propose a successive one-by-one estimation technique that avoids matrix inversion. The proposed technique is similar to the successive interference cancellation technique where one coefficient (e.g. $\alpha_3^t$) is estimated assuming that the other two are equal to zero. The estimated coefficient is multiplied by its corresponding signal and subtracted from the received signal, then the next coefficient is estimated. Since the third coefficient ($3\alpha_3^t\alpha_3^r$) is a function of the first two, estimating $\alpha_3^t$ and $\alpha_3^r$ is sufficient to get all three coefficients. Furthermore, for better estimation accuracy, the procedure can be iterated.
A common problem with any successive technique is the determination of the coefficient to start with. If there is prior knowledge about the relative strength of the transmitter and receiver nonlinearity, the optimum choice is to start with the coefficient that corresponds to the stronger nonlinearity. For example, if the transmitter nonlinearity is stronger than the receiver nonlinearity, the algorithm should start with $\alpha_3^t$, and vice versa. However, if there is no prior knowledge, a wrong starting point might result in performance degradation. In order to overcome this problem, the proposed algorithm selects the start coefficient based on the residual distortion power. In other words, the coefficient that results in the smaller residual distortion power will be selected as the start coefficient. The iterative successive nonlinearity coefficient estimation technique is summarized in algorithm~\ref{Alg1}. The equations in algorithm~\ref{Alg1} assume that $\alpha_3^t$ is selected as the start coefficient. Finally, it has to be mentioned that, to compute $A_n$, $B_n$, and $C_n$, up-sampling is required in order to prevent aliasing.
\begin{algorithm}
\caption{Successive nonlinearity coefficients estimation}
\label{Alg1}
{
\begin{algorithmic}[1]
\STATE set $\bar{y}_n = y_n - x_n^I*\hat{h}_n^I$.
\STATE Determine the start coefficient based on the residual distortion power
\FOR{certain number of iterations}
\STATE get $\hat{\alpha}_3^t=\frac{1}{N} \sum_{n=0}^{N-1}\frac{\bar{y}_n}{A_n}$.
\STATE set $\bar{y}_n = y_n - x_n^I*\hat{h}_n^I - \hat{\alpha}_3^t A_n$.
\STATE get $\hat{\alpha}_3^r=\frac{1}{N} \sum_{n=0}^{N-1}\frac{\bar{y}_n}{B_n}$.
\STATE set $\bar{y}_n = y_n - x_n^I*\hat{h}_n^I - \hat{\alpha}_3^r B_n - 3\hat{\alpha}_3^r \hat{\alpha}_3^t C_n$.
\ENDFOR
\end{algorithmic}
}
\end{algorithm}
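A compact Python rendering of algorithm~\ref{Alg1} is sketched below
(assuming $\alpha_3^t$ has been selected as the start coefficient; here
\texttt{xh} denotes the linear replica $x_n^I*\hat{h}_n^I$, and
\texttt{A}, \texttt{B}, \texttt{C} are the up-sampled basis signals
defined below~\eqref{eq:11}):
\begin{verbatim}
import numpy as np

def successive_nl_estimation(y, xh, A, B, C, n_iter=3):
    y_bar = y - xh                                  # step 1
    a_t = a_r = 0.0
    for _ in range(n_iter):
        a_t = np.mean(y_bar / A)                    # step 4
        y_bar = y - xh - a_t*A                      # step 5
        a_r = np.mean(y_bar / B)                    # step 6
        y_bar = y - xh - a_r*B - 3*a_r*a_t*C        # step 7
    return a_t, a_r        # third coefficient is 3*a_t*a_r
\end{verbatim}
The start-coefficient selection of step 2 can be implemented by running
the same loop with the roles of \texttt{A} and \texttt{B} swapped and
keeping whichever ordering leaves the smaller residual power.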
\section{Simulation results and discussions}
In this section, the performance of the proposed cancellation scheme is numerically investigated under different operating conditions. The simulation setup is chosen as in the WiFi 802.11n standard~\cite{Ref18}. The indoor TGn channel model~\cite{Ref19} is used to model the self-interference and signal-of-interest channels. The self-interference and signal-of-interest channels' Rician factors are set to 30dB and 3dB, respectively. Two performance criteria are chosen: the achievable rate, and the residual interference plus distortion plus noise (RIDN) power. The RIDN is calculated as
\begin{equation}\label{eq:14}
\mathsf{RIDN} = X_k^I \left(H_k^I - \hat{H}_k^I \right) + \left(D_k - \hat{D}_k \right)+ \Phi_k+Q_k+Z_k \text{.}
\end{equation}
The proposed algorithm is compared to two cases: first, the case of a linear full-duplex system (the best case), where $D_k=0$; second, the case of a nonlinear full-duplex system where no distortion removal is performed (as assumed in most current cancellation schemes).
In the first simulation scenario, we investigate the performance of the proposed scheme under different transmitter and receiver nonlinearity distortion levels. The target is to evaluate the performance of the proposed scheme under all distortion scenarios: (i) transmitter distortion is greater than receiver distortion, (ii) receiver distortion is greater than transmitter distortion, and (iii) transmitter and receiver distortion are comparable. Figure~\ref{Fig4Label} shows the RIDN power at different transmitter and receiver distortion levels and a phase noise power of $-$70dBm. The top and bottom x-axes show the transmitter and receiver distortion values, respectively.
The conclusions from Figure~\ref{Fig4Label} are threefold: first, regardless of the distortion level, the proposed scheme is able to suppress the distortion to the level of the next bottleneck (e.g. phase noise in this case) and achieve a performance that is very close (less than 0.5dB difference) to the performance of a linear receiver. Second, when the difference between the distortion level and the level of the next bottleneck increases, the number of iterations required to suppress the distortion signal increases. The reason is that each iteration has a limited suppression gain controlled by the channel estimation error, thus more suppression requires more iterations. Finally, comparing the left side of Figure~\ref{Fig4Label} to the right side we note that, because the nonlinearity coefficient estimation algorithm adaptively selects the coefficient to start with, the proposed scheme performs the same way whether the transmitter distortion dominates the receiver distortion or vice versa.
\begin{figure}[t]
\begin{center}
\noindent
\includegraphics[width=3.3in,trim= 0in 0in 0in 0in]{figure4.pdf}
\caption{RIDN power at different distortion levels.\label{Fig4Label}}
\end{center}
\end{figure}
In the previous simulation scenario, the system was simulated in the case where the nonlinear distortion dominates the other noise components. For a complete performance evaluation, the performance is also investigated under different phase noise power levels in order to cover the case where the nonlinear distortion is not the limiting factor. Figure~\ref{Fig5Label} shows the RIDN power at different phase noise levels with a $-$45dBm transmitter and receiver distortion power. The results show that when another noise component dominates the nonlinear distortion, the proposed scheme achieves the same performance as the case where no distortion suppression is performed. In other words, the proposed scheme does not degrade the performance at low distortion levels.
In the following simulation scenario, the overall full-duplex system performance is investigated and compared to the corresponding half-duplex system performance. Figure~\ref{Fig6Label} shows the full-duplex and half-duplex systems' achievable rates at different half-duplex signal-to-noise ratios (SNR). Since half-duplex system performance is usually limited by the receiver Gaussian noise, the SNR is defined as the received signal-of-interest power divided by the receiver Gaussian noise power. The parameters for this simulation scenario are shown in the figure caption. The results show that when the nonlinear distortion dominates other noise components, performing distortion suppression using the proposed scheme significantly improves the full-duplex system's spectral efficiency and allows full-duplex systems to achieve a better rate than half-duplex systems in high-SNR scenarios.
\begin{figure}[t]
\begin{center}
\noindent
\includegraphics[width=3.3in,trim= 0in 0in 0in 0in]{figure5.pdf}
\caption{RIDN power at different phase noise levels.\label{Fig5Label}}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\noindent
\includegraphics[width=3.3in,trim= 0in 0in 0in 0in]{figure6.pdf}
\caption{Full-duplex and half-duplex achievable rates at received self-interference signal strength = $-$30dBm, normalized transmitter and receiver distortion power = $-$45dB, and normalized phase noise power = $-$60dB.\label{Fig6Label}}
\end{center}
\end{figure}
\section{Conclusion}
In this paper, a digital-domain self-interference cancellation scheme for full-duplex OFDM systems is proposed. The proposed scheme increases the amount of cancellable self-interference power by suppressing the distortion caused by the transmitter and receiver nonlinearity. The proposed scheme is able to suppress the nonlinear distortion to the level of the next significant noise component, and achieve a performance that is less than 0.5dB off the performance of a linear full-duplex system.
\section{Acknowledgment}
This work is partially supported by Qatar National Research Fund (QNRF) grant through National Priority Research Program (NPRP) No. 4-1119-2-427.
\input{References}
\end{document}
\section{\label{sec:intro}Introduction}
Nowadays gamma-ray sources are widely used in numerous applications
from medicine to nuclear physics. Such sources can be based not only
on radioactivity, bremsstrahlung~\cite{Wagner05} and synchrotron
emission in a magnetic field~\cite{Bilderback05}, but also on Compton
scattering of laser light off electron beams of conventional
accelerators~\cite{Weller09,Gibson10,Albert10}. The recent progress
in laser technologies and laser-plasma electron acceleration opens the
opportunity to eliminate a conventional electron accelerator from
the scheme and develop a compact all-optical Compton source of hard
photons~\cite{Ta12,*Powers14} or a laser-plasma synchrotron
source~\cite{Nerush07}. Laser-produced hot electrons can also
generate bremsstrahlung
photons~\cite{Kmetec92,*Norreys99,*Hatchett00}, however, the
efficiency of this generation mechanism is not high due to a low value
of the bremsstrahlung cross-section. Nevertheless, the abundant
positron production via decay of bremsstrahlung photons in thick
high-Z targets has been demonstrated~\cite{Chen09}.
Another mechanism of gamma-ray production is the nonlinear Compton
scattering induced directly by laser-driven plasma electrons. For a
laser field of ultrarelativistic intensity the nonlinear Compton
scattering is equivalent to synchrotron
radiation~\cite{Nikishov64,*Ritus85}, i.e. ultrarelativistic
plasma electrons emit gamma-rays during travelling along curved
trajectories. The resulting radiation losses take away a considerable
part of the electron energy and significantly affect electron motion
if the laser intensity becomes greater than $10^{23} \text{
W} \, \text{cm}^{-2}$ (for optical wavelengths), the intensity of
the so-called radiation-dominated regime~\cite{Bulanov04,*Esarey93}.
Recent results on the generation of laser pulses with high intensity
(up to $2\times 10^{22} \text{ W}\, \text{cm}^{-2}$, see
Ref.~\onlinecite{Yanovsky08}) and a number of proposals for
multipetawatt and exawatt laser facilities~\cite{Mourou06,*Di12}
stimulate theoretical study of laser-plasma synchrotron sources.
E.g., generation of gamma-ray bunches in the interaction of a laser
pulse with a tailored overcritical density target is investigated on
the basis of theoretical analysis and 2D particle-in-cell simulations
with the radiation friction force incorporated~\cite{Nakamura12}. The
theoretical analysis is done for a circularly polarized laser pulse
propagating in an underdense plasma. The obtained analytical results
agree qualitatively with the results of simulations for
linear polarization. More complex plasma dynamics than assumed
in Ref.~\onlinecite{Nakamura12} is revealed in
Ref.~\onlinecite{Brady12} for an intense laser pulse interacting with
a relativistically underdense plasma. The 2D
simulation shows that a great portion of plasma electrons moves
towards the laser pulse during a noticeable part of an optical cycle.
During this time interval the intensity of the synchrotron radiation emitted
by the electrons peaks, which leads to the generation
of a $90^\circ$-wide gamma-ray radiation pattern directed towards the
laser pulse. Such back-and-forth electron motion (including the case of opaque
plasmas) is a key element of the high-harmonic generation
theory~\cite{Brugge10,*Gonoskov11,*Sanz12,*Bulanov13}, where a coherent part of
the synchrotron radiation from laser-irradiated solids is
investigated. It is interesting to note that in
Ref.~\onlinecite{Ridgers12} a 2D simulation of a laser pulse
interacting with a relativistically opaque plasma reveals a mostly
forward-directed gamma-ray radiation pattern.
Laser-plasma interactions potentially become even more complicated
at the intensity of $10^{24} \text{ W} \, \text{cm}^{-2}$, when
avalanche-like generation of gamma-quanta and electron-positron
($e^+e^-$) pairs can occur~\cite{Bell08, Fedotov10} due to consecutive
events of nonlinear Compton scattering and Breit--Wheeler process. The
produced $e^+e^-$ plasma can noticeably affect the interaction
process~\cite{Ridgers13} and can even cause a significant absorption of
the laser field~\cite{Fedotov10,Nerush11}. In the latter case a sizeable
portion of the initial laser energy is converted into the energy of
MeV photons ($30$\% in the simulation of Ref.~\onlinecite{Nerush11})
which can have anisotropic distribution in some
cases~\cite{Nerush11_2}.
In this paper we attempt to classify gamma-ray generation regimes and
examine the influence of plasma dynamics, ion acceleration and other
effects on the gamma-ray generation process. For this purpose we
perform a series of numerical simulations for a wide range of foil
densities and laser intensities, as described in Sec.~\ref{sec:map}.
Despite obvious limitations of such a consideration (e.g., we
restrict the simulations to a certain value of the foil thickness),
it allows us to determine the region of the most efficient gamma-ray
generation (see Sec.~\ref{sec:map}).
At ultrahigh intensities any light nucleus becomes fully ionized, hence
the ion charge-to-mass ratio depends only weakly on the material type;
thus, the presented picture of laser-foil interactions is largely
material-independent. Electron dynamics, generation of
electromagnetic fields and ion acceleration are considered in
Sec.~\ref{sec:electrons} and Sec.~\ref{sec:ions}. Though plenty of
important effects manifest themselves in the considered parameter region, it
turns out that gamma-ray generation is strongly affected by ion motion, and
the region of efficient gamma-ray generation approximately coincides
with the region of relativistic ion dynamics (see
Sec.~\ref{sec:ions}).
Having in mind possible applications of MeV photons, we focus on
gamma-ray angular distribution and spectrum along with gamma-ray
generation efficiency. Characteristics of gamma-ray bunches from
laser-irradiated foils obtained by means of simulations in a wide
range of parameters are discussed in Sec.~\ref{sec:electrons} and
Sec.~\ref{sec:sum}. In Sec.~\ref{sec:sum} two sets of laser-plasma
interaction parameters and the parameters of the resulting gamma-ray
sources are considered in detail. Namely, we discuss the gamma-ray
generation with normally incident $100 \text{ PW}$ laser pulse and
with obliquely incident $3 \text{ PW}$ laser pulse. In the first case
high laser-to-gamma-ray conversion efficiency ($9\%$) can be easily
got. In the second case tight focusing of the laser pulse and an accurate
choice of the plasma density and the incidence angle let obtain
reasonable conversion efficiency ($\sim 1\%$) along with quite high
directivity of a single-lobe radiation pattern. Possible applications
of the corresponding gamma-ray bunches are discussed and the
comparison with the existing gamma-ray sources is also given.
\section{\label{sec:map} Map of the source gain}
\begin{figure}
\includegraphics{map-gain.eps}
\caption{\label{fig:gain}The gain $\mathcal{G}$, i.e. the product of the radiation
pattern directivity and the gamma-ray generation efficiency, for
high-energy photons generated at the normal incidence of a $10 \text{
fs}$ linearly polarized laser pulse on a $0.91 \text{ }\upmu \text{m}$
thick plasma slab (see text for further details).}
\end{figure}
Three-dimensional numerical simulations allow us to calculate the gamma-ray
radiation pattern, i.e. the directional (angular) dependence of the
emitted energy integrated in time. We introduce the directivity $\mathcal{D}$ of the source as
the ratio of the radiation pattern maximum to its value in
the case of an isotropic distribution of the emitted gamma-rays. Since high directivity
of a gamma-ray beam is desirable for applications,
here we focus on the gain $\mathcal{G}$, defined as the product of the
directivity and the generation efficiency $\mathcal{E}$, where we
introduce $\mathcal{E}$ as the overall
energy of the gamma-rays divided by the initial energy of the laser
pulse. The dependence of the gain on the laser-plasma interaction
parameters is shown in Fig.~\ref{fig:gain}, where an interpolation
based on $120$ simulation runs is presented. Every
computed value corresponds to the normal incidence of a linearly
polarized laser pulse on a fully ionized thin foil. The normalized
amplitude of the laser pulse $a_0 = eE_0/mc\omega$ varies from $25$ to
$2500$, and the initial plasma density normalized to the critical
density $n_0 = n_e/n_{cr}$ lies in the range $15\text{--}745$, where $n_{cr} =
m \omega^2/ 4 \pi e^2$, $e>0$ and $m$ are the magnitude of the
electron charge and the electron mass, respectively, $\omega = 2\pi
c/\lambda$, $\lambda = 0.91 \text{ } \upmu \text{m}$ is the laser
wavelength and $c$ is the speed of light. The laser pulse has a
Gaussian envelope with duration $10 \text{ fs}$ and width $5.4 \text{
} \upmu \text{m}$ (both are measured as FWHM of the laser intensity),
the initial distance between irradiated foil boundary and the laser
pulse centre is $5 \text{ } \lambda$, the interaction time is $12
\lambda/c$, the foil thickness is $l = 1 \lambda$, the ion
mass-to-charge ratio is $M/Z = 2$, where $M$ is the ion mass
normalized to the proton mass and $Z$ is the ion charge normalized to
$e$.
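For illustration, the post-processing behind $\mathcal{D}$, $\mathcal{E}$
and $\mathcal{G}$ can be sketched as follows (the grid layout, the array
names and the placeholder values are our assumptions, not the actual
analysis code):
\begin{verbatim}
# Sketch: directivity D, efficiency E and gain G from a time-integrated
# radiation pattern dE/dOmega given on a regular (theta, phi) grid.
import numpy as np

n_th, n_ph = 180, 360
theta = np.linspace(0.0, np.pi, n_th)        # polar angle
phi = np.linspace(0.0, 2.0 * np.pi, n_ph)    # azimuthal angle
dE_dOmega = np.random.rand(n_th, n_ph)       # placeholder pattern
E_laser = 1.0e3                              # placeholder pulse energy

# Total emitted energy: integrate dE/dOmega over the full solid angle.
w = np.sin(theta)[:, None]
E_gamma = np.trapz(np.trapz(dE_dOmega * w, phi, axis=1), theta)

iso = E_gamma / (4.0 * np.pi)   # isotropic reference level
D = dE_dOmega.max() / iso       # directivity
E_eff = E_gamma / E_laser       # generation efficiency
G = D * E_eff                   # gain
\end{verbatim}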
The simulations are performed with a particle-in-cell (PIC) code
that takes into account incoherent emission and decay of hard photons
using the Monte Carlo (MC) technique. The separation of the electromagnetic
fields into low-frequency (laser field and its harmonics) and
high-frequency (gamma-ray) parts is possible due to a wide gap
between them: the spectrum of coherent harmonics emitted in laser-solid
interactions typically extends up to $1 \text{
keV}$~\cite{Brugge10,*Gonoskov11,*Sanz12}, and the characteristic energy of
incoherent photons produced by a laser pulse striking a solid is
greater or of the order of $1 \text{ MeV}$ if laser intensity is
greater than $10^{22} \text{ W} \, \text{cm}^{-2}$ (for
optical wavelengths).
The PIC part of the code that describes laser and plasma fields and
plasma dynamics is based on the same methods as the code
VLPL~\cite{Pukhov99}; however, we use Vay's particle
pusher~\cite{Vay08} instead of the Boris pusher. This part of the code has
been used for a study of various plasma problems, for example, for
simulations of laser-plasma electron acceleration~\cite{Soloviev11}.
The effects of quantum electrodynamics (QED), namely emission of hard
photons and hard photon decay are simulated by an alternative event
generator~\cite{Elkina11} that uses the Baier--Katkov formulae for the
event probabilities~\cite{Baier67,*Baier98,Berestetskii82} which are
applicable in the quantum limit (when photon recoil is substantial) as
well as in the classical limit. The MC part of the code has been used for
numerical simulations of electromagnetic cascades in a laser
field~\cite{Nerush11,Elkina11}.
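The core of the MC step can be sketched as follows; the emission rate
used here is a crude placeholder, whereas the actual code evaluates the
Baier--Katkov probabilities:
\begin{verbatim}
# Sketch of one Monte Carlo emission step for a single macro-electron.
# emission_rate() is a placeholder, NOT the physical Baier-Katkov rate;
# the PIC time step dt is chosen so that rate * dt << 1.
import random

def emission_rate(chi, gamma):
    return 0.1 * chi / gamma            # placeholder rate

def mc_step(chi, gamma, dt):
    if random.random() < emission_rate(chi, gamma) * dt:
        # In the real code the photon energy is sampled from the
        # differential emission spectrum; a crude stand-in is used here.
        return 0.3 * gamma              # emitted photon energy
    return None                         # no emission during this step
\end{verbatim}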
\begin{figure}
\includegraphics[width=8.3cm]{map-phen.eps}
\caption{\label{fig:phen}The gamma-ray generation efficiency
$\mathcal{E}$ in
laser-foil interaction obtained in the same simulation series as used
for Fig.~\ref{fig:gain}. Lines $1\text{--}3$ show distinctive
interaction regimes. Below line $1$ electrons and ions are easily
separated by the laser field and the laser pulse propagates through
the plasma slab, above this line the plasma reflects the laser pulse.
The region bounded by lines $2\text{--}3$ corresponds to quick foil
acceleration up to relativistic velocity (see Sec.~\ref{sec:ions}).}
\end{figure}
It follows from the simulations that gamma-ray generation is strongly
affected by ion dynamics. The gamma-ray generation efficiency is shown
in Fig.~\ref{fig:phen}, as well as the boundaries of the
characteristic laser-foil interaction regimes. The ions are quickly
accelerated up to relativistic velocities within the region bounded by
lines $2$ and $3$, which coincides fairly well with the region of
sizeable gamma-ray generation efficiency. The electron dynamics in
laser-foil interactions is discussed in Sec.~\ref{sec:electrons} and
analytical estimates that lead to lines $2$ and $3$ in
Fig.~\ref{fig:phen} are considered in Sec.~\ref{sec:ions}.
\section{\label{sec:electrons}Electron motion in laser-solid interactions}
Electron dynamics in laser-foil interactions can be extremely diverse
and complicated. Nonetheless, as a rule, plasmas prevent laser pulse
penetration and a thin and dense electron layer which screens the
incident field is formed at the front of the laser pulse. As we show
below, this property together with a one-dimensional approximation is
enough for a rough description of the electron trajectories.
Dynamics of the electron layer that screens the incident field is
extensively discussed in the context of high-harmonic generation in
laser-solid interactions~\cite{Brugge10,*Gonoskov11,*Sanz12}. It is
shown that at some specific interaction parameters relativistic
electron layers (bunches) with nanometer thickness and density up to
four orders of magnitude higher than nonrelativistic critical plasma
density can be formed and coherently emit high laser harmonics (up to
the photon energy of about $1 \text{ keV}$ that corresponds to the
wavelengths of $1 \text{ nm}$). At the same time a thicker electron
layer, which does not emit high harmonics efficiently but still screens the
incident field and prevents penetration of the laser pulse into the
plasma, is formed in a very broad range of interaction parameters.
Although such a layer does not emit keV photons coherently, electrons in
the layer can produce MeV photons due to incoherent synchrotron
emission.
For the description of the electron trajectories we assume that (i)
for normal incidence of a laser pulse on a plasma halfspace in the
framework of one-dimensional geometry a flat thin (much thinner than the
laser wavelength) electron layer is formed, which (ii) fully compensates
the incident laser field behind the layer by the layer's own fields, hence the
plasma behind the layer is unperturbed. Additionally we assume that
(iii) ions remain immobile during the interaction and (iv) initial
plasma density is restored behind the layer if it moves towards the
laser pulse. Hence, the surface charge density of the layer is $n_0
x_l$, where $x_l$ is the distance between the initial position of the
plasma boundary and the layer, normalized to $c/\omega$.
The coherent part of the electromagnetic radiation emitted by the
layer at a particular time instant can be easily found from Maxwell's
equations in the reference frame $K'$ where the layer moves along
itself:
\begin{equation}
E'_{\rightarrow} = B'_{\rightarrow} = E'_{\leftarrow} = - B'_{\leftarrow}
= -J'/2,
\end{equation}
where $E$ and $B$ denote the $y$ component of the electric field and
the $z$ component of the magnetic field of the emitted waves,
respectively; the fields are converted from CGS electrostatic units
to the units of $mc\omega/e$. The axes of the $K'$ reference frame are
assumed to be parallel to the axes of the laboratory frame of
reference $K$, in which the linearly polarized laser pulse is incident
along the $x$~axis and the electric field of the pulse is directed along
the $y$~axis. The symbols ``$\rightarrow$'' and ``$\leftarrow$''
denote the fields of the waves running in $+x$ and $-x$ directions at
the corresponding layer boundaries, respectively,
\begin{equation}
J' = \int_{x_l'-\delta x_l'/2}^{x_l'+\delta x_l'/2} j'(x') \, dx'
\end{equation}
is the layer surface current density in $K'$, $j$ is the volume current
density in the units of $mc \omega^2/4\pi e $, $x_l'+\delta
x_l'/2$ and $x_l'-\delta x_l'/2$ are the coordinates in $K'$ of the
layer boundaries normalized to $c/\omega$, hence, $\delta x_l'$ is
the layer thickness. Lorentz transformation of the coherently emitted
fields yields for the laboratory reference frame:
\begin{eqnarray}
E_\rightarrow = B_\rightarrow = -\frac{J}{ 2 ( 1- v_x ) }, \\
E_\leftarrow = -B_\leftarrow = -\frac{J}{ 2 (1+ v_x ) },
\end{eqnarray}
where
\begin{equation}
J = - n_0 x_l v_y,
\end{equation}
$v_x \approx dx_l/dt$ and $v_y$ are the components of the speed of
electrons that form the layer, $x_l$ is the distance between
the layer and the initial (unperturbed) position of the irradiated plasma
boundary in the reference frame $K$, $t$ is the current time instant
normalized to $\omega^{-1}$, and $n_0$ is the initial plasma density
normalized to $n_{cr} = m \omega^2 / 4\pi e^2$. We note again that
$n_0 x_l$ is the surface charge density of the electron layer.
The layer motion can now be found from the assumption that the plasma
remains unperturbed behind the layer because in this region the incident wave
[with $E_y (x,t) = \tilde E(x-t)$, $\tilde E_z(x,t)=0$] is fully
compensated by the wave emitted by the electron layer:
\begin{equation}
\label{eq:epluse}
\tilde E(x_l-t) + E_\rightarrow = 0.
\end{equation}
Assuming that particles in the layer are ultrarelativistic and
$v_x^2 + v_y^2 \approx 1$, the latter equation can be rewritten as
follows:
\begin{equation}
\label{eq:dxldt}
\frac{dx_l}{dt} = \frac{ 4 {\tilde E}^2 (x_l-t) - n_0^2 x_l^2}{ 4
{\tilde E}^2 (x_l-t) + n_0^2 x_l^2}.
\end{equation}
Electrons can leave and join the layer during its motion;
nevertheless, we use Eq.~(\ref{eq:dxldt}) in order to determine an
average trajectory as follows. Once $x_l(t)$ is found from
Eq.~(\ref{eq:dxldt}), $v_x(t)$ and $v_y = \pm \sqrt{1- v_x^2}$
can be obtained, and the sign of $v_y$ is chosen to satisfy
Eq.~(\ref{eq:epluse}).
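For illustration, Eq.~(\ref{eq:dxldt}) can be integrated numerically;
in the sketch below the pulse envelope and the parameter values are
assumed for demonstration only, and the resulting maximal displacement
is compared with the estimate $2 a_0/n_0$ derived below:
\begin{verbatim}
# Sketch: integrate dx_l/dt of Eq. (eq:dxldt) for a Gaussian pulse.
import numpy as np
from scipy.integrate import solve_ivp

a0, n0 = 100.0, 85.0                  # assumed illustrative values

def E_inc(s):                         # incident field E~(s), s = x_l - t
    return a0 * np.sin(s) * np.exp(-(s / 20.0) ** 2)

def rhs(t, x):
    e2 = 4.0 * E_inc(x[0] - t) ** 2
    d2 = (n0 * x[0]) ** 2
    return [(e2 - d2) / (e2 + d2 + 1e-30)]

sol = solve_ivp(rhs, (0.0, 100.0), [1e-6], max_step=0.01)
print(sol.y[0].max(), 2.0 * a0 / n0)  # numerical max vs 2/S estimate
\end{verbatim}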
Eq.~(\ref{eq:dxldt}) is not based on the equations of electron
motion, and it should not be affected much by such effects as radiation losses and back reaction
of the coherently emitted fields on the layer. Hence, the main
limitation of this equation is the assumption of immobile ions.
Due to relativistic motion of the layer $E_\rightarrow \neq
E_\leftarrow$, hence, the amplitude and the shape of the reflected laser light can be
considerably modified in comparison with the incident light. This also
means that the fields of the incident laser pulse are not compensated
inside the electron layer. However, the Lorentz factor of the layer
electrons cannot be estimated in the framework of the proposed model
because the model leads to the following:
\begin{equation}
\frac{d \gamma}{dt} = - \mathbf v \mathbf E \approx -v_x
E_x - v_y \left( \tilde E + E_\leftarrow \right) = 0,
\end{equation}
where we assume that $E_x = n_0 x_l$. Hence, the energy gain of the
electrons is caused by more subtle effects such as the finiteness of the
gamma-factor, the dispersion of electron parameters, electron inertia, ion
motion, etc., which can hardly be taken into account analytically. Since the electron
gamma-factor is the crucial parameter for incoherent photon emission,
gamma-ray generation cannot be described only on the basis of Eq.~(\ref{eq:dxldt}).
Nevertheless, Eq.~(\ref{eq:dxldt}) allows us to estimate the depth to
which the laser pulse pushes the electron layer.
The maximal value of the layer displacement $X_l$ corresponds to $dx_l/dt
= 0$. Assuming that the maximal displacement corresponds also to the
maximal value of the incident field $\tilde E \approx a_0$, we obtain:
\begin{equation}
\label{eq:Xl}
X_l \simeq \frac{2}{S}, \qquad S = \frac{n_0}{a_0},
\end{equation}
where $S$ is the so-called similarity parameter~\cite{Gordienko05}.
Eq.~(\ref{eq:Xl}) is very useful for a qualitative description of
the ion dynamics, as shown in the next section.
Moreover, Eqs.~(\ref{eq:dxldt}) and (\ref{eq:Xl}) can be used in order
to analyse gamma-ray parameters as follows.
\begin{figure}
\includegraphics[width=8.3cm]{section.eps}
\caption{\label{fig:section}(Dashed line) The gamma-ray generation efficiency
$\mathcal{E}$, (dotted line) the spectral coefficient $\nu$, (dashed-dotted line) the directivity
$\mathcal{D}$ at fixed value of the similarity
parameter $S = 2/l = 1/\pi$ (i.e., along line $1$ in Figs.~\ref{fig:phen}
and~\ref{fig:ions}) and (solid line) the curve $\mathcal{E} \propto
I^{3/2}$ versus laser intensity. Results of numerical simulations for
the same interaction parameters as used for Fig.~\ref{fig:gain}.}
\end{figure}
For the maximal number of electrons in the layer
Eq.~(\ref{eq:Xl}) yields $N \propto n_0 X_l \sim a_0$. It also follows
from the considered model that the shape of the electron trajectories,
hence, the curvature radius, depends only on the
similarity parameter $S$. Since the synchrotron emission power is
proportional to $\gamma^4$ for a fixed curvature radius,
for fixed $S$ the gamma-ray generation efficiency scales as $\mathcal{E} \propto N
\gamma^4/I \propto
I^{3/2}$ under the assumption that the average electron Lorentz factor $\gamma
\sim a_0$. The dependence of the gamma-ray generation efficiency on
laser intensity at fixed $S$ is shown in Fig.~\ref{fig:section} along
with the fit $\mathcal{E} \propto I^{3/2}$, which agrees fairly well
with the numerical data at low intensities.
The dependence of the directivity $\mathcal D$ and spectral
coefficient $\nu$ on intensity is also shown in
Fig.~\ref{fig:section}. The coefficient $\nu$ is determined by
the root-mean-square fitting in the logarithmic axes of the obtained gamma-ray
spectrum by the classical synchrotron spectrum of a single
electron~\cite{Landau75}:
\begin{eqnarray}
\label{eq:nu} \frac{dN_{ph}}{d\omega_{ph}} & \propto & \int_\kappa^\infty
\operatorname{Ai}(\xi) \, d \xi + \frac{2}{\kappa}
\operatorname{Ai}'(\kappa), \\
\kappa & = & \left( \frac{\omega_{ph}}{\nu a_0^3 \omega}
\right)^{2/3},
\end{eqnarray}
where $\hbar \omega_{ph}$ is the hard-photon energy. In the
classical synchrotron spectrum $\kappa^{3/2} = \omega_{ph} / F_\perp
\gamma^2 \omega$, where $F_\perp$ is the component of the force causing
photon emission that is perpendicular to the electron velocity,
normalized to $mc\omega$. Here we set $F_\perp \gamma^2 = \nu
a_0^3$; hence, $\nu$ can characterize such effects as the decrease of the
electron Lorentz factor and decrease of the angle between $\mathbf{F}$
and the electron velocity caused by the radiation
reaction~\cite{Fedotov10}. We found that if an appropriate spectral
coefficient $\nu$ is chosen, Eq.~(\ref{eq:nu}) describes well enough
the part of the gamma-ray spectrum extending from the gamma-ray energy cut-off
down to $0.2$ of its value.
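A minimal sketch of this fitting procedure (with a synthetic spectrum
standing in for the simulation output, and the overall sign of
Eq.~(\ref{eq:nu}) absorbed into the proportionality constant) could
look as follows:
\begin{verbatim}
# Sketch: fit the spectral coefficient nu of Eq. (eq:nu).
import numpy as np
from scipy.special import airy
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

np.random.seed(0)
a0 = 310.0                                 # assumed value

def model(w_ph, nu):                       # w_ph in units of omega
    kappa = (w_ph / (nu * a0 ** 3)) ** (2.0 / 3.0)
    tail = quad(lambda s: airy(s)[0], kappa, np.inf)[0]
    return -(tail + 2.0 / kappa * airy(kappa)[1])   # positive branch

w = np.logspace(2.5, 5.0, 30)              # photon frequencies
data = np.array([model(x, 3e-4) for x in w])
data *= 1.0 + 0.05 * np.random.randn(w.size)        # synthetic "data"

def loss(nu):                              # RMS misfit in log axes
    m = np.array([model(x, nu) for x in w])
    return np.mean((np.log(m) - np.log(np.abs(data))) ** 2)

fit = minimize_scalar(loss, bounds=(1e-5, 1e-2), method="bounded")
print(fit.x)                               # recovered nu, close to 3e-4
\end{verbatim}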
It should be noted that for $I\lesssim 10^{23} \text{
W}\,\text{cm}^{-2}$ the directivity and the spectral coefficient do not change with the
intensity (see Fig.~\ref{fig:section}), which conforms to the proposed analytical model. Besides,
for $I\gtrsim 10^{23} \text{ W}\,\text{cm}^{-2}$, $\nu$ declines
apparently due to radiation losses and $\mathcal{D}$ increases with
the increase of the intensity. The latter
can be interpreted as the radiation pattern narrowing due to the light
aberration and the relativistic Doppler effect which are caused in turn by
relativistic motion of the entire foil. This effect is also considered
in Ref.~\onlinecite{Ridgers12}.
\begin{figure}
\includegraphics[width=8.3cm]{map-ions.eps}
\caption{\label{fig:ions}The overall ion energy
normalized to the initial energy of the laser pulse for the same
set-up as in Fig.~\ref{fig:gain}. Lines $1$, $2$ and $3$ are the same
as in Fig.~\ref{fig:phen}.}
\end{figure}
\section{\label{sec:ions}Ion dynamics in laser-foil interactions}
Regimes of laser-foil interactions in terms of ion motion can be
classified by means of Eq.~(\ref{eq:Xl}). First, the line
\begin{equation}
\label{eq:2l}
S = \frac{2}{l},
\end{equation}
where $l$ is the foil thickness, obviously, corresponds to a complete
separation of the electrons and the ions at the maximal magnitude of the laser
intensity. Under the assumption of immobile ions, for $S>2/l$
the rear side of the foil remains unperturbed, otherwise,
for $S<2/l$, almost all electrons are expelled from the foil.
Eq.~(\ref{eq:2l}) corresponds to line $1$ in Figs.~\ref{fig:phen}
and \ref{fig:ions}.
The magnitude of the electric field that accelerates the ions
for $S>2/l$ can be
estimated as follows:
\begin{equation}
E_x \simeq n_0 X_l = 2 a_0.
\end{equation}
Let us introduce the time interval $\tau$ such that ions are accelerated by $E_x$ up
to relativistic velocities during this interval. Hence, $2 \tau
a_{0,i} \approx 1$, where
\begin{equation}
a_{0,i} = \frac{m}{m_p} \frac{Z}{M} a_0.
\end{equation}
Here $m_p \approx 1836 m$ is the proton mass.
At the same time, relativistic ions pass the distance $X_l$ during
the time $X_l$ (in normalized units, since their velocity is close to
$c$); equating this time to $\tau$ yields the following
relation between $a_0$ and $n_0$:
\begin{equation}
\label{eq:n0top}
n_0 \simeq 4 a_0 a_{0,i}.
\end{equation}
Thus, if the plasma density is higher than that given by Eq.~(\ref{eq:n0top}),
$X_l$ is small and the ions have time to leave the accelerating gap
before they gain relativistic velocities. In the opposite case, if the
plasma density is lower than the threshold of Eq.~(\ref{eq:n0top}), the
accelerating gap is thick enough and the ions at the front of the foil
become relativistic. In the latter case the foil is crumpled and blown
away by the laser pulse with relativistic velocity until the density
of the shovelled plasma becomes high enough to slow down the process.
Eq.~(\ref{eq:n0top}) corresponds to line $2$ in Figs.~\ref{fig:phen} and \ref{fig:ions}.
It is worth noting that the estimates derived above relate mostly to
high enough laser intensities ($a_{0,i} \gtrsim 1/4 \pi$). Otherwise
$\tau$ becomes greater than the laser period and the ion acceleration
process depends on the laser pulse duration.
The next threshold that corresponds to relativistic ion dynamics can
be found in the region $S<2/l$. The completely separated electrons and
ions generate in this case the following accelerating field:
\begin{equation}
E_x \simeq n_0 l.
\end{equation}
This field accelerates the ions up to relativistic velocities during
the laser period if $n_0 > \hat n_0$, where
\begin{equation}
\label{eq:n0bottom}
\hat n_0 = \frac{1}{2 \pi l} \frac{m_p}{m} \frac{M}{Z}.
\end{equation}
Although the ions on the front side of the foil are directly irradiated,
their acceleration by the laser pulse is weaker than the acceleration by the induced plasma
fields and can be neglected~\cite{Esirkepov04}. The threshold $n_0 =
\hat n_0$ is shown in Figs.~\ref{fig:phen} and \ref{fig:ions} as line $3$.
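For reference, the three boundaries can be computed directly from
Eqs.~(\ref{eq:2l}), (\ref{eq:n0top}) and (\ref{eq:n0bottom}); a short
sketch with the parameters of our simulation series:
\begin{verbatim}
# Sketch: boundary lines 1-3 on the (a0, n0) plane.
import numpy as np

l = 2.0 * np.pi        # foil thickness of one wavelength, units c/omega
MZ = 2.0               # ion mass-to-charge ratio M/Z
mp = 1836.0            # proton-to-electron mass ratio

a0 = np.logspace(np.log10(25.0), np.log10(2500.0), 200)

n0_line1 = (2.0 / l) * a0                     # S = 2/l, Eq. (eq:2l)
n0_line2 = 4.0 * a0 ** 2 / (mp * MZ)          # Eq. (eq:n0top)
n0_line3 = np.full_like(a0, mp * MZ / (2.0 * np.pi * l))  # Eq. (eq:n0bottom)
\end{verbatim}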
\begin{figure}
\includegraphics[width=8.3cm]{astro.eps}
\caption{\label{fig:astro}The gamma-ray generation efficiency as a
function of the overall ion energy normalized to the initial energy of the
laser pulse. The circles correspond to the results of numerical simulations
used for Fig.~\ref{fig:phen}. The colour corresponds to the
laser intensity, from (cyan) $2\times10^{21} \text{
W}\,\text{cm}^{-2}$ to (magenta) $2\times10^{25} \text{ W}\,
\text{cm}^{-2}$.}
\end{figure}
Summarizing, on the $(\text{intensity}, \text{ density})$ plane the
region characterized by quick ion acceleration up to
relativistic velocities is bounded by the lines corresponding to
Eqs.~(\ref{eq:n0top}) and (\ref{eq:n0bottom}) (lines $2$ and $3$ in
Figs.~\ref{fig:phen} and \ref{fig:ions}). Surprisingly, this region
approximately coincides with the area of efficient gamma-ray
generation, as seen from Figs.~\ref{fig:phen} and \ref{fig:ions}.
Besides, the plasma dynamics is quite diverse in this region: e.g., it is
quite irregular if $S>2/l$ and regular if $S<2/l$; at high intensities
it is significantly affected by abundant positron production, etc. The
considered coincidence can be partially explained by the proximity of
the threshold intensity value for fast ion acceleration ($a_{0,i} =
1/4 \pi$, that yields $I \sim 10^{23} \text{ W}\,\text{cm} ^{-2}$) and
the threshold intensity value for the radiation-dominated regime.
Furthermore, two distinctive regimes with an evident relation between ion
and gamma-ray energies are revealed by the results of the numerical simulations
(see Fig.~\ref{fig:astro}). Independently of the plasma density, at low
laser intensity the final gamma-ray energy scales as the square of the overall ion
energy, and at high intensity the gamma-ray energy is directly proportional to
the ion energy. Thus, the mutual influence of gamma-ray generation and
ion acceleration requires additional investigations.
\begin{figure}
\includegraphics[width=8.3cm]{A.eps}
\caption{\label{fig:A}(a) The Mollweide projection of the gamma-ray
radiation pattern and (b) the photon, electron and ion spectra
obtained in the numerical simulation corresponding to the point ``A'' in
Fig.~\ref{fig:gain}. The longitude $\varphi$ is measured in the
polarization plane, and the latitude $\theta$ is measured from the
polarization
plane, so that the point $\varphi=0$, $\theta=0$ corresponds to the initial
propagation direction of the laser pulse.}
\end{figure}
\section{\label{sec:sum}Summary and discussion}
In conclusion, incoherent generation of hard photons at normal
incidence of a laser pulse on a foil is considered by means of
three-dimensional PIC+MC simulations. For various plasma densities and
laser intensities the gamma-ray gain, directivity, generation efficiency
and spectral features are found. The influence of ion dynamics on
the emission of gamma-rays and the influence of radiation losses on
the electron dynamics are discussed.
As seen from Fig.~\ref{fig:gain}, the maximal gain at the intensity
$3.2 \times 10^{23} \text{ W}\, \text{cm}^{-2}$ ($a_0 = 310$) can be
obtained if a foil with $n_e = 1.1 \times 10^{23} \text{ cm}^{-3}$ ($n_0
= 85$) is used (see point ``A'' in Fig.~\ref{fig:gain}). This
intensity level is potentially attainable in the near future, so
let us consider in detail some properties of the gamma-ray bunches
produced at this point. Here the generation efficiency is $9\%$,
the directivity is $\mathcal{D} \approx 12$, hence, the gain is about
unity in this case. It should be noted that the generation efficiency
drops dramatically at lower intensities and saturates at higher
intensities, as seen from Fig.~\ref{fig:section}. The Mollweide
projection of the gamma-ray radiation pattern and the photon spectrum
are shown in Fig.~\ref{fig:A}. A two-lobe radiation pattern with lobes
confined to the polarization plane is typical for the case of normal
incidence~\cite{Nakamura12,Ridgers12}. A one-lobe radiation pattern
directed towards the incident laser pulse, analogous to that in
Ref.~\onlinecite{Brady12}, reveals itself only at quite low intensities,
when the generation efficiency is not high.
The gamma-ray source corresponding to point ``A'' in
Fig.~\ref{fig:gain}, for a $1 \text{ Hz}$ laser shot frequency, produces
$5 \times 10^{13}$ photons per second with energy greater than $0.5
\text{ MeV}$. At the photon energy of $1 \text{ MeV}$ such a source
provides the spectral power of $2 \times 10^{10} \text{ photons}/
\text{s}/ 0.1\% \text{ bandwidth}$ and the spectral intensity of $2 \times
10^4 \text{ photons}/ \text{mrad}^2/ \text{s}/ 0.1\% \text{ bandwidth}
$. Now let us compare these parameters with the parameters of modern
Compton gamma-ray sources and discuss the potential applications of
such a laser-plasma synchrotron source.
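As a quick consistency check of the quoted numbers (assuming the peak
of the radiation pattern exceeds the isotropic level exactly by the
directivity $\mathcal{D}$), the spectral intensity follows from the
spectral power:
\begin{verbatim}
# Sketch: spectral power -> spectral intensity at the pattern peak.
import math

spectral_power = 2e10   # photons / s / 0.1% bandwidth at 1 MeV
D = 12.0                # directivity at point "A"
sr_to_mrad2 = 1e6       # 1 steradian = 10^6 mrad^2

peak = spectral_power * D / (4.0 * math.pi)   # photons/sr/s/0.1% BW
print(peak / sr_to_mrad2)   # ~2e4 photons/mrad^2/s/0.1% BW, as quoted
\end{verbatim}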
One of the most intense gamma-ray sources, HI$\gamma$S~\cite{Weller09},
in a high-flux mode can produce about $10^9 \text{ photons}/\text{s}$
with average energy of $1 \text{ MeV}$. A collimator that provides
$3\%$ photon energy spread reduces the flux by an order of magnitude.
Hence, HI$\gamma$S on the level of $1 \text{ MeV}$ provides the spectral
power of $3 \times 10^6 \text{ photons}/\text{s}/0.1\% \text{
bandwidth}$ that is four orders of magnitude lower than that for the
considered laser-plasma source. However, HI$\gamma$S's gamma-ray beams
after the collimator spread by the angle of about $20 \text{ mrad}$ that
yields the spectral intensity of $10^4 \text{ photons}/ \text{mrad}^2/
\text{s}/ 0.1\% \text{ bandwidth}$ comparable with the considered
all-optical gamma-ray source.
Another Compton source, which produces quasimonoenergetic photon beams with
energies ranging from $0.1$ to $0.9 \text{ MeV}$, has recently been
developed~\cite{Gibson10} at LLNL and used to perform nuclear resonance fluorescence
(NRF) experiments~\cite{Albert10}. This source yields $10^5
\text{ photons}/\text{s}$ with average energy corresponding to $^7
\text{Li}$ line ($478 \text{ keV}$) and energy spread of $12\%$.
The resulting spectral power of $10^3 \text{ photons}/\text{s}/0.1\%
\text{ bandwidth}$ is enough for the accurate detection of the $^7
\text{Li}$ isotope \textit{in situ}; however, this required $7.5$ hours
of operation~\cite{Albert10}. The divergence of this gamma-ray beam is
about $10 \text{ mrad}$, hence, not only the spectral power, but also the
spectral intensity of the LLNL source is much lower than that for the
considered laser-plasma source.
Despite the high potential performance of laser-plasma gamma-ray
sources, some of their properties are so unfamiliar to present-day
gamma-ray physics that more sophisticated NRF experimental techniques
may be required. For instance, the inherently wide photon spectrum will
lead to extremely high Compton background in spectral measurements of
re-emitted photons. Moreover, semiconductor and scintillating
detectors generally used for gamma-ray energy measurements can operate
only at fluxes below one photon per nanosecond. Thus, NRF
experiments will require detector arrays and many laser shots in
order to obtain reasonable statistics. Nevertheless, femtosecond
duration of gamma-ray bunches from laser-irradiated plasmas together
with relatively long lifetimes of low-lying nuclear levels (generally
from tens of femtoseconds to picoseconds~\cite{Chadwick06}) might enable the
time-of-flight separation~\cite{Ahmed04} of the Compton and NRF signals if
an appropriate experimental design is proposed.
Obviously, gamma-ray beams from laser-plasma sources can be used in a
number of applications that do not require high spectral quality. One
such application is the radiography of ultrahigh-density matter.
It can be performed by means of laser-plasma sources with
unprecedented time resolution, which is crucial for fast ignition
experiments~\cite{Barty04}. Another promising application of
laser-plasma gamma-ray sources is a high-flux positron source based on
pair creation by high-energy photons in a material target. Proposals
for the International Linear Collider assume not only high electron and
positron energies, but also high luminosity, which requires a
substantial upgrade of present-day positron sources. Relatively efficient
positron production using Compton gamma-ray sources has already been
demonstrated experimentally, where a gamma-to-positron conversion efficiency of
about $10^{-3}$ and a total positron flux of $10^4 /\text{s}$ were
achieved~\cite{Omori06}. Since the conversion efficiency does not
sharply depend on the gamma-ray energy, the average flux of about $10^{10}
\text{ positrons}/\text{s}$ can be expected if the considered
laser-plasma gamma-ray source is used.
Speculating further about possible future experiments with bright
gamma-ray beams, propagation-based polychromatic phase-contrast
imaging~\cite{Wilkins96} can be proposed. According to van
Cittert--Zernike theorem the scale of the spatial coherence of light
emitted by an incoherent source is about the product of light
wavelength, distance to the source and reverse size of the source.
Hence, the scale of the spatial coherence of a laser-plasma gamma-ray
source is about $10 \text{ } \upmu \text{m}$, if the photon energy is
$1 \text{ MeV}$, the source size is $1 \text{ } \upmu \text{m}$ and
the distance from the source is $10 \text{ m}$. Assuming that $100$
gamma-photons should fall into the corresponding solid angle, at least
$10^{8} \text{ photons}/\text{mrad}^2$ are required for
phase-contrast gamma-imaging, which is close to the value provided by the considered
source. The refractive index of matter drops rapidly with decreasing
hard-photon wavelength; however, the refractive index of polarized
vacuum does not depend on the wavelength~\cite{Berestetskii82}. Hence,
for a photon beam propagating through polarized vacuum the induced
curvature of the beam phase front is higher for smaller wavelengths.
This fact along with femtosecond duration of gamma-ray bunches from
laser-irradiated plasmas can make such bunches a promising probe for
the experiments on vacuum polarization in ultraintense laser field
proposed in Ref.~\onlinecite{King10,*King10_1}.
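The coherence estimate used above reduces to simple arithmetic,
sketched below for the quoted parameters:
\begin{verbatim}
# Sketch: van Cittert-Zernike coherence scale for a 1 MeV photon.
h_c = 1.24e-6        # h*c in eV*m
lam = h_c / 1e6      # wavelength at 1 MeV, ~1.2e-12 m
L = 10.0             # distance from the source, m
s = 1e-6             # source size, m
print(lam * L / s)   # ~1.2e-5 m, i.e. about 10 microns
\end{verbatim}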
Finally let us note that modern high-power laser facilities are
already appropriate for quite efficient generation of high-energy
photons in laser-plasma interactions. Simulation results of oblique
incidence (the incidence angle is $60 ^\circ$) of a tightly focused $3
\text{ PW}$ $p$-polarised laser pulse (FWHM: $3 \text{ }\upmu
\text{m}$, $10 \text{ fs}$; $\lambda=0.91 \text{ }\upmu \text{m}$,
$a_0=100$, the peak intensity is $3.3 \times 10^{22} \text{ W}\,
\text{cm}^{-2}$) on a foil ($n_e=1.7 \times 10^{23}
\text{ cm}^{-3}$, $M/Z=2$, $l=1 \lambda$) demonstrate a reasonable
generation efficiency ($0.7\%$) and an unexpectedly high directivity
($\mathcal{D} = 37$). The radiation pattern in this case consists of only
one lobe lying both in the plane of incidence and in the foil plane,
hence, the angle between the initial propagation direction of the laser
pulse and the lobe direction is $30^\circ$. Assuming a $1
\text{ Hz}$ laser system, the resulting total gamma-ray flux is
$10^{12} \text{ photons}/\text{s}$, the spectral power is $10^8 \text{
photons}/ \text{s} / 0.1\% \text{ bandwidth}$ and the spectral intensity
is $400 \text{ photons}/ \text{mrad}^2/ \text{s}/
0.1\% \text{ bandwidth}$ at the energy level of $1 \text{ MeV}$. Thus,
quite high flux, spectral power and directivity of such a source can
make it desirable for a number of applications and experiments.
Summarizing, theoretical analysis and three-dimensional PIC+MC
simulations demonstrate an interplay of ion acceleration and hard
photon generation in laser-foil interactions. The scaling of the
gamma-ray bunch parameters with the laser intensity is discussed.
Properties and possible applications of the resulting gamma-ray
bunches are considered, including nuclear resonance fluorescence and
high-flux positron sources.
\begin{acknowledgments}
This work has been supported in part by the Government of the Russian
Federation (Project No. 14.B25.31.0008) and by the Russian Foundation
for Basic Research (Grants No 13-02-00886, 13-02-97025).
\end{acknowledgments}
\subsection{Background}
Consider the task of image classification over a set of classes $[K] := \{1,2,\ldots,K\}$: we have tuples of images and labels $({\bm{x}}, {\bm{y}}) \in {\mathcal{X}}\times{\mathcal{Y}}$, where ${\bm{y}}\in\{0,1\}^K$ denotes the one-hot encoded label and $t \in [K]$ denotes the ground-truth class. The goal is to learn a parametric mapping function $f({\bm{x}}; \theta) : {\mathcal{X}} \mapsto {\mathcal{Y}}$, where $\theta \in \Theta$ is characterized by a neural network. We learn the parameters $\theta$ via Empirical Risk Minimization of a surrogate loss function, typically optimized using some variant of Stochastic Gradient Descent:
\begin{equation}
\label{eq:erm}
\theta^* = \argmin_{\theta \in \Theta} {\mathcal{L}}({\bm{y}}, f({\bm{x}}; \theta)),
\end{equation}
where ${\mathcal{L}}$ is the cross-entropy loss ${\mathcal{H}}({\bm{y}}, {\bm{q}})=\sum_{i=1}^{K} -{y}_i \log {q}_i$ between the one-hot encoded label ${\bm{y}}\in{\mathcal{Y}}$ and the network output distribution ${\bm{q}}=f({\bm{x}}; \theta)$, computed by applying the $\mathrm{softmax}$ function over the logits output ${\bm{z}}$:
\begin{equation}\label{eq:softmax}
{q}_i = \mathrm{softmax}({z}_i) = \frac{\exp({z}_i)}{\sum_{j=1}^K \exp({z}_j)}.
\end{equation}
We can also apply a temperature $T$ in the $\mathrm{softmax}$ to get a smoother output distribution $\tilde{{q}}_i = \mathrm{softmax}({z}_i/T)$.
The gradient of the single-sample loss \emph{w.r.t.} logit ${z}_i$ is given by:
\begin{equation}
\label{eq:grad}
\frac{\partial{\mathcal{L}}}{\partial {z}_i} = {q}_i - {y}_i ~\big(=\partial_i\big).
\end{equation}
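This gradient is easy to verify numerically; below is a minimal sketch (the logits are arbitrary, and the finite-difference step is our choice):
\begin{verbatim}
# Sketch: check dL/dz_i = q_i - y_i by central finite differences.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce(z, y):                       # cross-entropy H(y, softmax(z))
    return -np.sum(y * np.log(softmax(z)))

K, t = 5, 2
z = np.random.randn(K)
y = np.eye(K)[t]

analytic = softmax(z) - y           # Eq. (eq:grad)
eps = 1e-6
numeric = np.array([(ce(z + eps * np.eye(K)[i], y) -
                     ce(z - eps * np.eye(K)[i], y)) / (2 * eps)
                    for i in range(K)])
assert np.allclose(analytic, numeric, atol=1e-4)
\end{verbatim}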
\subsection{Benefit from Label Smoothing}
Label Smoothing~\citep{szegedy2016rethinking} is a technique that softens the one-hot encoded label ${\bm{y}}$ by a factor $\epsilon$, such that the modified label becomes: $\tilde{{y}}_i^{\mathrm{LS}}=(1-\epsilon){y}_i + \epsilon/K$. Label smoothing mitigates the over-confidence issue of neural networks and improves model calibration~\citep{muller2019does}.
Knowledge Distillation~($\mathrm{KD}$), on the other hand, uses an additional teacher model's predictions ${\bm{p}}$:
\begin{equation}\label{eq:dist}
\begin{aligned}
\theta^* &=\argmin_{\theta \in \Theta} {\mathcal{L}}^{\mathrm{KD}}({\bm{y}}, {\bm{p}}, f({\bm{x}}; \theta), \lambda, T)\\
&=(1-\lambda){\mathcal{H}}({\bm{y}}, {\bm{q}}) + \lambda {\mathcal{H}}(\tilde{{\bm{p}}}, \tilde{{\bm{q}}}),
\end{aligned}
\end{equation}
where $\lambda\in[0,1]$ is a hyper-parameter; $\tilde{{\bm{q}}}$ and $\tilde{{\bm{p}}}$ are the softened student's and teacher's predictions obtained by applying temperature $T$ in~\Eqref{eq:softmax}. The logits gradient for $\mathrm{KD}$ is given by:
\begin{equation}\label{eq:dist_grad}
\frac{\partial{\mathcal{L}}^{\mathrm{KD}}}{\partial z_i} = (1-\lambda)(q_i - y_i) + \frac{\lambda}{T}(\tilde{{q}}_i - \tilde{{p}}_i) ~\big(=\partial^{\mathrm{KD}}_i\big).
\end{equation}
\citet{yuan2019revisit} established the connection between $\mathrm{KD}$ and label smoothing:
in terms of gradient propagation, Knowledge Distillation is equivalent to Label Smoothing when the temperature $T=1$ and the teacher's probability distribution ${\bm{p}}$ is uniform.
In other words, we can view $\mathrm{KD}$ as an adaptive version of label smoothing, suggesting it should inherit most of the benefits of label smoothing, such as model regularization, better calibration and reduced over-confidence~\citep{muller2019does}. %
In the next two subsections, we analyze the unique characteristics of a real teacher's output distribution
${\bm{p}}$ relative to the uniform distribution, and demonstrate how they could potentially facilitate the student model's training with distillation.
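Before proceeding, the equivalence above is easy to verify numerically: at $T=1$ with a uniform teacher, the $\mathrm{KD}$ gradient of~\Eqref{eq:dist_grad} matches the cross-entropy gradient under labels smoothed with $\epsilon = \lambda$ (a minimal sketch with arbitrary logits):
\begin{verbatim}
# Sketch: KD with T=1 and a uniform teacher equals label smoothing
# with epsilon = lambda, at the gradient level.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

K, t, lam = 5, 2, 0.3
z = np.random.randn(K)
q = softmax(z)
y = np.eye(K)[t]
p = np.full(K, 1.0 / K)                         # uniform teacher

grad_kd = (1 - lam) * (q - y) + lam * (q - p)   # Eq. (eq:dist_grad), T=1
y_ls = (1 - lam) * y + lam / K                  # smoothed label
assert np.allclose(grad_kd, q - y_ls)
\end{verbatim}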
\subsection{Benefit from Teacher's Prediction Confidence}\label{sec:analysis_re-weight}
One important characteristic of the teacher distribution ${\bm{p}}$ is that the prediction (confidence) ${p}_t$ on the ground-truth class $t$ is not a constant and varies across examples. Compared to the vanilla loss function in~\Eqref{eq:erm}, we find that $\mathrm{KD}$ performs gradient rescaling in the logits space. The ratio of the gradients (\cref{eq:grad,eq:dist_grad}) from the two losses is:
\begin{equation}\label{eq:grad_scale}
\begin{aligned}
\omega_i = \partial^{\mathrm{KD}}_i \big/ \partial_i = (1-\lambda) + \frac{\lambda}{T}\left( \frac{\tilde{{q}}_i - \tilde{{p}}_i}{{q}_i - {y}_i}\right).
\end{aligned}
\end{equation}
Next, we show that $\mathrm{KD}$ performs example re-weighting based on the teacher model's prediction confidence ${p}_t$ on the ground-truth class. The gradient rescaling factor $\omega_i$ is larger on average when the teacher is more confident in making the right prediction. More specifically, we state the following:
\begin{prop}[\textbf{Example Re-weighting}]\label{prop:conf}
Given any example $({\bm{x}}, {\bm{y}}) \in {\mathcal{X}} \times {\mathcal{Y}}$, let $\tilde{{p}}_t = \tilde{{q}}_t + \tilde{{c}}_t + \eta$, where $\tilde{{c}}_t > 0$ is the teacher's relative prediction confidence on the ground truth $t\in [K]$ and $\eta$ is zero-mean random noise. Then the gradient rescaling factor over all classes when applying Knowledge Distillation is given by:
$$\mathbb{E}_\eta\left[\sum_{i\in[K]}|\partial^{KD}_i|\Big/\sum_{i\in[K]}|\partial_i|\right] = (1 - \lambda) + \frac{\lambda}{T}\left(\frac{\tilde{{c}}_t}{1 - {q}_t}\right).$$
\end{prop}
\begin{proof}
We first consider the ground-truth class $t\in[K]$. Using $\tilde{{p}}_t = \tilde{{q}}_t + \tilde{{c}}_t + \eta$ and $\mathbb{E}[\eta] = 0$ in \eqref{eq:grad_scale}, we get:
\begin{align*}
\mathbb{E}_{\eta}[|\omega_t|] &= (1-\lambda) + \frac{\lambda}{T}\left(\frac{\tilde{{c}}_t}{1 - {q}_t}\right)
\end{align*}
Now, the sum of the incorrect class gradients is given by\footnote{Under the assumption of a better-quality teacher model, we assume $p_t > q_t$, and correspondingly $q_i \ge p_i,~\forall i \in [K]\backslash t$.}:
\begin{align*}
\sum_{i \neq t} |\partial^{\mathrm{KD}}_i| &= \sum_{i\neq t}\big[ (1-\lambda){{q}}_i + \frac{\lambda}{T}(\tilde{{q}}_i - \tilde{{p}}_i)\big]\\
& = (1-\lambda)(1-{{q}}_t) + \frac{\lambda}{T}(\tilde{{p}}_t - \tilde{{q}}_t) = |\partial^{\mathrm{KD}}_t|
\end{align*}
The penultimate equality follows from ${\bm{q}},~{\bm{p}}$ and $\tilde{{\bm{q}}}$ being probability masses. The same argument applies to $\partial_i$, and hence the proof.
\end{proof}
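Proposition~\ref{prop:conf} can also be checked numerically; the sketch below takes $\eta = 0$ and a teacher satisfying the footnote's assumption:
\begin{verbatim}
# Sketch: numerical check of the re-weighting factor with eta = 0.
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - (z / T).max())
    return e / e.sum()

K, t, lam, T = 5, 2, 0.3, 4.0
z_s = np.random.randn(K)
z_t = z_s.copy()
z_t[t] += 2.0                  # teacher more confident on ground truth

y = np.eye(K)[t]
q, qT, pT = softmax(z_s), softmax(z_s, T), softmax(z_t, T)

grad = q - y
grad_kd = (1 - lam) * (q - y) + (lam / T) * (qT - pT)
lhs = np.abs(grad_kd).sum() / np.abs(grad).sum()

c_t = pT[t] - qT[t]            # teacher's relative confidence
rhs = (1 - lam) + (lam / T) * c_t / (1 - q[t])
assert np.isclose(lhs, rhs)
\end{verbatim}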
At a given snapshot during training, we could assume ${\tilde{{c}}_t}$ to be a constant for all examples. Then for any pair of examples $({\bm{x}}, {\bm{y}})$ and $({\bm{x}}', {\bm{y}}') \in {\mathcal{X}} \times {\mathcal{Y}}$, if the teacher is more confident on the ground-truth class of the first, i.e., ${p}_t > {p}'_{t'}$, then the average $\omega$ over all classes will be greater than $\omega'$ according to Proposition~\ref{prop:conf}.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{figures/p_t-w_t.png}
\caption{When applying $\mathrm{KD}$ for ResNet-20 student model with ResNet-56 as the teacher on CIFAR-100, we plot $\tilde{{p}}_t$ vs. $\omega_t$ (in log scale) with 10K samples at the end of training. Clearly, we can see that the two are positively correlated.}
\label{fig:p_t-w_t}
\end{figure}
In Figure~\ref{fig:p_t-w_t}, we plot the relationship between $\omega_t$ and ${p}_t$ at the end of training ResNet-20~\citep{he2016deep} student model with ResNet-56 as the teacher on CIFAR-100~\citep{krizhevsky2009learning} dataset (more details of this dataset can be found in Suppl. Section~\hyperref[sec:si-detail]{A}). The plot shows a clear positive correlation between the two. Also, we found the correlation to be stronger at the beginning of training.
In~\citep{furlanello2018born}, the authors conjecture that the example weight is associated with the largest value in ${\bm{p}}$. However, in the above proof, we show that once the teacher makes a wrong prediction, using the largest value gives a contradictory result. It is also trivial to prove that, when there are only two classes (in which case $\omega_{i \neq t} = \omega_t$), the only effect of using $\mathrm{KD}$ is example re-weighting. So we can regard the use of $\mathrm{KD}$ in binary classification~\citep{Anil2018Large} as taking the binary log-loss and multiplying it by the weight $\omega_t$.
In summary, we think incorporating the teacher's supervision in knowledge distillation has the effect of example re-weighting, where the weight is associated with the teacher's prediction confidence ${p}_t$ on the ground truth: the weight is higher when ${p}_t$ is larger. Alternatively, this suggests that $\mathrm{KD}$ assigns larger weights to training examples that are considered easier from the teacher's perspective, and vice versa, which has a similar flavor to Curriculum Learning. \citet{bengio2009curriculum} suggests this may not only speed up training convergence, but also help reach a better local minimum. Also, according to~\cite{roux2016tighter}, re-weighting examples during training with the model's prediction confidence results in a tighter bound on the classification error, leading to better generalization.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.22\textwidth}
\includegraphics[width=\textwidth]{figures/heatmap-k=100_t=3.png}
\subcaption{}
\label{fig:heatmap-k=100_t=3}
\end{subfigure}
~
\begin{subfigure}[b]{0.22\textwidth}
\includegraphics[width=\textwidth]{figures/heatmap-k=100_t=10.png}
\subcaption{}
\label{fig:heatmap-k=100_t=10}
\end{subfigure}
~
\begin{subfigure}[b]{0.22\textwidth}
\includegraphics[width=\textwidth]{figures/heatmap-k=10-t=10.png}
\subcaption{}
\label{fig:heatmap-k=10-t=10}
\end{subfigure}
~
\begin{subfigure}[b]{0.26\textwidth}
\includegraphics[width=\textwidth]{figures/heatmap-cosine-sims.png}
\subcaption{}
\label{fig:heatmap-cosine-sims}
\end{subfigure}
\caption{Using 10K samples from CIFAR-100 for ResNet-56, we plot Pearson correlations of output probability ${\bm{p}}$ with varying $\mathrm{softmax}$ temperature (a) $T=3$, (b) $T=10$, and (c) $T=10$ where only top-10 largest values in ${\bm{p}}$ are preserved. In (d), we show cosine similarities computed from the weights of the final logits layer. Since every 5 classes are within the same super-class, class correlations can be interpreted as the color `squares' on the block-diagonal.}
\end{figure*}
\subsection{Prior of Optimal Geometry in Logit Layer}\label{sec:analysis_class_rel}
For binary classification, we showed in Section~\ref{sec:analysis_re-weight} that $\mathrm{KD}$ essentially performs example re-weighting. However, for multi-class classification, we argue that $\mathrm{KD}$ also leverages relationships between classes, as captured by the teacher's probability mass distribution ${\bm{p}}$ over the incorrect classes. As argued by~\citet{hinton2015distilling}, on the MNIST dataset, the model assigns a relatively high probability to class `7' when the ground-truth class is `2'. In this section, we first confirm their hypothesis using empirical studies. Then, we provide new insights to interpret the class relationships as a prior on the optimal geometry of the student's last logit layer.
To illustrate that the teacher's distribution ${\bm{p}}$ captures class relationships, we train ResNet-56 on the CIFAR-100 dataset. CIFAR-100 contains 100 classes over 20 super-classes, with each super-class containing 5 sub-classes. Figures~\ref{fig:heatmap-k=100_t=3} and~\ref{fig:heatmap-k=100_t=10} show the heatmaps of the Pearson correlation coefficients of the teacher's distribution ${\bm{p}}$ for different temperatures. We sort the class indexes to ensure that the 5 classes from the same super-class appear next to each other. We observe in Figure~\ref{fig:heatmap-k=100_t=3} that with lower temperature there is no pattern in the heatmap revealing class relationships. But as we increase the temperature in Figure~\ref{fig:heatmap-k=100_t=10}, classes within the same super-class clearly have high correlations with each other, as seen in the block-diagonal structure. This observation verifies that the teacher's distribution ${\bm{p}}$ indeed reveals class relationships, given proper tuning of the softmax temperature. %
Before diving into the details of the geometric interpretation, we recall the case of label smoothing~\citep{szegedy2016rethinking}.
\begin{itemize}
\item From an optimization point of view,~\citet{he2019bag} showed that there is an optimal constant margin $\log(K(1-\epsilon)/\epsilon + 1)$, between the logit of the ground-truth ${z}_t$, and all other logits ${z}_{-t}$, using a label smoothing factor of $\epsilon$. For fixed number of classes $K$, the margin is a monotonically decreasing function of $\epsilon$.
\item From the geometry perspective, \citet{muller2019does} showed that the logit ${z}_k = {\bm{h}}^\top {\bm{w}}_k$ for any class $k$ is a measure of the squared Euclidean distance $\|{\bm{h}} - {\bm{w}}_k\|^2$ between the activations of the penultimate layer ${\bm{h}}$\footnote{Here ${\bm{h}}$ can be concatenated with a ``1'' to account for the bias.} and the weights ${\bm{w}}_k$ for class $k$ in the last logit layer.
\end{itemize}
The above results suggest that label smoothing encourages $\|{\bm{h}} - {\bm{w}}_t\|^2 \le \|{\bm{h}} - {\bm{w}}_{-t}\|^2$, and pushes all the incorrect classes equally apart.
Following a similar proof technique, we extend the above results to knowledge distillation:
\begin{prop}\label{prop:distance}
Given $({\bm{x}}, {\bm{y}}) \in {\mathcal{X}} \times {\mathcal{Y}}$, at the optimal solution ${\bm{w}}^*_k,~\forall k\in[K]$ of the student's final logits layer at $T=1$, Knowledge Distillation enforces different inter-class distances based on the teacher's probability distribution ${\bm{p}}$:
$$\|{\bm{h}} - {\bm{w}}^*_i\|^2 < \|{\bm{h}} - {\bm{w}}^*_j\|^2~~~\text{iff}~~~{p}_i > {p}_j,~\forall i,j \in [K]\backslash t$$
\end{prop}
\begin{proof}
At the optimal solution of the student, equating the gradient in~\Eqref{eq:dist_grad} to $0$ for $T=1$, we get:
\begin{equation}
{q}^*_k=
\begin{cases}
\lambda{p}_k + (1-\lambda) & \text{if}\ k=t, \\
\lambda{p}_k & \text{otherwise}.
\end{cases}
\label{eq:optimal_stud}
\end{equation}
Using a similar proof technique as~\citet{muller2019does}, $\|{\bm{h}} - {\bm{w}}^*_k\|^2 = \|{\bm{h}}\|^2 + \|{\bm{w}}^*_k\|^2 - 2 {\bm{h}}^\top{\bm{w}}^*_k$, where ${\bm{h}}$ is the penultimate layer activations, and ${\bm{w}}^*_k$ are the weights of the last logits layer for class $k \in [K]$. Note that $\|{\bm{h}}\|^2$ is factored out when computing the $\mathrm{softmax}$, and $\|{\bm{w}}^*_k\|^2$ is usually a (regularized) constant across all classes.
Equating ${z}^*_k={\bm{h}}^\top{\bm{w}}^*_k$, and using the property $\mathrm{softmax}({\bm{z}}) = \mathrm{softmax}({\bm{z}} + c),~\forall c\in{\mathbb{R}}$, we get:
\begin{align*}
{q}^*_k &= \mathrm{softmax}({z}^*_k) = \mathrm{softmax}({\bm{h}}^\top{\bm{w}}^*_k)\\ &=\mathrm{softmax}\Big(-\frac{1}{2}\|{\bm{h}} - {\bm{w}}^*_k\|^2\Big)
= \lambda{p}_k,~\forall k \in [K]\backslash t
\end{align*}
The last equality follows from \eqref{eq:optimal_stud}, and thus proves the claim on class partition prior geometry at optimality.
\end{proof}
From Figure~\ref{fig:heatmap-k=100_t=10}, we know that the teacher assigns higher probability to the classes within the same super-class, and hence $\mathrm{KD}$ encourages hierarchical clustering of the logits layer weights based on the class relationships.
\subsection{Summarizing Effects of Knowledge Distillation}\label{sec:summary_effects}
Primarily, $\mathrm{KD}$ brings a regularization/calibration effect by introducing a smoothed teacher distribution, although this effect is not usually considered knowledge.
Besides, we believe there are two types of knowledge a teacher model distills to its student -- the real teacher's probability distribution ${\bm{p}}$ benefits the student not only via its \emph{confidence on the ground-truth class}, which re-weights examples, but also via its probability mass on the \emph{incorrect classes}.
Intuitively, these values reflect class relationships and therefore provide the student with more guidance.
We further interpret these values as applying a different label smoothing factor $\epsilon$ to each incorrect class.
The difference in $\epsilon$ encourages the student's optimal inter-class distances to differ across classes.
In other words, the distillation loss gets lower as more of the desired geometric inequalities in the output logit layer hold.
As a result, the two sources of knowledge are complementary to each other and could potentially facilitate the student model's training process and further improve model generalization.
\subsection*{A. Experimental details}\label{sec:si-detail}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{figures/synthetic_vis-sim=00-m=0.png}
\subcaption{}
\label{fig:synthetic-sim=0.0-m=0}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{figures/synthetic_vis-sim=09-m=0.png}
\subcaption{}
\label{fig:synthetic-sim=0.9-m=0}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{figures/synthetic_vis-sim=09-m=2.png}
\subcaption{}
\label{fig:synthetic-sim=0.9-m=2}
\end{subfigure}
\caption{Visualization of 5K synthetic data points (with input dimensionality $d=2$) on the 2-D plane. We use $K=4$ and $C=2$, meaning there are two super-classes, one associated with labels \{0,1\} and the other with labels \{2,3\}. We vary $\tau$ and $M$ to produce 3 plots: (a) $\tau=0.0$, no sine function is used; (b) $\tau=0.9$, no sine function is used and (c) $\tau=0.9$, $M=2$.}
\label{fig:synthetic-sim-m}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.515]{figures/histogram-pt.pdf}
\caption{On CIFAR-100, we plot the histograms of the teacher's confidence on the ground truth ${p}_t$; here the teacher model is ResNet-56 trained with different levels $\epsilon$ of label smoothing. From left to right, we use: (a) $\epsilon=0.0$ and $T=1$; (b) $\epsilon=0.0$ and $T=5$; (c) $\epsilon=0.1$ and $T=1$ and (d) $\epsilon=0.1$ and $T=3$. The distribution of ${p}_t$ becomes skewed after enabling label smoothing.}
\label{fig:histogram-pt}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{figures/heatmap-k=100-t=10-LS=true.png}
\subcaption{}
\end{subfigure}
~~~~~~~~~~~~~~~~
\begin{subfigure}[b]{0.355\textwidth}
\includegraphics[width=\textwidth]{figures/heatmap-cosine-sims-LS=true.png}
\subcaption{}
\end{subfigure}
\caption{Using 10K samples from CIFAR-100 for ResNet-56 trained with smoothed labels ($\epsilon=0.1$), we plot (a) Pearson correlations of ${\bm{p}}$ when $T=10$ and (b) Cosine similarities computed from the weights of the final logits layer. Since every 5 classes are within the same super-class, class correlations can be interpreted as the color `squares' on the diagonal.}
\label{fig:heatmap-label-smooth-failure}
\end{figure*}
\paragraph{Implementation of $\mathrm{KD}$.}
In practice, the gradients from the second (distillation) term of~\Eqref{eq:dist_grad} are much smaller compared to the gradients from the first term when the temperature $T$ is large. This makes tuning the balancing hyper-parameter $\lambda$ non-trivial. To mitigate this and bring the gradients from the two parts to a similar scale, we multiply the distillation term of~\Eqref{eq:dist_grad} by $T^2$, as suggested in~\citep{hinton2015distilling}.
\paragraph{Implementation of $\mathrm{KD}\text{-sim}$.}
When synthesizing the teacher distribution for $\mathrm{KD}\text{-sim}$, we use ${\bm{\rho}}^{\text{sim}} = \mathrm{softmax}(\hat{{\bm{w}}}_t \Hat{{\bm{W}}}^\top)$, where $\Hat{{\bm{W}}}$ is the $l_2$-normalized logit layer weights and $\hat{{\bm{w}}}_t$ is the $t$-th row of $\Hat{{\bm{W}}}$. However, the cosine similarities fed to the $\mathrm{softmax}$ are limited to the range $[0,1]$, so the resulting distribution is very likely to be close to uniform. To mitigate this and bring more resolution to the cosine similarities, we use the following:
$${\bm{\rho}}^{\text{sim}} = \mathrm{softmax}((\hat{{\bm{w}}}_t \Hat{{\bm{W}}}^\top)^{\alpha}/\beta).$$
Here $\alpha<1$ is a hyper-parameter that amplifies the resolution of the cosine similarities, and $\beta$ is another hyper-parameter setting the $\mathrm{softmax}$ temperature.
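A minimal sketch of this synthesis (with random weights standing in for the trained logit layer; clipping negative cosines to zero is our own safeguard, consistent with the $[0,1]$ range stated above):
\begin{verbatim}
# Sketch: synthesize rho_sim from the logit-layer weights W (K x d).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

K, d, t = 10, 64, 3
alpha, beta = 0.5, 0.5
W = np.random.randn(K, d)                  # stand-in trained weights

W_hat = W / np.linalg.norm(W, axis=1, keepdims=True)   # l2-normalize
cos = np.clip(W_hat @ W_hat[t], 0.0, 1.0)  # cosine similarities in [0,1]
rho_sim = softmax(cos ** alpha / beta)
\end{verbatim}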
\begin{table}[t!]
\begin{subtable}[h]{0.45\textwidth}
\centering
\begin{tabular}{l | l}
{Method} & {Hyper-parameter setting}\\
\hline
LS & $\epsilon=0.3$ for any $\tau$.\\
\hline
\multirow{6}{*}{KD} & $\lambda=0.7, T=3$ when $\tau=0.0$\\
& $\lambda=0.7, T=5$ when $\tau=0.1$\\
& $\lambda=0.7, T=2$ when $\tau=0.2$\\
& $\lambda=0.7, T=3$ when $\tau=0.3$\\
& $\lambda=0.7, T=10$ when $\tau=0.4$\\
& $\lambda=0.7, T=5$ when $\tau=0.5$;\\
\hline
\multirow{6}{*}{KD-pt} & $\lambda=0.7, T=3$ when $\tau=0.0$\\
& $\lambda=0.7, T=5$ when $\tau=0.1$\\
& $\lambda=0.7, T=2$ when $\tau=0.2$\\
& $\lambda=0.7, T=3$ when $\tau=0.3$\\
& $\lambda=0.7, T=10$ when $\tau=0.4$\\
& $\lambda=0.7, T=5$ when $\tau=0.5$\\
\hline
KD-sim & $\lambda=0.7,\alpha=0.5,\beta=0.5$ for any $\tau$\\
\end{tabular}
\caption{Synthetic}
\label{tb:hparam_synthetic}
\end{subtable}
\hfill
\begin{subtable}[h]{0.45\textwidth}
\centering
\begin{tabular}{l | l}
{Method} & {Hyper-parameter setting}\\
\hline
LS & $\epsilon=0.1$\\
KD & $\lambda=0.3,T=5$\\
KD-pt & $\lambda=0.3,T=5$\\
KD-sim & $\lambda=0.3,\alpha=0.3,\beta=0.3$\\
KD-topk & $k=25,\lambda=0.5,T=5$\\
\end{tabular}
\caption{CIFAR-100}
\label{tb:hparam_cifar}
\end{subtable}
\hfill
\begin{subtable}[h]{0.45\textwidth}
\centering
\begin{tabular}{l | l}
{Method} & {Hyper-parameter setting}\\
\hline
LS & $\epsilon=0.1$\\
KD & $\lambda=0.7,T=20$\\
KD-pt & $\lambda=0.2,T=25$\\
KD-sim & $\lambda=0.3,\alpha=0.5,\beta=0.3$\\
KD-topk & $k=500,\lambda=0.5,T=3$\\
\end{tabular}
\caption{ImageNet}
\label{tb:hparam_imagenet}
\end{subtable}
\hfill
\begin{subtable}[h]{0.45\textwidth}
\centering
\begin{tabular}{l | l}
{Method} & {Hyper-parameter setting}\\
\hline
KD & $\lambda=0.1,T=50$\\
KD-topk & $k=100,\lambda=0.1,T=50$\\
\end{tabular}
\caption{Penn Tree Bank (PTB)}
\label{tb:hparam_ptb}
\end{subtable}
\caption{Hyper-parameter settings for different methods on different datasets.}
\label{tab:temps}
\end{table}
\paragraph{Synthetic dataset.}
First, following the data generation procedure described in Section~\ref{sec:exp_synthetic}, we get a toy synthetic dataset with input dimensionality $d=2$, $K=4$ classes and $C=2$ super-classes. Figure~\ref{fig:synthetic-sim-m} shows a series of scatter plots with different settings of class similarity $\tau$ and task difficulty $M$. This visualization gives a better understanding of the synthetic dataset and helps us imagine what it looks like in the high-dimensional setting used in our experiments. The models used in our experiments are 2-layer networks activated by $\tanh$, with residual connections~\citep{he2016deep} and batch normalization~\citep{ioffe2015batch} in each layer. Following~\citep{ranjan2017l2, zhang2018heated}, we found that using the $l_2$-normalized logits layer weight $\Hat{{\bm{W}}}$ and penultimate layer $\hat{{\bm{h}}}$ provides more stable results. The model is trained by the ADAM optimizer~\citep{kingma2014adam} for a total of 3 million steps without weight decay, and we report the best accuracy.
Please refer to Table~\ref{tb:hparam_synthetic} for the best setting of hyper-parameters.
\paragraph{CIFAR-100 dataset.}
CIFAR-100 is a relatively small dataset with low-resolution ($32\times32$) images, containing $50k$ training images and $10k$ validation images, covering $100$ classes and $20$ super-classes. It is a perfectly balanced dataset -- we have the same number of images per class (\emph{i.e.,} each class contains $500$ training set images) and $5$ classes per super-class.
To process the CIFAR-100 dataset, we use the official split from Tensorflow Dataset\footnote{\url{https://www.tensorflow.org/datasets/catalog/cifar100}}. Both the data augmentation \footnote{\url{https://github.com/tensorflow/models/blob/master/research/resnet/cifar_input.py}}\footnote{We turn on the random brightness/saturation/contrast for better model performance.} for CIFAR-100 and the ResNet model\footnote{\url{https://github.com/tensorflow/models/blob/master/research/resnet/resnet_model.py}} are based on the Tensorflow official implementations. Also, following the conventions, we train all models from scratch using Stochastic Gradient Descent (SGD) with a weight decay of 1e-3 and a Nesterov momentum of 0.9 for a total of 10K steps. The initial learning rate is 1e-1; it is decreased to 1e-2 after 40K steps and to 1e-3 after 60K steps. We report the best accuracy for each model.
Please refer to Table~\ref{tb:hparam_cifar} for the best setting of hyper-parameters.
\paragraph{ImageNet dataset.}
ImageNet contains about $1.3$M training images and $50k$ test images, all of which are high-resolution ($224\times224$), covering $1000$ classes. The distribution over the classes is approximately uniform in the training set, and strictly uniform in the test set.
Our data preprocessing and model for the ImageNet dataset follow the Tensorflow TPU official implementation\footnote{\url{https://github.com/tensorflow/tpu/tree/master/models/official/resnet}}. Stochastic Gradient Descent (SGD) with a weight decay of 1e-4 and a Nesterov momentum of 0.9 is used. We train each model for 120 epochs; the mini-batch size is fixed at 1024 and low-precision (FP16) model parameters are adopted. We did not change the learning rate schedule from the original implementation.
Please refer to Table~\ref{tb:hparam_imagenet} for the best setting of hyper-parameters.
\paragraph{Penn Tree Bank dataset.}
We use the Penn Tree Bank (PTB) dataset for the word-level language modeling task, using the standard train/validation/test split of~\citep{mikolov2010recurrent}. The vocabulary is
capped at 10K unique words. We consider the state-of-the-art LSTM model called AWD-LSTM proposed by~\citet{merity2017regularizing}. The model uses several regularization tricks on top of a 3-layer LSTM, including DropConnect, embedding dropout, tied weights, etc. We use models of different capacities (indicated by hidden size and embedding size) as our Teacher and Student. To be specific, the Teacher has a hidden size of 1150 and an embedding size of 400, while the Student has a smaller hidden size of 600 and a smaller embedding size of 300. We follow the official implementation\footnote{\url{https://github.com/salesforce/awd-lstm-lm}} with simple changes for $\mathrm{KD}\text{-topk}$. Besides capacity, we keep the default hyper-parameters as in the official implementation to train our Teacher. However, when training the smaller Student model, we follow another implementation\footnote{\url{https://github.com/zihangdai/mos}} to: (1) lower the learning rate to 0.2, (2) increase training epochs to 1000, (3) use 0.4 for the embedding dropout rate and (4) use 0.225 for the RNN layer dropout rate.
\subsection*{B. Additional Experiments}\label{sec:si-exp}
\begin{table}[h!]
\centering
\begin{tabular}{l|c}
\toprule
\textbf{Method} & \textbf{\% top-1 accuracy}\\
\hline
Student & 72.51\\
KD & 75.94\\
\midrule
KD-rel & 74.14\\
KD-sim & 74.30\\
\midrule
KD-pt+rel & 75.07\\
KD-pt+sim & 75.24\\
\bottomrule
\end{tabular}
\caption{Performance of $\mathrm{KD}\text{-rel}$ on CIFAR-100. We report the mean result for 4 individual runs with different initializations. We use $\beta_1=0.6,\beta_2=\frac{0.1}{4},\beta_3=\frac{0.3}{95}$.}
\label{tb:exp_kdrel}
\end{table}
\paragraph{Examine optimal geometry prior effect with class hierarchy.}
In Section~\ref{sec:pseudo_kd}, we mentioned that the optimal geometry prior effect of $\mathrm{KD}$ can also be examined using an existing class hierarchy.
When the data comes with a pre-defined class hierarchy (as on CIFAR-100), we can use it to synthesize the teacher distribution. To be specific, let ${\mathbb{S}}_t \subset [K]\backslash t$ denote the classes that share the same parent (super-class) as $t$.
We simply assign different probability masses to different types of classes:
\begin{equation}\label{eq:kdrel}
\rho_i^{\textrm{rel}} = \begin{cases}
\beta_1 & \text{if } i = t,\\
\beta_2 & \text{if } i \in {\mathbb{S}}_t,\\
\beta_3 & \text{otherwise},
\end{cases}
\end{equation}
where $\beta_1 > \beta_2 > \beta_3$ are hyper-parameters we search over; we name this method $\mathrm{KD}\text{-rel}$.
As shown in Table~\ref{tb:exp_kdrel}, we found that $\mathrm{KD}\text{-rel}$ performs slightly worse than $\mathrm{KD}\text{-sim}$ on CIFAR-100. This trend still holds when we compound each effect with $\mathrm{KD}\text{-pt}$.
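For concreteness, the construction in Eq.~(\ref{eq:kdrel}) can be sketched in a few lines of Python; the \texttt{superclass\_of} mapping and all names are illustrative placeholders, not part of our released code:

\begin{verbatim}
import numpy as np

def kd_rel_target(t, superclass_of, beta1, beta2, beta3):
    """Synthesize the KD-rel distribution of Eq. (eq:kdrel).

    t             -- ground-truth class index
    superclass_of -- length-K integer array: super-class of each class
    beta1/2/3     -- mass for t, for its super-class siblings, for the rest
    """
    K = len(superclass_of)
    rho = np.full(K, beta3)                         # unrelated classes
    rho[superclass_of == superclass_of[t]] = beta2  # siblings S_t (and t)
    rho[t] = beta1                                  # ground-truth class
    assert np.isclose(rho.sum(), 1.0)               # betas must sum to 1
    return rho

# CIFAR-100 setting of the table above: 4 siblings, 95 other classes
rho = kd_rel_target(t=7, superclass_of=np.arange(100) // 5,
                    beta1=0.6, beta2=0.1 / 4, beta3=0.3 / 95)
\end{verbatim}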
\subsection{Empirical Improvements for Distillation Quality}\label{sec:improv_effective}
Though $\mathrm{KD}\text{-sim}$ is able to capture class relationships by assigning different probability masses to the incorrect classes, it has a major drawback -- all examples from the same class share the same relationships to the other classes. This is not a realistic assumption for most real-world datasets; for instance, on MNIST, only some versions of `2' look similar to `7'. To overcome this drawback, we propose $\mathrm{KD}\text{-topk}$, which simply borrows the top-$k$ largest values of the teacher's probability ${\bm{p}}$ and uniformly distributes the rest of the probability mass over the other classes.
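A minimal sketch of this construction (array shapes and names are illustrative):

\begin{verbatim}
import numpy as np

def kd_topk_target(p, k):
    """Keep the teacher's k largest probabilities; spread the
    remaining mass uniformly over the other K - k classes."""
    K = p.shape[-1]
    top = np.argsort(p)[-k:]                 # indices of k largest entries
    rho = np.empty_like(p)
    rho[:] = (1.0 - p[top].sum()) / (K - k)  # uniform remainder
    rho[top] = p[top]                        # borrow teacher's top-k values
    return rho
\end{verbatim}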
Figure~\ref{fig:heatmap-k=10-t=10} shows that only preserving top-$10$ largest values could closely approximate the class correlations as in the full teacher's distribution ${\bm{p}}$, and is also less noisy.
The above finding shows that only the few incorrect classes that are strongly correlated with the ground-truth class are useful for $\mathrm{KD}$; the probability mass on the other classes is random noise (which is not negligible under a high temperature $T$) and only has the effect of label smoothing in expectation.
Using the above intuition, we test $\mathrm{KD}\text{-topk}$ on the image classification task on CIFAR-100 and ImageNet. Beyond computer vision, we also test $\mathrm{KD}\text{-topk}$ on the language modeling task on the Penn Tree Bank (PTB) dataset. We apply the state-of-the-art LSTM model~\citep{merity2017regularizing} with different capacities for the teacher and student. Details of the PTB dataset and model specifications are in Section~\hyperref[sec:si-detail]{A} of Suppl. For image classification, the performance of $\mathrm{KD}\text{-topk}$ is shown in the last row of Table~\ref{tb:exp_real_data}: $\mathrm{KD}\text{-topk}$ outperforms $\mathrm{KD}$ on both datasets. For language modeling, the results in Table~\ref{tb:exp_real_data_lm} show a similar trend for $\mathrm{KD}\text{-topk}$.
We plot the performance uplift of $\mathrm{KD}\text{-topk}$ as a function of $k$ in Figure~\ref{fig:exp_topk}, which suggests that the best performance is achieved with proper tuning of $k < K$, capturing class relationships while reducing noise. Note that the improvement of $\mathrm{KD}\text{-topk}$ over $\mathrm{KD}$ comes essentially for free, requiring only a simple modification. This is also the reason we omit a more sophisticated comparison with other advanced distillation techniques, such as \citep{romero2014fitnets, yim2017gift}.
\subsection{Potential Improvements for Training Efficiency}
The vanilla $\mathrm{KD}$ method requires loading a pre-trained teacher model in memory and computing a forward pass to obtain the full output distribution. In contrast, all of our proposed partial-$\mathrm{KD}$ methods and $\mathrm{KD}\text{-topk}$ can be implemented with better computational efficiency when training the student model. In terms of computation cost, we can pre-compute the $K\times K$ class similarity matrix before student training and use it directly for $\mathrm{KD}\text{-sim}$. For $\mathrm{KD}\text{-topk}$, we only need the top-$k$ predictions of the teacher, which can be efficiently (approximately) computed using SVD-softmax~\citep{svdsoftmax} when the output space is large.
Alternatively, for vanilla $\mathrm{KD}$, one could save computation by storing the teacher's predictions on disk, which again can be optimized using our proposed methods: we only need to store a single value, i.e., the teacher's confidence on the ground-truth class, for $\mathrm{KD}\text{-pt}$, and the top-$k$ predictions for $\mathrm{KD}\text{-topk}$. As shown in our experiments in Figure~\ref{fig:exp_topk}, with $k=10\%$ of $K$ we achieve performance comparable to vanilla $\mathrm{KD}$. %
\subsection{Diagnosis of Failure Cases}
A good understanding of $\mathrm{KD}$ enables us to diagnose failure cases.
\citet{muller2019does} observed that although label smoothing improves teacher model's quality, it results in a worse student model when applying $\mathrm{KD}$.
We verified this on CIFAR-100 (see Table~\ref{tb:label-smooth-failure}) and found that the unfavorable distillation performance can be attributed to two factors. Firstly, as argued by~\citet{muller2019does} and illustrated in Figure~\ref{fig:heatmap-label-smooth-failure} in Suppl., $\mathrm{LS}$ destroys class relationships. Secondly, we found that the teacher's skewed prediction distribution on the ground truth (see Figure~\ref{fig:histogram-pt} in Suppl.) also hinders the effectiveness of $\mathrm{KD}$, because the example re-weighting effect becomes less useful. The results of $\mathrm{KD}\text{-sim}$ and $\mathrm{KD}\text{-pt}$ in the last two columns of Table~\ref{tb:label-smooth-failure} verify these findings.
For another failure case, \citet{mirzadeh2019improved} showed that the distilled student model's quality gets worse as we continue to increase the teacher model's capacity. The larger-capacity teacher might overfit and predict uniformly high confidence on the ground truth across all examples, thereby hindering the effectiveness of example re-weighting in knowledge distillation. Another explanation could be that there exists an optimal model-capacity gap between the teacher and student, beyond which the teacher's confidence on the ground truth becomes inconsistent with the desired example difficulty for the student.
Perhaps, an `easy' example for larger capacity teacher is overly difficult for the student, due to the large difference in model expressiveness.
\section{Introduction} \input{introduction}
\section{Related Work} \input{related_work}
\section{Analyzing Mechanisms of Knowledge Distillation} \label{sec:analysis} \input{analysis}
\section{Isolating Effects by Partial Knowledge Distillation Methods} \label{sec:pseudo_kd} \input{pseudo_kd}
\section{Improving and Diagnosing Knowledge Distillation} \label{sec:application} \input{experiments}
\section{Conclusion} \label{sec:conclusions} \input{conclusions}
\newpage
\subsection{Proposed Partial KD Methods}
\textbf{Examine Example Re-weighting Effect by KD-pt.}
Label smoothing exhibits neither the example re-weighting effect nor the class-relationship effect, due to its uniform teacher distribution. However, if we borrow ${p}_t$ (the prediction on the ground-truth class $t\in[K]$) from the real teacher's probability distribution ${\bm{p}}$, we can synthesize a partial-teacher distribution that incorporates the example re-weighting effect. More specifically, we craft the teacher probability distribution as follows:
\begin{equation}\label{eq:kdpt}
\rho_i^{\textrm{pt}} = \begin{cases}
{p}_t & \text{if } i = t,\\
(1 - {p}_t)/(K-1) & \text{otherwise}.
\end{cases}
\end{equation}
From Proposition~\ref{prop:conf}, it is easy to see that $\mathrm{KD}\text{-pt}$ is capable of assigning different weights to different examples. However, it does not capture class relationships, since every incorrect class receives the same probability mass. %
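A minimal sketch of Eq.~(\ref{eq:kdpt}), with illustrative names:

\begin{verbatim}
import numpy as np

def kd_pt_target(p, t):
    """Synthesize the KD-pt distribution of Eq. (eq:kdpt): keep only
    the teacher's confidence p_t on the ground-truth class t."""
    K = p.shape[-1]
    rho = np.full(K, (1.0 - p[t]) / (K - 1))  # uniform over incorrect classes
    rho[t] = p[t]
    return rho
\end{verbatim}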
\textbf{Examine Optimal Prior Geometry Effect by KD-sim.}
Following the same methodology, we synthesize a teacher distribution that only captures class relationships and assigns the same weight to each example.
To achieve this, we use the weights of the last logit layer ${\bm{W}} \in {\mathbb{R}}^{K \times d}$ of the teacher model to obtain class relationships. We believe the teacher, due to its larger capacity, is able to encode class semantics in the weights of its last logit layer.
Thus, we create a distribution ${\bm{\rho}}^{\text{sim}}$ as the $\mathrm{softmax}$ over cosine similarity\footnote{In practice,
besides tuning the temperature of the $\mathrm{softmax}$, one could also raise the similarities to a power $<1$ to amplify the resolution of cosine similarities. Please refer to Section~\hyperref[sec:si-detail]{A} in Suppl. for more details of our implementation.} of the weights: ${\bm{\rho}}^{\text{sim}} = \mathrm{softmax}(\hat{{\bm{w}}}_t \Hat{{\bm{W}}}^\top)$, where $\Hat{{\bm{W}}}$ is the $l_2$-normalized logit layer weights, and $\hat{{\bm{w}}}_t$ is the $t$-th row of $\Hat{{\bm{W}}}$ corresponding to the ground truth class $t\in[K]$.
Though other distance metrics could also be used as measures of class similarity, we leave the analysis of the different choices to future work.
To verify our assumption, we check the heatmap of cosine similarities in Figure~\ref{fig:heatmap-cosine-sims}, which clearly shows a pattern similar to the Pearson correlation of the teacher distribution ${\bm{p}}$ in Figure~\ref{fig:heatmap-k=100_t=10}.
We call this method $\mathrm{KD}\text{-sim}$.
From Propositions~\ref{prop:conf} and \ref{prop:distance}, our proposed method, though simple and straightforward, can achieve our purpose of factoring out the class relationships effect.
Note that $\mathrm{KD}\text{-sim}$ does not require knowledge of a class hierarchy. However, if the hierarchy is available (as in CIFAR-100), we can also synthesize a teacher distribution a priori.
In Suppl. Section~\hyperref[sec:si-exp]{B}, we synthesize ${\bm{\rho}}$ by setting different values for (1) the ground-truth class $t$, (2) classes within the same super-class as $t$, and (3) the other incorrect classes. The quality of the resulting method is slightly worse than that of $\mathrm{KD}\text{-sim}$, but it still improves the student model's quality.
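A minimal sketch of the ${\bm{\rho}}^{\text{sim}}$ construction described above (names are illustrative; the power amplification mentioned in the footnote is omitted):

\begin{verbatim}
import numpy as np

def kd_sim_target(W, t, T=1.0):
    """Softmax over cosine similarities of the teacher's logit-layer rows.

    W -- teacher logit-layer weights, shape (K, d)
    t -- ground-truth class index; T -- softmax temperature
    """
    W_hat = W / np.linalg.norm(W, axis=1, keepdims=True)  # l2-normalize rows
    sims = W_hat @ W_hat[t]         # cosine similarity of every class to t
    logits = sims / T
    logits -= logits.max()          # shift for numerical stability
    rho = np.exp(logits)
    return rho / rho.sum()
\end{verbatim}

Combining this with ${\bm{\rho}}^{\text{pt}}$ from the previous sketch as $(1-\alpha){\bm{\rho}}^{\text{pt}}+\alpha{\bm{\rho}}^{\text{sim}}$ gives the compounded variant discussed next.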
\textbf{Compounded Effects.}
To enjoy the benefits of multiple effects and approximate the functionality of $\mathrm{KD}$, we can combine the two partial-KD methods introduced above. We explore a simple linear combination of the synthetic teacher distributions -- $(1-\alpha) {\bm{\rho}}^{\text{pt}} + \alpha {\bm{\rho}}^{\text{sim}}$ -- and name this method $\mathrm{KD}\text{-pt+sim}$. %
It is easy to verify that this compounded method performs example re-weighting and injects optimal prior geometry through class relationships.
In the next section, we evaluate our proposed partial-distillation methods on synthetic and real-world datasets to better understand how much each of these effects could benefit the student. Based on our understanding, we propose a novel distillation method that only adopts top-$k$ largest values from the teacher distribution ${\bm{p}}$. In Section~\ref{sec:improv_effective}, we illustrate how this method could reduce noise in ${\bm{p}}$ (see Figure~\ref{fig:heatmap-k=10-t=10}), and also result in a better quality student model.
\subsection{Empirical Studies}
\subsubsection{Synthetic Dataset}\label{sec:exp_synthetic}
The performance of $\mathrm{KD}$ depends on the properties of the dataset. A natural question is therefore -- \emph{Does $\mathrm{KD}$ perform only example re-weighting when all the classes are uncorrelated with each other?} We proved this to be true for binary classification (Section~\ref{sec:analysis_re-weight}). To answer the same question for the multi-class classification task, we generate a synthetic dataset in which we can control the class similarities within each super-class.
\textbf{Setup.}
Inspired by~\citet{ma2018modeling}, we synthesize a classification dataset with $K$ classes and $C$ super-classes. Each super-class has an equal number of $K/C$ classes, and each class is assigned a basis vector. These basis vectors are carefully generated so that we can control the class correlations within the same super-class. More specifically, we generate a single data point as follows:
\begin{enumerate}
\item Randomly sample $C$ orthonormal basis vectors, denoted by ${\bm{u}}_i \in{\mathbb{R}}^d~\forall i\in[C]$.
\item For each orthonormal basis vector ${\bm{u}}_i$, we sample $(K/C-1)$ unit vectors ${\bm{u}}_j\in{\mathbb{R}}^d$ with cosine similarity $\tau$ to ${\bm{u}}_i$.
\item Randomly sample an input data point in $d$-dimensional feature space ${\bm{x}}\sim\mathcal{N}_d(\mathbf{0}, \mathbf{I})$.
\item Generate the one-hot encoded label ${\bm{y}}\in{\mathcal{Y}}$ with target $t=\argmax_{k\in[K]} \big({\bm{u}}^\top_k \hat{{\bm{x}}} + \sum_{m=1}^{M} \sin({a}_m {\bm{u}}^\top_k \hat{{\bm{x}}} + {b}_m) \big)$, where $\hat{{\bm{x}}}$ is the $l_2$-normalized ${\bm{x}}$; ${\bm{a}},{\bm{b}}\in{\mathbb{R}}^M$ are arbitrary constants; and we refer to the controlled $\sin$ complexity term $M \in {\mathbb{Z}}^+$ as the \emph{task difficulty}.
\end{enumerate}
After producing the basis vectors with procedures (1) and (2), we run procedures (3) and (4) $|\mathcal{D}|$ times with the basis vectors fixed to generate a synthetic dataset $\mathcal{D}=\{({\bm{x}},{\bm{y}})\}$ (see the code sketch at the end of this setup). By tuning the cosine similarity parameter $\tau$, we can control the class correlations within the same super-class. Setting the task difficulty $M=0$ generates a linearly separable dataset; for $M>0$, more non-linearities are introduced by the $\sin$ function (see Figure~\ref{fig:synthetic-sim-m} in Suppl. for a visualization on a toy example).
In the following experiments, we set the input dimension $d=500$ with $K=50$ classes and $C=10$ super-classes. We use $|\mathcal{D}|=500k$ data points for training and $|\mathcal{D_{\mathrm{valid}}}|=50k$ for validation. We use a simple 2-layer fully-connected neural network with $\tanh$ activation and a hidden dimension of $64$ for the student and $128$ for the teacher. Finally, we set $M=10$, aiming for the right task-difficulty trade-off (i.e., not too easy, but hard enough to leave a large margin between the two models for $\mathrm{KD}$).
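A minimal sketch of the generation procedure (steps (1)--(4)); the construction of a unit vector at cosine similarity $\tau$ via an orthogonal Gaussian component is one possible choice, and all names are illustrative:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def make_class_vectors(d, K, C, tau):
    """Steps (1)-(2): C orthonormal bases, each with K/C - 1 extra
    unit vectors at cosine similarity tau."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, C)))  # orthonormal columns
    U = []
    for i in range(C):
        u = Q[:, i]
        U.append(u)
        for _ in range(K // C - 1):
            w = rng.standard_normal(d)
            w -= (w @ u) * u                     # component orthogonal to u
            w /= np.linalg.norm(w)
            U.append(tau * u + np.sqrt(1.0 - tau**2) * w)  # cos(u, .) = tau
    return np.stack(U)                           # shape (K, d)

def make_example(U, a, b):
    """Steps (3)-(4): draw x ~ N(0, I) and label it by the argmax score."""
    d = U.shape[1]
    x = rng.standard_normal(d)
    x_hat = x / np.linalg.norm(x)
    proj = U @ x_hat                             # K inner products
    score = proj + np.sin(np.outer(proj, a) + b).sum(axis=1)
    return x, int(np.argmax(score))
\end{verbatim}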
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{figures/synthetic_results_new.png}
\caption{Best performance over 4 runs on a synthetic dataset with different levels of controlled class similarities.}
\label{fig:exp_synthetic_data}
\end{figure}
\textbf{Results and Analysis.}
Figure~\ref{fig:exp_synthetic_data} shows the classification accuracy on the validation set for all methods as $\tau$ varies from $0.0$ to $0.4$.
We notice a large margin between the teacher and student.
Label Smoothing ($\mathrm{LS}$) marginally helps the student, while Knowledge Distillation ($\mathrm{KD}$) benefits significantly.
Interestingly, regardless of the class similarity $\tau$, $\mathrm{KD}\text{-pt}$ has performance comparable to $\mathrm{KD}$, suggesting that the example re-weighting effect plays a major role in distillation. Thus, even when the classes are uncorrelated, $\mathrm{KD}$ can still benefit the student through example re-weighting.
Note that for this task, the data-points which are close to the decision boundary are harder to classify, and can be regarded as difficult examples.
Furthermore, when increasing $\tau$, we see a significant improvement in performance of $\mathrm{KD}\text{-sim}$, suggesting that the injected prior knowledge of class relationships boosts student model's quality. %
\subsubsection{Real-world Datasets}
We use two popular image classification datasets -- CIFAR-100~\citep{krizhevsky2009learning} and ImageNet~\citep{russakovsky2015imagenet} to analyze the quality of our proposed partial-distillation methods, and also to verify if we could approximate the performance of $\mathrm{KD}$ by compounding effects.
\textbf{Setup.}
CIFAR-100 is a relatively small dataset with 100 classes. We use ResNet-20 as the student, and ResNet-56 as the teacher.
ImageNet on the other hand is a large-scale dataset covering 1000 classes. We use ResNet-50 as the student, and ResNet-152 as the teacher.
For more details, please refer to Section~\hyperref[sec:si-detail]{A} in Suppl.
Note that instead of using different model families as in~\citep{furlanello2018born,yuan2019revisit}, we use the same model architecture (i.e., ResNet) with different depths for the student and teacher, to avoid unknown effects introduced by model-family discrepancy.
\begin{table}[t!]
\centering
\begin{tabular}{l|cc}
\toprule
\textbf{Method} & \textbf{CIFAR-100} & \textbf{ImageNet}\\
\hline
\hline
Teacher & 75.68 & 77.98 \\
Student & 72.51 & 76.32\\
\midrule
LS & 73.87& 76.83\\
KD & 75.94& 77.49\\
\midrule
KD-pt & 75.08 & 77.00\\
KD-sim & 74.30& 76.95\\
KD-pt+sim & 75.24& 77.17\\
\midrule
KD-topk & \textbf{76.17} & \textbf{77.75}\\
\bottomrule
\end{tabular}
\caption{We report the mean Top-1 accuracy (\%) over 4 individual runs with different initializations. Best $k$ for $\mathrm{KD}\text{-topk}$ is 25 and 500 for CIFAR-100 and ImageNet, resp.}
\label{tb:exp_real_data}
\end{table}
\begin{table}[t!]
\centering
\begin{tabular}{l|ccc}
\toprule
\textbf{Method} & \textbf{\#Params} & \textbf{Validation} & \textbf{Test}\\
\hline
\hline
Teacher & 24.2M & 60.90 & 58.58\\
Student & 9.1M & 64.17 & 61.55\\
\midrule
KD & 9.1M & 64.04 & 61.33\\
KD-topk & 9.1M & \textbf{63.59} & \textbf{60.85}\\
\bottomrule
\end{tabular}
\caption{Validation and test Perplexity (lower is better) of compared methods on PTB language modeling. We report the best result over 4 individual runs with different initializations. Best $k$ value for $\mathrm{KD}\text{-topk}$ is 100.}
\label{tb:exp_real_data_lm}
\end{table}
\textbf{Results and Analysis.}
Table~\ref{tb:exp_real_data} shows the overall performance using the best hyper-parameters for each method. On both datasets, the teacher model is much better than the student, and $\mathrm{LS}$ improves the student model's generalization. $\mathrm{KD}$ further boosts the student model's quality by a large margin, especially on CIFAR-100, where $\mathrm{KD}$ even outperforms the teacher. We then try to disentangle the different benefits of distillation using the partial-$\mathrm{KD}$ methods. Both $\mathrm{KD}\text{-pt}$ and $\mathrm{KD}\text{-sim}$ outperform $\mathrm{LS}$, especially $\mathrm{KD}\text{-pt}$ on CIFAR-100. This suggests that the different effects of distillation benefit the student in different ways.
Furthermore, by combining the two effects together in $\mathrm{KD}\text{-pt+sim}$ (using $\alpha=0.5$), we see a further improvement in quality.
\section{Introduction}
Efficient exploration and representation learning are two core challenges in reinforcement learning.
An agent that understands how its environment works, in particular the causal structure of the environment, knows the consequences of its behavior and will be able to learn new tasks quickly.
Such structural knowledge is not necessarily linked to a task and is ideally acquired before a specific task is learned.
An information-theoretic principle known as \emph{empowerment} has led to a number of approaches for task-agnostic exploration and representation learning (e.g. \citet{mohamed2015variational,gregor2016variational}).
Empowerment formulates an unsupervised objective which can be thought of as finding an optimal code for transmitting information through the environment. Treating the environment as an information channel, maximizing the mutual information between actions and their effects amounts to understanding how to control the environment in such a way that particular states can be achieved.
Another motivation for using such information-based objectives in reinforcement learning is that utilizing an information channel in an optimal way can be linked to the emergence of compact representations \citep{hoel2017map}.
Finding an optimal code translates to finding an optimal action distribution which influences the environment most effectively.
While finding this distribution is manageable in simple environments, the search space becomes intractable and the problem notoriously hard in settings with large action spaces, where actions need to be coordinated to achieve something meaningful.
Previous approaches have not tackled such problems, where a meaningful representation of the action space needs to be learned, and thus potentially do not leverage the full potential of this method.
Moreover, to our knowledge, no other approach is model-free, suited for partially observable settings, and equipped with closed-loop feedback.
We introduce a model-free approach with memory and closed-loop feedback, such that control over full trajectories can be optimized, rather than just control over a desired final state of a trajectory.
We demonstrate that our model learns to understand a complex environment without external reward or supervision.
\section{Empowerment -- a brief review}
Empowerment is a popular objective for task-agnostic reinforcement learning. \citet{klyubin2005empowerment} define \emph{empowerment} as the maximum mutual information in the agent-to-environment information channel, as a measure of how much control an agent exerts on its environment. They argue that an agent should strive to increase its empowerment in the absence of any specific tasks. For an extensive introduction, please refer to \citet{salge2014empowerment}.
\citet{jung2011empowerment} conduct an extensive set of experiments to show its applicability. In particular, they show that empowerment maximization can swing up a pendulum without any rewards for swinging up.
\citet{mohamed2015variational} maximize a lower bound to the mutual information and learn open-loop options using deep neural networks on a variety of nontrivial gridworld environments.
\citet{gregor2016variational} extend it to closed-loop options and show that closed-loop options achieve higher empowerment. They also propose an option-less empowerment-maximizing policy which is able to deeply explore a first-person 3D environment with pixel observations.
\citet{tiomkin2017unified} show that empowerment admits a Bellman-like recursive formulation, and thus can be optimized using a temporal difference version of the Blahut-Arimoto algorithm. However, their formulation does not admit a way to learn options (similar to \citet{gregor2016variational}'s second algorithm).
\citet{eysenbach2018} build on top of \citet{gregor2016variational}'s first algorithm with maximum-entropy policies and a fixed prior over options, which enables stable learning demonstrated on continuous control tasks.
\citet{thomas2018disentangling} present an approach, where options are explicitly mapped to variations of the state.
All the above methods except \citet{tiomkin2017unified} consider final states or observations as a proxy for behavior. Our work learns options for trajectories, i.e. two distinct sequences of states are considered different even if they share the same final state. This makes our proposed variant of empowerment particularly suitable for partially-observable environments.
Table~\ref{table:empowerment} presents a comparison of these differing variants of empowerment, including our own proposal.
\begin{table}
\begin{center}
\small
\begin{tabular}{l|c|c|c}
\thead{Method} & \thead{Closed \\ loop opt.} & \thead{Partial \\ obs.} & \thead{Model-\\free} \\
\hline
\citet{klyubin2005empowerment} & \xmark & \xmark & \xmark \\
\citet{jung2011empowerment} & \xmark & \xmark & \xmark \\
\citet{mohamed2015variational} & \xmark & \cmark & \cmark\\
\citet{gregor2016variational} Alg. 1 & \cmark & \xmark & \cmark\\
\citet{gregor2016variational} Alg. 2 & - & \cmark & \cmark \\
\citet{tiomkin2017unified} & - & \xmark & \xmark\\
\citet{thomas2018disentangling} & \cmark & \xmark & \cmark \\
\citet{eysenbach2018} & \cmark & \xmark & \cmark\\
\textbf{This work} & \cmark & \cmark & \cmark \\
\hline
\end{tabular}
\end{center}
\caption{A comparison of various empowerment variants proposed in the literature. Dashes indicate that the corresponding variant does not learn options.}
\label{table:empowerment}
\end{table}
\section{Model}
\begin{figure}
\centering
\input{figures/model.tex}
\caption{Illustration of the model. Blue arrows correspond to the parts that are added on top of a regular agent model with memory. Here, $\omega$ represents an option, $\mathcal{L}$ the mutual information objective, $z_t=f(o_t, z_{t-1})$ the latent state, $a_t\sim\pi(a_t|z_t,\omega)$ an action, $o_t=o(s_t)$ an observation, and $s_t$ the state. In our implementations, we do not optimize $\mathcal{L}$ directly, but rather provide a reward signal $r_t = \delta_{\omega,\omega'_t}$, where $\omega'_t \sim q(\omega|z_t)$ is the option inferred at time $t$. Solid lines are learned.}
\label{fig:model}
\end{figure}
We consider a (partially observable) Markov decision process (MDP), defined by $(S, A, \Gamma, R)$, where $S$ is a finite set of states, $A$ is the set of actions, $\Gamma = p(s_{t+1} | s_t, a_t)$ is the state transition function, and $R_a(s, s')$ is an extrinsic reward function, specifying the reward received when transitioning from state $s$ to $s'$ due to action $a$.
At every timestep $t$, the agent receives an observation $o_t$, where $o_t = o(s_t)$, and emits an action $a_t \in A$. In our unsupervised setting, we assume the external reward $r_t = 0$ for all $t$.
The self-supervised agent model is defined through an information source $\Omega$, a latent state $z_t = f(o_t, z_{t-1})$, a policy $\pi(a_t | z_t, \omega_t)$, where $\omega_t$ corresponds to a sample from $\Omega$ and is to be encoded in the agent's actions, and an inverse model $q(\omega | z_t)$.
Our objective for unsupervised exploration and representation learning boils down to maximizing the mutual information between the information source $\Omega$ and a representation of a sequence of observations, $z_t = f(o_t, z_{t-1})$. Here, rather than using the latent state distribution $p(z_t | o_t, z_{t-1})$ directly, we infer the original information using a learned function $q$, and thus,
\begin{align}
\max_{\pi, q, f} \hat I(\Omega; q(\omega | z_t))\,,
\label{mi1}
\end{align}
where we train $\pi$, $q$, and $f$ simultaneously. The approximation in eqn.~\ref{mi1} corresponds to a variational lower bound of $I(\Omega; \{o_0,\ldots,o_T\})$, assuming the agent was acting over $T$ timesteps (see Appendix~\ref{sect:lower} for details).
The latent state enables our model to maximize information transmitted into a trajectory rather than a single (final) state.
In our implementations, rather than providing an internal reward at every timestep, we choose $\Omega$ to be a uniform distribution over a discrete space, and we sample $\omega_t \sim q$ and provide an internal reward of 1 whenever $\omega_t$ matches the original input word sampled from $\Omega$.
With this reward, our model can be optimized using any reinforcement learning algorithm.
The model is illustrated in fig.~\ref{fig:model}.
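A minimal sketch of this reward scheme follows; \texttt{env}, \texttt{policy}, and \texttt{inverse\_model} are placeholders for the learned components described above, not our actual implementation:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def rollout(env, policy, inverse_model, num_options, steps):
    """Self-supervised reward loop: reward 1 whenever the option
    inferred from the latent state matches the current target word."""
    omega = rng.integers(num_options)   # sample a target word from Omega
    z = None                            # latent state of the memory model
    obs = env.reset()
    rewards = []
    for _ in range(steps):
        a, z = policy(obs, z, omega)    # closed-loop, option-conditioned
        obs = env.step(a)
        omega_hat = inverse_model(z)    # a sample from q(. | z_t)
        r = float(omega_hat == omega)   # internal reward
        rewards.append(r)
        if r == 1.0:                    # on success, draw a new option
            omega = rng.integers(num_options)
    return rewards
\end{verbatim}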
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth, trim=5cm 3cm 4cm 0cm]{figures/boxes3-m8r8-a1-18.png}
\caption{Unsupervised learning of control. 1) The environment consists of random patches of size $3\times3$, which are potentially overlapping, and which can be moved around by exerting forces on individual pixels. A patch will move towards a certain direction if the net force applied to it exceeds a threshold. 2) For each pixel $i$, the agent emits an action (force) $f_i \in \{\leftarrow, \downarrow, \rightarrow, \uparrow, \circ\},$ where $\circ$ represents no force. The action space is thus of size $5^n$, where $n$ is the number of pixels in the environment (81 in this case). Different actions are represented in different colors; white represents no action. 3) The middle panels show the internal state and dynamics of the environment. Red arrows represent the forces resulting from the action. 4) A random target word (or option) to be transmitted through the environment is provided to the agent; a new word is drawn whenever the agent infers the provided word correctly (indicated at the bottom). 5) Probabilities of the predicted bits being 1, and probability of the target under the inverse distribution.}
\label{fig:res1}
\end{figure*}
\subsection{Derivation of the lower bound}
\label{sect:lower}
We intend to optimize a lower bound of
\begin{align}
I(\Omega; p(o_0, \ldots, o_T))\,,
\end{align}
where $\Omega$ is the option distribution, which we assume to be uniform here, and the agent is assumed to interact with the environment over $T$ timesteps.
We have
\begin{align*}
z_T &= f(z_{T-1}, o_T) \\
&= f(f(z_{T-2}, o_{T-1}), o_T) = \ldots =: F(o_0, \ldots, o_T)\,,
\end{align*}
and thus we consider $z_T$ an embedding of the full observation history. With the causal structure of the model and the data processing inequality we get
\begin{align*}
I(\Omega; p(o_0, \ldots, o_T)) &\geq I(\Omega; p(z_T)) \\
&= \left\langle \log \frac{p(\omega | z_T)}{p(\omega)} \right\rangle_{p(z_T|\omega)p(\omega)}\,.
\end{align*}
The right-hand side can further be written as
\begin{align*}
I(\Omega; p(z_T)) = \left\langle \log p(\omega | z_T) \right\rangle_{\omega,z_T} + H(\Omega)\,,
\end{align*}
where $H(\Omega)$ is a constant and therefore ignored in the optimization.
We can now approximate the conditional with a variational distribution, and simply optimize
\begin{align}
\max \hat I(\Omega; p(z_T)) = \max \left\langle \log q(\omega | z_T) \right\rangle_{\omega,z_T}\,,
\end{align}
which we call the \emph{empowerment} objective. We optimize this objective over the policy parameters using reinforcement learning.
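For illustration, the bound can be estimated from a batch of rollouts as follows (a minimal sketch with illustrative array names):

\begin{verbatim}
import numpy as np

def empowerment_bound(log_q, omegas, num_options):
    """Monte-Carlo estimate of the variational bound on I(Omega; z_T).

    log_q  -- array (batch, num_options): log q(. | z_T) per trajectory
    omegas -- array (batch,): the option sampled for each trajectory
    """
    ll = log_q[np.arange(len(omegas)), omegas]   # log q(omega | z_T)
    return ll.mean() + np.log(num_options)       # + H(Omega), uniform Omega
\end{verbatim}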
\section{Experiments}
\paragraph{Pushing boxes.} To demonstrate the effectiveness of our approach, we evaluate its performance on a synthetic task where the agent receives a top-down view of an environment containing several objects (random patches of $3\times3$ pixels). Patches can overlap and thus the boundaries between objects might not be visible.
The agent can exert a force on each individual pixel, which can be one of up, down, left, right. It can also choose to apply no force. The forces of individual pixels are transferred to the respective objects and if the net force applied to an object exceeds some threshold (i.e. if the forces are sufficiently aligned) the object moves into the respective direction by a small distance.
We consider an environment of size $9\times9$ and three objects, which are initialized at random locations inside the field of view.
This environment is particularly challenging because of its large action space and the fact that objects can occlude each other.
Initially, an option $\omega$ is drawn from $\Omega$, and is represented as a binary string of a certain length (e.g. 8 bits in the example shown in fig.~\ref{fig:res1}.)
The agent rewards itself as soon as it correctly infers the original string $\omega$ based on its latent state $z$, and subsequently draws a new option which it then tries to encode in its trajectory.
In the case of 8 bits, random guessing would lead to a correct guess every 256 steps on average. The agent, however, learns to encode information in its actions in such a way that they influence the environment state sufficiently for it to infer the option from its observations after only a small number of steps (fewer than 10).
It is to be noted that in this environment random policies emitting uncoordinated actions typically have no effect at all, since only coordinated (aligned) actions can exceed the threshold for shifting a block.
For similar reasons, the agent learns to never push the blocks outside of the field of view (there are no walls) since, as soon as not enough pixels of a block are visible, the actions applied to these pixels are not sufficient to move the block and it becomes useless.
Notably, as can be seen in fig.~\ref{fig:res1}, the agent causes the environment to produce the same observation in different contexts (all patches centered in the image; step 8 and step 16). However, the trajectories leading up to those states are different, and thus information is still transmitted. This scenario could not be solved by simply using the current state.
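For illustration, a minimal sketch of the environment's force-aggregation dynamics; the action encoding, integer displacements, and threshold rule are simplifying assumptions rather than the exact implementation:

\begin{verbatim}
import numpy as np

# displacement per action index: 1..4 = left, down, right, up (0 = no force)
DIRS = {1: (0, -1), 2: (1, 0), 3: (0, 1), 4: (-1, 0)}

def step_patches(patches, forces, threshold):
    """Move each patch one cell if the net force along an axis exceeds
    the threshold. `patches` is a list of (P, 2) integer coordinate
    arrays; `forces` is an (H, W) array of per-pixel action indices."""
    for patch in patches:
        net = np.zeros(2)
        for y, x in patch:
            a = forces[y, x]
            if a:                                  # 0 means no force
                net += DIRS[a]
        if np.abs(net).max() >= threshold:         # forces sufficiently aligned
            axis = int(np.argmax(np.abs(net)))
            patch[:, axis] += int(np.sign(net[axis]))  # shift by one cell
\end{verbatim}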
As a performance metric for the proposed approach, we consider the median number of steps the agent requires to recover the option $\omega$. A naive guesser would, on average, require a number of steps of the order of the number of available options, $|\Omega|$, for a uniform option distribution. The trained model performs substantially better than the baseline agent.
\begin{table}[h]
\centering
\begin{tabular}{l|cc}
\thead{Model} & \thead{\#options} & \thead{\#steps} \\
\hline
Baseline & 16 & 11.0 \\
Empowered & 16 & \bf 3.0 \\
Baseline & 256 & 179.0 \\
Empowered & 256 & \bf 9.0
\end{tabular}
\caption{Empowerment performance of the model trained with the trajectory-based objective. Numbers correspond to the median over a minibatch of trials.}
\label{tab:perf}
\end{table}
The fact that the agent is able to transmit information through the environment means that it is able to reliably choose and decode a large number of trajectories, which it discovers through unsupervised training.
\paragraph{Training and model details.}
The agent model consisted of a 3-layer convnet generating an image embedding, an LSTM with 256 hidden units as a memory model, a single-layer fully-connected network for the critic, and a 3-layer fully-connected network for the policy. The inverse model $q$ was implemented using a single fully-connected layer as well. ReLU activations were used between layers, and sigmoid activations were used on the outputs of $q$ and $\pi$ to generate probability values.
All models were trained using the proximal policy optimization algorithm \citep{schulman2017proximal}. The hyperparameters used are listed in Appendix~\ref{sect:hyper}.
\section{Conclusion}
We propose a new variant of an empowerment-based, unsupervised learning method, which is suitable for partially observable settings, model-free, and contains a closed-loop option mechanism.
Unlike other option-based methods, which privilege the final state of a trajectory, our approach uses the full trajectory to infer the effects of the agent's actions. Thus, the intrinsic reward is based on the whole journey, not just the goal.
We successfully train an agent to control a complex environment, purely based on intrinsic reward signals. We focus on a task featuring high-dimensional actions, which make the problem harder but also more interesting, as for successful acting, good representations have to be learned for both observations and actions.
Future work will include combining the unsupervised system with more complex RL tasks and more detailed analyses of more complex environments.
\subsubsection*{Acknowledgments}
We thank Anirudh Goyal, Junhao Wang, and our colleagues at Mila for helpful discussions.
\vfill
\section{Introduction and key results}
One of the most basic notions of condensed matter physics is the quantum mechanical problem of a particle in a periodic potential. Yet, there are still quite fundamental questions relating to the physics of Bloch bands that have not been conclusively answered: How can optimally localized real space representations of band insulators in terms of Wannier functions (WFs) be found systematically and computationally efficiently? Under which circumstances can even compactly supported WFs exist for a given lattice Hamiltonian,
or at least for some representative of its topological equivalence class? These questions are of key importance not only for electronic band structure calculations within the single particle approximation, e.g., in the framework of density functional theory \cite{KohnSham}, but also for the dissipative preparation of topological band structures \cite{DiehlTopDiss} and their variational representation as a starting point for tensor network methods. In this work, we report substantial progress towards a comprehensive answer to these questions, building on a {\it compressed sensing} (CS) based approach to the problem of finding maximally localized WFs recently proposed by Ozolins et al. \cite{OszolinsCompressedModes,OszolinsTranslation}.
\subsection{Localized Wannier functions}
The crucial optimization problem of finding maximally localized WFs $\lvert w_R^\alpha\rangle$ associated with a family of $n$ occupied Bloch vectors $\lvert \psi_k^\alpha\rangle$, $\alpha=1,\ldots,n$,
and $k\in \text{BZ}$ defined in the first Brillouin zone (BZ) has been the subject of active research for many years \cite{VanderbiltReview}. The main difficulty is a local $U(n)$ gauge degree of freedom
in reciprocal space acting on the Bloch functions as
\begin{align}
\lvert \psi_k^\alpha\rangle \mapsto \sum_{\beta=1}^n U_{\alpha,\beta}(k)\lvert \psi_k^\beta\rangle.
\label{eqn:gauge}
\end{align}
This redundancy in the definition of the Bloch functions renders the Wannier representation highly non-unique: A different gauge choice on the Bloch functions can modify the localization properties of the associated WFs which are defined as
\begin{align}
\lvert w_R^\alpha\rangle=\frac{V}{(2\pi)^{d}}\int_{\text{BZ}}\text{d}^{d}k\, \text{e}^{-ikR}\lvert \psi_k^\alpha\rangle,
\label{eqn:WFDef}
\end{align}
where $V$ is the volume of the primitive cell in real space and $d$ is the spatial dimension of the crystal.
Interestingly, the search for maximally localized WFs is substantially influenced by topological aspects of the underlying band structure. The recent study of band structure topology has led to fundamental discoveries like topological insulators and superconductors \cite{HasanKaneXLReview} that have given a new twist to the basic physics of Bloch bands: Roughly speaking, the topology of insulating band structures measures the winding of the submanifold of occupied bands, represented by their projection $\mathcal P_k=\sum_{\alpha=1}^n \lvert \psi_k^\alpha\rangle\langle \psi_k^\alpha \rvert$, in the total space of all bands as a function of the lattice momentum $k$. The archetype of a topological invariant for band structures is the first Chern number
\begin{align}
\mathcal C=\frac{i}{2\pi}\int_{\text{BZ}} \text{d}^2 k\, \text{Tr}\left( \mathcal P_k \left[(\partial_{k_x} \mathcal P_k),(\partial_{k_y} \mathcal P_k)\right]\right),
\label{eqn:Chern}
\end{align}
an integer quantized monopole charge associated with the gauge structure of the Bloch functions that distinguishes topologically inequivalent insulators in two spatial dimensions \cite{TKNN1982}. A non-vanishing monopole charge can be viewed as a fundamental obstruction to finding a global smooth gauge for the family of Bloch functions \cite{TKNN1982, Kohmoto1985}. However, it is precisely this analytical structure of the Bloch functions which determines the asymptotic decay of the associated WFs obtained by Fourier transform (cf.\ Eq.\ (\ref{eqn:WFDef})). This makes it intuitively plausible why a non-trivial band topology can have notable implications on the localization of WFs. Most prominently in this context, it is known that exponentially localized Wannier functions exist if and only if the first Chern number
is zero \cite{chernalgebraic, WannierChern, Matt}. In contrast, in one spatial dimension, Kohn could prove \cite{Kohn1959} that exponentially localized WFs always exist.
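For illustration, Eq.~(\ref{eqn:Chern}) can be evaluated numerically by finite differences of the projectors on a momentum grid; in the following minimal sketch, \texttt{bloch\_h} is a user-supplied placeholder returning the Bloch Hamiltonian matrix, and the forward-difference discretization is one possible choice:

\begin{verbatim}
import numpy as np

def chern_number(bloch_h, n_occ, N=100):
    """Evaluate Eq. (eqn:Chern) by finite differences on an N x N k-grid.

    bloch_h(kx, ky) -- placeholder returning the m x m Bloch Hamiltonian
    n_occ           -- number of occupied bands defining P_k
    """
    def proj(kx, ky):
        _, V = np.linalg.eigh(bloch_h(kx, ky))  # eigh sorts ascending
        Vo = V[:, :n_occ]                       # occupied eigenvectors
        return Vo @ Vo.conj().T                 # projector P_k
    dk = 2.0 * np.pi / N
    c = 0.0j
    for kx in dk * np.arange(N):
        for ky in dk * np.arange(N):
            P = proj(kx, ky)
            dPx = (proj(kx + dk, ky) - P) / dk  # forward differences
            dPy = (proj(kx, ky + dk) - P) / dk
            c += np.trace(P @ (dPx @ dPy - dPy @ dPx)) * dk * dk
    return (1j * c / (2.0 * np.pi)).real        # -> integer as N grows
\end{verbatim}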
For so called symmetry protected topological states \cite{classification}, the situation is less simple. The topological nature of these band structures is rooted in the presence of a discrete symmetry, i.e., they are topologically distinct from an atomic insulator only if these underlying symmetries are maintained. Due to their vanishing Chern numbers, the existence of exponentially localized WFs is guaranteed for symmetry protected topological band structures. However, the possibility of representatives with even compactly supported WFs is unknown for many symmetry protected states. A conclusive understanding of this issue is of particular interest since compactly supported WFs imply the existence of exact flat-band models with finite range hopping \cite{chernflat} and the possibility of dissipative analogs by virtue of local dissipation \cite{BardynTopDiss}.
A remarkably widely adopted and practically very useful approach to maximally localized WFs has been reported in Ref.\ \cite{MarzariVanderbilt1997}, see Ref.\ \cite{VanderbiltReview} for a recent review article. The guiding idea in Ref.\ \cite{MarzariVanderbilt1997} is to localize the WFs in real space by optimizing the gauge of the associated Bloch functions in reciprocal space based on a gradient search algorithm. Generically, this class of algorithms finds a local optimum that depends on the initial choice of gauge.
Very recently, a different paradigm for the construction of localized functions that approximately block-diagonalize a Hamilton operator has been formulated \cite{OszolinsCompressedModes}. This approach is rooted in the theory of CS \cite{Candes}, a contemporary branch of research at the interface between signal processing and fundamental information theory \cite{CSI}, which has also found applications in quantum theory \cite{CS}.
In CS, the expected sparsity of a signal in some basis is employed for its exact reconstruction from significantly under-sampled measurement data, {without having to make use of the exact
sparsity pattern}. To this end, the sparsity of the signal is optimized under the constraint that it be compatible with the incomplete measurement data at hand. Translated to the spectral problem of a Hamiltonian, the analog of the incomplete measurement data is the ambiguity in the choice of basis functions that span a subspace associated with a certain energy range. Under the constraint of not involving basis states outside of this energy range, the sparsity of the basis functions in real space, i.e., their localization, is then optimized. First progress applying this program to the calculation of Wannier functions has been reported in Ref.\ \cite{OszolinsTranslation}.
\subsection{Key results}
In this work, we extend a CS based approach to the search for maximally localized WFs \cite{OszolinsCompressedModes,OszolinsTranslation} to study topological equivalence classes of band structures. The physical motivation of our study is twofold: a comprehensive understanding of the interplay between band structure topology and localization properties of WFs at a fundamental level, and its impact on applications ranging from electronic band structure calculations over variational tensor network methods to dissipative state preparation.
To this end, elaborating on the concepts introduced in Refs.\ \cite{OszolinsCompressedModes,OszolinsTranslation}, we propose a numerically feasible and practical class of algorithms that are capable of manifestly maintaining the underlying physical symmetries of the band structure under investigation. Most interestingly, this allows us to search for maximally localized representatives of a {\it topological equivalence class of band structures} via adiabatic continuity -- an unprecedented approach. The method exploring this possibility does not only search for a gauge of maximally localized WFs for a given Hamiltonian. Instead, the model Hamiltonian flows continuously within the symmetry class of band structures under consideration towards a topologically equivalent sweet spot with compactly supported WFs. The starting point in this case is a set of Wannier functions of a generic representative of the topological state of interest.
The asymptotic scaling of each step of
the present iterative
method is $O(N\log(N))$, where $N$ is the number of lattice sites in the system. We argue that for each step this is, up to constants, the optimal effort:
any algorithm on such a translation invariant problem will at some point involve a fast Fourier transform, which has the same scaling. This speedup compared to Ref.\ \cite{OszolinsTranslation} is rooted in the use of a local orthogonality constraint imposed on the Bloch functions in reciprocal space, which is equivalent to a non-local shift-orthogonality constraint on the WFs. Furthermore, the extended algorithms proposed in this work are capable of exactly preserving the fundamental physical symmetries of the system under investigation. From a practical perspective, this can be of key importance to obtain physically meaningful results when constructing approximate Wannier functions for a given model. For example, if one is concerned with mean field superconductors in the {\it Bogoliubov de Gennes} (BdG) formulation, the fermionic algebra necessarily entails a formal {\it particle hole symmetry} (PHS) constraint;
its violation would lead to inherently unphysical results. From a more fundamental perspective, the capability of manifestly maintaining the underlying symmetries at every iterative step opens up the way to study equivalence classes of topological band structures instead of individual Hamiltonian representatives.
We present benchmark results for a one-dimensional (1D) topological superconductor (TSC) \cite{Kitaev2001} demonstrating the efficiency of our method: Starting from a generic representative Hamiltonian of a 1D TSC, the algorithm converges towards a set of WFs corresponding to a projection $\mathcal P_k$ onto an occupied band that obeys the BdG PHS to high numerical accuracy. In the adiabatic continuity mode described above, our algorithm finds the maximally localized representative of the 1D TSC equivalence class, a state with compactly supported Wannier functions delocalized over two lattice sites. While for this particular state of matter this ``sweet-spot'' point has been constructed analytically in Ref.\ \cite{Kitaev2001}, our search algorithm is capable of discovering it numerically starting from a generic model Hamiltonian represented by non-compact Wannier functions, as illustrated in Fig.~\ref{fig:movies1D}.
For a topologically trivial starting point, the algorithm converges towards a set of atomic orbitals localized at a single lattice site -- the most localized representative of the trivial equivalence class. Finally, we give numerical evidence for the absence of compactly supported Wannier functions for {\it time reversal symmetry} (TRS) protected 2D topological insulators \cite{KaneMele2005a,KaneMele2005b,BHZ2006,koenig2007}: While our adiabatic search algorithm again converges to the WFs of an atomic insulator from a generic topologically trivial starting point, it does not find a compactly supported representative as soon as the initial Hamiltonian has gone through the phase transition to the topological insulator equivalence class. This indicates that there are no two-dimensional topological insulators with compact WFs.
\begin{figure}[htp]
\centering
\includegraphics[width=0.76\columnwidth]{movies1Dv2.pdf}
\caption{(Color online) Evolution of the extent of Wannier functions under the adiabatic continuity algorithm for a trivial 1D superconductor (upper panel) and a non-trivial 1D topological superconductor (lower panel). In both cases, the most localized, compactly supported representatives of the respective phases are found, i.e., a WF localized on a single site (upper panel) and on two sites (lower panel), respectively. Plotted is the real space probability density $\rho_x$ (cf.\ Eq.\ \eqref{eq:rho}) on the horizontal $x$-axis with logarithmic color code from $10^{0}$ (yellow) to $10^{-30}$ (blue), and runtime increases on the vertical $t$-axis in units of ten iterative steps. The initial Wannier functions are obtained from the gauge constructed in Eq.\ (\ref{eqn:blochKitaev}) in Sec.~\ref{sec:restop1D}. Parameters are $\mu =1.5, 2t=2\Delta=1$ and $\mu =0.3, 2t=2\Delta=1$ for the upper and lower panel, respectively. The home cell of both WFs is $x=101$, with total length $L=200$ for both plots.}
\label{fig:movies1D}
\end{figure}
{\emph{Outline. }}The remainder of this article is organized as follows. We define in Section \ref{sec:defopt} the search for maximally localized WFs associated with a given model Hamiltonian as an optimization problem subject to orthogonality and symmetry constraints. In Section \ref{sec:hamalg}, we present an efficient algorithm based on CS to numerically tackle this optimization problem. Numerical results for the 1D TSC are presented in Section \ref{sec:resham}. An algorithm which is not limited to a fixed model Hamiltonian but is designed for finding the most localized representative of a topological equivalence class of Hamiltonians is introduced in Section \ref{sec:algtop}. Benchmark results demonstrating the power of this tool are presented in Section \ref{sec:restop1D} and Section \ref{sec:restop2D}.
Finally, we sum up our findings and give an outlook to possible applications in Section \ref{sec:conclusion}.
\section{Compact Wannier functions from sparsity optimization}
\label{sec:csham}
\subsection{Formulation of the optimization problem}
\label{sec:defopt}
The problem of calculating the electronic (fermionic) band structure of a crystal within the independent particle approximation can be viewed as the quantum mechanical problem of a single particle in a lattice-periodic potential. The spectrum of its solution consists of energy bands parameterized by a lattice momentum. Both eigenvalues and eigenfunctions are periodic with the reciprocal lattice and can hence be constrained to a representative unit cell of the reciprocal lattice called the first Brillouin zone (BZ). The eigenfunctions are so called Bloch states. For a given set of energy bands, WFs, i.e., localized functions in real space that are orthogonal to their own lattice translations (shift orthogonality) can be obtained by Fourier transform of the Bloch states (cf.\ Eq.\ (\ref{eqn:WFDef})). In 1D, this problem has been addressed with methods from complex analysis by Kohn \cite{Kohn1959} showing that exponentially localized Wannier functions always exist. In higher spatial dimensions, topological obstructions can preclude the existence of exponentially localized WFs \cite{WannierChern}, e.g., due to a non-vanishing Chern number in 2D (cf.\ Eq.\ (\ref{eqn:Chern})).
The work by Kohn \cite{Kohn1959}, as well as the majority of applications for band structure calculations \cite{VanderbiltReview}, focuses on periodic problems in the spatial continuum. In practice, the continuous problem is oftentimes not approximated by a straightforward discretization in real space but by deriving a so-called tight binding model. The relevant degrees of freedom of such a model are then a finite number of orbitals per site of a discrete lattice with the periodicity of the crystalline potential. Our work is concerned with such lattice models within the independent particle approximation from the outset.
{\emph{Definitions. }}We consider a system with Hamiltonian $H$ on a hypercubic lattice with unit lattice constant and $N=L^d$~sites with periodic boundary conditions. Each lattice site hosts $m$~internal degrees of freedom (orbitals), and $n$ bands are occupied. Our single particle wave functions are hence normalized vectors in $\cc^{m N}$, and a set of Wannier functions is represented by a matrix $\psi\in \cc^{mN\times n}$ with shift-orthonormal columns, i.e., $\psi^\dag T_j \psi=\ii \delta_{j,0}$ for all $j\in \mathbb Z_L^d$, where $T_j$ performs a translation by the lattice vector $j\in \mathbb Z_L^d$. We denote the matrix elements by $\psi_{\nu, j; \alpha}$, where $\nu \in \{1,\ldots,m\}$,
$j\in \mathbb Z_L^d$, and $\alpha \in \left\{1,\ldots,n\right\}$. Among any set of shift orthogonal functions, a set of WFs associated with the $n$ occupied bands is
distinguished by minimizing the quadratic energy functional
\begin{align}
\mathcal E[\psi] = \text{Tr}(\psi^\dag H \psi).
\label{eqn:evar}
\end{align}
While the Slater determinant forming the many body ground state characterized by its minimal energy expectation value of the insulating band structure is unique up to a global phase, the set of possible single particle WFs $\psi$ representing this ground state, i.e., minimizing $\mathcal E$ is highly non-unique.
This is due to the local $U(n)$~gauge degree of freedom on the Bloch functions (cf.\ Eq.\ (\ref{eqn:gauge})).
Within this set, we would like to identify the representative where the probability density $\rho_j^\alpha = \sum_\nu |\psi_{\nu, j; \alpha}|^2$ is most localized in real space.
In the language of compressed sensing, localization is referred to as sparsity.
As suggested in Ref.\ \cite{OszolinsCompressedModes}, an $l_1$-norm regularization of the energy functional (\ref{eqn:evar}) is a convenient way to enforce the localization of the WFs. Concretely, as a measure for sparsity, we use the vector
$l_1$-norm $\|\sqrt{\rho}\|_{l_1}=\sum_{j,\alpha}\lvert \sqrt{\rho_j^\alpha}\rvert$ of the square root of the probability density, as a convex relaxation with more benign continuity properties than discrete measures like the rank.
For the WFs themselves, we define the $\rho$-norm as the $l_1$-norm of the square root of the associated probability density, i.e.,
\begin{align}\label{eq:rho}
\|\psi\|_\rho=\| \sqrt{\rho}\|_{l_1}.
\end{align}
A minimization with respect to the $\rho$-norm localizes the WFs only in real space and not in the internal degrees of freedom, as desired. The localization can be enforced by
adding a term $\|\psi\|_\rho/\xi$ to the energy functional $\mathcal E$ \cite{OszolinsCompressedModes}. The real parameter $\xi>0$ tunes the priority of the localization (sparsity)
condition relative to the energy minimization condition. The optimization problem considered
is hence the minimization of the $l_1$-regularized energy expectation \cite{OszolinsCompressedModes}
\begin{equation}
\mathcal E(\psi) + \frac{1}{\xi}\|\psi\|_\rho,
\label{eqn:l1reg}
\end{equation}
such that $\psi^\dagger T_j\psi=\ii \delta_{j,0}$. The latter is a non-convex orthogonality constraint \cite{Yin}.
Even if the minimization of (\ref{eqn:l1reg}) will for finite $\xi$ in general produce approximations to the WFs of the model characterized by $H$, we would like to make sure that the band structure defined by the resulting WFs preserves the underlying physical symmetries of the problem exactly. It is key to our algorithm that these symmetries can be exactly maintained.
Constraints that we will explicitly consider in this work are TRS $\mathcal T$ and PHS $\mathcal C$ (see Eq.\ (\ref{eqn:symconstraints}) for the corresponding constraints on the projection $\mathcal P_k$). Generically, we denote the set of local symmetry constraints by $\mathcal S$. With these definitions, the problem of maximally localized WFs can, for each $\xi>0$, be concisely stated as the $l_1$-regularized minimization problem
\begin{align}
&\psi=\text{arg min}_\phi \left(\mathcal E(\phi)+\frac{1}{\xi} \| \phi \|_\rho\right),\nonumber\\
&\text{subject to}\quad (\phi^\dag T_j \phi = {\ii }\delta_{j,0})~\text{and}~\mathcal S,
\label{eqn:minfunctional}
\end{align}
where arg gives the argument that minimizes the functional. The objective function is convex, while the symmetries and orthogonality constraints give rise to quadratic equality constraints.
\subsection{Compressed sensing based algorithm}
\label{sec:hamalg}
Convex $l_1$ regularized problems can be practically and efficiently solved using a number of methods. Here, we
use a {\it split Bregman method} \cite{Bregman,Yin}, which has been proposed to calculate maximally localized WFs in Refs.\ \cite{OszolinsCompressedModes, OszolinsTranslation},
in a way that conveniently allows us to include symmetries. The split Bregman method is related to the method of multipliers \cite{59}, which in turn can be connected to the alternating direction method of multipliers \cite{29}. Each step can then be implemented with as little as $O(N\log N)$ effort in the system size $N$.
The idea of a split Bregman iteration is to decompose the full optimization problem defined in Eq.\ (\ref{eqn:minfunctional}) into a set of coupled subproblems that can be solved exactly at every iterative step. We start from the simplest case without additional symmetries $\mathcal S$. In this case, our algorithm can be viewed as a numerically more efficient modification of the algorithms introduced in Refs.\ \cite{OszolinsCompressedModes, OszolinsTranslation}, adapted to and generalized for
a lattice Hamiltonian with internal degrees of freedom. We define the auxiliary variables $Q,R$ and associated noise terms $q,r$ that have the same dimension as the set of WFs $\psi\in
\cc^{mN\times n}$. During every step of the iteration, $\psi$ will optimize the energy functional $\mathcal E$ augmented by bilinear coupling terms (see step (i) below), $Q$ will be subject to a {\it soft thresholding procedure} stemming from the $\rho$-norm optimization (see step (ii)), and $R$ will be subject to the shift-orthogonality constraint defining a proper set of WFs (see step (iii)).
The noise terms $q$ and $r$ are incremented by the difference between $\psi$ and the auxiliary variables $Q$ and $R$, respectively (see steps (iv)-(v)). The algorithm in the absence of symmetries $\mathcal S$ then reads
{as pseudocode}
\begin{align}
&\text{Initialize } \psi=Q=R,~q=r=0. {\text{ While not converged do}}\nonumber \\
&\text{(i) } \psi\mapsto\text{arg}\min_{\psi} \left( \mathcal E[\psi]+\frac{\lambda}{2}\lVert \psi-Q+q\rVert^2_F+\frac{\kappa}{2}\lVert \psi-R+r\rVert^2_F\right),\nonumber\\
&\text{(ii) } Q\mapsto \text{arg}\min_{Q}\left( \frac{1}{\xi}\| Q\|_\rho+ \frac{\lambda}{2}\lVert \psi-Q+q\rVert^2_F\right),\nonumber\\
&\text{(iii) } R\mapsto\text{arg}\min_R\frac{\kappa}{2}\lVert \psi-R+r\rVert^2_F,~\text{s.t.}~\tilde R_k^\dag \tilde R_k=\frac{\ii}{L^{d/2}}~\forall k,\nonumber\\
&\text{(iv) } q\mapsto q+\psi-Q,\nonumber\\
&\text{(v) } r\mapsto r+\psi-R,
\end{align}
where $\lVert M\rVert_F=\left(\sum_{i,j}|M_{i,j}|^2\right)^{1/2}$ denotes the Frobenius matrix norm of a matrix $M$,
and $\tilde R_k$~the Fourier transform of $R$~at momentum $k$. $\lambda,\kappa,\xi>0$ are coupling constants.
The way this problem is split into parts, the
subproblems (i)-(iii) admit explicit exact solutions that do not require any optimization, given by
\begin{align}
&\text{(i)}~ \psi=(2H+\lambda+\kappa)^{-1}(\kappa(R-r)+\lambda (Q-q)),\nonumber\\
&\text{(ii)}~ Q=\text{Shrink}\left(A,\frac{1}{\lambda \xi}\right),\nonumber\\
&\text{(iii)}~ \tilde R_k=\tilde B_k U\Lambda^{-\frac{1}{2}}U^\dag.
\label{eqn:exactmin}
\end{align}
Here $A=\psi+q,~ B=\psi+r$,
\begin{equation}
\text{Shrink}(b,\epsilon)= \frac{b}{\lvert b\rvert} \max(0,\lvert b\rvert-\epsilon)
\end{equation}
is applied independently to each of the $m$-spinors $B_j^\alpha$ associated with the Wannier function $\alpha$~evaluated at site $j$.
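For concreteness, the thresholding step (ii) admits a short vectorized implementation. The following Python sketch is an illustration only; the array layout with indices (site, internal component, band) is our own convention and not part of the original formulation.
\begin{verbatim}
import numpy as np

def shrink(B, eps):
    # B has shape (N, m, n): N sites, m internal components,
    # n Wannier functions; each m-spinor B[j, :, a] is shortened
    # in norm by eps, or set to zero if its norm is below eps.
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(0.0, norms - eps) / np.maximum(norms, 1e-300)
    return B * scale

# step (ii):  Q = shrink(psi + q, 1.0 / (lam * xi))
\end{verbatim}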
Also,
\begin{equation}
\tilde B_k^\dag \tilde B_k=U\Lambda U^\dag
\end{equation}
with $U$ unitary and $\Lambda$ diagonal, is an eigenvalue
decomposition of the positive Hermitian matrix $\tilde B_k^\dag \tilde B_k$. The orthogonality constraint $\tilde R_k^\dag \tilde R_k=\ii/{L^{d/2}}~\forall k$ on the Bloch functions occurring in step (iii) is equivalent
to the shift orthogonality constraints $R^\dag T_j R={\ii} \delta_{j,0}~\forall j$
on the Wannier functions. However, due to the local nature of the former, step (iii) can readily be solved exactly as explicitly done above, whereas the numerically less
efficient method of Lagrange multipliers has been proposed in Ref.\ \cite{OszolinsTranslation} to enforce the latter non-local constraint in real space.
This is true even though it arises from a convex problem with a quadratic orthogonality constraint.
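A minimal Python sketch of the exact solution of step (iii), performed independently at every momentum $k$, could read as follows. We assume the Bloch functions are stored as an array of shape (number of momenta, $m$, $n$); the normalization convention encoding the factor $L^{-d/2}$ is our own assumption.
\begin{verbatim}
import numpy as np

def reorthogonalize(B_k, L, d=1):
    # Closest (in Frobenius norm) shift orthogonal Bloch frame:
    # diagonalize B_k^dag B_k = U Lam U^dag at each k and set
    # R_k = B_k U Lam^{-1/2} U^dag, cf. step (iii) above.
    R = np.empty_like(B_k, dtype=complex)
    for i, B in enumerate(B_k):
        lam, U = np.linalg.eigh(B.conj().T @ B)
        R[i] = B @ (U * lam**-0.5) @ U.conj().T
    return R / L**(d / 4)   # normalization convention (assumption)
\end{verbatim}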
More explicitly, the Fourier transform involved in the implementation used in the present work scales as $O(N\log N)$ if a fast Fourier algorithm is used.
Each step of the procedure is hence efficient. Rigorous convergence proofs for
split Bregman methods are known for $l_1$-regularized convex problems \cite{Convergence}. Here, including the equality constraints, there is still
evidence that the entire method is efficient and convergent as well, in line with the findings of Ref.\ \cite{Yin}.
Step (iii) of the above algorithm solves the following problem: given a set of wave functions $B$, it finds the closest possible (in Frobenius norm)
set of properly shift orthogonal Wannier functions. Imposing additional local symmetry constraints $\mathcal S$~further complicates step (iii) of the above algorithm. From our numerical data presented below, it becomes clear that imposing constraints like PHS can be of key importance to obtain physically meaningful results. The simplest way to implement such symmetries is by considering the projection
\begin{align}
\mathcal P_k=\sum_{\alpha=1}^n \tilde \psi_k^\alpha \tilde \psi_k^{\alpha \dag}
\label{eqn:defpofk}
\end{align}
onto the occupied Bloch states at momentum $k$. Local symmetries will basically impose local constraints on this quantity, the only significant complication being the complex conjugation $K$ involved in anti-unitary constraints like TRS and PHS which connects $k$ and $-k$. Explicitly, for TRS $\mathcal T$ and PHS $\mathcal C$, we get the constraints
\begin{equation}
\mathcal T \mathcal P_k \mathcal T^{-1}=\mathcal P_{-k},\,\,
\mathcal C \mathcal P_k \mathcal C^{-1}=1-\mathcal P_{-k},
\label{eqn:symconstraints}
\end{equation}
respectively.
With these definitions, we are ready to formulate a symmetry purification procedure augmenting step (iii). To this end, we follow (iii) to obtain the closest Bloch functions for half of the BZ and calculate $\mathcal P_k$. For the other half of the BZ, $\mathcal P_k$ is then obtained by symmetry conjugation by virtue of Eq.\ (\ref{eqn:symconstraints}). The Bloch functions spanning $\mathcal P_k$ for this second half are obtained by projecting the Bloch functions from the previous iteration $\tilde B_k$ onto $\mathcal P_k$ and again performing an orthogonalization based on an eigenvalue decomposition. By this continuous gauge prescription, we make sure that an input function $\tilde B_k$ that already obeys the given symmetry is unchanged by the purification procedure. This ensures that the algorithm can become stationary for the desired solution. The choice of how the BZ is divided into two halves is to some extent
arbitrary. However, the fact that the Bloch basis in which we perform this purification and the real space basis in which the thresholding (ii) is performed are maximally incoherent bases
{\cite{Candes}}
prevents systematic effects of this choice. For a unitary local symmetry, no constraint between $k$ and $-k$ is introduced and the symmetry purification can be done locally at every point in momentum space.
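As an illustration, the purification on the level of the projections could be sketched as follows in Python. The momentum grid indexing (with $-k$ corresponding to the negated array index) and the matrix $C$ denoting the unitary part of the antiunitary PHS are our own assumptions; the subsequent projection and re-orthogonalization of the Bloch functions is omitted for brevity.
\begin{verbatim}
import numpy as np

def purify_phs(P_k, C):
    # Enforce Eq. (symconstraints): P_{-k} = 1 - C P_k C^{-1},
    # with the complex conjugation of the antiunitary PHS acting
    # as .conj(); the first half of the BZ is kept as computed.
    Nk, m, _ = P_k.shape
    P = P_k.copy()
    for i in range(1, Nk // 2 + 1):
        # for even Nk, the self-conjugate point k = pi is simply
        # overwritten in place by its symmetry image
        P[-i] = np.eye(m) - C @ P[i].conj() @ np.linalg.inv(C)
    return P
\end{verbatim}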
In summary, the core of our method consists of iteratively shrinking the spatial extent of the WFs by a soft thresholding prescription while reestablishing symmetry and orthogonality constraints on the associated projection $\mathcal P_k$ at every step. The localization and compact support of the WFs is enforced directly in real space by virtue of $l_1$-norm optimization. Shift orthogonality and symmetry constraints enforce the defining properties of the desired WFs. For a search limited to WFs of a fixed lattice model Hamiltonian, the subspace corresponding to a certain subset of bands and with that to a certain energy range is selected by minimizing a quadratic energy functional as proposed in Ref.\ \cite{OszolinsCompressedModes}. Hence, the CS approach does not require the knowledge of an initial set of WFs as a starting point. The converged trial functions are compactly supported, well-defined Wannier functions by construction. Their degree of localization can be tuned arbitrarily by a sparsity parameter $\xi$, with a tradeoff in controlling their quality in representing the given model Hamiltonian.
\subsection{Results for the 1D TSC state}
\label{sec:resham}
As an interesting benchmark example, we consider the 1D TSC proposed by Kitaev in 2001 \cite{Kitaev2001} which is distinguished from a trivial superconductor by a topological $\mathbb Z_2$-invariant. The simplest representative of this state is a 1D lattice of spinless fermions modelled by the Hamiltonian
\begin{align}
H_p =\sum_j \left(-t c_j^\dag c_{j+1} +\frac{\mu}{2} c_j^\dag c_j-\Delta c_j c_{j+1}+\text{h.c.}\right),
\end{align}
where $t$ is a real nearest neighbor hopping constant, $\mu$~models a chemical potential, and $\Delta$~is a proximity-induced $p$-wave pairing amplitude.
The $c_j~(c_j^\dag)$~are the fermionic annihilation (creation) operators. Introducing the Nambu spinors $\Psi_j=(c_j,c_j^\dag)^T$ and their Fourier transforms $\tilde \Psi_k=(\tilde c_k,\tilde c_{-k}^\dag)^T$, $H_p$ can be written in the BdG picture as
\begin{align}
H_p=\frac{1}{2}\int_{0}^{2\pi}\tilde \Psi_k^\dag d^i(k)\tau_i\tilde \Psi_k \, dk,
\label{eqn:KitaevBdG}
\end{align}
where $\tau_i$ are Pauli matrices in Nambu space and
\begin{align}
d^1(k)&=0,\\
d^2(k)&=-2 \Delta \sin(k),\\
d^3(k)&=\mu -2t\cos(k).
\end{align}
For simplicity, we consider the specific case $2\Delta=2t=1$. As a function of $\mu$, $H_p$ is then in the topologically non-trivial phase for $\lvert \mu \rvert<1$, critical for $\lvert\mu\rvert=1$, and trivial otherwise. The description in terms of Nambu spinors implies a formal doubling of the degrees of freedom while keeping the number of physical degrees of freedom fixed.
This redundancy is reflected in an algebraic constraint on the BdG Hamiltonian that can be viewed as a formal PHS $\mathcal C=\tau_1 K$, where $K$ denotes complex conjugation.
The BdG Hamiltonian (\ref{eqn:KitaevBdG}) is formally equivalent to an insulating band structure with one occupied and one empty band. The projection $\mathcal P_k$ onto the occupied band can be expressed as
\begin{align}
\mathcal P_k=\frac{1}{2}(1-\hat d^i(k) \tau_i),
\label{eqn:ptwoband}
\end{align}
where $\hat d(k)={d}(k)/{\lvert d(k)\rvert}$.
\begin{figure}[htp]
\centering
\includegraphics[width=0.72\columnwidth]{CompressedPlotsJens.pdf}
\caption{(Color online) Logarithmic plots of the probability density $\rho$ of WFs with home cell $x=101$ for a non-trivial 1D TSC with $\mu =0.3, 2t=2\Delta=1$ (see Section \ref{sec:resham} for definitions). Upper panel: Result of the algorithm without additional symmetries and coupling constants $\xi=10,r=50,\lambda=20$. Central panel: Result of the algorithm with $\mathcal S=\left\{\text{PHS}\right\}$ and coupling constants $\xi=10,r=50,\lambda=20$. Lower panel: WF from the gauge constructed in Eq.\ (\ref{eqn:blochKitaev}). $L=200$ has been chosen for all plots.}
\label{fig:hamcomparison}
\end{figure}
We now apply the algorithm introduced in Section \ref{sec:hamalg} to the toy model (\ref{eqn:KitaevBdG}). We first ignore the PHS constraint and apply the algorithm without further symmetries $\mathcal S$. For $\xi=10,r=50,\lambda=20,\mu=0.3,L=200$, it converges towards a set of compact WFs (see Fig.\ \ref{fig:hamcomparison}, upper panel) that minimize $\mathcal E$ to a relative accuracy of $1.8\times10^{-3}$ but that break PHS by as much as $2.0$ percent. The violation of PHS is measured by the deviation of $\lVert \mathcal P_k-(1-\mathcal C \mathcal P_{-k}\mathcal C^{-1})\rVert_F$ from zero (cf.\ Eq.\ (\ref{eqn:symconstraints})). Note that a set of WFs whose associated projection $\mathcal P_k$ does not preserve the Nambu PHS $\mathcal C=\tau_1 K$ cannot describe any physical superconductor. This demonstrates how important it is to manifestly maintain PHS here. In a next step, we apply the algorithm with $\mathcal S=\left\{\text{PHS}\right\}$ for the same parameters. It converges towards compactly supported WFs (see Fig.\ \ref{fig:hamcomparison}, central panel) which minimize $\mathcal E$ to a relative accuracy of $1.7\times10^{-3}$ and for which $\mathcal P_k$ preserves PHS to $2.0\times10^{-8}$ accuracy within our numerical precision, i.e., six orders of magnitude better than without explicitly maintaining PHS. We show a logarithmic plot of the probability density $\rho$ of the converged WFs in Fig.\ \ref{fig:hamcomparison}. From these plots, it becomes clear why PHS is rather strongly broken if not explicitly maintained: the PHS breaking WFs (upper panel) minimize the energy functional $\mathcal E$ to roughly the same accuracy but have a somewhat smaller $l_1$-norm at the expense of violating PHS. We compare the results of our algorithm to an analytically obtained WF (lower panel) which has been computed from a smooth family of Bloch functions (see Eq.\ (\ref{eqn:blochKitaev}) below for its explicit construction), and which is clearly much less localized (note the logarithmic scale).
\section{Maximally localized representatives of topological equivalence classes}
\label{sec:cstop}
\subsection{Adiabatic continuity algorithm}
\label{sec:algtop}
In Section \ref{sec:hamalg}, we introduced an algorithm that is designed to find compactly supported WFs for a fixed model Hamiltonian. In this Section, we present a tool which searches for the most localized compactly supported WFs not only for a given Hamiltonian but within an entire topological equivalence class. Topological equivalence classes are the connected components of the set of all free Hamiltonians obeying certain local symmetries $\mathcal S$. In other words, starting from any Hamiltonian that preserves $\mathcal S$, its topological equivalence class is defined by all Hamiltonians that can be reached adiabatically, i.e., continuously without closing the band gap and without breaking $\mathcal S$. We confine our attention to topological states relying on at least one symmetry constraint, i.e., states with zero Chern number. For states with non-zero Chern number, it is known that no representative with exponentially localized, let alone compactly supported, WFs can exist \cite{Brouder}.
The key idea of our adiabatic continuity algorithm is the following:
Start from a set of WFs associated with a generic representative of a given topological equivalence class. Perform the split Bregman iteration introduced in Section \ref{sec:hamalg} with the crucial difference that the energy functional $\mathcal E$~is set to zero. That way the bias towards a particular model Hamiltonian is completely removed. However, the symmetries $\mathcal S$ are again restored at every step of the iteration and the $\rho$-norm optimization is continuous on a coarse grained scale controlled by the thresholding parameter ${1}/({\lambda \xi})$. Hence, the model Hamiltonian that the instantaneous WFs represent will flow continuously in the topological equivalence class of the Hamiltonian associated with the initial set of WFs. The only bias of this flow is the $\rho$-norm optimization, i.e., the localization of the WFs in real space
by minimization of the ${{l_1}}$-norm of the square root of their probability density. Thus, the adiabatic continuity algorithm searches for the most localized representative of a given topological state of matter.
For the converged set of WFs, the corresponding Bloch functions are readily obtained by Fourier transform. From these Bloch functions, the projection onto the occupied bands $\mathcal P_k$ is straightforward to compute (see Eq. (\ref{eqn:PfromWanniers})). The generic flat band Hamiltonian $Q(k)=1-2\mathcal P_k$ then defines an explicit model Hamiltonian for the most localized representative of the topological equivalence class under investigation.
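To make the modification concrete, the following Python sketch shows one sweep of the adiabatic continuity iteration, reusing the helper routines \texttt{shrink} and \texttt{reorthogonalize} from the sketches above; variable names, the 1D array layout and the FFT conventions are our own assumptions. With $\mathcal E$ set to zero, step (i) of the split Bregman loop reduces to a weighted average of the auxiliary variables.
\begin{verbatim}
import numpy as np

def adiabatic_sweep(psi, Q, R, q, r, lam, kap, xi, L, d=1):
    # (i) with E = 0: minimize the two quadratic coupling terms only
    psi = (lam * (Q - q) + kap * (R - r)) / (lam + kap)
    # (ii) soft thresholding in real space
    Q = shrink(psi + q, 1.0 / (lam * xi))
    # (iii) re-orthogonalization in momentum space (plus symmetry
    # purification when S is non-empty), then back to real space
    B_k = np.fft.fft(psi + r, axis=0)
    R = np.fft.ifft(reorthogonalize(B_k, L, d), axis=0)
    # (iv)-(v) Bregman updates of the noise variables
    q = q + psi - Q
    r = r + psi - R
    return psi, Q, R, q, r
\end{verbatim}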
\subsection{Maximally localized representatives in symmetry class D in one dimension}
\label{sec:restop1D}
To benchmark the adiabatic continuity algorithm, we would like to apply it to the 1D TSC model (\ref{eqn:KitaevBdG}) introduced in Section \ref{sec:resham}.
In the language of Ref.\ \cite{Altland}, this model belongs to the symmetry class D.
For this model, the result of a perfect performance is clear: from a topologically trivial starting point, we would expect our algorithm to converge towards an ``atomic'' Wannier function which has support only on a single site. From Ref.\ \cite{Kitaev2001}, we also know the simplest representatives of the topologically nontrivial class, which are of the form $\lvert t\rvert =\lvert \Delta\rvert >0=\mu$. Such exactly dispersionless models are characterized by WFs corresponding to operators of the form $w_j=(c_j+c_j^\dag-c_{j+1}+c_{j+1}^\dag)/2$ with compact support on only two sites around $j\in \left\{1,\ldots,L\right\}$. It is clear that no topologically non-trivial state can be represented by WFs with support on a single site, as this would preclude any momentum dependence of $\mathcal P_k$. We hence expect a set of WFs annihilated by operators similar to $w_j$ as a result of our adiabatic search in the topologically non-trivial sector.
As a starting point we calculate a set of WFs from a family of Bloch functions representing the occupied band of $H_p$ for generic $\mu$. A global gauge defining a family of Bloch functions
${k\mapsto \lvert u_-(k)\rangle}$ for the occupied BdG band can be constructed as
\begin{align}
\lvert u_-(k)\rangle = \frac{\mathcal P_k \lvert +x\rangle}{\lvert \mathcal P_k \lvert +x\rangle\rvert},
\label{eqn:blochKitaev}
\end{align}
where $\lvert +x\rangle=\tau_1 \lvert +x\rangle$ is a $\tau_1$ eigenvector. From Eq.\ (\ref{eqn:ptwoband}), it is easy to see that this gauge is regular for all $k$ since $d^1(k)=0$.
The initial WFs $\psi_0$~are then simply obtained by Fourier transform of the Bloch functions. Since the maps $k\mapsto \lvert u_-(k)\rangle$ resulting from Eq.\ (\ref{eqn:blochKitaev}) are $C^\infty$ functions, the corresponding Wannier functions are bound to decay asymptotically faster than every power law, and in fact exhibit exponential tails, as verified in Fig.\ \ref{fig:inWFLog}. Our gauge choice turns out to be more efficient for the non-trivial WF, which decays much more rapidly.
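For reference, this construction of the initial WFs is compact enough to sketch in a few lines of Python, combining Eqs.\ (\ref{eqn:ptwoband}) and (\ref{eqn:blochKitaev}); the FFT normalization conventions are glossed over as assumptions.
\begin{verbatim}
import numpy as np

tau2 = np.array([[0, -1j], [1j, 0]])
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)

def initial_wannier(mu, L, t=0.5, Delta=0.5):
    k = 2 * np.pi * np.arange(L) / L
    d2 = -2 * Delta * np.sin(k)          # d-vector of Eq. (KitaevBdG)
    d3 = mu - 2 * t * np.cos(k)
    dn = np.sqrt(d2**2 + d3**2)
    # P_k = (1 - d^i(k) tau_i / |d|) / 2, Eq. (ptwoband)
    P = 0.5 * (np.eye(2) - (d2[:, None, None] * tau2
                            + d3[:, None, None] * tau3) / dn[:, None, None])
    plus_x = np.array([1.0, 1.0]) / np.sqrt(2)   # tau_1 eigenvector |+x>
    u = P @ plus_x                               # gauge of Eq. (blochKitaev)
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return np.fft.ifft(u, axis=0)                # Wannier spinors w(x)
\end{verbatim}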
Using these functions as an input, the algorithm described in Section \ref{sec:algtop} indeed converges to the correct benchmark results in less than one minute on a regular desktop computer for a lattice of size $L=200$. In other words, our search algorithm numerically detects the ``sweet spot'' point with compactly supported WFs from Ref.\ \cite{Kitaev2001}, starting from a generic set of WFs representing some Hamiltonian with dispersive bands in the same topological equivalence class. Conversely, as soon as we tune $\mu$ over the topological quantum phase transition to a trivial 1D superconducting state, our search algorithm correctly finds an atomic WF representing the simplest possible trivial Hamiltonian.
In Fig.\ \ref{fig:movies1D}, we visualize the performance of our algorithm with a logarithmic color plot of the probability density $\rho_x$ at lattice site $x$ as a function of the computation time $t$. The final WFs concur with the anticipated perfect benchmark results to impressive numerical precision.
\begin{figure}
\includegraphics[width=0.7\columnwidth]{initialWFs.pdf}
\caption{(Color online) Logarithmic plot of the probability density $\rho$ of sets of Wannier functions from the gauge constructed in Eq.\ (\ref{eqn:blochKitaev}) for a trivial 1D superconductor with $\mu =1.5, 2t=2\Delta=1$ (lower panel) and a non-trivial 1D TSC with $\mu =0.3, 2t=2\Delta=1$ (upper panel). The home cell of both WFs is $x=101$. The linear tails demonstrate the asymptotic exponential decay. $L=200$ is chosen for both plots.}
\label{fig:inWFLog}
\end{figure}
\subsection{Absence of compactly supported topological insulator WFs in symmetry class AII in 2D}
\label{sec:restop2D}
We would now like to turn our attention to time reversal symmetric 2D insulators, in symmetry class AII \cite{Altland}.
For states in symmetry class A with non-vanishing first Chern number, so-called Chern insulators \cite{QAH}, only algebraically decaying WFs can be found. As a consequence, Chern insulators with exponentially localized or even compactly supported WFs cannot exist.
However, the situation is less obvious for TRS protected topological insulators, a.k.a.\ quantum spin Hall (QSH) insulators \cite{KaneMele2005a,KaneMele2005b,BHZ2006,koenig2007}. The conceptually simplest representative of this topological equivalence class consists of two TRS conjugated copies of a Chern insulator with opposite odd Chern number, one copy for each spin block
{(cf.\ Ref.\ \cite{Matt})}. While the individual spin blocks have non-zero Chern number, the total set of occupied bands has zero Chern number as required by TRS.
Hence, a smooth gauge mixing the two TRS conjugated blocks can be found for the Bloch functions \cite{Soluyanov2011}.
Here, we would like to consider a minimal model for a QSH insulator analogous to the one introduced in Ref.\ \cite{BHZ2006} which has $m=4$ degrees of freedom per site and $n=2$ occupied bands. The four degrees of freedom are labeled by the basis vectors $\vert e,\uparrow \rangle,\vert h,\uparrow \rangle,\vert e,\downarrow \rangle,\vert h,\downarrow \rangle$. We denote the $e$-$h$ pseudo spin by $\sigma$ and the real spin by $s$. The Bloch Hamiltonian of the spin up block reads
\begin{align}
& h_{\uparrow}(k)=d_{\uparrow}^i(k)\sigma_i,\quad d_{\uparrow}^1(k)=\sin(k_x),\nonumber\\
&d_{\uparrow}^2(k)=\sin(k_y),\quad d_{\uparrow}^3(k)=M-\cos(k_x)-\cos(k_y).
\label{eqn:BHZ}
\end{align}
The Hamiltonian of the TRS conjugated block is then defined by $h_\downarrow(k)=h^*_\uparrow(-k)$. This model is topologically nontrivial for $0<\lvert M\rvert < 2$ and trivial for $\lvert M\rvert >2$. The projection onto the occupied bands $\mathcal P_k$ can for each $k$ be written as the sum of
\begin{equation}
\mathcal P_k^\uparrow =\frac{1}{2}\left(1-\hat d_\uparrow^i(k)\sigma_i\right)\otimes\lvert \uparrow\rangle\langle \uparrow \rvert
\end{equation}
and
\begin{equation}
\mathcal P_k^\downarrow =\frac{1}{2}\left(1-\hat d_\downarrow^i(k)\sigma_i\right)\otimes\lvert \downarrow\rangle\langle \downarrow \rvert.
\end{equation}
A smooth gauge of Bloch functions ${k\mapsto \lvert u_i(k)\rangle}$, $i=1,2$, can be found in a generic way \cite{VanderbiltReview}. One first chooses a set of trial orbitals $\lvert \tau_i\rangle$, $i=1,2$,
which are random linear combinations of the four basis orbitals. Projecting onto the occupied bands yields $\lvert \gamma_i(k)\rangle=\mathcal P_k \lvert \tau_i\rangle$. If the
family of Gram matrices with entries
\begin{equation}
S_{ij}(k)=\langle \gamma_i(k)\vert \gamma_j(k)\rangle
\end{equation}
is regular for all $k$, smooth Bloch functions defined as
\begin{equation}
k\mapsto \lvert u_i(k) \rangle=\sum_j S^{-{1}/{2}}_{j,i}(k)\,
\lvert \gamma_j(k)\rangle
\end{equation}
can be obtained. In practice, by trying a few random choices, a gauge for which ${\det(S(k) )\ge 10^{-2}}~\forall k$ can be readily found.
The associated WFs are then obtained by Fourier transform. Note that these WFs, while still spanning the same many-body state of occupied bands, individually break all symmetries present in Eq.\ (\ref{eqn:BHZ}) due to the random choice of $\tau_i$.
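In Python, this projection-based gauge construction could be sketched as follows; the array conventions and the retry loop are our own choices.
\begin{verbatim}
import numpy as np

def projection_gauge(P_k, n, seed=0, det_min=1e-2):
    # P_k: (Nk, m, m) projections onto the occupied bands.
    rng = np.random.default_rng(seed)
    Nk, m, _ = P_k.shape
    while True:
        tau = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
        gamma = P_k @ tau                      # |gamma_i(k)> = P_k |tau_i>
        S = np.swapaxes(gamma, 1, 2).conj() @ gamma   # Gram matrices S(k)
        if np.abs(np.linalg.det(S)).min() < det_min:
            continue                           # nearly singular gauge: retry
        lam, U = np.linalg.eigh(S)             # S = U Lam U^dag, per k
        S_invhalf = (U * lam[:, None, :]**-0.5) @ np.swapaxes(U, 1, 2).conj()
        return gamma @ S_invhalf               # smooth |u_i(k)>
\end{verbatim}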
We employ the above prescription to find exponentially decaying WFs both in the topologically trivial and nontrivial regime on a lattice of $N=101\times 101$ sites. These WFs are then used as starting points for the adiabatic continuity algorithm introduced in Section \ref{sec:algtop}. For WFs associated with topologically trivial insulators, i.e., $\lvert M\rvert>2$, our algorithm finds a set of atomic WFs representing the most localized topologically trivial insulator to impressive numerical accuracy (see Fig. \ref{fig:movie2D}).
\begin{figure*}
\centering
\includegraphics[width=.8\linewidth]{Movie2DTrivWide2.pdf}
\caption{(Color online) Logarithmic plot of the probability density $\rho$ of an initial Wannier function for the model Hamiltonian (\ref{eqn:BHZ}) for the topologically trivial mass parameter $M=2.5$ (leftmost panel). The home cell of the WFs is $(x,y)=(51,51)$, the size of the lattice is $101\times101$. Adiabatically deformed WF after 100, 500, and 727 (rightmost panel) iterations, respectively, with $\xi=\kappa=\lambda=50$.}
\label{fig:movie2D}
\end{figure*}
However, as soon as the initial set of WFs corresponds to a non-trivial QSH state, the algorithm does not find a compactly supported set of WFs. This result gives numerical evidence that a simple flat band model Hamiltonian with finite range hopping does not exist for the QSH state, in contrast to the 1D TSC. The relation between flat band models with finite range hopping and compact WFs becomes clear from the following representation of the projection $\mathcal P_k$ onto the occupied bands at momentum $k$ in terms of Wannier functions $w^\alpha_0, \alpha=1,\ldots,n$, centered around the origin,
\begin{align}
\mathcal P_k = \sum_{\alpha=1}^n \sum_{r,r'}\text{e}^{ik(r-r')}w_0^\alpha(r) w_0^{\alpha \dag}(r').
\label{eqn:PfromWanniers}
\end{align}
An exact flat band Hamiltonian where all occupied states have energy $-\epsilon$ and all empty states have energy $+\epsilon$ is then immediately obtained as
\begin{align}
Q(k) = (+\epsilon) (1-\mathcal P_k) + (-\epsilon) \mathcal P_k=\epsilon\left(1-2\mathcal P_k\right).
\end{align}
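The passage from a converged set of WFs to the flat band model is correspondingly short; a 1D Python sketch (FFT sign and normalization conventions glossed over as assumptions) could read:
\begin{verbatim}
import numpy as np

def flat_band_from_wanniers(w, eps=1.0):
    # w: (L, m, n), w[r, :, a] the m-spinor of WF a at site r.
    u = np.fft.fft(w, axis=0)                # Bloch functions u_a(k)
    P = u @ np.swapaxes(u, 1, 2).conj()      # Eq. (PfromWanniers)
    Q = eps * (np.eye(w.shape[1]) - 2 * P)   # flat band Hamiltonian Q(k)
    return P, Q
\end{verbatim}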
To see if our findings are sensitive to the number of bands or to the spin rotation symmetry of Eq.\ (\ref{eqn:BHZ}), we also applied the adiabatic continuity algorithm to a QSH model with 8 bands and spin mixing terms, which did not yield qualitatively different results.
\subsection{{Dissipative state preparation}}
The idea of dissipative state preparation \cite{DiehlStatePrep} in the context of topological states of matter \cite{DiehlTopDiss} relies, for pure and translation invariant target states, on the existence of a complete set of fermionic creation and annihilation operators $w_{i,\alpha},w_{i,\alpha}^\dag$ forming a Dirac algebra (the indices referring to sites and bands, respectively). In this case, the stationary state of a dissipative evolution described by a Lindblad master equation
\begin{eqnarray}
{\frac{\partial}{\partial t}}\rho = \kappa \sum_{i,\alpha} \left(w_{i,\alpha} \rho w^\dag_{i,\alpha} - \tfrac{1}{2} \{w^\dag_{i,\alpha} w_{i,\alpha} ,\rho\}\right)
\end{eqnarray}
with damping rate {$\kappa>0$}, will be identical to the ground state of the dimensionless Hamiltonian
\begin{equation}
H = \sum_{i,\alpha}h_{i,\alpha},~ h_{i,\alpha} = w^\dag_{i,\alpha} w_{i,\alpha},
\end{equation}
with mutually commuting $h_{i,\alpha}$. In typical implementations of such a dissipative dynamics in the context of cold atomic systems, the Lindblad operators $w_{i,\alpha}$ generating the dissipative dynamics are quasi-local, i.e., have a compact support on the underlying lattice \cite{BardynTopDiss}. Our algorithm is precisely constructed to find such compactly supported operators $w_{i,\alpha}$, with the mutual commutativity of the associated $h_{i,\alpha}$ being granted by the shift orthogonality of the Wannier functions corresponding to the Lindblad operators $w_{i,\alpha}$. Unlike the one-dimensional case of the topologically nontrivial ground state of Kitaev's quantum wire, where a representative with compactly supported Wannier functions exists and is indeed found by our algorithm, our results imply the absence of an analogous situation in two dimensions.
\section{Conclusion and outlook}
\label{sec:conclusion}
In this work, we have presented a method to search for localized Wannier functions of free quantum lattice models which explicitly takes into account the symmetry of the problem. Most interestingly, we could extend the domain of this search algorithm from
individual model Hamiltonians to entire topological equivalence classes. This allows for a numerical detection of the most localized representative of a given topological state. We did so by elaborating on a compressed sensing approach built upon split Bregman techniques, where the spatial locality takes the role of the sparsity of the problem (see Refs.\ \cite{OszolinsCompressedModes,OszolinsTranslation}).
We close our presentation by providing some perspectives opened up by our present analysis, including a few particularly intriguing implications and applications of our new algorithm
beyond the most widely known applications of localized Wannier functions \cite{VanderbiltReview}.
\subsection{{Diagnostic tool of topological states}}
The possibility to identify localized Wannier functions not only for given model Hamiltonians,
but also -- if the energy functional is set to zero along with $\xi\rightarrow 0$ -- maximally localized Wannier functions within entire
topological equivalence classes opens up another interesting application of our work, namely that of a {\it diagnostic tool}: whenever it converges
to a compactly supported Wannier function,
it identifies a ``sweet spot'' characterizing the topological class of the initial Hamiltonian itself rather than minimizing the energy of a certain model. The flows towards the atomic insulator and the topological flat band (Kitaev) superconductor, starting from generic states within the respective topological phase, provide striking examples of this. But the parameter $\xi>0$ can be freely chosen, reflecting the $l_1$-regularization in terms of compressed sensing. In condensed matter terms, this parameter allows for a precise trade-off between locality and energy.
This freedom gives rise to a useful ``knob'' to tune; for applications in the context of, e.g., ab initio band structure calculations, a finite $\xi$ is more appropriate.
\subsection{{Applications in devising tensor network methods}}
Thinking further about our algorithm as a flow in the renormalization group sense is likely to be fruitful also in the context of interacting as well as disordered systems. In fact, our protocol bears some (non-accidental) resemblance to tensor network algorithms (quantum state renormalization methods) such as DMRG and TEBD in one dimension and PEPS and MERA more generally
\cite{R1,R2,R3}.
More specifically, it seems that in order to simulate weakly interacting (and/or disordered) fermionic lattice models,
the efficiently localized Wannier functions, which are still orthogonal, appear to be a very suitable starting point for devising
variational sets of states, as real space operators remain short-ranged (and close to diagonal) when projected to the pertinent electronic band. Most saliently, tensor network approaches augmented with an initial preferred basis selection based on our algorithm appear particularly promising in two dimensions,
where having a low bond dimension in PEPS approaches is critical for the highly costly (approximate) tensor network contraction.
More specifically, two approaches seem interesting. In the first, one takes a weakly interacting model and re-expresses the non-interacting part in the Wannier basis
found by the algorithm. If the Wannier functions are exactly localized, then the new Hamiltonian will still be local. This can then serve as
an ansatz for a tensor network approach including
interactions. In the second, one starts from a generalized mean field approach for the interacting model,
generates Wannier functions and then applies a variational
tensor network method.
\subsection{Symmetry breaking by truncation of exponential tails}
Finally, a fundamental question arises due to the apparent lack of compactly supported Wannier functions for the quantum spin Hall phase, namely that of the importance of exponentially decaying tails. We have found that any truncation of the tail of the Wannier functions inevitably leads to the breaking of time-reversal symmetry at a corresponding rate. In fact, cutting exponential tails appears to be a continuous deformation, but the QSH phase can be left continuously once TRS is broken. Despite being a conceptual problem, this may not be a practical one. In any solid-state realization of a finite size QSH insulator, there will be weak TRS breaking terms, yet the physical response can -- at least in principle -- be experimentally indistinguishable from that of a truly TRS invariant system. In this sense, even though the Wannier functions with compact support formally do not represent a QSH phase, they may still be used for practical purposes.
Our algorithm provides a tool to systematically assess these questions. Yet these are merely a few of many intriguing directions, and we anticipate that our findings will inspire future research in diverse branches of physics, as well as in applied mathematics.\\
\emph{Note added.} A key result of the present paper is the use of \emph{local} orthogonality constraints on the Bloch functions. In this context, we note the recent arXiv submissions by Barekat \emph{et al.} \cite{Barekat1, Barekat2}. In Ref.\ \cite{Barekat1}, Barekat \emph{et al.} independently derive a similar algorithm with the same asymptotic scaling. In Ref.\ \cite{Barekat2}, the same authors use orthogonality constraints in terms of Bloch functions in the context of certain (topologically trivial) band structures. These papers do not address the maximally localized representatives of topological equivalence classes of band structures,
which is the main focus of our present work.
\section{Acknowledgements}
We would like to thank C.\ Krumnow and H.\ Wilming for discussions. We also thank V.\ Ozolins for helpful correspondence on Refs.\ \cite{OszolinsCompressedModes,OszolinsTranslation} and for making us aware of Ref.\ \cite{Barekat1}.
Support from the ERC Synergy Grant UQUAM and Consolidator Grant TAQ, the EU (RAQUEL, SIQS, COST),
the BMBF (QuOReP), the START Grant No.\ Y 581-N16, the SFB FoQuS (FWF Project No.\ F4006-N16) and DFG's Emmy Noether program (BE 5233/1-1) is gratefully acknowledged.
\bibliographystyle{apsrev}
\section{Introduction}
\setcounter{equation}{0}
In operator theory and in the theory of linear systems, the study
of functions analytic and contractive in the open unit disk
(Schur functions) is called Schur analysis. It includes, in
particular, interpolation problems, operator models, and has been
extended to various settings. See for instance
\cite{MR2002b:47144,c-schur,hspnw,goh1} for some related books.
In \cite{acs1,acs2} we began a study of Schur analysis in the
setting of slice hyperholomorphic functions.
Following \cite{acs1,acs2}, let us recall that a
generalized Schur function is a $\mathbb H^{N\times M}$ valued
function $S$ slice-hyperholomorphic in a neighborhood $V$ of the
origin and for which the kernel
\begin{equation}
\label{skl}
K_S(p,q)= \sum_{n=0}^\infty p^n(I_N-S(p)
S(q)^*)\overline{q}^n
\end{equation}
has a finite number of negative squares in $V$, or more generally
such that the kernel
\begin{equation}
\label{kernpot}
\sum_{n=0}^\infty p^n(\s_2-S(p)
\s_1S(q)^*)\overline{q}^n
\end{equation}
has a finite number of negative squares in $V$, where
$\s_1\in\mathbb H^{M\times M}$ and $\s_2\in\mathbb H^{N\times N}$
are signature matrices (i.e., both self-adjoint and invertible).
Since this work is aimed at different audiences, it is worth
mentioning that the classical counterparts of the kernels
\eqref{kernpot} originate with the theory of characteristic
operator functions. In the indefinite case, such kernels have
been studied by Krein and Langer; see for instance
\cite{kl1,MR47:7504}. When $\sigma_1=\sigma_2$ and when the
kernel is positive definite, Potapov gave in the fundamental paper
\cite{pootapov} the multiplicative structure of the corresponding
functions $S$.\\
In \cite{acs1} we studied the realization of such $S$ in the case
of positive definite kernels. In \cite{acs2} we studied an
interpolation problem, and began the study of the indefinite
metric case, where the underlying spaces are Pontryagin spaces
rather than Hilbert spaces. In this work we prove a Beurling-Lax
type theorem in this setting and study the Krein-Langer
factorization for slice hyperholomorphic generalized Schur
functions. Slice hyperholomorphic functions turned out to be a
natural tool to generalize Schur analysis to the quaternionic
setting. Some references for this theory of functions, with no
claim of completeness, are \cite{MR2353257, MR2555912, MR2742644},
the book \cite{MR2752913} and the forthcoming \cite{GSS}.\\
The analogue of the
resolvent operator in classical analysis is now the $S$-resolvent
operator, and according to this resolvent, the spectrum has to
be replaced by the $S$-spectrum. The relation between the
$S$-spectrum and the right spectrum of a right linear
quaternionic operator is important for the present paper.
Indeed, in the literature there are several results on the right
spectrum which is widely studied, especially for its application
in mathematical physics, see e.g. \cite{MR1333599}. However, it
is well known that the right spectrum is not associated to a right
linear quaternionic operator; the eigenvectors associated to a
given eigenvalue do not even form a linear space. The
$S$-spectrum arises in a completely different setting: it is
associated to a right linear operator and, quite surprisingly, the point $S$-spectrum
coincides with the right spectrum. This fact, together with the fact that
any right eigenvector is also an S-eigenvector, see Proposition
\ref{eigenvector}, allows us to use for the point $S$-spectrum various
results which hold for the right spectrum, see Sections 6 to 9.\\
The $S$-resolvent operator allows the
definition of the quaternionic analogue of the operator
$(I-zA)^{-1}$ that appears in the realization function
$s(z)=D+zC(I-zA)^{-1}B$. It turns out that when $A$ is a
quaternionic matrix and $p$ is a quaternion then $(I-pA)^{-1}$
has to be replaced by $( I -\bar p A)(|p|^2A^2-2{\rm Re}(p) A+
I )^{-1}$ which is equal to $p^{-1}S^{-1}_R(p^{-1},A)$ where
$S^{-1}_R(p^{-1},A)$ is the right $S$-resolvent operator
associated to the quaternionic matrix $A$.
Moreover, the $S$-resolvent operator allows one to introduce and study the Riesz projectors and
the invariant subspaces under a quaternionic operator.
\\
The S-resolvent operator is also a fundamental tool to define
the quaternionic functional calculus, and we refer the reader to
\cite{MR2735309, MR2496568, MR2661152, MR2803786} for further discussions.
Schur multipliers in the quaternionic setting have been studied also
in \cite{MR2124899,MR2240272,asv-cras},
in a different setting, using the
Cauchy-Kovalevskaya product and series of Fueter polynomials.
Since Schur analysis plays an important role in linear systems, we mention
that papers \cite{MR733953,MR2205693,MR2131920} treat
various aspects of a theory of linear systems in the quaternionic
setting.
We finally remark that it is possible to define slice hyperholomorphic functions with values
in a Clifford algebra, \cite{MR2520116, MR2684426, MR2673423}, which admit a functional
calculus for $n$-tuples of
operators, see \cite{MR2402108, MR2720712, SCFUNCTIONAL,MR2752913}.
\\
The paper consists of eight sections besides the introduction,
and its outline is as follows: Sections \ref{2}-\ref{4} are related
to results on slice hyperholomorphic functions and the related
functional calculus. Sections \ref{6}-\ref{9} are related to Schur
analysis. More precisely: Sections \ref{2} and \ref{3} are of a
survey nature on slice hyperholomorphic functions and the quaternionic
functional calculus, respectively. Section \ref{4} contains new
results on the analogue of the Riesz projector for the quaternionic
functional calculus. Moreover, it contains a discussion of the
right spectrum, which has been widely studied in the literature
both in linear algebra \cite{MR97h:15020} and in mathematical physics
\cite{MR1333599}, and which, as we have already pointed out,
coincides with the point $S$-spectrum. These results will be used
in the second part of the paper. A characterization of the number of
negative squares of a slice hyperholomorphic kernel in terms of
its power series coefficients is given in Section \ref{6}. In
Section \ref{7} we present some results on linear operators in
quaternionic Pontryagin spaces. We show in particular that a
contraction with no S-spectrum on the unit sphere has a unique
maximal negative invariant subspace. In Section \ref{8} we prove
a version of the Beurling-Lax theorem, the so-called structure
theorem, in the present setting. In Section \ref{5} we discuss
the counterparts of matrix-valued unitary rational functions. The last section considers a
far-reaching result in the quaternionic framework, namely the
Krein-Langer factorization theorem for generalized Schur
functions. It is interesting to note that the result is based on
Blaschke products whose zeros and poles have a peculiar behaviour
when taking the slice hyperholomorphic reciprocal.
\section{Slice hyperholomorphic functions}
\setcounter{equation}{0}
\label{2}
We begin this section by recalling the notations and some basic facts on the theory of slice
hyperholomorphic functions that we will use in the sequel. We send the reader to the papers
\cite{MR2555912, MR2742644, MR2353257} and the book \cite{MR2752913}
for more details. Let
$\hh$ be the real associative algebra of quaternions with respect
to the basis $\{1, i,j,k \}$ whose elements satisfy the relations
$$
i^2=j^2=k^2=-1,\
ij =-ji =k,\
jk =-kj =i ,
\ ki =-ik =j .
$$
We will denote a quaternion $p$ as $p=x_0+ix_1+jx_2+kx_3$,
$x_\ell\in \mathbb{R}$, $\ell=0,1,2,3$, its conjugate as
$\bar p=x_0-ix_1-jx_2-kx_3$, and its squared norm as $|p|^2=p\overline p$.
The real part of a quaternion will be denoted with the symbols ${\rm Re}(p)$ or $x_0$, while ${\rm Im}(p)$ denotes the imaginary part of $p$.
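For readers who wish to experiment numerically with the notions below, quaternions can be represented faithfully by $2\times 2$ complex matrices. The following Python sketch (our own convention, used only for illustration) encodes $p=x_0+ix_1+jx_2+kx_3$, its conjugate and its modulus.
\begin{verbatim}
import numpy as np

def quat(x0, x1, x2, x3):
    # p = x0 + i x1 + j x2 + k x3 as a 2x2 complex matrix; this
    # embedding turns the quaternionic product into a matrix product.
    return np.array([[x0 + 1j * x1,  x2 + 1j * x3],
                     [-x2 + 1j * x3, x0 - 1j * x1]])

def conj(p):                    # quaternionic conjugate bar p
    return p.conj().T

def modulus(p):                 # |p| = sqrt(p bar p) = sqrt(det p)
    return np.sqrt(np.linalg.det(p).real)
\end{verbatim}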
\noindent
Let $\mathbb{S}$ be the 2-sphere of purely imaginary unit quaternions, i.e.
\begin{equation}
\label{sphere}
\mathbb{S}=\{ p=ix_1+jx_2+kx_3\ |\
x_1^2+x_2^2+x_3^2=1\}.
\end{equation}
To each nonreal quaternion $p$ it is possible to uniquely associate the
element $I_p\in\mathbb{S}$ defined by
$$
I_p=
\displaystyle\frac{{\rm Im}(p)} {|{\rm Im}(p)|}.
$$
The complex plane $\mathbb{C}_{I_p}=\mathbb{R}+I_p\mathbb{R}=\{x+I_py \ | \ x,y\in \mathbb{R}\}$ is determined by the imaginary unit $I_p$, and $\mathbb{C}_{I_p}$ obviously contains $p$.
\begin{Dn}
Given $p\in\hh$, $p=p_0+I_pp_1$ we denote by $[p]$ the set of all
elements of the form $p_0+Jp_1$ when $J$ varies in $\mathbb{S}$.
\end{Dn}
\noindent
\begin{Rk}{\rm
The set $[p]$ is a $2$-sphere which is reduced to the point $p$
when $p\in\mathbb{R}$.}
\end{Rk}
\noindent
We now recall the definition of slice hyperholomorphic functions.
\begin{Dn}[Slice hyperholomorphic functions]
Let $U\subseteq\hh$ be an open set and let
$f:\ U\to\hh$ be a real differentiable function. Let
$I\in\mathbb{S}$ and let $f_I$ be the restriction of $f$ to the
complex plane $\mathbb{C}_I := \mathbb{R}+I\mathbb{R}$ passing through $1$
and $I$ and denote by $x+Iy$ an element on $\mathbb{C}_I$.
\begin{itemize}
\item[(1)]
We say that $f$ is a left slice hyperholomorphic function
(or left hyperholomorphic for short) if, for every
$I\in\mathbb{S}$, we have
$$
\frac{1}{2}\left(\frac{\partial }{\partial x}+I\frac{\partial
}{\partial y}\right)f_I(x+Iy)=0.
$$
\item[(2)]
We say that $f$ is a right slice hyperholomorphic function (or right
hyperholomorphic for short) if,
for every
$I\in\mathbb{S}$, we have
$$
\frac{1}{2}\left(\frac{\partial }{\partial x}f_I(x+Iy)+\frac{\partial
}{\partial y}f_I(x+Iy)I\right)=0.
$$
\item[(3)]
In the sequel we will denote by $\mathcal{R}^L(U)$ (resp.
$\mathcal{R}^R(U)$) the right (resp. left) $\mathbb{H}$-vector space of
left (resp. right) hyperholomorphic functions on the open set $U$.
When we do not distinguish between $\mathcal{R}^L(U)$
and $\mathcal{R}^R(U)$ we will use the symbol
$\mathcal{R}(U)$.
\end{itemize}
\end{Dn}
The natural open sets on which slice hyperholomorphic functions are defined are axially symmetric, i.e. open sets that contain the 2-sphere
$[p]$ whenever they contain $p$, which are also s-domains, i.e. they are domains which remain connected when intersected with any complex plane $\mathbb{C}_I$.
\\
Given two left slice
hyperholomorphic functions $f$, $g$, it is possible to introduce
a binary operation called the $\star$-product, such that $f\star g$ is a slice
hyperholomorphic function.
Let $f,g:\ \Omega \subseteq\mathbb{H}\to\mathbb{H}$ be slice hyperholomorphic functions such that
their restrictions to the complex plane $\mathbb{C}_I$ can be written as
$f_I(z)=F(z)+G(z)J$,
$g_I(z)=H(z)+L(z)J$ where $J\in\mathbb{S}$, $J\perp I$. The functions $F$,
$G$, $H$, $L$ are holomorphic functions of the variable $z\in
\Omega \cap \mathbb{C}_I$ and they exist by the splitting lemma, see \cite[p. 117]{MR2752913}.
We can now give the following:
\begin{Dn}
Let $f,g$ be slice hyperholomorphic functions defined on an axially symmetric open set $\Omega\subseteq\mathbb{H}$.
The $\star$-product of $f$ and $g$ is defined as the unique
left slice hyperholomorphic function on $\Omega$ whose restriction to the
complex plane $\mathbb{C}_I$ is given by
\begin{equation}\label{starproduct}
(F(z)+G(z)J)\star(H(z)+L(z)J):=
(F(z)H(z)-G(z)\overline{L(\bar z)})+(G(z)\overline{H(\bar z)}+F(z)L(z))J.
\end{equation}
\end{Dn}
When $f$ and $g$ are expressed as power series, i.e., $f(p)=\sum_{n=0}^\infty p^n a_n$, $g(p)=\sum_{n=0}^\infty p^n b_n$, then $(f\star g)(p)=\sum_{n=0}^\infty p^n c_n$, where
$c_n=\sum_{r=0}^na_rb_{n-r}$ is obtained by convolution of the coefficients. This product extends the product of quaternionic polynomials with right coefficients, see \cite{lam}, to series.
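In terms of coefficient sequences, the $\star$-product is thus an ordinary convolution with the non-commutative quaternionic product taken in the stated order; a short sketch, reusing the $2\times 2$ matrix embedding above:
\begin{verbatim}
import numpy as np

def star(a, b):
    # a, b: lists of quaternion coefficients (2x2 complex arrays);
    # returns the coefficients c_n = sum_r a_r b_{n-r} of f * g.
    c = [np.zeros((2, 2), dtype=complex)
         for _ in range(len(a) + len(b) - 1)]
    for r, ar in enumerate(a):
        for s, bs in enumerate(b):
            c[r + s] = c[r + s] + ar @ bs
    return c
\end{verbatim}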
Analogously, one can introduce a $\star$-product for right slice
hyperholomorphic functions. For more details we refer the reader to \cite{MR2752913}. When considering in a same
formula both the products, or when confusion may arise, we will write $\star_l$ or
$\star_r$ according to the fact that we are using the left or the
right slice hyperholomorphic product. When there is no subscript, we will
mean that we are considering the left $\star$-product.\\
Given a slice hyperholomorphic function $f$, we
can define its slice hyperholomorphic reciprocal $f^{-\star}$, see
\cite{MR2555912, MR2752913}. In this paper it will be sufficient to
use the following definition.
\begin{Dn}\label{reciprocal}
Given $f(p)=\sum_{n=0}^\infty p^n a_n$, let
us set
$$
f^c(p)=\sum_{n=0}^\infty p^n \bar a_n,\qquad f^s(p)=(f^c\star f)(p
)=\sum_{n=0}^\infty p^nc_n,\quad
c_n=\sum_{r=0}^n a_r\bar a_{n-r},
$$
where the series converge.
The left slice hyperholomorphic reciprocal of $f$
is then defined as
$$
f^{-\star}:=(f^s)^{-1}f^c.
$$
\end{Dn}
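Numerically, the coefficients of $f^{-\star}$ up to a given order can be generated by ordinary power series inversion, since $f^s$ has real coefficients (so that the product $(f^s)^{-1}f^c$ agrees with the $\star$-product). A sketch, assuming $a_0\neq 0$ and reusing \texttt{star} and \texttt{conj} from the sketches above:
\begin{verbatim}
import numpy as np

def star_reciprocal(a, order):
    ac = [conj(an) for an in a]          # coefficients of f^c
    s = star(ac, a)                      # f^s, real coefficients
    inv = [np.linalg.inv(s[0])]          # power series inverse of f^s
    for n in range(1, order + 1):
        acc = np.zeros((2, 2), dtype=complex)
        for r in range(1, min(n, len(s) - 1) + 1):
            acc = acc + s[r] @ inv[n - r]
        inv.append(-inv[0] @ acc)
    return star(inv, ac)[:order + 1]     # (f^s)^{-1} f^c
\end{verbatim}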
\section{Formulations of the quaternionic functional calculus}
\label{3}
Here we briefly recall the possible formulations of the quaternionic functional calculus
that we will use in the sequel.
Let $V$ be a two sided quaternionic Banach space, and let
$\mathcal{B}(V)$ be the two sided vector space of all right
linear bounded operators on $V$.
\begin{Dn}[The $S$-spectrum and the $S$-resolvent sets of
quaternionic operators]\label{defspscandres}
Let $T\in\mathcal{B}(V)$.
We define the $S$-spectrum $\sigma_S(T)$ of $T$ as:
$$
\sigma_S(T)=\{ s\in \mathbb{H}\ \ :\ \ T^2-2 {\rm Re}\,(s) T+|s|^2\mathcal{I}\ \ \
{\it is\ not\ invertible}\}.
$$
The $S$-resolvent set $\rho_S(T)$ is defined by
$$
\rho_S(T)=\mathbb{H}\setminus\sigma_S(T).
$$
\end{Dn}
The notion of $S$-spectrum of a linear quaternionic operator $T$ is suggested
by the definition of the $S$-resolvent operator, which is the analogue of the Riesz resolvent operator for the
quaternionic functional calculus.
\begin{Dn}[The $S$-resolvent operator]
Let $V$ be a two sided quaternionic Banach space, $T\in\mathcal{B}(V)$ and $s\in
\rho_S(T)$.
We define the left $S$-resolvent operator as
\begin{equation}\label{quatSresolrddlft}
S_L^{-1}(s,T):=-(T^2-2 {\rm Re}\,(s) T+|s|^2\mathcal{I})^{-1}
(T-\overline{s}\mathcal{I}),
\end{equation}
and the right $S$-resolvent operator as
\begin{equation}\label{quatSresorig}
S_R^{-1}(s,T):=-(T-\overline{s}\mathcal{I})(T^2-2 {\rm Re}\,(s)
T+|s|^2\mathcal{I})^{-1}.
\end{equation}
\end{Dn}
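Both objects are computable in finite dimensions. Representing a quaternionic $n\times n$ matrix $T$ by its $2n\times 2n$ complex embedding (each entry replaced by its $2\times 2$ block, as in the embedding sketch above), a point $s$ belongs to $\rho_S(T)$ precisely when the matrix $T^2-2{\rm Re}(s)T+|s|^2 \mathcal{I}$ is invertible, and the left $S$-resolvent is then obtained by a linear solve. The conventions of the following sketch are our own.
\begin{verbatim}
import numpy as np

def s_resolvent_left(s, T):
    # T: 2n x 2n complex embedding of a quaternionic matrix;
    # s: a quaternion as a 2x2 block (from quat above).
    n = T.shape[0] // 2
    re_s = np.trace(s).real / 2              # Re(s)
    mod2 = np.linalg.det(s).real             # |s|^2
    sbar = np.kron(np.eye(n), conj(s))       # bar s times the identity
    Q = T @ T - 2 * re_s * T + mod2 * np.eye(2 * n)
    return -np.linalg.solve(Q, T - sbar)     # Eq. (quatSresolrddlft)
\end{verbatim}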
\begin{Tm}
Let $T\in\mathcal{B}(V)$ and let $s \in \rho_S(T)$. Then, the left $S$-resolvent
operator satisfies the equation
\begin{equation}\label{quatSresolrddlftequ}
S_L^{-1}(s,T)s-TS_L^{-1}(s,T)=\mathcal{I},
\end{equation}
while the right $S$-resolvent
operator satisfies the equation
\begin{equation}\label{quatSresorigequa}
sS_R^{-1}(s,T)-S_R^{-1}(s,T)T=\mathcal{I}.
\end{equation}
\end{Tm}
\begin{Dn}
Let $V$ be a two sided quaternionic Banach space, $T\in\mathcal{B}(V)$
and let $U \subset \mathbb{H}$ be an axially symmetric s-domain
that contains the $S$-spectrum $\sigma_S(T)$ and such that
$\partial (U\cap \mathbb{C}_I)$ is union of a finite number of
continuously differentiable Jordan curves for every $I\in\mathbb{S}$.
We say that $U$ is a $T$-admissible open set.
\end{Dn}
We can now introduce the class of functions for which we can define the two
versions of the quaternionic functional calculus.
\begin{Dn}\label{quatdef3.9}
Let $V$ be a two sided quaternionic Banach space, $T\in\mathcal{B}(V)$
and let $W$ be an open set in $\hh$.
\begin{itemize}
\item[(1)]
A function $f\in \mathcal{R}^L(W)$ is said to be locally left
hyperholomorphic on $\sigma_S(T)$
if there exists a $T$-admissible domain $U\subset \hh$ such that
$\overline{U}\subset W$, on
which $f$ is left hyperholomorphic.
We will denote by $\mathcal{R}^L_{\sigma_S(T)}$ the set of locally
\index{$\mathcal{R}^L_{\sigma_S(T)}$}
left hyperholomorphic functions on $\sigma_S (T)$.
\item[(2)]
A function $f\in \mathcal{R}^R(W)$ is said to be locally right
hyperholomorphic on $\sigma_S(T)$
if there exists a $T$-admissible domain $U\subset \hh$ such that
$\overline{U}\subset W$, on
which $f$ is right hyperholomorphic.
We will denote by $\mathcal{R}^R_{\sigma_S(T)}$ the set of locally
\index{$\mathcal{R}^R_{\sigma_S(T)}$}
right hyperholomorphic functions on $\sigma_S (T)$.
\end{itemize}
\end{Dn}
Using the left $S$-resolvent operator $S_L^{-1} $, we now give
the definition of the quaternionic functional
calculus; analogous considerations can be made using $S_R^{-1}$
with obvious modifications.
\begin{Dn}[The quaternionic functional calculus]\label{quatfunccalleftright}
Let $V$ be a two sided quaternionic Banach space and $T\in\mathcal{B}(V)$.
Let $U\subset \hh$ be a $T$-admissible domain and set $ds_I=- ds I$. We define
\begin{equation}\label{quatinteg311def}
f(T)={{1}\over{2\pi }} \int_{\partial (U\cap \mathbb{C}_I)} S_L^{-1} (s,T)\
ds_I \ f(s), \ \ {\it for}\ \ f\in \mathcal{R}^L_{\sigma_S(T)},
\end{equation}
and
\begin{equation}\label{quatinteg311rightdef}
f(T)={{1}\over{2\pi }} \int_{\partial (U\cap \mathbb{C}_I)} \ f(s)\ ds_I \
S_R^{-1} (s,T),\ \ {\it for}\ \ f\in \mathcal{R}^R_{\sigma_S(T)}.
\end{equation}
\end{Dn}
\section{Projectors, right spectrum and $S$-spectrum}
\setcounter{equation}{0}
\label{4}
An important result that we will prove in this section is
that the Riesz projector associated to a given quaternionic operator $T$ commutes with $T$ itself.
We begin by recalling the definition of projectors and
some of their basic properties that still hold in the quaternionic setting.
\begin{Dn}
Let $V$ be a quaternionic Banach space. We say that $P$ is a projector if $P^2=P$.
\end{Dn}
It is easy to show that the following properties hold:
\begin{enumerate}
\item[(1)]
The range of $P$, denoted by $\ran(P)$, is closed.
\item[(2)]
$v\in \ran(P)$ if and only if $Pv=v$.
\item[(3)]
If $P$ is a projector also $I-P$ is a projector and $\ran(I-P)$ is closed.
\item[(4)]
$v\in \ran(I-P)$ if and only if $(I-P)v=v$, that is if and only
if $Pv=0$, as a consequence $\ran(I-P)=\ker(P)$.
\item[(5)]
For every $v\in V$ we have $v=Pv+(I-P)v$; $Pv\in \ran(P)$, $(I-P)v\in \ker(P)$.
So $v$ can be written as $v=v'+v''$ with $v'=Pv$ and $v''=(I-P)v$. Since $\ran(P)\cap \ker(P)=\{0\}$
we have the decomposition
$V=\ran(P)\oplus \ker(P)$.
\end{enumerate}
\begin{Tm}\label{PTcommutation}
Let $T\in\mathcal{B}(V)$ and
let $\sigma_S(T)= \sigma_{1S}\cup \sigma_{2S}$,
with ${\rm dist}\,( \sigma_{1S},\sigma_{2S})>0$. Let $U_1$ and
$U_2$ be two open sets such that $\sigma_{1S} \subset U_1$ and $
\sigma_{2S}\subset U_2$, with $\overline{U}_1
\cap\overline{U}_2=\emptyset$. Set
\begin{equation}\label{pigei}
P_j:=\frac{1}{2\pi }\int_{\partial (U_j\cap \mathbb{C}_I)}S_L^{-1}(s,T) \,
ds_I, \ \ \ \ \ j=1,2,
\end{equation}
\begin{equation}\label{tigei}
T_j:=\frac{1}{2\pi }\int_{\partial (U_j\cap \mathbb{C}_I)}S_L^{-1}(s,T) \,
ds_I\,s, \ \ \ \ j=1,2.
\end{equation}
Then the following properties hold:
\begin{itemize}
\item[(1)]
$P_j$ are projectors and $TP_j=P_jT$ for $j=1,2$.
\item[(2)] For $\lambda\in \rho_S(T)$ we have
\begin{equation}\label{tipigeieqL}
P_jS_L^{-1} (\lambda,T)\lambda-T_jS_L^{-1} (\lambda,T)=P_j, \ \ \ \ \ j=1,2,
\end{equation}
\begin{equation}\label{tipigeieqR}
\lambda S_R^{-1} (\lambda,T)P_j-S_R^{-1} (\lambda,T)T_j=P_j, \ \ \ \ \ j=1,2.
\end{equation}
\end{itemize}
\end{Tm}
\begin{proof}
The fact that $P_j$ are projectors is proved in \cite{MR2752913}.
Let us prove that $TP_j=P_jT$. Observe that the functions
$f(s)=s^m$, for $m\in \mathbb{N}_0$ are both right and left
slice hyperholomorphic. So
the operator $T$ can be written as
$$
T={{1}\over{2\pi }} \int_{\partial (U\cap \mathbb{C}_I)} S_L^{-1} (s,T)\ ds_I \ s=
{{1}\over{2\pi }} \int_{\partial (U\cap \mathbb{C}_I)} \ s\ ds_I \ S_R^{-1} (s,T);
$$
analogously, for the projectors $P_j$ we have
$$
P_j={{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} S_L^{-1} (s,T)\
ds_I \ =
{{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} \ \ ds_I \ S_R^{-1} (s,T).
$$
From the identity
$$
T_j={{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} S_L^{-1} (s,T)\
ds_I \ s={{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} \ s\ ds_I \
S_R^{-1} (s,T)
$$
we can compute $TP_j$ as:
$$
TP_j={{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} TS_L^{-1} (s,T)\ ds_I \
$$
and using the resolvent equation (\ref{quatSresolrddlftequ})
it follows
$$
TP_j={{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} [S_L^{-1}(s,T)\
s-\mathcal{I}]\ ds_I \ =
{{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} S_L^{-1}(s,T)\ s\ ds_I
$$
$$
=
{{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} S_L^{-1}(s,T)\ ds_I\ s=T_j.
$$
Now consider
$$
P_jT={{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)}
\ \ ds_I \ S_R^{-1} (s,T)T
$$
and using the resolvent equation (\ref{quatSresorigequa})
we obtain
$$
P_jT={{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} \
\ ds_I \ [s \ S_R^{-1}(s,T)-\mathcal{I}]
={{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} \
\ ds_I \ s\ S_R^{-1}(s,T)=T_j
$$
so we have the equality $P_jT=TP_j$.
To prove (\ref{tipigeieqL}), for $\lambda\in \rho_S(T)$, consider and compute
$$
P_jS_L^{-1} (\lambda,T)\lambda={{1}\over{2\pi }}
\int_{\partial (U_j\cap \mathbb{C}_I)} \ \ ds_I \
S_R^{-1} (s,T)S_L^{-1} (\lambda,T)\lambda.
$$
Using the S-resolvent equation (\ref{quatSresolrddlftequ}) it
follows that
$$
P_jS_L^{-1} (\lambda,T)\lambda={{1}\over{2\pi }} \int_{\partial (U_j\cap
\mathbb{C}_I)} \ \ ds_I \ S_R^{-1} (s,T)[TS_L^{-1} (\lambda,T)+I]
$$
$$
={{1}\over{2\pi }} \int_{\partial (U_j\cap \mathbb{C}_I)} \ \ ds_I \ [S_R^{-1}
(s,T)T]S_L^{-1} (\lambda,T)+P_j.
$$
By the S-resolvent equation (\ref{quatSresorigequa}) we get
$$
P_jS_L^{-1} (\lambda,T)\lambda={{1}\over{2\pi }} \int_{\partial
(U_j\cap \mathbb{C}_I)} \ s \ ds_I \ S_R^{-1} (s,T)S_L^{-1} (\lambda,T)+P_j
$$
$$
=T_jS_L^{-1} (\lambda,T)+P_j
$$
which is (\ref{tipigeieqL}).
Relation (\ref{tipigeieqR}) can be proved in an analogous way.
\end{proof}
In analogy with the classical case, we will call the operator
$P_j$ the Riesz projector.\\
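Numerically, $P_j$ can be approximated by discretizing the contour integral (\ref{pigei}) on a circle in a fixed complex plane $\mathbb{C}_I$. The following sketch (with $I=i$, a circle centered on the real axis, and the helper routines \texttt{quat} and \texttt{s\_resolvent\_left} from the earlier sketches) is our own illustration and plays no role in the proofs.
\begin{verbatim}
import numpy as np

def riesz_projector(T, center, radius, n_quad=400):
    # (1 / 2 pi) int S_L^{-1}(s, T) ds_I over a circle in C_i,
    # with ds = i r e^{i th} dth, hence ds_I = -ds i = r e^{i th} dth.
    n = T.shape[0] // 2
    P = np.zeros_like(T)
    dth = 2 * np.pi / n_quad
    for m in range(n_quad):
        th = dth * (m + 0.5)
        s = quat(center + radius * np.cos(th), radius * np.sin(th), 0, 0)
        ds_I = quat(radius * np.cos(th), radius * np.sin(th), 0, 0) * dth
        # right multiplication by the quaternionic scalar ds_I
        P = P + s_resolvent_left(s, T) @ np.kron(np.eye(n), ds_I)
    return P / (2 * np.pi)
\end{verbatim}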
Our next result, of independent interest, is the validity of the
decomposition of the $S$-spectrum which is based on the Riesz
projectors. A simple but crucial result will be the following
Lemma:
\begin{La}\label{lieiruyh}
Let $T\in\mathcal{B}(V)$ and let $\lambda\in \rho_S(T)$.
Then the operator $(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})^{-1}$
commutes with every operator $A$ that commutes with $T$.
\end{La}
\begin{proof}
Since $A$ commutes with $T$ we have that
$$
(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})A=A(T^2-2\lambda_0T+|\lambda|^2\mathcal{I}).
$$
We get the statement by multiplying the above relation on both sides by
$(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})^{-1}$.
\end{proof}
Note that, unlike the classical case, in which an operator $A$ commuting with $T$ also commutes
with the resolvent operator, here an operator $A$ commuting with $T$ only commutes
with $(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})^{-1}$. This result is nevertheless enough to prove the validity of the next theorem.
\begin{Tm} Let $T\in\mathcal{B}(V)$, suppose that $P_1$ is a
projector in $\mathcal{B}(V)$ commuting with $T$ and let $P_2=I-P_1$.
Let $V_j=P_j(V)$, $j=1,2$
and define the operators $T_j=TP_j=P_jT$. Denote by
$\widetilde{T}_j$ the restriction of $T_j$ to $V_j$, $j=1,2$.
Then
$$
\sigma_S(T)=\sigma_S(\widetilde{T}_1)\cup\sigma_S(\widetilde{T}_2).
$$
\end{Tm}
\begin{proof}
First of all note that $T=T_1+T_2$, \ $T_1(V_2)=T_2(V_1)=\{0\}$ and
that $T_j(V_j)\subseteq V_j$.
We have to show that $\rho_S(T)=\rho_S(\widetilde{T}_1)\cap\rho_S(\widetilde{T}_2)$.
Let us assume that $\lambda\in \rho_S(T)$ and consider the identity
\begin{equation}\label{RPP}
T^2-2\lambda_0T+|\lambda|^2\mathcal{I}=
(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})(P_1+P_2)
\end{equation}
$$
=(T^2_1-2\lambda_0T_1+|\lambda|^2P_1)+(T^2_2-2\lambda_0T_2+|\lambda|^2P_2).
$$
If we set
$$
Q_\lambda(T):=(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})^{-1}
$$
we have
\begin{equation}\label{RPP1}
Q_\lambda(T)=(P_1+P_2)Q_\lambda(T)(P_1+P_2)=P_1Q_\lambda(T)P_1+P_2Q_\lambda(T)P_2;
\end{equation}
in fact, by Lemma \ref{lieiruyh} and by the relation $P_1P_2=P_2P_1=0$,
we deduce $$P_1Q_\lambda(T)P_2=P_2Q_\lambda(T)P_1=0.$$
We now multiply the identity (\ref{RPP}) by $Q_\lambda(T)$ on the left and
by (\ref{RPP1}) we obtain
$$
\mathcal{I}=(P_1Q_\lambda(T)P_1+P_2Q_\lambda(T)P_2)[(T^2_1-2
\lambda_0T_1+|\lambda|^2P_1)+(T^2_2-2\lambda_0T_2+|\lambda|^2P_2)].
$$
Using again Lemma \ref{lieiruyh} and $P_1P_2=P_2P_1=0$ we obtain
\begin{equation}\label{ident}
\mathcal{I}=P_1Q_\lambda(T)P_1(T^2_1-2\lambda_0T_1+|\lambda|^2P_1)+
P_2Q_\lambda(T)P_2(T^2_2-2\lambda_0T_2+|\lambda|^2P_2).
\end{equation}
Let us set
$$
Q_{\lambda, j}(T):=P_jQ_\lambda(T)P_j, \ \ \ \ j=1,2.
$$
It is immediate to observe that
$$
Q_{\lambda, j}(T)(V_j)\subseteq V_j,\ \ \ \ j=1,2,
$$
and from (\ref{ident}) we deduce
$$
Q_{\lambda, j}(T)(T^2_j-2\lambda_0T_j+|\lambda|^2P_j)=P_j, \ \ \ \ j=1,2.
$$
As a consequence, $Q_{\lambda, j}(T)$ restricted to $V_j$ is the inverse of
$(\widetilde{T}^2_j-2\lambda_0\widetilde{T}_j+|\lambda|^2P_j)$ and so we conclude that
$\lambda\in \rho_S(\widetilde{T}_1)\cap \rho_S(\widetilde{T}_2)$.
Conversely, assume that $\lambda\in \rho_S(\widetilde{T}_1)\cap \rho_S(\widetilde{T}_2)$.
Let us set
$$
\widetilde{Q}_{\lambda, j}(T):=(\widetilde{T}^2_j-2\lambda_0\widetilde{T}_j+|\lambda|^2P_j)^{-1}
$$
and define
$$
\widetilde{Q}=P_1\widetilde{Q}_{\lambda, 1}(T)P_1+P_2\widetilde{Q}_{\lambda, 2}(T)P_2.
$$
We have
$$
\widetilde{Q}(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})
=[P_1\widetilde{Q}_{\lambda, 1}(T)P_1+P_2\widetilde{Q}_{\lambda, 2}(T)
P_2](T^2-2\lambda_0T+|\lambda|^2\mathcal{I})
$$
$$
=P_1(\widetilde{T}^2_1-2\lambda_0\widetilde{T}_1+|\lambda|^2P_1)^{-1}
P_1(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})
$$
$$
+
P_2(\widetilde{T}^2_2-2\lambda_0\widetilde{T}_2+|\lambda|^2P_2)^{-1}P_2
(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})
$$
$$
=P_1+P_2=\mathcal{I}.
$$
Analogously $(T^2-2\lambda_0T+|\lambda|^2\mathcal{I})\widetilde{Q}=\mathcal{I}$.
So $\lambda \in \rho_S(T)$.
\end{proof}
In all our discussions on the functional calculus we have used the notion of $S$-spectrum.
However, in the literature,
other types of spectra are also used: the so-called left spectrum and the right spectrum.
In order to discuss the notion of right spectrum it is not necessary to assume that $V$ is a two sided linear space, so we will consider quaternionic right linear spaces.
We recall the following definition:
\begin{Dn}
Let $T:V\to V$ be a right linear quaternionic operator on a right
quaternionic Banach space $V$.
We denote by $\sigma_R(T)$ the right spectrum of $T$
that is
$
\sigma_R(T)=\{ s\in \mathbb{H}\ :\ Tv= vs \ {\rm for\ some\ } v\in V , \ v\not=0 \}.
$
\end{Dn}
As has been widely discussed in the literature, one can also
define the left spectrum, i.e. the set of $s\in\mathbb{H}$ such that $Tv=sv$ for some $v\not=0$.
However, the notion of left spectrum is not very useful, see \cite{MR1333599}.
The S-spectrum and the left spectrum are not, in general, related, see \cite{MR2752913}.
The right spectrum is more useful and more studied.
It has a structure similar to the one of the S-spectrum,
indeed whenever it contains an element $s$, it contains also the whole 2-sphere
$[s]$. However the operator $\mathcal{I} s-T$, where
$(\mathcal{I}s )(v):=vs$, is not a right linear operator; thus
the notion of right spectrum is not associated to a linear
resolvent operator and this represents a disadvantage since it
prevents one from defining a functional calculus. The following result,
see \cite{CS_CRAS}, states that the right spectrum coincides with
the S-spectrum and thus $\sigma_R(T)$ can now be related to the
linear operator $T^2-2s_0T+|s|^2\mathcal{I}$.
\begin{Tm}\label{S=R}
Let $T$ be a right linear quaternionic operator. Then its point $S$-spectrum coincides with
the right spectrum.
\end{Tm}
Theorem \ref{S=R} is crucial since all known results
on the right spectrum become valid also for the point S-spectrum.
\\
Let us now consider the two eigenvalue problems:
$$
Tv=v s, \ \ \ v\not=0,
$$
and
$$
(T^2-2s_0 T+|s|^2\mathcal{I})w=0, \ \ \ \ \ \ w\not=0.
$$
As is well known, the right eigenvectors do not form a right
linear subspace of $V$, while the
$S$-eigenvectors do, as is immediate to verify.
We have the following proposition which will be useful in the sequel.
\begin{Pn}\label{eigenvector}
Let $v$ be a right eigenvector associated to $s\in \sigma_R(T)$. Then
we have
$$
(T^2-2 s_0 T+|s|^2\mathcal{I})v=0.
$$
\end{Pn}
\begin{proof}
Since $Tv=v s$ it follows that $T^2v=T(v s)=v s^2$.
Thus we have
$$
(T^2-2 s_0 T+|s|^2\mathcal{I})v=vs^2-2s_0
vs+|s|^2v=v(s^2-2s_0 s+|s|^2)=0
$$
where we have used the identity $s^2-2s_0 s+|s|^2=0$ which
holds for every $s\in \mathbb{H}$.
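Indeed, writing $s=s_0+\underline{s}$ with $s_0\in\mathbb{R}$ and $\underline{s}$
purely imaginary, so that $\underline{s}^2=-|\underline{s}|^2$, one checks directly that
$$
s^2-2s_0 s+|s|^2=(s_0^2+2s_0\underline{s}-|\underline{s}|^2)
-2s_0(s_0+\underline{s})+(s_0^2+|\underline{s}|^2)=0.
$$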
\end{proof}
\section{A result on negative squares}
\setcounter{equation}{0}
\label{6}
In this section we will consider power series of the form $K(p,q)=\sum_{n,m=0}^\infty p^na_{n,m}\overline{q}^m$,
where $a_{n,m}=a_{m,n}^*\in\mathbb{H}^{N\times N}$. It is immediate that
$K(p,q)$ is a function slice hyperholomorphic in $p$ and right slice hyperholomorphic in $\bar q$; moreover the assumption on the coefficients $a_{n,m}$ implies that $K(p,q)$ is Hermitian.
\begin{Pn}
Let $(a_{n,m})_{n,m\in\mathbb N_0}$ denote a sequence of $N\times
N$ quaternionic matrices such that $a_{n,m}=a_{m,n}^*$, and
assume that the power series
\[
K(p,q)=\sum_{n,m=0}^\infty p^na_{n,m}\overline{q}^m
\]
converges in a neighborhood $V$ of the origin. Then the following
are
equivalent:\\
$(1)$ The function $K(p,q)$ has $\kappa$ negative squares in
$V$.\\
$(2)$ All the finite matrices $A_{\mu}\stackrel{\rm def.}{=}
(a_{n,m})_{n,m=0,\ldots \mu}$ have at most $\kappa$ strictly
negative eigenvalues, and exactly $\kappa$ strictly negative
eigenvalues for at least one $\mu\in\mathbb N_0$.
\label{pnneq}
\end{Pn}
\begin{proof}
Let $r>0$ be such that $B(0,r)\subset V$, and let $I,J$ be two
units in the unit sphere of purely imaginary quaternions $\mathbb
S$ (see \eqref{sphere} for the latter). Then
\[
a_{n,m}=\frac{1}{4r^{n+m}\pi^2}\iint_{[0,2\pi]^2}e^{-Int}K(re^{It},re^{Js})e^{Jms}dtds.
\]
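Indeed, substituting the power series of $K$ and using
$\int_0^{2\pi}e^{I(k-n)t}\,dt=2\pi\delta_{kn}$, the exponentials in $\mathbb{C}_I$
combine on the left of the coefficients and those in $\mathbb{C}_J$ on the right,
so that only the term with indices $(n,m)$ survives.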
This expression does not depend on the specific choice of $I$ and
$J$. Furthermore, we take $I=J$ and so:
\[
A_\mu=\frac{1}{4r^{n+m}\pi^2}\iint_{[0,2\pi]^2}
\begin{pmatrix}I_N\\e^{-Jt}I_N\\ \vdots \\ e^{-J\mu t}I_N\end{pmatrix}
K(re^{Jt},re^{Js})\begin{pmatrix}I_N&e^{Js}I_N& \cdots & e^{J\mu
s}I_N\end{pmatrix}dtds.
\]
Write now
\[
K(p,q)=K_+(p,q)-F(p)F(q)^*,
\]
where $F$ is $\mathbb H^{N\times \kappa}$-valued. The function
$F$ is built from functions of the form $p\mapsto K(p,q)$ for a
finite number of $q$'s, and so is a continuous function of $p$,
and so is $K_+(p,q)$. See \cite[pp. 8-9]{adrs}. Thus
\[
A_\mu=A_{\mu,+}-A_{\mu,-}
\]
where
\[
\begin{split}
A_{\mu, +}&= \frac{1}{4r^{n+m}\pi^2}\iint_{[0,2\pi]^2}
\begin{pmatrix}I_N\\e^{-Jt}I_N\\ \vdots \\ e^{-J\mu t}I_N\end{pmatrix}
K_+(re^{Jt},re^{Js})\begin{pmatrix}I_N&e^{Js}I_N& \cdots & e^{J\mu
s}I_N\end{pmatrix}dtds\\
A_{\mu,-}&= \frac{1}{4r^{n+m}\pi^2}\iint_{[0,2\pi]^2}
\begin{pmatrix}I_N\\e^{-Jt}I_N\\ \vdots \\ e^{-J\mu t}I_N\end{pmatrix}
F(re^{Jt})F(re^{Js})^*\begin{pmatrix}I_N&e^{Js}I_N& \cdots &
e^{J\mu s}I_N\end{pmatrix}dtds.
\end{split}
\]
These expressions show that $A_\mu$ has at most $\kappa$ strictly
negative eigenvalues.\\
Conversely, assume that all the matrices $A_\mu$ have at most
$\kappa$ strictly negative eigenvalues, and define
\[
K_\mu(p,q)=\sum_{n,m=0}^\mu p^na_{n,m}\overline{q}^m.
\]
Then, $K_\mu$ has at most $\kappa$ negative squares, as is seen by
writing $A_\mu$ as a difference of two positive matrices, one of
rank $\kappa$. Since, pointwise,
\[
K(p,q)=\lim_{\mu\rightarrow\infty} K_\mu(p,q),
\]
the function $K(p,q)$ has at most $\kappa$ negative squares.\\
To conclude the proof, it remains to see that the number of
negative squares of $K(p,q)$ and $A_\mu$ is the same. Assume that
$K(p,q)$ has $\kappa$ negative squares, but that the $A_\mu$ have
at most $\kappa^\prime<\kappa$ strictly negative eigenvalues.
Then, the argument above shows that $K(p,q)$ would have at most
$\kappa^\prime$ negative squares, which contradicts the
hypothesis. The other direction is proved in the same way.
\end{proof}
As consequences we have:
\begin{Pn}
In the notation of the preceding proposition, the number of
negative squares is independent of the neighborhood $V$.
\end{Pn}
\begin{proof}
This is because the coefficients $a_{n,m}$ do not depend on the
given neighborhood.
\end{proof}
\begin{Pn}
\label{pn51} Assume that $K(p,q)$ is $\mathbb H^{N\times
N}$-valued and has $\kappa$ negative squares in $V$ and let
$\alpha(p)$ be a $\mathbb H^{N\times N}$-valued slice
hyperholomorphic function such that $\alpha(0)$ is
invertible. Then the function
\begin{equation}
\label{aka}
B(p,q)=\alpha(p)\star K(p,q)\star_r\alpha(q)^*
\end{equation}
has $\kappa$ negative squares in $V$.
\end{Pn}
\begin{proof}
Write $K(p,q)=\sum_{n,m=0}^\infty p^na_{n,m}\overline{q}^m$ and
$\alpha(p)=\alpha_0+p\alpha_1+\cdots$. The $\mu\times \mu$ main
block matrix $B_\mu$ corresponding to the power series
\eqref{aka} is equal to
\[
B_\mu=LA_\mu L^*,
\]
where
\[
L=\begin{pmatrix} \alpha_0&0&0&\cdots &0\\
\alpha_1&\alpha_0&0&\cdots&0\\
\alpha_2&\alpha_1&\alpha_0&0&\cdots\\
\vdots&\vdots& & &\\
\alpha_\mu&\alpha_{\mu-1}&\cdots &\alpha_1&\alpha_0
\end{pmatrix}
\]
Since $\alpha_0=\alpha(0)$ is assumed invertible, the signatures of $A_\mu$
and $B_\mu$ are the same for every $\mu\in\mathbb N_0$. By
Proposition \ref{pnneq} it follows that the kernels $K(p,q)$ and $B(p,q)$
have the same number of negative squares.
\end{proof}
\section{Operators in quaternionic Pontryagin spaces}
\setcounter{equation}{0}
\label{7}
This section contains some definitions and results on right quaternionic
Pontryagin spaces. Some of the statements hold when we replace Pontryagin spaces by
Krein spaces. \\
The following result, proved in the complex plane in
\cite[Theorem 2.4, p. 18]{ikl}, is very useful to study
convergence of sequences in Pontryagin spaces. It implies in
particular that in a reproducing kernel Pontryagin space,
convergence is equivalent to convergence of the self-inner product
together with pointwise convergence. The proof of the
quaternionic case appears in \cite[Proposition 12.9, p. 471]{as3}.
\begin{Pn}
\label{pn:ikl}
Let $(\mathscr P,[\cdot,\cdot])$ denote a quaternionic right
Pontryagin space. The sequence $f_n$ of elements in $\mathscr P$
tends to $f\in\mathscr P$ if and only if the following two
conditions hold:
\[
\begin{split}
\lim_{n\rightarrow\infty} [f_n,f_n]&=[f,f], \intertext{and}
\lim_{n\rightarrow\infty}[f_n,g]&=[f,g]\quad\mbox{for $g$ in a
dense subspace of $\mathscr P$.}
\end{split}
\]
\end{Pn}
We endow $\mathbb H^N$ with the inner product
\[
[u,v]_{\mathbb H^N}=v^*u.
\]
Furthermore, Hermitian forms are assumed to satisfy the
following linearity condition:
\begin{equation}
\label{eqherm}
[fa,gb]=\overline{b}[f,g]a.
\end{equation}
\begin{Rk}{\rm When we consider two sided Pontryagin vector spaces, we require an additional property on the inner product
with respect to the left multiplication, i.e.
$$
[av,av]=|a|^2[v,v].
$$
This property is satisfied, for example, in $\mathbb{H}^N$ with the inner product described above.}
\end{Rk}
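Both \eqref{eqherm} and the property in the above remark can be verified directly in
$\mathbb H^N$: for $u,v\in\mathbb H^N$ and $a,b\in\mathbb H$,
$$
[ua,vb]_{\mathbb H^N}=(vb)^*(ua)=\overline{b}\,v^*u\,a=\overline{b}\,[u,v]_{\mathbb H^N}\,a,
\ \ \ \
[av,av]_{\mathbb H^N}=\sum_{k=1}^N\overline{av_k}\,av_k=|a|^2[v,v]_{\mathbb H^N},
$$
where the last equality uses the fact that $\overline{a}a=|a|^2$ is real.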
\begin{Tm}
\label{tm:milano}
Let $T$ be a contraction in a two sided quaternionic
Pontryagin space such that
$T$ has no S-spectrum on the unit sphere and satisfies
$$
[S_L^{-1}(\lambda,T)\lambda v, S_L^{-1}(\lambda,T)\lambda v] \leq
[S_L^{-1}(\lambda,T)v, S_L^{-1}(\lambda,T)v], \ \ \ {\rm for} \ \ \ |\lambda|=1.
$$
Then $T$ has a maximal negative invariant subspace and this
subspace is unique.
\end{Tm}
\begin{proof}
Let $|\lambda|=1$ so that the operator $S_L^{-1}(\lambda,T)$ exists.
The fact that $T$ is a contraction implies the inequality
$$
[TS_L^{-1}(\lambda,T)v, TS_L^{-1}(\lambda,T)v] <
[S_L^{-1}(\lambda,T)v, S_L^{-1}(\lambda,T)v]
$$
for $v\not=0$.
Using the $S$-resolvent equation one deduces
$$
[S_L^{-1}(\lambda,T) \lambda v+\mathcal{I}v,
S_L^{-1}(\lambda,T) \lambda v+\mathcal{I}v] <
[S_L^{-1}(\lambda,T)v, S_L^{-1}(\lambda,T)v]
$$
from which one gets
$$
[S_L^{-1}(\lambda,T) \lambda v, S_L^{-1}(\lambda,T) \lambda v]
+ [v, v]
+ [S_L^{-1}(\lambda,T) \lambda v, v]
+
[ v, S_L^{-1}(\lambda,T)\lambda v]
$$
$$
< [S_L^{-1}(\lambda,T)v, S_L^{-1}(\lambda,T)v].
$$
So, using the hypothesis, we finally get
$$
[v, v]+ [S_L^{-1}(\lambda,T) \lambda v, v]+[ v, S_L^{-1}(\lambda,T)
\lambda v]< 0.
$$
In the above inequality we replace $S_L^{-1}
(\lambda,T) \lambda$ by $S_L^{-1}(\lambda,T) \lambda d\lambda_I$, where $d
\lambda_I=-Ie^{I\theta}d\theta$
and integrate over $[0,2\pi]$.
Recalling the definition of Riesz projector
$$
P=-\frac{1}{2\pi}\int_{\partial(\mathbb{B}\cap \mathbb{C}_I)}
S_L^{-1}(\lambda,T) \lambda d\lambda_I
$$
we obtain
$$
[v, v]< [P v, v]+[ v, Pv]
$$
and so
$$
[v, v]< 2 {\rm Re}\,[P v, v].
$$
Theorem \ref{PTcommutation} implies that $PT=TP$ and the rest of
the proof follows as in Theorem 11.1 p.76 in
\cite{ikl}.
\end{proof}
For right quaternionic Pontryagin spaces we have the following result.
\begin{Pn}\label{maximal}
A contraction $T$ in a right quaternionic
Pontryagin space $\mathcal P$ possessing an eigenvalue $\lambda$ with $|\lambda|>1$ has a maximal negative invariant subspace.
\end{Pn}
\begin{proof} Let $v\not= 0$ be an eigenvector associated to the right eigenvalue $\lambda$. Then we have
$$
[Tv,Tv]=[v\lambda,v\lambda]< [v,v],
$$
from which we deduce
$$
|\lambda|^2[v,v]< [v,v]
$$
and so $[v,v]<0$. Consider the right subspace $\mathcal M$ generated by $v$. Then any element in $\mathcal M$ is of the form $va$, $a\in\mathbb{H}$, and
$[va,va]=|a|^2[v,v]<0$ for $a\not=0$. The subspace $\mathcal M$ is invariant under the action of $T$; indeed $T(va)=T(v)a=v\lambda a$. Thus $\mathcal M$ is a negative invariant subspace
of $\mathcal P$. Either $\mathcal M$ is maximal, or it is contained in another negative invariant subspace $\mathcal M_1$; iterating this procedure we obtain a chain of inclusions $\mathcal M\subset \mathcal M_1\subset\ldots $ which must terminate because $\mathcal P_-$ is finite dimensional.
\end{proof}
In view of Definition \ref{neqs} below, it is useful to recall the
following result (see \cite[Corollary 6.2, p. 41]{MR97h:15020}).
\begin{Pn}
An Hermitian matrix $H$ with entries in $\mathbb H$ is
diagonalizable, and its eigenvalues are real. Furthermore,
eigenvectors corresponding to different eigenvalues are
orthogonal in $\mathbb H^N$. Let $(t,r,s)$ denote the signature of
$H$. There exists an invertible matrix $U\in\mathbb H^{N\times
N}$ such that
\begin{equation}
H=U\begin{pmatrix}\sigma_{tr}&0\\
0&0_{s\times s}\end{pmatrix}U^*
\end{equation}
where $\sigma_{tr}=\begin{pmatrix}I_t&0\\0&-I_r\end{pmatrix}$.
\label{pn:hermite}
\end{Pn}
\begin{Dn}
\label{neqs}
Let $A$ be a continuous right linear operator from the
quaternionic Pontryagin space $\mathscr P$ into itself. We say
that $A$ has $\kappa$ negative squares if for every choice of
$N\in\mathbb N$ and of $f_1,\ldots, f_N\in\mathscr P$, the
Hermitian matrix $H\in\mathbb H^{N\times N}$ with $jk$ entry
equal to
\begin{equation}
\label{a1}
[Af_k,f_j]_{\mathscr P}
\end{equation}
has at most $\kappa$ strictly negative eigenvalues, and exactly
$\kappa$ strictly negative eigenvalues for some choice of
$N,f_1,\ldots, f_N$.
\end{Dn}
Note that the above definition is consistent with the right linearity
condition \eqref{eqherm}. If we replace the $f_k$ by
$f_kh_k$ where $h_k\in\mathbb H$, the new matrix has $jk$ entry
\[
[Af_kh_k,f_jh_j]_{\mathscr P}=\overline{h_j}[Af_k,f_j]_{\mathscr P}h_k,
\]
and so
\[
\left([Af_kh_k,f_jh_j]_{\mathscr P}\right)_{j,k=1,\ldots, N}=D^*
\left([Af_k,f_j]_{\mathscr P}\right)_{j,k=1,\ldots, N}D,
\]
with
\[
D={\rm diag}~(h_1,h_2,\ldots, h_N).
\]
In case of left linear operators, \eqref{eqherm} is then replaced by
\[
[af,bg]=b[f,g]\overline{a},
\]
and the roles of $j$ and $k$ have to be interchanged in \eqref{a1}.
This problem does not appear in the commutative case.\\
We point out the following notation. Let $T$ be a bounded linear
operator from the quaternionic right Pontryagin space $(\mathscr
P_1,[\cdot,\cdot]_{\mathscr P_1} )$ into the quaternionic right
Pontryagin space $(\mathscr P_2,[\cdot,\cdot]_{\mathscr P_2})$,
and let $\s_1$ and $\s_2$ denote two signature operators such that
$(\mathscr P_1,\langle\cdot,\cdot\rangle_{\mathscr P_1})$ and
$(\mathscr P_2,\langle\cdot, \cdot\rangle_{\mathscr P_2})$ are
right quaternionic Pontryagin spaces, where
\[
\langle x,y\rangle_{\mathscr P_j}=[ x,\s_jy]_{\mathscr P_j},\quad
j=1,2.
\]
We denote by $T^{[*]}$ the adjoint of
$T$ with respect to the Pontryagin structure and by $T^*$ its
adjoint with respect to the Hilbert space structure. Thus,
\[
\begin{split}
[Tx,y]_{\mathscr P_2}&=\langle Tx,\s_2y\rangle_{\mathscr P_2}\\
&=\langle x,T^*\s_2y\rangle_{\mathscr P_1}\\
&=[x,\s_1T^*\s_2y]_{\mathscr P_1},
\end{split}
\]
and so, as is well known in the complex case,
\[
T^{[*]}=\s_1T^*\s_2\quad{\rm and} \quad T^*=\s_1T^{[*]}\s_2.
\]
We will denote $\nu_-(A)=\kappa$. When $\kappa=0$ the operator is
called positive.
\begin{Tm}
\label{tm:facto} Let $A$ be a bounded right linear self-adjoint operator from
the quaternionic Pontryagin space $\mathscr P$ into itself, which
has a finite number of negative squares. Then, there exists a
quaternionic Pontryagin space $\mathscr P_1$ with ${\rm
ind}_{\mathscr P_1}=\nu_-(A)$, and a bounded right linear operator $T$ from
$\mathscr P$ into $\mathscr P_1$ such that
\[
A=T^{[*]}T.
\]
\end{Tm}
\begin{proof} The proof follows that of \cite[Theorem 3.4, p.
456]{MR2576304}, slightly adapted to the present non commutative
setting. Since $A$ is Hermitian, the formula
\[
[Af,Ag]_A=[Af,g]_{\mathscr P}
\]
defines a Hermitian form on the range of $A$. Since
$\nu_-(A)=\kappa$, there exists $N\in\mathbb N$ and $f_1,\ldots,
f_N\in\mathscr P$ such that the Hermitian matrix $M$ with $\ell
j$ entry $[Af_j,f_\ell]_{\mathscr P}$ has exactly $\kappa$
strictly negative eigenvalues. Let $v_1,\ldots , v_\kappa$ be the
corresponding eigenvectors, with strictly negative eigenvalues
$\lambda_1,\ldots, \lambda_\kappa$. As recalled in Proposition
\ref{pn:hermite} $v_j$ and $v_k$ are orthogonal when
$\lambda_j\not=\lambda_k$. We can, and will, assume that vectors
corresponding to a given eigenvalue are orthogonal. Then,
\begin{equation}
v_s^*Mv_t=\lambda_t\delta_{ts},\quad t,s=1,\ldots, \kappa.
\label{Fortho}
\end{equation}
In view of \eqref{eqherm}, and with
\[
v_t=\begin{pmatrix}v_{t1}\\ v_{t2}\\ \vdots\\ v_{tN}\end{pmatrix},\quad t=1,\ldots, \kappa,
\]
we see that \eqref{Fortho} can be rewritten as
\[
[F_s,F_t]_A=\lambda_t\delta_{ts},\quad{\rm with}\quad
F_s=\sum_{k=1}^N Af_kv_{sk},\quad t,s=1,\ldots, \kappa.
\]
The space $\mathscr M$ spanned by $F_1,\ldots, F_\kappa$ is strictly
negative, and it has an orthocomplement in $({\rm Ran}~
A,[\cdot,\cdot]_A)$, say $\mathscr M^{[\perp]}$, which is a right
quaternionic pre-Hilbert space. The space ${\rm Ran}~A$ endowed
with the quadratic form
\[
\langle m+h,m+h\rangle_A=-[m,m]_A+[h,h]_{A},\quad m\in\mathscr
M,\,\, h\in\mathscr M^{[\perp]},
\]
is a pre-Hilbert space, and we denote by $\mathscr P_1$ its
completion. We note that $\mathscr P_1$ is defined only up to an
isomorphism of Hilbert space. We denote by $\iota$ the injection
from ${\rm Ran}~A$ into $\mathscr P_1$ such that
\[
\langle f,f\rangle_A=\langle \iota (f),\iota(f)\rangle_{\mathscr
P_1}.
\]
We consider the decomposition $\mathscr P_1=
\iota(\mathscr M)\oplus\iota(\mathscr M)^{\perp}$, and endow $\mathscr P_1$ with the
indefinite inner product
\[
[\iota(m)+h,\iota(m)+h]_{\mathscr P_1}
=[m,m]_A+\langle h, h\rangle_{\mathscr P_1}.
\]
See \cite[Theorem 2.5, p. 20]{ikl} for the similar argument in
the complex case. Still following \cite{MR2576304} we define
\[
Tf=\iota (Af),\quad f\in\mathscr P.
\]
We now prove that $T$ is a bounded right linear operator from $\mathscr
P$ into $\iota({\rm Ran}~A)\subset\mathscr P_1$. Indeed, let
$(f_n)_{n\in\mathbb N}$ denote a sequence of elements in $\mathscr
P$ converging (in the topology of $\mathscr P$) to $f\in\mathscr
P$. Since ${\rm Ran}~A$ is dense in $\mathscr P_1$, using
Proposition \ref{pn:ikl} it is therefore enough
to prove that:\\
\[
\begin{split}
\lim_{n\rightarrow\infty}[Tf_n,Tf_n]_{\mathscr P_1}&=[Tf,Tf]_{\mathscr P_1},\\
\intertext{{\rm and}}
\lim_{n\rightarrow\infty}[Tf_n,Tg]_{\mathscr P_1}&=[Tf,Tg]_{\mathscr P_1},
\quad \forall g\in\mathscr P.
\end{split}
\]
By definition of the inner product, the first equality amounts to
\[
\lim_{n\rightarrow\infty}[Af_n,f_n]_{\mathscr P}=[Af,f]_{\mathscr P},
\]
which is true since $A$ is continuous, and similarly for the
second claim. Therefore $T$ has an adjoint operator, which is
also continuous. The equalities (with $f,g\in\mathscr P$)
\[
\begin{split}
[f,T^{[*]}Tg]_{\mathscr P}&=[Tf,Tg]_{\mathscr P_1}\\
&=[Tf,\iota(Ag)]_{\mathscr P_1}\\
&=[\iota(Af),\iota(Ag)]_{\mathscr P_1}\\
&=[Af,Ag]_{A}\\
&=[f,Ag]_{\mathscr P}
\end{split}
\]
show that $T^{[*]}T=A$.
\end{proof}
We note the following. As is well known, the completion of a
pre-Hilbert space is unique up to an isomorphism of Hilbert
spaces, and the completion need not be in general a subspace of
the original pre-Hilbert space. Some identification is needed. In
\cite{ikl} (see \cite[2.4, p. 19]{ikl} and also in
\cite{MR2576304}) the operator $\iota$ is not used, and the space
$\mathscr P_1$ is written directly as a direct sum of $\mathscr
M$ and of the completion of the orthogonal of $\mathscr M$. This
amounts to identifying the orthogonal complement of $\mathscr M$ as being a
subspace of its abstract completion.
\section{The structure theorem}
\setcounter{equation}{0}
\label{8}
We first give some background to provide motivation for the results presented in this section.
Denote by $R_0$ the backward-shift operator:
\[
R_0f(z)=\frac{f(z)-f(0)}{z}.
\]
Beurling's theorem can be seen as the characterization of $R_0$-invariant subspaces of the Hardy space $\mathbf H_2(\mathbb D)$,
where $\mathbb D$ is the unit disk in $\mathbb{C}$. These
are the spaces $\mathbf H_2(\mathbb D)\ominus j\mathbf H_2(\mathbb D)$, where $j$ is an inner function. Equivalently, these
are the reproducing kernel Hilbert spaces with reproducing kernel $k_j(z,w)=\frac{1-j(z)\overline{j(w)}}{1-z\overline{w}}$
with $j$ inner. When replacing $j$ inner by $s$ analytic and contractive in the open unit disk, it is more difficult to
characterize reproducing kernel Hilbert spaces $\mathscr H(s)$
with reproducing kernel $k_s(z,w)$. Allowing for $s$ not necessarily scalar valued,
de Branges gave a characterization of $\mathscr H(s)$ spaces in \cite[Theorem 11, p. 171]{db-fact}.
This result was
extended in \cite[Theorem 3.1.2, p. 85]{adrs} to the case of Pontryagin spaces.
The theorem below is the analog of de Branges' result in the slice-hyperholomorphic setting,
in which the backward-shift operator $R_0$ is now defined as
$$
R_0 f(p)=p^{-1}(f(p)-f(0))=(f(p)-f(0))\star_\ell p^{-1}.
$$
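On monomials the backward-shift acts as expected: for $f(p)=p^nc$ with $c\in\mathbb{H}^N$ and $n\geq 1$ we have $f(0)=0$, so that
$$
R_0(p^nc)=p^{-1}(p^nc)=p^{n-1}c.
$$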
In order to prove the result, we will need a fact which is a direct consequence of Lemma 3.6 in \cite{acs2}: if $f$ and $g$ are two left slice hyperholomorphic functions, then
$$
(f\star_l g)^*=g^*\star_r f^*.
$$
\begin{Tm}
Let $\s\in\mathbb H^{N\times N}$ be a signature matrix, and let
$\mathscr M$ be a Pontryagin space of $\mathbb H^N$-valued
functions slice hyperholomorphic in a spherical neighborhood $V$ of the
origin, and invariant under the operator $R_0$. Assume moreover
that
\begin{equation}
\label{structure} [R_0f,R_0f]_{\mathscr M}\le[f,f]_{\mathscr
M}-f(0)^*\s f(0).
\end{equation}
Then, there exists a Pontryagin space $\mathscr P$ such that
${\rm ind}_-\mathscr P=\nu_-(\s)$ and a $\mathbf L(\mathscr P,
\mathbb H^N)$-valued slice hyperholomorphic function $S$ such
that the elements of $\mathscr M$ are the restrictions to $V$ of
the elements of $\mathscr P(S)$.
\end{Tm}
\begin{proof} We follow the proof in \cite[Theorem 3.1.2, p. 85]{adrs}.
Let ${\mathscr P_2}=\mathscr M\oplus \hh_\s$, and denote by $C$
the point evaluation at the origin.
We divide the proof into a number of steps.\\
STEP 1: {\sl Let $p\in V$ and $f\in \mathscr M$. Then,
\begin{equation}
\label{pointevalua} f(p)=C\star (I-pR_0)^{-\star}f.
\end{equation}
}
STEP 2: {\sl The reproducing kernel of $\mathscr M$ is given by
\[
K(p,q)=C\star (I-pR_0)^{-\star}\left(C\star (I-qR_0)^{-\star}\right)^*.
\]
}
STEP 3: {\sl Let $E$ denote the operator
\[
E=\begin{pmatrix}R_0\\ C\end{pmatrix}:\quad \mathscr
M\longrightarrow\mathscr P_2.
\]
There exists a quaternionic
Pontryagin space $\mathscr P_1$ with ${\rm ind}_{\mathscr
P_1}=\nu_-(\s)$, and a bounded right linear operator $T$ from $\mathscr P_2$ into
$\mathscr P_1$ such that
\begin{equation}
\label{factoE}
I_{\mathscr P_2}-EE^{[*]}=T^{[*]}T.
\end{equation}
}
Write (see \cite[(1.3.14), p. 26]{adrs})
\[
\begin{split}
\begin{pmatrix}I_{\mathscr M}&0\\ E&I_{\mathscr P_2}\end{pmatrix}
\begin{pmatrix} I_{\mathscr M}&0\\
0&I_{\mathscr P_2}-EE^{[*]}\end{pmatrix}\begin{pmatrix}I_{\mathscr M}&E^{[*]}\\
0&I_{\mathscr P_2}\end{pmatrix}&=\\
&\hspace{-2cm}=
\begin{pmatrix}
I_{\mathscr M}&E^{[*]}\\ 0&I_{\mathscr
P_2}\end{pmatrix}\begin{pmatrix} I_{\mathscr
M}-E^{[*]}E&0\\0&I_{\mathscr P_2}
\end{pmatrix}\begin{pmatrix}I_{\mathscr M}&0\\
E&I_{\mathscr P_2}\end{pmatrix}.
\end{split}
\]
Thus,
\begin{equation}
\label{ineq4} \nu_-(I_{\mathscr P_2}-EE^{[*]})+\nu_-(\mathscr
M)=\nu_- (I_{\mathscr M}-E^{[*]}E)+\nu_-(\mathscr P_2),
\end{equation}
and noting that $\nu_-(\mathscr P_2)=\nu_-(\mathscr M)+\nu_-(\s)$,
we have (see also \cite[Theorem 1.3.4(1), p. 25]{adrs})
\[
\nu_-(I_{\mathscr P_2}-EE^{[*]})+\nu_-(\mathscr M)=\nu_-
(I_{\mathscr M}-E^{[*]}E)+\nu_-(\mathscr M)+\nu_-(\s).
\]
Equation \eqref{structure} can be rewritten as $I-E^{[*]}E\ge 0$, and
in particular $\nu_-(I-E^{[*]}E)=0$. Thus
\[
\nu_-(I_{\mathscr P_2}-EE^{[*]})=\nu_-(\s).
\]
Applying Theorem \ref{tm:facto} we obtain the factorization \eqref{factoE}.\\
We set
\[
T^{[*]}=\begin{pmatrix}B\\ D\end{pmatrix}:\,\, \mathscr
P_1\longrightarrow\mathscr M\oplus\hh_\s,
\]
and
\[
V=\begin{pmatrix}R_0&B\\ C&D\end{pmatrix}.
\]
Let
\[
S(p)=D+p C\star(I_{\mathscr M}-p R_0)^{-\star} B.
\]
STEP 4: {\sl We have that
\[
\s-S(p)\s S(q)^*=C\star(I-pR_0)^{-\star}\star (I-p\overline{q})
\s_{\mathscr M}((I-qR_0)^{-\star})^*\star_rC^*,
\]
where $\s_{\mathscr M}$ is a fundamental symmetry for $\mathscr
M$.}\\
The computation is as in our previous paper \cite{acs2}.
\end{proof}
We note that a corollary of \eqref{ineq4} is:
\begin{Tm}
Let $T$ be a contraction between right quaternionic Pontryagin
spaces of the same index. Then, its adjoint is a contraction.
\end{Tm}
\begin{proof}
Indeed, when $\nu_-(\mathscr M)=\nu_-(\mathscr P_2)$ we have
\[
\nu_-(I_{\mathscr P_2}-EE^{[*]})=\nu_- (I_{\mathscr M}-E^{[*]}E).
\]
\end{proof}
\section{Blaschke products}
\setcounter{equation}{0}
\label{5}
As is well known and easy to
check, a rational function $r$ is analytic in the open unit disk
and takes unitary values on the unit circle if and only if it is
a finite Blaschke product. If one allows poles inside the unit
disk, then $r$ is a quotient of finite Blaschke products. This is a very special
case of a result of Krein and Langer discussed in Section \ref{9} below. In
particular, such a function cannot have a pole (or a zero) on the
unit circle. The case of matrix-valued rational functions which
take unitary-values (with respect to a possibly indefinite inner
product space) plays an important role in the theory of linear
systems. When the metric is indefinite, poles can occur on the
unit circle.
See for instance \cite{pootapov,gvkdm,bgr-ot64,ag}.\\
Slice hyperholomorphic functions have zeros that are either
isolated points or isolated 2-spheres. If a slice
hyperholomorphic function $f$ has zeros at $Z=\{a_1,a_2,\ldots,
[c_1], [c_2], \ldots \}$ then its reciprocal $f^{-\star}$ has
poles at the set $\{[a_1],[a_2],\ldots, [c_1], [c_2], \ldots
\}$, $a_i,c_j\in\mathbb{H}$. So the sphere associated to a zero
of $f$ is a pole of $f^{-\star}$. In other words, the poles are
always 2-spheres as one may clearly see from the definition of
$f^{-\star}=(f^s)^{-1}f^c$, see also \cite{MR2572530}.\\
We now recall the definitions of
Blaschke factors, see \cite{acs2}, and then
discuss the counterpart of rational unitary functions
in the present setting. For the Blaschke factors it is necessary to give two
different definitions, according to the fact that the zero of a
Blaschke factor is a point, see Definition \ref{Def5.2}, or a
sphere, see Definition \ref{Def5.13}.
\begin{Dn}\label{Def5.2}
Let $a\in\mathbb{H}$, $|a|<1$. The function
\begin{equation}
\label{eqBlaschke} B_a(p)=(1-p\bar
a)^{-\star}\star(a-p)\frac{\bar a}{|a|}
\end{equation}
is called Blaschke factor at $a$.
\end{Dn}
\begin{Rk}{\rm
Let $a\in\mathbb{H}$, $|a|<1$. Then, see Theorem 5.5 in
\cite{acs2}, the Blaschke factor $B_a(p)$ takes the unit ball
$\mathbb{B}$ to itself and the boundary of the unit ball to
itself. Moreover, it has a unique zero for $p=a$.}
\end{Rk}
The Blaschke factor having zeros at a sphere is defined as
follows:
\begin{Dn}\label{Def5.13}
Let $a\in\mathbb{H}$, $|a|<1$. The function
\begin{equation}
\label{blas_sph} B_{[a]}(p)=(1-2{\rm
Re}(a)p+p^2|a|^2)^{-1}(|a|^2-2{\rm Re}(a)p+p^2)
\end{equation}
is called Blaschke factor at the sphere $[a]$.
\end{Dn}
\begin{Rk}{\rm The definition of $B_{[a]}(p)$ does not depend on
the choice of the point $a$ that identifies the 2-sphere. In fact
all the elements in the sphere $[a]$ have the same real part and modulus.
It is immediate that the Blaschke factor $B_{[a]}(p)$ vanishes on
the sphere $[a]$.}
\end{Rk}
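Indeed, if $x\in[a]$ then ${\rm Re}(x)={\rm Re}(a)$ and $|x|=|a|$, so that, by the
identity $x^2-2x_0x+|x|^2=0$ recalled in the proof of Proposition \ref{eigenvector},
$$
|a|^2-2{\rm Re}(a)x+x^2=|x|^2-2{\rm Re}(x)\,x+x^2=0.
$$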
The following result has been proven in \cite{acs2}, Theorem
5.16:
\begin{Tm}
A Blaschke product having zeros at the set
$$
Z=\{(a_1,\mu_1), (a_2,\mu_2), \ldots, ([c_1],\nu_1), ([c_2],\nu_2), \ldots \}
$$
where $a_j\in \mathbb{B}$, $a_j$ have respective
multiplicities $\mu_j\geq 1$, $a_j\not=0$ for
$j=1,2,\ldots $, $[a_i]\not=[a_j]$ if $i\not=j$,
$c_i\in \mathbb{B}$, the spheres $[c_j]$ have respective multiplicities $\nu_j\geq 1$,
$j=1,2,\ldots$, $[c_i]\not=[c_j]$ if $i\not=j$
and
$$
\sum_{i,j\geq 1} \Big(\mu_i (1-|a_i|)+ \nu_j(1-|c_j|)\Big)<\infty
$$
is given by
\[
\prod_{i\geq 1} (B_{[c_i]}(p))^{\nu_i}\prod_{j\geq 1}^\star
(B_{a'_j}(p))^{\star \mu_j},
\]
where $a'_1=a_1$ and $a'_j\in [a_j]$, for $j=2,3,\ldots$, are suitably chosen
elements.
\end{Tm}
\begin{Rk}{\rm It is not difficult to compute the slice hyperholomorphic inverses of the Blaschke factors using Definition \ref{reciprocal}.
The slice hyperholomorphic reciprocal of $B_a(p)$ and $B_{[a]}(p)$ are, respectively:
$$
B_a(p)^{-\star}=\frac{a}{|a|}(a-p)^{-\star}\star(1-p\bar
a),
$$
$$
B_{[a]}(p)^{-\star}=(|a|^2-2{\rm Re}(a)p+p^2)^{-1}(1-2{\rm
Re}(a)p+p^2|a|^2).
$$
The reciprocal of a Blaschke product is constructed by taking the reciprocal of the factors, in the reverse order.
}
\end{Rk}
\begin{Rk}{\rm The zeros of $B_{[a]}(p)$ are poles of $B_{[a]}(p)^{-\star}$ and vice versa.
The Blaschke factor $B_{a}(p)$ has a zero at $p=a$ and a pole at the 2-sphere $[1/\bar a]$, while $B_{a}(p)^{-\star}$
has a zero at $p=1/\bar a$ and a pole at the 2-sphere $[a]$.
}
\end{Rk}
\begin{Pn}
Let ${\s}\in\mathbb H^{N\times N}$ denote a signature matrix (that
is, ${\s}={\s}^*={\s}^{-1}$) and let $(C,A)\in\mathbb H^{N\times
M}\times \mathbb H^{M\times M}$ be such that
$\cap_{n=0}^\infty\ker CA^n=\left\{0\right\}$. Let $P$ be an
invertible and Hermitian solution of the Stein equation
\begin{equation}
\label{eq:stein}
P-A^*PA=C^*{\s}C.
\end{equation}
Then, there exist matrices $(B,D)\in\mathbb H^{M\times N}\times
\mathbb H^{N\times N}$ such that the function
\begin{equation}
\label{realbgk}
S(p)=D+p C\star(I_M-pA)^{-\star}B
\end{equation}
satisfies
\begin{equation}
{\s}-S(p){\s}S(q)^*=C\star
(I_M-pA)^{-\star}(P^{-1}-pP^{-1}\overline{q})
\star_r(I_M-A^*\overline{q})^{-\star_r} \star_r C^*.
\end{equation}
\label{pn:bla}
\end{Pn}
Before the proof we mention the following. The vectors
$f_1,f_2,\ldots$ in the quaternionic Pontryagin space $(\mathcal
P, [\cdot, \cdot]_{\mathcal P})$ are said to be orthonormal if
\[
[f_j,f_\ell]_{\mathcal P}=\begin{cases}0,\,\,\,\,\quad{\rm if}\quad j\not=\ell,\\
\pm 1,\quad {\rm if}\quad j=\ell.\end{cases}
\]
The set $f_1,f_2,\ldots$ is called an orthonormal basis if the closed linear span of the
$f_j$ is all of $\mathcal P$. In the proof we use
the fact that in a finite dimensional quaternionic Pontryagin space
an orthonormal family can be extended to an orthonormal
basis. This is true because any non-degenerate closed subspace of
a quaternionic Pontryagin space admits an orthogonal complement. See
\cite[Proposition 10.3, p. 464]{as3}.\\
\begin{proof}[Proof of Proposition \ref{pn:bla}]
Following our previous paper \cite{acs2}, the statement is
equivalent to finding matrices $(B,D)\in\mathbb H^{M\times N}\times
\mathbb H^{N\times N}$ such that:
\begin{equation}
\label{convention}
\begin{pmatrix}A&B\\ C&D\end{pmatrix}\begin{pmatrix} P^{-1}&0\\
0&{\s}\end{pmatrix}\begin{pmatrix}A&B\\ C&D\end{pmatrix}^*=
\begin{pmatrix} P^{-1}&0\\
0&{\s}\end{pmatrix},
\end{equation}
or equivalently,
\begin{equation}
\begin{pmatrix}A&B\\ C&D\end{pmatrix}^*\begin{pmatrix} P&0\\
0&{\s}\end{pmatrix}\begin{pmatrix}A&B\\ C&D\end{pmatrix}=
\begin{pmatrix} P&0\\
0&{\s}\end{pmatrix}.
\end{equation}
By Proposition \ref{pn:hermite} there exist a matrix
$V\in\mathbb H^{M\times M}$ and $t_1,s_1\in\mathbb N_0$ such that
\[
P=V\s_{t_1s_1}V^*.
\]
Equation \eqref{eq:stein} can be then rewritten as
\[
V^{-1}A^*V\s_{t_1,s_1}V^*AV^{-*}+V^{-1}C^*{\s}CV^{-*}=\s_{t_1,s_1},
\]
and expresses that the columns of the $\mathbb H^{(M+N)\times M}$ matrix
\[
\begin{pmatrix}V^{*}AV^{-*}\\ CV^{-*}\end{pmatrix}=
\begin{pmatrix}V^{*}&0\\0&I_N\end{pmatrix}
\begin{pmatrix}A\\ C\end{pmatrix}V^{-*}
\]
are orthogonal in $\mathbb H^{M+N}$, endowed with the inner product
\begin{equation}
\label{innerprod} [u,v]=u_1^*\s_{t_1,s_1}u_1+u_2^*{\s}u_2,\quad
u=\begin{pmatrix}u_1\\ u_2\end{pmatrix},
\end{equation}
the first $t_1$ columns having self-inner product equal to $1$ and
the next $s_1$ columns having self-inner product equal to $-1$. We
can complete these columns to form an orthonormal basis of
$\mathbb H^{M+N}$ endowed with the inner product
\eqref{innerprod}, that is, we find a matrix $X\in\mathbb
H^{(M+N)\times N}$
\[
\begin{pmatrix}
\begin{pmatrix}V^*AV^{-*}\\ CV^{-*}\end{pmatrix}&X\end{pmatrix}
\in\mathbb H^{(M+N)\times(M+N)}
\]
unitary with respect to \eqref{innerprod}. From
\[
\begin{pmatrix}
\begin{pmatrix}V^*AV^{-*}\\ CV^{-*}\end{pmatrix}&X\end{pmatrix}^*
\begin{pmatrix}\s_{t_1s_1}&0\\ 0&{\s}\end{pmatrix}
\begin{pmatrix}\begin{pmatrix}V^*AV^{-*}\\ CV^{-*}\end{pmatrix}&X\end{pmatrix}=
\begin{pmatrix}\s_
{t_1s_1}&0\\ 0&{\s}\end{pmatrix},
\]
we obtain \eqref{convention} with
\begin{equation}
\begin{pmatrix}
B\\ D\end{pmatrix}=X\begin{pmatrix}V^*&0\\ 0&I_N\end{pmatrix}.
\end{equation}
\end{proof}
When the signature matrix ${\s}$ is taken to be equal to $I_N$ we
can get another more explicit formula for $S$.
\begin{Pn}
In the notation and hypotheses of the previous proposition, assume
${\s}=I_N$. Then, $(I_M-A)$ is invertible and the function
\begin{equation}
\label{Scentered1} S(p)=I_N-(1-p)
C\star(I_M-pA)^{-\star}P^{-1}(I_M-A)^{-*}C^*
\end{equation}
satisfies
\begin{equation}
I_N-S(p)S(q)^*=C\star
(I_M-pA)^{-\star}(P^{-1}-pP^{-1}\overline{q})\star_r(I_M-A^*\overline{q})^{-\star_r}
\star_r C^*.
\end{equation}
\label{pn:bla2}
\end{Pn}
Note that formula \eqref{Scentered1} is not a realization of the
form \eqref{realbgk}. It can be brought to the form
\eqref{realbgk} by writing:
\[
\begin{split}
S(p)&=S(0)+S(p)-S(0)\\
&=I_N-CP^{-1}(I_M-A)^{-*}C^*+\\
&\hspace{5mm}+p
C\star(I_M-pA)^{-\star}(I_M-A)P^{-1}(I_M-A)^{-*}C^*.
\end{split}
\]
\begin{proof}[Proof of Proposition \ref{pn:bla2}]
We write for $p,q$ where the various expressions make sense
\[
\begin{split}
S(p)I_N S(q)^*-I_N&= (I_N-(1-p)
C\star(I_M-pA)^{-\star}P^{-1}(I_M-A)^{-*}C^*)\times\\
&\hspace{5mm}\times(I_N-(1-q)
C\star(I_M-qA)^{-\star}P^{-1}(I_M-A)^{-*}C^*)^*-I_N\\
&=C\star(I_M-pA)^{-\star}\star\Delta\star_r(I_M-A^*\overline{q})^{-\star_r}\star_r
C^*,
\end{split}
\]
where
\[
\begin{split}
\Delta&=-(1-p)\star
P^{-1}(I_M-A)^{-*}\star_r(I_M-\overline{q}A^*)-(I_M-pA)\star
(I_M-A)^{-1}P^{-1}\star_r(1-\overline{q})\\
&\hspace{5mm}+(1-p)\star P^{-1}(I_M-A)^{-*}C^*C\star_r(1-\overline{q})\star_r
(I_M-A)^{-1}P^{-1}.
\end{split}
\]
Taking into account the Stein equation \eqref{eq:stein} with
${\s}=I_N$ we have:
\[
\Delta=P^{-1}(I_M-A)^{-*}\star\Delta_1\star_r (I_M-A)^{-1}P^{-1},
\]
where, after some computations,
\[
\begin{split}
\Delta_1&= \left\{-(1-p)\star(I_M-\overline{q}A^*)\star_r
P(I_M-A)-\right.\\
&\hspace{5mm}\left.-(I_M-A)^*P\star
(I_M-pA)\star_r(1-\overline{q})+(1-p)\star(P-A^*PA)\star_r
(1-\overline{q})\right\}\\
&=-\left((I_M-A^*)P(I_M-A)-p\,(I_M-A^*)P(I_M-A)\,\overline{q}\right).
\end{split}
\]
Thus $\Delta=-(P^{-1}-pP^{-1}\overline{q})$, and the asserted identity follows.
\end{proof}
We note that a space $\mathscr P(S)$ can be finite dimensional
without $S$ being square. For instance
\[
S(p)=\frac{1}{\sqrt{2}}\begin{pmatrix}1&B_a(p)\end{pmatrix}.
\]
On the other hand, finite dimensional ${\mathscr P}(S)$ spaces for
square $S$ correspond to the ${\s}$-unitary functions studied in
linear system theory. The factorization theory of these functions
(that is, the slice-hyperholomorphic counterpart of
\cite{ad3,ag,ag2}) will be considered in a future publication.
\section{Krein-Langer factorization}
\setcounter{equation}{0}
\label{9}
In the classical case, functions $S$ for which the kernel
\eqref{skl} has a finite number of negative squares have a
special structure: they can be written as the quotient of a Schur
function and of a finite Blaschke product. This is a result of
Krein and Langer. See for instance \cite{kl1}. In this section we
present some related results.
\begin{Pn}
Let $S$ be a $\mathbb H^{N\times M}$-valued slice
hyperholomorphic function in $\mathbb B$ and of the form
\begin{equation}
\label{formlk}
S(p)=B(p)^{-\star}\star S_0(p),\quad p\in \mathbb B
\end{equation}
where $B$ is a $\mathbb H^{N\times N}$-valued Blaschke product and
$S_0$ is a $\mathbb H^{N\times M}$-valued Schur multiplier. Then,
$S$ is a generalized Schur function.
\end{Pn}
\begin{proof}
We follow the argument in \cite[\S 6.2]{ad3}. We have for
$n\in\mathbb N_0$ and $p,q\in \mathbb{B}$
\[
\begin{split}
p^n(I_N-S(p)S(q)^*)\overline{q}^n&=p^n
B(p)^{-\star}\star\left( B(p)B(q)^*-S_0(p)S_0(q)^*\right)
\star_r(B(q)^*)^{-\star_r}\overline{q}^n
\\
&=p^n B(p)^{-\star}\star\left(B(p)B(q)^*-I_N+\right.
\\
&\hspace{5mm}\left.+I_N-S_0(p)S_0(q)^*\right)
\star_r(B(q)^*)^{-\star_r}\overline{q}^n.
\end{split}
\]
Thus
\begin{equation}
\label{diff}
K_S(p,q)=B(p)^{-\star}\star\left(K_{S_0}(p,q)-K_B(p,q) \right)\star_r
(B(q)^*)^{-\star_r},
\end{equation}
where $K_{S_0}$ and $K_B$ are defined as in \eqref{skl}. Using
Proposition \ref{pn51} with $\kappa=0$ we see that formula
\eqref{diff} expresses the kernel $K_S$ as a difference of two
positive definite kernels, one being finite dimensional. It
follows that $K_S$ has a finite number of negative squares in
$\mathbb B$.
\end{proof}
\begin{Tm}
Let $S$ be a $\mathbb H^{N\times M}$-valued slice
hyperholomorphic function in $\mathbb B$, and such that the
associated space $\mathcal P(S)$ is finite dimensional. Then, $S$
admits a representation of the form \eqref{formlk}.
\end{Tm}
\begin{proof} Since the coefficient spaces are quaternionic
Hilbert spaces, $R_0$ is a contraction in $\mathcal P(S)$. We
proceed along the lines of \cite[\S 4.2 p. 141]{adrs}
and divide the proof in a number of steps.\\
STEP 1: {\sl The operator $R_0$ has no eigenvalues of modulus
$1$.}\\
Indeed, let $f\in\mathcal P(S)$ and $\lambda\in\mathbb H$ be such
that $R_0f=f\lambda$. Assume $|\lambda|=1$. From
\begin{equation}
\label{ineq}
[R_0f,R_0f]_{\mathcal P(S)}\le[f,f]_{\mathcal
P(S)}-f(0)^*f(0),\quad f\in\mathcal P(S)
\end{equation}
we get
\[
[f\lambda,f\lambda]\le [f,f]-f(0)^*f(0)
\]
and so $f(0)=0$. Reiterating \eqref{ineq} with $R_0f$ instead of
$f$ we get $(R_0f)(0)=0$, and in a similar way, $(R^n_0f)(0)=0$
for $n=2,3,\ldots$. But the $(R^n_0f)(0)$ are the coefficients of
the power series of $f$, and so $f=0$.\\
STEP 2: {\sl Let $\kappa$ be the number of negative squares of
$K_S$. Then, $R_0$ has a $\kappa$-dimensional negative
invariant subspace.}\\
We write, in matrix form, $A$ for $R_0$ and $C$ for the point evaluation at
the origin, and denote by $H$ the matrix corresponding to the
inner product in $\mathcal P(S)$. Thus
\[
A^*HA\le H.
\]
Without loss of generality we assume that $A$ is in Jordan form
(see \cite{wiegmann}, \cite{MR97h:15020}), and we denote by $\mathcal
L_+$ (resp. $\mathcal L_-$) the linear span of the generalized
eigenvectors corresponding to eigenvalues in $\mathbb B$ (resp.
outside the closure of $\mathbb B$). Since there are no
eigenvalues on $\partial\mathbb B$, $\mathbb H^N$ (where $N={\rm
dim}~\mathcal P(S)$) is spanned by $\mathcal L_+$ and $\mathcal
L_-$. As in \cite[Theorem 4.6.1, p. 57]{glr}, one shows that
\[
{\rm dim}~\mathcal L_+\le i_+(H)\quad{\rm and}\quad {\rm
dim}~\mathcal L_-\le i_-(H),
\]
where $i_+(H)$ is the number of positive eigenvalues of $H$ (and
similarly for $i_-(H)$), and by a dimension argument equality holds
in both inequalities. Thus $\mathcal L_-$ is a $\kappa$-dimensional invariant
subspace of $A$.\\
Let $P$ denote the solution of the matrix equation
\[
P-A^*PA=C^*C.
\]
STEP 3: {\sl Let $\mathcal M$ be the space corresponding to
$\mathcal L_-$ in $\mathcal P(S)$, endowed with the metric
defined by $P$. Then $\mathcal M$ is
contractively included in $\mathcal P(S)$.}\\
Let $M$ denote the Gram matrix of $\mathcal M$ in the $\mathcal
P(S)$ inner product. We show that $M\le P$. Indeed, in view of
\eqref{ineq}, the matrix $M$ satisfies
\[
A^*MA\le M-C^*C.
\]
In view of \eqref{eq:stein}, the matrix $M-P$ satisfies $
A^*(M-P)A\le M-P$, or equivalently (since $A$ is invertible)
\[
M-P\le A^{-*}(M-P)A^{-1}
\]
and so, for every $n\in\mathbb N$,
\begin{equation}
\label{eqpm}
M-P\le (A^{-*})^n(M-P)A^{-n}.
\end{equation}
Since the S-spectrum of $A$ is outside the closed unit ball, we
have by the S-spectral radius theorem (see \cite[Theorem 3.10, p. 616]
{MR2496568}, \cite[Theorem 4.12.6, p. 155]{MR2752913})
\[
\lim_{n\rightarrow\infty}\|A^{-n}\|^{1/n}=0,
\]
and so $\lim_{n\rightarrow\infty}\|(A^{-*})^n(P-M)A^{-n}\|=0$.
Thus entrywise
\[
\lim_{n\rightarrow\infty}(A^{-*})^n(P-M)A^{-n}=0
\]
and it follows from \eqref{eqpm} that $M-P\le 0$.\\
By Proposition \ref{pn:bla},
\[
\mathcal M=\mathcal P(B)
\]
when $\mathcal M$ is endowed with the $P$ metric. Furthermore:\\
STEP 4: {\sl The kernel $K_S(p,q)-K_B(p,q)$ is positive.}\\
Let $k_{\mathcal M}(p,q)$ denote the reproducing kernel of
$\mathcal M$ when endowed with the $\mathcal P(S)$ metric. Then
\[
k_{\mathcal M}(p,q)-K_B(p,q)\ge 0
\]
and
\[
K_S(p,q)-k_{\mathcal M}(p,q)\ge 0.
\]
On the other hand
\[
K_S(p,q)-K_B(p,q)=K_S(p,q)-k_{\mathcal M}(p,q)+k_{\mathcal
M}(p,q)-K_B(p,q)
\]
and so is positive definite.\\
To conclude we apply Proposition \ref{pnneq} to
\[
K_S(p,q)-K_B(p,q)=B(p)\star\left(I_N-S_0(p)S_0(q)^*\right)\star_rB(q)^*
\]
where $S(p)=B(p)^{-\star}\star S_0(p)$, to check that $S_0$ is a
Schur function.
\end{proof}
\section{Introduction}
\label{section:intro}
Biomedical \emph{systematic reviews} aim to synthesize all evidence relevant to a given clinical query \cite{sackett1997evidence,gough2017introduction}.
Such reviews typically comprise both quantitative and narrative summaries of the evidence.
The former is most often a statistical meta-analysis of the results reported in the constituent trials, which in turn informs the natural language interpretation provided in the latter.
In Cochrane reviews,\footnote{\url{https://www.cochrane.org/}} brief narrative summaries communicating the main review findings are provided in structured abstracts in the \emph{Authors' conclusions} section. Below (left) is an example from a review of the evidence concerning the use of inhaled antibiotics for cystic fibrosis \cite{ryan2011inhaled}. We also provide the summary generated by one of the automated models we evaluate (right), given the abstracts of the included papers.
\begin{center}
\begin{table}[h!]
\begin{tabular}{ p{.47\textwidth} p{.47\textwidth} }
\hline
{\bf \emph{Authors' conclusions}} Inhaled antibiotic treatment probably improves lung function and reduces exacerbation rate, but a pooled estimate of the level of benefit is not possible. The best evidence is for inhaled tobramycin. More evidence, from trials of longer duration, is needed to determine whether this benefit is maintained and to determine the significance of development of antibiotic-resistant organisms. & {\bf \emph{Automatically generated summary}}
Inhaled antibiotics are effective in the treatment of Pseudomonas aeruginosa pulmonary infection in CF patients. However, there is a need for further randomised controlled trials to assess long-term safety and efficacy of inhaled antibiotics in patients with cystic fibrosis. Further trials are also needed to assess the effects of antibiotic treatment on morbidity and mortality. \\
\hline
\end{tabular}
\caption{Example \emph{Author conclusions} from a Cochrane systematic review abstract (left) and an automatically generated summary (right), conditioned on the set of clinical trial abstracts that informed the corresponding review. }
\label{table:example}
\end{table}
\end{center}
Narrative interpretations of review findings are an invaluable resource for practitioners because they provide a concise, readable summary of all evidence relevant to the clinical question that motivated the corresponding review.
Unfortunately, these author conclusions are the endpoint of the lengthy and laborious systematic review process.
Consequently, summaries will often not be available for arbitrary clinical questions (even when relevant trial reports exist).
Moreover, even when they are available they will often be out of date.
A system capable of automatically summarizing the clinical trials literature would make it possible to summarize all evidence, on-demand.
In this work we evaluate state-of-the-art multi-document neural abstractive summarization models that aim to produce narrative summaries from the titles and abstracts of published reports of relevant randomized controlled trials (RCTs).
We train these models using the \emph{Authors' conclusions} sections of Cochrane systematic review abstracts as targets, and the titles and abstracts from the corresponding reviews as inputs.
We evaluate our models both quantitatively and qualitatively, paying special attention to the \emph{factuality} of generated summaries.
\subsection*{Related Work}
\label{section:related-work}
\paragraph{Automatic Summarization and Question Answering for EBM}
This work extends a thread of prior work on summarization for EBM \cite{demner2006answer,molla2010corpus,sarker2017automated}.
Demner-Fushman and Lin led a seminal effort on automatic question answering (QA) from the literature to aid EBM \cite{demner2006answer}.
This work on QA is adjacent to traditional summarization:
In their approach they aimed to extract snippets from individual articles relevant to a given question, rather than to \emph{generate} an abstractive summary of relevant abstracts, as is our aim here.
More recent work on (extractive) QA over clinical literature has further demonstrated the promise of such systems \cite{cao2011askhermes}.
For recent work in this vein, we point the reader to the latest BioASQ challenge iteration \cite{nentidis2019results}, which included a biomedical QA task.
While relevant, we view the task of extractive biomedical QA as distinct from the more focused aim of generating abstractive narrative summaries over relevant input abstracts to mimic narratives found in formal evidence syntheses (Table \ref{table:example}).
More directly relevant to this setting of biomedical systematic reviews, Molla \cite{molla2010corpus,molla2016corpus} introduced a dataset to facilitate work on summarization in EBM that comprises 456 questions and accompanying evidence-based answers sourced from the ``Clinical Inquiries'' section of the Journal of Family Practice.
Sarker \emph{et al.} \cite{sarker2017automated} surveyed automated summarization for EBM, highlighting the need for domain-specific multi-document summarization systems to aid EBM.
We aspire to make progress toward this end.
In contrast to our approach, these prior efforts used comparatively small corpora, and pre-dated the current wave of neural summarization techniques that have yielded considerable progress in language generation and (abstractive) summarization \cite{see2017get,lewis2019bart,zhang2019pegasus}.
\paragraph{Neural Abstractive Summarization}
Automatic summarization is a major subfield of NLP\cite{maybury1999advances,nenkova2011automatic}.
Much of the prior work on summarization of biomedical literature has used \emph{extractive} techniques, which directly copy from inputs to produce summaries.
However, narrative evidence synthesis is an inherently \emph{abstractive} task --- systems must generate, rather than just copy, text --- as it entails communicating an overview of all available evidence.
Recent work on neural models has engendered rapid progress on abstractive summarization\cite{rush2015neural,lin2019abstractive}; we do not aim to survey this extensively here.
Illustrative of recent progress --- and most relevant to this work --- is the Bidirectional and Auto-Regressive Transformers (BART) model\cite{lewis2019bart}, which recently achieved state-of-the-art performance on abstractive summarization tasks.
Because it forms the basis of our approach, we elaborate on this model in Section \ref{section:methods}.
Despite progress in summary generation, evaluating abstractive summarization models remains challenging\cite{van2019best}.
Automated metrics calculated with respect to reference summaries such as ROUGE\cite{lin2004rouge} provide, at best, a noisy assessment of text quality.
Of particular interest in the setting of evidence syntheses is the \emph{factuality} of generated summaries: Here, as in many settings, users are likely to value accuracy more than other properties of generated text\cite{reiter:2020,reiter-belz-2009-investigation}. Unfortunately, neural models for abstractive summarization are prone to `hallucinations' that do not accurately reflect the source document(s), and automatic metrics like ROUGE may not capture this \cite{maynez2020faithfulness}.
This has motivated a few recent efforts to automatically evaluate factuality.
Wang \emph{et al.} proposed \emph{QAGS}, which uses automated question-answering to measure the consistency between reference and generated summaries \cite{wang2020asking}.
Elsewhere, Xu \emph{et al.} \cite{xu2020fact} proposed evaluating text factuality independent of surface realization via Semantic Role Labeling (SRL).
We extend this emerging line of work here by manually evaluating the factuality of summaries produced of clinical trial reports, and proposing a domain-specific method for automatically evaluating such narrative syntheses.
\section{Methods}
\subsection*{Data}
We use 4,528 systematic reviews composed by members of the Cochrane collaboration (\url{https://www.cochrane.org/}).
These are reviews of all trials relevant to a given clinical question.
The systematic review abstracts together with the titles and abstracts of the clinical trials summarized by these reviews form our dataset.
The reviews include, on average, 10 trials each. The average abstract length of included trials is 245 words.
We use the ``authors' conclusions" subsection of the systematic review abstract as our target summary (75 words on average).
We split this data randomly into 3,619, 455, and 454 reviews corresponding to train, development (dev), and test sets, respectively.
\subsection*{Models}
\label{section:methods}
We adopt Bidirectional and Auto-Regressive Transformers (BART) as our underlying model architecture\cite{lewis2019bart}. This is a generalization of the original BERT\cite{devlin2018bert} Transformer\cite{vaswani2017attention} model and pretraining regime in which self-supervision is not restricted to the objectives of (masked) token and next sentence prediction (as in BERT). Instead, BART is defined as an encoder-decoder model with an autoregressive decoder trained to `denoise' arbitrarily corrupted input texts.
Masking tokens --- the original BERT objective --- is just one type of `corruption'.
This permits use of additional corruption schemes (pretraining objectives), a property that we exploit in this work (Section \ref{section:additional-pretraining}).
BART achieves strong performance on abstractive summarization tasks\cite{lewis2019bart}, which makes it particularly appropriate for our use here.
BART defines a sequence-to-sequence network\cite{sutskever2014sequence} in which the \emph{encoder} is a bidirectional Transformer network and the \emph{decoder} is autoregressive (and hence amenable to language generation tasks such as summarization).
One limitation of BART (and large neural encoder models generally) is that it imposes a limit on the number of input tokens that can be accepted due to memory constraints; for BART this limit is 1024.
We discuss this further below.
We do not modify the BART architecture, but we explore new, domain-specific pretraining strategies and methods that entail modifying inputs.
For the former, we propose and evaluate additional pretraining in which the objective is to construct abstracts of RCT articles from corresponding full-texts (Section \ref{section:additional-pretraining}).
For the latter, we propose and evaluate a method in which we `decorate' input texts with annotations automatically produced by trained models, e.g., we explicitly demarcate (via special tokens) snippets in the input that seem to describe interventions and key findings (Section \ref{section:inputs}).
This is a simple (and as far as we are aware, novel) means of incorporating prior information or instance meta-data in end-to-end neural summarization models.
\subsubsection*{Initialization and pre-training strategies}
\label{section:additional-pretraining}
We use the BART-large version of BART,\footnote{Provided via the {\tt huggingface Transformers} library \cite{wolf2019huggingface}.} in which both the encoder and decoder are 12-layer Transformers.
The `vanilla' variant of BART is initialized to weights learned under a set of denoising objectives that differ in how they corrupt the input text (which is then to be reconstructed).
For example, objectives include token masking (as in the original BERT), `text infilling', and `sentence permutation' tasks \cite{lewis2019bart}.
This pretraining is performed over a very large corpus comprising: {\tt BookCorpus}\cite{zhu2015aligning} and {\tt English Wikipedia}, {\tt CC-News}\cite{liu2019roberta}, {\tt OpenWebText},\footnote{\url{http://web.archive.org/ save/http://Skylion007.github.io/ OpenWebTextCorpus}} and {\tt Stories}\cite{trinh2018simple} (over 160GB of raw text in all).
We verified via string matching that none of the target summaries (published Cochrane review abstracts) appeared in this pretraining corpus.
As a natural starting point for our task, we initialize BART-large weights to those learned via fine-tuning on the XSUM abstractive summarization corpus\cite{narayan2018don}.
With this as our starting point, we explored additional `in-domain' pretraining prior to learning to summarize trials.
Specifically, we train BART to generate abstracts from full-text articles, using $\sim$60k full-texts from the PubMed Central (PMC) Open-Access set that were classified as describing RCTs in humans by a previously developed model \cite{marshall2018machine}.
Full-texts exceed the 1024 token budget imposed by BART, and so we alternate between selecting sentences from the start and end of the article text until we reach this limit.
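To make this truncation concrete, below is a minimal sketch of the selection scheme, assuming whitespace word counts as a stand-in for BART's subword tokenizer and a naive regex sentence splitter (neither detail is prescribed by our pipeline description):
\begin{verbatim}
import re

def select_sentences(full_text, budget=1024):
    # Alternate between sentences from the start and end of the
    # article until the (approximate) token budget is exhausted;
    # return the selection in original document order.
    sentences = re.split(r"(?<=[.!?])\s+", full_text.strip())
    keep, used = set(), 0
    lo, hi, from_front = 0, len(sentences) - 1, True
    while lo <= hi:
        idx = lo if from_front else hi
        cost = len(sentences[idx].split())  # words as a token proxy
        if used + cost > budget:
            break
        keep.add(idx)
        used += cost
        if from_front:
            lo += 1
        else:
            hi -= 1
        from_front = not from_front
    return " ".join(sentences[i] for i in sorted(keep))
\end{verbatim}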
\subsubsection*{Modifying Inputs}
\label{section:inputs}
\begin{figure}
\centering
\includegraphics[scale=0.4]{figures/decorated-text.pdf}
\caption{Input articles (here we show two for illustration) `decorated' using special tokens to demarcate automatically extracted salient attributes: \underline{$<$pl$>$} for `punchlines' sentences (those that seem to state the main finding), and snippets of text describing study populations $<$pop$>$, interventions $<$inter$>$, and outcomes $<$out$>$, respectively.}
\label{fig:decoration-ex}
\end{figure}
Another important design choice concerns the inputs that we provide to the encoder component of BART.
In the most straightforward use, we simply pass along subsets of the raw titles and abstracts.
We demarcate titles, abstracts, and the start of new documents with special tokens (`$<$T$>$', `$<$ABS$>$', `$<$S$>$').
Typically, only some of the abstracts associated with a given review will fit within the aforementioned token limit.
We prioritize including titles, and then sentences from the beginnings and ends of abstracts.
We select the latter in a `round-robin' order over the (randomly ordered) inputs, alternating between the starts and ends of abstracts, until the token budget is exhausted, as sketched below.
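A sketch of this packing procedure follows; the word-count budget again stands in for the tokenizer, and the exact interleaving of sentences across documents is an assumption of the sketch rather than a specification:
\begin{verbatim}
def pack_review_input(docs, budget=1024):
    # docs: list of (title, abstract_sentences) pairs for one review.
    # Titles are always included; the remaining budget is spent
    # round-robin over abstracts, alternating between each
    # abstract's start and end.
    pieces, used = [], 0
    for title, _ in docs:
        pieces.append("<S> <T> " + title + " <ABS>")
        used += len(title.split())
    cursors = [[0, len(sents) - 1] for _, sents in docs]
    from_front, added = True, True
    while added:
        added = False
        for (_, sents), cur in zip(docs, cursors):
            lo, hi = cur
            if lo > hi:
                continue
            sent = sents[lo] if from_front else sents[hi]
            cost = len(sent.split())
            if used + cost > budget:
                continue
            pieces.append(sent)
            used += cost
            if from_front:
                cur[0] += 1
            else:
                cur[1] -= 1
            added = True
        from_front = not from_front
    return " ".join(pieces)
\end{verbatim}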
\vspace{.5em}
\noindent {\bf Decoration} Prior work has investigated methods for automatically extracting key trial attributes from reports of RCTs, including descriptions of the study Populations, Interventions/Comparators, and Outcomes (the `PICO' elements) \cite{nye2018corpus} and identifying `punchline' snippets that communicate the main study findings \cite{lehman2019inferring}.
These key aspects of trials ought to figure prominently in summaries of the evidence.
But in a standard end-to-end summarization approach, the model would have to implicitly learn to focus on these attributes, which seems inefficient.
We propose a simple `decoration' strategy in which we explicitly demarcate snippets of text tagged by pretrained models as describing the aforementioned attributes.
Decoration entails enclosing snippets (automatically) identified as describing the respective attributes within special tokens that denote these.
We provide an example (showing only two articles) in Figure \ref{fig:decoration-ex}.
This preprocessing step is a simple mechanism to directly communicate to the encoder which bits of the text seem to provide information for the aspects of interest.
To identify PICO elements, we use RobotReviewer\cite{marshall2016robotreviewer,marshall2017automating}.
To identify punchline sentences, we fine-tuned a BioMed-RoBERTa model \cite{domains} on the Evidence Inference (2.0) dataset \cite{deyoung2020evidence}, which includes annotations of evidence-bearing (`punchline') sentences in RCT articles.
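As a concrete (hypothetical) illustration of decoration, the helper below wraps model-tagged character spans in the special tokens; the exact token inventory and the use of closing markers are implementation details of this sketch, and the example sentence is a toy:
\begin{verbatim}
TAGS = {"population": "pop", "intervention": "inter",
        "outcome": "out", "punchline": "pl"}

def decorate(text, spans):
    # spans: (start, end, label) triples from the upstream taggers,
    # assumed non-overlapping; insert from the end backwards so
    # earlier character offsets remain valid.
    for start, end, label in sorted(spans, reverse=True):
        tok = TAGS[label]
        text = (text[:start] + "<" + tok + "> " + text[start:end]
                + " </" + tok + ">" + text[end:])
    return text

print(decorate("Aspirin reduced mortality in 200 adults with sepsis.",
               [(0, 7, "intervention"), (29, 51, "population")]))
\end{verbatim}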
\vspace{0.5em}
\noindent{\bf Sorting} Rather than treating all inputs equally, we might prefer to prioritize inclusion of evidence from large, high-quality studies.
To operationalize this intuition, we consider a variant in which we greedily include tokens from abstracts ordered by sample size ($N$) scaled by an estimate of overall risk of bias (RoB) (a proxy for study quality).
We infer both of these quantities automatically using RobotReviewer\cite{marshall2017automating,marshall2016robotreviewer}.
Here RoB is the predicted probability of a study being at overall low risk of bias, based on the abstract text.
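This ordering amounts to a one-line sort; the field names below are illustrative, not RobotReviewer's actual output schema:
\begin{verbatim}
def order_by_evidence_weight(studies):
    # Put large studies with a high predicted probability of being
    # at low risk of bias first; tokens are then packed from this
    # ordering until the 1024-token budget is spent.
    return sorted(studies,
                  key=lambda s: s["sample_size"] * s["p_low_rob"],
                  reverse=True)
\end{verbatim}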
\subsection*{Design}
We analyze the performance of five model variants that use elements of the above strategies (see Table \ref{table:main-results-dev-test-combined}).
All are fine-tuned on the training set of Cochrane reviews.
For `XSUM' we initialize BART to weights estimated on the XSUM abstractive summarization task \cite{narayan2018don}.
For `Pretrain (PMC)' we continue pretraining over the PMC set as described above; all other models start from this checkpoint.
`Decorate' marks up the inputs as described above before passing to the encoder (at train and test time).
`Sort by N$\cdot$RoB' spends the 1024-token budget by greedily selecting for inclusion words from abstracts with the lowest (inferred) risk of bias, scaled by (extracted) sample size ($N$).
\paragraph{Hyperparameters} During fine-tuning we used a learning rate of $3$e$-5$. During decoding we used a beam size of 4 and a minimum target length of 65, we enabled early stopping, and we blocked repetition of trigrams.
This largely follows the original BART paper\cite{lewis2019bart}; we did not systematically tune these hyperparameters.
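For concreteness, these settings correspond to a generation call like the following against the {\tt huggingface Transformers} API; the checkpoint name stands in for our fine-tuned models, and in practice any added special tokens must first be registered with the tokenizer:
\begin{verbatim}
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large-xsum")
model = BartForConditionalGeneration.from_pretrained(
    "facebook/bart-large-xsum")

source = "<T> Aspirin for sepsis <ABS> Aspirin reduced mortality ..."
batch = tok(source, max_length=1024, truncation=True,
            return_tensors="pt")
ids = model.generate(
    batch["input_ids"],
    num_beams=4,             # beam size
    min_length=65,           # minimum target length
    early_stopping=True,
    no_repeat_ngram_size=3,  # block repeated trigrams
)
print(tok.decode(ids[0], skip_special_tokens=True))
\end{verbatim}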
\subsection*{Main outcome measurements}
We measure summarization system performance using both automated and manual approaches.
For the former we use Recall-Oriented Understudy for Gisting Evaluation (ROUGE) \cite{lin2004rouge}, which relies on word overlaps between generated and reference summaries.
For the latter we enlisted medical doctors to annotate generated summaries with respect to relevance, plausibility (including fluency), and factuality (i.e., agreement with reference target summaries).
For this we built a custom interface; task instructions (with screenshots) are available here: \url{http://shorturl.at/csJPS}.
As we later confirm empirically, ROUGE scores do not necessarily capture the factual accuracy of generated summaries, which is critical when generating evidence syntheses.
Manual evaluation of summaries can capture this, but is expensive, hindering rapid development of new models.
We propose a new approach to automatically evaluating generated narrative evidence syntheses with respect to the factuality of the findings that these present.
Specifically, we infer the reported directionality of the main finding in the generated and reference summaries, respectively, and evaluate the resultant level of (dis)agreement.
To derive this automated metric we use the Evidence Inference dataset\cite{lehman2019inferring}, which comprises full-text RCT reports in which evidence-bearing snippets have been annotated, along with whether these report that the finding is a \emph{significant decrease}, \emph{significant increase}, or \emph{no significant difference} with respect to specific interventions, comparators, and outcomes.
We simplify this by collapsing the first two categories, yielding a binary classification problem with categories \emph{significant difference} and \emph{no significant difference}.
Following DeYoung \emph{et al.}\cite{deyoung2020evidence}, we train a `pipeline' model in which one component is trained to identify `punchline' sentences within summaries, and a second is trained to infer the directionality of these findings.
Both models are composed of a linear layer on top of BioMed-RoBERTa \cite{domains}.
Using these models we can infer whether reference and generated summaries appear to agree.
Specifically, we use the Jensen-Shannon Divergence (JSD) --- a measure of similarity between probability distributions --- between the predicted probabilities for \emph{sig. difference} and \emph{no sig. difference} from our inference model for the generated and reference summary texts, respectively.
A low divergence should then suggest that the findings presented in these summaries are in agreement. We will call this measure \emph{findings-JSD}.
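A minimal sketch of the metric follows, where {\tt infer\_direction} stands in for our trained pipeline (mapping a summary to probabilities over the two collapsed categories):
\begin{verbatim}
import numpy as np

def jsd(p, q):
    # Jensen-Shannon divergence (base 2) between two discrete
    # probability distributions.
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def findings_jsd(generated, reference, infer_direction):
    # infer_direction(text) -> [P(sig. difference),
    #                           P(no sig. difference)]
    return jsd(infer_direction(generated), infer_direction(reference))
\end{verbatim}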
\section{Results}
\subsection*{Automated Evaluation}
\label{section:results-quant}
We report ROUGE-L scores with respect to the target (manually composed) Cochrane summaries, for both the development and test sets in Table \ref{table:main-results-dev-test-combined}.
The methods perform about comparably with respect to this automatic metric.
But ROUGE measures are based on (exact) $n$-gram overlap, and are insufficient for measuring the \emph{factuality} of generated texts\cite{maynez2020faithfulness,kryscinski2019neural}.
Indeed, we find that the summaries generated by all variants considered enjoy strong fluency, but the key question for this application is whether generated summaries are \emph{factually correct}.
Below we confirm via manual evaluation that despite achieving comparable ROUGE scores, these systems vary significantly with respect to the factuality of the summaries that they produce.
\begin{table*}
\centering
\footnotesize
\begin{tabular}{llllrr}
Name & Initialization & (Additional) Pretraining & System inputs & \multicolumn{1}{c}{ROUGE-L ({\tt dev})} & \multicolumn{1}{c}{ROUGE-L ({\tt test})} \\
\midrule
XSUM & \emph{XSUM} & None & Titles and abstracts & 0.264 & 0.264 \\
Pretrain (PMC) & \emph{XSUM} & PMC RCTs & Titles and abstracts & 0.264 & 0.269 \\
Decorate & \emph{XSUM} & PMC RCTs & Decorated & 0.271 & 0.266 \\
Sort by N$\cdot$RoB & \emph{XSUM} & PMC RCTs & Sorted by $N \cdot$ RoB & 0.267 & 0.267 \\
Decorate and sort & \emph{XSUM} & PMC RCTs & Decorated and sorted & 0.265 & 0.265\\
\bottomrule
\end{tabular}
\caption{Model variants and ROUGE-L measures over the {\tt dev} and {\tt test} sets. `PMC RCTs' is shorthand for our proposed strategy of (continued) pretraining to generate abstracts from full-texts for all RCT reports in PMC. All model variants aside from `XSUM' start from the Pretrain (PMC) checkpoint.}
\label{table:main-results-dev-test-combined}
\end{table*}
\subsection*{Manual Evaluation}
\label{section:results-manual}
Manual annotation was performed for 100 reference Cochrane reviews and the 5 systems described above.
Annotators were shown summaries generated by these systems in turn, \emph{in random order}.
Randomization was performed independently for each review, i.e., for each reference summary.
Annotators were blinded to which system produced which summaries during assessment.
We asked four questions across two pages about each generated summary.
The first page displayed only the generated summary, and asked annotators to appraise its \emph{relevance} to the topic (the title of the corresponding systematic review) on a 3-point ordinal scale ranging from mostly off-topic to strongly focused on-topic.
The second question on the first page concerned `semantic plausibility'; this is intended to measure whether the generated text is understandable, coherent, and free from self-contradictory statements.
This assessment was on a five-point (Likert) scale.
Following these initial evaluations, annotators were asked two additional questions to assess the factuality of generated summaries with respect to the reference.
The first concerned the direction of the reported finding in the reference summary (e.g., did the authors conclude that the intervention being investigated was beneficial compared with the control?).
The second question then asked the annotator to assess the degree to which the generated summary agreed with the reference summary in terms of these key conclusions.
Both of these judgments were collected on Likert-scales.
\begin{table*}
\centering
\footnotesize
\begin{tabular}{lrrr}
\toprule
System variant & Relevance $>$2 & Fluency $>$3 & Factuality $>$3 \\
\midrule
XSUM & 96 & 90 & 40 \\
Pretrain (PMC) & 98 & 97 & 34 \\
Decorate & 98 & 96 & 54 \\
Sort by N $\cdot$ RoB & 96 & 93 & 46 \\
Decorate and sort & 93 & 88 & 47 \\
\bottomrule
\end{tabular}
\caption{Counts of generated summaries out of 100 assessed by MD 1 as exhibiting high relevance (3/3); good to very good fluency ($>$3/5); and moderate to strong factual agreement with reference summaries ($>$3/5).}
\label{table:manual-results-counts}
\end{table*}
One author (IJM; a medical doctor with extensive experience in evidence-based medicine) performed the above evaluation on a `pilot' set of 10 reference reviews, yielding 50 system judgements in all. He did not know which systems produced which outputs.
This pilot set was used to assess agreement with additional prospective annotators, who we recruited via the Upwork platform\footnote{\url{http://www.upwork.com}}.
We hired three candidate annotators (all with medical training) to assess the system summaries for the pilot set of reviews appraised by IJM.
Only one of these candidates provided reliable annotations, as determined by agreement with the reference set of assessments.\footnote{We assessed this both quantitatively and qualitatively. Rejected candidates provided uniformly high scores for all generated summaries, even in cases where, upon inspection, these were blatantly in disagreement with the reference summary.}
Scores provided by the successful annotator (who we will refer to as `MD 1') achieved 0.535 linearly weighted $\kappa$ with reference annotations concerning `factuality', the hardest task, indicating reasonable agreement.
IJM also subsequently evaluated all cases in this set where the label he had provided disagreed with MD 1's assessment (still blinded to which systems produced the corresponding summaries).
These were determined reasonable disagreements, given that the task is somewhat subjective as currently framed.
In total we collected assessments across the five system variants considered with respect to 100 unique corresponding Cochrane reference summaries from MD 1; for this we paid about \$1,500 USD.
As a second (post-hoc) quality check, IJM evaluated an additional randomly selected subset comprising 10 reference reviews.
Agreement concerning relevance (80\% exact agreement) and fluency (68\% exact agreement) remained high, as in the pilot round.
However, in contrast to the pilot set, agreement concerning factuality on this subset was ostensibly poor (linearly weighted $\kappa$=0.04); on average IJM scored systems higher in factuality by 1.62 points.
As above, IJM therefore evaluated all instances for which disagreement was $\geq 2$ (again keeping blinding intact).
This process again revealed predominantly reasonable subjective differences on this small set in assessing the level of agreement between the findings communicated in the generated and reference summaries, respectively.
MD 1 consistently rated lower factuality scores than IJM --- assigning lower numbers across the board --- but relative rankings seem to broadly agree (Figure \ref{fig:fact_scores}).
This disagreement suggests that in future work we should work to improve the annotation task framing and guidance.
The most common disagreement occurred in cases where the reference summary described a lack of reliable evidence on a topic, but \emph{hinted} cautiously that there was some small, or low quality evidence in favor of an intervention. If an automated summary only described a lack of reliable evidence on the topic, it was ambiguous whether the overall poor state of evidence should be scored (in this instance, showing perfect agreement), or by how much the automated summary should be penalized for missing the tentative conclusion of possible benefit.
Nonetheless, in light of strong agreement on \emph{other} rated aspects and our manual assessments of all substantial disagreements, we feel reasonably confident that MD 1 provided meaningful scores, despite the low quantitative measure of agreement on the second randomly selected set.
And regardless, the broad trends across systems agree when the annotations from the two annotators are analyzed independently (Figure \ref{fig:fact_scores}).
\begin{figure}%
\centering
\subfloat[Scores from the MD hired via Upwork (MD 1) over 100 unique reference summaries (500 summary annotations).]{{\includegraphics[width=0.45\linewidth]{figures/fact_bars_5054.pdf} }}%
\qquad
\subfloat[Scores from co-author (and MD) IJM over a subset of 20 unique reviews (100 summary annotations).]{{\includegraphics[width=0.45\linewidth]{figures/fact_bars_ijm.pdf} }}%
\caption{Factuality assessments performed by an individual with medical training for five systems over 100 unique reference summaries from the {\tt dev} set (a), and by co-author and MD IJM over a small subset of twenty of these (b). All strategies to the right of `+ Pretrain (PMC)' start from the model checkpoint after this additional pretraining. We first evaluate the `decoration' and sorting strategies (Section \ref{section:inputs}) independently, and then in combination; system names are consistent with Table \ref{table:main-results-dev-test-combined}.}%
\label{fig:fact_scores}
\end{figure}
All systems received consistently high relevance scores from MD 1 (mean scores for system summaries produced by different systems over the 100 reviews range from 2.73 to 2.79, out of 3), and `semantic plausibility' scores (range: 4.47 to 4.64 across systems, out of 5).
Table \ref{table:manual-results-counts} reports the counts of `good quality' summaries with respect to the aforementioned aspects, as judged by MD 1.
We can see that systems struggle to produce factual summaries.
Figure \ref{fig:fact_scores} (a) reports the mean factuality scores provided by MD 1 for the respective model variants.
The proposed `decorating' strategy yields a statistically significant improvement over the baseline PMC pretraining strategy (2.92 vs 3.46; $p\approx0.001$ under a paired $t$-test).
Note that this is the appropriate comparison because the `+ Decorate' model starts from the pretrained checkpoint.
Sorting inputs such that the encoder prioritizes including abstracts that describe large, high-quality studies (given the 1024 token budget imposed by BART) also increases factuality, albeit less so (2.92 vs 3.33; $p \approx 0.01$).
Figure \ref{fig:fact_scores} (b) presents the factuality scores provided by IJM over a small subset of the data (20 unique reviews in all).
The pattern is consistent with what we observed in MD 1's scores in that `decoration' yields increased factuality (mean score of 3.35 vs 3.70).\footnote{Though given the small sample of 20 reviews that IJM annotated, neither difference is statistically significant when considering only these labels.}
\subsection*{Automatically Assessing the Factuality of Evidence Synopses}
\label{section:results-auto}
\begin{comment}
\begin{figure}%
\centering
\subfloat[Mean factuality scores (from MD 1), broken down by the manually assigned directionality of findings (excluding `cannot tell'). Higher values imply better factual accuracy.]{{\includegraphics[width=0.45\linewidth]{figures/fact_bars_both.pdf} }}%
\qquad
\subfloat[Mean Jensen Shannon Divergences (JSDs) between automatically inferred results for reference and generated summaries, respectively. Lower values should imply greater factual accuracy. ]{{\includegraphics[width=0.45\linewidth]{figures/auto_fact_bars_both.pdf} }}%
\caption{System factuality measures broken down by whether the reference summary reports a significant difference (or not). In (a), this is manually annotated and we report mean provided fact scores; higher is better. In (b) we \emph{infer} the directionality for both the generated and reference summaries and measure the divergence between these; lower is better. The manual and automatic results largely tell the same story: Decoration and sorting improve factuality specifically for cases in which a significant difference is reported.}%
\label{fig:fact_scores_auto}
\end{figure}
\end{comment}
ROUGE scores do not vary much across model variants, but this probably mostly reflects the fluency of summaries --- which was also manually assessed to be uniformly good across systems.
ROUGE (which is based on word overlap statistics) does not, however, seem to capture \emph{factuality}, which is naturally of central importance for evidence synthesis.
We tested this formally using annotations from MD 1: we regressed factuality judgements (ranging from 1 to 5) on ROUGE-L scores (including an intercept term) over all annotated summaries.
The trend is as we might expect: larger ROUGE-L scores are associated with better factuality ratings, but the correlation is not significant ($p \approx 0.18$).
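This regression amounts to a few lines; the arrays below are random placeholders for the per-summary scores, not our data:
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
rouge_l = rng.uniform(0.1, 0.4, size=500)  # placeholder scores
factuality = rng.integers(1, 6, size=500)  # placeholder 1-5 ratings

fit = sm.OLS(factuality, sm.add_constant(rouge_l)).fit()
print(fit.params, fit.pvalues)  # slope and its p-value
\end{verbatim}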
We are therefore reliant on manual factuality assessments as we work to improve models.
Performing such evaluations is expensive and time-consuming: collecting annotations over 100 instances for this work cost nearly \$2,000 USD (including payments to collect `pilot' round annotations) and required a considerable investment of time in documentation and training.
Relying on manual assessments will therefore substantially slow progress on summarization models for evidence synthesis, motivating a need for automated factuality evaluation such as the \emph{findings-JSD} measure proposed above.
Repeating the regression we performed for ROUGE-L, we can measure whether findings-JSD correlates with manual factuality assessments.
We define a regression in which we predict factuality scores on the basis of the JSD scores.
We find a statistically significant correlation between these with an estimated coefficient of -1.30 (95\% CI: -1.79 to -0.81; $p < 0.01$), implying that the larger the disagreement concerning whether the summaries report a significant effect or not (measured using JSD), the lower the factuality score, as we might expect.
This result is promising. But despite the significant correlation this automated metric has with manual assessments, it is not strong enough to pick up on the differences between strategies.
In particular, repeating the $t$-test on findings-JSD scores for the pretraining and decorating strategies yields a $p$-value of $0.40$, i.e., the measure fails to meaningfully distinguish the latter from the former with respect to factuality.
We conjecture that this is because while the measure significantly correlates with human assessments, it does so only modestly ($R^2=0.05$).
We therefore conclude that this strategy constitutes a promising avenue for automatically assessing the factuality of generated summaries, but additional work is needed to define a measure that enjoys a stronger correlation with manual assessments.
\section{Discussion}
Above we proposed variants of modern neural summarization models in which we: perform additional in-domain pretraining (over the RCTs in PMC); `decorate' inputs with automatically extracted information (e.g., population descriptions and evidence-bearing sentences); and sort inputs to prioritize passing along large and high-quality trials (given the limit on the length of the model input imposed by the transformer model we use).
We evaluated these models across key aspects, including relevance, `semantic plausibility', and factuality.
All systems we considered yielded highly fluent and relevant summaries.
But manual analysis of generated and corresponding reference summaries revealed that the \emph{factuality} of these systems remains an issue.
The proposed decoration and sorting strategies both yielded modest but statistically significant improvements in assessed factuality.
Annotators exhibited some disagreement when evaluating factuality.
We believe this to be due in part to the inherent difficulty of the task, but we hope in future work to improve the annotation protocol to reduce the level of subjectivity in making factuality assessments.
For example, being more explicit in the levels of disagreement that should map onto specific numerical scores and providing more detailed instructions regarding this may improve the calibration.
Separating the factuality rating of the \emph{strength} of evidence from the direction of the conclusion seems a promising route to improve inter-rater agreement.
ROUGE scores --- commonly used to automatically evaluate summarization systems --- did not significantly correlate with factuality assessments here.
We proposed a method for automatically evaluating the factuality of narrative evidence syntheses, findings-JSD, using models to infer the reported directionality of findings in generated and reference summaries.
This measure significantly (though weakly) correlates with manual assessments of factuality.
We view this as a promising direction to pursue going forward to facilitate automatic evaluation of evidence synopses, which in turn would support continued development of automated summarization systems for evidence synthesis.
\section{Conclusions}
We have demonstrated that modern neural abstractive summarization systems can generate relevant and fluent narrative summaries of RCT evidence, but struggle to produce summaries that accurately reflect the underlying evidence, i.e., that are \emph{factual}.
We proposed new approaches that modestly improve the factuality of system outputs, and described a metric that attempts (with some success) to automatically measure factuality.
\section{Acknowledgements}
This work is funded by the National Institutes of Health (NIH) under the National Library of Medicine, grant R01-LM012086, ``Semi-Automating Data Extraction for Systematic Reviews''.
\setlength\itemsep{-0.1em}
\bibliographystyle{vancouver}
\section{Introduction}
\begin{figure}
\includegraphics[width=0.5\textwidth]{f1.eps}
\caption{Image of the Galactic Center with the observed positions indicated by
square boxes on the 21 cm radio continuum (log-log scale) imaged by Yusef-Zadeh
\& Morris (1987) with 11'' resolution (adapted from Simpson et al. 2007, Fig. 1).
}
\end{figure}
The central region of the Galaxy cannot be observed in the optical and UV ranges
because of strong extinction (Erickson et al. 1991), but in recent years
infrared observations have allowed a detailed investigation of the gas and dust
structures.
Fig. 1 shows a radio image of this region (Yusef-Zadeh \& Morris 1987). The
Sgr A West HII region contains a quiescent black hole of $\sim$ 4 10$^{6}$ M$_{\odot}$ (Ghez et al.
2005; Eisenhauer et al. 2005) which is coincident with the radio source Sgr A* and is located
at the center of the Galaxy. It also contains a cluster of massive stars.
A distance of 8 kpc is assumed by Simpson et al (2007).
Two other clusters of young massive stars and massive molecular clouds
(Sch\"{o}del et al. 2006) appear in the Galactic Center (GC). The Arches Cluster and the Quintuplet Cluster
are located $\sim$ 25 pc away in the plane of the sky.
The Arches Cluster (Nagata et al 1995 and Cotera et al. 1996) is a very massive and dense
cluster of young stars heating and ionizing the region of the
Arched Filaments and the Sickle. These are thermal radio emitters (e.g. Yusef-Zadeh \& Morris 1987,
Morris \& Yusef-Zadeh 1989, Lang et al. 1997, 2001), while the Radio Arc (Yusef-Zadeh et al 1984)
consists of non-thermally emitting linear filaments perpendicular to the Galactic plane.
The stars of the Quintuplet Cluster ionize the clouds in the extended region including the Bubble.
A detailed description of the GC is given by Simpson et al (2007).
Excitation of the gas is ascribed to photoionization
because it was found that the excitation variations depend on the projected distances
from the clusters (Erickson 1991).
The radial velocity field is very complicated. The gas velocity range
in the Sickle is $\sim$ 40-140 $\rm km\, s^{-1}$ (Yusef-Zadeh et al 1997, Lang et al. 1997),
and seems lower closest to Sgr A. Interestingly, in both the Arched Filaments and the Sickle
the velocity of the molecular clouds is similar to that of the gas.
According to the morphology in Fig. 1, it appears that the cloud structures,
characterised by a system of semicircular arcs,
arise from stellar winds or supernova explosions.
This can be noticed, for instance, in the Bubble region (Levine et al. 1999)
and in the ``Radio Arc Bubble'' (Rodr\'{i}guez-Fern\'{a}ndez et al. 2001).
At the same position on the plane of the sky as the Bubble, there is a dense molecular
cloud, G0.011−0.011 (Tsuboi et al. 1997) or G0.013−0.013 (Oka et al. 2001).
Stellar winds and supernova explosions suggest that the shocks have a non-negligible
role in heating and ionizing both the gas and the dust.
The fragmented filamentary structures characteristic of the GC strengthen this hypothesis.
The Arches Cluster has also been investigated in the light of
dynamical evolution of compact young clusters (e.g. Kim, Morris, \& Lee 1999, Kim et al. 2000).
In the X-ray domain,
using the Chandra telescope, Yusef-Zadeh et al. (2002) detected three X-ray
components associated with the Arches Cluster.
They claim that hot (10$^7$K) X-ray emitting gas
is produced by an interaction of material expelled by the massive stellar winds with
the local interstellar medium.
One of the sources has roughly the characteristics expected from
shock-heated gas created by the collision of a number of 1000 $\rm km\, s^{-1}$ stellar
winds emanating from the stars in the rich dense cluster.
However, the X-ray sources are extended and hardly related
to single X-ray binary systems.
Far-infrared (FIR) lines were
observed using the Kuiper Airborne Observatory (KAO: Colgan et al. 1996; Simpson et al. 1997)
and the Infrared Space Observatory (ISO: Rodr\'{i}guez-Fern\'{a}ndez et al. 2001; Cotera et al. 2005).
For both the Arched Filaments (Colgan et al. 1996; Cotera et al. 2005) and the Sickle
(Simpson et al. 1997; Rodr\'{i}guez-Fern\'{a}ndez et al. 2001), the excitation decreases with
distance from the Arches Cluster and Quintuplet Cluster, respectively, as
expected for photoionization.
Spitzer observations of MIR spectra
in 38 positions along a line approximately perpendicular to the Galactic plane in the GC
are presented by Simpson et al (2007),
who analyse the sources of excitation of the Arched Filaments and of the
other thermal arcs. They are particularly interested in the Bubble physical conditions
relatively to the clusters and the other filament structures.
The observed positions are shown in Fig. 1.
Their spectra contain high and low ionization level lines (e.g. [OIV], [SIV], [NeIII], [NeII],
[SIII], [FeIII], [FeII], and [SiII]).
In their paper, Simpson et al. (2007) use the new observations to determine the stellar properties
of the most massive stars in the Arches Cluster.
However, the modelling of the spectral line ratios by pure photoionization codes, (e.g. CLOUDY etc.)
was successful only to explain some line ratios in a few positions. Simpson et al. (2007)
conclude that the models accounting for the shocks by Contini \& Viegas (2001)
could explain the relatively strong [OIV] lines. This induced us
to revisit Simpson et al. observations
of the Galactic central region, by a detailed modelling of the line and continuum spectra,
constraining the results by the interpretation of
the spectra previously observed by Erickson et al. (1991) and Colgan et al.(1996).
We adopt for the calculations the code SUMA which accounts for both photoionization and shocks.
In particular, all the lines available in each spectrum and the continuum spectral energy distribution
(SED) will be modelled in a consistent way.
The results will explain the excitation
mechanisms of the gas near the GC as well as some particular features,
e.g. the distribution of dust.
The calculation details are described in Sect. 2.
In Sect. 3 the spectra presented by Simpson et al. are modelled and discussed.
In Sect. 4, line and continuum spectra are modelled
for Position C - G0.095+0.012 and the E2 thermal radio Filament which were observed by Erickson et al (1991).
In Sect. 5 we examine the observations of Colgan et al (1996) in the different positions crossing the
E2-W1 arched radio filament.
Concluding remarks follow in Sect. 6.
\section{The modelling method}
The physical parameters are combined throughout the calculation of
forbidden and permitted
lines (see Osterbrock 1988) emitted from a shocked nebula. Besides the element abundances,
the main parameters are known to be :
the electron density $\rm n_{e}$~, the electron temperature $\rm T_{e}$,
the component of the magnetic field perpendicular to the flow direction $B$,
the flux from the external source, which is characterised by its spectral type (e.g. a black body)
and intensity (e.g. the ionization parameter U), and the fractional
abundance of the ions.
They follow the recombination trend of the gas downstream. Therefore, the
precision of the calculations requires to divide the downstream region in
many different slabs corresponding to the different physical conditions.
The line and continuum intensities in the X-ray - radio range, are calculated in each
slab and integrated throughout the nebula geometrical thickness.
In pure photoionization models
the density n is constant throughout the nebula, while
in a shock wave regime,
n is calculated downstream by the compression equation in each of the single slabs.
Compression depends on n, $B$, and the shock velocity V.
The ranges of the physical conditions in the nebula are deduced, as a first guess,
from the observed lines
(e.g. the shock velocity from the FWHM) and from the characteristic line ratios (e.g.
$\rm n_{e}$~ and $\rm T_{e}$).
The observations indicate that a steady state situation can be applied (Cox 1972).
In this case, the time t required for a parcel of gas to cross
the cooling region from the shock front to the recombination zone, for shock waves
with v=100 $\rm km\, s^{-1}$, is found to be about 1000/$\rm n_{e}$~ years (calculated by the recombination coefficients)
so, for an electron density $\rm n_{e}$~ = 100 $\rm cm^{-3}$, t = 10 years.
Shock velocities within the GC are not likely to change appreciably in that short a time,
so the steady state calculation should be adequate.
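As a worked instance of this estimate, using only the $t \sim 1000/n_e$ yr scaling quoted above:
\begin{verbatim}
def crossing_time_yr(n_e):
    # Time (yr) for gas shocked at ~100 km/s to cross the cooling
    # region, using the t ~ 1000/n_e yr scaling quoted in the text.
    return 1000.0 / n_e

print(crossing_time_yr(100.0))  # 10.0 yr for n_e = 100 cm^-3
\end{verbatim}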
In this paper, the
line and continuum spectra emitted by the gas downstream of the shock front,
are calculated by
SUMA (see http://wise-obs.tau.ac.il/$\sim$marcel/suma/index.htm for
a detailed description).
The code simulates the physical conditions in an
emitting gaseous cloud
under the coupled effect of photoionisation from an external radiation
source and shocks. The line and continuum emission from the gas
is calculated consistently with dust reprocessed radiation
in a plane-parallel geometry.
In a composite (shock and photoionization) code such as SUMA,
the input parameters are: the shock velocity $\rm V_{s}$~, the preshock density $\rm n_{0}$,
the preshock magnetic field $\rm B_{0}$ which refer to the shock, while, the colour temperature of the hot star $\rm T_{*}$~
and the ionization parameter $U$ (the number of photons per number of electrons at the nebula) refer to the flux.
The geometrical thickness of the emitting nebula $D$,
the dust-to-gas ratio $d/g$, and the abundances of He, C, N, O, Ne, Mg, Si, S, A, and Fe relative to H
are also considered.
The distribution of the grain radius (a$_{gr}$) downstream
is determined by sputtering, beginning with an initial radius.
The calculations start at the shock front where the gas is compressed
and thermalized adiabatically, reaching the maximum temperature
in the immediate post-shock region
(T$\sim$ 1.5 10$^5$ ($V_s$/100 $\rm km\, s^{-1}$)$^2$). T decreases downstream
following recombination. The cooling rate is calculated in each slab.
The downstream region is cut up into a maximum of 300 plane-parallel slabs
with different geometrical widths calculated automatically, in order
to account for the temperature gradient (Contini 1997 and references therein).
In each slab, compression is
calculated by the Rankine-Hugoniot equations for the
conservation of mass, momentum and energy throughout the shock front.
Compression (n/$\rm n_{0}$) downstream ranges between
4 (the adiabatic jump) and $\leq$ 10, depending on $\rm V_{s}$~ and $\rm B_{0}$.
The stronger the magnetic field the lower is compression downstream, while
a higher shock velocity corresponds to a higher compression.
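The immediate post-shock quantities quoted above can be sketched as follows; the temperature scaling is the one given in the text, while the compression factor is only bracketed between 4 and $\sim$10, so it is left as a free parameter:
\begin{verbatim}
def postshock_temperature(v_s):
    # Immediate post-shock temperature,
    # T ~ 1.5e5 (V_s / 100 km/s)^2 K, with v_s in km/s.
    return 1.5e5 * (v_s / 100.0) ** 2

def downstream_density(n0, compression):
    # Downstream density for a compression factor between the
    # adiabatic jump (4) and ~10, depending on V_s and B_0.
    if not 4.0 <= compression <= 10.0:
        raise ValueError("outside the range given in the text")
    return n0 * compression

print(postshock_temperature(70.0))  # ~7.4e4 K for V_s = 70 km/s
\end{verbatim}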
The ionizing radiation from an external source is characterized by its
spectrum depending on $\rm T_{*}$~, and by the ionization parameter. The flux is calculated at 440 energies,
from a few eV to KeV.
Due to radiative transfer, the
spectrum changes throughout the downstream slabs, each of them
contributing to the optical depth.
In addition to the radiation from the primary
source, the effect of the diffuse radiation created by the gas line and continuum emission
is also taken into account,
using 240 energies to calculate the spectrum.
For each slab of gas, the fractional abundance of the ions of each
chemical element are obtained by solving the ionization equations.
These equations account for the ionization mechanisms
(photoionization by the primary and diffuse radiation, and
collisional ionization) and recombination mechanisms (radiative,
dielectronic recombinations), as well as charge transfer effects.
The ionization equations are coupled to the energy equation
when collision processes dominate, and to the thermal balance if
radiative processes dominate. This latter balances the heating
of the gas due to the primary and diffuse radiations reaching
the slab, and the cooling, due to recombinations and collisional
excitation of the ions followed by line emission, dust collisional
ionization, and thermal bremsstrahlung. The coupled equations
are solved for each slab, providing the physical conditions necessary
for calculating the slab optical depth, as well as its line and
continuum emissions. The slab contributions are integrated
throughout the cloud.
In particular, the absolute line fluxes referring to the ionization level i of element K are calculated
by the term n$_K$(i) which represents the density of the ion i. We consider that
n$_K$(i)= X(i) [K/H] N$_H$, where X(i) is the fractional abundance of the
ion i calculated by the ionization equations, [K/H] is the relative abundance of the element K to H,
and N$_H$ is the density of H (by number $\rm cm^{-3}$). In models including shock,
N$_H$ is calculated by the compression equation (Cox 1972) in each slab downstream (Sect. 2).
So the abundances of the elements are given relative to H as input parameters.
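In code, this ion density is a single product; the numbers in the example are illustrative only:
\begin{verbatim}
def ion_density(x_i, k_over_h, n_h):
    # n_K(i) = X(i) [K/H] N_H : density (cm^-3) of ionization
    # stage i of element K entering the absolute line fluxes.
    return x_i * k_over_h * n_h

# e.g. X(S+2) = 0.5, S/H = 1.5e-5, N_H = 100 cm^-3
print(ion_density(0.5, 1.5e-5, 100.0))  # 7.5e-4 cm^-3
\end{verbatim}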
Dust grains are coupled to the gas across the shock front by the magnetic field,
and are heated by radiation
from the stars and collisionally by the gas to a maximum temperature which is a function
of the shock velocity,
of the chemical composition, and the radius of the grains, up to the evaporation temperature
(T$_{dust} \geq$ 1500 K).
The grain radius distribution downstream is determined by sputtering, which depends
on the shock velocity and on the density.
Throughout shock fronts and downstream, the grains might be destroyed by sputtering.
Summarizing, very schematically :
\noindent
1) we adopt an initial $\rm T_{e}$ ($\sim$ 10$^4$ K) and the input parameters;
\noindent
2) calculate the density from the compression equation;
\noindent
3) calculate the fractional abundances of the ions from each level for each element;
\noindent
4) calculate line emission, free-free and free-bound emission;
\noindent
5) recalculate $\rm T_{e}$ by thermal balancing or the enthalpy equation;
\noindent
6) calculate the optical depth of the slab and the primary and secondary fluxes;
\noindent
7) adopt the parameters found in slab i as initial conditions for slab i+1;
Integrating the contribution of the line intensities
calculated in each slab, we obtain the absolute
fluxes of each of the lines, calculated at the nebula (the same for bremsstrahlung).
\noindent
8) We then calculate the line ratios to a certain line (in the present case [SIII], because
we do not have values of H$_{\beta}$ or other H lines)
\noindent
9) and compare them with the observed line ratios.
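The structure of this loop can be sketched schematically as follows; every `physics' update below is a trivial placeholder, not the actual SUMA routines:
\begin{verbatim}
def integrate_cloud(t0=1.0e4, n0=100.0, n_slabs=300):
    t, n = t0, 4.0 * n0                # steps 1-2: adiabatic jump
    totals = {"[SIII]33.5": 0.0, "[NeII]12.8": 0.0}
    for _ in range(n_slabs):
        t *= 0.99                      # step 5: placeholder cooling
        n = min(n * 1.005, 10.0 * n0)  # step 2: placeholder compression
        # steps 3-4: placeholder emissivities ~ n_e^2 f(T_e)
        totals["[SIII]33.5"] += n * n * 1.0e-22
        totals["[NeII]12.8"] += n * n * 8.0e-23 * (t / t0)
    ref = totals["[SIII]33.5"]         # steps 8-9: normalize
    return {line: 10.0 * v / ref for line, v in totals.items()}

print(integrate_cloud())  # ratios on the [SIII] 33.5 = 10 scale
\end{verbatim}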
The observed data have errors, both random and systematic.
Models are generally allowed to reproduce the data within a factor of 2.
This leads to input parameter ranges of a few percent.
The uncertainty in the calculation results, on the other hand, derives from the use of many atomic parameters,
such as recombination coefficients, collision strengths, etc., which are continuously
updated. Moreover, the precision of the integrations depends on the computer efficiency.
Simpson et al. present the spectra observed in close regions throughout the slit.
Actually, we are interested in the trend of the physical conditions, as well as in the
parameter ranges. Therefore, to avoid the accumulation of errors which leads to inconsistent
results, we try to reproduce the data as close as possible.
If the calculated lines are compatible with the observed ones within the error
of each line, we adopt the input parameters as the result in this observed position.
If there are discrepancies, we change consistently the input parameters and we restart the whole
calculation process. As explained in the text, the alterations are done on the basis that
$\rm T_{*}$~ affects [NeIII]/[NeII], U affects [FeIII]/[FeII], [OIV] depends on $\rm V_{s}$~, etc.
We also change the relative abundances to obtain a good fit for all the line ratios, however,
they affect the cooling rate, so it is important to restart
the calculation process each time.
As a matter of fact, a degeneracy can arise e.g. from the density and the
magnetic field which are directly correlated.
There can be a doubt in the values of $\rm n_{0}$ and $\rm B_{0}$ leading to the
best fit of the observations in a certain position $p$. Our method is to adopt
as a first guess, the input parameters of position $p$-1, and then modify
them, little by little, calculating a grid of models,
until all the observed line ratios in position $p$ are satisfactorily reproduced within the
least error.
The number of ionizing photons cm$^{-2}$ s$^{-1}$ produced by the hot source is
$N = \int_{\nu_0}^{\infty} B_{\nu}/(h\nu)\, d\nu$, where
$\nu_0$ = 3.29 10$^{15}$ s$^{-1}$ and $B_{\nu}$ is the Planck function.
The flux from the star is related to U and n
by $N (r/R)^2 = U\,n\,c$, where
r is the radius of the hot source (the stars),
R is the radius
of the nebula (in terms of the distance from the stars), n is the density of the nebula,
and c is the speed of light.
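Inverting this relation gives U directly; the numbers below are illustrative only:
\begin{verbatim}
C = 2.998e10  # speed of light, cm s^-1

def ionization_parameter(n_phot, r_over_R, n):
    # U = N (r/R)^2 / (n c), with N in photons cm^-2 s^-1 at the
    # stellar surface and n in cm^-3.
    return n_phot * r_over_R ** 2 / (n * C)

print(ionization_parameter(1.0e25, 3.0e-8, 100.0))  # ~3.0e-3
\end{verbatim}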
Therefore, $\rm T_{*}$~ and U compensate each other, but only in a qualitative way, because
$\rm T_{*}$~ determines the frequency distribution of the primary flux, while U
represents the number of photons per number of electrons reaching the nebula.
The choice of $\rm T_{*}$~ and U in each position is made by the fit of the line ratios.
$\rm T_{*}$~ affects strongly the [NeIII] line, while [FeIII]/[FeII] is more
affected by U.
The velocity V and the density n are linked by the continuity
equation $V_0 n_0 = V_1 n_1$.
Moreover, $\rm B_{0}$, $\rm n_{0}$, and $\rm V_{s}$~, are combined in the compression equation (Cox, 1972).
In conclusion, all the parameters
are combined together in the calculations. Therefore, for each position a
large grid of models are run. The models are selected on the
basis of the minimum deviation from the observation data for all
the line ratios.
\section{Spitzer observations}
\subsection{The line spectra}
\begin{figure}
\includegraphics[width=0.41\textwidth]{f2a.eps}
\includegraphics[width=0.41\textwidth]{f2b.eps}
\includegraphics[width=0.41\textwidth]{f2c.eps}
\caption{The profiles of the different parameters which result from modelling. a: the parameters depending on the shock;
b: the parameters depending on photoionization;
c: the relative abundances (top panel); comparison of the continuum fluxes at different wavelengths
with the calculated bremsstrahlung in the different positions (bottom panel);
magenta: 10.0-10.48 $\mu$m~, blue: 13.5-14.3 $\mu$m~, green: 15.56 $\mu$m~, black: 18.71 $\mu$m~, cyan: 22.29 $\mu$m~, yellow: 28.22 $\mu$m~,
red: 33.48 $\mu$m~.
The units of each parameter appear in Table 2.}
\end{figure}
Spitzer MIR spectra (Program 3295, all AORs) were observed
by Simpson et al. (2007, hereafter S07) in
38 positions along a line approximately perpendicular to the Galactic plane in the GC (Fig. 1).
The line intensities were corrected by the $\tau_{\lambda}$/$\tau_{9.6}$ optical depth ratios,
using the optical depth at 9.6 $\mu$m~, $\tau_{9.6}$, as given by S07 in their Table 2 and
Table 1, respectively.
In some positions, in particular between 1 and 16, the observed [SIII]18.7/[SIII]33.5 ratios,
corrected for extinction by the $\tau_{9.6}$ presented in S07, were
lower than the calculated [SIII]18.7/[SIII]33.5 lower limit ($\sim$ 0.5),
even adopting a very low density model.
Consequently,
the $\tau_{9.6}$ values were incremented in each position in order to obtain a reasonable spectrum.
In fact, the $\tau_{9.6}$ values were calculated
by Simpson et al (2007) assuming $\rm T_{e}$=6000 K, while in our models the temperatures
downstream, depending on the shock velocity, show higher values in the region close to the shock front
(Sect. 3.2.1).
In Table 1, we compare the spectra corrected for extinction with model calculations.
In the last column the extinction is given.
The spectra are numerated from 1 to 38 referring to S07.
Each observed (corrected) spectrum is followed in
the next row by the calculated one, which is numerated from m1 to m38.
Model m$_{pl}$ which appears in the last row of Tables 1 and 2 is explained in Sect. 3.3.
The models adopt a black body photo-ionizing radiation flux corresponding to the
colour temperature of the stars. The model parameters are given in Table 2, where
columns 2-4 list the shock parameters. Columns 5 and 6 give the photoionizing flux : the temperature of
the stars and the ionization parameter, respectively. The relative abundances
Si/H, S/H, and Fe/H follow in columns 7-9. The O/H and Ne/H ratios were found
nearly constant in all S07 positions, by a first modelling.
In fact, depletion into dust grains is not important since
O is not the main constituent of dust grains and Ne cannot be adsorbed
due to its atomic structure.
The last column shows the geometrical thickness
of the emitting filaments. Indeed, a high fragmentation of matter appears in the observed region
and in each position, many different conditions could coexist. In our modelling
we refer to the data as single (average) spectra.
In Table 1, we show the line ratios normalized to [SIII] 33.5 = 10 (the strongest line)
to avoid very small values.
The line sequence is ordered by wavelength.
A grid of models was run for each position and the best fitting spectrum was selected
on the basis of the [SIV]/[SIII], [NeIII]/[NeII], and [FeIII]/[FeII] flux ratios
which do not depend on the relative abundances, and of [OIV]/[SIII] which depends strongly on the
shock velocity.
To understand the results, we recall that the radiative ionization rates depend on
the intensity of the primary and secondary (diffuse) radiation flux;
radiation cannot heat the gas to $>$ 2-3 10$^4$ K.
The shocks heat the gas to temperatures which depend on the
shock velocity and the collisional ionization rates
increase with increasing temperatures.
We derive $\rm T_{*}$~ and U by the best fit of [NeIII]/[NeII] and [FeIII]/[FeII], respectively.
However, these parameters also affect [SIV]/[SIII]. So the whole process is iterated until
all the line ratios are reproduced.
The [OIV] line corresponds to a relatively high ionization level
that can be reached collisionally by a relatively high temperature gas,
depending on $\rm V_{s}$~.
It was found that the [OIV] line
is hardly detected for shock velocities $<$ 64 $\rm km\, s^{-1}$.
Only shocks with $\rm V_{s}$~ $>$ 64 $\rm km\, s^{-1}$ can lead to results suitable to the observations.
The ionization potential of S$^{+3}$ is lower than those
of O$^{+2}$ and Ne$^{+2}$.
Therefore, the [SIV] line intensity depends on U and $\rm T_{*}$~ more than on $\rm V_{s}$~.
Moreover, [SIV]/[SIII] decreases with distance from the shock front downstream
following recombination, because the S$^{+3}$ region
is totally included within the nebula, while the S$^{+2}$ region can be cut off at a certain distance from
the shock front in matter-bound nebulae.
The geometrical thickness of the emitting cloud is determined when the calculated
[SIV]/[SIII] line ratio reproduces the observed one.
The relative abundance of S is determined by all the line ratios because they are given as ratios to
[SIII]. When the line ratios are either all overestimated or all underestimated
by the same factor, S/H is modified in order to reproduce the data. S and
Si are not strong coolants, therefore Si/H and S/H result directly, without re-starting the
whole modelling process.
\subsection{Results}
The results of modelling are presented in the three diagrams of Fig. 2. a,b,c.
We define as {\it results} the sets of input parameters ($\rm V_{s}$~, $\rm n_{0}$,
$\rm B_{0}$, $\rm T_{*}$~, U, D, and the relative abundances) which lead to the best fit
of the data in each position.
When a cloud moves toward the photoionization source, the shock front edge is reached directly
by the photons. When the cloud propagates outwards, the photoionising flux reaches the cloud
on the edge opposite to the shock front. Therefore, the calculations need some iterations.
In the present modelling, the best fit is obtained considering that the shock front is reached by radiation
from the hot stars. This
indicates that the clouds move towards the photoionization source.
The case of an outward motion was discarded because we could not reach
a consistent fit of all the lines in the different positions.
Comparing our results with those generally derived from
specific line ratios (e.g. Simpson et al. 2007), we recall
that $\rm n_{0}$ and $\rm B_{0}$ are pre-shock values, while electron densities and temperatures
derived from the observed line ratios, refer to the values in the downstream regions.
To illustrate the results, we present in Fig. 3 the profiles of
the electron temperature, the electron density, and
the fractional abundance of the most significant ions downstream, e.g. for model m18.
\subsubsection{The parameters depending on the shock}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{f3.eps}
\caption{The distribution (top panel) of
the electron temperature, the electron density, and (bottom panel)
of the fractional abundance of the most significant ions downstream for model m18.
}
\end{center}
\end{figure}
In Fig. 2a, $\rm V_{s}$~, $\rm n_{0}$, and $\rm B_{0}$ are shown as a function of position.
The curves are not smooth, because the matter is strongly fragmented
and the calculations refer to {\it averaged} observations in each position.
There is a trend of decreasing $\rm V_{s}$~ from positions 1 to 30, with a fluctuating increase
from 30 to 38.
The density is minimum between 14 and 16, namely in the Bubble. The abrupt increase in $\rm n_{0}$ by a
factor $\leq$ 10 after position 16, leads to relatively high densities up to position 35,
which corresponds to the limit of the Arched Filament region. Then the density returns to
the values characteristic of the ISM.
The trend of increasing $\rm B_{0}$ starts smoothly from position 1 and follows, on a reduced scale, the bump of
the density in the Arched Filament region.
Considering that line excitation and recombination coefficients, and cooling and heating
rates depend on the density of the gas downstream, $\rm B_{0}$ and $\rm n_{0}$ must be cross-checked in each position.
The magnetic field throughout the regions covered by the slit (Fig. 1) is still under discussion (S07).
Yusef-Zadeh \& Morris (1987a) claim
that the magnetic field in the linear non-thermal filaments of the Radio Arc is as high as 10$^{-3}$ gauss,
while LaRosa et al (2005) found that the global field of the GC is only $\sim$ 10$^{-5}$
gauss with an upper limit of $\leq$ 10$^{-4}$ gauss.
The low value is similar to that in the ISM. The consistent modelling of all the lines
in each position leads to a growth of $\rm B_{0}$ in the arched filament region by a factor of 10
(from $\sim$ 10$^{-5}$ to $\sim$ 10$^{-4}$ gauss), while in the surrounding ISM, $\rm B_{0}$ is lower.
In Fig. 2a (top panel), we notice that the observed [OIV]/[SIII] line ratio
follows the $\rm V_{s}$~ decreasing slope, while [SIV]/[SIII]
is affected also by the other parameters.
\subsubsection{The parameters depending on the radiation flux}
In the bottom panel of Fig. 2b, the temperature of the stars,
which are the source of the photoionizing flux, is shown as a function of position.
A maximum corresponding to $\sim$ 3.7 10$^4$ K appears in the region corresponding to
the Bubble. These temperatures refer to the Quintuplet cluster.
Star temperatures in the Arches Cluster are lower, $\leq$ 3 10$^4$ K.
We would expect a higher U in the positions closest to the clusters, but 1) we are dealing
with projected distances
and 2) the geometrical thickness has a crucial role in reducing the photoionization flux
throughout the different slabs downstream.
Fig. 2b in fact shows that the minimum of U between
positions 14 and 20 is accompanied by a rough bump in D.
Fig. 2b shows that
the observed [NeIII]/[NeII] line ratio is correlated with $\rm T_{*}$~,
while [FeIII]/[FeII] is roughly proportional to U.
\subsubsection{The relative abundances}
The relative abundances resulting from modelling
are shown in Fig. 2c (top panel). They are calculated consistently with
$\rm V_{s}$~, $\rm n_{0}$, $\rm B_{0}$, and $\rm T_{e}$.
We had some difficulty in determining O/H because the [OIV]/[SIII] line ratio could depend
on $\rm V_{s}$~ , O/H, and on S/H as well. We decided to keep O/H constant
(and Ne/H constant) only after considering a first iteration of results from all the positions
and noticing that O/H and Ne/H were changing very little from position to position.
The best results were obtained with O/H = 6 10$^{-4}$,
Ne/H = 10$^{-4}$ (Ne/O = 0.167), and N/O = 0.18. Grevesse \& Sauval (1998) find Ne/O=0.15 and
N/O=0.123.
These values are within the range calculated by Mart\'{i}n-Hern\'{a}ndez et al (2002)
for HII regions in the Galaxy.
For elements whose lines
were not observed, we adopted the following abundances : C/H=3.3 10$^{-4}$, N/H= 1.1 10$^{-4}$,
Mg/H = 2.6 10$^{-5}$, and Ar/H = 6.3 10$^{-6}$ (Allen 1973).
Thus, they do not appear in Table 2.
Si appears through only one line.
Si/H is determined in each position after the satisfactory fit of all the other line ratios.
In fact Si is not a strong coolant.
The Si/H relative abundance can assume all the
values below the solar one (3.3 10$^{-5}$), because Si is the main component of silicate grains.
The most impressive result refers to Fe. The strong depletion of Fe from the gaseous phase in the
Arched Filament region indicates that iron is quite likely depleted into grains.
However, its relative abundance is constrained
by both [FeII] and [FeIII] lines, and therefore cannot always be derived directly by changing Fe/H.
Small grains are easily sputtered.
Iron returns to the gaseous phase at higher $\rm V_{s}$~ close to the Quintuplet Cluster.
Si is slightly depleted from the gaseous phase along all the slit positions with some fluctuations,
indicating a minimum of silicate grains near the Sickle.
Si/H reaches values close to solar in the Arched Filaments beyond the Arches Cluster.
Perhaps silicate grains are evaporated by the strong radiation from the cluster.
In the Bubble, both Si/H and Fe/H are slightly reduced, indicating that a large fraction of grains survive.
Although the IR range observed by Simpson et al. includes the PAH bands observed in the Milky Way (Draine \& Li 2007),
silicates including iron, as well as other iron species,
can also be present in dust.
We will discuss these features through the analysis of the continua at different
wavelengths which is shown in the bottom panel of Fig. 2c,
after modelling the continuum SED in the next section.
\subsection{A power-law flux from the Galactic center?}
The results in the extreme positions 1-2, 36-38, showing a relatively high $\rm T_{*}$~, are
unusual in the ISM,
suggesting that perhaps we should try models which account for a power-law (pl) flux
from the center of the Galaxy, as for AGNs (e.g. Contini \& Viegas 2001), instead of a black body (bb).
In fact, in the Galaxy center there is a ``quiescent'' BH (Ghez et al. 2005),
which is likely not far from the observed positions.
We have run a grid of models (m$_{pl}$) with a pl ionization flux and the other parameters
similar to those found by modelling with a bb. The selected model appears in the last row
of Tables 1 and 2.
Actually, we could not find a fit to the observed line ratios as good as that found by the bb models.
In addition, a small contribution of this spectrum would not change the results.
The flux F$_{\nu}$ is
lower by at least a factor of 100 than the lowest flux found for LINERS (Contini 1997)
and for low luminosity AGN (Contini 2004).
Moreover, we found
O/H=8 10$^{-4}$ and Ne/H= 10$^{-4}$.
The best fitting pl model was run with photoionization and shocks acting
on the same edge of
the cloud, which means that the cloud is propagating towards the (supposed active) center. Inflow
is characteristic of regions close to the AGN active center.
\begin{table*}
\begin{center}
\caption{Comparison with models of the observed line intensity ratios ([SIII] 33.5 =10) corrected for extinction}
\begin{tabular}{ l l l l l l l l l l l l} \\ \hline \hline
\ Position & [SIV]10.5& [NeII]12.8& [NeIII]15.6&[SIII]18.7&[FeIII]22.9&[OIV]25.9&[FeII]26 & [SIII]33.5 & [SiII]34.8 &ext(9.6$\mu$m~)\\ \hline
\ 1& 0.43& 14.94& 1.80& 5.15& 0.27& 0.47& 1.33& 10.00& 28.56 & 1.44\\
\ m1 & 0.43 &14.8 & 1.7 &5.29 & 0.25& 0.48& 1.2 &10. &28.56 & \\
\ 2& 0.66& 13.50& 1.74& 5.20& 0.41& 0.75& 1.61& 10.00& 27.57& 2.29\\
\ m2 & 0.7 &13.3 & 1.74 & 5.29& 0.34 & 0.74& 1.77& 10. & 27.4 &\\
\ 3& 0.33& 10.63& 0.78& 5.30& 0.38& 0.18& 0.88& 10.00& 19.61& 2.18\\
\ m3 & 0.33 &10.1 & 0.82 & 5.3 & 0.37 & 0.18& 0.83& 10. & 19.8& \\
\ 4& 0.19& 12.36& 0.37& 5.44& 0.31& 0.05& 0.57& 10.00& 18.53& 2.09\\
\ m4 & 0.184&12.0 & 0.39 & 5.47& 0.30 & 0.058& 0.59& 10. & 18.3& \\
\ 5& 0.16& 9.04& 0.48& 5.48& 0.45& 0.07& 0.54& 10.00& 14.64& 2.37\\
\ m5 & 0.16 & 9.48 & 0.46 & 5.5 & 0.45 & 0.07 & 0.44& 10. & 14.3& \\
\ 6& 0.24& 8.57& 0.53& 5.50& 0.60& 0.09& 0.69& 10.00& 14.97&2.83\\
\ m6 & 0.25 & 8.5 & 0.53 & 5.47& 0.6 & 0.08 & 0.67& 10. & 14.4&\\
\ 7& 0.19& 8.43& 0.63& 5.39& 0.50& 0.08& 0.49& 10.00& 12.61&2.34\\
\ m7 & 0.19 & 8.7 & 0.67 & 5.48& 0.55 & 0.08 & 0.47& 10. & 12.9& \\
\ 8& 0.23& 5.78& 0.94& 5.46& 0.58& 0.08& 0.32& 10.00& 8.78& 2.58\\
\ m8 & 0.23 & 6.0 & 0.93 & 5.4 & 0.56& 0.08 & 0.25& 10. & 8.4& \\
\ 9& 0.20& 9.26& 1.23& 5.46& 0.83& 0.15& 0.85& 10.00& 14.59& 2.32\\
\ m9 & 0.21 & 9.29 & 1.28 & 5.46& 0.85& 0.146& 0.7 & 10. & 14.7&\\
\ 10& 0.13& 8.20& 0.84& 5.53& 0.38& 0.06& 0.31& 10.00& 10.50&2.30\\
\ m10 & 0.13 & 8.38 & 0.84 & 5.45& 0.35 & 0.065& 0.3 & 10. & 10.6& \\
\ 11& 0.12& 7.44& 0.91& 5.41& 0.42& 0.06& 0.26& 10.00& 9.28&2.46\\
\ m11 & 0.12 & 7.8 & 0.9 & 5.38& 0.477& 0.06 & 0.278&10. & 9.24& \\
\ 12& 0.17& 6.09& 1.53& 5.31& 0.42& 0.08& 0.25& 10.00& 7.17&2.32\\
\ m12 & 0.17 & 6.6 & 1.44 & 5.35& 0.4 & 0.08 & 0.25& 10. & 7.6&\\
\ 13& 0.19& 9.51& 0.82& 5.44& 0.52& 0.07& 0.62& 10.00& 10.36& 2.54\\
\ m13 & 0.19 & 9. & 0.84 & 5.44& 0.54 & 0.076& 0.68 & 10. & 11.3& \\
\ 14& 0.57& 6.28& 1.74& 5.22& 1.06& 0.34& 0.88& 10.00& 12.54&2.75\\
\ m14 & 0.57 & 6.47 & 1.79 & 5.3 & 1.1 & 0.32 & 0.83 & 10. & 12.5& \\
\ 15& 0.55& 9.46& 2.10& 5.21& 1.42& 0.34& 1.51& 10.00& 16.91&1.75\\
\ m15 & 0.56 & 9.1 & 2.1 & 5.3 & 1.47 & 0.34 & 1.6 & 10. & 17.2& \\
\ 16& 0.44& 8.51& 2.54& 5.24& 1.12& 0.31& 1.37& 10.00& 14.10&1.57\\
\ m16 & 0.44 & 8.7 & 2.65 & 5.29& 1.19 & 0.34 & 1.43 & 10. & 14.1& \\
\ 17& 0.55& 7.24& 2.81& 5.59& 0.95& 0.22& 0.73& 10.00& 11.75&1.75\\
\ m17 & 0.6 & 7.6 & 2.83 & 5.8 & 1. & 0.23 & 0.77 & 10. & 11.2& \\
\ 18& 0.42& 6.49& 2.95& 6.05& 0.96& 0.22& 0.66& 10.00& 8.96&1.74\\
\ m18 & 0.45 & 6.8 & 2.86 & 6. & 0.93 & 0.22 & 0.69 & 10. & 8.04& \\
\ 19& 0.47& 8.74& 3.12& 7.34& 1.91& 0.25& 0.93& 10.00& 11.45&1.80\\
\ m19 & 0.47 & 8.6 & 3.16 & 7.32& 1.9 & 0.24 & 1.16 & 10. & 11.48&\\
\ 20& 0.35& 12.04& 1.44& 10.75& 1.45& 0.15& 0.61& 10.00& 8.96&2.59\\
\ m20 & 0.34 & 12.1 & 1.5 & 10.7 & 1.47 & 0.157& 0.69 & 10. & 8.5& \\
\ 21& 0.62& 7.50& 2.75& 7.31& 1.67& 0.22& 0.58& 10.00& 8.89&2.44\\
\ m21 & 0.62 & 7.5 & 2.8 & 7.3 & 1.7 & 0.22 & 0.55 & 10. & 8.89& \\
\ 22& 0.54& 8.00& 2.77& 9.72& 0.75& 0.07& 0.26& 10.00& 6.68&3.17\\
\ m22 & 0.54 & 8. & 2.77 & 9.6 & 0.76 & 0.075 & 0.3 & 10. & 6.63& \\
\ 23& 0.37& 7.42& 1.55& 6.57& 0.86& 0.08& 0.43& 10.00& 9.53&2.50\\
\ m23 & 0.37 & 7.5 & 1.6 & 6.66 & 0.83 & 0.08 & 0.43 & 10. & 9.3&\\
\ 24& 0.33& 8.18& 1.20& 8.38& 1.22& 0.07& 0.45& 10.00& 7.83&3.11\\
\ m24 & 0.31 & 8.3 & 1.34 & 8.1 & 1.2 & 0.07 & 0.57 & 10. & 7.6&\\
\ 25& 0.23& 8.54& 1.15& 8.73& 0.78& 0.05& 0.31& 10.00& 7.50&2.91\\
\ m25 & 0.23 & 8.6 & 1.16 & 8.62 & 0.75 & 0.05 & 0.42 & 10. & 7.7& \\
\ 26& 0.33& 8.32& 0.86& 6.73& 1.31& 0.09& 0.63& 10.00& 11.93&2.67\\
\ m26 & 0.33 & 8.34 & 0.88 & 6.74 & 1.35 & 0.09 & 0.6 & 10. & 11.84& \\
\ 27& 0.19& 8.10& 0.37& 8.07& 0.50& 0.04& 0.28& 10.00& 7.52&2.91\\
\ m27 & 0.19 & 7.8 & 0.45 & 7.0 & 0.50 & 0.04 & 0.22& 10. &7.7& \\
\ 28& 0.29& 7.22& 0.39& 7.95& 0.44& 0.01& 0.17& 10.00& 6.33&3.18\\
\ m28 & 0.29 & 7.3 & 0.45 & 7.9 & 0.44 & 0.014& 0.2 & 10. & 6.3& \\
\ 29& 0.45& 5.27& 0.41& 7.89& 0.26& 0.02& 0.07& 10.00& 3.39&3.11\\
\ m29 & 0.42 & 5. & 0.46 & 7.5 & 0.23 & 0.02 & 0.09 & 10. & 3.47& \\
\ 30& 0.45& 6.51& 0.42& 8.78& 0.24& 0.01& 0.08& 10.00& 3.95&3.12\\
\ m30 & 0.4 & 6.2 & 0.42 & 8.9 & 0.18 & 0.01 & 0.1 & 10. & 3.9 \\ \hline
\end{tabular}
\end{center}
\end{table*}
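For concreteness, the comparison in Table 1 can be quantified as in the following minimal Python sketch (not the code used in this work): it renormalizes a set of line fluxes to [SIII]33.5 = 10 and prints the relative model-minus-data deviations, here for position 1 and model m1 using the values listed above.
\begin{verbatim}
import numpy as np

LINES = ["[SIV]10.5", "[NeII]12.8", "[NeIII]15.6", "[SIII]18.7",
         "[FeIII]22.9", "[OIV]25.9", "[FeII]26", "[SIII]33.5", "[SiII]34.8"]

def normalize(fluxes, ref_line="[SIII]33.5", ref_value=10.0):
    """Scale a dict of line fluxes so that the reference line equals ref_value."""
    scale = ref_value / fluxes[ref_line]
    return {line: flux * scale for line, flux in fluxes.items()}

def deviations(observed, model):
    """Relative deviation (model - observed)/observed per line."""
    return {line: (model[line] - observed[line]) / observed[line]
            for line in observed if observed[line] != 0.0}

# Position 1 (observed, extinction corrected) vs model m1, values from Table 1.
obs_p1 = dict(zip(LINES, [0.43, 14.94, 1.80, 5.15, 0.27, 0.47, 1.33, 10.00, 28.56]))
mod_m1 = dict(zip(LINES, [0.43, 14.8, 1.7, 5.29, 0.25, 0.48, 1.2, 10.0, 28.56]))
for line, dev in deviations(normalize(obs_p1), normalize(mod_m1)).items():
    print(f"{line:12s} {100*dev:+6.1f}%")
\end{verbatim}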
\begin{table*}
\begin{center}
\centerline{Table 1 (cont)}
\begin{tabular}{ l l l l l l l l l l l l} \\ \hline
\ Position & [SIV]10.5& [NeII]12.8& [NeIII]15.6&[SIII]18.7&[FeIII]22.9&[OIV]25.9&[FeII]26 & [SIII]33.5 & [SiII]34.8 &ext.(9.6 $\mu$m~)\\
\ 31& 0.28& 8.48& 0.17& 13.05& 0.16& 0.02& 0.04& 10.00& 2.92&3.40\\
\ m31 & 0.28 & 8.1 & 0.2 & 12.9 & 0.13 & 0.02 & 0.05 & 10. & 2.7& \\
\ 32& 0.07& 7.68& 0.16& 7.98& 0.26& 0.02& 0.17& 10.00& 6.28&2.51\\
\ m32 & 0.07 & 7.76 & 0.15 & 8.2 & 0.27 & 0.02 & 0.18 & 10. & 6.& \\
\ 33& 0.16& 5.37& 0.26& 7.12& 0.28& 0.02& 0.09& 10.00& 3.85&3.21\\
\ m33 & 0.16 & 5.9 & 0.23 & 7.14 & 0.26 & 0.02 & 0.07& 10. & 3.2&\\
\ 34& 0.10& 9.18& 0.13& 9.10& 0.14& 0.00& 0.07& 10.00& 4.21&3.02\\
\ m34 & 0.10 & 9.0 & 0.14 & 9.36 & 0.14 & 0.01 & 0.078& 10. & 4.8& \\
\ 35& 0.03& 11.02& 0.06& 7.72& 0.13& 0.01& 0.15& 10.00& 6.17&2.00\\
\ m35 & 0.03 & 10.8 & 0.05 & 8. & 0.13 & 0.01 & 0.28& 10. & 7.2& \\
\ 36& 0.37& 12.99& 2.67& 5.40& 0.24& 0.32& 1.41& 10.00& 23.54&0.80\\
\ m36 & 0.4 & 13.2 & 2.2 & 5.24 & 0.3 & 0.34 & 1.4 & 10. & 24.& \\
\ 37& 0.34& 13.41& 2.87& 5.48& 0.21& 0.30& 1.39& 10.00& 23.67&0.42\\
\ m37 & 0.36 & 13. & 2.7 & 5.5 & 0.37 & 0.3 & 1.3 & 10. & 21.3& \\
\ 38& 0.26& 14.82& 2.70& 5.95& 0.187& 0.29& 1.23& 10.00& 23.27&0.33\\
\ m38 & 0.27 & 14.8 & 2.2 & 5.6 & 0.2 & 0.2 & 1. & 10. & 25 \\ \hline
\ m$_{pl}$ & 1.9&12.7&3.4 & 5.3 & 0.1 & 0.36 & 2.1 & 10 &30 \\ \hline\\
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{The models}
\begin{tabular}{ l l l l l l l l l l l l} \\ \hline \hline
\ model & $\rm V_{s}$~ & $\rm n_{0}$ & $\rm B_{0}$ & $\rm T_{*}$~ & U $^1$ & Si/H & S/H & Fe/H & D \\
\ &($\rm km\, s^{-1}$) & ($\rm cm^{-3}$) & (gauss) & (K) & - & - & - & - & (cm)\\ \hline
\ m1& 79& 1& 5e-6& 3.5e4 & 2.2e-3 & 1.3e-5 & 6.e-6 & 3.e-6 & 1.92e17\\
\ m2& 80& 1& 5e-6& 3.4e4 & 2.1e-3 & 1.4e-5 & 7.e-6 & 5.e-6 & 1.40e17\\
\ m3& 75& 2& 1.e-5& 3.e4 & 3.e-3 & 1.5e-5 & 8.e-6 & 4.e-6 & 2.e17\\
\ m4& 72& 4& 8.e-6& 2.6e4 & 5.e-3 & 1.3e-5 & 8.e-6 & 2.5e-6& 8.5e16\\
\ m5& 74& 4& 8.e-6& 2.8e4 & 7.e-3 & 1.6e-5 & 8.e-6 & 3.5e-6& 8.3e16\\
\ m6 & 73& 4& 9.e-6& 2.8e4 & 6.e-3 & 1.7e-5 & 9.e-6 & 5.5e-6& 6.6e16\\
\ m7& 74& 4& 9.e-6& 3.e4 & 6.5e-3 & 1.6e-5 & 8.e-6 & 4.5e-6& 8.3e16\\
\ m8& 75.5& 3& 1.e-5& 3.2e4 & 7.5e-3 & 1.9e-5 & 1.e-5 & 6.e-6 & 1.43e17\\
\ m9& 77.5& 4& 1.2e-5& 3.3e4 & 5.5e-3 & 1.65e-5& 7.e-6 & 6.5e-6& 1.18e17\\
\ m10& 76.5& 4& 1.2e-5& 3.2e4 & 5.5e-3 & 1.3e-5 & 8.e-6 & 3.e-6 & 1.95e17\\
\ m11& 76.5& 3& 1.4e-5& 3.2e4 & 5.5e-3 & 1.4e-5 & 8.e-6 & 4.e-6 & 5.6e17\\
\ m12& 76.9& 2.5& 1.4e-5& 3.5e4 & 4.5e-3 & 1.2e-5 & 9.e-6 & 4.e-6 & 6.e17\\
\ m13& 73.5& 4 & 1.2e-5& 3.2e4 & 4.2e-3 & 1.15e-5& 8.e-6 & 5e-6 & 1.18e17\\
\ m14& 74 & 2 & 1.4e-5& 3.45e4& 3.5e-3 & 1.9e-5 & 9.e-6 & 1.2e-5& 2.2e17\\
\ m15& 73.5& 2.3& 1.4e-5& 3.45e4& 3.e-3 & 1.7e-5 & 7.e-6 & 1.3e-5& 1.7e17\\
\ m16& 73.4& 2 & 1.8e-5& 3.75e4& 2.e-3 & 1.3e-5 & 7.e-6 & 1.1e-5& 6.3e17\\
\ m17& 72.5& 17 & 4.e-5 & 3.7e4 & 3.e-3 & 1.6e-5 & 8.e-6 & 9.5e-6& 3.16e16\\
\ m18& 74.2& 27 & 6.e-5 & 3.85e4& 2.7e-3 & 1.3e-5 & 9.e-6 & 9.5e-6& 4.3e16\\
\ m19& 76.2& 55 &6.e-5 & 3.72e4& 4.2e-3 & 2.1e-5 & 8.3e-6& 1.5e-5& 9.5e15\\
\ m20& 77.5& 92 & 4.1e-5& 3.2e4 & 8.5e-3 & 2.3e-5 & 9.e-6 & 8.e-6 & 4.e15\\
\ m21& 74 & 65 & 7.3e-5& 3.5e4 & 6.e-3 & 2.6e-5 & 9.e-6 & 1.6e-5& 8.e15\\
\ m22& 70 & 110& 6.3e-5& 3.6e4 & 6.e-3 & 2.3e-5 & 1.1e-5& 6.5e-6& 3.6e15\\
\ m23& 70.5& 50 & 7.e-5 & 3.37e4& 4.e-3 & 2.e-5 & 9.2e-6& 7.5e-6& 1.8e16\\
\ m24& 71 & 77 & 6.5e-5& 3.26e4& 5.5e-3 & 2.e-5 & 9.9e-6& 9.5e-6& 8.6e15\\
\ m25& 72.5& 79 & 5.5e-5& 3.26e4& 5.8e-3 & 2.e-5 & 1.05e-5& 6.e-6 & 9.5e15\\
\ m26& 71.2& 50 & 6.5e-5& 2.9e4 & 5.8e-3 & 2.8e-5 & 8.8e-6 & 1.05e-5& 1.57e16\\
\ m27& 70 & 55 & 6.7e-5& 2.7e4 & 6.6e-3 & 2.1e-5& 1.0e-5& 4.e-6 & 2.3e16\\
\ m28& 65 & 80 & 6.e-5 & 2.7e4 & 6.8e-3 & 2.1e-5 & 1.2e-5 & 4.e-6 & 6.1e15\\
\ m29 & 65 & 85 & 8.e-5 & 2.7e4 & 6.9e-3 & 1.8e-5 & 1.6e-5 & 3.e-6 & 6.9e15\\
\ m30& 65 & 88 & 4.5e-5& 2.7e4 & 8.e-3 & 1.7e-5 & 1.6e-5 & 2.e-6 & 3.e15\\
\ m31& 69 & 125& 2.2e-5& 2.2e4 & 2.2e-2 & 2.e-5 & 1.9e-5 & 1.e-6 & 2.7e15\\
\ m32& 74 & 70 & 5.3e-5& 2.3e4 & 1.e-2 & 1.85e-5& 1.4e-5 & 2.3e-6 & 3.1e16\\
\ m33& 68.5& 62 & 6.4e-5& 2.4e4 & 1.3e-2 & 1.9e-5 & 1.3e-5 & 2.6e-6 & 1.95e16\\
\ m34& 68 & 88 & 4.e-5 & 2.2e4 & 1.3e-2 & 1.8e-5 & 1.35e-5& 1.e-6 & 1.1e16\\
\ m35& 74 & 46 & 2.5e-5& 2.e4 & 1.1e-2 & 1.6e-5 & 1.9e-5 & 1.3e-6 & 8.7e16\\
\ m36& 73 & 1 & 1.3e-5& 3.7e4 & 1.2e-3 & 1.2e-5 & 6.e-6 & 4.e-6 & 1.5e18\\
\ m37& 76 & 5 & 1.3e-5& 3.9e4 & 2.e-3 & 1.2e-5 & 6.e-6 & 4.e-6 & 6.2e16\\
\ m38& 76 & 5 & 9.e-6 & 3.9e4 & 2.e-3 & 2.5e-6 & 6.e-6 &2.5e-6 & 5.4e16\\ \hline
\ m$_{pl}$&75& 3 & 2.e-6 & $\alpha$=-2&F$_{\nu}$=1e6$^2$&8.e-6& 3.e-5 & 3.e-6& 4.2e16\\ \hline
\end{tabular}
\end{center}
\flushleft
$^1$ U is dimensionless.
$^2$ In photons cm$^{-2}$ s$^{-1}$ eV$^{-1}$ at the Lyman limit.
\end{table*}
\subsection{The continuum}
\begin{figure}
\includegraphics[width=0.45\textwidth]{f4.eps}
\caption{The observed corrected continuum SEDs in the different positions.
magenta dotted : 1-3; red solid : 4-13; blue solid : 14-19;
green solid : 20-31; black solid : 32-35; magenta dashed : 36-38}
\end{figure}
\begin{figure*}
\includegraphics[width=0.3\textwidth]{f5a.eps}
\includegraphics[width=0.3\textwidth]{f5b.eps}
\includegraphics[width=0.3\textwidth]{f5c.eps}
\caption{Comparison of calculated with observed SEDs in positions 1, 20, and 31.
For each model two curves are shown : one referring to bremsstrahlung peaking at higher frequencies,
and one peaking in the IR referring to dust reprocessed
radiation. Solid lines : models m1, m20, and m31 calculated by a$_{gr}$=0.2 $\mu$m~.
Dashed lines : in the left diagram, model m1 calculated with a$_{gr}$=0.01
$\mu$m~; in the middle and right diagrams, models calculated with
U=2.2.
}
\end{figure*}
We adopt the observational data of the continuum by Simpson et al (2007, Table 4).
The data were corrected for extinction by the same correction parameters as those used for
the lines (Table 1). The SEDs in the different positions as a function of the frequency are shown
in Fig. 4. The errorbands are not included in the figure for the sake of clarity.
Two different trends appear : one characteristic of positions in the ISM : 1, 2, 3 (dotted lines) and 36, 37, 38
(dashed lines).
The other trend prevails in the other regions (except in position 20).
Different colours refer to different groups of positions,
in order to have a qualitative view of the SEDs.
\subsubsection{The continuum SEDs}
To understand the trends in Fig. 4, we present
in Fig. 5 the comparison of the continua calculated by the models which were used to calculate the line ratios
with the data for positions 1, 20, and 31 in a large range of frequencies, from radio to X-ray.
Unfortunately, the data cover only the range between $\sim$ 10 and 35 $\mu$m~.
The calculated continuum SED shows the contribution of the gas (free-free+free-bound) and that of dust reprocessed
radiation, separately.
An interesting result appears from Fig. 5, namely, the
dust reradiation peak predicted by the models, which explain the line ratios (solid lines),
occurs at a frequency
lower than that derived from the observations in positions 20 and 31, while in position 1, model m1 can
reproduce the continuum adopting grains with a radius a$_{gr}$ $\sim$ 0.01 $\mu$m~ (dashed lines).
Very small grains
can be heated stochastically to temperatures ($\leq$ 50 K, Draine 2003) which, however,
are not high enough to shift the frequency peak.
PAHs correspond to small grains ($\leq$ 0.01 $\mu$m~), while the size of grains including Fe is still
under discussion (Hoefner 2009).
Previous calculations of models including dust (Contini et al 2004) show that the peak shifts to
higher frequencies 1) increasing $\rm V_{s}$~ i.e. increasing the collisional heating of dust grains,
2) increasing U, i.e. increasing radiation heating of the grains, and 3) reducing the radius of the grains.
Excluding collisional heating from a higher velocity, which would imply very broad line
profiles, the only parameters that we can alter are U and a$_{gr}$.
We have calculated some models with a very high ionization parameter which represent the case
of an emitting cloud closer to the hot source, or less screened from the radiation flux.
For positions 20 and 31 we had to use an ionization parameter higher by a factor $>$ 100
than that used for the lines in order to fit the IR continuum data.
The model leading to the hot dust component, however, produces different line fluxes,
destroying the good fit of the line ratios to the data shown in Table 1. So this model contribution must correspond
to a low relative weight.
In positions 20 and 31, a dust
temperature of $\sim$ 150 K (dashed lines) explains the data in the IR,
while dust within the cloud emitting the
line spectrum at position 20 reaches only a temperature of $\sim$ 38 K (solid lines).
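As a rough order-of-magnitude check of these temperatures, the following Python sketch applies Wien's displacement law to blackbody dust; this is an approximation for illustration only, since the models compute the full grain emission.
\begin{verbatim}
# Wien's displacement law for an approximate blackbody dust peak.
WIEN_UM_K = 2898.0  # Wien constant, micron * K

for t_dust in (38.0, 150.0):
    lam_peak = WIEN_UM_K / t_dust          # peak wavelength in micron
    print(f"T_dust = {t_dust:5.0f} K -> peak ~ {lam_peak:5.0f} micron")
# T = 38 K peaks near ~76 micron, outside the 10-35 micron data range,
# while T ~ 150 K peaks near ~19 micron, inside it.
\end{verbatim}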
Moreover, Fig. 5 shows that the IR continuum in positions 20 and 31 is emitted by dust
while in position 1 the data are reproduced by the sum of reradiation fluxes by dust and bremsstrahlung.
This explains the different slopes in Fig. 4.
In agreement
with very non-homogeneous matter in each observed region, different
clouds contribute to the spectra.
Alternatively,
the relatively hot dust could be spread in the central region of the Galaxy, independently from the gas morphology.
\subsubsection{Comparison of IR continuum fluxes}
In Fig. 2c (bottom panel) the bremsstrahlung (black solid line) in the IR range {\it calculated
at the nebula} at each position,
is compared with the fluxes corresponding to different wavelengths
{\it observed (corrected) at Earth},
in the continuum.
They are shifted by a factor $\eta$ which depends on the distance of
the clouds from the photoionization source and on the distance of the clouds to Earth.
Adopting a distance to Earth of 8 kpc (Simpson et al 2007), the distance of the dusty clouds
from the cluster is $\geq$ 30 pc.
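The order of magnitude of such a distance can be sketched as follows, assuming the common definition U = Q/(4$\pi$ r$^2$ n c); the ionizing photon rate Q of the Arches Cluster is not given in the text, and the value below is adopted purely for illustration.
\begin{verbatim}
import math

C_CM_S = 2.998e10      # speed of light [cm/s]
PC_CM = 3.086e18       # parsec in cm

Q = 4e51               # ASSUMED ionizing photons per second (illustration only)
U = 5e-3               # typical fitted ionization parameter (Table 2)
n = 50.0               # representative pre-shock density [cm^-3]

r = math.sqrt(Q / (4.0 * math.pi * U * n * C_CM_S))
print(f"r ~ {r / PC_CM:.0f} pc")   # tens of pc, consistent with >= 30 pc
\end{verbatim}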
Recall that both the bremsstrahlung and the IR fluxes depend on n$^2$ D (where n is the density
of the emitting gas), while the IR fluxes between 10 and 33.48 $\mu$m~ depend also on
the gas-to-dust ratios, because they are generally emitted from reprocessed dust.
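A minimal sketch of this n$^2$ D scaling is given below, using the pre-shock densities and geometrical thicknesses of Table 2 as rough proxies for the emitting gas; the actual downstream density profiles are computed by the code.
\begin{verbatim}
# Emission-measure proxy n0^2 D from Table 2 (pre-shock values only).
models = {"m1": (1.0, 1.92e17), "m20": (92.0, 4.0e15), "m31": (125.0, 2.7e15)}

em = {name: n0**2 * d for name, (n0, d) in models.items()}
ref = em["m1"]
for name, value in em.items():
    print(f"{name}: n0^2 D = {value:.2e} cm^-5  ({value / ref:.1f} x m1)")
\end{verbatim}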
A perfect fit of the bremsstrahlung with the IR observed fluxes
is not expected due to the approximations of modelling.
Fig. 2c shows that the bremsstrahlung and the IR fluxes have roughly similar profiles,
except in the ISM positions : in the southern positions, dust reradiation is higher than the bremsstrahlung,
confirming that Si, S, and Fe could be depleted from the gaseous phase into grains,
while in the northern positions, the dust-to-gas ratios are low.
\section{Position C - G0.095+0.012 and the E2 Thermal Radio Filaments}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{f6.eps}
\caption{Location of infrared observations overlaid on a VLA continuum radiograph
(Yusef-Zadeh et al. 1984). The figure is taken from Erickson et al (1991, Fig. 1).
}
\end{center}
\end{figure}
Radio VLA maps show peculiar filamentary
structures $\sim$ 10' long located roughly 10' northeast of the Galactic center,
which suggest fragmentation of matter close to shock fronts.
The morphology and the radio polarization (Yusef-Zadeh, Morris, \& Chance 1984)
indicate that magnetic fields are important,
which is significant for models including shocks.
Moreover, it was found that the Arches Cluster is photoionizing the region of straight and arched
filaments surrounding it (Erickson et al. 1991, hereafter E91).
This led us to adopt composite models (shocks+ photoionization)
to explain gas and dust spectra observed from both thermal (arched) and non-thermal (linear) structures.
The clouds and filaments at position G0.095+0.012 and the E2 thermal
"arched" radio filament near the Galactic center were observed by E91
in 8 positions (Fig. 6).
To clarify the nature of these filaments, E91 made FIR line
and adjacent continuum
observations of [SIII]19, [SIII]33, [OIII]52 and [OIII]88, [NIII]57, [SiII]35,
[OI]63, [OI]145,
and [CII]158 from NASA's Kuiper Airborne Observatory (KAO).
Upper limits were obtained for [SI]25, [FeII]26, and [NeIII]36.
In this section
the modelling of line and continuum spectra is presented.
\subsection{Position C}
\subsubsection{The line spectrum}
We have corrected the spectrum for extinction as indicated by E91.
In Table 3, we compare our models with the observed corrected line ratios normalized to [SIII]33.5 = 1.
The best fit is obtained adopting $\rm T_{*}$~=24,000 K and U=4 10$^{-3}$,
$\rm V_{s}$~=65 $\rm km\, s^{-1}$, and $\rm n_{0}$=200 $\rm cm^{-3}$. The magnetic field $\rm B_{0}$= 2 10$^{-5}$ gauss
is similar to that found in S07 position 31.
The relative abundances which lead to the best fit of the observed spectrum, show that C/H is lower
than solar by a factor of 1.65, while N/H is higher
by a factor of 1.5. Si, S, and Fe are lower than solar suggesting that they are trapped into grains.
Also in this case the clouds are moving towards the hot source, i.e. the Arches Cluster.
In our models, the temperature of the stars (24,000 K) is determined phenomenologically,
because it leads to a consistent fit of all the lines.
E91 adopted a T$_{\it eff}$=35,000K atmosphere from Kurucz (1979).
The LTE atmosphere has a very different UV SED from a black body (Simpson et al 2004, Fig. 6)
so the entire modelling is different.
\begin{table}
\caption{IR line spectrum at position C}
\begin{tabular}{lllll}\\ \hline \hline
\ line & obs$^1$ &I$_{\lambda}$/I$_{[SIII]33.5}$$^2$ & m$_{C}$ \\ \hline
\ [SIII] 18.8 & 21.3$\pm$4.0 & 1.8 & 1.82 \\
\ [FeII] 26 & $<$9.4 &$<$0.2 &0.2\\
\ [SIII] 33.5 & 70.5 $\pm$1.1 & 1. &1.\\
\ [SiII] 34.8 & 31.5 $\pm$1.2 & 0.42 & 0.41 \\
\ [NeIII] 36. & $<$0.7 &$<$0.009 & 0.009 \\
\ [OIII] 51.8 & 12.8$\pm$0.4 & 0.135 & 0.14 \\
\ [NIII] 57.3 & 15.7$\pm$0.5 & 0.159& 0.15 \\
\ [OI] 63.2 & 5.2$\pm$0.4 & 0.05 & 0.06 \\
\ [OIII] 88.4& 11.2$\pm$0.3 & 0.1 &0.08\\
\ [OI] 145.5 & 0.5$\pm$0.05 & 0.004 & 0.003 \\
\ [CII] 157.7 & 8.2$\pm$0.1 & 0.074 & 0.075 \\
\ H$_{\beta}$ ($\rm erg\, cm^{-2}\, s^{-1}$) & -&- & 6.4e-5 \\
\ $\rm V_{s}$~ ($\rm km\, s^{-1}$)& - &- &65 \\
\ $\rm n_{0}$ ($\rm cm^{-3}$) & - &- &200 \\
\ $\rm B_{0}$ (10$^{-3}$ gauss)& - &- &0.02\\
\ $\rm T_{*}$~ (K) & - & -& 2.4e4 \\
\ U &-& - & 4.e-3 \\
\ $D$ (10$^{14}$ cm) & - &- &9.7 \\
\ C/H &-&-& 2.0e-4 \\
\ N/H &-&-& 1.5e-4 \\
\ Si/H&-&-& 4.0e-6 \\
\ S/H &-&-& 1.0e-5 \\
\ Fe/H &-&-& 2.6e-6 \\
\hline
\end{tabular}
\flushleft
$^1$ 10$^{-18}$ W cm$^{-2}$
$^2$ extinction corrected (Erickson et al. 1991, Table 1)
\end{table}
In Fig. 7 we show the profile of the electron temperature and
electron density, and of the fractional abundance of
the most significant ions downstream as calculated by model m$_C$.
The model is matter bound.
\subsubsection{The continuum SED}
We try to constrain the model adopted to explain the line spectrum, using the SED
of the continuum.
We plot in Fig. 8 the data from E91. The data cover
only the far-IR range, but they are enough to show that with model
m$_{C}$ the continuum data are not reproduced. In particular, the model dust reradiation peak
is shifted to a lower
frequency. We check whether a higher $\rm V_{s}$~ could improve the agreement since
higher $\rm V_{s}$~ lead to higher dust peak frequencies.
We have adopted a rather large $\rm V_{s}$~ compared with the radial
velocities ($\sim$ 10 $\rm km\, s^{-1}$) measured by E91.
Morris \& Yusef-Zadeh (1989) suggest a mechanism to account for the ionization
and radio emission based on a retrograde, high velocity of $\sim$ 100 $\rm km\, s^{-1}$
cloud encountering the poloidal magnetic field in the vicinity of the
GC. Even with such a high velocity, the dust peak could not be reproduced.
In relatively low shock-velocity regimes, a high flux dominates the ionization and heating
processes. We have therefore
run a model with a very high U (=5), as we have done for the dust peak relative
to the S07 observations.
The other parameters are the same as those of model m$_{C}$.
The fit to the IR data by the hot dust model is satisfactory.
Dust is not homogeneously distributed throughout the observed region.
Dilution of U can be explained by a larger distance from the photoionizing source
and/or by obscuring matter between the radiation source and the emitting clouds.
\subsection{The E2 arched filament}
E91 reported the observations of the [SIII]33,
[OIII]52, and [OIII]88
lines at eight positions along the E2 arched filament. They claim that the E2
filament is roughly tubular with a 10:1 length to diameter ratio. Moving northward
along the filament, the excitation decreases slowly and the line and continuum
brightness decrease by a factor of $\sim$ 2.
In Table 4 we compare the calculated
[OIII]52/[SIII]33 and [OIII]52/[OIII]88 line ratios with the data
corrected for extinction. The lines observed are too few to fully constrain
the models.
In Table 4 we refer to positions 2, 4, 6, and 8, where both the [OIII]52 and [OIII]88
lines are observed.
We notice by modelling that the line ratios depend strongly on the preshock density.
These ratios are significant because [SIII] refers to an ionization potential
lower than that of [OIII], so the trend of the [OIII]52/[SIII]33 ratio eventually resembles
that of the [OIII]/[OII] ratio, assuming constant relative abundances.
\begin{table*}
\caption{IR line spectra in the E2 arched filament}
\begin{tabular}{cccccccccccccc}\\ \hline \hline
\ line ratios &\multicolumn{2}{c}{position 2} &\multicolumn{2}{c}{position 4} &\multicolumn{2}{c}{position 6} &\multicolumn{2}{c} {position 8} \\ \hline
\ & obs & calc & obs & calc & obs & calc & obs & calc \\
\ [OIII]52/[SIII]33 & 0.176 & 0.178&0.105&0.105 &0.0664&0.067&0.076&0.076\\
\ [OIII]88/[SIII]33 &0.116 & 0.112&0.079&0.079 &0.076&0.076&0.067&0.066\\
\ $\rm n_{0}$ ($\rm cm^{-3}$) & - &160 &- &110 &-& 40 & - & 80 \\
\hline
\end{tabular}
For all positions $\rm V_{s}$~=65 $\rm km\, s^{-1}$, $\rm B_{0}$= 2 10$^{-5}$ gauss, U=4 10$^{-3}$, $\rm T_{*}$~= 24,000 K, and the relative
abundances as for position C.
\end{table*}
\begin{figure}
\includegraphics[width=0.23\textwidth]{f7a.eps}
\includegraphics[width=0.23\textwidth]{f7b.eps}
\caption{Top : the profile of the electron temperature and the electron density
downstream in position C, calculated by model m$_C$.
Bottom : the profile of the fractional abundance of the most significant ions.
The diagram on the right presents a zoom of the temperature drop region downstream.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{f8.eps}
\caption{The comparison of the calculated continuum SED in position C
with the data (Erickson et al. 1991).
Short-dash : model m$_{C}$; solid : model calculated with
U=5.
For all models two curves appear, referring to the bremsstrahlung and to reradiation
by dust}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{f9.eps}
\caption{The comparison of the calculated continuum SED
with the data in position 2 of the E2 strip (Erickson et al. 1991).
Short-dash : model calculated in position 2 (see Table 4); solid : model calculated with
U=5. For each model two curves appear: one refers to the bremsstrahlung,
the other, peaking in the IR, to the dust reprocessed radiation.
}
\end{center}
\end{figure}
In Fig. 9 the continuum SED in position 2 is compared with the IR data
(E91, Table 2) corrected for extinction.
Using the model which leads to the best fit of the line ratios,
dust reaches a maximum temperature of $\sim$ 40 K,
while the data are better explained by a temperature of $\sim$ 94 K.
This relatively high temperature can be achieved by a very high U (see Sect. 3.3.1).
The model
which explains the line ratios is constrained by the datum in the radio range.
The contribution of the high U cloud component in the line spectra
is $<$ 5\%.
The high U clouds can be very close to stars embedded within the filament,
or they are directly reached by radiation from the Arches Cluster stars,
as previously explained.
Notice that iron is highly depleted from the gaseous phase, therefore we can attribute IR radiation to
iron rich grains (see Sect. 3.3.2).
\section{The spectra in the E2-W1 Crossing Strip}
\begin{figure}
\includegraphics[width=0.45\textwidth]{f10.eps}
\caption{Positions of infrared observations overlaid on a VLA 5 GHz
continuum map from the data of Morris \& Yusef-Zadeh (1989).
(Taken from Colgan et al. 1996, Fig. 2)
}
\end{figure}
The region 0$^o$.25 north of the GC is characterised by $\geq$ 30 pc long, thin,
straight filaments of ionized gas which cross the Galactic plane
at right angles (Yusef-Zadeh et al. 1984, Simpson et al. 1997).
Their radio continuum emission is polarized and
nonthermal, indicating
relatively strong magnetic fields (e.g. Tsuboi et al 1985).
In the north-western region, the linear filaments crossing the Galactic plane
intersect the Arched filaments, which emit thermal radio continuum.
It seems that the linear and arched filaments are connected (Yusef-Zadeh \& Morris 1988).
The excitation mechanism responsible for the emission of both sets of
filaments is controversial (Colgan et al 1996, hereafter C96, and references therein).
Photoionization from Sgr A West is excluded because the photon flux is too low.
Photoionization by a line of OB stars close to the filaments is not suitable to the
region's morphology. Collisional excitation of the lines, derived from the MHD model of
Morris \& Yusef-Zadeh (1989), is rejected on the basis of electron densities lower than
those of the adjacent molecular gas. Embedded evolved stars not associated with the filaments
could also provide some fraction of the continuum; however, Genzel et al (1990) claim that their
luminosity is too low to account for the infrared continuum.
It is now clear that the hot young star cluster (the Arches Cluster, Figer et al 1999) found
by Cotera et al (1996) and Nagata et al. (1995) is the main source of photoionization.
Moreover, the FWHM of the lines presented by Cotera et al. (2005) for the E1 filament
and the top of the E2 filament are relatively high and indicate that the effect of shocks
is non-negligible.
C96, in their Table 1, present the far-IR line and continuum spectra
in 11 positions of the strip between E2 and W1 thermal radio filaments in
the Galactic Center "arch" (Fig. 10).
In the following we will try to explain the spectra by composite models
that were used previously in Sects. 3 and 4, namely, shock and photoionization
are consistently adopted to calculate the line ratios. Comparison of calculated with observed
line ratios leads to the set of parameter which best describe the physical conditions in the
observed regions. We consider that the photoionizing radiation flux is provided by the stars in the
Arches Cluster.
The observations were made by the Kuiper Airborne Observatory (KAO) facility Cryogenic Grating Spectrometer (CGS)
(Erickson et al. 1985).
The line fluxes presented by C96 include [SIII]33, [SiII]35, [OIII]52, [OI]63, [OIII]88, and
[CII]158.
\subsection{Preliminary model investigation}
\begin{table*}
\begin{center}
\caption{The preliminary models}
\begin{tabular}{llllllll}\\ \hline \hline
model &$\rm V_{s}$~ & $\rm n_{0}$ & $U$ & $D$ & symbols \\
& ($\rm km\, s^{-1}$) & ($\rm cm^{-3}$) & - & (10$^{15}$ cm)& \\
mp1 & 50 & 70 & 0.005& 1-10&dotted line linking empty triangles (cyan) \\
mp2 & 60 & 80 & 0.0005& 0.5-50& short-dashed + asterisks (5) (black) \\
mp3 & 60 & 100 & 0.0005& 0.3-30&short-dashed + asterisks(7) (black) \\
mp4 & 60 & 150 & 0.0005& 0.2-16& short-dashed + empty circles (black) \\
mp5 & 50 & 70 & 0.001 & 1-2&dotted + empty pentagons (magenta)\\
mp6$^1$ & 30 & 30 & 0.0015& 2-10&long-dashed + empty hexagons (green)\\
mp7 & 50 & 60 & 0.002 & 10-100& solid + asterisks (3) (red) \\
mp8 & 50 & 70 & 0.002 & 0.8-14 & solid + asterisks (5) (blue) \\
mp9 & 50 & 80 & 0.0015& 0.6-4.5& solid + dash (black) \\
mp10$^2$ & 60 & 60 & 0.002 & 30-150 & long dashed + asterisks (5)(red) \\
\hline
\end{tabular}
\flushleft
$^1$ mp6 was calculated adopting Si/H = 3.3 10$^{-6}$.
$^2$ mp10 was calculated adopting $\rm B_{0}$=10$^{-4}$ gauss.
\end{center}
\end{table*}
\begin{figure}
\includegraphics[width=0.42\textwidth]{f11a.eps}
\includegraphics[width=0.42\textwidth]{f11b.eps}
\caption{The comparison of model calculations with the most significant
line ratios observed in the E2-W1 crossing strip. The numbers refer to positions
in Colgan et al. 1996, Fig. 2. The observations are represented by black triangles.
In the top diagram we have plotted the data (black dots) from the observations
of Erickson et al (1991, Table 2).
The models are described in Table 5 and in the text.}
\end{figure}
Before trying a detailed modelling of the line ratios, we investigate the parameter ranges and
their influence on the different line ratios by comparing the observed line ratios with a grid
of models in the Fig. 11 plots.
The models (mp1-mp10) are described in Table 5. They are characterized by
the shock velocity $\rm V_{s}$~, the pre-shock density $\rm n_{0}$, the ionization parameter U,
and a range of geometrical thickness of the clouds which are indicated by point symbols
throughout the model trends.
In this first trial we adopted a relatively high $\rm T_{*}$~ (34,000 K) close to that indicated by E91.
For all the models $\rm B_{0}$=3 10$^{-5}$ gauss,
except for mp10 which was calculated by $\rm B_{0}$=10$^{-4}$ gauss. A stronger
magnetic field prevents the compression downstream which is generally regulated
by the shock velocity and the pre-shock density and leads to unsuitable line ratios.
On the top of Fig. 11, [OIII]88/[SIII]33 versus [OIII]52/[SIII]33 is compared with
model results. The data from C96 (their Table 1) are shown as filled
triangles.
On the same diagram we have plotted the data from the observations
of Erickson et al (1991, Table 2)
from G0.095+0.012 and the E2 thermal radio filament. The data are distributed
from left to right : 6, 8, 4, C, and 2.
This diagram is related to the physical conditions in the gas
emitting the lines from the second ionization level. It seems that both the
[OIII]88/[SIII]33 and [OIII]52/[SIII]33 ratios increase with increasing density,
because the S$^{++}$ region downstream is more reduced than the O$^{++}$ one at higher n.
These ratios are
particularly correlated with the geometrical thickness of the filaments, decreasing at higher $D$.
To constrain the models, we show in Fig. 11 (bottom) the [CII]158/[SIII]33
vs. [SiII]35/[SIII]33 plot.
The two spectra at positions 5 and 11 which were not reproduced by the models in the top diagram
are well included among the other positions in the bottom one.
In fact, the spectrum in position 5 shows an unreliable [OIII] 52 (0.5 $\pm $ 0.4) and that in position 11
shows an unreliable [OIII] 88 (0.2 $\pm$ 0.3). In the bottom diagram, which
is independent of these lines, the two spectra regain the "normal" trend.
We refrain from showing the error-bars in the diagrams for the sake of clarity.
The spectra at positions 6 and 10 are not reproduced by the grid of models
presented in Table 5. In fact, the relatively high $\rm T_{*}$~ maintains the gas ionized
to the second ionization level in a large region, leading particularly
to underpredicted [CII]/[SIII] line ratios.
Cross-checking the results obtained by a detailed modelling of the data (Tables 6 and 7),
models mc6 and mc10 (green lines, solid and short-dashed, respectively) were plotted on Fig. 11.
Two main trends can be noticed in the bottom diagram. In fact,
the combination of the input parameters leads to the stratification of the ions downstream
of the shock front, which is also reached by the photoionization flux from the stars (Figs. 3, 7, and 12).
For instance, a relatively high $\rm T_{*}$~ and/or a high U maintain the gas ionized to a higher D
(the distance from the shock front),
while a higher n speeds up recombination because the cooling rate is $\propto$ n$^2$.
The shock velocity yields compression (increasing n) and a relatively high temperature
downstream ($\propto$V$_s^2$) leading to a characteristic stratification of the physical
conditions.
When $\rm T_{*}$~ and/or U are relatively low and D relatively large,
the fractional abundance of S$^{++}$ is low and the
[SIII] line flux remains nearly constant at larger D throughout the filament.
On the other hand, the first ionization potential of Si and C (8.11 eV and 11.20 eV, respectively) are lower than
that of H (13.54 eV), so Si and C remain singly ionized at larger D. This leads to [CII]/[SIII]
and [SiII]/[SIII] line ratios increasing with D.
When $\rm T_{*}$~ and/or U are relatively high (models mp1-mp10) and D is such that the S$^{++}$ and C$^+$ fractional abundances
are still increasing, [CII]/[SIII] slightly decreases. As soon as D reaches the S$^{++}$ recombination distance,
[CII]/[SIII] increases.
[SiII]/[SIII] has an increasing trend because of its very low ionization potential.
\begin{figure*}
\centering
\includegraphics[width=0.23\textwidth]{f12a.eps}
\includegraphics[width=0.23\textwidth]{f12b.eps}
\includegraphics[width=0.23\textwidth]{f12c.eps}
\includegraphics[width=0.23\textwidth]{f12d.eps}
\caption{ The profile of the electron temperature and of the electron density
(top diagrams) and the distribution of the fractional abundance of the most significant ions
(bottom diagrams) throughout a cloud corresponding to models (Table 5) mp7, mp8, mp6, and mp3,
from left to right, respectively}
\end{figure*}
To better understand the trend of the models presented in Table 5,
the profiles of the electron temperature, of the electron density,
and of the fractional abundance of the more significant ions
downstream are shown in Fig. 12 for models mp7, mp8,
mp6 and mp3 from left to right, respectively.
Compression is relatively small for $\rm V_{s}$~ $<$ 100 $\rm km\, s^{-1}$.
The best fitting models are matter bound as can be guessed from the relatively
low [SiII]/[SIII] line ratios.
Summarizing, the trend of the data in the E2-W1 strip was recovered using models with $\rm T_{*}$~ = 27,000 K.
We conclude that, also on the basis of the conditions found in position C ($\rm T_{*}$~ = 24,000 K),
a relatively low temperature is more suitable for the stars close to the observed positions.
\subsection{Detailed modelling}
The [SIII]33 line
is the strongest one, so we will consider line ratios relative to [SIII]33.
The ratios of the observed corrected lines fluxes to [SIII]33
are reported in Table 6.
The line fluxes were corrected according to C96 factors.
The results of modelling
are given in the row below that containing the data, for all positions.
The selected models, numbered from mc1 to mc11, are described in Table 7.
\begin{table*}
\caption{Comparison of observed (corrected) with calculated IR line ratios}
\begin{tabular}{ccccccccc}\\ \hline \hline
\ Position & [SIII]33& [SiII]35 & [OIII]52 & [OI]63 & [OIII]88 & [CII]158 \\
\ 1& 10.& 2.46& 1.29& 0.15& 1.26& 1.53\\
\ mc1& 10 & 2.66& 1.3 & 0.2 & 1.26& 1.44\\
\ 2& 10.& 2.03& 1.61& 0.00& 1.58& 1.29\\
\ mc2& 10. & 2.1 & 1.63& 0.1 & 1.58& 1.26\\
\ 3& 10.& 2.45& 1.21& 0.15& 0.94& 0.88\\
\ mc3& 10. & 2.4 & 1.28& 0.14& 0.93& 0.86\\
\ 4& 10.& 1.59& 1.04& 0.32& 0.64& 0.74\\
\ mc4& 10. & 1.59& 1.03& 0.31& 0.66& 0.79\\
\ 5& 10.& 2.62& 0.24& 0.44& 0.76& 1.51\\
\ mc5& 10. & 2.77& 0.46& 0.43& 0.76& 2.36\\
\ 6& 10.& 3.58& 1.44& -0.45& 0.85& 2.56\\
\ mc6& 10. & 3.8 & 1.44& 0.42& 0.85& 2.5\\
\ 7& 10.& 6.21& 2.65& 1.06& 1.34& 0.97\\
\ mc7& 10. & 6.4 & 2.54& 0.96& 1.4 & 1.06\\
\ 8& 10.& 3.44& 1.19& -0.08& 0.68& 0.66\\
\ mc8& 10. & 3.2 & 1.2 & 0.085& 0.67& 0.6\\
\ 9& 10.& 1.76& 0.52& 0.10& 0.66& 0.89\\
\ mc9& 10. & 1.73& 0.52& 0.10& 0.66& 0.9\\
\ 10& 10.& 3.68& 1.19& 0.46& 0.74& 1.92\\
\ mc10& 10. & 3.5 & 1.17& 0.47& 0.73& 1.4\\
\ 11& 10.& 1.87& 1.89& 1.22& 0.26& 1.37\\
\ mc11& 10. & 2.0 & 1.6 & 1.29&0.8 & 1.38\\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{The models adopted in the E2-W1 strip}
\begin{tabular}{lllllllllll}\\ \hline \hline
\ model & $\rm V_{s}$~& $\rm n_{0}$& $\rm B_{0}$ & U & Si/H & S/H & C/H & D \\ \hline
\ mc1& 65& 91& 6.e-5& 2.e-3& 4.e-6 & 1.3e-5& 3.5e-4&2.9e15\\
\ mc2& 65& 91& 6.e-5& 2.5e-3& 4.e-6 & 1.3e-5& 3.5e-4&2.3e15\\
\ mc3& 65& 150& 7.e-5 & 3.0e-3& 5.e-6 & 3.5e-5& 3.5e-4&1.7e15\\
\ mc4& 65& 170& 5.e-5 & 2.3e-3& 2.7e-6& 1.5e-5& 3.3e-4&1.35e15\\
\ mc5& 73& 10 & 8.e-5 & 8.e-4 & 5.e-6 & 4.e-5 & 9.e-5 &2.7e18\\
\ mc6& 75& 190& 5.e-5 & 1.2e-3& 1.6e-6& 1.e-5 & 3.3e-4&2.9e15\\
\ mc7& 70& 280& 5.e-5 & 1.9e-3& 6.e-6 & 1.e-5 & 2.8e-4&9.e14\\
\ mc8& 70& 150& 5.e-5 & 7.e-3 & 9.e-6 & 1.2e-5& 3.4e-4&1.9e15 \\
\ mc9& 70& 30 & 5.e-5 & 3.e-3 & 2.3e-6& 1.2e-5& 9.e-5 &7.3e16\\
\ mc10& 70& 140& 3.e-5 & 3.5e-3& 3.9e-6& 1.e-5 & 4.e-4 &1.6e15\\
\ mc11& 70& 200& 1.e-5 & 3.1e-3& 1.8e-6& 1.e-5 & 3.6e-4&9.0e14\\
\hline
\end{tabular}
For all models $\rm T_{*}$~=2.7 10$^4$ K is adopted.
\end{table*}
The modelling is constrained by the [OIII]/[OI] line ratio, which depends strongly on the ionization
parameter, while the [OIII]52/[OIII]88 ratio depends on the density.
The shock velocity does not strongly affect the line ratios, because lines from relatively high ionization
levels were not reported.
So we have chosen, as a first guess, $\rm V_{s}$~ in the range of the shock velocities explaining the spectra
observed by S07 (Table 2) in the region between E2 and W1. The ranges of the other parameters, $\rm n_{0}$, $\rm B_{0}$, and U,
were suggested by the preliminary investigation (Sect. 5.1) which also leads to relatively low $\rm T_{*}$~.
All the results presented in Table 6 were consistently calculated. In position 5 the density
adopted to explain
the very low [OIII]52/[OIII]88 line ratio is exceptionally low. This was already realized by C96.
Even with $\rm n_{0}$=10 $\rm cm^{-3}$, the calculated value is lower than the observed one. Notice, however, that
the error given by C96 in their Table 1 for the [OIII]52 observed line flux is $\sim$ 80 \%.
The results are shown in Fig. 13. In Fig. 13a the parameters depending on the shock are given
as a function of position.
The preshock density shows two deep minima, at positions 5 and 9. As expected, the shock velocity has a maximum
in position 5 denoting that the shock velocity range is relatively large.
The pre-shock magnetic field shows a decreasing trend in agreement with the results for $\rm B_{0}$ obtained by explaining
S07 observations between positions 31 and 34.
In Fig. 13a the [OIII]88/[OIII]52 ratio shows a profile similar to that of the density, while the radio
distribution, taken from C96 and shown in the top panel, is not well correlated with the density.
This can be explained by recalling that
radio and line emissions occur from different regions of the gas downstream in each cloud.
Interestingly, the distribution of both the radio fluxes at 43 and 1.4 GHz shows
that the two maxima do not correspond exactly to the maxima in D (Table 6) of 2.7 10$^{18}$ cm in position 5
and 7.3 10$^{16}$ cm in position 9.
In Fig. 13b, the profile of U is shown and compared with the observed [OIII]52/[OI]63 and
[SiII]/[CII] line ratios (top panel). The oxygen line ratio follows the trend of U. The two U minima in positions
5 and 9 correspond to the minima in $\rm n_{0}$. The opposite would be expected because the ionization
parameter is reduced by crossing regions of dense gas.
This indicates that the origin of the U minima in positions 5 and 9 is different.
In Fig. 13b (bottom panel) the relative abundances are shown for Si, S, and C, showing
that carbon is depleted from the gaseous phase at positions 5 and 9. There, carbon grains, most probably
PAHs, screen the flux from the hot source, and the radiation flux does not lead to
full evaporation of the grains.
Also, Si is trapped into dust grains,
because it shows depletion from the gaseous phase along the whole strip.
The other relative abundances adopted by the models
are N/H = 10$^{-4}$, O/H = 6 10$^{-4}$, and Ne/H = 10$^{-4}$.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{f13a.eps}
\includegraphics[width=0.45\textwidth]{f13b.eps}
\caption{The results
along the different positions in the E2-W1 crossing strip.
a : the parameters depending on the shock.
b : the parameters depending on photoionization, and the relative abundances
}
\end{center}
\end{figure}
In Fig. 14a we present the continuum SED in position 2.
The diagram on the left shows the modelling of the data.
The short-dashed lines show the bremsstrahlung and dust reradiation
calculated by model mc2, which best fits the line ratios.
The data in the IR (C96, Table 1) are not well reproduced by the model
and indicate that dust grains are heated to a higher temperature,
because the dust reradiation maximum is shifted towards higher frequencies.
This result was previously found when modelling the S07 and E91 data.
The best fit is obtained by increasing U by a factor of 2000
and reducing a$_{gr}$ to 0.01 $\mu$m~. This leads to a maximum dust grain temperature of 88 K.
In order to reduce the contribution
of such a dusty cloud to the line spectra, a $d/g$ =0.4 by mass is adopted.
Grains are neither destroyed by evaporation, because the stars are not hot enough,
nor disrupted by sputtering, because the shock velocities are relatively low.
It seems that these grains are not explained by PAHs because C is not depleted from the gaseous phase
in position 2. They could be explained by eroded silicates and/or iron species.
The slope of the radio continuum is an interesting issue.
In fact, the non-thermal or thermal character of the emitting clouds is
determined on the basis of radio observations. The non-thermal character of the radio emission should confirm the
presence of shocks. Synchrotron radiation, created by the Fermi mechanism at the shock front, is observed in the
radio range from nebulae ionised and heated by radiation from both the stars and shocks.
The relative importance of synchrotron radiation to bremsstrahlung determines the non-thermal or thermal character.
Fig. 14a shows that the radio datum at 43 GHz can be explained by thermal bremsstrahlung
as well as by dust reradiation. If it corresponds to dust reradiation,
the synchrotron radiation flux created
could also contribute. We do not have enough data in the radio range to confirm this.
On the other hand, if the radio flux which refers to the data from Sofue et al. (1987) follows the bremsstrahlung
trend, it can indicate some free-free self-absorption towards lower frequencies.
For comparison, we have added in Fig. 14a and b the synchrotron power-law radiation flux (long-dashed line) which
clearly follows a different trend.
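The different trends can be illustrated with a short sketch comparing the flux drop between 1.4 and 43 GHz for the two mechanisms; the spectral indices below (0.1 for optically thin free-free, 0.7 for synchrotron) are typical assumed values, not fits to these data.
\begin{verbatim}
import numpy as np

nu = np.array([1.4e9, 43e9])      # Hz, the two radio frequencies discussed

def flux_ratio(alpha):
    """F(43 GHz)/F(1.4 GHz) for a power law F_nu ~ nu^-alpha."""
    return (nu[1] / nu[0]) ** (-alpha)

print(f"thermal bremsstrahlung: {flux_ratio(0.1):.2f}")
print(f"synchrotron:            {flux_ratio(0.7):.3f}")
# The synchrotron flux drops by nearly 10x more between 1.4 and 43 GHz,
# which is why the two mechanisms follow clearly different trends in Fig. 14.
\end{verbatim}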
To investigate the continuum for the other positions, we show in Fig. 14b the data in both the IR
and radio frequency ranges for all the positions.
The results found for position 2 are broadly valid also for the other positions.
The dust temperatures are now constrained by radio data at 43 GHz. We have thus added the black body
flux corresponding to 200 K (black dotted line).
Such a dust temperature is easily reached by small grains.
In Fig. 14c, a zoom on the dust reradiation maximum is shown. We can conclude
that dust cannot reach temperatures higher than 200 K.
In most positions there is an absorption feature at wavelengths $\geq$ 30 $\mu$m~.
Even if the data are insufficient to determine the absorption and emission bands of typical grains,
we suggest that the feature at $\sim$ 30 $\mu$m~ is not so rare, since it was discovered
from ground based observations in 63 Galactic objects : 36 evolved carbon stars, 14 PPNe, and 13 PNe
(Hony et al. 2002). In our Galaxy, this feature, whose carrier seems to be MgS species,
occurs from extreme AGB stars on to later stages (Jiang et al. 2008).
\begin{figure*}
\includegraphics[width=0.33\textwidth]{f14a.eps}
\includegraphics[width=0.33\textwidth]{f14b.eps}
\includegraphics[width=0.33\textwidth]{f14c.eps}
\caption{The continuum SEDs in the E2-W1 Strip.
a : position 2. short-dashed : model mc2;
long-dashed : synchrotron radiation; solid line : model
calculated with U=5 and a$_{gr}$=0.01 $\mu$m~.
b : the comparison for all positions. Solid lines :
red : 1; blue : 2; green : 3; magenta : 4; cyan : 5; black : 6;
yellow : 7 . Dotted lines : red : 8; blue : 9; green : 10; magenta : 11.
c : a zoom in the IR maximum.
For all models two curves appear referring to the bremsstrahlung and to reradiation
by dust}
\end{figure*}
\section{Concluding remarks}
We have modelled the IR spectra observed in the region near the Galactic center
with the aim of determining the physical conditions in the different
observed positions. We have obtained the results by comparing
the calculated line ratios and continuum SEDs with the observed data.
Our models account for the coupled effect of the shocks and the photoionizing flux.
The models are matter-bound, indicating a high degree of fragmentation of matter, which
is characteristic of turbulent regimes.
We have found that the shocks propagate towards the photoionizing source,
i.e. the star clusters, suggesting that gravitation may prevail over
possible wind effects.
The shock velocities range between $\sim$ 65 and 80 $\rm km\, s^{-1}$. Indeed, they are not high enough
to produce X-ray emission by bremsstrahlung (e.g. Fig. 5) and to collisionally heat the
dust grains to the observed temperatures of $\sim$ 150 K in some positions of the Arched Filament
region and of $\sim$ 88 K in the E2-W1 crossing strip.
In the downstream regions, the characteristic electron temperature and density
profiles lead to a good agreement of calculated line ratios from different ionization levels,
with the observed ones.
The results obtained with pure photoionization models which account for the [OIII] and
[SIII] line ratios (e.g. Rodr\'{i}guez-Fern\'{a}ndez et al. 2001, Simpson et al. 2007, etc) demonstrate
that photoionization from the clusters affects the intermediate ionization level line ratios.
However, by adopting the composite models, detailed results can also be found for the shock velocities,
pre-shock densities, and pre-shock magnetic fields, by modelling lines from different ionization levels.
The pre-shock densities range from $\sim$ 1 $\rm cm^{-3}$ in the ISM up to $\sim$ 280 $\rm cm^{-3}$ in the filamentary structures.
High densities ($\rm n_{0}$ = 80-100 $\rm cm^{-3}$) are found in the Arched Filaments, with the maximum values
in E91 position C ($\rm n_{0}$=200 $\rm cm^{-3}$) and in C96 positions 7 and 11 (280 and 200 $\rm cm^{-3}$, respectively).
The magnetic field ranges from 5 10$^{-6}$ gauss in S07 positions 1 and 2, characteristic of the ISM,
increasing smoothly to
$>$ 5 10$^{-5}$ gauss beyond the Bubble, up to a maximum of 8 10$^{-5}$ gauss. These values are about the same
as found in the crossing
strip E2-W1 in the Arched Filaments. Beyond the Arched Filaments, $\rm B_{0}$ regains the ISM values.
Our results confirm the predictions of LaRosa et al. (2005) for the magnetic field strength.
The maximum temperature of the stars is higher in the Quintuplet Cluster ($\sim$ 38000 K)
than in the Arches Cluster ($\sim$ 27000 K). There are stars at temperatures of $\sim$ 35000 K
in the southern ISM
and of $\sim$ 39000 K in the northern one, above 0.1 degree.
The ionization parameter is relatively low ($<$ 0.01), reaching
a maximum of $>$ 0.01 near the Arches Cluster. This indicates that the observed positions 30-35
are closer to the stars. In the E2-W1 strip, U is rather low, diluted by the distance
from the ionization source, most probably the Arches Cluster.
The depletion from the gaseous phase of Si is ubiquitous, indicating the presence of
silicate dust throughout all the region, while a large quantity of iron rich grains
is present in the region of the Arched Filaments.
Comparing the relative abundances for positions 29-34, S07 find on average Ne/H=1.63$\pm$0.08 10$^{-4}$
and S/H=1.16$\pm$0.06 10$^{-5}$,
while we find that Ne/H $\sim$ 10$^{-4}$ satisfactorily fits all positions and S/H fluctuates
between 6.3 10$^{-6}$ and 1.6 10$^{-5}$.
S07 find Fe/H $\sim$ 1.3 10$^{-6}$ in the Arched Filament and $\sim$ 8.8 10$^{-6}$ in the Bubble,
in agreement with our results: Fe/H $\sim$ 10$^{-5}$ in the Bubble and 10$^{-6}$ in the Arched Filaments
(Fig. 3c).
The continuum SEDs between 33 and 158 $\mu$m~ in all the observed regions indicate that a component
of dust heated to temperatures of $\sim$ 100-200 K must be present. The dust grains coupled to gas
within the emitting clouds cannot reach those high temperatures by using the input parameters
which are constrained by the fit of the line spectra. Higher ionization parameters and small
grains characterise this dust. We suggest that hot dust is located closer to the stars
than the emitting gaseous clumps.
The temperature of the stars is not high enough to destroy the grains by evaporation,
and the shock velocity cannot disrupt them totally by sputtering.
In the Arched Filaments, we find a dust-to-gas ratio $\sim$ 0.4 by mass.
The data are insufficient
to show absorption and emission bands from the grains or
to constrain the dust-to-gas ratios in
the different regions.
PAHs can be present in some positions of the Arched Filament region,
leading to a strong absorption of the photoionizing flux from the stars.
The radio emission seems to be thermal bremsstrahlung in all the positions observed in the Arched Filaments;
however, a synchrotron radiation component is not excluded.
More data should confirm and improve the modelling presented in this paper.
\section*{Acknowledgements}
I am grateful to J. P. Simpson, E. F. Erickson, and S. W. Colgan, for allowing me to reproduce their
figures, and to an anonymous referee for many important comments. I thank R. Angeloni for helpful advice.
|
2,869,038,153,906 | arxiv | \section{INTRODUCTION}
Connected and Automated Vehicle (CAV) technologies have developed significantly in recent years. CAV would change the mobility system and its energy efficiency \cite{Wadud2016}. The mechanisms of the energy efficiency and emission impacts of CAV include eco-driving \cite{mensing2011vehicle,barkenbus2010eco,huang2018eco}, platooning \cite{bergenhem2012overview,bergenhem2010challenges,al2010experimental}, congestion mitigation \cite{guerrero2015integration,feng2015real}, higher highway speeds \cite{Wadud2016}, de-emphasized performance \cite{mackenzie2012acceleration,chang2018fuel}, vehicle right-sizing \cite{davistransportation}, improved crash avoidance \cite{mackenzie2014determinants}, and travel demand effects. Eco-driving and platooning could each offer a substantial energy efficiency improvement, in the range of 5\% to 20\% \cite{Wadud2016}. However, a standard to evaluate the fuel economy and emissions of CAV does not exist yet. The current fixed drive cycle method is not suitable for the evaluation of CAV.
In order to fully release the benefits of the energy-saving and emission-reduction technologies and partial-automation technologies for CAV, policymakers need to consider giving fuel economy or Greenhouse Gas (GHG) emission credits for the implementation of CAV technologies. However, the current Corporate Average Fuel Economy (CAFE)/GHG test driving cycles \cite{national2002effectiveness} cannot capture the benefits of CAV in scenarios where they interact with other vehicles and infrastructures. Fuel economy and emission testing normally uses a vehicle on a chassis dynamometer, while a trained driver or a robot follows a fixed drive cycle. The current standardized fuel economy testing system neglects differences in how individual vehicles drive on the road \cite{Atabani2011ASector}, so that some energy-saving automated and connected vehicle control algorithms cannot be effectively reflected during the current drive cycle testing. The Environmental Protection Agency (EPA) has used “off-cycle technology credits” for CAFE standards to address similar issues with other emerging technologies. However, the evaluation process is not standardized, and different technologies cannot be tested equivalently. In addition, the credits are only applicable to new and nonstandard technologies. Also, this only affects the CAFE standard, not the fuel economy ratings that inform the consumer or the emission level certification.\cite{register2010light}
While Europe and China have similar evaluation methods for fuel economy, commissions from these regions established the Real Driving Emissions (RDE) regulations, which require that vehicle emissions be tested on the road in addition to the drive cycle testing and be measured with a Portable Emission Measurement System (PEMS) \cite{williams2016technical,he2017china,chang2017effect}. However, the reproducibility of these tests is very difficult to achieve because of the dynamic and environmental boundary conditions such as routes, ambient conditions, and the data analysis methods. The two main data analysis methods currently being tested and regulated are the Moving Averaging Window (MAW) and Power Binning (PB). However, the MAW method sometimes normalizes values, which can influence the analysis, and both MAW and PB lack the capability of analyzing hybrid electric vehicles and electric vehicles \cite{varella2017comparison}.
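As an illustration of the MAW idea, the following Python sketch computes window-averaged CO2 emissions from 1 Hz signals; the reference CO2 mass that defines the window length is an input of the regulation (half the type-approval cycle CO2) and is treated here as a free parameter.
\begin{verbatim}
import numpy as np

def moving_average_windows(co2_g_per_s, speed_m_s, ref_co2_g):
    """Window-averaged CO2 [g/km]; each window holds ref_co2_g grams of CO2."""
    cum = np.concatenate(([0.0], np.cumsum(co2_g_per_s)))   # cumulative CO2 [g]
    dist = np.concatenate(([0.0], np.cumsum(speed_m_s)))    # cumulative distance [m]
    windows = []
    for start in range(len(co2_g_per_s)):
        # first index where the window holds ref_co2_g grams of CO2
        end = np.searchsorted(cum, cum[start] + ref_co2_g)
        if end >= len(cum):
            break
        d = dist[end] - dist[start]
        if d > 0:
            windows.append(ref_co2_g / (d / 1000.0))        # g/km per window
    return np.array(windows)
\end{verbatim}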
In order to evaluate the energy efficiency and emission of new vehicle models with CAV features exhaustively, a new energy efficiency and emission testing method needs to be developed. Research related to fuel economy and emission testing standards for CAV is rare. Mersky \cite{Mersky2016FuelVehicles} proposed a method to measure the fuel economy of the targeted vehicle by having it follow a lead vehicle driving under EPA City and Highway fuel economy testing. This method cannot include information from the transportation system, such as the other vehicles around the evaluated vehicle and the infrastructure.
This paper proposes a statistical method for the energy efficiency and emission standard testing of CAV that evaluates vehicles based on a database of naturalistic driving data instead of drive cycles. This method can evaluate CAV, conventional vehicles, and other off-cycle credits, which enables a fair comparison of different types of vehicle technologies and models. Also,
this evaluation method is flexible and can be updated with changes in infrastructure, policy (speed limits), and the development of vehicle technologies.
The idea of this method is as follows:
1. Use the data of naturalistic driving to get the typical driving primitives by using unsupervised learning methods, including the Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM) and K-means clustering. A driving primitive is defined as the combination of the speed and acceleration over a time interval. The durations of the driving primitives usually vary.
2. Calculate the fraction of each cluster of the driving primitives and rank them.
3. Apply the HDP-HSMM method to the real driving data of the vehicle which is under evaluation and get the driving primitives of the evaluated vehicle.
4. Find the best-matching driving primitive of the evaluated vehicle for each frequent driving primitive cluster and complete the coupling process.
5. Calculate the average value of the energy consumption and emission over the period of each driving primitive based on the real-time measurement of energy consumption and the emission of the evaluated vehicle, and use these values and the corresponding fraction of the driving primitive clusters to get the energy efficiency and emission evaluation results.
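A high-level sketch of steps 1-5 is given below, assuming each driving primitive has already been segmented (see Sect. 2.2) and summarized by a feature vector such as (mean speed, mean acceleration, duration); the feature choice, the number of clusters, and the nearest-centroid matching rule are illustrative assumptions, not the final design.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def evaluate_vehicle(fleet_primitives, test_primitives, test_energy, n_clusters=20):
    # Steps 1-2: cluster the fleet primitives and compute cluster fractions.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(fleet_primitives)
    fractions = np.bincount(labels, minlength=n_clusters) / len(labels)

    # Steps 3-4: match each cluster to the closest primitive of the
    # evaluated vehicle (nearest centroid in feature space).
    result = 0.0
    for k in np.argsort(fractions)[::-1]:
        dists = np.linalg.norm(test_primitives - km.cluster_centers_[k], axis=1)
        j = int(np.argmin(dists))
        # Step 5: weight the measured energy/emission rate of the matched
        # primitive by the fraction of its cluster.
        result += fractions[k] * test_energy[j]
    return result
\end{verbatim}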
The major contributions of this paper are:
\begin{itemize}
\item Propose a new method for the energy efficiency and emission testing of CAV and the off-cycle credit rating.
\item Propose a new method to segment the driving conditions of velocity and acceleration of the real driving datasets effectively and efficiently.
\item Find out the frequent clusters of driving primitives and their fractions, which represent the typical driving conditions well.
\item Propose the effective method for the coupling of the clusters of driving primitive based on large naturalistic driving datasets with the driving primitives of the evaluated vehicle, which secures the repeatability and effectiveness of the evaluation process.
\end{itemize}
\section{Methodology}
\subsection{Data Description}
For a relatively long time, CAV and conventional vehicles will coexist on the roads. When the penetration rate of CAV is low, CAV need to perform with patterns similar to those of conventional vehicles. In order to obtain typical driving primitives that apply to both CAV and conventional vehicles, this evaluation method uses naturalistic driving data, which record everyday driving behavior through unobtrusive data gathering equipment and without experimental control.
Driving data used in this paper are from the Safety Pilot Model Deployment (SPMD) database. The SPMD was conducted in Ann Arbor, Michigan, starting in August 2012. The deployment covered over 73 lane-miles and included approximately 3,000 vehicles equipped with vehicle-to-vehicle (V2V) communication devices and data acquisition systems. The entries in this dataset include basic safety messages (BSM) such as the vehicle's position, motion, safety information, and the status of the vehicle's components. The data used in this paper are from each vehicle's Controller Area Network (CAN) bus, recorded at 10 Hz.
Currently, two months of SPMD data (October 2012 and April 2013) are publicly available via the Department of Transportation's official website \cite{DOT}. We use this public sub-dataset of SPMD. The query standard for this dataset is as follows (a sketch of the query follows the list):
\begin{itemize}
\item The vehicle is a light-duty passenger car (data from buses are eliminated).
\item The vehicle has a total driving duration, summed over trips, of more than one hour.
\item The flag for a valid CAN signal is 1 (true).
\end{itemize}
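A minimal sketch of this query in Python, assuming the queried CAN records have been exported to a flat trip table; the column names (\texttt{device\_id}, \texttt{duration\_s}, \texttt{vehicle\_type}, \texttt{can\_valid\_flag}) are illustrative and not the actual SPMD schema.
\begin{verbatim}
import pandas as pd

# Hypothetical flat export of the queried trips; column names
# are illustrative, not the actual SPMD schema.
trips = pd.read_csv("spmd_trips.csv")

# Keep light-duty passenger cars with a valid CAN flag.
trips = trips[(trips["vehicle_type"] == "light_duty")
              & (trips["can_valid_flag"] == 1)]

# Keep devices whose total driving time across trips exceeds one hour.
total_min = trips.groupby("device_id")["duration_s"].sum() / 60.0
keep = total_min[total_min > 60.0].index
trips = trips[trips["device_id"].isin(keep)]
\end{verbatim}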
The key parameters of the devices and trips after the query are summarized in Table \ref{Table:Summary of Dataset}.
\begin{table}[t]
\caption{Key parameters of the devices and trips from the queried dataset}
\label{Table:Summary of Dataset}
\centering
\begin{tabular}{c|c}
\hline\hline
Variable Name & Value\\
\hline
Vehicle Amount & 59 \\
Total Trip Amount & 4577 \\
Longest Trip Duration (min) & 197.6 \\
Average of the Longest Trip Duration for Each Vehicle (min) & 49.9 \\
Total Driving Time (min) & 49697.3 \\
Max of Total Driving Time for Each Vehicle (min) & 2046.0 \\
Min of Total Driving Time for Each Vehicle (min) & 133.3 \\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Analysis of the driving primitives of each vehicle}
The essential idea of drive cycle development by federal agencies is to have a standardized measurement stick for emissions and fuel economy that serves as a proxy for typical driving and enables comparisons across vehicles. Current drive cycle development is based on the frequency of speed and acceleration bins with constant intervals \cite{nam2009drive}. However, constant intervals might neglect important driving patterns inside the bins. In order to find hidden patterns or groupings in driving data without such restrictions, unsupervised learning is used in this paper to infer the typical driving primitives from datasets without driving pattern labels. Unsupervised learning methods are widely used in the transportation field and have shown strong performance \cite{karlaftis2011statistical,wang2018extracting}. Among common clustering algorithms in unsupervised learning, Hidden Markov Models (HMM) can use observed data to recover the sequence of hidden states, which suits driving scenarios such as speed changes over driving primitives of variable duration. The HDP-HMM is a Bayesian nonparametric extension of the HMM for learning from sequential and time-series data \cite{fox2008hdp}. However, the HDP-HMM's strict Markovian constraints are undesirable for our application, because they impose geometrically distributed state durations; the hidden semi-Markov extension (HDP-HSMM) models state durations explicitly, and the weak limit sampling algorithm can be used for efficient posterior inference \cite{johnson2013hdphsmm}. Since we want to identify the typical driving primitives of each vehicle without restrictions on the duration of each primitive or on the total number of primitives, the HDP-HSMM with the weak-limit sampler is used here.
Figure \ref{Fig:HDP-HSMM} shows the graphical model for the HDP-HSMM in which the number of nodes is random.
The HDP-HSMM$(\gamma, \alpha, H, G)$ can be described as follows \cite{johnson2013hdphsmm}:
\begin{subequations}
\label{eqn:HDP-HSMM}
\begin{eqnarray}
\beta \sim GEM(\gamma)\\
\pi_i \overset{iid}{\sim} DP(\alpha, \beta) \quad (\theta_{i}, \omega_{i})\overset{iid}{\sim}H \times G \quad i = 1,2,\cdots,\\
z_{s}\sim \bar{\pi}_{z_{s-1}},\\
D_s\sim g(\omega_{z_s}),\quad s=1,2,\cdots,\\
x_{t_{s}^{1}:t_{s}^{2}} = z_s\\
y_{t_{s}^{1}:t_{s}^{2}} \overset{iid}{\sim} f(\theta_{x_t}) \quad t_{s}^{1} = \sum_{\bar{s}<s}D_{\bar{s}} \quad
t_{s}^{2} = t_{s}^{1}+D_{s}-1,
\end{eqnarray}
\end{subequations}
where $\bar{\pi}_i:=\dfrac{\pi_{ij}}{1-\pi_{ii}}(1-\delta_{ij})$ is used to eliminate self-transitions in the super-state sequence $(z_s)$.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig/HDP_HSMM.PNG}
\caption{Graphical model for the HDP-HSMM, in which the number of nodes is random \cite{johnson2013hdphsmm}}
\label{Fig:HDP-HSMM}
\end{figure}
The algorithm used in this paper to obtain the driving primitives of each vehicle is as follows (a code sketch follows the list):
\begin{itemize}
\item Normalize the speed and acceleration data from the CAN bus to follow a standard Gaussian distribution.
\item Apply the weak-limit sticky HDP-HSMM to all the normalized data of each vehicle to find the driving primitives of that vehicle.
\item For each primitive, collect all the original speed and acceleration data points that belong to it and calculate the mean, variance, and covariance of the original physical data.
\end{itemize}
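This segmentation step can be sketched with the publicly available \texttt{pyhsmm} package that accompanies \cite{johnson2013hdphsmm}; the hyperparameter values below are illustrative, and the exact class and argument names may differ between package versions.
\begin{verbatim}
import numpy as np
import pyhsmm

def extract_primitives(speed, accel, n_max=50, n_iter=150):
    """Segment one vehicle's speed/accel traces into primitives."""
    data = np.column_stack([speed, accel])
    norm = (data - data.mean(axis=0)) / data.std(axis=0)

    obs_hyp = dict(mu_0=np.zeros(2), sigma_0=np.eye(2),
                   kappa_0=0.25, nu_0=4)
    dur_hyp = dict(alpha_0=2 * 30, beta_0=2)  # prior mean ~30 samples (3 s)

    model = pyhsmm.models.WeakLimitHDPHSMM(
        alpha=6.0, gamma=6.0, init_state_concentration=6.0,
        obs_distns=[pyhsmm.distributions.Gaussian(**obs_hyp)
                    for _ in range(n_max)],
        dur_distns=[pyhsmm.distributions.PoissonDuration(**dur_hyp)
                    for _ in range(n_max)])
    model.add_data(norm)
    for _ in range(n_iter):          # weak-limit Gibbs sampling
        model.resample_model()
    labels = model.stateseqs[0]

    # Per-primitive statistics on the original physical data.
    stats = {k: (data[labels == k].mean(axis=0),
                 np.cov(data[labels == k].T))
             for k in np.unique(labels)}
    return labels, stats
\end{verbatim}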
\subsection{Analysis of the clusters of typical driving primitives}
After the driving primitives of each vehicle are obtained, the $k$-means clustering method is used to partition the driving primitives from all vehicles into general typical driving primitives, regardless of the vehicle or the driver.
$K$-means clustering aims to partition a set of observations $(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)$ into $k$ $(\leq n)$ sets $\mathbf{S} = \{S_1, S_2, \ldots, S_k\}$ so as to minimize the within-cluster sum of squares \cite{hartigan1979algorithm}:
\begin{equation}
\operatorname*{arg\,min}_{\mathbf{S}}
\sum_{i=1}^{k}\sum_{\mathbf{x} \in S_{i}} \|\mathbf{x}- \mathbf{\mu}_{i}\|^2
\label{eqn:kmeans_objective_function}
\end{equation}
where $\mathbf{\mu}_{i}$ is the mean of the points in $S_{i}$.
Constrained $k$-means clustering is a useful way to exploit a priori knowledge about which instances should or should not be grouped together \cite{wagstaff2001constrained}. Two types of pairwise constraints are considered in constrained $k$-means clustering: must-link and cannot-link. For the application in this paper, we use the cannot-link constraint to avoid situations where driving primitives from the same vehicle are put in the same cluster.
The algorithm used in this paper to obtain the driving primitive clusters is as follows (a sketch of the constrained clustering step follows the list):
\begin{itemize}
\item Use constrained $k$-means clustering to group the driving primitives from all devices into typical driving primitive clusters.
\item Calculate the total number of data points from each cluster and the fraction of the data points from each cluster, and rank them.
\item Calculate the mean, variance and covariance values of the data points from each driving primitives cluster.
\end{itemize}
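A minimal sketch of $k$-means with cannot-link constraints in the spirit of COP-$k$-means \cite{wagstaff2001constrained}; each row of \texttt{X} is a primitive described by its feature vector, and \texttt{cannot\_link} pairs primitives from the same vehicle. This is an illustrative implementation under those assumptions, not the exact code used for the results.
\begin{verbatim}
import numpy as np

def cop_kmeans(X, k, cannot_link, n_iter=100, seed=0):
    """cannot_link: index pairs that must not share a cluster."""
    rng = np.random.default_rng(seed)
    partners = {i: set() for i in range(len(X))}
    for a, b in cannot_link:
        partners[a].add(b)
        partners[b].add(a)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = -np.ones(len(X), dtype=int)
    for _ in range(n_iter):
        new = -np.ones(len(X), dtype=int)
        for i in range(len(X)):
            # nearest feasible cluster: skip any cluster that already
            # holds a cannot-link partner of point i in this pass
            for c in np.argsort(np.linalg.norm(X[i] - centers, axis=1)):
                if all(new[j] != c for j in partners[i]):
                    new[i] = c
                    break
            if new[i] < 0:
                raise RuntimeError("no feasible cluster for point %d" % i)
        if np.array_equal(new, labels):
            break
        labels = new
        centers = np.array([X[labels == c].mean(axis=0)
                            if (labels == c).any() else centers[c]
                            for c in range(k)])
    return labels, centers
\end{verbatim}
Following Wagstaff et al., the assignment fails when no feasible cluster exists; with 200 clusters and at most 155 primitives per vehicle, a feasible assignment always exists here.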
\subsection{Coupling of driving primitives of the evaluated vehicle and clusters of driving primitives}
The main idea of this part is to apply the same HDP-HSMM algorithm described in Section B to the real driving data of the evaluated vehicle and to compare the driving primitives of the evaluated vehicle with the typical driving primitive clusters from the naturalistic driving dataset. For each driving primitive cluster derived from the constrained $k$-means clustering of the large naturalistic driving dataset, the driving primitive of the evaluated vehicle with the minimum Kullback-Leibler divergence from that cluster is identified as its couple.
The Kullback-Leibler divergence is commonly used to describe how one probability distribution differs from a second, reference probability distribution \cite{duchi2007derivations}. The Kullback-Leibler divergence between two multivariate normal distributions can be expressed as follows \cite{duchi2007derivations}:
\begin{equation}
\begin{split}
D_{KL}(\mathcal{N}_{0}\parallel\mathcal{N}_{1})
= \frac{1}{2}\Big(\operatorname{tr}( \Sigma_{1}^{-1}\Sigma_{0})+(\mu_{1}-\mu_{0})^{T}\Sigma_{1}^{-1}(\mu_{1}-\mu_{0})\\
-k+\ln\Big(\frac{\det\Sigma_{1}}{\det\Sigma_{0}}\Big)\Big)
\end{split}
\label{equn:KL divergence}
\end{equation}
where $\mathcal{N}_{0}$ and $\mathcal{N}_{1}$ are two $k$-dimensional multivariate normal distributions with means $\mu_{0}$ and $\mu_{1}$ and (nonsingular) covariance matrices $\Sigma_{0}$ and $\Sigma_{1}$.
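The coupling step then amounts to evaluating Eq.~\ref{equn:KL divergence} between every (cluster, primitive) pair of Gaussians and taking the minimum. A minimal sketch follows; the direction of the divergence (primitive as $\mathcal{N}_0$, cluster as $\mathcal{N}_1$) is an assumption, since either ordering is compatible with the description above.
\begin{verbatim}
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL(N0 || N1) between k-dimensional Gaussians (Eq. above)."""
    k = len(mu0)
    S1inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + d @ S1inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def couple(clusters, primitives):
    """Map each cluster id to the id of its closest vehicle primitive.

    Both arguments map ids to (mean, covariance) tuples."""
    return {c: min(primitives,
                   key=lambda p: kl_gauss(*primitives[p], *clusters[c]))
            for c in clusters}
\end{verbatim}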
\subsection{Calculation of the evaluation result of energy efficiency and emission}
During the data collection process for the energy efficiency evaluation of the evaluated vehicle, the fuel meter or other relevant sensors measuring the energy consumption of the powertrain system need to be running to collect the essential data. This requirement is equivalent to that of normal fuel economy drive cycle testing, except that the vehicle can be running under real driving conditions. Similarly, during the data collection process for the emission evaluation, PEMS is needed, which is equivalent to the requirement of current RDE testing. Taking a vehicle with a conventional engine as an example, the average fuel consumption rate (gallon/mile) and emission level (g/mile) over any duration compatible with the sensors' response frequency can be calculated. The evaluation result for the fuel economy or emissions of the evaluated vehicle can then be calculated as follows:
\begin{equation}
E = \sum_{i=1}^n (\omega_i \cdot E_i)
\label{eqn:E}
\end{equation}
where $E$ stands for the evaluation result of the fuel consumption rate (gallon/mile) or emission level (g/mile), and $\omega_i$ is the fraction of data points from cluster $i$.
$E_i$ is the average fuel consumption rate or emission level of the data points from the driving primitive of the evaluated vehicle that is coupled with cluster $i$. Miles per gallon (MPG) can be calculated as $1/E$ when $E$ stands for the fuel consumption rate (gallon/mile).
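Continuing the sketch above, Eq.~\ref{eqn:E} is then a weighted sum over the coupled primitives (variable names are hypothetical):
\begin{verbatim}
# w[c]: fraction of data points in cluster c (omega_i in Eq. (E))
# meas[p]: measured gallon/mile (or g/mile) of vehicle primitive p
couples = couple(clusters, primitives)
E = sum(w[c] * meas[couples[c]] for c in couples)
mpg = 1.0 / E   # when E is the fuel consumption rate in gallon/mile
\end{verbatim}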
\section{Results and Discussion}
\subsection{Driving primitives of each vehicle}
After applying the unsupervised learning method HDP-HSMM to the velocity and acceleration of the data points sampled at 10 Hz from the SPMD database, the driving primitives of each vehicle are obtained.
Figure \ref{fig:hsmm_r1} shows an example of the HDP-HSMM driving primitive analysis for 25 seconds of continuous real driving velocity and acceleration data from one vehicle. The colors in this figure indicate the labels of the driving primitives, and segments with the same color belong to the same driving primitive. It can be seen that the duration of each driving primitive varies significantly.
After the driving primitive analysis for each device is done, we find that the number of driving primitives per vehicle ranges from 66 to 155, while the maximum total driving duration per vehicle is 14.4 times the minimum, as shown in Table \ref{Table:Summary of Dataset}. This shows that the driving primitive method is relatively robust to the amount of data available from each vehicle. The average number of driving primitives across vehicles is 121. After ranking the driving primitives of each vehicle by the fraction of data points they contain, we find that, on average, the driving primitives ranked in the top 38\% cover 68\% of the total data points. As the fraction of data points belonging to the driving primitives ranked in the last 5\% is very small, these primitives can be regarded as very rare events; for this reason, they were eliminated before identifying the clusters of typical driving primitives.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth ]{Fig/hsmm_r3.png}
\caption{An example of the result of driving primitive analysis using the HDP-HSMM method}
\label{fig:hsmm_r1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig/Mean_v_a_cluster.png}
\caption{Mean of top 20 clusters of driving primitives (a) Velocity, (b) Acceleration }
\label{Fig:Mean_v_a_cluster}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig/Var_v_a_cluster.png}
\caption{Variance of top 20 clusters of driving primitives (a) Velocity, (b) Acceleration }
\label{Fig:Variance_v_a_cluster}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig/Fraction_Ranking.png}
\caption{Fraction of data points in specific cluster of driving primitives vs rank of corresponding cluster}
\label{Fig:Rank_Fraction_cluster}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig/Couple9.png}
\caption{Coupling of driving primitives of the evaluated vehicle with clusters of driving primitives}
\label{Fig:Coupling}
\end{figure}
\subsection{Clusters of driving primitives of different vehicles }
Driving primitives differ from vehicle to vehicle but show clear similarities, especially among primitives ranked at similar fractions across vehicles. In order to put similar driving primitives into one cluster, the constrained $k$-means clustering algorithm
is used to obtain typical driving primitives that represent different vehicles under different driving conditions. Data from 58 of the 59 vehicles are used to train the constrained $k$-means clustering model; the remaining vehicle serves as the evaluated vehicle in this paper.
As the maximum number of driving primitives per vehicle is 155 and the average is 121, 200 is chosen as the cluster number for the preliminary results of this method. After the constrained $k$-means clustering and the ranking of the clusters are finished, the mean and variance of the velocity and acceleration of each cluster ranked in the top 20 are shown in Figure \ref{Fig:Mean_v_a_cluster} and Figure \ref{Fig:Variance_v_a_cluster}. It can be clearly seen that the top-ranked cluster of driving primitives is the idling state (zero velocity and zero acceleration). The velocity of the 2nd through 20th ranked clusters is larger than 14 $m$/$s$ (50 $km$/$h$). The mean acceleration of the 2nd through 20th ranked clusters is smaller than 0.09 $m$/$s^2$, which indicates that the most frequent clusters of driving primitives have relatively moderate acceleration. From Figure \ref{Fig:Variance_v_a_cluster}, it can be seen that the variance of velocity is much smaller than its mean value, while the variance of acceleration has a magnitude similar to its mean value, indicating that velocity plays the major role in differentiating the clusters of driving primitives.
The fraction of data points in each cluster of driving primitives can also be calculated and ranked after the clustering process. Figure \ref{Fig:Rank_Fraction_cluster} shows the fraction of data points in each cluster, which corresponds to $\omega_i$ in Equation \ref{eqn:E}. It can be clearly seen that the fraction of the top-ranked cluster, the idling-state cluster of driving primitives, is over 5 times larger than that of any other cluster.
\subsection{Coupling of the driving primitives of the evaluated vehicle and the clusters of the driving primitives}
For each driving primitive cluster identified as described above, the driving primitive of the evaluated vehicle with the minimum Kullback-Leibler divergence from that cluster is identified as its couple. After this coupling process, each cluster is coupled with the driving primitive of the evaluated vehicle that is most similar to it. Figure \ref{Fig:Coupling} shows the mean velocity of the driving primitives of the evaluated vehicle and of the clusters of driving primitives. The coupling ID here follows the same sequence as the (ranked) cluster ID. It can be seen that the couples coincide very well, especially for coupling IDs smaller than 150. Even though some couples show larger differences, they will not significantly affect the energy consumption or emission evaluation results, because the fraction values of those clusters are minimal, as shown in Figure \ref{Fig:Rank_Fraction_cluster}. These results indicate the effectiveness of this method in segmenting real driving data into driving primitives and clusters and in identifying the best-matching driving primitive from the real driving data for each cluster.
After this coupling process, the energy consumption or emission evaluation can be carried out through Equation \ref{eqn:E} for an evaluated vehicle instrumented with the corresponding devices, such as a fuel meter or PEMS.
\section{Conclusion}
This paper proposes a new method for energy efficiency and emission testing of vehicles that uses real driving data instead of drive cycle data and is suitable for CAV as well as other vehicle types that can apply for off-cycle credits. The method is especially suitable for testing CAV whose powertrain control benefits appear under real driving conditions rather than under current drive cycle conditions. The unsupervised learning method HDP-HSMM effectively identifies the driving primitives of velocity and acceleration for each vehicle in the SPMD database. Clusters of typical driving primitives across vehicles are identified by applying constrained $k$-means clustering to the driving primitives of each vehicle. The coupling process between the driving primitives of the evaluated vehicle and the clusters of driving primitives works well, so that each typical driving condition from the large naturalistic dataset is matched with a similar driving condition from the real driving data of the evaluated vehicle. After this process, the energy efficiency and emission evaluation results for the evaluated vehicle can be obtained through the linear weighted estimation method proposed in this paper.
This paper primarily introduces this new method to evaluate the energy efficiency and emissions of CAV. Currently, velocity and acceleration are used as the inputs of the driving primitives. Other factors that also affect the energy efficiency and emissions of the powertrain system, such as road grade and weather conditions, could be investigated for inclusion in the driving primitives. Research on the optimal duration of real driving data for the evaluated vehicle would also help accelerate the application of this method. Future studies can also apply this driving primitive identification method to guide other types of vehicle testing.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Clusters of galaxies are the most massive collapsed objects in the
Universe and sit at the top of the hierarchy of non--linear
structures. They were first identified as over--dense regions in the
projected number counts of galaxies (e.g. \citealt{Abell58},
\citealt{ZW68.1}). However nowadays clusters can be identified over
the whole electro-magnetic range, including as X-ray sources
(e.g. \citealt{boehringer00}, \citealt{pacaud07},
\citealt{vikhlinin09b}, \citealt{suhada12}), as optical
overdensities of red galaxies (\citealt{Gladders05},
\citealt{Koester07}, \citealt{Hao10}, \citealt{Szabo11}) and as
distortions of the cosmic microwave background as predicted by
\citet{Sunyaev72} (e.g. \citealt{vanderlinde10}, \citealt{Marriage11},
\citealt{Planck11}).
Given the continuous improvement in both spatial and spectral
resolution power of modern X--ray, optical and infrared telescopes,
more and more details on the inner properties of galaxy clusters have
been unveiled in the last decade. These objects, that in a first
approximation were thought to be virialized and spherically symmetric,
have very complex dynamical features -- such as strong asymmetries and
clumpiness (e.g. \citealt{geller82}, \citealt{dressler88},
\citealt{mohr95}) -- witnessing for violent processes being acting or
having just played a role. They exhibit luminosity and temperature
functions which are not trivially related to their mass function, as
one would expect for virialized gravitation--driven objects. Moreover,
the radial structure of baryons' properties is far from being
completely understood: a number of observational facts pose a real
challenge to our ability in modeling the physics of the intracluster
medium and the closely related physics of the galaxy
population. Indeed a number of different physical processes are acting
together during the formation and evolution of galaxy clusters. Gas
cooling, star formation, chemical enrichment, feedback from supernova
explosions and from active galactic nuclei, etc., are physical processes
at the base of galaxy formation, which are difficult to disentangle
(e.g. see \citealt{benson10} for a recent review on galaxy
formation models).
Line-of-sight galaxy velocities in principle provide a measure of the
depth of the gravitational potential well and therefore can be used to
estimate cluster masses. Furthermore, galaxy dynamics are expected to
be less affected by the complex baryonic physics affecting the intra
cluster medium. Thus, one would naively expect a mass function defined on the
basis of velocity dispersion to be a good proxy of the underlying
cluster mass. However a number of possible systematics can affect
dynamical mass estimation and must be carefully take into
account. \citet{biviano06} for example studied a sample of 62 clusters
at redshift $z = 0$ from a $\Lambda CDM$ cosmological hydrodynamical
simulation. They estimated virial masses from both dark matter (DM)
particles and simulated galaxies in two independent ways: a virial
mass estimator corrected for the surface pressure term, and a mass
estimator based entirely on the velocity dispersion $\sigma_v$. They
also modeled interlopers by selecting galaxies within cylinders of
different radius and length $192 \mbox{$h^{-1}{\rm{Mpc}}~$}$ and applying interloper
removal techniques. They found that the mass estimator based entirely
on velocity dispersions is less sensitive to the radially dependent
incompleteness. Furthermore the effect of interlopers is smaller if
early type galaxies, defined in the simulations using their mean
redshift of formation, are selected. However, the velocity dispersion
of early type galaxies is biased low with respect to DM particles.
\citet{evrard08} analysed a set of different simulations with
different cosmologies, physics and resolutions and found that the 3D
velocity dispersion of DM particles within the virial radius can be
expressed as a tight function of the halo virial
mass\footnote{Throughout the text, we will refer to \mbox{$M_{\rm{vir}}$ } as the mass
contained within a radius \mbox{$R_{\rm{vir}}$} encompassing a mean density equal to
200 $\rho_{c}$, where $\rho_{c}$ is the critical cosmic density.},
regardless of the simulation details. They also found the scatter
about the mean relation is nearly log-normal with a low standard
deviation $\sigma_{ln \sigma} \simeq 0.04$. In a more recent work,
\citet{white10} used high resolution N-body simulations to study how
the influence of large scale structure could affect different physical
probes, including the velocity dispersion based upon sub-halo
dynamics. They found that the highly anisotropic nature of infall
material into clusters of galaxies and their intrinsic triaxiality is
responsible for the large variance of the 1D velocity dispersion under
different lines of sight. They also studied how different interloper
removal techniques affect the velocity dispersion and the stability of
velocity dispersion as a function of the number of sub-halos used to
estimate it. They found that only when using small numbers of
sub-halos ($\lesssim 30$) is the line of sight velocity dispersion
biased low and the scatter significantly increases with respect to the
DM velocity dispersion. Furthermore the effect of interlopers is
different for different interloper rejection techniques and can
significantly increase the scatter and bias low velocity dispersion
estimates.
Currently IR, SZE and X-ray cluster surveys are delivering significant
numbers of clusters at redshifts $z>1$ (e.g. \citealt{stanford05},
\citealt{staniszewski09}, \citealt{fassbender11},
\citealt{williamson11}, \citealt{reichardt12}). Mass calibration of
these cluster samples is challenging using weak lensing, making
velocity dispersion mass estimates particularly valuable. At these
redshifts it is also prohibitively expensive to obtain spectroscopy of
large samples of cluster galaxies, and therefore dispersion
measurements must rely on small samples of 20 to 30 cluster members.
This makes it critically important to understand how one can best use
the dynamical information of a few dozen of the most luminous cluster
galaxies to constrain the cluster mass. It is clear that with such a
small sample one cannot obtain precise mass estimates of individual
clusters. However, for mass calibration of a cluster SZE survey, for
example, an {\it unbiased} mass estimator with a large statistical
uncertainty is still valuable.
In this work we focus on the characterisation of dynamical mass of
clusters with particular emphasis on high-z clusters with a small number of measured galaxy velocities. The plan of the paper is as follows. In Sec. \ref{sec:sims} we briefly
introduce the simulation and describe the adopted semi-analytic model, and
in Sec. \ref{sec:Res} we present the results of our analysis. Finally,
in Sec. \ref{sec:Concl}, we summarise our findings and give our
conclusions.
\begin{table}
\centering
\caption{The redshift--number distribution of the 22,484 clusters
with \mbox{$M_{\rm{vir}}$ }$>10^{14} \,{\rm M}_\odot$
analysed in this work at different redshifts. Column 1: redshift
$z$; Column 2: number of clusters $N_{clus}$.}
\begin{tabular}{ll}
$z$ & $N_{clus}$ \\
\hline
0.00 & 3133\\
0.09 & 2953\\
0.21 & 2678\\
0.32 & 2408\\
0.41 & 2180\\
0.51 & 1912\\
0.62 & 1635\\
0.75 & 1292\\
0.83 & 1152\\
0.91 & 1020\\
0.99 & 867\\
1.08 & 702\\
1.17 & 552\\
\end{tabular}
\label{t:clus}
\end{table}
\begin{figure*}
\centerline{ \hbox{ \psfig{file=A.ps,width=9.0cm}
\psfig{file=B.ps,width=9.0cm}
}}
\caption{The evolution of the normalization $A$ (left panel) and
slope $B$ (right panel) parameters used to fit the relation
between the 3D velocity dispersion of all the galaxies within
\mbox{$R_{\rm{vir}}$} and the virial mass of each cluster (eq. \ref{eq:fit}). Red
horizontal dashed lines represent the mean value. The slope is
moderately shallower than the self-similar expectation.
}
\label{fi:EV}
\end{figure*}
\section{Input Simulation}
\label{sec:sims}
This analysis is based on the publicly available galaxy catalogue
produced using the semi-analytic model (SAM) by \citet{delucia07} on
the Millennium Simulation (\citealt{Springel_etal_2005}). The
Millennium Simulation adopts the following values for the parameters
of a flat $\Lambda $ cold dark matter model: $\Omega_{DM} = 0.205$ and
$\Omega_b = 0.045$ for the densities in cold dark matter and baryons
at redshift $z = 0$, $\sigma_8 = 0.9$ for the rms linear mass
fluctuation in a sphere of radius $8 \mbox{$h^{-1}{\rm{Mpc}}~$}$, $h = 0.73$ for the
present dimensionless value of the Hubble constant and $n = 1$ for the
spectral index of the primordial fluctuation. The simulation follows
the evolution of $2160^3$ dark matter particles from $z = 127$ to the
present day within a cubic box of $500 \mbox{$h^{-1}{\rm{Mpc}}~$}$ on a side. The
individual dark matter particle mass is $8.6 \times 10^8 h^{-1}
\,{\rm M}_\odot$. The simulation was carried out with the massively parallel
GADGET-2 code (\citealt{Springel05}). Gravitational forces were
computed with the TreePM method, where long-range forces are
calculated with a classical particle-mesh method while short-range
forces are determined with a hierarchical tree approach
(\citealt{barnes86}). The gravitational force has a Plummer-equivalent
comoving softening of $5 \,h^{-1}{\rm kpc}$, which can be taken as the spatial
resolution of the simulation. Full data are stored 64 times spaced
approximately equally in the logarithm of the expansion factor. Dark
matter halos and subhalos were identified with the friends-of-friends
(FOF; \citealt{davis85}) and SUBFIND (\citealt{springel01})
algorithms, respectively. Based on the halos and subhalos within all
the simulation outputs, detailed merger history trees were
constructed, which form the basic input required by subsequently
applied semi-analytic models of galaxy formation.
We recall that the SAM we employ builds upon the methodology
originally introduced by \citet{kauffmann99}, \citet{springel01a} and
\citet{delucia04a}. We refer to the original papers for details.
The SAM adopted in this study includes explicitly DM
substructures. This means that the halos within which galaxies form
are still followed even when accreted onto larger systems. As
explained in \citet{springel01a} and \citet{delucia04a}, the
adoption of this particular scheme leads to the definition of
different galaxy types. Each FOF group hosts a Central galaxy. This
galaxy is located at the position of the most bound particle of the
main halo, and it is the only galaxy fed by radiative cooling from the
surrounding hot halo medium. Besides central galaxies, all galaxies
attached to DM substructures are considered as satellite
galaxies. These galaxies were previously central galaxies of a halo
that merged to form the larger system in which they currently
reside. The positions and velocities of these galaxies are followed by
tracing the surviving core of the parent halo. The hot reservoir
originally associated with the galaxy is assumed to be kinematically
stripped at the time of accretion and is added to the hot component of
the new main halo. Tidal truncation and stripping rapidly reduce the
mass of DM substructures (but not the associated stellar mass) below
the resolution limit of the simulation (\citealt{delucia04b};
\citealt{gao04}). When this happens, we estimate a residual surviving time
for the satellite galaxies using the classical dynamical friction
formula, and we follow the positions and velocities of the galaxies by
tracing the most bound particles of the destroyed substructures.
\section{Properties of the Full Galaxy Population}
\label{sec:Res}
\subsection{Intrinsic galaxy velocity dispersion}
\label{sec:intrinsic}
\citet{evrard08} showed that massive dark matter halos adhere to a
virial scaling relation when one expresses the velocity dispersion of
the DM particles as a function of the virial mass of the halo in the
form: \begin{equation} \sigma_{DM}(M_{vir},z) = \sigma_{DM,15}\left(
\frac{h(z)M_{vir}}{10^{15}\,{\rm M}_\odot}\right)^{\alpha}, \end{equation} where
$\sigma_{DM,15} = 1082.9 \pm 4.0 \,{\rm km\,s^{-1}}$ is the typical 3D velocity
dispersion of the DM particles within \mbox{$R_{\rm{vir}}$}\ for a $10^{15}
h^{-1}\,{\rm M}_\odot$ cluster at $z = 0$ and $\alpha = 0.3361 \pm
0.0026$. Similarly, we first compute for each cluster the 3D velocity
dispersion $\sigma_{3D}$ (divided by $\sqrt 3$) of all the galaxies
within \mbox{$R_{\rm{vir}}$} and then fit the relation between $\sigma_{3D}$ and \mbox{$M_{\rm{vir}}$ }
in the form of $log(\sigma_{3D}) \propto log(h_{70}(z)\mbox{$M_{\rm{vir}}$ } /
10^{15}\,{\rm M}_\odot)$ individually at each of the redshifts listed in
Table~\ref{t:clus}. As a result we can express the dynamical mass
$M_{dyn}$ as: \begin{equation} M_{dyn} = \left( \frac{\sigma_{v}}{A}\right) ^{B}
h_{70}(z)^{-1} 10^{15} \,{\rm M}_\odot,
\label{eq:fit}
\end{equation} where the resulting best fitting values of A and B with their
associated error-bars are shown in Fig. \ref{fi:EV} as a function of
redshift. Dashed horizontal red lines show the average values
which are respectively $\bar A = 938 \pm 3 \,{\rm km\,s^{-1}}$ and $\bar B = 2.91 \pm 0.03$.
After accounting for the differences in the Hubble parameter, our measured normalization
of the galaxy velocity dispersion--mass relation is within $\lesssim$3\%
of \citet{evrard08}. This reflects the differences
between the subhalo and DM particle dynamics. As has been previously
pointed out (e.g. \citealt{gao04}, \citealt{goto05},\citealt{faltenbacher06},
\citealt{evrard08}, \citealt{white10}), the velocity bias between galaxies and DM is expected
to be small $b_v \lesssim 5\%$. But to be absolutely clear, we
adopt our measured galaxy velocity dispersion--mass calibration in the
analyses that follow.
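As a quick illustration, Eq.~\ref{eq:fit} with the mean best-fit values quoted above can be evaluated as follows (a sketch; here $h_{70}(z)\equiv H(z)/(70\,{\rm km\,s^{-1}}\,{\rm Mpc}^{-1})$):
\begin{verbatim}
A, B = 938.0, 2.91            # mean best-fit values quoted above

def m_dyn(sigma_v, h70z):
    """Dynamical mass in Msun from a 1D dispersion in km/s."""
    return (sigma_v / A) ** B / h70z * 1e15

# e.g. sigma_v = 1000 km/s at z = 0 (h70 = 73/70 for h = 0.73):
# m_dyn(1000.0, 73/70.) ~ 1.2e15 Msun
\end{verbatim}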
To better visualize the relative importance of the cosmological
redshift dependence we show in Fig.~\ref{fi:EVZ} the redshift
evolution of the normalisation parameter $A$ (solid black line) when
the fit is made on the relation $log(\sigma_{3D}) \propto
log(h_{70}(z=0) \mbox{$M_{\rm{vir}}$ } /10^{15} \,{\rm M}_\odot )$. The expected self-similar
evolution given by $A(z) = \bar A \times E(z)^{1\over3}$ is
highlighted (dashed red line), where the term $\bar A$ is equal to
mean value $\bar A = 938 \,{\rm km\,s^{-1}}$ and $E(z)$ describes the universal
expansion history $H(z) = H_0 E(z)$. In other words, Fig.~\ref{fi:EVZ}
shows the typical galaxy velocity dispersion in $\,{\rm km\,s^{-1}}$ for a cluster
with \mbox{$M_{\rm{vir}}$ } $= 10^{15} \,h_{70}^{-1}{\rm M}_\odot$ as a function of redshift and
demonstrates the nearly self-similar evolution (within $\sim$1\%)
over the redshift range tested in this work.
\begin{figure}
\vspace{-0.3truecm}
\centerline{ \hbox{ \psfig{file=A_Z.ps,width=9.0cm} }}
\vspace{-0.5truecm}
\caption{The evolution of the normalisation parameter $A(z)$ of
Eq. \ref{eq:fit} when no self-similar evolution is taken into account
(solid black line). Red dashed line is showing the best fitting
parameter $\bar A \times \left( H(z)/H_0\right) ^{1\over3}$ }
\label{fi:EVZ}
\end{figure}
\begin{figure*}
\centerline{ \hbox{ \psfig{file=map_s3A.ps,width=9.0cm}
\psfig{file=map_s1A.ps,width=9.0cm} }}
\caption{ The relation between \mbox{$M_{\rm{vir}}$ } and the dynamical mass for all
the clusters in the sample. For each cluster the dynamical mass is
inferred by applying Eq. \ref{eq:fit} to the 3D velocity
dispersion divided by $\sqrt{3}$ (left panel) and for each of the
three projected 1D velocity dispersions (right panel) of all the
galaxies extracted from the \citet{delucia07} database within
\mbox{$R_{\rm{vir}}$}\ from the centre of the cluster. The dashed (dotted)
white-black line is the best fit of the relation (plus and minus
one $\sigma$) and is virtually indistinguishable from the
one-to-one relation (dotted-dashed purple line).}
\label{fi:maps3}
\end{figure*}
For the full sample of clusters analysed (see Table~\ref{t:clus}), we
then compute the dynamical masses by applying Eq.~\ref{eq:fit} to (1)
the 3D galaxy velocity dispersion (divided by $\sqrt 3$) and (2) to
each orthogonal projected 1D velocity dispersion. Fig.~\ref{fi:maps3}
shows the comparison between the virial masses \mbox{$M_{\rm{vir}}$ } and the resulting
dynamical masses $M_{3D}$ (left panel) and $M_{1D}$ (right panel) for
the full sample of clusters. The best fit of the relation (dashed
black and white lines) is virtually indistinguishable from the
one-to-one relation (dotted-dashed purple line) in the case of the 3D
velocity dispersion. On the other hand, in the case of the 1D velocity
dispersion there is a small but detectable difference between the
one-to-one relation and the best fit. The best fit of the dynamical
mass for the 1D velocity dispersion is $\lesssim 1\%$ lower than
the one-to-one relation. We will show in Section \ref{sec:triax} that
this difference can be explained in terms of triaxial properties
of halos. Typical logarithmic scatter of $\sigma_{M_{3D} / M_{vir}}
\simeq 0.145$ and $\sigma_{M_{1D} / M_{vir}} \simeq 0.334$ are
highlighted with dotted black and white lines in $log_{10}$ scale. We
find that, similar to results by \citet{white10}, using the 1D
velocity dispersion rather than the 3D velocity dispersion increases
the intrinsic log scatter around the mean relation by a factor of
$\sim2.3$.
We further investigate the intrinsic scatter in the relation between the true virial masses and
the dynamical mass estimates in Fig. \ref{fi:stddev}. Taking $\sigma$ to be the standard deviation of the logarithm of the ratio between the dynamical mass estimate and the virial masses, we show that
in the case of the 3D velocity dispersion (dashed red line) and the 1D
velocity dispersion (dotted black line) the scatter increases with redshift.
The solid black line shows a linear fit to the evolution of the intrinsic $M_{dyn - 1D}$ scatter and can be expressed as:
\begin{equation}
\sigma_{ln(M_{1D}/M_{vir})} \simeq 0.3 + 0.075 \times z .
\label{eq:scatfit}
\end{equation}
Velocity dispersions are $\sim$25\% less accurate for estimating single cluster masses at $z=1$ than at low redshift.
The logarithmic scatter of the 1D velocity dispersion mass estimator
$\sigma_{M_{1D}}$ around the true mass arises from two sources of
scatter: (1) the logarithmic scatter between the 3D velocity
dispersion mass estimator and the true mass - $\sigma[M_{3D}/M_{vir}]$
(red dashed line in Fig~\ref{fi:stddev}) and (2) the logarithmic
scatter between the 1D and 3D velocity dispersions
$\sigma[\sigma_{1D}/\sigma_{3D}]$ (solid green line). The expected 1D
dispersion mass scatter is then the quadrature addition of these two
sources: \begin{equation} \sigma^2_{M_{1D}} \sim \sigma^2[M_{3D}/M_{vir}] + \{\bar
B\times \sigma[\sigma_{1D}/\sigma_{3D}]\}^2,
\label{eq:sumscat}
\end{equation}
where $\bar B$ is the best fitting slope parameter from
Eq.~\ref{eq:fit}. The expected $\sigma_{M_{1D}}$ estimate from Eqn.~\ref{eq:sumscat} appears as a dotted-dashed purple line in Fig.~\ref{fi:stddev}; note that this estimate is in excellent agreement with the directly measured scatter (dotted black line). Therefore we show -- as pointed out by \citet{white10} -- that the dominant contributor to the scatter is the intrinsic triaxial structure of halos. Furthermore, its evolution with redshift is also the dominant source of the
increasing scatter of the 1D dynamical mass estimates with redshift. By comparison, the scatter between the 3D velocity dispersion mass estimator and the true mass $\sigma[ln(M_{3D}/M_{vir})]$, which is reflecting departures from dynamical equilibrium due to ongoing merging in the cluster population, is relatively minor. Ultimately it is the lack of observational access to the full 3D dynamics and distribution of the galaxies that limits us from precise single cluster dynamical mass estimates.
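In code, the quadrature sum of Eqn.~\ref{eq:sumscat} is simply (a sketch, using the mean slope $\bar B = 2.91$):
\begin{verbatim}
import numpy as np

def sigma_m1d(sig_m3d, sig_1d_over_3d, B=2.91):
    """Expected log-scatter of the 1D dispersion mass estimator."""
    return np.hypot(sig_m3d, B * sig_1d_over_3d)
\end{verbatim}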
\begin{figure}
\centerline{ \hbox{ \psfig{file=stddev.ps,width=9.0cm}}}
\caption{ The redshift evolution of the logarithmic 1$\sigma$
scatter for the following quantities: (1) the 3D galaxy velocity
dispersion mass estimate scatter (dashed red), (2) the 1D galaxy
velocity dispersion mass estimate scatter (dotted black), (3) a
fit to \#2 (solid black; Eqn.~\ref{eq:scatfit}), (4) the scatter
of the 1D velocity dispersion about the 3D dispersion (solid
green), (5) the same quantity turned into mass scatter using
Eqn.~\ref{eq:fit} (dashed-dotted blue) and (6) the expected 1D
dispersion mass scatter (\#2) obtained by quadrature addition of
\#1 and \#5, as explained in Sec. \ref{sec:intrinsic}
(dotted-dashed purple; Eqn.~\ref{eq:sumscat}).}
\label{fi:stddev}
\end{figure}
\subsection {Triaxiality}
\label{sec:triax}
The presence of pronounced departures from sphericity in dark matter
halos (\citealt{thomas92}, \citealt{warren92}, \citealt{jing02}), if
not approximately balanced between prolate and oblate systems, could
in principle not only increase the scatter in dynamical mass
estimates, but also lead to a bias. If, for example, clusters were
mainly prolate systems, with one major axis associated to a higher
velocity dispersion and two minor axes with a lower velocity
dispersion, there should be two lines of sight over three associated
with a lower velocity dispersion. This could potentially lead to a
bias in the 1D velocity dispersion with respect to the 3D velocity
dispersion. To quantify this possible bias, we compute the moment of
inertia for each cluster in the sample, and we then calculate the
velocity dispersions along each of the three major axes. As has
been pointed out before (\citealt{tormen97}, \citealt{kasun05},
\citealt{white10}) the inertia and velocity tensors are quite well aligned, with a typical
misalignment angle of less than $30^\circ$. In
Fig.~\ref{fi:prolatez}, at each redshift we show the lowest velocity
dispersion $\sigma_0$ with black crosses, the highest $\sigma_2$ with
green stars and the intermediate one $\sigma_1$ with red diamonds
normalized to the 3D velocity dispersion $\sigma_{3D}$ (divided by
$\sqrt 3$). Dashed blue lines are the 16, 50 and 84 percentile of the
full distribution and DEV is the associated standard deviation which,
as expected from Fig.~\ref{fi:stddev} is increasing with redshift. A
perfectly spherical cluster in this plot will therefore appear with
the three points lying all at the value 1, whereas prolate and oblate
systems will have the intermediate velocity dispersions $\sigma_1$
closer to the lower one $\sigma_0$ and to the higher one $\sigma_2$,
respectively. The black solid line is the best fit of the distribution
of the intermediate $\sigma_1$ velocity dispersions and it is very
close to unity, showing that dynamically, clusters do not have a very
strong preference among prolate and oblate systems. Furthermore this
result is true for the range of redshifts and masses we examine here.
This can be better seen in Fig.~\ref{fi:prolate}, where we show that
we measure only a mild excess (at $\lesssim 5\%$ level) of prolate
systems. In addition, for each cluster in the sample, we compute a
"prolateness" quantity $Prol$ as: \begin{equation} Prol = \frac{\left(\sigma_2 -
\sigma_1\right) - \left(\sigma_1 - \sigma_0\right)}{\sigma_{3D}}.
\label{eq:prol}
\end{equation} A prolate system will have a positive $Prol$ value whereas an
oblate one will have a negative $Prol$. Fig.~\ref{fi:prolate} shows a
map representing the distribution of the $Prol$ variable as a function
of the cluster mass (left panel) and redshift (right panel). To
compute the former we stack clusters from all the redshifts, and for
the latter we stack clusters from all masses. As it is shown in
Fig.~\ref{fi:prolate}, there are no clear dependencies of the $Prol$
variable on the cluster mass or redshift. The slight excess of prolate
over oblate systems at all masses and redshifts would translate into
1D dynamical masses slightly biased towards smaller masses.
Indeed, this is seen as a $\sim1$\% effect in Fig.~\ref{fi:maps3}.
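For reference, Eqn.~\ref{eq:prol} translates directly into code (a sketch, assuming the axis dispersions are sorted so that $\sigma_0 \le \sigma_1 \le \sigma_2$):
\begin{verbatim}
def prolateness(sig0, sig1, sig2, sig3d):
    """Eqn. (prol): positive for prolate, negative for oblate."""
    return ((sig2 - sig1) - (sig1 - sig0)) / sig3d
\end{verbatim}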
\begin{figure*}
\centerline{\hbox{ \psfig{file=prolate.ps,width=20cm} }}
\caption{We show for each cluster the velocity dispersion along the
three major axes of the inertia tensor (black crosses for the
smallest, red diamonds for the intermediate and green stars for the
largest) normalized to the 3D velocity dispersion divided by $\sqrt
3$ as a function of the cluster mass in different redshift
bins. The black solid line is the best fit of the intermediate axis
velocity dispersion, and the dashed blue lines are the median and the 16
and 84 percentile of the full distribution. DEV is the
associated standard deviation. }
\label{fi:prolatez}
\end{figure*}
\begin{figure*}
\centerline{\hbox{
\psfig{file=prolatenessM.ps,width=9.cm}
\psfig{file=prolatenessZ.ps,width=9.cm}
}}
\caption
{ The distribution of the prolateness variable $Prol$ (see
Eqn.~\ref{eq:prol}) as a function of the cluster mass for all the
clusters at different redshift stacked together (left panel) and as
a function of redshift (right panel). To guide the eye the dashed
black and white line is highlighting the value of $Prol = 0$, while
the dotted red-white lines are respectively the 16, 50 and 84
percentiles for the two different distributions. The cluster
ensemble exhibits a slight preference for prolateness at all masses
and redshifts.}
\label{fi:prolate}
\end{figure*}
\section{Properties of Spectroscopic Subsamples}
\label{sec:selection}
Results in the previous section relied on the full galaxy sample
within each cluster virial region. We now study the possible
systematics affecting the cluster velocity dispersion and associated
dynamical mass estimates when more realistic selection for the member
galaxies are taken into account.
We model the selection carried out in real world circumstances by
following the procedure we have developed for the South Pole Telescope
(SPT) dynamical mass calibration program (\citealt{bazin12}). Namely,
we preferentially choose the most luminous red sequence galaxies that
lie projected near the center of the cluster for our spectroscopic
sample. To do this we select galaxies according to their colors,
which are a direct prediction of the adopted semi-analytic model. In
particular, we compute the following color-magnitude diagrams for
different redshift ranges: $g-r$ as a function of $r$ for redshift
z$\leq 0.35$, $r-i$ as a function of $i$ for redshifts $0.35<$z$\leq
0.75$ and $i-z$ as a function of $z$ for redshifts larger than 0.75
(e.g. \citealt{song11}). We report in Fig.~\ref{fi:CMR} the
color-magnitude diagram at different redshifts for all the galaxies
within the virial radius of each cluster. The model given by
\citet{song11}, which has proven to be a good fit to the
observational data, is highlighted with a dashed black-red line. As
it is shown, the simulated cluster galaxy population has a
red-sequence flatter than the observational results. Because the purpose
of this work is not to study the evolution of the cluster galaxy
population, but rather to see the effect of the selection of galaxies
on the estimated dynamical mass, we adopt the following
procedure: First we fit the red sequence at each analysed
redshift. Then, we symmetrically increase the area on
color-magnitude space in order to encompass $68\%$ of the galaxies and
iterate the fit. The resulting best fit and corresponding area are
highlighted as green continuous and dashed lines in
Fig. \ref{fi:CMR}. Table \ref{t:cmr} describes the width in color space
used to select red sequence galaxies at each analysed redshift.
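The iterative fit described above can be sketched as follows (an approximation of the procedure, assuming a linear red sequence in color--magnitude space):
\begin{verbatim}
import numpy as np

def fit_red_sequence(mag, color, n_iter=5):
    """Fit a linear red sequence and a 68% color width, iteratively."""
    sel = np.ones(len(mag), dtype=bool)
    slope = zp = width = 0.0
    for _ in range(n_iter):
        slope, zp = np.polyfit(mag[sel], color[sel], 1)
        resid = color - (slope * mag + zp)
        # symmetric band enclosing 68% of currently selected galaxies
        width = np.percentile(np.abs(resid[sel]), 68)
        sel = np.abs(resid) < width
    return slope, zp, width
\end{verbatim}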
\begin{table}
\centering
\caption{Color width of the simulated red sequence in
magnitudes at each analysed redshift.
Column 1: redshift; Column 2: 1 $\sigma$ width of the red sequence in magnitudes.}
\begin{tabular}{llcr}
z & mag \\
\hline
0.00 & 0.05\\
0.09 & 0.06\\
0.21 & 0.08\\
0.32 & 0.10\\
0.41 & 0.06\\
0.51 & 0.08\\
0.62 & 0.08\\
0.75 & 0.08\\
0.83 & 0.09\\
0.91 & 0.09\\
0.99 & 0.07\\
1.08 & 0.06\\
1.17 & 0.05\\
\end{tabular}
\label{t:cmr}
\end{table}
This color selection helps to reduce the interlopers in our cluster
spectroscopic sample. In addition to color selection, we explore the
impact of imposing a maximum projected separation $R_\perp$ from the
cluster center, and we explore varying the spectroscopic sample size.
In all cases we use the $N_{gal}$ most massive (and therefore
luminous) galaxies in our selected sample. Table~\ref{tab:param} shows
the range of $N_{gal}$ and $a=R_\perp/r_{vir}$ that we explore as well
as the sample binning in redshift and mass. Note that for SZE
selected clusters from SPT or equivalently X-ray selected samples of
clusters, once one has the cluster photometric redshift one also has
an estimate of the cluster virial mass and virial radius from SZE
signature or X-ray luminosity (e.g. \citealt{reiprich02},
\citealt{andersson11}); therefore, we do in fact restrict our
spectroscopic sample when building masks according to projected
distance from the cluster center relative to the cluster virial radius
estimate.
\begin{table}
\centering
\caption{Parameter space explored for the mock observations.
Column 1: maximum projected distance $R_\perp$ from cluster
center, $a=R_\perp/r_{vir}$; Column 2: $N_{gal}$,
initial number of selected most massive red-sequence galaxies; Column 3:
redshift $z$; Column 4: cluster mass \mbox{$M_{\rm{vir}}$ } [$10^{14} \,{\rm M}_\odot$].}
\begin{tabular}{llcr}
$a$ & $N_{gal}$ & z & \mbox{$M_{\rm{vir}}$ } \\
\hline
0.2 & 10 & 0.00 & 1.0 \\
0.4 & 15 & 0.09 & 2.0 \\
0.6 & 20 & 0.21 & 4.0 \\
0.8 & 25 & 0.32 & 6.0 \\
1.0 & 30 & 0.41 & 8.0 \\
1.2 & 40 & 0.51 & 10.0 \\
1.4 & 50 & 0.62 & 20.0 \\
1.6 & 60 & 0.75 & \\
1.8 & 75 & 0.83 & \\
2.0 & 100 & 0.91 & \\
2.2 & & 0.99 & \\
2.4 & & 1.08 & \\
& & 1.17 & \\
\end{tabular}
\label{tab:param}
\end{table}
\begin{figure*}
\centerline{
\hbox{
\psfig{file=cmr.ps,width=20cm}
}}
\caption{Color magnitude relation for all the galaxies within
\mbox{$R_{\rm{vir}}$} at six different redshifts. Color-magnitude relations are
expressed as $g-r$ as a function of $r$ for redshift z$\leq
0.35$, $r-i$ as a function of $i$ for redshifts $0.35<$z$\leq
0.75$ and as $i-z$ as a function of $z$ for redshifts larger
than 0.75 (see text for further details). Symbols with different
colors refers to different galaxy clusters in each separate
redshift bin. Dashed black-red line is the model given by
\citet{song11}. The solid green lines are the best fit to the
simulated red-sequence relation used in this work and dashed
green lines enclose $68\%$ of the galaxies. The
area between them represents the color space used for the
selection of galaxies described in Sect. \ref{sec:selection}.}
\label{fi:CMR}
\end{figure*}
\subsection{Dynamical friction and Velocity Bias}
\label{sec:dynamical}
In section \ref{sec:intrinsic} we showed the presence of a tight
relation between the 3D dynamical mass and the virial mass \mbox{$M_{\rm{vir}}$ }\ for
galaxy clusters. When dynamical masses are computed from the 1D
velocity dispersion instead of the 3D one, we significantly increase
the scatter of this relation and introduce a negligible bias
($\lesssim 1\%$) due to the triaxial properties of dark matter
halos. We now study the effect of velocity segregation due to
dynamical friction and its effect on the estimated dynamical
masses. To do this, for each cluster we select a number of
red-sequence galaxies within the virial radius \mbox{$R_{\rm{vir}}$} that ranges from 10
to 100 galaxies as described in Table~\ref{tab:param}. We sort
galaxies according to their luminosity (different bands were used at
different redshift as described in Sec. \ref{sec:selection}). This
results in a "cumulative" selection. Therefore, for example, for each
cluster the 10 most luminous red-sequence galaxies are present in all
the other samples with larger number of galaxies. On the other
hand, when a cluster field is spectroscopically observed,
completeness to a given limiting magnitude is not always achieved.
In fact, the number of slits per mask in
multi-slit spectrographs is fixed, hence the central, high-density
regions of galaxy clusters can often be sampled only to brighter
magnitudes than the external regions. As a consequence, the spatial
distribution of the galaxies selected for spectroscopy turns out to
be more extended than the parent spatial distribution of cluster
galaxies. In the analyses presented here, we do not model this
observational limitation. Indeed, as described
in the companion paper \citet{bazin12}, such difficulty could be easily
overcome by applying multiple masks to the same field, which would allow
one to achieve high completeness. For each cluster and for all the three
orthogonal projections, we then compute the robust estimation of the
velocity dispersion (\citealt{beers90}) with different numbers of galaxies
and compare it with the intrinsic 1D velocity dispersion. Fig.~\ref{fi:dynfric}
shows the probability distribution
function (PDF) of the ratio between the velocity dispersion computed
with different numbers of bright red-sequence cluster galaxies
($\sigma_{Ngal}$) and the intrinsic 1D velocity dispersion
($\sigma_{1D}$) obtained by stacking results from all the lines of
sight of the cluster sample. Different colors refer to different
numbers of galaxies and the mean of each distribution is highlighted
at the top of the plot with a vertical line segment. We note that when
large numbers of galaxies are used to estimate the velocity
dispersion, the probability distribution function is well represented by a
log-normal distribution centered on unity (i.e., $ln(\sigma_{Ngal}/\sigma_{1D})$ is normal with zero mean). As a result dynamical masses obtained from
large numbers of bright red-sequence cluster galaxies are unbiased
with respect to the intrinsic 1D dynamical mass. However, when the
number of red-sequence galaxies used to estimate the velocity
dispersion is lower than $\sim 30$, the corresponding PDF starts to
deviate from a symmetric distribution and its mean is biased towards
lower values. This effect is evidence of a dynamically cold population
of luminous red galaxies whose velocities are significantly slowed due
to dynamical friction processes. Indeed dynamical friction is more
efficient for more massive galaxies, hence the velocity bias is
expected to be more important for the bright end of the galaxy
population (e.g. \citealt{biviano92}, \citealt{adami98},
\citealt{cappi03}, \citealt{goto05}, \citealt{biviano06}).
\begin{figure}
\hbox{\psfig{file=sigmansigma1d.ps,width=8.0cm}}
\caption{ The probability distribution function of the measured velocity dispersion computed with different numbers of
red-sequence cluster galaxies sorted by luminosity and normalized by the intrinsic 1D velocity dispersion using the full galaxy sample. The position of the mean of each curve is highlighted with a vertical line segment at the top of the figure. Small samples of the most luminous galaxies exhibit biases in dispersion and significant asymmetries in the PDF.}
\label{fi:dynfric}
\end{figure}
To verify this we compute $\sigma_{Ngal}$ in the same way described
above, but starting from galaxies that are randomly selected with
respect to luminosity. Note that in this case we only randomly select
galaxies, but we do not change the ``cumulative nature'' of our selection
and the subsequent estimated velocity dispersion when using larger
numbers of galaxies. We then calculate the corresponding dynamical
masses in the case of galaxies selected according to the procedure
described in Sect.~\ref{sec:selection} and in the case of random
selection using the different number of galaxies listed in
Table~\ref{tab:param}. The resulting stacked dynamical masses for the
full sample of clusters and for the three orthogonal projections are
shown in Fig.~\ref{fi:fricran} as a function of the intrinsic virial
mass \mbox{$M_{\rm{vir}}$ }. The dashed purple-black line is the one-to-one relation
and the solid green lines are the median and 16 and 84 percentiles of
the distributions. The left panel of Fig.~\ref{fi:fricran} represents
the original distribution, while the right panel represents the
randomly selected distribution. As expected from
Fig.~\ref{fi:dynfric}, if velocity dispersions are computed from
red-sequence galaxies selected according to their luminosity, a clear
bias is introduced in the estimated dynamical mass. Moreover we can
see that the distance between the median line and the 84 percentile
line is smaller than the distance between the median and the 16
percentile line, because the distribution is no longer symmetric.
Furthermore it appears that the bias
present in the estimated dynamical mass does not depend on the cluster
mass. On the other hand, if we randomly select galaxies (right
panel), the bias is reduced, and we obtain a more
symmetric distribution. We also check for a possible redshift
dependency on the velocity bias or dynamical friction. For this
purpose we split our sample of clusters into two different redshift
bins and show separately in Fig.~\ref{fi:fricz} the relation between
the true cluster virial mass and the estimated dynamical masses
computed with different numbers of bright red-sequence galaxies
selected according to their luminosity in the case of low redshift
(left panel) and high redshift (right panel) clusters. Obviously the
number of clusters and their mass distribution is a strong function of
redshift. However, it is worth noting that the impact of dynamical
friction on the estimation of velocity dispersion and dynamical mass
does not vary much with cluster mass or redshift.
\begin{figure*}
\hbox{ \psfig{file=Mdyn_stacked.ps,width=9.0cm}
\psfig{file=Mdyn_stacked_shuffle.ps,width=9.0cm}}
\caption{ The relation between \mbox{$M_{\rm{vir}}$ } and the dynamical mass for all
the clusters in the sample and for each orthogonal projection. For
  each cluster the dynamical mass is inferred by applying
  Eq. \ref{eq:fit} to the robust estimation of the velocity
  dispersion computed using different numbers of galaxies (Table
\ref{tab:param}). Left panel is for bright red-sequence galaxies
sorted according to their luminosity and right panel is for a
randomly sorted array of galaxies. Dashed purple-black line is the
one-to-one relation, and solid green lines represent the 16, 50 and
84 percentiles.}
\label{fi:fricran}
\end{figure*}
Using the results of these mock observations we express both the
velocity dispersion bias, represented by the position of the vertical
segment at the top of Fig.~\ref{fi:dynfric}, and the characteristic
width of each distribution shown in Fig.~\ref{fi:dynfric} with the
following parametrisation: \begin{equation} \langle \ln(\sigma_{N_{gal}}/\sigma_{1D})\rangle = 0.05
- 0.51/\sqrt{N_{gal}},
\label{eq:meandyn}
\end{equation}
\begin{equation}
\sigma_{\ln(\sigma_{N_{gal}}/\sigma_{1D})} = -0.037 + 1.047/\sqrt{N_{gal}}.
\label{eq:sigmadyn}
\end{equation}
This parametrisation is valid only within the range of the number of galaxies used in this study (between 10 and
100). For example, if the dynamical mass is estimated starting from a
number of galaxies larger than 100, the bias would presumably be zero
rather than negative, as Eq.~\ref{eq:meandyn} would imply.
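As a minimal illustration of how Eqs.~\ref{eq:meandyn} and
\ref{eq:sigmadyn} can be applied in practice, the following Python
sketch (the function names are ours and purely illustrative) removes
the mean dynamical friction bias from a measured dispersion and
returns the expected width of the log-normal scatter:
\begin{verbatim}
import numpy as np

def correct_dispersion(sigma_ngal, n_gal):
    # Mean bias <ln(sigma_Ngal/sigma_1D)> of Eq. (meandyn);
    # valid only for 10 <= n_gal <= 100, the range probed here.
    bias = 0.05 - 0.51 / np.sqrt(n_gal)
    return sigma_ngal * np.exp(-bias)

def dispersion_scatter(n_gal):
    # Characteristic width of ln(sigma_Ngal/sigma_1D), Eq. (sigmadyn).
    return -0.037 + 1.047 / np.sqrt(n_gal)
\end{verbatim}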
\begin{figure*}
\hbox{ \psfig{file=dynfriclt0p5.ps,width=9.0cm}
\psfig{file=dynfricgt0p5.ps,width=9.0cm}}
\caption{ Same as for the left panel of Fig. \ref{fi:fricran}, but
  dividing our cluster sample into 2 redshift bins. Left panel is for
$z\leq 0.5$ and right panel is for $z > 0.5$.}
\label{fi:fricz}
\end{figure*}
We demonstrate in Fig.~\ref{fi:dynfricpage} that by applying
Eq.~\ref{eq:meandyn} to the velocity dispersion estimated with
different numbers of red-sequence cluster galaxies we are able to
remove the bias induced by the dynamical friction. In particular,
Fig.~\ref{fi:dynfricpage} shows the relation between true virial mass
and the dynamical mass estimated using the most luminous 100, 50 and
15 red-sequence galaxies. Dynamical masses are computed by
applying Eq.~\ref{eq:fit} directly to the velocity dispersion (left
panels) and to the velocity dispersion corrected according to
Eq.~\ref{eq:meandyn} (right panels). Dynamical friction affects
mostly the bright end of the red-sequence cluster galaxy population
and therefore the bias is larger in the case of the smallest sample (lower left
panel). Consistently the correction given by Eq.~\ref{eq:meandyn} is
larger in this case, whereas it is negligible in the other cases (50 and 100).
\begin{figure*}
\hbox{\psfig{file=page_dyn_all.ps,width=16.0cm}}
\caption{ The relation between \mbox{$M_{\rm{vir}}$ } and the dynamical mass for all
the clusters in the sample and for each orthogonal
projection. {\it Left panels}: For each cluster the dynamical mass
  is inferred by applying Eq. \ref{eq:fit} to the robust estimation
  of the velocity dispersion computed using the 100, 50 and 15 most
  luminous red-sequence cluster galaxies (with distance from the
  centre smaller than \mbox{$R_{\rm{vir}}$}), shown respectively in the upper,
middle and lower panels. {\it Right panels:} Same as for left
panels, but velocity dispersions are corrected according to
Eq. \ref{eq:meandyn}. Dashed purple-black line is the one-to-one
relation, and solid green lines represent the 84, 50 and 16
percentiles.}
\label{fi:dynfricpage}
\end{figure*}
\subsection{Impact of Poisson Noise}
\label{sec:poisson}
In this work we restrict our analyses to all the galaxies with stellar
masses predicted by the adopted SAM larger than $5\times 10^8
\,{\rm M}_\odot$. The total number of galaxies within the virial radius
\mbox{$R_{\rm{vir}}$}\ is therefore quite large and even for the poorer clusters with
$\mbox{$M_{\rm{vir}}$ } \sim 10^{14} \,{\rm M}_\odot$, the number of galaxies used to compute the
1D velocity dispersion is of the order of $N_{1D} \sim 200$. As a
result, in the absence of any dynamical friction effect, the
characteristic scatter associated with the ratio $\sigma_{Ngal}/
\sigma_{1D}$ is well represented by the Poissonian factor
$\sqrt{2N_{gal}}$. To demonstrate this, we show in
Fig.~\ref{fi:dynfric100} the evolution of the scatter in the relation
between the true virial masses and the dynamical masses as a function
of redshift. For each cluster, dynamical mass is
estimated starting from the velocity dispersion of the 100 most
luminous red-sequence galaxies through Eq.~\ref{eq:fit}. The resulting
scatter is highlighted as a cyan solid line. We also show the
evolution of associated scatter when dynamical mass is computed from
the intrinsic 3D (dashed red line) and 1D (dotted black line) velocity
dispersions. Moreover, similarly to Fig.~\ref{fi:stddev}, we
separately show the predicted scatter obtained by adding in quadrature
the scatter associated with the 1D velocity dispersion and the Poisson
term $\sqrt{2 N_{gal}}$ (dashed-triple dotted green line) or the
factor given by Eq.~\ref{eq:sigmadyn} (dashed-dotted purple line). We
note that, as expected, both predictions agree very well with the
measured evolution of the scatter.
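Schematically, the two predicted curves can be reproduced as follows,
where we interpret the Poissonian term as a fractional dispersion
error $1/\sqrt{2N_{gal}}$ and assume a power-law scaling $M \propto
\sigma^{B}$, so that a scatter in $\ln\sigma$ maps into a scatter in
$\ln M$ through the slope $B$ (variable names are illustrative):
\begin{verbatim}
import numpy as np

def predicted_mass_scatter(s_1d, n_gal, B, use_fit=False):
    # Scatter in ln(M): intrinsic 1D-dispersion scatter s_1d
    # (in ln sigma) plus a sampling term, added in quadrature.
    if use_fit:
        s_n = -0.037 + 1.047 / np.sqrt(n_gal)  # Eq. (sigmadyn)
    else:
        s_n = 1.0 / np.sqrt(2.0 * n_gal)       # pure Poissonian term
    return B * np.hypot(s_1d, s_n)
\end{verbatim}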
\begin{figure}
\hbox{ \psfig{file=stdev_100.ps,width=9.0cm}}
\caption{ The evolution of the 1 $\sigma$ scatter as a function of
redshift in log-space for the following quantities. Dashed red
(dotted black) line is for the ratio between the estimated
dynamical mass $M_{3D}$ ($M_{1D}$) computed from the 3D (1D)
velocity dispersion and the virial mass \mbox{$M_{\rm{vir}}$ }. The solid cyan
line is for the ratio between the measured dynamical mass
$M_{dyn}$ computed from the 100 most luminous red-sequence
galaxies within the virial radius of each cluster along each line
of sight and the virial mass \mbox{$M_{\rm{vir}}$ }. The dashed-dotted green line
is the expected scatter in mass obtained by multiplying $B$ by the
term given by adding in quadrature the scatter from the 1D
velocity dispersion and a Poissonian term equal to $\sqrt{2\times
100}$. The dashed-dotted purple line is the expected scatter in
mass obtained by multiplying $B$ by the term given by adding in
quadrature the scatter from the 1D velocity dispersion and a term
computed with the fitting formula of Eq.~\ref{eq:sigmadyn}.}
\label{fi:dynfric100}
\end{figure}
However, if a smaller number of galaxies is used to calculate the
dynamical mass, a difference between the two predictions emerges. For
example, in Fig.~\ref{fi:dynfric5015} we show the same computation
highlighted in Fig.~\ref{fi:dynfric100}, but with a number of galaxies
equal to 50 (left panel) and 15 (right panel). We note in particular
that the observed evolution of the scatter of the relation between the
virial mass and the dynamical mass is well described by adding in
quadrature to the scatter associated with the intrinsic 1D dynamical mass
the term given by the fitting formula of Eq.~\ref{eq:sigmadyn}. On the
contrary, if only the Poisson term $\sqrt{2 N_{gal}}$ is taken into
account, the predicted scatter is underestimated with respect to the
measured one. Furthermore, note that although
Figures~\ref{fi:dynfricpage} and \ref{fi:dynfric} showed that the dispersion
calculated using 50 galaxies does not introduce a significant mass bias,
the Poisson term is no
longer adequate to explain the real scatter. On the other hand,
if the contribution to the scatter due to dynamical friction is
included through Eq. \ref{eq:sigmadyn} we are able to recover the
right answer.
\begin{figure*}
\hbox{ \psfig{file=stdev_50.ps,width=9.0cm}
\psfig{file=stdev_15.ps,width=9.0cm}}
\caption{ Same as for Fig. \ref{fi:dynfric100}, but with a number of galaxies respectively equal to 50 (left panel) and 15 (right panel).}
\label{fi:dynfric5015}
\end{figure*}
\subsection{Impact of Interlopers}
\label{sec:Interlopers}
Finally, to have a more coherent and realistic approach to our
analyses, we further investigate the effect of interlopers as a
possible source of systematics in the computation of cluster
dynamical masses. For this purpose, for each snapshot and projection, we
construct a number of cylindrical light-cones centred at each cluster
with height equal to the full simulated box-length and different
radii spanning the interval 0.2 to 2.4 \mbox{$R_{\rm{vir}}$}. The different aperture
values used are listed in Table~\ref{tab:param}. We then apply an
initial cut of $4000 \,{\rm km\,s^{-1}}$ to select galaxies within the
cylinders. For each cylindrical light-cone realisation, we then
initially select a different number of members in the color-magnitude
space described in Sect. \ref{sec:selection} ranging from 10 to 100
galaxies as shown in Table \ref{tab:param}. Several techniques have
been developed to identify and reject interlopers. Such methods have
been studied before typically using randomly selected dark matter
particles \citep[e.g.][]{perea90,diaferio99,lokas06,wojtak07,wojtak09}
and more recently using subhalos by \citet{biviano06} and
\citet{white10}. However, for the purpose of this work, here we simply
apply a 3$\sigma$ clipping procedure to the robust estimation of the
velocity dispersion \citep{beers90} to reject interlopers, as
discussed in \citet{bazin12}. This leads to a final spectroscopic
sample of galaxies for each cluster, at each redshift, for each
projection, within each different aperture and for each different
initially selected number of red-sequence galaxies.
Fig.~\ref{fi:schema} is a schematic representation of the procedure we
follow to obtain from each cluster and projection different estimation
of the velocity dispersion according to different "observational
choices".
\begin{figure}
\hbox{\psfig{file=Schema.eps,width=9.0cm}
}
\caption{ Schematic representation of the parameter space explored
in this work. See Table~\ref{tab:param} for specific ranges in each parameter.}
\label{fi:schema}
\end{figure}
From the final spectroscopic sample of galaxies described above, we
compute the fraction of interlopers (arbitrarily defined here as
galaxies lying at a cluster-centric distance larger than
3$\times$\mbox{$R_{\rm{vir}}$}) as a function of the aperture by stacking together the
sample in different bins according to redshift, to the number of
galaxies used to evaluate the velocity dispersion and to the cluster
mass. This can be seen in Figure~\ref{fi:interlopers} and is in good agreement with previous works (e.g. \citealt{mamon10}). The two upper
panels and the lower-left panel show the fraction of interlopers as a
function of aperture respectively color coded according to the number
of galaxies (panel A), to the redshift (panel B) and to the cluster
mass (panel C). As expected, the fraction of interlopers rises with
the aperture within which the simulated red-sequence galaxies were
initially chosen. This indicates that even red sequence,
spectroscopically selected samples are significantly contaminated by
galaxies lying at distances more than three times the virial radius
from the cluster.
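Schematically, the stacked interloper fraction as a function of
aperture amounts to the following computation (array names are
illustrative):
\begin{verbatim}
import numpy as np

def interloper_fraction(r3d, r_perp, rvir, apertures):
    # Fraction of selected galaxies whose 3D cluster-centric
    # distance exceeds 3 R_vir, per projected aperture.
    return np.array([np.mean(r3d[r_perp <= ap * rvir] > 3.0 * rvir)
                     for ap in apertures])
\end{verbatim}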
On the other hand, a much weaker dependency between the number of
selected red sequence galaxies and the fraction of interlopers is
highlighted in the upper-left panel of Fig. \ref{fi:interlopers}
(A). Whether one has small or large samples the fraction of
interlopers remains almost the same. The upper-right and the
bottom-left panels show that the fraction of interlopers is
larger at larger redshifts, consistent with a denser Universe (B),
and is a steeper function of aperture for lower mass clusters
(C). Since in the hierarchical scenario more massive halos form at
later times than the lower mass ones, these two variables are clearly
correlated. Thus, we also show in the bottom-right panel labeled "D"
how the fraction of interlopers varies as a function of redshift by
stacking together the sample in different mass bins. Most massive
clusters are not formed yet at high redshifts, therefore above
certain redshifts the redder lines go to zero. Although oscillating,
an evident tendency of increasing interloper fraction with
increasing redshift is present, whereas at fixed $z$ there is no clear
dependency of the fraction of interlopers on the cluster mass. We
stress however that all the relations shown in
Fig.~\ref{fi:interlopers} are meant to describe the qualitative
dependency of the interloper fraction on the analysed quantities.
\begin{figure*}
\hbox{\psfig{file=intfracn.ps,width=9.0cm}
\psfig{file=intfracz.ps,width=9.0cm}}
\hbox{ \psfig{file=intfracm.ps,width=9.0cm}
\psfig{file=intfracmz.ps,width=9.0cm}}
\caption{ {\it Upper panels and bottom left panel:} The stacked mean
fraction of interlopers (defined as galaxies at distance larger
than $3\times$ \mbox{$R_{\rm{vir}}$}) as a function of maximum projected
separation from the cluster $R_\perp$ normalized to \mbox{$R_{\rm{vir}}$}\ color
coded in numbers of galaxies used to estimate the velocity
dispersion (top left panel labeled A), redshift (top right panel
labeled B) and mass of the cluster (bottom left panel labeled
C). {\it Bottom right panel:} The stacked mean fraction of
interlopers as a function of redshift color coded according to
the cluster mass (labeled D).}
\label{fi:interlopers}
\end{figure*}
In a similar way we compute the mean velocity bias (defined as
the ratio between the measured velocity dispersion and the intrinsic
line-of-sight velocity dispersion:
$\sigma_{(N_{gal},R_\perp,M_{vir},z)} / \sigma_{1D} $) as a function
of the aperture by stacking together the sample in different bins
according to their redshift, to the number of spectroscopic galaxies
and to the cluster mass. This can be seen in
Figure~\ref{fi:biasfrac}. The two upper panels and the lower-left
panel show the velocity bias as a function of aperture respectively
color coded according to the number of galaxies (panel A), to the
redshift (panel B) and to the cluster mass (panel C). Interestingly,
the velocity bias has a minimum when velocity dispersions are
evaluated within $R_\perp \sim \mbox{$R_{\rm{vir}}$}$, and rises at both smaller and
larger radii. In particular, for projected radii $\lesssim \mbox{$R_{\rm{vir}}$}$,
where the effect of interlopers is smaller, we recover the expected
decrease of the average velocity dispersion profile
(e.g. \citealt{biviano06}) as a function of aperture. On the other
hand, for $R_\perp \gtrsim \mbox{$R_{\rm{vir}}$}$, the larger contamination from
interlopers is significantly affecting and boosting the velocity
bias. Furthermore, as expected, dynamical friction also affects
the estimated velocity dispersion when the latter is computed with
a small number of selected red sequence galaxies (A). Indeed, by
applying Eq.~\ref{eq:meandyn} to the estimated velocity dispersion
we are able to successfully remove the degeneracy between the
velocity bias and the number of galaxies within projected aperture
$R_\perp \lesssim \mbox{$R_{\rm{vir}}$}$ (Fig. \ref{fi:biasdyn}). The upper-right and the
bottom-left panels of Fig.~\ref{fi:biasfrac} show that,
consistent with the fraction of interlopers, the velocity bias
computed within $R_\perp \gtrsim \mbox{$R_{\rm{vir}}$}$ is larger at larger
redshifts (B) and is a steeper function of aperture for lower mass
clusters (C). Finally, a mild dependence of redshift for fixed mass
is highlighted in the bottom-right panel (D).
\begin{figure*}
\hbox{\psfig{file=biasfracn.ps,width=9.0cm}
\psfig{file=biasfracz.ps,width=9.0cm}}
\hbox{ \psfig{file=biasfracm.ps,width=9.0cm}
\psfig{file=biasfracmz.ps,width=9.0cm}}
\caption{ {\it Upper panels and bottom left panel:} The stacked mean
velocity bias as a function of maximum projected separation from
the cluster $R_\perp$ normalized to \mbox{$R_{\rm{vir}}$}\ color coded in numbers
of galaxies used to estimate the velocity dispersion (top left
panel labeled A), redshift (top right panel labeled B) and mass of
the cluster (bottom left panel labeled C). {\it Bottom right
panel:} The stacked mean velocity bias as a function of redshift
color coded according to the cluster mass (labeled D).}
\label{fi:biasfrac}
\end{figure*}
\begin{figure}
\hbox{\psfig{file=biasfracndyn.ps,width=9.0cm}}
\caption{ Same as panel A of
  Fig. \ref{fi:biasfrac}, but corrected for dynamical
  friction according to Eq. \ref{eq:meandyn}.}
\label{fi:biasdyn}
\end{figure}
To better understand how interlopers affect the inferred velocity
dispersion we select as an example all clusters with \mbox{$M_{\rm{vir}}$ }\ larger
than $5\times10^{14} \,{\rm M}_\odot$. For each of the three orthogonal
projections we then initially select the most luminous 25 red-sequence
galaxies as described in Sect.~\ref{sec:selection} within a projected
distance of $1.5 \mbox{$R_{\rm{vir}}$}$. We then apply the same procedure described
above to reject interlopers and obtain a final list of galaxies. From
this list of galaxies we then identify the "true" cluster members and
the interlopers. We show in the left panel of Fig.~\ref{fi:caustic} a
map representing the stacked distribution of the velocity of the
cluster galaxies as a function of the projected separation from the
cluster center $R_\perp/R_{vir}$. Note the typical trumpet shape
of the expected caustic distribution (\citealt{diaferio99},
\citealt{serra11}, \citealt{zhang11}). On the top of this map, we
overplot as contours the stacked distribution of the interloper
population that the $3\sigma$ clipping procedure was not able to
properly reject. A large fraction of high velocity interlopers is
still present after foreground and background removal, and thus
biases high the estimated velocity dispersion.
This map highlights how caustic based techniques are potentially more
effective at removing interlopers than a simple $3\sigma$
clipping. However, observationally, a much larger number of galaxies
than the 25 spectra used here is typically needed to apply
these more sophisticated methods.
We also show in the right panel of Fig. \ref{fi:caustic} as solid
black and dashed red histograms respectively the distribution of
velocities for both the cluster galaxies and the interlopers
population. The expected Gaussian velocity distribution is overplotted
as a solid black Gaussian with a standard deviation given by
Eq. \ref{eq:meandyn} and $N_{gal} = 25$. The absolute normalisations of
the histograms are arbitrary, but the relative ratio of the two
histograms is representative of the ratio between the number of
cluster galaxies and interlopers. Note also that a
large fraction of low velocity interlopers is present. These
interlopers are mostly red-sequence galaxies which lie at about the
turn-around radius of the cluster over-density and therefore have
associated redshifts which are consistent with the cluster redshift.
As discussed above, a simple $3\sigma$ clipping technique is not able
to effectively remove high velocity interlopers, and therefore
biases high the inferred velocity dispersion. On the contrary, caustic
based methods are able to remove this high velocity interloper
population, but are not effective at rejecting these low velocity galaxies
at around the turn-around radius. As a net result, velocity
dispersions computed after interloper rejection based upon caustic
techniques will be biased low (\citealt{wojtak07}, \citealt{zhang11}).
\begin{figure*}
\hbox{\psfig{file=map_caustic.ps,width=9.0cm}
\psfig{file=pdf_hist_vel.ps,width=9.0cm}
}
\caption{{\it Left panel:} The color map represents the
distribution of the line of sight velocity of cluster galaxies
(within $3 \mbox{$R_{\rm{vir}}$}$) normalized to the intrinsic 3D velocity
dispersion of clusters as a function of the projected distance
from the cluster center
in units of $\mbox{$R_{\rm{vir}}$}$ for the sample described in the
text. The contour lines represent the same
distribution for the interloper galaxies. {\it Right panel:} The
distribution of velocities in units of the intrinsic 3D velocity
dispersion for the cluster galaxy population (solid black
histogram) and for the interloper population (dashed red
histogram). The normalisation is arbitrary, while the
relative ratio of the two histograms reflects the sample described
in the text. The solid black Gaussian is the expected distribution
with width given by the Eq.~\ref{eq:meandyn} and $N_{gal} = 25$.}
\label{fi:caustic}
\end{figure*}
As mentioned above, for each cluster along all the projections we end
up with different samples of red-sequence galaxies that the $3\sigma$
clipping procedure recognises as "spectroscopic members". Therefore,
for each different initially selected number of red-sequence galaxies,
we measure the robust estimation of the velocity dispersion. We then
apply Eq.~\ref{eq:fit} to estimate the dynamical mass. Left panel of
Fig.~\ref{fi:MAPS} shows the corresponding relation between the
resulting dynamical mass and the true virial mass for all the sample
stacked together. The dashed black-purple line is the one-to-one
relation, whereas the green lines show the 16, 50 and 84
percentiles. Note that the sample shown here is volume limited, and so
the distribution in mass differs from that of typical observational
samples. Furthermore, the same clusters appear several times with
dynamical masses computed from different numbers of galaxies on each
projection, and within different projected radii at all the
redshifts. When red-sequence galaxies are selected within a
projected radius from a light-cone regardless of their true 3D
distance from the centre of the cluster, the relation between the
virial mass and the inferred dynamical mass is much broader. In
particular, by looking at the median of the distribution, it is
possible to notice that a systematic overestimation of the dynamical
mass is present at all cluster masses, as expected from the interloper
contribution previously discussed. Furthermore, especially at the low
mass end of the cluster galaxy distribution, the presence of a
significant population of catastrophic outliers makes the relation
between virial mass and dynamical mass very asymmetric and causes a
severe boosting of the dynamical mass.
These outliers are likely related to cases where the simple $3\sigma$
clipping procedure is not sophisticated enough to effectively separate
the foreground and background interloper galaxies from the proper
cluster galaxies. To verify this hypothesis we show in the right panel
of Fig.~\ref{fi:MAPS} the same computation as for the left panel, but
restricting our sample to only the cases in which the presence of
interlopers is smaller than $5\%$. We note how this sub-sample
qualitatively looks very similar to the left panel of
Fig.~\ref{fi:fricran} which by construction contains only cluster
galaxies. Furthermore, once the contribution from interlopers is
removed, the bias of dynamical mass over the true mass disappears.
However, remember that Fig.~\ref{fi:MAPS} shows that without
interlopers dynamical masses are on average underestimated compared to
the true virial mass, as expected from the effect of dynamical
friction described in Sect.~\ref{sec:dynamical}. Moreover, as
expected from the lower panels of Fig.~\ref{fi:interlopers}, the
adopted interloper rejection method is more effective for more
massive clusters. Clearly the interloper effect on the dynamical mass
is more severe at the low mass end of the cluster population.
Because the color selection of cluster members is a crucial point
in this analysis, the results presented here obviously depend on the
adopted galaxy formation model at some level. On the one hand it is true
that the model does not perfectly reproduce the observed properties
of the cluster galaxy population. On the other hand we also do not
take into account any observational uncertainty which will instead
affect the real data, for example broadening the observed
red-sequence at fainter magnitudes.
To estimate the sensitivity of the above described results to
uncertainties in the galaxy modeling entering the color selection, we select
red-sequence galaxies with a different
criterion than the one described in Sect. \ref{sec:selection}. Instead
of selecting the area in color-magnitude space which encompasses
68\% of the cluster galaxies, we select all galaxies within a fixed $\pm
0.15$~mag around the fitted red-sequence relation, similarly to the
criterion adopted in the companion paper \citet{bazin12}. This is on
average a factor of $\sim 2$ in magnitude larger than the former
threshold (ranging from $\sim 0.5$ to $3$ depending on the redshift, as
highlighted in Tab. \ref{t:cmr}). Then, we reject interlopers and
compute velocity dispersions and subsequent dynamical masses as
described in the above sections. We find that the fraction of
interlopers which the $3\sigma$ clipping procedure is not able to
reject is on average in agreement within $\sim 3\%$ with the previous
color selection. In particular for clusters with \mbox{$M_{\rm{vir}}$ } larger than
$4\times 10^{14} \,{\rm M}_\odot$ the agreement is better than 1\%.
We show in the left panel of Fig.~\ref{fi:MAPS} the resulting 16, 50
and 84 percentiles overplotted as red continuous lines. We note that a
larger effect from the interlopers is present in comparison with the
previous analyses, as expected from the broader color selection
adopted. In particular, larger differences appear at the low mass end
of the cluster galaxy population, where a significant increase in the
number of catastrophic outliers overestimating the dynamical mass is
visible. On the other hand, the average population is not affected by
much. As a net result, a change in the color selection by a factor of
$\sim 2$ implies a change in the estimated velocity dispersion of less
than $\sim 3\%$. In particular, this difference reduces to less than $
\sim 1\%$ for clusters with \mbox{$M_{\rm{vir}}$ } larger than $5\times 10^{14}\,{\rm M}_\odot$.
\subsection{Unbiased Dispersion Mass Estimator}
\label{sec:unbiased}
Similarly to Sect.~\ref{sec:dynamical}, we try to parametrise as a
function of the variables in Table \ref{tab:param} (aperture, number
of spectra, redshift and cluster mass), the way that interlopers
affect the inferred dynamical mass. However, we could not find a
satisfactory analytical solution to easily model the measured velocity
dispersion of clusters as a function of the above described variables,
due to the non-linear interplay of the explored parameter space
highlighted in Fig. \ref{fi:biasfrac}. Therefore, we numerically
compute the mean and the associated standard deviation of the ratio
between the observed and the 1D intrinsic velocity dispersion in
different bins of the parameter space as highlighted in Table
\ref{tab:param}. In this way, given the cluster mass, redshift and the
number of red-sequence galaxy spectra within a given projected radius
used to compute the velocity dispersion, we can correct for the
average bias affecting the estimation of the dynamical mass. We show
in Fig. \ref{fi:MAPSc} the same relation described in the left panel
of Fig. \ref{fi:MAPS} when such corrections are included. We remark
that the bias is effectively removed at all the mass scales analysed
here. Furthermore, by comparing the 84 percentile and median lines at
the low mass end of the left panel of Fig.~\ref{fi:MAPS} with the ones
in Fig.~\ref{fi:MAPSc}, we note that while the former are separated by
about an order of magnitude in dynamical mass, for the latter this
difference is reduced to about $0.8$ dex.
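A minimal sketch of this numerical correction, assuming SciPy's
binned statistics and illustrative variable names, is the following;
a measured dispersion is then divided by the mean bias of its bin
before applying Eq.~\ref{eq:fit}:
\begin{verbatim}
import numpy as np
from scipy.stats import binned_statistic_dd

def build_bias_table(ratio, n_gal, r_ap, z, mass, bins):
    # Mean and std of sigma_obs/sigma_1D on a grid of
    # (N_gal, aperture, redshift, mass).
    sample = np.column_stack([n_gal, r_ap, z, mass])
    mean, edges, _ = binned_statistic_dd(sample, ratio,
                                         'mean', bins=bins)
    std, _, _ = binned_statistic_dd(sample, ratio,
                                    'std', bins=bins)
    return mean, std, edges
\end{verbatim}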
\begin{figure*}
\centerline{\psfig{file=map_full.ps,width=8cm}
\psfig{file=map_full_wi.ps,width=8cm}
}
\caption{{\it Left panel:} The distribution of the dynamical mass
estimated through Eq. \ref{eq:fit} as a function of the true \mbox{$M_{\rm{vir}}$ }
for the whole sample described in Sect. \ref{sec:selection} used
in this work (green lines). Red lines represent the same
distribution obtained from a different color selection of
red-sequence galaxies as explained in Sect.
  \ref{sec:Interlopers}. {\it Right panel:} Same as for the left
  panel, but only for the cases where the fraction of interlopers is
  smaller than $5\%$. The dashed purple-black line shows the
  one-to-one relation, while solid green and red lines are the 16,
  50 and 84 percentiles.}
\label{fi:MAPS}
\end{figure*}
\begin{figure}
\hbox{
\psfig{file=map_fullc.ps,width=8cm}
}
\caption{Same as for the left panel of Fig.~\ref{fi:MAPS}, but
velocity dispersions were numerically corrected as described in
  the text. The dashed purple-black line shows the one-to-one
  relation, while solid green lines are the 16, 50 and 84
  percentiles. Note that with these corrections the dynamical
mass is an unbiased estimator of the true mass.}
\label{fi:MAPSc}
\end{figure}
\section{Discussion and Conclusions}
\label{sec:Concl}
We have examined the use of velocity dispersions for unbiased mass
estimation in galaxy clusters using the publicly available galaxy catalogue
produced with the semi-analytic model by De Lucia \& Blaizot (2007) coupled with
the N-body cosmological Millennium Simulation (Springel et al. 2005).
In particular, we selected all galaxies in the SAM
with stellar mass larger than $10^8 \,{\rm M}_\odot$ and analysed a sample
consisting of more than $\sim 20000$ galaxy clusters with $\mbox{$M_{\rm{vir}}$ } \geq
10^{14} \,{\rm M}_\odot$ up to $z \sim 1.2$ (Tab. \ref{t:clus}).
First we explore the properties of the full galaxy sample and then we
increase the level of complexity to mimic the spectroscopic selection
that is typically undertaken in real world studies of clusters.
Then we work through a series of controlled studies in an attempt
to disentangle the different effects leading to biases and enhanced scatter
in velocity dispersion mass estimates. Ultimately our goal is to inform the
dispersion based mass calibration of the SPT cluster sample (\citealt{bazin12}),
but we explore a broad range in selection in hopes that our results will
be of general use to the community.
Our primary conclusions for the full subhalo population are:
\begin{itemize}
\item We measure the galaxy (i.e. subhalo) velocity dispersion mass
  relation and show that it has low scatter ($\sim0.14$ in $\ln(M)$)
  and that subhalo dispersions are $\lesssim$3\% lower than the DM
  dispersions in \citet{evrard08}. This difference corresponds to a $\lesssim
  10$\% bias in mass for our halos if the DM dispersion--mass
  relation is used, and is consistent with previous determinations
  of subhalo velocity bias.
\item We explore line of sight velocity dispersions of the full galaxy
populations within the cluster ensemble and confirm that the
triaxiality of the velocity dispersion ellipsoid is the dominant
contributor to the characteristic $\sim$35\% scatter in dispersion
based mass estimates. We show that this scatter increases with
redshift as $\sigma(z)\simeq 0.3+0.075z$.
\item We measure the principal axes and axial ratios of the spatial
galaxy distribution ellipsoid, showing that there is a slight
($\sim5$\%) preference for prolate distributions; this property has
no clear variation with mass or redshift. We examine the line of
  sight velocity dispersions along the principal axes, showing that
the slight preference toward prolate geometries translates into a
slight ($\sim1$\%) bias in the dispersion mass estimates extracted
from line of sight measures.
\end{itemize}
Our primary conclusions for the spectroscopic subsamples of subhalos are:
\begin{itemize}
\item We characterize the bias (Eqn.~\ref{eq:meandyn}) and the
scatter (Eqn.~\ref{eq:sigmadyn}) in the line of sight velocity
dispersion introduced by selecting a subset $N_{gal}$ of the most
luminous red sequence galaxies within a cluster. The bias is
significant for samples with $N_{gal}<30$ and is likely due to
dynamical friction of these most massive subhalos. The scatter
cannot be fully explained through a combination of intrinsic
scatter in the relation between mass and the 3D dispersion of all
galaxies (i.e. departures from equilibrium), scatter of the line of
  sight dispersion around the 3D dispersion (halo triaxiality) and
Poisson noise associated with the number of subhalos $N_{gal}$.
A further component of scatter due to the presence of a dynamically cold
population of luminous red-sequence galaxies is needed to explain the
full measured scatter.
\item We explore the impact of interlopers by creating spectroscopic
samples using (1) red sequence color selection, (2) a maximum
projected separation from the cluster center, and (3) $N$-sigma
outlier rejection in line of sight velocity. In these samples the
interloper fraction (contamination) can be significant, growing from
  $\sim 10\%$ at the projected virial radius to $\sim$35\% at twice
  the projected virial radius. The contamination fraction has a much
weaker dependency on the sample size $N_{gal}$. We explore the
dependence on mass and cluster redshift, showing that within a fixed
aperture, contamination is a factor of $\sim 2$ worse at redshift
$z\sim 1$ than at $z=0$. Furthermore, we show that the fraction of
interlopers is a steeper function of aperture for low mass clusters,
but that at fixed redshift contamination does not change
significantly with mass. We show that contamination is significant
even if a more sophisticated caustic approach is used to reject
interlopers, demonstrating that even clusters with large numbers of
spectroscopic redshifts for red sequence selected galaxies suffer
contamination from non-cluster galaxies. We further study how
interlopers are affecting the estimated velocity bias. We find that
the velocity bias has a minimum if computed within $ R_\perp \sim
\mbox{$R_{\rm{vir}}$}$. This is due to the balancing effect of larger intrinsic
velocity bias at smaller radii and larger contamination at larger
radii. Furthermore, we show that if velocity dispersions are
computed within projected aperture $R_\perp$ larger than $\sim
\mbox{$R_{\rm{vir}}$}$, the velocity bias is a steeper function of $R_\perp$ for
higher redshifts and lower cluster mass, as expected from the
contamination fraction.
\item We study how changing the color selection affects the fraction
of interlopers and the subsequent effect on the estimated velocity
dispersion and dynamical masses. We find that doubling the width of
the color selection window centered on the red sequence has only a
modest impact on the interloper fraction. The primary effect of
changing the color selection is on the filtering of catastrophic
outliers. This results in changes to the estimated velocity
dispersion virial mass relation at the level of 1\% in mass. We
also show that uncertainties in the color selection are more
important for low mass clusters than for the high mass end of the
cluster population, which is because the dispersions of low mass
clusters are more sensitive to catastrophic outliers. The rather
weak dependence of the dispersion based mass estimates on the
details of the color selection suggests also that uncertainties in
the star formation histories (and therefore colors) of galaxy
populations in and around clusters are not an insurmountable
challenge for developing unbiased cluster mass estimates from
velocity dispersions.
\item We present a model to produce unbiased line of sight dispersion
based mass estimates, correcting for interlopers and velocity bias.
We also present the probability distribution function for the
scatter of the mass estimates around the virial mass. These two
data products can be used together with a selection model describing
real world cluster dispersion measurements to enable accurate
cluster mass calibration.
\end{itemize}
In a companion paper, \citet{bazin12} apply this model in the
dispersion mass calibration of the SPT Sunyaev-Zel'dovich effect
selected cluster sample. We identify the following key remaining
challenges in using dispersions for precise and accurate mass
calibration of cluster cosmology samples. Surprisingly, the largest
systematic uncertainty has to be ascribed to our relatively poor
knowledge of the velocity bias between galaxies or subhalos and DM. A
conservative estimate of this systematic is at the level of $< 5\%$
and arises from the comparison of different simulations and different
algorithms for subhalo identification (e.g. \citealt{evrard08}). The
systematic uncertainty in the color selection of galaxies and its
subsequent mapping between line of sight velocity dispersion and mass
is at a relatively smaller level. Indeed, we can estimate it at a $<1\%$
level for samples selected as the ones described in \citet{bazin12},
despite the fact that galaxy formation models involve a range of
complex physical processes. In other words, systematics in predicting
galaxy properties (e.g. luminosity, colors, etc.) due to subgrid
physics associated with magnetic fields, AGN and supernova feedback,
radiative cooling and the details of star formation, do not appear to
significantly change the spectroscopic sample selection. On the other
hand, simulations including different physical treatments of gravity
affect the dynamics of the spectroscopically selected sample at a
higher level than we expected. Given that the current dominant
contributor to the systematics floor is an issue associated with the
treatment of gravitational physics, there are reasons to be optimistic
that future simulations will be able to reduce the current systematics
floor.
We acknowledge Jonathan Ruel for very useful discussions and support
from the Deutsche Forschungsgemeinschaft funded Excellence Cluster
Universe and the trans-regio program TR33: Dark Universe.
\label{lastpage}
\bibliographystyle{apj}
|
2,869,038,153,908 | arxiv | \section{Introduction}\label{sec:intro}
A common task is the optimization problem
\begin{align}\label{eq:loss}
\argmin_{\theta } \mathbb{E}[L_\theta(X,Y)],
\end{align}
where $L_\theta$ is a generic loss function and the expectation is taken with respect to the joint distribution of $(X,Y)$; e.g.~$L_\theta(x,y)=(x^\top\theta - y)^2 + \lambda|\theta|_1$ for LASSO \cite{Tibshirani1996,Murphy2012}. %
In practice, the distribution of $(X,Y)$ is not known and one approximates it with the empirical measure $\mu=\frac{1}{N}\sum_{i=1}^N \delta_{(x_i,y_i)}$ built from $N$ samples $(x_i,y_i)$ of the pair $(X,Y)$.
That is, one minimizes the so-called empirical risk
\begin{align}\label{eq:emp loss}
\theta^\star:= \argmin_{\theta } \mathbb{E}[L_{\theta}(Z)], \qquad \mathbb{E}[L_{\theta}(Z)] = \frac{1}{N}\sum_{i=1}^N L_\theta(z_i),
\end{align}
where $Z$ denotes the discrete random variable that takes the values $z_i=(x_i,y_i)$ with equal probability.
If $L_{\theta}$ is smooth with bounded derivatives and convex, standard gradient descent (GD),
\begin{align}\label{eq:GD1}
\theta_{j+1}-\theta_{j}%
=-\frac{\gamma}{N}\sum_{i=1}^N\nabla_{\theta}L_{\theta}(z_i)\Big|_{\theta=\theta_{j}}
\end{align}
converges if the learning rate $\gamma$ is appropriately chosen, $\lim_{j \rightarrow \infty} \theta_j =\theta^\star$~\cite{Nesterov2018}.
However, for large-scale problems when the number of samples $N$ is huge, the evaluation of the gradient in
Equation~\eqref{eq:GD1} in every iteration step is prohibitive.
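In code, one full-batch step is a single pass over all $N$ samples; a
minimal sketch in Python (the callback \texttt{grad} is assumed to
return the average of the $N$ per-sample gradients, so each call
costs ${O}(N)$):
\begin{verbatim}
import numpy as np

def gradient_descent(grad, theta0, gamma, n_iter):
    # grad(theta) = (1/N) * sum_i grad L_theta(z_i): O(N) per call.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        theta = theta - gamma * grad(theta)
    return theta
\end{verbatim}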
\subsection{Related Literature}
Popular approaches to reduce the cost in each iteration step are the so-called stochastic gradient descent (SGD) algorithms.
The gradient is approximated by selecting at each iteration step $j$ a subset of the $N$ points at random.
Solving the minimization problem~\eqref{eq:emp loss} via (S)GD has a long history and is a research topic that is still rapidly evolving; we refer to \cite{Robbins1951,Bottou2004,bottou2010large,ruder2016overview,Bottou2018} for a general overview.
Our work is inspired by variants of SGD that produce better estimators for the expected gradient than the naive estimator given by subsampling a batch of data points uniformly at random.
This reduces the variance in each step and often guarantees a better convergence rate.
Popular examples of such SGD variants include stochastic average gradients (SAG) \cite{Roux2012}, Iterate Averaging \cite{Polyak1992}, Incremental Aggregated Gradient \cite{Blatt2007}. %
Our work relies on replacing the empirical measure $\mu=\frac{1}{N}\sum_{i=1}^N\delta_{z_i}$ by a measure $\hat \mu$ with much smaller support,
which however has the property that $\mathbb{E}_{Z \sim \mu}[\nabla_\theta L_\theta (Z)] = \mathbb{E}_{Z \sim \hat \mu}[\nabla_\theta L_\theta (Z)]$ at carefully selected iteration steps.
The construction of this reduced measure $\hat \mu$ that matches certain expectations of $\mu$ is known as the {\it recombination problem}.
Algorithms to construct this reduced measure by solving a constrained linear system have been known for a long time~\cite{Davis1967}, and recently more efficient algorithms have been developed~\cite{Litterer2012,maria2016a,Maalouf2019,Cosentino2020}.
In particular, we rely on \cite{Cosentino2020} that shows strong performance when the number of samples $N$ is very large.
In the second part of this paper we combine our proposed Caratheodory subsampling with Block Coordinate Descent (BCD) to make Caratheodory sampling effective when $\theta$ is high-dimensional.
Generally, under block separability assumptions on the regularization term and usual conditions on the principal part of the loss function, e.g.~convexity or the Polyak-Lojasiewicz condition, convergence has been proved and rates of convergence have been established, e.g.~\cite{Nutini2017,Csiba2017}.
The papers \cite{Tseng2007,Tseng2008} study how the smoothness assumptions can be relaxed.
Applications of BCD techniques have been studied in sparse settings~\cite{Nutini2017}, large scale Gaussian process regression~\cite{Bo2012}, L1 Regularized Least Squares~\cite{Santis2016}, Group LASSO~\cite{Meier2008,Qin2013}, Training Support Vector Machines~\cite{Platt1998},
matrix and tensor factorization~\cite{Xu2013,Yu2012} and other works~\cite{Ene2015,Sardy2000,Thoppe2014}.
\subsection{Contribution}
Instead of approximating the sum $\mathbb{E}[L_\theta(Z)] = \frac{1}{N} \sum_{i=1}^N L_\theta(z_i)$ by subsampling, we construct at certain steps $j$ in the GD iteration of $\theta_j$ a new probability measure $\hat \mu_j$ supported on a \emph{very small subset} of the original $N$ atoms.
This measure $\hat \mu$, however, matches certain statistical functions of the empirical measure $\mu$; in particular, we can choose it such that $\mathbb{E}_{Z \sim \mu}[L_{\theta_j}(Z)] = \mathbb{E}_{Z \sim \hat \mu}[L_{\theta_j}(Z)]$.
The construction of $\hat \mu$ is also known as the \emph{recombination problem} and we use the recent algorithm \cite{Cosentino2020} which scales well in the regime where the number of samples $N$ is large.
Although it can be relatively costly to carry out the recombination at a given step, in return the gradient is perfectly matched at this step and the expectation can be computed as a sum over $n$ weighted points rather than $N\gg n$ uniformly weighted points.
We balance this tradeoff by combining two techniques:
\begin{enumerate*}[label=(\roman*)]
\item \label{itm:control} By using an approximation to the Hessian to derive a control statistic that tells us when to carry out the recombination step.
In practice, this allows to do only a few recombination computations over the whole descent trajectory.
\item \label{itm:bc} By using Block coordinate descent (BCD) to carry out the reduction only for a subset of the %
coordinates of the gradient. This makes a recombination cheap even if $\theta$ is high-dimensional.
\end{enumerate*}
\subsection{Outline}
Section~\ref{sec:background} introduces the theoretical background on recombination.
Section \ref{sec:GD} provides the main theoretical results and a first application %
to logistic regression;
Section \ref{sec:BCD} then {recalls BCD techniques}, and shows how this allows to efficiently carry out Caratheodory subsampling for high-dimensional $\theta$.
Further, it benchmarks the resulting Carath\'eodory BCD (CaBCD) against classic SGD algorithms SAG and ADAM.
Section \ref{sec:BCD} also compares the rules regarding the selection of the coordinates' blocks introduced in~\cite{Nutini2017} with the same
rules when the Carath\'eodory Sampling is applied.
A Python implementation for all of our experiments can be found at \url{https://github.com/FraCose/Caratheodory_GD_Acceleration}.
\section{The Recombination Problem}\label{sec:background}
We now recall a classic result which shows that for any discrete random variable that can take $N$ different values, there exists another discrete random variable that only takes values in a subset of $n+1$ of the original $N$ points that has the same statistics as defined by $n$ functions $f_1,\ldots,f_n$. %
\begin{theorem}[Carath\'eodory~\cite{DeLoera2013} ]\label{th:cath}
Given a set of $N > n+1$ points in $\mathbb{R}^n$ and a point $z$ that lies in the convex hull of these $N$ points,
$z$ can be expressed as a convex combination of at most $n+1$ of these points.
\end{theorem}
As is well-known, this implies Tchakaloff's Theorem~\ref{th:tchakalof}~\cite{Tchak} for the special case of discrete measures:
given $n$ functions $f_1,\ldots,f_n:\mathcal{Z} \rightarrow \mathbb{R}$ define $F: \mathcal{Z} \rightarrow \mathbb{R}^n$ as $F(z):=(f_1(z),\ldots,f_n(z))$.
Now given a discrete probability measure $\mu$ on $\mathcal{Z}$ that is supported on $N$ atoms $z_1,\ldots,z_N \in\mathcal{Z}$, it follows that $\mathbb{E}_{Z\sim \mu }[F(Z)]= \sum_{i=1}^N F(z_i)\mu(z_i)$.
Since this finite sum
defines a point within the convex hull of the set of $N$ points $\mathbf{z}:=\{F(z_i)\}_{i=1}^N$, it follows by Carath\'eodory's Theorem that this point can be equivalently expressed as a convex combination of a subset $\hat \mathbf{z}$ of $\mathbf{z}$ comprising at most $n+1$ points.
As first shown by Tchakaloff, this shows that Theorem~\ref{th:cath} implies the following recombination result.
\begin{theorem}[Tchakaloff~\cite{Tchak}]\label{th:tchakalof}
Let $Z$ be a discrete random variable that can take $N$ values $\{z_1,\ldots,z_N\}$.
For any set $\{f_1,\ldots,f_n\}$ of $n$ real-valued functions there exists a random variable $\hat Z$ such that
\begin{align}\label{eq:F}
\mathbb{E}[f_i(Z)] = \mathbb{E}[f_i(\hat Z)]\quad \text{ for every } i=1, \ldots, n,
\end{align}
and $\hat Z$ only takes values in a subset of $\{z_1,\ldots,z_N\}$ of cardinality at most $n+1$.
We refer to $\hat Z$ as a reduction or recombination of $Z$.
\end{theorem}
Tchakaloff~\cite{Tchak} showed a more general version for continuous random variables, but in this work the above result for the discrete setting is sufficient.
In our context of the optimization problem~\eqref{eq:emp loss}, we will apply it with $Z=(X,Y)$ denoting a pair consisting of observations $X$ and labels $Y$.
The above derivation already implies an algorithm to calculate $\hat Z$ by finding the subset $\hat \mathbf{z}$: it solves a constrained linear system $N-n-1$ times, see~\cite{Davis1967} for details.
More recently, algorithms have been devised that exploit a divide and conquer strategy and reduce the complexity of the needed calculations drastically, requiring ${O}(Nn+\log(N/n)n^4)$~\cite{Litterer2012,Maalouf2019} respectively ${O}(Nn+\log(N/n)n^3)$~\cite{maria2016a} operations.
Throughout we use~\cite{Cosentino2020} to construct $\hat \mu$ by a geometric greedy sampling which is advantageous when $N\gg n$.
In particular, what makes the algorithm \cite{Cosentino2020} suitable in contrast to other recombination algorithms \cite{Litterer2012,Maalouf2019,maria2016a} is that although it has a similar worst case complexity, it has a much better average case complexity.
However, we emphasize that the ideas below are independent of the choice of the concrete recombination algorithm and any improvement on recombination algorithms will result in an improvement of Caratheodory's subsampling for SGD.
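For reference, a basic (non-optimised) recombination in the spirit
of~\cite{Davis1967} fits in a few lines of Python: it repeatedly
moves mass along a null direction of the moment matrix until an atom
vanishes. This is only a naive sketch, not the greedy algorithm
of~\cite{Cosentino2020} used in our experiments:
\begin{verbatim}
import numpy as np

def recombination(points, weights, tol=1e-12):
    # Reduce sum_i w_i * delta_{points[i]} (points in R^n) to a
    # measure on <= n+1 atoms with the same mean and total mass.
    idx = np.arange(len(weights))
    w = np.asarray(weights, dtype=float).copy()
    n = points.shape[1]
    while idx.size > n + 1:
        # Rows: the n coordinate functions plus the constant 1.
        A = np.vstack([points[idx].T, np.ones(idx.size)])
        v = np.linalg.svd(A)[2][-1]    # null vector, A @ v = 0
        if not np.any(v > tol):
            v = -v                     # ensure a positive entry
        alpha = np.min(w[idx][v > tol] / v[v > tol])
        w_new = w[idx] - alpha * v     # expectations unchanged
        w_new[np.argmin(w_new)] = 0.0  # the vanishing atom
        w[idx] = w_new
        idx = idx[w[idx] > tol]
    return idx, w[idx]

# e.g.: keep, w_hat = recombination(F_of_z, np.full(N, 1.0 / N))
\end{verbatim}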
\section{Carath\'eodory Gradient Descent (CaGD)}\label{sec:GD}
Given a dataset $\{(x_i,y_i):i=1,\ldots,N\}$ consisting of $N$ observations $x_i$ with labels $y_i$, we denote by $Z$ the discrete random variable that takes the value $z_i=(x_i,y_i)$ with probability $\frac{1}{N}$.
That is, the empirical risk~\eqref{eq:emp loss} at $\theta$ equals $\mathbb{E}[L_\theta(Z)]$.
Further, denote with $G(\theta,z):=\nabla_{\theta}L_\theta(z) \in \mathbb{R}^n$ the gradient at $\theta$ and with $H(\theta,z):=\nabla^2_{\theta}L_\theta(z) $ the Hessian.
With this notation, the usual GD iteration reads as
\begin{align}
\theta_{j+1} := \theta_j - \gamma \mathbb{E}[G(\theta_j,Z)],
\end{align}
and converges, under assumptions which we recall below, to the minimum $\theta^\star$ as given in~\eqref{eq:emp loss}, see \cite{Nesterov2018}.
However, in every descent step $j$ the evaluation of the sum \[\mathbb{E}[G(\theta,Z)]=\frac{1}{N}\sum_{i=1}^N G(\theta,(x_i, y_i))\] can be costly.
Below we use Theorem~\ref{th:tchakalof} to derive a similar iteration $(\hat \theta_j)$, that also converges to $\theta^\star$, which however avoids the evaluation of $\mathbb{E}[G(\theta_j,Z)]$ at most of the steps.
\paragraph{The first recombination step.}
Initialize $\theta_0 \in \mathbb{R}^n$ as before in (S)GD, but before the first step, apply Theorem~\ref{th:tchakalof} to $Z$ and the $n$ coordinate functions of the gradient $z \mapsto G(\theta_0,z)$ to produce a discrete random variable $\hat Z_0$.
By construction, this random variable $\hat Z_0$ has at most $n+1$ different possible outcomes and these outcomes are part of the original dataset $\{z_i=(x_i,y_i), i=1,\ldots,N\}$.
Now define the first descent step
\begin{align}
\hat \theta_{1}:= \theta_0 - \gamma \mathbb{E}[G(\theta_0, \hat Z_0)].
\end{align}
Since by construction, $\mathbb{E}[G(\theta_0,Z)] = \mathbb{E}[G(\theta_0,\hat Z_0) ]$, it follows that $\hat \theta_1 = \theta_1$.
In general $\mathbb{E}[G(\hat\theta_1,Z)] \neq \mathbb{E}[G(\hat \theta_1,\hat Z_0)]$ but the intuition is that $\mathbb{E}[G(\hat \theta_1,\hat Z_0)]$ is a good approximation of $ \mathbb{E}[G(\hat \theta_1, Z) ]$ for reasonable choices of $\gamma$.
Hence, we continue to iterate
\begin{align}\label{eq:gdzro}
\hat \theta_{j+1}:= \hat \theta_j - \gamma \mathbb{E}[G(\hat \theta_j, \hat Z_0)].
\end{align}
until the first time $\tau_1$ a control statistic tells us that the gradient error has become too large.
\paragraph{A control statistic.}
Let $L$ be convex and twice differentiable with Lipschitz gradient; we show below that
a natural choice for the control statistic is the quantity
\begin{align}\label{eq:Delta_def}
\Delta_{j,0}:=\mathbb{E}[G(\theta_{0},\hat{Z}_{0})]\cdot(\hat{\theta}_{j}-\theta_{0})+\frac{c}{2}\|\hat{\theta}_{j}-\theta_{0}\|^{2},
\end{align}
where $c$ is such that %
$v^\top H(\theta,z) v \le c\,\|v\|^{2}$ for every $v \in \mathbb{R}^n$; the existence of such a $c$ is justified by the assumptions on $L$.
More precisely, $\Delta_{j,0}< \Delta_{j-1,0}$ guarantees that the loss function $L$ continues to decrease.
Hence, we follow the iteration~\eqref{eq:gdzro} until $\Delta_{j,0}\geq \Delta_{j-1,0}$, that is until step $\tau_1:=\inf \{j>0: \Delta_{j,0}\geq \Delta_{j-1,0}\}$, where we fix $\Delta_{0,0}:=0$.
At time $\tau_1$ we then simply update $\hat Z_0$ to $\hat Z_1$ so that the gradients are matched at the point $\hat\theta_{\tau_1-1}$,
that is $\hat Z_1$ is such that $\mathbb{E}[G(\hat \theta_{\tau_{1}-1},Z)]=\mathbb{E}[G(\hat \theta_{\tau_1-1},\hat Z_1)]$, and then we continue as before.
\paragraph{CaGD in a nutshell.}
To sum up, we set $\tau_0:=0$, $\Delta_{0,0}=0$, and construct $\hat Z_0$ such that $\mathbb{E}[G(\theta_0,Z)]=\mathbb{E}[G(\theta_0,\hat Z_0)]$.
We then update, for $j\geq 0$,
\begin{align}
\hat \theta_{j+1}:= \hat \theta_j- \gamma \mathbb{E}[G(\hat \theta_j , \hat Z_0)] \text{ as long as } \Delta_{j,0}<\Delta_{j-1,0}.
\end{align}
At time $\tau_1$ %
we compute $\hat Z_1$ such that
\begin{align}
\mathbb{E}[G(\hat \theta_{\tau_1-1}, Z)]= \mathbb{E}[G(\hat \theta_{\tau_1-1}, \hat Z_1)]
\end{align}
and update for $j \ge \tau_1-1$
\begin{align}
\hat \theta_{j+1}:= \hat \theta_j- \gamma \mathbb{E}[G(\hat \theta_j, \hat Z_1)], \text{ as long as } \Delta_{j,1}< \Delta_{j-1,1}
\end{align}
where $\Delta_{j,1}:=\mathbb{E}[G(\hat{\theta}_{\tau_{1}-1},\hat{Z}_{1})]\cdot(\hat{\theta}_{j}-\hat{\theta}_{\tau_{1}-1})+\frac{c}{2}\|\hat{\theta}_{j}-\hat{\theta}_{\tau_{1}-1}\|^{2}$ and $\Delta_{\tau_1-1,1}=0$.
At time $\tau_2:=\inf \{j>\tau_1: \Delta_{j,1} \ge \Delta_{j-1,1}\}$ we compute $\hat Z_2$ such that $ \mathbb{E}[G(\hat \theta_{\tau_2-1}, Z)]= \mathbb{E}[G(\hat \theta_{\tau_2-1}, \hat Z_2)]$, etc.
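The loop above can be condensed into a short sketch; we take the
least-squares loss for concreteness, reuse the \texttt{recombination}
routine sketched in Section~\ref{sec:background}, omit the safeguards
of Algorithm~\ref{algo:Ca-accel} below, and assume a curvature bound
$c$ is supplied (for least squares, twice the largest eigenvalue of
$X^{\top}X/N$):
\begin{verbatim}
import numpy as np

def grads(theta, X, y):
    # Per-sample gradients of (x^T theta - y)^2, one row each.
    return 2.0 * (X @ theta - y)[:, None] * X

def cagd(X, y, gamma, c, eps=1e-8, it_max=10000, it_max_ca=500):
    N, n = X.shape
    theta, j = np.zeros(n), 0
    while j < it_max:
        g_full = grads(theta, X, y).mean(axis=0)
        if np.linalg.norm(g_full) <= eps:
            break
        # Reduced measure matching the current full gradient.
        keep, w = recombination(grads(theta, X, y),
                                np.full(N, 1.0 / N))
        theta_tau, d_prev = theta.copy(), 0.0
        for _ in range(it_max_ca):
            g_hat = w @ grads(theta, X[keep], y[keep])
            theta = theta - gamma * g_hat
            j += 1
            d = g_full @ (theta - theta_tau) \
                + 0.5 * c * np.sum((theta - theta_tau) ** 2)
            if d >= d_prev:   # control statistic: refresh measure
                break
            d_prev = d
    return theta
\end{verbatim}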
\subsubsection{Convergence and convergence rate}
The above is the main structure of our first algorithm, denoted as
Carath\'eodory Gradient descent (CaGD).
However, we add three further modifications. First, we stop as soon as the gradient or the value of the loss function is smaller than a given $\epsilon$ since this means we are already close enough to the minimum;
second, we bound the number of iterations between two recombinations by a constant, that is $\tau_{k+1}-\tau_k \le \operatorname{it\_max\_Ca} $, to {avoid pathological cases, see Theorem~\ref{th:general_conv} for more details};
third, we allow to match a general oracle direction $D_j$ at step $j$.
The choice $D_j=-\mathbb{E}[G(\hat \theta_j, Z)]$ is the most relevant for this section, but the general oracle formulation allows to use more involved choices; e.g. momentum strategies in Section \ref{sec:BCD}.
This leads to Algorithm~\ref{algo:Ca-accel}.
In Algorithm \ref{algo:Ca-accel} we write $D_j(\{\theta\},Z)$ to express its dependencies on the data $Z$ and the sequence $\{ \theta\}$ computed
up to the step $j$, although it could depend also on the loss function $L$, in particular it could depend on its derivatives $G$, $H$, etc.
Theorem~\ref{th:general_conv} shows that it converges whenever we match oracle directions.%
\begin{algorithm}\caption{Carath\'eodory Sampling Acceleration}\label{algo:Ca-accel}
\begin{algorithmic}[1]
\State{Initialize $\hat \theta_0 $}
\State{$j \gets 1$, $k \gets 0$}\Comment{$j$ counts steps, $k+1$ the number of recombinations}
\State{$\tau_0 \gets 0$} \Comment{$\tau_k$ is the step we made with the $(k+1)$th recombination}
\State{{$\Delta_{\tau_0,0} \gets 0$}}
\State{{$\operatorname{Grad}_0\gets \mathbb{E}[G(\hat \theta_{\tau_{0}}, Z)]$}}
\While{{($\|\operatorname{Grad}_{\tau_k} \|>\epsilon_1$\textbf{ or }$|L(\hat\theta_{\tau_k},Z) |>\epsilon_2$)} \textbf{and} $j\leq$ it\_max }
\State{{Compute $\hat Z_k$ such that $\ensuremath{\mathbb{E}[D_{\tau_{k}}(\{\hat{\theta}\},\hat{Z}_{k})]=\mathbb{E}[D_{\tau_{k}}(\{\hat{\theta}\},Z)]}$}}\label{step:compute_meas}
\Comment{%
Reduce $Z$ to $\hat Z_k$}
\While{$\Delta_{j,k}<\Delta_{j-1,k}$ \textbf{and} $j-\tau_k\leq$ it\_max\_Ca }
\State{$\hat \theta_{j}\gets \hat \theta_{j-1}+\gamma\mathbb{E}[D_{j-1}(\{\hat \theta\},\hat Z_{k})]$ }\label{step:compute_reduced_gr}
\State{$\Delta_{j,k} \gets \operatorname{Grad}_{\tau_k} \cdot (\hat{\theta}_{j}-\hat{\theta}_{\tau_{k}}) + \frac{c}{2} \|\hat{\theta}_{j} - \hat{\theta}_{\tau_{k}}\|^{2}$}\label{step:statistic_control}
\State{$j\gets j+1$ }%
\EndWhile
\If{$j-\tau_k \not= \operatorname{it\_max\_Ca}$ }\label{step:strange_if}
\State{$\tau_k, j \gets j-1$}
\Else \State{$\tau_k, j \gets j$}
\EndIf
\State{{$\operatorname{Grad}_{\tau_k}\gets \mathbb{E}[G(\hat \theta_{\tau_{k}}, Z)]$, $\quad\Delta_{\tau_k,k}\gets0$}}\label{step:full_gradient}
\State{$k \gets k+1$}
\EndWhile
\State{\textbf{return} $j$, $\hat \theta_j$}
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{th:general_conv}
{Let $L_\theta$ be convex, twice differentiable in $\theta$ and its gradient $G$ be Lipschitz.
If the quantities $\{\theta_j\}$ defined as
\[
\theta_{j}-\theta_{j-1}=\gamma\mathbb{E}[D_{j-1}(\{\theta\},Z)]
\]
converge to the minimum $\theta^*$, i.e.~$\lim_{j\to\infty}\theta_j =\theta^*$, then also the sequence of $\{\hat\theta\}$ computed via Algorithm \ref{algo:Ca-accel} converges to $\theta^*$, $\lim_{j\to\infty}\hat\theta_j =\theta^*$. }
\end{theorem}
\begin{proof}%
Thanks to the hypothesis there exists $c$ s.t.%
\begin{align}
\mathbb{E}[L(\hat{\theta}_{j},Z)]\!=&\mathbb{E}[L(\hat{\theta}_{0},Z)]\!+\!\mathbb{E}[G(\hat{\theta}_{0},Z)]\!\cdot\!(\hat{\theta}_{j}\!-\!\hat{\theta}_{0})\!+\!\dfrac{1}{2}(\hat{\theta}_{j}\!-\!\hat{\theta}_{0})^{\top}\!\!\cdot\!\mathbb{E}[H(\bar{\theta},Z)]\!\cdot\!(\hat{\theta}_{j}\!-\!\hat{\theta}_{0})\\\le&\mathbb{E}[L(\hat{\theta}_{0},Z)]+\mathbb{E}[G(\hat{\theta}_{0},Z)]\cdot(\hat{\theta}_{j}-\hat{\theta}_{0})+\frac{c}{2}\|\hat{\theta}_{j}-\hat{\theta}_{0}\|^{2},\quad\text{ for }j\geq0,
\end{align}
where $\bar\theta$ is a convex combination of $\hat\theta_{j}$ and $\hat\theta_{0}$.
This yields a condition to check in order to decide when to rebuild the measure:
we update the measure after $\tau_1$ steps, where
\begin{align}
\tau_{1}:=&\inf\{j\geq1:\Delta_{j,0}\geq\Delta_{j-1,0}\}\\
\Delta_{j,0}:=&\mathbb{E}[G(\hat{\theta}_{0}, Z)]\cdot(\hat{\theta}_{j}-\hat{\theta}_{0})+\frac{c}{2}\|\hat{\theta}_{j}-\hat{\theta}_{0}\|^{2},
\end{align}
where $\Delta_{0,0}=0$.
We have that $\{\Delta_{0,0}, \Delta_{1,0},\ldots,\Delta_{\tau_1-1,0}\}$ is a negative, decreasing sequence, and therefore
\[
\mathbb{E}[L(\hat{\theta}_{\tau_{1}-1},Z)]\leq\mathbb{E}[L(\hat{\theta}_{0},Z)].
\]
In particular, note that $\Delta_{1,0}\leq0$, since $\hat{\theta}_{1}=\theta_{1}:=\theta_{0}+\gamma\mathbb{E}[D_{0}(\{\theta\},Z)]$ thanks to Theorem \ref{th:cath} and the definition of $\hat Z_0$; therefore
$\tau_1\geq 2$. The case $\tau_1-1 = 1$ means that the computed reduced r.v. $\hat Z_0$ has been useless: we have taken only one step with the reduced measure, a step we could have taken directly using $\mathbb{E}[D_{0}(\{\theta\},Z)]$ without computing the reduced measure.
The reasoning can be easily generalized: we can define for $k> 1$, and
$j \geq \tau_{k-1}$
\begin{align}
\Delta_{j,k}:=&\mathbb{E}[G(\hat{\theta}_{\tau_{k}-1}, Z)]\cdot(\hat{\theta}_{j}-\hat{\theta}_{\tau_{k}-1})+\frac{c}{2}\|\hat{\theta}_{j}-\hat{\theta}_{\tau_{k}-1}\|^{2}\\\tau_{k}:=&\inf\{j\geq\tau_{k-1}:\Delta_{j,k-1}\geq\Delta_{j-1,k-1}\},
\end{align}
where $\Delta_{\tau_{k}-1,k}=0$.
The proof of the convergence follows since, if
$\tau_k-\tau_{k-1}=2$, we follow the directions $D_j(\{\theta\},Z)$, which converge by hypothesis, whereas if $\tau_k-\tau_{k-1}> 2$ the value of $L$ decreases,
\[
\mathbb{E}[L(\hat{\theta}_{\tau_{k}-1},Z)]\leq\mathbb{E}[L(\hat{\theta}_{\tau_{k-1}-1},Z)].
\]
Moreover, to avoid pathological cases, e.g. $\Delta_{1,k}>\Delta_{2,k}>\ldots\searrow-a,$ $a>0$, in which case $L(\hat\theta_j,Z)$ cannot decrease ``enough'', we impose a maximum number of iterations that the algorithm can perform with the reduced measure.
\qed
\end{proof}
Theorem \ref{th:general_conv} can be easily extended to the case where the learning rate $\gamma$ is not fixed.
Theorem~\ref{th:convergence_CaGD} gives the convergence rate for the choice $D_j=-\mathbb{E}[G(\hat \theta_j, Z)]$.
\begin{theorem}\label{th:convergence_CaGD}
{Let $L_\theta$ be convex, twice differentiable in $\theta$ and its gradient $G$ be Lipschitz.} Then if $D_j=-G(\hat\theta_j,Z)$, Algorithm \ref{algo:Ca-accel} converges to $\theta^*$,
and its convergence rate is
\begin{align}\label{eq:Ca-GDcomplexity}
|L(\hat\theta_{j})-L(\theta^{*})|\leq\frac{1}{2\gamma} J \frac{\|\hat\theta_{0}-\theta^{*}\|^{2}}{j},
\end{align}
where $j$ is the number of iterations, and $J:=\max_{k:\tau_{k}\leq j}\{\tau_{k}-\tau_{k-1}\}$ is the maximum number of consecutive steps taken with a single reduced measure
(as per Algorithm \ref{algo:Ca-accel}, we can conservatively bound $J\leq\operatorname{it\_max\_Ca}$).
\end{theorem}
\begin{proof}%
The convergence is a simple application of Theorem \ref{th:general_conv}.
We can show that Algorithm \ref{algo:Ca-accel} does not reduce the order of convergence of the standard GD. Let us call $\hat\theta_i$ the
sequence of weights obtained by Algorithm \ref{algo:Ca-accel} in chronological order %
\begin{align}
\{\hat{\theta}_{0},\hat{\theta}_{1},\ldots,\hat{\theta}_{\tau_{1}-1},\hat{\theta}_{\tau_{1}},\ldots,\hat{\theta}_{\tau_{2}-1},\hat{\theta}_{\tau_{2}},\ldots\},
\end{align}
where, for $k>1$ (resp. $k=1$), $\tau_k$ indicates the number of steps taken with the reduced measure computed using $\theta_{\tau_{k-1}-1}$ (resp. $\theta_{0}$). Moreover, let us define a map $S$ that, for any step $j$, tells us
the index of the last recombination before step $j$, so $
S(j)=\max\{k: \tau_k\leq j\}$.
Let us recall that if the function is convex we have that
\[
L(\theta)\leq L(\theta^{*})+\nabla L(\theta)(\theta-\theta^{*})
\]
where $\theta^{*}$ is the minimum, moreover if $\{\theta_i\}$ are the weights computed using the standard GD, we can say that
\[
L(\theta_{i+1})\leq L(\theta_{i})-\frac{1}{2}\gamma\|\nabla L(\theta_{i})\|^{2},
\]
if $\gamma$ respects the usual conditions, i.e. $\gamma\leq 1/\text{Lip}(\nabla L)$, where $\text{Lip}(\nabla L)$ indicates the Lipschitz constant of the gradient $\nabla L$.
We know that $L(\hat{\theta}_{j})\leq L(\hat{\theta}_{\tau_{S(j)}-1})+\Delta_{j,\,S(j)}$
therefore, since $\Delta_{j,\,S(j)}\leq 0$%
\begin{align}
L(\hat{\theta}_{j})\leq L(\hat{\theta}_{\tau_{S(j)}})\leq L(\theta^{*})+\nabla L(\hat{\theta}_{\tau_{S(j)}-1})(\hat{\theta}_{\tau_{S(j)}-1}-\theta^{*})-\frac{1}{2}\gamma\|\nabla L(\hat{\theta}_{\tau_{S(j)}-1})\|^{2},
\end{align}
which, rearranging the terms and using that $\hat{\theta}_{\tau_{S(j)}}-\hat{\theta}_{\tau_{S(j)}-1}=-\gamma\mathbb{E}[G(\hat{\theta}_{\tau_{S(j)}-1}, Z)]=-\gamma\mathbb{E}[G(\hat{\theta}_{\tau_{S(j)}-1},\hat{Z}_{\tau_{S(j)}-1})]$, becomes
\begin{align}
L(\hat{\theta}_{j})-L(\theta^{*})\leq\frac{1}{2\gamma}\left(\|\hat{\theta}_{\tau_{S(j)}-1}-\theta^{*}\|^{2}-\|\hat{\theta}_{\tau_{S(j)}}-\theta^{*}\|^{2}\right).
\end{align}
Thus, %
\begin{align}
\sum_{l=1}^{j}L(\hat{\theta}_{l})-L(\theta^{*})\leq&\frac{1}{2\gamma}\sum_{l=1}^{j}\left(\|\hat{\theta}_{\tau_{S(l)}-1}-\theta^{*}\|^{2}-\|\hat{\theta}_{\tau_{S(l)}}-\theta^{*}\|^{2}\right)\\=&\frac{1}{2\gamma}\sum_{k:\tau_{k}\leq j}\left(\tau_{k}-\tau_{k-1}\right)\left(\|\hat{\theta}_{\tau_{k}-1}-\theta^{*}\|^{2}-\|\hat{\theta}_{\tau_{k}}-\theta^{*}\|^{2}\right)\\\leq&\frac{1}{2\gamma}\max_{k:\tau_{k}\leq j}\left\{ \tau_{k}-\tau_{k-1}\right\} \sum_{k:\tau_{k}\leq j}\left(\|\hat{\theta}_{\tau_{k}-1}-\theta^{*}\|^{2}-\|\hat{\theta}_{\tau_{k}}-\theta^{*}\|^{2}\right)\\\leq&\frac{1}{2\gamma}\max_{k:\tau_{k}\leq j}\left\{ \tau_{k}-\tau_{k-1}\right\} \|\hat{\theta}_{0}-\theta^{*}\|^{2}.
\end{align}
Therefore it holds that
\begin{align}
L(\hat{\theta}_{j})-L(\theta^{*})\leq\frac{1}{j}\sum_{l=1}^{j}L(\hat{\theta}_{l})-L(\theta^{*})\leq\frac{1}{2\gamma}\max_{k:\tau_{k}\leq j}\left\{ \tau_{k}-\tau_{k-1}\right\} \frac{\|\hat{\theta}_{0}-\theta^{*}\|^{2}}{j}.
\end{align}
\qed
\end{proof}
First, note that the bound in Equation~\eqref{eq:Ca-GDcomplexity} is conservative: it assumes that the objective function does not decrease during the steps taken with the reduced measures, although a decrease is enforced through Equation~\eqref{eq:Delta_def}.
Secondly, note that
the constant $c$ in Equation~\eqref{eq:Delta_def} in practice might be unknown and expensive to compute, and when known it might be quite conservative.
In our implementation we use an approximation of the second derivative,
so that $\Delta_{j,k}$ in Equation~\eqref{eq:Delta_def} becomes
\begin{align}
\Delta_{j,k}:=\ensuremath{\mathbb{E}[G(\hat{\theta}_{\tau_{k}},\hat{Z}_{k})]\cdot(\hat{\theta}_{j}-\hat{\theta}_{\tau_{k}})+\frac{1}{2}(\hat{\theta}_{j}-\hat{\theta}_{\tau_{k}})^{\top}\cdot\mathcal{H}_{k}\cdot(\hat{\theta}_{j}-\hat{\theta}_{\tau_{k}})},\quad j\ge\tau_{k},
\end{align}
where $ \mathcal{H}_{k}:=\left[\mathbb{E}[G(\hat{\theta}_{\tau_{k}},\hat{Z}_{k})]-\mathbb{E}[G(\hat{\theta}_{\tau_{k}-1},Z)]\right]^{\top}\cdot\left[1/(\hat{\theta}_{\tau_{k}}-\hat{\theta}_{\tau_{k}-1})\right] $
and
$[1/x]$ denotes a vector whose elements are the reciprocals of those in $[x]$.
To compute the terms $\Delta_{(\cdot,\cdot)}$ we modify Algorithm \ref{algo:Ca-accel} by performing two iterations in which $\mathbb{E} [G(\theta,Z)]$ is computed -- see Algorithms \ref{algo:CaBCD_GS} %
and \ref{algo:CaBCD_random} %
in Section~\ref{sec:BCD}.
We do not discuss how to optimally select $\gamma$, since there exists a broad literature on the optimal selection of the step size~\cite{Nesterov2018}.
\subsection{A first comparison of CaGD and GD: logistic regression}\label{sec:exp_GD}
Even comparing the complexity of CaGD, Algorithm \ref{algo:Ca-accel}, to that of standard GD is not trivial, since the number of recombination steps is not known a priori (the times $\tau_k$ that trigger a recombination computation depend on the data).
Furthermore, the worst-case complexity of the recombination algorithm itself is much worse than its average complexity, see~\cite{Cosentino2020}.
Hence, the intuition remains that for a sufficiently large number of samples $N$ and low-dimensional $\theta$, the total cost of computing the recombinations is negligible compared to evaluating the full gradient in each descent step.
We present three numerical experiments to test this intuition.
We use classic logistic regression for binary classification (it is easy to check that the assumptions of Theorem \ref{th:convergence_CaGD} are fulfilled in this case, see \cite[Exercise 8.3]{Murphy2012}) and synthetic data, which allows us to study various regimes of $N$.
We run both GD and CaGD until either the norm of the gradient is less than \num{1e-3}, or the number of iterations is greater than \num{1e4}.
\begin{figure}[hbt!]
\centering
\includegraphics[height=3.3cm,width=0.32\textwidth]{figure/sin_5000}
\includegraphics[height=3.3cm,width=0.32\textwidth]{figure/greater0_5000}
\includegraphics[height=3.3cm,width=0.32\textwidth]{figure/logistic_5000}
\includegraphics[height=3.3cm,width=0.315\textwidth]{figure/gd_ratio_sin_2sample}\hspace{1.5mm}
\includegraphics[height=3.3cm,width=0.315\textwidth]{figure/gd_ratio_greater0_2sample}\hspace{1.5mm}
\includegraphics[height=3.3cm,width=0.315\textwidth]{figure/gd_ratio_generated_log_2sample}
\caption{
Synthetic data generated by sampling from
(i) a uniform distribution, classified with a sine function,
(ii) a shifted exponential, classified as $1$ in the third octant, and $0$ otherwise,
(iii) a uniform distribution, classified with a logistic model with parameter $(-5;2)$.
The top row shows samples with $N = \num{5000}$.
The bottom row shows the ratios of running times between standard GD and CaGD, as a function of the step size for various sample sizes $N$.
}
\label{fig:2d}
\end{figure}
The results are shown in Figure \ref{fig:2d} and indicate that the improvement in the running time is up to $35$-fold.
Generally the improvement increases as the number of points $N$ increases and when the step size is small (the step of the GD must be small enough for the algorithm to converge to a minimum).
{A further advantage is that CaGD reaches lower gradient values than GD, because in Algorithm \ref{algo:Ca-accel} the ``true'' gradient $\mathbb{E}[G(\theta,Z)]$ is only computed at step~\ref{step:full_gradient}, while $\hat\theta_j$ is modified at step~\ref{step:compute_reduced_gr}.}
In these instances we have employed it\_max\_Ca$=\max\{10/\text{step},10^{4}\}$.
Of course, the real benchmarks are SGD variants like SAG and ADAM.
However, CaGD in its simple form above is not competitive to such SGD variants since its computational bottleneck is that the recombination step scales cubically in the dimension $n$ of $\theta$ which makes it infeasible for many datasets.
In Section~\ref{sec:BCD} below we combine CaGD with BCD to resolve this issue.
This then results in a competitive alternative to SGD variants like SAG and ADAM.
\section{Carath\'eodory Block Coordinate Descent}\label{sec:BCD}
The computational bottleneck of CaGD is in step~\ref{step:compute_meas}, where a recombination algorithm is run to compute $\hat Z$.
As mentioned before, the recombination algorithm \cite{maria2016a} has the worst case complexity $O(Nn+n^3\log(N/n))$ where $n$ is the dimension of $\theta$ and $N$ the number of samples.
However, unlike the other recombination algorithms, the greedy algorithm in \cite{Cosentino2020} is designed to have a much lower average complexity in $N$.
Unfortunately, in terms of the dimension $n$, it suffers like all the other recombination algorithms from the cubic term $n^3$ which makes CaGD not suitable for high-dimensional problems.
In this Section, we combine CaGD with Block Coordinate Descent (BCD) \cite{Nesterov2012,Wright2015,Richtarik2016,Nutini2017,Beck2013,Csiba2017} to resolve this cubic complexity.
This leverages the strengths of BCD in terms of computational efficiency and of CaGD in terms of low-variance estimators of the expected gradient.
\subsection{From Block Coordinate Descent to Carath\'eodory Block Coordinate Descent}
BCD selects in every descent step a small subset of the $n$ coordinates of $\theta$. This, in turn, allows us to apply the recombination algorithm to a small number of coordinates (typically the block size is between $2$ and $5$), so that the cubic complexity in the number of coordinates becomes negligible.
The core idea of BCD is to update only a subset of the coordinates at every step,%
\begin{align}
\theta_{j+1} = \theta_{j}-\gamma \mathbb{E}[G^{(B({j}))} (\theta_j,Z)],
\end{align}
where ${B}({j})$ denotes a set of coordinates, and $G^{(B({j}))}$ denotes the function that returns the gradient for the coordinates in ${B}({j})$
and sets it equal to $0$ for the other coordinates.
If the problem we want to solve is of the form
\begin{align}
\min_\theta \, L_\theta = \min_\theta (f(\theta)+g(\theta)),
\end{align}
where $f$ is convex and $g$ is (block) separable, i.e.~$g=\sum_{m=1}^b g_m$, $g_m:\mathbb{R}^{n_m}\to\mathbb{R}$ and $\sum_m n_m \leq n$, $b\leq n$,
then BCD converges to a minimum and the rate of convergence is that of standard GD, up to some constants depending on different factors, e.g.~the number of directions we update at any step, the strategy to choose such directions, the separability of the objective function $g$; see~\cite{Wright2015,Richtarik2016} for a detailed study.
Notable examples of optimisation problems with functions $(f,g)$ abiding by the previous condition are least-squares problems with LASSO regularisation~\cite{Richtarik2016,Nesterov2012,Fercoq2015}, which is where BCD appears to be more effective (we analyse this problem in the next subsection).
A well-studied aspect of BCD is its parallelisation~\cite{Fercoq2015,Liu2015,Bradley2011,You2016,Scherrer2012},
which can be studied in terms of the spectral radius of the data~\cite{Bradley2011}.
In the following, we focus on simple applications of BCD %
to highlight the improvements that are due to CaBCD, rather than BCD optimizations.
\subsection{Update rules}
An important aspect is how to select the directions
for the descent step~\cite{Nutini2017,Li2015,Richtarik2016a,Dhillon2011,Glasmachers2013,Lei2016,Li2018,Qu2016,Qu2016a,Beck2013}, and how many directions to select: cyclic versus acyclic, deterministic versus random, or via the Gauss-Southwell (GS) rule (discussed below).
The main difference amongst these options hinges on whether or not we can afford to compute the full gradient:
if we can, then the GS rule is expected to be the best strategy;
if we cannot, then the random acyclic strategy seems to perform better than
the deterministic cyclic ones \cite{Lee2018}.
Moreover, these strategies can be implemented in parallel, exploiting multiple processors.
We focus on the following two acyclic strategies, whose respective procedures are presented in Algorithms~\ref{algo:CaBCD_GS} and~\ref{algo:CaBCD_random}; {for a detailed comparison see~\cite{Nutini2015,Lee2018}}:
\begin{itemize}
\item \textit{Modified Gauss-Southwell (GS).}
If we can compute the full gradient,
then we can select directions where the gradient is larger in absolute value.
A rule of thumb is to consider only a percentage of the ``total'' value of the gradient (in our experiments we consider 75\%; see the sketch after this list):
\begin{enumerate}[label=(\roman*)]
\item Let us call $\nabla_S$ the vector of the absolute values of the components of the gradient, sorted in descending order, i.e.
$\big|\nabla L^{\left(\nabla_{S}^{(r)}\right)}\big|\geq\big|\nabla L^{\left(\nabla_{S}^{(q)}\right)}\big|$ if $r\leq q$;
\item We consider the directions where the gradient is larger in absolute value, namely the first $\hat n$ components of $\nabla_S$, where
\[
\hat n := \inf \left\{ q:\sum_{r=1}^{q} \left|\nabla L^{\left(\nabla_{S}^{(r)}\right)}\right|>\text{Percentage}\right\};
\]
\item We split the $\hat n$ directions into $b=\hat n/s$ blocks of size $s$, respecting the ordering.
\end{enumerate}
\item \textit{Random.}
If we cannot compute the full gradient,
then we can group the directions
into $n/s$ blocks of size $s$,
and perform BCD over the blocks.
In the experiments of the next subsection we randomly group half of the directions per iteration.
{The condition to terminate in Algorithm~\ref{algo:CaBCD_random} depends ``only'' on the loss function $L$ since we cannot compute the full gradient.}
\end{itemize}
In~\cite{Nesterov2012,Fercoq2015}, the selection of the coordinate step $\gamma$ is given as a sub-optimisation problem, which for simplicity we skip in the following. %
Furthermore,~\cite{Nesterov2012} shows that a momentum application to the single block of directions can improve the rate of convergence: knowing the convexity of the function to be optimised,
it is possible to obtain an accelerated method with an improved rate of convergence ${O}(1/j^2)$, where $j$ is the number of iterations, in the same spirit of~\cite{Nesterov1983}.
Our implementation has been done in a synchronous and parallel manner (cf.~discussion above).
\begin{algorithm}\caption{Carath\'eodory BCD - Modified Gauss-Southwell (GS) rule}\label{algo:CaBCD_GS}
\begin{algorithmic}[1]
\State{Initialize $\hat \theta_0 $}
\State{$j \gets 1$, $k \gets 0$
}\Comment{$j$ counts steps,
$\sum_{l=0}^{k}b_l$ counts the number of recombinations}
\State{$\operatorname{Grad}_0\gets \mathbb{E}[G(\hat\theta_{0},Z)] $}
\While{{($\|\operatorname{Grad}_{j-1} \|>\epsilon_1$\textbf{ or }$|L(\hat\theta_{j-1},Z) |>\epsilon_2$)} \textbf{and} $j\leq$ it\_max }
\State{$\hat \theta_{j} \gets \hat \theta_{j-1}+ \gamma \mathbb{E}[ D_{j-1}(\{\hat\theta\},Z)]$ }
\State{$\operatorname{Grad}_j\gets \mathbb{E}[G(\hat\theta_{j},Z)] $}
\State{Build $b_k$ blocks $B(m,k)$, $m=1,...,b_k$ using the $\mathbb{E}[G(\hat\theta_{j},Z)] $ and the GS rule}
\State{$\tau_k\gets j$}%
\State{$ j\gets j+1$}
\For{%
$m=1,...,b_k$,\textbf{ in parallel}}%
\State{$j_m\gets j$, $\quad \Delta_{\tau_k,k}^{(m)}\gets 0$}\Comment{$j_m-1=\tau_k$}
\State{$
\operatorname{Hessian}_{k}^{(m)}\gets\left[\operatorname{Grad}_{\tau_{k}}^{(m)}-\operatorname{Grad}_{\tau_{k}-1}^{(m)}\right]^{\top}\cdot\left[1/(\hat{\theta}_{\tau_{k}}^{(m)}-\hat{\theta}_{\tau_{k}-1}^{(m)})\right]$
}
\State{Compute $\hat{Z}_{k}^{(m)}$ s.t. $ \mathbb{E}\left[D_{\tau_{k}}^{(m)}\left(\{\hat{\theta}\},\hat{Z}_{k}^{(m)}\right)\right]=\mathbb{E}\left[D_{\tau_{k}}^{(m)}\left(\{\hat{\theta}\},Z\right)\right]$}
\While{$\Delta_{j_m,k}^{(m)}\leq\Delta_{j_m-1,k}^{(m)}$ %
\textbf{and} $j_m-\tau_k\leq$it\_max\_Ca }
\State{$\hat{\theta}_{j_m}^{(m)}\gets\hat{\theta}_{j_m-1}^{(m)}+\mathbb{E}\left[D_{j_m-1}^{(m)}\left(\{\hat\theta\},\hat{Z}_{k}^{(m)}\right)\right]$ }
\State{$\delta_{j_m,k}^{(m)}\gets \hat{\theta}_{j_{m}}^{(m)}-\hat{\theta}_{\tau_{k}}^{(m)}$}
\State{$\Delta_{j_{m},k}^{(m)}\gets\operatorname{Grad}_{\tau_{k}}^{(m)}\cdot\delta_{j_{m},k}^{(m)}+\left(\delta_{j_{m},k}^{(m)}\right)^{\top}\cdot\operatorname{Hessian}_{k}^{(m)}\cdot\delta_{j_{m},k}^{(m)}$}
\State{$j_m\gets j_m+1$}
\EndWhile
\If{$j_m-\tau_k \not= \operatorname{it\_max\_Ca}$ }
\State{$\tau_{m,k+1} \gets j_m-1$} \Comment{{$\tau_{m,k+1}-\tau_k$ steps in $(k+1)$th recombination relative to $B(m,j)$}}
\Else \State{$\tau_{m,k+1} \gets j_m$}
\EndIf
\EndFor
\State{$j\gets j+\sum_m \tau_{m,k+1}$}
\State{$ \hat\theta^{(m)}_j \gets \hat\theta^{(m)}_{\tau_{m,k+1}}, \quad \forall m$
}\Comment{synchronise and update $\hat\theta$}
\State{$\operatorname{Grad}_j\gets \mathbb{E}[G(\hat\theta_{j},Z)] $}
\State{$k\gets k+1, \quad$ $j\gets j+1$}
\EndWhile
\textbf{ and return} $j$, $\hat \theta_j$
\end{algorithmic}
{We write $\cdot^{(m)}$ in place of $\cdot^{(B(m,k))}$ to indicate the restriction to the components in the blocks $B(m,k)$.}
\end{algorithm}
\begin{algorithm}\caption{Carath\'eodory BCD - Random}\label{algo:CaBCD_random}
\begin{algorithmic}[1]
\State{Initialize $\hat \theta_0 $}
\State{$j \gets 1$, $k \gets 0$
}\Comment{$j$ counts steps,
$\sum_{l=0}^{k}b_l$ counts the number of recombinations}
\While{$|L(\hat\theta_{j-1},Z)|>\epsilon$ \textbf{and} $j\leq$it\_max }
\State{Build $b$ blocks $B(m,k)$, $m=1,...,b$ using the \textit{Random} rule}
\For{$m=1,...,b$,\textbf{ in parallel}}%
\State{$\operatorname{Grad}_{j-1}^{(m)}\gets \mathbb{E}[G^{(m)}(\hat\theta_{j-1},Z)] $}
\State{$\hat \theta^{(m)}_{j} \gets \hat \theta^{(m)}_{j-1}+ \gamma \mathbb{E}[ D^{(m)}_{j-1}(\{\hat\theta\},Z)]$ }
\State{$\operatorname{Grad}_j^{(m)}\gets \mathbb{E}[G^{(m)}(\hat\theta_{j},Z)] $}
\State{$\tau_k\gets j$}
\State{$j_m\gets j+1$, $\quad \Delta_{\tau_k,k}^{(m)}\gets 0$}\Comment{$j_m-1=\tau_k$}
\State{$
\operatorname{Hessian}_{k}^{(m)}\gets\left[\operatorname{Grad}_{\tau_{k}}^{(m)}-\operatorname{Grad}_{\tau_{k}-1}^{(m)}\right]^{\top}\cdot\left[1/(\hat{\theta}_{\tau_{k}}^{(m)}-\hat{\theta}_{\tau_{k}-1}^{(m)})\right]$
}
\State{Compute $\hat{Z}_{k}^{(m)}$ s.t. $ \mathbb{E}\left[D_{\tau_{k}}^{(m)}\left(\{\hat{\theta}\},\hat{Z}_{k}^{(m)}\right)\right]=\mathbb{E}\left[D_{\tau_{k}}^{(m)}\left(\{\hat{\theta}\},Z\right)\right]$}
\While{$\Delta_{j_m,k}^{(m)}\leq\Delta_{j_m-1,k}^{(m)}$ %
\textbf{and} $j_m-\tau_k\leq$it\_max\_Ca }
\State{$\hat{\theta}_{j_m}^{(m)}\gets\hat{\theta}_{j_m-1}^{(m)}+\mathbb{E}\left[D_{j_m-1}^{(m)}\left(\{\hat\theta\},\hat{Z}_{k}^{(m)}\right)\right]$ }
\State{$\delta_{j_m,k}^{(m)}\gets \hat{\theta}_{j_{m}}^{(m)}-\hat{\theta}_{\tau_{k}}^{(m)}$}
\State{$\Delta_{j_{m},k}^{(m)}\gets\operatorname{Grad}_{\tau_{k}}^{(m)}\cdot\delta_{j_{m},k}^{(m)}+\left(\delta_{j_{m},k}^{(m)}\right)^{\top}\cdot\operatorname{Hessian}_{k}^{(m)}\cdot\delta_{j_{m},k}^{(m)}$}
\State{$j_m\gets j_m+1$}
\EndWhile
\If{$j_m-\tau_k \not= \operatorname{it\_max\_Ca}$ }
\State{$\tau_{m,k+1} \gets j_m-1\quad\quad\quad$} \Comment{{$\tau_{m,k+1}-\tau_k$ steps in $(k+1)$th recombination relative to $B(m,j)$}}
\Else \State{$\tau_{m,k+1} \gets j_m$}
\EndIf
\EndFor
\State{$j\gets j+\sum_m \tau_{m,k+1}$}
\State{$ \hat\theta^{(m)}_j \gets \hat\theta^{(m)}_{\tau_{m,k+1}}, \quad \forall m$
}\Comment{synchronise and update $\hat\theta$}
\State{$k\gets k+1, \quad$ $j\gets j+1$}
\EndWhile
\textbf{ and return} $j$, $\hat \theta_j$
\end{algorithmic}
We write $\cdot^{(m)}$ in place of $\cdot^{(B(m,k))}$ to indicate the restriction to the components of $\cdot$ in the blocks $B(m,k)$.
\end{algorithm}
\begin{figure}[hbt!]
\centering
\includegraphics[height=3cm,width=0.32\textwidth]{figure/gd_vs_gdCA_path_case_sine}
\includegraphics[height=3cm,width=0.32\textwidth]{figure/gd_vs_gdCA_path_case_greater0}
\includegraphics[height=3cm,width=0.32\textwidth]{figure/gd_vs_gdCA_path_case_logistic_model}
\caption{Paths generated by CaGD (Theorem \ref{th:convergence_CaGD}) and GD, for the experiments of Figure~\ref{fig:2d}, same order.}
\end{figure}
\begin{figure}[h!]%
\centering
\includegraphics[height=3cm,width=0.32\textwidth]{figure/gd_vs_CaBCD_path_case_0-5}
\includegraphics[height=3cm,width=0.32\textwidth]{figure/gd_vs_CaBCD_path_case_9-2}
\includegraphics[height=3cm,width=0.32\textwidth]{figure/gd_vs_CaBCD_path_case_10-2}
\caption{Samples of trajectories followed by GD and CaBCD over the parameter space.
The dotted blue trajectories and the continuous orange trajectories converge to the same desired minimum, though via different paths.
CaBCD, between changes of direction, uses only a subset of the total points $N$, namely $s+1$ if the size of the selected block is $s$. This figure was obtained using the data of Figure~\ref{fig:2d} (center) in the multi-dimensional case.}
\label{fig:parameter-path}
\end{figure}
\subsection{Experiments: CaBCD vs ADAM vs SAG for LASSO}\label{sec:exp_BCD}
We consider a least-squares problem with LASSO regularisation, i.e.
\begin{align}\label{eq:LASSO}
\min_\theta \frac{1}{N}\sum_i (x_i\theta^\top-y_i)^2 + \lambda |\theta|_1.
\end{align}
We have used the following datasets:
\begin{enumerate}[label=(\roman*)]
\item Household power consumption~\cite{electrdata}, which consists of $N=\num{2075259}$ data points.
We want to predict the \textit{Voltage} given \textit{active power, reactive power, intensity}.
We have raised to the tensor power of $5$\footnote{Raising to the tensor power of $\alpha$ means that we have added all the ``mixed'' products up to order $\alpha$: if we indicate with $x^i_m$, $i\in\{1,\ldots,n\}$, the $i$-th feature of the $m$-th point, in the case $\alpha=3$ we create all the new features of the form $x^i_m\times x^j_m$ and $x^i_m\times x^j_m\times x^h_m$, $i,j,h\in\{1,\ldots,n\}$, for all the points $m\in\{1,\ldots, N\}$; see the sketch after this list.},
scaled the data, and applied PCA to reduce the number of features to \num{7}.
\item 3D Road Network~\cite{3ddata}, which consists of $N=\num{434874}$ data points.
We want to predict the \textit{Altitude}, given \textit{Longitude} and {Latitude}.
We have raised to the tensor power of $5$, scaled the data, and applied PCA to reduce the number of features to \num{7}.
\item NYC Taxi Trip Duration~\cite{nydata}, which consists of $N=\num{1458644}$ data points.
We want to predict the \textit{trip duration}, given \textit{pickup time/longitude/latitude} and \textit{dropoff longitude/latitude}. We consider only the time of the feature \textit{pickup\_datetime}, without the date.
We have raised to the tensor power of $3$, scaled the data, and applied PCA to reduce the number of features to \num{8}.
In this case we have considered as outliers the points such that $y_i>$\num{10000} -- this amounts to \num{2123} points (\num{0.14}\%).
\end{enumerate}
In all datasets the variance retained by the PCA projection is greater than \num{99.9}\%, which results from eliminating the symmetries introduced via the tensor power.
Throughout we have chosen $\lambda=0.01$ for the Lasso regularisation.
We have implemented the BCD with and without the Carath\'eodory sampling procedure with Gauss-Southwell rule (\textit{CaBCD GS}, \textit{BCD GS}),
with a momentum strategy and the GS rule (\textit{CaBCD mom GS}, \textit{BCD mom GS}),
and with the Random rule (\textit{CaBCD mom random}, \textit{BCD mom random}).
For the momentum strategy we have chosen the momentum parameter $\beta = 0.9$.
As benchmarks we used ADAM~\cite{Kingma2014} and SAG~\cite{Roux2012} with standard mini-batches with size of $256$.
The learning rate for the CaBCD
algorithms and ADAM is \num{1e-3}, as suggested in~\cite{Kingma2014};
we selected it\_max\_Ca $=1/(10\gamma)=100$.
SAG was more sensitive to the step size, and we decreased it to \num{1e-6}
to preserve convergence.
\subsection{Discussion of results}
The results are summarized in Figure \ref{fig:results_expCaBCD}.
Overall CaBCD strongly outperforms the other methods and, within the CaBCD variants, the ones that use momentum do better.
\begin{figure}[hbt!]
\centering
\includegraphics[height=2.8cm,width=0.32\textwidth]{figure/CaBCD_vs_all_time_NY}
\includegraphics[height=2.8cm,width=0.32\textwidth]{figure/CaBCD_vs_all_time_Elec}
\includegraphics[height=2.8cm,width=0.33\textwidth]{figure/CaBCD_vs_all_time_3DRoads}
\includegraphics[height=2.8cm,width=0.32\textwidth]{figure/CaBCD_vs_all_iteration_NY}
\includegraphics[height=2.8cm,width=0.32\textwidth]{figure/CaBCD_vs_all_iteration_Elec}
\includegraphics[height=2.8cm,width=0.32\textwidth]{figure/CaBCD_vs_all_iteration_3DRoads}
\caption{Running times and iterations of the different Algorithms.
For \textit{CaBCD mom GS}, \textit{BCD mom GS} and \textit{CaBCD mom random}, \textit{BCD mom random}
the directions have been computed using a standard momentum strategy, and chosen
respectively by the GS rule and by the Random rule.
For \textit{CaBCD GS}, \textit{BCD GS}
the directions have been computed using the standard GD method, and chosen by the GS rule. }%
\label{fig:results_expCaBCD}
\end{figure}
Some further observations are in order. First, the size $s$ of the blocks has been fixed to two.
The reason is that we have observed experimentally that if the block size is between $2$ and $5$ the reduced measure is used for longer, i.e.~the algorithm takes more steps with the reduced measure, thus decreasing the runtime.
Secondly, in the case of CaBCD algorithms we count $1$ iteration when a full gradient has been computed, while we count $\frac{\text{number of points in the reduced measure}}{N}$ for any iteration done with the reduced measure (if the size of the block is $s$, the reduced measure has support on $s+1$ points, see Theorem \ref{th:cath}).
An analogous reasoning is used to count the iterations of SAG and ADAM.
Third, the CaBCD algorithms are ``slower'' for the first two iterations.
This is due to the fact that we compute $\mathcal{H}$, i.e.~the approximation of the second derivative.
Finally, using the GS rule, the parallelisation of the code has often no effect because the directions to optimise belong to only one block.
\subsection{Let's make Carath\'eodory Block Coordinate Gradient Descent go fast}
The central question of BCD is the choice of the update rule.
In the previous section we used arguably the simplest ones, randomized and Gauss--Southwell, for CaBCD.
However, more sophisticated update rules are possible which in turn could lead to a further performance improvement.
{To understand this better, we revisit in this section the study of different BCD rules of~\cite{Nutini2017} in the context of our CaBCD.
To do so we follow \cite{Nutini2017} and focus on a least-squares problem
\begin{align}\label{eq:ls}
\min_\theta \sum_i (x_i\theta^\top-y_i)^2.
\end{align}
We use \cite[Dataset A]{Nutini2017} %
with $N=\num{1000000}$ and $n=\num{500}$.
The data are generated following the same procedure explained in \cite[Appendix F.1]{Nutini2017}.
The $x_i$ values are sampled from a standard normal random variable; then
$1$ is added to induce a dependency between columns, and each
column is multiplied by ten times a sample from a standard normal random variable, to induce different Lipschitz constants across the
coordinates. Finally, each entry is kept non-zero with probability $10 \log(m)/m$.
The targets are $y_i = x_i\cdot \theta^\times + e_i$, where the $e_i$ are drawn from a standard normal random variable;
90\% of the entries of $\theta^\times$ are set to zero and the remaining ones are sampled from a standard normal random variable.}
\begin{figure}[t!]
\centering
\includegraphics[width=0.335\textwidth, clip=true, trim = 0 125mm 0 0 ]{figure/fig4a_and_8_and_10_Sort_block5_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0 ]{figure/fig4a_and_8_and_10_Sort_block50_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig4a_and_8_and_10_Sort_block100_onlyCA}
\includegraphics[width=0.335\textwidth, clip=true, trim = 0 125mm 0 0 ]{figure/fig4a_and_8_and_10_VB_block5_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig4a_and_8_and_10_VB_block50_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig4a_and_8_and_10_VB_block100_onlyCA}
\includegraphics[width=0.335\textwidth, clip=true, trim = 0 125mm 0 0 ]{figure/fig9_and_10_Order_block5_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig9_and_10_Order_block50_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig9_and_10_Order_block100_onlyCA}
\includegraphics[width=0.335\textwidth, clip=true, trim = 0 125mm 0 0 ]{figure/fig10_Avg_block5_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig10_Avg_block20_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig10_Avg_block50_onlyCA}
\caption{{CaBCD applied with different block sizes, rules used in
\cite[Figure 4, 8, 10]{Nutini2017} (top-two lines),
\cite[Figure 9]{Nutini2017} (third line) and
\cite[Figure 10]{Nutini2017}.}
}\label{fig:nutinifig4,8,9,10}
\end{figure}
\subsubsection{BCD update rules}
{ %
The rules presented in~\cite{Nutini2017} can be represented as
\[
\theta_{j+1} = \theta_{j} + \langle\Gamma,\sum_i \nabla L_{\theta_j} (x_i,y_i)\rangle,
\]
where $\Gamma$ can be a function of the Hessian of $L$, of Lipschitz bounds related to $L$, or, in general, it can depend on $L_{\theta_j} (x_i,y_i)$ nonlinearly in the data, e.g. through the inverse of the Hessian.
Due to this non-linearity, we compute the reduced measure for $\sum_i \nabla L_{\theta_j} (x_i,y_i)$ and treat $\Gamma$ as an independent factor.
{In general, Lipschitz bounds are difficult to find, whilst precise Hessian information is computationally expensive unless a closed formula is available, which is the case only for a small portion of models}.\\
In~\cite{Nutini2017} the step size of the BCD is determined as a function of the computed Lipschitz bounds. When using the recombined measure we instead use a factor $\gamma=\num{1e-2}$, i.e.
\[
\hat\theta_{j+1} = \hat\theta_{j} + \gamma \times \langle\Gamma,\sum_i \nabla L_{\hat\theta_j} (\hat x_i,\hat y_i)\rangle.
\]
In place of $\mathbb{E}[\nabla L_{\theta_j} (X,Y)] = \frac{1}{N}\sum_i \nabla L_{\theta_j} (x_i,y_i)$, in~\cite{Nutini2017} $\sum_i \nabla L_{\theta_j} (x_i,y_i)$ is used, which results in higher loss values.\\
Compared to the previous experiments, we want to underline that some of the rules used in~\cite{Nutini2017} compute precisely the Lipschitz constants of the different blocks. %
Indeed, for least-squares problems Lipschitz constants can be written explicitly as a function of $(x_i,y_i)$ and $\theta_j$, see e.g.~\cite[Appendix B]{Nutini2017}.\\
Dataset A used in~\cite{Nutini2017} is synthetic and sparse.
While the rules of~\cite{Nutini2017} for selecting the directions favour sparse matrices, we did not optimise the code that computes the reduced measures to deal efficiently with sparse datasets.
Nevertheless, given the significant presence of matrix multiplications in the implementation, we can expect improvements there as well.}
\subsection{A list of rules}
{We briefly introduce the rules below, for an exhaustive description we refer to~\cite{Nutini2017}.
We structure the experiments and plots as follows:
any rule is represented by the following string format
\begin{center}
\textit{``partition\_block-selection\_direction''}
\end{center}
with an additional suffix ``\textit{\_CA}'' to note that we have applied it with CaBCD.
The possible choices for \emph{partition}, \emph{block-selection}, \emph{direction} are\footnote{
Not all the combinations are possible, see \cite{Nutini2017} and the official repository \url{https://github.com/IssamLaradji/BlockCoordinateDescent} from more details. }
\begin{align}
\text{ partition} \in & \{\text{VB, Sort, Order, Avg}\}, \\
\text{block-selection} \in &\{\text{Random, Cyclic, Lipschitz, Perm, GS, GSD, GSL,} \\
&\,\,\,\,\,\, \text{GSDHb, GSQ, IHT}\},\\
\text{direction} \in & \{\text{Hb, Lb}\}.
\end{align}
We give details on the choices below:
VB stands for Variable Blocks which indicates that the partition of the directions can change at any iteration of the optimization procedure.
Sort fixes the partition from the beginning, organizing the blocks of directions according to their {Lipschitz values: the largest Lipschitz} values into the first block, and so on.
Order fixes the partition from the beginning, subdividing the directions in order, e.g. if the block size is 2, the blocks will be $(1,2), (3,4),$ etc.
Avg fixes the partition alternating between adding large and small Lipschitz values.
Among the previous rules, VB is the only one which allows the partition to change between iterations.
The ``\textit{block-selection}'' rules prescribe how blocks are selected given the partition of the directions and we refer to \cite{Nutini2017} for details.
The two choices of ``\textit{direction}'' are ``\textit{Lb}'' and ``\textit{Hb}''.
\textit{Lb} means that the direction for the update is {$G_{block} / L_{block}$; $Hb$ signifies that the direction is $ H_{block}^{-1}\cdot G_{block}$, where $L_{block}, G_{block}, H_{block}$ represent respectively the Lipschitz value, the Gradient and the Hessian of the chosen block.}}
The plots are named analogously to the plots in~\cite{Nutini2017}, but additionally we include the values of the size of the blocks.
For the implementation of the blocks' selection rules we have used the code provided by the authors of~\cite{Nutini2017}, freely available at
\url{https://github.com/IssamLaradji/BlockCoordinateDescent}.
\begin{figure}[bth!]
\centering
\includegraphics[width=0.16\textwidth]{figure/figwhite}
\includegraphics[width=0.335\textwidth, clip=true, trim = 0 125mm 0 0 ]{figure/fig5a_Sort_block5_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig5a_VB_block5_onlyCA}
\includegraphics[width=0.16\textwidth]{figure/figwhite}
\includegraphics[width=0.335\textwidth, clip=true, trim = 0 125mm 0 0 ]{figure/fig11_Sort_block5_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig11_Sort_block50_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig11_Sort_block100_onlyCA}
\includegraphics[width=0.335\textwidth, clip=true, trim = 0 125mm 0 0 ]{figure/fig11_VB_block5_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig11_VB_block50_onlyCA}
\includegraphics[width=0.32\textwidth, clip=true, trim = 12mm 125mm 0 0]{figure/fig11_VB_block100_onlyCA}
\caption{CaBCD applied with different block sizes, rules used in \cite[Figure 5]{Nutini2017} (top-line) and
\cite[Figure 11]{Nutini2017}.}
\label{fig:nutinifig5,11}
\end{figure}
\subsection{Discussion of results}
{
The results show that the general conclusion of \cite{Nutini2017} also applies to CaBCD.
Firstly, from Figure~\ref{fig:nutinifig4,8,9,10} the GS based rules should be preferred when possible.
Secondly, from Figure~\ref{fig:nutinifig4,8,9,10} it can be observed that between the partition rules we should prefer VB or Sort.
In our experiments, the differences between the partition rules VB and Sort are less evident. In particular, we can notice that the VB partition rule attains its minimum loss when the block size is $5$, which is congruent with our observation of Section~\ref{sec:exp_BCD} that the CaBCD makes more steps with the reduced measure when the block's size is low.
Thirdly, from Figure~\ref{fig:nutinifig4,8,9,10} and~\ref{fig:nutinifig5,11} the differences between the selection rules vanish when the blocks' size increases.
Lastly, the (quasi-)Newton updates $Hb$ in Figure~\ref{fig:nutinifig5,11} reach a lower minimum faster, as one would expect.
However, we recall that the Carath\'eodory reduced measure was built by matching only the gradient; in the future, we want to refine this aspect by applying the Carath\'eodory Sampling ``exactly'' also to the second derivative, i.e. to (quasi-)Newton methods.
}
\section{Summary}
We introduced a new SGD-type algorithm, CaGD, and then combined it with BCD to make it scalable to high-dimensional spaces.
Similar to SGD variants we approximate the gradient in each descent step by a subset of the data.
In contrast to such SGD variants, the approximation is not obtained by randomly selecting a small subset of points and giving each point the same, uniform weight;
instead, the points are carefully selected from the original dataset and weighted differently.
In this recombination step a small, weighted summary of the data is constructed, and subsequently the gradient is computed using only this simpler summary until a control statistic tells us to recombine again.
To deal with high-dimensional optimization problems we then leveraged the strengths of this approach (low-variance gradient estimates) with BCD (low computational complexity).
Our experiments show that this can lead to remarkable improvements compared to competitive baselines such as ADAM and SAG.
Many extensions are possible, e.g.~on the theoretical side, studying the behaviour under non-convex losses and on the applied side, combination with Quasi-Newton methods, or BCD rules that are specialized to CaBCD.
Independently of these, any improvement for recombination algorithms can lead to a further speed up of CaGD resp.~CaBCD.
\paragraph{Acknowledgements.}
The authors want to thank The Alan Turing Institute and the University of Oxford for the financial support given. FC is supported by The Alan Turing Institute, TU/C/000021, under the EPSRC Grant No. EP/N510129/1. HO is supported by the EPSRC grant ``Datasig'' [EP/S026347/1], The Alan Turing Institute, and the Oxford-Man Institute.
\bibliographystyle{plain}
\addcontentsline{toc}{section}{References}
\label{sec:intro}
For effective interaction, robot manipulators must build and refine their understanding of the world through sensing. This is especially relevant in unstructured settings, where robots have little to no knowledge of object properties, but can physically interact with their surroundings. Even when blindfolded, humans can locate and infer properties of unknown objects through touch~\cite{klatzky1985identifying}. Replicating some fraction of these capabilities will enable contact-rich manipulation in environments such as homes and warehouses. In particular, knowledge of object shape and pose determines the success of generated grasps or nonprehensile actions.
While there have been significant advances in tactile sensing, from single-point sensors to high-resolution tactile arrays, a general technique for the underlying inference still remains an open question~\cite{luo2017robotic}. Visual and depth-based tracking have been widely studied~\cite{schmidt2015dart}, but suffer from occlusion due to clutter or self-occlusions with the gripper or robot. We provide a general formulation of pure tactile inference, that could later accommodate additional sensing modalities.
Pure tactile inference is challenging because, unlike vision, touch cannot directly provide global estimates of object model or pose. Instead, it provides detailed, local information that must be fused into a global model. Moreover, touch is intrusive: the act of sensing itself constantly perturbs the object. We consider tactile inference as an analog of the well-studied simultaneous localization and mapping (SLAM) problem in mobile robotics~\cite{durrant2006simultaneous}. Errors in object tracking accumulate to affect its predicted shape, and vice versa.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{images/cover.pdf}
\caption{The shape and pose estimation problem in a pusher-slider system. A robot manipulator pushes along a planar object, while recording a stream of tactile measurements. Our method builds a shape contour in real-time as a Gaussian process implicit surface, and optimizes for pose via geometry and physics-based constraints. Figure shows tactile measurements (\protect\tikz[baseline=-0.6ex]\protect\draw[contact_color,fill=contact_color, line width=0.5pt] (0,0) circle (.3ex);), estimated motion (\textcolor{motion_color}{$\mathbf{\downarrow}$}), estimated shape/pose (\protect\tikz[baseline=-0.6ex]\protect\draw[shape_color,fill=shape_fill_color, line width=1.0pt] (0,0) circle (.7ex);), and ground-truth (\textbf{-\,-}).}
\label{fig:cover}
\GobbleLarge
\end{figure}
Central to this problem is choosing a shape representation that both faithfully approximates arbitrary geometries, and is amenable to probabilistic updates. This excludes most parametric models such as polygons/polyhedrons~\cite{yu2015shape}, superquadrics~\cite{solina1990recovery}, voxel maps~\cite{duncan2013multi}, point-clouds~\cite{meier2011probabilistic}, and standard meshes~\cite{varley2017shape}. Gaussian process implicit surfaces (GPIS)~\cite{williams2007gaussian} are one such nonparametric shape representation that satisfies these requirements.
In this paper, we demonstrate online shape and pose estimation for a planar pusher-slider system. We perform tactile exploration of the object via contour following, that generates a stream of contact and force measurements. Our novel schema combines efficient GPIS regression with factor graph optimization over geometric and physics-based constraints. The problem is depicted in Fig. \ref{fig:cover}: the pusher moves along the object while estimating its shape and pose.
We expand the scope of the batch-SLAM method by Yu et al.~\cite{yu2015shape} with a more meaningful shape representation, and real-time online inference. Our contributions are:
\begin{enumerate}
\item[{(1)}] A formulation of the tactile SLAM problem that alternates between GPIS shape regression and sparse nonlinear incremental pose optimization,
\item[{(2)}] Efficient implicit surface generation from touch using overlapping local Gaussian processes (GPs),
\item[{(3)}] Fixed-lag smoothing over contact, geometry, and frictional pushing mechanics to localize objects,
\item[{(4)}] Results from tactile exploration across different planar object shapes in simulated and real settings.
\end{enumerate}
\section{Related work}
\label{sec:related}
\subsection{SLAM and object manipulation}
\label{ssec:related_1}
Our work is closely related to that of Yu et al.~\cite{yu2015shape}, that recovers shape and pose from tactile exploration of a planar object. The approach uses contact measurements and the well-understood mechanics of planar pushing~\cite{lynch1992manipulation, goyal1991planar} as constraints in a batch optimization. Naturally, this is expensive and unsuitable for online tactile inference. Moreover, the object shape is represented as ordered control points, to form a piecewise-linear polygonal approximation. Such a representation poorly approximates arbitrary objects, and fails when data-association is incorrect. Moll and Erdmann~\cite{moll2004reconstructing} consider the illustrative case of reconstructing motion and shape of smooth, convex objects between two planar palms. Strub et al.~\cite{strub2014correcting} demonstrate the full SLAM problem with a dexterous hand equipped with tactile arrays.
Contemporaneous research considers one of two simplifying assumptions: modeling shape with fixed pose~\cite{meier2011probabilistic, dragiev2011gaussian, martinez2013active, yi2016active, driess2019active}, or localizing with known shape~\cite{petrovskaya2011global, zhang2013dynamic, koval2015pose, bauza2019tactile, Sodhi21icra}. The extended Kalman filter (EKF) has been used in visuo-tactile methods~\cite{hebert2011fusion, izatt2017tracking}, but is prone to linearization errors. At each timestep, it linearizes about a potentially incorrect current estimate, leading to inaccurate results. Smoothing methods~\cite{kaess2012isam2} are more accurate as they preserve a temporal history of costs, and solve a nonlinear least-squares problem. These frameworks have been used to track known objects with vision and touch~\cite{yu2018realtime, lambert2019joint}, and rich tactile sensors~\cite{Sodhi21icra}.
\subsection{Gaussian process implicit surfaces}
\label{ssec:related_2}
A continuous model which can generalize without discretization errors is of interest to global shape perception. Implicit surfaces have long been used for their smoothness and ability to express arbitrary topologies~\cite{blinn1982generalization}. Using Gaussian processes~\cite{rasmussen2003gaussian} as their surface potential enables probabilistic fusion of noisy measurements, and reasoning about shape uncertainty. GPIS were formalized by Williams and Fitzgibbon~\cite{williams2007gaussian}, and were later used by Dragiev et al.~to learn shape from grasping~\cite{dragiev2011gaussian}. It represents objects as a signed distance field (SDF): the signed distance of spatial grid points to the nearest object surface. The SDF and surface uncertainty were subsequently used for active tactile exploration~\cite{dragiev2013uncertainty, li2016dexterous, yi2016active}. To our knowledge, no methods use GPIS alongside pose estimation for manipulation tasks.
Online GP regression scales poorly due to the growing cost of matrix inversion~\cite{rasmussen2003gaussian}. Spatial mapping applications address this by either sparse approximations to the full GP~\cite{snelson2006sparse}, or training separate local GPs~\cite{kim2013continuous, stork2020ensemble}. Lee et al.~\cite{lee2019online} propose efficient, incremental updates to the GPIS map through spatial partitioning.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/final_graph.pdf}
\caption{The combined formulation between the factor graph (Section \ref{sec:fac_graph}) and GPIS (Section \ref{sec:gpis}). \textbf{[top]} The graph illustrates the relationship between the variables to be optimized for (circles) and the factors that act as constraints (colored dots). \textbf{[bottom]} Our GPIS builds an implicit surface shape representation that is the zero level-set of GP potential function. Spatial partitioning with local GPs enables efficient regression.}
\label{fig:final_graph}
\GobbleLarge
\end{figure}
\section{Problem formulation: tactile slam}
\label{sec:problem}
We consider a rigid planar object on a frictional surface, moved around by a pusher (see Fig. \ref{fig:cover}). The interaction is governed by simple contour following for tactile exploration. Given a stream of tactile measurements, we estimate the 2-D shape and object's planar pose in real-time.
\boldsubheading{Object pose}
The object pose at the current timestep $t$ is defined by position and orientation $\mathbf{x}_t = (x, \ y, \ \theta) \in SE\left(2\right)$.
\boldsubheading{Object shape}
The estimated object shape is represented as an implicit surface $\mathbf{\mathcal{S}} \in \mathbb{R}^2$ in the reference frame of $\mathbf{x}_t$.
\boldsubheading{Tactile measurements}
At every timestep we observe:
\begin{equation}
\mathbf{z}_t = \Big\{ \underbrace{\mathbf{p}_t \in \mathbb{R}^{2}}_{\text{pusher position}}, \ \underbrace{\mathbf{f}_t \in \mathbb{R}^{2}}_{\text{force vector}}, \ \underbrace{{\Theta}_t \in \{0, 1\}}_{\text{contact/no-contact}} \Big\}
\label{eq:1}
\end{equation}
Pusher position is obtained from the robot's distal joint state, and force is sensed directly from a force/torque sensor. Contact is detected with a minimum force threshold, and the estimated contact point $\mathbf{c}_t$ is derived from knowledge of $\mathbf{p}_t$, $\mathbf{f}_t$ and probe radius $r_{\text{probe}}$. This is consistent with the formulation in \cite{yu2015shape}, with the addition of direct force measurements $\mathbf{f}_t$. For simplicity we consider a single pusher, but it can be expanded to multiple pushers or a tactile array.
\boldsubheading{Assumptions}
We make a minimal set of assumptions, similar to prior work in planar pushing~\cite{yu2015shape, yu2018realtime, lambert2019joint}:
\begin{itemize}[leftmargin=1em]
\item Quasi-static interactions and limit surface model~\cite{goyal1991planar, lee1991fixture},
\item Uniform friction and pressure distribution between bodies,
\item Object's rough scale and initial pose, and
\item No out-of-plane effects.
\end{itemize}
The remainder of the paper is organized as follows: Section \ref{sec:gpis} formulates building the shape estimate $\mathbf{\mathcal{S}}$ as the implicit surface of a GP, given the current pose estimate and measurement stream. Section \ref{sec:fac_graph} describes factor graph inference to obtain the pose estimate $\mathbf{x}_t^*$ with $\mathbf{\mathcal{S}}$ and the measurement stream. These two processes are carried out back-and-forth for an online shape and pose hypothesis at each timestep (Fig. \ref{fig:final_graph}). Finally, we demonstrate the approach in simulated and real experiments (Section \ref{sec:expts}) and present concluding remarks (Section \ref{sec:conc}).
\section{Shape estimation with implicit surfaces}
\label{sec:gpis}
\subsection{Gaussian process implicit surface}
\label{ssec:gpis_1}
A Gaussian process learns a continuous, nonlinear function from sparse, noisy datapoints~\cite{rasmussen2003gaussian}. Surface measurements are in the form of contact points $\mathbf{c}_i$ and normals $\mathbf{n}_{i}$, transformed to an object-centric reference frame. Given $N$ datapoints, the GP learns a mapping $X \mapsto Y$ between them:
\begin{equation}
F: \underbrace{\{ c_{i_x}, \ c_{i_y} \}^{i = 1 \cdots N}}_{X \in \, \mathbb{R}^2} \mathbf{ \ \mapsto \ } \underbrace{\{d_i, n_{i_x}, n_{i_y}\}^{i = 1 \cdots N}}_{Y \in \, \mathbb{R}^3}
\label{eq:2}
\end{equation}
\begin{equation}
\parbox{10em}{\centering \footnotesize where $d$ represents \\ signed-distance from \\ object surface} \ {\footnotesize \begin{cases}
d = 0,&\text{on surface} \\
d < 0,&\text{inside object} \\
d > 0,&\text{outside object}
\end{cases}}
\label{eq:3}
\end{equation}
The posterior distribution at a test point $\mathbf{c}_*$ is shown to be $F_* \sim \mathcal{GP}(\bar{F_*}, \sigma_*^2)$, with output mean and variance~\cite{rasmussen2003gaussian}:
\begin{equation}
\begin{split}
\bar{F_*} &= k_*^T\left(K + \sigma_{\text{noise}}^2I \right)^{-1}Y \\
\sigma_*^2 &= k_{**} - k_*^T\left(K + \sigma_{\text{noise}}^2I \right)^{-1}k_*
\end{split}
\label{eq:4}
\end{equation}
where $K \in \mathbb{R}^{N \! \times \! N}$, $k_* \in \mathbb{R}^{N \! \times \! 1}$ and $k_{**} \in \mathbb{R}$ are the train-train, train-test, and test-test kernels respectively. We use a thin-plate kernel~\cite{williams2007gaussian}, with hyperparameter tuned for scene dimensions. The noise in output space is defined by a zero-mean Gaussian with variance $\sigma_{\text{noise}}^2$. While contact points condition the GP on zero SDF observations, contact normals provide function gradient observations~\cite{dragiev2011gaussian}. Thus, we can jointly model both SDF and surface direction for objects.
We sample the posterior over an $M$-element spatial grid of test points $\mathbf{C}_*$ to get the SDF $F_{*_{d}}$. The estimated implicit surface $\mathcal{S}$ is then the zero-level set contour:
\begin{equation}
\mathcal{S} \triangleq \{ \mathbf{s} \in \mathbb{R}^2 \ | \ F_*(\mathbf{s})_d = 0 \}
\label{eq:5}
\end{equation}
The zero-level set $\mathcal{S}$ is obtained through a contouring subroutine on $F_{*_{d}}$. $\mathcal{S}$ is initialized with a circular prior and updated with every new measurement $\{\mathbf{c}_i, \mathbf{n}_{i}, d \! = \! 0 \}$. In Fig. \ref{fig:gpis_butter} we reconstruct the \texttt{butter} shape~\cite{yu2016more} with a sequence of noisy contact measurements.
\subsection{Efficient online GPIS}
\label{ssec:gpis_2}
In a na\"ive GP implementation, the computational cost restricts online shape perception. Equation \ref{eq:4} requires an $N \! \times \! N$ matrix inversion that is $O(N^3)$, and spatial grid testing that is $O(MN^2)$. We use local GPs and a subset of training data approximation for efficient online regression:
\boldsubheading{Local GP regression}
We adopt a spatial partitioning approach similar to~\cite{kim2013continuous, lee2019online, stork2020ensemble}. The scene is divided into $L$ independent local GPs $F^1 \ldots F^L$, each with a radius $r$ and origin (Fig. \ref{fig:gpis_butter}). Each $F^i$ claims the training and test points that fall within $r$ of its origin. The GPs effectively govern smaller domains ($N_{F^i} \! \ll \! N$ and $M_{F^i} \! \ll \! M$), and not the entirety of the spatial grid. At every timestep: (i) a subset of local GPs are updated, (ii) only the relevant test points are resampled. Kim et al.~\cite{kim2013continuous} demonstrate the need for overlapping domains to avoid contour discontinuity at the boundaries. Thus, we increase $r$, and in overlapping regions, the predicted estimates are averaged among GPs.
\boldsubheading{Subset of data}
Before adding a training point $\mathbf{c}_i$ to the active set, we ensure that the output variance $\sigma_i^2$ is greater than a pre-defined threshold $\sigma_{\text{thresh}}^2$. This promotes sparsity in our model by excluding potentially redundant information.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/gpis_butter.pdf}
\caption{\textbf{[left]} Gaussian process potential function and its implicit surface (green) for noisy contact measurements on the \texttt{butter} shape~\cite{yu2016more}. The colormap shows spatial grid uncertainty. \textbf{[right]} The overlapping local GPs $F^1 \ldots F^L$. Each GP is responsible for training and test points within its radius, and the overlapping regions ensure continuity in the shape contour.}
\label{fig:gpis_butter}
\GobbleLarge
\end{figure}
\boldsubheading{Implementation}
Rather than direct matrix inversions (Equation \ref{eq:4}), it is more efficient to use the Cholesky factor of the kernel matrix~\cite{rasmussen2003gaussian}. In our online setting, we directly update the Cholesky factor $\mathcal{L} \mathcal{L}^T = \left(K + \sigma_{\text{noise}}^2I \right)$ with new information. We multi-thread the update/test steps, and perform these at a lower frequency than the graph optimization. We set $L = 25$, grid sampling resolution $5$~mm, and circular prior of radius $40$~mm for our experiments.
\section{Pose estimation with factor graphs}
\label{sec:fac_graph}
\subsection{Factor graph formulation}
\label{ssec:fac_graph_1}
The \textit{maximum a posteriori} (MAP) estimation problem gives variables that maximally agree with the sensor measurements. This is commonly depicted as a factor graph: a bipartite graph with variables to be optimized for and factors that act as constraints (Fig. \ref{fig:final_graph}). The augmented state comprises object poses and a motion model scalar $\mathcal{C}$:
\begin{equation}
\mathbf{\mathcal{X}}_t = \{ \mathbf{x}_{1}, \ldots, \mathbf{x}_t \ ; \ \mathbf{\mathcal{C}} \}
\label{eq:6}
\end{equation}
Prior work in pushing empirically validates that measurement noise is well-approximated by a Gaussian distribution~\cite{yu2018realtime}. With Gaussian noise models, MAP estimation reduces to a nonlinear least-squares problem~\cite{dellaert2017factor}. Our MAP solution (given best-estimate shape $\mathcal{S}$) is:
\begin{equation}
\footnotesize
\begin{split}
\label{eq:7}
&\mathcal{X}^*_t = \underset{\mathcal{X}_t}{\operatorname{argmin}} \sum_{i=1}^t \Big( \underbrace{ \mystrut{1.5ex} ||Q(\mathbf{x}_{i-1},\mathbf{x}_{i}, \mathbf{z}_{i - 1}, \mathcal{C})||_{\Sigma_{Q}}^2}_{\text{\tikz[baseline=-0.6ex]\draw[black,fill=qcolor, line width=0.5pt] (0,0) circle (.5ex);~QS pushing factor}}
+
\underbrace{ \mystrut{1.5ex} ||I(\mathbf{x}_{i}, \mathbf{z}_{i}, \mathcal{S})||_{\Sigma_{I}}^2}_{\text{\tikz[baseline=-0.6ex]\draw[black,fill=icolor, line width=0.5pt] (0,0) circle (.5ex);~IS contact factor}}\\
&+
\underbrace{ \mystrut{1.5ex} ||P(\mathbf{x}_{i}, \mathbf{z}_{i}, \mathcal{S})||_{\Sigma_{P}}^2}_{\text{\tikz[baseline=-0.6ex]\draw[black,fill=pcolor, line width=0.5pt] (0,0) circle (.5ex);~Non-penetration factor}}
+ \underbrace{ \mystrut{1.5ex} ||F(\mathbf{x}_{i-1},\mathbf{x}_{i})||_{\Sigma_{F}}^2}_{\text{\tikz[baseline=-0.6ex]\draw[black,fill=fcolor, line width=0.5pt] (0,0) circle (.5ex);~Finite motion factor}} \Big)
+ \underbrace{ \mystrut{1.5ex} ||\mathbf{p}_0||_{\Sigma_0}^2 + ||\mathbf{c}_0||_{\Sigma_{c}}^2}_{\text{\tikz[baseline=-0.6ex]\draw[black,fill=prcolor, line width=0.5pt] (0,0) circle (.5ex);~Priors}}
\end{split}
\normalsize
\raisetag{35pt}
\end{equation}
This is graphically represented in Fig. \ref{fig:final_graph}, and the cost functions are described in Section \ref{ssec:fac_graph_2}. Given covariance matrix $\Sigma$, $\left\Vert v\right\Vert_{\Sigma}^{2}=v^{T}\Sigma^{-1}v$ is the Mahalanobis distance of $v$. The noise terms for covariances \{$\Sigma_{Q}, \ldots, \Sigma_{c}$\} are empirically selected. The online estimation is performed using incremental smoothing and mapping (iSAM2)~\cite{kaess2012isam2}. Rather than re-calculating the entire system every timestep, iSAM2 updates the previous matrix factorization with new measurements. In addition, we use a fixed-lag smoother to bound optimization time over the exploration~\cite{dellaert2017factor}. Fixed-lag smoothing maintains a fixed temporal window of states $\mathcal{X}_w$, while efficiently marginalizing out preceding states ($100$ timesteps in our experiments). Note that this is different from simply culling old states and discarding information.
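The skeleton below shows such an incremental update loop in GTSAM's Python API. It is a hedged sketch: the noise values are placeholders, and a standard between factor stands in for our custom QS pushing, contact, and non-penetration factors (which would be implemented, e.g., via \texttt{gtsam.CustomFactor}).
\begin{verbatim}
import numpy as np
import gtsam

isam = gtsam.ISAM2()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3, 1e-3, 1e-3]))
motion_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-2, 1e-2, 1e-1]))

graph, values = gtsam.NonlinearFactorGraph(), gtsam.Values()
x0 = gtsam.symbol('x', 0)
graph.add(gtsam.PriorFactorPose2(x0, gtsam.Pose2(0, 0, 0), prior_noise))
values.insert(x0, gtsam.Pose2(0, 0, 0))
isam.update(graph, values)

for t in range(1, 100):
    graph, values = gtsam.NonlinearFactorGraph(), gtsam.Values()
    xp, xt = gtsam.symbol('x', t - 1), gtsam.symbol('x', t)
    # Finite-motion-style factor: weakly bias towards constant motion.
    graph.add(gtsam.BetweenFactorPose2(xp, xt, gtsam.Pose2(0, 0, 0),
                                       motion_noise))
    values.insert(xt, isam.calculateEstimate().atPose2(xp))
    isam.update(graph, values)   # re-uses the previous factorization
\end{verbatim}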
\subsection{Cost functions}
\label{ssec:fac_graph_2}
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{images/sim_data.pdf}
\caption{Snippets of the tactile exploration data collected in the PyBullet simulator. We use a two-finger pusher to perform contour following, and collect the tactile measurements and ground-truth poses.}
\label{fig:sim_data}
\end{figure}
\boldsubheading{\tikz[baseline=-0.6ex]\draw[black,fill=qcolor, line width=1pt] (0,0) circle (.5ex);~QS pushing factor}
The quasi-static model uses a limit surface (LS) model to map between pusher force $\textbf{f}_t$ and object motion~\cite{goyal1991planar}. Specifically, Lynch et al.~\cite{lynch1992manipulation} develop an analytical model using an ellipsoid LS approximation~\cite{lee1991fixture}. The factor ensures object pose transitions obey the quasi-static motion model, with an error term:
\begin{equation}
Q(\mathbf{x}_{t-1},\mathbf{x}_{t}, \mathbf{z}_{t - 1}, \mathcal{C}) = \bigg[ \frac{v_x}{\omega} - \mathcal{C}^2 \frac{{f_{t - 1}}_x}{\tau},\frac{v_y}{\omega} - \mathcal{C}^2 \frac{{f_{t - 1}}_y}{\tau} \bigg]
\label{eq:8}
\end{equation}
\begin{itemize}
\item $(v_x, v_y, \omega)$ is the object's velocity between $\mathbf{x}_{t-1}$ and $\mathbf{x}_{t}$,
\item $\tau$ is the applied moment w.r.t. pose center of $\mathbf{x}_{t-1}$,
\item $\mathcal{C} = {\tau_{\text{max}}}/{f_{\text{max}}}$ is an object-specific scalar ratio dependent on pressure distribution.
\end{itemize}
For a more rigorous treatment, we refer the reader to~\cite{yu2015shape}. We weakly initialize $\mathcal{C}$ with our known circular shape prior, and incorporate it into the optimization.
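A sketch of this residual in code; the contact offset, time step, and the guard against near-zero denominators are illustrative assumptions, and \cite{yu2015shape} gives the full treatment:
\begin{verbatim}
import numpy as np

def qs_pushing_residual(x_prev, x_curr, f, C, r_contact, dt=1.0, eps=1e-9):
    """Eq. 8 residual. x_* = (x, y, theta); f = (fx, fy) applied force;
    r_contact is the contact point offset w.r.t. the pose center of x_prev."""
    v = (np.asarray(x_curr[:2]) - np.asarray(x_prev[:2])) / dt
    omega = (x_curr[2] - x_prev[2]) / dt
    tau = r_contact[0] * f[1] - r_contact[1] * f[0]   # applied moment
    return np.array([v[0] / (omega + eps) - C**2 * f[0] / (tau + eps),
                     v[1] / (omega + eps) - C**2 * f[1] / (tau + eps)])
\end{verbatim}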
\boldsubheading{\tikz[baseline=-0.6ex]\draw[black,fill=icolor, line width=1pt] (0,0) circle (.5ex);~Implicit surface contact factor}
Given ${\Theta}_t \! = \! 1$ (contact), we encourage the measured contact point $\textbf{c}_t$ to lie on the manifold of the object. We define $\mathbf{\Phi} = \mathbf{\Phi}(\mathbf{x}_t, \mathbf{z}_t, \mathcal{S})$ that computes the closest point on $\mathcal{S}$ w.r.t. the pusher via projection. The error term is defined as:
\begin{equation}
I(\mathbf{x}_{t}, \mathbf{z}_{t}, \mathcal{S}) = \big[ \mathbf{\Phi} - \mathbf{c}_t \ , \ \| \mathbf{\Phi} - \mathbf{p}_t \| - r_{\text{probe}} \big]
\label{eq:9}
\end{equation}
This ensures that in the presence of noise, the contact points lie on the surface and the normals are physically valid \cite{yu2015shape}.
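One simple way to realize the projection $\mathbf{\Phi}$ is sketched below, under the assumption that the GP posterior mean and its gradient are available as callables; the fixed-point iteration is our illustrative choice, as the text above only states that $\mathbf{\Phi}$ is computed via projection.
\begin{verbatim}
import numpy as np

def project_to_surface(p, sdf, grad_sdf, iters=20):
    """Fixed-point projection of p onto the zero-level set of the SDF."""
    x = np.asarray(p, dtype=float)
    for _ in range(iters):
        n = grad_sdf(x)
        n = n / (np.linalg.norm(n) + 1e-12)   # unit surface direction
        x = x - sdf(x) * n                    # step back along the normal
    return x
\end{verbatim}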
\boldsubheading{\tikz[baseline=-0.6ex]\draw[black,fill=pcolor, line width=1pt] (0,0) circle (.5ex);~Non-penetration factor}
While we assume persistent contact, this cannot be assured when there is more than one pusher. When ${\Theta}_t \! = \! 0$ (no contact) we enforce an intersection penalty (as used in \cite{lambert2019joint}) on the pusher-slider system. We define $\mathbf{\Psi} = \mathbf{\Psi}(\mathbf{x}_t, \mathbf{z}_t, \mathcal{S})$ to estimate the pusher point furthest inside the implicit surface, if intersecting. The error is:
\begin{align}
P(\mathbf{x}_{t}, \mathbf{z}_{t}, \mathcal{S}) =
\begin{cases}
\| \mathbf{\Psi} - \mathbf{\Phi} \|, &\text{when intersecting} \\
0, &\text{when not intersecting}
\end{cases}
\label{eq:10}
\end{align}
\boldsubheading{\tikz[baseline=-0.6ex]\draw[black,fill=fcolor, line width=1pt] (0,0) circle (.5ex);~Finite motion factor}
Given persistent contact, we weakly bias the object towards constant motion in $SE\left(2\right)$. The magnitude is empirically chosen from the planar pushing experiments. We observe that this both smooths the trajectories and prevents an indeterminate optimization.
\boldsubheading{\tikz[baseline=-0.6ex]\draw[black,fill=prcolor, line width=1pt] (0,0) circle (.5ex);~Priors}
The prior $\mathbf{p}_0$ anchors the optimization to the initial pose. $\mathcal{C}$ is initialized with $\mathbf{c}_0$ using the circular shape prior.
\section{Experimental evaluation}
\label{sec:expts}
\begin{figure}[b]
\GobbleMedium
\centering
\includegraphics[width=\columnwidth]{images/sim_MHD.pdf}
\caption{Average MHD w.r.t. the ground-truth model for the $50$ logs and for each of the $3$ objects. Comparing ours \textbf{[left]} to \textit{Yu et al.~incremental} \textbf{[right]}, we see less variance across all shapes, and much lower error in \texttt{ellip2}. This shows that while the piecewise-linear representation is suited to polygonal objects, it understandably fails to generalize to more arbitrary shapes. The GPIS faithfully approximates both classes. Moreover, the errors in data association lead to large variance among trials in \texttt{rect1} and \texttt{hex}.}
\label{fig:sim_MHD}
\end{figure}
\begin{figure*}
\centering
\begin{minipage}[b]{.59\textwidth}
\includegraphics[width=\textwidth]{images/sim_pose_shape.jpg}
\caption{Estimated shape and pose of representative simulation trials, with timesteps $t$. We compare these against the ground-truth, and overlay the stream of tactile measurements.}\label{fig:sim_pose_shape}
\end{minipage}\qquad
\begin{minipage}[b]{.32\textwidth}
\includegraphics[width=\textwidth]{images/timing_plot.pdf}
\caption{Moving average of execution time of the processes, over all $3 \! \times \! 50$ logs. \textbf{[top]} While the complexity of full iSAM2 grows linearly with time, fixed-lag smoothing maintains a bounded optimization window. \textbf{[bottom]} As a result of spatial partitioning, local GPIS regression has a lower query time compared to the standard implementation.}\label{fig:timing_plot}
\end{minipage}
\GobbleLarge
\end{figure*}
We demonstrate the framework in both simulated (Section \ref{ssec:expts_1}) and real-world planar pushing tasks (Section \ref{ssec:expts_2}).
\boldsubheading{Evaluation metrics}
For pose error, we evaluate the root mean squared error (RMSE) in translation and rotation w.r.t. the true object pose. For shape, we use the modified Hausdorff distance (MHD)~\cite{dubuisson1994modified} w.r.t. the true object model. The Hausdorff distance is a measure of similarity between arbitrary shapes that has been used to benchmark GPIS mapping~\cite{stork2020ensemble}, and the MHD is an improved metric more robust to outliers.
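For reference, the MHD between two point sets $A$ and $B$ replaces the max over points in the directed Hausdorff distance with a mean, and can be computed as:
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(A, B):
    """MHD (Dubuisson & Jain, 1994) between (n,2) and (m,2) point sets."""
    D = cdist(A, B)
    return max(D.min(axis=1).mean(),   # mean directed distance A -> B
               D.min(axis=0).mean())   # mean directed distance B -> A
\end{verbatim}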
\boldsubheading{Baseline}
We compare against the work of Yu et al.~\cite{yu2015shape}, which recovers shape and pose as a batch optimization (Section \ref{ssec:related_1}). For fairness, we implement an online version as our baseline, that we refer to as \textit{Yu et al.~incremental}. We use $N_s = 25$ shape nodes in the optimization to better represent non-polygonal objects.
\boldsubheading{Compute}
We use the GTSAM library with iSAM2~\cite{kaess2012isam2} for incremental factor graph optimization. The experiments were carried out on an Intel Core i7-7820HQ CPU, 32GB RAM without GPU parallelization.
\subsection{Simulation experiments}
\label{ssec:expts_1}
\boldsubheading{Setup}
The simulation experiments are conducted in PyBullet~\cite{coumans2010bullet} (Fig. \ref{fig:sim_data}). We use a two-finger pusher ($r_{\text{probe}} = 6.25$~mm) to perform tactile exploration at $60$~mm/s. Contour following is based on previous position and sensed normal reaction. The coefficients of friction of the object-pusher and object-surface are both $0.25$. Zero-mean Gaussian noise with std. dev. $\left[0.1~\text{mm},~0.01~\text{N}\right]$ is added to $\left[\textbf{c}_t,~\textbf{f}_t\right]$.
We run $50$ trials of $4000$ timesteps each, on three shape models~\cite{yu2016more}: \textbf{(i)} \texttt{rect1} ($90$~mm side), \textbf{(ii)} \texttt{hex} ($60.5$~mm circumradius), and \textbf{(iii)} \texttt{ellip2} ($130.9$~mm maj. axis). While our framework can infer arbitrary geometries, the contour following schema is best suited for simpler planar objects. The object's initial pose $\mathbf{x}_0$ is randomly perturbed over the range of $\pm (2~\text{mm}, \! \ 2~\text{mm}, \! \ 5^{\circ})$.
\begin{figure}[!b]
\GobbleMedium
\centering
\includegraphics[width=\columnwidth]{images/yu_da.pdf}
\caption{\textbf{[left]} An example of data association failure in the baseline parametric representation~\cite{yu2015shape}. Without discriminating features, committing to vertex associations affects the entire optimization. \textbf{[right]} The GP does not require associations, and the kernel smooths out outlier measurements.}
\label{fig:yu_da}
\GobbleMedium
\end{figure}
\begin{figure}[!b]
\GobbleLarge
\centering
\includegraphics[width=0.9\columnwidth]{images/sim_RMSE.pdf}
\caption{Translation and rotation RMSE box-plots across the $50$ simulation trials, for each of the objects. \textit{Yu et al.~incremental} performs comparably for \texttt{rect1}, but has higher variance and error for the more complex shapes.}
\label{fig:sim_RMSE}
\GobbleMedium
\end{figure}
\boldsubheading{Results}
\begin{figure*}
\centering
\begin{minipage}[b]{.71\textwidth}
\includegraphics[width=\textwidth]{images/real_pose_shape.jpg}
\caption{Representative results from the real-world estimation task. We compare our tactile only \textbf{(T)} result to one aided by periodic relocalization \textbf{(T + R)}. We add $10$ such events in the trial, and the reduced pose drift improves shape prediction.}\label{fig:real_pose_shape}
\end{minipage}\qquad
\begin{minipage}[b]{.25\textwidth}
\includegraphics[width=\textwidth]{images/real_data.jpg}
\caption{Our data collection setup: The ABB IRB 120, with F/T sensing, pushes the block on a plywood surface. The Vicon system tracks the object as ground-truth in our evaluation.}\label{fig:real_data}
\end{minipage}
\GobbleLarge
\end{figure*}
We highlight the qualitative results of a few representative trials in Fig. \ref{fig:sim_pose_shape}. We observe the evolution of object shape from the initial circular prior to the final shape, with pose estimates that match well with ground-truth.
Fig. \ref{fig:sim_MHD} shows the decreasing MHD shape error over the $50$ trials. The uncertainty threshold of the GPIS $\sigma_{\text{thresh}}^2$ (Section \ref{ssec:gpis_2}) prevents shape updates over repeated exploration, and hence the curve flattens out. This trades off accuracy for speed, but we find little perceivable difference in the final models. The baseline has larger error for \texttt{ellip2}, a shape which their formulation cannot easily represent. Moreover, the uncertainty of the shape estimates is high, due to data association failures. Our representation has no explicit correspondences, and the kernel smooths out outlier measurements. An example of these effects is shown in Fig.~\ref{fig:yu_da}. A similar trend is seen in pose RMSE across all trials (Fig. \ref{fig:sim_RMSE}). The baseline shows comparable performance only with \texttt{rect1}, as the shape control points can easily represent it.
Finally, we quantify the computational impact of both the local GPIS regression and incremental fixed-lag optimizer (Fig. \ref{fig:timing_plot}). Fixed-lag smoothing keeps the optimization time bounded, and prevents a linear rise in complexity. Spatial partitioning keeps online query time low, and $\sigma_{\text{thresh}}^2$ results in less frequent updates over time for both methods. The combination of these two gives us an average compute time of about $10~\text{ms}$ or $100~\text{Hz}$. For reference, at $60$~mm/s, that equates to an end-effector travel distance of $0.6~\text{mm}$ per computation. The maximum time taken by a re-linearization step is $55~\text{ms}$ ($3.3~\text{mm}$ travel).
\begin{figure}[b]
\GobbleMedium
\centering
\includegraphics[width=0.9\columnwidth]{images/MHD_plot_real.pdf}
\caption{Average MHD w.r.t. the ground-truth model for the real-world experiments, compared against the baseline. With just a few relocalization events, we can achieve far lower shape error.}
\label{fig:MHD_plot_real}
\GobbleMedium
\end{figure}
\subsection{Real-world tactile exploration}
\label{ssec:expts_2}
\boldsubheading{Setup}
We carry out an identical tactile exploration task with the pusher-slider setup in Fig. \ref{fig:real_data}. An ABB IRB 120 industrial robotic arm circumnavigates a square object ($98$~mm side) at the rate of $20$~mm/s. We perform the experiments on a plywood surface, with object-pusher and object-surface coefficients of friction both $\approx 0.25$. We use a single cylindrical rod with a rigidly attached ATI F/T sensor that measures reaction force. Contact is detected with a force threshold, set conservatively to reduce the effect of measurement noise. Ground-truth is collected with Vicon, tracking reflective markers on the object.
We collect three trials of $4000$ timesteps each, with tactile measurements and ground-truth. In this case, we do not record force magnitude, but only contact normals. We instead map the pusher velocity to forces via the motion cone, reasoning about sticking and slipping~\cite{mason1986mechanics}.
\boldsubheading{Results}
The top row \textbf{(T)} of Fig. \ref{fig:real_pose_shape} shows the evolution of shape and pose over the runtime. When compared to the simulation results (Fig. \ref{fig:sim_pose_shape}), we notice aberrations in the global shape. We can attribute these to: \textbf{(i)} lack of a second pusher, which enables better localization and stable pushes, and \textbf{(ii)} motion model uncertainty in real-world pushing.
The bottom row (T + R) of Fig. \ref{fig:real_pose_shape} describes an additional scenario, where we demonstrate better results when we can periodically relocalize the object. This is a proxy for combining noisy global estimates from vision in a difficult perception task with large occlusions. To illustrate this, we crudely simulate $10$ such events over each trial using global Vicon measurements with Gaussian noise.
Fig. \ref{fig:MHD_plot_real} plots the evolution of shape error over the three trials. The decrease in shape error is correlated with relocalization events, highlighting the importance of reducing pose drift. Finally, Table \ref{tab:rmse_real} shows the RMSE of our experiments.
\begin{table}[t]
\GobbleSmall
\centering
\caption{RMSE for real-world tactile exploration. Apart from \textit{Yu et al.~incremental}, we also compare to a method aided by periodic relocalization.}
\label{tab:rmse_real}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{@{}ccc@{}}
\toprule
Method & Trans. RMSE (mm) & Rot. RMSE (rad) \\ \midrule
\textbf{Ours (T)} & 10.60 ± 2.74 & 0.09 ± 0.02 \\
Ours (T + R) & 4.60 ± 1.00 & 0.09 ± 0.01 \\
Yu et al. incremental & 12.75 ± 4.01 & 0.17 ± 0.03 \\ \bottomrule
\end{tabular}%
}
\GobbleLarge
\end{table}
\section{Conclusion}
\label{sec:conc}
We formulate a method for estimating shape and pose of a planar object from a stream of tactile measurements. The GPIS reconstructs the object shape, while geometry and physics-based constraints optimize for pose. By alternating between these steps, we show real-time tactile SLAM in both simulated and real-world settings. This method can potentially accommodate tactile arrays and vision, and be extended beyond planar pushing.
In the future, we wish to build on this framework for online SLAM with dense sensors, like the GelSight~\cite{yuan2017gelsight} or GelSlim~\cite{donlon2018gelslim}, to reconstruct complex 3-D objects. Multi-hypothesis inference methods~\cite{hsiao2019mh} can relax our assumption of known initial pose, and learned shape priors \cite{wang20183d} can better initialize unknown objects. Knowledge about posterior uncertainty can guide active exploration~\cite{dragiev2013uncertainty}, and perform uncertainty-reducing actions~\cite{dogar2012planning}.
\footnotesize
\bibliographystyle{ieeetr}
\section{Hopper}
\vspace{-10pt}
\label{sec:framework}
\texttt{\textbf{Hopper}}\ (Figure~\ref{fig:hopper}) is a framework inspired by the observation that humans think in terms of entities and relations.
Unlike traditional deep visual networks that process raw pixels from which they learn and extract features, object-centric learning-based architectures explicitly separate information about entities from the low-level information through grouping and abstraction~\citep{slotattention}. \texttt{\textbf{Hopper}}\ obtains representations of object entities from the low-level pixel information of every frame (Section~\ref{sec:objrepresent}). Additionally, to maintain object permanence, humans are able to identify key moments when the objects disappear and
reappear. To imitate that, \texttt{\textbf{Hopper}}\ computes object tracks with the goal of obtaining a more consistent object representation (Section~\ref{sec:tracking}) and then achieves \textit{multi-step compositional long-term reasoning} with the Multi-hop Transformer\ to pinpoint these critical moments.
Furthermore, \texttt{\textbf{Hopper}}\ combines both fine-grained (object) and coarse-grained (image) information to form a contextual understanding of a video.
As shown in Figure~\ref{fig:hopper}, \texttt{\textbf{Hopper}}\ contains $4$ components; we describe them below.
\vspace{-5pt}
\subsection{Backbone}
\label{sec:backbone}
\vspace{-10pt}
Starting from the initial RGB-based video representation $x_{\mathrm{v}} \in \mathbb{R}^{T \times 3 \times H_{0} \times W_{0}}$ where $T$ represents the number of frames of the video, $3$ is for the three color channels, and $H_{0}$ and $W_{0}$ denote the original resolution height and width,
a conventional CNN backbone would extract the feature map $f \in \mathbb{R}^{T \times P \times H \times W}$ and for every frame $t$ a compact image representation $i_t \in \mathbb{R}^{P}$.
The backbone we use is ResNeXt-101 from~\citet{sinet}, with $P=2048$ and $H,W=8,10$.
A 1$\times$1 convolution~\citep{detr} then reduces the channel dimension of $f$ from $P$ to a smaller dimension $d$ ($d=256$), and a linear layer is used to turn the dimension of $i_t$ from $P$ to $d$.
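In PyTorch terms, this reduction amounts to the following sketch (treating the $T$ frames of one video as the batch dimension; $T=13$ here is illustrative):
\begin{verbatim}
import torch
import torch.nn as nn

P, d, T, H, W = 2048, 256, 13, 8, 10
reduce_map = nn.Conv2d(P, d, kernel_size=1)   # 1x1 conv on feature map f
reduce_vec = nn.Linear(P, d)                  # linear layer on i_t

f = torch.randn(T, P, H, W)       # per-frame feature maps for one video
i = torch.randn(T, P)             # per-frame compact representations
f_d = reduce_map(f)               # -> (T, d, H, W)
i_d = reduce_vec(i)               # -> (T, d)
\end{verbatim}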
\vspace{-5pt}
\subsection{Object Detection and Representation}
\label{sec:objrepresent}
\vspace{-10pt}
We collapse the spatial dimensions into $1$ dimension and combine the batch dimension with the temporal dimension for the feature map $f$.
Positional encodings are learned for each time step ($T$ in total) and each spatial location ($H\times W$ in total), which are further added to the feature map in an element-wise manner. The positional encoding-augmented feature map is the source input to the transformer encoder~\citep{transformer} of DETR~\citep{detr}. DETR is a recently proposed transformer-based object detector for image input; it additionally accepts $N$ embeddings of object queries for every image (assuming every image has at most $N$ objects\footnote{$\varnothing$, i.e., no object, will be predicted if the number of objects in an image is less than $N$.}) to the transformer decoder. We also combine the batch dimension with the temporal dimension for the object queries.
Outputs from DETR are transformed object representations that are used as inputs to a multilayer perceptron (MLP) to predict the bounding box and class label of every object.
For Snitch Localization, DETR is trained on
object annotations from LA-CATER~\citep{opnet}.
\vspace{-5pt}
\subsection{Tracking}
\label{sec:tracking}
\vspace{-10pt}
Tracking produces consistent object representations as it links the representations of each object through time. We perform tracking using the unordered object representations, bounding boxes and labels as inputs, and apply our Hungarian-based algorithm to match objects between every two consecutive frames. We describe the details as follows.
Tracking is essentially an association problem~\citep{bewley2016simple}.
An association between $2$ objects respectively from consecutive $2$ frames can be defined by the object class agreement
and the difference of the two bounding boxes.
Let us denote by $\hat{y} = [\hat{y}_t]_{t=1}^{T}$ the predicted list of objects at all frames in a video,
where $\hat{y}_t = \{\hat{y}_t^{i}\}_{i=1}^{N}$ denotes the predicted set of objects at frame $t$. Each
object is represented
as a $4$-tuple $\hat{y}_t^{i} = (\hat{c}_t^{i}, \hat{b}_t^{i}, \{\hat{p}_t^{i}(c) | c\in C\}, o_t^{i})$ where $\hat{c}_t^{i}$ denotes the class label that has the maximum predicted likelihood for object $i$ at frame $t$,
$\hat{b}_t^{i}\in [0, 1]^4$ is a vector that defines the bounding box top left and bottom right coordinates relative to the image size, $\hat{p}_t^{i}(c)$ denotes the predicted likelihood for class $c$ (where $C = \{$large metal green cube, small metal green cube, $\dots$, $\varnothing$\}), and $o_t^{i} \in \mathbb{R}^d$ denotes the representation vector of this object $i$ at frame $t$.
In order to obtain the optimal bipartite matching between the set of predicted objects at frame $t$ and $t+1$, we search for a permutation of $N$ elements $\sigma \in \mathfrak{S}_{N}$ with the lowest permutation cost:
\vspace{-12pt}
\begin{equation}
\begin{aligned}
\hat{\sigma}=\underset{\sigma \in \mathfrak{S}_{N}}{\arg \min } \sum_{i=1}^{N}
\mathscr{L}_{\mathrm{track}}\left(\hat{y}_t^{i}, \hat{y}_{t+1}^{\sigma(i)}\right)
\end{aligned}
\label{equa:loss_search_permu}
\end{equation}
\vspace{-12pt}
where $\mathscr{L}_{\mathrm{track}}$ is a pair-wise track matching cost between predicted object $\hat{y}_t^{i}$
(i.e., object $i$ at frame $t$)
and predicted object at frame $t+1$ with index $\sigma(i)$ from the permutation $\sigma$, denoted by $\hat{y}_{t+1}^{\sigma(i)}$. Following~\citet{detr}, the optimal assignment is computed efficiently with the Hungarian algorithm. The track matching cost at time $t$ for object $i$ is defined as
\begin{equation}
\mathscr{L}_{\mathrm{track}}\left(\hat{y}_t^{i}, \hat{y}_{t+1}^{\sigma(i)}\right) =
-\lambda_{\mathrm{c}} \mathbbm{1}_{\left\{\hat{c}_{t}^{i} \neq \varnothing\right\}} \hat{p}_{t+1}^{\sigma(i)}\left(\hat{c}_{t}^{i}\right)
+\lambda_{\mathrm{b}} \mathbbm{1}_{\left\{\hat{c}_{t}^{i} \neq \varnothing\right\}} \mathscr{L}_{\text {box }}\left(\hat{b}_{t}^{i}, \hat{b}_{t+1}^{\sigma(i)}\right)
\end{equation}
\vspace{-12pt}
where $\mathbbm{1}$ denotes an indicator function: the term following $\mathbbm{1}$ contributes only when the condition inside the $\{\dots\}$ holds, and is $0$ otherwise. $\lambda_{\mathrm{c}}, \lambda_{\mathrm{b}}\in \mathbb{R}$ weight each term. $\mathscr{L}_{\text {box }}$ is defined as a linear combination of the $L_1$ loss and the generalized IoU loss~\citep{GIoU}.
When the predicted class label of object $i$ at frame $t$ is not $\varnothing$, we aim to maximize the likelihood of the class label $\hat{c}_{t}^i$ for the predicted object $\sigma(i)$ at frame $t+1$, and minimize the bounding box difference between the two.
The total track matching cost of a video is the aggregation of $\mathscr{L}_{\mathrm{track}}\left(\hat{y}_t^{i}, \hat{y}_{t+1}^{\sigma(i)}\right)$ from object $i=1$ to $N$ and frame $t=1$ to $T-1$.
This Hungarian-based tracking algorithm is used due to its simplicity.
A more sophisticated tracking solution (e.g. DeepSORT~\citep{deepsort}) could be easily integrated into \texttt{\textbf{Hopper}}, and may improve the accuracy of tracking in complex scenes.
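A sketch of one matching step follows; for brevity the box cost uses only the $L_1$ term of $\mathscr{L}_{\text{box}}$ (the generalized IoU term is omitted), and the weights are placeholders:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_consecutive_frames(p_next, boxes_t, boxes_next, labels_t,
                             not_null, lam_c=1.0, lam_b=1.0):
    """p_next[j, c]: class likelihoods at frame t+1; boxes_*: (N, 4);
    not_null[i]: whether object i at frame t is predicted non-empty."""
    N = len(boxes_t)
    cost = np.zeros((N, N))
    for i in range(N):
        if not not_null[i]:
            continue          # terms vanish when the class is the null object
        for j in range(N):
            cost[i, j] = (-lam_c * p_next[j, labels_t[i]]
                          + lam_b * np.abs(boxes_t[i] - boxes_next[j]).sum())
    row, col = linear_sum_assignment(cost)   # optimal permutation sigma-hat
    return col                               # col[i] = sigma(i)
\end{verbatim}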
\vspace{-5pt}
\subsection{Video Query Representation and Recognition}
\vspace{-10pt}
The $N$ object tracks
obtained from the Hungarian algorithm and a single track of image features
from the backbone are further added with the learned positional time encodings to form the source input to our Multi-hop Transformer\ (which will be introduced in Section~\ref{sec:multihop_tx}). Multi-hop Transformer\ produces the final latent representation of the video query $\mathbf{e}\in \mathbb{R}^{d}$. A MLP uses the video query representation $\mathbf{e}$ as an input and predicts the grid class for the Snitch Localization task.
\section{Multi-hop Transformer}
\label{sec:multihop_tx}
\vspace{-10pt}
Motivated by how humans reason about an object permanence task through identifying \textit{critical moments of key objects} in the video~\citep{bremner2015perception},
we propose Multi-hop Transformer\ (MHT).
MHT\ reasons by hopping over frames and selectively attending to objects in the frames, until it arrives at the correct object that is the most important for the task.
MHT\ operates in an iterative fashion, and each iteration produces one-hop reasoning by selectively attending to objects from a collection of frames. Objects in that collection of frames form the candidate pool of that hop. Later iterations are built upon the knowledge collected from the previous iterations, and the size of the candidate pool decreases as iteration runs.
We illustrate the MHT\ in Figure~\ref{fig:multihop_transformer} in Appendix. The overall module is described in Algorithm~\ref{algo}.
MHT\ accepts a frame track $\mathcal{T}_{f}$: [$i_1$, $i_2$, $\cdots$, $i_T$],
an object track $\mathcal{T}_{o}$: [$o_1^1$, $o_2^1$, $\cdots$, $o_T^1$, $\cdots$, $o_1^N$, $o_2^N$, $\cdots$, $o_T^N$ ],
an initial target video query embedding $\mathcal{E}$, the number of objects $N$ and number of frames $T$.
$h$ denotes the hop index, and $t$ is the frame index that the previous hop (i.e., iteration) mostly attended to, in Algorithm~\ref{algo}.
\noindent \textbf{Overview.}
Multiple iterations are applied over MHT, and each iteration performs one hop of reasoning by attending to certain objects in critical frames.
With a total of $H$ hops, MHT\ produces refined representation of the video query $\mathcal{E}$ ($\mathcal{E}\in \mathbb{R}^{1\times d}$).
As the complexity of video varies, $H$ should also vary across videos.
In addition,
MHT\ operates in an autoregressive manner to process the incoming frames.
This is achieved by `Masking()'
(will be described later).
The autoregressive processing in MHT\ allows hop $h$+$1$ to only attend to objects in frames after frame $t$, if hop $h$ mostly attends to an object at frame $t$.
We define \textit {the most attended object of a hop} as the object that has the highest attention weight (averaged from all heads) from the encoder-decoder multi-head attention layer in Transformer$_s$ (will be described later).
The hopping ends when the most attended object is an object in the last frame.
\noindent \textbf{MHT\ Architecture.}
Inside of the MHT, there are $2$ encoder-decoder transformer units~\citep{transformer}: Transformer$_f$ and Transformer$_s$.
This architecture is inspired by studies in cognitive science showing that reasoning consists of $2$ stages: first, one has to establish the domain about which one reasons and its properties, and only after this initial step can one's reasoning happen~\citep{stenning2012human}.
We first use Transformer$_f$ to adapt the representations of the object entities,
which form the main ingredients of the domain under the context of an object-centric video task. Then, Transformer$_s$ is used to produce the task-oriented representation to perform reasoning.
We separate $2$ types of information from the input:
\textit{attention candidates} and \textit{helper information}.
This separation
comes from the intuition that humans sometimes rely on additional information outside of the candidate answers.
We refer to such \textit{additional information} as helper information $\mathcal{H}$ (specifically in our case, the coarse-grained global image context, or information related to the previous reasoning step).
We define \textit{candidate answers} as attention candidates $\mathcal{U}$, which are representations of object entities (because object permanence is a task that requires reasoning relations of objects).
For each hop, we first extract
\textit{attention candidates} and \textit{helper information}
from the source sequence,
then use Transformer$_f$ to condense the most useful information by attending to attention candidates
via self-attention and helper information
via encoder-decoder attention. After that, we use Transformer$_s$ to learn the latent representation of the video query by attentively utilizing the information extracted from Transformer$_f$ (via encoder-decoder attention).
Thus, MHT\ decides on which object to mostly attend to, given the \textit{current} representation of the video query $\mathcal{E}$, by reasoning about the relations between the object entities ($\mathcal{U}$), and how would each object entity relate to the reasoning performed by the previous hop \textit{or} global information ($\mathcal{H}$).
\begin{wrapfigure}{L}{0.55\textwidth}
\vspace{-20pt}
\begin{minipage}{0.55\textwidth}
\begin{algorithm}[H]
\footnotesize
\textbf{Input}: $\mathcal{T}_{f}\in \mathbb{R}^{T\times d}$, $\mathcal{T}_{o}\in \mathbb{R}^{NT\times d}$, $\mathcal{E}\in \mathbb{R}^{1 \times d}$, $N\in \mathbb{R}$, $T\in \mathbb{R}$\\
\textbf{Params}: LayerNorm, Transformer$_f$, Transformer$_s$, $W_g$, $b_g$
\caption{Multi-hop Transformer\ module.
}
\label{algo}
\begin{algorithmic}[1]
\State $h \gets 0$, $t \gets 0$
\While {$t\neq (T-1$)}
\State $h \gets$ $h+1$
\If{$h>1$}
\State $\mathcal{H} \gets $Extract ($\mathcal{T}_{o}$, $N$, $T$, $t$)
\Else
\State $\mathcal{H} \gets \mathcal{T}_{f}$
\EndIf
\State $\mathcal{U} \gets \mathcal{T}_{o}$
\State $\mathcal{U}_{\text{update}}\text{, }\_\gets$ Transformer$_f$ ($\mathcal{U}$, $\mathcal{H}$)
\State $\mathcal{U}_{\text{update}}\gets $Sigmoid ($W_g \cdot \mathcal{U}_{\text{update}} + b_g$)$\text{ }\odot\text{ }\mathcal{U}$
\State $\mathcal{U}_{\text{mask}}\gets $Masking ($\mathcal{U}_{\text{update}}$, $t$)
\State $\mathcal{E}\text{, }\mathcal{A}\gets$ Transformer$_s$ ($\mathcal{U}_{\text{mask}}$, $\mathcal{E}$)
\State $t \gets $Softargmax ($\mathcal{A}$)
\EndWhile
\State $\mathbf{e}\gets$ LayerNorm ($\mathcal{E}$)
\end{algorithmic}
\textbf{Return} $\mathbf{e}$
\end{algorithm}
\end{minipage}
\vspace{-15pt}
\end{wrapfigure}
\noindent \textbf{Transformer$_f$}.
Transformer$_f$ uses helper information $\mathcal{H}$ from the previous hop, to adapt the representations of the object entities $\mathcal{U}$ to use in the current reasoning step.
Formally, $\mathcal{U}$ is the object track sequence $\mathcal{T}_{o}$ as in line $9$ in Algorithm~\ref{algo} ($\mathcal{U}\in \mathbb{R}^{NT\times d}$, $NT$ tokens), whereas $\mathcal{H}$ encompasses different meanings for hop $1$ and the rest of the hops. For hop $1$, $\mathcal{H}$ is the frame track $\mathcal{T}_{f}$ ($\mathcal{H}\in \mathbb{R}^{T\times d}$, $T$ tokens, line $7$). This is because hop $1$ is necessary for all videos with the goal to find the first critical object (and frame) from \textit{the global information}. Incorporating frame representations is also beneficial because it provides complementary information and can mitigate occasional errors from the object detector and tracker. For the rest of the hops, $\mathcal{H}$ is the set of representations of all objects in the frame that \textit{the previous hop mostly attended to} ($\mathcal{H}\in \mathbb{R}^{N\times d}$, $N$ tokens, Extract() in line $5$). The idea is that, to select an answer from object candidates after frame $t$, objects in frame $t$ could be
the most important
helper information.
Transformer$_f$ produces $\mathcal{U}_{\text{update}}$ ($\mathcal{U}_{\text{update}}\in \mathbb{R}^{NT\times d}$, $NT$ tokens), an updated version of $\mathcal{U}$, by selectively attending to $\mathcal{H}$.
Further, MHT\ conditionally integrates helper-fused representations
and the original representations of $\mathcal{U}$. This conditional integration is achieved by \textit{Attentional Feature-based Gating} (line $11$), with the role to combine the new modified representation with the original representation.
This layer, added on top of Transformer$_f$, provides additional new information, because it switches the perspective into learning new representations of object entities by learning a feature mask (values between 0 and 1) to select salient dimensions of $\mathcal{U}$, conditioned on the adapted representations of object entities that are produced by Transformer$_f$.
Please see \citet{margatina2019attention} for details about this layer.
\noindent \textbf{Transformer$_s$}.
Transformer$_s$ is then used to produce the task-oriented video query representation $\mathcal{E}$.
As aforementioned, MHT\ operates in an autoregressive manner to proceed with time. This is achieved by `Masking()' that turns $\mathcal{U}_{\text{update}}$ into $\mathcal{U}_{\text{mask}}$ ($\mathcal{U}_{\text{mask}}\in \mathbb{R}^{NT\times d}$) for Transformer$_s$ by only retaining the object entities in frames \textit{after} the frame that the previous hop mostly attended to (for hop $1$, $\mathcal{U}_{\text{mask}}$ is $\mathcal{U}_{\text{update}}$).
Masking is commonly used in NLP for the purpose of autoregressive processing~\citep{transformer}.
Masked objects will have $0$ attention weights. Transformer$_s$ learns representation of the video query $\mathcal{E}$ by attending to $\mathcal{U}_{\text{mask}}$ (line $13$). It indicates that, unlike Transformer$_f$ in which message passing is performed across all connections between tokens in $\mathcal{U}$, between tokens in $\mathcal{H}$, and especially across $\mathcal{U}$ and $\mathcal{H}$ (we use $\mathcal{U}$ for Transformer$_f$, instead of $\mathcal{U}_{\text{mask}}$, because potentially, to determine which object a model should mostly attend to in frames after $t$, objects in and before frame $t$ might also be beneficial), message passing in Transformer$_s$ is only performed between tokens in $\mathcal{E}$ (which has only $1$ token for Snitch Localization), between tokens in \textit{unmasked} tokens in $\mathcal{U}_{\text{update}}$, and more importantly, across connections between the video query $\mathcal{E}$ and \textit{unmasked} tokens in $\mathcal{U}_{\text{update}}$.
The indices of the most attended object
and the frame that object is in, are determined by attention weights $\mathcal{A}$ from the previous hop with a \textit{differentiable} `Softargmax()'
~\citep{chapelle2010gradient,honari2018improving},
defined as, $
\operatorname{softargmax}(x)=\sum_{i} \frac{e^{\beta x_{i}}}{\sum_{j} e^{\beta x_{j}}} i
$, where $\beta$ is an arbitrarily large number.
Attention weights $\mathcal{A}$ ($\mathcal{A}\in \mathbb{R}^{NT\times 1}$) is averaged from all heads.
$\mathcal{E}$ is updated over the hops, serving the information
exchange between the hops.
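A minimal differentiable implementation of this operator is sketched below (the value of $\beta$ is a hypothetical choice):
\begin{verbatim}
import torch

def softargmax(x, beta=100.0):
    """Differentiable surrogate for argmax over the NT attention weights."""
    w = torch.softmax(beta * x.flatten(), dim=-1)
    idx = torch.arange(w.numel(), dtype=w.dtype, device=w.device)
    return (w * idx).sum()

# The frame index of the most attended object then follows from the
# flattened object index, e.g. index % T for the object-major track layout.
\end{verbatim}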
\noindent \textbf{Summary \& discussion.}
$\mathcal{H}$, $\mathcal{U}_{\text{mask}}$ and $\mathcal{E}$ are updated in every hop.
$\mathcal{E}$ should be seen as an encoding for a query of the entire video.
Even though in this dataset, a single token is used for the video query, and the self-attention in the decoder part of Transformer$_s$ is thus reduced to a stacking of $2$ linear transformations,
it is possible that multiple queries
would be desirable in other applications.
These structural priors embedded in MHT\ (e.g., the iterative hopping mechanism and attention, which could be treated as a soft tree) essentially provide the composition rules that algebraically manipulate the previously acquired knowledge and lead to higher forms of reasoning~\citep{bottou2014machine}.
Moreover,
MHT\ could potentially correct errors made by the object detector and tracker, but poor performance of these components (especially the object detector) would also make MHT\ suffer because (1) inaccurate object representations will confuse MHT\ in learning, and (2) the heuristic-based loss for intermediate hops (described in Section~\ref{sec:training}) will be less accurate.
\vspace{-8pt}
\section{Training}
\vspace{-12pt}
\label{sec:training}
We propose the following training methods for the Snitch Localization task and present an ablation study in Appendix~\ref{sec:appendix_abaltion}. We provide the implementation details of our model in Appendix~\ref{sec:appendix_implement}.
\vspace{-3pt}
$\blacktriangleright$ \textit{Dynamic hop stride.} A basic version of autoregressive MHT\ is to set the per-hop frame stride to $1$ with `Masking()' as usually done in NLP. It means that Transformer$_s$ will only take in objects \textit{in} frame $t$+$1$ as the source input if the previous hop mostly attended to an object in frame $t$. However, this could produce an unnecessarily long reasoning chain.
By using dynamic hop stride, we let the model automatically decide on which upcoming frame to reason by setting `Masking()' to give unmasked candidates as objects in \textit{frames} after the frame that the previous hop mostly attended to.
\vspace{-3pt}
$\blacktriangleright$ \textit{Minimal hops of reasoning.}
We empirically set the minimal number of hops that the model has to perform for any video as $5$ to encourage
multi-hop reasoning with a reasonably large number of hops (unless not possible, e.g., if the last visible snitch is in the second last frame, then the model is only required to do $2$ hops). This is also achieved by `Masking()'. E.g., if hop $1$ mostly attends to an object in frame $3$, `Masking()' will
\textit{not} mask objects in frames from frame $4$ to frame $10$ for hop $2$, in order to allow hop $3$, $4$, $5$ to happen (suppose $13$ frames per video, and frame $4$ is computed from $3+1$, frame $10$ is computed as $max(3+1, 13-(5-2))$).
\vspace{-3pt}
$\blacktriangleright$ \textit{Auxiliary hop $1$ object loss.} Identifying the correct object to attend to in early hops is critical, and for Snitch Localization, the object to attend to in hop $1$ should intuitively be the last visible snitch. Hence, we define an auxiliary hop $1$ object loss as the cross-entropy of classifying the index of the last visible snitch.
Inputs to this loss are the computed index of the last visible snitch from $\mathcal{T}_{o}$ (with the heuristic that approximates it from predicted object bounding boxes and labels), as well as the
attention weights $\mathcal{A}$ from Transformer$_s$ of hop $1$, serving as predicted likelihood for each index.
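Since the attention weights $\mathcal{A}$ already form a distribution over the $NT$ objects, the cross-entropy with the heuristic (one-hot) target reduces to a negative log-likelihood; a sketch:
\begin{verbatim}
import torch

def hop1_object_loss(attn, target_idx, eps=1e-8):
    """attn: (NT,) hop-1 attention weights from Transformer_s;
    target_idx: heuristic index of the last visible snitch."""
    return -torch.log(attn[target_idx] + eps)
\end{verbatim}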
\vspace{-3pt}
$\blacktriangleright$ \textit{Auxiliary hop $2$ object loss.} Similarly, we let the second hop attend to the immediate occluder or container of the last visible snitch. The auxiliary hop $2$ object loss is defined as the cross-entropy of classifying the index of the immediate occluder or container of the last visible snitch.
Inputs to this loss are the heuristic
\footnote{The heuristic is $L_1$ distance to find out that in the immediate frame which object's bounding box bottom midpoint location is closest to that of the last visible snitch.}
computed index and
attention weights $\mathcal{A}$ from Transformer$_s$ of hop $2$.
\vspace{-3pt}
$\blacktriangleright$ \textit{Auxiliary hop $1\&2$ frame loss.} Attending to objects in the \textit{correct frames} in hops $1$ and $2$ is critical for the later hops. An $L_1$ loss term could guide the model to find the correct frame index.
\vspace{-3pt}
$\blacktriangleright$ \textit{Teacher forcing} is often used as a strategy for
\textit{training} recurrent neural networks that uses the ground truth from a prior time step as an input~\citep{williams1989learning}. We use teacher forcing for hop $2$ and $3$ by providing the ground truth $\mathcal{H}$ and $\mathcal{U}_{\text{mask}}$ (since we can compute the frame index of the last visible snitch with heuristics as described above).
\vspace{-3pt}
$\blacktriangleright$ \textit{Contrastive debias loss via masking out.} This loss is inspired by the \textit{human mask confusion loss} in~\citet{whycannotdance}. It penalizes the model if it can still predict correctly when the most attended object in the last frame is masked out. However, in contrast to the human mask, we enforce consistency between attended objects and correct predictions, ensuring that the model {\em understands} why it is making a correct prediction.
The idea here is that the model should not be able to predict the correct location without seeing the correct evidence.
Technically, the contrastive debias loss is the negative entropy of the prediction on the masked input, so minimizing it maximizes the prediction entropy; it is defined as follows.
\vspace{-5pt}
\begin{equation}
\vspace{-5pt}
\mathscr{L}_{\mathrm{debias}}= \mathbb{E}\left[\sum_{k=1}^{K} g_{\theta}\left(\mathcal{M}_{\text{neg}}\text{;} \cdots\right) \left(\log g_{\theta}\left(\mathcal{M}_{\text{neg}}\text{;} \cdots\right)\right)\right]
\end{equation}
where $g_{\theta}$ denotes the video query representation and recognition module (Multi-hop Transformer\ along with MLP) with parameter $\theta$ that produces the likelihood of each grid class, $\mathcal{M}_{\text{neg}}$ is the source sequence to the Multi-hop Transformer\ with the most attended object in the last hop being masked out (set to zeros), and $K$ denotes the number of grid classes.
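In code, this term is the negative entropy of the prediction on the masked source sequence (a sketch, assuming $g_\theta$ returns class likelihoods):
\begin{verbatim}
import torch

def debias_loss(probs_masked, eps=1e-8):
    """probs_masked: (batch, K) likelihoods g_theta(M_neg); minimizing
    this negative entropy maximizes the entropy of the prediction."""
    p = probs_masked
    return (p * torch.log(p + eps)).sum(dim=-1).mean()
\end{verbatim}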
\vspace{-5pt}
\noindent \textbf{Summary \& discussion.} The total loss of the model is a linear combination of hop $1$ and hop $2$ object \& frame loss,
contrastive debiasing loss for the last hop,
and the final
grid classification cross-entropy loss.
The object \& frame loss for hop $1$ and $2$ are based on heuristics.
The motivation is to provide weak supervision for the early hops to avoid error propagation, as multi-hop models can be difficult to train without intermediate supervision or when the ground truth reasoning chain is not present (as in Hopper)~\citep{dua2020benefits,qi2019answering,ding2019cognitive,wang2019multi,chen2018learn,jiang2019self}.
One can use similar ideas as the ones here on other tasks that require multi-hop reasoning (e.g., design self-supervision or task-specific heuristic-based weak supervision for intermediate hops, as existing literature often does).
\section{Ablation Study}
\label{sec:appendix_abaltion}
\begin{table}[h]
\centering
\begin{tabular}{@{}cccccccccc@{}}
\toprule
Dynamic & Min $5$ & Hop $1$ & Hop $2$ & Frame & Teacher & Debias & \multirow{2}{*}{Top $1$ $\uparrow$} & \multirow{2}{*}{Top $5$ $\uparrow$} & \multirow{2}{*}{$L_1$ $\downarrow$} \\
Stride & Hops & Loss & Loss & Loss & Forcing & Loss & & & \\\midrule
\Checkmark & \Checkmark & \Checkmark & \Checkmark & \Checkmark & \Checkmark & \Checkmark & $68.41$ & $87.89$ & $1.09$ \\
\Checkmark & \Checkmark & \Checkmark & \Checkmark & \Checkmark & \Checkmark & \xmark & $64.65$ & $87.75$ & $ 1.19$ \\
\Checkmark & \Checkmark & \Checkmark & \Checkmark & \Checkmark & \xmark & \xmark & $63.32$ & $86.94$ & $1.23$ \\
\Checkmark & \Checkmark & \Checkmark & \Checkmark & \xmark & \xmark & \xmark & $62.88$ & $86.57$ & $1.19$ \\
\Checkmark & \Checkmark & \Checkmark & \xmark & \xmark & \xmark & \xmark & $61.92$ & $88.19$ & $1.20$ \\
\Checkmark & \Checkmark & \xmark & \xmark & \xmark & \xmark & \xmark & $59.48$ & $84.58$ & $1.37$ \\
\Checkmark & \xmark & \xmark & \xmark & \xmark & \xmark & \xmark & $59.11$ & $86.20$ & $1.37$ \\
\xmark & \xmark & \xmark & \xmark & \xmark & \xmark & \xmark & $57.27$ & $84.43$ & $1.39$ \\ \bottomrule
\end{tabular}
\caption{Ablation Study of \texttt{\textbf{Hopper}}\ training methods. We gradually add training methods described in Section~\ref{sec:multihop_tx}, i.e., dynamic hop stride, minimal $5$ hops of reasoning, auxiliary hop $1$ object loss, auxiliary hop $2$ object loss, auxiliary frame loss, teacher forcing, and contrastive debias loss via masking out, onto
the base \texttt{\textbf{Hopper}}-multihop model. The results are obtained from the CATER-h test set.
}
\label{tab:ablation}
\end{table}
We conduct an ablation study of training methods described in Section~\ref{sec:multihop_tx} in Table~\ref{tab:ablation}.
As shown, all proposed training methods are beneficial. `Dynamic Stride' gives the model more flexibility whereas `Min $5$ Hops' constrains the model to perform a reasonable number of steps of reasoning. `Hop $1$ Loss', `Hop $2$ Loss', `Frame Loss' and `Teacher Forcing' stress the importance of
the correctness of
the first $2$ hops to avoid error propagation. `Debias Loss' is the most effective one by contrastively inducing the latent space to capture information that is maximally useful to the task at hand.
\begin{table}[h]
\vspace{10pt}
\centering
\begin{tabular}{@{}ccccc@{}}
\toprule
Object Detector \& Tracker & Reasoning Model & Top $1$ $\uparrow$ & Top $5$ $\uparrow$ & $L_1$ $\downarrow$ \\ \midrule
DETR + Hungarian (ours) & MHT\ (both Masked) & $65.73$ & $88.39$ & $1.13$ \\
DETR + Hungarian (ours) & MHT\ (no Gating) & $66.62$ & $87.43$ & $1.16$ \\
DETR (no Tracking) & MHT\ (no Tracking) & $67.51$ & $88.32$ & $1.14$ \\
DETR + Hungarian (ours) & MHT\ (mask out LAST) & $32.49$ & $60.92$ & $2.53$ \\
DETR + Hungarian (ours) & MHT\ (mask out ALL) & $11.68$ & $28.98$ & $3.60$ \\
\bottomrule
\end{tabular}
\caption{Ablation study \& comparative results of analyzing components of our method (on CATER-h test set).
}
\label{tab:extra_result}
\end{table}
We then study how would different choices of the sub-components of our method affect the Snitch Localization performance. In Table~\ref{tab:extra_result}:
\begin{itemize}
\item `MHT\ (both Masked)': This refers to using our \texttt{\textbf{Hopper}}-multihop but replacing the original $U$ input to Transformer$_f$ with a masked version. In this way, both Transformer$_f$ and Transformer$_s$ have `Masking()' applied beforehand.
\item `MHT\ (no Gating)': This refers to using our \texttt{\textbf{Hopper}}-multihop but removing the \textit{Attentional Feature-based Gating} (line $11$ in Algorithm~\ref{algo}) inside of MHT.
\item `MHT\ (no Tracking)': This refers to using our \texttt{\textbf{Hopper}}-multihop but entirely removing the Hungarian tracking module. Thus, MHT\ directly takes in unordered object representations as inputs.
\item `MHT\ (mask out LAST)': This refers to taking our trained \texttt{\textbf{Hopper}}-multihop, masking out the representation of the most attended object in the last hop by zeros, and then making predictions. This is to verify whether the most attended object in the last hop is important for the final Snitch Localization prediction task.
\item `MHT\ (mask out ALL)': Similar to the above, `MHT\ (mask out ALL)' refers to taking our trained \texttt{\textbf{Hopper}}-multihop, masking out the representations of the most attended objects in \textit{all} hops by zeros, and then making predictions. This is to verify how important are the most attended objects in all hops that are identified by our \texttt{\textbf{Hopper}}-multihop.
\end{itemize}
As shown in Table~\ref{tab:extra_result}, all of these ablations give worse performance, thus, indicating that our motivations for these designs are reasonable (see Section~\ref{sec:multihop_tx}). Recall that in Table~\ref{tab:bcater_result}, `DETR + Hungarian' (without MHT) has only $37.2\%$ Top-1 accuracy on CATER-h (learning a perfect object detector or tracker is not the focus of this paper). This highlights the superiority of our MHT\ as a reasoning model, and suggests that
MHT\ has the potential to correct mistakes from the upstream object detector and tracker, by learning more robust object representations during the process of learning the Snitch Localization task.
Masking out the most attended object identified by our \texttt{\textbf{Hopper}}-multihop in the last hop only has $32.49\%$ Top-1 accuracy. Masking out all of the most attended objects from all hops only has $11.68\%$ Top-1 accuracy. Such results reassure us about the interpretability of our method.
\section{Parameter Comparison}
\label{sec:appendix_param_compare}
\begin{table}[h]
\centering
\begin{tabular}{@{}lccc@{}}
\toprule
Model & \# Parameters (M) & GFLOPs & Top $1$ Acc. \\ \midrule
SINet & $138.69$ & $7.98$ & $18.6$\\
Transformer & $15.01$ & $0.11$ & $11.6$ \\
\texttt{\textbf{Hopper}}-transformer (last frame) & $15.01$ & $0.09$ & $41.8$ \\
\texttt{\textbf{Hopper}}-transformer & $15.01$ & $1.10$ & $57.6$\\
\texttt{\textbf{Hopper}}-sinet & $139.22$ & $8.05$ & $62.8$ \\
\texttt{\textbf{Hopper}}-multihop (our proposed method) & $6.39$ & $1.79$ & $68.4$\\
\bottomrule
\end{tabular}
\caption{Parameter and FLOPs comparison of our \texttt{\textbf{Hopper}}-multihop to alternative methods. (M) indicates millions. Results of the methods on CATER-h test set is also listed.
}
\label{tab:param_compare}
\end{table}
In Table~\ref{tab:param_compare}, we compare the number of parameters of our \texttt{\textbf{Hopper}}-multihop with alternative methods. Our proposed method is the most efficient one in terms of the number of parameters. This is because of the iterative design embedded in MHT. Unlike most existing attempts at using Transformers that stack multiple encoder and decoder layers in a traditional way, MHT\ only has one layer of Transformer$_f$ and Transformer$_s$. As multiple iterations are applied to MHT, parameters of Transformer$_f$ and Transformer$_s$ \textit{from different iterations} are shared. This iterative transformer design is inspired by previous work~\citep{slotattention}. The design saves parameters, accumulates previously learned knowledge, and adapts to a varying number of hops, e.g., some videos require $1$ hop and some require more (e.g., $5$ hops). For videos that only require $1$ hop, stacking multiple Transformer layers (e.g., $5$) would be wasteful and unnecessary; our design of MHT\ addresses this issue and is more parameter-efficient. We also report the GFLOPs comparison. Given that the FLOPs for MHT\ depend on the number of hops predicted for a video, we report the average number of FLOPs for the CATER-h test set.
\vspace{-5pt}
\section{Diagnostic Analysis}
\label{sec:appendix_diagnostic}
\vspace{-10pt}
\subsection{Diagnostic Analysis on the Hopping Mechanism}
\label{sec:appendix_hopping}
\begin{table}[h]
\centering
\begin{tabular}{l|ccccc}\toprule
\# Hops & $1$ & $2$ & $3$ & $4$ & $\geq 5$ \\ \midrule
Ground Truth & $104$ & $105$ & $111$ & $110$ & $1026$ \\
Prediction & $109$ & $102$ & $112$ & $108$ & $1025$\\
Jaccard Similarity $\uparrow$ & $0.9541$ & $0.9714$ & $0.9561$ & $0.9818$ & $0.9990$ \\ \bottomrule
\end{tabular}
\vspace{-5pt}
\caption{Diagnostic analysis of the Multi-hop Transformer\ in terms of the `hopping' ability (\# hops performed).}
\label{tab:nhops}
\end{table}
\vspace{-5pt}
We evaluate the `hopping' ability of the proposed Multi-hop Transformer\ in Table~\ref{tab:nhops}. The prediction is made by our \texttt{\textbf{Hopper}}-multihop that requires at least $5$ hops of reasoning unless not possible. For each test video, we compute the ground truth number of hops required by this video and obtain the number of hops that \texttt{\textbf{Hopper}}\ actually runs. In the table, we provide the ground truth count and predicted count of test videos that require $1$ hop, $2$ hops, $3$ hops, $4$ hops, and equal or greater than $5$ hops.
Since the numbers are close,
we further compute Jaccard Similarity (range from $0$ to $1$ and higher is better) to measure the overlapping between the ground truth set of the test videos and predicted set of the test videos.
According to these metrics, our proposed \texttt{\textbf{Hopper}}-multihop
functionally performs the correct number of hops for almost all test videos.
\begin{table}[h]
\centering
\footnotesize
\begin{tabular}{l|ccccccccccccc}\toprule
Frame Index & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $13$ \\ \midrule
Hop $1$ & $127$ & $136$ & $106$ & $120$ & $104$ & $111$ & $109$ & $107$ & $105$ & $0$ & $0$ & $0$ & $0$ \\
Hop $2$ & $0$ & $127$ & $136$ & $106$ & $120$ & $104$ & $111$ & $109$ & $107$ & $105$ & $0$ & $0$ & $0$ \\
Hop $3$ & $0$ & $0$ & $8$ & $2$ & $66$ & $10$ & $13$ & $24$ & $17$ & $107$ & $778$ & $0$ & $0$ \\
Hop $4$ & $0$ & $0$ & $0$ & $1$ & $0$ & $0$ & $0$ & $1$ & $0$ & $0$ & $10$ & $1013$ & $0$ \\
Hop $5$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $9$ & $1016$ \\
Hop $6$ & $0$ & $0$ & $1$ & $0$ & $0$ & $0$ & $1$ & $0$ & $0$ & $0$ & $0$ & $0$ & $9$ \\
\bottomrule
\end{tabular}
\vspace{-5pt}
\caption{Hop Index vs. Frame Index: the number of times hop $h$ mostly attends to frame $t$ (results are obtained from \texttt{\textbf{Hopper}}-multihop on CATER-h). See details in Appendix~\ref{sec:appendix_hopping}. }
\label{tab:hop_vs_frame}
\vspace{-15pt}
\end{table}
\vspace{-5pt}
\begin{wrapfigure}{l}{0.5\textwidth}
\includegraphics[width=0.5\textwidth]{Fig/hops_deep_analysis.png}
\vspace{-20pt}
\caption{Hop Index vs. Frame Index:
we plot the frame index of the most attended object identified by each hop. Each hop has its unique color, and the transparency of the dot denotes the normalized frequency of that frame index for that particular hop. }
\label{fig:hops_deep_analysis}
\vspace{-10pt}
\end{wrapfigure}
In Table~\ref{tab:hop_vs_frame},
we show the number of times hop $h$ mostly attends to frame $t$.
The results are obtained from \texttt{\textbf{Hopper}}-multihop on CATER-h, for those $1025$ test videos predicted with $\geq 5$ hops shown in Table~\ref{tab:nhops}. In Figure~\ref{fig:hops_deep_analysis}, we plot the frame index of the most attended object identified by each hop
(conveying the same meaning as Table~\ref{tab:hop_vs_frame}). The transparency of the dot denotes the normalized frequency of that frame index for that particular hop.
We can observe that: \textbf{(1)} Hops $3$ to $6$ tend to attend to later frames, due to the lack of supervision for the intermediate hops. As we discussed in Section~\ref{sec:training}, multi-hop models are hard to train in general when the ground truth reasoning chain is missing during training~\citep{dua2020benefits,chen2018learn,jiang2019self,wang2019multi}. Researchers tend to use the ground truth reasoning chain as supervision when they train a multi-hop model~\citep{qi2019answering,ding2019cognitive,chen2019multi}. The results reconfirm that, without supervision for the intermediate steps, it is not easy for a model to automatically figure out the ground truth reasoning chain; \textbf{(2)}
MHT\ has learned to predict the next frame of the frame that is identified by hop $1$, as the frame that hop $2$ should attend to; \textbf{(3)} there are only $9$ videos predicted with more than $5$ hops even though we only constrain the model to perform \textit{at least} $5$ hops (unless not possible). Again, this is because no supervision is provided for the intermediate hops. As the Snitch Localization task itself is largely focused on the last frame of the video, without supervision for the intermediate hops, the model tends to ``look at'' later frames as soon as possible. These results suggest where we can improve for the current MHT, e.g., one possibility is to design self-supervision for each intermediate hop.
\subsection{Comparative Diagnostic Analysis across Frame Index}
\vspace{-5pt}
\begin{wrapfigure}{l}{0.5\textwidth}
\vspace{-10pt}
\includegraphics[width=0.5\textwidth]{Fig/CATER-h_method_compare.png}
\vspace{-10pt}
\caption{Diagnostic analysis of performance in terms of when the snitch is last visible in the video.
}
\label{fig:method_compare}
\vspace{-10pt}
\end{wrapfigure}
In Figure~\ref{fig:method_compare}, we present a comparative diagnostic analysis of performance in terms of when the snitch is last visible.
We bin the test set by the frame index at which the snitch is last visible in the video. For each bin, we show the test-set distribution with the bar plot, the performance over that bin with the line plot, and the performance of each model on the full test set with the dashed line.
We find that for Tracking (DaSiamRPN) and TSM, the Snitch Localization performance drops when the snitch becomes invisible earlier in the video.
This phenomenon, though still present, is
alleviated for \texttt{\textbf{Hopper}}-multihop. We compute the standard deviation (SD) and the coefficient of variation (CV), i.e., the SD normalized by the mean; the higher these values, the greater the dispersion around the mean.
The values of these metrics shown in Figure~\ref{fig:method_compare} further reinforce the stability of our model and the necessity of the CATER-h dataset.
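For reference, a minimal sketch of how the per-bin accuracies, SD, and CV can be computed (variable names are illustrative assumptions):
\begin{verbatim}
import numpy as np

def binned_stability(correct, last_visible, n_bins=13):
    # correct: 0/1 per-video Top-1 correctness; last_visible: frame
    # index (0..n_bins-1) at which the snitch is last visible.
    # CATER-h is balanced, so every bin is assumed non-empty.
    accs = np.array([correct[last_visible == b].mean()
                     for b in range(n_bins)])
    sd = accs.std()            # absolute dispersion across bins
    cv = sd / accs.mean()      # relative dispersion (SD / mean)
    return accs, sd, cv
\end{verbatim}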
\section{Extra Qualitative Results}
\label{sec:appendix_extra}
In Figure~\ref{fig:good_vis}, we visualize the attention weights per hop and per head
from Transformer$_s$ to showcase, in detail, the hops performed by \texttt{\textbf{Hopper}}-multihop for video `CATERh$\_$054110' (the one in Figure~\ref{fig:good_vis_main}).
Please see Figure~\ref{fig:good_extra_visible},~\ref{fig:good_extra_occlusion},~\ref{fig:good_extra_contain},~\ref{fig:good_extra_recursive}, and~\ref{fig:good_extra_early} for extra qualitative results from \texttt{\textbf{Hopper}}-multihop. We demonstrate the reasoning process for different cases (i.e., `visible', `occluded', `contained', `contained recursively', and `not visible very early in the video').
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Fig/CATER-h_054110.png}
\caption{Visualization of attention weights \& interpretability of our model.
In (a), we highlight object(s) attended in every hop from \texttt{\textbf{Hopper}}-multihop (frame border is colored accordingly: \textcolor{red}{\textbf{Hop1}}, \textcolor{green}{\textbf{Hop2}}, \textcolor{blue}{\textbf{Hop3}}, \textcolor{orange}{\textbf{Hop4}}, and \textcolor{purple}{\textbf{Hop5}}). In (b), we visualize the attention weights per hop (the smaller an object's attention weight, the greater the opacity plotted for that object entity).
As shown, \texttt{\textbf{Hopper}}-multihop performs $5$ hops of reasoning for the video `CATERh$\_$054110'. Our model performs reasoning by hopping over frames and meanwhile selectively attending to objects in the frame.
Please zoom in to see the details. Best viewed in color.
}
\label{fig:good_vis}
\vspace{-10pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Fig/good_extra_visible.png}
\caption{We visualize the attention weights per hop and per head from Transformer$_s$ in our \texttt{\textbf{Hopper}}-multihop. Objects attended in every hop are highlighted (whose frame border is colored accordingly: \textcolor{red}{\textbf{Hop1}}, \textcolor{green}{\textbf{Hop2}}, \textcolor{blue}{\textbf{Hop3}}, \textcolor{orange}{\textbf{Hop4}}, and \textcolor{purple}{\textbf{Hop5}}).
Please zoom in to see the details. Best viewed in color.
}
\label{fig:good_extra_visible}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Fig/good_extra_occlusion.png}
\caption{We visualize the attention weights per hop and per head from Transformer$_s$ in our \texttt{\textbf{Hopper}}-multihop. Objects attended in every hop are highlighted (whose frame border is colored accordingly: \textcolor{red}{\textbf{Hop1}}, \textcolor{green}{\textbf{Hop2}}, \textcolor{blue}{\textbf{Hop3}}, \textcolor{orange}{\textbf{Hop4}}, and \textcolor{purple}{\textbf{Hop5}}).
Please zoom in to see the details. Best viewed in color.
}
\label{fig:good_extra_occlusion}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Fig/good_extra_containment.png}
\caption{We visualize the attention weights per hop and per head from Transformer$_s$ in our \texttt{\textbf{Hopper}}-multihop. Objects attended in every hop are highlighted (whose frame border is colored accordingly: \textcolor{red}{\textbf{Hop1}}, \textcolor{green}{\textbf{Hop2}}, \textcolor{blue}{\textbf{Hop3}}, \textcolor{orange}{\textbf{Hop4}}, and \textcolor{purple}{\textbf{Hop5}}).
Please zoom in to see the details. Best viewed in color.
}
\label{fig:good_extra_contain}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Fig/good_extra_recursivecontainment.png}
\caption{We visualize the attention weights per hop and per head from Transformer$_s$ in our \texttt{\textbf{Hopper}}-multihop. Objects attended in every hop are highlighted (whose frame border is colored accordingly: \textcolor{red}{\textbf{Hop1}}, \textcolor{green}{\textbf{Hop2}}, \textcolor{blue}{\textbf{Hop3}}, \textcolor{orange}{\textbf{Hop4}}, and \textcolor{purple}{\textbf{Hop5}}).
Please zoom in to see the details. Best viewed in color.
}
\label{fig:good_extra_recursive}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Fig/good_extra_early.png}
\caption{We visualize the attention weights per hop and per head from Transformer$_s$ in our \texttt{\textbf{Hopper}}-multihop. Objects attended in every hop are highlighted (whose frame border is colored accordingly: \textcolor{red}{\textbf{Hop1}}, \textcolor{green}{\textbf{Hop2}}, \textcolor{blue}{\textbf{Hop3}}, \textcolor{orange}{\textbf{Hop4}}, and \textcolor{purple}{\textbf{Hop5}}).
Please zoom in to see the details. Best viewed in color.
}
\label{fig:good_extra_early}
\end{figure}
\section{Track Representation Visualization}
\label{sec:track_vis}
Please see Figure~\ref{fig:track_vis} for a visualization of the object track representations of video `CATERh$\_$054110' (attention weights from \texttt{\textbf{Hopper}}-multihop for this video are shown in Figure~\ref{fig:good_vis}). \texttt{\textbf{Hopper}}\ utilizes tracking-integrated object representations, since tracking can link object representations through time and the resulting representations are more informative and consistent. As shown in the figure, the tracks obtained from our custom Hungarian algorithm are competitive. Our model \texttt{\textbf{Hopper}}-multihop takes in the best-effort object track representations (along with the coarse-grained frame track) as the source input to the Multi-hop Transformer, and then further learns the most useful and correct task-oriented track information implicitly (as shown in Figure~\ref{fig:good_vis}).
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{Fig/track_vis.png}
\caption{Tracking-integrated object representation visualization. \texttt{\textbf{Hopper}}\ utilizes tracking-integrated object representations since tracking can link object representations through time and the resulting representations are more informative and consistent. We visualize the object track representations of video `CATERh$\_$054110' (attention weights from \texttt{\textbf{Hopper}}-multihop for this video are shown in Figure~\ref{fig:good_vis}). Here, every column is for a track and every row is for a frame. The bounding box and object class label computed from the object representation are plotted ($404$ is the object class label for $\varnothing$, i.e., the null object, and $140$ is the object class label for the snitch). As shown in the figure, the tracks obtained from our custom Hungarian algorithm are not perfect but acceptable, since having perfect tracking is not the goal of this paper. Our model \texttt{\textbf{Hopper}}-multihop takes in the (imperfect) object track representations (along with the coarse-grained frame track) as the source input to the Multi-hop Transformer, and then further learns the most useful and correct task-oriented track information implicitly (as shown in Figure~\ref{fig:good_vis}). \texttt{\textbf{Hopper}}-multihop performs $5$ hops of reasoning for this video; objects attended in every hop are highlighted (whose frame border is colored accordingly: \textcolor{red}{\textbf{Hop1}}, \textcolor{green}{\textbf{Hop2}}, \textcolor{blue}{\textbf{Hop3}}, \textcolor{orange}{\textbf{Hop4}}, and \textcolor{purple}{\textbf{Hop5}}). Please zoom in to see the details. Best viewed in color.
}
\label{fig:track_vis}
\end{figure}
\section{Failure Cases}
\label{sec:appendix_failure}
We present a sample of the failure cases of \texttt{\textbf{Hopper}}-multihop in Figure~\ref{fig:bad_vis}. Generally, a video is more difficult if: (1) similar-looking objects are present simultaneously (especially if an object is similar to the snitch or to another cone in the video); (2) a wrong hop $1$ or $2$ is identified; (3) critical moments are occluded (e.g., in Figure~\ref{fig:bad_vis}, when the snitch becomes occluded, it is contained by the brown cone); (4) complex object interactions occur, such as recursive containment along with container movement (since such cases usually have the snitch last visible very early).
\texttt{\textbf{Hopper}}-multihop fails in the first scenario due to errors made by the object representation and detection module, which can be avoided by using a fine-tuned object detector model. Failures in the second scenario can be attributed to errors made by the object detector, the tracker, our heuristics, or the limited capability of an inadequately trained Multi-hop Transformer.
The third scenario is not easy even for humans under $1$ FPS; thus, increasing the FPS with extra care might ease the problem. The last scenario requires more sophisticated multi-step reasoning; thus, increasing the minimal number of hops of the Multi-hop Transformer\ and adding self-supervision for the intermediate hops to handle the long hop chains should help in solving this scenario. Overall, a more accurate backbone, object detector, tracking method, or heuristics
to determine visibility
and the last visible snitch's immediate container (or occluder) will help improve the performance of \texttt{\textbf{Hopper}}-multihop. We would like to focus on enhancing \texttt{\textbf{Hopper}}-multihop for these challenges and verify our hypotheses in future work.
\section{The CATER-h Dataset}
\label{sec:appendix_dataset}
\subsection{Basics: CATER}
\label{sec:cater}
CATER~\citep{cater} provides a diagnostic video dataset that requires spatial and temporal understanding to be solved. It is built to defeat models that take advantage of spurious scene biases.
With fully observable and controllable scene bias, the $5,500$ videos in CATER are rendered synthetically at 24 FPS ($300$-frame $320$x$240$px) using a library of standard 3D objects: $193$ different object classes in total, covering $5$ object shapes (cube, sphere, cylinder, cone, snitch) in $3$ sizes (small, medium, large), $2$ materials (shiny metal and matte rubber) and $8$ colors. Every video has a small metal snitch (see Figure~\ref{fig:task}). There is a large ``table'' plane on which all objects are placed. At a high level, the dynamics in CATER videos are analogous to the cups-and-balls magic routine\footnote{\url{https://en.wikipedia.org/wiki/Cups_and_balls}}. A subset of $4$ atomic actions (`rotate', `pick-place', `slide' and `contain') is afforded by each object. See Appendix~\ref{sec:cater_actions} for the definitions of the actions. Note that `contain' is only afforded by cones, and recursive containment is possible, i.e., a cone can contain a smaller cone that contains another object.
Every video in CATER is split into several time slots, and every object in the video randomly performs an action in each time slot (including `no action'). Objects and actions vary across videos. The ``table'' plane is divided into $6\times 6$ grids ($36$ rectangular cells), and the \textit{Snitch Localization} task is to determine the grid that the snitch is in at the end of the video, as a single-label classification task. The task implicitly requires an understanding of object permanence because objects can be occluded or contained by (hidden inside of) another object.
\subsection{Definition of Actions}
\label{sec:cater_actions}
We follow the definitions of the four atomic actions in~\citet{cater}. Specifically:
\begin{enumerate}
\item \textbf{`rotate':} The `rotate' action means that the object rotates by \ang{90} about its perpendicular or horizontal axis, and is afforded by cubes, cylinders and the snitch.
\item \textbf{`pick-place':} The `pick-place' action means the object is picked up into the air along the perpendicular axis, moved to a new position, and placed down. This is afforded by all objects.
\item \textbf{`slide':} The `slide' action means the object is moved to a new location by sliding along the bottom surface, and is also afforded by all objects.
\item \textbf{`contain':} `contain' is a special operation, only afforded by the cones, in which a cone is pick-placed on top of another object, which may be a sphere, a snitch or even a smaller cone. This allows for recursive containment, as a cone can contain a smaller cone that contains another object. Once a cone `contains' an object, the `slide' action of the cone effectively slides all objects contained within the cone. This holds until the top-most cone is pick-placed to another location, effectively ending the containment for that top-most cone.
\end{enumerate}
\subsection{Dataset Generation Process}
The generation of the CATER-h dataset is built upon the CLEVR~\citep{clevr} and CATER~\citep{cater} codebases. Blender is used for rendering. The animation setup is the same as the one in CATER. A random number of objects with random parameters are spawned at random locations at the beginning of the video. They exist on a $6\times 6$ portion of a 2D plane with the global origin in the center. Every video has a snitch,
and every video is split into several time slots. Each action is contained within its time slot. At the beginning of each slot, objects are randomly selected to perform a random action afforded by that object (with no collision ensured). Please refer to~\citet{cater} for more animation details.
In order to have a video dataset that emphasizes recognizing the effect of temporal variations on the state of the world, we require a roughly equal number of video samples for each position of the last visible snitch along the temporal axis. To obtain such a dataset, we generated a huge number of videos and computed the frame index of the last visible snitch in every video under $1$ FPS ($13$ frames per video). Then, for every frame index $i$, we obtained the set of videos whose last visible snitch is at frame index $i$, and finally, we randomly chose $500$ or more videos from this set and discarded the rest. Eventually, the total number of videos in CATER-h is $7,080$. We split the data randomly in a $70:30$ ratio into a training and a test set, resulting in $5,624$ training samples and $1,456$ testing samples.
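A minimal sketch of this balancing step (function and variable names are illustrative assumptions, not the exact generation code):
\begin{verbatim}
import random
from collections import defaultdict

def balance_by_last_visible(videos, last_visible, per_bin=500, seed=0):
    # videos: list of video ids; last_visible: id -> frame index of the
    # last visible snitch (computed at 1 FPS, 13 frames per video).
    bins = defaultdict(list)
    for v in videos:
        bins[last_visible[v]].append(v)
    rng = random.Random(seed)
    kept = []
    for vs in bins.values():
        rng.shuffle(vs)
        kept.extend(vs[:per_bin])  # keep ~per_bin videos per bin
    return kept
\end{verbatim}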
\subsection{CATER-h vs. CATER}
Figure~\ref{fig:dataset_lvs} compares the CATER-h dataset and the CATER dataset.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{Fig/lvs_histogram.png}
\caption{Histogram of the frame index of the last visible snitch in every video. We find that CATER is highly imbalanced for the Snitch Localization task in terms of temporal cues: e.g., the snitch is fully visible at the end of the video for $58\%$ of the samples. This temporal bias yields high accuracy even for a model that ignores all but the last frame of the video. Our dataset CATER-h addresses this issue with a balanced distribution.
}
\label{fig:dataset_lvs}
\end{figure}
\subsection{Train/Test Distribution}
Figure~\ref{fig:train_testset_class} shows the data distribution over classes in CATER-h.
\subsection{Other Potential Dataset Bias}
\label{sec:appendix_databias}
As shown in Table~\ref{tab:bcater_result}, `\texttt{\textbf{Hopper}}-transformer (last frame)' still has a relatively high accuracy on CATER-h. We hypothesize that its $41.8\%$ Top-1 accuracy on CATER-h might be due to other dataset biases
(apart from the snitch grid distribution bias and the temporal bias that CATER has). Upon further investigation, we identify one type of additional bias, the ``cone bias'': the snitch can only be contained by a cone in the videos of CATER and CATER-h.
In order to verify the existence of the ``cone bias'', we compute the accuracy of a random guess among the grids of cones that are not covered by any other cone, for all test videos whose snitch is covered at the end. This gives $48.26\%$ Top-1 accuracy, showing that the ``cone bias'' does exist in the dataset (a sketch of this computation is given at the end of this subsection).
The so-called ``cone bias'' comes from the nature of the objects used in CATER and the fact that only a cone can carry the snitch (thus, it is closer to a feature of the dataset than a ``bias'' per se).
Furthermore, because of the animation rules of CATER,
there might exist other dataset biases, such as biases in terms of object size and shape, which are hard to discover and address. This highlights the glaring challenge of building a fully unbiased (synthetic or real) dataset. CATER-h addresses the temporal bias that CATER has; a model has to perform long-term spatiotemporal reasoning in order to achieve high accuracy on CATER-h.
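For reference, a minimal sketch of the ``cone bias'' check described above (the field names are illustrative assumptions; the expected accuracy of a uniform guess is averaged over the covered-snitch test videos):
\begin{verbatim}
def cone_bias_accuracy(test_videos):
    # Each video: dict with 'uncovered_cone_grids' (grid labels of
    # cones not covered by another cone) and 'snitch_grid' (label).
    expected, n = 0.0, 0
    for v in test_videos:  # only covered-snitch videos are passed in
        grids = v['uncovered_cone_grids']
        if not grids:
            continue
        n += 1
        # chance that a uniform guess over the candidates is correct
        expected += grids.count(v['snitch_grid']) / len(grids)
    return expected / max(n, 1)
\end{verbatim}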
\section{Implementation Details}
\label{sec:appendix_implement}
\subsection{\texttt{\textbf{Hopper}}\ }
We introduce the implementation of \texttt{\textbf{Hopper}}-multihop in this section. For both training and testing, we only used $1$ FPS (frames per second) to demonstrate the efficiency of our approach. This means we only have $13$ frames per video. Note that we trained the video query representation and recognition part (Multi-hop Transformer\ along with the final MLP) end to end.
The CNN backbone we utilized is the pre-trained ResNeXt-101~\citep{xie2017aggregated} model from~\citet{sinet}. We trained DETR~\citep{detr}\footnote{\url{https://github.com/facebookresearch/detr}} on LA-CATER~\citep{opnet}, a dataset of generated videos following the same configuration as CATER but with additional ground-truth object bounding-box and class-label annotations (\citet{opnet} predict the bounding box of the snitch in the video given supervision of the snitch's bounding box in all $300$ frames).
We followed the settings in~\citet{detr} to set up and train DETR, e.g., stacking $6$ transformer encoder layers and $6$ transformer decoder layers, and utilizing the object detection set prediction loss and the auxiliary decoding loss per decoder layer. $d$ is $256$, $N$ is $10$ and $C$ is $193$. The initial $N$ object query embeddings are learned. The MLP for recognizing the object class label is one linear layer, and the one for obtaining the object bounding box is an MLP with $2$ hidden layers of $d$ neurons each. After DETR was trained, we ran it on CATER to obtain the object representations, predicted bounding boxes and class labels. For tracking, we set $\lambda_{\mathrm{c}}=1, \lambda_{\mathrm{b}}=0$
because under a low FPS, using the bounding boxes for tracking is counterproductive;
this setting yielded reasonable results. Then, we trained the video recognition part, i.e., the Multi-hop Transformer\ along with the final MLP, end to end with the Adam~\citep{kingma2014adam} optimizer. The final MLP we used is one linear layer that transforms the video query representation $\mathbf{e}$ of dimension $d$ into the grid class logits. The initial learning rate was set to $10^{-4}$ and the weight decay to $10^{-3}$.
The batch size was $16$. The number of attention heads for DETR was set to $8$ and for the Multi-hop Transformer\ was set to $2$. Transformer dropout rate was set to $0.1$.
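As a concrete illustration of the tracking cost above ($\lambda_{\mathrm{c}}=1, \lambda_{\mathrm{b}}=0$), here is a minimal sketch of the per-frame-pair association; shapes and names are our own illustrative assumptions, not the exact implementation:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_objects(prev_feats, cur_feats, prev_boxes, cur_boxes,
                  lambda_c=1.0, lambda_b=0.0):
    # prev_feats: (N, d), cur_feats: (M, d); boxes: (N, 4), (M, 4).
    f1 = prev_feats / np.linalg.norm(prev_feats, axis=1, keepdims=True)
    f2 = cur_feats / np.linalg.norm(cur_feats, axis=1, keepdims=True)
    cost_c = 1.0 - f1 @ f2.T                     # cosine distance
    cost_b = np.abs(prev_boxes[:, None] -
                    cur_boxes[None, :]).sum(-1)  # L1 box distance
    cost = lambda_c * cost_c + lambda_b * cost_b
    rows, cols = linear_sum_assignment(cost)     # Hungarian matching
    return list(zip(rows, cols))                 # prev idx -> cur idx
\end{verbatim}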
We used multi-stage training with the training methods proposed in the paper.
Moreover, we found that DETR tends to predict a snitch for every cone on the ``table'' plane when there is no visible snitch in that frame. To mitigate this issue of the DETR object detector trained on LA-CATER~\citep{opnet}, we further compute an object visibility map $\mathcal{V} \in \mathbb{R}^{NT \times 1}$, a binary vector determined by a heuristic: an object is visible if its bounding box is not completely contained by the bounding box of any other object in that frame. The `Masking()' function uses $\mathcal{V}$ by considering only the visible objects.
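A minimal sketch of this visibility heuristic (the box format and names are illustrative assumptions):
\begin{verbatim}
def visibility_map(boxes):
    # boxes: list of (x1, y1, x2, y2) for the N objects in one frame.
    def contains(a, b):  # does box a completely contain box b?
        return (a[0] <= b[0] and a[1] <= b[1] and
                a[2] >= b[2] and a[3] >= b[3])
    return [0 if any(contains(a, b) for j, a in enumerate(boxes)
                     if j != i) else 1
            for i, b in enumerate(boxes)]
\end{verbatim}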
\subsection{Description of Reported CATER Baselines}
For CATER, we additionally compare our results with the ones reported by~\citet{cater} for TSN~\citep{tsn}, I3D~\citep{carreira2017quo}, NL~\citep{nonlocal}, as well as their LSTM variants. Specifically, TSN (Temporal Segment Networks) was the top-performing method based on the idea of long-range temporal structure modeling before TSM and TPN. Two modalities were tried with TSN, i.e., RGB and Optical Flow (which captures local temporal cues). I3D inflates a 2D ConvNet into 3D for efficient spatiotemporal feature learning. NL (Non-Local Networks) proposed a spacetime non-local operation as a generic building block for capturing long-range dependencies in video classification. In order to better capture the temporal information for these methods, \citet{cater} further experimented with a 2-layer LSTM aggregation that operates on the last-layer features before the logits. The conclusions from~\citet{cater} are: (1) TSN ends up performing significantly worse than I3D instead of having similar performance, in contrast with standard video datasets; (2) the optical flow modality does not work well, as the Snitch Localization task requires recognizing objects, which is much harder from optical flow; (3) more sampling from the video gives higher performance; (4) LSTM for more sophisticated temporal aggregation leads to a major improvement in performance.
\subsection{Baselines}
We introduce the implementation of the baselines that we experimented with in this section.
First, for our Hungarian tracking baseline, for every test video we obtain the snitch track (the track whose first object has the snitch as its label) produced by our Hungarian algorithm, and project the center point of the bounding box of the last object in that track to the plane (and eventually the grid class label) by using a homography transformation between the image and the plane (the same method used in~\citet{cater}). We also try a majority vote, i.e., we select as the snitch track the track with the highest number of frames classified as the snitch. We report the majority-vote result in Tables~\ref{tab:cater_result} and~\ref{tab:bcater_result} because it is a more robust method. The results of using the first-frame criterion for our Hungarian tracking baseline are $31.8\%$ Top-1, $40.2\%$ Top-5, $2.7$ $L_1$ on CATER, and $28.2\%$ Top-1, $36.3\%$ Top-5, $2.8$ $L_1$ on CATER-h.
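A minimal sketch of this projection step (the homography matrix and helper names here are illustrative assumptions, not the exact implementation):
\begin{verbatim}
import numpy as np
import cv2

def box_center_to_grid(box, H_img2plane, n_cells=6, extent=6.0):
    # H_img2plane: hypothetical 3x3 image-to-plane homography
    # (in practice derived from the render camera parameters).
    cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
    pt = cv2.perspectiveTransform(
        np.array([[[cx, cy]]], dtype=np.float32), H_img2plane)[0, 0]
    cell = extent / n_cells  # plane origin assumed at the center
    col = int(np.clip((pt[0] + extent / 2) // cell, 0, n_cells - 1))
    row = int(np.clip((pt[1] + extent / 2) // cell, 0, n_cells - 1))
    return row * n_cells + col
\end{verbatim}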
We used the publicly available implementation provided by the authors~\citep{tpn}
for TSM and TPN. Models were initialized by default from models pre-trained on ImageNet~\citep{deng2009imagenet}. The original ResNet~\citep{he2016deep} serves as the $2$D backbone, and the inflated ResNet~\citep{feichtenhofer2019slowfast} as the $3$D backbone network. We used the default settings of TSM and TPN provided by~\citet{tpn}: the TSM-$50$ ($2$D ResNet-$50$ backbone) settings used to obtain results on Something-Something~\citep{goyal2017something}, which are also the protocols used in~\citet{tsm}, as well as TPN-$101$ (I3D-$101$ backbone, i.e., $3$D ResNet-$101$) with multi-depth pyramid and the parallel flow used to obtain results on Kinetics~\citep{carreira2017quo} (their best-performing setting of TPN). Specifically, random-crop and horizontal-flip augmentation and a dropout of $0.5$ were adopted to reduce overfitting. BatchNorm (BN) was not frozen. We used a momentum of $0.9$, a weight decay of $0.0001$, and synchronized SGD with an initial learning rate of $0.01$, reduced by a factor of $10$ at epochs $75$ and $125$ ($150$ epochs in total). The weight decay for TSM was set to $0.0005$. TPN used an auxiliary head, spatial convolutions in semantic modulation, temporal rate modulation and information flow~\citep{tpn}. For SINet, we used the implementation provided by~\citet{sinet}. Specifically, image features for SINet were obtained from a pre-trained ResNeXt-101~\citep{xie2017aggregated} with standard data augmentation (randomly cropping and horizontally flipping video frames during training).
Note that the image features used by SINet are the same as the ones used in our \texttt{\textbf{Hopper}}.
The object features were generated from a Deformable RFCN~\citep{dai2017deformable}. The maximum number of objects per frame was set to $10$. The number of subgroups of higher-order object relationships ($K$) was set to $3$. SGD with Nesterov momentum was used as the optimizer. The initial learning rate was $0.0001$ and would drop by $10$x when the validation loss saturates for $5$ epochs. The weight decay was $0.0001$ and the momentum was $0.9$. The batch size was $16$ for these $3$ baselines. Transformer, \texttt{\textbf{Hopper}}-transformer, and \texttt{\textbf{Hopper}}-sinet used the Adam optimizer with a total of $150$ epochs, an initial learning rate of $10^{-4}$, a weight decay of $10^{-3}$, and a batch size of $16$. Same as for our model, the learning rate would drop by a factor of $10$ when there has been no improvement for $10$ epochs on the validation set. The number of attention heads for the Transformer (and \texttt{\textbf{Hopper}}-transformer) was set to $2$, the number of transformer layers was set to $5$ to match the $5$ hops in our Multi-hop Transformer, and the Transformer dropout rate was set to $0.1$.
For OPNet-related experiments, we used the implementation provided by the authors~\citep{opnet}. We verified that we could reproduce their results under $24$ FPS on CATER
by using their provided code and trained models.
For the Random baseline, the result is computed as the average performance of random scores passed into the evaluation functions of~\citet{cater}.
For the Tracking baseline, we use the DaSiamRPN implementation from~\citet{cater}\footnote{\url{https://github.com/rohitgirdhar/CATER}}.
Specifically, the ground-truth starting position of the snitch is first projected to screen coordinates using the render camera parameters. A fixed-size box around the snitch is defined to initialize the tracker, which is then run until the end of the video. At the last frame, the center point of the tracked box is projected to the ground plane by using a homography transformation between the image and the plane, and then converted to the class label. With respect to TSN, I3D, NL and their variants, the results are from~\citet{cater}, and we used the same train/val split as theirs when obtaining our results on CATER.
\begin{figure}[h]
\centering
\subfloat[CATER-h Train Set]{%
\includegraphics[clip,width=1.0\columnwidth]{Fig/CATER-h_train_class_label_distribution.png}%
}
\subfloat[CATER-h Test Set]{%
\includegraphics[clip,width=1.0\columnwidth]{Fig/CATER-h_val_class_label_distribution.png}%
}
\caption{Data distribution over classes in CATER-h. }
\label{fig:train_testset_class}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Fig/failures.png}
\caption{Failure cases. \texttt{\textbf{Hopper}}\ produces wrong Top-$1$ and Top-$5$ predictions and poor $L_1$ results for these failure cases. As before, we highlight the attended object per hop and per head (\textcolor{red}{\textbf{Hop1}}, \textcolor{green}{\textbf{Hop2}}, \textcolor{blue}{\textbf{Hop3}}, \textcolor{orange}{\textbf{Hop4}}, and \textcolor{purple}{\textbf{Hop5}}). \textbf{Case (a).} `CATERh$\_$048295': the occlusion has made the Snitch Localization task extremely difficult since, when the snitch became occluded, it was contained by the brown cone. Meanwhile, \texttt{\textbf{Hopper}}\ fails to attend to the immediate container of the last visible snitch (which should be the brown cone at frame $5$) in Hop $2$. \textbf{Case (b).} `CATERh$\_$022965': the snitch became invisible very early in the video (at frame $3$); the recursive containment, as well as the presence of two similar-looking cones, has made the task extremely difficult. \texttt{\textbf{Hopper}}\ fails to attend to the correct object in Hop $3$ (which should be the yellow cone).
}
\label{fig:bad_vis}
\end{figure}
\section{Related Work (Full Version)}
\label{sec:appendix_related}
In this section, we provide detailed discussion of related work. Our work is generally related to the following recent research directions.
\textbf{Video understanding \& analysis.} With the release of large-scale datasets such as Kinetics~\citep{carreira2017quo}, Charades~\citep{sigurdsson2016hollywood}, and Something-Something~\citep{goyal2017something}, the development of video representations has matured quickly in recent years. Early approaches use deep visual features from 2D ConvNets with LSTMs for temporal aggregation~\citep{donahue2015long,yue2015beyond}. As a natural extension to handle video data, 3D ConvNets were later proposed~\citep{ji20123d,taylor2010convolutional}, but with the issue of inefficiency and a huge increase in parameters. Using both RGB and optical flow modalities, two-stream networks~\citep{simonyan2014two,feichtenhofer2016convolutional} and Two-Stream Inflated 3D ConvNets (I3D)~\citep{carreira2017quo} were designed. With an emphasis on capturing the temporal structure of a video, TSN~\citep{tsn}, TRN~\citep{trn}, TSM~\citep{tsm} and TPN~\citep{tpn} were successively proposed and achieved considerable improvements. Recently, attention mechanisms and the Transformer design~\citep{transformer} have been utilized for more effective and transparent video understanding. Such models include Non-local Neural Networks (NL)~\citep{nonlocal} that capture long-range spacetime dependencies, SINet~\citep{sinet} that learns higher-order object interactions, and Action Transformer~\citep{girdhar2019video} that learns to attend to relevant regions of the actor and their context. Nevertheless, existing benchmarks and models for video understanding and analysis have mainly focused on pattern recognition from complex visual and temporal input rather than on reasoning capabilities.
\textbf{Visual reasoning from images.} To expand beyond image recognition and classification, research on visual reasoning has largely focused on Visual Question Answering (VQA). For example, a diagnostic VQA benchmark dataset called CLEVR~\citep{clevr} was built that reduces spatial biases and tests a range of visual reasoning abilities. A number of visual reasoning models have been proposed~\citep{santoro2017simple,perez2017film,mascharka2018transparency,suarez2018ddrprog,aditya2018explicit}. For example, inspired by module networks~\citep{neuralmodulenet,andreas2016learning}, \citet{johnson2017inferring} proposed a compositional model for visual reasoning on CLEVR that consists of a program generator, which constructs an explicit representation of the reasoning process to be performed, and an execution engine, which executes the resulting program to produce an answer; both are implemented by neural networks. Also evaluated on the CLEVR dataset, \citet{hu2017learning}
proposed N2NMNs, i.e., End-to-End Module Networks, which learn to reason by directly predicting the network structures while simultaneously learning the network parameters.
MAC networks~\citep{macnetwork} approach CLEVR by decomposing VQA problems into a series of attention-based reasoning steps, stringing MAC cells end-to-end and imposing structural constraints to effectively learn iterative reasoning. Further, datasets for real-world visual reasoning and compositional question answering have been released, such as GQA~\citep{hudson2019gqa}. The Neural State Machine~\citep{neuralstatemachine} was introduced for real-world VQA; it performs sequential reasoning by traversing the nodes of a probabilistic graph predicted from the image.
\textbf{Video reasoning.} There has been notable progress on joint video and language reasoning. For example, in order to strengthen the ability to reason about temporal and causal events in videos, the CLEVRER video question answering dataset~\citep{CLEVRER} was introduced: a diagnostic dataset generated under the same visual settings as CLEVR, but for systematic evaluation of video models. Other artificial video question answering datasets include COG~\citep{yang2018dataset} and MarioQA~\citep{mun2017marioqa}. There are also numerous datasets based on real-world videos and human-generated questions, such as MovieQA~\citep{tapaswi2016movieqa}, TGIF-QA~\citep{jang2017tgif}, TVQA~\citep{lei2018tvqa} and Social-IQ~\citep{zadeh2019social}. Moving beyond the question answering task, CoPhy~\citep{baradel2019cophy} studies physical dynamics prediction in a counterfactual setting, and a small causality video dataset~\citep{fire2017inferring} was released to study the causal relationships between human actions and hidden statuses. To date, research on general video understanding and reasoning is still limited. Focusing on both video reasoning and general video recognition and understanding,
we experimented on the recently released CATER dataset~\citep{cater}, a synthetic video recognition dataset which is also built upon CLEVR and focuses on spatial and temporal reasoning as well as localizing a particular object of interest. There has also been significant research in object tracking, often with an emphasis on occlusions with the goal of providing object permanence~\citep{bewley2016simple,deepsort,SiamRPN,DaSiamRPN,SiamMask}. Traditional object tracking approaches have focused on fine-grained temporal and spatial understanding and often require expensive supervision of the location of objects in every frame~\citep{opnet}. We address object permanence and video recognition on CATER with a model that performs tracking-integrated object-centric reasoning for localizing the object of interest.
\textbf{Multi-hop reasoning.} Reasoning systems vary in expressive power and predictive abilities, including systems focused on symbolic reasoning (e.g., with first-order logic), probabilistic reasoning, causal reasoning, etc.~\citep{bottou2014machine}. Among them, multi-hop reasoning is the ability to reason with information collected from multiple passages to derive the answer~\citep{wang2019multi}. Because of the desire for chains of reasoning, several multi-hop datasets and models have been proposed for natural language processing tasks~\citep{Dhingra2020Differentiable,dua2019drop,welbl2018constructing,talmor2018repartitioning,yang2018hotpotqa}. For example, \citet{das2016chains} introduced a recurrent neural network model which allows chains of reasoning over entities, relations, and text.
\citet{chen2019multi} proposed a two-stage model that identifies intermediate discrete reasoning chains over the text via an extractor model and then separately determines the answer through a BERT-based answer module.
\citet{wang2019multi} investigated whether providing the full reasoning chain of multiple passages, instead of just the final passage where the answer appears, could improve the performance of existing models.
Their results demonstrate the potential improvement from explicit multi-hop reasoning.
Multi-hop reasoning gives us a discrete intermediate output of the reasoning process, which can help gauge a model's behavior beyond just final task accuracy~\citep{chen2019multi}. Favoring the benefits that multi-hop reasoning could bring, in this paper we developed a video dataset that explicitly requires aggregating clues from different spatiotemporal parts of the video and a multi-hop model that automatically extracts a step-by-step reasoning chain. Our proposed Multi-hop Transformer\ improves interpretability and imitates a natural way of thinking. The iterative attention-based neural reasoning~\citep{slotattention,neuralstatemachine} with a contrastive debias loss further offers robustness and generalization.
\vspace{-5pt}
\section{MHT\ Architecture}
\label{sec:appendix_multihop_illu}
\vspace{-10pt}
We illustrate the architecture of the proposed Multi-hop Transformer\ (MHT) in Figure~\ref{fig:multihop_transformer}.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Fig/multihop_final.png}
\caption{\textbf{Architecture of the Multi-hop Transformer\ (MHT) that learns a comprehensive video query representation and meanwhile encourages \textit{multi-step compositional long-term reasoning} of a spatiotemporal sequence.} As inputs to this module, the `Source Sequence' is [$\mathcal{T}_{f}$, $\mathcal{T}_{o}$],
where [ , ] denote concatenation; and the `Target Video Query' is $\mathcal{E}\in \mathbb{R}^{1\times d}$. `Final Video Query Representation' is $\mathbf{e}\in \mathbb{R}^{1\times d}$. $\mathcal{A}\in \mathbb{R}^{NT\times 1}$ refers to attention weights from the encoder-decoder multi-head attention layer in Transformer$_s$, averaged over all heads. To connect this figure with Algorithm~\ref{algo}, `Obtain Helper Information \& Attention Candidates' refers to line $5$ for `Helper Information' $\mathcal{H}$ (or line $7$ for the first iteration), and line $9$ for the `Attention Candidates' $\mathcal{U}\in \mathbb{R}^{NT\times d}$. Dimensionality of `Helper Information' $\mathcal{H}$ is $T\times d$ for hop $1$ and $N\times d$ for the rest of the hops. Transformer$_f$ and Transformer$_s$ are using the original Transformer architecture~\citep{transformer}. `Attentional Feature-based Gating' corresponds to line $11$. `Updated Attention Candidates' is $\mathcal{U}_{\text{update}}\in \mathbb{R}^{NT\times d}$. `Masking' corresponds to line $12$. Its output, `Masked Attention Candidates' is $\mathcal{U}_{\text{mask}}\in \mathbb{R}^{NT\times d}$ (however, certain tokens out of these $NT$ tokens are masked and will have $0$ attention weights). The last `Layer Norm' after the Transformer$_s$ corresponds to line $16$. $H$ denotes the total number of hops, i.e., total number of iterations, and it varies across videos. Please refer to Section~\ref{sec:multihop_tx} for details. }
\label{fig:multihop_transformer}
\end{figure}
\section{Introduction}
\label{sec:intro}
\input{Tex/1intro.tex}
\vspace{-5pt}
\section{Related Work}
\vspace{-10pt}
\label{sec:related}
\input{Tex/2related.tex}
\label{sec:method}
\input{Tex/3method.tex}
\vspace{-5pt}
\section{Experiments}
\label{sec:exp}
\vspace{-10pt}
\input{Tex/4dataset.tex}
\input{Tex/5exp.tex}
\vspace{-5pt}
\section{Conclusion and Future Work}
\label{sec:conclusion}
\vspace{-10pt}
\input{Tex/6conclusion.tex}
\vspace{-5pt}
\subsubsection*{Acknowledgments}
\vspace{-5pt}
The research was supported in part by NSF awards: IIS-1703883, IIS-1955404, and IIS-1955365.
\section{Introduction}
Based on AdS/CFT duality and holography, one should be able to calculate different quantities of a boundary CFT by using the dual bulk theory. One such quantity is the complexity of a quantum state, which in the quantum information context is defined as the minimum number of simple gates needed to build a quantum circuit that constructs the state from a certain reference state \cite{Watrous:2008}. There has also been recent progress \cite{Chapman:2017rqy, Jefferson:2017sdb} in defining complexity more rigorously, and in a continuous way, in quantum field theory; interestingly, the results in different setups match those from holography.
The holographic proposal by Susskind \cite{Susskind:2014rva, Brown:2015bva} states that to compute the quantum computational complexity of a holographic state one can calculate the on-shell action on the ``Wheeler-De Witt'' (WDW) patch. Therefore,
\begin{gather}\label{eq:defA}
\mathcal{C} \left( \Sigma \right) =\frac{ I_ {WDW}}{\pi \hbar},
\end{gather}
where $\Sigma$ is the time slice given by the intersection of the asymptotic boundary and the Cauchy surface in the bulk. This proposal is known as the Complexity=Action (CA) conjecture.
There is also the Complexity=Volume (CV) conjecture \cite{Susskind:2014rva}, which states that to compute the complexity of the boundary state one can evaluate the volume of a codimension-one bulk hypersurface intersecting the asymptotic boundary on the desired time slice. So
\begin{gather}\label{eq:defv}
\mathcal{C}_\mathcal{V} \left( \Sigma \right) = {\text{max}}_{\Sigma=\partial \mathcal{B}} \left[ \frac{\mathcal{V}(\mathcal{B})}{G_N \ell} \right],
\end{gather}
where $\mathcal{B}$ is in the bulk and $\ell$ is a specific length scale such as the radius of the AdS space.
The complexity grows linearly even after the boundary reaches thermal equilibrium, and therefore it could be a useful thermodynamical and quantum-information measure. In the dual picture, this growth of complexity corresponds to the expansion of the length of the Einstein-Rosen bridge (ERB), or the volume of the wormhole entangling two thermofield-double CFTs on the boundary. Studying properties of complexity could also help in understanding the inside of black holes.
In \cite{Lloyd:2000} an upper bound for the rate of growth of quantum complexity has been found and later in \cite{Brown:2015lvg} it was written in the holographic context as
\begin{gather}\label{eq:bound}
\frac{d \mathcal{C}}{dt} \le \frac{2M}{\pi \hbar},
\end{gather}
where $M$ is the mass of the black hole in the bulk; this inequality is saturated for uncharged black holes.
In \cite{Alishahiha:2017hwg}, using the CA conjecture and calculating the on-shell actions on two nearby WDW patches, shown in Fig. \ref{fig:gp}, the rates of complexity growth for gravity theories with higher derivative terms, such as $F( R )$ gravity and New Massive Gravity (NMG), were calculated for specific black hole and shockwave solutions, and the above bound on the complexity growth rate was verified.
\begin{figure}
\centering
\includegraphics[width=5cm] {penrose1}
\includegraphics[width=4cm] {smallp}
\caption{General Penrose diagram and the corresponding WDW patch for calculating complexity growth. At the late time only region 1 contributes to the complexity growth.}
\label{fig:gp}
\end{figure}
Those theories, however, are parity-preserving, having both left- and right-moving modes in the dual boundary CFTs. In this work, we are mainly interested in studying the effect of chirality on the rate of growth of complexity. Notably, the effects of chirality on the entanglement entropy in parity-violating quantum field theories have been studied in \cite{Hughes:2015ora, Iqbal:2015vka, Castro:2015csg}. There, an \textit{entanglement inflow} between the bulk and the domain wall has been observed, which comes from the imbalance in the flux of modes flowing through the boundary. So, it would be very interesting to check whether such effects can also be detected by calculating the holographic complexity of the bulk in parity-violating gravitational theories, and specifically to study the effects of edge states.
In this work, we first study the effect of the Chern-Simons term on the rate of growth of complexity. Using the CA conjecture, we calculate the rate of complexity growth for several solutions of \textit{Topologically Massive Gravity (TMG)}, which is the Einstein-Hilbert action plus the chirality-breaking Chern-Simons term.
As mentioned in \cite{Alishahiha:2017hwg}, the main challenge is to calculate the contribution coming from the boundary term. For the Gibbons-Hawking boundary term of TMG we specifically use the boundary term first introduced in \cite{Miskovic:2009kr}, where the background-independent charges of TMG were calculated. Considering that the approach of \cite{Brown:2015bva, Brown:2015lvg} has worked in \cite{Alishahiha:2017hwg}, we go forward and use it for different black hole solutions of TMG, namely BTZ, warped $\text{AdS}_3$, null warped $\text{AdS}_3$, supersymmetric and ACL black holes. We will also present the result for the shockwave solution of TMG in our following paper.
For the sake of comparing our results with the parity-preserving case, we also calculate the complexity growth in the warped $\text{AdS}_3$, new hairy and log black hole solutions of NMG and comment on the effect of the warping factor, the hair parameter and the log term on the growth rate of complexity. We also compare the complexity growth rate with different thermodynamical quantities of these black holes and observe a curious correlation between temperature and complexity growth, which might be useful in understanding thermodynamic-like laws for complexity. Finally, we conclude with a discussion where we comment on several recent developments in defining quantum complexity in CFTs, which one could also apply to the warped CFT case. Specifically, we compare the usual Liouville and ``chiral Liouville'' actions to try to interpret the meaning of the warped CFT deformation term in the MERA language.
\section{Complexity growth in a chiral theory}
The chiral theory of topologically massive gravity, also known as Chern-Simons gravity, is a rich, ghost-free theory of gravity in $2+1$ dimensions. The field equations of this theory include the Cotton tensor, which is the analogue of the Weyl tensor in three dimensions; it adds a degree of freedom to the theory, making it dynamical and the graviton massive. All of these effects could change the rate of complexity growth.
In first order formalism, the action of TMG with a negative cosmological constant $\Lambda= - 1/\ell^2$ can be written as \cite{Miskovic:2009kr}
\begin{gather}\label{eq:TMGaction}
I= - \frac{1}{ 16 \pi G} \int_M \epsilon_{ABC} \left ( R^{AB} + \frac{1}{3 \ell^2} e^A e^B \right) e^C+ \frac{1}{32 \pi G \mu} \int_M \left( L_{CS} \left (\omega \right) + 2 \lambda_A T^A \right)+\int _{\partial M} B.
\end{gather}
In the above action, $M$ is a three-dimensional manifold where $x^\mu$ are the local coordinates, $G$ is the gravitational constant, $\mu$ is a constant parameter with the dimension of mass, and $L_{CS}$ is the gravitational Chern-Simons 3-form, given by
\begin{gather}
L_{CS} (\omega) = \omega ^{AB} d \omega_{BA}+\frac{2}{3} {\omega^ A}_B {\omega^B}_C {\omega^C}_A.
\end{gather}
By defining the dreibein $e^A= e^A_\mu dx^\mu $ and the spin connection $ \omega^{AB} =\omega_\mu ^{AB} dx^\mu $, one can write the curvature 2-form as
\begin{gather}
R^{AB} =\frac{1}{2} R_{\mu\nu}^{AB} dx^\mu dx^\nu = d \omega^{AB}+ {\omega^A}_C \omega^{CB},
\end{gather}
and then the torsion 2-form as $T^A=\frac{1}{2} T^A_{\mu \nu} dx^\mu dx^\nu = De^A$, where the covariant derivative acts on vectors as $DV^A= dV^A+{\omega^A}_B V^B$. As one wants a torsionless theory, $T^A=0$, and one can then find the Lagrange multipliers in terms of the Schouten tensor of the manifold,
\begin{gather}
S_{\mu \nu}= (Ric)_{\mu\nu}-\frac{1}{4} \mathcal{G}_{\mu \nu} R,
\end{gather}
as
\begin{gather}
\lambda^A_\mu=-2 e^{A\nu} S_{\mu \nu},
\end{gather}
where $\mathcal{G}_{\mu \nu}=\eta_{AB} e_\mu^A e_\nu^B$.
For the TMG case, the boundary term which makes the variational principle well-defined was first introduced in \cite{Miskovic:2009kr} as
\begin{gather} \label{eq:boundary}
B= \frac{1}{32\pi G} \epsilon_{ABC} \omega^{AB} e^C.
\end{gather}
Note that, especially for topological theories and the Chern-Simons action, the contribution of the boundary term is significant, as is also the case for the modes on the boundary of topological matter.
Now, as explained in \cite{Bouchareb:2007yx}, TMG admits two different kinds of black hole solutions: one is the asymptotically AdS BTZ solution of Einstein gravity with a negative cosmological constant, and the other is the non-asymptotically flat, non-asymptotically AdS ACL black hole with a zero cosmological constant.
In the following sections, we calculate the rate of complexity growth for these two categories of black holes and study the effect of the different parameters of the theory and the solutions, specifically the parameter $\mu$, on this growth rate.
\subsection{BTZ black hole}
By the method introduced in \cite{Alishahiha:2015rta, Alishahiha:2017hwg}, we evaluate the TMG action for the BTZ case. For the BTZ metric
\begin{gather}
ds^2= -f (r ) ^2 dt^2 +\frac{dr^2}{f(r ) ^2} +r^2 \left(d\phi-\frac{4G J}{r^2} dt \right)^2, \ \ \ \ \ f^2(r ) = \frac{r^2}{\ell^2}-8GM+\frac{(8GJ)^2}{4r^2},
\end{gather}
the vierbeins and spin connections would be \cite{Bouchareb:2007yx}
\begin{gather}
e^0= f( r ) dt, \ \ \ \ \ \ \ \ \ e^1= r\, d\phi -\frac{4G J}{r}\, dt, \ \ \ \ \ \ \ \ \ e^2=\frac{1}{f( r )} dr, \nonumber\\
{\omega^0}_1 =\frac{4 G J}{r^2 f( r )} dr, \ \ \ \ \ \ {\omega^0}_2= \left( f'( r ) f( r ) -\frac{16 G^2 J^2}{r^3} \right) dt+\frac{4GJ}{r} d\phi, \ \ \ \ \ \ {\omega^1}_2= f( r ) d\phi.
\end{gather}
\begin{figure}
\centering
\includegraphics[width=8.5cm] {BTZpenrose.jpg}
\caption{ Penrose diagram of BTZ black hole. At late times, only the dark blue part contributes to the complexity growth.}
\label{fig:BTZpenrose}
\end{figure}
Note that for the BTZ case the Cotton tensor vanishes identically, so it satisfies the TMG field equations trivially.
Now, calculating the Lagrangian, the first term, $\epsilon_{ABC} R^{AB} e^C$, gives
\begin{gather}
\epsilon_{ABC} R^{AB} e^C= 2\left( 2 f'( r ) f ( r) +r f'' ( r ) f ( r) + r f'^2 ( r ) +\frac{4 G^2 J^2}{r^3} \right) dt dr d\phi.
\end{gather}
For the second term we get
\begin{gather}
\frac{1}{3 \ell^2} \epsilon_{ABC} e^A e^B e^C = -\frac{2r}{\ell^2} dt dr d\phi.
\end{gather}
Also for the BTZ metric, the Chern-Simon term would give
\begin{gather}
L_{CS}= -\frac{8 G J}{r} \left( \frac{64 G^2 J^2}{r^4} + f'' f +f'^2-\frac{f' f}{r} \right).
\end{gather}
One can also check that, as the Lagrange multiplier for the locally AdS space is $\lambda_\mu^A=\frac{1}{\ell^2} e_\mu^A$, we have $\lambda_A T^A=0$, so there is no contribution from this term, as one expects from the equations of motion of TMG. For the boundary term $B$ one also finds
\begin{gather}
B=\frac{1}{32 \pi G} \epsilon_{ABC} \omega^{AB} e^C= 2 f \left(f + r f' \right) d\phi \wedge dt.
\end{gather}
Now we can write the parameters of the BTZ metric in terms of the outer and inner horizon radii $r_{+}, r_{-}$ (the solutions of $f( r )=0$) in the following form,
\begin{gather}
f^2( r ) =\frac{(r^2 -r_+^2)(r^2- r_-^2) }{ r^2 \ell^2}, \ \ \ \ \ 8GM=\frac{r_+^2 +r_-^2}{\ell^2}
, \ \ \ \ \ \ \ 8GJ=\frac{2 r_+ r_-}{\ell}.
\end{gather}
Also the total mass and total angular momentum of TMG could be written as \cite{Miskovic:2009kr}
\begin{gather}
\mathcal{M} =M-\frac{J}{\mu \ell^2}, \ \ \ \ \ \ \ \ \ \ \ \mathcal{J}=J-\frac{M}{\mu}.
\end{gather}
Now, similar to \cite{Alishahiha:2017hwg}, to find the rate of growth of complexity, one should calculate the difference between the on-shell actions evaluated over the two nearby WDW patches. At late times, the only part that contributes to the rate of complexity growth is region 1, shown in blue in Figure~\ref{fig:BTZpenrose}. For the BTZ case at late times, only the region between the two horizons contributes to this difference. So one finds
\begin{flalign}
\delta I_{\mathcal{M}} &= I_{\mathcal{M}} [ \text{WDW} \big |_{t +\delta t} ] - I_{\mathcal{M}} [\text{WDW}\big |_t ] \nonumber\\ &= -\frac{1}{16 \pi G} \int_{r_-}^{r_+} \int_{t} ^{t+\delta t} \int_{0}^{2\pi} \mathcal{L_{\text{EH}}} \ dt \ dr \ d\phi + \frac{1}{32 \pi G \mu} \int _{r_-}^{r_+} \int_t ^{t+\delta t} \int_0^{2\pi} L_{\text{CS}} \ dt \ dr \ d\phi \nonumber\\&=
-\frac{\delta t}{4 G \ell^2} \int_{r_-}^{r_+} \left(r+\frac{r_+^2 r_-^2}{r^3} \right) dr - \frac{\delta t J}{2 \mu} \int_{r_-} ^{r_+} \frac{dr}{r} \left( \frac{64 G^2 J^2}{r^4} + f'' f + f'^2 -\frac{f' f}{r} \right) \nonumber\\
&= -\frac{(r_+^2 -r_-^2)}{4G \ell^2}\delta t+ \frac{1}{4 G \ell^3 \mu} \left( \frac{r_+^4 -r_-^4}{r_+ r_-} \right) \delta t.
\end{flalign}
The first term, coming from the Einstein-Hilbert term, matches previous calculations such as in~\cite{Alishahiha:2017hwg}.
Then the contribution of the generalized Gibbons-Hawking boundary term \eqref{eq:boundary} would be
\begin{flalign}
\delta I_{\partial \mathcal{M}} &= \frac{1}{32 \pi G}\int_t ^{t+\delta t} \int_0^{2\pi} \frac{2}{\ell^2} \left(2r^2 -r_+^2 -r_-^2 \right) dt \ d\phi \Big |_{ r_+} \nonumber\\ & - \frac{1}{32 \pi G}\int_t ^{t+\delta t} \int_0^{2\pi} \frac{2}{\ell^2}\left(2r^2 -r_+^2 -r_-^2 \right) dt \ d\phi \Big | _{ r_-}=\frac{(r_+^2 -r_-^2)}{4 G \ell^2} \delta t.
\end{flalign}
Based on \eqref{eq:defA}, the complexity growth would be
\begin{gather}\label{eq:cbtz}
\dot{\mathcal{C}} =\frac{d I}{dt}= \frac{1}{4 G \ell^3 \mu} \left( \frac{r_+^4 -r_-^4}{r_+ r_-} \right).
\end{gather}
We can also write the complexity growth $\dot{\mathcal{C}}$ in terms of the conserved charges of BTZ as
\begin{gather} \label{eq:TMGcomplex}
\dot{\mathcal{C}}=\frac{4 M}{\mu J} \sqrt{M^2 -\frac{J^2}{\ell^2} }=\frac{4}{\ell^2}\frac{\mathcal{M} \mu \ \ell^2+ \mathcal{J} }{\mathcal{M}+\mu \mathcal{J} } \sqrt{\frac{\ell^2 \mathcal{M}^2-\mathcal{J}^2}{ \mu ^2 \ell^2 -1}}.
\end{gather}
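One can check \eqref{eq:TMGcomplex} against \eqref{eq:cbtz} explicitly: using $8GM=(r_+^2+r_-^2)/\ell^2$ and $8GJ=2r_+r_-/\ell$,
\begin{gather}
\sqrt{M^2 -\frac{J^2}{\ell^2}}=\frac{r_+^2-r_-^2}{8G\ell^2}, \qquad \frac{4M}{\mu J}\sqrt{M^2 -\frac{J^2}{\ell^2}}=\frac{2(r_+^2+r_-^2)}{\mu\ell\, r_+ r_-}\cdot \frac{r_+^2-r_-^2}{8G\ell^2}=\frac{1}{4G\ell^3\mu}\, \frac{r_+^4-r_-^4}{r_+ r_-}. \nonumber
\end{gather}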
One can notice that the higher derivative correction, which here is the Chern-Simons term, actually slows down the rate of growth of complexity, similar to the results of \cite{Alishahiha:2017hwg} for critical gravity, where the mass term decreased the rate.
Also note that for the special case $\mu \ell=1$ the complexity growth rate diverges, indicating again that this is a special point in the parameter space of the solution. At this critical point, the left central charge vanishes and the equations of motion degenerate to a log-gravity whose dual is an LCFT \cite{Grumiller:2008qz}.
From the result of \eqref{eq:cbtz} one can see that decreasing the coupling $\mu$, which increases the effect of the Chern-Simons term in the action \eqref{eq:TMGaction} and thus the parity violation, would increase the rate of complexity growth. This actually makes sense, since breaking the symmetry between the left- and right-moving modes should increase complexity and its growth rate. Note that for $\mu \to 0$, where the Chern-Simons term becomes completely dominant, the rate of complexity growth diverges, which however might not be physically possible due to the bound of \eqref{eq:bound}.
A peculiar feature of this result is that for $\mu \to \infty$ it does not reproduce the complexity growth of the pure Einstein action. This might be due to a specific feature of the Chern-Simons theory, or to the effect of the particular boundary term \eqref{eq:boundary} that we have chosen, which is independent of the factor $\mu$, unlike the NMG case which depends on $m^2$ through an auxiliary field. It is worth working further on this point and checking how the distinction between the left- and right-moving modes and the increasing chirality increase the complexity growth rate.
One might also try to interpret the results based on the difference between the central charges,
\begin{gather}
c_L=\frac{3 \ell}{2 G} \left( 1-\frac{1}{\mu \ell} \right), \ \ \ \ \ \ \ \ \ \ c_R=\frac{3 \ell}{2 G} \left(1+\frac{1}{\mu \ell} \right ),\ \ \ \ \ \ \Delta c=\frac{3}{\mu \ell}.
\end{gather}
Note also that, as in the TMG case the mass $M$ could be negative, the bound $\mathcal{\dot{C}}\le 2\mathcal{M}$ at $\mathcal{J}=0$ could be satisfied.
To examine the behavior of complexity, we compare it with other thermodynamical quantities of the BTZ black hole in TMG, which are as follows \cite{Zhang:2013mva, Myung:2008ey}:
\begin{gather}\label{eq:btzthermo}
S=\frac{\pi r_+}{2G}+\frac{1}{\mu \ell} \frac{\pi r_-}{2G}, \ \ \ \ \ \ \ T_H=\frac{r_+^2 -r_-^2}{2 \pi \ell^2 r_+}, \ \ \ \ \ M=\frac{r_+^2+r_-^2}{8 G \ell^2}, \ \ \ \ \ \ J=\frac{2 r_+ r_-}{8 G \ell}.
\end{gather}
Note that for the extremal case where $T \to 0$ and $r_+ -r_- \to 0$, we have $ \dot{\mathcal{C}} \to 0$ as we expected.
In fact, there are evidences that complexity is a quantity which shows similarities to both temperature and entropy. However, from \eqref{eq:btzthermo}, one can notice the more similarities are actually between complexity and temperature where both are always proportional to $(r_+-r_-)$. In \cite{Flory:2017ftd, Roy:2017uar} also it was shown that in certain systems by decreasing temperature complexity would decrease. All these observations could suggest that a more direct relationship between complexity and temperature exists, rather than complexity and entropy.
This is actually in accordance with Lloyd's proposal in \cite{Lloyd:2000}. As he put forward, integrating $T=(\partial S/ \partial E)^{-1}$, which leads to $T= CE/S$ ($C$ is just a constant), suggests that the temperature governs the number of operations per bit per second, $ \left( 2 E k_B \ln 2/ (\hbar S) \approx k_B T/ \hbar \right)$, that a system can perform; conceptually this is more related to the concept of complexity than to entropy. By calculating the complexity of black holes, we also see this interconnection directly.
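For a rough sense of scale (an illustrative numerical aside, not part of the original argument), Lloyd's ops-per-bit rate $k_B T/\hbar$ at room temperature is
\begin{verbatim}
kB   = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s
T    = 300.0             # room temperature, K
print(kB*T/hbar)         # ~3.9e13 operations per bit per second
\end{verbatim}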
\subsection{Warped $\text{AdS}_3$ black hole}
Now we want to calculate the holographic quantum complexity for a warped $\text{AdS}_3$ black hole and study the effect of warping factor in addition to the effect of chirality.
Warped $\text{AdS}_3$ black holes are in fact stretched or squeezed deformations of BTZ black holes. Their isometry group is $SL(2,R) \times U(1)$ and their dual boundary theory is a warped CFT (WCFT), which is a semi-direct product of a Virasoro algebra and a $U(1)$ Kac-Moody algebra.
The metric of $\text{WAdS}_3$ black hole in the ADM form can be written as
\begin{gather}\label{eq:warped}
ds^2= \ell^2 \left( dt^2 + 2 M( r ) dt d\theta + N ( r ) d \theta^2 + D ( r ) dr^2 \right ),
\end{gather}
where
\begin{flalign}
M( r ) &= \nu r -\frac{1}{2} \sqrt{r_+ r_- (\nu^2+3)}, \nonumber\\
N( r ) &= \frac{r}{4} \left(3 (\nu^2-1) r + ( \nu^2+3) ( r_+ + r_-) -4\nu \sqrt{r_+ r_- (\nu^2+3)} \right), \nonumber\\
D( r )&= \frac{1}{(\nu^2+3)(r-r_+)(r-r_-)}.
\end{flalign}
Note that $ \nu= \frac{\mu \ell}{3}$, and for the case of $\nu=1$ this metric reduces to the BTZ black hole in an unusual coordinate system.
The Carter-Penrose diagrams of these kinds of black holes have been presented in \cite{Ferreira:2013zta}; they are similar to those of asymptotically flat spacetimes in $3+1$ dimensions. Also, in \cite{Ferreira:2013zta} it was shown that these black holes are stable against massive scalar field perturbations.
If we choose the following vierbein \cite{Chen:2009hg}
\begin{gather}
e^0= \frac{\ell}{2 \sqrt{D ( r )} } d\theta, \ \ \ \ \ e^1= \ell \sqrt{ D ( r )} dr, \ \ \ \ \ e^2= \ell dt + M ( r) \ell d\theta,
\end{gather}
then the spin connections would be
\begin{gather}
\omega_t^{01}=-\omega _t^{10} =- M', \ \ \ \ \ \ \ \ \omega_r^{02}=-\omega_r^{20}=-\sqrt{D} M' ,\nonumber\\
\omega_\theta^{01}=-\omega_\theta^{10} = MM'-N', \ \ \ \ \ \omega_\theta^{12}=-\omega_\theta^{21}=-\frac{M'}{2\sqrt{D}}.
\end{gather}
Then calculating different terms in the Lagrangian, we find
\begin{gather}
\epsilon_{ABC} R^{AB} e^C= \frac{3}{2} \ell M'^2 dt \ dr \ d\theta, \ \ \ \ \ \ \ \ \frac{1}{3 \ell^2} \epsilon_{ABC} e^A e^B e^C=-\ell dt \ dr \ d\theta, \nonumber\\
L_{\text{CS}}=2 \left(M' N''-M'' N' \right) dt \ dr \ d\theta, \ \ \ \ \ \ \ \ 2\lambda_A T^A=0.
\end{gather}
Taking the integral we get
\begin{gather}
\delta I_{\mathcal{M}}= -\frac{\ell}{8G} (\nu^2-1) (r_+-r_-) \delta t.
\end{gather}
From the boundary term we also get
\begin{gather}
\delta I _{\partial M}= \frac{\ell}{16G} (\nu^2+3)(r_+ -r_-) \delta t.
\end{gather}
So the rate of complexity growth would be
\begin{gather}\label{eq:warpgrowthTMG}
\dot{\mathcal{C}}= \frac{\ell}{G} \left(\frac{5-\nu^2}{16}\right) (r_+-r_-).
\end{gather}
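As a sanity check of the algebra (a minimal \texttt{sympy} sketch), the bulk and boundary contributions above indeed sum to \eqref{eq:warpgrowthTMG}:
\begin{verbatim}
import sympy as sp

nu, ell, G, rp, rm = sp.symbols('nu ell G r_p r_m', positive=True)
bulk     = -ell*(nu**2 - 1)*(rp - rm)/(8*G)    # delta I_M / delta t
boundary =  ell*(nu**2 + 3)*(rp - rm)/(16*G)   # delta I_dM / delta t
rate     =  ell*(5 - nu**2)*(rp - rm)/(16*G)   # eq:warpgrowthTMG
print(sp.simplify(bulk + boundary - rate))     # prints 0
\end{verbatim}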
As the central charges of dual CFT are
\begin{gather}
c_L=\frac{4 \nu \ell}{(3-\nu^2) G}, \ \ \ \ \ \ c_R=\frac{(5\nu^2-3) \ell}{\nu (3-\nu^2) G},
\end{gather}
in order to have positive central charges we should have $\nu^2 <3$; one can then see that the growth of complexity is positive, as expected.
It can be seen that the deformation parameter $\nu$ actually decreases the rate of growth of complexity. In future works, the different proposed pictures for complexity, such as the ones in \cite{Freedman:2016zud} or \cite{Caputa:2017yrh}, which we explain in the discussion section, could be implemented to describe this fact.
The thermodynamical properties of warped $\text{AdS}_3$ black holes are also as follows \cite{Anninos:2008fx}
\begin{flalign}
T_R&=\frac{(\nu^2+3)(r_+-r_-)}{8 \pi \ell},\ \ \ \ \ \
T_L=\frac{(\nu^2+3)}{8 \pi \ell} \left( r_++r_- - \frac{\sqrt{(\nu^2+3)r_+ r_-} }{\nu} \right),\nonumber\\
T_H&=\frac{(\nu^2+3)}{4\pi \ell} \frac{(r_+-r_-)}{(2\nu r_+-\sqrt{(\nu^2+3)r_+ r_-} )},\ \ \ \ \ \
S = \frac{\pi \ell}{24 \nu G} \left [ (9\nu^2+3) r_+ - (\nu^2+3)r_- - 4\nu \sqrt{(\nu^2+3)r_+ r_-} \right ].
\end{flalign}
One can see that again the rate of complexity growth is more correlated with the temperatures, i.e., $T_R$ and $T_H$, than with the entropy of the black hole. It would be interesting to try to explain this observation further by considering the properties and dynamics of the modes in warped CFTs, and also by taking into account some other, more exotic pictures such as $\text{ER}=\text{EPR}$ in warped CFTs, or others such as \cite{Freedman:2016zud, Caputa:2017yrh}.
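To make the comparison with $T_R$ sharp, one can check (a small \texttt{sympy} sketch, added here as illustration) that the ratio $\dot{\mathcal{C}}/T_R$ is completely independent of the horizon radii:
\begin{verbatim}
import sympy as sp

nu, ell, G, rp, rm = sp.symbols('nu ell G r_p r_m', positive=True)
Cdot = ell*(5 - nu**2)*(rp - rm)/(16*G)
TR   = (nu**2 + 3)*(rp - rm)/(8*sp.pi*ell)
print(sp.simplify(Cdot/TR))  # pi*ell**2*(5 - nu**2)/(2*G*(nu**2 + 3))
\end{verbatim}
so all the dependence on $r_\pm$ cancels, unlike in the corresponding ratio with the entropy.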
\subsection{Null Warped $\text{AdS}_3$}
A vacuum solution of TMG is null warped $\text{AdS}_3$, which is only well-defined at $\nu=1$; this makes it easier to study just the effect of the $\mu$ term in the action on the rate of complexity growth. Its isometry group is again $SL(2,R) \times U(1)_{\text{null}}$. Also, the entropy and $T_L$ vanish for this case, while $T_R=\frac{\alpha}{\pi l}$.
The metric of null warped black hole is of the form \cite{Chen:2009hg},
\begin{flalign}
ds^2 &= l^2 \left(-2r d\theta dt+(r^2+r+\alpha^2) d\theta^2 +\frac{dr^2}{4r^2} \right),
\end{flalign}
where, to avoid a naked causal singularity, one should take $0<\alpha <1/2$.
The vierbein are
\begin{gather}
e^0 =r l dt+\frac{1}{2} l (1-\alpha^2-r-r^2) d\theta, \ \ \ \ \
e^1 = \frac{l}{2r} dr, \ \ \ \ \
e^2 =-r l dt+\frac{1}{2} l (1+\alpha^2+r+ r^2) d\theta,
\end{gather}
and the non-zero components of the spin connections are
\begin{flalign}
\omega^{01}&=-\omega^{10} = r dt+ \frac{1}{2} (1-r-3r^2+\alpha^2) d\theta, \nonumber\\
\omega^{02}&=-\omega^{20}=\frac{1}{2r} dr, \nonumber\\
\omega^{12}&=-\omega^{21}= r dt+\frac{1}{2} ( -1-r-3r^2 +\alpha^2) d\theta.
\end{flalign}
Now computing all the terms in the Lagrangian we get
\begin{gather}
\epsilon_{ABC} R^{AB} e^C= -3 l \ dt dr d\theta , \ \ \ \ \ \ \
\frac{1}{3 l^2} \epsilon_{ABC} e^A e^B e^C = l dt dr d\theta, \nonumber\\
L_{CS}=2(1+\alpha^2+3r^2) dt d\theta dr,\ \ \ \ \ \ \ \ \ B=-4r l d\theta dt.
\end{gather}
Taking the integral from $r=0$ to a specific $r_s$, we get
\begin{gather}
\dot{\mathcal{C}}=\frac{r_s}{4G} \left(l+\frac{1+\alpha^2+r_s^2}{2\mu} \right)-8\pi G r_s.
\end{gather}
The first two terms come from the bulk action and the last term comes from the boundary term. Note that, again, decreasing the parameter $\mu$, which increases the chirality, actually increases the rate of growth of complexity, similar to the BTZ solution of TMG.
\subsection{Supersymmetric black hole}
A new solution of TMG with negative cosmological constant was found in \cite{Dereli:2000fm}; it is supersymmetric, asymptotically approaches the extremal BTZ solution, and goes to flat space if one sets the cosmological constant to zero. With these specific characteristics, it might be interesting to also check its rate of complexity growth.
For these black holes the vierbein would be \cite{Dereli:2000fm},
\begin{flalign}
e^0= f(\rho) dt, \ \ \ \ \ \ \ e^1=d\rho, \ \ \ \ \ \ \ \ e^2=h(\rho) (d\phi+a(\rho) dt),
\end{flalign}
and the spin connections are
\begin{flalign}
{\omega^0}_1=\left(f'-\frac{a a' h^2}{2f}\right)dt-\frac{a' h^2}{2f}d\phi, \ \ \ \ \ \ {\omega^0}_2=-\frac{a' h}{2f} d\rho, \ \ \ \ \ {\omega^1}_2=\left( -\frac{a' h}{2}-ah'\right)dt -h' d\phi.
\end{flalign}
The metric functions for the solution of \cite{Dereli:2000fm} are
\begin{flalign}
f&=f_0 e^{2\rho/l} \left(1+ \beta_1 e^{2\rho/l} +\beta_2 e^{(1/l-k \mu) \rho} \right)^{-1/2}, \nonumber\\
h&=h_0 \left( 1+\beta_1 e^{2\rho/l}+\beta_2 e^{(1/l-k \mu)\rho} \right)^{1/2}, \nonumber\\
a&=-a_0+k \frac{f_0}{h_0} e^{2\rho/l} \left(1+\beta_1 e^{2\rho/l}+\beta_2 e^{(1/l-k \mu) \rho} \right)^{-1},
\end{flalign}
where $\beta_1$, $\beta_2$, $a_0$, $f_0$, $h_0$ are integration constants. The extremal BTZ
solution can be recovered in the limit $ | \mu | \to \infty$ of the above solution.
Note that both the extremal BTZ and the solution here are in fact supersymmetric, since for both there exists a 2-spinor $\epsilon$ which satisfies
\begin{gather}
(2\mathcal{D}+\frac{1}{l} \gamma) \epsilon=0,
\end{gather}
where $\gamma=\gamma_a e^a$ and $\mathcal{D}=d+\frac{1}{2} \omega^{ab} \sigma_{ab}$.
As mentioned in \cite{Dereli:2000fm}, depending on the values of the integration constants $\beta_1$ and $\beta_2$, we can find singularities and event horizons in the metric functions.
Now, by inverting the metric and finding the roots of the $tt$ component of the inverse metric, i.e., $g^{tt}$, we can find the location of the event horizon as
\begin{gather}
2 \beta_1 e^{\left(\frac{1}{l}+k \mu \right) \rho }+\beta_2(1-\mu k l ) =0, \ \ \ \to \ \rho _s=\frac{l}{1+\mu k l }\log \left(\frac{\beta_2 (\mu k l -1) }{2 \beta_1} \right).
\end{gather}
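The same root can be obtained mechanically (a minimal \texttt{sympy} sketch, assuming $\mu k l>1$ so that the argument of the logarithm is positive):
\begin{verbatim}
import sympy as sp

rho = sp.symbols('rho')
l, k, mu, b1, b2 = sp.symbols('l k mu beta_1 beta_2', positive=True)
eq = 2*b1*sp.exp((1/l + k*mu)*rho) + b2*(1 - mu*k*l)
print(sp.solve(eq, rho))
# -> l*log(beta_2*(mu*k*l - 1)/(2*beta_1))/(mu*k*l + 1), up to rewriting
\end{verbatim}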
Now for this solution we can find the terms in the Lagrangian. The first term gives
\begin{flalign}
\epsilon_{ABC} R^{AB} e^C&= \frac{a'^2 h^3}{2f} -2( h f''+h' f' + h'' f ),
\end{flalign}
The second term would be
\begin{flalign}
\frac{1}{3 l^2} \epsilon_{ABC} e^A e^B e^C=\frac{2 f h}{l^2} dt d\phi d\rho.
\end{flalign}
The Chern-Simons term is
\begin{flalign}
L_{CS}&= \frac{h^2 a'}{f^2} \left( f'^2-h^2 a'^2 \right)- \frac{f' h}{f} \left( a'' h+4 a' h' \right)
+h \left( a'' h'-a' h''+\frac{a' h f''}{f} \right) + 3 a' h'^2,
\end{flalign}
and the boundary term is
\begin{flalign}
B=-\frac{1}{16\pi G} \left( h f'+h' f \right) d\phi \wedge dt= -\frac{1}{4 G l} e^{\frac{2 \rho }{l}} f_0 h_0 \delta t.
\end{flalign}
The general expression for the rate of complexity growth is complicated. However, for the special case of $k=1$ the result simplifies considerably, so we only present this case. We find
\begin{flalign}
\delta I_M&= \frac{f_0 h_0 }{ G l^2}\left(\left(\frac{\beta _2 (\mu l-1)}{2\beta _1}\right)^{\frac{2}{\mu l+1}}-\frac{l}{4}\right) \delta t,
\end{flalign}
and
\begin{flalign}
\delta I_{\partial \mathcal{M}}=\frac{f_0 h_0}{G l} \left(\frac{1}{4}-2^{-2-\frac{2}{ \mu l+1}} \left(\frac{\beta _2(\mu l-1)}{\beta _1}\right)^{\frac{2}{ \mu l+1}}\right)\delta t,
\end{flalign}
So the rate of complexity growth would be
\begin{gather}
\mathcal{\dot{C}}=\frac{ f_0 h_0 }{G l^2}2^{-2-\frac{2}{ \mu l+1}}\left(4^{\frac{1}{ \mu l+1}} l-(l-2) \left(\frac{\beta _2 (\mu l-1)}{\beta _1}\right)^{\frac{2}{\mu l+1}}\right).
\end{gather}
Note that this is a more complicated function of $\mu$, but it generally decreases with increasing coupling constant $\mu$, similar to the BTZ and warped $\text{AdS}_3$ cases.
\subsection{ACL black hole}
In addition to BTZ, TMG also admits a non-asymptotically flat, non-asymptotically AdS black hole solution named ACL \cite{Bouchareb:2007yx}. It was shown in \cite{Moussa:2008sj} that these black holes are geodesically complete and causally regular, a property that makes the computation of their complexity interesting.
This black hole is of the following form
\begin{flalign}
ds^2&=-\beta^2 \frac{\rho^2-\rho_0^2}{r^2} dt^2+\frac{1}{\zeta^2 \beta^2} \frac{d\rho^2}{\rho^2-\rho_0^2}+r^2 \left( d\varphi-\frac{\rho+(1-\beta^2)\omega}{r^2} dt \right)^2,
\end{flalign}
with
\begin{flalign}
r^2= \rho^2+2 \omega \rho+\omega^2 (1-\beta^2)+\frac{\beta^2 \rho_0^2}{1-\beta^2},
\end{flalign}
where
\begin{gather}\label{eq:change}
\beta^2 \equiv \frac{1}{4} \left( 1-\frac{27 \Lambda}{\mu^2} \right), \ \ \ \ \ \zeta=\frac{2}{3}\mu.
\end{gather}
Note that the two parameters $\omega$ and $\rho_0 \ge 0$ are related to the mass and angular momentum of the black hole.
Also, if $\omega=\rho_0=0$, the metric becomes horizonless and reduces to a ground state solution. Therefore, we expect the complexity and its growth rate to vanish in this case.
Writing the metric in the ADM form
\begin{gather}
ds^2=-N^2 dt^2+r^2 (d\varphi+N^{\varphi} dt)^2+\frac{1}{(\zeta r N)^2} d\rho^2,
\end{gather}
the dreibein $e^a$ for this metric would be
\begin{gather}
e^0=N dt, \ \ \ \ \ \ \ \ \ e^1= r( d\varphi+N^{\varphi} dt), \ \ \ \ \ \ \ \ e^2=\frac{1}{\zeta r N} d\rho,
\end{gather}
with the following corresponding spin connections \cite{Bouchareb:2007yx},
\begin{flalign}
{\omega^0}_2 &=\zeta r [N' e^0+\frac{1}{2} r (N^\varphi)' e^1],\nonumber\\
{\omega^0}_1&= \zeta r \frac{1}{2} r (N^\varphi)' e^2, \nonumber\\
{\omega^1}_2& = \zeta r [\frac{1}{2} r (N^\varphi)' e^0+N \frac{r'}{r} e^1].
\end{flalign}
Now calculating different terms in the Lagrangian, we get the following results
\begin{gather}
\epsilon_{ABC} R^{AB} e^C=\frac{\zeta r}{2} \left (r^3 ({N^\varphi}')^2+4 N N' r' \right)dt d\rho d\varphi= \frac{\zeta}{2}dt d\rho d\varphi, \nonumber\\
\frac{1}{3 l^2}\ \epsilon_{ABC} e^A e^B e^C =-\frac{2}{\zeta l^2} dt d\rho d\varphi, \nonumber\\
L_{CS}=-\frac{\zeta^2 r^3}{2} (N^{\varphi})' \left(r^3 ({N^\varphi} ')^2 - 4 N N' r' \right)dt d\rho d \varphi,
\end{gather}
and the boundary term would be
\begin{gather}
\frac{1}{32 \pi G} \epsilon_{ABC} \omega^{AB} e^C= \frac{\zeta }{16 \pi G} r N ( N' r+N r') dt d\varphi.
\end{gather}
The contribution of the first two Einstein terms would be
\begin{gather}
\delta I_{\mathcal{M}_1}=-\frac{\rho_0 \zeta }{8 G} \left(1-\frac{4}{l^2 \zeta^2 } \right) \delta t,
\end{gather}
and the contribution of the Chern-Simons term is
\begin{gather}
\delta I_{\mathcal{M}_{CS}}=\frac{\zeta \beta }{24 G } \Bigg (\frac{\zeta ^2\left(2 \rho _0^2+\omega ^2\left(3 \beta ^4-\beta ^2-2\right)\right)}{\sqrt{\left(\rho _0^2+\left(\beta ^2-1\right) \omega ^2\right)\left(\beta ^2-1\right)}}\tanh ^{-1}\left(\frac{2 \beta \rho _0 \sqrt{\left(\rho _0^2+\left(\beta ^2-1\right) \omega ^2\right)\left(\beta ^2-1\right)}}{\rho _0^2+\left(\beta ^4-1\right) \omega ^2}\right)\nonumber\\
+5\beta \omega \log \left(\frac{\rho _0-\omega \left(\beta ^2-1\right) }{\rho _0+\omega \left(\beta ^2-1\right) }\right)+\frac{4 \rho _0^3\beta ^3 }{\rho _0^2-\left(\beta ^2-1\right)^2 \omega ^2}-6\rho _0 \beta -\frac{\rho _0}{\beta }\Bigg) \delta t.
\end{gather}
Finally the boundary term would result in
\begin{gather}
\delta I_{\partial \mathcal{M}}= \frac{ \rho _0 \zeta \beta ^2 }{4 G}\,\delta t.
\end{gather}
\begin{figure}[ht!]
\centering
\includegraphics[width=75mm] {ACLplot}
\caption{Plot of $\dot{\mathcal{C}}$ vs. $\mu$ for $\rho_0=\omega=G=l=1$ and $\beta=\frac{1}{2}$. }
\label{fig:ACL}
\end{figure}
Note that if $\omega=\rho_0=0$ all these three terms vanish as we have expected.
Using \eqref{eq:change}, one can then write the sum of all these terms as a function of $\mu$. The final result for the rate of complexity growth versus $\mu$ is shown in Figure \ref{fig:ACL}. Again, one can notice that decreasing $\mu$, i.e., increasing the effect of the Chern-Simons term, increases the rate of complexity growth, to the point that for $\mu \to 0$ it diverges, again in a $\frac{1}{\mu}$ fashion, similar to the previous cases. For this figure, however, note that one should only consider the region where $\dot{\mathcal{C}}$ is positive.
As calculated in \cite{Bouchareb:2007yx}, the thermodynamical quantities of this black hole are as follows:
\begin{gather}
T_H=\frac{\mu \beta^2}{3\pi} \frac{\rho_0 \sqrt{1-\beta^2} }{ \rho_0+(1-\beta^2) \omega}, \ \ \ \ \ \
S=\frac{\pi}{3G \sqrt{1-\beta^2} } \left( (1+\beta^2)\rho_0+(1-\beta^2)\omega \right),\nonumber\\
\mathcal{M}=\frac{\mu}{9 G} \beta^2 (1-\beta^2)\omega, \ \ \ \ \ \ \ \mathcal{J}=\frac{\mu \beta^2 }{18G} \left( (1-\beta^2) \omega^2-\frac{1+\beta^2}{1-\beta^2}\rho_0^2 \right).
\end{gather}
Here it is a bit more difficult to distinguish the strength of the correlation between complexity and the different thermodynamical quantities.
\subsection{Shockwave solution of TMG}
Similar to \cite{Alishahiha:2017hwg}, one can also study the shockwave solution of TMG to get more information about the boundary complexity. We will present this computation in our future paper \cite{Ghodrati:2017rta}; here we just sketch the general idea.
First, one writes the black brane metric
\begin{gather}
ds^2=-\frac{r^2-r_h^2}{\ell^2} dt^2+\frac{\ell^2}{r^2-r_h^2} dr^2+\frac{r^2}{\ell^2} dx^2, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \Lambda=-\frac{1}{\ell^2},
\end{gather}
in Kruskal coordinates \cite{Alishahiha:2016cjk},
\begin{gather}
ds^2= 2 A(uv) du dv+B(uv) dx^2,
\end{gather}
where
\begin{gather}
A(uv)=-\frac{2 c \ell^2}{(1+ c uv)^2}, \ \ \ \ \ \ \ \ B(uv)=\frac{r_h^2}{\ell^2} \left( \frac{1-c u v}{1+c u v} \right)^2.
\end{gather}
Then, by considering the following back-reacted metric ansatz
\begin{gather}
ds^2= 2 A(UV) dU dV+B(UV) dx^2 -2 A(UV) h(x) \delta(U) dU^2,
\end{gather}
and the calculated form of the shock wave strength, i.e., the function $h(x)$ \cite{Alishahiha:2016cjk},
\begin{gather}
h(x)=-\frac{\eta}{2 a_2} \left (x+\frac{1}{2 a_2} \right) e^{-a_2 x},
\end{gather}
and also the following form of scrambling time and butterfly velocities \cite{Alishahiha:2016cjk}
\begin{gather}
t_*=-\frac{\beta}{2\pi} \log \frac{k}{\ell}, \ \ \ \ \ \ \ \ \ \ \ \ \ v_B^{(1)}=\frac{2\pi}{\beta a_2}=1, \ \ \ \ \ \ \ \ \ \ \ \ \ v_B^{(2)}=\frac{2\pi}{\beta a_1}=\frac{1}{\mu \ell},
\end{gather}
one can calculate the action for two different regimes: small shifts, $ u_0^{-1} +h(x) <v_0$, and large shifts, where $ u_0^{-1} +h(x) \ge v_0$. Note that for these two cases the corresponding WDW patches are different, leading to different results for the rate of complexity growth.
\section{Complexity growth in a parity-preserving theory}
We now study the rate of complexity growth for several black hole solutions of New Massive Gravity, a parity-preserving theory, to compare with the previous case and to examine the effects of the mass term, the warping factor, the hair parameter and other parameters of the theory.
The action of NMG is
\begin{flalign}\label{eq:NMGaction}
I=\frac{1}{16 \pi G} \int_{\mathcal{M}} d^3 x \sqrt{-g} \left[ R-2\Lambda-\frac{1}{m^2} \left( R^{\mu \nu} R_{\mu \nu} -\frac{3}{8} R^2 \right) \right],
\end{flalign}
where $m$ is a dimensionful parameter.
One can also write this theory in the following form \cite{Hohm:2010jc}
\begin{gather}
I=\frac{1}{16 \pi G} \int_{\mathcal{M}} d^3 x \sqrt{-g} \left[ R-2\Lambda+f^{\mu \nu} G_{\mu \nu}+\frac{m^2}{4} \left( f^{\mu \nu} f_{\mu \nu} -f^2 \right) \right ].
\end{gather}
In the above action, $G_{\mu \nu}$ is the Einstein tensor and the auxiliary field $f_{\mu \nu}$ is
\begin{gather}
f_{\mu \nu}=-\frac{2}{m^2} \left( R_{\mu \nu}-\frac{1}{2 (d+1)} R g_{\mu \nu} \right ).
\end{gather}
The Gibbons-Hawking boundary term would be
\begin{gather}\label{eq:NMGboundary}
I_{GGH}= \frac{1}{16 \pi G} \int_{\partial \mathcal{M}} d^2 x \sqrt{- \gamma} \left( -2K-\hat{f}^{ij} K_{ij}+\hat{f} K \right),
\end{gather}
where $K_{ij}$ is the extrinsic curvature of the boundary and $K=\gamma^{ij} K_{ij}$ is its trace. The auxiliary field $\hat{f}^{ij}$ is defined as
\begin{gather}
\hat{f}^{ij}= f^{ij}+2h^{(i} N^{j)}+s N^i N^j,
\end{gather}
where the above functions are defined through the following ADM form of the metric
\begin{gather}
ds^2= N^2 dr^2+\gamma_{ij}( dx^i+N^i dr) (dx^j+N^j dr).
\end{gather}
Note that NMG is also a rich theory which admits several solutions. The complexity growth rates for the BTZ, AdS-Schwarzschild, and shockwave solutions of this theory have been studied in \cite{Alishahiha:2017hwg}. Here we would like to study the rate of complexity growth for some other black hole solutions.
\subsection{Warped $\text{AdS}_3$ black hole}
The form of the metric was given in \eqref{eq:warped}. By calculating the action \eqref{eq:NMGaction} and the boundary term \eqref{eq:NMGboundary}, the rate of complexity growth can be found as
\begin{flalign}
\delta I_{\mathcal{M}}&= \frac{1}{16 \pi G} \int_{r_-}^{r_+} \int_{t}^{t+\delta t} \int_0^{2\pi} \mathcal{L}_{\text{NMG}} d\phi dt dr\nonumber\\&
=\frac{\delta t \ l \left(4 \nu ^4-48 \nu ^2+9\right)}{8G(20 \nu ^2-3)} (r_+-r_-),
\end{flalign}
and
\begin{flalign}
\delta I_{\mathcal{\partial M}}&= \frac{1}{16 \pi G} \int_t^{t+\delta t} \int_0^{2\pi} dt \ d\phi \left[ \frac{3 l\left(\nu ^2+3\right)\left(4 \nu ^2-1\right)}{ \left(20 \nu ^2-3\right)}\left(2 r-r_+-r_-\right)\right] \Bigg | _{r+}\nonumber\\& - \frac{1}{16 \pi G} \int_t^{t+\delta t} \int_0^{2\pi} dt \ d\phi \left[ \frac{3 l\left(\nu ^2+3\right)\left(4 \nu ^2-1\right)}{ \left(20 \nu ^2-3\right)}\left(2 r-r_+-r_-\right)\right] \Bigg | _{r-}
\nonumber\\& = \frac{\delta t \ 3 l (\nu^2+3) (4\nu^2-1)}{4G (20\nu^2-3)} (r_+-r_-).
\end{flalign}
So, the rate of increase in complexity is
\begin{gather}\label{eq:warpgrowthNMG}
\dot{\mathcal{C}}=\frac{dI}{dt} =\frac{l \ \left(28 \nu ^4+18 \nu ^2-9\right)}{8 G \left(20 \nu ^2-3\right)}\left(r_+-r_-\right).
\end{gather}
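Again, the bulk and boundary contributions combine as stated (a minimal \texttt{sympy} check, added for illustration):
\begin{verbatim}
import sympy as sp

nu, l, G, rp, rm = sp.symbols('nu l G r_p r_m', positive=True)
bulk     = l*(4*nu**4 - 48*nu**2 + 9)/(8*G*(20*nu**2 - 3))*(rp - rm)
boundary = 3*l*(nu**2 + 3)*(4*nu**2 - 1)/(4*G*(20*nu**2 - 3))*(rp - rm)
total    = l*(28*nu**4 + 18*nu**2 - 9)/(8*G*(20*nu**2 - 3))*(rp - rm)
print(sp.simplify(bulk + boundary - total))   # prints 0
\end{verbatim}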
We can also write the above result in terms of the conserved charges of the solution, $\mathcal{M}$ and $\mathcal{J}$ \cite{Ghodrati:2016vvf}.
One can notice that, similar to \eqref{eq:warpgrowthTMG}, both in TMG and in NMG the rate of complexity growth for warped $\text{AdS}_3$ black holes depends only on the warping factor $\nu$ and the difference between the inner and outer horizon radii, i.e., $\dot{\mathcal{C}}_\text{WBTZ} \propto (r_+-r_-)$, while in the BTZ case the relation is of the form $\dot{\mathcal{C}}_{\text{BTZ}} \propto (r_+^2-r_-^2 )$. These results are summarized in Table \ref{tab:table1}. \\
\begin{table}[h]
\centering
\begin{tabular}{l | l | l}
& \ \ \ \ \ \ TMG & \ \ \ \ \ \ NMG \\
\hline \rule{0pt}{1.5\normalbaselineskip}
BTZ & $\frac{r_+^2- r_-^2}{4 G l^2} \left (\frac{r_+^2 +r_-^2}{\mu l r_+ r_-} \right)$ & $\frac{r_+^2- r_-^2}{4 G l^2} \left (1-\frac{1}{2 l^2 m^2} \right)$ \\ [2.5ex]
$\text{WAdS}_3$ & $\frac{l \left(r_+-r_-\right)}{8G}\left( \frac{ 5-\nu^2}{2}\right)$
& $\frac{l \left(r_+-r_-\right)}{8G}\left( \frac{ 28 \nu ^4+18 \nu ^2-9}{20 \nu ^2-3}\right) $
\end{tabular}
\caption{Complexity growth of BTZ and $\text{WAdS}_3$ black holes in two theories of TMG and NMG.}
\label{tab:table1}
\end{table}
Interestingly, this is similar to the way the temperatures of the inner and outer horizons of these black holes, i.e., ${T_H}^+$ and ${T_H}^-$, and also the right-moving temperature $T_R$ of the warped $\text{AdS}_3$ black hole, depend on the horizon radii. The difference between the factors of $r_+$ and $r_-$ in the CFT and warped CFT cases could be due to the fact that in CFTs we have both left- and right-moving modes, while in WCFTs there are only right-moving modes. Studying these relations further could help toward a better understanding of the thermodynamics of quantum complexity.
Also, it is worth noticing that in the region where the solution is spacelike stretched and free of naked closed timelike curves (CTCs), which requires $\nu^2 >1$, the relation \eqref{eq:warpgrowthNMG} is an increasing function of $\nu$, while the relation \eqref{eq:warpgrowthTMG} is a decreasing one.
\subsection{New hairy black hole}
It would also be interesting to study the effect of a black hole's hair on the growth rate of complexity, as any hair parameter could change different features of black holes, such as their evaporation, encoding of information and scrambling behaviors.
To do so, we study a hairy black hole solution of NMG which was first introduced in \cite{Oliva:2009ip} and later studied further in \cite{Nam:2010dd, Giribet:2009qz , Capela:2012uk}, as well as in \cite{Ghodrati:2016ggy, Ghodrati:2016tdy}, where its Hawking-Page phase diagrams were presented.
For this type of black hole, we should set $m^2=\Lambda=-\frac{1}{2 l^2}$ in the action \eqref{eq:NMGaction}. The metric can then be derived as
\begin{gather}
ds^2=-N F dt^2+\frac{dr^2}{F}+r^2 (d\phi+N^\phi dt)^2,
\end{gather}
where
\begin{equation}
\begin{split}
N=&\Big( 1+\frac{b l^2}{4H} \big( 1-\Xi^{\frac{1}{2}} \big) \Big)^2,\ \ \ \ \ \ \ \ \ \
N^\phi=-\frac{a}{2r^2} (4G_N M-bH),\nonumber\\
F=&\frac{H^2}{r^2} \Big( \frac{H^2}{l^2} +\frac{b}{2} \Big(1+ \Xi^{\frac{1}{2}} \Big)H+\frac{b^2 l^2}{16} \Big( 1-\Xi^ {\frac{1}{2}} \Big)^2- 4G_N M\ \Xi^{\frac{1}{2}} \Big),\nonumber\\
H=& \Big( r^2-2G_N M l^2 \big (1- \Xi^{\frac{1}{2}} \big) -\frac{b^2 l^4}{16} \big(1-\Xi^{\frac{1}{2}} \big)^2 \Big)^{\frac{1}{2}},
\end{split}
\end{equation}
and $\Xi :=1-\frac{a^2}{l^2}$, with $-l \le a \le l $.
Depending on the range of the parameters $M$, $a$, and $b$, this solution could have an ergosphere and inner and outer horizons which could make this example more interesting for studying its rate of complexity growth.
The Penrose diagrams for different signs of $b$ and $\mu$ have been presented in \cite{Oliva:2009ip}; they are basically similar to the Schwarzschild case.
Calculating the complexity growth for the most general case gives a very complicated answer. Since we are only interested in studying the effect of the hair parameter $b$ here, we take $a=0$ and only consider the non-rotating case. We then get
\begin{flalign}
\delta I_{\mathcal{M}}&=I_{\mathcal{M}} [\text{WDW} |_{t+\delta t}]- I_{\mathcal{M}} [\text{WDW} |_t]=\frac{\left(r_+-r_-\right)}{2 G l^2} \left(b l^2+r_++r_-\right)\delta t,
\end{flalign}
and the contribution from the boundary term is
\begin{flalign}
\delta I_{\partial \mathcal{M}} =-\frac{\left(r_+-r_-\right)}{2 G l^2}\left( \frac{b^2 l^4}{4}+b l^2 \left(r_++r_-\right)+\frac{2 }{3}\left(r_+^2+r_+ r_-+r_-^2-6 M G l^2\right)\right)\delta t.
\end{flalign}
Note that increasing the hair parameter $b$ increases the contribution to the complexity growth from the bulk term and decreases the complexity growth coming from the boundary term.
For the following special case where
\begin{gather}
b=-\frac{1}{l^2} (r_++r_-), \ \ \ \ \ \ \ M=-\frac{r_+ r_-}{4 G l^2},
\end{gather}
the total rate of complexity growth is
\begin{gather}
\dot{\mathcal{C}}=\frac{\left(r_+-r_-\right) }{2 G l^2}\left(\frac{1}{3}\left(r_+^2+r_-^2\right) + \frac{l^2}{4}\left(r_++r_-\right) +\frac{7}{3} r_+r_-\right).
\end{gather}
One can see that similar to the relation for the temperature of this type of black hole, the rate of complexity growth also has a factor of $(r_+-r_-)$.
It might also be interesting to study the complexity growth rate for the case with a positive cosmological constant, where the black hole possesses both event and cosmological horizons \cite{Oliva:2009ip}.
\subsection{Log black hole solution }
Another solution of NMG whose complexity growth behavior could be interesting is the so-called ``log" solution. This solution was first found in \cite{Clement:2009ka,Ghodsi:2010gk}. For the special case of $\nu=-l$, defined in \cite{Ghodsi:2010gk}, the following simple solution can be written:
\begin{flalign}
ds^2&=\left ( -2 l^{-2} \rho+( A \rho^{-l} \log (\rho) +B) \right) dt^2-2l ( A \rho^{-l} \log(\rho)+B) dt d\phi\nonumber\\ &+
\left( 2\rho +l^2 (A \rho^{-l} \log(\rho) +B)\right) d\phi^2+\frac{l^2 d\rho^2}{4n\rho^2},
\end{flalign}
and for this case the entropy is
\begin{gather}
S=\frac{2 A_h l (l+1)}{G (1+8l(l+1))}.
\end{gather}
Now, computing the Lagrangian \eqref{eq:NMGaction} and the boundary term \eqref{eq:NMGboundary}, we find
\begin{gather}
\delta I_{\mathcal{M}}=-\frac{4 (l+1) \left(r_+-r_-\right)}{G l (1+8 l (l+1))}\delta t,
\end{gather}
and
\begin{gather}
\delta I_{\partial \mathcal{M}}=\frac{4(l+1) \left({r_+}^2-{r_-}^2\right)}{G l (1+8 l (l+1))}\delta t,
\end{gather}
leading to the total complexity growth of
\begin{gather}
\dot{\mathcal{C}}=\frac{4 (l+1)}{G l (1+8 l (l+1))} \left(r_+-r_-\right) \left(r_++r_--1\right).
\end{gather}
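The combination is elementary, $(r_+^2-r_-^2)-(r_+-r_-)=(r_+-r_-)(r_++r_--1)$; a one-line \texttt{sympy} confirmation, added for completeness:
\begin{verbatim}
import sympy as sp

l, G, rp, rm = sp.symbols('l G r_p r_m', positive=True)
pref = 4*(l + 1)/(G*l*(1 + 8*l*(l + 1)))
bulk, boundary = -pref*(rp - rm), pref*(rp**2 - rm**2)
total = pref*(rp - rm)*(rp + rm - 1)
print(sp.simplify(bulk + boundary - total))   # prints 0
\end{verbatim}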
One can see that even for a ``log" solution, although different terms such as $f_{ij}$ might have a complicated form, the final result for the rate of complexity growth simplifies greatly, and a factor of $(r_+ - r_-)$ is again present, similar to the BTZ and warped $\text{AdS}_3$ black holes.
\section{Discussion}\label{sec:disc}
In this paper our first aim was to examine the effect of chirality on the rate of growth of complexity, in order to get more information about how different aspects of the ``Complexity=Action" conjecture work in different setups.
To do so, we studied the rate of complexity growth for different solutions of a parity-violating theory (TMG) and a parity-preserving theory (NMG). Specifically, using the CA conjecture, we calculated the complexity growth for the BTZ, warped $\text{AdS}_3$, null warped $\text{AdS}_3$, supersymmetric and ACL black hole solutions of TMG, and then for the warped $\text{AdS}_3$, new hairy and log black hole solutions of NMG.
Using the specific Gibbons-Hawking boundary term of TMG introduced in \cite{Miskovic:2009kr}, and then calculating the different terms of the action and integrating over the Wheeler-DeWitt patch, we found that increasing the parameter $\mu$ actually decreases the rate of complexity growth of the BTZ black hole in TMG. Decreasing $\mu$, which increases the effect of the Chern-Simons term and the chirality, increases the rate of complexity growth, and for $\mu \to 0$ the rate diverges. For the parity-preserving theory of NMG, however, we saw that decreasing $m^2$, which increases the effect of the higher-derivative term (it couples as $\frac{1}{m^2}$), decreases the rate of growth of complexity of BTZ.
For the case of the warped $\text{AdS}_3$ black hole, we found that generally the warping factor $\nu$ decreases the rate of growth of complexity in the chiral theory of TMG, while it increases this rate in NMG. This could be interpreted through the dynamics of the left- and right-moving modes and their effects on the growth of complexity in these two theories. It would also be very interesting to study the effect of $\mu$ or of the warping factor on the scrambling or switchback time of a warped $\text{AdS}_3$ black hole in these two theories and compare the results.
Another interesting point that we have found is that in all of these theories there is a direct relationship between the rate of complexity growth and the difference between the inner and outer horizon radii, i.e., $\dot{\mathcal{C}} \propto (r_+-r_-) $. This factor is also present in the expressions for the temperature of the BTZ, warped BTZ and hairy black holes, while this is not the case for their entropies. This could suggest that there is a stronger correlation between complexity and temperature than between complexity and entropy, which is worth further investigation. This is actually in line with the idea of Lloyd \cite{Lloyd:2000}, and could help in understanding further the thermodynamics-like laws of complexity.
One could also study this rate for other solutions, such as Lifshitz or hyperscaling-violating \cite{Ghodrati:2014spa} backgrounds with black holes. However, for those solutions which break Lorentz invariance there is no well-defined Carter-Penrose diagram, as the scalings of the time and space coordinates in these backgrounds are different; this makes determining the form of the WDW patch and computing the holographic complexity more difficult.
It would also be very interesting to calculate the rate of change of complexity in dynamical setups such as the ones in \cite{Sachs:2011xa, Flory:2013ssa}, where an interpolating solution between a past horizon and a chiral AdS pp-wave was found. Studying the behavior of the complexity growth rate and the effects of different factors in these backgrounds could shed more light on the nature of holographic complexity.
In \cite{Carmi:2016wjl}, the structure of the UV divergences that appear in $\mathcal{C}_V (\Sigma )$ was studied, where at first order the coefficients of the divergences were written in terms of the extrinsic and intrinsic curvatures integrated over the boundary time slice. To gain more information about these structures in different boundary CFTs, such as warped CFTs or topological matter, and to study the effect of different factors such as chirality, it might be useful to go beyond the first order and try to find some universal coefficients at the first and second orders of these divergences. For example, similar to \cite{Alishahiha:2017cuk}, one can gain more information about the fidelity susceptibility of different field theories. We sketch a few steps of this calculation in appendix \ref{appendix}.
There are many more ideas and developments, in different setups, for calculating complexity or its growth rate that could be applied to the warped CFT case as well; we review some of them in the following.
In \cite{Hashimoto:2017fga}, by discretizing the $U(1)$ gauge group as $\mathbf{Z}_N$, the authors studied the time evolution of complexity for Abelian pure gauge theories. They could define a universal gate set for the $U(1)$ gauge theories which enabled them to calculate the complexity growth explicitly. It would be interesting to use the same idea for WCFTs and, by discretizing the $U(1)_L \times SL(2, \mathbb{R})_R$ group, define some new gate sets. Then, by evaluating the eigenvalues of the Hamiltonian, one could directly study the rate of complexity growth in WCFTs and compare the results with usual CFTs, and obviously with the holographic results found here.
Another approach to defining complexity was introduced by Nielsen \cite{Nielsen:2005}. In this method, complexity is defined as the geodesic distance between a unitary operator $U$ and the identity operator, with respect to a metric on the space of unitary operators. It would be interesting to study the behavior of the complexity metric, or Nielsen geometry, in warped CFTs. Note that, as mentioned in \cite{Brown:2017jil, Brown:2016wib}, the complexity metric actually punishes directions that touch more qubits. So it would be interesting to see how the chirality and parity violation of the modes would affect the complexity metric.
There are also some new ideas for evaluating complexity by minimizing the Liouville action \cite{Caputa:2017yrh, Czech:2017ryf},
\begin{flalign}
S_L&=\frac{c}{24 \pi} \int dx \int_\epsilon^\infty dz \Big[ \underbrace{ (\partial_x \phi)^2+ (\partial_z \phi)^2}_{ \# \text{ of Isometries \includegraphics[width=4mm]{iso2} }} +\underbrace{\delta^{-2} e^{2\phi}}_{\# \text{ of Unitaries \includegraphics[width=4mm] {uni} } }\Big].
\end{flalign}
By minimizing this action, one can define complexity \cite{Caputa:2017yrh} on the field theory side. It is also possible to derive Einstein's equation in the bulk and to build a hyperbolic space which is the time slice of $\text{AdS}_3$ \cite{Czech:2017ryf}. Note that in the Liouville action the first two terms, the kinetic terms, are dual to the number of isometries (coarse-graining) \cite{Czech:2017ryf}, and the third term, the potential term, is dual to the number of unitaries (disentanglers) in the tensor network formalism of MERA.
It would be interesting to see if one can also derive the time slice of $\text{AdS}_3$ or warped $\text{AdS}_3$ space-times from warped CFTs by using the \textit{``chiral Liouville theory"} \cite{Compere:2013aya}, which is in the following form
\begin{flalign}
S_L&=\frac{c}{12 \pi} \int d^2 x \left(\partial_{+} \rho \partial_{-} \rho -\frac{\Lambda}{8} e^{2\rho} + h(\partial_{-} \rho)^2 + [ \partial_- h \partial_- \rho ] -\frac{6}{c} h \Delta \right ),
\end{flalign}
or one can also write it as
\begin{flalign}
S&=S_L^0 +\int dt^+ dt^- \left( \underbrace{\partial_+ \phi \partial_- \phi}_{\# \text{ of Isometries}} +\color{blue}\underbrace{h \partial_- \phi \partial_- \phi}_{\text{$\#$ of WCFTs new gate}}\color{black}-\underbrace {\frac{m^2}{4} e^{2\rho} \phi^2 }_{\# \text{ of Unitaries}} \right) .
\end{flalign}
The main difference between the two actions is the middle term written in blue, i.e., $h \partial_- \phi \partial_- \phi$. As noted in \cite{Compere:2013aya}, $h$, which is proportional to a right-moving current, has dimension $(1,0)$ and $(\partial_- \phi)^2$ has dimension $(0,2)$. So this term is a dimension-$(1,2)$ operator, and one can think of the chiral theory as a usual field theory deformed by this operator \cite{Compere:2013aya}. These operators are indeed very special, as they are related to the IR limits of the dipole-deformed gauge theories. It would be interesting to study the corresponding gates for these operators in the MERA picture, to study the effects of these operators on complexity and its rate of growth, and then to compare these results with the holographic ones.
In \cite{Freedman:2016zud, Bao:2016rbj}, a new picture of entanglement entropy based on the max-flow min-cut theorem, a set of Planck-thickness ``bit threads", has been proposed. In this picture, the entanglement entropy of a boundary region is related to the maximum number of threads that can emanate from its area. One might be able to use this picture here as well: by considering the dynamics of these threads, one could explain the holographic conjecture for complexity and then use these ideas to explain the complexity of chiral theories explicitly.
In \cite{Bhattacharya:2014vja}, the authors defined a quasi-local measure of quantum entanglement and, by considering infinitesimal variations of the region, the concept of entanglement density. Also, using the positivity of entanglement, which maps to the null energy condition in the dual gravity bulk, they derived a second law of thermodynamics for the extremal surfaces. One might think that the similar concepts of a quasi-local measure of quantum complexity and a notion of quantum complexity density could also be defined and, using the positivity of the complexity growth rate, similar maps to the bulk dual could be implemented, which might lead to more rigorous thermodynamics-like laws for complexity.
Another, more exotic idea is to study the relationship between complexity and the Schwinger effect. To do so, similar to the studies in \cite{Ghodrati:2015rta}, one could study both of their rates in different backgrounds, and also study the effect of external factors such as electric or magnetic fields, to find the relationship between these two quantities. Hypothetically, these studies could shed more light on the dynamics and structure of the bulk and on different aspects of $\text{ER}=\text{EPR}$.
\section*{Acknowledgement}
I would like to thank M. Alishahiha, S. Sadeghian, M. H. Vahidinia, A. Naseh, A. Faraji, K. Hajian and H. Soltanpanahi for useful discussions and M. Flory for reading the draft before submission and many useful comments. This work has been supported by National Elites Foundation (INEF) of Iran.
\section{Introduction}
In order to study the quantum field theories with momentum dissipation holographically, the holographic massive gravity theories (HMGs) have been exploited. Using these models one can study different field theory features such as DC resistivity, relaxation rates or the effect of dissipations or disorders on the confinement-deconfinement phase transitions in strongly correlated systems \cite{Adams:2014vza}.
There are different massive gravity models with multiple geometrical solutions and corresponding field theory duals. One of these theories is ``Topological Massive Gravity'' (TMG), which is the Einstein action plus a parity-breaking Chern-Simons term. Recently, in \cite{Detournay:2015ysa}, the Hawking-Page phase transitions between the $\text{AdS}_3$ and BTZ solutions, and between the warped $\text{AdS}_3$ and warped BTZ black hole solutions of TMG, were investigated, and the Gibbs free energies, local and global stability regions and phase diagrams were presented.
There is yet another rich theory, the parity-preserving Bergshoeff-Hohm-Townsend (BHT) or ``New Massive Gravity'' (NMG), which has many different solutions as well, in addition to the thermal warped $\text{AdS}_3$ and warped BTZ black hole. The aim of this paper is, similar to \cite{Detournay:2015ysa}, to study the Hawking-Page phase transitions between different solutions of NMG and thereby learn more about the properties of the dual CFTs. In particular, we study the phase transitions between thermal AdS and BTZ black holes, between warped AdS and warped BTZ black holes in two different ensembles, and between the Lifshitz black hole and the new hairy black hole and their corresponding vacua.
The other motivation is to extend the AdS/CFT duality to other, more general geometries. The most direct way to do so is to perturbatively deform the $\text{AdS}_3$ manifold to a warped $\text{AdS}_3$ metric \cite{Rooman:1998xf} \cite{Israel:2003cx} \cite{Detournay:2010rh} and then study the dual field theory. The initial works on this extension were done in \cite{Israel:2004vv}, which studied the magnetic deformation of $S^3$ and the electric/magnetic deformations of $\text{AdS}_3$ that still remain solutions of string theory vacua. Then, in \cite{Anninos:2008fx} \cite{Azeyanagi:2012zd} \cite{Song:2011sr}, the dual field theories were studied. In \cite{Song:2011sr} the dual of warped $\text{AdS}_3$ was suggested to be the IR limit of the non-locally deformed 2D D-brane gauge theory, i.e., the dipole CFT. Establishing this duality could lead to more information about the properties of these new field theories and also about some properties of the dual bulk geometries, for instance the nature of closed timelike curves (CTCs).
The Bergshoeff-Hohm-Townsend gravity has both warped AdS and warped BTZ black hole solutions. The deformed $\text{AdS}_3$ preserves an $SL(2,R) \times U(1)$ subgroup of the $SL(2,R) \times SL(2,R)$ isometries. The obtained spacetimes are called null, timelike or spacelike warped $\text{AdS}_3$ ($\text{WAdS}_3$), corresponding to the norm of the $U(1)$ Killing vector, where the timelike $\text{WAdS}_3$ is just the G\"{o}del spacetime \cite{Rooman:1998xf} \cite{Reboucas:1982hn}.
Other extensions of AdS/CFT include AdS/CMT (condensed matter), AdS/QCD, dS/CFT, flat space holography and Kerr/CFT. However, the dual CFTs of these theories are not completely known. The advantage of WCFTs is that they possess many properties of CFTs and can be derived from string theory and low-dimensional gravity theories; hence the known CFT techniques can be deployed to study them.
The specific properties of this new class of WCFTs were studied in \cite{Detournay:2012pc}, and their entanglement entropies were first studied holographically in \cite{Anninos:2013nja} and, in a more recent work, in \cite{Castro:2015csg} using the Rindler method of WCFT. To further study this WAdS/WCFT duality, one could study other properties such as the instabilities of the solutions and the Hawking-Page phase transitions \cite{Hawking:1982dh}. As the phase transitions from thermal AdS or WAdS to the BTZ or warped BTZ black hole are dual to confining/deconfining phase transitions in the dual field theory, these models could be used in QCD or condensed matter systems with dissipation.
The plan of this paper is as follows. First we review two methods of finding the conserved charges for any solution of NMG: the ADT formalism and the $SL(2,R)$ reduction method. Mainly we use the general formulas of the $SL(2,R)$ reduction method to calculate the conserved charges of solutions of NMG in different ensembles. Then, by finding the free energies, we discuss the phase transitions between the ``vacuum $\text{AdS}_3$" and ``BTZ black hole" solutions in section \ref{sec:num1}, together with their thermodynamics and local and global stability regions. In section \ref{sec:sec2} we calculate the free energies of the ``warped $\text{AdS}_3$ vacuum" and ``warped BTZ black hole" solutions. We calculate the free energy of $\text{WAdS}_3$ by three different methods, and in doing so we find a factor in the modular parameter which extends the result of \cite{Kraus:2005vz} for calculating the free energy of $\text{WAdS}_3$ solutions in NMG. Then we present the phase diagrams of these solutions. In section \ref{sec:hairy} we discuss the free energies and phase transitions of the Lifshitz and new hairy black hole solutions of NMG. We also discuss the inner horizon thermodynamics in section \ref{sec:inner}. In section \ref{entanglement}, we discuss the entanglement entropy of the vacuum solutions corresponding to the WCFT dual of WAdS in NMG, and we conclude with a discussion in section \ref{sec:disc}.
\section{Bergshoeff-Hohm-Townsend Theory }\label{sec:num1}
The Bergshoeff-Hohm-Townsend (BHT) or New Massive Gravity (NMG) is a higher-curvature extension of the Einstein-Hilbert action in three dimensions which is diffeomorphism and parity invariant. At the linearized level, it is equivalent to the unitary Pauli-Fierz action for a massive spin-2 field \cite{Bergshoeff:2009hq}.
The action of NMG is
\begin{gather}\label{eq:action}
S=\frac{1}{16 \pi G_N} \int d^3x \sqrt{-g} \Big[ R-2\Lambda+\frac{1}{m^2} \Big( R^{\mu\nu} R_{\mu\nu}-\frac{3}{8} R^2 \Big) \Big],
\end{gather}
where $m$ is the mass parameter, $\Lambda$ is a cosmological parameter and $G_N$ is the three-dimensional Newton constant. In the limit $m \to \infty$, the theory reduces to Einstein gravity, while in the limit $m \to 0$ it becomes a pure fourth-order gravity.
The equations of motion derived from the action are
\begin{gather}
R_{\mu \nu}-\frac{1}{2} R g_{\mu\nu} +\Lambda g_{\mu\nu}+\frac{1}{m^2}K_{\mu\nu}=0,
\end{gather}
with the explicit form of $K_{\mu\nu}$ as in \cite{Bergshoeff:2009hq},
\begin{gather}
K_{\mu\nu} = \nabla^{2}R_{\mu\nu}-\frac{1}{4}\left(
\nabla_{\mu}\nabla_{\nu}R+g_{\mu\nu}\nabla^{2}R\right)-4R^{\sigma}_{\mu}R_{\sigma\nu}
+\frac{9}{4}RR_{\mu\nu}+\frac{1}{2}g_{\mu\nu}\left
(3R^{\alpha\beta}R_{\alpha\beta}-\frac{13}{8}R^{2}\right).\nonumber\\
\end{gather}
The boundary terms of NMG which make the variational principle well-defined would be
\begin{gather}
S_{\text{Boundary}}=\frac{1}{16 \pi G} \int_\sigma d^3x \sqrt{-g} \left( f^{\mu \nu} (R_{\mu \nu}-\frac{1}{2} R g_{\mu \nu})-\frac{1}{4} m^2 (f_{\mu \nu} f^{\mu \nu}-f^2) \right),
\end{gather}
where $f_{\mu \nu}$, the rank two symmetric tensor is
\begin{gather}
f_{\mu \nu}=\frac{2}{m^2} (R_{\mu \nu}-\frac{1}{4} R g_{\mu \nu}).
\end{gather}
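As a quick illustration of this definition (a worked example we add here, not part of the original derivation): for vacuum $\text{AdS}_3$ of radius $\ell$, where $R_{\mu \nu}=-\frac{2}{\ell^2} g_{\mu \nu}$ and $R=-\frac{6}{\ell^2}$, the auxiliary tensor is simply proportional to the metric,
\begin{gather}
f_{\mu \nu}=\frac{2}{m^2}\left(-\frac{2}{\ell^2}+\frac{3}{2 \ell^2}\right) g_{\mu \nu}=-\frac{1}{m^2 \ell^2}\, g_{\mu \nu}.
\end{gather}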
This theory admits different solutions, such as vacuum $\text{AdS}_3$, warped $\text{AdS}_3$, the BTZ black hole, the asymptotically warped AdS black hole, Lifshitz, Schr\"{o}dinger and so on \cite{Bergshoeff:2009hq}, \cite{AyonBeato:2009nh}. We construct the phase diagrams between several of these solutions by comparing their \textit{on-shell} free energies.
By constructing the \textit{off-shell} free energies, one could even find all the states connecting any two solutions and thereby obtain a picture of the continuous evolution of the phase transitions, similar to \cite{Zhang:2015wna}, which studied the continuous phase transition between the BTZ black hole with $M \ge 0$ and the thermal AdS soliton with $M=-1$ in New Massive Gravity.
In the next section we review how one can calculate the conserved charges, and we provide several general formulas for the solutions of NMG which can be used to find the on-shell Gibbs free energies. Then in section \ref{sec:AdS} we study the vacuum $\text{AdS}_3$ and BTZ solutions of NMG, their free energies and phase diagrams. In section \ref{sec:sec2} we discuss the warped solutions, and in section \ref{sec:hairy} we study the new hairy black hole solution of this theory.
\section{Review of calculating conserved charges in BHT}
\label{sec:charges}
In three dimensions, the conserved charges associated to a Killing vector $\xi$ would be
\begin{gather}
\delta Q_\xi [\delta g, g]=\frac{1}{16 \pi G} \int_0^{2\pi} \sqrt{-g} \epsilon_{\mu \nu \varphi} k_\xi^{\mu \nu} [\delta g,g] d\varphi.
\end{gather}
As calculated in \cite{Nam:2010ub} for BHT, the ADT formalism would result in
\begin{gather}
k_\xi ^{\mu \nu} =Q_R^{\mu \nu} +\frac{1}{2m^2} Q_K^{\mu \nu},
\end{gather}
where
\begin{gather}
Q_K^{\mu \nu} =Q_{R_2} ^{\mu \nu}-\frac{3}{8} Q_{R^2}^{\mu \nu},
\end{gather}
and the term for each charge is
\begin{equation} \label{eq:x}
Q_R^{\mu \nu} \equiv \xi _\alpha \nabla^ {[ \mu} h^{ \nu ] \alpha}-\xi^{[\mu }\nabla_\alpha h^{\nu ] \alpha}-h^{\alpha [ \mu } \nabla _\alpha \xi^{\nu ]}+\xi ^{[\mu} \nabla^{\nu ]} h+\frac{1}{2} h \nabla^{[\mu} \xi ^{\nu ]},
\end{equation}
\begin{equation}
Q_{R^2} ^{\mu \nu}=2 R Q_R^{\mu \nu}+4\xi ^{[\mu} \nabla^{\nu ]} \delta R+2 \delta R \nabla ^{[\mu} \xi^{\nu]}-2\xi^{[ \mu} h^{\nu ] \alpha} \nabla_\alpha R,
\end{equation}
where
\begin{gather}
\delta R \equiv -R^{\alpha \beta} h_{\alpha \beta}+ \nabla^\alpha \nabla^\beta h_{\alpha \beta} -\nabla^2 h,
\end{gather}
and
\begin{align}
Q_{R_2}^{\mu \nu} & = \nabla^2 Q_R^{\mu \nu}+\frac{1}{2} Q_{R^2}^{\mu \nu} -2 Q_R^{\alpha [\mu} R_\alpha^{\nu ]}-2 \nabla^\alpha \xi^\beta \nabla_\alpha \nabla ^{[\mu} h_\beta ^{\nu ]}-4 \xi ^\alpha R_{\alpha \beta} \nabla^{[ \mu} h^{\nu ] \beta}-R h_\alpha^{[ \mu} \nabla^{\nu ]} \xi ^\alpha \nonumber\\ &
+2 \xi^{[ \mu} R_\alpha^{\nu ]} \nabla_\beta h^{\alpha \beta}+2 \xi_\alpha R^{\alpha [ \mu} \nabla_\beta h^{\nu ] \beta}+2 \xi^\alpha h^{\beta [ \mu} \nabla_\beta R_\alpha^{\nu ]}+2 h^{\alpha \beta} \xi^{[ \mu} \nabla_\alpha R_\beta^{\nu ]} \nonumber\\ &
-(\delta R+2 R^{\alpha \beta} h_{ \alpha \beta} ) \nabla^{[\mu} \xi^{\nu ]}-3 \xi^\alpha R_\alpha^{[ \mu} \nabla^{\nu ]}h-\xi^{[ \mu} R^{\nu ] \alpha} \nabla_\alpha h. &
\end{align}
For the three-dimensional coordinates $ (t, r, \phi)$, the mass and angular momentum would be \cite{Nam:2010ub}
\begin{gather}
M=\frac{1}{4G} \sqrt{- \text{det} \ g} Q^{r t} (\xi_T) \Big |_{r\to \infty}, \ \ \ \ \ \ \ J=\frac{1}{4G} \sqrt{- \text{det} \ g} Q^{r t} (\xi_R) \Big |_{r\to \infty},
\end{gather}
where
\begin{gather}
\xi_T=\frac{1}{L} \frac{\partial}{ \partial t}, \ \ \ \ \ \ \ \ \ \ \ \xi_R= \frac{\partial}{ \partial \phi}.
\end{gather}
\subsection{The $SL(2,R)$ reduction method}
One can also derive the charges by the $SL(2,R)$ reduction method, which recasts the metric in an $SO(1,2)$ form. To do so, one should write the metric in the form \cite{Nam:2010ub}
\begin{gather}
ds^2=\lambda_{ab} (\rho) dx^a dx^b+\frac{d\rho^2}{\zeta^2 U^2(\rho) }, \ \ \ \ \ x^a=(t,\phi).
\end{gather}
Since there is a reparametrization invariance with respect to the radial coordinate, one can write the function $U$ such that $\det \lambda=-U^2$ and this would give $\sqrt{-g}=1/\zeta$.
One then applies the equations of motion (EOMs) and the Hamiltonian constraint; by integrating the EOMs one can derive the \textit{"super angular momentum"} vector.
So first one parameterizes the matrix $\lambda$ as
\begin{gather}
\lambda_{ab}= \left(
\begin{array}{cc}
X^0+X^1 & X^2\\
X^2 & X^0-X^1
\end{array} \right ),
\end{gather}
where $\mathbf{X}=(X^0,X^1,X^2) $ is the $SO(1,2)$ vector.
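In this parameterization the determinant of $\lambda$ is exactly minus the $SO(1,2)$ norm (a one-line \texttt{sympy} check, added for illustration; we assume the signature $(-,+,+)$ for $\eta_{ij}$, so that $\det \lambda=-\mathbf{X}\cdot\mathbf{X}$, consistent with $\det \lambda=-U^2$ for $\mathbf{X}\cdot\mathbf{X}=U^2$):
\begin{verbatim}
import sympy as sp

X0, X1, X2 = sp.symbols('X0 X1 X2')
lam = sp.Matrix([[X0 + X1, X2], [X2, X0 - X1]])
print(sp.expand(lam.det()))   # X0**2 - X1**2 - X2**2
\end{verbatim}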
Then one applies the reduced equation of motion and the Hamiltonian constraint as \cite{Clement:2009gq}
\begin{align}\label{eq:EOM}
&\mathbf{X} \wedge ( \mathbf{X} \wedge \mathbf{X}'''')+\frac{5}{2} \mathbf{X} \wedge (\mathbf{X}' \wedge \mathbf{X}''')+\frac{3}{2} \mathbf{X}' \wedge (\mathbf{X} \wedge \mathbf{X}'''')+\frac{9}{4} \mathbf{X}' \wedge (\mathbf{X}' \wedge \mathbf{X}'') \nonumber\\ &
-\frac{1}{2} \mathbf{X}'' \wedge ( \mathbf{X} \wedge \mathbf{X}'')- \left[\frac{1}{8} (\mathbf{X}'^2)+\frac{m^2}{\zeta^2} \right] \mathbf{X}'' = 0,&
\end{align}
\begin{align}\label{eq:hamiltonian}
H \ \ \ & \equiv \ \ \ (\mathbf{X} \wedge \mathbf{X'} )\ . \ ( \mathbf{X} \wedge \mathbf{X}'''')-\frac{1}{2} (\mathbf{X} \wedge \mathbf{X}'')^2 +\frac{3}{2} (\mathbf{X} \wedge \mathbf{X}') \ . \ (\mathbf{X}' \wedge \mathbf{X}'') \nonumber\\ & +\frac{1}{32} (\mathbf{X}'^2 )^2 +\frac{m^2}{2\zeta^2} (\mathbf{X'}^2)+\frac{2m^2 \Lambda} {\zeta^4} =0.
\end{align}
From these two equations one can find $\zeta^2$ and $\Lambda$. Then one can define the vector
\begin{gather}
\mathbf{L} \equiv \mathbf{X} \wedge \mathbf{X}',
\end{gather}
where $ ' \equiv \frac{d}{d\rho}$.
Finally the \textit{super angular momentum} of NMG $\mathbf{J}=(J^0, J^1, J^2)$ would be
\begin{gather}
\mathbf{J}= \mathbf{L}+\frac{\zeta^2}{m^2} \left[ 2\mathbf{L} \wedge \mathbf{L'}+\mathbf{X}^2 \mathbf{L''}+\frac{1}{8} \big( \mathbf{X'}^2-4 \mathbf{X} \ .\ \mathbf{X''} \big) \mathbf{L} \right],
\end{gather}
where the products are defined as
\begin{gather}
\mathbf{A} . \mathbf{B}= \eta_{ij} A^i B^j, \ \ \ \ \ \ \ \ \ \ (\mathbf{A} \wedge {B})^i=\eta^{im} \epsilon_{mjk} A^j B^k, \ \ \ \ \ \ (\epsilon_{012}=1).
\end{gather}
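These products are straightforward to implement; a minimal \texttt{sympy} sketch (again assuming $\eta=\mathrm{diag}(-1,1,1)$; the helper names are ours), which can be used to build $\mathbf{L}=\mathbf{X}\wedge\mathbf{X}'$ and then $\mathbf{J}$:
\begin{verbatim}
import sympy as sp

eta = sp.diag(-1, 1, 1)   # assumed SO(1,2) metric eta_{ij}

def so12_dot(A, B):
    return sum(eta[i, i]*A[i]*B[i] for i in range(3))

def so12_wedge(A, B):
    # (A ^ B)^i = eta^{im} eps_{mjk} A^j B^k, with eps_{012} = 1
    return [sum(eta[i, i]*sp.LeviCivita(i, j, k)*A[j]*B[k]
                for j in range(3) for k in range(3)) for i in range(3)]

X  = sp.Matrix(sp.symbols('X0 X1 X2'))
Xp = sp.Matrix(sp.symbols('Xp0 Xp1 Xp2'))
L  = so12_wedge(X, Xp)    # building block of the super angular momentum
\end{verbatim}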
That being so, for the case of NMG one would have \cite{Nam:2010ub}
\begin{align}\label{charge1}
\eta \left[\sigma Q_R^{\rho t}+\frac{1}{m^2} Q_K^{\rho t} \right]_{\zeta_T} \ &= \ \frac{1}{L} \left[ -\frac{\zeta^2}{2} \delta J^2 + \Delta_{\textit{Cor}} \right], \nonumber\\
\eta \left[ \sigma Q_R^{\rho t}+\frac{1}{m^2} Q_K^{\rho t}\right]_{\zeta_R} \ &= \ \frac{\zeta^2}{2} \delta (J^0-J^1),
\end{align}
where $\eta$ and $\sigma$ are $\pm 1$, depending on the signs in the action. Based on eq. \ref{eq:action}, both $\eta$ and $\sigma$ are positive in our case.
Also $\Delta _{\textit{Cor}}$, the correction term to the mass, for the NMG would be
\begin{gather}
\Delta _{\textit{Cor}}=\Delta_R+\Delta_K,
\end{gather}
where
\begin{align}
&\Delta_R = \frac{\zeta^2}{2} \left[ -(\mathbf{X} \ .\ \delta \mathbf{X'}) \right], \nonumber\\ &
\Delta_K = \frac{\zeta^4}{m^2}\bigg [ -U^2 ( \mathbf{X''} \ . \ \delta \mathbf{X'} )+\frac{U^2}{2} \Big [ (U\delta U)'''-(\mathbf{X} \ . \ \delta \mathbf{X'})''-\frac{1}{2} (\mathbf{X'}\ . \ \delta \mathbf{X'} )' \Big]\nonumber\\ & -\frac{ U U'}{4}\Big [ (U \delta U)''-\frac{5}{2} (\mathbf{X}' \ . \ \delta \mathbf{X'} ) \Big]+ \Big[\mathbf{X'}^2-(U U')' \Big] ( U \delta U)'+ U U'(\mathbf{X''} \ . \ \delta \mathbf{X} )\nonumber\\ &
+ \Big[\frac{5}{4}(UU')'-\frac{21}{16} \mathbf{X'}^2 \Big] (\mathbf{X} \ . \ \delta \mathbf{X'} )+ \Big[-\frac{1}{2} (U U')''+\frac{9}{4} (\mathbf{X'} \ . \ \mathbf{X''} ) \Big ] U \delta U \bigg ].
\end{align}
Then the mass and angular momentum in NMG would be
\begin{gather}
\mathbf{M} \ = \ \frac{1}{4G} \sqrt{ - \text{det} \ g} \ \left [ Q_R^{rt} +\frac{1}{m^2} Q_K^{rt} \right]_{\zeta_T, r\to \infty}, \nonumber\\
\mathbf{J} \ = \ \frac{1}{4G} \sqrt{ -\text{det} \ g} \ \left[ Q_R^{rt} +\frac{1}{m^2} Q_K^{rt} \right] _{\zeta_R , r\to \infty}.\label{charge2}
\end{gather}
Also, for calculating the entropy of any solution of NMG, we can use the following relation from \cite{Clement:2009gq}:
\begin{gather}\label{entropy}
\mathbf{S}=\frac{A_h}{4 G} \left( 1+\frac{\zeta^2}{2m^2} \left[ (\mathbf{X\ .\ X''})-\frac{1}{4} (\mathbf{X'}^2) \right] \right).
\end{gather}
Now using these relations one can derive the charges, Gibbs free energies and the phase diagrams of several solutions of NMG.
\subsection{ Examples of conserved charges of BHT solutions}
First for the warped AdS black hole in the \textit{"grand canonical ensemble"} \cite{Detournay:2015ysa},
\begin{gather}\label{metric}
g_{\mu \nu}=\left(
\begin{array}{ccc}
-\frac{r^2}{l^2}-\frac{H^2 (-r^2-4 l J+8 l^2 M)^2}{4l^3(l M-J)}+8M & 0 & 4J-\frac{H^2(4 l J-r^2)(-r^2-4 l J+8 l^2 M) }{4l^2(l M-J)} \\
0 & \frac{1}{\frac{16 J^2}{r^2}+\frac{r^2}{l^2}-8M } & 0 \\
4J-\frac{H^2(4 l J-r^2)(-r^2-4 l J+8 l^2 M) }{4l^2(l M-J)} & 0 & r^2-\frac{H^2 (4 J l -r^2)^2 }{4 l (l M-J)}
\end{array} \right ),
\end{gather}
by reparametrizing the radial coordinate as $r^2 \to \rho$ and then applying the equation of motion and Hamiltonian constraints \ref{eq:EOM} and \ref{eq:hamiltonian}, one can find
\begin{gather}
\zeta^2= \frac{8 l^2 m^2}{(1-2H^2)(17-42H^2)} , \ \ \ \ \ \ \ \ \ \ \Lambda=\frac{ m^2 (84H^4+60H^2-35) }{(17-42H^2)^2}.
\end{gather}
From this one can see that the acceptable region for $\Lambda$ is
\begin{gather}
\frac{-35m^2}{289} < \Lambda < \frac{m^2}{21},
\end{gather}
and the special case of $\Lambda=\frac{m^2}{21}$ corresponds to the case of $\text{AdS}_2 \times S^1$.
Now for the metric \ref{metric}, the components of the super angular momentum would be
\begin{gather}
J^0= -\frac{H^2 \left(1+l^2\right)}{4 l^3 (-J+l M)},\ \ \ \ \
J^1= \frac{H^2 \left(-1+l^2\right)}{4 l^3 (-J+l M)}, \ \ \ \
J^2= \frac{H^2}{2 l^2 (J-l M)}.
\end{gather}
Then using \ref{charge1} and \ref{charge2} one can find the charges as
\begin{gather}
\mathbf{M}=\frac{16 \left(1-2 H^2\right)^{3/2} M}{G L\left(17-42 H^2\right) }, \ \ \ \ \ \ \ \ \ \ \ \mathbf{J}=\frac{16 \left(1-2 H^2\right)^{3/2} J}{G \left(17-42 H^2\right)}.
\end{gather}
One should note that $\Delta_{Cor}$ vanishes here.
For the above metric using \ref{entropy}, the entropy would be
\begin{gather}\label{entropy1}
\mathbf{S}=\frac{16 \pi \left(1-2 H^2\right)^{3/2} \text{ }}{G \left(17-42 H^2\right)} \sqrt{l^2 M+\sqrt{l^4 M^2-J^2 l^2}}.
\end{gather}
\\
We can then study this black hole solution in another ensemble. The asymptotically warped $\mathrm{AdS}_3$ black hole in NMG, written in the ADM form and therefore in the \textit{"quadratic/non-local ensemble"}, takes the following form
\begin{equation}
\begin{split} \label{eq:WBTZq}
\frac{ds^2}{l^2}= dt^2+\frac{dr^2 }{ (\nu^2+3)(r-r_+)(r-r_-) }+(2\nu r-\sqrt {r_+ r_- (\nu^2+3)} ) dt d\varphi \\
+\frac{r}{4} \Big[ 3(\nu^2-1)r +(\nu^2+3)(r_+ +r_-)-4\nu \sqrt {r_+ r_- (\nu^2+3)} \Big] d\varphi^2.
\end{split}
\end{equation}
So using \ref{eq:EOM} and \ref{eq:hamiltonian}, one would have
\begin{gather}
\zeta ^2=\frac{8 m^2}{l^4 \left(20 \nu ^2-3\right)}, \ \ \ \ \ \ \ \ \ \Lambda =\frac{m^2 \left(9-48 \nu ^2+4 \nu ^4\right)}{\left(3-20 \nu ^2\right)^2}.
\end{gather}
The components of the super angular momentum would be
\begin{equation}
\begin{split}
\mathnormal{J^0}&= -\frac{l^4 \nu (\nu ^2+3)\left (4-2 r_- \nu \sqrt{r_+ r_- (\nu ^2+3)}-2 r_+\nu \sqrt{r_+ r_- (\nu ^2+3)}+r_+ r_-\text{ }(5 \nu ^2+3)\right)}{2(20 \nu ^2-3)} , \nonumber\\ \mathnormal{J^1}&= \frac{l^4 \nu (\nu ^2+3) \left (-4-2 {r_-} \nu \sqrt{r_+ r_- (\nu ^2+3)}-2 r_+ \nu \sqrt{r_+ r_- (\nu ^2+3)}+r_+ r_-\text{ }(5 \nu ^2+3)\right)}{2(20 \nu ^2-3)}, \nonumber\\ \mathnormal{J^2}&= -\frac{2 l^4 \nu (\nu ^2+3) \left(({r_+}+{r_-}) \nu -\sqrt{r_+ r_- (\nu ^2+3)}\right)}{20 \nu ^2-3}.
\end{split}
\end{equation}
Then by using \ref{charge1} and \ref{charge2} one could find the conserved charges \cite{Clement:2009gq} \cite{Donnay:2015iia}
\begin{align}
&\mathbf{M}=\frac{\nu \left(\nu ^2+3\right) }{2 G \left(20 \nu ^2-3\right)}\left(\left(r_++r_-\right) \nu -\sqrt{r_+ r_- \left(\nu ^2+3\right)}\right), \nonumber\\
&\mathbf{J}= \frac{ \nu \left(\nu ^2+3\right) }{4 G l\left(20 \nu ^2-3\right)}\left( \left(5 \nu ^2+3\right)r_+ r_- -2 \nu \sqrt{ r_+ r_-\left(3+\nu ^2\right)}\left(r_++r_-\right)\right), \nonumber\\
& \mathbf{S}=\frac{4\pi l \nu ^2 }{G \left(20 \nu ^2-3\right)}\sqrt{r_+ r_- \left(\nu ^2+3\right)+4 r_+ \nu \left(r_+ \nu -\sqrt{r_+ r_- \left(\nu ^2+3\right)}\right)}.
\end{align}
As another example of a solution of NMG with practical applications in condensed matter, one can also study the conserved charges of the Lifshitz geometry
\begin{gather}
ds^2=-\frac{r^{2z}}{l^{2z}} dt^2+\frac{l^2}{r^2} dr^2+\frac{r^2}{l^2} d {\vec{x}}^2.
\end{gather}
Here $\zeta ^2=-\frac{2 m^2l^{2+2 z} }{1+z(z-3)}$ and the vector of super angular momentum vanishes. The cases of $z=3$ and $z=\frac{1}{3}$ are solutions of the pure NMG with no matter content. For $z=3$, one has $\zeta^2 = -2 l^8 m^2$.
Now considering the Lifshitz black hole solutions \cite{Correa:2014ika}
\begin{gather}
ds^2=-\frac{r^{2z}}{l^{2z}} \left[ 1-M \left(\frac{l}{r}\right)^{\frac{z+1}{2}} \right] dt^2+\frac{l^2}{r^2} \left[1-M \left( \frac{l}{r}\right)^{\frac{z+1}{2}} \right]^{-1} dr^2 +\frac{r^2}{l^2} d\varphi^2,
\end{gather}
by taking $r \to \Big( \rho (z+1) \Big)^{\frac{1}{1+z}}$, one would have $\sqrt{-g}=1/\zeta=l^{-z}$ which would result in
\begin{gather}
\mathbf{M}=-\frac{\pi M^2 (z+1)^2 (3z-5)}{16 \kappa (z-1) (z^2-3z+1)}, \ \ \ \ \ \ \ \ \ \ \ \ \ \mathbf{J}=0,
\end{gather}
in accordance with \cite{Correa:2014ika}. This would lead us to the following Gibbs free energy
\begin{gather}
G_{\text{Lifshitz BH}}=\frac{M^2 \pi z (z+1)^2 (3 z-5)}{16 \kappa (z-1) (z(z-3)+1)}.
\end{gather}
Comparing this result with the free energy of the Lifshitz metric, one can see that in NMG the Lifshitz black hole is always the dominant phase.
\section{Phase transitions of $\text{AdS}_3$ solution} \label{sec:AdS}
The vacuum $\text{AdS}_3$ solution is
\begin{gather}
ds_{\text{AdS}_3}^2=l^2 (d\rho^2-\cosh^2 \rho \ dt^2+\sinh^2 \rho \ d\phi^2 ),
\end{gather}
where \cite{Grumiller:2009sn}
\begin{gather}
1/l^2=2m^2(1 \pm \sqrt{1+\frac{\Lambda}{m^2}}),
\end{gather}
and the boundary where the CFT dual can be defined is located at $\rho \to \infty$.
For this case, we use the relation $G(T,\Omega) =TS[g_c]$ to find the Gibbs free energy, where $g_c$ is the Euclidean saddle and $\tau=\frac{1}{2\pi} (-\beta \Omega_E+i \frac{\beta}{l})$ is the modular parameter. We work in regimes where the saddle-point approximation can be used.
First we need to find the free energy of the vacuum solution. In \cite{Kraus:2005vz,Kraus:2006wn}, the authors derived a general expression for the action of thermal $\mathrm{AdS}_3$ in any theory,
\begin{gather}\label{Kraus}
S_E \big(AdS(\tau ,\tilde{\tau} )\big)=\frac{i \pi}{12 l} (c \tau-\tilde{c} \tilde{\tau}).
\end{gather}
Also the modular transformed version of this equation would give the thermal action of the BTZ black hole. By changing the boundary torus as $\tau \to -\frac{1}{\tau}$, and then by using the modular invariance, one would have
\begin{gather}
ds^2_{\text{BTZ}}\left[-\frac{1}{\tau}\right]=ds^2_{\text{AdS}} [\tau],
\end{gather}
so
\begin{gather}\label{KrausBH}
S_E \big(BTZ(\tau ,\tilde{\tau} )\big)=\frac{i \pi}{12 l} (\frac{c}{ \tau}- \frac{\tilde{c}}{ \tilde{\tau}}).
\end{gather}
In this equation the contributions of the quantum fluctuations of the massless fields are neglected, as they are suppressed for large $\beta$.
One should notice that this equation and its modular transformed version hold only for $\text{AdS}_3$, and not for the "warped $\text{AdS}_3$" or "asymptotically warped AdS black holes". The equation holds because, in Lorentzian signature, thermal $\text{AdS}_3$ has the same form as global $\text{AdS}_3$, and global $\text{AdS}_3$ corresponds to the NS-NS vacuum with vanishing Virasoro modes \cite{Kraus:2005vz}. These statements do not hold for geometries with asymptotics other than AdS, specifically geometries such as warped $\text{AdS}_3$.
In the next section, by redefining the modular parameter $\tau$ and deriving the free energy by three different methods, we find a new equation for the thermal action of the warped $\text{AdS}_3$ in the NMG case as well.
So for now, inserting the central charges of the NMG \cite{Bergshoeff:2009aq}\cite{Liu:2009bk},
\begin{gather}
c_L=c_R=\frac{3l}{2G_N} \left( 1-\frac{1}{2m^2 l^2 }\right),
\end{gather}
and the modular parameter $\tau=\frac{1}{2\pi} (-\beta \Omega_E+i \frac{\beta}{l})$ into Eq.~\ref{Kraus} results in
\begin{gather}
S_E=-\frac{1}{8 l T G_N} \Big( 1-\frac{1}{2m^2 l^2} \Big).
\end{gather}
This relation, unlike the corresponding equation in the TMG case, does not depend on the angular velocity $\Omega_E$. This is because NMG is parity preserving, so the central charges are equal, which causes the terms containing $\Omega$ to cancel out.
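Explicitly, since for real $\Omega_E$ the parameter $\tilde{\tau}$ is the complex conjugate of $\tau$, one has the two-line check
\begin{gather}
c\,\tau-\tilde{c}\,\tilde{\tau}=c\,(\tau-\tilde{\tau})=\frac{i c \beta}{\pi l}, \ \ \ \ \ \ \ \ S_E=\frac{i \pi}{12 l}\cdot \frac{i c \beta}{\pi l}=-\frac{c}{12 l^2 T},
\end{gather}
so the real parts $-\frac{\beta \Omega_E}{2\pi}$, which carry the angular velocity, drop out of the combination $\tau-\tilde{\tau}$.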
So the Gibbs free energy would be
\begin{gather}\label{eq:GADS}
G_{AdS}(T, \Omega)=-\frac{1}{8 l G_N} \Big( 1-\frac{1}{2m^2 l^2} \Big).
\end{gather}
Just by considering this equation, one can see that the stability condition of the vacuum $\text{AdS}_3$ in NMG would be $m^2 l^2 >\frac{1}{2}$ which is different from the Einstein theory. \\
The NMG theory also admits a general BTZ solution. The rotating BTZ black hole metric in this theory is of the following form
\begin{gather}
ds^2=(-\frac{2 \rho}{\tilde{l} ^2} +\frac{M }{2}) dt^2- j dt d\phi+(2\rho +\frac{M \tilde{l}^2}{2}) d\phi^2+\frac{ d\rho^2} {\frac{4\rho^2}{\tilde{l}^{2}}- \frac{M ^2 \tilde{l}^2-j^2}{4} } ,
\end{gather}
where the AdS curvature is again \cite{Clement:2009gq}, $l^{-2} =2m^2 \big[ 1 \pm \sqrt{1+\frac{\Lambda}{m^2} } \big]$,
and $M$ is the ADM mass of the black hole. If we aim to write the metric in the ADM form,
\begin{gather}
ds^2=-N(r)^2 dt^2+\frac{dr^2}{f(r)^2}+R(r)^2(N^\phi (r)dt+d\phi)^2,
\end{gather}
we need to go from the coordinate system $(t, \rho, \phi)$ to $(t, r, \phi)$, so we should change the radial coordinate as $ \rho=r^2/2 - M \tilde{l}^2 /4$, and then re-parametrize the three coordinates as $r\to\tilde{l} r$, $t\to -l t$ and $\phi \to L \phi / \tilde{l}$.
Then the metric becomes \cite{Nam:2010dd}
\begin{gather} \label{eq:BTZ}
ds^2=l^2 \Big[ -\frac{(r^2-r_+^2)(r^2-r_-^2)}{r^2}dt^2+\frac{r^2}{(r^2-r_+^2)(r^2-r_-^2)} dr^2+r^2(d\phi+\frac{r_+ r_-}{r^2} dt)^2 \Big].
\end{gather}
The Hawking temperature of this black hole would be \cite{Nam:2010dd}
\begin{gather}
T_H=\frac{\kappa}{2\pi} =\frac{1}{2\pi l} \frac{\partial_r N}{\sqrt{g_{rr}} } \Big |_{r=r_+}=\frac{r_+}{2\pi l}\Big(1-\frac{r_-^2}{r_+^2} \Big),
\end{gather}
the entropy is
\begin{gather}
S_{BH}=\frac{\pi^2 l}{3} c( T_L+T_R),
\end{gather}
and the angular velocity at the horizon is defined as \cite{Nam:2010dd}
\begin{gather}
\Omega_H=\frac{1}{l} N^\phi (r_+)=\frac{1}{l} \frac{r_-}{r_+} .
\end{gather}
Also the left and right temperatures are given by \cite{Maldacena:1998bw}
\begin{gather}
T_L=\frac{r_+ +r_-}{2\pi l}=\frac{T}{1- l \Omega}, \ \ \ \ \ \ \ \ \ \ \ \ \ T_R=\frac{r_+-r_-}{2\pi l}=\frac{T}{1+l \Omega},
\end{gather}
and the left and right energies can be defined as follows
\begin{gather}
E_L \equiv \frac{\pi^2 l}{6} c_L T_L^2, \ \ \ \ \ \ \ \ \ \ \ \ \ \ E_R\equiv \frac{\pi^2 l}{6} c_R T_R^2.
\end{gather}
These parameters are related to the mass and angular momentum as \cite{Nam:2010dd}
\begin{gather}
M=E_L+E_R, \ \ \ \ \ \ \ \ \ J=l(E_L-E_R).
\end{gather}
The horizons of the BTZ black hole are located at
\begin{gather}
r_+=\sqrt{2\Big( \frac{M \tilde{l}^2 }{4} +\frac{\tilde{l}}{4} \sqrt{M^2 \tilde{l}^2 -j^2} \Big) }=\frac{2\pi l T}{1-\Omega^2 l^2}, \ \ \ r_-=\sqrt{2 \Big( \frac{M \tilde{l}^2 }{4} -\frac{\tilde{l}}{4} \sqrt{M^2 \tilde{l}^2 -j^2} \Big ) }=\frac{2\pi \Omega l^2 T}{1-\Omega^2 l^2}.
\end{gather}
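These expressions follow directly by inverting the relations above: $\Omega_H$ gives $r_-=\Omega l\, r_+$, and substituting this into $T_H$ gives
\begin{gather}
T=\frac{r_+}{2\pi l}\left(1-\Omega^2 l^2\right) \ \ \ \Longrightarrow \ \ \ r_+=\frac{2\pi l T}{1-\Omega^2 l^2}.
\end{gather}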
For the BTZ black hole in NMG, which has an asymptotically AdS geometry, the central charges are again
\begin{gather}
c_L=c_R=\frac{3 l}{2G_N} \Big( 1-\frac{1}{ 2 m^2 l^2 } \Big).
\end{gather}
For having a physical theory, the central charge and the mass of the BTZ black hole should be positive which again sets the condition of $m^2 l^2 >\frac{1}{2}$.
These parameters would satisfy the first law of thermodynamics,
\begin{gather}
dM=T_H dS_{BH}+\Omega_H dJ,
\end{gather}
and integrating it yields the Smarr relation \cite{Nam:2010dd},
\begin{gather}
M=\frac{1}{2}T_H S_{BH}+\Omega_H J.
\end{gather}
Now one can read the Gibbs free energy from the following relation,
\begin{gather}
G=M-T_H S_{BH} -\Omega_H J.
\end{gather}
So using all the above equations, the Gibbs free energy of the BTZ in NMG would be
\begin{gather}\label{eq:GBTZ}
G_{BTZ} (T,\Omega)=-\frac{\pi ^2 T ^2\left(2 m^2 l^2-1 \right)}{4 G_N m^2l\left( 1 - l^2 \Omega ^2\right)}.
\end{gather}
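A compact way to organize this computation (using $T=\frac{2 T_L T_R}{T_L+T_R}$, which follows from the expressions for $T_L$ and $T_R$ above) is
\begin{gather}
T_H S_{BH}=\frac{2\pi^2 l c}{3}\, T_L T_R, \ \ \ \ \ \Omega_H J=\frac{\pi^2 l c}{6}\,(T_L-T_R)^2, \ \ \ \ \ G=-\frac{\pi^2 l c}{3}\, T_L T_R=-\frac{\pi^2 l c}{3}\,\frac{T^2}{1-l^2 \Omega^2},
\end{gather}
which makes the $T^2$ and $(1-l^2\Omega^2)^{-1}$ structure of \ref{eq:GBTZ} manifest once the central charge is inserted.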
This result can also be rederived by considering modular invariance. Using the relation \ref{KrausBH} together with $G(T, \Omega)=T S[g_c]$ again demonstrates the applicability of \ref{Kraus} to the $\text{AdS}_3$ case in NMG.
From these relations one can see that for small rotations $\Omega$, as also explained in \cite{Myung:2015pua}, the thermal stability condition for the BTZ black hole in NMG is $m^2 l^2 > \frac{1}{2}$, regardless of the size of the event horizon. In this case, the Hawking-Page phase transition can occur between the BTZ black hole and the thermal solution, while for $m^2 l^2 < \frac{1}{2}$ the fourth-order curvature terms are dominant and an inverse Hawking-Page phase transition between the BTZ black hole and the ground state massless BTZ black hole can occur \cite{Myung:2015pua}.
One can also discuss the interpolations and the continuous phase transitions between these phases such as the scenario in \cite{Zhang:2015wna}.
We now extend these results to higher angular momenta.
\subsection{The stability conditions}
For checking local stability, we find the Hessian, $H$, of the free energy $G(T,\Omega)$ of the BTZ metric as
\begin{gather}
H=\left(
\begin{array}{ll}
\frac{\partial ^2G}{\partial T^2} & \frac{\partial ^2G}{\partial T \partial \Omega } \\ \\
\frac{\partial ^2G}{\partial \Omega \partial T} & \frac{\partial ^2G}{\partial \Omega ^2} \\
\end{array}
\right)= \left(
\begin{array}{ll}
\frac{ \pi ^2 \left(2 m^2 l^2-1\right)}{2 G_N m^2 \left(\Omega ^2 l^2-1\right)} & \frac{\pi ^2 l^2 \left(1-2 l^2 m^2\right) T \Omega }{G_N m^2 \left(\Omega ^2 l^2-1\right)^2} \\ \\
\frac{\pi ^2 l^2 \left(1-2 m^2 l^2 \right) T \Omega }{G_N m^2 \left(\Omega ^2 l^2-1\right)^2} & \frac{\pi ^2 T^2 \left(2 m^2 l^2-1 \right) \left(l^2 +3 l^4 \Omega ^2\right)}{2 G_N m^2 \left(\Omega^2 l^2-1\right)^3} \\
\end{array}
\right).
\end{gather}
In the region where both of its eigenvalues are negative, the system is stable. Finding the eigenvalues of this matrix and assuming $G_N=l=1$, the stable region is found to be $m^2 >1$ and $\Omega^2<1$ for any $T$, similar to the stability region of TMG in \cite{Detournay:2015ysa}.
To study global stability, we compute the difference of the free energies of the AdS and BTZ solutions as
\begin{gather}
\Delta G=G_{AdS}-G_{BTZ}=\frac{2m^2 l^2-1}{4 G_N m^2 l} \left( \frac{\pi^2 T^2}{1- l^2 \Omega^2} -\frac{1}{4l^2} \right).
\end{gather}
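In particular, the coexistence line $\Delta G=0$ sits at
\begin{gather}
T_{HP}=\frac{\sqrt{1-l^2 \Omega^2}}{2\pi l},
\end{gather}
which, for $m^2 l^2>\frac{1}{2}$, is independent of the mass parameter $m$; the value of $m$ only sets the overall scale of $\Delta G$.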
\subsection{Phase diagrams}
When $\Delta G>0$, the BTZ black hole is the dominant phase, and when $\Delta G<0$, the thermal $\text{AdS}_3$ is dominant. Assuming $G_N=l=1$, the phase diagrams are shown in Figures \ref{fig:p1} and \ref{fig:p2}. Note that since parity is conserved in NMG, unlike in the TMG case, the phase diagrams are symmetric.
\begin{figure}[ht!]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{p1} \caption{\label{fig:p1} $m=1.05$.}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{p2}
\caption{\label{fig:p2} $m=10$.}
\end{minipage}
\end{figure}
From the above diagrams, one can also notice that decreasing $m$ increases the effect of the higher derivative correction terms to the Einstein-Hilbert action. This causes the BTZ black hole to form at a lower temperature. So forming a black hole in NMG is easier than in pure Einstein gravity, due to the fact that in NMG the modes are massive. On the other hand, increasing $m$ at a fixed angular velocity causes the phase transition from $\text{AdS}_3$ to BTZ to occur at a higher temperature.
\section{Phase transitions of warped $\text{AdS}_3$ solution in quadratic ensemble}\label{sec:sec2}
In this section we introduce the thermal $\text{WAdS}_3$ and warped BTZ black hole solutions and then present the phase diagrams.
Both of these solutions have asymptotically warped AdS geometries, which are squashed or stretched deformations of $\text{AdS}_3$ with a different symmetry algebra than the AdS case. This difference makes the thermal properties different from those of the asymptotically AdS solutions. We derive relations for the thermal action of the warped solutions in NMG.
\subsection{G\"{o}del space-time}
The time-like $\text{WAdS}_3$, or the three-dimensional G\"{o}del space-time, is the true vacuum of the $\text{WAdS}_3$ black hole \cite{Donnay:2015iia} \cite{Banados:2005da}.
The original G\"{o}del solution of the Einstein equations was four-dimensional. The non-trivial three-dimensional factor of the G\"{o}del space-time, which lies within the family of deformed $\text{AdS}_3$, was first studied in \cite{Rooman:1998xf}. This metric is a constant curvature Lorentzian manifold with isometry group $U(1) \times SL(2, \mathbb{R})$, where the $U(1)$ factor is generated by a time-like Killing vector. This metric can be embedded in seven-dimensional flat space and possesses closed time-like curves. However, it is still a solution of string theory which corresponds to the time-like generator of $SL(2,\mathbb{R})$ or a magnetic background \cite{Israel:2004vv}.
Its metric is given by
\begin{gather}\label{eq:Godel}
ds^2=-dt^2-4 \omega r dt d\phi+\frac{\ell^2 dr^2}{(2r^2(\omega^2 \ell^2+1) +2\ell^2 r)}-\Big( \frac{2r^2}{ \ell^2} (\omega^2 \ell^2-1)-2r\Big) d\phi^2.
\end{gather}
In the special case of $\omega^2 \ell^2=1$, this metric corresponds to $\text{AdS}_3$.
For this timelike solution we would have
\begin{gather}\label{eq:LMG}
m^2=-\frac{(19\omega^2 \ell^2-2)}{2\ell^2}, \ \ \ \ \ \ \ \Lambda=-\frac{(11\omega^4 \ell^4+28 \omega^2 \ell^2-4)}{2\ell^2 (19 \omega^2 \ell^2-2)}.
\end{gather}
This metric has been widely studied in cosmological models, although it contains CTCs and is unstable with respect to quantum fluctuations. As these causal pathologies are large scale deficiencies, some rotating objects with physical applications in cosmology or perhaps in condensed matter could be modeled by this metric surrounded by a more standard space-time \cite{Rooman:1998xf}. Therefore constructing the phase diagrams of this manifold could have interesting applications.
\subsection{Space-like warped BTZ black hole}
The warped $\mathrm{AdS}_3$ (warped BTZ) black hole in NMG, in the quadratic/non-local ensemble and in the ADM form, can be written as
\begin{equation}
\begin{split} \label{eq:WBTZ}
\frac{ds^2}{l^2}= dt^2+\frac{dr^2 }{ (\nu^2+3)(r-r_+)(r-r_-) }+(2\nu r-\sqrt {r_+ r_- (\nu^2+3)} ) dt d\varphi \\
+\frac{r}{4} \Big[ 3(\nu^2-1)r +(\nu^2+3)(r_+ +r_-)-4\nu \sqrt {r_+ r_- (\nu^2+3)} \Big] d\varphi^2.
\end{split}
\end{equation}
If $\nu^2=1$, the space is locally $\text{AdS}_3$, if $\nu^2 >1$ it is stretched and if $\nu^2<1$ it is a squashed deformation of $\text{AdS}_3$.
For the space-like solution, the parameters would be,
\begin{gather}\label{eq:mlBTZ}
m^2=-\frac{(20\nu^2-3)}{2 l^2}, \ \ \ \ \ \ \ \ \ \ \ \Lambda=-\frac{m^2 (4\nu^4-48\nu^2+9 ) }{(20\nu^2-3)^2}.
\end{gather}
If one employs the following relations
\begin{gather}
\omega=\frac{\nu}{l}, \ \ \ \ \ \ \ \ \ \ \omega^2 \ell^2+2=3 \ell^2/ l^2,
\end{gather}
one recovers equation \ref{eq:LMG}.
Notice that "$l$" is the radius of the space-like $\text{AdS}_3$ and "$\ell$" is the radius of the warped time-like $\text{AdS}_3$.
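As a quick consistency check, the second relation gives $\ell^2=\frac{2 l^2}{3-\nu^2}$ and hence $\omega^2 \ell^2=\frac{2\nu^2}{3-\nu^2}$, so that
\begin{gather}
m^2=-\frac{19 \omega^2 \ell^2-2}{2\ell^2}=-\frac{38 \nu^2-2(3-\nu^2)}{4 l^2}=-\frac{20\nu^2-3}{2 l^2},
\end{gather}
in agreement with the first relation of \ref{eq:mlBTZ}.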
Similar to the way the BTZ black hole is obtained by global identifications, one can also obtain \ref{eq:WBTZ} from \ref{eq:Godel}.
In order to have a real $m$ and a negative $\Lambda$, and therefore a physical solution, from \ref{eq:LMG} and \ref{eq:mlBTZ} the allowed ranges of $\nu$ and $\omega$ are found to be
\begin{gather}
-\sqrt{\frac{2}{19}}<\omega \ell<\sqrt{\frac{2}{19}}, \ \ \ \ \ \ \ \ \ \ \ \ -\sqrt{\frac{3}{20} } <\nu < \sqrt{\frac{3}{20} }.
\end{gather}
\subsection{The free energies and phase diagrams}\label{sub:free}
Now, using the thermodynamic quantities and conserved charges, we calculate the free energies of both of these space-times and then construct the phase diagrams.
Notice that the isometry group of the time-like $\text{WAdS}_3$ is $SL(2,\mathbb{R}) \times U(1)$, which is generated by four Killing vectors \cite{Donnay:2015iia}. By assuming a specific boundary condition, the authors in \cite{Donnay:2015iia} derived the asymptotic algebra of WAdS in NMG and then the central charge $c$ and the $\hat{u}(1)_k$ Kac--Moody level $k$ as \cite{Donnay:2015joa}
\begin{gather}\label{eq:param}
c=\frac{48 \ell^4 \omega^3 }{G (19 \ell^4 \omega^4+17\ell^2 \omega^2-2)}=-\frac{96 l \nu^3}{G (20 \nu^4+57 \nu^2-9) }, \\ \nonumber\\ k=\frac{8 \omega (1+\ell^2 \omega^2)}{G(19 \ell^2 \omega^2-2)}=\frac{4\nu(\nu^2 +3)}{G l(20\nu^2-3)} .
\end{gather}
Now if we just simply assume that the relation \ref{Kraus} can be used here and the modular parameter is $\tau=\frac{1}{2\pi} (-\beta \Omega_E+i \frac{\beta}{l})$, then by using the above central charge one can find the free energy as
\begin{gather}\label{eq:G2}
G_{\text{timelike WAdS}}=-\frac{4 \ell^2 \omega^3 (\omega^2 \ell^2+2)}{3G(19\ell^4 \omega^4+ 17 \ell^2 \omega^2-2)}=-\frac{4\nu^2}{G(20\nu^2-3)}\times \frac{\nu}{(\nu^2+3) l}.
\end{gather}
We can also recalculate the free energy by using the conserved charges. These conserved charges of the timelike $\text{WAdS}_3$ in NMG have been calculated in \cite{Donnay:2015joa} for a "spinning defect". Using these relations one can take the limit of $\mu \to 0$ to find the mass and angular momentum as
\begin{gather}
\mathcal{M}=-\frac{4 \ell^2 \omega^2}{G(19 \ell^2 \omega^2-2)}, \ \ \ \ \ \
\mathcal{J}=-\frac{4 j \ell^4 \omega^3}{G(19\ell^2 \omega^2-2)}.
\end{gather}
For the time-like warped $\text{AdS}_3$, again the entropy and the temperature are zero, so the Gibbs free energy, $G=\mathcal{M}-\Omega \mathcal{J}$, would be
\begin{gather}
G_\text{spinning defect }=\frac{4 \ell^2 \omega^2 \big((\mu-1)+\Omega j \ell^2 \omega \big)}{G(19\ell^2 \omega^2-2)}.
\end{gather}
Taking the limit of zero defect, the result is as follows
\begin{gather}\label{eq:G1}
G_{\text{timelike WAdS}}=-\frac{4 \ell^2 \omega^2}{G (19 \ell^2 \omega^2 -2)}=-\frac{4\nu^2}{G (20\nu^2-3)}.
\end{gather}
Comparing \ref{eq:G1} with \ref{eq:G2}, one can see that there is a factor of $N_1= \frac{\nu}{(\nu^2+3) l}$ difference. This factor can be introduced in the modular parameter to compensate for this discrepancy.
To re-derive this factor we can also calculate the free energy in a third way. As the authors in \cite{Donnay:2015joa} found, the warped CFT version of Cardy's formula for the entropy
\begin{gather}
S_{\text{WCFT}}=\frac{4\pi i}{k} \tilde{P}_0^{(\text{vac})} \tilde{P}_0+4\pi\sqrt{ -\tilde{L}_0^{+(\text{vac}) } \tilde{L}_0^+ },
\end{gather}
matches with the black hole entropy
\begin{gather}
S_{BH}=\frac{8 \pi \nu^3}{ (20\nu^2-3) G_N } \left(r_+-\frac{1}{2\nu} \sqrt{ ( \nu^2+3) r_- r_+}\right).
\end{gather}
Now using the above equation and the relation \ref{eq:radius} of the next section, one can find the warped BTZ black hole entropy as
\begin{gather}
S_{\text{WBTZ}}=\frac{8\pi \nu^2}{ G_N \Omega l (20\nu^2-3)}.
\end{gather}
If one uses the modular transformed equation for the BTZ black hole as
\begin{gather}
S=\frac{-i \pi c}{12 l} (\frac{1}{\tau}-\frac{1}{\tilde{\tau}}),
\end{gather}
then by using the central charge \ref{eq:centralcharge}, one can see that for matching the two relations, the modular parameter of the warped CFT should be defined as
\begin{gather}
\tau=\frac{2i \Omega \nu}{ (\nu^2 +3)}.
\end{gather}
One can see that a similar factor appears here again. The imaginary factor here may point to the appearance of closed time-like curves (CTCs) in the bulk.
This factor of $\frac{\nu}{(\nu^2+3)}$ can actually be explained by studying the Killing vectors of the space-time. The orbifold construction of warped $\text{AdS}_3$ preserves a $U(1) \times U(1)$ subgroup of the $SL(2,\mathbb{R}) \times U(1)$ isometries and is generated by two Killing vectors
\begin{gather}
\xi^{(1)} =\partial_t, \ \ \ \ \ \ \ \ \ \ \ \ \xi^{(2)}=\frac{2 l \nu }{(\nu^2+3) }\partial_t +\partial_{\varphi}.
\end{gather}
The same factor of $\frac{2 l \nu }{(\nu^2+3) }$ enters the construction of the manifold, which changes the partition function and therefore the free energy. This factor is the normalization factor $N_1=\frac{2l\nu}{(\nu^2+3)}$ in \cite{Donnay:2015joa}, which is fixed by matching the asymptotic Killing vector $\ell_0$ with the vector $\xi^{(2)}$. In addition, in the WCFT context as in \cite{Detournay:2012pc}, this factor relates to the anomaly in the transformation operators $T$ and $P$, which generate the infinitesimal coordinate transformation in $x$ and the gauge transformation in the gauge bundle, respectively.
Due to the current anomaly $k$, the operators $P$ and $T$ mix with each other. This can be seen as a "tilt" $\alpha$ in the mapping from $x^-$ to $\phi$ coordinates as in \cite{Detournay:2012pc},
\begin{gather}
x^-=e^{i \phi}, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x^+=t+2\alpha \phi.
\end{gather}
This spectral flow parameter $\alpha$, which is a property of the specific theory on the cylinder, can be related to the factor $N_1$ for any theory.
So in general, for warped $\text{AdS}_3$ solutions one cannot simply use the relation \ref{Kraus}. However, for calculating the thermal action for space-times with warped $\text{AdS}_3$ asymptotics, one can redefine the modular parameter using the Killing vectors of the manifold or the normalization constant in the symmetry algebra of the asymptotic geometry.
This redefinition of the modular parameter has also been seen in Kerr/CFT contexts \cite{Guica:2008mu}. Specifically, the NHEK geometry has an enhanced $SL(2, \mathbb{R}) \times U(1)$ isometry group, where a different normalization factor appears in the algebra and in the normalization between the Killing vectors, and therefore in the redefinition of the modular parameter. \\
Now, using the method introduced in the previous sections for calculating the conserved charges, we calculate the thermodynamic properties and the Gibbs free energies of the black holes with asymptotically warped $\text{AdS}_3$ geometry, which may be called "warped BTZ black holes". The thermodynamical quantities are \cite{Nam:2010dd}, \cite{Donnay:2015iia}
\begin{gather}\label{Tomega}
T_H=\frac{\nu^2+3}{8\pi \nu l} \bigg( \frac{r_+-r_-}{r_+- \sqrt{ \frac{(\nu^2+3)r_+ r_-}{4 \nu^2} } } \bigg), \ \ \ \ \Omega_H=\frac{1}{\nu l} \bigg( \frac{1}{r_+- \sqrt{ \frac{( \nu^2+3)r_+ r_-}{4\nu^2} } } \bigg),
\end{gather}
\begin{gather}
T_L=\frac{(\nu^2+3)}{8 \pi l^2} (r_+ +r_- -\frac{1}{\nu} \sqrt{(\nu^2+3) r_- r_+} ) , \ \ \ \ \ T_R=\frac{(\nu^2+3)}{8 \pi l^2} (r_+ -r_-).
\end{gather}
The conserved charges, mass and angular momentum are
\begin{gather}\label{Mass}
\mathcal{M}=Q_{\partial _t}= \frac{\nu (\nu^2+3) }{G l (20 \nu^2-3) } \Big( (r_-+r_+)\nu -\sqrt{r_+ r_-(\nu^2+3) } \Big) ,
\end{gather}
\begin{gather}\label{angular}
\mathcal{J} = Q_{\partial_{\varphi} }= \frac{\nu (\nu^2+3) }{4 G l ( 20\nu^2-3)} \Big( (5\nu^2+3) r_+ r_- -2\nu \sqrt{r_+ r_-(\nu^2+3) } (r_+ +r_-) \Big) .
\end{gather}
Also the condition for the existence of the black hole is
\begin{gather}
\mathcal{J} \le \frac{G l (20\nu^2-3)}{4\nu (\nu^2+3) } \mathcal{M}^2, \ \ \ \ \ \ \ \ \ \mathcal{M} \ge 0,
\end{gather}
which specifically does not put any new constraint on $\nu$.
The entropy of warped BTZ black hole in NMG is again
\begin{gather}\label{eq:entropy}
S_{BH}=\frac{8 \pi \nu^3}{(20\nu^2-3) G } (r_+-\frac{1}{2\nu} \sqrt{(\nu^2+3) r_+ r_- } ) .
\end{gather}
These thermodynamical quantities satisfy the first law of thermodynamics, and integrating it yields the Smarr-like relation
\begin{gather}
M=T_H S_{BH}+2 \Omega_H J.
\end{gather}
The central charge is \cite{Donnay:2015iia}
\begin{gather}\label{eq:centralcharge}
c=-\frac{96 l \nu^3 }{G (20\nu^4+57\nu^2-9) }.
\end{gather}
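Note that the denominator factorizes as $20\nu^4+57\nu^2-9=(20\nu^2-3)(\nu^2+3)$, so the central charge can equivalently be written as
\begin{gather}
c=-\frac{96 l \nu^3}{G (20\nu^2-3)(\nu^2+3)},
\end{gather}
from which the zero at $\nu=0$ and the poles at $\nu=\pm\sqrt{\frac{3}{20}}$ discussed below can be read off directly.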
One can study the behavior of the central charge versus the warping parameter $\nu$ in Figure \ref{fig:c22}. In the region where there are no CTCs, it is a monotonically increasing function of $\nu$, and at $\nu=0$ or $\nu\to \pm \infty$ one has $c=0$, which indicates that for an infinitely squashed or stretched space-time the Casimir energy vanishes. The central charge also diverges at $\nu= \pm \sqrt{\frac{3}{20}}\sim \pm 0.387$.
For having a physical theory, the central charge should be positive. So if we assume that $G=l =1$, then the constraint on $\nu$ would be
\begin{gather}
0<\nu <\sqrt{\frac{3}{20}}.
\end{gather}
\begin{figure}[ht!]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{Lambda22} \caption{\label{fig:c1} The plot of cosmological constant, $\Lambda$ vs. $ -\sqrt{\frac{3}{20}}<\nu< \sqrt{\frac{3}{20}} $.}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{c22}
\caption{\label{fig:c22} The central charge of NMG vs. $\nu$.}
\end{minipage}
\end{figure}
So now if one defines
\begin{gather}
K=1+\frac{8 l \pi T (l \pi T-\nu ) \pm 4 \pi l T \sqrt{4 \pi ^2 l^2 T^2-8 l \pi T \nu +\nu ^2+3}}{\nu ^2+3}\text{ },
\end{gather}
then
\begin{gather}\label{eq:radius}
r_+=\frac{1}{\Omega \nu l \Big(1-\frac{1}{2 \nu} \sqrt{K(\nu^2+3)}\Big) }, \ \ \ \ \ \ \ \ \ \ r_-=K r_+.
\end{gather}
The minus sign in $K$ is the acceptable choice, as it makes $r_-$ smaller than $r_+$. Then the Gibbs free energy in terms of $\Omega ,T$, $K$ and $\nu$ would be
\begin{equation}
\begin{split}
G_{WBTZ}&=\frac{1 }{G \Omega l (20 \nu ^2-3)}\Bigg[- 8 \pi T \nu^2 +\frac{ (\nu ^2+3) }{l \Big(1-\frac{1}{2\nu}\sqrt{K(\nu^2+3) } \Big) }\Bigg ( (K+1)\nu -\sqrt{K(\nu^2+3) } \nonumber\\&
-\frac{(5\nu^2+3) K-2\nu(K+1)\sqrt{K(\nu^2+3) } }{4\nu l \big(1-\frac{1}{2\nu}\sqrt{K(\nu^2+3)} \big ) } \Bigg)\Bigg].
\end{split}
\end{equation}
Notice that this Gibbs free energy only depends on $\Omega$, $T$ and $\nu$.
One should also notice that the limit of the un-warped black hole is $\nu=\beta^2=1$, which corresponds to $m=1$ for $l=1$. In this limit $G_{WBTZ}$ does not reduce to $G_{BTZ}$ in equation \ref{eq:GBTZ}. This is especially obvious from the factors of $\Omega$ and $T$. This is not a real problem, as we should not expect to recover the results of the previous section in the limit $\nu \to 1$, since these metrics have been written in two different coordinate systems. \\
Now that we have found the free energies of the warped BTZ black hole and its vacuum, we can find the phase diagrams of temperature versus the angular velocity as before and then we can compare them for different warping factors. Therefore we can study the effect of $\nu$ on the phase transitions in warped geometries.
We saw that the acceptable interval for $\nu$ is $0<\nu < \sqrt{\frac{3}{20} }\sim 0.387 $. The phase diagram for $\nu=0.387$ is shown in Figure \ref{fig:nu1}. The blue regions are where the warped BTZ black hole is the dominant phase, and in the white regions the vacuum WAdS is dominant. If one increases $\nu$ beyond $\nu \ge \sqrt{\frac{3}{20}}$, the two phases exchange places, as is also evident from the dependence of the central charge on the warping factor $\nu$.
\begin{figure}[ht!]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pnew1} \caption{\label{fig:nu1} The phase diagram for $\nu=0.387$.}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pnew2}
\caption{\label{fig:warp} The phase diagram for different $\nu$.}
\end{minipage}
\end{figure}
Several other points can be noticed in these diagrams. As one can see in the figures, and also from the equations for the free energy of the warped BTZ black hole, in the quadratic/non-local ensemble, unlike the grand canonical ensemble, the diagrams of the warped AdS solutions are not symmetric, although NMG is parity preserving. The behaviors for positive and negative angular velocities are very different, as the free energy is an odd function of $\Omega$ for the warped geometry, unlike the previous case.
Also, if $\Omega>0$ the warped black hole phase is dominant at lower temperatures and the thermal warped $\text{AdS}_3$ is dominant at higher temperatures. So a large positive angular velocity makes the black hole more stable. However, in this case a higher temperature triggers the inverse Hawking-Page transition and the black hole phase changes to the thermal warped $\text{AdS}_3$. So larger $\Omega$ makes the black hole the dominant phase and larger $T$ makes the vacuum dominant.
Furthermore, the effect of warping factor $\nu$ is shown in Figure \ref{fig:warp}. If we define a critical temperature $T_c$ where the tilted line crosses the $\Omega=0$ axis, then one can see that increasing $\nu$ would decrease this critical temperature. So for $\Omega <0$, increasing the warping factor $\nu$ makes the black hole phase more dominant and for $\Omega>0$, it makes the vacuum $\text{AdS}_3$ the dominant phase.
\section{Phase diagram of warped $\text{AdS}_3$ solution in grand canonical ensemble}
The WAdS black hole in the grand canonical ensemble takes the following form
\begin{gather}\label{metricGC}
g_{\mu \nu}=\left(
\begin{array}{ccc}
-\frac{r^2}{l^2}-\frac{H^2 (-r^2-4 l J+8 l^2 M)^2}{4l^3(l M-J)}+8M & 0 & 4J-\frac{H^2(4 l J-r^2)(-r^2-4 l J+8 l^2 M) }{4l^2(l M-J)} \\
0 & \frac{1}{\frac{16 J^2}{r^2}+\frac{r^2}{l^2}-8M } & 0 \\
4J-\frac{H^2(4 l J-r^2)(-r^2-4 l J+8 l^2 M) }{4l^2(l M-J)} & 0 & r^2-\frac{H^2 (4 J l -r^2)^2 }{4 l (l M-J)}
\end{array} \right ).
\end{gather}
The change of coordinates to go from this form to the form of \ref{eq:WBTZ} was derived in \cite{Detournay:2015ysa}. The phase diagram of this specific ensemble was recently presented in \cite{Detournay:2016gao}. We present the phase diagram of this ensemble here for comparison with the previous ensemble.
Using the charges and entropy derived in \ref{charge1} and \ref{entropy1}, one can derive the Gibbs free energy as
\begin{gather}
G_{WBTZ} (T, \Omega)=\frac{-8 l^2 \pi^2 T^2 (1-2H^2)^{\frac{3}{2}} }{(17-42H^2)(1-l^2 \Omega^2) },
\end{gather}
and the vacuum corresponds to $M=-\frac{1}{8}$ and $J=0$.
\subsection{Local stability}
The Hessian matrix is
\begin{gather}
H=\left(
\begin{array}{cc}
\frac{16 l^2 \pi ^2\left(1-2 H^2\right)^{3/2} }{\left(17-42 H^2\right) \left(l^2 \Omega ^2-1\right)} & \frac{-32 l^4 \pi ^2 T \Omega \left(1-2 H^2\right)^{3/2} }{\left(17-42 H^2\right) \left(l^2 \Omega ^2-1\right)^2} \\ \\
\frac{-32l^4 \pi ^2 T \Omega \left(1-2 H^2\right)^{3/2} }{\left(17-42 H^2\right) \left(l^2 \Omega ^2-1\right)^2} & \frac{16\pi ^2 T^2 \left(1-2 H^2\right)^{3/2}\left(l^4+3 l^6 \Omega ^2\right)}{\left(17-42 H^2\right) \left(l^2 \Omega ^2-1\right)^3} \\
\end{array}
\right).
\end{gather}
For having a locally stable solution, both of the eigenvalues of the Hessian should be negative. For the case of $\Omega=0$, the condition of making both eigenvalues negative would be $H^2 <\frac{17}{42}$.
One can notice that, unlike in the previous ensemble, in the grand canonical ensemble the diagrams of the warped BTZ black hole solution, in addition to those of the BTZ black holes, are symmetric. This could simply be the result of the symmetry and parity preserving nature of this kind of solutions in BHT gravity.
This could also indicate that the thermodynamical properties of these black holes, and therefore the Hawking-Page phase diagrams, are only really meaningful in the grand canonical ensemble and not in other ensembles. The significance and the dual interpretations of the phase diagrams in other thermodynamical ensembles are not particularly clear.
\begin{figure}[htb!]
\centering
\centering
\includegraphics[width=.35\linewidth]{grand} \caption{\label{fig:grand} Phase diagram for the WAdS solution in the grand canonical ensemble, $C=l=1$. }
\end{figure}
\section{Phase diagram of the hairy black hole }\label{sec:hairy}
There exists yet another interesting black hole solution in the new massive gravity, the "New Hairy black hole''. In this section, we are interested in studying its Hawking-Page phase transitions as well.
This solution was first introduced in \cite{Oliva:2009ip} and was further studied in \cite{Nam:2010dd}, \cite{Giribet:2009qz} and \cite{Mirza:2014xxa}. Its geodesics and entanglement entropy for the specific case of the non-rotating solution were discussed recently in \cite{Hosseini:2015vya}.
This hairy black hole solution exists for $m^2=\Lambda=-\frac{1}{2 l^2}$ for the parameters of action \ref{eq:action}. The form of its metric is as follows
\begin{gather}
ds^2=-N F dt^2+\frac{dr^2}{F}+r^2 (d\phi+N^\phi dt)^2,
\end{gather}
where
\begin{equation}
\begin{split}
N=&\Big( 1+\frac{b l^2}{4H} \big( 1-\Xi^{\frac{1}{2}} \big) \Big)^2,\ \ \ \ \ \ \ \ \ \
N^\phi=-\frac{a}{2r^2} (4G_N M-bH),\nonumber\\
F=&\frac{H^2}{r^2} \Big( \frac{H^2}{l^2} +\frac{b}{2} \Big(1+ \Xi^{\frac{1}{2}} \Big)H+\frac{b^2 l^2}{16} \Big( 1-\Xi^ {\frac{1}{2}} \Big)^2- 4G_N M\ \Xi^{\frac{1}{2}} \Big),\nonumber\\
H=& \Big( r^2-2G_N M l^2 \big (1- \Xi^{\frac{1}{2}} \big) -\frac{b^2 l^4}{16} \big(1-\Xi^{\frac{1}{2}} \big)^2 \Big)^{\frac{1}{2}},
\end{split}
\end{equation}
and $\Xi :=1-\frac{a^2}{l^2}$, with $-l \le a \le l $. There are two conserved charges for this black hole, $M$ and $J=M a$, as well as a gravitational hair parameter $b$.
The thermodynamical parameters of this black hole are
\begin{equation}
\begin{split}
\Omega_+= & \frac{1}{a} \Big( \Xi^{\frac{1}{2}}-1 \Big),\ \ \ \ \ \ \ \ \ \ \ \
T= \frac{1}{\pi l} \Xi^{\frac{1}{2}} \sqrt{2 G_N \Delta M \Big (1+\Xi^{\frac{1}{2}} \Big)^{-1}},\nonumber\\
S= & \pi l \sqrt{\frac{2}{G_N} \Delta M \Big( 1+\Xi^{\frac{1}{2}} \Big) },\ \ \ \ \ \ \ \ \ \ \ \ \
\Delta M = M+\frac{b^2 l^2}{16 G_N}.
\end{split}
\end{equation}
Now using all these thermodynamical quantities one can read the Gibbs free energy.
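As an intermediate step (using only the quantities listed above, together with $J=Ma$), one finds
\begin{gather}
T S=2 \Delta M\, \Xi^{\frac{1}{2}}, \ \ \ \ \ \ \Omega_+ J=M \big( \Xi^{\frac{1}{2}}-1 \big), \ \ \ \ \ \ G=M-T S-\Omega_+ J=M \big( 2-\Xi^{\frac{1}{2}} \big)-2 \Delta M\, \Xi^{\frac{1}{2}}.
\end{gather}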
We will see that the region where the black hole can be locally stable for any $b$ is $\Omega^2 l^2 <1$. So with this assumption we can simplify the relation as
\begin{gather}\label{GNBH}
G_{NBH} =\frac{l^2}{16G}\text{ }\left(\frac{16 \pi ^2 T^2 \left(5 l^2 \Omega ^2-1\right)}{\left(l^2 \Omega ^2-1\right)^2}-\frac{b^2 \left(3 l^2 \Omega ^2+1\right)}{l^2 \Omega ^2+1}\right).
\end{gather}
We can see that the Gibbs free energy, in addition to $\Omega$ and $T$, depends also on the hair parameter $b$. One can also notice that there is no real $b$ which makes this free energy vanish.
Now for studying the local stability we calculate the Hessian as
\begin{gather}
H=\left(
\begin{array}{ll}
\frac{2 l^4 \pi ^2 \Omega ^2 \left(l^2 \Omega ^2+3\right)}{G \left(l^2 \Omega ^2-1\right)^2} & \ \ \ \ -\frac{4 l^4 \pi ^2 T \Omega \left(5 l^2 \Omega ^2+3\right)}{G \left(l^2 \Omega ^2-1\right)^3} \\ \\
-\frac{4 l^4 \pi ^2 T \Omega \left(5 l^2 \Omega ^2+3\right)}{G \left(l^2 \Omega ^2-1\right)^3} & \ \ \ \ \frac{l^4 \left(b^2 \left(l^2 \Omega ^2-1\right)^4 \left(3 l^2 \Omega ^2-1\right)+24 \pi ^2 T^2 \left(l^2 \Omega ^2+1\right)^3 \left(1+5 l^2 \Omega ^2\left(2+l^2 \Omega ^2\right) \right)\right)}{4 G \left(l^2 \Omega ^2-1\right)^4 \left(l^2 \Omega ^2+1\right)^3} \\
\end{array}
\right).
\end{gather}
The region where both of the eigenvalues of the above matrix are negative depends on the hair parameter $b$. The locally stable region for the specific value $b=20$ is shown in Figure \ref{fig:localb1}.
For $G_N=l=1$ one can check that for any $b$ the angular velocity should be in the range of $-0.5< \Omega <0.5$, so that the black hole solution can be locally stable. Increasing $\Omega$ can make the black hole locally unstable. Also the condition for $T$ depends on $b$.
Increasing the hair parameter $b$ would make the locally stable region bigger. So basically the hair parameter makes the system more stable and $\Omega $ makes it more unstable. In condensed matter systems, one can also investigate the dual interpretation of this hair parameter and how it makes the system stable.
\begin{figure}[ht!]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{stableNBH} \caption{\label{fig:localb1} The local stable region for $b=20$. }
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{NBH4}
\caption{\label{fig:localb2} The phase diagram for $b=20$.}
\end{minipage}
\end{figure}
We can now compare the Gibbs free energy of this black hole with the free energies of the other solutions to study the phase diagram of the system. The phase diagram for the region of local stability, the vacuum $\text{AdS}_3$ solution and the ground state of the NBH is shown in Figure \ref{fig:localb2}. The region where $\Delta G_1=G_{\text{AdS}}-G_{\text{NBH}} >0$ is the union of regions 1, 2, 3. By comparing the Gibbs free energy of this black hole with the free energy of the vacuum AdS, one can see that in the region of local stability, for any $b$, only the black hole phase is present. So the only phase that is both locally and globally stable is the black hole. Outside of region 1, the phase would not even be locally stable.
Also, the case of $M=M_0=-\frac{b^2 l^2}{16 G}$ in this solution is the ground state, which corresponds to an extremal case where the left, right and Hawking temperatures and also the entropy vanish \cite{Giribet:2009qz}. So its free energy would be
\begin{gather}
{G_0}_{\text{NBH}}=-\frac{b^2 l^2}{16 G_N} \left(3-\frac{2}{1+l^2 \Omega ^2}\right).
\end{gather}
The region where $\Delta G_2={G_0}_{\text{NBH}}-G_{\text{NBH}}>0$ is the union of 1 and 2. Again one can see that $-0.5 <\Omega <0.5$ is the region where the black hole can be stable.
One can also plot the diagram of $M$ versus $J$ and study the effect of other physical parameters or conserved charges on the phase transitions which could shed light on other physical characteristics.
\section{The inner horizon thermodynamics}\label{sec:inner}
It would also be useful to study the thermodynamics of black holes \textit{inside} the horizon, as the quantities from outside the horizon combined with the inside ones can provide additional information about the central charges or scattering data around the black hole. Also, the relations between the thermodynamics inside and outside the horizon are of practical use in holographic applications, such as the examples in \cite{Detournay:2015ysa}, \cite{Giribet:2015lfa}.
One can first integrate the Wald's formula on $r=r_-$ to find the inner horizon entropy. Alternatively, one can use the following relations,
\begin{gather}
T_{R,L}=\frac{T_- \pm T_+}{\Omega_- -\Omega_+}, \ \ \ \ \ \ S_{\pm}=S_R\pm S_L, \ \ \ \ \ S_{R,L}=\frac{\pi^2 l}{3} c \ T_{R,L}.
\end{gather}
So one would get,
\begin{gather}
S_{-}=\frac{8 \pi \nu^3}{(20\nu^2-3) G } (r_- -\frac{1}{2\nu} \sqrt{(\nu^2+3) r_+ r_- } ) .
\end{gather}
The temperature at the inner horizon would be
\begin{gather}
T_-=\frac{1}{2\pi l} \frac{\partial_r N} {\sqrt{g_{rr}} } \Big | _{r=r_-}=\frac{\nu^2+3}{8\pi \nu} \bigg (\frac{r_+-r_-}{r_- -\sqrt{\frac{(\nu^2+3)r_+ r_- }{4\nu^2} } } \bigg ),
\end{gather}
and the angular velocity at the inner horizon is
\begin{gather}
\Omega_-=\frac{1}{l} N^{\phi} (r_-)= \frac{1}{\nu l} \bigg ( \frac{1}{ r_--\sqrt{\frac{ \left(\nu ^2+3\right)r_+ r_-}{4 \nu ^2}}}\bigg).
\end{gather}
As explained in \cite{Giribet:2015lfa}, the statement of inner horizon mechanics is that the product of the entropies of all horizons should be proportional to conserved charges that are quantized at the quantum level. In this case the only such charge is $J$. The product of the inner and outer horizon entropies is
\begin{gather}
S_+ S_-=\frac{16 \pi^2 \nu^4}{ G^2 (20\nu^2-3)^2}\Big( (5\nu^2+3) r_+ r_--2\nu \sqrt{(\nu^2+3) r_+ r_-}(r_++r_-) \Big),
\end{gather}
which, as expected, can be written as a multiple of $J$, i.e., $S_+ S_- \propto J $, independently of $M$. This is because $S_+ S_-$ is holographically dual to the level matching condition of the warped $\text{CFT}_2$, and this is also consistent with the WAdS/WCFT picture.
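In fact, comparing with \ref{angular} and \ref{eq:centralcharge} (and using the factorization $20\nu^4+57\nu^2-9=(20\nu^2-3)(\nu^2+3)$ noted above), the proportionality coefficient is fixed purely by the central charge,
\begin{gather}
S_+ S_-=-\frac{2\pi^2}{3}\, c\, J.
\end{gather}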
Also, as explained in \cite{Chen:2012mh}, the mass-independence of the product of entropies is satisfied here since $T_+ S_+=T_- S_-$, which is a consequence of the parity-preserving nature of this theory, i.e., the fact that the left and right central charges are equal. Based on \cite{Castro:2013pqa}, since the Smarr relation holds for Einstein gravity with higher curvature corrections in the form of NMG, we would expect such a result. Moreover, the first law of black hole thermodynamics is satisfied as
\begin{gather}
dM= \pm T_{\pm} dS_{\pm}+\Omega_{\pm} dJ.
\end{gather}
This was expected since, as explained in \cite{Chen:2012mh}, if the first law is satisfied at the outer horizon, it is also satisfied at the inner horizon. Then, using both of these relations, one can derive the same results for $M$ and $J$ as in equations \ref{Mass} and \ref{angular}, consistent with the WAdS/WCFT picture \cite{Donnay:2015vrb}.
\section{Entanglement entropy of WCFT in BHT}\label{entanglement}
Recently, using the Rindler method, the authors of \cite{Castro:2015csg} found a general expression for the entanglement and Renyi entropies of a single interval in $(1+1)$-dimensional WCFTs. They then provided descriptions in terms of geodesics and massive point particles in a lower spin gravity as the dual of the WCFT, so that the results could be studied holographically in the bulk.
Their general result for the entanglement entropy of an interval in warped CFTs is
\begin{gather}\label{eq:SEE}
S_{EE}= i P_0^{\text{vac}} \ell^* \left( \frac{\bar{L} }{L}-\frac{\bar{\ell^*} }{\ell^*} \right) -4 L_0^{\text{vac}} \log \left( \frac{L}{\pi \epsilon} \sin \frac{ \pi \ell^* }{L} \right ).
\end{gather}
In the above formula $\ell^*$ and $\bar{\ell^*}$ are the separation in space and time respectively and $L$ and $\bar{L}$ are related to the identification pattern of the circle that defines the vacuum of the theory \cite{Castro:2015csg}.
The authors have explained that the second term is well-known and expected, but the first term seems exotic. With the motivation of clarifying the nature of the first term, we study this result for our specific case of the NMG theory.
First, for the vacuum of $\text{WAdS}_3$ which is holographically dual to WCFT, since the theory is parity even, one can write the Virasoro and Kac-Moody operators as
\begin{gather}\label{eq:operatores}
\tilde{P}_0^{(\text{vac})}=\mathcal{M}^{\text{(vac)} }, \ \ \ \ \ \ \ \ \ \ \ \ \ \tilde{L}_0^{\text{(vac)}}=\frac{1}{k} \left ( \mathcal{M}^{\text{vac}} \right)^2,
\end{gather}
where as found in \cite{Donnay:2015iia},
\begin{gather}\label{eq:M}
M^{(\text{vac})}= i {\mathcal{M}}_{\text{G\"{o}del} }=-i \frac{4 \ell^2 \omega^2 }{G(19 \ell^2 \omega^2-2) }.
\end{gather}
So $P_0^{\text{(vac)}}$ has an overall factor of $i$ which makes the first term real here as expected.
It would also be interesting to write the Virasoro and $\text{U}(1)$ Kac-Moody operators of NMG in terms of its central charge as well. So using \ref{eq:operatores} and \ref{eq:M} and the expression for $c$ and $k$ in eq. \ref{eq:param}, one can write
\begin{gather}
\tilde{P}_0^{\text{(vac)}}=-\frac{i c }{12 \ell^2 \omega} (1+\ell^2 \omega^2 ), \ \ \ \ \ \ \ \ \ \ \tilde{L}_0^{\text{(vac)}}=-\frac{c}{24}.
\end{gather}
We can also compare it with the operators of TMG,
\begin{gather}
\tilde{P}_0^{\text{(vac)}}=-\frac{c_L}{24}, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \tilde{L}_0^{\text{(vac)}}=-\frac{c_R}{24}.
\end{gather}
Similar to the TMG case, one can see that the Virasoro part is proportional only to $c$ or $c_R$, but the Kac--Moody part depends not only on $c$ but also on $\omega$, and therefore does not define a central charge. However, $c_L$ is still called a central charge by convention.
This information is useful for studying the dual WCFT of the time-like G\"{o}del or warped BTZ solutions in these gravitational theories.
Now by using \ref{eq:operatores}, the entanglement entropy of NMG could be found as
\begin{gather}\label{eq:SEENMG}
S_{EE}=\frac{4\ell^2 \omega^2 }{G (19 \ell^2 \omega^2 -2) } \left( \ell^* \Big( \frac{\bar{L} }{L}-\frac{\bar{\ell^*} }{\ell^*} \Big) + \frac{2\ell^2 \omega}{1+\ell^2 \omega^2} \log \Big( \frac{L}{\pi \epsilon} \sin \frac{ \pi \ell^* }{L} \Big ) \right).
\end{gather}
Notice that $\ell$ here is just the AdS radius in the G\"{o}del space-time and is independent of the lengths of the intervals $\ell^*$ and $\bar{\ell^*}$.
The plot of $S_{\text{EE}}$ versus $\omega$ is shown in Figure \ref{fig:S1}. The factors consisting of lengths are taken to be constant and equal to one, so one can examine the effect of the warping factor of the G\"{o}del space-time on the entanglement entropy. As shown, it is a monotonic function of $\omega$. One can also see that at $\omega= \pm \sqrt{\frac{2}{19}}$ the entanglement entropy diverges. This is because for these values of $\omega$ the cosmological constant diverges, i.e., $\Lambda \to \infty$; this makes the number of degrees of freedom infinite, and therefore the entanglement entropy diverges as well. \\
\begin{figure}[]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{S1} \caption{\label{fig:S1} The plot of $S_{\text{EE}}$ versus $\omega$. }
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{S2}
\caption{\label{fig:S2} The plot of $S_1$ versus $\omega$.}
\end{minipage}
\end{figure}
The plot of the peculiar first term contributing to the entanglement entropy for NMG,
\begin{gather}
S_1=\frac{\omega k }{2(1+\ell^2 \omega^2) } \Big( \frac{\bar{L} }{L}-\frac{\bar{\ell^*} }{\ell^*} \Big) \ell^2 \ell^*,
\end{gather}
versus the parameter $\omega$ is shown in Fig. \ref{fig:S2}. One can notice that, depending on the sign of $\Big( \frac{\bar{L} }{L}-\frac{\bar{\ell^*} }{\ell^*} \Big)$, it can be a positive or negative term; it has an extremum at $\omega=0$ and is a symmetric function of $\omega$.
One can also write $\omega$ and $\ell$ in terms of the physical quantities $m$ and $\Lambda$ as
\begin{gather}
\omega=\pm \sqrt{\frac{-2m^2}{7} \pm \frac{\sqrt{ 5m^4-7m^2 \Lambda} }{7 \sqrt{3}} }, \ \ \ \ \ \
\ell=\pm \sqrt{\frac{144m^2 \pm 38\sqrt {3m^2(5m^2-7 \Lambda) } } {11m^4-361m^2 \Lambda} },
\end{gather}
and then study the relationship between the entanglement entropy and the physical parameters $m$ and $\Lambda$.
As explained in \cite{Castro:2015csg}, in the holographic picture the first term actually contributes to the black hole entropy. This term is a $\text{U}(1)$ contribution to the entanglement entropy, and is not UV divergent like the second term. It is also proportional to the volume of the interval, although it corresponds to vacuum states and not mixed states, which is a new observation in warped AdS geometries. Also, as noted in \cite{Castro:2015csg}, this term in WCFT, unlike in CFT, is independent of the Renyi replica index. It is also a rather complicated function of the physical parameters of the theory, such as $m$ and $\Lambda$.
\section{Discussion}\label{sec:disc}
In this paper we calculated the free energies of the $\text{AdS}_3$ and BTZ black hole solutions, of the warped $\text{AdS}_3$ and warped BTZ black hole solutions in the quadratic/non-local and grand canonical ensembles, and of the new hairy black hole in BHT gravity, and then plotted the Hawking-Page phase transition diagrams. We found symmetric diagrams for the solutions with $\text{AdS}_3$ geometry and for the warped AdS geometry in the grand canonical ensemble, and non-symmetric diagrams for the solutions with warped $\text{AdS}_3$ geometry in the quadratic ensemble. We also studied the effect of the mass parameters, the warping factor $\nu$ and the hair parameter on the diagrams.
For calculating the free energy of the vacuum warped $\text{AdS}_3$, we had to adapt the previous relations for the action of thermal $\text{AdS}_3$ to the deformed case. By calculating the free energy from the conserved charges or from the Cardy-like formula for the warped solutions in NMG, we found a factor $\frac{\nu}{(\nu^2+3) l}$ that should multiply one of the terms in the definition of the modular parameter of the torus to compensate for the difference between the results of the different methods. This factor extends the definition of the modular parameter of the torus and the formula for the action of thermal $\text{AdS}_3$ to the warped solutions in the NMG theory.
The thermodynamics of the inner horizon has also been studied, with results consistent with previous work and with the WAdS/WCFT picture. We found that the product of the entropies of the inner and outer horizons is proportional to $J$ and therefore independent of the mass, which verifies the inner horizon universality properties for the warped black hole solution of the new massive gravity as well.
In the last section, using the general formula of \cite{Castro:2015csg}, we found the entanglement entropy of an interval in the WCFTs dual to warped $\text{AdS}_3$ in NMG and studied the behavior of the two terms contributing to the entanglement entropy as functions of the warping factor. One could also try to derive the same result using the HRT formalism for computing the entanglement entropy holographically \cite{Hosseini:2015vya},\cite{Hubeny:2007xt}, or study the geodesics in the time-like warped $\text{AdS}_3$ (G\"{o}del space-time) or the space-like WAdS, which could be done in future work.
\acknowledgments
We would like to thank Mohsen Alishahiha, Leopoldo Pando Zayas, Shahin Sheikh-Jabbari and Finn Larsen for useful discussions.
\bibliographystyle{JHEP}
\label{intro}
Given a number field $F$, let $\mathcal{A}_0 ({\rm GL}_n(\mathbb A_F)) $ be the set of cuspidal automorphic representations for $ {\rm GL}_n(\mathbb A_F) $ with unitary central character. For $\pi \in \mathcal{A}_0 ({\rm GL}_n(\mathbb A_F))$, at any finite place $v$ of $F$ where $\pi$ is unramified we denote the Langlands conjugacy class by $A_v (\pi) \subset {\rm GL}_n(\mathbb C) $, which we represent by the matrix $ {\rm diag}\{\alpha _{1, v }, \dots, \alpha _{n,v} \}$. Let $a _v (\pi) $ denote the trace of this matrix.
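In particular, in the standard normalization the unramified local $L$-factor and the Hecke trace are
\begin{align*}
L_v(s, \pi) = \det \big( 1 - A_v(\pi)\, q_v^{-s} \big)^{-1} = \prod_{i=1}^{n} \big( 1 - \alpha_{i,v}\, q_v^{-s} \big)^{-1}, \qquad a_v(\pi) = \sum_{i=1}^{n} \alpha_{i,v},
\end{align*}
where $q_v$ denotes the cardinality of the residue field at $v$.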
Let us now set $n = 2$. Given $\pi,\pi' \in \mathcal{A}_0 ({\rm GL}_2(\mathbb A_F)) $, one can compare local data to determine whether $\pi$ and $\pi'$ are globally isomorphic. In this context we ask the following question: if we have a set $S$ such that $a _v (\pi) = a _v (\pi') $ for all $v \not \in S$, what property of $S$ is sufficient to establish that $\pi$ and $\pi'$ are globally isomorphic?
One approach involves establishing a condition on the size of $S$. The strong multiplicity one theorem of Jacquet--Shalika~\cite{JS81} states that it is sufficient for $S$ to be finite. In 1994, D.~Ramakrishnan~\cite{Ra94} proved that a set $S$ of density less than 1/8 is sufficient. This bound is sharp, as shown by an example of J.-P. Serre~\cite{Se77} of a pair of dihedral automorphic representations (see Section~\ref{s4-4} for details).
This naturally leads to the question of whether the bound can be increased if we exclude dihedral automorphic representations.
Given $\pi, \pi' \in \mathcal{A}_0({\rm GL}_2(\mathbb A_F))$, let us define the set
\begin{align*}
S = S (\pi,\pi') := \{ v \mid v \text{ unramified for }\pi \text{ and }\pi', a_v(\pi) \neq a_v(\pi') \}.
\end{align*}
We will show:
\begin{theorem} \label{t1}
Let $\pi, \pi ' \in \mathcal{A}_0 ({\rm GL}_2(\mathbb A_F))$ be distinct non-dihedral
representations.
Then, if $\underline{\delta} (S)$ is the lower Dirichlet density of the set $S = S (\pi, \pi')$, we have
\begin{align*}
\underline{\delta} (S) \geq \frac 14.
\end{align*}
\end{theorem}
\begin{remark}
This bound is sharp, as established in Section~\ref{s4-1} by a pair of octahedral automorphic representations.
The question of what bound can be established when exactly one of the representations is dihedral is addressed later by Theorem~\ref{t3}.
\end{remark}
A related question involves weakening the requirement of global isomorphism to that of twist-equivalence. Specifically, we can ask: given a set $S$ such that $a _v (\pi) = a _v (\pi') $ for all $v \not \in S $, what property of $S$ is sufficient to establish that $\pi$ and $\pi'$ are twist-equivalent?
We will prove:
\begin{theorem} \label{t2}
Let $\pi, \pi ' \in \mathcal{A}_0 ({\rm GL}_2(\mathbb A_F))$ be representations that are not twist-equivalent.
For $S = S (\pi,\pi')$, we have
\begin{align*}
\underline{\delta} (S) \geq \frac 29.
\end{align*}
\end{theorem}
\begin{remark}
This bound is sharp, as demonstrated by a pair of dihedral automorphic representations associated to the group $S_3$ (see Section~\ref{s4-2}).
\end{remark}
We ask if the bound can be increased if we specify that one, or both, of the cuspidal automorphic representations is non-dihedral. We obtain
\begin{theorem} \label{t3}
Let $\pi, \pi ' \in \mathcal{A}_0 ({\rm GL}_2(\mathbb A_F))$ be
non-twist-equivalent. For $S = S (\pi, \pi')$:\\
(i) If exactly one of the representations is non-dihedral, then
\begin{align*}
\underline{\delta} (S) \geq \frac 27.
\end{align*}
(ii) If both of the representations are non-dihedral, then
\begin{align*}
\underline{\delta} (S) \geq \frac 25.
\end{align*}
\end{theorem}
\begin{remark}
The second bound is sharp, as demonstrated by a pair of odd icosahedral automorphic representations (see Section~\ref{s4-3}).
Note that the first bound applies to all pairs consisting of a dihedral automorphic representation and a non-dihedral cuspidal automorphic representation, since the representations will always be twist-inequivalent.
It does not appear that this bound is sharp. This may be because, unlike in the other three cases, we have forcibly broken the symmetry in the pair: one representation is dihedral and the other is not.
\end{remark}
Given that Theorem~\ref{t1} is sharp, one might ask what analogous theorem to expect if we only consider cuspidal automorphic representations that are neither dihedral nor octahedral. Following the same method as in the proof of Theorem~\ref{t1} does not seem to lead to improved bounds in this case. However, based on examples, we expect that there would be a lower bound of 3/8 for $\underline{\delta}(S)$, which would be sharp in both the tetrahedral and icosahedral cases.
Beyond that, if we restrict ourselves to non-polyhedral cuspidal automorphic representations, it is natural to conjecture
that two such representations are globally isomorphic if they agree locally for a set of places of density greater than 1/2. This would be sharp due to the following example: Let $\pi$ be a cuspidal automorphic representation that corresponds to a non-dihedral holomorphic newform of weight greater than 1. We observe that the condition on the weight, along with being non-dihedral, then implies that the newform is of non-polyhedral type. One knows that the set of primes where the Hecke eigenvalue of $\pi$ is zero has density zero~\cite{Se81}. Thus $\pi$ and $\pi \otimes \chi$, where $\chi$ is a quadratic Hecke character, provide an example of a pair of non-polyhedral cuspidal automorphic representations that agree locally for a set of places of density exactly 1/2. Note that the holomorphy condition is needed here: if $\pi$ were associated to a non-dihedral Maa{\ss} form then the best known upper bound on the density of places at which the Hecke eigenvalue is zero is 1/2~\cite{Wa13b}.
In the case of Theorem~\ref{t3}, restricting further to cuspidal automorphic representations that are neither dihedral nor icosahedral does not seem to yield better bounds under the current approach. However, examples indicate that it may be reasonable to expect the existence of a lower bound of 15/32 for $\underline{\delta}(S)$, which would be optimal in the tetrahedral case.
For two non-polyhedral cuspidal automorphic representations that are not twist-equivalent, it is natural to conjecture that they agree locally for a set of places of density 0.
We collect all the examples implicit in the discussions above:
\begin{theorem}\label{obs}
We note the existence of the following cuspidal automorphic representations for ${\rm GL}(2)$:
\begin{itemize}
\item (due to Serre~\cite{Se77}) A pair of dihedral representations with $\delta (S) = 1/8$.
\item A pair of dihedral representations that are non-twist-equivalent and where $\delta (S) = 2/9$.
\item A pair of tetrahedral representations with $\delta (S) = 3/8$.
\item A pair of tetrahedral representations that are non-twist-equivalent and where $\delta (S) = 15/32$.
\item A pair of octahedral representations with $\delta (S) = 1/4$.
\item A pair of icosahedral representations with $\delta (S) = 3/8$.
\item A pair of icosahedral representations that are non-twist-equivalent and where $\delta (S) = 2/5$.
\item A pair of non-polyhedral representations with $\delta (S) = 1/2$.
\end{itemize}
\end{theorem}
These statements will be proved in Section~\ref{s4}.\\
The idea of the proof of Theorems~\ref{t1},~\ref{t2}, and~\ref{t3} relies on the examination of the asymptotic properties of certain suitable Dirichlet series.
We achieve this by establishing various identities of incomplete $L$-functions and determining their behaviour when real $s \rightarrow 1^+$ ($s$ being the variable in each incomplete $L$-function) by using functoriality and non-vanishing results.
We make use of the work of Gelbart--Jacquet~\cite{GJ78} and Kim--Shahidi~\cite{KS00} on the symmetric second and third powers (respectively) of cuspidal automorphic representations for GL(2) with regard to their existence and possible cuspidality (note however that the symmetric fourth power lift of Kim~\cite{Ki03} is not needed).
We also require the results of Ramakrishnan~\cite{Ra00,Ra04} on the adjoint lift and the automorphic tensor product.
Our approach is related to, though is not a straightforward extension of, work of Ramakrishnan in~\cite{Ra94,Ra97}.
In constructing our examples in Theorem~\ref{obs}, we rely on various cases of the strong Artin conjecture. The first of these cases is implicit in the work of Hecke and Maa{\ss},
and subsequent cases were proved by Langlands~\cite{La80}, Tunnell~\cite{Tu81}, Khare--Wintenberger~\cite{KW109,KW209} and Kisin~\cite{Ki09}. (Note that we do not actually need the full force of Khare--Wintenberger and Kisin, and can instead make do with one of the earlier examples available in the literature; see Section~\ref{s4}).\\
We also note a result of Murty--Rajan~\cite{MR96} that is related to strong multiplicity one. If $\pi_1, \pi_2 \in \mathcal{A}_0({\rm GL}_2(\mathbb A_\mathbb Q))$ are such that the $L$-functions of the Rankin--Selberg convolutions of ${\rm Sym}^n (\pi_1)$ and ${\rm Sym}^m (\pi_2)$, for $n =m$ and $n = m +2$, are entire, have suitable functional equations, and satisfy GRH, then $ \# \{p \leq x \mid a_p (\pi_1) = a_p (\pi_2)\} = O\left( x ^{5/6 +\epsilon }\right). $\\
The structure of this paper is as follows. In Section~\ref{s1}, we establish notation and recall relevant theorems on the properties of certain types of $L$-functions associated to automorphic representations. In Section~\ref{s2}, we prove the non-dihedral results. In Section~\ref{s3}, we prove the dihedral cases of Theorems~\ref{t2} and~\ref{t3}. Finally, in Section~\ref{s4} we construct the examples from Theorem~\ref{obs}.
\subsection*{Acknowledgements}
The author would like to thank Dinakar Ramakrishnan for his valuable guidance and useful discussions, as well as Farrell Brumley and Abhishek Saha for their helpful comments on earlier drafts of this paper.
\section{Preliminaries} \label{s1}
We begin by introducing some notation.
Let $F$ be a number field and let $S$ be a set of primes in $F$. Then the \emph{lower and upper Dirichlet densities} of $S$ are
\begin{align*}
\underline{\delta}(S) = \liminf_{s \rightarrow 1^+} \frac{\sum_{\mathfrak{p} \in S}{\rm N}\mathfrak{p}^{-s}}{-\log(s-1)}
\end{align*}
and
\begin{align*}
\overline{\delta}(S) = \limsup_{s \rightarrow 1^+} \frac{\sum_{\mathfrak{p} \in S}{\rm N}\mathfrak{p}^{-s}}{-\log(s-1)},
\end{align*}
respectively.
When the lower and upper Dirichlet densities of $S$ coincide, we say that $S$ has a \emph{Dirichlet density}.
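For instance, the set of rational primes that split in $\mathbb Q(i)$ (equivalently, the primes $p \equiv 1 \pmod 4$) has Dirichlet density $1/2$.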
\subsection*{Polyhedral automorphic representations}
We will call a cuspidal automorphic representation $\pi$ for GL(2)/$F$ (where $F$ is a number field) \textit{polyhedral} if it corresponds to a 2-dimensional irreducible complex representation $\rho$ of the Weil group $W_F$. This means that the $L$-functions of the two objects are equal:
\begin{align*}
L(s, \pi) = L(s, \rho).
\end{align*}
Recall that these 2-dimensional irreducible complex representations fall into four different categories, namely \textit{dihedral, tetrahedral, octahedral}, and \textit{icosahedral} (for example, see the Proposition in Section 4.3 of~\cite{Ge97}). An associated cuspidal automorphic representation then inherits the same nomenclature.
Given a 2-dimensional irreducible complex representation $\rho$, the strong Artin conjecture states that it corresponds to a cuspidal automorphic representation for GL(2). This is known when $\rho$ is dihedral (implicit in the work of Hecke and Maa{\ss}), tetrahedral~\cite{La80}, octahedral~\cite{Tu81}, and odd icosahedral~\cite{KW109,KW209,Ki09} (note that a representation $\rho$ is \textit{odd} if ${\rm det}\rho (c) = -1$, where $c$ is complex conjugation).
Note that there were a number of examples of modular odd icosahedral representations already in the literature before the work of Khare--Wintenberger and Kisin. The first example came from Buhler~\cite{Bu78} (which made key use of the correspondence established by Deligne--Serre~\cite{DS74}), which was followed by, amongst others, the work of Buzzard, Dickinson, Shepherd-Barron, and Taylor~\cite{BDST01}, and Taylor~\cite{Ta03}.
Cuspidal automorphic representations of solvable polyhedral type can be characterised by means of their symmetric power lifts (see~\cite{KS00}):
A cuspidal automorphic representation for GL(2) is said to be \textit{dihedral} if it admits a non-trivial self-twist by a (quadratic) character.
It is \textit{tetrahedral} if it is non-dihedral and its symmetric square admits a non-trivial self-twist by a cubic character.
It is \textit{octahedral} if it is not dihedral or tetrahedral and if its symmetric cube admits a non-trivial self-twist by a quadratic character.
Presently known cases of functoriality are not sufficient to determine whether or not icosahedral automorphic representations admit such a description.\\
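For example, a dihedral representation $\pi = I_K^F(\mu)$ exhibits the required self-twist directly: writing $\chi_{K/F}$ for the quadratic Hecke character attached to $K/F$, one has
\begin{align*}
\pi \otimes \chi_{K/F} \simeq I_K^F\left(\mu \cdot (\chi_{K/F} \circ {\rm N}_{K/F})\right) \simeq \pi,
\end{align*}
since $\chi_{K/F} \circ {\rm N}_{K/F}$ is trivial by class field theory.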
Given $\pi \in \mathcal{A}_0 ({\rm GL}_n(\mathbb A_F))$ there is the associated $L$-function
\begin{align*}
L(s, \pi ) = \prod_{ v } L_v (s, \pi),
\end{align*}
where, for finite $v$ at which $\pi$ is unramified, we have:
\begin{align*}
L_v (s, \pi) =& {\rm det} \left(I_n - A_v( \pi ) {\rm N } v ^{-s} \right) ^{-1} \\
=& \prod_{ j=1 }^{ n} \left( 1 - \alpha _{j, v }{\rm N } v ^{-s}\right) ^{-1} .
\end{align*}
The $L$-function $L(s,\pi)$ converges absolutely for ${\rm Re}(s) > 1$, and furthermore it is non-vanishing on $ {\rm Re}(s) = 1 $~\cite{JS76}.
Now let $T$ be the set of all ramified and infinite places. We define the incomplete $L$-function
\begin{align*}
L ^ T (s, \pi ) = \prod_{v \not \in T } L_v (s, \pi),
\end{align*}
and the incomplete Dedekind zeta function
\begin{align*}
\zeta_F^T (s) = \prod_{v \not \in T }(1- {\rm N }v ^{-s})^{-1} .
\end{align*}
\subsection*{Rankin--Selberg $L$-functions}
Given automorphic representations $\pi$ and $\pi'$ for ${\rm GL}_n(\mathbb A_F)$ and ${\rm GL}_m(\mathbb A_F)$, respectively,
we have the Rankin--Selberg $L$-function
\begin{align*}
L(s, \pi \times \pi') = \prod_{ v } L _v (s, \pi \times \pi'),
\end{align*}
where for finite $v$ at which both $\pi$ and $\pi'$ are unramified, we have:
\begin{align*}
L_v (s, \pi \times \pi') = {\rm det} \left(I _{nm} - \left( A_v(\pi) \otimes A_v(\pi') \right){\rm N } v ^{-s}\right) ^{-1} .
\end{align*}
This global $L$-function converges absolutely for ${\rm Re}(s) > 1$. When $\pi'$ is dual to $\pi$, one knows that $L(s, \pi \times \pi')$ has a simple pole at $s=1$~\cite{JS81}. Otherwise, it is invertible at $s=1$~\cite{Sh81}.\\
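Concretely, in the case $n = m = 2$ relevant to us, if $A_v(\pi) = {\rm diag}\{\alpha, \beta\}$ and $A_v(\pi') = {\rm diag}\{\gamma, \delta\}$ at a finite place $v$ where both representations are unramified, then
\begin{align*}
L_v(s, \pi \times \pi') = \left[\left(1 - \alpha\gamma\, {\rm N}v^{-s}\right)\left(1 - \alpha\delta\, {\rm N}v^{-s}\right)\left(1 - \beta\gamma\, {\rm N}v^{-s}\right)\left(1 - \beta\delta\, {\rm N}v^{-s}\right)\right]^{-1},
\end{align*}
so that the local Dirichlet coefficient at $v$ of $\log L(s, \pi \times \pi')$ is $a_v(\pi)a_v(\pi')$.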
Given $\pi, \pi' \in \mathcal{A}_0({\rm GL}_2(\mathbb A_F))$, by~\cite{Ra00}, one knows of the existence of the automorphic tensor product of $\pi$ and $\pi'$, which is an automorphic representation for ${\rm GL}_4(\mathbb A_F)$ that we write as $\pi \boxtimes \pi'$. When $\pi$ and $\pi'$ are both dihedral, there is a cuspidality criterion (from~\cite{Ra00}, and later refined in~\cite{Ra04}) which we will make use of later:
For two dihedral automorphic representations
$\pi, \pi' \in \mathcal{A}_0({\rm GL}_2(\mathbb A_F)) $,
the automorphic product $\pi \boxtimes \pi'$ is not cuspidal if and only if $\pi$ and $\pi'$ can be induced from the same quadratic extension $K$ of $F$. In this case, let us write $\pi$, $\pi'$ as $I_K^F (\nu), I_K^F (\mu)$, respectively, where $\nu$ and $\mu$ are Hecke characters for $K$. Then
\begin{align*}
\pi \boxtimes \pi' = I_K^F (\nu \mu) \boxplus I_K^F (\nu \mu^\tau)
\end{align*}
where $\tau$ is the non-trivial element of ${\rm Gal}(K/F)$ and $\mu^\tau$ signifies precomposition with $\tau$.
\begin{remark}
Note that a given dihedral automorphic representation may be induced from more than one quadratic extension. See also Theorem A of~\cite{PR11}.
\end{remark}
\subsection*{Cuspidality of symmetric powers}
For $\pi \in \mathcal{A}_0({\rm GL}_2(\mathbb A_F))$, one knows, by Gelbart--Jacquet~\cite{GJ78} and Kim--Shahidi~\cite{KS00}, that the symmetric second and third power representations are isobaric sums of cuspidal automorphic representations. One also knows that $ {\rm Sym}^2 \pi $ is cuspidal if and only if $\pi$ is non-dihedral and that $ {\rm Sym}^3 \pi $ is cuspidal if and only if $\pi$ is not dihedral or tetrahedral.
\subsection*{Adjoint lift}
Given $\pi \in \mathcal{A}_0({\rm GL}_2(\mathbb A_F))$, there exists the adjoint lift ${\rm Ad}(\pi)$ which is an automorphic representation for ${\rm GL}_3(\mathbb A_F)$. This lift is in fact isomorphic to ${\rm Sym}^2 (\pi) \otimes \omega ^{-1}$, where $\omega$ is the central character of $\pi$. The adjoint lift (and thus the symmetric square lift) is cuspidal if and only if $\pi$ is non-dihedral~\cite{JS81}.
We make use of this lift to address possible twist-equivalence for cuspidal automorphic representations for GL(2): Given $\pi = \otimes_v' \pi_v$, $\pi' = \otimes_v' \pi'_v \in \mathcal{A}_0({\rm GL}_2(\mathbb A_F))$, they are twist-equivalent if
\begin{align*}
{\rm Ad}(\pi_v) \simeq {\rm Ad}(\pi'_v)
\end{align*}
for almost all $v$~\cite{Ra00}.
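In explicit terms, at a finite place $v$ where $\pi$ is unramified with $A_v(\pi) = {\rm diag}\{\alpha, \beta\}$, one has
\begin{align*}
A_v({\rm Ad}(\pi)) = {\rm diag}\left\{\frac{\alpha}{\beta},\, 1,\, \frac{\beta}{\alpha}\right\}.
\end{align*}
Twisting $\pi$ by a Hecke character $\chi$ multiplies both $\alpha$ and $\beta$ by the value of $\chi$ at a uniformiser at $v$, which cancels in these ratios, so ${\rm Ad}(\pi \otimes \chi) \simeq {\rm Ad}(\pi)$; this is why the adjoint lift detects twist-equivalence.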
\section{Proof of Theorems~\ref{t1} and~\ref{t3}(ii)} \label{s2}
Throughout this section, the cuspidal automorphic representations are assumed to be non-dihedral.\\
For the proofs of Theorems~\ref{t1} and~\ref{t3}(ii) we require the following identities of incomplete $L$-functions:
\begin{lemma}\label{s2lem1} Given $\pi,\pi' \in \mathcal{A}_0({\rm GL}_2(\mathbb A_F))$ with (unitary) central characters $\omega$, $\omega'$ (respectively), let $T$ be the set of all the infinite places as well as the finite places at which either $\pi$ or $\pi'$ is ramified.
Then we have
\begin{align*}
L^T\left(s, \pi \boxtimes \pi \times \overline{\pi} \boxtimes \overline{\pi} \right) =& L^T\left(s, {\rm Ad} (\pi) \times {\rm Ad}(\pi) \right) L^T\left(s, {\rm Ad}(\pi) \right)^2 \zeta_F^T(s) \\
L^T\left(s, \pi \boxtimes \pi \times \overline{\pi} \boxtimes \overline{\pi'}\right) =& L^T\left(s, (\overline{\omega}\otimes \rm{Sym}^3 (\pi)) \times \overline{\pi'} \right) L^T\left(s, \pi \times \overline{\pi'} \right) ^2\\
L^T\left(s, \pi \boxtimes \pi \times \overline{\pi'} \boxtimes \overline{\pi'}\right) =& L^T \left(s, {\rm Ad}(\pi) \times ({\rm Ad}(\pi') \otimes \overline{\omega}\omega') \right)
L^T\left(s, \omega \overline{\omega'} \otimes \rm{Ad} (\pi) \right) \\
& \cdot L^T\left(s, \omega \overline{\omega'} \otimes \rm{Ad} (\pi') \right) L^T \left(s, \omega \overline{\omega '}\right) \\
L^T\left(s, \pi \boxtimes \overline{\pi} \times \pi' \boxtimes \overline{\pi'}\right) =& L^T \left(s, {\rm Ad}(\pi) \times {\rm Ad}(\pi') \right) L^T\left(s, {\rm Ad}(\pi)\right) L^T\left(s, {\rm Ad}(\pi')\right) \zeta^T_F(s) .
\end{align*}
\end{lemma}
\begin{proof}
This follows from the Clebsch--Gordan decomposition of tensor powers of two-dimensional representations.
Here and in later proofs, we use the fact that the contragredient representation $\widetilde{\pi_v}$ is isomorphic to the complex conjugate representation $\overline{\pi_v}$~\cite{GK75}.
By way of example, we elaborate on the case of $L^T\left(s, \pi \boxtimes \pi \times \overline{\pi'} \boxtimes \overline{\pi'}\right)$. At a finite place $v$ of $F$ where both $\pi$ and $\pi'$ are unramified, we represent the associated Langlands conjugacy classes $A_v(\pi)$ and $A_v(\pi')$ as ${\rm diag}\{\alpha,\beta\}$ and ${\rm diag}\{\gamma,\delta\}$, respectively.
Consider the tensor product $A_v(\pi)\otimes A_v(\pi)$, which we express as
\begin{align*}
\left( \begin{array}{cc}
\alpha& \\
&\beta
\end{array} \right)
\otimes
\left( \begin{array}{cc}
\alpha& \\
&\beta
\end{array} \right)
\end{align*}
which is equivalent to
\begin{align*}
\alpha \beta
\otimes
\left(
\left( \begin{array}{ccc}
\alpha / \beta& & \\
& 1 & \\
& & \beta / \alpha
\end{array} \right)
\oplus
1
\right)
\end{align*}
and we observe that this is a representative of $A_v(\omega \otimes ({\rm Ad}\pi \boxplus 1))$.
For $A_v(\overline{\pi'})\otimes A_v(\overline{\pi'})$, we have
\begin{align*}
\overline{\gamma \delta}
\otimes
\left(
\left( \begin{array}{ccc}
\overline{\gamma}/ \overline{\delta}& & \\
& 1 & \\
& & \overline{\delta} / \overline{\gamma}
\end{array} \right)
\oplus 1
\right)
\end{align*}
which is a representative of $A_v(\overline{\omega'}\otimes ({\rm Ad}\pi' \boxplus 1))$.
Therefore
\begin{align*}
A_v(\pi)\otimes A_v(\pi)\otimes A_v(\overline{\pi'})\otimes A_v(\overline{\pi'})
\end{align*}
can be expressed as
\begin{align*}
A_v\left(\omega \overline{\omega'} \otimes \left(({\rm Ad}\pi \times {\rm Ad}\pi') \boxplus {\rm Ad}\pi \boxplus {\rm Ad}\pi' \boxplus 1\right)\right).
\end{align*}
Observing that
\begin{align*}
& L_v\left(s, \pi \boxtimes \pi \times \overline{\pi'} \boxtimes \overline{\pi'}\right) \\
=& {\rm det}\left(I_{16} - \frac{A_v(\pi)\otimes A_v(\pi)\otimes A_v(\overline{\pi'})\otimes A_v(\overline{\pi'})}{Nv^s}\right)^{-1}\\
=& {\rm det}\left(I_{9} - \frac{\omega \overline{\omega'} \cdot A_v({\rm Ad}\pi)\otimes A_v({\rm Ad}\pi')}{Nv^s}\right)^{-1}
{\rm det}\left(I_{3} - \frac{\omega \overline{\omega'} \cdot A_v({\rm Ad}\pi)} {Nv^s}\right)^{-1}\\
&\cdot {\rm det}\left(I_{3} - \frac{\omega \overline{\omega'} \cdot A_v({\rm Ad}\pi')}{Nv^s}\right)^{-1}
{\rm det}\left(I_{1} - \frac{\omega \overline{\omega'}}{Nv^s}\right)^{-1}\\
=& L_v(s, \omega \overline{\omega'} \otimes {\rm Ad}\pi \times {\rm Ad}\pi')
L_v(s, \omega \overline{\omega'} \otimes {\rm Ad}\pi)
L_v(s, \omega \overline{\omega'} \otimes {\rm Ad}\pi')
L_v(s, \omega \overline{\omega'})
\end{align*}
we obtain our $L$-function identity.
\end{proof}
\begin{remark}
The purpose of the lemma above is to establish the asymptotic behaviour of various Dirichlet series as real $s \rightarrow 1^+$.
For example, using~\cite{BB11} we observe that, as real $s \rightarrow 1^+$,
\begin{align*}
\log L^T(s, \pi \boxtimes \overline{\pi} \times \pi' \boxtimes \overline{\pi'}) =
\sum \frac{|a_v (\pi)|^2 |a_v (\pi')|^2 }{{\rm N}v ^s} + O\left(1\right),
\end{align*}
and similarly for the other incomplete $L$-functions on the left-hand side of the equations in Lemma~\ref{s2lem1}.
\end{remark}
\noindent \textbf{Proof of Theorem~\ref{t1}}\\
Let us begin by fixing both $\pi$ and $\pi'$ to be non-tetrahedral (as well as non-dihedral, as mentioned at the beginning of this section) so that we can assume that the symmetric cubes of $\pi$ and $\pi'$ are both cuspidal. At finite places $v$ where both $\pi$ and $\pi'$ are unramified, we denote the trace of $A_v(\pi)$ as $a_v$ and the trace of $A_v(\pi')$ as $b_v$.
By considering the behaviour of the incomplete $L$-functions in Lemma~\ref{s2lem1}, and using the results stated in the previous section on Rankin--Selberg $L$-functions, symmetric powers, and adjoint lifts, we obtain
\begin{align*}
\sum \frac{a_v^i \overline{a_v}^j b_v^k \overline{b_v}^l}{{\rm N}v ^s}= &k(i,j,k,l)\cdot \log\left(\frac{1}{s-1}\right) +o\left(\log\left(\frac{1}{s-1}\right)\right)
\end{align*}
as real $s \rightarrow 1 ^+$, where $k (i,j,k,l)$ is a non-negative integer such that
\begin{align*}
k (i,j,k,l) \leq
\left\{ \begin{array}{cccl}
0& \text{ for }&(i,j,k,l)=&(2,1,0,1), (0,1,2,1),(1,2,1,0),\\
&&& \text{ and }(1,0,1,2)\\
2& \text{ for }&(i,j,k,l)=&(1,1,1,1),(2,2,0,0),(0,0,2,2),(2,0,0,2)\\
&&& \text{ and }(0,2,2,0).
\end{array} \right.
\end{align*}
Note that we are using the fact that the logarithms of incomplete $L$-functions such as those in Lemma~\ref{s2lem1} are asymptotically equivalent (as real $s \rightarrow 1^+$) to the Dirichlet series above.\\
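By way of illustration, let us verify that $k(1,1,1,1) \leq 2$: the series $\sum |a_v|^2 |b_v|^2\, {\rm N}v^{-s}$ corresponds to the fourth identity in Lemma~\ref{s2lem1}; there, $\zeta_F^T(s)$ contributes a simple pole at $s=1$, the factors $L^T(s, {\rm Ad}(\pi))$ and $L^T(s, {\rm Ad}(\pi'))$ are invertible at $s=1$ (the adjoint lifts being cuspidal, as $\pi$ and $\pi'$ are non-dihedral), and $L^T(s, {\rm Ad}(\pi) \times {\rm Ad}(\pi'))$ has at most a simple pole there, giving a pole of order at most two in total.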
Let $C= C _S $ be the characteristic function of the set $S = \{ v \mid a_v \neq b_v \}$. We have, for real $s > 1$:
\begin{align}
\sum_{v } \frac{|a _v - b_v|^2 C (v) }{ {\rm N }v ^{s} }
\leq \left( \sum_{v } \frac{|a _v -b_v|^4 }{ {\rm N }v ^{s} }\right)^{1/2} \left( \sum_{v \in S} \frac{1 }{ {\rm N }v ^{s} } \right)^{1/2} \label{s2ineq1}
\end{align}
where the inequality above arises from applying Cauchy--Schwarz.
We divide inequality~(\ref{s2ineq1}) by $\log \left( 1 / (s-1) \right)$, take the limit inferior as $s \rightarrow 1 ^+$, and examine the asymptotic properties of the series. We obtain
\begin{align*}
2 &\leq (16) ^{1/2} \cdot \underline{\delta}(S)^{1/2}
\end{align*}
which gives
\begin{align*}
\frac{1}{4} &\leq \underline{\delta}(S).
\end{align*}
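For the reader's convenience, we indicate the source of the constants $2$ and $16$. Writing $|a_v - b_v|^2 = |a_v|^2 + |b_v|^2 - a_v\overline{b_v} - \overline{a_v}\, b_v$ and squaring,
\begin{multline*}
|a_v - b_v|^4 = |a_v|^4 + |b_v|^4 + a_v^2\,\overline{b_v}^2 + \overline{a_v}^2\, b_v^2 + 4\,|a_v|^2 |b_v|^2 \\
- 2\left(a_v^2\,\overline{a_v}\,\overline{b_v} + a_v\,\overline{a_v}^2\, b_v + a_v\, b_v\,\overline{b_v}^2 + \overline{a_v}\, b_v^2\,\overline{b_v}\right).
\end{multline*}
Applying the bounds on $k(i,j,k,l)$ listed above termwise (the subtracted cross terms vanish asymptotically, since there $k \leq 0$) bounds the asymptotic coefficient of $\sum |a_v - b_v|^4\, {\rm N}v^{-s}$ by $2+2+2+2+4\cdot 2 = 16$, while $\sum |a_v - b_v|^2\, {\rm N}v^{-s}$ has asymptotic coefficient $1 + 1 - 0 - 0 = 2$, by the simple poles of $L^T(s, \pi \times \overline{\pi})$ and $L^T(s, \pi' \times \overline{\pi'})$ and the absence of a pole for $L^T(s, \pi \times \overline{\pi'})$.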
Now let us consider the case when at least one of the cuspidal automorphic representations, say $\pi$, is tetrahedral. We apply Theorem 2.2.2 of~\cite{KS02} which states that
\begin{align*}
{\rm Sym}^3 (\pi) \otimes \omega ^{-1} \simeq (\pi \otimes \nu)\boxplus (\pi \otimes \nu^2)
\end{align*}
where $\omega$ is the central character of $\pi$ and $\nu$ is a (cubic) Hecke character that satisfies ${\rm Sym}^2 (\pi) \simeq {\rm Sym}^2 (\pi) \otimes \nu$.
Since
\begin{align*}
L^T(s, \overline{\omega}\otimes {\rm Sym}^3 (\pi) \times \overline{\pi'}) =L^T(s, (\pi \otimes \nu) \times \overline{\pi'}) L^T(s, (\pi \otimes \nu^2) \times \overline{\pi'}),
\end{align*}
the $L$-function on the left-hand side only has a (simple) pole at $s=1$ when $\pi' \simeq \pi \otimes \nu$ or $\pi \otimes \nu^2$.
If this is the case, then $k(i,j,k,l)=1$ for $(i,j,k,l)=(2,1,0,1), (0,1,2,1)$, $(1,2,1,0)$, and $(1,0,1,2)$. This gives us a lower bound of 1/2 for the density of places at which $a_v(\pi) \neq a_v(\pi')$. Thus the lower bound of 1/4 still holds.\\ \qed
\noindent \textbf{Proof of Theorem~\ref{t3}(ii)}\\
Here the non-dihedral automorphic representations $\pi$ and $\pi'$ are non-twist-equivalent.
Following a similar approach to before, we obtain
\begin{align*}
\sum \frac{a_v^i \overline{a_v}^j b_v^k \overline{b_v}^l}{{\rm N}v ^s}= &k(i,j,k,l)\cdot \log\left(\frac{1}{s-1}\right) +o\left(\log\left(\frac{1}{s-1}\right)\right)
\end{align*}
as real $s \rightarrow 1 ^+$, where $k(i,j,k,l)$ is a non-negative integer such that
\begin{align*}
k (i,j,k,l) \leq
\left\{ \begin{array}{ccl}
0& \text{ for }&(i,j,k,l)=(2,1,0,1), (0,1,2,1), (1,2,1,0),\text{ and }(1,0,1,2) \\
1& \text{ for }&(i,j,k,l)=(1,1,1,1) \\
2& \text{ for }&(i,j,k,l)=(2,2,0,0)\text{ and }(0,0,2,2)\\
c& \text{ for }&(i,j,k,l)=(2,0,0,2)\text{ and }(0,2,2,0),
\end{array} \right.
\end{align*}
with $c=c (\omega,\omega ')$ equal to zero if $\omega \neq \omega '$ and equal to one otherwise.
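With these bounds, the same expansion of $|a_v - b_v|^4$ as in the proof of Theorem~\ref{t1} gives an asymptotic coefficient of at most $2 + 2 + c + c + 4\cdot 1 = 8 + 2c$, which explains the constant appearing below.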
Again we divide inequality~(\ref{s2ineq1}) by $\log \left( 1 / (s-1) \right)$, take the limit inferior as $s \rightarrow 1 ^+$, and examine the asymptotic properties of the series given our condition of non-twist-equivalence. We obtain
\begin{align*}
2 &\leq (8 +2c) ^{1/2} \cdot \underline{\delta}(S)^{1/2}
\end{align*}
which implies
\begin{align*}
\frac{2}{4+c} &\leq \underline{\delta}(S).
\end{align*}
Thus $\underline{\delta}(S)$ is greater than or equal to 2/5 if $\omega = \omega '$, and 1/2 otherwise.\\ \qed
\section{Proof of Theorems~\ref{t2} and~\ref{t3}(i)} \label{s3}
In this section, any pair of cuspidal automorphic representations $\pi$,$\pi'$ will always be non-twist-equivalent.\\
Since the non-dihedral cases have been covered by the proof of Theorem~\ref{t3}(ii) in the previous section, we now need to consider the situation where at least one out of the pair of cuspidal automorphic representations in question is dihedral.
Let us briefly recall our notation for dihedral automorphic representations. Given a dihedral automorphic representation $\pi$ for ${\rm GL}_2(\mathbb A_F)$, one knows that it can be induced from a Hecke character $\mu$ of $K$, where $K$ is a suitable quadratic extension of $F$. Then we will denote it as $I_K^F(\mu)$, or $I(\mu)$ if $K$ and $F$ are understood.\\
We split the proof into four cases. The first three cases will account for the situation where both cuspidal automorphic representations are dihedral, and we distinguish between these three cases based on whether a certain property P holds for none, one, or both of the representations. The fourth case addresses the situation when exactly one of the cuspidal automorphic representations is dihedral.
We shall say that a cuspidal automorphic representation $\pi$ has property P if it is dihedral and, furthermore, that if we express $\pi$ as $I_K^F(\mu)$, then the Hecke character $\mu / \mu^{\tau}$ is invariant under $\tau$, the non-trivial element of ${\rm Gal}(K / F)$. Such $\pi$ can be induced from exactly three different quadratic extensions of $F$.\\
We begin with the case of two dihedral automorphic representations that do not satisfy property P.
Here, the general strategy of the proof follows as in the previous section, though the approach to the analysis needs to be altered. The asymptotic properties of the $L$-functions as real $s \rightarrow 1^+$ may be different from before, in that we expect a higher order pole in certain cases. By way of example, we will discuss two particular incomplete $L$-functions in detail, that of $L^T (s, \pi \boxtimes \pi \times \overline{\pi'}\boxtimes \overline{\pi'})$ and $L^T (s, \pi \boxtimes \pi \times \overline{\pi}\boxtimes \overline{\pi})$.
With regard to the first incomplete $L$-function, we shall explain why it has a pole of order at most two.
If $\pi$, $\pi'$ cannot be induced from the same quadratic extension, then the automorphic tensor product $\pi \boxtimes \overline{\pi'}$ is cuspidal, and so
$L^T(s, \pi \boxtimes \overline{\pi'} \times \pi \boxtimes \overline{\pi'})$
has at most a pole of order one.
If $\pi, \pi'$ can be induced from the same quadratic extension, then let us denote this extension as $K$. We write $\pi =I_K^F(\nu)=I(\nu)$ and $\pi' =I_K^F(\mu)=I(\mu)$, where $\mu, \nu$ are Hecke characters of $K$. We obtain
\begin{align*}
\pi \boxtimes \overline{\pi'} \simeq I(\nu \overline{\mu}) \boxplus I(\nu \overline{\mu}^\tau)
\end{align*}
and thus
\begin{align*}
L^T(s, \pi \boxtimes \overline{\pi'} \times \pi \boxtimes \overline{\pi'})
=& L^T(s, I(\nu \overline{\mu}) \times I(\nu \overline{\mu}))
L^T(s, I(\nu \overline{\mu}) \times I(\nu \overline{\mu}^\tau))^2\\
& \cdot L^T(s, I(\nu \overline{\mu}^\tau) \times I(\nu \overline{\mu}^\tau)).
\end{align*}
If neither $I(\nu \overline{\mu})$ nor $I(\nu \overline{\mu}^\tau)$ is self-dual, then the right-hand side has a pole of order at most two.
Now let us consider the case when exactly one of them is self-dual. We note that the middle $L$-function on the right-hand side will contribute to the order of the pole of the overall expression if and only if $I(\nu \overline{\mu})$ and $I(\nu \overline{\mu}^\tau)$ are dual. However, this is not possible given that one is self-dual and the other is not.
Lastly, we have the case when both are self-dual, in which case the middle expression contributes to the order of the pole if and only if $I(\nu \overline{\mu}) \simeq I(\nu \overline{\mu}^\tau)$. This means that either $\nu =\nu ^\tau$ or $\mu =\mu ^\tau$, implying that either $I(\nu)$ or $I(\mu)$ is not cuspidal, in contradiction to our original assumption.
Note that in the analysis above, the issue of whether $\nu / \nu^\tau$ (or $\mu / \mu^\tau$) is invariant under ${\rm Gal}(K/ F)$ did not arise. This is not the case for the next incomplete $L$-function that we address, where we need to make the assumption that both $\nu / \nu^\tau$ and $\mu / \mu^\tau$ are \textit{not} ${\rm Gal}(K/ F)$-invariant.
We now consider $L^T(s, \pi \boxtimes \pi \times \overline{\pi} \boxtimes \overline{\pi})$ and show that, under our assumption, it has a pole of order at most three. Note that
\begin{align*}
\pi \boxtimes \overline{\pi} &\simeq I(1) \boxplus I(\nu / \nu^\tau)\\
& \simeq 1 \boxplus \chi \boxplus I(\nu / \nu^\tau),
\end{align*}
where $\chi$ is the quadratic Hecke character associated to $K / F$. Since we have assumed that $\nu / \nu^\tau$ is not ${\rm Gal}(K / F)$-invariant, we know that $I(\nu / \nu^\tau)$ is cuspidal. Thus
\begin{align*}
L^T(s, \pi \boxtimes \overline{\pi} \times \pi \boxtimes \overline{\pi})=&\zeta^T(s)L^T(s, \chi)^2 L^T(s, \chi \times \chi)L^T(s, I(\nu / \nu^\tau))\\
&\cdot L^T(s, \chi \times I(\nu / \nu^\tau))L^T(s, I(\nu / \nu^\tau) \times I(\nu / \nu^\tau)).
\end{align*}
On the right-hand side, the first and third $L$-functions have simple poles at $s=1$. The last $L$-function has a simple pole at $s=1$ if $I(\nu / \nu^\tau)$ is self-dual, otherwise it is invertible at that point. The remaining $L$-functions on the right-hand side are all invertible at $s=1$. Therefore the $L$-function on the left-hand side has a pole of order at most three.
Taking a similar approach to the analysis of the remaining incomplete $L$-functions, we obtain
\begin{align*}
\sum \frac{a_v^i \overline{a_v}^j b_v^k \overline{b_v}^l}{{\rm N}v ^s}= &k(i,j,k,l)\cdot \log\left(\frac{1}{s-1}\right) +o\left(\log\left(\frac{1}{s-1}\right)\right)
\end{align*}
as real $s \rightarrow 1 ^+$, where $k (i,j,k,l)$ is a non-negative integer such that
\begin{align*}
k (i,j,k,l) \leq
\left\{ \begin{array}{ccll}
1& \text{ for }&(i,j,k,l)=&(2,1,0,1), (0,1,2,1), (1,2,1,0),\\
&&&\text{ and }(1,0,1,2) \\
2& \text{ for }&(i,j,k,l)=&(1,1,1,1), (2,0,0,2),\text{ and }(0,2,2,0)\\
3& \text{ for }&(i,j,k,l)=&(2,2,0,0)\text{ and }(0,0,2,2).
\end{array} \right.
\end{align*}
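Here the same expansion of $|a_v - b_v|^4$ as in Section~\ref{s2} gives an asymptotic coefficient of at most $3 + 3 + 2 + 2 + 4\cdot 2 = 18$ (the subtracted cross terms, having non-negative coefficients $k \leq 1$, can only lower the total), while $\sum |a_v - b_v|^2\,{\rm N}v^{-s}$ still has asymptotic coefficient $1 + 1 = 2$, since $\pi \not\simeq \pi'$.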
Continuing with our proof, we divide inequality~(\ref{s2ineq1}) by $\log \left( 1 / (s-1) \right)$, take the limit inferior as $s \rightarrow 1 ^+$, and examine the asymptotic properties of the series given our non-twist-equivalent condition. We obtain
\begin{align*}
2 &\leq (18 ) ^{1/2} \cdot \underline{\delta}(S)^{1/2}
\end{align*}
which gives
\begin{align*}
\frac{2}{9} & \leq \underline{\delta}(S).
\end{align*}
Thus $\pi$ and $\pi'$ differ locally at a set of places of lower Dirichlet density at least 2/9.\\
We now consider the case where both dihedral automorphic representations satisfy property P.
Again we write $\pi =I_K^F(\nu)$ and $\pi' =I_K^F(\mu)$, where $\mu, \nu$ are Hecke characters for $K$; this time both $\nu / \nu^\tau$ and $\mu / \mu^\tau$ are ${\rm Gal}(K/ F)$-invariant.
Note that
\begin{align*}
\pi \boxtimes \overline{\pi} \simeq & I(1) \boxplus I\left(\frac{\nu}{\nu^\tau}\right)\\
\simeq & 1 \boxplus \chi \boxplus \frac{\nu}{\nu^\tau} \boxplus \left(\frac{\nu}{\nu^\tau} \cdot \chi \right).
\end{align*}
We also know that $\pi \boxtimes \overline{\pi} \simeq 1 \boxplus {\rm Ad}\pi$, so
\begin{align*}
{\rm Ad}\pi \simeq \chi \boxplus \frac{\nu}{\nu^\tau} \boxplus (\frac{\nu}{\nu^\tau} \cdot \chi )
\end{align*}
and similarly,
\begin{align*}
{\rm Ad}\pi' \simeq \chi \boxplus \frac{\mu}{\mu^\tau} \boxplus (\frac{\mu}{\mu^\tau} \cdot \chi ).
\end{align*}
We now want to consider the lower Dirichlet density of the set
\begin{align*}
S:=\{v \mid a_v({\rm Ad}\pi) \neq a_v({\rm Ad}\pi') \},
\end{align*}
as this will give us a lower bound for the lower Dirichlet density of the set
\begin{align*}
S':=\{v \mid a_v(\pi) \neq a_v(\pi') \}.
\end{align*}
We first determine the decompositions of the relevant incomplete $L$-functions.
\begin{lemma}
Given $\pi =I_K^F(\nu)$ and $\pi' =I_K^F(\mu)$, where $\mu$ and $\nu$ are Hecke characters for $K$ such that both $\nu / \nu^\tau$ and $\mu / \mu^\tau$ are ${\rm Gal}(K/F)$-invariant, we have
\begin{align*}
L^T\left(s, {\rm Ad}\pi \times {\rm Ad}\pi\right)=&\zeta^T (s)
L^T\left(s, \frac{\nu}{\nu^\tau}\cdot \chi\right)^2
L^T\left(s,\frac{\nu}{\nu^\tau}\right)^2
L^T\left(s, \frac{\nu}{\nu^\tau} \cdot \frac{\nu}{\nu^\tau}\right)^2\\
& \cdot L^T\left(s, \frac{\nu}{\nu^\tau} \cdot \frac{\nu}{\nu^\tau} \cdot \chi\right)^2\\
L^T\left(s, {\rm Ad}\pi \times {\rm Ad}\pi'\right)=&\zeta ^T (s)
L^T\left(s, \frac{\nu}{\nu^\tau}\cdot \chi\right)
L^T\left(s, \frac{\nu}{\nu^\tau}\right)
L^T\left(s, \frac{\mu}{\mu^\tau}\cdot \chi\right)\\
& \cdot L^T\left(s,\frac{\nu}{\nu^\tau} \cdot \frac{\mu}{\mu^\tau}\right)^2
L^T\left(s, \frac{\nu}{\nu^\tau} \cdot \frac{\mu}{\mu^\tau} \cdot \chi\right)^2
L^T\left(s, \frac{\mu}{\mu^\tau}\right)\\
L^T\left(s, {\rm Ad}\pi' \times {\rm Ad}\pi'\right)=&\zeta^T (s)
L^T\left(s, \frac{\mu}{\mu^\tau}\cdot \chi\right)^2
L^T\left(s,\frac{\mu}{\mu^\tau}\right)^2
L^T\left(s, \frac{\mu}{\mu^\tau} \cdot \frac{\mu}{\mu^\tau}\right)^2\\
& \cdot L^T\left(s, \frac{\mu}{\mu^\tau} \cdot \frac{\mu}{\mu^\tau} \cdot \chi\right)^2.\\
\end{align*}
\end{lemma}
\begin{proof}
As in the proof for Lemma~\ref{s2lem1}.
\end{proof}
Since $\nu / \nu^\tau$ and $\mu / \mu^\tau$ are ${\rm Gal}(K / F)$-invariant, we deduce that they are quadratic characters. Neither of them is equal to $\chi$ as this would imply that
\begin{align*}
I(\nu) \simeq \nu \boxplus (\nu \cdot \chi),
\end{align*}
or the same for $\mu$, and thus either $\pi$ or $\pi'$ would not be cuspidal.
So $L^T(s, {\rm Ad}\pi \times {\rm Ad}\pi)$ and $L^T(s, {\rm Ad}\pi' \times {\rm Ad}\pi')$ have a pole of order three at $s=1$. We also note that $\nu / \nu^\tau$ cannot be equal to $\mu / \mu^\tau$ or $\chi \cdot \mu / \mu^\tau$ otherwise $\pi$ and $\pi'$ would have adjoint lifts that agree locally almost everywhere and thus be twist-equivalent, in contradiction to the assumption made at the beginning of this section. Therefore, $L^T(s, {\rm Ad}\pi \times {\rm Ad}\pi')$ has a pole of order one at $s=1$.
We apply this to the following inequality, which is obtained via the Ramanujan bound (known to hold for dihedral representations):
\begin{align*}
\sum_{v \not \in T}\frac{|a_v ({\rm Ad}\pi) -a_v({\rm Ad}\pi')|^2}{Nv^s}\leq 16 \sum_{v \in S}\frac{1}{Nv^s}.
\end{align*}
Dividing by $\log(1/ (s-1))$, taking the limit inferior as $s \rightarrow 1^+$, and examining the asymptotic properties of the resulting inequality, we obtain $1/4 \leq \underline{\delta}(S)$. The same bound then holds for the lower Dirichlet density of the set $S'$, which is sufficient to claim that $\pi$ and $\pi'$ differ locally for a set of places of lower Dirichlet density greater than 2/9.\\
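Indeed, by the pole orders established above, the left-hand side of the inequality is asymptotic to $(3 + 3 - 2\cdot 1)\log\left(1/(s-1)\right) = 4 \log\left(1/(s-1)\right)$ as real $s \rightarrow 1^+$, so that comparing coefficients gives $4 \leq 16\, \underline{\delta}(S)$, i.e.\ $\underline{\delta}(S) \geq 1/4$.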
We move on to the case where exactly one of the dihedral automorphic representations satisfies property P, and we use the same notation as before. Without loss of generality, let us say that property P holds for $\pi = I(\nu)$ (i.e., $\nu / \nu^\tau = (\nu / \nu^\tau)^\tau$).
We will compare $\pi = I(\nu)$ and $\pi' = I(\mu)$ and show that $a_v(\pi)$ and $a_v(\pi')$ differ at a set of places of density at least 1/4.
Note that for any place $w$ of $K$, we have that $\nu (w) =\pm \nu^\tau (w)$. For a place $v$ of $F$, this implies $|a_v (I(\nu))|=0 \text{ or }2$. On the other hand, $\mu / \mu^\tau \neq (\mu / \mu^\tau)^\tau$, so $\mu ^2 / (\mu^\tau)^2 $ is not the trivial character.
Then there exists a set of density 1/4 of places $v$ of $F$ which split in $K$ and at which $\mu^2 / (\mu^\tau)^2 \neq 1$ for $w \mid v$. Thus $\mu(w) \neq \pm \mu^\tau (w)$ and so $|a_v(I(\mu))| \neq 0 \text{ or }2$.
Therefore, $I(\nu)$ and $I(\mu)$ differ locally at a set of places of density at least 1/4.\\
Lastly, we address the case where exactly one of the cuspidal automorphic representations is not dihedral.
This follows in a similar manner to the proof of Theorem~\ref{t3}(ii) in the previous section (as well as the first case of this proof). We will elaborate on this for $L^T(s,\pi \boxtimes \pi \times \overline{\pi'}\boxtimes \overline{\pi'})$.
Let us say that $\pi$ is non-dihedral and $\pi' = I(\mu)$.
Then
\begin{align*}
\overline{\pi'}\boxtimes \overline{\pi'}
&\simeq I(\overline{\mu}^2) \boxplus I(\overline{\mu \mu^\tau})\\
& \simeq I(\overline{\mu}^2) \boxplus \overline{\mu \mu^\tau} \boxplus \overline{\mu \mu^\tau} \cdot \chi.
\end{align*}
Note that $I(\overline{\mu}^2)$ may or may not be cuspidal, depending on the properties of $\mu$. One knows that
\begin{align*}
\pi \boxtimes \pi \simeq {\rm Sym}^2 \pi \boxplus \omega,
\end{align*}
where $\omega$ is the central character of $\pi$, and ${\rm Sym}^2 \pi$ is cuspidal.
Making use of these, we find that
\begin{align*}
L^T(s,\pi \boxtimes \pi \times \overline{\pi'}\boxtimes \overline{\pi'})
= & L^T(s, {\rm Sym}^2 \pi \times I(\overline{\mu}^2)) L^T(s, {\rm Sym}^2 \pi \times \overline{\mu \mu^\tau})\\
& \cdot L^T(s, {\rm Sym}^2 \pi \times \overline{\mu \mu^\tau}\cdot \chi)
L^T(s, \omega \times I(\overline{\mu}^2))\\
&\cdot L^T(s, \omega \times \overline{\mu \mu^\tau}) L^T(s, \omega \times \overline{\mu \mu^\tau}\cdot \chi).
\end{align*}
The first three $L$-functions on the right-hand side will be invertible at $s=1$. The fourth $L$-function will either be invertible at $s=1$ or have a simple pole there.
At most one of the last two $L$-functions (on the right-hand side) can have a (simple) pole at $s=1$. So the $L$-function on the left-hand side has a pole of order at most two.
The analysis of the remaining quadruple-product $L$-functions follows in a similar way, and we obtain:
\begin{align*}
\sum \frac{a_v^i \overline{a_v}^j b_v^k \overline{b_v}^l}{{\rm N}v ^s}= &k(i,j,k,l)\cdot \log\left(\frac{1}{s-1}\right) +o\left(\log\left(\frac{1}{s-1}\right)\right)
\end{align*}
as real $s \rightarrow 1 ^+$, where $k (i,j,k,l)$ is a non-negative integer such that
\begin{align*}
k (i,j,k,l) \leq
\left\{ \begin{array}{ccll}
0& \text{ for }&(i,j,k,l)=&(2,1,0,1), (0,1,2,1), (1,2,1,0),\\
&&&\text{ and }(1,0,1,2) \\
1& \text{ for }&(i,j,k,l)=&(1,1,1,1)\\
2& \text{ for }&(i,j,k,l)=&(2,2,0,0), (2,0,0,2),\text{ and }(0,2,2,0)\\
4& \text{ for }&(i,j,k,l)=&(0,0,2,2).
\end{array} \right.
\end{align*}
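Termwise, the expansion of $|a_v - b_v|^4$ now gives an asymptotic coefficient of at most $2 + 4 + 2 + 2 + 4\cdot 1 = 14$, which is the constant appearing below.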
Divide inequality~(\ref{s2ineq1}) by $\log \left( 1 / (s-1) \right)$ and take the limit inferior as $s \rightarrow 1 ^+$. Then
\begin{align*}
2 &\leq (14) ^{1/2} \cdot \underline{\delta}(S)^{1/2}
\end{align*}
giving
\begin{align*}
\frac{2}{7} & \leq \underline{\delta}(S),
\end{align*}
which completes the proof of Theorem~\ref{t3}.
\section{Examples}\label{s4}
We begin by constructing examples to establish that Theorems~\ref{t1},~\ref{t2}, and~\ref{t3}(ii) are sharp. We then describe the example of Serre which demonstrates that Ramakrishnan's refinement of strong multiplicity one~\cite{Ra94} is also sharp. We finish by addressing the remaining examples from Theorem~\ref{obs}.
\subsection{An octahedral example for Theorem~\ref{t1}.\\}\label{s4-1}
We prove the existence of a pair of non-isomorphic octahedral automorphic representations that agree locally for a set of places of density 3/4, which shows that Theorem~\ref{t1} is sharp.
Let $\widetilde{S_4}$ denote the binary octahedral group, which is a double cover of the octahedral group $S_4$ and has presentation
\begin{align*}
\langle \alpha,\beta,\gamma \mid \alpha ^4 =\beta ^3 =\gamma ^2 =-1 \rangle.
\end{align*}
The conjugacy classes are $[1],[-1],[\alpha ^i],[\beta ^j],[\gamma]$, where $i =1,2,3$ and $j =1,2$. They have sizes 1, 1, 6, 8, and 12, respectively.
Two of the 2-dimensional complex irreducible representations of the binary octahedral group have the following character table:
\begin{center}
\begin{tabular}{r|c|c|c|c|c|c|c|c}
&$[1]$&$[-1]$&$[\alpha]$&$[\alpha^2]$& $[\alpha^3]$&$[\beta]$&$[\beta^2]$&$[\gamma]$ \\ \hline
$\eta$ &2&$-2$&$\sqrt{2}$&0&$-\sqrt{2}$&1&$-1$&0 \\
$\eta'$ &2&$-2$&$-\sqrt{2}$&0&$\sqrt{2}$&1&$-1$&0 \\
\end{tabular}
\end{center}
Given number fields $K$ and $F$ such that ${\rm Gal}(K/ F) \simeq \widetilde{S_4}$ and given the complex representations $\rho, \rho'$ associated to this Galois group which arise from $\eta, \eta'$ (respectively), we apply the Chebotarev density theorem to determine that
\begin{align*}
{\rm tr}\rho ({\rm Frob}_v) = {\rm tr}\rho' ({\rm Frob}_v)
\end{align*}
holds for a set of finite places $v$ of density exactly 3/4.
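Explicitly, $|\widetilde{S_4}| = 1 + 1 + 3\cdot 6 + 2\cdot 8 + 12 = 48$, and the characters $\eta$ and $\eta'$ agree precisely away from the classes $[\alpha]$ and $[\alpha^3]$, which together contain $12$ elements; this gives the density $(48 - 12)/48 = 3/4$.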
Applying the Langlands--Tunnell theorem (see~\cite{Tu81}), we obtain the corresponding cuspidal automorphic representations $\pi$ and $\pi'$ with the required properties.
\subsection{A dihedral example for Theorem~\ref{t2}.\\} \label{s4-2}
We shall establish the existence of a pair of dihedral cuspidal automorphic representations that are not twist-equivalent but agree locally for a set of places of density 7/9.
Consider two number fields $K, K '$ which are $S_3$-extensions of a number field $F$ such that $K \cap K '$ is a degree 2 (Galois) extension of $F$ (for example, $K =\mathbb Q (\zeta_3, \sqrt[3]{5})$, $K' =\mathbb Q (\zeta_3, \sqrt[3]{7})$, and $F =\mathbb Q$).
Note that $S_3$ has a complex irreducible representation of degree 2 that has the following character table:
\begin{center}
\begin{tabular}{r|c|c|c}
&$[(1)]$&$[(12)]$&$[(123)]$ \\ \hline
$\tau$&2&0&$-1$ \\
\end{tabular}
\end{center}
Fixing isomorphisms
\begin{align*}
{\rm Gal}(K/F)\simeq S_3 \simeq {\rm Gal}(K'/F)
\end{align*}
we obtain two dihedral Artin representations $\rho, \rho'$.
We now establish the density of the set of primes at which the respective traces of Frobenius agree, and determine that the two representations are not twist-equivalent. Since
\begin{align*}
{\rm Gal}(K K'/ F)\simeq \{(\phi,\psi) \in {\rm Gal}(K/F)\times {\rm Gal}(K'/F)\mid \phi |_{K \cap K'} = \psi |_{K \cap K'} \}
\end{align*}
and
\begin{align*}
{\rm Frob}_{v, K K' / F}|_{K^{(')}} ={\rm Frob}_{v, K^{(')} / F}
\end{align*}
we apply the Chebotarev density theorem to determine the density of places at which pairs of the form
$({\rm tr}\rho ({\rm Frob}_{v, K / F}), {\rm tr}\rho ({\rm Frob}_{v, K' / F}))$
occur:
\begin{center}
\begin{tabular}{r|c|c|c}
&$\phi,\psi$&$({\rm tr}\rho ({\rm Frob}_{v, K / F}), {\rm tr}\rho ({\rm Frob}_{v, K' / F}))$ &density\\ \hline
&(1),(1)&(2,2)&1/18\\
&(1),(123)&(2,$-1$)&2/18\\
&(123),(1)&($-1$,2)&2/18\\
&(123),(123)&($-1$,$-1$)&4/18\\
&(12),(12)&(0,0)&9/18\\
\end{tabular}
\end{center}
Examining the table, we conclude that the respective traces of Frobenius agree exactly for a set of places of density $(1+4+9)/18=7/9$.
The strong Artin conjecture for solvable groups~\cite{AC89} then implies the existence of dihedral automorphic representations $\pi, \pi'$ that agree locally for a set of density 7/9.
Now we determine that the automorphic representations are not twist-equivalent. Observe that, for a positive density of places $v$, both
\begin{align*}
a_v(\pi) &= 2\\
a_v(\pi') &= -1
\end{align*}
hold. The quotient of their absolute values is not equal to 1, and so $\pi$ and $\pi'$ cannot differ by a unitary Hecke character.
Therefore, we can conclude that Theorem~\ref{t2} is sharp.
\subsection{An icosahedral example for Theorem~\ref{t3}(ii).\\} \label{s4-3}
We construct a pair of non-twist-equivalent (odd) icosahedral automorphic representations whose Hecke eigenvalues are equal at a set of places of density 3/5, which will imply that Theorem~\ref{t3}(ii) is sharp.
To address the structure of icosahedral Artin representations, we make use of the following proposition from~\cite[Proposition 2.1]{Wa03}:
\begin{proposition}[Wang] \label{prop-wang}
Let
\begin{align*}
r: {\rm Gal}(\overline{\mathbb Q}/ \mathbb Q) \rightarrow {\rm GL}_2(\mathbb C)
\end{align*}
be an icosahedral representation, and denote its image as $G$.
Then
\begin{align*}
G \simeq (\widetilde{A_5}\times \mu_{2m})/ \pm (1,1)
\end{align*}
where $\widetilde{A_5}$ is the binary icosahedral group (which is isomorphic to ${\rm SL}_2 \mathbb F_5$), and $\mu_{2m}$ is the group of $2m$-th roots of unity.
An irreducible representation $\rho$ of $G$ can be decomposed uniquely into a pair $(\rho_0,\chi)$ where $\rho_0 =\rho |_{\widetilde{A_5}}$ is an irreducible representation of $\widetilde{A_5}$, $\chi = \rho |_{\mu_{2m}}$ is a (1-dimensional) representation of $\mu_{2m}$, and we have the condition
\begin{align*}
\rho_0 (-1) \chi (-1) =I.
\end{align*}
Every such pair of irreducible representations gives an irreducible representation of $G$.
\end{proposition}
Let us now consider the 2-dimensional irreducible representations for the binary icosahedral group $\widetilde{A_5}$. This group has presentation
\begin{align*}
\langle \alpha,\beta,\gamma \mid \alpha ^5 =\beta ^3 =\gamma ^2 =-1 \rangle.
\end{align*}
The conjugacy classes are $[1],[-1],[\alpha^i],[\beta^j],$ and $[\gamma]$, where $i =1,2,3,4$ and $j =1,2$. Their sizes are 1, 1, 12, 20, and 30, respectively.
The binary icosahedral group has exactly two complex irreducible representations of dimension 2, with character table:
\begin{center}
\begin{tabular}{r|c|c|c|c|c|c|c|c|c}
&$\{1\}$&$\{-1\}$&$[\alpha]$&$[\alpha^2]$& $[\alpha^3]$&$[\alpha^4]$&$[\beta]$&$[\beta^2]$&$[\gamma]$ \\ \hline
$\eta$&2&$-2$&$c$&$-c '$&$c'$&$-c$&1&$-1$&0 \\
$\eta'$&2&$-2$&$c'$&$-c$&$c$&$-c'$&1&$-1$&0 \\
\end{tabular}
\end{center}
where
\begin{align*}
c =\frac{1 +\sqrt{5}}{2} \text{ and } c' =\frac{1-\sqrt{5}}{2}.
\end{align*}
Fix an Artin representation
\begin{align*}
\tau: {\rm Gal}(\overline{\mathbb Q}/ \mathbb Q) \rightarrow {\rm GL}_2(\mathbb C)
\end{align*}
that is of odd icosahedral type.
Now Proposition~\ref{prop-wang} implies that $\tau$ factors through a finite Galois group $G$ which is isomorphic to
\begin{align*}
(\widetilde{A_5}\times \mu_{2m})/ \pm (1,1)
\end{align*}
for some $m$.
Decomposing the representation into the pair $(\tau_0,\chi)$ of irreducible representations on $\widetilde{A_5}$ and $\mu_{2m}$ (respectively), we note that $\tau_0$ must be isomorphic to either $\eta$ or $\eta'$ from the character table above. Let $\tau_0'$ correspond to the other representation. We construct another icosahedral Artin representation $\tau'$, corresponding to the pair $(\tau_0',\chi)$, which is also odd. We then apply the Chebotarev density theorem, which implies that
\begin{align*}
{\rm tr\ }\tau ({\rm Frob}_v)={\rm tr\ }\tau' ({\rm Frob}_v)
\end{align*}
holds for a set of places $v$ of density 3/5.
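Explicitly, $|\widetilde{A_5}| = 1 + 1 + 4\cdot 12 + 2\cdot 20 + 30 = 120$, and $\eta$, $\eta'$ agree precisely away from the four classes $[\alpha^i]$, which together contain $48$ elements; since the traces of $\tau$ and $\tau'$ differ from those of $\eta$ and $\eta'$ only by multiplication by a common non-zero scalar coming from $\chi$, this gives the density $(120 - 48)/120 = 3/5$.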
To these two representations $\tau$ and $\tau'$ we apply the strong Artin conjecture for odd icosahedral representations (due to Khare--Wintenberger~\cite{KW109,KW209} and Kisin~\cite{Ki09}) and obtain corresponding icosahedral automorphic representations, which we denote as $\pi$ and $\pi'$, respectively.
We now determine that they are not twist-equivalent. There exist a positive density of places $v$ where
\begin{align*}
a_v(\pi) &= c \cdot \chi(\alpha)\\
a_v(\pi') &= c' \cdot \chi(\alpha)
\end{align*}
both hold simultaneously.
The quotient of their absolute values is not 1, so $\pi$ and $\pi'$ cannot differ by a unitary Hecke character.\\
As mentioned earlier, we point out that we do not need to rely on the work of Khare--Wintenberger and Kisin in order to produce a suitable pair of icosahedral representations. Instead we can make use of one of the earlier examples of (odd) icosahedral representations, such as the one whose modularity is addressed by~\cite{BDST01} and is described in~\cite{JM}. This representation $\rho$ is associated to the polynomial (which describes the splitting field) $x ^5 - 8 x ^3 - 2 x ^2 + 31x + 74$, has conductor 3547, and odd quadratic determinant. It corresponds to a weight one modular form that we will denote as $f = f (\rho)$.
As before, we write $\rho$ as $(\rho_0,\chi)$ and construct another icosahedral Artin representation $\rho'$ which corresponds to $(\rho_0',\chi)$, where $\rho_0$ corresponds to one of the complex irreducible 2-dimensional representations $\eta$ or $\eta'$ in the character table above, and $\rho_0'$ corresponds to the other. Now ${\rm tr}(\rho ({\rm Frob}_p)) \in \mathbb Q (\sqrt{5})$, and this new representation $\rho'$ can also be constructed by pre-composing $\rho$ with the non-trivial automorphism $\tau$ of ${\rm Gal}(\mathbb Q (\sqrt{5})/\mathbb Q)$. On the modular side, $f ^\tau$ is the weight one icosahedral modular form that corresponds to $\rho'$. By the same reasoning as earlier, we note that $f$ and $f^\tau$ are not twist-equivalent. Applying the Chebotarev density theorem, we see that the set of primes at which $f$ and $f ^\tau$ have equal Hecke eigenvalues has density 3/5.
\subsection{An example of Serre.\\} \label{s4-4}
Here we briefly describe the dihedral example of J.-P.~Serre~\cite{Se77}
which demonstrates that D.~Ramakrishnan's refinement of strong multiplicity one~\cite{Ra94} is sharp.
The quaternion group $Q_8$ has a unique 2-dimensional complex irreducible representation, which we will denote as $\tau$.
Consider the group
\begin{align*}
G = Q_8 \times \{\pm 1\}
\end{align*}
and define two representations of $G$, denoted by $\rho $ and $\rho '$, as $ \tau \otimes 1$ and $ \tau \otimes {\rm sgn}$, respectively. We note that both $\rho$ and $\rho'$ are irreducible.
The quaternion group, and thus $G$, is known to occur as a Galois group of a finite extension of number fields. Therefore, any representation of $G$ can be lifted to a representation of ${\rm Gal}(\overline{\mathbb Q}/\mathbb Q)$.
Now let $S$ be the subset $\{(+1, -1),(-1,-1)\}$ of $G$. The traces of $\rho$ and $\rho'$ agree exactly outside $S$, and note that $ |S | / |G | = 2/ 2 ^{4} = 1 /8$.
Since $G$ is nilpotent, by Arthur--Clozel~\cite{AC89} the strong Artin conjecture holds for $\rho$ and $\rho'$, so there exist corresponding (dihedral) cuspidal automorphic representations $\pi$ and $\pi'$.
One concludes that D.~Ramakrishnan's theorem is sharp.
\subsection{Further examples.\\}
We construct the three remaining examples from Theorem~\ref{obs}.
The twist-equivalent icosahedral pair of automorphic representations can be constructed as follows:
Consider the icosahedral cuspidal automorphic representation $\pi$ from subsection~\ref{s4-3} and let $\chi$ be a quadratic Hecke character.
Using the character table from that subsection and the Chebotarev density theorem, we see that the pair $\pi$ and $\pi \otimes \chi$ agree locally exactly at a set of places of density 5/8.\\
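To see this, note that $a_v(\pi \otimes \chi) = \chi(v)\, a_v(\pi)$ at unramified $v$, so the two representations agree at $v$ exactly when $\chi(v) = 1$ or $a_v(\pi) = 0$; the latter happens precisely when the image of Frobenius has $\widetilde{A_5}$-component in the class $[\gamma]$, a set of density $30/120 = 1/4$. Choosing $\chi$, as we may, with its associated quadratic field linearly disjoint from the field cut out by $\pi$, the Chebotarev density theorem gives a density of agreement equal to $\frac12 + \frac12 \cdot \frac14 = \frac58$.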
For the construction of the two examples of pairs of tetrahedral representations in Theorem~\ref{obs}, we will need to make use of the character table for the binary tetrahedral group.
First, let $i,j,$ and $k$ be the quaternion units and let $\omega = - \frac{1}{2} (1 +i +j +k)$ (note that $\omega$ has order 3). Then the binary tetrahedral group is generated by $ i,j, \omega $.
The character table for the irreducible representations of dimensions one and two is:
\begin{center}
\begin{tabular}{r|c|c|c|c|c|c|c}
&$\{1\}$&$\{-1\}$&$[i]$&$[ \omega ]$&$[-\omega]$&$[\omega ^2]$&$[-\omega ^2]$ \\ \hline
$\chi _0$&1&1&1&1&1&1&1\\
$\chi _1$&1&1&1&$\zeta$&$\zeta$&$\zeta ^2$&$\zeta ^2$ \\
$\chi _2$&1&1&1&$\zeta ^2$&$\zeta ^2$&$\zeta$&$\zeta$\\
$\rho$&2&$-2$&0&$-1$&1&$-1$&1\\
$\rho \otimes \chi _1$ &2&$-2$&0&$-\zeta$&$\zeta$&$-\zeta ^2$&$\zeta ^2$ \\
$\rho \otimes \chi _2$ &2&$-2$&0&$-\zeta ^2$&$\zeta ^2$&$-\zeta$&$\zeta$\\
\end{tabular}
\end{center}
where $\zeta = e ^{2 \pi i /3}. $
Note that the conjugacy class $[i]$ has size 6, and the conjugacy classes $[\omega]$, $[-\omega]$, $[\omega ^2]$, and $[-\omega ^2]$ all have size 4.
Since $\rho$ has tetrahedral image in ${\rm PGL}_2(\mathbb C)$, it corresponds, by Langlands~\cite{La80}, to a tetrahedral automorphic representation $\pi$ for GL(2). If we twist $\pi$ by a quadratic Hecke character $\chi$, we note that $\pi $ and $ \pi \otimes \chi$ agree locally at a set of places of density 5/8.\\
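Indeed, $a_v(\pi) = 0$ precisely when Frobenius lands in the class $[i]$, of density $6/24 = 1/4$, and (for $\chi$ with quadratic field linearly disjoint from the field cut out by $\pi$) the computation $\frac12 + \frac12\cdot\frac14 = \frac58$ proceeds exactly as in the icosahedral twist example above.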
We move on to the example of the non-twist-equivalent pair of tetrahedral representations. Now an Artin representation of tetrahedral type which factors through a binary tetrahedral Galois group ${\rm Gal}(K/ F)$ has the structure
\begin{diagram}
K\\
\dLine_{2}\\
L\\
\dLine_{4}\\
k\\
\dLine_{3}\\
F
\end{diagram}
where $L / F$ and $k / F$ are Galois extensions with groups isomorphic to $A_4$ and $\mathbb Z/3\mathbb Z$, respectively.
We consider two tetrahedral representations $\rho_1 $ and $ \rho_2$ such that they factor through Galois groups ${\rm Gal}(K_1/ F) $ and $ {\rm Gal}(K_2/ F)$, respectively, with structure
\begin{diagram}
K_1&&&&K_2 \\
\dLine &&&& \dLine \\
L_1&&&&L_2 \\
&\rdLine && \ldLine & \\
& &k&& \\
& &\dLine&& \\
&& F&&
\end{diagram}
where $L_1 \neq L_2$ and $K_1 \neq K_2$.
We now need to establish the Galois group of the compositum $K_1 K_2$.
We make use of the embedding
\begin{align*}
{\rm Gal}(K_1K_2/F) &\hookrightarrow {\rm Gal}(K_1/F) \times {\rm Gal}(K_2/F)\\
\sigma & \mapsto (\sigma|_{K_1} , \sigma |_{K_2} ),
\end{align*}
which has image
\begin{align*}
H = \{ (\phi , \psi ) \mid \phi |_{K_1 \cap K_2} = \psi |_{K_1 \cap K_2}\}.
\end{align*}
Expressing ${\rm Gal}(K_1 K_2 / F)$ as a subset of ${\rm Gal}(K_1/ F) \times {\rm Gal}(K_2/ F)$, we see that it consists exactly of:
\begin{itemize}
\item pairs of the form $(a,b)$ where $ a,b \in \{ \pm 1, \pm i, \pm j, \pm k\}$,
\item pairs of the form $(a,b)$ where $ a,b \in \{ \pm \omega , \pm i\omega, \pm j\omega, \pm k\omega\}$,
\item pairs of the form $(a,b)$ where $ a,b \in \{ \pm \omega^2 , \pm i\omega^2, \pm j\omega^2, \pm k\omega^2\}$.
\end{itemize}
Since
\begin{align*}
{\rm Frob}_{v, K_1 K_2 / F}|_{K_i} ={\rm Frob}_{v, K_i / F}
\end{align*}
for $i = 1,2$ (as mentioned earlier), we now just need to count the occurrences of pairs whose components have the same trace. These are:
\begin{itemize}
\item pairs $(1,1)$ and $ (-1,-1)$
\item pairs $(a,b)$ where $ a,b \in \{\pm i, \pm j, \pm k\}$
\item pairs $(a,b)$ where $ a,b \in \{\omega, -i \omega, -j \omega, -k \omega \}$
\item pairs $(a,b)$ where $ a,b \in \{- \omega, i \omega, j \omega, k \omega \}$
\item pairs $(a,b)$ where $ a,b \in \{\omega ^2 , i \omega ^2, j \omega ^2, k \omega ^2 \}$
\item pairs $(a,b)$ where $ a,b \in \{-\omega ^2 , -i \omega ^2,- j \omega ^2,- k \omega ^2 \}$.
\end{itemize}
Counting the number of these pairs ($1 +1 +36 +16 +16 +16 +16 = 102$) out of a group of order 192, we obtain, via the Chebotarev density theorem, a density of $102/192 = 17/32$ for the primes at which the images of Frobenius under the two representations have equal traces.
We lift these representations to ${\rm Gal}(\overline{F}/F)$, and by Langlands~\cite{La80} we obtain the existence of tetrahedral cuspidal automorphic representations $\pi, \pi'$ such that the set
\begin{align*}
\{v \mid v \text{ unramified for $\pi$ and $\pi'$, } a_v(\pi)=a_v(\pi')\}
\end{align*}
has a density of $17/32$.
The element $(1,i)$ in ${\rm Gal}(K_1 K_2 / F)$ has components which have traces 2 and 0, respectively. This means that there exist places $v$ of $F$ where both
\begin{align*}
a_v(\pi) &= 2 \\
a_v(\pi') &= 0
\end{align*}
hold. Therefore, $\pi$ and $\pi'$ cannot be twist-equivalent by a unitary Hecke character.
\section{Introduction}
The following two remarkable theorems were proved by H.~Brezis,
J.~Van Schaftingen and Po-Lam~Yung in \cite{hhh3,hhh2}:
\begin{theorem}\label{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhj}
Let $q\geq 1$. Then, for every dimension $N\geq 1$ there exist
constants $c_{N},C_{N}>0$ such that, for every $u\in
C_c^{\infty}(\R^N)$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyj}
c^q_{N}\,\int_{\R^N}\big|\nabla u(x)\big|^qdx\leq\\
\sup\limits_{s\in(0,+\infty)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\R^N\times \R^N\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}\\ \leq
C_{N}\,\int_{\R^N}\big|\nabla u(x)\big|^qdx \,.
\end{multline}
\end{theorem}
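\begin{remark}
As a sanity check, both sides of the inequality in Theorem \ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhj} scale identically under dilations. For $u_\lambda(x):=u(\lambda x)$ with $\lambda>0$ we have $\int_{\R^N}|\nabla u_\lambda(x)|^qdx=\lambda^{q-N}\int_{\R^N}|\nabla u(x)|^qdx$, while the change of variables $(x,y)\mapsto(\lambda x,\lambda y)$ gives
\begin{equation*}
s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\;:\;\frac{\big|u_\lambda(y)-u_\lambda(x)\big|^q}{|y-x|^{q+N}}>s\bigg\}\Bigg)=\lambda^{q-N}\,s'\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\;:\;\frac{\big|u(y)-u(x)\big|^q}{|y-x|^{q+N}}>s'\bigg\}\Bigg)
\end{equation*}
with $s'=\lambda^{-(q+N)}s$, so the supremum over $s\in(0,+\infty)$ is also multiplied by exactly $\lambda^{q-N}$.
\end{remark}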
\begin{theorem}\label{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjggg}
Let $q\geq 1$. Then, for every $u\in C_c^{\infty}(\R^N)$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfh}
\lim\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\R^N\times \R^N\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}>
s\bigg\}\Bigg)\Bigg\}\\=\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{N}\int_{\R^N}\big|\nabla
u(x)\big|^qdx \,.
\end{multline}
\end{theorem}
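\begin{remark}
In dimension $N=1$ the constant in Theorem \ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjggg} is easily evaluated: $S^{0}=\{-1,+1\}$ and $\mathcal{H}^{0}$ is the counting measure, so $\int_{S^{0}}|z_1|^qd\mathcal{H}^{0}(z)=2$ and the limit equals $2\int_{\R}|u'(x)|^qdx$.
\end{remark}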
These results shed light on what happens when one replaces the
strong $L^q$ by weak $L^q$ in the expression for the Gagliardo
semi-norm $|u|_{W^{s,q}}$, computed at $s=1$. Several interesting
open problems, related to Theorems
\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhj} and
\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjggg}, were raised in
\cite{hhh2}:
\begin{itemize}
\item[{\bf(i)}] If $u\in L^q$ for some $q\ge1$ satisfies
\begin{equation*}
\sup\limits_{s\in(0,+\infty)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\R^N\times \R^N\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}>
s\bigg\}\Bigg)\Bigg\}<+\infty\,,
\end{equation*}
does it imply that $u\in W^{1,q}$ (when $q>1$) or $u\in BV$ (when
$q=1$)?
\item[{\bf (ii)}] Does Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhj} hold in the cases $u\in
W^{1,q}$ (when $q>1$) and $u\in BV$ (when $q=1$)?
\item[{\bf (iii)}] Same question as in (ii), but for Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjggg}.
\item[{\bf (iv)}] Given $u\in L^q$, does
\begin{equation*}
\lim\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\R^N\times \R^N\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}=0\,,
\end{equation*}
imply that $u$ necessarily equals a constant a.e.\,in $\R^N$?
\item[\bf {(v)}] For $r\in(0,q)$, characterize the class of functions $u\in L^q$,
satisfying
\begin{equation}\label{hjgjgjggjg}
\sup\limits_{s\in(0,\infty)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\R^N\times \R^N\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{r+N}}>
s\bigg\}\Bigg)\Bigg\}<+\infty\,.
\end{equation}
In particular, determine how this class is related to an appropriate
Besov space.
\end{itemize}
In the current paper we give full affirmative answers to questions
{\bf (i)}, {\bf (ii)} and {\bf (iv)}, see Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjh} and Corollary \ref{gbjgjgjgj}
below. Moreover, we give a partial answer to question {\bf (iii)},
see Corollary \ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjgggghhdf}
below (in particular, we completely resolve this question in the
case $q>1$). Concerning question {\bf (v)}, we give only partial
information about the quantity appearing in \eqref{hjgjgjggjg},
obtained by combining Theorem \ref{hjkgjkfhjff}, which treats the
quantities
\begin{multline}\label{ghfghfhdf}
\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\R^N\times \R^N\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}\\
\text{and}\quad\quad
\liminf\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\R^N\times \R^N\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}\,,
\end{multline}
for general $r$, with Proposition \ref{gjggjfhhffhfhjgghf}.
Our first main result, answering Questions {\bf (i)} and {\bf (ii)},
is:
\begin{theorem}\label{hjkgjkfhjffgggvggoopikhhhkjh}
Let $\Omega\subset\R^N$ be an open domain with Lipschitz boundary
and let $q\geq 1$. Then there exist constants $C_{\Omega}>0$ and
${\widetilde C}_{N}>0$ satisfying $C_{\Omega}=1$ if $\Omega=\R^N$,
such that for every $u\in L^q(\Omega,\R^m)$ we have:
\begin{enumerate}
\item [(i)]
When $q>1$,
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhj}
\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{(N+q)}\,\int_\Omega\big|\nabla
u(x)\big|^qdx \\
\leq\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}\leq\\
\sup\limits_{s\in(0,+\infty)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\} \leq
C^q_{\Omega}{\widetilde C}_{N}\,\int_\Omega\big|\nabla u(x)\big|^qdx
\,,
\end{multline}
with the convention that $\int_\Omega\big|\nabla
u(x)\big|^qdx=+\infty$ if $u\notin W^{1,q}(\Omega,\R^m)$.
\item[(ii)] When $q=1$,
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkjjkkhjkhkhjhjhjljkjk}
\frac{\int_{S^{N-1}}|z_1|d\mathcal{H}^{N-1}(z)}{(N+1)}\,\|Du\|(\Omega)
\leq\\
\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|}{|y-x|^{1+N}}> s\bigg\}\Bigg)\Bigg\}\leq\\
\sup\limits_{s\in(0,+\infty)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|}{|y-x|^{1+N}}> s\bigg\}\Bigg)\Bigg\} \leq
C_{\Omega}{\widetilde C}_{N}\,\|Du\|(\Omega)\,,
\end{multline}
with the convention $\|Du\|(\Omega)=+\infty$ if $u\notin
BV(\Omega,\R^m)$.
\end{enumerate}
\end{theorem}
\begin{remark}
Setting ${\tilde c}_N=\frac{\int_{S^{N-1}}|z_1|d\mathcal{H}^{N-1}(z)}{(N+1)\big(\mathcal{H}^{N-1}(S^{N-1})+1\big)}$, it is easy to deduce from H\"older's inequality that
$$
{\tilde c}_N^q\le
\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{(N+q)}\,.
$$
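Indeed, one possible chain of elementary estimates, combining
H\"older's inequality on $S^{N-1}$ (with exponents $q$ and
$\frac{q}{q-1}$) with the crude bounds
$\big(\mathcal{H}^{N-1}(S^{N-1})\big)^{q-1}\leq\big(\mathcal{H}^{N-1}(S^{N-1})+1\big)^q$
and $(N+1)^q\geq N+q$ (the latter valid for every $q\geq 1$ and
$N\geq 1$), reads:
$$
{\tilde c}_N^q\leq\frac{\big(\mathcal{H}^{N-1}(S^{N-1})\big)^{q-1}\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{(N+1)^q\big(\mathcal{H}^{N-1}(S^{N-1})+1\big)^q}\leq
\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{(N+1)^q}\leq\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{N+q}\,.
$$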
Therefore, the lower-bound in inequality
\eqref{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhj}
can also be written, analogously to
\eqref{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyj},
as
$$
{\tilde c}_N^q\int_\Omega\big|\nabla u(x)\big|^qdx\le
\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}\,.
$$
\end{remark}
\begin{remark}\label{hkhjggjhgh}
Only the lower bound in Theorem \ref{hjkgjkfhjffgggvggoopikhhhkjh}
requires a non-trivial proof. Indeed, the upper bound in this
theorem follows quite easily from Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhj}, by extending
$u\in W^{1,q}$ or $u\in BV$ from $\Omega$ to $\R^N$ and then
approximating its gradient seminorm by smooth functions in the
standard way; see the technical Lemma \ref{fhgolghoi} for details.
\end{remark}
The proof of Theorem\,\ref{hjkgjkfhjffgggvggoopikhhhkjh} is given in
Section\,\ref{sec:r=q} below. From Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjh} we deduce the next corollary,
which provides a positive answer to Question {\bf (iv)} (see
\cite[Open Problem\,1]{hhh2}):
\begin{corollary}\label{gbjgjgjgj}
Let $\Omega\subset\R^N$ be an open domain, $q\geq 1$ and $u\in
L^q(\Omega,\R^m)$. If
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjkhkjkk}
\lim\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}=0\,,
\end{equation}
then $u$ necessarily equals a constant a.e.\,in $\Omega$.
\end{corollary}
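Let us indicate one way to deduce Corollary \ref{gbjgjgjgj} from
Theorem \ref{hjkgjkfhjffgggvggoopikhhhkjh}. For every open ball
$B\subset\Omega$ we trivially have
$$
\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in B\times
B\;:\;\frac{\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}>
s\bigg\}\Bigg)\leq\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{\big|u(
y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\,,
$$
so \eqref{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjkhkjkk}
holds with $B$ in place of $\Omega$ as well; the lower bounds in
Theorem \ref{hjkgjkfhjffgggvggoopikhhhkjh}, applied in $B$, then
force $\nabla u\equiv 0$ in $B$ when $q>1$ (respectively
$\|Du\|(B)=0$ when $q=1$). Hence $u$ is locally constant, and
therefore constant, a.e.\,in the domain $\Omega$.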
Regarding Question {\bf (iii)}, the following result provides a
positive answer for $q>1$, and for $q=1$ under the additional
assumption $u\in W^{1,1}\subsetneq BV$.
\begin{corollary}\label{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjgggghhdf}
Let $q\geq 1$ and let $\Omega\subset\R^N$ be an open set. Then, for
every $u\in W^{1,q}(\R^N,\R^m)$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhzz}
\lim\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}>
s\bigg\}\Bigg)\Bigg\}\\=\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{N}\int_{\Omega}\big|\nabla
u(x)\big|^qdx \,.
\end{multline}
\end{corollary}
Actually,
Corollary\,\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjgggghhdf} is
just a special case of the following more general result, in which
the quantity appearing on the L.H.S.\,of
\eqref{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhzz}
is replaced by a more general one;
Corollary\,\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjgggghhdf}
corresponds to the special choice $F(a,y,x):=|a|^q$:
\begin{theorem}\label{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjgggghhdf11}
Let $q\geq 1$ and let $\Omega\subset\R^N$ be an open set. Let
$F:\R\times\R^N\times\R^N\to[0,+\infty)$ be a continuous function,
such that there exists $C>0$ satisfying $0\leq F(a,y,x)\leq C|a|^q$
for every $a\in\R$ and every $x,y\in\R^N$. Moreover, assume that
$F(a,y,x)$ is non-decreasing in the $a$--variable on $[0,+\infty)$
for every fixed $(y,x)\in\R^N\times\R^N$. Then, for every $u\in
W^{1,q}(\R^N,\R^m)$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhzz11}
\lim\limits_{s\to +\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times\Omega\;:\;
F\bigg(\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>
s\Bigg\}\Bigg)\\
=\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}F\bigg(\big|\nabla
u(x)\big||z_1|,x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx\,.
\end{multline}
\end{theorem}
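Indeed, for the choice $F(a,y,x):=|a|^q$ the inner integrand on the
R.H.S.\,of
\eqref{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhzz11}
equals $\big|\nabla u(x)\big|^q|z_1|^q$, so that
$$
\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}\big|\nabla
u(x)\big|^q|z_1|^q\,d\mathcal{H}^{N-1}(z)\Bigg)dx=\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{N}\int_{\Omega}\big|\nabla
u(x)\big|^qdx\,,
$$
which is precisely the R.H.S.\,of
\eqref{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhzz}.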
The proof of
Theorem\,\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjgggghhdf11} is
given in Section\,\ref{limtem} below.
In the proof of the lower bound in Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjh} we essentially use the so-called
``BBM formula'', due to J.~Bourgain, H.~Brezis and
P.~Mironescu~\cite{hhh1} for $q>1$ (and, under some limitations, for
$q=1$). For $q=1$ the formula in the general case of BV functions
is due to J.~D\'{a}vila~\cite{hhh5}. This formula states, in
particular, that given an open domain with Lipschitz boundary
$\Omega\subset\R^N$ and a family of radial mollifiers
$\rho_\e\big(|z|\big):\R^N\to[0,+\infty)$, satisfying
$\int_{\R^N}\rho_\e\big(|z|\big)dz=1$ and such that for every $r>0$
there exists $\delta_r>0$ with
$\supp{(\rho_\e)}\subset B_r(0)$ for every $\e\in(0,\delta_r)$, the
following holds true:
\begin{enumerate}
\item[(i)]
For any $q>1$ and any $u\in L^q(\Omega,\R^m)$ we have
\begin{equation}
\label{eq:1jjjint} \lim_{\e\to 0^+} \int_{\Omega}\int_{\Omega}
\frac{|u(x)-u(y)|^q}{|x-y|^q}\,\rho_\e\big(|x-y|\big)\,dx\,dy=K_{q,N}\int_{\Omega}\big|\nabla
u(x)\big|^qdx\,,
\end{equation}
with the convention that $\int_{\Omega}\big|\nabla
u(x)\big|^qdx=+\infty$ if $u\notin W^{1,q}(\Omega,\R^m)$ and with
$K_{q,N}$ given by
\begin{equation}\label{fgyufghfghjgghgjkhkkGHGHKKggkhhjoozzbvqkkint}
K_{q,N}:=\frac{1}{\mathcal{H}^{N-1}(S^{N-1})}\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)\quad\quad\forall
q\geq 1\,.
\end{equation}
\item[(ii)] In the case $q=1$, for any $u\in L^1(\Omega,\R^m)$
we have
\begin{equation}
\label{eq:1jjj1int} \lim_{\e\to 0^+} \int_{\Omega}\int_{\Omega}
\frac{|u(x)-u(y)|}{|x-y|}\,\rho_\e\big(|x-y|\big)\,dx\,dy=K_{1,N}\,\|Du\|(\Omega)\,,
\end{equation}
with the convention that $\|Du\|(\Omega)=+\infty$ if $u\notin
BV(\Omega,\R^m)$.
\end{enumerate}
In particular, taking
$$\rho_\e\big(|z|\big):=\frac{1}{2\sigma_\e\mathcal{H}^{N-1}(S^{N-1})|z|^{N-1}}\,\chi_{[\e-\sigma_\e,\e+\sigma_\e]}(|z|)\quad\quad\forall z\in\R^N$$
with sufficiently small $0<\sigma_\e\ll\e$, we deduce the following
variant of the ``BBM formula'':
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo;k;kklklklkiljluikljjhkjhjhjhjh}
\frac{1}{\mathcal{H}^{N-1}(S^{N-1})}\,\lim\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^q}\,dx\,d\mathcal{H}^{N-1}(\vec n)\Bigg)
\\=
K_{q,N}\int_{\Omega}\big|\nabla
u(x)\big|^qdx\quad\quad\quad\quad\text{for}\quad q>1\,,
\end{multline}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo;k;kklklklkiljluikljjhkjhjhjhjhjhgjghhf}
\frac{1}{\mathcal{H}^{N-1}(S^{N-1})}\,\lim\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|}{\e}\,dx\,d\mathcal{H}^{N-1}(\vec n)\Bigg) \\=
K_{1,N}\,\|Du\|(\Omega)\quad\quad\quad\quad\text{for}\quad q=1\,,
\end{multline}
where we denote
\begin{equation}\label{higuffykljk}
\chi_{\Omega}(z):=\begin{cases} 1\quad\quad z\in\Omega\,,\\
0\quad\quad z\in\R^N\setminus\Omega\,.
\end{cases}
\end{equation}
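Note that the above choice of $\rho_\e$ is indeed admissible:
computing in polar coordinates,
$$
\int_{\R^N}\rho_\e\big(|z|\big)dz=\mathcal{H}^{N-1}(S^{N-1})\int_{\e-\sigma_\e}^{\e+\sigma_\e}\frac{t^{N-1}\,dt}{2\sigma_\e\mathcal{H}^{N-1}(S^{N-1})\,t^{N-1}}=1\,,
$$
and $\supp{(\rho_\e)}\subset B_{\e+\sigma_\e}(0)\subset B_r(0)$
whenever $\e+\sigma_\e<r$.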
In the spirit of
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo;k;kklklklkiljluikljjhkjhjhjhjh}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo;k;kklklklkiljluikljjhkjhjhjhjhjhgjghhf}
we prove the following theorem. The special case $r=q$ provides the
{\em key ingredient} in the proof of Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjh}:
\begin{theorem}\label{hjkgjkfhjff} Let
$\Omega\subset\R^N$ be a bounded domain, $q\geq 1$, $r\geq 0$ and
$u\in L^\infty(\Omega,\R^m)
$. Then,
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhj1}
\liminf\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^r}\,dx\,d\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\leq
(N+r)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\},
\end{multline}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkhhggh1}
\limsup\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^r}\,dx\,d\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\geq N\,
\liminf\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
\end{theorem}
We refer the reader to Lemma \ref{gjyfyfuyyfifgyify} in the Appendix
for the significance of the quantity
$$\lim\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^r}\,dx\,d\mathcal{H}^{N-1}(\vec n)\Bigg)\,,$$
appearing in Theorem \ref{hjkgjkfhjff} for general $r$.
\begin{remark}\label{hjhjghgh}
Although we stated Theorem \ref{hjkgjkfhjff} for every $r\geq 0$, it
is useful only for $r\in(0,q]$, since in the case $r>q$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjkyjh}
\liminf\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^r}\,dx\,d\mathcal{H}^{N-1}(\vec
n)\Bigg)<+\infty\quad\quad\text{implies}\\ \liminf\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^q}\,dx\,d\mathcal{H}^{N-1}(\vec n)\Bigg)=0\,,
\end{multline}
and thus, by the ``BBM formula'', $u$ must be a constant (indeed,
since $\e^{-q}=\e^{\,r-q}\,\e^{-r}$ with $\e^{\,r-q}\to 0$ as
$\e\to 0^+$, a finite $\liminf$ for the exponent $r$ forces a
vanishing $\liminf$ for the exponent $q$). On the
other hand, in the case $r=0$ we obviously have
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjkyjhkuhh}
\lim\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\big|u( x+\e\vec n)-u(x)\big|^q\,dx\,d\mathcal{H}^{N-1}(\vec
n)\Bigg)=0\,.
\end{equation}
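(Indeed, since $\Omega$ is bounded and $u\in
L^\infty(\Omega,\R^m)\subset L^q(\Omega,\R^m)$,
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjkyjhkuhh}
follows from the continuity of translations in $L^q(\R^N,\R^m)$,
applied to the extension of $u$ by zero outside $\Omega$.)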
\end{remark}
Next we recall the definition of the Besov spaces $B_{q,\infty}^s$
with $s\in(0,1)$:
\begin{definition}\label{gjghghghjgghGHKKhjhjhjhhzzbvqkkl,.,.}
Given $q\geq 1$ and $s\in(0,1)$, we say that $u\in
L^q(\mathbb{R}^N,\R^m)$ belongs to the Besov space
$B_{q,\infty}^s(\mathbb{R}^N,\R^m)$ if
\begin{equation}\label{gjhgjhfghffh}
\sup\limits_{\rho\in(0,\infty)}\Bigg(\sup_{|h|\leq\rho}\int_{\mathbb{R}^N}
\frac{\big|u(x+h)-u(x)\big|^q}{\rho^{sq}}dx\Bigg)<+\infty\,.
\end{equation}
Moreover, for every open $\Omega\subset\R^N$ we say that $u\in
$L^q_{loc}(\Omega,\R^m)$ belongs to the Besov space
$\big(B_{q,\infty}^s\big)_{loc}(\Omega,\R^m)$ if for every compact
$K\subset\subset\Omega$ there exists $u_K\in
B_{q,\infty}^s(\mathbb{R}^N,\R^m)$ such that $u_K(x)= u(x)$ for
every $x\in K$.
\end{definition}
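Let us note that condition \eqref{gjhgjhfghffh} is equivalent to the
finiteness of the classical Besov seminorm,
$$
\sup_{h\in\R^N\setminus\{0\}}\int_{\mathbb{R}^N}\frac{\big|u(x+h)-u(x)\big|^q}{|h|^{sq}}\,dx<+\infty\,;
$$
one inequality follows by taking $\rho:=|h|$ in
\eqref{gjhgjhfghffh}, and the other from the trivial bound
$\rho^{-sq}\leq|h|^{-sq}$ for $0<|h|\leq\rho$.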
The following technical proposition makes the connection between
Besov spaces and the quantities appearing in the statement of
Theorem\,\ref{hjkgjkfhjff}. This proposition is a direct consequence
of Corollary \ref{gjggjfhhffhfh} and Lemma \ref{hjgjg} in the
Appendix, whose proofs are based on arguments similar to those used
in \cite{jmp}.
\begin{proposition}\label{gjggjfhhffhfhjgghf}
If $q\geq 1$, $r\in(0,q)$ and $u\in L^q(\R^N,\R^m)$, then
$u\in\big(B_{q,\infty}^{r/q}\big)(\R^N,\R^m)$ if and only if we have
\begin{equation}\label{gghgjhfgggjfgfhughGHGHKKzzjkjkyuyuybvqjhgfhfhgjgjjlhkhkh}
\limsup\limits_{\e\to
0^+}\Bigg\{\int_{S^{N-1}}\int_{\R^N}\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^r}\,dx\,d\mathcal{H}^{N-1}(\vec
n)\Bigg\}<+\infty\,.
\end{equation}
Moreover, if $\Omega\subset\R^N$ is an open set, $q\geq 1$,
$r\in(0,q)$ and $u\in L^q_{loc}(\Omega,\R^m)$, then
$u\in\big(B_{q,\infty}^{r/q}\big)_{loc}(\Omega,\R^m)$ if and only if
for every open set $G\subset\subset\Omega$ we have
\begin{equation}\label{gghgjhfgggjfgfhughGHGHKKzzjkjkyuyuybvqjhgfhfhgjgjjlhk}
\limsup\limits_{\e\to 0^+}\Bigg\{\int_{S^{N-1}}\int_{G}\chi_G(
x+\e\vec n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^r}\,dx\,d\mathcal{H}^{N-1}(\vec
n)\Bigg\}<+\infty\,.
\end{equation}
\end{proposition}
Combining Theorem \ref{hjkgjkfhjff} in the case $r\in(0,q)$ with
Proposition\,\ref{gjggjfhhffhfhjgghf} might be a first step towards
an answer to open question {\bf (v)}.
Our last result links the quantities in \er{ghfghfhdf}, for $r=1$
and $q>1$, of a map $u\in BV\cap L^\infty$ with the integral of the
$q$-th power of the jump of $u$:
\begin{theorem}\label{hjkgjkfhjffjhmgg7}
Let $\Omega$ be an open set with bounded Lipschitz boundary, $q>1$
and $u\in BV(\Omega,\R^m)\cap L^\infty(\Omega,\R^m)$. Then,
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhj7}
N\,
\liminf\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{N+1}}> s\bigg\}\Bigg)\Bigg\}\leq
\\
\bigg(\int_{S^{N-1}}|z_1|d\mathcal{H}^{N-1}(z)\bigg)\Bigg(\int_{J_u\cap
\Omega}\Big|u^+(x)-u^-(x)\Big|^qd\mathcal{H}^{N-1}(x)\Bigg)\\
\leq
(N+1)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{N+1}}> s\bigg\}\Bigg)\Bigg\}\,.
\end{multline}
Here $J_u$ denotes the jump set of $u\in BV$ and $u^+,u^-$ are the
approximate one-sided limits of $u$ on the two sides of $J_u$.
\end{theorem}
To conclude, we list some interesting open problems for future
research:
\begin{itemize}
\item[ {\bf (a)}] Does a \underline{complete} version of Theorem
\ref{hjkgjkfhjff} hold, where we replace $\liminf$ by $\limsup$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhj}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkhhggh}?
In particular, following Proposition \ref{gjggjfhhffhfhjgghf}, this
would provide a full answer to Question {\bf (v)} in the case $u\in
L^\infty$.
\item[ {\bf (b)}] In the spirit of Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjggg}, does
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhj}
in Theorem \ref{hjkgjkfhjff} hold with the constant $N$ instead of
$(N+r)$?
\item[ {\bf (c)}]
Does Corollary \ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjgggghhdf}
hold for $q=1$ and $u\in BV\setminus W^{1,1}$?
\end{itemize}
\subsubsection*{Acknowledgments.} I am indebted to Prof.\,Haim Brezis
for providing me with the preprints \cite{hhh3,hhh2} and for
suggesting the research directions that served as a basis for the study
carried out in the present manuscript.
\section{Proof of Theorem \ref{hjkgjkfhjff}}
This section is devoted to the proof of Theorem \ref{hjkgjkfhjff}.
Its special case $r=q$ is essential for the proof of the main
results Theorem\,\ref{hjkgjkfhjffgggvggoopikhhhkjh} and
Corollary\,\ref{gbjgjgjgj}. Theorem \ref{hjkgjkfhjff} is a
particular case of the following more general statement, applied
with $F(y,x):=\big|u(y)-u(x)\big|^q$ (note that $F\in
L^\infty(\Omega\times\Omega)$, since $u\in L^\infty(\Omega,\R^m)$).
\begin{proposition}\label{hjkgjkfhjff1} Let
$\Omega\subset\R^N$ be a bounded domain,
$r\geq 0$ and
$F\in L^\infty\Big(\Omega\times\Omega\,,\,[0,+\infty)\Big)
$. Then,
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhj}
\liminf\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{F\big(x+\e\vec n,x\big)
}{\e^r}\,dx\,d\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\leq
(N+r)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\},
\end{multline}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkhhggh}
\limsup\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{F\big(x+\e\vec n,x\big)}{\e^r}\,dx\,d\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\geq N\,
\liminf\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
\end{proposition}
\begin{proof}
Given $\alpha>0$ and $\e\in(0,1)$, consider
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88jkhkhhk88jhkhhjhhhjhiyhijkkjkkhkhhhuhhjhjjhiihhjlhkhkjohkhk1}
\eta_\e\big(t\big):=\frac{1}{\alpha\big|\ln{\e}\big||t|^N}\,\chi_{[\e^{1+\alpha},\e]}(t)\quad\quad\forall
t\in \mathbb{R}.
\end{equation}
In particular, we have
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88jkhkhhk88jhkhhjhhhjhiyhijkkjkkhkhhhuhhjhjjhiihh1}
\eta_\e\big(t\big)t^N=\frac{1}{\alpha\big|\ln{\e}\big|}\chi_{[\e^{1+\alpha},\e]}(t).
\end{equation}
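Note that, for every $\e\in(0,1)$, the radial weight $\eta_\e$ is
exactly normalized on the logarithmic scale:
$$
\int_{\e^{1+\alpha}}^{\e}\frac{dt}{\alpha\big|\ln{\e}\big|\,t}=\frac{\ln{\e}-(1+\alpha)\ln{\e}}{\alpha\big|\ln{\e}\big|}=1\,;
$$
this elementary identity is what will allow us below to squeeze the
double integral
$\int_{\Omega}\int_{\Omega}\eta_\e\big(|y-x|\big)\frac{F(y,x)}{|y-x|^r}dydx$
between an infimum and a supremum over $t\in(0,\e)$.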
Then, for every $z\in\R^N$, every $h\geq 0$ and every $s\geq 0$,
consider
\begin{multline}\label{jghjgghghhg}
K_{\e,u}\big(z,s,h\big):=\mathcal{L}^N\Bigg(\bigg\{x\in
\Omega\;:\;\frac{\chi_{\Omega}(x+z)F\big(
x+z,x\big)}{|z|^h}>s\bigg\}\Bigg)\\=K_{\e,u}\bigg(z,\frac{s}{|z|^l},h+l\bigg)\quad\quad\forall
l\geq 0\,,
\end{multline}
where the last equality is immediate from the definition of
$K_{\e,u}$. By Fubini's theorem we deduce:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytyt1}
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx=\int\limits_{\mathbb{R}^N}\int\limits_{\Omega}\eta_\e\Big(|z|\Big)
\chi_{\Omega}(x+z)\frac{F\big( x+z,x\big)}{|z|^r}dxdz\\=
\int\limits_{\mathbb{R}^+}\int\limits_{\mathbb{R}^N}\eta_\e\Big(|z|\Big)K_{\e,u}\big(z,s,r\big)
dzds=
\int\limits_{S^{N-1}}\int\limits_{\mathbb{R}^+}\int\limits_{\mathbb{R}^+}\eta_\e\big(t\big)t^{N-1}K_{\e,u}\big(t\vec
n,s,r\big) dtdsd\mathcal{H}^{N-1}(\vec n)
\\=
\int\limits_{S^{N-1}}\int\limits_{\mathbb{R}^+}\int\limits_{\mathbb{R}^+}\eta_\e\big(t\big)t^{N-1}K_{\e,u}\bigg(t\vec
n,\frac{s}{t^N},r+N\bigg) dtdsd\mathcal{H}^{N-1}(\vec n)
\\=
\int\limits_{S^{N-1}}\int\limits_{\mathbb{R}^+}\int\limits_{\mathbb{R}^+}\eta_\e\big(t\big)t^N\,t^{N
-1}K_{\e,u}\big(t\vec n,s,r+N\big) dtdsd\mathcal{H}^{N-1}(\vec n).
\end{multline}
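In the chain
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytyt1}, the
first equality is the change of variables $y=x+z$, the second
combines Fubini's theorem with the layer-cake representation of the
inner integral, the third passes to polar coordinates $z=t\vec n$,
the fourth is the identity in \er{jghjgghghhg} with $l=N$, and the
last one follows from the change of variables $s=t^N\sigma$ in the
inner $s$-integral.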
Thus, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytyt1} and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88jkhkhhk88jhkhhjhhhjhiyhijkkjkkhkhhhuhhjhjjhiihh1}
we have
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhj1}
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx=I_{\e,u,\alpha}\Big([0,+\infty]\Big)\,,
\end{equation}
where, for every $0\leq a\leq b\leq +\infty$ we denote:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkh1jkhjhjj}
I_{\e,u,\alpha}\Big([a,b]\Big):=
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|s}\,\chi_{[\e^{1+\alpha},\e]}(t)\,
\chi_{[a,b]}(s)t^{N -1}sK_{\e,u}\big(t\vec
n,s,r+N\big)d\mathcal{H}^{N-1}(\vec n) dtds
\\
=\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|s}\,\chi_{[\e^{1+\alpha},\e]}(t)\,
\chi_{[a,b]}(s)\,
\times\\ \times\,t^{N -1}s\mathcal{L}^N\Bigg(\bigg\{x\in
\Omega\;:\;\frac{\chi_{\Omega}(x+t\vec n)F\big(x+t\vec
n,x\big)}{|t|^{r+N}}> s\bigg\}\Bigg)d\mathcal{H}^{N-1}(\vec n) dtds.
\end{multline}
So, for every $d>0$ and $\gamma>0$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkh1}
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx\\=I_{\e,u,\alpha}\Bigg(\bigg[\frac{1}{\e^d},\frac{1}{\e^{d+\gamma}}\bigg]\Bigg)+I_{\e,u,\alpha}\Bigg(\bigg[0,\frac{1}{\e^d}\bigg]\Bigg)+I_{\e,u,\alpha}\Bigg(\bigg[\frac{1}{\e^{d+\gamma}},+\infty\bigg]\Bigg).
\end{multline}
Furthermore, since $\Omega$ is bounded, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkh1jkhjhjj}
we obtain
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjhkhgjljjhkhkhjggghgnkhk}
I_{\e,u,\alpha}\Bigg(\bigg[0,\frac{1}{\e^d}\bigg]\Bigg)\leq
\frac{C}{\alpha\big|\ln{\e}\big|}\,\e^{N-d},
\end{equation}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhkhhgjnkhhkjohkjioojjojjiojjojl}
I_{\e,u,\alpha}\Bigg(\bigg[\frac{1}{\e^{d+\gamma}},+\infty\bigg]\Bigg)
=\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|}\,\frac{1}{
t^{r+1}}\,\chi_{[\e^{1+\alpha},\e]}(t)\,
\chi_{[1/\e^{d+\gamma},+\infty)}(s)\,
\times\\ \times\,t^{r+N}K_{\e,u}\big(t\vec n,s
t^{r+N},0\big)d\mathcal{H}^{N-1}(\vec n) dtds\\
=\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|}\,\frac{1}{
t^{r+1}}\,\chi_{[\e^{1+\alpha},\e]}(t)\,
\chi_{[t^{r+N}/\e^{d+\gamma},+\infty)}(\tau)K_{\e,u}\big(t\vec
n,\tau,0\big)d\mathcal{H}^{N-1}(\vec n) dtd\tau\\
\leq
\int\limits_{\frac{1}{\e^{(d+\gamma)-(1+\alpha)(r+N)}}}^{+\infty}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|}\,\frac{1}{
t^{r+1}}\,\chi_{[\e^{1+\alpha},\e]}(t)\, K_{\e,u}\big(t\vec
n,\tau,0\big)d\mathcal{H}^{N-1}(\vec n) dtd\tau\leq
\\
\int\limits_{\frac{1}{\e^{(d+\gamma)-(1+\alpha)(r+N)}}}^{+\infty}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|}\,\frac{1}{
t^{r+1}}\,\chi_{[\e^{1+\alpha},\e]}(t)K_{\e,u}\bigg(t\vec
n,\frac{1}{\e^{(d+\gamma)-(1+\alpha)(r+N)}},0\bigg)d\mathcal{H}^{N-1}(\vec
n) dtd\tau\,.
\end{multline}
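Here the constant $C>0$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjhkhgjljjhkhkhjggghgnkhk}
may be taken, for instance, as
$C:=\mathcal{H}^{N-1}(S^{N-1})\,\mathcal{L}^N(\Omega)/N$, by the
crude bound $K_{\e,u}\big(t\vec n,s,r+N\big)\leq\mathcal{L}^N(\Omega)$:
$$
I_{\e,u,\alpha}\Bigg(\bigg[0,\frac{1}{\e^d}\bigg]\Bigg)\leq\frac{\mathcal{H}^{N-1}(S^{N-1})\,\mathcal{L}^N(\Omega)}{\alpha\big|\ln{\e}\big|}\int_0^{1/\e^d}\int_{\e^{1+\alpha}}^{\e}t^{N-1}\,dt\,ds\leq\frac{\mathcal{H}^{N-1}(S^{N-1})\,\mathcal{L}^N(\Omega)}{N\,\alpha\big|\ln{\e}\big|}\,\e^{N-d}\,.
$$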
Thus, since $F\in L^\infty$, in the case $d\leq N$ and
$\gamma>(1+\alpha)(r+N)-d$, for sufficiently small
$\e>0$ we have
\begin{multline}\label{ghhvufioufufljhkh}
K_{\e,u}\bigg(t\vec
n,\frac{1}{\e^{(d+\gamma)-(1+\alpha)(r+N)}},0\bigg)=\\ \mathcal{L}^N\Bigg(\bigg\{x\in
\Omega\;:\;\chi_{\Omega}(x+t\vec n)F\big(x+t\vec n,x\big)>
1/\e^{(d+\gamma)-(1+\alpha)(r+N)}\bigg\}\Bigg)=0
\end{multline}
(the set above being empty once its threshold exceeds
$\|F\|_{L^\infty}$),
and thus by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjhkhgjljjhkhkhjggghgnkhk}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhkhhgjnkhhkjohkjioojjojjiojjojl}
we rewrite
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkh1},
in the case $d\leq N$, $\gamma>(1+\alpha)(r+N)-d
$ and
sufficiently small $\e>0$, as:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkh}
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{
F(y,x)}{|y-x|^r}dydx=O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)+I_{\e,u,\alpha}\Bigg(\bigg[\frac{1}{\e^d},\frac{1}{\e^{d+\gamma}}\bigg]\Bigg)\\
\leq O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)+
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|s}\,\chi_{(0,+\infty)}(t)\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)\,
\times\\ \times\,t^{N -1}s\mathcal{L}^N\Bigg(\bigg\{x\in
\Omega\;:\;\frac{\chi_{\Omega}(x+t\vec n)F\big(x+t\vec
n,x\big)}{|t|^{r+N}}> s\bigg\}\Bigg)d\mathcal{H}^{N-1}(\vec n) dtds=
\\
O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)+
\int\limits_{\mathbb{R}}\frac{1}{\alpha\big|\ln{\e}\big|s}\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)\,
s\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in \Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)ds.
\end{multline}
On the other hand, using
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88jkhkhhk88jhkhhjhhhjhiyhijkkjkkhkhhhuhhjhjjhiihh1}
we deduce:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo1}
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx=\int\limits_{\mathbb{R}^N}\int\limits_{\Omega}\eta_\e\Big(|z|\Big)\chi_{\Omega}(x+z)\frac{F\big(
x+z,x\big)}{|z|^r}dxdz\\=
\int\limits_{S^{N-1}}\int\limits_{\mathbb{R}^+}\int\limits_{\Omega}t^{N-1}\eta_\e\big(t\big)\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxdtd\mathcal{H}^{N-1}(\vec n)
\\=
\int\limits_{\e^{1+\alpha}}^{\e}\frac{1}{\alpha\big|\ln{\e}\big|t}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg)dt.
\end{multline}
Thus, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo1}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkh}
for $d=N$, $\gamma>r+\alpha (N+r)$ and
sufficiently small $\e>0$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhk}
\int\limits_{\e^{1+\alpha}}^{\e}\frac{1}{\alpha\big|\ln{\e}\big|t}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg)dt\\=\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx
\leq
O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)+\\
\int\limits_{\mathbb{R}}\,\frac{1}{\alpha\big|\ln{\e}\big|s}\,
\chi_{[1/\e^N,1/\e^{N+\gamma}]}(s)\,
s\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in \Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)ds.
\end{multline}
On the other hand,
we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo;k;kklklklkiljluikljhkjhjhmhhgn}
\inf\limits_{t\in(0,\e)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg)
\leq\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx\\=
\int\limits_{\e^{1+\alpha}}^{\e}\frac{1}{\alpha\big|\ln{\e}\big|t}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg)dt
\\
\leq\sup\limits_{t\in(0,\e)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg),
\end{multline}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljjljjhhk}
\inf\limits_{s>(1/\e^N)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}\leq\\
\int\limits_{\mathbb{R}}\,\frac{1}{\gamma\big|\ln{\e}\big|s}\,
\chi_{[1/\e^N,1/\e^{N+\gamma}]}(s)\,
s\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in \Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)ds\\
\leq
\sup\limits_{s>(1/\e^N)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
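Both inequalities in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljjljjhhk}
are immediate once one observes that the weight in the middle term
has unit total mass,
$$
\int_{1/\e^{N}}^{1/\e^{N+\gamma}}\frac{ds}{\gamma\big|\ln{\e}\big|\,s}=\frac{(N+\gamma)\big|\ln{\e}\big|-N\big|\ln{\e}\big|}{\gamma\big|\ln{\e}\big|}=1\,,
$$
so that the middle term is an average of
$s\,\mathcal{L}^{2N}\big(\cdots\big)$ over
$s\in\big[\e^{-N},\e^{-N-\gamma}\big]$.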
Therefore, inserting these into
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhk}
gives that for $d=N$, $\gamma>r+\alpha (N+r)$ and sufficiently
small $\e>0$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkh}
\inf\limits_{t\in(0,\e)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg) \leq
O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)+\\
\frac{\gamma}{\alpha}\,\sup\limits_{s>(1/\e^N)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
Thus, letting $\e\to 0^+$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkh}
gives that for $d=N$ and $\gamma>r+\alpha (N+r)$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrrjkhkhklkljk}
\liminf\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{F\big(x+\e\vec n,x\big)}{\e^r}dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\leq \frac{\gamma}{\alpha}\,
\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
Therefore, letting $\gamma\to\big(r+\alpha (N+r)\big)^+$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrrjkhkhklkljk}
we deduce that the same bound also holds with $\gamma=r+\alpha (N+r)$, namely:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr}
\liminf\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{F\big(x+\e\vec n,x\big)}{\e^r}dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\leq \Bigg(\frac{r}{\alpha}+(N+r)\Bigg)\,
\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
Letting $\alpha\to+\infty$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr}
we finally deduce
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhj}.
We now turn to the proof of
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkhhggh}.
By
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkh1},
for every $d>0$ and $\gamma>0$ we have
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigj}
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx\geq
I_{\e,u,\alpha}\Bigg(\bigg[\frac{1}{\e^d},\frac{1}{\e^{d+\gamma}}\bigg]\Bigg).
\end{equation}
Furthermore, by \er{jghjgghghhg} we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhkhhkjggjkjhhkhohkhjklj}
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|s}\,\chi_{[\e,+\infty)}(t)\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)t^{N -1}sK_{\e,u}\big(t\vec n,s
,r+N\big)d\mathcal{H}^{N-1}(\vec n)
dtds=\\
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|}\,\chi_{[\e,+\infty)}(t)\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)\frac{1}{t^{r+1}}\,t^{N
+r}K_{\e,u}\big(t\vec n,s t^{r+N},0\big)d\mathcal{H}^{N-1}(\vec n)
dtds
\\=
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|}\,\chi_{[\e,+\infty)}(t)\,
\chi_{[t^{r+N}/\e^d,t^{r+N}/\e^{d+\gamma}]}(\tau)\frac{1}{t^{r+1}}K_{\e,u}\big(t\vec
n,\tau,0\big)d\mathcal{H}^{N-1}(\vec n) dtd\tau\\ \leq
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|}\,\chi_{[\e,+\infty)}(t)\,
\chi_{[1/\e^{d-(r+N)},t^{r+N}/\e^{d+\gamma}]}(\tau)\frac{1}{t^{r+1}}K_{\e,u}\big(t\vec
n,\tau,0\big)d\mathcal{H}^{N-1}(\vec n) dtd\tau\leq\\
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|}\,\chi_{[\e,+\infty)}(t)\,
\chi_{[1/\e^{d-(r+N)},t^{r+N}/\e^{d+\gamma}]}(\tau)\frac{1}{t^{r+1}}K_{\e,u}\bigg(t\vec
n,\frac{1}{\e^{d-(r+N)}},0\bigg)d\mathcal{H}^{N-1}(\vec n) dtd\tau.
\end{multline}
On the other hand, since $F\in L^\infty$, in the case $d>(N+r)$, for
sufficiently small $\e>0$ we have
\begin{equation}\label{ghhvufioufufljhkhbhggg}
K_{\e,u}\bigg(t\vec n,\frac{1}{\e^{d-(r+N)}},0\bigg)=\mathcal{L}^N\Bigg(\bigg\{x\in
\Omega\;:\;\chi_{\Omega}(x+t\vec n)F\big(x+t\vec n,x\big)>
1/\e^{d-(r+N)}\bigg\}\Bigg)=0
\end{equation}
(the set above being empty once $1/\e^{d-(r+N)}>\|F\|_{L^\infty}$),
and thus by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhkhhkjggjkjhhkhohkhjklj},
in the case $d>(N+r)$ for sufficiently small $\e>0$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhkhhkjggjkjhhkhohkhjkljjkhhkhjljkjj}
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|s}\,\chi_{[\e,+\infty)}(t)\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)\,
\times\\ \times\,t^{N -1}s\mathcal{L}^N\Bigg(\bigg\{x\in
\Omega\;:\;\frac{\chi_{\Omega}(x+t\vec n)F\big(x+t\vec
n,x\big)}{|t|^{r+N}}> s\bigg\}\Bigg)d\mathcal{H}^{N-1}(\vec n)
dtds=0.
\end{multline}
In particular, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigj}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhkhhkjggjkjhhkhohkhjkljjkhhkhjljkjj}
in the case $d>(N+r)$ for sufficiently small $\e>0$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhk}
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx\geq\\
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|s}\,\chi_{[\e^{1+\alpha},+\infty)}(t)\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)\,
\times\\ \times\,t^{N -1}s\mathcal{L}^N\Bigg(\bigg\{x\in
\Omega\;:\;\frac{\chi_{\Omega}(x+t\vec n)F\big(x+t\vec
n,x\big)}{|t|^{r+N}}> s\bigg\}\Bigg)d\mathcal{H}^{N-1}(\vec n) dtds.
\end{multline}
On the other hand, since $\Omega$ is bounded, we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhklhkhkgjgg}
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{1}{\alpha\big|\ln{\e}\big|s}\,\chi_{[0,\e^{1+\alpha}]}(t)\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)\,
\times\\ \times\,t^{N -1}s\mathcal{L}^N\Bigg(\bigg\{x\in
\Omega\;:\;\frac{\chi_{\Omega}(x+t\vec n)F\big(x+t\vec
n,x\big)}{|t|^{r+N}}> s\bigg\}\Bigg)d\mathcal{H}^{N-1}(\vec n)
dtds\\
\leq
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\frac{1}{\alpha\big|\ln{\e}\big|}\,t^{N
-1}\,\chi_{[0,\e^{1+\alpha}]}(t)\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)\,
\,C dtds\leq
C\frac{\e^{N(1+\alpha)-(d+\gamma)}}{N\alpha\big|\ln{\e}\big|}.
\end{multline}
Moreover, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88jkhkhhk88jhkhhjhhhjhiyhijkkjkkhkhhhuhhjhjjhiihh1}
we deduce:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo}
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx=\int\limits_{\mathbb{R}^N}\int\limits_{\Omega}\eta_\e\Big(|z|\Big)\chi_{\Omega}(x+z)
\frac{F\big( x+z,x\big)}{|z|^r}dxdz\\=
\int\limits_{S^{N-1}}\int\limits_{\mathbb{R}^+}\int\limits_{\Omega}t^{N-1}\eta_\e\big(t\big)\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxdtd\mathcal{H}^{N-1}(\vec n)
\\=
\int\limits_{\e^{1+\alpha}}^{\e}\frac{1}{\alpha\big|\ln{\e}\big|t}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg)dt.
\end{multline}
Thus, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhk},
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhklhkhkgjgg}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo},
in the case $d>(N+r)$ and $N(1+\alpha)-(d+\gamma)>0$ for
sufficiently small $\e>0$ we deduce
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfh}
\int\limits_{\e^{1+\alpha}}^{\e}\frac{1}{\alpha\big|\ln{\e}\big|t}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg)dt\\=
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx\geq
O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)\\+
\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\int\limits_{S^{N-1}}\frac{\gamma}{\alpha}\,\frac{1}{\gamma\big|\ln{\e}\big|s}\,\chi_{[0,+\infty)}(t)\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)t^{N -1}sK_{\e,u}\big(t\vec
n,s,r+N\big)d\mathcal{H}^{N-1}(\vec n) dtds=\\
O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)+
\int\limits_{\frac{1}{\e^d}}^{\frac{1}{\e^{d+\gamma}}}\frac{\gamma}{\alpha}\,\frac{1}{\gamma\big|\ln{\e}\big|}\,
\mathcal{L}^{2N}\Bigg(\bigg\{(x,z)\in
\Omega\times\R^N\,:\,\frac{\chi_{\Omega}(x+z)F\big(
x+z,x\big)}{|z|^{r+N}}> s\bigg\}\Bigg)ds
\\=
O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)+
\int\limits_{\frac{1}{\e^d}}^{\frac{1}{\e^{d+\gamma}}}\frac{\gamma}{\alpha}\,\frac{1}{\gamma\big|\ln{\e}\big|}\,
\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\,:\,\frac{F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)ds
.
\end{multline}
Then, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfh}
we infer
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljj}
\int\limits_{\e^{1+\alpha}}^{\e}\frac{1}{\alpha\big|\ln{\e}\big|t}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg)dt\\=
\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx\geq O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)+\\
\int\limits_{\mathbb{R}}\frac{\gamma}{\alpha}\,\frac{1}{\gamma
s\big|\ln{\e}\big|}\, \chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)\,
s\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)ds.
\end{multline}
On the other hand, we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo;k;kklklklkiljluikljhkjhjhmhhgnuhjhjggg}
\inf\limits_{t\in(0,\e)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg)
\leq\int\limits_{\Omega}\int\limits_{\Omega}\eta_\e\Big(|y-x|\Big)\frac{F(y,x)}{|y-x|^r}dydx\\=
\int\limits_{\e^{1+\alpha}}^{\e}\frac{1}{\alpha\big|\ln{\e}\big|t}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg)dt
\\
\leq\sup\limits_{t\in(0,\e)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec
n)\Bigg),
\end{multline}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljjljjhhkhihhjhh}
\inf\limits_{s>(1/\e^d)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}\leq\\
\int\limits_{\mathbb{R}}\,\frac{1}{\gamma\big|\ln{\e}\big|s}\,
\chi_{[1/\e^d,1/\e^{d+\gamma}]}(s)\,
s\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in \Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)ds\\
\leq
\sup\limits_{s>(1/\e^d)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
Therefore, inserting these into
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljj}
gives that, in the case $d>(N+r)$ and $N(1+\alpha)-(d+\gamma)> 0$,
for sufficiently small $\e>0$ we have:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljj}
\sup\limits_{t\in(0,\e)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+t\vec
n)\frac{F\big(x+t\vec n,x\big)}{t^r}dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\geq O\bigg(\frac{1}{\alpha\big|\ln{\e}\big|}\bigg)+
\frac{\gamma}{\alpha}\inf\limits_{s>(1/\e^d)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
Thus, letting $\e\to 0^+$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljj}
gives, in the case $d>(N+r)$ and $N(1+\alpha)-(d+\gamma)>0$:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyu}
\limsup\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{F\big(x+\e\vec n,x\big)}{\e^r}dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\geq \frac{\gamma}{\alpha}\,
\liminf\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
In particular, given $\delta>0$ sufficiently small,
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyu}
holds for $\gamma= N(1+\alpha)-d-\delta$ and $d=(N+r)+\delta$,
provided $\alpha>0$ is sufficiently large. Thus, letting $\delta\to
0^+$,
we deduce by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyu}:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjk}
\limsup\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{F\big(x+\e\vec n,x\big)}{\e^r}dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\geq \frac{N\alpha-r}{\alpha}\,
\liminf\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
F(y,x)}{|y-x|^{r+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
Then letting $\alpha\to +\infty$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjk}
we infer
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkhhggh}.
\end{proof}
\section{Some consequences of Theorem \ref{hjkgjkfhjff} in the case $r=q$}
\label{sec:r=q} This section is devoted to the proof of
\rth{hjkgjkfhjffgggvggoopikhhhkjh}, which will follow
from Corollary \ref{hjkgjkfhjffgggvggoopikhh} and Lemma
\ref{fhgolghoi} after proving some intermediate results. We start
with the following proposition:
\begin{proposition}\label{hjkgjkfhjffgggvggoop}
Let $\Omega\subset\R^N$ be an open set, $q\geq 1$ and $u\in
L^q(\Omega,\R^m)
$. Furthermore, let $G\subset\Omega$ be an open subset, such that
$\mathcal{L}^N(\partial G)=0$ and either $G$ is convex or $\ov
G\subset\subset\Omega$. Then,
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfp2}
\limsup\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{G}\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^q}\chi_{G}(x+\e\vec
n)dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\leq\sup\limits_{\e\in(0,h)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{G}\frac{\big|u(
x+\e\vec n)-u(x)\big|^q}{\e^q}\chi_{G}(x+\e\vec
n)dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\leq
(N+q)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\},
\end{multline}
where
\begin{equation}\label{fhfuyfyufuyurressgstr2}
h:=\begin{cases} +\infty\quad\quad\text{if $G$ is convex}\,,\\
\dist(G,\R^N\setminus\Omega)\quad\quad\text{otherwise}\,.
\end{cases}
\end{equation}
\end{proposition}
\begin{proof}
In the case of bounded $\Omega$ and $u\in L^\infty(\Omega,\R^m)$,
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfp2}
follows from Theorem \ref{hjkgjkfhjff}
together with either Corollary
\ref{gughfgfhfgdgddffddfKKzzbvqhigygygtyuu2} or Corollary
\ref{gughfgfhfgdgddffddfKKzzbvqhigygygtyuu} from the Appendix. So it
remains to prove
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfp2}
in the case of unbounded $\Omega$ and/or $u\notin
L^\infty(\Omega,\R^m)$. Thus, for every $k\in \mathbb{N}$ consider the
bounded open sets $G_k\subset\Omega_k\subset\Omega$ with
$\mathcal{L}^N(\partial G_k)=0$, defined by
\begin{equation}\label{hkjkgkfjfkkkghjggfhfhfhjgjghlkhigiukpoihghgfhihj2}
G_k:=G\cap B_k(0)\quad\text{and}\quad\Omega_k:=\Omega\cap B_k(0)\,,
\end{equation}
and consider
$u^{(k)}(x):=\big(u^{(k)}_1(x),\ldots,u^{(k)}_m(x)\big)\in
L^\infty(\Omega,\R^m)\cap L^q(\Omega,\R^m)$, defined by
\begin{equation}\label{hkjkgkfjfkkkghjggfhfhfhjgjghlkhigiukpoihghgfhihjghffhkhgh2}
u^{(k)}_j(x):=\begin{cases}-k\quad\text{if}\quad u_j(x)\leq -k,\\
u_j(x)\quad\text{if}\quad u_j(x)\in [-k,k],\\
k\quad\text{if}\quad u_j(x)\geq k,
\end{cases}\quad\quad\quad\forall
x\in\Omega\quad\quad\forall\, j\in\{1,\ldots, m\}\,.
\end{equation}
Then, since the scalar truncation
$t\mapsto\max\big\{-k,\min\{t,k\}\big\}$ is $1$-Lipschitz, we
obviously have
\begin{equation}\label{hkjkgkfjfkkkghjggfhfhfhjgjghlkhigiukpoihghgfhihjghffhkhghihhjjb2}
\Big|u^{(k)}(y)-u^{(k)}(x)\Big|^q\leq
\Big|u(y)-u(x)\Big|^q\quad\quad\forall (x,y)\in
\Omega\times\Omega\,,\quad\quad\forall\, k\in \mathbb{N}\,,
\end{equation}
and
\begin{equation}\label{hkjkgkfjfkkkghjggfhfhfhjgjghlkhigiukpoihghgfhihjghffhkhghihhjjbhiihhhh2}
\lim\limits_{k\to+\infty}u^{(k)}(x)=u(x)\quad\quad\forall x\in
\Omega\,.
\end{equation}
Moreover, if $G$ is convex then $G_k$ is also convex. Otherwise,
for sufficiently large $k$ we have $G_k=G$ and
$\dist(G_k,\R^N\setminus\Omega_k)=\dist(G,\R^N\setminus\Omega)$.
Thus, by the already established case of
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfp2},
applied with $\Omega_k$, $G_k$ and $u^{(k)}$ in place of $\Omega$,
$G$ and $u$, for sufficiently large $k$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfphiih2}
\sup\limits_{\e\in(0,h)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{G_k}\frac{\Big|u^{(k)}(
x+\e\vec n)-u^{(k)}(x)\Big|^q}{\e^q}\chi_{G_k}(x+\e\vec
n)dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\ \leq
(N+q)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega_k\times\Omega_k\;:\;\frac{
\big|u^{(k)}( y)-u^{(k)}(x)\big|^q}{|y-x|^{q+N}}>
s\bigg\}\Bigg)\Bigg\}\,.
\end{multline}
Thus, since $\Omega_k\subset\Omega$, by
\er{hkjkgkfjfkkkghjggfhfhfhjgjghlkhigiukpoihghgfhihjghffhkhghihhjjb2}
we deduce from
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfphiih2}
that for sufficiently large $k$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfphiihgfggffhjhg2}
\int\limits_{S^{N-1}}\int\limits_{G_k}\frac{\Big|u^{(k)}( x+\e\vec
n)-u^{(k)}(x)\Big|^q}{\e^q}\chi_{G_k}(x+\e\vec
n)dxd\mathcal{H}^{N-1}(\vec n)\leq \\
(N+q)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}>
s\bigg\}\Bigg)\Bigg\}\quad\quad\forall\,\e\in(0,h)\,.
\end{multline}
Then, letting $k\to+\infty$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfphiihgfggffhjhg2},
using
\er{hkjkgkfjfkkkghjggfhfhfhjgjghlkhigiukpoihghgfhihjghffhkhghihhjjb2},
\er{hkjkgkfjfkkkghjggfhfhfhjgjghlkhigiukpoihghgfhihjghffhkhghihhjjbhiihhhh2}
and the Dominated Convergence Theorem, we obtain
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfphiihgfggffhjhghjj2}
\int\limits_{S^{N-1}}\int\limits_{G}\frac{\Big|u( x+\e\vec
n)-u(x)\Big|^q}{\e^q}\chi_{G}(x+\e\vec
n)dxd\mathcal{H}^{N-1}(\vec n)\leq \\
(N+q)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}>
s\bigg\}\Bigg)\Bigg\}\quad\quad\quad\quad\forall\,\e\in(0,h)\,.
\end{multline}
In particular,
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfphgg2}
\sup\limits_{\e\in(0,h)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{G}\frac{\big|u(
x+\e\vec n)-u(x)\big|^q}{\e^q}\chi_{G}(x+\e\vec
n)dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\ \leq
(N+q)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
Thus, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfphgg2}
we finally deduce
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfp2}.
\end{proof}
\begin{corollary}\label{hjkgjkfhjffgggvggoop2}
Let $\Omega\subset\R^N$ be a convex open domain such that
$\mathcal{L}^N(\partial\Omega)=0$, let $q\geq 1$ and let $u\in
L^q(\Omega,\R^m)$. Then,
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfp}
\sup\limits_{\e\in(0,+\infty)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\frac{\big|u(
x+\e\vec n)-u(x)\big|^q}{\e^q}\chi_{\Omega}(x+\e\vec
n)dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\=\lim\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^q}dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\leq
(N+q)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
Moreover, in the case of bounded $\Omega$ and $u\in
L^\infty(\Omega,\R^m)$ we also have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkhhgghijjffgyfghjhopp}
\sup\limits_{\e\in(0,+\infty)}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\frac{\big|u(
x+\e\vec n)-u(x)\big|^q}{\e^q}\chi_{\Omega}(x+\e\vec
n)dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\=\lim\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{\Omega}\chi_{\Omega}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^q}dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\geq N\,
\liminf\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
\end{corollary}
\begin{proof}
In the case of bounded $\Omega$ and $u\in L^\infty(\Omega,\R^m)$,
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfp}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkhhgghijjffgyfghjhopp}
follow from Theorem \ref{hjkgjkfhjff}
together with Corollary \ref{gughfgfhfgdgddffddfKKzzbvqhigygygtyuu}
from the Appendix. On the other hand, in the case of unbounded
$\Omega$ and/or $u\notin L^\infty(\Omega,\R^m)$, in order to prove
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfp}
we use Proposition \ref{hjkgjkfhjffgggvggoop} with $G=\Omega$,
together with Corollary \ref{gughfgfhfgdgddffddfKKzzbvqhigygygtyuu}
from the Appendix.
\end{proof}
\begin{corollary}\label{hjkgjkfhjffgggvggoopikhh}
Let $\Omega\subset\R^N$ be an open domain, $q\geq 1$ and $u\in
L^q(\Omega,\R^m)$. Then, in the case $q>1$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhk}
\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{(N+q)}\,\int_\Omega\big|\nabla
u(x)\big|^qdx\leq \\
\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}\,,
\end{multline}
with the convention that $\int_\Omega\big|\nabla
u(x)\big|^qdx=+\infty$ if $u\notin W^{1,q}(\Omega,\R^m)$. On the
other hand, in the case $q=1$ we have:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkjjkkhjkhkhjhjh}
\frac{\int_{S^{N-1}}|z_1|d\mathcal{H}^{N-1}(z)}{(N+1)}\,\|Du\|(\Omega)
\leq\\
\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|}{|y-x|^{1+N}}> s\bigg\}\Bigg)\Bigg\}\,,
\end{multline}
with the convention $\|Du\|(\Omega)=+\infty$ if $u\notin
BV(\Omega,\R^m)$.
\end{corollary}
\begin{proof}
Let $\rho_\e\big(|z|\big):\R^N\to[0,+\infty)$ be a family of radial
mollifiers, so that $\int_{\R^N}\rho_\e\big(|z|\big)dz=1$ and for every $r>0$
there exists $\delta:=\delta_r>0$ such that $\supp{(\rho_\e)}\subset
B_r(0)$ for every $\e\in(0,\delta_r)$. Next fix an open subset
$G\subset\subset\Omega$ with Lipschitz boundary. Since, by
Proposition \ref{hjkgjkfhjffgggvggoop}, we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjh}
\limsup\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{G}\chi_{G}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^q}dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\leq
(N+q)\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}\,,
\end{multline}
and at the same time by Lemma \ref{gjyfyfuyyfifgyify} we have:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo;k;kklklklkiljluikljjhkjhhjjhpoijgggjgljj}
\frac{1}{\mathcal{H}^{N-1}(S^{N-1})}\,\limsup\limits_{\e\to
0^+}\Bigg(\int\limits_{S^{N-1}}\int\limits_{G}\chi_{G}(x+\e\vec
n)\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e^q}dxd\mathcal{H}^{N-1}(\vec n)\Bigg)\\
\geq \limsup\limits_{\e\to
0^+}\int\limits_{G}\int\limits_{G}\rho_\e\Big(|y-x|\Big)\frac{\big|u(
y)-u(x)\big|^q}{|y-x|^q}dydx\,,
\end{multline}
we deduce from
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjh}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytl;klljkljojkojjo;k;kklklklkiljluikljjhkjhhjjhpoijgggjgljj}
that
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjh1}
\limsup\limits_{\e\to
0^+}\int\limits_{G}\int\limits_{G}\rho_\e\Big(|y-x|\Big)\frac{\big|u(
y)-u(x)\big|^q}{|y-x|^q}dydx\\
\leq
\frac{(N+q)}{\mathcal{H}^{N-1}(S^{N-1})}\,\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}.
\end{multline}
On the other hand, since the bounded set $G\subset{\mathbb R}^N$ has a
Lipschitz boundary, the so-called ``BBM formula'' (see \cite{hhh1}
and \cite{hhh5} for the details) states that for $q>1$ we have
\begin{equation}
\label{eq:1jjj} \lim_{\e\to 0^+} \int_{G}\int_{G}
\frac{|u(x)-u(y)|^q}{|x-y|^q}\,\rho_\e\big(|x-y|\big)\,dx\,dy=K_{q,N}\int_{G}\big|\nabla
u(x)\big|^qdx\,,
\end{equation}
with the convention that $\int_{G}\big|\nabla u(x)\big|^qdx=+\infty$
if $u\notin W^{1,q}(G,\R^m)$ and with $K_{q,N}$ given by
\begin{equation}\label{fgyufghfghjgghgjkhkkGHGHKKggkhhjoozzbvqkk}
K_{q,N}:=\frac{1}{\mathcal{H}^{N-1}(S^{N-1})}\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)\quad\quad\forall
q\geq 1\,,
\end{equation}
where we denote $z:=(z_1,\ldots, z_N)\in\R^N$. Moreover, for $q=1$
we have
\begin{equation}
\label{eq:1jjj1} \lim_{\e\to 0^+} \int_{G}\int_{G}
\frac{|u(x)-u(y)|}{|x-y|}\,\rho_\e\big(|x-y|\big)\,dx\,dy=K_{1,N}\,\|Du\|(G)\,,
\end{equation}
with the convention $\|Du\|(G)=+\infty$ if $u\notin BV(G,\R^m)$.
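As a quick sanity check on the normalization in
\er{fgyufghfghjgghgjkhkkGHGHKKggkhhjoozzbvqkk} (an aside, not used in
the sequel), summing $\int_{S^{N-1}}|z_i|^2d\mathcal{H}^{N-1}(z)$ over
$i=1,\ldots,N$ gives
$\int_{S^{N-1}}|z|^2d\mathcal{H}^{N-1}(z)=\mathcal{H}^{N-1}(S^{N-1})$,
so by symmetry
\begin{equation*}
K_{2,N}=\frac{1}{N}\,,\qquad\text{while, e.g., for $N=2$,}\qquad
K_{1,2}=\frac{1}{2\pi}\int_0^{2\pi}|\cos\theta|\,d\theta=\frac{2}{\pi}\,.
\end{equation*}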
Inserting \er{eq:1jjj} and \er{eq:1jjj1} into
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjh1}
gives, for $q>1$:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhk}
\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{(N+q)}\,\int_{G}\big|\nabla
u(x)\big|^qdx=\frac{K_{q,N}\,\mathcal{H}^{N-1}(S^{N-1})}{(N+q)}\,\int_{G}\big|\nabla
u(x)\big|^qdx\\
\leq
\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\}\,,
\end{multline}
and for $q=1$:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkjjkkhjkhk}
\frac{\int_{S^{N-1}}|z_1|d\mathcal{H}^{N-1}(z)}{(N+1)}\,\|Du\|(G)=
\frac{K_{1,N}\,\mathcal{H}^{N-1}(S^{N-1})}{(N+1)}\,\|Du\|(G)\\
\leq
\limsup\limits_{s\to+\infty}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{
\big|u( y)-u(x)\big|}{|y-x|^{1+N}}> s\bigg\}\Bigg)\Bigg\}\,.
\end{multline}
Thus, taking the supremum of the left hand side of
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhk}
over all open $G\subset\subset\Omega$ with Lipschitz boundary, we
deduce
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhk}
and taking the supremum of the left hand side of
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkjjkkhjkhk}
over all open $G\subset\subset\Omega$ with Lipschitz boundary, we
deduce
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkjjkkhjkhkhjhjh}.
\end{proof}
\begin{lemma}\label{fhgolghoi}
Let $q\geq 1$ and let $\Omega\subset\R^N$ be a domain with Lipschitz
boundary. Then there exist constants $C:=C_{\Omega}>0$ and
${\widetilde C}_{N}>0$, satisfying $C_{\Omega}=1$ if $\Omega=\R^N$,
such that, in the case $q>1$, for every $u\in W^{1,q}(\Omega,\R^m)$
we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukky}
\sup\limits_{s\in(0,+\infty)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)\Bigg\} \leq
C^q_{\Omega}{\widetilde C}_{N}\,\int_\Omega\big|\nabla
u(x)\big|^qdx\,,
\end{multline}
and, in the case $q=1$, for every $u\in BV(\Omega,\R^m)$ we have:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkjjkkhjkhkhjhjhkuy}
\sup\limits_{s\in(0,+\infty)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times \Omega\;:\;\frac{
\big|u( y)-u(x)\big|}{|y-x|^{1+N}}> s\bigg\}\Bigg)\Bigg\} \leq
C_{\Omega}{\widetilde C}_{N}\,\|Du\|(\Omega)\,.
\end{multline}
\end{lemma}
\begin{proof}
By the Extension Theorem for Sobolev and $BV$ functions there exists a
constant $C:=C_{\Omega}>0$ such that, in the case $q>1$, for every
$u\in W^{1,q}(\Omega,\R^m)$ there exists its extension $\tilde u\in
W^{1,q}(\R^N,\R^m)$ with the property
\begin{equation}\label{gjfguhjfghfhkjh}
\begin{cases}
\tilde u(x)=u(x)\quad\quad\quad\quad\forall\,x\in\Omega\,,
\\
\int_{\R^N}\big|\nabla \tilde u(x)\big|^qdx\leq
C^q_{\Omega}\,\int_\Omega\big|\nabla u(x)\big|^qdx\,,
\end{cases}
\end{equation}
and in the case $q=1$ for every $u\in BV(\Omega,\R^m)$ there exists
its extension $\tilde u\in BV(\R^N,\R^m)$ with the property
\begin{equation}\label{gjfguhjfghfhkjhyutujgjgh}
\begin{cases}
\tilde u(x)=u(x)\quad\quad\quad\quad\forall\,x\in\Omega\,,
\\
\big\|D\tilde u\big\|(\R^N)\leq C_{\Omega}\,\|Du\|(\Omega)\,.
\end{cases}
\end{equation}
Moreover, in the trivial case $\Omega=\R^N$ we can obviously take
$C_{\Omega}=1$. Next, by the standard density properties of
Sobolev and $BV$ functions, there exists a sequence
$\big\{\varphi_n(x)\big\}_{n=1}^{+\infty}\subset
C^\infty_c(\R^N,\R^m)$ such that in the case $q>1$ we have
\begin{equation}\label{gjfguhjfghfhkjhhikh}
\begin{cases}
\varphi_n\to \tilde u\quad\quad\text{strongly in}\quad
L^q(\R^N,\R^m)\,,
\\
\lim\limits_{n\to+\infty}\int_{\R^N}\big|\nabla
\varphi_n(x)\big|^qdx=\int_{\R^N}\big|\nabla \tilde u(x)\big|^qdx\,,
\end{cases}
\end{equation}
and in the case $q=1$ we have
\begin{equation}\label{gjfguhjfghfhkjhhkjhgjgj}
\begin{cases}
\varphi_n\to \tilde u\quad\quad\text{strongly in}\quad
L^1(\R^N,\R^m)\,,
\\
\lim\limits_{n\to+\infty}\int_{\R^N}\big|\nabla
\varphi_n(x)\big|dx=\big\|D \tilde u\big\|(\R^N)\,.
\end{cases}
\end{equation}
On the other hand, H.~Brezis, J.~Van Schaftingen and Po-Lam~Yung
proved in \cite{hhh2} and \cite{hhh3} that for every $q\geq 1$ there
exists a constant ${\widetilde C}:={\widetilde C}_{N}>0$ such that
for every $\varphi(x)\in C^\infty_c(\R^N,\R^m)$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjg}
\sup\limits_{s\in(0,+\infty)}\Bigg\{s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\R^N\times \R^N\;:\;\frac{
\big|\varphi( y)-\varphi(x)\big|^q}{|y-x|^{q+N}}>
s\bigg\}\Bigg)\Bigg\} \leq {\widetilde
C}_{N}\,\int_{\R^N}\big|\nabla \varphi(x)\big|^qdx\,.
\end{multline}
In particular, for every $s>0$ and every $n\in\mathbb{N}$ we have:
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhj}
s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in \R^N\times \R^N\;:\;\frac{
\big|\varphi_n( y)-\varphi_n(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)
\leq {\widetilde C}_{N}\,\int_{\R^N}\big|\nabla
\varphi_n(x)\big|^qdx\,.
\end{equation}
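In the limit passage below, the left-hand side of the previous
inequality can be handled by the following standard observation (a
sketch): up to a subsequence (not relabeled), $\varphi_n\to\tilde u$
a.e. in $\R^N$, so that, for every fixed $s>0$,
$\chi_{\{\Phi>s\}}\leq\liminf_{n\to+\infty}\chi_{\{\Phi_n>s\}}$ a.e. in
$\R^N\times\R^N$, where
$\Phi_n(x,y):=\big|\varphi_n(y)-\varphi_n(x)\big|^q/|y-x|^{q+N}$ and
$\Phi$ is defined analogously with $\tilde u$; hence, by Fatou's Lemma,
\begin{equation*}
\mathcal{L}^{2N}\Big(\big\{\Phi>s\big\}\Big)\leq\liminf\limits_{n\to+\infty}\mathcal{L}^{2N}\Big(\big\{\Phi_n>s\big\}\Big)\,.
\end{equation*}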
Thus letting $n\to +\infty$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhj}
and using either \er{gjfguhjfghfhkjhhikh} (for $q>1$) or
\er{gjfguhjfghfhkjhhkjhgjgj} (for $q=1$), in the case $q>1$ for
every $s>0$ we deduce
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhjhkjhjhljkjl}
s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in \R^N\times \R^N\;:\;\frac{
\big|\tilde u( y)-\tilde u(x)\big|^q}{|y-x|^{q+N}}> s\bigg\}\Bigg)
\leq {\widetilde C}_{N}\int_{\R^N}\big|\nabla \tilde
u(x)\big|^qdx\,,
\end{equation}
and in the case $q=1$ for every $s>0$ we deduce
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhjhkjhjhjhkhilkl}
s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in \R^N\times \R^N\;:\;\frac{
\big|\tilde u( y)-\tilde u(x)\big|}{|y-x|^{1+N}}> s\bigg\}\Bigg)
\leq {\widetilde C}_{N}\,\big\|D \tilde u\big\|(\R^N)\,.
\end{equation}
Thus, by \er{gjfguhjfghfhkjh} and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhjhkjhjhljkjl},
in the case $q>1$ for every $s>0$ we deduce
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhjhkjhjhljkjlhkh}
s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in \Omega\times
\Omega\;:\;\frac{ \big|u( y)-u(x)\big|^q}{|y-x|^{q+N}}>
s\bigg\}\Bigg) \leq
{\widetilde C}_{N}\,C^q_{\Omega}\,\int_{\Omega}\big|\nabla
u(x)\big|^qdx\,,
\end{equation}
and by \er{gjfguhjfghfhkjhyutujgjgh} and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhjhkjhjhjhkhilkl}
in the case $q=1$ for every $s>0$ we deduce
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhjhkjhjhjhkhilklhkyj}
s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in \Omega\times
\Omega\;:\;\frac{ \big|u( y)-u(x)\big|}{|y-x|^{1+N}}> s\bigg\}\Bigg)
\leq {\widetilde C}_{N}\,C_{\Omega}\,\big\|D u\big\|(\Omega)\,.
\end{equation}
Finally, taking the supremum of either
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhjhkjhjhljkjlhkh}
or
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkhkkjyukkyjgggjjggjgkyhjhkjhjhjhkhilklhkyj}
over the set $s\in(0,+\infty)$ completes the proof.
\end{proof}
\begin{proof}[Proof of \rth{hjkgjkfhjffgggvggoopikhhhkjh}]
It suffices to combine Corollary \ref{hjkgjkfhjffgggvggoopikhh} with
Lemma \ref{fhgolghoi}.
\end{proof}
\section{Proof of Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjgggghhdf11}}\label{limtem}
First, we prove the following proposition.
\begin{proposition}\label{fhfhfhfffhkjkj}
Let $\Omega\subset\R^N$ be an open set and let $u\in Lip(\R^N,\R^m)$
be such that there exists $R>0$ satisfying $u(x)=0$ for every
$x\in\R^N\setminus B_R(0)$. Next let $G_0:
\R\times\R^m\times\R^m\times\R^N\times\R^N\to[0,+\infty)$ be a
continuous function, such that $G_0(0,0,0,x,y)=0$ for every
$x,y\in\R^N$ and $G_0$ is bounded on every set of the type
$[\alpha,\beta]\times K\times K\times\R^N\times\R^N$ with any
$K\subset\subset\R^m$ and any $\alpha<\beta\in\R$. Then,
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy1iyyuliyhigugu}
\lim\limits_{s\to +\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times\Omega\;:\;
G_0\bigg(\frac{\big|u(y)-u(x)\big|}{|y-x|},u(y),u(x),y,x\bigg)\,\frac{1}{|y-x|^{N}}>
s\Bigg\}\Bigg)\\
=\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}G_0\bigg(\big|\nabla
u(x)\big||z_1|,u(x),u(x),x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx.
\end{multline}
\end{proposition}
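For orientation (an illustration only, not used in the proofs), the
choice $G_0(a,b,c,y,x):=|a|^q$ with $q\geq 1$ is admissible, and for it
the above identity reads
\begin{equation*}
\lim\limits_{s\to +\infty}s\,\mathcal{L}^{2N}\Bigg(\bigg\{(x,y)\in
\Omega\times\Omega\;:\;\frac{\big|u(y)-u(x)\big|^q}{|y-x|^{q+N}}>s\bigg\}\Bigg)
=\frac{\int_{S^{N-1}}|z_1|^qd\mathcal{H}^{N-1}(z)}{N}\int\limits_{\Omega}\big|\nabla
u(x)\big|^qdx\,,
\end{equation*}
which is consistent with the two-sided bounds of Corollary
\ref{hjkgjkfhjffgggvggoopikhh} and Lemma \ref{fhgolghoi}.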
\begin{proof}
With $\Omega$, $u$ and $R$ as in the statement of the proposition,
we first establish a more general identity. Let $G:
\R^m\times\R^m\times\R^m\times\R^N\times\R^N\to[0,+\infty)$ be a
continuous function, such that $G(0,0,0,x,y)=0$ for every
$x,y\in\R^N$ and $G$ is bounded on every set of the type $K\times
K\times K\times\R^N\times\R^N$ with any $K\subset\subset\R^m$. Then
for every $s>0$ we have:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhh}
\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times\Omega\;:\;
G\bigg(\frac{u(y)-u(x)}{|y-x|},u(y),u(x),y,x\bigg)\,\frac{1}{|y-x|^{N}}>
s\Bigg\}\Bigg)=\\
\mathcal{L}^{2N}\Bigg(\Bigg\{(x,z)\in \Omega\times\R^N\,:\,
\frac{\chi_{\Omega}(x+z)
}{|z|^{N}} G\bigg(\frac{u(x+z)-u(x)}{|z|},u(x+z),u(x),x+z,x\bigg)>
s\Bigg\}\Bigg)\\= \int\limits_\Omega\mathcal{L}^{N}\Bigg(\Bigg\{z\in
\R^N\,:\, \chi_{\Omega}(x+z)
G\bigg(\frac{u(x+z)-u(x)}{|z|},u(x+z),u(x),x+z,x\bigg)\frac{1}{|z|^{N}}>
s\Bigg\}\Bigg)dx .
\end{multline}
Therefore for every $\e>0$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy}
\frac{1}{\e^N}\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times\Omega\;:\;
G\bigg(\frac{u(y)-u(x)}{|y-x|},u(y),u(x),y,x\bigg)\,\frac{1}{|y-x|^{N}}>
\frac{1}{\e^N}\Bigg\}\Bigg)=\\
\int\limits_\Omega\frac{1}{\e^N}\mathcal{L}^{N}\Bigg(\Bigg\{z\in
\R^N\,:\, \chi_{\Omega}(x+z)
G\bigg(\frac{u(x+z)-u(x)}{|z|},u(x+z),u(x),x+z,x\bigg)\frac{1}{|z|^{N}}>
\frac{1}{\e^N}\Bigg\}\Bigg)dx\\=
\int\limits_\Omega\mathcal{L}^{N}\Bigg(\Bigg\{z\in \R^N\,:\,
\chi_{\Omega}(x+\e z) G\bigg(\frac{u(x+\e z)-u(x)}{|\e z|},u(x+\e
z),u(x),x+\e z,x\bigg)\frac{1}{|z|^{N}}> 1\Bigg\}\Bigg)dx .
\end{multline}
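For clarity, the last equality above is simply the change of variables
$z=\e w$: for every fixed $x$ and every Borel function
$f:\R^N\to[0,+\infty)$ one has
\begin{equation*}
\frac{1}{\e^N}\,\mathcal{L}^{N}\Bigg(\bigg\{z\in\R^N\;:\;\frac{f(z)}{|z|^{N}}>\frac{1}{\e^N}\bigg\}\Bigg)=
\mathcal{L}^{N}\Bigg(\bigg\{w\in\R^N\;:\;\frac{f(\e w)}{|w|^{N}}>1\bigg\}\Bigg)\,,
\end{equation*}
applied here with $f(z):=\chi_{\Omega}(x+z)\,G\Big(\frac{u(x+z)-u(x)}{|z|},u(x+z),u(x),x+z,x\Big)$.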
Moreover, since $G(0,0,0,x,y)=0$, by the Lipschitz and the compact
support conditions for $u$ we obviously deduce that there exists
$M>0$ such that
\begin{align}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjyuiyyj}
0\leq G\bigg(\frac{u(y)-u(x)}{|y-x|},u(y),u(x),y,x\bigg)\leq
\Big(\frac{M}{2}\Big)^N\quad\forall
(y,x)\in\R^N\times\R^N\quad\quad\text{and}\quad\quad\\
G\bigg(\frac{u(y)-u(x)}{|y-x|},u(y),u(x),y,x\bigg)=0\quad\quad\text{whenever}\quad\quad
|x|\geq \frac{M}{2}\;\;\text{and}\;\;|y|\geq \frac{M}{2}\,.
\end{align}
In particular, for every $\e\in(0,1)$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjyhkhhhjhh}
\mathcal{L}^{N}\Bigg(\Bigg\{z\in \R^N\,:\, \chi_{\Omega}(x+\e z)
G\bigg(\frac{u(x+\e z)-u(x)}{|\e z|},u(x+\e z),u(x),x+\e
z,x\bigg)\frac{1}{|z|^{N}}> 1\Bigg\}\Bigg)=\\
\mathcal{L}^{N}\Bigg(\Bigg\{z\in B_M(0)\,:\, \chi_{\Omega}(x+\e z)
G\bigg(\frac{u(x+\e z)-u(x)}{|\e z|},u(x+\e z),u(x),x+\e
z,x\bigg)\frac{1}{|z|^{N}}> 1\Bigg\}\Bigg).
\end{multline}
Moreover, for every $\e\in(0,1)$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjyhkhhhjhhyiyyu}
\mathcal{L}^{N}\Bigg(\Bigg\{z\in \R^N\,:\, \chi_{\Omega}(x+\e z)
G\bigg(\frac{u(x+\e z)-u(x)}{|\e z|},u(x+\e z),u(x),x+\e
z,x\bigg)\frac{1}{|z|^{N}}> 1\Bigg\}\Bigg)=0\\
\quad\quad\quad\quad \forall\,x\in\R^N\setminus B_M(0)\,.
\end{multline}
Thus, for every $\e\in(0,1)$ we rewrite
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy}
as:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy1}
\frac{1}{\e^N}\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times\Omega\;:\;
G\bigg(\frac{u(y)-u(x)}{|y-x|},u(y),u(x),y,x\bigg)\,\frac{1}{|y-x|^{N}}>
\frac{1}{\e^N}\Bigg\}\Bigg)=\\ \int\limits_{\Omega\cap
B_M(0)}\mathcal{L}^{N}\Bigg(\Bigg\{ z\in B_M(0)\,:\,
\frac{\chi_{\Omega}(x+\e z)}{|z|^{N}} G\bigg(\frac{u(x+\e
z)-u(x)}{|\e z|},u(x+\e z),u(x),x+\e z,x\bigg)>1\Bigg\}\Bigg)dx.
\end{multline}
However, since $G$ is continuous and $u$ is a Lipschitz
function, by Rademacher's Theorem, for a.e. $x\in\R^N$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjyjhjhhjhhjhj}
\lim\limits_{\e\to 0^+}G\bigg(\frac{u(x+\e z)-u(x)}{|\e z|},u(x+\e
z),u(x),x+\e z,x\bigg)\\=G\bigg(\nabla u(x)\cdot\frac{z}{|
z|},u(x),u(x),x,x\bigg)\quad\forall z\in\R^N\setminus\{0\}\,.
\end{multline}
Therefore, since pointwise convergence implies convergence
in measure, we deduce from
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjyjhjhhjhhjhj}
that
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjyjkhjjjy}
\lim\limits_{\e\to 0^+}\mathcal{L}^{N}\Bigg(\Bigg\{z\in B_M(0)\,:\,
\frac{\chi_{\Omega}(x+\e z)}{|z|^{N}} G\bigg(\frac{u(x+\e
z)-u(x)}{|\e z|},u(x+\e z),u(x),x+\e z,x\bigg)> 1\Bigg\}\Bigg)\\=
\mathcal{L}^{N}\Bigg(\Bigg\{z\in B_M(0)\,:\,
\frac{\chi_{\Omega}(x)}{|z|^{N}} G\bigg(\nabla u(x)\cdot\frac{z}{|
z|},u(x),u(x),x,x\bigg)> 1\Bigg\}\Bigg)
\\=
\mathcal{L}^{N}\Bigg(\Bigg\{z\in \R^N\,:\,
\frac{\chi_{\Omega}(x)}{|z|^{N}} G\bigg(\nabla u(x)\cdot\frac{z}{|
z|},u(x),u(x),x,x\bigg)> 1\Bigg\}\Bigg) \quad\text{for
a.e.}\;\;x\,\in\R^N\,.
\end{multline}
Therefore, by the Dominated Convergence Theorem we deduce from
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy1}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjyjkhjjjy}:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy1iyyu}
\lim\limits_{s\to +\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times\Omega\;:\;
G\bigg(\frac{u(y)-u(x)}{|y-x|},u(y),u(x),y,x\bigg)\,\frac{1}{|y-x|^{N}}>
s\Bigg\}\Bigg)=\\ \lim\limits_{\e\to
0^+}\frac{1}{\e^N}\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times\Omega\;:\;
G\bigg(\frac{u(y)-u(x)}{|y-x|},u(y),u(x),y,x\bigg)\,\frac{1}{|y-x|^{N}}>
\frac{1}{\e^N}\Bigg\}\Bigg)=\\ \int\limits_{\Omega\cap
B_M(0)}\mathcal{L}^{N}\Bigg(\Bigg\{z\in \R^N\,:\,
|z|^{N}<G\bigg(\nabla u(x)\cdot\frac{z}{| z|},u(x),u(x),x,x\bigg)
\Bigg\}\Bigg)dx\\
=\int\limits_{\Omega}\mathcal{L}^{N}\Bigg(\Bigg\{z\in \R^N\,:\,
|z|^{N}<G\bigg(\nabla u(x)\cdot\frac{z}{| z|},u(x),u(x),x,x\bigg)
\Bigg\}\Bigg)dx.
\end{multline}
However, in the case
$G\big(a,b,c,y,x\big):=G_0\big(|a|,b,c,y,x\big)$ with
$G_0:\R\times\R^m\times\R^m\times\R^N\times\R^N\to[0,+\infty)$, for
every $x\in\R^N$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy1iyyuhjggjjgghhfhhhgj}
\mathcal{L}^{N}\Bigg(\Bigg\{z\in \R^N\,:\, |z|^{N}<G\bigg(\nabla
u(x)\cdot\frac{z}{| z|},u(x),u(x),x,x\bigg) \Bigg\}\Bigg)=\\
\mathcal{L}^{N}\Bigg(\Bigg\{z\in \R^N\,:\,
|z|^{N}<G_0\bigg(\big|\nabla u(x)\big|\frac{|z_1|}{|
z|},u(x),u(x),x,x\bigg) \Bigg\}\Bigg)=
\\
\int_{S^{N-1}}\int\limits_{\Big\{t\in (0,+\infty)\;:\;
t^{N}<G_0\big(|\nabla u(x)||z_1|,u(x),u(x),x,x\big)
\Big\}}t^{N-1}dtd\mathcal{H}^{N-1}(z)\\=
\frac{1}{N}\int_{S^{N-1}}G_0\bigg(\big|\nabla
u(x)\big||z_1|,u(x),u(x),x,x\bigg)d\mathcal{H}^{N-1}(z).
\end{multline}
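In the final step we used the elementary identity
\begin{equation*}
\int_0^{c^{1/N}}t^{N-1}\,dt=\frac{c}{N}\qquad\forall\,c\geq 0\,,
\end{equation*}
applied, for each fixed $z\in S^{N-1}$, with
$c:=G_0\big(|\nabla u(x)||z_1|,u(x),u(x),x,x\big)$.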
Thus inserting
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy1iyyuhjggjjgghhfhhhgj}
into
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy1iyyu}
finally gives
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhhiigjhjgjjljklkhhkghfhk;llkljklljjhjbjjbljjhuyuhyulkjjkrr88mhjhhjhjhhhjy1iyyuliyhigugu}.
\end{proof}
\begin{proof}
[Proof of Theorem
\ref{hjkgjkfhjffgggvggoopikhhhkjhhkjhjgjhhjgggghhdf11}]
Let
$\big\{u_n\big\}_{n=1}^{+\infty}\subset C^\infty_c(\R^N,\R^m)$ be a
sequence, provided by the density of $C^\infty_c(\R^N,\R^m)$ in
$W^{1,q}(\R^N,\R^m)$, such that
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhj188hkjjggjhhjjh}
\lim\limits_{n\to+\infty}\int_{\R^N}\bigg(\Big|\nabla u_n(x)-\nabla
u(x)\Big|^q+\big| u_n(x)- u(x)\big|^q\bigg)\,dx\,=\,0.
\end{equation}
Then,
by the particular case of Proposition
\ref{fhfhfhfffhkjkj} with $G_0(a,b,c,y,x)=F(\sigma a,y,x)$, for
every fixed $n\in\mathbb{N}$ and every $\sigma>0$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhh}
\lim\limits_{s\to +\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times\Omega\;:\;
F\bigg(\sigma\,\frac{\big|u_n(y)-u_n(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>
s\Bigg\}\Bigg)\\
=\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}F\bigg(\sigma\,\big|\nabla
u_n(x)\big||z_1|,x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx\,.
\end{multline}
On the other hand, since $F(a,y,x)$ is non-decreasing in
$a\in[0,+\infty)$, given arbitrary $v\in W^{1,q}(\R^N,\R^m)$, $w\in
W^{1,q}(\R^N,\R^m)$, $y\neq x\in\R^N$, $s>0$ and $\alpha>1$, by the
triangle inequality we obviously have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhjgjghkhjh}
F\bigg(\frac{\Big|\big(v(y)+w(y)\big)-\big(v(x)+w(x)\big)\Big|}{|y-x|},y,x\bigg)\leq
F\bigg(\frac{\big|v(y)-v(x)\big|}{|y-x|}+\frac{
\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\\
=F\Bigg(\frac{1}{\alpha}\bigg(\alpha\frac{\big|v(y)-v(x)\big|}{|y-x|}\bigg)+\frac{\alpha-1}{\alpha}\bigg(\frac{\alpha}{\alpha-1}\frac{
\big|w(y)-w(x)\big|}{|y-x|}\bigg),y,x\Bigg)\\
\leq
F\Bigg(\max\bigg\{\alpha\frac{\big|v(y)-v(x)\big|}{|y-x|},\frac{\alpha}{\alpha-1}\frac{
\big|w(y)-w(x)\big|}{|y-x|}\bigg\},y,x\Bigg)\\=
\max\Bigg\{F\bigg(\alpha\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg),F\bigg(\frac{\alpha}{\alpha-1}\frac{
\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\Bigg\}\,.
\end{multline}
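The second inequality in the above chain combines the monotonicity of
$F$ in its first argument with the elementary fact that a convex
combination never exceeds the maximum:
\begin{equation*}
\frac{1}{\alpha}A+\frac{\alpha-1}{\alpha}B\leq\max\{A,B\}\qquad\forall\,A,B\geq 0\,,\;\;\forall\,\alpha>1\,.
\end{equation*}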
Therefore, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhjgjghkhjh}
we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhjgjghkhjhhgjg}
F\bigg(\frac{\Big|\big(v(y)+w(y)\big)-\big(v(x)+w(x)\big)\Big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\quad\quad\text{implies}\quad\quad\\
\quad\quad\text{either}\quad\quad
F\bigg(\alpha\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\\
\quad\quad\text{or}\quad\quad F\bigg(\frac{\alpha}{\alpha-1}\frac{
\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\,.
\end{multline}
Thus, denoting the sets:
\begin{align*}
A:=\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\Big|\big(v(y)+w(y)\big)-\big(v(x)+w(x)\big)\Big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\\
B_1:=\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\alpha\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\\
B_2:= \Bigg\{(x,y)\in \Omega\times
\Omega\;:\;F\bigg(\frac{\alpha}{\alpha-1}\frac{
\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\,,
\end{align*}
by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhjgjghkhjhhgjg}
we obviously deduce:
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhj188hkjjggjhhjjhkojojjk}
A\subset B_1\cup B_2\,,
\end{equation}
and so,
\begin{equation}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhj188hkjjggjhhjjhkojojjkhkhh}
\mathcal{L}^{2N}(A)\leq\mathcal{L}^{2N}(B_1)+\mathcal{L}^{2N}(B_2)\,,
\end{equation}
i.e. for every $v\in W^{1,q}(\R^N,\R^m)$ and $w\in
W^{1,q}(\R^N,\R^m)$, every $s>0$ and every $\alpha>1$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjkhhkkjhkh}
\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times
\Omega\;:\;F\bigg(\frac{\Big|\big(v(y)+w(y)\big)-\big(v(x)+w(x)\big)\Big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\alpha\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)+\\
\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times
\Omega\;:\;F\bigg(\frac{\alpha}{\alpha-1}\frac{
\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\,.
\end{multline}
Then, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjkhhkkjhkh}
we deduce that for every $v\in W^{1,q}(\R^N,\R^m)$, $w\in
W^{1,q}(\R^N,\R^m)$ and every $\alpha>1$ we have
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjull33}
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\alpha\frac{\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)+\\
\limsup\limits_{s\to+\infty}s\,\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times \Omega\;:\;\\
F\bigg(\frac{\alpha}{\alpha-1}\frac{\Big|\big(v(y)-w(y)\big)-\big(v(x)-w(x)\big)\Big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg),
\end{multline}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjull33mod}
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\alpha\frac{\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)+\\
\limsup\limits_{s\to+\infty}s\,\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times \Omega\;:\;\\
F\bigg(\frac{\alpha}{\alpha-1}\frac{\Big|\big(v(y)-w(y)\big)-\big(v(x)-w(x)\big)\Big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg).
\end{multline}
Therefore, since $F(a,y,x)\leq C|a|^q$ for every $a\in\R$ and every
$x,y\in\R^N$, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjull33},
for every $v\in W^{1,q}(\R^N,\R^m)$, $w\in W^{1,q}(\R^N,\R^m)$ and
every $\alpha>1$ we deduce
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjullyy}
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\alpha\frac{\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)+\\
\limsup\limits_{s\to+\infty}s\,\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;C\frac{\alpha^q}{(\alpha-1)^q}\frac{\Big|\big(v(y)-w(y)\big)-\big(v(x)-w(x)\big)\Big|^q}{|y-x|^{q+N}}>s\Bigg\}\Bigg)
\\=\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\alpha\frac{\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)+\\C\frac{\alpha^q}{(\alpha-1)^q}\limsup\limits_{s\to+\infty}s\,\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;\frac{\Big|\big(v(y)-w(y)\big)-\big(v(x)-w(x)\big)\Big|^q}{|y-x|^{q+N}}>s\Bigg\}\Bigg),
\end{multline}
and similarly by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjull33mod}
we obtain
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjullyymod}
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;F\bigg(\alpha\frac{\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)+\\C\frac{\alpha^q}{(\alpha-1)^q}\limsup\limits_{s\to+\infty}s\,\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;\frac{\Big|\big(v(y)-w(y)\big)-\big(v(x)-w(x)\big)\Big|^q}{|y-x|^{q+N}}>s\Bigg\}\Bigg).
\end{multline}
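In the last step of each of the two previous displays, the constant
$C\alpha^q/(\alpha-1)^q$ was pulled out of the tail functional via the
elementary rescaling $s=c\,t$: for every nonnegative measurable
function $\Psi$ on $\Omega\times\Omega$ and every constant $c>0$,
\begin{equation*}
\limsup\limits_{s\to+\infty}s\,\mathcal{L}^{2N}\Big(\big\{c\,\Psi>s\big\}\Big)=
c\,\limsup\limits_{t\to+\infty}t\,\mathcal{L}^{2N}\Big(\big\{\Psi>t\big\}\Big)\,.
\end{equation*}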
Therefore, using Theorem \ref{hjkgjkfhjffgggvggoopikhhhkjh}, by
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjullyy}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjullyymod},
for every given $v\in W^{1,q}(\R^N,\R^m)$, $w\in
W^{1,q}(\R^N,\R^m)$ and every $\alpha>1$ we infer
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjull}
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;F\bigg(\alpha\frac{\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\\+
C\,\widetilde C_{N}\,\frac{\alpha^q}{(\alpha-1)^q}
\int_{\R^N}\Big|\nabla v(x)-\nabla w(x)\Big|^qdx,
\end{multline}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjullmod}
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|v(y)-v(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;F\bigg(\alpha\frac{\big|w(y)-w(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\\+
C\,\widetilde C_{N}\,\frac{\alpha^q}{(\alpha-1)^q}
\int_{\R^N}\Big|\nabla v(x)-\nabla w(x)\Big|^qdx.
\end{multline}
In particular, taking first $v=u$ and $w=u_n$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjull}
and then $v=u_n$ and $w=u$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjullmod},
for every $\alpha>1$ we deduce:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjulld1}
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;F\bigg(\alpha\frac{\big|u_n(y)-u_n(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\\+
C\,\widetilde C_{N}\,\frac{\alpha^q}{(\alpha-1)^q}
\int_{\R^N}\Big|\nabla u_n(x)-\nabla u(x)\Big|^qdx,
\end{multline}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjulld2}
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|u_n(y)-u_n(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;F\bigg(\alpha\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\\+
C\,\widetilde C_{N}\,\frac{\alpha^q}{(\alpha-1)^q}
\int_{\R^N}\Big|\nabla u_n(x)-\nabla u(x)\Big|^qdx.
\end{multline}
Thus by combining
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhh}
with
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjulld1}
we deduce
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj13}
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}F\bigg(\alpha\,\big|\nabla
u_n(x)\big||z_1|,x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx+
C\,\widetilde C_{N}\,\frac{\alpha^q}{(\alpha-1)^q}
\int_{\R^N}\Big|\nabla u_n(x)-\nabla u(x)\Big|^qdx,
\end{multline}
and by combining
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhh}
with
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhjulld2}
we deduce
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj24}
\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}F\bigg(\big|\nabla
u_n(x)\big||z_1|,x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx\leq
C\,\widetilde C_{N}\,\frac{\alpha^q}{(\alpha-1)^q}
\int_{\R^N}\Big|\nabla u_n(x)-\nabla u(x)\Big|^qdx\\+
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;F\bigg(\alpha\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg).
\end{multline}
Therefore, letting $n\to+\infty$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj13}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj24}
and using
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhj188hkjjggjhhjjh}
together with the Dominated Convergence Theorem, we deduce:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj135}
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}F\bigg(\alpha\,\big|\nabla
u(x)\big||z_1|,x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx\,,
\end{multline}
and
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj247kk}
\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}F\bigg(\big|\nabla
u(x)\big||z_1|,x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx\leq
\\
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;F\bigg(\alpha\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\,.
\end{multline}
In particular, applying
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj247kk}
with $\frac{1}{\alpha}\,u$ in place of $u$, we deduce:
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj247}
\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}F\bigg(\frac{1}{\alpha}\big|\nabla
u(x)\big||z_1|,x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx\leq
\\
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in
\Omega\times
\Omega\;:\;F\bigg(\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\,.
\end{multline}
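Here we used that difference quotients and gradients scale linearly,
so that replacing $u$ by $\frac{1}{\alpha}\,u$ cancels the factor
$\alpha$ inside $F$ on the right-hand side:
\begin{equation*}
F\Bigg(\alpha\,\frac{\big|\frac{1}{\alpha}u(y)-\frac{1}{\alpha}u(x)\big|}{|y-x|},y,x\Bigg)=
F\bigg(\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,,\qquad
\nabla\Big(\frac{1}{\alpha}u\Big)=\frac{1}{\alpha}\nabla u\,.
\end{equation*}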
Finally, letting $\alpha\to 1^+$ in
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj135}
and
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhj247}
and using again the Dominated Convergence Theorem, we infer
\begin{multline}\label{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhhkhjhjggjkgkf}
\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}F\bigg(\big|\nabla
u(x)\big||z_1|,x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx\leq\\
\liminf\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\leq\\
\limsup\limits_{s\to+\infty}s\mathcal{L}^{2N}\Bigg(\Bigg\{(x,y)\in \Omega\times \Omega\;:\;F\bigg(\frac{\big|u(y)-u(x)\big|}{|y-x|},y,x\bigg)\,\frac{1}{|y-x|^{N}}>s\Bigg\}\Bigg)\\
\leq\frac{1}{N}\int\limits_{\Omega}\Bigg(\int\limits_{S^{N-1}}F\bigg(\big|\nabla
u(x)\big||z_1|,x,x\bigg)d\mathcal{H}^{N-1}(z)\Bigg)dx \,,
\end{multline}
and we obtain
\er{GMT'3jGHKKkkhjjhgzzZZzzZZzzbvq88nkhhjggjgjkpkjljluytytuguutloklljjgjgjhjklljjjkjkhjkkhkhhkhhjlkkhkjljljkjlkkhkllhjhjhhfyfppiooiououiuiuiuhjhjkhkhkjkhhkkhjjyhjggjuyyjghfhjhjghhhzz11}.
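For the applications of the Dominated Convergence Theorem in the limit
$\alpha\to 1^+$ one may use, for instance, the domination provided by
the growth assumption $F(a,y,x)\leq C|a|^q$: for every $\alpha\in(1,2]$,
\begin{equation*}
F\Big(\alpha\,\big|\nabla u(x)\big||z_1|,x,x\Big)+F\Big(\tfrac{1}{\alpha}\,\big|\nabla u(x)\big||z_1|,x,x\Big)\leq
C\,(2^q+1)\,\big|\nabla u(x)\big|^q\,|z_1|^q\,,
\end{equation*}
and the right-hand side is integrable on $\Omega\times S^{N-1}$ since
$u\in W^{1,q}$, while the pointwise convergence follows from the
continuity of $F$ in its first argument.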
\end{proof}
\section{Proof of Theorem \ref{hjkgjkfhjffjhmgg7}}
The next proposition is proved exactly as a similar statement
in \cite{jmp}; the proof is postponed to the Appendix. In both cases the key
ingredient is Proposition \ref{hgugghghhffhfhKKzzbvq}, which is part of \cite[Proposition 2.4]{jmp}.
\begin{proposition}\label{hgugghghhffhfhKKzzbvqhkjjgg}
Let $\Omega$ be an open set with bounded Lipschitz boundary, $q>1$
and $u\in BV(\Omega,\R^m)\cap L^\infty(\Omega,\R^m)$. Then,
\begin{multline}\label{gghgjhfgggjfgfhughGHGHKKzzjkjkyuyuybvqjhgfhfhgjgjjlhkluyikhhkhkhkjgjhhh}
\lim\limits_{\e\to
0^+}\Bigg\{\int_{S^{N-1}}\int_{\Omega}\frac{\big|u( x+\e\vec
n)-u(x)\big|^q}{\e}\chi_\Omega( x+\e\vec
n)dxd\mathcal{H}^{N-1}(\vec n)\Bigg\}=\\
\bigg(\int_{S^{N-1}}|z_1|d\mathcal{H}^{N-1}(z)\bigg)\Bigg(\int_{J_u\cap
\Omega}\Big|u^+(x)-u^-(x)\Big|^qd\mathcal{H}^{N-1}(x)\Bigg)\,.
\end{multline}
\end{proposition}
\begin{proof}[Proof of Theorem \ref{hjkgjkfhjffjhmgg7}]
The result is a direct consequence of Theorem \ref{hjkgjkfhjff} and
Proposition \ref{hgugghghhffhfhKKzzbvqhkjjgg}.
\end{proof}
\setcounter{MaxMatrixCols}{10}
\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\theoremstyle{definition}
\newtheorem{defn}{Definition}
\newtheorem{assumption}{Assumption}
\theoremstyle{remark}
\newtheorem*{remark}{Remark}
\newcommand{\superscript}[1]{\ensuremath{^{\textrm{#1}}}}
\newcommand \independent{\protect \mathpalette{\protect \independenT}{\perp}}
\def \independenT#1#2{\mathrel{\rlap{$#1#2$}\mkern2mu{#1#2}}}
\newcommand{\D}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\DD}[2]{\frac{\partial^2 #1}{\partial#2\partial#2'}}
\newcommand{{\rm tr}}{{\rm tr}}
\newcommand{\trc}[1]{{\rm tr}\left \{#1\right \}}
\newcommand{\Nulp}[1]{{\rm Nul}\left(#1\right)}
\newcommand{\Varp}[1]{{\rm Var}\left(#1\right)}
\newcommand{\Varb}[1]{{\rm Var}\left[#1\right]}
\newcommand{\Varc}[1]{{\rm Var}\left \{#1\right \}}
\def \mathop{\rm arg\,max}{\mathop{\rm arg\,max}}
\def \mathop{\rm arg\,min}{\mathop{\rm arg\,min}}
\def \mathop{\rm plim}{\mathop{\rm plim}}
\def \mathop{\rm Nul}{\mathop{\rm Nul}}
\def {\rm ~s.t.~}{{\rm ~s.t.~}}
\def \mathbb R{\mathbb R}
\def \mathbb E{\mathbb E}
\let \originalleft \left
\let \originalright \right
\renewcommand{\left}{\mathopen{}\mathclose \bgroup \originalleft}
\renewcommand{\right}{\aftergroup \egroup \originalright}
\renewcommand{\labelenumi}{(\roman{enumi})}
\def \citeposs#1{\citeauthor{#1}'s (\citeyear{#1})}
\renewcommand{\qedsymbol}{$\blacksquare$}
\doublespacing
\begin{document}
\title[Smoothed Estimating Equations for {IV-QR}]{Smoothed Estimating Equations \\ for Instrumental Variables Quantile Regression}
\thanks{%
Thanks to Victor Chernozhukov (co-editor) and an anonymous referee for insightful comments and references, and thanks to Peter C.\ B.\ Phillips (editor) for additional editorial help. Thanks to Xiaohong Chen, Brendan
Beare, Andres Santos, and active seminar and conference participants for insightful
questions and comments.}
\maketitle
\begin{center}
{\sc David M.\ Kaplan}
\textit{Department of Economics, University of Missouri}
\textit{118 Professional Bldg, 909 University Ave, Columbia, MO 65211-6040}
E-mail: \texttt{kaplandm@missouri.edu}
~\\
{\sc Yixiao Sun}
\textit{Department of Economics, University of California, San Diego}
E-mail: \texttt{yisun@ucsd.edu}
\end{center}
\paragraph{\sc Abstract}
The moment conditions or estimating equations for instrumental variables
quantile regression involve the discontinuous indicator function. We instead
use smoothed estimating equations (SEE), with bandwidth $h$. We show that
the mean squared error (MSE) of the vector of the SEE is minimized for some $h>0$,
leading
to smaller asymptotic
MSE
of the estimating
equations and associated parameter estimators. The same MSE-optimal $h$ also
minimizes the higher-order type I error of a SEE-based $\chi ^{2}$ test
and increases
size-adjusted power in large samples. Computation of the SEE
estimator also becomes simpler and more reliable, especially with (more)
endogenous regressors. Monte Carlo simulations demonstrate all of these
superior properties in finite samples, and we apply our estimator to JTPA data.
Smoothing the estimating equations is
not just a technical operation for establishing Edgeworth expansions and
bootstrap refinements; it also brings the real benefits of having more
precise estimators and more powerful tests.
\allowdisplaybreaks[4]
\section{Introduction}
Many econometric models are specified by moment conditions or estimating
equations. An advantage of this approach is that the full distribution of
the data does not have to be parameterized. In this paper, we consider
estimating equations that are not smooth in the parameter of interest. We
focus on instrumental variables quantile regression (IV-QR), which
includes the usual quantile regression as a special case. Instead of using
the estimating equations that involve the nonsmooth indicator function, we
propose to smooth the indicator function, leading to our smoothed estimating
equations (SEE) and SEE estimator.
Our SEE estimator has several advantages. First, from a computational point
of view, the SEE estimator can be computed using any standard iterative
algorithm that requires smoothness. This is especially attractive in IV-QR
where simplex methods for the usual QR are not applicable. In fact,
the SEE approach has been used in %
\citet{ChenPouzo2009,ChenPouzo2012} for computing their nonparametric sieve
estimators in the presence of nonsmooth moments or generalized residuals.
However, a rigorous investigation is currently lacking. Our paper can be
regarded as a first step towards justifying the SEE approach in
nonparametric settings.
Relatedly, \citet[][\S7.1]{FanLiao2014} have employed
the same strategy of smoothing the indicator function to
reduce the computational burden of their focused GMM approach.
Second, from a technical point of view, smoothing
the estimating equations enables us to establish high-order properties of
the estimator. This motivates \citet{Horowitz1998}, for instance, to examine
a smoothed objective function for median regression, to show high-order
bootstrap refinement. Instead of smoothing the objective function, we show
that there is an advantage to smoothing the estimating equations. This point
has not been recognized and emphasized in the literature. For QR estimation
and inference via empirical likelihood, \citet{Otsu2008} and %
\citet{Whang2006} also examine smoothed estimators. To the best of our
knowledge, no paper has examined smoothing the estimating equations for the
usual QR estimator, let alone IV-QR. Third, from a statistical point of
view, the SEE estimator is a flexible class of estimators that includes the
IV/OLS mean regression estimators and median and quantile regression
estimators as special cases. Depending on the smoothing parameter, the SEE
estimator can have different degrees of robustness in the sense of %
\citet{Huber1964}. By selecting the smoothing parameter appropriately, we
can harness the advantages of both the mean regression estimator and the
median/quantile regression estimator. Fourth and most importantly, from an
econometric point of view, smoothing can reduce the mean squared error (MSE)
of the SEE, which in turn leads to a smaller asymptotic MSE of the parameter
estimator and to more powerful tests. We seem to be the first to establish
these advantages.
In addition to investigating the asymptotic properties of the SEE estimator,
we provide a smoothing parameter choice that minimizes different criteria:
the MSE of the SEE, the type I error of a chi-square test subject to exact
asymptotic size control, and the approximate MSE of the parameter estimator. We show
that the first two criteria produce the same optimal smoothing parameter,
which is also optimal under a variant of the third criterion. With the
data-driven smoothing parameter choice, we show that the statistical and
econometric advantages of the SEE estimator are reflected clearly in our
simulation results.
There is a growing literature on IV-QR. For a recent review, see \citet{ChernozhukovHansen2013}. Our paper is built upon \citet{ChernozhukovHansen2005}, which establishes a structural framework for IV-QR and provides primitive identification conditions. Within this framework, \citet{ChernozhukovHansen2006} and \citet{ChernozhukovEtAl2009} develop estimation and inference procedures under strong identification. For inference procedures that are robust to weak identification, see \citet{ChernozhukovHansen2008} and \citet{Jun2008}, for example. IV-QR can also reduce bias for dynamic panel fixed effects estimation as in \citet{Galvao2011}.
None of these papers considers smoothing the IV-QR estimating equations; that idea (along with minimal first-order theory) seemingly first appeared in an unpublished draft by \citet{MaCurdyHong1999}, although the idea of smoothing the indicator function in general appears even earlier, as in \citet{Horowitz1992} for the smoothed maximum score estimator.
An alternative approach to overcome the computational obstacles in the presence of a nonsmooth objective function is to explore the asymptotic equivalence of the Bayesian and classical methods for regular models and use the MCMC approach to obtain the classical extremum estimator; see \citet{ChernozhukovHong2003}, whose Example 3 is IV-QR. As a complement, our approach deals with the computation problem in the classical framework directly.
The rest of the paper is organized as follows. Section \ref{sec:see}
describes our setup and discusses some illuminating connections with other
estimators. Sections \ref{sec:mse}, \ref{sec:eI}, and \ref{sec:est}
calculate the MSE of the SEE, the type I and type II errors of a chi-square
test, and the approximate MSE of the parameter estimator, respectively.
Section \ref{sec:emp} applies our estimator to JTPA data, and Section \ref{sec:sim} presents simulation results before we conclude. Longer
proofs and calculations are gathered in the appendix.
\section{Smoothed Estimating Equations\label{sec:see}}
\subsection{Setup}
We are interested in estimating the instrumental variables quantile
regression (IV-QR) model
\begin{equation*}
Y_{j}=X_{j}^{\prime }\beta _{0}+U_{j}
\end{equation*}%
where $\mathbb E\left[Z_{j}\left( 1\{U_{j}<0\}-q\right)\right] =0$ for the instrument vector $Z_{j}\in \mathbb{R}^{d}$, and where $1\{ \cdot \}$ denotes the indicator function.
Instruments are taken as given; this does not preclude first determining the
efficient set of instruments as in \citet{Newey2004} or %
\citet{NeweyPowell1990}, for example.
We restrict attention to the ``just identified'' case $X_{j}\in \mathbb{R}%
^{d}$ and iid data for simpler exposition; for the overidentified case, see %
\eqref{eqn:EE-overID} below.
A special case of this model is exogenous QR with $Z_{j}=X_{j}$, which is
typically estimated by minimizing a criterion function:
\begin{equation*}
\hat{\beta}_{Q}\equiv \mathop{\rm arg\,min}_{\beta }\frac{1}{n}%
\sum_{j=1}^{n}\rho _{q}(Y_{j}-X_{j}^{\prime }\beta ),
\end{equation*}%
where $\rho _{q}(u)\equiv \left( q-1\{u<0\}\right) u$ is the check function.
Since the objective function is not smooth, it is not easy to obtain a
high-order approximation to the sampling distribution of $\hat{\beta}_{Q}$.
To avoid this technical difficulty, \citet{Horowitz1998} proposes to smooth
the objective function to obtain
\begin{equation*}
\hat{\beta}_{H}=\mathop{\rm arg\,min}_{\beta }\frac{1}{n}\sum_{j=1}^{n}\rho
_{q}^{H}(Y_{j}-X_{j}^{\prime }\beta ),\quad \rho _{q}^{H}(u)\equiv \left[
q-G\left( -u/h\right) \right] u,
\end{equation*}%
where $G(\cdot )$ is a smooth function and $h$ is the smoothing parameter or
bandwidth. Instead of smoothing the objective function, we smooth the
underlying moment condition and define $\hat{\beta}$ to be the solution of
the vector of smoothed estimating equations (SEE) $m_{n}(\hat{\beta})=0$,
where\footnote{%
It suffices to have $m_{n}(\hat{\beta})=o_{p}(1)$, which allows for a small
error when $\hat{\beta}$ is not the exact solution to $m_{n}(\hat{\beta})=0$.%
}
\begin{equation*}
m_{n}(\beta )\equiv \frac{1}{\sqrt{n}}\sum_{j=1}^{n}W_{j}(\beta )\text{ and }%
W_{j}(\beta )\equiv Z_{j}\left[ G\left( \frac{X_{j}^{\prime }\beta -Y_{j}}{h}%
\right) -q\right] .
\end{equation*}
Our approach is related to kernel-based nonparametric conditional quantile
estimators. The moment condition there is $\mathbb E\left[ 1\{X=x\} \left(
1\{Y<\beta \}-q\right) \right] =0$. Usually the $1\{X=x\}$ indicator
function is ``smoothed'' with a kernel, while the latter term is not. This
yields the nonparametric conditional quantile estimator $\hat{\beta}_{q}(x)=%
\mathop{\rm arg\,min}_{b}\sum_{i=1}^{n}\rho _{q}(Y_{i}-b)K[(x-X_{i})/h]$ for
the conditional $q$-quantile at $X=x$, estimated with kernel $K(\cdot)$ and
bandwidth $h$. Our approach is different in that we smooth the indicator $%
1\{Y<\beta \}$ rather than $1\{X=x\}$. Smoothing both terms may help but is
beyond the scope of this paper.
Estimating $\hat{\beta}$ from the SEE is computationally easy: $d$ equations
for $d$ parameters, and a known, analytic Jacobian. Computationally, solving
our problem is faster and more reliable than the IV-QR method in %
\citet{ChernozhukovHansen2006}, which requires specification of a grid of
endogenous coefficient values to search over, computing a conventional QR
estimator for each grid point. This advantage is important particularly when
there are more endogenous variables.
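For concreteness, the following minimal sketch (in Python, with hypothetical
data arrays \texttt{Y}, \texttt{X}, \texttt{Z} and illustrative names; it is a
sketch, not a definitive implementation) solves the SEE using the $G(\cdot)$
of \eqref{eqn:G_fun} below and a standard root-finder supplied with the
analytic Jacobian:
\begin{verbatim}
import numpy as np
from scipy import optimize

def G(v):    # smoothed indicator from eqn (G_fun), r = 4
    p = 0.5 + (105/64)*(v - (5/3)*v**3 + (7/5)*v**5 - (3/7)*v**7)
    return np.where(v <= -1, 0.0, np.where(v >= 1, 1.0, p))

def Gp(v):   # G'(v), a fourth-order kernel supported on [-1, 1]
    return np.where(np.abs(v) >= 1, 0.0,
                    (105/64)*(1 - 5*v**2 + 7*v**4 - 3*v**6))

def see(beta, Y, X, Z, q, h):      # m_n(beta) = n^{-1/2} sum_j W_j(beta)
    u = (X @ beta - Y) / h
    return Z.T @ (G(u) - q) / np.sqrt(len(Y))

def see_jac(beta, Y, X, Z, q, h):  # analytic Jacobian of m_n(beta)
    u = (X @ beta - Y) / h
    return (Z * Gp(u)[:, None]).T @ X / (h * np.sqrt(len(Y)))

# beta_hat = optimize.root(see, beta_init, jac=see_jac,
#                          args=(Y, X, Z, q, h)).x
\end{verbatim}
Any quasi-Newton solver works here; the point is only that the system is
smooth, square ($d$ equations in $d$ unknowns), and has a closed-form
Jacobian.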
If the model is overidentified with $\dim (Z_{j})>\dim (X_{j})$, we can use
a $\dim (X_{j})\times \dim (Z_{j})$ matrix $\mathbb{W}$ to transform the
original moment conditions $\mathbb E\left[Z_{j}\left(q-1\left\{Y_{j}<X_{j}^{\prime }\beta \right\}\right)\right]=0$
into
\begin{equation} \label{eqn:EE-overID}
\mathbb E\left[ \tilde{Z}_{j}\left( q-1\left\{Y_{j}<X_{j}^{\prime }\beta \right\} \right)
\right] =0,\text{ for }\tilde{Z}_{j}=\mathbb{W}Z_{j}\in \mathbb{R}^{\dim
(X_{j})}.
\end{equation}%
Then we have an exactly identified model with transformed instrument vector $%
\tilde{Z}_{j}$, and our asymptotic analysis can be applied to %
\eqref{eqn:EE-overID}.
By the theory of optimal estimating equations or efficient two-step GMM, the
optimal $\mathbb{W}$ takes the following form:%
\begin{align*}
\mathbb{W} &= \left. \frac{\partial }{\partial \beta }\mathbb E\left[ Z^{\prime
}\left( q-1\left\{Y<X^{\prime }\beta \right\} \right) \right] \right \vert _{\beta
=\beta _{0}}\mathrm{Var}\left[Z\left(q-1\{Y<X^{\prime }\beta _{0}\} \right) \right]^{-1}
\\
&= \mathbb E \left[XZ^{\prime }f_{U|Z,X}(0)\right] \left\{ \mathbb E\left[ZZ^{\prime }\sigma
^{2}\left( Z\right) \right]\right\} ^{-1} ,
\end{align*}%
where $f_{U|Z,X}(0)$ is the conditional PDF of $U$ evaluated at $U=0$ given $%
\left( Z,X\right) $ and $\sigma ^{2}\left( Z\right) =\mathrm{Var}%
\left(1\left \{ U<0\right \} \mid Z\right)$. The standard two-step approach
requires an initial estimator of $\beta _{0}$ and nonparametric estimators
of $f_{U|Z,X}(0)$ and $\sigma ^{2}\left( Z\right)$. The underlying
nonparametric estimation error may outweigh the benefit of having an optimal
weighting matrix. This is especially a concern when the dimensions of $X$
and $Z$ are large. The problem is similar to what \citet{HwangSun2015} consider in a time series GMM framework where the optimal weighting matrix
is estimated using a nonparametric HAC approach. Under the alternative and
more accurate asymptotics that captures the estimation error of the
weighting matrix, they show that the conventionally optimal two-step
approach does not necessarily outperform a first-step approach that does not
employ a nonparametric weighting matrix estimator. While we expect a similar
qualitative message here, we leave a rigorous analysis to future research.
In practice, a simple procedure is to ignore $%
f_{U|Z,X}(0) $ and $\sigma ^{2}\left( Z\right) $ (or assume that they are
constants) and employ the following empirical weighting matrix,
\begin{equation*}
\mathbb{W}_{n} = \left( \frac{1}{n}\sum_{j=1}^{n}X_{j}Z_{j}^{\prime }\right) %
\left( \frac{1}{n}\sum_{j=1}^{n}Z_{j}Z_{j}^{\prime }\right) ^{-1}.
\end{equation*}%
This choice of $\mathbb{W}_{n}$ is in the spirit of the influential work of \citet{LiangZeger1986}, who advocate the use of a working correlation matrix
in constructing the weighting matrix. Given the above choice of $\mathbb{W}%
_{n}$, $\tilde{Z}_{j}$ is the least squares projection of $X_{j}$ on $Z_{j}$%
. It is easy to show that with some notational changes our asymptotic
results remain valid in this case.
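Continuing the earlier sketch, the transformation amounts to one line of
linear algebra: the rows of $\tilde{Z}=Z(Z^{\prime }Z)^{-1}Z^{\prime }X$ are
exactly $\mathbb{W}_{n}Z_{j}$, the fitted values from regressing $X$ on $Z$
(again an illustrative sketch only):
\begin{verbatim}
def projected_instruments(X, Z):
    # Ztilde = Z (Z'Z)^{-1} Z'X; row j equals W_n Z_j, the least
    # squares projection of X_j on Z_j, with the same width as X.
    coef = np.linalg.solve(Z.T @ Z, Z.T @ X)
    return Z @ coef
\end{verbatim}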
An example of an overidentified model is the conditional moment
model
\begin{equation*}
\mathbb E\left[ \left( 1\{U_{j}<0\}-q\right) \mid Z_{j}\right] =0.
\end{equation*}%
In this case, any measurable function of $Z_{j}$ can be
used as an instrument. As a result, the model could be overidentified.
According to \citet{Chamberlain1987} and \citet{Newey1990}, the optimal set of instruments in our setting is given by
\begin{equation*}
\left. \left[ \frac{\partial }{\partial \beta }\mathbb E\left( 1\{Y_{j}-X_{j}^{%
\prime }\beta <0\}\mid Z_{j}\right) \right] \right \vert _{\beta =\beta _{0}}.
\end{equation*}%
Let $F_{U|Z,X}\left( u\mid z,x\right) $ and $%
f_{U|Z,X}\left( u\mid z,x\right) $ be the conditional
distribution function and density function of $U$ given $\left(
Z,X\right) =(z,x)$. Then under some regularity conditions,
\begin{align*}
\left. \left[ \frac{\partial }{\partial \beta }\mathbb E\left( 1\{Y_{j}-X_{j}^{%
\prime }\beta <0\}\mid Z_{j}\right) \right] \right \vert _{\beta =\beta _{0}}
&= \left. \left\{ \frac{\partial }{\partial \beta }\mathbb E\left[ \mathbb E\left(
1\{Y_{j}-X_{j}^{\prime }\beta <0\} \mid Z_{j},X_{j}\right) \mathrel{\big|} Z_{j}\right]
\right\} \right \vert _{\beta =\beta _{0}} \\
&= \mathbb E\left\{ \left. \left[\frac{\partial }{\partial \beta }%
F_{U_{j}|Z_{j},X_{j}}\left( X_{j}^{\prime }\left( \beta -\beta _{0}\right) \mid
Z_{j},X_{j}\right)\right]\right\vert _{\beta =\beta_{0}} \mathrel{\bigg|} Z_{j}\right\} \\
&= \mathbb E\left[ f_{U_{j}|Z_{j},X_{j}}(0\mid Z_{j},X_{j})X_{j} \mathrel{\big|} Z_{j}\right] .
\end{align*}
The optimal instruments involve the conditional density
$f_{U|Z,X}\left( u\mid z,x\right) $ and a conditional expectation. In
principle, these objects can be estimated nonparametrically. However, the
nonparametric estimation uncertainty can be very high,
adversely affecting the reliability of inference. A simple and practical
strategy\footnote{We are not alone in recommending this simple strategy for empirical work.
\citet{ChernozhukovHansen2006} make the same recommendation in their Remark 5 and use this strategy in
their empirical application. See also \citet{Kwak2010}.}
is to construct the optimal instruments as the OLS projection of each $%
X_{j}$ onto some sieve basis functions $\Phi ^{K}\left(
Z_{j}\right) \equiv \left[ \Phi
_{1}(Z_{j}),\ldots,\Phi _{K}(Z_{j})\right] ^{\prime }$, leading to
\begin{equation*}
\tilde{Z}_{j}=\left[ \frac{1}{n}\sum_{j=1}^{n}X_{j}\Phi ^{K}\left(
Z_{j}\right) ^{\prime }\right] \left[ \frac{1}{n}\sum_{j=1}^{n}\Phi
^{K}\left( Z_{j}\right) \Phi ^{K}\left( Z_{j}\right) ^{\prime }\right]
^{-1}\Phi ^{K}\left( Z_{j}\right) \in \mathbb{R}^{\dim (X_{j})}
\end{equation*}%
as the instruments. Here $\left \{ \Phi _{i}\left( \cdot \right)
\right \} $ are the basis functions such as power functions. Since
the dimension of $\tilde{Z}_{j}$ is the same as the dimension of $%
X_{j}$, our asymptotic analysis can be applied for any fixed value
of $K$.\footnote{A theoretically efficient estimator can be obtained using the sieve minimum
distance approach. It entails first estimating the conditional expectation $\mathbb E%
\left[ \left( 1\{Y_{j}<X_{j}^{\prime }\beta \}-q\right) \mid Z_{j}\right] $ using $\Phi
^{K}\left( Z_{j}\right) $ as the basis functions and then choosing $\beta $
to minimize a weighted sum of squared conditional expectations. See, for
example, \citet{ChenPouzo2009,ChenPouzo2012}. To achieve the semiparametric
efficiency bound, $K$ has to grow with the sample size at an appropriate
rate. In work in progress, we consider nonparametric quantile regression
with endogeneity and allow $K$ to diverge, which is necessary for both
identification and efficiency. Here we are content with a fixed $K$ for
empirical convenience at the cost of possible efficiency loss.}
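A sketch of this simple strategy, with a hypothetical elementwise power basis
(any sieve basis could be substituted; interaction terms are omitted here only
for brevity):
\begin{verbatim}
def sieve_instruments(X, Z, K=3):
    # Phi(Z) = (1, Z, Z**2, ..., Z**(K-1)) columnwise; Ztilde collects
    # the OLS fitted values of each column of X on Phi(Z).
    Phi = np.hstack([np.ones((len(Z), 1))] + [Z**k for k in range(1, K)])
    coef, *_ = np.linalg.lstsq(Phi, X, rcond=None)
    return Phi @ coef
\end{verbatim}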
\subsection{Comparison with other estimators\label{sec:SEE-comp}}
\subsubsection*{Smoothed criterion function}
For the special case $Z_{j}=X_{j}$, we compare the SEE with the estimating equations derived
from smoothing the criterion function as in \citet{Horowitz1998}. The first
order condition of the smoothed criterion function, evaluated at the true $%
\beta _{0}$, is
\begin{align}
\notag
0& =\left. \frac{\partial }{\partial \beta }\right \vert _{\beta =\beta
_{0}}n^{-1}\sum_{i=1}^{n}\left[ q-G\left( \frac{X_{i}^{\prime }\beta -Y_{i}}{%
h}\right) \right] (Y_{i}-X_{i}^{\prime }\beta ) \\
\notag
& =n^{-1}\sum_{i=1}^{n}\Big[-qX_{i}-G^{\prime
}(-U_{i}/h)(X_{i}/h)Y_{i}+G^{\prime }(-U_{i}/h)(X_{i}/h)X_{i}^{\prime }\beta
_{0}+G(-U_{i}/h)X_{i}\Big] \\
\notag
& =n^{-1}\sum_{i=1}^{n}X_{i}\left[ G(-U_{i}/h)-q\right] +n^{-1}%
\sum_{i=1}^{n}G^{\prime }(-U_{i}/h)\left[ (X_{i}/h)X_{i}^{\prime }\beta
_{0}-(X_{i}/h)Y_{i}\right] \\
\label{eqn:SCF-EE}
& =n^{-1}\sum_{i=1}^{n}X_{i}\left[ G(-U_{i}/h)-q\right] +n^{-1}%
\sum_{i=1}^{n}(1/h)G^{\prime }(-U_{i}/h)[-X_{i}U_{i}].
\end{align}%
The first term agrees with our proposed SEE. Technically, it should be
easier to establish high-order results for our SEE estimator since it has
one fewer term. Later we show that the absolute bias of our SEE estimator is
smaller, too. Another subtle point is that our SEE requires only the
estimating equation $\mathbb E\left[X_{j}\left( 1\{U_{j}<0\}-q\right)\right] =0$, whereas %
\citet{Horowitz1998} has to impose an additional condition to ensure that
the second term in the FOC is approximately mean zero.
\subsubsection*{IV mean regression\label{sec:see-comp-IV}}
When $h\rightarrow \infty $, $G(\cdot )$ only takes arguments near zero and
thus can be approximated well linearly. For example, with the $G(\cdot )$
from \citet{Whang2006} and \citet{Horowitz1998}, $G(v)=0.5+(105/64)v+O(v^{3})
$ as $v\rightarrow 0$. Ignoring the $O(v^{3})$, the corresponding estimator $%
\hat{\beta}_{\infty }$ is defined by
\begin{align*}
0& =\sum_{i=1}^{n}Z_{i}\left[ G\left( \frac{X_{i}^{\prime }\hat{\beta}%
_{\infty }-Y_{i}}{h}\right) -q\right] \\
& \doteq \sum_{i=1}^{n}Z_{i}\left[ \left( 0.5+(105/64)\frac{X_{i}^{\prime }%
\hat{\beta}_{\infty }-Y_{i}}{h}\right) -q\right] \\
& =(105/64h)Z^{\prime }X\hat{\beta}_{\infty }-(105/64h)Z^{\prime
}Y+(0.5-q)Z^{\prime }\mathbf{1}_{n,1} \\
& =(105/64h)Z^{\prime }X\hat{\beta}_{\infty }-(105/64h)Z^{\prime
}Y+(0.5-q)Z^{\prime }(Xe_{1}) ,
\end{align*}%
where $e_{1}=(1,0,\ldots ,0)^{\prime }$ is $d\times 1$, $\mathbf{1}%
_{n,1}=(1,1,\ldots ,1)^{\prime }$ is $n\times 1$, $X$ and $Z$ are $n\times d$
with respective rows $X_{i}^{\prime }$ and $Z_{i}^{\prime }$, and using the
fact that the first column of $X$ is $\mathbf{1}_{n,1}$ so that $Xe_{1}=%
\mathbf{1}_{n,1}$. It then follows that
\begin{equation*}
\hat{\beta}_{\infty }=\hat{\beta}_{IV}+\left( (64h/105)(q-0.5),0,\ldots
,0\right) ^{\prime }.
\end{equation*}%
As $h$ grows large, the smoothed QR estimator approaches the IV estimator
plus an adjustment to the intercept term that depends on $q$, the bandwidth,
and the slope of $G(\cdot )$ at zero. In the special case $Z_{j}=X_{j}$, the
IV estimator is the OLS estimator.\footnote{%
This is different from \citet{ZhouEtAl2011}, who add the $d$ OLS moment
conditions to the $d$ median regression moment conditions before estimation;
our connection to IV/OLS emerges naturally from smoothing the (IV)QR
estimating equations.}
The intercept is often not of interest, and when $q=0.5$, the adjustment is
zero anyway. The class of SEE estimators is a continuum (indexed by $h$)
with two well-known special cases at the extremes: unsmoothed IV-QR and mean
IV. For $q=0.5$ and $Z_j=X_j$, this is median regression and mean regression
(OLS). Well known are the relative efficiency advantages of
the median and the mean for different error distributions. Our estimator
with a data-driven bandwidth can harness the advantages of both, without
requiring the practitioner to make guesses about the unknown error
distribution.
\subsubsection*{Robust estimation}
With $Z_j=X_j$, the result that our SEE can yield OLS when $h\to \infty$ or
median regression when $h=0$ calls to mind robust estimators like the
trimmed or Winsorized mean (and corresponding regression estimators).
Setting the trimming/Winsorization parameter to zero generates the mean
while the other extreme generates the median. However, our SEE mechanism is
different and more general/flexible; trimming/Winsorization is not directly
applicable to $q\ne0.5$; our method to select the smoothing parameter is
novel; and the motivations for QR extend beyond (though include) robustness.
With $X_{i}=1$ and $q=0.5$ (population median estimation), our SEE becomes
\begin{equation*}
0=n^{-1}\sum_{i=1}^{n}\left[ 2G\left(\frac{\beta -Y_{i}}{h}\right)-1\right] .
\end{equation*}%
If $G'(u)=1\{-1\le u\le 1\}/2$ (the uniform kernel), then $H(u)\equiv 2G(u)-1=u$ for $u\in[-1,1]$, $H(u)=1$ for $u>1$, and $H(u)=-1$ for $u<-1$. The SEE is then $0=\sum_{i=1}^{n}\psi\left(Y_i;\beta\right)$ with $\psi\left(Y_i;\beta\right)=H\left(\left(\beta-Y_i\right)/h\right)$. This produces the Winsorized mean estimator of the type in \citet[example (iii), p.\ 79]{Huber1964}.\footnote{%
For a strict mapping, multiply by $h$ to get $\psi (Y_{i};\beta )=hH[(\beta
-Y_{i})/h]$. The solution is equivalent since $\sum h\psi (Y_{i};\beta )=0$
is the same as $\sum \psi (Y_{i};\beta )=0$ for any nonzero constant $h$.}
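In code, this special case is transparent (a sketch assuming the
uniform-kernel choice of $G^{\prime }$):
\begin{verbatim}
def winsorized_see(beta, Y, h):
    # H(u) = 2G(u) - 1 clips u to [-1, 1] when G' is the uniform
    # kernel; the root in beta is Huber's Winsorized mean.
    return np.mean(np.clip((beta - Y) / h, -1.0, 1.0))

# e.g. optimize.brentq(lambda b: winsorized_see(b, Y, h),
#                      Y.min(), Y.max())
\end{verbatim}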
Further theoretical comparison of our SEE-QR with trimmed/Winsorized mean
regression (and the IV versions) would be interesting but is beyond the
scope of this paper. For more on robust location and regression estimators,
see for example \citet{Huber1964}, \citet{KoenkerBassett1978}, and %
\citet{RuppertCarroll1980}.
\section{MSE of the SEE\label{sec:mse}}
Since statistical inference can be made based on the estimating equations
(EEs), we examine the mean squared error (MSE) of the SEE.
An advantage of using EEs directly is that inference can be made robust to the
strength of identification. Our focus on the EEs is also in the same
spirit of the large literature on optimal estimating equations. For the
historical developments of EEs and their applications in econometrics, see %
\citet{BeraEtAl2006}. The MSE of the SEE is also related to the estimator MSE
and inference properties both intuitively and (as we will show) theoretically. Such results may provide
helpful guidance in contexts where the SEE MSE is easier to compute than the
estimator MSE; they also provide insight into how smoothing works in the QR
model, as well as results that will be used in subsequent sections.
We maintain different subsets of the following assumptions for different
results.
We write $f_{U|Z}(\cdot \mid z)$ and $F_{U|Z}(\cdot \mid z)$ as the
conditional PDF and CDF of $U$ given $Z=z$. We define $f_{U|Z,X}(\cdot \mid
z,x)$ and $F_{U|Z,X}(\cdot \mid z,x) $ similarly.
\begin{assumption}
\label{a:sampling} $(X_{j}^{\prime },Z_{j}^{\prime },Y_{j})$ is iid across $%
j=1,2,\ldots ,n$, where $Y_{j}=X_{j}^{\prime }\beta _{0}+U_{j}$, $X_{j}$ is
an observed $d\times 1$ vector of stochastic regressors that can include a
constant, $\beta _{0}$ is an unknown $d\times 1$ constant vector, $U_{j}$ is
an unobserved random scalar, and $Z_{j}$ is an observed $d\times 1$ vector
of instruments such that $\mathbb E\left[Z_{j}\left( 1\{U_{j}<0\}-q\right)\right] =0$.
\end{assumption}
\begin{assumption}
\label{a:rank} (i) $Z_{j}$ has bounded support. (ii) $\mathbb E\left(
Z_{j}Z_{j}^{\prime }\right) $ is nonsingular.
\end{assumption}
\begin{assumption}
\label{a:fUZ}(i) $P(U_{j}<0\mid Z_{j}=z)=q$ for almost all $z\in \mathcal{Z}$%
, the support of $Z$. (ii) For all $u$ in a neighborhood of zero and almost
all $z\in \mathcal{Z}$, $f_{U|Z}(u\mid z)$ exists, is bounded away from
zero, and is $r$ times continuously differentiable with $r\geq 2$. (iii)
There exists a function $C(z)$ such that $\left \vert f_{U|Z}^{(s)}(u\mid
z)\right
\vert \leq C(z)$ for $s=0,2,\ldots ,r$, almost all $z\in \mathcal{Z%
}$ and $u$ in a neighborhood of zero, and $\mathbb E\left[ C(Z)\left
\Vert
Z\right
\Vert ^{2}\right] <\infty $.
\end{assumption}
\begin{assumption}
\label{a:G} (i) $G(v)$ is a bounded function satisfying $G(v)=0$ for $%
v\leq-1 $, $G(v)=1$ for $v\geq 1$, and $1-\int_{-1}^{1}G^{2}(u)du>0$. (ii) $%
G^{\prime }(\cdot )$ is a symmetric and bounded $r$th order kernel with $%
r\geq 2$ so that $\int_{-1}^{1}G^{\prime }(v)dv=1$, $\int_{-1}^{1}v^{k}G^{%
\prime }(v)dv=0$ for $k=1,2,\ldots ,r-1$, $\int_{-1}^{1}\left \vert
v^{r}G^{\prime }(v)\right \vert dv<\infty $, and $\int_{-1}^{1}v^{r}G^{%
\prime }(v)dv\neq 0$. (iii) Let $\tilde{G}(u)=\left( G(u),[G(u)]^{2},\ldots
,[G(u)]^{L+1}\right) ^{\prime }$ for some $L\geq 1$. For any $\theta \in
\mathbb{R}^{L+1}$ satisfying $\left \Vert \theta \right \Vert =1$, there is
a partition of $[-1,1]$ given by $-1=a_{0}<a_{1}<\cdots <a_{\tilde L}=1$ for some finite $\tilde L$ such
that $\theta ^{\prime }\tilde{G}(u)$ is either strictly positive or strictly
negative on the intervals $(a_{i-1},a_{i})$ for $i=1,2,\ldots ,\tilde L$.
\end{assumption}
\begin{assumption}
\label{a:h} $h\propto n^{-\kappa }$ for $1/\left( 2r\right) <\kappa <1$.
\end{assumption}
\begin{assumption}
\label{a:beta} $\beta=\beta_0$ uniquely solves $\mathbb E\left[ Z_{j}\left(
q-1\{Y_{j}<X_{j}^{\prime }\beta \} \right) \right] =0$ over $\beta \in
\mathcal{B}$.
\end{assumption}
\begin{assumption}
\label{a:power_mse}(i) $f_{U|Z,X}(u\mid z,x)$ is $r$ times continuously
differentiable in $u$ in a neighborhood of zero for almost all $x\in
\mathcal{X}$ and $z\in \mathcal{Z}$ for $r>2$. (ii) $\Sigma _{ZX}\equiv \mathbb E%
\left[ Z_{j}X_{j}^{\prime }f_{U|Z,X}(0\mid Z_{j},X_{j})\right] $ is
nonsingular.
\end{assumption}
Assumption \ref{a:sampling} describes the sampling process. Assumption \ref%
{a:rank} is analogous to Assumption 3 in both \citet{Horowitz1998} and %
\citet{Whang2006}. As discussed in these two papers, the boundedness
assumption for $Z_{j}$, which is a technical condition, is made only for
convenience and can be dropped at the cost of more complicated proofs.
Assumption \ref{a:fUZ}(i) allows us to use the law of iterated expectations
to simplify the asymptotic variance. Our qualitative conclusions do not rely
on this assumption. Assumption \ref{a:fUZ}(ii) is critical. If we are not
willing to make such an assumption, then smoothing will be of no benefit.
Inversely, with some small degree of smoothness of the conditional error
density, smoothing can leverage this into the advantages described here.
Also note that \citet{Horowitz1998} assumes $r\geq 4$, which is sufficient
for the estimator MSE result in Section \ref{sec:est}.
Assumptions \ref{a:G}(i--ii) are analogous to the standard high-order kernel
conditions in the kernel smoothing literature. The integral condition in (i)
ensures that smoothing reduces (rather than increases) variance. Note that
\begin{align*}
1-\int_{-1}^{1}G^{2}(u)du& =2\int_{-1}^{1}uG(u)G^{\prime }(u)du \\
& =2\int_{0}^{1}uG(u)G^{\prime }(u)du+2\int_{-1}^{0}uG(u)G^{\prime }(u)du \\
& =2\int_{0}^{1}uG(u)G^{\prime }(u)du-2\int_{0}^{1}vG(-v)G^{\prime }(-v)dv \\
& =2\int_{0}^{1}uG^{\prime }(u)\left[ G(u)-G(-u)\right] du,
\end{align*}
using the evenness of $G'(u)$. When $r=2$, we can use any $G(u)$ such that $G'(u)$ is a symmetric PDF on $[-1,1]$.
In this case, $1-\int_{-1}^1 G^2(u)du>0$ holds automatically.
When $r>2$, $G'(u)<0$ for some $u$, and $G(u)$ is not monotonic. It is not easy to sign $1-\int_{-1}^1 G^2(u)du$ generally, but it is simple to calculate this quantity for any chosen $G(\cdot)$. For example, consider $r=4$ and the $G(\cdot)$ function in \citet{Horowitz1998} and \citet{Whang2006} shown in Figure \ref{fig:G}:
\begin{equation}
G(u)=\left \{
\begin{array}{ll}
0, & u\leq -1 \\
0.5+\frac{105}{64}\left( u-\frac{5}{3}u^{3}+\frac{7}{5}u^{5}-\frac{3}{7}%
u^{7}\right) , & u\in \lbrack -1,1] \\
1, & u\geq 1%
\end{array}%
\right. \label{eqn:G_fun}
\end{equation}%
The function takes values outside $[0,1]$, as Figure \ref{fig:G} shows. Simple calculations show that $1-\int_{-1}^1 G^2(u)du>0$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.7\textwidth,clip=true,trim=35 35 20 70]{Graph_G_from_HorowitzWhang.pdf}%
\end{center}
\caption{Graph of $G(u)=0.5+\frac{105}{64}\left( u-\frac{5}{3}u^{3}+\frac{7}{%
5}u^{5}-\frac{3}{7}u^{7}\right) $ (solid line) and its derivative (broken).}
\label{fig:G}
\end{figure}
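The positivity claim is also easy to check numerically; for instance, the
following quick quadrature check (a sanity check only, not part of any proof)
returns $1-\int_{-1}^{1}G^{2}(u)\,du\approx 0.082>0$:
\begin{verbatim}
u = np.linspace(-1.0, 1.0, 2_000_001)
Gu = 0.5 + (105/64)*(u - (5/3)*u**3 + (7/5)*u**5 - (3/7)*u**7)
print(1.0 - 2.0*np.mean(Gu**2))   # approx 0.082 > 0
\end{verbatim}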
Assumption \ref{a:G}(iii) is needed for the Edgeworth expansion. As \citet{Horowitz1998} and %
\citet{Whang2006} discuss, Assumption \ref{a:G}(iii) is a technical
assumption that (along with Assumption \ref{a:h}) leads to a form of Cram\'{e%
}r's condition, which is needed to justify the Edgeworth expansion used in
Section \ref{sec:eI}. Any $G(u)$ constructed by integrating polynomial kernels in \citet{Muller1984} satisfies Assumption \ref{a:G}(iii). In fact, $G(u)$ in \eqref{eqn:G_fun} is obtained by integrating a fourth-order kernel given in Table 1 of \citet{Muller1984}.
Assumption \ref{a:h} ensures that the bias of the SEE is of
smaller order than its variance. It is needed for the asymptotic normality
of the SEE as well as the Edgeworth expansion.
Assumption \ref{a:beta} is an identification assumption. See Theorem 2 of %
\citet{ChernozhukovHansen2006} for more primitive conditions. It ensures the
consistency of the SEE estimator. Assumption \ref{a:power_mse} is necessary
for the $\sqrt{n}$-consistency and asymptotic normality of the SEE estimator.
Define%
\begin{equation*}
W_{j}\equiv W_{j}(\beta _{0})=Z_{j}\left[ G(-U_{j}/h)-q\right]
\end{equation*}%
and abbreviate $m_{n}\equiv m_{n}(\beta _{0})=n^{-1/2}\sum_{j=1}^{n}W_{j}$.
The theorem below gives the first two moments of $W_{j}$ and the first-order
asymptotic distribution of $m_{n}$.
\begin{theorem}
\label{thm:Wj} Let Assumptions \ref{a:rank}(i), \ref{a:fUZ}, and \ref{a:G}%
(i--ii) hold. Then
\begin{align}
\mathbb E(W_{j})& =\frac{(-h)^{r}}{r!}\left[ \int_{-1}^{1}G^{\prime
}(v)v^{r}dv\right] \mathbb E\left[ f_{U|Z}^{(r-1)}(0\mid Z_{j})Z_{j}\right] +o\left(
h^{r}\right) , \label{eqn:see-bias} \\
\mathbb E(W_{j}^{\prime }W_{j})
&= q(1-q)\mathbb E\left(Z_{j}^{\prime }Z_{j}\right)
-h\left[1-\int_{-1}^{1}G^{2}(u)du\right] \mathbb E\left[f_{U|Z}(0\mid Z_{j})Z_{j}^{\prime
}Z_{j}\right]+O(h^{2}), \label{eqn:W-var-trace} \\
\mathbb E(W_{j}W_{j}^{\prime })& =q(1-q)\mathbb E\left(Z_{j}Z_{j}^{\prime }\right)-h\left[
1-\int_{-1}^{1}G^{2}(u)du\right] \mathbb E\left[f_{U|Z}(0\mid Z_{j})Z_{j}Z_{j}^{\prime
}\right]+O(h^{2}). \notag
\end{align}%
If additionally Assumptions \ref{a:sampling} and \ref{a:h} hold, then
\begin{equation*}
m_{n}\overset{d}{\rightarrow }N(0,V),\quad V\equiv \lim_{n\rightarrow \infty
}\mathbb E\left\{ \left[W_{j}-\mathbb E(W_{j})\right]\left[W_{j}-\mathbb E(W_{j})\right]^{\prime }\right\}
=q(1-q)\mathbb E\left(Z_{j}Z_{j}^{\prime }\right).
\end{equation*}
\end{theorem}
Compared with the EE derived from smoothing the criterion function as in \citet{Horowitz1998}, our SEE has smaller bias and variance, and these differences affect the bias and variance of the parameter estimator.
The former approach only applies to exogenous QR with $Z_j=X_j$.
The EE derived from smoothing the criterion function in \eqref{eqn:SCF-EE} for $Z_j=X_j$ can be written
\begin{align}\label{eqn:SCF-EE-Wj}
0 &= n^{-1}\sum_{j=1}^n W_j, \quad
W_j \equiv X_j\left[G(-U_j/h)-q\right] + (1/h)G'(-U_j/h)(-X_jU_j) .
\end{align}
Consequently, as calculated in the appendix,
\begin{align}\label{eqn:SCF-EWj}
\mathbb E(W_j)
& =(r+1)\frac{(-h)^r}{r!} \left[
\int G^{\prime }(v)v^{r}dv\right] \mathbb E\left[ f_{U|Z}^{(r-1)}(0\mid Z_{j})Z_{j}%
\right] +o\left( h^{r}\right) , \\
\mathbb E(W_jW_j') \label{eqn:SCF-EWjWj}
&= q(1-q)\mathbb E(X_jX_j')
+h \int_{-1}^1[G'(v)v]^2dv \, \mathbb E\left[f_{U|X}(0\mid X_j)X_jX_j'\right]
+O(h^2) , \\
\mathbb E & \left[\D{}{\beta'} n^{-1/2}m_n(\beta_0)\right] \label{eqn:SCF-EdBmn}
= \mathbb E\left[f_{U|X}(0\mid X_j)X_jX_j'\right]
-h \mathbb E\left[f_{U|X}'(0\mid X_j)X_jX_j'\right]
+O(h^2) .
\end{align}
The dominating bias term of our SEE in \eqref{eqn:see-bias} is smaller in absolute value by a factor of $r+1$ than that of the EE derived from the smoothed criterion function in \eqref{eqn:SCF-EWj}.
A larger bias can lead to less accurate confidence regions if the same
variance estimator is used.
Additionally, the smoothed criterion function analog of $\mathbb E(W_jW_j')$ in \eqref{eqn:SCF-EWjWj} has a positive $O(h)$ term instead of the negative $O(h)$ term for SEE.
The connection between these terms and the estimator's asymptotic mean squared error (AMSE) is shown in Section \ref{sec:est} to rely on the inverse of the matrix in equation \eqref{eqn:SCF-EdBmn}. Here, though, the sign of the $O(h)$ term is indeterminate since it depends on a PDF derivative. (A negative $O(h)$ term implies higher AMSE since this matrix is inverted in the AMSE expression, and positive implies lower.) If $U=0$ is a mode of the conditional (on $X$) distribution, then the $O(h)$ term is zero and the AMSE comparison is driven by $\mathbb E(W_j)$ and $\mathbb E(W_jW_j')$. Since SEE yields smaller $\mathbb E(W_jW_j')$ and smaller absolute $\mathbb E(W_j)$, it will have smaller estimator AMSE in such cases. Simulation results in Section \ref{sec:sim} add evidence that the SEE estimator usually has smaller MSE in practice.
The first-order asymptotic variance $V$ is the same as the asymptotic
variance of
\begin{equation*}
n^{-1/2}\sum_{j=1}^n Z_j \left(1\{U_j<0\}-q\right) ,
\end{equation*}
the scaled EE of the unsmoothed IV-QR. The effect of smoothing to reduce
variance is captured by the term of order $h$, where $1-%
\int_{-1}^{1}G^2(u)du>0$ by Assumption \ref{a:G}(i). This reduction in
variance is not surprising. Replacing the discontinuous indicator function $%
1\{U<0\}$ by a smooth function $G(-U/h)$ pushes the dichotomous values of
zero and one into some values in between, leading to a smaller variance. The
idea is similar to \citeposs{Breiman1994}
bagging (bootstrap aggregating), among others.
Define the MSE of the SEE to be $\mathbb E\left(m_{n}^{\prime }V^{-1}m_{n}\right)$. Building
upon \eqref{eqn:see-bias} and \eqref{eqn:W-var-trace}, and using $W_{i}\independent W_{j}$ for $i\neq j$, we have:
\begin{align}
& \mathbb E\left(m_{n}^{\prime }V^{-1}m_{n}\right) \notag \\
& =\frac{1}{n}\sum_{j=1}^{n}\mathbb E\left(W_{j}^{\prime }V^{-1}W_{j}\right)+\frac{1}{n}%
\sum_{j=1}^{n}\sum_{i\neq j}\mathbb E\left( W_{i}^{\prime }V^{-1}W_{j}\right)
\notag \\
& =\frac{1}{n}\sum_{j=1}^{n}\mathbb E\left(W_{j}^{\prime }V^{-1}W_{j}\right)+\frac{1}{n}%
n(n-1)\mathbb E(W_{j}^{\prime })V^{-1}\mathbb E(W_{j}) \notag \\
& =q(1-q)\mathbb E\left(Z_{j}^{\prime }V^{-1}Z_{j}\right)+nh^{2r} \mathbb E(B)'\mathbb E(B) -h\mathrm{tr}\left[ \mathbb E\left( AA^{\prime }\right) \right]
+o\left(h+nh^{2r}\right), \notag \\
& =d+nh^{2r} \mathbb E(B)'\mathbb E(B) -h\mathrm{tr}\left[
\mathbb E\left( AA^{\prime }\right) \right] +o\left(h+nh^{2r}\right), \label{eqn:mse}
\end{align}%
where
\begin{align*}
A& \equiv \left[ 1-\int_{-1}^{1}G^{2}(u)du\right] ^{1/2}\left[ f_{U|Z}(0\mid
Z)\right] ^{1/2}V^{-1/2}Z, \\
B& \equiv \left[ \frac{1}{r!}\int_{-1}^{1}G^{\prime }(v)v^{r}dv\right]
f_{U|Z}^{(r-1)}(0\mid Z)V^{-1/2}Z.
\end{align*}
Ignoring the $o(\cdot )$ term, we obtain the asymptotic MSE of the SEE. We
select the smoothing parameter to minimize the asymptotic MSE:
\begin{equation}
h_{\text{SEE}}^{\ast }
\equiv \mathop{\rm arg\,min}_{h}
nh^{2r}\mathbb E(B)'\mathbb E(B)
- h\mathrm{tr}\left[ \mathbb E\left(AA'\right) \right] . \label{eqn:def_h_SEE}
\end{equation}%
The proposition below gives the optimal smoothing parameter $h_{\text{SEE}%
}^{\ast }$.
\begin{proposition}
\label{prop:hSEE} Let Assumptions \ref{a:sampling}, \ref{a:rank}, \ref{a:fUZ}%
, and \ref{a:G}(i--ii) hold. The bandwidth that minimizes the asymptotic MSE
of the SEE is
\begin{equation*}
h_{\text{SEE}}^{\ast }
= \left( \frac{\mathrm{tr}\left[ \mathbb E\left(AA'\right) \right] }{\mathbb E(B)'\mathbb E(B) }\frac{1}{2nr}\right) ^{%
\frac{1}{2r-1}}.
\end{equation*}%
Under the stronger assumption $U\independent Z$,
\begin{equation*}
h_{\text{SEE}}^{\ast }=\left( \frac{\left( r!\right) ^{2}\left[
1-\int_{-1}^{1}G^{2}(u)du\right] f_{U}(0)}{2r\left[ \int_{-1}^{1}G^{\prime
}(v)v^{r}dv\right] ^{2}\left[ f_{U}^{\left( r-1\right) }(0)\right] ^{2}}%
\frac{d}{n}\right) ^{\frac{1}{2r-1}}.
\end{equation*}
\end{proposition}
When $r=2$, the MSE-optimal $h_{\text{SEE}}^{\ast }\asymp
n^{-1/(2r-1)}=n^{-1/3}$. This is smaller than $n^{-1/5}$, the rate that
minimizes the MSE of estimated standard errors of the usual regression
quantiles. Since nonparametric estimators of $f_{U}^{(r-1)}(0)$ converge
slowly, we propose a parametric plug-in described in Section \ref{sec:sim}.
We point out in passing that the optimal smoothing parameter $h_{\text{SEE}%
}^{\ast }$ is invariant to rotation and translation of the (non-constant)
regressors. This may not be obvious but can be proved easily.
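As a concrete illustration of the plug-in idea, the sketch below evaluates
$h_{\text{SEE}}^{\ast }$ under $U\independent Z$ with a shifted Gaussian
reference model for the error, a hypothetical choice made only for
illustration (the plug-in we actually use is described in Section
\ref{sec:sim}). The constants $1-\int_{-1}^{1}G^{2}(u)du\approx 0.0816$ and
$\int_{-1}^{1}G^{\prime }(v)v^{4}dv=-1/33$ correspond to the $G$ in
\eqref{eqn:G_fun} with $r=4$:
\begin{verbatim}
import math
from scipy import stats

def h_see_gaussian_plugin(n, d, q, sigma=1.0):
    r = 4                          # matches the G in eqn (G_fun)
    one_minus_intG2 = 0.0816       # 1 - int G^2         (approx)
    intGpv4 = -1.0/33.0            # int G'(v) v^4 dv    (exact)
    z = stats.norm.ppf(q)          # U = sigma*(eps - z), eps ~ N(0,1),
    phi = stats.norm.pdf(z)        #   so the q-quantile of U is zero
    f0 = phi / sigma                        # f_U(0)
    f3 = (3*z - z**3) * phi / sigma**4      # f_U'''(0); zero at q = 0.5,
                                            # where this plug-in degenerates
    num = math.factorial(r)**2 * one_minus_intG2 * f0
    den = 2*r * intGpv4**2 * f3**2
    return (num/den * d/n) ** (1.0/(2*r - 1))
\end{verbatim}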
For the unsmoothed IV-QR, let
\begin{equation*}
\tilde{m}_{n} = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}Z_{j}\left( 1\left \{
Y_{j}\leq X_{j}'\beta \right \} -q\right) ,
\end{equation*}%
then the MSE of the estimating equations is $\mathbb E\left(\tilde{m}_{n}^{\prime }V^{-1}%
\tilde{m}_{n}\right)=d$. Comparing this to the MSE of the SEE given in %
\eqref{eqn:mse}, we find that the SEE has a smaller MSE when $h=h_{\text{SEE}%
}^{\ast }$ because
\begin{equation*}
n(h_{\text{SEE}}^{\ast })^{2r} \mathbb E(B)'\mathbb E(B)
-h_{\text{SEE}}^{\ast }\mathrm{tr}\left[ \mathbb E\left(AA'\right) \right]
=-h_{\text{SEE}}^{\ast }\left( 1-\frac{1}{2r}\right) \mathrm{tr}%
\left[ \mathbb E\left(AA'\right) \right] <0.
\end{equation*}%
In terms of MSE, it is advantageous to smooth the estimating equations. To
the best of our knowledge, this point has never been discussed before in the
literature.
\section{Type {I} and Type {II} Errors of a Chi-square Test\label{sec:eI}}
In this section, we explore the effect of smoothing on a chi-square test.
Other alternatives for inference exist, such as the Bernoulli-based
MCMC-computed method from \citet{ChernozhukovEtAl2009}, empirical likelihood
as in \citet{Whang2006}, and bootstrap as in \citet{Horowitz1998}, where the
latter two also use smoothing. Intuitively, when we minimize the MSE, we may
expect lower type I error: the $\chi ^{2}$ critical value is from the
unsmoothed distribution, and smoothing to minimize MSE makes large values
(that cause the test to reject) less likely.
The reduced MSE also makes it easier to distinguish the null hypothesis from
some given alternative. This combination leads to improved size-adjusted
power. As seen in our simulations, this is true especially for the IV case.
Using the results in Section \ref{sec:mse} and under Assumption \ref{a:h}, we
have
\begin{equation*}
m_{n}^{\prime }V^{-1}m_{n}\overset{d}{\rightarrow }\chi _{d}^{2},
\end{equation*}%
where we continue to use the notation $m_{n}\equiv m_{n}(\beta _{0})$. From
this asymptotic result, we can construct a hypothesis test that rejects the
null hypothesis $H_{0}:\beta =\beta _{0}$ when
\begin{equation*}
S_{n}\equiv m_{n}^{\prime }\hat{V}^{-1}m_{n}>c_{\alpha },
\end{equation*}%
where
\begin{equation*}
\hat{V}=q(1-q)\frac{1}{n}\sum_{j=1}^{n}Z_{j}Z_{j}^{\prime }
\end{equation*}%
is a consistent estimator of $V$ and $c_{\alpha }\equiv \chi _{d,1-\alpha
}^{2}$ is the $1-\alpha $ quantile of the chi-square distribution
with $d$ degrees of freedom. As desired, the asymptotic size is
\begin{equation*}
\lim_{n\rightarrow \infty }P\left( S_{n}>c_{\alpha }\right) =\alpha .
\end{equation*}%
Here $P\equiv P_{\beta _{0}}$ is the probability measure under the true
model parameter $\beta _{0}$. We suppress the subscript $\beta _{0}$ when
there is no confusion.
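A sketch of the test, reusing \texttt{see} and \texttt{G} from the earlier
sketch (illustrative only):
\begin{verbatim}
from scipy import stats

def chi2_see_test(beta0, Y, X, Z, q, h, alpha=0.10):
    # S_n = m_n' Vhat^{-1} m_n, Vhat = q(1-q) n^{-1} sum_j Z_j Z_j';
    # reject H0: beta = beta0 when S_n exceeds the chi2(d) critical value.
    n, d = Z.shape
    m = see(beta0, Y, X, Z, q, h)
    Vhat = q*(1 - q) * (Z.T @ Z) / n
    S = float(m @ np.linalg.solve(Vhat, m))
    return S, S > stats.chi2.ppf(1 - alpha, df=d)
\end{verbatim}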
It is important to point out that the above result does not rely on the
strong identification of $\beta _{0}$. It still holds if $\beta _{0}$ is
weakly identified or even unidentified. This is an advantage of focusing on
the estimating equations instead of the parameter estimator. When a direct
inference method based on the asymptotic normality of $\hat{\beta}$ is used,
we have to impose Assumptions \ref{a:beta} and \ref{a:power_mse}.
\subsection{Type {I} error and the associated optimal bandwidth}
To more precisely measure the type I error $P\left( S_{n}>c_{\alpha }\right)
$, we first develop a high-order stochastic expansion of $S_{n}$. Let $%
V_{n}\equiv \mathrm{Var}\left( m_{n}\right) $. Following the same
calculation as in \eqref{eqn:mse}, we have
\begin{align*}
V_{n}& =V-h\left[ 1-\int_{-1}^{1}G^{2}(u)du\right] \mathbb E\left[f_{U|Z}(0\mid
Z_{j})Z_{j}Z_{j}^{\prime }\right]+O(h^{2}) \\
& =V^{1/2}\left[ I_{d}-h\mathbb E\left( AA'\right) +O\left(h^{2}\right)\right] \left(
V^{1/2}\right) ^{\prime },
\end{align*}%
where $V^{1/2}$ is the matrix square root of $V$ such that $V^{1/2}\left(
V^{1/2}\right) ^{\prime }=V$. We can choose $V^{1/2}$ to be symmetric but do
not have to.
Details of the following are in the appendix; here we outline our strategy
and highlight key results. Letting
\begin{equation}
\Lambda _{n}=V^{1/2}\left[ I_{d}-h\mathbb E\left( AA'\right) +O\left(h^{2}\right)%
\right] ^{1/2} \label{Lambda_n}
\end{equation}%
such that $\Lambda _{n}\Lambda _{n}^{\prime }=V_{n}$, and defining
\begin{equation}
\bar{W}_{n}^{\ast }\equiv \frac{1}{n}\sum_{j=1}^{n}W_{j}^{\ast }\text{ and }%
W_{j}^{\ast }=\Lambda _{n}^{-1}Z_{j}\left[ G(-U_{j}/h)-q\right] ,
\label{define_W_star}
\end{equation}%
we can approximate the test statistic as
$S_{n}=S_{n}^{L}+e_{n}$,
where
\begin{equation*}
S_{n}^{L}=\left( \sqrt{n}\bar{W}_{n}^{\ast }\right) ^{\prime }\left( \sqrt{n}%
\bar{W}_{n}^{\ast }\right) -h\left( \sqrt{n}\bar{W}_{n}^{\ast }\right)
^{\prime }\mathbb E\left( AA'\right) \left( \sqrt{n}\bar{W}_{n}^{\ast
}\right)
\end{equation*}%
and $e_{n}$ is the remainder term satisfying $P\left( \left \vert
e_{n}\right \vert >O\left( h^{2}\right) \right) =O\left( h^{2}\right) $.%
The stochastic expansion above allows us to approximate the characteristic
function of $S_{n}$ with that of $S_{n}^{L}$. Taking the Fourier--Stieltjes
inverse of the characteristic function yields an approximation of the
distribution function, from which we can calculate the type I error by
plugging in the critical value $c_{\alpha}$.
\begin{theorem}
\label{thm:inf} Under Assumptions \ref{a:sampling}--\ref{a:h}, we have
\begin{align*}
P\left(S_{n}^{L}<x\right)
&= \mathcal{G}_{d}(x) -\mathcal{G}_{d+2}^{\prime}(x) \left \{
nh^{2r}\mathbb E(B)'\mathbb E(B) -h\mathrm{tr}\left
[\mathbb E\left( AA'\right) \right] \right \} +R_n, \\
P\left( S_{n}>c_{\alpha }\right) &= \alpha +\mathcal{G}_{d+2}^{\prime
}(c_{\alpha }) \left \{ nh^{2r}\mathbb E(B)'\mathbb E(B) -h\mathrm{tr}%
\left[\mathbb E\left( AA'\right) \right] \right \} +R_{n},
\end{align*}%
where $R_{n}=O\left(h^{2}+nh^{2r+1}\right)$ and $\mathcal{G}_{d}(x)$ is the CDF of the $%
\chi _{d}^{2}$ distribution.
\end{theorem}
From Theorem \ref{thm:inf}, an approximate measure of the type I error of
the SEE-based chi-square test is%
\begin{equation*}
\alpha +\mathcal{G}_{d+2}^{\prime }(c_{\alpha })\left\{ nh^{2r}\mathbb E(B)'\mathbb E(B) -h\mathrm{tr}%
\left[\mathbb E\left( AA'\right) \right] \right\} ,
\end{equation*}%
and an approximate measure of the coverage probability error (CPE) is%
\footnote{%
The CPE is defined to be the nominal coverage minus the true coverage
probability, which may be different from the usual definition. Under this
definition, smaller CPE corresponds to higher coverage probability (and
smaller type I error).}
\begin{equation*}
\mathrm{CPE}=\mathcal{G}_{d+2}^{\prime }(c_{\alpha })\left\{ nh^{2r}\mathbb E(B)'\mathbb E(B) -h\mathrm{tr}%
\left[\mathbb E\left( AA'\right) \right] \right\} ,
\end{equation*}%
which is also the error in rejection probability under the null.
Up to smaller-order terms, the term $nh^{2r}\mathbb E(B)'\mathbb E(B)$
characterizes the bias effect from smoothing. The bias increases type I
error and reduces coverage probability. The term $h\mathrm{tr}\left[\mathbb E\left( AA'\right) \right]$ characterizes the variance
effect from smoothing. The variance reduction decreases type I error and
increases coverage probability.
The type I error is $\alpha $ up to order $O\left(h+nh^{2r}\right)$.
There exists some $h>0$ that makes bias and variance effects cancel, leaving
type I error equal to $\alpha$ up to smaller-order terms in $R_{n}$.
Note that $nh^{2r}\mathbb E(B)'\mathbb E(B) -h\mathrm{tr}\left[\mathbb E\left( AA'\right) \right]$ is identical to the
high-order term in the asymptotic MSE of the SEE in \eqref{eqn:mse}. The $h_{%
\text{CPE}}^{\ast }$ that minimizes type I error is the same as $h_{\text{SEE%
}}^{\ast }$.
\begin{proposition}
\label{prop:hCPE} Let Assumptions \ref{a:sampling}--\ref{a:h} hold. The
bandwidth that minimizes the approximate type I error of the chi-square test
based on the test statistic $S_{n}$ is
\begin{equation*}
h_{\text{CPE}}^{\ast }=h_{\text{SEE}}^{\ast }=\left( \frac{\mathrm{tr}\left[\mathbb E\left( AA'\right) \right]}{\mathbb E(B)'\mathbb E(B)}%
\frac{1}{2nr}\right) ^{\frac{1}{2r-1}}.
\end{equation*}
\end{proposition}
The result that $h_{\text{CPE}}^{\ast }=h_{\text{SEE}}^{\ast }$ is
intuitive. Since $h_{\text{SEE}}^{\ast }$ minimizes $\mathbb E\left(m_{n}^{\prime
}V^{-1}m_{n}\right)$, for a test with $c_{\alpha }$ and $\hat{V}$ both invariant
to $h$, the null rejection probability $P\left(m_{n}^{\prime }\hat{V}%
^{-1}m_{n}>c_{\alpha }\right)$ should be smaller when the SEE's MSE is smaller.
When $h=h_{\text{CPE}}^{\ast }$,
\begin{equation*}
P\left( S_{n}>c_{\alpha }\right) =\alpha -C^{+}\mathcal{G}_{d+2}^{\prime
}(c_{\alpha })h_{\text{CPE}}^{\ast }\left[1+o(1)\right]
\end{equation*}%
where $C^{+}=\left( 1-\frac{1}{2r}\right) \mathrm{tr}\left[\mathbb E\left( AA'\right) \right] >0$. If instead we construct the test statistic
based on the unsmoothed estimating equations, $\tilde{S}_{n}=\tilde{m}%
_{n}^{\prime }\hat{V}^{-1}\tilde{m}_{n}$, then it can be shown that
\begin{equation*}
P\left( \tilde{S}_{n}>c_{\alpha }\right) =\alpha +Cn^{-1/2}\left[1+o(1)\right]
\end{equation*}%
for some constant $C$, which is in general not equal to zero. Given that $%
n^{-1/2}=o(h_{\text{CPE}}^{\ast })$ and $C^{+}>0$, we can expect the
SEE-based chi-square test to have a smaller type I error in large samples.
\subsection{Type II error and local asymptotic power}
To obtain the local asymptotic power of the $S_{n}$ test, we let the true
parameter value be $\beta _{n}=\beta _{0}-\delta /\sqrt{n}$, where $\beta _{0}
$ is the parameter value that satisfies the null hypothesis $H_{0}$. In this
case,
\begin{equation*}
m_{n}\left( \beta _{0}\right) =\frac{1}{\sqrt{n}}\sum_{j=1}^{n}Z_{j}\left[
G\left( \frac{X_{j}^{\prime }\delta /\sqrt{n}-U_{j}}{h}\right) -q\right] .
\end{equation*}%
In the proof of Theorem \ref{thm:power}, we show that
\begin{align*}
\mathbb E\left[m_{n}\left( \beta _{0}\right)\right] & =\Sigma _{ZX}\delta +\sqrt{n}%
(-h)^{r}V^{1/2}\mathbb E(B)+O\left( n^{-1/2}+\sqrt{n}h^{r+1}\right) , \\
V_{n}& =\mathrm{Var}\left[ m_{n}\left( \beta _{0}\right) \right]
=V-hV^{1/2}\left[ \mathbb E\left( AA'\right)\right] (V^{1/2})^{\prime }+O\left( n^{-1/2}+h^{2}\right) .
\end{align*}%
\begin{theorem}
\label{thm:power} Let Assumptions \ref{a:sampling}--\ref{a:h} and \ref{a:power_mse}(i) hold. Define $\Delta \equiv \mathbb E\left[V_{n}^{-1/2}m_{n}(\beta _{0})\right]$ and $\tilde{\delta}\equiv V^{-1/2}\Sigma_{ZX}\delta $. We have%
\begin{align*}
P_{\beta _{n}}\left( S_{n}<x\right)
&= \mathcal{G}_{d}\left( x;\left \Vert\Delta \right \Vert ^{2}\right)
+\mathcal{G}_{d+2}^{\prime }\left(x;\Vert \Delta\Vert ^{2}\right)
h\mathrm{tr}\left[\mathbb E\left( AA'\right) \right] \\
& \quad +\mathcal{G}_{d+4}^{\prime }\left(x;\left \Vert \Delta \right \Vert ^{2}\right)h%
\left[ \Delta ^{\prime }\mathbb E\left( AA'\right) \Delta \right] +O\left(
h^{2}+n^{-1/2}\right) \\
&= \mathcal{G}_{d}\left( x;\Vert \tilde{\delta}\Vert ^{2}\right)
-\mathcal{G}_{d+2}^{\prime }\left(x;\Vert \tilde{\delta}\Vert ^{2}\right)
\left\{ nh^{2r}\mathbb E(B)'\mathbb E(B)-h\mathrm{tr}\left[\mathbb E\left( AA'\right)\right]\right\}\\
& \quad +\left[ \mathcal{G}_{d+4}^{\prime }\left(x;\Vert \tilde{\delta}\Vert
^{2}\right)-\mathcal{G}_{d+2}^{\prime }\left(x;\Vert \tilde{\delta}\Vert ^{2}\right)\right] h%
\left[ \tilde{\delta}^{\prime }\mathbb E\left( AA'\right) \tilde{\delta}%
\right] \\
& \quad -\mathcal{G}_{d+2}^{\prime }\left(x;\Vert \tilde{\delta}\Vert ^{2}\right)2%
\tilde{\delta}^{\prime }\sqrt{n}(-h)^{r}\mathbb E(B)+O\left( h^{2}+n^{-1/2}\right) ,
\end{align*}%
where $\mathcal{G}_{d}(x;\lambda )$ is the CDF of the noncentral chi-square
distribution with degrees of freedom $d$ and noncentrality parameter $%
\lambda $. If we further assume that $\tilde{\delta}$ is uniformly
distributed on the sphere $\mathcal{S}_{d}(\tau )=\{ \tilde{\delta}\in
\mathbb{R}^{d}:\Vert \tilde{\delta}\Vert =\tau \}$, then
\begin{align*}
\mathbb E_{\tilde{\delta}}& \left[P_{\beta _{n}}\left( S_{n}>c_{\alpha }\right) \right] \\
& =1-\mathcal{G}_{d}\left( c_{\alpha };\tau ^{2}\right) +\mathcal{G}%
_{d+2}^{\prime }(c_{\alpha };\tau ^{2})\left\{ nh^{2r}\mathbb E(B)'\mathbb E(B)
-h\mathrm{tr}\left[\mathbb E\left( AA'\right)\right]\right\} \\
& \quad -\left[ \mathcal{G}_{d+4}^{\prime }(c_{\alpha };\tau ^{2})-\mathcal{G%
}_{d+2}^{\prime }(c_{\alpha };\tau ^{2})\right] \frac{\tau ^{2}}{d}h\mathrm{tr}\left[\mathbb E\left( AA'\right)\right]
+O\left(h^{2}+n^{-1/2}\right)
\end{align*}%
where $\mathbb E_{\tilde{\delta}}$ takes the average uniformly over the sphere $%
\mathcal{S}_{d}(\tau )$.
\end{theorem}
When $\delta=0$, which implies $\tau=0$,
the expansion in Theorem \ref{thm:power} reduces to that in Theorem \ref%
{thm:inf}.
When $h=h_{\text{SEE}}^{\ast }$, it follows from Theorem \ref{thm:inf} that
\begin{align*}
P_{\beta _{0}}\left( S_{n}>c_{\alpha }\right) &= 1-\mathcal{G}_{d}\left(
c_{\alpha }\right) -C^{+}\mathcal{G}_{d+2}^{\prime }(c_{\alpha })h_{\text{SEE%
}}^{\ast }+o(h_{\text{SEE}}^{\ast }) \\
&= \alpha -C^{+}\mathcal{G}_{d+2}^{\prime }(c_{\alpha })h_{\text{SEE}}^{\ast
}+o(h_{\text{SEE}}^{\ast }).
\end{align*}%
To remove the error in rejection probability of order $h_{\text{SEE}}^{\ast
} $, we make a correction to the critical value $c_{\alpha }$. Let $%
c_{\alpha }^{\ast }$ be a high-order corrected critical value such that $%
P_{\beta_{0}}\left( S_{n}>c_{\alpha }^{\ast }\right) =\alpha +o(h_{\text{SEE}%
}^{\ast})$. Simple calculation shows that
\begin{equation*}
c_{\alpha }^{\ast }=c_{\alpha }-\frac{\mathcal{G}_{d+2}^{\prime }(c_{\alpha
})}{\mathcal{G}_{d}^{\prime }\left( c_{\alpha }\right) }C^{+}h_{\text{SEE}%
}^{\ast }
\end{equation*}%
meets the requirement.
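In code the correction is immediate, since $\mathcal{G}_{d}^{\prime }$ is just
the $\chi _{d}^{2}$ density (a sketch; $C^{+}$ and $h_{\text{SEE}}^{\ast }$
are supplied by the user):
\begin{verbatim}
def corrected_cv(alpha, d, C_plus, h_star):
    # c*_alpha = c_alpha - [g_{d+2}(c_alpha)/g_d(c_alpha)]*C_plus*h_star,
    # where g_k is the chi-square(k) density (scipy.stats as above).
    c = stats.chi2.ppf(1 - alpha, df=d)
    adj = stats.chi2.pdf(c, df=d + 2) / stats.chi2.pdf(c, df=d)
    return c - adj * C_plus * h_star
\end{verbatim}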
To approximate the size-adjusted power of the $S_{n}$ test, we use $%
c_{\alpha }^{\ast }$ rather than $c_{\alpha }$ because $c_{\alpha }^{\ast }$
leads to a more accurate test in large samples. Using Theorem \ref{thm:power}%
, we can prove the following corollary.
\begin{corollary}
\label{cor:power} Let the assumptions in Theorem \ref{thm:power} hold.
Then for $h=h_{\text{SEE}}^{\ast }$,
\begin{equation}
\begin{split}
\mathbb E_{\tilde{\delta}}& \left[P_{\beta _{n}}\left( S_{n}>c_{\alpha }^{\ast }\right)\right] \\
& =1-\mathcal{G}_{d}\left( c_{\alpha };\tau ^{2}\right) +Q_{d}\left(
c_{\alpha },\tau ^{2},r\right) \mathrm{tr}\left[\mathbb E\left( AA'\right)\right]
h_{\text{SEE}}^{\ast }+O\left( h_{\text{SEE}}^{\ast 2}+n^{-1/2}\right) ,
\end{split}
\label{eqn:asym-local-power}
\end{equation}%
where
\begin{align*}
Q_{d}\left( c_{\alpha },\tau ^{2},r\right) & =\left( 1-\frac{1}{2r}\right) %
\left[ \mathcal{G}_{d}^{\prime }\left( c_{\alpha };\tau ^{2}\right) \frac{%
\mathcal{G}_{d+2}^{\prime }(c_{\alpha })}{\mathcal{G}_{d}^{\prime }\left(
c_{\alpha }\right) }-\mathcal{G}_{d+2}^{\prime }(c_{\alpha };\tau ^{2})%
\right] \\
& \quad -\frac{1}{d}\left[ \mathcal{G}_{d+4}^{\prime }(c_{\alpha };\tau
^{2})-\mathcal{G}_{d+2}^{\prime }(c_{\alpha };\tau ^{2})\right] \tau ^{2}.
\end{align*}
\end{corollary}
In the asymptotic expansion of the local power function in %
\eqref{eqn:asym-local-power}, $1-\mathcal{G}_{d}\left( c_{\alpha };\tau
^{2}\right) $ is the usual first-order power of a standard chi-square test.
The next term of order $O(h_{\text{SEE}}^{\ast })$ captures the effect of
smoothing the estimating equations. To sign this effect, we plot the
function $Q_{d}\left( c_{\alpha },\tau ^{2},r\right) $ against $\tau ^{2}$
for $r=2$, $\alpha =10\%$, and different values of $d$ in Figure \ref%
{fig:power}. Figures for other values of $r$ and $\alpha $ are qualitatively
similar. The range of $\tau ^{2}$ considered in Figure \ref{fig:power} is
relevant as the first-order local asymptotic power, i.e.,\ $1-\mathcal{G}%
_{d}\left( c_{\alpha };\tau ^{2}\right) $, increases from $10\%$ to about $%
94\%$, $96\%$, $97\%$, and $99\%$, respectively for $d=1,2,3,4$. It is clear
from this figure that $Q_{d}\left( c_{\alpha },\tau ^{2},r\right) >0$ for
any $\tau ^{2}>0$. This indicates that smoothing leads to a test with
improved power. The power improvement increases with $r$. The smoother the
conditional PDF of $U$ in a neighborhood of the origin is, the larger the
power improvement is.
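The function $Q_{d}$ itself involves only (non)central chi-square densities and can be evaluated directly; the Python sketch below (ours) reproduces the quantity plotted in Figure \ref{fig:power}, with \texttt{ncx2} and \texttt{chi2} from SciPy:
\begin{verbatim}
from scipy.stats import chi2, ncx2

def Q(c_alpha, tau2, r, d):
    """Q_d(c_alpha, tau^2, r) from the corollary above."""
    # noncentral chi-square(k, tau2) density, central when tau2 = 0
    g = lambda k: ncx2.pdf(c_alpha, k, tau2) if tau2 > 0 \
        else chi2.pdf(c_alpha, k)
    term1 = (1 - 1 / (2 * r)) * (
        g(d) * chi2.pdf(c_alpha, d + 2) / chi2.pdf(c_alpha, d) - g(d + 2))
    term2 = (g(d + 4) - g(d + 2)) * tau2 / d
    return term1 - term2

c = chi2.ppf(0.90, 2)               # alpha = 10%, d = 2
print(Q(c, tau2=5.0, r=2, d=2))     # positive, consistent with the figure
\end{verbatim}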
\begin{figure}[tbp]
\centering
\includegraphics[width=0.65\textwidth,clip=true,trim=40 190 80 200]{local_asym_power_10_better.pdf}
\caption{Plots of $Q_{d}\left( c_{\protect \alpha },\protect \tau %
^{2},2\right) $ against $\protect \tau ^{2}$ for different values of $d$ with
$\protect \alpha =10\%$.}
\label{fig:power}
\end{figure}
\section{MSE of the Parameter Estimator\label{sec:est}}
In this section, we examine the approximate MSE of the parameter estimator.
The approximate MSE, being a Nagar-type approximation \citep{Nagar1959}, can
be motivated from the theory of optimal estimating equations, as presented
in \citet{Heyde1997}, for example.
The SEE estimator $\hat{\beta}$ satisfies $m_{n}(\hat{\beta})=0$. In Lemma %
\ref{lem:stochastic_expansion_beta} in the appendix, we show that%
\begin{equation}
\sqrt{n}\left( \hat{\beta}-\beta _{0}\right)
= -\left \{ \mathbb E\left[\frac{\partial }{%
\partial \beta ^{\prime }}\frac{1}{\sqrt{n}}m_{n}\left( \beta _{0}\right)
\right]\right\} ^{-1}m_{n}+O_{p}\left( \frac{1}{\sqrt{nh}}\right)
\label{eqn:expansion1}
\end{equation}%
and
\begin{equation}
\mathbb E\left[\frac{\partial }{\partial \beta ^{\prime }}\frac{1}{\sqrt{n}}m_{n}\left(
\beta _{0}\right)\right]
= \mathbb E\left[ Z_{j}X_{j}^{\prime }f_{U|Z,X}(0\mid Z_{j},X_{j})%
\right] +O\left(h^{r}\right). \label{eqn:expansion2}
\end{equation}%
Consequently, the approximate MSE (AMSE) of $\sqrt{n}\left( \hat{\beta}%
-\beta _{0}\right) $ is\footnote{%
Here we follow a common practice in the estimation of nonparametric and
nonlinear models and define the AMSE to be the MSE of $\sqrt{n}\left( \hat{%
\beta}-\beta _{0}\right) $ after dropping some smaller-order terms. So the
asymptotic MSE we define here is a Nagar-type approximate MSE. See %
\citet{Nagar1959}.}
\begin{align*}
\mathrm{AMSE}_{\beta }
&= \left \{ \mathbb E\left[\frac{\partial }{\partial \beta ^{\prime
}}\frac{1}{\sqrt{n}}m_{n}\left( \beta _{0}\right) \right] \right\} ^{-1}
\mathbb E\left(m_n m_n'\right) \left\{ \mathbb E\left[\frac{\partial }{\partial \beta
^{\prime }}\frac{1}{\sqrt{n}}m_{n}\left( \beta _{0}\right) \right]\right\}
^{-1\prime } \\
&= \Sigma _{ZX}^{-1}V\Sigma _{XZ}^{-1}+\Sigma _{ZX}^{-1}V^{1/2}\left[
nh^{2r} \mathbb E(B)\mathbb E(B') -h\mathbb E\left( AA'\right) \right] \left( V^{1/2}\right) ^{\prime }\Sigma _{XZ}^{-1} \\
&\quad+O\left(h^r\right)+o\left(h+nh^{2r}\right),
\end{align*}%
where
\begin{equation*}
\Sigma _{ZX} = \mathbb E\left[ Z_{j}X_{j}^{\prime }f_{U|Z,X}(0\mid Z_{j},X_{j})\right]
\text{ and }\Sigma _{XZ}=\Sigma _{ZX}^{\prime }.
\end{equation*}
The first term of $\mathrm{AMSE}_{\beta }$ is the asymptotic variance of the
unsmoothed QR estimator. The second term captures the higher-order effect of
smoothing on the AMSE of $\sqrt{n}(\hat{\beta}-\beta _{0})$. When $%
nh^{r}\rightarrow \infty $ and $n^{3}h^{4r+1}\rightarrow \infty$, we have $%
h^{r}=o\left( nh^{2r}\right) $ and $1/\sqrt{nh}=o\left( nh^{2r}\right) $, so
the terms of order $O_{p}(1/\sqrt{nh})$ in \eqref{eqn:expansion1} and of
order $O\left( h^{r}\right) $ in \eqref{eqn:expansion2} are of smaller order
than the $O(nh^{2r})$ and $O(h)$ terms in the AMSE. If $h\asymp
n^{-1/(2r-1)} $ as before, these rate conditions are satisfied when $r>2$.
\begin{theorem}
\label{thm:est-MSE} Let Assumptions \ref{a:sampling}--\ref{a:G}(i--ii), \ref%
{a:beta}, and \ref{a:power_mse} hold. If $nh^{r}\rightarrow \infty $ and $%
n^{3}h^{4r+1}\rightarrow \infty $, then the AMSE of $\sqrt{n}(\hat{\beta}%
-\beta _{0})$ is%
\begin{equation*}
\Sigma _{ZX}^{-1} V^{1/2}
\left[ I_{d}+nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right) \right]
\left(V^{1/2}\right) ^{\prime }\left( \Sigma _{ZX}^{\prime }\right)^{-1}
+O\left(h^r\right)+o\left(h+nh^{2r}\right).
\end{equation*}
\end{theorem}
The optimal $h^{\ast }$ that minimizes the high-order AMSE satisfies
\begin{align*}
&\Sigma _{ZX}^{-1} V^{1/2} \left[ n\left( h^{\ast }\right) ^{2r}\mathbb E(B)\mathbb E(B') - h^{\ast }\mathbb E\left( AA'\right) \right]
\left( V^{1/2} \right)^\prime \left( \Sigma _{ZX}^{\prime }\right) ^{-1} \\
&\quad \leq \Sigma _{ZX}^{-1} V^{1/2} \left[ nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right) \right]
\left( V^{1/2} \right)^\prime \left( \Sigma _{ZX}^{\prime
}\right) ^{-1}
\end{align*}%
in the sense that the difference between the two sides is nonpositive
definite for all $h$. This is equivalent to
\begin{equation*}
n\left( h^{\ast }\right) ^{2r}\mathbb E(B)\mathbb E(B') - h^{\ast }\mathbb E\left( AA'\right)
\leq nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right) .
\end{equation*}
This choice of $h$ can also be motivated from the theory of optimal
estimating equations. Given the estimating equations $m_{n}=0$, we follow %
\citet{Heyde1997} and define the standardized version of $m_{n}$ by
\begin{equation*}
m_{n}^{s}(\beta _{0},h)
= -\mathbb E\left[\frac{\partial }{\partial \beta ^{\prime }}%
m_{n}\left( \beta _{0}\right)\right] \left[ \mathbb E(m_{n}m_{n}^{\prime })\right]
^{-1}m_{n} .
\end{equation*}%
We include $h$ as an argument of $m_{n}^{s}$ to emphasize the dependence of $%
m_{n}^{s}$ on $h$. The standardization can be motivated from the following
considerations. On one hand, the estimating equations need to be close to
zero when evaluated at the true parameter value. Thus we want $%
\mathbb E(m_{n}m_{n}^{\prime })$ to be as small as possible. On the other hand, we
want $m_{n}\left( \beta +\delta \beta \right) $ to differ as much as
possible from $m_{n}\left( \beta \right) $ when $\beta $ is the true value.
That is, we want $\mathbb E\frac{\partial }{\partial \beta ^{\prime }}m_{n}\left(
\beta _{0}\right) $ to be as large as possible. To meet these requirements,
we choose $h$ to maximize%
\begin{equation*}
\mathbb E\left\{m_{n}^{s}(\beta _{0},h)\left[ m_{n}^{s}\left( \beta _{0},h\right) \right]
^{\prime }\right\}
= \left[ \mathbb E\frac{\partial }{\partial \beta ^{\prime }}m_{n}\left(
\beta _{0}\right) \right] \left[ \mathbb E(m_{n}m_{n}^{\prime })\right] ^{-1}\left[ \mathbb E%
\frac{\partial }{\partial \beta ^{\prime }}m_{n}\left( \beta _{0}\right) %
\right] ^{\prime }.
\end{equation*}%
More specifically, $h^{\ast }$ is optimal if
\begin{equation*}
\mathbb E\left\{m_{n}^{s}(\beta _{0},h^{\ast })\left[ m_{n}^{s}\left( \beta _{0},h^{\ast
}\right) \right] ^{\prime }\right\}
- \mathbb E\left\{m_{n}^{s}(\beta _{0},h)\left[
m_{n}^{s}\left( \beta _{0},h\right) \right] ^{\prime }\right\}
\end{equation*}%
is nonnegative definite for all $h\in \mathbb{R}^{+}$. But $%
\mathbb E\left[m_{n}^{s}\left( m_{n}^{s}\right) ^{\prime }\right]=\left( \mathrm{AMSE}_{\beta
}\right) ^{-1}$, so maximizing $\mathbb E\left[m_{n}^{s}\left( m_{n}^{s}\right) ^{\prime
}\right]$ is equivalent to minimizing $\mathrm{AMSE}_{\beta }$.
The question is whether such an optimal $h$ exists. If it does, then the
optimal $h^{\ast }$ satisfies
\begin{equation} \label{AMSE_obj}
h^{\ast } = \mathop{\rm arg\,min}_{h} u^{\prime }\left[ nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right) \right] u
\end{equation}%
for all $u\in \mathbb{R}^{d}$, by the definition of nonpositive definiteness
plus the fact that the above yields a unique minimizer for any $u$. Using
unit vectors $e_1=(1,0,\ldots,0)$, $e_2=(0,1,0,\ldots,0)$, etc.,\ for $u$,
and noting that $\mathrm{tr}(A)=e_1^{\prime
}Ae_1+\cdots+e_d^{\prime }Ae_d$ for $d\times d$ matrix $A$, this implies
that
\begin{align*}
h^{\ast } &= \mathop{\rm arg\,min}_{h}\mathrm{tr}\left[ nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right) \right] \\
&= \mathop{\rm arg\,min}_{h}\left\{ nh^{2r}\mathbb E(B)'\mathbb E(B) - h\mathrm{tr}\left[\mathbb E\left( AA'\right) \right] \right\} .
\end{align*}%
In view of \eqref{eqn:def_h_SEE}, $h_\text{SEE}^*=h^*$ if $h^*$ exists.
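For reference, the trace objective in the last display has a closed-form minimizer, which makes the rate $h^{\ast }\propto n^{-1/(2r-1)}$ explicit; the helper below is our own sketch:
\begin{verbatim}
import numpy as np

def h_see_star(E_AA, E_B, n, r):
    """Minimizer of n h^{2r} E(B)'E(B) - h tr[E(AA')].
    First-order condition: 2 r n h^{2r-1} E(B)'E(B) = tr[E(AA')]."""
    a = np.trace(E_AA)          # tr[E(AA')], E_AA is d x d
    b = float(E_B @ E_B)        # E(B)'E(B), E_B is a d-vector
    return (a / (2.0 * n * r * b)) ** (1.0 / (2.0 * r - 1.0))
\end{verbatim}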
Unfortunately, it is easy to show that no single $h$ can minimize the
objective function in \eqref{AMSE_obj} for all $u\in \mathbb{R}^{d}$. Thus,
we have to redefine the optimality with respect to the direction of $u$. The
direction depends on which linear combination of $\beta $ is the focus of
interest, as $u^{\prime }\left[ nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right) \right] u$ is the high-order AMSE of
$c^{\prime }\sqrt{n}(\hat{\beta}-\beta _{0})$ for $c=\Sigma_{XZ} \left(
V^{-1/2}\right)^{\prime} u$.
Suppose we are interested in only one linear combination. Let $h_{c}^{\ast }$
be the optimal $h$ that minimizes the high-order AMSE of $c^{\prime }\sqrt{n}%
(\hat{\beta}-\beta _{0})$. Then
\begin{equation*}
h_{c}^{\ast }=\left( \frac{u^{\prime }\mathbb E\left( AA^{\prime }\right) u}{%
u^{\prime }\mathbb E(B)\mathbb E(B')u}\frac{1}{2nr}%
\right) ^{\frac{1}{2r-1}}
\end{equation*}%
for $u=\left( V^{1/2}\right) ^{\prime } \Sigma_{XZ}^{-1} c$. Some algebra
shows that
\begin{equation*}
h_{c}^{\ast }\geq \left( \frac{1}{\mathbb E(B)'\left[
\mathbb E\left(AA'\right)\right]^{-1}\mathbb E(B)}\frac{1}{2nr}\right) ^{\frac{1}{2r-1}}>0.
\end{equation*}%
So although $h_{c}^{\ast }$ depends on $c$ via $u$, it is nevertheless
greater than zero.
Now suppose without loss of generality we are interested in $d$ directions $%
\left( c_{1},\ldots ,c_{d}\right) $ jointly where $c_{i}\in \mathbb{R}^{d}$.
In this case, it is reasonable to choose $h_{c_{1},\ldots ,c_{d}}^{\ast }$
to minimize the sum of direction-wise AMSEs, i.e.,
\begin{equation*}
h_{c_{1},\ldots ,c_{d}}^{\ast }=\mathop{\rm arg\,min}_{h}%
\sum_{i=1}^{d}u_{i}^{\prime }\left[ nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right) \right] u_{i},
\end{equation*}%
where $u_{i}=\left( V^{1/2}\right) ^{\prime }\Sigma _{XZ}^{-1}c_{i}$. It is
easy to show that%
\begin{equation*}
h_{c_{1},\ldots ,c_{d}}^{\ast }
= \left[ \frac{\sum_{i=1}^{d}u_{i}^{\prime
}\mathbb E\left( AA'\right) u_{i}}{\sum_{i=1}^{d}u_{i}^{\prime }\mathbb E(B)\mathbb E(B')u_{i}}\frac{1}{2nr}\right]^{\frac{1}{%
2r-1}}.
\end{equation*}
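In the same spirit, the direction-specific bandwidth has a one-line implementation (our sketch; the matrix \texttt{U} collects the vectors $u_{i}$ as columns):
\begin{verbatim}
import numpy as np

def h_directional(E_AA, E_B, n, r, U):
    """h* for directions c_i, with columns of U equal to
    u_i = (V^{1/2})' Sigma_XZ^{-1} c_i."""
    num = np.trace(U.T @ E_AA @ U)                 # sum_i u_i' E(AA') u_i
    den = np.trace(U.T @ np.outer(E_B, E_B) @ U)   # sum_i u_i' E(B)E(B)' u_i
    return (num / den / (2.0 * n * r)) ** (1.0 / (2.0 * r - 1.0))

# With U equal to the identity (the unit vectors e_i), this
# reduces to h_SEE^* as computed by h_see_star above.
\end{verbatim}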
As an example, consider $u_{i}=e_{i}=\left( 0,\ldots ,1,\ldots ,0\right) $,
the $i$th unit vector in $\mathbb{R}^{d}$. Correspondingly%
\begin{equation*}
\left( \tilde{c}_{1},\ldots,\tilde{c}_{d}\right) = \Sigma_{XZ} \left(
V^{-1/2}\right)^{\prime } \left( e_{1},\ldots,e_{d}\right) .
\end{equation*}
It is clear that%
\begin{equation*}
h_{\tilde{c}_{1},\ldots ,\tilde{c}_{d}}^{\ast }=h_{\text{SEE}}^{\ast }=h_{%
\text{CPE}}^{\ast } ,
\end{equation*}%
so all three selections coincide with each other. A special case of interest
is when $Z=X$, non-constant regressors are pairwise independent and
normalized to mean zero and variance one, and $U%
\mathpalette{\protect
\independenT}{\perp} X$. Then $u_{i}=c_{i}=e_{i}$ and the $d$ linear
combinations reduce to the individual elements of $\beta $.
The above example illustrates the relationship between $h_{c_{1},\ldots
,c_{d}}^{\ast }$ and $h_{\text{SEE}}^{\ast }$. While $h_{c_{1},\ldots
,c_{d}}^{\ast }$ is tailored toward the flexible linear combinations $%
\left(c_{1},\ldots ,c_{d}\right)$ of the parameter vector, $h_{\text{SEE}%
}^{\ast }$ is tailored toward the fixed $\left( \tilde{c}_{1},\ldots ,\tilde{%
c}_{d}\right) $. While $h_{c_{1},\ldots ,c_{d}}^{\ast }$ and $h_{\text{SEE}%
}^{\ast }$ are of the same order of magnitude, in general there is no
analytic relationship between $h_{c_{1},\ldots ,c_{d}}^{\ast }$ and $h_{%
\text{SEE}}^{\ast }$.
To shed further light on the relationship between $h_{c_{1},\ldots
,c_{d}}^{\ast }$ and $h_{\text{SEE}}^{\ast }$, let $\left \{ \lambda
_{k},k=1,\ldots ,d\right \} $ be the eigenvalues of $nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right)$ with the
corresponding orthonormal eigenvectors $\left \{ \ell _{k},k=1,\ldots
,d\right \} $. Then we have $nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right) =\sum_{k=1}^{d}\lambda _{k}\ell
_{k}\ell _{k}^{\prime }$ and $u_{i}=\sum_{j=1}^{d}u_{ij}\ell _{j}$ for $%
u_{ij}=u_{i}^{\prime }\ell _{j}$. Using these representations, the objective
function underlying $h_{c_{1},\ldots ,c_{d}}^{\ast }$ becomes
\begin{align*}
& \sum_{i=1}^{d}u_{i}^{\prime }\left[ nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right)\right] u_{i} \\
& =\sum_{i=1}^{d}\left( \sum_{j=1}^{d}u_{ij}\ell _{j}^{\prime }\right)
\left( \sum_{k=1}^{d}\lambda _{k}\ell _{k}\ell _{k}^{\prime }\right) \left(
\sum_{\tilde{j}=1}^{d}u_{i\tilde{j}}\ell _{\tilde{j}}\right) \\
& =\sum_{j=1}^{d}\left( \sum_{i=1}^{d}u_{ij}^{2}\right) \lambda _{j}.
\end{align*}%
That is, $h_{c_{1},\ldots ,c_{d}}^{\ast }$ minimizes a weighted sum of the
eigenvalues of $nh^{2r}\mathbb E(B)\mathbb E(B') - h\mathbb E\left( AA'\right)$ with weights depending on $c_{1},\ldots ,c_{d}$. By
definition, $h_{\text{SEE}}^{\ast }$ minimizes the simple unweighted sum of
the eigenvalues, viz.\ $\sum_{j=1}^{d}\lambda _{j}$. While $h_{\text{SEE}%
}^{\ast }$ may not be ideal if we know the linear combination(s) of
interest, it is a reasonable choice otherwise.
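This eigenvalue representation is easy to verify numerically; the check below (our sketch) uses a random symmetric matrix as a stand-in for $nh^{2r}\mathbb E(B)\mathbb E(B)'-h\mathbb E\left(AA'\right)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 4
M = rng.standard_normal((d, d)); M = (M + M.T) / 2  # symmetric stand-in
U = rng.standard_normal((d, d))                     # columns are the u_i
lam, L = np.linalg.eigh(M)                          # eigenpairs of M
lhs = sum(U[:, i] @ M @ U[:, i] for i in range(d))  # sum_i u_i' M u_i
w = (L.T @ U) ** 2                                  # w[j, i] = u_{ij}^2
rhs = float(w.sum(axis=1) @ lam)                    # sum_j (sum_i u_ij^2) lam_j
assert np.isclose(lhs, rhs)
\end{verbatim}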
In empirical applications, we can estimate $h_{c_{1},\ldots ,c_{d}}^{\ast }$
using a parametric plug-in approach similar to our plug-in implementation of
$h_{\text{SEE}}^{\ast }$. If we want to be agnostic about the directional
vectors $c_{1},\ldots ,c_{d}$, we can simply use $h_{\text{SEE}}^{\ast }$.
\section{Empirical example: JTPA\label{sec:emp}}
We revisit the IV-QR analysis of Job Training Partnership Act (JTPA) data in %
\citet{AbadieEtAl2002}, specifically their Table III.\footnote{%
Their data and Matlab code for replication are helpfully provided online in
the Angrist Data Archive, %
\url{http://economics.mit.edu/faculty/angrist/data1/data/abangim02}.} They
use 30-month earnings as the outcome, randomized offer of JTPA services as
the instrument, and actual enrollment for services as the endogenous
treatment variable. Of those offered services, only around 60 percent
accepted, so self-selection into treatment is likely. Section 4 of %
\citet{AbadieEtAl2002} provides much more background and descriptive
statistics.
We compare estimates from a variety of methods.\footnote{Code and data for replication are available on the first author's website.}
``AAI'' is the original paper's
estimator. AAI restricts $X$ to have finite support (see condition (iii) in
their Theorem 3.1), which is why all the regressors in their example are
binary. Our fully automated plug-in estimator is ``SEE ($\hat{h}$).''
``CH'' is \citet{ChernozhukovHansen2006}. Method ``tiny $h$'' uses $h=400$ (compared with our plug-in values on the order of $10\,000$), while ``huge $h$'' uses $h=5\times 10^{6}$. 2SLS is the usual (mean) two-stage least squares estimator, put in the $q=0.5$ column only for convenience of comparison.
\begin{table}[htbp]
\centering
\caption{\label{tab:emp}IV-QR estimates of coefficients for certain regressors as in Table III of \citet{AbadieEtAl2002} for adult men.}
\begin{tabular}[c]{ccS[table-format=9.0]S[table-format=8.0]S[table-format=6.0]S[table-format=7.0]S[table-format=8.0]}
\hline\hline
& & \multicolumn{5}{c}{Quantile} \\
\cline{3-7}
Regressor & Method & \multicolumn{1}{c}{0.15} & \multicolumn{1}{c}{0.25} & \multicolumn{1}{c}{0.50} & \multicolumn{1}{c}{0.75} & \multicolumn{1}{c}{0.85} \\
\hline
Training & AAI & 121 & 702 & 1544 & 3131 & 3378 \\
Training & SEE ($\hat h$) & 57 & 381 & 1080 & 2630 & 2744 \\
Training & CH & -125 & 341 & 385 & 2557 & 3137 \\
Training & tiny $h$ & -129 & 500 & 381 & 2760 & 3114 \\
Training & huge $h$ & 1579 & 1584 & 1593 & 1602 & 1607 \\
Training & 2SLS & & & 1593 & & \\
HS or GED & AAI & 714 & 1752 & 4024 & 5392 & 5954 \\
HS or GED & SEE ($\hat h$) & 812 & 1498 & 3598 & 6183 & 6753 \\
HS or GED & CH & 482 & 1396 & 3761 & 6127 & 6078 \\
HS or GED & tiny $h$ & 463 & 1393 & 3767 & 6144 & 6085 \\
HS or GED & huge $h$ & 4054 & 4062 & 4075 & 4088 & 4096 \\
HS or GED & 2SLS & & & 4075 & & \\
Black & AAI & -171 & -377 & -2656 & -4182 & -3523 \\
Black & SEE ($\hat h$) & -202 & -546 & -1954 & -3273 & -3653 \\
Black & CH & -38 & -109 & -2083 & -3233 & -2934 \\
Black & tiny $h$ & -18 & -139 & -2121 & -3337 & -2884 \\
Black & huge $h$ & -2336 & -2341 & -2349 & -2357 & -2362 \\
Black & 2SLS & & & -2349 & & \\
Married & AAI & 1564 & 3190 & 7683 & 9509 & 10185 \\
Married & SEE ($\hat h$) & 1132 & 2357 & 7163 & 10174 & 10431 \\
Married & CH & 504 & 2396 & 7722 & 10463 & 10484 \\
Married & tiny $h$ & 504 & 2358 & 7696 & 10465 & 10439 \\
Married & huge $h$ & 6611 & 6624 & 6647 & 6670 & 6683 \\
Married & 2SLS & & & 6647 & & \\
Constant & AAI & -134 & 1049 & 7689 & 14901 & 22412 \\
Constant & SEE ($\hat h$) & -88 & 1268 & 7092 & 15480 & 22708 \\
Constant & CH & 242 & 1033 & 7516 & 14352 & 22518 \\
Constant & tiny $h$ & 294 & 1000 & 7493 & 14434 & 22559 \\
Constant & huge $h$ & -1157554 & -784046 & 10641 & 805329 & 1178836 \\
Constant & 2SLS & & & 10641 & & \\
\hline
\end{tabular}
\end{table}
Table \ref{tab:emp} shows results from the sample of $5102$ adult men, for a
subset of the regressors used in the model. Not shown in the table are
coefficient estimates for dummies for Hispanic, working less than 13 weeks
in the past year, five age groups, originally recommended service strategy,
and whether earnings were from the second follow-up survey. CH is very close
to ``tiny $h$''; that is,
simply using the smallest possible $h$ with SEE provides a good
approximation of the unsmoothed estimator in this case. Demonstrating our theoretical results in Section \ref{sec:see-comp-IV}, ``huge $h$'' is very close to
2SLS for everything except the constant term for $q\neq 0.5$. The IVQR-SEE estimator using
our plug-in bandwidth has some economically significant differences from the
unsmoothed estimator. Focusing on the treatment variable (``Training''), the unsmoothed median effect estimate is below
$400$ (dollars), whereas SEE$(\hat{h})$ yields $1080$, both of
which are smaller than AAI's $1544$ (AAI is the most positive at all
quantiles). For the $0.15$-quantile effect, the unsmoothed estimates are
actually slightly negative, while SEE$(\hat{h})$ and AAI are
slightly positive.
For $q=0.85$, though, the SEE$(\hat{h})$ estimate is smaller than
the unsmoothed one, and the two are quite similar for $q=0.25$ and $q=0.75$;
there is no systematic ordering.
Computationally, our code takes only one second total to calculate the plug-in bandwidths and coefficient estimates at all five quantiles. Using the fixed $h=400$ or $h=5\times10^6$, computation is immediate.
\begin{table}[htbp]
\centering
\caption{\label{tab:emp2}IV-QR estimates similar to Table \ref{tab:emp}, but replacing age dummies with a quartic polynomial in age and adding baseline measures of weekly hours worked and wage.}
\begin{tabular}[c]{ccS[table-format=4.0]S[table-format=4.0]S[table-format=4.0]S[table-format=4.0]S[table-format=4.0]}
\hline\hline
& & \multicolumn{5}{c}{Quantile} \\
\cline{3-7}
Regressor & Method & \multicolumn{1}{c}{0.15} & \multicolumn{1}{c}{0.25} & \multicolumn{1}{c}{0.50} & \multicolumn{1}{c}{0.75} & \multicolumn{1}{c}{0.85} \\
\hline\rule{0pt}{12pt}
Training & SEE ($\hat h$) & 74 & 398 & 1045 & 2748 & 2974 \\
Training & CH & -20 & 451 & 911 & 2577 & 3415 \\
Training & tiny $h$ & -50 & 416 & 721 & 2706 & 3555 \\
Training & huge $h$ & 1568 & 1573 & 1582 & 1590 & 1595 \\
Training & 2SLS & & & 1582 & & \\
\hline
\end{tabular}
\end{table}
Table \ref{tab:emp2} shows estimates of the endogenous coefficient when various ``continuous'' control variables are added, specifically a quartic polynomial in age (replacing the age range dummies), baseline weekly hours worked, and baseline hourly wage.\footnote{Additional JTPA data downloaded from the W.E.\ Upjohn Institute at \url{http://upjohn.org/services/resources/employment-research-data-center/national-jtpa-study}; variables are named \texttt{age}, \texttt{bfhrswrk}, and \texttt{bfwage} in file \texttt{expbif.dta}.} The estimates do not change much; the biggest difference is for the unsmoothed estimate at the median. Our code again computes the plug-in bandwidth and SEE coefficient estimates at all five quantiles in one second. Using the small $h=400$ bandwidth now takes nine seconds total (more iterations of \texttt{fsolve} are needed); $h=5\times10^6$ still computes almost immediately.
\section{Simulations\label{sec:sim}}
For our simulation study,\footnote{%
Code to replicate our simulations is available on the first author's
website.} we use $G(u)$ given in \eqref{eqn:G_fun} as in %
\citet{Horowitz1998} and \citet{Whang2006}. This satisfies Assumption \ref%
{a:G} with $r=4$. Using (the integral of) an Epanechnikov
kernel with $r=2$ also worked well in the cases we consider here, though
never better than $r=4$. Our error distributions always have at
least four derivatives, so $r=4$ working somewhat better is expected.
Selection of optimal $r$ and $G(\cdot)$, and the quantitative impact
thereof, remain open questions.
We implement a plug-in version ($\hat h$) of the infeasible $h^{\ast }\equiv h_{\text{%
SEE}}^{\ast }$. We make the plug-in assumption $U%
\mathpalette{\protect
\independenT}{\perp} Z$ and parameterize the distribution of $U$. Our
current method, which has proven quite accurate and stable, fits the
residuals from an initial $h=(2nr)^{-1/(2r-1)}$ IV-QR to Gaussian, $t$,
gamma, and generalized extreme value distributions via maximum likelihood.
With the distribution parameter estimates, $f_{U}(0)$ and $f_{U}^{(r-1)}(0)$
can be computed and plugged in to calculate $\hat{h}$. With larger $n$, a nonparametric kernel estimator may perform better, but nonparametric estimation of $f_U^{(r-1)}(0)$ will have high variance in smaller samples.
Viewing the unsmoothed estimator as a reference point,
potential regret (of using $\hat h$ instead of $h=0$) is largest when $\hat h$ is too large,
so we separately calculate $\hat{h}$
for each of the four distributions and take the smallest. Note that this
particular plug-in approach works well even under heteroskedasticity and/or
misspecification of the error distribution: DGPs 3.1--3.6 in Section \ref{sec:sim-additional} have error
distributions other than these four, and DGPs 1.3, 2.2, 3.3--3.6 are
heteroskedastic, as are the JTPA-based simulations. For the infeasible $h^{\ast }$, if the PDF derivative in
the denominator is zero, it is replaced by $0.01$ to avoid $h^{\ast }=\infty
$.
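To fix ideas, the density-fitting step of this plug-in can be sketched as follows (Python, our illustration; \texttt{resid} denotes the preliminary IV-QR residuals, $r=4$, and the final mapping from the two density values to $\hat{h}$ uses the $h_{\text{SEE}}^{\ast }$ formula of the paper):
\begin{verbatim}
import numpy as np
from scipy.stats import norm, t, gamma, genextreme

def f0_and_f3(dist, data, eps=1e-3):
    """MLE-fit one candidate family; return (f_U(0), f_U'''(0)).
    The third derivative (r - 1 = 3) is a central finite difference."""
    params = dist.fit(data)               # maximum-likelihood fit
    pdf = lambda u: dist.pdf(u, *params)
    f3 = (pdf(2*eps) - 2*pdf(eps) + 2*pdf(-eps) - pdf(-2*eps)) / (2*eps**3)
    return pdf(0.0), f3

# candidates = [f0_and_f3(dist, resid)
#               for dist in (norm, t, gamma, genextreme)]
# Each (f_U(0), f_U'''(0)) pair is mapped to a bandwidth, and the
# smallest of the four resulting values is used, as described above.
\end{verbatim}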
For the unsmoothed IV-QR estimator, we use code based on %
\citet{ChernozhukovHansen2006} from the latter author's website.
We use the option to let their code determine the
grid of possible endogenous coefficient values from the data.
This code in turn uses the interior point method in \texttt{rq.m} (developed
by Roger Koenker, Daniel Morillo, and Paul Eilers) to solve exogenous QR
linear programs.
\subsection{JTPA-based simulations\label{sec:sim-JTPA}}
We use two DGPs based on the JTPA data examined in Section \ref%
{sec:emp}. The first DGP corresponds to the variables used in the original
analysis in \citet{AbadieEtAl2002}. For individual $i$, let $Y_{i}$ be the
scalar outcome (30-month earnings), $X_{i}$ be the vector of exogenous
regressors, $D_{i}$ be the scalar endogenous training dummy, $Z_{i}$ be the
scalar instrument of randomized training offer, and $U_{i}\sim \text{Unif}%
(0,1)$ be a scalar unobservable term. We draw $X_{i}$ from the joint
distribution estimated from the JTPA data. We randomize $Z_{i}=1$ with
probability $0.67$ and zero otherwise. If $Z_{i}=0$, then we set the
endogenous training dummy $D_{i}=0$ (ignoring that in reality, a few percent
still got services). If $Z_{i}=1$, we set $D_{i}=1$ with a probability
increasing in $U_{i}$. Specifically, $P(D_{i}=1\mid Z_{i}=1,U_{i}=u)=\min
\{1,u/0.75\}$, which roughly matches the $P(D_{i}=1\mid Z_{i}=1)=0.62$ in
the data. This corresponds to a high degree of self-selection into treatment
(and thus endogeneity). Then, $Y_{i}=X_{i}\beta _{X}+D_{i}\beta
_{D}(U_{i})+G^{-1}(U_{i})$, where $\beta _{X}$ is the IVQR-SEE $\hat{\beta}%
_{X}$ from the JTPA data (rounded to the nearest $500$), the function $\beta
_{D}(U_{i})=2000U_{i}$ matches $\hat{\beta}_{D}(0.5)$ and the increasing
pattern of other $\hat{\beta}_{D}(q)$, and $G^{-1}(\cdot )$ is a recentered
gamma distribution quantile function with parameters estimated to match the
distribution of residuals from the IVQR-SEE estimate with JTPA data.
In each of $1000$ simulation replications, we generate $n=5102$ iid observations.
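A minimal generator for this DGP looks as follows (our Python sketch, not the original simulation code; the pool of observed regressor rows and the fitted gamma parameters are inputs that we do not reproduce here):
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

def draw_jtpa_dgp1(n, beta_X, X_pool, gamma_params, rng):
    U = rng.uniform(size=n)
    X = X_pool[rng.integers(len(X_pool), size=n)]  # resample observed X rows
    Z = (rng.uniform(size=n) < 0.67).astype(int)   # randomized offer
    # enroll only if offered, with P(D=1 | Z=1, U=u) = min(1, u/0.75)
    D = Z * (rng.uniform(size=n) < np.minimum(1.0, U / 0.75))
    beta_D = 2000.0 * U                            # quantile-varying effect
    err = gamma.ppf(U, *gamma_params)              # recentered gamma quantiles
    Y = X @ beta_X + D * beta_D + err
    return Y, X, D, Z
\end{verbatim}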
For the second DGP, we add a second endogenous regressor (and instrument)
and four exogenous regressors, all with normal distributions. Including the
intercept and two endogenous regressors, there are $20$ regressors. The
second instrument is $Z_{2i}\overset{iid}{\sim}N(0,1)$, and the second
endogenous regressor is $D_{2i}=0.8Z_{2i}+0.2\Phi^{-1}(U_i)$. The
coefficient on $D_{2i}$ is $1000$ at all quantiles. The new exogenous
regressors are all standard normal and have coefficients of $500$ at all
quantiles. To make the asymptotic bias of 2SLS relatively more important,
the sample size is increased to $n=50\,000$.
\begin{table}[htbp]
\centering
\caption{\label{tab:sim-JTPA1}Simulation results for endogenous coefficient estimators with first JTPA-based DGP. ``Robust MSE'' is squared median-bias plus the square of the interquartile range divided by $1.349$, $\textrm{Bias}_{\textrm{median}}^{2}+(\textrm{IQR}/1.349)^2$; it is shown in units of $10^5$, so $7.8$ means $7.8\times10^5$, for example. ``Unsmoothed'' is the estimator from \citet{ChernozhukovHansen2006}.}
\begin{tabular}[c]{cS[table-format=2.1]S[table-format=2.1]S[table-format=2.1]cS[table-format=5.1]S[table-format=4.1]S[table-format=5.1]}
\hline\hline\rule{0pt}{12pt}
& \multicolumn{3}{c}{Robust MSE / $10^5$} & & \multicolumn{3}{c}{Median Bias} \\
\cline{2-4}\cline{6-8} \rule{0pt}{14pt}
$q$ & \multicolumn{1}{c}{Unsmoothed} & \multicolumn{1}{c}{SEE ($\hat h$)} & \multicolumn{1}{c}{2SLS} & &
\multicolumn{1}{c}{Unsmoothed} & \multicolumn{1}{c}{SEE ($\hat h$)} & \multicolumn{1}{c}{2SLS} \\
\hline
$0.15$ & 78.2 & 43.4 & 18.2 && -237.6 & 8.7 & 1040.6 \\%262.6 & 54.2 & 17.4 & -1199.8 & -272.5 & 1001.4
$0.25$ & 30.5 & 18.9 & 14.4 && -122.2 & 16.3 & 840.6 \\% 27.2 & 18.8 & 13.8 & -125.5 & 8.7 & 801.4
$0.50$ & 9.7 & 7.8 & 8.5 && 24.1 & -8.5 & 340.6 \\% 9.2 & 7.9 & 8.3 & -18.0 & -11.7 & 301.4
$0.75$ & 7.5 & 7.7 & 7.6 && -5.8 & -48.1 & -159.4 \\% 8.5 & 7.3 & 7.8 & -19.1 & -28.4 & -198.6
$0.85$ & 11.7 & 9.4 & 8.6 && 49.9 & -17.7 & -359.4 \\% 12.2 & 9.7 & 9.0 & 10.5 & -11.5 & -398.6
\hline
\end{tabular}
\end{table}
Table \ref{tab:sim-JTPA1} shows results for the first JTPA-based DGP, for three estimators of the endogenous coefficient: \citet{ChernozhukovHansen2006}; SEE with our data-dependent $\hat h$; and 2SLS. The first and third can be viewed as limits of IVQR-SEE estimators as $h\to0$ and $h\to\infty$, respectively. We show median bias and ``robust MSE,'' which is squared median bias plus the square of the interquartile range divided by $1.349$, $\textrm{Bias}_{\textrm{median}}^{2}+(\textrm{IQR}/1.349)^2$. We report these ``robust'' versions of bias and MSE since the (mean) IV estimator does not even possess a first moment in finite samples \citep{Kinal1980}. We are unaware of an analogous result for IV-QR but remain wary of presenting bias and MSE results for IV-QR, too, especially since the IV estimator is the limit of the SEE IV-QR estimator as $h\to\infty$. At all quantiles, for all methods, the robust MSE is dominated by the IQR rather than bias. Consequently, even though the 2SLS median bias is quite large for $q=0.15$, it has less than half the robust MSE of SEE($\hat h$), which in turn has half the robust MSE of the unsmoothed estimator. With only a couple exceptions, this is the ordering among the three methods' robust MSE at all quantiles. Although the much larger bias of 2SLS than that of SEE($\hat h$) or the unsmoothed estimator is expected, the smaller median bias of SEE($\hat h$) than that of the unsmoothed estimator is surprising. However, the differences are not big, and they may be partly due to the much larger variance of the unsmoothed estimator inflating the simulation error in the simulated median bias, especially for $q=0.15$. The bigger difference is the reduction in variance from smoothing.
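For completeness, the robust MSE used in the tables is a three-line computation (our sketch):
\begin{verbatim}
import numpy as np

def robust_mse(estimates, truth):
    """Squared median-bias plus (IQR / 1.349)^2; the IQR of a normal
    equals 1.349 standard deviations, so IQR/1.349 is a robust spread."""
    med_bias = np.median(estimates) - truth
    q75, q25 = np.percentile(estimates, [75, 25])
    return med_bias**2 + ((q75 - q25) / 1.349)**2
\end{verbatim}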
\begin{table}[htbp]
\centering
\caption{\label{tab:sim-JTPA2}Simulation results for endogenous coefficient estimators with second JTPA-based DGP.}
\begin{tabular}[c]{cS[table-format=6.0]S[table-format=6.0]S[table-format=7.0]cS[table-format=4.1]S[table-format=4.1]S[table-format=5.1]}
\hline\hline\rule{0pt}{12pt}
& \multicolumn{3}{c}{Robust MSE} & & \multicolumn{3}{c}{Median Bias} \\
\cline{2-4}\cline{6-8} \rule{0pt}{14pt}
$q$ & \multicolumn{1}{c}{$h=400$} & \multicolumn{1}{c}{SEE ($\hat h$)} & \multicolumn{1}{c}{2SLS} & &
\multicolumn{1}{c}{$h=400$} & \multicolumn{1}{c}{SEE ($\hat h$)} & \multicolumn{1}{c}{2SLS} \\
\hline
\multicolumn{8}{c}{\textit{Estimators of binary endogenous regressor's coefficient}} \\
$0.15$ & 780624 & 539542 & 1071377 && -35.7 & 10.6 & 993.6 \\% 723856 & 524639 & 1060738 & -69.5 & -24.9 & 989.3
$0.25$ & 302562 & 227508 & 713952 && -18.5 & 17.9 & 793.6 \\% 273749 & 219483 & 705021 & -15.6 & 17.4 & 789.3
$0.50$ & 101433 & 96350 & 170390 && -14.9 & -22.0 & 293.6 \\% 98549 & 89963 & 165726 & -16.2 & -12.3 & 289.3
$0.75$ & 85845 & 90785 & 126828 && -9.8 & -22.5 & -206.4 \\% 87771 & 83331 & 126432 & -16.0 & -22.4 & -210.7
$0.85$ & 147525 & 119810 & 249404 && -15.7 & -17.4 & -406.4 \\% 138270 & 115398 & 250714 & -15.8 & -17.1 & -410.7
\multicolumn{8}{c}{\textit{Estimators of continuous endogenous regressor's coefficient}} \\
$0.15$ & 9360 & 7593 & 11434 && -3.3 & -5.0 & -7.0 \\
$0.25$ & 10641 & 9469 & 11434 && -3.4 & -3.5 & -7.0 \\
$0.50$ & 13991 & 12426 & 11434 && -5.7 & -9.8 & -7.0 \\
$0.75$ & 28114 & 25489 & 11434 && -12.3 & -17.9 & -7.0 \\
$0.85$ & 43890 & 37507 & 11434 && -17.2 & -17.8 & -7.0 \\
\hline
\end{tabular}
\end{table}
Table \ref{tab:sim-JTPA2} shows results from the second JTPA-based DGP. The first estimator is now a nearly-unsmoothed SEE estimator instead of the unsmoothed \citet{ChernozhukovHansen2006} estimator. Although in principle \citet{ChernozhukovHansen2006} can be used with multiple endogenous coefficients, the provided code allows only one, and Tables \ref{tab:emp} and \ref{tab:emp2} show that SEE with $h=400$ produces very similar results in the JTPA data. For the binary endogenous regressor's coefficient, the 2SLS estimator now has the largest robust MSE since the larger sample size reduces the variance of all three estimators but does not reduce the 2SLS median bias (since it has first-order asymptotic bias). The plug-in bandwidth yields smaller robust MSE than the nearly-unsmoothed $h=400$ at four of five quantiles. At the median, for example, compared with $h=400$, $\hat h$ slightly increases the median bias but greatly reduces the dispersion, so the net effect is to reduce robust MSE. This is consistent with the theoretical results. For the continuous endogenous regressor's coefficient, the same pattern holds for the nearly-unsmoothed and $\hat h$-smoothed estimators. Since this coefficient is constant across quantiles, the 2SLS estimator is consistent and very similar to the SEE estimators with $q=0.5$.
\subsection{Comparison of SEE and smoothed criterion function\label{sec:sim-SCF}}
For exogenous QR, smoothing the criterion function (SCF) is a different
approach, as discussed. The following simulations compare the MSE of our SEE
estimator with that of the SCF estimator. All DGPs have $n=50$, $X_{i}%
\overset{iid}{\sim }\text{Unif}(1,5)$, $U_{i}\overset{iid}{\sim }N(0,1)$, $%
X_{i}\mathpalette{\protect \independenT}{\perp}U_{i}$, and $%
Y_{i}=1+X_{i}+\sigma (X_{i})\left( U_{i}-\Phi ^{-1}(q)\right) $. DGP 1 has $%
q=0.5$ and $\sigma (X_{i})=5$. DGP 2 has $q=0.25$ and $\sigma
(X_{i})=1+X_{i} $. DGP 3 has $q=0.75$ and $\sigma (X_{i})=1+X_{i}$. In
addition to using our plug-in $\hat{h}$, we also compute the estimators for
a much smaller bandwidth in each DGP: $h=1$, $h=0.8$, and $h=0.8$,
respectively. Each simulation ran $1000$ replications. We
compare only the slope coefficient estimators.
\begin{table}[htbp]
\centering
\caption{\label{tab:sim-SCF}Simulation results comparing SEE and SCF exogenous QR estimators.}
\begin{tabular}[c]{cS[table-format=1.3]S[table-format=1.3]cS[table-format=1.3]S[table-format=1.3]cS[table-format=2.3]S[table-format=2.3]cS[table-format=2.3]S[table-format=2.3]}
\hline\hline
& \multicolumn{5}{c}{MSE} & & \multicolumn{5}{c}{Bias} \\
\cline{2-6}\cline{8-12}
\rule{0pt}{14pt}
& \multicolumn{2}{c}{Plug-in $\hat h$} && \multicolumn{2}{c}{small $h$} &&
\multicolumn{2}{c}{Plug-in $\hat h$} && \multicolumn{2}{c}{small $h$} \\
\cline{2-3}\cline{5-6} \cline{8-9}\cline{11-12}
\rule{0pt}{12pt}
DGP & \multicolumn{1}{c}{SEE} & \multicolumn{1}{c}{SCF} &
& \multicolumn{1}{c}{SEE} & \multicolumn{1}{c}{SCF} &
& \multicolumn{1}{c}{SEE} & \multicolumn{1}{c}{SCF} &
& \multicolumn{1}{c}{SEE} & \multicolumn{1}{c}{SCF} \\
\hline
1 & 0.423 & 0.533 && 0.554 & 0.560 && -0.011 & -0.013 && -0.011 & -0.009 \\% && 0.423 & 0.533 && 0.554 & 0.560
2 & 0.342 & 0.433 && 0.424 & 0.430 && 0.092 & -0.025 && 0.012 & 0.012 \\% && 0.334 & 0.432 && 0.424 & 0.430
3 & 0.146 & 0.124 && 0.127 & 0.121 && -0.292 & -0.245 && -0.250 & -0.232 \\% && 0.061 & 0.065 && 0.064 & 0.067
\hline
\end{tabular}
\end{table}
Table \ref{tab:sim-SCF} shows MSE and bias for the SEE and SCF estimators, for our plug-in $\hat h$ as well as the small, fixed $h$ mentioned above. The SCF estimator can have slightly lower MSE, as in the third DGP ($q=0.75$ with heteroskedasticity), but the SEE estimator has more substantially lower MSE in more DGPs, including the homoskedastic conditional median DGP. The differences are quite small with the small $h$, as expected. Deriving and implementing an MSE-optimal bandwidth for the SCF estimator could shrink the differences, but based on these simulations and the theoretical comparison in Section \ref{sec:see}, such an effort seems unlikely to yield improvement over the SEE estimator.
\subsection{Additional simulations\label{sec:sim-additional}}
We tried additional data generating processes (DGPs). The first three DGPs are for exogenous QR, taken directly from \citet{Horowitz1998}. In each case, $q=0.5$, $Y_i=X_i'\beta_0+U_i$, $\beta_0=(1,1)'$, $X_i=(1,x_i)'$ with $x_i\stackrel{iid}{\sim}\textrm{Uniform}(1,5)$, and $n=50$. In DGP 1.1, the $U_i$ are sampled iid from a $t_3$ distribution scaled to have variance two. In DGP 1.2, the $U_i$ are iid from a type I extreme value distribution again scaled and centered to have median zero and variance two. In DGP 1.3, $U_i=(1+x_i)V_i/4$ where $V_i\stackrel{iid}{\sim}N(0,1)$.
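For concreteness, a compact generator for these three designs is sketched below (Python, ours):
\begin{verbatim}
import numpy as np

def draw_dgp1(which, n=50, seed=0):
    """(x, Y) for DGPs 1.1-1.3: Y = 1 + x + U with x ~ Uniform(1, 5)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(1.0, 5.0, n)
    if which == "1.1":                      # t_3 scaled to variance two
        U = rng.standard_t(3, n) * np.sqrt(2.0 / 3.0)  # Var(t_3) = 3
    elif which == "1.2":                    # extreme value: median 0, var 2
        b = np.sqrt(12.0) / np.pi           # Var(Gumbel(0, b)) = (pi b)^2 / 6
        U = rng.gumbel(0.0, b, n)
        median = -b * np.log(np.log(2.0))   # Gumbel median
        U = U - median
    else:                                   # 1.3: U = (1 + x) V / 4, V ~ N(0,1)
        U = (1.0 + x) * rng.standard_normal(n) / 4.0
    return x, 1.0 + x + U
\end{verbatim}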
DGPs 2.1, 2.2, and 3.1--3.6 are shown in the working paper version; they include variants of the \citet{Horowitz1998} DGPs with $q\ne0.5$, different error distributions, and another regressor.
DGPs 4.1--4.3 have endogeneity. DGP 4.1 has $q=0.5$, $n=20$, and $\beta _{0}=(0,1)^{\prime }$. It uses the reduced form
equations in \citet[equation 2]{CattaneoEtAl2012} with $\gamma _{1}=\gamma
_{2}=1$, $x_{i}=1$, $z_{i}\sim N(0,1)$, and $\pi =0.5$. Similar to their
simulations, we set $\rho =0.5$, $(\tilde{v}_{1i},\tilde{v}_{2i})$ iid $%
N(0,1)$, and $(v_{1i},v_{2i})^{\prime }=\left(\tilde{v}_{1i},\sqrt{1-\rho ^{2}}%
\tilde{v}_{2i}+\rho \tilde{v}_{1i}\right)'$. DGP 4.2 is similar to DGP 4.1 but with $(\tilde v_{1i},\tilde v_{2i})'$ iid Cauchy, $n=250$, and $\beta_0=\left(0,\left[\rho-\sqrt{1-\rho^2}\right]^{-1}\right)'$.
DGP 4.3 is the same as DGP 4.1 but with $q=0.35$ (and consequent re-centering of the error term) and $n=30$.
We compare MSE for our SEE estimator using the plug-in $\hat h$ and estimators using different (fixed) values of $h$. We include $h=0$ by using unsmoothed QR or the method in \citet{ChernozhukovHansen2006} for the endogenous DGPs. We also include $h=\infty$ (although not in graphs) by using the usual IV estimator. For the endogenous DGPs, we consider both MSE and the ``robust MSE'' defined in Section \ref{sec:sim-JTPA} as $\textrm{Bias}_{\textrm{median}}^{2}+(\textrm{IQR}/1.349)^2$.
For ``size-adjusted'' power (SAP) of a test with nominal size $\alpha$, the
critical value is picked as the $(1-\alpha)$-quantile of the simulated test
statistic distribution. This is for demonstration, not practice. The size
adjustment fixes the left endpoint of the size-adjusted power curve to the
null rejection probability $\alpha $. The resulting size-adjusted power
curve is one way to try to visualize a combination of type I and type II
errors, in the absence of an explicit loss function. One shortcoming is that
it does not reflect the variability/uniformity of size and power over the
space of parameter values and DGPs.
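In code, the size adjustment amounts to two lines (our sketch):
\begin{verbatim}
import numpy as np

def size_adjusted_power(stats_null, stats_alt, alpha):
    """Critical value = (1 - alpha)-quantile of the simulated null
    statistics, so the power curve starts exactly at alpha."""
    crit = np.quantile(stats_null, 1.0 - alpha)
    return np.mean(stats_alt > crit)
\end{verbatim}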
Regarding notation, the vertical axis in the size-adjusted
power figures shows the simulated rejection probability. The horizontal axis
shows the magnitude of deviation from the null hypothesis, where a
randomized alternative is generated in each simulation iteration as that
magnitude times a random point on the unit sphere in $\mathbb{R}^{d}$, where
$\beta \in \mathbb{R}^{d}$. As the legend shows, the dashed line corresponds
to the unsmoothed estimator ($h=0$), the dotted line to the infeasible $h_{%
\text{SEE}}^{\ast }$, and the solid line to the plug-in $\hat{h}$.
For the MSE graphs, the flat horizontal solid and dashed lines are the MSE
of the intercept and slope estimators (respectively) using feasible plug-in $%
\hat{h}$ (recomputed each replication). The other solid and dashed lines
(that vary with $h$) are the MSE when using the value of $h$ from the
horizontal axis. The left vertical axis shows the MSE values for the
intercept parameter; the right vertical axis shows the MSE for slope
parameter(s); and the horizontal axis shows a log transformation of the
bandwidth, $\log _{10}(1+h)$.
Our plug-in bandwidth is quite stable. The range of $\hat{h}$ values over the
simulation replications is usually less than a factor of $10$, and the range
from $0.05$ to $0.95$ empirical quantiles is around a factor of two. This corresponds to
a very small impact on MSE; note the log transformation on the x-axis of the MSE
graphs.
\begin{figure}[tbph]
\begin{center}
\includegraphics[clip=true,trim=25 175 145 200,width=.49\textwidth]
{mse_psi1.1_noh_better.pdf}
\hfill
\includegraphics[clip=true,trim=25 175 145 200,width=.49\textwidth]
{mse_psi1.3_noh_better.pdf}
\end{center}
\caption{MSE for DGPs 1.1 (left) and 1.3 (right).}
\label{fig:mse-1a}
\end{figure}
\begin{figure}[tbph]
\begin{center}
\includegraphics[clip=true,trim=30 175 145 200,width=.48\textwidth]
{adjpwr_psi1.1_better.pdf}
\hfill
\includegraphics[clip=true,trim=30 175 145 200,width=.48\textwidth]
{adjpwr_psi1.3_better.pdf}
\end{center}
\caption{Size-adjusted power for DGPs 1.1 (left) and 1.3 (right).}
\label{fig:adjpwr-1a}
\end{figure}
In DGPs 1.1--1.3, SEE($\hat h$) has smaller MSE than either the unsmoothed estimator or OLS, for both the intercept and slope coefficients. Figure \ref{fig:mse-1a} shows MSE for DGPs 1.1 and 1.3. It shows that the MSE of SEE($\hat h$) is very close to that of the best estimator with a fixed $h$. In principle, a data-dependent $\hat h$ can attain MSE even lower than any fixed $h$. SAP for SEE($\hat h$) is similar to that with $h=0$; see Figure \ref{fig:adjpwr-1a} for DGPs 1.1 and 1.3.
\begin{figure}[tbph]
\begin{center}
\includegraphics[clip=true,trim=0 175 125 200,width=.49\textwidth]
{mse_psi5.2_noh_better.pdf}
\hfill
\includegraphics[clip=true,trim=0 175 125 200,width=.49\textwidth]
{medmse_psi5.2_noh_better.pdf}
\end{center}
\caption{For DGP 4.2, MSE (left) and ``robust
MSE'' (right): squared median-bias plus the square of the
interquartile range divided by $1.349$, $\text{Bias}_{\mathrm{median}}^{2}+(%
\text{IQR}/1.349)^{2}$.}
\label{fig:mse-4.2}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[clip=true,trim=25 175 125 200,width=.49\textwidth]
{mse_psi5.3_noh_better.pdf}
\hfill
\includegraphics[clip=true,trim=25 175 125 200,width=.49\textwidth]
{medmse_psi5.3_noh_better.pdf}
\caption{Similar to Figure \ref{fig:mse-4.2}, MSE (left) and ``robust MSE'' (right) for DGP 4.3.}
\label{fig:mse-4.3}
\end{figure}
Figures \ref{fig:mse-4.2} and \ref{fig:mse-4.3} show MSE and ``robust MSE'' for two DGPs with endogeneity. Graphs for the other endogenous DGP (4.1) are similar to those for the slope estimator in DGP 4.3 but with larger MSE; they may be found in the working paper. The MSE graph for DGP 4.2 is not as informative since it is sensitive to very large outliers that occur in only a few replications. However, as shown, the MSE for SEE($\hat h$) is still better than that for the unsmoothed IV-QR estimator, and it is nearly the same as the MSE for the mean IV estimator (not shown: $1.1\times10^6$ for $\beta_1$, $2.1\times10^5$ for $\beta_2$).
For robust MSE, SEE($\hat h$) is again always better than the unsmoothed estimator. For DGP 4.3 with normal errors and $q=0.35$, it is similar to the IV estimator, slightly worse for the slope coefficient and slightly better for the intercept, as expected. Also as expected, for DGP 4.2 with Cauchy errors, SEE($\hat h$) is orders of magnitude better than the mean IV estimator.
Overall, using $\hat{h}$ appears
to consistently reduce the MSE of all estimator components compared with $h=0$ and with IV ($h=\infty $). The main exception is cases where
MSE is monotonically decreasing with $h$ (mean regression is more
efficient), in which $\hat{h}$ is much better than $h=0$ but not quite large
enough to match $h=\infty $.
\begin{figure}[tbph]
\begin{center}
\includegraphics[clip=true,trim=30 175 145 200,width=.48\textwidth]
{adjpwr_psi5.1_better.pdf}
\hfill
\includegraphics[clip=true,trim=30 175 145 200,width=.48\textwidth]
{adjpwr_psi5.3_better.pdf}
\end{center}
\caption{Size-adjusted power for DGPs 4.1 (left) and 4.3 (right).}
\label{fig:adjpwr-4a}
\end{figure}
Figure \ref{fig:adjpwr-4a} shows SAP for DGPs 4.1 and 4.3. The gain from smoothing is more substantial than in the exogenous DGPs, close to $10$ percentage points for a range of deviations. Here, the randomness in $\hat h$ is not helpful. In DGP 4.2 (not shown), the SAP for $\hat h$ is actually a few percentage points \emph{below} that for $h=0$ (which in turn is below the infeasible $h^*$), and in DGP 4.1, the SAP improvement from using the infeasible $h^*$ instead of $\hat h$ is similar in magnitude to the improvement from using $\hat h$ instead of $h=0$. Depending on one's loss function of type I and type II errors, the SEE-based test may be preferred or not.
\section{Conclusion\label{sec:conclusion}}
We have presented a new estimator for quantile regression with or without
instrumental variables. Smoothing the estimating equations (moment
conditions) has multiple advantages beyond the known advantage of allowing
higher-order expansions. It can reduce the MSE of both the estimating
equations and the parameter estimator, minimize type I error and improve
size-adjusted power of a chi-square test, and allow more reliable
computation of the instrumental variables quantile regression estimator
especially when the number of endogenous regressors is larger. We have given
the theoretical bandwidth that optimizes these properties, and simulations
show our plug-in bandwidth to reproduce all these advantages over the
unsmoothed estimator. Links to mean instrumental variables regression and
robust estimation are insightful and of practical use.
The strategy of smoothing the estimating equations can be applied to any
model with nonsmooth estimating equations; there is nothing peculiar to the
quantile regression model that we have exploited. For example, this strategy
could be applied to censored quantile regression, or to select the optimal
smoothing parameter in \citeauthor{Horowitz2002}'s (\citeyear{Horowitz2002})
smoothed maximum score estimator. The present paper has focused on
parametric and linear IV quantile regression; extensions to nonlinear IV
quantile regression and nonparametric IV quantile regression along the lines
of \citet{ChenPouzo2009,ChenPouzo2012} are currently under development.
\theendnotes
\bibliographystyle{chicago}
\section{Introduction}
The strongest evidence for dark matter in galaxies comes from extended
neutral hydrogen (HI) rotation curves and, amongst all the galaxy types,
especially from the rotation curves of dwarf Irregulars (dIrrs).
These systems are literally dominated by dark matter; their luminous
matter usually brings only a minor dynamical contribution. From their extended
HI distribution one can derive rotation curves to large galactocentric
radii, probing very far out into the dark halo potential. For these
reasons the dark matter halo parameters can be tightly constrained.
By studying extremely low-mass dIrrs one gets a better handle on the
dark halo scaling laws, since there are known correlations of the
dark halo properties with galaxy types (Kormendy 1987).
The dIrrs of our sample are dwarf members of
the two nearest groups of galaxies outside the Local Group, Sculptor at 2.5 Mpc
and Centaurus A at 3.5 Mpc. Our Parkes HI survey detected three dozen
dwarf members in these two groups (see C\^ot\'e et al 1996 for a
detailed description). Five objects amongst the most gas-rich ones,
with absolute magnitude M$_B$
in the range $-15$ to $-12.7$, were selected for kinematical studies:
UGCA 442 (Sculptor Group), ESO381-G20, DDO 161, ESO444-G84, and ESO325-G11
(Centaurus A Group). Some of their optical parameters are listed
in Table 1.
\begin{table}
\caption{Optical parameters of the selected dwarfs}
\begin{center}
\begin{tabular}{lccccc}
Name & R.A. \& Dec & M$_B$ & B-R & $\mu _B$ & $\alpha ^{-1}$\\
& (1950) & & & mag/arcsec$^2$ & kpc \\
\tableline
UGCA 442 & 23 41 10 -32 13 58 & -13.8 & 0.58 & 22.2 & 0.43 \\
ESO381-G20 & 12 43 18 -33 33 54 & -13.9 & 0.59 & 22.9 & 0.62 \\
DDO 161 & 13 00 38 -17 09 14 & -14.9 & 0.71 & 21.8 & 0.7 \\
ESO444-G84 & 13 34 32 -27 47 30 & -12.7 & 0.28 & 23.1 & 0.24 \\
ESO325-G11 & 13 42 01 -41 36 30 & -13.8 & 0.69 & 24.0 & 1.2 \\
\end{tabular}
\end{center}
\end{table}
\section{HI observations}
HI mappings were done with the Australia Telescope (1.5km \& 3km arrays)
and the Very Large Array (B/C \& C/D), providing velocity resolutions of
3.3 \mbox{\,km\,s$^{-1}$} and 5.2 \mbox{\,km\,s$^{-1}$} respectively, and beam sizes ranging from 13" to
40". The HI
distributions extend well beyond the optical galaxies in all cases, out
to 2 Holmberg radii on average, which means for our dwarfs radii between
4 and 13 $\alpha ^{-1}$ (where $\alpha ^{-1}$ is the optical disk
scalelength).
Their velocity fields show the clear signature of large-scale rotation.
These dwarfs are therefore gravitationally supported by rotation rather
than pressure-supported by random motions (since their velocity
dispersions are much lower, see below). For lower luminosity systems
the maximum rotation velocity ($V_{max}$) decreases so one expects that
eventually the random motions dominate in very small galaxies. Here the
range of magnitudes of our objects overlap with the Lo et al 1993 Local
Group sample which they found to be pressure-supported, although
Hoffman et al 1996 recently got higher $V_{max}$ for half of these dwarfs from
Arecibo mapping. So it seems that down to at least $M_B=-12$ the systems
are supported by rotation.
Inclined tilted-rings were fitted to the velocity fields to yield
the rotation curves, allowing us to model the warps, which are found to be
about 10\deg on average (in position angle and inclination), leaving
low velocity residuals of the order of 5 \mbox{\,km\,s$^{-1}$}. As is expected for such
systems the rotation curves are seen to be slowly rising (see Figure 1);
however, the flat part is reached in all cases, and the values of
$V_{max}$ range from 43 \mbox{\,km\,s$^{-1}$} ~to 67 \mbox{\,km\,s$^{-1}$} .
The velocity dispersions are mostly uniform with an average value of
8 \mbox{\,km\,s$^{-1}$}, which is similar to giant spirals where $\sigma \sim $ 10 \mbox{\,km\,s$^{-1}$}
(Shostak \& van der Kruit 1984). So indeed their $V_{max}$ is several
times higher than their $\sigma $, confirming that they are supported mainly
by rotation. This
allows us to build valid mass-models from
their rotation curves, provided the velocities are corrected for
asymmetric drift to take this pressure term into account (Skillman
1987).
\begin{figure}
\plottwo{stefcen0.ps}{stefsd3.ps}
\plottwo{stefcen1.ps}{stefcen3.ps}
\plottwo{stefcen8.ps}{stefcen11.ps}
\caption{Mass-models for the five dIrrs, showing the contributions from
the stellar disk, the HI disk and the dark halo in each model.}
\end{figure}
\section{Mass-Models}
Multi-component mass-models have been fitted to the rotation curves
(Figure 1),
which assume the mass distribution of a galaxy to consist of
a stellar disk, a neutral gas disk, and a dark matter
halo. The mass-components of the luminous material (stars and gas) are
calculated from the surface brightness profile and the HI radial
surface density profile, while a nonsingular isothermal sphere is used
for the dark matter halo (see Carignan 1985).
The mass-to-light ratio
of the stellar disk $(M/L)_{lum}$ is a free parameter; it is applied to
the luminosity profiles (here obtained in I with the Siding Spring 2.3m
telescope)
to obtain the stellar mass surface densities. The dark matter halo has
two free parameters, its core radius and its velocity dispersion, with
its central density given by
$\rho _0 = {9 \sigma ^2}/({4 \pi G r_c^2})$. Of course, the more
extended the rotation curve, the better one can hope to constrain
these parameters. Also, combining the HI rotation curves with optical
H$\alpha $ rotation curves in the inner parts,
as we have here for UGCA 442 and DDO 161, helps to constrain
$(M/L)_{lum}$ better.
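To make the halo component concrete, the sketch below (Python, our illustration) uses the common pseudo-isothermal form $\rho (r)=\rho _0/[1+(r/r_c)^2]$ as an analytic stand-in for the exact nonsingular isothermal sphere used in our models:
\begin{verbatim}
import numpy as np

G = 4.301e-6                 # kpc (km/s)^2 / Msun

def rho_0(sigma, r_c):
    """Central density rho_0 = 9 sigma^2 / (4 pi G r_c^2), in Msun/kpc^3."""
    return 9.0 * sigma**2 / (4.0 * np.pi * G * r_c**2)

def v_halo(r, sigma, r_c):
    """V^2(r) = 4 pi G rho_0 r_c^2 [1 - (r_c/r) arctan(r/r_c)] for the
    pseudo-isothermal density; r, r_c in kpc, sigma in km/s."""
    x = np.asarray(r, dtype=float) / r_c
    v2 = 4.0 * np.pi * G * rho_0(sigma, r_c) * r_c**2 \
         * (1.0 - np.arctan(x) / x)
    return np.sqrt(v2)

# model curve: V^2(r) = (M/L)_lum V_*^2(r) + V_gas^2(r) + V_halo^2(r),
# with (M/L)_lum, r_c and sigma the three free parameters of the fit.
\end{verbatim}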
For each dwarf a `best-fit' model and a so-called
`maximum-disk' model ($(M/L)_{lum}$ being pushed to its maximal allowed
value) were constructed. Figure 1 shows the best-fit models. It is clear
that the dark matter is in every
case the most massive component, accounting for
at least 54\% and up to more than 92\% of their total mass (out to the
last measured point of their rotation curves).
They are definitely dark-halo dominated; in fact, even in
the rising part of the rotation curve, except for ESO325-G11, the dark
halo already becomes the major dynamical contributor and the stellar
disk is not self-gravitating. This is also the
case for other dwarfs like DDO 154 (Carignan \& Freeman 1988), DDO 127
(Kormendy 1996) and DDO 170 (Lake et al 1990) in which the rotation
curve is explained by dark matter even near the center.
Even the gas component is sometimes more dynamically
significant than the stellar disk, as in DDO 161 or UGCA 442, which has
twice as much mass in HI as in stars. So even if $(M/L)_{lum}$ is the
least well constrained parameter in our models, it has little impact on
the total mass or mass-to-light ratio $(M/L_B)_{dyn}$ derived for
these objects.
\section{Properties of dark halos}
Let us now compare the dark halo parameters of our dwarfs with those of
normal spirals, in order to inspect the halo scaling laws. Kormendy
(1990) pointed out that the central halo density seems to increase for
galaxies of decreasing absolute magnitude. With their study of DDO 154
Carignan \& Freeman (1988) suggested that the ratio of dark to luminous
matter ($M_{dark}/M_{lum}$) gets larger for galaxies at the low mass
end.
Our mass-model results confirm that the total
mass-to-light ratio scales with luminosity, in the sense of course that lower
luminosity galaxies have a higher ratio of dark matter mass to luminous
mass than so-called normal galaxies. This is true even when comparing
their dark matter and luminous masses at a particular radius, for
example at a few times the stellar disk scalelength $\alpha ^{-1}$
rather than at the last measured point at r$_{max}$
(otherwise galaxies with more extended HI rotation curves would be
favoured, as more of their dark halo is sampled). In Figure 2
$M_{dark}/M_{lum}$ at a radius of 7 $\alpha ^{-1}$ is plotted for
a sample of galaxies for which rotation curves extend at least that far
(from the compilation of Broeils 1992). The results of our dwarfs'
maximum-disk models are added (with lower limits in two cases because
their rotation curves do not reach 7 $\alpha ^{-1}$).
It can be argued
that comparing these masses at a certain optical radius, like $R_{25}$
or 7 $\alpha ^{-1}$ here, is perhaps not the best choice considering
that the stellar disk is so unimportant for dwarfs and therefore these
values are not representative of the baryonic scalelength for a dwarf.
Nevertheless, the trend is clearly visible in Figure 2. For
normal spirals $M_{dark}/M_{lum}$ is not far from $\sim $1 (which was
noticed many years ago by Bahcall \& Casertano 1985 for example), but is
known to be a function of luminosity ({\it eg.} Persic \& Salucci 1988,
1990).
At lower luminosity this increases very rapidly. The point at the top is
DDO 170 (Lake et al 1990); also NGC 5585 (C\^ot\'e et al 1991) has a
high $M_{dark}/M_{lum}$ of 6.9, as most low-surface-brightness galaxies
seem to have a higher fraction of dark matter (see also de Blok, this
volume). One also notices that DDO 154 (at M$_B$=-13.8) is not an
exceptional galaxy; other dwarfs of the same luminosity range have
similar (or even more extreme!) dark matter properties.
\begin{figure}[h]
\epsfxsize=8.5cm
\epsfysize=7cm
\centerline{\epsffile{steffig2.ps}}
\caption[1]{\label{f2}
Ratios of dark to luminous mass at a radius of 7 $\alpha
^{-1}$. Filled circles are for our sample, open circles for the galaxies
compiled by Broeils 1992.}
\end{figure}
\begin{figure}[h]
\epsfxsize=8.5cm
\epsfysize=7cm
\centerline{\epsffile{steffig3.ps}}
\caption[1]{\label{f3}
Central dark halo densities (in M$_\odot$ pc$^{-3}$) for our sample
(filled circles) and
for similarly modeled galaxies, compiled in C\^ot\'e 1995 (open
circles).}
\end{figure}
Looking now at the dark matter halo
parameters, in Figure 3 the central dark halo density is plotted for our
dwarfs as well as a sample of 16 galaxies which have been modeled
similarly using an isothermal sphere for the dark halo (mainly from the
Puche \& Carignan 1991 compilation, see C\^ot\'e 1995). Like Kormendy
(1990), we notice an increase in $\rho _0$
for lower luminosity galaxies.
But again, like in Figure 2, there is a large dispersion within the dIrrs.
This seems to
indicate that galaxies with similar optical properties can have
dark matter halos with fairly different properties.
Athanassoula et al (1987) also suggested that halos of early-type
galaxies are more concentrated than those of later types. We also see a
a trend that the ratio of the core radius over $R_{25}$ increases for our
dwarfs (but with a large spread here too).
These trends have important implications for galaxy formation scenarios.
Indeed the CDM halos from N-body simulations with $\Omega _0 =1$ of Navarro,
Frenk \& White (1996) are not compatible with observed rotation curves: the
CDM halos are substantially more concentrated than what is inferred from
observations. Navarro, Eke \& Frenk (1996) have proposed that early
bursts of star formation could expel a large fraction of the baryonic
material in dwarfs, therefore significantly altering the central
regions. But low-surface-brightness galaxies, which can have quite large
scalelengths and be massive objects (de Blok, this volume), are also
observed to be less concentrated than their simulated halos; since they
do not have the shallow potentials of dwarfs it is more difficult to
create baryonic outflows to solve their concentration problem. Navarro
(this volume) proposes instead that lower concentration halos, compatible
with the observed rotation curves of dwarfs and LSBs, can be produced by low $\Omega _0$ flat
CDM models, with $\Omega _0 =0.3$ and $\Lambda =0.7$;
it should be noticed that the inclusion of a compensating $\Lambda$
term is mandatory for $\Omega < 1$ models, since the way current simulations
are constructed requires a flat geometry of the background cosmology
(Buchert 1995).
\begin{figure}[h]
\epsfxsize=8.5cm
\epsfysize=7cm
\centerline{\epsffile{steffig4.ps}}
\caption[1]{\label{f4} Ratio of HI to dark matter
surface densities (full line), and stellar to
dark matter surface densities (dashed line) for DDO 161.}
\end{figure}
Another possible clue about the nature of dark matter comes from the
fact that in most spiral galaxies the ratio of HI surface density
to dark matter surface density
($\Sigma_{HI}/\Sigma_{DM}$) is seen to stay remarkably
constant (Bosma 1978), even out to large radii (Carignan et al 1990).
This has been used as an argument for a strong coupling between the HI gas
and the dark matter,
hinting that the dark matter is not dissipationless and therefore
possibly has a baryonic nature.
One can then model the rotation curves using a scaled-up version of the
HI disks ({\it i.e.} varying the gas mass) rather than a dark matter halo
(see van Albada, this volume) and obtain reasonable fits (sometimes even
better ones, Broeils 1992).
In our dwarfs, however, this is no longer true:
the ratio of HI to dark matter surface densities starts dropping appreciably
at roughly the Holmberg radius. Figure 4 shows the case of DDO 161 (for
which $R_{HO} \sim $3 kpc) where this ratio $\Sigma_{HI}/\Sigma_{DM}$ drops
by at least a factor
of 10 (similar factors are found for our other 4 dwarfs). Many spirals
have HI radial profiles measured out to a larger number of $\alpha
^{-1}$ than some of our dwarfs (see the whole sample of Figure 3 for
example) and do not exhibit this decline in $\Sigma_{HI}/\Sigma_{DM}$.
So this could imply that different galaxies can have not only different
fractions of dark matter but possibly also different mixtures of dark matter
flavours (different fractions of baryonic and non-baryonic dark matter).
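The behaviour of $\Sigma_{HI}/\Sigma_{DM}$ can be illustrated with a toy
calculation: for a nonsingular isothermal halo
$\rho(r)=\rho_0/(1+(r/r_c)^2)$ the line-of-sight projection has the closed
form $\Sigma_{DM}(R)=\pi\rho_0 r_c^2/\sqrt{r_c^2+R^2}$, and combining it
with an exponential HI disk reproduces qualitatively the drop seen in
Figure 4; all parameter values below are illustrative, not fitted.
\begin{verbatim}
import numpy as np

def sigma_dm_iso(R, rho0, rc):
    # Projection of rho(r) = rho0 / (1 + (r/rc)^2) along the line
    # of sight: Sigma(R) = pi * rho0 * rc^2 / sqrt(rc^2 + R^2).
    return np.pi * rho0 * rc**2 / np.sqrt(rc**2 + R**2)

def sigma_hi_exp(R, sigma0, h):
    # Exponential HI disk, Sigma(R) = Sigma0 * exp(-R/h).
    return sigma0 * np.exp(-R / h)

R = np.linspace(0.1, 10.0, 50)   # radii (illustrative units)
ratio = sigma_hi_exp(R, 8.0, 2.0) / sigma_dm_iso(R, 0.05, 1.5)
# The exponential cut-off of the gas beats the slow ~1/R decline of
# the projected halo, so the ratio drops by a large factor outward.
print(ratio[0] / ratio[-1])
\end{verbatim}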
\section{Conclusions}
Dwarf irregular galaxies are dark-matter dominated: the dark matter
halos of our dwarfs account for 54\% to 92\% of their total mass
(inside r$_{max}$).
In most cases the dark halo is the major dynamical contributor already
in the rising part of the rotation curve, and sometimes even the HI disk
is more massive than the stellar disk.
Our mass-model results show that these lower luminosity galaxies have
higher total mass-to-light ratios, and that their dark halos have higher
central densities and are less concentrated, confirming the Kormendy
(1989) correlations.
Contrary to what is found in normal spirals, the ratio of HI to dark matter
surface densities is no longer constant
at large galactocentric radii. One can no longer fit scaled-up HI disks
instead of dark halos to explain the rotation curves, since at large
radii there is no longer a strong coupling between the HI gas and the
dark matter.
\acknowledgments
Thanks to ATNF and MSO TACs and Miller Goss for lots of
telescope time. Thanks to ANU and Fonds FCAR (Qu\'ebec) for
financial support.
\section{Introduction}
\IEEEPARstart{F}{or} the development of next generation communication systems, massive multiple-input multiple-output (MIMO) technology has been widely investigated during the last few years \cite{larsson2014massive, haider2014cellular, yongpeng2015secure, lulow2016, You16Channel, overview_LSAS_2016}. Massive MIMO systems provide huge capacity enhancement by employing hundreds of antennas at a base station (BS).
The co-location of so many antennas on a single BS is a major challenge in realizing massive MIMO, whereas dividing the BS antennas into distributed antenna sets (ASs) provides an alternative solution \cite{truong2013viability}.
In most of the massive MIMO literature, it is assumed that each user equipment (UE) is equipped with a single antenna. Since multiple-antenna UEs are already used in practical systems, it is of both theoretical and practical interest to investigate the capacity of massive MIMO with distributed ASs and multiple-antenna users.
In \cite{zhang2013capacity}, Zhang \textit{et al.} investigated the capacity of a MIMO multiple access channel (MAC) with distributed sets of correlated antennas. The results of \cite{zhang2013capacity} can be applied to a massive MIMO uplink with distributed ASs and multiple antenna UEs directly. The channel between a user and an AS in \cite{zhang2013capacity} is assumed to be a Kronecker correlated MIMO channel \cite{kermoal2002stochastic} with line-of-sight (LOS) components. In \cite{oestges2006validity}, Oestges concluded that the validity of the Kronecker model decreases as the array size increases. Thus, we consider in this paper a MIMO MAC with a more general channel model than that in \cite{zhang2013capacity}. More precisely, we consider also distributed ASs and multiple antenna UEs, but assume that each link between a user and an AS forms a jointly correlated Rician fading channel \cite{weichselberger2006stochastic,gao:statistical}. If the BS antennas become co-located, then the considered channel model reduces to that in \cite{wen2011on}. To the best of our knowledge, a capacity analysis for such MIMO MACs has not been addressed to date.
For the MIMO MAC under consideration, an exact capacity analysis is difficult and may become intractable as the number of antennas grows large. In this paper, we aim at deriving an approximate capacity expression. Deterministic equivalents \cite{couillet2011random}, which have been addressed extensively, are successful methods to derive the approximate capacity for various MIMO channels. These deterministic equivalent approaches fall into four main categories: the Bai and Silverstein method\cite{couillet2011deterministic,couillet2012random,wen2013deterministic}, the Gaussian method\cite{hachem2008new,dupuy2011capacity,zhang2013capacity}, the replica method\cite{taricco2008asymptotic,wen2011on} and free probability theory\cite{far2008slow,speicher2012free}.
The Bai and Silverstein method has been applied to various MIMO MACs. Couillet \textit{et al.} \cite{couillet2011deterministic} used it to investigate the capacity of a MIMO MAC with separately correlated channels.
Combining it with the generalized Lindeberg principle \cite{korada2011applications}, Wen \textit{et al.} \cite{wen2013deterministic} derived the ergodic input-output mutual information of a MIMO MAC where the channel matrix consists of correlated non-Gaussian entries. In the Bai and Silverstein method, one needs to ``guess'' the deterministic equivalent of the Stieltjes transform. This limits its applicability since the deterministic equivalents of some involved models might be hard to ``guess'' \cite{couillet2011random}. By using an integration by parts formula and the Nash-Poincar\'e inequality, the Gaussian method is able to derive the deterministic equivalents directly and can be applied to random matrices with involved correlations.
It is particularly suited to random matrices with Gaussian entries.
Combined with the Lindeberg principle, the Gaussian method can be used to treat random matrices with non-Gaussian entries as in \cite{zhang2013capacity}.
The replica method developed in statistical physics \cite{edwards1975theory} is a widely used approach in wireless communications. It has also been applied to the MIMO MAC. Wen \textit{et al.} \cite{wen2011on} used it to investigate the sum-rate of multiuser MIMO uplink channels with jointly correlated Rician fading. Free probability theory \cite{voiculescu1997free}
provides a better way to understand the asymptotic behavior of large dimensional random matrices. It was first applied to wireless communications by Evans and Tse to investigate multiuser wireless communication systems \cite{evans2000large}.
The Bai and Silverstein method and the Gaussian method are very flexible. Both of them have been used to handle
deterministic equivalents for advanced Haar models \cite{couillet2012random,pastur2011eigenvalue}. Although its validity has not yet been proved \cite{couillet2011random}, the replica method is also a powerful tool. Meanwhile, the applicability of free probability theory is commonly considered very limited as it can only be applied to large random matrices with unitarily invariant properties, such as standard Gaussian matrices and Haar unitary matrices.
The domain of applicability of free probability techniques can be broadened tremendously by operator-valued free probability theory \cite{voiculescu1985symmetries, nica2002operator}, which is a more general version of free probability theory and allows one to deal with random matrices with correlated entries \cite{far2008slow}.
In \cite{far2008slow}, Far \textit{et al.} first used operator-valued free probability theory in wireless communications to study slow-fading MIMO systems with nonseparable correlation. The results of \cite{far2008slow} were then used by Pan \textit{et al.} to study the approximate capacity of uplink network MIMO systems \cite{pan2013capacity} and the asymptotic spectral efficiency of uplink MIMO-CDMA systems over arbitrarily spatially correlated Rayleigh fading channels \cite{pan2013asymptotic}. Quaternionic free probability used in \cite{muller2012channel} by M\"{u}ller and Cakmak can be seen as a particular kind of operator-valued free probability\cite{nica2008free}.
In \cite{speicher2012free}, Speicher and Vargas provided the free deterministic equivalent method to derive the deterministic equivalents under the operator-valued free probability framework. A free deterministic equivalent of a random matrix is a non-commutative random variable or an operator-valued random variable whose distribution differs from the expected distribution of the random matrix by an amount that goes to zero in the large dimension limit. They viewed the considered random matrix as a polynomial in
several matrices, and obtained its free deterministic equivalent by replacing the matrices with operator-valued random variables satisfying certain freeness relations. They observed that the Cauchy transform of the free deterministic equivalent is actually the solution to the iterative deterministic equivalent equation derived by the Bai and Silverstein method or the Gaussian method. Using the free deterministic equivalent approach, they recovered the deterministic equivalent results for the advanced Haar model from \cite{couillet2010deterministic}.
Motivated by the results from \cite{speicher2012free}, we propose a free deterministic equivalent for the capacity analysis of the general channel model considered in this paper. The method of free deterministic equivalents provides a relatively formalized methodology to obtain the deterministic equivalent of the Cauchy transform.
By replacing independent Gaussian matrices with random matrices that are composed of non-commutative random variables and satisfy certain operator-valued freeness relations, we obtain the free deterministic equivalent of the channel Gram matrix.
The Cauchy transform of the free deterministic equivalent is easy to derive by using operator-valued free probability techniques, and is asymptotically the same as that of the channel Gram matrix.
Then, we compute the approximate Shannon transform of the channel Gram matrix and the approximate ergodic input-output mutual information of the channel. Furthermore, we derive the sum-rate capacity achieving input covariance matrices based on the approximate ergodic input-output mutual information.
Our considered channel model reduces to that in \cite{zhang2013capacity} when the channel between a user and an AS is a Kronecker correlated MIMO channel, and to the channel model in \cite{wen2011on} when there is one AS at the BS. In this paper, we will show that the results of \cite{zhang2013capacity} and \cite{wen2011on} can be recovered by using the free deterministic equivalent method. Since many existing channel models are special cases of the channel models in \cite{zhang2013capacity} and \cite{wen2011on}, we will also be able to provide a new approach to derive the deterministic equivalent results for them.
The rest of this article is organized as follows.
The preliminaries and problem formulation are presented in Section II.
The main results are provided in Section III. Simulations are contained in Section IV. The conclusion is drawn in Section V. A tutorial on free probability theory and operator-valued free probability theory is presented in Appendix A, where the free deterministic equivalents used in this paper are also introduced and a rigorous mathematical justification
of the free deterministic equivalents is provided. Proofs of Lemmas and Theorems are provided in Appendices B to G.
\textit{Notations:}
Throughout this paper, uppercase boldface letters and lowercase boldface letters are used for matrices and vectors, respectively. The superscripts $(\cdot)^*$, $(\cdot)^T$ and $(\cdot)^H$ denote the conjugate, transpose and conjugate transpose operations, respectively. The notation ${\mathbb E}\{\cdot\}$ denotes the mathematical expectation operator. In some cases, where it is not clear from the context, we will employ subscripts to emphasize the definition. The notation $g \circ f$ represents the composite function $g(f(x))$. We use $\mathbf{A} \odot \mathbf{B}$ to denote the Hadamard product of two matrices $\mathbf{A}$ and $\mathbf{B}$ of the same dimensions. The $N \times N$ identity matrix is denoted by $\mathbf{I}_N$. The $N \times N$ and $N \times M$ zero matrices are denoted by $\mathbf{0}_N$ and $\mathbf{0}_{N\times M}$. We use $[\mathbf{A}]_{ij}$ to denote the $(i,j)$-th entry of the matrix $\mathbf{A}$. The operators ${\rm{tr}}(\cdot)$ and $\det(\cdot)$ represent the matrix trace and determinant, respectively. ${\rm {diag}}(\mathbf{x})$ denotes a diagonal matrix with $\mathbf{x}$ along its main diagonal. $\Re(\mathbf{W})$ and $\Im(\mathbf{W})$ denote $\frac{1}{2}(\mathbf{W}+\mathbf{W}^H)$ and $\frac{1}{2i}(\mathbf{W}-\mathbf{W}^H)$, respectively. $\mathbf{D}_N(\mathbb{C})$ denotes the algebra of $N \times N$ diagonal matrices with elements in the complex field $\mathbb{C}$. Finally, we denote by $\mathbf{M}_{N}(\mathbb{C})$ the algebra of $N \times N$ complex matrices and by $\mathbf{M}_{N \times M}(\mathbb{C})$ the algebra of $N \times M$ complex matrices.
\section{Preliminaries and Problem Formulation}
In this section, we first present the definitions of the Shannon transform and the Cauchy transform, and introduce the free deterministic equivalent method with a simple channel model, while our rigorous mathematical justification
of the free deterministic equivalents is provided in Appendix A. Then, we present the general model of the MIMO MAC considered in this work, followed by the problem formulation.
\vspace{-1em}
\subsection{Shannon Transform and Cauchy Transform}
Let $\mathbf{H}$ be an $N \times M$ random matrix and ${\mathbf{B}_N}$ denote the Gram matrix $\mathbf{H}\mathbf{H}^H$.
Let $F_{\mathbf{B}_N}(\lambda)$ denote the expected cumulative distribution of the eigenvalues of
${\mathbf{B}_N}$. The Shannon transform $\mathcal{V}_{\mathbf{B}_N}(x)$ is defined as \cite{tulino2004random}
\begin{equation}
\mathcal{V}_{\mathbf{B}_N}(x) = \int_{0}^{\infty}\log(1+\frac{1}{x}\lambda)dF_{\mathbf{B}_N}(\lambda).
\end{equation}
Let $\mu$ be a probability measure on $\mathbb{R}$ and $\mathbb{C}^+$ denote the set
\begin{equation}
\left\{z \in \mathbb{C}:\Im(z) > 0\right\}. \nonumber
\end{equation} The Cauchy transform $G_{\mu}(z)$ for $z\in\mathbb{C}^+$
is defined by \cite{nica2006lectures}
\begin{eqnarray}
{G}_{\mu}(z) = \int_{0}^{\infty}\frac{1}{z-\lambda}d\mu(\lambda).
\end{eqnarray}
Let $G_{\mathbf{B}_N}(z)$ denote the Cauchy transform for $F_{\mathbf{B}_N}(\lambda)$. Then, we have $G_{\mathbf{B}_N}(z)=\frac{1}{N}{\mathbb{E}}\{{\rm{tr}}((z\mathbf{I}_N-\mathbf{B}_N)^{-1})\}.$
The relation between the Cauchy transform $G_{\mathbf{B}_N}(z)$ and the Shannon transform $\mathcal{V}_{\mathbf{B}_N}(x)$ can be expressed as \cite{tulino2004random}
\begin{equation}
\mathcal{V}_{\mathbf{B}_N}(x) = \int_{x}^{+\infty}\left(\frac{1}{z}+G_{\mathbf{B}_N}(-z)\right)dz.
\label{eq:relation_between_Cauchy_transform_and_shannon_transform}
\end{equation}
Differentiating both sides of \eqref{eq:relation_between_Cauchy_transform_and_shannon_transform} with respect to $x$, we obtain
\begin{equation}
\frac{d\mathcal{V}_{\mathbf{B}_N}(x)}{dx} = -x^{-1}-G_{\mathbf{B}_N}(-x).
\label{eq:relation_between_Shannon_transform_and_Cauchy_transform}
\end{equation}
Thus, if we are able to find a function whose derivative with respect to $x$ is $-x^{-1}-G_{\mathbf{B}_N}(-x)$, then we can obtain $\mathcal{V}_{\mathbf{B}_N}(x)$.
In conclusion, if the Cauchy transform $G_{\mathbf{B}_N}(x)$ is known, then the Shannon transform $\mathcal{V}_{\mathbf{B}_N}(x)$ can
be immediately obtained by applying \eqref{eq:relation_between_Shannon_transform_and_Cauchy_transform}.
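As a sanity check, relation \eqref{eq:relation_between_Cauchy_transform_and_shannon_transform} can be verified numerically. The following sketch does so for a simple i.i.d. channel; the dimensions, the number of trials, the evaluation point $x=0.5$ and the truncation point of the integral are all chosen arbitrarily.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M, trials = 32, 48, 200

# Pool eigenvalues of B_N = H H^H over many draws to approximate
# the expected spectral distribution F_{B_N}.
samples = []
for _ in range(trials):
    H = (rng.standard_normal((N, M))
         + 1j * rng.standard_normal((N, M))) / np.sqrt(2 * M)
    samples.append(np.linalg.eigvalsh(H @ H.conj().T))
lam = np.concatenate(samples)

shannon = lambda x: np.mean(np.log1p(lam / x))   # Shannon transform
cauchy = lambda z: np.mean(1.0 / (z - lam))      # Cauchy transform

x = 0.5
zs = np.linspace(x, 500.0, 20000)                # truncated at 500
f = np.array([1.0 / z + cauchy(-z) for z in zs])
rhs = np.sum((f[1:] + f[:-1]) * np.diff(zs)) / 2 # trapezoid rule
print(shannon(x), rhs)                           # nearly equal
\end{verbatim}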
\subsection{Free Deterministic Equivalent Method}
In this subsection, we introduce the free deterministic equivalent method, which can be used to
derive the approximation of $G_{\mathbf{B}_N}(z)$.
The associated definitions, such as that of free independence, circular elements, R-cyclic matrices and semicircular elements over $\mathbf{D}_n(\mathbb{C})$, are provided in Appendix
\ref{sec:Free Probability and Operator-valued Free Probability}.
The term free deterministic equivalent was coined by Speicher and Vargas in \cite{speicher2012free}. The considered random matrix in \cite{speicher2012free} was viewed as a polynomial in several deterministic matrices and several independent random matrices. The free deterministic equivalent of the considered random matrix was then obtained by replacing the matrices with operator-valued random variables satisfying certain freeness relations.
Moreover, the difference between the Cauchy transform of the free deterministic equivalent and that of the considered random matrix goes to zero in the large dimension limit.
However, the method in \cite{speicher2012free} only showed how to obtain the free deterministic equivalents for the
case where the random matrices are standard Gaussian matrices and Haar unitary matrices.
A method similar to that in \cite{speicher2012free} was presented by Speicher in \cite{Speicher2008what}, which
appeared earlier than \cite{speicher2012free}.
The method in \cite{Speicher2008what} showed that a random matrix with independent Gaussian entries having different variances can be replaced
by a random matrix with free (semi)circular elements having different variances.
However, it only considered a very simple case, and the replacement process was not given a rigorous mathematical proof.
Moreover, the free deterministic equivalents were not mentioned in \cite{Speicher2008what}.
In this paper, we introduce in Appendix \ref{sec:Free Deterministic Equivalent} the free deterministic equivalents for the case where all the matrices are square and
have the same size, and the random matrices are Hermitian and composed of independent Gaussian entries with different variances. Similarly to \cite{speicher2012free}, the free deterministic equivalent of a polynomial in matrices is defined.
The replacement process used is that in \cite{Speicher2008what}. Moreover,
a rigorous mathematical justification of the free deterministic equivalents we introduce is also provided in Appendix \ref{sec:Free Deterministic Equivalent} and Appendix \ref{New_asymptotic_freeness_results}.
\newcounter{tempequationcounter}
\begin{figure*}[!t]
\normalsize
\setcounter{tempequationcounter}{\value{equation}}
\begin{eqnarray}
\setcounter{equation}{19}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\left(
\begin{array}{cccccc}
z\mathcal{G}_{\boldsymbol{\mathcal{B}_N}}^{\mathcal{D}_N}(z^2\mathbf{I}_{N}) & \mathbf{0} \\
\mathbf{0} & z\mathcal{G}_{\boldsymbol{\mathcal{H}}^H\boldsymbol{\mathcal{H}}}^{\mathcal{D}_M}(z^2\mathbf{I}_{M}) \\
\end{array}
\right) \nonumber \\
&&= E_{\mathcal{D}_n}\left\{
\left(
\begin{array}{cccccc}
z\mathbf{I}_N - z\eta_{\mathcal{D}_N}(\mathcal{G}_{\boldsymbol{\mathcal{H}}^H \boldsymbol{\mathcal{H}}}^{\mathcal{D}_M}(z^2\mathbf{I}_{M})) & -\overline{\mathbf{H}} \\
-\overline{\mathbf{H}}{}^H & z\mathbf{I}_M - z\eta_{\mathcal{D}_M}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N}(z^2\mathbf{I}_{N})) \\
\end{array}
\right)^{-1}\right\}
\label{eq:floatingequation_tmp1}
\end{eqnarray}
\setcounter{equation}{\value{tempequationcounter}}
\hrulefill
\end{figure*}
In \cite{Speicher2008what}, the deterministic equivalent results of \cite{hachem2007deterministic} were rederived.
But the description in \cite{Speicher2008what} is not easy to follow.
To show how the introduced free deterministic equivalents can be used to derive the approximation of the Cauchy transform $G_{\mathbf{B}_N}(z)$, we
use the channel model in \cite{hachem2007deterministic} as a toy example and
restate the method used in \cite{Speicher2008what} as follows.
The channel matrix $\mathbf{H}$ in \cite{hachem2007deterministic} consists of an $N \times M$ deterministic matrix $\overline{\mathbf{H}}$ and an $N \times M$ random matrix $\widetilde{\mathbf{H}}$, \textit{i.e.}, $\mathbf{H}=\overline{\mathbf{H}}+\widetilde{\mathbf{H}}$. The entries of $\widetilde{\mathbf{H}}$ are independent zero mean complex Gaussian random variables with variances $\mathbb{E}\{[\widetilde{\mathbf{H}}]_{ij}[\widetilde{\mathbf{H}}]_{ij}^*\}=\frac{1}{N}\sigma_{ij}^2$.
Let $n$ denote $N+M$, $\mathcal{P}$ denote the algebra of complex random variables and $\mathbf{M}_n(\mathcal{P})$ denote the algebra of $n \times n$ complex random matrices. We define $\mathbb{E}_{\mathcal{D}_n}:\mathbf{M}_n(\mathcal{P}) \rightarrow \mathbf{D}_n(\mathbb{C})$ by
\begin{eqnarray}
& &\!\!\!\!\!\!\!\!\!\!\!\!{\mathbb E}_{\mathcal{D}_n}\left\{\left(
\begin{array}{ccccc}
X_{11} & X_{12} & \cdots & X_{1n} \\
X_{21} & X_{22} & \ldots & X_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
X_{n1} & X_{n2} & \ldots & X_{nn} \\
\end{array}
\right)\right\}
\nonumber \\
&&=\left(
\begin{array}{cccc}
{\mathbb E}\{X_{11}\} & 0 & \cdots & 0 \\
0 & {\mathbb E}\{X_{22}\} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & {\mathbb E}\{X_{nn}\} \\
\end{array}
\right)
\end{eqnarray}
where each $X_{ij}$ is a complex random variable. Hereafter, we use the notations $\mathcal{M}_n:=\mathbf{M}_n(\mathbb{C})$ and $\mathcal{D}_n:=\mathbf{D}_n(\mathbb{C})$ for brevity.
Let $\mathbf{X}$ be an $n \times n$ matrix defined by \cite{far2008slow}
\begin{eqnarray}
{\mathbf{X}} = \left(
\begin{array}{cc}
\mathbf{0}_{N} & {\mathbf{H}} \\
{\mathbf{H}}^H & \mathbf{0}_{M} \\
\end{array}
\right).
\label{eq:definition_of_matrix_bold_captial_x}
\end{eqnarray}
The matrix $\mathbf{X}$ is even, \textit{i.e.}, all the odd moments of $\mathbf{X}$ are zero, and
\begin{eqnarray}
{\mathbf{X}}^2 = \left(\begin{array}{cc}
{\mathbf{H}}{\mathbf{H}}^H & \mathbf{0}_{N\times M} \\
\mathbf{0}_{M \times N} & {\mathbf{H}}^H{\mathbf{H}} \\
\end{array}
\right).
\label{eq:mathbf_x_square}
\end{eqnarray}
Let $\boldsymbol{\Delta}_n\in \mathcal{D}_n$ be a diagonal matrix with $\Im(\boldsymbol{\Delta}_n) \succ 0$. The $\mathcal{D}_n$-valued Cauchy transform $\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_n}(\boldsymbol{\Delta}_n)$ is given by
\begin{eqnarray}
\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_n}(\boldsymbol{\Delta}_n) = {\mathbb E}_{\mathcal{D}_n}\{(\boldsymbol{\Delta}_n-\mathbf{X})^{-1}\}.
\end{eqnarray}
When $\boldsymbol{\Delta}_n=z\mathbf{I}_{n}$ and $z \in \mathbb{C}^+$, we have that
\begin{eqnarray}
&& \!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_n}(z\mathbf{I}_{n})
\nonumber \\
&& \!\!\!\!\!\!\!\!\!\!= {\mathbb E}_{\mathcal{D}_n} \!\!\left\{(z\mathbf{I}_{n}-\mathbf{X})^{-1}\right\}
\nonumber \\
&& \!\!\!\!\!\!\!\!\!\!=
{\mathbb E}_{\mathcal{D}_n} \!\!\left\{ \!\! \left(\!\!\begin{array}{cc}
\!\!z(z^2\mathbf{I}_{N}-\mathbf{H}\mathbf{H}^H)^{-1} \!\! & \!\!{\mathbf{H}}(z^2\mathbf{I}_{M}-\mathbf{H}^H\mathbf{H})^{-1} \!\! \\
\!\!{\mathbf{H}}^H(z^2\mathbf{I}_{N}-\mathbf{H}\mathbf{H}^H)^{-1} \!\!& \!\!z(z^2\mathbf{I}_{M}-\mathbf{H}^H\mathbf{H})^{-1} \!\!\\
\end{array}\right)\!\!\right\} \nonumber \\
\label{eq:diagonal_matrix_valued_cauchy_transform_of_X}
\end{eqnarray}
where the second equality is due to the block matrix inversion formula \cite{petersen2008matrix}.
From \eqref{eq:mathbf_x_square} and \eqref{eq:diagonal_matrix_valued_cauchy_transform_of_X}, we obtain
\begin{equation}
\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_n}(z\mathbf{I}_{n}) = z\mathcal{G}_{\mathbf{X}^{2}}^{\mathcal{D}_n}(z^2\mathbf{I}_{n})
\label{eq:realtion_of_Cauchy_transform_of X_and_X2}
\end{equation}
for each $z,z^2 \in \mathbb{C}^+$.
Furthermore, we write $\mathcal{G}_{\mathbf{X}^{2}}^{\mathcal{D}_n}(z\mathbf{I}_{n})$ as
\begin{equation}
\mathcal{G}_{\mathbf{X}^2}^{\mathcal{D}_n}(z\mathbf{I}_{n})=\left(
\begin{array}{cccccc}
\mathcal{G}_{\mathbf{B}_N}^{\mathcal{D}_N}(z\mathbf{I}_{N}) & \mathbf{0} \\
\mathbf{0} & \mathcal{G}_{{\mathbf{H}}^H{\mathbf{H}}}^{\mathcal{D}_M}(z\mathbf{I}_{M}) \\
\end{array}
\right)
\label{eq:Cauchy_transform_of_Hsquare_in_detail}
\end{equation}
where
\begin{IEEEeqnarray}{Rl}
&\mathcal{G}_{\mathbf{B}_N}^{\mathcal{D}_N}(z\mathbf{I}_{N}) = {\mathbb E}_{\mathcal{D}_N}\{(z\mathbf{I}_{N}-\mathbf{B}_N)^{-1}\}\nonumber \\
&\mathcal{G}_{{\mathbf{H}}^H{\mathbf{H}}}^{\mathcal{D}_M}(z\mathbf{I}_M) = {\mathbb E}_{\mathcal{D}_M}\{(z\mathbf{I}_{M}-{\mathbf{H}}^H{\mathbf{H}})^{-1}\}.\nonumber
\end{IEEEeqnarray}
Since $G_{\mathbf{B}_N}(z)=\frac{1}{N}{\rm{tr}}(\mathcal{G}_{\mathbf{B}_N}^{\mathcal{D}_N}(z\mathbf{I}_{N}))$, we have related the calculation of ${G}_{\mathbf{B}_N}(z)$ with that of $\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_n}(z\mathbf{I}_{n})$.
We define $\overline{\mathbf{X}}$ and $\widetilde{\mathbf{X}}$ by
\begin{eqnarray}
\overline{\mathbf{X}} = \left(
\begin{array}{cc}
\mathbf{0}_{N} & \overline{\mathbf{H}} \\
\overline{\mathbf{H}}{}^H & \mathbf{0}_{M} \\
\end{array}
\right)
\label{eq:definition_of_matrix_overline_bold_captial_x}
\end{eqnarray}
and
\begin{eqnarray}
\widetilde{\mathbf{X}} = \left(
\begin{array}{cc}
\mathbf{0}_{N} & \widetilde{\mathbf{H}} \\
\widetilde{\mathbf{H}}^H & \mathbf{0}_{M} \\
\end{array}
\right).
\label{eq:definition_of_matrix_widetilde_bold_captial_x}
\end{eqnarray}
Then, we have that $\mathbf{X} = \overline{\mathbf{X}} + \widetilde{\mathbf{X}}$.
The free deterministic equivalent of $\mathbf{X}$ is constructed as follows.
Let $\mathcal{A}$ be a unital algebra, $(\mathcal{A},\phi)$ be a non-commutative probability space and
$\widetilde{\boldsymbol{\mathcal{H}}}$ denote an $N \times M$ matrix with entries from $\mathcal{A}$.
The entries $[\widetilde{\boldsymbol{\mathcal{H}}}]_{ij}
\in \mathcal{A}$ are freely independent centered circular elements with variances $\phi([\widetilde{\boldsymbol{\mathcal{H}}}]_{ij}[\widetilde{\boldsymbol{\mathcal{H}}}]_{ij}^*)=\frac{1}{N}\sigma_{ij}^2$.
Let $\boldsymbol{\mathcal{H}}$ denote $\overline{\mathbf{H}}+\widetilde{\boldsymbol{\mathcal{H}}}$, $\widetilde{\boldsymbol{\mathcal{X}}}$ denote
\begin{eqnarray}
\widetilde{\boldsymbol{\mathcal{X}}}=\left(
\begin{array}{ccccc}
\mathbf{0} & \widetilde{\boldsymbol{\mathcal{H}}} \\
\widetilde{\boldsymbol{\mathcal{H}}}^H & \mathbf{0} \\
\end{array}
\right)
\label{eq:definition_of_matrix_widetilde_cal_captial_x}
\end{eqnarray}
and
$\boldsymbol{\mathcal{X}}$ denote
\begin{eqnarray}
\boldsymbol{\mathcal{X}}=\left(
\begin{array}{ccccc}
\mathbf{0} & \boldsymbol{\mathcal{H}} \\
\boldsymbol{\mathcal{H}}^H & \mathbf{0} \\
\end{array}
\right).
\label{eq:definition_of_matrix_cal_captial_x}
\end{eqnarray}
It follows that $\boldsymbol{\mathcal{X}} = \overline{\mathbf{X}} + \widetilde{\boldsymbol{\mathcal{X}}}$.
The matrix $\boldsymbol{\mathcal{X}}$ is the free deterministic equivalent of $\mathbf{X}$.
We define $E_{\mathcal{D}_n}:\mathbf{M}_n(\mathcal{A}) \rightarrow \mathcal{D}_n$ by
\begin{eqnarray}
& &\!\!\!\!\!\!\!\!\!\!\!\!E_{\mathcal{D}_n}\left\{\left(
\begin{array}{ccccc}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \ldots & x_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n1} & x_{n2} & \ldots & x_{nn} \\
\end{array}
\right)\right\}
\nonumber \\
&&=\left(
\begin{array}{cccc}
\phi(x_{11}) & 0 & \cdots & 0 \\
0 & \phi(x_{22}) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & \phi(x_{nn}) \\
\end{array}
\right)
\label{eq:definition_of_E_sub_mathcalD_n}
\end{eqnarray}
where each $x_{ij}$ is a non-commutative random variable from $(\mathcal{A},\phi)$. Then, $(\mathbf{M}_{n}(\mathcal{A}), E_{\mathcal{D}_n})$ is a $\mathcal{D}_n$-valued probability space.
From the discussion of the free deterministic equivalents provided in Appendix \ref{sec:Free Deterministic Equivalent}, we have that
$\mathcal{G}^{\mathcal{D}_n}_{\boldsymbol{\mathcal{X}}}(z\mathbf{I}_n)$ and
$\mathcal{G}^{\mathcal{D}_n}_{\mathbf{X}}(z\mathbf{I}_n)$ are asymptotically the same. Let $\boldsymbol{\mathcal{B}}_N$ denote $\boldsymbol{\mathcal{H}}\boldsymbol{\mathcal{H}}^H$.
The relation between $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}_n}(z\mathbf{I}_n)$ and $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N}(z\mathbf{I}_N)$ is the same as that between $\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_n}(z\mathbf{I}_n)$ and $\mathcal{G}_{\mathbf{B}_N}^{\mathcal{D}_N}(z\mathbf{I}_N)$.
Thus, we also have that $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N}(z\mathbf{I}_N)$ and
$\mathcal{G}_{\mathbf{B}_N}^{\mathcal{D}_N}(z\mathbf{I}_N)$ are asymptotically the same
and $G_{\boldsymbol{\mathcal{B}}_N}(z)$ is the deterministic equivalent of $G_{\mathbf{B}_N}(z)$.
For convenience, we also call $\boldsymbol{\mathcal{B}}_N$ the free deterministic equivalent of $\mathbf{B}_N$.
In the following, we derive
the Cauchy transform
$G_{\boldsymbol{\mathcal{B}}_N}(z)$ by using operator-valued free probability techniques.
Since its elements on and above the diagonal are freely independent, we have that $\widetilde{\boldsymbol{\mathcal{X}}}$ is an R-cyclic matrix.
From Theorem 8.2 of \cite{nica2002r}, we then have that $\overline{\boldsymbol{\mathbf{X}}}$ and $\widetilde{\boldsymbol{\mathcal{X}}}$ are free over $\mathcal{D}_n$.
The $\mathcal{D}_n$-valued Cauchy transform of the sum of two $\mathcal{D}_n$-valued free random variables is given by \eqref{eq:operator_cauchy_transform_of_sum_of_free_varaible} in Appendix \ref{sec:Free Probability and Operator-valued Free Probability}. Applying \eqref{eq:operator_cauchy_transform_of_sum_of_free_varaible}, we have that
\begin{eqnarray}
\!\!\!\!\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}_n}(z\mathbf{I}_n)
\!\!\!\!&=&\!\!\!\! \mathcal{G}_{\overline{\boldsymbol{\mathbf{X}}}}^{\mathcal{D}_n}\!\!\left(z\mathbf{I}_n - \mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}_n}\!\! \left(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}_n}(z\mathbf{I}_n)\right)\right)
\nonumber \\
\!\!\!\!&=&\!\!\!\! E_{\mathcal{D}_n}\!\!\left\{\!\!\left(z\mathbf{I}_n - \mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}_n}\!\! \left(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}_n}(z\mathbf{I}_n)\right)-\overline{\boldsymbol{\mathbf{X}}}\right)^{-1}\!\!\right\}
\label{eq:Cauchy_transform_of_the_sum_in_example}
\end{eqnarray}
where $\mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}_n} $ is the $\mathcal{D}_n$-valued R-transform of $\widetilde{\boldsymbol{\mathcal{X}}}$.
Let $\eta_{\mathcal{D}_n}(\mathbf{C})$ denote $E_{\mathcal{D}_n}\{\widetilde{\boldsymbol{\mathcal{X}}}\mathbf{C}\widetilde{\boldsymbol{\mathcal{X}}}\}$, where $\mathbf{C} \in \mathcal{D}_n$. From Theorem 7.2 of \cite{nica2002r}, we obtain that
$\widetilde{\boldsymbol{\mathcal{X}}}$ is semicircular over $\mathcal{D}_n$, and thus its $\mathcal{D}_n$-valued R-transform
is given by
\begin{equation}
\mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}_n}(\mathbf{C})=\eta_{\mathcal{D}_n}(\mathbf{C}).
\label{eq:digonal_matrix_valued_R_transform_of_X}
\end{equation}
From \eqref{eq:Cauchy_transform_of_the_sum_in_example} and the counterparts of \eqref{eq:realtion_of_Cauchy_transform_of X_and_X2} and \eqref{eq:Cauchy_transform_of_Hsquare_in_detail} for $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}_n}(z\mathbf{I}_n)$ and $\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}_n}(z\mathbf{I}_n)$, we obtain equation
\eqref{eq:floatingequation_tmp1}
at the top of this page.
\setcounter{equation}{19}
\begin{figure*}[!t]
\normalsize
\setcounter{tempequationcounter}{\value{equation}}
\begin{eqnarray}
&&\!\!\!\!z\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N}(z\mathbf{I}_{N})
= E_{\mathcal{D}_N}\left\{\left(\mathbf{I}_N - \eta_{\mathcal{D}_N}(\mathcal{G}_{\boldsymbol{\mathcal{H}}^H\boldsymbol{\mathcal{H}}}^{\mathcal{D}_M}(z\mathbf{I}_{M}))
-\overline{\mathbf{H}}\left(z\mathbf{I}_M-z\eta_{\mathcal{D}_M}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N} (z\mathbf{I}_{N}))\right)^{-1}\overline{\mathbf{H}}{}^H\right)^{-1}\right\} \label{eq:floatingequation_tmp2}
\\
&&\!\!\!\!z\mathcal{G}_{\boldsymbol{\mathcal{H}}^H\boldsymbol{\mathcal{H}}}^{\mathcal{D}_M}(z\mathbf{I}_{M})
=E_{\mathcal{D}_M}\left\{\left(\mathbf{I}_M - \eta_{\mathcal{D}_M}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N}(z\mathbf{I}_{N}))
-\overline{\mathbf{H}}{}^H\left(z\mathbf{I}_N-z\eta_{\mathcal{D}_N}(\mathcal{G}_{\boldsymbol{\mathcal{H}}^H \boldsymbol{\mathcal{H}}}^{\mathcal{D}_M}(z\mathbf{I}_{M}))\right)^{-1}\overline{\mathbf{H}}\right)^{-1}\right\}
\label{eq:floatingequation_tmp3}
\end{eqnarray}
\addtocounter{tempequationcounter}{2}
\setcounter{equation}{\value{tempequationcounter}}
\hrulefill
\end{figure*}
Furthermore, we obtain equations \eqref{eq:floatingequation_tmp2} and \eqref{eq:floatingequation_tmp3}
at the top of the following page, where
\begin{IEEEeqnarray}{Rl}
&\eta_{\mathcal{D}_N}(\mathbf{C}_1) = E_{\mathcal{D}_N}\{\widetilde{\boldsymbol{\mathcal{H}}}\mathbf{C}_1\widetilde{\boldsymbol{\mathcal{H}}}{}^H\}, \mathbf{C}_1 \in \mathcal{D}_M \nonumber \\
&\eta_{\mathcal{D}_M}(\mathbf{C}_2) = E_{\mathcal{D}_M}\{\widetilde{\boldsymbol{\mathcal{H}}}{}^H\mathbf{C}_2\widetilde{\boldsymbol{\mathcal{H}}}\}, \mathbf{C}_2 \in \mathcal{D}_N.\nonumber
\end{IEEEeqnarray}
Equations \eqref{eq:floatingequation_tmp2} and \eqref{eq:floatingequation_tmp3} are equivalent to the
ones provided by Theorem 2.4 of \cite{hachem2007deterministic}.
Finally, the Cauchy transform $G_{\boldsymbol{\mathcal{B}}_N}(z)$ is obtained by $G_{\boldsymbol{\mathcal{B}}_N}(z)=\frac{1}{N}{\rm{tr}}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N}(z\mathbf{I}_{N}))$.
In conclusion, the free deterministic equivalent method provides a way to
derive the approximation of the Cauchy transform $G_{\mathbf{B}_N}(z)$.
The fundamental step is to construct the free deterministic equivalent $\boldsymbol{\mathcal{B}}_N$ of $\mathbf{B}_N$.
After the construction, the Cauchy transform
$G_{\boldsymbol{\mathcal{B}}_N}(z)$ can be derived by using operator-valued free probability techniques.
Moreover, $G_{\boldsymbol{\mathcal{B}}_N}(z)$ is the deterministic equivalent of $G_{\mathbf{B}_N}(z)$.
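For illustration, the coupled equations \eqref{eq:floatingequation_tmp2} and \eqref{eq:floatingequation_tmp3} can be solved by simple fixed-point iteration at a real point $z<0$. The sketch below does this for an arbitrarily chosen mean matrix and variance profile (both placeholders) and compares the result with a Monte-Carlo estimate of $G_{\mathbf{B}_N}(z)$; since $\overline{\mathbf{H}}$ is deterministic, the maps $E_{\mathcal{D}_N}$ and $E_{\mathcal{D}_M}$ here amount to extracting matrix diagonals.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, M = 40, 60
z = -1.0                          # real evaluation point z < 0

Hbar = (rng.standard_normal((N, M))
        + 1j * rng.standard_normal((N, M))) / np.sqrt(2 * M)
var = rng.uniform(0.2, 1.0, (N, M)) / N      # sigma_ij^2 / N

eta_N = lambda gM: var @ gM       # diagonal of eta_{D_N}(diag(gM))
eta_M = lambda gN: var.T @ gN     # diagonal of eta_{D_M}(diag(gN))

gN = np.full(N, 1.0 / z)          # diagonal of G_B^{D_N}(z I_N)
gM = np.full(M, 1.0 / z)          # diagonal of G_{H^H H}^{D_M}(z I_M)
for _ in range(500):
    TM = np.diag(1.0 / (z * (1.0 - eta_M(gN))))
    gN = np.diag(np.linalg.inv(
        np.diag(1.0 - eta_N(gM)) - Hbar @ TM @ Hbar.conj().T)).real / z
    TN = np.diag(1.0 / (z * (1.0 - eta_N(gM))))
    gM = np.diag(np.linalg.inv(
        np.diag(1.0 - eta_M(gN)) - Hbar.conj().T @ TN @ Hbar)).real / z

G_det = gN.mean()                 # deterministic equivalent of G_{B_N}(z)

acc = 0.0                         # Monte-Carlo reference
for _ in range(200):
    Ht = np.sqrt(var / 2) * (rng.standard_normal((N, M))
                             + 1j * rng.standard_normal((N, M)))
    lam = np.linalg.eigvalsh((Hbar + Ht) @ (Hbar + Ht).conj().T)
    acc += np.mean(1.0 / (z - lam))
print(G_det, acc / 200)           # the two values nearly coincide
\end{verbatim}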
\subsection{General Channel Model of MIMO MAC}
We consider a frequency-flat fading MIMO MAC with one BS and $K$ UEs. The BS antennas are divided into $L$ distributed ASs. The $l$-th AS is equipped with $N_l$ antennas. The $k$-th UE is equipped with $M_k$ antennas. Furthermore, we assume $\sum\nolimits_{l=1}^L N_l=N$ and $\sum\nolimits_{k=1}^K M_k=M$. Let $\mathbf{x}_k$ denote the $M_k \times 1$ transmitted vector of the $k$-th UE. The covariance matrices of $\mathbf{x}_k$ are given by
\begin{equation}
{\mathbb{E}}\{\mathbf{x}_k\mathbf{x}_{k'}^H\}=\left\{\begin{array}{cc}
\frac{P_k}{M_k}\mathbf{Q}_k, &{\rm if}~ k=k'\\
\mathbf{0}, &~~{\rm otherwise}
\end{array}\right.
\end{equation}
where $P_k$ is the total transmitted power of the $k$-th UE, and $\mathbf{Q}_k$ is an $M_k \times M_k$ positive semidefinite matrix with the constraint ${\rm{tr}}(\mathbf{Q}_k)\leq M_k$. The received signal $\mathbf{y}$ for a single symbol interval can be written as
\begin{equation}
\mathbf{y} = \sum\limits_{k=1}^{K}\mathbf{H}_k\mathbf{x}_k + \mathbf{z}
\end{equation}
where $\mathbf{H}_k$ is the $N \times M_k$ channel matrix between the BS and the $k$-th UE, and $\mathbf{z}$ is a
complex Gaussian noise vector distributed as $\mathcal{CN}(0,\sigma_z^2\mathbf{I}_N)$. The channel matrix $\mathbf{H}_k$ is normalized as
\begin{equation}
\mathbb{E}\{{\rm{tr}}(\mathbf{H}_k\mathbf{H}_k^H)\}=\frac{NM_k}{M}.
\label{eq:constraint_of_channel_matrix}
\end{equation}
Furthermore, $\mathbf{H}_k$ has the following structure
\begin{equation}
\mathbf{H}_k=\overline{\mathbf{H}}_k + \widetilde{\mathbf{H}}_k
\label{eq:channel_matrix_one}
\end{equation}
where $\overline{\mathbf{H}}_k$ and $\widetilde{\mathbf{H}}_k$ are defined by
\begin{IEEEeqnarray}{Rl}
&\overline{\mathbf{H}}_k=\left(\vphantom{\widetilde{\mathbf{H}}} \overline{\mathbf{H}}{}_{1k}^T~\overline{\mathbf{H}}{}_{2k}^T~\cdots~\overline{\mathbf{H}}{}_{Lk}^T\right)^T
\label{eq:channel_matrix_two}
\\
&\widetilde{\mathbf{H}}_k=\left(\widetilde{\mathbf{H}}_{1k}^T~\widetilde{\mathbf{H}}_{2k}^T~\cdots~\widetilde{\mathbf{H}}_{Lk}^T\right)^T.
\label{eq:channel_matrix_three}
\end{IEEEeqnarray}
Each $\overline{\mathbf{H}}_{lk}$ is an $N_l \times M_k$ deterministic matrix, and each $\widetilde{\mathbf{H}}_{lk}$ is a jointly correlated channel matrix defined by \cite{weichselberger2006stochastic,gao:statistical}
\begin{equation}
\widetilde{\mathbf{H}}_{lk}=\mathbf{U}_{lk}(\mathbf{M}_{lk}\odot{\mathbf{W}}_{lk})\mathbf{V}_{lk}^H
\label{eq:channel_matrix_four}
\end{equation}
where $\mathbf{U}_{lk}$ and $\mathbf{V}_{lk}$ are deterministic unitary matrices, $\mathbf{M}_{lk}$ is an $N_{l}\times M_{k}$ deterministic matrix with nonnegative elements, and $\mathbf{W}_{lk}$ is a complex Gaussian random matrix with independent and identically distributed (i.i.d.), zero mean and unit variance entries.
The jointly correlated channel model not only accounts for the correlation at both link ends, but also
characterizes their mutual dependence.
It provides a more adequate model for realistic massive MIMO channels since the validity of the widely used Kronecker model decreases as the number of antennas increases.
Furthermore, the justification of using the jointly correlated channel model for massive MIMO channels has been
provided in \cite{sunbeam, you2015pilot, adhikary2013joint}.
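For concreteness, one realization of a single link \eqref{eq:channel_matrix_one}--\eqref{eq:channel_matrix_four} can be drawn as in the sketch below, where the dimensions, the unitary matrices $\mathbf{U}_{lk}$ and $\mathbf{V}_{lk}$, the coupling matrix $\mathbf{M}_{lk}$ and the LOS component are all arbitrary placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
Nl, Mk = 16, 4                    # placeholder dimensions

def rand_unitary(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.linalg.qr(A)[0]

U = rand_unitary(Nl)              # stands for U_lk
V = rand_unitary(Mk)              # stands for V_lk
M_coup = rng.uniform(0.0, 1.0, (Nl, Mk))      # coupling matrix M_lk
Hbar = np.ones((Nl, Mk)) / np.sqrt(Nl * Mk)   # toy LOS component

def draw_link():
    # H_lk = Hbar_lk + U_lk (M_lk o W_lk) V_lk^H, with W_lk i.i.d.
    # CN(0,1); "o" denotes the Hadamard product.
    W = (rng.standard_normal((Nl, Mk))
         + 1j * rng.standard_normal((Nl, Mk))) / np.sqrt(2)
    return Hbar + U @ (M_coup * W) @ V.conj().T

H = draw_link()
# E{tr(H H^H)} = tr(Hbar Hbar^H) + sum_ij [M_lk]_ij^2; in practice
# this would be rescaled to meet the trace normalization of the
# channel matrix.
print(H.shape)
\end{verbatim}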
In this paper, we assume that the channel matrices of different links are independent, \textit{i.e.}, when $k\neq m$ or $j\neq n$, we have that
\begin{IEEEeqnarray}{Rl}
&{\mathbb E}\left\{\widetilde{\mathbf{H}}_{kj}\mathbf{C}_{jn}\widetilde{\mathbf{H}}_{mn}^H\right\} = \mathbf{0}_{N_k \times N_m} \\
&{\mathbb E}\left\{\widetilde{\mathbf{H}}_{kj}^H{\widetilde{\mathbf{C}}}_{km}\widetilde{\mathbf{H}}_{mn}\right\} = \mathbf{0}_{M_j \times M_n}
\end{IEEEeqnarray}
where $\mathbf{C}_{jn} \in \mathbf{M}_{M_j\times M_n}(\mathbb{C})$ and $\widetilde{\mathbf{C}}_{km} \in \mathbf{M}_{N_k\times N_m}(\mathbb{C})$. Let $\widetilde{\mathbf{W}}_{lk}$ denote $\mathbf{M}_{lk}\odot{\mathbf{W}}_{lk}$. We define $\mathbf{G}_{lk}$ as $\mathbf{G}_{lk}=\mathbf{M}_{lk}\odot\mathbf{M}_{lk}$. The parameterized one-sided correlation matrix ${\tilde{\eta}}_k(\mathbf{C}_k)$ is given by
\begin{IEEEeqnarray}{Rl}
{\tilde{\eta}}_k(\mathbf{C}_k) =& {\mathbb E}\left\{\widetilde{\mathbf{H}}_k\mathbf{C}_k\widetilde{\mathbf{H}}_k^H\right\}
\nonumber \\
=& {\rm{diag}}\left(\mathbf{U}_{1k}\widetilde{\mathbf{\Pi}}_{1k}(\mathbf{C}_k)\mathbf{U}_{1k}^H, \mathbf{U}_{2k}\widetilde{\mathbf{\Pi}}_{2k}(\mathbf{C}_k)\mathbf{U}_{2k}^H, \right.
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~\left.\cdots, \mathbf{U}_{Lk}\widetilde{\mathbf{\Pi}}_{Lk}(\mathbf{C}_k)\mathbf{U}_{Lk}^H\right)
\label{eq:eta_function}
\end{IEEEeqnarray}
where $\mathbf{C}_k \in \mathcal{M}_{M_k}$, and $\widetilde{\mathbf{\Pi}}_{lk}(\mathbf{C}_k)$ is an $N_l \times N_l$ diagonal matrix valued function with the diagonal entries obtained by
\begin{equation}
\left[\widetilde{\mathbf{\Pi}}_{lk}(\mathbf{C}_k)\right]_{ii}=\sum\limits_{j=1}^{M_k}\left[{\mathbf{G}}_{lk}\right]_{ij}\left[\mathbf{V}_{lk}^H\mathbf{C}_k\mathbf{V}_{lk}\right]_{jj}.
\label{eq:eta_function_component}
\end{equation}
Similarly, the other parameterized one-sided correlation matrix ${\eta}_{k}(\widetilde{\mathbf{C}})$ is expressed as
\begin{equation}
{\eta}_{k}(\widetilde{\mathbf{C}}) = {\mathbb E}\left\{\widetilde{\mathbf{H}}_k^H{\widetilde{\mathbf{C}}}\widetilde{\mathbf{H}}_k\right\} =
\sum\limits_{l=1}^{L}\mathbf{V}_{lk}{\mathbf{\Pi}}_{lk}(\langle{\widetilde{\mathbf{C}}}\rangle_l)\mathbf{V}_{lk}^H
\label{eq:eta_function_of_widetilde}
\end{equation}
where $\widetilde{\mathbf{C}} \in \mathcal{M}_{N}$, the notation $\langle{\widetilde{\mathbf{C}}}\rangle_l$
denotes the $N_l \times N_l$ diagonal block of $\widetilde{\mathbf{C}}$, \textit{i.e.}, the submatrix of ${\widetilde {\mathbf{C}}}$ obtained by extracting the entries of the rows and
columns with indices from $\sum\nolimits_{i=1}^{l-1}N_i + 1$ to $\sum\nolimits_{i=1}^{l}N_i$,
and ${\mathbf{\Pi}}_{lk}(\langle{\widetilde{\mathbf{C}}}\rangle_l)$ is an $M_k \times M_k$ diagonal matrix valued function with the diagonal entries computed by
\begin{equation}
\left[{\mathbf{\Pi}}_{lk}(\langle{\widetilde{\mathbf{C}}}\rangle_l)\right]_{ii}=\sum\limits_{j=1}^{N_l}\left[{\mathbf{G}}_{lk}\right]_{ji}\left[\mathbf{U}_{lk}^H\langle{\widetilde{\mathbf{C}}}\rangle_l\mathbf{U}_{lk}\right]_{jj}.
\label{eq:eta_function_of_widetilde_component}
\end{equation}
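The one-sided correlation formulas \eqref{eq:eta_function} and \eqref{eq:eta_function_component} can be checked numerically; the following sketch does so for a single AS ($L=1$) with placeholder dimensions and an arbitrary test matrix $\mathbf{C}_k$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
Nl, Mk, trials = 12, 6, 5000

def rand_unitary(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.linalg.qr(A)[0]

U, V = rand_unitary(Nl), rand_unitary(Mk)
M_coup = rng.uniform(0.0, 1.0, (Nl, Mk))
G = M_coup * M_coup               # G_lk = M_lk o M_lk

C = rng.standard_normal((Mk, Mk))
C = C @ C.T                       # arbitrary Hermitian test matrix

# Closed form: U diag(Pi) U^H with Pi_i = sum_j G_ij [V^H C V]_jj.
pi = G @ np.real(np.diag(V.conj().T @ C @ V))
eta_closed = U @ np.diag(pi) @ U.conj().T

acc = np.zeros((Nl, Nl), complex) # Monte-Carlo E{ Ht C Ht^H }
for _ in range(trials):
    W = (rng.standard_normal((Nl, Mk))
         + 1j * rng.standard_normal((Nl, Mk))) / np.sqrt(2)
    Ht = U @ (M_coup * W) @ V.conj().T
    acc += Ht @ C @ Ht.conj().T
print(np.max(np.abs(acc / trials - eta_closed)))  # ~ trials^(-1/2)
\end{verbatim}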
The channel model described above is suitable for describing cellular systems employing coordinated
multipoint (CoMP) processing \cite{jungnickel2014role}, and also conforms with the framework of cloud radio access
networks (C-RANs) \cite{liu2014joint}.
Moreover, it
embraces many existing channel models as special cases. When $L=1$,
it reduces to the MIMO MAC model in \cite{wen2011on}.
Let $\mathbf{J}_{lk}$ be an $N_l \times M_k$ matrix of all $1$s, $\boldsymbol{\Lambda}_{r,lk}$ be an $N_l \times N_l$
diagonal matrix with positive entries and $\boldsymbol{\Lambda}_{t,lk}$ be an $M_k \times M_k$ diagonal matrix with positive entries.
Set $\mathbf{M}_{lk}=\boldsymbol{\Lambda}_{r,lk}^{1/2}\mathbf{J}_{lk}\boldsymbol{\Lambda}_{t,lk}^{1/2}$.
Then, we obtain $\widetilde{\mathbf{H}}_{lk}=\mathbf{U}_{lk}(\boldsymbol{\Lambda}_{r,lk}^{1/2}\mathbf{J}_{lk}\boldsymbol{\Lambda}_{t,lk}^{1/2}\odot{\mathbf{W}}_{lk})\mathbf{V}_{lk}^H=\mathbf{U}_{lk}\boldsymbol{\Lambda}_{r,lk}^{1/2}(\mathbf{J}_{lk}\odot{\mathbf{W}}_{lk})\boldsymbol{\Lambda}_{t,lk}^{1/2}\mathbf{V}_{lk}^H $ \cite{hom1994topics}.
Thus, each $\widetilde{\mathbf{H}}_{lk}$ reduces to
the Kronecker model, and the considered channel model reduces to that in \cite{zhang2013capacity}. Many channel models are already
included in the channel models of \cite{zhang2013capacity} and \cite{wen2011on}.
See the references for more details.
\subsection{Problem Formulation}
Let $\mathbf{H}$ denote $[\mathbf{H}_1 ~ \mathbf{H}_2 ~ \cdots ~ \mathbf{H}_K]$.
In this paper, we are interested in computing the ergodic input-output mutual information of the channel
$\mathbf{H}$ and deriving the sum-rate capacity achieving input covariance matrices.
In particular, we consider the large-system regime where $L$ and $K$ are fixed but $N_l$ and $M_k$ go to infinity with ratios $\frac{M_k}{N_l} = \beta_{lk}$
such that
\begin{equation}
0 < \min\limits_{l,k}\liminf\limits_{N}\beta_{lk} < \max\limits_{l,k}\limsup\limits_{N}\beta_{lk} < \infty.
\label{eq:antenna_size_limiting_regime}
\end{equation}
We first consider the problem of computing the ergodic input-output mutual information.
For simplicity, we assume $\frac{P_k}{M_k}\mathbf{Q}_k=\mathbf{I}_{M_k}$.
The results for general precoders can then be obtained by replacing
$\mathbf{H}_k$ with $\sqrt{\frac{P_k}{M_k}}\mathbf{H}_k\mathbf{Q}_k^{\frac{1}{2}}$.
Let $\mathcal{I}_{\mathbf{B}_N}(\sigma_z^2)$ denote the ergodic input-output mutual information of the channel ${\mathbf{H}}$ and $\mathbf{B}_N$ denote the channel Gram matrix ${\mathbf{H}}{\mathbf{H}}^H$. Under the assumption that the transmitted vector is a Gaussian random vector having an identity covariance matrix and the receiver at the BS has perfect channel state information (CSI), $\mathcal{I}_{\mathbf{B}_N}(\sigma_z^2)$ is given by \cite{Goldsmith03capacitylimits}
\begin{equation}
{\mathcal{I}_{\mathbf{B}_N}(\sigma_z^2)} = {\mathbb{E}}\left\{\log\det(\mathbf{I}_N+\frac{1}{\sigma_z^2}\mathbf{B}_N)\right\}.
\label{mutual_information}
\end{equation}
Furthermore, we have $\mathcal{I}_{\mathbf{B}_N}(\sigma_z^2) = N\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$.
For the considered channel model, an exact expression of $\mathcal{I}_{\mathbf{B}_N}(\sigma_z^2)$ is intractable.
Instead, our goal is to find an approximation of $\mathcal{I}_{\mathbf{B}_N}(\sigma_z^2)$.
From Section II-A and Section II-B, we know that the Shannon transform $\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$ can
be obtained from the Cauchy transform $G_{\mathbf{B}_N}(z)$ and the free deterministic equivalent method
can be used to derive the approximation of $G_{\mathbf{B}_N}(z)$. Thus, the problem becomes to construct the free deterministic equivalent
$\boldsymbol{\mathcal{B}}_N$ of $\mathbf{B}_N$, and to derive the Cauchy transform $G_{\boldsymbol{\mathcal{B}}_N}(z)$ and the Shannon transform $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$. This problem will be treated in Sections III-A to III-C.
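For later reference, \eqref{mutual_information} can always be estimated by brute-force Monte-Carlo averaging, as in the sketch below (shown for an i.i.d. placeholder channel); the free deterministic equivalent developed in Section III is precisely what allows us to avoid this costly averaging.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N, M, sigma2, trials = 32, 16, 0.1, 500

acc = 0.0
for _ in range(trials):
    H = (rng.standard_normal((N, M))
         + 1j * rng.standard_normal((N, M))) / np.sqrt(2 * M)
    # log det(I_N + B_N / sigma_z^2) for one channel realization.
    acc += np.linalg.slogdet(np.eye(N) + H @ H.conj().T / sigma2)[1]
print(acc / trials)               # ergodic mutual information, nats
\end{verbatim}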
To derive the sum-rate capacity achieving input covariance matrices, we then consider the problem of maximizing the ergodic input-output mutual information $\mathcal{I}_{\mathbf{B}_N}(\sigma_z^2)$.
Since $\mathcal{I}_{\mathbf{B}_N}(\sigma_z^2)=N\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$, the problem can be formulated as
\begin{equation}
(\mathbf{Q}_{1}^{\diamond},\mathbf{Q}_{2}^{\diamond},\cdots,\mathbf{Q}_{K}^{\diamond}) =\mathop{\arg\max}\limits_{(\mathbf{Q}_{1},\cdots,\mathbf{Q}_{K})\in\mathbb{Q}} \mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)
\label{eq:optimization_of_information}
\end{equation}
where the constraint set $\mathbb{Q}$ is defined by
\begin{IEEEeqnarray}{Rl}
\!\!\!\!\!\!\!\!\mathbb{Q}=\{(\mathbf{Q}_{1},\mathbf{Q}_{2},\cdots,\mathbf{Q}_{K}):{\rm{tr}}(\mathbf{Q}_k)\leq M_k,\mathbf{Q}_k\succeq 0,\forall k\}.
\end{IEEEeqnarray}
We assume that the UEs have no CSI, and that each $\mathbf{Q}_k$ is fed back from the BS to the $k$-th UE. Moreover, we assume that all $\mathbf{Q}_k$ are computed from the deterministic matrices $\overline{\mathbf{H}}_{lk},\mathbf{G}_{lk},\mathbf{U}_{lk}$ and $\mathbf{V}_{lk},{1\leq l \leq L, 1\leq k \leq K}$.
Since $\mathcal{I}_{\mathbf{B}_N}(\sigma_z^2)$ is an expected value of the input-output mutual information, the optimization problem in \eqref{eq:optimization_of_information} is a stochastic programming problem. As mentioned in \cite{zhang2013capacity} and \cite{wen2013deterministic}, it is also a convex optimization problem, and thus can be solved by using approaches based on convex optimization with Monte-Carlo methods \cite{boyd2009convex}. More specifically, it can be solved by the Vu-Paulraj algorithm \cite{vu2005capacity}, which was developed from the barrier method \cite{boyd2009convex} with the gradients and Hessians provided by Monte-Carlo methods.
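To give an idea of this approach, the sketch below implements a much-simplified, single-user variant of Monte-Carlo gradient ascent on $\mathbf{Q}$; the channel is an i.i.d. placeholder, and the renormalization step is a crude surrogate for the exact Euclidean projection onto $\mathbb{Q}$ used in rigorous implementations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
N, M, sigma2 = 16, 8, 0.1
n_mc, n_steps, lr = 200, 50, 0.1

def draw_H():
    # i.i.d. placeholder; in the text H follows the jointly
    # correlated Rician model of Section II-C.
    return (rng.standard_normal((N, M))
            + 1j * rng.standard_normal((N, M))) / np.sqrt(2 * M)

Q = np.eye(M, dtype=complex)
for _ in range(n_steps):
    grad = np.zeros((M, M), complex)
    for _ in range(n_mc):
        H = draw_H()
        A = np.linalg.inv(sigma2 * np.eye(N) + H @ Q @ H.conj().T)
        grad += H.conj().T @ A @ H    # gradient of E log det(...)
    Q = Q + lr * grad / n_mc
    # Crude feasibility step (NOT the exact Euclidean projection):
    w, Uq = np.linalg.eigh((Q + Q.conj().T) / 2)
    w = np.clip(w, 0.0, None)
    if w.sum() > M:
        w *= M / w.sum()
    Q = (Uq * w) @ Uq.conj().T
print(np.real(np.trace(Q)))           # trace constraint holds
\end{verbatim}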
However, the computational complexity of the aforementioned method is very high \cite{zhang2013capacity}. Thus, new approaches are needed. Since the approximation $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2)$ of
$\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$ will be obtained, we can use it as the objective function. Thus, the optimization problem can be reformulated as
\begin{equation}
(\mathbf{Q}_{1}^{\star},\mathbf{Q}_{2}^{\star},\cdots,\mathbf{Q}_{K}^{\star})=\mathop{\arg\max}\limits_{(\mathbf{Q}_{1},\cdots,\mathbf{Q}_{K})\in\mathbb{Q}} \mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2).
\end{equation}
The above problem will be solved in Section \ref{sec:Sum_rate_Capacity_Achieving_Input_Covariance_Matrices}.
\section{Main Results}
In this section, we present the free deterministic equivalent of $\mathbf{B}_N$, the deterministic equivalents of the Cauchy transform $G_{\mathbf{B}_N}(z)$ and the Shannon transform $\mathcal{V}_{\mathbf{B}_N}(x)$.
We also present the results for the problem of maximizing the approximate ergodic input-output mutual information $N\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2)$.
Let $\overline{\mathbf{H}}=[\overline{\mathbf{H}}_1 ~ \overline{\mathbf{H}}_2~ \cdots ~\overline{\mathbf{H}}_K]$ and $\widetilde{\mathbf{H}}=[\widetilde{\mathbf{H}}_1~ \widetilde{\mathbf{H}}_2~ \cdots ~ \widetilde{\mathbf{H}}_K]$. We define
$\mathbf{X}$, $\overline{\mathbf{X}}$ and $\widetilde{\mathbf{X}}$ as in \eqref{eq:definition_of_matrix_bold_captial_x}, \eqref{eq:definition_of_matrix_overline_bold_captial_x} and \eqref{eq:definition_of_matrix_widetilde_bold_captial_x}, respectively.
\subsection{Free Deterministic Equivalent of ${\mathbf{B}}_N$ }
In \cite{benaych2009rectangular}, independent rectangular random matrices are found to be asymptotically free
over a subalgebra when they are embedded in a larger square matrix space.
Motivated by this, we embed $\widetilde{\mathbf{H}}_{lk}$ in the larger matrix space $\mathbf{M}_{N \times M}(\mathcal{P})$.
Let $\widehat{\mathbf{H}}_{lk}$ be the $N \times M$ matrix defined by
\begin{equation}
\!\!\widehat{\mathbf{H}}_{lk}= [\mathbf{0}_{N \times {M_{1}}}\cdots \mathbf{0}_{N \times {M_{k-1}}}\check{\mathbf{H}}_{lk}~
\mathbf{0}_{N \times {M_{k+1}} } \cdots \mathbf{0}_{N \times {M_K} }]
\end{equation}
where $\check{\mathbf{H}}_{lk}$ is defined by
\begin{equation}
\check{\mathbf{H}}_{lk}=[\mathbf{0}_{N_1\times M_k}^T\cdots\mathbf{0}_{N_{l-1}\times M_k}^T\widetilde{\mathbf{H}}_{lk}^T ~{\mathbf{0}_{N_{l+1}\times M_k}^T}\cdots\mathbf{0}_{N_L\times M_k}^T]^T.
\end{equation}
Then, $\widetilde{\mathbf{X}}$ can be rewritten as
\begin{equation}
\widetilde{\mathbf{X}} = \sum\limits_{k=1}^{K}\sum\limits_{l=1}^{L}\widehat{\mathbf{X}}_{lk}
\end{equation}
where $\widehat{\mathbf{X}}_{lk}$ is defined by
\begin{equation}
\widehat{\mathbf{X}}_{lk} = \left(
\begin{array}{cc}
\mathbf{0}_{N} & \widehat{\mathbf{H}}_{lk} \\
\widehat{\mathbf{H}}_{lk}^H & \mathbf{0}_{M} \\
\end{array}
\right).
\end{equation}
Recall that $\widetilde{\mathbf{H}}_{lk}={\mathbf{U}}_{lk}\widetilde{\mathbf{W}}_{lk}{\mathbf{V}}_{lk}^H$.
Inspired by \cite{far2008slow},
we rewrite $\widehat{\mathbf{X}}_{lk}$ as
\begin{equation}
\widehat{\mathbf{X}}_{lk}
= {\mathbf{A}}_{lk}\mathbf{Y}_{lk}{\mathbf{A}}_{lk}^H
\end{equation}
where $\mathbf{Y}_{lk}$ and ${\mathbf{A}}_{lk}$ are defined by
\begin{equation}
\mathbf{Y}_{lk}
= \left(
\begin{array}{cc}
\mathbf{0}_{N} & \widehat{\mathbf{W}}_{lk} \\
\widehat{\mathbf{W}}_{lk}^H & \mathbf{0}_{M} \\
\end{array}
\right)
\end{equation}
and
\begin{equation}
{\mathbf{A}}_{lk}
= \left(
\begin{array}{cc}
{\widehat{\mathbf{U}}}_{lk} & \mathbf{0}_{N\times M} \\
\mathbf{0}_{M\times N} & {\widehat{\mathbf{V}}}_{lk} \\
\end{array}
\right)
\end{equation}
where
\begin{IEEEeqnarray}{Rl}
&\widehat{\mathbf{W}}_{lk}=[\mathbf{0}_{N \times {M_{1}}}\!\cdots\mathbf{0}_{N \times {M_{k-1}}} \check{\mathbf{W}}_{lk}~\mathbf{0}_{N \times {M_{k+1}} } \! \cdots \mathbf{0}_{N \times {M_K} }] \IEEEnonumber
\\
\\
&\check{\mathbf{W}}_{lk}=[\mathbf{0}_{N_1\times M_k}^T\!\cdots\mathbf{0}_{N_{l-1}\times M_k}^T\widetilde{\mathbf{W}}_{lk}^T~{\mathbf{0}_{N_{l+1}\times M_k}^T}\!\cdots\mathbf{0}_{N_L\times M_k}^T]^T \IEEEnonumber
\\
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\widehat{\mathbf{U}}_{lk}={\rm{diag}}( {\mathbf{0}}_{N_1},\cdots,{\mathbf{0}}_{N_{l-1}},{\mathbf{U}}_{lk},{\mathbf{0}}_{N_{l+1}},\cdots,{\mathbf{0}}_{N_{L}})
\\
&\!\!\!\!\!\!\widehat{\mathbf{V}}_{lk}={\rm{diag}}(
{\mathbf{0}}_{M_1},\cdots,{\mathbf{0}}_{M_{k-1}},{\mathbf{V}}_{lk},{\mathbf{0}}_{M_{k+1}},\cdots,{\mathbf{0}}_{M_{K}}).
\end{IEEEeqnarray}
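The factorization $\widehat{\mathbf{X}}_{lk}=\mathbf{A}_{lk}\mathbf{Y}_{lk}\mathbf{A}_{lk}^H$ is a purely mechanical block embedding, which the sketch below verifies numerically for one link (real-valued matrices and small placeholder dimensions for brevity).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
N_list, M_list = [4, 3], [2, 5]       # placeholder N_l and M_k
N, M = sum(N_list), sum(M_list)
l, k = 1, 0                           # embed link (l, k), 0-indexed
Nl, Mk = N_list[l], M_list[k]

U = np.linalg.qr(rng.standard_normal((Nl, Nl)))[0]
V = np.linalg.qr(rng.standard_normal((Mk, Mk)))[0]
Wt = rng.standard_normal((Nl, Mk))    # stands for M_lk o W_lk
r0, c0 = sum(N_list[:l]), sum(M_list[:k])

# Direct embedding: place U Wt V^H in block (l, k) of Hhat.
Hhat = np.zeros((N, M))
Hhat[r0:r0 + Nl, c0:c0 + Mk] = U @ Wt @ V.T
X_direct = np.block([[np.zeros((N, N)), Hhat],
                     [Hhat.T, np.zeros((M, M))]])

# Factorized form Xhat_lk = A_lk Y_lk A_lk^H.
What = np.zeros((N, M))
What[r0:r0 + Nl, c0:c0 + Mk] = Wt
Y = np.block([[np.zeros((N, N)), What], [What.T, np.zeros((M, M))]])
Uhat = np.zeros((N, N)); Uhat[r0:r0 + Nl, r0:r0 + Nl] = U
Vhat = np.zeros((M, M)); Vhat[c0:c0 + Mk, c0:c0 + Mk] = V
A = np.block([[Uhat, np.zeros((N, M))], [np.zeros((M, N)), Vhat]])
print(np.allclose(A @ Y @ A.T, X_direct))   # True
\end{verbatim}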
The free deterministic equivalents of $\mathbf{X}$ and $\mathbf{B}_N$ are constructed as follows.
Let $\mathcal{A}$ be a unital algebra, $(\mathcal{A},\phi)$ be a non-commutative probability space and $\boldsymbol{\mathcal{Y}}_{11}, \cdots, \boldsymbol{\mathcal{Y}}_{LK} \in \mathbf{M}_{n}(\mathcal{A})$ be a family of selfadjoint matrices. The entries $[\boldsymbol{\mathcal{Y}}_{lk}]_{ii}$ are centered semicircular elements, and the entries $[\boldsymbol{\mathcal{Y}}_{lk}]_{ij}, i \neq j$, are centered circular elements. The variance of the entry $[\boldsymbol{\mathcal{Y}}_{lk}]_{ij}$ is given by $\phi([\boldsymbol{\mathcal{Y}}_{lk}]_{ij}[\boldsymbol{\mathcal{Y}}_{lk}]_{ij}^*) = \mathbb{E}\{[\mathbf{Y}_{lk}]_{ij}[\mathbf{Y}_{lk}]_{ij}^*\}$. Moreover, the entries on and above the diagonal
of $\boldsymbol{\mathcal{Y}}_{lk}$ are free, and the entries from different $\boldsymbol{\mathcal{Y}}_{lk}$ are also
free. Thus, we also have $\phi([\boldsymbol{\mathcal{Y}}_{lk}]_{ij}[\boldsymbol{\mathcal{Y}}_{pq}]_{rs}) = \mathbb{E}\{[\mathbf{Y}_{lk}]_{ij}[\mathbf{Y}_{pq}]_{rs}\}$, where $lk \neq pq$, $1 \leq l, p \leq L$, $1 \leq k, q \leq K$ and $1 \leq i, j, r, s \leq n $.
\vspace{0.1em}
Let $\widetilde{\boldsymbol{\mathcal{X}}}$ denote $\sum_{k=1}^{K} \sum_{l=1}^{L} \widehat{\boldsymbol{\mathcal{X}}}_{lk}$, where $\widehat{\boldsymbol{\mathcal{X}}}_{lk}=\mathbf{A}_{lk}\boldsymbol{\mathcal{Y}}_{lk}\mathbf{A}_{lk}^H$.
Based on the definitions of $\boldsymbol{\mathcal{Y}}_{lk}$, we have that both the $N \times N$ upper-left block matrix and the $M \times M$ lower-right block matrix of $\widetilde{\boldsymbol{\mathcal{X}}}$ are equal to zero matrices.
Thus, $\widetilde{\boldsymbol{\mathcal{X}}}$ can be rewritten as \eqref{eq:definition_of_matrix_widetilde_cal_captial_x}, where $\widetilde{\boldsymbol{\mathcal{H}}}$
denotes the $N \times M$ upper-right block matrix of $\widetilde{\boldsymbol{\mathcal{X}}}$.
For fixed $n$, we define the map $E:\mathbf{M}_{n}(\mathcal{A}) \rightarrow \mathcal{M}_n$ by $[E\{\boldsymbol{\mathcal{Y}}_{lk}\}]_{ij} = \phi([\boldsymbol{\mathcal{Y}}_{lk}]_{ij})$.
Then, we have that
\begin{equation}
E\{\widetilde{\boldsymbol{\mathcal{X}}}\mathbf{C}_n\widetilde{\boldsymbol{\mathcal{X}}}\}
=\mathbb{E}\{\widetilde{\boldsymbol{\mathbf{X}}}\mathbf{C}_n\widetilde{\boldsymbol{\mathbf{X}}}\}
\nonumber
\end{equation}
where $\mathbf{C}_n \in \mathcal{M}_n$. Let $\boldsymbol{\mathcal{H}}$ denote $\overline{\mathbf{H}}+\widetilde{\boldsymbol{\mathcal{H}}}$ and $\boldsymbol{\mathcal{B}}_N$ denote $\boldsymbol{\mathcal{H}}\boldsymbol{\mathcal{H}}^H$.
Finally, we define $\boldsymbol{\mathcal{X}}$ as in \eqref{eq:definition_of_matrix_cal_captial_x}.
The matrices $\boldsymbol{\mathcal{X}}$ and $\boldsymbol{\mathcal{B}}_N$ are the free deterministic equivalents of $\mathbf{X}$ and $\mathbf{B}_N$ under the following assumptions.
\begin{assumption}
\label{assump:temp1}
The entries $[M\mathbf{G}_{lk}]_{ij}$ are uniformly bounded.
\end{assumption}
Let $\psi_{lk}[n]:\mathcal{D}_n \rightarrow \mathcal{D}_n$ be defined by
$\psi_{lk}[n](\mathbf{\Delta}_n)=\mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_{lk}\mathbf{\Delta}_n\mathbf{Y}_{lk}\}$,
where $\mathbf{\Delta}_n \in \mathcal{D}_n$. We define $i_n: \mathcal{D}_n \rightarrow L^{\infty}[0, 1]$ by $i_n({\rm{diag}}(d_1,d_2,\cdots,d_n))=\sum_{j=1}^nd_j \chi_{[\frac{j-1}{n},\frac{j}{n}]}$, where
$\chi_{U}$ is the characteristic function of the set $U$.
\begin{assumption}
There exist maps $\psi_{lk}: L^{\infty}[0, 1] \rightarrow L^{\infty}[0, 1]$ such that whenever $i_n(\mathbf{\Delta}_n) \rightarrow d \in L^{\infty}[0, 1]$ in norm, then also $\lim_{n\rightarrow \infty}i_n(\psi_{lk}[n](\mathbf{\Delta}_n)) = \psi_{lk}(d)$.
\label{assump:variance_operator_valued_limit_1}
\end{assumption}
\begin{assumption}
\label{assump:temp2}
The spectral norms of $\overline{\mathbf{H}}_k\overline{\mathbf{H}}{}_k^H$ are uniformly bounded in $N$.
\end{assumption}
To rigorously show the relation between
$\mathcal{G}^{\mathcal{D}_n}_{\mathbf{X}}(z\mathbf{I}_n)$ and $\mathcal{G}^{\mathcal{D}_n}_{\boldsymbol{\mathcal{X}}}(z\mathbf{I}_n)$, we present the following theorem.
\begin{theorem}
\label{th:coro_of_determinstic_matrices_gaussian_matrices_asymptotic_operator_valued_free}
Let $\mathcal{E}_n$ denote the algebra of $n \times n$ diagonal matrices with uniformly bounded entries and $\mathcal{N}_n$ denote the algebra generated by $\mathbf{A}_{11}, \cdots, \mathbf{A}_{LK}$, $\overline{\mathbf{X}}$ and $\mathcal{E}_n$. Let $m$ be a positive integer and $\mathbf{C}_0, \mathbf{C}_1, \cdots, \mathbf{C}_m \in \mathcal{N}_n$ be a family of $n \times n$ deterministic matrices. Assume that Assumptions \ref{assump:temp1}
and \ref{assump:temp2} hold.
Then,
\begin{IEEEeqnarray}{Rl}
&\lim\limits_{n \rightarrow \infty} i_n (\mathbb{E}_{\mathcal{D}_n} \{\mathbf{C}_{0}\mathbf{Y}\!_{p_1q_1}\!\mathbf{C}_{1}\mathbf{Y}\!_{p_2q_2}\!\mathbf{C}_{2} \cdots\mathbf{Y}\!_{p_mq_m}\!\mathbf{C}_{m}\}
\IEEEnonumber \\
& ~~~~~-E_{\mathcal{D}_n}\{ \mathbf{C}_{0}\boldsymbol{\mathcal{Y}}\!_{p_1q_1}\!\mathbf{C}_{1} \boldsymbol{\mathcal{Y}}\!_{p_2q_2}\!\mathbf{C}_{2}\cdots\boldsymbol{\mathcal{Y}}\!_{p_mq_m}\! \mathbf{C}_{m}\}) = 0_{L^{\infty}[0, 1]}
\IEEEnonumber \\
\end{IEEEeqnarray}
where $1 \leq p_1,\cdots, p_m \leq L$, $1 \leq q_1,\cdots, q_m \leq K$ and the definition of $E_{\mathcal{D}_n}\{\cdot\}$ is given in \eqref{eq:definition_of_E_sub_mathcalD_n}. Furthermore, if Assumption \ref{assump:variance_operator_valued_limit_1} also holds, then $\mathbf{Y}_{11}, \cdots, \mathbf{Y}_{LK}$, $\mathcal{N}_n$ are asymptotically free over $L^{\infty}[0, 1]$.
\end{theorem}
\begin{IEEEproof}
From \eqref{eq:antenna_size_limiting_regime} and Assumption \ref{assump:temp1}, we obtain that the entries $[n\mathbf{G}_{lk}]_{ij}$ are uniformly bounded. According to Assumption \ref{assump:temp2}, the spectral norm of $\overline{\mathbf{X}}$ is uniformly bounded in $n$.
Furthermore, the matrices $\mathbf{A}_{lk}$ have unit spectral norm.
Thus, this theorem can be seen as a corollary of Theorem \ref{th:determinstic_matrices_gaussian_matrices_asymptotic_operator_valued_free} in Appendix \ref{New_asymptotic_freeness_results}.
\end{IEEEproof}
Theorem \ref{th:coro_of_determinstic_matrices_gaussian_matrices_asymptotic_operator_valued_free} implies that $\boldsymbol{\mathcal{X}} $ and $\mathbf{X} $ have the same
asymptotic $L^{\infty}[0, 1]$-valued distribution.
This further indicates that $\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_n}(z\mathbf{I}_{n})$ and $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}_n}(z\mathbf{I}_{n})$ are the same in the limit, \textit{i.e.},
\begin{equation}
\lim\limits_{n \rightarrow \infty} i_n \left(\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_n}(z\mathbf{I}_n) - \mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}_n}(z\mathbf{I}_n)\right) = 0_{L^{\infty}[0, 1]}.
\label{eq:asymptotic_equivalent_of_diagonal_cauchy_transform}
\end{equation}
Following a derivation similar to that of \eqref{eq:realtion_of_Cauchy_transform_of X_and_X2}, we have that
\begin{equation}
\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}_n}(z\mathbf{I}_n) = z\mathcal{G}_{\boldsymbol{\mathcal{X}}^{2}}^{\mathcal{D}_n}(z^2\mathbf{I}_n)
\label{eq:realtion_of_Cauchy_transform_of X_and_X2_over_diagonal_matrix}
\end{equation}
where $z,z^2 \in \mathbb{C}^+$.
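As a side check, the relation \eqref{eq:realtion_of_Cauchy_transform_of X_and_X2_over_diagonal_matrix} holds exactly at finite dimensions for the scalar Cauchy transform, since the eigenvalues of such Hermitized block matrices are symmetric about zero. The following minimal Python sketch verifies this numerically; the sizes and the random stand-in for the off-diagonal block are illustrative assumptions, not the system matrices of this paper:
\begin{verbatim}
# Check G_X(z) = z * G_{X^2}(z^2) for the Hermitized block matrix
# X = [[0, H], [H^H, 0]].  H is a random stand-in; sizes are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 5
H = (rng.standard_normal((N, M))
     + 1j * rng.standard_normal((N, M))) / np.sqrt(2 * M)
X = np.block([[np.zeros((N, N)), H],
              [H.conj().T, np.zeros((M, M))]])

def cauchy(A, z):
    # scalar Cauchy transform (1/n) tr((zI - A)^{-1})
    n = A.shape[0]
    return np.trace(np.linalg.inv(z * np.eye(n) - A)) / n

z = 0.3 + 1.0j                                     # a point in C^+
print(abs(cauchy(X, z) - z * cauchy(X @ X, z ** 2)))   # ~1e-16
\end{verbatim}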
According to \eqref{eq:realtion_of_Cauchy_transform_of X_and_X2}, \eqref{eq:asymptotic_equivalent_of_diagonal_cauchy_transform} and \eqref{eq:realtion_of_Cauchy_transform_of X_and_X2_over_diagonal_matrix}, we have
\begin{equation}
\lim\limits_{n \rightarrow \infty}i_n \left(\mathcal{G}_{\mathbf{X}^2}^{\mathcal{D}_n}(z\mathbf{I}_n) - \mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}_n}(z\mathbf{I}_n)\right) = 0_{L^{\infty}[0, 1]}.
\label{eq:realtion_of_Cauchy_transform_of X2_and_CalX2_over_diagonal_matrix}
\end{equation}
Furthermore, from \eqref{eq:Cauchy_transform_of_Hsquare_in_detail} and its counterpart for $\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}_n}(z\mathbf{I}_{n})$ we obtain
\begin{equation}
\lim\limits_{N \rightarrow \infty}i_N \left(\mathcal{G}_{\mathbf{B}_N}^{\mathcal{D}_N}(z\mathbf{I}_N) - \mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N}(z\mathbf{I}_N)\right) = 0_{L^{\infty}[0, 1]}.
\label{eq:deterministric_equivalent_of_cauchy_transform}
\end{equation}
Since
\begin{equation}
G_{\boldsymbol{\mathcal{B}}_N}(z) = \frac{1}{N}{\rm{tr}}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N}(z\mathbf{I}_N)) \nonumber
\end{equation}
and
\begin{equation}
G_{\boldsymbol{\mathbf{B}}_N}(z) = \frac{1}{N}{\rm{tr}}(\mathcal{G}_{\boldsymbol{\mathbf{B}}_N}^{\mathcal{D}_N}(z\mathbf{I}_N)) \nonumber
\end{equation}
we have that $G_{\boldsymbol{\mathcal{B}}_N}(z)$ is the deterministic equivalent of $G_{\boldsymbol{\mathbf{B}}_N}(z)$.
\subsection{Deterministic Equivalent of $G_{\mathbf{B}_N}(z)$}
By using operator-valued free probability techniques, the calculation of $G_{\boldsymbol{\mathcal{B}}_N}(z)$ is much easier than that of $G_{\boldsymbol{\mathbf{B}}_N}(z)$.
Let $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N) = E\{(z\mathbf{I}_{N} - \boldsymbol{\mathcal{B}}_N)^{-1}\}$.
Since $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{D}_N}(z\mathbf{I}_{N}) = E_{\mathcal{D}_N}\{\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_{N})\}$, where $E_{\mathcal{D}_N}\{\cdot\}$ is defined according to \eqref{eq:definition_of_E_sub_mathcalD_n}, we can obtain $G_{\boldsymbol{\mathcal{B}}_N}(z)$ from
$\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_{N})$. We denote by $\mathcal{D}$ the subalgebra of block-diagonal matrices of the form
\begin{equation}
\mathcal{D}=\left(
\begin{array}{ccccc}
\mathcal{M}_N & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \mathcal{M}_{M_1} & \ldots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \ldots & \mathcal{M}_{M_K} \\
\end{array}
\right).
\end{equation}
We define the conditional expectation $E_{\mathcal{D}}: \mathbf{M}_n(\mathcal{A}) \rightarrow \mathcal{D}$ by
\begin{eqnarray}
E_{\mathcal{D}}\left\{\left(
\begin{array}{ccccc}
\boldsymbol{\mathcal{C}}_{11} & \boldsymbol{\mathcal{C}}_{12} & \cdots & \boldsymbol{\mathcal{C}}_{1(K+1)} \\
\boldsymbol{\mathcal{C}}_{21} & \boldsymbol{\mathcal{C}}_{22} & \ldots & \boldsymbol{\mathcal{C}}_{2(K+1)} \\
\vdots & \vdots & \ddots & \vdots \\
\boldsymbol{\mathcal{C}}_{(K+1)1} & \boldsymbol{\mathcal{C}}_{(K+1)2} & \ldots & \boldsymbol{\mathcal{C}}_{(K+1)(K+1)} \\
\end{array}
\right)\right\}& & \nonumber \\
=\left(
\begin{array}{ccccc}
E\{\boldsymbol{\mathcal{C}}_{11}\} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & E\{\boldsymbol{\mathcal{C}}_{22}\} & \ldots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \ldots & E\{\boldsymbol{\mathcal{C}}_{(K+1)(K+1)}\} \\
\end{array}
\right)& &
\end{eqnarray}
where $\boldsymbol{\mathcal{C}}_{11} \in \mathbf{M}_N(\mathcal{A})$, and $\boldsymbol{\mathcal{C}}_{kk} \in \mathbf{M}_{M_{k-1}}(\mathcal{A})$ for $k=2,3,\cdots,K+1$.
Then, we can write $\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z\mathbf{I}_n)$ for $z \in \mathbb{C}^+$ as
\begin{eqnarray}
\!\!\!\!\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z\mathbf{I}_n)
\!\!\!\!&=&\!\!\!\!E_{\mathcal{D}}\left\{(z\mathbf{I}_n - \boldsymbol{\mathcal{X}}^2)^{-1}\right\}
\nonumber \\
\!\!\!\!&=&\!\!\!\!\left(
\begin{array}{ccccc}
\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N) & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \mathcal{G}_1(z) & \ldots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \ldots & \mathcal{G}_K(z) \\
\end{array}
\right)
\label{eq:Cauchy_transform_of_Hfreesquare_in_detail}
\end{eqnarray}
where $\mathcal{G}_k(z)$ denotes $(E\{(z\mathbf{I}_{M} - \boldsymbol{\mathcal{H}}^H\boldsymbol{\mathcal{H}})^{-1}\})_k$ for $k=1, \cdots, K$, and $(\mathbf{A})_k$ denotes the submatrix of $\mathbf{A}$ obtained by extracting the entries of the rows and columns with indices from $\sum\nolimits_{i=1}^{k-1}M_i+1$ to $\sum\nolimits_{i=1}^{k}M_i$. Thus, we can obtain
$\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)$ from $\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z\mathbf{I}_n)$, which is further related to $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)$.
\begin{lemma}
$\widetilde{\boldsymbol{\mathcal{X}}}$ is semicircular over $\mathcal{D}$. Furthermore, $\widetilde{\boldsymbol{\mathcal{X}}}$ and $\mathcal{M}_n$ are free over $\mathcal{D}$.
\label{lm:semicircular_lemma}
\end{lemma}
\begin{IEEEproof}
The proof is given in Appendix \ref{sec:proof_of_semicircular_lemma}.
\end{IEEEproof}
Since $\overline{\boldsymbol{\mathbf{X}}} \in \mathcal{M}_n$, we have that $\widetilde{\boldsymbol{\mathcal{X}}}$ and $\overline{\boldsymbol{\mathbf{X}}}$ are free over $\mathcal{D}$. Recall that $\boldsymbol{\mathcal{X}} = \overline{\mathbf{X}}+ \widetilde{\boldsymbol{\mathcal{X}}}$. Then, $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)$ and $\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z\mathbf{I}_n)$ can be derived. Moreover, we obtain $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)$ as shown in the following theorem.
\begin{theorem}
\label{th:cauchy_transform}
The $\mathcal{M}_N$-valued Cauchy transform $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)$ for $z \in \mathbb{C}^+$ satisfies
\begin{IEEEeqnarray}{Rl}
&{\Tilde{{\boldsymbol{\Phi}}}}(z) = \mathbf{I}_N - \sum\limits_{k=1}^{K}{\tilde{\eta}}_{k} (\mathcal{G}_k(z))
\label{eq:cauchy_transform_tilde_phi}\\
&{{\boldsymbol{\Phi}}}(z) = {\rm{diag}}\left(\mathbf{I}_{M_1} - \eta_{1} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)),\right.
\IEEEnonumber \\
&~~~~~~~~~~~~~~~~~~~~\mathbf{I}_{M_2} - \eta_{2} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)),\cdots, \IEEEnonumber \\
& \left.~~~~~~~~~~~~~~~~~~~~~~~\mathbf{I}_{M_K} - \eta_{K} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N))\right)
\label{eq:cauchy_transform_phi}\\
&\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)
= \left(z{\Tilde{{\boldsymbol{\Phi}}}}(z)
- \overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(z)^{-1}\overline{\mathbf{H}}{}^H\right)^{-1}
\label{eq:cauchy_transform_of_mathcal_BN} \\
&\mathcal{G}_k(z)
= \left(\left(z{{\boldsymbol{\Phi}}}(z)
- \overline{\mathbf{H}}{}^H{\Tilde{{\boldsymbol{\Phi}}}}(z)^{-1} \overline{\mathbf{H}}\right)^{-1}\right)_k.
\label{eq:cauchy_transform_of_mathcal_HkHkH}
\end{IEEEeqnarray}
Furthermore, there exists a unique solution of $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N) \in \mathbb{H}_{-}(\mathcal{M}_N) := \{b \in \mathcal{M}_N: \Im(b) \prec 0\}$ for each $z \in \mathbb{C}^+$, and the solution is obtained by iterating \eqref{eq:cauchy_transform_tilde_phi}--\eqref{eq:cauchy_transform_of_mathcal_HkHkH}.
The Cauchy transform $G_{\boldsymbol{\mathcal{B}}_N}(z)$ is given by
\begin{equation}
G_{\boldsymbol{\mathcal{B}}_N}(z) = \frac{1}{N}{\rm{tr}}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)).
\label{eq:cauchy_transform_of_BN}
\end{equation}
\end{theorem}
\begin{IEEEproof}
The proof is given in Appendix {\ref{sec:proof_of_cauchy_theorem}}.
\end{IEEEproof}
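For concreteness, the fixed-point iteration of Theorem \ref{th:cauchy_transform} can be transcribed directly into code. The following Python sketch is illustrative only: the maps $\eta_k$ and $\tilde{\eta}_k$ are passed in as user-supplied callables, and a fixed number of plain Picard iterations replaces a proper convergence test; in practice one would stop when successive iterates differ by less than a prescribed tolerance.
\begin{verbatim}
import numpy as np

def blkdiag(blocks):
    # block-diagonal matrix from a list of square blocks
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n), dtype=complex)
    i = 0
    for b in blocks:
        m = b.shape[0]
        out[i:i + m, i:i + m] = b
        i += m
    return out

def iterate_cauchy(z, Hbar, Ms, eta, eta_t, n_iter=500):
    # eta(k, G_BN) -> M_k x M_k matrix   (the map eta_k)
    # eta_t(k, G_k) -> N x N matrix      (the map tilde-eta_k)
    N, K = Hbar.shape[0], len(Ms)
    offs = np.cumsum([0] + list(Ms))
    G_BN = np.eye(N, dtype=complex)            # initialization
    Gk = [np.eye(Mk, dtype=complex) for Mk in Ms]
    for _ in range(n_iter):
        Phi_t = np.eye(N) - sum(eta_t(k, Gk[k]) for k in range(K))
        Phi = blkdiag([np.eye(Ms[k]) - eta(k, G_BN)
                       for k in range(K)])
        G_BN = np.linalg.inv(z * Phi_t
                 - Hbar @ np.linalg.inv(Phi) @ Hbar.conj().T)
        Gfull = np.linalg.inv(z * Phi
                 - Hbar.conj().T @ np.linalg.inv(Phi_t) @ Hbar)
        Gk = [Gfull[offs[k]:offs[k + 1], offs[k]:offs[k + 1]]
              for k in range(K)]
    return G_BN, Gk
\end{verbatim}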
In massive MIMO systems, $N_l$ can become very large. In this case, $\mathbf{U}_{lk}$ can be assumed to be independent of $k$, \textit{i.e.}, $\mathbf{U}_{{l1}}=\mathbf{U}_{{l2}}=\cdots=\mathbf{U}_{{lK}}$, under some antenna configurations \cite{noh2014pilot,zhou2006experimental,you2015pilot}. When uniform linear arrays (ULAs) are employed in all ASs and $N_l$ grows very large, each $\mathbf{U}_{lk}$ is closely approximated by a discrete Fourier transform (DFT) matrix \cite{noh2014pilot,zhou2006experimental}. In \cite{you2015pilot}, a more general BS antenna configuration is considered, and it is shown that the eigenvector matrices of the channel covariance matrices at the BS for different users tend to be the same as the number of antennas increases.
Under the assumption $\mathbf{U}_{l1}=\mathbf{U}_{l2}=\cdots=\mathbf{U}_{lK}$, we can obtain simpler results. For brevity, we denote all $\mathbf{U}_{lk}$ by $\mathbf{U}_{l}$. Consider the Rayleigh channel case, \textit{i.e.}, $\overline{\mathbf{H}}=\mathbf{0}$. Let ${\widetilde{\boldsymbol{\Lambda}}}_l(z)$ denote $(\mathbf{I}_{N_l} - \sum_{k=1}^{K}\widetilde{\mathbf{\Pi}}_{lk}(\mathcal{G}_k(z)))^{-1}$. Then, \eqref{eq:cauchy_transform_tilde_phi} becomes
\begin{IEEEeqnarray}{Rl}
{\Tilde{{\boldsymbol{\Phi}}}}(z)
=& {\rm{diag}}\left(\mathbf{U}_{1}({\widetilde{\boldsymbol{\Lambda}}}_1(z))^{-1}\mathbf{U}_{1}^H, \mathbf{U}_{2}({\widetilde{\boldsymbol{\Lambda}}}_2(z))^{-1}\mathbf{U}_{2}^H, \right.
\nonumber \\
&~~~~~~~~~~~~~~~~~~~\left. \cdots,\mathbf{U}_{L}({\widetilde{\boldsymbol{\Lambda}}}_L(z))^{-1}\mathbf{U}_{L}^H\right).
\end{IEEEeqnarray}
Furthermore, \eqref{eq:cauchy_transform_of_mathcal_BN} and \eqref{eq:cauchy_transform_of_mathcal_HkHkH} become
\begin{IEEEeqnarray}{Rl}
&\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N) =z^{-1}{\rm{diag}}\left(\mathbf{U}_{1}{\widetilde{\boldsymbol{\Lambda}}}_1(z)\mathbf{U}_{1}^H, \mathbf{U}_{2}{\widetilde{\boldsymbol{\Lambda}}}_2(z)\mathbf{U}_{2}^H, \right.
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.
\cdots, \mathbf{U}_{L}{\widetilde{\boldsymbol{\Lambda}}}_L(z)\mathbf{U}_{L}^H\right)
\\
&\mathcal{G}_k(z) = z^{-1}\left(\mathbf{I}_{M_k} - \eta_{k} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N))\right)^{-1}.
\end{IEEEeqnarray}
From \eqref{eq:eta_function_of_widetilde} and \eqref{eq:eta_function_of_widetilde_component}, we have that
\begin{equation}
\eta_{k}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N))
= \sum\limits_{l=1}^{L}\mathbf{V}_{lk}{\mathbf{\Pi}}_{lk} (\mathbf{U}_{l}{\widetilde{\boldsymbol{\Lambda}}}_l(z)\mathbf{U}_{l}^H)\mathbf{V}_{lk}^H
\end{equation}
where ${\mathbf{\Pi}}_{lk}(\mathbf{U}_{l}{\widetilde{\boldsymbol{\Lambda}}}_l(z)\mathbf{U}_{l}^H)$ is an $M_k \times M_k$ diagonal matrix valued function with the diagonal entries computed by
\begin{eqnarray}
\left[{\mathbf{\Pi}}_{lk}(\mathbf{U}_{l}{\widetilde{\boldsymbol{\Lambda}}}_l(z)\mathbf{U}_{l}^H)\right]_{ii}
\!\!\!\!&=& \!\!\!\!\sum\limits_{j=1}^{N_l}\left[{\mathbf{G}}_{lk}\right]_{ji} \!\left[\mathbf{U}_{{l}}^H\mathbf{U}_{l}{\widetilde{\boldsymbol{\Lambda}}}_l(z)\mathbf{U}_{l}^H\mathbf{U}_{{l}}\right]_{jj} \nonumber \\
\!\!\!\!&=& \!\!\!\! \sum\limits_{j=1}^{N_l}\left[{\mathbf{G}}_{lk}\right]_{ji}\!\left[{\widetilde{\boldsymbol{\Lambda}}}_l(z)\right]_{jj}.
\end{eqnarray}
Thus, $\mathbf{U}_{1}, \mathbf{U}_{2}, \cdots, \mathbf{U}_{L}$ can be omitted in the iteration process, and hence \eqref{eq:cauchy_transform_tilde_phi}--\eqref{eq:cauchy_transform_of_BN} reduce to
\begin{IEEEeqnarray}{Rl}
&\left[{\widetilde{\boldsymbol{\Lambda}}}_l(z)\right]_{ii} = \left(1 - \sum\limits_{k=1}^{K}\left[\widetilde{\mathbf{\Pi}}_{lk}(\mathcal{G}_k(z))\right]_{ii}\right)^{-1}
\label{eq:reduce_formula_lambda_tilde_diagonal} \\
&\widetilde{\boldsymbol{\Lambda}}(z)={\rm{diag}}\left({\widetilde{\boldsymbol{\Lambda}}}_1(z), {\widetilde{\boldsymbol{\Lambda}}}_2(z), \cdots, {\widetilde{\boldsymbol{\Lambda}}}_L(z)\right) \\
&\mathcal{G}_k(z) = \left(z\mathbf{I}_{M_k} - \eta_{k}({\widetilde{\boldsymbol{\Lambda}}}(z))\right)^{-1}
\label{eq:tmp1}
\\
&{G}_{\boldsymbol{\mathcal{B}}_N}(z) = z^{-1}\frac{1}{N}\sum\limits_{l=1}^{L}\sum\limits_{i=1}^{N_l}
\left[{\widetilde{\boldsymbol{\Lambda}}}_l(z)\right]_{ii}
\label{eq:cauchy_transform_of_BN_large_mimo_1}
\end{IEEEeqnarray}
where the diagonal entries of ${\mathbf{\Pi}}_{lk}(\langle{{\widetilde{\boldsymbol{\Lambda}}}(z)}\rangle_l)$, which
is needed in the computation of $\eta_{k} ({\widetilde{\boldsymbol{\Lambda}}}(z))$, are now redefined by
\begin{equation}
\left[{\mathbf{\Pi}}_{lk}(\langle{{\widetilde{\boldsymbol{\Lambda}}}(z)}\rangle_l)\right]_{ii} =\sum\limits_{j=1}^{N_l}\left[{\mathbf{G}}_{lk}\right]_{ji}\left[{\widetilde{\boldsymbol{\Lambda}}}_l(z)\right]_{jj}.
\end{equation}
Furthermore, the matrix inversion in \eqref{eq:cauchy_transform_of_mathcal_BN} has been avoided.
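The reduced iteration \eqref{eq:reduce_formula_lambda_tilde_diagonal}--\eqref{eq:cauchy_transform_of_BN_large_mimo_1} only manipulates diagonals and small $M_k \times M_k$ inverses. A Python sketch under the stated assumptions (Rayleigh fading, common $\mathbf{U}_l$ per AS; only the coupling matrices $\mathbf{G}_{lk}$ and the $\mathbf{V}_{lk}$ enter) reads as follows; the fixed iteration count is again an assumed simplification:
\begin{verbatim}
import numpy as np

def reduced_iteration(z, G, V, n_iter=500):
    # G[l][k]: N_l x M_k coupling matrix G_lk
    # V[l][k]: M_k x M_k eigenvector matrix V_lk
    L, K = len(G), len(G[0])
    Ms = [G[0][k].shape[1] for k in range(K)]
    Gk = [np.eye(Mk, dtype=complex) for Mk in Ms]
    for _ in range(n_iter):
        # [tilde-Lambda_l]_ii = (1 - sum_k [tilde-Pi_lk(G_k)]_ii)^{-1}
        lam_t = [1.0 / (1.0 - sum(G[l][k] @ np.diagonal(Gk[k])
                                  for k in range(K)))
                 for l in range(L)]
        # G_k(z) = (z I - eta_k(tilde-Lambda))^{-1}
        for k in range(K):
            eta_k = sum(V[l][k]
                        @ np.diag(G[l][k].T @ lam_t[l])
                        @ V[l][k].conj().T for l in range(L))
            Gk[k] = np.linalg.inv(z * np.eye(Ms[k]) - eta_k)
    N = sum(lt.size for lt in lam_t)
    G_BN = sum(lt.sum() for lt in lam_t) / (z * N)
    return G_BN, Gk
\end{verbatim}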
When $L=1$, we have that
\begin{IEEEeqnarray}{Rl}
&\eta_{k}({\widetilde{\boldsymbol{\Lambda}}}(z)) = \mathbf{V}_{1k}{\mathbf{\Pi}}_{1k}({\widetilde{\boldsymbol{\Lambda}}}_1(z))\mathbf{V}_{1k}^H
\\
&\mathcal{G}_k(z) = \mathbf{V}_{1k}\left(z\mathbf{I}_{M_k} - {\mathbf{\Pi}}_{1k}({\widetilde{\boldsymbol{\Lambda}}}_1(z))\right)^{-1}\mathbf{V}_{1k}^H.
\end{IEEEeqnarray}
Let ${\boldsymbol{\Lambda}}_k(z)$ denote $(z\mathbf{I}_{M_k} - {\mathbf{\Pi}}_{1k}({\widetilde{\boldsymbol{\Lambda}}}_1(z)))^{-1}$. From \eqref{eq:eta_function_component}, we then obtain
\begin{eqnarray}
\left[\widetilde{\mathbf{\Pi}}_{1k}(\mathcal{G}_k(z))\right]_{ii} \!\!\!\!&=&\!\!\!\!\sum\limits_{j=1}^{M_k}\left[{\mathbf{G}}_{1k}\right]_{ij} \left[\mathbf{V}_{1k}^H\mathbf{V}_{1k}{\boldsymbol{\Lambda}}_k(z)\mathbf{V}_{1k}^H\mathbf{V}_{1k}\right]_{jj}
\nonumber \\
\!\!\!\!&=&\!\!\!\!\sum\limits_{j=1}^{M_k}\left[{\mathbf{G}}_{1k}\right]_{ij}\left[{\boldsymbol{\Lambda}}_k(z)\right]_{jj}.
\end{eqnarray}
Thus, we can further omit $\mathbf{V}_{11},\mathbf{V}_{12},\cdots,\mathbf{V}_{1K}$ in the iteration process. We redefine
$\widetilde{\mathbf{\Pi}}_{k}({\boldsymbol{\Lambda}}_k(z))$ by
\begin{equation}
\left[\widetilde{\mathbf{\Pi}}_{k}({\boldsymbol{\Lambda}}_k(z))\right]_{ii} = \sum\limits_{j=1}^{M_k}\left[{\mathbf{G}}_{1k}\right]_{ij}\left[{\boldsymbol{\Lambda}}_k(z)\right]_{jj}.
\end{equation}
Equations \eqref{eq:cauchy_transform_tilde_phi}--\eqref{eq:cauchy_transform_of_BN} can be further reduced to
\begin{IEEEeqnarray}{Rl}
&\left[{\widetilde{\boldsymbol{\Lambda}}}_1(z)\right]_{ii} = \left(1 - \sum\limits_{k=1}^{K}\left[\widetilde{\mathbf{\Pi}}_{k} ({\boldsymbol{\Lambda}}_k(z))\right]_{ii}\right)^{-1} \\
&\left[{\boldsymbol{\Lambda}}_k(z)\right]_{ii} = \left(z - \left[{\mathbf{\Pi}}_{k}({\widetilde{\boldsymbol{\Lambda}}}_1(z))\right]_{ii}\right)^{-1} \\
&G_{\boldsymbol{\mathcal{B}}_N}(z) = z^{-1}\frac{1}{N}\sum\limits_{i=1}^{N}
\left[{\widetilde{\boldsymbol{\Lambda}}}_1(z)\right]_{ii}.
\end{IEEEeqnarray}
In this case, all matrix inversions have been avoided. Since $\mathbf{U}_1$ and $\mathbf{V}_{11}, \mathbf{V}_{12}, \cdots, \mathbf{V}_{1K}$ have been omitted in the iteration process, we have that the distribution of $\boldsymbol{\mathcal{B}}_N$ depends only on $\{{\mathbf{G}}_{1k}\}$.
Consider now the Rician channel case, \textit{i.e.}, $\overline{\mathbf{H}}\neq \mathbf{0}$. If $\overline{\mathbf{H}}$ has a special structure, we can still obtain simpler results. Let $L=1$ and $\overline{\mathbf{H}}_{1k} = \mathbf{U}_{1}\mathbf{\Sigma}_{1k}\mathbf{V}_{1k}^H$, where $\mathbf{\Sigma}_{1k}$ is an $N\times M_k$ deterministic matrix with at most one nonzero element in each row and each column. In this case, we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(z)^{-1}\overline{\mathbf{H}}{}^H = \mathbf{U}_{1}\left(\sum\limits_{k=1}^K \mathbf{\Sigma}_{1k}\mathbf{V}_{1k}^H\left(\mathbf{I}_{M_k}-
\vphantom{\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}}
\right.\right.
\nonumber \\
&~~~~~~~~~~~~~~~~~~ \left.\left.
\eta_{k}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N))\right)^{-1}\!\!
\vphantom{\left(\sum\limits_{k=1}^K \mathbf{\Sigma}_{1k}\mathbf{V}_{1k}^H\left(\mathbf{I}_{M_k}-
\right.\right.}
\mathbf{V}_{1k}\mathbf{\Sigma}_{1k}^H\right)\mathbf{U}_{1}^H
\\
&\!\!\!\!\eta_{k}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)) = \mathbf{V}_{1k}{\mathbf{\Pi}}_{1k}({\widetilde{\boldsymbol{\Lambda}}}_1(z))\mathbf{V}_{1k}^H
\\
&\!\!\!\!{\Tilde{{\boldsymbol{\Phi}}}}(z)
= \mathbf{U}_{1}({\widetilde{\boldsymbol{\Lambda}}}_1(z))^{-1}\mathbf{U}_{1}^H.
\end{IEEEeqnarray}
Recall from \eqref{eq:cauchy_transform_of_mathcal_BN} that
\begin{equation}
\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N) = (z{\Tilde{{\boldsymbol{\Phi}}}}(z)
- \overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(z)^{-1}\overline{\mathbf{H}}{}^H)^{-1}.
\nonumber
\end{equation}
The matrix inversion in \eqref{eq:cauchy_transform_of_mathcal_BN} can still be avoided, and the distribution of $\boldsymbol{\mathcal{B}}_N$ also does not vary with $\mathbf{U}_{1}$. However, the matrix inversion in \eqref{eq:cauchy_transform_of_mathcal_BN} cannot be avoided even with the assumption $\overline{\mathbf{H}}_{lk}=\mathbf{U}_{{l}}\mathbf{\Sigma}_{lk}\mathbf{V}_{lk}^H$ when $L\neq 1$.
\subsection{Deterministic Equivalent of $\mathcal{V}_{\boldsymbol{\mathbf{B}}_N}(x)$}
In this subsection, we derive the Shannon transform $\mathcal{V}_{{\boldsymbol{\mathcal{B}}}_N}(x)$ from the Cauchy transform $G_{{\boldsymbol{\mathcal{B}}}_N}(z)$.
According to \eqref{eq:deterministric_equivalent_of_cauchy_transform}, we have that
\begin{equation}
\lim\limits_{N \rightarrow \infty} \left(\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x) - \mathcal{V}_{{\boldsymbol{\mathbf{B}}}_N}(x)\right) = 0.
\end{equation}
Thus, $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$ is the deterministic equivalent of $\mathcal{V}_{{\boldsymbol{\mathbf{B}}}_N}(x)$.
To derive $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$, we introduce the following two lemmas.
\begin{lemma}
\label{lm:shannon_theorem_lemma_1}
Let $\mathbf{E}_k(x)$ denote $-x\mathcal{G}_k(-x)$ and $\mathbf{A}(x)$ denote $({\Tilde{{\boldsymbol{\Phi}}}}(-x)
+ x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H)^{-1}$. Then, we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\!\!\!\!-{\rm{tr}}\left(x^{-1}\overline{\mathbf{H}}{}^H\mathbf{A}(x)\overline{\mathbf{H}}\frac{d{{\boldsymbol{\Phi}}}(-x)^{-1}}{dx}\right) \nonumber \\
&= \sum\limits_{k=1}^{K}{\rm{tr}}\left(\left({{\boldsymbol{\Phi}}}_k(-x)^{-1}-\mathbf{E}_k(x)\right)\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}\right)
\label{eq:shannon_theorem_lemma_1}
\end{IEEEeqnarray}
where ${{\boldsymbol{\Phi}}}_k(-x) = \mathbf{I}_{M_k} - \eta_{k} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N))$.
\end{lemma}
\begin{IEEEproof}
The proof is given in Appendix {\ref{sec:proof_of_shannon_theorem_lemma_1}}.
\end{IEEEproof}
\begin{lemma}
\label{lm:shannon_theorem_lemma_2}
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\!\!\!\!{\rm{tr}}\left(\frac{d(x^{-1}\mathbf{A}(x))}{dx}\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right)
\nonumber \\
&= \sum\limits_{k=1}^{K}{\rm{tr}}\left(\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}x^{-1}\mathbf{E}_k(x)\right).
\label{eq:shannon_theorem_lemma_2}
\end{IEEEeqnarray}
\end{lemma}
\begin{IEEEproof}
The proof is given in Appendix {\ref{sec:proof_of_shannon_theorem_lemma_2}}.
\end{IEEEproof}
Using the above two lemmas and a technique similar to that in \cite{hachem2007deterministic}, we obtain the following theorem.
\begin{theorem}
\label{th:shannon_theorem}
The Shannon transform $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$ of $\boldsymbol{\mathcal{B}}_N$ satisfies
\begin{IEEEeqnarray}{Rl}
\!\!\!\!\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)
= & \log\det\left({\Tilde{{\boldsymbol{\Phi}}}}(-x) + x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H\right)
\nonumber \\
& +\log\det\left({{\boldsymbol{\Phi}}}(-x)\right)
\nonumber \\
& ~- {\rm{tr}}\left(x\sum\limits_{k=1}^{K}{{\eta}}_{k} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N))\mathcal{G}_k(-x) \right)
\label{eq:theorem_2_present_1}
\end{IEEEeqnarray}
or equivalently
\begin{IEEEeqnarray}{Rl}
\!\!\!\!\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)
=& \log\det\left({{{\boldsymbol{\Phi}}}}(-x)
+ x^{-1}\overline{\mathbf{H}}{}^H\Tilde{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}\right)
\nonumber \\
& +\log\det(\Tilde{{\boldsymbol{\Phi}}}(-x))
\nonumber \\
& ~- {\rm{tr}} \left(x\sum\limits_{k=1}^{K}{\tilde{\eta}}_{k} (\mathcal{G}_k(-x))\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)\right).
\label{eq:theorem_2_present_2}
\end{IEEEeqnarray}
\end{theorem}
\begin{IEEEproof}
The proof of \eqref{eq:theorem_2_present_1} is given in Appendix {\ref{sec:proof_of_shannon_theorem}}. Equation \eqref{eq:theorem_2_present_2} can be obtained from \eqref{eq:theorem_2_present_1} easily, and thus its proof is omitted for brevity.
\end{IEEEproof}
\begin{remark}
From Theorems \ref{th:cauchy_transform} and \ref{th:shannon_theorem}, we observe that the deterministic equivalent $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2)$
is totally determined by the parameterized one-sided correlation matrices $ {\tilde{\eta}}_{k}(\mathbf{C}_k)$ and ${\eta}_{k}({\widetilde{\mathbf{C}}})$.
In \cite{zhang2013capacity}, each sub-channel matrix $\widetilde{\mathbf{H}}_{lk}$ reduces to $\mathbf{R}_{lk}^{\frac{1}{2}}\mathbf{W}_{lk}\mathbf{T}_{lk}^{\frac{1}{2}}$, where $\mathbf{R}_{lk}$ and $\mathbf{T}_{lk}$ are deterministic positive definite matrices.
In this case, ${\tilde{\eta}}_k(\mathbf{C}_k)$ becomes
\begin{IEEEeqnarray}{Rl}
{\tilde{\eta}}_k(\mathbf{C}_k) = & {\rm{diag}}\left(\mathbf{R}_{1k}{\rm{tr}}\left(\mathbf{T}_{1k}\mathbf{C}_k\right),\mathbf{R}_{2k}{\rm{tr}}\left(\mathbf{T}_{2k}\mathbf{C}_k\right), \right.
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~
\left.\cdots,\mathbf{R}_{Lk}{\rm{tr}}\left(\mathbf{T}_{Lk}\mathbf{C}_k\right)\right)
\end{IEEEeqnarray}
and $\eta_k(\widetilde{\mathbf{C}})$ becomes
\begin{equation}
\eta_k(\widetilde{\mathbf{C}}) = \sum\limits_{l=1}^{L}\mathbf{T}_{lk}{\rm{tr}}(\mathbf{R}_{lk}\langle{\widetilde{\mathbf{C}}}\rangle_l).
\end{equation}
Let $e_{lk}={\rm{tr}}(\mathbf{R}_{lk}\langle{\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)}\rangle_l)$ and
$\tilde{e}_{lk}={\rm{tr}}(\mathbf{T}_{lk}\mathcal{G}_{k}(-x))$. Then, it is easy to show that the deterministic equivalent $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2)$ provided by
\eqref{eq:theorem_2_present_1} or \eqref{eq:theorem_2_present_2} reduces to that provided by Theorem $2$ of \cite{zhang2013capacity} when $\widetilde{\mathbf{H}}_{lk}$ reduces to $\mathbf{R}_{lk}^{\frac{1}{2}}\mathbf{W}_{lk}\mathbf{T}_{lk}^{\frac{1}{2}}$.
\end{remark}
We now summarize the method to compute the deterministic equivalent $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2)$ of the Shannon transform $\mathcal{V}_{\boldsymbol{\mathbf{B}}_N}(\sigma_z^2)$ as follows: First, initialize $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-\sigma_z^2\mathbf{I}_N)$ with $\mathbf{I}_N$ and $\mathcal{G}_k(-\sigma_z^2)$ with $\mathbf{I}_{M_k}$. Second, iterate \eqref{eq:cauchy_transform_tilde_phi}--\eqref{eq:cauchy_transform_of_mathcal_HkHkH} until the desired tolerances of $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-\sigma_z^2\mathbf{I}_N)$ and $\mathcal{G}_k(-\sigma_z^2)$ are satisfied. Third, obtain the deterministic equivalent $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2)$ by \eqref{eq:theorem_2_present_1} or \eqref{eq:theorem_2_present_2}.
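As an illustration, these three steps translate into a few lines of Python, reusing \texttt{iterate\_cauchy} and \texttt{blkdiag} from the sketch after Theorem \ref{th:cauchy_transform}; the \texttt{slogdet}-based evaluation of \eqref{eq:theorem_2_present_1} is an assumed numerical choice, not prescribed by the theory:
\begin{verbatim}
import numpy as np

def shannon_transform(x, Hbar, Ms, eta, eta_t):
    # Steps 1-2: solve the fixed point at z = -x, x = sigma_z^2 > 0
    K = len(Ms)
    G_BN, Gk = iterate_cauchy(-x, Hbar, Ms, eta, eta_t)
    Phi_t = np.eye(Hbar.shape[0]) \
            - sum(eta_t(k, Gk[k]) for k in range(K))
    Phi = blkdiag([np.eye(Ms[k]) - eta(k, G_BN) for k in range(K)])
    # Step 3: evaluate the Shannon transform
    A = Phi_t + Hbar @ np.linalg.inv(Phi) @ Hbar.conj().T / x
    val = np.linalg.slogdet(A)[1] + np.linalg.slogdet(Phi)[1]
    val -= (x * sum(np.trace(eta(k, G_BN) @ Gk[k])
                    for k in range(K))).real
    return val
\end{verbatim}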
When $N_l$ becomes very large, we can also obtain simpler results from Theorem {\ref{th:shannon_theorem}} under some scenarios.
Consider $\overline{\mathbf{H}}=\mathbf{0}$. Let $\tilde{\lambda}_{li}(x)=1 - \sum_{k=1}^{K}[\widetilde{\mathbf{\Pi}}_{lk}(\mathcal{G}_k(-x))]_{ii}$. We can rewrite $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$ as
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x) = \sum\limits_{k=1}^{K}\log\det(\mathbf{I}_{M_k} - \eta_{k}( {\widetilde{\boldsymbol{\Lambda}}}(-x)))
\nonumber \\
&&
~+\sum\limits_{l=1}^{L}\sum\limits_{i=1}^{N_l}\log(\tilde{\lambda}_{li}(x))
+ \sum\limits_{l=1}^{L}\sum\limits_{i=1}^{N_l}\frac{1-\tilde{\lambda}_{li}(x)}{\tilde{\lambda}_{li}(x)}.
\label{eq:Shannon_theorem_remark_1}
\end{eqnarray}
When $L=1$, \eqref{eq:Shannon_theorem_remark_1} further reduces to
\begin{eqnarray}
&&\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x) = \sum\limits_{k=1}^{K}\sum\limits_{i=1}^{M_k}\log({\lambda}_{ki}(x))
+\sum\limits_{i=1}^{N}\log(\tilde{\lambda}_{1i}(x)) \nonumber \\
&&~~~~~~~~~~~~
+ \sum\limits_{i=1}^{N}\frac{1-\tilde{\lambda}_{1i}(x)}{\tilde{\lambda}_{1i}(x)}
\end{eqnarray}
where ${\lambda}_{ki}(x)$ denotes $1 - [{\mathbf{\Pi}}_{k}({\widetilde{\boldsymbol{\Lambda}}}(-x))]_{ii}$. In the case of $L=1$ and $\overline{\mathbf{H}}_{1k}=\mathbf{U}_{1}\mathbf{\Sigma}_{1k}\mathbf{V}_{1k}^H$, similar results
to \eqref{eq:Shannon_theorem_remark_1} can still be obtained and are omitted here for brevity.
\subsection{Sum-rate Capacity Achieving Input Covariance Matrices}
\label{sec:Sum_rate_Capacity_Achieving_Input_Covariance_Matrices}
In this subsection, we consider the optimization problem
\begin{equation}
(\mathbf{Q}_{1}^{\star},\mathbf{Q}_{2}^{\star},\cdots,\mathbf{Q}_{K}^{\star})=\mathop{\arg\max}\limits_{(\mathbf{Q}_{1},\cdots,\mathbf{Q}_{K})\in\mathbb{Q}} \mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2).
\label{eq:optimization_problem_of_deterministic_equivalent}
\end{equation}
In the previous section, we have obtained the expression of $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$ when assuming $\frac{P_k}{M_k}\mathbf{Q}_k=\mathbf{I}_{M_k}$. The results for general $\mathbf{Q}_k$'s are obtained by replacing the matrices $\overline{\mathbf{H}}_k$ with
$\sqrt{\frac{P_k}{M_k}}\overline{\mathbf{H}}_k\mathbf{Q}_k^{\frac{1}{2}}$ and $\widetilde{\mathbf{H}}_k$ with $\sqrt{\frac{P_k}{M_k}}\widetilde{\mathbf{H}}_k\mathbf{Q}_k^{\frac{1}{2}}$.
Let ${\tilde{\eta}}_{Q,k}(\mathbf{C}_k)$ and $\eta_{Q,k}({\widetilde{\mathbf{C}}})$ be defined by
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!{\tilde{\eta}}_{Q,k}(\mathbf{C}_k)=\frac{P_k}{M_k}{\rm{diag}}\left(\mathbf{U}_{1k}\widetilde{\mathbf{\Pi}}_{1k} \left(\mathbf{Q}_k^{\frac{1}{2}}\mathbf{C}_k\mathbf{Q}_k^{\frac{1}{2}}\right)\mathbf{U}_{1k}^H, \right.
\nonumber \\
& ~~~~~~~~~~~~~~~~~~\mathbf{U}_{2k}\widetilde{\mathbf{\Pi}}_{2k}\left(\mathbf{Q}_k^{\frac{1}{2}}\mathbf{C}_k\mathbf{Q}_k^{\frac{1}{2}}\right) \mathbf{U}_{2k}^H, \cdots,
\nonumber \\
&\left.~~~~~~~~~~~~~~~~~~~~~~ \mathbf{U}_{Lk}\widetilde{\mathbf{\Pi}}_{Lk}\left(\mathbf{Q}_k^{\frac{1}{2}}\mathbf{C}_k\mathbf{Q}_k^{\frac{1}{2}}\right) \mathbf{U}_{Lk}^H\right)
\label{eq:etaQ_function}
\end{IEEEeqnarray}
and
\begin{eqnarray}
\eta_{Q,k}({\widetilde{\mathbf{C}}})
=\frac{P_k}{M_k}\sum\limits_{l=1}^{L}\mathbf{Q}_k^{\frac{1}{2}}\mathbf{V}_{lk}{\mathbf{\Pi}}_{lk}(\langle{\widetilde{\mathbf{C}}}\rangle_l)\mathbf{V}_{lk}^H\mathbf{Q}_k^{\frac{1}{2}}.
\label{eq:etaQ_function_of_widetilde}
\end{eqnarray}
The right-hand sides (RHSs) of \eqref{eq:etaQ_function} and \eqref{eq:etaQ_function_of_widetilde} are obtained by
replacing $\widetilde{\mathbf{H}}_k$ with $\sqrt{\frac{P_k}{M_k}}\widetilde{\mathbf{H}}_k\mathbf{Q}_k^{\frac{1}{2}}$ in
\eqref{eq:eta_function} and \eqref{eq:eta_function_of_widetilde}, respectively.
Let $\overline{\mathbf{S}}$ denote
$[\sqrt{\tfrac{P_1}{M_1}}\overline{\mathbf{H}}_1~\sqrt{\tfrac{P_2}{M_2}}\overline{\mathbf{H}}_2~\cdots~\sqrt{\tfrac{P_K}{M_K}}\overline{\mathbf{H}}_K]
$ and $\mathbf{Q}={\rm{diag}}(\mathbf{Q}_1,\mathbf{Q}_2,\cdots,\mathbf{Q}_K)$.
Then, \eqref{eq:theorem_2_present_1} becomes
\begin{eqnarray}
\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)
\!\!\!\!&=&\!\!\!\! \log\det\left(\mathbf{I}_M+\mathbf{\Gamma}\mathbf{Q}\right)
+\log\det(\Tilde{{\boldsymbol{\Phi}}}(-x))
\nonumber \\
&&- {\rm{tr}}\left(x\sum\limits_{k=1}^{K}{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)\right)
\label{eq:shannon_transform_general_Q}
\end{eqnarray}
with the following notations
\begin{IEEEeqnarray}{Rl}
&\mathbf{\Gamma}
={\rm{diag}}\left(-\eta_{1} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)),-\eta_{2} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)), \cdots,\right.
\nonumber \\
&~~~~~~~~~~~~\left.-\eta_{K}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N))\right) + x^{-1}\overline{\mathbf{S}}{}^H\Tilde{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{S}}
\label{eq:gamma_definition}
\\
&{\Tilde{{\boldsymbol{\Phi}}}}(-x) = \mathbf{I}_N - \sum\limits_{k=1}^{K}{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))
\label{eq:cauchy_transform_tilde_phi_general_Q} \\
&{{\boldsymbol{\Phi}}}(-x)= {\rm{diag}}\left(\mathbf{I}_{M_1} \!-\! \eta_{Q,1} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)),\right.
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~\mathbf{I}_{M_2} - \eta_{Q,2} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)), \cdots,
\nonumber\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~\left.\mathbf{I}_{M_K} \!-\! \eta_{Q,K} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N))\right)
\label{eq:cauchy_transform_phi_general_Q} \\
&\mathcal{G}_k(-x) = \left(\!\left({-x}{{\boldsymbol{\Phi}}}(-x)
- \mathbf{Q}^{\frac{1}{2}}\overline{\mathbf{S}}{}^H{\Tilde{{\boldsymbol{\Phi}}}}(-x)^{-1} \overline{\mathbf{S}}\mathbf{Q}^{\frac{1}{2}}\right)^{-1}\!\right)_{\!\!k}
\label{eq:cauchy_transform_of_mathcal_HkHkH_general_Q} \\
&\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N) = \left({-x}{\Tilde{{\boldsymbol{\Phi}}}}(-x)
- \overline{\mathbf{S}}\mathbf{Q}^{\frac{1}{2}}{{\boldsymbol{\Phi}}}(-x)^{-1}\mathbf{Q}^{\frac{1}{2}}\overline{\mathbf{S}}{}^H\right)^{-1}\!\!.
\nonumber \\
\label{eq:cauchy_transform_of_mathcal_BN_general_Q}
\end{IEEEeqnarray}
By using a procedure similar to that in \cite{zhang2013capacity}, \cite{couillet2011deterministic}, \cite{wen2013deterministic} and \cite{dumont2010capacity},
we obtain the following theorem.
\begin{theorem}
\label{th:capaicty_achieving matrix_theorem}
The optimal input covariance matrices
\begin{equation}
(\mathbf{Q}_{1}^{\star},\mathbf{Q}_{2}^{\star},\cdots,\mathbf{Q}_{K}^{\star}) \nonumber
\end{equation}
are the
solutions of the standard waterfilling maximization problem:
\begin{IEEEeqnarray}{Rl}
&\max\limits_{\mathbf{Q}_k} \log\det(\mathbf{I}_{M_k}+\mathbf{\Gamma}_k\mathbf{Q}_k)
\nonumber \\
&{\rm s.t.} ~{\rm{tr}}(\mathbf{Q}_k)\leq M_k,\mathbf{Q}_k\succeq 0
\end{IEEEeqnarray}
where
\begin{eqnarray}
& &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathbf{\Gamma}_k= \langle(\mathbf{I}_{M}+\mathbf{\Gamma}\mathbf{Q}_{\backslash k})^{-1}\mathbf{\Gamma}\rangle_k \\
& &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathbf{Q}_{\backslash k}={\rm{diag}}(\mathbf{Q}_1,\cdots,\mathbf{Q}_{k-1},\mathbf{0}_{M_k},\mathbf{Q}_{k+1},\cdots,\mathbf{Q}_{K}).
\end{eqnarray}
\end{theorem}
\begin{IEEEproof}
The proof is given in Appendix {\ref{sec:proof_of_capaicty_achieving matrix_theorem}}.
\end{IEEEproof}
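The per-user subproblem above is a standard water-filling program, solvable in closed form up to the water level. The following Python sketch (with a bisection over the water level, an assumed but common numerical choice) would be called once per user inside an outer loop that alternates with the fixed-point equations determining $\mathbf{\Gamma}$:
\begin{verbatim}
import numpy as np

def waterfill(Gamma_k, power, tol=1e-10):
    # max log det(I + Gamma_k Q) s.t. tr(Q) <= power, Q >= 0
    g, U = np.linalg.eigh(Gamma_k)         # Gamma_k Hermitian PSD
    g = np.maximum(g.real, 1e-30)          # guard zero eigenvalues
    lo, hi = 0.0, power + np.max(1.0 / g)
    while hi - lo > tol:                   # bisect the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / g, 0.0)
        lo, hi = (mu, hi) if p.sum() < power else (lo, mu)
    p = np.maximum(0.5 * (lo + hi) - 1.0 / g, 0.0)
    return (U * p) @ U.conj().T            # Q_k = U diag(p) U^H
\end{verbatim}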
\begin{remark}
When $L=1$, we have $\mathbf{H}_k=\overline{\mathbf{H}}_{1k}+\mathbf{U}_{1k}(\mathbf{M}_{1k}\odot{\mathbf{W}}_{1k})\mathbf{V}_{1k}^H$ \cite{wen2011on}.
Let $\mathbf{G}_k$ denote $\mathbf{M}_{1k}\odot\mathbf{M}_{1k}$.
We define $\boldsymbol{\psi}_k$ by
$[\boldsymbol{\psi}_k]_j=[\frac{P_k}{M_k}\mathbf{V}_{1k}^H\mathbf{Q}_k^{\frac{1}{2}}\mathcal{G}_k(-x)\mathbf{Q}_k^{\frac{1}{2}}\mathbf{V}_{1k}]_{jj}$. Then, we have that
\begin{equation}
{\tilde{\eta}}_{Q,k}(\mathcal{G}_k(-x))= \mathbf{U}_{1k}{\rm{diag}}({\mathbf{G}}_k\boldsymbol{\psi}_k)\mathbf{U}_{1k}^H. \nonumber
\end{equation}
Similarly, defining $\boldsymbol{\gamma}_{k}$ by
\begin{equation}
[\boldsymbol{\gamma}_{k}]_j=[\frac{P_k}{M_k}\mathbf{U}_{1k}^H\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)\mathbf{U}_{1k}]_{jj} \nonumber
\end{equation}
we have that
\begin{equation}
\eta_k(\mathcal{G}_{{\boldsymbol{\mathcal{B}}_N}}^{\mathcal{M}_N}(-x\mathbf{I}_N)) = \mathbf{V}_{1k}{\rm{diag}}({\mathbf{G}}_k^T\boldsymbol{\gamma}_{k})\mathbf{V}_{1k}^H. \nonumber
\end{equation}
Let $\mathbf{R}_k=-{\tilde{\eta}}_{Q,k}(\mathcal{G}_k(-x))$ and $\mathbf{T}_k=-\eta_k(\mathcal{G}_{{\boldsymbol{\mathcal{B}}_N}}^{\mathcal{M}_N}(-x\mathbf{I}_N))$.
With
\begin{equation}
{\rm{tr}}\left(x\sum_{k=1}^{K}{{\eta}}_{Q,k} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N))\mathcal{G}_k(-x) \right)=\sum_{k=1}^{K}\boldsymbol{\gamma}_k^T{\mathbf{G}}_k\boldsymbol{\psi}_k \nonumber
\end{equation}
and the previous results, it is easy to show that $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2)$ provided by \eqref{eq:shannon_transform_general_Q} reduces to that in Proposition 1 of \cite{wen2011on}, and the capacity achieving input covariance matrices provided by Theorem \ref{th:capaicty_achieving matrix_theorem} reduce to that in Proposition 2 of \cite{wen2011on}.
When $\widetilde{\mathbf{H}}_{lk}$ reduces to $\mathbf{R}_{lk}^{\frac{1}{2}}\mathbf{W}_{lk}\mathbf{T}_{lk}^{\frac{1}{2}}$, we have shown in the previous section that $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(\sigma_z^2)$ provided by \eqref{eq:theorem_2_present_1} reduces to that provided by Theorem 2 of \cite{zhang2013capacity}. It follows naturally that the capacity achieving input covariance matrices presented in Theorem \ref{th:capaicty_achieving matrix_theorem} also reduce to that provided by Proposition 2 of \cite{zhang2013capacity}.
\end{remark}
To obtain $(\mathbf{Q}_{1}^{\star}, \mathbf{Q}_{2}^{\star}, \cdots, \mathbf{Q}_{K}^{\star})$, we need to iteratively compute $\mathbf{\Gamma}$ via \eqref{eq:gamma_definition}--\eqref{eq:cauchy_transform_of_mathcal_BN_general_Q}. When each $N_l$ becomes very large, we can simplify these equations with the assumption that $\mathbf{U}_{l1}=\mathbf{U}_{l2}=\cdots=\mathbf{U}_{lK}$ under some scenarios. Consider $\overline{\mathbf{H}} = \mathbf{0}$. The diagonal entries of the $N_l \times N_l$ diagonal matrix valued function
${\widetilde{\boldsymbol{\Lambda}}}_l(z)$ in \eqref{eq:reduce_formula_lambda_tilde_diagonal} become
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\left[{\widetilde{\boldsymbol{\Lambda}}}_l(z)\right]_{ii}
\nonumber \\
&\!\!= \left(1 - \sum\limits_{k=1}^{K}\frac{P_k}{M_k}\left[\widetilde{\mathbf{\Pi}}_{lk}(\mathbf{Q}_k^{\frac{1}{2}}\mathcal{G}_k(z)\mathbf{Q}_k^{\frac{1}{2}})\right]_{ii}\right)^{-1}\!\!.
\end{IEEEeqnarray}
Then, equations \eqref{eq:cauchy_transform_tilde_phi_general_Q}--\eqref{eq:cauchy_transform_of_mathcal_BN_general_Q} reduce to
\begin{IEEEeqnarray}{Rl}
&\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N) = z^{-1}\mathbf{U}_{R}{\widetilde{\boldsymbol{\Lambda}}}(z)\mathbf{U}_{R}^H
\label{eq:cauchy_transform_of_mathcal_BN_large_mimo_Q} \\
&{\widetilde{\boldsymbol{\Lambda}}}(z)={\rm{diag}}\left({\widetilde{\boldsymbol{\Lambda}}}_1(z), {\widetilde{\boldsymbol{\Lambda}}}_2(z), \cdots, {\widetilde{\boldsymbol{\Lambda}}}_L(z)\right) \\
&\mathcal{G}_k(z)= z^{-1}\left(\mathbf{I}_{M_k} - \eta_{Q,k}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N))\right)^{-1}
\end{IEEEeqnarray}
where $\mathbf{U}_R$ denotes ${\rm{diag}}(\mathbf{U}_1, \mathbf{U}_2, \cdots, \mathbf{U}_L)$.
In the above equations, we have avoided the matrix inversion in \eqref{eq:cauchy_transform_of_mathcal_BN_general_Q}.
However, due to the existence of $\mathbf{Q}$, \eqref{eq:cauchy_transform_tilde_phi_general_Q}--\eqref{eq:cauchy_transform_of_mathcal_BN_general_Q} cannot be further reduced when $L=1$.
Let $\tilde{\lambda}_{li}(x)$ denote
\begin{equation}
1-\sum_{k=1}^{K}\frac{P_k}{M_k}[\widetilde{\mathbf{\Pi}}_{lk}(\mathbf{Q}_k^{\frac{1}{2}}\mathcal{G}_k(-x)\mathbf{Q}_k^{\frac{1}{2}})]_{ii}.
\nonumber
\end{equation}
We can rewrite $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$ for general $\mathbf{Q}$ as
\begin{IEEEeqnarray}{Rl}
\!\!\!\!\!\!\!\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)
=& \sum\limits_{k=1}^{K}\log\det\left(\mathbf{I}_{M_k} - \eta_{Q,k} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N))\right)
\nonumber \\
&\!\!\!+\sum\limits_{l=1}^{L}\sum\limits_{i=1}^{N_l}\log(\tilde{\lambda}_{li}(x))
+ \sum\limits_{l=1}^{L}\sum\limits_{i=1}^{N_l}\frac{1-\tilde{\lambda}_{li}(x)}{\tilde{\lambda}_{li}(x)}.
\end{IEEEeqnarray}
In the case of $L=1$ and $\overline{\mathbf{H}}_{1k}=\mathbf{U}_{1}\mathbf{\Sigma}_{1k}\mathbf{V}_{1k}^H$, similar results can be obtained and are omitted here for brevity.
As shown in \cite{wen2011on}, it is easy to prove that the eigenvectors of the optimal input covariance matrix of the $k$-th user are aligned with ${\mathbf{V}_{1k}}$ when $L=1$ and $\overline{\mathbf{H}}_{1k}=\mathbf{0}$.
However, for $L\neq 1$, unless the $\mathbf{V}_{lk}$ for different ASs are the same, the eigenvectors of the optimal input covariance matrix of the $k$-th user are not aligned with ${\mathbf{V}_{lk}}$ even when $\overline{\mathbf{H}}_k=\mathbf{0}$.
\section{Simulation Results}
In this section, we provide simulation results to show the performance of the proposed free deterministic equivalent approach. Two simulation models are used. One consists of randomly generated jointly correlated channels.
The other is the WINNER II model \cite{meinila2009winner}. The WINNER II channel model is a geometry-based stochastic channel model (GSCM), where the channel parameters are determined stochastically based on statistical distributions extracted from channel measurements. Since the jointly correlated channel is a good approximation of the measured channel \cite{bonek2005experimental,weichselberger2006stochastic}, we assume that it can well approximate the WINNER II channel model. In all simulations, we set ${P_k}={M_k}$, $K=3$ and $L=2$ for simplicity. The signal-to-noise ratio (SNR) is given by SNR$=\frac{1}{M\sigma_z^2}$.
\vspace{0.1em}
For the first simulation model, $\mathbf{M}_{lk}$, ${\mathbf{U}}_{lk}$ and $\mathbf{V}_{lk}$ are all randomly generated. The matrices ${\mathbf{U}}_{lk}$ and ${\mathbf{V}}_{lk}$ are extracted from randomly generated Gaussian matrices with i.i.d. entries via singular value decomposition (SVD), and the entries $[\mathbf{M}_{lk}]_{ij}$ are first generated as uniform random variables on $[0, 1]$ and then normalized according to \eqref{eq:constraint_of_channel_matrix}. Each deterministic channel matrix $\overline{\mathbf{H}}_{lk}$ is set to a zero matrix for simplicity.
For the WINNER II model, we use the cluster delay line (CDL) model of the Matlab implementation in \cite{hentila2007matlab} directly. The Fourier transform is used to convert the time-delay channel to a time-frequency channel. The simulation scenario is set to B1 (typical urban microcell) with line of sight (LOS). The carrier frequency is 5.25 GHz. The antenna arrays of both the BS and the users are uniform linear arrays (ULAs) with 1-cm spacing. For other detailed parameters, see \cite{meinila2009winner}. When the WINNER II model is considered, we first extract $\overline{\mathbf{H}}_{lk}$, $\mathbf{M}_{lk}$, ${\mathbf{U}}_{lk}$ and $\mathbf{V}_{lk}$, as described next.
\subsection{Extraction of $\overline{\mathbf{H}}_{lk}$, $\mathbf{M}_{lk}$, ${\mathbf{U}}_{lk}$ and $\mathbf{V}_{lk}$ from WINNER II Model}
We denote by $S$ the number of samples, and by ${\mathbf{H}}_{lk}(s)$ the $s$-th sample of ${\mathbf{H}}_{lk}$. Then, each deterministic channel matrix
$\overline{\mathbf{H}}_{lk}$ is obtained from
\begin{equation}
\overline{\mathbf{H}}_{lk}=\frac{1}{S}\sum\limits_{s=1}^S{\mathbf{H}}_{lk}(s)
\end{equation}
and each sample of the random channel matrix $\widetilde{\mathbf{H}}_{lk}$ is given by
\begin{equation}
\widetilde{\mathbf{H}}_{lk}(s)={\mathbf{H}}_{lk}(s) - \overline{\mathbf{H}}_{lk}.
\end{equation}
Then, we normalize the channel matrices ${\mathbf{H}}_{lk}(s)$ according to \eqref{eq:constraint_of_channel_matrix}.
Furthermore, from the correlation matrices
\begin{IEEEeqnarray}{Rl}
{\mathbf{R}}_{r,{lk}}=&\frac{1}{S}\sum\limits_{s=1}^S\widetilde{\mathbf{H}}_{lk}(s)\widetilde{\mathbf{H}}_{lk}^H(s) \\
{\mathbf{R}}_{t,{lk}}=&\frac{1}{S}\sum\limits_{s=1}^S\widetilde{\mathbf{H}}_{lk}^H(s)\widetilde{\mathbf{H}}_{lk}(s)
\end{IEEEeqnarray}
and their eigenvalue decompositions
\begin{IEEEeqnarray}{Rl}
{\mathbf{R}}_{r,{lk}}=&{\mathbf{U}}_{lk}{\mathbf{\Sigma}}_{r,{lk}}{\mathbf{U}}_{lk}^H \\
{\mathbf{R}}_{t,{lk}}=&{\mathbf{V}}_{lk}{\mathbf{\Sigma}}_{t,{lk}}{\mathbf{V}}_{lk}^H
\end{IEEEeqnarray}
the eigenvector matrices ${\mathbf{U}}_{lk}$ and ${\mathbf{V}}_{lk}$ are obtained. Then,
the coupling matrices $\mathbf{G}_{lk}=\mathbf{M}_{lk} \odot \mathbf{M}_{lk} $ are computed as \cite{weichselberger2006stochastic}
\begin{equation}
\mathbf{G}_{lk}=\frac{1}{S}\sum\limits_{s=1}^S\left({\mathbf{U}}_{lk}^H{\mathbf{H}}_{lk}(s){\mathbf{V}}_{lk}\right)\odot\left({\mathbf{U}}_{lk}^T{\mathbf{H}}_{lk}^*(s){\mathbf{V}}_{lk}^*\right).
\end{equation}
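In code, this extraction is a direct transcription of the above equations. The following Python sketch processes the samples for one $(l,k)$ pair; the normalization step is omitted, and the eigenvector ordering is left as numpy's ascending-eigenvalue convention (both simplifying assumptions):
\begin{verbatim}
import numpy as np

def extract_params(H_samples):
    # H_samples: complex array of shape (S, N, M) for one (l, k)
    S = H_samples.shape[0]
    Hbar = H_samples.mean(axis=0)              # deterministic part
    Ht = H_samples - Hbar                      # random-part samples
    Rr = np.einsum('sij,skj->ik', Ht, Ht.conj()) / S  # receive corr.
    Rt = np.einsum('sji,sjk->ik', Ht.conj(), Ht) / S  # transmit corr.
    _, U = np.linalg.eigh(Rr)                  # eigenvectors U_lk
    _, V = np.linalg.eigh(Rt)                  # eigenvectors V_lk
    P = U.conj().T @ H_samples @ V             # per-sample projections
    G = (P * P.conj()).mean(axis=0).real       # coupling matrix G_lk
    return Hbar, U, V, G
\end{verbatim}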
\subsection{Simulation Results}
\begin{figure}
\centering
\includegraphics[scale=0.55]{capacity.eps}
\caption{Ergodic input-output mutual information versus SNRs of the randomly generated jointly correlated channels with
$N_1=N_2=64, M_1=M_2=M_3=4$. The line plots the deterministic equivalent
results, while the circle markers denote the simulation results.}
\label{fig:capacity_simulation_analytic_one}
\end{figure}
We first consider the randomly generated jointly correlated channels with $N_1=N_2=64$, $M_1=M_2=M_3=4$ and $\mathbf{Q}_1=\mathbf{Q}_2=\mathbf{Q}_3=\mathbf{I}_{4}$. The results of the simulated ergodic mutual information $N\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$ and their deterministic equivalents $N\mathcal{V}_{{\boldsymbol{\mathcal{B}}}_N}(\sigma_z^2)$ are depicted in Fig.~\ref{fig:capacity_simulation_analytic_one}. The ergodic mutual information $N\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$ in Fig.~\ref{fig:capacity_simulation_analytic_one} and the following figures is evaluated by Monte-Carlo simulations, where $10^4$ channel realizations are used for averaging.
As depicted in Fig.~\ref{fig:capacity_simulation_analytic_one}, the deterministic equivalent results are virtually the same as the simulation results.
\begin{figure}
\centering
\subfigure[]{
\includegraphics[scale=0.55]{capacity_winner2_1.eps}}
\subfigure[]{
\includegraphics[scale=0.55]{capacity_winner2_2.eps}}
\caption{Ergodic input-output mutual information versus SNRs of the WINNER II channel with
(a) $ N_1=N_2=4, M_1=M_2=M_3=4$ and (b) $ N_1=N_2=64, M_1=M_2=M_3=4$. The lines plot the deterministic equivalent
results, while the circle markers denote the simulation results.}
\label{fig:capacity_simulation_analytic_two}
\end{figure}
We then consider the WINNER II model for the case with $N_1=N_2=4, M_1=M_2=M_3=4$ and the case with $N_1=N_2=64, M_1=M_2=M_3=4$, respectively. For simplicity, we also set $\mathbf{Q}_1=\mathbf{Q}_2=\mathbf{Q}_3=\mathbf{I}_{4}$.
In Fig.~\ref{fig:capacity_simulation_analytic_two}, the ergodic mutual information $N\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$ and their deterministic equivalents $N\mathcal{V}_{{\boldsymbol{\mathcal{B}}}_N}(\sigma_z^2)$ are depicted. As shown in both Fig.~\ref{fig:capacity_simulation_analytic_two}(a) and Fig.~\ref{fig:capacity_simulation_analytic_two}(b), the differences between the deterministic equivalent results and the simulation results are negligible.
\begin{table}
\renewcommand{\arraystretch}{1.3}
\caption{Average execution time in seconds}
\label{table_example}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& $N_1$=$N_2$=4 & $N_1$=$N_2$=64 & $N_1$=$N_2$=64 \\
& \!$M_1$=$M_2$=$M_3$=4\!& \!$M_1$=$M_2$=$M_3$=4\! &\!$M_1$=$M_2$=$M_3$=8\! \\
\hline
Monte-Carlo & 9.74 & 12.9014 & 24.6753\\
\hline
DE & 0.0269 & 0.3671 & 0.4655 \\
\hline
\end{tabular}
\label{lb:average_execution_time_1}
\end{table}
To show the computational efficiency of the proposed deterministic equivalent $N\mathcal{V}_{{\boldsymbol{\mathcal{B}}}_N}(\sigma_z^2)$, we provide in Table~\ref{lb:average_execution_time_1} the average execution time for both the Monte-Carlo simulation and the proposed
algorithm, on a 1.8 GHz Intel quad core i5 processor with 4 GB of RAM, under different system sizes.
As shown in Table~\ref{lb:average_execution_time_1}, the proposed deterministic equivalent is computed much more efficiently.
Moreover, the comparison indicates that the proposed deterministic equivalent provides a promising
foundation to derive efficient algorithms for system optimization.
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[scale=0.55]{precoding_1.eps}
}
\subfigure[]{
\includegraphics[scale=0.55]{precoding_2.eps}
}
\subfigure[]{
\includegraphics[scale=0.55]{precoding_4.eps}
}
\subfigure[]{
\includegraphics[scale=0.55]{precoding_3.eps}
}
\caption{Ergodic input-output mutual information versus SNRs of the WINNER II channel with
(a) $N_1=N_2=4, M_1=M_2=M_3=4$, (b) $N_1=N_2=32, M_1=M_2=M_3=4$,
(c) $N_1=N_2=32, M_1=M_2=M_3=8$ and (d) $N_1=N_2=64, M_1=M_2=M_3=4$. The solid lines plot the simulation results without optimization.
The dashed lines denote the simulation results of the proposed algorithm, while the diamond markers denote the simulation results of the Vu-Paulraj algorithm.}
\label{fig:precoding_simulation_analytic}
\end{figure*}
Simulations are also carried out to evaluate the performance of the capacity achieving input covariance matrices $(\mathbf{Q}_{1}^{\star},\mathbf{Q}_{2}^{\star},\mathbf{Q}_{3}^{\star})$. Fig.~\ref{fig:precoding_simulation_analytic} depicts the results of the WINNER II channel models with various system sizes.
In Figs.~\ref{fig:capacity_simulation_analytic_one} and \ref{fig:capacity_simulation_analytic_two}, we have shown that the deterministic equivalent $N\mathcal{V}_{{\boldsymbol{\mathcal{B}}}_N}(\sigma_z^2)$ and the simulated ergodic mutual information
$N\mathcal{V}_{{\mathbf{B}}_N}(\sigma_z^2)$ are nearly the same. Since
the latter represents the actual performance of the input covariance matrices,
we use it for producing the numerical results in Fig.~\ref{fig:precoding_simulation_analytic}.
In all four subfigures of Fig.~\ref{fig:precoding_simulation_analytic}, both the ergodic mutual information $N\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$ for $(\mathbf{Q}_{1}^{\star},\mathbf{Q}_{2}^{\star},\mathbf{Q}_{3}^{\star})$ and the ergodic mutual information $N\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$ without optimization (\textit{i.e.}, for $(\mathbf{I}_{M_1},\mathbf{I}_{M_2},\mathbf{I}_{M_3})$) are shown. Let $(\mathbf{Q}_{1}^{\diamond},\mathbf{Q}_{2}^{\diamond},\mathbf{Q}_{3}^{\diamond})$ denote the solution of the Vu-Paulraj algorithm. The ergodic mutual information $N\mathcal{V}_{\mathbf{B}_N}(\sigma_z^2)$ for $(\mathbf{Q}_{1}^{\diamond}, \mathbf{Q}_{2}^{\diamond}, \mathbf{Q}_{3}^{\diamond})$ is also given for comparison. We note that the ergodic mutual information for $(\mathbf{Q}_{1}^{\star}, \mathbf{Q}_{2}^{\star}, \mathbf{Q}_{3}^{\star})$ and that for $(\mathbf{Q}_{1}^{\diamond}, \mathbf{Q}_{2}^{\diamond}, \mathbf{Q}_{3}^{\diamond})$ are indistinguishable. We also observe that increasing the number of receive antennas decreases the optimization gain when the number of transmit antennas is fixed, whereas increasing the number of transmit antennas provides a larger gain when the number of receive antennas is fixed. The main reason behind this phenomenon is the following: If the number of transmit antennas is fixed, then more receive antennas means lower correlations between the received channel vectors from each transmit antenna (columns of the channel matrices), and thus the performance gain provided by the optimization algorithm becomes smaller. On the other hand, if the number of receive antennas is fixed, then the received channel vectors from each transmit antenna become more correlated as the number of transmit antennas increases, and thus a larger optimization gain can be observed.
\section{Conclusion}
In this paper, we proposed a free deterministic equivalent for the capacity analysis of a MIMO MAC with a more general channel model compared to previous works. The analysis is based on operator-valued free probability theory.
We explained why the free deterministic equivalent method for the considered channel model is reasonable, and also showed how to obtain the free deterministic equivalent of the channel Gram matrix.
The obtained free deterministic equivalent is an operator-valued random variable.
Then, we derived the Cauchy transform of the free deterministic equivalent, the approximate Shannon transform and hence the approximate ergodic mutual information.
Furthermore, we maximized the approximate ergodic mutual information to obtain the sum-rate capacity achieving input covariance matrices. Simulation results showed that the approximations are not only numerically accurate but also computationally efficient. The results of this paper can be used to design optimal precoders and evaluate the capacity or ergodic mutual information for massive MIMO uplinks with multiple antenna users.
\appendices
\section{Prerequisites and Free Deterministic Equivalents}
Free probability theory was introduced by Voiculescu as a non-commutative probability theory equipped with a notion of freeness. Voiculescu pointed out that freeness should be seen as an analogue to independence in classical probability theory \cite{speicher2014free}. Operator-valued free probability theory was also presented by Voiculescu from the very beginning in \cite{voiculescu1985symmetries}. In this appendix, we briefly review definitions and results of free probability theory and operator-valued free probability theory, and introduce the free deterministic equivalents used in this paper with a rigorous mathematical justification.
\subsection{Free Probability and Operator-valued Free Probability}
\label{sec:Free Probability and Operator-valued Free Probability}
In this subsection, we briefly review definitions and results of free probability theory \cite{nica2006lectures, speicher2014free} and operator-valued free probability theory \cite{shlyakhtenko1998gaussian, speicher2014free, shlyakhtenko1996random, belinschi2013analytic, roland1998combinatorial}.
Let $\mathcal{A}$ be a unital algebra. A non-commutative probability space $(\mathcal{A},\phi)$ consists of $\mathcal{A}$ and a linear functional $\phi: \mathcal{A} \rightarrow \mathbb{C}$ satisfying $\phi(1)=1$. The elements of a non-commutative probability space are called non-commutative random variables. If $\mathcal{A}$ is also a $C^*$-algebra and $\phi(a^*a) \geq 0 $ for all $a \in \mathcal{A}$, then $(\mathcal{A},\phi)$ is a $C^*$-probability space. An element $a$ of $\mathcal{A}$ is called a selfadjoint random variable if $a = a^*$; an element $u$ of $\mathcal{A}$ is called a unitary random variable if $uu^* = u^*u=1$; an element $a$ of $\mathcal{A}$ is called a normal random variable if $aa^* = a^*a$.
Let $(\mathcal{A}, \phi)$ be a $C^*$-probability space and $a \in \mathcal{A}$ be a normal random variable. If there exists a compactly supported probability measure $\mu_a$ on $\mathbb{C}$ such that
\begin{equation}
\int z^k({z}^*)^l d \mu_a(z)=\phi(a^k(a^*)^l), k, l \in \mathbb{N}
\end{equation}
then $\mu_a$ is uniquely determined and called the $*$-distribution of $a$. If $a$ is selfadjoint, then $\mu_a$ is simply called the distribution of $a$.
Let $\mathcal{A}_1, \mathcal{A}_2, \cdots, \mathcal{A}_n$ be a family of unital subalgebras of $\mathcal{A}$ and $k$ be a positive integer. The subalgebras $\mathcal{A}_i$ are called free or freely independent, if $\phi(x_1x_2\cdots x_k) = 0$ for any $k$, whenever $\phi(x_j) = 0$ and $x_j \in \mathcal{A}_{i(j)}$ for all $j$, and $i(j) \neq i(j + 1)$ for $j = 1, \cdots , k-1$. Let $y_1,y_2,\cdots,y_n \in \mathcal{A}$. The non-commutative random variables $y_i$ are called free, if the unital subalgebras ${\rm alg}(1,y_i)$ are free, where ${\rm alg}(1,y_i)$ denotes the unital algebra generated by the random variable $y_i$.
Let $(\mathcal{A}, \phi)$ be a $C^*$-probability space, $s \in \mathcal{A}$ be a selfadjoint element and $r$ be a positive real number. If the distribution of $s$ is determined by \cite{nica1996multiplication}
\begin{equation}
\phi(s^n) = \frac{2}{\pi r^2}\int_{-r}^{r}t^n\sqrt{r^2 - t^2}dt
\end{equation}
then $s$ is a semicircular element of radius $r$.
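The even moments of a semicircular element are, up to scaling, the Catalan numbers: $\phi(s^{2n}) = C_n (r/2)^{2n}$ with $C_n=\frac{1}{n+1}\binom{2n}{n}$, while the odd moments vanish. A short numerical check of this well-known fact in Python (plain Riemann sum; the grid size is an arbitrary choice) reads:
\begin{verbatim}
import numpy as np
from math import comb

r = 2.0
t = np.linspace(-r, r, 200001)
w = 2.0 / (np.pi * r ** 2) * np.sqrt(np.maximum(r ** 2 - t ** 2, 0.0))
dt = t[1] - t[0]
for n in range(1, 5):
    moment = np.sum(t ** (2 * n) * w) * dt        # phi(s^{2n})
    catalan = comb(2 * n, n) // (n + 1)
    print(n, moment, catalan * (r / 2.0) ** (2 * n))
\end{verbatim}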
An element of the form $c=\frac{1}{\sqrt{2}}(s_1+is_2)$, where $s_1$ and $s_2$ are two freely independent semicircular elements with the same variance, is called a circular element.
Let $\mathcal{B} \subset \mathcal{A}$ be a unital subalgebra. A linear map $F: \mathcal{A} \rightarrow \mathcal{B}$ is a conditional expectation, if $F[b] = b$ for all $b \in \mathcal{B}$ and $F[b_1\boldsymbol{\mathcal{X}}b_2] = b_1F[\boldsymbol{\mathcal{X}}]b_2$ for all
$\boldsymbol{\mathcal{X}} \in \mathcal{A}$ and $b_1, b_2 \in \mathcal{B}$.
An operator-valued probability space $(\mathcal{A},F)$, also called $\mathcal{B}$-valued probability space, consists of $\mathcal{B} \subset \mathcal{A}$ and a conditional expectation $F: \mathcal{A} \rightarrow \mathcal{B}$. The elements of a $\mathcal{B}$-valued probability space are called $\mathcal{B}$-valued random variables. If in addition $\mathcal{A}$ is a $C^*$-algebra, $\mathcal{B}$ is a $C^*$-subalgebra and $F$ is completely positive, then $(\mathcal{A},F)$ is a $\mathcal{B}$-valued $C^*$-probability space. Let $\boldsymbol{\mathcal{X}}$ be a $\mathcal{B}$-valued random variable of $(\mathcal{A},F)$. The $\mathcal{B}$-valued distribution of $\boldsymbol{\mathcal{X}}$ is given by all $\mathcal{B}$-valued moments $F[\boldsymbol{\mathcal{X}}b_1\boldsymbol{\mathcal{X}}b_2 \cdots \boldsymbol{\mathcal{X}}b_{n-1}\boldsymbol{\mathcal{X}}]$, where $b_1,b_2,\cdots,b_{n-1} \in \mathcal{B}$.
We denote by $\mathbf{M}_n(\mathcal{P})$ the algebra of $n \times n$ complex random matrices.
The mathematical expectation operator $\mathbb{E}$ over $\mathbf{M}_n(\mathcal{P})$ is a conditional expectation from $\mathbf{M}_n(\mathcal{P})$ to $\mathcal{M}_n$. Thus, $(\mathbf{M}_n(\mathcal{P}), \mathbb{E})$ is an $\mathcal{M}_n$-valued $C^*$-probability space. Furthermore, $(\mathbf{M}_n(\mathcal{P}), \mathbb{E}_{\mathcal{D}_n})$ is a $\mathcal{D}_n$-valued probability space, and $(\mathbf{M}_n(\mathcal{P}), \frac{1}{n}{\rm{tr}} \circ \mathbb{E})$ or $(\mathbf{M}_n(\mathcal{P}), \frac{1}{n}{\rm{tr}} \circ \mathbb{E}_{\mathcal{D}_n})$ is a $C^*$-probability space. Let $\mathbf{X} \in \mathbf{M}_n(\mathcal{P})$ be a random Hermitian matrix. Then, $\mathbf{X}$ is at the same time an $\mathcal{M}_n$-valued, a $\mathcal{D}_n$-valued and a scalar valued $C^*$-random variable. The $\mathcal{M}_n$-valued distribution of $\mathbf{X}$ determines the $\mathcal{D}_n$-valued distribution of $\mathbf{X}$, which determines also the expected eigenvalue distribution of $\mathbf{X}$.
Let $\boldsymbol{\mathcal{X}}_1,\boldsymbol{\mathcal{X}}_2,\cdots,\boldsymbol{\mathcal{X}}_k \in (\mathcal{A},F)$ denote a family of $\mathcal{B}$-valued random variables and $n$ be a positive integer. Let $A_i$ denote a polynomial in some $\boldsymbol{\mathcal{X}}_{j(i)}$ with coefficients from $\mathcal{B}$, \textit{i.e.}, $A_i \in \mathcal{B}\langle \boldsymbol{\mathcal{X}}_{j(i)} \rangle$ for $i=1,2,\cdots,n$. The $\mathcal{B}$-valued random variables $\boldsymbol{\mathcal{X}}_i $ are free with amalgamation over $\mathcal{B}$, if $F(A_1A_2\cdots A_n) = 0$ for any $n$,
whenever $F(A_i) = 0$ for all $i$, and $j(i) \neq j(i + 1)$ for $i = 1, \cdots , n-1$.
Let $S(n)$ be the finite totally ordered set $\{1,2,\cdots,n\}$ and $V_i(1 \leq i \leq r)$ be pairwise disjoint subsets of $S(n)$. A set $\pi = \{V_1,V_2,\cdots,V_r\}$ is called a partition if $V_1 \cup V_2 \cdots \cup V_r = S(n)$. The subsets $V_1,V_2,\cdots,V_r$ are called blocks of $\pi$. The set of non-crossing partitions of $S(n)$ is denoted by $NC(n)$.
The $\mathcal{B}$-valued multiplicative maps $\{f_{\pi}^{\mathcal{B}}\}_{\pi \in NC(n)}:\mathcal{A}^n \rightarrow \mathcal{B}$ are defined recursively as
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!f_{\pi_1\sqcup\pi_2}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1, \boldsymbol{\mathcal{X}}_2, \cdots, \boldsymbol{\mathcal{X}}_n)
\nonumber \\
&=
f_{\pi_1}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1, \boldsymbol{\mathcal{X}}_2, \cdots, \boldsymbol{\mathcal{X}}_p) f_{\pi_2}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_{p+1}, \boldsymbol{\mathcal{X}}_{p+2}, \cdots, \boldsymbol{\mathcal{X}}_n)
\\
&\!\!\!\!f_{{\rm ins}(p,\pi_2\rightarrow\pi_1)}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1, \boldsymbol{\mathcal{X}}_2,
\cdots, \boldsymbol{\mathcal{X}}_n)
\nonumber \\
&=
f_{\pi_1}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1, \boldsymbol{\mathcal{X}}_2, \cdots,
\boldsymbol{\mathcal{X}}_p f_{\pi_2}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_{p+1},\boldsymbol{\mathcal{X}}_{p+2},
\cdots, \boldsymbol{\mathcal{X}}_{p+q}), \nonumber \\
&~~~~~~~~~~~~~~~~~\boldsymbol{\mathcal{X}}_{p+q+1},\boldsymbol{\mathcal{X}}_{p+q+2},\cdots,\boldsymbol{\mathcal{X}}_n)
\end{IEEEeqnarray}
where $\pi_1$ and $\pi_2$ are two non-crossing partitions, $\pi_1\sqcup\pi_2$ denotes the disjoint union with $\pi_2$ after $\pi_1$, and ${\rm ins}(p,\pi_2\rightarrow\pi_1)$ denotes the partition obtained from $\pi_1$ by inserting the partition $\pi_2$ after the $p$-th element of the set on which $\pi_1$ determines a partition. Let $\mathbf{1}_n$ denote $\{\{1,2,\cdots,n\}\}$, $\mathbf{0}_n$ denote $\{\{1\},\{2\},\cdots,\{n\}\}$ and $f_{n}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1,\boldsymbol{\mathcal{X}}_2,\cdots,\boldsymbol{\mathcal{X}}_n)$ denote $
f_{\mathbf{1}_n}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1,\boldsymbol{\mathcal{X}}_2,\cdots,\boldsymbol{\mathcal{X}}_n)$.
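For example, the non-crossing partition $\{\{1,3\},\{2\}\} \in NC(3)$ arises as ${\rm ins}(1,\mathbf{1}_1\rightarrow\mathbf{1}_2)$, so the second recursion yields the nested expression
\begin{equation}
f_{\{\{1,3\},\{2\}\}}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1,\boldsymbol{\mathcal{X}}_2,\boldsymbol{\mathcal{X}}_3) = f_{2}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1 f_{1}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_2),\boldsymbol{\mathcal{X}}_3). \nonumber
\end{equation}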
Let $\nu_{\pi}^{\mathcal{B}}:\mathcal{A}^n \rightarrow \mathcal{B}$ be defined by $\nu_{n}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1,\boldsymbol{\mathcal{X}}_2,\cdots,\boldsymbol{\mathcal{X}}_n)
=F(\boldsymbol{\mathcal{X}}_1\boldsymbol{\mathcal{X}}_2\cdots\boldsymbol{\mathcal{X}}_n)$. The $\mathcal{B}$-valued cumulants $\kappa_{\pi}^{\mathcal{B}}:\mathcal{A}^n \rightarrow \mathcal{B}$, which are also $\mathcal{B}$-valued multiplicative maps, are defined indirectly and inductively by
\begin{equation}
F(\boldsymbol{\mathcal{X}}_1\boldsymbol{\mathcal{X}}_2\cdots\boldsymbol{\mathcal{X}}_n) = \sum\limits_{\pi \in NC(n)} \kappa_{\pi}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1, \boldsymbol{\mathcal{X}}_2, \cdots, \boldsymbol{\mathcal{X}}_{n}).
\label{eq:operator_valued_moments_from_cumulants}
\end{equation}
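For example, for $n=2$ we have $NC(2)=\{\mathbf{1}_2,\mathbf{0}_2\}$, so \eqref{eq:operator_valued_moments_from_cumulants} reads
\begin{equation}
F(\boldsymbol{\mathcal{X}}_1\boldsymbol{\mathcal{X}}_2) = \kappa_{2}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1,\boldsymbol{\mathcal{X}}_2) + \kappa_{1}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1)\kappa_{1}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_2) \nonumber
\end{equation}
and hence $\kappa_{2}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1,\boldsymbol{\mathcal{X}}_2) = F(\boldsymbol{\mathcal{X}}_1\boldsymbol{\mathcal{X}}_2) - F(\boldsymbol{\mathcal{X}}_1)F(\boldsymbol{\mathcal{X}}_2)$, since $\kappa_{1}^{\mathcal{B}} = \nu_{1}^{\mathcal{B}} = F$.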
Furthermore, the $\mathcal{B}$-valued cumulants can be obtained from the $\mathcal{B}$-valued moments by
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\kappa_{\pi}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1, \boldsymbol{\mathcal{X}}_2, \cdots, \boldsymbol{\mathcal{X}}_{n})
\nonumber \\
&= \sum\limits_{\sigma \leq \pi, \sigma \in NC(n)} \nu_{\sigma}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1, \boldsymbol{\mathcal{X}}_2, \cdots, \boldsymbol{\mathcal{X}}_{n})\mu(\sigma,\pi)
\label{eq:computation_of_opeartor-valued_cumulant_for_any_partition_from_moment}
\end{IEEEeqnarray}
where $\sigma \leq \pi$ denotes that each block of $\sigma$ is completely contained in one of the blocks of
$\pi$, and $\mu(\sigma,\pi)$ is the M\"{o}bius function over the non-crossing partition set $NC(n)$.
Freeness over $\mathcal{B}$ can also be defined by using the $\mathcal{B}$-valued cumulants. Let $S_1, S_2$ be two subsets of $\mathcal{A}$ and ${\mathcal{A}}_i$ be the algebra generated by $S_i$ and $\mathcal{B}$ for $i = 1, 2$. Then ${\mathcal{A}}_1$ and ${\mathcal{A}}_2$ are free with amalgamation over $\mathcal{B}$ if and only if whenever
$\boldsymbol{\mathcal{X}}_1, \cdots ,\boldsymbol{\mathcal{X}}_n \in S_1 \bigcup S_2$,
\begin{equation}
\kappa_n^{\mathcal{B}}(\boldsymbol{\mathcal{X}}_1, \cdots ,\boldsymbol{\mathcal{X}}_n) = 0
\end{equation}
unless either all $\boldsymbol{\mathcal{X}}_1, \cdots ,\boldsymbol{\mathcal{X}}_n \in S_1 $ or all $\boldsymbol{\mathcal{X}}_1, \cdots ,\boldsymbol{\mathcal{X}}_n \in S_2$.
Let $(\mathcal{A}, \phi)$ be a non-commutative probability space and $d$ be a positive integer. A matrix $\mathbf{A} \in \mathbf{M}_d(\mathcal{A})$ is said to be R-cyclic if the following condition holds:
$\kappa_n^{\mathbb{C}}([\mathbf{A}]_{i_1j_1}, \cdots, [\mathbf{A}]_{i_nj_n})=0$,
for every $n \geq 1$ and every $1 \leq i_1 , j_1 , \cdots, i_n , j_n \leq d$ for which it is not true that
$j_1=i_2, \cdots, j_{n-1}=i_n, j_n=i_1$ \cite{nica2002r}.
Let the operator upper half plane $\mathbb{H}_{+}(\mathcal{B})$ be defined by $\mathbb{H}_{+}(\mathcal{B}) = \{b \in \mathcal{B}: \Im(b) \succ 0\}$. For a selfadjoint random variable $\boldsymbol{\mathcal{X}} \in \mathcal{A}$ and $b \in \mathbb{H}_{+}(\mathcal{B})$, the $\mathcal{B}$-valued Cauchy transform $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{B}}(b)$ is defined by
\begin{eqnarray}
\mathcal{G}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}(b) \!\!\!\!&=&\!\!\!\! F\{(b-\boldsymbol{\mathcal{X}})^{-1}\}
\nonumber \\
\!\!\!\!&=&\!\!\!\! \sum\limits_{n\geq0}F\{b^{-1}(\boldsymbol{\mathcal{X}}b^{-1})^n\}, \quad \|b^{-1}\| < \|\boldsymbol{\mathcal{X}}\|^{-1}.
\end{eqnarray}
Let the operator lower half plane $\mathbb{H}_{-}(\mathcal{B})$ be defined by $\mathbb{H}_{-}(\mathcal{B}) = \{b \in \mathcal{B}: \Im(b) \prec 0\}$. We have that $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}(b) \in \mathbb{H}_{-}(\mathcal{B})$.
The $\mathcal{B}$-valued R-transform of $\boldsymbol{\mathcal{X}}$ is defined by
\begin{equation}
\mathcal{R}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}(b) = \sum\limits_{n\geq0}\kappa_{n+1}^{\mathcal{B}}(\boldsymbol{\mathcal{X}}b, \cdots, \boldsymbol{\mathcal{X}}b, \boldsymbol{\mathcal{X}}b,\boldsymbol{\mathcal{X}})
\label{eq:definition_of_R_transform}
\end{equation}
where $b \in \mathbb{H}_{-}(\mathcal{B})$.
Let $\boldsymbol{\mathcal{X}}$ and $\boldsymbol{\mathcal{Y}}$ be two $\mathcal{B}$-valued random variables.
The $\mathcal{B}$-valued freeness relation between $\boldsymbol{\mathcal{X}}$ and $\boldsymbol{\mathcal{Y}}$ is actually a rule for calculating the mixed $\mathcal{B}$-valued moments in $\boldsymbol{\mathcal{X}}$ and $\boldsymbol{\mathcal{Y}}$ from the $\mathcal{B}$-valued moments of $\boldsymbol{\mathcal{X}}$ and the $\mathcal{B}$-valued moments of $\boldsymbol{\mathcal{Y}}$. Furthermore, if $\boldsymbol{\mathcal{X}}$ and $\boldsymbol{\mathcal{Y}}$ are free over $\mathcal{B}$, then their mixed $\mathcal{B}$-valued cumulants in $\boldsymbol{\mathcal{X}}$ and $\boldsymbol{\mathcal{Y}}$ vanish. This further implies
\begin{equation}
\mathcal{R}_{\boldsymbol{\mathcal{X}}+\boldsymbol{\mathcal{Y}}}^\mathcal{B}(b) = \mathcal{R}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}(b) + \mathcal{R}_{\boldsymbol{\mathcal{Y}}}^\mathcal{B}(b).
\label{eq:operator_r_transform_of_sum_of_free_varaible}
\end{equation}
The relation between the $\mathcal{B}$-valued Cauchy transform and R-transform is given by
\begin{equation}
\mathcal{R}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}(b) = {\mathcal{G}^\mathcal{B}_{\boldsymbol{\mathcal{X}}}}^{\langle-1\rangle}(b) - b^{-1}
\label{eq:operator_r_transform_cauchy_transform_relation}
\end{equation}
where ${\mathcal{G}^\mathcal{B}_{\boldsymbol{\mathcal{X}}}}^{\langle-1\rangle}: \mathbb{H}_{-}(\mathcal{B}) \rightarrow \mathbb{H}_{+}(\mathcal{B}) $ is the inverse function of $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}$.
According to \eqref{eq:operator_r_transform_cauchy_transform_relation}, \eqref{eq:operator_r_transform_of_sum_of_free_varaible} becomes
\begin{equation}
{\mathcal{G}_{\boldsymbol{\mathcal{X}}+\boldsymbol{\mathcal{Y}}}^\mathcal{B}}^{\langle-1\rangle}(b) - b^{-1} = {\mathcal{G}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}}^{\langle-1\rangle}(b) - b^{-1} + \mathcal{R}_{\boldsymbol{\mathcal{Y}}}^\mathcal{B}(b).
\label{eq:Cauchy_transform_of_sum_of_two_varaiable_temp1}
\end{equation}
By substituting $\mathcal{G}_{\boldsymbol{\mathcal{X}}+\boldsymbol{\mathcal{Y}}}^\mathcal{B}(b)$ for each $b$, \eqref{eq:Cauchy_transform_of_sum_of_two_varaiable_temp1} becomes
\begin{equation}
b = {\mathcal{G}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}}^{\langle-1\rangle}
\left(\mathcal{G}_{\boldsymbol{\mathcal{X}}+\boldsymbol{\mathcal{Y}}}^\mathcal{B}(b)\right) + \mathcal{R}_{\boldsymbol{\mathcal{Y}}}^\mathcal{B}\left(\mathcal{G}_{\boldsymbol{\mathcal{X}} +
\boldsymbol{\mathcal{Y}}}^\mathcal{B}(b)\right)
\end{equation}
which further leads to
\begin{equation}
\mathcal{G}_{\boldsymbol{\mathcal{X}}+\boldsymbol{\mathcal{Y}}}^\mathcal{B}(b) = \mathcal{G}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}\left(b - \mathcal{R}_{\boldsymbol{\mathcal{Y}}}^\mathcal{B}\left(\mathcal{G}_{\boldsymbol{\mathcal{X}}
+\boldsymbol{\mathcal{Y}}}^\mathcal{B}(b)\right)\right).
\label{eq:operator_cauchy_transform_of_sum_of_free_varaible}
\end{equation}
A $\mathcal{B}$-valued random variable $\boldsymbol{\mathcal{X}} \in \mathcal{A}$ is called a $\mathcal{B}$-valued semicircular variable if its $\mathcal{B}$-valued R-transform is given by
\begin{equation}
\mathcal{R}_{\boldsymbol{\mathcal{X}}}^\mathcal{B}(b) = \kappa_{2}^\mathcal{B}(\boldsymbol{\mathcal{X}}b,\boldsymbol{\mathcal{X}}).
\end{equation}
According to \eqref{eq:operator_valued_moments_from_cumulants} and \eqref{eq:definition_of_R_transform}, the higher order $\mathcal{B}$-valued moments of $\boldsymbol{\mathcal{X}}$ are given in terms of the second order moments by summing over the non-crossing pair partitions.
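In the scalar-valued case $\mathcal{B}=\mathbb{C}$, the subordination relation \eqref{eq:operator_cauchy_transform_of_sum_of_free_varaible} can be evaluated numerically by a fixed-point iteration. The following Python sketch is a minimal illustration under our own (hypothetical) naming conventions: for two free semicircular elements, for which $\mathcal{R}_{\boldsymbol{\mathcal{Y}}}^{\mathbb{C}}(b)=\sigma_y^2 b$ by the definition above, it iterates \eqref{eq:operator_cauchy_transform_of_sum_of_free_varaible} and compares the result with the closed-form Cauchy transform of a semicircular element of variance $\sigma_x^2+\sigma_y^2$.
\begin{verbatim}
import numpy as np

def g_semicircle(z, var):
    # Cauchy transform of a semicircular element of
    # variance var; the square-root branch is chosen
    # so that G(z) ~ 1/z at infinity
    s = np.sqrt(z * z - 4.0 * var)
    if s.imag * z.imag < 0:
        s = -s
    return (z - s) / (2.0 * var)

def g_free_sum(z, var_x, var_y, iters=500):
    # damped fixed-point iteration of the relation
    # G_{X+Y}(z) = G_X(z - R_Y(G_{X+Y}(z))),
    # with R_Y(b) = var_y * b for semicircular Y
    g = 1.0 / z
    for _ in range(iters):
        g = 0.5 * g + 0.5 * g_semicircle(z - var_y * g,
                                         var_x)
    return g

z = 0.7 + 1.5j
print(g_free_sum(z, 1.0, 2.0))  # subordination solution
print(g_semicircle(z, 3.0))     # variance 1 + 2 = 3
\end{verbatim}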
Let $\boldsymbol{\mathcal{X}}_1,\boldsymbol{\mathcal{X}}_2,\cdots, \boldsymbol{\mathcal{X}}_n$ be a family of $\mathcal{B}$-valued random variables. The maps $\eta_{ij}:\mathcal{B} \rightarrow \mathcal{B}$ defined by
\begin{equation}
\eta_{ij}(\mathbf{C}) = F\{\boldsymbol{\mathcal{X}}_i\mathbf{C}\boldsymbol{\mathcal{X}}_j\} \nonumber
\end{equation}
are called the covariances of the family, where $\mathbf{C} \in \mathcal{B}$.
\subsection{Free Deterministic Equivalents}
\label{sec:Free Deterministic Equivalent}
In this subsection, we introduce the free deterministic equivalents for the case where all the matrices are square and have the same size, and the random matrices are Hermitian and composed of independent Gaussian entries with different variances.
Let $\mathbf{Y}_1,\mathbf{Y}_2,\cdots,\mathbf{Y}_t$ be a $t$-tuple of $n \times n$ Hermitian random matrices. The entries $[\mathbf{Y}_k]_{ij}$ are Gaussian random variables. For fixed $k$, the entries $[\mathbf{Y}_k]_{ij}$ on and above the diagonal are independent, and $[\mathbf{Y}_k]_{ij}=[\mathbf{Y}_k]_{ji}^*$. Moreover, the entries from different matrices are also independent. Let $\frac{1}{n}\sigma_{ij,k}^2(n)$ denote the variance of $[\mathbf{Y}_k]_{ij}$. Then, we have $\sigma_{ij,k}(n)=\sigma_{ji,k}(n)$ and
\begin{equation}
\mathbb{E}\{[\mathbf{Y}_k]_{ij}[\mathbf{Y}_l]_{rs}\} = \frac{1}{n}\sigma_{ij,k}(n)\sigma_{rs,l}(n)\delta_{jr}\delta_{is}\delta_{kl}
\label{eq:variance_of_matrices_entries_for_shlyakhtenko1996random}
\end{equation}
where $1 \leq k,l \leq t$ and $1 \leq i,j,r,s \leq n $.
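A random matrix with this covariance structure can be sampled by scaling a standard Hermitian Gaussian matrix entrywise with the variance profile. The following Python sketch is a minimal illustration under our own naming conventions (the helper names are hypothetical, not from any library); it generates such a matrix and checks \eqref{eq:variance_of_matrices_entries_for_shlyakhtenko1996random} for one pair of entries by Monte Carlo.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, trials = 4, 20000
sigma = rng.uniform(0.5, 1.5, (n, n))
sigma = 0.5 * (sigma + sigma.T)  # sigma_{ij}=sigma_{ji}

def sample_y():
    # standard Hermitian Gaussian: complex entries above
    # the diagonal, real entries on it, then an entrywise
    # variance profile and the 1/sqrt(n) normalization
    g = (rng.standard_normal((n, n))
         + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
    y = np.triu(g, 1)
    y = y + y.conj().T + np.diag(rng.standard_normal(n))
    return sigma * y / np.sqrt(n)

acc = 0.0 + 0.0j
for _ in range(trials):
    y = sample_y()
    acc += y[0, 1] * y[1, 0]
print(acc / trials)           # Monte Carlo estimate
print(sigma[0, 1] ** 2 / n)   # predicted covariance
\end{verbatim}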
Let $\mathbf{A}_1,\mathbf{A}_2,\cdots,\mathbf{A}_s$ be a family of $n \times n$ deterministic matrices
and
\begin{equation} P_c := P(\mathbf{A}_1,\mathbf{A}_2,\cdots,\mathbf{A}_s,{\boldsymbol{\mathbf{Y}}}_{1},{\boldsymbol{\mathbf{Y}}}_{2},\cdots,{\boldsymbol{\mathbf{Y}}}_t) \nonumber
\end{equation}
be a selfadjoint polynomial. In the following, we will give the definition of
the free deterministic equivalent of $P_c$.
Let $\mathcal{A}$ be a unital algebra, $(\mathcal{A},\phi)$ be a scalar-valued probability space and $\boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2,\cdots, \boldsymbol{\mathcal{Y}}_t \in \mathbf{M}_n(\mathcal{A})$ be a family of selfadjoint matrices with non-commutative random variables.
The entries $[\boldsymbol{\mathcal{Y}}_{k}]_{ii}$ are centered semicircular elements, and the entries $[\boldsymbol{\mathcal{Y}}_{k}]_{ij}, i \neq j$, are centered circular elements. The variance of the entry $[\boldsymbol{\mathcal{Y}}_{k}]_{ij}$ is given by
\begin{equation}
\phi([\boldsymbol{\mathcal{Y}}_{k}]_{ij}[\boldsymbol{\mathcal{Y}}_{k}]_{ij}^*) = \mathbb{E}\{[\mathbf{Y}_{k}]_{ij}[\mathbf{Y}_{k}]_{ij}^*\}. \nonumber
\end{equation}
Moreover, the entries on and above the diagonal
of $\boldsymbol{\mathcal{Y}}_{k}$ are free, and the entries from different $\boldsymbol{\mathcal{Y}}_{k}$ are also
free. Thus, we have
\begin{equation}
\phi([\boldsymbol{\mathcal{Y}}_k]_{ij}[\boldsymbol{\mathcal{Y}}_l]_{rs}) = \mathbb{E}\{[\mathbf{Y}_k]_{ij}[\mathbf{Y}_l]_{rs}\} \nonumber
\end{equation}
where $k \ne l$, $1 \leq k,l \leq t$ and $1 \leq i,j,r,s \leq n $.
According to Definition $2.9$ of \cite{nica2002r}, $\boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$ form an R-cyclic family of matrices.
Then, from Theorem $8.2$ of \cite{nica2002r} it follows that $\mathcal{M}_n, \boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$ are free over $\mathcal{D}_n$. According to Theorem 7.2 of \cite{nica2002r}, we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\kappa_t^{\mathcal{D}_n}(\boldsymbol{\mathcal{Y}}_{k}\mathbf{C}_1,
\cdots,\boldsymbol{\mathcal{Y}}_{k}\mathbf{C}_{t-1},\boldsymbol{\mathcal{Y}}_{k})
\nonumber \\
&=
\sum\limits_{i_1,\cdots,i_t=1}^{n}[\mathbf{C}_1]_{i_2i_2}\cdots[\mathbf{C}_{t-1}]_{i_{t}i_{t}}
\nonumber \\
&
~~~~~~~~~~~~\kappa_t^{\mathbb{C}}([\boldsymbol{\mathcal{Y}}_{k}]_{i_1i_2},[\boldsymbol{\mathcal{Y}}_{k}]_{i_2i_3},\cdots, [\boldsymbol{\mathcal{Y}}_{k}]_{i_{t}i_1})\mathbf{P}_{i_1}
\end{IEEEeqnarray}
where $\mathbf{C}_1, \cdots, \mathbf{C}_{t-1} \in \mathcal{D}_n$ and $\mathbf{P}_{i_1}$ denotes the $n \times n$ matrix containing zeros in all entries except for the $i_1$-th diagonal entry, which is $1$. Since the entries on and above the diagonal of $\boldsymbol{\mathcal{Y}}_{k}$ are a family of free (semi)circular elements and $[\boldsymbol{\mathcal{Y}}_{k}]_{ij}=[\boldsymbol{\mathcal{Y}}_{k}]_{ji}^*$, we have
\begin{equation}
\kappa_t^{\mathbb{C}}([\boldsymbol{\mathcal{Y}}_{k}]_{i_1i_2},[\boldsymbol{\mathcal{Y}}_{k}]_{i_2i_3},\cdots , [\boldsymbol{\mathcal{Y}}_{k}]_{i_ti_1})=0 \nonumber
\end{equation}
unless $t=2$. Then, we obtain
\begin{equation}
\kappa_t^{\mathcal{D}_n}(\boldsymbol{\mathcal{Y}}_{k}\mathbf{C}_1,
\cdots,\boldsymbol{\mathcal{Y}}_{k}\mathbf{C}_{t-1},\boldsymbol{\mathcal{Y}}_{k})=\mathbf{0}_n \nonumber
\end{equation}
unless $t=2$.
Thus, $\boldsymbol{\mathcal{Y}}_{1}, \cdots, \boldsymbol{\mathcal{Y}}_{t}$ are $\mathcal{D}_n$-valued semicircular elements.
In \cite{shlyakhtenko1996random}, Shlyakhtenko has proved that $\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$ are asymptotically free over $L^{\infty}[0, 1]$, and that the asymptotic $L^{\infty}[0, 1]$-valued joint distribution of $\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$ and that of $\boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2,\cdots, \boldsymbol{\mathcal{Y}}_t$ are the same. However, the proof in \cite{shlyakhtenko1996random} is based on operator-algebraic arguments and may be difficult to follow. Thus, we present Theorem \ref{th:diagonal_valued_free_results} in the following and give a self-contained proof.
\begin{assumption}
\label{assump:variance_bounded}
The variances $\sigma_{ij,k}(n)$ are uniformly bounded in $n$.
\end{assumption}
Let $\psi_{k}[n]:\mathcal{D}_n \rightarrow \mathcal{D}_n$ be defined by
$\psi_{k}[n](\mathbf{\Delta}_n)=\mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_k\mathbf{\Delta}_n\mathbf{Y}_k\}$,
where $\mathbf{\Delta}_n \in \mathcal{D}_n$.
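Written out entrywise, it follows from \eqref{eq:variance_of_matrices_entries_for_shlyakhtenko1996random} that this map acts as
\begin{equation}
[\psi_{k}[n](\mathbf{\Delta}_n)]_{ii} = \frac{1}{n}\sum\limits_{j=1}^{n}\sigma_{ij,k}^2(n)[\mathbf{\Delta}_n]_{jj}. \nonumber
\end{equation}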
\begin{assumption}
There exist maps $\psi_{k}: L^{\infty}[0, 1] \rightarrow L^{\infty}[0, 1]$ such that whenever $i_n(\mathbf{\Delta}_n) \rightarrow d \in L^{\infty}[0, 1]$ in norm, then also $\lim_{n\rightarrow \infty}i_n(\psi_{k}[n](\mathbf{\Delta}_n)) = \psi_{k}(d)$.
\label{assump:variance_operator_valued_limit}
\end{assumption}
\begin{theorem}
Let $m$ be a positive integer. Assume that Assumption \ref{assump:variance_bounded} holds. Then we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\lim\limits_{n \rightarrow \infty} i_n (\mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_{p_1}\mathbf{C}_{1}\cdots\mathbf{Y}_{p_{m-1}} \mathbf{C}_{m-1}\mathbf{Y}_{p_m}\}
\nonumber \\
&- E_{\mathcal{D}_n}\{\boldsymbol{\mathcal{Y}}_{p_1}\mathbf{C}_{1}\cdots\boldsymbol{\mathcal{Y}}_{p_{m-1}} \mathbf{C}_{m-1}\boldsymbol{\mathcal{Y}}_{p_m}\})=0_{L^{\infty}[0, 1]}
\end{IEEEeqnarray}
where $1 \leq p_1,\cdots, p_m \leq t$
and $\mathbf{C}_1,\cdots,\mathbf{C}_{m-1}$ is a family of $n \times n$ deterministic diagonal matrices with uniformly bounded entries.
Furthermore, if Assumption \ref{assump:variance_operator_valued_limit} holds, then
$\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$ are asymptotically free over $L^{\infty}[0, 1]$.
\label{th:diagonal_valued_free_results}
\end{theorem}
\begin{IEEEproof}
In \cite{nica2006lectures}, a proof of asymptotic freeness between Gaussian random
matrices is presented. Extending the proof therein, we obtain the following results.
We first prove the special case when $p_1=p_2=\cdots=p_m=k$, \textit{i.e.},
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\!\!\!\!\lim\limits_{n \rightarrow \infty} i_n( \mathbb{E}_{\mathcal{D}_n}\{ \mathbf{Y}_k\mathbf{C}_{1}\cdots\mathbf{Y}_k\mathbf{C}_{m-1}\mathbf{Y}_k\}
\nonumber \\
&- E_{\mathcal{D}_n}\{ \boldsymbol{\mathcal{Y}}_k\mathbf{C}_{1}\cdots\boldsymbol{\mathcal{Y}}_k\mathbf{C}_{m-1} \boldsymbol{\mathcal{Y}}_k\})= 0_{L^{\infty}[0, 1]}.
\label{eq:limit_moments_of_random_matrix_and_deterministic_diagonal_matrix_equal_to_free_deterministic_equivalent}
\end{IEEEeqnarray}
The ${\mathcal{D}_n}$-valued moment $\mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_k\mathbf{C}_{1}\cdots\mathbf{Y}_k \mathbf{C}_{m-1}\mathbf{Y}_k\}$ is given by
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\!\mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_k\mathbf{C}_{1}\cdots\mathbf{Y}_k \mathbf{C}_{m-1}\mathbf{Y}_k\} \nonumber \\
&=\sum\limits_{i_1,\cdots,i_m=1}^n \mathbb{E}\{[\mathbf{Y}_k]_{i_1i_2}[\mathbf{C}_{1}]_{i_2i_2}\cdots[\mathbf{Y}_k]_{i_{m-1}i_{m}}
\nonumber \\
&~~~~~~~~~~~~~~~~[\mathbf{C}_{m-1}]_{i_mi_m}[\mathbf{Y}_k]_{i_mi_1}\} \mathbf{P}_{i_1} \nonumber \\
&=\sum\limits_{i_1,\cdots,i_m=1}^n \mathbb{E}\{[\mathbf{Y}_k]_{i_1i_2}\cdots [\mathbf{Y}_k]_{i_{m-1}i_{m}}[\mathbf{Y}_k]_{i_mi_1}\}\nonumber \\
&~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_mi_m} \mathbf{P}_{i_1}.
\label{eq:moments_of_matrix_yk}
\end{IEEEeqnarray}
According to the Wick formula (Theorem 22.3 of \cite{nica2006lectures}), we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\mathbb{E}\{[\mathbf{Y}_k]_{i_1i_2}\cdots [\mathbf{Y}_k]_{i_{m-1}i_m}[\mathbf{Y}_k]_{i_mi_1}\}
\nonumber \\
&=\sum\limits_{\pi \in \mathcal{P}_2(m)}
\prod \limits_{(r,s) \in \pi}\mathbb{E}\{[\mathbf{Y}_k]_{i_ri_{\gamma(r)}}[\mathbf{Y}_k]_{i_si_{\gamma(s)}}\}
\end{IEEEeqnarray}
where $\mathcal{P}_2(m)$ denotes the set of pair partitions of $S(m)$, and $\gamma$ is the cyclic permutation of $S(m)$ defined by $\gamma(i)=i+1$ for $1 \leq i \leq m-1$ and $\gamma(m)=1$.
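The scalar prototype of the Wick formula states that a mixed moment of centered jointly Gaussian variables equals the sum over all pairings of products of covariances. The following Python sketch, a purely illustrative Monte Carlo check (not part of the proof), verifies the fourth-moment case $\mathbb{E}\{x_1x_2x_3x_4\} = \mathbb{E}\{x_1x_2\}\mathbb{E}\{x_3x_4\} + \mathbb{E}\{x_1x_3\}\mathbb{E}\{x_2x_4\} + \mathbb{E}\{x_1x_4\}\mathbb{E}\{x_2x_3\}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.3, 0.2, 0.1],
                [0.3, 1.0, 0.2, 0.1],
                [0.2, 0.2, 1.0, 0.3],
                [0.1, 0.1, 0.3, 1.0]])
x = rng.multivariate_normal(np.zeros(4), cov,
                            size=1000000)
lhs = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
rhs = (cov[0, 1] * cov[2, 3] + cov[0, 2] * cov[1, 3]
       + cov[0, 3] * cov[1, 2])
print(lhs, rhs)  # agree up to Monte Carlo error
\end{verbatim}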
Then, \eqref{eq:moments_of_matrix_yk} can be rewritten as
\begin{IEEEeqnarray}{Rl}
&\!\!\mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_k\mathbf{C}_{1}\cdots\mathbf{Y}_k \mathbf{C}_{m-1}\mathbf{Y}_k\} \nonumber \\
&=\!\!\sum\limits_{i_1,\cdots,i_m=1}^n \sum\limits_{\pi \in \mathcal{P}_2(m)}
\!\!\left(\prod \limits_{(r,s) \in \pi}\!\!\mathbb{E}\{[\mathbf{Y}_k]_{i_ri_{\gamma(r)}}[\mathbf{Y}_k]_{i_si_{\gamma(s)}}\}\right)
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2} \cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1} \nonumber \\
&=\!\!\sum\limits_{\pi \in NC_2(m)} \sum\limits_{i_1,\cdots,i_m=1}^n
\!\!\left(\prod \limits_{(r,s) \in \pi}\!\!\mathbb{E}\{[\mathbf{Y}_k]_{i_ri_{\gamma(r)}}[\mathbf{Y}_k]_{i_si_{\gamma(s)}}\}\right)
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2} \cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1} \nonumber \\
&~~~~+\!\!\!\!\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}}\!\! \sum\limits_{i_1,\cdots,i_m=1}^n
\!\!\left(\prod \limits_{(r,s) \in \pi}\!\!\mathbb{E}\{[\mathbf{Y}_k]_{i_ri_{\gamma(r)}}[\mathbf{Y}_k]_{i_si_{\gamma(s)}}\}\right)
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2} \cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}
\label{eq:mix_moments_of_matrix_w_and_deterministic_diagonal_matrix}
\end{IEEEeqnarray}
where $NC_2(m) \subset \mathcal{P}_2(m)$ denotes the set of non-crossing pair partitions of $S(m)$.
Meanwhile, the ${\mathcal{D}_n}$-valued moment $E_{\mathcal{D}_n}\{\boldsymbol{\mathcal{Y}}_k\mathbf{C}_{1}\cdots \boldsymbol{\mathcal{Y}}_k\mathbf{C}_{m-1}\boldsymbol{\mathcal{Y}}_k\}$ is given by
\begin{IEEEeqnarray}{Rl}
& \!\!\!\!\!\!\!\!\!\!\!\!E_{\mathcal{D}_n}\{\boldsymbol{\mathcal{Y}}_k\mathbf{C}_{1}\cdots \boldsymbol{\mathcal{Y}}_k\mathbf{C}_{m-1}\boldsymbol{\mathcal{Y}}_k\} \nonumber \\
&=\sum\limits_{i_1,\cdots,i_m=1}^n \phi([\boldsymbol{\mathcal{Y}}_k]_{i_1i_2}[\mathbf{C}_{1}]_{i_2i_2}\cdots [\boldsymbol{\mathcal{Y}}_k]_{i_{m-1}i_{m}}
\nonumber \\
&~~~~~~~~~~~~~~~~[\mathbf{C}_{m-1}]_{i_mi_m}[\boldsymbol{\mathcal{Y}}_k]_{i_mi_1}) \mathbf{P}_{i_1}
\nonumber \\
&=\sum\limits_{i_1,\cdots,i_m=1}^n \phi([\boldsymbol{\mathcal{Y}}_k]_{i_1i_2}\cdots [\boldsymbol{\mathcal{Y}}_k]_{i_{m-1}i_{m}}[\boldsymbol{\mathcal{Y}}_k]_{i_mi_1})
\nonumber \\
&~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2}\cdots[\mathbf{C}_{m-1}]_{i_mi_m} \mathbf{P}_{i_1}.
\label{eq:moments_of_free_matrix_yk}
\end{IEEEeqnarray}
The entries of $\boldsymbol{\mathcal{Y}}_k$ are a family of semicircular and circular elements.
From (8.8) and (8.9) in \cite{nica2006lectures}, we obtain
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\phi([\boldsymbol{\mathcal{Y}}_k]_{i_1i_2}\cdots [\boldsymbol{\mathcal{Y}}_k]_{i_{m-1}i_{m}}[\boldsymbol{\mathcal{Y}}_k]_{i_mi_1})
\nonumber \\
&= \sum\limits_{\pi \in NC_2(m)}\kappa_{\pi}^{\mathbb{C}}([\boldsymbol{\mathcal{Y}}_k]_{i_1i_2},\cdots, [\boldsymbol{\mathcal{Y}}_k]_{i_{m-1}i_{m}},[\boldsymbol{\mathcal{Y}}_k]_{i_mi_1})
\nonumber \\
&=
\sum\limits_{\pi \in NC_2(m)}
\prod \limits_{(r,s) \in \pi}\phi([\boldsymbol{\mathcal{Y}}_k]_{i_ri_{\gamma(r)}}[\boldsymbol{\mathcal{Y}}_k]_{i_si_{\gamma(s)}}).
\end{IEEEeqnarray}
Then, $\eqref{eq:moments_of_free_matrix_yk}$ can be rewritten as
\begin{eqnarray}
& &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!E_{\mathcal{D}_n}\{\boldsymbol{\mathcal{Y}}_k\mathbf{C}_{1} \cdots\boldsymbol{\mathcal{Y}}_k\mathbf{C}_{m-1}\boldsymbol{\mathcal{Y}}_k\} \nonumber \\
&=&\!\!\!\!\!\!\!\!\sum\limits_{\pi \in NC_2(m)} \sum\limits_{i_1,\cdots,i_m=1}^n
\left(\prod \limits_{(r,s) \in \pi}
\phi([\boldsymbol{\mathcal{Y}}_k]_{i_ri_{\gamma(r)}}[\boldsymbol{\mathcal{Y}}_k]_{i_si_{\gamma(s)}})\right)
\nonumber \\
&&~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_mi_m} \mathbf{P}_{i_1}.
\label{eq:mix_moments_of_free_matrix_w_and_deterministic_diagonal_matrix}
\end{eqnarray}
If $m$ is odd, then $\mathcal{P}_2(m)$ and $NC_2(m)$ are empty sets, and hence both $\mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_k\mathbf{C}_{1}\cdots\mathbf{Y}_k\mathbf{C}_{m-1}\mathbf{Y}_k\}$ and $E_{\mathcal{D}_n}\{ \boldsymbol{\mathcal{Y}}_k \mathbf{C}_{1} \cdots \boldsymbol{\mathcal{Y}}_k \mathbf{C}_{m-1} \boldsymbol{\mathcal{Y}}_k\}$
are equal to zero matrices. Thus, we assume that $m$ is even for the remainder of the proof.
According to $\phi([\boldsymbol{\mathcal{Y}}_k]_{i_rj_r}[\boldsymbol{\mathcal{Y}}_k]_{i_sj_s}) = \mathbb{E}\{[\mathbf{Y}_k]_{i_rj_r}[\mathbf{Y}_k]_{i_sj_s}\}$, \eqref{eq:mix_moments_of_matrix_w_and_deterministic_diagonal_matrix} and
\eqref{eq:mix_moments_of_free_matrix_w_and_deterministic_diagonal_matrix},
\eqref{eq:limit_moments_of_random_matrix_and_deterministic_diagonal_matrix_equal_to_free_deterministic_equivalent} is equivalent to showing that
\begin{IEEEeqnarray}{Rl}
&i_n (\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{i_1,\cdots,i_m=1}^n
(\prod \limits_{(r,s) \in \pi}\mathbb{E}\{[\mathbf{Y}_k]_{i_ri_{\gamma(r)}}[\mathbf{Y}_k]_{i_si_{\gamma(s)}}\})
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}) \nonumber
\end{IEEEeqnarray}
vanishes as $n \rightarrow \infty$.
It is convenient to identify a pair partition $\pi$ with a special permutation
by declaring the blocks of $\pi$ to be cycles \cite{nica2006lectures}. Then,
$(r, s) \in \pi$ means $\pi(r) = s$ and $\pi(s) = r$.
Applying \eqref{eq:variance_of_matrices_entries_for_shlyakhtenko1996random}, we obtain equation \eqref{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal} at the top
of the following page,
\begin{figure*}[!t]
\normalsize
\setcounter{tempequationcounter}{\value{equation}}
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \!\!\sum\limits_{i_1,\cdots,i_m=1}^n\!\!
\left(\prod \limits_{(r,s) \in \pi}\mathbb{E}\{[\mathbf{Y}_k]_{i_ri_{\gamma(r)}}[\mathbf{Y}_k]_{i_si_{\gamma(s)}}\}\right)
[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}
\nonumber \\
&= n^{-\frac{m}{2}}\!\!\!\! \sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \!\!\sum\limits_{i_1,\cdots,i_m=1}^n\!\!
\left(\prod \limits_{(r,s) \in \pi}\sigma_{i_ri_{\gamma(r)},k}(n)\sigma_{i_{s}i_{\gamma(s)},k}(n)
\delta_{i_ri_{\gamma(s)}}\delta_{i_{s} i_{\gamma(r)}}\right)
[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}
\nonumber \\
&= n^{-\frac{m}{2}}\!\!\!\! \sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \!\!\sum\limits_{i_1,\cdots,i_m=1}^n\!\!
\left(\prod \limits_{r=1}^m \sigma_{i_ri_{\gamma(r)},k}(n)\delta_{i_ri_{\gamma\pi(r)}}\right)
[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}
\nonumber \\
&= n^{-\frac{m}{2}}\!\!\!\! \sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \!\!\sum\limits_{i_1,\cdots,i_m=1}^n \!\!\left(\prod \limits_{r=1}^m\delta_{i_ri_{\gamma\pi(r)}}\right)
\left(\prod \limits_{r=1}^m \sigma_{i_ri_{\gamma(r)},k}(n)\right)[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}
\label{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal}
\end{IEEEeqnarray}
\addtocounter{tempequationcounter}{1}
\setcounter{equation}{\value{tempequationcounter}}
\hrulefill
\end{figure*}
where $\gamma\pi$ denotes the product of the two permutations $\gamma$ and $\pi$, and is defined as their composition as functions, \textit{i.e.}, $\gamma\pi(r)$ denotes $\gamma(\pi(r))$.
Applying the triangle inequality, we then obtain
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\left| n^{-\frac{m}{2}} \!\!\!\! \sum\limits_{i_2,\cdots,i_m=1}^n \left(\prod \limits_{r=1}^m\delta_{i_ri_{\gamma\pi(r)}}\right)
\left(\prod \limits_{r=1}^m \sigma_{i_ri_{\gamma(r)},k}(n)\right) \right.
\nonumber \\
&&~~~~~~~~~~~~~~~~~\left.
\vphantom{\left| n^{-\frac{m}{2}}\sum\limits_{i_2,\cdots,i_m=1}^n \left(\prod \limits_{r=1}^m\delta_{i_ri_{\gamma\pi(r)}}\right)
\left(\prod \limits_{r=1}^m \sigma_{i_ri_{\gamma(r)},k}(n)\right) \right.}
[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m}\right|
\nonumber \\
&& \leq n^{-\frac{m}{2}}\sum\limits_{i_2,\cdots,i_m=1}^n \left(\prod \limits_{r=1}^m\delta_{i_ri_{\gamma\pi(r)}}\right)
\left(\prod \limits_{r=1}^m \sigma_{i_ri_{\gamma(r)},k}(n)\right)
\nonumber \\
&&~~~~~~~~~~~~~~~~\left|[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m}\right|
\end{eqnarray}
where $i_1$ is fixed.
Since the entries of $\mathbf{C}_1,\cdots,\mathbf{C}_{m-1}$ and the variances $\sigma_{ij,k}(n)$ are uniformly bounded in $n$,
there must exist a positive real number $c_0$ such that
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\left| n^{-\frac{m}{2}}\sum\limits_{i_2,\cdots,i_m=1}^n \left(\prod \limits_{r=1}^m\delta_{i_ri_{\gamma\pi(r)}}\right)
\left(\prod \limits_{r=1}^m \sigma_{i_ri_{\gamma(r)},k}(n)\right)\right.
\nonumber \\
&&~~~~~~~~~~~~~~~~\left.[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m}\right|
\nonumber \\
&&\leq c_0 n^{-\frac{m}{2}} \sum\limits_{i_2,\cdots,i_m=1}^n \left(\prod \limits_{r=1}^m\delta_{i_ri_{\gamma\pi(r)}}\right).
\label{eq:diagonal_matrix_valued_free_inequality}
\end{eqnarray}
In \cite{nica2006lectures} (p.365), it is shown that
\begin{eqnarray}
\sum\limits_{i_1,i_2,\cdots,i_m=1}^n \left(\prod \limits_{r=1}^m\delta_{i_ri_{\gamma\pi(r)}}\right)
=n^{\#(\gamma\pi)}
\label{eq:combinatorical_results_of_lectures_of_free_probability}
\end{eqnarray}
where $\#(\gamma\pi)$ is the number of cycles in the permutation $\gamma\pi$.
The interpretation of \eqref{eq:combinatorical_results_of_lectures_of_free_probability} is as follows: For each cycle of $\gamma\pi$, one can choose one of
the numbers $1, \cdots ,n$ for the constant value of $i_r$ on this orbit, and all
these choices are independent of each other.
Following the same interpretation, we have that
\begin{eqnarray}
\sum\limits_{i_2,\cdots,i_m=1}^n \left(\prod \limits_{r=1}^m\delta_{i_ri_{\gamma\pi(r)}}\right)
=n^{\#(\gamma\pi)-1}
\label{eq:combinatorical_results_of_lectures_of_free_probability_variation}
\end{eqnarray}
when the constant value of $i_r$ on the cycle of $\gamma\pi$ containing $1$ is fixed to $i_1$.
If $\pi \in \mathcal{P}_2(m)$, we have $\#(\gamma\pi) - 1 - \frac{m}{2} = -2g$ as stated below Theorem 22.12
of \cite{nica2006lectures}, where $g \geq 0$ is called the genus in the geometric language of genus expansion. The result comes from Proposition 4.2 of \cite{zvonkin1997matrix}. If $\pi \in NC_2(m)$, then $g=0$ as stated in Exercise 22.14 of \cite{nica2006lectures}. Furthermore, for $\pi \in \mathcal{P}_2(m)$ and $\pi \notin NC_2(m)$, we have $\#(\gamma\pi) - 1 - \frac{m}{2} \leq -2$. Thus, the RHS of the inequality in \eqref{eq:diagonal_matrix_valued_free_inequality} is at most of order $n^{-2}$, and the left-hand side (LHS) of the inequality in \eqref{eq:diagonal_matrix_valued_free_inequality} vanishes as $n \rightarrow \infty$. Furthermore, \eqref{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal} also vanishes and we have proven \eqref{eq:limit_moments_of_random_matrix_and_deterministic_diagonal_matrix_equal_to_free_deterministic_equivalent}.
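These counting facts can also be confirmed by brute force for small $m$. The following Python sketch, a purely illustrative check (not part of the proof), enumerates all pair partitions of $S(m)$ as involutions, computes $\#(\gamma\pi)$, and verifies that $\#(\gamma\pi)-1-\frac{m}{2}$ equals $0$ exactly for the non-crossing pairings and is at most $-2$ otherwise.
\begin{verbatim}
import itertools

def pairings(elems):
    # all pair partitions of elems, encoded as
    # fixed-point-free involutions (dictionaries)
    if not elems:
        yield {}
        return
    a = elems[0]
    for i in range(1, len(elems)):
        b = elems[i]
        for p in pairings(elems[1:i] + elems[i + 1:]):
            p.update({a: b, b: a})
            yield p

def num_cycles(perm, m):
    seen, count = set(), 0
    for start in range(m):
        if start not in seen:
            count += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = perm[x]
    return count

def is_noncrossing(p):
    pairs = [(a, b) for a, b in p.items() if a < b]
    return not any(a < c < b < d or c < a < d < b
                   for (a, b), (c, d)
                   in itertools.combinations(pairs, 2))

for m in (2, 4, 6, 8):  # indices 0, ..., m-1 here
    gamma = {i: (i + 1) % m for i in range(m)}
    for p in pairings(list(range(m))):
        gp = {i: gamma[p[i]] for i in range(m)}
        offset = num_cycles(gp, m) - 1 - m // 2
        if is_noncrossing(p):
            assert offset == 0
        else:
            assert offset <= -2
print("genus bound verified for m = 2, 4, 6, 8")
\end{verbatim}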
Then, we prove the general case that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\!\!\lim\limits_{n \rightarrow \infty} i_n (\mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_{p_1}\mathbf{C}_{1}\cdots\mathbf{Y}_{p_{m-1}}\mathbf{C}_{m-1}\mathbf{Y}_{p_m}\}
\nonumber \\
&\!\!
- E_{\mathcal{D}_n}\{\boldsymbol{\mathcal{Y}}_{p_1}\mathbf{C}_{1}\cdots\boldsymbol{\mathcal{Y}}_{p_{m-1}} \mathbf{C}_{m-1}\boldsymbol{\mathcal{Y}}_{p_m}\})=0_{L^{\infty}[0, 1]}.
\label{eq:limit_moments_of_random_matrix_and_deterministic_diagonal_matrix_equal_to_free_deterministic_equivalent_general}
\end{IEEEeqnarray}
The $\mathcal{D}_n$-valued moment $\mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_{p_1}\mathbf{C}_{1}\cdots\mathbf{Y}_{p_{m-1}}\mathbf{C}_{m-1}\mathbf{Y}_{p_m}\}$ is given by
\begin{IEEEeqnarray}{Rl}
& \mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_{p_1}\mathbf{C}_{1}\cdots\mathbf{Y}_{p_{m-1}}\mathbf{C}_{m-1}\mathbf{Y}_{p_m}\}
\nonumber \\
&=\sum\limits_{\pi \in NC_2(m)} \sum\limits_{i_1,\cdots,i_m=1}^n
\prod \limits_{(r,s) \in \pi}\!\!\!\!\mathbb{E}\{[\mathbf{Y}_{p_r}]_{i_ri_{\gamma(r)}}[\mathbf{Y}_{p_s}]_{i_si_{\gamma(s)}}\}
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1} \nonumber \\
&~~~~+\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}}\!\! \sum\limits_{i_1,\cdots,i_m=1}^n
\prod \limits_{(r,s) \in \pi}\!\!\!\!\mathbb{E}\{[\mathbf{Y}_{p_r}]_{i_ri_{\gamma(r)}}[\mathbf{Y}_{p_s}]_{i_si_{\gamma(s)}}\}
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}.
\label{eq:mix_moments_of_matrix_w_and_deterministic_diagonal_matrix_general}
\end{IEEEeqnarray}
Proving \eqref{eq:limit_moments_of_random_matrix_and_deterministic_diagonal_matrix_equal_to_free_deterministic_equivalent_general} is equivalent to showing that the second term on the RHS of \eqref{eq:mix_moments_of_matrix_w_and_deterministic_diagonal_matrix_general} vanishes as $n \rightarrow \infty$. Then, according to \eqref{eq:variance_of_matrices_entries_for_shlyakhtenko1996random}, we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \!\! \sum\limits_{i_1,\cdots,i_m=1}^n \!\!
\left(\prod \limits_{(r,s) \in \pi} \!\!\!\! \mathbb{E}\{[\mathbf{Y}_{p_r}]_{i_ri_{\gamma(r)}} [\mathbf{Y}_{p_s}]_{i_si_{\gamma(s)}}\}\right)
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2} \cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}
\nonumber \\
&= n^{-\frac{m}{2}} \!\!\!\!\!\!\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \!\!\sum\limits_{i_1,\cdots,i_m=1}^n\!\!
\left(\prod \limits_{(r,s) \in \pi} \!\!\!\! \sigma_{i_ri_{\gamma(r)},p_r}(n)\sigma_{i_{s}i_{\gamma(s)},p_s}(n) \right.
\nonumber \\
&~~~~~~\left.\vphantom{\left(\prod \limits_{(r,s) \in \pi}\sigma_{i_ri_{\gamma(r)},p_r}(n)\sigma_{i_{s}i_{\gamma(s)},p_s}(n) \right.}\delta_{i_ri_{\gamma(s)}}\delta_{i_{s} i_{\gamma(r)}}\delta_{p_{r}p_{s}}\!\!\right)\!\!
[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}.
\label{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal_general}
\end{IEEEeqnarray}
The above equation is similar to \eqref{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal}; the only difference is the extra factor $\delta_{p_rp_s}$, which indicates that we have an extra condition
on the partitions $\pi$. A similar situation arises in the proof of Proposition $22.22$ of \cite{nica2006lectures}.
Let $\mathcal{P}_2^{(p)}(m)$ and $NC_2^{(p)}(m)$ be defined by
\begin{equation}
\mathcal{P}_2^{(p)}(m) = \{\pi \in \mathcal{P}_2(m):p_r = p_{{\pi(r)}} ~\forall r = 1, \cdots ,m\} \nonumber
\end{equation}
and
\begin{equation}
NC_2^{(p)}(m) = \{\pi \in NC_2(m):p_r = p_{\pi(r)} ~\forall r = 1, \cdots ,m\}. \nonumber
\end{equation}
Then,
\eqref{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal_general} becomes
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{i_1,\cdots,i_m=1}^n
\left(\prod \limits_{(r,s) \in \pi}\mathbb{E}\{[\mathbf{Y}_{p_r}]_{i_ri_{\gamma(r)}}[\mathbf{Y}_{p_s}]_{i_si_{\gamma(s)}}\}\right)
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}
\nonumber \\
&= n^{-\frac{m}{2}} \sum\limits_{\substack{\pi \in \mathcal{P}_2^{(p)}(m)\\ \pi \notin NC_2^{(p)}(m)}} \sum\limits_{i_1,\cdots,i_m=1}^n
\left(\prod \limits_{r=1}^m \sigma_{i_ri_{\gamma(r)},p_r}(n)\delta_{i_ri_{\gamma\pi(r)}}\right)
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{1}]_{i_2i_2}\cdots [\mathbf{C}_{m-1}]_{i_m i_m} \mathbf{P}_{i_1}
\label{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal_general_2}.
\end{IEEEeqnarray}
For all partitions $\pi \in \mathcal{P}_2^{(p)}(m) \backslash NC_2^{(p)}(m)$, we have that $\#(\gamma\pi) - 1 - \frac{m}{2} \leq -2$.
Comparing \eqref{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal} with \eqref{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal_general_2},
we obtain that \eqref{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal_general_2} vanishes as $n \rightarrow \infty$ and furthermore
\eqref{eq:limit_moments_of_random_matrix_and_deterministic_diagonal_matrix_equal_to_free_deterministic_equivalent_general}
holds.
Since $\boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$ are $\mathcal{D}_n$-valued semicircular elements and also free over $\mathcal{D}_n$, their asymptotic $L^{\infty}[0, 1]$-valued joint distribution is determined solely by the maps $\psi_{k}, 1 \leq k \leq t$. Thus, the asymptotic $L^{\infty}[0, 1]$-valued joint distribution of $\boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$ exists.
Furthermore, the asymptotic $L^{\infty}[0, 1]$-valued joint moments
\begin{equation}
\lim\limits_{n \rightarrow \infty} i_n ( \mathbb{E}_{\mathcal{D}_n}\{\mathbf{Y}_{p_1}\mathbf{C}_{1}\cdots\mathbf{Y}_{p_{m-1}}\mathbf{C}_{m-1}\mathbf{Y}_{p_m}\}) \nonumber
\end{equation}
include all the information about the asymptotic $L^{\infty}[0, 1]$-valued joint distribution of $\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$. Thus, we obtain from \eqref{eq:limit_moments_of_random_matrix_and_deterministic_diagonal_matrix_equal_to_free_deterministic_equivalent_general} that the asymptotic $L^{\infty}[0, 1]$-valued joint distributions of $\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$ and $\boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2,\cdots, \boldsymbol{\mathcal{Y}}_t$ are the same. Finally, we have that $\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$ are asymptotically free over $L^{\infty}[0, 1]$.
\end{IEEEproof}
The asymptotic $L^{\infty}[0,1]$-valued distribution of the polynomial $P({\boldsymbol{\mathcal{Y}}}_{1},{\boldsymbol{\mathcal{Y}}}_{2},\cdots,{\boldsymbol{\mathcal{Y}}}_{t})$ is the same as the expected asymptotic $L^{\infty}[0,1]$-valued distribution of $P(\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t)$ in the sense that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\lim_{n \rightarrow \infty} i_n(\mathbb{E}_{\mathcal{D}_n}\{(P(\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t))^k\}
\nonumber \\
&~~~~- E_{\mathcal{D}_n}\{(P({\boldsymbol{\mathcal{Y}}}_{1}, {\boldsymbol{\mathcal{Y}}}_{2}, \cdots, {\boldsymbol{\mathcal{Y}}}_{t}))^k\}) = 0_{L^{\infty}[0, 1]}.
\end{IEEEeqnarray}
When the $n \times n$ deterministic matrices $\mathbf{A}_1,\mathbf{A}_2,\cdots,\mathbf{A}_s$ are also considered, we will present Theorem \ref{th:determinstic_matrices_gaussian_matrices_asymptotic_operator_valued_free} in the following subsection to show the
asymptotic $L^{\infty}[0,1]$-valued freeness of
\begin{equation}
\{\mathbf{A}_1,\mathbf{A}_2,\cdots,\mathbf{A}_s\}, \mathbf{Y}_1,\mathbf{Y}_2,\cdots,\mathbf{Y}_t. \nonumber
\end{equation}
Furthermore, Theorem \ref{th:determinstic_matrices_gaussian_matrices_asymptotic_operator_valued_free} implies that
the asymptotic $L^{\infty}[0,1]$-valued distribution of
\begin{eqnarray}
P_f := P(\mathbf{A}_1,\mathbf{A}_2,\cdots,\mathbf{A}_s,{\boldsymbol{\mathcal{Y}}}_{1},{\boldsymbol{\mathcal{Y}}}_{2},\cdots,{\boldsymbol{\mathcal{Y}}}_t) \nonumber
\end{eqnarray}
and the expected asymptotic $L^{\infty}[0,1]$-valued distribution of $P_c$
are the same.
The polynomial $P_f$ is called the free deterministic equivalent of $P_c$.
For finite dimensional random matrices, the difference between the $\mathcal{D}_n$-valued distributions of $P_f$ and $P_c$ is given by the deviation from $\mathcal{D}_n$-valued freeness of
\begin{equation}
\{\mathbf{A}_1,\mathbf{A}_2,\cdots,\mathbf{A}_s\}, \mathbf{Y}_1,\mathbf{Y}_2,\cdots,\mathbf{Y}_t \nonumber
\end{equation}
and the deviation of the expected $\mathcal{D}_n$-valued distribution of $\mathbf{Y}_1,\mathbf{Y}_2,\cdots,\mathbf{Y}_t$ from being the same as the $\mathcal{D}_n$-valued distribution of ${\boldsymbol{\mathcal{Y}}}_{1}, {\boldsymbol{\mathcal{Y}}}_{2}, \cdots, {\boldsymbol{\mathcal{Y}}}_t$. For large dimensional matrices, these deviations become smaller and the $\mathcal{D}_n$-valued distribution of $P_f$ provides a better approximation for the expected $\mathcal{D}_n$-valued distribution of $P_c$.
\subsection{New Asymptotic $L^{\infty}[0,1]$-valued Freeness Results }
\label{New_asymptotic_freeness_results}
Reference \cite{nica2006lectures} presents a proof of asymptotic free independence between Gaussian random
matrices and deterministic matrices. We extend the proof therein and obtain the following theorem.
\begin{assumption}
The spectral norms of the deterministic matrices $\mathbf{A}_1,\mathbf{A}_2,\cdots,\mathbf{A}_s$ are uniformly bounded.
\label{assump:bounded_spectral norms}
\end{assumption}
\begin{theorem}
\label{th:determinstic_matrices_gaussian_matrices_asymptotic_operator_valued_free}
Let $\mathcal{E}_n$ denote the algebra of $n \times n$ diagonal matrices with uniformly bounded entries
and $\mathcal{F}_n$ denote the algebra generated by $\mathbf{A}_{1}, \mathbf{A}_{2}, \cdots,\mathbf{A}_{s}$ and $\mathcal{E}_n$. Let $m$ be a positive integer and $\mathbf{C}_0, \mathbf{C}_1, \cdots, \mathbf{C}_m \in \mathcal{F}_n$ be a family of $n \times n$ deterministic matrices. Assume that Assumptions \ref{assump:variance_bounded} and \ref{assump:bounded_spectral norms} hold. Then,
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\!\!\lim\limits_{n \rightarrow \infty}i_n (\mathbb{E}_{\mathcal{D}_n}\{\mathbf{C}_{0}\mathbf{Y}_{p_1}\mathbf{C}_{1}\mathbf{Y}_{p_2}\mathbf{C}_2\cdots \mathbf{Y}_{p_m}\mathbf{C}_m\}
\nonumber \\
&- E_{\mathcal{D}_n}\{\mathbf{C}_{0}\boldsymbol{\mathcal{Y}}_{p_1}\mathbf{C}_{1}\boldsymbol{\mathcal{Y}}_{p_2}\mathbf{C}_2 \cdots\boldsymbol{\mathcal{Y}}_{p_m}\mathbf{C}_m\})=0_{L^{\infty}[0, 1]}
\end{IEEEeqnarray}
where $1 \leq p_1,\cdots, p_m \leq t$. Furthermore, if Assumption \ref{assump:variance_operator_valued_limit} also holds, then $\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$, $\mathcal{F}_n$ are asymptotically free over $L^{\infty}[0, 1]$.
\end{theorem}
\begin{IEEEproof}
We first prove the special case when $p_1=p_2=\cdots=p_m=k$, \textit{i.e.},
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\lim\limits_{n \rightarrow \infty} i_n( \mathbb{E}_{\mathcal{D}_n}\!\{ \mathbf{C}_{0}\mathbf{Y}_k\mathbf{C}_{1}\mathbf{Y}_k\mathbf{C}_2\cdots\mathbf{Y}_k\mathbf{C}_m\}
\nonumber \\
&~- E_{\mathcal{D}_n}\{ \mathbf{C}_{0}\boldsymbol{\mathcal{Y}}_k\mathbf{C}_{1}\boldsymbol{\mathcal{Y}}_k\mathbf{C}_2\cdots \boldsymbol{\mathcal{Y}}_k\mathbf{C}_m\})= 0_{L^{\infty}[0, 1]}.
\label{eq:limit_moments_of_random_matrix_and_deterministic_matrix_equal_to_free_deterministic_equivalent}
\end{IEEEeqnarray}
Using steps similar to those used to derive \eqref{eq:mix_moments_of_matrix_w_and_deterministic_diagonal_matrix} and \eqref{eq:mix_moments_of_free_matrix_w_and_deterministic_diagonal_matrix} in the proof of Theorem \ref{th:diagonal_valued_free_results}, we obtain
\begin{IEEEeqnarray}{Rl}
&\!\!\!\mathbb{E}_{\mathcal{D}_n}\{\mathbf{C}_{0}\mathbf{Y}_k\mathbf{C}_{1}\mathbf{Y}_k\mathbf{C}_2\cdots\mathbf{Y}_k\mathbf{C}_m\} \nonumber \\
&= \sum\limits_{\pi \in NC_2(m)} \sum\limits_{\substack{i_1,\cdots,i_m\\j_0,j_1,\cdots,j_m=1}}^n
\left(\prod \limits_{(r,s) \in \pi}\mathbb{E}\{[\mathbf{Y}_k]_{i_rj_r}[\mathbf{Y}_k]_{i_sj_s}\}\right)
\nonumber \\
&~~~~~~~~~~~~~~~~~[\mathbf{C}_{0}]_{j_0i_1}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}i_m}[\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0} \nonumber \\
&~ +\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{\substack{i_1,\cdots,i_m\\j_0,j_1,\cdots,j_m=1}}^n
\left(\prod \limits_{(r,s) \in \pi}\mathbb{E}\{[\mathbf{Y}_k]_{i_rj_r}[\mathbf{Y}_k]_{i_sj_s}\}\right)
\nonumber \\
&~~~~~~~~~~~~~~~~~[\mathbf{C}_{0}]_{j_0i_1}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}i_m}[\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0}
\label{eq:mix_moments_of_matrix_w_and_deterministic_matrix_d}
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!E_{\mathcal{D}_n}\{\mathbf{C}_{0}\boldsymbol{\mathcal{Y}}_k\mathbf{C}_{1}\boldsymbol{\mathcal{Y}}_k\mathbf{C}_2\cdots\boldsymbol{\mathcal{Y}}_k\mathbf{C}_m\} \nonumber \\
&= \sum\limits_{\pi \in NC_2(m)} \sum\limits_{\substack{i_1,\cdots,i_m\\j_0,j_1,\cdots,j_m=1}}^n
\left(\prod \limits_{(r,s) \in \pi}\phi([\boldsymbol{\mathcal{Y}}_k]_{i_rj_r}[\boldsymbol{\mathcal{Y}}_k]_{i_sj_s})\right)
\nonumber \\
&~~~~~~~~~~~~~~[\mathbf{C}_{0}]_{j_0i_1}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}i_m}[\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0}
\label{eq:mix_moments_of_free_matrix_w_and_deterministic_matrix_d}
\end{IEEEeqnarray}
respectively. Furthermore, both
\begin{equation}
\mathbb{E}_{\mathcal{D}_n}\{\mathbf{C}_{0}\mathbf{Y}_k\mathbf{C}_{1}\mathbf{Y}_k\mathbf{C}_2\cdots\mathbf{Y}_k\mathbf{C}_m\}
\nonumber
\end{equation}
and
\begin{equation}
E_{\mathcal{D}_n}\{\mathbf{C}_{0}\boldsymbol{\mathcal{Y}}_k\mathbf{C}_{1}\boldsymbol{\mathcal{Y}}_k\mathbf{C}_2\cdots\boldsymbol{\mathcal{Y}}_k\mathbf{C}_m\}
\nonumber
\end{equation}
are equal to zero matrices for odd $m$. Thus, we also assume that $m$ is even for the remainder of the proof.
According to $\phi([\boldsymbol{\mathcal{Y}}_k]_{i_rj_r}[\boldsymbol{\mathcal{Y}}_k]_{i_sj_s}) = \mathbb{E}\{[\mathbf{Y}_k]_{i_rj_r}[\mathbf{Y}_k]_{i_sj_s}\}$, \eqref{eq:mix_moments_of_matrix_w_and_deterministic_matrix_d} and
\eqref{eq:mix_moments_of_free_matrix_w_and_deterministic_matrix_d},
\eqref{eq:limit_moments_of_random_matrix_and_deterministic_matrix_equal_to_free_deterministic_equivalent} is equivalent to showing that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!i_n (\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{\substack{i_1,\cdots,i_m\\j_0,j_1,\cdots,j_m=1}}^n
(\prod \limits_{(r,s) \in \pi}\mathbb{E}\{[\mathbf{Y}_k]_{i_rj_r}[\mathbf{Y}_k]_{i_sj_s}\})
\nonumber \\
&~~~~~~~~~~~~~~~~~[\mathbf{C}_{0}]_{j_0i_1}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}i_m}[\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0}) \nonumber
\end{IEEEeqnarray}
vanishes as $n \rightarrow \infty$. From \eqref{eq:variance_of_matrices_entries_for_shlyakhtenko1996random}, we then obtain equation \eqref{eq:inequality_formula_of_crossing_partition_of_moments} at the top of the following page.
\begin{figure*}[!t]
\normalsize
\setcounter{tempequationcounter}{\value{equation}}
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{\substack{i_1,\cdots,i_m\\j_0,j_1,\cdots,j_m=1}}^n
\left(\prod \limits_{(r,s) \in \pi}\mathbb{E}\{[\mathbf{Y}_k]_{i_rj_r}[\mathbf{Y}_k]_{i_sj_s}\}\right)
[\mathbf{C}_{0}]_{j_0i_1}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}i_m}[\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0}
\nonumber \\
&= n^{-\frac{m}{2}} \sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{\substack{i_1,\cdots,i_m\\j_0,j_1,\cdots,j_m=1}}^n
\left(\prod \limits_{(r,s) \in \pi}\sigma_{i_rj_r,k}(n)\sigma_{i_sj_s,k}(n)\delta_{i_rj_s}\delta_{i_sj_r}\right)[\mathbf{C}_{0}]_{j_0i_1}\cdots[\mathbf{C}_{m-1}]_{j_{m-1}i_m} [\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0} \nonumber \\
&= n^{-\frac{m}{2}} \sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{\substack{i_1,\cdots,i_m\\j_0,j_1,\cdots,j_m=1}}^n
\left(\prod \limits_{r=1}^m
\sigma_{i_rj_r,k}(n)\delta_{i_rj_{{\pi(r)}}}\right)
[\mathbf{C}_{0}]_{j_0i_1}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}i_m}[\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0} \nonumber \\
&= n^{-\frac{m}{2}} \!\!\!\!\!\! \sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{j_0,j_1,\cdots,j_m=1}^n \left(\prod \limits_{r=1}^m \sigma_{j_{\pi(r)}j_r,k}(n)\right)
[\mathbf{C}_{0}]_{j_0j_{\pi\gamma(m)}}[\mathbf{C}_{1}]_{j_1j_{\pi\gamma(1)}}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}j_{\pi\gamma(m-1)}}
[\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0}
\label{eq:inequality_formula_of_crossing_partition_of_moments}
\end{IEEEeqnarray}
\addtocounter{tempequationcounter}{1}
\setcounter{equation}{\value{tempequationcounter}}
\hrulefill
\end{figure*}
Since $\mathbf{C}_0,\mathbf{C}_1,\cdots,\mathbf{C}_m$ are not necessarily diagonal matrices, \eqref{eq:inequality_formula_of_crossing_partition_of_moments} is different from \eqref{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal} in the proof of Theorem \ref{th:diagonal_valued_free_results}. Thus, the method used to prove that the LHS of \eqref{eq:crossing_partition_of_moments_for_matrices_free_over_diagonal} vanishes is no longer applicable here.
In the following, we use a different method to prove that the LHS of \eqref{eq:inequality_formula_of_crossing_partition_of_moments} vanishes as $n \rightarrow \infty$.
If all $\sigma_{i_rj_r,k}(n)=1$, then \eqref{eq:inequality_formula_of_crossing_partition_of_moments} becomes
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{\substack{i_1,\cdots,i_m\\j_0,j_1,\cdots,j_m=1}}^n
\left(\prod \limits_{(r,s) \in \pi}\mathbb{E}\{[\mathbf{Y}_k]_{i_rj_r}[\mathbf{Y}_k]_{i_sj_s}\}\right)
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{0}]_{j_0i_1}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}i_m}[\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0} \nonumber \\
&~~= n^{-\frac{m}{2}} \!\!\!\!\!\! \sum\limits_{\substack{\pi \in \mathcal{P}_2(m)\\ \pi \notin NC_2(m)}} \sum\limits_{j_0,j_1,\cdots,j_m=1}^n
[\mathbf{C}_{0}]_{j_0j_{\pi\gamma(m)}}[\mathbf{C}_{1}]_{j_1j_{\pi\gamma(1)}}\cdots
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~ [\mathbf{C}_{m-1}]_{j_{m-1}j_{\pi\gamma(m-1)}}
[\mathbf{C}_m]_{j_{m}j_0} \mathbf{P}_{j_0}.
\label{eq:equality_formula_of_crossing_partition_of_moments_variances_one}
\end{IEEEeqnarray}
Let $\rho_1,\rho_2,\cdots,\rho_u$ be the cycles of $\pi\gamma$ and ${\rm{tr}}_{\pi\gamma}(\mathbf{C}_{1},\cdots,\mathbf{C}_m)$ be defined by
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!{\rm{tr}}_{\pi\gamma}(\mathbf{C}_{1},\cdots,\mathbf{C}_m)={\rm{tr}}_{\rho_1}(\mathbf{C}_{1},\cdots,\mathbf{C}_m){\rm{tr}}_{\rho_2}(\mathbf{C}_{1},\cdots,\mathbf{C}_m)
\nonumber \\
&&~~~~~~~~~~~~~~~~~~~~~~~~\cdots{\rm{tr}}_{\rho_u}(\mathbf{C}_{1},\cdots,\mathbf{C}_m)
\label{eq:definition_of_joint_moments_of_deterministic_matrices_2}
\end{eqnarray}
where
\begin{equation}
{\rm{tr}}_{\rho_i}(\mathbf{C}_{1},\cdots,\mathbf{C}_m)=\frac{1}{n}{\rm{tr}}(\mathbf{C}_{v_1}\mathbf{C}_{v_2}\cdots\mathbf{C}_{v_a})\nonumber
\end{equation}
if $\rho_i = (v_1,v_2,\cdots,v_a)$.
Lemma 22.31 of \cite{nica2006lectures} shows that
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\sum\limits_{j_1,\cdots,j_m=1}^n [\mathbf{C}_{1}]_{j_1j_{\pi\gamma(1)}}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}j_{\pi\gamma(m-1)}}[\mathbf{C}_{m}]_{j_mj_{\pi\gamma(m)}}
\nonumber \\
&&~~~~~~~~~~~~~~~~~~~~= n^{ \# (\pi\gamma)}{\rm{tr}}_{\pi\gamma}(\mathbf{C}_{1},\cdots,\mathbf{C}_m).
\label{eq:lemma_22_31_of_nica2006lectures}
\end{eqnarray}
For example, let $m=8$ and $\pi=(1,4)(3,6)(2,7)(5,8)$.
Then, we have
\begin{IEEEeqnarray}{Rl}
&\pi\gamma(1)=\pi(\gamma(1))=\pi(2)=7 \nonumber \\
&\pi\gamma(2)=\pi(\gamma(2))=\pi(3)=6 \nonumber \\
&~~~~~~~~~~~~~~\cdots \nonumber \\
&\pi\gamma(8)=\pi(\gamma(8))=\pi(1)=4. \nonumber
\end{IEEEeqnarray}
Then, we obtain $\pi\gamma=(4, 8)(1,7,5,3)(2, 6)$, $\#(\pi\gamma)=3$ and
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\sum\limits_{j_1,\cdots,j_8=1}^n [\mathbf{C}_{1}]_{j_1j_7}[\mathbf{C}_{2}]_{j_2j_6}[\mathbf{C}_{3}]_{j_3j_1}
[\mathbf{C}_{4}]_{j_4j_8}
\IEEEnonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{5}]_{j_5j_3}[\mathbf{C}_{6}]_{j_6j_2}[\mathbf{C}_{7}]_{j_7j_5}
[\mathbf{C}_{8}]_{j_8j_4} \nonumber
\\
&=\sum\limits_{j_1,j_3,j_5,j_7=1}^n [\mathbf{C}_{1}]_{j_1j_7}[\mathbf{C}_{7}]_{j_7j_5}[\mathbf{C}_{5}]_{j_5j_3}[\mathbf{C}_{3}]_{j_3j_1}
\IEEEnonumber \\
&~~~~~~~~\sum\limits_{j_2,j_6=1}^n[\mathbf{C}_{2}]_{j_2j_6}[\mathbf{C}_{6}]_{j_6j_2}
\sum\limits_{j_4,j_8=1}^n [\mathbf{C}_{4}]_{j_4j_8}[\mathbf{C}_{8}]_{j_8j_4}
\IEEEnonumber
\\
&
=n^3\frac{1}{n}{\rm{tr}}(\mathbf{C}_{4}\mathbf{C}_8)\frac{1}{n}{\rm{tr}}(\mathbf{C}_{1}\mathbf{C}_{7}\mathbf{C}_{5}\mathbf{C}_{3})\frac{1}{n}{\rm{tr}}(\mathbf{C}_{2}\mathbf{C}_6)
\IEEEnonumber \\
&
=n^{\#(\pi\gamma)}{\rm{tr}}_{\pi\gamma}(\mathbf{C}_{1},\cdots,\mathbf{C}_8).
\label{eq:example}
\end{IEEEeqnarray}
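The factorization in \eqref{eq:example} can also be checked numerically. The following Python sketch, a purely illustrative check with arbitrary random matrices (not part of the proof), evaluates both sides of \eqref{eq:example} for $n=3$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 3
C = [rng.standard_normal((n, n)) for _ in range(8)]

# LHS: sum over j_1..j_8 (letters a..h) of
# [C1]_{j1 j7}[C2]_{j2 j6}[C3]_{j3 j1}[C4]_{j4 j8}
# [C5]_{j5 j3}[C6]_{j6 j2}[C7]_{j7 j5}[C8]_{j8 j4}
lhs = np.einsum('ag,bf,ca,dh,ec,fb,ge,hd->',
                C[0], C[1], C[2], C[3],
                C[4], C[5], C[6], C[7])

tr = lambda a: np.trace(a) / n
rhs = (n ** 3 * tr(C[3] @ C[7])
       * tr(C[0] @ C[6] @ C[4] @ C[2])
       * tr(C[1] @ C[5]))
print(lhs, rhs)  # equal up to floating-point error
\end{verbatim}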
From Remarks 23.8 and Proposition 23.11 of \cite{nica2006lectures}, we have that $\#(\pi\gamma)=\#(\gamma\pi)$.
Without loss of generality, let $\rho_1=(w_1,w_2,\cdots,w_b)$ be the cycle of $\pi\gamma$ containing $m$ and $w_b=m$. We denote by $\alpha$ the permutation $\rho_2 \cup \cdots \cup \rho_u$.
Then, analogously to \eqref{eq:lemma_22_31_of_nica2006lectures}, we obtain
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!n^{-\frac{m}{2}}\sum\limits_{j_1,\cdots,j_m=1}^n [\mathbf{C}_{0}]_{j_0j_{\pi\gamma(m)}}[\mathbf{C}_{1}]_{j_1j_{\pi\gamma(1)}}\cdots
\nonumber \\
&&~~~~~~~~~~~~~~~~~~~~~~~~[\mathbf{C}_{m-1}]_{j_{m-1}j_{\pi\gamma(m-1)}}[\mathbf{C}_m]_{j_{m}j_0}
\nonumber \\
& &=n^{\#(\gamma\pi)-\frac{m}{2}-1}{\rm{tr}}_{\alpha}(\mathbf{C}_{1},\cdots,\mathbf{C}_m)[\mathbf{C}_{0}\mathbf{C}_{w_1} \cdots\mathbf{C}_{w_b}]_{j_0j_0}.
\nonumber \\
\label{eq:combinatoric_results_of_sum_of_deterministic_matrices}
\end{eqnarray}
Under the assumptions on $\mathbf{C}_{0},\mathbf{C}_{1},\cdots,\mathbf{C}_m$, the limits of all
\begin{equation}
{\rm{tr}}_{\alpha}(\mathbf{C}_{1},\cdots,\mathbf{C}_m)[\mathbf{C}_{0}\mathbf{C}_{w_1} \cdots\mathbf{C}_{w_b}]_{j_0j_0}
\nonumber
\end{equation}
exist. For each crossing pair partition $\pi$, we have that $\#(\gamma\pi) - 1 - \frac{m}{2} \leq -2$. Thus, the RHS of
\eqref{eq:equality_formula_of_crossing_partition_of_moments_variances_one} is of order $n^{-2}$, and the LHS of \eqref{eq:equality_formula_of_crossing_partition_of_moments_variances_one} vanishes as $n \rightarrow \infty$.
For general $\sigma_{i_rj_r,k}(n)$, the formula
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!n^{-\frac{m}{2}}\sum\limits_{j_1,\cdots,j_m=1}^n \left(\prod \limits_{r=1}^m \sigma_{j_{\pi(r)}j_r,k}(n)\right)
[\mathbf{C}_{0}]_{j_0j_{\pi\gamma(m)}}
\nonumber \\
&&[\mathbf{C}_{1}]_{j_1j_{\pi\gamma(1)}}\cdots [\mathbf{C}_{m-1}]_{j_{m-1}j_{\pi\gamma(m-1)}}
[\mathbf{C}_m]_{j_{m}j_0}
\label{eq:general_variance_combin}
\end{eqnarray}
is still a product of elements similar to \eqref{eq:combinatoric_results_of_sum_of_deterministic_matrices} along the cycles of $\pi\gamma$.
For example, let $\pi=(1, 4)(2, 6)(3, 7)(5, 8)$, $m=8$ and $\pi\gamma=(4, 8)(1, 6, 3)(2, 7, 5)$.
Then, we obtain equation \eqref{eq:general_sigama_rules} at the top of the following page,
\begin{figure*}[!t]
\normalsize
\setcounter{tempequationcounter}{\value{equation}}
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!n^{-4}\sum\limits_{j_1,\cdots,j_8=1}^n \left(\prod \limits_{r=1}^8 \sigma_{j_{\pi(r)}j_r,k}(n)\right)
[\mathbf{C}_{0}]_{j_0j_{\pi\gamma(8)}}[\mathbf{C}_{1}]_{j_1j_{\pi\gamma(1)}}\cdots [\mathbf{C}_{7}]_{j_{7}j_{\pi\gamma(7)}}
[\mathbf{C}_8]_{j_{8}j_0}
\nonumber \\
&=n^{-4}\sum\limits_{j_1,\cdots,j_8=1}^n\left([\mathbf{C}_{3}]_{j_3j_1
}[\mathbf{\Lambda}_{j_4}]_{j_1j_1}[\mathbf{C}_{1}]_{j_1j_6}[\mathbf{\Lambda}_{j_2}]_{j_6j_6} [\mathbf{C}_{6}]_{j_6j_3}\right)
\left([\mathbf{C}_2]_{j_2j_7}[\mathbf{\Lambda}_{j_3}]_{j_7j_7} [\mathbf{C}_{7}]_{j_7j_5}[\mathbf{\Lambda}_{j_8}]_{j_5j_5}[\mathbf{C}_{5}]_{j_5j_2}\right)
\nonumber \\
& ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\left([\mathbf{C}_{0}]_{j_0j_4}[\mathbf{C}_{4}]_{j_4j_8}[\mathbf{C}_{8}]_{j_8j_0}\right)
\nonumber \\
&=n^{-4}\sum\limits_{j_2,j_3,j_4,j_8=1}^n\left([\mathbf{C}_{3}
\mathbf{\Lambda}_{j_4}\mathbf{C}_{1}\mathbf{\Lambda}_{j_2}\mathbf{C}_{6}]_{j_3j_3}\right)
\left([\mathbf{C}_2\mathbf{\Lambda}_{j_3}\mathbf{C}_{7}\mathbf{\Lambda}_{j_8}\mathbf{C}_{5}]_{j_2j_2}\right)
\left([\mathbf{C}_{0}]_{j_0j_4}[\mathbf{C}_{4}]_{j_4j_8}[\mathbf{C}_{8}]_{j_8j_0}\right)
\nonumber \\
&= n^{-2}\sum\limits_{j_2,j_3=1}^n \frac{1}{n^2}
[\mathbf{C}_{0}\mathbf{\Xi}_{j_2j_3}\mathbf{C}_{4}\mathbf{\Sigma}_{j_2j_3}\mathbf{C}_{8}]_{j_0j_0}
\label{eq:general_sigama_rules}
\end{IEEEeqnarray}
\addtocounter{tempequationcounter}{1}
\setcounter{equation}{\value{tempequationcounter}}
\hrulefill
\vspace*{4pt}
\end{figure*}
where
\begin{IEEEeqnarray} {Rl}
&\mathbf{\Lambda}_{j_r}={\rm{diag}}(\sigma_{1j_r,k}^2(n), \sigma_{2j_r,k}^2(n), \cdots, \sigma_{nj_r,k}^2(n)) \IEEEnonumber \\
&\mathbf{\Xi}_{j_2j_3}={\rm{diag}}(
[\mathbf{C}_{3}\mathbf{\Lambda}_{1}\mathbf{C}_{1}\mathbf{\Lambda}_{j_2}\mathbf{C}_{6}]_{j_3j_3},
[\mathbf{C}_{3}\mathbf{\Lambda}_{2}\mathbf{C}_{1}\mathbf{\Lambda}_{j_2}\mathbf{C}_{6}]_{j_3j_3}
\IEEEnonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdots, [\mathbf{C}_{3}
\mathbf{\Lambda}_{n}\mathbf{C}_{1}\mathbf{\Lambda}_{j_2}\mathbf{C}_{6}]_{j_3j_3}) \IEEEnonumber \\
&\mathbf{\Sigma}_{j_2j_3}={\rm{diag}}(
[\mathbf{C}_2\mathbf{\Lambda}_{j_3}\mathbf{C}_{7}\mathbf{\Lambda}_{1}\mathbf{C}_{5}]_{j_2j_2},
[\mathbf{C}_2\mathbf{\Lambda}_{j_3}\mathbf{C}_{7}\mathbf{\Lambda}_{2}\mathbf{C}_{5}]_{j_2j_2}
\IEEEnonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdots, [\mathbf{C}_2\mathbf{\Lambda}_{j_3}\mathbf{C}_{7}\mathbf{\Lambda}_{n}\mathbf{C}_{5}]_{j_2j_2}). \IEEEnonumber
\end{IEEEeqnarray}
Thus, \eqref{eq:general_variance_combin} is still of order $n^{\#(\gamma\pi) - 1 - \frac{m}{2}}$, and
the LHS of \eqref{eq:equality_formula_of_crossing_partition_of_moments_variances_one} is of order $n^{-2}$.
Hence, we have proven that
\eqref{eq:limit_moments_of_random_matrix_and_deterministic_matrix_equal_to_free_deterministic_equivalent} holds.
Next, we prove that, in the situation with more than one random matrix,
\begin{IEEEeqnarray}{Rl}
&\lim\limits_{n \rightarrow \infty} i_n (\mathbb{E}_{\mathcal{D}_n}\{\mathbf{C}_{0}\mathbf{Y}_{p_1}\mathbf{C}_{1}\mathbf{Y}_{p_2} \mathbf{C}_2\cdots \mathbf{Y}_{p_m}\mathbf{C}_m\}
\IEEEnonumber \\
&~~~~~~~~-\mathbb{E}_{\mathcal{D}_n}\{\mathbf{C}_{0}\boldsymbol{\mathcal{Y}}_{p_1}\mathbf{C}_{1} \boldsymbol{\mathcal{Y}}_{p_2} \mathbf{C}_2\cdots\boldsymbol{\mathcal{Y}}_{p_m}\mathbf{C}_m\})= 0_{L^{\infty}[0, 1]}. \IEEEnonumber \\
\label{eq:limit_moments_of_random_matrix_and_deterministic_matrix_equal_to_free_deterministic_equivalent_general}
\end{IEEEeqnarray}
The proof of \eqref{eq:limit_moments_of_random_matrix_and_deterministic_matrix_equal_to_free_deterministic_equivalent_general} is similar to that of \eqref{eq:limit_moments_of_random_matrix_and_deterministic_diagonal_matrix_equal_to_free_deterministic_equivalent_general}
in the proof of Theorem \ref{th:diagonal_valued_free_results} and omitted here for brevity.
Since $\mathcal{M}_n, \boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$ are free over $\mathcal{D}_n$ and $\mathcal{F}_n \subset \mathcal{M}_n$, we obtain that $\mathcal{F}_n, \boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$ are free over $\mathcal{D}_n$. Then,
since $\boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$ are $\mathcal{D}_n$-valued semicircular elements, we have that the asymptotic $L^{\infty}[0, 1]$-valued joint distribution of
$\mathcal{F}_n, \boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$
is only determined by $\psi_{k}$ and the asymptotic $L^{\infty}[0, 1]$-valued joint distribution of elements from $\mathcal{F}_n$.
Furthermore, the elements of $\mathcal{F}_n$ have uniformly bounded spectral norm. Thus,
the asymptotic $L^{\infty}[0, 1]$-valued joint distribution of
$\mathcal{F}_n, \boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$ exists.
Then, since the asymptotic $L^{\infty}[0, 1]$-valued joint moments
\begin{equation}
\lim\limits_{n \rightarrow \infty} i_n( \mathbb{E}_{\mathcal{D}_n}\{\mathbf{C}_{0}\mathbf{Y}_{p_1}\mathbf{C}_{1}\cdots\mathbf{Y}_{p_{m-1}}\mathbf{C}_{m-1} \mathbf{Y}_{p_m}\mathbf{C}_{m}\}) \nonumber
\end{equation}
include all the information about the asymptotic $L^{\infty}[0, 1]$-valued joint distribution of $\mathcal{F}_n, \mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$, we obtain from \eqref{eq:limit_moments_of_random_matrix_and_deterministic_matrix_equal_to_free_deterministic_equivalent_general} that the asymptotic $L^{\infty}[0, 1]$-valued joint distributions of $\mathcal{F}_n, \mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$ and $\mathcal{F}_n, \boldsymbol{\mathcal{Y}}_1, \boldsymbol{\mathcal{Y}}_2, \cdots, \boldsymbol{\mathcal{Y}}_t$ are the same. Thus, we have that $\mathcal{F}_n, \mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t$ are asymptotically free over $L^{\infty}[0, 1]$.
\end{IEEEproof}
\section{Proof of Lemma \ref{lm:semicircular_lemma}}
\label{sec:proof_of_semicircular_lemma}
From Definition $2.9$ of \cite{nica2002r}, we have that $\boldsymbol{\mathcal{Y}}_{11}, \cdots, \boldsymbol{\mathcal{Y}}_{LK}$ form an R-cyclic family of matrices.
Applying Theorem $8.2$ of \cite{nica2002r}, we then obtain that $\mathcal{M}_n, \boldsymbol{\mathcal{Y}}_{11}, \cdots, \boldsymbol{\mathcal{Y}}_{LK}$ are free over $\mathcal{D}_n$.
The joint $\mathcal{M}_n$-valued cumulants of $\widehat{\boldsymbol{\mathcal{X}}}_{11}, \cdots, \widehat{\boldsymbol{\mathcal{X}}}_{LK}$ are given by
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\kappa_t^{\mathcal{M}_n}(\widehat{\boldsymbol{\mathcal{X}}}_{i_1j_1}\mathbf{C}_1,\widehat{\boldsymbol{\mathcal{X}}}_{i_2j_2}\mathbf{C}_2,
\cdots,\widehat{\boldsymbol{\mathcal{X}}}_{i_tj_t})
\nonumber \\
&~=\kappa_t^{\mathcal{M}_n}(\mathbf{A}_{i_1j_1}\boldsymbol{\mathcal{Y}}_{i_1j_1}\mathbf{A}_{i_1j_1}^H\mathbf{C}_1,\mathbf{A}_{i_2j_2}\boldsymbol{\mathcal{Y}}_{i_2j_2}\mathbf{A}_{i_2j_2}^H\mathbf{C}_2,
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdots,\mathbf{A}_{i_tj_t}\boldsymbol{\mathcal{Y}}_{i_tj_t}\mathbf{A}_{i_tj_t}^H)
\nonumber \\
&~= \mathbf{A}_{i_1j_1}\kappa_t^{\mathcal{M}_n}(\boldsymbol{\mathcal{Y}}_{i_1j_1}\mathbf{A}_{i_1j_1}^H\mathbf{C}_1\mathbf{A}_{i_2j_2},\boldsymbol{\mathcal{Y}}_{i_2j_2}\mathbf{A}_{i_2j_2}^H\mathbf{C}_2\mathbf{A}_{i_3j_3},
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdots,\boldsymbol{\mathcal{Y}}_{i_tj_t})\mathbf{A}_{i_tj_t}^H
\nonumber \\
&~= \mathbf{A}_{i_1j_1}\kappa_t^{\mathcal{D}_n}(\boldsymbol{\mathcal{Y}}_{i_1j_1}E_{\mathcal{D}_n}\{\mathbf{A}_{i_1j_1}^H\mathbf{C}_1\mathbf{A}_{i_2j_2}\}, \boldsymbol{\mathcal{Y}}_{i_2j_2}\nonumber \\
&~~~~~~~~~~~~~~E_{\mathcal{D}_n}\{\mathbf{A}_{i_2j_2}^H\mathbf{C}_2\mathbf{A}_{i_3j_3}\},\cdots,\boldsymbol{\mathcal{Y}}_{i_tj_t})\mathbf{A}_{i_tj_t}^H
\label{eq:relation_between_Mn_valued_cumulants_and_En_valued_cumulants}
\end{IEEEeqnarray}
where $1 \leq i_r \leq L$ and $1 \leq j_r \leq K$ for $r=1,\cdots,t$, $\mathbf{C}_1,\mathbf{C}_2,\cdots, \mathbf{C}_t \in \mathcal{M}_n$,
and the last equality is obtained by applying Theorem $3.6$ of \cite{nica2002operator}, which requires that $\mathcal{M}_n$ and $\{\boldsymbol{\mathcal{Y}}_{11}, \cdots, \boldsymbol{\mathcal{Y}}_{LK}\}$ are free over $\mathcal{D}_n$.
Since $\kappa_t^{\mathcal{D}_n} \in {\mathcal{D}_n}$, we obtain
\begin{equation}
\kappa_t^{\mathcal{M}_n}(\widehat{\boldsymbol{\mathcal{X}}}_{i_1j_1}\mathbf{C}_1,\widehat{\boldsymbol{\mathcal{X}}}_{i_2j_2}\mathbf{C}_2,\cdots,\widehat{\boldsymbol{\mathcal{X}}}_{i_tj_t})
\in \mathcal{D}. \nonumber
\end{equation} By applying Theorem $3.1$ of \cite{nica2002operator}, this implies that
the $\mathcal{D}$-valued cumulants of $\widehat{\boldsymbol{\mathcal{X}}}_{11}, \cdots, \widehat{\boldsymbol{\mathcal{X}}}_{LK}$ are the restrictions
of their $\mathcal{M}_n$-valued cumulants to $\mathcal{D}$. Thus, we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\kappa_t^{\mathcal{D}}(\widehat{\boldsymbol{\mathcal{X}}}_{i_1j_1}\mathbf{C}_1,\widehat{\boldsymbol{\mathcal{X}}}_{i_2j_2}\mathbf{C}_2,\cdots,\widehat{\boldsymbol{\mathcal{X}}}_{i_tj_t})
\nonumber \\
&=\kappa_t^{\mathcal{M}_n}(\widehat{\boldsymbol{\mathcal{X}}}_{i_1j_1}\mathbf{C}_1,\widehat{\boldsymbol{\mathcal{X}}}_{i_2j_2}\mathbf{C}_2,\cdots,\widehat{\boldsymbol{\mathcal{X}}}_{i_tj_t})
\nonumber \\
&=\mathbf{A}_{i_1j_1}\kappa_t^{\mathcal{D}_n}(\boldsymbol{\mathcal{Y}}_{i_1j_1}E_{\mathcal{D}_n}\{\mathbf{A}_{i_1j_1}^H\mathbf{C}_1\mathbf{A}_{i_2j_2}\},\boldsymbol{\mathcal{Y}}_{i_2j_2}
\nonumber \\
&~~~~~~~~~~
E_{\mathcal{D}_n}\{\mathbf{A}_{i_2j_2}^H\mathbf{C}_2\mathbf{A}_{i_3j_3}\},\cdots,\boldsymbol{\mathcal{Y}}_{i_tj_t})\mathbf{A}_{i_tj_t}^H
\end{IEEEeqnarray}
where $\mathbf{C}_1,\mathbf{C}_2,\cdots, \mathbf{C}_t \in \mathcal{D}$ and the last equality is obtained by applying
\eqref{eq:relation_between_Mn_valued_cumulants_and_En_valued_cumulants}.
Since $\boldsymbol{\mathcal{Y}}_{11}, \cdots, \boldsymbol{\mathcal{Y}}_{LK}$ are free over $\mathcal{D}_n$, we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\kappa_t^{\mathcal{D}_n}(\boldsymbol{\mathcal{Y}}_{i_1j_1}E_{\mathcal{D}_n}\{\mathbf{A}_{i_1j_1}^H\mathbf{C}_1\mathbf{A}_{i_2j_2}\},
\boldsymbol{\mathcal{Y}}_{i_2j_2}
\nonumber \\
&~~~~~~~~E_{\mathcal{D}_n}\{\mathbf{A}_{i_2j_2}^H\mathbf{C}_2\mathbf{A}_{i_3j_3}\},\cdots,\boldsymbol{\mathcal{Y}}_{i_tj_t})=\mathbf{0}_n
\end{IEEEeqnarray}
unless $i_1=i_2=\cdots=i_t$ and $j_1=j_2=\cdots=j_t$. Hence, $\widehat{\boldsymbol{\mathcal{X}}}_{11}, \cdots, \widehat{\boldsymbol{\mathcal{X}}}_{LK}$ are free over $\mathcal{D}$.
Moreover, since each $\boldsymbol{\mathcal{Y}}_{ij}$ is semicircular over $\mathcal{D}_n$, we obtain
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\kappa_t^{\mathcal{D}_n}(\boldsymbol{\mathcal{Y}}_{ij}E_{\mathcal{D}_n}\{\mathbf{A}_{ij}^H\mathbf{C}_1\mathbf{A}_{ij}\},
\boldsymbol{\mathcal{Y}}_{ij}
\nonumber \\
&~~~~~~~~E_{\mathcal{D}_n}\{\mathbf{A}_{ij}^H\mathbf{C}_2\mathbf{A}_{ij}\},\cdots,\boldsymbol{\mathcal{Y}}_{ij})=\mathbf{0}_n
\end{IEEEeqnarray}
unless $t=2$. This implies that each $\widehat{\boldsymbol{\mathcal{X}}}_{lk}$ is also semicircular over $\mathcal{D}$.
Furthermore, since $\widehat{\boldsymbol{\mathcal{X}}}_{11}, \cdots, \widehat{\boldsymbol{\mathcal{X}}}_{LK}$ are free over $\mathcal{D}$, we obtain $\widetilde{\boldsymbol{\mathcal{X}}}$ is semicircular over $\mathcal{D}$.
According to \eqref{eq:relation_between_Mn_valued_cumulants_and_En_valued_cumulants}, we obtain
\begin{IEEEeqnarray}{Rl}
&\!\!\kappa_t^{\mathcal{M}_n}(\widehat{\boldsymbol{\mathcal{X}}}_{i_1j_1}\mathbf{C}_1, \widehat{\boldsymbol{\mathcal{X}}}_{i_2j_2}\mathbf{C}_2,\cdots,\widehat{\boldsymbol{\mathcal{X}}}_{i_tj_t})
\nonumber \\
&=E_{\mathcal{D}}\{\kappa_t^{\mathcal{M}_n}(\widehat{\boldsymbol{\mathcal{X}}}_{i_1j_1}E_{\mathcal{D}}\{\mathbf{C}_1\}, \widehat{\boldsymbol{\mathcal{X}}}_{i_2j_2}
E_{\mathcal{D}}\{\mathbf{C}_2\},\cdots,\widehat{\boldsymbol{\mathcal{X}}}_{i_tj_t})\}. \nonumber \\
\end{IEEEeqnarray}
Thus, we have that $\widehat{\boldsymbol{\mathcal{X}}}_{11}, \cdots, \widehat{\boldsymbol{\mathcal{X}}}_{LK}$ and ${\mathcal{M}_n}$ are free over $\mathcal{D}$ by applying Theorem $3.5$ of \cite{nica2002operator}.
It follows that $\widetilde{\boldsymbol{\mathcal{X}}}$ and ${\mathcal{M}_n}$ are free over $\mathcal{D}$.
\section{Proof of Theorem \ref{th:cauchy_transform}}
\label{sec:proof_of_cauchy_theorem}
{Recall that $\boldsymbol{\mathcal{X}} = \overline{\mathbf{X}} + \widetilde{\boldsymbol{\mathcal{X}}}$.}
Since $\widetilde{\boldsymbol{\mathcal{X}}}$ and $\overline{\boldsymbol{\mathbf{X}}}$ are free over $\mathcal{D}$ by Lemma \ref{lm:semicircular_lemma}, we can apply \eqref{eq:operator_cauchy_transform_of_sum_of_free_varaible} and thus obtain
\begin{IEEEeqnarray}{Rl}
\!\!\!\!\!\!\!\!\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)
=&\mathcal{G}_{\overline{\boldsymbol{\mathbf{X}}}}^{\mathcal{D}}\left(z\mathbf{I}_n - \mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}}\left(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)\right)\right)
\nonumber \\
=&E_{\mathcal{D}}\left\{\left(z\mathbf{I}_{n} - \mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}}\left(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)\right)- \overline{\boldsymbol{\mathbf{X}}}\right)^{-1}\right\}.
\label{eq:equation_of_operator_valued_cauchy_transform}
\end{IEEEeqnarray}
Since $\boldsymbol{\mathcal{X}} = \boldsymbol{\mathcal{X}}^H$ and
\begin{IEEEeqnarray}{Rl}
\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n) =
E_{\mathcal{D}}\{(z\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}\}
\end{IEEEeqnarray}
we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!
\Im(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n))
\nonumber \\
&= \frac{1}{2i}\left(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n) - \left(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)\right)^H\right)
\nonumber \\
&=
\frac{1}{2i}E_{\mathcal{D}}\left\{\left(z\mathbf{I}_{n} - \boldsymbol{\mathcal{X}}\right)^{-1} - \left({z}^*\mathbf{I}_{n} - \boldsymbol{\mathcal{X}}\right)^{-1}\right\} \nonumber \\
&=-\Im(z)E_{\mathcal{D}}\left\{\left(z\mathbf{I}_{n} - \boldsymbol{\mathcal{X}}\right)^{-1}\left({z}^*\mathbf{I}_{n} - \boldsymbol{\mathcal{X}}\right)^{-1}\right\}.
\end{IEEEeqnarray}
Since $\boldsymbol{\mathcal{X}}$ is Hermitian, $({z}^*\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}=((z\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1})^H$, and thus $E\{(z\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}({z}^*\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}\}$ is positive definite. Each diagonal block of $E_{\mathcal{D}}\{(z\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}({z}^*\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}\}$ is a principal submatrix of $E\{(z\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}({z}^*\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}\}$
and thus positive definite by Theorem $3.4$ of \cite{bapat2012linear}. Then $E_{\mathcal{D}}\{(z\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}({z}^*\mathbf{I}_{n} - \boldsymbol{\mathcal{X}})^{-1}\}$ is also positive definite. Thus, we obtain $\Im(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)) \prec 0$ for $z \in \mathbb{C}^+$.
This implies that $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)$ should be a solution of \eqref{eq:equation_of_operator_valued_cauchy_transform} with the property that $\Im(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)) \prec 0$ for $z \in \mathbb{C}^+$.
In the following, we will prove that \eqref{eq:equation_of_operator_valued_cauchy_transform} has
exactly one solution with $\Im(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)) \prec 0$ for $z \in \mathbb{C}^+$.
We replace $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)$ with $-i\mathbf{W}$; since $\Im(\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)) \prec 0$, we have that $\Re(\mathbf{W}) \succ 0$. Then, \eqref{eq:equation_of_operator_valued_cauchy_transform} becomes
\begin{IEEEeqnarray}{Rl}
\mathbf{W} =& iE_{\mathcal{D}}\left\{\left(z\mathbf{I}_{n} - \mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}}(-i\mathbf{W})- \overline{\boldsymbol{\mathbf{X}}}\right)^{-1}\right\} \nonumber \\
=& E_{\mathcal{D}}\left\{\left(\mathbf{V} + \mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}}(\mathbf{W})\right)^{-1}\right\} \nonumber \\
=& E_{\mathcal{D}}\{\mathfrak{F}_\mathbf{V}(\mathbf{W})\}
\end{IEEEeqnarray}
where $\mathbf{V}=-iz\mathbf{I}_{n}+i\overline{\boldsymbol{\mathbf{X}}}$. Since $z \in \mathbb{C}^+$ and $\overline{\boldsymbol{\mathbf{X}}}$ is Hermitian, we have that $\Re(\mathbf{V}) \succeq \epsilon \mathbf{I}_{n}$ for some $\epsilon > 0$.
Let ${\mathcal{M}_n}_{+}$ denote $\{\mathbf{W} \in \mathcal{M}_n:\Re(\mathbf{W}) \succeq \epsilon\mathbf{I}~{\rm for~some~}\epsilon > 0\}$. We
define $R_a = \{\mathbf{W} \in {\mathcal{M}_n}_{+}:\|\mathbf{W}\| \leq a \}$ for $a > 0$.
According to Proposition $3.2$ of \cite{helton2007operator}, $\mathfrak{F}_\mathbf{V}$ is well defined, $\|\mathfrak{F}_\mathbf{V}(\mathbf{W})\| \leq \|{\Re(\mathbf{V})}^{-1}\| $, and $\mathfrak{F}_\mathbf{V}$ maps $R_a$ strictly to itself for $\mathbf{V} \in {\mathcal{M}_n}_{+}$ and $\|{\Re(\mathbf{V})}^{-1}\| < a $.
Furthermore, by applying the Earle-Hamilton fixed point theorem \cite{earle1970fixed}, Theorem $2.1$ of \cite{helton2007operator} proves that
there exists exactly one solution $\mathbf{W} \in {\mathcal{M}_n}_{+}$ to the equation $\mathbf{W} = \mathfrak{F}_\mathbf{V}(\mathbf{W})$ and that
the solution is the limit of the iterates $\mathbf{W}_n = \mathfrak{F}_\mathbf{V}^n(\mathbf{W}_0)$ for every $\mathbf{W}_0 \in {\mathcal{M}_n}_{+}$.
We herein extend the proof of \cite{helton2007operator}. First, we define $R_b = \{\mathbf{W} \in {\mathcal{M}_n}_{+} \cap \mathcal{D} : \|\mathbf{W}\| \leq b \}$ for $b > 0$. Using Proposition $3.2$ of \cite{helton2007operator}, we have that $\|\mathfrak{F}_\mathbf{V}(\mathbf{W})\| \leq \|{\Re(\mathbf{V})}^{-1}\| $ and
$\Re(\mathfrak{F}_\mathbf{V}(\mathbf{W})) \succeq \epsilon\mathbf{I}$ for some $\epsilon > 0$ and $\mathbf{W} \in R_b$. Since $\|E_{\mathcal{D}}\{\mathfrak{F}_\mathbf{V}(\mathbf{W})\}\| \leq \|\mathfrak{F}_\mathbf{V}(\mathbf{W})\| $, we obtain $\|E_{\mathcal{D}}\{\mathfrak{F}_\mathbf{V}(\mathbf{W})\}\| \leq \|{\Re(\mathbf{V})}^{-1}\| $. Furthermore, because each diagonal block of $E_{\mathcal{D}}\{\mathfrak{F}_\mathbf{V}(\mathbf{W})\}$ is a principal submatrix of $\mathfrak{F}_\mathbf{V}(\mathbf{W})$, we also have that
$\lambda_{\min}(\Re(\mathfrak{F}_\mathbf{V}(\mathbf{W}))) \leq \lambda_{\min}(\Re(E_{\mathcal{D}}\{\mathfrak{F}_\mathbf{V}(\mathbf{W})\}))$ by applying Theorem $1$ of \cite{thompson1972principal}. Hence, we
have that $\Re(E_{\mathcal{D}}\{\mathfrak{F}_\mathbf{V}(\mathbf{W})\}) \succeq \epsilon\mathbf{I}$ for some $\epsilon > 0$, and that $E_{\mathcal{D}} \circ \mathfrak{F}_\mathbf{V}$ maps $R_b$ strictly to itself for $\mathbf{V} \in {\mathcal{M}_n}_{+} \cap \mathcal{D} $ and $\|{\Re(\mathbf{V})}^{-1}\| < b $.
Thus, applying the Earle-Hamilton fixed point theorem,
we obtain there exists exactly one solution $\mathbf{W} \in {\mathcal{M}_n}_{+} \cap \mathcal{D}$ to the equation $\mathbf{W} = E_{\mathcal{D}}\{\mathfrak{F}_\mathbf{V}(\mathbf{W})\}$ and
the solution is the limit of iterates $\mathbf{W}_n = (E_{\mathcal{D}} \circ \mathfrak{F}_\mathbf{V})^n(\mathbf{W}_0)$ for every $\mathbf{W}_0 \in {\mathcal{M}_n}_{+} \cap \mathcal{D}$.
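For illustration only (this numerical sketch is ours and not part of the proof; the block sizes, the mean matrix and the variance map below are toy assumptions), the above fixed-point iteration can be implemented directly, e.g., in Python:
\begin{verbatim}
import numpy as np

# Toy sketch of the iteration W <- E_D{F_V(W)} with
# F_V(W) = (V + R(W))^{-1}, whose convergence is
# guaranteed by the Earle-Hamilton argument above.
rng = np.random.default_rng(0)
blocks = [2, 3]   # assumed block sizes defining D
n = sum(blocks)

def E_D(M):
    # conditional expectation onto D:
    # keep the diagonal blocks, zero the rest
    out = np.zeros_like(M)
    i = 0
    for b in blocks:
        out[i:i+b, i:i+b] = M[i:i+b, i:i+b]
        i += b
    return out

A = rng.standard_normal((n, n)) \
    + 1j*rng.standard_normal((n, n))
Xbar = (A + A.conj().T)/2  # Hermitian mean (assumed)
R = lambda W: E_D(W)/n     # toy variance map (assumed)

z = 0.1 + 1.0j             # Im(z) > 0
V = -1j*z*np.eye(n) + 1j*Xbar  # V = -iz I + i Xbar

W = np.eye(n, dtype=complex)   # any W0, Re(W0) > 0
for _ in range(200):
    W = E_D(np.linalg.inv(V + R(W)))

G = -1j*W                  # G_X^D(z I_n) = -i W
print(np.allclose(G, E_D(G)))  # fixed point is in D
\end{verbatim}
In practice, the iteration is stopped once $\|\mathbf{W}_{j+1}-\mathbf{W}_{j}\|$ falls below a prescribed tolerance.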
Following a derivation similar to that of \eqref{eq:realtion_of_Cauchy_transform_of X_and_X2}, we have that
\begin{eqnarray}
\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n) = z\mathcal{G}_{\boldsymbol{\mathcal{X}}^{2}}^{\mathcal{D}}(z^2\mathbf{I}_n)
\end{eqnarray}
where $z,z^2 \in \mathbb{C}^+$.
Then, we obtain
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!z\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z^2\mathbf{I}_n)
\nonumber \\
&= E_{\mathcal{D}}\left\{\left(z\mathbf{I}_{n} - \mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}}\left(z\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z^2\mathbf{I}_n)\right)- \overline{\boldsymbol{\mathbf{X}}}\right)^{-1}\right\} \label{eq:equation_of_operator_valued_cauchy_transform_X2}
\end{IEEEeqnarray}
by substituting $z\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z^2\mathbf{I}_n)$ for $\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)$ in
\eqref{eq:equation_of_operator_valued_cauchy_transform}.
{Furthermore, we have that $\Im(z^{-1}\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}}(z\mathbf{I}_n)) \prec 0$
for $z,z^2 \in \mathbb{C}^+$.
Thus, $z\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z^2\mathbf{I}_n)$ with $\Im(\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z^2\mathbf{I}_n)) \prec 0$ for $z,z^2 \in \mathbb{C}^+$ is uniquely determined by \eqref{eq:equation_of_operator_valued_cauchy_transform_X2}.}
Since $\widetilde{\boldsymbol{\mathcal{X}}}$ is semicircular over $\mathcal{D}$ as shown in Lemma \ref{lm:semicircular_lemma}, we have that
\begin{IEEEeqnarray}{Rl}
\!\!\!\!\!\!\mathcal{R}_{\widetilde{\boldsymbol{\mathcal{X}}}}^{\mathcal{D}}(\mathbf{C})
=&E_{\mathcal{D}}\{{\widetilde{\boldsymbol{\mathcal{X}}}}\mathbf{C}{\widetilde{\boldsymbol{\mathcal{X}}}}\}
=E_{\mathcal{D}}\{{\widetilde{\boldsymbol{\mathbf{X}}}}\mathbf{C}{\widetilde{\boldsymbol{\mathbf{X}}}}\}\nonumber \\
=& \left(
\begin{array}{cccc}
\sum\limits_{k=1}^{K}{\tilde{\eta}}_k(\mathbf{C}_k) & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \eta_{1} (\widetilde{\mathbf{C}}) & \ldots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \ldots & \eta_{K} (\widetilde{\mathbf{C}}) \\
\end{array}
\right) \label{eq:operator_valued_r_transform_of_widetilde_mathcal_H}
\end{IEEEeqnarray}
where $\mathbf{C}={\rm{diag}}(\widetilde{\mathbf{C}},\mathbf{C}_1,\cdots,\mathbf{C}_K)$, ${\widetilde{\mathbf{C}}}\in \mathcal{M}_N$ and $\mathbf{C}_k\in \mathcal{M}_{M_k}$.
Then according to \eqref{eq:Cauchy_transform_of_Hfreesquare_in_detail} and \eqref{eq:operator_valued_r_transform_of_widetilde_mathcal_H}, \eqref{eq:equation_of_operator_valued_cauchy_transform_X2}
becomes
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\left(
\begin{array}{cccc}
z\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z^2\mathbf{I}_N) & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & z\mathcal{G}_1(z^2) & \ldots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \ldots & z\mathcal{G}_K(z^2) \\
\end{array}
\right)
\nonumber \\
&\!\!\!\!= E_{\mathcal{D}}\!\!\left\{\!\left(\!
\begin{array}{cccc}
z{\tilde{{\boldsymbol{\Phi}}}}(z^2) & -\overline{\mathbf{H}}_1 & \cdots & -\overline{\mathbf{H}}_K \\
-\overline{\mathbf{H}}{}_1^H & z{{\boldsymbol{\Phi}}}_1(z^2) & \ldots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
-\overline{\mathbf{H}}{}_K^H & \mathbf{0} & \ldots & z{{\boldsymbol{\Phi}}}_K(z^2) \\
\end{array}
\!\!\right)^{-1}\!\!\right\} \label{eq:equation_of_operator_value_cauchy_transform_1}
\end{IEEEeqnarray}
where
\begin{IEEEeqnarray}{Rl}
&\tilde{\boldsymbol{\Phi}}(z^2)= \mathbf{I}_N - \sum\limits_{k=1}^{K}{\tilde{\eta}}_{k}(\mathcal{G}_k(z^2)) \\
&{\boldsymbol{\Phi}}_k(z^2) = \mathbf{I}_{M_k} - \eta_{k} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z^2\mathbf{I}_N)).
\end{IEEEeqnarray}
According to the block matrix inversion formula \cite{petersen2008matrix}
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\left(\begin{array}{cc}
\mathbf{A}_{11} & \mathbf{A}_{12} \\
\mathbf{A}_{21} & \mathbf{A}_{22} \\
\end{array}\right)^{-1}
\nonumber \\
&=
\left(\begin{array}{cc}
\mathbf{C}_{1}^{-1} & -\mathbf{A}_{11}^{-1}\mathbf{A}_{12}\mathbf{C}_{2}^{-1} \\
-\mathbf{C}_{2}^{-1}\mathbf{A}_{21}\mathbf{A}_{11}^{-1} & \mathbf{C}_{2}^{-1} \\
\end{array}\right)
\end{IEEEeqnarray}
where $\mathbf{C}_{1}=\mathbf{A}_{11}-\mathbf{A}_{12}\mathbf{A}_{22}^{-1}\mathbf{A}_{21}$ and
$\mathbf{C}_{2}=\mathbf{A}_{22}-\mathbf{A}_{21}\mathbf{A}_{11}^{-1}\mathbf{A}_{12}$,
\eqref{eq:equation_of_operator_value_cauchy_transform_1} can be split into
\begin{equation}
z\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z^2\mathbf{I}_N) = \left(z{\Tilde{{\boldsymbol{\Phi}}}}(z^2)
- \overline{\mathbf{H}}\left(z{{\boldsymbol{\Phi}}}(z^2)\right)^{-1} \overline{\mathbf{H}}{}^H\right)^{-1}
\label{eq:Cauchy_transform_proof_temp1}
\end{equation}
and
\begin{equation}
z\mathcal{G}_k(z^2) = \left(\left(z{{\boldsymbol{\Phi}}}(z^2)
- \overline{\mathbf{H}}{}^H\left(z{\Tilde{{\boldsymbol{\Phi}}}}(z^2)\right)^{-1} \overline{\mathbf{H}}\right)^{-1}\right)_k
\label{eq:Cauchy_transform_proof_temp2}
\end{equation}
where
\begin{equation}
{{\boldsymbol{\Phi}}}(z^2) = {\rm{diag}}\left({{\boldsymbol{\Phi}}}_1(z^2),{{\boldsymbol{\Phi}}}_2(z^2),\cdots,{{\boldsymbol{\Phi}}}_K(z^2)\right).
\end{equation}
Furthermore, \eqref{eq:Cauchy_transform_proof_temp1} and \eqref{eq:Cauchy_transform_proof_temp2} are equivalent to
\begin{equation}
\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N) = \left(z{\Tilde{{\boldsymbol{\Phi}}}}(z)
- \overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(z)^{-1} \overline{\mathbf{H}}{}^H\right)^{-1}
\end{equation}
and
\begin{equation}
\mathcal{G}_k(z) = \left((z{{\boldsymbol{\Phi}}}(z)
- \overline{\mathbf{H}}{}^H{\Tilde{{\boldsymbol{\Phi}}}}(z)^{-1} \overline{\mathbf{H}})^{-1}\right)_k.
\end{equation}
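As a quick consistency check (ours, not stated explicitly here), setting $\overline{\mathbf{H}} = \mathbf{0}$ in the last two equations yields
\begin{equation}
\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N) = \frac{1}{z}{\Tilde{{\boldsymbol{\Phi}}}}(z)^{-1}, \qquad
\mathcal{G}_k(z) = \frac{1}{z}{{\boldsymbol{\Phi}}}_k(z)^{-1} \nonumber
\end{equation}
which, combined with the definitions of ${\Tilde{{\boldsymbol{\Phi}}}}(z)$ and ${{\boldsymbol{\Phi}}}_k(z)$, recovers the familiar coupled fixed-point equations of the deterministic equivalent for a zero-mean channel.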
Finally, since the solution has the property $\Im(\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z\mathbf{I}_n)) \prec 0$ for $z \in \mathbb{C}^+$ and
$\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)$ is a principal submatrix of $\mathcal{G}_{\boldsymbol{\mathcal{X}}^2}^{\mathcal{D}}(z\mathbf{I}_n)$, we have that $\Im(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(z\mathbf{I}_N)) \prec 0$ for $z \in \mathbb{C}^+$ by using Theorem $3.4$ of \cite{bapat2012linear}.
\section{Proof of Lemma \ref{lm:shannon_theorem_lemma_1}}
\label{sec:proof_of_shannon_theorem_lemma_1}
Recall that $\mathbf{E}_k(x)=-x\mathcal{G}_k(-x)$.
Let $\boldsymbol{\mathcal{E}}(x)$ denote
\begin{equation}
\left({{\boldsymbol{\Phi}}}(-x)+ x^{-1}\overline{\mathbf{H}}{}^H{\Tilde{{\boldsymbol{\Phi}}}}(-x)^{-1}\overline{\mathbf{H}}\right)^{-1}.
\nonumber
\end{equation}
Then, we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\sum\limits_{k=1}^{K}{\rm{tr}}\left(\left({{\boldsymbol{\Phi}}}_k(-x)^{-1}-\mathbf{E}_k(x)\right)\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}\right)
\nonumber \\
&~~~~~~~~={\rm{tr}}\left(\left({{\boldsymbol{\Phi}}}(-x)^{-1}-\boldsymbol{\mathcal{E}}(x)\right)\frac{d{{\boldsymbol{\Phi}}}(-x)}{dx}\right).
\end{IEEEeqnarray}
Recall that $\mathbf{A}(x)=({\Tilde{{\boldsymbol{\Phi}}}}(-x)
+ x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H)^{-1}$.
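For completeness (our expansion of the step that follows), the Woodbury identity in the form needed here reads
\begin{equation}
\left(\mathbf{P}+\mathbf{U}\mathbf{T}\mathbf{V}\right)^{-1} = \mathbf{P}^{-1} - \mathbf{P}^{-1}\mathbf{U}\left(\mathbf{T}^{-1}+\mathbf{V}\mathbf{P}^{-1}\mathbf{U}\right)^{-1}\mathbf{V}\mathbf{P}^{-1} \nonumber
\end{equation}
applied with $\mathbf{P}={{\boldsymbol{\Phi}}}(-x)$, $\mathbf{U}=\overline{\mathbf{H}}{}^H$, $\mathbf{T}=x^{-1}{\Tilde{{\boldsymbol{\Phi}}}}(-x)^{-1}$ and $\mathbf{V}=\overline{\mathbf{H}}$, for which $(\mathbf{T}^{-1}+\mathbf{V}\mathbf{P}^{-1}\mathbf{U})^{-1} = (x{\Tilde{{\boldsymbol{\Phi}}}}(-x)+\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1}\overline{\mathbf{H}}{}^H)^{-1} = x^{-1}\mathbf{A}(x)$.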
Using the Woodbury identity \cite{higham2002accuracy}, we rewrite $\boldsymbol{\mathcal{E}}(x)$ as
\begin{equation}
\boldsymbol{\mathcal{E}}(x)=
\boldsymbol{\Phi}(-x)^{-1}-x^{-1}\boldsymbol{\Phi}(-x)^{-1}\overline{\mathbf{H}}{}^H\mathbf{A}(x)\overline{\mathbf{H}}\boldsymbol{\Phi}(-x)^{-1}
\end{equation}
which further leads to
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\sum\limits_{k=1}^{K}{\rm{tr}}\left(\left({{\boldsymbol{\Phi}}}_k(-x)^{-1}-\mathbf{E}_k(x)\right)\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}\right)
\nonumber \\
&={\rm{tr}}\left({{\boldsymbol{\Phi}}}(-x)^{-1}x^{-1}\overline{\mathbf{H}}{}^H\mathbf{A}(x)\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1}\frac{d{{\boldsymbol{\Phi}}}(-x)}{dx}\right) \nonumber \\
&={\rm{tr}}\left(x^{-1}\overline{\mathbf{H}}{}^H\mathbf{A}(x)\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1}\frac{d{{\boldsymbol{\Phi}}}(-x)}{dx}{{\boldsymbol{\Phi}}}(-x)^{-1}\right)
\nonumber \\
&=-{\rm{tr}}\left(x^{-1}\overline{\mathbf{H}}{}^H\mathbf{A}(x)\overline{\mathbf{H}}\frac{d{{\boldsymbol{\Phi}}}(-x)^{-1}}{dx}\right).
\end{IEEEeqnarray}
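The last equality above uses (spelled out here for completeness) the identity
\begin{equation}
\frac{d{{\boldsymbol{\Phi}}}(-x)^{-1}}{dx} = -{{\boldsymbol{\Phi}}}(-x)^{-1}\frac{d{{\boldsymbol{\Phi}}}(-x)}{dx}{{\boldsymbol{\Phi}}}(-x)^{-1} \nonumber
\end{equation}
together with the cyclic invariance of the trace.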
\section{Proof of Lemma \ref{lm:shannon_theorem_lemma_2}}
\label{sec:proof_of_shannon_theorem_lemma_2}
From
\begin{IEEEeqnarray}{Rl}
{{\boldsymbol{\Phi}}}_k(-x) - \mathbf{I}_{M_k}=&- \eta_{k} (\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)) \nonumber \\
=&\eta_{k}(x^{-1}\mathbf{A}(x)) \label{eq:phi_k_Az_relation}
\end{IEEEeqnarray}
we have that
\begin{eqnarray}
\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx} = \eta_{k} \left(\frac{dx^{-1}\mathbf{A}(x)}{dx}\right).
\label{eq:Phi_k_z_Az_relation}
\end{eqnarray}
From ${\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N=-\sum_{k=1}^{K}{\tilde{\eta}}_{k} (\mathcal{G}_k(-x))$, we then obtain that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\!\!\!\!{\rm{tr}}\left(\frac{dx^{-1}\mathbf{A}(x)}{dx}\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right)
\nonumber \\
&= -{\rm{tr}}\left(\frac{dx^{-1}\mathbf{A}(x)}{dx}\sum\limits_{k=1}^{K}{\tilde{\eta}}_{k} (\mathcal{G}_k(-x))\right) \nonumber \\
&= {\rm{tr}}\left(\frac{dx^{-1}\mathbf{A}(x)}{dx}\sum\limits_{k=1}^{K}{\tilde{\eta}}_{k} (x^{-1}\mathbf{E}_k(x))\right) \nonumber \\
&= \sum\limits_{k=1}^{K}{\rm{tr}}\left({\eta}_k\left(\frac{dx^{-1}\mathbf{A}(x)}{dx}\right)x^{-1}\mathbf{E}_k(x)\right)
\end{IEEEeqnarray}
where the last equality is due to
\begin{IEEEeqnarray}{Rl}
{\rm{tr}}(\mathbf{A}_1{\tilde{\eta}}_{k} (\mathbf{A}_2))=& {\rm{tr}}({\mathbb E} \{\mathbf{A}_1 \widetilde{\mathbf{H}}_k \mathbf{A}_2 \widetilde{\mathbf{H}}_k^H\}) \IEEEnonumber \\
=& {\rm{tr}}({\mathbb E}\{\widetilde{\mathbf{H}}_k^H\mathbf{A}_1\widetilde{\mathbf{H}}_k\mathbf{A}_2\})
\IEEEnonumber \\
= &{\rm{tr}}(\eta_k({\mathbf{A}_1})\mathbf{A}_2). \nonumber
\end{IEEEeqnarray}
According to \eqref{eq:Phi_k_z_Az_relation}, we finally obtain
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\rm{tr}}\left(\frac{dx^{-1}\mathbf{A}(x)}{dx}\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right)
\nonumber \\
&&=\sum\limits_{k=1}^{K}{\rm{tr}}\left(\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}x^{-1}\mathbf{E}_k(x)\right).
\end{eqnarray}
\section{Proof of Theorem \ref{th:shannon_theorem}}
\label{sec:proof_of_shannon_theorem}
We define ${J}(x)$ by
\begin{eqnarray}
{J}(x) = -x^{-1}-G_{\boldsymbol{\mathcal{B}}_N}(-x) = -x^{-1}{\rm{tr}}(\mathbf{A}(x)\mathbf{B}(x))
\end{eqnarray}
where $\mathbf{B}(x)$ denotes ${\Tilde{{\boldsymbol{\Phi}}}}(-x)
+ x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H - \mathbf{I}_N$.
For convenience, we rewrite ${J}(x)$ as
\begin{eqnarray}
{J}(x) = J_1(x) + J_2(x)
\end{eqnarray}
where $J_1(x)$ and $J_2(x)$ are defined by
\begin{eqnarray}
J_1(x) = -\frac{1}{x}{\rm{tr}}\left(\mathbf{A}(x)\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right)
\end{eqnarray}
and
\begin{eqnarray}
J_2(x)= -\frac{1}{x^2}{\rm{tr}}\left(\mathbf{A}(x)\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H\right).
\end{eqnarray}
Differentiating ${\rm{tr}}(-\mathbf{A}(x)({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N))$ with respect to $x$,
we have that
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\!\!\frac{d}{dx}{\rm{tr}}\left(-\mathbf{A}(x)\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right)
\IEEEnonumber \\
&\!\!\!\!\!\!= J_1(x) +K(x) -x{\rm{tr}}\left(\!\frac{dx^{-1}\mathbf{A}(x)}{dx}\left(\!{\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\!\right)\!\right) \label{eq:shannon_derivate_1_1}
\end{IEEEeqnarray}
where $K(x)$ is defined as
\begin{eqnarray}
K(x) = -{\rm{tr}}\left(\mathbf{A}(x)\frac{d{\Tilde{{\boldsymbol{\Phi}}}}(-x)}{dx}\right).
\end{eqnarray}
According to {Lemma \ref{lm:shannon_theorem_lemma_2}}, \eqref{eq:shannon_derivate_1_1} becomes
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\!\!\frac{d}{dx}{\rm{tr}}\left(-\mathbf{A}(x)\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right)
\nonumber \\
& = J_1(x) + K(x)
-\sum\limits_{k=1}^{K}{\rm{tr}}\left(\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}\mathbf{E}_k(x)\right). \label{eq:shannon_derivate_1_2}
\end{IEEEeqnarray}
Defining $L(x)$ as
\begin{eqnarray}
L(x) = -\sum\limits_{k=1}^{K}{\rm{tr}}\left(\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}\mathbf{E}_k(x)\right)
\end{eqnarray}
we obtain
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{d}{dx}{\rm{tr}}\left(-\mathbf{A}(x)\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right)
\nonumber \\
&&= J_1(x) + K(x) + L(x). \label{eq:shannon_derivate_1_3}
\end{eqnarray}
For a matrix-valued function $\mathbf{F}(x)$, we have that
\begin{equation}
\frac{d}{dx}\log\det(\mathbf{F}(x)) = {\rm{tr}}\left(\mathbf{F}(x)^{-1}\frac{d\mathbf{F}(x)}{dx}\right).
\end{equation}
When $\mathbf{F}(x)={\Tilde{{\boldsymbol{\Phi}}}}(-x)
+ x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H$, we obtain
\begin{eqnarray}
& &\!\!\!\!\!\!\!\!\!\!\frac{d}{dx}\log\det\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)
+ x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H\right) \nonumber \\
&&\!\!\!\!\!\!\!\!= {\rm{tr}}\left(\mathbf{A}(x)\frac{d\mathbf{B}(x)}{dx}\right) \nonumber \\
&&\!\!\!\!\!\!\!\!= {\rm{tr}}\left(\!\mathbf{A}(x)\frac{d{\Tilde{{\boldsymbol{\Phi}}}}(-x)}{dx}\!\right)
+ {\rm{tr}}\left(\!\mathbf{A}(x)\frac{dx^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H}{dx}\!\right) \nonumber \\
&&\!\!\!\!\!\!\!\!= -K(x) + J_2(x) + x^{-1}{\rm{tr}}\left(\!\mathbf{A}(x)\frac{d\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H}{dx}\!\right).\nonumber \\ \label{eq:shannon_derivate_2_1}
\end{eqnarray}
According to {Lemma \ref{lm:shannon_theorem_lemma_1}}, \eqref{eq:shannon_derivate_2_1} becomes
\begin{eqnarray}
& &\!\!\!\!\!\!\!\!\!\!\frac{d}{dx}\log\det\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)
+ x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H\right) \nonumber \\
&&=-K(x) + J_2(x)
\nonumber \\
&&~~~~~~- \sum\limits_{k=1}^{K}{\rm{tr}}\left(\left({{\boldsymbol{\Phi}}}_k(-x)^{-1}-\mathbf{E}_k(x)\right)\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}\right) \nonumber \\
&&=-K(x) + J_2(x) - L(x)
\nonumber \\
&&~~~~~~- \sum\limits_{k=1}^{K}{\rm{tr}}\left(\left({{\boldsymbol{\Phi}}}_k(-x)^{-1}\right)\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}\right). \label{eq:shannon_derivate_2_2}
\end{eqnarray}
From \eqref{eq:shannon_derivate_1_3}, \eqref{eq:shannon_derivate_2_2} and
\begin{eqnarray}
\frac{d}{dx}\log\det({{\boldsymbol{\Phi}}}(-x))
=\sum\limits_{k=1}^{K}{\rm{tr}}\left({{\boldsymbol{\Phi}}}_k(-x)^{-1}\frac{d{{\boldsymbol{\Phi}}}_k(-x)}{dx}\right)
\end{eqnarray}
we obtain
\begin{eqnarray}
J(x) \!\!\!\!&=&\!\!\!\!\frac{d}{dx}\log\det\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)
+ x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H\right)
\nonumber \\
&&+ \frac{d}{dx}\log\det({{\boldsymbol{\Phi}}}(-x))
\nonumber \\
&&~~
-\frac{d}{dx}{\rm{tr}}\left(\mathbf{A}(x)\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right).
\end{eqnarray}
Since $J(x)$ is the derivative of the Shannon transform, $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)\rightarrow 0$ as $x \rightarrow \infty$, and each term on the right-hand side above also vanishes as $x \rightarrow \infty$, the Shannon transform $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$ can be obtained as
\begin{eqnarray}
\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)
\!\!\!\!&=&\!\!\!\! \log\det\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)
+x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H\right)
\nonumber \\
&&\!\!+\log\det({{\boldsymbol{\Phi}}}(-x))
\nonumber \\
&&-{\rm{tr}}\left(\mathbf{A}(x)\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right).
\end{eqnarray}
Furthermore, it is easy to verify that
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!{\rm{tr}}\left(\mathbf{A}(x)\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right)
\nonumber \\
&&= {\rm{tr}}\left(x\sum\limits_{k=1}^{K}{{\eta}}_{k}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N))\mathcal{G}_k(-x) \right).
\end{eqnarray}
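To spell out this verification (our expansion), combine ${\Tilde{{\boldsymbol{\Phi}}}}(-x)-\mathbf{I}_N=-\sum_{k=1}^{K}{\tilde{\eta}}_{k}(\mathcal{G}_k(-x))$ with $\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)=-x^{-1}\mathbf{A}(x)$ and the trace identity ${\rm{tr}}(\mathbf{A}_1{\tilde{\eta}}_{k}(\mathbf{A}_2))={\rm{tr}}({\eta}_{k}(\mathbf{A}_1)\mathbf{A}_2)$ from the proof of Lemma \ref{lm:shannon_theorem_lemma_2}:
\begin{IEEEeqnarray}{Rl}
{\rm{tr}}\left(\mathbf{A}(x)\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)- \mathbf{I}_N\right)\right)
=& -\sum\limits_{k=1}^{K}{\rm{tr}}\left({\eta}_{k}(\mathbf{A}(x))\mathcal{G}_k(-x)\right) \IEEEnonumber \\
=& {\rm{tr}}\left(x\sum\limits_{k=1}^{K}{{\eta}}_{k}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N))\mathcal{G}_k(-x)\right) \IEEEnonumber
\end{IEEEeqnarray}
where the last step uses the linearity of ${\eta}_{k}$.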
Finally, we obtain the Shannon transform $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$ as
\begin{IEEEeqnarray}{Rl}
\!\!\!\!\!\!\!\!\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x) = & \log\det\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)
+ x^{-1}\overline{\mathbf{H}}{{\boldsymbol{\Phi}}}(-x)^{-1} \overline{\mathbf{H}}{}^H\right)
\IEEEnonumber \\
&\!\!+\log\det({{\boldsymbol{\Phi}}}(-x))
\IEEEnonumber \\
&- {\rm{tr}}\left(x\sum\limits_{k=1}^{K}{{\eta}}_{k}(\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N))\mathcal{G}_k(-x) \right).
\end{IEEEeqnarray}
\section{Proof of Theorem \ref{th:capaicty_achieving matrix_theorem}}
\label{sec:proof_of_capaicty_achieving matrix_theorem}
The proof of the strict convexity of $-\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$ with respect to $\mathbf{Q}$ is similar to those of Theorem $3$ of \cite{dupuy2011capacity} and Theorem $4$ of \cite{dumont2010capacity}, and is thus omitted here.
Let the Lagrangian of the optimization problem \eqref{eq:optimization_problem_of_deterministic_equivalent} be defined as
\begin{IEEEeqnarray}{Rl}
\mathcal{L}(\mathbf{Q},\mathbf{\Upsilon},\boldsymbol{\mu})
=& \mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x) + {\rm{tr}}\left(\sum\limits_{k=1}^{K}\mathbf{\Upsilon}_k\mathbf{Q}_k\right)
\nonumber \\
&~~~~+ \sum\limits_{k=1}^{K}\mu_k(M_k-{\rm{tr}}(\mathbf{Q}_k))
\end{IEEEeqnarray}
{where $\mathbf{\Upsilon}\triangleq\{\mathbf{\Upsilon}_k \succeq 0\}$ and $\boldsymbol{\mu} \triangleq\{ \mu_k \geq 0 \}$ are the Lagrange multipliers associated with the problem constraints.}
In a similar manner to \cite{zhang2013capacity}, \cite{couillet2011deterministic} and \cite{dumont2010capacity}, we write
the derivative of $\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)$ with respect to $\mathbf{Q}_k$ as
\begin{IEEEeqnarray}{Rl}
&\!\!\!\!\!\!\frac{\partial\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)}{\partial \mathbf{Q}_k}
\nonumber \\
&=\frac{\partial\log\det(\mathbf{I}_M+\mathbf{\Gamma}\mathbf{Q})}{\partial \mathbf{Q}_k}
\nonumber \\
&~+\sum\limits_{ij}\frac{\partial\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)} {\partial\!\!\left[\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)\right]_{ij}} \frac{\partial\!\!\left[\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)\right]_{ij}}
{\partial\mathbf{Q}_k} \nonumber \\
&~~~+\sum\limits_{ij}\frac{\partial\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}(x)}{\partial[{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))]_{ij}}\frac{\partial[{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))]_{ij}}{\partial\mathbf{Q}_k}
\end{IEEEeqnarray}
where
\begin{eqnarray}
\frac{\partial\log\det(\mathbf{I}_M+\mathbf{\Gamma}\mathbf{Q})}{\partial \mathbf{Q}_k}
=\left(\left(\mathbf{I}_M+\mathbf{\Gamma}\mathbf{Q}\right)^{-1}\mathbf{\Gamma}\right)_k.
\end{eqnarray}
Furthermore, we obtain equations \eqref{eq:derivative_of_VBN_tmp1} and \eqref{eq:derivative_of_VBN_tmp2} at the top of the following page.
\begin{figure*}[!t]
\normalsize
\setcounter{tempequationcounter}{\value{equation}}
\begin{IEEEeqnarray}{Rl}
\frac{\partial\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}} {\partial\!\!\left[\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)\right]_{ij}}
=& {\rm{tr}}\left(\left({{\boldsymbol{\Phi}}}(-x)
+ x^{-1}\mathbf{Q}^{\frac{1}{2}}\overline{\mathbf{S}}{}^H{\Tilde{{\boldsymbol{\Phi}}}}(-x)^{-1}\overline{\mathbf{S}}\mathbf{Q}^{\frac{1}{2}} \right)^{-1}
\frac{\partial{{\boldsymbol{\Phi}}}(-x)} {\partial\!\!\left[\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)\right]_{ij}}\right) \IEEEnonumber \\
&~~- {\rm{tr}}\left(x\sum\limits_{k=1}^{K}{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))\frac{\partial\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)} {\partial\!\!\left[\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)\right]_{ij}}\right) \IEEEnonumber \\
=& 0
\label{eq:derivative_of_VBN_tmp1} \\
\frac{\partial\mathcal{V}_{\boldsymbol{\mathcal{B}}_N}}{\partial[{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))]_{ij}}
=& {\rm{tr}}\left(
\left({{\boldsymbol{\Phi}}}(-x)
+x^{-1}\mathbf{Q}^{\frac{1}{2}}\overline{\mathbf{S}}{}^H{\Tilde{{\boldsymbol{\Phi}}}}(-x)^{-1} \overline{\mathbf{S}}\mathbf{Q}^{\frac{1}{2}} \right)^{-1}\frac{\partial x^{-1}\mathbf{Q}^{\frac{1}{2}}\overline{\mathbf{S}}{}^H{\Tilde{{\boldsymbol{\Phi}}}}(-x)^{-1}\overline{\mathbf{S}} \mathbf{Q}^{\frac{1}{2}}}{\partial[{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))]_{ij}}\right) \nonumber \\
&~~+ {\rm{tr}}\left({\Tilde{{\boldsymbol{\Phi}}}}(-x)^{-1}\frac{\partial{\Tilde{{\boldsymbol{\Phi}}}}(-x)}{{\partial[{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))]_{ij}}}\right) - {\rm{tr}}\left(x\frac{\partial{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))}{\partial[{\tilde{\eta}}_{Q,k} (\mathcal{G}_k(-x))]_{ij}}\mathcal{G}_{\boldsymbol{\mathcal{B}}_N}^{\mathcal{M}_N}(-x\mathbf{I}_N)\right) \IEEEnonumber \\
=& 0 \label{eq:derivative_of_VBN_tmp2}
\end{IEEEeqnarray}
\addtocounter{tempequationcounter}{2}
\setcounter{equation}{\value{tempequationcounter}}
\hrulefill
\end{figure*}
The problem now becomes the same as that in \cite{zhang2013capacity}. Thus, the rest of the proof is omitted.
\section*{Acknowledgment}
We would like to thank the editor and the anonymous reviewers
for their helpful comments and suggestions.
\bibliographystyle{IEEEtran}
|
2,869,038,153,917 | arxiv | \section{Introduction}
Alumina (Al$_2$O$_3$) is considered to play a significant role in dust
formation around oxygen-rich cool stars. Thermodynamic equilibrium
calculations indicate that it, along with titanium oxides, is one of
the earliest condensates in the mineralogical condensation sequence
\citep{1990fmpn.coll..186T,1999A&A...347..594G}. Observationally, there
is some debate and uncertainty regarding the spectral signatures that
can be ascribed to alumina that permit a firm conclusion to be drawn
for the presence of alumina. In particular features at 11.3~$\micron$
and 13~$\micron$, seen in the spectra of O-rich AGB and
supergiant stars have often been attributed to alumina
\citep[e.g.,][and references
therein]{2000A&AS..146..437S}. Additionally, various authors have
shown that the inclusion of alumina grains in dust models yields
better fits to the observed profile of the silicate feature at
9.7~$\micron$ (especially when the feature is broad) and also
reproduces better the overall infrared spectral energy distribution
(SED) in selected AGB and OH/IR stars
\citep{2000A&AS..146..437S,2005MNRAS.362..872M,2006ApJ...640..971D}. In this paper, we
present evidence for alumina dust detection from {\it Spitzer Space
Telescope} observations of the nova-like variable V4332 Sgr. The
distinguishing feature of its mid/far IR spectrum is a deep, unusually
broad absorption feature at 10~$\micron$. We show that this feature
cannot be reproduced by silicate dust alone and that it is necessary
to invoke the presence of amorphous alumina grains to explain it.
V4332~Sgr erupted in 1994 in what was initially considered a nova-like
outburst \citep{1999AJ....118.1034M}. However, its subsequent
post-outburst evolution to a cool spectral type indicated that this was
not a classical nova eruption. The exact nature of V4332~Sgr is of
considerable interest as it, along with V838 Mon and M31 RV
(a red-variable which erupted in M31 in 1988), may form a new class of
eruptive objects
\citep[e.g.][]{2002A&A...389L..51M,2003Natur.422..405B}. V4332~Sgr
shows an intriguing emission-line spectrum in the optical and
near-infrared with several rare spectral features. Prominent molecular
bands of TiO, ScO, VO and AlO are also seen in the optical
\citep{2005A&A...439..651T,2006AN....327...44K} implying an oxygen
rich environment. The fundamental band of $^{12}$CO at 4.67$\micron$
has also been detected in the source along with water ice at
3.05$\micron$ \citep{2004ApJ...615L..53B}. The IR excess detected in
the source, along with the molecular and ice features, suggest a cool
dusty environment around the central star whose effective temperature
is estimated to be $\sim$ 3250-3280K
\citep{2003ApJ...598L..31B,2005A&A...439..651T}.
\section{Observations and Data reduction}
V4332~Sgr was imaged with the Multiband Imaging Photometer for {\it
Spitzer} \citep[MIPS;][]{2004ApJS..154...25R} at 24 and 70~$\micron$ on 15 Oct
2005 and 2 Nov 2006 (70~$\micron$ Fine and Default modes,
respectively). Data at 160~$\micron$ were
obtained on 15 Oct 2005. Spectra were obtained using the Infrared
Spectrograph on {\it Spitzer} \citep[IRS;][]{2004ApJS..154...18H} on 18 April
2005 and 19 April 2006. In 2005, low resolution (R $\sim$ 60-100) data
from $\sim 5-38~\micron$ and high resolution data (R = 600) from $\sim
18-38~\micron$ were obtained. In 2006, high resolution data from $\sim
19-38~\micron$ and low resolution data from $\sim 5-14~\micron$ were
obtained. In addition, MIPS SED mode data covering the wavelength
range from $\sim 55-90~\micron$ were obtained on 27 Sept 2005. For
the following discussion, data obtained in 2005 and 2006 will be
referred to as epoch 1 and epoch 2, respectively.
\begin{figure}
\plotone{f1.eps}
\caption{Top: Epoch 1 (solid curve and points) along with the best model
fit (dashed curve); see Table
\ref{tab:modpar}, column 3. Bottom: Expanded view around the 10~$\micron$
complex. In addition to the best fit model, the best fit silicate-only
model is overplotted (dash-dotted curve). It is clearly seen that a pure silicate model
yields a poor fit to the extended red wing of the
data. \label{fig:fullSED}}
\end{figure}
The MIPS data were reduced using the Data Analysis Tool v3.06
\citep{2005PASP..117..503G}. V4332~Sgr was detected as a point source
by MIPS at 24, 70, and 160~$\micron$, and the flux densities were
extracted using both PSF fitting and aperture photometry. The
measured MIPS flux densities were 2.34$\pm$0.07, 1.07$\pm$0.11, and
0.12$\pm$0.02~Jy at 24, 70, and 160~$\micron$\, respectively. At 24
and 70~$\micron$\, the flux densities measured in Epochs 1 and 2 were
identical within the errors and we report the weighted mean of those
measurements. The basic instrumental calibration of the MIPS SED mode
is similar to that of the 70~$\micron$ imaging mode
\citep{2007ApJ..InPress} but with a wavelength-dependent illumination
correction. The final spectrum of the source is obtained by extracting
a 5-column (49\farcs25) aperture from the sky subtracted 2-D spectrum
after correcting for slit losses. The MIPS SED data were scaled by a
factor of 0.88 to match the flux density in the 70~$\micron$ bandpass.
Details of the SED reduction can be found in Lu et al.~(2007, in
prep.).
The IRS data were processed and extracted using the {\it
Spitzer} Science Center (SSC) Pipeline S15.3 product. Since V4332~Sgr
is a bright point source, the final spectrum of the low resolution
modules were combined using the SSC post-BCD co-added, nod-subtracted
spectra. No background observation was obtained for the high
resolution modules and the background level for the epoch 1 observations
was $\sim$ 0.06 Jy (a factor of 20 fainter than the source) at
$\sim$20~$\micron$ and fairly uniform in the co-added 2-D low
resolution spectra. The SSC post-BCD extraction of the high-resolution
spectrum agrees with the low-resolution spectrum within the
uncertainties. There are no obvious emission or absorption lines seen in
the high-resolution spectrum; therefore, the final spectra of V4332
Sgr in epochs 1 and 2 are computed by averaging both low- and
high-resolution modules. In Figure \ref{fig:fullSED}, we present the
observed IRS and MIPS SED spectra of V4332~Sgr and the combined broad
band MIPS fluxes at 24, 70 and 160~$\micron$. Since there is little
apparent evolution between the epoch 1 and 2 broad band fluxes, we show
only the epoch 1 data in Figure \ref{fig:fullSED}. Evidence for changes
in the detailed shape of the SED between epochs will be examined
below.
\section{Results}
The spectrum of V4332~Sgr is dominated by a deep, broad feature at
$\sim10~\micron$, normally associated with the presence of amorphous
Mg-Fe silicate grains. However, this observed $10~\micron$ feature is
relatively broad, with an additional feature at $\sim11\micron$ and a
flattened wing beyond $\sim13~\micron$. Additionally, signatures of
ices and organic materials are evident from $\sim5-8~\micron$ (water
ice at 6~$\micron$, ``organics'' at 6.8~$\micron$ and possibly methane
ice at 7.7~$\micron$; see e.g. Bowey \& Hofmeister 2005). We have
modeled the V4332~Sgr spectrum using the radiation transfer code DUSTY
\citep{1999DUSTYManual}. The limitations of DUSTY include the
assumption of a spherically symmetric shell of material which may not
be appropriate for V4332~Sgr as the system may have a disk
\citep{2004ApJ...615L..53B}. However since we are interested in
exploring the overall shape of the observed SED rather than providing
a detailed physical model of the complete system, we have restricted
ourselves to the simplest and most generalized assumptions in our
calculations. As the luminosity of V4332~Sgr is poorly known, we have
fixed the stellar luminosity for V4332~Sgr at 10$^4$~L$_\odot$, the
default input value assumed by DUSTY. This assumption does not affect
the shape of the computed spectrum, only the physical scale of the
system when combined with the dust temperature at the inner radius of
the shell. We have fit the observed SED with two models viz. model 1
that contains silicate dust only and model 2 with a mixture of
silicate and alumina dust, where the inclusion of alumina is prompted
by the presence of a feature at $\sim11~\micron$ often attributed to
amorphous alumina \citep{2000A&AS..146..437S,2006ApJ...640..971D}.
Prompted by the presence of ice absorption at $\sim6~\micron$, the
grains in both models are coated with an ice mantle (20\% by
volume). The silicate dust optical constants are from
\citet{1984ApJ...285...89D} while the alumina optical data used are
from \citet{1997ApJ...476..199B}. Corundum was not included in our
model as there is no evidence in our spectra for the feature at
$\sim$13~$\micron$ associated with the presence of this mineral
\citep{2006ApJ...640..971D}. We tested both the 'porous' and
'compact' alumina samples of \citet{1997ApJ...476..199B}; as there were no
substantive differences between the models, we restricted ourselves to
the 'porous' sample for the subsequent modeling. The ice optical
constants are those of Wiscombe
(ftp://climate1.gsfc.nasa.gov/wiscombe/). The range of parameters
explored is given in Table \ref{tab:modpar}. The output spectra
computed using both dust models are shown in Figure
\ref{fig:fullSED}. It is clearly seen that a pure silicate composition
matches the observed 10~$\micron$ feature poorly. On the other hand,
the inclusion of alumina in model 2 improves the fit
significantly. While there is considerable degeneracy in the fits,
especially between the optical depth and the dust temperature (low
temperature, low optical depth models are somewhat degenerate with
high temperature, high optical depth models, though they consistently
yield formally worse fits), it is notable that no model consisting of
only silicate grains provided a satisfactory fit to the 10~$\micron$
absorption feature. While model 2 provides a good fit overall fit to
the SED and the 10~$\micron$ feature from 9--12.5~$\micron$, it
reproduces neither the flattening beyond $\sim$13~$\micron$, nor the
relatively narrow blue wing. We explored using different silicate
optical constants \citep[e.g.][]{1992A&A...261..567O} as well as
varying the size distribution ($a_{max}$ and $q$) but neither approach
improved the fit in these spectral regions. It is possible that a
more complex geometry than the simple spherical shell utilized in
DUSTY could improve the fit in these regions.
\begin{deluxetable}{lll}
\tablecaption{DUSTY Modeling \label{tab:modpar}}
\tablehead{
\colhead{Parameter} & \colhead{Value} & \colhead{Best Fit} \\
}
\startdata
Stellar Luminosity & $10^4$L$_{\sun}$ & fixed \\
Stellar Temperature & 3250~K\tablenotemark{1} & fixed \\
$R_{out}/R_{in}$ & 1000\tablenotemark{2} & fixed \\
Shell $\rho$ Distribution & $r^{-2}$ & fixed \\
Composition & silicates/alumina & 65\%/35\% \\
(fraction by number) & & \\
$\tau_{9.8~\micron}$ & 2 -- 55 & 45 \\
$T_{dust}(R_{in}$) & 300 -- 1750~K & 1750~K \\
Grain Size Distribution & MRN\tablenotemark{3} & fixed \\
\enddata
\tablenotetext{1}{Banerjee et al. 2003.}
\tablenotetext{2}{Maldoni et al. 2005.}
\tablenotetext{3}{Mathis et al. 1977; $n(a)\sim a^{-q}$ with
$a_{min,max}=0.005,0.25~\micron$, $q=3.5$.}
\end{deluxetable}
The plots in Figure \ref{fig:fullSED} permit a few conclusions to be
drawn: (i) the substantial improvement in the fits to the broad
10~$\micron$ feature with the inclusion of alumina indicates that
alumina is being detected in the source and its presence is
manifested by a broadening of the 9.7~$\micron$ silicate feature. A
similar conclusion was reached by \citet{2000A&AS..146..437S} using
data extending to $\leq$13.5~$\micron$. (ii) A small, yet clearly
discernible, feature is seen at 11~$\micron$. This feature is
attributable to alumina since our model calculations show that
increasing the percentage of alumina in the alumina-silicate mixture
of model 2 enhances the strength of this feature. We note that this
11$\micron$ feature is seen in a significant number of stars studied
by \citet{2000A&AS..146..437S} implying that alumina grains are fairly
prevalent.
\section{Discussion}
\subsection{The Case for Alumina Condensation}
It is perhaps not surprising to see evidence for alumina in the dust
surrounding V4332~Sgr given the presence of the AlO radical in its
optical and NIR spectra
\citep{2003ApJ...598L..31B,2005A&A...439..651T,2006AN....327...44K} and
since AlO can play a critical role in the production of
alumina. Laboratory experiments by \citet{2004A&A...420..547D} show
that aluminum oxide clusters with stoichiometry AlO-(Al$_2$O$_3$)$_n$
are readily formed in laser vaporized metallic Al when quenched in
oxygen and argon. These clusters are found to be very stable and thus
very good nucleation sites for dust growth. Additionally,
\citet{1995JPChem...99...12225} studied the kinetics of AlO + O$_2$
reactions at temperatures in the range 300-1700K with a view to
studying the fate of aluminum in combustion. The higher end of this
temperature range, it may be noted, is very close to the predicted
condensation temperature \citep[1760K;][]{1990fmpn.coll..186T} of
alumina dust around stars. The \citet{1995JPChem...99...12225}
experiment shows that AlO becomes oxidized to AlO$_2$ by O$_2$. An
additional reaction involving AlO is AlO + O +M = AlO$_2$ + M, where
the ``chaperon'', M, is any atom or molecule present that can remove
some energy from the newly-formed, activated AlO$_2$ (A. Fontijn,
private communication). Newly formed AlO$_2$ can further interact with
AlO to generate alumina: AlO + AlO$_2$ + M = Al$_2$O$_3$ + M.
These results suggest that AlO is likely to play a significant role in
the route to Al$_2$O$_3$ formation. This conclusion has theoretical
support in the work of \citet{1999A&A...347..594G} who show
that any possible nucleation species that can go on to form dust
around stars should begin with a monomer with exceptionally high bond
energy. The AlO monomer satisfies this criterion and is thus a favored
candidate to lead to the formation of larger Al$_m$O$_n$ clusters that
serve as nucleation sites for the formation of other grains or to
alumina grains themselves by homogeneous nucleation. While the
\citet{1999A&A...347..594G} analysis is based on thermal equilibrium
considerations, an alternative model is the non-equilibrium formation
of chaotic silicates proposed by \citet{1990ApJ...350L..45S} and
\citet{1990Ap&SS.163...79N}. Chaotic silicates form rapidly from a
supersaturated vapor of metal atoms, SiO, AlO and OH in a hydrogen
atmosphere \citep{1990ApJ...350L..45S}. In the initial stages, the
higher reduction of Al with respect to Si will lead to the
preferential formation of Al-O bonds at the expense of Si-O
bonds. This implies that the IR bands of alumina associated with the
Al-O stretching mode should be prominent early in the formation of the
chaotic silicates. However, as the Al atoms become fully oxidized, the
higher abundance of Si will make the 9.7~$\micron$ band associated
with Si-O bonds dominate.
Titanium oxides are considered to also be an early dust condensate
along with alumina. Given that the dust that has formed around
V4332~Sgr is of fairly recent origin as implied by the abrupt infrared
brightening that developed in the source between 2MASS observations in
1998 and subsequent observations in 2003 \citep{2003ApJ...598L..31B},
we might expect some signature of these species in the spectra,
though Ti is nearly 30 times less abundant than Al
\citep{2000A&AS..146..437S}. Bulk titanium oxides can have different
forms: TiO, TiO$_2$, Ti$_2$O$_3$ and Ti$_3$O$_5$
\citep{2004A&A...420..547D}. The most common, TiO$_2$ can exist as
brookite and anatase which convert at high temperature into rutile
which is the third and most common form. The rutile spectrum is
expected to show a broad and strong band at 13-17~$\micron$; the
spectrum of anatase shows two strong and broad bands around 17 and
29~$\micron$. The titanium oxide clusters, studied by
\citet{2004A&A...420..547D} as possible nucleation sites, have a
vibrational transition at $\sim$13.5~$\micron$.
The above discussion pertains mostly to crystalline forms of titanium oxides
while in V4332 Sgr the amorphous form may more likely be present
given that both the silicates and alumina are of amorphous nature.
But a reasonable possibility exists that the flattening of the absorption
band longward of 13.5~$\micron$ indicates the presence of titanium oxides.
\subsection{Evolution of the Dust Condensates}
There is potential in the present data to address certain aspects of
the dust condensation process in astrophysical environments. Since the
dust formation process in V4332 Sgr has begun
recently - certainly less than 10 years ago - and is possibly still
ongoing, there are a few spectral features that could change with
time. As an example, in the ``chaotic silicates'' hypothesis, it is
predicted that a strengthening of the silicate component of
the 9.7~$\micron$ feature should take place relative to that of the
alumina and other early condensate components that blend with this
feature. There is observational support for such evolution comparing
our data between epochs 1 and 2 (see Fig \ref{fig:epoch_comp}). There
is a hint that the broad red wing of 9.7~$\micron$ feature has
weakened in epoch 2 relative to epoch 1 and that there has been an overall
narrowing of the 10~$\micron$ absorption complex. Although the
evidence is tentative given the small change ($\sim 1 \sigma$) and
only two epochs of data, such a behavior might be expected as Al and
Ti atoms become oxidized and Si-O begins to dominate the composition.
Further, it is also predicted that the ratio of the
10~$\micron$/18~$\micron$ silicate features could be expected to
change monotonically as silicate dust nucleates and anneals in a
circumstellar environment \citep{1990Ap&SS.163...79N}. Freshly
nucleated silicates, as laboratory experiments show, are expected to
have a large 10~$\micron$/18~$\micron$ ratio i.e. the 18~$\micron$
feature is expected to be weak (consistent with what is seen in the
V4332~Sgr spectrum). Thermal processing should increase the
strength of the 18~$\micron$ feature.
Although the time scales involved in the above processes are not clear,
it would be worthwhile to monitor the spectrum of V4332 Sgr in the
future to discriminate between possible scenarios for the evolution of dust
condensates (Dijkstra et al. 2005 and references therein).
\begin{figure}
\plotone{f2.eps}
\caption{Enlargement of the 10~$\micron$ absorption complex comparing
epoch 1 (black) and 2 (blue) data. There is evidence for a narrowing of the
absorption complex between the two epochs.
\label{fig:epoch_comp}}
\end{figure}
The detection of alumina condensate in V4332~Sgr may have implications
for the origin of this object. It has been postulated that, along with
V838 Mon and M31 RV, V4332~Sgr forms a new class of eruptive
variables. The nature of these objects and the source of their
eruptive behavior has not been established and ideas ranging from an
outburst from a compact object in a red giant envelope, various
thermonuclear events in a single high-mass evolved star, a late
He-shell flash, stellar mergers and even the
swallowing of multiple planets by a massive star have been proposed
\citep[e.g.][and references therein]{2006A&A...451..223T}. Within the
scope of this work, it is not possible to discuss in depth the
complexities of the origin and nature of the V4332~Sgr system. To
date, alumina dust has been almost exclusively detected in AGB and
other cool evolved stars and, as discussed above, is
likely a very early condensate in any oxygen rich environment, so the
detection of alumina in the early condensate of V4332~Sgr indicates
that conditions in the ejecta are similar to those found
around cool evolved stars. Thus the detection of alumina in V4332~Sgr
may provide a constraint on the nature of the eruption if more detailed
modeling of some of the proposed eruption mechanisms rules out conditions
conducive to the formation of alumina grains. In addition, the
detection of alumina around V4332~Sgr motivates long-term monitoring
of the ejecta formed around V838 Mon \citep{2006ApJ...644L..57B}. If
indeed these objects are related at a more fundamental level than
simply having roughly similar outburst characteristics, we might
expect the conditions in the post-outburst ejecta of V838 Mon to
be similar to those in V4332~Sgr. Given that V838 Mon erupted $\sim$8
years after V4332~Sgr and exhibits AlO lines in its spectrum
\citep{2004ApJ...607..460L}, we might detect similar
signatures of alumina formation around V838 Mon in the coming years if
both objects do indeed share a common origin.
\begin{acknowledgements}
Research at the Physical Research Laboratory is funded by the
Department of Space, Government of India. This work is based on
observations made with the {\it Spitzer} Space Telescope, which is operated
by the Jet Propulsion Laboratory, California Institute of Technology
under a contract with NASA. Support for this work was provided by NASA
through Contract Numbers 1277253 and 1256424 issued by JPL/Caltech.
\end{acknowledgements}
\section{Introduction}
The concepts of complementary media \cite{pendry2003complementary} (CM) have been applied in many studies \cite{yang2008superscatterer,luo2009conceal,wang2011waveguide,li2010experimental,zhang2010illusion}, and they provide a simple and clear geometric interpretation for the propagation of metamaterial-controlled \cite{shamonina2007metamaterials,chen2010transformation} electromagnetic (EM) waves. More specifically, CM possess the ability of optical ``cancellation'': e.g., a well-designed negative index material (NIM) can ``cancel'' a matched medium (or vacuum space), and sometimes such a cancellation can produce an image of an interacting object or hide a certain part of an object \cite{yang2008superscatterer,luo2009conceal,lai2009illusion,lai2009complementary}. Well-known devices such as the superlens and the superscatterer \cite{yang2008superscatterer,pendry2000perfectlens} can be easily understood with the concept of CM. Usually, in studies of CM-based optical devices, the situated objects should not have an impact on the cancellation, or the CM should be properly arranged in advance to cancel any placed obstacles \cite{lai2009illusion,lai2009complementary}. On the other hand, very few studies have been carried out for the mismatched case, where objects occupy the space or medium that used to be canceled by the prearranged CM. It is commonly believed that when the condition of CM is not satisfied owing to the misplacement of obstacles, the aforementioned optical cancellation ability might be lost \cite{luo2009conceal}. However, our rigorous analysis in this paper gives different results.
Actually, even for the detection of weak incident EM waves, a relatively strong scattered field can be detected near the surface of the NIM \cite{yang2008superscatterer}. When obstacles are situated near the surface of the NIM, interpretations based on geometrical optics in the long-wavelength limit cannot give an accurate explanation for this situation. Hence, transformation optics and the ordinary concept of CM \cite{pendry2003complementary}, which are derived in the long-wavelength limit, might not be well suited for revealing the properties of the mismatched case. To obtain an accurate physical picture for such cases, a rigorous analytical analysis is necessary. However, the discussion of such cases is usually avoided in many studies owing to the lack of a precise explanation. In fact, they can be encountered in many research areas. For instance, in research on CM-based wireless power transfer \cite{superlensWPT,2015arXiv150802213Z}, obstacles that are located in the nearby area might interfere with the cancellation of the CM and have an impact on the production of an image.
In this study, to deal with the mismatched CM, an analytical framework based on rigorous multiple scattering theory has been developed. On the basis of the analytical analysis, more physical properties are expected to be revealed. Indeed, our analysis shows that the cancellation ability will still be available for the mismatched CM under certain conditions. The paper is organized as follows. First, to simplify the approach to our analysis, the interpretation of the cancellation properties in mismatched CM is presented with a simple heuristic model based on a superlens \cite{pendry2000perfectlens}. Then, a more applicable model based on a superscatterer \cite{yang2008superscatterer} is provided. In addition to the rigorous analytical analysis, simulations from COMSOL are also provided to illustrate the results.
\section{Superlens with obstacles}
To discuss the mismatched CM, a superlens \cite{pendry2000perfectlens} derived from a one-dimensional (1D) transformation can first be introduced as a simple model. Consider the classical superlens shown in Fig. \ref{fig:sls}(a), in which the NIM (domain in black) with a relative permeability and permittivity $\varepsilon=\mu=-1$ will cancel the grey domain (vacuum with $\varepsilon=\mu=1$). According to the interpretation of folded geometry \cite{leonhardt2006general,milton2008solutions}, for an observer on the right side of the grey domain, an image of object A (an object situated on the left side of the NIM) can be detected using EM wave detection. When the thickness of the NIM is $d_s$, the distance between object A and its image will be $2 d_s$, as the black domain cancels the grey one. The electric field distributions are provided by a COMSOL simulation and shown in Fig. \ref{fig:sl}(a) and (b).
\begin{figure}
\centering
\fbox{\includegraphics[width=\linewidth]{fig0s}}
\caption{Schematic of the cancellation strategy in a superlens. (a) The grey domain has not been penetrated by any obstacles; according to the concept of CM, the NIM (domain in black) will cancel the grey domain. (b) If object B is situated near the NIM (i.e., $d_B<d_s$) and the minimum distance between object B and the NIM ($d_B$) is greater than the distance between the image of object A and the NIM (i.e., $d_B>d_s-d_A$), the shape of the grey domain canceled by the black domain is changed according to the position of object A. (c) When object B is located closer to the NIM, i.e., $d_s-d_A>d_B$, the strategy will be changed again, and the shape of the grey domain is determined by the positions of objects A and B.}
\label{fig:sls}
\end{figure}
When the grey domain is penetrated by another object (object B), the cancellation strategy in Fig. \ref{fig:sls}(a) becomes invalid; however, the simulation shows that the cancellation ability is still available, as shown in Fig. \ref{fig:sl}(c) and (d). In fact, in order to be canceled by the NIM (black domain), the shape of the grey domain should be changed according to the positions of the obstacles. The new cancellation strategy in Fig. \ref{fig:sls}(b) shows that when $d_{s}-d_{A}<d_{B}$ (where $d_A<d_s$), the range of the grey domain will rely on the position of object A ($d_A$). Moreover, when $d_{s}-d_{A}>d_{B}$, as shown in Fig. \ref{fig:sls}(c), the grey domain will be restricted by the positions of both objects A and B ($d_A$ and $d_B$). In other words, when the canceled domain is penetrated by object B, the observer will still be able to detect the EM fields scattered by object B together with the image of object A, a result that cannot be directly explained by the ordinary cancellation strategy in Fig. \ref{fig:sls}(a). A minimal numerical sketch of the folded-geometry picture is given below.
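The folded-geometry picture can be made concrete with a minimal numerical sketch (Python). This is only an illustration of the 1D coordinate folding and is not part of the COMSOL simulations; the slab thickness and object positions are illustrative values:
\begin{verbatim}
# 1D folded geometry of a superlens slab occupying 0 < x < d_s:
# the slab (eps = mu = -1) folds space back, so a point object
# at x = -d_A (with d_A < d_s) is imaged at x = 2*d_s - d_A.
d_s = 1.0  # slab thickness (arbitrary units)

def image_position(x_src, d_s):
    """Image of a source at x_src < 0 produced by the slab,
    valid for |x_src| < d_s."""
    return x_src + 2.0 * d_s

for d_A in (0.3, 0.6, 0.9):
    x_img = image_position(-d_A, d_s)
    print(f"object at x = {-d_A:+.2f} -> image at x = {x_img:+.2f}"
          f" (object-image separation {x_img + d_A:.2f} = 2 d_s)")
\end{verbatim}
The printed separations reproduce the $2d_s$ object-image distance quoted above, independently of $d_A$.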
Numerical results obtained from the finite element method (FEM) by COMSOL are presented in Fig. \ref{fig:sl}. It should be noted that in the numerical simulation, both objects A and B have perfect electrical conductor (PEC) boundaries, and the results might be different for other types of boundary conditions \cite{2015arXiv150802213Z}, which will be discussed later in this paper. Moreover, it should be emphasized that both transformation optics and the ordinary concept of CM are related to the geometrical optics in the long-wavelength limit; this intuitive explanation works well for the ordinary case, e.g., well-organized CM without the placement of obstacles. However, considering the strong field near the surface of the NIM (shown in Fig. \ref{fig:sl}) in the scattering field \cite{yang2008superscatterer}, the scattering properties of obstacles located near it might not be explained very well by these intuitive theories. Indeed, when obstacles occupy the canceled space near the NIM, multiple scattering theory is necessary for investigating their physical properties.
\begin{figure}
\centering
\fbox{\includegraphics[width=\linewidth]{fig0}}
\caption{Simulation results provided by COMSOL for a superlens (with PEC obstacles): When the domain canceled by the slab of the superlens is not penetrated by object B (object B is on the right side of the dashed line), an image of object A can be produced (the distance between object A and its image is $2 d_s$), which is explained by the strategy presented in Fig. \ref{fig:sls}(a). As a result, for an observer on the right side of the canceled domain, the electric field distributions (domain on the right side of the dashed line) in (a) will be identical to those in (b). If object B penetrates the canceled domain, an image of object A can still be detected according to the explanation in Fig. \ref{fig:sls}(b); as a result, the electric field distributions on the right side of the dashed line in (c) are still identical to those in (d). A similar equivalence also exists when object B moves even closer to the NIM slab, as shown in (e) and (f), and can be explained by the strategy in Fig. \ref{fig:sls}(c).}
\label{fig:sl}
\end{figure}
\section{Superscatterer with obstacles}
\begin{figure}
\centering
\fbox{\includegraphics[width=\linewidth]{fig1}}
\caption{Schematic of mismatched CM using a superscatterer. The NIM in black (the ss-shell, $r_1<r<r_2$) will cancel the domain painted in grey. Object B and the image of object A do not overlap. (a) Cancellation strategy when no obstacles penetrate the domain $r_2<r<r_3$. (b) and (c) Strategy when the obstacle (denoted as object B with the PEC boundary) penetrates the domain $r_2<r<r_3$, where $r_3>r_B> \eta r_A$. (d) Strategy when $r_B< \eta r_A$. If the cancellation strategy described in (c) is achieved, the scenario in (e) will be the same as the scenario in (c) observed by the viewer in the domain $r>r_3$ for EM wave detection, i.e., an amplified image of object A can be detected. The relationship between (f) and (d) is similar.}
\label{fig:compleschem}
\end{figure}
To give a clear explanation and to provide a better understanding of the properties of mismatched CM, an analytical framework based on multiple scattering theory should be established. In fact, rather than the superlens discussed above, a two-dimensional (2D) superscatterer \cite{yang2008superscatterer} is more appropriate for presenting the analytical framework. Since it is a finite circular cylindrical device, it is much easier to give a rigorous analytical analysis here, and transverse electric (TE) or transverse magnetic (TM) waves can be discussed separately in the 2D case. Although the model is relatively simple, this heuristic approach can still provide useful information for three-dimensional (3D) and more practical models. Moreover, the 2D superscatterer has practical applications in many research areas \cite{luo2009conceal,2015arXiv150802213Z}.
The analytical analysis for the 2D superscatterer is based on the schematic shown in Fig. \ref{fig:compleschem}. Similar to the analysis in the last section, the black domain is the superscatterer shell (ss-shell) and it is complementary to the grey domain. Further, all obstacles have PEC boundaries. The parameter distributions are derived in cylindrical coordinates, and the relative permeability and permittivity can be deduced as follows:
\begin{equation}
\left\{ {\begin{array}{*{20}{l}}
{{\varepsilon _r} = {\mu _r} = \frac{{f(r)}}{r}\frac{1}{{f'(r)}}}, \\
{{\varepsilon _\theta } = {\mu _\theta } = \frac{r}{{f(r)}}f'(r)}, \\
{{\varepsilon _z} = {\mu _z} = \frac{{f(r)}}{r}f'(r)},
\end{array}} \right.
\label{eq:parametersFr}
\end{equation}
with the coordinate transformation
\begin{equation}
f(r) = \left\{ {\begin{array}{*{20}{l}}
f_1(r)={\eta r }, \quad\quad\quad\quad {0 < r < {r_1},}\\
f_2(r)={{r_3} + T(r)},
\quad {r_1} \le r \le {r_2}, \\
f_3(r)=r, \quad\quad\quad\quad {r_2} < r < {r_3},
\end{array}} \right.
\label{eq:frValue}
\end{equation}
where $\eta=r_3/r_1$, and $T(r)$ can be chosen as any continuous and piecewise differentiable function that makes the domain $r_1<r<r_2$ complementary \cite{pendry2003complementary} to the domain $r_2<r<r_3$ (i.e., $T(r_1)=0, T(r_2)=r_2-r_3 $). For purposes of illustration, we choose $T(r)=\frac{({r} - {r_1})( {r_3} - {r_2})}{{r_1} - {r_2}}$ in this study, as shown in Fig. \ref{fig:compleschem}(b). Thus, the schematic can be divided into three parts: the domain $r_2<r<r_3$ (and $r\ge r_3$) is vacuum, the domain $r_1<r<r_2$ (the shell painted black) is filled with the NIM, and the domain $r<r_1$ is filled with a homogeneous material. From the concepts of CM, as shown in Fig. \ref{fig:compleschem}(a), the domain in black (ss-shell) will cancel the grey domain, and the domain $r<r_1$ (as well as the obstacles inside it) will be amplified in the domain $r<r_3$ (for EM wave detection) with the amplification factor $\eta=r_3/r_1$. As a matter of fact, when the system contains no obstacles, the whole domain $0<r<r_3$ will act as a vacuum, as observed by a viewer outside. A numerical sketch of the resulting parameter profiles is given below.
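The parameter profiles of Eq. (\ref{eq:parametersFr}) under the transformation of Eq. (\ref{eq:frValue}) can be evaluated with a short Python sketch; the radii below are illustrative values chosen only for this example:
\begin{verbatim}
# Illustrative radii (arbitrary units); eta = r3/r1 is the
# amplification factor of the superscatterer.
r1, r2, r3 = 1.0, 1.5, 2.0
eta = r3 / r1

def f(r):
    """Transformation of Eq. (2) with the linear choice
    T(r) = (r - r1)*(r3 - r2)/(r1 - r2)."""
    if r < r1:
        return eta * r
    if r <= r2:
        return r3 + (r - r1) * (r3 - r2) / (r1 - r2)
    return r

def fprime(r):
    """Piecewise derivative f'(r); negative inside the NIM shell."""
    if r < r1:
        return eta
    if r <= r2:
        return (r3 - r2) / (r1 - r2)
    return 1.0

def material(r):
    """Relative (eps_r, eps_theta, eps_z) of Eq. (1); mu is equal."""
    fr, dfr = f(r), fprime(r)
    return fr / (r * dfr), r * dfr / fr, fr * dfr / r

for r in (0.5, 1.25, 1.75):
    print(f"r = {r:4.2f}: eps_r, eps_theta, eps_z = "
          + ", ".join(f"{v:+.3f}" for v in material(r)))
\end{verbatim}
For $r_1<r<r_2$ all three components are negative, as expected for the NIM ss-shell, while the core $r<r_1$ behaves as a positive homogeneous medium and $r>r_2$ as vacuum.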
It should be noted that if the radius of the ss-shell ($r_1$) becomes infinite and the thickness of the shell and the canceled space are maintained at finite values of $r_3-r_2=d_s$ and $r_2-r_1=d_s$, the superscatterer can be reduced to a slab of the superlens discussed above. Considering $T(r)=\frac{({r} - {r_1})( {r_3} - {r_2})}{{r_1} - {r_2}}$ in Eq. \ref{eq:frValue}, it is easy to deduce ${\lim _{r_1 \to \infty }}(f_2(r)/r) = 1$ for the ss-shell domain, which indicates that the ss-shell becomes a homogeneous slab of a superlens (with a thickness of $d_s$). Moreover, the amplification factor becomes $\eta=r_3/r_1\to 1$. Hence, the parameter distributions derived from Eq. (\ref{eq:parametersFr}) will be ${\varepsilon _r} = {\mu _r}=-1$, ${\varepsilon _\theta } = {\mu _\theta }=-1$, and ${\varepsilon _z} = {\mu _z}=-1$ when ${r_1 \to \infty }$.
Similar to the analysis in the last section, the obstacles inside and outside the ss-shell are termed objects A and B, respectively. In addition, $r_A$ is the maximum distance between a point in object A and the center of the ss-shell, and $r_B$ is the minimum distance between object B and the center of the ss-shell. As shown in Fig. \ref{fig:compleschem}(a), if no obstacles penetrate the domain $r_2<r<r_3$, a well-matched CM, with the black domain complementary to the grey one (domain $r_2<r<r_3$), can be easily realized as described by Eq. (\ref{eq:frValue}). As a result, an amplified image of object A with an amplification factor $\eta=r_3/r_1$ could be detected in the domain $r<r_3$. If another object penetrates the domain $r<r_3$, with $\eta r_A<r_B<r_3$, the aforementioned grey domain in Fig. \ref{fig:compleschem}(a) will be partially occupied, and the ordinary cancellation strategy in Fig. \ref{fig:compleschem}(a) cannot be applied. Similar to the discussion in the analysis of the superlens, another strategy can be utilized, as shown in Fig. \ref{fig:compleschem}(c). This strategy seems to be self-adaptive with respect to $r_A$, as the black domain will be complementary to the domains $r_A<r<r_1$ and $r_2<r<\eta r_A$. It should be noted that such a strategy could also explain the amplification effects in Fig. \ref{fig:compleschem}(a), as described by Eq. (\ref{eq:frValue}) and shown in Fig. \ref{fig:compleschem}(b).
Moreover, when all obstacles have PEC boundaries, object B is situated at $r_B<\eta r_A$, and the third strategy shown in Fig. \ref{fig:compleschem}(d) is more appropriate. It should be remarked that the strategy here is related to both $r_A$ and $r_B$, which implies that the cancellation strategy should rely on the placement of the obstacles, although the parameter distributions in the ss-shell are still the same as above. In fact, the geometric cancellation strategies are just simplified intuitive explanations for the EM wave scattering results; such explanations might not be unique, e.g., the strategy in Fig. \ref{fig:compleschem}(c) (or Fig. \ref{fig:sls}(b)) can also work well in explaining the effects shown in Fig. \ref{fig:compleschem}(a) (or Fig. \ref{fig:sls}(a)). Thus, a rigorous analytical analysis is necessary to understand the physical mechanism of mismatched CM.
\section{Analytical analysis}
To validate the heuristic analysis given in the previous sections, a detailed analytical analysis is presented in this section. Specifically, the obstacles are simplified to circular cylindrical objects, and all obstacles have PEC boundaries. $r_{01}$ (or $r_{03}$) is the distance between the center of object A (or B) and the center of the ss-shell, with the angle $\phi_1=\phi_{\overrightarrow{r_{01}}}$ (or $\phi_3=\phi_{\overrightarrow{r_{03}}}$). $R_1$ (or $R_3$) is the radius of object A (or B); thus, we have $r_A=r_{01}+R_1$ and $r_B=r_{03}-R_3$. We consider TE-polarized EM wave detection with the harmonic time dependence $\exp(-i\omega t)$. The analytical analysis will be presented as a comparison of the scattering properties of scenario (c) with scenario (e) (or scenario (d) with scenario (f)) in Fig. \ref{fig:compleschem}. More precisely, our analytical deduction demonstrates that, for detection by the same EM waves (same incident field), scenarios (c) and (e) will give the same scattered EM fields. Such an equivalence can also be found in the comparison between scenarios (d) and (f). The analysis is given for the following four scenarios.
\textbf{For scenarios (c) and (d)}: The general series expansion (analytically deduced from the wave equation \cite{yang2008superscatterer,2015arXiv150802213Z}) for the electric field can be expressed as
\begin{widetext}
\begin{footnotesize}
\begin{equation}
E_z(r ,\phi ) = \left\{ {\begin{array}{*{20}{l}}
\sum\limits_{n = - \infty }^\infty [ {a_n^{(1)}{J_n}({k_0}f_{1}(|\overrightarrow {{r}}|)){e^{in{\phi}}}}+ {b_n^{(1)}{H_n^{(2)}}({k_0}f_{1}(|\overrightarrow {{r}}-\overrightarrow {{r_{01}}} |)){e^{in{\phi_{(\overrightarrow {{r}}-\overrightarrow {{r_{01}}})}}}}}],\;r<r_1,\\
\sum\limits_{n = - \infty }^\infty [ {a_n^{(2)}{J_n}({k_0}f_{2}(|\overrightarrow {{r}}|)){e^{in{\phi}}}}+ {b_n^{(2)}{H_n^{(2)}}({k_0}f_{2}(|\overrightarrow {{r}}|)){e^{in{\phi}}}}],\;r_1<r<r_2,\\
\sum\limits_{n = - \infty }^\infty [ {a_n^{(3)}{H_n^{(2)}}({k_0}f_{3}(|\overrightarrow {{r}}|)){e^{in{\phi}}}}+{a_n^{(i)}{J_n}({k_0}|\overrightarrow {{r}}|){e^{in{\phi}}}}+ {b_n^{(3)}{H_n^{(2)}}({k_0}f_{3}(|\overrightarrow {{r}}-\overrightarrow {{r_{03}}}|)){e^{in{\phi_{(\overrightarrow {{r}}-\overrightarrow {{r_{03}}})}}}}}],\;r>r_2,
\end{array}} \right.
\label{eq:Es1}
\end{equation}
\end{footnotesize}
\end{widetext}
where $k_0$ is the wave vector in vacuum, and $J_n$ and $H_n^{(2)}$ are the $n$th-order Bessel function and the $n$th-order Hankel function of the second kind, respectively. $\alpha=1,2,3$ denotes the domains $r<r_1$, $r_1<r<r_2$, and $r>r_2$, respectively; $a_n^{(\alpha)}$ and $b_n^{(\alpha)}$ are the series expansion coefficients, and $\sum_n{a_n^{(i)}{J_n}({k_0}|\overrightarrow {{r}}|){e^{in{\phi}}}}$ is the incident field, which is chosen to be the same in the following scenarios. Actually, it is sufficient to discuss only the electric field ($z$ component); the corresponding $H_\theta$ components can be derived using the Maxwell equation ${H_\theta } = - \frac{1}{{i\omega \mu }}\frac{{\partial {E_z}}}{{\partial r}}$. Considering the boundary conditions while imposing Eq. (\ref{eq:frValue}), the coefficients above can be solved from the following equations (where $n=0,\pm 1,\pm 2,\dots$):
\begin{widetext}
\begin{footnotesize}
\begin{equation}
\left\{ {\begin{array}{*{20}{l}}
{{J_n}({k_0}\eta {R_1})\sum\limits_{m = - \infty }^\infty {[a_m^{(1)}{J_{m - n}}({k_0}\eta {r_{01}}){e^{ - i(n - m){\phi _1}}}]} + b_n^{(1)}{H_n^{(2)}}({k_0}\eta {R_1}) =0},\\
{{J_n}({k_0}{R_3})\sum\limits_{m = - \infty }^\infty {[a_m^{(3)}{H_{m - n}^{(2)}}({k_0}{r_{03}}){e^{ - i(n - m){\phi _3}}}+a_m^{(i)}{J_{m - n}}({k_0}{r_{03}}){e^{ - i(n - m){\phi _3}}}]} + b_n^{(3)}{H_n^{(2)}}({k_0}{R_3}) =0},
\\
{\hbox{where}\;\;} {a_m^{(1)} = \sum\limits_{l = - \infty }^\infty {[b_l^{(3)}{H_{m - l}^{(2)}}({k_0}{r_{03}}){e^{ - i(m - l){\phi _3}}}]} }+a_m^{(i)} ,\hfill{a_m^{(3)} = \sum\limits_{l = - \infty }^\infty {[b_l^{(1)}{J_{m - l}}({k_0}\eta {r_{01}}){e^{ - i(m - l){\phi _1}}}]} }.
\end{array} } \right.
\label{eq:meta}
\end{equation}
\end{footnotesize}
\end{widetext}
Here, the first and second equations originate from the PEC boundary conditions at the surfaces of the obstacles (objects A and B). The other two are derived from the continuity at the boundaries between the ss-shell and the domains $r<r_1$ and $r>r_2$. In addition, the translation relation \cite{chew1995waves} that expresses the wave functions in one coordinate system in terms of those in another coordinate system is applied. The equations can be rewritten in matrix form as
\begin{footnotesize}
\begin{eqnarray}
0=&&[J(\eta {R_1})].[J(\eta {r_{01}},-\phi_1)].[A_{(1)}] + [H(\eta {R_1})].[B_{(1)}] ,\nonumber\\
0=&&[J({R_3})].([ H({r_{03}},-\phi_3)].[A_{(3)}]+[ J({r_{03}},-\phi_3)].[A_{(i)}]) + \nonumber\\
&&[H({R_3})].[ B_{(3)}],\nonumber\\
~[A_{(1)}] =&& [ H({r_{03}},\phi_3)].[ B_{(3)}] +[A_{(i)}],\nonumber\\
~[A_{(3)}] =&& [J(\eta {r_{01}},\phi_1)].[B_{(1)}],
\label{eq:metaMatrix}
\end{eqnarray}
\end{footnotesize}
where the matrices $[J(\eta {R_1})]$ and $[J(\eta {r_{01}},-\phi_1)]$ and the vector $[A_{(1)}]$ are defined as follows (and similarly for the others):
\begin{footnotesize}
\begin{align*}
&[J(r)] = Diag[ {\begin{array}{*{20}{c}}\cdots, &{{J_{ - n}}({k_0}r)},&\cdots,&{{J_0}({k_0}r)},&\cdots,&{{J_n}({k_0}r)},&\cdots\end{array}}], \\
&{[J({r}, {\phi})]_{m,n}} = {J_{m - n}}({k_0}{r}){e^{ i(n - m){\phi}}},\\
&[A_{(1)}] =[{\begin{array}{*{20}{c}}\cdots,&{a_{ - m}^{(1)}},&\cdots,&{a_0^{(1)}},&\cdots,&{a_m^{(1)}},&\cdots
\end{array}} ]^T.
\end{align*}
\end{footnotesize}
Once the scattering coefficients ($[A_{(3)}]$, $[B_{(3)}]$) are solved, the EM fields for each scenario can be obtained. However, to discuss the equivalence between scenarios (c) and (e) (or (d) and (f)) for detection of the same EM wave, it is unnecessary to solve for the scattering coefficients; a comparison of the equations that describe the scattering properties of each scenario is sufficient.
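Even so, the truncated linear system is straightforward to assemble numerically, e.g., with NumPy/SciPy. The following sketch (with an illustrative geometry and toy incident coefficients, not the configuration of the COMSOL runs below) eliminates $[A_{(1)}]$ and $[A_{(3)}]$ from Eq. (\ref{eq:metaMatrix}) and solves for $[B_{(1)}]$ and $[B_{(3)}]$:
\begin{verbatim}
import numpy as np
from scipy.special import jv, hankel2

N, k0 = 8, 2.0 * np.pi          # truncation |n| <= N; k0 = 2 pi/lambda
idx = np.arange(-N, N + 1)

def Jmat(r, phi):
    """[J(r, phi)]_{m,n} = J_{m-n}(k0 r) exp(i (n-m) phi)."""
    m, n = np.meshgrid(idx, idx, indexing="ij")
    return jv(m - n, k0 * r) * np.exp(1j * (n - m) * phi)

def Hmat(r, phi):
    """Same layout with the Hankel function of the second kind."""
    m, n = np.meshgrid(idx, idx, indexing="ij")
    return hankel2(m - n, k0 * r) * np.exp(1j * (n - m) * phi)

Jdiag = lambda r: np.diag(jv(idx, k0 * r))
Hdiag = lambda r: np.diag(hankel2(idx, k0 * r))

# Illustrative geometry (lengths in units of the wavelength).
eta, r01, R1, r03, R3 = 2.0, 0.20, 0.10, 1.20, 0.15
phi1, phi3 = 0.0, np.pi / 3
a_i = (idx == 0).astype(complex)  # toy incident coefficients

# Substituting A(1) and A(3) into the PEC conditions gives a
# linear system for the scattering coefficients B(1), B(3).
M = np.block(
    [[Hdiag(eta * R1),
      Jdiag(eta * R1) @ Jmat(eta * r01, -phi1) @ Hmat(r03, phi3)],
     [Jdiag(R3) @ Hmat(r03, -phi3) @ Jmat(eta * r01, phi1),
      Hdiag(R3)]])
rhs = -np.concatenate(
    [Jdiag(eta * R1) @ Jmat(eta * r01, -phi1) @ a_i,
     Jdiag(R3) @ Jmat(r03, -phi3) @ a_i])
B1, B3 = np.split(np.linalg.solve(M, rhs), 2)
print("|b_n^(3)|:", np.round(np.abs(B3), 4))
\end{verbatim}
Repeating the same assembly for the primed configurations discussed below yields identical outer coefficients, which is the equivalence established analytically in what follows.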
On the basis of multiple scattering theory, for scenarios (e) and (f) in Fig. \ref{fig:compleschem}, where $r'_{01}=\eta r_{01}$, $r'_{03}=r_{03}$, $R'_1=\eta R_1$, and $R'_3=R_3$, $E_z(r ,\phi )$ should be expressed as
\begin{footnotesize}
\begin{eqnarray}
\label{eq:Esab}
\sum\limits_{n = - \infty }^\infty [&& {{b'}_n^{(1)}{H_n^{(2)}}({k_0}(|\overrightarrow {{r}}-\overrightarrow {{r'_{01}}} |)){e^{in{\phi_{(\overrightarrow {{r}}-\overrightarrow {{r'_{01}}})}}}}}+\\
&&{{b'}_n^{(3)}{H_n^{(2)}}({k_0}(|\overrightarrow {{r}}-\overrightarrow {{r'_{03}}} |)){e^{in{\phi_{(\overrightarrow {{r}}-\overrightarrow {{r'_{03}}})}}}}}]+ {a_n^{(i)}{J_n}({k_0}|\overrightarrow {{r}}|){e^{in{\phi}}}}. \nonumber
\end{eqnarray}
\end{footnotesize}
To compare with scenario (c) (or (d)), we also divide the domain in scenario (e) (or (f)) into two parts: the domains $r<r_{min}$ and $r>r_{min}$, where $r_{min}=Min(r_B, \eta r_A)$. Moreover, we will rewrite Eq. (\ref{eq:Esab}) for these different cases.
\textbf{For scenario (e)}: For $r_B> \eta r_A$ and $r_{min}=\eta r_A$, similar to the above, we can rewrite the electric field as
\begin{widetext}
\begin{footnotesize}
\begin{equation}
E_z(r ,\phi ) = \left\{ {\begin{array}{*{20}{l}}
\sum\limits_{n = - \infty }^\infty [ {{a'}_n^{(1)}{J_n}({k_0}(|\overrightarrow {{r}}|)){e^{in{\phi}}}}+ {{b'}_n^{(1)}{H_n^{(2)}}({k_0}(|\overrightarrow {{r}}-\overrightarrow {{{r'}_{01}}}|)){e^{in{\phi_{(\overrightarrow {{r}}-\overrightarrow {{{r'}_{01}}})}}}}}],\;r< r_{min},\\
\sum\limits_{n = - \infty }^\infty [ {{a'}_n^{(3)}{H_n^{(2)}}({k_0}(|\overrightarrow {{r}}|)){e^{in{\phi}}}}+{a_n^{(i)}{J_n}({k_0}|\overrightarrow {{r}}|){e^{in{\phi}}}}+ {{b'}_n^{(3)}{H_n^{(2)}}({k_0}(|\overrightarrow {{r}}-\overrightarrow {{r'_{03}}} |)){e^{in{\phi_{(\overrightarrow {{r}}-\overrightarrow {{r'_{03}}})}}}}}],\;r>r_{min}.
\end{array}} \right.
\label{eq:Es2BigB}
\end{equation}
\end{footnotesize}
\end{widetext}
Because $r_B>\eta r_A$, object B is in the domain $r>\eta r_A$. Similar to scenarios (c) and (d), the boundary conditions mean that
\begin{footnotesize}
\begin{eqnarray}
0=&&[J({R'_1})].[J({r'_{01}},-\phi_1)].[A'_{(1)}] + [H({R'_1})].[B'_{(1)}],\nonumber\\
0=&&[J({R'_3})].([H({r'_{03}},-\phi_3)].[A'_{(3)}]+[ J({r'_{03}},-\phi_3)].[A_{(i)}])+\nonumber\\
&&[H({R'_3})].[B'_{(3)}],\nonumber \\
~[A'_{(1)}] =&& [H({r_{03}},\phi_3)].[B'_{(3)}]+[A_{(i)}] ,\nonumber\\
~[A'_{(3)}] =&& [J({r'_{01}},\phi_1)].[B'_{(1)}].
\label{eq:equaMatrixBigB}
\end{eqnarray}
\end{footnotesize}
We see that the scattering coefficients satisfy exactly the same equations as those in Eq. (\ref{eq:metaMatrix}). This means that $a_n^{(3)}={a'}_n^{(3)}$ and $b_n^{(3)}={b'}_n^{(3)}$ for $n=0,\pm 1,\pm 2,\dots$. Thus, the detected electric fields derived from Eqs. (\ref{eq:Es1}) and (\ref{eq:Es2BigB}) for the domain $r>\eta r_A$ will be the same. In other words, scenario (c) is equivalent to scenario (e) for viewers in the domain $r>Max(\eta r_A,r_2)$.
\textbf{For scenario (f)}: For $r_B<\eta r_A$ and $r_{min}=r_B$, the series expansion for the electric field is different from that in scenario (e), which could be rewritten as follows:
\begin{widetext}
\begin{footnotesize}
\begin{equation}
E_z(r ,\phi ) = \left\{ {\begin{array}{*{20}{l}}
\sum\limits_{n = - \infty }^\infty [ {{a'}_n^{(3)}{J_n}({k_0}(|\overrightarrow {{r}}|)){e^{in{\phi}}}}+ {{b'}_n^{(3)}{H_n^{(2)}}({k_0}(|\overrightarrow {{r}}-\overrightarrow {{r_{03}}} |)){e^{in{\phi_{(\overrightarrow {{r}}-\overrightarrow {{r_{03}}})}}}}}],\;r<r_{min},\\
\sum\limits_{n = - \infty }^\infty [ {{a'}_n^{(1)}{H_n^{(2)}}({k_0}(|\overrightarrow {{r}}|)){e^{in{\phi}}}}+{a_n^{(i)}{J_n}({k_0}|\overrightarrow {{r}}|){e^{in{\phi}}}}+ {{b'}_n^{(1)}{H_n^{(2)}}({k_0}(|\overrightarrow {{r}}-\overrightarrow {{{r'}_{01}}} |)){e^{in{\phi_{(\overrightarrow {{r}}-\overrightarrow {{{r'}_{01}}})}}}}}],\;r> r_{min},
\end{array}} \right.
\label{eq:Es2SmallB}
\end{equation}
\end{footnotesize}
\end{widetext}
By choosing the virtual interface $r=r_{min}$, the electric field in scenario (f) can be manually divided into two parts, as denoted in Eq. (\ref{eq:Es2SmallB}). However, it should be remarked that $E_z$ in the neighborhood of object B (or A) can always be expressed in the form of the first (or second) equation; thus, its boundary conditions are
\begin{footnotesize}
\begin{eqnarray}
0=&&[J({R'_1})].([H({r'_{01}},-\phi_1)].[A'_{(1)}]+[J({r'_{01}},-\phi_1)].[A_{(i)}])+\nonumber\\
&&[{H}({R'_1})].[B'_{(1)}] ,\nonumber\\
0=&&[J({R'_3})].[ J({r'_{03}},-\phi_3)].[A'_{(3)}] + [H({R'_3})].[B'_{(3)}],\nonumber\\
~[A'_{(1)}] =&& [J({r'_{03}},\phi_3)].[ B'_{(3)}],\nonumber\\
~[A'_{(3)}] =&& [H({r'_{01}},\phi_1)].[B'_{(1)}]+[A_{(i)}].
\label{eq:equaMatrixSmallB}
\end{eqnarray}
\end{footnotesize}
It seems that the equations in Eq. (\ref{eq:equaMatrixSmallB}) are different from those in Eq. (\ref{eq:metaMatrix}) and might not give the same solutions. However, we consider the translation relation that expresses wave functions in one coordinate system in terms of the wave functions in another coordinate system \cite{chew1995waves} and remember that the EM wave will remain the same when translated along $\overrightarrow{r'_{0\alpha}}$ and $-\overrightarrow{r'_{0\alpha}}$ ($\alpha=1,3$). Thus, we have the following relationships:
\begin{footnotesize}
\begin{eqnarray}
[{H}({{R'}_\alpha})] =&& [J({{R'}_\alpha})].[H({{r'}_{0\alpha}}, - {\phi _\alpha})].[J({{r'}_{0\alpha}},{\phi _\alpha})]\nonumber\\
=&& [J({{R'}_\alpha})].[J({{r'}_{0\alpha}}, - {\phi _\alpha})].[H({{r'}_{0\alpha}},{\phi _\alpha})].
\label{eq:tr-aabb}
\end{eqnarray}
\end{footnotesize}
Similarly, a translation along $-\overrightarrow{r'_{01}}$ then $-\overrightarrow{r'_{03}}$ will be equal to that along $-\overrightarrow{r'_{03}}$ then $-\overrightarrow{r'_{01}}$, which means that
\begin{footnotesize}
\begin{eqnarray}
~[J({{r'}_{03}}, - {\phi _3})].&&[H({{r'}_{01}}, - {\phi _1})] =[J({{r'}_{01}}, - {\phi _1})].[H({{r'}_{03}}, - {\phi _3})],\nonumber\\
~[J({{r'}_{03}}, - {\phi _3})].&&[J({{r'}_{01}}, - {\phi _1})] =[J({{r'}_{01}}, - {\phi _1})].[J({{r'}_{03}}, - {\phi _3})] ,\nonumber\\
{\hbox{and\;\;}}[V]:=&&{[H({r'_{03}}, - {\phi _3})]^{ - 1}}.[J({r'_{03}}, - {\phi _3})]\nonumber \\
=&& {[H({r'_{01}}, - {\phi _1})]^{ - 1}}.[J({r'_{01}}, - {\phi _1})].
\label{eq:tr-ab}
\end{eqnarray}
\end{footnotesize}
By substituting the related components, Eqs. (\ref{eq:metaMatrix}) and (\ref{eq:equaMatrixSmallB}) can be respectively expressed as
\begin{footnotesize}
\begin{subequations}
\begin{equation}
\left\{ {\begin{array}{*{20}{c}}
{[H({r_{03}},\phi_3)].[ B_{(3)}] +[A_{(i)}] +[H(\eta {r_{01}},\phi_1)].[B_{(1)}] =0},\\
{[J(\eta {r_{01}},\phi_1)].[B_{(1)}]+[V].[A_{(i)}] +[J({r_{03}},\phi_3)].[B_{(3)}]=0},
\end{array}} \right.
\label{eq:merge-1}
\end{equation}
\begin{equation}
\left\{ {\begin{array}{*{20}{c}}
{[J({r'_{03}},\phi_3)].[ B'_{(3)}] +[V].[A_{(i)}] + [J({r'_{01}},\phi_1)].[B'_{(1)}] =0},\\
{[H({r'_{01}},\phi_1)].[B'_{(1)}] +[A_{(i)}]+ [H({r'_{03}},\phi_3)].[ B'_{(3)}]=0},
\end{array}} \right.
\label{eq:merge-2}
\end{equation}
\label{eq:merge}
\end{subequations}
\end{footnotesize}
Evidently, Eqs. (\ref{eq:merge-1}) and (\ref{eq:merge-2}) have the same form; for scenarios (d) and (f) in Fig. \ref{fig:compleschem}, the above equations produce the same solutions, i.e., $[B_{(1)}]=[B'_{(1)}]$ and $[B_{(3)}]=[B'_{(3)}]$. It should be noted that this deduction is established under the assumption that the obstacles have PEC boundaries, which keeps the right-hand sides of these equations equal to zero throughout the deduction. For other types of boundary conditions, the analysis becomes more complicated, and more research should be performed.
To show the equivalence between scenarios (d) and (f), the following translation relation \cite{chew1995waves} should be applied to Eq. (\ref{eq:Esab})
\begin{widetext}
\begin{footnotesize}
\begin{equation}
b_{n}^{(1)}{H_n^{(2)}}({k_0}f_3(|\overrightarrow {{r}}-\overrightarrow {{r'_{01}}}|)){e^{in{\phi_{(\overrightarrow {{r}}-\overrightarrow {{r'_{01}}})}}}}=\sum\limits_{m = - \infty }^\infty b_{n}^{(1)}{J_{m-n}({k_0}f_3(|\overrightarrow {{r'_{01}}}|))H_m^{(2)}({k_0}f_3(|\overrightarrow {{r}}|)){e^{-i(m-n)\phi_1}}{e^{im\phi}}},\;r>r_B.
\end{equation}
\end{footnotesize}
\end{widetext}
With $[A_{(3)}] =[J(\eta {r_{01}},\phi_1)].[B_{(1)}]$ from Eq. (\ref{eq:metaMatrix}), Eq. (\ref{eq:Esab}) becomes exactly the same as Eq. (\ref{eq:Es1}). In other words, for scenarios (d) and (f), the electric fields in the domain $r> r_B$ ($r_B>r_{2}$) are equivalent.
\textbf{To summarize}, we have discussed the scattering properties of mismatched CM considering penetrating cylindrical PEC obstacles and found that scenarios (c) and (e) (or (d) and (f)) in Fig. \ref{fig:compleschem} will be indistinguishable for a viewer in the domain $r>Max(r_{min},r_2)$ (where $r_{min}=Min(r_B, \eta r_A)$). In fact, the multiple scattering method can also be applied to the analysis of obstacles of other shapes through a multipole expansion, where conclusions similar to those above can be drawn. In addition, it should be noted that when $r_B>\eta r_A$ (scenarios (c) and (e)), the equivalence does not rely on the type of boundary of the obstacles; however, when $r_B<\eta r_A$ (scenarios (d) and (f)), the equivalence is derived under the PEC approximation (for the obstacles). Other conditions with a different boundary for the obstacles might make the equivalence invalid \cite{2015arXiv150802213Z}.
\section{Numerical verification}
In addition to the analytical analysis given above, the numerical results from COMSOL shown in Fig. \ref{fig:efieldz} are also provided to verify our conclusion. To distinguish two obstacles, object B is a rectangular cylinder, object A is a circular cylinder, and both have PEC boundaries. From an investigation of the EM field in the domain $r>r_{min}$ (here, $r_{min}=r_B<\eta r_A$), an amplified image of object A can be detected in Fig. \ref{fig:efieldz}(a) and (c). Scenarios (a) and (b) (or (c) and (d)) have the same scattering field for the same EM wave detection (both a plane wave and point source are presented), and further study finds that loss tangents in the NIM up to $0.001$ will not have a significant impact on the results.
\begin{figure}
\centering
\fbox{\includegraphics[width=\linewidth]{fig2}}
\caption{Numerical verification from COMSOL for the $z$ component of the electric field. All obstacles have PEC boundaries. Object B (the rectangular object) is located within $r_B<\eta r_A$. (a) For the detection of a TE-polarized plane wave, an amplified image of object A (the circular one inside the ss-shell) can still be detected, as the domain $r>r_B$ in both (a) and (b) possesses the same EM fields. Moreover, a point source is applied in (c) and (d).}
\label{fig:efieldz}
\end{figure}
\section{Discussion and conclusion}
In conclusion, when arbitrarily situated obstacles occupy the space that used to be canceled by the NIM, mismatched CM are formed. A discussion of such cases is usually ignored or avoided. From our analytical analysis, a clear understanding of the scattering properties of mismatched CM was presented, and numerical results verified the expected effects. More specifically, we studied a superlens and a superscatterer with penetrating PEC obstacles and found that the cancellation ability of ordinary CM is still available in the mismatched case, even though the ordinary cancellation strategy cannot be applied. Moreover, when obstacles come too close to the NIM, the rigorous analysis showed that the cancellation ability might be established only when a PEC boundary is applied to the obstacles, e.g., for scenario (d) [Fig. \ref{fig:compleschem}], where $r_B<\eta r_A$. The equivalence between scenarios (d) and (f) [Fig. \ref{fig:compleschem}] relies on the PEC boundary of the obstacles; if other types of boundary conditions are applied, the equivalence might not be valid \cite{2015arXiv150802213Z}. Although most optical devices can be discussed under the PEC approximation, any extension of the scope of the conclusions derived from the PEC-penetrated mismatched CM should be carefully analyzed within the analytical framework provided in this paper.
On the other hand, the concept of mismatched CM can be applied to the study of the interaction between CM and penetrating obstacles, which is encountered in many applications, e.g., CM-based wireless power transfer \cite{superlensWPT,2015arXiv150802213Z}, where the CM could help enhance the transfer efficiency. In fact, the interaction between the emitter, CM, receiver, and even obstacles in the environment (which may have an impact on the system) can be discussed according to the concept of mismatched CM. It should be noted that, when obstacles are located relatively far away from the NIM, e.g., $r_B>\eta r_A$, the conclusion and equivalence derived above can still be applied for other types of boundary conditions.
Furthermore, the analytical analysis is not restricted to any specific frequency, and the conclusion could be applied to studies utilizing a broad range of wavelengths, including applications to CM-modified active cloaking, waveguides, antennas, etc. Although the results are obtained for the heuristic 2D case, they are expected to be applicable to more general cases, even to 3D models. Although the numerical results from the FEM (COMSOL) help illustrate our conclusion, owing to the nonmonotonic transformation, the FEM results might not be reliable and may contain large errors in some cases \cite{aznavourian2014morphing,2015arXiv150802213Z}. On the other hand, a strong field exists near the surface of the NIM \cite{yang2008superscatterer}, even for weak EM wave detection; therefore, transformation optics and the ordinary concept of CM, deduced in the long-wavelength limit, might not be valid for the mismatched CM. Thus, the analytical framework established in this paper provides an essential tool for future research on mismatched CM.
\section*{Acknowledgments}
\indent This work was sponsored in part by the National Natural Science Foundation of China under Grant No. 51277120 and by the International S\&T Cooperation Program of China (2012DFG01790).
\section{INTRODUCTION}
Magnetoelectric (ME) effects in multiferroic (MF)
or ferromagnetic (metallic) films have attracted
remarkable interest, since promising technological
applications in spintronics and ultrafast electric-field control
of magnetic data storage are seen as
imminent\cite{Duan},\cite{Gerhard}.
Characterization of
the relative strength for the ME coupling can be obtained
by implementing terahertz spectroscopy in rare earth
manganites of the type RMnO$_3$ (R=Tb, Gd, Dy,
Eu:Y)
\cite{Pimenov},\cite{Talbayev},\cite{Nemec},\cite{Pimenov2}
demonstrating that the generated electromagnons
(mixed
spin-waves and photon states) represent, among
others, the
signature of the ME effect for an approximate
range of
frequencies between 10 cm$^{-1}$ and 40 cm$^{-1}$
at
temperatures where antiferromagnetic resonance
modes
(AFMR) coexist, or more recently, the key
mechanism for
controllable magnetochromism in
Ba$_{2}$Mg$_{2}$Fe$_{12}$O$_{22}$ hexaferrites
\cite{Kida}.
The magnetoelectric effect emerges when a
magnetic field
$\mathbf{H}$ can induce a polarization vector
$\mathbf{P}$ at zero applied electric
field
($\mathbf{E}=0$). Likewise, the magnetization of
the
substance $\mathbf{M}$ can be generated for an
electric
field $\mathbf{E}$ with $\mathbf{H}=0$. The
minimal
coupling for describing the thermodynamic
potential
associated with this effect is given by
$\Phi=-\alpha_{ij}E_{i}H_{j}$, where
$\alpha_{ij}$ is an
unsymmetrical magnetoelectric tensor, whose
components
depend on the magnetic symmetry
class\cite{Landau}.
The primary origin for the ME coupling is
commonly
associated with the Dzyaloshinskii-Moriya
relativistic
exchange-interaction
\cite{Nagaosa},\cite{Katsura0} which
is appropriate for the description of asymmetric
spin
wave dispersion on double layer
Fe-films\cite{Zakeri} as
well as for those materials where weak
ferromagnetism
emerges, namely the ilmenite FeTiO$_3$,
TbMnO$_3$,
Eu$_{1-x}$Y${_x}$MnO$_3$ ($0<x\lesssim 0.3$ at
$T<40$
K\cite{Mukhin}) or the widely studied
pyroelectric
ferromagnet BaMnF$_4$ \cite{Scott}. Weak
ferromagnetism on
this compound is generated by canting effects
between
antiferromagnetic sub-lattices, leading to a
spontaneous
polarization $\mathbf{P}$ perpendicular to the
resulting
magnetization $\mathbf{M}$\cite{Gun}.
Considerations in
the symmetry change of the static polarization
and
magnetization fields have brought interesting
unconventional optical phenomena labeled as
non-reciprocal
dichroism associated with the sign reversal of
$\textbf{P}\times\mathbf{M}$, recently reported
in the
perovskite Eu$_{0.55}$Y$_{0.45}$MnO$_{3}$, with
magnetoelectric activity for photon energies
around 0.8
meV (sub THz regime) in the cycloidal phase at 4
K\cite{Taka}. Intense activity in the last decade
has also
been dedicated to achieve possible optical and
photonic
band gap control via Surface Plasmon (SP)
propagation in
periodic arrays\cite{Barnes}, since modern
lithographic
techniques allow to design functional objects
with almost
any desirable geometrical pattern at a sub-wavelength
scale\cite{grooves}. Plasmon localization and its
coupling
with incident light depend on the dielectric
properties of
the metal in conjunction of its surrounding
environment,
enlightening an alternative route for engineering
highly
efficient SP photonic devices via externally
applied
fields, rare earth doping or electron charge
transference
from the modified metal\cite{Freund}. In this
communication, we study an electrodynamic-based model for
estimating the optical response generated by the contact
between a material exhibiting weak ferromagnetism and a 2D
metallic film. It is found that a specific strength of the
ME interaction might couple with localized charge-sheet
modes for electron carrier densities of about
$10^{14}-10^{15}$ cm$^{-2}$ and incident frequencies around
18 cm$^{-1}$, leading to a change in the reflectance from
the metallic film. Applied magnetic field effects on the
relative reflectivity are discussed in Section III.
\section{MODEL}
Localized charge-sheet modes in a 2D conducting
medium in
the framework of Drude approximation is obtained
from the
Nakayama result
\cite{Nakayama},\cite{Cottam},\cite{Pitarke}:
\begin{equation}\label{Nakay}
\frac{\varepsilon_{1}}{\kappa_{1}}+\frac{\varepsilon_{2}}{\kappa_{2}}=-\frac{ic^{2}\sigma}{\omega}=\frac{\Omega_{S}c}{\omega^{2}},
\end{equation}
where $\kappa_{j}$ corresponds to the quasiwavevector in the
$Z$-direction, $\Omega_{S}$ is defined as $\nu
e^{2}/\varepsilon_{0}mc$, and $\nu$ denotes the electron
density concentration in a two-dimensional space. $\kappa_j$ is related to the wavevector along the $Y$-direction through $\kappa_{j}=\left(q_{Y}^{2}-\varepsilon_{j}\omega^{2}/c^{2}\right)^{1/2}$
($j=1,2$). The term
$\varepsilon_{j}$ represents the relative
dielectric function value for the $j$-th medium, with $\varepsilon_{1}=1$ for vacuum. In the range
of wavelengths beyond the far infrared radiation
($<1$ mm),
the dielectric function approaches the well-known
Lyddane-Sachs-Teller (LST) relationship:
$\varepsilon_{2}\approx
\left(1+\chi_{\infty}\right)\left(\omega_{L}/\omega_{T}\right)^{2}$,
where $\chi_{\infty}$ corresponds to the
dielectric
permittivity of the medium $j=2$ and
$\omega_{L,\left(T\right)}$ represents the
longitudinal
(transverse)-optical phonon frequency. For
numerical
purposes, we have set
$\left(\omega_{L}/\omega_{T}\right)^2\approx
1.07$, which
coincides with the relationship for the $b$-axis
normal
phonon modes in BaMnF$_4$. The permittivity
$\chi_{\infty}$ is a functional depending on
mechanical
strain deformations and polarization field
depletion in
the proximities between the multiferroic slab and
metal
film\cite{Alpay}, and is taken as constant for
zero
applied (electric) field and fixed temperature.
Formula
(\ref{Nakay}) is derived by solving the complete set of
Maxwell equations with normal (TM wave) incidence for
$Z>0$ and boundary conditions on the plane $Z=0$, with the
\emph{ansatz} $\mathbf{E},\mathbf{H}\sim e^{-i\left(q_{Y}Y-\omega
t\right)}$ for fields propagating along the plane
$Z=0$. Magnetoelectric effects are taken
into
consideration throughout the transverse
susceptibility
$\chi^{me}$ and the electric displacement vector
$\mathbf{D}$ is written into the constitutive
equation
like
$\mathbf{D}=\varepsilon_{2}\mathbf{E}+4\pi\chi^{me}\mathbf{H}$.
After inserting the additional term
$4\pi\left[\chi^{me}\mathbf{H}\right]$, the expression
(\ref{Nakay})
shall be modified under $\kappa_{2}\rightarrow
\kappa_{2}+
4\pi i\omega\chi^{me}/c$. In the plane $Z=0$, and
in
agreement with the geometrical configuration
shown in Figure (\ref{r0}), the non-zero surface current
density
component is defined as $J_{Y}=\sigma E_{Y}$,
where
$\sigma$ corresponds to the $\sigma_{YY}$-element
of the
generalized conductivity tensor\cite{Solyom}, and
$E_{Y}$
is the electrical field propagating on the $Y$
direction.
\begin{figure}[!ht]
\centering
\includegraphics[width=8cm,
scale=1]{CompositeModel2.jpg}
\vspace{-0.75cm}
\caption{Conducting Charge-Sheet in contact with
a
multiferroic surface. The polarization vector
$\mathbf{P}$
and the wavevector of coupled excitations $q_Y$
are also
depicted in the diagram. \emph{Weak
ferromagnetic}
magnetization vector $\mathbf{M}$ is produced by
interacting antiferromagnetic sublattices with
relative
canting angle $\theta_{C}$.}\label{r0}
\end{figure}
The generic expression for the transverse
susceptibility
$\chi^{me}$ is obtained from first principles by minimizing the free-energy density functional $\Phi$, which contains the two sublattice magnetizations and the polarization, as well as the external fields.
\cite{Tilley},\cite{Stamps}. It can be summarized as:
$4\pi i\omega\chi^{me}/c=2\pi i c
g\omega\left[\left(\omega_{p}^{2}-\omega^{2}\right)^{-1}-\left(\omega_{m}^{2}-\omega^{2}\right)^{-1}\right]$,
where
$g\equiv g\left(\theta_{C},
\mathbf{M},\mathbf{P},\omega_{m},\omega_{p}\right)$
is a
coupling parameter which is an involved function
of the
canting angle between two adjacent
(antiferromagnetic)
sublattices, the spontaneous magnetization
$\mathbf{M}$
and the polarization vector $\mathbf{P}$, as well
as the
parameters $\omega_{m\left(p\right)}$. Factor $g$ is defined in terms of the
characteristic magnetoelectric frequency $S_{me}$
as $g=8\pi^{2}S_{me}^{2}/c^{2}$, given in units
of mm$^{-2}$ all throughout this
paper\cite{Rivera}, in concordance with the
spectral
weight intrinsically associated with the fitting
procedure
for the transmittance spectra via a Lorentzian
model in various multiferroic species, namely RMn$_2$O$_5$ (R:Y,Tb), TbMnO$_{3}$ or LuMnO$_3$\cite{Sushkov},
and its dependence on the externally applied magnetic
field has been neglected for small canting angles (see, for
instance, Eqs. (38) and (47) in Ref. [14]). Two
main poles
are clearly identified for $\chi^{me}$: the
optical
antiferromagnetic resonance mode (AFMR)
$\omega_m$ and the
soft-phonon along $\mathbf{M}$ with resonance
frequency
$\omega_{p}$, with
$\omega_{p}>\omega_{m}$. Classical plasmon
excitations in low-density 2D electron systems are
experimentally detected and theoretically estimated for
wavevectors
$q\lesssim 1.4$ cm$^{-1}$ and energies
$\hbar\omega\lesssim
0.5$ meV\cite{Sarma},
\cite{Chulkov0},\cite{Kanjouri},
therefore the condition
$q_{Y}^{2}>>\varepsilon_{j}\omega^{2}/c^{2}$
remains valid
in the range of interest, and the dispersion
relationship
for the coupled magnetoelectric plasma mode is
obtained by
solving the modified equation (\ref{Nakay}):
\begin{equation}\label{cuasiwv}
q_{Y}^{\pm}=\frac{1}{2}\left[Q\pm\sqrt{Q^{2}-\gamma_{2}\left(\frac{\omega}{2\pi
c}\right)^{2}\left(Q-\gamma_{1}\left(\frac{\omega}{2\pi
c}\right)^{2}\right)}\right],
\end{equation}
with $Q=4\pi
i\omega\chi^{me}/c+\gamma_{1}\left(\omega/2\pi
c\right)^2$, $\gamma_{1}=4\pi^{2}
c\left(\varepsilon_{1}+\varepsilon_{2}\right)/\Omega_{S}$
and
$\gamma_{2}=16\pi^{2}c\varepsilon_{1}/\Omega_{S}$.
For
$\chi^{me}=0$, i.e., no magnetoelectric effects
taken
under consideration, we reproduce the expression
for the
localized plasmon mode \cite{Nakayama}:
\begin{equation}\label{qy}
\omega=\sqrt{\frac{4\pi^{2}c^{2}q_{Y}}{\gamma_{1}}},
\end{equation}
where the ($+$) sign in equation (\ref{cuasiwv}) has
been selected. The complex index of refraction
$\check{n}\left(\omega\right)$ is directly
estimated from the wavenumber $q_{Y}$ \cite{fowles}:
$\check{n}\left(\omega\right)=cq_{Y}\left(\omega\right)/\omega$.
The lowest-order reflectance coefficient
$R\left(\omega\right)$ for normal incidence is
defined as
$R\left(\omega\right)=\mid
\check{n}\left(\omega\right)-1\mid^{2}/\mid\check{n}\left(\omega\right)+1\mid^{2}$,
and its numerical profile is discussed in the next
section.
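Before that discussion, the $q_{Y}^{+}$ branch of Eq. (\ref{cuasiwv}) and the resulting $R\left(\omega\right)$ can be sketched numerically (Python); the parameter values below are illustrative assumptions assembled from figures quoted in this paper ($\Omega_S$ for an Au film, $\omega_m$ and $\omega_p$ for BaMnF$_4$, $g=0.6878$ mm$^{-2}$, $\varepsilon_2=11.6$), not a fit:
\begin{verbatim}
import numpy as np

c = 2.998e10                 # speed of light [cm/s]
eps1, eps2 = 1.0, 11.6       # vacuum / multiferroic permittivity
Omega_S = 2.12e12            # 2D plasma frequency [Hz] (Au film)
g = 0.6878 * 100.0           # ME coupling: 0.6878 mm^-2 in cm^-2
w_m = 2 * np.pi * 0.54e12    # AFMR mode [rad/s]
w_p = 2 * np.pi * 7.53e12    # soft-phonon mode [rad/s]

gam1 = 4 * np.pi**2 * c * (eps1 + eps2) / Omega_S
gam2 = 16 * np.pi**2 * c * eps1 / Omega_S

def chi_term(w):
    """The ME contribution 4*pi*i*w*chi_me/c with its two poles."""
    return 2j * np.pi * c * g * w * (1.0 / (w_p**2 - w**2)
                                     - 1.0 / (w_m**2 - w**2))

def reflectance(w):
    """R(w) from the q_Y^+ branch of the dispersion relation."""
    k = w / (2 * np.pi * c)          # wavenumber omega/(2 pi c)
    Q = chi_term(w) + gam1 * k**2
    qY = 0.5 * (Q + np.sqrt(Q**2 - gam2 * k**2 * (Q - gam1 * k**2)))
    n_eff = c * qY / w               # effective index of refraction
    return abs((n_eff - 1) / (n_eff + 1))**2

for nu_bar in (16.0, 18.0, 20.0):    # incident wavenumber [cm^-1]
    w = 2 * np.pi * c * nu_bar
    print(f"{nu_bar:5.1f} cm^-1 -> R = {reflectance(w):.3f}")
\end{verbatim}
Close to 18 cm$^{-1}$, i.e., near $\omega_m$, the two-pole term dominates $Q$ and the reflectance departs from its $g=0$ value, in line with the behavior discussed in the next section.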
$\check{n}\left(\omega\right)$ can be considered
as the \emph{effective} index of refraction for
the composite 2D metallic foil in contact with a
multiferroic (ferroelectric) system under
normal incidence of an electromagnetic wave
oscillating in
the THz regime. An applied magnetic field $\textbf{B}$ along the $Z$-direction enters into
the formalism by taking symmetry considerations upon
the dependence of the electrical conductivity as a
function of $\textbf{B}$ under the transformation
$\sigma\rightarrow\sigma\left(B\right)$, with
$\sigma\left(B\right)=i\Omega_{S}c^{-1}\omega\left(\omega^{2}-\omega_{B}^{2}\right)^{-1}$.
Expression (\ref{cuasiwv}) may be reconstructed as:
$q_{Y}^{\pm}=$
\begin{equation}\label{cuasiwv2}
\frac{1}{2}\left[Q^{\prime}\pm\sqrt{Q^{\prime
2}-\gamma_{2}\frac{\left(\omega^2-\omega^{2}_{B}\right)}{\left(2\pi
c\right)^{2}}\left(Q^{\prime}-\gamma_{1}\frac{\left(\omega^2-\omega^{2}_{B}\right)}{\left(2\pi
c\right)^{2}}\right)}\right],
\end{equation}
with
$Q^{\prime}=Q-\gamma_{1}\omega_{B}^{2}/\left(2\pi
c\right)^{2}$. The classical localized
magnetoplasmon mode
(\ref{qy}) is rewritten for $g=0$ and under $B$ as\cite{Eriksson}:
\begin{equation}\label{qmy}
\omega=\sqrt{\omega_{B}^{2}+\frac{4\pi^{2}c^{2}q_{Y}}{\gamma_{1}}},
\end{equation}
in similarity with the result (\ref{qy}). In this
particular case the antireflective condition
($\check{n}=1$, i.e., $q_{Y}=\omega/c$) depends
on the external magnetic field intensity: inserting
$q_{Y}=\omega/c$ into Eq. (\ref{qmy}) yields the quadratic
equation
$\omega^{2}-\left(4\pi^{2}c/\gamma_{1}\right)\omega-\omega_{B}^{2}=0$,
whose positive root gives
$\lambda_{c}^{-1}=\pi/\gamma_{1}+\sqrt{\left(\pi/\gamma_{1}\right)^{2}+\left(\omega_{B}/2\pi
c\right)^2}$. This leads to a quadratic
correlation $\lambda_{c}^{-1}\propto B^{2}$ for
$\gamma_{1}\omega_{B}/2\pi^{2}c\ll 1$. For an arbitrary orientation of
$\mathbf{B}$, equation (\ref{Nakay}) shall be
modified on its right side according to
$\Omega_{S}c\omega^{-2}\rightarrow
\Omega_{S}c\left(\omega^{2}-\omega_{B}^{2}\right)^{-1}F\left(n_{X},n_{Y},n_{Z}\right)$,
where $F\left(\cdot\right)$ is a function of the
directors
$n_{X,Y,Z}$\cite{magf}. Optical reflectivity response for
this structure might also be verified by adapting
the Rouard method\cite{Lecaruyer},\cite{Heavens}:
\begin{equation}\label{Rou}
R_{Rouard}=\frac{r_{1-2}+r_{2-3}e^{-2i\delta}}{1+r_{1-2}r_{2-3}e^{-2i\delta}},
\end{equation}
where $r_{i-j}$ corresponds to the internal
reflectivity between media $i$ and $j$, and $\delta$ is
the phase difference across the second medium of
thickness $\ell$, defined as
$\delta=2\pi\check{n}_{2}\ell\lambda^{-1}$. The
index of
refraction $\check{n}_{2}$ is a function of the
components
of the conductivity tensor $\left[\sigma\right]$, depending on the incoming electromagnetic field polarization. In this particular case, it is calculated as:
\begin{equation}
\check{n}_{2}=\sqrt{1+\left(i\sigma_{YY}/\omega\varepsilon_{0}\right)},
\end{equation}
while $\sigma_{YY}$ is explicitly given by
\begin{equation*}
i\sigma_{YY}/\omega\varepsilon_{0}=-\omega^{2}_{P}\left(\omega^{2}-\omega^{2}_{B}\right)^{-1}\left(1-\omega^{2}_{B}n_{Y}^{2}/\omega^{2}\right),
\end{equation*}
where $\omega_P$ represents the electronic plasma
frequency for the \emph{bulk} system, which is related to $\Omega_S$ through
$\omega_{P}^{2}=cN\Omega_{S}/\nu$, where $N$ is the
volumetric electron density concentration. Reference values for plasma frequencies were taken as $\omega_{P}=2.15\times 10^{15}$ Hz and $\Omega_{S}=2.12\times 10^{12}$ Hz for gold (Au) in the framework of the Drude model fitting\cite{ws}.
Factors $r_{i-j}$ in formula (\ref{Rou}) are
given explicitly by
$r_{1-2}=\left(1-\check{n}_{2}\right)/\left(1+\check{n}_{2}\right)$,
and
$r_{2-3}=\left(\check{n}_{2}-\check{n}_{3}\right)/\left(\check{n}_{2}+\check{n}_{3}\right)$,
with
$\check{n}_{3}=\sqrt{1+\left(4\pi\chi^{me}\right)^{2}}$.
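A compact numerical sketch of Eq. (\ref{Rou}) reads as follows (Python); the film thickness, wavelength, and the value used for $4\pi\chi^{me}$ are illustrative assumptions, with $\omega_P$ taken from the Au Drude fit quoted above:
\begin{verbatim}
import numpy as np

def rouard_R(n2, n3, ell, lam):
    """Squared modulus of the single-film Rouard coefficient:
    medium 1 vacuum, medium 2 a film (index n2, thickness ell),
    medium 3 the magnetoelectric substrate (index n3)."""
    r12 = (1 - n2) / (1 + n2)
    r23 = (n2 - n3) / (n2 + n3)
    ph = np.exp(-2j * 2 * np.pi * n2 * ell / lam)  # exp(-2 i delta)
    return abs((r12 + r23 * ph) / (1 + r12 * r23 * ph))**2

c = 2.998e10                     # [cm/s]
lam, ell = 0.055, 1.0e-6         # 0.55 mm wavelength, 10 nm film [cm]
w = 2 * np.pi * c / lam          # incident angular frequency [rad/s]
wP = 2.15e15                     # bulk plasma frequency for Au [Hz]
n2 = np.sqrt(1 - wP**2 / w**2 + 0j)      # Drude film index at B = 0
n3 = np.sqrt(1 + (4 * np.pi * 0.02)**2)  # toy value of 4*pi*chi_me
print(f"R_Rouard = {rouard_R(n2, n3, ell, lam):.3f}")
\end{verbatim}
At $B\neq 0$ the Drude term is replaced by $-\omega^{2}_{P}\left(\omega^{2}-\omega^{2}_{B}\right)^{-1}\left(1-\omega^{2}_{B}n_{Y}^{2}/\omega^{2}\right)$, as given above for $i\sigma_{YY}/\omega\varepsilon_{0}$.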
Indices of refraction are directly obtained by
reconstructing the set of Maxwell equations in
each material medium. In the general case, taking
into account the ME effect in the formalism by
inserting the tensor
$\left[\mathbf{\chi}\right]$, the propagating
electric field $\mathbf{E}$ must satisfy:
\begin{eqnarray}\label{Max}
\left(\nabla\times\nabla\times\mathbf{E}\right)_{M-MF}=i\omega\mu_{0}\left(\left[\mathbf{\sigma}\right]\mathbf{E}\right)_{M-MF}\\ \nonumber +\omega^{2}\mu_{0}\mathbf{D}_{M-MF}+4\pi\omega\nabla\times\left(\left[\chi\right]\mathbf{E}\right)_{M-MF},
\end{eqnarray}
where $\left[\mathbf{\sigma}\right]$ is the
conductivity tensor, $\mathbf{D}$ is the electric displacement vector defined previously, and the subscript $M-MF$ indicates the region where the field propagation is evaluated, namely the metal (M) or the multiferroic (MF) slab.
\section{RESULTS AND DISCUSSION}
Figure (\ref{r1}) exhibits the zero-field reflectance
response as a function of the 2D electronic carrier
concentration $\nu$, for different wavelengths and the
magnetoelectric coupling parameter $g$ fixed at
$0.6878$ mm$^{-2}$. The dielectric permittivity values have been taken as $\varepsilon^{ME}=11.6\varepsilon_{0}$ and $20.5\varepsilon_{0}$ for the pyroelectric ferromagnet BaMnF$_4$, corresponding to the values measured along its $a$ and $b$ crystallographic axes, respectively. Dotted curves (a) and (b) are set as references for $g=0$. Comparative results are shown for Rouard's method (RM) and the modified Nakayama (N) expression (Eq. \ref{Nakay}), indicating the change in the reflectivity spectra under the ME effect for different values of the dielectric constant $\varepsilon^{ME}$. The reflectance response increases from $0.4$ ($g=0.0$) to $0.63$ ($g=0.6878$) for electronic densities lower than $\sim 100\times 10^{13}$ cm$^{-2}$, while it increases monotonically to $1.0$ for electronic concentrations greater than $\sim 200\times 10^{13}$ cm$^{-2}$ regardless of the value of $g$, in the framework of the RM approach. One of the discrepancies with the Nakayama results is due to the difference between the 2D intrinsic plasma frequency $\Omega_S$ and the plasma frequency $\omega_{P}$ of the \emph{bulk} system. Variation in the electronic carrier density in the former case has been simulated by inserting the film-thickness dependence $\ell$ into $\omega_{P}$, providing good agreement for $\ell\sim 10$ nm ($\nu\sim 147.42$ cm$^{-2}$), as proven in Fig. (\ref{comp}).
Minima of reflectivity obtained from Eq. (\ref{cuasiwv}) are located at $\lambda_{c}=2\pi\left(\varepsilon_{1}+\varepsilon_{2}\right)c/\Omega_{S}$,
or $\lambda_{c}^{-1}\propto\nu$, indicating that the critical
wavelength for \emph{bare} plasmon excitations becomes
larger as the electronic concentration decreases. The AFMR mode lies in the THz range, with $\omega_{m}\sim
0.54$ THz, while the transverse phonon frequency is
taken as 7.53 THz for the BaMnF$_4$ compound\cite{Barnas}.
Metallic behavior predominates for concentrations higher than
$10^{16}$ cm$^{-2}$ or smaller than $10^{14}$ cm$^{-2}$,
for selected wavelengths between 0.5 mm
and 0.6 mm. Resonant plasmon modes (i.e., collective electronic excitations under the ME interaction) are
important for carrier densities around $10^{15}$ cm$^{-2}$,
where radiative absorption or antireflective phenomena
become strong; the reflectance spectrum is therefore
significantly modified, diminishing the percentage of
absorbed radiation, only when the external frequency
approaches the characteristic mode $\omega_m$ and $g\neq 0$.
\begin{figure}[!ht]
\centering
\includegraphics[width=9cm,
scale=1]{newreviewF4b.jpg}
\caption{Zero-field reflectance response as a
function of
the electron carrier density $\nu$, comparing Rouard's method and Eq. (\ref{Nakay}) for the dielectric constants $\varepsilon^{ME}=11.6\varepsilon_{0}$ (curves (a) and (a$^\prime$)) and $\varepsilon^{ME}=20.5\varepsilon_{0}$ (curves (b) and (b$^\prime$)), with $\omega\approx \omega_{m}$ in all cases.}\label{r1}
\end{figure}
Figure (\ref{r2}) depicts the shift of the
minimum of the reflectance in the $\left(\nu,g\right)$ plane for the Nakayama approach.
The ME effect becomes relevant by decreasing the
\emph{critical}
carrier density $\nu_c$ as $g$ increases, and it
remains essentially unmodified for frequencies away from
the AFMR characteristic mode, as indicated by line (d). The dotted vertical line is placed at $g=0.6878$ mm$^{-2}$ as a guide to the eye for identifying the change of the critical density as the incident wavelength varies around $2\pi c/\omega_m$. The critical density $\nu_{c}$ is to be understood as the electron carrier concentration that maximizes the antireflective effects for the composite metal/multiferroic system. Figure (\ref{bparallel0}) shows the reflectance response under an applied magnetic field of magnitude $1.5$ T for different directions in the $XY$ plane. The AFMR resonance at $2\pi c/\omega_{m}$ is essentially unaffected by the orientation of the external field, but it becomes sensitive to the azimuthal angle for frequencies between the edge of the THz range and the microwave (SHF) band. Highly reflective effects are more intense for external magnetic fields applied in the direction opposite to the weak ferromagnetic state $\mathbf{M}$, favoring the metallic behavior for long wavelengths and shielding the resulting ME interaction.
\emph{In-plane}
applied field $\mathbf{B}$ effects on the
reflectance as a
function of carrier density $\nu$ are illustrated
in Fig.
(\ref{bparallel}). $R\left(\mathbf{B}\right)$
tends to increase for $\mathbf{B}$ parallel to the $+X$ axis and to decrease for $\mathbf{B}$ along the $-X$ axis. Curve
(b) for
null $\mathbf{B}$ overlaps the outcome of $R$ at
$B=1.5$
T, $\theta=\pi/2$, $\phi=\pi/2$ (i.e., parallel to the $Y$ axis), indicating no substantial
variation of the
optical reflectance for fields applied along the direction of the plasmonic wavevector $q_{Y}$ for carrier densities smaller than $\sim 10^{13}$ cm$^{-2}$. Equation (\ref{Max}) has also been treated by implementing the Finite Element Method (FEM) with standard boundary conditions for the $\mathbf{D}$ and $\mathbf{B}=\mu_{0}\mathbf{H}+4\pi\left[\mathbf{\chi}\right]\mathbf{E}$ fields in order to calculate the reflectance response as a function of the incident wavelength.
Comparative results for the calculated reflectance response are shown in Figures (\ref{comp}) and (\ref{3cc}). Under Nakayama's formalism, the metallic medium is treated as a 2D system, while the Rouard and FEM methods converge to the former for a film thickness around $\ell\sim 10$ nm, which roughly corresponds to an electronic carrier density of $147.42\times 10^{13}$ cm$^{-2}$ after calculating the correlation between the two intrinsic plasma frequencies $\omega_{P}$ and $\Omega_S$. Iso-reflective lines of $\Delta R/R=R\left(B\right)/R\left(0\right)-1$\cite{JDE} close to $2\pi c/\omega_{m}$, under an externally applied magnetic field (in the $Z$ direction), are shown in Figure (\ref{control}). The projected lines preserve a symmetric distribution under magnetic field inversion near $\lambda_m$, although strong fluctuations and a sign flip of $\Delta R/R$ are present for wavelengths slightly different from $\lambda_m$ and magnetic fields greater than $\sim 5$ T, indicating that interacting ME and plasmonic activity might increase the reflectance outcome of systems with low electronic density and without applied field.
\begin{figure}
\centering
\includegraphics[width=9cm,
scale=1]{newreviewF3.jpg}
\caption{Critical carrier density $\nu_{c}$ as a
function
of the ME coupling parameter $g$ for different
wavelengths. $\nu_c$ depends strongly on $g$
only for
external frequencies near the AFMR mode
$\omega_m$.}\label{r2}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=9cm,
scale=1]{ReviewArbitraryFieldComparison1.jpg}
\caption{\emph{In-plane} magnetic field effects on the reflectance spectrum. $B=1.5$ T, $\theta=\pi/2$; (a) $\phi=0$, (b) $\phi=\pi/4$, (c) $\phi=\pi/2$, (d) $\phi=\pi$; $\nu=147.42\times 10^{13}$ cm$^{-2}$, $g=0.6878$ mm$^{-2}$.}\label{bparallel0}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=9cm,
scale=1]{newreviewF55.jpg}
\caption{Reflectance response as a function of the electron carrier density for an \emph{in-plane}
applied field at a wavelength $2\pi c/\omega=0.53$ mm close to the AFMR frequency,
for coupled ($g=0.6878$ mm$^{-2}$) and uncoupled ($g=0$) ME interaction.}\label{bparallel}
\end{figure}
\begin{figure}
\includegraphics[width=8.5 cm,
scale=1]{comparison.jpg}
\caption{Calculated reflectance $R$ at zero field
as a
function of the incident wavelength by using
three
different techniques: (a) FEM, (b) the summation
(Rouard's)
method and (c) expression (\ref{Nakay}), with an
electronic density $\nu=147.42\times 10^{13}$
cm$^{-2}$,
which corresponds to a film of $\ell\sim 10$ nm
thickness.
}\label{comp}
\end{figure}
\begin{figure}
\includegraphics[width=7 cm,
scale=1]{newfig9control.jpg}
\caption{Isoreflective lines of $\Delta R/R$
under applied
magnetic fields parallel to $Z$-axis with $\nu=0.52\times 10^{15}$ cm$^{-2}$,
$g=0.6878$ mm$^{-2}$ and $2\pi c/\omega_{m}=0.54$
mm.}\label{control}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=8.5 cm,
scale=1]{3c3.jpg}
\caption{Reflectance spectrum calculated by using (a) the Finite Element Method (FEM), (b) Rouard's method (RM) and (c) the Nakayama equation (N), for a magnetic field of magnitude $1.5$ T applied along the $Y$ direction. A thickness $\ell\sim 10$ nm was taken in cases (a) and (b), corresponding to a carrier density $\nu=147.42\times 10^{13}$ cm$^{-2}$. Methods (a) and (b) are in good agreement at $\lambda_m$, though response (c) tends to match (a) and (b) for wavelengths $\leq 0.1$ mm.}\label{3cc}
\end{figure}
\section{CONCLUDING REMARKS}
We have developed a model for studying the
magnetoelectric interactions of 2D plasmonic modes in the THz
range for a metal/multiferroic composite device. The multiferroic
medium exhibits weak ferromagnetism, and the metallic
behavior enters the formalism in the framework of the classical Drude-Lorentz model. The relative reflectance response for normal incidence is numerically calculated
for a particular ME coupling strength $g$ and
wavelengths near the optical antiferromagnetic resonance frequency
$\omega_{m}$ by using three different approaches: Nakayama's formalism, Rouard's method and the Finite Element Method (FEM). Characteristic soft phonon and AFMR frequencies were taken for the pyroelectric ferromagnet BaMnF$_4$, showing that a particular reflectivity condition might
be adjusted by varying the intensity of the applied field, its orientation, the film thickness or the incident frequency of the radiation, mainly in the range $\lambda>\lambda_{m}$. The reflectance spectra demonstrate that the magnetoelectric interaction predominates for metallic film thicknesses smaller than 25 nm in the THz regime, while for thicker films (50-100 nm) the optical outcomes are not significantly affected by this interaction; instead, total reflectance from the film is observed over a wide range of frequencies up to the cut-off bulk value $\omega_{P}\sim 2.1$ PHz, at which the reflectivity decays abruptly to zero and exhibits oscillatory behavior for greater frequencies. The chosen value of $\omega_P$ is within the typical order of magnitude for good conductors like gold, silver or copper, although the calculations and the comparison with the strictly 2D system were made only for the first one. There is no clear signature of the plasmonic cut-off for intermediate film thicknesses (25-50 nm), and the reflectivity curve does not drop abruptly as it does for thicker films; rather, it reaches its maximum value over a broad interval of $10^{13}-2\times 10^{16}$ Hz, suggesting a variation of the effective dielectric response associated with the metal under the ME interaction.
Further analysis shall be proposed for other metallic or semiconducting materials, since optical control experiments in the THz range have recently been
achieved on GaAs wafers via stimulated
photocarriers generated by interband light absorption.
The resulting reflectivity spectrum is tuned from the antireflective ($R<3\%$) to the highly reflective
($R>85\%$) limit under controlled power illumination \cite{Fekete,Coutaz}. Although all numerical simulations were conducted
for $\varepsilon^{ME}=K\varepsilon_{0}$ ($K$ being taken as $11.6$ and $20.5$ in the range of interest), simultaneous electric field control
$\mathbf{E}_0$ of the optical properties of the
composite device might also be achieved through the dependence of the dielectric
function of the multiferroic material,
$\varepsilon_{2}\left[\mathbf{P}\left(\mathbf{E}_{0}\right)\right]$,
on the polarization $\mathbf{P}\left(\mathbf{E}_{0}\right)$
and the temperature, an issue that shall be addressed in
future investigations.
\newline
\newline
\begin{acknowledgments}
H.V. thanks the POM group for access to computing
facilities.
C. V.-H. acknowledges financial support provided
by DIMA,
\emph{Direcci\'on de Investigaci\'on Sede
Manizales},
Universidad Nacional de Colombia. H.V. declares
no
competing financial interest.
\end{acknowledgments}
\newpage
\section{Introduction}\label{sec:intro}
We analyze convergence and H\"older regularity of multivariate
level dependent (non-stationary) subdivision schemes whose masks
depend linearly on one or several parameters. For this type of
schemes, which include well-known schemes with tension parameters
\cite{BCR2007_1, BCR2007_2, CoGoPi07, ContiRomani10, FMW2007, FMW2010}, the
theoretical results from \cite{US2014} are applicable, but not
always efficient. Indeed, if the level dependent parameters vary
in some compact set, then the set of the so-called limit points
(see \cite{US2014}) of the corresponding sequence of
non-stationary masks exists, but cannot be determined explicitly.
This hinders the regularity analysis of such schemes. Thus, we
present a different perspective on the results in \cite{US2014}
and derive a new general method for convergence and regularity
analysis of such level and parameter dependent schemes. The
practical efficiency of this new method is illustrated on several
examples. We also derive necessary criteria that allow us to
describe the class of functions that can be generated by
non-stationary subdivision schemes. Indeed, we show how to
characterize such functions via a special property of the zeros
of their Fourier transforms.
Subdivision schemes are iterative algorithms for generating curves
and surfaces from given control points of a mesh. They are easy to
implement and intuitive in use. These and other nice mathematical
properties of subdivision schemes motivate their popularity in
applications, i.e. in modelling of freeform curves and surfaces,
approximation and interpolation of functions, computer animation,
signal and image processing etc. Non-stationary subdivision
schemes extend the variety of different shapes generated by
stationary subdivision. Indeed, the level dependency makes it
possible to generate new classes of functions such as exponential polynomials,
exponential B-splines, etc. This gives a new impulse to
development of subdivision schemes and enlarges the scope of their
applications, e.g. in biological imaging \cite{DelUnser2012,
Noi2014}, geometric design \cite{ReifPeter08,WW02} or isogeometric
analysis \cite{Schroder2000, Umlauf10}.
The main challenges in the analysis of any subdivision scheme are
its convergence (in various function spaces), the regularity of
its limit functions and its generation and reproduction
properties. The important role of the matrix approach in the
regularity analysis of stationary subdivision schemes is
well-known. It allows one to reduce the analysis to the computation
or estimation of the joint spectral radius of a finite set of
square matrices derived from the subdivision mask. Recent advances
in joint spectral radius computation~\cite{GP13, MR14} make
the matrix approach very precise and efficient. In the
non-stationary setting, however, this approach has never been
applied because of the several natural obstacles. First of all,
the matrix products that emerge in the study of non-stationary
schemes have a different form than those usually analyzed by the
joint spectral radius techniques. Secondly, the masks of
non-stationary schemes do not necessarily satisfy sum rules, which
destroys the relation between the convergence of the scheme and
spectral properties of its transition matrices. All those
difficulties were put aside by the results in~\cite{US2014}, where
the matrix approach was extended to general non-stationary
setting.
In this paper, in Section \ref{sec:parameters}, we make the next
step and consider level and parameter dependent subdivision
schemes whose masks include tension parameters, used to control
the properties of the subdivision limit. Mostly, the tension
parameters are level dependent and influence the asymptotic
behavior of the scheme. If this is the case, the scheme can be
analyzed by \cite[Theorem 2]{US2014}, which states that the
convergence and H\"older regularity of any such non-stationary
scheme depends on the joint spectral radius of the matrices
generated by the so-called limit points of the sequence of
level-dependent masks. In Theorem~3.5, we show that for
schemes depending linearly on these parameters, the result of
\cite[Theorem 2]{US2014} can be simplified and made more
practical, see the examples in Section~\ref{subsec:examples}. In
Section \ref{sec:limitations}, we address the reproduction
properties of subdivision schemes and the problem of characterizing
the functions that can be generated by non-stationary subdivision
schemes. This question is crucial in many respects. For instance,
the reproduction of exponential polynomials is strictly connected
to the approximation order of a subdivision scheme and to its
regularity \cite{ContiRomaniYoon2015}. Essentially, the larger the
number of exponential polynomials that are reproduced, the higher
the approximation order and the possible regularity of the
corresponding scheme.
\section{Background}
\bigskip \noindent Let
$M=mI \in \ZZ^{s \times s}$, $|m| \ge 2$, be a dilation matrix and
$E=\{0, \ldots,|m|-1\}^s$ be the set of the coset representatives
of $\ZZ^s / M \ZZ^s$. We study subdivision schemes given by the
sequence $\{S_{\ba^{(r)}}, \ r \in \NN\}$ of subdivision operators
$S_{\ba^{(r)}}: \ell(\ZZ^s) \rightarrow \ell(\ZZ^s)$ that define
the subdivision rules by
$$
(S_{\ba^{(r)}} \bc)(\alpha)=\sum_{\beta \in \ZZ^s} \ra_{\alpha-M\beta}^{(r)} c(\beta),
\quad \alpha \in \ZZ^s.
$$
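A single subdivision step is easy to implement. The following Python sketch
(an illustration only, for $s=1$ and $M=2$) applies one level dependent rule
to a data sequence; the masks used below are placeholders.
\begin{verbatim}
# One step of a univariate binary subdivision scheme:
#   c_new[alpha] = sum_beta a[alpha - 2*beta] * c[beta].
import numpy as np

def subdivide(c, a):
    c_new = np.zeros(2*len(c) + len(a))
    for beta, cb in enumerate(c):
        c_new[2*beta : 2*beta + len(a)] += np.asarray(a)*cb
    return c_new

data = np.array([0.0, 1.0, 0.0])
for r in range(1, 6):                       # level dependent masks a^(r)
    w = 1/16 + (-1)**r/64                   # illustrative parameters
    mask = [-w, 0, 1/2 + w, 1, 1/2 + w, 0, -w]
    data = subdivide(data, mask)
\end{verbatim}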
The masks ${\mathbf a}^{(r)}=\{ \ra_{\alpha}^{(r)}, \ \alpha \in
\ZZ^s\}$, $r \in \NN$, are sequences of real numbers
$\ra_{\alpha}^{(r)}$ and are assumed to be all supported in $\{0,
\ldots,N\}^s$, $N \in \NN$. For the given set
\begin{equation} \label{def:K}
K= \sum_{r=1}^\infty M^{-r}G,\quad G= \{-|m|,\ldots ,N + 1\}^s,
\end{equation}
the masks define the square matrices
\begin{equation}\label{del:matrices}
A^{(r)}_{\varepsilon}=\left( \ra^{(r)}_{M\alpha+\varepsilon-\beta}\right)_{\alpha,\beta \in K},
\quad r \in \NN, \quad \varepsilon \in E.
\end{equation}
We assume that the level dependent symbols
$$
a^{(r)}(z)= \sum_{\alpha \in \ZZ^s} \ra_{\alpha}^{(r)} z^{\alpha}, \quad
z^{\alpha}=z_1^{\alpha_1} \cdot \ldots \cdot z_s^{\alpha_s},\quad z\in
\left(\CC \setminus \{0\} \right)^s.
$$
of the subdivision scheme
$$
c^{(r+1)}=S_{\ba^{(r)}} c^{(r)}=S_{\ba^{(r)}}S_{\ba^{(r-1)}} \ldots S_{\ba^{(1)}} c^{(1)},
\quad r \in \NN,
$$
satisfy sum rules of order $\ell+1$, $\ell \in \NN_0$. For more
details on sum rules see e.g. \cite{Cabrelli,
CaravettaDahmenMicchelli, JetterPlonka, JiaJiang}.
\begin{definition} Let $\ell \in \NN_0$, $r \in \NN$.
The symbol $a^{(r)}(z)$, $z \in (\CC \setminus \{0\})^s$, satisfies sum
rules of order $\ell+1$ if
\begin{equation} \label{def:sumrules}
a^{(r)}(1, \ldots,1)=|m|^s \quad\hbox{and}\quad
\max_{|\eta| \le \ell}\ \max_{\epsilon \in \Xi \setminus
\{1\}} | D^\eta a^{(r)}(\epsilon)|=0\,,
\end{equation}
where $
\Xi=\{e^{-i\frac{2\pi}{|m|}\varepsilon}=(e^{-i\frac{2\pi}{|m|}\varepsilon_1},
\ldots,
e^{-i\frac{2\pi}{|m|}\varepsilon_s}), \ \varepsilon \in E\}$ and
$D^\eta=\frac{\partial^{\eta_1}}{\partial z_1^{\eta_1}} \ldots \frac{\partial^{\eta_s}}{\partial
z_s^{\eta_s}}$.
\end{definition}
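For $s=1$ and $|m|=2$, the conditions \eqref{def:sumrules} are easy to test
numerically: one checks $a^{(r)}(1)=2$ and that the derivatives of the symbol
up to order $\ell$ vanish at $z=-1$, the only element of $\Xi\setminus\{1\}$.
The Python sketch below does this for a four-point-type mask; it is an
illustration only.
\begin{verbatim}
# Check sum rules of order ell+1 for a univariate binary mask.
import numpy as np

def symbol_derivative(a, z, eta, offset=0):
    """eta-th derivative of a(z) = sum_alpha a[alpha] z**(alpha+offset)."""
    s = 0.0
    for alpha, coef in enumerate(a):
        k = alpha + offset
        fall = np.prod([k - j for j in range(eta)])   # falling factorial
        s += coef * fall * z**(k - eta)
    return s

w = 1/16
mask = [-w, 0, 1/2 + w, 1, 1/2 + w, 0, -w]    # supported on -3..3
print(symbol_derivative(mask, 1.0, 0, -3))    # = 2 = |m|^s
for eta in range(2):                          # sum rules of order 2
    print(symbol_derivative(mask, -1.0, eta, -3))   # = 0
\end{verbatim}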
The assumption that all symbols $a^{(r)}(z)$ satisfy sum rules of
order $\ell+1$, guarantees that the matrices
$A^{(r)}_{\varepsilon}$, $\varepsilon \in E$, $r \in \NN$, in
\eqref{del:matrices} have common left-eigenvectors of the form
$$
\left( p(\alpha) \right)_{\alpha \in K}, \quad p \in \Pi_\ell,
$$
where $\Pi_\ell$ is the space of polynomials of degree less than
or equal to $\ell$. Thus, the matrices $A^{(r)}_{\varepsilon}$,
$\varepsilon \in E$, $r \in \NN$, possess a common linear
subspace $V_\ell \subset \RR^{|K|}$ orthogonal to the span of the
common left-eigenvectors of $A^{(r)}_{\varepsilon}$, $\varepsilon \in
E$, $r \in \NN$. The spectral properties of the set
$$
{\cal T}=\{ A^{(r)}_{\varepsilon}|_{V_\ell}, \ \varepsilon \in E, \ r \in \NN \}
$$
determine the regularity of the non-stationary scheme,
see \cite{US2014}.
\begin{remark}
In the univariate case, i.e. $s=1$ and $M=m$, the assumption that the
symbols $a^{(r)}(z)$, $r \in \NN$, satisfy sum rules of order
$\ell+1$ implies that
$$
a^{(r)}(z)=(1+z+ \ldots + z^{|m|-1})^\ell \sum_{\alpha \in \ZZ} b^{(r)}_{\alpha} z^{\alpha},
\quad z \in \CC \setminus \{0\},
$$
and
\begin{equation}\label{eq:defA_eps_rest_V_uni}
A^{(r)}_{\varepsilon}|_{V_\ell}=\left( b^{(r)}_{M\alpha+\varepsilon-\beta}\right)_
{\alpha, \beta \in \{0, \ldots, N-\ell\}}, \quad \varepsilon \in E.
\end{equation}
In the multivariate case, the explicit form of the matrices
$A^{(r)}_{\varepsilon}|_{V_\ell}$, $\varepsilon \in E$, $r \in
\NN$, depends on the choice of the basis of ${V_\ell}$, see e.g.
\cite[Section 3.1]{US2014} or \cite{Cabrelli}.
\end{remark}
\begin{definition} \label{def:Cellconvergence}
A subdivision scheme $\{S_{\ba^{(r)}}, \ r\in \NN \}$ is
\emph{$C^\ell$-convergent}, if for any initial sequence $\bc \in
\ell_\infty(\ZZ^s)$ there exists the
limit function $g_\bc \in C^\ell(\RR^s)$ such that for any test function
$f \in C^\ell(\RR^s)$
\begin{equation}\label{eq:C^l_convergence}
\lim_{k \to\infty} \Big \| g_{\bc}(\cdot) - \sum_{\alpha \in \ZZ^s} S_{\ba^{(r)}}
S_{\ba^{(r-1)}} \ldots S_{\ba^{(1)}}c(\alpha) f(M^k\cdot - \alpha) \Big\|_{C^\ell}=0.
\end{equation}
\end{definition}
\noindent For more details on test functions see \cite{DM97}. Note
that it suffices to check \eqref{eq:C^l_convergence} for only one test
function $f$. Note also that, if all limits of a subdivision
scheme belong to $C^\ell(\RR^s)$, then the scheme may not converge
in $C^\ell$, but only in $C^0(\RR^s)$.
\smallskip \noindent
In this paper, we also show how to estimate the H\"older
regularity of subdivision limits.
\begin{definition} The \emph{H\"older regularity} of the $C^0-$convergent
scheme $\{S_{\ba^{(r)}}, \ r\in \NN\}$ is $\alpha=\ell+\zeta$, if
$\ell$ is the largest integer such that $g_{\bc} \in
C^\ell(\RR^s)$ and $\zeta$ is the supremum of $\nu \in [0,1]$
such that
$$
\max_{\mu \in \NN_0^s, |\mu|=\ell} |D^\mu g_{\bc}(x)-D^\mu g_{\bc}(y)| \le
|x-y|^\nu, \quad x,y \in \RR^s.
$$
We call $\alpha$ the \emph{H\"older exponent} of $\{S_{\ba^{(r)}},
\ r\in \NN\}$.
\end{definition}
\medskip \noindent
The joint spectral radius of a set of square matrices was
introduced in \cite{RotaStrang} and is independent of the choice
of the matrix norm $\|\cdot\|$.
\begin{definition}\label{def:JSR} The joint spectral radius (JSR) of a compact
family ${\cM}$ of square matrices is defined by
$$
\displaystyle{ \rho({\cM}):=\lim_{n \rightarrow \infty}
\max_{M_{1}, \ldots, M_n \in \cM} \left\|\prod_{j=1}^n M_{j}
\right\|^{1/n}.}$$
\end{definition}
\noindent The link between the JSR and subdivision is well-known,
see e.g. \cite{Charina, CJR02, DL1992, J95, H03}.
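Although computing the JSR exactly is hard in general, elementary two-sided
bounds follow directly from Definition \ref{def:JSR}: for every $n$ and all
products $\Pi$ of length $n$ from the family, $\rho(\Pi)^{1/n}$ is a lower
bound and $\max \|\Pi\|^{1/n}$ an upper bound. The following brute-force
Python sketch evaluates these bounds for a small illustrative family; it is
not a substitute for the exact algorithms of \cite{GP13, MR14}.
\begin{verbatim}
# Brute-force two-sided bounds for the joint spectral radius:
#   max rho(P)^(1/n)  <=  rho(family)  <=  max ||P||^(1/n).
import itertools
import numpy as np

def jsr_bounds(family, n):
    lower = upper = 0.0
    for idx in itertools.product(range(len(family)), repeat=n):
        P = np.eye(family[0].shape[0])
        for i in idx:
            P = P @ family[i]
        lower = max(lower, max(abs(np.linalg.eigvals(P)))**(1/n))
        upper = max(upper, np.linalg.norm(P, 2)**(1/n))
    return lower, upper

A = np.array([[0.5, 0.3], [0.0, 0.2]])
B = np.array([[0.2, 0.0], [0.4, 0.5]])
for n in (1, 2, 4, 6):
    print(n, jsr_bounds([A, B], n))   # the bounds tighten as n grows
\end{verbatim}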
\section{Parameter dependent subdivision schemes: matrix approach}\label{sec:parameters}
There are several examples of subdivision schemes that include a
tension parameter. We call them parameter dependent schemes. Often
the tension parameter is level dependent and shows a certain
asymptotic behavior which implies the asymptotic behavior of the
corresponding non-stationary scheme, i.e. $\displaystyle \lim_{r
\rightarrow \infty} \mathbf{a}^{(r)}=\mathbf{a}$. In this case,
although the set $\{A_{\varepsilon}^{(r)}, \ \varepsilon \in E, \
r \in \NN \}$ is not compact, the convergence and regularity of
the scheme $\{S_{\mathbf{a}^{(r)}}, \ r \in \NN\}$ can be analyzed
via the joint spectral radius approach in \cite{US2014}. The
results in \cite{US2014} are still applicable even if the
parameter values vary in some compact interval. Indeed, the
existence of the limit points for the sequence
$\{\mathbf{a}^{(r)}, \ r \in \NN\}$ of the subdivision masks is
guaranteed, though these limit points are not always explicitly
known.
\begin{definition} \label{def:set_of_limit_points} For the mask sequence
$\{\mathbf{a}^{(r)},\ \ r \in \NN\}$ we denote by $\cA$ the set of
its limit points, i.e. the set of masks $\mathbf{a}$ such that
$$ \mathbf{a}\in \cA,\quad \hbox{if} \quad \exists \{r_n,\ n\in \NN \}\ \ \mbox{such that}\
\ \lim_{n\rightarrow\infty}\mathbf{a}^{(r_n)}=\mathbf{a}\,.
$$
\end{definition}
In this section, we show that the joint spectral radius approach
can be effectively applied even if the limit points of
$\{\mathbf{a}^{(r)}, \ r \in \NN\}$ cannot be determined
explicitly, but the masks $\mathbf{a}^{(r)}$ depend linearly on
the parameter $\omega^{(r)} \in [\omega_1, \omega_2]$,
$-\infty<\omega_1<\omega_2<\infty$.
Well-known and celebrated examples of parameter dependent stationary subdivision schemes with linear
dependence on the parameter are e.g. the univariate four point scheme \cite{DynGreLev87} with the
symbol
$$
a(z, \omega)=\frac{(1+z)^2}{2}+\omega(-z^{-2}+1+z^2-z^{4}), \quad
\omega \in \left[0,\frac{1}{16} \right], \quad z \in \CC \setminus
\{0\},
$$
which is a parameter perturbation of the linear B-spline. Also the bivariate butterfly scheme \cite{DLG90} with
the symbol
$$
a(z_1,z_2, \omega)=\frac{1}{2}(1+z_1)(1+z_2)(1+z_1z_2)+\omega \,c(z_1,z_2), \quad z_1,z_2 \in \CC \setminus \{0\},
$$
with
\begin{eqnarray} \label{def:c_butterfly}
c(z_1,z_2)&=&z_1^{-1}z_2^{-2} + z_2^2 z_1^{-1} + z_1^{-2}z_2^{-1}
+ z_1^2z_2^{-1} - 2z_1^2z_2^3 - 2z_1^3z_2^2 + z_1^2z_2^4 +
z_1^4z_2^2 + z_1^3z_2^4 \notag \\&+& z_1^4z_2^3 - 2z_1^{-1} +
z_1^{-2} - 2z_1^2 - 2z_2^{-1} + z_1^3 + z_2^{-2} - 2z_2^2 + z_2^3
\end{eqnarray}
is a parameter perturbation of the linear three-directional box spline.
Other examples of such parameter dependent schemes are those with symbols that are convex combinations
$$
\omega\, a(z_1,z_2)+(1-\omega)\,b(z_1,z_2)=b(z_1,z_2)+\omega\, (a(z_1,z_2)-b(z_1,z_2)), \quad \omega \in [0,1], \quad z_1,z_2 \in \CC \setminus \{0\},
$$
of two (or more) symbols of stationary schemes, see e.g. \cite{GoPi2000, CoGoPiSa08, ContiCMS2010, CharinaContiJetterZimm11}. Also known are their non-stationary univariate counterparts with level dependent
parameters $\omega^{(r)}$ (see \cite{BCR2007_1, BCR2007_2, CoGoPi07, ContiRomani10}, for example)
$$
\frac{(1+z)^2}{2}+\omega^{(r)}(-z^{-2}+1+z^2-z^{4}), \quad r \in
\NN, \quad \lim_{r \rightarrow \infty}\omega^{(r)}=\omega \in \RR,
$$
and
$$
\omega^{(r)}\, a(z)+(1-\omega^{(r)})\,b(z), \quad r \in \NN, \quad
\omega^{(r)} \in [0,1].
$$
Note that the use of the level dependent parameters sometimes
allows us to enhance the properties of the existing stationary
schemes (e.g. with respect to their smoothness, size of their
support or reproduction and generation properties
\cite{CharinaContiJetterZimm11,ContiCMS2010,CoGoPi07,ContiRomani10}).
\smallskip
\noindent In all schemes considered above, the subdivision rules
depend either on the same fixed parameter $\omega=\omega^{(r)}
\in [\omega_1,\omega_2]$, independent of $r$, or the parameters
$\omega^{(r)} \in [\omega_1,\omega_2]$ are chosen in such a way
that either $\displaystyle \lim_{r \rightarrow \infty}
\omega^{(r)}=\omega \in [\omega_1,\omega_2]$ or the corresponding
non-stationary scheme is asymptotically equivalent to some known
stationary scheme. In this section, we provide a matrix method for
analyzing regularity of more general subdivision schemes: we
consider the level dependent masks ${\mathbf a}(\omega^{(r)})=\{
\ra_{\alpha}(\omega^{(r)}), \ \alpha \in \ZZ^s\}$, $r \in \NN$,
and require that $\omega^{(r)} \in [\omega_1,\omega_2]$ without
any further assumptions on the behavior of the sequence
$\{\omega^{(r)}, \ r \in \NN\}$. We assume, however, that each of
the masks depends linearly on the corresponding parameter
$\omega^{(r)}$.
The level dependent masks $\{{\mathbf a}(\omega^{(r)}), \ r \in \NN\}$ define the corresponding square matrices which we denote by
\begin{equation} \label{A_eps_r}
A_{\varepsilon, \omega^{(r)}}=\left(
\ra_{M\alpha+\varepsilon-\beta}(\omega^{(r)})\right)_{\alpha,\beta
\in K}, \quad \varepsilon \in E,
\end{equation}
and the level dependent symbols
$$
a(z,\omega^{(r)})= \sum_{\alpha \in \ZZ^s} \ra_{\alpha}(\omega^{(r)}) z^{\alpha},
\quad z^{\alpha}=z_1^{\alpha_1} \cdots z_s^{\alpha_s},\quad z\in \left(\CC \setminus \{0\} \right)^s.
$$
The assumption that each mask ${\mathbf a}(\omega^{(r)})$ depends linearly on $\omega^{(r)}$, leads to
the following immediate, but crucial result.
\begin{proposition} \label{prop:linearity} Let $\ell \in \NN_0$ and $-\infty<\omega_1<\omega_2 < \infty$.
If every symbol of the sequence $\{ a(z,\omega^{(r)}),\ r \in
\NN\}$ depends linearly on the parameter $\omega^{(r)} \in
[\omega_1,\omega_2]$ and satisfies sum rules of order $\ell+1$,
then every matrix in ${\cal T}=\{ A_{\varepsilon,
\omega^{(r)}}|_{V_\ell}, \ \omega^{(r)} \in [\omega_1,\omega_2],
\ \varepsilon \in E, \ r \in \NN\}$ is a convex combination of the
matrices with $\omega^{(r)} \in \{\omega_1, \omega_2\}$
$$
A_{\varepsilon, \omega^{(r)}}|_{V_\ell}=(1-t^{(r)})A_{\varepsilon,\omega_1}|_{V_\ell}+t^{(r)}
A_{\varepsilon, \omega_2}|_{V_\ell},\quad t^{(r)} \in [0,1].
$$
\end{proposition}
\begin{proof}
Let $r \in \NN$. We first write $\omega^{(r)}$ as a convex
combination of $\omega_1$ and $\omega_2$, i.e.
$$
\omega^{(r)}=(1-t^{(r)}) \omega_1 +t^{(r)} \omega_2 \quad \hbox{with}\quad t^{(r)} \in [0,1]\,.
$$
Note that all entries of the matrices $A_{\varepsilon,
\omega^{(r)}}$, $\varepsilon \in E$, are the coefficients of the
corresponding mask ${\mathbf a}(\omega^{(r)})$. Since the mask
coefficients depend linearly on the parameter $\omega^{(r)}$, so
do the matrices $A_{\varepsilon, \omega^{(r)}}$, and hence, the
corresponding linear operators. Therefore, the restrictions of
these operators to their common invariant subspace $V_\ell$ also
depend linearly on this parameter.
\end{proof}
\noindent In the level independent case, i.e.
$\omega^{(r)}=\omega$ for all $r \in \NN$, the use of the joint
spectral radius approach for studying the convergence and
regularity of the corresponding stationary subdivision schemes is
well understood. To show how this approach can be applied in
our non-stationary setting, we first need to prove the following
auxiliary result.
\begin{proposition}\label{prop:Nicola} Let $\ell \in \NN_0$ and
\begin{equation}\label{def:A_omega}
{\cal T}= \{ A_{\varepsilon,\omega^{(r)}}|_{V_\ell},\ \omega^{(r)}
\in [\omega_1,\omega_2],\ \varepsilon \in E, \ r \in \NN\}
\end{equation}
be the infinite family of square matrices. If the JSR of the
family ${\cal T}_{\omega_1, \omega_2}=\{
A_{\varepsilon,\omega_1}|_{V_\ell},
A_{\varepsilon,\omega_2}|_{V_\ell}, \ \varepsilon \in E \}$
satisfies $\rho({\cal T}_{\omega_1, \omega_2}) = \gamma,$ then
$\rho \left( {\cal T} \right) = \gamma$.
\end{proposition}
\begin{proof}
First of all observe that $\rho({\cal T}_{\omega_1, \omega_2}) = \gamma$ implies, for any $\delta > 0$,
the existence of a $\delta$-extremal norm (see e.g. \cite{Els95,GZ01}), i.e. an operator
norm $\| \cdot \|_\delta$ such that
\begin{equation}
\| A_{\varepsilon,\omega_1}|_{V_\ell} \|_\delta \le \gamma +
\delta, \qquad \| A_{\varepsilon,\omega_2}|_{V_\ell} \|_\delta \le
\gamma + \delta. \label{eq:deltaextremality0}
\end{equation}
Then, by Proposition \ref{prop:linearity}, estimates in \eqref{eq:deltaextremality0} and
subadditivity of
matrix operator norms, we get
\[
\| A_{\varepsilon,\omega^{(r)}}|_{V_\ell} \|_\delta = \|
(1-t^{(r)}) A_{\varepsilon,\omega_1}|_{V_\ell} + t^{(r)}
A_{\varepsilon,\omega_2}|_{V_\ell}\|_\delta \le (1-t^{(r)}) \|
A_{\varepsilon,\omega_1}|_{V_\ell} \|_\delta + t^{(r)} \|
A_{\varepsilon,\omega_2}|_{V_\ell} \|_\delta = \gamma + \delta,
\quad t^{(r)} \in [0,1].
\]
This, due to the arbitrary choice of $\delta > 0$, implies that
$\rho \left( {\cal T} \right) = \gamma$, which concludes the proof.
\end{proof}
\begin{remark} \label{rem:JSRsubsets} $(i)$
Note that, if the family ${\cal T}_{\omega_1, \omega_2}$ is
non-defective, i.e. there exists an extremal norm $\| \cdot \|$
such that $ \max_{\varepsilon \in E} \left\{ \|
A_{\varepsilon,\omega_1}|_{V_\ell} \|, \ \|
A_{\varepsilon,\omega_2}|_{V_{\ell}} \| \right\} = \gamma, $ then
${\cal T}$ is also non-defective and all products of degree $d$ of
the associated product semigroup have maximal growth bounded by
$\gamma^d$. Note also that for any family of matrices ${\cal B}$,
${\cal B}\subset {\cal T}$, it follows that
$\rho \left( {\cal B}\right) \le \gamma$.
$(ii)$ Moreover, if a family $\mathcal T$ is irreducible, i.e.,
its matrices do not have a common nontrivial invariant subspace, then
$\mathcal T$ is non-defective. Therefore, the case of
non-defective families is quite general.
\end{remark}
We are now ready to formulate the main result of this section.
\begin{theorem}\label{teo:JSRregularity_r} Let $\ell \in \NN_0$. Assume that every
symbol of the sequence $\{a(z,\omega^{(r)}),\ r \in \NN\}$
depends linearly on $\omega^{(r)} \in [\omega_1,\omega_2]$ and
satisfies sum rules of order $\ell+1$. Then the non-stationary
scheme $\{S_{{\mathbf a}(\omega^{(r)})}, \ r \in \NN\}$ is
$C^\ell$-convergent, if the JSR of the family ${\cal
T}_{\omega_1, \omega_2} =\{ A_{\varepsilon,\omega_1}|_{V_{\ell}},
A_{\varepsilon,\omega_2}|_{V_{\ell}}, \ \varepsilon \in E \}$
satisfies
\begin{equation}
\rho({\cal T}_{\omega_1, \omega_2}) = \gamma < |m|^{-\ell}.
\label{eq:jsrA}
\end{equation}
Moreover the H\"{o}lder exponent of its limit functions is $\alpha \ge -\log_{|m|} \gamma$.
\end{theorem}
\begin{proof}
Since the parameters $\{\omega^{(r)},\ r \in \NN\}$ vary in the
compact interval $[\omega_1, \omega_2]$, there exists a set of
limit points (finite or infinite) for the sequence $\{{\mathbf
a}(\omega^{(r)}),\ r \in \NN\}$ of subdivision masks. Let us
denote this set by $\cal A$ and the corresponding set of square
matrices by ${\cal
T}_\cA=\{A_\varepsilon=(\ra_{M\alpha+\varepsilon-\beta})_{\alpha,\beta
\in K}, \ \varepsilon \in E, \ {\mathbf a} \in \cA\}$. Obviously,
${\cal T}_\cA \subset {\cal T}$ with ${\cal T}$ as in
\eqref{def:A_omega}. Since by Proposition \ref{prop:Nicola} and
Remark \ref{rem:JSRsubsets}, $\rho \left( {\cal T}_\cA\right)\le
\gamma$, the claim follows by \cite[Corollary 1]{US2014}.
\end{proof}
\begin{remark} $(i)$ Note that, due to $\rho({\cal T}_\cA) \le \gamma$,
Theorem \ref{teo:JSRregularity_r}
yields a smaller H\"older exponent $\alpha$ than what could be
obtained by \cite[Corollary 1]{US2014}. For example, consider the
binary subdivision scheme with the symbols
\begin{eqnarray*}
a(z,\omega^{(r)})&=&z^{-1}\frac{(1+z)^2}{2}, \quad \quad r \in \{1,\ldots,L\}, \quad L \in \NN, \\
a(z,\omega^{(r)})&=&z^{-1}\frac{(1+z)^2}{2}+\frac{1}{16}(-z^{-3}+z^{-1}+z-z^{3}),\quad r\ge L+1,
\quad z \in \CC \setminus \{0\}.
\end{eqnarray*}
To apply Theorem \ref{teo:JSRregularity_r}, we can view the
corresponding masks as being linearly dependent on parameters
$\omega^{(r)} \in [0,\frac{1}{16}]$. The corresponding family
${\cal T}_{0,\frac{1}{16}}=\{ A_{\varepsilon,0}|_{V_1},
A_{\varepsilon,\frac{1}{16}}|_{V_1}, \ \varepsilon \in \{0,1\}
\}$ consists of the four matrices
\begin{equation} \label{def:Matrices_4_point}
A_{0,\omega}|_{V_1}= \left( \begin{array}{rrrr}
-\omega & -2\omega+\frac{1}{2} & -\omega & 0 \\
0 & 2\omega & 2\omega & 0 \\
0 & -\omega & -2\omega+\frac{1}{2} & -\omega \\
0 & 0 & 2\omega & 2\omega
\end{array} \right), \quad
A_{1,\omega}|_{V_1}= \left( \begin{array}{rrrr}
2\omega & 2\omega & 0 & 0 \\
-\omega & -2\omega+\frac{1}{2} & -\omega & 0 \\
0 & 2\omega & 2\omega & 0 \\
0 & -\omega & -2\omega+\frac{1}{2} & -\omega
\end{array} \right)
\end{equation}
for $\omega \in\{0,\frac{1}{16}\}$. Due to
$$
\max_{\varepsilon \in \{0,1\}} \left\{ \|A_{\varepsilon,0}|_{V_1}\|_\infty,
\|A_{\varepsilon,\frac{1}{16}}|_{V_1}\|_\infty \right\}= \max
_{\varepsilon \in \{0,1\}} \left\{ \rho(A_{\varepsilon,0}|_{V_1}),
\rho(A_{\varepsilon,\frac{1}{16}}|_{V_1})\right\}=\frac{1}{2},
$$
we get $\rho({\cal T}_{0, \frac{1}{16}}) =\frac12$ and, thus, the
corresponding scheme is convergent and has the H\"{o}lder exponent
$\alpha \ge 1$. On the other hand, the set $\cA$ of limit points
of the masks can be explicitly determined in this case and
consists of the mask of the four point scheme. Thus, by
\cite[Corollary 1]{US2014}, the H\"older exponent is actually
$\alpha \ge 2$.
$(ii)$ The regularity estimate given in Theorem
\ref{teo:JSRregularity_r} can be improved, if the actual range of
the parameters $\omega^{(r)}$, $r \ge L$, for some $L \in \NN$,
is a subinterval of $[\omega_1,\omega_2]$, see section
\ref{subsec:examples}.
$(iii)$ Note that the result of Theorem \ref{teo:JSRregularity_r}
is directly extendable to the case when the matrix family $\cal T$
depends linearly on a convex polyhedral set
$\Omega=\overline{\hbox{co}\{\bomega_1, \ldots, \bomega_L \}}$ of
parameters $\bomega^{(r)} \in \Omega \subset \RR^p$, $r \in \NN$,
such that
\[
\bomega^{(r)} = \sum\limits_{j=1}^{L} t^{(r)}_j \bomega_j \quad
\mbox{with} \quad t^{(r)}_j \in [0,1] \quad \mbox{and} \ \sum
\limits_{j=1}^{L} t^{(r)}_j=1.
\]
This is the case, for example, when we define the level and parameter dependent symbols
$$
a(z,\bomega^{(r)})=\sum \limits_{j=1}^{p} \omega_j^{(r)}a_j(z),
\quad \bomega^{(r)}=(\omega_1^{(r)}, \ldots, \omega_p^{(r)})^T \in
\Omega, \quad r \in \NN.
$$
\end{remark}
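The computations in part $(i)$ of the remark above are easily reproduced
numerically. The Python sketch below builds the matrices
\eqref{def:Matrices_4_point}, confirms that the maxima of their
$\infty$-norms and spectral radii over the family equal $\frac12$, and
illustrates the convexity statement of Proposition \ref{prop:linearity}.
\begin{verbatim}
# The matrices A_{0,w}|V1 and A_{1,w}|V1 of the four point example.
import numpy as np

def A0(w):
    return np.array([[-w, -2*w + 0.5, -w,          0],
                     [ 0,  2*w,        2*w,        0],
                     [ 0, -w,         -2*w + 0.5, -w],
                     [ 0,  0,          2*w,        2*w]])

def A1(w):
    return np.array([[ 2*w,  2*w,        0,          0],
                     [-w,   -2*w + 0.5, -w,          0],
                     [ 0,    2*w,        2*w,        0],
                     [ 0,   -w,         -2*w + 0.5, -w]])

for w in (0.0, 1/16):
    for M in (A0(w), A1(w)):
        print(w, np.linalg.norm(M, np.inf), max(abs(np.linalg.eigvals(M))))
# both maxima over the four matrices equal 0.5

# linear dependence on w: each A_{eps,w} is a convex combination
w = 0.03; t = w/(1/16)
print(np.allclose(A0(w), (1 - t)*A0(0.0) + t*A0(1/16)))   # True
\end{verbatim}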
\subsection{Examples}\label{subsec:examples}
In this section we present two univariate examples of level and parameter dependent
schemes, whose constructions are based on the four point and six point Dubuc-Deslauriers schemes.
In particular, in Example \ref{ex:4_point},
the non-stationary scheme is constructed in such a way that the support of its limit function
$$
\phi_1=\lim_{r \rightarrow \infty} S_{{\mathbf a}(\omega^{(r)})} \ldots S_{{\mathbf a}(\omega^{(1)})} \delta, \quad
\delta(\alpha)=\left\{ \begin{array}{cc} 1, & \alpha=0, \\ 0, & \hbox{otherwise} \end{array}\right., \quad
\alpha \in \ZZ^s,
$$
is smaller than the support of the four point scheme, but its regularity is comparable.
In Example \ref{ex:4_point_6_point}, every non-stationary mask is a convex combination of the four point and six point Dubuc-Deslauriers schemes. We show how the regularity of
the corresponding non-stationary scheme depends on the range of the corresponding parameters $\{\omega^{(r)}, \ r \in \NN\}$.
Both examples illustrate the importance of the
dependency on several parameters $\{\omega^{(r)}, \ r \in \NN\}$ instead of one $\omega \in \RR$.
\begin{example} \label{ex:4_point}
We consider the univariate, binary scheme with the symbols
\begin{eqnarray*}
a(z,\omega^{(r)})&=&z^{-1}\frac{(1+z)^2}{2}, \quad \quad r \in \{1,2\}, \\
a(z,\omega^{(r)})&=&z^{-1}\frac{(1+z)^2}{2}+\omega^{(r)}(-z^{-3}+z^{-1}+z-z^{3}),\quad r\ge 3,
\quad z \in \CC \setminus \{0\},
\end{eqnarray*}
where $\omega^{(r)}$ are chosen at random from the interval
$[\frac{3}{64},\frac{1}{16}]$. The corresponding family
$$
{\cal T}_{0,\frac{1}{16}}=\{ A_{\varepsilon,0}|_{V_1}, A_{\varepsilon,\frac{1}{16}}|_{V_1},
\ \varepsilon \in \{0,1\} \}
$$
consists of the same four matrices as in \eqref{def:Matrices_4_point},
and at first glance the H\"older exponent of this scheme is $\alpha \ge 1$.
On the other hand, we can view this scheme as the one with the corresponding matrix
family
$$
{\cal T}_{\frac{3}{64},\frac{1}{16}}=\{ A_{\varepsilon,\frac{3}{64}}|_{V_1}, A_{\varepsilon,\frac{1}{16}}|_{V_1}, \
\varepsilon \in \{0,1\} \},
$$
applied to different starting data. Then we get $\rho({\cal T}_{\frac{3}{64},\frac{1}{16}})=3/8$ and,
by Theorem \ref{teo:JSRregularity_r}, the H\"older exponent is actually $\alpha \ge
- \hbox{log}_2\frac{3}{8} \approx 1.4150$.
The size of the support of $\phi_1$ can be determined using the technique from \cite{CohenDyn}
and is given by
$$
\left[\sum_{k=0}^\infty 2^{-k-1} \ell(k), \sum_{k=0}^\infty 2^{-k-1} r(k) \right]=\left[-\frac{3}{2}, \frac{3}{2} \right]
$$
with
\begin{eqnarray*}
\ell(k)&=&-1, \quad r(k)=1, \quad k=0,1, \\
\ell(k)&=&-3, \quad r(k)=3, \quad k \ge 2.
\end{eqnarray*}
Recall that the support of the basic limit function of the four
point scheme is $[-3,3]$.
\end{example}
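The value $\rho({\cal T}_{\frac{3}{64},\frac{1}{16}})=3/8$ quoted in the
example above can be checked numerically with the brute-force JSR bounds
discussed earlier; the Python sketch below reuses the matrices
\eqref{def:Matrices_4_point}. The lower bound already equals $3/8$ for
products of length one, and the upper bound should approach this value as
the product length grows.
\begin{verbatim}
# Bounds for rho(T_{3/64,1/16}) from the example above.
import itertools
import numpy as np

def A0(w):
    return np.array([[-w, -2*w + 0.5, -w, 0], [0, 2*w, 2*w, 0],
                     [0, -w, -2*w + 0.5, -w], [0, 0, 2*w, 2*w]])
def A1(w):
    return np.array([[2*w, 2*w, 0, 0], [-w, -2*w + 0.5, -w, 0],
                     [0, 2*w, 2*w, 0], [0, -w, -2*w + 0.5, -w]])

fam = [A0(3/64), A1(3/64), A0(1/16), A1(1/16)]
for n in (1, 2, 4, 6):
    lo = up = 0.0
    for idx in itertools.product(range(4), repeat=n):
        P = np.eye(4)
        for i in idx:
            P = P @ fam[i]
        lo = max(lo, max(abs(np.linalg.eigvals(P)))**(1/n))
        up = max(up, np.linalg.norm(P, 2)**(1/n))
    print(n, lo, up)   # lo = 0.375 already at n = 1
\end{verbatim}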
\medskip
\begin{example} \label{ex:4_point_6_point} In this example we consider the univariate non-stationary
scheme with symbols
$$
a(z,\omega^{(r)})=\omega^{(r)} a(z) +(1-\omega^{(r)}) b(z), \quad
\omega^{(r)} \in [0,1], \quad z \in \CC \setminus \{0\},
$$
where
$$
a(z)=-\frac{z^{-3}(z+1)^4}{16}\left(z^2-4z+1\right)
$$
is the symbol of the four point scheme and
$$
b(z)=\frac{z^{-5}(z+1)^6}{256}\left(3z^4-18z^3+38z^2-18z+3\right)
$$
is the symbol of the $C^2$-convergent quintic Dubuc-Deslauriers subdivision
scheme \cite{DesDubuc}. By \cite{Floater}, the H\"older exponent of $S_{\bf b}$ is $\alpha\approx 2.8301$. To determine the regularity of this
level and parameter dependent scheme we consider the matrix set
$$
{\cal T}_{0,1}=\{ A_{\varepsilon,0}|_{V_2}, A_{\varepsilon,1}|_{V_2},
\ \varepsilon \in \{0,1\} \}
$$
with the four matrices
\begin{eqnarray*} \small
A_{0,\omega}|_{V_2}&=&\frac{1}{256} \left(\begin{array}{rrrrrrr}
3-3\omega& 0& 0& 0& 0& 0& 0\\
-7-9\omega& -9+9\omega& 3-3\omega& 0& 0& 0& 0\\
45+3\omega& 45+3\omega& -7-9\omega& -9+9\omega& 3-3\omega& 0& 0\\
-9+9\omega& -7-9\omega& 45+3\omega& 45+3\omega& -7-9\omega& -9+9\omega& 3-3\omega\\
0& 3-3\omega& -9+9\omega& -7-9\omega& 45+3\omega& 45+3\omega& -7-9\omega\\
0& 0& 0& 3-3\omega& -9+9\omega& -7-9\omega& 45+3\omega\\
0& 0& 0& 0& 0& 3-3\omega&-9+9\omega
\end{array} \right), \\
A_{1,\omega}|_{V_2}&=& \frac{1}{256} \left(\begin{array}{rrrrrrr}
-9+9\omega& 3-3\omega& 0& 0& 0& 0& 0\\
45+3\omega& -7-9\omega& -9+9\omega& 3-3\omega& 0& 0& 0\\
-7-9\omega& 45+3\omega& 45+3\omega& -7-9\omega& -9+9\omega &3-3\omega& 0\\
3-3\omega& -9+9\omega& -7-9\omega& 45+3\omega& 45+3\omega& -7-9\omega&-9+9\omega\\
0& 0& 3-3\omega& -9+9\omega& -7-9\omega& 45+3\omega& 45+3\omega\\
0& 0& 0& 0& 3-3\omega& -9+9\omega& -7-9\omega\\
0& 0& 0& 0& 0& 0&3-3\omega \end{array} \right)
\end{eqnarray*}
for $\omega \in \{0,1\}$. In this case, the regularity of the non-stationary scheme $\{S_{{\bf a}(\omega^{(r)})}, \ r \in \NN\}$ coincides
with the regularity of the four point scheme. For $\omega^{(r)} \in [a,1]$, $a>0$, the scheme $\{S_{{\bf a}(\omega^{(r)})}, \ r \in \NN\}$
is $C^2$-convergent. And, for $\omega^{(r)} \in [0,a]$, $a <1$, extensive numerical experiments show that
the JSR of the family ${\cal T}_{0,a}$ is determined by the subfamily $\{ A_{\varepsilon,a}|_{V_2}, \
\varepsilon \in \{0,1\} \}$. For example, for $a=\frac{1}{2}$, we obtain
$\rho( {\cal T}_{0,\frac{1}{2}}) \approx 0.2078$ and, thus, the
corresponding H\"older exponent is $\alpha \ge 2.2662$.
\normalsize
\end{example}
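A numerical illustration: the $7\times 7$ matrices above have a banded
structure, with all entries drawn from the stencil
$\frac{1}{256}\left(3-3\omega,\,-9+9\omega,\,-7-9\omega,\,45+3\omega,\,
45+3\omega,\,-7-9\omega,\,-9+9\omega,\,3-3\omega\right)$ placed along
shifted diagonals. The Python sketch below rebuilds them from this stencil
and evaluates brute-force JSR bounds for ${\cal T}_{0,\frac12}$; the value
$\rho\approx 0.2078$ reported above should lie between the printed bounds.
\begin{verbatim}
# Rebuild A_{eps,w}|V2 from the banded stencil and bound rho(T_{0,1/2}).
import itertools
import numpy as np

def stencil(w):
    return np.array([3 - 3*w, -9 + 9*w, -7 - 9*w, 45 + 3*w,
                     45 + 3*w, -7 - 9*w, -9 + 9*w, 3 - 3*w]) / 256

def A(eps, w):
    v, off = stencil(w), 7 - eps     # A0[i,j]=v[7-2i+j], A1[i,j]=v[6-2i+j]
    M = np.zeros((7, 7))
    for i in range(7):
        for j in range(7):
            k = off - 2*i + j
            if 0 <= k < 8:
                M[i, j] = v[k]
    return M

assert A(1, 0.5)[0, 0] == (-9 + 9*0.5)/256    # matches the printed entry

fam = [A(0, 0.0), A(1, 0.0), A(0, 0.5), A(1, 0.5)]
for n in (1, 2, 4):
    lo = up = 0.0
    for idx in itertools.product(range(4), repeat=n):
        P = np.eye(7)
        for i in idx:
            P = P @ fam[i]
        lo = max(lo, max(abs(np.linalg.eigvals(P)))**(1/n))
        up = max(up, np.linalg.norm(P, 2)**(1/n))
    print(n, lo, up)   # 0.2078 should lie in [lo, up]
\end{verbatim}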
\section{Limitations of generation properties of non-stationary schemes}\label{sec:limitations}
It is known that certain level dependent (non-stationary) subdivision schemes are capable of
generating/reproducing certain spaces of exponential polynomials, see e.g.
\cite{CharinaContiRomani13, ContiRomani11}. In this section, we are interested in answering
the question: How big is the class of functions that can be generated/reproduced by such schemes?
\smallskip \noindent
More precisely, we show that, already in the univariate setting,
the zero sets of the Fourier transforms of the limit functions
$$
\phi_k=\lim_{r \rightarrow \infty} S_{{\mathbf a}^{(r)}} \ldots S_{{\mathbf a}^{(k)}} \delta, \quad
\delta(\alpha)=\left\{ \begin{array}{cc} 1, & \alpha=0, \\ 0, & \hbox{otherwise} \end{array}\right., \quad
\alpha \in \ZZ^s,
$$
of such schemes are unions of the sets
$$
\Gamma_r=\{\omega \in \CC \ : \ a^{(r)}(e^{-i 2 \pi M^{-r}\omega})=0\}, \quad r \ge k,
$$
and that the sets $\Gamma_r$ are such that $\Gamma_r+ M^r \ZZ =\Gamma_r$.
Thus, some elementary functions cannot be generated by non-stationary schemes,
see Example \ref{ex:bad}.
Also the requirement that
$$
\hat{\phi}_k(\omega)=\int_{\RR} \phi_k(x) e^{-i 2 \pi x \omega} dx, \quad \omega \in \CC, \quad k \in \NN,
$$
is an entire function limits the properties of the functions that can be generated by non-stationary subdivision schemes.
\begin{proposition} \label{prop:limitations}
Let $\{ \phi_k,\ k\in \NN\}$ be continuous functions of compact support satisfying
$$
\phi_k(x)=\sum_{\alpha \in \ZZ} \ra^{(k)}(\alpha) \phi_{k+1}(Mx-\alpha), \quad k \in
\NN, \quad x \in \RR.
$$
Then
$$
\{\omega \in \CC \ : \hat{\phi_k}(\omega)=0\}=\bigcup_{r \ge k} \Gamma_r,
$$
such that the sets $\Gamma_r$ satisfy
$$
\Gamma_r+ M^r \ZZ =\Gamma_r.
$$
\end{proposition}
\begin{proof} Let $k \in \NN$. By Paley-Wiener theorem, the Fourier transform $\hat{\phi}_k$ defined
on $\RR$ has an analytic extension
$$
\hat{\phi}_k(\omega)=\int_{\RR} \phi_k(x) e^{-i 2 \pi x \omega} dx, \quad \omega \in \CC,
$$
to the whole complex plane $\CC$ and $ \hat{\phi}_k$ is an entire function. By
Weierstrass theorem \cite{Conway}, every entire function can be represented by a
product involving its zeroes. Define the sets
$$
\Gamma_r:=\{\omega \in \CC \ : \ a^{(r)}(e^{-i 2 \pi M^{-r}\omega})=0\}, \quad r \in \NN.
$$
Let $z_{r,1}, \ldots, z_{r,N}$ be the zeros of the polynomials
$a^{(r)}(e^{-i 2 \pi M^{-r}\omega})$, counting their
multiplicities. Then
$$
\Gamma_r= i M^r \bigcup_{\ell=1}^N \hbox{Ln}(z_{r,\ell}),
$$
where, by the properties of the complex logarithm, each of the sets $i M^r
\hbox{Ln}(z_{r,\ell})$ consists of sequences of complex numbers
and is $M^r$-periodic. Thus, each of the sets $\Gamma_r$ satisfies
$$
\Gamma_r+M^r \ZZ=\Gamma_r, \quad r \in \NN.
$$
The definition of $\hat{\phi}_k$ as an infinite product of the
trigonometric polynomials $a^{(r)}(e^{-i 2 \pi M^{-r}\omega})$, $r
\ge k$, yields the claim.
\end{proof}
\smallskip
The following examples illustrate the result of Proposition \ref{prop:limitations}.
\begin{example}
The basic limit function of the simplest stationary scheme is given by
$\phi_1=\chi_{[0,1)}$. Its Fourier transform
is
$$
\hat{\phi}_1(\omega)=\frac{1-e^{-i2\pi \omega}}{i 2 \pi \omega}, \quad \hbox{and} \quad \{\omega
\in \CC \ : \ \hat{\phi}_1(\omega)=0\}= \ZZ \setminus \{0\}.
$$
The mask symbol $a(z)=1+z$ has a single zero at $z=-1$, i.e.
$e^{-i2 \pi 2^{-r}\omega}=-1$ for $\omega \in 2^r \{ \frac{1}{2}+ k
\ : \ k\in \ZZ \}$, $r \in \NN_0$. In other words, $\Gamma_1=\{1+
2 k \ : \ k\in \ZZ \}$ and $\Gamma_r=2 \Gamma_{r-1}$ for $r \ge
2$. Therefore,
$$
\{\omega \in \CC \ : \ \hat{\phi}_1(\omega)=0\}=\bigcup_{r \in \NN} \Gamma_r.
$$
\end{example}
\begin{example} The first basic limit function of the simplest non-stationary scheme is
given by $\phi_1(x)=\chi_{[0,1)}(x) e^{\lambda x}$,
$\lambda \in \CC$. Its Fourier transform
is
$$
\hat{\phi}_1(\omega)=\frac{e^{-i 2\pi \omega+\lambda}-1}{-i2 \pi \omega+\lambda}, \quad \omega
\in \CC, \quad \hbox{and} \quad \{\omega \in \CC \ : \ \hat{\phi}_1(\omega)=0\}=-\frac{i
\lambda}{2\pi} +\ZZ \setminus\{0\} .
$$
The mask symbol $a^{(r)}(z)=1+e^{\lambda 2^{-r}}z$ has a single
zero at $z=-e^{-\lambda 2^{-r}}$, i.e. $e^{-i2 \pi
2^{-r}\omega}=-e^{-\lambda 2^{-r}}$ for $\omega \in
-\frac{i \lambda}{2\pi}+ 2^r \{ \frac{1}{2}+ k \ : \ k\in \ZZ \}$, $r \in \NN$.
Note that $\Gamma_1= -\frac{i \lambda}{2\pi}+\{1+2 k \ : \ k\in \ZZ \}$ and
$$
\bigcup_{r \in \NN} 2^r\{\frac{1}{2}+ k \ : \ k\in \ZZ \}=\ZZ \setminus\{0\}.
$$
Therefore,
$$
\{\omega \in \CC \ : \ \hat{\phi}_1(\omega)=0\}=\bigcup_{r \in \NN} \Gamma_r.
$$
\end{example}
In the next example we identify a compactly supported function
that cannot be generated by any non-stationary subdivision scheme.
\begin{example} \label{ex:bad}
Let us consider the compactly supported function
$$
f(x)=\chi_{[-1,1]}(x) \frac{2}{\sqrt{1-x^2}}, \quad x \in \RR.
$$
It cannot be a limit of any non-stationary subdivision scheme. Indeed, its Fourier transform
\begin{equation} \label{eq:FT_alternative}
\hat{f}(\omega)=\int_{\RR} f(x) e^{-i x \omega} dx = 2\pi J_0(\omega), \quad \omega \in \CC,
\end{equation}
is, up to a constant factor, the Bessel function $J_0$ of the first kind, which is entire, but has only positive
zeros.
A lower bound for its zeros $j_{0,s}$, $s \in \NN$, is given by
$j_{0,s} > \left(s-\frac{1}{4}\right)\pi$, see \cite{McCann}. Thus, Proposition
\ref{prop:limitations}
implies the claim.
\end{example}
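\smallskip
The properties of the zeros used in this example are easy to confirm
numerically, e.g. with SciPy; the sketch below lists the first zeros
$j_{0,s}$ and checks McCann's lower bound.
\begin{verbatim}
# First positive zeros of J_0 and McCann's lower bound.
import numpy as np
from scipy.special import jn_zeros

zeros = jn_zeros(0, 6)                  # j_{0,1}, ..., j_{0,6}
bound = (np.arange(1, 7) - 0.25)*np.pi
print(zeros)                            # 2.4048, 5.5201, 8.6537, ...
print(np.all(zeros > bound))            # True: j_{0,s} > (s - 1/4)*pi
\end{verbatim}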
{\bf Acknowledgements:} Vladimir Protasov was sponsored by RFBR grants $13-01-00642$, $14-01-00332$
and by a grant of the Dynasty Foundation.
\section{Introduction}
\label{intro}
Discrete integrable systems have attracted a lot of attention in recent years \cite{HJN}. One of the reasons for this comes from physics: many physical models include discreteness at a fundamental level. Another reason for the increased interest in discrete integrable systems comes from mathematics: in several instances it turns out that discrete integrable systems are arguably richer, or more fundamental than continuous (i.e. non-discrete) ones. Prime examples are (i) Integrable partial difference equations (P$\Delta$Es), where a single P$\Delta$E yields (through the use of vertex operators) an entire infinite hierarchy of integrable partial differential equations \cite{WC}; (ii) Discrete Painlev\'{e} equations, where the Sakai classification is much richer in the discrete case than in the continuous one \cite{S}; (iii) Darboux polynomials, where in the discrete case unique factorization of the so-called co-factors can be used (which does not exist in the continuous (additive) case)\footnote{cf \cite{CEMOQT,CEMOQTV} for the discrete case, and \cite{Goriely} for a very nice introduction to the continuous case. }.
\medskip \noindent In this Letter we will be interested in autonomous integrable ordinary difference equations (or maps). Much interest was generated by the discovery of the 18-parameter integrable QRT map in $\mathbb{R}^2$ (\cite{JJD,QRT1,QRT2}). For some other examples in higher dimensions, cf. e.g. Chapter 6 of \cite{HJN}.
\medskip \noindent A special aspect of the maps we consider in this Letter is that they are an example of integrable maps arising as discretisations of ordinary differential equations (ODEs). Earlier examples of this arose using the Kahan discretisation of first-order quadratic ODEs (cf. \cite{Kahan}, \cite{HK}, \cite{PPS}, \cite{CMMOQ2} and references therein), or by the discretisation of ODEs of order 1 and arbitrary degree using polarisation methods \cite{CMMOQ1}, and by the methods in \cite{HQ} for the discretisation of ODEs of order $o$ and degree $o+1$, cf. also \cite{M,McL}.
\medskip \noindent In section 3 we present a novel integrable 8-parameter map in $\mathbb{R}^4$. This map generalizes a 5-parameter map in $\mathbb{R}^4$ found earlier in \cite{CMMOQ1} to the inhomogeneous case, and because the derivation of the novel map may be somewhat mysterious if the reader is unfamiliar with the previous map and its derivation, we summarise the latter in section 2.
\section{What went before}
\label{sec:1}
In \cite{CMMOQ1} Celledoni, McLachlan, McLaren, Owren and Quispel introduced a novel integrable map in $\mathbb{R}^4$. It was constructed as follows.
\medskip \noindent The authors considered the homogeneous quartic Hamiltonian
\begin{equation}\label{homQham}
H = aq^4 + 4bq^3p + 6cq^2p^2 + 4dqp^3 + ep^4,
\end{equation}
where $a,b,c,d$ and $e$ are 5 arbitrary parameters.
\medskip \noindent This gave rise to an ordinary differential equation (ODE)
\begin{equation}\label{ode3}
\frac{d}{dt} \left( \begin{array}{c} q \\ p \end{array} \right)
= \left(\begin{array}{rr} 0 & 1 \\ -1 & 0 \end{array} \right) \nabla H = f_3 \left( \begin{array}{c} q \\ p \end{array} \right) ,
\end{equation}
where the cubic vector field $f_3$ is defined by
\begin{equation}\label{f3}
f_3 \left( \begin{array}{c} q \\ p \end{array} \right) =
\left( \begin{array}{c} 4bq^3 +12cq^2p + 12dqp^2 + 4ep^3 \\
-4aq^3 - 12bq^2p - 12 cqp^2 - 4dp^3 \end{array} \right).
\end{equation}
Defining $x:=\left( \begin{array}{c} q \\ p \end{array} \right)$, and introducing the timestep $h$, the vector field (\ref{ode3}) was then discretized:
\begin{equation}\label{map3}
\frac{x_{n+2} - x_n}{2h} = F_3(x_n,x_{n+1},x_{n+2}),
\end{equation}
where $F_3$ was defined using polarization, i.e.
\begin{equation}\label{poldef}
F_3(x_n,x_{n+1},x_{n+2}) := \frac{1}{6} \frac{\partial}{\partial \alpha_1} \frac{\partial}{\partial \alpha_2} \frac{\partial}{\partial \alpha_3} f_3(\alpha_1x_n + \alpha_2x_{n+1} + \alpha_3x_{n+2}) |_{\alpha=0}
\end{equation}
It is not difficult to check that the multilinear function $F_3$ defined by (\ref{poldef}) is equivalent to
\begin{eqnarray}
F_3(x_n,x_{n+1},x_{n+2}) &:=& \frac{9}{2}f_3 \left( \frac{x_n + x_{n+1} + x_{n+2}}{3} \right) - \frac{4}{3} f_3\left( \frac{x_n + x_{n+1}}{2} \right)\nonumber \\
&-& \frac{4}{3} f_3\left( \frac{x_n + x_{n+2}}{2} \right) - \frac{4}{3} f_3\left( \frac{x_{n+1} + x_{n+2}}{2} \right) \nonumber \\
&+& \frac{1}{6} f_3\left( x_n \right) + \frac{1}{6} f_3\left( x_{n+1} \right) + \frac{1}{6} f_3\left( x_{n+2} \right),
\end{eqnarray}
cf \cite{CMMOQ1} and page 110 of reference \cite{Greenberg}.
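\medskip \noindent The equivalence of (\ref{poldef}) and the averaged
expression above is easy to test numerically. The following Python sketch
(with randomly chosen parameters) checks the identity $F_3(x,x,x)=f_3(x)$
and the linearity of $F_3$ in each argument, both of which follow from the
polarization construction; it is an illustrative check only.
\begin{verbatim}
# Numerical check of the polarised vector field F_3 (random parameters).
import numpy as np
rng = np.random.default_rng(0)
a, b, c, d, e = rng.normal(size=5)

def f3(x):
    q, p = x
    return np.array([ 4*b*q**3 + 12*c*q**2*p + 12*d*q*p**2 + 4*e*p**3,
                     -4*a*q**3 - 12*b*q**2*p - 12*c*q*p**2 - 4*d*p**3])

def F3(x0, x1, x2):
    return (4.5*f3((x0 + x1 + x2)/3)
            - (4/3)*(f3((x0 + x1)/2) + f3((x0 + x2)/2) + f3((x1 + x2)/2))
            + (f3(x0) + f3(x1) + f3(x2))/6)

x, y, z = rng.normal(size=(3, 2))
print(np.allclose(F3(x, x, x), f3(x)))                         # True
print(np.allclose(F3(x, y, 2*z), 2*F3(x, y, z)))               # True
print(np.allclose(F3(x, y, y + z), F3(x, y, y) + F3(x, y, z))) # True
\end{verbatim}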
\medskip \noindent By construction, the rhs of (5) is linear in $x_{n+2}$ and $x_n$ for cubic vector fields, i.e. (\ref{map3}) represents a birational map (see \cite{CMMOQ1}), and it was shown that this map possesses two functionally independent 2-integrals (recall that a 2-integral of a map $\phi$ is defined to be an integral of $\phi \circ \phi$):
\begin{eqnarray}
I(q_n,p_n,q_{n+1},p_{n+1}) &=& q_n p_{n+1} - p_n q_{n+1} \\
I(q_{n+1},p_{n+1},q_{n+2},p_{n+2}) &=& q_{n+1} p_{n+2} - p_{n+1} q_{n+2} ,
\end{eqnarray}
where $q_{n+2}$ and $p_{n+2}$ should be eliminated from (8) using (\ref{map3}).
\medskip \noindent Note that (7) above does not depend on the parameters $a,b,c,d,e$ (in contrast to (8), which will depend on the parameters once expressed in $q_n,q_{n+1},p_n,p_{n+1}$).
\medskip \noindent The map (\ref{map3}) also preserves the measure
\begin{equation}\label{meas4}
\frac{dq_n \wedge dp_n \wedge dq_{n+1} \wedge dp_{n+1}}{1 + 4h^2 \Delta_1},
\end{equation}
where\footnote{Erratum: In eqs (4.1) of \cite{CMMOQ1}, $1-4h^2\Delta$ should read $1+4h^2\Delta$. Their $\Delta$ is our $\Delta_1$.}
\begin{eqnarray}\label{Delta4}
\Delta_1 &=& \left| \begin{array}{cc} c & d \\ d & e \end{array} \right| p_n^2p_{n+1}^2 +
\left| \begin{array}{cc} b & c \\ d & e \end{array} \right| (p_n^2p_{n+1}q_{n+1} + p_nq_nq_{n+1}^2) \\
&+& \left| \begin{array}{cc} b & c \\ c & d \end{array} \right| (p_n^2q_{n+1}^2 + q_n^2p_{n+1}^2) +
\left| \begin{array}{cc} a & b \\ c & d \end{array} \right| (q_n^2p_{n+1}q_{n+1} + p_nq_nq_{n+1}^2) \nonumber \\
&+&\left| \begin{array}{cc} a & c \\ c & e \end{array} \right| p_nq_np_{n+1}q_{n+1} +
\left| \begin{array}{cc} a & b \\ b & c \end{array} \right| q_n^2q_{n+1}^2. \nonumber
\end{eqnarray}
\medskip \noindent Finally, the map (\ref{map3}) is invariant under the scaling symmetry group
\begin{equation}\label{scale}
x_n \rightarrow \lambda^{(-1)^n}x_n.
\end{equation}
\section{A novel 8-parameter integrable map in $\mathbb{R}^4$}
We now generalise the treatment of section 2 to the non-homogeneous Hamiltonian
\begin{equation}\label{inhomQham}
H = aq^4 + 4bq^3p + 6cq^2p^2 + 4dqp^3 + ep^4 + \frac{1}{2}\rho q^2 + \sigma qp + \frac{1}{2} \tau p^2,
\end{equation}
where $a,b,c,d,e,\rho,\sigma$ and $\tau$ are 8 arbitrary parameters.
\medskip \noindent This gives rise to an ODE
\begin{equation}\label{ode31}
\frac{d}{dt} \left( \begin{array}{c} q \\ p \end{array} \right)
=\left(\begin{array}{rr} 0 & 1 \\ -1 & 0 \end{array} \right) \nabla H = f_3 \left( \begin{array}{c} q \\ p \end{array} \right) + f_1 \left( \begin{array}{c} q \\ p \end{array} \right),
\end{equation}
where the cubic part of the vector field, $f_3$, is again given by (\ref{f3}), whereas the linear part $f_1$ is given by
\begin{equation}\label{f1}
f_1 \left( \begin{array}{c} q \\ p \end{array} \right) =
\left( \begin{array}{c} \sigma q + \tau p \\
-\rho q - \sigma p \end{array} \right).
\end{equation}
We now discretise the cubic and the linear parts of the vector field in different ways:
\begin{equation}\label{map31}
\frac{x_{n+2} - x_n}{2h} = F_3(x_n,x_{n+1},x_{n+2}) + F_1(x_n,x_{n+2}),
\end{equation}
where $F_3$ is again defined by (5), but $F_1$ is defined by a kind of midpoint rule:
\begin{equation}\label{F1}
F_1(x_n,x_{n+2}) = f_1 \left( \frac{x_n + x_{n+2}}{2} \right).
\end{equation}
It follows that equation (\ref{map31}) again defines a birational map, and, importantly, it again preserves the scaling symmetry (\ref{scale}). (Indeed the latter is the primary reason we use the discretization (\ref{F1})).
\medskip \noindent Two questions thus remain:
\begin{enumerate}
\item Does eq (\ref{map31}) preserve two 2-integrals?
\item Is eq (\ref{map31}) measure-preserving?
\end{enumerate}
The answer to both these questions will turn out to be positive.
\medskip \noindent We actually had numerical evidence several years ago that the map (\ref{map31}) (or at least a special case of it) was integrable. However it has taken us until now to actually find closed-form expressions for the preserved measure and for the 2-integrals.
\medskip \noindent A first clue to the identity of a possible 2-integral of (\ref{map31}) came when we were carrying out experimental mathematical computations (in the sense of \cite{BB}) to find ``discrete Darboux polynomials'' for the map (\ref{map31}) (cf. \cite{CEMOQT} and \cite{CEMOQTV}). This gave a hint that a possible quadratic 2-integral $I(q_n,p_n,q_{n+1},p_{n+1})$ generalising (7), might exist for the map (\ref{map31}).
\medskip \noindent However, the mathematical complexity of the general 8-parameter map (\ref{map31}) was too great to carry out these computations for a completely general quadratic 2-integral in four variables with all 8 parameters symbolic.
\medskip \noindent Our process of discovery thus proceeded in two steps:
\medskip \noindent Step 1: Taking all parameters $a,b,c,d,e,\rho,\sigma,\tau$ and $h$ to be random integers, and assuming the 2-integral was an arbitrary quadratic function in four variables (with coefficients to be determined), we computed the 2-integral for a large number of random choices of the integer parameters. In each case, it turned out that the same six coefficients in the quadratic function were zero, i.e. the 2-integral always had the form
\begin{equation}\label{2int1}
I(q_n,p_n,q_{n+1},p_{n+1}) = A q_nq_{n+1} + B p_np_{n+1} + C q_np_{n+1} + D p_nq_{n+1},
\end{equation}
where $A,B,C$, and $D$ depended on the parameters in a way as yet to be determined.
\medskip \noindent Step 2: Now taking all parameters $a,b,c,d,e,\rho,\sigma,\tau$ and $h$ symbolic, and assuming the 2-integral $I$ had the special quadratic form (\ref{2int1}), we found
\begin{eqnarray}\label{2int2}
I(q_n,p_n,q_{n+1},p_{n+1}) = (h\sigma +1)p_nq_{n+1} &+& (h\sigma -1) q_n p_{n+1} \\
& &+ h\rho q_nq_{n+1} + h\tau p_np_{n+1}. \nonumber
\end{eqnarray}
\medskip \noindent Notes:
\begin{enumerate}
\item The 2-integral (\ref{2int2}) is invariant under the scaling symmetry group (\ref{scale})\footnote{The scaling symmetry (\ref{scale}) is an essential ingredient in our proof of the Theorem in the current Letter that the map (\ref{map31}) is integrable (as well as in our proof in \cite{CMMOQ1} that the map (\ref{map3}) is integrable).}.
\item In the continuum limit $h \rightarrow 0$, and using eq (\ref{ode31}), the integral $I(q_n,p_n,q_{n+1},p_{n+1})/h \rightarrow 4H(q,p)$.
\item Like equation (7), equation (\ref{2int2}) does not explicitly depend on the parameters $a,b,c,d,e$.
\item Note that in the study of dynamical systems one often has the choice of studying a given phenomenon either for a single system containing as many free parameters as possible, or for multiple systems in so-called normal form (obtained by suitable transformations of the variables), containing fewer parameters. Both in our earlier works on the QRT map \cite{QRT1,QRT2} and on the 5-parameter map in $\mathbb{R}^4$ \cite{CMMOQ1}, as well as in the current Letter, we have chosen the former option.
\end{enumerate}
\medskip \noindent Once we had the putative equation (\ref{2int2}), it was not difficult to verify using symbolic computation that $I(q_n,p_n,q_{n+1},p_{n+1})$ and $I(q_{n+1},p_{n+1},q_{n+2},p_{n+2})$ are indeed functionally independent 2-integrals of (\ref{map31}).
\medskip \noindent The map (\ref{map31}) preserves the measure
\begin{equation}\label{meas42}
\frac{dq_n \wedge dp_n \wedge dq_{n+1} \wedge dp_{n+1}}{1 + 4h^2 (\Delta_1 + \Delta_2)},
\end{equation}
where the quartic function $\Delta_1$ is given by (\ref{Delta4}) and the quadratic function $\Delta_2$ is given by
\begin{eqnarray}\label{Delta42}
\Delta_2 &=& \frac{1}{2} \left( \left| \begin{array}{cc} a & b \\ \sigma & \tau \end{array} \right| + \left| \begin{array}{cc} c & b \\ \sigma & \rho \end{array} \right| \right) q_nq_{n+1} +
\frac{1}{2} \left( \left| \begin{array}{cc} c & d \\ \sigma & \tau \end{array} \right| + \left| \begin{array}{cc} e & d \\ \sigma & \rho \end{array} \right| \right) p_np_{n+1} \\
&+& \frac{1}{2} \left( \left| \begin{array}{cc} b & c \\ \sigma & \tau \end{array} \right| + \left| \begin{array}{cc} d & c \\ \sigma & \rho \end{array} \right| \right) (p_nq_{n+1} + q_np_{n+1}) + \frac{1}{4} \left| \begin{array}{cc} \rho & \sigma \\ \sigma & \tau \end{array} \right| .\nonumber
\end{eqnarray}
\medskip \noindent Finally, the map (\ref{map31}) is again invariant under the scaling symmetry group (\ref{scale}).
\medskip \noindent {\bf Theorem} The birational map defined by (\ref{map31}) is integrable.
\medskip \noindent {\it Proof} The proof of integrability is identical to the proof in \cite{CMMOQ1}. The second iterate of the map defined by (\ref{map31}) has a one-dimensional measure-preserving symmetry group. The map thus descends to a measure-preserving map on the three-dimensional quotient. The two integrals of the second iterate of the map are invariant under the symmetry and therefore also pass to the quotient. This yields a three-dimensional measure-preserving map with two integrals, which is thus integrable.
\medskip \noindent {\bf Acknowledgements}
\medskip \noindent We are grateful to R. McLachlan for early discussions on scaling symmetry, and to E. Celledoni, B. Owren and B. Tapley for many discussions on discrete Darboux polynomials.
\section{INTRODUCTION}
Secret sharing schemes are an important tool used in security protocols.\@ Originally motivated by the problem of secure key storage by Shamir \cite{shamir1979}, secret sharing schemes have found numerous other applications in cryptography and distributed computing.\@ Threshold cryptography \cite{desmedt1992shared}, access control \cite{naor1998access}, secure multi-party computation \cite{ben1988completeness} \cite{chaum1988multiparty} \cite{cramer2000general}, attribute based encryption \cite{goyal2006attribute} \cite{bethencourt2007ciphertext}, generalized oblivious transfer \cite{tassa2011generalized} \cite{shankar2008alternative}, visual cryptography \cite{naor1995visual} $etc.,$ are significant areas of development using secret sharing techniques.
\vskip 2mm
In secret sharing, the secret is divided among $n$ participants in such a way that only designated subsets of participants can recover the secret, but any subset of participants which is not a designated set cannot recover the secret.\@ A set of participants who can recover the secret is called an \textit{authorized set}, and a set of participants which is not an authorized set is called an \textit{unauthorized set} or \textit{forbidden set}.
The following are the two fundamental requirements of any secret sharing scheme.
\begin{itemize}
\item \textbf{Recoverability:} Any authorized subset of participants should be able to recover the secret by pooling their shares.
\item \textbf{Privacy:} Any unauthorized subset of participants should not learn any information about the secret.
\end{itemize}
Let $\mathcal{P}=\{P_i|i=1,2,\ldots,n\}$ be the set of participants and let the secret be $K$.\@ The set of all secrets is represented by $\mathcal{K}$.\@ The set of all shares $S_1,S_2,\ldots,S_n$ is represented by $\mathcal{S}$. The subsets of participants are partitioned into two classes.
\begin{enumerate}
\item The class of authorized sets $\Gamma$ is called the \textit{access structure.}
\item The class of unauthorized sets $\Gamma^c=2^\mathcal{P}\setminus \Gamma$.
\end{enumerate}
Let us assume that $\mathcal{P},\mathcal{K},\mathcal{S}$ are all finite sets and there is a probability distribution on $\mathcal{K}$ and $\mathcal{S}$. We use $H(\mathcal{K})$ and $H(\mathcal{S})$ to denote the entropy of $\mathcal{K}$ and $\mathcal{S}$ respectively.
\vskip 2mm
In a secret sharing scheme there is a special participant called the \textit{Dealer} $\mathcal{D} \notin \mathcal{P}$, who is trusted by everyone. The dealer chooses a secret $K \in \mathcal{K}$ and generates the shares $S_1, S_2,\ldots, S_n$ corresponding to the secret. The shares are then distributed privately to the participants through a secure channel.
\vskip 2mm
In the secret reconstruction phase, participants of an access set pool their shares together and recover the secret. Alternatively participants could give their shares to a combiner to perform the computation for them. If an unauthorized set of participants pool their shares they cannot recover the secret. Thus a secret sharing scheme for the access structure $\Gamma$ is the collection of two algorithms:\\
\textbf{Distribution Algorithm}: This algorithm has to be run in a secure environment by a trustworthy party called the Dealer. The algorithm uses the function $f$ which, for a given secret $K \in \mathcal{K}$ and a participant $P_i \in \mathcal{P}$, assigns a set of shares from the set $\mathcal{S}$, that is, $f(K,P_i)=S_i \subseteq \mathcal{S}$ for $i=1,\ldots,n$.$$f: \mathcal{K} \times \mathcal{P} \rightarrow 2^\mathcal{S}$$
\textbf{Recovery Algorithm}: This algorithm has to be executed collectively by cooperating participants or by the combiner, which can be considered as a process embedded in a tamper-proof module to which all participants have access. The combiner outputs the generated result via secure channels to the cooperating participants. The combiner applies the function $$g:\mathcal{S}^t \rightarrow \mathcal{K}$$ to calculate the secret. For any authorized set of participants, $g(S_1,\ldots,S_t)=K$ if $\{P_1,\ldots,P_t\} \in \Gamma$. If the group of participants belongs to an unauthorized set, the combiner fails to compute the secret.
\vskip 2mm
A secret sharing scheme is called perfect if for all sets $B$, $B \subset \mathcal{P}$ and $B \notin \Gamma$, the participants in $B$ pooling their shares together cannot reduce their uncertainty about $K$. That is, $H(K)=H(K\mid\mathcal{S}_B)$, where $\mathcal{S}_B$ denotes the collection of shares of the participants in $B$. It is known that for a perfect secret sharing scheme $H(S_i) \geq H(K)$. If $H(S_i) = H(K)$ then the secret sharing scheme is called ideal.
\vskip 2mm
An authorized set $\Gamma_1 \in \Gamma$ is \textit{minimal} if $\Gamma_2 \subseteq \Gamma_1$ and $\Gamma_2 \in \Gamma$ imply that $\Gamma_2=\Gamma_1$. Only \textit{monotone access structures} are considered for the construction of the schemes, in which $\Gamma_1 \in \Gamma$ and $\Gamma_1 \subset \Gamma_2$ imply $\Gamma_2 \in \Gamma$. The collection of minimal access sets, denoted $\Gamma_0$, uniquely determines the access structure: the access structure is the closure of the set of minimal access sets, represented by $\Gamma_{min}(\Gamma_0)$.
\vskip 2mm
For an access structure $\Gamma$, the family of unauthorized sets $\Gamma^c=2^\mathcal{P} \setminus \Gamma$ has the property that, given an unauthorized set $B \in \Gamma^c$, any subset $C \subseteq B$ is also an unauthorized set. An immediate consequence of this property is that for any access structure $\Gamma$, the set of unauthorized sets can be uniquely determined by its \textit{maximal sets}. We use $\Gamma^c_{max}$ to denote the representation of $\Gamma^c$ in terms of maximal sets.
\vskip 2mm
If $\Gamma$ consists of all sets $B$ with $|B| \ge t$, then the access structure corresponds to a $(t,n)$ threshold scheme. In the $(t,n)$ threshold scheme, $t$ or more participants can reconstruct the secret. Section 2 gives an insight into threshold secret sharing schemes. Secret sharing schemes realizing general access structures are discussed in Section 3. Section 4 explores the various multi-secret sharing techniques in the literature. Section 5 is the summary, where different schemes are compared for their merits and demerits. Section 6 is the conclusion.
\section{THRESHOLD SECRET SHARING }
\noindent Development of secret sharing schemes started as a solution to the problem of safeguarding cryptographic keys by distributing the key among $n$ participants, $t$ or more of whom can recover it by pooling their shares. Thus the authorized sets are the subsets of participants containing at least $t$ members. Such a scheme is denoted as a $(t,n)$ \textit{threshold scheme}.
\vskip 2mm
The notion of a threshold secret sharing scheme was independently proposed by Shamir \cite{shamir1979} and Blakley \cite{blakley1979} in 1979. Since then much work has been put into the investigation of such schemes. Linear constructions were the most efficient and widely used. A threshold secret sharing scheme is called \textit{perfect} if fewer than $t$ shares give no information about the secret. Shamir's scheme is perfect while Blakley's scheme is non-perfect. Both the Blakley and the Shamir constructions realize $t$-out-of-$n$ shared secret schemes. However, their constructions are fundamentally different.
\vskip 2mm
Shamir's scheme is based on polynomial interpolation over a finite field. It uses the fact that a unique polynomial of degree $t-1$ is determined by $t$ data points. A polynomial $f(x)=\sum_{i=0}^{t-1}a_ix^i$, with $a_0$ set to the secret value and the coefficients $a_1$ to $a_{t-1}$ assigned random values in the field, is used for secret sharing. The value $f(i)$ is given to user $i$ as the secret share. When $t$ out of $n$ users come together they can reconstruct the polynomial using Lagrange interpolation and hence obtain the secret.
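\vskip 2mm
As an illustration, the following is a minimal Python sketch of Shamir's $(t,n)$ scheme over a prime field; the prime and the parameter values are arbitrary toy choices, not recommendations.
\begin{verbatim}
import random

P = 2**31 - 1   # a prime; toy choice only

def make_shares(secret, t, n):
    # f(x) = secret + a_1 x + ... + a_{t-1} x^{t-1} over GF(P)
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 shares suffice
\end{verbatim}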
\vskip 2mm
Blakley's secret sharing scheme has a different approach and is based on hyperplane geometry. To implement a $(t,n)$ threshold scheme, each of the $n$ users is given a hyper-plane equation in a $t$ dimensional space over a finite field such that each hyperplane passes through a certain point.\@ The intersection point of these hyperplanes is the secret.\@ When $t$ users come together, they can solve the system of equations to find the secret.
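\vskip 2mm
A toy Python sketch of Blakley's idea is given below; the parameters are illustrative, and with random coefficients the $t \times t$ system is invertible with overwhelming probability, which the sketch simply assumes.
\begin{verbatim}
import random

P = 2**31 - 1   # prime modulus, toy choice
t = 3           # threshold = dimension of the space

# the secret is the first coordinate of a point in GF(P)^t
point = [123456789] + [random.randrange(P) for _ in range(t - 1)]

def make_share():
    # a random hyperplane a.x = b passing through the secret point
    a = [random.randrange(1, P) for _ in range(t)]
    return a, sum(ai * xi for ai, xi in zip(a, point)) % P

def solve_mod(A, b):
    # Gauss-Jordan elimination over GF(P)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(t):
        piv = next(r for r in range(col, t) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], P - 2, P)
        M[col] = [v * inv % P for v in M[col]]
        for r in range(t):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(vr - f * vc) % P for vr, vc in zip(M[r], M[col])]
    return [M[r][t] for r in range(t)]

shares = [make_share() for _ in range(t)]  # t users pool their hyperplanes
A, b = [s[0] for s in shares], [s[1] for s in shares]
assert solve_mod(A, b)[0] == 123456789     # the intersection reveals it
\end{verbatim}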
\vskip 2mm
McEliece and Sarwate \cite{mceliece1981sharing} made the observation that Shamir's scheme is closely related to Reed-Solomon codes \cite{reed1960polynomial}. The error correcting capability of these codes can be translated into desirable secret sharing properties. Karnin {\it et al.,} \cite{karnin1983} realized threshold schemes using linear codes. Massey \cite{massey1993minimal} introduced the concept of minimal codewords and proved that the access structure of a secret sharing scheme based on an $[n,k]$ linear code is determined by the minimal codewords of the dual code.
\vskip 2mm
Number theoretic concepts have also been introduced for threshold secret sharing schemes.\@ The Mignotte scheme \cite{mignotte1983} is based on modular arithmetic and the \textit{Chinese Remainder Theorem (CRT)}. A special sequence of integers called a Mignotte sequence is used here, and the shares are generated using this sequence.\@ The secret is reconstructed by solving the set of congruence equations using the CRT. Mignotte's scheme is not perfect.\@ A perfect scheme based on the CRT was proposed by Asmuth and Bloom \cite{asmuth1983}, which also uses a special sequence of pairwise coprime positive integers.
\vskip 2mm
Kothari \cite{kothari1985generalized} gave a generalized threshold scheme. A secret is represented by a scalar, and a linear variety is chosen to conceal the secret.\@ A linear functional known to all trustees is chosen and fixed in the beginning, which is used to reveal the secret from the linear variety.\@ The $n$ shadows are hyperplanes containing the linear variety. Moreover, the hyperplanes are chosen to satisfy the condition that the intersection of fewer than $t$ of them results in a linear variety which projects uniformly over the scalar field by the linear functional used for revealing the secret. The number $t$ is called the threshold. Thus as more shadows are known, more information is revealed about the linear variety used to keep the secret; however, no information is revealed until the threshold number of shadows is known. He showed that Blakley's scheme and Karnin's scheme are equivalent and provided algorithms to convert one scheme to another. He also stated that these schemes are all specializations of the generalized linear threshold scheme. Brickell \cite{brickell1989some} also gave a generalized notion of the Shamir and Blakley schemes using vector spaces.
\vskip 2mm
Researchers have investigated $(t, n)$ threshold secret sharing extensively.\@ Threshold schemes that can handle more complex access structures have been described by Simmons \cite{simmons1992}, such as weighted threshold schemes, hierarchical schemes, compartmental secret sharing, $etc$.\@ They have found a wide range of useful applications. Sreekumar {\it et al.,} \cite{sreekumar2009secret} in 2009 developed threshold schemes based on visual cryptography.
\section{GENERALIZED SECRET SHARING }
\noindent In the previous section, we considered schemes in which any $t$ of the $n$ participants are able to determine the secret.\@ A more general situation is to specify exactly which subsets of participants should be able to determine the secret and which subsets should not. In this section we give secret sharing constructions based on generalized access structures. Shamir \cite{shamir1979} discussed the case of sharing a secret between the executives of a company such that the secret can be recovered by any three executives, or by any executive and any vice-president, or by the president alone.\@ This is an example of a \textit{hierarchical secret sharing} scheme. Shamir's solution for this case is based on an ordinary $(3,m)$ threshold secret sharing scheme. Thus, the president receives three shares, each vice-president receives two shares and finally every executive receives a single share.
\vskip 2mm
The above idea leads to the so-called weighted (or multiple-share based) threshold secret sharing schemes. In these schemes, the shares are pairwise disjoint sets of shares provided by an ordinary threshold secret sharing scheme. Benaloh and Leichter have proven in \cite{benaloh1990generalized} that there are access structures that cannot be realized using such schemes.
\vskip 2mm
Several researchers addressed this problem and introduced secret sharing schemes realizing general access structures.\@ The most efficient and easy-to-implement scheme was the construction of Ito, Saito and Nishizeki \cite{ito1989secret}. It is based on Shamir's scheme. The idea is to distribute shares to each authorized set of participants using a multiple assignment scheme, where more than one share is assigned to a participant if he belongs to more than one minimal authorized subset.
\vskip 2mm
A simple scheme is mentioned by Beimel \cite{beimel2011secret}, in which the secret is a bit $S \in \{0,1\}$ and $\Gamma$ is any monotone access structure.\@ The dealer shares the secret independently for each authorized set
$B \in \Gamma$, where $B=\{P_{i1},\ldots,P_{il}\}$.\@ The dealer chooses $l-1$ random bits $r_{1},\ldots,r_{l-1}$,
computes $r_{l}= S \oplus r_{1} \oplus r_{2} \oplus \cdots \oplus r_{l-1}$, and distributes share $r_{j}$ to $P_{ij}$.\@ For each set $B \in \Gamma$ the random bits are chosen independently, and each set in $\Gamma$ can reconstruct the secret by computing the exclusive-or of the bits given to the set.\@ An unauthorized set cannot do so.
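\vskip 2mm
This construction is easy to implement; a short Python sketch for byte-string secrets follows, with a toy minimal access structure.
\begin{verbatim}
import secrets

def share_for_set(secret_bytes, authorized_set):
    # one fresh additive (XOR) sharing per minimal authorized set
    shares, acc = {}, secret_bytes
    for p in authorized_set[:-1]:
        r = secrets.token_bytes(len(secret_bytes))
        shares[p] = r
        acc = bytes(a ^ b for a, b in zip(acc, r))
    shares[authorized_set[-1]] = acc  # r_l = S xor r_1 xor ... xor r_{l-1}
    return shares

def reconstruct(shares):
    out = bytes(len(next(iter(shares.values()))))
    for r in shares.values():
        out = bytes(a ^ b for a, b in zip(out, r))
    return out

S = b"launch-code"
gamma = [["P1", "P2"], ["P2", "P3", "P4"]]  # toy minimal access structure
dealt = {tuple(B): share_for_set(S, B) for B in gamma}
assert reconstruct(dealt[("P2", "P3", "P4")]) == S
\end{verbatim}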
\vskip 2mm
The disadvantage of the multiple share assignment scheme is that the share size depends on the number of authorized sets that contain $P_{j}$. A simple optimization is to share the secret $S$ only for minimal authorized sets. Still, this scheme is inefficient for access structures in which the number of minimal sets is big (e.g., the $(n/2,n)$ threshold scheme); the share size grows exponentially in this case.
\vskip 2mm
Benaloh and Leichter \cite{benaloh1990generalized} developed a secret sharing scheme for access structures based on monotone formulae.\@ This generalizes the multiple assignment scheme of Ito, Saito and Nishizeki \cite{ito1989secret}. The idea is to translate the monotone access structure into a monotone formula. Each variable in the formula is associated with a trustee in $\mathcal{P}$, and the value of the formula is \textit{true} if and only if the set of variables which are \textit{true} corresponds to a subset of $\mathcal{P}$ which is in the access structure. This formula is then used as a template to describe how a secret is to be divided into shares.
\vskip 2mm
The monotone formula contains only AND and OR operators. To divide a secret $S$ into shares such that $P_{1}$ \textit{or} $P_{2}$ can reconstruct $S$, both $P_{1}$ and $P_{2}$ can simply be given the value $S$. If $P_{1}$ \textit{and} $P_{2}$ together need to reconstruct the secret, then $P_{1}$ can be given a value $S_{1}$ and $P_{2}$ a value $S_{2}$ such that $S=S_{1}+S_{2} \;mod \; m$, $(0 \le S < m)$, where $S_{1}$ is chosen randomly from $\mathbb{Z}_{m}$ and $S_{2}=(S-S_{1}) \; mod \; m$.
\vskip 2mm
More exactly, for a monotone access structure $\Gamma$ on $n$ participants, they defined the set $\mathcal{F_A}$ of formulae on a set of variables $\{v_1,v_2,\ldots,v_n\}$ such that for every $\mathcal{F} \in \mathcal{F_A}$ the interpretation of $\mathcal{F}$ with respect to an assignment of the variables is true if and only if the true variables correspond to a set $A \in \Gamma$. They remarked that such formulae can be used as templates for describing how a secret can be shared with respect to the given access structure. Because the formulae can be expressed using only `$\wedge$' and `$\vee$' operators, it is sufficient to indicate how to ``split'' the secret across these operators, as the following sketch illustrates.
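\vskip 2mm
Below is a minimal Python sketch of this recursive splitting, with the monotone formula represented as a nested tuple tree and secrets shared additively modulo a toy modulus $m$.
\begin{verbatim}
import random

m = 2**31   # toy modulus

def share(node, s, out):
    # node is a participant name, ("or", ...) or ("and", ...)
    if isinstance(node, str):
        out.setdefault(node, []).append(s)
    elif node[0] == "or":
        for child in node[1:]:
            share(child, s, out)            # every branch gets the value
    elif node[0] == "and":
        parts = [random.randrange(m) for _ in node[2:]]
        parts.append((s - sum(parts)) % m)  # parts sum to s modulo m
        for child, part in zip(node[1:], parts):
            share(child, part, out)

formula = ("or", ("and", "P1", "P2"), ("and", "P1", "P3", "P4"))
out = {}
share(formula, 123456, out)
# {P1, P2} reconstruct through the first AND gate:
assert (out["P1"][0] + out["P2"][0]) % m == 123456
\end{verbatim}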
\vskip 2mm
Brickell \cite{brickell1991classification} developed some ideal schemes for generalized access structures using vector spaces. Stinson \cite{stinson1992explication} introduced a monotone circuit construction based on monotone formulae, as well as a construction based on public distribution rules. Benaloh's scheme was generalized by Karchmer and Wigderson \cite{karchmer1993span}, who showed that if an access structure can be described by a small monotone span program then it has an efficient scheme.
\vskip 2mm
Cumulative schemes were first introduced by Ito {\it et al.,} \cite{ito1989secret} and then used by several authors to construct general schemes for arbitrary access structures.\@ Simmons \cite{simmons1992} proposed the cumulative map, and Jackson \cite{jackson1993cumulative} proposed the notion of a cumulative array. Ghodosi {\it et al.,} \cite{ghodosi1998construction} introduced a simpler and more efficient scheme and also introduced capabilities to detect cheaters. Generalized cumulative arrays in secret sharing were introduced by Long \cite{long2006generalised}.
\section{MULTI SECRET SHARING}
\noindent There are several situations in which more than one secret is to be shared among participants. As an example, consider the following situation, described by Simmons \cite{simmons1992}.\@ There is a missile battery, and not all of the missiles have the same launch enable code.\@ The problem is to devise a scheme which will allow any one, or any selected subset, of the launch enable codes to be activated by a selected subset of users.\@ This problem could be trivially solved by realizing different secret sharing schemes, one for each of the launch enable codes, but this solution is clearly unacceptable since each participant would have to remember too much information. What is really needed is an algorithm such that the same pieces of private information can be used to recover different secrets.
\vskip 2mm
One common drawback of all the secret sharing schemes above is that they are one-time schemes. That is, once a qualified group of participants reconstructs the secret $K$ by pooling their shares, both the secret $K$ and all the shares become known to everyone, and there is no further secret. In other words, the share kept by each participant can be used to reconstruct only one secret.
\vskip 2mm
Karnin, Greene and Hellman \cite{karnin1983} in 1983 mentioned a multiple secret sharing scheme where a threshold number of users can reconstruct multiple secrets at the same time. Alternatively, the scheme can be used to share a large secret by splitting it into smaller shares. Franklin {\it et al.,} \cite{franklin1992communication}, in 1992, used a technique in which the polynomial-based single secret sharing is replaced with a scheme where multiple secrets are kept hidden in a single polynomial.\@ They also considered the case of dependent secrets, in which the amount of information distributed to any participant is less than the information distributed with independent schemes. Both schemes are not perfect. They are also one-time threshold schemes;\@ that is, the shares cannot be reused.
\vskip 2mm
Blundo {\it et al.,} \cite{blundo1993efficient}, in 1993, considered the case in which $m$ secrets are shared among participants in a single access structure $\Gamma$ in such a way that any qualified set of participants can reconstruct the secrets,\@ but any unqualified set of participants, even knowing the values of a number of the secrets, might determine only some (possibly no) information on the other secrets. Jackson {\it et al.,} \cite{jackson1994multisecret}, in 1994, considered the situation in which there is a secret $S_k$ associated with each subset $k$ of participants and $S_k$ can be reconstructed by any group of $t$ participants in $k$ $(t\le k)$.\@ That is, each subset of $k$ participants is associated with a secret which is protected by a $(t,k)$-threshold access structure.\@ These schemes are called multi-secret threshold schemes.\@ They came up with a combinatorial model and an optimum threshold multi-secret sharing scheme. An information theoretic model similar to that for threshold schemes has also been proposed for multi-secret sharing.\@ They generalized and classified multi-secret sharing schemes based on the following considerations.
\begin{itemize}
\item{Should all the secrets be available for potential reconstruction during the lifetime of the scheme, or should the access of secrets be further controlled by enabling the reconstruction of a particular secret only after extra information has been broadcast to the participants.}
\item{Whether the scheme can be used just once to enable the secrets or should the scheme be designed to enable multiple use. }
\item{If the scheme is used more than once, then are the reconstructed secret or the shares of the participants known to all other participants, or only to the authorized set?}
\item{The access structure is generalized or threshold in nature.}
\end{itemize}
In 1994 He and Dawson \cite{he1995multisecret} proposed a general implementation of multistage secret sharing. The proposed scheme allows many secrets to be shared in such a way that all secrets can be reconstructed separately. The implementation uses Shamir's threshold scheme and assumes the existence of a one-way function which is hard to invert.\@ The public shift technique is used here. A polynomial $f(x)$ of degree $t-1$ is constructed first, as in Shamir's scheme.\@ The public shift values are $d_i=z_i-y_i$, where $z_i=f(x_i)$ and the $y_i$'s are the secret shares of the participants. The $y_i$'s are then sent to the participants secretly. For sharing the next secret, $h(y_i)$ is used, where $h$ is the one-way function. The secrets are reconstructed in a particular order, stage by stage, and the scheme needs $kn$ public values corresponding to the $k$ secrets. The advantage is that each participant has to keep only one secret element, which is of the same size as any shared secret.\@ In 1995 Harn \cite{harn1995efficient} showed an alternative implementation of multistage secret sharing which requires only $k(n-t)$ public values. The implementation becomes very attractive especially when the threshold value $t$ is very close to the number of participants $n$. In this scheme a polynomial $f(x)$ of degree $n-1$ is evaluated at $n-t$ points and these values are made public. Any $t$ participants can combine their shares with the $n-t$ public shares to interpolate the degree-$(n-1)$ polynomial. Multiple secrets are shared with the help of a one-way function as in the He and Dawson scheme.
\vskip 2mm
The desirable properties of a particular scheme depend on both the requirements of the application and the implementation. Several multi-secret threshold schemes have been developed by the research community. In this survey we only explore some of the important constructions of multi-secret sharing schemes realizing general access structures.
\subsection{Cachin's Scheme}
\noindent A computationally secure secret sharing scheme with general access structure, where all shares are as short as the secret, was proposed by Christian Cachin \cite{cachin1995line} in 1995.\@ The scheme also provides the capability to share multiple secrets and to dynamically add participants on-line without having to redistribute new shares secretly to the current participants. These capabilities are achieved by storing additional authentic information in a publicly accessible place which is called a noticeboard or bulletin board. This information can be broadcast to the participants over a public channel. The protocol gains its security from any one-way function. The construction has the following properties.
\begin{itemize}
\item{All shares must be transmitted and stored secretly once for every participant and are as short as the secret.}
\item{Multiple secrets can be shared with different access structures, requiring only one share per participant for all secrets.}
\item{Provides the ability for the dealer to change the secret after the shares have been distributed.}
\item{The dealer can distribute the shares on-line. When a new participant is added and the access structure is changed, already distributed shares remain valid. Shares must be secretly sent to the new participants, and the publicly readable information has to be changed.}
\end{itemize}
Let the secret $K$ be an element of a finite Abelian group $\mathbf{G}=\langle G,+\rangle$. The basic protocol to share a single secret is as follows.
\begin{enumerate}
\item{The dealer randomly chooses $n$ elements $S_1,S_2,\ldots,S_n$ from $G$ according to the uniform distribution and sends them secretly to the participants over a secure channel.}
\item{For each minimal qualified subset $X \in \varGamma_0$}, the dealer computes $$T_X=K-f(\sum_{x:P_{x} \in X}S_x)$$
and publishes $\mathcal{T}=\{T_X \mid X \in \varGamma_0\}$ on the bulletin board.
\end{enumerate}
In order to recover the secret $K$, a qualified set of participants $Y$ proceeds as follows.
\begin{enumerate}
\item{The members of $Y$ agree on a minimal qualified subset $X \subseteq Y$.}
\item{The members of $X$ add their shares together to get $V_X=\sum_{x:P_x \in X}{S_x}$ and apply the one-way function $f$ to the result.}
\item{They fetch $T_X$ from the bulletin board and compute $K=T_X+f(V_X)$.}
\end{enumerate}
The shares of the participants in $X$ are used in the computation to recover the secret $K$. For the basic scheme, where only one secret is shared, the shares do not have to be kept secret during this computation. However, for sharing multiple secrets the shares and the result of their addition have to be kept secret.
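\vskip 2mm
A minimal Python sketch of the basic protocol is shown below, taking $G$ to be the integers modulo $2^{256}$ and instantiating the one-way function $f$ with SHA-256; both are toy modelling choices.
\begin{verbatim}
import secrets, hashlib

M = 2**256                    # G = Z_{2^256}, a toy choice

def f(x):
    # one-way function G -> G, modelled by SHA-256
    h = hashlib.sha256(x.to_bytes(32, "big")).digest()
    return int.from_bytes(h, "big")

participants = ["P1", "P2", "P3"]
gamma0 = [["P1", "P2"], ["P2", "P3"]]   # minimal qualified subsets

K = 42                                               # the secret
S = {p: secrets.randbelow(M) for p in participants}  # private shares

# bulletin board: T_X = K - f(sum of the shares in X), mod M
T = {tuple(X): (K - f(sum(S[p] for p in X) % M)) % M for X in gamma0}

# recovery by the qualified set {P2, P3}
X = ("P2", "P3")
V = sum(S[p] for p in X) % M
assert (T[X] + f(V)) % M == K
\end{verbatim}
For multiple secrets, the same sketch applies with a family of functions $f_h$, e.g.\ obtained by hashing the secret index together with the share sum, as described below.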
In order to share multiple secrets $K^1,K^2,\ldots,K^m$ with different access structures $\Gamma^1,\Gamma^2,\ldots,\Gamma^m$ among the same set of participants $\mathcal{P}$, the dealer has to distribute the private shares $S_i$ only once, but prepares public information $\mathcal{T}^1,\mathcal{T}^2,\ldots,\mathcal{T}^m$, one for each secret. The single secret sharing scheme cannot be applied directly for multi-secret sharing because it is not secure: if a group of participants $X$ is qualified to recover both $K^1$ and $K^2$, then any group $Y \in \Gamma^1$ can obtain $K^2$ as $$K^2=T_X^{2}+T_Y^{1}+f(V_Y)-T_X^{1}$$
\vskip 2mm
To remedy this deficiency, the function $f$ is replaced by a family $F=\{f_h\}$ of one-way functions, so that different one-way functions are employed for different secrets. The following protocol is used to share $m$ secrets.
\begin{enumerate}
\item{The dealer randomly chooses $n$ elements $S_1,S_2,\ldots,S_n$ from $G$ and sends them securely to the participants as shares.}
\item{For each secret $K^h$ to be shared (with $h=1,\ldots,m$) and for each minimal qualified subset $X \in \Gamma_0^h$, the dealer computes $$T_X^h=K^h-f_h(\sum_{x:P_x\in X}S_x)$$ and publishes $\mathcal{T}^h=\{T_X^h|X \in \Gamma_0^h\}$ on the bulletin board.}
\end{enumerate}
In order to recover some secret $K^h$, a set of participants $Y \in \varGamma^h$ proceeds as follows.
\begin{enumerate}
\item{The members of $Y$ agree on a minimal qualified subset $X \subseteq Y$.}
\item{The members of $X$ add their shares together to get $V_X=\sum_{x:P_x \in X}{S_x}$ and apply the one-way function $f_h$ to the result.}
\item{They fetch $T_X^h$ from the bulletin board and compute $K^h=T_X^h+f_h(V_X)$.}
\end{enumerate}
The scheme does not demand a particular order for the reconstruction of the secrets, as in the He and Dawson scheme.\@ The required family of functions $F$ can easily be obtained from $f$ by setting $f_h(x)=f(h+x)$, when $h$ is represented suitably in $G$. Because a different one-way function $f_h$ is used for each secret, the scheme is computationally secure. But the shares have to be protected from the eyes of other participants during the reconstruction;\@ otherwise, these participants could subsequently recover other secrets they are not allowed to know. Therefore the computation of $f_h(V_X)$ should be done without revealing the secret shares.
\vskip 2mm
In many situations, the participants of a secret sharing scheme do not remain the same during the entire lifetime of the secret. The access structure may also change. In this scheme it is assumed that the changes to the access structure are monotone, that is, participants are only added and qualified subsets remain qualified.\@ The scheme is not suitable for access structures which are non-monotonic. Removing participants is also an issue which is not addressed. In multi-secret sharing, the shares must be kept hidden to carry out the computation. Cachin suggests that the computations involved in recovering $K$ could be hidden from the participants using a distributed evaluation protocol proposed by Goldreich {\it et al.,} \cite{goldreich1987play}. For access to a predetermined number of secrets in a fixed order, a variant of the one-time user authentication protocol of Lamport \cite{lamport1981password} could be used.
The proposed scheme has many practical applications in situations where the participants, the access rules, or the secret itself frequently change. No new shares have to be distributed secretly when new participants are included or participants leave. Such situations often arise in key management, escrowed systems, $etc$.
\subsection{Pinch's Scheme}
\noindent Cachin's scheme does not allow shares to be reused after the secret has been reconstructed. A distributed computation subprotocol using a one-way function was proposed, but it allows the secrets to be reconstructed only in a specified order. Pinch \cite{pinch1996line} in 1996 proposed a modified algorithm, based on the intractability of the Diffie-Hellman problem, in which an arbitrary number of secrets can be reconstructed without having to redistribute new shares.
\vskip 2mm
Let $M$ be a multiplicative group in which the Diffie-Hellman problem is intractable;\@ that is, given elements $g,\;g^x\;\mbox{and}\; g^y$ in $M$, it is computationally infeasible to obtain $g^{xy}$.\@ This implies the intractability of the discrete logarithm problem: if the discrete logarithm problem can be solved, then the Diffie-Hellman problem can also be solved. Suppose $f:M\rightarrow G$ is a one-way function, where $G$ is the additive group modulo some prime $p$ and $M$ is the multiplicative group to the same modulus, which will be cyclic of order $q$. The protocol proceeds as follows:
\begin{enumerate}
\item {The dealer randomly chooses secret shares $S_i$, as integers coprime to $q$, for each participant $P_i$ and sends them through a secure channel. Alternatively, Diffie-Hellman key exchange in the group $M$ can be used to securely exchange the $S_i$.}
\item {For each minimal trusted set $X \in \Gamma$, the dealer randomly chooses $g_X$ to be a generator of $M$, computes $$T_X=K-f\left(g_X^{\prod_{x \in X}S_x}\right)$$} and publishes $(g_X,T_X)$ on the notice board.
\end{enumerate}
In order to recover the secret $K$, a minimal trusted set $X=\{P_1,\ldots,P_t\}$ of participants comes together and follows
the protocol mentioned below.
\begin{enumerate}
\item{Member $P_1$ reads $g_X$ from the notice board and computes $g_X^{S_1}$ and passes the result to $P_2$.}
\item{Each subsequent member $P_i$, for $1<i\le t$, receives $g_X^{S_1\cdots S_{i-1}}$ and raises this value to the power $S_i$, the last member forming $$V_X=g_X^{\prod_{i=1}^{t}S_i}=g_X^{\prod_{x \in X}S_x}$$}
\item {On behalf of the group $X$, the member $P_t$ reads $T_X$ from the notice board and can now reconstruct $K$ as $K=T_X+f(V_X)$.}
\end{enumerate}
If there are multiple secrets $K_i$ to share, it is now possible to use the same one-way function $f$, provided that each entry on the notice board has a fresh value of $g$ attached.\@ There is a variant proposal which avoids the necessity for the first participant to reveal $g_X^{S_1}$ at the first step.\@ The participant $P_1$ generates a random $r$ modulo $q$ and passes the result $g_X^{rS_1}$ to $P_2$. The participant $P_t$ then passes $g_X^{rS_1 \cdots S_{t}}$ back to $P_1$. $P_1$ finds $w$ such that $rw \equiv 1 \; \mbox{mod}\; q$ and raises $g_X^{rS_1 \cdots S_t}$ to the power $w$ to form $$V_X=g_X^{\prod_{i=1}^{t}S_i}=g_X^{\prod_{x \in X}S_x}$$
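\vskip 2mm
A toy Python sketch of the chained-exponentiation recovery follows; the modulus is a small Mersenne prime for illustration only (a real instantiation needs a group in which Diffie-Hellman is hard), and $f$ is again modelled by SHA-256.
\begin{verbatim}
import random, hashlib
from math import gcd

P = 2**127 - 1      # a prime; M = Z_P^* is cyclic of order q = P - 1
q = P - 1

def f(x):
    # one-way function from M into Z_P, modelled by SHA-256
    return int.from_bytes(
        hashlib.sha256(str(x).encode()).digest(), "big") % P

def coprime_share():
    while True:
        s = random.randrange(2, q)
        if gcd(s, q) == 1:
            return s

K, X = 42, ["P1", "P2", "P3"]          # secret and a minimal trusted set
S = {x: coprime_share() for x in X}
g_X = 3                                # toy choice of base for this set

# bulletin board entry for X
T_X = (K - f(pow(g_X, S["P1"] * S["P2"] * S["P3"], P))) % P

# recovery: each member in turn raises the running value to its share
V = g_X
for x in X:
    V = pow(V, S[x], P)
assert (T_X + f(V)) % P == K
\end{verbatim}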
\vskip 2mm
Ghodosi {\it et al.,} \cite{ghodosi1997prevent} showed that Pinch's scheme is vulnerable to cheating, and they modified the scheme to include a cheating prevention technique. In Pinch's scheme a dishonest participant $P_i \in X$ may contribute a fake share $S_i^{'}=\alpha S_i$, where $\alpha$ is a random integer modulo $q$. Since every participant of an authorized set has access to the final result $g_X^{S_1\cdots S_i^\prime\cdots S_t}$, the participant $P_i$ can calculate the value $${\left(g_X^{S_1\cdots S_i^\prime\cdots S_t}\right)}^{\alpha^{-1}}= g_X^{S_1\cdots S_i\cdots S_t}=g_X^{\prod_{x \in X} S_x}=V_X$$
and hence obtain the correct secret, whereas the other participants will get an invalid secret.
\vskip 2mm
The cheating can be detected if the dealer also publishes $g_X^{V_X}$ corresponding to every authorized set $X$ in the initialization step. Every participant $x \in X$ can then verify whether $g_X^{V_X} =g_X^{V_X^ \prime}$, where $V_X^\prime$ is the reconstructed value.\@ However, this can neither prevent cheating nor identify the cheaters.\@ The cheating can be prevented by publishing extra information on the notice board.\@ Let $C= \sum_{x \in X}g_X^{S_x}$.\@ For each authorized set $X$, the dealer also publishes $C_X=g_X^C$.\@ At the reconstruction phase, every participant $P_i \in X$ computes $g_X^{S_i}$ and broadcasts it to all participants in the set $X$. Thus every participant can compute $C$ and verify that $C_X=g_X^C$. If the verification fails, then the protocol stops.
If there exists a group of collaborating cheaters, they can cheat in the first stage. Yeun {\it et al.,} \cite{yeun1998identify} proposed a modified version of Pinch's protocol which identifies all cheaters regardless of their number, improving on the previous results of Pinch and Ghodosi {\it et al.}
\subsection{RJH and CCH scheme}
\noindent An efficient computationally secure on-line secret sharing scheme was proposed by Re-Junn Hwang and Chin-Chen Chang \cite{hwang1998line} in 1998.\@ In this scheme each participant holds a single secret share which is as short as the shared secret.\@ The shares are selected by the participants themselves, so a secure channel is not required between the dealer and the participants. Participants can be added or deleted and secrets can be renewed without modifying the secret shares of the participants. The shares of the participants are kept hidden and hence can be used to recover multiple secrets. The scheme is multi-use, unlike the one-time multi-secret sharing schemes.
\vskip 2mm
In Cachin's and Pinch's schemes, the dealer has to store the shadow of each participant to maintain the on-line property. The dealer storing the shares is an undesirable property in a secret sharing scheme. This scheme avoids that problem and provides great capabilities for many applications.\@ The scheme has four phases: the initialization phase, the construction phase, the recovery phase and the reconstruction/renewal phase.
\vskip 2mm
Assume that there are $n$ participants $P_1,P_2,\ldots,P_n$ sharing a secret $K$ with the monotone access structure $\Gamma=\{\gamma_1,\gamma_2,\ldots,\gamma_t\}$. In the initialization phase the dealer selects two strong primes $p$ and $q$ and publishes $N$ on the public bulletin, where $N$ is the product of $p$ and $q$.\@
The dealer also chooses an integer $g$ from the interval $[N^{1/2},N]$ and a prime $Q$ which is larger than $N$, and publishes them. Each participant selects an integer $S_i$ in the interval $[2,N]$ and computes $U_i=g^{S_i}\; \mbox{mod}\; N$.\@ Each participant keeps $S_i$ secret and sends the pseudo share $U_i$ and the identifier $ID_i$ to the dealer.\@ If different participants select the same shadow, the dealer asks for new shadows; alternatively, the dealer can select the shares and send them to the participants securely, but this needs a secure channel. Finally the dealer publishes $(ID_i,U_i)$ of each participant $P_i$ on the public bulletin.
\vskip 2mm
In the construction phase the dealer computes and publishes some information for each qualified subset in the access structure $\Gamma$. The participants of any qualified subset $\gamma_j$ can cooperate to recover the shared secret $K$ by using this information and the values generated from their shadows in the recovery phase. The public information corresponding to each qualified set is generated as follows.
\begin{itemize}
\item Randomly select an integer $S_0$ from the interval $[2,N]$ such that $S_0$ is relatively prime to $p-1$ and $q-1$.
\item Compute $U_0=g^{S_0} \;\mbox{mod}\;N$ such that $U_0 \neq U_i$ for all $i=1,2,\ldots,n.$
\item Generate an integer $h$ such that $S_0\times h \equiv 1\;\mbox{mod}\; \phi(N).$
\item Publish $U_0$ and $h$ on the public bulletin.
\item For each minimal qualified subset $\gamma_j=\{P_{j1},P_{j2},\ldots,P_{jd}\}$ of $\Gamma_0$, the dealer computes the public information $T_j$ as follows.
\item Compute $H_j=K \oplus (U_{j1}^{S_0} \;\mbox{mod}\; N) \oplus (U_{j2}^{S_0} \;\mbox{mod}\; N)\; \oplus \cdots \oplus (U_{jd}^{S_0} \;\mbox{mod}\; N)$.
\item Use the $d+1$ points $(0,H_j),(ID_{j1},(U_{j1}^{S_0}\; \mbox{mod}\; N)),\\ \ldots,(ID_{jd},(U_{jd}^{S_0}\; \mbox{mod}\; N))$ to construct a polynomial $f(X)$ of degree $d$:
\begin{equation*}
\begin{split}
f(X)&=H_j \times \prod_{k=1}^{d}(X-ID_{jk})/(-ID_{jk})+ \\
& \sum_{l=1}^{d}[(U_{jl}^{S_0}\;\mbox{mod}\; N) \times (X/ID_{jl})\times \\ &\prod_{\substack{k=1\\k \ne l}}^{d}(X-ID_{jk})/(ID_{jl}-ID_{jk})] \;\mbox{mod}\; Q
\end{split}
\end{equation*}
where $d$ is the number of participants in the qualified subset $\gamma_j$.
\item Compute and publish $T_j=f(1)$ on the public bulletin.
\end{itemize}
In the recovery phase participants of any qualified subset can cooperate to recover the shared secret $K$ as follows.
\begin{itemize}
\item Each participant gets $(U_0,h,N)$ from the public bulletin.
\item Each participant $P_{ji}$ computes and provides
${S_{ji}}^{'}=U_0^{S_{ji}}\;\mbox{mod}\;N$, where ${S_{ji}}^{'}$ is the pseudo share of $P_{ji}$.
If $S_{ji}^{'h} \;\mbox{mod}\; N=U_{ji}$, then $S_{ji}^{'}$ is a true shadow; otherwise it is false and the participant $P_{ji}$ is a cheater (see the sketch after this list).
\item Get $T_j$ from the public bulletin, take the $d+1$ points $(1,T_j),(ID_{j1},S_{j1}^{'}),\ldots,(ID_{jd},S_{jd}^{'})$, and use Lagrange interpolation to reconstruct the degree-$d$ polynomial $f(X)$:
\begin{equation*}
\begin{split}
f(X)&=T_j \times \prod_{k=1}^{d}(X-ID_{jk})/(1-ID_{jk})+ \\
& \sum_{l=1}^{d}[S_{jl}^{'} \times (X-1)/(ID_{jl}-1)\times \\ &\prod_{\substack{k=1\\k \ne l}}^{d}(X-ID_{jk})/(ID_{jl}-ID_{jk})] \;\mbox{mod}\; Q
\end{split}
\end{equation*}
\item Compute $H_j=f(0)$ and recover the secret $K=H_j \oplus S_{j1}^{'} \oplus S_{j2}^{'} \oplus \cdots
\oplus S_{jd}^{'}$
\end{itemize}
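The verification step above works because $S_0 \times h \equiv 1\;\mbox{mod}\;\phi(N)$, so that $(U_0^{S_{ji}})^h = g^{S_0 h S_{ji}} = g^{S_{ji}} = U_{ji}\;\mbox{mod}\;N$. A toy Python sketch of this check (with artificially small primes, for illustration only):
\begin{verbatim}
import random
from math import gcd

p, q = 10007, 10009      # toy primes; far too small for real use
N, phi = p * q, (p - 1) * (q - 1)
g = 5

# dealer: pick S0 coprime to phi and h with S0 * h = 1 (mod phi)
while True:
    S0 = random.randrange(2, N)
    if gcd(S0, phi) == 1:
        break
h = pow(S0, -1, phi)

# participant: secret shadow S_i, public value U_i = g^{S_i} mod N
S_i = random.randrange(2, N)
U_i = pow(g, S_i, N)
U_0 = pow(g, S0, N)

# recovery-phase pseudo share and its public verification
S_i_prime = pow(U_0, S_i, N)
assert pow(S_i_prime, h, N) == U_i   # (g^{S0 S_i})^h = g^{S_i} mod N
\end{verbatim}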
When new participants join the group, the access structure changes.\@ The dealer then performs the construction phase and publishes the new public information.\@ The shares of the older participants remain the same.\@ When participants are disenrolled, the corresponding minimal qualified subsets should be deleted from the access structure, and the shared secret should be renewed for security considerations.\@ The public information must be changed in this case, but the rest of the authorized participants still hold the same shadows. Changing the shared secret can also be done by modifying the public values, and again the same shadows can be reused.
\vskip 2mm
Adding a new qualified subset can also be done easily. If the new qualified subset contains an old minimal qualified subset of the access structure, then nothing needs to be done. If some old minimal qualified subsets contain the new qualified subset, they are no longer minimal and shall be deleted from the access structure, and the public information is updated according to the new access structure.\@ Canceling a qualified subset requires the shared secret to be renewed: the public information corresponding to the rest of the qualified subsets must be modified, while the public information corresponding to the canceled subset is of no use and is removed.\@ It is noted that the dealer does not need to collect the shadows of all the participants to reconstruct the secret sharing scheme again.
\vskip 2mm
To share multiple secrets $K_1,K_2,\ldots,K_n$ with access structures $\Gamma_1,\Gamma_2,\ldots,\Gamma_n$, each participant holds only one share $S_i$ for all these $n$ secrets.\@ For each shared secret $K_i$ the dealer selects a unique $S_0^{i}$ and publishes the corresponding $\{h_i, U_{0i}\}$.\@ The dealer also generates and publishes the information $T_{ij}$ for each qualified subset $\gamma_{ij}$ in the minimal access structure $\Gamma_i$. The participants of each qualified subset $\gamma_{ij}$ in $\Gamma_i$ can cooperate to recover the shared secret $K_i$ by performing the recovery phase.
\subsection{Sun's Scheme}
\noindent Pinch's scheme involves a high computation overhead and uses sequential reconstruction in the recovery phase. In 1999 Sun \cite{sun1999line} proposed a scheme having the advantages of lower computation overhead and parallel reconstruction in the secret recovery phase.\@ The security of the scheme is based only on a one-way function, not on any other intractable problem.
\vskip 2mm
Let $f$ be a one way function with both domain and range $G$. The following protocol is used to share $m$ secrets $K^{[h]}$ with access structures $\Gamma^{[h]}$ for $h=1,\ldots,m$.
\begin{enumerate}
\item{The dealer randomly chooses $n$ secret shares $S_1,\ldots,S_n$ and sends them to the participants through a secret channel.}
\item{For every shared secret $K^{[h]}$ and for every minimal qualified subset $X \in \Gamma_0^{[h]}$, the dealer randomly chooses $R_X^{[h]}$ in $G$ and computes $$ T_X^{[h]}=K^{[h]} - \sum_{x:P_x \in X}f(R_X^{[h]} + S_x)$$ and publishes $H^{[h]}=\{(R_X^{[h]},T_X^{[h]})|X \in \Gamma_0^{[h]}\}$ on the notice board.}
\end{enumerate}
In order to recover the secret $K^{[h]}$, a set of participants $Y \in \Gamma^{[h]}$ proceeds as follows
\begin{enumerate}
\item{The members of $Y$ agree on a minimal qualified subset $X \subseteq Y$, where $X=\{P_1,\ldots,P_t\}$.}
\item{Each member $P_i$ reads $R_X^{[h]}$ from the notice board, computes $f(R_X^{[h]}+ S_i)$ and sends the result to $P_t$, who is designated as the secret re-constructor.}
\item{$P_t$ receives $f(R_X^{[h]}+ S_i)$ for $1 \le i \le t-1$, and reconstructs the secret as $K^{[h]}=T_X^{[h]}+ \sum_{i=1}^{t}f(R_X^{[h]}+S_i)$.}
\end{enumerate}
Once the secret is reconstructed it becomes public.\@ The value $f(R_X^{[h]}+ S_i)$ is unique for every secret and every authorized set.\@ Most implementations of one-way functions are based on permutation, substitution and XOR operations;\@ therefore the computation is much faster than exponentiation.\@ Step 2 of the reconstruction phase can proceed in parallel, whereas in Pinch's scheme the reconstruction is sequential.\@ Cheating can be detected by putting the additional information $f(K^{[h]})$ on the notice board for every shared secret, so that anyone can verify the correctness of the computed secret.\@ The scheme can also detect cheaters by putting the additional information $C_{X,i}^{[h]}=f(f(R_X^{[h]}+S_i))$ on the notice board for every secret $K^{[h]}$, every authorized set $X$ and every participant $P_i$. The scheme is dynamic: participants or new access structures can be added by distributing shares to the new participants and updating the public information on the notice board, and the previously distributed shares remain valid. When some participants or some access structures need to be deleted, the shared secret should be renewed; the dealer only needs to update the information on the bulletin board.
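\vskip 2mm
A Python sketch of Sun's construction is given below, with $f$ modelled by SHA-256 over integers modulo $2^{256}$; as before, these are toy choices.
\begin{verbatim}
import secrets, hashlib

M = 2**256

def f(x):
    # one-way function modelled by SHA-256 (input may exceed 2^256)
    h = hashlib.sha256(x.to_bytes(33, "big")).digest()
    return int.from_bytes(h, "big")

participants = ["P1", "P2", "P3"]
S = {p: secrets.randbelow(M) for p in participants}  # private shares

K = 42
X = ["P1", "P2"]              # a minimal qualified set for this secret
R_X = secrets.randbelow(M)    # public, fresh per (secret, subset) pair
T_X = (K - sum(f(R_X + S[p]) for p in X)) % M        # public

# recovery: the members compute f(R_X + S_i) independently, in parallel
assert (T_X + sum(f(R_X + S[p]) for p in X)) % M == K

# optional cheater-detection values the dealer may also publish
C = {p: f(f(R_X + S[p])) for p in X}   # C_{X,i} = f(f(R_X + S_i))
\end{verbatim}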
\begin{table*}[!htb]
\renewcommand{\baselinestretch}{1}
\caption{Comparison of Multi secret sharing schemes \label{tab:comp}}
\begin{small}
\begin{center}
\begin{tabular}{|p{3.5cm}|c|c|c|c|c|c|} \hline
Properties & Cachin \cite{cachin1995line} & Pinch \cite{pinch1996line} & RJH CCH \cite{hwang1998line} & Sun \cite{sun1999line} & Das \cite{das2010efficient} & Roy \cite{roy2010multi} \\ \hline
share size same as secret & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline
use of one way function & Yes & Yes & No & Yes & Yes & Yes \\ \hline
use of discrete logarithm & No & Yes & Yes & No & No & No \\ \hline
use of interpolation & No & No & Yes & No & No & Yes \\ \hline
shares remain secret during reconstruction & No & Yes & Yes & Yes & Yes & Yes \\ \hline
dealer knows the share & Yes & Yes & No & Yes & Yes & Yes \\ \hline
shares can be reused & No & Yes & Yes & Yes & Yes & Yes \\ \hline
dynamic & No & Yes & Yes & Yes & Yes & Yes \\ \hline
verifiability & No & No & Yes & Yes & Yes & Yes \\ \hline
\end{tabular}
\end{center}
\end{small}
\end{table*}
\subsection{Adhikari {\it et al.,} Scheme}
\noindent An efficient, renewable, multi-use, multi-secret sharing scheme for general access structures was proposed by Angsuman Das and Avishek Adhikari \cite{das2010efficient} in 2010.\@ The scheme is based on a one-way hash function and is computationally efficient.\@ Both the combiner and the participants can verify the correctness of the information exchanged among themselves.\@ The scheme consists of three phases: the dealer phase, the pseudo-share generation phase and the combiner's phase.
\vskip 2mm
Let $\mathcal{P}=\{P_1,P_2,\ldots,P_n\}$ be the set of participants and $S_1,S_2,\ldots,S_k$ be the $k$ secrets to be shared by a trusted dealer. Each secret is of size $q$ bits.\@ Let $\Gamma_{S_i}=\{A_{i1},A_{i2},\ldots,A_{it}\}$ be the access structure corresponding to the secret $S_i$, where $A_{il}$ is the $l$-th qualified subset of the access structure of the $i$-th secret $S_i$.
\vskip 2mm
In the dealer phase, the dealer $\mathcal{D}$ chooses a collision resistant one-way hash function $H$, which takes as argument a binary string of arbitrary length and produces as output a binary string of fixed length $q$, where $q$ is the length of each secret. The dealer also randomly chooses the shares $x_\alpha$ of size $q$ and sends them to the participants through a secure channel.
\vskip 2mm
In the pseudo-share generation phase, a pseudo share corresponding to each secret and each authorized set is generated from the participants' secret shares in the following way: $$S_{ij} = S_i \bigoplus \left\{\bigoplus_{\alpha:P_\alpha \in A_{ij}} H(x_\alpha \parallel i_l \parallel j_m) \right \}$$
where $i_l$ denotes the $l$-bit representation of the index $i$ of the secret, with $l= \lfloor \log_2 k \rfloor + 1$, and $j_m$ the $m$-bit representation of $j$, with $m= \lfloor \log_2 t \rfloor+1$, where $t$ is the maximum number of qualified subsets among the access structures corresponding to the different secrets. The dealer then publishes the values $S_{ij}$, $H(S_i)$ and $H^2(x_\alpha \parallel i_l \parallel j_m)$.
\vskip 2mm
In the combiner's phase, the participants of an authorized subset $A_{ij}$ of $\Gamma_{S_i}$ submit their pseudo shares $H(x_\alpha \parallel i_l \parallel j_m)$, which are then XOR-ed with $S_{ij}$ by the combiner to get the secret $S_i$: $$S_{i} = S_{ij} \bigoplus \left\{\bigoplus_{\alpha:P_\alpha \in A_{ij}} H(x_\alpha \parallel i_l \parallel j_m) \right \}$$ The combiner can verify the pseudo share given by a participant by checking it against the public value $H^2(x_\alpha \parallel i_l \parallel j_m)$, and the participants can check whether the combiner is giving them back the correct secret $S_i$ by verifying it against the public value $H(S_i)$.
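\vskip 2mm
A Python sketch of the pseudo-share mechanism follows, with SHA-256 standing in for $H$ and fixed-width byte indices standing in for $i_l \parallel j_m$; the parameters are toy choices.
\begin{verbatim}
import secrets, hashlib

def H(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def pseudo(x_alpha: int, i: int, j: int) -> int:
    # H(x_alpha || i_l || j_m), with 2-byte indices as the encoding
    return H(x_alpha.to_bytes(32, "big")
             + i.to_bytes(2, "big") + j.to_bytes(2, "big"))

participants = ["P1", "P2", "P3"]
x = {p: secrets.randbelow(2**256) for p in participants}  # shares

klist = [11111, 22222]                           # k = 2 secrets
gamma = {0: [["P1", "P2"]], 1: [["P2", "P3"]]}   # qualified subsets

# dealer publishes S_ij = S_i xor (xor of pseudo shares over A_ij)
pub = {}
for i, Si in enumerate(klist):
    for j, A in enumerate(gamma[i]):
        acc = Si
        for p in A:
            acc ^= pseudo(x[p], i, j)
        pub[(i, j)] = acc

# combiner: members of A_ij submit pseudo shares, xor with S_ij
i, j = 1, 0
rec = pub[(i, j)]
for p in gamma[i][j]:
    rec ^= pseudo(x[p], i, j)
assert rec == klist[i]
\end{verbatim}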
\vskip 2mm
Adhikari and Roy \cite{roy2010multi} also proposed a similar scheme based on polynomial interpolation. In this scheme, for each authorized subset in the access structure corresponding to a secret, a polynomial of degree $m-1$ is created with the constant term equal to the secret $S_i$, where $m$ is the number of participants in the authorized subset.
$$f_q^{S_{i}}(x)= S_i+d_1^{i_q}x+d_2^{i_q}x^2+
\ldots+d_{m_{i_q}-1}^{i_q}x^{m_{i_q}-1}$$
For each participant $P_b^{i_q} \in A_q^{S_i}$ in $\Gamma_{S_i}$ the dealer computes the pseudo share $U_{P_b}^{i_q}=h(x_{P_b^{i_q}} \parallel i_l \parallel q_m)$, where $x_{P_b^{i_q}}$ is the secret share of the participant and
$i=1,\ldots,k;\; q=1,\ldots,l;\; b=1,\ldots,m$. The dealer also computes $B_{P_b}^{i_q}=f_q^{S_i}(ID_b^{i_q})$. Finally the shift values $M_{P_b}^{i_q}=B_{P_b}^{i_q}-U_{P_b}^{i_q}$ are computed and published, corresponding to each secret and each authorized subset.
\vskip 2mm
In the reconstruction phase the pseudo shares of an authorized set of participants can be added to the public information to obtain $B_{P_b}^{i_q}=f_q^{S_i}(ID_b^{i_q})=M_{P_b}^{i_q}+U_{P_b}^{i_q}$. The secret can then be reconstructed by interpolation using these $m$ values:
$$S_i=\sum_{b=1}^{m_{i_q}} B_{P_b}^{i_q}\prod_{\substack{r=1\\ r \ne b}}^{m_{i_q}}\frac{-ID_{P_{r}^{i_q}}}{ID_{P_{b}^{i_q}}-ID_{P_{r}^{i_q}}}.$$
It is noted that the computational complexity is higher in this case, compared with the previous scheme.
\section{SUMMARY}
In this section we give a brief summary of the important constructions for multi-secret sharing corresponding to generalized access structures. Table \ref{tab:comp} summarizes and compares the important properties of the different schemes.\@ The important techniques used in the constructions are based on one-way functions, the discrete logarithm problem and Shamir's secret sharing technique. The schemes based on the discrete logarithm problem and hash functions provide only computational security, because their security depends on the computational complexity of these problems. But for many cryptographic applications with a polynomial-time bounded adversary, computational security is sufficient.\@ For maintaining unconditional security, a large number of shares must be kept by each participant; the number of shares that must be kept is proportional to the number of secrets to be shared.
\vskip 2mm
The number of public values on the bulletin board of each scheme is proportional to the number of authorized subsets in the access structure corresponding to each key:\@ there will be at least one public value corresponding to each authorized subset in the access structure of a key.\@ There are also additional public parameters used for the security of the scheme.\@ The computational complexity depends on the complexity of the one-way function used or on the modular exponentiation.\@ But these operations can be done efficiently in polynomial time. The most commonly used one-way functions, such as LFSR, MD5 and SHA, are all based on simple XOR, permutation and substitution operations,\@ so these schemes can be implemented in polynomial time. Modular exponentiation is time consuming with large exponents, but efficient algorithms exist for its fast computation. The share generation and reconstruction in Shamir's scheme, which uses polynomial interpolation, can also be implemented efficiently.
\vskip 2mm
All the schemes mentioned assume that the dealer is a trusted person. Cheating detection mechanisms are also proposed in some schemes with the help of additional public parameters: the combiner can verify the shares submitted by the participants, and the participants can check the reconstructed secret. However, the security is computational; if the underlying computational problem is solved, the secret can be revealed by an adversary. The mathematical model, security notions and computational security for multi-secret sharing were proposed by Javier Herranz {\it et al.,} \cite{herranz2013new}~\cite{herranz2013sharing} in 2013.
\section{CONCLUSIONS}
We have explored some important multi-secret sharing techniques for generalized monotone access structures in this survey.\@ There are several threshold multi-secret sharing schemes where multiple secrets are shared, each with a different threshold;\@ these schemes are not considered here.\@ The emphasis is given to the more generalized notion, where each secret is shared according to a monotone generalized access structure.\@ Threshold multi-secret sharing has also found several applications, and we encourage readers to look further into it.\@ The major concerns in multi-secret sharing are the large number of public values and the computational complexity.\@ Only computational security can be achieved in all the schemes mentioned, where security depends on the hardness of some computational problem.\@ Multi-secret sharing schemes have found numerous applications in implementing authentication mechanisms, resource management in the cloud, multi-policy distributed signatures, multi-policy distributed decryption, $etc$.
\small\balance
\section{Observational cosmology: the renaissance}
Observational cosmology has come a long way since the early days of being ``the science of two numbers''. The
revolution can be traced back to the discovery of the cosmic microwave background (CMB) in 1964 by Penzias and Wilson \citep{1965ApJ...142..419P, 1965ApJ...142..414D}, which won them the Nobel prize in 1978 and confirmed an earlier prediction of George Gamow \citep{1948PhRv...73..803A}. Gamow had realized that the observed abundance of light nuclei can be explained from a hot uniform big bang phase, which is the inevitable beginning of an expanding universe in general relativity. Moreover, such a phase predicts a relic background of radiation with $T \simeq 5$ K, which is reasonably close to the currently observed value of $T = 2.73$ K \citep[e.g.,][]{2009ApJ...707..916F}.
Gamow's triumph could arguably be thought of as the beginning of physical cosmology, as it bore all the marks of the scientific method as we know it: a self-consistent theory is devised based on observations (cosmic expansion and stellar elemental abundances), which makes predictions for completely different observables (microwave background radiation) that are later confirmed.
The subsequent successes of observational cosmology only further confirmed the predictions of the big bang
paradigm, and its general relativistic framework. In particular, the observation of large scale correlations
in galaxy surveys, such as CfA, Las Campanas, 2dF, and the Sloan Digital Sky Survey confirmed the growth of
gravitational instability expected in a Friedmann-Robertson-Walker (FRW) space-time dominated by
non-relativistic matter. Observation of anisotropies in the CMB angular maps by the Cosmic Background
Explorer Differential Microwave Radiometer (COBE DMR) experiment connected the gravitational instability of structures today to their linear seeds in the early Universe \citep{1992ApJ...396L...1S}. A decade later, CMB observations reached their next milestone with the Wilkinson Microwave Anisotropy Probe (WMAP), which measured the anisotropy power spectrum down to $0.1$ degrees with unprecedented precision \citep{2003ApJS..148..135H}, and verified consistency with the concordance cosmological model \citep{2003ApJS..148..175S}. Yet another leap in sensitivity is expected from Planck satellite in early 2013.
In the mean time, geometrical tests in the late-time cosmology, in particular high redshift supernovae Ia as standard candles \citep{1998AJ....116.1009R,1999ApJ...517..565P}, and baryonic acoustic oscillations (BAO) in the spatial correlation of galaxies as standard rulers \citep{2005ApJ...633..560E}, confirmed a consistent picture for cosmic expansion history which appeared to have started accelerating some 5 billion years ago due to a mysterious dark energy component.
These successes have led many to call the current era the era of ``precision cosmology''. Indeed, a whole battery of cosmological observations, ranging from the Lyman-$\alpha$ forest in quasar spectra on scales smaller than $1$ Mpc, to the statistics of galaxies and galaxy clusters on scales of $10-100$ Mpc, and CMB anisotropies on scales of $10-10^4$ Mpc, are well described by a six-parameter model \citep[e.g., see][]{2002PhRvD..66j3508T}, often dubbed the concordance or $\Lambda$CDM cosmological model (where $\Lambda$ and CDM stand for cosmological constant and cold dark matter, which comprise most of the energy in the present day Universe). These parameters include the present day densities of baryons, dark matter, and dark energy, as well as the amplitude and power-law index of the initial spectrum of cosmological fluctuations (assuming that it is a power-law)\footnote{The sixth parameter is the optical depth for Thomson scattering to the last scattering surface of the CMB, which depends on the cosmic time when most atoms are reionized due to early star formation. While, in principle, this parameter is not independent of the others, it cannot be robustly calculated from our current models of star formation in the high redshift Universe.}. With the compilation of independent constraints from different cosmological observations, the statistical errors on these parameters have shrunk quickly over the past decade. Most of these parameters are now measured with 2-3\% statistical precision \citep[e.g.,][]{2011ApJS..192...18K}, even though it is sometimes hard to quantify the systematic errors on these measurements.
The renaissance of observational cosmology has also been recognized by the greater physics community, who have awarded two Nobel prizes in Physics to this discipline in the past six years: The Nobel prize in 2006 was awarded to John Mather and George Smoot for discovery of blackbody spectrum and anisotropy of the CMB. The Nobel prize in 2011 was awarded to Saul Perlmutter, Adam Riess, and Brian Schmidt for the discovery of dark energy (or cosmological constant, the $\Lambda$ in $\Lambda$CDM) by study of magnitude-redshift relation of high redshift supernovae Ia. The two discoveries, however, were of very different nature: the first confirmed two of the most fundamental predictions of standard big bang model, while the latter revived an apparently redundant but enigmatic parameter in General Relativity. As I will discuss below, this dichotomy is reminiscent of the status of modern cosmology today.
\section{Is cosmology solved? The price of precision}
There is a famous quote from Lev Landau, the great Russian physicist of the 20th century, which says: ``Cosmologists are often in error, but seldom in doubt''!
Unfortunately, Landau passed away in 1968, from the complications of a car accident that he had been involved in six years earlier, so I wonder whether he ever had a chance to reflect on Penzias and Wilson's discovery of the CMB. Maybe, if he had, or if he had lived for another decade or so to witness the onset of the renaissance in observational cosmology, he might have revisited his original scepticism. Nevertheless, one often wonders whether there is some wisdom in the words of such great minds that would survive beyond the ages. For me, this rang particularly true in October 1998, when two of the most influential figures in modern cosmology, James Peebles and Michael Turner, debated ``The Nature of the Universe'' in the main auditorium of Smithsonian's National Museum of Natural History in Washington, DC. The 1998 debate subtitle was ``Cosmology Solved?'', and their points of view appeared on the arXiv shortly afterwards \citep{1999PASP..111..274P,1999PASP..111..264T}. Despite being the early days of precision cosmology (which was a term also coined by Turner), and only a few months from the discovery of dark energy from supernovae Ia observations, Turner was very optimistic that the basic tenets of $\Lambda$CDM cosmology, along with inflation, would survive further scrutiny. On the other hand, Peebles was more cautious: ``We have a well defined, testable, and so
far quite successful theoretical description of the expansion: the relativistic Friedmann-Lemaitre cosmological
model. The observational successes of this model are impressive
but I think hardly enough for a convincing scientific case.'' It appears that, as we discussed above, the influx of observational data over the ensuing decade has validated Turner's vision of precision cosmology. However, there is a price for this success which is often ignored.
More than 99 percent of today's energy content of the Universe in the concordance cosmological model is either unidentified, or invisible to us \citep[e.g., see][]{2004PhDT.........6A,2004ApJ...616..643F}! The most enigmatic component of $\Lambda$CDM is $\Lambda$ or the so-called dark energy, which comprises around 73\% of the energy of the Universe today. We will discuss the mysteries of $\Lambda$ (often called the cosmological constant problem) in the next section, as it is intimately connected to the quantum theories of high energy physics and even a quantum theory of gravity.
The next biggest contributor is cold dark matter (or CDM), which makes up around 23\% of the cosmic energy budget. The most popular candidates for CDM are in the form of elementary particles: either weakly interacting massive particles (or WIMP's) or very light scalar particles, hypothesized to resolve the strong CP problem, known as axions. However, none of the efforts to find non-gravitational evidence for these particles have yet yielded a conclusive detection. Therefore, it remains a possibility that a more bizarre candidate, such as a modification of gravity, could explain the same observations. While none of the proposed alternatives to CDM have enjoyed a similar phenomenological success in explaining both the early and late Universe observations \citep[e.g.,][]{2009CQGra..26n3001S}, apparent failures of CDM in matching observations on small scales may point to a more complex possibility \citep[e.g.,][]{2011MNRAS.415L..40B}.
Even most of the standard model particles (often referred to as baryons), which comprise the remaining 5\% of the energy of the Universe, are expected to lie in a tenuous intergalactic medium, which has remained largely invisible to us. Attempts to account for these baryons in representative samples of the Universe, found in large galaxy clusters, have been controversial, and arguably miss up to 30\%-40\% of the cosmic baryonic budget \citep[e.g.,][]{2007MNRAS.378..293A,2011Sci...331.1576S}.
Finally, inflation, a period of rapid exponential expansion in the very early Universe
\citep{1981PhRvD..23..347G,1982PhLB..108..389L}, which is often credited for generating a nearly spatially
flat cosmology with a nearly scale-invariant spectrum of cosmological fluctuations, is plagued by questions
of empirical falsifiability. It turns out that while natural expectations from (certain) inflationary models
are consistent with these observations, it is very easy to introduce arbitrarily large modifications to
inflationary predictions by modifying inflationary initial conditions, and/or adding extra physics to the
inflationary model. In the absence of a(n established) UV-complete model of high energy physics (which
includes quantum gravity), it is hard to argue whether such modifications might (or might not) be natural
\citep[e.g.,][]{2006hep.th...12129C}. Further complication is introduced by the possibility of eternal
inflation, where predictions are plagued by the infamous ``measure problem'', which we will come back to in
Section \ref{falsify}.
As a careful reader might have noticed, we have mentioned quantum gravity more than once in our introduction. This is not a coincidence, as we will see in the next section.
\section{(Cosmologist's) quantum gravity problems}\label{qg}
The search for a consistent quantum theory that includes gravity (or geometrodynamics) as a component is as old as both general relativity (which made geometry and gravity synonymous) and quantum mechanics \citep[see][for an overview of the history of quantum gravity]{2000gr.qc.....6061R}. By now, it has become quite clear that a quantization of Einstein's theory of relativity, while well-behaved as an effective field theory, is non-renormalizable and thus fails (at least as a perturbative theory) as gravitons approach Planck energy ($ M_p c^2\equiv (\hbar c^5 / G_N )^{1/2} \simeq 1.22 \times 10^{19}$ GeV). It is easy to see this on dimensional grounds: Considering small perturbations around Minkowski background, $g_{\mu\nu}= \eta_{\mu\nu}+h_{\mu\nu}$, in natural units ($\hbar=c=1$), the GR action can be written as:
\begin{equation}
S_{GR} \sim -\frac{M^2_p}{16\pi}\int d^4x~ (h+\alpha_2 h^2+\alpha_3 h^3 + ...) \Box h,\label{s_gr}
\end{equation}
where $\alpha_n$'s represent dimensionless constants, we have abbreviated the tensorial structure of the
equation, and ignored additional subtleties in dealing with gauge symmetries. Considering the zero point
fluctuations of $h_{\mu\nu}$ on energy scale $E$, from the free or quadratic action (first term in equation \ref{s_gr}) we have:
\begin{equation}
\langle h^2 \rangle_E \simeq 8\pi M^{-2}_p \int^{\omega^{-1}(E)} \frac{d^3k}{(2\pi)^3 \omega(k)} \sim \left (E\over M_p\right)^2, \label{h2_lorentz}
\end{equation}
where, in the last step, we have used the Lorentz-invariant (or Minkowski space) dispersion relation for
gravitons: $\omega(k) = k$. It is now quite clear that as $E$ approaches $M_p$, the perturbative expansion in
equation (\ref{s_gr}) breaks down.
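For completeness, this estimate follows from evaluating the integral in equation (\ref{h2_lorentz}) explicitly: with $\omega(k)=k$, the upper limit is $k_{\rm max}=E$, and
\begin{equation}
\langle h^2 \rangle_E \simeq \frac{8\pi}{M^2_p}\int_0^{E}\frac{4\pi k^2\, dk}{(2\pi)^3\, k} = \frac{8\pi}{M^2_p}\cdot\frac{E^2}{4\pi^2} = \frac{2}{\pi}\left(E\over M_p\right)^2,
\end{equation}
so the dimensionless metric fluctuations indeed become of order unity as $E\rightarrow M_p$.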
In quite the same way that $W^{\pm}$ and $Z$ gauge bosons in modern electroweak theory cured the non-renormalizability of Fermi's low energy 4-point weak interaction, various attempts at a theory of quantum gravity have mainly comprised of coming up with ``more fundamental'' degrees of freedom, that can be included in a renormalizable or finite theory. Such degrees of freedom could be fundamental strings \citep[e.g.,][]{1998stth.book.....P}, spin networks \citep{1995NuPhB.442..593R}, discrete causal sets \citep[e.g.,][]{1997IJTP...36.2759S}, or other discrete theories in space or space-time that resemble general relativity in the continuum limit. Alternatively, it has been proposed that GR might be ``asymptotically safe'', i.e. it has a non-perturbative but well-defined quantization at arbitrarily high energies \citep{1979grec.conf..790W}.
However, for most experimental physicists, approaching energies comparable to Planck energy \footnote{Here, we talk about energy per degree of freedom, or per particle. The total energy of macroscopic objects can obviously be far greater than Planck energy.} is little more than a distant fantasy. The most powerful accelerators on Earth miss Planck energy by 15 orders of magnitude, while ultra high energy cosmic rays are still 9 orders of magnitude short of $M_p$. Therefore, the majority of physicists may not be much disturbed by the limitations of Einstein's theory of gravity.
Unfortunately, astrophysicists do not enjoy such luxury! As first proved by Hawking and Penrose in a series of singularity theorems \citep{1970RSPSA.314..529H}, general relativity predicts its own demise! In particular, the end states of massive stars are singularities, where temperatures exceed Planck energy. Millions of these singularities just happen to live in our own Galaxy, although they are expected to be shrouded by the event horizons of astrophysical black holes. While strong theoretical motivations exist for a cosmic censorship conjecture, which hides singularities behind the event horizons, there is no guarantee that this will be the case in a theory of quantum gravity. In fact, it is widely believed that event horizons do not exist in a full theory of quantum gravity, although the minimal quantum effects \citep[such as Hawking radiation,][]{1975CMaPh..43..199H} are far from being observable for astrophysical black holes.
More seriously, as we approach the big bang in our past, the temperature rises to Planck energy and beyond. Therefore, any consistent theory of cosmological initial conditions has to include quantum gravity, which impacts scalar and tensor gravitational degrees of freedom directly probed in observational cosmology (and potentially gravitational wave detectors)\footnote{It turns out that alternatives to the standard big bang, such as inflation or ekpyrotic scenarios, even though they may not approach Planck temperature, still rely on (speculative) non-perturbative features of a quantum theory of gravity.}.
Yet, the most dramatic challenge of quantum gravity for cosmology does NOT come from Planck-scale physics. Quite to the contrary, it comes on scales that should see very little quantum corrections to general relativity. Like quantum gravity itself, this issue also dates back to the early days of the development of quantum mechanics. As early as the 1920s, Pauli had recognized the tremendous amount of energy in the zero-point fluctuations of the electromagnetic field \citep{2002gr.qc.....8027S}. While this energy density is divergent, apparently, he had regulated the divergence by taking the classical radius of the electron as a momentum cut-off. If all this energy were to gravitate according to Einstein's theory of relativity, the Universe would curve so much that it could not fit the lunar orbit, let alone the solar system, or the rest of the Galaxy! This is now recognized as the ``old cosmological constant (CC) problem''.
A more careful computation of vacuum energy involves introducing a Lorentz-invariant regulator, which suggests that a natural value for vacuum energy is roughly given by the sum of the fourth power of the particle masses in the theory:
\begin{equation}
\rho_{\rm vac} \sim \sum_i \pm m^4_i.
\end{equation}
For the standard model, the sum is clearly dominated by the most massive particle, i.e. the top quark with $m_t = 171$ GeV. If the energy $\rho_{\rm vac} \sim m^4_t$ were to gravitate, the entire Universe would have been smaller than a centimetre in size! Classical contributions from the Higgs potential can change this expectation by order unity, but short of a conspiracy, it is extremely unnatural to expect a cancellation between different contributions to $\rho_{\rm vac}$ to 1 part in $10^{60}$, in order to be consistent with the observed size of the Universe ($\sim 10$ Gpc).
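As a rough check of this statement, a universe dominated by $\rho_{\rm vac}\sim m^4_t$ would curve on the Hubble scale
\begin{equation}
H^{-1} \sim \left(3 M^2_p \over 8\pi\, m^4_t\right)^{1/2} \simeq 1.4\times 10^{14}~{\rm GeV}^{-1} \sim 3~{\rm cm},
\end{equation}
using $M_p \simeq 1.22\times 10^{19}$ GeV and $\hbar c \simeq 2\times 10^{-14}$ GeV cm in the last step, i.e. indeed of the order of a centimetre.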
Because of the clear pathology in the above estimates, physicists ignored the old CC problem for the better part of the 20th century. More conscientious theorists speculated about a yet-unknown symmetry that would lead to cancellations of different contributions to the vacuum energy. While examples of such symmetries, such as conformal symmetry or supersymmetry, exist, they all seem to be violated by the mass terms in the standard model, and thus, as we argued, stop short of curing the old CC problem by some 60 orders of magnitude. In an influential review article \citep{1989RvMP...61....1W}, Steven Weinberg reviewed different approaches to the CC problem, outlining why each approach fails to solve the problem, while stopping short of dismissing any approach completely. He further speculated that, if all other approaches fail, anthropic considerations, which require a $\Lambda$ suitable for the existence of intelligent life, would predict a value just within reach of cosmological observations. As we discussed in the previous sections, this was indeed verified at the turn of the century with the discovery of cosmic acceleration from high redshift supernovae.
The modern CC problem is sometimes divided into three problems:
\begin{enumerate}
\item {\bf The Old CC problem:} Why isn't $\Lambda$ as big as its natural scale in particle physics?
\item {\bf The New CC problem:} Why does the observed vacuum energy have such an un-naturally small but non-vanishing value?
\item {\bf The Coincidence problem:} Why do we happen to observe vacuum density to be so close to matter density, even though their ratio can vary by up to 120 orders of magnitude during the cosmic history?
\end{enumerate}
Let us reiterate that these are still problems (or puzzles) in quantum gravity, as they appear when we couple gravity to quantum mechanical theories, even though they concern physics far below the Planck scale.
Other than the {\it gravitational aether} model that we will discuss in Section \ref{aether}, the only known ``solution'' to the CC problem comes from the anthropic considerations, or its modern incarnations in the string landscape and/or eternal inflation. In fact, as we mentioned above, one may argue that the non-vanishing $\Lambda$ was ``predicted'' based on these considerations. In the next section, we discuss to what extent this can be compared to predictions in other scientific theories.
\section{Anthropic landscape: physics vs falsifiability}\label{falsify}
Presumably the best (and the worst!) way to define physics is by the set of problems that are tackled by practicing physicists. Nevertheless, a careful study of the history of meaningful progress in science, as often done by philosophers, may reveal common features that may not be immediately obvious to practicing scientists. Probably the most influential philosopher of science of the 20th century was Karl Popper, who coined the term {\it critical rationalism} to describe his philosophy of a scientific theory \citep{1992sbwl.book.....P}. Popper argues that scientific theories are in fact measured by their falsifiability, as opposed to their verification, as no amount of positive tests can in fact verify a theory. However, a single counter-example suffices to rule out (or falsify) a theory. More significantly, a scientific theory needs to be falsifiable, i.e. there should exist observations and/or experiments that can potentially falsify the theory. In practice, falsifiable theories that survive more tests than their competitors will be considered more successful.
So, do anthropic considerations provide a falsifiable solution to the CC problem? To answer this, it is interesting to recount how developments in string theory and cosmology culminated in the anthropic principle at the turn of the century.
One of the most puzzling features of the standard big bang scenario was how points in the Universe that have never been in causal contact have the same temperature to 1 part in $10^5$. This is known as the ``horizon problem'', which was later solved by cosmic inflation, as we discussed in the previous section \citep{1981PhRvD..23..347G,1982PhLB..108..389L}. What inflation does is to stretch a small causally connected patch exponentially, across many Hubble horizons. So, points that appear causally disconnected today were parts of the same Hubble patch at the beginning of inflation. However, it was later realized that for many (if not most) successful inflationary models, inflation never ends! While the success of standard big bang theory requires inflation to end before big bang nucleosynthesis, there are always regions in field space that never stop inflating in these models. Since the physical volume of these regions is exponentially bigger than those that have stopped inflating, most of the volume of the Universe will always be inflating. This is known as eternal inflation, and has a significant anti-Copernican implication: If our cosmology emerges from eternal inflation, we cannot live in a typical region of the Universe \citep[e.g.,][]{1994PhRvD..49.1783L}.
The second development was when research in string theory, which is the most popular contender for a theory of quantum gravity, failed to single out any particular vacuum for the theory. In fact, it was argued that string theory might have as many as $10^{500}$ viable vacua (or a landscape of vacua), with each vacuum having a very different low energy physics \citep[e.g.,][]{2004JHEP...01..060A}.
The combination is now straightforward: No matter where you start the Universe in the string landscape,
assuming that it permits eternal inflation, there will be an infinite time to populate all the other
landscape vacua via quantum tunneling. Of course, most of these regions will not be hospitable to humans (and presumably other intelligent life). Therefore, we can use the anthropic principle\footnote{Weinberg calls this application the ``weak anthropic principle''.} to pick the region where cosmology and particle physics allow humans to live. In particular, this region cannot have a very big positive or negative $\Lambda$, as neither allows enough time for galaxies, stars, planets, and life (as we know them) to form \citep{1989RvMP...61....1W}.
Besides the nebulosity of the notion of ``intelligent life'', one of the problems with this interpretation is that all the predictions are now probabilistic. However, unlike quantum mechanics, where we compute probabilities for finite ensembles, the eternally inflating ensembles are inherently infinite in size. This is known as the ``measure problem'' (which we referred to earlier), as you could find very different probabilities, depending on how you regulate (or cut off) your infinite ensembles.
The second problem is what we started this section with, i.e. falsifiability. Since most of the string landscape exists beyond our cosmological horizon, or at energies much higher than those accessible in accelerators, it is really hard to test (or falsify) its existence. Of course, we might get lucky and see glimpses of another vacuum \citep[e.g.,][]{2011PhRvL.107g1301F}, but for the most part, it has been hard to come up with ways to falsify this paradigm \citep[but see][for a notable possible exception]{2012arXiv1202.5037K}.
We should emphasize that string theory and inflation are the direct results of extending {\it locality} and {\it unitarity}, the underlying principles of relativity and quantum mechanics, to gravity and cosmology. My personal point of view is that their failure in coming up with a falsifiable cosmological model (and most notably a falsifiable solution to the CC problem\footnote{We should note that none of the popular alternatives to string theory have been particularly more successful in addressing the CC problem.}), is cause to revisit these sacred principles of 20th century physics. We will discuss this next.
\section{Why aether? I. Quantum gravity and early Universe}\label{horava}
It has been long recognized that one of the ways to deal with infinities in quantum gravity (and even in quantum field theory) is to break Lorentz symmetry \citep[see e.g.,][]{2006AnPhy.321..150J}. The reason is that the Lorentz group SO(3,1), unlike e.g., the rotation group SO(3), is non-compact, and thus has an infinite volume. Therefore, the sum over the intermediate states for many quantum mechanical calculations (for rates or energy shifts) yields infinities. For renormalizable theories, these infinities can be absorbed in renormalization of a finite number of parameters that can be fixed empirically. However, this is not the case for non-renormalizable theories, such as gravity, which require renormalizing an infinite number of parameters, rendering the theory non-predictive.
It is easy to see how violating Lorentz symmetry can cure this problem \citep{2009PhRvD..79h4008H}. Going
back to our quantized gravitons in Section \ref{qg}, we can see that using $\omega(k) = k^3/M^2$ in equation (\ref{h2_lorentz}) yields:
\begin{equation}
\langle h^2 \rangle_E \simeq 8\pi M^{-2}_p \int^{\omega^{-1}(E)} \frac{d^3k}{(2\pi)^3 \omega(k)} \sim \left(M\over M_p\right)^2 \ln\left(E\over M\right).
\end{equation}
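The origin of the logarithm is easy to trace: the upper limit of the integral in equation (\ref{h2_lorentz}) is now $k_{\rm max}=\omega^{-1}(E)=(EM^2)^{1/3}$, and taking the infrared cut-off at $k\sim M$ (below which, in a realistic scenario, the dispersion relation reverts to its Lorentz-invariant form; see equation \ref{omega_k} below),
\begin{equation}
\int_M^{(EM^2)^{1/3}}\frac{4\pi k^2\,dk}{(2\pi)^3}\,\frac{M^2}{k^3} = \frac{M^2}{2\pi^2}\ln\left[(EM^2)^{1/3}\over M\right] = \frac{M^2}{6\pi^2}\ln\left(E\over M\right),
\end{equation}
which reproduces the quoted scaling up to an ${\cal O}(1)$ prefactor.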
Therefore, as long as $M \ll M_p$, the theory remains perturbative, even for energies far beyond $M_p$. The
anisotropic scaling of space and time in this theory (as $\omega \propto k^3$) is known as Lifshitz
symmetry\footnote{While the dispersion relation is enough to describe the quadratic action,
\citet{2009PhRvD..79h4008H} went on to write most general non-linear actions which obey (local or global)
Lifshitz symmetry and spatial diffeomorphism invariance. This is known as Ho{\v r}ava-Lifshitz gravity.}. A more realistic scenario is an interpolation between Lorentz symmetry at low energies, and Lifshitz symmetry at high energies:
\begin{equation}
\omega(k) = k + \frac{k^3}{M^2}.\label{omega_k}
\end{equation}
Even though Lorentz symmetry is approximately recovered in the IR, there is still a single preferred frame
in which equation (\ref{omega_k}) can be valid, as the dispersion relation is not invariant under Lorentz
transformation. This amounts to a revival of ``gravitational aether'', as an additional component to the
geometric structure of Einstein's gravity. Moreover, equation (\ref{omega_k}) violates the {\it locality} of relativistic theories, as localized perturbations can travel arbitrarily fast.
Note that this already resolves the ``horizon problem'', which was one of the original motivations for
cosmological inflation. Moreover, the dispersion relation $\omega(k) = k^3/M^2$ leads to a scale-invariant
spectrum of cosmological fluctuations, provided that it can be converted to curvature perturbations at late
times \citep{2009JCAP...06..001M}. In other words, with Lifshitz symmetry we can potentially kill two birds
with one stone: make gravity renormalizable and generate the correct statistical distribution for
cosmological fluctuations. However, more detailed studies are necessary to quantify the phenomenological
implications of Ho{\v r}ava-Lifshitz cosmology as an alternative to cosmic inflation.
In the next section, we provide a {\it second} motivation for aether, based on a falsifiable solution to the CC problem.
\section{Why aether? II. Cosmological constant problem} \label{aether}
The old CC problem can be quantified as the pathologically large contribution to $\rho_{\rm vac}$ in the stress tensor $T_{\mu\nu}$ on the right hand side of Einstein equations:
\begin{equation}
T_{\mu\nu} = \rho_{\rm vac} g_{\mu\nu} + ~{\rm excitations}.
\end{equation}
We thus see that if only the traceless part of $T_{\mu\nu}$ appeared on the right hand side of Einstein equations, gravity would not be sensitive to $\rho_{\rm vac}$, which could potentially resolve (at least) the (old) CC problem. Let us write this as \citep{2008arXiv0807.2639A}:
\begin{equation}
(8\pi G') G_{\mu\nu} = T_{\mu\nu}-\frac{1}{4}g_{\mu\nu}T^\alpha_\alpha+T'_{\mu\nu}. \label{grav_aether}
\end{equation}
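It is straightforward to verify that vacuum energy now decouples: for $T^{\rm (vac)}_{\mu\nu}=\rho_{\rm vac}\,g_{\mu\nu}$, the trace is $g^{\mu\nu}T^{\rm (vac)}_{\mu\nu}=4\rho_{\rm vac}$, so
\begin{equation}
T^{\rm (vac)}_{\mu\nu}-\frac{1}{4}g_{\mu\nu}\left(4\rho_{\rm vac}\right) = \rho_{\rm vac}\,g_{\mu\nu}-\rho_{\rm vac}\,g_{\mu\nu}=0,
\end{equation}
and an arbitrarily large $\rho_{\rm vac}$ drops out of the right hand side of equation (\ref{grav_aether}).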
The reason we added $T'_{\mu\nu}$ to the right hand side of equation (\ref{grav_aether}) is that, thanks to
Bianchi identities and energy-momentum conservation, Einstein and stress tensors have zero divergence.
However, $T^\alpha_\alpha$ is not generically a constant, which implies that consistency of equation (\ref{grav_aether}) requires:
\begin{equation}
\nabla^{\mu}T'_{\mu\nu} = \frac{1}{4}\nabla_\nu T^\mu_\mu. \label{aether_cons}
\end{equation}
Here, we call $T'_{\mu\nu}$ ``gravitational aether'', which can be interpreted as an additional fluid (or degree of freedom) of this new theory of gravity.
Of course, the predictions of the theory depend on our choice for $T'_{\mu\nu}$. Given that equation
(\ref{aether_cons}) provides 4 equations, they completely specify the evolution of a perfect fluid with a
given equation of state. If we want to avoid introducing additional dimensionful scales to the theory, the
aether equation of state will be set by a constant: $w'=p'/\rho'$. It turns out that only two possibilities
are consistent with conditions of stability and element abundances from big bang nucleosynthesis: $w'=-1$ or
$w'>5$ \citep{2008arXiv0807.2639A}. The former possibility leads to the so-called unimodular gravity, which
has been discussed by several authors, including Einstein himself \citep[e.g., see][]{1989RvMP...61....1W}.
In this case, aether does not have any dynamical degree of freedom. However, equation (\ref{aether_cons}) can be solved to show that the CC re-emerges in the theory as an integration constant.
The second possibility is more novel and interesting. One may dismiss $w'>5$ due to superluminal propagation, as sound speed is $c_s = w'^{1/2} >1$ for aether. While, as we argued in the previous section, this should not necessarily scare us, the case of $w'\rightarrow \infty$, or the incompressible limit is particularly interesting. It can be argued that, in this limit, sound waves in aether cannot be excited, and thus there is no superluminal signal propagation. One way to see this is that any phonon of finite wavelength has an energy of $E=\hbar \omega = \hbar c_s k \rightarrow \infty$, which implies that we need infinite energy to excite aether phonons. Furthermore, similar to the case of $w'=-1$, $w'=\infty$ does not have any {\it independent} dynamical degree of freedom\footnote{This statement is only strictly valid for an irrotational aether.}, even though it does have a velocity, and thus specifies a preferred frame at each point in space (Relativistic dynamics of irrotational incompressible fluids, otherwise known as cuscuton, has been studied in \citet{2007PhRvD..75h3513A}, and \citet{2007PhRvD..75l3509A}).
Notice that the ``gravitational aether'' theory, as we just specified with $w'=\infty$, i.e.
\begin{equation}
(8\pi G') G_{\mu\nu} = T_{\mu\nu}-\frac{1}{4}g_{\mu\nu}T^\alpha_\alpha+ p'(u'_\mu u'_\nu -g_{\mu\nu}),\label{full_aether}
\end{equation}
has no additional parameter (or independent dynamical degree of freedom), compared to Einstein's gravity. So could it be consistent with all the precision and cosmological tests of gravity?
\subsection{Cosmology}
Probably the sharpest prediction of the aether theory is that the effective gravitational constant in the Friedmann equation becomes dependent on the matter equation of state:
\begin{equation}
H^2=\frac{8\pi G_{\rm eff}}{3} \rho_m, \qquad G_{\rm eff} = (1+w_m) G_N,\label{cosmo}
\end{equation}
where $w_m=p_m/\rho_m$ is the matter equation of state, and $G_N$ is Newton's gravitational constant. In particular, this predicts that the gravitational constant during the radiation era was 33\% larger than in the matter era: $G_N/G_R= 3/4$. Fig. (\ref{fig_BBN}) and Table (\ref{tab-constraints}) summarize the big bang nucleosynthesis and CMB+late time cosmology constraints on this ratio \citep{2011PhRvD..84j3522A}. We see that some datasets, namely $^7$Li, (most) CMB experiments, and Lyman-$\alpha$ forest in quasar spectra prefer ratios close to the aether prediction, while others are closer to GR ($G_N=G_R$), or cannot distinguish between the two. Interestingly, however, all the best fits are at $G_N<G_R$\footnote{In the cosmological parameter estimation literature, this is often quantified as an observed effective number of neutrinos larger than 3, which is more than the number expected from the standard model of particle physics \citep[e.g.,][]{2011ApJS..192...18K}.}.
The influx of observational data, and in particular CMB power spectrum from the Planck satellite over the next year, should dramatically improve these constraints, and hopefully confirm or rule out aether predictions conclusively.
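As a simple illustration of how these numbers discriminate between the two theories, one can check whether the aether prediction ($G_N/G_R=3/4$) and the GR value ($G_N/G_R=1$) lie within the $95\%$ intervals quoted in Table \ref{tab-constraints}. A minimal sketch, using only the tabulated values, is:
\begin{verbatim}
# Illustrative check against Table 1: is G_N/G_R = 3/4 (aether) or
# 1 (GR) inside each quoted 95% interval?  Values copied from the table.
constraints = {
  "WMAP+ACT":                        (0.73, 0.31, 0.21),
  "WMAP+ACT+SPT":                    (0.88, 0.17, 0.13),
  "WMAP+ACT+Hubble+BAO+Sne":         (0.89, 0.13, 0.11),
  "WMAP+ACT+SPT+Hubble+BAO+Sne":     (0.94, 0.10, 0.09),
  "WMAP+ACT+Sne+Lya (free Yp)":      (0.68, 0.32, 0.25),
  "WMAP+ACT+SPT+Sne+Lya (free Yp)":  (0.90, 0.27, 0.23),
}
for name, (best, up, down) in constraints.items():
    lo, hi = best - down, best + up
    print("%-34s aether: %-5s GR: %s"
          % (name, lo <= 0.75 <= hi, lo <= 1.0 <= hi))
\end{verbatim}
With these numbers, both values survive the CMB-only combinations, while the combinations that add Hubble, BAO and supernova data begin to disfavour $G_N/G_R=3/4$.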
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{etaGN20110610.eps}
\caption{Allowed regions with 2$\sigma$ lines for D/H, $Y_p$(or $^4$He) and
$^{7}$Li/H are shown \citep{2011PhRvD..84j3522A}. The upper and lower horizontal dashed
lines indicate GR and gravitational aether predictions,
respectively.}
\label{fig_BBN}
\end{figure}
\begin{table}
\caption{Summary of the constraints on $G_N/G_R$ and the associated $95\%$ confidence intervals for different
combinations of observational data \citep{2011PhRvD..84j3522A}. Here, WMAP, ACT, and SPT are different CMB
experiments, BAO stands for baryonic acoustic oscillations, while Sne and Hubble refer to measurements of
distance-redshift relation from supernovae Ia, and Hubble constant. Ly-$\alpha$ refers to measurements of
Lyman-$\alpha$ forest absorption distribution in the spectra of distant quasars. The last two rows take
Helium abundance $Y_p$ as a free parameter, while the rest fix it to its value set by Galactic observations.}
\centering
\begin{tabular}{l c }
\hline
& ~~~$G_N / G_R$~~~ \\ \hline
WMAP+ACT & $0.73^{+0.31}_{-0.21}$ \\
WMAP+ACT+SPT & $ 0.88^{+0.17}_{-0.13}$ \\
WMAP+ACT+Hubble+BAO+Sne & $0.89^{+0.13}_{-0.11} $ \\
WMAP+ACT+SPT+Hubble+BAO+Sne & $0.94^{+0.10}_{-0.09}$ \\
WMAP+ACT+Sne+Ly-$\alpha$ (free $Y_p$) & $0.68^{+0.32}_{-0.25} $ \\
WMAP+ACT+SPT+Sne+Ly-$\alpha$ (free $Y_p$) & $0.90^{+0.27}_{-0.23} $ \\ \hline
\end{tabular}
\label{tab-constraints}
\end{table}
\subsection{Precision tests of gravity}
It can be shown that the simple scaling of effective Newton's constant with matter equation of state
(equation \ref{cosmo}) is still valid for inhomogeneous situations, provided that:
\begin{equation}
u'_\mu=u_\mu, \quad w_{m} = {\rm const.} \;\Rightarrow\; G_{\rm eff} = (1+w_m) G_N,
\end{equation}
i.e. if the matter equation of state is constant and aether moves with matter, all the solutions of GR also satisfy the equations of the gravitational aether theory with a renormalized gravitational constant.
Let us first ignore the gravitational effect of local vorticity in matter and aether flows. In this regime, the flow of aether is completely fixed by matter, and we find that the only effect of aether is to renormalize Newton's constant by a factor of $1+w_m$. \citet{2011PhRvD..84j3522A} show that none of the current precision tests of gravity constrain this effect, as it involves probing the internal structure of objects with near-relativistic pressure. We will discuss the case of neutron stars in the next section.
Coming back to the effect of vorticity, it turns out that the rotational motion of aether is essentially decoupled from matter. Therefore, there is no reason for aether to rotate within rotating bodies. Assuming an irrotational aether will then boost the gravitomagnetic effect sourced by local vorticity by 33\%, which is currently consistent at $2\sigma$ with LAGEOS and GPB experiments \citep{2011PhRvD..84j3522A}.
\subsection{Neutron stars}
As we mentioned above, the internal structure of objects with relativistic pressure is expected to be significantly different in the aether theory. The only known (close to equilibrium) astrophysical objects that have this property are neutron stars. In Fig. (\ref{fig:MREOS}), we show the mass-radius relation for GR and aether theories, for two widely used nuclear equations of state \citep{2011PhRvD..84f3011K}. Most notably, we see that as gravity gets stronger with relativistic pressure, the maximum allowed mass of neutron stars (the so-called Oppenheimer-Volkoff limit) decreases in the aether theory. This is already close to ruling out the theory, given the most massive neutron star with a reliable mass measurement: $1.97 \pm 0.04 M_{\odot}$ \citep{2010Natur.467.1081D}. However, the uncertainty in the nuclear equations of state may prohibit drawing definite conclusions from such observations.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{MREOS2.ps}
\caption{The mass-radius relation of neutron stars given by general relativity (solid) and the aether theory (dashed) based on the parametrized AP3 (black) and FPS (grey) equations of state \citep{2011PhRvD..84f3011K}. The two observed pulsar masses of \citet{2010Natur.467.1081D} and \citet{2011ApJ...728...95V} (which has significantly more uncertainty) are shown in orange and green respectively.}
\label{fig:MREOS}
\end{figure}
\subsection{Black holes: an explanation for dark energy?}
Probably the most speculative, and yet most fascinating, feature of the aether theory is how it couples to astrophysical black holes. As we discussed above, the exciting physics of singularities of black holes is hidden behind their event horizons in general relativity. In fact, astrophysical black holes are expected to have only two numbers (or hairs) associated with them: mass and angular momentum. However, this does not need to be the case for a different theory of gravity, such as the aether theory.
By solving the static spherically symmetric aether equations in vacuum, \citet{2009PhRvD..80d3513P} show
that the aether pressure, no matter how small at infinity, blows up close to the horizon of black holes. This
is not too surprising; one expects the same thing to happen for other types of matter. Of course, the reason
this is pathological for regular matter is that we do not expect matter to sit at rest close to the horizon,
but rather it would fall through. The story, however, is different for an incompressible fluid, as fluid
inside the light horizon can communicate with the outside\footnote{Communication is not meant here in its
literal sense, since incompressible fluids don't propagate signals. Nevertheless, the build-up of pressure
inside the horizon can impact the fluid equations outside.}. One can show that aether pressure would still
blow up even in a dynamical situation, just inside the horizon in a collapsing star (Saravani, Afshordi
\& Mann, in preparation). What \citet{2009PhRvD..80d3513P} propose instead is that this singularity is regulated by quantum gravity effects.
The vacuum metric in the presence of aether is given by \citep{2009PhRvD..80d3513P} :
\begin{equation}
ds^2= (1-r_s/r)\left[1+4\pi G_N p_0 f(r)\right]^2 dt^2-(1-r_s/r)^{-1} dr^2-r^2d\Omega^2,\label{BH_metric}
\end{equation}
where $r_s=2G_NM_{BH}$ is the Schwarzschild radius, and $p_0$ is the aether pressure far away from the black holes. While $f(r)$ is an analytic function with a closed form, it is particularly illuminating to consider it close to $r_s$:
\begin{equation}
f(r) = r_s^2\left\{-2(r/r_s-1)^{-1/2}+{\cal O}\left[(r/r_s-1)^{1/2}\right]\right\}.
\end{equation}
We notice that, unlike the Schwarzschild black hole, the gravitational redshift $1+z = g_{00}^{-1/2}$ approaches a maximum value at $r=r_s$:
\begin{equation}
1+z_{\rm max} = -(8\pi G_N p_0 r_s^2)^{-1},
\end{equation}
while the metric is not defined for $r<r_s$\footnote{One can see the singularity just beyond $r=r_s$ by analytically continuing metric (\ref{BH_metric}) in terms of proper radial distance.}. If we assume quantum gravity effects set a maximum gravitational redshift of Planck energy divided by Hawking temperature (which is equivalent to assuming that aether only becomes important within a Planck length of the horizon), the aether pressure away from the black hole is fixed to:
\begin{equation}
p_0 = - \frac{M^7_p}{256 \pi^2 M_{BH}^3} = p_{\rm obs, \Lambda} \left(M_{BH} \over 85 M_{\odot}\right)^{-3},\label{aether_DE}
\end{equation}
i.e. for stellar black holes, which happen to make up (by number or mass) the majority of astrophysical black holes in our Universe, this criterion fixes the pressure of aether to be comparable to the observed pressure of dark energy\footnote{It is not important to match the dark energy scale exactly with this prescription, as the exact definition of Planck scale can vary by an order of magnitude.}. Given that the average mass of astrophysical black holes evolves with redshift, one can predict an evolution for dark energy, or the dark energy equation of state (see Fig. \ref{fig-obsmstar}).
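The numerical coincidence in equation (\ref{aether_DE}) is easy to reproduce. A back-of-the-envelope sketch in natural (GeV) units, using standard values for the Planck mass, the solar mass, and the critical density (for $h\simeq 0.7$), is:
\begin{verbatim}
# Order-of-magnitude check of eq. (aether_DE), all in GeV units
import math

M_p   = 1.22e19                       # Planck mass [GeV]
M_sun = 1.989e30 * 5.61e26            # solar mass [GeV]; 1 kg = 5.61e26 GeV
M_BH  = 85.0 * M_sun
p0    = M_p**7 / (256 * math.pi**2 * M_BH**3)   # |aether pressure| [GeV^4]
rho_L = 0.73 * 4.0e-47                # observed dark energy density [GeV^4]
print("|p0| = %.1e GeV^4,  rho_Lambda = %.1e GeV^4" % (p0, rho_L))
# both come out of order 1e-47 GeV^4, i.e. comparable, as claimed
\end{verbatim}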
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{plotw_1.eps}
\caption
{From \citet{2009PhRvD..80d3513P}:
{\bf Bottom panel:} The mass-weighted geometric mean of black hole
masses, $M_{BH}$, in units of $M_{\odot}$ as a function of redshift. Different lines represent different astrophysical scenarios of black hole formation. {\bf Top panel:} The prediction of
these scenarios for the effective dark energy equation of state $\bar{w}(<z)$, given that aether pressure scales as $M_{BH}^{-3}$, which can be compared to constraints from cosmology. The shaded area shows the region currently excluded at 68\% confidence level for this
parameter, as measured from cosmological observations \citep{2009ApJS..180..330K}.
}\label{fig-obsmstar}
\end{figure}
However, extending the analysis of \citet{2009PhRvD..80d3513P} to more realistic situations, i.e. including
multiple moving black holes, in the presence of matter, has proven incredibly challenging. This is not too
surprising, as one needs to solve time-dependent non-linear partial differential equations on vastly
disparate scales (ranging from Schwarzschild to Hubble radii), and is practically impossible in the absence of a (yet to be found) appropriate approximation scheme. Until that is done, which can then provide falsifiable
predictions for the black hole-dark energy connection in the theory, equation (\ref{aether_DE}) remains little more than a(n extremely suggestive) numerical coincidence.
\section{Lessons, challenges and outlook}
In this article, I have provided a very subjective overview of successes and failures of the standard cosmological model, especially in its relation to the fundamental physics of gravity. While observational cosmology is undergoing its renaissance, deep mysteries continue to haunt our theoretical understanding of the ingredients of the concordance cosmological model. In my opinion, the most enigmatic of these puzzles is the cosmological constant (CC) problem: the apparently extremely fine-tuned quantum vacuum energy, its relation to the observed dark energy, and possibly a more fundamental theory of quantum gravity.
Fuelled by parallel theoretical invention of eternal inflation and string landscape at the end of the last
century, the anthropic ``solution'' to these puzzles is gaining more traction among the theoretical physics
community. Personally, I find this to be an alarming development. This is partly due to (near) lack of
falsifiability in these paradigms, which is a minimum standard for our scientific theories, as I discussed in
Section \ref{falsify}. Yet another source of worry is extrapolating well-established physical principles far beyond the regime in which they are tested, which happens both in eternal inflation and string theory. However, my most serious concern is that a general acceptance of an unfalsifiable framework will inevitably stifle a search for alternative solutions to these fundamental puzzles. If this happens, and observational probes of dark energy fail to find any deviation from a cosmological constant, we might very well enter a long period of intellectual stagnation in our understanding of the Universe, with no clear end in sight \citep[to borrow terminology from Christopher Stubbs,][]{2009qcfp.book.....T}.
Given that a century of scientific enquiry to solve problems of quantum gravity, particularly the CC problem, has failed to come up with a falsifiable solution, it stands to reason that it might be time to revisit some of the fundamental principles of 20th century physics, i.e. locality (or Lorentz symmetry), and/or unitarity. In this article, I gave two arguments, based on renormalizability of gravity and the CC problem, for why we might have to give up Lorentz symmetry to come up with a falsifiable solution. I then outlined different predictions of the theory, with varying degrees of robustness, which can rule out or confirm the theory in comparison with GR, potentially as early as next year.
Alternative approaches that take a similar point of view in regard to Lorentz symmetry include Einstein-Aether theories \citep{2004PhRvD..70b4003J}, Ho{\v r}ava-Lifshitz gravity \citep{2009PhRvD..79h4008H}, and Shape Dynamics \citep{2011CQGra..28d5005G}. A typical objection to Lorentz-violating theories is that nothing prevents quantum corrections from introducing order unity violation of Lorentz symmetry at low energies \citep[e.g.,][]{2012PhRvD..85b4051L}. So why does particle physics seem to obey Lorentz symmetry to such precision? Here is an argument for why the expectation of ${\cal O}(1)$ Lorentz violation might be too naive:
Imagine that aether is described by the gradient of a light scalar field $\chi$ which has a canonical kinetic term. If $m_{\chi} < H$, then the field will be slowly rolling down its potential, and thus $\nabla_\mu\chi$ specifies a preferred frame, which coincides with the cosmological comoving hypersurfaces. A typical Lagrangian which spontaneously breaks Lorentz symmetry for field $\phi$ can be written as:
\begin{equation}
{\cal L} = \frac{1}{2}(\partial \phi)^2-\frac{1}{2} m_\phi^2\phi^2+\frac{\left(\partial_\mu\phi\partial^\mu\chi\right)^2}{\Lambda^4}+ \frac{1}{2}(\partial \chi)^2-\frac{1}{2} m_\chi^2\chi^2. \label{lagrangian}
\end{equation}
Given that the kinetic energy density of the aether field $\chi$ should be less than the critical density of the Universe, we have:
\begin{equation}
\dot{\chi}^2 < M_p^2H^2,
\end{equation}
which puts an upper limit on the Lorentz-violation for the $\phi$ field:
\begin{equation}
\delta c_{\phi} = \frac{\dot{\chi}^2}{\Lambda^4} < \left(M_p H \over \Lambda^2\right)^2.
\end{equation}
Since $\Lambda$ is the energy cut-off of the effective field theory in equation (\ref{lagrangian}), i.e. the theory is only valid for $E < \Lambda$, setting $\Lambda \sim E$ puts an upper limit on the Lorentz violation for $\phi$ particles at energy $E$, assuming the validity of the effective field theory:
\begin{equation}
\delta c_{\phi} < \left(M_p H \over E^2\right)^2 \sim \left(E \over{\rm meV}\right)^{-4}.
\end{equation}
We see that, already for energies as low as $\sim $ MeV (which is roughly the mass of the electron, and the typical energy of solar or reactor neutrinos), the Lorentz violation should be less than $10^{-36}$, and is further constrained for more energetic particles. This observation suggests that expectations of large violations of Lorentz symmetry might be too naive within a consistent effective field theory framework, which includes gravity in a cosmological spacetime.
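The meV scale in the last step simply reflects the present-day value of $\sqrt{M_p H}$:
\begin{equation}
M_p H_0 \simeq \left(1.22\times 10^{28}~{\rm eV}\right)\times\left(1.5\times 10^{-33}~{\rm eV}\right) \simeq \left(4\times 10^{-3}~{\rm eV}\right)^2,
\end{equation}
where $H_0 \simeq 70$ km/s/Mpc $\simeq 1.5\times 10^{-33}$ eV. At $E\sim 1$ MeV, $(E/{\rm meV})^{-4}\sim 10^{-36}$, as quoted above.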
Another objection, specific to the gravitational aether model (equation \ref{full_aether}) is that an action principle that could lead to these equations is so far non-existent. However, an action is only necessary if we want to quantize gravity, while the field equations (\ref{full_aether}), assuming that they can be consistently solved, are sufficient at the level of classical and semi-classical gravity. Presumably, more structure will be necessary to define a quantum theory that reduces to the gravitational aether in the classical regime.
Finally, I should mention previous attempts to decouple quantum vacuum from gravity, which have provided much of the inspiration for this work. These include massive gravity and degravitation \citep{2007PhRvD..76h4006D}, cascading gravity \citep[e.g.,][]{2008JCAP...02..011D}, and supersymmetric large extra dimensions \citep[e.g.,][]{2004AIPC..743..417B}. However, to the best of my knowledge, none of these frameworks have been developed well enough to make concrete cosmological predictions (at least in the regime that they address the CC problem).
Let me close this article by stating the obvious, that Physics is an empirical science.
While the bulk of activity in theoretical physics and astrophysics is driven by (for lack of a better word) fashion, the credibility of a theory is ultimately judged by its concrete predictions against Nature, {\it not} its popularity, mathematical elegance, parsimony, etc. Do you hold your favourite theories to this standard?!
\section*{Acknowledgements}
I would like to first thank the Astronomical Society of India for awarding me its 2008 Vainu Bappu gold medal.
Moreover, I am indebted to my students and collaborators Siavash Aslanbeigi, Michael Balogh, Brendan Foster, Farbod Kamiab, Kazunory Kohri, Chanda Prescod-Weinstein, and Georg Robbers, who are responsible for most of the results that are reviewed in this article. I am supported by the University of Waterloo and the
Perimeter Institute for Theoretical Physics. Research at Perimeter
Institute is supported by the Government of Canada through Industry
Canada and by the Province of Ontario through the Ministry of Research
\& Innovation.
Gold \& Soter (1969; GS from here on) originally developed the idea
of thermal tide torques to explain Venus' asynchronous spin rate.
Drawing on their work, \citet{2009arXiv0901.0735A} assessed the
importance of thermal tides for the hot Jupiters. Using the simple
GS prescription for the quadrupole moment, they found that thermal
tides could induce large asynchronous spin, and generate tidal
heating rates more than sufficient to power the observed radii.
\citet{2009arXiv0901.3279G} correctly pointed out that the GS ansatz
does not faithfully represent the fluid motion induced by time-dependent
heating in a completely fluid atmosphere. He argued that the induced
quadrupole moment would be many orders of magnitude smaller than the GS value,
and with an orientation that would act to synchronize the spin, opposite
the GS result.
Motivated by the criticism of \citet{2009arXiv0901.3279G}, we attempted
to carefully analyze a simplified problem which captures the basic physics
of thermal tide excitation in fluid planets. Our results are presented
in \citet{as2}. From here on,
and unless stated otherwise, all references to equations, figures and
paper sections are to \citet{as2}.
In this note we compare our solutions to the fluid equations (Arras \& Socrates
2009b) to the arguments presented in \citet{2009arXiv0901.3279G}.
As \citet{2009arXiv0901.3279G} is unpublished, we quote the text
from his paper posted on the Cornell University astro-ph archive
(http://arxiv.org/). We then comment on the accuracy of the GS formula
for the quadrupole moment. Contemporaneous with the posting of
\citet{2009arXiv0901.0735A}, \citet{2009MNRAS.395..422G} published results
on a related problem concerning thermal forcing of hot Jupiters. We
briefly comment on the assumptions and results in these two papers.
\subsection{ \citet{2009arXiv0901.3279G} }\label{ss: Goodman}
The heart of Goodman's argument is contained in the fourth paragraph
on his page 1: ``A jovian planet, being gaseous, lacks elastic
strength. The excess column density of the colder parts of the
atmosphere is counterbalanced, to the degree that
hydrostatic equilibrium holds, by an indentation of the
convective boundary and a redistribution of the core's mass
toward the hotter longitudes. Insofar as the radial range over which
mass redistribution occurs is small compared to the planetary radius,
the thermal tide therefore bears no net mass quadrupole. The torque on
the atmosphere is opposed by a torque on the upper parts of the
convection zone."
Through these intuitive arguments, Goodman realized that the Gold and
Soter approximation ignores the following fact: though there is a flow from
hot to cold at high altitudes, there is also a return flow at lower
altitudes. In \S 5.1, we explicitly calculate the
pattern of such a flow, by directly solving the fluid equations in the
limit that inertia is ignored. The fact that this basic flow pattern
is not contained in the derivation of the GS formula is a serious
conceptual shortcoming.
The analysis in \S 5.1 examines the limit of zero
forcing frequency, and departs from Goodman's aforementioned paragraph
in two respects. First, in the limit of zero forcing frequency, figure
3 shows that the return flow need not extend as deep as
the convection zone. The bulk of the fluid motion is confined in the
radiative zone, near the photosphere of the starlight. Second, in \S 5.1
we do not find that density perturbations at high
altitude are compensated by density perturbations of the opposite sign
at lower altitude. Rather, we find that the density perturbation, and
hence torque, is identically zero when fluid inertia is ignored.
Given that density perturbations are zero in the limit of
zero frequency, our next step was to derive finite frequency
corrections. Eq. 45 in \S 5.2 shows that
finite forcing frequency corrections give rise to a nonzero quadrupole
moment. Have we violated hydrostatic balance, assumed by Goodman, by
including finite frequency? The answer rests on a technical detail, which
may be important in future investigations. The equation of hydrostatic
balance $dP/dz=-\rho g$ is obtained from the radial momentum equation
by throwing away the inertial terms. Even if we were to throw away
the inertial terms in the radial momentum equation, but kept them
in the horizontal momentum equations, we would still find a nonzero
quadrupole moment of the correct sign, although its magnitude would
be slightly different (throwing away the $-1$ in the parenthesis in
eq. 45 changes the prefactor from $4$ to $5$).
By what factor should the GS quadrupole moment be reduced
due to the ``isostatic compensation'' from the return flow?
\citet{2009arXiv0901.3279G} argued above that the reduction factor
should be a power of $H/R$. By contrast, solution of the fluid equations
in the limit of small forcing frequency, ignoring gravity waves,
finds a frequency dependent reduction factor $\sim 4(\sigma/N)^2$ (see
eq. 40). Allowing for gravity waves, the calculations in
figures 4 and 5 show that the response
is larger than eq. 45 by 1-3 orders of magnitude in
the relevant period range 1 day - 1 month, due to the excitation of
gravity waves.
\citet{2009arXiv0901.3279G} clarifies the range of forcing period over
which the quadrupole moment should be isostatically compensated on his
page 2: ``Whereas terrestrial isostasy operates on such long timescales
that rock behaves as fluid, the corresponding timescale for gaseous
planets is dynamical, hence less than the tidal period."
One of the key results of Arras \& Socrates (2009b) is that low
radial order gravity
waves dominate the overlap with the thermal tide forcing. Hence the low
frequency limit in eq. 45 does not apply until forcing
periods $\sim 1$ month, comparable to or longer than the forcing periods
of interest (Arras \& Socrates 2009a). Note that this surprising
result is completely different from the case of gravitational forcing
of incompressible fluid bodies, where the low frequency limit applies
below the characteristic dynamical frequency $(GM_p/R_p^3)^{1/2}
\sim {\rm (hours)^{-1}} $ for a gas giant planet.
Lastly, \citet{2009arXiv0901.3279G} discusses the orientation
of the induced quadrupole on his page 2:
``Thus, the tidal torque
claimed by Arras \& Socrates (2009) vanishes to first order in the
density variations of the thermal tide. To the next order, the
quadrupole moment of the thermal tide aligns with the hottest and most
distended parts of the atmosphere, because mass elements are weighted
by the squares of their distances from the center. This will lead to
a torque of the opposite sign to that of $\Delta \Omega$, hence
driving the planet toward synchronous rotation. Similarly, the phase
lag of the thermal tide associated with an orbital eccentricity will
affect the orbit only to second order, and will tend to circularize
the orbit."
The analytic solution to the fluid equations in the low frequency
non-resonant limit (eq. 45) has the correct sign to
drive asynchronous rotation, contrary to Goodman's claim. Including
the effect of gravity waves, figures 4 and
5 show that the sign of the quadrupole can alternate with forcing
frequency. These sign changes are due to both the
Lorentzian factors in eq. 60, as well as the signs of the
quadrupole moments for individual modes (figure 6).
Even in this more complicated case, frequency
ranges still exist where the thermal and gravitational tide torques
may oppose each other, leading to an equilibrium spin state.
In summary, \citet{2009arXiv0901.3279G} correctly points out the
deficiencies in the Gold and Soter approximation employed by Arras \&
Socrates (2009a). However, the solutions to the fluid equations
presented in \citet{as2} differ both qualitatively and quantitatively from the
basic picture outlined in his work.
Consequently, we disagree with Goodman's
criticism of Arras \& Socrates (2009a), i.e., that thermal tides cannot
lead to asynchronous spin and eccentric orbits.
\subsection{The Gold \& Soter approximation, and the calculations
of Arras \& Socrates (2009a) }
Gold and Soter's ansatz involves a major assumption: that the fluid
elements remain at roughly constant pressure, so that density
perturbations are related to temperature perturbations by $\delta
\rho/\rho = - \delta T/T = - \Delta s/c_p$. From eq. 29,
we see this is indeed true if the $\delta p$ and $\xi_r$ terms can be
ignored. For low frequency forcing, ignoring the $\delta p$ term may
be a good approximation, but for fluid atmospheres we have seen it is
{\it not} a good approximation to ignore the $\xi_r$ term. If there is
a solid surface, and the boundary condition at this surface is
$\xi_r=0$, then the Gold and Soter ansatz may, in fact, hold. This
might be realized if the heat were all deposited at or near the solid
surface, rather than well above the surface.
On a more practical level, for fluid planets with a surface radiative layer,
we have found that the Gold and Soter approximation overestimates the quadrupole moment
and torque by more than an order of magnitude for the calculations in this paper.
Since the steady state planetary radii powered by tidal heating found by
Arras \& Socrates (2009a) were rather large compared to observed planets,
the reduction in torque found in this paper may bring their theory into better
agreement with observations.
To summarize, we have found that the Gold and Soter ansatz for the
quadrupole moment is qualitatively, but not quantitatively, correct. It may be
viewed as a convenient order-of-magnitude estimate.
\subsection{ \citet{2009MNRAS.395..422G} }
\citet{2009MNRAS.395..422G} studied perturbations to a hot Jupiter
atmosphere induced by time-dependent radiative heating due to asynchronous
rotation. Their primary result is that waves excited by the thermal
forcing can radiatively damp and transfer angular momentum vertically
in the atmosphere.
\citet{2009MNRAS.395..422G} did not study whether net quadrupoles could be
induced in the atmosphere. By working in plane parallel geometry and
requiring the pressure perturbation to vanish at the base of the grid,
they set the quadrupole to zero by hand. As the goal of their paper
was to study differential rotation induced by the damping of downward
propagating gravity waves, it is likely a good approximation to work
in plane parallel geometry, ignoring the possible existence of net
quadrupoles. Note that Arras \& Socrates (2009b) focus on the
complementary issue of net quadrupoles, ignoring possible driving of
differential rotation.
\section{Summary}
\label{s: summary}
In summary, the results in this paper confirm that quadrupole moments
of the correct sign and approximately correct magnitude may be induced
by time-dependent insolation, supporting the basic assumptions of
Arras \& Socrates (2009a). A future study will use the results
from this paper in concert with a thermal evolution code for the hot
Jupiters (Arras \& Bildsten 2006; Arras \& Socrates 2009a) to construct
more detailed steady state solutions for the planetary rotation and radius, and
the orbital eccentricity.
\acknowledgements We thank Peter Goldreich for helpful discussions.
Also, we thank Jeremy Goodman, Gordon Ogilvie and Pin-Gao Gu for
raising the issue of isostatic adjustment.
\section{Introduction} \label{sec:introduction}
In this paper we investigate regularity estimates for weak solutions to nonlocal equations
\begin{equation}\label{eq:PDE}
Lu=f \quad \text{in } Q=(-1,1)^n,
\end{equation}
where $L$ is a nonlinear integro-differential operator of the form
\begin{equation}\label{def:nonlocaloperator}
Lu(x) = \mathrm{PV} \int_{\mathbb{R}^n} |u(y) - u(x)|^{p-2} (u(y)-u(x)) \mu(x, \mathrm{d}y)
\end{equation}
for $p > 1$ and $f\in L^q(Q)$ for some sufficiently large $q$.
The operator $L$ is clearly determined by the family of measures $(\mu(x,\d y))_{x\in\mathbb{R}^n}$. In the special case when $L$ is the generator of a L\'evy process, $\mu(x,A)$ measures the expected number of jumps from $x$ into the set $A$ within the unit time interval. However, the class of operators that we consider in this paper is more involved, and for that reason we first take a look at an important example.
Let $n\in\mathbb{N}$. For $s_1, \cdots, s_n \in (0,1)$, we define
\begin{equation*}
\mu_{\mathrm{axes}}(x,\mathrm{d}y) = \sum_{k=1}^n s_k(1-s_k) |x_k-y_k|^{-1-s_k p} \mathrm{d}y_k \prod_{i\neq k} \delta_{x_i}(\mathrm{d}y_i).
\end{equation*}
This family plays a central role in our paper, since admissible operators resp. families of measures will be defined on the basis of $\mu_{\mathrm{axes}}$.
Given $x\in\mathbb{R}^n$, the measure $\mu_{\mathrm{axes}}(x,\cdot)$ only charges differences that occur along the axes
\[ \{x+te_k \, | \, t\in\mathbb{R}\} \quad \text{for } k\in\{1,\dots,n\}. \]
Hence, we can think of the operator $Lu$ for $\mu(x,\cdot)=\mu_{\mathrm{axes}}(x,\cdot)$
as a sum of one-dimensional fractional $p$-Laplacians in $\mathbb{R}^n$ with orders of differentiability $s_1,\dots,s_n\in(0,1)$ depending on the respective direction.
In particular $\mu_{\mathrm{axes}}(x,\cdot)$ does not possess a density with respect to the Lebesgue measure.
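To make this sum structure explicit, note that integrating against the product of the one-dimensional measures collapses the principal value integral in \eqref{def:nonlocaloperator} to a sum of line integrals: for sufficiently regular $u$,
\begin{equation*}
Lu(x) = \sum_{k=1}^n s_k(1-s_k)\, \mathrm{PV}\int_{\mathbb{R}} |u(x+he_k)-u(x)|^{p-2}\left(u(x+he_k)-u(x)\right) |h|^{-1-s_kp} \,\mathrm{d}h,
\end{equation*}
which is, up to the normalizing factors $s_k(1-s_k)$, a sum of one-dimensional fractional $p$-Laplacians of order $s_k$ acting along the coordinate lines through $x$.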
An interesting phenomenon in the case $p=2$ and $s=s_1=\dots=s_n$ is that, on the one hand, the corresponding energies for the fractional Laplacian and $L$ are comparable, while on the other hand (for sufficiently good functions) both operators converge to the Laplace operator as $s\nearrow 1$. It is known that the fractional $p$-Laplacian converges to the $p$-Laplacian (see \cite[Theorem 2.8]{BucSqu21} or \cite[Lemma 5.1]{dTGCV20} for details), which is defined by
\[ \Delta_pu(x) = \div\left(|\nabla u(x)|^{p-2}\nabla u(x)\right). \]
However, the operator $L$ for $\mu(x,\cdot)=\mu_{\mathrm{axes}}(x,\cdot)$ converges for any $p>1$ and $s=s_1=\dots=s_n$ to the following local operator (up to a constant depending on $p$ only)
\begin{equation} \label{eq:Aploc}
A_{\text{loc}}^pu(x)=\sum_{i=1}^n \frac{\partial}{\partial x_i} \left(\left|\frac{\partial u(x)}{\partial x_i}\right|^{p-2}\frac{\partial u(x)}{\partial x_i}\right) = \div\left(a\left(\nabla u(x)\right)\right)
\end{equation}
as $s\nearrow 1$, where $a:\mathbb{R}^n\to\mathbb{R}^n$ with $a(z) = (|z_i|^{p-2}z_i)_{i\in\{1,\dots,n\}}$. This convergence is a direct consequence of the convergence for the one-dimensional fractional $p$-Laplacian and the summation structure of the operator for $\mu_{\mathrm{axes}}$. For details, we refer the reader to \Cref{prop:convergence}.
The operator $A_{\text{loc}}^p$ is known as orthotropic $p$-Laplacian and is a well-known operator in analysis (see for instance \cite[Chapter 1, Section 8]{Lions69}). This operator is sometimes also called pseudo $p$-Laplacian.
Minimizers for the corresponding energies have been studied in \cite{BEKA04}, where the authors prove for instance H\"older continuity of minimizers. In \cite{BBLV18}, local Lipschitz regularity for weak solutions to orthotropic $p$-Laplace equations for $p\geq 2$ and every dimension is proved. The case, when $p$ is allowed to be different in each direction, is also studied in several papers.
For instance in \cite{PalaPseudo}, the authors introduce anisotropic De Giorgi classes and study related problems. Another interesting paper studying such operators with nonstandard growth condition is \cite{BB20}, where the authors show that bounded local minimizers are locally Lipschitz continuous. For further results, we refer the reader to the references given in the previously mentioned papers.
The two local operators $\Delta_p$ and $A_{\text{loc}}^p$ are substantially different; for instance, $\Delta_p$ is invariant under orthogonal transformations, while $A_{\text{loc}}^p$ is not. One strength of our results is that they are robust, so that we can recover results for the orthotropic $p$-Laplacian by taking the limit.
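As a simple illustration of this lack of rotational invariance, consider $n=2$ and let $R$ denote the rotation by $\pi/4$. Rotational invariance of the divergence-form operator in \eqref{eq:Aploc} would require the equivariance $a(Rz)=Ra(z)$. Testing with $z=(1,0)$ gives
\begin{equation*}
a(Rz) = a\left(\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}}\right) = 2^{-\frac{p-1}{2}}\,(1,1), \qquad Ra(z) = R\,(1,0)^{\top} = 2^{-\frac{1}{2}}\,(1,1),
\end{equation*}
and the two expressions agree if and only if $p=2$.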
One way to deal with the anisotropy of $\mu_{\mathrm{axes}}$ is to consider for given
$s_1,\dots,s_n\in(0,1)$ a class of suitable rectangles instead of cubes or balls. For this purpose we define $s_{\max} = \max\lbrace s_1, \cdots, s_n \rbrace$.
\begin{definition}\label{def:M_r}
For $r>0$ and $x\in\mathbb{R}^n$ we define
\begin{align*}
M_r(x) =\BIGOP{\times}_{k=1}^n
\left(x_k-r^{\frac{s_{\max}}{s_k}},x_k+r^{\frac{s_{\max}}{s_k}}\right)
\quad \text{ and } M_r = M_r(0) \,.
\end{align*}
\end{definition}
The advantage of taking these rectangles is that they take the anisotropy of the measures resp. operators into account and the underlying metric measure space is a doubling space. The choice of $s_{\max}$ in the definition of $M_r(x)$ is not important. It can be replaced by any positive number $\varsigma \geq s_{\max}$. We only need to ensure that the $M_r(x)$ are balls in a metric measure space with radius $r > 0$ and center $x \in \mathbb{R}^n$. This allows us to use known results on doubling spaces like the John--Nirenberg inequality or results on the Hardy--Littlewood maximal function.
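Let us briefly record the elementary computations behind this remark. The sets $M_r(x)$ are precisely the balls of the metric
\begin{equation*}
d(x,y) = \max_{1\leq k\leq n} |x_k-y_k|^{s_k/s_{\max}},
\end{equation*}
since $y\in M_r(x)$ if and only if $|x_k-y_k|^{s_k/s_{\max}}<r$ for every $k$; the triangle inequality for $d$ follows from the subadditivity of $t\mapsto t^{\alpha}$ for $\alpha = s_k/s_{\max}\in(0,1]$. Moreover,
\begin{equation*}
|M_r(x)| = \prod_{k=1}^n 2r^{s_{\max}/s_k} = 2^n r^{s_{\max}\sum_{k=1}^n 1/s_k},
\end{equation*}
so that $|M_{2r}(x)| \leq 2^{n/s_0}|M_r(x)|$ whenever $s_1,\dots,s_n\geq s_0$, which is the doubling property used below.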
In the spirit of \cite{CKW19}, for each $k \in \lbrace 1,\dots,n\rbrace$, we define $E_r^k(x) = \lbrace y \in \mathbb{R}^n : \vert x_k - y_k \vert < r^{s_{\max}/{s_k}}\rbrace$. Note that
\begin{align}
\label{def:E_r}
M_r(x) = \bigcap_{k = 1}^n E_r^k(x).
\end{align}
We consider families of measures $\mu(x,\d y)$ which are given through certain properties regarding the reference family $\mu_{\mathrm{axes}}(x,\d y)$.
Let us introduce and briefly discuss our assumptions on the families $(\mu(x,\cdot))_{x\in\mathbb{R}^n}$.
\begin{assumption}\label{assumption:symmetry}
We assume
\begin{equation*}
\sup_{x\in\mathbb{R}^n} \int_{\mathbb{R}^n} (|x-y|^p \land 1) \mu(x,\mathrm{d}y) < \infty
\end{equation*}
and for all sets $A,B \in \mathcal{B}(\mathbb{R}^n)$:
\begin{align*}
\int_A \int_B \mu(x,\d y) \d x = \int_B \int_A \mu(x,\d y) \d x.
\end{align*}
\end{assumption}
\Cref{assumption:symmetry} provides integrability and symmetry of the family of measures.
Furthermore, we assume the following tail behavior of $(\mu(x,\cdot))_{x\in\mathbb{R}^n}$.
\begin{assumption}\label{assumption:tail}
There is $\Lambda\geq 1$ such that for every $x_0 \in \mathbb{R}^n$, $k \in \lbrace 1, \dots , n \rbrace$ and all $r>0$
\begin{align*}
\mu(x_0, \mathbb{R}^n \setminus E_{r}^k(x_0)) \le
\Lambda (1-s_k)r^{-ps_{\max}}.
\end{align*}
\end{assumption}
Note that \Cref{assumption:tail} is a stronger assumption than an assumption on the volume on the complement of every $M_r(x_0)$. It gives an appropriate tail behavior for the family of measures in each direction separately and allows us to control the appearing constants in our tail estimate in all directions. This is necessary to prove robust estimates for the corresponding operators. \\
Note that by \Cref{assumption:tail} and \eqref{def:E_r}, we have
\begin{align}
\label{assmu1}
\mu(x_0, \mathbb{R}^n \setminus M_{\rho}(x_0)) \le \sum_{k = 1}^n \mu(x_0, \mathbb{R}^n \setminus E_{\rho}^k(x_0)) \le \Lambda \sum_{k = 1}^n (1-s_k) \rho^{-ps_{\max}} \leq \Lambda n\rho^{-ps_{\max}}.
\end{align}
Hence, \eqref{assmu1} shows that \Cref{assumption:tail} implies $\mu(x_0,\mathbb{R}^n\setminus M_{\rho}(x_0)) \leq c\mu_{\mathrm{axes}}(x_0,\mathbb{R}^n\setminus M_{\rho}(x_0))$ for all $x_0\in\mathbb{R}^n$. \\
Finally, we assume local comparability of corresponding functionals.
For this purpose, we define for any open and bounded $\Omega\subset \mathbb{R}^n$
\begin{equation*}
\mathcal{E}_\Omega^{\mu}(u,v) = \int_\Omega \int_\Omega |u(y) - u(x)|^{p-2}(u(y)-u(x))(v(y)-v(x)) \mu(x, \mathrm{d}y) \mathrm{d}x
\end{equation*}
and $\mathcal{E}^{\mu}(u,v)=\mathcal{E}_{\mathbb{R}^n}^{\mu}(u,v)$ whenever these quantities are finite.
\begin{assumption}\label{assumption:comparability}
There is $\Lambda\geq 1$ such that for every $x_0 \in \mathbb{R}^n$, $\rho \in (0,3)$ and every \\
$u \in L^p(M_{\rho}(x_0))$:
\begin{align}
\label{assmu3}
& \Lambda^{-1} \mathcal{E}^{\mu}_{M_{\rho}(x_0)}(u,u) \le \mathcal{E}^{\mu_{\mathrm{axes}}}_{M_{\rho}(x_0)}(u,u) \le \Lambda \mathcal{E}^{\mu}_{M_{\rho}(x_0)}(u,u).
\end{align}
\end{assumption}
Local comparability of the functionals is an essential assumption on the family of measures. It tells us that our family of measures can vary from our reference family in the given sense of local functionals without losing
crucial information on $(\mu(x,\cdot))_{x\in\mathbb{R}^n}$ like functional inequalities, which we deduce for the explicitly known family $(\mu_{\mathrm{axes}}(x,\cdot))_{x\in\mathbb{R}^n}$. This assumption allows us for instance to study operators of the form \eqref{def:nonlocaloperator} for $\mu_{\mathrm{axes}}$ in the general framework of bounded and measurable coefficients. We emphasize that further examples of families of measures satisfying \eqref{assmu3} can be constructed similarly to the case $p=2$ (see \cite[Section 9]{CKW19}).\\
In this paper, we study nonlocal operators of the form \eqref{def:nonlocaloperator} for families of measures that satisfy the previously given assumptions.
\begin{definition}\label{def:admissible}
Let $p> 1$, $\Lambda \ge 1$, and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$.
We call a family of measures $(\mu(x,\cdot))_{x\in\mathbb{R}^n}$ admissible with regard to $(\mu_{\mathrm{axes}}(x,\cdot))_{x\in\mathbb{R}^n}$,
if it satisfies \Cref{assumption:symmetry}, \Cref{assumption:tail}, and \Cref{assumption:comparability}. We denote the class of such measures by $\mathcal{K}(p,s_0,\Lambda)$.
\end{definition}
It is not hard to see that the family $(\mu_{\mathrm{axes}}(x,\cdot))_{x\in\mathbb{R}^n}$ is admissible in the above sense. Note that \Cref{assumption:symmetry} and \Cref{assumption:comparability} are clearly satisfied. Furthermore, for every $x_0 \in \mathbb{R}^n$, $k \in \lbrace 1, \dots , n \rbrace$ and all $r>0$
\[ \mu_{\mathrm{axes}}(x_0, \mathbb{R}^n \setminus E_{r}^k(x_0)) = 2s_k(1-s_k) \int_{r^{s_{\max}/s_k}}^{\infty} h^{-1-s_kp} \,\mathrm{d}h = \frac{2(1-s_k)}{p}r^{-s_{\max}p}, \]
which shows \Cref{assumption:tail} with $\Lambda=\frac{2}{p}$.
The purpose of this paper is to study weak solutions to nonlocal equations governed by the class of operators $L$ as in \eqref{def:nonlocaloperator}. In order to study weak solutions, we need appropriate Sobolev-type function spaces which guarantee regularity and integrability with respect to $\mu$.
\begin{definition}\label{VHomega}
Let $\Omega\subset\mathbb{R}^n$ open and $p> 1$. We define the function spaces
\begin{align*}
V^{p,\mu}(\Omega|\mathbb{R}^n) &= \Big\{ u: \,\mathbb{R}^n\to\mathbb{R} \text{ meas.} \, | \, u\bigr|_{\Omega}\in
L^p(\Omega), (u,u)_{V^{p,\mu}(\Omega|\mathbb{R}^n)} <\infty\Big\}\,,
\\
H^{p,\mu}_{\Omega}(\mathbb{R}^n) &= \Big\{ u: \,\mathbb{R}^n\to\mathbb{R} \text{ meas.} \, | \, u\equiv 0 \text{ on }
\mathbb{R}^n\setminus\Omega, \|u\|_{H^{p,\mu}_{\Omega}(\mathbb{R}^n)}<\infty \Big\},
\end{align*}
where
\begin{align*}
(u,v)_{V^{p,\mu}(\Omega|\mathbb{R}^n)} &= \int_{\Omega}\int_{\mathbb{R}^n}
|u(y) - u(x)|^{p-2}(u(y)-u(x))(v(y)-v(x))\, \mu(x,\d y)\, \d x \,, \\
\|u\|_{H^{p,\mu}_{\Omega}(\mathbb{R}^n)}^p &= \|u\|_{L^p(\Omega)}^p +
\int_{\mathbb{R}^n}\int_{\mathbb{R}^n} |u(y)-u(x)|^p\mu(x,\d y)\,\d x \,.
\end{align*}
\end{definition}
The space $V^{p,\mu}(\Omega|\mathbb{R}^n)$ can be seen as a nonlocal analog of the space $H^{1,p}(\Omega)$.
It provides fractional regularity (measured in terms of $\mu$) inside of $\Omega$
and integrability on $\mathbb{R}^n \setminus \Omega$. The space $V^{p,\mu}(\Omega|\mathbb{R}^n)$ will serve as solution space.
On the other hand, the space $H^{p,\mu}_{\Omega}(\mathbb{R}^n)$ can be seen as a nonlocal analog of
$H^{1,p}_0(\Omega)$. See \cite{FKV15} and \cite{DyKa17} for further studies of
these spaces in the case $p=2$.
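For the reference family $\mu_{\mathrm{axes}}$, the seminorm appearing in $\|\cdot\|_{H^{p,\mu}_{\Omega}(\mathbb{R}^n)}$ takes an explicit anisotropic Sobolev--Slobodeckij form:
\begin{equation*}
\int_{\mathbb{R}^n}\int_{\mathbb{R}^n} |u(y)-u(x)|^p \mu_{\mathrm{axes}}(x,\d y)\,\d x = \sum_{k=1}^n s_k(1-s_k) \int_{\mathbb{R}^n}\int_{\mathbb{R}} \frac{|u(x+he_k)-u(x)|^p}{|h|^{1+s_kp}} \,\d h\,\d x.
\end{equation*}
This is exactly the quantity that will appear on the right-hand side of the Sobolev-type inequality in \Cref{thm:sobolev} below.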
We are interested in finding robust regularity estimates for weak solutions to a class of nonlocal equations. This means that the constants in the regularity estimates do not depend on the orders of differentiability of the integro-differential operator itself but only on a lower bound of the orders. Let us formulate the main results of this paper. For this purpose we define $\bar{s}$ to be the harmonic mean of the orders $s_1,\dots,s_n$, that is
\begin{equation*}
\bar{s} = \left(\frac{1}{n} \sum_{k=1}^n \frac{1}{s_k}\right)^{-1}.
\end{equation*}
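For instance, if $n=2$, $s_1=\tfrac{1}{2}$ and $s_2=\tfrac{1}{4}$, then $\bar{s} = \left(\tfrac{1}{2}(2+4)\right)^{-1} = \tfrac{1}{3}$, and the rectangles from \Cref{def:M_r} have side lengths $2r$ and $2r^{2}$, reflecting the weaker differentiability in the second coordinate.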
It is well known that the Harnack inequality fails for weak solutions to singular equations of the type \eqref{eq:PDE}.
Even in the case $p=2$ and $s_1=\dots=s_n$, the Harnack inequality does not hold (See for instance \cite{BoSz07, BaCh10}). Our first main result is a weak Harnack inequality for weak supersolutions to equations of the type \eqref{eq:PDE}. Throughout the paper, we denote by $p_{\star}=np/(n-p\bar{s})$ the Sobolev exponent, which will appear in \Cref{thm:sobolev}.
\begin{theorem}[Weak Harnack inequality]\label{thm:weak_Harnack}
Let $\Lambda \ge 1$ and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. Let $1<p< n/\bar{s}$ and $f\in L^{q/(p\bar{s})}(M_1)$ for some $q>n$.
There are $p_0 = p_0(n,p,p_{\star},s_0,q,\Lambda)\in(0,1)$ and $C = C(n,p,p_{\star},s_0,q,\Lambda) > 0$ such that for each $\mu\in\mathcal{K}(p,s_0,\Lambda)$ and every $u\in V^{p,\mu}(M_1|\mathbb{R}^n)$ satisfying $u\geq 0$ in $M_1$ and
\[ \mathcal{E}^{\mu}(u,\varphi) \geq (f,\varphi) \quad \text{for every non-negative } \varphi\in H^{p,\mu}_{M_1}(\mathbb{R}^n),\]
the following holds:
\begin{equation}\label{eq:thm:weakHarnack}
\begin{aligned}
\inf_{M_{1/4}} u \geq C \left( \fint_{M_{1/2}} u^{p_0}(x) \,\mathrm{d}x \right)^{1/p_0} &- \sup_{x \in M_{15/16}} 2 \left( \int_{\mathbb{R}^n \setminus M_1} u^-(z)^{p-1} \mu(x, \mathrm{d}z) \right)^{1/(p-1)} \\
& - \|f\|_{L^{q/(p\bar{s})}(M_{15/16})}.
\end{aligned}
\end{equation}
\end{theorem}
Although the weak Harnack inequality provides an estimate on the infimum only, it is sufficient to prove a decay of oscillation for bounded weak solutions and therefore a local H\"older estimate.
\begin{theorem}[Local H\"older estimate]\label{thm:Holder}
Let $\Lambda \ge 1$ and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. Let $1<p< n/\bar{s}$ and $f\in L^{q/(p\bar{s})}(M_1)$ for some $q>n$.
There are $\alpha = \alpha(n,p,p_{\star},s_0,q,\Lambda)\in(0,1)$ and $C = C(n,p,p_{\star},s_0,q,\Lambda) > 0$ such that for each $\mu\in\mathcal{K}(p,s_0,\Lambda)$ and every $u\in V^{p,\mu}(M_1|\mathbb{R}^n)\cap L^{\infty}(\mathbb{R}^n)$ satisfying
\[ \mathcal{E}^{\mu}(u,\varphi) = (f,\varphi) \quad \text{for every } \varphi\in H^{p,\mu}_{M_1}(\mathbb{R}^n),\]
the following holds: $u\in C^{\alpha}(\overline{M}_{1/2})$ and
\begin{equation*}
\|u\|_{C^\alpha(\overline{M}_{1/2})} \leq C\left( \|u\|_{L^{\infty}(\mathbb{R}^n)} + \|f\|_{L^{q/(p\bar{s})}(M_{15/16})} \right).
\end{equation*}
\end{theorem}
Note that the result needs global boundedness of weak solutions. The same assumption is also needed in the previous works \cite{CKW19, DK20, CK20}.
Global and local boundedness of weak solutions to anisotropic nonlocal equations are nontrivial open questions.
Furthermore, note that the general case, replacing $M_{\frac{1}{4}}, M_{\frac{1}{2}}, M_{\frac{15}{16}}, M_1$ by $M_{\frac{r}{4}}(x_0)$, $M_{\frac{r}{2}}(x_0)$, $M_{\frac{15r}{16}}(x_0),$ $M_r(x_0)$, for $x_0\in\mathbb{R}^n$ and $r\in(0,1]$ follows by a translation and anisotropic scaling argument introduced in \Cref{sec:hoeld}. See also \cite{CK20}.
Let us comment on related results in the literature.
The underlying ideas in developing regularity results for uniformly elliptic operators in divergence form with bounded and measurable coefficients go back to the influential contributions by De Giorgi, Nash and Moser (See \cite{DEGIORGI, NASH, MOSER}). These works led to many further results in various directions.
Similar results for nonlocal operators in divergence form have been obtained by several authors including the works \cite{Kin20, Bar09, BLH, CAFFVASS, KUMA, CHENKUMAWANG,Coz17, DCKP16, DyKa17, FallReg, KASFELS, kassmann-apriori, KassSchwab, KuMiSi15, Rod21, Ming11, Mosconi, Str17, Str18}. See also the references therein. For further regularity results concerning nonlocal equations governed by fractional $p$-Laplacians, we refer the reader to \cite{KuMiSi15, ToIrFe15, NOW20, NOW21, NOW21b, NOW21c, BrLiSc18}. \\
In \cite{DCKP16}, the authors extend the De Giorgi--Nash--Moser theory to a class of fractional $p$-Laplace equations. They provide the existence of a unique minimizer to homogenous equations and prove local regularity estimates for weak solutions. Moreover, in \cite{DCKP14}, the same authors prove a general Harnack inequality for weak solutions.
Nonlocal operators with anisotropic and singular kernels of the type $\mu_{\mathrm{axes}}$ are studied in various mathematical areas such as stochastic differential equations and potential theory. In \cite{BaCh10}, the authors study regularity estimates for harmonic functions for systems of stochastic differential equations $\d X_t=A(X_{t-})\d Z_t$ driven by L\'evy processes $Z_t$ with L\'evy measure $\mu_{\mathrm{axes}}(0,\d y)$, where $2s_1=\dots=2s_n=\alpha$ and $p=2$. See also \cite{ZHANG15, Cha20, ROV16}. Sharp two sided bounds for the heat kernels are established in \cite{KuRy17, KKK19}. In \cite{KuRy20}, the authors prove the existence of transition density of the process $X_t$ and establish semigroup properties of solutions.
The existence of densities for solutions to stochastic differential equations with H\"older continuous coefficients driven by L\'evy processes with anisotropic jumps has been proved in \cite{FRIPERU18}.
Such type of anisotropies also appear in the study of the anisotropic stable JCIR process, see \cite{FriPen20}.
Our approach follows mainly the ideas of \cite{DK20,CK20} and \cite{CKW19}.
In \cite{DK20}, the authors develop a local regularity theory for a class of linear nonlocal operators which covers the case $s=s_1=\dots=s_n\in(0,1)$ and $p=2$. Based on the ideas of \cite{DK20}, the authors in \cite{CK20} establish regularity estimates in the case $p=2$ for weak solutions in a more general framework which allows the orders of differentiability $s_1,\dots,s_n$ to be different. In \cite{CKW19} parabolic equations in the case $p=2$ and possible different orders of differentiability are studied. That paper provides robust regularity estimates, which means the constants in the weak Harnack inequality and H\"older regularity estimate do not depend on the orders of differentiability but on their lower one, only. This allows us to recover regularity results for local operators from the theory of nonlocal operators by considering the limit.
The purpose of this paper is to provide local regularity estimates as in \cite{CK20} for operators which are allowed to be nonlinear. This nonlinearity leads to several difficulties like the need for a different proof for the discrete gradient estimate (See \Cref{lem:alg_ineq}). Since we cannot use the helpful properties of Hilbert spaces (like Plancherel's theorem), we also need an approach different from the one in \cite{CK20} to prove a Sobolev-type inequality.
One strength of this paper is the robustness of all results. This allows us to recover regularity estimates for the limit operators such as for the orthotropic $p$-Laplacian.
Finally, we would like to point out that it is also interesting to study such
operators in non-divergence form. We refer the reader to \cite{SilvInd} for regularity results concerning the fractional Laplacian and to \cite{Lind16} for the fractional $p$-Laplacian. See also \cite{LeitSan20} for the anisotropic case. \\
Even in the most simple case, that is $p=2$ and $s=s_1=s_2=\dots=s_n$, regularity estimates for operators in non-divergence form of the type \eqref{def:nonlocaloperator} with $\mu=\mu_{\mathrm{axes}}$ lead to various open problems such as an Alexandrov--Bakelmann--Pucci estimate.
The authors wish to express their thanks to Lorenzo Brasco for helpful comments.
\subsection*{Outline} This paper is organized as follows. In \Cref{sec:aux}, we introduce appropriate cut-off functions and prove auxiliary results concerning functionals for admissible families of measures. One main result of that section is a Sobolev-type inequality (See \Cref{thm:sobolev}). In \Cref{sec:weak}, we prove the weak Harnack inequality and \Cref{sec:hoeld} contains the proof of the local H\"older estimate. In \Cref{sec:ineq}, we prove some auxiliary algebraic inequalities, and in \Cref{sec:anisorect} we briefly sketch the construction of appropriate anisotropic ``dyadic" rectangles. In \Cref{sec:sharp_maximal}, we use the anisotropic dyadic rectangles to sketch the proof of a suitable sharp maximal function theorem.
\section{Auxiliary results}\label{sec:aux}
This section is devoted to providing some general properties for the class of nonlocal operators that we study in the scope of this paper. The main auxiliary result is a robust Sobolev-type inequality.
Let us first introduce a class of suitable cut-off functions that will be useful for appropriate localization.
\begin{definition} \label{def:cut-off}
We say that $(\tau_{x_0,r,\lambda})_{x_0,r,\lambda} \subset C^{0,1}(\mathbb{R}^n)$ is an admissible family of cut-off functions if there is
$c \ge 1$ such that for all $x_0 \in \mathbb{R}^n$, $r \in (0,1]$ and $\lambda \in (1,2]$, it holds that
\[ \begin{cases}
\supp(\tau) \subset M_{\lambda r}(x_0), \\
\| \tau \|_{\infty} \le 1, \\
\tau \equiv 1 \text{ on } M_r(x_0), \\
\| \partial_k \tau \|_{\infty} \le c \left( \lambda^{s_{\max}/s_k} -1 \right)^{-1}r^{-s_{\max}/s_k} \text{ for every } k \in \lbrace 1, \dots, n \rbrace.
\end{cases} \]
\end{definition}
For brevity, we simply write $\tau$ for any such function from $(\tau_{x_0,r,\lambda})_{x_0,r,\lambda}$, if the respective choice of $x_0, r$ and $\lambda$ is clear. The existence of such functions is standard. \\
Recall the definition of admissible families of measures $\mathcal{K}(p,s_0,\Lambda)$ from \Cref{def:admissible}.
\begin{lemma}\label{lemma:cutoff}
Let $p> 1$, $\Lambda \ge 1$, and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. There is $C = C(n,p,\Lambda) > 0$ such that for each $\mu\in\mathcal{K}(p,s_0,\Lambda)$, every $x_0\in\mathbb{R}^n$, $r \in (0,1]$, $\lambda \in (1,2]$ and every admissible cut-off function $\tau$, the following is true:
\begin{equation*}
\sup_{x\in\mathbb{R}^n} \int_{\mathbb{R}^n} |\tau(y)-\tau(x)|^p \mu(x,\mathrm{d}y) \leq C \left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-ps_k} \right) r^{-ps_{\max}}.
\end{equation*}
\end{lemma}
\begin{proof}
We skip the proof; one can follow the lines of the proof of \cite[Lemma 3.1]{CKW19}, obtaining the same result with the factor $n^{p-1}$ instead of $n$.
\end{proof}
For future purposes, we deduce the following observation. It is an immediate consequence of the foregoing lemma.
\begin{corollary}\label{cor:quadrat}
Let $p> 1$, $\Lambda \ge 1$, and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. There is a constant $C = C(n,p,\Lambda) > 0$ such that for each $\mu \in \mathcal{K}(p,s_0,\Lambda)$ and every $x_0 \in \mathbb{R}^n$, $r \in (0,1]$, $\lambda \in (1,2]$ and every admissible cut-off function $\tau$ and every $u \in L^p(M_{\lambda r}(x_0))$, it holds true that
\begin{align*}
\int_{M_{\lambda r}(x_0)} \int_{\mathbb{R}^n \setminus M_{\lambda r}(x_0)} &|u(x)|^p|\tau(x)|^p \mu(x,\d y) \d x \\
&\le C \left( \sum_{k=1}^n (\lambda^{s_{\max}/s_k} - 1)^{-ps_k}\right) r^{-ps_{\max}} \Vert u \Vert_{L^p(M_{\lambda r}(x_0))}^p.
\end{align*}
\end{corollary}
Note that the constants in \Cref{lemma:cutoff} and \Cref{cor:quadrat} do not depend on $s_0$. Therefore, the lower bound $s_0 \leq s_k$ for all $k \in \lbrace 1,\cdots, n\rbrace$ can be dropped here.
\subsection{Functional inequalities}
This subsection is devoted to the proofs of a Sobolev and a Poincar\'{e}-type inequality.
We start our analysis by first proving a technical lemma, see also \cite[Lemma 4.1]{CKW19}.
\begin{lemma} \label{lem:cv}
Let $p >1$, $a \in (0,1]$, $b\geq 1$, $N \in \mathbb{N}$, $k \in \lbrace 1,\cdots, n \rbrace$, and $s_k \in (0,1)$.
For any $u \in L^p(\mathbb{R}^n)$
\begin{equation*}
\begin{split}
&\int_{\mathbb{R}^n} \sup_{\rho > 0} \frac{1}{\rho^{(1+ps_k)b}} \int_{\mathbb{R}} |u(x) - u(x+he_k)|^p {\bf 1}_{[a\rho^{b},2a\rho^{b})} (|h|) \,\mathrm{d}h \,\mathrm{d}x \\
&\leq (2a)^{1+ps_k} N^{p(1-s_k)} \int_{\mathbb{R}^n} \int_{\mathbb{R}} \frac{|u(x) - u(x+he_k)|^p}{|h|^{1+ps_k}} {\bf 1}_{[\frac{a}{N}\rho^{b}, \frac{2a}{N}\rho^{b})} (|h|) \,\mathrm{d}h\,\mathrm{d}x.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Let $I_a = [a\rho^{b}, 2a\rho^{b})$. By the triangle inequality and a simple change of variables, we have
\begin{equation*}
\begin{split}
&\int_{\mathbb{R}} |u(x) - u(x+he_k)|^p {\bf 1}_{I_a} (|h|) \,\mathrm{d}h \\
&\leq N^{p-1} \sum_{j=1}^N \int_{\mathbb{R}} \left| u\left( x + \frac{j-1}{N} he_k \right) - u\left(x+\frac{j}{N}he_k\right) \right|^p {\bf 1}_{I_a} (|h|) \,\mathrm{d}h \\
&= N^p \sum_{j=1}^N \int_{\mathbb{R}} |u(x + (j-1)he_k) - u(x+jhe_k)|^p {\bf 1}_{I_{a/N}} (|h|) \,\mathrm{d}h.
\end{split}
\end{equation*}
Since $|h| < \frac{2a}{N}\rho^{b}$, we obtain
\begin{equation*}
\begin{split}
&\int_{\mathbb{R}^n} \sup_{\rho > 0} \frac{1}{\rho^{(1+ps_k)b}} \int_{\mathbb{R}} |u(x) - u(x+he_k)|^p {\bf 1}_{I_a} (|h|) \,\mathrm{d}h \,\mathrm{d}x \\
&\leq N^p \left( \frac{2a}{N} \right)^{1+ps_k} \sum_{j=1}^N \int_{\mathbb{R}^n} \int_{\mathbb{R}} \frac{|u(x + (j-1)he_k) - u(x+jhe_k)|^p}{|h|^{1+ps_k}} {\bf 1}_{I_{a/N}} (|h|) \,\mathrm{d}h\,\mathrm{d}x.
\end{split}
\end{equation*}
We change the order of integration by Fubini's theorem and then use the change of variables $y=x+(j-1)he_k$ to conclude that
\begin{equation*}
\begin{split}
&\sum_{j=1}^N \int_{\mathbb{R}^n} \int_{\mathbb{R}} \frac{|u(x + (j-1)he_k) - u(x+jhe_k)|^p}{|h|^{1+ps_k}} {\bf 1}_{I_{a/N}} (|h|) \,\mathrm{d}h\,\mathrm{d}x \\
&= \sum_{j=1}^N \int_{\mathbb{R}} \int_{\mathbb{R}^n} \frac{|u(x + (j-1)he_k) - u(x+jhe_k)|^p}{|h|^{1+ps_k}} {\bf 1}_{I_{a/N}} (|h|) \,\mathrm{d}x\,\mathrm{d}h \\
&= \sum_{j=1}^N \int_{\mathbb{R}} \int_{\mathbb{R}^n} \frac{|u(y) - u(y+he_k)|^p}{|h|^{1+ps_k}} {\bf 1}_{I_{a/N}} (|h|) \,\mathrm{d}y\,\mathrm{d}h \\
&= N \int_{\mathbb{R}^n} \int_{\mathbb{R}} \frac{|u(y) - u(y+he_k)|^p}{|h|^{1+ps_k}} {\bf 1}_{I_{a/N}} (|h|) \,\mathrm{d}h\,\mathrm{d}y.
\end{split}
\end{equation*}
\end{proof}
Using the foregoing result for $b=s_{\max}/s_k$ allows us to prove a robust Sobolev-type inequality. Robust in this context means that the appearing constant in the Sobolev-type inequality is independent of $s_1,\dots,s_n$ and depends on the lower bound $s_0$ only.
Before we prove a robust Sobolev-type inequality, we recall the definition of the Hardy--Littlewood maximal function and sharp maximal function. For $u \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$,
\begin{equation*}
{\bf M}u(x) = \sup_{\rho > 0} \fint_{M_\rho(x)} u(y) \,\d y \quad\text{and}\quad {\bf M}^{\sharp}u(x) = \sup_{\rho > 0} \fint_{M_\rho(x)} |u(y) - (u)_{M_{\rho}(x)}| \,\d y,
\end{equation*}
where $(u)_\Omega = \fint_\Omega u(z)\,\d z$. We will use the maximal function theorem and the sharp maximal function theorem. Note that $\mathbb{R}^n$ is equipped with the metric induced by rectangles of the form $M_r(x)$ and the standard Lebesgue measure. Since $|M_{2r}| = 2^n (2r)^{ns_{\max}/\bar{s}} \leq 2^{n/s_0} |M_r|$, this space is a doubling space with the doubling constant $2^{n/s_0}$.
\begin{theorem} \cite[Theorem 2.2]{Hein01} \label{thm:maximal}
Let $s_1, \dots, s_n \in [s_0, 1)$ be given for some $s_0 \in (0,1)$. Then, there is a constant $C_1 = C_1(n, s_0) > 0$ such that
\begin{equation*}
|\lbrace x \in \mathbb{R}^n : {\bf M}u(x) > t \rbrace| \leq \frac{C_1}{t} \|u\|_{L^1(\mathbb{R}^n)}
\end{equation*}
for all $t > 0$ and $u \in L^1(\mathbb{R}^n)$. For $p > 0$, there is a constant $C_p = C_p(n, p, s_0) > 0$ such that
\begin{equation*}
\|{\bf M}u(x)\|_{L^p(\mathbb{R}^n)} \leq C_p \|u\|_{L^p(\mathbb{R}^n)}
\end{equation*}
for all $u \in L^p(\mathbb{R}^n)$.
\end{theorem}
We were not able to find a reference for the sharp maximal function theorem for sets of the type $M_{\rho}$. Actually, we are not sure whether such a result is available in the literature. However, one can follow the ideas of \cite[Section 3.4]{GrafakosMF}, where the $L^p$ bound is established for the sharp maximal function with cubes (instead of anisotropic rectangles). In order to prove the same result for the sharp maximal function with anisotropic rectangles, dyadic cubes have to be replaced by appropriate anisotropic ``dyadic" rectangles. We construct the anisotropic dyadic rectangles in \Cref{sec:anisorect} and prove the following theorem in \Cref{sec:sharp_maximal}. See also \Cref{sec:sharp_maximal} for the definition of the dyadic maximal function ${\bf M}_d u$.
\begin{theorem} \label{thm:sharp_maximal}
Let $s_1, \dots, s_n \in [s_0, 1)$ be given for some $s_0 \in (0,1)$ and let $0 < p_0 \leq p < \infty$. Then, there is a constant $C = C(n, p, s_0) > 0$ such that for all $u \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ with ${\bf M}_du \in L^{p_0}(\mathbb{R}^n)$,
\begin{equation*}
\|u\|_{L^p(\mathbb{R}^n)} \leq C \|{\bf M}^{\sharp}u\|_{L^p(\mathbb{R}^n)}.
\end{equation*}
\end{theorem}
We are now in a position to prove a robust Sobolev-type inequality by using \Cref{lem:cv}, \Cref{thm:maximal}, and \Cref{thm:sharp_maximal}.
\begin{theorem}\label{thm:sobolev}
Let $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. Suppose that $1 < p < n/\bar{s}$ and let $p_{\star} = np/(n-p\bar{s})$. Then, there is a constant $C = C(n, p, p_{\star}, s_0) > 0$ such that
for every $u \in V^{p,\mu_\mathrm{axes}}(\mathbb{R}^n | \mathbb{R}^n)$
\begin{equation} \label{eq:sobolev}
\|u\|_{L^{p_{\star}}(\mathbb{R}^n)}^p \leq C \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} |u(x)-u(y)|^p \mu_{\mathrm{axes}}(x, \mathrm{d}y) \mathrm{d}x.
\end{equation}
\end{theorem}
\begin{proof}
This proof is based on the technique of \cite{NDN20}, which uses the maximal and sharp maximal inequalities. Note that by definition of $V^{p,\mu_\mathrm{axes}}(\mathbb{R}^n | \mathbb{R}^n)$ and H\"older's inequality, $V^{p,\mu_\mathrm{axes}}(\mathbb{R}^n | \mathbb{R}^n)\subset L^p(\mathbb{R}^n) \subset L^p_{\mathrm{loc}}(\mathbb{R}^n) \subset L^1_{\mathrm{loc}}(\mathbb{R}^n)$.
Hence, the maximal and sharp maximal functions are well defined for every function $u \in V^{p,\mu_\mathrm{axes}}(\mathbb{R}^n | \mathbb{R}^n)$.
For $x \in \mathbb{R}^n$ and $\rho > 0$, we have
\begin{equation} \label{eq:avg}
\fint_{M_\rho(x)} |u(y) - (u)_{M_\rho(x)}| \,\mathrm{d}y \leq \fint_{M_\rho(x)} \fint_{M_\rho(x)} |u(y) - u(z)| \,\mathrm{d}z \,\mathrm{d}y.
\end{equation}
Let us consider as in \cite[Lemma 2.1]{CK20} a polygonal chain $\ell = (\ell_0(y,z), \cdots, \ell_n(y,z)) \in \mathbb{R}^{n(n+1)}$ connecting $y$ and $z$ with
\begin{equation*}
\ell_k(y,z) = (l_1^k, \cdots, l_n^k), \quad\text{where}~
l_j^k =
\begin{cases}
z_j, &\text{if}~ j \leq k, \\
y_j, &\text{if}~ j > k,
\end{cases}
\end{equation*}
Then $y=\ell_0(y,z)$, $z = \ell_n(y,z)$, and $|\ell_{k-1}(y,z) - \ell_k(y,z)| = |y_k-z_k|$ for all $k=1,\cdots, n$; for instance, for $n=2$ the chain visits $\ell_0=(y_1,y_2)$, $\ell_1=(z_1,y_2)$, $\ell_2=(z_1,z_2)$, so that consecutive points differ in exactly one coordinate. By the triangle inequality, we have
\begin{equation} \label{eq:tri_ineq}
\begin{split}
&\fint_{M_\rho(x)} \fint_{M_\rho(x)} |u(y) - u(z)| \,\mathrm{d}z \,\mathrm{d}y \\
&\leq \sum_{k=1}^n \fint_{M_\rho(x)} \fint_{M_\rho(x)} |u(\ell_{k-1}(y,z)) - u(\ell_k(y,z))| \,\mathrm{d}z\,\mathrm{d}y.
\end{split}
\end{equation}
For a fixed $k$, we set $w=\ell_{k-1}(y,z) = (z_1,\cdots, z_{k-1}, y_k, \cdots, y_n)$ and $v = y+z-w = (y_1,\cdots, y_{k-1}, z_k, \cdots, z_n)$, then $\ell_k(y,z) = w+e_k(v_k-w_k)$. By Fubini's theorem, we obtain
\begin{equation} \label{eq:Fubini}
\begin{split}
&\fint_{M_\rho(x)} \fint_{M_\rho(x)} |u(\ell_{k-1}(y,z)) - u(\ell_k(y,z))| \,\mathrm{d}z\,\mathrm{d}y \\
&\leq \fint_{M_{\rho}(x)} \fint_{x_k-\rho^{s_{\max}/s_k}}^{x_k+\rho^{s_{\max}/s_k}} |u(w) - u(w+e_k(v_k-w_k))| \,\mathrm{d}v_k\,\mathrm{d}w.
\end{split}
\end{equation}
Moreover, using the inequality $|v_k-w_k| \leq |v_k-x_k| + |w_k-x_k| < 2\rho^{s_{\max}/s_k}$, we make the inner integral in the right-hand side of \eqref{eq:Fubini} independent of $x$. Namely, we have
\begin{equation} \label{eq:x_indep}
\begin{split}
&\fint_{x_k-\rho^{s_{\max}/s_k}}^{x_k+\rho^{s_{\max}/s_k}} |u(w) - u(w+e_k(v_k-w_k))| \,\mathrm{d}v_k \\
&\leq 2 \fint_{w_k-2\rho^{s_{\max}/s_k}}^{w_k+2\rho^{s_{\max}/s_k}} |u(w) - u(w+e_k(v_k-w_k))| \,\mathrm{d}v_k \\
&= 2 \fint_{-2\rho^{s_{\max}/s_k}}^{2\rho^{s_{\max}/s_k}} |u(w) - u(w+he_k)| \,\mathrm{d}h.
\end{split}
\end{equation}
Combining \eqref{eq:avg}, \eqref{eq:tri_ineq}, \eqref{eq:Fubini}, and \eqref{eq:x_indep}, we arrive at
\begin{equation} \label{eq:F_k}
\fint_{M_\rho(x)} |u(y) - (u)_{M_\rho(x)}| \,\mathrm{d}y \leq \sum_{k=1}^n \rho^{s_{\max}} \fint_{M_{\rho}(x)} F_k(w) \,\mathrm{d}w,
\end{equation}
where the function $F_k$ is defined by
\begin{equation*}
F_k(w) := \sup_{\rho > 0} \left( 2\rho^{-s_{\max}} \fint_{-2\rho^{s_{\max}/s_k}}^{2\rho^{s_{\max}/s_k}} |u(w) - u(w+he_k)| \,\mathrm{d}h \right).
\end{equation*}
By H\"older's inequality,
\begin{equation} \label{eq:F_k_Holder}
\begin{split}
\left( \fint_{M_\rho(x)} F_k(w) \,\mathrm{d}w \right)^{p_{\star}}
&\leq \left( \fint_{M_{\rho}(x)} F_k^p(w) \,\mathrm{d}w \right)^{\frac{{p_{\star}}-p}{p}} \left( \fint_{M_{\rho}(x)} F_k(w) \,\mathrm{d}w \right)^p \\
&\leq |M_{\rho}|^{-\frac{{p_{\star}}-p}{p}} \|F_k\|_{L^p(\mathbb{R}^n)}^{{p_{\star}}-p} \left( \fint_{M_{\rho}(x)} F_k(w) \,\mathrm{d}w \right)^p.
\end{split}
\end{equation}
Thus, it follows from \eqref{eq:F_k} and \eqref{eq:F_k_Holder} that
\begin{equation*}
\begin{split}
\Bigg( \fint_{M_\rho(x)} |u(y) - (u)_{M_\rho(x)}|&\,\mathrm{d}y \Bigg)^{p_{\star}}
\leq n^{{p_{\star}}-1} \sum_{k=1}^n \rho^{{p_{\star}}s_{\max}} \left( \fint_{M_{\rho}(x)} F_k(w) \,\mathrm{d}w \right)^{p_{\star}} \\
&\leq n^{{p_{\star}}-1} \sum_{k=1}^n \rho^{{p_{\star}}s_{\max}} |M_{\rho}|^{-\frac{{p_{\star}}-p}{p}} \|F_k\|_{L^p(\mathbb{R}^n)}^{{p_{\star}}-p} \left( \fint_{M_{\rho}(x)} F_k(w) \,\mathrm{d}w \right)^p \\
&\leq \frac{n^{{p_{\star}}-1}}{2^{n\frac{{p_{\star}}-p}{p}}} \sum_{k=1}^n \|F_k\|_{L^p(\mathbb{R}^n)}^{{p_{\star}}-p} \left( \fint_{M_{\rho}(x)} F_k(w) \,\mathrm{d}w \right)^p.
\end{split}
\end{equation*}
Taking the supremum over $\rho > 0$, we obtain
\begin{equation} \label{eq:sharp-maximal-upper}
\left( {\bf M}^\sharp u(x) \right)^{p_{\star}} \leq \frac{n^{{p_{\star}}-1}}{2^{n\frac{{p_{\star}}-p}{p}}} \sum_{k=1}^n \|F_k\|_{L^p(\mathbb{R}^n)}^{{p_{\star}}-p} ({\bf M}F_k(x))^p.
\end{equation}
We now use \Cref{thm:maximal} and \Cref{thm:sharp_maximal}. By \eqref{eq:MdM} and \Cref{thm:maximal}, we know that ${\bf M}_d u \in L^p(\mathbb{R}^n)$. Thus, \Cref{thm:sharp_maximal} yields that
\begin{equation*}
\|u\|_{L^{p_{\star}}(\mathbb{R}^n)}^{p_{\star}} \leq C \|{\bf M}^\sharp u\|_{L^{p_{\star}}(\mathbb{R}^n)}^{p_{\star}}
\end{equation*}
for some $C = C(n, p_{\star}, s_0) > 0$. Moreover, assuming $F_k \in L^p(\mathbb{R}^n)$, we have by \Cref{thm:maximal} and \eqref{eq:sharp-maximal-upper}
\begin{equation*}
\|{\bf M}^\sharp u\|_{L^{p_{\star}}(\mathbb{R}^n)}^{p_{\star}} \leq C \sum_{k=1}^n \|F_k\|_{L^p(\mathbb{R}^n)}^{{p_{\star}}-p} \|{\bf M}F_k\|_p^p \leq C \sum_{k=1}^n \|F_k\|_{L^p(\mathbb{R}^n)}^{p_{\star}}
\end{equation*}
for some $C = C(n, p, p_{\star}, s_0) > 0$. Therefore, it only remains to show that
\begin{equation} \label{eq:claim}
\|F_k\|_{L^p(\mathbb{R}^n)}^p \leq C s_k(1-s_k) \int_{\mathbb{R}^n} \int_{\mathbb{R}} \frac{|u(x)-u(x+he_k)|^p}{|h|^{1+ps_k}} \,\mathrm{d}h \,\mathrm{d}x
\end{equation}
for each $k=1,\cdots, n$.
Let us fix $k$. Using H\"older's inequality, we have
\begin{equation*}
\begin{split}
\|F_k\|_{L^p(\mathbb{R}^n)}^p
&\leq \int_{\mathbb{R}^n} \sup_{\rho > 0} \frac{2^p}{\rho^{ps_{\max}}} \fint_{-2\rho^{s_{\max}/s_k}}^{2\rho^{s_{\max}/s_k}} |u(x) - u(x+he_k)|^p \,\mathrm{d}h \,\mathrm{d}x \\
&\leq \sum_{i=0}^\infty \int_{\mathbb{R}^n} \sup_{\rho > 0} \frac{2^{p-2}}{\rho^{(1+ps_k)s_{\max}/s_k}} \int_{\mathbb{R}} |u(x) - u(x+he_k)|^p {\bf 1}_{I_i}(|h|) \,\mathrm{d}h \,\mathrm{d}x,
\end{split}
\end{equation*}
where $I_i = [2^{-i}\rho^{s_{\max}/s_k}, 2^{-i+1}\rho^{s_{\max}/s_k})$. For each $i$, let $\lbrace \beta_{j,i} \rbrace_{j=0}^\infty$ be a sequence such that $\sum_j \beta_{j,i} \geq 1$, which will be chosen later. Then,
\begin{equation*}
\|F_k\|_{L^p(\mathbb{R}^n)}^p \leq \sum_{i,j=0}^\infty \beta_{j,i} \int_{\mathbb{R}^n} \sup_{\rho > 0} \frac{2^{p-2}}{\rho^{(1+ps_k)s_{\max}/s_k}} \int_{\mathbb{R}} |u(x) - u(x+he_k)|^p {\bf 1}_{I_i}(|h|) \,\mathrm{d}h \,\mathrm{d}x.
\end{equation*}
By \Cref{lem:cv} for $N = 2^j$, $a = 2^{-i} \in (0,1]$ and $b=s_{\max}/s_k$, we obtain
\begin{equation*}
\|F_k\|_{L^p(\mathbb{R}^n)}^p \leq \sum_{i,j=0}^\infty 2^{p-2+(1+ps_k)(1-i)+p(1-s_k)j} \beta_{j,i} \int_{\mathbb{R}^n} \int_{\mathbb{R}} \frac{|u(x) - u(x+he_k)|^p}{|h|^{1+ps_k}} {\bf 1}_{I_{i+j}} (|h|) \,\mathrm{d}h \,\mathrm{d}x.
\end{equation*}
We rearrange the double sums to have
\begin{equation*}
\begin{split}
&\|F_k\|_{L^p(\mathbb{R}^n)}^p \\
&\leq \sum_{i=0}^\infty \sum_{j=0}^i 2^{p-2+(1+ps_k)(1-i+j)+p(1-s_k)j} \beta_{j,i-j} \int_{\mathbb{R}^n} \int_{\mathbb{R}} \frac{|u(x) - u(x+he_k)|^p}{|h|^{1+ps_k}} {\bf 1}_{I_i}(|h|) \,\mathrm{d}h \,\mathrm{d}x.
\end{split}
\end{equation*}
Let $\beta_{j,i} = p(\log 2)(1-s_k) 2^{-p(1-s_k)j}$, then
\begin{equation*}
1 \leq \sum_{j=0}^\infty \beta_{j,i} = \frac{p(\log 2)(1-s_k)}{1-2^{-p(1-s_k)}} \leq 2p < +\infty.
\end{equation*}
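Let us briefly verify these bounds, which are elementary: writing $x := p(1-s_k) \in (0,p)$, we have
\begin{equation*}
\sum_{j=0}^\infty \beta_{j,i} = p(\log 2)(1-s_k) \sum_{j=0}^\infty 2^{-xj} = \frac{x\log 2}{1-2^{-x}},
\end{equation*}
which is at least $1$ by the inequality $1-e^{-t}\leq t$ with $t = x\log 2$, and at most $\frac{p\log 2}{1-2^{-p}} \leq 2p\log 2 \leq 2p$, since $t\mapsto \frac{t}{1-e^{-t}}$ is increasing and $1-2^{-p}\geq \tfrac12$ for $p>1$.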
Since
\begin{equation*}
\begin{split}
\sum_{j=0}^i 2^{p-2+(1+ps_k)(1-i+j)+p(1-s_k)j} \beta_{j,i-j}
&= p(\log 2)(1-s_k) 2^{p-2+(1+ps_k)(1-i)} \sum_{j=0}^i 2^{(1+ps_k)j} \\
&\leq p(\log 2)(1-s_k) 2^{p-2+(1+ps_k)(1-i)} \frac{2^{(1+ps_k)(i+1)}}{2^{1+ps_k}-1} \\
&\leq p(1-s_k) 2^{p(1+s_k)},
\end{split}
\end{equation*}
we arrive at
\begin{equation*}
\begin{split}
\|F_k\|_{L^p(\mathbb{R}^n)}^p
&\leq \sum_{i=0}^\infty p(1-s_k)2^{p(1+s_k)} \int_{\mathbb{R}^n} \int_{\mathbb{R}} \frac{|u(x) - u(x+he_k)|^p}{|h|^{1+ps_k}} {\bf 1}_{I_i}(|h|) \,\mathrm{d}h \,\mathrm{d}x \\
&\leq p2^{2p}\frac{s_k}{s_0}(1-s_k) \int_{\mathbb{R}^n} \int_{\mathbb{R}} \frac{|u(x) - u(x+he_k)|^p}{|h|^{1+ps_k}} \,\mathrm{d}h \,\mathrm{d}x,
\end{split}
\end{equation*}
which proves \eqref{eq:claim}.
\end{proof}
Next, we can make use of appropriate cut-off functions to prove a localized version of the foregoing Sobolev-type inequality.
\begin{corollary} \label{thm:sobolev_loc}
Let $\Lambda \ge 1$ and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. Suppose that $1 < p < n/\bar{s}$. There is $C = C(n,p,p_{\star},s_0,\Lambda) > 0$ such that for each $\mu \in \mathcal{K}(p,s_0,\Lambda)$ and every $x_0 \in \mathbb{R}^n$, $r \in (0,1]$, $\lambda \in (1,2]$ and $u \in H^{p,\mu}_{{M_{\lambda r}(x_0)}}(\mathbb{R}^n)$ it holds
\begin{equation*}
\begin{split}
\|u\|_{L^{{p_{\star}}}(M_r(x_0))}^p\
\leq& C \int_{M_{\lambda r}(x_0)} \int_{M_{\lambda r}(x_0)} |u(x)-u(y)|^p \mu(x, \mathrm{d}y) \mathrm{d}x \\
&+ C \left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-s_k p} \right) r^{-ps_{\max}} \|u\|_{L^p(M_{\lambda r}(x_0))}^p,
\end{split}
\end{equation*}
where ${p_{\star}}$ is defined as in \Cref{thm:sobolev}.
\end{corollary}
\begin{proof}
Let $\tau:\mathbb{R}^n\to\mathbb{R}$ be an admissible cut-off function in the sense of \Cref{def:cut-off}.
For simplicity of notation we write $M_r=M_r(x_0)$.
By \Cref{thm:sobolev} there is a constant $c_1=c_1(n,p,p_{\star},s_0)>0$ such that
\begin{align*}
\|u\tau\|_{L^{{p_{\star}}}(\mathbb{R}^n)}^p & \leq c_1 \Bigg(\int_{M_{\lambda
r}}\int_{M_{\lambda r}} |u(x)\tau(x)-u(y)\tau(y)|^p \, \mu_{\mathrm{axes}}(x,\d y)\, \d x \\
& \qquad \qquad + 2\int_{M_{\lambda r}}\int_{(M_{\lambda r})^c}
|u(x)\tau(x)-u(y)\tau(y)|^p \, \mu_{\mathrm{axes}}(x,\d y)\, \d x \Bigg) \\
& =: c_1 (I_1 + 2I_2).
\end{align*}
We have
\begin{align*}
I_1 & \leq \frac{1}{2^{p}} \Bigg(\int_{M_{\lambda r}}\int_{M_{\lambda r}}
2^{p-1}\left|(u(y)-u(x))(\tau(x)+\tau(y))\right|^p \, \mu_{\mathrm{axes}}(x,\d y)\, \d x\\
& \qquad \quad + \int_{M_{\lambda r}}\int_{M_{\lambda
r}}2^{p-1}\left|(u(x)+u(y))(\tau(x)-\tau(y))\right|^p \, \mu_{\mathrm{axes}}(x,\d y)\, \d x\Bigg) \\
& =: \frac12 (J_1+J_2).
\end{align*}
Since $(\tau(x)+\tau(y))\leq 2$ for all $x,y\in M_{\lambda r}$, we get
\[
J_1 \leq 2^p\int_{M_{\lambda r}}\int_{M_{\lambda r}} |u(y)-u(x)|^p\, \mu_{\mathrm{axes}}(x,\d
y)\,\d x \leq \Lambda 2^p\int_{M_{\lambda r}}\int_{M_{\lambda r}} |u(y)-u(x)|^p\, \mu(x,\d
y)\,\d x, \]
where we used \Cref{assumption:comparability} in the second inequality.\\
Moreover, since $|u(x)+u(y)|^p|\tau(x)-\tau(y)|^p \leq 2^{p-1}|u(x)|^p|\tau(x)-\tau(y)|^p +
2^{p-1}|u(y)|^p|\tau(y)-\tau(x)|^p$, we can again apply \Cref{assumption:comparability} and by \Cref{lemma:cutoff}, we get
\begin{align*}
J_2 &\leq 2^p \left( \sup_{x\in \mathbb{R}^n} \int_{\mathbb{R}^n} |\tau(y)-\tau(x)|^p\mu(x,\d y) \right)\|u\|^p_{L^p(M_{\lambda r})}
\\
&\leq c_2 \left(\sum_{k=1}^n
(\lambda^{\frac{s_{\max}}{s_k}}-1)^{-ps_k}\right)r^{-ps_{\max}}\|u\|^p_{L^p(M_{\lambda
r})}\,
\end{align*}
for some $c_2>0$, depending on $n$, $p$, $s_0$ and $\Lambda$.
Moreover, by \Cref{cor:quadrat} there is $c_3=c_3(n,p,\Lambda)>0$ such that
\[ I_2 \leq c_3 \left(\sum_{k=1}^n
(\lambda^{\frac{s_{\max}}{s_k}}-1)^{-ps_k}\right) r^{-ps_{\max}} \|u\|_{L^p(M_{\lambda r})}^p. \]
Combining these estimates, we find a constant $C=C(n,p,p_{\star},s_0,\Lambda)>0$ such that
\begin{align*}
\|u\|_{L^{{p_{\star}}}(M_r)}^p &\leq \|u\tau\|_{L^{{p_{\star}}}(\mathbb{R}^n)}^p \\
& \leq C \Bigg(\int_{M_{\lambda r}}\int_{M_{\lambda r}} |u(x)-u(y)|^p \,
\mu(x,\d y)\, \d x\\
&\qquad \qquad \qquad \qquad \qquad +\left(\sum_{k=1}^n
(\lambda^{\frac{s_{\max}}{s_k}}-1)^{-ps_k}\right)r^{-ps_{\max}} \|u\|^p_{L^p(M_{\lambda r})}\Bigg).
\end{align*}
\end{proof}
Applying the same method as in the proof of the Sobolev-type inequality \Cref{thm:sobolev}, we can deduce a Poincar\'e inequality.
\begin{theorem}\label{thm:poincare}
Let $p> 1$, $\Lambda \ge 1$, and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. There is $C = C(n,p,s_0,\Lambda) > 0$ such that for each $\mu \in \mathcal{K}(p,s_0,\Lambda)$ and every $x_0 \in \mathbb{R}^n$, $r \in (0,1]$ and $u \in L^p(M_{r}(x_0))$,
\begin{equation*}
\|u - (u)_{M_r(x_0)}\|_{L^p(M_r(x_0))}^p \leq C r^{ps_{\max}} \mathcal{E}_{M_r(x_0)}^\mu(u,u).
\end{equation*}
\end{theorem}
The proof is analogous to the proof of the Poincar\'e inequality for the case $p=2$, see \cite[Theorem 4.2]{CKW19}.
\section{Weak Harnack inequality}\label{sec:weak}
In this section, we prove \Cref{thm:weak_Harnack}. The proof is based on Moser's iteration technique. We first need to verify a few properties for weak supersolutions to \eqref{eq:PDE}.
\begin{lemma}\label{lemma:log}
Let $\Lambda \ge 1$ and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. Let $1<p\leq n/\bar{s}$, $x_0 \in \mathbb{R}^n$, $r \in (0,1]$, and $\lambda \in (1,2]$. Set $M_r = M_r(x_0)$ and assume $f \in L^{q/(p\bar{s})}(M_{\lambda r})$ for some $q > n$. There is $C = C(n,p,s_0,\Lambda) > 0$ such that for each $\mu \in \mathcal{K}(p, s_0, \Lambda)$ and every $u \in V^{p,\mu}(M_{\lambda r} | \mathbb{R}^n)$ that satisfies
\begin{align*}
&\mathcal{E}^\mu(u, \varphi) \geq (f, \varphi) \quad \text{for any nonnegative}~ \varphi \in H^{p,\mu}_{M_{\lambda r}}(\mathbb{R}^n), \\
&u(x) \geq \epsilon \quad \text{a.e. in}~ M_{\lambda r} ~ \text{for some}~ \epsilon > 0,
\end{align*}
the following holds:
\begin{equation*}
\begin{split}
&\int_{M_r} \int_{M_r} |\log u(y) - \log u(x)|^p \,\mu(x,\d y) \,\d x \\
&\leq C \left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-ps_k} \right) r^{-ps_{\max}} |M_{\lambda r}| \\
&\quad + \epsilon^{1-p} \|f\|_{L^{q/(p\bar{s})}(M_{\lambda r})} |M_{\lambda r}|^{\frac{q-p\bar{s}}{q}} + 2\epsilon^{1-p} |M_{\lambda r}| \sup_{x \in M_{(\lambda+1)r/2}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} u_-^{p-1}(y) \mu(x, \d y).
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Let $\tau$ be an admissible cut-off function in the sense of \Cref{def:cut-off} and let $\varphi(x) = \tau^p(x)u^{1-p}(x)$, which is well defined since $\supp(\tau) \subset M_{\lambda r}$. Then, we have
\begin{equation} \label{eq:log_I12}
\begin{split}
(f,\varphi)
&\leq \int_{M_{\lambda r}} \int_{M_{\lambda r}} |u(x)-u(y)|^{p-2}(u(x) - u(y)) \left( \frac{\tau^p(x)}{u^{p-1}(x)} - \frac{\tau^p(y)}{u^{p-1}(y)} \right) \mu(x,\d y) \,\d x \\
&\quad + 2\int_{M_{\lambda r}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} |u(x)-u(y)|^{p-2}(u(x) - u(y)) \frac{\tau^p(x)}{u^{p-1}(x)} \mu(x,\d y) \,\d x \\
&=: I_1 + I_2.
\end{split}
\end{equation}
Similar to the proof of \cite[Lemma 1.3]{DCKP16}, we get the inequality
\begin{equation} \label{eq:ineq_log}
\begin{split}
&|u(x)-u(y)|^{p-2} (u(x)-u(y)) \left( \frac{\tau^p(x)}{u^{p-1}(x)} - \frac{\tau^p(y)}{u^{p-1}(y)} \right) \\
&\leq - c_1 |\log u(x) - \log u(y)|^p \tau^p(y) + c_2 |\tau(x)-\tau(y)|^p,
\end{split}
\end{equation}
where $c_1, c_2 >0$ are constants depending only on $p$. Hence, by \eqref{eq:ineq_log} and \Cref{lemma:cutoff},
\begin{equation} \label{eq:log_I1}
\begin{split}
I_1
&\leq -c_1 \int_{M_r} \int_{M_r} |\log u(x) - \log u(y)|^p \mu(x,\d y) \,\d x \\
&\quad + C \left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-ps_k} \right)r^{-ps_{\max}} |M_{\lambda r}|.
\end{split}
\end{equation}
For $I_2$, again by \Cref{lemma:cutoff}
\begin{equation} \label{eq:log_I2}
\begin{split}
I_2
&\leq 2\int_{M_{\lambda r}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} (u(x)-u(y))^{p-1} \frac{\tau^p(x)}{u^{p-1}(x)} {\bf 1}_{\lbrace u(x) \geq u(y) \rbrace} \mu(x,\d y) \,\d x \\
&\leq 2\int_{M_{\lambda r}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} |\tau(x) - \tau(y)|^p \mu(x,\d y) \,\d x + 2\int_{M_{\lambda r}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} u_-^{p-1}(y) \frac{\tau^p(x)}{\epsilon^{p-1}} \mu(x,\d y) \,\d x \\
&\leq C \left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-ps_k} \right)r^{-ps_{\max}} |M_{\lambda r}| + \frac{2|M_{\lambda r}|}{\epsilon^{p-1}} \sup_{x \in M_{(\lambda+1)r/2}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} u_-^{p-1}(y) \mu(x, \d y),
\end{split}
\end{equation}
where we assumed that $\mathrm{supp}(\tau) \subset M_{(\lambda+1)r/2}$. Combining \eqref{eq:log_I12}, \eqref{eq:log_I1}, and \eqref{eq:log_I2}, and using H\"older's inequality, we conclude that
\begin{equation*}
\begin{split}
&\int_{M_r} \int_{M_r} |\log u(y) - \log u(x)|^p \,\mu(x,\d y) \,\d x \\
&\leq C \left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-ps_k} \right) r^{-ps_{\max}} |M_{\lambda r}| \\
&\quad + \epsilon^{1-p} \|f\|_{L^{q/(p\bar{s})}(M_{\lambda r})} |M_{\lambda r}|^{\frac{q-p\bar{s}}{q}} + 2\epsilon^{1-p} |M_{\lambda r}| \sup_{x \in M_{(\lambda+1)r/2}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} u_-^{p-1}(y) \mu(x, \d y).
\end{split}
\end{equation*}
\end{proof}
The next theorem is an essential result to prove the weak Harnack inequality.
\begin{theorem}\label{thm:flipsigns}
Let $\Lambda \ge 1$ and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. Let $1<p< n/\bar{s}$, $x_0 \in \mathbb{R}^n$, and $r \in (0,1]$. Set $M_r = M_r(x_0)$ and assume $f \in L^{q/(p\bar{s})}(M_{5r/4})$ for some $q > n$. There are $C=C(n,p,s_0,q,\Lambda)>0$ and $\bar{p}=\bar{p}(n,p,s_0,q,\Lambda) \in (0, 1)$ such that for each $\mu \in \mathcal{K}(p, s_0, \Lambda)$ and every $u \in V^{p,\mu}(M_{5r/4} | \mathbb{R}^n)$ that satisfies
\begin{align*}
&\mathcal{E}^\mu(u, \varphi) \geq (f, \varphi) \quad \text{for any nonnegative}~ \varphi \in H^{p,\mu}_{M_{5r/4}}(\mathbb{R}^n), \\
&u(x) \geq \epsilon \quad \text{a.e. in}~ M_{5r/4},
\end{align*}
for
\begin{equation*}
\epsilon > r^{\delta}\|f\|_{L^{q/(p\bar{s})}(M_{5r/4})}^{\frac{1}{p-1}} + \left( r^{ps_{\max}} \sup_{x \in M_{9r/8}} \int_{\mathbb{R}^n \setminus M_{5r/4}} u_-^{p-1}(y) \mu(x, \d y) \right)^{\frac{1}{p-1}},
\end{equation*}
where $\delta = \frac{ps_{\max}}{p-1} \frac{q-n}{q}$, the following holds:
\begin{equation*}
\left( \fint_{M_r} u^{\bar{p}}(x) \,\mathrm{d}x \right)^{1/\bar{p}} \leq C \left( \fint_{M_r} u^{-\bar{p}}(x) \,\mathrm{d}x \right)^{-1/\bar{p}}.
\end{equation*}
\end{theorem}
\begin{proof}
We only need to prove that $\log u \in \mathrm{BMO}(M_r)$. The rest of the proof is standard.
The Poincar\'e inequality (see \Cref{thm:poincare}) and \Cref{lemma:log} imply
\begin{equation*}
\begin{split}
&\| \log u - (\log u)_{M_r} \|_{L^p(M_r)}^p \\
&\leq C r^{ps_{\max}} \mathcal{E}^\mu_{M_r} (\log u, \log u) \\
&\leq C \left( \sum_{k=1}^n \left( \left(\frac{5}{4}\right)^{\frac{s_{\max}}{s_k}}-1 \right)^{-s_k p} \right) |M_{5r/4}| + C\epsilon^{1-p} \|f\|_{L^{q/(p\bar{s})}(M_{5r/4})} r^{ps_{\max}} |M_{5r/4}|^{\frac{q-p\bar{s}}{q}} \\
&\quad + C\epsilon^{1-p} r^{ps_{\max}} |M_{5r/4}| \sup_{x \in M_{9r/8}} \int_{\mathbb{R}^n \setminus M_{5r/4}} u_-^{p-1}(y) \mu(x, \d y) \\
&\leq C|M_r|,
\end{split}
\end{equation*}
where we used the bound on $\epsilon$ in the last inequality.
Finally, by H\"older's inequality we obtain
\begin{equation*}
\|\log u\|_{\mathrm{BMO}(M_r)} \leq \left( \fint_{M_r} |\log u - (\log u)_{M_r}|^p \,\d x \right)^{1/p} \leq C,
\end{equation*}
which shows that $\log u \in \mathrm{BMO}(M_r)$.
\end{proof}
In order to apply Moser's iteration for negative exponents, we prove the following lemma.
\begin{lemma} \label{lem:iteration}
Let $\Lambda \ge 1$ and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. Let $1<p< n/\bar{s},$ $x_0 \in \mathbb{R}^n$, $r\in(0,1]$, and $\lambda \in (1,2]$. Set $M_r = M_r(x_0)$ and assume $f\in L^{q/(p\bar{s})}(M_{\lambda r})$ for some $q>n$.
For each $\mu \in \mathcal{K}(p,s_0, \Lambda)$ and every $u \in V^{p,\mu}(M_{\lambda r} | \mathbb{R}^n)$ that satisfies
\begin{align*}
&\mathcal{E}^\mu(u, \varphi) \geq (f, \varphi) \quad \text{for any nonnegative}~ \varphi \in H^{p,\mu}_{M_{\lambda r}}(\mathbb{R}^n), \\
&u(x) \geq \epsilon \quad \text{a.e. in}~ M_{\lambda r},
\end{align*}
for
\begin{equation*}
\epsilon > r^{\delta}\|f\|_{L^{q/(p\bar{s})}(M_{\lambda r})}^{\frac{1}{p-1}} + \left( r^{ps_{\max}} \sup_{x \in M_{(\lambda+1)r/2}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} u_-^{p-1}(y) \mu(x, \d y) \right)^{\frac{1}{p-1}},
\end{equation*}
the following is true for any $t > p-1$,
\begin{equation*}
\left\|u^{-1}\right\|_{L^{(t-p+1)\gamma}(M_r)}^{t-p+1} \leq C \left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-s_k p} \right) r^{-s_{\max}p} \left\|u^{-1}\right\|_{L^{t-p+1}(M_{\lambda r})}^{t-p+1},
\end{equation*}
where $\delta = \frac{ps_{\max}}{p-1} \frac{q-n}{q}$, $\gamma = n/(n-p\bar{s})$, and $C = C(n, p, p_{\star}, q, t, s_0, \Lambda) > 0$ is a constant that is bounded when $t$ is bounded away from $p-1$.
\end{lemma}
To prove \Cref{lem:iteration}, we need the following algebraic inequality.
\begin{lemma} \label{lem:alg_ineq}
Let $a, b > 0$, $\tau_1, \tau_2 \in [0,1]$, and $t > p-1 > 0$. Then,
\begin{equation*}
\begin{split}
&|b-a|^{p-2}(b-a)(\tau_1^p a^{-t} - \tau_2^p b^{-t}) \\
&\geq c_1 \left| \tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} \right|^p - c_2 |\tau_1-\tau_2|^p \left( a^{-t+p-1} + b^{-t+p-1} \right),
\end{split}
\end{equation*}
where $c_i = c_i(p, t) > 0$, $i=1, 2$, is bounded when $t$ is bounded away from $p-1$.
\end{lemma}
Note that \Cref{lem:alg_ineq} is a discrete version of
\begin{equation*}
|\nabla v|^{p-2} \nabla v \cdot \nabla (-v^{-t}\tau^p) \geq c_1 \left| \nabla \left(v^{\frac{-t+p-1}{p}} \tau \right) \right|^p - c_2 |\nabla \tau|^p v^{-t+p-1}.
\end{equation*}
The proof of \Cref{lem:alg_ineq} is provided in \Cref{sec:ineq}.
\begin{proof} [Proof of \Cref{lem:iteration}]
Let $\tau$ be an admissible cut-off function in the sense of \Cref{def:cut-off}. Since $\tau = 0$ outside $M_{\lambda r}$, the function $\varphi = -\tau^p u^{-t}$ is well defined. Using \Cref{lem:alg_ineq}, we have
\begin{equation*}
\begin{split}
&(f, -\tau^p u^{-t}) \geq \mathcal{E}^{\mu}(u, -\tau^p u^{-t}) \\
&= \int_{M_{\lambda r}} \int_{M_{\lambda r}} |u(y)-u(x)|^{p-2} (u(y)-u(x)) \left( \tau^p(x) u^{-t}(x) - \tau^p(y) u^{-t}(y) \right) \mu(x, \d y)\, \d x \\
&\quad + 2 \int_{M_{\lambda r}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} |u(y)-u(x)|^{p-2} (u(y)-u(x)) \tau^p(x) u^{-t}(x) \mu(x, \d y)\, \d x \\
&\geq c_1 \int_{M_{\lambda r}} \int_{M_{\lambda r}} \left| \tau(x) u^{\frac{-t+p-1}{p}} (x) - \tau(y) u^{\frac{-t+p-1}{p}}(y) \right|^p \mu(x, \d y)\, \d x \\
&\quad - c_2 \int_{M_{\lambda r}} \int_{M_{\lambda r}} |\tau(x)-\tau(y)|^p \left( u^{-t+p-1}(x) + u^{-t+p-1}(y) \right) \mu(x, \d y)\, \d x \\
&\quad - 2 \int_{M_{\lambda r}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} (u(x)-u(y))^{p-1} \tau^p(x) u^{-t}(x) {\bf 1}_{\lbrace u(y) \leq u(x) \rbrace} \mu(x, \d y)\, \d x \\
&=: c_1 I_1 - c_2 I_2 - I_3,
\end{split}
\end{equation*}
where $c_1$ and $c_2$ are constants given in \Cref{lem:alg_ineq}.
By \Cref{thm:sobolev_loc}, we obtain
\begin{equation*}
\begin{split}
I_1
&= \int_{M_{\lambda r}} \int_{M_{\lambda r}} \left| \tau(x) u^{\frac{-t+p-1}{p}} (x) - \tau(y) u^{\frac{-t+p-1}{p}}(y) \right|^p \mu(x, \d y)\, \d x \\
&\geq C\left\|\tau u^{\frac{-t+p-1}{p}} \right\|_{L^{p_{\star}}(M_{r})}^p -C r^{-ps_{\max}}\left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-s_k p} \right) \left\|\tau u^{\frac{-t+p-1}{p}}\right\|_{L^p(M_{\lambda r})}^p,
\end{split}
\end{equation*}
where
$ p_{\star} = \frac{np}{n-p\bar{s}}$.
For $I_2$, we use \Cref{lemma:cutoff} again to have
\begin{equation*}
\begin{split}
I_2
&= 2\int_{M_{\lambda r}} \int_{M_{\lambda r}} |\tau(x)-\tau(y)|^p u^{-t+p-1}(x) \,\mu(x, \d y)\, \d x \\
&\leq C r^{-ps_{\max}}\left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-s_k p} \right) \left\|u^{-t+p-1}\right\|_{L^1(M_{\lambda r})}.
\end{split}
\end{equation*}
For $I_3$, assuming that $\mathrm{supp}(\tau) \subset M_{(\lambda+1)r/2}$ and using \Cref{lemma:cutoff} we deduce
\begin{equation*}
\begin{split}
I_3
&\leq C \int_{M_{\lambda r}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} \left( u^{p-1}(x)+u_-^{p-1}(y) \right) \tau^p(x) u^{-t}(x) \mu(x, \d y)\, \d x \\
&\leq C r^{-ps_{\max}}\left( \sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-s_k p} \right) \left\|u^{-t+p-1}\right\|_{L^1(M_{\lambda r})} \\
&\quad + C\epsilon^{1-p} \left( \sup_{x \in M_{(\lambda+1)r/2}} \int_{\mathbb{R}^n \setminus M_{\lambda r}} u_-^{p-1}(y) \mu(x, \d y) \right) \left\|u^{-t+p-1}\right\|_{L^1(M_{\lambda r})}.
\end{split}
\end{equation*}
Moreover, we estimate
\begin{equation*}
\begin{split}
|(f, -\tau^p u^{-t})|
&\leq \epsilon^{1-p} \int_{\mathbb{R}^n} |f| \tau^p u^{-t+p-1} \,\d x \\
&\leq \epsilon^{1-p} \|f\|_{L^{q/(p\bar{s})}(M_{\lambda r})} \left\| \tau^p u^{-t+p-1} \right\|_{L^{q/(q-p\bar{s})}(M_{\lambda r})} \\
&= \epsilon^{1-p} \|f\|_{L^{q/(p\bar{s})}(M_{\lambda r})} \left\| \tau u^{\frac{-t+p-1}{p}} \right\|_{L^{pq/(q-p\bar{s})}(M_{\lambda r})}^p.
\end{split}
\end{equation*}
Using Lyapunov's inequality and Young's inequality, we have
\begin{equation*}
\|v\|_{pq/(q-p\bar{s})}^p \leq \|v\|_{p_\star}^{np/q} \|v\|_p^{(qp-np)/q} \leq \frac{n}{q} \omega \|v\|_{p_{\star}}^p + \frac{q-n}{q} \omega^{-n/(q-n)} \|v\|_p^p
\end{equation*}
for any $v \in L^{p_{\star}} \cap L^p$ and any $\omega > 0$. This yields that
\begin{equation*}
\begin{split}
|(f, -\tau^p u^{-t})|
&\leq \epsilon^{1-p} \|f\|_{L^{q/(p\bar{s})}(M_{\lambda r})} \left( \frac{n}{q} \omega \left\| \tau u^{\frac{-t+p-1}{p}} \right\|_{L^{p_\star}}^p + \frac{q-n}{q} \omega^{-n/(q-n)} \left\|\tau u^{\frac{-t+p-1}{p}} \right\|_{L^p}^p \right) \\
&\leq r^{-ps_{\max}\frac{q-n}{q}} \left( \frac{n}{q} \omega \left\| \tau^p u^{-t+p-1} \right\|_{L^{\gamma}} + \frac{q-n}{q} \omega^{-n/(q-n)} \left\|\tau^p u^{-t+p-1} \right\|_{L^1} \right).
\end{split}
\end{equation*}
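Here Lyapunov's inequality is applied with interpolation parameter $n/q$; indeed, since $p_{\star} = np/(n-p\bar{s})$,
\begin{equation*}
\frac{n/q}{p_{\star}} + \frac{1-n/q}{p} = \frac{n-p\bar{s}}{pq} + \frac{q-n}{pq} = \frac{q-p\bar{s}}{pq} = \frac{1}{pq/(q-p\bar{s})},
\end{equation*}
and in the last step we also used $\left\| \tau u^{\frac{-t+p-1}{p}} \right\|_{L^{p_\star}}^p = \left\| \tau^p u^{-t+p-1} \right\|_{L^{\gamma}}$, which holds because $\gamma = p_{\star}/p$, together with $\left\| \tau u^{\frac{-t+p-1}{p}} \right\|_{L^{p}}^p = \left\| \tau^p u^{-t+p-1} \right\|_{L^{1}}$.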
Combining all the estimates, we have
\begin{equation*}
\begin{split}
&\left\|\tau^p u^{-t+p-1} \right\|_{L^{\gamma}(M_{\lambda r})} \\
&\leq C r^{-ps_{\max}}\left( 1+\sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-s_k p} \right) \left\|u^{-t+p-1}\right\|_{L^1(M_{\lambda r})} \\
&\quad +Cr^{-ps_{\max}\frac{q-n}{q}} \left( \frac{n}{q} \omega \left\| \tau^p u^{-t+p-1} \right\|_{L^{\gamma}(M_{\lambda r})} + \frac{q-n}{q} \omega^{-n/(q-n)} \left\|\tau^p u^{-t+p-1} \right\|_{L^1(M_{\lambda r})} \right).
\end{split}
\end{equation*}
Taking $\omega = \varepsilon_0 r^{ps_{\max}\frac{q-n}{q}}$ with $\varepsilon_0 > 0$ small enough, we arrive at
\begin{equation*}
\begin{split}
\left\| u^{-1} \right\|_{L^{(t-p+1)\gamma}(M_r)}^{t-p+1}
&\leq \left\|\tau^p u^{-t+p-1} \right\|_{L^{\gamma}(M_{\lambda r})} \\
&\leq C\left( 1+\sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-s_k p} \right) r^{-ps_{\max}} \left\|u^{-1}\right\|_{L^{t-p+1}(M_{\lambda r})}^{t-p+1},
\end{split}
\end{equation*}
where $C$ depends on $n$, $p$, $p_{\star}$, $t$, $s_0$, $q$ and $\Lambda$, and is bounded when $t$ is bounded away from $p-1$. Since $\lambda \leq 2$, we obtain
\begin{equation*}
\sum_{k=1}^n \left( \lambda^{s_{\max}/s_k}-1 \right)^{-s_k p} \geq \sum_{k=1}^n \left( \lambda^{1/s_k}-1 \right)^{-s_k p} \geq \sum_{k=1}^n 2^{-p} = n2^{-p},
\end{equation*}
from which the desired result follows.
\end{proof}
The standard iteration technique proves the following lemma, see \cite{CK20,DK20}.
\begin{lemma} \label{lem:inf}
Under the same assumptions as in \Cref{lem:iteration}, for any $p_0 > 0$ there is a constant $C = C(n, p, p_{\star}, q, p_0, s_0, \Lambda) > 0$ such that
\begin{equation}\label{eq:infest}
\inf_{M_r} u \geq C \left( \fint_{M_{2r}} u(x)^{-p_0} \, \mathrm{d}x \right)^{-1/p_0}.
\end{equation}
\end{lemma}
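Let us briefly sketch the iteration, ignoring the terms controlled by the lower bound on $\epsilon$. With $\nu_j = p_0 \gamma^j$, $r_j = (1+2^{-j})r$, and $\lambda_j = r_j/r_{j+1} \in (1,2]$, \Cref{lem:iteration} applied with $t-p+1 = \nu_j$ gives
\begin{equation*}
\left\| u^{-1} \right\|_{L^{\nu_{j+1}}(M_{r_{j+1}})} \leq \left( C_j r^{-ps_{\max}} \right)^{1/\nu_j} \left\| u^{-1} \right\|_{L^{\nu_j}(M_{r_j})},
\end{equation*}
and iterating over $j \geq 0$ yields
\begin{equation*}
\sup_{M_r} u^{-1} \leq \prod_{j=0}^{\infty} \left( C_j r^{-ps_{\max}} \right)^{1/\nu_j} \left\| u^{-1} \right\|_{L^{p_0}(M_{2r})}.
\end{equation*}
The product converges since $C_j$ grows at most exponentially in $j$ while $\sum_j \nu_j^{-1} = \frac{\gamma}{p_0(\gamma-1)} = \frac{n}{p_0 p \bar{s}}$; moreover, $r^{-ps_{\max} \sum_j \nu_j^{-1}} = r^{-ns_{\max}/(\bar{s}p_0)}$ is comparable to $|M_{2r}|^{-1/p_0}$, which turns the $L^{p_0}$ norm on the right-hand side into the average appearing in \eqref{eq:infest}.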
The proof of \Cref{thm:weak_Harnack} follows from \Cref{thm:flipsigns}, \Cref{lem:inf} and the triangle inequality.
\section{H\"older estimates}\label{sec:hoeld}
This section is devoted to the proof of \Cref{thm:Holder}. The general scheme for the derivation of a priori interior H\"older estimates from the weak Harnack inequality in the non-local setting has been developed in \cite{DK20} and applied successfully to the anisotropic setting \cite{CK20} when $p=2$. We extend the result presented in \cite{CK20} to the general case $p > 1$.
Recall that the rectangles in \Cref{def:M_r} satisfy the following property. For $\lambda > 0$ and $\Omega \subset \mathbb{R}^n$ open, we have
\begin{equation*}
\mathcal{E}_\Omega^{\mu_{\mathrm{axes}}}(u\circ \Psi, v \circ \Psi) = \lambda^{-(n-\bar{s}p)s_{\max}/\bar{s}} \mathcal{E}_{\Psi(\Omega)}^{\mu_{\mathrm{axes}}}(u, v) \quad\text{for every}~ u, v \in V^{p,\mu_{\mathrm{axes}}}(\Omega|\mathbb{R}^n)
\end{equation*}
and
\begin{equation*}
(f \circ \Psi, \varphi \circ \Psi) = \lambda^{-ns_{\max}/\bar{s}} (f, \varphi) \quad\text{for every}~ f \in L^{q/(p\bar{s})}(\Omega),~ \varphi \in H^{p,\mu_{\mathrm{axes}}}_{\Omega}(\mathbb{R}^n),
\end{equation*}
where $\Psi : \mathbb{R}^n \to \mathbb{R}^n$ is a diffeomorphism given by
\begin{equation} \label{eq:Psi}
\Psi(x) =
\begin{pmatrix}
\lambda^{s_{\max}/s_1} & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & \lambda^{s_{\max}/s_n}
\end{pmatrix}x.
\end{equation}
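The exponents above can be checked directly: $\mu_{\mathrm{axes}}$ charges only differences along the coordinate axes, with one-dimensional kernels proportional to $|x_k-y_k|^{-1-s_k p}$ (cf.\ the formula in the proof of \Cref{prop:convergence}), so under the substitution $x \mapsto \Psi(x)$, $y_k \mapsto \lambda^{s_{\max}/s_k} y_k$, the $k$-th summand of the energy form picks up the factor
\begin{equation*}
\lambda^{-ns_{\max}/\bar{s}} \cdot \lambda^{-s_{\max}/s_k} \cdot \lambda^{(1+s_k p)s_{\max}/s_k} = \lambda^{-(n-\bar{s}p)s_{\max}/\bar{s}},
\end{equation*}
where the three factors stem from $\d x$, $\d y_k$, and the kernel, respectively, and we used $\sum_{k=1}^n s_k^{-1} = n/\bar{s}$. The factor $\lambda^{-ns_{\max}/\bar{s}}$ in the second identity is simply $|\det \Psi|^{-1}$.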
The rectangles from \Cref{def:M_r} are balls in a metric space $(\mathbb{R}^n,d)$, where the metric $d:\mathbb{R}^n\times\mathbb{R}^n\to[0,\infty)$ is defined as follows:
\[ d(x,y) = \sup_{k\in\{1,\dots,n\}} |x_k-y_k|^{s_k/s_{\max}}. \]
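As a quick illustration, the following Python snippet checks this correspondence numerically; it is a minimal sketch assuming, consistently with \eqref{eq:Psi}, that $M_r(x_0)$ is the open rectangle with half side-lengths $r^{s_{\max}/s_k}$.
\begin{verbatim}
import numpy as np

s = np.array([0.4, 0.7, 0.9])   # anisotropy exponents s_1, ..., s_n
s_max = s.max()

def d(x, y):
    # d(x, y) = max_k |x_k - y_k|^(s_k / s_max)
    return np.max(np.abs(x - y) ** (s / s_max))

def in_rectangle(x, x0, r):
    # membership in M_r(x0): |x_k - x0_k| < r^(s_max / s_k) for all k
    return bool(np.all(np.abs(x - x0) < r ** (s_max / s)))

rng = np.random.default_rng(0)
x0 = np.zeros_like(s)
for _ in range(10000):
    x = rng.uniform(-2, 2, size=s.size)
    r = rng.uniform(0.05, 1.0)
    assert in_rectangle(x, x0, r) == (d(x, x0) < r)
\end{verbatim}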
By the scaling property and covering arguments provided in \cite{CK20}, it is enough to show the following theorem.
\begin{theorem} \label{thm:Holder_0}
Let $\Lambda \ge 1$ and $s_1,\dots,s_n\in[s_0,1)$ be given for some $s_0\in(0,1)$. Let $1<p< n/\bar{s}$. Assume $f\in L^{q/(p\bar{s})}(M_1)$ for some $q>n$. There are $\alpha=\alpha(n,p,p_{\star},q,s_0,\Lambda) \in (0,1)$ and $C=C(n,p,p_{\star},q,s_0,\Lambda) > 0$ such that for each $\mu \in \mathcal{K}(p,s_0,\Lambda)$ and every $u \in V^{p,\mu}(M_1 | \mathbb{R}^n)$ satisfying
\begin{equation*}
\mathcal{E}^\mu(u, \varphi) = (f, \varphi) \quad \text{for any}~ \varphi \in H^{p,\mu}_{M_1}(\mathbb{R}^n),
\end{equation*}
we have $u \in C^\alpha$ at $0$ and
\begin{equation*}
|u(x)-u(0)| \leq C\left( \|u\|_{L^\infty(\mathbb{R}^n)} + \|f\|_{L^{q/(p\bar{s})}(M_{15/16})} \right) d(x,0)^\alpha
\end{equation*}
for all $x \in M_1$.
\end{theorem}
\begin{proof}
We assume that $2\|u\|_{L^\infty(\mathbb{R}^n)} + \kappa^{-1} \|f\|_{L^{q/(p\bar{s})}(M_{15/16})} \leq 1$ for some $\kappa > 0$ which will be chosen later. It is enough to construct sequences $\lbrace a_k \rbrace_{k=0}^\infty$ and $\lbrace b_k \rbrace_{k=0}^\infty$ such that $a_k \leq u \leq b_k$ in $M_{1/4^k}$ and $b_k - a_k = 4^{-\alpha k}$ for some $\alpha > 0$. For $k=0$, we set $a_0 = -1/2$ and $b_0 = 1/2$. Assume that we have constructed such sequences up to $k$ and let us choose $a_{k+1}$ and $b_{k+1}$.
We assume
\begin{equation} \label{eq:half_measure}
|\lbrace u \geq (b_k+a_k)/2 \rbrace \cap M_{\frac{1}{2}4^{-k}}| \geq |M_{\frac{1}{2}4^{-k}}|/2,
\end{equation}
and then prove that we can choose $a_{k+1}$ and $b_{k+1}$. If \eqref{eq:half_measure} does not hold, then we can consider $-u$ instead of $u$. Let $\Psi$ be the diffeomorphism given by \eqref{eq:Psi} with $\lambda = 4^{-k}$ and define
\begin{equation*}
v(x) = \frac{u(\Psi(x))-a_k}{(b_k-a_k)/2} \quad\text{and}\quad g(x) = \frac{\lambda^{ps_{\max}} f(\Psi(x))}{(b_k-a_k)/2}.
\end{equation*}
Then, $v \geq 0$ in $M_1$ and $\mathcal{E}^\mu_{M_1}(v,\varphi) = (g, \varphi)$ for every $\varphi \in H^{p,\mu}_{M_1}(\mathbb{R}^n)$. Moreover, it is easy to see that $v \geq 2(1-4^{\alpha j})$ in $M_{4^j}$ for every $j \geq 0$ by the induction hypothesis. By applying \Cref{thm:weak_Harnack}, we obtain
\begin{equation} \label{eq:v_WHI}
\begin{split}
&\left( \fint_{M_{1/2}} v^{p_0}(x) \,\mathrm{d}x \right)^{1/p_0} \\
&\leq C\inf_{M_{1/4}} v + C \sup_{x \in M_{15/16}} \left( \int_{\mathbb{R}^n\setminus M_1} v^-(y)^{p-1} \,\mu(x, \d y) \right)^{1/(p-1)} + \|g\|_{L^{q/(p\bar{s})}(M_{15/16})}.
\end{split}
\end{equation}
By taking $\alpha < ps_0 \frac{q-n}{q}$, we have
\begin{equation} \label{eq:g}
\|g\|_{L^{q/(p\bar{s})}(M_{15/16})} = 2\cdot 4^{(\alpha-ps_{\max}\frac{q-n}{q})k} \|f\|_{L^{q/(p\bar{s})}(M_{4^{-k} \cdot 15/16})} \leq 2\kappa.
\end{equation}
For $x \in M_{15/16}$ and each $j \geq 1$, one checks that $M_{4^j} \setminus M_{4^{j-1}} \subset \mathbb{R}^n \setminus M_{4^{j-3}}(x)$. Hence, using $v^- \leq 2(4^{\alpha j}-1)$ on $M_{4^j}$ and \eqref{assmu1},
\begin{equation} \label{eq:tail}
\begin{split}
\int_{\mathbb{R}^n \setminus M_1} v^-(y)^{p-1} \,\mu(x, \d y)
&\leq \sum_{j=1}^\infty \int_{M_{4^j} \setminus M_{4^{j-1}}} (2(4^{\alpha j}-1))^{p-1} \,\mu(x, \d y) \\
&\leq \sum_{j=1}^\infty (2(4^{\alpha j}-1))^{p-1} \mu(x, \mathbb{R}^n \setminus M_{4^{j-3}}(x)) \\
&\leq 4^{3ps_0} \left( \sum_{j=1}^l \Lambda(2(4^{\alpha j}-1))^{p-1} 4^{-ps_0 j} + 2^{p-1} \Lambda \sum_{j=l+1}^\infty 4^{(\alpha (p-1)-ps_0)j} \right).
\end{split}
\end{equation}
If we assume that $\alpha < \frac{ps_0}{2(p-1)}$, then we can make the last term in \eqref{eq:tail} as small as we want by taking $l = l(p,s_0)$ sufficiently large. Since the first term in \eqref{eq:tail} converges to 0 as $\alpha \to 0$, we have
\begin{equation} \label{eq:tail2}
C \sup_{x \in M_{15/16}} \left( \int_{\mathbb{R}^n\setminus M_1} v^-(y)^{p-1} \,\mu(x, \d y) \right)^{1/(p-1)} \leq \kappa
\end{equation}
by assuming further that $\alpha = \alpha(n, p, p_{\star}, q, s_0, \Lambda)$ is sufficiently small.
On the other hand, it follows from \eqref{eq:half_measure} that
\begin{equation} \label{eq:L_p0}
\left( \fint_{M_{1/2}} v^{p_0}(x) \,\mathrm{d}x \right)^{1/p_0} \geq \left( \frac{1}{|M_{1/2}|} \int_{M_{1/2} \cap \lbrace v \geq 1 \rbrace} v^{p_0}(x) \,\d x \right)^{1/p_0} \geq 2^{-1/p_0}.
\end{equation}
Combining \eqref{eq:v_WHI}, \eqref{eq:g}, \eqref{eq:tail2}, and \eqref{eq:L_p0}, and choosing $\kappa > 0$ sufficiently small, we arrive at $\inf_{M_{1/4}} v \geq \kappa_0$ for some $\kappa_0$. We take $a_{k+1} = a_k + \kappa_0 (b_k-a_k)/2$ and $b_{k+1}=b_k$, and make $\alpha$ and $\kappa_0$ small so that $1-\kappa_0/2 = 4^{-\alpha}$. Then $a_{k+1} \leq u \leq b_{k+1}$ in $M_{4^{-(k+1)}}$ and $b_{k+1}-a_{k+1} = 4^{-\alpha(k+1)}$, which finishes the proof.
\end{proof}
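Let us record how the two sequences imply the H\"older estimate: since $M_{4^{-k}}$ is the $d$-ball of radius $4^{-k}$ centered at the origin, any $x$ with $4^{-(k+1)} \leq d(x,0) < 4^{-k}$ belongs to $M_{4^{-k}}$, and hence
\begin{equation*}
|u(x)-u(0)| \leq b_k - a_k = 4^{-\alpha k} \leq 4^{\alpha}\, d(x,0)^{\alpha}.
\end{equation*}
Undoing the normalization of $u$ and $f$ then produces the constant in \Cref{thm:Holder_0}.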
\begin{appendix}
\section{Algebraic inequalities} \label{sec:ineq}
In this section we prove \Cref{lem:alg_ineq} using the series of lemmas below.
\begin{lemma} \label{lem:ineq1}
Let $a, b > 0$ and $t > p-1 > 0$. Then,
\begin{equation*}
|b-a|^{p-2}(b-a)(a^{-t} - b^{-t}) \geq t\left( \frac{p}{t-p+1} \right)^p \left| a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}} \right|^p.
\end{equation*}
\end{lemma}
\begin{proof}
We may assume that $b > a$. Let $f(x) = -x^{\frac{-t+p-1}{p}}$ and $g(x) = -x^{-t}$, then by using Jensen's inequality we have
\begin{equation*}
\begin{split}
\left| \frac{f(b)-f(a)}{b-a} \right|^p
&= \left| \fint_a^b f'(x) \,\d x \right|^p \leq \fint_a^b (f'(x))^p \,\d x \\
&= \frac{1}{t} \left( \frac{t-p+1}{p} \right)^p \fint_a^b g'(x) \,\d x = \frac{1}{t} \left( \frac{t-p+1}{p} \right)^p \frac{g(b)-g(a)}{b-a},
\end{split}
\end{equation*}
which proves the lemma.
\end{proof}
\begin{lemma} \label{lem:ineq2}
Let $a, b > 0$ and $t > p-1 > 0$. Then,
\begin{equation*}
|b-a|^{p-1} \min\lbrace a^{-t}, b^{-t} \rbrace \leq \left( \frac{p}{t-p+1} \right)^{p-1} \left| a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}} \right|^{p-1} \min\left\lbrace a^{\frac{-t+p-1}{p}}, b^{\frac{-t+p-1}{p}} \right\rbrace.
\end{equation*}
\end{lemma}
\begin{proof}
We may assume that $b > a$. Let $f(x) = -x^{\frac{-t+p-1}{p}}$, then
\begin{equation*}
\begin{split}
\left| \frac{f(b)-f(a)}{b-a} \right|^{p-1}
&= \left| \fint_a^b f'(x) \,\d x \right|^{p-1} = \left( \frac{t-p+1}{p} \right)^{p-1} \left( \fint_a^b x^{\frac{-t-1}{p}} \,\d x \right)^{p-1} \\
&\geq \left( \frac{t-p+1}{p} \right)^{p-1} \left( \fint_a^b b^{\frac{-t-1}{p}} \,\d x \right)^{p-1} = \left( \frac{t-p+1}{p} \right)^{p-1} \frac{b^{-t}}{b^{\frac{-t+p-1}{p}}},
\end{split}
\end{equation*}
which proves the lemma.
\end{proof}
\begin{lemma} \label{lem:ineq3}
Let $\tau_1, \tau_2 \in [0,1]$ and $p > 1$. Then,
\begin{equation*}
|\tau_1^p - \tau_2^p| \leq p |\tau_1-\tau_2| \max\lbrace \tau_1^{p-1},\tau_2^{p-1} \rbrace.
\end{equation*}
\end{lemma}
\begin{proof}
The desired inequality follows from the convexity of the function $f(\tau) = \tau^p$.
\end{proof}
\begin{lemma} \label{lem:ineq4}
Let $a, b > 0$, $\tau_1, \tau_2 \in [0,1]$, and $t > p-1 > 0$. Then,
\begin{equation} \label{eq:min}
\begin{split}
&\min\lbrace \tau_1^p, \tau_2^p \rbrace \left| a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}} \right|^p \\
&\geq 2^{1-p} \left| \tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} \right|^p - |\tau_1-\tau_2|^p \max \lbrace a^{-t+p-1}, b^{-t+p-1} \rbrace
\end{split}
\end{equation}
and
\begin{equation} \label{eq:max}
\begin{split}
&\max\lbrace \tau_1^p, \tau_2^p \rbrace \left| a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}} \right|^p \\
&\leq 2^{p-1} \left| \tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} \right|^p + 2^{p-1} |\tau_1-\tau_2|^p \max \lbrace a^{-t+p-1}, b^{-t+p-1} \rbrace.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
For \eqref{eq:min}, we assume that $\tau_1 \geq \tau_2$. Then, we obtain from
\begin{equation*}
\tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} = \tau_2 \left( a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}} \right) + (\tau_1-\tau_2) a^{\frac{-t+p-1}{p}}
\end{equation*}
that
\begin{equation*}
\begin{split}
2^{1-p} \left| \tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} \right|^p
&\leq \tau_2^p \left|a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}}\right|^p + |\tau_1-\tau_2|^p a^{-t+p-1} \\
&\leq \tau_2^p \left|a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}}\right|^p + |\tau_1-\tau_2|^p \max \lbrace a^{-t+p-1}, b^{-t+p-1} \rbrace,
\end{split}
\end{equation*}
from which \eqref{eq:min} follows. The other case $\tau_1 < \tau_2$ can be proved in the same way.
For \eqref{eq:max}, we assume that $\tau_1 \geq \tau_2$. Then
\begin{equation*}
\begin{split}
\max\lbrace \tau_1^p, \tau_2^p \rbrace \left| a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}} \right|^p
&= \left| \left( \tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} \right) + (\tau_2-\tau_1) b^{\frac{-t+p-1}{p}} \right|^p \\
&\leq 2^{p-1} \left| \tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} \right|^p + 2^{p-1} |\tau_2-\tau_1|^p b^{-t+p-1}.
\end{split}
\end{equation*}
The proof for the case $\tau_1 < \tau_2$ is the same.
\end{proof}
\begin{proof} [Proof of \Cref{lem:alg_ineq}]
We may assume that $b > a$. We begin with the equality
\begin{equation} \label{eq:AB}
\begin{split}
&|b-a|^{p-2}(b-a)(\tau_1^p a^{-t} - \tau_2^p b^{-t}) \\
&= (b-a)^{p-1}(a^{-t} - b^{-t})\tau_1^p + (b-a)^{p-1} b^{-t}(\tau_1^p-\tau_2^p) =: A + B.
\end{split}
\end{equation}
By \Cref{lem:ineq1} and \eqref{eq:min}, we have
\begin{equation} \label{eq:A}
\begin{split}
A
&\geq t\left( \frac{p}{t-p+1} \right)^p \left| a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}} \right|^p \min\lbrace \tau_1^p, \tau_2^p \rbrace \\
&\geq t\left( \frac{p}{t-p+1} \right)^p \left( 2^{1-p} \left| \tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} \right|^p - |\tau_1-\tau_2|^p \max \lbrace a^{-t+p-1}, b^{-t+p-1} \rbrace \right).
\end{split}
\end{equation}
For $B$, we use \Cref{lem:ineq2}, \Cref{lem:ineq3}, and Young's inequality to obtain
\begin{equation*}
\begin{split}
B
&\geq - p \left( \frac{p}{t-p+1} \right)^{p-1} \left| a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}} \right|^{p-1} b^{\frac{-t+p-1}{p}} |\tau_1-\tau_2| \max\lbrace \tau_1^{p-1}, \tau_2^{p-1} \rbrace \\
&\geq - (p-1) \left( \frac{p}{t-p+1} \right)^p \varepsilon^{p/(p-1)} \left| a^{\frac{-t+p-1}{p}} - b^{\frac{-t+p-1}{p}} \right|^p \max\lbrace \tau_1^p, \tau_2^p \rbrace - \frac{1}{\varepsilon^p} b^{-t+p-1} |\tau_1-\tau_2|^p
\end{split}
\end{equation*}
for any $\varepsilon > 0$. Using \eqref{eq:max}, we have
\begin{equation} \label{eq:B}
\begin{split}
B
&\geq - 2^{p-1} (p-1) \left( \frac{p}{t-p+1} \right)^p \varepsilon^{p/(p-1)} \left| \tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} \right|^p \\
&\quad - \left( 2^{p-1} (p-1) \left( \frac{p}{t-p+1} \right)^p \varepsilon^{p/(p-1)} + \frac{1}{\varepsilon^p} \right) |\tau_1-\tau_2|^p \max \lbrace a^{-t+p-1}, b^{-t+p-1} \rbrace.
\end{split}
\end{equation}
Combining \eqref{eq:AB}, \eqref{eq:A}, and \eqref{eq:B}, and then taking $\varepsilon$ so that $2^{p-1} \varepsilon^{p/(p-1)} = 2^{1-p}$, we arrive at
\begin{equation*}
\begin{split}
&|b-a|^{p-2}(b-a)(\tau_1^p a^{-t} - \tau_2^p b^{-t}) \\
&\geq c_1 \left| \tau_1 a^{\frac{-t+p-1}{p}} - \tau_2 b^{\frac{-t+p-1}{p}} \right|^p - c_2 |\tau_1-\tau_2|^p \left( a^{-t+p-1} + b^{-t+p-1} \right),
\end{split}
\end{equation*}
where
\begin{equation*}
c_1 = \frac{2^{1-p} p^p}{(t-p+1)^{p-1}} \quad\text{and}\quad c_2 = \left( t+2^{1-p}(p-1) \right) \left( \frac{p}{t-p+1} \right)^p + 2^{2(p-1)^2}.
\end{equation*}
Note that $c_1$ and $c_2$ are bounded when $t$ is bounded away from $p-1$.
\end{proof}
\section{Anisotropic dyadic rectangles}\label{sec:anisorect}
Let us briefly sketch the construction of anisotropic ``dyadic" rectangles. These objects can be used to prove the lower bound in $L^p$ for the sharp maximal function ${\bf M}^{\sharp}u$.
We construct anisotropic dyadic rectangles having the following properties:
\begin{enumerate}[(i)]
\item
For each integer $k \in \mathbb{Z}$, a countable collection $\lbrace Q_{k, \alpha} \rbrace_\alpha$ covers the whole space $\mathbb{R}^n$.
\item
Each $Q_k$ ($= Q_{k, \alpha}$ for some $\alpha$) has an interior of the form $\mathrm{Int}(Q_k) = M_{2^{-k}}(x)$. We call $Q_k$ an {\it anisotropic dyadic rectangle of generation $k$}.
\item
Every $Q_{k, \alpha}$ is contained in $Q_{k-1, \beta}$ for some $\beta$. We call $Q_{k-1, \beta}$ a {\it predecessor of $Q_{k, \alpha}$}.
\item
If $Q_{k, 0}, \dots, Q_{k, 2^n}$ are $2^n+1$ different anisotropic dyadic rectangles of generation $k$, then $\cap_{i=0}^{2^n} Q_{k, i} = \emptyset$.
\item
If $2^n+1$ different anisotropic dyadic rectangles $Q_{k_0}, \dots, Q_{k_{2^n}}$, $k_0 \leq \dots \leq k_{2^n}$, have a non-empty intersection, then $Q_j \subset Q_i$ for some $0 \leq i < j \leq 2^n$.
\end{enumerate}
\begin{remark} {\ }
\begin{enumerate}
\item
A predecessor may not be unique.
\item
$2^n$ different anisotropic dyadic rectangles from the same generation may have a non-empty intersection.
\end{enumerate}
\end{remark}
Such a family of anisotropic dyadic rectangles can be easily constructed. Since the sets are rectangles, it is sufficient to exemplify the construction in one dimension. Let $Q_0 = [0,1)$. Then, the countable collection $\lbrace Q_0+z \rbrace_{z \in \mathbb{Z}}$ constitutes the zeroth generation. Let $N=\lfloor 2^{s_{\max}/s_1} \rfloor$. In order to construct the first generation, we take a disjoint family of $N$ (left-closed and right-open) intervals in $Q_0$ of length $2^{-s_{\max}/s_1}$, starting from 0 and such that each interval starts at the right endpoint of the previous one. If the right endpoint of the last interval is 1, then these intervals constitute the first generation and there is nothing left to do. Thus, we assume from now on that $2^{s_{\max}/s_1} \notin \mathbb{Z}$. In this case, we add an interval $[1-2^{-s_{\max}/s_1}, 1)$ so that
\begin{equation*}
Q_0 = \left( \bigcup_{i=0}^{N-1} Q_{1, i} \right) \cup Q_{1, N},
\end{equation*}
where
\begin{equation*}
Q_{1, i} = [i 2^{-s_{\max}/s_1}, (i+1) 2^{-s_{\max}/s_1}) \quad\text{for}~ i=0, \dots, N-1, \quad Q_{1, N} = [1-2^{-s_{\max}/s_1}, 1),
\end{equation*}
Then, the collection $\lbrace Q_{1, i}+z \rbrace_{0 \leq i \leq N, z \in \mathbb{Z}}$ forms the first generation of intervals satisfying (i)-(iv).
\begin{figure}[htb]
\includegraphics[width=0.6\textwidth]{new1}
\caption{This figure shows the construction of the family $Q_{1,i}$.}
\end{figure}
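The following Python snippet is a minimal sketch of this one-dimensional construction of the first generation inside $Q_0$ (the exponents are illustrative parameters):
\begin{verbatim}
import math

def first_generation(s1, s_max):
    # N disjoint intervals of length 2**(-s_max/s1) starting at 0, plus,
    # when 2**(s_max/s1) is not an integer, the extra interval
    # [1 - 2**(-s_max/s1), 1) overlapping the previous one.
    ell = 2.0 ** (-s_max / s1)
    N = math.floor(2.0 ** (s_max / s1))
    intervals = [(i * ell, (i + 1) * ell) for i in range(N)]
    if intervals[-1][1] < 1.0:
        intervals.append((1.0 - ell, 1.0))
    return intervals

# s_max/s1 = 1.6: three intervals of length 2**(-1.6), plus the
# overlapping interval [1 - 2**(-1.6), 1)
print(first_generation(s1=0.5, s_max=0.8))
\end{verbatim}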
We continue to construct the intervals of generation 2 that fill in $Q_{1, i}$ for each $0 \leq i \leq N-2$. However, we have to be careful in filling in $Q_{1, N-1}$ and $Q_{1, N}$ since $Q_{1, N-1} \cap Q_{1,N} \neq \emptyset$. Suppose that we filled in $Q_{1, N-1}$ and $Q_{1, N}$ as above, i.e.,
\begin{equation*}
Q_{1, N-1} = \left( \bigcup_{i=0}^{N-1} Q_{2, i} \right) \cup Q_{2, N} \quad \text{and} \quad Q_{1, N} = \left( \bigcup_{i=0}^{N-1} \tilde{Q}_{2, i} \right) \cup \tilde{Q}_{2, N}
\end{equation*}
for some intervals $Q_{2, i}$ and $\tilde{Q}_{2, i}$, $0 \leq i \leq N$, of length $4^{-s_{\max}/s_1}$. Let $K$ be the smallest integer such that $\overline{Q_{2, K}} \cap Q_{1, N} \neq \emptyset$. Then, we have
\begin{equation*}
Q_{1, N-1} \cup Q_{1, N} = \left( \bigcup_{i=0}^K Q_{2,i} \right) \cup \left( \bigcup_{i=0}^{N-1} \tilde{Q}_{2, i} \right) \cup \tilde{Q}_{2, N}
\end{equation*}
and at most two different intervals among $\lbrace Q_{2, 0}, \dots, Q_{2, K}, \tilde{Q}_{2,0}, \dots, \tilde{Q}_{2, N} \rbrace$ can intersect. Therefore, these intervals constitute the second generation satisfying (i)-(iv).
\begin{figure}[htb]
\includegraphics[width=0.8\textwidth]{new4}
\caption{This figure shows the construction of generation 2.}
\end{figure}
In this way, we construct intervals of generation $k$ for all $k \geq 0$.
Let us now construct intervals of generation $k < 0$. It is easy to observe that a collection $\lbrace Q_{-1}+ Nz\rbrace_{z \in \mathbb{Z}}$ of intervals of generation $-1$ satisfies (i)-(iv), where
\begin{equation*}
Q_{-1} = [0, 2^{s_{\max}/s_1}).
\end{equation*}
For generation $-2$, let $K$ be the largest integer such that $Q_{-1}+NK \subset Q_{-2}$, where $Q_{-2} = [0, 4^{s_{\max}/s_1})$. Then, the intervals $Q_{-2}+N(K+1)z$, $z \in \mathbb{Z}$, form generation $-2$, which satisfies (i)-(iv). We continue this process to construct intervals of all generations $k < 0$.
We show that the intervals constructed in this way satisfy the property (v) as well. Suppose that three different intervals $Q_{k_0}, Q_{k_1}$, and $Q_{k_2}$, $k_0 \leq k_1 \leq k_2$, have a non-empty intersection. If $k_1 = k_2$, then $Q_{k_1} \subset Q_{k_0}$ or $Q_{k_2} \subset Q_{k_0}$. If $k_1 < k_2$, then either $Q_{k_2} \subset Q_{k_1}$ or not. In the former case, we are done. In the latter case, $Q_{k_2} \subset \tilde{Q}_{k_1}$ for some $\tilde{Q}_{k_1} \neq Q_{k_1}$, which reduces to the case $k_1=k_2$.
\section{Sharp maximal function theorem} \label{sec:sharp_maximal}
In this section we prove \Cref{thm:sharp_maximal} by using the anisotropic dyadic rectangles. For $u \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$, we define a dyadic maximal function ${\bf M}_du$ by
\begin{equation*}
{\bf M}_du(x) = \sup_{x \in Q} \fint_{Q} |u(y)| \,\mathrm{d}y,
\end{equation*}
where the supremum is taken over all anisotropic dyadic rectangles $Q$. Since
\begin{equation} \label{eq:MdM}
{\bf M}_d u \leq {\bf M}u,
\end{equation}
\Cref{thm:maximal} also holds for the dyadic maximal function ${\bf M}_du$. We first prove a good-lambda estimate using the dyadic maximal function. See \cite[Theorem 3.4.4]{GrafakosMF}.
\begin{theorem} \label{thm:good_lambda}
Let $s_1, \dots, s_n \in [s_0, 1)$ be given for some $s_0 \in (0,1)$. There exists a constant $C = C(n, s_0) > 0$ such that
\begin{equation*}
|\lbrace x \in \mathbb{R}^n: {\bf M}_d u(x) > 2\lambda, {\bf M}^{\sharp}u(x) \leq \gamma \lambda \rbrace| \leq C\gamma |\lbrace x \in \mathbb{R}^n: {\bf M}_d u(x) > \lambda \rbrace|
\end{equation*}
for all $\gamma>0$, $\lambda > 0$, and $u \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$.
\end{theorem}
\begin{proof}
Let $\Omega_\lambda = \lbrace x \in \mathbb{R}^n: {\bf M}_d u(x) > \lambda \rbrace$. We may assume that $|\Omega_\lambda|<+\infty$ since otherwise there is nothing to prove. For each $x \in \Omega_\lambda$, we find a maximal anisotropic dyadic rectangle $Q^x$ such that
\begin{equation} \label{eq:Qx}
x \in Q^x \subset \Omega_\lambda \quad\text{and}\quad \fint_{Q^x} |u| > \lambda.
\end{equation}
There are at most $2^n$ different maximal anisotropic dyadic rectangles of the same generation satisfying \eqref{eq:Qx}, but we can still choose any one of them. Let $\lbrace Q_j \rbrace$ be the collection of all such rectangles $Q^x$ for all $x \in \Omega_{\lambda}$. Then, we have $\Omega_\lambda = \cup_j Q_j$. Note that different rectangles $Q_j$ may intersect, but the intersection is contained in at most $2^n$ different maximal rectangles of the same generation. This is a consequence of properties (iv) and (v) of anisotropic dyadic rectangles. Hence,
\begin{equation*}
\sum_{j} |Q_j| \leq 2^n |\Omega_\lambda|.
\end{equation*}
Therefore, the desired result follows once we have
\begin{equation} \label{eq:Q_j}
|\lbrace x \in Q_j: {\bf M}_d u(x) > 2\lambda, {\bf M}^{\sharp} u(x) \leq \gamma \lambda \rbrace| \leq C\gamma |Q_j|
\end{equation}
for some $C = C(n, s_0)$. Indeed, one can prove \eqref{eq:Q_j} by following the second paragraph of the proof of \cite[Theorem 3.4.4]{GrafakosMF}, using \Cref{thm:maximal} for ${\bf M}_d$, and replacing \cite[Equation (3.4.8)]{GrafakosMF} by
\begin{equation*}
\frac{1}{\lambda} \int_{Q_j} |u(y) - (u)_{Q_j'}| \,\mathrm{d}y \leq \frac{2^{ns_{\max}/\bar{s}}}{\lambda} \frac{|Q_j|}{|Q_j'|} \int_{Q_j'} |u(y) - (u)_{Q_j'}| \,\mathrm{d}y \leq \frac{2^{n/s_0}}{\lambda} |Q_j| {\bf M}^{\sharp}u(\xi_j)
\end{equation*}
for all $\xi_j \in Q_j$, where $Q_j'$ is any one of the predecessors of $Q_j$. Here we used that consecutive generations scale by a factor $2^{-s_{\max}/s_i}$ in the $i$-th coordinate, so that $|Q_j'| = 2^{ns_{\max}/\bar{s}} |Q_j| \leq 2^{n/s_0} |Q_j|$.
\end{proof}
\begin{theorem} \label{thm:Md}
Let $s_1, \dots, s_n \in [s_0, 1)$ be given for some $s_0 \in (0,1)$, and let $0<p_0 \leq p<\infty$. Then, there is a constant $C = C(n, p, s_0) > 0$ such that for all functions $u \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ with ${\bf M}_du \in L^{p_0}(\mathbb{R}^n)$ we have
\begin{equation*}
\|{\bf M}_du\|_{L^p(\mathbb{R}^n)} \leq C\|{\bf M}^{\sharp}u \|_{L^p(\mathbb{R}^n)}.
\end{equation*}
\end{theorem}
\Cref{thm:Md} can be proved in the same way as in the proof of \cite[Theorem 3.4.5]{GrafakosMF} except that we use \Cref{thm:good_lambda} instead of \cite[Theorem 3.4.4]{GrafakosMF}. Finally, we combine the inequality
\begin{equation*}
\|u\|_{L^p(\mathbb{R}^n)} \leq \|{\bf M}_du\|_{L^p(\mathbb{R}^n)},
\end{equation*}
which comes from the Lebesgue differentiation theorem, and \Cref{thm:Md} to conclude \Cref{thm:sharp_maximal}. See \cite[Corollary 3.4.6]{GrafakosMF}.
\section{Pointwise convergence of the fractional orthotropic \texorpdfstring{$p$}{p}-Laplacian}
This section provides the proof of pointwise convergence of the fractional orthotropic $p$-Laplacian as $s \nearrow 1$.
\begin{proposition} \label{prop:convergence}
Let $u \in C^2(\mathbb{R}^n) \cap L^{\infty}(\mathbb{R}^n)$ and $x \in \mathbb{R}^n$ be such that $\partial_iu(x) \neq 0$ for all $i = 1, \dots, n$. Let $s_i = s$ for all $i=1, \dots, n$. Let $L$ be the operator in \eqref{def:nonlocaloperator} with $\mu = \mu_{\mathrm{axes}}$ and $A^{p}_{\mathrm{loc}}$ be as in \eqref{eq:Aploc}. Then, $Lu(x) \to A^{p}_{\mathrm{loc}}u(x)$ as $s \nearrow 1$ up to a constant.
\end{proposition}
\begin{proof}
Let us fix a point $x \in \mathbb{R}^n$ with $\partial_iu(x) \neq 0$. For each $i = 1, \dots, n$, let us define $u_i : \mathbb{R} \to \mathbb{R}$ by $u_i(x_i) = u(x_1, \dots, x_i, \dots, x_n)$ as a function of one variable. Then $u_i \in C^2(\mathbb{R}) \cap L^{\infty}(\mathbb{R})$ and $u_i'(x_i) \neq 0$. We write
\begin{equation*}
\begin{split}
Lu(x) = \sum_{i=1}^{n} s(1-s) \int_{\mathbb{R}} \frac{|u_i(y_i) - u_i(x_i)|^{p-2} (u_i(y_i) - u_i(x_i))}{|x_i-y_i|^{1+sp}} \,\d y_i = - \sum_{i=1}^n (-\partial^2)^{s}_{p} u_i(x_i),
\end{split}
\end{equation*}
which is the sum of one-dimensional fractional $p$-Laplacians. By \cite[Theorem 2.8]{BucSqu21}, we have
\begin{equation*}
-(-\partial^2)^{s}_{p} u_i(x_i) \to \frac{\d}{\d x_i} \left( \left| \frac{\d u_i}{\d x_i}(x_i) \right|^{p-2} \frac{\d u_i}{\d x_i}(x_i) \right)
\end{equation*}
as $s \nearrow 1$, for each $i=1, \dots, n$, up to a constant depending on $p$ only. Consequently, summing over $i$ gives $Lu(x) \to A^{p}_{\mathrm{loc}}u(x)$ as $s \nearrow 1$.
\end{proof}
\end{appendix}
\subsection*{Conflict of Interest}
The authors state no conflict of interest.
\section{Introduction \label{sec:intro}}
Light nuclei at the neutron dripline display exotic properties connected to the large spatial distribution of weakly bound valence neutrons, such as halo and clustering. A special class of halo nuclei are two-neutron halo nuclei like $^6$He, $^{11}$Li and $^{14}$Be, known as Borromean nuclei, since each can be described as a three-body system without any bound two-body subsystem. They provide a good environment to study dineutron correlations, which are expected to be a key element in their stabilization \cite{sag15}. Different methods have been advocated and used to probe the properties of Borromean nuclei, such as Coulomb dissociation \cite{nak06,aum13}, dineutron decay \cite{hag15}, and quasi-free scattering reactions \cite{kik16}. As mentioned, the subsystems constituted of one neutron and the remaining fragment ($^5$He, $^{10}$Li, $^{13}$Be) are unbound and their continua exhibit a resonant structure.
In this paper we present a study of the spectroscopy of the unbound $^{13}$Be nucleus obtained by measuring the invariant mass of the $^{12}$Be-neutron system resulting from the decay of the $^{13}$Be system produced via the quasi-free scattering reaction $^{14}$Be($p$,$pn$).
The unbound nature of $^{13}$Be was suggested more than 30 years ago \cite{pos66, art70}, and confirmed in 1973 \cite{bow73}. Several experiments have attempted to study the spectroscopy of $^{13}$Be via both the missing-mass and invariant-mass techniques, using charge exchange \cite{mar15}, fragmentation \cite{tho00}, proton removal from $^{14}$B \cite{lec04,ran14,rib18} and neutron removal from $^{14}$Be \cite{sim07,kon10,aks13a,aks13b}. The missing-mass technique offers the advantage of yielding the absolute energy above the one-neutron decay threshold, and typically allows one to explore a larger range in excitation energy above the two-neutron separation threshold. Using this method, resonances at $\sim$2, 5, 7, and 10~MeV were observed in Ref.~\cite{kor95}, and at 1.22(10), 2.10(16), 4.14(12), 5.09(14), and 7.0(2)~MeV in \cite{bel98}. Invariant mass spectra from different experiments display a peak at about 0.5~MeV above the $^{12}$Be+neutron threshold, and a broader structure around 2~MeV. As already discussed in Ref.~\cite{nak17}, the spectral shape strongly differs depending on the production mechanism, namely whether $^{13}$Be is produced starting from $^{14}$Be \cite{sim07,kon10, aks13a,aks13b} or $^{14}$B \cite{lec04,ran14,rib18}. Even limiting ourselves to the first case, the mechanism adopted in this work, different interpretations of the relative energy spectrum have been provided. Ref.~\cite{kon10} interprets the low-lying peak as a 1/2$^-$ ($\ell$=1) intruder state that appears due to the quenching of the $N$=8 spin-orbit shell gap, and the structure around 2~MeV as a 5/2$^+$ ($\ell$=2) state. This interpretation is based on the analysis of the transverse momentum distribution using $s$, $p$ and $d$ waves, corroborated by shell-model calculations, and is in agreement with predictions by \cite{bla07}. Ref.~\cite{aks13b} makes a synthesis of existing experimental results, with special emphasis on those obtained from proton-induced one-neutron removal \cite{kon10}. Nevertheless, the analysis of the transverse momentum distribution performed in Ref.~\cite{aks13b} yields quite different conclusions with respect to Ref.~\cite{kon10}: a much stronger $d$-wave component (dominant above 2.6~MeV), and a dominance of $s$-wave (80(10)\%) around 0.5~MeV, instead of $p$-wave.
This diversity and occasional inconsistency in the positions and spin assignments of the states of $^{13}$Be indicate that the standard fitting procedures used for the analysis of these spectra may lack some constraints on the possible structures, due to the complexity of the $^{13}$Be spectrum. As such, in this work we study the $^{14}$Be$(p,pn)$ reaction using a novel method, proposed in \citep{gomezramosplb17}, which uses consistent two- and three-body models for $^{14}$Be and $^{13}$Be and is able to provide predictions for the positions and weights of the structures of the spectrum, thus reducing the ambiguities in the analysis.
Part of the complexity of the $^{13}$Be continuum spectrum stems from the admixtures of single-particle structures with core-excited components. In fact, core excitation has been postulated as a key element to understand the formation of Borromean systems \cite{tan08,pot10}, but it is very difficult to pin down. The level scheme of $^{12}$Be is well established. A strong excitation of the $^{12}$Be(2$^+$) state in inelastic proton scattering, consistent with a strong quadrupole deformation, provided a first evidence of $N=8$ shell gap quenching in $^{12}$Be \cite{iwa00}. Furthermore, neutron removal from $^{12}$Be revealed that the last neutrons have a significant ($2s_{1/2}$)$^2$ +($1d_{5/2}$)$^2$ configuration and that there is only of order 30\% of the $(1p_{1/2})^2$ closed shell component \cite{pai06}.
In this experiment we were able to measure with high statistics the possible $^{12}$Be(2$^+$, 1$^-$) core excited components that decay via gamma rays.
The experiment and the results on the spectroscopy of $^{13}$Be are presented in Sec.~\ref{sec:level2}, while their interpretation follows in Sec.~\ref{sec:level4} after a brief description of the theoretical framework.
\section{\label{sec:level2} Experiment}
The experiment was performed at the Radioactive Isotope Beam Factory operated by the RIKEN Nishina Center and the Center for Nuclear Study (CNS) of the University of Tokyo. Secondary
beams were produced and separated by the BigRIPS fragment separator \cite{fuk13}, using projectile fragmentation of a $^{48}$Ca primary beam at 345 MeV/nucleon with a typical intensity of 400 particle nA on a Be target. Fragmentation products were detected and identified by using plastic scintillators and multi-wire drift chambers (MWDCs) positioned along the BigRIPS line.
The main components of the cocktail beams were $^{11}$Li, $^{14}$Be, and $^{17}$B (80\%, 12\%, and 8\%, respectively) and impinged on the secondary target with average energies of 246, 265 and 277 MeV/nucleon, respectively. The secondary target was a 15-cm thick liquid hydrogen target surrounded by the Time Projection Chamber (TPC) of the MINOS device \cite{obe14}. The TPC, in conjunction with the beam tracking MWDC detectors, acted as a vertex tracker, allowing improved invariant-mass and gamma-ray spectroscopy resolution. The combination of the thick MINOS target and the intense secondary beams from RIBF ($\sim$ 10$^5$ pps) was a key ingredient to obtain enough statistics for a kinematically complete measurement.
A rather complex detection system (Fig. \ref{f:stp}) was deployed to perform an exclusive measurement.
The knocked-out neutron was measured by the WINDS array of plastic scintillators \cite{yak12}. Its kinetic energy was deduced with the time of flight technique. The recoil proton was tracked first in the TPC, then in an MWDC, and subsequently traversed an array of plastic scintillators allowing the measurement of its kinetic energy via the time of flight technique. The ensemble of the MWDC and the plastic scintillator wall is hereafter called the recoil proton (RP) detector. The identification and momentum analysis of the heavy charged fragment were achieved via tracking through the SAMURAI \cite{kob13} dipole magnet with a set of MWDCs placed before and after the magnet, combined with the energy loss and time of flight measurements in an array of plastic scintillators placed at the focal plane of SAMURAI. The momentum could then be deduced from the measured trajectory. The dipole gap was kept under vacuum using a chamber equipped with thin exit windows \cite{shi13} so as to reduce to a minimum the amount of material encountered by both the fragments and neutrons. The decay neutron was detected in the two walls of plastic scintillators of the NEBULA array \cite{nak16}. The efficiency of the NEBULA array for the detection of one neutron is $\sim$ 35$\%$ at the beam energy of this experiment. The WINDS and recoil proton detector covered angles between 20$^\circ$ and 60$^\circ$, and between 30$^\circ$ and 65$^\circ$, respectively, allowing the selection of high-momentum-transfer events corresponding to quasi-free scattering \cite{mar66,mar73,aum13}.
\begin{figure}[t]
\begin{center}
\fbox{\includegraphics[trim={16cm 8cm 3cm 5cm},clip,width=0.37\textwidth]{20141003_mod}}
\caption{\label{f:stp} Scheme of the experimental setup. }
\end{center}
\end{figure}
In this process, the dominant mechanism for the knockout reaction is a single interaction between the incident particle and the struck nucleon, which yields kinematics for the $(p,pn)$ reaction very close to those of free scattering. The angular correlation between the scattered neutron and the recoil proton is shown in Fig.~\ref{f:qfs} (left). The opening angle in the laboratory frame peaks at 85$^\circ$, close to the 86$^\circ$ predicted by the QFS kinematics simulation of Ref. \cite{pan16}. This confirms that the high-momentum transfer ($>$~1 fm$^{-1}$) events selected by the detection system correspond to QFS. Invariant-mass resolution and efficiency have been determined via a GEANT4 simulation \cite{geant} with the code used in \cite{nak16}. The resolution follows a FWHM=$0.587\sqrt{E_r}$~MeV law and is shown in Fig.~\ref{f:qfs} (right). The efficiency is also shown in Fig.~\ref{f:qfs} (right) and is estimated assuming 100\% transmission of the beam and the fragment. The transmission (including tracking detector efficiencies and losses of beam and fragment in the thick MINOS target) is evaluated separately from experimental data by taking the average of the values obtained for $^{12}$Be and $^{14}$Be beam transmission (65.0(1)\% and 62.9(2)\%, respectively), and corresponds to 64(1)\%.
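The $\sim$86$^\circ$ value can be recovered from elementary two-body kinematics. The following Python snippet is a rough sketch assuming free $p$-$n$ elastic scattering at 90$^\circ$ in the center of mass, with an average nucleon mass and neglecting binding and Fermi motion in $^{14}$Be:
\begin{verbatim}
import math

def opening_angle_deg(T_beam, m=938.918):
    # Lab opening angle for free N-N elastic scattering at 90 deg CM;
    # T_beam: kinetic energy per nucleon [MeV], m: nucleon mass [MeV/c^2].
    E0 = m + T_beam
    p0 = math.sqrt(T_beam * (T_beam + 2.0 * m))
    beta = p0 / (E0 + m)                          # CM velocity
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    E_star = 0.5 * math.sqrt((E0 + m) ** 2 - p0 ** 2)  # CM energy per nucleon
    p_star = math.sqrt(E_star ** 2 - m ** 2)
    # both nucleons emerge at tan(theta) = p*/(gamma*beta*E*)
    theta = math.atan2(p_star, gamma * beta * E_star)
    return 2.0 * math.degrees(theta)

print(opening_angle_deg(265.0))   # ~86 deg at the 14Be beam energy
\end{verbatim}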
$^{12}$Be core excited states that decay via gamma emission (2$^+$, 1$^-$) were identified using a reduced version of the DALI2 gamma array consisting of 68 crystals, partially covering angles between 34$^\circ$ and 115$^\circ$ and arranged in order to avoid interference with the $(p,pn)$ measurement. The photopeak efficiency of this reduced version of DALI2 (called DALI2 hereafter) was 8.9(5)\% and 7.0(4)\% at 2.1 and 2.7 MeV, respectively. This experiment was not sensitive to the $^{12}$Be(0$^+_2$) core excited state, as this state is isomeric and its gamma-decay lifetime (1.87(15) $\mu$s) is too long for in-flight detection \cite{shi07}.
\begin{figure}[t]
\begin{center}
\includegraphics[trim={0cm -1cm 0.cm 0cm},clip,width=0.21\textwidth]{kine_85deg}
\includegraphics[trim={1.7cm 8.3cm 5cm 8cm},clip,width=0.21\textwidth]{eff_invariantmass}
\caption{\label{f:qfs} Left: angular correlation in the laboratory frame of the proton and neutron in the $^{14}$Be$(p,pn)$ reaction. The red line corresponds to the kinematics calculated with the QFS code of Ref. \cite{pan16}. Right: efficiency (red line) and resolution (blue line) for the $^{13}$Be invariant mass measurement with NEBULA and SAMURAI.}
\end{center}
\end{figure}
The invariant mass spectrum of $^{13}$Be is shown in Fig.~\ref{f:erel} (left). The absolute cross section is determined taking into account the efficiency for invariant mass measurement and the fragment transmission. The error bars take into account the uncertainty on the transmission (1\%), on the neutron detection efficiency (2.5\%) and the statistical uncertainty on the number of beam and fragment particles. The spectrum is characterized by a prominent peak with maximum at $\sim$~0.48~MeV and a broader structure, peaked at $\sim$~2.3 MeV, extending from $\sim$1~MeV to $\sim$5~MeV.
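For reference, the relative energy is reconstructed event by event as $E_{\mathrm{rel}} = M_{\mathrm{inv}} - m(^{12}\mathrm{Be}) - m_n$, with $M_{\mathrm{inv}}$ the invariant mass of the $^{12}$Be-neutron pair. A minimal sketch of this reconstruction (with approximate masses and placeholder momenta, not the actual analysis code):
\begin{verbatim}
import numpy as np

M_N  = 939.565    # neutron mass [MeV/c^2]
M_12 = 11203.0    # approximate 12Be mass [MeV/c^2]

def relative_energy(p_frag, p_n):
    # E_rel = M_inv - m(12Be) - m_n from lab momenta [MeV/c]
    p_frag, p_n = np.asarray(p_frag), np.asarray(p_n)
    e_frag = np.sqrt(M_12 ** 2 + p_frag @ p_frag)
    e_n = np.sqrt(M_N ** 2 + p_n @ p_n)
    p_tot = p_frag + p_n
    m_inv = np.sqrt((e_frag + e_n) ** 2 - p_tot @ p_tot)
    return m_inv - M_12 - M_N

# placeholder momenta for a ~250 MeV/nucleon fragment and a decay neutron
print(relative_energy([0.0, 0.0, 9000.0], [10.0, 0.0, 760.0]))
\end{verbatim}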
The contributions corresponding to the $^{12}$Be(2$^+$) and $^{12}$Be(1$^-$) core excited states have been determined via coincidences with the 2.1 and 2.7 MeV gamma transitions, respectively, and are shown for comparison after correcting for the gamma-detection efficiency. The uncertainty on the gamma detection efficiency (6\%) has been added to the error bars.
The gamma spectrum of $^{12}$Be is shown in Fig.~\ref{f:erel} (right). The 2.1(0.1) and 2.7(0.4) MeV transitions are consistent with the known transitions deexciting the 2$^+$ and 1$^-$ excited states of $^{12}$Be to its ground state. We note that the same gamma transitions were observed in Ref.~\cite{kon10}, though with very limited statistics, while \cite{rib18} observed only the 2.1~MeV transition. As can be better seen in the inset of Fig. \ref{f:erel} (left), the 2.1~MeV one is observed in coincidence with structures peaking at $\sim$0 and $\sim$3 MeV in the relative energy spectrum, as in \cite{rib18}. The 2.7~MeV one is observed in coincidence with a structure at $\sim$3 MeV. The contribution from the Compton events associated with the 2.7~MeV transition summing up to the 2.1~MeV transition has been estimated via the simulation and subtracted from the cross section.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1\columnwidth]{xsec_corex}
\caption{\label{f:erel} Left: relative energy spectrum of $^{13}$Be and contributions from core excited components. The inset shows the spectrum in logarithmic scale. Right: gamma spectrum of $^{12}$Be. The two transitions are reproduced by the sum of an exponential background and the response functions (dashed curves) of DALI2 to a transition at 2.1 MeV and 2.7 MeV, obtained via a GEANT4 simulation.}
\end{center}
\end{figure}
Based on this, we built the partial level scheme presented in Fig.~\ref{f:ls}. Only the levels that can be clearly deduced from the present data are shown. The 2.3 MeV peak observed in the relative energy spectrum likely corresponds to the well-accepted 5/2$^+$ state in $^{13}$Be, whose tail may be responsible for the $\sim$0~MeV structure observed in coincidence with the 2$^+$ state in $^{12}$Be (as discussed, for instance, in Ref.~\cite{aks13b}). This, together with the spin-parity assignment of the lowest level at 0.48 MeV, will be further discussed in Sec.~\ref{sec:level4}, where we also analyze the information from the corresponding transverse momentum distributions.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\columnwidth]{13BeLevels_v2}
\caption{\label{f:ls} Partial level scheme based on the observed neutron-$^{12}$Be relative energy spectrum and gamma-neutron-$^{12}$Be coincidences. Transitions in the relative energy spectrum are represented by lines (black for transitions to the ground state of $^{12}$Be, blue and green for transitions populating $^{12}$Be(2$^+$) and $^{12}$Be(1$^-$), respectively). Gamma transitions are represented by the red wavy arrows. Energies are given in MeV.}
\end{center}
\end{figure}
\section{\label{sec:level4}Theoretical analysis}
\subsection{\label{ssec:level1}Three-body calculations}
In order to better understand the experimental results, we have performed structure calculations for $^{14}$Be using a three-body model $(^{12}\text{Be}+n+n)$ within the hyperspherical formalism~\cite{Desc03,IJThompson04,MRoGa05}. Details on how the wave function is built can be found, for instance, in Ref.~\cite{JCasal16} and references therein. This consists in diagonalizing the Hamiltonian in an analytical transformed harmonic oscillator (THO) basis for three-body systems. The method has been previously applied with great success to describe direct reactions induced by three-body projectiles~\cite{JCasal15,casalplb17,gomezramosplb17,Arazi18}.
Three-body calculations require, as input, the binary interactions between all constituents. For the $n$-$n$ potential, we employ the GPT tensor interaction~\cite{GPT}. This potential, although simpler than the more sophisticated AV18~\cite{av18}, CD-Bonn \cite{Bonn} or Reid93~\cite{reid93} interactions, reproduces $NN$ observables up to 300 MeV, so it is suitable for three-body structure calculations. In order to get a realistic description of $^{14}$Be, the $^{12}$Be-$n$ interaction needs to reproduce the properties of the unbound binary subsystem $^{13}$Be. From previous fragmentation and knockout experiments, it is mostly accepted that $^{13}$Be exhibits a low-lying $s$-wave state and a $d$-wave resonance around 2~MeV relative energy~\cite{tho00,lec04,sim07}. There are, however, large discrepancies in the interpretation of the $^{13}$Be spectrum from different experimental works~\cite{aks13b,rib18}, many of which are associated with the long-debated existence of a low-lying $p$-wave resonance and the contribution from excited $^{12}$Be components~\cite{kon10}. For this reason, we make use of different core-neutron potentials to study the sensitivity of the structure and reaction observables to the properties of $^{13}$Be.
In order to include some excited-core components in the description of $^{14}$Be, we parametrize the $^{12}$Be-$n$ interaction with a deformed Woods-Saxon potential with $l$-dependent central and spin-orbit terms. Following Ref.~\cite{tar04}, we introduce an effective quadrupole deformation parameter of $\beta_2=0.8$, and the $0^+$ ground state and the first $2^+$ excited state in $^{12}$Be are coupled by means of a simple rotational model~\cite{IJThompson04}. In this scheme, no other excited states of $^{12}$Be are included. Three-body calculations including also the 1$^-$ state in a consistent way are not available. As shown in Ref.~\cite{tar04}, despite the deformation the $s$-wave interaction still gives rise to a 1/2$^+$ virtual state in $^{13}$Be. The potential parameters $V_c^{(0,2)}$ and $V_{ls}$ are adjusted to fix the scattering length of this virtual state and to provide a 5/2$^+$ resonance just below the 2$^+$ excitation threshold in $^{12}$Be, i.e. at 2.11~MeV~\cite{Kelley17-12}. Note that, in this scheme, the 5/2$^+$ state may decay via $d_{5/2}$ neutrons to the ground state of $^{12}$Be, but also via $s_{1/2}$ to the 2$^+$ excited state, given its finite width. For simplicity, we start with a shallow $V_c^{(1)}$ potential, so no negative-parity resonances appear. This potential, labeled P1, produces a $1/2^+$ virtual state characterized by a large scattering length. Details are given in Table~\ref{tab:3b}. In addition, the calculations include a phenomenological three-body force to correct the position of the $0^+$ ground state to the experimental two-neutron separation energy of $^{14}$Be, i.e.~$S_{2n}=1.27$ MeV~\cite{Wang17}, which is kept fixed. Some properties of the resulting $^{14}$Be ground state are also given in Table~\ref{tab:3b}. We remind the reader that although the energy of a virtual state is strictly negative, it is customary to define a nominal positive energy as $E_s=\hbar^2/(2 \mu a^2)$ (with $a$ indicating the scattering length, see e.g.\ Chap.~2 of Ref.~\cite{Bla79}) to quantify the proximity of the virtual-state pole to the threshold. \\
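As a consistency check of this prescription, the snippet below (with approximate masses, $\mu$ being the $n$-$^{12}$Be reduced mass) reproduces the nominal energy quoted below for the P2 scattering length:
\begin{verbatim}
import math

HBARC = 197.327                  # hbar*c [MeV fm]
M_N, M_12 = 939.565, 11203.0     # approximate masses [MeV/c^2]

def nominal_energy(a):
    # E_s = hbar^2 / (2 mu a^2), scattering length a in fm
    mu = M_N * M_12 / (M_N + M_12)
    return HBARC ** 2 / (2.0 * mu * a ** 2)

print(nominal_energy(-9.2))      # ~0.265 MeV, as quoted for potential P2
\end{verbatim}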
Given the open debate about the presence of a low-lying $p$-wave resonance in $^{13}$Be, we consider another potential labeled P2.
In this case, we increase the $p$-wave potential depth to produce a 1/2$^-$ resonance around the maximum of the $^{12}$Be-$n$ relative-energy distribution, while keeping a small scattering length for the $s$-wave state and the same $d$-wave resonance as with P1. With the adopted parameters, the scattering length of the 1/2$^+$ state is $-9.2$~fm, which corresponds to a nominal energy of 0.265 MeV. The computed energy and width of the 1/2$^-$ (5/2$^+$) resonance are 0.46 (1.96) MeV and 0.40 (0.27) MeV, respectively. The resulting $^{14}$Be properties are also listed in Table~\ref{tab:3b}.
It is worth noting that the P2 model produces a strong mixing between different-parity states, as shown in Table~\ref{tab:3b}. This gives rise to a marked dineutron character of the $^{14}$Be wave function, as opposed to potential P1 and in accord with the discussion in Ref.~\cite{Catara84}.
In the next section, we will study the sensitivity of the $(p,pn)$ cross section to the structure properties of $^{13,14}$Be. For this purpose, we consider variations of potential P2 in which the $1/2^-$ and $5/2^+$ resonances are placed at different energies. These variations are labeled P3-5 and their details are also presented in Table~\ref{tab:3b}.
\begin{table*}[t]
\centering
\begin{tabular}{l|ccccc||ccc|c}
\hline
\hline
& $a$ & $E(5/2^+)$ & $\Gamma(5/2^+)$ & $E(1/2^-)$ & $\Gamma(1/2^-)$ & $s$ & $p$ & $d$ & $2^+$ \\
\hline
P1 & $-40.1$ & 1.96 & 0.27 & - & - & 0.59 & 0.13 & 0.26 & 0.34\\
P2 & $-9.2$ & 1.96 & 0.27 & 0.46 & 0.40 & 0.19 & 0.62 & 0.18 & 0.22\\%P7
P3 & $-9.2$ & 1.53 & 0.28 & 0.46 & 0.40 & 0.20 & 0.53 & 0.25 & 0.25\\%P8
P4 & $-9.2$ & 2.10 & 0.19 & 0.46 & 0.40 & 0.17 & 0.68 & 0.13 & 0.19\\%P9
P5 & $-9.2$ & 1.96 & 0.27 & 0.62 & 0.60 & 0.24 & 0.53 & 0.22 & 0.24\\%P10
\hline
\end{tabular}
\caption{ \label{tab:3b} Scattering length $a$ (in fm) of the 1/2$^+$ virtual state in $^{13}$Be and energies and widths of the 5/2$^+$ and 1/2$^-$ resonances (in MeV) using the different core-neutron potentials P1-5. On the right, the resulting properties of the $^{14}$Be ground state: partial wave content for $L=0,1,2$ neutrons,
and weight of the 2$^+$ core-excited components.
The two-neutron separation energy in $^{14}$Be is fixed to the experimental value of 1.27 MeV~\cite{Wang17}.
}
\end{table*}
\subsection{\label{ssec:level2}Reaction calculations}
We have compared the present $(p,pn)$ data with reaction calculations based on the so-called Transfer to the Continuum (TC) framework~\cite{AMoro15}, which was recently extended to describe processes induced by Borromean projectiles~\cite{gomezramosplb17}. In this method, the differential cross section for the $(p,pn)$ reaction is obtained from the prior-form transition amplitude leading to the three-body continuum states for $p+n+^{13}$Be. The latter are expanded in a basis of $p$-$n$ states, which are conveniently discretized using a binning procedure akin to that employed in the continuum-discretized coupled-channels (CDCC) method \cite{Aus87}. Good convergence of the calculated observables (within 10\%) was attained using a set of energy
bins with a width $\Delta \epsilon_{pn}=15$ MeV, a maximum $p-n$ relative energy of $\epsilon_{pn}=210$ MeV and a maximum angular momentum and parity $j^\pi\leq 3^\pm$.
The model considers a spectator/participant scenario, in which the incident proton is assumed to remove one of the valence neutrons without modifying the state of the remaining $^{13}$Be ($^{12}{\rm Be} + n$) subsystem. This is consistent with QFS conditions. Under this assumption, the $^{14}$Be structure enters through the overlap functions between the initial state (the $^{14}$Be g.s.) and the final $^{13}$Be states so that the cross section for different configurations of $^{13}$Be (defined by their energy $E_{n-^{12}{\rm Be}}$ and angular momentum and parity $J_T^\pi$)
can be computed independently.
Important ingredients are also the proton-neutron and nucleon-$^{13,14}$Be interactions. For the former, following previous applications of the method \cite{AMoro15,gomezramosplb17}, we adopt the Reid93 parametrization \cite{reid93}, which provides an accurate description of the proton-neutron cross sections and phase-shifts up to 350~MeV. For proton-$^{14}$Be and proton/neutron-$^{13}$Be interactions, which take into account the distortion and absorption of the incoming proton and of the outgoing nucleons, we employ the optical potentials computed from the Dirac phenomenological parametrization \cite{Ham90,Coo93}.
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.8\linewidth]{13Be-P1_P2.pdf}
\caption{$^{13}$Be relative-energy spectrum using the (a) P1 and (b) P2 potentials. Results are shown after convoluting the theoretical lineshapes with the experimental resolution function. The inset shows the relative energy spectrum measured in coincidence with the $^{12}$Be(2$^+$) decay transition, compared to the calculated core-excited component. See text for details.}
\label{fig:13Be-P1_P2}
\end{figure}
The $^{13}\text{Be}={^{12}}\text{Be}+n$ relative-energy spectrum obtained by using the P1 core-neutron potential and convoluting with the experimental resolution from the simulation (determined as explained in Sec.~\ref{sec:level2}) is shown in Fig.~\ref{fig:13Be-P1_P2}a. Note that these reaction calculations provide absolute cross sections, so no fitting or scaling is carried out. The different contributions are labeled $J_T [L_J\otimes I]$, where $J_T$ is the total $^{13}$Be angular momentum resulting from coupling the single-particle configuration $L_J$ with the spin $I$ of $^{12}$Be. Trivially, since the spin of $^{14}$Be is $0^+$, $J_T$ equals $j_2=[l_2 \otimes s_2]$, the angular momentum of the removed nucleon. In this figure, only the leading components are shown, together with the total cross section. In this way, the contributions from $I=0$ and $I=2$ can be separated. The cross section using P1 overestimates the experimental data at low relative energies due to the large scattering length of the $s$-wave virtual state. Moreover, the maximum of the cross section around 0.5~MeV is not reproduced. We have checked that variations of the position of the 1/2$^+$ virtual state do not improve the agreement.
The results using the P2 potential are shown in Fig.~\ref{fig:13Be-P1_P2}b. Again, for clarity only the leading terms are presented. Notice that, although the width of the 5/2$^+$ state in the present model is smaller than that of the 1/2$^-$ (see Table~\ref{tab:3b}), its contribution to the relative-energy spectrum becomes broader due to the energy resolution. Note also that the contribution of the $2^+$ state is small in spite of the significant core-excited component in these models. This is because the nominal energy of the 5/2$^+$ state, which collects most of the core-excitation weight in the $^{14}$Be wave function, lies below the $^{12}$Be($2^+$) excitation threshold.
The small $5/2^+[s_{1/2}\otimes 2^+]$ contribution is consistent with the analysis from gamma coincidences. The comparison between this component and the experimental data in coincidence with the $^{12}$Be(2$^+$) decay transition is presented in the inset of Fig.~\ref{fig:13Be-P1_P2}b, where we can see that the calculations describe the data at low energies quite well. The disagreement at energies above $\sim 1.5$ MeV might indicate that the present three-body calculations are missing some high-lying state which can also decay via $^{12}$Be($2^+$), as suggested in Fig.~\ref{f:ls}. As for the full spectrum, Fig.~\ref{fig:13Be-P1_P2}b shows that the low-energy peak can be described reasonably well by the 1/2$^-$ resonance using the P2 potential. The theoretical distribution is somewhat broader than the experimental data, so the calculation overestimates the measurements between $\sim$0.6-1 MeV. This might be an indication that the $p$-wave content obtained with the P2 potential is perhaps too large. In addition to the underestimation at large relative energies, this suggests that there might be missing components in the wave-function expansion, in particular those coming from the decay of other states in $^{13}$Be via $^{12}$Be($2^+$), or the coupling to other excited states of $^{12}$Be. Calculations including these features are not available yet.
To test the sensitivity of the relative-energy spectrum to the specific features included in the potential, in Fig.~\ref{fig:13Be-PX} we compare the P2 calculations with three additional models. As shown in Table~\ref{tab:3b}, P3 and P4 are obtained by placing the 5/2$^+$ resonance of $^{13}$Be at lower or higher energies, respectively, while P5 involves a variation on the position of the 1/2$^-$ state. Note that a difference in the position of the relevant levels changes also the weight of the different components in the ground state of $^{14}$Be. It is shown that the best agreement up to 2 MeV relative energy is achieved using the potential P2.
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.8\linewidth]{13Be-PX.pdf}
\caption{Results for the $^{13}$Be relative-energy spectrum with potentials P2-5. Only the total calculations are presented.}
\label{fig:13Be-PX}
\end{figure}
\begin{figure}[hbt!]
\centering
\includegraphics[width=1\linewidth]{13Be-PX-300_mod.pdf}
\caption{Experimental transverse momentum distributions at 0-0.2, 0.4-0.5 and 1.8-2.2 MeV relative energy with the P2 potential. The solid black line is the total TC result, convoluted with the experimental resolution and globally rescaled through a $\chi^2$ fit. Dashed lines are the contributions corresponding to the removal of a neutron from an $s$- (red), $p$- (blue) or $d$-wave (green).}
\label{fig:momdist}
\end{figure}
The $^{13}$Be structure can be further studied from the transverse momentum distributions of the knocked-out neutron. The comparison between the present calculations, using potential P2, and the experimental momentum distributions is presented in Fig.~\ref{fig:momdist}, for three different relative-energy bins: 0-0.2, 0.4-0.5 and 1.8-2.2 MeV. Calculations have been convoluted with the experimental resolution of $\sim$~39 MeV/c (FWHM) obtained from the direct beam measurement. The contribution from momentum resolution of the neutron was checked via the simulation and found to be negligible. The overall normalization of the total theoretical distribution with respect to the data has been adjusted to obtain the best $\chi^2$ fit.
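As an illustration of these two steps, a minimal Python sketch (not the analysis code of this work; the arrays \texttt{p\_grid}, \texttt{y\_theory}, \texttt{y\_data} and \texttt{y\_err} are placeholders for the momentum grid, the TC calculation and the measured distribution) reads:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

FWHM = 39.0                                  # MeV/c, from the direct beam measurement
SIGMA = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def smear_and_scale(p_grid, y_theory, y_data, y_err):
    """Convolve with the resolution, then rescale to the best-chi^2 norm."""
    dp = p_grid[1] - p_grid[0]               # uniform grid spacing assumed
    y_smeared = gaussian_filter1d(y_theory, SIGMA / dp)
    # chi^2(s) = sum((s*t - d)^2 / e^2) is quadratic in s: the minimum is analytic.
    w = 1.0 / y_err**2
    scale = np.sum(w * y_smeared * y_data) / np.sum(w * y_smeared**2)
    return scale * y_smeared, scale
\end{verbatim}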
Individual contributions from the different orbital angular momenta of the knocked-out neutron are also presented. The relative weights of these contributions are fixed by the structure and reaction calculations and not via $\chi^2$ fit. Note that, due to the non-zero spin of the $^{12}$Be core in its excited components, the orbital angular momentum $l_2$ of the removed neutron is not necessarily the same as the one of the valence neutron in $^{13}$Be.
Overall, it is found that the width of the momentum distributions is well reproduced.
In particular, we can describe the data at 0.4-0.5 MeV with a dominant $p$-wave contribution. In Ref.~\cite{aks13b}, an $s$-wave resonance (or a combination of two overlapping $s$-wave resonances) was proposed to explain the peak in the $^{13}$Be spectrum.
To test the assumption of the $s$-wave resonance suggested in Ref.~\cite{aks13b}, in the top panel of Fig.~\ref{fig:momdist2} we have performed a $\chi^2$ fit of the momentum distribution for the 0.4-0.5 MeV energy bin retaining only the $s$-wave contribution in our calculations with the P2 potential. In this case, the resulting width is clearly too small. A similar conclusion is reached if the analysis is performed over the data from Ref.~\cite{aks13b}, shown in the bottom panel. In this latter case, our full calculation with a dominant $p$-wave component reproduces the experimental momentum distribution very well, whereas the pure $s$-wave assumption again yields too narrow a distribution. Our analysis shows that the peak observed in the $^{13}$Be relative-energy spectrum at $E_r \sim 0.5$~MeV, populated in the $^{14}$Be($p,pn)$ reaction, is most likely dominated by the $p$-wave contribution.
This assignment is in disagreement with that of Ref.~\cite{aks13b}. We believe this discrepancy can be understood as due to the inherent uncertainties in the $\chi^2$ procedure used in Ref.~\cite{aks13b} to assign the orbital angular momentum, as well as to the differences in the structure and reaction models.
\begin{figure}
\centering
\includegraphics[width=0.75\columnwidth]{Aks_Anna_art.pdf}
\caption{Transverse momentum distribution at 0.4-0.5~MeV relative energy. The top and bottom panels are for the present experiment and for the GSI data at 304 MeV/nucleon with resolution $\sim$72 MeV/c (FWHM)~\cite{aks13b}, respectively. The black solid line is the total P2 result, while the red dashed line corresponds to a $\chi^2$ fit assuming a pure $s$-wave distribution.}
\label{fig:momdist2}
\end{figure}
\section{\label{sec:level5} Conclusions}
We have presented a high-statistics measurement of the spectroscopy of $^{13}$Be via invariant mass, including the measurement of $^{12}$Be core excited states which decay via gamma rays. We clearly observed for the first time the contribution of both $^{12}$Be(2$^+$, 1$^-$) states in $^{13}$Be populated via the $(p,pn)$ reaction. Their contribution to the $^{13}$Be relative-energy spectrum is small. Still missing is the contribution of the isomeric $^{12}$Be(0$^+_2$) core-excited state, which will demand a dedicated measurement.
A key and novel aspect of our analysis consists in calculating, for the first time, the relative-energy cross section and momentum distribution using a well-founded reaction framework and a realistic three-body model of $^{14}$Be that incorporates $^{12}$Be(2$^{+}$) excitations, thus avoiding the common procedure of extracting individual angular momentum components from a fit, a technique that becomes more ambiguous in the case of complex spectra with overlapping structures such as that of $^{13}$Be.
This analysis permitted us to pin down the dominant $\ell=1$ contribution of the resonant peak observed in the low-lying spectrum, in agreement with Refs.~\cite{kon10,rib18} and at variance with the conclusions of Refs.~\cite{aks13a,aks13b}, which assigned a dominant $\ell=0$ to this peak.
Additional observables, such as the distribution of the opening angle between the momenta of $^{13}$Be and the removed neutron in the final state, may help to shed light on the structure of $^{14}$Be and hence $^{13}$Be. The interpretation of such observables, as extracted from the present experiment, will be the subject of an upcoming publication. An improvement of the three-body model, e.g., the inclusion of other $^{12}$Be excited states, proper antisymmetrization of the valence neutrons and a better treatment of the Pauli principle, will increase the capability of the theory to capture the features of the experimental spectrum.
\section*{Acknowledgements}
This work has been supported by the European Research Council through the ERC Starting Grant No. MINOS-258567. A.C. acknowledges Y. Kikuchi and K. Ogata for fruitful discussions, and N. Paul for careful proofreading. M.G.R., A.M.M.\ and J.C.\ are partially supported by the Spanish Ministerio de Ciencia, Innovaci\'on y Universidades and FEDER funds (projects FIS2014-53448-C2-1-P and FIS2017-88410-P) and by the European Union's Horizon 2020 research and innovation program under grant agreement No.\ 654002. J.G., F.M.M. and N.A.O. acknowledge partial support from the Franco-Japanese LIA-International Associated Laboratory for Nuclear Structure Problems as well as the French ANR14-CE33-0022-02 EXPAND.
\section*{References}
\section{Introduction}
The advent of the Anti-de Sitter/conformal field theory (AdS/CFT) correspondence~\cite{Maldacena,Gubser1998,Witten} opens up a new avenue for understanding the pairing mechanism in high-$T_c$ superconductors, which cannot be described straightforwardly by the conventional BCS theory~\cite{BCS}.
The correspondence describes a $d$-dimensional quantum field theory in the strong-coupling regime in terms of a weakly coupled gravity theory.
The latter, also known as the bulk theory, is at least one dimension higher than the dual quantum field theory, often referred to as the boundary theory.
It has been suggested that, in the light of the AdS/CFT correspondence, the spontaneous $U(1)$ symmetry breaking in the bulk spacetime can be used to model the phase transition from the normal to the superconducting state in the boundary theory dual to the gravitational system~\cite{GubserPRD78}.
The relevant transition is shown to exhibit the main characteristics of the s-wave superconductor~\cite{HartnollPRL101,HartnollJHEP12}.
These gravitational dual models are called holographic superconductors~\cite{HartnollRev,HerzogRev,HorowitzRev,CaiRev}.
Along this line of thought, by introducing an $SU(2)$ Yang-Mills field into the bulk, Gubser and Pufu constructed a holographic p-wave superconductor.
In their realization, a massive gauge boson is generated by the spontaneous breaking of the non-abelian gauge symmetry.
The latter is associated with one of the $SU(2)$ generators, and the resulting condensation is understood to be dual to the vector order parameter~\cite{GubserPufu}.
To go a step further, Cai~\emph{et al.} devised a new holographic p-wave superconductor model by considering a charged vector field in the Einstein-Maxwell theory with a negative cosmological constant.
The model can be viewed as a generalization of the $SU(2)$ model with a general mass and gyromagnetic ratio~\cite{CaiPWave-1,CaiPWave-2}.
In Refs.~\cite{DWaveChen,DWaveBenini}, the authors studied the properties of a charged massive spin two field propagating in the bulk and implemented the holographic d-wave superconductivity.
Further progress featured the AdS soliton as the bulk background: Nishioka \emph{et al.} demonstrated that the soliton can become unstable.
In particular, the formation of scalar hair sets in, and a second-order phase transition takes place, when the chemical potential exceeds the critical value $\mu_{c}$.
The resulting model is utilized to describe the transition between the insulator and superconductor~\cite{Nishioka-Ryu-Takayanagi}.
Most of the aforementioned works are featured by the Einstein-Maxwell theory coupled to a charged field on the gravity side.
According to the AdS/CFT correspondence, in the AdS spacetime, the curvature correction to the metric~\cite{Gregory,Pan-WangGB2010,NieZeng} and the higher derivative terms related to the gauge field~\cite{JS2010,WuCKW,SunWPJ2019} are expected to modify the dynamics of the dual field theory.
Interestingly enough, Myers~\emph{et al.} introduced a specific form of higher-order correction regarding the gauge field, namely, the $RF^{2}$ correction.
The latter arises from the Kaluza-Klein reduction of the five-dimensional Gauss-Bonnet gravity.
In particular, it has been argued that the correction term in question is universal in the sense that it can be used to produce the second-order equations of motion for both the gauge field and metric for any background~\cite{RCMyers}.
While studying the holographic properties of charged black holes with $RF^{2}$ corrections, Cai and Pang observed its impact on the DC conductivity~\cite{CaiPang}.
Also, by investigating the holographic s-wave superconductor with $RF^{2}$ corrections in the background of the AdS black hole, the authors of Ref.~\cite{ZPJCPL} found that the higher correction term facilitates the condensation of the scalar operator.
To be specific, a significant deviation from the standard value of the ratio of the gap frequency to the critical temperature was observed.
More recently, Lu~\emph{et al.} also constructed a holographic p-wave superconductor with $RF^{2}$ corrections.
Their approach is characterized by a Maxwell complex vector field in the five-dimensional AdS black hole and soliton background spacetimes~\cite{LuNPB2018}.
For the black hole background, it was observed that the $RF^{2}$ correction promotes the conductor/superconductor phase transition and causes the ratio of the gap frequency to the critical temperature to significantly deviate from the standard value.
On the contrary, for the soliton background, it was shown that the correction does not affect the critical chemical potential~\cite{LuNPB2018}.
In Ref. \cite{LuPLB2018}, the authors further extended the study to the Lifshitz gravity and obtained similar features for the effect of the $RF^2$ correction with respect to the holographic properties of the systems.
In this work, we examine the influence of the $RF^2$ corrections on the p-wave superfluid model.
According to the AdS/CFT correspondence, the holographic superfluid is realized by turning on the spatial components of the gauge field.
Special attention will be paid to the role of the supercurrent, since it is an essential quantity in the study of superconductivity in condensed matter systems~\cite{BasuMukherjeeShieh,HerzogKovtunSon,Peng2012,KuangLiuWang,Arean2010,AreanJHEP2010,SonnerWithers,ZengSZ,Amado2013,Zeng2013,Amado2014,LaiPJW2016,AriasLandea,GBSuperfluid,HuangPQJW}.
The calculations will be carried out for both the Maxwell complex vector field model and the Yang-Mills theory in the five-dimensional AdS Schwarzschild spacetime regarding the following soliton solution
\begin{eqnarray}\label{SchSoliton}
ds^2=-r^2dt^2+\frac{dr^2}{f\left(r\right)}+f\left(r\right)d\varphi^2+r^2(dx^2+dy^2),
\end{eqnarray}
with $f(r)=r^2(1-r_{s}^{4}/r^{4})$.
This solution does not possess any horizon but a conical singularity, corresponding to the tip of the soliton, at $r_{s}$.
One can avoid the singularity by imposing a period $\beta=\pi/r_{s}$ for the coordinate $\varphi$.
The motivation of the present study is to understand the influences of the $1/N$ or $1/\lambda$ (where $\lambda$ is the 't Hooft coupling) corrections on the holographic p-wave superfluid models.
As discussed in the following sections, in the probe limit where the backreaction of matter fields on the spacetime metric is neglected, the $RF^2$ corrections lead to qualitatively different effects on the superfluid phase transition in the two models with vanishing superfluid velocity.
With the presence of the superfluid velocity, on the other hand, similar features regarding the condensate of the vector operator are observed.
This indicates that one might make use of the $RF^2$ corrections to distinguish between the holographic p-wave superfluid state in the Maxwell complex vector field model and that in the Yang-Mills theory.
The present paper is organized as follows.
In Sec. II, we construct the holographic p-wave superfluid model with the $RF^2$ corrections via a Maxwell complex vector field model.
In the probe limit, an analytical method, Sturm-Liouville approach, is employed to study the effect of the $RF^2$ corrections on the superfluid phase transition.
The analysis is then complemented by a numerical method, namely, the shooting method.
In Sec. III, we extend the investigation to the holographic p-wave superfluid model with the $RF^2$ corrections to the Yang-Mills theory.
Finally, the last section is devoted to the discussions and concluding remarks.
\section{p-Wave superfluid of the Maxwell complex vector field}
In this section, we study the holographic p-wave superfluid phase transition with $RF^{2}$ corrections in the five-dimensional AdS soliton spacetime by considering the Maxwell complex vector field model~\cite{LuPLB2018,LuNPB2018}
\begin{eqnarray}\label{PWaveAtion}
S=\frac{1}{16\pi G}\int
d^{5}x\sqrt{-g}\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\mathcal{L}_{RF^{2}}
-\frac{1}{2}(D_\mu\rho_\nu-D_\nu\rho_\mu)^{\dag}(D^{\mu}\rho^{\nu}-D^{\nu}\rho^{\mu})
-m^2\rho_{\mu}^{\dag}\rho^{\mu}+iq\gamma_{0}\rho_{\mu}\rho_{\nu}^{\dag}F^{\mu\nu} \right],\nonumber \\
\end{eqnarray}
where the $RF^{2}$ correction term reads
\begin{eqnarray}
\mathcal{L}_{RF^{2}}=&\alpha (R_{\mu\nu\rho\lambda}F^{\mu\nu}F^{\rho\lambda}-4R_{\mu\nu}F^{\mu\rho}F^{\nu}_{\rho}
+RF^{\mu\nu}F_{\mu\nu}).
\end{eqnarray}
Here $D_\mu=\nabla_\mu-iqA_\mu$ is the covariant derivative and $F_{\mu\nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}$ is the field strength of the $U(1)$ gauge field $A_\mu$. The coupling parameter $\alpha$ satisfies $-1/20\leq \alpha \leq 1/4$~\cite{RCMyers}; $q$ and $m$ are the charge and mass of the vector field $\rho_\mu$, respectively. The last term, proportional to $\gamma_{0}$, measures the interaction between the vector field $\rho_\mu$ and the gauge field $A_\mu$.
In order to investigate the possibility of DC supercurrent, according to Ref. \cite{LaiPJW2016}, we make use of the following {\it ansatz} for the matter fields
\begin{eqnarray}\label{PWaveAnsatz}
\rho_\mu dx^{\mu}=\rho_{x}(r)dx,~~A_\mu dx^{\mu}=A_t(r)dt+A_{\varphi}(r)d\varphi.
\end{eqnarray}
In the soliton background (\ref{SchSoliton}), one chooses $\rho_{x}(r)$, $A_t(r)$ and $A_{\varphi}(r)$ to be real functions.
Subsequently, one obtains the following equations of motion
\begin{eqnarray}\label{PWaveRhoxr}
\rho_{x}^{\prime\prime}+\left(\frac{1}{r}+\frac{f^\prime}{f}\right)\rho_{x}^{\prime}
-\frac{1}{f}\left(m^2+\frac{q^2A^2_\varphi}{f}-\frac{q^2A_t^2}{r^2}\right)\rho_{x}=0,
\end{eqnarray}
\begin{eqnarray}\label{PWaveAtr}
\left[1+\frac{8\alpha f}{r}\left(\frac{1}{r}+\frac{f'}{f}\right)\right]A_{t}''
+\left[\left(\frac{1}{r}+\frac{f'}{f}\right)+\frac{8\alpha}{r}\left(-\frac{f}{r^{2}}+\frac{2f'}{r}
+\frac{f'^{2}}{f}+f''\right)\right]A_{t}'-\frac{2q^{2}\rho_{x}^{2}}{r^{2}f}A_{t}=0,
\end{eqnarray}
\begin{eqnarray}\label{PWaveAvarphir}
\left(1+\frac{24\alpha f}{r^{2}}\right)A_{\varphi}''+\left[\frac{3}{r}+\frac{24\alpha f}{r^{2}}\left(\frac{1}{r}+\frac{f'}{f}\right)\right]A_{\varphi}'
-\frac{2q^{2}\rho_{x}^{2}}{r^{2}f}A_{\varphi}=0,
\end{eqnarray}
where the prime denotes the derivative with respect to $r$.
It is straightforward to show that Eqs.~(\ref{PWaveRhoxr}) and (\ref{PWaveAtr}) reduce to the case considered in Ref.~\cite{LuNPB2018} when the spatial component $A_{\varphi}$ is turned off.
Eqs.~(\ref{PWaveRhoxr}), (\ref{PWaveAtr}) and (\ref{PWaveAvarphir}) can be solved by using the following procedure.
At the tip $r=r_{s}$, the vector field $\rho_\mu$ and gauge field $A_\mu$ are required to be regular, and $A_{\varphi}(r_{s})=0$.
Also, as $r\rightarrow\infty$, the asymptotical behaviors of the solutions are
\begin{eqnarray}\label{PWInfinityCondition}
\rho_{x}=\frac{\rho_{x-}}{r^{\Delta_{-}}}+\frac{\rho_{x+}}{r^{\Delta_{+}}},~~A_t=\mu-\frac{\rho}{r^2},
~~A_\varphi=S_\varphi-\frac{J_\varphi}{r^2},
\end{eqnarray}
where $\Delta_{\pm}=1\pm\sqrt{1+m^2}$ are the characteristic exponents with the masses beyond the Breitenlohner-Freedman (BF) bound $m_{BF}^2=-1$. According to the AdS/CFT correspondence, $\mu$ and $S_\varphi$ are the chemical potential and superfluid velocity, while $\rho$ and $J_\varphi$ are the charge density and current in the dual field theory, respectively. Furthermore, we can interpret $\rho_{x-}$ and $\rho_{x+}$ as the source and vacuum expectation value of the
vector operator $O_{x}$ in the dual field theory.
Accordingly, we will impose the boundary condition $\rho_{x-}=0$ to guarantee the spontaneous breaking of the $U(1)$ gauge symmetry in the system.
For simplicity, we will use $\Delta$ to denote $\Delta_{+}$ in the following discussions.
It is straightforward to show that Eqs.~(\ref{PWaveRhoxr}), (\ref{PWaveAtr}) and (\ref{PWaveAvarphir}) are invariant with respect to the following scaling
transformations:
\begin{eqnarray}\label{PWSSymmetry}
&&r\rightarrow\lambda r,~~(t, \varphi, x,
y)\rightarrow\frac{1}{\lambda}(t, \varphi, x,
y),~~q\rightarrow q,~~(\rho_{x},A_{t},A_{\varphi})\rightarrow\lambda(\rho_{x},A_{t},A_{\varphi}),\nonumber \\
&&(\mu,S_\varphi)\rightarrow\lambda(\mu,S_\varphi),~~
(\rho,J_\varphi)\rightarrow\lambda^{3}(\rho,J_\varphi),~~\rho_{x+}\rightarrow\lambda^{1+\Delta}\rho_{x+},
\end{eqnarray}
where $\lambda$ is a positive number.
Subsequently, in what follows, we will present our results in terms of dimensionless quantities, which are invariant regarding Eq.~(\ref{PWSSymmetry}).
\subsection{Analytical approach by the Sturm-Liouville method}
We first use the Sturm-Liouville method~\cite{Siopsis,SiopsisB} to explore the effect of the $RF^{2}$ correction on the condensation as well as other critical phenomena of the system in the immediate vicinity of the critical chemical potential $\mu_{c}$.
The obtained solution provides an analytical understanding of the p-wave superfluid phase transition in the AdS soliton background.
For mathematical convenience, we will change the variable from $r$ to $z=r_{s}/r$ with the range $0<z<1$ in the following calculations.
We note that the vector field $\rho_{x}$ vanishes as one approaches the critical point $\mu_{c}$ from below. In this case, Eq.~(\ref{PWaveAtr}) can be simplified to read
\begin{eqnarray}\label{PWaveAtzCritical}
\left[1+8\alpha z^{3}f\left(\frac{1}{z}-\frac{f'}{f}\right)\right]A_{t}''+\left[\left(\frac{1}{z}+\frac{f'}{f}\right)+8\alpha z\left(3f-2zf'
-\frac{z^{2}f'^{2}}{f}-z^{2}f''\right)\right]A_{t}'=0,
\end{eqnarray}
where the prime denotes the derivative with respect to $z$, and the function $f$ is $f(z)=(1-z^{4})/z^2$.
The general solution of Eq.~(\ref{PWaveAtzCritical}) is found to be
\begin{eqnarray}\label{PWaveAtzCriSolution}
A_{t}=\mu+c_{1}\left[\ln\left(\frac{1+z^{2}}{1-z^{2}}\right)
+\frac{4\sqrt{2\alpha}}{\sqrt{1+24\alpha}}\arctan\left(\frac{2\sqrt{2\alpha}z^{2}}{\sqrt{1+24\alpha}}\right)\right],
\end{eqnarray}
where $\mu$ and $c_{1}$ are the two constants of integration. By considering the Neumann-like boundary condition for the gauge field $A_{t}$, we must have $c_{1}=0$ to ensure that $A_{t}$ is finite at the tip $z=1$.
This is because the term between the square brackets is divergent at $z=1$.
Therefore we arrive at the solution of Eq.~(\ref{PWaveAtzCritical}), namely, $A_{t}(z)=\mu$ for $\mu<\mu_{c}$.
Similarly, as $\mu\rightarrow\mu_{c}$ from below, one finds, from Eq.~(\ref{PWaveAvarphir}), that
\begin{eqnarray}\label{PWaveAphizCritical}
(1+24\alpha z^{2}f)A_{\varphi}''+\left[-\frac{1}{z}+24\alpha z^{2}f\left(\frac{1}{z}+\frac{f'}{f}\right)\right]A_{\varphi}'=0.
\end{eqnarray}
By considering the boundary condition $A_\varphi(1)=0$, one obtains
\begin{eqnarray}\label{PWaveAtzCriticalSolution}
A_{\varphi}&=&S_{\varphi}\phi(z)\nonumber \\
&=&S_{\varphi}(1-z^{2})\left[1+8\alpha(1+z^{2}+z^{4})+\frac{192}{5}\alpha^{2}(z^{2}-1)(2+4z^{2}+6z^{4}+3z^{6})\right],
\end{eqnarray}
where we have neglected the terms of order $O(\alpha^{n})$ for $n\geq 3$.
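The truncated solution can be verified symbolically; the following Python (sympy) sketch, written for the normalization $S_{\varphi}=1$, substitutes Eq.~(\ref{PWaveAtzCriticalSolution}) into Eq.~(\ref{PWaveAphizCritical}) and confirms that the residual vanishes through order $\alpha^{2}$:
\begin{verbatim}
import sympy as sp

z, a = sp.symbols('z alpha', positive=True)
f = (1 - z**4) / z**2
phi = (1 - z**2) * (1 + 8*a*(1 + z**2 + z**4)
                    + sp.Rational(192, 5) * a**2 * (z**2 - 1)
                      * (2 + 4*z**2 + 6*z**4 + 3*z**6))

# Left-hand side of Eq. (PWaveAphizCritical) evaluated on phi(z)
residual = sp.expand((1 + 24*a*z**2*f) * sp.diff(phi, z, 2)
                     + (-1/z + 24*a*z**2*f*(1/z + sp.diff(f, z)/f))
                       * sp.diff(phi, z))

for n in range(3):                      # orders alpha^0, alpha^1, alpha^2
    print(n, sp.simplify(residual.coeff(a, n)))   # each coefficient is 0
\end{verbatim}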
Also, it is not difficult to show that, as $\mu\rightarrow\mu_{c}$, the vector field equation (\ref{PWaveRhoxr}) in terms of $z$ assumes the form
\begin{eqnarray}\label{PWRhozCriMotion}
\rho_{x}^{\prime\prime}+\left(\frac{1}{z}+\frac{f^\prime}{f}\right)\rho_{x}^\prime
+\left[\frac{1}{z^{2}f}\left(\frac{q\mu}{r_{s}}\right)^{2}-\frac{\phi^{2}}{z^{4}f^{2}}\left(\frac{qS_\varphi}{r_{s}}\right)^{2}
-\frac{m^{2}}{z^{4}f}\right]\rho_{x}=0.
\end{eqnarray}
By taking into account the asymptotical behavior of $\rho_x$ from Eq.~(\ref{PWInfinityCondition}), we make an {\it ansatz} of the following form~\cite{Siopsis}
\begin{eqnarray}\label{PWSLFz}
\rho_x(z)\sim \frac{\langle O_{x}\rangle}{r_{s}^{\Delta}}z^{\Delta}F(z),
\end{eqnarray}
where $F(z)$ is to be determined with the boundary condition $F(0)=1$.
The resulting equation of motion for $F(z)$ is found to be
\begin{eqnarray}\label{SLFzmotion}
(TF^{\prime})^{\prime}+T\left[U+V\left(\frac{q\mu}{r_{s}}\right)^{2}-W\left(\frac{qS_\varphi}{r_{s}}\right)^{2}\right]F=0,
\end{eqnarray}
with
\begin{eqnarray}\label{PWaveTUVWFu}
T=z^{1+2\Delta}f,~~
U=\frac{\Delta}{z}\left(\frac{\Delta}{z}+\frac{f^\prime}{f}\right)-\frac{m^2}{z^{4}f},~~V=\frac{1}{z^{2}f},~~ W=\frac{\phi^{2}}{z^{4}f^{2}}.
\end{eqnarray}
According to the standard procedure for the Sturm-Liouville eigenvalue problem~\cite{Gelfand-Fomin}, the first eigenvalue $\Lambda=q\mu/r_{s}$ can be obtained by minimizing the Rayleigh quotient
\begin{eqnarray}\label{PWSLEigenvalue}
\Lambda^{2}=\left(\frac{q\mu}{r_{s}}\right)^{2}=\frac{\int^{1}_{0}T\left(F'^{2}-UF^{2}\right)dz}{\int^{1}_{0}T(V-k^{2}W)F^{2}dz},
\end{eqnarray}
where we have defined the dimensionless parameter $k=S_{\varphi}/\mu$.
It is noted that we have used the boundary condition $[T(z)F(z)F'(z)]|_{0}^{1}=0$ in order to derive the expression (\ref{PWSLEigenvalue}).
As a matter of fact, from Eq.~(\ref{PWaveTUVWFu}), we find that $T(1)\equiv0$, which leads to $T(1)F(1)F'(1)=0$.
Besides, the condition $T(0)F(0)F'(0)=0$ is also satisfied automatically, since the leading-order behavior of $T(z)$ as $z\rightarrow0$ is $z^{2\Delta-1}$, with exponent $2\Delta-1=1+2\sqrt{1+m^2}\geq1$ for masses $m^{2}\geq m_{BF}^2$.
This means that, as discussed in Refs.~\cite{HFLi,WangSPJ}, we need not impose any restrictions on $F'(z)$.
In other words, the Dirichlet boundary condition for the trial function, $F(0)=1$, is sufficient for the present purpose, and therefore we write
\begin{eqnarray}\label{TrialFunction}
F(z)=1-az,
\end{eqnarray}
with $a$ being a constant.
We note that Eq.~(\ref{TrialFunction}) is more appropriate than imposing an additional Neumann boundary condition, such as $F'(0)=0$.
As an example, we calculate the case for a given mass of the vector field $m^{2}=5/4$ together with $k=0.00$ and $\alpha=0.00$.
By choosing the form of the trial function as in Eq.~(\ref{TrialFunction}), we have
\begin{eqnarray}\label{Example}
\Lambda^{2}=\left(\frac{q\mu}{r_{s}}\right)^{2}=\frac{8100-14175a+6748 a^2}{48(21-35a+15a^2)},
\end{eqnarray}
whose minimum is found to be $\Lambda_{min}^{2}=7.757$ with $a=0.492$.
Thus, one finds the critical chemical potential to be $\Lambda_{c}=\Lambda_{min}=2.785(13)$, which is closer to the numerical value $\Lambda_{c}=2.784(99)$ obtained in Ref.~\cite{ZPJ2015} than the analytical result $\Lambda_{c}=2.787$ shown in Table 2 of Ref.~\cite{LaiPJW2016}, which was deduced from the trial function $F(z)=1-az^{2}$.
Similarly, when turning on the $RF^{2}$ correction and the spatial component $A_{\varphi}$, for example with $\alpha=0.05$ and $k=0.25$, it is found that $\Lambda_{min}^{2}=8.000$ with $a=0.474$, which leads to the critical chemical potential $\Lambda_{c}=\Lambda_{min}=2.828$.
In general, a similar procedure can be applied to obtain the value of the critical chemical potential analytically.
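Such minimizations are elementary to reproduce numerically; for instance, the following short Python sketch (our cross-check, not part of the derivation) recovers the minimum of Eq.~(\ref{Example}):
\begin{verbatim}
from scipy.optimize import minimize_scalar

def Lambda2(a):
    return (8100 - 14175*a + 6748*a**2) / (48*(21 - 35*a + 15*a**2))

res = minimize_scalar(Lambda2, bounds=(0.0, 1.0), method='bounded')
print(res.x, res.fun**0.5)    # a ~ 0.492, Lambda_c ~ 2.785
\end{verbatim}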
In Tables~\ref{PWaveTable} and \ref{PWaveTableM0}, we present the calculated critical chemical potential $\Lambda_{c}=q\mu_{c}/r_{s}$ for given $\alpha$, $k$, as well as the mass of the vector field.
\begin{table}[ht]
\begin{center}
\caption{\label{PWaveTable}
The calculated critical chemical potential $\Lambda_{c}=q\mu_{c}/r_{s}$ for the vector operator $O_{x}$ in the holographic p-wave superfluid of the Maxwell complex vector field model.
The results are obtained analytically by the Sturm-Liouville method (left column) and numerically by the shooting method (right column) for different $RF^2$ correction strengths $\alpha$, $k=S_{\varphi}/\mu$ and for a given mass of the vector field $m^{2}=5/4$.}
\begin{tabular}{c c c c c c c}
\hline
$\alpha$ &~~~~-0.03 &~~~~-0.01 &~~~~0 &~~~~0.01 &~~~~0.05 \\
\hline
$k=0.00$ &~~~~2.785~~2.785 &~~~~2.785~~2.785 &~~~~2.785~~2.785 &~~~~2.785~~2.785 &~~~~2.785~~2.785 \\
$k=0.25$ &~~~~2.790~~2.795 &~~~~2.800~~2.802 &~~~~2.805~~2.805 &~~~~2.811~~2.807 &~~~~2.828~~2.814 \\
$k=0.50$ &~~~~2.805~~2.825 &~~~~2.844~~2.856 &~~~~2.867~~2.867 &~~~~2.891~~2.877 &~~~~2.969~~2.906 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\caption{\label{PWaveTableM0}
The calculated critical chemical potential $\Lambda_{c}=q\mu_{c}/r_{s}$ for the vector operator $O_{x}$ in the holographic p-wave superfluid of the Maxwell complex vector field model.
The results are obtained analytically by the Sturm-Liouville method (left column) and numerically by the shooting method (right column) for different $RF^2$ correction strengths $\alpha$, $k=S_{\varphi}/\mu$, and for the massless vector field $m^{2}=0$.}
\begin{tabular}{c c c c c c c}
\hline
$\alpha$ &~~~~-0.03 &~~~~-0.01 &~~~~0 &~~~~0.01 &~~~~0.05 \\
\hline
$k=0.00$ &~~~~2.265~~2.265 &~~~~2.265~~2.265 &~~~~2.265~~2.265 &~~~~2.265~~2.265 &~~~~2.265~~2.265 \\
$k=0.25$ &~~~~2.271~~2.276 &~~~~2.280~~2.282 &~~~~2.285~~2.285 &~~~~2.290~~2.287 &~~~~2.305~~2.292 \\
$k=0.50$ &~~~~2.287~~2.308 &~~~~2.324~~2.336 &~~~~2.345~~2.345 &~~~~2.367~~2.353 &~~~~2.436~~2.378 \\
\hline
\end{tabular}
\end{center}
\end{table}
From Tables~\ref{PWaveTable} and \ref{PWaveTableM0}, for the case of $k=0$ with a given vector-field mass $m$, one finds that the critical chemical potential $\mu_{c}$ is independent of the strength $\alpha$ of the $RF^2$ correction.
It implies that the $RF^2$ correction does not affect the stability of the AdS soliton system, just as shown previously in Ref.~\cite{LuNPB2018}.
However, the situation is entirely different as we switch on the spatial component $A_{\varphi}$ of the gauge field.
For given $k$, for instance $k=0.25$ or $0.50$, we observe that the critical chemical potential $\mu_{c}$ increases as we increase the strength $\alpha$ of the $RF^2$ correction.
This shows that, in general, a larger $RF^2$ correction makes it harder for the holographic p-wave superfluid phase transition to be triggered.
Therefore, it is meaningful to further explore the impact of the $RF^2$ correction on the holographic p-wave superfluid, especially with nonvanishing spatial component $A_{\varphi}$.
For the given $\alpha$ and $m$, one finds that the critical chemical potential becomes larger with increasing $k$.
This is in good agreement with the findings in Ref.~\cite{LaiPJW2016} and indicates that the spatial component of the gauge field hinders the superfluid phase transition.
Now, we move on to discuss the critical phenomena of the holographic p-wave system.
From Eq.~(\ref{PWaveAtr}), in the vicinity of the critical point one may expand $A_{t}(z)$ in terms of small $\langle O_{x}\rangle$ by
\begin{eqnarray}\label{PWAtExpand}
A_{t}(z)\sim\mu_{c}+\frac{2q^{2}\mu_{c}}{r_{s}^{2(1+\Delta)}}\langle
O_{x}\rangle^{2}\chi(z)+\cdots,
\end{eqnarray}
with the boundary condition $\chi(1)=0$ at the tip.
In turn, it provides the equation of motion for $\chi(z)$
\begin{eqnarray}\label{PWChiEoM}
(M\chi')'-z^{2\Delta-1}F(z)^{2}=0,
\end{eqnarray}
where we have defined
\begin{eqnarray}\label{PWMz}
M(z)=(1+24\alpha+8\alpha z^{4})zf.
\end{eqnarray}
By combining the asymptotic behavior of $A_{t}$ in Eq.~(\ref{PWInfinityCondition}) and Eq.~(\ref{PWAtExpand}), we may expand $A_{t}$ near $z\rightarrow0$ as
\begin{eqnarray}\label{PWPhiExp}
A_{t}(z)\simeq\mu-\frac{\rho}{r_{s}^{2}}z^2\simeq\mu_c
+2\mu_{c}\left(\frac{q\langle
O_{x}\rangle}{r_{s}^{1+\Delta}}\right)^{2}\left[\chi(0)+\chi^\prime(0)z+\frac{1}{2}\chi^{\prime\prime}(0)z^2+\cdot\cdot\cdot\right].
\end{eqnarray}
From the above equation one may derive the following relation by comparing the coefficients of the $z^{0}$ term on both sides
\begin{eqnarray}\label{PWOxExpre}
\frac{q\langle
O_{x}\rangle}{r_{s}^{1+\Delta}}=\frac{1}{\left[2\mu_c\chi(0)\right]^{\frac{1}{2}}}\left(\mu-\mu_c\right)^{\frac{1}{2}},
\end{eqnarray}
where $\chi(0)=c_{2}-\int^{1}_{0}M^{-1}\left[\int^{z}_{1}x^{2\Delta-1}F(x)^{2}dx\right]dz$ with the constant of integration $c_{2}$ being determined by the boundary condition of $\chi(z)$.
As an example, one obtains $\langle O_{x}\rangle\approx3.349(\mu-\mu_{c})^{1/2}$ and $a=0.474$ by assuming $k=0.25$, $\alpha=0.05$ and $m^{2}=5/4$, where, owing to the scaling symmetry shown in Eq. (\ref{PWSSymmetry}), we have also chosen $q=1$ and $r_{s}=1$ for simplicity.
From Eq.~(\ref{PWOxExpre}) one may conclude the scaling law $\langle O_{x}\rangle\sim\left(\mu-\mu_c\right)^{1/2}$.
This relation is valid in the immediate vicinity of the critical point and is independent of specific parameters of the $RF^2$ correction, the spatial component of the gauge field, and the mass of the vector field.
In other words, the phase transition of the holographic p-wave superfluid with $RF^{2}$ corrections in the Maxwell complex vector field model is of the second order, and the extracted critical exponent of the system is consistent with that of the mean-field value, $1/2$.
Furthermore, by examining the coefficients of the $z^1$ terms in Eq.~(\ref{PWPhiExp}), we observe that $\chi^\prime(0)$ vanishes.
This behavior is actually consistent with the following relation from Eq.~(\ref{PWChiEoM})
\begin{eqnarray}\label{PWz1}
\left[\frac{\chi'(z)}{z}\right]\bigg|_{z\rightarrow0}=\chi''(0)
=-\frac{1}{(1+24\alpha)}\int^{1}_{0}z^{2\Delta-1}F(z)^{2}dz.
\end{eqnarray}
Moreover, by extracting the coefficients of the $z^2$ terms in Eq.~(\ref{PWPhiExp}), with the help of Eqs.~(\ref{PWOxExpre}) and (\ref{PWz1}), one finds
\begin{eqnarray}\label{PWRhoExpre}
\frac{\rho}{r_{s}^{2}}=-\left(\frac{q\langle
O_{x}\rangle}{r_{s}^{1+\Delta}}\right)^{2}\mu_{c}\chi^{\prime\prime}(0)=\Gamma(k,\alpha,m)(\mu-\mu_{c}),
\end{eqnarray}
with $\Gamma(k,\alpha,m)=[2(1+24\alpha)\chi(0)]^{-1}\int^{1}_{0}z^{2\Delta-1}F(z)^{2}dz$, which is a function of $k$, $\alpha$ and $m^{2}$.
For example, one obtains $\rho=1.069\left(\mu-\mu_c\right)$ by taking $a=0.474$, $k=0.25$, $\alpha=0.05$, and $m^{2}=5/4$, where, again, we have taken the freedom to scale the dimensional quantities and chosen $q=1$ and $r_{s}=1$ for simplicity.
We observe that the $RF^2$ correction, the spatial component of the gauge field, and the mass of the vector field will not alter Eq.~(\ref{PWRhoExpre}) except for the prefactor.
Therefore, we argue that, in the vicinity of the transition point, one finds a linear relationship between the charge density and the chemical potential, namely, $\rho\sim(\mu-\mu_{c})$, in the present model.
For the field $A_{\varphi}$, near $\mu_{c}$, Eq.~(\ref{PWaveAvarphir}) can be rewritten into
\begin{eqnarray}\label{PWEMAphizCritical}
(1+24\alpha z^{2}f)A_{\varphi}''+\left[-\frac{1}{z}+24\alpha z^{2}f\left(\frac{1}{z}+\frac{f'}{f}\right)\right]A_{\varphi}'
-\frac{2S_{\varphi}\phi(z)}{z^{2}f}\left(\frac{q\langle
O_{x}\rangle z^{\Delta}F}{r_{s}^{1+\Delta}}\right)^{2}=0,
\end{eqnarray}
which has a general solution
\begin{eqnarray}\label{PWEMAphizSolu}
A_\varphi=S_{\varphi}\phi(z)+S_{\varphi}\left(\frac{q\langle
O_{x}\rangle}{r_{s}^{1+\Delta}}\right)^{2}\int\frac{z}{1+24\alpha z^{2}f}\left[\int\frac{2x^{2\Delta-3}\phi(x)F(x)^{2}}{f(x)}
dx\right]dz.
\end{eqnarray}
By assuming $k=0.25$, $\alpha=0.05$, and $m^{2}=5/4$, as an example, we arrive at $A_\varphi=S_{\varphi}[\phi(z)+(0.0269-0.0288z^{2}+\cdots)\langle O_{x}\rangle^{2}]$ with $a=0.474$, where, again, we have taken $q=1$ and $r_{s}=1$.
Obviously, the solution Eq.~(\ref{PWEMAphizSolu}) depends on the $RF^2$ correction.
\subsection{Numerical study by the shooting method}
In the previous section, we have made use of the Sturm-Liouville method to analytically investigate the properties of the holographic p-wave superfluid phase transition with $RF^{2}$ corrections in the vicinity of the transition point.
Now, we proceed to numerically study the holographic superfluid model by using the shooting method~\cite{HartnollRev,HerzogRev,HorowitzRev,CaiRev}. As the method is not restricted to the immediate vicinity of the critical chemical potential, the results obtained in the present section help to further explore the properties of the $RF^{2}$ correction on the condensation and critical phenomena of the system from a different perspective.
Moreover, it provides a means to compare the numerical results against the analytical ones, as well as to evaluate the accuracy and effectiveness of the expansions used in the Sturm-Liouville approach.
Again, for convenience, we will make use of the scaling properties, Eq.~(\ref{PWSSymmetry}), to assume $q=1$ and $r_{s}=1$ when performing the numerical calculations.
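To make the procedure concrete, the following minimal Python sketch (a schematic version of the solver, restricted here to the linearized problem at criticality with $k=0$ and $m^{2}=5/4$; the cutoffs and the root bracket are our choices) determines the critical chemical potential from Eq.~(\ref{PWRhozCriMotion}) by shooting:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

m2 = 5.0 / 4.0
z_tip, z_bdy = 1.0 - 1e-6, 1e-3       # cutoffs near the tip and the boundary

f  = lambda z: (1.0 - z**4) / z**2
fp = lambda z: -2.0 / z**3 - 2.0 * z

def rhs(z, y, Lam2):
    rho, drho = y
    ddrho = (-(1.0/z + fp(z)/f(z)) * drho
             - (Lam2 - m2/z**2) * rho / (z**2 * f(z)))
    return [drho, ddrho]

def source_mode(Lam):
    # Regularity at the tip fixes rho'(1) = (Lam^2 - m^2) rho(1)/4, which
    # follows from evaluating f times Eq. (PWRhozCriMotion) at z = 1.
    sol = solve_ivp(rhs, [z_tip, z_bdy], [1.0, (Lam**2 - m2)/4.0],
                    args=(Lam**2,), rtol=1e-10, atol=1e-12)
    # Near z = 0, rho ~ rho_- z^{-1/2} + rho_+ z^{5/2}; multiplying by
    # z^{1/2} isolates the source coefficient rho_-.
    return sol.y[0, -1] * sol.t[-1]**0.5

print(brentq(source_mode, 2.5, 3.0))  # ~2.785, cf. the k = 0 entries above
\end{verbatim}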
\begin{figure}[ht]
\includegraphics[scale=0.626]{CondPWMCVk0.eps}\;\;\includegraphics[scale=0.6]{RLPWMCVk0.eps}
\includegraphics[scale=0.626]{CondPWMCVk25.eps}\;\;\includegraphics[scale=0.6]{RLPWMCVk25.eps}
\includegraphics[scale=0.626]{CondPWMCVk5.eps}\;\;\includegraphics[scale=0.6]{RLPWMCVk5.eps}
\caption{\label{PWaveMCV}
(color online) The condensate $\langle O_{x}\rangle$ (left column) and charge density $\rho$ (right column) as functions of chemical potential $\mu$ for different values of $\alpha$ and $k=S_{\varphi}/\mu$ with $m^{2}=5/4$ in the holographic p-wave superfluid phase transition in the Maxwell complex vector field model.
In each plot, different curves correspond to $\alpha=-0.03$ (orange), $-0.01$ (blue), $0.00$ (red), $0.01$ (green) and $0.05$ (black) respectively.}
\end{figure}
By carrying out the numerical integration from the tip to infinity, one can solve the equations of motion (\ref{PWaveRhoxr}), (\ref{PWaveAtr}) and (\ref{PWaveAvarphir}).
On the left column of Fig.~\ref{PWaveMCV}, we plot the condensate of the vector operator $O_{x}$ as a function of the chemical potential for different values of $\alpha$, $k$, with given vector mass $m^{2}=5/4$.
It is shown that the condensation of $O_{x}$ occurs for different values of $\alpha$ and $k$ once $\mu>\mu_{c}$.
As a comparison, we also present the critical chemical potential $\mu_{c}$ obtained numerically by using the shooting method in Table \ref{PWaveTable}.
It is noted that a satisfactory degree of agreement is achieved between the two methods.
This indicates that the Sturm-Liouville method is indeed powerful for the analytical study of holographic superfluid models, even in the presence of the $RF^{2}$ corrections.
It is confirmed that the critical chemical potential $\mu_{c}$ increases as $\alpha$ increases for the case where $k\neq0$, but is essentially independent of $\alpha$ for the case where $k=0$, as can be observed both from Fig.~\ref{PWaveMCV} and Tables \ref{PWaveTable} and \ref{PWaveTableM0}.
On the other hand, from Fig.~\ref{PWaveMCV}, we find that, for all cases considered here, the vector operator $O_{x}$ is single-valued near the critical chemical potential and the condensate drops to zero continuously as the transition takes place.
By fitting these curves, we find that for small condensate, there is a square root behavior $\langle O_{x}\rangle\sim\left(\mu-\mu_c\right)^{1/2}$, which is also in good agreement with the analytical results discussed previously in Eq.~(\ref{PWOxExpre}).
As discussed before, this indicates the emergence of a second-order phase transition with the mean-field critical exponent $1/2$.
The $RF^2$ correction and the spatial component of the gauge field do not affect the result.
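For completeness, the exponent itself can be extracted from the numerical curves with a simple log-log fit; in the Python sketch below, the arrays \texttt{mu} and \texttt{Ox} are placeholders for the solver output just above the transition:
\begin{verbatim}
import numpy as np

def critical_exponent(mu, Ox, mu_c):
    # Fit log<O_x> = beta * log(mu - mu_c) + const; the slope is beta.
    beta, const = np.polyfit(np.log(mu - mu_c), np.log(Ox), 1)
    return beta                        # ~0.5 for all alpha and k considered
\end{verbatim}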
Furthermore, we present, in the right column of Fig.~\ref{PWaveMCV}, the charge density $\rho$ as a function of the chemical potential for different values of $\alpha$ and $k$ with given $m^{2}=5/4$.
For given $\alpha$ and $k$, we observe that the system is described by the AdS soliton solution when $\mu$ is small, which can be interpreted as the insulator phase~\cite{Nishioka-Ryu-Takayanagi}.
When $\mu$ increases and reaches $\mu_{c}$, there is a phase transition, and the system transforms into the superfluid phase.
It is clearly shown that a linear relationship exists between the charge density and the chemical potential near $\mu_{c}$, consistent with the analytical result in Eq.~(\ref{PWRhoExpre}).
Here, we have numerically confirmed that the $RF^2$ correction and the spatial component of the gauge field do not affect the linear relation.
\section{p-Wave superfluid in the Yang-Mills theory}
In the previous section, we investigated the holographic p-wave superfluid with $RF^{2}$ corrections in the Maxwell complex vector field model.
Now, we extend our study of the holographic superfluid model to the non-abelian gauge field, namely, $SU(2)$ Yang-Mills theory with $RF^{2}$ corrections.
The action of the model reads
\begin{eqnarray}\label{YMaction}
S=\int
d^{5}x\sqrt{-g}\left[-\frac{1}{4\hat{g}^{2}}(F^{a}_{\mu\nu}F^{a\mu\nu}-4\mathcal{L}^{a}_{RF^{2}})\right],
\end{eqnarray}
with the $RF^2$ correction term
\begin{eqnarray}
\mathcal{L}^{a}_{RF^{2}}=\alpha (R_{\mu\nu\rho\lambda}F^{a\mu\nu}F^{a\rho\lambda}-4R_{\mu\nu}F^{a\mu\rho}F^{a\nu}_{\rho}
+RF^{a\mu\nu}F^{a}_{\mu\nu}),
\end{eqnarray}
where $\hat{g}$ is the Yang-Mills coupling constant and $F^{a}_{\mu\nu}=\partial_{\mu}A^{a}_{\nu}- \partial_{\nu}A^{a}_{\mu}+\varepsilon^{abc}A^{b}_{\mu}A^{c}_{\nu}$ is the field strength of the $SU(2)$ Yang-Mills field, with the totally antisymmetric tensor $\varepsilon^{abc}$.
$A^{a}_{\mu}$ are the components of the matrix-valued gauge field $A=A^{a}_{\mu}\tau^{a}dx^{\mu}$, where $\tau^{a}$ represent the three generators of the $SU(2)$ algebra, which satisfy the commutation relation $[\tau^{a},\tau^{b}]=\varepsilon^{abc}\tau^{c}$.
Since we need a nonvanishing vector potential, we will adopt the following {\it ansatz} for the gauge fields~\cite{ZengSZ}
\begin{eqnarray}\label{YMansatz}
A(r)=A_{t}(r)\tau^{3}dt+\psi(r)\tau^{1}dx+A_{\varphi}(r)\tau^{3}d\varphi,
\end{eqnarray}
where the $U(1)$ subgroup of $SU(2)$ generated by $\tau^{3}$ is identified to be the electromagnetic gauge group.
Following Refs.~\cite{ZengSZ,GubserPRL2008}, we adopt the scenario of spontaneous symmetry breaking in which the local $U(1)$ symmetry is broken in the bulk, corresponding to the holographic superfluid phase transition on the boundary. The latter is characterized by the condensation of the nonzero component $\psi(r)$ along the $x$-direction.
Subsequently, the vacuum state in question is no longer invariant with respect to the $U(1)$ symmetry, and therefore, according to the Higgs mechanism, a massive gauge boson associated with $A_t$ is produced.
By making use of Eq.~(\ref{YMansatz}), one obtains the following equations of motion
\begin{eqnarray}\label{YMPWavePsir}
&&\left[1+\frac{8\alpha f}{r}\left(\frac{1}{r}+\frac{f'}{f}\right)\right]\psi''+\left[\left(\frac{1}{r}+\frac{f'}{f}\right)
+\frac{8\alpha}{r}\left(-\frac{f}{r^2}+\frac{2 f'}{r}+\frac{f'^2}{f}+f''\right)\right]\psi'\nonumber \\
&&+\left\{\left[1+4\alpha\left(\frac{2f'}{r}+f''\right)\right]\frac{A_{t}^{2}}{r^{2}f}
-\left[1+\frac{8\alpha f}{r}\left(\frac{1}{r}+\frac{f'}{f}\right)\right]\frac{A_{\varphi}^{2}}{f^{2}}\right\}\psi=0,
\end{eqnarray}
\begin{eqnarray}\label{YMPWaveAtr}
&&\left[1+\frac{8\alpha f}{r}\left(\frac{1}{r}+\frac{f'}{f}\right)\right]A_{t}''+\left[\left(\frac{1}{r}+\frac{f'}{f}\right)
+\frac{8\alpha}{r}\left(-\frac{f}{r^2}+\frac{2 f'}{r}+\frac{f'^2}{f}+f''\right)\right]A_{t}'\nonumber \\
&&-\left[1+4\alpha\left(\frac{2 f'}{r}+f''\right)\right]\frac{\psi^{2}}{r^{2}f}A_{t}=0,
\end{eqnarray}
\begin{eqnarray}\label{YMPWaveAphir}
\left(1+\frac{24\alpha f}{r^2}\right)A_{\varphi}''+\left[\frac{3}{r}+\frac{24\alpha f}{r^2}\left(\frac{1}{r}+\frac{f'}{f}\right)\right]A_{\varphi}'
-\left[1+\frac{8\alpha f}{r}\left(\frac{1}{r}+\frac{f'}{f}\right)\right]\frac{\psi^{2}}{r^{2}f}A_{\varphi}=0,
\end{eqnarray}
where the prime denotes the derivative with respect to $r$.
Obviously, in the case when $\alpha=0$, the two sets of equations of motion are equivalent if we further have $m^{2}=0$.
This can be readily verified by redefining the field by $\rho_{x}(r)=\psi(r)/\sqrt{2}$ in Eqs.~(\ref{PWaveRhoxr}), (\ref{PWaveAtr}) and (\ref{PWaveAvarphir}).
This result is essentially consistent with the arguments given by the authors of Ref.~\cite{CaiLLWPWave}, where they concluded that the complex vector field model could be viewed as a generalization of the $SU(2)$ Yang-Mills model.
However, for the present model, where the $RF^{2}$ correction has been introduced, such a conclusion does not hold.
As will be discussed below, the situation is entirely different when we consider the $RF^{2}$ corrections where $\alpha\neq0$.
We can solve the equations of motion (\ref{YMPWavePsir}), (\ref{YMPWaveAtr}) and (\ref{YMPWaveAphir}) by imposing the appropriate boundary conditions for the matter fields, i.e., the regularity condition at the tip $r=r_{s}$ and boundary behavior at the asymptotic boundary $r\rightarrow \infty$
\begin{eqnarray}\label{YMInfinityCondition}
\psi=\psi_{0}+\frac{\psi_{2}}{r^{2}},~~A_t=\mu-\frac{\rho}{r^2},
~~A_\varphi=S_\varphi-\frac{J_\varphi}{r^2},
\end{eqnarray}
where $\psi_{0}$ and $\psi_{2}=\langle O\rangle$ can be identified as a source and the expectation value of the dual operator.
We will use the asymptotic boundary condition $\psi_{0}=0$ since we are interested in the case where the condensation of the dual operator is spontaneous.
From Eqs.~(\ref{YMPWavePsir}), (\ref{YMPWaveAtr}) and (\ref{YMPWaveAphir}), one also finds that these equations are invariant regarding the following scaling transformation
\begin{eqnarray}
&&r\rightarrow\lambda r\,,\hspace{0.5cm}(t,\varphi,x,y)\rightarrow\frac{1}{\lambda}(t,\varphi,x,y)\,,\hspace{0.5cm}(\psi,A_{t},A_{\varphi})\rightarrow\lambda(\psi,A_{t},A_{\varphi})\,,\hspace{0.5cm}\nonumber \\&&(\mu,S_\varphi)\rightarrow\lambda(\mu,S_\varphi)\,,\hspace{0.5cm}(\rho,J_\varphi)\rightarrow\lambda^{3}(\rho,J_\varphi)\,,\hspace{0.5cm}\psi_{2}\rightarrow\lambda^{3}\psi_{2}\,,
\label{SLsymmetry-1}
\end{eqnarray}
where $\lambda$ is positive.
\subsection{Analytical approach by the Sturm-Liouville method}
We will closely follow the strategy utilized for the analysis regarding the Sturm-Liouville method in the previous section for the Maxwell complex vector field model.
First, we introduce the coordinate $z=r_{s}/r$.
By taking into account that the field $\psi$ vanishes as one approaches the critical chemical potential $\mu_{c}$ from below, one may again derive the reduced equations of motion for the matter fields.
It is not difficult to show that one arrives at equations identical to those obtained in Eqs.~(\ref{PWaveAtzCritical}) and (\ref{PWaveAphizCritical}) for $A_{t}$ and $A_{\varphi}$, respectively.
This means that, as $\mu\rightarrow\mu_{c}$ from below the critical point, one obtains the physical solutions $A_{t}(z)=\mu$ and $A_{\varphi}(z)=S_{\varphi}\phi(z)$, identical to those of the Maxwell complex vector field model.
Thus, as $\mu\rightarrow\mu_{c}$, in terms of $z$, Eq.~(\ref{YMPWavePsir}) becomes
\begin{eqnarray}\label{YMpsiCriMotion}
&&\left[1+8\alpha z^{3}f\left(\frac{1}{z}-\frac{f'}{f}\right)\right]\psi''+\left[\left(\frac{1}{z}+\frac{f'}{f}\right)+8\alpha z\left(3f-2zf'-\frac{z^{2}f'^{2}}{f}-z^{2}f''\right)\right]\psi'\nonumber \\&&+\left\{(1+4\alpha z^{4}f'')\frac{1}{z^{2}f}\left(\frac{\mu}{r_{s}}\right)^{2}-\left[1+8\alpha z^{3}f\left(\frac{1}{z}-\frac{f'}{f}\right)\right]\frac{\phi^{2}}{z^{4}f^{2}}\left(\frac{S_{\varphi}}{r_{s}}\right)^{2}\right\}\psi=0,
\end{eqnarray}
where the function $\phi(z)$ has been defined in Eq.~(\ref{PWaveAtzCriticalSolution}).
When comparing with Eq.~(\ref{PWRhozCriMotion}) in the case of $S_{\varphi}=0$ and $m^{2}=0$, we find that Eq.~(\ref{YMpsiCriMotion}) is explicitly dependent on the coupling $\alpha$ even when $S_{\varphi}=0$.
This leads to the dependence of the critical chemical potential $\mu_{c}$ on the $RF^2$ correction in the holographic p-wave insulator/superconductor model ($k=0$) for the Yang-Mills theory.
Regarding the asymptotic behavior near the boundary, Eq.~(\ref{YMInfinityCondition}), we assume that $\psi$ takes the form
\begin{eqnarray}\label{YMWaveFz}
\psi(z)\sim \frac{\langle O\rangle}{r^{2}_{s}} z^{2}F(z),
\end{eqnarray}
where the trial function $F(z)$, with the boundary condition $F(0)=1$, obeys the equation of motion
\begin{eqnarray}\label{YMFzmotion}
(GF^{\prime})^{\prime}+G\left[Q+P\left(\frac{\mu}{r_{s}}\right)^{2}-W\left(\frac{S_{\varphi}}{r_{s}}\right)^{2}\right]F=0,
\end{eqnarray}
with
\begin{eqnarray}\label{YMGHPW}
G=(1+24\alpha+8\alpha z^{4})z^{5}f,~~
Q=-\frac{8(1+16\alpha+16\alpha z^{4})}{(1+24\alpha+8\alpha z^{4})f},~~
P=\frac{(1+24\alpha-8\alpha z^{4})}{(1+24\alpha+8\alpha z^{4})(1-z^{4})},
\end{eqnarray}
where $W(z)$ has been introduced in Eq.~(\ref{PWaveTUVWFu}). Solving the Sturm-Liouville eigenvalue problem \cite{Gelfand-Fomin}, we find
\begin{eqnarray}\label{YMMUC}
\Lambda^{2}=\left(\frac{\mu}{r_{s}}\right)^{2}=\frac{\int^{1}_{0}G(F'^{2}-QF^{2})dz}{\int^{1}_{0}G(P-Wk^{2})F^{2}dz} ,
\end{eqnarray}
which can be used to estimate the minimum eigenvalue of $\Lambda=\mu/r_{s}$.
One easily observes that $[G(z)F(z)F'(z)]|^1_{0}=0$, since $G(1)\equiv0$ and $G(0)\equiv0$.
Therefore, similar to the Maxwell complex vector field model, we assume the trial function to be $F(z)=1-az$ with a constant $a$.
From the expression (\ref{YMMUC}), we can obtain the minimum eigenvalue of $\Lambda^{2}$ and the corresponding value of $a$ for different values of $k$ and $\alpha$.
For example, in the case of $k=0$ and $\alpha=0$, we have
\begin{eqnarray}
\Lambda^{2}=\left(\frac{\mu}{r_{s}}\right)^{2}=\frac{5(224-384a+189a^{2})}{14(15-24a+10a^{2})},
\end{eqnarray}
whose minimum is $\Lambda^{2}_{min}=5.132$ with $a=0.432$.
In comparison with the analytical result $\Lambda_{c}=\mu_{c}/r_{s}=2.267$ from the trial function $F(z)=1-az^{2}$ shown in Table 1 of Ref.~\cite{ZhaoPJ2013}, we have $\Lambda_{c}=2.265(47)$, which is closer to the numerical result $\Lambda_{c}=2.265(23)$.
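For general $k$ and $\alpha$, the quotient in Eq.~(\ref{YMMUC}) can also be evaluated by direct quadrature. In the Python sketch below (an illustration; it uses the truncated $\phi(z)$ of Eq.~(\ref{PWaveAtzCriticalSolution}) and the simplified products $GQ$ and $GP$ that follow from Eq.~(\ref{YMGHPW}) with $f(z)=(1-z^{4})/z^{2}$), the minimization should reproduce, for example, $\Lambda_{c}\approx2.43$ for $k=0.25$ and $\alpha=0.05$:
\begin{verbatim}
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

alpha, k = 0.05, 0.25

def phi(z):
    return (1 - z**2) * (1 + 8*alpha*(1 + z**2 + z**4)
                         + 192/5 * alpha**2 * (z**2 - 1)
                           * (2 + 4*z**2 + 6*z**4 + 3*z**6))

def Lambda2(a):
    F, dF = (lambda z: 1 - a*z), (lambda z: -a)
    G  = lambda z: (1 + 24*alpha + 8*alpha*z**4) * z**3 * (1 - z**4)
    GQ = lambda z: -8 * (1 + 16*alpha + 16*alpha*z**4) * z**5   # = G*Q
    GP = lambda z: (1 + 24*alpha - 8*alpha*z**4) * z**3         # = G*P
    GW = lambda z: ((1 + 24*alpha + 8*alpha*z**4) * z**3
                    * phi(z)**2 / (1 - z**4))                   # = G*W
    num = quad(lambda z: G(z)*dF(z)**2 - GQ(z)*F(z)**2, 0, 1)[0]
    den = quad(lambda z: (GP(z) - k**2*GW(z)) * F(z)**2, 0, 1)[0]
    return num / den

res = minimize_scalar(Lambda2, bounds=(0.0, 1.0), method='bounded')
print(res.x, res.fun**0.5)   # expect Lambda_c ~ 2.43, cf. the table above
\end{verbatim}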
In Table \ref{YMTable}, we present the calculated critical chemical potential $\Lambda_{c}$ for given $k$ and $\alpha$.
\begin{table}[ht]
\begin{center}
\caption{\label{YMTable}
The obtained critical chemical potential $\Lambda_{c}=\mu_{c}/r_{s}$ for the vector operator $O$ obtained analytically by the Sturm-Liouville method (left column) and numerically by the shooting method (right column).
The calculations are carried out with different $\alpha$, $k=S_{\varphi}/\mu$ in the holographic p-wave superfluid of the Yang-Mills field model.}
\begin{tabular}{c c c c c c c}
\hline
$\alpha$ &~~~~-0.03 &~~~~-0.01 &~~~~0 &~~~~0.01 &~~~~0.05 \\
\hline
$k=0.00$ &~~~~1.746~~1.704 &~~~~2.199~~2.199 &~~~~2.265~~2.265 &~~~~2.307~~2.306 &~~~~2.383~~2.383 \\
$k=0.25$ &~~~~1.747~~1.707 &~~~~2.212~~2.214 &~~~~2.285~~2.285 &~~~~2.333~~2.329 &~~~~2.433~~2.416 \\
$k=0.50$ &~~~~1.751~~1.714 &~~~~2.250~~2.260 &~~~~2.345~~2.345 &~~~~2.418~~2.402 &~~~~2.599~~2.524 \\
\hline
\end{tabular}
\end{center}
\end{table}
From Table \ref{YMTable}, for given $k$, one observes that the critical chemical potential $\mu_{c}$ increases with increasing $\alpha$.
This result agrees reasonably well with the findings in the Maxwell complex vector field model for $k\neq0$.
It indicates that a larger $RF^{2}$ correction hinders the phase transition.
Besides, for a given $\alpha$, $\mu_{c}$ becomes larger as $k$ increases, which is, again, consistent with the results in the Maxwell complex vector field model.
This implies that a nonvanishing spatial component of the gauge field makes the vector condensate harder to form~\cite{LaiPJW2016}.
Interestingly enough, for the case of $k=0$, one sees that $\mu_{c}$ is dependent on $\alpha$.
This is in contrast to the effect of the $RF^2$ correction for the Maxwell complex vector field model with $m^{2}=0$.
There, $\mu_{c}$ is independent of $\alpha$, as shown in Table~\ref{PWaveTableM0}.
Thus, we conclude that, in the case of $k=0$, the $RF^2$ corrections have entirely different effects between the insulator/superconductor phase transition of the Yang-Mills theory and that of the Maxwell complex vector field model.
This means that we can use the $RF^2$ corrections to distinguish between these two types of holographic superfluid models.
In order to analyze the critical phenomena of the system, we again expand $A_{t}(z)$ as $\mu\rightarrow\mu_{c}$ in terms of $\langle O\rangle$ as
\begin{eqnarray}\label{YMEigenvalue}
A_{t}(z)\sim\mu_{c}+\frac{\mu_{c}}{r_{s}^{6}}\langle O\rangle^2\chi(z)+\cdots,
\end{eqnarray}
which gives rise to the following equation of motion in terms of $\chi(z)$
\begin{eqnarray}\label{YMXiEoM}
(M\chi')'-(1+24\alpha-8\alpha z^4)z^{3}F(z)^{2}=0,
\end{eqnarray}
where we have introduced the boundary condition $\chi(1)=0$ at the tip, and the function $M(z)$ has been defined in Eq.~(\ref{PWMz}).
By considering the asymptotic behavior and the expanded form of $A_{t}$ near $z\rightarrow0$, one finds
\begin{eqnarray}\label{YMPhiExpand}
A_{t}(z)\simeq\mu-\frac{\rho}{r_{s}^{2}}z^2\simeq\mu_c
+\mu_{c}\left(\frac{\langle O\rangle}{r^{3}_{s}}\right)^{2}\left[\chi(0)+\chi^\prime(0)z+\frac{1}{2}\chi^{\prime\prime}(0)z^2+\cdot\cdot\cdot\right].
\end{eqnarray}
By equating the coefficients of the $z^0$ term on both sides of the above equation, one gets
\begin{eqnarray}\label{YMOOExp}
\frac{\langle O\rangle}{r^{3}_{s}}=\frac{1}{\left[\mu_c\chi(0)\right]^{\frac{1}{2}}}\left(\mu-\mu_c\right)^{\frac{1}{2}},
\end{eqnarray}
where $\chi(0)=c_{3}-\int^{1}_{0}M^{-1} \left[\int^{z}_{1}(1+24\alpha-8\alpha x^4)x^{3}F(x)^{2}dx\right]dz$ with the constant of integration $c_{3}$ determined by the boundary condition of $\chi(z)$.
For given $k=0.25$ and $\alpha=0.05$, as an example, we find $\langle O\rangle\approx3.503(\mu-\mu_{c})^{1/2}$ with $a=0.498$, where, for simplicity, we have scaled the system to choose $r_{s}=1$.
Since Eq.~(\ref{YMOOExp}) is valid in general, we obtain $\langle O\rangle\sim \left(\mu-\mu_c\right)^{1/2}$ near the critical point.
This indicates that the phase transition of the holographic superfluid with $RF^2$ corrections based on the Yang-Mills theory is of the second order.
Moreover, the critical exponent of the system attains the mean-field value 1/2.
It is noted that the $RF^2$ correction and the spatial component of the gauge field do not influence the result.
In addition, by comparing the coefficients of the $z^2$ terms on both sides of Eq.~(\ref{YMPhiExpand}), we have
\begin{eqnarray}\label{YMExpRho}
\frac{\rho}{r_{s}^{2}}=-\frac{1}{2}\left(\frac{\langle
O\rangle}{r^{3}_{s}}\right)^2\mu_c\chi^{\prime\prime}(0)=\Gamma(k,\alpha)(\mu-\mu_{c}),
\end{eqnarray}
with $\Gamma(k,\alpha)=[2(1+24\alpha)\chi(0)]^{-1} \int^{1}_{0}(1+24\alpha-8\alpha z^4)z^{3}F(z)^{2}dz$.
This is a function of the parameters $k$ and $\alpha$.
For example, in the case of $k=0.25$ with $\alpha=0.05$, we obtain $\rho=1.270\left(\mu-\mu_c\right)$ with $a=0.498$, where we have again scaled the system to have $r_{s}=1$, for simplicity.
Obviously, in the vicinity of the critical point, the linear relationship between the charge density and chemical potential $\rho\sim(\mu-\mu_{c})$ is valid in general for the holographic superfluid model of the Yang-Mills theory.
Similarly, when $\mu\rightarrow\mu_{c}$, Eq.~(\ref{YMPWaveAphir}) for the field $A_{\varphi}$ can be rewritten into
\begin{eqnarray}\label{YMEMAphizCri}
(1+24\alpha z^{2}f)A_{\varphi}''+\left[-\frac{1}{z}+24\alpha z^{2}f\left(\frac{1}{z}+\frac{f'}{f}\right)\right]A_{\varphi}'
-\left[1+8\alpha z^{3}f\left(\frac{1}{z}-\frac{f'}{f}\right)\right]\frac{S_{\varphi}\phi(z)}{z^{2}f}\left(\frac{\langle
O\rangle z^{2}F}{r^{3}_{s}}\right)^{2}=0.
\end{eqnarray}
Hence we finally find
\begin{eqnarray}\label{YMEMAphizSolution}
A_\varphi=S_{\varphi}\phi(z)+S_{\varphi}\left(\frac{\langle
O\rangle}{r_{s}^{3}}\right)^{2}\int\frac{z}{1+24\alpha z^{2}f(z)}\int\left\{1+8\alpha x^{3}f(x)\left[\frac{1}{x}-\frac{f'(x)}{f(x)}\right]\right\}\frac {x\phi(x)F(x)^{2}}{f(x)}dxdz.
\end{eqnarray}
As an example, for given $k=0.25$ and $\alpha=0.05$, we have $A_\varphi=S_{\varphi}[\phi(z)+(0.0377-0.0570z^{2}+\cdots)\langle O\rangle^{2}]$ with $a=0.498$ and $r_{s}=1$.
It is consistent with the previous findings for the Maxwell complex vector field model.
\subsection{Numerical study by the shooting method}
In this section, the shooting method is employed to solve the equations of motion (\ref{YMPWavePsir}), (\ref{YMPWaveAtr}) and (\ref{YMPWaveAphir}).
In our numerical calculations, $r_{s}=1$ is chosen for convenience.
The results are presented in Fig.~\ref{YMfigure}.
In the left column, we show the condensate of the vector operator $O$ as a function of the chemical potential.
It is found that a phase transition occurs as $\mu$ increases and reaches $\mu_{c}$. Subsequently, the AdS soliton transforms into the superfluid phase.
The transition point is dependent on specific values of $\alpha$ and $k$.
The conclusion that $\alpha$ affects the value of $\mu_{c}$ can also be drawn from the results presented in Table \ref{YMTable}.
Moreover, from Table \ref{YMTable}, it is observed that the numerical results (shown in the right column) agree well with the analytical ones derived from the Sturm-Liouville method (shown in the left column).
From Fig.~\ref{YMfigure} and Table \ref{YMTable}, we confirm that for given $k$, the critical chemical potential increases with increasing $\alpha$, as obtained previously in the last section.
It implies that a larger $RF^{2}$ correction will make the vector condensate harder to form.
\begin{figure}[ht]
\includegraphics[scale=0.616]{CondPWYMk0.eps}\;\;\includegraphics[scale=0.6]{RLPWYMk0.eps}
\includegraphics[scale=0.616]{CondPWYMk25.eps}\;\;\includegraphics[scale=0.6]{RLPWYMk25.eps}
\includegraphics[scale=0.616]{CondPWYMk5.eps}\;\;\includegraphics[scale=0.6]{RLPWYMk5.eps}
\caption{\label{YMfigure}
(color online) The condensate $\langle O\rangle$ (left column) and charge density $\rho$ (right column) as functions of the chemical potential $\mu$ for different values of $\alpha$ and $k=S_{\varphi}/\mu$ in the holographic p-wave superfluid phase transition of the Yang-Mills theory.
In each plot, different curves correspond to $\alpha=-0.03$ (orange), $-0.01$ (blue), $0.00$ (red), $0.01$ (green) and $0.05$ (black) respectively.}
\end{figure}
From the left column of Fig.~\ref{YMfigure}, one also finds that the transition is of the second order and the condensate approaches zero according to the form $\langle O\rangle\sim(\mu-\mu_{c})^{\beta}$ with the critical exponent $\beta=1/2$ in accordance with the mean-field theory. For all cases considered here, this result is independent of either the $RF^2$ correction or the spatial component of the gauge field.
This is in good agreement with the analytical result discussed previously in Eq.~(\ref{YMOOExp}).
From the right column of Fig.~\ref{YMfigure}, we confirm numerically a linear relationship between the charge density and chemical potential in the vicinity of $\mu_{c}$, namely, $\rho\sim(\mu-\mu_{c})$.
For all the cases considered here, this agrees well with the analytical result derived in Eq.~(\ref{YMExpRho}).
The $RF^2$ correction and the spatial component of the gauge field do not affect the observed linearity.
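As a practical aside, both the critical exponent and the linear coefficient can be read off from the numerical data by simple least-squares fits. The Python sketch below illustrates the procedure with synthetic stand-in data; the arrays are fabricated for demonstration and do not reproduce the actual data of Fig.~\ref{YMfigure}.
\begin{verbatim}
import numpy as np

# mu - mu_c, with mu_c taken from the shooting code (synthetic here):
dmu = np.array([0.002, 0.007, 0.012, 0.017, 0.022])
O   = 2.0 * np.sqrt(dmu)   # synthetic condensate, <O> ~ dmu^(1/2)
rho = 2.0 * dmu            # synthetic charge density, rho ~ dmu

# critical exponent from the log-log slope; expect beta ~ 0.5
beta = np.polyfit(np.log(dmu), np.log(O), 1)[0]

# linear fit of rho versus (mu - mu_c); intercept should be ~ 0
slope, intercept = np.polyfit(dmu, rho, 1)

print(beta, slope, intercept)
\end{verbatim}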
\section{Conclusions}
In order to understand the influences of the $1/N$ or $1/\lambda$ corrections on the vector condensate in the holographic p-wave superfluid, we have investigated the role of the $RF^{2}$ corrections in the AdS soliton background for both the Maxwell complex vector field model and Yang-Mills theory.
In the probe limit, the calculations were carried out by employing the analytical Sturm-Liouville method as well as the numerical shooting method.
The results obtained by the two distinct methods were found to agree with each other to a satisfactory degree.
By turning on the spatial components of the gauge field, we observed that the critical chemical potential $\mu_{c}$ increases as the strength of the $RF^{2}$ correction, $\alpha$, increases.
This indicates that a larger $RF^{2}$ correction hinders the superfluid phase transition in both models. In the absence of the superfluid velocity, however, we noted that the critical chemical potential $\mu_{c}$ is insensitive to $\alpha$ for the Maxwell complex vector field model, while it depends sensitively on $\alpha$ in the Yang-Mills theory. In other words, the $RF^2$ corrections have very different effects in the two models.
This feature might be attributed to the intrinsic difference between the two models in question.
To be more specific, although both models effectively involve a vector field as well as electromagnetic degrees of freedom and their condensates, the mass of the vector field arises from explicit symmetry breaking in the complex vector model, whereas the relevant degree of freedom in the Yang-Mills theory is obtained through spontaneous breaking of the $SU(2)$ gauge symmetry. Moreover, by taking both the mass of the vector field in the Maxwell complex vector field model and the $RF^2$ correction to zero, one can readily show that the two sets of equations of motion become equivalent.
This is similar to what has been pointed out in Ref.~\cite{CaiLLWPWave}, whose authors argued that the complex vector model can be viewed as a generalization of the Yang-Mills model of the holographic superconductor/superfluid.
These characteristics may thus be utilized to distinguish between the two types of superfluid models. Furthermore, for both models, we showed that the phase transition of the system is of second order, and that a linear relationship holds between the charge density and the chemical potential in the vicinity of the critical point.
The presence of the $RF^2$ correction or the spatial component of the gauge field does not modify this result.
The present work has been carried out in the probe limit. Although this approximation is known to capture the essential features of the problem while significantly simplifying the mathematical formulation, it would be of great interest to extend the study by taking the backreaction into consideration.
We plan to continue the work in a future study.
\begin{acknowledgments}
This work was supported by the National Natural Science Foundation of China under Grant Nos. 11775076, 11875025, 11705054 and 11690034; Hunan Provincial Natural Science Foundation of China under Grant No. 2016JJ1012;
as well as Brazilian funding agencies Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP),
Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq), and Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (CAPES).
\end{acknowledgments}