\begin{document} \title[On the Digraph of a Unitary Matrix]{On the Digraph of a Unitary Matrix } \author{Simone Severini} \address{Computer Science, Univ. Bristol, Bristol, U.K.} \email{severini@cs.bris.ac.uk} \date{May 2002: Published in SIAM Journal on Matrix Analysis and Applications, Volume 25, Number 1, pp. 295-300, July 2003.} \subjclass[2000]{05C20, 51F25, 81P68 \\ DOI. 10.1137/S0895479802410293} \keywords{digraphs, unitary matrices, quantum random walks} \begin{abstract} Given a matrix $M$ of size $n$, the digraph $D$ on $n$ vertices is said to be the \emph{digraph of} $M$, when $M_{ij}\neq 0$ if and only if $\left( v_{i},v_{j}\right) $ is an arc of $D$. We give a necessary condition, called strong quadrangularity, for a digraph to be the digraph of a unitary matrix. With the use of such a condition, we show that a line digraph, $ \overrightarrow{L}D$, is the digraph of a unitary matrix if and only if $D$ is Eulerian. It follows that, if $D$ is strongly connected and $ \overrightarrow{L}D$ is the digraph of a unitary matrix then $ \overrightarrow{L}D$ is Hamiltonian. We conclude with some elementary observations. Among the motivations of this paper are coined quantum random walks, and, more generally, discrete quantum evolution on digraphs. \end{abstract} \maketitle \section{Introduction} Let $D=\left( V,A\right) $ be a digraph on $n$ vertices, with labelled vertex set $V\left( D\right) $, arc set $A\left( D\right) $ and adjacency matrix $M\left( D\right) $. We assume that $D$ may have loops and multiple arcs. Let $M$ be a matrix over any field. A digraph $D$ is the \emph{digraph of }$M$, or, equivalently, the \emph{pattern of }$M$, if $ {\vert} V\left( D\right) {\vert} =n$, and, for every $v_{i},v_{j}\in V(D)$, $\left( v_{i},v_{j}\right) \in A\left( D\right) $ if and only if $M_{ij}\neq 0$. 
The \emph{support} $^{s}M$ of the matrix $M$ is the $\left( 0,1\right) $-matrix with entries \begin{equation*} ^{s}M_{ij}=\left\{ \begin{tabular}{ll} 1 & if $M_{ij}\neq 0,$ \\ 0 & otherwise. \end{tabular} \ \right. \end{equation*} Then the digraph of a matrix is the digraph whose adjacency matrix is the support of the matrix. The \emph{line digraph} of a digraph $D$, denoted by $\overrightarrow{L}D$, is the digraph whose vertex set $V(\overrightarrow{L}D)$ is $A(D)$ and $(\left( v_{i},v_{j}\right) ,\left( v_{j},v_{k}\right) )\in A(\overrightarrow{L}D)$ if and only if $(v_{i},v_{j}),(v_{j},v_{k})\in A(D)$. A \emph{discrete quantum random walk} on a digraph $D$ is a discrete walk on $D$ induced by a unitary transition matrix. The term \emph{quantum random walk} was coined by Gudder (see, \emph{e.g.}, \cite{G88}), who introduced the model and proposed to use it to describe the motion of a quantum object in discrete space-time and the internal dynamics of elementary particles. Recently, quantum random walks have been rediscovered, in the context of quantum computation, by Ambainis \emph{et al.} (see \cite{ABNVW01} and \cite{AKV01}). Since the notion of a quantum random walk is analogous to that of a random walk, interest in quantum random walks has been fostered by the successful use of random walks on combinatorial structures in probabilistic algorithms (see, \emph{e.g.}, \cite{L93}). Clearly, a quantum random walk on a digraph $D$ can be defined if and only if $D$ is the digraph of a unitary matrix. Inspired by the work of David Meyer on quantum cellular automata \cite{M96}, the authors of \cite{ABNVW01} and \cite{AKV01} overcame this obstacle in the following way. In order to define a quantum random walk on a simple digraph $D$, which is regular and is not the digraph of a unitary matrix, a quantum random walk on $\overrightarrow{L}D$ is defined instead. The digraph $\overrightarrow{L}D$ is the digraph of a unitary matrix.
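The notions just introduced are directly computable. The following minimal Python sketch (illustrative only; it is not part of the paper, and the function names are ours) builds the support of a matrix and the arc set of a line digraph from the arc set of $D$.

```python
import numpy as np

def support(M, tol=1e-12):
    """Return the (0,1) support matrix: a 1 exactly where M has a nonzero entry."""
    return (np.abs(np.asarray(M)) > tol).astype(int)

def line_digraph_arcs(arcs):
    """Arc set of the line digraph of D: ((vi,vj),(vj,vk)) is an arc
    whenever (vi,vj) and (vj,vk) are arcs of D."""
    return [((i, j), (u, k)) for (i, j) in arcs for (u, k) in arcs if u == j]

# D with arcs (v0,v0), (v0,v1), (v1,v0): its line digraph has the three
# arcs of D as vertices, and five arcs.
arcs = [(0, 0), (0, 1), (1, 0)]
print(support([[0.5, 0.0], [0.0, -2.0]]).tolist())  # [[1, 0], [0, 1]]
print(len(line_digraph_arcs(arcs)))                 # 5
```

The three-arc digraph above is the one used later in the text to show that a line digraph of a unitary pattern need not be Eulerian.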
Once we choose an appropriate labeling for $V(\overrightarrow{L}D)$, a quantum random walk on $\overrightarrow{L}D$ induces a probability distribution on $V(D)$. The quantum random walk on $\overrightarrow{L}D$ is called the \emph{coined quantum random walk} on $D$. With this scenario in mind, the question which this paper addresses is the following: On which digraphs can quantum random walks be defined? In more general language, we are interested in the combinatorial properties of the digraphs of unitary matrices. We give a simple necessary condition, called \emph{strong quadrangularity}, for a digraph to be the digraph of a unitary matrix. While it seems too daring to conjecture that such a condition is sufficient in the general case, we discover \textquotedblleft accidentally\textquotedblright\ that strong quadrangularity is sufficient when the digraph is a line digraph. We also prove that if the line digraph of a strongly connected digraph is the digraph of a unitary matrix, then it is Hamiltonian. We observe that strong quadrangularity is sufficient to show that certain strongly regular graphs are digraphs of unitary matrices and that $n$-paths, $n$-paths with loops at each vertex, $n$-cycles, directed trees and trees are not. In \cite{GZe88} and \cite{M96} the fact that an $n$-path is not the digraph of a unitary matrix was called the \emph{NO-GO Lemma}. A consequence of the lemma was that there is no nontrivial, homogeneous, local, one-dimensional quantum cellular automaton. Proposition \ref{mey} below can then be interpreted as a simple combinatorial version of the NO-GO Lemma. We refer to \cite{T84} and \cite{BR91} for notions of graph theory and matrix theory, respectively. \section{Digraphs of unitary matrices} Let $D=(V,A)$ be a digraph. A vertex of a digraph is called a \emph{source} (\emph{sink}) if it has no ingoing (outgoing) arcs. A vertex of a digraph is said to be \emph{isolated} if it is not joined to any other vertex.
We assume that $D$ has no sources, no sinks, and no isolated loopless vertices. By this assumption, $M(D)$ has neither zero rows nor zero columns. For every $S\subset V(D)$, denote by \begin{equation*} \begin{tabular}{ccc} $N^{+}\left[ S\right] =\{v_{j}:(v_{i},v_{j})\in A(D),v_{i}\in S\}$ & and & $N^{-}\left[ S\right] =\left\{ v_{i}:(v_{i},v_{j})\in A(D),v_{j}\in S\right\} $ \end{tabular} \end{equation*} the \emph{out-neighbourhood} and \emph{in-neighbourhood} of $S$, respectively. Denote by ${\vert} X {\vert}$ the cardinality of a set $X$. The non-negative integers ${\vert} N^{-}\left[ v_{i}\right] {\vert}$ and ${\vert} N^{+}\left[ v_{i}\right] {\vert}$ are called the \emph{invalency} and \emph{outvalency} of the vertex $v_{i}$, respectively. A digraph $D$ is \emph{Eulerian} if and only if every vertex of $D$ has equal invalency and outvalency. The notion defined in Definition 1 is standard in combinatorial matrix theory (see, \emph{e.g.}, \cite{BR91}). In graph theory, the term \emph{quadrangular} was first used in \cite{GZ98}. \begin{definition} A digraph $D$ is said to be \emph{quadrangular} if, for any two distinct vertices $v_{i},v_{j}\in V(D)$, we have \begin{equation*} \begin{tabular}{lll} $\left\vert N^{+}\left[ v_{i}\right] \cap N^{+}\left[ v_{j}\right] \right\vert \neq 1$ & and & $\left\vert N^{-}\left[ v_{i}\right] \cap N^{-}\left[ v_{j}\right] \right\vert \neq 1$.
\end{tabular} \ \end{equation*} \end{definition} \begin{definition} \label{squad}A digraph $D$ is said to be \emph{strongly quadrangular} if there does not exist a set $S\subseteq V\left( D\right) $ such that, for any two distinct vertices $v_{i},v_{j}\in S$, \begin{equation*} \begin{tabular}{lll} $N^{+}\left[ v_{i}\right] \cap \bigcup_{j\neq i}N^{+}\left[ v_{j}\right] \neq \emptyset $ & and & $N^{+}\left[ v_{i}\right] \cap N^{+}\left[ v_{j}\right] \subseteq T,$ \end{tabular} \ \end{equation*} where $\left\vert T\right\vert <\left\vert S\right\vert $, and similarly for the in-neighbourhoods. \end{definition} \begin{remark} \emph{Note that if a digraph is strongly quadrangular then it is quadrangular.} \end{remark} \begin{lemma} \label{sq}Let $D$ be a digraph. If $D$ is the digraph of a unitary matrix then $D$ is strongly quadrangular. \end{lemma} \begin{proof} Suppose that $D$ is the digraph of a unitary matrix $U$ and that $D$ is not strongly quadrangular. Then there is a set $S\subseteq V\left( D\right) $ such that, for any two distinct vertices $v_{i},v_{j}\in S$, $N^{+}\left[ v_{i}\right] \cap \bigcup_{j\neq i}N^{+}\left[ v_{j}\right] \neq \emptyset $ and $N^{+}\left[ v_{i}\right] \cap N^{+}\left[ v_{j}\right] \subseteq T$, where $\left\vert T\right\vert <\left\vert S\right\vert $. This implies that in $U$ there is a set $S^{\prime }$ of rows, each of which contributes, with at least one nonzero entry, to the inner product with some other row in $S^{\prime }$. In addition, the nonzero entries of any two distinct rows in $S^{\prime }$ which contribute to the inner product of the two rows lie in the same set of columns $T^{\prime }$, with $\left\vert T^{\prime }\right\vert <\left\vert S^{\prime }\right\vert $. Then the rows of $S^{\prime }$ would form a set of orthonormal vectors of dimension smaller than the cardinality of the set itself. This contradicts the hypothesis. The same reasoning holds for the columns of $U$.
\end{proof} Two digraphs $D$ and $D^{\prime }$ are \emph{permutation equivalent} if there are permutation matrices $P$ and $Q$, such that $M\left( D^{\prime }\right) =PM\left( D\right) Q$ (and hence also $P^{-1}M(D^{\prime })Q^{-1}=M(D)$). If $Q=P^{-1}$, then $D$ and $D^{\prime }$ are said to be \emph{isomorphic}. We write $D\cong D^{\prime }$ if $D$ and $D^{\prime }$ are isomorphic. Denote by $I_{n}$ the identity matrix of size $n$. Denote by $A^{\intercal }$ the transpose of a matrix $A$. \begin{lemma} \label{equi}Let $D$ and $D^{\prime }$ be permutation equivalent digraphs. Then $D$ is the digraph of a unitary matrix if and only if $D^{\prime }$ is. \end{lemma} \begin{proof} Suppose that $D$ is the digraph of a unitary matrix $U$. Then, for permutation matrices $P$ and $Q$, we have $PUQ=U^{\prime }$, where $ U^{\prime }$ is a unitary matrix of the digraph $D^{\prime }$. The converse is similar. \end{proof} \begin{lemma} \label{pieni}For any $n$ the complete digraph is the digraph of a unitary matrix. \end{lemma} \begin{proof} The lemma just means that for every $n$ there is a unitary matrix without zero entries. An example is given by the Fourier transform on the group $ \mathbb{Z}/n\mathbb{Z}$ (see, \emph{e.g.} \cite{T99}). \end{proof} A digraph $D$ is said to be $\left( k,l\right) $\emph{-regular} if, for every $v_{i}\in V\left( D\right) $, $\left| N^{-}\left[ v_{i}\right] \right| =k$ and $\left| N^{+}\left[ v_{i}\right] \right| =l$. If $k=l$ then $D$ is said to be simply $k$\emph{-regular}. \begin{remark} \label{triangolo}\emph{Not every }$k$\emph{-regular digraph is the digraph of a unitary matrix. Let} \begin{equation*} M\left( D\right) =\left[ \begin{array}{ccc} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{array} \right] . \end{equation*} \emph{Note that }$D$\emph{\ is }$2$\emph{-regular and it is not quadrangular. } \end{remark} \begin{remark} \emph{Not every quadrangular digraph is the digraph of a unitary matrix. 
Let } \begin{equation*} M\left( D\right) =\left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \end{array} \right] . \end{equation*} \emph{Note that }$D$\emph{\ is quadrangular and is not the digraph of a unitary matrix. In fact, }$D$\emph{\ is not strongly quadrangular.} \end{remark} \begin{definition} A digraph $D$ is said to be \emph{specular} when, for any two distinct vertices $v_{i},v_{j}\in V(D)$, if $N^{+}\left[ v_{i}\right] \cap N^{+}\left[ v_{j}\right] \neq \emptyset $, then $N^{+}\left[ v_{i}\right] =N^{+}\left[ v_{j}\right] $, and, similarly, if $N^{-}\left[ v_{i}\right] \cap N^{-}\left[ v_{j}\right] \neq \emptyset $ then $N^{-}\left[ v_{i}\right] =N^{-}\left[ v_{j}\right] $. \end{definition} \begin{definition} An $n\times m$ matrix $M$ is said to have \emph{independent submatrices} $M_{1}$ and $M_{2}$ when, for every $1\leq i,k\leq n$ and $1\leq j,l\leq m$, if $M_{ij}\neq 0$ is an entry of $M_{1}$ and $M_{kl}\neq 0$ is an entry of $M_{2}$ then $i\neq k$ and $j\neq l$. \end{definition} \begin{theorem} \label{ladder}A specular and strongly quadrangular digraph is the digraph of a unitary matrix. \end{theorem} \begin{proof} Let $D$ be a digraph. Note that if $D$ is specular and strongly quadrangular then $M\left( D\right) $ is composed of independent submatrices without zero entries. The theorem then follows from Lemma \ref{pieni}. \end{proof} The following theorem collects some classic results on line digraphs (see, \emph{e.g.}, \cite{P96}). \begin{theorem} \label{ri}Let $D$ be a digraph. \begin{itemize} \item[(i)] For every $\left( v_{i},v_{j}\right) \in V\left( \overrightarrow{L}D\right) $, \begin{equation*} N^{+}\left[ \left( v_{i},v_{j}\right) \right] =N^{+}\left[ v_{j}\right] \text{ and }N^{-}\left[ \left( v_{i},v_{j}\right) \right] =N^{-}\left[ v_{i}\right] . \end{equation*} \item[(ii)] A digraph $D$ is a line digraph if and only if $D$ is specular. \item[(iii)] Let $D$ be a strongly connected digraph.
Then $D$ is Eulerian if and only if $\overrightarrow{L}D$ is Hamiltonian. \end{itemize} \end{theorem} \begin{corollary} \label{main1}A strongly quadrangular line digraph is the digraph of a unitary matrix. \end{corollary} \begin{proof} The proof is obtained by point (i) of Theorem \ref{ri} together with Theorem \ref{ladder}. \end{proof} \begin{remark} \emph{Not every line digraph which is the digraph of a unitary matrix is Eulerian. Let } \begin{equation*} \begin{tabular}{lll} $M\left( D\right) =\left[ \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right] $ & and & $M(\overrightarrow{L}D)=\left[ \begin{array}{ccc} 0 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 0 \end{array} \right] .$ \end{tabular} \ \end{equation*} \emph{Note that }$\overrightarrow{L}D$\emph{\ is not Eulerian.} \end{remark} In a digraph, a \emph{directed path of length }$r$, from $v_{1}$ to $v_{r+1}$, is a sequence of arcs of the form $\left( v_{1},v_{2}\right) ,\left( v_{2},v_{3}\right) ,...,\left( v_{r},v_{r+1}\right) $, where all vertices are distinct. A directed path is a \emph{Hamiltonian path} if it includes all vertices of the digraph. A directed path, in which $v_{1}=v_{r+1}$, is called a \emph{directed cycle}. A Hamiltonian path, in which $v_{1}=v_{r+1}=v_{n}$ and $\left\vert V\left( D\right) \right\vert =n$, is called a \emph{Hamiltonian cycle}. A digraph with a Hamiltonian cycle is said to be \emph{Hamiltonian}. \begin{theorem} \label{as}Let $D$ be a digraph. Then $\overrightarrow{L}D$ is the digraph of a unitary matrix if and only if $D$ is Eulerian or the disjoint union of Eulerian components. \end{theorem} \begin{proof} Suppose that $\overrightarrow{L}D$ is the digraph of a unitary matrix. By Lemma \ref{sq}, $\overrightarrow{L}D$ is strongly quadrangular. If there is $v_{i}\in V(\overrightarrow{L}D)$ such that $\left\vert N^{+}\left[ v_{i}\right] \right\vert =1$ then, for every $v_{j}\in V(\overrightarrow{L}D)$ with $v_{j}\neq v_{i}$, $N^{+}\left[ v_{i}\right] \cap N^{+}\left[ v_{j}\right] =\emptyset $.
Suppose that, for every $v_{i}\in V(\overrightarrow{L}D)$, $\left\vert N^{+}\left[ v_{i}\right] \right\vert =1$. Since $\overrightarrow{L}D$ is strongly quadrangular, $M(\overrightarrow{L}D)$ is a permutation matrix; hence every vertex of $D$ has invalency and outvalency equal to $1$, and $D$ is Eulerian. In general, for every $v_{i}\in V(\overrightarrow{L}D)$, if $\left\vert N^{+}\left[ v_{i}\right] \right\vert =k>1$, then there is a set $S\subset V(\overrightarrow{L}D)$ with $\left\vert S\right\vert =k-1$ and not containing $v_{i}$ such that, for every $v_{j}\in S$, $N^{+}\left[ v_{j}\right] =N^{+}\left[ v_{i}\right] $. Writing $v_{i}=\left( u,v\right) $, where $u,v\in V\left( D\right) $, by Theorem \ref{ri}, $N^{+}\left[ v_{i}\right] =N^{+}\left[ v\right] $. It follows that $\left\vert N^{+}\left[ v\right] \right\vert =k$. Then, because of $S$, it is easy to see that in $D$ there are $k$ arcs with head $v$. Hence $\left\vert N^{+}\left[ v\right] \right\vert =\left\vert N^{-}\left[ v\right] \right\vert $, and $D$ is Eulerian. The proof of the sufficiency is immediate. \end{proof} \begin{corollary} Let $D$ be a strongly connected digraph. Let $\overrightarrow{L}D$ be the digraph of a unitary matrix. Then $\overrightarrow{L}D$ is Hamiltonian. \end{corollary} \begin{proof} The proof follows from point (iii) of Theorem \ref{ri} together with Theorem \ref{as}. \end{proof} Let $G$ be a group with generating set $S$. The \emph{Cayley digraph} of $G$ with respect to $S$ is the digraph denoted by $Cay\left( G,S\right) $, with vertex set $G$ and arc set containing $\left( g,h\right) $ if and only if there is a generator $s\in S$ such that $gs=h$. \begin{corollary} The line digraph of a Cayley digraph is the digraph of a unitary matrix. \end{corollary} \begin{proof} The corollary follows from Theorem \ref{as}, since a Cayley digraph is regular and hence Eulerian.
\end{proof} A \emph{strongly regular graph} on $n$ vertices is denoted by $srg\left( n,k,\lambda ,\mu \right) $ and is a $k$-regular graph on $n$ vertices, in which (1) any two adjacent vertices have exactly $\lambda $ common neighbours and (2) any two nonadjacent vertices have exactly $\mu $ common neighbours (see, \emph{e.g.}, \cite{CvL91}). The parameters of $srg\left( n,k,\lambda ,\mu \right) $ satisfy the following equation: $k\left( k-\lambda -1\right) =\left( n-k-1\right) \mu $. The disjoint union of $r$ complete graphs each on $m$ vertices, with $r,m>1$, is denoted by $rK_{m}$. If $m=2$ then $rK_{2}$ is called a \emph{ladder graph}. A strongly regular graph is disconnected if and only if it is isomorphic to $rK_{m}$. \begin{remark} \emph{Not every strongly regular graph is the digraph of a unitary matrix. The graph }$srg\left( 10,3,0,1\right) $\emph{\ is called the }Petersen graph\emph{. It is easy to check that }$srg\left( 10,3,0,1\right) $\emph{\ is not quadrangular.} \end{remark} \begin{remark} \emph{By Theorem \ref{ladder}, if a digraph }$D$\emph{\ is permutation equivalent to a disconnected strongly quadrangular graph, then }$D$\emph{\ is the digraph of a unitary matrix.} \end{remark} The \emph{complement} of a digraph $D$ is a digraph denoted by $\overline{D}$, with the same vertex set as $D$, in which two vertices are adjacent if and only if they are not adjacent in $D$. A digraph $D$ is \emph{self-complementary} if $D\cong \overline{D}$. \begin{remark} \emph{The fact that }$D$\emph{\ is the digraph of a unitary matrix does not imply that }$\overline{D}$\emph{\ is. The digraph of Remark \ref{triangolo} provides a counterexample.
Note that this does not hold in the case where }$D$\emph{\ is self-complementary.} \end{remark} A digraph $D$ is an $n$-\emph{path} if $V\left( D\right) =\left\{ v_{1},v_{2},...,v_{n}\right\} $ and \begin{equation*} A\left( D\right) =\left\{ \left( v_{1},v_{2}\right) ,\left( v_{2},v_{1}\right) ,\left( v_{2},v_{3}\right) ,\left( v_{3},v_{2}\right) ,...,\left( v_{n-1},v_{n}\right) ,\left( v_{n},v_{n-1}\right) \right\} , \end{equation*} where all the vertices are distinct. An $n$-path, in which $v_{1}=v_{n}$, is called an $n$\emph{-cycle}. A digraph $D$ is a \emph{directed} $n$\emph{-cycle} if $A\left( D\right) =\left\{ \left( v_{1},v_{2}\right) ,\left( v_{2},v_{3}\right) ,...,\left( v_{n-1},v_{n}\right) ,\left( v_{n},v_{1}\right) \right\} $. A digraph without directed cycles is a \emph{directed tree}. A graph without cycles is a \emph{tree}. \begin{proposition} \label{mey}Let $D$ be a digraph. If $D$ is permutation equivalent to an $n$-path then it is not the digraph of a unitary matrix. \end{proposition} \begin{proof} A digraph is strongly connected if and only if it is the digraph of an irreducible matrix. Since an $n$-path is strongly connected, it is the digraph of an irreducible matrix. Note that the number of arcs of an $n$-path is $2\left( n-1\right) $. The proposition is proved by Lemma \ref{sq}, together with the following result (see, \emph{e.g.}, \cite{BR91}). Let $M$ be an irreducible matrix of size $n$ and with exactly $2\left( n-1\right) $ nonzero entries. Then there is a permutation matrix $P$, such that \begin{equation*} PMP^{\intercal }=\left[ \begin{array}{ccccc} a_{11} & 0 & \cdots & 0 & 1 \\ 1 & a_{22} & \cdots & 0 & 0 \\ \vdots & 1 & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \ddots & 0 \\ 0 & 0 & \cdots & 1 & a_{nn} \end{array} \right] , \end{equation*} where $a_{ii}$ can be equal to zero or one. It is easy to see that for any choice of the diagonal entries the digraph of $PMP^{\intercal }$ is not quadrangular.
\end{proof} \begin{proposition} If a digraph $D$ is permutation equivalent to one of the following digraphs, then $D$ is not the digraph of a unitary matrix: an $n$-path with a loop at each vertex, an $n$-cycle, a directed tree, a tree. \end{proposition} \begin{proof} For any labeling of $D$, the proposition follows from Lemma \ref{sq} and Lemma \ref{equi}. \end{proof} \begin{acknowledgement} The author thanks Peter Cameron, Richard Jozsa, Gregor Tanner and Andreas Winter for their help. The author is supported by a University of Bristol research scholarship. \end{acknowledgement}
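The small matrices used in the remarks above lend themselves to mechanical checking. The sketch below (illustrative only, not part of the paper) tests the quadrangularity condition on those examples, and verifies that the Fourier matrix invoked in the proof of Lemma \ref{pieni} is unitary with no zero entries.

```python
import itertools
import numpy as np

def is_quadrangular(adj):
    """Quadrangularity: for any two distinct vertices, the common
    out-neighbourhood and common in-neighbourhood never have size exactly 1."""
    n = len(adj)
    outs = [{j for j in range(n) if adj[i][j]} for i in range(n)]
    ins = [{i for i in range(n) if adj[i][j]} for j in range(n)]
    return all(len(outs[i] & outs[j]) != 1 and len(ins[i] & ins[j]) != 1
               for i, j in itertools.combinations(range(n), 2))

# The 2-regular digraph on three vertices: not quadrangular.
print(is_quadrangular([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))   # False

# Quadrangular, yet not the digraph of a unitary matrix.
print(is_quadrangular([[1, 1, 1, 1], [1, 1, 1, 1],
                       [1, 1, 0, 0], [1, 1, 0, 0]]))        # True

# The Fourier matrix on Z/nZ: unitary, with no zero entries.
n = 5
F = np.exp(2j * np.pi * np.outer(range(n), range(n)) / n) / np.sqrt(n)
print(np.allclose(F @ F.conj().T, np.eye(n)))               # True
print(bool(np.all(np.abs(F) > 1e-9)))                       # True
```

The two adjacency matrices are exactly those displayed in the remarks following Lemma \ref{pieni}.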
\begin{document} \title[Parametrized topological complexity] {Parametrized topological complexity of collision-free motion planning in the plane} \author[D. Cohen]{Daniel C. Cohen}\thanks{D. Cohen was partially supported by an LSU Faculty Travel Grant} \address{Department of Mathematics, Louisiana State University, Baton Rouge, LA 70803} \email{\href{mailto:cohen@math.lsu.edu}{cohen@math.lsu.edu}} \urladdr{\href{http://www.math.lsu.edu/~cohen/} {www.math.lsu.edu/\char'176cohen}} \author[M. Farber]{Michael Farber}\thanks{M. Farber was partially supported by a grant from the Leverhulme Foundation} \address{School of Mathematical Sciences, Queen Mary University of London, E1 4NS London} \email{\href{mailto:M.Farber@qmul.ac.uk}{M.Farber@qmul.ac.uk}} \author[S. Weinberger]{Shmuel Weinberger} \address{Department of Mathematics, The University of Chicago, 5734 S University Ave, Chicago, IL 60637} \email{\href{mailto:shmuel@math.uchicago.edu}{shmuel@math.uchicago.edu}} \begin{abstract} Parametrized motion planning algorithms have high degrees of universality and flexibility, as they are designed to work under a variety of external conditions, which are viewed as parameters and form part of the input of the underlying motion planning problem. In this paper, we analyze the parameterized motion planning problem for the motion of many distinct points in the plane, moving without collision and avoiding multiple distinct obstacles with a priori unknown positions. This complements our prior work \cite{CFW}, where parameterized motion planning algorithms were introduced, and the obstacle-avoiding collision-free motion planning problem in three-dimensional space was fully investigated. The planar case requires different algebraic and topological tools than its spatial analog. 
\end{abstract} \keywords{parameterized topological complexity, obstacle-avoiding collision-free motion} \subjclass[2010]{ 55S40, 55M30, 55R80, 70Q05 } \maketitle \section{Introduction} \label{sec:intro} The goal of this paper is to give a topological measurement of the complexity that robots must confront when navigating in a two-dimensional environment, avoiding impediments. This work is a refinement of the work of Farber \cite{Fa03,Fa05} who studied how much forking is necessary in the programming of a robotic motion planner operating in a configuration space $X$ via a numerical invariant $\tc(X)$. This, in turn, was modeled on the seminal paper of Smale \cite{Sm}, which studied the complexity of ``the fundamental theorem of algebra,'' that is, the amount of forking that arises in the course of computation of solutions to polynomial equations. The invariant $\tc(X)$ also measures the amount of instability that any motion planner must have, that is, the number of different overlapping sets in a hybrid motion planning system, or similarly how much forking arises in routing algorithms. Interesting as this invariant is, it only captures part of the difficulty that a robot needs to negotiate. A more realistic theory would take into account the sensing capacity of the robot, multiple robots that maneuver autonomously, energy, timing, and communication. We hope to investigate such issues in future work. In this paper and the previous one in this series \cite{CFW}, we focus on the problem of the computational complexity of flexibly solving motion planning in a potentially changing environment. \renewcommand{\thefootnote}{\fnsymbol{footnote}} A (point) robot\footnote[2]{ Of course, the idea of a point robot is an idealization. The difficulties confronted by a physical robot will only be greater. 
Dealing with larger robots is related to the issue of dealing with families of problems that need not form fibrations, and will not be addressed in this paper.} moving around a convex room has a simple task. It can go from any point to any other along the straight line connecting them. If there is a single obstacle then any algorithm must fork: the one described would require a decision about whether to go around the obstacle to the left or the right. It turns out that two obstacles are harder than one, but then it gets no harder. Similarly, the complexity of motion in a graph can only have three values: trivial for a tree, complexity $1$ for a graph with a single cycle, but only getting bigger one more time when the number of cycles is larger than $1$. The reason for this is that any connected graph can be described as a union of two trees, so if there is a specific graph that needs to be navigated, one can make use of such a decomposition. (A similar statement can be made regarding the part of a room that is complementary to any union of a finite number of convex subsets.) Here we shall see that if the robot each day needs to move around the room where the obstacles have also been moved around, the complexity of the problem to be solved indeed grows. More generally, our main result provides a solution to the analogous problem for an arbitrary finite number of robots that are centrally controlled. The predecessor paper \cite{CFW} studies the three-dimensional version of this problem, for example, for submarines navigating a mined part of the ocean. Interestingly, the mathematics is somewhat more difficult in this two-dimensional situation than in the three-dimensional case. In both cases, however, the general formalism is the same. We consider a parameter space that describes the possible locations of obstacles, and therefore study a parametrized form of topological complexity.
In our situation we have the mathematical structure of a fibration describing the set of motion planning problems, which enables the application of the powerful apparatus of algebraic topology. Some of the other problems mentioned above require a weakening of this hypothesis, and cannot be directly approached by the methodology of this paper. \subsection*{Parameterized motion planning } An autonomously functioning system in robotics typically includes a motion planning algorithm which takes as input the initial and terminal states of the system, and produces as output a motion of the system from the initial state to the terminal state. The theory of robot motion planning algorithms is an active area in the field of robotics, see \cite{Lat,Lav} and the references therein. A topological approach to the robot motion planning problem was developed in \cite{Fa03,Fa05}, where topological techniques clarify relationships between instabilities occurring in robot motion planning algorithms and topological features of the configuration spaces of the relevant autonomous systems. In a recent article \cite{CFW}, we developed a new approach to the theory of motion planning algorithms. In this ``parameterized'' approach, algorithms are required to be \emph{universal}, so that they are able to function under a variety of situations, involving different external conditions which are viewed as parameters and are part of the input of the underlying motion planning problem. Typical situations of this kind arise when one is dealing with the collision-free motion of many objects (robots) moving in two- or three-dimensional space avoiding a set of obstacles, and the positions of the obstacles are a priori unknown. In the current paper, we continue our investigation of the problem of collision-free motion of many particles avoiding multiple moving obstacles, focusing primarily on the planar case. A team of robots moving in an obstacle-filled room is one example. 
As another illustration, consider a spymaster coordinating the motion of a team of spies in a planar theatre of operations each day. Spies must avoid opposition checkpoints, which may be repositioned daily, and may not meet so as to avoid potentially compromising one another. The analogous problem in three-dimensional space, for instance, maneuvering a submarine fleet in waters infested with repositionable mines, was analyzed in \cite{CFW}. In each of these motion planning problems, one is faced with a space of allowable configurations of the robots/spies/submarines which depends on parameters, the daily positions of the obstacles/checkpoints/mines. A motion planning algorithm should then be flexible enough to deal with changes in the parameters. The algebraic and topological tools used to analyze the complexity of such algorithms in the planar and spatial cases are essentially different. These differences are reflected by a numerical invariant, the \emph{parameterized topological complexity}, which differs in the planar and spatial cases. \subsection*{Parameterized topological complexity} We reformulate these considerations mathematically, using the language of algebraic topology. Let $X$ be a path-connected topological space. Viewing $X$ as the space of all states of a mechanical system, the motion planning problem from robotics takes as input an initial state and a terminal state of the system, and requests as output a continuous motion of the system from the initial state to the terminal state. That is, given $(x_0,x_1) \in X\times X$, one would like to produce a continuous path $\gamma\colon I \to X$ with $\gamma(0)=x_0$ and $\gamma(1)=x_1$, where $I=[0,1]$ is the unit interval. Let $X^I$ be the space of all continuous paths in $X$, equipped with the compact-open topology. The map $\pi\colon X^I \to X\times X$, $\pi(\gamma)=(\gamma(0),\gamma(1))$, is a fibration, with fiber $\Omega X$, the based loop space of $X$. 
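As a simple standard illustration (not part of this paper's exposition): over Euclidean space, straight-line motion gives a globally continuous choice of paths, which is the convex-room situation of the introduction.

```latex
% Straight-line motion planning on X = \mathbb{R}^n (a standard example):
s\colon \mathbb{R}^{n}\times\mathbb{R}^{n}\to (\mathbb{R}^{n})^{I},\qquad
s(x_{0},x_{1})(t)=(1-t)\,x_{0}+t\,x_{1},
% which satisfies \pi\circ s = \mathrm{id}: no forking is needed
% when X is contractible.
```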
A solution of the motion planning problem, a motion planning algorithm, is then a section of this fibration, a map $s\colon X\times X \to X^I$ with $\pi\circ s=\id_{X\times X}$. If $X$ is not contractible, the section $s$ cannot be globally continuous, see \cite{Fa05}. The topological complexity of $X$ is defined to be the sectional category, or Schwarz genus, of the fibration $\pi\colon X^I \to X\times X$, $\tc(X)=\secat(\pi)$. That is, $\tc(X)$ is the smallest number $k$ for which there is an open cover $X\times X=U_0\cup U_1\cup \dots \cup U_k$ and the map $\pi$ admits a continuous section $s_j\colon U_j \to X^I$ satisfying $\pi\circ s_j = \id_{U_j}$ for each $j$. The numerical homotopy type invariant $\tc(X)$ provides a measure of the navigational complexity in $X$. Significant recent advances in the subject include work of Dranishnikov \cite{Dr} on the topological complexity of spaces modeling hyperbolic groups, and work of Grant and Mescher \cite{GrM} on the topological complexity of symplectic manifolds. We refer to the surveys \cite{dc18,Fa18} and recent work of Ipanaque Zapata and Gonz\'alez \cite{IG} for discussions of topological complexity and motion planning algorithms in the context of collision-free motion. A parameterized approach to the motion planning problem was recently put forward in \cite{CFW}. In the parameterized setting, constraints are imposed by external conditions encoded by an auxiliary topological space $B$, and the initial and terminal states of the system, as well as the motion between them, must satisfy the same external conditions. This is modeled by a fibration $p\colon E \to B$, with nonempty path-connected fibers. For $b\in B$, the fiber $X_b=p^{-1}(b)$ is viewed as the space of achievable configurations of the system given the constraints imposed by $b$. 
Here, a motion planning algorithm takes as input initial and terminal (achievable given $b$) states of the system, and produces a continuous (achievable given $b$) path between them. That is, the initial and terminal points, as well as the path between them, all lie within the same fiber $X_b$. The parameterized topological complexity of the fibration $p\colon E \to B$ is then defined to be the sectional category of the associated fibration $\Pi\colon E^I_B \to E\times_BE$, where $E\times_BE$ is the space of all pairs of configurations lying in the same fiber of $p$, $E^I_B$ is the space of paths in $E$ lying in the same fiber of $p$, and the map $\Pi$ sends a path to its endpoints. \subsection*{Obstacle-avoiding, collision-free motion} Investigating the collision-free motion of $n$ distinct ordered particles in a topological space $Y$ leads one to study the standard (unparameterized) topological complexity of the classical configuration space \[ \Conf(Y,n)=\{(y_1,y_2,\dots,y_n) \in Y^n \mid y_i \neq y_j\ \text{for}\ i \neq j\} \] of $n$ distinct ordered points in $Y$. Similarly, investigating the collision-free motion of $n$ distinct particles in a manifold $Y$ in the presence of $m$ distinct obstacles, with a priori unknown positions, leads one to study the parameterized topological complexity of the classical Fadell-Neuwirth bundle, the locally trivial fibration \[ p\colon \Conf(Y,m+n) \to \Conf(Y,m),\quad p(y_1,\dots,y_m,y_{m+1},\dots,y_{m+n}) =(y_1,\dots,y_m), \] with fiber $p^{-1}(y_1,\dots,y_m)=\Conf(Y\smallsetminus \{y_1,\dots,y_m\},n)$. In this paper, we complete the determination of the parameterized topological complexity of the Fadell-Neuwirth bundles of Euclidean configuration spaces begun in \cite{CFW}. Our main result, Theorem \ref{thm:main}, includes the following as a special case.
\begin{thm*} For positive integers $m$ and $n$, the parameterized topological complexity of the motion of $n$ non-colliding particles in the plane $\R^2$, in the presence of $m$ non-colliding point obstacles with a priori unknown positions, is equal to $2n+m-2$. \end{thm*} The case $m=1$ of this result reduces to the previously known determination of the (standard) topological complexity of $\Conf(\R^2\smallsetminus\{0\},n)$, see Remark \ref{rem:m=1}. Different techniques yield the same parameterized topological complexity for obstacle-avoiding collision-free motion in $\R^d$ for any even $d\ge 4$, as discussed in Section \ref{sec:config pTC}. The analogous motion planning problem in $\R^d$, for $d \ge 3$ odd, was analyzed in \cite[Thm.~9.1]{CFW}, where it was shown that the parameterized topological complexity is $2n+m-1$. These results provide examples of fibrations for which the parameterized topological complexity exceeds the (standard) topological complexity of the fiber, since $\tc(\Conf(\R^d\smallsetminus \{y_1,\dots,y_m\},n))=2n$ as shown in \cite{FGY}. Our main result also illustrates that parameterized topological complexity may differ significantly from other notions of the topological complexity of a map which appear in the literature. If $p\colon E \to B$ is a fibration which admits a (homotopy) section, as is the case for many Fadell-Neuwirth bundles, then the topological complexity of $p$, as defined in either \cite{MW} or \cite{P19}, is equal to $\tc(B)$. For the Fadell-Neuwirth bundle $p\colon \Conf(\R^d,m+n) \to \Conf(\R^d,m)$ with $d\ge 2$ even, we have $\tc(B)=\tc(\Conf(\R^d,m))=2m-2$ (see, for instance, \cite{Fa18}), which differs from the parameterized topological complexity of the bundle unless the number of obstacles is twice the number of robots. \section{Parameterized topological complexity} \label{secpTC} In this brief section, we recall requisite material from \cite{CFW}.
Recall the broad framework: We wish to analyze the complexity of a motion planning algorithm in an environment which may change under the influence of external conditions. These conditions, parameters treated as part of the input of the algorithm, are encoded by a topological space $B$. Associated to each choice of conditions, that is, to each point $b \in B$, one has a configuration space $X_b$ of achievable configurations in which motion planning must take place. The motion planning algorithm must thus be sufficiently flexible so as to adapt to different external conditions, that is, different points in the parameter space $B$. Let $p\colon E \to B$ be a fibration, with nonempty, path-connected fiber $X$. Let $E^I_B$ denote the space of all continuous paths $\gamma\colon I \to E$ which lie in a single fiber of $p$, so that $p\circ\gamma$ is the constant path in $B$. Let \[ E\times_BE=\{(e,e') \in E \times E \mid p(e)=p(e')\} \] be the space of pairs of points in $E$ which lie in the same fiber. The map \[ \Pi \colon E^I_B \to E\times_BE, \qquad \gamma \mapsto (\gamma(0),\gamma(1)) \] given by sending a path to its endpoints is a fibration, with fiber $\Omega X$, the space of based loops in $X$. \begin{definition} \label{def:pTC} The parameterized topological complexity $\tc[p\colon E \to B]$ of the fibration $p\colon E \to B$ is the sectional category of the fibration $\Pi \colon E^I_B \to E\times_BE$, \[ \tc[p\colon E \to B] :=\secat(\Pi\colon E^I_B \to E\times_BE). \] That is, $\tc[p\colon E \to B]$ is equal to the smallest nonnegative integer $k$ for which the space $E\times_BE$ admits an open cover \[ E\times_BE = U_0\cup U_1 \cup \dots \cup U_k, \] and the map $\Pi \colon E^I_B\to E\times_BE$ admits a continuous section $s_i\colon U_i \to E^I_B$ for each $i$, $0\le i\le k$. If the fibration $p$ is clear from the context, we sometimes use the abbreviated notation $\tc[p\colon E \to B]=\tc_B(X)$, to emphasize the role of the fiber $X$.
\end{definition} As shown in \cite[Prop.~5.1]{CFW}, parameterized topological complexity is an invariant of fiberwise homotopy equivalence. For a topological space $Y$, let $\dim(Y)$ denote the covering dimension of $Y$, and let $\hdim(Y)$ denote the homotopy dimension of $Y$, the minimal dimension of a space $Z$ homotopy equivalent to $Y$. Since the parameterized topological complexity of $p\colon E \to B$ is defined to be the sectional category of the associated fibration $\Pi\colon E^I_B \to E\times_BE$, we have \[ \tc[p\colon E \to B] \le \cat(E\times_BE) \le \hdim(E\times_BE), \] where $\cat(Y)$ is the Lusternik-Schnirelmann category of $Y$ (cf. \cite{Sch}). We also have the following. \begin{prop}[{\cite[Prop.~7.1]{CFW}}] \label{prop:upper} Let $p\colon E \to B$ be a locally trivial fibration of metrizable topological spaces, with path-connected fiber $X$. Then, \[ \tc_B(X)=\tc[p\colon E \to B] \le 2\dim(X)+\dim(B). \] \end{prop} Parameterized topological complexity admits a cohomological lower bound. For a graded ring $A$, let $\cl(A)$ denote the cup length of $A$, the largest integer $q$ for which there are homogeneous elements $a_1,\dots, a_q$ of positive degree in $A$ such that $a_1\cdots a_q \neq 0$. \begin{prop}[{\cite[Prop.~7.3]{CFW}}] \label{prop:cup} Let $p\colon E \to B$ be a fibration with path-connected fiber, and let $\Delta\colon E \to E\times_B E$ be the diagonal map, $\Delta(e)=(e,e)$. Then the parameterized topological complexity of $p\colon E \to B$ is greater than or equal to the cup length of the kernel of the map in cohomology induced by $\Delta$, \[ \tc[p\colon E \to B] \ge \cl\left(\ker\bigl[\Delta^*\colon H^*(E\times_B E; R) \to H^*(E;\Delta^*R)\bigr]\right), \] for any system of coefficients $R$ on $E\times_B E$. \end{prop} We conclude this section by recording the following product inequality for parameterized topological complexity, which we will make use of in Section \ref{sec:config pTC} below. 
\begin{prop}[{\cite[Prop.~6.1]{CFW}}] \label{prop:product} Let $p'\colon E' \to B'$ and $p''\colon E'' \to B''$ be fibrations with path-connected fibers $X'$ and $X''$ respectively. Let $B=B'\times B''$, $E=E'\times E''$, $X=X'\times X''$, and $p=p'\times p''$. Then the product fibration $p\colon E \to B$ satisfies \[ \tc[p\colon E \to B] \le \tc[p'\colon E' \to B'] + \tc[p''\colon E'' \to B'']. \] Equivalently, in abbreviated notation, \[ \tc_{B'\times B''}(X'\times X'') \le \tc_{B'}(X') + \tc_{B''}(X''). \] \end{prop} \section{Cohomology of the obstacle-avoiding configuration space} In this section, we study the structure of the cohomology rings of configuration spaces arising in the context of our main theorem. Let $E=\Conf(\R^d,m+n)$ and $B=\Conf(\R^d,m)$. Then, the Fadell-Neuwirth bundle of configuration spaces is $p\colon E \to B$, with fiber $X= \Conf(\R^d\smallsetminus\sO_m,n)$, where $\sO_m$ is a set of $m$ distinct points in $\R^d$. In order to utilize Proposition \ref{prop:cup} subsequently, we analyze the cohomology ring of the ``obstacle-avoiding configuration space'' $E\times_BE$. We use homology and cohomology with integer coefficients, and suppress the coefficients, throughout. The principal objects of study, $E$, $B$, $X$, and $E\times_BE$, all have torsion free integral homology and cohomology. This is well known for the classical configuration spaces, see \cite{FH}. \begin{prop} \label{prop:HEBE} Let $p\colon E=\Conf(\R^d,m+n) \to B=\Conf(\R^d,m)$ be the Fadell-Neuwirth bundle of configuration spaces. The integral cohomology groups of the space $E\times_B E$ are torsion free. 
The cohomology ring $H^*(E\times_B E)$ is generated by degree $d-1$ elements $\omega^{}_{i,j}$ and $\omega'_{i,j}$, $1\le i< j \le m+n$, which satisfy the relations \[ \begin{array}{ll} \omega'_{i,j}=\omega^{}_{i,j}\ \text{for}\ 1\le i<j \le m, \quad& \omega^{}_{i,j}\omega^{}_{i,k}-\omega^{}_{i,j}\omega^{}_{j,k}+\omega^{}_{i,k}\omega^{}_{j,k}=0 \ \text{for}\ i<j <k,\\[4pt] (\omega^{}_{i,j})^2=(\omega'_{i,j})^2=0 \ \text{for}\ i<j,& \omega'_{i,j}\omega'_{i,k}-\omega'_{i,j}\omega'_{j,k}+\omega'_{i,k}\omega'_{j,k}=0\ \text{for}\ i<j <k. \end{array} \] \end{prop} Since $\omega'_{i,j}=\omega^{}_{i,j}$ for $1\le i<j\le m$, the last of these relations may be expressed as $\omega^{}_{i,j}\omega'_{i,k}-\omega^{}_{i,j}\omega'_{j,k}+\omega'_{i,k}\omega'_{j,k}=0$ for such $i$ and $j$. We refer to relations of this general form as ``three term relations''. \begin{proof} Let $\mathbf{z}=(z^{}_1,\dots,z^{}_{m+n})$ and $\mathbf{z}'=(z'_1,\dots,z'_{m+n})$ be points in the configuration space $E=\Conf(\R^d,m+n)$, so that $z^{}_i\neq z^{}_j$ and $z'_i\neq z'_j$ for all $i<j$. Points in the space $E\times_BE$ may be expressed as pairs of such points $(\mathbf{z},\mathbf{z}')$ which satisfy $z^{}_i = z'_i$ for $1\le i\le m$. That is, $E\times_BE$ may be realized as the intersection $E\times_BE=(E\times E) \cap S$, where $S=\{(\mathbf{z},\mathbf{z}') \in (\R^d)^{2(m+n)} \mid z^{}_i = z'_i\ \text{for}\ 1\le i\le m\}$. Let $\iota\colon E\times_BE \to E\times E$ denote the inclusion. For $1\le i<j \le m+n$, define maps $p^{}_{i,j}, p'_{i,j}\colon E\times E \to \Conf(\R^d,2)$ by $p^{}_{i,j}(\mathbf{z},\mathbf{z}')=(z^{}_i,z^{}_j)$ and $p'_{i,j}(\mathbf{z},\mathbf{z}')=(z'_i,z'_j)$. The space $\Conf(\R^d,2)$ is homotopy equivalent to the sphere $S^{d-1}$. Fix a generator $\sigma \in H^{d-1}(\Conf(\R^d,2))$, and define $\Omega^{}_{i,j}, \Omega'_{i,j} \in H^{d-1}(E\times E)$ by $\Omega^{}_{i,j}=(p^{}_{i,j})^*(\sigma)$ and $\Omega'_{i,j}=(p'_{i,j})^*(\sigma)$.
From well known results \cite{FH} on the cohomology of the configuration space $E=\Conf(\R^d,m+n)$ and the K\"unneth formula, the elements $\Omega^{}_{i,j}, \Omega'_{i,j}$ generate $H^*(E\times E)$ and satisfy $(\Omega^{}_{i,j})^2=(\Omega'_{i,j})^2=0$ and the three term relations (involving $\{\Omega^{}_{i,j},\Omega^{}_{i,k},\Omega^{}_{j,k}\}$ and $\{\Omega'_{i,j},\Omega'_{i,k},\Omega'_{j,k}\}$). Now let $\omega^{}_{i,j}=\iota^*(\Omega^{}_{i,j})$ and $\omega'_{i,j}=\iota^*(\Omega'_{i,j})$ in $H^{d-1}(E\times_BE)$ for $1\le i<j \le m+n$. Then, as shown in \cite[Prop.~9.2]{CFW}, these cohomology classes satisfy the asserted relations. In particular, since $z'_i=z^{}_i$ for $i\le m$, we have $\omega'_{i,j}=\omega^{}_{i,j}$ for $i<j\le m$. The other relations follow immediately from naturality. It remains to show that $H^*(E\times_BE)$ is torsion free, generated by the classes $\omega^{}_{i,j},\omega'_{i,j}$. The space $E\times_BE$ may also be realized (up to homeomorphism) as the total space of the bundle obtained by pulling back the product bundle $p\times p\colon E\times E \to B\times B$ along the diagonal map $\Delta_B\colon B \to B\times B$. The common fiber $X\times X$ is totally non-homologous to zero in each of these bundles: both inclusion-induced maps $H^*(E\times E) \to H^*(X\times X)$ and $H^*(E\times_B E) \to H^*(X\times X)$ are surjective. Additionally, $H^*(X\times X)$ is torsion free since $H^*(X)=H^*(\Conf(\R^d\smallsetminus\sO_m,n))$ is. Consequently, the classical Leray-Hirsch theorem applies to both bundles, see \cite{FH,CFW}. From this, we obtain an additive isomorphism $H^*(B) \otimes H^*(X\times X) \cong H^*(E\times_BE)$. Since the cohomology groups of $B=\Conf(\R^d,m)$ are also torsion free, so are those of $E\times_BE$.
Lastly, using the commuting diagram \[ \begin{CD} H^*(B \times B) \otimes H^*(X\times X) @>{\cong}>> H^*(E\times E) \\ @V{\Delta_B^* \otimes \id}VV @VV{\iota^*}V \\ H^*(B) \otimes H^*(X\times X) @>{\cong}>> H^*(E\times_B E) \end{CD} \] and the fact that $\Delta_B^*\colon H^*(B\times B) \to H^*(B)$ is surjective, we see that the inclusion $\iota\colon E\times_BE \to E\times E$ induces a surjection in cohomology. Since the classes $\Omega^{}_{i,j}$ and $\Omega'_{i,j}$ generate the ring $H^*(E\times E)$, their images $\omega^{}_{i,j}=\iota^*(\Omega^{}_{i,j})$ and $\omega'_{i,j}=\iota^*(\Omega'_{i,j})$ generate the ring $H^*(E\times_BE)$. \end{proof} For a natural number $q$, let $[q]=\{1,2,\dots,q\}$. Let $I=(i_1,\dots,i_\ell)$ and $J=(j_1,\dots,j_\ell)$ be sequences of elements in $[m+n]$. If $i_k <j_k$ for each $k$, $1\le k\le \ell$, we write $I<J$ and define cohomology classes \[ \omega^{}_{I,J} = \omega^{}_{i_1,j_1}\omega^{}_{i_2,j_2} \cdots \omega^{}_{i_\ell,j_\ell} \quad\text{and}\quad \omega'_{I,J} = \omega'_{i_1,j_1}\omega'_{i_2,j_2} \cdots \omega'_{i_\ell,j_\ell} \] in $H^{(d-1)\ell}(E\times_BE)$. If $\ell=0$, set $\omega^{}_{I,J}=\omega'_{I,J}=1$. Call the sequence $J=(j_1,j_2,\dots,j_\ell)$ increasing if $j_1<j_2<\dots<j_\ell$. \begin{prop}[{\cite[Prop.~9.3]{CFW}}] \label{prop:basis} A basis for $H^*(E\times_BE)$ is given by the set of cohomology classes \[ \omega^{}_{I_1,J_1}\omega^{}_{I_2,J_2}\omega'_{I_3,J_3}, \] where $J_1 \subset [m]$ and $J_2,J_3 \subset [m+1,m+n]$ are increasing sequences, and $I_1$, $I_2$, and $I_3$ are sequences with $I_1<J_1$, $I_2<J_2$, and $I_3<J_3$. \end{prop} We conclude this section with a technical result which will be used in the proof of the main theorem. For a sequence $J=(j_1,\dots,j_\ell)$, let $J^\prime=(j_1,\dots,j_{\ell-1})$. \begin{definition} \label{def:adm} Let $J=(j_1,\dots,j_\ell)$ be an increasing sequence. A $J$-\emph{admissible} sequence $I=(i_1,\dots,i_\ell)$ is defined recursively as follows.
If $|J|=\ell=1$, then $I$ is $J$-admissible if and only if $I=J$. If $|J|=\ell \ge 2$, then $I$ is $J$-admissible if \begin{enumerate} \item $I$ is nondecreasing, $i_1\le \dots \le i_\ell$, \item $I^\prime=(i_1,\dots,i_{\ell-1})$ is $J^\prime$-admissible, and \item either $i_{\ell}=i_{\ell-1}$ or $i_\ell=j_\ell$. \end{enumerate} \end{definition} For instance, if $J=(j_1,j_2)$, the $J$-admissible sequences are $(j_1,j_1)$ and $J$ itself. \begin{prop} \label{prop:rewrite} If $J=(j_1,\dots,j_\ell)$ is an increasing sequence and $r > j_\ell$, then \begin{align*} \omega^{}_{j_1,r}\omega^{}_{j_2,r}\cdots \omega^{}_{j_\ell, r} &= (-1)^\ell \sum_I (-1)^{d_I} \omega^{}_{i_1,j_2} \omega^{}_{i_2,j_3} \cdots \omega^{}_{i_{\ell-1},j_\ell} \omega^{}_{i_\ell,r},\\ \intertext{and} \omega'_{j_1,r}\omega'_{j_2,r}\cdots \omega'_{j_\ell, r} &= (-1)^\ell \sum_I (-1)^{d_I} \omega'_{i_1,j_2} \omega'_{i_2,j_3} \cdots \omega'_{i_{\ell-1},j_\ell} \omega'_{i_\ell,r}, \end{align*} where the sums are over all $J$-admissible sequences $I$, and $d_I$ is the number of distinct elements in $I$. \end{prop} Observe that the sums above are linear combinations of distinct elements of the basis for $H^*(E\times_BE)$ given in Proposition \ref{prop:basis}. \begin{proof} Let $R = (r,r,\dots,r)$ be the constant sequence of length $\ell$. The proposition asserts that \[ \omega^{}_{J,R}=(-1)^\ell \sum_I (-1)^{d_I} \omega^{}_{I,K} \quad \text{and} \quad \omega'_{J,R}=(-1)^\ell \sum_I (-1)^{d_I} \omega'_{I,K}, \] where $K=(j_2,\dots,j_\ell,r)$. Clearly, it suffices to consider $\omega^{}_{J,R}$. The proof is by induction on $\ell=|J|$, with the case $\ell=1$ trivial. The case $\ell=2$ is the three term relation $\omega^{}_{j_1,r}\omega^{}_{j_2,r}=\omega^{}_{j_1,j_2}\omega^{}_{j_2,r}-\omega^{}_{j_1,j_2}\omega^{}_{j_1,r}$, which will be crucial subsequently. Assume that $\ell \ge 3$. 
For $J=(j_1,\dots,j_\ell)$, recall that $J^\prime=(j_1,\dots,j_{\ell-1})$, and let $R^\prime$ be the constant sequence of length $\ell-1$. By induction, we have \[ \omega^{}_{J^\prime,R^\prime} =(-1)^{\ell-1}\sum_{I^\prime} (-1)^{d_{I^\prime}} \omega^{}_{i_1,j_2} \cdots \omega^{}_{i_{\ell-2},j_{\ell-1}} \omega^{}_{i_{\ell-1},r}, \] where the sum is over all $J^\prime$-admissible sequences $I^\prime=(i_1,\dots,i_{\ell-1})$. Since $\omega^{}_{J,R}=\omega^{}_{J^\prime,R^\prime} \omega^{}_{j_\ell,r}$, we obtain \[ \begin{aligned} \omega^{}_{J,R} &= (-1)^{\ell-1}\sum_{I^\prime} (-1)^{d_{I^\prime}} \omega^{}_{i_1,j_2} \cdots \omega^{}_{i_{\ell-2},j_{\ell-1}} \omega^{}_{i_{\ell-1},r}\omega^{}_{j_\ell,r}\\ &= (-1)^{\ell-1}\sum_{I^\prime} (-1)^{d_{I^\prime}} \omega^{}_{i_1,j_2} \cdots \omega^{}_{i_{\ell-2},j_{\ell-1}} (\omega^{}_{i_{\ell-1},j_\ell}\omega^{}_{j_\ell,r} - \omega^{}_{i_{\ell-1},j_\ell}\omega^{}_{i_{\ell-1},r}), \end{aligned} \] using the three term relations on the second line. For $I^\prime$ as above, let $P=(i_1,\dots,i_{\ell-1},j_\ell)$ and $Q=(i_1,\dots,i_{\ell-1},i_{\ell-1})$. Note that $d_P=d_{I^\prime}+1$ and $d_Q=d_{I^\prime}$. Further, as is clear from Definition \ref{def:adm}, every $J$-admissible sequence $I$ arises from a $J^\prime$-admissible sequence $I^\prime$ by adjoining either $j_\ell$ or $i_{\ell-1}$. Thus, \[ \begin{aligned} \omega^{}_{J,R} &= (-1)^{\ell-1}\sum_{I^\prime} (-1)^{d_{I^\prime}} \omega^{}_{P,K}+(-1)^{\ell}\sum_{I^\prime} (-1)^{d_{I^\prime}} \omega^{}_{Q,K}\\ &=(-1)^\ell \sum_{P} (-1)^{d_P} \omega^{}_{P,K}+(-1)^\ell \sum_{Q} (-1)^{d_Q} \omega^{}_{Q,K}\\ &=(-1)^\ell \sum_{I} (-1)^{d_I} \omega^{}_{I,K}, \end{aligned} \] where the last sum is over all $J$-admissible sequences as required. 
\end{proof} \section{Obstacle-avoiding collision-free motion in the plane} \label{sec:config pTC} In this section, we state and prove our main theorem, determining the parameterized topological complexity of obstacle-avoiding collision-free motion in any Euclidean space $\R^d$ of positive even dimension. The case $d=2$ of the plane was highlighted in the Introduction. \begin{theorem} \label{thm:main} For positive integers $n$, $m$, and $d\ge 2$ even, the parameterized topological complexity of the motion of $n$ non-colliding particles in $\R^d$ in the presence of $m$ non-colliding point obstacles with a priori unknown positions is equal to $2n+m-2$. In other words, the parameterized topological complexity of the Fadell-Neuwirth bundle $p\colon \Conf(\R^d,m+n) \to \Conf(\R^d,m)$ is \begin{equation*} \label{eq:pTCFN} {\tc}\bigl[p\colon \Conf(\R^d,m+n) \to \Conf(\R^d,m)\bigr]=2n+m-2. \end{equation*} \end{theorem} Let $E= \Conf(\R^d,m+n)$ and $B=\Conf(\R^d,m)$, so that the Fadell-Neuwirth bundle is $p\colon E \to B$. The fiber of this bundle is $X=\Conf(\R^d\smallsetminus\sO_m,n)$, where $\sO_m$ is a set of $m$ distinct points (representing the obstacles). Each of the spaces $E$, $B$, $X$, and $E\times_BE$ has the homotopy type of a finite CW-complex of known dimension. For the configuration spaces $B$, $E$, and $X$, see \cite{FH}. For $E\times_BE$, this can be shown using various forms of Morse theory, cf. \cite{Adi,GM}. The dimensions of these CW-complexes are \begin{equation} \label{eq:hdims} \begin{array}{ll} \hdim B = (m-1)(d-1),\quad & \hdim E = (m+n-1)(d-1),\\ \hdim X = n(d-1),\quad & \hdim (E\times_BE) =(2n+m-1)(d-1). \end{array} \end{equation} Furthermore, each of the spaces $E$, $B$, $X$, and $E\times_BE$ is $(d-2)$-connected, as each is obtained by removing codimension $d$ subspaces from a Euclidean space. \begin{remark} \label{rem:m=1} If $m=1$, the base space $B=\Conf(\R^d,m)=\R^d$ of the Fadell-Neuwirth bundle is contractible, and the bundle is trivial.
The parameterized topological complexity of this trivial bundle is equal to the (standard) topological complexity of the fiber $X=\Conf(\R^d\smallsetminus\sO_1,n)$, see \cite[Ex.~4.2]{CFW}, and Theorem \ref{thm:main} is a restatement of results of \cite{FG} in this instance. \end{remark} We subsequently assume that $m\ge 2$. We first show that \[ {\tc}\bigl[p\colon \Conf(\R^d,m+n) \to \Conf(\R^d,m)\bigr] \ge 2n+m-2. \] By Proposition \ref{prop:cup}, this is a consequence of the following. \begin{prop} \label{prop:CL2} For $d\ge 2$ even, $E=\Conf(\R^d,m+n)$ and $B=\Conf(\R^d,m)$, the ideal \[ \ker[\Delta^* \colon H^*(E\times_BE) \to H^*(E)] \] in $H^*(E\times_BE)$ has cup length $\cl(\ker \Delta^*)\ge 2n+m-2$. \end{prop} \begin{proof} The ideal \begin{equation} \label{eq:kerideal} {\mathcal J} = \langle \omega^{}_{i,j}-\omega'_{i,j} \mid 1\le i<j\ \text{and}\ m<j\le n+m\rangle \end{equation} is generated by degree $d-1$ elements in $H^*(E\times_BE)$. One can check (cf. \cite[Prop.~9.4]{CFW}) that ${\mathcal J} \subseteq \ker\Delta^*$. So to prove the proposition it is enough to show that $\cl({\mathcal J}) \ge 2n+m-2$. We establish this by showing that the product \begin{equation*} \label{eq:not0even} \Psi= \prod_{i=1}^{m} (\omega^{}_{i,m+1}-\omega'_{i,m+1}) \prod_{j=m+2}^{m+n} (\omega^{}_{1,j}-\omega'_{1,j}) \prod_{j=m+2}^{m+n} (\omega^{}_{j-1,j}-\omega'_{j-1,j}) \end{equation*} is nonzero in $H^*(E\times_BE)$. 
If $a_i,b_i$, $1\le i \le q$, are cohomology classes of the same degree, then \begin{align*} \prod_{i=1}^q (a_i-b_i) &= \sum_{S \subset [q]} (-1)^{|S|} c_1c_2\cdots c_q, \ \text{where}\ c_j=\begin{cases} a_j&\text{if $j \notin S$,} \\ b_j &\text{if $j \in S$.}\end{cases}\\ \intertext{Using this, we have} \prod_{i=1}^{m} (\omega^{}_{i,m+1}-\omega'_{i,m+1}) &= \sum_{S} (-1)^{|S|} \lambda_{1} \cdots \lambda_{m}, \hskip 46pt \lambda_i=\begin{cases} \omega^{}_{i,m+1}&\text{if $i\notin S$,}\\ \omega'_{i,m+1}&\text{if $i\in S$,} \end{cases}\\ \prod_{j=m+2}^{m+n} (\omega^{}_{1,j}-\omega'_{1,j})&= \sum_{T_1} (-1)^{|T_1|} \mu_{m+2} \cdots \mu_{m+n},\hskip 14pt \mu_j=\begin{cases} \omega^{}_{1,j}&\text{\hskip 10pt if $j\notin T_1$,}\\ \omega'_{1,j}&\text{\hskip 10pt if $j\in T_1$,} \end{cases}\\ \prod_{j=m+2}^{m+n} (\omega^{}_{j-1,j}-\omega'_{j-1,j}) &= \sum_{T_2}(-1)^{|T_2|}\xi_{m+2}\cdots\xi_{m+n},\hskip 18pt \xi_j=\begin{cases} \omega^{}_{j-1,j}&\text{if $j\notin T_2$,}\\ \omega'_{j-1,j}&\text{if $j\in T_2$,} \end{cases}\end{align*} where, writing $[p,q]=\{p,p+1,\dots,q\}$, $S\subset[m]$ and $T_1,T_2\subset[m+2,m+n]$. For $T=(j_1,\dots,j_\ell)$ a sequence in $[p,q]$, let $T^{\cc}=(p,\dots,\widehat{j_1},\dots,\widehat{j_\ell},\dots,q)$ denote the complementary sequence, and let $\epsilon_T$ be the sign of the shuffle permutation taking $[p,q]$ to $(T^{\cc},T)$. Denote the constant sequence $(1,1,\dots,1)$ (of appropriate length) by $\mathbf{1}$, and let $T-{\mathbf{1}}=(j_1-1,\dots,j_\ell-1)$.
Then, the latter two products above may be expressed as \begin{equation} \label{eq:2of3prods} \begin{aligned} \prod_{j=m+2}^{m+n} (\omega^{}_{1,j}-\omega'_{1,j})&= \sum_{T_1}(-1)^{|T_1|}\epsilon_{T_1} \omega^{}_{\mathbf{1},T_1^{\cc}} \omega'_{{\mathbf{1}},T_1},\\ \prod_{j=m+2}^{m+n} (\omega^{}_{j-1,j}-\omega'_{j-1,j}) &= \sum_{T_2}(-1)^{|T_2|}\epsilon_{T_2} \omega^{}_{T_2^{\cc}-{\mathbf{1}},T_2^{\cc}} \omega'_{T_2-{\mathbf{1}},T_2}. \end{aligned} \end{equation} Since, for $i=1,2$, $T_i$ and $T_i^\cc$ are increasing sequences in $[m+2,m+n]$ and $\b1<T_i$, $\b1<T_i^\cc$, and $T_i-\b1<T_i$, the monomials $\omega^{}_{\mathbf{1},T_1^{\cc}} \omega'_{{\mathbf{1}},T_1}$ and $ \omega^{}_{T_2^{\cc}-{\mathbf{1}},T_2^{\cc}} \omega'_{T_2-{\mathbf{1}},T_2}$ arising in \eqref{eq:2of3prods} are elements of the basis for $H^*(E\times_BE)$ of Proposition \ref{prop:basis}. Similarly, with $R=(m+1,m+1,\dots,m+1)$, the first of the three products above may be expressed as \begin{equation*} \begin{aligned} \prod_{i=1}^{m} (\omega^{}_{i,m+1}-\omega'_{i,m+1}) &= \sum_{S} (-1)^{|S|} \epsilon_S \omega^{}_{S^\cc,R} \omega'_{S,R}\\ &=\sum_{\emptyset \subsetneq S \subsetneq [m]} (-1)^{|S|} \epsilon_S \omega^{}_{S^\cc,R} \omega'_{S,R} +\epsilon_\emptyset \omega^{}_{[m],R}+(-1)^m \epsilon_{[m]} \omega'_{[m],R}. \end{aligned} \end{equation*} None of the monomials $\omega^{}_{S^\cc,R} \omega'_{S,R}$ is an element of the basis of Proposition \ref{prop:basis}.
Rewriting using Proposition \ref{prop:rewrite} and some sign simplification yields \begin{equation} \label{eq:first} \begin{aligned} \prod_{i=1}^{m} (\omega^{}_{i,m+1}-\omega'_{i,m+1}) &= \sum_{\emptyset \subsetneq S \subsetneq [m]} \epsilon_S \Biggl( \sum_{I_1} (-1)^{d_{I_1}} \omega^{}_{I_1,K_1}\Biggr) \Biggl( \sum_{I_2} (-1)^{d_{I_2}} \omega'_{I_2,K_2}\Biggr)\\ &\qquad + \epsilon_\emptyset \sum_{I_1} (-1)^{d_{I_1}} \omega^{}_{I_1,K_1} + \epsilon_{[m]} \sum_{I_2} (-1)^{d_{I_2}} \omega'_{I_2,K_2}, \end{aligned} \end{equation} where $I_1$ and $I_2$ range over all $S^\cc$- and $S$-admissible sequences respectively, and if $S^\cc=(i_1,\dots,i_p)$ and $S=(j_1,\dots,j_q)$, then $K_1=(i_2,\dots,i_p,m+1)$ and $K_2=(j_2,\dots,j_q,m+1)$. Note that $K_1=[2,m+1]$ if $S=\emptyset$ and $S^\cc=[m]$, while $K_2=[2,m+1]$ if $S=[m]$ and $S^\cc=\emptyset$. The product $\Psi$ may then be obtained by multiplying the expressions of \eqref{eq:first} and \eqref{eq:2of3prods}. Expanding yields an expression of $\Psi$ as a linear combination of monomials $\omega^{}_{P_1,Q_1}\omega^{}_{P_2,Q_2}\omega'_{P_3,Q_3}$, where $Q_1\subset[m]$ and $Q_2,Q_3 \subset [m+1,m+n]$. Some of these monomials are elements of the basis of Proposition \ref{prop:basis}, while others are not. One of the basis elements appearing in this expansion of $\Psi$ is \begin{equation} \label{eq:theone} \omega^{}_{1,2}\omega^{}_{1,3}\cdots\omega^{}_{1,m}\omega^{}_{1,m+1}\omega^{}_{1,m+2}\cdots\omega^{}_{1,m+n} \omega'_{m+1,m+2}\omega'_{m+2,m+3}\cdots\omega'_{m+n-1,m+n}. \end{equation} This element is obtained by taking $S=\emptyset$, $S^\cc=[m]$ and $I_1=\b1$ in \eqref{eq:first}, so that the expansion of $\omega'_{S,R}$ is simply $1$, and by taking $T_1=T_2^\cc=\emptyset$, $T_1^\cc=T_2=[m+2,m+n]$ in \eqref{eq:2of3prods}, so that $\omega'_{\b1,T_1}=\omega^{}_{T_2^\cc-\b1,T_2^\cc}=1$. It may be expressed briefly as $x=\omega^{}_{\b1,K}\omega^{}_{\b1,T_1^\cc}\omega'_{T_2-\b1,T_2}$, where $K=[2,m+1]$.
We assert that the basis element $x=\omega^{}_{\b1,K}\omega^{}_{\b1,T_1^\cc}\omega'_{T_2-\b1,T_2}$ is unaffected by rewriting non-basis monomials in the expansion of $\Psi$ using the three term relations. This will ensure the non-vanishing of $\Psi$ as needed. Let $y=\omega^{}_{P_1,Q_1}\omega^{}_{P_2,Q_2}\omega'_{P_3,Q_3}$ be a monomial in the expansion of $\Psi$. From the expansions \eqref{eq:first} and \eqref{eq:2of3prods}, we have \begin{equation} \label{eq:almost} y=\omega^{}_{P_1,Q_1}\omega^{}_{P_2,Q_2}\omega'_{P_3,Q_3}= \bigl(\omega^{}_{I_1,K_1}\omega^{}_{\b1,T_1^\cc}\omega^{}_{T_2^\cc-\b1,T_2^\cc}\bigr) \bigl(\omega'_{I_2,K_2}\omega'_{\b1,T_1^{}}\omega'_{T_2^{}-\b1,T_2^{}}\bigr), \end{equation} where, for $j=1,2$, $K_j$ is either empty or is an increasing sequence in $[2,m+1]$ and $I_j$ is $K_j$-admissible, and $T_j$ and $T_j^\cc$ are complementary increasing sequences in $[m+2,m+n]$. From Proposition \ref{prop:rewrite}, non-empty sequences $K_1$ and $K_2$ are of the form $(k_1,\dots,k_\ell,m+1)$ with $k_\ell\le m$, so $K_1'$ and $K_2'$ are increasing sequences in $[2,m]$ of the form $(k_1,\dots,k_\ell)$. Since $\omega'_{i,j}=\omega^{}_{i,j}$ for $1\le i<j\le m$, up to sign, the monomial $\omega^{}_{P_1,Q_1}\omega^{}_{P_2,Q_2}\omega'_{P_3,Q_3}$ can be rewritten as \begin{equation} \label{eq:almost2} y=\bigl(\omega^{}_{I'_1,K'_1}\omega^{}_{I'_2,K'_2} \bigr) \bigl(\omega^{}_{\alpha,m+1}\omega^{}_{\b1,T_1^\cc}\omega^{}_{T_2^\cc-\b1,T_2^\cc}\bigr) \bigl(\omega'_{\beta,m+1}\omega'_{\b1,T_1^{}}\omega'_{T_2^{}-\b1,T_2^{}}\bigr), \end{equation} where $I^{}_1=(I_1',\alpha)$ and $I^{}_2=(I_2',\beta)$. Suppose the monomial $y$ of \eqref{eq:almost} is not an element of the basis of Proposition \ref{prop:basis}. First, consider the case where the subset $S$ of $[m]$ in \eqref{eq:first} is non-empty, so that $K_2\neq \emptyset$. As indicated in \eqref{eq:almost2} above, this gives rise to a factor of $\omega'_{\beta,m+1}$ in the monomial $y$.
Subsequent simplifications, for instance if $\omega'_{\b1,T_1^{}}\omega'_{T_2^{}-\b1,T_2^{}}$ is not a basis element, either annihilate $y$ or give rise to basis elements involving $\omega'_{\beta,m+1}$ or $\omega'_{1,m+1}$. No factor of this form appears in the monomial $x$ of \eqref{eq:theone}. It remains to consider the case where the subset $S$ of $[m]$ in \eqref{eq:first} is empty. For $S=\emptyset$, we have $K_1=[2,m+1]$ and $\alpha=m$ in \eqref{eq:almost2}. In this instance, \begin{equation} \label{eq:almost3} y=\bigl(\omega^{}_{I'_1,K'_1}\bigr) \bigl(\omega^{}_{m,m+1}\omega^{}_{\b1,T_1^\cc}\omega^{}_{T_2^\cc-\b1,T_2^\cc}\bigr) \bigl(\omega'_{\b1,T_1^{}}\omega'_{T_2^{}-\b1,T_2^{}}\bigr), \end{equation} where $I^{}_1=(I_1',m)$ and $K_1'=[2,m]$. We have either $T_1\neq\emptyset$ or $T^\cc_2\neq\emptyset$, since the basis element $x$ of \eqref{eq:theone} is obtained by taking $S=\emptyset$ and $T_1=T^\cc_2=\emptyset$. If $T_1\neq \emptyset$, then $\omega'_{1,k}$ is a factor of $y$, where $k\in[m+2,m+n]$ denotes the minimal element of $T_1$. If $k \notin T_2$, then $\omega'_{1,k}$ is a factor of the basis element $\omega'_{\b1,T_1^{}}\omega'_{T_2^{}-\b1,T_2^{}}$, which survives in each term of the expansion of $y$ arising from application of the three term relations to $\omega^{}_{m,m+1}\omega^{}_{\b1,T_1^\cc}\omega^{}_{T_2^\cc-\b1,T_2^\cc}$ and resulting expressions. If, on the other hand, $k \in T_2$, then $\omega'_{1,k}\omega'_{k-1,k}$ is a factor of $y$. Rewriting using the three term relation $\omega'_{1,k}\omega'_{k-1,k}=\omega'_{1,k-1}(\omega'_{k-1,k}-\omega'_{1,k})$ yields expressions involving $\omega'_{1,k-1}$. Continuing as necessary yields a linear combination of basis elements, each of which contains a factor of $\omega'_{1,j}$, for some $j$, $m+1\le j \le k$. No factor of this form appears in the monomial $x$ of \eqref{eq:theone}. Finally, if $T_1=\emptyset$, then $T_2^\cc\neq\emptyset$.
This implies that $T_2$ is a proper subset of $[m+2,m+n]$, and consequently that the factor $\omega'_{m+1,m+2}\omega'_{m+2,m+3}\cdots\omega'_{m+n-1,m+n}$ appearing in the monomial $x$ of \eqref{eq:theone} cannot appear in $y$. Since $\omega'_{T_2-\b1,T_2}$ is a basis element, any necessary expansion of $y$ involves applications of the three term relations to the factor $\omega^{}_{\b1,T_1^\cc}\omega^{}_{T_2^\cc-\b1,T_2^\cc}$. Since these, and subsequent simplifications, cannot introduce any factors of the form $\omega'_{p,q}$, the factor $\omega'_{m+1,m+2}\omega'_{m+2,m+3}\cdots\omega'_{m+n-1,m+n}$ of $x$ cannot appear in any resulting monomial. Thus, as asserted, expressing $\Psi$ in terms of the basis of Proposition \ref{prop:basis} does not alter the summand $x=\omega^{}_{\b1,K}\omega^{}_{\b1,T_1^\cc}\omega'_{T_2-\b1,T_2}$. Therefore, $\Psi \neq 0$ and $\cl(\ker \Delta^*)\ge \cl({\mathcal J})\ge 2n+m-2$ as required. \end{proof} Thus, for $d$ even and $m \ge 2$, we have \begin{equation} \label{eq:lower} {\tc}\bigl[p\colon \Conf(\R^d,m+n) \to \Conf(\R^d,m)\bigr] \ge \cl(\ker \Delta^*) \ge 2n+m-2. \end{equation} We establish the reverse inequality for the case $d=2$ of the plane and for the case $d\ge 4$ of higher even dimensions using different methods. Since the result in the planar case will play a role in the proof in the higher dimensional case, we begin with the former. \subsection*{The plane} Consider the case $d= 2$ of the plane $\R^2=\C$. Express the configuration space $\Conf(\R^2,\ell)$ as \[ \Conf(\R^2,\ell) = \Conf(\C,\ell) = \{(y_1,\dots,y_\ell) \in \C^\ell \mid y_i \neq y_j\ \text{if}\ i \neq j\} \] in complex coordinates. 
For any $\ell\ge 3$, the map $h_\ell\colon \Conf(\C,\ell) \to \Conf(\C\smallsetminus\{0,1\},\ell-2) \times \Conf(\C,2)$ defined by \begin{equation} \label{eq:homeo} h_{\ell}(y_1,y_2,y_3,\dots,y_\ell) =\left(\Bigl(\frac{y_3-y_1}{y_2-y_1},\dots,\frac{y_\ell-y_1}{y_2-y_1}\Bigr),\bigl(y_1,y_2\bigr)\right) \end{equation} is a homeomorphism. It follows that the bundle $p\colon \Conf(\C,m+n) \to \Conf(\C,m)$ is trivial for $m=2$. The parameterized topological complexity is then equal to the topological complexity of the fiber $\Conf(\C\smallsetminus\{0,1\},n)$, see \cite[Ex.~4.2]{CFW}. Since $\tc(\Conf(\C\smallsetminus\{0,1\},n))=2n$ as shown in \cite{FGY}, for $m=2$, we have \[ \tc[p\colon \Conf(\C,n+2) \to \Conf(\C,2)]=\tc(\Conf(\C\smallsetminus\{0,1\},n))=2n \] as asserted. For $m\ge 3$, the maps \eqref{eq:homeo} give rise to an equivalence of fibrations \[ \begin{CD} \Conf(\C,m+n) @>{h_{m+n}}>> \Conf(\C\smallsetminus\{0,1\},m+n-2) \times \Conf(\C,2) \\ @V{p}VV @VV{q}V \\ \Conf(\C,m) @>{h_m}>> \Conf(\C\smallsetminus\{0,1\},m-2) \times \Conf(\C,2), \end{CD} \] where $q=q'\times q''$, with $q'$ the forgetful map and $q''=\id$ the identity map. Since $\tc[q''\colon \Conf(\C,2)\to \Conf(\C,2)]=0$, the product inequality of Proposition \ref{prop:product} implies that ${\tc}\left[p\colon \Conf(\C,m+n) \to \Conf(\C,m)\right]$ is less than or equal to \begin{equation} \label{eq:qprime} \tc\left[q'\colon \Conf(\C\smallsetminus\{0,1\},m+n-2) \to \Conf(\C\smallsetminus\{0,1\},m-2)\right]. \end{equation} Let $E'=\Conf(\C\smallsetminus\{0,1\},m+n-2)$ and $B'=\Conf(\C\smallsetminus\{0,1\},m-2)$. The fiber of $q'\colon E' \to B'$ is the configuration space $X=\Conf(\C\smallsetminus\sO_m,n)$, which has the homotopy type of a CW-complex of dimension $n$. Similarly, $B'$ has the homotopy type of a CW-complex of dimension $m-2$. Using Proposition \ref{prop:upper}, we obtain the following upper bound for \eqref{eq:qprime}: \[ \tc[q'\colon E' \to B'] \le 2\dim(X)+\dim(B')=2n+m-2. 
\] Combining the above observations yields \[ {\tc}\left[p\colon \Conf(\C,m+n) \to \Conf(\C,m)\right] \le 2n+m-2. \] Together with the lower bound \eqref{eq:lower}, this completes the proof of Theorem \ref{thm:main} in the case $d=2$ of the plane $\R^2=\C$. Theorem \ref{thm:main} for the planar case $d=2$ informs on the structure of the cohomology ring $H^*(E\times_BE)$ for any even $d$. This structure will be utilized in the case $d\ge 4$ of higher even dimensions below. Recall the ideal ${\mathcal J}$ in $H^*(E\times_BE)$ from \eqref{eq:kerideal}. \begin{cor} \label{cor:clJ} For positive integers $m$ and $d$ with $m\ge 2$ and $d\ge 2$ even, let $E=\Conf(\R^d,m+n)$, and $B=\Conf(\R^d,m)$. Then the ideal \[ {\mathcal J}= \langle \omega^{}_{i,j}-\omega'_{i,j} \mid 1\le i<j\ \text{and}\ m<j\le n+m\rangle \] in $H^*(E\times_BE)$ has cup length $\cl({\mathcal J})= 2n+m-2$. \end{cor} \subsection*{Higher even dimensions} \label{sec:high dim} For $d\ge 4$ even, we use obstruction theory to complete the proof of Theorem \ref{thm:main}. The Schwarz genus of a fibration $p\colon E \to B$ with fiber $X$ is at most $r-1$ if and only if its $r$-fold fiberwise join admits a continuous section, cf. \cite[Thm.~3]{Sch}. Consequently, $\tc[p\colon E \to B]\le r-1$ if and only if the $r$-fold fiberwise join \[ \Pi_r\colon \Jo_r (E^I_B) \to E\times_B E \] admits a section. Note that the fiber of $\Pi_r$ is $\Jo_r (\Omega{X})$, the $r$-fold join of the loop space of $X$. In the case of the Fadell-Neuwirth bundle of configuration spaces, we have $B=\Conf(\R^d,m)$, $E=\Conf(\R^d,m+n)$, and $X=\Conf(\R^d\smallsetminus\sO_m,n)$. As noted previously, $X$ is $(d-2)$-connected. Since the join of $p$- and $q$-connected CW-complexes is $(p+q+2)$-connected, the fiber $\Jo_r (\Omega X)$ of $\Pi_r$ is $(rd-r-2)$-connected. Thus, to show that ${\tc}\left[p\colon \Conf(\R^d,m+n)\to \Conf(\R^d,m)\right] \le 2n+m-2$, it suffices to prove the following. 
\begin{prop} \label{prop:join} For positive integers $m$ and $d$ with $m\ge 2$ and $d\ge 4$ even, let $E=\Conf(\R^d,m+n)$, $B=\Conf(\R^d,m)$, and $r=2n+m-1$. Then the fibration \[ \Pi_r \colon \Jo_r E^I_B \to E\times_B E \] admits a section. \end{prop} \begin{proof} From the connectivity of $X=\Conf(\R^d\smallsetminus\sO_m,n)$ noted previously, the primary obstruction to the existence of a section of $\Pi_r \colon \Jo_r E^I_B \to E\times_B E$ is an element $\Theta_r \in H^{r(d-1)}(E\times_B E; \pi_{r(d-1)-1}(\Jo_r \Omega X))$. Since $\hdim \bigl(E\times_B E\bigr)=r(d-1)$ as noted in \eqref{eq:hdims}, higher obstructions vanish for dimensional reasons. So $\Theta_r$ is the only obstruction. By the Hurewicz theorem, we have $\pi_{r(d-1)-1}(\Jo_r \Omega X)=H_{r(d-1)-1}(\Jo_r\Omega X)$. For spaces $Y$ and $Z$ with torsion free integral homology, the (reduced) homology of the join is given by $\widetilde{H}_{q+1}(Y\Jo Z)=\bigoplus_{i+j=q} \widetilde{H}_i(Y) \otimes \widetilde{H}_j(Z)$. This, together with the fact that the homology groups of $X$ (and $\Omega X$) are free abelian, yields \[ \pi_{r(d-1)-1}(\Jo_r \Omega X)=H_{r(d-1)-1}(\Jo_r\Omega X) = [\widetilde{H}_{d-1}(X)]^{\otimes r} = [H_{d-1}(X)]^{\otimes r}, \] the last equality since $d\ge 4$. Thus, $\Theta_r \in H^{r(d-1)}(E\times_B E; [H_{d-1}(X)]^{\otimes r})$. By \cite[Thm.~1]{Sch}, the obstruction $\Theta_r$ decomposes as $\Theta_r=\theta\smile \dots \smile \theta = \theta^r$, where $\theta \in H^{d-1}(E\times_B E; H_{d-1}(X))$ is the primary obstruction to the existence of a section of $\Pi\colon E^I_B \to E\times_B E$. Since $E\times_B E$ is simply connected, the system of coefficients $H_{d-1}(X)$ on $E\times_B E$ is trivial. As noted above, $H_{d-1}(X)$ is torsion free. By Proposition \ref{prop:HEBE}, the cohomology ring $H^*(E\times_B E)$ is also torsion free. It follows that $H^*(E\times_B E;[H_{d-1}(X)]^{\otimes q})$ is torsion free for any $q\ge 1$. 
Since $\theta$ is the primary obstruction to the existence of a section of the fibration $\Pi\colon E^I_B \to E\times_B E$, we have \[ \theta \in \ker[\Delta^*\colon H^*(E\times_B E;H_{d-1}(X)) \longrightarrow H^*(E;\Delta^*H_{d-1}(X))]. \] For brevity, denote the free abelian group $H_{d-1}(X)$ by $A$. Using a Universal Coefficient theorem (for a (co)chain complex computing $H^*(E\times_B E)$), we can identify $H^{d-1}(E\times_B E;A)$ with $H^{d-1}(E\times_B E)\otimes A$, and $H^{d-1}(E;A)$ with $H^{d-1}(E)\otimes A$. With these identifications, we have $\Delta^*\colon H^{d-1}(E\times_B E)\otimes A \to H^{d-1}(E)\otimes A$, and $\theta\in\ker(\Delta^*)$ may be expressed as a linear combination of elements of the form $\eta_j \otimes a_j$, where the elements $\eta_j$ are the degree $d-1$ generators of $\ker[\Delta^*\colon H^*(E\times_B E) \to H^*(E;\Z)]$ and $a_j \in A$. The $r$-fold cup product $\Theta_r=\theta^r \in H^{r(d-1)}(E\times_B E)\otimes A^{\otimes r}$ is then realized as a linear combination of elements of the form $\eta_J \otimes a_J$, where $\eta_J=\eta_{j_1}\smile \dots \smile \eta_{j_r}$ is an $r$-fold cup product of degree $d-1$ generators of $\ker[\Delta^*\colon H^*(E\times_B E) \to H^*(E)]$, and $a_J \in A^{\otimes r}$. But the degree $d-1$ generators of $\ker\Delta^*$ are the generators of the ideal $\mathcal J$ of \eqref{eq:kerideal}. As noted in Corollary \ref{cor:clJ}, we have $\cl({\mathcal J}) = 2n+m-2$. It follows that for $r=2n+m-1$, we have ${\mathcal J}^r=0$, and consequently $\theta^r=0$. Since the primary obstruction $\Theta_r=\theta^r$ vanishes, the fibration $\Pi_r \colon \Jo_r E^I_B \to E\times_B E$ admits a section. \end{proof} This completes the proof of Theorem \ref{thm:main} in the case where $d\ge 4$ is even. 
\begin{ack} The first author thanks Emanuele Delucci, Nick Proudfoot, and He Xiaoyi for productive conversations, and the organizers of the virtual workshop \emph{Arrangements at Home} for facilitating several of these conversations. Portions of this work were undertaken when the first and second authors visited the University of Florida Department of Mathematics in November, 2019. We thank the department for its hospitality and for providing a productive mathematical environment. \end{ack} \newcommand{\arxiv}[1]{{\texttt{\href{http://arxiv.org/abs/#1}{{arXiv:#1}}}}} \newcommand{\MRh}[1]{\href{http://www.ams.org/mathscinet-getitem?mr=#1}{MR#1}} \bibliographystyle{amsplain}
TITLE: Definite integrals QUESTION [1 upvotes]: $$\int_0^{1.5}[x^2]\,dx$$ where $[\cdot]$ denotes the greatest integer function, is equal to: (1) $\sqrt{2}-2$ (2) $2-\sqrt{2}$ (3) $2+\sqrt{2}$ (4) None of these. What I did: I broke the function into two parts, one with limits from 0 to 1. The problem is how I should deal with the other part. Please keep the explanations as simple as possible. Thanks. REPLY [2 votes]: $$\int_0^{1.5} [x^2]\, \mathrm{d} x=\int_0^1[x^2]\, \mathrm{d}x+\int_1^{\sqrt{2}} [x^2]\, \mathrm{d}x+\int_{\sqrt{2}}^{1.5} [x^2]\, \mathrm{d}x .$$ In the first integral, $[x^2]=0$; in the second, $[x^2]=1$; and in the third, $[x^2]=2$.
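As a quick numerical sanity check (illustrative only, not part of the original answer), a midpoint Riemann sum of $\lfloor x^2\rfloor$ over $[0,1.5]$ should approach $2-\sqrt{2}\approx 0.5858$, confirming option (2):

```python
import math

def floor_x2_integral(a, b, n=200_000):
    # Midpoint Riemann sum of floor(x^2) over [a, b]; floor(x^2) is piecewise
    # constant, so the sum is exact except in the two cells containing the
    # jump points x = 1 and x = sqrt(2).
    h = (b - a) / n
    return sum(math.floor((a + (k + 0.5) * h) ** 2) * h for k in range(n))

exact = 2 - math.sqrt(2)  # value from the piecewise computation above
print(round(floor_x2_integral(0.0, 1.5), 4), round(exact, 4))  # prints: 0.5858 0.5858
```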
TITLE: Two from cubic subgraph hardness QUESTION [2 upvotes]: The Problem For a given graph $G$, the cubic subgraph problem asks if there is a subgraph where every vertex has degree 3. The cubic subgraph problem is NP-hard even in bipartite planar graphs with maximum degree at most 4. Suppose we have an oracle that decides if a bipartite graph contains a "two from cubic subgraph". Can we solve the cubic subgraph problem in polynomial time? Here "two from cubic" means every vertex is of degree 3 except for two degree 2 vertices. I would also be happy if there was another way to (dis)prove that two from cubic subgraph is NP-hard. Some Remarks One variant I explored is if we are allowed to pick which of the two vertices have degree 2. Then for a given graph $G$ and $uv\in E(G)$, we can ask if $G-uv$ has a two from cubic subgraph with special vertices $u, v$ to solve the cubic subgraph problem. This variant is thus NP-hard. However, I have been unable to find a reduction for the original problem. One other thing to note is that given a two from cubic subgraph oracle, it is pretty easy to find a two from cubic subgraph: While there is a vertex $v$ such that $G-v$ has a two from cubic subgraph, delete $v$. We could also relax the restrictions on the problem. For example, is the two from 4-regular subgraph problem NP-hard? Two from 4-regular means every vertex has degree 4 except for two vertices of degree 3. Even any useful facts about two from cubic subgraph or two from 4-regular graphs would be appreciated. Finally, this point might be a bit off-topic, but we could potentially phrase this as a graph editing problem. "Is there a sequence of edges/vertex deletions leading to a two from cubic subgraph?" I am not familiar with these types of questions but thought it could be an interesting approach. crossposted from https://math.stackexchange.com/questions/3867470/two-from-cubic-subgraph-hardness REPLY [1 votes]: A formal proof has been produced. 
See https://arxiv.org/abs/2105.07161
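For experimentation, the degree condition defining a "two from cubic" subgraph is straightforward to check; the sketch below (the function name and adjacency-map encoding are my own, not from the question) tests whether a graph has every vertex of degree 3 except exactly two vertices of degree 2:

```python
from collections import Counter

def is_two_from_cubic(adj):
    # adj maps each vertex to the set of its neighbours.
    degree_counts = Counter(len(nbrs) for nbrs in adj.values())
    return (degree_counts.get(2, 0) == 2
            and degree_counts.get(2, 0) + degree_counts.get(3, 0) == len(adj))

# K_4 minus one edge: vertices 0 and 1 have degree 2, vertices 2 and 3 degree 3.
k4_minus_edge = {0: {2, 3}, 1: {2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(is_two_from_cubic(k4_minus_edge))  # prints: True
```

This only verifies the degree condition on a candidate subgraph; the hardness question above concerns searching over all subgraphs.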
\onecolumn \begin{center} { \Large Supplementary Material for:\vspace{2mm} \\ Adaptive Kernel Learning in Heterogeneous Networks}\vspace{2mm} \\ by Hrusikesha Pradhan, Amrit Singh Bedi, Alec Koppel, and Ketan Rajawat \end{center} \section{Proof of Corollary \ref{thm:representer}}\label{proof_representthm} The proof generalizes that of the classical Representer Theorem. The inner minimization in \eqref{eq:primaldualprob_emp} with respect to $\bbf$ can be written as \begin{align}\label{eq:rep_proof1} \ccalE(\bbf;\ccalS,\bbmu)&=\sum_{i\in\ccalV}\frac{1}{N}\sum_{k=1}^N\bigg[ \ell_i(f_i\big(\bbx_{i,k}), y_{i,k}\big)+\sum_{j\in n_i} \mu_{ij}\Big(h_{ij}(f_i(\bbx_{i,k}),f_j(\bbx_{i,k}))-\gamma_{ij}\Big) \bigg]. \end{align} Let the subspace of functions spanned by the kernel functions $\kappa(\bbx_{i,k},\cdot)$ for $\bbx_{i,k}\in \ccalS_i$ be denoted as $\ccalF_{\kappa,\ccalS_i}$, i.e., \begin{align} \ccalF_{\kappa,\ccalS_i}=\text{span}\{\kappa(\bbx_{i,k},\cdot):1\le k\le N\}. \end{align} We denote the projection of $f_i$ on the subspace $\ccalF_{\kappa,\ccalS_i}$ as $f_{ip}$ and the component perpendicular to the subspace as $f_{i\perp}$, which can be written as $f_{i\perp}=f_i-f_{ip}$. Now we can write \begin{align}\label{eq:rep_proof2} f_i(\bbx_{i,k})=\langle f_i,\kappa(\bbx_{i,k},\cdot)\rangle&=\langle f_{ip},\kappa(\bbx_{i,k},\cdot)\rangle + \langle f_{i\perp},\kappa(\bbx_{i,k},\cdot)\rangle\nonumber\\ &=\langle f_{ip},\kappa(\bbx_{i,k},\cdot)\rangle=f_{ip}(\bbx_{i,k}). \end{align} Thus the evaluation of $f_i$ at any training point $\bbx_{i,k}$ is independent of $f_{i\perp}$. Using this fact, we can now write \eqref{eq:rep_proof1} as \begin{align}\label{eq:rep_proof3} \ccalE(\bbf;\ccalS,\bbmu)=& \sum_{i\in\ccalV}\frac{1}{N}\sum_{k=1}^N\bigg[ \ell_i(f_{ip}\big(\bbx_{i,k}), y_{i,k}\big)+\sum_{j\in n_i} \mu_{ij}\Big(h_{ij}(f_{ip}(\bbx_{i,k}),f_j(\bbx_{i,k}))-\gamma_{ij}\Big) \bigg]. 
\end{align} Thus from \eqref{eq:rep_proof3}, we can say that $\ccalE(\bbf;\ccalS,\bbmu)$ is independent of $f_{i\perp}$. As we are minimizing \eqref{eq:empirical_lagrangian} with respect to $f_i$, the evaluation of $f_j$ at the training point of node $i$ can be treated as a constant in $\ccalE(\bbf;\ccalS,\bbmu)$, which is the first part of \eqref{eq:empirical_lagrangian}. Additionally, note that $\lambda\cdot\|f_i\|_{\ccalH}^2\cdot 2^{-1}\geq \lambda\cdot\|f_{ip}\|_{\ccalH}^2\cdot 2^{-1}$. Therefore, given any $\bbmu$, the quantity $\ccalE(\bbf;\ccalS,\bbmu)+\sum_{i=1}^V\lambda\cdot\|f_i\|_{\ccalH}^2\cdot 2^{-1}$ is minimized at some $f_i^*(\bbmu_i)$ such that $f_i^*(\bbmu_i)$ lies in $\ccalF_{\kappa,\ccalS_i}$. In particular, this holds for $\bbmu_i^*$, where $f_i^*=f_i^*(\bbmu_i^*)$, thereby completing the proof. \hfill $\blacksquare$ \section{Statement and Proof of Lemma \ref{thm:bound_gap}} \label{app:proof_bound_gap} Using Assumption \ref{as:fourth}, we bound the gap between the optimal values of problems \eqref{eq:main_prob} and \eqref{eq:prob_zero_cons}; the bound is presented as Lemma \ref{thm:bound_gap}. \begin{lemma}\label{thm:bound_gap} Under Assumptions \ref{as:second}, \ref{as:fourth}, and \ref{as:fifth}, for $0\le \nu\le\xi/2$, it holds that: \begin{align} S(\bbf_\nu^*)-S(\bbf^*)\le \frac{{4}VR_{\ccalB}(C X+\lambda R_{\ccalB})}{\xi}\nu. \end{align} \end{lemma} \begin{proof} Let $(\bbf^*,\bbmu^*)$ be the solution to \eqref{eq:main_prob} and $(\bbf_{\nu}^*,\bbmu_{\nu}^*)$ be the solution to \eqref{eq:prob_zero_cons}. As $\nu \le \frac{\xi}{2} \le\xi$, there exists a strictly feasible primal solution $\bbf^{\dagger}$ such that $G(\bbf^{\dagger})+\mathbf{1}\nu\le G(\bbf^{\dagger})+\mathbf{1}\xi\le \mathbf{0}$, where $\mathbf{1}$ denotes the vector of all ones and $G$ denotes the stacked vector of constraints as defined in the proof of Theorem \ref{thm:convergence}. Hence strong duality holds for \eqref{eq:prob_zero_cons}. 
Therefore, using the definition of $S(\bbf)$ from {\eqref{eq:kernel_stoch_opt_global}}, we have \begin{align} S(\bbf_\nu^*)&=\min_{\bbf} S(\bbf) + \langle \mu_{\nu}^*, G(\bbf)+ \mathbf{1}\nu \rangle\nonumber\\ & \le S(\bbf^*) + \langle \mu_{\nu}^*, G(\bbf^*)+ \mathbf{1}\nu \rangle\label{eq:boundgap_1}\\ & \le S(\bbf^*) + \nu \langle \mu_{\nu}^*, \mathbf{1}\rangle\label{eq:boundgap_2} \end{align} where the inequality in \eqref{eq:boundgap_1} follows from the optimality of $\bbf_\nu^*$, and \eqref{eq:boundgap_2} from the fact that $G(\bbf^*)\le 0$. Next, by Assumption \ref{as:fourth}, we have strict feasibility of $\bbf^\dagger$, so proceeding as in \eqref{eq:boundgap_1} we can write: \begin{align} S(\bbf_\nu^*)& \le S(\bbf^\dagger) + \langle \mu_{\nu}^*, G(\bbf^\dagger)+ \mathbf{1}\nu \rangle\nonumber\\ & =S(\bbf^\dagger) + \langle \mu_{\nu}^*, G(\bbf^\dagger)+ \mathbf{1}(\nu + \xi-\xi) \rangle\nonumber\\ &=S(\bbf^\dagger) + \langle \mu_{\nu}^*, G(\bbf^\dagger)+ \mathbf{1} \xi\rangle + \langle \mu_{\nu}^*, \mathbf{1}(\nu -\xi) \rangle \nonumber\\ &\le S(\bbf^\dagger) +(\nu -\xi) \langle \mu_{\nu}^*,\mathbf{1}\rangle.\label{eq:boundgap_3} \end{align} Thus from \eqref{eq:boundgap_3}, we can equivalently write \begin{align}\label{eq:boundgap_3_1} \langle \mu_{\nu}^*,\mathbf{1}\rangle\le \frac{S(\bbf^\dagger)-S(\bbf_\nu^*)}{\xi-\nu}. \end{align} Now we upper bound the difference $S(\bbf^\dagger)-S(\bbf_\nu^*)$. Using the definition of $S(\bbf)$, we write it as \begin{align}\label{eq:boundgap_4} S(\bbf^\dagger)\!-\!S(\bbf_\nu^*)=\mbE\sum_{i\in\ccalV}\!\!\big[\ell_i(f^\dagger_{i}\big(\bbx_{i,t}), y_{i,t}\big)\!-\!\ell_i(f_{i,\nu}^\star\big(\bbx_{i,t}), y_{i,t}\big)\!\big]\!+\frac{\lambda}{2}\sum_{i\in\ccalV}\!\Big(\|f^\dagger_{i} \|^2_{\ccalH}- \|f_{i,\nu}^\star \|^2_{\ccalH}\Big). 
\end{align} Next, we bound the quantity in \eqref{eq:boundgap_4} as \begin{align}\label{eq:boundgap_5} |S(\bbf^\dagger)\!-\!S(\bbf_\nu^*)|&\!\leq\! \mbE\!\sum_{i\in\ccalV}\!\big[|\ell_i(f^\dagger_{i}\big(\bbx_{i,t}), y_{i,t}\big)\!-\!\ell_i(f_{i,\nu}^\star\big(\bbx_{i,t}), y_{i,t}\big)|\big]+\frac{\lambda}{2}\!\sum_{i\in\ccalV}\!|\|f^\dagger_{i} \|^2_{\ccalH}- \|f_{i,\nu}^\star \|^2_{\ccalH}|\nonumber\\ &\leq \mbE\sum_{i\in\ccalV} C|f^\dagger_{i}\big(\bbx_{i,t})-f_{i,\nu}^\star\big(\bbx_{i,t})|+\frac{\lambda}{2}\!\sum_{i\in\ccalV}\!|\|f^\dagger_{i} \|^2_{\ccalH}- \|f_{i,\nu}^\star \|^2_{\ccalH}|, \end{align} where the first inequality follows from the triangle inequality and the second from the Lipschitz continuity condition in Assumption \ref{as:second}. Further, using the reproducing property of $\kappa$ and the Cauchy-Schwarz inequality, we simplify $|f^\dagger_{i}\big(\bbx_{i,t})-f_{i,\nu}^\star\big(\bbx_{i,t})|$ in \eqref{eq:boundgap_5} as \begin{align}\label{eq:boundgap_6} |f^\dagger_{i}\big(\bbx_{i,t})-f_{i,\nu}^\star\big(\bbx_{i,t})|&=|\langle f^\dagger_{i}-f_{i,\nu}^\star,\kappa(\bbx_{i,t},\cdot)\rangle| \leq \|f^\dagger_{i}-f_{i,\nu}^\star\|_{\ccalH}\cdot \|\kappa(\bbx_{i,t},\cdot)\|_{\ccalH}\leq {2}R_\ccalB X \end{align} where the last inequality follows from Assumptions \ref{as:first} and \ref{as:fifth}. Now, we consider the term $|\|f^\dagger_{i} \|^2_{\ccalH}- \|f_{i,\nu}^\star \|^2_{\ccalH}|$ on the right-hand side of \eqref{eq:boundgap_5}: \begin{align}\label{eq:boundgap_7} &|\|f^\dagger_{i} \|^2_{\ccalH}- \|f_{i,\nu}^\star \|^2_{\ccalH}|\leq \|f^\dagger_{i}-f_{i,\nu}^\star\|_{\ccalH}\cdot \|f^\dagger_{i}+f_{i,\nu}^\star\|_{\ccalH}\!\leq\! 4R_{\ccalB}^2. \end{align} Substituting \eqref{eq:boundgap_6} and \eqref{eq:boundgap_7} in \eqref{eq:boundgap_5}, we obtain \begin{align}\label{eq:boundgap_8} |S(\bbf^\dagger)-S(\bbf_\nu^*)| \leq {2}VCR_\ccalB X+{2}V\lambda R_{\ccalB}^2 ={2}VR_{\ccalB}(C X+\lambda R_{\ccalB}). 
\end{align} Now using \eqref{eq:boundgap_8}, we rewrite \eqref{eq:boundgap_3_1} as \begin{align}\label{eq:boundgap_9} \langle \mu_{\nu}^*,\mathbf{1}\rangle\le \frac{S(\bbf^\dagger)-S(\bbf_\nu^*)}{\xi-\nu}\le \frac{{2}VR_{\ccalB}(C X+\lambda R_{\ccalB})}{\xi-\nu}\le \frac{{4}VR_{\ccalB}(C X+\lambda R_{\ccalB})}{\xi}. \end{align} Finally, we use \eqref{eq:boundgap_9} in \eqref{eq:boundgap_2} and get the required result: \begin{align} S(\bbf_\nu^*)-S(\bbf^*)\le \frac{{4}VR_{\ccalB}(C X+\lambda R_{\ccalB})}{\xi}\nu. \end{align} \end{proof} The importance of Lemma \ref{thm:bound_gap} is that it establishes that the gap between the solutions of problems \eqref{eq:main_prob} and \eqref{eq:prob_zero_cons} is $\ccalO(\nu)$. \section{Statement and Proof of Lemma \ref{lemma:bound_primal_dual_grad}} \label{app:bound_primal_dual_gradient} We bound the primal and dual stochastic gradients used for \eqref{eq:projection_hat} and \eqref{eq:dualupdate_edge}, respectively, in the following lemma. \begin{lemma}\label{lemma:bound_primal_dual_grad} Using Assumptions \ref{as:first}-\ref{as:fifth}, the mean-square-magnitudes of the primal and dual gradients of the stochastic augmented Lagrangian $\hat{\ccalL}_t(\bbf,\bbmu)$ defined in \eqref{eq:stochastic_approx} are upper-bounded as \begin{align} \!\!\!\!\mathbb{E}[\| \nabla_\bbf\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}]&\leq 4V X^2 C^2 + 4V X^2 L_h^2 {E} \|\bbmu\|^2+2V \lambda^2 R_{\ccalB}^2\\ \!\!\!\!\mathbb{E}\Big[\| \nabla_{\bbmu}\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}\Big]\!\! &\leq {E}\Big(\!(2K_1\!\!+\!2L_h^2X^2 R_{\ccalB}^2)\!+\! 2\delta^2\eta^2\|\bbmu\|^2\! \Big) \end{align} for some $0<K_1<\infty$. 
\end{lemma} \begin{proof} In this proof, for any $(\bbf,\bbmu) \in \ccalH^V\times \mbR^{E}_+$, we upper bound the mean-square-magnitude of the primal gradient as \begin{align}\label{eq:lemma1_1} \mathbb{E}[\| \nabla_\bbf\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}]&=\mathbb{E}[\|\text{vec}(\nabla_{fi}\hat{\ccalL}_t(\bbf,\bbmu))\|^2_{\ccalH}]\le V \max_{i\in \ccalV}\mathbb{E}[\|\nabla_{fi}\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}], \end{align} where for the equality we have used the fact that the functional gradient is a concatenation of the functional gradients associated with each agent, and the inequality is obtained by considering the worst-case estimate across the network. On the right-hand side of \eqref{eq:lemma1_1} we substitute the value of $\nabla_{fi}\hat{\ccalL}_t(\bbf,\bbmu)$ from \eqref{eq:lagg_derv} to obtain \begin{align}\label{eq:delta_f_bound1_temp} \mathbb{E}[\| \nabla_\bbf\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}] &\le V \!\max_{i\in \ccalV}\mathbb{E}\big[\|\!\big[\ell_i'(f_{i}(\bbx_{i,t}),y_{i,t})\!+\!\!\sum_{j\in n_i}\!\mu_{ij}h'_{ij}(f_{i}(\bbx_{i,t}),\!f_{j}(\bbx_{i,t}))\big]\! 
\kappa(\bbx_{i,t},\cdot)\!+\!\lambda f_{i}\|^2_{\ccalH}\big]\\ &\le V \max_{i\in \ccalV} \mathbb{E}\big[2\|\big[\ell_i'(f_{i}(\bbx_{i,t}),y_{i,t})+\sum_{j\in n_i}\mu_{ij}h'_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))\big] \kappa(\bbx_{i,t},\cdot)\|_{\ccalH}^2\big]\!+2V\lambda^2 \|f_{i}\|_{\ccalH}^2.\nonumber \end{align} In \eqref{eq:delta_f_bound1_temp}, we have used the fact that $\|a+b\|_{\ccalH}^2\le 2\cdot(\|a\|_{\ccalH}^2+\|b\|_{\ccalH}^2)$ for any $a, b \in \ccalH$, i.e., the sum-of-squares inequality. Next, we again use the sum-of-squares inequality for the first bracketed term on the right-hand side of \eqref{eq:delta_f_bound1_temp}, and use Assumption \ref{as:fifth} to upper bound $\|f_{i}\|_{\ccalH}^2$ by $R_\ccalB^2$, to get \begin{align}\label{eq:delta_f_bound1} \mathbb{E}[\| \nabla_\bbf\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}]\le V \max_{i\in \ccalV} \mathbb{E}\big[4\|\ell_i'(f_{i}(\bbx_{i,t}),y_{i,t})\kappa(\bbx_{i,t},\cdot)\|^2+4\|\sum_{j\in n_i}\mu_{ij}h'_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t})) \kappa(\bbx_{i,t},\cdot)\|_{\ccalH}^2\big]+c(\lambda), \end{align} where $c(\lambda):=\!\!2V\!\lambda^2\! \cdot\! R_{\ccalB}^2$. Using the Cauchy-Schwarz inequality, the first term on the right-hand side of \eqref{eq:delta_f_bound1} can be written as $$\|\ell_i'(f_{i}(\bbx_{i,t}),y_{i,t})\kappa(\bbx_{i,t},\cdot)\|^2\le \|\ell_i'(f_{i}(\bbx_{i,t}),y_{i,t})\|^2 \|\kappa(\bbx_{i,t},\cdot)\|^2.$$ Then, using Assumptions \ref{as:first} and \ref{as:second}, we bound $\|\ell_i'(f_{i}(\bbx_{i,t}),y_{i,t})\|^2$ by $C^2$ and $\|\kappa(\bbx_{i,t},\cdot)\|^2$ by $X^2$. Similarly, we use the Cauchy-Schwarz inequality for the second term in \eqref{eq:delta_f_bound1} and bound $\|\kappa(\bbx_{i,t},\cdot)\|^2$ by $X^2$. 
Now using these, \eqref{eq:delta_f_bound1} can be written as \begin{align}\label{eq:lemma1_final_temp1} \mathbb{E}[\| \nabla_\bbf\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}]\le \!4V' C^2 + 4V' \|\sum_{j\in n_i}\mu_{ij}h'_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t})) \|_{\ccalH}^2+c(\lambda), \end{align} where $V':=VX^2$. Using Assumption \ref{as:third}, we bound $h'_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))$ in the second term on the right-hand side of \eqref{eq:lemma1_final_temp1} by $L_h$ and then, taking the constant $L_h$ out of the summation, we get \begin{align}\label{eq:lemma1_final_temp2} \mathbb{E}[\| \nabla_\bbf\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}]\le 4V' C^2+ 4V'L_h^2 \|\sum_{j=1}^{ |n_i|}\mu_{ij} \|^2+c(\lambda). \end{align} Here, $|n_i|$ denotes the number of neighbors of agent $i$. Then, using the fact that $\|\sum_{j=1}^{ |n_i|}\mu_{ij} \|^2\le {|n_i|} \sum_{j=1}^{ |n_i|}|\mu_{ij}|^2$, we get \begin{align}\label{eq:lemma1_final_temp3} \mathbb{E}[\| \nabla_\bbf\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}]\le 4V' C^2+ 4V' L_h^2 {|n_i|} \sum_{j=1}^{ |n_i|}|\mu_{ij}|^2+c(\lambda). \end{align} Next, we upper bound ${|n_i|}$ by ${E}$ and $\sum_{j=1}^{ |n_i|}|\mu_{ij}|^2$ by $\|\bbmu\|^2$, and write \eqref{eq:lemma1_final_temp3} as \begin{align}\label{eq:lemma1_final} \mathbb{E}[\| \nabla_\bbf\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}]\le 4V' C^2 + 4V' L_h^2 {E} \|\bbmu\|^2+c(\lambda). \end{align} Thus \eqref{eq:lemma1_final} establishes the desired upper bound on $\mathbb{E}[\| \nabla_\bbf\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}]$. 
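As a quick sanity check of the counting step above (purely illustrative, not part of the proof), the bound $\|\sum_{j}\mu_{ij}\|^2 \le |n_i|\sum_{j}|\mu_{ij}|^2$ is an instance of the Cauchy-Schwarz inequality and can be verified numerically:

```python
import random

# Verify (sum_j mu_j)^2 <= n * sum_j mu_j^2 on random nonnegative vectors,
# the counting bound used to pass from (eq:lemma1_final_temp2) to
# (eq:lemma1_final_temp3); the mu_j play the role of the dual variables.
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    mu = [random.random() for _ in range(n)]
    assert sum(mu) ** 2 <= n * sum(m * m for m in mu) + 1e-12
print("bound holds on all sampled vectors")
```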
With this in hand, we now shift focus to deriving a similar upper bound on the magnitude of the dual stochastic gradient of the Lagrangian, $\mathbb{E}\Big[\| \nabla_{\bbmu}\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}\Big]$, as \begin{align}\label{eq:delta_mu_temp1} \mathbb{E}\Big[\| \nabla_{\bbmu}\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}\Big] &=\mathbb{E}\|\text{vec}(h_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))-\gamma_{ij}+\nu-\delta\eta\mu_{ij})\|_{\ccalH}^2\nonumber\\ &\le {E} \max_{(i,j)\in\ccalE}\mathbb{E}\|h_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))-\gamma_{ij}+\nu-\delta\eta\mu_{ij}\|_{\ccalH}^2\nonumber\\ &\le {E} \max_{(i,j)\in\ccalE}\mathbb{E}\|h_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))+\nu-\delta\eta\mu_{ij}\|_{\ccalH}^2. \end{align} In the first equality we write the concatenated version of the dual stochastic gradient associated with each agent, whereas the second inequality is obtained by considering the worst-case bound. In the third inequality, we use the fact that $|a-b-c|^2 \le |a-c|^2$ for the scalars considered here in order to drop $\gamma_{ij}$. Next, applying $\|a+b\|_{\ccalH}^2\le 2\cdot(\|a\|_{\ccalH}^2+\|b\|_{\ccalH}^2)$ for any $a, b \in \ccalH$, we get \begin{align}\label{eq:delta_mu_temp2} \mathbb{E}\Big[\| \nabla_{\bbmu}\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}\Big]&\le {E}\big(2\mathbb{E}\|h_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))+\nu\|^2_{\ccalH}+ 2\delta^2\eta^2|\mu_{ij}|^2\big). \end{align} Here we have ignored the $\nu^2$ term since $\nu < 1$, so it can be subsumed within the first term. Then we bound the first term in \eqref{eq:delta_mu_temp2} using Assumption \ref{as:third}, and upper bound $|\mu_{ij}|^2$ in the second term by $\|\bbmu\|^2$: \begin{align}\label{eq:delta_mu_temp3} &\mathbb{E}\Big[\| \nabla_{\bbmu}\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}\Big]\le {E}\Big(2\big(K_1+L_h^2\mathbb{E}(|f_{i}(\bbx_{i,t})|^2)\big)+ 2\delta^2\eta^2\|\bbmu\|^2 \Big). 
\end{align} Next, we use $|f_{i}(\bbx_{i,t})|^2=|\langle f_{i},\kappa(\bbx_{i,t},\cdot)\rangle_\ccalH|^2\le \|f_{i}\|_\ccalH^2\cdot\|\kappa(\bbx_{i,t},\cdot)\|_\ccalH^2$ and then we have upper bounded $\|f_{i}\|_\ccalH^2$ and $\|\kappa(\bbx_{i,t},\cdot)\|_\ccalH^2$ by $R_{\ccalB}^2$ and $X^2$, and we obtain \begin{align}\label{eq:lemma1_2_final} \mathbb{E}&\Big[\| \nabla_{\bbmu}\hat{\ccalL}_t(\bbf,\bbmu)\|^2_{\ccalH}\Big]\le {E}((2K_1+2L_h^2X^2\cdot R_{\ccalB}^2)+ 2\delta^2\eta^2\|\bbmu\|^2 ). \end{align} \end{proof} \vspace{-4mm} \begin{comment} \section*{Appendix B: Proof of bound of mean-square-magnitude of primal and dual gradient} \label{app:bound_primal_dual_gradient} \begin{proof} In this proof for any $(f_t,\mu_t) \in \ccalH^V\times \mbR^M_+ $ we upper bound the mean-square-magnitude of primal gradient as \begin{align}\label{eq:lemma1_1} \mathbb{E}[\| \nabla_f\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}]&=\mathbb{E}[\|vec(\nabla_{fi}\hat{\ccalL}_t(f_t,\mu_t))\|^2_{\ccalH}]\nonumber\\ &\le V \max_{i\in \ccalV}\mathbb{E}[\|\nabla_{fi}\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}] \end{align} where for the first equality we have used the fact that the functional gradient is a concatenation of functional gradients associated with each agent. The second inequality is obtained by considering the worst case estimate across the network. In the right-hand side of \eqref{eq:lemma1_1} we substitute the value of $\nabla_{fi}\hat{\ccalL}_t(f_t,\mu_t)$ from \eqref{eq:lagg_derv} and we get, \begin{align}\label{eq:delta_f_bound1} \!\!&\mathbb{E}[\| \nabla_f\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}]\le \!V \!\max_{i\in \ccalV}\mathbb{E}\bigg[\!\Big\|\!\Big[\ell_i'(f_{i,t}(\bbx_{i,t}),y_{i,t})\!\nonumber\\ &+\!\!\sum_{j\in n_i}\!\mu_{ij}h'_{ij}(f_{i,t}(\bbx_{i,t}),\!f_{j,t}(\bbx_{i,t}))\Big]\! 
\kappa(\bbx_{i,t},\cdot)\!+\!\lambda f_{i,t}\Big\|^2_{\ccalH}\!\bigg]\nonumber\\ &\stackrel{(a)}\le V \max_{i\in \ccalV} \mathbb{E}\bigg[2\Big\|\Big[\ell_i'(f_{i,t}(\bbx_{i,t}),y_{i,t})\nonumber\\ &+\sum_{j\in n_i}\mu_{ij}h'_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))\Big] \kappa(\bbx_{i,t},\cdot)\Big\|_{\ccalH}^2\bigg]\!+2V\lambda^2 \|f_{i,t}\|_{\ccalH}^2\nonumber\\ &\stackrel{(b)}\le \!V\! \max_{i\in \ccalV} \mathbb{E}\!\bigg[\!4\Big\|\ell_i'(f_{i,t}(\bbx_{i,t}),y_{i,t})\kappa(\bbx_{i,t},\cdot)\Big\|^2\!\!\nonumber\\ &+\!4\Big\|\!\!\sum_{j\in n_i}\!\mu_{ij}h'_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t})) \kappa(\bbx_{i,t},\cdot)\Big\|_{\ccalH}^2\!\bigg]\!\!+\!\!2V\!\lambda^2\! \cdot\! R_{\ccalB}^2 \end{align} In $(a)$ and $(b)$, we have used the fact that $\|a+b\|_{\ccalH}^2\le 2\cdot(\|a\|_{\ccalH}^2+\|b\|_{\ccalH}^2)$ for any $a, b \in \ccalH$, i.e., the sum of squares inequality. In $(b)$, we have also used Assumption \ref{as:fourth} to upper bound $\|f_{i,t}\|_{\ccalH}^2$ by $R_\ccalB^2$. Using Cauchy-Schwartz inequality, the first term on the right-hand side of \eqref{eq:delta_f_bound1} can be written as $\Big\|\ell_i'(f_{i,t}(\bbx_{i,t}),y_{i,t})\kappa(\bbx_{i,t},\cdot)\Big\|^2\le \Big\|\ell_i'(f_{i,t}(\bbx_{i,t}),y_{i,t})\Big\|^2 \Big\|\kappa(\bbx_{i,t},\cdot)\Big\|^2$. Then using Assumptions \ref{as:first} and \ref{as:second}, we bound $\Big\|\ell_i'(f_{i,t}(\bbx_{i,t}),y_{i,t})\Big\|^2 $ by $C^2$ and $\Big\|\kappa(\bbx_{i,t},\cdot)\Big\|^2$ by $X^2$. Similarly we use Cauchy-Schwartz inequality for the second term in \eqref{eq:delta_f_bound1} and bound $\Big\|\kappa(\bbx_{i,t},\cdot)\Big\|^2$ by $X^2$. Now using these, \eqref{eq:delta_f_bound1} can be written as, \begin{align}\label{eq:lemma1_final_temp1} &\mathbb{E}[\| \nabla_f\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}]\le 4V X^2 C^2\nonumber\\ &\! \!+\! 4V \!X^2 \Big\|\sum_{j\in n_i}\mu_{ij}h'_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t})) \Big\|_{\ccalH}^2\!\!+2V\lambda^2\! \cdot\! 
R_{\ccalB}^2 \end{align} Using Assumption \ref{as:third}, we bound $h'_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))$ present in the second term on the right-hand side of \eqref{eq:lemma1_final_temp1} by $L_h$ and then taking the constant $L_h$ out of the summation, we get \begin{align}\label{eq:lemma1_final_temp2} \!\!\!\!\!\!\mathbb{E}[\| \nabla_f\hat{\ccalL}_t(f_t,\mu_t\!)\|^2_{\ccalH}]\!\le\! 4V \!X^2\! C^2 \!\!+\!\! 4V \!X^2\!\! L_h^2 \Big\|\!\sum_{j=1}^{ |n_i|}\mu_{ij}\! \Big\|^2\!\!\!\!+\!\!2V\!\lambda^2\! \!\cdot\! R_{\ccalB}^2. \end{align} Here, $|n_i|$ is used to denote the number of neighborhood nodes of agent $i$. Then we have used the fact $\|\sum_{j=1}^{ |n_i|}\mu_{ij} \|^2\le {|n_i|} \sum_{j=1}^{ |n_i|}|\mu_{ij}|^2$ and got \begin{align}\label{eq:lemma1_final_temp3} &\mathbb{E}[\| \nabla_f\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}]\le 4V X^2 C^2 \nonumber\\ &+ 4V X^2 L_h^2 {|n_i|} \sum_{j=1}^{ |n_i|}|\mu_{ij}|^2 +2V\lambda^2 \cdot R_{\ccalB}^2 \end{align} Next we upper bound ${ |n_i|}$ and $\sum_{j=1}^{ |n_i|}|\mu_{ij}|^2$ by $M$ and $\|\bbmu\|^2$ and write \eqref{eq:lemma1_final_temp3} as \begin{align}\label{eq:lemma1_final} \!\!\!\!\!\!\mathbb{E}[\| \nabla_f\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}]\!\le\! 4V\! X^2 C^2\!\! +\! 4V\! X^2 L_h^2 M \|\bbmu\|^2\!\!+\!\!2V\!\lambda^2 \!\!\cdot \!R_{\ccalB}^2 \end{align} Thus \eqref{eq:lemma1_final} which establishes an the upper bound on $\mathbb{E}[\| \nabla_f\hat{L}_t(f_t,\mu_t)\|^2_{\ccalH}]$ is valid. 
With this in hand, we now shift focus to deriving a similar upper estimate on the magnitude of the dual stochastic gradient of the Lagrangian $\mathbb{E}\Big[\| \nabla_{\mu}\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}\Big]$ as \begin{align}\label{eq:delta_mu_temp1} &\mathbb{E}\Big[\| \nabla_{\mu}\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}\Big]\nonumber\\ &=\mathbb{E}\Big\|vec(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))-\gamma_{ij}-\delta\eta\mu_{ij})\Big\|_{\ccalH}^2\nonumber\\ &\le M \max_{(i,j)\in\ccalE}\mathbb{E}\Big\|h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))-\gamma_{ij}-\delta\eta\mu_{ij}\Big\|_{\ccalH}^2\nonumber\\ &\le \!\!M\!\! \max_{(i,j)\in\ccalE}\mathbb{E}\Big\|h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))-\delta\eta\mu_{ij}\Big\|_{\ccalH}^2 \end{align} In the first equality, we write the concatenated version of the dual stochastic gradient associated with each agent; the second inequality is obtained by considering the worst-case bound over the edges. In the third inequality, we use the fact that $|a-b-c|^2 \le |a-c|^2$ for the scalar quantities involved here. Next, we apply the fact $\|a+b\|_{\ccalH}^2\le 2\cdot(\|a\|_{\ccalH}^2+\|b\|_{\ccalH}^2)$ for any $a, b \in \ccalH$ and get \begin{align}\label{eq:delta_mu_temp2} \!\!\mathbb{E}\Big[\| \nabla_{\mu}\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}\Big]&\!\le \!M\!\bigg(\!2\mathbb{E}\Big\|h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))\Big\|^2_{\ccalH}\nonumber\\ &+ 2\delta^2\eta^2|\mu_{ij}|^2\bigg) \end{align} Then we bound the first term in \eqref{eq:delta_mu_temp2} using Assumption \ref{as:third}, and upper bound the second term by $\|\bbmu\|^2$: \begin{align}\label{eq:delta_mu_temp3} \!\!\!\!\!\!\!\!\!\!\mathbb{E}\Big[\!\| \nabla_{\mu}\hat{\ccalL}_t(\!f_t,\mu_t\!)\|^2_{\ccalH}\!\Big]\!\!\le\! \!M\Big(\!\!2\big(\!\!K_1\!\!+\!\!L_h^2\mathbb{E}(|f_{i,t}(\bbx_{i,t})|^2)\big)\!\!+\!\! 
2\delta^2\eta^2\|\bbmu\|^2 \!\Big) \end{align} Next, we use $|f_{i,t}(\bbx_{i,t})|^2=|\langle f_{i,t},\kappa(\bbx_{i,t},\cdot)\rangle_\ccalH|^2\le \|f_{i,t}\|_\ccalH^2\cdot\|\kappa(\bbx_{i,t},\cdot)\|_\ccalH^2$ and then upper bound $\|f_{i,t}\|_\ccalH^2$ and $\|\kappa(\bbx_{i,t},\cdot)\|_\ccalH^2$ by $R_{\ccalB}^2$ and $X^2$, respectively, to obtain \begin{align}\label{eq:lemma1_2_final} \!\!\!\!\!\!\mathbb{E}\Big[\| \nabla_{\mu}\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}\Big]\!\!\le \!M\!\Big((2K_1\!\!+\!2L_h^2X^2\cdot R_{\ccalB}^2)\!+\! 2\delta^2\eta^2\|\bbmu\|^2\! \Big) \end{align} Thus \eqref{eq:lemma1_2_final} gives the desired bound on $\mathbb{E}\Big[\| \nabla_{\mu}\hat{\ccalL}_t(f_t,\mu_t)\|^2_{\ccalH}\Big]$. \end{proof} \end{comment} \section{Statement and Proof of Lemma \ref{lemma:diff_of_grad}} \label{app:bound_grad_diff_func_proj_func} The following lemma bounds the difference between the projected and un-projected stochastic functional gradients. \vspace{-1mm} \begin{lemma}\label{lemma:diff_of_grad} The difference between the stochastic functional gradient defined by ${\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})$ and the projected stochastic functional gradient $\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})$ is bounded as \begin{align}\label{eq:diff_of_grad} \|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}\le \frac{\sqrt{V}\eps}{\eta} \end{align} for all $t>0$. Here, $\eta>0$ is the algorithm step-size and $\eps>0$ is the error tolerance parameter of the KOMP. 
\end{lemma} \begin{proof} Consider the squared Hilbert-norm difference on the left-hand side of \eqref{eq:diff_of_grad}: \begin{subequations} \begin{align} \|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2 &=\frac{1}{\eta^2}\|\eta{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\eta\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2\label{eq:proof_diff1}\\ &=\frac{1}{\eta^2}\|\eta{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})+\bbf_{t+1}-\bbf_t\|^2\label{eq:proof_diff2}. \end{align} In \eqref{eq:proof_diff2}, we used \eqref{eq:projected_func_update} for the second term on the right-hand side of \eqref{eq:proof_diff1}. We re-arrange the terms in \eqref{eq:proof_diff2} so that $\bbf_t-\eta\nabla_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})$ appears, which can easily be identified as $\tilde{\bbf}_{t+1}$ given in \eqref{eq:stacked_sgd_tilde}, and obtain \begin{align} \|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2 &=\frac{1}{\eta^2}\|\bbf_{t+1}-\big(\bbf_t-\eta{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\big)\|^2 \nonumber\\ &=\frac{1}{\eta^2}\|\bbf_{t+1}-\tilde{\bbf}_{t+1}\|^2\label{eq:proof_diff3}\\ &=\frac{1}{\eta^2}\sum_{i=1}^V\|f_{i,t+1}-\tilde{f}_{i,t+1}\|^2\le\frac{1}{\eta^2}V\eps^2\label{eq:proof_diff4}. \end{align} \end{subequations} In \eqref{eq:proof_diff3} we used the stacked version of $\tilde{f}_{i,t+1}$ to substitute $\tilde{\bbf}_{t+1}$ in place of $\bbf_t-\eta{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})$. In \eqref{eq:proof_diff4} we used the error tolerance parameter of the KOMP update. Then taking the square root of \eqref{eq:proof_diff4} gives the inequality stated in \eqref{eq:diff_of_grad} and concludes the proof. 
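The scaling in the lemma can be illustrated numerically: if every per-agent KOMP projection error has norm at most $\eps$, the stacked difference divided by $\eta$ is at most $\sqrt{V}\eps/\eta$. The snippet below is an illustrative check with assumed toy dimensions, not part of the proof.

```python
import numpy as np

# Illustration of the lemma: if each agent's KOMP projection error satisfies
# ||f_{i,t+1} - tilde f_{i,t+1}|| <= eps, the stacked gradient difference
# ||grad - tilde grad|| = (1/eta) ||f_{t+1} - tilde f_{t+1}|| <= sqrt(V) eps / eta.
rng = np.random.default_rng(1)
V, d, eps, eta = 4, 6, 0.05, 0.1        # toy dimensions (assumed for illustration)

f_tilde = rng.standard_normal((V, d))   # un-projected per-agent iterates
err = rng.standard_normal((V, d))
err *= eps / np.linalg.norm(err, axis=1, keepdims=True)  # per-agent error of norm exactly eps
f_proj = f_tilde + err                  # projected iterates

grad_diff = np.linalg.norm(f_proj - f_tilde) / eta       # stacked norm difference over eta
assert grad_diff <= np.sqrt(V) * eps / eta + 1e-12
print(grad_diff, np.sqrt(V) * eps / eta)
```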
\end{proof} \section{Definition and Proof of Lemma \ref{lemma:inst_lagrang_diff}} \label{app:bound_inst_lag_diff} Next, Lemma \ref{lemma:inst_lagrang_diff} characterizes the instantaneous Lagrangian difference $\hat{\ccalL}_{t}(\bbf_t,\bbmu)-\hat{\ccalL}_{t}(\bbf,\bbmu_t)$. \begin{lemma}\label{lemma:inst_lagrang_diff} Under Assumptions \ref{as:first}-\ref{as:fifth} and the primal and dual updates generated by Algorithm \ref{alg:soldd}, the instantaneous Lagrangian difference satisfies the following decrement property \begin{align}\label{eq:inst_lagrang_diff} &\hat{\ccalL}_t(\bbf_t,\bbmu)-\hat{\ccalL}_t(\bbf,\bbmu_t)\nonumber\\ &\leq \frac{1}{2\eta}\!\big(\|\bbf_t\!-\!\bbf\|_{\ccalH}^2-\|\bbf_{t+1}\!\!-\!\!\bbf\|_{\ccalH}^2+\|\bbmu_{t}\!-\!\bbmu\|^2-\|\bbmu_{t+1}\!-\!\bbmu\|^2\big)\nonumber\\ &\quad+\! \frac{\eta}{2}\big( 2\|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2+\|\nabla_{\bbmu}\hat{\ccalL}_{t}(\bbf_t,\bbmu_t)\|^2\big)+\frac{\sqrt{V}\eps}{\eta}\|\bbf_t-\bbf\|_{\ccalH}+\frac{V\eps^2}{\eta}. 
\end{align} \end{lemma} \begin{proof} Considering the squared Hilbert norm of the difference between the iterate $\bbf_{t+1}$ and any feasible point $\bbf$, with each individual $f_i$ in the ball $\ccalB$, and expanding it using \eqref{eq:projected_func_update}, we get \begin{align}\label{eq:diff_ft+1_f_1} \|\bbf_{t+1}-\bbf\|_{\ccalH}^2=\|\bbf_t-\eta \tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\bbf\|_{\ccalH}^2 &=\!\|\bbf_t\!-\!\bbf\|_{\ccalH}^2-2\eta\langle \bbf_t-\bbf,\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\rangle +\eta^2\|\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2\nonumber\\ &=\|\bbf_t-\bbf\|_{\ccalH}^2+2\eta\langle \bbf_t-\bbf,{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\rangle\nonumber\\ \!\!&\quad\!-2\eta\langle \bbf_t-\bbf,{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\rangle+\eta^2\|\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2 \end{align} where we have added and subtracted $2\eta\langle \bbf_t-\bbf,{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\rangle$ and gathered like terms on the right-hand side. Now to handle the second term on the right-hand side of \eqref{eq:diff_ft+1_f_1}, we use the Cauchy-Schwartz inequality along with Lemma \ref{lemma:diff_of_grad} to replace the directional error associated with sparse projections with the functional difference defined by the KOMP stopping criterion: \begin{align}\label{diff_ft+1_f_2} &\langle \bbf_t-\bbf,{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\rangle\le \|\bbf_t-\bbf\|_{\ccalH}\|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t}\!)-\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t}\!)\|_{\ccalH}\le\frac{\sqrt{V}\eps}{\eta}\|\bbf_t-\bbf\|_{\ccalH}. 
\end{align} Now to bound the norm of $\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})$, the last term on the right-hand side of \eqref{eq:diff_ft+1_f_1}, we add and subtract ${\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})$, then use the identity $\|a+b\|_{\ccalH}^2\le 2(\|a\|_{\ccalH}^2+\|b\|_{\ccalH}^2)$, and further use Lemma \ref{lemma:diff_of_grad} to finally get \begin{align}\label{diff_ft+1_f_3} \|\tilde{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2\le 2\frac{V\eps^2}{\eta^2}+2\|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2. \end{align} Now we substitute the expressions in \eqref{diff_ft+1_f_2} and \eqref{diff_ft+1_f_3} in for the second and fourth terms in \eqref{eq:diff_ft+1_f_1}, which allows us to write \begin{align} \|\bbf_{t+1}-\bbf\|_{\ccalH}^2\le\|\bbf_t-\bbf\|_{\ccalH}^2+2\sqrt{V}\eps\|\bbf_t\!-\!\bbf\|_{\ccalH}-2\eta\langle \bbf_t-\bbf,{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\rangle\!+\!2V\eps^2\!+\!2\eta^2\|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2. \end{align} By re-ordering the terms of the above equation, we get \begin{align}\label{diff_ft+1_f_4} \langle \bbf_t-\bbf,{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\rangle \!&\le\! \frac{1}{2\eta}\big(\|\bbf_t-\bbf\|_{\ccalH}^2-\|\bbf_{t+1}-\bbf\|_{\ccalH}^2\big)\!+\!\frac{\sqrt{V}\eps}{\eta}\|\bbf_t-\bbf\|_{\ccalH}+\frac{V\eps^2}{\eta}+\eta\|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2. \end{align} Since the instantaneous Lagrangian $\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})$ is convex with respect to $\bbf_t$, the first-order convexity condition allows us to write \begin{equation}\label{eq:1storderconv} \!\! \hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\hat{\ccalL}_{t}(\bbf,\bbmu_{t})\le \langle \bbf_t-\bbf,{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\rangle. 
\end{equation} Next we use \eqref{eq:1storderconv} in \eqref{diff_ft+1_f_4} and get \begin{align}\label{diff_ft+1_f_final} &\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\hat{\ccalL}_{t}(\bbf,\bbmu_{t})\le\! \frac{1}{2\eta}\big(\|\bbf_t-\bbf\|_{\ccalH}^2\!-\!\|\bbf_{t+1}-\bbf\|_{\ccalH}^2\big)\!+\!\frac{\sqrt{V}\eps}{\eta}\|\bbf_t-\bbf\|_{\ccalH}+\frac{V\eps^2}{\eta}+\eta\|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})\|_{\ccalH}^2. \end{align} Similarly, we consider the squared difference of the dual variable update $\bbmu_{t+1}$ in \eqref{eq:dualupdate} and an arbitrary dual variable $\bbmu$, \begin{align}\label{eq:mu_t+1_mu1} \|\bbmu_{t+1}-\bbmu\|^2&=\|[\bbmu_t+\eta\nabla_{\bbmu}\hat{\ccalL}_{t}(\bbf_t,\mathbf{\bbmu}_t)]_{+}-\bbmu\|^2\le \|\bbmu_t+\eta\nabla_{\bbmu}\hat{\ccalL}_{t}(\bbf_t,\mathbf{\bbmu}_t)-\bbmu\|^2. \end{align} The above inequality in \eqref{eq:mu_t+1_mu1} comes from the non-expansiveness of the projection operator $[\cdot]_+$. Next we expand the square on the right-hand side of \eqref{eq:mu_t+1_mu1} and get \begin{align}\label{eq:mu_t+1_mu2} \!\! \|\bbmu_{t+1}-\bbmu\|^2&\le \|\bbmu_{t}-\bbmu\|^2 + 2\eta\nabla_{\bbmu}\hat{\ccalL}_{t}(\bbf_t,\bbmu_t)^T(\bbmu_t-\bbmu)+\eta^2\|\nabla_{\bbmu}\hat{\ccalL}_{t}(\bbf_t,\bbmu_t)\|^2. \end{align} We re-arrange the terms in the above expression and get \begin{align}\label{eq:mu_t+1_mu3} \!\! \nabla_{\bbmu}\hat{\ccalL}_{t}(\bbf_t,\mathbf{\bbmu}_t)^T(\bbmu_t-\bbmu) &\!\!\geq \frac{1}{2\eta}\big(\|\bbmu_{t+1}\!-\!\bbmu\|^2\!-\!\|\bbmu_{t}\!-\!\bbmu\|^2\big) -\frac{\eta}{2}\|\nabla_{\bbmu}\hat{\ccalL}_{t}(\bbf_t,\mathbf{\bbmu}_t)\|^2. \end{align} Since the instantaneous Lagrangian $\hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})$ is concave with respect to the dual variable $\bbmu_t$, we have the first-order concavity condition \begin{align}\label{eq:mu_concave} \hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\hat{\ccalL}_{t}(\bbf_{t},\bbmu)\geq \nabla_{\bbmu}\hat{\ccalL}_{t}(\bbf_t,\bbmu_t)^T(\bbmu_t-\bbmu). 
\end{align} Next we use the left-hand side of the inequality \eqref{eq:mu_concave} in \eqref{eq:mu_t+1_mu3} and get the expression \vspace{-0.3cm} \begin{align}\label{eq:mu_t+1_mu4} \hat{\ccalL}_{t}(\bbf_{t},\bbmu_{t})-\hat{\ccalL}_{t}(\bbf_{t},\bbmu)&\geq \frac{1}{2\eta}\big(\|\bbmu_{t+1}-\bbmu\|^2-\|\bbmu_{t}-\bbmu\|^2\big) -\frac{\eta}{2}\|\nabla_{\bbmu}\hat{\ccalL}_{t}(\bbf_t,\mathbf{\bbmu}_t)\|^2. \end{align} We subtract \eqref{eq:mu_t+1_mu4} from \eqref{diff_ft+1_f_final} to obtain the final expression \begin{align} \hat{\ccalL}_{t}(\bbf_{t},\bbmu)-\hat{\ccalL}_{t}(\bbf,\bbmu_t)&\leq \frac{1}{2\eta}\big(\|\bbf_t\!-\!\bbf\|_{\ccalH}^2-\|\bbf_{t+1}-\bbf\|_{\ccalH}^2+\|\bbmu_{t}\!-\!\bbmu\|^2\!\!-\!\|\bbmu_{t+1}\!-\!\bbmu\|^2\big) +\frac{\eta}{2}\big( 2\|{\nabla}_{\bbf}\hat{\ccalL}_{t}(\!\bbf_{t},\bbmu_{t}\!)\|_{\ccalH}^2+\|\nabla_{\bbmu}\hat{\ccalL}_{t}(\!\bbf_t,\mathbf{\bbmu}_t\!)\|^2\big)\nonumber\\ &\quad+\frac{\sqrt{V}\eps}{\eta}\|\bbf_t-\bbf\|_{\ccalH}+\frac{V\eps^2}{\eta}. \end{align} \end{proof} \section{Definition and Proof of Lemma \ref{lemma:dist_subspace}}\label{lemma:dist_function} In this section, we present the proof of Theorem \ref{thm:bound_memory_order}, where we upper bound the growth of the dictionary. But before going into the proof of Theorem \ref{thm:bound_memory_order}, we present Lemma \ref{lemma:dist_subspace}, which defines the notion of the distance of a point from a subspace, which is subsequently used in the proof of Theorem \ref{thm:bound_memory_order}. Using Lemma \ref{lemma:dist_subspace}, we relate the stopping criterion of the compression procedure to a Hilbert subspace distance. 
\begin{lemma}\label{lemma:dist_subspace} Define the distance of an arbitrary feature vector $\bbx$ obtained by the feature transformation $\phi(\bbx)=\kappa(\bbx, \cdot)$ to the subspace of the Hilbert space spanned by a dictionary $\bbD$ of size $M$, i.e., $\ccalH_{\bbD}$, as \begin{align} \text{dist}(\kappa(\bbx, \cdot), \ccalH_{\bbD})=\min_{\bbf \in \ccalH_{\bbD}} \|\kappa(\bbx, \cdot)-\bbv^T\boldsymbol{\kappa}_{\bbD}(\cdot)\|_\ccalH. \end{align} This set distance simplifies to the following least-squares projection when the dictionary $\bbD \in \reals^{p\times M}$ is fixed \begin{align}\label{eq:dist1} \text{dist}(\kappa(\bbx, \cdot), \ccalH_{\bbD})=\|\kappa(\bbx, \cdot)- [\bbK_{\bbD,\bbD}^{-1}\boldsymbol{\kappa}_{\bbD}(\bbx)]^{T}\boldsymbol{\kappa}_{\bbD}(\cdot)\|_\ccalH. \end{align} \end{lemma} \begin{proof} The distance to the subspace $\ccalH_{\bbD}$ is defined as \begin{align}\label{eq:dist_eq1} \text{dist}(\kappa(\bbx, \cdot), \ccalH_{\bbD})= \min_{\bbf \in \ccalH_{\bbD}}\|\kappa(\bbx, \cdot)-\bbv^T\boldsymbol{\kappa}_{\bbD}(\cdot)\|_\ccalH=\min_{\bbv \in \reals^{M}} \|\kappa(\bbx, \cdot)-\bbv^T\boldsymbol{\kappa}_{\bbD}(\cdot)\|_\ccalH \end{align} where the second equality comes from the fact that, with $\bbD$ fixed, minimizing over $\bbf=\bbv^T\boldsymbol{\kappa}_{\bbD}(\cdot)$ reduces to minimizing over $\bbv$, the only remaining free parameter. Now we solve \eqref{eq:dist_eq1} and obtain the minimizer $\bbv^*=\bbK_{\bbD,\bbD}^{-1}\boldsymbol{\kappa}_{\bbD}(\bbx)$ in a manner similar to the logic that yields \eqref{eq:hatparam_update}. Now using $\bbv^*$ we obtain the required result given in \eqref{eq:dist1}, thereby concluding the proof. 
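The closed form \eqref{eq:dist1} can be checked numerically: for a fixed dictionary, the coefficient vector $\bbv^*=\bbK_{\bbD,\bbD}^{-1}\boldsymbol{\kappa}_{\bbD}(\bbx)$ attains the minimum of the squared Hilbert distance $\kappa(\bbx,\bbx)-2\bbv^T\boldsymbol{\kappa}_{\bbD}(\bbx)+\bbv^T\bbK_{\bbD,\bbD}\bbv$. The sketch below uses a Gaussian kernel with assumed toy dimensions and is illustrative only.

```python
import numpy as np

# Numerical check of the least-squares form of dist(kappa(x,.), H_D):
# v* = K_DD^{-1} kappa_D(x) minimizes
#   ||kappa(x,.) - v^T kappa_D(.)||_H^2 = kappa(x,x) - 2 v^T kappa_D(x) + v^T K_DD v.
rng = np.random.default_rng(2)
p, M = 3, 5                              # toy feature dimension and dictionary size
D = rng.standard_normal((p, M))          # dictionary columns are kernel centers
x = rng.standard_normal(p)

def kappa(u, v, sigma=1.0):              # Gaussian kernel (any PD kernel works)
    return np.exp(-np.linalg.norm(u - v) ** 2 / (2 * sigma ** 2))

K_DD = np.array([[kappa(D[:, i], D[:, j]) for j in range(M)] for i in range(M)])
k_x = np.array([kappa(x, D[:, j]) for j in range(M)])

def sq_dist(v):                          # squared Hilbert distance for coefficients v
    return kappa(x, x) - 2 * v @ k_x + v @ K_DD @ v

v_star = np.linalg.solve(K_DD, k_x)
# v* should beat any randomly sampled coefficient vector
for _ in range(100):
    assert sq_dist(v_star) <= sq_dist(rng.standard_normal(M)) + 1e-10
print("v* attains the minimum over all sampled v")
```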
\end{proof} \section{Proof of Corollary \ref{thm:iter_comp}}\label{proof_iter_comp} Considering the optimality gap bound in \eqref{eq:converproof9_1_temp} and using the fact that $\eps\leq {2}R_{\ccalB}$, we can write \eqref{eq:converproof9_1_temp} with step size $\eta=1/\sqrt{T}$ as \begin{align}\label{eq:iter_comp_1} \frac{1}{T}\sum_{t=1}^T\mbE\big[S(\bbf_t)-S(\bbf^*)\big]&\leq\frac{1}{2\eta \sqrt{T}}\|\bbf_\nu^\star\|_{\ccalH}^2+ {\frac{V\eps}{\eta}}{4}R_{\ccalB}+\frac{ K}{2\sqrt{T}} + \frac{{4}VR_{\ccalB}(C X+\lambda R_{\ccalB})}{\xi}\nu. \end{align} Now denoting $Q\coloneqq 4VR_{\ccalB}(CX+\lambda R_{\ccalB})/\xi$ and using $\nu=\zeta T^{-1/2} + \Lambda \alpha$, we write \eqref{eq:iter_comp_1} as \begin{align}\label{eq:iter_comp_2} \frac{1}{T}\sum_{t=1}^T\mbE\big[S(\bbf_t)-S(\bbf^*)\big]&\leq\frac{1}{2\eta \sqrt{T}}\|\bbf_\nu^\star\|_{\ccalH}^2+ {\frac{V\eps}{\eta}}{4}R_{\ccalB}+\frac{ K}{2\sqrt{T}} + Q (\zeta T^{-1/2} + \Lambda \alpha)\nonumber\\ & = \frac{1}{\sqrt{T}}\Big(\frac{\|\bbf_\nu^\star\|_{\ccalH}^2}{2}+\frac{K}{2}+ Q\zeta\Big) + \alpha(4VR_{\ccalB}+Q\Lambda). \end{align} Now for the optimality gap in \eqref{eq:iter_comp_2} to be less than $\varepsilon$, it suffices that both terms on the right-hand side of the inequality \eqref{eq:iter_comp_2} are bounded by $\varepsilon/2$, i.e., \begin{align} \alpha(4VR_{\ccalB}+Q\Lambda)&\leq \varepsilon/2\label{eq:iter_comp_3}\\ \frac{1}{\sqrt{T}}\Big(\frac{\|\bbf_\nu^\star\|_{\ccalH}^2}{2}+\frac{K}{2}+ Q\zeta\Big) &\leq \varepsilon/2.\label{eq:iter_comp_4} \end{align} Thus, from \eqref{eq:iter_comp_3} and \eqref{eq:iter_comp_4} we can deduce that to achieve an optimality gap less than $\varepsilon$ we require $\ccalO(1/\varepsilon^2)$ iterations with $\alpha$ satisfying \eqref{eq:iter_comp_3}. 
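The $\ccalO(1/\varepsilon^2)$ iteration count follows from simple arithmetic: the $T$-dependent part of the gap decays as $A/\sqrt{T}$, so $T\geq (2A/\varepsilon)^2$ suffices for \eqref{eq:iter_comp_4}. The snippet below illustrates this with an assumed placeholder value for the constant $A$.

```python
import math

# Arithmetic behind the O(1/eps^2) iteration complexity: with step size
# eta = 1/sqrt(T), the T-dependent part of the gap decays as A / sqrt(T),
# so A / sqrt(T) <= eps/2 holds once T >= (2A/eps)^2.
# A stands for ||f||^2/2 + K/2 + Q*zeta; the value below is an assumed placeholder.
A = 10.0

def min_iterations(eps):
    """Smallest T guaranteeing A / sqrt(T) <= eps / 2."""
    return math.ceil((2 * A / eps) ** 2)

eps = 0.1
T = min_iterations(eps)
assert A / math.sqrt(T) <= eps / 2
# halving eps quadruples the required number of iterations
assert min_iterations(eps / 2) == 4 * min_iterations(eps)
print(T)
```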
Now using the bound of $\alpha$ from \eqref{eq:iter_comp_3} in \eqref{eq:agent_mo} of Theorem \ref{thm:bound_memory_order}, we get a model order complexity bound of $\ccalO(1/\varepsilon^{2p})$ required to achieve an optimality gap of $\varepsilon$. \begin{comment} \section{Proof of Theorem \ref{thm:order}} \label{app:proof_of_theorem} \begin{proof} The proof depends on the result of Lemma \ref{lemma:inst_lagrang_diff} defined in \eqref{eq:inst_lagrang_diff}. We expand the left-hand side of \eqref{eq:inst_lagrang_diff} using \eqref{eq:stochastic_approx} and use the definition stated in the theorem $S(f_t)\coloneqq \sum_{i\in\ccalV}\!\!\bigg[\ell_i(f_{i,t}\big(\bbx_{i,t}), y_{i,t}\big)\!+\!\frac{\lambda}{2}\|f_{i,t} \|^2_{\ccalH}\bigg]$ and obtain the following expression, \vspace{-3mm} \begin{align}\label{eq:converproof1} &\sum_{i\in\ccalV}\!\!\bigg[\ell_i(f_{i,t}\big(\bbx_{i,t}), y_{i,t}\big)\!+\!\frac{\lambda}{2}\|f_{i,t} \|^2_{\ccalH} \bigg]\nonumber \\ &\quad+\!\!\!\sum_{(i,j)\in\ccalE}\!\! \left\{\! \bigg[\mu_{ij}(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))-\gamma_{ij})\bigg]\! -\! \frac{\delta\eta}{2}\mu_{ij}^{2}\!\right\}\nonumber\\ &\quad-\!\!\sum_{i\in\ccalV}\!\!\bigg[\ell_i(f_{i}\big(\bbx_{i,t}), y_{i,t}\big)\!+\!\frac{\lambda}{2}\|f_{i} \|^2_{\ccalH} \bigg]\nonumber\\ &\quad+\!\!\sum_{(i,j)\in\ccalE}\!\! \left\{\!\bigg[\mu_{ij,t}(h_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))-\gamma_{ij})\bigg]\!\!-\!\frac{\delta\eta}{2}\mu_{ij,t}^{2}\!\!\right\}\nonumber\\ &\leq\!\! \frac{1}{2\eta}\!\bigg(\!\!\|f_t\!-\!f\|_{\ccalH}^2\!-\!\|f_{t+1}\!-\!f\|_{\ccalH}^2\!\!+\!\|\bbmu_{t}\!\!-\!\bbmu\|^2\!\!-\!\|\bbmu_{t+1}\!-\!\bbmu\|^2\!\!\bigg)\nonumber\\ &\quad+ \frac{\eta}{2}\bigg( 2\|{\nabla}_{f}\hat{\ccalL}_{t}(f_{t},\bbmu_{t})\|_{\ccalH}^2+\|\nabla_{\mu}\hat{\ccalL}_{t}(f_t,\mathbf{\bbmu}_t)\|^2\bigg)\nonumber\\ &\quad+\frac{\sqrt{V}\eps_t}{\eta}\|f_t-f\|_{\ccalH}+\frac{V\eps_t^2}{\eta}. 
\end{align} Next, we compute the expectation not only on the random pair $(\bbx,\bby)$ but also on the entire algorithm history, i.e., the sigma algebra $\ccalF_t$ that measures the algorithm history for times $u\leq t$, i.e., $\ccalF_t \supseteq \{\bbx_u, \bby_u, f_u, \bbmu_u\}_{u =0}^{t-1} $, on both sides of \eqref{eq:converproof1}, and we also substitute the bounds of $\|{\nabla}_{f}\hat{\ccalL}_{t}(f_{t},\bbmu_{t})\|_{\ccalH}^2$ and $\|\nabla_{\mu}\hat{\ccalL}_{t}(f_t,\mathbf{\bbmu}_t)\|^2$ given in \eqref{eq:lemma1_final} and \eqref{eq:lemma1_2_final} to obtain \vspace{-1mm} \begin{align}\label{eq:converproof2} &\mbE\Bigg[\!S(f_t)-S(f)\!\!+\!\!\!\!\sum_{(i,j)\in\ccalE}\!\! \bigg[\mu_{ij}\Big(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))-\gamma_{ij}\Big)\nonumber\\ &\quad\!-\mu_{ij,t}\Big(h_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))\!-\!\gamma_{ij}\Big)\bigg]\!-\! \frac{\delta\eta}{2}\|\mu\|^{2}\!+\!\frac{\delta\eta}{2}\|\mu_t\|^{2}\!\Bigg]\nonumber \end{align} \begin{align} &\leq\mbE\!\Bigg[\!\frac{1}{2\eta}\!\!\bigg(\!\!\|f_t-f\|_{\ccalH}^2\!-\!\|f_{t+1}\!-\!f\|_{\ccalH}^2\!+\!\|\bbmu_{t}\!\!-\!\bbmu\|^2\!\nonumber\\ &\quad-\!\|\bbmu_{t+1}\!\!-\!\bbmu\|^2\!\bigg)\!+\!\frac{\sqrt{V}\eps_t}{\eta}\|f_t-f\|_{\ccalH}\!+\!\frac{V\eps_t^2}{\eta}\Bigg]\nonumber\\ &\quad+\mbE\Bigg[\!\frac{\eta}{2}\bigg(\!\!2(4V \!X^2 C^2\! + \!4V X^2 L_h^2 M \|\bbmu_t\|^2\!+\!2V\lambda^2 \cdot R_{\ccalB}^2)\! \nonumber\\ &\quad+\! M\Big((2K_1\!+\!2L_h^2X^2\cdot R_{\ccalB}^2)\!+\! 2\delta^2\eta^2\|\bbmu_t\|^2 \!\Big)\!\bigg)\!\!\Bigg]. \end{align} The term $\|f_t-f\|_{\ccalH}$ on the right-hand side, multiplied by $\sqrt{V}\eps_t/\eta$ (i.e., the term outside the large parentheses), is bounded since each individual $f_{i,t}$ and $f_i$ for $i\in\{1,\dots,V\}$ lies in the ball $\ccalB$ and hence has Hilbert norm at most $R_{\ccalB}$. Thus $\|f_t-f\|_{\ccalH}$ can be upper bounded by $2\sqrt{V}R_{\ccalB}$. 
Next we define $K\coloneqq 8V X^2 C^2 +4V\lambda^2 \cdot R_{\ccalB}^2+2MK_1+2ML_h^2X^2\cdot R_{\ccalB}^2$. Now using the bound of $\|f_t-f\|_{\ccalH}$ and the definition of $K$, we write \eqref{eq:converproof2} as, \begin{align}\label{eq:converproof3} &\mbE\!\Bigg[\!S(f_t)\!-\!S(f)\!+\!\!\!\sum_{(i,j)\in\ccalE}\!\! \bigg[\!\mu_{ij}\!\Big(\!h_{ij}(f_{i,t}(\bbx_{i,t}),\!f_{j,t}(\bbx_{i,t}))\!-\!\gamma_{ij}\Big)\!\!\nonumber\\ &\quad-\!\mu_{ij,t}\Big(\!h_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))\!-\!\gamma_{ij}\!\Big)\!\bigg]\!\! -\! \frac{\delta\eta}{2}\!\|\mu\|^{2}\!\Bigg]\nonumber\\ &\leq\!\! \mbE\Bigg[\!\!\frac{1}{2\eta}\!\bigg(\!\|f_t\!-\!f\|_{\ccalH}^2\!-\!\|f_{t+1}\!-\!f\|_{\ccalH}^2\!+\!\|\bbmu_{t}\!-\!\bbmu\|^2\!\nonumber\\ &\quad-\!\|\bbmu_{t+1}\!-\!\bbmu\|^2\bigg)+\!\!{\frac{{2}V\eps_t}{\eta}}.R_{\ccalB}\!+\!\frac{V\eps_t^2}{\eta}\!\!\Bigg]\!\!\!\nonumber\\ &\quad+\!\!\mbE\!\Bigg[\frac{\eta}{2}\!\bigg(\!\!K\!\!+\!\!\Big(\!8V X^2 L_h^2 M\!\!+\! 2M\delta^2\eta^2\!\!-\!\delta\!\Big)\!\|\bbmu_t\|^2\!\!\bigg)\!\!\Bigg]. \end{align} Now, we select the constant parameter $\delta$ such that $8V X^2 L_h^2 M + 2M\delta^2\eta^2 -\delta\leq 0$, which then allows us to drop the term involving $\|\mu_t\|^2$ from the second expected term on the right-hand side of \eqref{eq:converproof3}. Further, we set the approximation budget $\eps_t=\eps$ and take the sum of the expression \eqref{eq:converproof3} over times $t=1,\dots,T$ and get \begin{align}\label{eq:converproof4} &\mbE\Bigg[\!\!\sum_{t=1}^T\!\!\big[S(f_t)\!-\!S(f)\big]\!\!+\!\!\!\sum_{t=1}^T\!\!\sum_{(i,j)\in\ccalE}\!\! 
\bigg[\mu_{ij}\Big(\!h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))\!-\!\gamma_{ij}\Big)\!\nonumber\\ &\quad-\!\mu_{ij,t}\Big(h_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))\!-\!\gamma_{ij}\!\Big)\!\bigg] - \frac{\delta\eta T}{2}\|\mu\|^{2}\Bigg]\nonumber\\ &\leq \!\mbE\!\Bigg[\!\!\frac{1}{2\eta}\bigg(\!\!\|f_1\!-\!f\|_{\ccalH}^2\!-\!\|f_{T+1}\!-\!f\|_{\ccalH}^2\!+\!\|\bbmu_{1}\!-\!\bbmu\|^2\!\nonumber\\ &\quad-\!\|\bbmu_{T+1}\!-\!\bbmu\|^2\!\!\bigg)\!\!+\! {\frac{{2}V\eps T}{\eta}}.R_{\ccalB}\!+\!\frac{V\eps^2 T}{\eta}\!+\!\frac{\eta K T}{2}\!\Bigg]. \end{align} Note that since the terms $\|f_{T+1}-f\|_{\ccalH}^2$ and $\|\bbmu_{T+1}-\bbmu\|^2$ present on the right-hand side of \eqref{eq:converproof4} are positive, we can drop them and finally get \begin{align}\label{eq:converproof5} \!\!\!\!&\mbE\Bigg[\!\!\sum_{t=1}^T\!\big[S(f_t)\!-\!S(f)\big] \!\!+\!\!\!\sum_{t=1}^T\!\!\sum_{(i,j)\in\ccalE}\!\! \bigg[\!\mu_{ij}\!\Big(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))\!-\!\gamma_{ij}\Big)\nonumber\\ &\quad-\mu_{ij,t}\Big(h_{ij}(f_{i}(\bbx_{i,t}),f_{j}(\bbx_{i,t}))-\gamma_{ij}\Big)\bigg] - \frac{\delta\eta T}{2}\|\mu\|^{2}\Bigg]\nonumber\\ \!\!\!&\leq\! \frac{1}{2\eta}\!\bigg(\!\!\|f_1\!-\!f\|_{\ccalH}^2\!+\!\|\bbmu_{1}\!-\!\bbmu\|^2\!\!\bigg)\!\!+\!\! {\frac{{2}V\eps T}{\eta}}.R_{\ccalB} \!+\!\!\frac{V\eps^2 T}{\eta}\!+\!\frac{\eta K T}{2}. \end{align} It can be observed from \eqref{eq:converproof5} that the right-hand side of this inequality is deterministic. We now take $f$ to be the solution $f^*$ of \eqref{eq:main_prob}, which in turn implies $f^*$ must satisfy the inequality constraint of \eqref{eq:main_prob}. This means that $f^*$ is a feasible point, such that $\sum_{t=1}^T\sum_{(i,j)\in\ccalE}\mu_{ij,t}\Big(h_{ij}(f^*_{i}(\bbx_{i,t}),f^*_{j}(\bbx_{i,t}))-\gamma_{ij}\Big)\leq 0$ holds. Thus we can simply drop this term from the left-hand side of the inequality \eqref{eq:converproof5}, since doing so only decreases the left-hand side. 
Now, assume the initialization $f_1=0\in\ccalH^V$ and $\bbmu_1=0\in\mbR_+^M$. Gathering terms containing $\|\bbmu\|^2$, we further obtain \begin{align}\label{eq:converproof6} \!\!\!\!\!\!&\mbE\Bigg[\!\!\sum_{t=1}^T\!\!\big[S(f_t)\!-\!S(f^*)\big]\!\!+\!\!\!\sum_{t=1}^T\!\!\sum_{(i,j)\in\ccalE}\!\! \bigg[\mu_{ij}\!\Big(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))\!-\!\gamma_{ij}\Big)\!\bigg] \nonumber\\ \!\!\!&-\! \Big(\frac{\delta\eta T}{2}\!+\!\frac{1}{2\eta}\Big)\|\mu\|^{2}\!\Bigg]\!\!\leq\! \frac{1}{2\eta}\|f^*\|_{\ccalH}^2\!\!+\!\! {\frac{V\eps T}{\eta}}\Big({2}R_{\ccalB}\!+\!\eps\Big)\!+\!\frac{\eta K T}{2}. \end{align} It can be observed from \eqref{eq:converproof6} that the first term present on the left side is the objective error collection over time, whereas the second term is the inner product of an arbitrary Lagrange multiplier $\bbmu$ with the time-aggregation of the constraint violation, and the third term denotes the norm square of $\bbmu$. Hence, we can maximize the left-hand side of \eqref{eq:converproof6} over $\bbmu$ to obtain the optimal Lagrange multiplier which controls the growth of the long-term constraint violation. Specifically, the function of $\bbmu$ has a maximizer $\bar{\bbmu}\in\mbR_+^M$. Thus for any edge $(i,j)$, the value of $\bar{\bbmu}_{ij}$ is determined by \begin{align}\label{optimal_muij} \!\!\!\!\!\!\bar{\bbmu}_{ij}\!=\!\mbE\Bigg[\!\frac{1}{2(\delta\eta T+1/ \eta)}\!\sum_{t=1}^T\!\bigg[\!\Big(\!h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))\!-\!\gamma_{ij}\!\Big)\!\bigg]_+\!\!\Bigg]. 
\end{align} Now, substituting $\bar{\bbmu}$ in place of $\bbmu$ in \eqref{eq:converproof6} we obtain \begin{align}\label{eq:converproof7} &\mbE\Bigg[\!\!\sum_{t=1}^T\!\!\big[S(f_t)\!-\!S(f^*)\big]\!\!+\!\!\!\!\!\sum_{(i,j)\in\ccalE}\!\!\!\!\!\frac{\Big[\sum_{t=1}^T\!\!\big(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))\!-\!\gamma_{ij}\big)\Big]_+^2}{2(\delta\eta T+1/ \eta)}\!\!\Bigg]\nonumber\\ &\leq \frac{1}{2\eta}\|f^*\|_{\ccalH}^2+ {\frac{V\eps T}{\eta}}\Big({2}R_{\ccalB}+\eps\Big)+\frac{\eta K T}{2}. \end{align} We consider step-size $\eta=1/\sqrt{T}$ and approximation budget $\eps=P\eta^2=P/T$, where $P>0$ is a fixed constant. Substituting these in \eqref{eq:converproof7} we get \begin{align}\label{eq:converproof8} &\mbE\Bigg[\!\!\sum_{t=1}^T\!\!\big[S(f_t)\!-\!S(f^*)\big]\!\!\!+\!\!\!\!\sum_{(i,j)\in\ccalE}\!\!\!\frac{\Big[\sum_{t=1}^T\!\!\big(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))\!-\!\gamma_{ij}\big)\Big]_+^2}{2\sqrt{T}(\delta+1)}\Bigg]\nonumber\\ &\leq \frac{\sqrt{T}}{2}\Big(\|f^*\|_{\ccalH}^2+ 4VPR_{\ccalB}+\frac{2VP^2}{T}+K\Big). \end{align} This expression serves as the basis from which we derive the convergence results for both the objective function and the feasibility of the proposed iterates. Considering first the objective error sequence $\mbE\big[S(f_t)-S(f^*)\big]$, we observe from \eqref{eq:converproof8} that the second term present on the left-hand side of the inequality can be dropped without affecting the inequality since it is positive. So we obtain \begin{align}\label{eq:converproof9} \!\!\!\!\!\sum_{t=1}^T\!\!\mbE\big[S(f_t)\!-\!S(f^*)\big] \!\!\leq \!\!\frac{\sqrt{T}}{2}\Big(\|f^*\|_{\ccalH}^2\!+\! 4VPR_{\ccalB}\!+\!\frac{2VP^2}{T}\!+\!K\Big). \end{align} Therefore, we can say that the right-hand side has the order $\ccalO(\sqrt{T})$ as stated in \eqref{eq:func_order} of Theorem \ref{thm:order}. Next, we establish the sublinear growth of the constraint violation in $T$. 
For this consider the objective error sequence, \begin{align}\label{eq:converproof10} \!\!\!S(f_t)\!-\!S(f^*)\!&=\!\!\mbE\!\sum_{i\in\ccalV}\!\!\big[\ell_i(f_{i,t}\big(\bbx_{i,t}), y_{i,t}\big)\!-\!\ell_i(f_i^*\big(\bbx_{i,t}), y_{i,t}\big)\!\big]\!\nonumber\\ &+\frac{\lambda}{2}\sum_{i\in\ccalV}\!\Big(\|f_{i,t} \|^2_{\ccalH}- \|f_i^* \|^2_{\ccalH}\Big). \end{align} Next, we bound the objective error sequence as \begin{align}\label{eq:converproof11} &|S(f_t)-S(f^*)|\nonumber\\ &\leq \!\mbE\!\sum_{i\in\ccalV}\!\!\big[|\ell_i(f_{i,t}\big(\bbx_{i,t}), y_{i,t}\big)\!-\!\ell_i(f_i^*\big(\bbx_{i,t}), y_{i,t}\big)|\big]\!\!\!+\!\!\!\frac{\lambda}{2}\!\sum_{i\in\ccalV}\!|\|f_{i,t} \|^2_{\ccalH}\!- \!\|f_i \|^2_{\ccalH}|\nonumber\\ \!\!&\!\leq \!\mbE\!\sum_{i\in\ccalV}\!\! C|f_{i,t}\big(\bbx_{i,t})\!-\!f_i^*\big(\bbx_{i,t})|\!+\!\frac{\lambda}{2}\!\sum_{i\in\ccalV}\!|\|f_{i,t} \|^2_{\ccalH}\!-\! \|f_i \|^2_{\ccalH}|, \end{align} where the first inequality follows from the triangle inequality and the second from the Lipschitz-continuity condition in Assumption \ref{as:second}. Further, using the reproducing property of $\kappa$ and the Cauchy-Schwartz inequality, we simplify $|f_{i,t}\big(\bbx_{i,t})-f_i^*\big(\bbx_{i,t})|$ in \eqref{eq:converproof11} as \begin{align}\label{eq:converproof12} &|f_{i,t}\big(\bbx_{i,t})-f_i^*\big(\bbx_{i,t})|=|\langle f_{i,t}-f_i^*,\kappa(\bbx_{i,t},\cdot)\rangle|\nonumber\\ &\leq \|f_{i,t}-f_i^*\|_{\ccalH}\cdot \|\kappa(\bbx_{i,t},\cdot)\|_{\ccalH}\leq {2}R_\ccalB X \end{align} where the last inequality comes from Assumptions \ref{as:first} and \ref{as:fourth}. 
Now, we consider the $|\|f_{t,i} \|^2_{\ccalH}- \|f_i^* \|^2_{\ccalH}|$ present in the right-hand side of \eqref{eq:converproof11}, \begin{align}\label{eq:converproof13} &|\|f_{t,i} \|^2_{\ccalH}- \|f_i^* \|^2_{\ccalH}|\leq \|f_{t,i}-f_i^*\|_{\ccalH}\cdot \|f_{t,i}+f_i^*\|_{\ccalH}\leq 4R_{\ccalB}^2. \end{align} Substituting \eqref{eq:converproof12} and \eqref{eq:converproof13} in \eqref{eq:converproof11}, we obtain \begin{align}\label{eq:converproof14} \!\!\!\!|S(f_t)-S(f^*)| \leq {2}VCR_\ccalB X\!+\!{2}V\lambda R_{\ccalB}^2 \!=\!{2}VR_{\ccalB}(C X\!\!+\!\!\lambda R_{\ccalB}) \end{align} Thus $S(f_t)-S(f^*)$ can be lower bounded as \begin{align}\label{eq:converproof15} S(f_t)-S(f^*)\geq -{2}VR_{\ccalB}(C X+\lambda R_{\ccalB}). \end{align} Substituting this lower bound in \eqref{eq:converproof8}, we get \begin{align}\label{eq:converproof16} \!\!&\mbE\Bigg[{2}TVR_{\ccalB}(C X+\!\lambda R_{\ccalB})\!\nonumber\\ &+\!\!\!\sum_{(i,j)\in\ccalE}\!\!\!\frac{\Big[\sum_{t=1}^T\big(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))-\gamma_{ij}\big)\Big]_+^2}{2\sqrt{T}(\delta+1)}\Bigg]\nonumber\\ &\leq \frac{\sqrt{T}}{2}\Big(\|f^*\|_{\ccalH}^2+ 4VPR_{\ccalB}+\frac{2VP^2}{T}+K\Big). 
\end{align} After re-arranging \eqref{eq:converproof16}, we get \begin{align}\label{eq:converproof17} &\mbE\Bigg[\sum_{(i,j)\in\ccalE}\!\!\!\frac{\Big[\sum_{t=1}^T\big(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))-\gamma_{ij}\big)\Big]_+^2}{2\sqrt{T}(\delta+1)}\Bigg]\nonumber\\ &\leq \frac{\sqrt{T}}{2}\Big(\|f^*\|_{\ccalH}^2+ 4VPR_{\ccalB}+\frac{2VP^2}{T}\!+\!K\Big)\nonumber\\ &+{2}TVR_{\ccalB}(C X+\lambda R_{\ccalB}) \end{align} Now we denote $K_2\coloneqq \|f^*\|_{\ccalH}^2+ 4VPR_{\ccalB}+\frac{2VP^2}{T}+K$ and $K_3 \coloneqq {2}VR_{\ccalB}(C X+\lambda R_{\ccalB})$ and write \eqref{eq:converproof17} as, \begin{align}\label{eq:converproof18} &\mbE\Bigg[\sum_{(i,j)\in\ccalE}\!\!\!\Big[\sum_{t=1}^T\big(h_{ij}(f_{i,t}(\bbx_{i,t}),f_{j,t}(\bbx_{i,t}))-\gamma_{ij}\big)\Big]_+^2\Bigg]\nonumber\\ &\leq 2\sqrt{T}(\delta\!+\!1) \bigg[\frac{\sqrt{T}}{2}K_2\!+\!TK_3\bigg] \!=\!2T^{1.5}(\delta\!+\!1) \bigg[\frac{K_2}{2\sqrt{T}}\!\!+\!\!K_3\bigg] \end{align} From \eqref{eq:converproof18} we can write, \begin{align}\label{eq:converproof19} \!\!\!\!\!\!\!\mbE\Bigg[\!\!\Big[\!\!\sum_{t=1}^T\!\!\big(\!h_{ij}(f_{i,t}(\bbx_{i,t}\!),f_{j,t}(\bbx_{i,t}\!))\!-\!\gamma_{ij}\!\big)\!\Big]_+^2\!\!\Bigg]\!\!\!\leq\! 2T^{1.5}\!(\delta\!+\!\!1\!)\! \bigg[\!\!\frac{K_2}{2\sqrt{T}}\!\!+\!\!K_3\!\bigg] \end{align} Taking the square root of both sides of \eqref{eq:converproof19} and summing over all edges, we get the desired result in \eqref{eq:constr_order}. 
\end{proof} \end{comment} \begin{comment} \section{Proof of Corollary \ref{coro:average}}\label{new_five} We consider the expression \eqref{eq:func_order} in Theorem \ref{thm:order}, divide it by $T$, and use the convexity of the expected objective $S(\bbf_t)$ (Jensen's inequality), i.e., \begin{align}\label{eq:avg_obj_corollary} \mbE[S(\bar{\bbf_t})]\le \mbE\bigg[\frac{1}{T}\sum_{t=1}^T S(\bbf_t)\bigg], \end{align} and similarly for the average of the expected constraint functions $\mbE \big[h_{ij}(f_i(\bbx_{i,t}),f_j(\bbx_{i,t}))\big]$, \begin{align}\label{eq:avg_constraint_corollary} \!\!\!\mbE \bigg[h_{ij}(\bar{f_i}(\bbx_{i,t}),\bar{f_j}(\bbx_{i,t}))\bigg] \le \mbE\bigg[\frac{1}{T}\sum_{t=1}^T h_{ij}({f_{i,t}}(\bbx_{i,t}),{f_{j,t}}(\bbx_{i,t}))\bigg]. \end{align} We now apply \eqref{eq:avg_obj_corollary} to the expression \eqref{eq:func_order} divided by $T$ to obtain \eqref{eq:func_order_corollary}. Similarly, we obtain \eqref{eq:constr_order_corollary} by applying \eqref{eq:avg_constraint_corollary} to the expression \eqref{eq:constr_order} divided by $T$. \end{comment}
TITLE: How do I prove inconsistency in FOL? QUESTION [1 upvotes]: So I have to prove that this set S is inconsistent: S = {{P(x),P(f(a)), ¬Q(z)}, {P(g(x’,x)),Q(x)},{¬P(y)}} I just have no idea where to start. The only time I learned about inconsistency is when I learned about Skolem form, but I have no idea how to prove it. Any help will be much appreciated! Thanks! REPLY [2 votes]: A set of statements is inconsistent if and only if it has no model, i.e. it is impossible for all statements to be true at the same time. This is equivalent to saying that the conjunction of all those sentences is a contradiction. It is also equivalent to saying that the contradiction is a logical consequence of that set of sentences. All of which means: To show a set is inconsistent using resolution: simply start with the sentences, put them into clauses, and derive the empty clause! Or: if the set is already a bunch of clauses: simply derive the empty clause! (That in turn shows that the original formula $F$ the clauses came from is a contradiction, i.e. that the singleton set $\{ F \}$ is inconsistent, which I assume is what is meant by $F$ being inconsistent.)
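To make the resolution recipe concrete, here is a minimal sketch in Python; the term representation, variable naming convention, and helper functions are all illustrative choices (not a standard library). It derives the empty clause from the given set S, with clause variables renamed apart.

```python
# Terms are strings ("a") or tuples ("f", arg, ...); variables start with "?".
def walk(t, s):
    # Chase a variable through the substitution s.
    while isinstance(t, str) and t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a.startswith("?"):
        return {**s, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def subst(t, s):
    t = walk(t, s)
    if isinstance(t, tuple):
        return (t[0],) + tuple(subst(x, s) for x in t[1:])
    return t

def resolve(c_pos, c_neg, atom_pos, atom_neg):
    # Resolve on a positive literal of c_pos and a negative literal of c_neg.
    s = unify(atom_pos, atom_neg, {})
    assert s is not None, "literals do not unify"
    rest = (c_pos - {(True, atom_pos)}) | (c_neg - {(False, atom_neg)})
    return frozenset((sign, subst(a, s)) for sign, a in rest)

# S = { {P(x), P(f(a)), ~Q(z)}, {P(g(x',x)), Q(x)}, {~P(y)} }, renamed apart
C1 = frozenset({(True, ("P", "?x")), (True, ("P", ("f", "a"))),
                (False, ("Q", "?z"))})
C2 = frozenset({(True, ("P", ("g", "?u", "?v"))), (True, ("Q", "?v"))})
C3 = frozenset({(False, ("P", "?y"))})

r1 = resolve(C2, C3, ("P", ("g", "?u", "?v")), ("P", "?y"))  # {Q(?v)}
r2 = resolve(r1, C1, ("Q", "?v"), ("Q", "?z"))               # {P(?x), P(f(a))}
r3 = resolve(r2, C3, ("P", "?x"), ("P", "?y"))               # {P(f(a))}
r4 = resolve(r3, C3, ("P", ("f", "a")), ("P", "?y"))         # empty clause
print(r4 == frozenset())  # True: S is inconsistent
```

The derivation mirrors the hand proof: ¬P(y) kills the P-literal of the second clause, the surviving Q-literal kills ¬Q(z) in the first clause, and two more resolutions with ¬P(y) empty the clause out.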
TITLE: What are the algebras for the ultrafilter monad on topological spaces? QUESTION [16 upvotes]: Motivation: Let $(X,\tau)$ be a topological space. Then the set $\beta X$ of ultrafilters on $X$ admits a natural topology (cf. Example 5.14 in Adámek and Sousa - D-ultrafilters and their monads), giving rise to a functor $\beta: \operatorname{Top} \to \operatorname{Top}$ which admits the structure of a monad. It turns out that the algebras for this monad, which I'll call "$\beta$-spaces", admit the following description (which one can alternatively take as a definition). Definition: A $\beta$-space consists of a topological space $(X,\tau)$ equipped with an additional topology $\tau^\xi$ on $X$ such that $(X, \tau^\xi)$ is compact Hausdorff; The topology $\tau^\xi$ refines the topology $\tau$; and For every $x \in X$ and every $\tau$-open neighborhood $U$ of $x$, there exists a $\tau$-open neighborhood $V$ of $x$ such that the $\tau^\xi$-closure of $V$ is contained in $U$. Notes: From (1) and (2) it follows that $(X,\tau)$ is compact. So if $(X,\tau)$ is additionally Hausdorff, then it admits a unique $\beta$-space structure, namely the one with $\tau^\xi = \tau$ (since continuous bijections of compact Hausdorff spaces are homeomorphisms). $(X,\tau)$ need not be Hausdorff—e.g., if $\tau$ is the indiscrete topology, then the topology $\tau^\xi$ can be an arbitrary compact Hausdorff topology. The compact Hausdorff topology $\tau^\xi$ traces back to Manes' theorem, which says that the algebras for the ultrafilter monad on $\operatorname{Set}$ rather than $\operatorname{Top}$ are precisely the compact Hausdorff spaces. Questions: Are there additional restrictions on the topology $(X,\tau)$ such that it admits a refinement $\tau^\xi$ satisfying (1), (2), (3) (i.e. constituting a $\beta$-space), beyond the fact, as noted, that $X$ must be compact? Do $\beta$-spaces already have some other name? 
Or at least, is condition (3) above, relating a topology $\tau$ to a refinement $\tau^\xi$, something which has a name? REPLY [2 votes]: We will derive some additional necessary conditions from the following Observation: Let $\tau$ be a topology on $X$ and $\tau'$ a topology refining $\tau$. Suppose that $(X,\tau')$ is compact. Then any $\tau'$-closed set is $\tau$-compact. Indeed, it is compact in $\tau'$ because it is a closed subset of a compact space, and so it is compact also in $\tau$ because the identity $\tau' \to \tau$ is continuous. Consequences: Let $(X,\tau)$ be a topological space admitting a $\beta$-structure $\tau^\xi$. Then: $(X,\tau)$ is compact (as noted in the question). $(X,\tau)$ is locally compact (in the sense that for every $x \in X$ there is a local base of compact neighborhoods). This follows from condition (3) on a $\beta$-space and the Observation. $(X,\tau)$ is "c-separated": For every disjoint $C,D \subseteq X$ which are either closed or singletons, there exist compact $K,L \subseteq X$ such that $C \cap K = \emptyset$, $D \cap L = \emptyset$, and $K \cup L = X$. This follows from the fact that $(X,\tau^\xi)$ is Hausdorff, regular, and normal and the Observation. $(X,\tau)$ is "c-completely separated": Let $C,D \subseteq X$ be disjoint and either closed or singletons. Then there exists a (not necessarily continuous) function $f: X \to [0,1]$ such that $f^{-1}(0) = C$, $f^{-1}(1) = D$, and $f^{-1}([a,b])$ is compact for every $a \leq b$. This follows from the fact that $(X,\tau^\xi)$ has the corresponding separation property and the Observation. Note also that if the collection of sets with compact complement forms a topology, then this topology is the unique $\beta$-structure on $(X,\tau)$. But this is not necessarily the case.
TITLE: Let $X$ be a connected metric space and $A, B \subset X$. Show that $d(x,A) = d(x,B)$ doesn't hold without the connectedness assumption. QUESTION [3 upvotes]: Let $X$ be a connected metric space and $A, B \subset X$ non-empty sets. (i) Show that there exists $x \in X$ such that $d(x,A) = d(x,B).$ (ii) Give an example that shows that (i) doesn't work without the connectedness. For (i) let $f(x)=d(x,A)-d(x,B)$. Now if $A\cap B \ne \emptyset$, then there is $x \in A\cap B$ for which $f(x) = d(x, A) - d(x, B) = 0 \implies d(x,A)=d(x,B)$. So assume that $A\cap B = \emptyset$. Then any $a \in A$ satisfies $d(a,A) = 0$, so $f(a)=d(a,A)-d(a,B)= -d(a,B) \le 0$. Similarly, any $b \in B$ satisfies $d(b,B) = 0$, so $f(b) =d(b,A) \ge 0$. Since $f$ is continuous and $X$ is connected, the intermediate value theorem gives an $x \in X$ for which $f(x) = 0$, i.e. $d(x,A) =d(x,B)$. Now I cannot find an example that would satisfy (ii); any tips on how I should approach this part of the problem? REPLY [2 votes]: Take, for instance, $X=\{-1,1\}$ (endowed with the usual distance), $A=\{-1\}$, and $B=\{1\}$.
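The counterexample from the reply can be checked mechanically; the tiny script below just evaluates $d(x,A)-d(x,B)$ at both points of the disconnected space.

```python
# X = {-1, 1} with the usual metric, A = {-1}, B = {1}:
# no point of X is equidistant from A and B.
def d(x, S):
    return min(abs(x - s) for s in S)

X, A, B = [-1, 1], [-1], [1]
gaps = {x: d(x, A) - d(x, B) for x in X}
print(gaps)  # {-1: -2, 1: 2} -- f = d(.,A) - d(.,B) never vanishes on X
```

The sign of the gap flips between the two points, but since $X$ is disconnected there is no intermediate point where it can vanish.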
TITLE: Combinatorial question about sets of rational numbers QUESTION [12 upvotes]: The following question came up in my research. Since lots of clever people post here, I thought I'd ask it. Recall that the group ring of a group $G$ is the abelian group $\mathbb{Z}[G]$ consisting of linear combinations of formal symbols $[g]$, where $g$ ranges over elements of $G$ (the abelian group $\mathbb{Z}[G]$ also has an obvious ring structure, but that's not important for this question). Consider the group ring $\mathbb{Z}[\mathbb{Q}]$ of the rational numbers $\mathbb{Q}$ (considered as an additive group). There is a natural projection $\pi : \mathbb{Z}[\mathbb{Q}] \rightarrow \mathbb{Z}[\mathbb{Q}/\mathbb{Z}]$. It has a large kernel; for instance this kernel contains $[n]-[0]$ for integers $n$ and things like $[3/2]-[1/2]$. There is also a natural involution $i : \mathbb{Z}[\mathbb{Q} \setminus \{0\}] \rightarrow \mathbb{Z}[\mathbb{Q} \setminus \{0\}]$ defined by $i([q]) = [1/q]$. Here by $\mathbb{Z}[\mathbb{Q} \setminus \{0\}]$ I just mean formal sums of $[q]$ where $q$ is a nonzero rational number. We have a natural inclusion $\mathbb{Z}[\mathbb{Q} \setminus \{0\}] \subset \mathbb{Z}[\mathbb{Q}]$. Question. What is $\text{ker}(\pi) \cap \text{ker}(\pi \circ i)$? It clearly contains things like $[1]-[-1]$, but I don't know if it contains any more "exotic" elements. REPLY [3 votes]: I believe that that intersection of kernels contains, for any integer $k\notin\{0,-1\}$, the element $[1] - [k] + [\frac k{k+1}] - [\frac{-1}{k+1}]$. I also found (just by messing around) the element $[\frac52] - [\frac57] + [\frac{-2}7] - [\frac23] + [\frac53] - [\frac{-5}2]$.
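One can machine-check membership in $\text{ker}(\pi) \cap \text{ker}(\pi \circ i)$ for the elements proposed in the reply. The sketch below (representation and helper names are ad hoc) encodes a formal sum as coefficient/rational pairs, applies $\pi$ by grouping coefficients of $q \bmod 1$, and applies $i$ by inverting each rational.

```python
from fractions import Fraction as F
from collections import defaultdict

def project(elem):
    """Apply pi: collect coefficients of q mod 1 (the image in Z[Q/Z])."""
    out = defaultdict(int)
    for coeff, q in elem:
        out[q % 1] += coeff
    return {q: c for q, c in out.items() if c != 0}

def invert(elem):
    """Apply i: [q] -> [1/q]."""
    return [(coeff, 1 / q) for coeff, q in elem]

# [1] - [k] + [k/(k+1)] - [-1/(k+1)] for several integers k not in {0, -1}
for k in (2, 3, 4, 5, -3):
    e = [(1, F(1)), (-1, F(k)), (1, F(k, k + 1)), (-1, F(-1, k + 1))]
    assert project(e) == {} and project(invert(e)) == {}

# the "exotic" element found by messing around
e = [(1, F(5, 2)), (-1, F(5, 7)), (1, F(-2, 7)),
     (-1, F(2, 3)), (1, F(5, 3)), (-1, F(-5, 2))]
print(project(e) == {} and project(invert(e)) == {})  # True
```

An empty projection means every coefficient cancels in $\mathbb{Z}[\mathbb{Q}/\mathbb{Z}]$, i.e. the element lies in the corresponding kernel.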
TITLE: Function is equal to its own derivative QUESTION [7 upvotes]: We all know that the derivative of $e^x$ is $e^x$. Is the exponential function the only function that has this property? If yes, how do we prove that there are no other functions? If no, what are the other functions? Help me please REPLY [12 votes]: You seek to solve the ODE $y'=y$ for arbitrary boundary conditions. This is separable and, wherever $y \neq 0$, yields $$\frac{y'}{y} = 1$$ Integration gives $$x+c = \ln|y(x)|$$ or $$y(x)=\pm e^{x+c} = \tilde c e^x$$ (the choice $\tilde c = 0$ recovers the trivial solution $y \equiv 0$). The uniqueness is guaranteed by Picard-Lindelöf.
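As a numerical cross-check of the answer (step count and tolerance are my own choices), a classical Runge-Kutta integration of $y'=y$ from $y(0)=1$ reproduces $e^x$:

```python
# Integrate y' = y with RK4 and compare against the closed form y(x) = e^x.
import math

def rk4(f, y0, x0, x1, n):
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

y1 = rk4(lambda x, y: y, 1.0, 0.0, 1.0, 1000)
print(abs(y1 - math.e) < 1e-9)  # True: the numerical solution tracks e^x
```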
\begin{definition}[Definition:Completely Additive Function] Let $\left({R, +, \times}\right)$ be a [[Definition:Ring (Abstract Algebra)|ring]]. Let $f: R \to R$ be a [[Definition:Mapping|mapping]] on $R$. Then $f$ is described as '''completely additive''' {{iff}}: :$\forall m, n \in R: f \left({m \times n}\right) = f \left({m}\right) + f \left({n}\right)$ That is, a '''completely additive function''' is one where the value of a [[Definition:Ring Product|product]] of two numbers equals the [[Definition:Ring Addition|sum]] of the value of each one individually. \end{definition}
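A classic concrete instance of this idea (illustrative, drawn from elementary number theory rather than a general ring): on the positive integers under multiplication, $\Omega(n)$, the number of prime factors counted with multiplicity, satisfies $\Omega(m n) = \Omega(m) + \Omega(n)$.

```python
# Omega(n): prime factors of n counted with multiplicity; completely
# additive in the sense that Omega(m * n) = Omega(m) + Omega(n).
def omega(n):
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count + (1 if n > 1 else 0)

assert omega(12) == 3                         # 12 = 2 * 2 * 3
assert omega(6 * 10) == omega(6) + omega(10)
print(omega(360))  # 6  (360 = 2^3 * 3^2 * 5)
```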
TITLE: Prove that $\lim_{x\to a}f(x) = L$ if and only if $\lim_{x\to a^-}f(x) = L$ and $\lim_{x\to a^+}f(x) = L$ QUESTION [0 upvotes]: First, I mentioned that $\lim_{x\to a^-}f(x) = L$ if there exists a $\delta > 0$ such that $a-x < \delta$ where $|f(a)-f(x)| < \epsilon$. And that $\lim_{x\to a^+}f(x) = L$ if there exists a $\delta > 0$ such that $x-a < \delta$ where $|f(a)-f(x)| < \epsilon$. Starting in the forward direction (If $\lim_{x\to a}f(x) = L$ then $\lim_{x\to a^-}f(x) = L$ and $\lim_{x\to a^+}f(x) = L$), I began to state that $\lim_{x\to a}f(x) = L$ means that $|f(x)-L| < \epsilon$ whenever $|x-a|<\delta$. I wanted to split up the inequality to $x-a<\delta$ or $a-x>\delta$. But after this I'm stuck. The splitting of the equality wouldn't work for $a^-$ and then I wouldn't know how to get the $|f(a)-L|<\epsilon$ part. Any help is appreciated, thank you in advance. REPLY [2 votes]: Quick note: your definitions of one-sided limits are not quite correct. The definition is $\lim_{x \to a^+}f(x) = L$ if for every $\epsilon > 0$ there exists a $\delta > 0$ such that $\vert f(x) - L \vert < \epsilon$ whenever $0 < x - a < \delta$, and respectively $0 < a - x < \delta$ for $\lim_{x \to a^-}f(x) = L$. Your definition for a two-sided limit is correct, provided you require $0 < \vert x - a \vert < \delta$. Take a close look at the definition of limit. Do you know how to prove the forward direction now? We're saying $\vert f(x) - L \vert < \epsilon$ holds when $0 < \vert x - a \vert < \delta$. Note that this is equivalent to $-\delta < x - a < \delta$ with $x \neq a$. Then that means $\vert f(x) - L \vert < \epsilon$ holds when $0 < x - a < \delta$ and when $0 < a - x < \delta$. So by definition...
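A quick numeric illustration of why both one-sided limits are needed (the example function is my choice, not from the question): the sign function $f(x)=x/|x|$ has left limit $-1$ and right limit $+1$ at $0$, so its two-sided limit there does not exist.

```python
# Approach 0 from the right and from the left along shrinking sequences.
def f(x):
    return x / abs(x)

right = [f(10.0 ** -k) for k in range(1, 8)]   # x -> 0+
left = [f(-10.0 ** -k) for k in range(1, 8)]   # x -> 0-
print(set(right), set(left))  # {1.0} {-1.0}: one-sided limits disagree
```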
TITLE: Is it OK to see time dilation and (relativistic) mass increase as phenomena that avoid $c$ being reached? And how about length contraction? QUESTION [0 upvotes]: I think I have been exposed for years to this line of reasoning: if $ v\to c $, then $ \Delta t \to \infty $. As $\displaystyle v=\frac{\Delta s}{\Delta t} $, it's like a natural reaction of some massive object approaching light speed in order to prevent $v=c$. Similarly, if $v \to c$, then $m \to \infty$. As $ F=ma$, accelerating the object needs more and more force, so that $c$ is ungraspable. Is this thinking correct, or is it simplistic or even worse? Is there, anyway, an analogous explanation of length contraction? REPLY [0 votes]: As far as I know, relativistic mass increase is a concept long abandoned as an interpretation (abandoned as long ago as by Einstein himself). Instead, the modern approach is to use relativistic (3-)momentum, in which the mass appears only as the rest mass, which is invariant under Lorentz transformations.
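For reference, the divergence the question appeals to is that of the Lorentz factor $\gamma = 1/\sqrt{1-v^2/c^2}$, which multiplies both the time-dilation formula and the old "relativistic mass" formula; a few lines make the blow-up near $v=c$ visible:

```python
# Lorentz factor as a function of beta = v / c.
import math

def gamma(beta):
    return 1.0 / math.sqrt(1.0 - beta * beta)

for beta in (0.5, 0.9, 0.99, 0.9999):
    print(beta, gamma(beta))
# gamma grows without bound as beta -> 1 (e.g. gamma(0.9999) is about 70.7),
# which is the formal content of "c cannot be reached".
```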
TITLE: Why are monotone functions Riemann integrable on a closed interval? QUESTION [0 upvotes]: Monotone functions are continuous except at countably many points. I thought that if a function is Riemann integrable, it has only a finite number of discontinuity points. So how are monotone functions always Riemann integrable on a closed interval? REPLY [14 votes]: Suppose $f$ is nondecreasing. For any partition $a = x_0 < x_1 \ldots < x_n = b$ of your interval $[a,b]$, any Riemann sum is between the left Riemann sum $L = \sum_{j=1}^n f(x_{j-1})(x_j - x_{j-1})$ and the right Riemann sum $R = \sum_{j=1}^n f(x_{j})(x_j - x_{j-1})$. The difference between them is at most $(f(b) - f(a)) \delta$ where $\delta = \max_j (x_j - x_{j-1})$. Since $\delta$ can be made arbitrarily small, the upper and lower sums converge to a common value, so $f$ is Riemann integrable.
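The bound $R - L \le (f(b)-f(a))\,\delta$ from the answer is easy to verify numerically; for a uniform partition the telescoping sum makes it an equality. The monotone test function with jumps below is my own choice, not from the answer.

```python
# For nondecreasing f on [a, b] and a uniform partition with mesh delta,
# the right and left Riemann sums differ by exactly (f(b) - f(a)) * delta.
import math

def f(x):                      # monotone, with jump discontinuities
    return math.floor(3 * x) + x

a, b, n = 0.0, 1.0, 1000
delta = (b - a) / n
xs = [a + j * delta for j in range(n + 1)]
L = sum(f(xs[j - 1]) * delta for j in range(1, n + 1))
R = sum(f(xs[j]) * delta for j in range(1, n + 1))
print(abs((R - L) - (f(b) - f(a)) * delta) < 1e-9)  # True
```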
TITLE: Prove that $1\cdot 1! + 2\cdot 2! +\dots+n\cdot n! = (n + 1)! - 1$ QUESTION [0 upvotes]: (whenever $n$ is a non-negative integer) I did the basic step $P(1)$ and found the statment $P(n+1)$ I now have $(n+1)! - 1 + (n+1)\cdot(n+1)!$ This should equal $(n+2)! - 1$, but how do I show that? REPLY [2 votes]: $$(n+1)!-1+(n+1)(n+1)!$$ $$=(1+(n+1))(n+1)!-1$$ $$=(n+2)(n+1)!-1$$ $$=(n+2)!-1$$ REPLY [1 votes]: For $n=k$ , let $$\sum_{j=1}^{k}j\times j\,!=(k+1)!-1$$ If $n=k+1$ we have $$\sum_{j=1}^{k+1}j\times j\,!=(k+1)\times(k+1)!+\sum_{j=1}^{k}j\times j\,!=(k+1)\times(k+1)!+(k+1)!-1\\ \qquad=(k+1)!(k+1+1)-1=(k+1)!(k+2)-1=(k+2)!-1$$
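Alongside the induction, the identity $\sum_{j=1}^n j\cdot j! = (n+1)! - 1$ can be spot-checked numerically:

```python
# Empirical check of 1*1! + 2*2! + ... + n*n! = (n+1)! - 1 for small n.
from math import factorial

for n in range(0, 15):
    lhs = sum(j * factorial(j) for j in range(1, n + 1))
    assert lhs == factorial(n + 1) - 1
print("identity holds for n = 0..14")
```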
\begin{document} \title{Sub-Nyquist Sampling for Power Spectrum Sensing in Cognitive Radios: A Unified Approach} \author{Deborah Cohen, \emph{Student IEEE} and Yonina C. Eldar, \emph{Fellow IEEE}} \maketitle \begin{abstract} In light of the ever-increasing demand for new spectral bands and the underutilization of those already allocated, the concept of Cognitive Radio (CR) has emerged. Opportunistic users could exploit temporarily vacant bands after detecting the absence of activity of their owners. One of the crucial tasks in the CR cycle is therefore spectrum sensing and detection which has to be precise and efficient. Yet, CRs typically deal with wideband signals whose Nyquist rates are very high. In this paper, we propose to reconstruct the power spectrum of such signals from sub-Nyquist samples, rather than the signal itself as done in previous work, in order to perform detection. We consider both sparse and non sparse signals as well as blind and non blind detection in the sparse case. For each one of those scenarii, we derive the minimal sampling rate allowing perfect reconstruction of the signal's power spectrum in a noise-free environment and provide power spectrum recovery techniques that achieve those rates. The analysis is performed for two different signal models considered in the literature, which we refer to as the analog and digital models, and shows that both lead to similar results. Simulations demonstrate power spectrum recovery at the minimal rate in noise-free settings and show the impact of several parameters on the detector performance, including signal-to-noise ratio (SNR), sensing time and sampling rate. \end{abstract} \IEEEpeerreviewmaketitle \section{Introduction} Spectral resources are traditionally allocated to licensed or primary users (PUs) by governmental organizations. Today, most of the spectrum is already owned and new users can hardly find free frequency bands. 
In light of the ever-increasing demand from new wireless communication users, this issue has become critical over the past few years. On the other hand, various studies \cite{Study1, Study2, study3} have shown that this over-crowded spectrum is usually significantly underutilized and can be described as the union of a small number of narrowband transmissions spread across a wide spectrum range. This is the motivation behind cognitive radio (CR), which would allow secondary users to opportunistically use the licensed spectrum when the corresponding PU is not active \cite{Mitola, Haykin}. Even though the concept of CR is said to have been introduced by Mitola \cite{Mitola, MitolaMag}, the idea of learning machines for spectrum sensing can be traced back to Shannon \cite{Shannon}. One of the crucial tasks in the CR cycle is spectrum sensing \cite{cog}. The CR has to constantly monitor the spectrum and detect the PU's activity in order to select unoccupied bands, before and throughout its transmission. At the receiver, the CR samples the signal and performs detection to assert which band is unoccupied and can be exploited for opportunistic transmissions. In order to minimize the interference that could be caused to PUs, the spectrum sensing task performed by a CR should be reliable and fast \cite{cognitive1, cognitive2, WidebandMishali}. On the other hand, in order to increase the chance to find an unoccupied spectral band, the CR has to sense a wide band of spectrum. Nyquist rates of wideband signals are high and can even exceed today's best analog-to-digital converters (ADCs) front-end bandwidths. Besides, such high sampling rates generate a large number of samples to process, affecting speed and power consumption. To overcome the rate bottleneck, several new sampling methods have recently been proposed \cite{Mishali_theory, Mishali_multicoset, MagazineMishali} that reduce the sampling rate in multiband settings below the Nyquist rate. 
In \cite{Mishali_theory, Mishali_multicoset, MagazineMishali}, the authors derive the minimal sampling rate allowing for perfect signal reconstruction in noise-free settings and provide sampling and recovery techniques. However, when the final goal is spectrum sensing and detection, reconstructing the original signal is unnecessary. Following the ideas in \cite{Leus, Leus2, Davies, Davies2}, we propose, in this paper, to reconstruct only the signal's power spectrum from sub-Nyquist samples, in order to perform signal detection. Several papers have considered power spectrum reconstruction from sub-Nyquist samples, treating two different signal models. The first, and seemingly the most popular, is a digital model based upon a linear relation between the sub-Nyquist and Nyquist samples obtained for a given sensing time frame. Ariananda et al. \cite{Leus, Leus2} have investigated this model in depth with multicoset sampling \cite{Mishali_multicoset, Bresler}. They consider both time and frequency domain approaches and discuss the reconstruction of the autocorrelation or power spectrum, respectively, from underdetermined and overdetermined systems. For the first case, they exploit sparsity properties of the signal and apply compressed sensing (CS) reconstruction techniques but do not analyze the sampling rate. The authors rather focus their analysis on the second case, namely the overdetermined system, and show that it can be solved without any sparsity assumption. They demonstrate that the so-called minimal sparse ruler patterns \cite{ruler} provide a sub-optimal solution for sub-Nyquist sampling, when using multicoset sampling. The second is an analog model that treats the class of wide-sense stationary multiband signals, whose frequency support lies within several continuous intervals (bands). Here, a linear relation between the Fourier transform of the sub-Nyquist samples and frequency slices of the original signal's spectrum is exploited. 
In \cite{Davies, Davies2}, the authors propose a method to estimate finite-resolution approximations to the true power spectrum exploiting multicoset sampling. That is, they estimate the average power within subbands rather than the power spectrum at each frequency. They consider overdetermined and underdetermined, or compressive, systems. In the latter case, CS techniques are used, which exploit the signal's sparsity, whereas the former setting does not assume any sparsity. In \cite{Davies}, the authors assume that the sampling pattern is such that the system they obtain has a unique solution, but no specific sampling pattern or rate satisfying this condition is discussed. In \cite{Davies2}, sampling patterns generated uniformly at random and the Golomb ruler are considered in simulations, but no analysis of the required rate is performed. Another recent paper \cite{wang} considers the analog model with multicoset sampling in the non-sparse setting. The authors derive necessary and sufficient conditions for perfect power spectrum reconstruction in noise-free settings. They show that any universal sampling pattern guarantees perfect recovery under those sufficient conditions. They further investigate two other sub-optimal patterns that lead to perfect reconstruction under lower sampling rates. In this paper, we aim at filling several gaps in the current literature. First, to the best of our knowledge, no comparison has been made between the two models and their respective results. Second, the general conditions required from the sampling matrix and the resulting minimal sampling rate for perfect power spectrum reconstruction in a noiseless environment have not been analyzed. In \cite{Leus, Leus2}, only multicoset sampling is considered and no universal minimal rate is provided. Rather, several compression ratios given by the sub-optimal solution of the minimal sparse ruler are shown to suffice. In \cite{Davies, Davies2}, no proof of the uniqueness of the solution is given. 
The authors in \cite{wang} provide necessary and sufficient conditions for perfect recovery, but only for the analog model in the non sparse setting. In this paper, we aim at providing a unifying framework for power spectrum reconstruction from sub-Nyquist samples by bridging between the two models. We thus consider the two different signal models: the analog or multiband model and the digital one that we relate to the multi-tone model in order to anchor it to the original analog signal. For the analog model, we focus on sampling schemes that operate on the bins of the signal's spectrum and provide samples that are linear transformations of these. Two examples of such schemes are the sampling methods proposed in \cite{Mishali_theory, Mishali_multicoset, MagazineMishali}, namely multicoset sampling and the Modulated Wideband Converter (MWC). For the digital model, we analyse a generic sampling scheme and provide two different reconstruction approaches. The first, considered for example in \cite{Leus, Leus2}, is performed in the time domain whereas the second is realized in the frequency domain. While the analysis of the conditions for perfect reconstruction turns out to be difficult in the time domain, we show that it is convenient in the frequency one. There, both the analog and the digital model lead to similar relations and can therefore be investigated jointly. It is interesting to notice that other applications based on sub-Nyquist sampling, such as radar \cite{radar}, use frequency domain analysis as well. We examine three different scenarii: (1) the signal is not assumed to be sparse, (2) the signal is assumed to be sparse and the carrier frequencies of the narrowband transmissions are known, (3) the signal is sparse but we do not assume carrier knowledge. The main contributions of this paper are twofold. 
First, for each one of the scenarii, we derive the minimal sampling rate for perfect power spectrum reconstruction with respect to our settings in a noise-free environment. We show that the rate required for power spectrum reconstruction is half the rate that allows for perfect signal reconstruction, for each one of the scenarii, namely the Nyquist rate, the Landau rate \cite{LandauCS} and twice the Landau rate \cite{Mishali_multicoset}. Second, we present reconstruction techniques that achieve those rates for both signal models. Throughout the paper, minimal sampling rate refers to the lowest rate enabling perfect reconstruction of the power spectrum in a noiseless environment for a general sampling scheme. We do not consider the minimal rate achievable for a specific design of the sampling system. For instance, in \cite{Leus, Leus2}, the authors show that designing the multicoset sampling matrix according to the minimal sparse ruler pattern results in a minimal rate below ours. Some other specific sampling patterns are considered in \cite{wang}. In contrast, we focus on generic systems without any particular structure. This paper is organized as follows. In Section \ref{ModelProb}, we present the stationary multiband and multi-tone models and formulate the problem. Section \ref{SecOpt} describes the sub-Nyquist sampling stage and ties the original signal's power spectrum to correlation between the samples. In Section \ref{sec:rate}, we derive the minimal sampling rate for each one of the three scenarii described above and present recovery techniques that achieve those rates. Numerical experiments are presented in Section \ref{sec:simulations}. We demonstrate power spectrum reconstruction from sub-Nyquist samples, show the impact of several practical parameters on the detection performance, and compare our detection results to Nyquist rate sampling and to spectrum based detection from sub-Nyquist samples \cite{Mishali_theory}. 
\section{System Models and Goal} \label{ModelProb} \subsection{Analog Model} \label{sec:model1} Let $x(t)$ be a real-valued continuous-time signal, supported on $\mathcal{F} = [-T_{\text{Nyq}}/2, +T_{\text{Nyq}}/2]$ and composed of up to $N_{\text{sig}}$ uncorrelated stationary transmissions, such that \begin{equation} x(t)=\sum_{i=1}^{N_{\text{sig}}} \rho_i s_i(t). \end{equation} Here $\rho_i \in \{0,1\}$ and $s_i(t)$ is a zero-mean wide-sense stationary signal. The value of $\rho_i$ determines whether or not the $i$th transmission is active. The bandwidth of each transmission is assumed to not exceed $2B$ (where we consider both positive and negative frequency bands). Formally, the Fourier transform of $x(t)$ defined by \begin{equation} X(f)=\int_{-\infty}^{\infty}x(t)e^{-j2\pi f t} \mathrm{d} t \end{equation} is zero for every $f \notin \mathcal{F}$. We denote by $f_{\text{Nyq}} = 1/T_{\text{Nyq}}$ the Nyquist rate of $x(t)$ and by $S_x$ the support of $X(f)$. The power spectrum of $x(t)$ is the Fourier transform of its autocorrelation, namely \begin{equation} \label{eq:spec} P_x(f)=\int_{-\infty}^{\infty}r_x(\tau)e^{-j2\pi f \tau} \mathrm{d} \tau, \end{equation} where $r_x(\tau) = \mathbb{E} \left[ x(t)x(t-\tau) \right]$ is the autocorrelation function of $x(t)$. From \cite{Papoulis}, it holds that \begin{equation} P_x(f)=\mathbb{E} \left| X(f) \right|^2. \end{equation} Thus, obviously, the support of $P_x(f)$ is identical to that of $X(f)$, namely $S_x$. Our goal is to reconstruct $P_x(f)$ from sub-Nyquist samples. In Section \ref{SecOpt}, we describe our sampling schemes and show how one can relate $P_x(f)$ to correlation of the samples. We consider three different scenarii. \subsubsection{No sparsity assumption} In the first scenario, we assume no \emph{a priori} knowledge on the signal and we do not suppose that $x(t)$ is sparse, namely $N_{\text{sig}}B$ can be on the order of $f_{\text{Nyq}}$. 
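The relation $P_x(f)=\mathbb{E}\left| X(f) \right|^2$ underlying the model can be illustrated numerically. The sketch below (all parameters are toy choices, not taken from this paper) averages periodograms of a random-phase sinusoid, which is wide-sense stationary, and checks that the estimated power concentrates at the sinusoid's frequency bin.

```python
# Average |DFT|^2 over many realizations of a random-phase sinusoid; the
# mean periodogram peaks at the sinusoid's bin (and its mirror image).
import cmath
import math
import random

random.seed(0)
N, k0, trials = 64, 7, 200          # DFT length, signal bin, realizations
avg = [0.0] * N
for _ in range(trials):
    phi = random.uniform(0, 2 * math.pi)
    x = [math.cos(2 * math.pi * k0 * n / N + phi) for n in range(N)]
    for k in range(N):              # direct O(N^2) DFT, stdlib only
        X = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
        avg[k] += abs(X) ** 2 / (N * trials)

peak = max(range(N), key=lambda k: avg[k])
print(peak in (k0, N - k0))  # True: power sits at the signal's frequency
```

The randomized phase is what makes the process stationary; with a fixed phase the time-averaged autocorrelation would depend on absolute time.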
\subsubsection{Sparsity assumption and non blind detection} Here, we assume that $x(t)$ is sparse, namely $N_{\text{sig}}B \ll f_{\text{Nyq}}$. We denote $K_f=2N_{\text{sig}}$. Moreover, the support of the potentially active transmissions is known and corresponds to the frequency support of licensed users defined by the communication standard. However, since the PUs' activity can vary over time, we wish to develop a detection algorithm that is independent of a specific known signal support. \subsubsection{Sparsity assumption and blind detection} In the last scenario as in the previous one, we assume that $x(t)$ is sparse, but we do not assume any \emph{a priori} knowledge on the carrier frequencies. Only the maximal number of transmissions $N_{\text{sig}}$ and the maximal bandwidth $2B$ are assumed to be known. \subsection{Digital Model} \label{sec:model2} The second model we consider is the multi-tone model. Let $x(t)$ be a continuous-time signal defined over the interval $[0,T)$ and composed of up to $N_{\text{sig}}$ transmissions, such that \begin{equation} x(t)=\sum_{i=1}^{N_{\text{sig}}} \rho_i s_i(t), \qquad t \in [0,T). \end{equation} Again, $\rho_i \in \{0,1\}$ and $s_i(t)$ is a wide-sense stationary signal. Since $x(t)$ is defined over $[0,T)$, it has a Fourier series representation \begin{equation} x(t) = \sum_{k = -Q/2}^{Q/2} c[k] e^{j \frac{2 \pi k}{T}t}, \qquad t \in [0,T), \label{xmodel} \end{equation} where $Q/(2T)$ is the maximal possible frequency in $x(t)$. Each transmission $s_i(t)$ has a finite number of Fourier coefficients, up to $2K_{max} \le Q+1$, so that \begin{equation} s_i(t) = \sum_{k \in \Omega_i} c[k] e^{j \frac{2 \pi k}{T}t}, \qquad t \in [0,T), \label{smodel} \end{equation} where $\Omega_i$ is a set of integers with $\left| \Omega_i \right| \le 2K_{\text{max}}$ and $\max_{k \in \{\Omega_i\}} |k|\le Q/2$. Thus, here the support $S_x$ of $x(t)$ is $S_x=\bigcup_{i=1}^{N_{\text{sig}}} \Omega_i$. 
For mathematical convenience, for this model we will consider the Nyquist samples of $x(t)$, namely \begin{equation} \label{xsamp} x[n]=x(n T_{\text{Nyq}}), \qquad 0 \le n < T/T_{\text{Nyq}}, \end{equation} where $T_{\text{Nyq}}=T/(Q+1)$. Since $x(t)$ is wide-sense stationary, it follows that $\mathbf{x}$ is wide-sense stationary as well. Let us define $N=T/T_{\text{Nyq}}= Q+1$. From (\ref{xmodel}), the autocorrelation of $\mathbf{x}$, namely $r_\mathbf{x}[\nu] = \mathbb{E} \left[ x[n]x[n-\nu] \right]$, has a Fourier representation \begin{equation} r_\mathbf{x}[\nu] = \sum_{k = -Q/2}^{Q/2} s_{\mathbf{x}}[k] e^{j \frac{2 \pi k}{N} \nu}, \qquad 0 \le \nu \le N-1, \label{eq:spec_autoco} \end{equation} where \begin{equation} s_{\mathbf{x}}[k]=\mathbb{E} \left[ c^2[k] \right] , \qquad -\frac{Q}{2} \le k \le \frac{Q}{2}. \label{eq:spec2} \end{equation} From the stationarity property of the signal, namely $r_\mathbf{x}[\nu]$ is a function of $\nu$ only, it holds that \begin{equation} \mathbb{E} \left[ c[k] c^*[l] \right] =0, \qquad -\frac{Q}{2} \le k \neq l \le \frac{Q}{2}. \label{eq:spec3} \end{equation} From (\ref{eq:spec2}), it is obvious that the Fourier coefficients of $r_x[\nu]$ lie in the same support as those of $x(t)$, namely $S_x$. Again, we consider three different scenarii. \subsubsection{No sparsity assumption} In the first scenario, we assume no \emph{a priori} knowledge on the signal and we do not suppose that $x(t)$ is sparse, namely $N_{\text{sig}}K_{\text{max}}$ can be on the order of $Q+1$. \subsubsection{Sparsity assumption and non blind detection} Here, we assume that $x(t)$ is sparse, namely $N_{\text{sig}}K_{\text{max}} \ll Q+1$ and that the Fourier frequencies in the Fourier series expansions of $s_i(t)$, namely $\Omega_i, 1 \leq i \leq N_{\text{sig}}$ are known. We denote $K_f=2N_{\text{sig}}K_{\text{max}}$. 
\subsubsection{Sparsity assumption and blind detection} In the last scenario, we assume that $x(t)$ is sparse but we do not assume any \emph{a priori} knowledge on the Fourier frequencies in the Fourier series expansions of $s_i(t)$. \subsection{Problem Formulation} In each one of the scenarii defined in the previous section, our goal is to assess which of the $N_{\text{sig}}$ transmissions are active from sub-Nyquist samples of $x(t)$. For each signal, we define the hypotheses $\mathcal{H}_{i,0}$ and $\mathcal{H}_{i,1}$, namely that the $i$th transmission is absent and active, respectively. In order to determine which of the $N_{\text{sig}}$ transmissions are active, we first reconstruct the power spectrum of $x(t)$ for the first model (\ref{eq:spec}), or the Fourier coefficients of the signal's sampled autocorrelation for the second one (\ref{eq:spec2}). In the first and third scenarii, we fully reconstruct the power spectrum. In the second one, we exploit our prior knowledge and reconstruct it only at the potentially occupied locations. We can then perform detection on the fully or partially reconstructed power spectrum. Note that, to do so, we do not sample $x(t)$ at its Nyquist rate, nor compute its Nyquist rate samples. For each one of the scenarii, we derive the minimal rate enabling perfect reconstruction of (\ref{eq:spec}) and (\ref{eq:spec2}) respectively, in a noise-free environment, and present recovery techniques that achieve those rates. By performing e.g. energy detection on the reconstructed power spectrum, we can detect unoccupied spectral bands, namely spectrum holes, from sub-Nyquist samples. This makes the detection process faster, more efficient and less power consuming, which fits the requirements of CRs. Other forms of detection are also possible, once the power spectrum is recovered. 
\section{Spectrum Reconstruction from sub-Nyquist Samples} \label{SecOpt} \subsection{Analog Model: Sampling and the Analog Spectrum} We begin with the analog model. For this model, we consider two different sampling schemes: multicoset sampling \cite{Mishali_multicoset} and the MWC \cite{Mishali_theory}, which were previously proposed for sparse multiband signals. We show that both schemes lead to identical expressions of the signal's power spectrum in terms of that of the samples. In this section, we consider reconstruction of the whole power spectrum. In Section \ref{rate2}, we show how we can reconstruct the power spectrum only at potentially occupied locations when we have \emph{a priori} knowledge on the carrier frequencies. \subsubsection{Multicoset sampling} \label{sec:multico} Multicoset sampling \cite{Bresler} can be described as the selection of certain samples from the uniform grid. More precisely, the uniform grid is divided into blocks of $N$ consecutive samples, from which only $M$ are kept. The $i$th sampling sequence is defined as \begin{equation} x_{c_i}[n]= \left\{ \begin{array}{ll} x(nT_{\text{Nyq}}), & n=mN+c_i, m \in \mathbb{Z} \\ 0, & \text{otherwise}, \end{array} \right. \end{equation} where $0 \le c_1 < c_2 < \dots < c_M \le N-1$. Let $f_s = \frac{1}{NT_{\text{Nyq}}} \ge B$ be the sampling rate of each channel and $\mathcal{F}_s=[-f_s/2, f_s/2]$. Following the derivations from multicoset sampling \cite{Mishali_multicoset}, we obtain \begin{equation} \mathbf{z}(f) = \mathbf{A} \mathbf{x}(f), \qquad f \in \mathcal{F}_s, \label{eq:multico} \end{equation} where $\mathbf{z}_i(f) = X_{c_i}(e^{j2\pi f T_{\text{Nyq}}})$, $1 \le i \le M$, are the discrete-time Fourier transforms (DTFTs) of the multicoset samples and \begin{equation} \mathbf{x}_k(f)=X\left(f+K_kf_s \right), \quad 1 \le k \le N, \label{xdef} \end{equation} where $K_k = k-\frac{N+1}{2}, 1 \le k \le N$ for odd $N$ and $K_k = k-\frac{N+2}{2}, 1 \le k \le N$ for even $N$.
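As an illustration, the sampling matrix $\mathbf{A}$ (defined just below) can be formed directly from a coset pattern, and its full spark property, which we assume later on, can be checked exhaustively for small sizes. This sketch (all sizes and the pattern are illustrative) uses a bunched pattern, for which every $M \times M$ submatrix is Vandermonde and hence nonsingular:

```python
import numpy as np
from itertools import combinations

# Sketch of the multicoset matrix A (toy values; the pattern and sizes are
# illustrative). A bunched pattern makes every M x M submatrix Vandermonde.
N, M = 7, 3
c = np.array([0, 1, 2])                  # coset pattern c_1 < ... < c_M
T_nyq = 1.0                              # normalized Nyquist period (assumption)
K = np.arange(1, N + 1) - (N + 1) / 2    # K_k = k - (N+1)/2 for odd N
A = np.exp(2j * np.pi * np.outer(c, K) / N) / (N * T_nyq)

# full spark: every choice of M columns is linearly independent
full_spark = all(np.linalg.matrix_rank(A[:, list(S)]) == M
                 for S in combinations(range(N), M))
```

For larger $N$ the exhaustive check becomes combinatorial; in practice one relies on patterns known to yield full spark matrices.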
Each entry of $\mathbf{x}(f)$ is referred to as a bin since it consists of a slice of the spectrum of $x(t)$. The $ik$th element of the $M \times N$ matrix $\mathbf{A}$ is given by \begin{equation} \mathbf{A}_{ik} = \frac{1}{NT_{\text{Nyq}}} e^{j\frac{2 \pi}{N} c_i K_k}. \end{equation} \subsubsection{MWC sampling} The MWC \cite{Mishali_theory} is composed of $M$ parallel channels. In each channel, an analog mixing front-end, where $x(t)$ is multiplied by a mixing function $p_i(t)$, aliases the spectrum, such that each band appears in baseband. The mixing functions $p_i(t)$ are required to be periodic. We denote by $T_p$ their period and we require $f_p=1/T_p \ge B$. The function $p_i(t)$ has a Fourier expansion \begin{equation} p_i(t) =\sum_{l=-\infty}^{\infty} c_{il} e^{j\frac{2\pi}{T_p} lt}. \end{equation} In each channel, the signal goes through a lowpass filter with cut-off frequency $f_s/2$ and is sampled at rate $f_s \ge f_p $. For the sake of simplicity, we choose $f_s=f_p$. The overall sampling rate is $Mf_s$, where $M \le N=f_{\text{Nyq}}/f_s$. Repeating the calculations in \cite{Mishali_theory}, we derive the relation between the known DTFTs of the samples $z_i[n]$ and the unknown $X(f)$: \begin{equation} \mathbf{z}(f)=\mathbf{A}\mathbf{x}(f), \qquad f \in \mathcal{F}_s, \label{eq:mwc} \end{equation} where $\mathbf{z}(f)$ is a vector of length $M$ with $i$th element $\mathbf{z}_i(f)=Z_i(e^{j2\pi fT_s})$. The unknown vector $\mathbf{x}(f)$ is given by (\ref{xdef}). The $M \times N$ matrix $\mathbf{A}$ contains the coefficients $c_{il}$: \begin{equation} \mathbf{A}_{il} = c_{i,-l}=c^*_{il}. \end{equation} For both sampling schemes, the overall sampling rate is \begin{equation} f_{\text{tot}}=Mf_s=\frac{M}{N}f_{\text{Nyq}}. \end{equation} \subsubsection{Analog Power Spectrum Reconstruction} \label{analog_rec} We note that systems (\ref{eq:multico}) and (\ref{eq:mwc}) are identical for both sampling schemes. The only difference is the sampling matrix $\mathbf{A}$.
We assume that $\bf{A}$ is full spark in both cases \cite{Mishali_multicoset, Mishali_theory}, namely, that every $M$ columns of $\bf A$ are linearly independent. We can thus derive a method for reconstruction of the analog power spectrum for both sampling schemes together. We will reconstruct $P_x(f)$ from the correlation of $\mathbf{z}(f)$, defined in (\ref{eq:multico}) and (\ref{eq:mwc}). Since $x(t)$ is a wide-sense stationary process, we have \cite{Papoulis} \begin{equation} \label{eq:papou} \mathbb{E} [X(f_1) X^*(f_2) ] = P_x(f_1) \delta (f_1-f_2), \end{equation} where $P_x(f)$ is given by (\ref{eq:spec}). We define the autocorrelation matrix $\mathbf {R_x}(f) = \mathbb{E} [\mathbf{x}(f) \mathbf{x}^H(f) ]$, where $(\cdot)^H$ denotes the conjugate (Hermitian) transpose. From (\ref{eq:papou}), $\mathbf{R_x}(f)$ is a diagonal matrix with $\mathbf{R}_{\mathbf{x}_{(i,i)}}(f)=P_x(f+ K_i f_s)$ \cite{Davies}, where $K_i$ is defined in Section \ref{sec:multico}. Clearly, our goal can be stated as recovery of $\mathbf{R_x}(f)$, since once $\mathbf{R_x}(f)$ is known, $P_x(f)$ follows for all $f$. We now relate $\mathbf{R_x}(f)$ to the correlation of the sub-Nyquist samples. From (\ref{eq:multico}) or (\ref{eq:mwc}), we have \begin{equation} \mathbf{R_z}(f) = \mathbf{A} \mathbf{R_x}(f) \mathbf{A}^H, \qquad f \in \mathcal{F}_s, \label{eq:autoco2} \end{equation} where $\mathbf {R_z}(f) = \mathbb{E} [\mathbf{z}(f) \mathbf{z}^H(f) ]$. It follows that \begin{equation} \mathbf{r_z}(f) = \mathbf{(\bar{A} \otimes A)}\text{vec}(\mathbf{R}_\mathbf{x}(f)) = \mathbf{(\bar{A} \otimes A)} \mathbf{B} \mathbf{r_x}(f) \triangleq \mathbf{\Phi} \mathbf{r_x}(f), \label{eq:rzrx} \end{equation} where $\bf \Phi=(\bar{A} \otimes A)B= \bar{A} \odot A$, and $\bf \bar{A}$ denotes the conjugate matrix of $\bf A$.
Here $\otimes$ is the Kronecker product, $\odot$ denotes the Khatri-Rao product, $\mathbf{r_z}(f) = \text{vec}(\mathbf{R_z}(f))$, and $\bf B$ is an $N^2 \times N$ selection matrix that has a 1 in the $j$th column and $[(j-1)N+j]$th row, $1 \le j \le N$, and zeros elsewhere. Thus, $\mathbf{r}_{\mathbf{x}_i}(f)=P_x(f+ K_i f_s)$ and by recovering $\mathbf{r_x}(f)$ for all $f \in \mathcal{F}_s$, we recover the entire power spectrum of $x(t)$. We now discuss the sparsity of $\mathbf{r_x}(f)$ for the second and third scenarii. We chose $f_s \ge B$ so that each transmission contributes only a single non zero element to $\mathbf{r_x}(f)$ (referring to a specific $f$), and consequently $\mathbf{r_x}(f)$ has at most $K_f \ll N$ non zeros for each $f$ \cite{Mishali_theory}, corresponding to $S_x$. In the next section, we derive conditions on the sampling rate for (\ref{eq:rzrx}) to have a unique solution. It is interesting to note that (\ref{eq:rzrx}), which is written in the frequency domain, is valid in the time domain as well. We can therefore estimate $\mathbf{r_z}(f)$ and reconstruct $\mathbf{r_x}(f)$ in the frequency domain, or alternatively, we can estimate $\mathbf{r_z}[n]$ and reconstruct $\mathbf{r_x}[n]$ in the time domain using \begin{equation} \mathbf{r_z}[n] = \mathbf{\Phi} \mathbf{r_x}[n]. \label{eq:time} \end{equation} Note that $\mathbf{r_x}(f)$ is $K_f$-sparse for each specific frequency $f \in \mathcal{F}_s$, whereas $\mathbf{r_x}[n]$ is $2K_f$-sparse since each transmission can be split into two bins. Therefore, in Section \ref{sec:scen3}, we show that the minimal sampling rate is achieved only in the frequency domain. Since the vectors $\mathbf{r_x}[n]$ are jointly sparse, we can recover the support $S_x$ from one sample in each channel, provided that the samples in the occupied bins are non zero for each $n$. However, in order to ensure robustness to noise and better performance, we consider more than one sample in the simulations.
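The algebraic step from (\ref{eq:autoco2}) to (\ref{eq:rzrx}) can be verified numerically. The sketch below (toy sizes, random $\mathbf{A}$) builds the selection matrix $\mathbf{B}$, confirms the identity $(\mathbf{\bar{A}} \otimes \mathbf{A})\mathbf{B}=\mathbf{\bar{A}} \odot \mathbf{A}$, and checks that a diagonal $\mathbf{R_x}$ indeed yields $\mathbf{r_z}=\mathbf{\Phi}\mathbf{r_x}$:

```python
import numpy as np

# Numerical check (toy sizes, random A) of the Kronecker / Khatri-Rao step:
# vec(A R_x A^H) = (conj(A) kron A) B r_x = (conj(A) khatri-rao A) r_x.
rng = np.random.default_rng(1)
M, N = 3, 5
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

B = np.zeros((N * N, N))                 # selection matrix: 1 at row (j-1)N+j, col j
for j in range(N):
    B[j * N + j, j] = 1.0

Phi = np.kron(np.conj(A), A) @ B         # (conj(A) kron A) B
KR = np.stack([np.kron(np.conj(A[:, j]), A[:, j])   # column-wise Khatri-Rao
               for j in range(N)], axis=1)

r_x = rng.uniform(0.5, 2.0, N)           # diagonal of R_x (stationarity)
R_z = A @ np.diag(r_x) @ A.conj().T
r_z = R_z.reshape(-1, order='F')         # column-major vec(R_z)
```

The column-major reshape matches the $\text{vec}(\cdot)$ convention used with the identity $\text{vec}(\mathbf{A}\mathbf{X}\mathbf{A}^H)=(\mathbf{\bar{A}}\otimes\mathbf{A})\text{vec}(\mathbf{X})$.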
As a final comment, below we assume full knowledge of $\mathbf{r_z}(f)$ or $\mathbf{r_z}[n]$, or the possibility to compute them. In Section \ref{sec:simulations}, we show how to approximate $\mathbf{r_z}(f)$ and $\mathbf{r_z}[n]$ from a finite data block. \subsection{Discrete Model: Reconstruction of the Digital Spectrum} \label{Discrete} In this model, we wish to recover the Fourier coefficients of the autocorrelation of $\mathbf{x}$, defined in (\ref{eq:spec2}). The traditional approach in this setting exploits the time domain characteristics of the stationary signal. Unfortunately, the analysis of the recovery conditions of the Fourier coefficients of $\mathbf{x}$ turns out to be quite involved. Therefore, we propose a second approach that exploits the equivalent frequency domain properties of the signal. We show that in that case, the same analysis as for the analog model can be performed. \subsubsection{Time domain} Define the autocorrelation matrix as \begin{eqnarray} \mathbf{R_x}&=& \mathbb{E} \left[ \mathbf{x} \mathbf{x}^{H} \right] \\ &=& \left[ \begin{array}{cccc} r_\mathbf{x}[0] & r_\mathbf{x}[1] & \dots & r_\mathbf{x}[N-1] \\ r_\mathbf{x}[1] & r_\mathbf{x}[0] & \dots & r_\mathbf{x}[N-2] \\ \vdots & \vdots & \ddots & \vdots \\ r_\mathbf{x}[N-1] & r_\mathbf{x}[N-2] & \dots &r_\mathbf{x}[0] \end{array} \right]. \nonumber \end{eqnarray} From (\ref{eq:spec_autoco}), \begin{equation} \mathbf{s}_\mathbf{x} = \mathbf{F} \mathbf{r}_\mathbf{x}, \end{equation} where $\bf s_x$ is defined in (\ref{eq:spec2}), $\mathbf{F}$ is the $N \times N$ DFT matrix, and \begin{equation} \mathbf{r_x}= \left[ \begin{array}{cccc} r_\mathbf{x}[0] & r_\mathbf{x}[1] & \dots & r_\mathbf{x}[N-1] \end{array} \right]^T.
\end{equation} Therefore, \begin{equation} \label{eq:vecx} \text{vec}(\mathbf{R}_\mathbf{x}) = \mathbf{\tilde{B}} \mathbf{r_x} =\frac{1}{N}\mathbf{\tilde{B}} \mathbf{F}^{H} \mathbf{s}_\mathbf{x}, \end{equation} where $\mathbf{\tilde{B}}$ is an $N^2 \times N$ repetition matrix whose $i$th row is given by the $\left[ \left| \lfloor \frac{i-1}{N} \rfloor - (i-1) \mod N \right| +1\right] $th row of the $N \times N$ identity matrix. We now relate $\textbf{s}_\mathbf{x}$ to the covariance matrix of the sub-Nyquist samples ${\mathbf{R}_\mathbf{z}} = \mathbb{E} \left[ \mathbf{z}\mathbf{z}^{H} \right]$. We start by deriving the relationship between $\textbf{R}_\mathbf{z}$ and $\textbf{R}_\mathbf{x}$. From (\ref{eq:dmodel1}), we have \begin{equation} \bf R_z = AR_xA^H. \label{eq:autoco} \end{equation} Here, we assume that $\bf A$ is full spark. Vectorizing both sides of (\ref{eq:autoco}) and using (\ref{eq:vecx}), we obtain \begin{equation} \mathbf{r_z} = \mathbf{(\bar{A} \otimes A)}\text{vec}(\mathbf{R}_\mathbf{x}) = \frac{1}{N} \mathbf{(\bar{A} \otimes A)} \mathbf{\tilde{B}} \mathbf{F}^{H} \mathbf{s_x} \triangleq \mathbf{\Psi s_x}, \label{eq:rzsx} \end{equation} where $\bf \Psi=\frac{1}{N} (\bar{A} \otimes A)\tilde{B} F^{H}$ is of size $M^2 \times N$. We recall that $\mathbf{r}_\mathbf{z}$ is a vector of size $M^2$ and $\bf s_x$ is a vector of size $N$. Since $\bf F$ is invertible, $\text{rank}(\mathbf{\Psi}) = \text{rank}(\mathbf{(\bar{A} \otimes A)\tilde{B}})$.
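The repetition matrix $\mathbf{\tilde{B}}$ in (\ref{eq:vecx}) can be constructed directly from its row rule and sanity-checked on a toy example (illustrative size $N=4$); the sketch verifies that $\mathbf{\tilde{B}}\mathbf{r_x}$ reproduces the column-stacked symmetric Toeplitz matrix $\mathbf{R_x}$:

```python
import numpy as np

# Construction of the repetition matrix B~ from its row rule (toy size N = 4):
# row i (1-indexed) of B~ is row |floor((i-1)/N) - (i-1) mod N| + 1 of I_N.
N = 4
rng = np.random.default_rng(2)
r = rng.standard_normal(N)                       # r_x[0], ..., r_x[N-1]

lags = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
R = r[lags]                                      # symmetric Toeplitz R_x

Bt = np.zeros((N * N, N))
for i in range(N * N):                           # 0-indexed version of the rule
    Bt[i, abs(i // N - i % N)] = 1.0
```

Each row of $\mathbf{\tilde{B}}$ has a single 1, picking the lag $|m-n|$ of the corresponding entry of $\text{vec}(\mathbf{R_x})$.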
Note that we can express $\mathbf{C}=(\mathbf{\bar{A} \otimes A)\tilde{B}}$ as \begin{eqnarray} \nonumber \mathbf{C} = \left[ \begin{array}{c} \mathbf{\bar{a}_1} \otimes \mathbf{a_1} + \mathbf{\bar{a}_2} \otimes \mathbf{a_2} + \dots + \mathbf{\bar{a}_N} \otimes \mathbf{a_N} \\ \mathbf{\bar{a}_1} \otimes \mathbf{a_2} + \mathbf{\bar{a}_2} \otimes \mathbf{a_1} + \mathbf{\bar{a}_2} \otimes \mathbf{a_3} + \dots + \mathbf{\bar{a}_N} \otimes \mathbf{a_{N-1}} \\ \vdots \\ \mathbf{\bar{a}_1} \otimes \mathbf{a_N} + \mathbf{\bar{a}_N} \otimes \mathbf{a_1} \end{array} \right]^T, \end{eqnarray} where $\mathbf{a}_j$ denotes the $j$th column of $\bf A$. Analyzing conditions for $\bf C$ to be full rank does not appear to be straightforward. We therefore propose instead to investigate the following frequency domain approach. \subsubsection{Frequency domain} From (\ref{xmodel}), \begin{equation} \mathbf{c}= \mathbf{F} \mathbf{x}. \end{equation} Here, $\mathbf{x}$ is given by (\ref{xsamp}), the entries of $\mathbf{c}$ are the Fourier coefficients of $\bf x$ (see (\ref{xmodel})), and $\bf F$ is the $N \times N$ DFT matrix. Since $\mathbf{F} \mathbf{F}^{H} = N \mathbf{I}$, \begin{equation} \label{eq:xc} \mathbf{x}= \frac{1}{N} \mathbf{F}^{H} \mathbf{c}. \end{equation} Define the autocorrelation matrix $\mathbf {R_c}=\mathbb{E} \left[ \mathbf{c} \mathbf{c}^H \right]$. From (\ref{eq:spec3}), $\mathbf{R_c}$ is a diagonal matrix and it holds that $\mathbf{R_c}(i,i)=s_{\mathbf{x}}[i-(N+1)/2]$. Clearly, our goal can be stated as recovery of $\mathbf{R_c}$, since once $\mathbf{R_c}$ is known, $\mathbf{s}_\mathbf{x}$ follows. We now relate $\mathbf{R_c}$ to the correlation of the sub-Nyquist samples. A variety of different sub-Nyquist schemes can be used to sample $x(t)$ \cite{Mishali_multicoset, Mishali_theory,Laska2007}, even when its Fourier series is not sparse, as we will show in Section \ref{sec:rate1}.
Let $\bf{z} \in \mathbb{R}^M$ denote the vector of sub-Nyquist samples of $x(t), 0 \leq t < T$, sampled at rate $f_s$ with $f_s<N/T$. For simplicity, we assume that $M=f_sT<N$ is an integer. We express the sub-Nyquist samples $\bf{z}$ in terms of the Nyquist samples $\bf{x}$ as \begin{equation} \bf{z} = \bf{A} \bf{x}, \label{eq:dmodel1} \end{equation} where $\mathbf{A}$ is an $M \times N$ matrix. Combining (\ref{eq:xc}) and (\ref{eq:dmodel1}), we obtain \begin{equation} \label{eq:zc} \mathbf{z} = \frac{1}{N} \mathbf{A} \mathbf{F}^{H} \mathbf{c} \triangleq \mathbf{G} \mathbf{c}, \end{equation} where $\mathbf{G}= \frac{1}{N}\mathbf{A} \mathbf{F}^{H}$. We assume that $\bf G$ is full spark, namely $\text{spark}(\mathbf{G})=M+1$. Let ${\mathbf{R}_\mathbf{z}} = \mathbb{E} \left[ \mathbf{z}\mathbf{z}^{H} \right]$ be the covariance matrix of the sub-Nyquist samples. We now relate $\mathbf{R}_\mathbf{z}$ to $\mathbf{R}_\mathbf{c}$. From (\ref{eq:zc}), we have \begin{equation} \bf R_z = GR_cG^H. \label{eq:autoco1} \end{equation} Vectorizing both sides of (\ref{eq:autoco1}), \begin{equation} \mathbf{r_z} = \mathbf{(\bar{G} \otimes G)}\text{vec}(\mathbf{R}_\mathbf{c}) = \mathbf{(\bar{G} \otimes G)} \mathbf{B} \mathbf{r_c} \triangleq \mathbf{\Phi \mathbf{r_c}}. \label{eq:rzrc} \end{equation} Here, $\bf B$ is as defined in Section \ref{analog_rec}, $\bf \Phi=(\bar{G} \otimes G) \mathbf{B}$ is of size $M^2 \times N$ and $\bf r_c$ is a vector of size $N$ that contains the potentially non-zero elements, namely the diagonal elements, of $\mathbf{R_c}$, that is $\mathbf{r_c}(i)=\mathbf{R_c}(i,i), 1 \le i \le N$. In the second and third scenarii, $\bf r_c$ has only $K_f \ll N$ non zero elements, which correspond to the $K_f$ non zero Fourier coefficients in $S_x$. In Section \ref{sec:rate}, we discuss the conditions for (\ref{eq:rzrc}) and (\ref{eq:rzsx}) to have a unique solution, and we derive the minimal sampling rate accordingly.
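The relation (\ref{eq:rzrc}) can also be checked numerically for the discrete model. The sketch below (illustrative sizes, seed, and support) forms $\mathbf{G}=\frac{1}{N}\mathbf{A}\mathbf{F}^H$ and a sparse diagonal $\mathbf{R_c}$, and verifies $\mathbf{r_z}=\mathbf{\Phi}\mathbf{r_c}$:

```python
import numpy as np

# Toy check of r_z = Phi r_c for the discrete model (sizes, seed and support
# are illustrative). G = (1/N) A F^H, R_c diagonal and sparse.
rng = np.random.default_rng(3)
M, N, K_f = 4, 8, 2
A = rng.standard_normal((M, N))
F = np.fft.fft(np.eye(N))                # N x N DFT matrix
G = A @ F.conj().T / N                   # G = (1/N) A F^H

r_c = np.zeros(N)
r_c[[1, 5]] = [2.0, 3.0]                 # K_f non zero coefficient powers

R_z = G @ np.diag(r_c) @ G.conj().T      # R_z = G R_c G^H
B = np.zeros((N * N, N))                 # selection matrix of Section on the analog model
for j in range(N):
    B[j * N + j, j] = 1.0
Phi = np.kron(np.conj(G), G) @ B         # Phi = (conj(G) kron G) B, size M^2 x N
```

The structure is identical to the analog case; only the sampling matrix changes from $\mathbf{A}$ to $\mathbf{G}$.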
Again, we assume full knowledge of $\mathbf{r_z}$ and will show how it can be approximated in Section \ref{sec:simulations}. We observe that we obtain similar relations, (\ref{eq:rzrx}) and (\ref{eq:rzrc}), in both models. Therefore, the next section refers to both together. We used the same notation for different parameters in the two models so that they both lead to the same relation. In order to avoid confusion, we summarize the notation in \mbox{Table \ref{table:parameters}}. \begin{table} \begin{tabular}{l|l|l} Parameter & Analog Model & Discrete Model \\ \hline $M$ & $\#$ measurements & $\#$ sub-Nyquist samples \\ $N$ & $\#$ frequency bins & $\#$ Nyquist samples \\ $K_f$ & $\#$ potentially non & $\#$ potentially non zero \\ & zero frequency bands & Fourier coefficients \\ $S_x$ & continuous support & discrete Fourier series \\ & of $x(t)$ & support \end{tabular} \caption{Parameter notation in both models} \label{table:parameters} \end{table} We also note that, in the analog model, we define an infinite number of equations (\ref{eq:rzrx}), more precisely one per frequency ($f \in \mathcal{F}_s$), whereas in the digital model we obtain a single equation (\ref{eq:rzrc}). \section{Minimal Sampling Rate and Reconstruction} \label{sec:rate} \subsection{No Sparsity Constraints} \label{sec:rate1} \subsubsection{Minimal Rate for Perfect Reconstruction} The systems defined in (\ref{eq:rzrx}) and (\ref{eq:rzrc}) are overdetermined for $M^2 \geq N$, if $\bf \Phi$ is full column rank. The following proposition provides conditions for the systems defined in (\ref{eq:rzrx}) and (\ref{eq:rzrc}) to have a unique solution. \begin{proposition} \label{prop:first} Let $\bf T$ be a full spark $M \times N$ matrix ($M \le N$) and $\bf B$ be an $N^2 \times N$ selection matrix that has a 1 in the $j$th column and $[(j-1)N+j]$th row, $1 \le j \le N$, and zeros elsewhere.
The matrix $\bf C = \mathbf{(\bar{T} \otimes T)} \mathbf{B}=\bar{T} \odot T$ is full column rank if $M^2 \ge N$ and $2M > N$. \end{proposition} \begin{proof} First, we require $M^2 \ge N$ in order for $\bf C$ to have no more columns than rows. Let $\bf x$ be a vector of length $N$ in the null space of $\bf C$, namely $\bf Cx=0$. We show that if $2M > N$, then $\bf x =0$. Assume by contradiction that $\bf x \neq 0$. We denote by $\Omega_z$ the set of indices $1 \le j \le N$ such that $x_j \neq 0$ and $N_z=| \Omega_z |$. It holds that $1 \le N_z \le N$. Note that we can express $\bf C$ as \begin{eqnarray} \nonumber \mathbf{C} &=& \left[ \begin{array}{cccc} \mathbf{\bar{t}_1} \otimes \mathbf{t_1} & \mathbf{\bar{t}_2} \otimes \mathbf{t_2} & \dots & \mathbf{\bar{t}_N} \otimes \mathbf{t_N} \end{array} \right], \end{eqnarray} where $\mathbf{t}_j$ denotes the $j$th column of $\bf T$. Let \begin{equation} \nonumber \mathbf{h}_i = \left[ \begin{array}{cccc} t_{i1} x_1 & t_{i2} x_2 & \dots & t_{iN} x_N \end{array} \right]^T, \quad 1 \le i \le M. \end{equation} Then $\bf Cx=0$ if and only if \begin{equation} \label{eq:proof1} \mathbf{\bar{T}} \mathbf{h}_i = 0, \quad 1 \le i \le M. \end{equation} That is, the $M$ vectors $\mathbf{h}_i$ are in the null space of $\mathbf{\bar{T}}$. If $N_z \le M$, then since $\bf \bar{T}$ is full spark, (\ref{eq:proof1}) holds if and only if $t_{ij}x_j=0, \forall 1 \le i \le M \text{ and } \forall j \in \Omega_z$. Again, since $\mathbf{T}$ is full spark, none of its columns is the zero vector, and therefore $x_j=0, \forall j \in \Omega_z$, which is a contradiction. If $N_z > M$, then we show that the vectors $\mathbf{h}_i, 1 \le i \le M$ are linearly independent. Since $\mathbf{T}$ is full spark, every set of $M$ columns is linearly independent. Let us consider $M$ columns $\mathbf{t}_j$ of $\bf T$ such that $ j \in \Omega_z$.
It follows that \begin{equation} \nonumber \sum_j \gamma_j x_j \mathbf{t}_j=0 \end{equation} if and only if $\gamma_j x_j = 0$ for all $j$. From the definition of $\Omega_z$, this holds if and only if $\gamma_j = 0$, that is, the $M$ vectors $x_j\mathbf{t}_j$ are linearly independent. Thus, the $M$ vectors $\mathbf{h}_i$ are linearly independent as well. We denote by $\text{nullity}(\mathbf{T})$ the dimension of the null space of $\bf T$. From the rank-nullity theorem, $\text{nullity}(\mathbf{\bar{T}})=N - \text{rank}(\mathbf{\bar{T}})=N-M$. Since the dimension of the space spanned by $\mathbf{h}_i$ is $M$, if $M>N-M$, then $\bf x=0$. \end{proof} The following theorem follows directly from Proposition \ref{prop:first}. \begin{theorem} The systems (\ref{eq:rzrx}) (analog model) and (\ref{eq:rzrc}) (digital model) have a unique solution if \begin{enumerate} \item $\bf A$ in the analog model and $\bf AF^H$ in the digital model are full spark. \item $M^2 \ge N$ and $2M > N$. \end{enumerate} \end{theorem} This can happen even for $M<N$, which is our basic assumption. If $M \ge 2$, we have $M^2 \ge 2M$. Thus, in this case, the values of $M$ for which we obtain a unique solution are $N/2 < M < N$. The minimal sampling rate is then \begin{equation} f_{(1)}=Mf_s>\frac{N}{2}B=\frac{f_{\text{Nyq}}}{2}. \end{equation} This means that even without any sparsity constraints on the signal, we can retrieve its power spectrum by exploiting its stationarity property, whereas the measurement vector $\bf z$ exhibits no stationarity structure in general. This was already observed in \cite{TianLeus} for the digital model, but no proof was provided. In \cite{wang}, the authors show that $2M>N$ is a sufficient condition on $M$ so that $\bf \Phi$ is full column rank in the analog model. Then, a universal sampling pattern can guarantee the full column rank of $\bf \Phi$.
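Proposition \ref{prop:first} can be illustrated numerically: for a generic $\mathbf{T}$ (a random Gaussian matrix is full spark with probability one) with $M^2 \ge N$ and $2M > N$, the Khatri-Rao matrix $\mathbf{\bar{T}} \odot \mathbf{T}$ has full column rank, so a diagonal power profile is the unique least-squares solution. A toy sketch (sizes and seed illustrative):

```python
import numpy as np

# Illustration of Proposition 1 (toy sizes): for a generic T with M^2 >= N and
# 2M > N, the Khatri-Rao matrix conj(T) (.) T has full column rank, so the
# diagonal power profile is the unique least-squares solution.
rng = np.random.default_rng(4)
M, N = 3, 5                              # M^2 = 9 >= 5 and 2M = 6 > 5
T = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
# a random T is full spark with probability one (assumption of the sketch)

C = np.stack([np.kron(np.conj(T[:, j]), T[:, j]) for j in range(N)], axis=1)
rank_C = np.linalg.matrix_rank(C)

r = rng.uniform(0.5, 2.0, N)             # true diagonal "power spectrum"
r_z = C @ r
r_hat = np.linalg.lstsq(C, r_z, rcond=None)[0]
```

Since the system is consistent and $\mathbf{C}$ has full column rank, the least-squares solution coincides with the true profile.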
In \cite{Davies, Davies2, wang}, the authors claim that the system is overdetermined if $M(M-1)+1 \ge N$ and if the multicoset sampling pattern is such that it yields a full column rank matrix $\bf \Phi$. In \cite{wang}, some simple sub-optimal multicoset sampling patterns are given that achieve a compression rate below $1/2$. Some examples of optimal patterns, namely patterns that guarantee a unique solution under $M(M-1)+1 = N$, are given in \cite{Davies, Davies2}, but it is not clear what condition is required from the pattern, or alternatively from the sampling matrix $\bf A$, in order for $\bf \Phi$ to have full column rank. Here, the condition for having a solution is given with respect to the sampling matrix $\bf A$, which directly depends on the sampling pattern, rather than the matrix $\bf \Phi$. \subsubsection{Power Spectrum Reconstruction} If the conditions of Proposition \ref{prop:first} are satisfied, namely if the sampling rate $f \ge f_{(1)}$, then the systems defined in (\ref{eq:rzrx}) and (\ref{eq:rzrc}) are overdetermined. The power spectra $\mathbf{r_x}(f)$ and $\bf r_c$ are given by \begin{equation} \label{eq:rec11} \mathbf{\hat{r}_x}(f) = \mathbf{\Phi}^{\dagger} \mathbf{r_z}(f), \end{equation} in the analog model, and \begin{equation} \label{eq:rec12} \bf \hat{r}_c = \Phi^{\dagger} r_z, \end{equation} in the digital one. Here $\dagger$ denotes the Moore-Penrose pseudo-inverse. \subsection{Sparsity Constraints - Non-Blind Detection} \label{rate2} \subsubsection{Minimal Rate for Perfect Reconstruction} We now consider the second scheme, where we have \emph{a priori} knowledge on the frequency support of $x(t)$ and we assume that it is sparse. Instead of reconstructing the entire power spectrum, we exploit the knowledge of the signal's frequencies in order to recover the potentially occupied bands (analog model) or the potential Fourier series coefficients of the autocorrelation function (discrete model).
This will allow us to further reduce the sampling rate. In this scenario, $\mathbf{r_x}(f)$ (first model) and $\bf r_c$ (second model) contain only $K_f \ll N$ potentially non zero elements, as discussed in Section \ref{SecOpt}. In the first model, the reduced problem can be expressed as \begin{equation} \mathbf{r_z}(f) = \mathbf{\Phi}_S\mathbf{r}_{\mathbf{x}}^S(f). \label{eq:rzrx2} \end{equation} Here, $\mathbf{r}_{\mathbf{x}}^S(f)$ is the vector $\mathbf{r_x}(f)$ reduced to its $K_f$ potentially non zero elements and $\mathbf{\Phi}_S$ contains the corresponding $K_f$ columns of $\bf \Phi$. Note that the support $S$ of $\mathbf{r_x}(f)$ depends on the specific frequency $f$ since the support of the power spectrum of each transmission $s_i(t)$ can be split into two different bins of $\mathbf{r_x}(f)$. Obviously, $S$ can be calculated for each $f$ from the known $S_x$. In the second model, the reduced problem becomes \begin{equation} \mathbf{r_z} = \mathbf{\Phi}_S\mathbf{r}_{\mathbf{c}}^S. \label{eq:rzrc2} \end{equation} Here, $\mathbf{r}_{\mathbf{c}}^S$ is the reduction of $\bf r_c$ to its $K_f$ potentially non zero elements and $\mathbf{\Phi}_S$ contains the corresponding $K_f$ columns of $\bf \Phi$. In the digital case, it holds that the support $S=S_x$. The following proposition provides conditions for the systems defined in (\ref{eq:rzrx2}) and (\ref{eq:rzrc2}) to have a unique solution. \begin{proposition} \label{prop:second} Let $\bf T$ be a full spark $M \times N$ matrix ($M \le N$) and $\bf B$ be defined as in Proposition \ref{prop:first}. Let $\bf C = \mathbf{(\bar{T} \otimes T)}B$ and $\mathbf{H}$ be the $N \times K_f$ selection matrix that selects any $K_f < N$ columns of $\bf C$. The matrix $\bf D=CH$ is full column rank if $M^2 \ge K_f$ and $2M> K_f$. \end{proposition} \begin{proof} First, we require $M^2 \ge K_f$ in order for $\bf D$ to have no more columns than rows.
Let $\mathbf{T}_S$ be the $M \times K_f$ matrix composed of the $K_f$ columns of $\bf T$ corresponding to the $K_f$ selected columns of $\bf C$: \begin{equation} \nonumber \mathbf{T}_S = \left[ \begin{array}{cccc} \mathbf{t_{[1]}} & \mathbf{t_{[2]}} & \dots & \mathbf{t_{[K_f]}} \end{array} \right]. \end{equation} Here $\mathbf{t}_{[i]}, 1 \le i \le K_f$ denotes the column of $\bf T$ corresponding to the $i$th selected column of $\bf C$. We have \begin{equation} \nonumber \label{eq:matrixD} \mathbf{D} = \left[ \begin{array}{cccc} \mathbf{\bar{t}_{[1]}} \otimes \mathbf{t_{[1]}} & \mathbf{\bar{t}_{[2]}} \otimes \mathbf{t_{[2]}} & \dots & \mathbf{\bar{t}_{[K_f]}} \otimes \mathbf{t_{[K_f]}} \end{array} \right]. \end{equation} If $K_f \ge M$, then, since $\text{spark}(\mathbf{T}) = M+1$, $\mathbf{T}_S$ is full spark as well. Applying Proposition \ref{prop:first} with $\mathbf{T}_S$, we have that $\bf D$ is full column rank if $2M> K_f$. If $K_f<M$, then from $\text{spark}(\mathbf{T}) = M+1>K_f$, $\mathbf{T}_S$ is full column rank. Since $\text{rank}(\bar{\mathbf{T}}_S \otimes \mathbf{T}_S) = \text{rank}(\mathbf{T}_S)^2 = K_f^2$, the matrix $\bar{\mathbf{T}}_S \otimes \mathbf{T}_S$ is also full column rank. It can be seen that the matrix $\bf D$ is obtained by selecting $K_f$ columns from $\bar{\mathbf{T}}_S \otimes \mathbf{T}_S$. It follows that $\bf D$ is full column rank as well. \end{proof} The following theorem follows directly from Proposition \ref{prop:second}. \begin{theorem} The systems (\ref{eq:rzrx2}) (analog model) and (\ref{eq:rzrc2}) (digital model) have a unique solution if \begin{enumerate} \item $\bf A$ in the analog model and $\bf AF^H$ in the digital model are full spark. \item $M^2 \ge K_f$ and $2M > K_f$. \end{enumerate} \end{theorem} In this case, the minimal sampling rate is \begin{equation} f_{(2)}=Mf_s>\frac{K_f}{2}B=N_{\text{sig}}B.
\end{equation} Landau \cite{LandauCS} developed a minimal rate requirement for perfect signal reconstruction in the non-blind setting, which corresponds to the actual band occupancy, namely $2N_{\text{sig}}B$. Here, we find that the minimal sampling rate for perfect spectrum recovery in this setting is half the Landau rate. \subsubsection{Power Spectrum Reconstruction} If the conditions of Proposition \ref{prop:second} are satisfied, namely the sampling rate $f \ge f_{(2)}$, then we can reconstruct the signal's power spectrum by first reducing the systems as shown in (\ref{eq:rzrx2}) and (\ref{eq:rzrc2}). The reconstructed power spectra $\mathbf{r_x}(f)$ and $\bf r_c$ are given by \begin{eqnarray} \mathbf{\hat{r}_x}^S(f) &=& \mathbf{\Phi}_S^{\dagger} \mathbf{r_z}(f) \\ \mathbf{\hat{r}}_{\mathbf{x}_i}(f) &=& 0 \quad \forall i \notin S, \nonumber \end{eqnarray} in the first model, and \begin{eqnarray} \mathbf{\hat{r}}_{\mathbf{c}}^S &=&\mathbf{\Phi}_S^{\dagger} \mathbf{r_z} \\ \mathbf{\hat{r}}_{\mathbf{c}_i}&=& 0 \quad \forall i \notin S, \nonumber \end{eqnarray} in the second one. \subsection{Sparsity Constraints - Blind Detection} \label{sec:scen3} \subsubsection{Minimal Rate for Perfect Reconstruction} We now consider the third scheme, namely that $x(t)$ is sparse, without any \emph{a priori} knowledge on the support. In the previous section, we showed that $\mathbf{\Phi}_S$ is full column rank, for any choice of $K_f<2M$ columns of $ \bf \Phi$. Thus, for $M \ge 2$, we have $\text{spark}(\mathbf{\Phi}) \ge 2M$. Therefore, in the blind setting, if $\mathbf{r_x}(f)$ or $\bf r_c$ is $K_f$-sparse, with $K_f < M$, it is the unique sparsest solution to (\ref{eq:rzrx}) or (\ref{eq:rzrc}), respectively \cite{CSBook}. In this case, the minimal sampling rate is \begin{equation} \label{eq:min3} f_{(3)}= Mf_s>K_fB=2N_{\text{sig}}B, \end{equation} which is twice the rate obtained in the previous scenario.
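The reduced, non-blind recovery can be sketched as follows (illustrative sizes, seed, and support; the support $S$ is assumed known, as in this scenario). By Proposition \ref{prop:second}, $\mathbf{\Phi}_S$ is full column rank whenever $K_f < 2M$, so in the noise-free case the pseudo-inverse recovers the powers on $S$ exactly:

```python
import numpy as np

# Sketch of the reduced, non-blind recovery (illustrative sizes and support).
# With K_f < 2M, Phi_S is full column rank and the pseudo-inverse is exact
# in the noise-free case.
rng = np.random.default_rng(5)
M, N, K_f = 3, 12, 4                     # K_f = 4 < 2M = 6
T = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
Phi = np.stack([np.kron(np.conj(T[:, j]), T[:, j]) for j in range(N)], axis=1)

S = [1, 4, 7, 10]                        # known support (hypothetical)
r_S = rng.uniform(0.5, 2.0, K_f)         # true powers on S
r_z = Phi[:, S] @ r_S

r_hat = np.zeros(N, dtype=complex)
r_hat[S] = np.linalg.pinv(Phi[:, S]) @ r_z   # zeros kept off the support
```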
As in signal recovery, the minimal rate for blind reconstruction is twice the minimal rate for non-blind reconstruction \cite{Mishali_multicoset}. The authors in \cite{Davies2} consider the sparse case as well for a model similar to our analog model. Again, the conditions for the system to be overdetermined are given with respect to $\bf \Phi$, as in the non sparse case. Moreover, the authors reconstruct the average spectrum of the signal over each bin, rather than the spectrum itself at each frequency. Here, the two approaches become fundamentally different since in this scenario, we deal with an infinite collection of linear systems, whereas in \cite{Davies2}, the authors obtain a standard compressed sensing problem aiming at recovering a finite vector. \subsubsection{Power Spectrum Reconstruction} In this scenario, there exists an inherent difference between the two models. In the digital model, we have to solve a single equation (\ref{eq:rzrc}), whereas in the analog model, (\ref{eq:rzrx}) consists of an infinite number of linear systems because $f$ is a continuous variable. Therefore, in the digital case, we can use classical compressed sensing (CS) techniques \cite{CSBook} in order to recover the sparse vector $\bf r_c$ from the measurement vector $\bf r_z$, namely \begin{equation} \mathbf{\hat{r}_c} = \arg\min_{\mathbf{r_c}} || \mathbf{r_c} ||_0 \quad \text{s.t. } \mathbf{r_z=\Phi r_c}. \end{equation} In the analog model, the reconstruction can be divided into two stages: support recovery and spectrum recovery. We use the support recovery paradigm from \cite{Mishali_multicoset} that produces a finite system of equations, called a multiple measurement vectors (MMV) system, from an infinite number of linear systems. This reduction is performed by what is referred to as the continuous to finite (CTF) block.
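For the digital model, the $\ell_0$ program above can be illustrated by brute force on a tiny instance (a sketch, feasible only for small $N$; practical systems would use a CS solver such as OMP \cite{CSBook}). Since here $K_f < \text{spark}(\mathbf{\Phi})/2$, the sparsest solution is unique and the exhaustive search recovers it:

```python
import numpy as np
from itertools import combinations

# Brute-force l0 recovery for the digital model on a tiny instance (a sketch;
# feasible only for small N -- practical systems would use a CS solver).
rng = np.random.default_rng(6)
M, N, K = 4, 16, 2
G = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
Phi = np.stack([np.kron(np.conj(G[:, j]), G[:, j]) for j in range(N)], axis=1)

r_c = np.zeros(N)
r_c[[2, 9]] = [1.5, 2.5]                 # K-sparse coefficient powers
r_z = Phi @ r_c

best = None                              # search all supports of size K
for S in combinations(range(N), K):
    cols = list(S)
    sol = np.linalg.lstsq(Phi[:, cols], r_z, rcond=None)[0]
    err = np.linalg.norm(r_z - Phi[:, cols] @ sol)
    if best is None or err < best[0]:
        best = (err, cols, sol)
err, S_hat, coeffs = best
```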
From (\ref{eq:rzrx}), we have \begin{equation} \bf Q = \Phi Z \Phi^H \end{equation} where \begin{equation} \mathbf{Q}= \int_{f \in \mathcal{F}_s} \mathbf{r_z}(f) \mathbf{r_z}^H(f) \mathrm{d}f \end{equation} is an $M \times M$ matrix and \begin{equation} \mathbf{Z}= \int_{f \in \mathcal{F}_s} \mathbf{r_x}(f) \mathbf{r_x}^H(f) \mathrm{d}f \end{equation} is an $N \times N$ matrix. We then construct a frame $\bf V$ such that $\bf Q=VV^H$. Clearly, there are many possible ways to select $\bf V$. We choose to construct it by performing an eigendecomposition of $\bf Q$ and select $\bf V$ as the matrix of eigenvectors corresponding to the non zero eigenvalues. We can then define the following linear system: \begin{equation} \label{eq:CTF} \bf V= \Phi U. \end{equation} From \cite{Mishali_multicoset} (Propositions 2-3), the support of the unique sparsest solution of (\ref{eq:CTF}) is the same as the support of our original set of equations (\ref{eq:rzrx}). As discussed in Section \ref{SecOpt}, $\mathbf{r_x}(f)$ is $K_f$-sparse for each specific $f \in \mathcal{F}_s$. However, after combining the frequencies, the matrix $\bf U$ is $2K_f$-sparse (at most), since the spectrum of each transmission can be split into two bins of $\mathbf{r_x}(f)$. Therefore, the above algorithm, referred to as SBR4 in \cite{Mishali_multicoset} (for signal reconstruction as opposed to spectrum reconstruction), requires a minimal sampling rate of $2f_{(3)}$. In order to achieve the minimal rate $f_{(3)}$, the SBR2 algorithm regains the factor of two in the sampling rate at the expense of increased complexity \cite{Mishali_multicoset}. In a nutshell, SBR2 is a recursive algorithm that alternates between the CTF described above and a bi-section process. The bi-section splits the original frequency interval into two equal width intervals on which the CTF is applied, until the level of sparsity of $\mathbf{U}$ is less than or equal to $K_f$.
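The CTF steps (correlation matrix $\mathbf{Q}$, frame $\mathbf{V}$ with $\mathbf{Q}=\mathbf{V}\mathbf{V}^H$, and the MMV system (\ref{eq:CTF})) can be sketched on a toy discretized instance. Here the integral over $\mathcal{F}_s$ is approximated by a finite frequency grid and the MMV support search is brute force, both purely for illustration:

```python
import numpy as np
from itertools import combinations

# Toy discretized CTF sketch: the integral over F_s is replaced by a finite
# frequency grid and the MMV support search is brute force (illustration only).
rng = np.random.default_rng(7)
M, N, K = 3, 8, 2
T = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
Phi = np.stack([np.kron(np.conj(T[:, j]), T[:, j]) for j in range(N)], axis=1)

S_true, F_grid = [1, 6], 32
R = np.zeros((N, F_grid))
R[S_true, :] = rng.uniform(0.5, 2.0, (K, F_grid))   # jointly sparse r_x(f)
Z = Phi @ R                                         # r_z(f) on the grid

Q = Z @ Z.conj().T / F_grid                         # ~ integral of r_z r_z^H
w, U = np.linalg.eigh(Q)
mask = w > 1e-10 * w.max()
V = U[:, mask] * np.sqrt(w[mask])                   # frame with Q = V V^H

def residual(S):                                    # how well V fits range(Phi_S)
    cols = list(S)
    sol = np.linalg.lstsq(Phi[:, cols], V, rcond=None)[0]
    return np.linalg.norm(V - Phi[:, cols] @ sol)

S_hat = min(combinations(range(N), K), key=residual)
```

In a realistic system the brute-force search is replaced by an MMV solver, but the reduction from infinitely many equations to the finite system $\mathbf{V}=\mathbf{\Phi}\mathbf{U}$ is the same.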
As opposed to SBR4, which can be performed both in the time and in the frequency domains, SBR2 can obviously be performed only in the frequency domain. We refer the reader to \cite{Mishali_multicoset} for more details. Once the support $S$ is known, perfect reconstruction of the spectrum can be obtained as follows \begin{eqnarray} \label{eq:recs} \mathbf{\hat{r}_x}^S(f) &=& \mathbf{\Phi}_S^{\dagger} \mathbf{r_z}(f) \\ \mathbf{\hat{r}}_{\mathbf{x}_i}(f) &=& 0 \quad \forall i \notin S. \nonumber \end{eqnarray} \section{Simulation Results} \label{sec:simulations} We now demonstrate power spectrum reconstruction from sub-Nyquist samples for the first and third scenarii. We also investigate the impact of several simulation parameters on the receiver operating characteristic (ROC) of our detector: signal-to-noise ratio (SNR), sensing time, number of averages (for estimating the autocorrelation matrix $\mathbf{R_z}$ as explained below) and sampling rate. Lastly, we compare the performance of our detector to one based on spectrum reconstruction from sub-Nyquist samples and a second one based on power spectrum reconstruction from Nyquist samples. Throughout the simulations, we consider the analog model and use the MWC analog front-end for the sampling stage. \subsection{Reconstruction in time and frequency domains and detection} We first explain how we estimate the elements of $\bf r_z$. The overall sensing time is divided into $P$ frames of length $K$ samples. In Section \ref{subsec:param}, we examine different choices of $P$ and $K$ for a fixed sensing time. In the digital model, the estimate of $\bf r_z$ is simply obtained by averaging the correlation of the samples $\bf z$ over $P$ frames as follows \begin{equation} \mathbf{\hat{r}_z}=\text{vec}\left(\frac{1}{P} \sum_{p=1}^P \mathbf{z}^p (\mathbf{z}^p)^H\right), \end{equation} where $\mathbf{z}^p$ is the vector of sub-Nyquist samples of the $p$th frame.
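The frame-averaged estimate above converges to the true correlation as $P$ grows. The following toy simulation (all sizes, coefficient powers, and the i.i.d.-frame model are assumptions of this sketch) compares $\frac{1}{P}\sum_p \mathbf{z}^p(\mathbf{z}^p)^H$ with the exact $\mathbf{A}\mathbf{R_x}\mathbf{A}^H$:

```python
import numpy as np

# Convergence of the frame-averaged correlation estimate (a toy simulation;
# all sizes, powers and the frame model are assumptions of this sketch).
rng = np.random.default_rng(8)
M, N, P = 3, 6, 20000
A = rng.standard_normal((M, N))

s_x = rng.uniform(0.5, 2.0, N)                   # assumed coefficient powers
F = np.fft.fft(np.eye(N))
C = np.sqrt(s_x) * rng.standard_normal((P, N))   # i.i.d. frames of coefficients
X = C @ np.conj(F) / N                           # x^p = (1/N) F^H c^p (rows = frames)
Z = X @ A.T                                      # z^p = A x^p

R_z_hat = Z.T @ np.conj(Z) / P                   # (1/P) sum_p z^p (z^p)^H
R_x = F.conj().T @ np.diag(s_x) @ F / N**2       # E[x x^H]
R_z_true = A @ R_x @ A.T
```

The estimation error decays at the usual $O(1/\sqrt{P})$ Monte-Carlo rate, which is why longer sensing times improve detection in the noisy setting.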
In the analog model, in order to estimate the autocorrelation matrix $\mathbf{R_z}(f)$ in the frequency domain, we first compute the estimates $\hat{\mathbf{z}}_i(f)$ of $\mathbf{z}_i(f)$, $1 \le i \le M$, using an FFT on the samples $z_i[n]$ over a finite time window. We then estimate the elements of $\mathbf{R_z}(f)$ as \begin{equation} \mathbf{\hat{R}_z}(i,j,f)=\frac{1}{P} \sum_{p=1}^{P} \hat{\mathbf{z}}^p(i,f) \hat{\mathbf{z}}^p(j,f), \quad f \in \mathcal{F}_s, \end{equation} where $P$ is the number of frames used for averaging the spectrum and $\hat{\mathbf{z}}^p(i,f)$ is the value of the FFT of the samples $\mathbf{z}_i[n]$ from the $p$th frame, at frequency $f$. In order to estimate the autocorrelation matrix $\mathbf{R_z}[n]$ in the time domain, we convolve the samples $z_i[n]$ over a finite time window as \begin{equation} \mathbf{\hat{R}_z}[i,j,n]=\frac{1}{P} \sum_{p=1}^{P} z_i^p[n] * z_j^p[n], \quad n \in [0, T/T_{\text{Nyq}}]. \end{equation} We then use (\ref{eq:rec11}) or (\ref{eq:recs}) in order to reconstruct $\mathbf{\hat{r}_x}(f)$, or their time-domain equivalents to reconstruct $\mathbf{\hat{r}_x}[n]$. We note that the number of samples dictates the number of DFT coefficients in the frequency domain and therefore the resolution of the reconstructed spectrum in the frequency domain. For the analog model, we use the following test statistic \begin{equation} \mathcal{T}_i = \sum |\hat{\mathbf{r}}_{\mathbf{x}_i}|, \qquad 1 \le i \le N, \end{equation} where the sum is performed over frequency or over time, depending on the domain in which we choose to reconstruct $\bf \hat{r}_x$. Obviously, other detection statistics can be used on the reconstructed power spectrum. \subsection{Spectrum reconstruction} We first consider spectrum reconstruction of a non sparse signal. Let $x(t)$ be white Gaussian noise with variance $100$ and Nyquist rate $f_{\text{Nyq}}=10GHz$, with two stop bands.
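The frequency-domain estimate of $\mathbf{R_z}$ above amounts to averaging per-frame FFT cross-products. A sketch follows, with a conjugate on the second factor (as is usual for correlation estimates) and illustrative array sizes; `z` holds $M$ channels, $P$ frames, and $K$ samples per frame.

```python
import numpy as np

def estimate_Rz(z):
    """z: array of shape (M, P, K). Returns hat{R}_z of shape (M, M, K)."""
    Z = np.fft.fft(z, axis=-1)   # per-frame, per-channel FFT
    # hat{R}_z[i, j, f] = (1/P) * sum_p Z[i, p, f] * conj(Z[j, p, f])
    return np.einsum('ipf,jpf->ijf', Z, Z.conj()) / z.shape[1]

rng = np.random.default_rng(2)
M, P, K = 3, 50, 64
z = rng.standard_normal((M, P, K))
Rz = estimate_Rz(z)
# The estimate is Hermitian in the channel indices at every frequency bin.
assert np.allclose(Rz, Rz.conj().transpose(1, 0, 2))
```

Increasing $P$ reduces the variance of the estimate at the cost of sensing time, which is the trade-off examined in Section \ref{subsec:param}.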
We consider $N=65$ spectral bands and $M=33$ analog channels, each with sampling rate $f_s=154MHz$ and with $N_s=131$ samples each. The overall sampling rate is therefore equal to $50.77 \%$ of the Nyquist rate. Figure \ref{fig:sim1} shows the original and the reconstructed spectrum at half the Nyquist rate (both with averaging over $P=1000$). \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{rec1} \caption{Original and reconstructed spectrum of a non sparse signal at half the Nyquist rate.} \label{fig:sim1} \end{center} \end{figure} We now consider the blind reconstruction of the power spectrum of a sparse signal. Let the number of potentially active transmissions be $N_{\text{sig}}=3$. Each transmission is generated by filtering white Gaussian noise with a low pass filter whose two-sided bandwidth is $B=80MHz$, and modulating it with a carrier frequency drawn uniformly at random between $-f_{\text{Nyq}}/2=-5GHz$ and $f_{\text{Nyq}}/2=5GHz$. We consider $N=65$ spectral bands and $M=7$ analog channels, each with sampling rate $f_s=154MHz$ and with $K=171$ samples per channel and per frame. The overall sampling rate is equal to $10.8 \%$ of the Nyquist rate, and $1.9$ times the Landau rate. We consider additive white Gaussian noise. Figures \ref{fig:sim21}-\ref{fig:sim26} show the original and the reconstructed power spectrum for different values of the number of frames $P$ and of the SNR.
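As a quick sanity check of the quoted rate in the sparse-signal experiment above ($M=7$ channels at $f_s=154$MHz against a $10$GHz Nyquist rate):

```python
# Overall sampling rate relative to the Nyquist rate, values from the text.
M, f_s, f_nyq = 7, 154e6, 10e9
fraction = M * f_s / f_nyq
assert abs(fraction - 0.108) < 1e-3   # ~10.8% of the Nyquist rate
```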
\begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{rec_0_1} \caption{Original and reconstructed spectrum: $P=1$ and SNR$=0$dB.} \label{fig:sim21} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{rec_0_25} \caption{Original and reconstructed spectrum: $P=25$ and SNR$=0$dB.} \label{fig:sim22} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{rec_0_50} \caption{Original and reconstructed spectrum: $P=50$ and SNR$=0$dB.} \label{fig:sim23} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{rec_2_100} \caption{Original and reconstructed spectrum: $P=100$ and SNR$=2$dB.} \label{fig:sim24} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{rec_0_100} \caption{Original and reconstructed spectrum: $P=100$ and SNR$=0$dB.} \label{fig:sim25} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{rec_m2_100} \caption{Original and reconstructed spectrum: $P=100$ and SNR$=-2$dB.} \label{fig:sim26} \end{center} \end{figure} \subsection{Practical parameters} \label{subsec:param} In this section, we consider the influence of several practical parameters on the performance of our detector. The experiments are set up as follows. We consider two scenarios where the actual number of transmissions is 2 and 3, namely $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$ respectively. The number of potentially active transmissions $N_{\text{sig}}$ is set to be 6. Each transmission is similar to those described in the previous experiment. We consider $N=115$ spectral bands and $M$ analog channels, each with sampling rate $f_s=87MHz$. The number of samples per channel and per frame is $K$ and the averaging is performed over $P$ frames. Each experiment is repeated over $500$ realisations. 
In the first experiment, we illustrate the impact of SNR on the detection performance. We consider $M=8$ channels. The overall sampling rate is thus $695MHz$, which is a little below $7\%$ of the Nyquist rate and a little above $1.2$ times the Landau rate. Here, $K=171$ and $P=10$ frames. Figure \ref{fig:sim3} shows the ROC of the detector for different values of SNR. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{roc_snr1} \caption{Influence of the SNR on the ROC.} \label{fig:sim3} \end{center} \end{figure} We observe that up to a certain value of the SNR (between $5$dB and $0$dB in this setting), the detection performance does not decrease much. Below that, the performance decreases rapidly. Another observation that can be made concerns the particular form of the ROC curves. These can be split into two parts. The first part corresponds to a regular ROC curve, where the probability of detection increases faster than linearly with the probability of false alarm. After a certain point, the increase becomes linear. This corresponds to the realisations where the support recovery failed and the energy measured in the band of interest is zero both for $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$. The more such realisations there are, the lower the point where the curve's nature changes. As one can expect, this point is lower for lower SNRs. In the second experiment, we vary the sensing time per frame and keep the number of frames $P=10$ constant. We consider the same sampling parameters as in the previous experiment and set the SNR to be $2$dB. Figure \ref{fig:sim4} shows the ROC of the detector for different values of the number of samples per frame. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{roc_k1} \caption{Influence of the number of samples per frame on the ROC.} \label{fig:sim4} \end{center} \end{figure} In the third experiment, we vary the number of frames and keep the number of samples per frame $K=20$ constant. 
We consider the same sampling parameters as above and set the SNR to be $0$dB. Figure \ref{fig:sim5} shows the ROC of the detector for different values of the number of frames. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{roc_p1} \caption{Influence of the number of frames on the ROC.} \label{fig:sim5} \end{center} \end{figure} We observe that above a certain threshold, increasing the number of averages $P$ has almost no effect on the detection performance. An interesting question is how, given a limited overall sensing time, or equivalently a limited number of samples, one should set the number of frames $P$ and the number of samples per frame $K$. In the next experiment, we investigate different choices of $P$ and $K$ for a fixed total number of samples per channel $PK=100$. The rest of the parameters remain unchanged. Figure \ref{fig:sim6} shows the ROC of the detector for those different settings. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{roc_pk1} \caption{Trade-off between the number of frames and the number of samples per frame.} \label{fig:sim6} \end{center} \end{figure} We can see that in this case, the best performance is attained for a balanced division of the number of samples, namely $P=10$ frames with $K=10$ samples each. Finally, we show the impact of the number of channels, namely the overall sampling rate, on the performance of our detector. The sampling parameters are set as above and the SNR is $0$dB. Figure \ref{fig:sim7} shows the ROC of the detector for different values of the number of channels. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{roc_m1} \caption{Influence of the sampling rate on the ROC.} \label{fig:sim7} \end{center} \end{figure} The minimal number of channels in this case is $7$. Due to the presence of noise, we need to sample above that threshold to obtain good detection performance.
We observe that above $9$ channels, the performance increases very little with the number of channels, whereas below the threshold of $7$ it decreases drastically. \subsection{Performance Comparisons} We now compare our approach to sub-Nyquist spectrum sensing and Nyquist power spectrum sensing. \subsubsection{Power Spectrum versus Spectrum Reconstruction} \label{comp_mishali} First, we consider the approach of \cite{Mishali_theory}, where the signal itself is reconstructed from sub-Nyquist samples. We compute the energy of the frequency band of interest and compare this spectrum based detection to our power spectrum based detection. We consider the exact same signal as in the previous section. The sampling parameters are as follows: $N=115$ spectral bands and $M=12$ analog channels, each with sampling rate $f_s=87MHz$. We recall that the minimal sampling rate for signal recovery is twice that needed for power spectrum recovery. The overall sampling rate is therefore $1.04GHz$, namely a little above $10\%$ of the Nyquist rate and almost $1.9$ times the Landau rate. The number of samples per channel and per frame is $K=10$ and the averaging is performed over $P=10$ frames. In the signal reconstruction approach, no averaging needs to be performed. Therefore, we use a total of $PK=100$ samples. Each experiment is repeated over $500$ realisations. Figure \ref{fig:sim31} shows the ROC of both detectors for different values of SNR. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{roc_sps} \caption{Power spectrum versus spectrum reconstruction.} \label{fig:sim31} \end{center} \end{figure} We observe that power spectrum sensing outperforms spectrum sensing. \subsubsection{Nyquist versus Sub-Nyquist Sampling} We now compare our approach to power spectrum sensing from Nyquist rate samples.
We consider the exact same signal and sampling parameters as in Section \ref{comp_mishali}, except for the number of channels, which is set to $M=9$, leading to an overall sampling rate of $783MHz$, namely a little above $7.8\%$ of the Nyquist rate and $1.4$ times the Landau rate. Figure \ref{fig:sim32} shows the ROC of both detectors for different values of SNR. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{roc_nsn} \caption{Sub-Nyquist versus Nyquist sampling.} \label{fig:sim32} \end{center} \end{figure} It can be seen that our detector performs similarly to the Nyquist rate one up to a certain SNR threshold (around $5$dB in this setting). Below that threshold, the performance of our sub-Nyquist receiver decreases faster as the SNR decreases, whereas the Nyquist rate detection performance remains almost unchanged. This comes from the fact that the sensitivity of energy detection is amplified when performed on sub-Nyquist samples due to noise aliasing \cite{Castro}. \section{Conclusion} In this paper, we considered power spectrum reconstruction of stationary signals from sub-Nyquist samples. We investigated two signal models: the multiband model, referred to as the analog model, and the multi-tone model, converted into a digital model. For the analog setting, two sampling schemes were adopted, and for the digital one, two power spectrum reconstruction schemes were considered. We showed that all variations of both the analog and the digital models can be treated and analyzed in a uniform way in the frequency domain, whereas a time domain analysis is a lot more complex. We derived the minimal sampling rate for perfect power spectrum reconstruction in noiseless settings for the cases of sparse and non sparse signals as well as blind and non blind detection. We also presented recovery techniques for each one of those scenarios.
Simulations show power spectrum reconstruction at sub-Nyquist rates as well as the influence of practical parameters such as noise, sensing time and sampling rate on the ROC of the detector. We also showed that sub-Nyquist power spectrum sensing outperforms sub-Nyquist spectrum sensing and that our detector performance is comparable to that of a Nyquist rate power spectrum based detector up to a certain SNR threshold. \section*{Acknowledgement} The authors would like to thank Prof. Geert Leus for his valuable input and useful comments. This work is supported in part by the Israel Science Foundation under Grant no. 170/10, in part by the Ollendorf Foundation, in part by the SRC, and in part by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). \bibliographystyle{IEEEtran} \bibliography{IEEEabrv,CompSens_ref} \end{document}
\begin{document} \maketitle \thispagestyle{empty} \begin{abstract} This paper proposes a ``quasi-synchronous'' design approach for signal processing circuits, in which timing violations are permitted, but without the need for a hardware compensation mechanism. The case of a low-density parity-check (LDPC) decoder is studied, and a method for accurately modeling the effect of timing violations at a high level of abstraction is presented. The error-correction performance of code ensembles is then evaluated using density evolution while taking into account the effect of timing faults. Following this, several quasi-synchronous LDPC decoder circuits based on the offset min-sum algorithm are optimized, providing a 23\%--40\% reduction in energy consumption or energy-delay product, while achieving the same performance and occupying the same area as conventional synchronous circuits. \end{abstract} \section{Introduction}\label{sec:intro} The time required for a signal to propagate through a CMOS circuit varies depending on several factors. Some of the variation results from physical limitations: the delay depends on the initial and final charge state of the circuit. Other variations are due to the difficulty (or impossibility) of controlling the fabrication process and the operating conditions of the circuit \cite{ghosh:2010}. As process technologies approach atomic scales, the magnitude of these variations is increasing, and reducing the supply voltage to save energy increases the variations even further \cite{dreslinski:2010}. The variation in propagation delay is a source of energy inefficiency for synchronous circuits since the clock period is determined by the worst delay. One approach to alleviate this problem is to allow timing violations to occur. 
While this would normally be catastrophic, some applications (in signal processing or in error-correcting decoding, for example) can tolerate a degradation in the operation of the circuit, either because an approximation to the ideal output suffices, or because the algorithm intrinsically rejects noise. This paper proposes an approach to the design of systems that are tolerant to timing violations. In particular we apply this approach to the design of energy-optimized low-density parity-check (LDPC) decoder circuits based on a state-of-the-art soft-input algorithm and architecture. Other approaches have been previously proposed to build synchronous systems that can tolerate some timing violations. In \emph{better than worst-case} (BTWC) \cite{austin:2005} or \emph{voltage over-scaled} (VOS) circuits, a mechanism is added to the circuit to compensate or recover from timing faults. One such method introduces special latches that can detect timing violations, and can trigger a restart of the computation when needed \cite{bowman:2009,das:2009}. Since the circuit's latency is increased significantly when a timing violation occurs, this approach is only suitable for tolerating small fault rates (e.g., $10^{-7}$) and for applications where the circuit can be easily restarted, such as microprocessors that support speculative execution. In most signal processing tasks, it is acceptable for the output to be non-deterministic, which creates more possibilities for dealing with timing violations. A seminal contribution in this area was the algorithmic noise tolerance (ANT) approach \cite{hegde:1999,shim:2004}, which is to allow timing violations to occur in the main processing block, while adding a separate reliable processing block with reduced precision that is used to bound the error of the main block, and provide algorithmic performance guarantees. 
The downside of the ANT approach is that it relies on the assumption that timing violations will first occur in the most significant bits. If that is not the case, the precision of the circuit can degrade to the precision of the auxiliary block, limiting the scheme's usefulness. For many circuits, including some adder circuits \cite{liu:2010}, this assumption does not hold. Furthermore, the addition of the reduced precision block and of a comparison circuit increases the area requirement. We propose a design methodology for digital circuits with a relaxed synchronicity requirement that does not rely on any hardware compensation mechanism. Instead, we provide performance guarantees by re-analyzing the algorithm while taking into account the effect of timing violations. We say that such systems are \emph{quasi-synchronous}. LDPC decoding algorithms are good candidates for a quasi-synchronous implementation because their throughput and energy consumption are limiting factors in many applications, and like other signal processing algorithms, their performance is assessed in terms of expected values. Furthermore, since the algorithm is iterative, there is a possibility to optimize each iteration separately, and we show that this allows for additional energy savings. The topic of unreliable LDPC decoders has been discussed in a number of contributions. Varshney studied the Gallager-A and the Sum-Product decoding algorithms when the computations and the message exchanges are ``noisy'', and showed that the density evolution analysis still applies \cite{varshney:2011}. The Gallager-B algorithm was also analyzed under various scenarios \cite{leduc-primeau:2012,tabatabaei-yazdi:2013,huang:2014}. A model for an unreliable quantized Min-Sum decoder was proposed in \cite{ngassa:2013}, which provided numerical evaluation of the density evolution equations as well as simulations of a finite-length decoder. 
Faulty finite-alphabet decoders were studied in \cite{dupraz:2015}, where it was proposed to model the decoder messages using conditional distributions that depend on the ideal messages. The quantized Min-Sum decoder was also analyzed in \cite{balatsoukas-stimming:2014} for the case where faults are the result of storing decoder messages in an unreliable memory. The specific case of faults caused by delay variations in synchronous circuits is considered in \cite{brkic:2015}, where a deviation model is proposed for binary-output circuits in which a deviation occurs probabilistically when the output of a circuit changes from one clock cycle to the next, but cannot occur if the output does not change. While none of these contributions explicitly consider the relationship between the reliability of the decoder's implementation and the energy it consumes, there have been some recent developments in the analysis of the energy consumption of reliable decoders. Lower bounds for the scaling of the energy consumption of error-correction decoders in terms of the code length are derived in \cite{blake:2015}, and tighter lower bounds that apply to LDPC decoders are derived in \cite{blake:2015b}. The power required by regular LDPC decoders is also examined in \cite{ganesan:2016}, as part of the study of the total power required for transmitting and decoding the codewords. In this paper, we present a modeling approach that provides an accurate representation of the deviations introduced in the output of an LDPC decoder processing circuit in the presence of occasional timing violations, while simultaneously measuring its energy consumption. We show that this model can be used as part of a density evolution analysis to evaluate the channel threshold and iterative performance of the decoder when affected by timing faults. 
Finally, we show that under mild assumptions, the problem of minimizing the energy consumption of a quasi-synchronous decoder can be simplified to the energy minimization of a small test circuit, and present an \change{approximate} optimization method similar to Gear-Shift Decoding \cite{ardakani:2006} that finds sequences of quasi-synchronous decoders that minimize decoding energy subject to performance constraints. \change{ The remainder of the paper is organized as follows. Section~\ref{sec:LDPC} reviews LDPC codes and describes the circuit architecture of the decoder that is used to measure timing faults. Section~\ref{sec:deviation} presents the \emph{deviation} model that represents the effect of timing faults on the algorithm. Section~\ref{sec:analysis} then discusses the use of density evolution and of the deviation model to predict the performance of a decoder affected by timing faults. Finally, Section~\ref{sec:optimization} presents the energy optimization strategy and results, and Section~\ref{sec:conclusion} concludes the paper. Additional details on the CAD framework used for circuit measurements can be found in Appendix~\ref{sec:appendix:workflow}, and Appendix~\ref{sec:appendix:testcircuit} provides some details concerning the simulation of the test circuits. } \section{LDPC Decoding Algorithm and Architecture}\label{sec:LDPC} \subsection{Code and Channel}\label{sec:code-channel} We consider a communication scenario where a sequence of information bits is encoded using a binary LDPC code of length $n$. The LDPC code described by an $m \times n$ binary parity-check matrix $H = [ h_{j,i} ]$ consists of all length-$n$ row vectors $v$ satisfying the equation $vH^\textsc{T} = 0$. Equivalently, the code can be described by a bipartite Tanner graph with $n$ \emph{variable nodes} (VN) and $m$ \emph{check nodes} (CN) having an edge between the $i$-th variable node and the $j$-th check node if and only if $h_{j,i} \neq 0$. 
We assume that the LDPC code is regular, which means that in the code's Tanner graph each variable node has a fixed degree $d_v$ and each check node has a fixed degree $d_c$. Let us assume that the transmission takes place over the binary-input additive white Gaussian noise (BIAWGN) channel. A codeword $\bvec{x} \in \{-1,1\}^n$ is transmitted through the channel, which outputs the received vector $\bvec{y} = \bvec{x} + \bvec{w}$, where $\bvec{w}$ is a vector of $n$ \gls{iid} zero-mean normal random variables with variance $\sigma^2$. We use $x_i$ and $y_i$ to refer to the input and output of the channel at time $i$. The BIAWGN channel has the property of being output symmetric, meaning that $\phi_{y_i \vert x_i}\left( q \cond 1 \right) = \phi_{y_i \vert x_i}\left( -q \cond -1 \right)$, and memoryless, meaning that $\phi_{\bm{y} \vert \bm{x}}\left(\bm{q} \cond \bm{r}\right) = \prod_{i=1}^n \phi_{y_i \vert x_i}\left(q_i \cond r_i\right)$. Throughout the paper, $\pmf(\cdot)$ denotes a probability density function. The BIAWGN channel can also be described multiplicatively as $\bm{y}=\bm{xz}$, where $\bm{z}$ is a vector of \gls{iid} normal random variables with mean $1$ and variance $\sigma^2$. Let the \emph{belief} output $\mu_i$ of the channel at time $i$ be given by \begin{equation}\label{eq:channelbelief} \mu_i = \frac{\alpha y_i}{ \sigma^2} \, , \end{equation} with $\alpha>0$. Note that if $\alpha=2$ then $\mu_i$ is a log-likelihood ratio. Assuming that $x_i=1$ was transmitted, then $\mu_i$ has a normal distribution with mean $\alpha/\sigma^2$ and variance $\alpha^2/\sigma^2$. Writing $\rho = \alpha/\sigma^2$, we see that $\mu_i$ is Gaussian with mean $\rho$ and variance $\alpha \rho$, that is, the distribution of $\mu_i$ is described by a single parameter $\rho$. We call this distribution a \gls{1D} normal distribution. 
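A quick numerical check of this one-parameter description, under illustrative values of $\alpha$ and $\sigma^2$: given $x_i=1$, the belief $\mu_i$ has mean $\rho$ and variance $\alpha\rho$, and the two closed forms of the error probability derived next in \eqref{eq:errprob} coincide, since $\sqrt{\rho/(2\alpha)} = 1/\sqrt{2\sigma^2}$.

```python
import math
import numpy as np

alpha, sigma2 = 2.0, 0.5
rho = alpha / sigma2                         # rho = alpha / sigma^2

# Monte Carlo check of the 1-D normal distribution of mu = alpha*y/sigma^2.
rng = np.random.default_rng(3)
y = 1.0 + math.sqrt(sigma2) * rng.standard_normal(200_000)
mu = alpha * y / sigma2
assert abs(mu.mean() - rho) < 0.05           # mean ~ rho
assert abs(mu.var() - alpha * rho) < 0.2     # variance ~ alpha * rho

# The two equivalent expressions for the error probability p_e.
pe_a = 0.5 * math.erfc(1.0 / math.sqrt(2.0 * sigma2))
pe_b = 0.5 * math.erfc(math.sqrt(rho / (2.0 * alpha)))
assert abs(pe_a - pe_b) < 1e-12
```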
The distribution of $\mu_i$ can also be specified using other equivalent parameters, such as the probability of error $p_e$, given by \ifCLASSOPTIONdraftcls \begin{equation}\label{eq:errprob} p_e = \Pr\left(\mu_i < 0 \middle\vert x_i=1\right) = \Pr\left(\mu_i>0 \middle\vert x_i=-1\right) = \frac{1}{2} \,\mathrm{erfc}\!\left(\frac{1}{\sqrt{2\sigma^2}} \right)= \frac{1}{2} \, \mathrm{erfc}\!\left(\sqrt{\frac{\rho}{2\alpha}}\right), \end{equation} \else \begin{multline}\label{eq:errprob} p_e = \Pr\left(\mu_i < 0 \middle\vert x_i=1\right) = \Pr\left(\mu_i>0 \middle\vert x_i=-1\right)\\ = \frac{1}{2} \,\mathrm{erfc}\!\left(\frac{1}{\sqrt{2\sigma^2}} \right)= \frac{1}{2} \, \mathrm{erfc}\!\left(\sqrt{\frac{\rho}{2\alpha}}\right), \end{multline} \fi where $\mathrm{erfc}(\cdot)$ is the complementary error function. \subsection{\changeB{Decoding} Algorithm}\label{sec:algorithm} The well-known Offset Min-Sum \change{(OMS)} algorithm is a simplified version of the Sum-Product algorithm that can usually achieve similar error-correction performance. It has been widely used in implementations of LDPC decoders \cite{cushon:2010,roth:2010,sun:2011}. To make our decoder implementation more realistic and show the flexibility of our design framework, we \change{present an algorithm and architecture that support} a row-layered message-passing schedule. Architectures optimized for this schedule have proven effective for achieving efficient implementations of LDPC decoders \cite{roth:2010,sun:2011,cevrero:2010}. Using a row-layered schedule also allows the decoder to be pipelined to increase the circuit's utilization. In a row-layered LDPC decoder, the rows of the parity-check matrix are partitioned into $L$ sets called \emph{layers}. To simplify the description of the decoding algorithm, we assume that all the columns in a given layer contain exactly one non-zero element. This implies that $L=d_v$. 
Note that codes with \emph{at most} one non-zero element per column and per layer can also be supported by the same architecture, simply requiring a modification of the way algorithm variables are indexed. Let us define a set $\mathcal{L}_\ell$ containing the indices of the rows of $H$ that are part of layer $\ell$, $\ell \in [1,L]$. We denote by $\mu_{i,j}^{(t)}$ a message sent from VN $i$ to CN $j$ during iteration $t$, and by $\lambda_{i,j}^{(t)}$ a message sent from CN $j$ to VN $i$. It is also useful to refer to the CN neighbor of a VN $i$ that is part of layer $\ell$. Because of the restriction mentioned above, there is exactly one such CN, and we denote its index by $J(i,\ell)$. Finally, we denote the channel information corresponding to the $i$-th codeword bit by $\mu^{(0)}_i$, since it also corresponds to the first message sent by a variable node $i$ to all its neighboring check nodes. The Offset Min-Sum algorithm used with a row-layered message-passing schedule is described in Algorithm~\ref{alg:loms}. In the algorithm, $\mathcal{N}(j)$ denotes the set of indices corresponding to VNs that are neighbors of a check node $j$, and $\Lambda_i$ represents the current sum of incoming messages at a VN $i$.
The function $\min_{1,2}(S)$ returns the smallest and second smallest values in the set $S$, $C \geq 0$ is the offset parameter, and \begin{equation*} \mathrm{sgn}(x)= \begin{cases} 1 & \text{if $x \geq 0$,} \\ -1 & \text{if $x < 0$.} \end{cases} \end{equation*} \begin{algorithm}[tb] \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \DontPrintSemicolon \LinesNumbered \Input{$\{\mu^{(0)}_1, \mu^{(0)}_2, \ldots, \mu^{(0)}_n\}$} \Output{$\{\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n\}$} \Begin{ \tcp{Initialization} $\Lambda_i \gets \mu^{(0)}_i, \; \forall i \in [1,n]$\; $\lambda_{i,j}^{(0)} \gets 0, \; \forall j \in [1,m], i \in \mathcal{N}(j)$\; \tcp{Decoding} \For{$t \gets 1$ \KwTo $T$ } { \For{$\ell \gets 1$ \KwTo $L$} { \tcp{VN to CN messages} $\mu_{i,J(i,\ell)}^{(t)} \gets \Lambda_i - \lambda_{i,J(i,\ell)}^{(t-1)}, \; \forall i$\; \tcp{CN to VN messages} \For{$j \in \mathcal{L}_\ell$} { $[m_1, m_2] \gets \min_{1,2}(\{ |\mu_{i,j}^{(t)}| : i \in \mathcal{N}(j)\})$\; $m_1 \gets \max(0, m_1 - C)$\; $m_2 \gets \max(0, m_2 - C)$\; $s_T \gets \prod_{i \in \mathcal{N}(j)} \mathrm{sgn}(\mu_{i,j}^{(t)})$\; \For{$i \in \mathcal{N}(j)$} { $s_i \gets s_T \cdot \mathrm{sgn}(\mu_{i,j}^{(t)})$\; \lIf{ $|\mu_{i,j}^{(t)}| = m_1$ } { $\lambda_{i,j}^{(t)} \gets s_i \cdot m_2$ } \lElse { $\lambda_{i,j}^{(t)} \gets s_i \cdot m_1$ } } } \tcp{VN update} $\Lambda_i \gets \mu_{i,J(i,\ell)}^{(t)} + \lambda_{i,J(i,\ell)}^{(t)}, \; \forall i$\; \tcp{VN decision} \For{$i \in \{1,2,\dots,n\}$} { \lIf{$\Lambda_i > 0$} { $\hat{x}_i \gets 1$ } \lElseIf{$\Lambda_i < 0$} { $\hat{x}_i \gets -1$ } \lElse { $\hat{x}_i \gets 1 \,\mathrm{or}\, -1 \text{ with equal probability}$ } } } } } \caption{OMS with a row-layered schedule.} \label{alg:loms} \end{algorithm} \subsection{Architecture} The Tanner graph of the code can also be used to represent the computations that must be performed by the decoder. 
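As a behavioural reference for these computations, the check-node update of Algorithm~\ref{alg:loms} (the lines computing $m_1$, $m_2$, the signs and the outgoing $\lambda_{i,j}$) can be sketched in Python; this is an illustrative model of the arithmetic, not the circuit.

```python
import numpy as np

def cn_update(mu, C):
    """mu: incoming VN->CN messages at one CN; C: offset. Returns CN->VN messages."""
    mag = np.abs(mu)
    order = np.argsort(mag)
    m1, m2 = mag[order[0]], mag[order[1]]        # two smallest magnitudes
    m1, m2 = max(0.0, m1 - C), max(0.0, m2 - C)  # apply offset, clipped at zero
    sgn = np.where(mu >= 0, 1.0, -1.0)           # sgn(0) = 1, as in the text
    s_T = np.prod(sgn)                           # total sign product
    out = s_T * sgn * m1                         # default: first minimum
    out[order[0]] = s_T * sgn[order[0]] * m2     # minimum input gets m2 instead
    return out

lam = cn_update(np.array([3.0, -1.0, 2.0]), C=0.5)
# The message back to the minimum-magnitude input uses the second minimum.
assert np.allclose(lam, [-0.5, 1.5, -0.5])
```

Note that $s_T \cdot \mathrm{sgn}(\mu_{i,j})$ equals the product of the signs of the \emph{other} incoming messages, matching the extrinsic sign rule of the algorithm.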
At each decoding iteration, one message is sent from variable to check nodes on every edge of the graph, and again from check to variable nodes. We call a variable node processor (VNP) a circuit block that is responsible for generating messages sent by a variable node, and similarly a check node processor (CNP) a circuit block generating messages sent by a check node. In a row-layered architecture in which the column weight of layer subsets is at most 1, there is at most one message to be sent and received for each variable node in a given layer. Therefore VNPs are responsible for sending and receiving one message per clock cycle. CNPs on the other hand receive and send $d_c$ messages per clock cycle. At any given time, every VNP and CNP is mapped respectively to a VN and a CN in the Tanner graph. The routing of messages from VNPs to CNPs and back can be posed as two equivalent problems. One can fix the mapping of VNs to VNPs and of CNs to CNPs, and find a permutation of the message sequence that matches VNP outputs to CNP inputs, and another permutation that matches CNP outputs to VNP inputs. Alternatively, if VNPs process only one message at a time, one can fix the connections between VNPs and CNPs, and choose the assignment of VNs to VNPs to achieve correct message routing. We choose the latter approach because it allows studying the computation circuit without being concerned with the routing of messages. The number of CNPs instantiated in the decoder can be adjusted based on throughput requirements from $1$ to $m/L$ (the number of rows in a layer). As the number of CNPs is varied, the number of VNPs will vary from $d_c$ to $n$. An architecture diagram showing one VNP and one CNP is shown in Fig.~\ref{fig:layered_arch}. In reality, a CNP is connected to $d_c-1$ additional VNPs, which are not shown. The memories storing the belief totals $\Lambda_i$ and the intrinsic beliefs $\lambda_{i,j}^{(t)}$ are also not shown.
The part of the VNP responsible for sending a message to the CNP is called the VNP \emph{front}, and the part responsible for processing a message received from a CNP is called the VNP \emph{back}. The VNP front and back do not have to be simultaneously mapped to the same VN. This makes it easy to vary the number of pipeline stages in the VNPs and CNPs. Fig.~\ref{fig:layered_arch} shows the circuit with two pipeline stages. Messages exchanged in the decoder are fixed-point numbers. The position of the binary point does not have an impact on the algorithm, and therefore the messages sent by VNs in the first iteration can be obtained by rounding the result of \eqref{eq:channelbelief} to the nearest integer, for a suitable choice of $\alpha$. The number of bits in the quantization, the scaling factor $\alpha$, and the OMS offset parameter are chosen based on a density evolution analysis of the algorithm (described in Section~\ref{sec:analysis}). We quantize decoder messages to 6~bits, which yields a decoder with approximately the same channel threshold as a floating-point decoder under a standard fault-free implementation. \begin{figure}[tbp] \begin{center} \includegraphics[width=3.0in]{layered_MS_arch_paper_v2} \caption{Block diagram of the layered Offset Min-Sum decoder architecture.} \label{fig:layered_arch} \end{center} \end{figure} In order to analyze a circuit that is representative of state-of-the-art architectures, we use an optimized architecture for finding the first two minima in each CNP. Our architecture is inspired by the \change{``tree structure''} approach presented in \cite{wey:2008}, but requires fewer comparators. Each pair of CNP inputs is first sorted using the \emph{Sort} block shown in Fig.~\ref{fig:sort-2}. These sorted pairs are then merged recursively using a tree of \emph{Merge} blocks, shown in Fig.~\ref{fig:merge}.
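The sort-and-merge tree can be sketched as follows. This is a behavioral model only, not the logic design; for simplicity an unpaired input is padded with a pseudo-infinity here, rather than handled by a dedicated merge block.

```python
def sort2(a, b):
    """Sort block: orders two values, smallest first."""
    return (a, b) if a <= b else (b, a)

def merge2(p, q):
    """Merge block: given two sorted pairs (min1, min2), return the two
    smallest of the four values."""
    m1 = min(p[0], q[0])
    # Second smallest: the losing top value competes with the runner-up
    # of the pair that supplied m1.
    m2 = min(max(p[0], q[0]), p[1] if p[0] <= q[0] else q[1])
    return (m1, m2)

def min12(values):
    """Two smallest values via a tree of Sort and Merge blocks."""
    pairs = [sort2(values[i], values[i + 1])
             for i in range(0, len(values) - 1, 2)]
    if len(values) % 2 == 1:        # unpaired input: pad with +infinity
        pairs.append((values[-1], float('inf')))
    while len(pairs) > 1:
        nxt = [merge2(pairs[i], pairs[i + 1])
               for i in range(0, len(pairs) - 1, 2)]
        if len(pairs) % 2 == 1:     # carry an unpaired pair to the next level
            nxt.append(pairs[-1])
        pairs = nxt
    return pairs[0]
```

Each \emph{Merge} level needs only two comparisons per pair of pairs, which is the source of the comparator savings over sorting all inputs.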
If the number of CNP inputs is odd, the input that cannot be paired is fed directly into a special merge block with 3 inputs, which can be obtained from the 4-input \emph{Merge} block by removing the $\mathrm{min}_\mathrm{2b}$ input and the bottom multiplexer. Note that changes to the architecture could increase or decrease the robustness of the decoder (see e.g.~\cite{sedighi:2014}), but this is outside the scope of this paper. \begin{figure}[tbp] \begin{center} \subfloat[][Sort block.]{\label{fig:sort-2}\includegraphics[scale=0.45]{sort-2}} \qquad \subfloat[][Merge block.]{\label{fig:merge}\includegraphics[scale=0.45]{dblcmpsel}} \caption{Logic blocks used in the $\mathrm{MIN}_{1,2}$ unit.} \label{fig:minarch} \end{center} \end{figure} \section{\fxnote*[inline,nomargin,author=--Note]{Entirely rewritten}{Deviation Model}}\label{sec:deviation} \subsection{Quasi-Synchronous Systems}\label{sec:deviation:QS} We consider a synchronous system that permits timing violations without hardware compensation, resulting in what we call a \emph{quasi-synchronous} system. Optimizing such a system requires accurate models of both the impact of timing violations and the energy consumption. We propose to achieve this by characterizing a test circuit that is representative of the complete circuit implementation. The term \emph{deviation} refers to the effect of circuit faults on the result of a computation, and the deviation model is the bridge between the circuit characterization and the analysis of the algorithm. We reserve the term \emph{error} for describing the algorithm, in the present case to refer to the incorrect detection of a transmitted symbol. A timing violation occurs in the circuit when the propagation delay between the input and output extends beyond a clock period.
Modeling the deviations introduced by timing violations is challenging because they not only depend on the current input to the circuit, but also on the state of the circuit before the new input was applied. In general, timing violations also depend on other dynamic factors and on process variations. In this paper, we focus on the case where the output of the circuit is entirely determined by the current and previous inputs of the circuit, and by the nominal operating condition of the circuit. We denote by $\Gamma$ the set of possible operating conditions, represented by vectors of parameters, and by $\gamma \in \Gamma$ a particular operating condition. For example, an operating condition might specify the supply voltage and clock period used in the circuit. We assume that all the parameters specified by $\gamma$ are deterministic. \subsection{Test Circuit} \begin{figure}[tbp] \begin{center} \includegraphics[width=2.6in]{LDPC_tree_dev_model} \caption{Computation tree of an LDPC decoder combined with the deviation model (for a regular LDPC code with $d_c=4$ and $d_v=3$).} \label{fig:LDPC_tree_dev_model} \end{center} \end{figure} The operation of an LDPC decoder can be represented using its one-iteration computation tree, which models the generation of a VN-to-CN message in terms of messages $\mu^{(t)}$ sent in the previous iteration. There are $(d_v-1)$ check nodes in the tree. Each of these check nodes receives $(d_c-1)$ messages from neighboring variable nodes, and generates a message sent to the one VN whose message was excluded from the computation. This VN then generates an extrinsic message based on the channel prior $\mu_i^{(0)}$ and on the messages received from neighboring check nodes. An example of a computation tree is shown within the dashed box in Fig.~\ref{fig:LDPC_tree_dev_model}. 
For convenience, we choose to measure deviations on an implementation of this computation tree, so that measurements directly correspond to the progress made by the decoder in one iteration. As discussed in more detail in Appendix~\ref{sec:appendix:testcircuit}, the basic processing block of a row-layered decoder handles the messages to and from one check node. The test circuit is therefore built by re-using a basic block $d_v-1$ times. Since the test circuit is synchronous, we can represent it as a discrete-time system. Let $X_k$ be the input at clock cycle $k$. When timing violations are allowed to occur, the corresponding\footnote{The circuit could require one or several clock cycles to generate the first output, but this is irrelevant to the characterization of the computation.} circuit output $Z_k$ can be expressed as $Z_k = g(X_k, S_k)$, where $S_k$ represents the state of the circuit at the beginning of cycle $k$, and $g$ is some deterministic function. Equivalently, we can write $\mu_{i,j}^{(t+1)} = g(\bm{\mu}^{(t)}, S_k)$, where $\bm{\mu}^{(t)}$ is a vector containing all the VN-to-CN messages that form the input of the computation tree, and $\mu_{i,j}^{(t+1)}$ is a VN-to-CN message that will be sent in the next iteration. A sequence of message vectors $\bm{\mu}^{(t)}$ can be mapped to a sequence of circuit inputs $X_k$ in various ways. As is common with the type of decoder architecture considered here, we assume that all processing circuits are re-used several times during the same iteration $t$ and layer $\ell$. Therefore, for a fixed $(t,\ell)$, the sequence of circuit inputs $X_k$ forms an \gls{iid} process. Since $S_k$ depends on input $X_{k-1}$ (and possibly also on other previous inputs), but not on $X_k$, $S_k$ and $X_k$ are independent. At the output, $Z_k$ and $Z_{k-1}$ are not independent, but it is possible to design the architecture so that correlated outputs are not associated with the same Tanner graph nodes or with neighboring nodes.
This occurs naturally in a row-layered architecture, since each variable node is only updated once in each layer. Therefore, it is sufficient to consider the marginal distribution of the circuit's output, neglecting the correlation in successive outputs. \subsection{Deviation Model}\label{sec:deviation:model} We have seen above that a decoder message $\mu_{i,j}^{(t+1)}$ can be expressed as a function of the messages $\bm{\mu}^{(t)}$ received by the neighboring check nodes in the previous iteration and of the state of the processing circuit. To separate the deviations from the ideal operation of the decoder, it is helpful to decompose a decoding iteration into the ideal computation, followed by a transmission through a deviation channel. This model is shown in Fig.~\ref{fig:LDPC_tree_dev_model}, where $\nu_{i,j}^{(t+1)}$ is the message that would be sent from variable node $i$ to check node $j$ during iteration $t+1$ if no deviations had occurred during iteration $t$. For the first messages sent in the decoder at $t=0$, the computation circuits are not used and therefore no deviation can occur, and we simply have $\mu_{i,j}^{(0)} = \nu_{i,j}^{(0)}$. Since we neglect correlations in successive circuit outputs, the deviation channel is memoryless. Unlike typical channel models where the noise is independent from other variables in the system, the deviation $D_{i,j}^{(t)}$ is a function of the current circuit input $X_k=\bm{\mu}^{(t)}$ and of the current state $S_k$. However, modeling deviations directly in terms of $\bm{\mu}^{(t)}$ would make the model too complex, because of the large dimensionality of the input. To simplify the model, we consider only the value of the current output, and model deviations in terms of the conditional distribution $\pmf\left(\mu_{i,j}^{(t+1)} \cond \nu_{i,j}^{(t+1)} \right)$, an approach that was also used in \cite{dupraz:2015}. 
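The decomposition of an iteration into an ideal computation followed by a memoryless deviation channel can be illustrated with the following sketch, in which the table `dev_pmf` is a hypothetical stand-in for the measured conditional distribution $\pmf(\mu_{i,j}^{(t+1)} \cond \nu_{i,j}^{(t+1)})$; the probabilities are invented for illustration.

```python
import random

def apply_deviation(nu, dev_pmf, rng):
    """Pass an ideal message nu through a memoryless deviation channel.

    dev_pmf : dict mapping nu -> list of (mu, probability) pairs,
              a stand-in for a measured P(mu | nu).
    """
    r = rng.random()
    acc = 0.0
    for mu, p in dev_pmf[nu]:
        acc += p
        if r < acc:
            return mu
    return dev_pmf[nu][-1][0]      # guard against rounding of probabilities

# Toy deviation channel: with prob. 0.9 the message is unchanged, with
# prob. 0.1 its magnitude is attenuated by one (hypothetical numbers).
dev_pmf = {2: [(2, 0.9), (1, 0.1)], -2: [(-2, 0.9), (-1, 0.1)]}
rng = random.Random(0)
faulty = [apply_deviation(2, dev_pmf, rng) for _ in range(10000)]
```

The ideal update produces $\nu_{i,j}^{(t+1)}$; sampling from `dev_pmf` then yields the faulty message $\mu_{i,j}^{(t+1)}$ actually passed to the next iteration.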
To improve the accuracy of the model, it is also possible to consider the value of the transmitted bit $x_i$ associated with VN $i$. Since the faulty messages depend on the circuit state, the deviation model is obtained by averaging over the states $S_k$: \ifCLASSOPTIONdraftcls \begin{equation} \else \begin{multline} \fi \label{eq:devmodel1} \pmf\left(\mu_{i,j}^{(t+1)} \cond \nu_{i,j}^{(t+1)}, x_i \right) = \\ \sum_{S_k} \pmf\left(\mu_{i,j}^{(t+1)} \cond \nu_{i,j}^{(t+1)}, x_i, S_k \right) \pmf\left( S_k \right), \ifCLASSOPTIONdraftcls \end{equation} \else \end{multline} \fi which in practice can be done using a Monte-Carlo simulation of the test circuit. \subsection{Generalized Deviation Model}\label{sec:deviation:genmodel} When evaluating deviations based on \eqref{eq:devmodel1}, it is important to keep in mind that $S_k$ depends on previous circuit inputs. Under the assumption that the previous use of the circuit belonged to the same $(t,\ell)$, $\pmf(S_k)$ is a function of $\pmf(\mu_{i,j}^{(t)})$. As a result, the model described by \eqref{eq:devmodel1} is only valid for a fixed message distribution. Furthermore, because the message distribution depends on the transmitted codeword, the deviation model also depends on the transmitted codeword. Let us first assume that the transmitted codeword is fixed. In this case, the message distribution $\pmf(\mu_{i,j}^{(t)})$ depends on the channel noise, on the iteration index $t$, and on the operating condition of the circuit. Since the messages are affected by deviations for $t>0$, only $\pmf(\mu_{i,j}^{(0)})$ is known a priori. An obvious way to measure deviations is to perform a first evaluation of \eqref{eq:devmodel1} using the known $\pmf(\mu_{i,j}^{(0)})$, and to repeat the process for each subsequent decoding iteration. However, the resulting deviation model is of limited interest, since it depends on the specific message distributions in each iteration. 
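As a toy numeric illustration of the state averaging in \eqref{eq:devmodel1}, consider a hypothetical circuit with two equally likely states; all numbers below are invented for illustration only.

```python
# Hypothetical two-state example: the deviation statistics depend on the
# circuit state S_k, and eq. (devmodel1) averages them over P(S_k).
p_mu_given_state = {            # P(mu | nu = 1, x_i = 1, S_k)
    's0': {1: 0.95, 0: 0.05},   # state after a "quiet" previous input
    's1': {1: 0.80, 0: 0.20},   # state after a "busy" previous input
}
p_state = {'s0': 0.5, 's1': 0.5}   # P(S_k), induced by previous inputs

p_mu = {m: sum(p_mu_given_state[s][m] * p_state[s] for s in p_state)
        for m in (1, 0)}
# p_mu -> {1: 0.875, 0: 0.125}
```

In the actual characterization this average is never formed explicitly: the Monte-Carlo simulation of the test circuit samples the states with their natural frequencies.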
To generate a model that is independent of the iterative progress of the decoder, we first approximate $\pmf(\mu_{i,j}^{(t)})$ as a \gls{1D} Normal distribution with error rate parameter $p_e^{(t)}$ chosen such that \begin{equation}\label{eq:errorprob} p_e^{(t)} = \Pr(x_i \mu_{i,j}^{(t)} < 0) + \frac{1}{2} \Pr(\mu_{i,j}^{(t)} = 0). \end{equation} Note that while $\pmf(\mu_{i,j}^{(0)})$ does correspond exactly to a \gls{1D} Normal distribution, this is not necessarily the case after the first iteration. This approximation is the price to pay to obtain a standalone deviation model, but note that exact message distributions can still be used when evaluating the performance of the faulty decoder. In fact, combining a density evolution based on exact distributions with a deviation model generated using \gls{1D} Normal distributions leads to very accurate predictions in practice \cite{leduc-primeau:2016a}. To construct the deviation model, we perform a number of Monte-Carlo simulations of \eqref{eq:devmodel1} using \gls{1D} input distributions with various $p_e^{(t)}$ values. Interpolation is then used to obtain a continuous model in $p_e^{(t)}$. The simulations are also performed for all operating conditions $\gamma \in \Gamma$. We therefore obtain a model that consists of a family of conditional distributions, indexed by $(p_e^{(t)}, \gamma)$, that we denote as \begin{equation}\label{eq:devmodel2} \pmf^{(p_e^{(t)},\gamma)}\left(\mu_{i,j}^{(t+1)} \cond \nu_{i,j}^{(t+1)}, x_i \right). \end{equation} However, we generally omit the $(p_e^{(t)},\gamma)$ superscript to simplify the notation. While measuring deviations, we also record the switching activity in the circuit, which is then used to construct an energy model that depends on $\gamma$ and $p_e^{(t)}$, denoted as $c_\gamma(p_e^{(t)})$ (where $c$ stands for ``cost''). 
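The error-rate projection of \eqref{eq:errorprob} (taking $x_i = 1$) and the interpolation between characterized $p_e^{(t)}$ values can be sketched as follows; the scalar `characterized` table is a stand-in for the full family of conditional distributions indexed by $(p_e^{(t)}, \gamma)$.

```python
def error_rate(pmf):
    """Message error rate per eq. (errorprob), assuming x_i = +1.
    pmf : dict mapping message value -> probability."""
    return sum(p for m, p in pmf.items() if m < 0) + 0.5 * pmf.get(0, 0.0)

def interp_model(pe, characterized):
    """Linear interpolation between models characterized at discrete
    p_e values (here a scalar parameter stands in for a distribution)."""
    pts = sorted(characterized)
    if pe <= pts[0]:
        return characterized[pts[0]]    # clamp below the smallest point
    for lo, hi in zip(pts, pts[1:]):
        if pe <= hi:
            w = (pe - lo) / (hi - lo)
            return (1 - w) * characterized[lo] + w * characterized[hi]
    return characterized[pts[-1]]
```

In the actual model each characterized point holds a full conditional distribution, and the interpolation is applied component-wise.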
To use the model, we first determine the error rate parameter $p_e^{(t)}$ corresponding to the distribution of the messages $\bm{\mu}^{(t)}$ at the beginning of the iteration, and we then retrieve the appropriate conditional distribution, which also depends on the operating condition $\gamma$ of the circuit. This conditional distribution then informs us of the statistics of deviations that occur at the end of the iteration, that is, on messages sent in the next iteration. As mentioned above, since $\pmf(\mu_{i,j}^{(t)})$ depends on the transmitted codeword, so do $\pmf(S_k)$ and the deviation distributions. We show in Section~\ref{sec:analysis} that the codeword dependence is entirely contained within the deviation model and does not affect the analysis of the decoding performance, as long as the decoding algorithm and deviation model satisfy certain properties. Nonetheless, we would like to obtain a deviation model that does not depend on the transmitted codeword. This can be done when the objective is to predict the average performance of the decoder, rather than the performance for a particular codeword, since it is then sufficient to model the average behavior of the decoder. For the case where all codewords have an equal probability of being transmitted, we propose to perform the Monte-Carlo deviation measurements by randomly sampling transmitted codewords. This approach is supported by the experimental results presented in \cite{leduc-primeau:2016a}, which show that a deviation model constructed in this way can indeed accurately predict the average decoding performance. \section{Performance Analysis}\label{sec:analysis} \subsection{Standard Analysis Methods for LDPC Decoders}\label{sec:std_ldpc_analysis} Density evolution (DE) is the most common tool used for predicting the error-correction performance of an LDPC decoder.
The analysis relies on the assumption that messages passed in the Tanner graph are mutually independent, which holds as the code length goes to infinity \cite{richardson:2001a}. Given the channel output probability distribution and the probability distribution of variable node to check node messages at the start of an iteration, DE computes the updated distribution of variable node to check node messages at the end of the decoding iteration. This computation can be performed iteratively to determine the message distribution after any number of decoding iterations. The validity of the analysis rests on two properties of the LDPC decoder. The first property is the conditional independence of errors, which states that the error-correction performance of the decoder is independent of the particular codeword that was transmitted. The second property states that the error-correction performance of a particular LDPC code concentrates around the performance measured on a cycle-free graph, as the code length goes to infinity. Both properties were shown to hold in the context of reliable implementations \cite{richardson:2001a}. It was also shown that the conditional independence of errors always holds when the channel is output symmetric and the decoder has a symmetry property. We can define a sufficient symmetry property of the decoder in terms of a message-update function $F_{i,j}$ that represents one complete iteration of the (ideal) decoding algorithm. Given a vector of all the messages $\bvec{\mu}^{(t)}$ sent from variable nodes to check nodes at the start of iteration $t$ and the channel information $\nu_i^{(0)}$ associated with variable node $i$, $F_{i,j}$ returns the next ideal message to be sent from a variable node $i$ to a check node $j$: $\nu_{i,j}^{(t+1)}=F_{i,j}\left(\bvec{\mu}^{(t)}, \nu_i^{(0)}\right)$.
\begin{definition}\label{def:decoder_symmetry} A message-update function $F_{i,j}$ is said to be \emph{symmetric} with respect to a code $C$ if \[ F_{i,j}\left(\bvec{\mu}^{(t)}, \nu_i^{(0)}\right) = x_i F_{i,j}\left(\bvec{x} \bvec{\mu}^{(t)}, x_i \nu_i^{(0)}\right) \] for any $\bvec{\mu}^{(t)}$, any $\nu_i^{(0)}$, and any codeword $\bvec{x} \in C$. \end{definition} In other words, a decoder's message-update function is symmetric if multiplying all the VN-to-CN belief messages sent at iteration $t$ and the belief priors by a valid codeword $\bvec{x} \in C$ is equivalent to multiplying the next messages sent at iteration $t+1$ by that same codeword. Note that the symmetry condition in Definition~\ref{def:decoder_symmetry} is implied by the check node and variable node symmetry conditions in \cite[Def.~1]{richardson:2001a}. \subsection{\fxnote*[inline,nomargin,author=--Note]{Updated}{Applicability of Density Evolution}} \label{sec:applicabilityDE} In order to use density evolution to predict the performance of long finite-length codes, the decoder must satisfy the two properties stated in Section~\ref{sec:std_ldpc_analysis}, namely the conditional independence of errors and the convergence to the cycle-free case. We first present some properties of the decoding algorithm and of the deviation model that are sufficient to ensure the conditional independence of errors. Using the multiplicative description of the BIAWGN channel, the vector received by the decoder is given by $\bm{y}=\bm{xz}$ when a codeword $\bm{x}$ is transmitted, or by $\bm{y}=\bm{z}$ when the all-one codeword is transmitted. In a reliable decoder, messages are completely determined by the received vector, but in a faulty decoder, there is additional randomness that results from the deviations. Therefore, we represent messages in terms of conditional probability distributions given $\bvec{xz}$. 
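The symmetry of Definition~\ref{def:decoder_symmetry} can be verified numerically on a toy update function. The function below is a simplified flooding-style update (channel prior plus min-sum check-node outputs), used only to illustrate the property with nonzero messages; it is not the layered update of Algorithm~\ref{alg:loms}, and the configuration is invented for illustration.

```python
def sgn(v):
    return 1 if v >= 0 else -1

def cn_out(msgs):
    """Min-sum check-node output toward one excluded neighbor."""
    s = 1
    for m in msgs:
        s *= sgn(m)
    return s * min(abs(m) for m in msgs)

def F(mu_to_cns, prior):
    """Simplified message-update: belief = prior + sum of CN outputs.
    mu_to_cns : one list per neighboring check node, holding the
                messages from the *other* VNs of that check."""
    return prior + sum(cn_out(msgs) for msgs in mu_to_cns)

# For a +/-1 codeword, each satisfied check has sign product +1, so the
# other VNs of a check incident to VN i have sign product x_i.
x_i = -1
mu = [[3, -2], [-1, 4]]
x  = [[-1, 1], [1, -1]]     # sign product of each sublist equals x_i
mu_flipped = [[xs * m for xs, m in zip(xv, mv)]
              for xv, mv in zip(x, mu)]
# Definition 1: F(mu, prior) == x_i * F(x*mu, x_i*prior)
assert F(mu, 2) == x_i * F(mu_flipped, x_i * 2)
```

Multiplying all messages and the prior by codeword signs simply flips the output by $x_i$, which is the statement of the definition.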
Since we are concerned with a fixed-point circuit implementation of the decoder, we can assume that messages are integers from the set $\{-Q, -Q+1, \dots, Q\}$, where $Q>0$ is the largest message magnitude that can be represented. \begin{definition} We say that a message distribution $\pmf_{\mu_{i,j}\vert\bvec{y}}(\mu \vert \bvec{xz})$ is symmetric if \[ \pmf_{\mu_{i,j}\vert\bvec{y}}\left(\mu \cond \bvec{xz}\right) = \pmf_{\mu_{i,j}\vert\bvec{y}}\left(x_i \mu \cond \bvec{z}\right) \, . \] \end{definition} If a message has a symmetric distribution, its error probability as defined in \eqref{eq:errorprob} is the same whether $\bvec{xz}$ or $\bvec{z}$ is received. Similarly to the results presented in \cite{dupraz:2015}, we can show that the symmetry of message distributions is preserved when the message-update function is symmetric. \begin{lemma}\label{lem:ideal} If $F_{i,j}$ is a symmetric message-update function and if $\mu_i^{(0)}$ and $\mu_{i,j}^{(t)}$ have symmetric distributions for all $(i,j)$, the next ideal messages $\nu_{i,j}^{(t+1)}$ also have symmetric distributions. \end{lemma} \begin{proof} We can express the distribution of the next ideal message from VN $i$ to CN $j$ as \ifCLASSOPTIONdraftcls \begin{equation}\label{eq:nextideal} \pmf_{\nu_{i,j}^{(t+1)}\vert\bvec{y}}\left(\nu \cond \bvec{xz}\right) = \sum_{(\bvec{\mu}, \mu_i^{(0)}) \in R} \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{\mu} \cond \bvec{xz}\right) \, \pmf_{\mu_i^{(0)}\vert\bvec{y}}\!\left(\mu_i^{(0)} \cond \bvec{xz}\right), \end{equation} \else \begin{multline}\label{eq:nextideal} \pmf_{\nu_{i,j}^{(t+1)}\vert\bvec{y}}\left(\nu \cond \bvec{xz}\right) = \\ \sum_{(\bvec{\mu}, \mu_i^{(0)}) \in R} \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{\mu} \cond \bvec{xz}\right) \, \pmf_{\mu_i^{(0)}\vert\bvec{y}}\!\left(\mu_i^{(0)} \cond \bvec{xz}\right), \end{multline} \fi where $R= \left\{ (\bvec{\mu},\mu_i^{(0)}) : F_{i,j}\left(\bvec{\mu}, \mu_i^{(0)}\right) = \nu \right\}$. 
Assuming that the elements of the VN-to-CN message vector $\bm{\mu}^{(t)}$ are independent and that each $\mu_{i,j}^{(t)}$ has a symmetric distribution, \ifCLASSOPTIONdraftcls \[ \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{\mu} \cond \bvec{xz}\right) = \prod_k \pmf_{\mu_k^{(t)} \vert \bvec{y}}\!\left(\mu_k \cond \bvec{xz}\right) = \prod_k \pmf_{\mu_k^{(t)} \vert \bvec{y}}\!\left(x_k \mu_k \cond \bvec{z}\right) = \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{x\mu} \cond \bvec{z}\right), \] \else \begin{align*} \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{\mu} \cond \bvec{xz}\right) &= \prod_k \pmf_{\mu_k^{(t)} \vert \bvec{y}}\!\left(\mu_k \cond \bvec{xz}\right) \\ &= \prod_k \pmf_{\mu_k^{(t)} \vert \bvec{y}}\!\left(x_k \mu_k \cond \bvec{z}\right) \\ &= \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{x\mu} \cond \bvec{z}\right), \end{align*} \fi and since the channel output $\mu_i^{(0)}$ also has a symmetric distribution, \[ \pmf_{\mu_i^{(0)}\vert\bvec{y}}\!\left(\mu_i^{(0)} \cond \bvec{xz}\right) = \pmf_{\mu_i^{(0)}\vert\bvec{y}}\!\left(x_i \mu_i^{(0)} \cond \bvec{z}\right). \] Therefore, we can rewrite \eqref{eq:nextideal} as \ifCLASSOPTIONdraftcls \begin{equation}\label{eq:nextideal2} \pmf_{\nu_{i,j}^{(t+1)}\vert\bvec{y}}\left(\nu \cond \bvec{xz}\right) = \sum_{(\bvec{\mu}, \mu_i^{(0)}) \in R} \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{x\mu} \cond \bvec{z}\right) \, \pmf_{\mu_i^{(0)}\vert\bvec{y}}\!\left(x_i \mu_i^{(0)} \cond \bvec{z}\right). \end{equation} \else \begin{multline}\label{eq:nextideal2} \pmf_{\nu_{i,j}^{(t+1)}\vert\bvec{y}}\left(\nu \cond \bvec{xz}\right) = \\ \sum_{(\bvec{\mu}, \mu_i^{(0)}) \in R} \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{x\mu} \cond \bvec{z}\right) \, \pmf_{\mu_i^{(0)}\vert\bvec{y}}\!\left(x_i \mu_i^{(0)} \cond \bvec{z}\right). 
\end{multline} \fi Finally, letting $\bvec{\mu}'=\bvec{x\mu^{(t)}}$ and $\nu_i' = x_i \mu_i^{(0)}$, \eqref{eq:nextideal2} becomes \[ \pmf_{\nu_{i,j}^{(t+1)}\vert\bvec{y}}\left(\nu \cond \bvec{xz}\right) = \sum_{(\bvec{\mu}', \nu_i') \in R'} \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{\mu'} \cond \bvec{z}\right) \, \pmf_{\mu_i^{(0)}\vert\bvec{y}}\!\left(\nu_i' \cond \bvec{z}\right), \] where $R' = \left\{ (\bvec{\mu}', \nu_i') : F_{i,j}(\bvec{x\mu'}, x_i \nu_i') = \nu \right\}$. Since $F_{i,j}$ is symmetric, we can also express $R'$ as \[ R' = \left\{ (\bvec{\mu}', \nu_i') : F_{i,j}(\bvec{\mu'}, \nu_i') = x_i \nu \right\}, \] and therefore, \begin{align*} \pmf_{\nu_{i,j}^{(t+1)}\vert\bvec{y}}\left(x_i \nu \cond \bvec{z}\right) &= \sum_{(\bvec{\mu}', \nu_i') \in R'} \pmf_{\bvec{\mu}^{(t)} \vert \bvec{y}}\!\left(\bvec{\mu'} \cond \bvec{z}\right) \, \pmf_{\mu_i^{(0)}\vert\bvec{y}}\!\left(\nu_i' \cond \bvec{z}\right) \\ &= \pmf_{\nu_{i,j}^{(t+1)}\vert\bvec{y}}\left(\nu \cond \bvec{xz}\right), \end{align*} indicating that the next ideal messages have symmetric distributions. \end{proof} To establish the conditional independence of errors under the proposed deviation model, we first define some properties of the deviation. \begin{definition}\label{def:devsym} We say that the deviation model is \emph{symmetric} if \ifCLASSOPTIONdraftcls \[ \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(\mu \cond \nu, \bvec{xz}\right) = \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(\mu \cond \nu, \bvec{z}\right) = \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(-\mu \cond -\nu, \bvec{z}\right). \] \else \begin{align*} \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(\mu \cond \nu, \bvec{xz}\right) &= \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(\mu \cond \nu, \bvec{z}\right) \\ &= \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(-\mu \cond -\nu, \bvec{z}\right). 
\end{align*} \fi \end{definition} \begin{definition}\label{def:weaksym} We say that the deviation model is \emph{weakly symmetric (WS)} if \[ \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(\mu \cond \nu, \bvec{xz}\right) = \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(x_i \mu \cond x_i \nu, \bvec{z}\right). \] \end{definition} Note that if the model satisfies the symmetry condition, it also satisfies the weak symmetry condition, since $x_i \in \{-1, 1\}$. We then have the following Lemma. \begin{lemma}\label{lem:independencetx} If a decoder having a symmetric message-update function and taking its inputs from an output-symmetric communication channel is affected by weakly symmetric deviations, its message error probability at any iteration $t\geq0$ is independent of the transmitted codeword. \end{lemma} \begin{proof} Similarly to the approach used in \cite[Lemma~4.90]{richardson:2008} and \cite{varshney:2011}, we want to show that the probability that messages are in error is the same whether $\bvec{xz}$ or $\bvec{z}$ is received. This is the case if the faulty messages $\mu_{i,j}^{(t)}$ have a symmetric distribution for all $t\geq0$ and all $(i,j)$. Since the communication channel is output symmetric and since no deviations can occur before the first iteration, messages $\mu_{i,j}^{(0)} = \nu_{i,j}^{(0)}$ have a symmetric distribution. We proceed by induction to establish the symmetry of the messages for $t>0$. We start by assuming that \begin{equation}\label{eq:induct_assumpt} \pmf_{\nu_{i,j}^{(t)}\vert\bvec{y}}\left(\nu \cond \bvec{xz}\right) = \pmf_{\nu_{i,j}^{(t)}\vert\bvec{y}}\left(x_i \nu \cond \bvec{z}\right) \end{equation} also holds for $t>0$. 
Using Definition~\ref{def:weaksym} and \eqref{eq:induct_assumpt}, we can write the faulty message distribution as \begin{align*} \pmf_{\mu_{i,j}^{(t)} \vert \bvec{y}}\left(\mu \cond \bvec{xz} \right) &= \sum_{\nu=-Q}^{Q} \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(\mu \cond \nu, \bvec{xz}\right) \pmf_{\nu_{i,j}^{(t)} \vert \bvec{y}}\left(\nu \cond \bvec{xz}\right) \\ &= \sum_{\nu=-Q}^{Q} \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(x_i \mu \cond x_i \nu, \bvec{z}\right) \pmf_{\nu_{i,j}^{(t)} \vert \bvec{y}}\left(x_i \nu \cond \bvec{z}\right) \\ &= \sum_{\nu'=-x_i Q}^{x_i Q} \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(x_i \mu \cond \nu', \bvec{z}\right) \pmf_{\nu_{i,j}^{(t)} \vert \bvec{y}}\left(\nu' \cond \bvec{z}\right) \\ &= \sum_{\nu'=-Q}^{Q} \pmf_{\mu_{i,j}^{(t)} \vert \nu_{i,j}^{(t)}, \bvec{y}}\left(x_i \mu \cond \nu', \bvec{z}\right) \pmf_{\nu_{i,j}^{(t)} \vert \bvec{y}}\left(\nu' \cond \bvec{z}\right) \\ &= \pmf_{\mu_{i,j}^{(t)} \vert \bvec{y}}\left(x_i \mu \cond \bvec{z}\right), \end{align*} where the third equality is obtained using the substitution $\nu'=x_i\nu$, and the fourth holds because the summation runs over the same set of values whether $x_i=1$ or $x_i=-1$. We conclude that the faulty messages have a symmetric distribution. Finally, since the decoder's message-update function is symmetric, Lemma~\ref{lem:ideal} confirms the induction hypothesis in \eqref{eq:induct_assumpt}. \end{proof} The last remaining step in establishing whether density evolution can be used with a decoder affected by WS deviations is to determine whether the error-correction performance of a code concentrates around the cycle-free case. This property has been shown to hold in \cite{varshney:2011} (Theorems 2, 3 and 4) for an LDPC decoder affected by ``wire noise'' and ``computation noise''. The wire noise model is similar to our deviation model, in the sense that the messages are passed through an additive noise channel, and that the noise applied to one message is independent of the noise applied to other messages.
The proof presented in \cite{varshney:2011} only relies on the fact that the wire noise applied to a given message can only affect messages that are included in the directed neighborhood of the edge where it is applied, where the graph direction refers to the direction of message propagation. This clearly also holds in the case of our deviation model, and therefore the proof is the same. Since the message error probability is independent of the transmitted codeword, and furthermore concentrates around the cycle-free case, density evolution can be used to determine the error-correction performance of a decoder perturbed by our deviation model, as long as the deviations are weakly symmetric. \subsection{\fxnote*[inline,nomargin,author=--Note]{New subsection}{Deviation Examples}}\label{sec:devexamples} As described in Section~\ref{sec:deviation:genmodel}, we collect deviation measurements from the test circuits by applying test vectors that represent random codewords and that are distributed according to several $p_e^{(t-1)}$ values. We then generate estimates of the conditional distributions in \eqref{eq:devmodel2}. It is interesting to visualize the distributions using an aggregate measure such as the probability of observing a non-zero deviation \begin{equation} p_\mathrm{nz}(\nu_{i,j}^{(t)},x_i) = \Pr^{(p_e^{(t-1)})}\left(\mu_{i,j}^{(t)} \neq \nu_{i,j}^{(t)} \cond \nu_{i,j}^{(t)}, x_i \right). \end{equation} These conditional probabilities are shown for a $(3,30)$ circuit in Fig.~\ref{fig:devPr_3-30}. When $x_i=1$, positive belief values indicate a correct decision, whereas when $x_i=-1$, negative belief values indicate a correct decision. We can see that in this example, deviations are more likely when the belief is incorrect than when it is correct, and therefore a symmetric deviation model is not consistent with these measurements.
On the other hand, there is a sign symmetry between the ``correct'' parts of the curves, and between the ``incorrect'' parts, that is, $p_\mathrm{nz}(\nu_{i,j}^{(t)},1)=p_\mathrm{nz}(-\nu_{i,j}^{(t)},-1)$, and for this reason a weakly symmetric model is consistent with the measurements. Note that the slight jaggedness observed for incorrect belief values of large magnitude in the $p_e^{(t-1)}=0.008$ curves is due to the fact that these $\nu_{i,j}$ values occur only rarely. For the largest incorrect $\nu_{i,j}$ values, only about 100 deviation events are observed for each point, despite the large number of \gls{MC} trials. \begin{figure}[tbp] \centering \includegraphics[width=2.9in]{devPr_3-30_rndcw_110b} \caption{Non-zero deviation probability given $\nu_{i,j}^{(t)}$ and $x_i$ at two $p_e^{(t-1)}$ values, measured on a $(3,30)$ circuit operated at $\vdd=0.75\volt$ and $T_\mathrm{clk}=3.2\ns$. $3 \cdot 10^8$ decoding iteration trials were performed for each $p_e^{(t-1)}$ value. The total number of non-zero deviation events observed is 4,115,229 at $p_e^{(t-1)}=0.015$, and 10,071,810 at $p_e^{(t-1)}=0.008$.} \label{fig:devPr_3-30} \end{figure} Figure~\ref{fig:devPr_3-6} shows a similar plot for a $(3,6)$ circuit. In this case, $p_\mathrm{nz}(\nu_{i,j}^{(t)},x_i) \approx p_\mathrm{nz}(-\nu_{i,j}^{(t)},x_i)$, and a symmetric deviation model could be appropriate. Of course, since it is more general, a WS model is also appropriate. \begin{figure}[tbp] \centering \includegraphics[width=2.9in]{devPr_3-6_rndcw_39} \caption{Non-zero deviation probability given $\nu_{i,j}^{(t)}$ and $x_i$ at two $p_e^{(t-1)}$ values, measured on a $(3,6)$ circuit operated at $\vdd=0.85\volt$ and $T_\mathrm{clk}=2.1\ns$. $3 \cdot 10^8$ decoding iteration trials were performed for each $p_e^{(t-1)}$ value.
The total number of non-zero deviation events observed is 2,524,601 at $p_e^{(t-1)}=0.09$, and 1,020,867 at $p_e^{(t-1)}=0.05$.} \label{fig:devPr_3-6} \end{figure} Under the assumption that deviations are weakly symmetric, we have \[ \pmf_{\mu_{i,j}^{(t)} | \nu_{i,j}^{(t)}, x_i}\left(\mu \cond \nu, 1\right) = \pmf_{\mu_{i,j}^{(t)} | \nu_{i,j}^{(t)}, x_i}\left(-\mu \cond -\nu, -1\right). \] Therefore, we can combine the $x_i=1$ and $x_i=-1$ data to improve the accuracy of the estimated distributions. Let $p_L$ and $p_H$ be respectively the smallest and largest $p_e^{(t-1)}$ values for which the deviations have been characterized. We can generate a conditional distribution for any $p_e^{(t-1)} \in [p_L, p_H]$ by interpolating from the nearest distributions that have been measured. We choose $p_H \geq p_e^{(0)}$ to make sure that the first iteration's deviation is within the characterized range. Because messages in the decoder are saturated once they reach the largest magnitude that can be represented, the circuit's switching activity decreases when the message error probability becomes very small. Since timing faults cannot occur when the circuit does not switch, we can expect deviations to be equally or less likely at $p_e^{(t-1)}$ values below $p_L$. Therefore, to define the deviation model for $p_e^{(t-1)}<p_L$, we make the pessimistic assumption that the deviation distribution remains the same as for $p_e^{(t-1)}=p_L$. \subsection{DE and Energy Curves}\label{sec:DE_energ_curves} We evaluate the progress of the decoder affected by timing violations using quantized density evolution \cite{chung:2001}. For the Offset Min-Sum algorithm, a DE iteration can be split into the following steps: 1-a)~evaluating the distribution of the CN minimum, 1-b)~evaluating the distribution of the CN output, after subtracting the offset, 2)~evaluating the distribution of the ideal VN-to-CN message, and 3)~evaluating the distribution of the faulty VN-to-CN messages. 
Step 1-a is given in \cite{balatsoukas-stimming:2014}, while the others are straightforward. In the context of DE, we write the message distribution as $\bvec{\pi}^{(t)}= \pmf(\mu_{i,j}^{(t)} | x_i=1)$, and the channel output distribution as $\bvec{\pi}^{(0)}= \pmf(\mu_i^{(0)} | x_i=1)$. We write a DE iteration as $\bvec{\pi}^{(t+1)} = f_\gamma(\bvec{\pi}^{(t)}, \bvec{\pi}^{(0)})$. As mentioned in Section~\ref{sec:deviation:genmodel}, the energy consumption is modeled in terms of the message error probability and of the operating condition, and denoted $c_\gamma(p_e^{(t)})$. \changeB{As for the deviation model, we use interpolation to define $c_\gamma(p_e^{(t)})$ for $p_e^{(t)} \in [p_L, p_H]$, and assume that $c_\gamma(p_e^{(t)})=c_\gamma(p_L)$ for $p_e^{(t)} < p_L$.} To display $f_\gamma(\bvec{\pi}^{(t)},\bvec{\pi}^{(0)})$ and $c_\gamma(p_e^{(t)})$ on the same plot, we project $\bvec{\pi}^{(t)}$ onto the message error probability space. \begin{figure}[tbp] \begin{center} \includegraphics[width=3in]{exit_example_2g} \caption{Examples of projected DE curves (solid lines) and energy curves (dashed lines) for rate $0.5$ ensembles with $d_v\in\{3,4,5\}$, and $p_e^{(0)}=0.09$.} \label{fig:EXITexample} \end{center} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=3in]{exit_example_4b} \caption{Examples of projected DE curves (solid lines) and energy curves (dashed lines) for the $(3,30)$ and $(4,40)$ ensembles (rate $0.9$), with $p_e^{(0)}=0.015$.} \label{fig:EXITexample2} \end{center} \end{figure} Several regular code ensembles were evaluated, with rates $\frac{1}{2}$ and $\frac{9}{10}$. Fig.~\ref{fig:EXITexample} shows examples of projected DE curves and energy curves for rate-$\frac{1}{2}$ code ensembles with $d_v \in \{3,4,5\}$ and various operating conditions. The energy is measured as described in Appendix~\ref{sec:appendix:workflow} and corresponds to one use of the test circuit (shown in Fig.~\ref{fig:testcircuit}). 
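The projection of $\bvec{\pi}^{(t)}$ onto the message error probability space, used for these plots, can be made concrete. The convention below — mass on negative levels plus half the mass at level zero, under the conditioning on $x_i=1$ — is our own reading, since the text does not spell it out.

```python
import numpy as np

def project_error_prob(pmf, levels):
    """Project a message distribution conditioned on x_i = +1 onto the
    message error probability: mass on negative levels plus half the
    mass at zero (assumed tie-breaking convention)."""
    pmf = np.asarray(pmf, dtype=float)
    levels = np.asarray(levels)
    return pmf[levels < 0].sum() + 0.5 * pmf[levels == 0].sum()
```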
The nominal operating condition is $\vdd=1.0\volt$, $T_\mathrm{clk}=2.0\ns$ and therefore these curves correspond to a reliable implementation. With a reliable implementation, these ensembles have a channel threshold of $p_e^{(0)}\leq 0.12$ for the $(3,6)$ ensemble, $p_e^{(0)}\leq 0.11$ for $(4,8)$, and $p_e^{(0)}\leq 0.09$ for $(5,10)$. We use $p_e^{(0)} = 0.09$ for all the curves shown in Fig.~\ref{fig:EXITexample} to allow comparing the ensembles. As can be expected, a larger variable node degree results in faster convergence towards zero error rate, and it is natural to ask whether this property might provide greater fault tolerance and ultimately better energy efficiency. This is discussed in Section~\ref{sec:results}. Fig.~\ref{fig:EXITexample2} is a similar plot for the $(3,30)$ and $(4,40)$ ensembles. The channel threshold of both ensembles is approximately $p_e^{(0)}\leq 0.019$. For these curves, the nominal operating condition is $\vdd=1.0\volt$ and $T_\mathrm{clk}=3\ns$. As we can see, the energy consumption per iteration of the $(4,40)$ decoder is roughly double that of the $(3,30)$ decoder. We note that in the case of the $(3,30)$ ensemble, the reliable decoder stops making progress at an error probability of approximately $10^{-8}$. This floor is the result of the message saturation limit chosen for the circuit. \section{Energy Optimization}\label{sec:optimization} \subsection{Design Parameters}\label{sec:LDPC_param} As in a standard LDPC code-decoder design, the first parameter to be optimized is the choice of code ensemble. In this paper we restrict the discussion to regular codes, and therefore we need only to choose a degree pair $(d_v, d_c)$, where $R=1-d_v/d_c$ is the design rate of the code. For a fixed $R$, we can observe that both the energy consumption and the circuit area of the decoding circuit grow rapidly with $d_v$, and therefore it is only necessary to consider a few of the lowest $d_v$ values. 
Besides the choice of ensemble, we are interested in finding the optimal choice of operating parameters for the quasi-synchronous circuit. We consider here the supply voltage ($\vdd$) and the clock period ($T_\mathrm{clk}$). Generally speaking, the supply voltage affects the energy consumption, while the clock period affects the decoding time, or latency. The energy and latency are also affected by the choice of code ensemble, since the number of operations to be performed depends on the node degrees. The operating parameters of a decoder are denoted as a vector $\gamma=[\vdd,T_\mathrm{clk}]$. The decoding of LDPC codes proceeds in an iterative fashion, and it is therefore possible to adjust the operating parameters on an iteration-by-iteration basis. In practice, this could be implemented in various ways, for example by using a pipelined sequence of decoder circuits, where each decoder is responsible for only a portion of the decoding iterations. It is also possible to rapidly vary the clock frequency of a given circuit by using a digital clock divider circuit \cite{fischer:2006}. We denote by $\gammaseq$ the sequence of parameters used at each iteration throughout the decoding, and we use $\gammaseq=[\gamma_1^{N_1}, \gamma_2^{N_2}, \dots]$ to denote a specific sequence in which the parameter vector $\gamma_1$ is used for the first $N_1$ iterations, followed by $\gamma_2$ for the next $N_2$ iterations, and so on. \subsection{Objective} The performance of the LDPC code and of its decoder can be described by specifying a vector $\bvec{P}=(p_e^{(0)},\pres,T_\mathrm{dec})$, where $p_e^{(0)}$ is the output error rate of the communication channel, $\pres$ the residual error rate of VN-to-CN messages when the decoder terminates, and $T_\mathrm{dec}$ the expected decoding latency. The decoder's performance $\bvec{P}$ and energy consumption $E$ are controlled by $\gammaseq$. The energy minimization problem can be stated as follows. 
Given a performance constraint $\bvec{P}=(a,b,c)$, we wish to find the value of $\gammaseq$ that minimizes $E$, subject to $p_e^{(0)} \geq a$, $\pres \leq b$, $T_\mathrm{dec} \leq c$. As in the standard DE method, we propose to use the code's computation tree as a proxy for the entire decoder, and furthermore to use the energy consumption of the test circuit described in Appendix~\ref{sec:appendix:testcircuit} as the optimization objective. To be able to replace the energy minimization of the complete decoder with the energy minimization of the test circuit, we make the following assumptions: \begin{enumerate} \item The ordering of the energy consumption is the same for the test circuit and for the complete decoder, that is, for any $\gamma_1$ and $\gamma_2$, $E_\textsc{test}(\gamma_1) \leq E_\textsc{test}(\gamma_2)$ implies $E_\textsc{dec}(\gamma_1) \leq E_\textsc{dec}(\gamma_2)$, where $E_\textsc{test}(\gamma)$ and $E_\textsc{dec}(\gamma)$ are respectively the energy consumption of the test circuit and of the complete decoder when using parameter~$\gamma$. \item The average message error rate in the test circuit and in the complete decoder is the same for all decoding iterations. \item The latency of the complete decoder is proportional to the latency of the test circuit, that is, if $T_\mathrm{dec}(\gamma)$ is the latency measured using the test circuit with parameter $\gamma$, the latency of the complete decoder is given by $\beta T_\mathrm{dec}(\gamma)$, where $\beta$ does not depend on $\gamma$. \end{enumerate} Assumption~1 is reasonable because the test circuit is very similar to a computation unit used in the complete decoder. The difference between the two is that the test circuit only instantiates one full VNP, the remaining $(d_c-1)$ VNPs being reduced to only their ``front'' part (as seen in Fig.~\ref{fig:testcircuit}), whereas the complete decoder has $d_c$ full VNPs for every CNP. 
Assumption~2 is the standard DE assumption, which is reasonable for sufficiently long codes. Finally, it is possible for the clock period to be slower in the complete decoder, because the increased area could result in longer interconnections between circuit blocks. Even if this is the case, the interconnect length only depends on the area of the complete decoder, which is not affected by the parameters we are optimizing, and hence $\beta$ does not depend on $\gamma$. Clearly, if Assumption~1 holds and the performance of the test circuit is the same as the performance of the complete decoder, then the solution of the energy minimization is also the same. The performance is composed of the three components $(p_e^{(0)}, \pres, T_\mathrm{dec})$. The channel error rate $p_e^{(0)}$ does not depend on the decoder and is clearly the same in both cases. Because of Assumption~2, the complete decoder can achieve the same residual error rate as the test circuit when $p_e^{(0)}$ is the same. The latencies measured on the test circuit and on the complete decoder are not necessarily the same, but if Assumption~3 holds, and if we assume that the constant $\beta$ is known, then we can find the solution to the energy minimization of the complete decoder subject to constraints $(p_e^{(0)},\pres,T_\mathrm{dec})$ by instead minimizing the energy of the test circuit with constraints $(p_e^{(0)}, \pres, T_\mathrm{dec}/\beta)$. We also consider another interesting optimization problem. It is well known that for a fixed degree of parallelism, energy consumption is proportional to processing speed (represented here by $T_\mathrm{dec}$), which is observed both in the physical energy limit stemming from Heisenberg's uncertainty principle \cite{lloyd:2000}, as well as in practical CMOS circuits \cite{gonzalez:1996}. 
In situations where both throughput normalized to area and low energy consumption are desired, optimizing the product of energy and latency or \emph{energy-delay product} (EDP) for a fixed circuit area can be a better objective. In that case the performance constraint is stated in terms of $\bvec{P}=(p_e^{(0)}, \pres)$, and the optimization problem becomes the following: given a performance constraint $\bvec{P}=(a,b)$, minimize $E(\gammaseq) \cdot T_\mathrm{dec}(\gammaseq)$ subject to $p_e^{(0)} \geq a$, $\pres \leq b$, and a fixed circuit area. \begin{table*}[t] \centering \caption{Energy and EDP optimization results.} \begin{tabular}{ccllcrrrrr} \toprule & & & & & & \multicolumn{2}{c}{Standard} & \multicolumn{2}{c}{Quasi-synchronous} \\ \cmidrule{7-10} Code & Nom. & \multicolumn{1}{c}{Norm.} & \multicolumn{1}{c}{$p_e^{(0)}$} & $\pres$ & \multicolumn{1}{c}{Latency} & \multicolumn{1}{c}{Energy} & \multicolumn{1}{c}{EDP} & \multicolumn{1}{c}{Best energy} & \multicolumn{1}{c}{Best EDP}\\ family & $T_\mathrm{clk}$ & \multicolumn{1}{c}{area $\dagger$} & & & \multicolumn{1}{c}{[$\ns$]} & \multicolumn{1}{c}{[$\pJ$]} & \multicolumn{1}{c}{[$\nJns$]} & \multicolumn{1}{c}{[$\pJ$]} & \multicolumn{1}{c}{[$\nJns$]} \\ \midrule (3,6) & $2.0\ns$ & $1.066$ & $0.12^\ddag$ & $\leq 10^{-8}$ & $66$ & $250$ & $16.5$ & $192$ (-23\%) & $12.7$ (-23\%) \\ & & & $0.09$ & $\leq 10^{-8}$ & $22$ & $68.2$ & $1.50$ & $45.0$ (-34\%) & $0.98$ (-35\%)\\ (4,8) & $2.0\ns$ & $1.44$ & $0.09$ & $\leq 10^{-8}$ & $18$ & $98.5$ & $1.77$ & $74.9$ (-24\%) & $1.33$ (-25\%) \\ \addlinespace (3,30) & $3.0\ns$ & $1.099$ & $0.019^\ddag$ & $\leq 10^{-8}$ & $84.0$ & $\mathbf{883}$ & $74.2$ & $\mathbf{605}$ (-31\%) & $48.6$ (-35\%) \\ & $2.5\ns$ & $1.135$ & $0.019^\ddag$ & $\leq 10^{-8}$ & $70.0$ & $916$ & $\mathbf{64.1}$ & $664$ (-28\%) & $\mathbf{46.5}$ (-27\%) \\ & $3.0\ns$ & $1.099$ & $0.015$ & $\leq 10^{-8}$ & $39.0$ & $\mathbf{306}$ & $11.9$ & $\mathbf{196}$ (-36\%) & $7.35$ (-38\%) \\ & $2.5\ns$ & $1.135$ & 
$0.015$ & $\leq 10^{-8}$ & $32.5$ & $324$ & $\mathbf{10.5}$ & $214$ (-34\%) & $\mathbf{6.92}$ (-34\%) \\ (4,40) & $3.0\ns$ & $1.522$ & $0.015$ & $\leq 10^{-8}$ & $27.0$ & $364$ & $9.83$ & $224$ (-38\%) & $5.93$ (-40\%) \\ \bottomrule \multicolumn{10}{l}{$\dagger$ Cell area divided by the minimal area of the smallest decoder having the same code rate. $\ddag$ Approx. threshold.} \\ \end{tabular} \label{tbl:results} \end{table*} \subsection{Dynamic Programming}\label{sec:dynprog} To solve the iteration-by-iteration energy and EDP minimization problems stated above, we adapt the \change{``Gear-Shift''} dynamic programming approach proposed in \cite{ardakani:2006}. The original method relies on the fact that the message distribution has a \gls{1D} characterization, which is chosen to be the error probability. By quantizing the error probability space, a trellis graph can be constructed in which each node is associated with a pair $(\tilde{p}_e^{(t)}, t)$. \changeB{Quantized quantities are marked with tildes.} A particular choice of $\gammaseq$ corresponds to a path $P$ through the graph, and the optimization is transformed into finding the least expensive path that starts from the initial state $(\tilde{p}_e^{(0)}, 0)$ and reaches any state $(\tilde{p}_e^{(t)},t)$ such that $\tilde{p}_e^{(t)} \leq \pres$ and the latency constraint is satisfied, if there is one. \changeB{Note that to ensure that the solutions remain achievable in the original continuous space, the message error rates $p_e^{(t)}$ are quantized by rounding up. To maintain a good resolution at low error rates, we use a logarithmic quantization, with 1000 points per decade.} In the case of a faulty decoder, we want to evaluate the decoder's progress by tracking a complete message distribution using DE, rather than simply tracking the message error probability. 
In this case, the Gear-Shift method can be used as an approximate solver by projecting the message distribution $\bvec{\pi}^{(t)}=\pmf(\mu_{i,j}^{(t)} | x_i=1)$ onto the error probability space. We refer to this method as DE-Gear-Shift. Any path through the graph is evaluated by performing DE on the entire path using exact distributions, but different paths are compared in the projection space. As a result, the solutions that are found are not guaranteed to be optimal, but they are guaranteed to accurately represent the progress of the decoder. In the DE-Gear-Shift method, a path $P$ is a sequence of states $\{\bvec{\pi}^{(t)}\}$. As in the original Gear-Shift method, any sequence of decoder parameters $\gammaseq$ corresponds to a path. We denote the projection of a state onto the error probability space as $p_e^{(t)}= \Theta(\bvec{\pi}^{(t)})$. To each path $P$, we associate an energy cost $E_P$ and a latency cost $T_P$. A path ending at a state $\bvec{\pi}^{(t)}$ can be extended with one additional decoding iteration using parameter $\gamma$ by evaluating one DE iteration to obtain $\bvec{\pi}^{(t+1)} = f_\gamma(\bvec{\pi}^{(t)}, \bvec{\pi}^{(0)})$. Performing this additional iteration adds an energy cost $c_\gamma(\tilde{p}_e^{(t)}, p_e^{(0)})$ and a latency cost $T_\gamma$ to the path's cost. \changeB{When optimizing EDP, we define the overall cost of a path $C_P$ as $C_P = E_P \cdot T_P$. When optimizing energy under a latency constraint, we define the path cost as a two-dimensional vector $C_P = (E_P, T_P)$.} \changeB{We use the following rules to discard paths that are suboptimal in the error probability space. Rule 1: Paths for which the message error rate is not monotonically decreasing are discarded. 
Rule 2: A path $P$ with cost $C_P$ is said to \emph{dominate} another path $P'$ with cost $C_{P'}$ if all the following conditions hold: 1) an ordering exists between $C_P$ and $C_{P'}$, 2) $C_P \leq C_{P'}$, 3) $\Theta(\bvec{\pi}_P) \leq \Theta(\bvec{\pi}_{P'})$, where $\bvec{\pi}_P$ denotes the last state reached by path $P$. The search for the least expensive path is performed breadth-first. After each traversal of the graph, any path that is dominated by another is discarded.} \changeB{When the path cost is one-dimensional, the optimization requires evaluating $O(|\Gamma| N_s)$ DE iterations, where $|\Gamma|$ is the number of operating points being considered and $N_s$ is the number of quantization levels used for $\tilde{p}_e^{(t)}$. This can be seen from the fact that with a 1-D cost, Rule 2 implies that at most one path can reach a given state $\tilde{p}_e^{(t)}$, and therefore $O(|\Gamma| N_s)$ DE iterations are required for each decoding iteration. In addition, the number of decoding iterations spanned by the trellis graph can be upper bounded in terms of the smallest latency and energy cost of the parameters in $\Gamma$, and is therefore a constant that does not depend on $|\Gamma|$ or $N_s$. On the other hand, when the cost is two-dimensional, the number of DE iterations could grow exponentially with the number of decoding iterations. However, even in the case of a 2-D cost, an ordering exists between the costs of paths $P$ and $P'$ if $(E_P\geq E_{P'} \wedge T_P\geq T_{P'}) \vee (E_P\leq E_{P'} \wedge T_P\leq T_{P'})$, and in that case Rule 2 can be applied. In practice, for the cases presented in this paper, the discarding rules kept the number of paths at a manageable level, even when using a 2-D cost.
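The discarding rules above can be condensed into a small pruning helper. The path representation below — a tuple of cumulative energy, cumulative latency, and projected error rate of the last state — is our own simplification of the full DE state.

```python
def prune_dominated(paths):
    """Keep only paths not dominated under Rule 2. Each path is a tuple
    (E, T, pe): cumulative energy, cumulative latency, and projected
    message error rate of its last state. Two 2-D costs are comparable
    only when one path is no worse in both components."""
    kept = []
    for i, (E, T, pe) in enumerate(paths):
        dominated = any(
            (E2 <= E and T2 <= T and pe2 <= pe) and (E2, T2, pe2) != (E, T, pe)
            for j, (E2, T2, pe2) in enumerate(paths) if j != i)
        if not dominated:
            kept.append((E, T, pe))
    return kept
```

Exact ties are deliberately kept on both sides, so that equally good alternatives survive the pruning pass.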
Note that an alternative to the use of a 2-D cost is to define a 1-D cost as $C_P = E_P + \kappa T_P$, and to perform a binary search for the value of $\kappa$ that yields an optimal solution with the desired latency.} The algorithm can also be modified to search for parameter sequences that have other desirable properties beyond minimal energy or EDP. For example, if the decoder is implemented as a pipelined sequence of decoders, it can be desirable to favor solutions that do not require the decoder to switch its parameters too often. We can find good approximate solutions by adding a penalty to $E_P$ when the algorithm used in the current and next steps is different. \subsection{Results}\label{sec:results} We use DE-Gear-Shift to find good parameter sequences $\gammaseq$ for several regular ensembles with rates $\frac{1}{2}$ and $\frac{9}{10}$. The parameter space $\Gamma$ consists of $(\vdd,T_\mathrm{clk})$ points with $\vdd$ from $0.70\volt$ to $1.0\volt$ in steps of $0.05\volt$ and several $T_\mathrm{clk}$ values depending on $\vdd$, in steps of $0.1\ns$. The standard and quasi-synchronous decoders use the same circuits. \changeB{Parameter $\alpha$ in \eqref{eq:channelbelief} is set to $\alpha=4$ for the $(3,6)$, $(3,30)$, and $(4,40)$ decoders, and to $\alpha=2$ for the $(4,8)$ decoder. The offset parameter $C$ in Alg.~\ref{alg:loms} is set to $C=2$ for the $(4,40)$ decoder and to $C=1$ for all other decoders.} As part of our best effort to design a good standard circuit, in the case of the $(3,30)$ decoder we present results for two circuits synthesized with different nominal $T_\mathrm{clk}$ values. The standard circuit has a lower energy consumption when synthesized with $T_\mathrm{clk}=3\ns$, while it has a lower EDP when synthesized with $T_\mathrm{clk}=2.5\ns$. We first run the DE-Gear-Shift solver without any path penalties to obtain the best possible parameter sequences, for both the energy and the EDP objectives. 
We also noticed that in some cases, adding a small algorithm change penalty allows the solver to discover slightly better sequences. Note that when the objective is EDP, there is no constraint on latency. These results are summarized in Table~\ref{tbl:results}, where the energy is normalized per check node. Overall, we see that significant gains are possible while meeting the same channel noise, latency, and residual error requirements. The synthesis results show that increasing $d_v$ while keeping the rate constant leads to a significant increase in circuit area. Despite this, increasing the node degrees can result in a reduction of the EDP. For the rate $\frac{9}{10}$ ensembles, going from $d_v=3$ to $d_v=4$ decreases EDP by 6.4\% for a standard system, and by 14\% for a quasi-synchronous system. However, this is not the case for the rate $\frac{1}{2}$ ensembles, where $d_v=3$ has the smaller EDP. As expected, we can also see that much more energy is required when the channel quality is close to the ensemble's threshold. By applying a cost penalty to parameter switches, it is possible to find parameter sequences with few switches, without a large increase in cost. For example, for a $(3,6)$ decoder starting at $p_e^{(0)}=0.09$, a single operating condition can provide a 32\% EDP improvement, using $\gammaseq= [[0.8\volt,2.1\ns]^{11}]$. The probability of a non-zero deviation in that schedule ranges from 0.6\% to 7.2\%. In the case of a $(3,30)$ decoder synthesized at a nominal $T_\mathrm{clk}=2.5\ns$, for $p_e^{(0)}=0.015$ the sequence $\gammaseq= [ [0.8\volt, 2.5\ns]^{12}, \allowbreak [1.0\volt, 2.5\ns] ]$ provides a 30\% EDP improvement, with non-zero deviation probabilities from 0 to 0.8\%. For a $(4,40)$ decoder, the single-parameter sequence $\gammaseq= [[ 0.8\volt, 2.8\ns]^{9}]$ provides a 39\% EDP improvement, with non-zero deviation probabilities from 1.6 to 4.6\%.
\section{Conclusion}\label{sec:conclusion} We presented a method for the design of synchronous circuit implementations of signal processing algorithms that permits timing violations without the need for hardware compensation. We introduced a model for the deviations occurring in LDPC decoder circuits affected by timing faults that represents the circuit behavior accurately \cite{leduc-primeau:2016a}, while remaining independent of the iterative progress of the decoder. In addition, we showed that in order to use density evolution to predict the performance of the faulty decoder, it is sufficient for the deviation model to have a weak symmetry property, which is more general than previously proposed sufficient properties. We then presented an approximate optimization method called DE-Gear-Shift to find sequences of circuit operating parameters that minimize the energy or the energy-delay product. The method is similar to the previously proposed Gear-Shift method, but relies on density evolution rather than EXIT charts to evaluate the average iterative progress of the decoder. Our results show that the best energy or EDP reduction is achieved by operating the circuit with a large number of timing violations (often with an average probability of non-zero deviation above 1\%). Furthermore, substantial savings can be achieved with few parameter switches, and without any compromise on circuit area or decoding performance. In this work, we only considered delay variations associated with the signal transitions at the input of the circuit. While the energy savings that result from tolerating these variations are already significant, we ultimately see quasi-synchronous systems as an approach for tolerating the large process variations found in near-threshold CMOS circuits and other emerging computing technologies, potentially enabling energy savings of an order of magnitude.
Furthermore, we believe this approach can be extended to other self-correcting algorithms, such as deep neural networks. \appendices \section{CAD Workflow}\label{sec:appendix:workflow} The deviations and the energy consumption are measured directly on optimized circuit models generated by a commercial synthesis tool (\textvtt{Cadence Encounter} \cite{cadence_encounter}). We use TSMC's 65~nm process with the \emph{tcbn65gplus} cell library \cite{tcbn65gplus}. In order to provide a fair assessment of the improvements provided by the quasi-synchronous circuit, we first synthesize a \emph{benchmark} circuit that represents a best effort at optimizing the metric of interest, for example energy consumption. Since we do not have a specific throughput constraint for the design, we synthesize the benchmark circuit at the standard supply voltage of the library ($\vdd=1.0$V), while the clock period is chosen as small as possible without causing a degradation of the target metric. Second, we synthesize a \emph{nominal} circuit that will serve as the basis for the quasi-synchronous design. In this work, we use a standard synthesis algorithm for the nominal circuit, and in all the cases that we report on, the nominal and the benchmark circuits are actually the same. Using a standard synthesis method for the nominal circuit allows using off-the-shelf tools, but is not ideal since the objective of a standard synthesis algorithm (to make all paths only as fast as the clock period) differs from the objective pursued when some timing violations are permitted. For example, results in \cite{kahng:2010} show that the power consumption of a circuit can be reduced by up to 32\% when the gate-sizing optimization takes into account the acceptable rate of timing violations. Therefore it is possible that our results could be improved by using a different synthesis algorithm. 
Once the circuit is synthesized, we perform a static timing analysis of the gate-level model at various supply voltages. All timing analyses (including at the nominal supply) are performed using timing libraries generated by the \textvtt{Cadence Encounter Library Characterization} tool. We then use this timing information in a functional simulation of the gate-level circuit to observe the dynamic effect of path delay variations and measure the deviation statistics. Any source of delay variation that can be simulated can be studied, but in this paper we focus on variations due to path activation, that is, the variations in delay caused by the different propagation times required by different input transitions. Note that other methods could be used to obtain the propagation delays, such as the method described in \cite{pirbadian:2014} based on analytical models. In addition to speeding up the characterization, such methods make it possible to consider the effect of process variations. Power estimation is performed by collecting switching activity data in the functional simulation and using the power estimation engine in \textvtt{Cadence Encounter}. However, because the circuit is operated in a quasi-synchronous manner, the clock period used to run the circuit is not necessarily the same as the nominal clock period. When that is the case, the power estimation generated by the synthesis tool cannot be used directly. First, the switching activity recorded during the functional simulation must be scaled so that it corresponds to the nominal clock period. The tool's power estimation then reports the dynamic power $P_\mathrm{dyn}$ and the static power $P_\mathrm{stat}$. The dynamic energy consumed during one clock cycle does not depend on the clock period, whereas the static energy does.
Therefore, the total energy consumed during one cycle by the quasi-synchronous circuit is given by $E_\mathrm{cycle} = P_\mathrm{dyn} T_\mathrm{clk,nom} + P_\mathrm{stat} T_\mathrm{clk}$, where $T_\mathrm{clk,nom}$ is the nominal clock period and $T_\mathrm{clk}$ is the actual clock period used to run the circuit. \section{Test Circuit Monte-Carlo Simulation}\label{sec:appendix:testcircuit} A suitable test circuit for a row-layered decoder architecture consists in implementing a single check node processor, as well as the necessary logic taken from the variable node processor block to send $d_v$ messages to the CNP, and receive one message from the CNP. This test circuit is shown in Fig.~\ref{fig:testcircuit}. It re-uses logic blocks that are found in the complete decoder, ensuring the accuracy of the deviation and energy measurements, and minimizing design time. The test circuit is used to evaluate the decoder's computation tree (shown in Fig.~\ref{fig:LDPC_tree_dev_model}). The VNP with index $1$, shown at the top, is always mapped to the VN that is at the head of the computation tree, while the VNPs at the bottom of the figure are mapped to different VNs as the CNP is successively mapped to each CN neighbor of the head VN. At any given clock cycle, a \emph{VNP front} block is mapped to a particular VN $i$. For illustrative purposes, we simply index the VN neighbors from $1$ to $d_c$, even if the VNs mapped to the bottom VNPs actually change at each layer. Each \emph{VNP front} block takes as input the previous belief total of that VN $\Lambda'_i$, and the previous CN-to-VN message corresponding to layer $\ell$, $\lambda_{i,J(i,\ell)}^{(t-1)}$. To perform the Monte-Carlo simulation, a \emph{VNP front} circuit block with index $i$ must send a message $\mu_i^{(t)}$, randomly generated according to a \gls{1D} normal distribution with error probability $p_e^{(t)}$. However, the only inputs that are controllable are $\Lambda'_i$ and $\lambda_{i,J(i,\ell)}^{(t-1)}$. 
To simplify the Monte-Carlo simulation, we disregard the true distribution of $\lambda_{i,J(i,\ell)}^{(t-1)}$ and generate it according to a \gls{1D} normal distribution. We also introduce another simplification: we assume that messages received at a VN only modify the total belief at the end of the iteration, as would be the case when using a flooding schedule. As a result, the messages $\mu_i^{(t)}$ are identically distributed with error rate parameter $p_e^{(t)}$ for all $\ell$. Note that these simplifications are not necessary, and they could be removed at the cost of a slightly more cumbersome Monte-Carlo simulation. To generate inputs with the appropriate distribution, we use the fact that $\Lambda'_i = \mu^{(t)}_{i,J(i,\ell)} + \lambda^{(t-1)}_{i,J(i,\ell)}$. On a cycle-free Tanner graph, $\mu^{(t)}_{i,J(i,\ell)}$ and $ \lambda^{(t-1)}_{i,J(i,\ell)}$ are independent, but naturally $\Lambda'_i$ and $\lambda^{(t-1)}_{i,J(i,\ell)}$ are not. Therefore, we generate $\mu^{(t)}_{i,J(i,\ell)}$ and $\lambda^{(t-1)}_{i,J(i,\ell)}$ and sum them to obtain $\Lambda'_i$. To complete the DE iteration, we want to measure an extrinsic message belonging to the next iteration. Because we assume a flooding schedule, this extrinsic message can be obtained by summing any set of $(d_v-1)$ messages in the current iteration. To achieve this, we start a DE iteration by setting $\Lambda'_1 \gets 0$ and $\lambda^{(t-1)}_{1,J(1,\ell)} \gets 0$ for all $\ell$. The desired extrinsic message then corresponds to the total belief output of the circuit $\Lambda^{(t)}_1$ after $d_v-1$ layers have been evaluated. Just like the processor used in the complete decoder, the test circuit has one input and one output register, as well as one internal pipeline register, for a latency of 3 clock cycles. In order to keep the pipeline fed, several distinct computation trees are evaluated in parallel during the Monte-Carlo simulation. 
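The input-generation step described above can be sketched with the standard library. The mapping from error probability to the mean of the normal distribution assumes the all-ones codeword ($x_i=+1$) and is our own rendering of the simplified setup; the function name is ours.

```python
import random
from statistics import NormalDist

def gen_vnp_inputs(p_e, sigma=1.0, rng=random):
    """Draw mu^(t) and lambda^(t-1) independently from a 1-D normal
    whose error probability (mass below zero) is p_e, then form
    Lambda' = mu + lambda, as described above."""
    mean = -sigma * NormalDist().inv_cdf(p_e)  # P(N(mean, sigma) < 0) = p_e
    mu = rng.gauss(mean, sigma)
    lam = rng.gauss(mean, sigma)
    return mu + lam, lam  # (Lambda'_i, lambda_prev)
```

Summing the two independent draws reproduces the correlation between $\Lambda'_i$ and $\lambda^{(t-1)}_{i,J(i,\ell)}$ noted in the text.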
\begin{figure}[tbp] \begin{center} \includegraphics[width=2in]{LOMS_test_tree_4} \caption{Block diagram of the test circuit.} \label{fig:testcircuit} \end{center} \end{figure} \section*{Acknowledgements} The authors wish to thank CMC Microsystems for providing access to the Cadence tools and TSMC 65nm CMOS technology, and Gilles Rust for advice on Cadence tools and cell library characterization. \bibliography{IEEEabrv,computing_refs.bib,article_refs.bib} \bibliographystyle{IEEEtran} \end{document}
TITLE: Explaining The Unbelievable Pendulum Catch QUESTION [20 upvotes]: What would be a theoretical explanation of an "ideal" 14:1 mass ratio in this experiment, also demonstrated in this video? The experiment ties one nut to one end of the string and 14 nuts to the other, then holds the string like this and lets go: The end with the single nut ends up wrapped round your finger and stops the nuts falling to the floor: Why is a 14:1 mass ratio required for this to happen? EDIT: Here is the set of equations I'm trying myself for this problem: $ l(t) = r^2(t) \alpha'(t) \\ T(t) = \text{max}(\mu g - k \alpha(t), 0) \\ l'(t) = g \cos{\alpha(t)} r(t) + T(t) r_0 \\ r''(t) = g \sin{\alpha(t)} - T(t) $ With $l(t)$ - angular momentum divided by smaller mass, $T(t)$ - string tension, $\mu$ - mass proportion, $k$ - friction coefficient, $r_0$ - pivot radius, $r(t)$ - string length, $\alpha(t)$ - string angle. REPLY [25 votes]: TL;DR: Mass ratio = 14 is not particularly special, but it is in a special region of mass ratios (about 11 to 14) that has optimal properties to wind the rope around the finger as much as possible. If you want to know why read the answer. If you just want to look at pretty gifs check it out (hat tip to @Ruslan for the animation idea). One can actually learn a lot from these movies! Especially if one considers that friction kicks in after probably about 2 windings to stop the rope from slipping along the finger, one can identify which mass ratios should work in practice. Only the experiment can tell the full result since there are a lot more factors not considered in the model here (air resistance, non-ideal rope, finite finger thickness, finger movement...). 
Code for animations if you want to run it yourself or adapt the equations of motion to something fancy (such as including friction):

import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
import numpy as np
# integrator for ordinary differential equations
from scipy.integrate import ode

def eoms_pendulum(t, y, params):
    """
    Equations of motion for the simple model.
    I was too dumb to do the geometry elegantly, so there are case distinctions...
    """
    # unpack #
    v1_x, v1_y, v2, x1, y1, y2 = y
    m1, m2, g, truncate_at_inversion = params
    if x1<=0 and y1<=0:
        # calc helpers #
        F1_g = m1*g
        F2_g = m2*g  # _g for "gravity"
        L_swing = np.sqrt( x1**2 + y1**2 )  # distance of mass 1 to the pendulum pivot
        Theta = np.arctan(y1/x1)  # angle
        dt_Theta = ( v1_y/x1 - v1_x*y1/x1**2 )/( 1. + y1**2/x1**2 )  # derivative of arctan
        help_term = -F2_g/m2 - F1_g/m1 * np.sin(Theta) - v1_x*np.sin(Theta)*dt_Theta + v1_y*np.cos(Theta)*dt_Theta
        F_r = help_term / ( -1./m1 - 1./m2 )  # _r for "rope", this formula comes from requiring a constant length rope
        # calc derivatives
        dt_v1_x = ( F_r*np.cos(Theta) ) / m1
        dt_v1_y = ( -F1_g + F_r*np.sin(Theta) ) / m1
        dt_v2 = ( F_r - F2_g ) / m2
        dt_x1 = v1_x
        dt_y1 = v1_y
        dt_y2 = v2
    elif x1>=0 and y1<=0:
        # calc helpers #
        F1_g = m1*g
        F2_g = m2*g
        L_swing = np.sqrt( x1**2 + y1**2 )
        Theta = np.arctan(-x1/y1)
        dt_Theta = -( v1_x/y1 - v1_y*x1/y1**2 )/( 1. + x1**2/y1**2 )
        help_term = -F2_g/m2 - F1_g/m1 * np.cos(Theta) - v1_x*np.cos(Theta)*dt_Theta - v1_y*np.sin(Theta)*dt_Theta
        F_r = help_term / ( -1./m1 - 1./m2 )
        # calc derivatives
        dt_v1_x = ( -F_r*np.sin(Theta) ) / m1
        dt_v1_y = ( -F1_g + F_r*np.cos(Theta) ) / m1
        dt_v2 = ( F_r - F2_g ) / m2
        dt_x1 = v1_x
        dt_y1 = v1_y
        dt_y2 = v2
    elif x1>=0 and y1>=0:
        # calc helpers #
        F1_g = m1*g
        F2_g = m2*g
        L_swing = np.sqrt( x1**2 + y1**2 )
        Theta = np.arctan(y1/x1)
        dt_Theta = ( v1_y/x1 - v1_x*y1/x1**2 )/( 1. + y1**2/x1**2 )
        help_term = -F2_g/m2 + F1_g/m1 * np.sin(Theta) + v1_x*np.sin(Theta)*dt_Theta - v1_y*np.cos(Theta)*dt_Theta
        F_r = help_term / ( -1./m1 - 1./m2 )
        # calc derivatives
        dt_v1_x = ( -F_r*np.cos(Theta) ) / m1
        dt_v1_y = ( -F1_g - F_r*np.sin(Theta) ) / m1
        dt_v2 = ( F_r - F2_g ) / m2
        dt_x1 = v1_x
        dt_y1 = v1_y
        dt_y2 = v2
    elif x1<=0 and y1>=0:
        # calc helpers #
        F1_g = m1*g
        F2_g = m2*g
        L_swing = np.sqrt( x1**2 + y1**2 )
        Theta = np.arctan(-y1/x1)
        dt_Theta = -( v1_y/x1 - v1_x*y1/x1**2 )/( 1. + y1**2/x1**2 )
        help_term = -F2_g/m2 + F1_g/m1 * np.sin(Theta) - v1_x*np.sin(Theta)*dt_Theta - v1_y*np.cos(Theta)*dt_Theta
        F_r = help_term / ( -1./m1 - 1./m2 )
        # calc derivatives
        dt_v1_x = ( F_r*np.cos(Theta) ) / m1
        dt_v1_y = ( -F1_g - F_r*np.sin(Theta) ) / m1
        dt_v2 = ( F_r - F2_g ) / m2
        dt_x1 = v1_x
        dt_y1 = v1_y
        dt_y2 = v2
    if truncate_at_inversion:
        if dt_y2 > 0.:
            return np.zeros_like(y)
    return [dt_v1_x, dt_v1_y, dt_v2, dt_x1, dt_y1, dt_y2]

def total_winding_angle(times, trajectory):
    """
    Calculates the total winding angle for a given trajectory
    """
    dt = times[1] - times[0]
    v1_x, v1_y, v2, x1, y1, y2 = [trajectory[:, i] for i in range(6)]
    dt_theta = ( x1*v1_y - y1*v1_x ) / (x1**2 + y1**2)  # from cross-product
    theta_tot = np.cumsum(dt_theta) * dt
    return theta_tot

################################################################################
### setup ###
################################################################################

trajectories = []
m1 = 1
m2_list = np.arange(2, 20, 2)[0:9]
ntimes = 150
for m2 in m2_list:
    # params #
    params = [
        m1,    # m1
        m2,    # m2
        9.81,  # g
        False  # If true, truncates the motion when m2 moves back upwards
    ]
    # initial conditions #
    Lrope = 1.0  # Length of the rope, initially positioned such that m1 is L from the pivot
    init_cond = [
        0.0,       # v1_x
        0.,        # v1_y
        0.,        # v2
        -Lrope/2,  # x1
        0.0,       # y1
        -Lrope/2,  # y2
    ]
    # integration time range #
    times = np.linspace(0, 1.0, ntimes)
    # trajectory array to store result #
    trajectory = np.empty((len(times), len(init_cond)), dtype=np.float64)
    # helper #
    show_prog = True
    # check eoms at starting position #
    #print(eoms_pendulum(0, init_cond, params))

    ############################################################################
    ### numerical integration ###
    ############################################################################

    r = ode(eoms_pendulum).set_integrator('zvode', method='adams', with_jacobian=False)  # integrator and eoms
    r.set_initial_value(init_cond, times[0]).set_f_params(params)  # setup
    dt = times[1] - times[0]  # time step
    # integration (loop time step)
    for i, t_i in enumerate(times):
        trajectory[i,:] = r.integrate(r.t+dt)  # integration
    trajectories.append(trajectory)

# ### extract ###
# x1 = trajectory[:, 3]
# y1 = trajectory[:, 4]
# x2 = np.zeros_like(trajectory[:, 5])
# y2 = trajectory[:, 5]
# L = np.sqrt(x1**2 + y1**2)  # rope part connecting m1 and pivot
# Ltot = -y2 + L  # total rope length

################################################################################
### Visualize trajectory ###
################################################################################

import numpy as np
from matplotlib import pyplot as plt
from matplotlib.animation import FuncAnimation

plt.style.use('seaborn-pastel')
n = 3
m = 3
axes = []
m1_ropes = []
m2_ropes = []
m1_markers = []
m2_markers = []
fig = plt.figure(figsize=(10,10))
for sp, m2_ in enumerate(m2_list):
    ax = fig.add_subplot(n, m, sp+1, xlim=(-0.75, 0.75), ylim=(-1, 0.5), xticks=[], yticks=[])
    m1_rope, = ax.plot([], [], lw=1, color='k')
    m2_rope, = ax.plot([], [], lw=1, color='k')
    m1_marker, = ax.plot([], [], marker='o', markersize=10, color='r', label=r'$m_1 = {}$'.format(m1))
    m2_marker, = ax.plot([], [], marker='o', markersize=10, color='b', label=r'$m_2 = {}$'.format(m2_))
    axes.append(ax)
    m1_ropes.append(m1_rope)
    m2_ropes.append(m2_rope)
    m1_markers.append(m1_marker)
    m2_markers.append(m2_marker)
    ax.set_aspect('equal', adjustable='box')
    ax.legend(loc='upper left', fontsize=12, ncol=2, handlelength=1, bbox_to_anchor=(0.1, 1.06))
plt.tight_layout()

def init():
    for m1_rope, m2_rope, m1_marker, m2_marker in zip(m1_ropes, m2_ropes, m1_markers, m2_markers):
        m1_rope.set_data([], [])
        m2_rope.set_data([], [])
        m1_marker.set_data([], [])
        m2_marker.set_data([], [])
    return (*m1_ropes, *m2_ropes, *m1_markers, *m2_markers)

def animate(i):
    for sp, (m1_rope, m2_rope, m1_marker, m2_marker) in enumerate(zip(m1_ropes, m2_ropes, m1_markers, m2_markers)):
        x1 = trajectories[sp][:, 3]
        y1 = trajectories[sp][:, 4]
        x2 = np.zeros_like(trajectories[sp][:, 5])
        y2 = trajectories[sp][:, 5]
        m1_rope.set_data([x1[i], 0], [y1[i], 0])
        m2_rope.set_data([x2[i], 0], [y2[i], 0])
        m1_marker.set_data(x1[i], y1[i])
        m2_marker.set_data(x2[i], y2[i])
    return (*m1_ropes, *m2_ropes, *m1_markers, *m2_markers)

anim = FuncAnimation(fig, animate, init_func=init, frames=len(trajectories[0][:, 0]), interval=500/ntimes, blit=True)
anim.save('PendulumAnim.gif', writer='imagemagick', dpi=50)
plt.show()

Main argument

Winding angle behavior in the no-friction, thin-pivot case

My answer is based on a simple model for the system (no friction, infinitely thin pivot, ideal rope; see also the detailed description below), from which one can actually get some very nice insight into why the region around 14 is special. As a quantity of interest, we define a winding angle as a function of time, $\theta(t)$. It indicates which total angle the small mass has travelled around the finger: $\theta(t)=2\pi$ corresponds to one full revolution, $\theta(t)=4\pi$ corresponds to two revolutions, and so on. One can then plot the winding angle as a function of time and mass ratio for the simple model: The color axis shows the winding angle. We can clearly see that between mass ratios 12-14, the winding angle goes up continuously in time and reaches a high maximum. The first maxima in time for each mass ratio are indicated by the magenta crosses.
Also note that the weird discontinuities are places where the swinging mass goes through zero/hits the finger, where the winding angle is not well defined. To see the behaviour in a bit more detail, let us look at some slices of the 2D plot ($2\pi$ steps/full revolutions marked as horizontal lines): We see that mass ratios 12, 13, 14 behave very similarly. 16 has a turning point after 4 revolutions, but I would expect this to still work in practice, since when the rope is wrapped 4 times around the finger, there should be enough friction to clip it. For mass ratio 5, on the other hand, we do not even get 2 revolutions and the rope would probably slip. If you want to reproduce these plots, here is my code. Feel free to make adaptations and post them as an answer. It would be interesting, for example, if one could include friction in a simple way to quantify the clipping effect at the end. I imagine this will be hard, though, and one would need at least one extra parameter.

import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
import numpy as np
from scipy.signal import argrelextrema
# integrator for ordinary differential equations
from scipy.integrate import ode

def eoms_pendulum(t, y, params):
    """
    Equations of motion for the simple model.
    I was too dumb to do the geometry elegantly, so there are case distinctions...
    """
    # unpack #
    v1_x, v1_y, v2, x1, y1, y2 = y
    m1, m2, g, truncate_at_inversion = params
    if x1<=0 and y1<=0:
        # calc helpers #
        F1_g = m1*g
        F2_g = m2*g  # _g for "gravity"
        L_swing = np.sqrt( x1**2 + y1**2 )  # distance of mass 1 to the pendulum pivot
        Theta = np.arctan(y1/x1)  # angle
        dt_Theta = ( v1_y/x1 - v1_x*y1/x1**2 )/( 1. + y1**2/x1**2 )  # derivative of arctan
        help_term = -F2_g/m2 - F1_g/m1 * np.sin(Theta) - v1_x*np.sin(Theta)*dt_Theta + v1_y*np.cos(Theta)*dt_Theta
        F_r = help_term / ( -1./m1 - 1./m2 )  # _r for "rope", this formula comes from requiring a constant length rope
        # calc derivatives
        dt_v1_x = ( F_r*np.cos(Theta) ) / m1
        dt_v1_y = ( -F1_g + F_r*np.sin(Theta) ) / m1
        dt_v2 = ( F_r - F2_g ) / m2
        dt_x1 = v1_x
        dt_y1 = v1_y
        dt_y2 = v2
    elif x1>=0 and y1<=0:
        # calc helpers #
        F1_g = m1*g
        F2_g = m2*g
        L_swing = np.sqrt( x1**2 + y1**2 )
        Theta = np.arctan(-x1/y1)
        dt_Theta = -( v1_x/y1 - v1_y*x1/y1**2 )/( 1. + x1**2/y1**2 )
        help_term = -F2_g/m2 - F1_g/m1 * np.cos(Theta) - v1_x*np.cos(Theta)*dt_Theta - v1_y*np.sin(Theta)*dt_Theta
        F_r = help_term / ( -1./m1 - 1./m2 )
        # calc derivatives
        dt_v1_x = ( -F_r*np.sin(Theta) ) / m1
        dt_v1_y = ( -F1_g + F_r*np.cos(Theta) ) / m1
        dt_v2 = ( F_r - F2_g ) / m2
        dt_x1 = v1_x
        dt_y1 = v1_y
        dt_y2 = v2
    elif x1>=0 and y1>=0:
        # calc helpers #
        F1_g = m1*g
        F2_g = m2*g
        L_swing = np.sqrt( x1**2 + y1**2 )
        Theta = np.arctan(y1/x1)
        dt_Theta = ( v1_y/x1 - v1_x*y1/x1**2 )/( 1. + y1**2/x1**2 )
        help_term = -F2_g/m2 + F1_g/m1 * np.sin(Theta) + v1_x*np.sin(Theta)*dt_Theta - v1_y*np.cos(Theta)*dt_Theta
        F_r = help_term / ( -1./m1 - 1./m2 )
        # calc derivatives
        dt_v1_x = ( -F_r*np.cos(Theta) ) / m1
        dt_v1_y = ( -F1_g - F_r*np.sin(Theta) ) / m1
        dt_v2 = ( F_r - F2_g ) / m2
        dt_x1 = v1_x
        dt_y1 = v1_y
        dt_y2 = v2
    elif x1<=0 and y1>=0:
        # calc helpers #
        F1_g = m1*g
        F2_g = m2*g
        L_swing = np.sqrt( x1**2 + y1**2 )
        Theta = np.arctan(-y1/x1)
        dt_Theta = -( v1_y/x1 - v1_x*y1/x1**2 )/( 1. + y1**2/x1**2 )
        help_term = -F2_g/m2 + F1_g/m1 * np.sin(Theta) - v1_x*np.sin(Theta)*dt_Theta - v1_y*np.cos(Theta)*dt_Theta
        F_r = help_term / ( -1./m1 - 1./m2 )
        # calc derivatives
        dt_v1_x = ( F_r*np.cos(Theta) ) / m1
        dt_v1_y = ( -F1_g - F_r*np.sin(Theta) ) / m1
        dt_v2 = ( F_r - F2_g ) / m2
        dt_x1 = v1_x
        dt_y1 = v1_y
        dt_y2 = v2
    if truncate_at_inversion:
        if dt_y2 > 0.:
            return np.zeros_like(y)
    return [dt_v1_x, dt_v1_y, dt_v2, dt_x1, dt_y1, dt_y2]

def total_winding_angle(times, trajectory):
    """
    Calculates the total winding angle for a given trajectory
    """
    dt = times[1] - times[0]
    v1_x, v1_y, v2, x1, y1, y2 = [trajectory[:, i] for i in range(6)]
    dt_theta = ( x1*v1_y - y1*v1_x ) / (x1**2 + y1**2)  # from cross-product
    theta_tot = np.cumsum(dt_theta) * dt
    return theta_tot

def find_nearest_idx(array, value):
    """
    Find the closest element in an array and return the corresponding index.
    """
    array = np.asarray(array)
    idx = (np.abs(array-value)).argmin()
    return idx

################################################################################
### setup ###
################################################################################

theta_tot_traj_list = []
# scan mass ratio
m2_list = np.linspace(5,17,200)
for m2_ in m2_list:
    # params #
    params = [
        1,     # m1
        m2_,   # m2
        9.81,  # g
        False  # If true, truncates the motion when m2 moves back upwards
    ]
    # initial conditions #
    Lrope = 1.0  # Length of the rope, initially positioned such that m1 is L from the pivot
    init_cond = [
        0.0,       # v1_x
        0.,        # v1_y
        0.,        # v2
        -Lrope/2,  # x1
        0.0,       # y1
        -Lrope/2,  # y2
    ]
    # integration time range #
    times = np.linspace(0, 2.2, 400)
    # trajectory array to store result #
    trajectory = np.empty((len(times), len(init_cond)), dtype=np.float64)
    # helper #
    show_prog = True
    # check eoms at starting position #
    #print(eoms_pendulum(0, init_cond, params))

    ############################################################################
    ### numerical integration ###
    ############################################################################

    r = ode(eoms_pendulum).set_integrator('zvode', method='adams', with_jacobian=False)  # integrator and eoms
    r.set_initial_value(init_cond, times[0]).set_f_params(params)  # setup
    dt = times[1] - times[0]  # time step
    # integration (loop time step)
    for i, t_i in enumerate(times):
        trajectory[i,:] = r.integrate(r.t+dt)  # integration

    ### extract ###
    x1 = trajectory[:, 3]
    y1 = trajectory[:, 4]
    x2 = np.zeros_like(trajectory[:, 5])
    y2 = trajectory[:, 5]
    L = np.sqrt(x1**2 + y1**2)  # rope part connecting m1 and pivot
    Ltot = -y2 + L  # total rope length
    theta_tot_traj = total_winding_angle(times, trajectory)
    theta_tot_traj_list.append(theta_tot_traj)
theta_tot_traj_list = np.asarray(theta_tot_traj_list)

#maxima_idxs = np.argmax(theta_tot_traj_list, axis=-1)
maxima_idxs = []
for i,m_ in enumerate(m2_list):
    maxima_idx = argrelextrema(theta_tot_traj_list[i,:], np.greater)[0]
    if maxima_idx.size == 0:
        maxima_idxs.append(-1)
    else:
        maxima_idxs.append(maxima_idx[0])
maxima_idxs = np.asarray(maxima_idxs)

### 2D plot ###
fig = plt.figure()
plt.axhline(14, color='r', linewidth=2, dashes=[1,1])
plt.imshow(theta_tot_traj_list, aspect='auto', origin='lower', extent=[times[0], times[-1], m2_list[0], m2_list[-1]])
plt.plot(times[maxima_idxs], m2_list, 'mx')
plt.xlabel("Time")
plt.ylabel("Mass ratio")
plt.title("Winding angle")
plt.colorbar()
fig.savefig('winding_angle.png')
plt.show()

fig = plt.figure()
slice_list = [5, 12, 13, 14, 16]
for x_ in [0.,1.,2.,3.,4.,5.]:
    plt.axhline(x_*2.*np.pi, color='k', linewidth=1)
for i, slice_val in enumerate(slice_list):
    slice_idx = find_nearest_idx(m2_list, slice_val)
    plt.plot(times, theta_tot_traj_list[slice_idx, :], label='Mass ratio: {}'.format(slice_val))
plt.xlabel('Time')
plt.ylabel('Winding angle')
plt.legend()
fig.savefig('winding_angle2.png')
plt.show()

Details

The simple model

The simple model used above and (probably) the simplest way to model the system is to assume:

- An ideal, infinitely thin rope.
- An infinitely thin pivot that the rope wraps around (the finger in the video).
- No friction.

Especially the no-friction assumption is clearly flawed, because the effect of stopping completely relies on friction. But as we saw above, one can still get some insight into the initial dynamics anyway and then think about what friction will do to change this. If someone feels motivated, I challenge you to include friction in the model and change my code! Under these assumptions, one can set up a system of coupled differential equations using Newton's laws, which can easily be solved numerically. I won't go into detail on the geometry and derivation; I'll just give some code below for people to check and play with. Disclaimer: I am not sure my equations of motion are completely right. I did some checks and it looks reasonable, but feel free to fill in your own version and post an answer.

Geometry

The geometry assumed is like this: From the picture, we can get the equations of motion as follows:
$$ m_1 \dot{v}_{x,1} = F_\mathrm{rope} \cos(\theta) \,, \\ m_1 \dot{v}_{y,1} = -F_{g,1} + F_\mathrm{rope} \sin(\theta) \,, \\ m_2 \dot{v}_{y,2} = -F_{g,2} + F_\mathrm{rope} \,, \\ \dot{x}_1 = v_{x,1} \,, \\ \dot{y}_1 = v_{y,1} \,, \\ \dot{y}_2 = v_{y,2} \,. $$
This is just Newton's laws for the geometry, written as a set of first-order coupled differential equations, which can easily be solved in scipy (see code). The hard bit is to find the rope force $F_\textrm{rope}$. It is constrained by the ideal rope condition that the total rope length does not change in time. Following this through, I got
$$ F_\textrm{rope} = \frac{\frac{F_{g,2}}{m_2} + \frac{F_{g,1}}{m_1}\sin(\theta) + v_{x,1}\sin(\theta)\dot{\theta} - v_{y,1}\cos(\theta)\dot{\theta}}{\frac{1}{m_1} + \frac{1}{m_2}} \,. $$
Note that my way of writing the solution is not particularly elegant, and as a result some of these formulas only apply in the lower left quadrant ($x_1<0$, $y_1<0$).
The other quadrants are implemented in the code too. As the initial position, we will consider $x_1 = -L/2$, $y_2 = -L/2$, similarly to the video. $y_2$ does not matter too much; it simply causes an overall displacement of mass 2. We set $L=1$ and $g=9.81$. Someone else can work out the units ;-)

Let's do it in python

I already gave some code snippets above. You need numpy and matplotlib to run it. Maybe python3 would be good. If you want to plot static trajectories you can use:

################################################################################
### setup ###
################################################################################

# params #
params = [
    1,     # m1
    14.,   # m2
    9.81,  # g
    False  # If true, truncates the motion when m2 moves back upwards
]
# initial conditions #
Lrope = 1.0  # Length of the rope, initially positioned such that m1 is L from the pivot
init_cond = [
    0.0,       # v1_x
    0.,        # v1_y
    0.,        # v2
    -Lrope/2,  # x1
    0.0,       # y1
    -Lrope/2,  # y2
]
# integration time range #
times = np.linspace(0, 1.0, 400)
# trajectory array to store result #
trajectory = np.empty((len(times), len(init_cond)), dtype=np.float64)
# helper #
show_prog = True
# check eoms at starting position #
print(eoms_pendulum(0, init_cond, params))

################################################################################
### numerical integration ###
################################################################################

r = ode(eoms_pendulum).set_integrator('zvode', method='adams', with_jacobian=False)  # integrator and eoms
r.set_initial_value(init_cond, times[0]).set_f_params(params)  # setup
dt = times[1] - times[0]  # time step
# integration (loop time step)
for i, t_i in enumerate(times):
    trajectory[i,:] = r.integrate(r.t+dt)  # integration

### extract ###
x1 = trajectory[:, 3]
y1 = trajectory[:, 4]
x2 = np.zeros_like(trajectory[:, 5])
y2 = trajectory[:, 5]
L = np.sqrt(x1**2 + y1**2)  # rope part connecting m1 and pivot
Ltot = -y2 + L  # total rope length

################################################################################
### Visualize trajectory ###
################################################################################

fig = plt.figure(figsize=(15,7))
plt.subplot(121)
titleStr = "m1: {}, m2: {}, g: {}, L: {}".format(params[0], params[1], params[2], Lrope)
fs = 8
plt.axvline(0, color='k', linewidth=1, dashes=[1,1])
plt.axhline(0, color='k', linewidth=1, dashes=[1,1])
plt.scatter(x1, y1, c=times, label="Mass 1")
plt.scatter(x2, y2, marker='x', c=times, label='Mass 2')
#plt.xlim(-1.5, 1.5)
#plt.ylim(-2, 1.)
plt.xlabel('x position', fontsize=fs)
plt.ylabel('y position', fontsize=fs)
plt.gca().set_aspect('equal', adjustable='box')
cbar = plt.colorbar()
cbar.ax.set_ylabel('Time', rotation=270, fontsize=fs)
plt.title(titleStr, fontsize=fs)
plt.legend()
plt.subplot(122)
plt.axhline(0., color='k', dashes=[1,1])
plt.plot(times, x1, '-', label="Mass 1, x pos")
plt.plot(times, y1, '-', label="Mass 1, y pos")
plt.plot(times, y2, '--', label="Mass 2, y pos")
plt.xlabel('Time')
plt.legend()
plt.tight_layout()
fig.savefig('{}-{}.pdf'.format(int(params[1]), int(params[0])))
plt.close()

# check that total length of the rope is constant #
plt.figure()
plt.axhline(0, color='k', linewidth=1, dashes=[1,1])
plt.axvline(0.4, color='k', linewidth=1, dashes=[1,1])
plt.plot(times, Ltot, label='total rope length')
plt.plot(times, L, label='rope from mass 1 to pivot')
plt.legend()
plt.tight_layout()
plt.close()

The dynamics for a 1/14 mass ratio

Here is what the dynamics of the pendulum look like for a mass ratio of 14 ($m_1 = 1$, $m_2 = 14$): The left panel shows the trajectories of the two masses in the x-y plane, with time being indicated by the color axis. This is supposed to be a front view of the video performance. We see that mass 1 wraps around the pivot (at x=0, y=0) multiple times (see the winding angle picture above). After a few revolutions, the model is probably not representative anymore. Instead, friction would start kicking in and clip the rope. In our simplified picture, the particle keeps going. What is interesting is that even without friction, the lower particle stops at some point and even comes back up, causing stable oscillation!

What changes with the mass ratio?

We already saw what changes with the mass ratio in the winding angle picture. Just for visual intuition, here is the corresponding picture for a 1/5 mass ratio: For a higher mass ratio (1/20):
TITLE: Sufficient condition for $k$-colorability

QUESTION [4 upvotes]: We know that a graph is $2$-colorable iff it has no odd cycles. I am asked to generalize this statement to the following: a graph is $k$-colorable if each vertex is in fewer than $\binom{k}{2}$ distinct odd cycles. I am having trouble with this proof: let's prove by induction on the size of the vertex set of $G$. Clearly it is true if $|V(G)| \leq k$. Suppose it is true for $|V(G)| < n$, and let $G' = G - \{x\}$. Since removing a vertex does not create more cycles, we have that $G'$ is $k$-colorable. Now we have to show that we can color $x$ without creating a conflict. But how to proceed? Help is greatly appreciated.

REPLY [2 votes]: The idea in the last step you're having trouble with is essentially the same as the idea of "Kempe chains" in the proof of the five-color theorem. If we're adding the vertex $x$ back in, and its neighbors don't already use all $k$ colors, then it's easy to color $x$: just give it a color that's not used by its neighbors. If all $k$ colors are used on the neighbors of $x$, we may try the following algorithm:

1. Let Azure and Beige be any two of the colors.
2. Let $G_{AB}$ be the graph obtained by the following process: start with all neighbors of $x$ which are colored Azure; next, add all neighbors of those vertices which are colored Beige; next, add all neighbors of those vertices which are colored Azure; keep going until there are no more vertices to add.
3. Reverse the colors of $G_{AB}$: switch Azure to Beige and Beige to Azure.
4. Color $x$ Azure.

If all goes well, then in the new coloring $x$ no longer has any neighbors colored Azure: all of them have been switched to Beige. So we are free to color $x$ Azure and get a $k$-coloring of $G$. The trouble is that $G_{AB}$ could eventually include some neighbors of $x$ which are colored Beige.
If it does, then reversing the colors of $G_{AB}$ gets rid of all neighbors of $x$ colored Azure (turning them into Beige), but turns some of $x$'s Beige neighbors into Azure, so $x$ still has Azure neighbors, and there's no way to color $x$. However, if that happens, then there is an Azure-Beige odd cycle containing $x$: a cycle that starts at $x$ and goes through $G_{AB}$, alternating Azure-Beige-Azure-Beige-...-Azure-Beige until it comes back to $x$. By assumption, there are fewer than $\binom k2$ odd cycles through $x$. Well, there are $\binom k2$ pairs of colors we could have used in place of Azure and Beige. Therefore, there is a pair of colors that does not result in such a cycle, and our recoloring algorithm will work for that pair of colors.
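The color-swapping step is easy to experiment with. Below is a minimal sketch (my own illustration, not from the answer) of the Azure-Beige swap on an adjacency-list graph; the function name and the dictionary representation are hypothetical:

```python
from collections import deque

def kempe_swap(coloring, adj, start, a, b):
    """Flip colors a and b on the connected component, containing `start`,
    of the subgraph induced by the vertices colored a or b."""
    component, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            # grow the chain only through vertices colored a or b
            if v not in component and coloring[v] in (a, b):
                component.add(v)
                queue.append(v)
    for v in component:  # reverse the two colors on the component
        coloring[v] = b if coloring[v] == a else a
    return component
```

In the proof's setting, one would run this once per color pair, starting from an Azure-colored neighbor of $x$; if the component never reaches a Beige-colored neighbor of $x$, the swap frees the color Azure for $x$.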
\begin{document}

\maketitle

\begin{abstract}
We determine the power of the weighted sum scalarization with respect to the computation of approximations for general multiobjective minimization and maximization problems. Additionally, we introduce a new multi-factor notion of approximation that is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives.
\smallskip
For minimization problems, we provide an efficient algorithm that computes an approximation of a multiobjective problem by using an exact or approximate algorithm for its weighted sum scalarization. In case that an exact algorithm for the weighted sum scalarization is used, this algorithm comes arbitrarily close to the best approximation quality that is obtainable by supported solutions -- both with respect to the common notion of approximation and with respect to the new multi-factor notion. Moreover, the algorithm yields the currently best approximation results for several well-known multiobjective minimization problems. For maximization problems, however, we show that a polynomial approximation guarantee can, in general, not be obtained in more than one of the objective functions simultaneously by supported solutions.
\end{abstract}

\begin{keywords}
multiobjective optimization, approximation, weighted sum scalarization
\end{keywords}

\begin{AMS}
90C29, 90C59
\end{AMS}

\section{Introduction}

Almost any real-world optimization problem asks for optimizing more than one objective function (e.g., the minimization of cost and time in transportation systems or the maximization of profit and safety in investments). Clearly, these objectives are conflicting, often incommensurable, and, yet, they have to be taken into account simultaneously. The discipline dealing with such problems is called \emph{multiobjective optimization}.
Typically, multiobjective optimization problems are solved according to the Pareto principle of optimality: a solution is called \emph{efficient} (or \emph{Pareto optimal}) if no other feasible solution exists that is not worse in any objective function and better in at least one objective. The images of the efficient solutions in the objective space are called \emph{nondominated points}. In contrast to single objective optimization, where one typically asks for one optimal solution, the main goal of multiobjective optimization is to compute the set of all nondominated points and, for each of them, one corresponding efficient solution. Each of these solutions corresponds to a different compromise among the set of objectives and may potentially be relevant for a decision maker. \smallskip Several results in the literature, however, show that multiobjective optimization problems are hard to solve exactly~\cite{Ehrgott:book,Ehrgott:hard-to-say} and, in addition, the cardinalities of the set of nondominated points (the \emph{nondominated set}) and the set of efficient solutions (the \emph{efficient set}) may be exponentially large for discrete problems (and are typically infinite for continuous problems). This impairs the applicability of exact solution methods to real-life problems and provides a strong motivation for studying \emph{approximations of multiobjective optimization problems}. \smallskip Both exact and approximate solution methods for multiobjective optimization problems often resort to using single objective auxiliary problems, which are called \emph{scalarizations} of the original multiobjective problem. This refers to the transformation of a multiobjective optimization problem into a single objective auxiliary problem based on a procedure that might use additional parameters, auxiliary points, or variables. 
The resulting scalarized optimization problems are then solved using methods from single objective optimization and the obtained solutions are interpreted in the context of Pareto optimality. \smallskip The simplest and most widely used scalarization technique is the \emph{weighted sum scalarization} (see, e.g.,~\cite{Ehrgott:book}). Here, the scalarized auxiliary problem is constructed by assigning a weight to each of the objective functions and summing up the resulting weighted objective functions in order to obtain the objective function of the scalarized problem. If the weights are chosen to be positive, then every optimal solution of the resulting \emph{weighted sum problem} is efficient. Moreover, the weighted sum scalarization does not change the feasible set and, in many cases, boils down to the single objective version of the given multiobjective problem --- which represents an important advantage of this scalarization especially for combinatorial problems. However, only some efficient solutions (called \emph{supported solutions}) can be obtained by means of the weighted sum scalarization, while many other efficient solutions (called \emph{unsupported solutions}) cannot. Consequently, a natural question is to determine which approximations of the whole efficient set can be obtained by using this very important scalarization technique. \smallskip \subsection{Previous work}\enlargethispage{\baselineskip} Besides many specialized approximation algorithms for particular multiobjective optimization problems, there exist several general approximation methods that can be applied to broad classes of multiobjective problems. An extensive survey of these general approximation methods is provided in~\cite{Herzel+etal:survey}. 
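\smallskip A minimal (hypothetical) instance may help to make the distinction concrete. Consider a biobjective minimization problem whose feasible solutions have exactly the three images
\begin{equation*}
y^1=(0,3),\qquad y^2=(2,2),\qquad y^3=(3,0),
\end{equation*}
all of which are nondominated. For any weight $\lambda\in[0,1]$, the weighted sum value of $y^2$ is $2\lambda + 2(1-\lambda) = 2$, whereas $\min\{3\lambda,\,3(1-\lambda)\}\leq \frac{3}{2} < 2$, so $y^1$ or $y^3$ always attains a strictly smaller weighted sum value than $y^2$. Hence, the solutions with images $y^1$ and $y^3$ are supported, while the efficient solution with image $y^2$ is unsupported and cannot be obtained from any weighted sum scalarization.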
Most of these general approximation methods for multiobjective problems are based on the seminal work of Papadimitriou and Yannakakis~\cite{Papadimitriou+Yannakakis:multicrit-approx}, who present a method for generating a $(1+\varepsilon,\dots,1+\varepsilon)$-approximation for general multiobjective minimization and maximization problems with a constant number of positive-valued, polynomially computable objective functions. They show that a $(1+\varepsilon,\dots,1+\varepsilon)$-approximation with size polynomial in the encoding length of the input and $\frac{1}{\varepsilon}$ always exists. Moreover, their results show that the construction of such an approximation is possible in (fully) polynomial time, i.e., the problem admits a \emph{multiobjective (fully) polynomial-time approximation scheme} or \emph{MPTAS} (\emph{MFPTAS}), if and only if a certain auxiliary problem called the \emph{gap problem} can be solved in (fully) polynomial time. More recent articles building upon the results of~\cite{Papadimitriou+Yannakakis:multicrit-approx} present methods that additionally yield bounds on the size of the computed $(1+\varepsilon,\dots,1+\varepsilon)$-approximation relative to the size of the smallest $(1+\varepsilon,\dots,1+\varepsilon)$-approximation possible~\cite{Vassilvitskii+Yannakakis:trade-off-curves,Diakonikolas+Yannakakis:approx-pareto-sets,Bazgan+etal:min-pareto}. Moreover, it has recently been shown in~\cite{Bazgan+etal:one-exact} that an even better $(1,1+\varepsilon,\dots,1+\varepsilon)$-approximation (i.e., an approximation that is exact in one objective function and $(1+\varepsilon)$-approximate in all other objective functions) always exists, and that such an approximation can be computed in (fully) polynomial time if and only if the so-called \emph{dual restrict problem} (introduced in~\cite{Diakonikolas+Yannakakis:approx-pareto-sets}) can be solved in (fully) polynomial time. 
\smallskip Other works study how the weighted sum scalarization can be used in order to compute a set of solutions such that the convex hull of their images in the objective space yields an approximation guarantee of $(1+\varepsilon,\dots,1+\varepsilon)$~\cite{Diakonikolas+Yannakakis:SODA08,Diakonikolas:Phd,Diakonikolas+Yannakakis:chord-algorithm}. Using techniques similar to ours, Diakonikolas and Yannakakis~\cite{Diakonikolas+Yannakakis:SODA08} show that such a so-called \emph{$\varepsilon$-convex Pareto set} can be computed in (fully) polynomial time if and only if the weighted sum scalarization admits a (fully) polynomial-time approximation scheme. Additionally, they consider questions regarding the cardinality of $\varepsilon$-convex Pareto sets. \smallskip Besides the general approximation methods mentioned above that work for both minimization and maximization problems, there exist several general approximation methods that are restricted either to minimization problems or to maximization problems. For \emph{minimization} problems, there are two general approximation methods that are both based on using (approximations of) the weighted sum scalarization. The previously best general approximation method for multiobjective minimization problems with an arbitrary constant number of objectives that uses the weighted sum scalarization can be obtained by combining two results of Gla\ss er et al.~\cite{Glasser+etal:multi-hardness,Glasser+etal:CiE2010}. They introduce another auxiliary problem called the \emph{approximate domination problem}, which is similar to the gap problem. Gla\ss er et al. show that, if this problem is solvable in polynomial time for some approximation factor $\alpha\geq 1$, then an approximating set providing an approximation factor of $\alpha\cdot (1+\varepsilon)$ in every objective function can be computed in fully polynomial time for every $\varepsilon>0$. 
Moreover, they show that the approximate domination problem with $\alpha\colonequals\sigma\cdot p$ can be solved by using a $\sigma$-approximation algorithm for the weighted sum scalarization of the $p$-objective problem. Together, this implies that a $((1+\varepsilon)\cdot\sigma\cdot p,\ldots,(1+\varepsilon)\cdot\sigma\cdot p)$-approximation can be computed in fully polynomial time for $p$-objective minimization problems provided that the objective functions are positive-valued and polynomially computable and a $\sigma$-approximation algorithm for the weighted sum scalarization exists. As this result is not explicitly stated in~\cite{Glasser+etal:multi-hardness,Glasser+etal:CiE2010}, no bounds on the running time are provided. For \emph{biobjective} minimization problems, Halffmann et al.~\cite{Halffmann+etal:bicriteria} show how to obtain a $(\sigma\cdot(1+2\varepsilon),\sigma\cdot(1+\frac{2}{\varepsilon}))$-approximation for any given $0<\varepsilon\leq 1$ if a polynomial-time $\sigma$-approximation algorithm for the weighted sum scalarization is given. \smallskip Obtaining general approximation methods for multiobjective \emph{maximization} problems using the weighted sum scalarization seems to be much harder than for minimization problems. Indeed, Gla\ss er et al.~\cite{Glasser+etal:CiE2010} show that certain translations of approximability results from the weighted sum scalarization of an optimization problem to the multiobjective version that work for minimization problems are not possible in general for maximization problems. \smallskip An approximation method specifically designed for multiobjective maximization problems is presented by Bazgan et al.~\cite{Bazgan+etal:fixed-number}. 
Their method is applicable to biobjective maximization problems that satisfy an additional structural assumption on the set of feasible solutions and the objective functions: For any two feasible solutions, neither of which approximates the other by a factor of~$\alpha$ in both objective functions, a third solution approximating both given solutions in both objective functions by a certain factor depending on~$\alpha$ and a parameter~$c$ must be computable in polynomial time. The approximation factor obtained by the algorithm then depends on~$\alpha$ and~$c$. \subsection{Our contribution}\label{subsec:our-contribution} Our contribution is twofold: First, in order to better capture the approximation quality in the context of multiobjective optimization problems, we introduce a new notion of approximation for the multiobjective case. This new notion subsumes the common notion of approximation, but is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. Second, we provide a precise analysis of the approximation quality obtainable for multiobjective optimization problems by means of an exact or approximate algorithm for the weighted sum scalarization -- with respect to both the common and the new notion of approximation. \smallskip In order to motivate the new notion of approximation, consider the biobjective case, in which a $(2+\varepsilon,2+\varepsilon)$-approximation can be obtained from the results of Gla\ss er et al.~\cite{Glasser+etal:multi-hardness,Glasser+etal:CiE2010} using an exact algorithm for the weighted sum scalarization. As illustrated in Figure~\ref{fig:multi-factor-motivation}, this approximation guarantee is actually too pessimistic: Since each point~$y$ in the image of the approximating set is nondominated (being the image of an optimal solution of the weighted sum scalarization), no images of feasible solutions can be contained in the shaded region.
Thus, every feasible solution is actually either $(1,2+\varepsilon)$- or $(2+\varepsilon,1)$-approximated. Consequently, the approximation quality obtained in this case can be more accurately described by using \emph{two vectors of approximation factors}. In order to capture such situations and allow for a more precise analysis of the approximation quality obtained for multiobjective problems, our new \emph{multi-factor notion of approximation} uses a \emph{set of vectors of approximation factors} instead of only a single vector. \begin{figure}[ht!] \pgfdeclarepatternformonly{new north west lines}{ \pgfqpoint{-1pt}{-1pt}}{\pgfqpoint{4pt}{4pt}}{\pgfqpoint{3pt}{3pt}} { \pgfsetlinewidth{0.4pt} \pgfpathmoveto{\pgfqpoint{0pt}{3pt}} \pgfpathlineto{\pgfqpoint{3.1pt}{-0.1pt}} \pgfusepath{stroke} } \begin{center} \begin{tikzpicture}[scale=1.25] \fill[gray!30] (3.25,3.25) -- (1.625,3.25) -- (1.625,1.625) -- (3.25,1.625) -- (3.25,3.25); \fill[pattern=new north west lines] (7.4,7.4) -- (1.625,7.4) -- (1.625,1.625) -- (7.4,1.625) -- (7.4,7.4); \draw[-] (1.625,7.4) -- (1.625,1.625) -- (7.4,1.625); \draw[-] (1.625,3.25) -- (7.4,3.25); \draw[-] (3.25,1.625) -- (3.25,7.4); \draw[->] (-0.2,0) -- (7.4,0) node[below right] {$f_1$}; \draw[->] (0,-0.2) -- (0,7.4) node[above left] {$f_2$}; \fill (3.25,3.25) circle (1.5pt) node[above right] {$y$}; \end{tikzpicture} \caption{Image space of a biobjective minimization problem. The point~$y$ in the image of the approximating set $(2+\varepsilon,2+\varepsilon)$-approximates all points in the hatched region.
If~$y$ is nondominated, no images of feasible solutions can be contained in the shaded region, so every image in the hatched region is actually either $(1,2+\varepsilon)$- or $(2+\varepsilon,1)$-approximated.}\label{fig:multi-factor-motivation} \end{center} \end{figure} \newpage The second part of our contribution consists of a detailed analysis of the approximation quality obtainable by using the weighted sum scalarization -- both for multiobjective minimization problems and for multiobjective maximization problems. For minimization problems, we provide an efficient algorithm that approximates a multiobjective problem using an exact or approximate algorithm for its weighted sum scalarization. We analyze the approximation quality obtained by the algorithm both with respect to the common notion of approximation that uses only a single vector of approximation factors and with respect to the new multi-factor notion. With respect to the common notion, our algorithm matches the best previously known approximation guarantee of $(\sigma\cdot p+\varepsilon,\ldots,\sigma\cdot p+\varepsilon)$ obtainable for $p$-objective minimization problems and any $\varepsilon>0$ from a $\sigma$-approximation algorithm for the weighted sum scalarization. More importantly, we show that this result is best-possible in the sense that it comes arbitrarily close to the best approximation guarantee obtainable by supported solutions for the case that an exact algorithm is used to solve the weighted sum problem (i.e., when $\sigma=1$). \smallskip When analyzing the algorithm with respect to the new multi-factor notion of approximation, however, a much stronger approximation result is obtained. Here, we show that every feasible solution is approximated with some (possibly different) vector $(\alpha_1,\dots,\alpha_p)$ of approximation factors such that $\sum_{j:\alpha_j>1}\alpha_j = \sigma\cdot p + \varepsilon$.
In particular, the worst-case approximation factor of $\sigma\cdot p + \varepsilon$ can actually be tight \emph{in at most one objective} for any feasible point. This shows that the multi-factor notion of approximation yields a much stronger approximation result by allowing a refined analysis of the obtained approximation guarantee. Moreover, for $\sigma=1$, we show that the obtained multi-factor approximation result comes arbitrarily close to the best multi-factor approximation result obtainable by supported solutions. We also demonstrate that our algorithm applies to a large variety of multiobjective minimization problems and yields the currently best approximation results for several problems. \smallskip Multiobjective maximization problems, however, turn out to be much harder to approximate by using the weighted sum scalarization. Here, we show that a polynomial approximation guarantee cannot, in general, be obtained in more than one of the objective functions simultaneously when using only supported solutions. \smallskip In summary, our results yield essentially tight bounds on the power of the weighted sum scalarization with respect to the approximation of multiobjective minimization and maximization problems -- both in the common notion of approximation and in the new multi-factor notion. \medskip The remainder of the paper is organized as follows: In Section~\ref{sec:preliminaries}, we formally introduce multiobjective optimization problems and provide the necessary definitions concerning their approximation. Section~\ref{sec:minimization} contains our general approximation algorithm for minimization problems (Subsection~\ref{subsec:results}) as well as a faster algorithm for the biobjective case (Subsection~\ref{subsec:biobjective}). Moreover, we show in Subsection~\ref{subsec:tightness} that the obtained approximation results are tight. Section~\ref{sec:applications} presents applications of our results to specific minimization problems.
In Section~\ref{sec:maximixation}, we present our impossibility results for maximization problems. Section~\ref{sec:conclusion} concludes the paper and lists directions for future work. \newpage \section{Preliminaries}\label{sec:preliminaries} In the following, we consider a general multiobjective minimization or maximization problem~$\Pi$ of the following form (where either all objective functions are to be minimized or all objective functions are to be maximized): \begin{align*} \min/\max\; & f(x)=(f_1(x),\dots,f_p(x))\\ \text{s.\,t. } & x \in X \end{align*} Here, as usual, we assume a constant number~$p\geq 2$ of objectives. The elements $x\in X$ are called \emph{feasible solutions} and the set~$X$, which is assumed to be nonempty, is referred to as the \emph{feasible set}. An image $y=f(x)$ of a feasible solution $x\in X$ is also called a \emph{feasible point}. We let $Y\colonequals f(X) \colonequals \{f(x): x\in X\}\subseteq \mathbb{R}^p$ denote the \emph{set of feasible points}. We assume that the objective functions take only positive rational values and are polynomially computable. Moreover, for each $j\in\{1,\dots,p\}$, we assume that there exist strictly positive rational lower and upper bounds $\LB(j),\UB(j)$ of polynomial encoding length such that $\LB(j) \leq f_j(x) \leq \UB(j)$ for all $x\in X$. We let $\LB\colonequals \min_{j=1,\dots,p}\LB(j)$ and $\UB\colonequals \max_{j=1,\dots,p}\UB(j)$. \medskip \begin{definition} For a minimization problem~$\Pi$, we say that a point $y=f(x)\in Y$ is \emph{dominated} by another point $y'=f(x')\in Y$ if $y'\neq y$ and \begin{align*} y'_j=f_j(x') \leq f_j(x)=y_j \text{ for all } j\in\{1,\dots,p\}. \end{align*} Similarly, for a maximization problem~$\Pi$, we say that a point $y=f(x)\in Y$ is \emph{dominated} by another point $y'=f(x')\in Y$ if $y'\neq y$ and \begin{align*} y'_j=f_j(x') \geq f_j(x)=y_j \text{ for all } j\in\{1,\dots,p\}.
\end{align*} If the point $y=f(x)$ is not dominated by any other point~$y'$, we call~$y$ \emph{nondominated} and the feasible solution $x\in X$ \emph{efficient}. The set~$\YN$ of nondominated points is called the \emph{nondominated set} and the set~$\XE$ of efficient solutions is called the \emph{efficient set} or \emph{Pareto set}. \end{definition} \subsection{Notions of approximation} We first recall the standard definitions of approximation for single objective optimization problems. \begin{definition} Consider a single objective optimization problem~$\Pi$ and let $\alpha\geq 1$. If $\Pi$ is a minimization problem, we say that a feasible solution $x\in X$ \emph{$\alpha$-appro\-xi\-mates} another feasible solution $x'\in X$ if $f(x) \leq \alpha \cdot f(x')$. If $\Pi$ is a maximization problem, we say that a feasible solution $x\in X$ \emph{$\alpha$-approximates} another feasible solution $x'\in X$ if $\alpha\cdot f(x) \geq f(x')$. A feasible solution that $\alpha$-approximates every feasible solution of~$\Pi$ is called \emph{an $\alpha$-approximation} for~$\Pi$. A \emph{(polynomial-time) $\alpha$-approximation algorithm} is an algorithm that, for every instance~$I$ with encoding length~$|I|$, computes an $\alpha$-approximation for~$\Pi$ in time bounded by a polynomial in~$|I|$. \end{definition} \noindent The following definition extends the concept of approximation to the multiobjective case. \begin{definition} Let $\alpha=(\alpha_1,\dots,\alpha_p)\in\mathbb{R}^p$ with $\alpha_j\geq 1$ for all $j\in\{1,\dots,p\}$. For a minimization problem~$\Pi$, we say that a feasible solution $x\in X$ \emph{$\alpha$-appro\-xi\-mates} another feasible solution $x'\in X$ (or, equivalently, that the feasible point $y=f(x)$ \emph{$\alpha$-appro\-xi\-mates} the feasible point $y'=f(x')$) if \begin{align*} f_j(x) \leq \alpha_j\cdot f_j(x') \text{ for all } j\in\{1,\dots,p\}. 
\end{align*} Similarly, for a maximization problem~$\Pi$, we say that a feasible solution $x\in X$ \emph{$\alpha$-approximates} another feasible solution $x'\in X$ (or, equivalently, that the feasible point $y=f(x)$ \emph{$\alpha$-appro\-xi\-mates} the feasible point $y'=f(x')$) if \begin{align*} \alpha_j\cdot f_j(x) \geq f_j(x') \text{ for all } j\in\{1,\dots,p\}. \end{align*} \end{definition} The standard notion of approximation for multiobjective optimization problems used in the literature is the following one. \begin{definition}\label{def:approx-standard} Let $\alpha=(\alpha_1,\dots,\alpha_p)\in\mathbb{R}^p$ with $\alpha_j\geq 1$ for all $j\in\{1,\dots,p\}$. A set~$P\subseteq X$ of feasible solutions is called an \emph{$\alpha$-approximation} for the multiobjective problem~$\Pi$ if, for any feasible solution~$x'\in X$, there exists a solution~$x\in P$ that $\alpha$-approximates~$x'$. \end{definition} In the following definition, we generalize the standard notion of approximation for multiobjective problems by allowing a \emph{set of vectors of approximation factors} instead of only a single vector, which allows for tighter approximation results. \begin{definition}\label{def:approx-generalized} Let $\mathcal{A}\subseteq\mathbb{R}^p$ be a set of vectors with $\alpha_j\geq 1$ for all $\alpha\in \mathcal{A}$ and all $j\in\{1,\dots,p\}$. Then a set~$P\subseteq X$ of feasible solutions is called a \emph{(multi-factor) $\mathcal{A}$-approximation} for the multiobjective problem~$\Pi$ if, for any feasible solution~$x'\in X$, there exists a solution~$x\in P$ and a vector~$\alpha\in\mathcal{A}$ such that~$x$ $\alpha$-approximates~$x'$. \end{definition} Note that, in the case where $\mathcal{A}=\{(\alpha_1,\dots,\alpha_p)\}$ is a singleton, an $\mathcal{A}$-appro\-xi\-ma\-tion for a multiobjective problem according to Definition~\ref{def:approx-generalized} is equivalent to an $(\alpha_1,\dots,\alpha_p)$-approximation according to Definition~\ref{def:approx-standard}. 
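To make the above notions concrete, the following minimal Python sketch (ours, purely illustrative; the numeric points are hypothetical and not taken from the paper) checks the multi-factor $\mathcal{A}$-approximation condition of Definition~\ref{def:approx-generalized} for a minimization problem, with the singleton case recovering Definition~\ref{def:approx-standard}:

```python
def approximates(y, y_prime, alpha):
    """Return True iff point y alpha-approximates point y' (minimization),
    i.e., y_j <= alpha_j * y'_j for every objective j."""
    return all(yj <= aj * ypj for yj, ypj, aj in zip(y, y_prime, alpha))

def is_A_approximation(P, Y, A):
    """Multi-factor A-approximation: every feasible point y' in Y must be
    alpha-approximated by some y in P for some vector alpha in A."""
    return all(
        any(approximates(y, y_prime, alpha) for y in P for alpha in A)
        for y_prime in Y
    )

# Hypothetical biobjective images (illustration only).
Y = [(1.0, 8.0), (2.0, 4.0), (4.0, 2.0), (8.0, 1.0)]
P = [(1.0, 8.0), (8.0, 1.0)]

# Single-vector notion (A a singleton): P is a (4, 4)-approximation of Y.
print(is_A_approximation(P, Y, [(4.0, 4.0)]))              # True
# Multi-factor notion: the same P satisfies the stronger disjunctive
# guarantee that every point is approximated exactly in one objective.
print(is_A_approximation(P, Y, [(1.0, 4.0), (4.0, 1.0)]))  # True
```

The example mirrors the motivation of Figure~\ref{fig:multi-factor-motivation}: the same approximating set certifies a much stronger guarantee once a set of factor vectors is allowed.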
\subsection{Weighted sum scalarization} Given a $p$-objective optimization problem~$\Pi$ and a vector $w=(w_1,\ldots,w_p)\in\mathbb{R}^p$ with $w_j>0$ for all $j \in\{1,\ldots,p\}$, the \emph{weighted sum problem} (or \emph{weighted sum scalarization})~$\Pi^{\WS}(w)$ associated with~$\Pi$ is defined as the following single objective optimization problem: \begin{align*} \min/\max\; & \sum_{j=1}^{p} w_j \cdot f_{j}(x)\\ \text{s.\,t. } & x \in X \end{align*} \begin{definition} A feasible solution~$x\in X$ is called \emph{supported} if there exists a vector $w=(w_1,\ldots,w_p) \in\mathbb{R}^p$ of positive weights such that~$x$ is an optimal solution of the weighted sum problem~$\Pi^{\WS}(w)$. In this case, the feasible point $y=f(x)\in Y$ is called a \emph{supported point}. The set of all supported solutions is denoted by~$\XS$. \end{definition} It is well-known that every supported point is nondominated and, correspondingly, every supported solution is efficient (cf.~\cite{Ehrgott:book}). \smallskip In the following, we assume that there exists a polynomial-time $\sigma$-approxi\-ma\-tion algorithm $\WS_{\sigma}$ for the weighted sum problem, where $\sigma\geq 1$ can be either a constant or a function of the input size. When calling $\WS_{\sigma}$ with some specific weight vector~$w$, we denote this by $\WS_{\sigma}(w)$. This algorithm then returns a solution~$\hat x$ such that $\sum_{j=1}^{p} w_j f_j(\hat x) \leq \sigma \cdot \sum_{j=1}^{p} w_j f_j(x)$ for all $x\in X$, if $\Pi$ is a minimization problem, and $\sigma \cdot \sum_{j=1}^{p} w_j f_j(\hat x) \geq \sum_{j=1}^{p} w_j f_j(x)$ for all $x\in X$, if $\Pi$ is a maximization problem. The running time of algorithm $\WS_{\sigma}$ is denoted by $T_{WS_{\sigma}}$. \medskip The following result shows that a $\sigma$-approximation for the weighted sum problem is also a $\sigma$-approximation of any solution in at least one of the objectives. 
\begin{lemma}\label{lem:sigma_efficiency} Let~$\hat{x} \in X$ be a $\sigma$-approximation for~$\Pi^{\WS}(w)$ for some positive weight vector~$w\in\mathbb{R}^p$. Then, for any feasible solution~$x \in X$, there exists at least one $i \in \{1,\ldots,p\}$ such that $\hat{x}$ $\sigma$-approximates~$x$ in objective~$f_i$. \end{lemma} \begin{proof} Consider the case where~$\Pi$ is a multiobjective \emph{minimization} problem (the proof for the case where~$\Pi$ is a maximization problem works analogously). Then, we must show that, for any feasible solution $x \in X$, there exists at least one $i \in \{1,\ldots,p\}$ such that $f_i(\hat{x}) \leq \sigma \cdot f_i(x)$. Assume by contradiction that there exists some~$x' \in X$ such that $f_j(\hat{x}) > \sigma \cdot f_j(x')$ for all $j \in \{1,\ldots,p\}$. Then, we obtain $\sum_{j=1}^{p} w_j \cdot f_{j}(\hat{x}) > \sigma \cdot \sum_{j=1}^{p} w_j \cdot f_{j}(x')$, which contradicts the assumption that~$\hat{x}$ is a $\sigma$-approximation for $\Pi^{\WS}(w)$. \end{proof} \section{A multi-factor approximation result for minimization problems}\label{sec:minimization} In this section, we study the approximation of multiobjective \emph{minimization} problems by (approximately) solving weighted sum problems. In Subsection~\ref{subsec:results}, we propose a multi-factor approximation algorithm that significantly improves upon the $((1+\varepsilon)\cdot\sigma\cdot p, \dots,(1+\varepsilon)\cdot\sigma\cdot p)$-approximation algorithm that can be derived from Gla\ss er et al.~\cite{Glasser+etal:CiE2010}. The biobjective case is then investigated in Subsection~\ref{subsec:biobjective}. Finally, we show in Subsection~\ref{subsec:tightness} that the resulting approximation is tight. 
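Before turning to the algorithm, Lemma~\ref{lem:sigma_efficiency} can also be checked empirically. The following Python sketch (ours, for illustration only; random tuples stand in for the images of feasible solutions) verifies by brute force that every $\sigma$-approximate solution of a weighted sum problem $\sigma$-approximates every point in at least one objective:

```python
import random

def weighted_sum(w, y):
    """Weighted sum objective value sum_j w_j * y_j."""
    return sum(wj * yj for wj, yj in zip(w, y))

def lemma_holds(points, w, sigma):
    """Check the lemma on a finite point set: every sigma-approximate
    solution of the weighted sum problem must sigma-approximate *every*
    point in at least one objective."""
    opt = min(weighted_sum(w, y) for y in points)
    for y_hat in points:
        if weighted_sum(w, y_hat) <= sigma * opt:  # y_hat is sigma-approximate
            for y in points:
                if not any(y_hat[j] <= sigma * y[j] for j in range(len(y))):
                    return False  # would contradict the lemma
    return True

random.seed(0)
for _ in range(200):
    pts = [tuple(random.uniform(1, 10) for _ in range(3)) for _ in range(8)]
    w = tuple(random.uniform(0.1, 2.0) for _ in range(3))
    assert lemma_holds(pts, w, sigma=1.5)
print("Lemma verified on 200 random instances")
```

The contradiction tested in the inner loop is exactly the one used in the proof: if $f_j(\hat{x}) > \sigma\cdot f_j(x')$ for all~$j$, the weighted sums would violate the $\sigma$-approximation property.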
\subsection{General results}\label{subsec:results} \begin{proposition}\label{prop_1WS} Let $\bar{x}\in X$ be a feasible solution of~$\Pi$ and let $b=(b_1,\dots,b_p)$ be such that $b_j\leq f_j(\bar{x})\leq (1+\varepsilon)\cdot b_j$ for $j=1,\dots,p$ and some $\varepsilon>0$. Applying~$\WS_{\sigma}(w)$ with $w_j\colonequals\frac{1}{b_j}$ for $j=1,\dots,p$ yields a solution~$\hat x$ that $(\alpha_1,\dots,\alpha_p)$-approximates~$\bar x$ for some $\alpha_1,\dots,\alpha_p\geq 1$ such that $\alpha_i \leq \sigma$ for at least one $i \in \{1,\ldots,p\}$ and \begin{align*} \sum_{j:\alpha_j>1} \alpha_j = (1+\varepsilon)\cdot\sigma\cdot p. \end{align*} \end{proposition} \begin{proof} Since~$\hat x$ is the solution returned by $\WS_{\sigma}(w)$, we have \begin{align*} \sum_{j=1}^p\frac{1}{b_j} f_j(\hat{x}) \leq \sigma\cdot\left(\sum_{j=1}^p\frac{1}{b_j}f_j(\bar{x})\right) \leq \sigma\cdot(1+\varepsilon)\cdot\left(\sum_{j=1}^p 1\right) = (1+\varepsilon)\cdot\sigma\cdot p. \end{align*} \noindent Since $\frac{1}{b_j} \geq \frac{1}{f_j(\bar{x})}$, we get $\sum_{j=1}^p \frac{f_j(\hat{x})}{f_j(\bar{x})} \leq \sum_{j=1}^p\frac{1}{b_j} f_j(\hat{x})$, which yields \begin{align*} \sum_{j=1}^p \frac{f_j(\hat{x})}{f_j(\bar{x})} \leq (1+\varepsilon)\cdot\sigma\cdot p. \end{align*} \noindent Setting $\alpha_j \colonequals \max\left\{1,\frac{f_j(\hat{x})}{f_j(\bar{x})}\right\}$ for $j=1,\dots,p$, we have \begin{align*} \sum_{j:\alpha_j>1} \alpha_j \leq (1+\varepsilon)\cdot\sigma\cdot p. \end{align*} The worst-case approximation factors~$\alpha_j$ are then obtained when equality holds in the previous inequality. Moreover, by Lemma~\ref{lem:sigma_efficiency}, there exists at least one $i \in \{1,\ldots,p\}$ such that $f_i(\hat{x}) \leq \sigma \cdot f_i(\bar{x})$. Thus, we have $\alpha_i\leq \sigma$ for at least one $i \in \{1,\ldots,p\}$, which proves the claim.
\end{proof}

Proposition~\ref{prop_1WS} motivates applying the given $\sigma$-approximation algorithm~$\WS_{\sigma}$ for~$\Pi^{\WS}$ iteratively for different weight vectors~$w$ in order to obtain an approximation of the multiobjective minimization problem~$\Pi$. This is formalized in Algorithm~\ref{alg:mainAlgo}, whose correctness and running time are established in Theorem~\ref{thm:main-result}.

\begin{algorithm2e}
\SetKw{Compute}{compute}
\SetKw{Break}{break}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetKwComment{command}{right mark}{left mark}
\Input{an instance of a $p$-objective minimization problem~$\Pi$, $\varepsilon >0$, a $\sigma$-approxi\-ma\-tion algorithm $\WS_{\sigma}$ for the weighted sum problem}
\Output{an \emph{$\mathcal{A}$-approximation} $P$ for problem~$\Pi$}
\BlankLine
$P \leftarrow \emptyset$\\
$\varepsilon' \leftarrow \frac{\varepsilon}{\sigma\cdot p}$\\
\For{$j\leftarrow 1$ \KwTo $p$}{
$u_j \leftarrow$ largest integer such that $\LB(j)\cdot(1+\varepsilon')^{u_j} \leq \UB(j)$
}
\For{$k\leftarrow 1$ \KwTo $p$}{
$i_k \leftarrow 0$\\
\ForEach{$(i_1,\ldots,i_p)$ such that $i_{\ell} \in \{1,\ldots,u_{\ell}\}$ for $\ell < k$, and $i_{\ell} \in \{0,\ldots,u_{\ell}\}$ for $\ell > k$}{
\For{$j\leftarrow 1$ \KwTo $p$}{
$b_j \leftarrow \LB(j)\cdot(1 + \varepsilon')^{i_j}$\\
$w_j \leftarrow \frac{1}{b_j}$
}
$x \leftarrow \WS_{\sigma}(w)$\\
$P \leftarrow P \cup \{x\}$
}
}
\Return $P$
\caption{An \emph{$\mathcal{A}$-approximation for $p$-objective minimization problems}\label{alg:mainAlgo}}
\end{algorithm2e}

\begin{theorem}\label{thm:main-result} For a $p$-objective minimization problem, Algorithm~\ref{alg:mainAlgo} outputs an $\mathcal{A}$-approxi\-ma\-tion where \begin{align*} \mathcal{A} = & \{(\alpha_1,\dots,\alpha_p) : \alpha_1,\dots,\alpha_p \geq 1, \alpha_i \leq \sigma \mbox{ for at least one $i$, and} \sum_{j:\alpha_j>1} \alpha_j = \sigma\cdot p + \varepsilon \} \end{align*} in time bounded by $\displaystyle T_{WS_{\sigma}}\cdot \sum_{i=1}^p
\prod_{j\neq i}\left\lceil \log_{1+\frac{\varepsilon}{\sigma p}}\frac{\UB(j)}{\LB(j)}\right\rceil \in \mathcal{O}\left(T_{WS_{\sigma}} \left(\frac{\sigma}{\varepsilon} \log\frac{\UB}{\LB}\right)^{p-1} \right)$. \end{theorem} \begin{proof} In order to approximate all feasible solutions, we can iteratively apply Proposition~\ref{prop_1WS} with $\varepsilon' \colonequals \frac{\varepsilon}{\sigma\cdot p}$ instead of $\varepsilon$, leading to the modified constraint on the sum of the~$\alpha_j$ where the right-hand side becomes $(1+\varepsilon')\cdot\sigma\cdot p = \sigma\cdot p + \varepsilon$. More precisely, we iterate with $b_j = \LB(j)\cdot(1+ \varepsilon')^{i_j}$ and $i_j=0,\ldots,u_j$, where $u_j$ is the largest integer such that $\LB(j)\cdot(1+\varepsilon')^{u_j} \leq \UB(j)$, for each $j\in\{1,\ldots,p\}$. However, this iterative application of Proposition~\ref{prop_1WS} involves redundant weight vectors: consider a weight vector $w = (w_1,\ldots,w_p)$ where $w_j= \frac{1}{b_j}$ with $b_j = \LB(j)\cdot(1+\varepsilon')^{t_j}$ for $j=1,\ldots,p$, and let~$k$ be an index such that $t_k = \min_{j=1,\ldots,p} t_j$. Then problem~$\Pi^{\WS}(w)$ is equivalent to problem~$\Pi^{\WS}(w')$ with $w'_j =\frac{1}{b'_j}$, where $b'_j = \LB(j)\cdot(1+\varepsilon')^{t_j - t_k}$ for $j=1,\ldots,p$. Therefore, it is sufficient to consider all weight vectors~$w$ for which at least one component~$w_k$ is set to $\frac{1}{\LB(k)}$ (see Figure~\ref{fig:approx-grid} for an illustration). The running time follows. \end{proof} \begin{figure}[ht!]
\begin{center} \begin{tikzpicture}[scale=1.25] \draw[->] (-0.2,0) -- (7.5,0) node[below right] {$f_1$}; \draw[->] (0,-0.2) -- (0,7.4) node[above left] {$f_2$}; \foreach \x in {0.6, 0.9, 1.35, 2.03, 3.04, 4.56, 6.83}{ \draw[-,thin,color=gray!50!white] (\x,0.6) -- (\x,6.83) node[below] {}; \draw[-,thin,color=gray!50!white] (0.6,\x) -- (6.83,\x) node[below] {}; } \fill [gray!50] (0.6,2.03) rectangle (0.9,3.04); \fill [gray!50] (0.9,3.04) rectangle (1.35,4.56); \fill [gray!50] (1.35,4.56) rectangle (2.03,6.83); \fill (1.35,4.56) circle (2pt) node[below right] {$b$}; \fill (0.6,2.03) circle (2pt) node[below right] {$b'$}; \draw[-] (0.6,0.1) -- (0.6,-0.1) node[below] {$\LB(1)$}; \draw[-] (1.35,0.1) -- (1.35,-0.1) node[below right] {\hspace{-0.75em}$\LB(1)\cdot(1+\varepsilon')^{t_1}$}; \draw[-] (6.5,0.1) -- (6.5,-0.1) node[below] {$\UB(1)$}; \draw[-] (0.1,0.6) -- (-0.1,0.6) node[left] {$\LB(2)$}; \draw[-] (0.1,2.03) -- (-0.1,2.03) node[left,align=center] {$\LB(2)$\\$\cdot(1+\varepsilon')^{t_2-t_1}$}; \draw[-] (0.1,4.56) -- (-0.1,4.56) node[left,align=center] {$\LB(2)$\\$\cdot(1+\varepsilon')^{t_2}$}; \draw[-] (0.1,6.5) -- (-0.1,6.5) node[left] {$\UB(2)$}; \end{tikzpicture} \caption{Weight vectors and subdivision of the objective space in Algorithm~\ref{alg:mainAlgo}. The weight vector~$w=(\frac{1}{b_1},\dots,\frac{1}{b_p})$ with $b_j = LB(j)\cdot(1+\varepsilon')^{t_j}$ for $j=1,\ldots,p$ is equivalent to the weight vector $w'=(\frac{1}{b'_1},\dots,\frac{1}{b'_p})$ obtained by reducing all exponents~$t_j$ by their minimum. 
The solution~$\WS_{\sigma}(w')$ returned for~$w'$ is then used to approximate all solutions with images in the shaded (hyper-)rectangles.}\label{fig:approx-grid} \end{center} \end{figure} Note that, depending on the structure of the weighted sum algorithm~$\WS_{\sigma}$, the practical running time of Algorithm~\ref{alg:mainAlgo} could be improved by not solving every weighted sum problem from scratch, but by using the information obtained in previous iterations. Also note that, as illustrated in Figure~\ref{fig:approx-grid}, Algorithm~\ref{alg:mainAlgo} also directly yields a subdivision of the objective space into hyperrectangles such that all solutions whose images are in the same hyperrectangle are approximated by the same solution (possibly with different approximation guarantees): For each weight vector~$w=(\frac{1}{b_1},\dots,\frac{1}{b_p})$ considered in the algorithm (where $b_k=\LB(k)$ for at least one~$k$), all solutions~$\bar{x}$ with images in the hyperrectangles $\bigtimes_{j=1}^p \left[b_j\cdot(1+\varepsilon')^\ell,b_j\cdot(1+\varepsilon')^{\ell+1}\right]$ for $\ell=0,1,\dots$ are approximated by the solution returned by~$\WS_{\sigma}(w)$. \medskip \noindent When the weighted sum problem can be solved exactly in polynomial time, Theorem~\ref{thm:main-result} immediately yields the following result: \begin{corollary}\label{cor:main-result-special} If $\WS_{\sigma}=\WS_{1}$ is an exact algorithm for the weighted sum problem, Algorithm~\ref{alg:mainAlgo} outputs an $\mathcal{A}$-approximation where \begin{align*} \mathcal{A} = & \{(\alpha_1,\dots,\alpha_p) : \alpha_1,\dots,\alpha_p \geq 1, \alpha_i=1 \text{ for at least one}~i, \mbox{ and } \sum_{j:\alpha_j>1} \alpha_j = p + \varepsilon\} \end{align*} in time $\mathcal{O}\left(T_{\WS_{1}} \left(\frac{1}{\varepsilon} \log\frac{\UB}{\LB}\right)^{p-1} \right)$. \end{corollary} Another special case worth mentioning is the situation where the weighted sum problem admits a polynomial-time approximation scheme.
Here, similar to the case in which an exact algorithm is available for the weighted sum problem (see Corollary~\ref{cor:main-result-special}), we can still obtain a set of vectors~$\alpha$ of approximation factors with $\sum_{j:\alpha_j>1}\alpha_j=p+\varepsilon$ while only losing the property that at least one $\alpha_i$ equals~$1$. \begin{corollary}\label{cor:weighted-sum-PTAS} If the weighted sum problem admits a polynomial-time $(1+\tau)$-approximation for any $\tau>0$, then, for any $\varepsilon>0$ and any $0<\tau<\frac{\varepsilon}{p}$, Algorithm~\ref{alg:mainAlgo} can be used to compute an $\mathcal{A}$-approximation where \begin{align*} \mathcal{A} = & \{(\alpha_1,\dots,\alpha_p) : \alpha_1,\dots,\alpha_p \geq 1, \alpha_i\leq 1+\tau \text{ for at least one}~i, \mbox{ and } \sum_{j:\alpha_j>1} \alpha_j = p + \varepsilon\} \end{align*} in time $\mathcal{O}\left(T_{\WS_{1+\tau}} \left(\frac{1+\tau}{\varepsilon-\tau\cdot p} \log\frac{\UB}{\LB}\right)^{p-1} \right)$. \end{corollary} \begin{proof} Given $\varepsilon>0$ and $0<\tau<\frac{\varepsilon}{p}$, apply Algorithm~\ref{alg:mainAlgo} with $\varepsilon-\tau\cdot p$ in place of~$\varepsilon$ and with $\sigma\colonequals 1+\tau$. \end{proof} Since any component of a vector in the set~$\mathcal{A}$ from Theorem~\ref{thm:main-result} can get arbitrarily close to $\sigma \cdot p + \varepsilon$ in the worst case, the best ``classical'' approximation result using only a single vector of approximation factors that is obtainable from Theorem~\ref{thm:main-result} reads as follows: \begin{corollary}\label{cor:classical-result} Algorithm~\ref{alg:mainAlgo} computes a $(\sigma \cdot p + \varepsilon,\dots,\sigma \cdot p + \varepsilon)$-approximation in time $\mathcal{O}\left(T_{WS_{\sigma}} \left(\frac{1}{\varepsilon} \log\frac{\UB}{\LB}\right)^{p-1} \right)$. \end{corollary} \subsection{Biobjective Problems}\label{subsec:biobjective} In this subsection, we focus on biobjective minimization problems.
We first specialize some of the general results of the previous subsection to the case $p=2$. Afterwards, we propose a specific approximation algorithm for biobjective problems, which significantly improves upon the running time of Algorithm~\ref{alg:mainAlgo} in the case where an exact algorithm~$\WS_1$ for the weighted sum problem is available. \medskip Theorem~\ref{thm:main-result}, which is the main general result of the previous subsection, can trivially be specialized to the case $p=2$. It is more interesting to consider the situation where the weighted sum can be solved exactly, corresponding to Corollary~\ref{cor:main-result-special}. In that case, we obtain the following result: \begin{corollary}\label{cor:main-result-special-biobjective} If $\WS_{\sigma}=\WS_{1}$ is an exact algorithm for the weighted sum problem and $p=2$, Algorithm~\ref{alg:mainAlgo} yields an $\mathcal{A}$-approximation where \begin{align*} \mathcal{A} = & \{(1,2 + \varepsilon), (2 + \varepsilon, 1) \} \end{align*} in time $\mathcal{O}\left(T_{WS_{1}} \frac{1}{\varepsilon} \log\frac{\UB}{\LB} \right)$. \end{corollary} It is worth pointing out that, unlike for the previous results, the set $\mathcal{A}$ of approximation factors is now \emph{finite}. This type of result can be interpreted as a \emph{disjunctive} approximation result: Algorithm~\ref{alg:mainAlgo} outputs a set~$P$ ensuring that, for any $x \in X$, there exists $x' \in P$ such that~$x'$ $(1,2 + \varepsilon)$-approximates~$x$ or~$x'$ $(2 + \varepsilon, 1)$-approximates~$x$. In the biobjective case, we may scale the weights in the weighted sum problem to be of the form~$(\gamma,1)$ for some $\gamma>0$. In the following, we make use of this observation and refer to a weight vector~$(\gamma,1)$ simply as~$\gamma$. \smallskip Algorithm~\ref{alg:biobjAlg} is a refinement of Algorithm~\ref{alg:mainAlgo} in the biobjective case when an exact algorithm~$\WS_1$ for the weighted sum problem is available. 
Algorithm~\ref{alg:mainAlgo} requires testing all $u_1+u_2+1$ weights $\left(\frac{1}{\LB(1)},\frac{1}{\LB(2)(1+\varepsilon')^{u_2}} \right)$, $\left(\frac{1}{\LB(1)},\frac{1}{\LB(2)(1+\varepsilon')^{u_2-1}} \right)$, \ldots, $\left(\frac{1}{\LB(1)},\frac{1}{\LB(2)} \right)$, $\left(\frac{1}{\LB(1)(1+\varepsilon')},\frac{1}{\LB(2)} \right)$, \ldots, $\left(\frac{1}{\LB(1)(1+\varepsilon')^{u_1}},\frac{1}{\LB(2)} \right)$, or, equivalently, the $u_1+u_2+1$ weights of the form $(\gamma_t,1)$, where $\gamma_t=\frac{\LB(2)}{\LB(1)}(1+\varepsilon')^{u_2-t+1}$ for $t=1, \ldots, u_1+u_2+1$. Instead of testing all these weights, Algorithm~\ref{alg:biobjAlg} considers only a subset of them. More precisely, in each iteration, the algorithm selects a subset of consecutive weights $\{\gamma_{\ell},\ldots,\gamma_r\}$, solves $\WS_{1}(\gamma_t)$ for the weight~$\gamma_t$ with $t=\lfloor\frac{\ell+r}{2}\rfloor$, and decides whether 0, 1, or 2 of the subsets $\{\gamma_{\ell},\ldots,\gamma_t\}$ and $\{\gamma_t,\ldots,\gamma_r\}$ need to be investigated further. This process can be viewed as developing a binary tree where the root, which corresponds to the initialization, requires solving two weighted sum problems, while each other node requires solving one weighted sum problem. This representation is useful to bound the running time of our algorithm. The following technical result on binary trees, whose proof is given in the appendix, will be useful for this purpose: \begin{lemma}\label{lem:tree-size} A binary tree with height~$h$ and $k$~nodes with two children contains $\mathcal{O}(k\cdot h)$ nodes.
\end{lemma} \begin{algorithm2e} \SetKw{Compute}{compute} \SetKw{Break}{break} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \SetKwComment{command}{right mark}{left mark} \Input{an instance of a biobjective minimization problem~$\Pi$, $\varepsilon >0$, an exact algorithm $\WS_{1}$ for the weighted sum problem} \Output{a $\{(1,2+\varepsilon),(2+\varepsilon,1)\}$-approximation~$P$ for problem~$\Pi$} \BlankLine $\varepsilon' \leftarrow \frac{\varepsilon}{p}$\\ $u_1 \leftarrow \mbox{largest~integer~ such~that~} LB(1) (1+\varepsilon')^{u_1}\leq UB(1)$\\ $u_2 \leftarrow \mbox{largest~integer~ such~that~} LB(2) (1+\varepsilon')^{u_2}\leq UB(2)$\\ $\gamma_1\leftarrow \frac{LB(2)}{\LB(1)} (1+\varepsilon')^{u_2}$;\qquad $\gamma_{u_1+u_2+1}\leftarrow \frac{LB(2)}{\LB(1)} (1+\varepsilon')^{-u_1} $\\ $x^1 \leftarrow \WS_{1}(\gamma_1)$ ;\qquad $x^{u_1+u_2+1} \leftarrow \WS_{1}(\gamma_{u_1+u_2+1})$\\ $Q \leftarrow \emptyset$\\ \lIf{$x^1$ $(1,2+\varepsilon)$-approximates~$x^{u_1+u_2+1}$} {$P \leftarrow \{x^1\}$} \Else{\lIf{$x^{u_1+u_2+1}$ $(2+\varepsilon,1)$-approximates~$x^1$}{$P \leftarrow \{x^{u_1+u_2+1}\}$} \Else{$P \leftarrow \{x^1,x^{u_1+u_2+1}\}$\\ $Q \leftarrow \{(1,u_1+u_2+1)\}$} } \While{$Q \neq \emptyset$}{ Select $(\ell,r)$ from~$Q$\\ $Q \leftarrow Q \setminus \{(\ell,r)\}$\\ $t \leftarrow \lfloor \frac{\ell+r}{2} \rfloor $\\ $\gamma_t=\frac{LB(2)}{LB(1)}(1+\varepsilon')^{u_2-t+ 1}$ \\ $x^t \leftarrow \WS_{1}(\gamma_t)$ \If{$x^{\ell}$ does not $(1,2+\varepsilon)$-approximate~$x^{t}$ or~$x^{r}$ does not $(2+\varepsilon,1)$-approximate~$x^t$ \label{if1} }{ $P\leftarrow P\cup \{x^t\}$\\ \If{$t\geq \ell+2$ and $x^{\ell}$ does not $(1,2+\varepsilon)$-approximate~$x^{t}$ and ~$x^{t}$ does not $(2+\varepsilon,1)$-approximate~$x^{\ell}$ \label{if2}} {$Q\leftarrow Q\cup \{(\ell,t)\}$ \label{if2then}} \If{$t\leq r-2$ and $x^{t}$ does not $(1,2+\varepsilon)$-approximate~$x^{r}$ and ~$x^{r}$ does not $(2+\varepsilon,1)$-approximate~$x^{t}$ \label{if3}} {$Q\leftarrow Q\cup 
\{(t,r)\}$ \label{if3then}} } } \Return $P$ \caption{A \emph{$\{(1,2+\varepsilon),(2+\varepsilon,1)\}$-approximation for biobjective minimization problems}\label{alg:biobjAlg}} \end{algorithm2e} \begin{theorem}\label{thm:binary-sear-approx} For a biobjective minimization problem, Algorithm~\ref{alg:biobjAlg} returns a $\{(1,2+\varepsilon),(2+\varepsilon,1)\}$-approximation in time \begin{align*} \mathcal{O}\left(T_{\WS_1}\cdot\log \left(\frac{1}{\varepsilon}\cdot\log \frac{\UB}{\LB}\right) \cdot\log \frac{\UB}{\LB} \right). \end{align*} \end{theorem} \begin{proof} The approximation guarantee of Algorithm~\ref{alg:biobjAlg} follows from Theorem~\ref{thm:main-result}. We just need to prove that the subset of weights used here is sufficient to preserve the approximation guarantee. \smallskip In lines~\ref{if2}-\ref{if2then}, the weights~$\gamma_i$ for $i=\ell+1,\ldots,t-1$ are not considered if~$x^{\ell}$ $(1,2+\varepsilon)$-approximates~$x^{t}$ or if~$x^t$ $(2+\varepsilon,1)$-approximates~$x^{\ell}$. We show that, indeed, these weights are not needed. \smallskip To this end, first observe that any solution $x^i \colonequals \WS_{1}(\gamma_i)$ for $i\in\{\ell+1,\ldots,t-1\}$ satisfies \begin{align*} f_1(x^{\ell}) \leq f_1(x^i) \leq f_1(x^{t}) \quad \text{ and } \quad f_2(x^{\ell}) \geq f_2(x^i) \geq f_2(x^{t}) \end{align*} since $\gamma_{\ell} > \gamma_i >\gamma_t$. Thus, if~$x^{\ell}$ $(1,2+\varepsilon)$-approximates~$x^{t}$, we obtain \begin{align*} f_2(x^{\ell}) \leq (2+\varepsilon)\cdot f_2(x^t) \leq (2+\varepsilon)\cdot f_2(x^i), \end{align*} which shows that~$x^{\ell}$ also $(1,2+\varepsilon)$-approximates~$x^i$. Therefore,~$x^i$ and the corresponding weight~$\gamma_i$ are not needed. 
\smallskip \noindent Similarly, if~$x^t$ $(2+\varepsilon,1)$-approximates~$x^{\ell}$, we have \begin{align*} f_1(x^t) \leq (2+\varepsilon)\cdot f_1(x^{\ell}) \leq (2+\varepsilon)\cdot f_1(x^i), \end{align*} which shows that~$x^t$ $(2+\varepsilon,1)$-approximates~$x^i$. Therefore~$x^i$ and the corresponding weight~$\gamma_i$ are again not needed. \smallskip In lines~\ref{if3}-\ref{if3then}, the weights~$\gamma_i$ for $i=t+1, \ldots, r-1$ are not considered if~$x^t$ $(1,2+\varepsilon)$-approximates~$x^{r}$ or if~$x^r$ $(2+\varepsilon,1)$-approximates~$x^{t}$ for similar reasons. \smallskip Also, in line~\ref{if1},~$x^t$ can be discarded and the weights $\gamma_i$ for $i=\ell+1, \ldots, r-1$ can be ignored if~$x^{\ell}$ $(1,2+\varepsilon)$-approximates~$x^{t}$ and~$x^r$ $(2+\varepsilon,1)$-approximates~$x^{t}$. Indeed, using similar arguments as before, we obtain that $x^{\ell}$ $(1,2+\varepsilon)$-approximates~$x^{i}$ for $i=\ell+1, \ldots, t$ and~$x^{r}$ $(2+\varepsilon, 1)$-approximates~$x^{i}$ for $i=t, \ldots, r-1$ in this case. Consequently, compared to Algorithm~\ref{alg:mainAlgo}, only superfluous weights are discarded in Algorithm~\ref{alg:biobjAlg} and the approximation guarantee follows by Theorem~\ref{thm:main-result}. \bigskip We now prove the claimed bound on the running time. Algorithm~\ref{alg:biobjAlg} explores a set of weights of cardinality $u_1+u_2+1 = \left\lfloor \log_{1+\varepsilon'}\frac{\UB(1)}{\LB(1)}\right\rfloor + \left\lfloor \log_{1+\varepsilon'}\frac{\UB(2)}{\LB(2)}\right\rfloor + 1$. The running time is obtained by bounding the number of calls to algorithm~$\WS_{1}$, which corresponds to the number of nodes of the binary tree implicitly developed by the algorithm. The height of this tree is $\log_2(u_1+u_2+1) \in \mathcal{O}\left(\log \left(\frac{1}{\varepsilon}\cdot\log \frac{\UB}{\LB}\right)\right)$. \smallskip In order to bound the number of nodes with two children in the tree, we observe that we generate such a node (i.e. 
add the pairs $(\ell,t)$ and $(t,r)$ to~$Q$) only if~$x^\ell$ does not $(1,2+\varepsilon)$-approximate~$x^t$ and $x^t$ does not $(2+\varepsilon,1)$-approximate~$x^{\ell}$, and also~$x^t$ does not $(1,2+\varepsilon)$-approximate~$x^r$ and~$x^r$ does not $(2+\varepsilon,1)$-approximate~$x^t$. Hence, whenever a node with two children is generated, the corresponding solution~$x^t$ neither $(1,2+\varepsilon)$- nor $(2+\varepsilon,1)$-approximates any previously generated solution (and vice versa), so the objective values of any two such solutions must differ by more than a factor of~$2+\varepsilon$ in both objective functions. Using that the $j$th objective value of any feasible solution is between~$\LB(j)$ and~$\UB(j)$, this implies that there can be at most \begin{align*} \min\left\{\log_{2+\varepsilon} \left(\frac{\UB(1)}{\LB(1)}\right), \log_{2+\varepsilon} \left( \frac{\UB(2)}{\LB(2)}\right) \right\} \in \mathcal{O}\left(\log \frac{\UB}{\LB} \right) \end{align*} nodes with two children in the tree. \smallskip Using the obtained bounds on the height of the tree and the number of nodes with two children, Lemma~\ref{lem:tree-size} shows that the total number of nodes in the tree is \begin{align*} \mathcal{O}\left(\log \left(\frac{1}{\varepsilon}\cdot\log \frac{\UB}{\LB}\right) \cdot\log \frac{\UB}{\LB} \right), \end{align*} which proves the claimed bound on the running time. \end{proof} \subsection{Tightness results}\label{subsec:tightness} When solving the weighted sum problem exactly, Co\-rol\-la\-ry~\ref{cor:main-result-special} states that Algorithm~\ref{alg:mainAlgo} obtains a set~$\mathcal{A}$ of approximation factors in which $\sum_{j:\alpha_j>1}\alpha_j=p+\varepsilon$ for each $\alpha=(\alpha_1,\dots,\alpha_p)\in\mathcal{A}$. 
The following theorem shows that this multi-factor approximation result is arbitrarily close to the best possible result obtainable by supported solutions: \begin{theorem}\label{thm:inapprox-min} For $\varepsilon>0$, let \begin{align*} \mathcal{A}\colonequals\{\alpha\in\mathbb{R}^p: \alpha_1,\dots,\alpha_p\geq 1,\; \alpha_i=1 \text{ for at least one}~i, \text{ and } \sum_{j:\alpha_j>1}\alpha_j=p-\varepsilon\}. \end{align*} Then there exists an instance of a $p$-objective minimization problem for which the set~$\XS$ of supported solutions is not an $\mathcal{A}$-approximation. \end{theorem} \begin{proof} In the following, we only specify the set~$Y$ of images. A corresponding instance consisting of a set~$X$ of feasible solutions and an objective function~$f$ can then easily be obtained, e.\,g., by setting $X\colonequals Y$ and $f\colonequals \text{id}_{\mathbb{R}^p}$. \smallskip For $M>0$, let $Y\colonequals\{y^{1},\dots,y^{p},\tilde{y}\}$ with $y^{1}=(M,\frac{1}{p},\dots,\frac{1}{p})$, $y^{2}=(\frac{1}{p},M,\frac{1}{p},\dots,\frac{1}{p})$, \dots, $y^{p}=(\frac{1}{p},\dots,\frac{1}{p},M)$ and $\tilde{y}=\left(\frac{M+1}{p},\dots,\frac{M+1}{p}\right)$. Note that the point~$\tilde{y}$ is unsupported, while $y^{1},\dots,y^{p}$ are supported (an illustration for the case $p=2$ is provided in Figure~\ref{fig:minimization-impossiblility}). \medskip \noindent Moreover, the ratio of the $j$-th components of the points~$y^j$ and~$\tilde{y}$ is exactly \begin{align*} \frac{M}{\nicefrac{(M+1)}{p}} = p\cdot\frac{M}{M+1}, \end{align*} which is larger than $p-\varepsilon$ for $M>\frac{p}{\varepsilon}-1$. Consequently, for such~$M$, the point~$\tilde{y}$ is not $\alpha$-approximated by any of the supported points $y^1,\dots,y^p$ for any $\alpha\in\mathcal{A}$, which proves the claim. \end{proof} \begin{figure}[ht!] 
\begin{center} \begin{tikzpicture}[scale=1.25] \fill[gray!30] (0.05,3.421) -- (0.05,7.4) -- (7.4,7.4) -- (7.4,3.421); \fill[gray!30] (3.421,0.05) -- (3.421,7.4) -- (7.4,7.4) -- (7.4,0.05); \fill[gray!30] (0.0263,7.4) -- (0.05,7.4) -- (0.05,6.5) -- (0.0263,6.5); \fill[gray!30] (7.4,0.0263) -- (7.4,0.05) -- (6.5,0.05) -- (6.5,0.0263); \draw[->] (-0.2,0) -- (7.4,0) node[below right] {$f_1$}; \draw[->] (0,-0.2) -- (0,7.4) node[above left] {$f_2$}; \draw[dashed,gray] (0.05,6.5) -- (6.5,0.05); \fill (3.34,3.34) circle (1.5pt) node[above right] {$\tilde{y}=(\frac{M+1}{2},\frac{M+1}{2})$}; \fill (0.05,6.5) circle (1.5pt) node[above right] {$y^2=(\frac{1}{2},M)$}; \fill (6.5,0.05) circle (1.5pt) node[above right] {$y^1=(M,\frac{1}{2})$}; \end{tikzpicture} \caption{Image space of the instance constructed in the proof of Theorem~\ref{thm:inapprox-min} for $p=2$. The shaded region is $\{(1,2-\varepsilon),(2-\varepsilon,1)\}$-approximated by the supported points~$y^1,y^2$.}\label{fig:minimization-impossiblility} \end{center} \end{figure} We remark that the set of points~$Y$ constructed in the proof of Theorem~\ref{thm:inapprox-min} can easily be obtained from instances of many well-known multiobjective minimization problems such as multiobjective shortest path, multiobjective spanning tree, multiobjective minimum ($s$-$t$-) cut, or multiobjective TSP (for multiobjective shortest path, for example, a collection of $p+1$ disjoint $s$-$t$-paths whose cost vectors correspond to the points $y^{1},\dots,y^{p},\tilde{y}$ suffices). Consequently, the result from Theorem~\ref{thm:inapprox-min} holds for each of these specific problems as well. 
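The construction from the proof of Theorem~\ref{thm:inapprox-min} is easy to check numerically. The sketch below is our own illustration (not part of the paper): it rebuilds the point set $\{y^1,\dots,y^p,\tilde y\}$ in exact rational arithmetic and verifies that the critical coordinate ratio equals $p\cdot\frac{M}{M+1}$ and exceeds $p-\varepsilon$ once $M>\frac{p}{\varepsilon}-1$.

```python
from fractions import Fraction

def hard_instance(p, M):
    """Points y^1,...,y^p and the unsupported point from the proof:
    y^j has value M in coordinate j and 1/p elsewhere;
    ytilde is ((M+1)/p, ..., (M+1)/p)."""
    ys = []
    for j in range(p):
        y = [Fraction(1, p)] * p
        y[j] = Fraction(M)
        ys.append(tuple(y))
    ytilde = tuple([Fraction(M + 1, p)] * p)
    return ys, ytilde

p, eps = 3, Fraction(1, 10)
M = 30  # any integer M > p/eps - 1 = 29 works for these parameters
ys, ytilde = hard_instance(p, M)
for j, y in enumerate(ys):
    ratio = y[j] / ytilde[j]
    assert ratio == Fraction(p * M, M + 1)  # = p * M/(M+1)
    assert ratio > p - eps                  # so ytilde is not (p-eps)-covered
```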
\medskip Moreover, note that the classical approximation result obtained in Corollary~\ref{cor:classical-result} is also arbitrarily close to best possible in the case that the weighted sum problem is solved exactly: While Corollary~\ref{cor:classical-result} shows that a $(p+\varepsilon,\dots,p+\varepsilon)$-approximation is obtained from Algorithm~\ref{alg:mainAlgo} when solving the weighted sum problem exactly, the instance constructed in the proof of Theorem~\ref{thm:inapprox-min} shows that the supported solutions do not yield an approximation guarantee of $(p-\varepsilon,\dots,p-\varepsilon)$ for any $\varepsilon>0$. This yields the following theorem: \enlargethispage{\baselineskip} \begin{theorem}\label{thm:inapprox-min-classical} For any $\varepsilon>0$, there exists an instance of a $p$-objective minimization problem for which the set~$\XS$ of supported solutions is not a $(p-\varepsilon,\dots,p-\varepsilon)$-approximation. \end{theorem} \section{Applications}\label{sec:applications} Our results can be applied to a large variety of minimization problems since exact or approximate polynomial-time algorithms are available for the weighted sum scalarization of many problems. \subsection{Problems with a polynomial-time solvable weighted sum scalarization} If the weighted sum scalarization can be solved exactly in polynomial time, Corollary~\ref{cor:main-result-special} shows that Algorithm~\ref{alg:mainAlgo} yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee $(\alpha_1,\dots,\alpha_p)$ such that $\sum_{j:\alpha_j>1}\alpha_j=p+\varepsilon$ and $\alpha_i=1$ for at least one~$i$. \smallskip Many problems of this kind admit an MFPTAS, i.e., a $(1+\varepsilon,\dots,1+\varepsilon)$-appro\-xi\-ma\-tion that can be computed in time polynomial in the encoding length of the input and $\frac{1}{\varepsilon}$. 
The approximation guarantee we obtain is worse in this case, even if the sum of the approximation factors for which an error can be observed is $p+\varepsilon$ in both approaches. The running time, however, is usually significantly better in our approach. \smallskip For the multiobjective shortest path problem, for example, the existence of an MFPTAS was shown in~\cite{Papadimitriou+Yannakakis:multicrit-approx}, while several specific MFPTAS have been proposed. Among these, the MFPTAS with the best running time is the one proposed in~\cite{Tsaggouris+Zaroliagis:mult-shortest-path}. For $p\geq 2$, their running time for general digraphs with $n$~vertices and $m$~arcs is $\mathcal{O}\left(m\cdot n^p \left(\frac{1}{\varepsilon}\log\frac{\UB}{\LB} \right)^{p-1}\right)$ while ours is only $\mathcal{O}\left((m + n\log\log n) \cdot \left(\frac{1}{\varepsilon}\log\frac{\UB}{\LB} \right)^{p-1}\right)$ using one of the fastest algorithms for single objective shortest path~\cite{Thorup:SP}, and even $\mathcal{O}\left((m + n\log\log n) \cdot \log \left(\frac{1}{\varepsilon}\log\frac{\UB}{\LB} \right)\cdot\log\frac{\UB}{\LB}\right)$ for $p=2$, using Theorem~\ref{thm:binary-sear-approx} and the same single objective algorithm. \smallskip There are, however, also problems for which the weighted sum scalarization can be solved exactly in polynomial time, but whose multiobjective version does \emph{not} admit an MFPTAS unless $\textsf{P}=\textsf{NP}$. For example, this is the case for the minimum $s$-$t$-cut problem~\cite{Papadimitriou+Yannakakis:multicrit-approx}. For yet other problems, like, e.g., the minimum weight perfect matching problem, only a randomized MFPTAS is known so far~\cite{Papadimitriou+Yannakakis:multicrit-approx}. In both cases, our algorithm can still be applied. 
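The shortest-path comparison above can be made concrete by plugging sample values into both bounds. This is a rough illustration only: hidden constants and lower-order terms are dropped, and the parameter values below are arbitrary choices, not taken from either paper.

```python
import math

def mfptas_bound(m, n, p, eps, ratio):
    """O(m * n^p * ((1/eps) * log(UB/LB))^(p-1)), constants dropped."""
    return m * n ** p * ((1 / eps) * math.log(ratio)) ** (p - 1)

def weighted_sum_bound(m, n, p, eps, ratio):
    """O((m + n log log n) * ((1/eps) * log(UB/LB))^(p-1)), constants dropped."""
    return (m + n * math.log(math.log(n))) * ((1 / eps) * math.log(ratio)) ** (p - 1)

# Sample graph: 10^3 vertices, 10^4 arcs, two objectives, UB/LB = 10^6.
m, n, p, eps, ratio = 10 ** 4, 10 ** 3, 2, 0.1, 10 ** 6
speedup = mfptas_bound(m, n, p, eps, ratio) / weighted_sum_bound(m, n, p, eps, ratio)
# The ((1/eps) log(UB/LB))^(p-1) factor cancels exactly in the quotient;
# the remaining gap is m * n^p versus m + n log log n.
```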
\subsection{Problems with a polynomial-time approximation scheme for the weighted sum scalarization} For problems where the weighted sum scalarization admits a polynomial-time approximation scheme, Corollary~\ref{cor:weighted-sum-PTAS} shows that Algorithm~\ref{alg:mainAlgo} yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee $(\alpha_1,\dots,\alpha_p)$ such that $\sum_{j:\alpha_j>1}\alpha_j=p+\varepsilon$. Thus, only the property that $\alpha_i=1$ for at least one~$i$ is lost compared to the case where the weighted sum scalarization can be solved exactly in polynomial time. \smallskip Since there exists a vast variety of single objective problems that admit po\-ly\-no\-mi\-al-time approximation schemes, this result is also widely applicable and yields the best known multiobjective approximation results for many problems. For example, we obtain the best known approximation results for the multiobjective versions of the weighted planar TSP (for which a polynomial-time approximation scheme with running time linear in the number~$n$ of vertices exists~\cite{Klein:planar-TSP}) and minimum weight planar vertex cover (for which a polynomial-time approximation scheme was proposed in~\cite{Baker:planar-graphs}). Note that, for both of these problems and many others in this class, it is not known whether the multiobjective version admits an MPTAS. 
\subsection{Problems with a polynomial-time $\sigma$-approximation for the weigh\-ted sum scalarization} If the weighted sum scalarization admits a polynomial-time $\sigma$-approxi\-ma\-tion algorithm (where $\sigma$ can be either a constant or a function of the input size), Theorem~\ref{thm:main-result} shows that Algorithm~\ref{alg:mainAlgo} yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee $(\alpha_1,\dots,\alpha_p)$ such that $\sum_{j:\alpha_j>1}\alpha_j=\sigma\cdot p+\varepsilon$ and $\alpha_i\leq\sigma$ for at least one~$i$. Moreover, by Corollary~\ref{cor:classical-result}, the algorithm also yields a (classical) $(\sigma\cdot p+\varepsilon,\dots,\sigma\cdot p+\varepsilon)$-approximation. \smallskip These results yield the best known approximation guarantees for many well-studied problems whose single objective version does not admit a po\-ly\-no\-mi\-al-time approximation scheme unless $\textsf{P}=\textsf{NP}$. Consequently, the multiobjective version of these problems does not admit an MPTAS under the same assumption. Problems of this kind include, e.g., minimum weight vertex cover, minimum $k$-spanning tree, minimum weight edge dominating set, and minimum metric $k$-center, all of which admit $2$-approximation algorithms in the single objective case (see \cite{Bar-Yehuda+Even:vertex-cover,Garg:STOC05,Fujito+Nagamochi:edge-dom-set,Hochbaum+Shmoys:bottleneck-problems}, respectively). An example of a problem where only a non-constant approximation factor can be obtained in the single objective case is the minimum weight set cover problem, where only a $(1+\ln |S|)$-approximation exists with~$|S|$ denoting the cardinality of the ground set to cover~\cite{Chvatal:set-cover-approx}. For all of these problems, Algorithm~\ref{alg:mainAlgo} yields both the best known classical approximation result for the multiobjective version as well as the first multi-factor approximation result. 
\smallskip A particularly interesting problem of this class is the metric version of the symmetric traveling salesman problem (metric STSP). Here, problem specific (deterministic) algorithms exist that obtain approximation guarantees of $(2,2)$ in the biobjective case~\cite{Glasser+etal:TSP-generalized} and $(2+\varepsilon,\dots,2+\varepsilon)$ for any constant number of objectives~\cite{Manthey+Ram:multicriteria-TSP}. The best approximation algorithm for the single objective version is the $\frac{3}{2}$-approximation algorithm by Christofides~\cite{Christofides:TSP}, which can be used in Algorithm~\ref{alg:mainAlgo} in order to obtain a multi-factor approximation where each feasible solution is approximated with some approximation guarantee $(\alpha_1,\dots,\alpha_p)$ such that $\sum_{j:\alpha_j>1}\alpha_j=\frac{3}{2}\cdot p+\varepsilon$ and $\alpha_i\leq\frac{3}{2}$ for at least one~$i$. \section{An inapproximability result for maximization problems}\label{sec:maximixation} In this section, we show that the weighted sum scalarization is much less powerful for approximating multiobjective \emph{maximization} problems. \smallskip Intuitively, in a multiobjective \emph{minimization} problem, the positivity of the objective function values and of the weights used within the weighted sum scalarization implies that a bad (i.e., large) value in some of the objective functions cannot be compensated in the weighted sum by a good (i.e., small) value in another objective function. This means that, if the weighted sum of the objective values of a solution for a minimization problem is (close to) optimal (i.e., minimal), then no single objective value can be too large. More precisely, for $f(x)=(f_1(x), \dots , f_p(x))\in \mathbb R^p$ and $w\in\mathbb{R}^p$ with $f_j(x),w_j > 0$ for $j=1, \dots , p$ and $v > 0$, it holds that \begin{align*} \sum_{j=1}^p w_j f_j(x) \leq v \Longrightarrow f_j(x) \leq \frac{1}{w_j} v \text{ for all } j=1, \dots , p. 
\end{align*} For a \emph{maximization} problem, however, a bad (i.e., small) value in some of the objective functions can be completely compensated in the weighted sum by a very good (i.\,e., large) value in another objective function. Thus, a solution that obtains a (close to) optimal (i.e., maximal) value in the weighted sum of the objective values can still have a very small (i.\,e., bad) value in some of the objectives: \begin{align*} \sum_{j=1}^p w_jf_j(x) \geq v \not\Longrightarrow f_j(x) \geq cv \text{ for all } j=1, \dots , p \text{ and any constant } c>0. \end{align*} For instance, while Corollary~\ref{cor:main-result-special-biobjective} implies that the set of supported solutions yields a $\{(1,2+\varepsilon),(2+\varepsilon,1)\}$-approximation for any $\varepsilon>0$ in the case of a biobjective minimization problem, no similar result holds for maximization problems. Indeed, for biobjective maximization problems, Figure~\ref{fig:maximization-impossiblility} demonstrates that there may exist unsupported solutions that are approximated only with an arbitrarily large approximation factor in (all but) one objective function by any supported solution. \begin{figure}[ht!] 
\begin{center} \begin{tikzpicture}[scale=1.25] \fill[gray!30] (1,6.5) -- (0,6.5) -- (0,0) -- (1,0) -- (1,6.5); \fill[gray!30] (6.5,1) -- (6.5,0) -- (0,0) -- (0,1) -- (6.5,1); \fill[gray!30] (0.5,7.4) -- (0,7.4) -- (0,0) -- (0.5,0) -- (0.5,7.4); \fill[gray!30] (7.4,0.5) -- (7.4,0) -- (0,0) -- (0,0.5) -- (7.4,0.5); \draw[->] (-0.2,0) -- (7.4,0) node[below right] {$f_1$}; \draw[->] (0,-0.2) -- (0,7.4) node[above left] {$f_2$}; \fill (0.5,6.5) circle (1.5pt) node[above right] {$y^2=(\frac{1}{2},M)$}; \fill (6.5,0.5) circle (1.5pt) node[above right] {$y^1=(M,\frac{1}{2})$}; \draw[dashed,gray] (0.5,6.5) -- (6.5,0.5); \fill (3.25,3.25) circle (1.5pt) node[below left] {$\tilde{y}=(\frac{M}{2},\frac{M}{2})$}; \end{tikzpicture} \caption{Image space of a biobjective maximization problem with three points, only two of which are supported (where $M>0$ is a large integer). Each of the two supported points does not approximate the unsupported point~$\tilde{y}$ better than a factor~$M$ in one of the two objective functions. The shaded region is $\{(1,2),(2,1)\}$-approximated by the supported points~$y^1,y^2$.}\label{fig:maximization-impossiblility} \end{center} \end{figure} \medskip The following theorem generalizes the construction in Figure~\ref{fig:maximization-impossiblility} to an arbitrary number of objectives and shows that, for maximization problems, a polynomial approximation factor can, in general, not be obtained in more than one of the objective functions simultaneously even if the approximating set consists of all the supported solutions: \begin{theorem}\label{thm:maximization-impossiblility} For any $p\geq 2$ and any polynomial~$\text{pol}$, there exists an instance~$I$ of a $p$-objective maximization problem such that at least one unsupported solution is \emph{not} approximated with an approximation guarantee of $2^{\text{pol}(|I|)}$ in~$p-1$ of the objective functions by any supported solution. 
\end{theorem} \begin{proof} Given $p\geq 2$ and a polynomial~$\text{pol}$, consider the $p$-objective maximization problem where each instance~$I$ is given by a $(p+1)$-tuple $(x^1,\dots,x^p,\tilde{x})$ of pairwise different vectors $x^1,\dots,x^p,\tilde{x}\in\mathbb{Z}^p$ and the feasible set is $X=\{x^1,\dots,x^p,\tilde{x}\}$. Given the encoding length~$|I|$ of such an instance, we set $M\colonequals 2^{\text{pol}(|I|)} + 1$ and $f(x^{1})=(M,\frac{1}{p},\dots,\frac{1}{p})$, $f(x^{2})=(\frac{1}{p},M,\frac{1}{p},\dots,\frac{1}{p})$, \dots, $f(x^{p})=(\frac{1}{p},\dots,\frac{1}{p},M)$ and $f(\tilde{x})=\left(\frac{M}{p},\dots,\frac{M}{p}\right)$. Then, the solution~$\tilde{x}$ is unsupported, while $x^{1},\dots,x^{p}$ are supported. \smallskip Moreover, the ratio of the $j$-th components of the images~$f(\tilde{x})$ and~$f(x^{\ell})$ for any $j\neq \ell$ is exactly $M=2^{\text{pol}(|I|)} + 1 > 2^{\text{pol}(|I|)}$, which shows that~$x^{\ell}$ does not yield an approximation guarantee of $2^{\text{pol}(|I|)}$ in objective function~$f_j$ for any $j\neq \ell$. \end{proof} \section{Conclusion}\label{sec:conclusion} The weighted sum scalarization is the most frequently used method to transform a multiobjective into a single objective optimization problem. In this article, we contribute to a better understanding of the quality of approximations for general multiobjective optimization problems which rely on this scalarization technique. To this end, we refine and extend the common notion of approximation quality in multiobjective optimization. As we show, the resulting multi-factor notion of approximation more accurately describes the approximation quality in multiobjective contexts. We also present an efficient approximation algorithm for general multiobjective minimization problems which turns out to be best possible under some additional assumptions. 
Interestingly, we show that a similar result based on supported solutions cannot be obtained for multiobjective maximization problems. \medskip The new multi-factor notion of approximation is independent of the specific algorithms used here. Thus, a natural direction for future research is to analyze new and existing approximation algorithms more precisely with the help of this new notion. This may yield both a better understanding of existing approaches as well as more accurate approximation results. \appendix \section{Proof of Lemma~\ref{lem:tree-size}} \begin{proof} In order to show the claimed upper bound on the number of nodes, we first show that any binary tree~$T$ with height~$h$ and $k$~nodes with two children that has the maximum possible number of nodes among all such binary trees must have the following property: If~$v$ is a node with two children at level~$\ell$, then all nodes~$u$ at the levels~$0,\dots,\ell-1$ must also have two children. So assume by contradiction that~$T$ is a binary tree maximizing the number of nodes among all trees with height~$h$ and $k$~nodes with two children, but~$T$ does not have this property. Then there exists a node~$v$ in~$T$ with two children at some level~$\ell$ and a node~$u$ with at most one child at a lower level~$\ell'\in\{0,\dots,\ell-1\}$. Then, the binary tree~$T'$ that results from making one node~$w$ that is a child of~$v$ in~$T$ an (additional) child of~$u$ also has height~$h$, contains~$k$ nodes with two children, and has the same number of nodes as~$T$. Moreover, the level of~$w$ in~$T'$ changes to $\ell'+1<\ell+1$. Hence, the level of any leaf of the subtree rooted at~$w$ must also have decreased by at least one. Thus, giving any leaf of this subtree an additional child in~$T'$ would yield a binary tree of height~$h$ and~$k$ nodes with two children, and a strictly larger number of nodes than~$T$, contradicting the maximality of~$T$. 
\smallskip By the above property, in any binary tree maximizing the number of nodes among the trees satisfying the assumptions of the lemma, there are only nodes with two children on all levels $i<h'\colonequals \lfloor\log_2(k+1)\rfloor$ and only nodes with at most one child on all levels $i>h'$. Level~$h'$ may contain nodes with two children, but there is at least one node with at most one child on this level. \enlargethispage{2.5\baselineskip} Consequently, there are at most~$k$ nodes in total on the levels $0,\dots,h'-1$ and at most $k+1$ nodes at level~$h'$. Moreover, there are at most $2(k+1)$ nodes at level~$h'+1$, each of which is the root of a subtree (path) consisting of at most~$h-h'$ nodes (each with at most one child). Overall, this proves an upper bound of at most $k + (k+1) + 2(k+1)\cdot(h-h')\in\mathcal{O}(k\cdot h)$ on the number of nodes in the tree. ~\end{proof} \newpage \bibliographystyle{siamplain} \bibliography{literature} \end{document}
TITLE: Show $h_\mu (f^n)=n*h_\mu(f)$ QUESTION [0 upvotes]: In some article about ergodic theory, it is stated (without proof) that if $\mu$ is an $f$-invariant measure, then $h_\mu(f^k)=k*h_\mu(f)$. I'm looking to prove this. $h_\mu(f^k,A)=\lim_n (1/n)*H_\mu(\bigvee_{i=0}^{n-1} f^{-ik}A)$, so in some manner we should have $H_\mu(\bigvee_{i=0}^{n-1} f^{-ik}A)=k*H_\mu(\bigvee_{i=0}^{n-1} f^{-i}A)$, but not necessarily exactly that, since we only need this to hold on the $\sup_A$ over these quantities. It simply looks wrong to me, so I'm looking for some more opinions. REPLY [2 votes]: Call $g=f^k$ and, for a finite entropy partition $\mathcal{P}$ of the space, denote by $\mathcal{P}_f^n=\displaystyle\bigvee_{i=0}^{n-1} f^{-i}(\mathcal{P})$ its $n$-th iteration under $f$ (and similarly for $g$). Note that $$ \mathcal{P}_f^{km}=\bigvee_{i=0}^{km-1} f^{-i}(\mathcal{P})=\bigvee_{i=0}^{m-1} f^{-ik}\left( \bigvee_{j=0}^{k-1} f^{-j}(\mathcal{P}) \right) $$ Then $$ h_\mu(g,\mathcal{P}_f^k)=\lim_n\dfrac{1}{n}H_\mu\left(\bigvee_{i=0}^{n-1} g^{-i}(\mathcal{P}_f^k)\right)=\lim_n \dfrac{1}{n}H_\mu\left(\bigvee_{i=0}^{n-1}f^{-ik} \left( \bigvee_{j=0}^{k-1} f^{-j}(\mathcal{P}) \right) \right)=\lim_n\dfrac{1}{n}H_\mu(\mathcal{P}_f^{kn})=k\lim_n\dfrac{1}{nk}H_\mu(\mathcal{P}_f^{kn})=kh_\mu(f,\mathcal{P}). $$ Now, since $\mathcal{P}_f^k$ is finer than $\mathcal{P}$, we have that $h_\mu(g,\mathcal{P})\leq h_\mu(g,\mathcal{P}_f^k)=kh_\mu(f,\mathcal{P})$ and $kh_\mu(f,\mathcal{P})=h_\mu(g,\mathcal{P}_f^k)\leq \sup_{\mathcal{Q}}h_\mu(g,\mathcal{Q})=h_\mu(g)$. Since both inequalities hold for every partition $\mathcal{P}$, taking the supremum over $\mathcal{P}$ yields $h_\mu(g)=kh_\mu(f)$.
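A quick numerical sanity check (my own addition, not part of the answer above): for the doubling map $f(x)=2x \bmod 1$ with Lebesgue measure and $\mathcal{P}=\{[0,\frac12),[\frac12,1)\}$, the cells of $\bigvee f^{-i}\mathcal{P}$ are determined by binary digits of $x$. Estimating entropies by Monte Carlo shows why the refinement $\mathcal{P}_f^k$ is needed: iterating $g=f^2$ on $\mathcal{P}$ alone only sees every other digit.

```python
import math
import random

def empirical_entropy(positions, samples=200_000, seed=0):
    """Monte Carlo estimate of H_mu(Q) for the partition Q of [0,1)
    (Lebesgue measure) whose cells are determined by the binary digits
    of x at the given positions."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(samples):
        x = rng.random()
        label = tuple(int(x * 2 ** (i + 1)) % 2 for i in positions)
        counts[label] = counts.get(label, 0) + 1
    return -sum(c / samples * math.log(c / samples) for c in counts.values())

# Digits 0..n-1 encode the refinement P v f^{-1}P v ... v f^{-(n-1)}P.
n, k = 3, 2
H_f_n = empirical_entropy(range(n))            # ~ n log 2, so h(f,P) = log 2
H_g_P = empirical_entropy(range(0, n * k, k))  # g-iterates of P alone: ~ n log 2
H_g_Pk = empirical_entropy(range(n * k))       # g-iterates of P_f^k: ~ nk log 2
```

The middle estimate grows like $n\log 2$, so $h_\mu(g,\mathcal{P})=\log 2$ falls short of $k\,h_\mu(f,\mathcal{P})=2\log 2$; only the finer partition $\mathcal{P}_f^k$ (all digits) recovers it, exactly as in the computation above.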
TITLE: How to find an upper bound on the number of solutions of $y^3=x^2+4^k$ QUESTION [1 upvotes]: I have solved the first two parts of this question but I am struggling with the remaining section. I can't see any meaningful way to reuse what I did before and/or find a way forward. Just to be clear, it is part c) that I am stuck on. Would anyone be able to help me? Thanks! REPLY [2 votes]: The following is heavily based on Theorem 3.3 in Keith Conrad's excellent "Examples of Mordell’s Equation". $$y^2+4^k=x^3$$ Observe that $4^k$ is always a square, so we factor in $\mathbb{Z}[i]$: $$(y-2^ki)(y+2^ki)=y^2+2^{2k}=x^3$$ If $y-2^ki$ and $y+2^ki$ are cubes, then $y-2^ki=(m+ni)^3$ for some $m,n\in\mathbb{Z}$, so $-2^k=n(3m^2-n^2)$, and there are only $2(k+1)$ possible factorizations. Each factorization represents at most one possible solution, so there are at most $2k+2$ solutions. Look at the equation modulo $2$: $y^2+4^k \equiv x^3 \pmod 2$, so $y \equiv x \pmod 2$. Assume both $x$ and $y$ are odd and let $\delta$ be a common divisor of $y-2^ki$ and $y+2^ki$. Then $\delta$ also divides $(y+2^ki) - (y-2^ki)=2^{k+1}i$. Therefore, the norm of $\delta$, $N(\delta)$, divides $N(2^{k+1}i)=2^{2k+2}$. However, it also divides $N(y+2^ki)=y^2+4^k=x^3$, which is odd. Thus the norm of $\delta$ is $1$, and $y+2^ki$, $y-2^ki$ are relatively prime. Because $\mathbb{Z}[i]$ is a UFD, if a product of two relatively prime factors is a cube, then the factors must also be cubes up to units. Every unit in $\mathbb{Z}[i]$ is a cube, so it can be absorbed into the factors and we don't need to worry about it. So $y+2^ki$ and $y-2^ki$ are indeed cubes. Assume both $x$ and $y$ are even and let $x=2x',y=2y'$. The equation becomes $4{y'}^2+2^{2k}=8{x'}^3$; if $k$ is greater than zero, then we can divide by four and get ${y'}^2+2^{2k-2}=2{x'}^3$. If $k$ is greater than one, look modulo $2$ and get ${y'}^2\equiv 0 \pmod 2$, so $4\mid{y'}^2$ and thus $4\mid 2{x'}^3$ and $2\mid x'$. Let $x'=2x'',y'=2y''$, so $4{y''}^2+2^{2k-2}=16{x''}^3$. If $2^{2k-2}$ is not $1$, divide by $4$ and get ${y''}^2+2^{2k-4}=4{x''}^3$. 
$y''$ must be even if $2^{2k-4}$ is not one; let $y''=2y'''$ and divide by $4$ to get ${y'''}^2+2^{2k-6}={x''}^3$. Continue in this way until the power of four gets down to one. So we have $y^2=x^3-1$, $y^2=2x^3-1$, and $y^2=4x^3-1$ as possible ending states. $y^2=2x^3-1$ is already dealt with in Conrad's notes (and this answer is getting long). In $y^2=4x^3-1$, $-1$ is not a quadratic residue modulo $4$, so there is no solution. $y^2=x^3-1$ has only one integer solution and is reached when $k \equiv 0 \pmod 3$. It is possible to do better - we can divide by $2^6$ as many times as we want because we will get a rational point. $\frac{y^2}{2^6}+4^{k-3}=\frac{x^3}{2^6}$ becomes $\left(\frac{y}{2^3}\right)^2+4^{k-3}=\left(\frac{x}{2^2}\right)^3$. So we ask about rational points on that curve instead, and the only thing that matters is $k \pmod 3$. $$ \begin{array}{ccccc} \text{Curve} & \text{Mordell-Weil group}& \text{Number of rational points} & \text{Easier bound} & \text{Overall bound} \\ \hline y^2=x^3-2^{2\cdot0} & \mathbb{Z}/{2}\mathbb{Z} & 1 & 2k+3 & 1 \\ y^2=x^3-2^{2\cdot1} & \mathbb{Z} & \infty & 2k+2 & 2k+2 \\ y^2=x^3-2^{2\cdot2} & \text{Trivial} & 0 & 2k+2 & 0 \\ \end{array} $$ See 0,1,2. I will admit I don't know how to prove these facts about Mordell-Weil groups at all yet.
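As a sanity check on the bound (my own addition; the search window for $x$ is an arbitrary cutoff, so this only confirms, not proves, the counts), a brute-force search recovers the classical solutions for small $k$:

```python
import math

def solutions(k, x_max=200):
    """All integer solutions (x, y) of y^2 + 4^k = x^3 with 1 <= x <= x_max."""
    sols = []
    for x in range(1, x_max + 1):
        y2 = x ** 3 - 4 ** k
        if y2 < 0:
            continue
        y = math.isqrt(y2)  # exact integer square root
        if y * y == y2:
            sols.extend([(x, 0)] if y == 0 else [(x, y), (x, -y)])
    return sols

# k = 1: exactly the classical solutions of y^2 + 4 = x^3 show up,
# and their number respects the 2k + 2 = 4 bound derived above.
assert set(solutions(1)) == {(2, 2), (2, -2), (5, 11), (5, -11)}
assert set(solutions(0)) == {(1, 0)}  # y^2 = x^3 - 1: the single solution
```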
{"set_name": "stack_exchange", "score": 1, "question_id": 2170088}
TITLE: Why is the two sheeted cone not a regular surface? QUESTION [0 upvotes]: The two sheeted cone is $\{(x,y,z) \in \mathbb R^3 : x^2+y^2-z^2=0\}$. I would like to use this proposition: to show that the two sheeted cone is not a regular surface. I know that the point of failure has to be the $(0,0,0)$, but I am not sure how to use this proposition to show this. REPLY [1 votes]: Suppose the two sheeted cone $S$ is a regular surface. As you note $p=(0,0,0)\in S$. Let $U\subset S$ be an open neighbourhood of $p$ in $S$ such that $U$ is the graph of a differentiable function of one of the forms $$z=f(x,y),\qquad y=g(x,z),\qquad x=h(y,z).$$ By definition of the topologies on $S$ and $\Bbb{R}^3$ there exists an open ball $B_{\varepsilon}(p)\subset\Bbb{R}^3$, centered at $p$ and with radius $\varepsilon$, such that $V:=B_{\varepsilon}(p)\cap S$ and $V\subset U$. Then $V$ is also the graph of a function as above. Note that $V$ contains a point $(x,y,z)$ with $x\neq0$, $y\neq0$ and $z\neq0$. Then it also contains the points $$(x,y,-z),\qquad (x,-y,z)\qquad\text{ and }\qquad (-x,y,z).$$ But this contradicts the fact that $$z=f(x,y),\qquad y=g(x,z)\qquad\text{ or }\qquad x=h(y,z),$$ respectively. Hence the two sheeted cone is not a regular surface.
{"set_name": "stack_exchange", "score": 0, "question_id": 3088982}
\section{Transferring vanishing lines}\label{sec:Transferring} In this section we explain under which circumstances vanishing lines for $E_k$-homology imply vanishing lines for $E_{k-1}$-homology (``transferring down'') or $E_{k+1}$-homology (``transferring up''). Transferring up uses the bar constructions of the previous section; transferring down is the first application of our theory of $E_k$-cells. As before, $\sfC = \sfS^\sfG$ with $\sfS$ satisfying the axioms of Sections \ref{sec:axioms-of-cats} and \ref{sec:axioms-of-model-cats}. \subsection{The bar spectral sequence} The $k$-fold iterated bar construction $B^{E_k}(\gR,\epsilon)$ for an augmented $E_k$-algebra constructed in Section \ref{sec:iterated-bar-def} is a variation on the ordinary $k$-fold iterated bar construction. As such, there exists a bar spectral sequence\index{spectral sequence!bar} which computes $H_{*,*}(B^{E_k}(\gR,\epsilon))$ from $H_{*,*}(B^{E_{k-1}}(\gR,\epsilon))$. This will be used in Section \ref{sec:TrfUp} to show that if $\gR$ is an $E_k$-algebra whose $E_l$-homology for $l<k$ vanishes in a range of bidegrees, then the same is true for its $E_k$-homology. The setup is identical to that for the bar spectral sequence in Section \ref{sec:bar-ss}. Let $\cat{GrMod}_\bk$ denote the category of graded modules over a commutative ring $\bk$ with tensor product as usual involving a Koszul sign, and given a monoidal category $\sfG$ let $\cat{GrMod}_\bk^\sfG$ denote the category of functors with the Day convolution monoidal structure. As in Section \ref{sec:an-estimate}, we let $\bk[\bunit]$ denote the monoidal unit $(\bunit_\sfG)_*(\bk)$ in this category, given by the functor $g \mapsto \bk[\sfG(\bunit_\sfG,-)]$. 
When $\bk$ is a field, to give an algebraic interpretation of the $E_2$-page of the bar spectral sequence, we use the Segal-like nature of the $k$-fold iterated bar construction in one of its $k$ directions, to endow $H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bk)$ with the structure of an augmented associative algebra in $\cat{GrMod}_\bk^\sfG$. \begin{theorem}\label{thm:BarSS} Let $\gR$ be an augmented $E_k^+$-algebra which is cofibrant in $\sfC$. Then for each $\bk$-module $M$ there is a strongly convergent spectral sequence \[E^1_{g,p,q} = H_{g,q}(B^{E_{k-1}}(\gR,\epsilon)^{\otimes p};M) \Rightarrow H_{g,p+q}(B^{E_k}(\gR,\epsilon);M).\] Let us further suppose that $\bk=\bF$ is a field and $\sfG$ is a groupoid such that $G_x \times G_y \to G_{x \otimes y}$ is injective for all $x,y \in \sfG$. Then $H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)$ has a natural structure of an augmented associative algebra in $\cat{GrMod}_\bF^\sfG$, and we may identify the $E_2$-page of this spectral sequence as \[E^2_{*, p,*} = \mathrm{Tor}^p_{H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)}({\bF[\bunit]},{\bF[\bunit]}),\] with $\mr{Tor}$ formed in the category $\cat{GrMod}_\bF^\sfG$. \end{theorem} \begin{proof}There is a semi-simplicial object in $\sfC$ \[X_\bullet \colon [p] \longmapsto \fgr{B^{E_k}_{p, \bullet \ldots, \bullet}(\gR,\epsilon)}\] given by forming the thick geometric realisation in the last $(k-1)$ simplicial directions. This is levelwise cofibrant, by Lemma \ref{lem:thick-geom-rel-cofibrations}. By Lemma \ref{lem:bek-unit}, $X_0$ is weakly equivalent to $\bunit$. The object $X_1$ is isomorphic to the geometric realisation of $\mathcal{P}_1(1) \times (B^{E_{k-1}}_{\bullet, \ldots, \bullet}(\gR,\epsilon))$, and using the fact that $\mathcal{P}_1(1)$ is contractible, we may conclude that this is weakly equivalent to $B^{E_{k-1}}(\gR,\epsilon)$. 
More generally there is a $(k-1)$-fold simplicial map \begin{equation}\label{eq:Segal} B^{E_k}_{p, \bullet \ldots, \bullet}(\gR,\epsilon) \lra \mathcal{P}_1(p) \times B^{E_{k-1}}_{\bullet, \ldots, \bullet}(\gR,\epsilon)^{\otimes p} \end{equation} induced by the inclusion \[\mathcal{P}_k(p_1, \ldots, p_k) \hookrightarrow \mathcal{P}_1(p_1) \times \mathcal{P}_{k-1}(p_2, \ldots, p_k)^{p_1}\] which remembers the grid inside each strip in the first simplicial direction. This map is a homotopy equivalence, so \eqref{eq:Segal} is a levelwise weak equivalence. As both objects are levelwise cofibrant, by Lemma \ref{lem:thick-geom-rel-cofibrations} we obtain an equivalence $X_p \simeq X_1^{\otimes p}$. The spectral sequence is then the geometric realization spectral sequence of Theorem \ref{thm:geom-rel-ss-thick} applied to the levelwise cofibrant simplicial object $X_\bullet$, using the equivalences $X_p \simeq X_1^{\otimes p}$ to identify the $E^1$-page. For the second part, when $\bk=\bF$ is a field and $\sfG$ is a groupoid such that $G_x \times G_y \to G_{x \otimes y}$ is injective for all $x,y \in \sfG$ we can apply the K\"unneth isomorphism of Lemma \ref{lem:KunnethFormula} (i) to get isomorphisms \[H_{*,*}(X_p;\bF) \cong H_{*,*}(X_1^{\otimes p};\bF) \cong H_{*,*}(X_1;\bF)^{\otimes p},\] where $H_{*,*}(X_1;\bF) \cong H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)$. Then the map \[H_{*,*}(X_1;\bF) \otimes H_{*,*}(X_1;\bF) \cong H_{*,*}(X_2;\bF) \overset{(d_1)_*}\lra H_{*,*}(X_1;\bF)\] induces a multiplication on $H_{*,*}(X_1;\bF)$, which may be seen to be associative by considering the face maps $X_3 \to X_1$. The two face maps $X_1 \to X_0$ are equal, and define an augmentation $H_{*,*}(X_1;\bF) \to H_{*,*}(X_0;\bF) =\bF[\bunit]$. 
In terms of this data we have \[H_{*,*}(X_p;\bF) \cong (\bunit_\sfG)_*(\bF) \otimes_{H_{*,*}(X_1;\bF)} (H_{*,*}(X_1;\bF)^{\otimes p+1} \otimes \bF[\bunit])\] and we recognise the chain complex $(E^1_{*,*,*}, d^1)$ as the result of applying the functor ${\bF[\bunit]} \otimes_{H_{*,*}(X_1;\bF)} - \colon \cat{GrMod}_\bF^\sfG \to \cat{GrMod}_\bF^\sfG$ levelwise to the canonical bar resolution of ${\bF[\bunit]}$ by free left $H_{*,*}(X_1;\bF)$-modules. This gives $E^2_{*, p, *} \cong \mathrm{Tor}^p_{H_{*,*}(X_1;\bF)}({\bF[\bunit]},{\bF[\bunit]})$ as claimed. \end{proof} When $\gR$ is not just an augmented $E_k^+$-algebra but is an $E_{k+1}^+$-algebra, and we work with coefficients in $\bk = \bF$ a field, then the second part of Theorem \ref{thm:BarSS} says that $H_{*,*}(B^{E_k}(\gR,\epsilon);\bF)$ is an associative algebra. We shall shortly prove that in this case $H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)$ is not just an augmented associative $\bF$-algebra, but a \emph{commutative} one. (This should come as no surprise given Theorems \ref{thm:BarHomologyIndec} and \ref{thm:iterated-decomposables}, which endow the \emph{reduced} $E_{k-1}$-bar construction with an $E_2$-algebra structure.) 
Then we can combine the external tensor product with the multiplication map on $H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)$ (which is a map of algebras if and only if it is commutative) to obtain a multiplication \[\begin{tikzcd}\mathrm{Tor}^*_{H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)}({\bF[\bunit]},{\bF[\bunit]}) \otimes \mathrm{Tor}^*_{H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)}({\bF[\bunit]},{\bF[\bunit]}) \dar \\ \mathrm{Tor}^*_{H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF) \otimes H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)}({\bF[\bunit]} \otimes {\bF[\bunit]},{\bF[\bunit]}\otimes {\bF[\bunit]}) \dar \\ \mathrm{Tor}^*_{H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)}({\bF[\bunit]},{\bF[\bunit]}),\end{tikzcd}\] making $\mathrm{Tor}^*_{H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)}({\bF[\bunit]},{\bF[\bunit]})$ a graded-commutative algebra with additional grading. \begin{lemma}\label{lem:bar-ss-multiplicative} If $\gR$ is an augmented $E_{k+1}^+$-algebra, then $H_{*,*}(B^{E_{k-1}}(\gR,\epsilon);\bF)$ is an augmented commutative algebra. Furthermore, the bar spectral sequence of Theorem \ref{thm:BarSS} is a spectral sequence of $\bF$-algebras.\end{lemma} \begin{proof} For the statement to be meaningful we must have $k-1 \geq 1$ and so $k+1 \geq 3$, which means that $\sfC$ must be symmetric monoidal. It shall be helpful to make the following general observation. For $r \leq k$, take the map \begin{equation} \label{eqn:grid-c1} \mathcal{P}_k(p_1, \ldots, p_k) \lra \cC_r(p_1 \cdots p_r) \times \mathcal{P}_{k-r}(p_{r+1}, \ldots, p_k),\end{equation} which considers the first $r$ grid directions as a collection of little $r$-cubes and remembers the remaining $(k-r)$ grid directions as a grid in an $(k-r)$-dimensional cube. 
We then define a $(k-r)$-fold semi-simplicial object $Y^{(k)}_{p_1,\ldots,p_r,\bullet,\ldots,\bullet}$ with $(p_{r+1},\ldots,p_k)$-simplices given by \[\cC_r(p_1 \cdots p_r) \times \mathcal{P}_{k-r}(p_{r+1}, \ldots, p_k) \times G_{p_1,\ldots,p_k}(\epsilon),\] with $G_{p_1,\ldots,p_k}(\epsilon)$ as in Definition \ref{def:kfold-bar-augmented}. As in that definition, the $i$th face map $d^j_i$ in the $j$th direction (here $r+1 \leq j \leq k$) is given by the face map of Definition \ref{def:pk} on the first factor. On the second factor, it is given by adjunction, by the map of simplicial sets \begin{align*} \cC_r(p_1 \cdots p_r) \times &\cP_{k-r}(p_{r+1},\ldots,p_k) \lra \cC_k(2) \\ &\qquad \overset{\alpha}\lra \mr{Map}_\sfC(G_{p_1, \ldots, p_k}(\epsilon), G_{p_1,\ldots, p_{j-1}, p_j-1, p_{j+1},\ldots,p_k}(\epsilon)) \end{align*} with the first map given by $\{e\} \times \{t^j_i\} \mapsto \delta^j_i$, and the second map as in Definition \ref{def:kfold-bar-augmented}. We next describe a $(k-r)$-fold simplicial map \begin{equation}\label{eqn:multiplication-ykr} Y^{(k)}_{p_1,\ldots,p_r,\bullet,\ldots,\bullet} \lra B^{E_{k-r}}_{\bullet,\ldots,\bullet}(\gR,\epsilon).\end{equation} On the first factor this is simply the projection $\cC_r(p_1 \cdots p_r) \times \mathcal{P}_{k-r}(p_{r+1}, \ldots, p_k) \to \mathcal{P}_{k-r}(p_{r+1}, \ldots, p_k)$. 
On the second factor, it is given by adjunction, by the map of simplicial sets \begin{align*} \cC_r(p_1 \cdots p_r) \times \cP_{k-r}(p_{r+1},\ldots,p_k) &\lra \cC_k(p_1 \cdots p_r) \overset{\beta}\lra \mr{Map}_\sfC(G_{p_1,\ldots, p_k}(\epsilon), G_{p_{r+1},\ldots,p_k}(\epsilon))\\ \{e\} \times \{t^j_i\} &\longmapsto \{e \times \mr{id}_{I^{k-r}}\}, \end{align*} with $\beta$ given as follows: as long as $1 \leq q_j \leq p_j$ for all $j \geq r+1$ by the map \begin{align*}\cC_k(p_1 \cdots p_r) \lra \cE_{\gR}(p_1 \cdots p_r) &= \mr{Map}_\sfC(\gR^{\otimes p_1 \cdots p_r}, \gR) \\ &\qquad = \mr{Map}_\sfC\left(\bigotimes_{j=1}^r \bigotimes_{i_j=1}^{p_j} B_{p_1,\ldots,p_r,p_{r+1},\ldots, p_k}^{i_1,\ldots,i_r,q_{r+1},\ldots,q_k}, B_{p_{r+1},\ldots,p_k}^{q_{r+1},\ldots,q_k}\right)\end{align*} and the evident identity maps on the remaining factors. If for some $j \geq r+1$, $q_j$ is either $0$ or $p_j+1$, it is the same map but with $\gR$ replaced by $\bunit$. We shall augment notation from the proof of Theorem \ref{thm:BarSS} to make the dependence on $k$ clear: $X^{(k)}_\bullet \coloneqq \fgr{B^{E_{k}}_{p,\bullet,\ldots,\bullet}(\gR,\epsilon)}$. Since $\gR$ is an $E_{k+1}^+$-algebra, we may consider the $E_{k+1}$-bar construction. We set $r=2$, $p_1=1$, and $p_2=2$, then take the geometric realization of \eqref{eqn:grid-c1} and \eqref{eqn:multiplication-ykr} to obtain maps \[\fgr{B^{E_{k+1}}_{1,2,\bullet,\ldots,\bullet}(\gR,\epsilon)} \lra \fgr{Y^{(k+1)}_{1,2,\bullet,\ldots,\bullet}} \lra B^{E_{k-1}}(\gR,\epsilon).\] The multiplication on $X_1^{(k)}$ can be recovered from this. 
To do so, we use the evident homotopy equivalences \begin{align*}\cP_k(2,p_3,\ldots,p_{k+1}) &\lra \cP_1(2) \times \cP_{k-1}(p_3,\ldots,p_{k+1})^2 \\ \cP_{k+1}(1,2,p_3,\ldots,p_{k+1}) &\lra \cP_{k}(2,p_3,\ldots,p_{k+1}), \\ \cC_2(1 \cdot 2) \times \cP_{k-1}(p_3,\ldots,p_{k+1}) &\lra \cC_2(2) \times \cP_{k-1}(p_3,\ldots,p_{k+1})^2, \\ \cP_{k}(1,p_3,\ldots,p_{k+1}) &\lra \cP_{k-1}(p_3,\ldots,p_{k+1}),\end{align*} to obtain the weak equivalences in the following commutative diagram \[\begin{tikzcd}\cP_1(2) \times (X^{(k)}_1)^{\otimes 2} \arrow[dd] & X^{(k)}_2 \rar{d_1} \lar[swap]{\simeq} & X^{(k)}_1 \arrow[equals]{d} \\ & \uar{\simeq} \fgr{B^{E_{k+1}}_{1,2,\bullet,\ldots,\bullet}(\gR,\epsilon)} \dar & X^{(k)}_1 \dar{\simeq} \\ \cC_2(2) \times (X^{(k)}_1)^{\otimes 2} & \lar[swap]{\simeq} Y^{(k+1)}_{1,2} \rar & B^{E_{k-1}}(\gR,\epsilon).\end{tikzcd}\] Proving that both squares commute is a simple matter of tracing through the various maps of grids and cubes. Thus we have exhibited the multiplication on $X_1^{(k)}$ up to weak equivalence as arising from a choice of point in $\cC_2(2)$. It now remains to observe that the multiplication in reverse order similarly arises by picking another point in $\cC_2(2)$, and since $\cC_2(2)$ is path-connected these maps are homotopic. \vspace{.5 em} We showed in Theorem \ref{thm:BarSS} that there is a weak equivalence and multiplication \[(X_1^{(k+1)})^{\otimes 2} \overset{\simeq}\longleftarrow X_2^{(k+1)} \lra X_1^{(k+1)}.\] As $X_1^{(k+1)} \simeq B^{E_k}(\gR,\epsilon)$, it is this zigzag of maps that endows the bar spectral sequence with an algebra structure, which by construction converges to the $\bF$-algebra structure on $H_{*,*}(B^{E_{k}}(\gR,\epsilon);\bF)$. On the $E^1$-page it gives the map on canonical bar resolutions induced by the $E_1$-algebra structure in the remaining direction. 
This is homotopic to the $E_1$-algebra structure used to construct the product in the second part of Theorem \ref{thm:BarSS}, and hence gives the $\bF$-algebra structure on $\mr{Tor}$-groups discussed above. \end{proof} \subsection{Transferring vanishing lines up}\label{sec:TrfUp} Transferring vanishing lines up follows from our expression of derived $E_k$-indecomposables in terms of the iterated bar construction, the bar spectral sequence described in Theorem \ref{thm:BarSS}, and a K{\"u}nneth-type theorem. \begin{theorem}\label{thm:TrfUp} Let $\gR \in \Alg_{E_k}(\sfC)$, and $\rho \colon \sfG \to [-\infty,\infty]_\geq$ be an abstract connectivity such that $\rho*\rho \geq \rho$. If $l \leq k$ is such that $H^{E_l}_{g,d}(\gR)=0$ for $d < \rho(g)-l$, then $H^{E_k}_{g,d}(\gR)=0$ for $d < \rho(g)-l$ too. \end{theorem} \begin{proof} We claim that it is enough to consider the case $(l,k) = (k-1,k)$. To prove this claim we need to explain how the case $(l,l+1)$ provides the input for $(l+1,l+2)$, etc. We can use the case $(l,l+1)$ to prove that if $H^{E_l}_{g,d}(\gR)=0$ for $d < \rho(g)-l$, then $\smash{H^{E_{l+1}}_{g,d}}(\gR)=0$ for $d < \rho(g)-l$ too. This conclusion provides input for the case $(l+1,l+2)$ when we rewrite it as $H^{E_{l+1}}_{g,d}(\gR)=0$ for $d < \rho'(g)-(l+1)$ with $\rho' \coloneqq \rho+1$, which still satisfies $\rho' \ast \rho' \geq \rho'$. Let us from now assume that $l = k-1$. By Theorem \ref{thm:BarHomologyIndec}, we have equivalences \[\tilde{B}^{E_{k-1}}(\gR) \simeq S^{k-1} \wedge Q_\bL^{E_{k-1}}(\gR) \quad \text{and} \quad \tilde{B}^{E_k}(\gR) \simeq S^k \wedge Q_\bL^{E_k}(\gR),\] so the assumption of the theorem is equivalent to saying that $\tilde{B}^{E_{k-1}}(\gR)$ is homologically $\rho$-connective, and our desired conclusion is equivalent to saying that $\tilde{B}^{E_{k}}(\gR)$ is homologically $(1+\rho)$-connective. 
We also have, by definition, homotopy cofibre sequences in $\sfC_*$ \begin{align*} {B}^{E_{k-1}}(\bunit, \epsilon_\bunit)_+ \lra &{B}^{E_{k-1}}(\gR^+, \epsilon_{can})_+\lra \tilde{B}^{E_{k-1}}(\gR),\\ {B}^{E_{k}}(\bunit, \epsilon_\bunit)_+ \lra &{B}^{E_{k}}(\gR^+,\epsilon_{can})_+\lra \tilde{B}^{E_{k}}(\gR). \end{align*} Let us write $\epsilon$ for either of the augmentations $\epsilon_\bunit$ or $\epsilon_{can}$. The relative version of the bar spectral sequence of Theorem \ref{thm:BarSS} starts from \[E^1_{g,p,q} = H_{g,q}(B^{E_{k-1}}(\gR,\epsilon)^{\otimes p}, {B}^{E_{k-1}}(\bunit,\epsilon)^{\otimes p})\] and converges strongly to $H_{g, p+q}(B^{E_{k}}(\gR,\epsilon), {B}^{E_{k}}(\bunit,\epsilon))=H_{g,p+q}(\tilde{B}^{E_{k}}(\gR))$. The assumption can be rephrased as saying that the map $B^{E_{k-1}}(\bunit,\epsilon) \to B^{E_{k-1}}(\gR,\epsilon)$ is homologically $\rho$-connective. As ${B}^{E_{k-1}}(\bunit,\epsilon) \simeq \bunit$ by Lemma \ref{lem:bek-unit}, which has homological connectivity given by the unit $\bunit_\text{conn} \in [-\infty,\infty]_\geq^\sfG$ as in (\ref{eqn:abstract-connectivity-unit}), the object $B^{E_{k-1}}(\gR,\epsilon)$ is $\inf(\bunit_\text{conn}, \rho)$-connective. By Corollary \ref{cor:connectivity-under-tensor2} the map ${B}^{E_{k-1}}(\bunit,\epsilon)^{\otimes p} \to {B}^{E_{k-1}}(\gR,\epsilon)^{\otimes p}$ is then homologically $(\inf(\bunit_\text{conn}, \rho)^{* p-1} * \rho)$-connective, and hence $\rho$-connective using the fact that $\rho*\rho \geq \rho$. Furthermore, it is $\infty$-connective if $p=0$, so $E^1_{g,p,q}$ vanishes if $p=0$ or if $q < \rho(g)$, so it vanishes for $p+q < 1+\rho(g)$. As this spectral sequence converges strongly to $H_{g, p+q}(\tilde{B}^{E_{k}}(\gR))$, the conclusion follows. 
\end{proof} \subsection{Transferring vanishing lines down} To transfer vanishing lines down, we use the theory of CW approximation that we have developed in Section \ref{sec:additive-case} and so we must assume Axiom \ref{axiom:Hurewicz}. \begin{theorem}\label{thm:TrfDown} Suppose that $\sfG$ is Artinian, let $\gR \in \Alg_{E_k}(\sfC)$ be reduced and 0-connective, $l \leq k$, and $\rho \colon \sfG \to [-\infty,\infty]_\geq$ be an abstract connectivity such that $\rho*\rho \geq \rho$ and $H^{E_k}_{g,d}(\gR)=0$ for $d < \rho(g) - l$. Then $H^{E_l}_{g,d}(\gR)=0$ for $d < \rho(g)-l$. \end{theorem} \begin{proof} Firstly, the groupoid $\sfG$ and the operad $E_k$ satisfy the hypotheses of Lemma \ref{lem:TensorDetectNull}. The canonical morphism $\binit \to \gR$ is between 0-connective reduced $E_k$-algebras, so by Theorem \ref{thm:MinCellStr-additive} we may construct a CW approximation $\gZ \overset{\sim}\to \gR$, where $\gZ$ consists of $(g,d)$-cells with $d \geq \rho(g)-l$ and has skeletal filtration $\mr{sk}(\gZ) \in \Alg_{E_k}(\sfC^{\bZ_{\leq}})$. By Theorem \ref{thm:associated-graded-skeletal}, the associated graded $\grr(\mr{sk}(\gZ))$ of this filtration is given by $\gE_k(X)$, where $X$ is a wedge of $S^{g, d, d}$'s with $d \geq \rho(g)-l$. The spectral sequence of Theorem \ref{thm:DerIndecSS} with $\cO = E_l$ takes the form \[E^1_{g,p,q} = H_{g,p+q, q}^{E_l}(\gE_k(X)) \Longrightarrow H_{g,p+q}^{E_l}(\gR),\] so, forgetting the internal grading, it is enough to show the vanishing of $H_{g,d}^{E_l}(\gE_k(X))$ for $d < \rho(g)-l$. To do this we use the weak equivalences \[S^l \wedge Q_\bL^{E_l}(\gE_k(X)) \simeq \tilde{B}^{E_l}(\gE_k(X)) \simeq E_{k-l}(S^l \wedge X_+)\] in $\sfC_*$ from Theorems \ref{thm:BarHomologyIndec} and \ref{thm:CalcFree}, so that it suffices to show that $E_{k-l}(S^l \wedge X_+)$ is homologically $\rho$-connective. 
We have that $S^l \wedge X_+$ is homologically $\rho$-connective, so it follows from Lemma \ref{lem:connectivity-and-tensor-products} (i) that $(S^l \wedge X_+)^{\otimes p}$ is homologically $\rho^{* p}$-connective, so $\rho$-connective (as $\rho$ is lax monoidal). If $\sfC$ is $\infty$-monoidal, then we have \[E_{k-l}(S^l \wedge X_+) = \bigvee_{p \geq 1} \cC_{k-l}(p) \times_{\fS_p} (S^l \wedge X_+)^{\otimes p},\] and it follows from the homotopy orbit spectral sequence as in Section \ref{sec:homotopy-orbit-ss} that this is also homologically $\rho$-connective, as required. If $\sfC$ is $2$-monoidal, one needs to replace $\cC_{k-l}(p)$ by $\cC^{\cat{FB}_2}_{k-l}(p)$ and the symmetric group $\fS_p$ by the braid group $\beta_p$. \end{proof} By doing a more careful analysis we can occasionally relax the condition $d < \rho(g)-l$; we give the following theorem as an example of a general type of argument. \begin{theorem} Let $\gR \in \Alg_{E_\infty}(\cat{sMod}_\bQ^\bN)$ be an $E_\infty$-algebra in $\bN$-graded simplicial $\bQ$-modules such that $H_{*,0}(\gR) = \bQ[\sigma]$ with $\vert \sigma \vert = (1,0)$. If $H^{E_k}_{g,d}(\gR)=0$ for $d < 2(g-1)$ then $H^{E_1}_{g,d}(\gR)=0$ for $d < \frac{3}{2}(g-1)$. \end{theorem} This does not follow from Theorem \ref{thm:TrfDown}, as the assumed vanishing range for $E_k$-homology is $d < (2g-1)-1$, but $\rho(g) = 2g-1$ does not satisfy $\rho*\rho \geq \rho$. \begin{proof} Firstly, by transferring vanishing lines up we may suppose that $H^{E_\infty}_{g,d}(\gR)=0$ for $d < 2(g-1)$. As in the proof of Theorem \ref{thm:TrfDown}, by filtering a suitable CW approximation of $\gR$ we can reduce to the case $\gR = \gE_\infty(X)$ with $X$ a wedge of spheres such that $H_{g,d}(X)=0$ for $d < 2(g-1)$ and $H_{1,0}(X) = \bQ\{\sigma\}$. We use the equivalences \[S^1 \wedge Q^{E_1}_\bL(\gE_\infty(X)) \simeq \tilde{B}^{E_1}(\gE_\infty(X)) \simeq E_\infty(S^1 \wedge X)\] from Theorems \ref{thm:BarHomologyIndec} and \ref{thm:CalcFree}. 
By F.\ Cohen's computations of the homology of free $E_k$-algebras, which shall be explained in Section \ref{sec:Cohen}, we have \[H_{*,*}(E_\infty(S^1 \wedge X)) \cong \Lambda_\bQ(H_{*,*}(S^1 \wedge X)),\] the free graded-commutative algebra on the rational homology of the suspension $S^1 \wedge X$, which we may write as $\Lambda_\bQ(\bQ\{s \sigma\}) \otimes A$ where $s\sigma$ has bidegree $(1,1)$ and $A$ is a free graded-commutative algebra on generators all of which have slope $\frac{d}{g}$ at least $\tfrac{3}{2}$, so $A$ is trivial in bidegrees $(g,d)$ with $d < \tfrac{3}{2}g$. It follows that $\smash{H^{E_1}_{g,d}}(\gR; \bQ)=0$ for $d < \frac{3}{2}(g-1)$ as required. \end{proof} \begin{remark}In fact, we may also do similar analyses with $\bF_p$-coefficients. Then we obtain the same vanishing range $H^{E_1}_{n,d}(\gR)=0$ for $d < \frac{3}{2}(n-1)$ as long as $p \geq 5$, and a lower range $d < \frac{4}{3}(n-1)$ for $p=3$. For $p=2$ one would need to know more information about the cell structures to improve upon Theorem \ref{thm:TrfUp}.\end{remark}
{"config": "arxiv", "file": "1805.07184/chap14.tex"}
TITLE: How to express the set of intersections between two ordered sets by selecting exactly one element per index? QUESTION [0 upvotes]: Given a set $\mathcal{S} = \{S_1, S_2, S_3\}$, two ordered sets can be produced: $\mathcal{S}^+$, where $S^+_i \in (S_1,S_2,S_3)$, and $\mathcal{S}^-$, where $S^-_i \in (U\setminus S_1, U\setminus S_2, U\setminus S_3)$, where $U$ is the universal set, i.e., $S^-_i$ is the complement of $S_i$. How can I succinctly define a function $P(S_1,S_2,S_3)$ that produces a set of intersections for all combinations between the sets $\mathcal{S}^+$ and $\mathcal{S}^-$, while selecting exactly 1 element for each index? In the above example, $P(S_1,S_2,S_3)$ would give: $\{S^-_1 \cap S^-_2 \cap S^-_3, \\ S^-_1 \cap S^-_2 \cap S^+_3, \\ S^-_1 \cap S^+_2 \cap S^-_3, \\ S^-_1 \cap S^+_2 \cap S^+_3, \\ S^+_1 \cap S^-_2 \cap S^-_3, \\ S^+_1 \cap S^-_2 \cap S^+_3, \\ S^+_1 \cap S^+_2 \cap S^-_3, \\ S^+_1 \cap S^+_2 \cap S^+_3\}$ So clearly, $|P(\mathcal{S})| = 2^{|\mathcal{S}|}$. The best I could come up with was: $P(S_1, S_2, ..., S_n) = \displaystyle\bigcap_{S_i \in \mathcal{S}^+ \text{ or } \mathcal{S}^-} S_i$ but that seems pretty clunky and also, I am not sure it stops both $S^+_i$ and $S^-_i$ being selected anyway. Is there an easy and concise way of writing this function? REPLY [0 votes]: The order of element sets in the set $\{S_1,S_2,S_3\}$ seems to be important to you, so it's better to use an indexed family $(S_i)_{i \in \{1,2,3\}}$ instead. You could index the intersections using functions $f: \{1,2,3\} \to \{0,1\}$; this set of functions is denoted by $2^I$ if $I$ is your index set $\{1,2,3\}$. I could denote the set of intersections as follows: $$I((S_i)_{i \in I}) = \{I_f((S_i)_{i \in I}): f \in 2^I\}$$ where $$I_f((S_i)_{i \in I}) = \{x \in U: \forall i \in I: (f(i) = 1) \iff (x \in S_i)\}$$ for any fixed $f \in 2^I$. 
I think this is reasonably concise (you could omit the $(S_i)_{i \in I}$ argument if they're fixed in some context, and just write $\{I_f: f \in 2^I\}$ etc.), it generalises to any size of indexed family, and it's also clear that the sets $I_f, f \in 2^I$ indeed form a partition of $U$ (important, if the choice of tags means anything): If $f \neq g$, they must differ on some index $i$, say WLOG $f(i)= 0$ and $g(i) = 1$, and then $I_f \subseteq U\setminus S_i$ and $I_g \subseteq S_i$ so $I_f \cap I_g =\emptyset$. And if $x \in U$, define $f_x: I \to \{0,1\}$ by $f_x(i) = 0$ if $x \notin S_i$, $f_x(i) = 1$ otherwise. Then clearly $x \in I_{f_x}$. If, as you say in the comments, you want to keep $S^+$ and $S^-$ why not use functions $f: I \to \{+,-\}$ instead? And then you can say $$I_f = \bigcap S_i^{f(i)}$$ to stay in the spirit of your proposal. As the codomain has $2$ elements (symbols), we still have $2^{|I|}$ many such functions.
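To make the construction concrete, here is a small sketch (the helper name `atoms` and the example family are mine, not from the question) that builds the $2^{|I|}$ intersections indexed by sign patterns $f \in \{0,1\}^I$; the resulting cells partition $U$, as the answer argues.

```python
from itertools import product

def atoms(universe, sets):
    """Return {f: I_f} for f in {0,1}^I, where I_f intersects S_i
    when f[i] == 1 and the complement U \ S_i when f[i] == 0."""
    cells = {}
    for f in product((0, 1), repeat=len(sets)):
        cell = set(universe)
        for bit, S in zip(f, sets):
            cell &= S if bit else (universe - S)
        cells[f] = cell
    return cells

U = set(range(10))
family = [{1, 2, 3}, {2, 3, 4}, {5}]
cells = atoms(U, family)
print(len(cells))        # 2^3 = 8 cells
print(cells[(1, 1, 0)])  # in S_1 and S_2 but not S_3: {2, 3}
```

The cells are pairwise disjoint and their union is $U$, which is exactly the partition property verified at the end of the answer.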
{"set_name": "stack_exchange", "score": 0, "question_id": 2837314}
TITLE: Pythagorean triplets QUESTION [6 upvotes]: Respected Mathematicians, For Pythagorean triplets $(a,b,c)$, if $c$ is odd then exactly one of $a$ and $b$ is odd. Here $(a, b, c)$ is a Pythagorean triplet with $c^2 = a^2 + b^2$. Now, I will consider $c = b + \Omega$. The reason for considering $c = b + \Omega$ is that $c$ is the hypotenuse of the right triangle and it is obviously larger than the other side $b$. Now, $$a^2 + b^2 = (b + \Omega)^2 = b^2 + 2b \Omega + \Omega ^2\qquad\qquad(1)$$ which is the same as $$b = [a^2 - \Omega ^2] \div 2\Omega.$$ This implies that $\Omega$ divides $a^2$, and for $(a^2 - \Omega ^2) \gt 0$, i.e. $(a - \Omega) (a + \Omega) \gt 0$, we need $$a \gt \Omega\qquad\qquad(2).$$ Now, I will consider $a = 2^m$; then $\Omega$ is also even. Otherwise, if $a = 2^m + 1$ then obviously $\Omega$ is odd. Now, I will consider the case where both $a$ and $\Omega$ are even numbers such that $a = 2^m$ and $\Omega = 2^r$ for some $m$ and $r$. By (2), we have $m \gt r$ and by (1), we have $$(2^m)^2 = 2^r (2b + 2^r)$$ or $$b = \frac{2^r}{2}\left((4^m \div 4^r) - 1\right)\qquad\qquad(3)$$ As I said earlier, $a$ and $\Omega$ are even, so $b$ should be an odd number, i.e., $r = 1$. Therefore, the required triplets for even numbers in powers of $2$ are $(2^m, (4^m \div 4) - 1, (4^m \div 4) + 1)$. Now my question is, how can one generalize the same for the following? Case 1: if we take odd numbers for powers of some prime. Case 2: if we take even numbers with prime powers. Thanking you, REPLY [0 votes]: First $A\ne 2^n,n\in\mathbb{N}$, because $A=2n+1.\quad$ What you are seeing is $2\mid (A-\Omega)^2$ where $\Omega=c-b=(2n-1)^2$ and $b=4n, n\in\mathbb{N}.$ All primitive triples are generated by a formula I developed in $2009$: \begin{align*} A=(2n-1)^2+ & 2(2n-1)k \\ B= \qquad\quad\quad & 2(2n-1)k+ 2k^2\\ C=(2n-1)^2+ & 2(2n-1)k+ 2k^2\\ \end{align*} and it is easy to see that $ C=(b+\Omega)\implies \Omega = (2n-1)^2$ e.g. 
$$(3,4,5)\rightarrow \Omega=1^2, \quad \dfrac{3^2-1^2}{2\cdot1}=4\\ (15,8,17)\rightarrow \Omega=3^2,\quad \dfrac{15^2-9^2}{2\cdot 9}=8\\ (35,12,37)\rightarrow \Omega=5^2, \quad \dfrac{35^2-25^2}{2\cdot 25}=12\\$$ You can explore these combinations more in this sample generated by the formula above. \begin{array}{c|c|c|c|c|c|} n & k=1 & k=2 & k=3 & k=4 & k=5 \\ \hline Set_1 & 3,4,5 & 5,12,13& 7,24,25& 9,40,41& 11,60,61 \\ \hline Set_2 & 15,8,17 & 21,20,29 &27,36,45 &33,56,65 & 39,80,89 \\ \hline Set_3 & 35,12,37 & 45,28,53 &55,48,73 &65,72,97 & 75,100,125\\ \hline Set_{4} &63,16,65 &77,36,85 &91,60,109 &105,88,137 &119,120,169 \\ \hline Set_{5} &99,20,101 &117,44,125 &135,72,153 &153,104,185 &171,140,221 \\ \hline Set_{6} &143,24,145 &165,52,173 &187,84,205 &209,120,241 &231,160,281 \\ \hline \end{array}
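The quoted formula is easy to check mechanically. This short sketch (function name mine) regenerates the table rows and verifies both the Pythagorean identity and the claim $\Omega = C - B = (2n-1)^2$:

```python
def triple(n, k):
    """Triple from the answer's 2009 formula, with m = 2n - 1."""
    m = 2*n - 1
    A = m*m + 2*m*k
    B = 2*m*k + 2*k*k
    C = m*m + 2*m*k + 2*k*k
    return A, B, C

for n in range(1, 7):
    row = [triple(n, k) for k in range(1, 6)]
    for A, B, C in row:
        assert A*A + B*B == C*C          # Pythagorean identity
        assert C - B == (2*n - 1)**2     # Omega = c - b = (2n-1)^2
    print(row)
```

For instance `triple(1, 1)` gives `(3, 4, 5)` and `triple(6, 1)` gives `(143, 24, 145)`, matching the first column of the table.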
{"set_name": "stack_exchange", "score": 6, "question_id": 101550}
TITLE: tangent space at origin of a variety QUESTION [2 upvotes]: Could anyone explain to me how to show that the tangent space at the origin of the variety $V=\mathbb{V}(y^2-x^3)$ is equal to the full affine plane? They have defined $l$ to be a tangent line at $p$ if the multiplicity of $l\cap V$ at $p$ exceeds one. The tangent space $T_p(V)$ of $V$ at $p$ is the union of all points lying on the lines tangent to $V$ at $p$. REPLY [2 votes]: In general, if you have a plane curve $V = \mathbb{V}(F(x,y)) \subset \mathbb{A}^2$, and you want to compute the multiplicity of the intersection of $V$ with a line $l$ through $p = (0,0)$, you proceed as follows: First, you can parameterize your line $l$ as the set of points $\{(at,bt) : t \in \mathbb{K} \}$ for some choice of $(a,b) \in \mathbb{K}^2$, with not both $a$ and $b$ equal to zero. Here I am using $\mathbb{K}$ to denote the field of definition of $V$. (The choice of $(a,b)$ is well-defined only up to rescaling -- the lines through $p$ correspond to points of $\mathbb{P}^1$. But you'll see that all the calculations below come out the same if you replace $(a,b)$ with $(\lambda a, \lambda b)$ for any $\lambda \in \mathbb{K}^*$.) The restriction of the polynomial function $F$ to $l$ is given by $F(at,bt)$, which will be an element of $\mathbb{K}[t]$. By definition, the multiplicity of $l \cap V$ at $p = (0,0)$ is the order of vanishing of $t$ in $F(at,bt)$, i.e. the maximal number of powers of $t$ that can be factored out of $F(at,bt)$. Note that the multiplicity is $0$ if and only if $p$ does not lie on $V$. For your particular example, if you follow this procedure, you'll see that every line but one intersects $V$ with multiplicity 2, while one line intersects $V$ with multiplicity $3$. Something you should think about: what is that line and how does it relate to the graph of $y^2 = x^3$ in $\mathbb{R}^2$?
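The procedure in this answer is easy to mechanise. The sketch below (function and variable names are mine) stores $F$ as a dictionary of monomials, substitutes $(at, bt)$, and reads off the order of vanishing in $t$; for the cusp $F = y^2 - x^3$ every direction $(a,b)$ gives multiplicity at least $2$, which is exactly why $T_0(V)$ is the whole plane.

```python
def intersection_multiplicity(F, a, b):
    """Order of vanishing at t = 0 of F(a*t, b*t), where F is a dict
    {(i, j): c} encoding the polynomial  sum of c * x^i * y^j."""
    deg = max(i + j for i, j in F)
    coeffs = [0] * (deg + 1)            # coefficients of F(a*t, b*t) in t
    for (i, j), c in F.items():
        coeffs[i + j] += c * a**i * b**j
    for order, c in enumerate(coeffs):
        if c != 0:
            return order
    return float("inf")                 # the whole line lies on the curve

cusp = {(0, 2): 1, (3, 0): -1}          # F = y^2 - x^3
print(intersection_multiplicity(cusp, 1, 0))   # the line y = 0: multiplicity 3
print(intersection_multiplicity(cusp, 1, 1))   # a generic line: multiplicity 2
```

The special line of multiplicity $3$ is $y = 0$, the direction $(a,b) = (1,0)$ along which the cusp points.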
{"set_name": "stack_exchange", "score": 2, "question_id": 143948}
TITLE: ODE compartmental model: waiting time QUESTION [1 upvotes]: Here is an ODE compartmental model made of 3 equations: $\frac{dX}{dt}=-\alpha X$ $\frac{dY}{dt}=\alpha X-\beta Y$ $\frac{dZ}{dt}=\beta Y$ $X$, $Y$, $Z$ represent, in my case, the total number of people in the state/compartment/case $X$, $Y$ or $Z$ (it could be an SIR model) and we assume that each individual can move from $X$ to $Y$ and from $Y$ to $Z$ with the respective rates $\alpha$ and $\beta$. Let $w_{XY}$ be the average waiting time for an individual in $X$ before moving to $Y$. This waiting time is often calculated/estimated as $w_{XY}=1/\alpha$. First question: Is it an approximation or is it the exact result? How is it obtained? Should we make more assumptions to find it? Second question: Let's assume now that there are more exits leaving $X$. Will the waiting time $w_{XY}$ still be $w_{XY}=1/\alpha$? Again, how is it obtained? REPLY [0 votes]: Here's the kicker: we're operating under the assumption that the time spent in compartment $X$ is exponentially distributed, and that its rate of change (probability of change per unit time) is $\alpha.$ Its probability density function is $$f(t)=\begin{cases}\alpha e^{-\alpha t}&t\ge0,\\0&t<0.\end{cases}$$ This means that the probability of a member of that compartment leaving prior to time $T$ will be $$\int_{-\infty}^Tf(t)\,dt$$ in general. For positive $T,$ this will be $$\int_0^T\alpha e^{-\alpha t}\,dt=\left[-e^{-\alpha t}\right]_{t=0}^T=1-e^{-\alpha T},$$ but it will otherwise be $0.$ To find the average waiting time, we need the expected value: $$\int_{-\infty}^\infty tf(t)\,dt=\int_0^\infty\alpha t e^{-\alpha t}\,dt=\left[-\frac1\alpha(\alpha t+1)e^{-\alpha t}\right]_{t=0}^\infty=\frac1\alpha.$$ Now, if we're talking about a "population" of a compartment, we're probably dealing with a discrete case, in actuality, so this should be considered an approximation. If there are more exits, I'm not sure $w_{XY}$ even makes sense.
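The expected value computed above is easy to confirm by simulation, and the same experiment speaks to the second question. In the sketch below (rates and sample sizes are arbitrary choices of mine), a single exit at rate $\alpha$ gives a mean holding time of $1/\alpha$, while adding a second, competing exit at rate $\gamma$ makes the holding time the minimum of two exponentials, i.e. $\mathrm{Exp}(\alpha+\gamma)$ with mean $1/(\alpha+\gamma)$.

```python
import random

random.seed(0)
N = 200_000
alpha, gamma = 0.5, 1.5   # illustrative exit rates X -> Y and X -> elsewhere

# Single exit: waiting time in X is Exp(alpha), so the mean is 1/alpha.
mean_single = sum(random.expovariate(alpha) for _ in range(N)) / N

# Two competing exits: the holding time is min of two independent
# exponentials, which is Exp(alpha + gamma), with mean 1/(alpha + gamma).
mean_competing = sum(min(random.expovariate(alpha), random.expovariate(gamma))
                     for _ in range(N)) / N

print(mean_single, 1 / alpha)                # both close to 2.0
print(mean_competing, 1 / (alpha + gamma))   # both close to 0.5
```

So with extra exits, $1/\alpha$ is no longer the time spent in $X$; the mean holding time drops to one over the total exit rate.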
{"set_name": "stack_exchange", "score": 1, "question_id": 2540988}
TITLE: What formula do I use for factoring these? QUESTION [0 upvotes]: An elementary question, but I am having a lot of difficulty identifying the correct formula to use; I can do more complex ones but not the simple ones, if that makes sense. a) $8x^3 + 1$ b) $m^2 - 100n^2$ Thank you, regards. REPLY [1 votes]: $\!\begin{eqnarray} {\bf Hint}\ \ \ \color{#c00}m - \color{#0a0}{10n}\!&&\mid\, \color{#c00}m^2 -\, (\color{#0a0}{10n})^2\\ {\rm and}\ \ \ \color{#c00}{2x}\!-\!(\color{#0a0}{-1})\!&&\mid (\color{#c00}{2x})^3\!-(\color{#0a0}{-1})^3\ \ \text{by the Factor Theorem.}\end{eqnarray}$
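If it helps to check answers against the hint, both factorizations (difference of squares and sum of cubes) can be verified with sympy, assuming it is available:

```python
# Verifying both factorizations with sympy: sum of cubes and
# difference of squares.
from sympy import symbols, factor, expand

x, m, n = symbols('x m n')

f1 = factor(8*x**3 + 1)       # expect (2*x + 1)*(4*x**2 - 2*x + 1)
f2 = factor(m**2 - 100*n**2)  # expect (m - 10*n)*(m + 10*n)
print(f1)
print(f2)
```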
{"set_name": "stack_exchange", "score": 0, "question_id": 731959}
TITLE: Asymptotic bound on Fourier coefficients of an absolutely continuous function QUESTION [1 upvotes]: Let $f$ be absolutely continuous. Prove that $\hat{f}(n)=o\left(\frac{1}{n}\right)$. Any hint will be appreciated, thanks. REPLY [1 votes]: Hint: Absolutely continuous means that $f'\in L^1$, so the Riemann–Lebesgue lemma tells you something about the decay of $\widehat{f'}$. Then lift that information to $\hat{f}$ via integration by parts, i.e. $\widehat{f'}(n)=in\hat{f}(n)$.
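Not a proof, but a numerical illustration of the contrast the hint points at, using my own example functions: the absolutely continuous $f(x)=|x|$ on $[-\pi,\pi]$ has $n\hat f(n)\to0$, while for the sign function (a jump at $0$) the quantity $n\hat f(n)$ does not decay. Odd $n$ are used because the even coefficients of $|x|$ vanish.

```python
# n * fhat(n) for an absolutely continuous function (|x|) vs. a
# discontinuous one (sign), computed by quadrature on [-pi, pi].
import numpy as np

x = np.linspace(-np.pi, np.pi, 200_001)
dx = x[1] - x[0]

def fourier_coeff(f, n):
    y = f * np.exp(-1j * n * x)
    # trapezoidal rule, written out to avoid version-specific numpy names
    return np.sum((y[:-1] + y[1:]) / 2) * dx / (2 * np.pi)

tri = np.abs(x)   # absolutely continuous
sq = np.sign(x)   # jump discontinuity at 0

for n in (9, 99, 999):
    print(n, abs(n * fourier_coeff(tri, n)), abs(n * fourier_coeff(sq, n)))
```

For $|x|$ the printed values shrink like $2/(\pi n)$, while for the sign function they sit near the constant $2/\pi$.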
{"set_name": "stack_exchange", "score": 1, "question_id": 90197}
TITLE: Given that $\cos\left(\dfrac{2\pi m}{n}\right) \in \mathbb{Q}$ prove $\cos\left(\dfrac{2\pi}{n}\right) \in \mathbb{Q}$ QUESTION [4 upvotes]: Given that $\cos\left(\dfrac{2\pi m}{n}\right) \in \mathbb{Q}$, $\gcd(m,n) = 1$, $m \in \mathbb{Z}, \, n \in \mathbb{N}$ prove that $\cos\left(\dfrac{2\pi}{n}\right) \in \mathbb{Q}.$ I know nothing about how to attack the problem. I believe I need to suppose that $\cos\left(\dfrac{2\pi}{n}\right) \not \in \mathbb{Q}$ and somehow show that $\cos\left(\dfrac{2\pi m}{n}\right) \not \in \mathbb{Q}$, which would give a contradiction. Could you give me some hints? REPLY [0 votes]: Let $\zeta=\exp(2\pi i/n)$ be a primitive $n$-th root of unity. You know that $\Re(\zeta^m)\in\mathbb Q$ and you want to show that $\Re(\zeta)\in\mathbb Q$. Since $m$ and $n$ are coprime, $\zeta^m$ is a primitive $n$-th root of unity as well. This implies that $(\zeta^m)^k=\zeta$ for some $k\in\mathbb N$. So it is enough (in fact something stronger) to prove that for any number $\zeta \in \mathbb C$ with $|\zeta|=1$ and $\Re(\zeta) \in \mathbb Q$ you have $\Re(\zeta^n) \in \mathbb Q$ for all $n \in \mathbb N$. Writing $\zeta=x+iy$, we have $$\zeta^n=(x+iy)^n = \sum_{k=0}^n {{n}\choose{k}} x^{n-k} (iy)^k $$ The real part of this is obtained by keeping only the terms with even $k$. But those terms contain an even power of $y$, and $y^2=1-x^2 \in \mathbb Q$, so everything is in $\mathbb Q$.
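A numerical sanity check of the key step, phrased via Chebyshev polynomials (an equivalent way of saying that $\Re(\zeta^k)$ is an integer polynomial in $\Re(\zeta)$ when $|\zeta|=1$): picking $k$ with $mk\equiv 1 \pmod n$ gives $\cos(2\pi/n)=T_k(\cos(2\pi m/n))$, and since $T_k$ has integer coefficients, rationality transfers. The values $m=3$, $n=7$ are my own example.

```python
# With m*k ≡ 1 (mod n): cos(2*pi/n) = T_k(cos(2*pi*m/n)), where T_k is
# the k-th Chebyshev polynomial of the first kind (integer coefficients).
import math
from numpy.polynomial import chebyshev as C

m, n = 3, 7                  # an arbitrary coprime example
k = pow(m, -1, n)            # modular inverse, so m*k ≡ 1 (mod n)
T_k = C.Chebyshev.basis(k)   # k-th Chebyshev polynomial

lhs = math.cos(2 * math.pi / n)
rhs = T_k(math.cos(2 * math.pi * m / n))
print(k, lhs, rhs)
```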
{"set_name": "stack_exchange", "score": 4, "question_id": 4023263}
TITLE: PM is supersolvable group QUESTION [1 upvotes]: $G$ is a finite group, $G = PM$, where $P$ is a Sylow $p$-subgroup of $G$, $p$ is the largest prime dividing the order of $G$, $P$ is normal in $G$, $M$ is a maximal subgroup of $G$, $M$ is supersolvable and $|G/M| = p$. Is $G$ supersolvable? (The group $G$ is said to be supersolvable if $G$ has a normal series $0=G_0⊴G_1⊴⋯G_{n−1}⊴G_n=G$ such that $G_i/G_{i−1}$ is cyclic for $i\in\{1,2,…,n\}$.) Thanks REPLY [1 votes]: I don't think you need to assume that $M \lhd G$, just that the index $|G:M|=p$. For a finite group, being supersolvable is equivalent to the chief factors all being cyclic of prime order. Note that $|P:P \cap M| = p$, so $P \cap M \lhd P$ and $P \cap M \lhd M$ (since $P \lhd G$), so $P \cap M \lhd G$. The terms in the lower central series of $P \cap M$ are characteristic and hence normal in $G$. Refine this series to a chief series of $M$. Then, since $M$ is supersolvable, all factors in this series have prime order. The terms in the series that are contained in $P \cap M$ are all normal in $P$, so they are normal in $G$. Now $P/P\cap M$ has order $p$, so it is cyclic. And $G/P \cong M/(M \cap P)$ is supersolvable, so it has a normal series with cyclic factors. So now we have built up a normal series of $G$ with cyclic factors, and hence $G$ is supersolvable. But I don't seem to have used the fact that $p$ is the largest prime dividing $|G|$, so perhaps I have a mistake somewhere!!!
{"set_name": "stack_exchange", "score": 1, "question_id": 501114}
TITLE: $X$ is the vector space $C[0,1]$ with the norm $\|f\|_1=\int_0^1|f(t)|dt$, and $M=\{f\in X:f(0)=0\}$, show that $M$ is not closed. QUESTION [1 upvotes]: Here is my question: Let $X$ be the vector space $C[0,1]$ with the norm $\|f\|_1=\int_0^1|f(t)|dt$. Let $$M=\{f\in X:f(0)=0\}$$ Show that $M$ is not closed. Show that the “quotient norm” $\inf\{\|f-m\|_1:m\in M\}$ is not a norm on $X/M$. Here is what I have: For the closure, I am trying to find a function $f$ and a sequence $\{f_n\}\subset M$ which converges to $f$, but with $f\notin M$. This would mean that for any $n$, $f_n(0)=0$ but $f(0)\neq 0$. I cannot seem to find such an $f_n$ and $f$. As for the quotient norm not being a norm on $X/M$, I believe this follows from $M$ not being closed, as with quotient norms, $\|x-M\|=0$ if and only if $x\in M$. So any suggestions on finding the $f$? Thanks. REPLY [0 votes]: As for the "quotient norm", have a look at mookid's counter-example and see if you can modify it to force a violation of one of the norm axioms. I'd aim to show that $$\| [f] \|_{\hbox{am I a norm?}} = 0 \ \ \ \Longrightarrow \ \ \ [f] = [0]$$ is violated.
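One concrete choice of the sequence the question asks for (my own example, not from the thread) is $f_n(t)=\min(1,nt)$: each $f_n$ is continuous with $f_n(0)=0$, and $\|f_n-1\|_1=1/(2n)\to 0$, yet the limit, the constant function $1$, is not in $M$. A quick numerical check:

```python
# Witness for non-closedness: f_n(t) = min(1, n*t) has f_n(0) = 0, yet
# converges in the L1 norm to the constant function 1, which is not in M.
import numpy as np

t = np.linspace(0.0, 1.0, 100_001)
dt = t[1] - t[0]
one = np.ones_like(t)

def l1_dist(g, h):
    y = np.abs(g - h)
    return np.sum((y[:-1] + y[1:]) / 2) * dt  # trapezoidal rule

for n in (10, 100, 1000):
    f_n = np.minimum(1.0, n * t)
    print(n, f_n[0], l1_dist(f_n, one))  # the distance is exactly 1/(2n)
```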
{"set_name": "stack_exchange", "score": 1, "question_id": 999293}
TITLE: How can I prove that the 2nd Bergman space is a Hilbert space? QUESTION [0 upvotes]: We consider the analytic functions on an open set $U\subset\mathbb{C}$ which are also in $L^2(U)$. I've found several posts on this topic, but all of them refer to classical complex analysis results. It's obvious that the Bergman space ($L_a^2$) is a linear manifold, so it suffices to prove the closedness of $L_a^2(U)$ in $L^2(U)$. According to Wikipedia: This is a consequence of the estimate, valid on compact subsets K of D, that $$\sup\limits_{z\in K}\vert f(z)\vert \le C_K \Vert f \Vert_2,$$ which in turn follows from Cauchy's integral formula. Thus convergence of a sequence of holomorphic functions in $L^2(D)$ implies also compact convergence, and so the limit function is also holomorphic. I can prove neither the inequality nor the implication that compact convergence yields closedness of $L_a^2(U)$. Please help!!! REPLY [1 votes]: The value of a holomorphic function at a point is the average of its values over a ball around it. This shows that the supremum norm is bounded by the $L^1$ norm and, a fortiori, by the $L^2$ norm. (You need to fill in the gaps, using that you only need the bound on compact subsets.) For the second question, all you need to show is that the $L^2$ limit of holomorphic functions is holomorphic again. This holds because Cauchy's formula is sufficient for holomorphy, and you can easily show that the Cauchy formula holds for the $L^\infty$ limit. (Note from before that you have supremum bounds even for $L^2$ convergent functions.)
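The mean-value property the answer starts from can be seen numerically; a small sketch with the entire function $e^z$ as an example (the center $z_0$ and radius are arbitrary choices): averaging over a circle reproduces the center value, which is the mechanism behind the pointwise bound by integral norms.

```python
# Mean-value property: for holomorphic f, the average of f over a circle
# centred at z0 equals f(z0). Here f = exp, z0 and r are arbitrary.
import numpy as np

def circle_average(f, z0, r, k=4096):
    theta = 2 * np.pi * np.arange(k) / k
    return f(z0 + r * np.exp(1j * theta)).mean()

z0 = 0.3 + 0.2j
avg = circle_average(np.exp, z0, 0.5)
print(avg, np.exp(z0))  # the two agree to near machine precision
```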
{"set_name": "stack_exchange", "score": 0, "question_id": 1922611}
TITLE: Is "False" in logic analogous to "Null set" in set theory? QUESTION [4 upvotes]: I have been doing proofs in elementary set theory, and so far, just using definitions (like below) and applying propositional logic has sufficed. A ⋃ B = e ∈ A ∨ e ∈ B A ⊂ B = e ∈ A ⟹ e ∈ B A' = e ∉ A = ¬(e ∈ A) So the proofs are as follows: Convert set theory operations to their "logical" definitions Shuffle the symbols using logical identities Convert back from logic land to set theory land Here is my question: Is the logical analogue of the null set - Ø - the boolean false? Is the logical analogue of the universal set - U - the boolean true? More formally, are these definitions correct? Ø = {e | false} U = {e | true} Here's my proof for: A ⊂ B ⟹ A ⋂ B’ = Ø, for example, where I use false for Ø: A ⊂ B ⟹ A ⋂ B’ = Ø ≡ {Definition of Set Intersection and Subset, Definition of Ø} [e ∈ A ⟹ e ∈ B] ⟹ [e ∈ A ∧ e ∈ B’ = false] ≡ {Exportation: A ⟹ [B ⟹ C] ≡ [A ∧ B] ⟹ C} [e ∈ A ∧ e ∈ B] ⟹ [e ∈ A ∧ e ∈ B’ = false] Context 1. e ∈ A Context 2. e ∈ B e ∈ A ∧ e ∈ B’ ≡ {Context 1} e ∈ B’ ≡ {Definition of ‘} ¬(e ∈ B) ≡ {Context 2, Contradiction} false ≡ {Definition of Ø} Ø Is the use of false for Ø valid in the proof above? REPLY [5 votes]: You're basically right, but I'll flesh out a pedantic point. Identifying a set $S$ with the unary predicate $\varphi$ for which $\forall e(e\in S\iff\varphi(e))$, the set $\emptyset$ is identified with the unary $\varphi$ that always returns "false", not with "false" itself. (This is like confusing a constant function with the value it returns; it's a subtle distinction, but an easy one to make if e.g. you define a function as a certain kind of set of ordered pairs.) The usual choice for an explicit statement of this $\varphi$ is that $\varphi(e)$ iff $e\neq e$.
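The pedantic point translates directly into code: the empty set is the *extension* of the always-false predicate, not the value false itself. A small Python illustration over a stand-in finite universe (the universe of size 10 is my own choice):

```python
# The empty set as the extension of the always-false predicate, and the
# universal set as the extension of the always-true one.
universe = set(range(10))

def extension(pred):
    """The set identified with a unary predicate via membership."""
    return {e for e in universe if pred(e)}

empty = extension(lambda e: False)
full = extension(lambda e: True)
print(empty, full == universe)  # set() True
```

Note that `lambda e: False` and `False` are different objects here, mirroring the function-versus-value distinction in the answer.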
{"set_name": "stack_exchange", "score": 4, "question_id": 3664146}
TITLE: Calculation of heat flux on a surface QUESTION [0 upvotes]: I have a basic question about calculating heat flux applied to a surface. Suppose you have a solid cylinder that has a height of 1 $\text{cm}$ and a radius of 1 $\text{cm}$, giving it a lateral surface area of 2π $\text{cm}^2$. You take a heat tape device that has the dimensions 1 $\text{cm}$ by 24 $\text{cm}$ and outputs 144W. The surface power density is then 6 W/$\text{cm}^2$. You then wrap the heat tape around the cylinder. Note that, since the surface area of the heat tape is larger than the lateral surface area of the cylinder, only part of the heat tape will be in contact with the cylinder. What would be the heat flux acting upon the side of the cylinder? Is it still 6 W/$\text{cm}^2$, or does it change since not all of the heat tape is in contact with the cylinder? REPLY [1 votes]: This constitutes a nontrivial transient heat transfer problem. You cannot assume that the heat flux of 6 W/cm² is somehow always directed inward toward the cylinder. In fact, over time, less and less heat will flow inward, as the cylinder will asymptotically reach an equilibrium temperature such that all 6 W/cm² is directed outward and is dissipated through convection or radiation, for example. It's essential to estimate (and, if you wish, try to control) heat losses from convection and radiation here, as these will govern the temperature of the cylinder over time. 
Ignoring the loose tape and thus assuming axisymmetry, and performing an energy balance, we can write $$\frac{\alpha}{r}\frac{\partial }{\partial r}\left(r\frac{\partial T(r,t)}{\partial r}\right)=\frac{\partial T(r,t)}{\partial t}$$ within the cylinder (applying the Laplacian in polar coordinates), where $\alpha$ is the cylinder thermal diffusivity, and $$-k\frac{dT}{dr}+q^{\prime\prime}-h(T-T_\infty)-\sigma\epsilon(T^4-T_\infty^4)=0$$ at the cylinder/tape surface (assuming the tape has negligible thickness), where $k$ is the cylinder thermal conductivity, $q^{\prime\prime}=6 \mathrm{W/cm}^2$, $h$ is the convective coefficient, $\sigma$ is the Stefan–Boltzmann constant, $\epsilon$ is the tape surface emissivity, and $T_\infty$ is the ambient temperature. (Possibly the last term on the left side—the radiative term—can be considered negligible.) One could derive an analytical equation for the flux and temperature at very short times, when the heat transfer into the cylinder can be idealized as heat transfer into a semi-infinite body. From Incropera & DeWitt's Fundamentals of Heat and Mass Transfer, this is $$T(x,t)=T_\infty+\frac{2q^{\prime\prime}(\alpha t/\pi)^{1/2}}{k}\exp\left(\frac{-x^2}{4\alpha t}\right)-\frac{q^{\prime\prime}x}{k}\mathrm{erfc}\left(\frac{x}{2\sqrt{\alpha t}}\right),$$ where $x$ is the depth relative to the surface and $\mathrm{erfc}$ is the complementary error function. Or, one could estimate an equilibrium temperature knowing the dissipated power and the heat-loss mechanisms. Or, one could do a more detailed spatial and temporal study at intermediate times using finite element analysis, for instance, which would also provide predictions of the internal cylinder temperature and its variation. All of this is independent of the fact that you have additional loose tape unwinding adjacent to the cylinder that's also emitting 6 W/cm². This aspect just adds an additional complication. Does this all make sense?
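For what it's worth, the quoted semi-infinite-body solution is easy to evaluate numerically; a sketch below, with property values that are illustrative guesses (roughly steel), not from the question, and with $q''=6\ \mathrm{W/cm^2}$ expressed as $6\times10^4\ \mathrm{W/m^2}$:

```python
# Evaluating the quoted semi-infinite-body solution. Property values are
# illustrative guesses, not from the question.
import math

def T_semi_infinite(x, t, q=6e4, k=50.0, alpha=1.5e-5, T_inf=293.0):
    """Temperature (K) at depth x (m), time t (s), under constant surface flux q (W/m^2)."""
    a = 2 * q * math.sqrt(alpha * t / math.pi) / k * math.exp(-x**2 / (4 * alpha * t))
    b = q * x / k * math.erfc(x / (2 * math.sqrt(alpha * t)))
    return T_inf + a - b

# At the surface (x = 0) this reduces to T_inf + 2*q*sqrt(alpha*t/pi)/k,
# so the surface temperature initially grows like sqrt(t).
for t in (1.0, 10.0, 100.0):
    print(t, T_semi_infinite(0.0, t))
```

This matches the short-time picture in the answer: early on, essentially all the flux goes inward and the surface temperature climbs, until convective and radiative losses (not modeled in this formula) take over.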
{"set_name": "stack_exchange", "score": 0, "question_id": 652102}
TITLE: Approximation Property: Characterization QUESTION [3 upvotes]: As a reference, the German wiki: Approximationseigenschaft. Problem: Given a Banach space, suppose it has the approximation property: $$C\in\mathcal{C}:\quad\|T_N-1\|_C\to0\quad(T_N\in\mathcal{F}(E))$$ Then every compact operator is of almost finite rank: $$\overline{\mathcal{F}(X,E)}=\mathcal{C}(X,E)\subseteq\mathcal{B}(X,E)$$ How do I prove this actual equivalence? Attempt: As the image of the unit ball is precompact, one has: $$C\in\mathcal{C}(X,E):\quad\|T_NC-C\|=\|T_N-1\|_{C(B)}\to0\quad(T_NC\in\mathcal{F}(X,E))$$ For the converse one might try to smuggle in a compact operator: $$C\subseteq rB:\quad\|T_N-1\|_C\leq r\|T_N-C\|_B+\|C-1\|_C<r\delta_T+\delta_C\quad(C\in\mathcal{C}(E))$$ But how to construct one that approximates the identity? REPLY [2 votes]: This is a nontrivial result by Grothendieck! (See Lindenstrauss & Tzafriri, Theorem 1.e.4, Volume I.)
{"set_name": "stack_exchange", "score": 3, "question_id": 1122760}
\begin{document} \maketitle \markboth{Florin Diacu and Ernesto P\'erez-Chavela}{Homographic solutions of the curved $3$-body problem} \author{\begin{center} Florin Diacu\\ \smallskip {\footnotesize Pacific Institute for the Mathematical Sciences\\ and\\ Department of Mathematics and Statistics\\ University of Victoria\\ P.O.~Box 3060 STN CSC\\ Victoria, BC, Canada, V8W 3R4\\ diacu@math.uvic.ca\\ }\end{center} \begin{center} and \end{center} \begin{center} Ernesto P\'erez-Chavela\\ \smallskip {\footnotesize Departamento de Matem\'aticas\\ Universidad Aut\'onoma Metropolitana-Iztapalapa\\ Apdo.\ 55534, M\'exico, D.F., M\'exico\\ epc@xanum.uam.mx\\ }\end{center} } \vskip0.5cm \begin{center} \today \end{center} \begin{abstract} In the 2-dimensional curved $3$-body problem, we prove the existence of Lagrangian and Eulerian homographic orbits, and provide their complete classification in the case of equal masses. We also show that the only non-homothetic hyperbolic Eulerian solutions are the hyperbolic Eulerian relative equilibria, a result that proves their instability. \end{abstract} \section{Introduction} We consider the $3$-body problem in spaces of constant curvature ($\kappa\ne 0$), which we will call {\it the curved $3$-body problem}, to distinguish it from its classical Euclidean ($\kappa=0$) analogue. The study of this problem might help us understand the nature of the physical space. Gauss allegedly tried to determine the nature of space by measuring the angles of a triangle formed by the peaks of three mountains. Even if the goal of his topographic measurements was different from what anecdotical history attributes to him (see \cite{Mill}), this method of deciding the nature of space remains valid for astronomical distances. But since we cannot measure the angles of cosmic triangles, we could alternatively check whether specific (potentially observable) motions of celestial bodies occur in spaces of negative, zero, or positive curvature, respectively. 
In \cite{Diacu1}, we showed that while Lagrangian orbits (rotating equilateral triangles having the bodies at their vertices) of non-equal masses are known to occur for $\kappa=0$, they must have equal masses for $\kappa\ne 0$. Since Lagrangian solutions of non-equal masses exist in our solar system (for example, the triangle formed by the Sun, Jupiter, and the Trojan asteroids), we can conclude that, if assumed to have constant curvature, the physical space is Euclidean for distances of the order $10^1$ AU. The discovery of new orbits of the curved $3$-body problem, as defined here in the spirit of an old tradition, might help us extend our understanding of space to larger scales. This tradition started in the 1830s, when Bolyai and Lobachevsky proposed a {\it curved 2-body problem}, which was broadly studied (see most of the 77 references in \cite{Diacu1}). But until recently nobody extended the problem beyond two bodies. The newest results occur in \cite{Diacu1}, a paper in which we obtained a unified framework that offers the equations of motion of the {\it curved $n$-body problem} for any $n\ge 2$ and $\kappa\ne 0$. We also proved the existence of several classes of {\it relative equilibria}, including the Lagrangian orbits mentioned above. Relative equilibria are orbits for which the configuration of the system remains congruent with itself for all time, i.e.\ the distances between any two bodies are constant during the motion. So far, the only other existing paper on the curved $n$-body problem, treated in a unified context, deals with singularities, \cite{Diacu1bis}, a subject we will not approach here. But relative equilibria can be put in a broader perspective. They are also the object of Saari's conjecture (see \cite{Saari}, \cite{Diacu2}), which we partially solved for the curved $n$-body problem, \cite{Diacu1}. 
Saari's conjecture has recently generated a lot of interest in classical celestial mechanics (see the references in \cite{Diacu2}, \cite{Diacu3}) and is still unsolved for $n>3$. Moreover, it led to the formulation of Saari's homographic conjecture, \cite{Saari}, \cite{Diacu3}, a problem that is directly related to the purpose of this research. We study here certain solutions that are more general than relative equilibria, namely orbits for which the configuration of the system remains similar to itself. In this class of solutions, the relative distances between particles may change proportionally during the motion, i.e.\ the size of the system could vary, though its shape remains the same. We will call these solutions {\it homographic}, in agreement with the classical terminology, \cite{Win}. In the classical Newtonian case, \cite{Win}, as well as in more general classical contexts, \cite{Diacu0}, the standard concept for understanding homographic solutions is that of {\it central configuration}. This notion, however, seems to have no meaningful analogue in spaces of constant curvature, therefore we had to come up with a new approach. Unlike in Euclidean space, homographic orbits are not planar, unless they are relative equilibria. In the case $\kappa>0$, for instance, the intersection between a plane and a sphere is a circle, but the configuration of a solution confined to a circle cannot expand or contract and remain similar to itself. Therefore the study of homographic solutions that are not relative equilibria is apparently more complicated than in the classical case, in which all homographic orbits are planar. We focus here on three types of homographic solutions. The solutions of the first type, which we call Lagrangian, form an equilateral triangle at every time instant. We ask that the plane of this triangle be always orthogonal to the rotation axis. 
This assumption seems to be natural because, as proved in \cite{Diacu1}, Lagrangian relative equilibria, which are particular homographic Lagrangian orbits, obey this property. We prove the existence of homographic Lagrangian orbits in Section 3, and provide their complete classification in the case of equal masses in Section 4, for $\kappa>0$, and Section 5, for $\kappa<0$. Moreover, we show in Section 6 that Lagrangian solutions with non-equal masses don't exist. We then study another type of homographic solutions of the curved $3$-body problem, which we call Eulerian, in analogy with the classical case that refers to bodies confined to a rotating straight line. At every time instant, the bodies of an Eulerian homographic orbit are on a (possibly) rotating geodesic. In Section 7 we prove the existence of these orbits. Moreover, for equal masses, we provide their complete classification in Section 8, for $\kappa>0$, and Section 9, for $\kappa<0$. Finally, in Section 10, we discuss the existence of hyperbolic homographic solutions, which occur only for negative curvature. We prove that when the bodies are on the same hyperbolically rotating geodesic, a class of solutions we call hyperbolic Eulerian, every orbit is a hyperbolic Eulerian relative equilibrium. Therefore hyperbolic Eulerian relative equilibria are unstable, a fact that makes them unlikely observable candidates in a (hypothetically) hyperbolic physical universe. \section{Equations of motion}\label{equations} We consider the equations of motion on $2$-dimensional manifolds of constant curvature, namely spheres embedded in $\mathbb{R}^3$, for $\kappa>0$, and hyperboloids\footnote{The hyperboloid corresponds to Weierstrass's model of hyperbolic geometry (see Appendix in \cite{Diacu1}).} embedded in the Minkowski space ${\mathbb{M}}^3$, for $\kappa<0$. 
Consider the masses $m_1, m_2, m_3>0$ in $\mathbb{R}^3$, for $\kappa>0$, and in $\mathbb{M}^3$, for $\kappa<0$, whose positions are given by the vectors ${\bf q}_i=(x_i,y_i,z_i), \ i=1, 2, 3$. Let ${\bf q}= ({\bf q}_1, {\bf q}_2,{\bf q}_3)$ be the configuration of the system, and ${\bf p}=({\bf p}_1, {\bf p}_2,{\bf p}_3)$, with ${\bf p}_i=m_i\dot{\bf q}_i$, representing the momentum. We define the gradient operator with respect to the vector ${\bf q}_i$ as $$\widetilde\nabla_{{\bf q}_i}=(\partial_{x_i},\partial_{y_i},\sigma\partial_{z_i}),$$ where $\sigma$ is the {\it signature function}, \begin{equation} \sigma= \begin{cases} +1, \ \ {\rm for} \ \ \kappa>0\cr -1, \ \ {\rm for} \ \ \kappa<0,\cr \end{cases}\label{sigma} \end{equation} and let $\widetilde\nabla$ denote the operator $(\widetilde\nabla_{{\bf q}_1},\widetilde\nabla_{{\bf q}_2},\widetilde\nabla_{{\bf q}_3})$. For the 3-dimensional vectors ${\bf a}=(a_x,a_y,a_z)$ and ${\bf b}=(b_x,b_y,b_z)$, we define the inner product \begin{equation} {\bf a}\odot{\bf b}:=(a_xb_x+a_yb_y+\sigma a_zb_z) \label{dotpr} \end{equation} and the cross product \begin{equation} {\bf a}\otimes{\bf b}:=(a_yb_z-a_zb_y, a_zb_x-a_xb_z, \sigma(a_xb_y-a_yb_x)). 
\end{equation} The Hamiltonian function of the system describing the motion of the $3$-body problem in spaces of constant curvature is $$H_\kappa({\bf q},{\bf p})=T_\kappa({\bf q},{\bf p})-U_\kappa({\bf q}),$$ where $$ T_\kappa({\bf q},{\bf p})={1\over 2}\sum_{i=1}^3m_i^{-1}({\bf p}_i\odot{\bf p}_i)(\kappa{\bf q}_i\odot{\bf q}_i) $$ defines the kinetic energy and \begin{equation} U_\kappa({\bf q})=\sum_{1\le i<j\le 3}{m_im_j |\kappa|^{1/2}{\kappa{\bf q}_i\odot{\bf q}_j}\over [\sigma(\kappa{\bf q}_i \odot{\bf q}_i)(\kappa{\bf q}_j\odot{\bf q}_j)-\sigma({\kappa{\bf q}_i\odot{\bf q}_j })^2]^{1/2}} \label{forcef} \end{equation} is the force function, $-U_\kappa$ representing the potential energy\footnote{In \cite{Diacu1}, we showed how this expression of $U_\kappa$ follows from the cotangent potential for $\kappa\ne 0$, and that $U_0$ is the Newtonian potential of the Euclidean problem, obtained as $\kappa\to 0$.}. Then the Hamiltonian form of the equations of motion is given by the system \begin{equation} \begin{cases} \dot{\bf q}_i= m_i^{-1}{\bf p}_i,\cr \dot{\bf p}_i=\widetilde\nabla_{{\bf q}_i}U_\kappa({\bf q})-m_i^{-1}\kappa({\bf p}_i\odot{\bf p}_i) {\bf q}_i, \ \ i=1,2,3, \ \kappa\ne 0, \label{Ham} \end{cases} \end{equation} where the gradient of the force function has the expression \begin{equation} {\widetilde\nabla}_{{\bf q}_i}U_\kappa({\bf q})=\sum_{\substack{j=1\\ j\ne i}}^3{m_im_j|\kappa|^{3/2}(\kappa{\bf q}_j\odot{\bf q}_j)[(\kappa{\bf q}_i\odot{\bf q}_i){\bf q}_j-(\kappa{\bf q}_i\odot{\bf q}_j){\bf q}_i]\over [\sigma(\kappa{\bf q}_i \odot{\bf q}_i)(\kappa{\bf q}_j\odot{\bf q}_j)-\sigma({\kappa{\bf q}_i\odot{\bf q}_j })^2]^{3/2}}. 
\label{gradient} \end{equation} The motion is confined to the surface of nonzero constant curvature $\kappa$, i.e.\ $({\bf q},{\bf p})\in {\bf T}^*({\bf M}_\kappa^2)^3$, where ${\bf T}^*({\bf M}_\kappa^2)^3$ is the cotangent bundle of the configuration space $({\bf M}^2_\kappa)^3$, and $$ {\bf M}^2_\kappa=\{(x,y,z)\in\mathbb{R}^3\ |\ \kappa(x^2+y^2+\sigma z^2)=1\}. $$ In particular, ${\bf M}^2_1={\bf S}^2$ is the 2-dimensional sphere, and ${\bf M}^2_{-1}={\bf H}^2$ is the 2-dimensional hyperbolic plane, represented by the upper sheet of the hyperboloid of two sheets (see the Appendix of \cite{Diacu1} for more details). We will also denote ${\bf M}^2_\kappa$ by ${\bf S}^2_\kappa$ for $\kappa>0$, and by ${\bf H}^2_\kappa$ for $\kappa<0$. Notice that the $3$ constraints given by $\kappa{\bf q}_i\odot{\bf q}_i=1, i=1,2,3,$ imply that ${\bf q}_i\odot{\bf p}_i=0$, so the $18$-dimensional system \eqref{Ham} has $6$ constraints. The Hamiltonian function provides the integral of energy, $$ H_\kappa({\bf q},{\bf p})=h, $$ where $h$ is the energy constant. Equations \eqref{Ham} also have the integrals of the angular momentum, \begin{equation} \sum_{i=1}^3{\bf q}_i\otimes{\bf p}_i={\bf c},\label{ang} \end{equation} where ${\bf c}=(\alpha, \beta, \gamma)$ is a constant vector. Unlike in the Euclidean case, there are no integrals of the center of mass and linear momentum. Their absence complicates the study of the problem since many of the standard methods don't apply anymore. Using the fact that $\kappa{\bf q}_i\odot{\bf q}_i=1$ for $i=1,2,3$, we can write system \eqref{Ham} as \begin{equation} \ddot{\bf q}_i=\sum_{\substack{j=1\\ j\ne i}}^3{m_j|\kappa|^{3/2}[{\bf q}_j-(\kappa{\bf q}_i\odot{\bf q}_j){\bf q}_i]\over [\sigma-\sigma({\kappa{\bf q}_i\odot{\bf q}_j })^2]^{3/2}}-(\kappa\dot{\bf q}_i\odot\dot{\bf q}_i){\bf q}_i, \ \ i=1,2,3, \label{second} \end{equation} which is the form of the equations of motion we will use in this paper. 
\section{Local existence and uniqueness of Lagrangian solutions} In this section we define the Lagrangian solutions of the curved 3-body problem, which form a particular class of homographic orbits. Then, for equal masses and suitable initial conditions, we prove their local existence and uniqueness. \begin{definition} A solution of equations \eqref{second} is called Lagrangian if, at every time $t$, the masses form an equilateral triangle that is orthogonal to the $z$ axis. \label{deflag} \end{definition} According to Definition \ref{deflag}, the size of a Lagrangian solution can vary, but its shape is always the same. Moreover, all masses have the same coordinate $z(t)$, which may also vary in time, though the triangle is always perpendicular to the $z$ axis. We can represent a Lagrangian solution of the curved 3-body problem in the form \begin{equation} {\bf q} =({\bf q}_1,{\bf q}_2, {\bf q}_3), \ \ {\rm with}\ \ {\bf q}_i=(x_i,y_i,z_i),\ i=1,2,3, \label{lagsol} \end{equation} \begin{align*} x_1&=r\cos\omega,& y_1&=r\sin\omega,& z_1&=z,\\ x_2&=r\cos(\omega +2\pi/3),& y_2&=r\sin(\omega +2\pi/3),& z_2&=z,\\ x_3&=r\cos(\omega +4\pi/3),& y_3&=r\sin(\omega +4\pi/3),& z_3&=z, \end{align*} where $z=z(t)$ satisfies $z^2=\sigma\kappa^{-1}-\sigma r^2$; $\sigma$ is the signature function defined in \eqref{sigma}; $r:=r(t)$ is the {\it size function}; and $\omega:=\omega(t)$ is the {\it angular function}. Indeed, for every time $t$, we have that $x_i^2(t)+y_i^2(t)+\sigma z_i^2(t)=\kappa^{-1},\ i=1,2,3$, which means that the bodies stay on the surface ${\bf M}_{\kappa}^2$, each body has the same $z$ coordinate, i.e.\ the plane of the triangle is orthogonal to the $z$ axis, and the angles between any two bodies, seen from the geometric center of the triangle, are always the same, so the triangle remains equilateral. Therefore representation \eqref{lagsol} of the Lagrangian orbits agrees with Definition \ref{deflag}. 
\begin{definition} A Lagrangian solution of equations \eqref{second} is called Lagrangian homothetic if the equilateral triangle expands or contracts, but does not rotate around the $z$ axis. \end{definition} In terms of representation \eqref{lagsol}, a Lagrangian solution is Lagrangian homothetic if $\omega(t)$ is constant, but $r(t)$ is not constant. Such orbits occur, for instance, when three bodies of equal masses lying initially in the same open hemisphere are released with zero velocities from an equilateral configuration, to end up in a triple collision. \begin{definition} A Lagrangian solution of equations \eqref{second} is called a Lagrangian relative equilibrium if the triangle rotates around the $z$ axis without expanding or contracting. \end{definition} In terms of representation \eqref{lagsol}, a Lagrangian relative equilibrium occurs when $r(t)$ is constant, but $\omega(t)$ is not constant. Of course, Lagrangian homothetic solutions and Lagrangian relative equilibria, whose existence we proved in \cite{Diacu1}, are particular Lagrangian orbits, but we expect that the Lagrangian orbits are not reduced to them. We now show this by proving the local existence and uniqueness of Lagrangian solutions that are neither Lagrangian homothetic, nor Lagrangian relative equilibria. \begin{theorem} In the curved $3$-body problem of equal masses, for every set of initial conditions belonging to a certain class, the local existence and uniqueness of a Lagrangian solution, which is neither Lagrangian homothetic nor a Lagrangian relative equilibrium, is assured. \label{equal-masses} \end{theorem} \begin{proof} We will check to see if equations \eqref{second} admit solutions of the form \eqref{lagsol} that start in the region $z>0$ and for which both $r(t)$ and $\omega(t)$ are not constant. 
We compute then that $$\kappa{\bf q}_i\odot{\bf q}_j=1-3\kappa r^2/2\ \ {\rm for}\ \ i,j=1,2,3, \ \ {\rm with} \ \ i\ne j,$$ \begin{align*} \dot x_1&=\dot r\cos\omega-r\dot\omega\sin\omega,& \dot y_1&=\dot r\sin\omega+ r\dot\omega\cos\omega, \end{align*} $$\dot x_2=\dot r\cos\Big(\omega +{2\pi\over 3}\Big)-r\dot\omega\sin\Big(\omega +{2\pi\over 3}\Big),$$ $$\dot y_2=\dot r\sin\Big(\omega +{2\pi\over 3}\Big)+r\dot\omega\cos\Big(\omega +{2\pi\over 3}\Big),$$ $$\dot x_3=\dot r\cos\Big(\omega +{4\pi\over 3}\Big)-r\dot\omega\sin\Big(\omega +{4\pi\over 3}\Big),$$ $$\dot y_3=\dot r\sin\Big(\omega +{4\pi\over 3}\Big)+r\dot\omega\cos\Big(\omega +{4\pi\over 3}\Big),$$ \begin{equation} \dot z_1=\dot z_2=\dot z_3=-\sigma r\dot r(\sigma\kappa^{-1}-\sigma r^2)^{-1/2}, \label{zeds} \end{equation} $$\kappa\dot{\bf q}_i\odot\dot{\bf q}_i=\kappa r^2\dot\omega^2+{\kappa\dot r^2\over 1-\kappa r^2} \ \ {\rm for}\ \ i=1,2,3,$$ $$\ddot x_1=(\ddot r-r\dot\omega^2)\cos\omega-(r\ddot\omega+2\dot r\dot\omega)\sin\omega,$$ $$\ddot y_1=(\ddot r-r\dot\omega^2)\sin\omega+(r\ddot\omega+2\dot r\dot\omega)\cos\omega,$$ $$\ddot x_2=(\ddot r-r\dot\omega^2)\cos\Big(\omega +{2\pi\over 3}\Big)-(r\ddot\omega+2\dot r\dot\omega)\sin\Big(\omega +{2\pi\over 3}\Big),$$ $$\ddot y_2=(\ddot r-r\dot\omega^2)\sin\Big(\omega +{2\pi\over 3}\Big)+(r\ddot\omega+2\dot r\dot\omega)\cos\Big(\omega +{2\pi\over 3}\Big),$$ $$\ddot x_3=(\ddot r-r\dot\omega^2)\cos\Big(\omega +{4\pi\over 3}\Big)-(r\ddot\omega+2\dot r\dot\omega)\sin\Big(\omega +{4\pi\over 3}\Big),$$ $$\ddot y_3=(\ddot r-r\dot\omega^2)\sin\Big(\omega +{4\pi\over 3}\Big)+(r\ddot\omega+2\dot r\dot\omega)\cos\Big(\omega +{4\pi\over 3}\Big),$$ $$\ddot z_1=\ddot z_2=\ddot z_3=-\sigma r\ddot r(\sigma\kappa^{-1}-\sigma r^2)^{-1/2}- \kappa^{-1}\dot r^2(\sigma\kappa^{-1}-\sigma r^2)^{-3/2}.$$ Substituting these expressions into system \eqref{second}, we are led to the system below, where the double-dot terms on the left indicate to which differential equation each 
algebraic equation corresponds: \begin{align*} \ddot x_1: \ \ \ \ \ \ \ \ & A\cos\omega-B\sin\omega=0,\\ \ddot x_2:\ \ \ \ \ \ \ \ & A\cos\Big(\omega +{2\pi\over 3}\Big)-B\sin\Big(\omega +{2\pi\over 3}\Big)=0,\\ \ddot x_3:\ \ \ \ \ \ \ \ & A\cos\Big(\omega +{4\pi\over 3}\Big)-B\sin\Big(\omega +{4\pi\over 3}\Big)=0,\\ \ddot y_1:\ \ \ \ \ \ \ \ & A\sin\omega+B\cos\omega=0,\\ \ddot y_2:\ \ \ \ \ \ \ \ & A\sin\Big(\omega +{2\pi\over 3}\Big)+B\cos\Big(\omega +{2\pi\over 3}\Big)=0,\\ \ddot y_3:\ \ \ \ \ \ \ \ & A\sin\Big(\omega +{4\pi\over 3}\Big)+B\cos\Big(\omega +{4\pi\over 3}\Big)=0,\\ \ddot z_1, \ddot z_2, \ddot z_3:\ \ \ \ \ \ \ \ & A=0, \end{align*} where $$A:=A(t)=\ddot r-r(1-\kappa r^2)\dot\omega^2+{\kappa r\dot r^2\over{1-\kappa r^2}}+ {24m(1-\kappa r^2)\over{r^2(12-9\kappa r^2)^{3/2}}},$$ $$B:=B(t)=r\ddot\omega+2\dot r\dot\omega.$$ Obviously, the above system has solutions if and only if $A=B=0$, which means that the local existence and uniqueness of Lagrangian orbits with equal masses is equivalent to the existence of solutions of the system of differential equations \begin{equation} \begin{cases} \dot r=\nu\cr \dot w=-{2\nu w\over r}\cr \dot\nu=r(1-\kappa r^2)w^2-{\kappa r\nu^2\over{1-\kappa r^2}}- {24m(1-\kappa r^2)\over{r^2(12-9\kappa r^2)^{3/2}}}, \cr \end{cases} \label{prime} \end{equation} with initial conditions $r(0)=r_0, w(0)=w_0, \nu(0)=\nu_0,$ where $w=\dot\omega$. The functions $r,\omega$, and $w$ are analytic, and as long as the initial conditions satisfy the conditions $r_0>0$ for all $\kappa$, as well as $r_0<\kappa^{-1/2}$ for $\kappa>0$, standard results of the theory of differential equations guarantee the local existence and uniqueness of a solution $(r,w,\nu)$ of equations \eqref{prime}, and therefore the local existence and uniqueness of a Lagrangian orbit with $r(t)$ and $\omega(t)$ not constant. The proof is now complete. 
\end{proof} \section{Classification of Lagrangian solutions for $\kappa>0$} We can now state and prove the following result: \begin{theorem} In the curved $3$-body problem with equal masses and $\kappa>0$ there are five classes of Lagrangian solutions: (i) Lagrangian homothetic orbits that begin or end in total collision in finite time; (ii) Lagrangian relative equilibria that move on a circle; (iii) Lagrangian periodic orbits that are neither Lagrangian homothetic nor Lagrangian relative equilibria; (iv) Lagrangian non-periodic, non-collision orbits that eject at time $-\infty$, with zero velocity, from the equator, reach a maximum distance from the equator, which depends on the initial conditions, and return to the equator, with zero velocity, at time $+\infty$. None of the above orbits can cross the equator, defined as the great circle of the sphere orthogonal to the $z$ axis. (v) Lagrangian equilibrium points, when the three equal masses are fixed on the equator at the vertices of an equilateral triangle. \label{homo} \end{theorem} The rest of this section is dedicated to the proof of this theorem. Let us start by noticing that the first two equations of system \eqref{prime} imply that $\dot w=-{2\dot r w\over r}$, which leads to $$w=\frac{c}{r^2},$$ where $c$ is a constant. The case $c=0$ can occur only when $w=0$, which means $\dot\omega=0$. Under these circumstances the angular velocity is zero, so the motion is homothetic. These are the orbits whose existence is stated in Theorem \ref{homo} (i). They occur only when the angular momentum is zero, and lead to a triple collision in the future or in the past, depending on the sense of the velocity vectors. For the rest of this section, we assume that $c\ne 0$. Then system \eqref{prime} takes the form \begin{equation}\label{lag2} \begin{cases} \dot r=\nu\cr \dot \nu=\frac{c^2(1-\kappa r^2)}{r^3} -{\kappa r\nu^2\over{1-\kappa r^2}}- {24m(1-\kappa r^2)\over{r^2(12-9\kappa r^2)^{3/2}}}. 
\cr \end{cases} \end{equation} Notice that the term ${\kappa r\nu^2\over{1-\kappa r^2}}$ of the last equation arises from the derivatives $\dot z_1, \dot z_2, \dot z_3$ in \eqref{zeds}. But these derivatives would be zero if the equilateral triangle rotates along the equator, because $r$ is constant in this case, so the term ${\kappa r\nu^2\over{1-\kappa r^2}}$ vanishes. Therefore the existence of equilateral relative equilibria on the equator (included in statement (ii) above), and the existence of equilibrium points (stated in (v))---results proved in \cite{Diacu1}---are in agreement with the above equations. Nevertheless, the term ${\kappa r\nu^2\over{1-\kappa r^2}}$ stops any orbit from crossing the equator, a fact mentioned before statement (v) of Theorem \ref{homo}. Understanding system \eqref{lag2} is the key to proving Theorem \ref{homo}. We start with the following facts: \begin{lemma} Assume $\kappa, m>0$ and $c\ne 0$. Then for $\kappa^{1/2}c^2-(8/\sqrt{3})m< 0$, system \eqref{lag2} has two fixed points, while for $\kappa^{1/2}c^2-(8/\sqrt{3})m\ge 0$ it has one fixed point. \label{prima} \end{lemma} \begin{proof} The fixed points of system \eqref{lag2} are given by $\dot r=0=\dot \nu.$ Substituting $\nu=0$ in the second equation of (\ref{lag2}), we obtain $$\frac{1-\kappa r^2}{r^2}\left[\frac{c^2}{r} - \frac{24m}{(12-9\kappa r^2)^{3/2}}\right] = 0.$$ The vanishing of the factor $\frac{1-\kappa r^2}{r^2}$ shows that, for $\kappa>0$, $r=\kappa^{-1/2}$ is a fixed point, which physically represents an equilateral relative equilibrium moving along the equator. Other potential fixed points of system \eqref{lag2} are given by the equation $$c^2(12-9\kappa r^2)^{3/2} = 24mr,$$ whose solutions are the roots of the polynomial \begin{equation}\label{polynomial1} 729c^4\kappa^3r^6 - 2916c^4\kappa^2r^4 + 144(27c^4\kappa + 4m^2)r^2 - 1728c^4.
\end{equation} Writing $x=r^2$ and assuming $\kappa >0$, this polynomial takes the form \begin{equation}\label{polynomial2} p(x)=729c^4\kappa^3x^3 - 2916c^4\kappa^2x^2 + 144(27c^4\kappa + 4m^2)x - 1728c^4, \end{equation} and its derivative is given by \begin{equation}\label{dpolynomial2} p'(x)=2187c^4\kappa^3x^2 - 5832c^4\kappa^2x + 144(27c^4\kappa + 4m^2). \end{equation} The discriminant of $p'$ is $-5038848c^4\kappa^3m^2<0.$ By Descartes's rule of signs, $p$ can have one or three positive roots. If $p$ has three positive roots, then $p'$ must have two positive roots, but this is not possible because its discriminant is negative. Consequently $p$ has exactly one positive root. For the point $(r,\nu)=(r_0,0)$ to be a fixed point of equations \eqref{lag2}, $r_0$ must satisfy the inequalities $0<r_0\le\kappa^{-1/2}$. If we denote \begin{equation} g(r)=\frac{c^2}{r} - \frac{24m}{(12-9\kappa r^2)^{3/2}},\label{g} \end{equation} we see that, for $\kappa>0$, $g$ is a decreasing function since \begin{equation} {d\over dr}g(r)=-{c^2\over r^2}-{648 m\kappa r\over (12-9\kappa r^2)^{5/2}}<0. \label{derivg} \end{equation} When $r\to 0$, we obviously have that $g(r)>0$ since we assumed $c\ne 0$. When $r\to \kappa^{-1/2}$, we have $g(r)\to \kappa^{1/2}c^2-(8/\sqrt{3})m$. If $\kappa^{1/2}c^2-(8/\sqrt{3})m>0$, then $r_0>\kappa^{-1/2}$, so $(r_0,0)$ is not a fixed point. Therefore, assuming $c\ne 0$, a necessary condition for $(r_0,0)$, with $0<r_0<\kappa^{-1/2}$, to be a fixed point of system \eqref{lag2} is that $$\kappa^{1/2}c^2-(8/\sqrt{3})m< 0.$$ For $\kappa^{1/2}c^2-(8/\sqrt{3})m\ge 0,$ the only fixed point of system \eqref{lag2} is $(r,\nu)=(\kappa^{-1/2},0)$. This conclusion completes the proof of the lemma. \end{proof} \subsection{\bf The flow in the $(r,\nu)$ plane for $\kappa>0$} We will now study the flow of system \eqref{lag2} in the $(r,\nu)$ plane for $\kappa>0$.
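As a quick numerical check of Lemma \ref{prima} (ours, not part of the text; the sample parameters are those of Figure \ref{Fig1}(b)), one can locate the zero $r_0$ of $g$ by bisection in the two-fixed-point regime $\kappa^{1/2}c^2-(8/\sqrt{3})m<0$.

```python
import math

# Sketch: bisection for the zero r_0 of g(r) = c^2/r - 24 m/(12 - 9 k r^2)^{3/2}
# on (0, k^{-1/2}), for sample parameters in the two-fixed-point regime of
# Lemma (prima); g is decreasing there, so the zero is unique.
kappa, c, m = 1.0, 1.0, 4.0
assert math.sqrt(kappa) * c**2 - (8 / math.sqrt(3)) * m < 0

def g(r):
    return c**2 / r - 24 * m / (12 - 9 * kappa * r**2)**1.5

lo, hi = 1e-6, kappa**-0.5 - 1e-9
assert g(lo) > 0 and g(hi) < 0      # g decreases through zero on the interval
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
r0 = 0.5 * (lo + hi)
assert 0 < r0 < kappa**-0.5
assert abs(g(r0)) < 1e-9
```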
At every point with $\nu\ne 0$, the slope of the vector field is given by ${d\nu\over dr}$, i.e.\ by the ratio ${\dot\nu\over\dot r}=h(r,\nu),$ where $$h(r,\nu)= \frac{c^2(1-\kappa r^2)}{\nu r^3} -{\kappa r\nu\over{1-\kappa r^2}}- {24m(1-\kappa r^2)\over{\nu r^2(12-9\kappa r^2)^{3/2}}}.$$ Since $h(r,-\nu)=-h(r,\nu)$, the flow of system \eqref{lag2} is symmetric with respect to the $r$ axis for $r\in(0,\kappa^{-1/2}]$. Also notice that, except for the fixed point $(\kappa^{-1/2},0)$, system \eqref{lag2} is undefined on the lines $r=0$ and $r=\kappa^{-1/2}$. Therefore the flow of system \eqref{lag2} exists only for points $(r,\nu)$ in the band $(0,\kappa^{-1/2})\times \mathbb{R}$ and for the point $(\kappa^{-1/2},0)$. Since $\dot r=\nu$, no interval on the $r$ axis can be an invariant set for system \eqref{lag2}. Then the symmetry of the flow relative to the $r$ axis implies that orbits cross the $r$ axis perpendicularly. But since $g(r)\ne 0$ at every non-fixed point, the flow crosses the $r$ axis perpendicularly everywhere, except at the fixed points. Let us further treat the case of one fixed point and the case of two fixed points separately. \subsubsection{\bf The case of one fixed point} A single fixed point, namely $(\kappa^{-1/2},0)$, appears when $\kappa^{1/2}c^2-(8/\sqrt{3})m\ge 0$. Then the function $g$, which is decreasing, has no zeroes for $r\in(0,\kappa^{-1/2})$, therefore $g(r)>0$ in this interval, so the flow always crosses the $r$ axis upwards. For $\nu\ne 0$, the right hand side of the second equation of \eqref{lag2} can be written as \begin{equation} G(r,\nu)=g_1(r)g(r)+g_2(r,\nu), \label{g*} \end{equation} where \begin{equation} g_1(r)=\frac{1-\kappa r^2}{r^2}\ \ {\rm and}\ \ g_2(r,\nu)=-{\kappa r\nu^2\over 1-\kappa r^2}. 
\label{g12} \end{equation} But $\frac{d}{dr}g_1(r)=-2/r^3<0$ and $\frac{\partial}{\partial r}g_2(r,\nu)=-{\kappa\nu^2(1+\kappa r^2)\over(1-\kappa r^2)^2}<0.$ So, like $g$, the functions $g_1$ and $g_2$ are decreasing in $(0,\kappa^{-1/2})$, with $g_1, g>0$, therefore $G$ is a decreasing function as well. Consequently, for $\nu= {\rm constant} >0$, the slope of the vector field decreases from $+\infty$ at $r=0$ to $-\infty$ at $r=\kappa^{-1/2}$. For $\nu= {\rm constant}<0$, the slope of the vector field increases from $-\infty$ at $r=0$ to $+\infty$ at $r=\kappa^{-1/2}$. This behavior of the vector field forces every orbit to eject downwards from the fixed point, at time $t=-\infty$ and with zero velocity, on a trajectory tangent to the line $r=\kappa^{-1/2}$, reach slope zero at some moment in time, then cross the $r$ axis perpendicularly upwards and symmetrically return with final zero velocity, at time $t=+\infty$, to the fixed point (see Figure \ref{Fig1}(a)). So the flow of system \eqref{lag2} consists in this case solely of homoclinic orbits to the fixed point $(\kappa^{-1/2},0)$, orbits whose existence is claimed in Theorem \ref{homo} (iv). Some of these trajectories may come very close to a total collapse, which they will never reach because only solutions with zero angular momentum (like the homothetic orbits) encounter total collisions, as proved in \cite{Diacu1bis}. So the orbits cannot reach any singularity of the line $r=0$, and neither can they begin or end in a singularity of the line $r=\kappa^{-1/2}$. The reason for the latter is that such points are of the form $(\kappa^{-1/2},\nu)$ with $\nu\ne 0$, therefore $\dot r\ne 0$ at such points. But the vector field tends to infinity when approaching the line $r=\kappa^{-1/2}$, so the flow must be tangent to it, consequently $\dot r$ must tend to zero, which is a contradiction. Therefore only homoclinic orbits exist in this case.
\begin{figure}[htbp] \centering \includegraphics[width=2in]{FiggA.jpg} \includegraphics[width=2in]{FiggB.jpg} \caption{A sketch of the flow of system \eqref{lag2} for (a) $\kappa=c=1,m=0.24$, typical for one fixed point, and (b) $\kappa=c=1, m=4$, typical for two fixed points.} \label{Fig1} \end{figure} \subsubsection{\bf The case of two fixed points} Two fixed points, $(\kappa^{-1/2},0)$ and $(r_0,0)$, with $0<r_0<\kappa^{-1/2}$, occur when $\kappa^{1/2}c^2-(8/\sqrt{3})m< 0$. Since $g$ is decreasing in the interval $(0,\kappa^{-1/2})$, we can conclude that $g(r)>0$ for $r\in(0,r_0)$ and $g(r)<0$ for $r\in(r_0,\kappa^{-1/2})$. Therefore the flow of system \eqref{lag2} crosses the $r$ axis upwards when $r<r_0$, but downwards for $r>r_0$ (see Figure \ref{Fig1}(b)). The function $G(r,\nu)$, defined in \eqref{g*}, fails to be decreasing in the interval $(0,\kappa^{-1/2})$ along lines of constant $\nu$, but it has no singularities in this interval and still maintains the properties $$\lim_{r\to 0^+}G(r,\nu)=+\infty \ \ {\rm and}\ \lim_{r\to (\kappa^{-1/2})^-}G(r,\nu)= -\infty.$$ Therefore $G$ must vanish at some point, so due to the symmetry of the vector field with respect to the $r$ axis, the fixed point $(r_0,0)$ is surrounded by periodic orbits. The points where $G$ vanishes are given by the nullcline $\dot\nu=0$, which has the expression $$\nu^2={(1-\kappa r^2)^2\over\kappa r^3}\bigg[\frac{c^2}{r}- {24m\over{(12-9\kappa r^2)^{3/2}}}\bigg].$$ This nullcline is a disconnected set, formed by the fixed point $(\kappa^{-1/2},0)$ and a continuous curve, symmetric with respect to the $r$ axis. Indeed, since the equation of the nullcline can be written as $\nu^2={(1-\kappa r^2)^2\over\kappa r^3} g(r)$, and $\lim_{r\to(\kappa^{-1/2})^-}g(r)=\kappa^{1/2}c^2-(8/\sqrt{3})m<0$ in the case of two fixed points (as shown in the proof of Lemma \ref{prima}), only the point $(\kappa^{-1/2},0)$ satisfies the nullcline equation away from the fixed point $(r_0,0)$.
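The nullcline formula above can be verified numerically; the short Python sketch below (ours; sample parameters from the two-fixed-point case of Figure \ref{Fig1}(b)) checks that points satisfying it annihilate $G$.

```python
# Sketch: points on the curve  nu^2 = (1 - k r^2)^2 g(r) / (k r^3)  must make
# G(r, nu) = g1(r) g(r) + g2(r, nu) vanish, with g1, g2 as in (g12).
# Sample parameters from the two-fixed-point case.
kappa, c, m = 1.0, 1.0, 4.0

def g(r):      return c**2 / r - 24 * m / (12 - 9 * kappa * r**2)**1.5
def g1(r):     return (1 - kappa * r**2) / r**2
def g2(r, nu): return -kappa * r * nu**2 / (1 - kappa * r**2)
def G(r, nu):  return g1(r) * g(r) + g2(r, nu)

for r in (0.1, 0.2, 0.3):      # sample radii left of r_0, where g(r) > 0
    nu2 = (1 - kappa * r**2)**2 * g(r) / (kappa * r**3)
    assert nu2 > 0
    assert abs(G(r, nu2**0.5)) < 1e-9
```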
The asymptotic behavior of $G$ near $r=\kappa^{-1/2}$ also forces the flow to produce homoclinic orbits for the fixed point $(\kappa^{-1/2},0)$, as in the case discussed in Subsection 4.1.1. The existence of these two kinds of solutions is stated in Theorem \ref{homo} (iii) and (iv), respectively. The fact that orbits cannot begin or end at any of the singularities of the lines $r=0$ or $r=\kappa^{-1/2}$ follows as in Subsection 4.1.1. This remark completes the proof of Theorem \ref{homo}. \section{Classification of Lagrangian solutions for $\kappa<0$} We can now state and prove the following result: \begin{theorem} In the curved $3$-body problem with equal masses and $\kappa<0$ there are eight classes of Lagrangian solutions: (i) Lagrangian homothetic orbits that begin or end in total collision in finite time; (ii) Lagrangian relative equilibria, for which the bodies move on a circle parallel with the $xy$ plane; (iii) Lagrangian periodic orbits that are not Lagrangian relative equilibria; (iv) Lagrangian orbits that eject at time $-\infty$ from a certain relative equilibrium solution $\bf s$ (whose existence and position depend on the values of the parameters) and return to it at time $+\infty$; (v) Lagrangian orbits that come from infinity at time $-\infty$ and reach the relative equilibrium $\bf s$ at time $+\infty$; (vi) Lagrangian orbits that eject from the relative equilibrium $\bf s$ at time $-\infty$ and reach infinity at time $+\infty$; (vii) Lagrangian orbits that come from infinity at time $-\infty$ and symmetrically return to infinity at time $+\infty$, never able to reach the Lagrangian relative equilibrium $\bf s$; (viii) Lagrangian orbits that come from infinity at time $-\infty$, reach a position close to a total collision, and symmetrically return to infinity at time $+\infty$. \label{homoneg} \end{theorem} The rest of this section is dedicated to the proof of this theorem.
Notice first that the orbits described in Theorem \ref{homoneg} (i) occur for zero angular momentum, when $c=0$, as for instance when the three equal masses are released with zero velocities from the Lagrangian configuration, a case in which a total collapse takes place at the point $(0,0,|\kappa|^{-1/2})$. Depending on the initial conditions, the motion can be bounded or unbounded. The existence of the orbits described in Theorem \ref{homoneg} (ii) was proved in \cite{Diacu1}. To address the other points of Theorem \ref{homoneg}, and show that no orbits other than the ones stated there exist, we need to study the flow of system \eqref{lag2} for $\kappa<0$. Let us first prove the following fact: \begin{lemma} Assume $\kappa<0, m>0$, and $c\ne 0$. Then system \eqref{lag2} has no fixed points when $27c^4\kappa+4m^2\le 0$, and can have two, one, or no fixed points when $27c^4\kappa+4m^2> 0$. \end{lemma} \begin{proof} The number of fixed points of system \eqref{lag2} is the same as the number of positive zeroes of the polynomial $p$ defined in \eqref{polynomial2}. If $27c^4\kappa+4m^2\le 0$, all coefficients of $p$ are negative, so by Descartes's rule of signs, $p$ has no positive roots. Now assume that $27c^4\kappa+4m^2> 0$. Then the zeroes of $p$ are the same as the zeroes of the monic polynomial (i.e.\ with leading coefficient 1): $$ {\bar p}(x)=x^3-4\kappa^{-1}x^2+[(16/3)\kappa^{-2}+(64/81)c^{-4}\kappa^{-3}m^2]x-(64/27)\kappa^{-3},$$ obtained when dividing $p$ by the leading coefficient. But a monic cubic polynomial can be written as $$x^3 - (a_1+a_2+a_3)x^2 + (a_1a_2+a_2a_3+a_3a_1)x - a_1a_2a_3,$$ where $a_1,a_2,$ and $a_3$ are its roots. One of these roots is always real and has the opposite sign of $-a_1a_2a_3$. Since the free term of $\bar p$ is positive, one of its roots is always negative, independently of the allowed values of the coefficients $\kappa, m, c$.
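The sign claims above can be checked numerically. The Python sketch below (ours; sample parameters as in Figure \ref{Fig2}(b)) works directly with the squared fixed-point equation, written in $x=r^2$ as $c^4(12-9\kappa x)^3=576m^2x$, and confirms that the resulting cubic has one negative root and, for these parameter values, two positive roots.

```python
# Numerical sketch supporting the sign analysis: with x = r^2, the squared
# fixed-point equation defines the cubic P(x) = c^4 (12 - 9 k x)^3 - 576 m^2 x,
# whose positive roots are the candidate fixed points.  For k < 0 we check
# that P has a negative root, as claimed, and here two positive roots.
kappa, c, m = -0.3, 0.23, 0.12

def P(x):
    return c**4 * (12 - 9 * kappa * x)**3 - 576 * m**2 * x

def bisect(lo, hi):
    assert P(lo) * P(hi) < 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P(lo) * P(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

x_neg = bisect(-100.0, 0.0)
assert x_neg < 0                      # the root that is always negative
x1, x2 = bisect(0.0, 2.0), bisect(2.0, 10.0)
assert 0 < x1 < x2                    # two positive roots, x_i = r_i^2
assert abs(P(x1)) < 1e-9 and abs(P(x2)) < 1e-9
```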
Consequently $p$ can have two positive roots (including the possibility of a double positive root) or no positive root at all. Therefore system \eqref{lag2} can have two, one, or no fixed points. As we will see later, all three cases occur. \end{proof} We further state and prove a property, which we will use to establish Lemma \ref{bigG}: \begin{lemma} Assume $\kappa<0, m>0, c\ne 0$, let $(r_*,0)$ be a fixed point of system \eqref{lag2}, and consider the function $g$ defined in \eqref{g}. Then ${d\over dr}g(r_*)=0$ if and only if $r_*=(-{2\over 3\kappa})^{1/2}$. Moreover, ${d^2\over dr^2}g(r_*)>0$. \label{help} \end{lemma} \begin{proof} Since $(r_*,0)$ is a fixed point of system \eqref{lag2}, it follows that $g(r_*)=0$. Then it follows from relation \eqref{g} that $(12-9\kappa r_*^2)^{3/2}=24mr_*/c^2$. Substituting this value of $(12-9\kappa r_*^2)^{3/2}$ into the equation ${d\over dr}g(r_*)=0$, which is equivalent to $${648m\kappa r_*\over (12-9\kappa r_*^2)^{5/2}}=-{c^2\over r_*^2},$$ it follows that $27\kappa/(12-9\kappa r_*^2)=-1/r_*^2$. Therefore $r_*=(-{2\over 3\kappa})^{1/2}$. Since all the above steps are equivalences, the converse implication also holds, so the first part of Lemma \ref{help} is proved. To prove the second part, substitute $r_*=(-{2\over 3\kappa})^{1/2}$ into the equation $g(r_*)=0$, which is then equivalent to the relation \begin{equation} 9\sqrt{3}c^2(-\kappa)^{1/2}-4m=0. \label{intermediate} \end{equation} Notice that $${d^2\over dr^2}g(r)={2c^2\over r^3}-{648m\kappa\over (12-9\kappa r^2)^{5/2}}- {29160m\kappa^2r^2\over(12-9\kappa r^2)^{7/2}}.$$ Substituting for $r_*=(-{2\over 3\kappa})^{1/2}$ in the above equation, and using \eqref{intermediate}, we are led to the conclusion that ${d^2\over dr^2}g(r_*)=-{4\sqrt{2}\over 9}m\kappa$, which is positive for $\kappa<0$. This completes the proof. \end{proof} The following result is important for understanding a qualitative aspect of the flow of system \eqref{lag2}, which we will discuss later in this section.
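A numerical check of Lemma \ref{help} (ours, outside the proof): impose relation \eqref{intermediate} by choosing $c^2=4m/(9\sqrt{3}\sqrt{-\kappa})$, and evaluate $g$ and its first two derivatives at $r_*=(-2/(3\kappa))^{1/2}$, using the closed-form expressions above.

```python
import math

# Sketch: a numerical check of Lemma (help).  Pick k < 0, m > 0, impose
# relation (intermediate), i.e. c^2 = 4 m / (9 sqrt(3) sqrt(-k)), and verify
# that r_* = (-2/(3k))^{1/2} is a zero of g with dg/dr(r_*) = 0 and
# d^2g/dr^2(r_*) > 0, using the closed-form derivatives from the text.
kappa, m = -1.0, 1.0
c2 = 4 * m / (9 * math.sqrt(3) * math.sqrt(-kappa))
r = math.sqrt(-2 / (3 * kappa))
q = 12 - 9 * kappa * r**2            # equals 18 at r = r_*

g   = c2 / r - 24 * m / q**1.5
dg  = -c2 / r**2 - 648 * m * kappa * r / q**2.5
d2g = (2 * c2 / r**3 - 648 * m * kappa / q**2.5
       - 29160 * m * kappa**2 * r**2 / q**3.5)

assert abs(g) < 1e-12
assert abs(dg) < 1e-12
assert d2g > 0
```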
\begin{lemma} Assume $\kappa<0, m>0, c\ne 0$, and let $(r_*,0)$ be a fixed point of system \eqref{lag2}. If ${\partial\over\partial r}G(r_*,0)=0$, then ${\partial^2\over\partial r^2}G(r_*,0)>0$, where $G$ is defined in \eqref{g*}. \label{bigG} \end{lemma} \begin{proof} Since $(r_*,0)$ is a fixed point of \eqref{lag2}, $G(r_*,0)=0$. But for $\kappa<0$, we have $g_1(r_*)>0$, so necessarily $g(r_*)=0$. Moreover, ${d\over dr}g_1(r_*)\ne 0,$ and since ${\partial\over \partial r}g_2(r,\nu)= -{{\kappa\nu^2(1+\kappa r^2)}\over {(1-\kappa r^2)^2}}$, it follows that ${\partial\over \partial r}g_2(r_*,0)=0$. But $${\partial G\over \partial r}(r,\nu)={d\over dr}g_1(r)\cdot g(r)+g_1(r){d\over dr}g(r)+{\partial\over \partial r} g_2(r,\nu),$$ so the condition ${\partial\over \partial r}G(r_*,0)=0$ implies that ${d\over dr}g(r_*)=0$. By Lemma \ref{help}, $r_*=(-{2\over 3\kappa})^{1/2}$ and ${d^2\over dr^2}g(r_*)>0$. Using now the fact that $${\partial^2 G\over \partial r^2}(r,\nu)={d^2\over dr^2}g_1(r)g(r)+2{d\over dr}g_1(r){d\over dr}g(r)+ g_1(r){d^2\over dr^2}g(r)+{\partial^2\over \partial r^2}g_2(r,\nu),$$ it follows that ${\partial^2\over \partial r^2}G(r_*,0)=g_1(r_*){d^2\over dr^2}g(r_*).$ Since Lemma \ref{help} implies that ${d^2\over dr^2}g(r_*)>0$, and we know that $g_1(r_*)>0$, it follows that ${\partial^2\over\partial r^2}G(r_*,0)>0$, a conclusion that completes the proof. \end{proof} \subsection{The flow in the $(r,\nu)$ plane for $\kappa<0$} We will now study the flow of system \eqref{lag2} in the $(r,\nu)$ plane for $\kappa<0$. As in the case $\kappa>0$, and for the same reason, the flow is symmetric with respect to the $r$ axis, which it crosses perpendicularly at every non-fixed point with $r>0$. Since we can have two, one, or no fixed points, we will treat each case separately. \subsubsection{\bf The case of no fixed points} No fixed points occur when $g(r)$ has no zeroes. Since $g(r)\to\infty$ as $r\to 0$ with $r>0$, it follows that $g(r)>0$. 
Since $g_1(r)$ and $g_2(r,\nu)$ are also positive, it follows that $G(r,\nu)>0$ for $r>0$. But $h(r,\nu)=G(r,\nu)/\nu$. Then $h(r,\nu)>0$ for $\nu>0$ and $h(r,\nu)<0$ for $\nu<0$, so the flow comes from infinity at time $-\infty$, crosses the $r$ axis perpendicularly upwards, and symmetrically reaches infinity at time $+\infty$ (see Figure \ref{Fig2}(a)). These are orbits as in the statements of Theorem \ref{homoneg} (vii) and (viii) but without any reference to the Lagrangian relative equilibrium $\bf s$. \begin{figure}[htbp] \centering \includegraphics[width=2in]{FiggC.jpg} \includegraphics[width=2in]{FiggD.jpg} \caption{A sketch of the flow of system \eqref{lag2} for (a) $\kappa = -2, c = 1/3,$ and $m = 1/2$, typical for no fixed points; (b) $\kappa = -0.3, c = 0.23,$ and $m = 0.12$, typical for two fixed points, which are in this case on the line $\nu=0$ at approximately $r_1=1.0882233$ and $r_2=2.0007055$.} \label{Fig2} \end{figure} \subsubsection{\bf The case of two fixed points} In this case, the function $g$ defined in \eqref{g} has two distinct zeroes, one for $r=r_1$ and the other for $r=r_2$, with $0<r_1<r_2$. In Theorem \ref{homoneg}, we denoted the fixed point $(r_2,0)$ by $\bf s$. Moreover, $g(r)>0$ for $r\in(0,r_1)\cup(r_2,\infty)$, and $g(r)<0$ for $r\in(r_1,r_2)$. Therefore the vector field crosses the $r$ axis downwards between $r_1$ and $r_2$, but upwards for $r<r_1$ as well as for $r>r_2$. To determine the behavior of the flow near the fixed point $(r_1,0)$, we linearize system \eqref{lag2}. For this, let $F(r,\nu)=\nu$ be the right hand side of the first equation in \eqref{lag2}, and notice that ${\partial F\over\partial r}(r_1,0)=0$, ${\partial F\over\partial\nu}(r_1,0)=1$, and ${\partial G\over\partial\nu}(r_1,0)=0$. Since, along the $r$ axis, $G(r,0)$ is positive for $r<r_1$, but negative for $r_1<r<r_2$, it follows that either ${\partial G\over\partial r}(r_1,0)<0$ or ${\partial G\over\partial r}(r_1,0)=0$.
But according to Lemma \ref{bigG}, if ${\partial G\over\partial r}(r_1,0)=0$, then ${\partial^2\over\partial r^2}G(r_1,0)>0$, so $G(r,0)$ is convex up at $(r_1,0)$. Then $G(r,0)$ could not change sign when $r$ passes through $r_1$ along the line $\nu=0$, which contradicts the sign change noted above, so the only existing possibility is ${\partial G\over\partial r}(r_1,0)<0$. The eigenvalues of the linearized system corresponding to the fixed point $(r_1,0)$ are then given by the equation \begin{equation} \det\begin{bmatrix} -\lambda & 1\\ {\partial G\over\partial r}(r_1,0) & -\lambda \end{bmatrix} =0.\label{eigenv} \end{equation} Since ${\partial G\over\partial r}(r_1,0)$ is negative, the eigenvalues are purely imaginary, so $(r_1,0)$ is not a hyperbolic fixed point for equations \eqref{lag2}. Therefore this fixed point could be a spiral sink, a spiral source, or a center for the nonlinear system. But the symmetry of the flow of system \eqref{lag2} with respect to the $r$ axis, and the fact that, near $r_1$, the flow crosses the $r$ axis upwards to the left of $r_1$, and downwards to the right of $r_1$, eliminates the possibility of spiral behavior, so $(r_1,0)$ is a center (see Figure \ref{Fig2}(b)). We can understand the generic behavior of the flow near the isolated fixed point $(r_2,0)$ through linearization as well. For this purpose, notice that ${\partial F\over\partial r}(r_2,0)=0$, ${\partial F\over\partial\nu}(r_2,0)=1$, and ${\partial G\over\partial\nu}(r_2,0)=0$. Since, along the $r$ axis, $G(r,0)$ is negative for $r_1<r<r_2$, but positive for $r>r_2$, it follows that ${\partial G\over\partial r}(r_2,0)>0$ or ${\partial G\over\partial r}(r_2,0)=0$. But using Lemma \ref{bigG} the same way we did above for the fixed point $(r_1,0)$, we can conclude that the only possibility is ${\partial G\over\partial r}(r_2,0)>0$.
The eigenvalues corresponding to the fixed point $(r_2,0)$ are given by the equation \begin{equation} \det\begin{bmatrix} -\lambda & 1\\ {\partial G\over\partial r}(r_2,0) & -\lambda \end{bmatrix} =0.\label{eigenv2} \end{equation} Consequently the fixed point $(r_2,0)$ is hyperbolic, its two eigenvalues are $\lambda_1>0$ and $\lambda_2<0$, so $(r_2,0)$ is a saddle. Indeed, for small $\nu>0$, the slope of the vector field decreases to $-\infty$ on lines $r=$ constant, with $r_1<r<r_2$, when $\nu$ tends to $0$. On the same lines, with $r>r_2$, the slope decreases from $+\infty$ as $\nu$ increases. This behavior gives us an approximate idea of how the eigenvectors corresponding to the eigenvalues $\lambda_1$ and $\lambda_2$ are positioned in the $r\nu$ plane. On lines of the form $\nu=\eta r$, with $\eta>0$, the slope $h(r,\nu)$ of the vector field becomes $$h(r,\eta r)= \frac{1-\kappa r^2}{\eta r^3}\bigg[{c^2\over r}-{24m\over{(12-9\kappa r^2)^{3/2}}}\bigg] -{\kappa\eta r^2\over{1-\kappa r^2}}.$$ So, as $r$ tends to $\infty$, the slope $h(r,\eta r)$ tends to $\eta$. Consequently the vector field does not bound the flow with negative slopes, and thus allows orbits to escape to infinity. With the fixed point $(r_1,0)$ as a center, the fixed point $(r_2,0)$ as a saddle, and a vector field that doesn't bound the orbits as $r\to\infty$, the flow must behave qualitatively as in Figure \ref{Fig2}(b).
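The fixed points quoted in the caption of Figure \ref{Fig2}(b) can be recovered numerically; the sketch below (ours) bisects $g$ on brackets where its sign changes.

```python
# Sketch: recover the two fixed points quoted in the caption of Figure 2(b)
# (k = -0.3, c = 0.23, m = 0.12) by bisecting g on sign-change brackets.
kappa, c, m = -0.3, 0.23, 0.12

def g(r):
    return c**2 / r - 24 * m / (12 - 9 * kappa * r**2)**1.5

def bisect(lo, hi):
    assert g(lo) * g(hi) < 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

r1 = bisect(0.5, 1.5)      # g > 0 on (0, r1), g < 0 on (r1, r2), g > 0 beyond
r2 = bisect(1.5, 3.0)
assert abs(r1 - 1.0882233) < 1e-6
assert abs(r2 - 2.0007055) < 1e-6
```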
This behavior of the flow proves the existence of the following types of solutions: (a) periodic orbits around the fixed point $(r_1,0)$, corresponding to Theorem \ref{homoneg} (iii); (b) a homoclinic orbit to the saddle $(r_2,0)$, surrounding the fixed point $(r_1,0)$ and corresponding to Theorem \ref{homoneg} (iv); (c) an orbit that tends to the fixed point $(r_2,0)$, corresponding to Theorem \ref{homoneg} (v); (d) an orbit that is ejected from the fixed point $(r_2,0)$, corresponding to Theorem \ref{homoneg} (vi); (e) orbits that come from infinity in the direction of the stable manifold of $(r_2,0)$ and inside it, hit the $r$ axis to the right of $r_2$, and return symmetrically to infinity in the direction of the unstable manifold of $(r_2,0)$; these orbits correspond to Theorem \ref{homoneg} (vii); (f) orbits that come from infinity in the direction of the stable manifold of $(r_2,0)$ and outside it, turn around the homoclinic loop, and return symmetrically to infinity in the direction of the unstable manifold of $(r_2,0)$; these orbits correspond to Theorem \ref{homoneg} (viii). Since no other orbits show up, the proof of this case is complete. \subsubsection{\bf The case of one fixed point} We left the case of one fixed point at the end because it is non-generic. It occurs when the two fixed points of the previous case overlap. Let us denote this fixed point by $(r_0,0)$. Then the function $g(r)$ is positive everywhere except at the fixed point, where it is zero. So near $r_0$, $g$ is decreasing for $r<r_0$ and increasing for $r>r_0$, and the $r$ axis is tangent to the graph of $g$. Consequently, ${\partial G\over\partial r}(r_0,0)=0$, and the eigenvalues obtained from equation \eqref{eigenv} are $\lambda_1=\lambda_2=0$. In this degenerate case, the orbits near the fixed point influence the asymptotic behavior of the flow at $(r_0,0)$.
Since the flow away from the fixed point looks very much like in the case of no fixed points, the only difference between the flow sketched in Figure \ref{Fig2}(a) and the current case is that at least one orbit ends at $(r_0,0)$, and at least one other orbit ejects from it. These orbits are described in Theorem \ref{homoneg} (v) and (vi). \medskip The proof of Theorem \ref{homoneg} is now complete. \section{Mass equality of Lagrangian solutions} In this section we show that all Lagrangian solutions that satisfy Definition \ref{deflag} must have equal masses. In other words, we will prove the following result: \begin{theorem} In the curved $3$-body problem, the bodies of masses $m_1, m_2$, $m_3$ can lead to a Lagrangian solution if and only if $m_1=m_2=m_3$. \end{theorem} \begin{proof} The fact that three bodies of equal masses can lead to Lagrangian solutions for suitable initial conditions was proved in Theorem \ref{equal-masses}. So we will further prove that Lagrangian solutions can occur only if the masses are equal. Since the case of relative equilibria was settled in \cite{Diacu1}, we need to consider only the Lagrangian orbits that are not relative equilibria. This means it suffices to treat the case when $r(t)$ is not constant. Assume now that the masses are $m_1, m_2, m_3$, and substitute a solution of the form \begin{align*} x_1&=r\cos\omega,& y_1&=r\sin\omega,& z_1&=(\sigma\kappa^{-1}-\sigma r^2)^{1/2},\\ x_2&=r\cos(\omega +2\pi/3),& y_2&=r\sin(\omega +2\pi/3),& z_2&=(\sigma\kappa^{-1}-\sigma r^2)^{1/2},\\ x_3&=r\cos(\omega +4\pi/3),& y_3&=r\sin(\omega +4\pi/3),& z_3&=(\sigma\kappa^{-1}-\sigma r^2)^{1/2}, \end{align*} into the equations of motion.
Computations and reasoning similar to the ones performed in the proof of Theorem \ref{equal-masses} lead us to the system: $$\ddot r-r(1-\kappa r^2)\dot\omega^2+{\kappa r\dot r^2\over{1-\kappa r^2}}+ {12(m_1+m_2)(1-\kappa r^2)\over{r^2(12-9\kappa r^2)^{3/2}}}=0,$$ $$\ddot r-r(1-\kappa r^2)\dot\omega^2+{\kappa r\dot r^2\over{1-\kappa r^2}}+ {12(m_2+m_3)(1-\kappa r^2)\over{r^2(12-9\kappa r^2)^{3/2}}}=0,$$ $$\ddot r-r(1-\kappa r^2)\dot\omega^2+{\kappa r\dot r^2\over{1-\kappa r^2}}+ {12(m_3+m_1)(1-\kappa r^2)\over{r^2(12-9\kappa r^2)^{3/2}}}=0,$$ $$r\ddot\omega+2\dot r\dot\omega-{4\sqrt{3}(m_1-m_2)\over r^2(12-9\kappa r^2)^{3/2}}=0,$$ $$r\ddot\omega+2\dot r\dot\omega-{4\sqrt{3}(m_2-m_3)\over r^2(12-9\kappa r^2)^{3/2}}=0,$$ $$r\ddot\omega+2\dot r\dot\omega-{4\sqrt{3}(m_3-m_1)\over r^2(12-9\kappa r^2)^{3/2}}=0,$$ which, obviously, can have solutions only if $m_1=m_2=m_3$. This conclusion completes the proof. \end{proof} \section{Local existence and uniqueness of Eulerian solutions} In this section we define the Eulerian solutions of the curved 3-body problem and prove their local existence for suitable initial conditions in the case of equal masses. \begin{definition} A solution of equations \eqref{second} is called Eulerian if, at every time instant, the bodies are on a geodesic that contains the point $(0,0, |\kappa|^{-1/2})$. \label{defeul} \end{definition} According to Definition \ref{defeul}, the size of an Eulerian solution may change, but the particles are always on a (possibly rotating) geodesic. If the masses are equal, it is natural to assume that one body lies at the point $(0,0, |\kappa|^{-1/2})$, while the other two bodies find themselves at diametrically opposed points of a circle. Thus, in the case of equal masses, which we further consider, we ask that the moving bodies have the same coordinate $z$, which may vary in time.
We can thus represent such an Eulerian solution of the curved 3-body problem in the form \begin{equation} {\bf q} =({\bf q}_1,{\bf q}_2, {\bf q}_3), \ \ {\rm with}\ \ {\bf q}_i=(x_i,y_i,z_i),\ i=1,2,3, \label{eulsolu} \end{equation} \begin{align*} x_1&=0,& y_1&=0,& z_1&=(\sigma\kappa)^{-1/2},\\ x_2&=r\cos\omega,& y_2&=r\sin\omega,& z_2&=z,\\ x_3&=-r\cos\omega,& y_3&=-r\sin\omega,& z_3&=z, \end{align*} where $z=z(t)$ satisfies $z^2=\sigma\kappa^{-1}-\sigma r^2=(\sigma\kappa)^{-1} (1-\kappa r^2)$; $\sigma$ is the signature function defined in \eqref{sigma}; $r:=r(t)$ is the {\it size function}; and $\omega:=\omega(t)$ is the {\it angular function}. Notice that, for every time $t$, we have $x_i^2(t)+y_i^2(t)+\sigma z_i^2(t)=\kappa^{-1},\ i=1,2,3$, which means that the bodies stay on the surface ${\bf M}_{\kappa}^2$. Equations \eqref{eulsolu} also express the fact that the bodies are on the same (possibly rotating) geodesic. Therefore representation \eqref{eulsolu} of the Eulerian orbits agrees with Definition \ref{defeul} in the case of equal masses. \begin{definition} An Eulerian solution of equations \eqref{second} is called Eulerian homothetic if the configuration expands or contracts, but does not rotate. \label{ellipticeulhomo} \end{definition} In terms of representation \eqref{eulsolu}, an Eulerian homothetic orbit for equal masses occurs when $\omega(t)$ is constant, but $r(t)$ is not constant. If, for instance, all three bodies are initially in the same open hemisphere, while the two moving bodies have the same mass and the same $z$ coordinate, and are released with zero initial velocities, then we are led to an Eulerian homothetic orbit that ends in a triple collision. \begin{definition} An Eulerian solution of equations \eqref{second} is called an Eulerian relative equilibrium if the configuration of the system rotates without expanding or contracting. 
\end{definition} In terms of representation \eqref{eulsolu}, an Eulerian relative equilibrium orbit occurs when $r(t)$ is constant, but $\omega(t)$ is not constant. Of course, Eulerian homothetic solutions and elliptic Eulerian relative equilibria, whose existence we proved in \cite{Diacu1}, are particular Eulerian orbits, but we expect that the Eulerian orbits are not reduced to them. We now show this fact by proving the local existence and uniqueness of Eulerian solutions that are neither Eulerian homothetic, nor Eulerian relative equilibria. \begin{theorem} In the curved $3$-body problem of equal masses, for every set of initial conditions belonging to a certain class, the local existence and uniqueness of an Eulerian solution, which is neither homothetic nor a relative equilibrium, is assured. \label{Eulequal-masses} \end{theorem} \begin{proof} To check whether equations \eqref{second} admit solutions of the form \eqref{eulsolu} that start in the region $z>0$ and for which both $r(t)$ and $\omega(t)$ are not constant, we first compute that $$\kappa{\bf q}_1\odot{\bf q}_2=\kappa{\bf q}_1\odot{\bf q}_3=(1-\kappa r^2)^{1/2},$$ $$\kappa{\bf q}_2\odot{\bf q}_3=1-2\kappa r^2,$$ \begin{align*} \dot x_1&=0,& \dot y_1&=0,\\ \dot x_2&=\dot r\cos\omega-r\dot\omega\sin\omega,& \dot y_2&=\dot r\sin\omega+ r\dot\omega\cos\omega,\\ \dot x_3&=-\dot r\cos\omega+r\dot\omega\sin\omega,& \dot y_3&=-\dot r\sin\omega- r\dot\omega\cos\omega,\\ \dot z_1&=0,& \dot z_2=\dot z_3&=-{\sigma r\dot r\over (\sigma\kappa)^{1/2}(1-\kappa r^2)^{1/2}}, \end{align*} $$\kappa\dot{\bf q}_1\odot\dot{\bf q}_1=0,$$ $$\kappa\dot{\bf q}_2\odot\dot{\bf q}_2=\kappa\dot{\bf q}_3\odot\dot{\bf q}_3=\kappa r^2\dot\omega^2+{\kappa\dot r^2\over 1-\kappa r^2},$$ $$\ddot x_1=\ddot y_1=\ddot z_1=0,$$ $$\ddot x_2=(\ddot r-r\dot\omega^2)\cos\omega-(r\ddot\omega+2\dot r\dot\omega)\sin\omega,$$ $$\ddot y_2=(\ddot r-r\dot\omega^2)\sin\omega+(r\ddot\omega+2\dot r\dot\omega)\cos\omega,$$ $$\ddot x_3=-(\ddot
r-r\dot\omega^2)\cos\omega+(r\ddot\omega+2\dot r\dot\omega)\sin\omega,$$ $$\ddot y_3=-(\ddot r-r\dot\omega^2)\sin\omega-(r\ddot\omega+2\dot r\dot\omega)\cos\omega,$$ $$\ddot z_2=\ddot z_3=-\sigma r\ddot r(\sigma\kappa^{-1}-\sigma r^2)^{-1/2}- \kappa^{-1}\dot r^2(\sigma\kappa^{-1}-\sigma r^2)^{-3/2}.$$ Substituting these expressions into equations \eqref{second}, we are led to the system below, where the double-dot terms on the left indicate to which differential equation each algebraic equation corresponds: \begin{align*} \ddot x_2, \ddot x_3:\ \ \ \ \ \ \ \ & C\cos\omega-D\sin\omega=0,\\ \ddot y_2, \ddot y_3:\ \ \ \ \ \ \ \ & C\sin\omega+D\cos\omega=0,\\ \ddot z_2, \ddot z_3:\ \ \ \ \ \ \ \ & C=0, \end{align*} where $$C:=C(t)=\ddot r-r(1-\kappa r^2)\dot\omega^2+{\kappa r\dot r^2\over{1-\kappa r^2}}+ {m(5-4\kappa r^2)\over{4r^2(1-\kappa r^2)^{1/2}}},$$ $$D:=D(t)=r\ddot\omega+2\dot r\dot\omega.$$ (The equations corresponding to $\ddot x_1, \ddot y_1,$ and $\ddot z_1$ are identities, so they don't show up). The above system has solutions if and only if $C=D=0$, which means that the existence of Eulerian homographic orbits of the curved $3$-body problem with equal masses is equivalent to the existence of solutions of the system of differential equations: \begin{equation} \begin{cases} \dot r=\nu\cr \dot w=-{2\nu w\over r}\cr \dot\nu=r(1-\kappa r^2)w^2-{\kappa r\nu^2\over{1-\kappa r^2}}- {m(5-4\kappa r^2)\over{4r^2(1-\kappa r^2)^{1/2}}}, \cr \end{cases}\label{eu0} \end{equation} with initial conditions $r(0)=r_0, w(0)=w_0, \nu(0)=\nu_0,$ where $w=\dot\omega$. 
The functions $r,\omega$, and $w$ are analytic, and as long as the initial conditions satisfy the conditions $r_0>0$ for all $\kappa$, as well as $r_0<\kappa^{-1/2}$ for $\kappa>0$, standard results of the theory of differential equations guarantee the local existence and uniqueness of a solution $(r,w,\nu)$ of equations \eqref{eu0}, and therefore the local existence and uniqueness of an Eulerian orbit with $r(t)$ and $\omega(t)$ not constant. This conclusion completes the proof. \end{proof} \section{Classification of Eulerian solutions for $\kappa>0$} We can now state and prove the following result: \begin{theorem} In the curved $3$-body problem with equal masses and $\kappa>0$ there are three classes of Eulerian solutions: (i) homothetic orbits that begin or end in total collision in finite time; (ii) relative equilibria, for which one mass is fixed at one pole of the sphere while the other two move on a circle parallel with the $xy$ plane; (iii) periodic homographic orbits that are not relative equilibria. None of the above orbits can cross the equator, defined as the great circle orthogonal to the $z$ axis. \label{eulerp} \end{theorem} The rest of this section is dedicated to the proof of this theorem. Let us start by noticing that the first two equations of system \eqref{eu0} imply that $\dot w=-{2\dot r w\over r}$, which leads to $$w=\frac{c}{r^2},$$ where $c$ is a constant. The case $c=0$ can occur only when $w=0$, which means $\dot\omega=0$. Under these circumstances the angular velocity is zero, so the motion is homothetic. The existence of these orbits is stated in Theorem \ref{eulerp} (i). They occur only when the angular momentum is zero, and lead to a triple collision in the future or in the past, depending on the direction of the velocity vectors. The existence of the orbits described in Theorem \ref{eulerp} (ii) was proved in \cite{Diacu1}. For the rest of this section, we assume that $c\ne 0$. 
System \eqref{eu0} is thus reduced to \begin{equation} \begin{cases} \dot r=\nu\cr \dot\nu={c^2(1-\kappa r^2)\over r^3}-{\kappa r\nu^2\over{1-\kappa r^2}}- {m(5-4\kappa r^2)\over{4r^2(1-\kappa r^2)^{1/2}}}. \cr \end{cases}\label{eu} \end{equation} To address the existence of the orbits described in Theorem \ref{eulerp} (iii), and show that no other Eulerian orbits than those of Theorem \ref{eulerp} exist for $\kappa>0$, we need to study the flow of system \eqref{eu} for $\kappa>0$. Let us first prove the following fact: \begin{lemma} Regardless of the values of the parameters $m, \kappa>0$, and $c\ne 0$, system \eqref{eu} has one fixed point $(r_0,0)$ with $0<r_0<\kappa^{-1/2}$. \label{lemmaeup} \end{lemma} \begin{proof} The fixed points of system \eqref{eu} are of the form $(r,0)$ for all values of $r$ that are zeroes of $u(r)$, where \begin{equation} u(r)={c^2(1-\kappa r^2)\over r}- {m(5-4\kappa r^2)\over{4(1-\kappa r^2)^{1/2}}}.\label{uu} \end{equation} But finding the zeroes of $u(r)$ is equivalent to obtaining the roots of the polynomial $$ 16\kappa^2(c^4\kappa + m^2)r^6 - 8\kappa(6c^4\kappa + 5m^2)r^4 + (48c^4\kappa+25m^2)r^2 - 16c^4. $$ Denoting $x=r^2$, this polynomial becomes $$ q(x)=16\kappa^2(c^4\kappa + m^2)x^3 - 8\kappa(6c^4\kappa + 5m^2)x^2 + (48c^4\kappa+25m^2)x - 16c^4. $$ Since $\kappa>0$, Descartes's rule of signs implies that $q$ can have one or three positive roots. The derivative of $q$ is the polynomial \begin{equation}\label{derEuler-pol2} q'(x)=48\kappa^2(c^4\kappa + m^2)x^2 - 16\kappa(6c^4\kappa + 5m^2)x + 48c^4\kappa+25m^2, \end{equation} whose discriminant is $64\kappa^2m^2(21c^4\kappa + 25m^2)$. But, as $\kappa>0$, this discriminant is always positive, so it offers no additional information on the total number of positive roots. To determine the exact number of positive roots, we will use the resultant of two polynomials.
Denoting by $a_i, i=1,2,\dots, \zeta,$ the roots of a polynomial $P$, and by $b_j, j=1,2, \dots, \xi$, those of a polynomial $Q$, the resultant of $P$ and $Q$ is defined by the expression $${\rm Res}(P,Q)=\prod _{i=1}^\zeta\prod_{j=1}^\xi(a_i-b_j).$$ Then $P$ and $Q$ have a common root if and only if ${\rm Res}(P,Q)=0$. Consequently the resultant of $q$ and $q'$ is a polynomial in $\kappa, c$, and $m$ whose zeroes correspond to the parameter values for which $q$ has a double root. But $$ {\rm Res}(q,q') = 1024c^4\kappa^5m^4(c^4\kappa + m^2)(108c^4\kappa + 125m^2). $$ Then, for $m,\kappa >0$ and $c\ne 0$, ${\rm Res}(q,q')$ never vanishes, therefore $q$ has exactly one positive root. Indeed, should $q$ have three positive roots, a continuous variation of $\kappa, m$, and $c$ would lead to some values of the parameters that correspond to a double root. Since double roots are impossible, the existence of a unique equilibrium $(r_0,0)$ with $r_0>0$ is proved. To conclude that $r_0<\kappa^{-1/2}$ for all $m,\kappa>0$ and $c\ne 0$, it is enough to notice that $\lim_{r\to 0}u(r) = +\infty$ and $\lim_{r\to \kappa^{-1/2}}u(r)= - \infty.$ This conclusion completes the proof. \end{proof} \subsection{The flow in the $(r,\nu)$ plane for $\kappa>0$} We can now study the flow of system \eqref{eu} in the $(r,\nu)$ plane for $\kappa>0$. The vector field is not defined along the lines $r=0$ and $r=\kappa^{-1/2}$, so it lies in the band $(0,\kappa^{-1/2})\times \mathbb{R}$. Consider now the slope ${d\nu\over dr}$ of the vector field. This slope is given by the ratio ${{\dot\nu}\over{\dot r}}=v(r,\nu)$, where \begin{equation} v(r,\nu)={c^2(1-\kappa r^2)\over \nu r^3}-{\kappa r\nu\over{1-\kappa r^2}}- {m(5-4\kappa r^2)\over{4\nu r^2(1-\kappa r^2)^{1/2}}}. \label{slope2} \end{equation} But $v$ is odd with respect to $\nu$, i.e.\ $v(r,-\nu)=-v(r,\nu)$, so the vector field is symmetric with respect to the $r$ axis.
\begin{figure}[htbp] \centering \includegraphics[width=2in]{FiggE.jpg} \caption{A sketch of the flow of system \eqref{eu} for $\kappa = 1, c = 2,$ and $m = 2$, typical for Eulerian solutions with $\kappa>0$.} \label{Fig3} \end{figure} Since $\lim_{r\to 0}v(r) = +\infty$ and $\lim_{r\to \kappa^{-1/2}}v(r) = - \infty$, the flow crosses the $r$ axis perpendicularly upwards to the left of $r_0$ and downwards to its right, where $(r_0,0)$ is the fixed point of the system \eqref{eu} whose existence and uniqueness we proved in Lemma \ref{lemmaeup}. But the right hand side of the second equation in \eqref{eu} is of the form \begin{equation} W(r,\nu)=u(r)/r^2+g_2(r,\nu),\label{W} \end{equation} where $g_2$ was defined earlier as $g_2(r,\nu)=-{{\kappa r\nu^2} \over{1-\kappa r^2}}$, while $u(r)$ was defined in \eqref{uu}. Notice that $$\lim_{r\to 0}W(r,\nu)=+\infty\ \ {\rm and}\ \ \lim_{r\to\kappa^{-1/2}}W(r,\nu)=-\infty.$$ Moreover, $W(r_0,0)=0$, and $W$ has no singularities for $r\in(0,\kappa^{-1/2})$. Therefore the flow that enters the region $\nu>0$ to the left of $r_0$ must exit it to the right of the fixed point. The symmetry with respect to the $r$ axis forces all orbits to be periodic around $(r_0,0)$ (see Figure \ref{Fig3}). This proves the existence of the solutions described in Theorem \ref{eulerp} (iii), and shows that no orbits other than those in Theorem \ref{eulerp} occur for $\kappa>0$. The proof of Theorem \ref{eulerp} is now complete. 
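Lemma \ref{lemmaeup} can also be sanity-checked numerically. The short Python sketch below (an illustration only, not part of the proof) counts sign changes of the cubic $q$ on a grid in the variable $x=r^2$; the parameter values are the ones used for Figure \ref{Fig3}.

```python
# Numerical sanity check of Lemma lemmaeup: for kappa > 0 the cubic q
# (in the variable x = r^2) should have exactly one positive root.
def q(x, kappa, c, m):
    c4 = c ** 4
    return (16 * kappa**2 * (c4 * kappa + m**2) * x**3
            - 8 * kappa * (6 * c4 * kappa + 5 * m**2) * x**2
            + (48 * c4 * kappa + 25 * m**2) * x
            - 16 * c4)

def count_positive_roots(kappa, c, m, x_max=50.0, steps=100000):
    """Count sign changes of q on (0, x_max] as a proxy for positive roots."""
    changes = 0
    prev = q(x_max / steps / 2, kappa, c, m)
    for i in range(1, steps + 1):
        cur = q(i * x_max / steps, kappa, c, m)
        if prev * cur < 0:
            changes += 1
        if cur != 0:
            prev = cur
    return changes

# Parameters of Figure Fig3 (kappa = 1, c = 2, m = 2): exactly one root.
print(count_positive_roots(1.0, 2.0, 2.0))
```

For these parameters the count is $1$, in agreement with the lemma.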
\section{Classification of Eulerian solutions for $\kappa<0$} We can now state and prove the following result: \begin{theorem} In the curved $3$-body problem with equal masses and $\kappa<0$ there are four classes of Eulerian solutions: (i) Eulerian homothetic orbits that begin or end in total collision in finite time; (ii) Eulerian relative equilibria, for which one mass is fixed at the vertex of the hyperboloid while the other two move on a circle parallel with the $xy$ plane; (iii) Eulerian periodic orbits that are not relative equilibria; the line connecting the two moving bodies is always parallel with the $xy$ plane, but their $z$ coordinate changes in time; (iv) Eulerian orbits that come from infinity at time $-\infty$, reach a position when the size of the configuration is minimal, and then return to infinity at time $+\infty$. \label{eulneg} \end{theorem} The rest of this section is dedicated to the proof of this theorem. The homothetic orbits of the type stated in Theorem \ref{eulneg} (i) occur only when $c=0$. Then the two moving bodies collide simultaneously with the fixed one in the future or in the past. Depending on the initial conditions, the motion can be bounded or unbounded. The existence of the orbits stated in Theorem \ref{eulneg} (ii) was proved in \cite{Diacu1}. To prove the existence of the solutions stated in Theorem \ref{eulneg} (iii) and (iv), and show that there are no other kinds of orbits, we start with the following result: \begin{lemma} In the curved three-body problem with $\kappa<0$, the polynomial $q$ defined in the proof of Lemma \ref{lemmaeup} has no positive roots for $c^4\kappa + m^2 \leq 0$, but has exactly one positive root for $c^4\kappa + m^2 > 0$. \label{last} \end{lemma} \begin{proof} We split our analysis into three cases, depending on the sign of $c^4\kappa + m^2$: (1) $c^4\kappa + m^2=0$. In this case $q$ has the form $8\kappa m^2 x^2 + 23c^4\kappa x -16c^4,$ a polynomial that does not have any positive root.
(2) $c^4\kappa + m^2<0.$ Writing $6c^4\kappa + 5m^2 = 6(c^4\kappa + m^2) - m^2$, we see that the term of $q$ corresponding to $x^2$ is always negative, so by Descartes's rule of signs the number of positive roots depends on the sign of the coefficient corresponding to $x$, i.e.\ $48c^4\kappa+25m^2=48(c^4\kappa+m^2)-23m^2$, which is also negative, and therefore $q$ has no positive root. (3) $c^4\kappa + m^2>0$. This case leads to three subcases: -- if $6c^4\kappa + 5m^2<0$, then necessarily $48c^4\kappa+25m^2<0$ and, so $q$ has exactly one positive root; -- if $6c^4\kappa + 5m^2>0$ and $48c^4\kappa+25m^2<0$, then $q$ has one change of sign and therefore exactly one positive root; -- if $48c^4\kappa+25m^2>0,$ then all coefficients, except for the free term, are positive, therefore $q$ has exactly one positive root. These conclusions complete the proof. \end{proof} The following result will be used towards understanding the case when system \eqref{eu} has one fixed point. \begin{lemma} Regardless of the values of the parameters $\kappa<0, m>0$, and $c\ne 0$, there is no fixed point, $(r_*,0)$, of system \eqref{eu} for which ${\partial\over\partial r}W(r_*,0)=0$, where $W$ is defined in \eqref{W}. \label{lllast} \end{lemma} \begin{proof} Since $u(r_*)=0, {\partial\over \partial r}g_2(r_*,0)=0$, and $${\partial W\over \partial r}(r,\nu)=-(2/r^3)u(r)+(1/r^2){d\over dr}u(r)+{\partial\over \partial r} g_2(r,\nu),$$ it follows that ${\partial\over\partial r}W(r_*,0)=0$ if and only if ${d\over dr}u(r_*)=0$. Consequently our result would follow if we can prove that there is no fixed point $(r_*,0)$ for which ${d\over dr}u(r_*)=0$. To show this fact, notice first that \begin{equation} {d\over dr}u(r)=-{{c^2(1+\kappa r^2)}\over{r^2}}-{{\kappa mr(4\kappa r^2-3)} \over{4(1-\kappa r^2)^{3/2}}}.
\label{derivu} \end{equation} From the definition of $u(r)$ in \eqref{uu}, the identity $u(r_*)=0$ is equivalent to $$(1-\kappa r_*^2)^{1/2}={mr_*(5-4\kappa r_*^2)\over 4c^2(1-\kappa r_*^2)}.$$ Regarding $(1-\kappa r^2)^{3/2}$ as $(1-\kappa r^2)^{1/2}(1-\kappa r^2)$, and substituting the above expression of $(1-\kappa r_*^2)^{1/2}$ into \eqref{derivu} for $r=r_*$, we obtain that $${\kappa(4\kappa r_*^2-3)\over 5-4\kappa r_*^2}+{{1+\kappa r_*^2}\over r_*^2}=0,$$ which leads to the conclusion that $r_*^2={5\over 2\kappa}<0$. Therefore there is no fixed point $(r_*,0)$ such that ${d\over dr}u(r_*)=0$. This conclusion completes the proof. \end{proof} \subsection{The flow in the $(r,\nu)$ plane for $\kappa<0$} To study the flow of system \eqref{eu} for $\kappa<0$, we will consider the two cases given by Lemma \ref{last}, namely when system \eqref{eu} has no fixed points and when it has exactly one fixed point. \subsubsection{\bf The case of no fixed points} Since $\kappa<0$, and system \eqref{eu} has no fixed points, the function $u(r)$, defined in \eqref{uu}, has no zeroes. But $\lim_{r\to 0}u(r)=+\infty$, so $u(r)>0$ for all $r>0$. Then $u(r)/r^2>0$ for all $r>0$. Since $\lim_{r\to 0}g_2(r,\nu)=0$, it follows that $\lim_{r\to 0}W(r,\nu)=+\infty$, where $W(r,\nu)$ (defined in \eqref{W}) forms the right hand side of the second equation in system \eqref{eu}. Since system \eqref{eu} has no fixed points, $W$ doesn't vanish. Therefore $W(r,\nu)>0$ for all $r>0$ and $\nu$. Notice that the slope of the vector field, $v(r,\nu)$, defined in \eqref{slope2}, is of the form $v(r,\nu)=W(r,\nu)/\nu$, which implies that the flow crosses the $r$ axis perpendicularly at every point with $r>0$. Also, for $r$ fixed, $\lim_{\nu\to\pm\infty}W(r,\nu)=+\infty$. Moreover, for $\nu$ fixed, $\lim_{r\to\infty}W(r,\nu)=0$. This means that the flow has a simple behavior as in Figure \ref{Fig4}(a). These orbits correspond to those stated in Theorem \ref{eulneg} (iv).
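Lemma \ref{last} can likewise be sanity-checked numerically (an illustration only, not part of the proof); the parameter choices below match the two panels of Figure \ref{Fig4}.

```python
# Numerical sanity check of Lemma last (kappa < 0): q has no positive root
# when c^4*kappa + m^2 <= 0, and exactly one when c^4*kappa + m^2 > 0.
def q(x, kappa, c, m):
    c4 = c ** 4
    return (16 * kappa**2 * (c4 * kappa + m**2) * x**3
            - 8 * kappa * (6 * c4 * kappa + 5 * m**2) * x**2
            + (48 * c4 * kappa + 25 * m**2) * x
            - 16 * c4)

def count_positive_roots(kappa, c, m, x_max=50.0, steps=100000):
    """Count sign changes of q on (0, x_max] as a proxy for positive roots."""
    changes = 0
    prev = q(x_max / steps / 2, kappa, c, m)
    for i in range(1, steps + 1):
        cur = q(i * x_max / steps, kappa, c, m)
        if prev * cur < 0:
            changes += 1
        if cur != 0:
            prev = cur
    return changes

# Figure Fig4 parameters: (a) c^4*kappa + m^2 = -16 < 0, (b) 6.44 > 0.
print(count_positive_roots(-2.0, 2.0, 4.0))
print(count_positive_roots(-2.0, 2.0, 6.2))
```

The counts are $0$ and $1$, respectively, matching the two cases of the lemma.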
\begin{figure}[htbp] \centering \includegraphics[width=2in]{FiggF.jpg} \includegraphics[width=2in]{FiggG.jpg} \caption{A sketch of the flow of system \eqref{eu} for (a) $\kappa = -2, c = 2,$ and $m = 4$, typical for no fixed points; (b) $\kappa = -2, c = 2,$ and $m = 6.2$, typical for one fixed point.} \label{Fig4} \end{figure} \subsubsection{\bf The case of one fixed point} We start with analyzing the behavior of the flow near the unique fixed point $(r_0,0)$. Let $F(r,\nu)=\nu$ denote the right hand side in the first equation of system \eqref{eu}. Then ${\partial\over\partial r}F(r_0,0)=0, {\partial\over\partial\nu}F(r_0,0)=1$, and ${\partial\over\partial\nu}W(r_0,0)=0$. To determine the sign of ${\partial\over\partial r}W(r_0,0)$, notice first that $\lim_{r\to 0}W(r,\nu)=+\infty$. Since the equation $W(r,\nu)=0$ has a single root of the form $(r_0,0)$, with $r_0>0$, it follows that $W(r,0)>0$ for $0<r<r_0$. To show that $W(r,0)<0$ for $r>r_0$, assume the contrary, which (given the fact that $r_0$ is the only zero of $W(r,0)$) means that $W(r,0)>0$ for $r>r_0$. So $W(r,0)\ge 0$, with equality only for $r=r_0$. But recall that we are in the case when the parameters satisfy the inequality $c^4\kappa + m^2 > 0$. Then a slight variation of the parameters $\kappa<0, m>0$, and $c\ne 0$, within the region defined by the above inequality, leads to two zeroes for $W(r,0)$, a fact which contradicts Lemma \ref{last}. Therefore, necessarily, $W(r,0)<0$ for $r>r_0$. Consequently $W(r,0)$ is decreasing in a small neighborhood of $r_0$, so ${\partial\over\partial r}W(r_0,0)\le 0$. But by Lemma \ref{lllast}, ${\partial\over\partial r}W(r_0,0)\ne 0$, so necessarily ${\partial\over\partial r}W(r_0,0)< 0$. 
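The sign just obtained can be illustrated numerically (a sketch, not part of the argument): locate $r_0$ by bisection on $W(r,0)=u(r)/r^2$ and estimate $\partial W/\partial r$ by a central difference, for the parameters of Figure \ref{Fig4}(b).

```python
# Locate the fixed point r0 of system (eu) for kappa = -2, c = 2, m = 6.2
# and check numerically that dW/dr(r0, 0) < 0.
def W(r, kappa, c, m):
    u = (c**2 * (1 - kappa * r**2) / r
         - m * (5 - 4 * kappa * r**2) / (4 * (1 - kappa * r**2) ** 0.5))
    return u / r**2  # W(r, 0): the term g_2 vanishes for nu = 0

kappa, c, m = -2.0, 2.0, 6.2
lo, hi = 1.0, 1.5  # W(lo, 0) > 0 > W(hi, 0) for these parameters
for _ in range(80):  # bisection
    mid = (lo + hi) / 2
    if W(mid, kappa, c, m) > 0:
        lo = mid
    else:
        hi = mid
r0 = (lo + hi) / 2

h = 1e-6
dW = (W(r0 + h, kappa, c, m) - W(r0 - h, kappa, c, m)) / (2 * h)
print(r0, dW < 0)  # dW < 0: consistent with (r0, 0) being a center
```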
The eigenvalues corresponding to the system obtained by linearizing equations \eqref{eu} around the fixed point $(r_0,0)$ are given by the equation \begin{equation} \det\begin{bmatrix} -\lambda & 1\\ {\partial W\over\partial r}(r_0,0) & -\lambda \end{bmatrix} =0,\label{eigenvalues} \end{equation} which reduces to $\lambda^2={\partial W\over\partial r}(r_0,0)<0$, so these eigenvalues are purely imaginary. In terms of system \eqref{eu}, this means that the fixed point $(r_0,0)$ could be a spiral sink, a spiral source, or a center. The symmetry of the flow with respect to the $r$ axis excludes the first two possibilities, consequently $(r_0,0)$ is a center (see Figure \ref{Fig4}(b)). We thus proved that, in a neighborhood of this fixed point, there exist infinitely many periodic Eulerian solutions whose existence was stated in Theorem \ref{eulneg} (iii). To complete the analysis of the flow of system \eqref{eu}, we will use the nullcline $\dot\nu=0$, which is given by the equation \begin{equation} \nu^2={{1-\kappa r^2}\over \kappa r} \bigg[{c^2(1-\kappa r^2)\over r^3}-{m(5-4\kappa r^2)\over4r^2(1-\kappa r^2)^{1/2}}\bigg]. \label{curve} \end{equation} Along this curve, which passes through the fixed point $(r_0,0)$, and is symmetric with respect to the $r$ axis, the vector field has slope zero. To understand the qualitative behavior of this curve, notice that $$\lim_{r\to\infty} {{1-\kappa r^2}\over \kappa r} \bigg[{c^2(1-\kappa r^2)\over r^3}-{m(5-4\kappa r^2)\over4r^2(1-\kappa r^2)^{1/2}}\bigg]=m(-\kappa)^{1/2}+\kappa c^2.$$ But we are restricted to the parameter region given by the inequality $m^2+\kappa c^4>0$, which is equivalent to $$[m(-\kappa)^{1/2}-(-\kappa)c^2][m(-\kappa)^{1/2}+(-\kappa)c^2]>0.$$ Since the second factor of this product is positive, it follows that the first factor must be positive, therefore the above limit is positive.
Consequently the curve given in \eqref{curve} is bounded by the horizontal lines $$\nu=[m(-\kappa)^{1/2}+\kappa c^2]^{1/2}\ \ {\rm and}\ \ \nu=-[m(-\kappa)^{1/2}+\kappa c^2]^{1/2}.$$ Inside the curve, the vector field has negative slope for $\nu>0$ and positive slope for $\nu<0$. Outside the curve, the vector field has positive slope for $\nu>0$, but negative slope for $\nu<0$. So the orbits of the flow that stay outside the nullcline curve are unbounded. They correspond to solutions whose existence was stated in Theorem \ref{eulneg} (iv). This conclusion completes the proof of Theorem \ref{eulneg}. \section{Hyperbolic homographic solutions} In this last section we consider a certain class of homographic orbits, which occur only in spaces of negative curvature. In the case $\kappa=-1$, we proved in \cite{Diacu1} the existence of hyperbolic Eulerian relative equilibria of the curved 3-body problem with equal masses. These orbits behave as follows: three bodies of equal masses move along three fixed hyperbolas, each body on one of them; the middle hyperbola, which is a geodesic passing through the vertex of the hyperboloid, lies in a plane of ${\mathbb R}^3$ that is parallel to and equidistant from the planes containing the other two hyperbolas, none of which is a geodesic. At every moment in time, the bodies are equidistant from each other on a geodesic hyperbola that rotates hyperbolically. These solutions are the hyperbolic counterpart of Eulerian solutions, in the sense that the bodies stay on the same geodesic, which rotates hyperbolically, instead of circularly. The existence proof we gave in \cite{Diacu1} works for any $\kappa<0$. We therefore provide the following definitions. \begin{definition} A solution of the curved $3$-body problem is called hyperbolic homographic if the bodies maintain a configuration similar to itself while rotating hyperbolically.
When the bodies remain on the same hyperbolically rotating geodesic, the solution is called hyperbolic Eulerian. \label{defhyp} \end{definition} While there is, so far, no evidence of hyperbolic non-Eulerian homographic solutions, we showed in \cite{Diacu1} that hyperbolic Eulerian orbits exist in the case of equal masses. In the particular case of equal masses, it is natural to assume that the middle body moves on a geodesic passing through the point $(0,0,|\kappa|^{-1/2})$ (the vertex of the hyperboloid's upper sheet), while the other two bodies are on the same (hyperbolically rotating) geodesic, and equidistant from it. Consequently we can seek hyperbolic Eulerian solutions of equal masses of the form: \begin{equation} {\bf q}=({\bf q}_1,{\bf q}_2, {\bf q}_3),\ {\rm with}\ {\bf q}_i=(x_i,y_i,z_i),\ i=1,2,3,\ {\rm and}\label{hypsolu} \end{equation} \begin{align*} x_1&=0,& y_1&=|\kappa|^{-1/2}\sinh\omega,& z_1&=|\kappa|^{-1/2}\cosh\omega,\\ x_2&=(\rho^2+\kappa^{-1})^{1/2},& y_2&=\rho\sinh\omega,& z_2&=\rho\cosh\omega,\\ x_3&=-(\rho^2+\kappa^{-1})^{1/2},& y_3&=\rho\sinh\omega,& z_3&=\rho\cosh\omega, \end{align*} where $\rho:=\rho(t)$ is the {\it size function} and $\omega:=\omega(t)$ is the {\it angular function}. Indeed, for every time $t$, we have that $x_i^2(t)+y_i^2(t)-z_i^2(t)=\kappa^{-1},\ i=1,2,3$, which means that the bodies stay on the surface ${\bf H}_{\kappa}^2$, while lying on the same, possibly (hyperbolically) rotating, geodesic. Therefore representation \eqref{hypsolu} of the hyperbolic Eulerian homographic orbits agrees with Definition \ref{defhyp}. With the help of this analytic representation, we can define Eulerian homothetic orbits and hyperbolic Eulerian relative equilibria. \begin{definition} A hyperbolic Eulerian homographic solution is called Eulerian homothetic if the configuration of the system expands or contracts, but does not rotate hyperbolically.
\label{hypeulhomo} \end{definition} In terms of representation \eqref{hypsolu}, an Eulerian homothetic solution occurs when $\omega(t)$ is constant, but $\rho(t)$ is not constant. A straightforward computation shows that if $\omega(t)$ is constant, the bodies lie initially on a geodesic, and the initial velocities are such that the bodies move along the geodesic towards or away from a triple collision at the point occupied by the fixed body. Notice that Definition \ref{hypeulhomo} leads to the same orbits produced by Definition \ref{ellipticeulhomo}. While the configuration of the former solution does not rotate hyperbolically, and the configuration of the latter solution does not rotate elliptically, both fail to rotate while expanding or contracting. This is the reason why Definitions \ref{ellipticeulhomo} and \ref{hypeulhomo} use the same name (Eulerian homothetic) for these types of orbits. \begin{definition} A hyperbolic Eulerian homographic solution is called a hyperbolic Eulerian relative equilibrium if the configuration rotates hyperbolically while its size remains constant. \end{definition} In terms of representation \eqref{hypsolu}, hyperbolic Eulerian relative equilibria occur when $\rho(t)$ is constant, while $\omega(t)$ is not constant. Unlike for Lagrangian and Eulerian solutions, hyperbolic Eulerian homographic orbits exist only in the form of homothetic solutions or relative equilibria. As we will further prove, any composition between a homothetic orbit and a relative equilibrium fails to be a solution of system \eqref{second}. \begin{theorem} In the curved $3$-body problem of equal masses with $\kappa<0$, the only hyperbolic Eulerian homographic solutions are either Eulerian homothetic orbits or hyperbolic Eulerian relative equilibria. \end{theorem} \begin{proof} Consider for system \eqref{second} a solution of the form \eqref{hypsolu} that is not homothetic. 
Then $$\kappa{\bf q}_1\odot{\bf q}_2=\kappa{\bf q}_1\odot{\bf q}_3=|\kappa|^{1/2}\rho,$$ $$\kappa{\bf q}_2\odot{\bf q}_3=-1-2\kappa \rho^2,$$ \begin{align*} {\dot x}_1&={\ddot x}_1=0,& {\dot y}_1&=|\kappa|^{-1/2}\dot\omega\cosh\omega,& {\dot z}_1&=|\kappa|^{-1/2}\dot\omega\sinh\omega, \end{align*} $${\dot x}_2=-{\dot x}_3={\rho\dot\rho\over{(\rho^2+\kappa^{-1})^{1/2}}},$$ $${\dot y}_2={\dot y}_3=\dot\rho\sinh\omega+\rho\dot\omega\cosh\omega,$$ $${\dot z}_2={\dot z}_3=\dot\rho\cosh\omega+\rho\dot\omega\sinh\omega,$$ $$\kappa\dot{\bf q}_1\odot\dot{\bf q}_1=-\dot\omega^2,\ \ \ \kappa\dot{\bf q}_2\odot\dot{\bf q}_2= \kappa\dot{\bf q}_3\odot\dot{\bf q}_3=\kappa\rho^2\dot\omega^2-{\kappa\dot\rho^2\over{1+\kappa\rho^2}},$$ $${\ddot x}_2=-{\ddot x}_3={\rho\ddot\rho\over{(\rho^2+\kappa^{-1})^{1/2}}}+ {\kappa^{-1}\dot\rho^2\over{(\rho^2+\kappa^{-1})^{3/2}}},$$ $${\ddot y}_2={\ddot y}_3=(\ddot\rho+\rho\dot\omega^2)\sinh\omega+ (\rho\ddot\omega+2\dot\rho\dot\omega)\cosh\omega,$$ $${\ddot z}_2={\ddot z}_3=(\ddot\rho+\rho\dot\omega^2)\cosh\omega+ (\rho\ddot\omega+2\dot\rho\dot\omega)\sinh\omega.$$ Substituting these expressions into system \eqref{second}, we are led to an identity corresponding to $\ddot x_1$.
The other equations lead to the system \begin{align*} \ddot x_2, \ddot x_3:\ \ \ \ \ \ \ \ & E=0\\ \ddot y_1:\ \ \ \ \ \ \ \ & |\kappa|^{-1/2}\ddot\omega\cosh\omega=0,\\ \ddot z_1:\ \ \ \ \ \ \ \ & |\kappa|^{-1/2}\ddot\omega\sinh\omega=0,\\ \ddot y_2, \ddot y_3:\ \ \ \ \ \ \ \ & E\sinh\omega+F\cosh\omega=0,\\ \ddot z_2, \ddot z_3:\ \ \ \ \ \ \ \ & E\cosh\omega+F\sinh\omega=0,\\ \end{align*} where $$E:=E(t)=\ddot\rho+\rho(1+\kappa\rho^2)\dot\omega^2-{\kappa\rho\dot\rho^2\over{1+\kappa\rho^2}}+{m(1-4\kappa\rho^2)\over{4\rho^2|1+\kappa\rho^2|^{1/2}}},$$ $$F:=F(t)=\rho\ddot\omega+2\dot\rho\dot\omega.$$ This system can obviously be satisfied only if \begin{equation} \begin{cases} \ddot\omega=0\cr \ddot\omega=-{2\dot\rho\dot\omega\over\rho}\cr \ddot\rho=-\rho(1+\kappa\rho^2)\dot\omega^2+{\kappa\rho\dot\rho^2\over{1+\kappa\rho^2}}-{m(1-4\kappa\rho^2)\over{4\rho^2|1+\kappa\rho^2|^{1/2}}}.\cr \end{cases} \end{equation} The first equation implies that $\omega(t)=at+b$, where $a$ and $b$ are constants, which means that $\dot\omega(t)=a$. Since we assumed that the solution is not homothetic, we necessarily have $a\ne 0$. But from the second equation, we can conclude that $$\dot\omega(t)={c\over \rho^2(t)},$$ where $c$ is a constant. Since $a\ne 0$, it follows that $\rho(t)$ is constant, which means that the homographic solution is a relative equilibrium. This conclusion is consistent with the third equation, which reduces to $$a^2=\dot\omega^2={m(1-4\kappa\rho^2)\over{4\rho^3|1+\kappa\rho^2|^{3/2}}},$$ being verified by two values of $a$ (equal in absolute value, but of opposite signs) for every fixed $\kappa$ and $\rho$. Therefore every hyperbolic Eulerian homographic solution that is not Eulerian homothetic is a hyperbolic Eulerian relative equilibrium. This conclusion completes the proof.
\end{proof} Since a slight perturbation of hyperbolic Eulerian relative equilibria, within the set of hyperbolic Eulerian homographic solutions, produces no orbits with variable size, it follows that hyperbolic Eulerian relative equilibria of equal masses are unstable. So, though they exist in a mathematical sense, as proved above (as well as in \cite{Diacu1}, using a direct method), such equal-mass orbits are unlikely to be found in a (hypothetical) hyperbolic physical universe.
TITLE: $\gamma$ and an examination of its composition QUESTION [0 upvotes]: Ok, so the Euler Mascheroni constant is defined as $$\sum_{k=1}^{x} \frac1k - \ln x$$ as $x\rightarrow\infty$. However, through some fancy l'Hôpital footwork, I've discovered that the harmonic series grows at a faster rate than the natural log function, so their difference should be infinite. However, this is not the case as $\gamma$ is finite. So what gives? Thanks in advance! P.S. Here is my footwork, I'm posting from my phone at a pizza place right now, so I didn't bother to type it all out: REPLY [1 votes]: It is not true that $\log{n}$ grows faster than $H_n = \sum_{k=1}^n \frac{1}{k}$, or vice versa. We have $$ \frac{1}{k} \geqslant \frac{1}{x} \geqslant \frac{1}{k+1} \geqslant \frac{1}{x+1} $$ for $k \leqslant x \leqslant k+1$, and integrating from $k$ to $k+1$, $$ \frac{1}{k} \geqslant \log{(k+1)}-\log{k} \geqslant \frac{1}{k+1} \geqslant \log{(k+2)}-\log{(k+1)} $$ Thus, summing from $k=1$ to $n-1$, $$ \sum_{k=1}^{n-1} \frac{1}{k} \geqslant \log{n} - \log{1} \geqslant \sum_{k=2}^{n} \frac{1}{k} \geqslant \log{(n+1)}-\log{2}, $$ so $ H_{n-1} \geqslant \log{n} \geqslant H_{n}-1 \geqslant \log{(n+1)}-\log{2} $, i.e. $H_n$ and $\log{n}$ differ from each other by a bounded amount smaller than $1$.
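For intuition, the bounded difference is easy to see numerically (a short Python sketch; `math.log` is the natural logarithm):

```python
import math

# H_n = 1 + 1/2 + ... + 1/n; the difference H_n - log(n) stays in (0, 1]
# and decreases toward the Euler-Mascheroni constant ~0.5772.
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, harmonic(n) - math.log(n))
```

The printed differences shrink toward $\gamma \approx 0.5772$, never blowing up.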
TITLE: Topological Version of First Isomorphism Theorem QUESTION [5 upvotes]: Given a set $X$ and an equivalence relation $\sim$ on $X$, we can define the set $X_\sim=\left\lbrace\left[x\right]:x\in X\right\rbrace$ of equivalence classes, and we can define a projection map $\pi:X\rightarrow X_\sim$ defined by $\pi(x)=\left[x\right]$. If we now put a group structure on $X$, then there are some equivalence relations that are special: if we let $x\sim y$ iff both elements are in the same coset of a normal subgroup of $X$, we can define a group structure on $X_\sim$ in a natural way such that the map $\pi$ is a homomorphism, which is just the first isomorphism theorem. My question is: Is there a corresponding situation in the case of quotient spaces of topological spaces? For any equivalence relation on a topological space, we can define the quotient topology on the quotient space, making the projection map $\pi$ continuous, which seems to be the sort of preservation of structure we might be after. So are there any relations that are special in some way analogous to the way that those arising from quotienting by a normal subgroup are special in the group setting? And do these arise in any way from topological spaces in the way that the ones in the group setting arise from groups? REPLY [5 votes]: When $S$ is a topological space, and $E$ is an equivalence relation on $S$, the natural quotient-space topology on the set $S_{/E}$ of $E$-equivalence classes is defined to be the strongest topology on $S_{/E}$ such that the function $f_E(x)=[x]_E$, for $x\in S$, is continuous (where $[x]_E$ is the $E$-equivalence class containing $x$). So any $V \subset S_{/E}$ is open in $S_{/E}$ iff $f_E^{-1} V = \{ x \in S : f_E(x) \in V \}$ is open in $S$. The set $S_{/E}$ with this topology is called the quotient space and $f_E$ is called the quotient mapping. This is a large area of study, comparable to the size and scope of quotient groups in group theory.
In particular, when $T$ is a subset of $S$, and $E$ is defined by [ $xEy$ iff $($ $x=y$ or $\{x,y\} \subset T$ $)$ ], then the quotient map $f_E : S \to S_{/E}$ is called the identification of $T$ to a point (which is useful in constructing many examples with specific properties).
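As a concrete illustration of the identification construction (a standard example, added for clarity): take $S=[0,1]$ with its usual topology and $T=\{0,1\}$. Identifying $T$ to a point glues the two endpoints of the interval together, and a set $V\subset S_{/E}$ containing the class $[0]_E=\{0,1\}$ is open precisely when $f_E^{-1}V$ contains a set of the form $[0,\epsilon)\cup(1-\delta,1]$. The resulting quotient space is homeomorphic to the circle $S^1$.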
TITLE: How to calculate the average molar mass of the atmosphere? QUESTION [1 upvotes]: The task: Determine the average molar mass of the atmosphere - each for the moist air and the dry air. One example that I have calculated is this one: I have the gas Nitrogen and the percentage for the moist air(77,0%) and the percentage for the dry air(78,08%) as well as the molar mass(28,014) in $$\left(\frac{g}{mol}\right)$$ My solution: $$\text{Nitrogen}= 0,78 \cdot 0,28+ 0,77 \cdot 0,28 = 43,4 \frac{g}{mol}$$ Is that correct? Kind regards, iloveoov. REPLY [3 votes]: The basic idea is correct. For moist air you can calculate the molar mass given by the part of nitrogen with $$M_{N_2,moist}=0.77\cdot 28.014\frac{g}{\text{mol}} = 21.57\frac{g}{\text{mol}}.$$ Note that I multiplied with $28.014$ and not with $0.28$. The next step would be to also do the same for all other gases (oxygen, argon, carbon dioxide, etc.) and then sum that up: $$M_{air,moist}=M_{N_2,moist}+M_{O_2,moist}+ …$$ That will give you the molar mass of moist air. Afterwards do the same steps for dry air. You also summed the values for dry and moist air; that is not what the task asks for.
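To make the intended computation concrete, here is a small Python sketch. The mole fractions and molar masses below are illustrative textbook values for dry air, not data from the question:

```python
# Molar mass of a mixture = mole-fraction-weighted sum of component molar
# masses. The fractions below are illustrative values for dry air.
fractions = {"N2": 0.7808, "O2": 0.2095, "Ar": 0.0093, "CO2": 0.0004}
molar_mass = {"N2": 28.014, "O2": 31.998, "Ar": 39.948, "CO2": 44.009}

M_dry = sum(fractions[g] * molar_mass[g] for g in fractions)
print(round(M_dry, 2))  # roughly 28.97 g/mol
```

The same pattern, with moist-air fractions (including water vapor), gives the moist-air value.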
TITLE: How to solve for angles $4\theta = \theta$? QUESTION [0 upvotes]: I want to find all the angles in $[0, 2\pi)$ for which $4\theta = \theta$ is true. I can obviously get $\theta = 0$, but the other solutions are $\frac{2\pi}{3}$ and $\frac{4\pi}{3}$. How do I find these particular ones? REPLY [0 votes]: The writing $4\theta = \theta$ is not accurate. I assume that you are solving some equation that takes the form: $\cos(4\theta) = \cos(\theta)$ or $\sin(4\theta) = \sin(\theta)$ or possibly something else. This boils down to finding the solution of $4\theta \equiv \theta \pmod {2\pi}$ $$4\theta \equiv \theta \pmod {2\pi} \iff \exists \ k \in \mathbb Z \ / \ 3\theta = 2k\pi \iff \exists \ k \in \mathbb Z \ / \ \theta = \frac23k\pi$$ Thus, the solutions to this equation are the elements of $\{\frac23k\pi \ ; \ k \in \mathbb Z\}$. Then, look for those solutions which are in $[0,2\pi)$.
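The answer's solution set is easy to check numerically; a quick Python sketch (assuming, as the answer does, that the underlying equation was of the type $\cos(4\theta)=\cos(\theta)$, so that $4\theta\equiv\theta\pmod{2\pi}$):

```python
from math import pi, cos, isclose

TWO_PI = 2 * pi
# All theta = (2/3)*k*pi, k integer, that fall in [0, 2*pi); the small
# epsilon guards against floating-point rounding at the right endpoint.
solutions = [2 * k * pi / 3 for k in range(10)
             if 2 * k * pi / 3 < TWO_PI - 1e-9]

# Sanity check: each solution satisfies cos(4*theta) = cos(theta).
assert all(isclose(cos(4 * t), cos(t), abs_tol=1e-9) for t in solutions)
```

This recovers exactly $\{0,\ 2\pi/3,\ 4\pi/3\}$, the three values mentioned in the question.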
TITLE: Prove that the set of open spheres is countable. QUESTION [1 upvotes]: Can someone help me with the following problem: I'm trying to prove that the following set of open spheres of $\mathbb R^2$ with $x_1,y_1,r \in \mathbb Q$ is countable: $$S[(x_1,y_1),r]=\{(x,y)\in\mathbb R^2: \sqrt {(x-x_1)^2+(y-y_1)^2}<r\}$$ I know that $\mathbb Q$ is countable, so if I take $$f:S\to\mathbb Q^3,\qquad f(S[(x_1,y_1),r])=(x_1,y_1,r)\in \mathbb Q \times\mathbb Q \times\mathbb Q =\mathbb Q^3,$$ which is also countable, right? Is that enough? REPLY [0 votes]: A general strategy for showing that a certain set $S$ is countable is to find an injective function from $S$ into a countable set, here $f\colon S\to\mathbb Q^3$. Note that this fails if the function is not injective; for instance, consider the function $f\colon \mathbb R\to\mathbb Q$ sending everything to $0$. So it suffices to show that your function is injective. But this is clear, since if two open spheres have the same center and radius, then they are the same sphere.
TITLE: Maximum likelihood estimation 3 QUESTION [0 upvotes]: if I have a simple random sample $Y_{1},...,Y_{n}$ of a uniform distribution over the interval $(0,2\theta+1)$, how can I compute the maximum likelihood estimation of $\theta$? Thank you for your time. REPLY [0 votes]: The m.l.e. for $2\theta+1$ is the highest order statistic $Y^{(n)}=\max\{Y_i~|~i=1,\ldots,n\}$. Now if $T$ is an m.l.e. for a parameter $\eta$, then for a continuous function $f$, $f(T)$ is the m.l.e. for $f(\eta)$. As $g(x)=\frac{x-1}2$ is a continuous function, $g(Y^{(n)})$ is the m.l.e. for $g(2\theta+1)=\theta$.
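The answer's two-step recipe (take the sample maximum, then apply $g$) can be sanity-checked by simulation; a minimal Python sketch with a made-up value of $\theta$:

```python
import random

random.seed(0)                      # reproducible illustration
theta_true = 3.0                    # hypothetical parameter value
n = 10_000

# Y_i ~ Uniform(0, 2*theta + 1); the m.l.e. of 2*theta + 1 is max(Y_i).
sample = [random.uniform(0, 2 * theta_true + 1) for _ in range(n)]

# Invariance: apply g(x) = (x - 1)/2 to the m.l.e. of 2*theta + 1.
theta_hat = (max(sample) - 1) / 2
```

For large $n$ the estimate sits just below the true $\theta$, since $\max_i Y_i < 2\theta+1$ with probability one.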
TITLE: Congruence for Bernoulli numbers QUESTION [0 upvotes]: It appears that for every odd prime $p$, the following congruence holds for Bernoulli numbers: $$ 2pB_{p-1}-pB_{2p-2}\equiv p-1\mod p^2\mathbb{Z}_{(p)}. $$ The weaker statement that $2pB_{p-1}-pB_{2p-2}\equiv -1\mod p\mathbb{Z}_{(p)}$ follows from the von Staudt-Clausen theorem. I am aware of the Kummer congruences, but they don't seem to apply here as the indices of the Bernoulli numbers in question are divisible by $p-1$. How does one prove this congruence modulo $p^2$? REPLY [1 votes]: A proof goes like this: We have the following explicit formula for Bernoulli numbers: $$ B_n= \sum_{ 0\le i \le n} \frac{ 1}{i+1} \sum_{ 0\le j \le i}(-1)^j\binom{i}{j}j^n. \tag{1} $$ and the following explicit formula for Stirling numbers of the second kind: $$ \begin{Bmatrix}n\\i\end{Bmatrix}i!=(-1)^i \sum_{ 0\le j \le i} (-1)^j\binom{i}{j}j^n . \tag{2}$$ Now let $p$ be an odd prime and $1\le j \le p-1$; by Wilson's theorem and Fermat's little theorem, $$w_p=\frac{ (p-1)!+1}{p},\qquad q_p(j)=\frac{ j^{p-1}-1}{p},\qquad r_p(j)=\frac{ j^{2p-2}-1}{p}$$ are integers. We also have a known congruence for the harmonic number: $$H_{p-1}= \sum_{ 1\le i \le p-1} \frac{ 1}{i} \equiv 0 \mod p. \tag{3} $$ Then $$ B_{p-1}= \sum_{ 0\le i \le {p-1}} \frac{ 1}{i+1} \sum_{ 0\le j \le i}(-1)^j\binom{i}{j}j^{p-1}. \tag{4} $$ $$ B_{p-1}= \sum_{ 1\le i \le {p-2}} \frac{ 1}{i+1} \sum_{ 1\le j \le i}(-1)^j\binom{i}{j}(p \cdot q_p(j)+1) +\frac{ 1}{p} \sum_{ 0\le j \le p-1}(-1)^j\binom{p-1}{j}j^{p-1} . \tag{5} $$ $$ B_{p-1}= \sum_{ 1\le i \le {p-2}} \frac{ 1}{i+1} \sum_{ 1\le j \le i}(-1)^j\binom{i}{j}(p \cdot q_p(j)+1) +\frac{ 1}{p} \begin{Bmatrix}{p-1}\\{p-1}\end{Bmatrix}(p-1)! . \tag{6} $$ $$ p\cdot B_{p-1}\equiv p\cdot \sum_{ 1\le i \le {p-2}} \frac{ 1}{i+1} \sum_{ 1\le j \le i}(-1)^j\binom{i}{j} +(p-1)! \pmod {p^2}. \tag{7} $$ $$ p\cdot B_{p-1}\equiv -p\cdot \sum_{ 1\le i \le {p-2}} \frac{ 1}{i+1} +(p-1)! \pmod {p^2}. 
\tag{8} $$ $$ p\cdot B_{p-1}\equiv -p\cdot H_{p-1} +p +(p-1)! \pmod {p^2}. \tag{9} $$ $$ p\cdot B_{p-1}\equiv p +(p-1)! \pmod{p^2} . \tag{10} $$ On the other hand $$ B_{2p-2}= \sum_{ 0\le i \le {2p-2}} \frac{ 1}{i+1} \sum_{ 0\le j \le i}(-1)^j\binom{i}{j}j^{2p-2}. \tag{11} $$ $$ B_{2p-2}= \sum_{\array{1\le i \le {2p-2}\cr i\neq p-1 }} \frac{ 1}{i+1} \sum_{ 1\le j \le i}(-1)^j\binom{i}{j}(p \cdot r_p(j)+1) +\frac{ 1}{p} \sum_{ 1\le j \le p-1}(-1)^j\binom{p-1}{j}j^{2p-2} . \tag{12} $$ $$ p\cdot B_{2p-2}\equiv p\cdot \sum_{ \array{1\le i \le {2p-2}\cr i\neq p-1 }} \frac{ 1}{i+1} \sum_{ 1\le j \le i}(-1)^j\binom{i}{j} + \sum_{ 1\le j \le p-1}(-1)^j\binom{p-1}{j}j^{2p-2}\pmod {p^2}. \tag{13} $$ $$ p\cdot B_{2p-2}\equiv -p\cdot \sum_{ \array{1\le i \le {2p-2}\cr i\neq p-1 }} \frac{ 1}{i+1} + \sum_{ 1\le j \le p-1}(-1)^j\binom{p-1}{j}(p \cdot r_p(j)+1) \pmod {p^2}. \tag{14} $$ $$ p\cdot B_{2p-2}\equiv -p\cdot \sum_{ \array{1\le i \le {2p-2}\cr i\neq p-1 }} \frac{ 1}{i+1} + \sum_{ 1\le j \le p-1}(-1)^j\binom{p-1}{j}p \cdot r_p(j) -1 \pmod {p^2}. \tag{15} $$ $$ p\cdot B_{2p-2}\equiv -p\cdot \sum_{ \array{1\le i \le {2p-2}\cr i\neq p-1 }} \frac{ 1}{i+1} + \sum_{ 1\le j \le p-1}p \cdot r_p(j) -1 \pmod {p^2}. \tag{16} $$ $$ p\cdot B_{2p-2}\equiv -p\cdot \sum_{ \array{1\le i \le {2p-2}\cr i\neq p-1 }} \frac{ 1}{i+1} + \sum_{ 1\le j \le p-1}j^{2p-2} -p \pmod {p^2}. \tag{17} $$ $$ p\cdot B_{2p-2}\equiv p + \sum_{ 1\le j \le p-1}j^{2p-2} -p \pmod {p^2}. \tag{18} $$ $$ p\cdot B_{2p-2}\equiv p + 2(p-1)!+1 \pmod {p^2}. \tag{19} $$ $(16)$ is obtained from $(15)$ since $$ \binom{p-1}{j} \equiv (-1)^j \pmod {p} . \tag{20}$$ $(18)$ is obtained from $(17)$ since $$\sum_{ \array{1\le i \le {2p-2}\cr i\neq p-1 }} \frac{ 1}{i+1} \equiv -1 \pmod {p}. 
\tag{21} $$ proof: $$\sum_{\array{1\le i \le {2p-2}\cr i\neq p-1}}\frac{ 1}{i+1}=\sum_{1\le i \le p-2}(\frac{ 1}{i+1}+\frac{ 1}{p+i+1})+\frac{1}{p+1} \equiv 2\sum_{1\le i \le p-2}\frac{ 1}{i+1} +1 \pmod {p}.$$ $$\sum_{\array{1\le i \le {2p-2}\cr i\neq p-1}}\frac{ 1}{i+1} \equiv 2\cdot H_{p-1}-2 +1 \equiv -1 \pmod {p}.$$ $(19)$ is obtained from $(18)$ since $$\sum_{1\le j \le {p-1}} j^{2p-2} \equiv p+1 +2(p-1)! \pmod {p^2}. \tag{22} $$ proof: let $1\le j,k \le {p-1}$. $p\cdot r_p(j\cdot k) +1=j^{2p-2}k^{2p-2}=(p\cdot r_p(j) +1)(p\cdot r_p(k) +1) $ $p\cdot r_p(j\cdot k) \equiv p(r_p(j) + r_p(k)) \pmod {p^2}.$ $p\cdot r_p((p-1)!) \equiv \sum_{1\le j \le {p-1}}p \cdot r_p(j) \pmod {p^2}.$ $p\cdot r_p(p\cdot w_p -1) \equiv \sum_{1\le j \le {p-1}}j^{2p-2} -p +1 \pmod {p^2}.$ $ (p\cdot w_p -1)^{2p-2}-1 \equiv \sum_{1\le j \le {p-1}}j^{2p-2} -p +1 \pmod {p^2}.$ $ 1-\binom{2p-2}{1}\cdot p\cdot w_p -1 \equiv \sum_{1\le j \le {p-1}}j^{2p-2} -p +1 \pmod {p^2}.$ Here $\binom{2p-2}{1}=2p-2\equiv -2\pmod {p}$, so the left-hand side is congruent to $2p\cdot w_p=2(p-1)!+2$ modulo $p^2$: $ 2(p-1)! +2 \equiv \sum_{1\le j \le {p-1}}j^{2p-2} -p +1 \pmod {p^2} \square$ Finally, the congruence $$2p\cdot B_{p-1} -p\cdot B_{2p-2} \equiv p-1 \pmod {p^2}. \tag{23} $$ is easily obtained from $(10)$ and $(19)$.
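The congruence can also be verified numerically for small primes with exact rational arithmetic; a sketch in Python implementing the explicit formula $(1)$ with `fractions.Fraction`:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    # Explicit formula (1): B_n = sum_i 1/(i+1) * sum_j (-1)^j C(i,j) j^n.
    return sum(Fraction(sum((-1) ** j * comb(i, j) * j ** n
                            for j in range(i + 1)), i + 1)
               for i in range(n + 1))

for p in (3, 5, 7, 11, 13):
    x = 2 * p * bernoulli(p - 1) - p * bernoulli(2 * p - 2) - (p - 1)
    # x must lie in p^2 Z_(p): numerator divisible by p^2, denominator not by p.
    assert x.numerator % p ** 2 == 0 and x.denominator % p != 0
```

For $p=3$, for example, $2pB_{p-1}-pB_{2p-2}-(p-1)=1+\tfrac1{10}-2=-\tfrac{9}{10}$, whose numerator has $3$-adic valuation $2$.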
TITLE: Problem with showing that operations of defined set are $n$-transitive QUESTION [0 upvotes]: I have two problems which I have been thinking about for several days. They are connected with transitive operations. We are considering the group $G$ of all linear transformations of the real line of the form $x\mapsto ax+b$, where $a\in\mathbb{R} \backslash \{0\}$ and $b \in \mathbb{R}$. Is the natural operation of $G$ on $\mathbb{R}$ strictly $2$-transitive? Let's assume that the group $G$ acts $1$-transitively on the set $S$. What's more, let's assume that for a certain $x \in S$ the stabilizer $\operatorname{Stab}_G(x)$ acts strictly $2$-transitively on $S\backslash \{x\}$. How can one prove that $G$ acts strictly $3$-transitively on $S$? I would appreciate any help, because I don't have any ideas. Below I have written the definition of an $n$-transitive operation. Definition ($n$-transitive operation) An operation of a group $G$ on a set $S$ is [strictly] $n$-transitive if for all sequences $(x_1,x_2,\ldots,x_n)$ and $(y_1,y_2,\ldots,y_n)$, each consisting of pairwise distinct elements of $S$, there exists [exactly one] element $g \in G$ such that $gx_i=y_i$ for every $i\leq n$. REPLY [2 votes]: Yes. Given $x_1,x_2, y_1,y_2\in\mathbb{R}$ as in your definition, the points $(x_1,y_1),\,(x_2,y_2)\in\mathbb{R}^2$ uniquely determine a straight line containing both of them. This line is neither horizontal (since $y_1\ne y_2$) nor vertical (since $x_1\ne x_2$), and thus of the form $y=ax+b$ with $a\ne0$. In other words, there is a unique element in $G$ that sends $x_1$ to $y_1$ and $x_2$ to $y_2$. I assume that you mean that for any $x\in S$, the stabiliser of $x$ acts strictly 1-transitively on $S\!\setminus\!\{x\}$. Take $x_1,x_2, y_1,y_2\in S$ as in the definition. Then there exists an element $g\in G$ such that $g\cdot x_1=y_1$. Let $h\in G$ be the (unique) element in the stabiliser of $y_1$ that sends $g\cdot x_2$ to $y_2$. 
Then $hg\cdot x_1 = y_1$ and $hg\cdot x_2 = y_2$, which proves that $G$ acts 2-transitively on $S$. For strictness, assume that $g,h\in G$ both are elements that send $x_1$ to $y_1$ and $x_2$ to $y_2$. You need to show that $g=h$. To do this, compare the actions of the two group elements $g^{-1}h$ and $1_G$ on $S\!\setminus\!\{x_1\}$. Fill in the details yourself.
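The first part of the answer is constructive, and the construction is easy to mirror in code; a small Python sketch (the function name is mine) that produces the unique $g\in G$, $g(x)=ax+b$, through two prescribed point pairs:

```python
from fractions import Fraction

def affine_through(x1, y1, x2, y2):
    """Unique (a, b) with a != 0 such that a*x1 + b = y1 and a*x2 + b = y2,
    assuming x1 != x2 and y1 != y2 (pairwise distinct, as in the definition)."""
    a = Fraction(y2 - y1, x2 - x1)   # slope of the line through the two points
    b = y1 - a * x1                  # intercept
    assert a != 0                    # guaranteed by y1 != y2
    return a, b

a, b = affine_through(0, 1, 2, 5)    # the unique map sending 0 -> 1 and 2 -> 5
```

Since the two point pairs determine $(a,b)$ uniquely, this is exactly the strict $2$-transitivity of the action of $G$ on $\mathbb{R}$.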
TITLE: What is a decision threshold and how does it apply to a statistical power? QUESTION [0 upvotes]: I'm pretty confused on what is actually going on in this section with hypothesis testing. As another note, the values below are computed using R. I have a homework problem that says: From the perspective of a cereal manufacturer, it is desirable to maintain the average weight of a cereal box as close to 400 grams as possible. Suppose $(X_1,X_2,...,X_n)$ is a random sample of size $n = 30$ where $X_i \sim N(\mu,\sigma^2)$ with $\mu = 405$ and $\sigma = 10$. We want to test for: $H_0: \mu = 400$ $H_1: \mu > 400$ Assume a decision threshold $c_{0.10} = 402.3398$ acts as a decision threshold for rejecting $H_0$. What is the statistical power? The official answer states this: If we apply $c_{0.10} = 402.3398$ as the decision threshold, the statistical power, the probability of rejecting $H_0$ in favor of $H_1$ is $P(\bar{X}_{30} \geq 402.3398) = 1 - pnorm(402.3398,405,\sqrt{\frac{100}{30}}) = 0.927$ The problem with this is I have no idea what is going on, or why we are doing it. I was told the statistical power is $1 - \beta$, so how does $\beta$ equal to pnorm in R? How did we translate this? And isn't $P(X \leq x)$ mean the area to the left of $x$ in a normal distributions bell curve? I'm confused as to how $H_0$ and $H_1$ are plotted on the graph, or what they mean when compared to this $c_{0.10}$ value. If someone could just kind of spell out the steps for me I would really appreciate it. REPLY [1 votes]: Every hypothesis test gives rise to a decision rule which instructs you when to reject the null hypothesis. In your example the decision rule is "Reject $H_0$ when $\bar X\ge402.3398$". The power of a hypothesis test is the probability that the decision rule leads to the right conclusion (i.e., the prob that you observe $\bar X\ge402.3398$) when the null is false. 
Since you want your test to reject the null if the null is false, it is of interest to calculate power for various values of $\mu$ belonging to the alternative hypothesis, and you'd prefer the resulting probability to be large. Typically power gets higher the further away you get from the null hypothesis. In your exercise, power is a function of $\mu$ (for any value $\mu>400)$, but you're being asked to calculate the power for one specific value, namely $\mu=405$. The assumption that the sample $X_1,\ldots,X_{30}$ is from a normal($\mu$, $\sigma=10$) population is still in force, but now $\mu=405$. You need to calculate the prob that your sample mean leads to "reject $H_0$" when $\mu=405$. This is therefore the prob that $\bar X\ge402.3398$, when $\bar X$ has normal distribution with mean $405$ and standard deviation $\sqrt{\frac{\sigma^2}n} =\sqrt{\frac{100}{30}}$. This explains the call in R to pnorm() to obtain a left-tail area under a normal curve, which is then subtracted from 1.
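The answer's `pnorm` computation carries over directly to other languages; a minimal Python sketch of the same power calculation, building the normal CDF from `math.erf`:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    # P(X <= x) for X ~ N(mu, sigma^2), via the error function.
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

c = 402.3398                                # decision threshold
sigma_xbar = sqrt(100 / 30)                 # sd of the sample mean, sigma/sqrt(n)
power = 1 - norm_cdf(c, 405, sigma_xbar)    # P(Xbar >= c) when mu = 405
```

This reproduces the quoted value of about $0.927$.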
TITLE: Need help solving a problem on arranging balls QUESTION [1 upvotes]: One of my friends gave me an interesting problem yesterday...Please omit the first four lines...start from the second paragraph... What is the underlying principle that affects the fifth line?How to do it? Thanks a lot in advance!! REPLY [1 votes]: Yeah...found out the solution yesterday....nice problem...!!
\begin{document} \begin{abstract} The starting point of this article is a decades-old yet little-noticed sufficient condition, presented by Sassenfeld in 1951, for the convergence of the classical Gau\ss-Seidel method. The purpose of the present paper is to shed new light on \emph{Sassenfeld's criterion} and to demonstrate that the original work can be perceived as a special case of a far more extensive concept in the context of preconditioners and iterative linear solvers. Our main result is a classification theorem for the set of all matrices which this general framework applies to. \end{abstract} \maketitle \section{Introduction} The Gau\ss-Seidel method is one of the most classical examples for the iterative solution of linear systems in many numerical analysis textbooks. Convergence is typically established for strictly diagonally dominant as well as for symmetric positive definite matrices. Only a few authors (see, e.g., \cite[Thm.~4.16]{Wendland:17}), however, point to a less standard convergence criterion for the Gau\ss-Seidel scheme introduced by Sassenfeld in his paper~\cite{Sassenfeld:51}: Given a matrix $\mat A=[a_{ij}]\in\RR$ with non-vanishing diagonal entries, i.e.~ $a_{ii}\neq 0$ for each $i=1,\ldots,m$, define non-negative real numbers $s_1,\ldots,s_m$ iteratively by \begin{equation}\label{eq:SF} s_i=\frac{1}{|a_{ii}|}\Bigg(\sum_{j<i}|a_{ij}|s_j+\sum_{j>i}|a_{ij}|\Bigg),\qquad i=1,\ldots,m. \end{equation} Sassenfeld has proved that $\max_{1\le i\le m}s_i<1$ is a sufficient condition for the convergence of the Gau\ss-Seidel scheme. For matrices that satisfy this property, the notion of a \emph{Sassenfeld matrix} was recently introduced in~\cite{BaumannWihler:17} as a generalization of (strict) diagonal dominance. The purpose of the present paper is to show that there is a general principle behind Sassenfeld's original work that applies far beyond the Gau\ss-Seidel method. 
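Before proceeding, we illustrate the criterion with a concrete example: for the strictly diagonally dominant matrix
\[
\mat A=\begin{pmatrix}4 & 1 & 1\\ 1 & 4 & 1\\ 1 & 1 & 4\end{pmatrix},
\]
the recursion~\eqref{eq:SF} yields
\[
s_1=\frac{1+1}{4}=\frac12,\qquad
s_2=\frac{1\cdot\nicefrac12+1}{4}=\frac38,\qquad
s_3=\frac{1\cdot\nicefrac12+1\cdot\nicefrac38}{4}=\frac{7}{32};
\]
hence $\max_{1\le i\le 3}s_i=\nicefrac12<1$, and the Gau\ss-Seidel scheme converges for this matrix.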
To illustrate this observation, we note that~\eqref{eq:SF} can be written in matrix form as \begin{equation}\label{eq:SF'} (\abs{\mat D}-\abs{\mat L})\mat s=\abs{\mat U}\mat e, \end{equation} where the matrix $\mat A=\mat L+\mat D+\mat U$ is decomposed in the usual way into the (strict) lower and upper triangular parts $\mat L=\tril(\mat A)$ and $\mat U=\triu(\mat A)$, respectively, and the diagonal part $\mat D=\diag(\mat A)$; furthermore, $\abs{\emptymat}$ signifies the modulus of a matrix $\emptymat$ taken entry-wise, $\mat s=(s_1,\ldots,s_m)$ contains the iteratively defined real numbers $s_1,\ldots,s_m$ from~\eqref{eq:SF}, and \begin{equation}\label{eq:e} \mat e=(1,\ldots,1)^\top\in\R \end{equation} is the (column) vector containing only components~1. More generally, for an appropriate invertible matrix $\mat P\in\RR$, which will be called a \emph{Sassenfeld preconditioner}, we consider the splitting \begin{equation}\label{eq:ss} \mat A=\off(\mat P)+\diag(\mat P)+(\mat A-\mat P), \end{equation} where $\diag\left(\emptymat\right)$ and $\off(\emptymat)$ denote the diagonal and off-diagonal parts of a matrix, respectively. Then, define the vector $\mat s\in\R$ to be the solution (if it exists) of the system \begin{equation}\label{eq:SF''} (\abs{\diag(\mat P)}-\abs{\off(\mat P)})\mat s=\abs{\mat A-\mat P}\mat e. \end{equation} For instance, in the context of the Gau\ss-Seidel scheme, letting $\mat P:=\mat L+\mat D$, with $\mat L$ and $\mat D$ as above, we notice that~\eqref{eq:SF''} translates into~\eqref{eq:SF'}. In this work, we will focus on matrices $\mat A$ and $\mat P$ for which the components of the solution vector $\mat s$ of the linear system~\eqref{eq:SF''} satisfy \begin{equation}\label{eq:s01} 0\le s_i<1\qquad \forall~i=1,\ldots,m. 
\end{equation} We begin our work by introducing a class of preconditioners $\mat P$, for which the system~\eqref{eq:SF''} is invertible; our approach is based on some classical results from the Perron-Frobenius theory of non-negative matrices. Subsequently, we will focus on all matrices for which the bounds~\eqref{eq:s01} for the solution vector $\mat s$ of~\eqref{eq:SF''} can be achieved; such matrices will be termed \emph{Sassenfeld matrices}. We will discuss a number of properties, and prove that Sassenfeld matrices give rise to convergent iterative splitting methods for linear systems (Prop.~\ref{pr:splitting}). In addition, a characterization theorem for Sassenfeld matrices (Thm.~\ref{thm:main}) will be provided. \subsection*{Outline} We begin by introducing the set of Sassenfeld preconditioners in \S\ref{sc:SP}, and give a number of examples. We then continue to present the Sassenfeld index of a matrix (with respect to a Sassenfeld preconditioner) in \S\ref{sc:SI}, before turning to the definition and characterization of Sassenfeld matrices in \S\ref{sc:sm}. We also elaborate on the application to iterative linear solvers. Moreover, some spectral properties of Sassenfeld matrices are devised in~\S\ref{sc:spectral}. Finally, a few elementary considerations on the construction of preconditioners are provided in~\S\ref{sc:appl}. \subsection*{Notation} For any vectors or matrices $\mat x,\mat y\in\mathbb{R}^{m\times n}$, we use the notation $\mat x\matge\mat y$ (or $\mat x\succ\mat y$) to indicate that all entries of the difference $\mat x-\mat y\in\mathbb{R}^{m\times n}$ are non-negative (resp.~positive). Furthermore, for a matrix $\mat A=[a_{ij}]\in\mathbb{R}^{m\times n}$, we denote by \[ \norm{\mat A}:=\max_{1\le i\le m}\sum_{j=1}^n|a_{ij}| \] the standard infinity matrix norm. Moreover, we signify by $\sr{\mat A}$ the spectral radius of a square-matrix $\mat A\in\RR$. 
\section{Sassenfeld preconditioners}\label{sc:SP} We define the mapping \begin{equation}\label{eq:wP} \begin{split} \zdiag{\emptymat}:\,\RRd&\to\RRp\\ \mat P&\mapsto\wP:=\abs{\diag(\mat P)}^{-1}\abs{\off(\mat P)}=\Abs{\I-\diag(\mat P)^{-1}\mat P}, \end{split} \end{equation} where $\I=\diag(1,\ldots,1)$ is the identity matrix in $\RR$, and $\RRd$ and $\RRp$ are the sets of all real $m\times m$ matrices with non-vanishing diagonal entries and with non-negative entries, respectively. \begin{definition}[Sassenfeld preconditioners]\label{def:SP} A matrix $\mat P\in\RRd$ is called a \emph{Sassenfeld preconditioner} if the matrix $\wP\in\RRp$ from~\eqref{eq:wP} satisfies $\sr{\wP}<1$. \end{definition} \begin{remark}\label{rem:Mmatrix} Observe that $\mat P\in\RRd$ is a Sassenfeld preconditioner if and only if the matrix $\I-\zdiag{\mat P}$ is an $M$-matrix. In particular, we notice that $(\I-\zdiag{\mat P})^{-1}\matge\mat0$; cf.~\cite[Expl.~7.10.7]{Meyer:00}. \end{remark} The following result provides a useful tool to verify Def.~\ref{def:SP} practically. \begin{lemma}\label{lem:tool} A matrix $\mat P\in\RR$ is a Sassenfeld preconditioner if and only if \begin{equation}\label{eq:20211109a} \abs{\off(\mat P)}\mat z\prec \abs{\diag(\mat P)}\mat z, \end{equation} for some positive vector $\mat z\succ\mat 0$ in $\R$. \end{lemma} Before proving the above lemma, we recall an instrumental fact from the Perron-Frobenius theory of non-negative matrices (see, e.g., \cite[\S8]{Meyer:00}), which will be repeatedly used in this paper. \begin{lemma}\label{lem:PF} Given a non-negative matrix $\mat B\matge\mat 0$ in $\RR$, with $r:=\sr{\mat B}$. Then, for any $\epsilon>0$, there exists a positive vector $\mat z\succ \mat 0$ in $\R$ such that it holds $\mat B\mat z\prec (r+\epsilon)\mat z$. 
\end{lemma} \begin{proof} For given $\epsilon>0$, choose $\delta>0$ sufficiently small such that $\sr{\mat B+\delta\mat E}<r+\epsilon$, where $ \mat E=\mat e\mat e^\top\in\RR, $ with $\mat e\in\R$ from~\eqref{eq:e}, is the matrix with all entries 1. Due to Perron's theorem, there exists a (right Perron) vector $\mat z\succ\mat 0$ such that \[ ({\mat B}+\delta\mat E)\mat z=\sr{{\mat B}+\delta\mat E}\mat z\prec (r+\epsilon)\mat z, \] which shows the claim. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:tool}] If $\mat P$ is a Sassenfeld preconditioner with $r:=\sr{\zdiag{\mat P}}<1$, then by Lem.~\ref{lem:PF}, for $\epsilon=\nicefrac{(1-r)}{2}>0$, there exists a vector $\mat z\succ\mat 0$ such that \begin{equation}\label{eq:20211110a} \zdiag{\mat P}\mat z \prec\frac12(r+1)\mat z \prec\mat z, \end{equation} which is equivalent to~\eqref{eq:20211109a}. Conversely, suppose that there is $\mat z\succ\mat 0$ in $\R$ such that \eqref{eq:20211109a} is satisfied. Then, we immediately see that $\mat P\in\RRd$. Furthermore, owing to the Perron-Frobenius theory there exists a (left Perron) vector $\mat q \matge\mat 0$ in $\R$, $\mat q\neq\mat 0$, such that $ \mat q^\top\zdiag{\mat P}=\sr{\zdiag{\mat P}}\mat q^\top. $ Hence, exploiting \eqref{eq:20211110a}, we obtain \[ \sr{\zdiag{\mat P}}\mat q^\top\mat z =\mat q^\top\zdiag{\mat P}\mat z \prec\mat q^\top\mat z, \] from which we infer that $\sr{\zdiag{\mat P}}<1$. \end{proof} \begin{proposition} Any Sassenfeld preconditioner is invertible. \end{proposition} \begin{proof} Let $\mat P\in\RRd$ be a Sassenfeld preconditioner. Suppose that there exists a vector $\mat x\in\R$ with $\|\mat x\|_\infty=1$ and $\mat P\mat x=\mat 0$. Then it follows that \[ \mat x-(\I-\diag(\mat P)^{-1}\mat P)\mat x=\mat 0. \] Taking moduli, we obtain $\abs{\mat x}\matle\wP\abs{\mat x}$. Iteratively, for any $n\in\mathbb{N}$, we infer that $\abs{\mat x}\matle(\wP)^n\abs{\mat x}$. 
Exploiting that $\sr{\wP}<1$ and letting $n\to\infty$, we deduce that $\mat x=\mat 0$, which is a contradiction. \end{proof} We present a few examples. \begin{examples} Any matrix $\mat P=[p_{ij}]\in\RR$ of one of the following types is a Sassenfeld preconditioner: \begin{enumerate}[\rm(i)] \item All invertible upper and lower triangular matrices; \item Any strictly diagonally dominant matrix (by rows or by columns); \item All $M$-matrices; \item Any symmetric positive definite matrix $\mat P$ for which it holds that \begin{equation}\label{eq:spd} \sr{\I-\alpha\mat B} \ge \beta\sr{|\mat B|}, \end{equation} for some $\alpha>0$ and $\beta\ge\max(\nicefrac12,\alpha)$, where we let \[ \mat B:=\diag(\mat P)^{-\nicefrac12}\mat P\diag(\mat P)^{-\nicefrac12}. \] \item All symmetric positive definite matrices $\mat P$ with a symmetric sign pattern of the form \begin{equation}\label{eq:sp} \sign(p_{ij})=-\xi_i\xi_j\qquad 1\le i<j\le m, \end{equation} for a vector $\mat\xi\in\{\pm1\}^m$. \end{enumerate} \end{examples} \begin{proof} We begin by noticing that all matrices $\mat P\in\RR$ in (i)--(v) belong to $\RRd$, i.e. $\zdiag{\mat P}$ is well-defined. For each of the examples, we need to prove that $\sr{\zdiag{\mat P}}<1$. \begin{enumerate}[(i)] \item For any invertible upper or lower triangular matrix $\mat P$ it is straightforward to see that $\sr{\zdiag{\mat P}}=0$. \item If $\mat P$ is strictly diagonally dominant by rows, then we have \[ \sr{\wP}\le\norm{\wP}<1. \] Moreover, if $\mat P$ is strictly diagonally dominant by columns, then we observe that the spectral radii of the two matrices $\zdiag{\mat P}$ and $\zdiag{\big(\mat P^\top\big)}$ are equal, and we can repeat the same argument. \item If $\mat P$ is an $M$-matrix then it can be expressed in the form $\mat P=r\I-\mat B$, where $\mat B\matge\mat 0$ and $r>\sr{\mat B}$; cf.~\cite[Expl.~7.10.7]{Meyer:00}. 
In light of Lem.~\ref{lem:PF}, for $\epsilon=\nicefrac12(r-\sr{\mat B})$, we can find $\mat z\succ\mat 0$ such that \[ \mat B\mat z \prec \frac{r+\sr{\mat B}}{2}\mat z \prec r\mat z. \] Furthermore, we have \begin{align*} \abs{\off(\mat P)} =\off(\mat B) =\mat B+\diag(\mat P)-r\I. \end{align*} Hence, it follows that \[ \abs{\off(\mat P)}\mat z= \mat B\mat z-r\mat z+\diag(\mat P)\mat z \prec \abs{\diag(\mat P)}\mat z. \] Then, applying Lem.~\ref{lem:tool} yields the claim. \item Let $r:=\sr{\I-\alpha\mat B}\ge\beta\sr{|\mat B|}$. Due to the symmetry of $\mat P$, the spectrum of the matrix $\I-\alpha\mat B$ is real, and contains either $+r$ or $-r$, or both. Suppose that there exists $\mat\zeta\in\R$, $\mat \zeta\neq\mat 0$, with $(\I-\alpha\mat B)\mat\zeta=-r\mat\zeta$. Then, we obtain $ \mat B\mat\zeta=\alpha^{-1}(r+1)\mat\zeta, $ which leads to \[ \sr{|\mat B|}\ge\sr{\mat B}\ge\frac{r+1}{\alpha}>\frac{r}{\alpha}. \] Noticing that $\alpha\le\beta$ yields $\sr{|\mat B|}>\nicefrac{r}{\beta}$, which constitutes a contradiction. Hence, there is $\mat\xi\in\R$, with $\mat\xi^\top\mat\xi=1$, such that \[ \left(\I-\alpha\mat B\right)\mat\xi=r\mat \xi. \] Defining the vector $\mat\eta:=\diag(\mat P)^{-\nicefrac12}\mat\xi$, it follows that \[ \diag(\mat P)^{\nicefrac12}\left(\I-\alpha\mat B\right)\diag(\mat P)^{\nicefrac12}\mat\eta =r\diag(\mat P)\mat\eta, \] and therefore, \[ \diag(\mat P)\mat\eta -\alpha\mat P\mat\eta =r\diag(\mat P)\mat\eta. \] Using that $\mat P$ is positive definite, and noticing that $\mat\eta^\top\diag(\mat P)\mat\eta=\mat\xi^\top\mat\xi=1$, we deduce that \[ r=1-\alpha\mat\eta^\top\mat P\mat\eta<1. \] Note that the diagonal entries of $\mat P$ are all positive. Hence, exploiting that $\sr{|\mat B|}\le\nicefrac{r}{\beta}$, and applying Lem.~\ref{lem:PF}, for $\epsilon=\nicefrac{(1-r)}{\beta}>0$, there exists $\mat z\in\R$, $\mat z\succ\mat 0$, scaled such that $\mat z^\top\diag(\mat P)\mat z=1$, with $ |\mat B|\mat z \prec \beta^{-1}\mat z. 
$ Then, upon defining the (positive) vector $\mat y=\diag(\mat P)^{-\nicefrac12}\mat z$, and applying $\beta\ge\nicefrac12$, it holds that \[ \abs{\off(\mat P)}\mat y \prec\left(\beta^{-1}-1\right)\diag(\mat P)\mat y \matle\diag(\mat P)\mat y. \] Applying Lem.~\ref{lem:tool} completes the argument. \item We apply (iv). To this end, by the Perron-Frobenius theorem, we note that there is $\mat z\matge\mat 0$, $\mat z\neq\mat 0$, such that $|\mat B|\mat z=r\mat z$, with $r=\sr{|\mat B|}$. Equivalently, since the diagonal entries of $\mat P$ are all positive, we have $ |\mat P|\mat y=r\diag(\mat P)\mat y, $ with $\mat y=\diag(\mat P)^{-\nicefrac12}\mat z$. Hence, for $1\le i\le m$, using~\eqref{eq:sp}, it holds \begin{align*} r p_{ii}y_i &=\sum_{j=1}^m|p_{ij}|y_j =p_{ii}y_i+\sum_{j\neq i}\sign(p_{ij})p_{ij}y_j =2p_{ii}y_i-\xi_i\sum_{j=1}^m\xi_jp_{ij}y_j, \end{align*} where the last step uses $\sign(p_{ij})p_{ij}y_j=-\xi_i\xi_jp_{ij}y_j$ for $j\neq i$ and $\xi_i^2=1$. Therefore, defining the vector $\mat x=(\xi_1y_1,\ldots,\xi_my_m)^\top$, it follows that \[ rp_{ii}x_i=p_{ii}x_i-\sum_{j\neq i}p_{ij}x_j,\qquad 1\le i\le m. \] We deduce that \[ (2\diag(\mat P)-\mat P)\mat x=r\diag(\mat P)\mat x, \] which, upon letting $\mat w=\diag(\mat P)^{\nicefrac12}\mat x$, yields the eigenvalue equation \[ \left(\I-\frac12\mat B\right)\mat w =\frac{r}{2}\mat w. \] This leads to~\eqref{eq:spd} with $\alpha=\beta=\nicefrac12$, and thereby, concludes the proof. \end{enumerate} \end{proof} \begin{remark} It is easy to see that symmetric positive definite matrices fail to be Sassenfeld preconditioners in general. 
An example is given by the symmetric positive definite matrix \[ \mat P={ \text{\footnotesize$ \begin{pmatrix} 1 & 1 & 1 & \cdots & 1\\ 1 & 2 & 2 & \cdots & 2 \\ 1 & 2 & 3 & \cdots & 3\\ \vdots & \vdots &\vdots&\ddots&\vdots \\ 1 & 2 & 3 & \cdots & m \end{pmatrix}$}}\in\RR,\qquad \text{i.e.}\qquad \zdiag{\mat P}= {\text{\footnotesize$ \begin{pmatrix} 0 & 1 & 1 & \cdots & 1\\ \nicefrac12 & 0 & 1 & \cdots & 1 \\ \nicefrac13 & \nicefrac23 & 0 & \cdots & 1\\ \vdots & \vdots &\vdots&\ddots&\vdots \\ \nicefrac{1}{m} & \nicefrac{2}{m} & \nicefrac{3}{m} & \cdots & 0 \end{pmatrix}$}}, \] for which it holds $\sr{\zdiag{\mat P}}\ge1$ for any $m\ge 3$. \end{remark} \section{Sassenfeld index}\label{sc:SI} Consider a Sassenfeld preconditioner $\mat P\in\RR$. Recalling Rem.~\ref{rem:Mmatrix}, for a given matrix $\mat A\in\RR$, the vector defined by \begin{equation}\label{eq:s} \mat s(\mat A,\mat P):= (\I-\wP)^{-1}\abs{\diag(\mat P)}^{-1}\abs{\mat A-\mat P}\mat e\matge\mat 0, \end{equation} with~$\mat e\in\mathbb{R}^m$ from~\eqref{eq:e}, contains only non-negative components. \begin{definition}[Sassenfeld index] The \emph{Sassenfeld index} of a matrix $\mat A\in\RR$ with respect to a Sassenfeld preconditioner $\mat P$ is defined by $\mu(\mat A,\mat P):=\norm{\mat s(\mat A,\mat P)}$, with the vector $\mat s(\mat A,\mat P)$ from~\eqref{eq:s}. \end{definition} The essence of the Sassenfeld index defined above is that it allows one to control the norm $\norm{\I-\imat P\mat A}$; see Prop.~\ref{pr:bound} below. This is crucial, for instance, in the convergence analysis of iterative linear solvers, where $\mat P$ takes the role of a preconditioner; see Prop.~\ref{pr:splitting} later on. We note that the vector $\mat s(\mat A,\mat P)$ from~\eqref{eq:s} can be computed approximately by iteration. 
Indeed, if $\mat P$ is a Sassenfeld preconditioner, then the iterative scheme given by \begin{equation}\label{eq:sk} \mat s^{k+1}=\wP\mat s^k+\abs{\diag(\mat P)}^{-1}\abs{\mat A-\mat P}\mat e,\qquad k\ge 0, \end{equation} converges to the vector $\mat s(\mat A,\mat P)$ from~\eqref{eq:s} for any initial vector $\mat s^0\in\mathbb{R}^m$. The following result provides a practical upper bound for the Sassenfeld index. \begin{proposition}[Iterative estimation]\label{pr:sk} Consider a matrix $\mat A\in\RR$, and a Sassenfeld preconditioner $\mat P\in\RR$. Then, there exists a vector $\mat s^0\in\R$ such that \begin{equation}\label{eq:20210326} \abs{\diag(\mat P)}^{-1}\abs{\mat A-\mat P}\mat e\matle (\I-\wP)\mat s^0. \end{equation} Furthermore, if the iteration~\eqref{eq:sk} is initiated by $\mat s^0$ (for $k=0$), then it holds the bound $\mu(\mat A,\mat P)\le\norm{\mat s^{k}}$ for all $k\ge 0$. \end{proposition} \begin{proof} We proceed in two steps. \begin{enumerate}[1.] \item We first establish the existence of $\mat s^0$. To this end, choose $\epsilon>0$ sufficiently small such that, for the vector $ \mat z_\epsilon:=\epsilon\mat e+(\I-\wP)^{-1}\mat e, $ it holds \[ (\I-\wP)\mat z_{\epsilon}=\epsilon(\mat I-\wP)\mat e+\mat e\matge\frac12\mat e. \] Furthermore, let $\alpha>0$ be large enough so that \[ \abs{\diag(\mat P)}^{-1}\abs{\mat A-\mat P}\mat e\matle\alpha\mat e. \] Then, defining $\mat s^0:=2\alpha\mat z_{\epsilon}$, we obtain the estimate \[ (\I-\wP)\mat s^0 \matge\alpha\mat e\matge\abs{\diag(\mat P)}^{-1}\abs{\mat A-\mat P}\mat e, \] which is~\eqref{eq:20210326}. \item Next, from~\eqref{eq:sk} with $k=0$, we have \[ \mat s^1-\mat s^0=-(\I-\wP)\mat s^0+\abs{\diag(\mat P)}^{-1}\abs{\mat A-\mat P}\mat e\matle\mat 0. \] Hence, by induction, since $\wP\matge\mat 0$, from \eqref{eq:sk} we note that \[ \mat s^{k+1}-\mat s^{k}= \wP(\mat s^{k}-\mat s^{k-1})\matle\mat 0\qquad\forall~ k\ge 1. 
\] Using that $\sr{\wP}<1$, we infer that~\eqref{eq:sk} converges to $\mat s(\mat A,\mat P)$ from~\eqref{eq:s}. Moreover, from~\eqref{eq:s} and~\eqref{eq:sk} we deduce the identity \begin{align} (\I-\zdiag{\mat P})\mat s(\mat A,\mat P) &=\abs{\diag(\mat P)}^{-1}\abs{\mat A-\mat P}\mat e\label{eq:s'}\\ &=\mat s^{k+1}-\zdiag{\mat P}\mat s^k\nonumber\\ &=(\I-\zdiag{\mat P})\mat s^{k+1}+\zdiag{\mat P}(\mat s^{k+1}-\mat s^k),\nonumber \end{align} for all $k\ge 0$. Exploiting that $(\I-\wP)^{-1}\wP\matge\mat 0$, this implies that \begin{align*} \mat s(\mat A,\mat P) &=\mat s^{k+1}+(\I-\wP)^{-1}\wP(\mat s^{k+1}-\mat s^k)\matle\mat s^{k+1}. \end{align*} Since $\mat s(\mat A,\mat P)$ and $\mat s^{k+1}$ are both non-negative, the asserted bound follows. \end{enumerate} \end{proof} The following estimate, which provides a connection between the $\infty$-norm and the Sassenfeld index, is a crucial observation for some of our subsequent results. \begin{proposition}\label{pr:bound} Let $\mat A\in\RR$ be an invertible matrix, and $\mat P\in\RR$ a Sassenfeld preconditioner. Then, it holds that \begin{equation}\label{eq:bound} \norm{\I-\imat P\mat A} \le\mu(\mat A,\mat P). \end{equation} \end{proposition} \begin{proof} Consider an arbitrary vector $\mat y\in\R$ with $\norm{\mat y}=1$. Defining $\mat R=\mat P-\mat A$, we let \begin{equation}\label{eq:aux20201006a} \mat x=\imat P\mat R\mat y=\imat P(\mat P-\mat A)\mat y=(\I-\imat P\mat A)\mat y. \end{equation} Then, we have $ \diag(\mat P)\mat x+\off(\mat P)\mat x=\mat R\mat y. $ Taking moduli results in \[ \abs{\diag(\mat P)}(\I-\wP)\abs{\mat x} =\left(\abs{\diag(\mat P)}-\abs{\off(\mat P)}\right)\abs{\mat x}\matle\abs{\mat R}\abs{\mat y}\matle\abs{\mat R}\mat e. \] Recalling Rem.~\ref{rem:Mmatrix} and~\eqref{eq:s}, we deduce that \[ \abs{\mat x}\matle (\I-\wP)^{-1}\abs{\diag(\mat P)}^{-1}\abs{\mat R}\mat e=\mat s(\mat A,\mat P). 
\] Therefore, using~\eqref{eq:aux20201006a}, we infer that \[ \norm{(\I-\imat P\mat A)\mat y}=\norm{\mat x}\le\norm{\mat s(\mat A,\mat P)}, \] which yields~\eqref{eq:bound}. \end{proof} \begin{corollary}[Invertibility]\label{cor:inv} Let $\mat A$ be a matrix, and $\mat P$ a Sassenfeld preconditioner. Then, the matrix $\mat A_\tau=\mat A+\tau\mat P$ is non-singular whenever~$|\tau+1|>\mu(\mat A,\mat P)$. \end{corollary} \begin{proof} We argue by contradiction. To this end, suppose that there exists~$\mat v\in\mathbb{R}^m$, $\norm{\mat v}=1$, such that~$\mat A_\tau\mat v=\mat 0$. Then, it holds that $(\tau+1)\mat P\mat v=(\mat P-\mat A)\mat v$, and thus $(\tau+1)\,\mat v=\mat P^{-1}(\mat P-\mat A)\mat v$. Taking norms, and using~\eqref{eq:bound}, yields \[ |\tau+1|= \norm{(\I-\imat P\mat A)\mat v} \le\norm{\I-\imat P\mat A} \le\mu(\mat A,\mat P), \] which contradicts the assumption on $\tau$. \end{proof} \section{Sassenfeld matrices}\label{sc:sm} We are now ready to introduce the notion of Sassenfeld matrices. Our definition, see Def.~\ref{def:SF} below, is motivated by the work~\cite{BaumannWihler:17}, where the special case of all matrices $\mat A\in\RR$ with $\mu(\mat A,\mat P)<1$, with $\mat P=\tril(\mat A)+\diag(\mat A)$ being the Gau\ss-Seidel preconditioner, has been discussed. In this specific situation, the system \eqref{eq:s} takes the (lower-triangular) form \[ \abs{\diag(\mat A)}\mat s=\abs{\tril(\mat A)}\mat s+\abs{\triu(\mat A)}\mat e, \] which is a simple forward solve for~$\mat s$. Convergence of the Gau\ss-Seidel method is guaranteed if $\norm{\mat s}<1$; this is the key observation in Sassenfeld's original work~\cite{Sassenfeld:51}. More generally, for Sassenfeld preconditioners in the current paper, we propose the following definition.
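The Gau\ss-Seidel forward solve above is straightforward to implement. The following Python sketch computes the componentwise vector $s$ and the resulting index $\norm{\mat s}$; the tridiagonal test matrix is purely illustrative.

```python
# Sassenfeld vector for the Gauss-Seidel preconditioner P = tril(A) + diag(A):
#   |a_ii| s_i = sum_{j<i} |a_ij| s_j + sum_{j>i} |a_ij|,
# solved by forward substitution. The test matrix below is illustrative only.

def sassenfeld_index(A):
    m = len(A)
    s = [0.0] * m
    for i in range(m):
        lower = sum(abs(A[i][j]) * s[j] for j in range(i))   # already-computed s_j
        upper = sum(abs(A[i][j]) for j in range(i + 1, m))    # strict upper part
        s[i] = (lower + upper) / abs(A[i][i])
    return max(s)  # mu(A, P) = ||s||_inf

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
mu = sassenfeld_index(A)
# mu = 0.3125 < 1 here, so the Gauss-Seidel iteration converges for this matrix
```

Since $\mu<1$ for this example, Sassenfeld's criterion guarantees convergence of the Gau\ss-Seidel scheme for the chosen matrix.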
\begin{definition}[Sassenfeld matrices]\label{def:SF} A matrix $\mat A\in\RR$ is called a \emph{Sassenfeld matrix} if there exists a Sassenfeld preconditioner $\mat P\in\RR$ such that $\mu(\mat A,\mat P)<1$. \end{definition} From Cor.~\ref{cor:inv}, for $\tau=0$, we immediately deduce the following result. \begin{proposition} Every Sassenfeld matrix is invertible. \end{proposition} In the context of linear solvers, the following generalization of Sassenfeld's result \cite{Sassenfeld:51} on the Gau\ss-Seidel scheme is an immediate consequence of Prop.~\ref{pr:bound}. \begin{proposition}[Iterative solvers]\label{pr:splitting} For a Sassenfeld matrix $\mat A\in\RR$, and any given vector~$\mat b\in\mathbb{R}^m$, consider the linear system \begin{equation}\label{eq:Ax=b} \mat A\mat x=\mat b. \end{equation} Then, for a Sassenfeld preconditioner $\mat P\in\RRd$ with $\mu(\mat A,\mat P)<1$, and an arbitrary starting vector $\mat x_0\in\R$, the iterative scheme \begin{equation}\label{eq:it} \mat P\mat x_{n+1}=(\mat P-\mat A)\mat x_n+\mat b,\qquad n\ge 0, \end{equation} converges to the unique solution of~\eqref{eq:Ax=b}. Furthermore, the \emph{a priori} bound \[ \norm{\mat x-\mat x_{n}}\le\mu(\mat A,\mat P)^n\norm{\mat x-\mat x_{0}} \] holds for any $n\ge 0$. \end{proposition} The following proposition provides a condition number estimate for the preconditioned matrix $\imat P\mat A$ in terms of the Sassenfeld index. \begin{proposition}[Condition number bound] Suppose that $\mat A\in\RR$ is a Sassenfeld matrix, and $\mat P\in\RRd$ a Sassenfeld preconditioner with $\mu(\mat A,\mat P)<1$. Then, for the condition number with respect to the norm $\norm{\emptymat}$, the bound \[ \kappa_\infty(\imat P\mat A):=\norm{\left(\imat P\mat A\right)^{-1}}\norm{\imat P\mat A}\le\frac{1+\mu(\mat A,\mat P)}{1-\mu(\mat A,\mat P)} \] holds true. \end{proposition} \begin{proof} Let $\mat B:=\imat P\mat A$.
From Prop.~\ref{pr:bound}, we deduce the bound \[ \norm{\mat B} \le 1+\norm{\I-\mat B}\le 1+\mu(\mat A,\mat P). \] Moreover, applying a Neumann series, see, e.g., \cite[\S3.8]{Meyer:00}, we deduce the estimate \[ \norm{\mat B^{-1}} =\norm{\left(\I-(\I-\mat B)\right)^{-1}} \le\frac{1}{1-\norm{\I-\mat B}} \le\frac{1}{1-\mu(\mat A,\mat P)}. \] This concludes the proof. \end{proof} We will now establish the main result of this paper, Thm.~\ref{thm:main} below, which provides a characterization of Sassenfeld matrices. Before doing so, we notice the following fact. \begin{lemma}\label{lem:0} Any Sassenfeld matrix belongs to $\RRd$. \end{lemma} \begin{proof} Let $\mat A\in\RR$ be a Sassenfeld matrix, and $\mat P\in\RRd$ a Sassenfeld preconditioner with $\mu(\mat A,\mat P)<1$. Then, from~\eqref{eq:s'} we have \[ \left(\abs{\diag(\mat P)}-\abs{\off(\mat P)}\right)\mat s(\mat A,\mat P) =\abs{\mat A-\mat P}\mat e, \] with $\mat 0\matle\mat s(\mat A,\mat P)=(s_1,\ldots,s_m)\prec\mat e$. In components, this system reads \[ |p_{ii}|s_i-\sum_{j\neq i}|p_{ij}|s_j=\sum_{j=1}^m|a_{ij}-p_{ij}|,\qquad i=1,\ldots,m. \] Letting \begin{equation}\label{eq:ei} \epsilon_i:=\sum_{j=1}^m(1-s_j)|a_{ij}-p_{ij}|\ge0,\qquad i=1,\ldots,m, \end{equation} and rearranging terms, we observe the identity \[ \epsilon_i+\sum_{j\neq i} \left(|p_{ij}|+|a_{ij}-p_{ij}|\right)s_j =\left(|p_{ii}|-|a_{ii}-p_{ii}|\right)s_i, \] for each $i=1,\ldots, m$. Applying the triangle inequality on either side, it follows that \begin{equation}\label{eq:auxs0} \epsilon_i+\sum_{j\neq i} |a_{ij}|s_j \le |a_{ii}|s_i,\qquad i=1,\ldots,m. \end{equation} Fix $i\in\{1,\ldots,m\}$. If $\epsilon_i>0$ then it follows directly from~\eqref{eq:auxs0} that $a_{ii}\neq 0$. Otherwise, if $\epsilon_i=0$ then, from~\eqref{eq:ei} and the fact that $s_j<1$ for each $j=1,\ldots,m$, we infer that $p_{ij}=a_{ij}$ for all $j=1,\ldots,m$; in particular, for $j=i$, this shows that $a_{ii}=p_{ii}\neq 0$, which completes the proof.
\end{proof} \begin{theorem}[Characterization of Sassenfeld matrices]\label{thm:main} A matrix $\mat A\in\RR$ is a Sassenfeld matrix if and only if it is a Sassenfeld preconditioner. \end{theorem} \begin{proof} If $\mat A\in\RRd$ is a Sassenfeld preconditioner then $\mu(\mat A,\mat A)=0$, i.e. $\mat A\in\RRd$ is a Sassenfeld matrix. Conversely, suppose that $\mat A\in\RR$ is a Sassenfeld matrix. Then, due to Lemma~\ref{lem:0}, we know that $\mat A\in\RRd$. Hence, it remains to prove that $\sr{\zdiag{\mat A}}<1$. To this end, select a Sassenfeld preconditioner $\mat P\in\RRd$ with $\mu(\mat A,\mat P)<1$. Then, recalling~\eqref{eq:auxs0}, there are non-negative real numbers $0\le s_i<1$, $i=1,\ldots, m$, such that \begin{equation}\label{eq:auxs} \sum_{j\neq i} |a_{ij}|s_j \le |a_{ii}|(s_i-\delta_i),\qquad i=1,\ldots,m, \end{equation} with \begin{equation}\label{eq:di} \delta_i:=\frac{1}{|a_{ii}|}\sum_{j=1}^m(1-s_j)|a_{ij}-p_{ij}|\ge0,\qquad i=1,\ldots,m; \end{equation} cf.~\eqref{eq:ei}. Furthermore, since $\sr{\zdiag{\mat P}}<1$, by Lem.~\ref{lem:tool}, there is $\mat z\succ\mat 0$, scaled by $\mat e^\top\mat z=1$, such that $\zdiag{\mat P}\mat z\prec\mat z$, cf.~\eqref{eq:20211110a}. Equivalently, \begin{equation}\label{eq:auxz} \frac{1}{|p_{ii}|}\sum_{j\neq i} |p_{ij}|z_j < z_i, \end{equation} for each $i=1,\ldots,m$. Introduce a positive vector $\mat\t=(\t_1,\ldots,\t_m)\in\R$ by \[ \t_i:=\alpha s_i+z_i>0,\qquad i=1,\ldots,m, \] where $\alpha\ge 0$ will be specified later. Moreover, define the matrix $\mat B=\zdiag{\mat A}=[b_{ij}]\in\RR$ by \[ b_{ij}:=\begin{cases} 0&\text{for }i=j,\\ \nicefrac{|a_{ij}|}{|a_{ii}|}&\text{for }i\neq j, \end{cases} \] and \begin{align*} \mathfrak d_i:&=\sum_{j\neq i} \left(b_{ij}-\frac{|p_{ij}|}{|p_{ii}|}\right)z_j,\qquad 1\le i\le m. 
\end{align*} Then, for $1\le i\le m$, we have \begin{align*} \sum_{j=1}^m b_{ij}\t_j = \sum_{j\neq i} b_{ij}\t_j &=\frac{\alpha}{|a_{ii}|}\sum_{j\neq i}|a_{ij}|s_j +\frac{1}{|p_{ii}|}\sum_{j\neq i} |p_{ij}|z_j +\mathfrak d_i. \end{align*} Employing~\eqref{eq:auxs} and \eqref{eq:auxz}, we derive the estimate \[ \sum_{j=1}^m b_{ij}\t_j < \alpha(s_i-\delta_i)+z_i+\mathfrak d_i. \] Thus, we obtain \begin{equation}\label{eq:auxsz} \sum_{j=1}^m b_{ij}\t_j<\t_i-\alpha\delta_i+\mathfrak d_i, \end{equation} for each $i=1,\ldots,m$. Now let \begin{equation}\label{eq:alpha} \alpha\ge\max_{i\in\set I}\frac{|\mathfrak{d}_i|}{\delta_i}\ge 0, \end{equation} where $\set I$ signifies the set of all indices $1\le i\le m$ for which $\delta_i>0$ in \eqref{eq:di}; we let $\alpha=0$ if $\set I=\emptyset$. We distinguish two separate cases for each $1\le i\le m$: \begin{enumerate}[(i)] \item If $\delta_i=0$ then exploiting that $0\le s_j<1$ for each $j=1,\ldots,m$, we notice from~\eqref{eq:di} that $a_{ij}=p_{ij}$ for all $j=1,\ldots,m$. Hence, we find $\mathfrak d_i=0$. \item Otherwise, if $\delta_i>0$ then recalling $\alpha$ from~\eqref{eq:alpha}, we infer that \[ -\alpha\delta_i+\mathfrak{d}_i =\delta_i\left(-\alpha+\frac{\mathfrak d_i}{\delta_i}\right) \le\delta_i\left(-\alpha+\frac{|\mathfrak d_i|}{\delta_i}\right) \le 0. \] \end{enumerate} In summary, from~\eqref{eq:auxsz}, we obtain that \[ \sum_{j=1}^m b_{ij}\t_j<\t_i\qquad\forall~i=1,\ldots,m. \] Therefore, we have shown that there exists a positive vector $\mat\t\succ\mat 0$ such that $\zdiag{\mat A}\mat\t=\mat B\mat\t\prec\mat\t$, which, by Lem.~\ref{lem:tool}, implies that $\sr{\zdiag{\mat A}}<1$; hence, $\mat A$ is a Sassenfeld preconditioner. \end{proof} \section{Spectral properties}\label{sc:spectral} As far as the eigenvalues of a Sassenfeld matrix are concerned, we establish a result that is related to the Gershgorin circle theorem (see, e.g., \cite[p.~498]{Meyer:00}).
To this end, for a center point $a\in\mathbb{R}$, $a\neq 0$, we define the open ball \[ \set B(a):=\{z\in\mathbb{C}:\,|z-a|<|a|\} \] in the complex plane~$\mathbb{C}$. \begin{theorem}[Spectrum of Sassenfeld matrices]\label{thm:sp} Let $\mat A\in\RR$ be a Sassenfeld matrix, and denote by $\sigma(\mat A)\subset\mathbb{C}$ its spectrum. Then, for any Sassenfeld preconditioner $\mat P\in\RRd$ with $\mu(\mat A,\mat P)<1$, the inclusion \[ \sigma(\mat A)\subset\bigcup_{i=1}^m\set B(p_{ii}) \] holds. \end{theorem} \begin{proof} Suppose that $\mat A\in\RR$ is a Sassenfeld matrix. Let $\lambda\in\sigma(\mat A)$ be an eigenvalue, and $\mat v=(v_1,\ldots,v_m)\in\C$ an associated eigenvector with $\norm{\mat v}=1$. Recalling~\eqref{eq:ss}, we can write \[ \off(\mat P)\mat v+(\mat A-\mat P)\mat v=(\lambda\I-\diag(\mat P))\mat v. \] Multiplying by $\diag(\mat P)^{-1}$, and taking moduli, we obtain \begin{align*} \abs{\lambda\diag(\mat P)^{-1}-\I}\abs{\mat v} &\matle\abs{\diag(\mat P)}^{-1}\abs{\mat A-\mat P}\mat e+\wP\abs{\mat v}. \end{align*} Moreover, recalling~\eqref{eq:s'}, we infer that \[ \left(\abs{\lambda\diag(\mat P)^{-1}-\I}-\I\right)\abs{\mat v}\matle (\I-\wP)\left(\mat s(\mat A,\mat P)-|\mat v|\right). \] Using that $(\I-\wP)^{-1}\matge\mat 0$, cf.~Rem.~\ref{rem:Mmatrix}, it follows that \[ (\I-\wP)^{-1}\left(\abs{\lambda\diag(\mat P)^{-1}-\I}-\I\right)\abs{\mat v}\matle \mat s(\mat A,\mat P)-|\mat v|. \] Since $\norm{\mat s(\mat A,\mat P)}=\mu(\mat A,\mat P)<1$ and $\norm{\mat v}=1$, there is an index $i\in\{1,\ldots,m\}$ such that $s_i(\mat A,\mat P)-|v_i|<0$. Therefore, the (diagonal) matrix $\abs{\lambda\diag(\mat P)^{-1}-\I}-\I$ has at least one negative diagonal entry, i.e. there exists $j\in\{1,\ldots,m\}$ with $ \abs{\nicefrac{\lambda}{p_{jj}}-1}<1. $ This concludes the proof. \end{proof} Using Thm.~\ref{thm:main}, we may choose $\mat P=\mat A$ in Thm.~\ref{thm:sp} in order to draw the following conclusion.
\begin{corollary} For any Sassenfeld matrix $\mat A\in\RR$ we have \[ \sigma(\mat A)\subset\bigcup_{i=1}^m\set B(a_{ii}). \] \end{corollary} \begin{remark} From the above corollary, we can deduce a few interesting properties about Sassenfeld matrices. \begin{enumerate}[(i)] \item If the diagonal entries of a Sassenfeld matrix are all positive or all negative, then its eigenvalues belong to the corresponding (open) half plane of $\mathbb{C}$. \item In particular, from (i), we infer that every symmetric Sassenfeld matrix with positive diagonal entries is symmetric positive definite. \item If $\mat A=[a_{ij}]\in\RR$ is a Sassenfeld matrix then the spectral radius of $\mat A$ satisfies the bound $\sr{\mat A}<2\max_{1\le i\le m}|a_{ii}|$. \end{enumerate} \end{remark} \section{Applications}\label{sc:appl} Our main Thm.~\ref{thm:main} allows for a straightforward construction of preconditioners of a Sassenfeld matrix. Indeed, consider a Sassenfeld matrix $\mat A\in\RR$, and define the set \[ \set P(\mat A):=\left\{ \mat P\in\RR:\,\diag(\mat P)=\diag(\mat A)\text{ and }\mat P\sqsubset\mat A \right\}, \] where we write $\mat P\sqsubset\mat A$ to mean that any non-diagonal entry of $\mat P$ is either zero or equals the corresponding entry of $\mat A$. For any $\mat P\in\set{P}(\mat A)$ observe that $\zdiag{\mat P}\matle\zdiag{\mat A}$, and consequently $\sr{\zdiag{\mat P}}\le\sr{\zdiag{\mat A}}<1$, cf. Thm.~\ref{thm:main}. In particular, any $\mat P\in\set{P}(\mat A)$ is a Sassenfeld preconditioner. \begin{proposition}\label{pr:prec} Let $\mat A\in\RR$ be a Sassenfeld matrix. Furthermore, let $\mat P\in\set{P}(\mat A)$ and $0\le\delta<1$ be such that \begin{equation}\label{eq:prec} (\zdiag{\mat A}-\zdiag{\mat P})\mat e\matle\delta(\I-\zdiag{\mat P})\mat e. \end{equation} Then, it holds that $\mu(\mat A,\mat P)\le\delta$. Note that \eqref{eq:prec} can be fulfilled trivially upon selecting $\mat P=\mat A$ and $\delta =0$.
\end{proposition} \begin{proof} Since $\diag(\mat P)=\diag(\mat A)$, we notice that \[ \abs{\diag(\mat A)}^{-1}\abs{\mat A-\mat P} =\abs{\diag(\mat A)^{-1}\mat A-\diag(\mat P)^{-1}\mat P} =\zdiag{\mat A}-\zdiag{\mat P}. \] Thus, the vector $\mat s(\mat A,\mat P)=(s_1,\ldots,s_m)\matge\mat 0$ from~\eqref{eq:s} satisfies \[ \mat s(\mat A,\mat P) =(\I-\wP)^{-1}(\zdiag{\mat A}-\zdiag{\mat P})\mat e =-(\I-\wP)^{-1}(\I-\zdiag{\mat A})\mat e+\mat e . \] In view of~\eqref{eq:prec}, we have the bound \[ (\I-\zdiag{\mat A})\mat e\matge(1-\delta)(\I-\zdiag{\mat P})\mat e, \] which, upon recalling that $(\I-\zdiag{\mat P})^{-1}\matge\mat 0$, cf.~Rem.~\ref{rem:Mmatrix}, implies that \[ (\I-\zdiag{\mat P})^{-1}(\I-\zdiag{\mat A})\mat e\matge(1-\delta)\mat e. \] Therefore, we infer that $\mat s(\mat A,\mat P)\matle\delta\mat e$, which shows that $\mu(\mat A,\mat P)=\norm{\mat s(\mat A,\mat P)}\le\delta$. This completes the proof. \end{proof} For the diagonal preconditioner $\mat P=\diag(\mat A)$, the bound \eqref{eq:prec} simply expresses that $\mat A$ is strictly diagonally dominant (by rows). Hence, in combination with Prop.~\ref{pr:splitting}, the above Prop.~\ref{pr:prec} recovers the well-known fact that the classical Jacobi iteration method is convergent for matrices of this type. We derive a slight generalization of this result for matrices $\mat A=[a_{ij}]\in\RRd$, which are not necessarily strictly diagonally dominant.
To this end, we define \[ \gamma_i:=\sum_{j\neq i}|a_{ij}|,\qquad i=1,\ldots,m, \] and suppose that the rows of $\mat A$ satisfy the bounds \begin{subequations} \begin{align} \gamma_1&<|a_{11}|,\label{eq:gamma1} \intertext{and, for $i=2,\ldots,m$,} \gamma_i&< |a_{ii}|\qquad\text{if }a_{i,i-1}=0,\label{eq:gamma2}\\ \gamma_i&\le |a_{ii}|\qquad\text{if }a_{i,i-1}\neq 0.\label{eq:gamma3} \end{align} \end{subequations} We claim that the lower bidiagonal preconditioner given by \[ \mat P=\begin{pmatrix} a_{11} &\\ a_{21} & a_{22} &\\ & \ddots & \ddots \\ & & a_{m,m-1} & a_{mm} \end{pmatrix} \] renders the iteration~\eqref{eq:it} convergent. Indeed, we will prove the estimate $ \mu(\mat A,\mat P)<1; $ cf.~Prop.~\ref{pr:splitting}. Due to the sparsity pattern of~$\mat P$, we note that the system~\eqref{eq:s} takes the simple form \begin{align*} s_1=\frac{\gamma_1}{|a_{11}|}, \qquad\text{and}\qquad s_i= \frac{1}{|a_{ii}|}\left(\gamma_i-(1-s_{i-1})|a_{i,i-1}|\right)\quad\text{for } 2\le i\le m. \end{align*} From~\eqref{eq:gamma1}, it follows that $0\le s_1<1$. Furthermore, for an index $i\in\{2,\ldots,m\}$, by induction, suppose that $0\le s_{j}<1$ for all $j=1,\ldots,i-1$. If $a_{i,i-1}=0$ then, from~\eqref{eq:gamma2}, we infer that \[ s_i\le\nicefrac{\gamma_i}{|a_{ii}|}<1. \] Otherwise, if $a_{i,i-1}\neq0$ then, using~\eqref{eq:gamma3}, we deduce that \[ s_i< \nicefrac{\gamma_i}{|a_{ii}|}\le 1. \] In conclusion, this shows that $\mu(\mat A,\mat P)=\max_{1\le i\le m}s_i<1$. \bibliographystyle{amsalpha} \bibliography{myrefs} \end{document}
\begin{document} \title{Dilution, decorrelation and scaling in radial growth} \author{Carlos Escudero} \affiliation{ICMAT (CSIC-UAM-UC3M-UCM), Departamento de Matem\'{a}ticas, Facultad de Ciencias, Universidad Aut\'{o}noma de Madrid, Ciudad Universitaria de Cantoblanco, 28049 Madrid, Spain} \begin{abstract} The dynamics of fluctuating radially growing interfaces is approached using the formalism of stochastic growth equations on growing domains. This framework reveals a number of dynamic features arising during surface growth. For fast growth, dilution, which spatially reorders the incoming matter, is responsible for the transmission of correlations. Its effects include the erasing of memory with respect to the initial condition, a partial regularization of geometrically originated instabilities, and the restoring of universality in some special cases in which the critical exponents depend on the parameters of the equation of motion. Together, these effects lie at the basis of the preservation of the Family-Vicsek scaling in radial interfaces, which is thus a direct consequence of dilution. This fast growth regime is also characterized by the spatial decorrelation of the interface, which in the case of radially growing interfaces naturally gives rise to rapid roughening and polyfractality, and suggests the advent of a self-similar fractal dimension. The center of mass fluctuations of growing clusters are also studied, and our analysis supports a strong violation of the Family-Vicsek scaling by the surface fluctuations of the Eden model. Consequently, this model would belong to a dilution-free universality class. \end{abstract} \pacs{64.60.Ht, 05.10.Gg, 05.40.-a, 68.35.Ct} \maketitle \section{Introduction} The study of fluctuating interfaces has occupied an important place within statistical mechanics in recent and not so recent times.
The origins of this interest are practical, due to the vast range of potential applications that this theory may have, and theoretical, as some of the universality classes discovered within this framework are claimed to play an important role in other areas of physics~\cite{barabasi}. While the great majority of works on this topic has concentrated on strip or slab geometries, it is true that at the very beginning of the theoretical studies on nonequilibrium growth one finds the seminal works by Eden, focused on radial shapes~\cite{eden1,eden2}. To a certain extent, the motivation for considering radial forms is related to biological growth, as for instance the Eden model can be thought of as a simplified description of a developing cell colony. The Eden and other related discrete models have been computationally analyzed along the years, and the results obtained have been put in the context of stochastic growth theory, see for instance~\cite{ferreira} and references therein. Apart from the interest in modelling, there is a genuine theoretical motivation in understanding the dynamics of growing radial clusters. The Eden model is actually a sort of first passage percolation~\cite{hammersley}, and the scaling limit of percolation models has been studied by means of field-theoretic approaches~\cite{hinrichsen} and stochastic processes like Schramm-Loewner evolution~\cite{sle}. A natural theoretical question to be answered is in which cases the Family-Vicsek scaling~\cite{family}, basic to describe planar growth processes, is able to capture the behavior of the surface fluctuations of growing radial clusters. The use of stochastic differential equations, widespread in the modelling of planar growth profiles, has not been so commonly employed in the case of radial growth.
A series of works constitute an exception to this rule \cite{kapral,batchelor,singha,cescudero1,cescudero2,escudero,escudero2}, as they proposed a partial differential equation with stochastic terms as a benchmark for analyzing the dynamics of radial interfaces. Because studying this sort of equation is complicated by the nonlinearities implied by reparametrization invariance, a simplified version in which only the substrate growth was considered was introduced in \cite{escuderojs}. Already in this case it was apparent that for rapidly growing interfaces dilution, which is responsible for matter redistribution as the substrate grows~\cite{maini}, propagates the correlations when large spatiotemporal scales are considered. It is also capable of erasing the memory effects that would otherwise arise; let us show how. In \cite{escuderojs} we considered the linear equation for stochastic growth on a growing domain \begin{equation} \label{gdomain} \partial_t h=-D \left( \frac{t_0}{t} \right)^{\zeta \gamma} |\nabla|^\zeta h -\frac{d\gamma}{t}h +\gamma F t^{\gamma-1}+ \left(\frac{t_0}{t}\right)^{d\gamma/2}\xi(x,t), \end{equation} where the domain grows following the power law $t^\gamma$, $\gamma>0$ is the growth index and $-(d\gamma/t)h$ is the term taking into account dilution~\cite{escuderojs}. Its Fourier transformed version, for $n \ge 1$, is \begin{equation} \frac{d h_n}{dt}=-D \left( \frac{t_0}{t} \right)^{\zeta \gamma} \frac{\pi^\zeta |n|^\zeta}{L_0^\zeta} h_n -\frac{d\gamma}{t}h_n + \left(\frac{t_0}{t}\right)^{d\gamma/2}\xi_n(t).
\end{equation} This equation can be readily solved for $\gamma > 1/\zeta$ and in the long time limit \begin{equation} h_n(t)=(t/t_0)^{-d \gamma} \exp \left[ \frac{D t_0}{1-\zeta \gamma} \frac{\pi^\zeta |n|^\zeta}{L_0^\zeta} \right] h_n(t_0)+ (t/t_0)^{-d \gamma} \int_{t_0}^t \left( \frac{\tau}{t_0} \right)^{d\gamma/2}\xi_n(\tau) d\tau, \end{equation} and so the dependence on the initial condition tends to zero as a power law for long times. This is, as mentioned, one of the consequences of dilution. If we considered the dilation transformation $x \to (t/t_0)^\gamma x$ we would find again Eq.~(\ref{gdomain}) but this time without the dilution term. The solution now becomes \begin{equation} h_n(t)= \exp \left[ \frac{D t_0}{1-\zeta \gamma} \frac{\pi^\zeta |n|^\zeta}{L_0^\zeta} \right] h_n(t_0)+ \int_{t_0}^t \left(\frac{t_0}{\tau}\right)^{d\gamma/2}\xi_n(\tau)d\tau, \end{equation} and so the dependence on the initial condition remains for all times. In the first case the long time solution becomes spatially uncorrelated, and in the second one only part of the initial correlations survive. As an abuse of language, we will talk about decorrelation in both cases. The memory effects that affect the solution in the no-dilution (or dilation) situation separate its behavior from the one dictated by the Family-Vicsek ansatz \cite{escuderojs,escuderoar}. For $\gamma < 1/\zeta$ the memory effects and the corresponding dependence on the initial condition disappear exponentially fast for long times as a consequence of the effect of diffusion. Dilution is also the mechanism that controls the amount of matter on the interface. Pure diffusion on a growing domain is described by the equation \begin{equation} \partial_t h= D \left( \frac{t_0}{t} \right)^{2 \gamma} \nabla^2 h -\frac{d \gamma}{t}h, \end{equation} in Eulerian coordinates $x \in [0,L_0]$ (see~\cite{escuderojs}) and where dilution has been taken into account. 
The total mass on the surface is conserved \begin{equation} \int_0^{L(t)} \cdots \int_0^{L(t)}h(y,t)dy= \left( \frac{t}{t_0} \right)^{d \gamma} \int_0^{L_0} \cdots \int_0^{L_0} h(x,t)dx= \int_0^{L_0} \cdots \int_0^{L_0}h(x,t_0)dx, \end{equation} where $y \equiv [L(t)/L_0]x$ denotes the Lagrangian coordinates. In the no-dilution situation we find \begin{equation} \int_0^{L(t)} \cdots \int_0^{L(t)}h(y,t)dy= \left( \frac{t}{t_0} \right)^{d \gamma} \int_0^{L_0} \cdots \int_0^{L_0} h(x,t)dx= \left( \frac{t}{t_0} \right)^{d \gamma} \int_0^{L_0} \cdots \int_0^{L_0}h(x,t_0)dx. \end{equation} This second case is pure dilation, which implies that not only the space grows, but also the interfacial matter grows at the same rate, in such a way that the average density remains constant. Note that this process of matter dilation, as well as the spatial growth, are deterministic processes. These calculations show that both dilution and dilation dynamics are physically motivated and have a number of measurable differences. It is worth remarking here that all previous works except~\cite{escuderojs} have exclusively considered dilation dynamics. Even in the different field of reaction-diffusion dynamics in which the dilution term was derived, the focus was on the limit in which it was irrelevant~\cite{maini}. This work is devoted to further exploring the consequences of dilution, dilation and decorrelation, and their effects on the scaling of radial interfaces. In some cases we will use radial stochastic growth equations, which may display instabilities~\cite{escudero2}, and explore the interplay of dilution with them. In other cases, when instabilities do not play a determinant role and for the sake of simplicity, we will consider stochastic growth equations on growing domains.
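The two mass balances above can be checked directly in a discretization. The following Python sketch (explicit Euler, periodic boundary conditions, $d=1$, $\gamma=1$; all parameter values are illustrative) integrates the diffusion equation with the dilution term in Eulerian coordinates and monitors the Lagrangian mass $(t/t_0)^{\gamma}\int h\,dx$, which should stay constant up to the discretization error.

```python
# Pure diffusion on a growing 1d domain in Eulerian coordinates x in [0, L0]:
#   dh/dt = D (t0/t)^(2 gamma) h_xx - (gamma/t) h,
# with periodic BCs. With the dilution term included, the Lagrangian mass
# (t/t0)^gamma * int h dx is conserved up to the O(dt) Euler stepping error.
# All parameter values below are illustrative.
import math

def evolve(h, D=0.1, gamma=1.0, t0=1.0, t_end=2.0, dt=1e-4, L0=1.0):
    m = len(h)
    dx = L0 / m
    t = t0
    while t < t_end - 1e-12:
        lap = [(h[(i - 1) % m] - 2.0 * h[i] + h[(i + 1) % m]) / dx ** 2
               for i in range(m)]
        h = [h[i] + dt * (D * (t0 / t) ** (2.0 * gamma) * lap[i]
                          - (gamma / t) * h[i]) for i in range(m)]
        t += dt
    return h, t

m = 64
h0 = [1.0 + 0.5 * math.sin(2.0 * math.pi * i / m) for i in range(m)]
h1, t1 = evolve(h0)
mass0 = sum(h0) / m              # Lagrangian mass at t = t0 (growth factor 1)
mass1 = t1 * sum(h1) / m         # Lagrangian mass at t = t1 (t0 = 1, gamma = 1)
# mass1 agrees with mass0 to within the first-order time-stepping error
```

Dropping the dilution term in the update reproduces the dilation balance instead: the Eulerian mass $\int h\,dx$ then stays constant while the Lagrangian mass grows as $(t/t_0)^{d\gamma}$.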
\section{Radial Random Deposition} \label{rrd} In order to construct radial growth equations one may invoke the reparametrization invariance principle~\cite{maritan1,maritan2}, as has already been done a number of times~\cite{kapral,batchelor,cescudero1,cescudero2,escudero,escudero2}. In the case of white Gaussian fluctuations, the $d$-dimensional spherical noise is given by \begin{eqnarray} \frac{1}{\sqrt[4]{g \left[\vec{\theta},r(\vec{\theta},t) \right]}}\xi(\vec{\theta},t), \qquad \left< \xi(\vec{\theta},t) \right>=0, \\ \left< \xi(\vec{\theta},t) \xi(\vec{\theta}',t') \right>= \epsilon \delta(\vec{\theta}-\vec{\theta}') \delta(t-t'), \end{eqnarray} where $g=\det(g_{ij})= \det(\partial_i \vec{r} \cdot \partial_j \vec{r})$ is the determinant of the metric tensor. Under the small gradient assumption $|\nabla_{\vec{\theta}} \,\, r| \ll r$ one finds $g \approx \mathcal{J}(r,\vec{\theta})^2$, where $\mathcal{J}$ is the Jacobian determinant of the change of variables from the Cartesian representation $(\vec{x},h)$ to the polar representation $(\vec{\theta},r)$. We also have the factorization $\mathcal{J}(r,\vec{\theta})^2= r^{2d} J(\vec{\theta})^2$, where $J$ is the Jacobian evaluated at $r=1$. The simplest growth process is possibly the radial random deposition model. If the growth rate is explicitly time dependent, then the growth equation reads \begin{equation} \label{rdeposition} \partial_t r = F \gamma t^{\gamma-1}+\frac{1}{r^{d/2}J(\vec{\theta})^{1/2}}\xi(\vec{\theta},t), \end{equation} in the absence of dilution. Here $r(\vec{\theta},t)$ is the radius value at the angular position $\vec{\theta}$ and time $t$, $F>0$ is the growth rate, $\gamma>0$ is the growth index, $d$ is the spatial dimension and $\xi$ is a zero mean Gaussian noise, whose correlation is given by \begin{equation} \left< \xi(\vec{\theta},t) \xi(\vec{\theta}',s) \right>= \epsilon \delta(\vec{\theta}-\vec{\theta}') \delta(t-s).
\end{equation} The equation for the first moment can be easily obtained, \begin{equation} \partial_t \left< r \right> = F \gamma t^{\gamma-1}, \end{equation} due to the It\^o interpretation, and integrated to get \begin{equation} \left< r(\vec{\theta},t) \right> = F t^{\gamma}, \end{equation} where we have assumed the radially symmetric initial condition $r(\vec{\theta},t_0)=F t_0^{\gamma}$ and $t_0 \le t$ is the absolute origin of time. It is difficult to obtain more information from the full equation~(\ref{rdeposition}), so we will perform a perturbative expansion. We assume the solution form \begin{equation} \label{snoise} r(\vec{\theta},t)= R(t)+ \sqrt{\epsilon}\rho_1(\vec{\theta},t), \end{equation} where the noise intensity $\epsilon$ will be used as the small parameter~\cite{gardiner}. Substituting this solution form into Eq.~(\ref{rdeposition}) we obtain the equations \begin{eqnarray} \partial_t R &=& F \gamma t^{\gamma-1}, \\ \partial_t \rho_1 &=& \frac{1}{F^{d/2}t^{\gamma d/2}} \frac{\eta(\vec{\theta},t)}{J(\vec{\theta})^{1/2}}, \end{eqnarray} where $\xi=\sqrt{\epsilon} \, \eta$. These equations have been derived assuming $\sqrt{\epsilon} \ll F t^\gamma$, a condition much more favorable (the more so the larger $\gamma$ is) than the usual time independent ones supporting small noise expansions~\cite{gardiner}. The solution to these equations can be readily computed \begin{eqnarray} R(t) &=& Ft^\gamma, \\ \left< \rho_1(\vec{\theta},t) \right> &=& 0, \\ \left< \rho_1(\vec{\theta},t) \rho_1(\vec{\theta}',s) \right> &=& \frac{F^{-d}}{1-\gamma d} \left[ \left( \min \{t,s\} \right)^{1-\gamma d}-t_0^{1-\gamma d}\right] \frac{\delta(\vec{\theta}-\vec{\theta}')}{J(\vec{\theta})}, \end{eqnarray} if $\gamma d \neq 1$ and where we have assumed a zero value for the initial perturbation.
If $\gamma d = 1$ the correlation becomes \begin{equation} \left< \rho_1(\vec{\theta},t) \rho_1(\vec{\theta}',s) \right> = \frac{1}{F^{d}} \mathrm{ln}\left[ \frac{\min \{t,s\}}{t_0} \right]\frac{\delta(\vec{\theta}-\vec{\theta}')}{J(\vec{\theta})}. \end{equation} Here $R$ is a deterministic function and $\rho_1$ is a zero mean Gaussian stochastic process that is completely determined by the correlations given above. The long time behavior of the correlations, given by the condition $t,s \gg t_0$, is specified by the following two-times and one-time correlation functions \begin{eqnarray} \left< \rho_1(\vec{\theta},t) \rho_1(\vec{\theta}',s) \right> &=& \frac{F^{-d}}{1-\gamma d}\left( \min\{ t,s \} \right)^{1-\gamma d} \frac{\delta(\vec{\theta}-\vec{\theta}')}{J(\vec{\theta})}, \\ \left< \rho_1(\vec{\theta},t) \rho_1(\vec{\theta}',t) \right> &=& \frac{F^{-d}}{1-\gamma d} t^{1-\gamma d} \frac{\delta(\vec{\theta}-\vec{\theta}')}{J(\vec{\theta})}, \end{eqnarray} if $\gamma d<1$, \begin{eqnarray} \left< \rho_1(\vec{\theta},t) \rho_1(\vec{\theta}',s) \right> &=& \frac{1}{F^{d}} \mathrm{ln} \left( \min\{ t,s \} \right) \frac{\delta(\vec{\theta}-\vec{\theta}')}{J(\vec{\theta})}, \\ \left< \rho_1(\vec{\theta},t) \rho_1(\vec{\theta}',t) \right> &=& \frac{1}{F^{d}} \mathrm{ln}(t) \frac{\delta(\vec{\theta}-\vec{\theta}')}{J(\vec{\theta})}, \end{eqnarray} if $\gamma d =1$, and finally \begin{equation} \left< \rho_1(\vec{\theta},t) \rho_1(\vec{\theta}',s) \right> = \frac{F^{-d}}{\gamma d-1}t_0^{1-\gamma d} \frac{\delta(\vec{\theta}-\vec{\theta}')}{J(\vec{\theta})}, \end{equation} when $\gamma d>1$. In this last case the correlation vanishes in the limit $t_0 \to \infty$. Note that the reparametrization invariance principle is not able to capture dilution effects and it reproduces pure dilation dynamics. In order to introduce dilution in the radial case we may use the following functional modification, which transforms Eq.
(\ref{rdeposition}) into \begin{equation} \label{rddilution} \partial_t r = F \gamma t^{\gamma-1} -\frac{\gamma d}{t}r + \frac{1}{r^{d/2}} \frac{\xi(\vec{\theta},t)}{J(\vec{\theta})^{1/2}}, \end{equation} whose first moment can be exactly calculated, again taking advantage of the It\^{o} interpretation of the noise term, yielding \begin{equation} \left< r(\vec{\theta},t) \right>=\frac{F}{d+1}t^\gamma. \end{equation} Performing, as in the former case, the small noise expansion $r=R+\sqrt{\epsilon}\rho_1$, we find again $R= \left< r \right>$. The perturbation obeys the equation \begin{equation} \partial_t \rho_1 = -\frac{\gamma d}{t}\rho_1+ \frac{(d+1)^{d/2}}{F^{d/2}t^{\gamma d/2}} \frac{\eta(\vec{\theta},t)}{J(\vec{\theta})^{1/2}}, \end{equation} and so the perturbation has zero mean and its long time correlation is given by \begin{equation} \left< \rho_1(\vec{\theta},t) \rho_1(\vec{\theta}',s) \right>= \frac{(d+1)^d}{F^d (\gamma d +1)} \min\{s,t\} \max\{s,t\}^{-\gamma d} \frac{\delta(\vec{\theta} - \vec{\theta}')}{J(\vec{\theta})}, \end{equation} a result that holds uniformly in $\gamma$. Note that, for all $\gamma>0$, the structure of the temporal correlation differs depending on whether the effect of dilution is considered or not. For instance, the characteristic length scale corresponding to a given angular difference is $\lambda = \max\{s,t\}^{\gamma} |\vec{\theta} - \vec{\theta}'|$ when dilution is present, and $\lambda = \min\{s,t\}^{\gamma} |\vec{\theta} - \vec{\theta}'|$ in the absence of dilution. One already sees in this example that the lack of dilution causes the appearance of memory effects on the growth dynamics. The first order correction in the small noise expansion $\rho_1$ is always a Gaussian stochastic process; an attempt to go beyond Gaussianity by deriving the second order correction is reported in appendix \ref{horder}.
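As a consistency check of the dilution correlation above, the equal-time variance per angular mode (dropping the angular factor $\delta(\vec{\theta}-\vec{\theta}')/J(\vec{\theta})$) obeys $v'(t)=-2(\gamma d/t)\,v+(d+1)^d F^{-d}t^{-\gamma d}$ with $v(t_0)=0$, which integrates to $v(t)=\frac{(d+1)^d}{F^d(\gamma d+1)}\,t^{-2\gamma d}\bigl(t^{\gamma d+1}-t_0^{\gamma d+1}\bigr)$, reproducing the quoted $\min\{s,t\}\max\{s,t\}^{-\gamma d}$ behavior at equal times for $t\gg t_0$. A short Python sketch (illustrative parameters $d=\gamma=F=1$) cross-checks the ODE against this closed form.

```python
# Equal-time variance v(t) of the dilution-case perturbation, per angular mode
# (the delta(theta - theta')/J(theta) factor is dropped):
#   dv/dt = -2 (gamma d / t) v + (d+1)^d F^(-d) t^(-gamma d),  v(t0) = 0.
# Exact solution: v(t) = C t^(-2 gamma d) (t^(gamma d + 1) - t0^(gamma d + 1)),
# with C = (d+1)^d / (F^d (gamma d + 1)). Parameter values are illustrative.

d, gamma, F, t0 = 1, 1.0, 1.0, 1.0
C = (d + 1) ** d / (F ** d * (gamma * d + 1))

def rhs(t, v):
    return -2.0 * gamma * d / t * v + (d + 1) ** d / (F ** d * t ** (gamma * d))

def rk4_step(t, v, dt):
    k1 = rhs(t, v)
    k2 = rhs(t + dt / 2, v + dt * k1 / 2)
    k3 = rhs(t + dt / 2, v + dt * k2 / 2)
    k4 = rhs(t + dt, v + dt * k3)
    return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

v, t, dt = 0.0, t0, 1e-3
for _ in range(1000):          # integrate from t0 = 1 to t = 2
    v = rk4_step(t, v, dt)
    t += dt
v_exact = C * t ** (-2 * gamma * d) * (t ** (gamma * d + 1) - t0 ** (gamma * d + 1))
# v matches v_exact to RK4 accuracy; for t >> t0 both approach C t^(1 - gamma d)
```

For the chosen parameters the exact value at $t=2$ is $1-t_0^2/t^2=3/4$, and the numerical integration reproduces it to Runge-Kutta accuracy.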
\section{Random Deposition and Diffusion} \label{rdad} Our next step, in order to approach more complex and realistic growth processes, is to add diffusion to a random deposition equation of growth. This sort of equation may be derived using reparametrization invariance as in~\cite{escudero2}. Following this reference and the former section, we perform a small noise expansion and concentrate on the equation for the Gaussian perturbation. In this section we will consider a number of cases which do not show instabilities; the study of the unstable cases is postponed to the next section. The equation for the perturbation in $d=1$ is~\cite{escudero2} \begin{equation} \label{perturbation} \partial_t \rho = \frac{D_\zeta}{(Ft^\gamma)^\zeta} \Lambda_\theta^\zeta \rho + \frac{1}{\sqrt{Ft^\gamma}}\eta(\theta,t), \end{equation} where $\Lambda_\theta^\zeta$ is a fractional differential operator of order $\zeta$, and dilution has not been considered. The dynamics for $\zeta>d$, which in turn implies that in the linear case the growth exponent $\beta > 0$ and the interface is consequently rough, has already been considered in~\cite{escuderojs}; herein we move to studying the marginal case $\zeta=d$, which turns out to have interesting properties. The case $\zeta < d$ is not so interesting, as it corresponds to flat interfaces; a calculation analogous to the corresponding one in~\cite{escudero2} for $\gamma = 1$ and $\zeta < 1$ shows \begin{equation} \left< \rho(\theta,t)\rho(\theta',s) \right> \to 0 \qquad \mathrm{when} \qquad t,s \to \infty, \end{equation} independently of the value of $t_0$. If $\zeta=\gamma=1$ the correlation reads \begin{equation} \left< \rho(\theta,t)\rho(\theta',s) \right>=\frac{1}{4 \pi D}\mathrm{ln}\left[ \frac{(ts)^{D/F}}{(s/t)^{D/F}+(t/s)^{D/F}-2\cos(\theta-\theta')} \right].
\end{equation} The one time correlation adopts the form \begin{equation} \left< \rho(\theta,t)\rho(\theta',t) \right>=\frac{1}{4 \pi D} \mathrm{ln}\left[ \frac{t^{2D/F}}{2-2\cos \left( \theta-\theta' \right)} \right], \end{equation} that reduces to \begin{equation} \label{onetime} \left< \rho(\theta,t)\rho(\theta',t) \right> \approx \frac{1}{2 \pi F} \mathrm{ln}\left( \frac{t}{\left| \theta-\theta' \right|^{F/D}} \right), \end{equation} when we consider local in space dynamics, that is, in the limit $\theta \approx \theta'$. Note that this result allows us to define the local dynamic exponent $z_{loc}=F/D \in (0,\infty)$, which depends continuously on the equation parameters $F$ and $D$, and is thus nonuniversal, as we noted in~\cite{escudero2}. In terms of the arc-length variable $\ell-\ell'=t(\theta -\theta')$ we find \begin{equation} \left< \rho(\ell,t)\rho(\ell',t) \right> \approx \frac{F^{-1}+D^{-1}}{2 \pi} \mathrm{ln}\left( \frac{t}{\left| \ell-\ell' \right|^{F/(D+F)}} \right), \end{equation} where the dynamical exponent in terms of the arc-length variable $z_\ell =F/(D+F) \in (0,1)$ is again nonuniversal. If we take into account dilution, Eq.~(\ref{perturbation}) transforms into \begin{equation} \partial_t \rho = \frac{D}{Ft} \Lambda_\theta \rho -\frac{1}{t}\rho + \frac{1}{\sqrt{Ft}}\eta(\theta,t). \end{equation} The solution has zero mean and its correlation is given by \begin{eqnarray} \left< \rho(\theta,t)\rho(\theta',s) \right> = \frac{\min\{s,t\}/\max\{s,t\}}{4 \pi F} + \hspace{8.5cm} \\ \frac{(\min\{s,t\}/\max\{s,t\})^{1+D/F}}{2 \pi (F+D)} \Re \left\{ e^{i(\theta-\theta')} {_2F_1}\left[ 1,1+\frac{F}{D};2+\frac{F}{D}; e^{i(\theta-\theta')} \left( \frac{\min\{s,t\}}{\max\{s,t\}} \right)^{D/F} \right] \right\}, \nonumber \end{eqnarray} where $\Re(\cdot)$ denotes the real part and ${_2F_1}(\cdot,\cdot;\cdot;\cdot)$ is the Gauss hypergeometric function~\cite{stegun}.
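The small-angle reduction leading to Eq.~(\ref{onetime}) follows from $2-2\cos(\theta-\theta') \approx (\theta-\theta')^2$; it can be checked numerically, with arbitrary parameter values, as in the following sketch:

```python
import math

# Numerical check (sketch) that for small angular separations the one-time
# correlation (1/(4 pi D)) ln[t^(2D/F)/(2 - 2cos(dtheta))] reduces to the
# local form (1/(2 pi F)) ln[t/|dtheta|^(F/D)]. F, D, t, dtheta arbitrary.
F, D, t = 1.5, 0.8, 1.0e3
dtheta = 1e-4
full = 1.0 / (4 * math.pi * D) * math.log(t**(2 * D / F) / (2 - 2 * math.cos(dtheta)))
local = 1.0 / (2 * math.pi * F) * math.log(t / abs(dtheta)**(F / D))
```

For $|\theta-\theta'| = 10^{-4}$ the two expressions agree to many significant digits, as the correction to the quadratic expansion of the cosine is of relative order $(\theta-\theta')^2$.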
This correlation, for $s=t$ and for small angular scales $\theta \approx \theta'$, becomes at leading order \begin{equation} \left< \rho(\theta,t)\rho(\theta',t) \right> \approx \frac{-1}{2 \pi D} \mathrm{ln} \left(|\theta-\theta'| \right), \end{equation} which is time independent, and for the arc-length variable \begin{equation} \left< \rho(\ell,t)\rho(\ell',t) \right> \approx \frac{1}{2 \pi D} \mathrm{ln}\left(\frac{t}{|\ell-\ell'|}\right), \end{equation} for which the planar scaling and the universal dynamical exponent $z=1$ are recovered, see Eq. (C5) in \cite{escudero2}. This is yet another example, this time of a different nature, of how dilution is able to restore the Family-Vicsek ansatz~\cite{escuderojs,escuderoar}. If $\zeta=1$ and $\gamma < 1$ we find the following correlation function \begin{eqnarray} \nonumber \left< \rho(\theta,t)\rho(\theta',s) \right>=\frac{[\min\{t,s\}]^{1-\gamma}}{2 \pi F (1-\gamma)}-\frac{1}{4 \pi D} \mathrm{ln}\left\{1+\exp \left[ -\frac{2D}{F(1-\gamma)} \left|t^{1-\gamma}-s^{1-\gamma} \right| \right] \right. \\ \left. -2 \exp \left[ -\frac{D}{F(1-\gamma)} \left|t^{1-\gamma}-s^{1-\gamma} \right| \right]\cos(\theta-\theta') \right\}. \end{eqnarray} When $t=s$ we get \begin{equation} \left< \rho(\theta,t)\rho(\theta',t) \right>= \frac{t^{1-\gamma}}{2 \pi F(1-\gamma)}-\frac{1}{4 \pi D} \ln \left[ 2-2\cos \left( \theta-\theta' \right) \right], \end{equation} and considering local spatial dynamics we arrive at \begin{equation} \label{onetime2} \left< \rho(\theta,t)\rho(\theta',t) \right> \approx \frac{t^{1-\gamma}}{2 \pi F(1-\gamma)}-\frac{1}{2 \pi D} \mathrm{ln} \left( \left| \theta-\theta' \right| \right)= \frac{1}{2 \pi F (1-\gamma)} \mathrm{ln} \left[ \frac{e^{t^{1-\gamma}}}{|\theta-\theta'|^{F(1-\gamma)/D}} \right], \end{equation} an expression that does not allow one to define a local dynamic exponent, or alternatively gives $z_{loc}=0$ due to the exponentially fast spreading of the correlations.
These last three expressions contain two clearly different terms. The first one is the zeroth mode component of the correlation, which does not achieve long time saturation. The second term is the nontrivial stationary part of the correlation generated along the evolution. As can be seen, both spatial and temporal correlations are generated. When the dilution term is taken into account we find the correlation \begin{eqnarray} \nonumber \left< \rho(\theta,t)\rho(\theta',s) \right> = \frac{\min\{t,s\}[\max \{t,s\}]^{-\gamma}}{2 \pi F (\gamma +1)}-\frac{1}{4 \pi D} \mathrm{ln}\left\{1+\exp \left[ -\frac{2D}{F(1-\gamma)} \left|t^{1-\gamma}-s^{1-\gamma} \right| \right] \right. \\ \left. -2 \exp \left[ -\frac{D}{F(1-\gamma)} \left|t^{1-\gamma}-s^{1-\gamma} \right| \right]\cos(\theta-\theta') \right\}. \,\,\,\, \end{eqnarray} When $t=s$ we get \begin{equation} \left< \rho(\theta,t)\rho(\theta',t) \right>= \frac{t^{1-\gamma}}{2 \pi F(\gamma +1)}-\frac{1}{4 \pi D} \ln \left[ 2-2\cos \left( \theta-\theta' \right) \right], \end{equation} and considering local spatial dynamics we arrive at \begin{equation} \left< \rho(\theta,t)\rho(\theta',t) \right> \approx \frac{t^{1-\gamma}}{2 \pi F(\gamma +1)}-\frac{1}{2 \pi D} \mathrm{ln} \left( \left| \theta-\theta' \right| \right)= \frac{1}{2 \pi F (\gamma +1)} \mathrm{ln} \left[ \frac{e^{t^{1-\gamma}}}{|\theta-\theta'|^{F(\gamma+1)/D}} \right], \end{equation} and we see that, as in the former case, both the prefactor and the exponent are modified, but the still exponentially fast propagation of correlations implies an effective local dynamical exponent $z_{loc}=0$. Note that for $\gamma > 1$ a radial random deposition behavior for large spatial scales is recovered. Now we move on to the two-dimensional setting.
As in the one-dimensional case we focus on the marginal situation $d=\zeta=2$, which leads us to call this sort of equation a spherical Edwards-Wilkinson (EW) equation, and $0< \gamma \le 1/2$, as greater values of the growth index lead again to decorrelation. The straightforward generalization of Eq.~(\ref{perturbation}) is \begin{equation} \label{SEW1} \partial_t \rho = \frac{K}{(Ft^\gamma)^2} \nabla^2 \rho + \frac{1}{Ft^\gamma \sqrt{\sin(\theta)}}\eta(\theta,\phi,t), \end{equation} where the noise is a Gaussian random variable of zero mean and correlation given by \begin{equation} \left< \eta(\theta,\phi,t)\eta(\theta',\phi',s) \right>=\delta(\theta-\theta')\delta(\phi-\phi')\delta(t-s). \end{equation} In this case, if $\gamma < 1/2$, the random variable $\rho$ is a zero mean Gaussian process whose correlation is given by \begin{eqnarray} \nonumber \left< \rho(\theta,\phi,t)\rho(\theta',\phi',s) \right>=\frac{\left[ \min(t,s) \right]^{1-2\gamma}}{4\pi F^2 (1-2\gamma)}+ \\ \label{onetime3} \sum_{l=1}^\infty \sum_{m=-l}^l \frac{(-1)^m}{2K(l+l^2)}\exp \left[-\frac{K(l+l^2)}{F^2(1-2\gamma)}\left|t^{1-2\gamma}- s^{1-2\gamma}\right|\right]Y_{-m}^l(\theta,\phi)Y_m^l(\theta',\phi'), \end{eqnarray} where the expansion has been performed on the spherical harmonics basis $Y_{m}^l(\theta,\phi)$. If $\gamma = 1/2$ then $\rho$ becomes a zero mean Gaussian random variable with the new correlation \begin{eqnarray} \nonumber \left< \rho(\theta,\phi,t)\rho(\theta',\phi',s) \right>=\frac{\mathrm{ln}\left[ \min(t,s) \right]}{4\pi F^2}+ \\ \label{corrmar} \sum_{l=1}^\infty \sum_{m=-l}^l \frac{(-1)^m}{2K(l+l^2)}\left[ \frac{\min(s,t)}{\max(s,t)}\right]^{K(l+l^2)/F^2} Y_{-m}^l(\theta,\phi)Y_m^l(\theta',\phi').
\end{eqnarray} It is clear that these correlations are again composed of two different terms: the first one, associated with the $l=0$ mode, never saturates, while the second one, associated with the rest of the modes $l>0$, saturates and is responsible for a non-trivial spatial structure. Taking into account dilution we find for $\gamma < 1/2$ the correlation \begin{eqnarray} \nonumber \left< \rho(\theta,\phi,t)\rho(\theta',\phi',s) \right>=\frac{ \min(t,s) \left[ \max(t,s) \right]^{-2\gamma}}{4\pi F^2 (2\gamma +1)}+ \\ \sum_{l=1}^\infty \sum_{m=-l}^l \frac{(-1)^m}{2K(l+l^2)}\exp \left[-\frac{K(l+l^2)}{F^2(1-2\gamma)}\left|t^{1-2\gamma}- s^{1-2\gamma}\right|\right]Y_{-m}^l(\theta,\phi)Y_m^l(\theta',\phi'), \end{eqnarray} and for $\gamma=1/2$ \begin{equation} \left< \rho(\theta,\phi,t)\rho(\theta',\phi',s) \right>= \sum_{l=0}^\infty \sum_{m=-l}^l \frac{(-1)^m}{2F^2+2K(l^2+l)}\left[ \frac{\min(s,t)}{\max(s,t)} \right]^{1+K(l^2+l)/F^2} Y_{-m}^l(\theta,\phi)Y_m^l(\theta',\phi'). \end{equation} In the two-dimensional situation we see that dilution also has a measurable effect, which is more pronounced in the critical $\gamma=1/2$ case. For this value all the modes in the correlation saturate and contribute to create a stationary spatial structure, as in the one-dimensional setting. It is difficult to establish more comparisons between the two dimensionalities, as the infinite sums that were explicit in $d=1$ become much more involved in $d=2$, due to the double series containing the spherical harmonics. We however conjecture that the modification of the scaling properties due to the effect of dilution in two dimensions is similar to the one explicitly observed in one dimension. \section{Instabilities} A spherical EW equation derived from the geometric principle of surface minimization was introduced in~\cite{escudero2}.
The corresponding equation for the radius $r(\theta,\phi,t)$ reads \begin{equation} \label{SEW2} \partial_t r = K \left[ \frac{\partial_\theta r}{r^2 \tan(\theta)}+ \frac{\partial_\theta^2 r}{r^2}+\frac{\partial_\phi^2 r}{r^2 \sin^2(\theta)}-\frac{2}{r}\right] + F\gamma t^{\gamma-1} +\frac{1}{r\sqrt{\sin(\theta)}}\xi(\theta,\phi,t). \end{equation} Performing the small noise expansion $r(\theta,\phi,t)=Ft+\rho(\theta,\phi,t)$ we find a linear equation which differs from Eq. (\ref{SEW1}) in that it has a destabilizing term coming from the fourth term in the drift of Eq. (\ref{SEW2}), see \cite{escudero2}. In this reference one can see that in the absence of dilution the $l=0$ mode is unstable and the $l=1$ modes are marginal, while the rest of the modes are stable. The effect of this sort of geometrically originated instability on the mean value of the stochastic perturbation, and alternative geometric variational approaches that avoid it, can be seen in~\cite{escudero2}; herein we will concentrate on its effect on correlations. Its effect on mean values can be easily deduced from them. In the long time limit and provided $\gamma < 1/2$, the perturbation is a Gaussian process whose correlation is given by \begin{eqnarray} \nonumber \left< \rho(\theta,\phi,t)\rho(\theta',\phi',s) \right>=\frac{1}{16\pi K}\exp \left[ \frac{2K(t^{1-2\gamma}+s^{1-2\gamma})}{F^2(1-2\gamma)}\right]+ \\ \nonumber \frac{3\left[ \min(t,s) \right]^{1-2\gamma}}{4\pi F^2 (1- 2\gamma)}\left[\cos(\theta)\cos(\theta')+\cos(\phi-\phi') \sin(\theta)\sin(\theta')\right]+ \\ \sum_{l=2}^\infty \sum_{m=-l}^l \frac{(-1)^m}{2K(l^2+l-2)}\exp\left[-\frac{K(l^2+l-2)}{F^2(1-2\gamma)} \left|t^{1-2\gamma}-s^{1-2\gamma}\right|\right]Y_{-m}^l(\theta,\phi)Y_m^l(\theta',\phi').
\label{unscorr1} \end{eqnarray} If $\gamma = 1/2$ the correlation shifts to \begin{eqnarray} \nonumber \left< \rho(\theta,\phi,t)\rho(\theta',\phi',s) \right>=\frac{(st/t_0^2)^{2K/F^2}}{16\pi K}+ \\ \nonumber \frac{3\mathrm{ln}\left[ \min(s,t) \right]}{4\pi F^2}\left[\cos(\theta)\cos(\theta')+\cos(\phi-\phi') \sin(\theta)\sin(\theta')\right]+ \\ \sum_{l=2}^\infty \sum_{m=-l}^l \frac{(-1)^m}{2K(l^2+l-2)}\left[ \frac{\min(s,t)}{\max(s,t)} \right]^{K(l^2+l-2)/F^2} Y_{-m}^l(\theta,\phi)Y_m^l(\theta',\phi'). \label{unscorr2} \end{eqnarray} In these cases the modes characterized by $l=0$ and $l=1$ do not saturate, and the rest of the modes $l>1$ saturate and create a non-trivial spatial structure. When $\gamma < 1/2$ the $l=1$ modes grow in time as a power law with the exponent $1-2\gamma$, while the $l=0$ mode grows exponentially fast. When $\gamma=1/2$ the $l=1$ modes grow logarithmically and the $l=0$ mode grows as a power law with the non-universal exponent $4K/F^2$. When we consider the effect of dilution, and for $\gamma < 1/2$, we find the correlation \begin{eqnarray} \nonumber \left< \rho(\theta,\phi,t)\rho(\theta',\phi',s) \right>=\frac{1}{16\pi K}\exp \left[ \frac{2K(t^{1-2\gamma}+s^{1-2\gamma})}{F^2(1-2\gamma)}\right]+ \\ \nonumber \frac{3 \min(t,s) \left[ \max(t,s) \right]^{-2\gamma}}{4\pi F^2 (2\gamma +1)}\left[\cos(\theta)\cos(\theta')+\cos(\phi-\phi') \sin(\theta)\sin(\theta')\right]+ \\ \sum_{l=2}^\infty \sum_{m=-l}^l \frac{(-1)^m}{2K(l^2+l-2)}\exp\left[-\frac{K(l^2+l-2)}{F^2(1-2\gamma)} \left|t^{1-2\gamma}-s^{1-2\gamma}\right|\right]Y_{-m}^l(\theta,\phi)Y_m^l(\theta',\phi'). 
\end{eqnarray} For $\gamma=1/2$ the correlation reads \begin{eqnarray} \nonumber \left< \rho(\theta,\phi,t)\rho(\theta',\phi',s) \right>=\frac{1}{4 \pi} \left< \rho_0^0(t) \rho_0^0(s) \right> + \\ \sum_{l=1}^\infty \sum_{m=-l}^l \frac{(-1)^m}{2F^2+2K(l^2+l-2)}\left[ \frac{\min(s,t)}{\max(s,t)} \right]^{1+K(l^2+l-2)/F^2} Y_{-m}^l(\theta,\phi)Y_m^l(\theta',\phi'), \end{eqnarray} where \begin{equation} \left< \rho_0^0(t) \rho_0^0(s) \right> = \left\{ \begin{array}{lll} (2F^2-4K)^{-1} \left( \min\{s,t\}/\max\{s,t\} \right)^{1-2K/F^2} & \mbox{\qquad if \qquad $F^2>2K$}, \\ \mathrm{ln}(\min\{s,t\})/F^2 & \mbox{\qquad if \qquad $F^2=2K$}, \\ (4K-2F^2)^{-1} \left( t s/t_0^2 \right)^{2K/F^2-1} & \mbox{\qquad if \qquad $F^2<2K$}, \end{array} \right. \end{equation} where $t_0$ is the absolute origin of time. Contrary to what happens in the stable case, Eq.~(\ref{SEW1}), in the unstable case with no dilution, Eq.~(\ref{SEW2}), the $l=0$ mode is unstable, showing an exponential growth, and the $l=1$ modes show an algebraic increase with the universal exponent $1-2\gamma$, provided $\gamma<1/2$; the rest of the modes are stable. The marginal value of the growth index $\gamma = 1/2$ translates into a power law increase of the $l=0$ mode with a non-universal exponent, while the $l=1$ modes grow logarithmically; the rest of the modes are again stable. It is clear that dilution has a stabilizing effect. Indeed, for $\gamma<1/2$ the $l=0$ mode is unchanged, but the $l=1$ modes, which still grow in time, experience a loss of memory effects. In the critical $\gamma=1/2$ situation the dilution effects are stronger. The $l=1$ modes, which formerly grew logarithmically, now become stable; the $l=0$ mode, which formerly showed an algebraic growth, now shows (non-universal) algebraic or logarithmic growth, or even saturation, depending on the relation between the values of the parameters of the spherical EW equation.
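The $F^2>2K$ branch of the $\rho_0^0$ correlation can be checked by quadrature: at $\gamma=1/2$ the diluted $l=0$ mode obeys $d\rho/dt = (2K/F^2-1)\rho/t + \eta_0(t)/(F\sqrt{t})$, whose solution by variation of constants gives $\left<\rho(t)\rho(s)\right> = (ts)^p F^{-2}\int_{t_0}^{\min} \tau^{-2p-1}\,d\tau$ with $p=2K/F^2-1$. A sketch with arbitrary parameter values, taking $t,s \gg t_0$:

```python
# Quadrature check (sketch) of the F^2 > 2K branch of the l=0 mode correlation
# at gamma = 1/2; trapezoidal rule for the variation-of-constants integral.
# Parameter values are arbitrary.
def mode0_correlation(t, s, F, K, t0, n=100000):
    p = 2 * K / F**2 - 1
    tmin = min(t, s)
    h = (tmin - t0) / n
    integral = sum((0.5 if i in (0, n) else 1.0) * (t0 + i * h)**(-2 * p - 1)
                   for i in range(n + 1)) * h
    return (t * s)**p * integral / F**2

F, K, t0 = 1.0, 0.3, 1.0                  # F^2 > 2K branch
t, s = 400.0, 1600.0                      # t, s >> t0
numeric = mode0_correlation(t, s, F, K, t0)
closed = (min(t, s) / max(t, s))**(1 - 2 * K / F**2) / (2 * F**2 - 4 * K)
```

The residual discrepancy is the transient $t_0$ contribution, which decays as $(t_0/\min\{t,s\})^{2-4K/F^2}$.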
In any case, even that of algebraic growth, this growth is always slower than in the no-dilution situation. Stable modes saturate, contributing to create a non-trivial spatial structure in the whole range $\gamma \le 1/2$. In summary, the effect of dilution is weakly stabilizing in the subcritical case, and stronger and more identifiable at criticality. Of course, the supercritical situation is characterized by an effective random deposition behavior in the large spatial scale. \section{Intrinsically spherical growth and rapid roughening} It is necessary to clarify the role of the diffusivity index $\zeta$. We have defined it as the order of the fractional differential operator taking mass diffusion into account, and so far we have referred to it as the key element triggering decorrelation. This has been an abuse of language, because we have assumed that the negative power of the radius (or its mean field analog $Ft^\gamma$ -- what really matters is the resulting power of the temporal variable) preceding this differential operator was exactly $\zeta$. This would not be the case if the diffusion constant were time or radius dependent, but also in some other cases, such as the Intrinsically Spherical (IS) equation derived from geometric variational principles in~\cite{escudero2}. This equation was obtained as a gradient flow pursuing the minimization of the interface mean curvature, and then by linearizing with respect to the different derivatives of the radius as given by the small gradient assumption~\cite{escudero2}. It is termed ``intrinsically spherical'' because it has no planar counterpart, as the nonlinearity becomes fundamental in any attempt to derive such a gradient flow in the Cartesian framework~\cite{escudero3}.
Note the similarity of this with other classical equations in this context: the EW equation is a gradient flow which minimizes the surface area, and the Mullins-Herring equation minimizes the interface square mean curvature~\cite{escudero2}; the IS equation, as mentioned, minimizes the interface mean curvature. It reads~\cite{escudero2} \begin{equation} \partial_t r= K \left[ \frac{\partial_\theta^2 r}{r^3}+\frac{\partial_\phi^2 r}{r^3 \sin^2(\theta)} +\frac{\partial_\theta r}{r^3 \tan(\theta)}-\frac{1}{r^2}\right] + F\gamma t^{\gamma-1} +\frac{1}{r\sqrt{\sin(\theta)}}\xi(\theta,\phi,t), \end{equation} and so $\zeta=2$ in this case; however, one finds a factor $r^{-3}$ in front of the diffusive differential operator, instead of the $r^{-2}$ factor characteristic of the EW equation. This difference will have a number of measurable consequences, as we will show in the following. The equation for the stochastic perturbation reads in this case \begin{equation} \frac{d \rho^l_m}{dt}=\frac{K}{F^3 t^{3 \gamma}}[2-l(l+1)]\rho^l_m -\frac{2 \gamma}{t}\rho^l_m + \frac{1}{Ft^\gamma}\eta^l_m(t), \end{equation} which reveals that the critical value of the growth index is $\gamma=1/3$; faster growth leads to decorrelation. This is the first but not the only difference with respect to the EW equation. To find out more we will first put things in a broader context. A more general equation for radial growth, after introducing dilution, would be \begin{equation} \label{damping} \partial_t r= -\frac{K}{r^\delta} |\nabla|^\zeta r -\frac{\gamma d}{t}r + F\gamma t^{\gamma-1} +\frac{\sqrt{\epsilon}}{\sqrt{r^d J(\vec{\theta})}}\eta(\vec{\theta},t), \end{equation} which defines the damping index $\delta$, differing from the diffusivity index $\zeta$ in general; note that Eq.~(\ref{damping}) has left aside the instability properties of the IS equation, which are analogous to those of the EW equation, and would add nothing to the last section's discussion.
For simplicity we will focus on values of the damping index fulfilling $\delta \ge \zeta$. This equation can be treated perturbatively for small $\epsilon$ following the previous sections' procedure and by introducing the hyperspherical harmonics $Y^{\vec{m}}_l(\vec{\theta})$, which obey the eigenvalue equation~\cite{wen} \begin{equation} \nabla^2 Y^{\vec{m}}_l(\vec{\theta})= -l(l+d-1)Y^{\vec{m}}_l(\vec{\theta}), \end{equation} where the vector $\vec{m}$ represents the set of $(d-1)$ indices. The fractional operator acts on the hyperspherical harmonics in the following fashion \begin{equation} |\nabla|^\zeta Y^{\vec{m}}_l(\vec{\theta})= [l(l+d-1)]^{\zeta/2}Y^{\vec{m}}_l(\vec{\theta}). \end{equation} The hyperspherical noise is Gaussian, has zero mean and its correlation is given by \begin{equation} \left< \eta(\vec{\theta},t) \eta(\vec{\theta}',t') \right>= \delta(\vec{\theta}-\vec{\theta}') \delta(t-t'). \end{equation} It can be expanded in terms of hyperspherical harmonics \begin{equation} \frac{\eta(\vec{\theta},t)}{\sqrt{J(\vec{\theta})}}=\sum_{l,\vec{m}} \eta_l^{\vec{m}}(t) Y^{\vec{m}}_l(\vec{\theta}), \end{equation} and the amplitudes are given by \begin{equation} \eta_l^{\vec{m}}(t)=\int \eta(\vec{\theta},t) \bar{Y}^{\vec{m}}_l(\vec{\theta}) \sqrt{J(\vec{\theta})} \, d\vec{\theta}, \end{equation} and so they are zero mean Gaussian noises whose correlation is given by \begin{equation} \left< \eta_l^{\vec{m}}(t) \bar{\eta}_{l'}^{\vec{m}'}(t') \right>= \delta(t-t') \delta_{l,l'} \delta_{\vec{m},\vec{m}'}, \end{equation} where the overbar denotes complex conjugation. Note that the amplitudes are in general complex valued. The perturbation amplitudes $\rho_l^{\vec{m}}$ obey the linear equation \begin{equation} \frac{d \rho_l^{\vec{m}}}{dt}=-\frac{K}{F^\delta t^{\delta \gamma}}[l(l+d-1)]^{\zeta/2}\rho_l^{\vec{m}} -\frac{\gamma d}{t}\rho_l^{\vec{m}} +\frac{1}{F^{d/2}t^{\gamma d/2}} \eta_l^{\vec{m}}(t).
\end{equation} From this equation it is clear that the critical value of the growth index is $\gamma=1/\delta$, and faster growth leads to decorrelation. It is convenient to move to a growing hypercubic geometry as in~\cite{escuderojs} in order to calculate different quantities \begin{equation} \label{hcubic} \partial_t h=-D \left( \frac{t_0}{t} \right)^{\delta \gamma} |\nabla|^\zeta h -\frac{d\gamma}{t}h +\gamma F t^{\gamma-1}+ \left(\frac{t_0}{t}\right)^{d\gamma/2}\xi(x,t), \end{equation} since this change simplifies calculations without modifying the leading results. Our goal is to find the growth and auto-correlation exponents, as the latter is a good quantity with which to measure decorrelation \cite{escuderojs}. In order to calculate the temporal correlations we need to consider the short time limit, where the growth exponent $\beta$ becomes apparent. The propagator of Eq.~(\ref{hcubic}) is \begin{equation} G_n(t)= \left(\frac{t}{t_0}\right)^{-d\gamma} \exp \left[ -\frac{n^\zeta \pi^\zeta D}{L_0^\zeta} \frac{t_0^{\gamma \delta} t^{1-\gamma \delta}-t_0}{1-\gamma \delta} \right], \end{equation} that yields the following complete solution when the initial condition vanishes: \begin{equation} h_n(t)=G_n(t) \int_{t_0}^{t} G_n^{-1}(\tau) \left(\frac{t_0}{\tau}\right)^{d \gamma/2} \xi_n(\tau) d\tau.
\end{equation} The one point two times correlation function then reads \begin{equation} \label{corrf} \langle h_n(t)h_n(t') \rangle \sim G_n(t) G_n(t') \int_{t_0}^{\min(t,t')} G_n^{-2}(\tau) \left( \frac{t_0}{\tau} \right)^{d \gamma} d\tau, \end{equation} and after inverting the Fourier transform we arrive at the real space expression \begin{equation} \label{realseries} \langle h(x,t)h(x,t') \rangle = \sum_{n=0}^\infty \langle h_n(t) h_n(t') \rangle \cos^2 \left( \frac{n \pi x}{L_0} \right), \end{equation} where we have assumed no flux boundary conditions as in \cite{escuderojs}, although the values of both the growth and auto-correlation exponents do not depend on the choice of boundary conditions. The propagator $G_n(t)$ suggests the scaling variable $v_n \sim nt^{(1-\gamma \delta)/\zeta}$ in Fourier space, that corresponds to the real space scaling variable $u \sim xt^{(-1+\gamma \delta)/\zeta}$, as can be read directly from Eq. (\ref{realseries}). This suggests the definition of the effective dynamical exponent $z_{\mathrm{eff}}= \zeta/(1-\gamma \delta)$. If we express the correlation Eq. (\ref{corrf}) for $t=t'$ in terms of the scaling variable $v_n$ (and we refer to it as $C(v_n)$ multiplied by a suitable power of $t$) and we introduce the ``differential'' $1 \equiv \Delta n \sim t^{(-1+\gamma \delta)/\zeta} \Delta v$, we can cast the last expression in the integral form \begin{equation} \label{avezero} \langle h(x,t)^2 \rangle -\langle h(x,t) \rangle^2 = t^{1-d/\zeta+\gamma d(\delta/\zeta-1)} \int_{v_1}^\infty C(v_n) \cos^2\left( \frac{v_n \pi u}{L_0} \right) dv_n, \end{equation} where the series converges as a Riemann sum to the above integral when \begin{equation} D t \ll (L_0^\zeta+D t_0){t^{\delta \gamma} \over t_0^{\delta \gamma}}, \end{equation} or equivalently $t \ll t_c \sim L_0^{z_{\mathrm{eff}}}$, where $t_c$ is the time it takes the correlations to reach the substrate boundaries, assuming that the substrate initial size is very large.
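The propagator quoted above can be verified directly: differentiating $G_n(t)$ numerically and comparing with the drift of the Fourier-transformed mode equation provides a quick sanity check. All parameter values in this sketch are arbitrary choices:

```python
import math

# Sanity check (sketch): numerically differentiate the propagator G_n(t) and
# compare with the right hand side of the mode equation
# dG/dt = -[D (t0/t)^(delta*gamma) (n*pi/L0)^zeta + d*gamma/t] G.
D, L0, t0 = 0.3, 10.0, 1.0
d, zeta, delta, gamma, n = 1, 2.0, 3.0, 0.25, 4

def G(t):
    a = D * (n * math.pi / L0)**zeta
    return (t / t0)**(-d * gamma) * math.exp(
        -a * (t0**(gamma * delta) * t**(1 - gamma * delta) - t0) / (1 - gamma * delta))

t, h = 2.0, 1e-6
lhs = (G(t + h) - G(t - h)) / (2 * h)                       # numerical dG/dt
rhs = -(D * (t0 / t)**(delta * gamma) * (n * math.pi / L0)**zeta + d * gamma / t) * G(t)
```

The central difference agrees with the drift term to the accuracy of the finite-difference approximation.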
If $\gamma <1/\delta$, the whole substrate becomes correlated, yielding a finite $t_c$; for $\gamma > 1/\delta$ the convergence of the Riemann sum to the integral is assured for all times, corresponding to the physical fact that the substrate never becomes correlated. In front of the integral we find a power of the temporal variable compatible with the growth exponent \begin{equation} \label{beta} \beta= \frac{1}{2}- \frac{d}{2\zeta}+ \frac{\gamma d}{2}\left( \frac{\delta}{\zeta}-1 \right), \end{equation} and the integral can be shown to be absolutely convergent, as the integrand decays faster than exponentially for large values of the scaling variable $v_n$. We are now in a position to calculate the temporal auto-correlation \begin{equation} \label{temporalc} A(t,t') \equiv \frac{\langle h(x,t)h(x,t')\rangle_0}{\langle h(x,t)^2 \rangle_0^{1/2} \langle h(x,t')^2 \rangle_0^{1/2}} \sim \left(\frac{\min\{t,t'\}}{\max\{t,t'\}}\right)^\lambda, \end{equation} where $\lambda$ is the auto-correlation exponent and $\langle \cdot \rangle_0$ denotes the average with the zeroth mode contribution suppressed, as in (\ref{avezero}). The remaining ingredient is the correlation $\langle h(x,t)h(x,t')\rangle_0$. Going back to Eq.~(\ref{realseries}) we see that the Fourier space scaling variable now reads \begin{equation} v_n=\left[\frac{t^{1-\gamma \delta} +(t')^{1-\gamma \delta} - 2 \tau^{1-\gamma \delta}}{1-\gamma \delta} \right]^{1/\zeta}n. \end{equation} If $\gamma < 1/\delta$ the term $\max\{t,t'\}^{1-\gamma \delta}$ is dominant and the factor in front of the convergent Riemann sum reads \begin{equation} \max\{t,t'\}^{(\delta/\zeta -1)\gamma d- d/\zeta} \min\{t,t'\}, \end{equation} after the time integration has been performed and in the limit $\max\{t,t'\} \gg \min\{t,t'\}$.
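The identification of $\beta$ rests on the power of $t$ in front of the integral in Eq.~(\ref{avezero}) equaling $2\beta$; the following sketch checks this algebraic consistency over randomly drawn parameter values:

```python
import random

# Consistency check (sketch): the power of t multiplying the integral in the
# variance, 1 - d/zeta + gamma*d*(delta/zeta - 1), must equal 2*beta with
# beta = 1/2 - d/(2*zeta) + (gamma*d/2)*(delta/zeta - 1). Parameters random.
random.seed(0)
checks = []
for _ in range(100):
    d = random.choice([1, 2, 3])
    zeta = random.uniform(d, 4.0)             # rough-interface regime zeta > d
    delta = random.uniform(zeta, zeta + 2.0)  # damping index delta >= zeta
    gamma = random.uniform(0.05, 1.0)
    beta = 0.5 - d / (2 * zeta) + (gamma * d / 2) * (delta / zeta - 1)
    prefactor_exponent = 1 - d / zeta + gamma * d * (delta / zeta - 1)
    checks.append(abs(prefactor_exponent - 2 * beta) < 1e-12)
```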
In this same limit, but when $\gamma > 1/\delta$, the term $\min\{t,t'\}^{1-\gamma \delta}$ becomes dominant and the prefactor reads \begin{equation} \max\{ t,t' \}^{-d\gamma} \min\{t,t'\}^{1- d/\zeta+d\gamma \delta/\zeta}. \end{equation} The resulting temporal correlation adopts the form indicated on the right hand side of (\ref{temporalc}), where \begin{equation} \lambda = \left\{ \begin{array}{ll} \beta + d/\zeta +\gamma d(1-\delta/\zeta) & \mbox{\qquad if \qquad $\gamma < 1/\delta$}, \\ \beta +\gamma d & \mbox{\qquad if \qquad $\gamma > 1/\delta$}, \end{array} \right. \end{equation} or alternatively \begin{equation} \label{lambda} \lambda= \beta + {d \over z_\lambda}, \end{equation} where the $\lambda-$dynamical exponent is defined as \begin{equation} z_\lambda = \left\{ \begin{array}{ll} \frac{\zeta}{1+\gamma(\zeta-\delta)} & \mbox{\qquad if \qquad $\gamma < 1/\delta$}, \\ 1/\gamma & \mbox{\qquad if \qquad $\gamma > 1/\delta$}. \end{array} \right. \end{equation} If we disregarded the effect of dilution we would find again Eq. (\ref{lambda}), but this time \begin{equation} z_\lambda = \left\{ \begin{array}{ll} \frac{\zeta}{1-\gamma \delta}=z_{\mathrm{eff}} & \mbox{\qquad if \qquad $\gamma < 1/\delta$}, \\ \infty & \mbox{\qquad if \qquad $\gamma > 1/\delta$}. \end{array} \right. \end{equation} To further clarify the dynamics we now calculate the scaling form that the two-point correlation function adopts for short spatial scales $|x-x'| \ll t^{(1-\delta \gamma)/\zeta}$ in the decorrelated regime. As dilution does not act on such a microscopic scale, the following results are independent of whether we consider dilution or not.
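The piecewise form of $\lambda$ and the compact form $\lambda=\beta+d/z_\lambda$ can be cross-checked directly; the sketch below encodes the quoted formulas and verifies the identity in both regimes for a few arbitrary parameter sets:

```python
# Consistency check (sketch) of lambda = beta + d/z_lambda in both regimes,
# using the piecewise forms of lambda and z_lambda quoted in the text.
def beta_exp(d, zeta, delta, gamma):
    return 0.5 - d / (2 * zeta) + (gamma * d / 2) * (delta / zeta - 1)

def lam(d, zeta, delta, gamma):
    b = beta_exp(d, zeta, delta, gamma)
    if gamma < 1 / delta:
        return b + d / zeta + gamma * d * (1 - delta / zeta)
    return b + gamma * d

def z_lambda(d, zeta, delta, gamma):
    if gamma < 1 / delta:
        return zeta / (1 + gamma * (zeta - delta))
    return 1 / gamma

ok = []
for (d, zeta, delta, gamma) in [(1, 2.0, 2.0, 0.3), (2, 2.0, 3.0, 0.2),
                                (1, 2.0, 3.0, 0.8), (2, 3.0, 4.0, 0.5)]:
    ok.append(abs(lam(d, zeta, delta, gamma) - beta_exp(d, zeta, delta, gamma)
                  - d / z_lambda(d, zeta, delta, gamma)) < 1e-12)
```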
In this case one has \begin{equation} \langle h(x,t)h(x',t) \rangle = \sum_{n_1, \cdots, n_d} \langle h_n^2(t) \rangle \cos \left( \frac{n_1 \pi x_1}{L_0} \right)\cos \left( \frac{n_1 \pi x_1'}{L_0} \right) \cdots \cos \left( \frac{n_d \pi x_d}{L_0} \right)\cos \left( \frac{n_d \pi x_d'}{L_0} \right), \end{equation} where $x=(x_1,\cdots,x_d)$ and $n=(n_1,\cdots,n_d)$, and we assume the rough interface inequality $\zeta > d$ in order to ensure the absolute convergence of this expression. By introducing the scaling variables $v_i=n_i t^{(1-\delta \gamma)/\zeta}$ and $u_i=x_i t^{(\gamma \delta-1)/\zeta}$ for $i=1,\cdots,d$ and assuming statistical isotropy and homogeneity of the scaling form we find \begin{equation} \langle h(x,t)h(x',t) \rangle - \langle h(x,t) \rangle^2 = |x-x'|^{\zeta-d} t^{\gamma(\delta-d)}\mathcal{F}\left[ |x-x'| t^{(\delta \gamma -1)/\zeta} \right], \end{equation} or in Lagrangian coordinates $|y-y'|=|x-x'|t^\gamma$ \begin{equation} \langle h(y,t)h(y',t) \rangle - \langle h(y,t) \rangle^2 = |y-y'|^{\zeta-d} t^{\gamma(\delta-\zeta)}\mathcal{F}\left[ \frac{|y-y'|}{t^{\{1+ \gamma(\zeta-\delta)\}/\zeta}} \right]. \end{equation} We see that this form is statistically self-affine with respect to the re-scaling $y \to b y$, $t \to b^z t$, and $h \to b^\alpha h$, where the critical exponents are \begin{equation} \label{critexp} \alpha=\frac{\zeta-d}{2}+\frac{\zeta}{1+\gamma(\zeta-\delta)}\frac{(\delta-\zeta)\gamma}{2}, \qquad z=\frac{\zeta}{1+\gamma(\zeta-\delta)}. \end{equation} Note that the scaling relation $\alpha=\beta z$ holds, where the growth exponent $\beta$ was calculated in Eq. (\ref{beta}). The macroscopic decorrelation, which is observed for length scales of the order of the system size $|x-x'| \approx L_0$, is controlled by the effective dynamical exponent $z_{\mathrm{eff}}$. When $\delta>\zeta$ decorrelation might happen at microscopic length scales $|x-x'| \ll t^{(1-\delta \gamma)/\zeta}$ as well.
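The scaling relation $\alpha=\beta z$ can be verified numerically from the formulas in Eq.~(\ref{critexp}) and Eq.~(\ref{beta}); the parameter sets in this sketch are arbitrary and kept below the microscopic-decorrelation threshold $\delta < \zeta + 1/\gamma$:

```python
# Numerical check (sketch) of the scaling relation alpha = beta*z with the
# critical exponents quoted in the text; sample parameters are arbitrary.
def exponents(d, zeta, delta, gamma):
    z = zeta / (1 + gamma * (zeta - delta))
    alpha = (zeta - d) / 2 + z * (delta - zeta) * gamma / 2
    beta = 0.5 - d / (2 * zeta) + (gamma * d / 2) * (delta / zeta - 1)
    return alpha, beta, z

results = [exponents(1, 2.0, 2.0, 0.5), exponents(2, 3.0, 4.0, 0.25),
           exponents(1, 2.0, 2.5, 0.4)]
```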
Microscopic decorrelation happens in the limit $\delta \to \zeta + 1/\gamma$. For $\delta < \zeta + 1/\gamma$ the interface is microscopically correlated and the critical exponents take on their finite values given in Eq. (\ref{critexp}). For $\delta \ge \zeta + 1/\gamma$ the interface is microscopically uncorrelated and the critical exponents diverge, $\alpha=z=\infty$, while the growth exponent is still finite and given by Eq. (\ref{beta}) (so one could say the scaling relation $\alpha= \beta z$ still holds in some sense in the microscopically uncorrelated limit). With respect to the growth exponent we can say that $\beta <1/2$ when $\delta < \gamma^{-1}+\zeta$, $\beta \to 1/2$ when $\delta \to \gamma^{-1}+\zeta$, and $\beta > 1/2$ when $\delta > \gamma^{-1}+\zeta$, so rapid roughening is a consequence of microscopic decorrelation. And now, by applying the developed theory to the IS equation, for which $d=2$, $\zeta=2$, $\delta=3$, and assuming as in~\cite{escudero2} that $\gamma=1$, we find that it is exactly positioned at the threshold of microscopic decorrelation, that is, its critical exponents are $\alpha=z=\infty$ and $\beta=1/2$. Note that the effective dynamical exponent $z_{\mathrm{eff}}=\zeta/(1-\gamma \delta)$ sets the speed at which both correlation and decorrelation occur. The transition from correlation to decorrelation is triggered by the comparison between the indices $\gamma$ and $\delta$. The derivation order $\zeta$ controls the speed at which both processes happen: a larger $\zeta$ implies slower correlation/decorrelation processes. Note also that rapid roughening might appear in exactly the same way in planar processes, just by allowing a field or time dependence of the diffusion constant.
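The IS-equation values quoted above amount to simple arithmetic, which the following sketch makes explicit:

```python
# Check (sketch) of the IS-equation case: with d=2, zeta=2, delta=3, gamma=1
# the equation sits exactly at the microscopic-decorrelation threshold
# delta = zeta + 1/gamma, and the growth exponent equals 1/2.
d, zeta, delta, gamma = 2, 2.0, 3.0, 1.0
at_threshold = (delta == zeta + 1.0 / gamma)
beta_is = 0.5 - d / (2 * zeta) + (gamma * d / 2) * (delta / zeta - 1)
```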
This could be thought of as somewhat artificial in some planar situations, but as we have shown it appears naturally in the radial case, where such a dependence is a straightforward consequence of the loss of translational invariance, due to the existence of an absolute origin of space, characterized by a zero radius (and which in turn implies the existence of an absolute origin of time in the small noise approximation, as we have already seen). This naturalness can be seen in the derivation of the IS equation in~\cite{escudero2}, where it was found as a consequence of a simple variational principle. \section{Polyfractality} We devote this section to showing that rapidly growing radial interfaces develop ``polyfractality''. We coin this term to denote a behavior characterized by a scale dependent fractal dimension taking place in a finite system and for long times. It is different from the concept of multifractality, which in this topic is usually associated with a nonlinear relation among the exponents characterizing the higher order height difference correlations~\cite{barabasi}. In the classical case of static planar interfaces the fractal dimension is computed from the height difference correlation function \begin{equation} \left< [h(x,t)-h(x',t)]^2 \right>^{1/2} \sim |x-x'|^H, \end{equation} in the long time limit, i. e. after saturation has been achieved, where the Hurst exponent $H=(\zeta-d)/2$ for linear growth equations and the right hand side is time independent. The interface fractal dimension is calculated using the box counting method and is given by $d_f=1+d-H$. The general linear equation for stochastic growth on a growing domain was found in the last section to be \begin{equation} \label{hcubic2} \partial_t h=-D \left( \frac{t_0}{t} \right)^{\delta \gamma} |\nabla|^\zeta h -\frac{d\gamma}{t}h +\gamma F t^{\gamma-1}+ \left(\frac{t_0}{t}\right)^{d\gamma/2}\xi(x,t), \end{equation} for which we will assume $\zeta \le \delta < \zeta + \gamma^{-1}$.
Its Fourier transformed version, for $n \ge 1$, is \begin{equation} \label{hcubic2f} \frac{d h_n}{dt}=-D \left( \frac{t_0}{t} \right)^{\delta \gamma} \frac{\pi^\zeta |n|^\zeta}{L_0^\zeta} h_n -\frac{d\gamma}{t}h_n + \left(\frac{t_0}{t}\right)^{d\gamma/2}\xi_n(t). \end{equation} For slow growth $\gamma < 1/\delta$ diffusion dominates over dilution and one finds an expression compatible with that of the planar case \begin{equation} \left< [h(x,t)-h(x',t)]^2 \right>^{1/2} \sim t^{\gamma(\delta-d)/2} |x-x'|^{(\zeta-d)/2}, \end{equation} and so the Hurst exponent and interface fractal dimension are the same as in the planar case for fixed time. In the case of fast growth $\gamma>1/\delta$, for small spatial scales $|x-x'| \ll t^{(1-\delta \gamma)/\zeta}$ we recover again this result, while for large spatial scales $|x-x'| \gg t^{(1-\delta \gamma)/\zeta}$ we find \begin{equation} \left< [h(x,t)-h(x',t)]^2 \right>^{1/2} \sim t^{\beta}, \end{equation} and so, for fixed time, $H=0$ and $d_f=d+1$. This means that the interface becomes highly irregular and so dense that it fills the $(d+1)-$dimensional space. This way decorrelation marks the onset of polyfractality, as specified by a scale dependent Hurst exponent, whose asymptotic values are \begin{equation} H(|x-x'|,t) = \left\{ \begin{array}{ll} (\zeta-d)/2 & \mbox{\qquad if \qquad $|x-x'| \ll t^{(1-\delta \gamma)/\zeta}$}, \\ 0 & \mbox{\qquad if \qquad $|x-x'| \gg t^{(1-\delta \gamma)/\zeta}$}, \end{array} \right. \end{equation} and the corresponding asymptotic values of the scale dependent fractal dimension \begin{equation} d_f(|x-x'|,t) = \left\{ \begin{array}{ll} 1+(3d-\zeta)/2 & \mbox{\qquad if \qquad $|x-x'| \ll t^{(1-\delta \gamma)/\zeta}$}, \\ d+1 & \mbox{\qquad if \qquad $|x-x'| \gg t^{(1-\delta \gamma)/\zeta}$}. \end{array} \right. 
\end{equation} Note that these results imply dynamic polyfractality as the scale separating the two regimes depends on time $|x-x'| \sim t^{(1-\delta \gamma)/\zeta}$; also, the rough interface inequality $\zeta>d$ implies the strict inequality $1+(3d-\zeta)/2 < d+1$. This asymptotic behavior strongly suggests the self-similar form of both Hurst exponent and fractal dimension \begin{equation} H=H \left( \frac{|x-x'|}{t^{(1-\delta \gamma)/\zeta}} \right), \qquad \mathrm{and} \qquad d_f=d_f \left( \frac{|x-x'|}{t^{(1-\delta \gamma)/\zeta}} \right). \end{equation} According to this the fractal dimension would be a dynamic fractal itself, invariant to the transformation $x \to b \, x$, $t \to b^{z_f}t$, and $d_f \to b^{\alpha_f} d_f$, for $z_f=\zeta/(1-\delta \gamma)=z_{\mathrm{eff}}$ and $\alpha_f=0$. Note that all these results concerning polyfractality are independent of whether we contemplate dilution or not (because the height difference correlation function depends on strictly local quantities~\cite{escuderojs}), and so we could, in this particular calculation, substitute Eqs.~(\ref{hcubic2}) and (\ref{hcubic2f}) by their dilution-free counterparts and still get the same results. Note also that at the very beginning of this section we have assumed the inequality $\zeta \le \delta < \zeta + \gamma^{-1}$, which implies that for rapid growth the interface is macroscopically but not microscopically uncorrelated. If $\delta \ge \zeta + \gamma^{-1}$ then the interface is microscopically uncorrelated and the fractal dimension becomes $d_f=d+1$ independently of the scale from which we regard it, i. e., polyfractality is a genuine effect of macroscopic decorrelation, which disappears for strong damping causing microscopic decorrelation. Note that polyfractality does not appear in non-growing domain systems as for long times saturation is achieved and the fractal dimension becomes constant (assuming no multifractality is present). 
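The two asymptotic regimes can be verified with the box counting relation $d_f = 1 + d - H$ used above; a minimal sketch with illustrative values of $d$ and $\zeta$:

```python
# Box-counting sketch: d_f = 1 + d - H evaluated on the two asymptotic
# Hurst exponents of the polyfractal interface (illustrative values).
from fractions import Fraction as Fr

def fractal_dim(d, H):
    return 1 + d - H

d, zeta = Fr(2), Fr(3)            # rough-interface condition: zeta > d
H_small = (zeta - d) / 2          # scales |x-x'| << t^{(1-delta*gamma)/zeta}
H_large = Fr(0)                   # scales |x-x'| >> t^{(1-delta*gamma)/zeta}

assert fractal_dim(d, H_small) == 1 + (3 * d - zeta) / 2
assert fractal_dim(d, H_large) == d + 1
# zeta > d guarantees the strict ordering of the two asymptotic dimensions
assert fractal_dim(d, H_small) < fractal_dim(d, H_large)
```

For these sample values the dimension interpolates between $5/2$ at small scales and $3$ at large scales.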
Although the behavior of the height difference correlation function we found here is similar to the one present in classical unbounded systems, results concerning the fractal dimension cannot be immediately extrapolated. The fractal dimension can be computed in a bounded growing domain, for instance using the box counting method as we have done herein, by employing as the reference length $L(t)$, the linear time dependent size of the system. Of course, in an unbounded static domain there is no such reference length. \section{The Kardar-Parisi-Zhang Equation} One of the most important nonlinear models in the field of surface growth is the Kardar-Parisi-Zhang (KPZ) equation~\cite{kpz} \begin{equation} \partial_t h= \nu \nabla^2 h + \lambda (\nabla h)^2 + \xi(x,t). \end{equation} It is related to the biologically motivated Eden model, as this model, at least in a planar geometry, was numerically found to belong to the KPZ universality class~\cite{barabasi}. As we will see, understanding the KPZ equation on a growing domain may shed light on some of the properties of the classical version of this model. The KPZ equation on a growing domain reads \begin{equation} \label{dkpz} \partial_t h= \nu \left( \frac{t_0}{t} \right)^{2 \gamma} \nabla^2 h + \frac{\lambda}{2} \left( \frac{t_0}{t} \right)^{2 \gamma} (\nabla h)^2 -\frac{d \gamma}{t}h + \gamma Ft^{\gamma-1} + \left( \frac{t_0}{t} \right)^{d \gamma/2} \xi(x,t). \end{equation} Of course, if we just considered the dilation $x \to (t/t_0)^\gamma x$ we would find \begin{equation} \label{ndkpz} \partial_t h= \nu \left( \frac{t_0}{t} \right)^{2 \gamma} \nabla^2 h + \frac{\lambda}{2} \left( \frac{t_0}{t} \right)^{2 \gamma} (\nabla h)^2 + \gamma Ft^{\gamma-1} + \left( \frac{t_0}{t} \right)^{d \gamma/2} \xi(x,t). \end{equation} As we have shown in the previous section, the dilution mechanism fixes the Family-Vicsek ansatz in the fast growth regime.
In the radial Eden model case, assuming it belongs to the KPZ universality class, we would have $z=3/2$ in $d=1$ and $\gamma=1$. And so, one would na\"{\i}vely expect that the resulting interface is uncorrelated and we have to resort to dilution effects in order to fix the Family-Vicsek ansatz and get rid of memory effects. But here arises a paradoxical situation. There are two main symmetries associated with the $d$-dimensional KPZ equation: the Hopf-Cole transformation which maps it onto the noisy diffusion equation~\cite{wio} and the related directed polymer problem~\cite{kardar,lassig}, and Galilean invariance which has been traditionally related to the non-renormalization of the KPZ vertex at an arbitrary order in the perturbation expansion~\cite{fns,medina}. In the case of the no-dilution KPZ equation (\ref{ndkpz}) both symmetries are still present. Indeed, this equation transforms under the Hopf-Cole transformation $u=\exp[\lambda h/(2\nu)]$ to \begin{equation} \partial_t u = \nu \left( \frac{t_0}{t} \right)^{2 \gamma} \nabla^2 u + \frac{\gamma F \lambda}{2 \nu} t^{\gamma-1} u + \frac{\lambda}{2 \nu} \left( \frac{t_0}{t} \right)^{d \gamma/2} \xi(x,t) u, \end{equation} which is again a noisy diffusion equation and it can be explicitly solved in the deterministic limit $\epsilon=0$.
We find in this case \begin{equation} u(x,t)=\frac{(1-2\gamma)^{d/2} \exp[F \lambda t^\gamma/(2 \nu)]}{[4 \pi t_0^{2\gamma}(t^{1-2\gamma}-t_0^{1-2\gamma})]^{d/2}} \int_{\mathbb{R}^d} \exp \left[ -\frac{|x-y|^2(1-2\gamma)}{4 t_0^{2 \gamma}(t^{1-2\gamma}-t_0^{1-2\gamma})} \right] u(y,t_0) dy, \end{equation} which corresponds to \begin{equation} h(x,t)= \frac{2 \nu}{\lambda} \ln \left \{ \frac{(1-2\gamma)^{d/2} \exp[F \lambda t^\gamma/(2 \nu)]}{[4 \pi t_0^{2\gamma}(t^{1-2\gamma}-t_0^{1-2\gamma})]^{d/2}} \int_{\mathbb{R}^d} \exp \left[ -\frac{|x-y|^2(1-2\gamma)}{4 t_0^{2 \gamma}(t^{1-2\gamma}-t_0^{1-2\gamma})} +\frac{\lambda}{2 \nu} h(y,t_0) \right] dy \right\}, \end{equation} for given initial conditions $u(x,t_0)$ and $h(x,t_0)$. Inspection of this formula makes it clear that decorrelation at the deterministic level will happen for $\gamma > 1/2$: the effective diffusion time $t_0^{2 \gamma}(t^{1-2\gamma}-t_0^{1-2\gamma})/(1-2\gamma)$ appearing in the kernel saturates to the finite value $t_0/(2\gamma-1)$ as $t \to \infty$ whenever $\gamma > 1/2$, so correlations can spread only over a finite range. It is still necessary to find out if at the stochastic level this threshold will be moved to $\gamma > 2/3$. If we consider the dilution KPZ equation (\ref{dkpz}) then the Hopf-Cole transformation yields the nonlinear equation \begin{equation} \partial_t u = \nu \left( \frac{t_0}{t} \right)^{2 \gamma} \nabla^2 u -\frac{d \gamma}{t} u \ln(u) + \frac{\gamma F \lambda}{2 \nu} t^{\gamma-1} u + \frac{\lambda}{2 \nu} \left( \frac{t_0}{t} \right)^{d \gamma/2} \xi(x,t) u, \end{equation} which may be thought of as a time dependent and spatially distributed version of the Gompertz differential equation~\cite{gompertz}. In this case it is not evident how to find an explicit solution at the deterministic level and what would be its decorrelation threshold. Galilean invariance means that the transformation \begin{equation} x \to x-\lambda v t, \qquad h \to h+vx, \qquad F \to F - \frac{\lambda}{2}v^2, \end{equation} where $v$ is an arbitrary constant vector field, leaves the KPZ equation invariant.
In the case of no dilution this transformation can be replaced by \begin{equation} x \to x-\frac{\lambda}{1-2\gamma} v t_0^{2 \gamma} t^{1-2\gamma}, \qquad h \to h+vx, \qquad F \to F - \frac{\lambda}{2 \gamma}v^2 t_0^{2 \gamma} t^{1-3\gamma}, \end{equation} which leaves invariant equation~(\ref{ndkpz}). If we consider dilution, then it is not clear how to extend this transformation to leave equation~(\ref{dkpz}) invariant. The main difficulty comes from the dilution term which yields a non-homogeneous contribution to the dynamics as a response to the tilt $h \to h+vx$. So in summary we may talk of a certain sort of Galilean invariance which is obeyed by the no-dilution KPZ dynamics (\ref{ndkpz}) and is lost when dilution is taken into account. If it were found that the dilution equation~(\ref{dkpz}) obeys the traditional KPZ scaling (at least in some suitable limit), then that would mean the possible need to readdress the role that the symmetries of the KPZ equation play in fixing the universality class~\cite{hochberg1,hochberg2,wio2}. There is still another fundamental symmetry of the KPZ equation, but this time it just manifests itself in one spatial dimension: the so-called fluctuation-dissipation theorem~\cite{barabasi}. It basically says that for long times, when saturation has already been achieved, the nonlinearity ceases to be operative and the resulting interface profile would be statistically indistinguishable from that created by the EW equation. For fast domain growth, we know from the linear theory that the interface never becomes correlated, and it operates, in this sense, as if it were effectively in the short time regime for all times~\cite{escuderojs}. As a consequence, the fluctuation-dissipation theorem is not expected to play any role in this case. Of course, this result would be independent of whether we contemplated dilution or not.
In more general terms, it is known that the different symmetries of statistical mechanical models influence their scaling properties~\cite{henkel,pleimling}. It would be interesting to understand in complete generality the interplay between the symmetries of a physical model in a static domain and the asymmetric presence of dilution when we let this domain grow in time. \section{Center of mass fluctuations} Another property that has been studied in the context of radial growth, particularly in Eden clusters, is the center of mass fluctuations. It was found numerically that the Eden center of mass fluctuates according to the power law $C_m \sim t^{2/5}$ in $d=1+1$~\cite{ferreira}, while in $d=2+1$ there is a strong decrease in this exponent~\cite{madison}. This reduced stochastic behavior in higher dimensions was already predicted in \cite{escudero} using radial growth equations, and we will further examine herein the compatibility between the equations and the Eden cluster dynamics. The center of mass fluctuations are characteristic not only of radial growth but also of planar situations. Let us recall the classical EW equation \begin{equation} \partial_t h = D \nabla^2 h + \xi(x,t), \end{equation} defined on a one dimensional domain of linear size $L_0$ and with no flux boundary conditions. It is straightforward to find that the center of mass $h_0(t) = L_0^{-1}\int_0^{L_0} h(x,t)dx$ is a Gaussian stochastic process defined by its first two moments \begin{equation} \left< h_0(t) \right>=0, \qquad \left< h_0(t) h_0(s) \right>= \frac{\epsilon}{L_0} \min(t,s), \end{equation} and so the center of mass performs Brownian motion; equivalently, its position is given by a Wiener process. Note that the fluctuation amplitude decreases with the linear system size, suggesting that in the case of a growing domain our current law $C_m=\left< h_0^2 \right>^{1/2} \sim t^{1/2}$ will be replaced by a different power law with a smaller exponent.
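The Brownian character of $h_0$ can be illustrated by simulating the zeroth mode alone; a minimal sketch, assuming an Euler discretization of $dh_0/dt = \xi_0(t)$ with $\langle \xi_0(t) \xi_0(s) \rangle = (\epsilon/L_0) \, \delta(t-s)$ and illustrative values of $\epsilon$ and $L_0$:

```python
# Monte Carlo sketch of the zeroth mode: h_0 performs Brownian motion, so
# <h_0(t)^2> = (eps/L0) t and <h_0(t) h_0(s)> = (eps/L0) min(t, s).
import math
import random

random.seed(1)
eps, L0 = 1.0, 2.0
dt, nsteps, nsamples = 0.005, 200, 4000
sigma = math.sqrt(eps / L0 * dt)          # per-step standard deviation

var_t, cov_ts = 0.0, 0.0
for _ in range(nsamples):
    h = h_half = 0.0
    for i in range(nsteps):
        h += sigma * random.gauss(0.0, 1.0)
        if i == nsteps // 2 - 1:          # record h_0 at the intermediate time s
            h_half = h
    var_t += h * h
    cov_ts += h * h_half
var_t /= nsamples
cov_ts /= nsamples

t, s = nsteps * dt, (nsteps // 2) * dt    # t = 1.0, s = 0.5
assert abs(var_t - eps / L0 * t) < 0.05
assert abs(cov_ts - eps / L0 * min(t, s)) < 0.05
```

The sampled covariance reproduces the $\min(t,s)$ structure of the Wiener process within Monte Carlo error.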
This result is not exclusive to the one dimensional EW equation; indeed, for any $d-$dimensional growth equation with a conserved growth mechanism, be it linear, like the EW or Mullins-Herring equations~\cite{barabasi}, or nonlinear, like the Villain-Lai-Das Sarma equation~\cite{villain,lai} or its Monge-Amp\`{e}re variation~\cite{escudero3}, the center of mass performs Brownian motion characterized by the correlators \begin{equation} \left< h_0(t) \right>=0, \qquad \left< h_0(t) h_0(s) \right>= \frac{\epsilon}{L_0^d} \min(t,s), \end{equation} as a consequence of the decoupling of the zeroth mode with respect to the surface fluctuations~\cite{villain}. This is no longer true in the case of non-conserved growth dynamics, as illustrated by the KPZ equation \begin{equation} \partial_t h= \nu \nabla^2 h + \lambda (\nabla h)^2 + \xi(x,t). \end{equation} It is easy to see that in this case \begin{equation} \frac{dh_0}{dt}= \frac{\lambda}{L^d} \int (\nabla h)^2 dx +\xi_0(t) \ge \xi_0(t), \end{equation} where $\xi_0(t)=L^{-d}\int \xi(x,t) dx$ and the equal sign is attained only for $h= \,$constant, an unstable configuration for KPZ dynamics. And so one expects stronger center of mass fluctuations in this case. Actually, the short time center of mass fluctuations can be easily calculated for any model which obeys the Family-Vicsek scaling, including the KPZ equation. Indeed, the Family-Vicsek scaling implies the following form of the height-height correlation \begin{equation} \left< h(x,t)h(x',t) \right>= t^{2 \beta} \, C \left( \frac{|x-x'|}{t^{1/z}} \right), \end{equation} which in the short time limit tends to \begin{equation} \left< h(x,t)h(x',t) \right> \sim t^{2 \beta + d/z} \, \delta \left( x-x' \right), \end{equation} leading to the result \begin{equation} \label{com} \langle h_0(t)^2 \rangle \sim L^{-d} t^{2 \beta + d/z}.
\end{equation} And so, within the Family-Vicsek scaling framework, the exponent characterizing the short time behavior of the center of mass fluctuations is $\beta +d/(2z)$. As we have seen, the center of mass fluctuations are given by the zeroth mode. In the growing domain case it can be shown that the equation controlling the evolution of $h_0$ is \cite{escuderojs} \begin{equation} \frac{d h_0}{dt}= -\frac{d \gamma}{t}h_0+ \gamma F t^{\gamma-1} + \left(\frac{t_0}{t}\right)^{d \gamma/2}\xi_0(t), \end{equation} in case dilution is taken into account. In this case we find for long times the center of mass fluctuations \begin{equation} \label{comfv} C_m^2= \left< h_0(t)^2 \right>-\left< h_0(t) \right>^2=\frac{\epsilon t_0^{d \gamma}}{L_0^d (d \gamma +1)} t^{1-d \gamma}, \end{equation} and so $C_m \sim t^{(1-d \gamma)/2}$. If we did not consider dilution we would find in the long time limit \begin{equation} \label{comfvnot} C_m^2 = \left\{ \begin{array}{lll} \frac{\epsilon \, t_0^{d \gamma}}{L_0^d (1-d \gamma)}t^{1-d \gamma} & \mbox{\qquad if \qquad $\gamma < 1/d$}, \\ \frac{\epsilon \, t_0}{L^d_0} \ln(t) & \mbox{\qquad if \qquad $\gamma = 1/d$}, \\ \frac{\epsilon \, t_0}{L_0^d (d \gamma -1)} & \mbox{\qquad if \qquad $\gamma > 1/d$}. \end{array} \right. \end{equation} If we adapt result (\ref{com}) to the present setting we find \begin{equation} \left< h(x,t)h(x',t) \right> \sim t^{2 \beta + d/z} \, \delta \left[ t^\gamma (x-x') \right] = t^{2 \beta + d/z-\gamma d} \, \delta \left( x-x' \right). \end{equation} For linear systems the equality $2 \beta + d/z = 1$ holds, and so this last equation agrees with (\ref{comfv}) but not with (\ref{comfvnot}). This is a consequence of the violation of the Family-Vicsek scaling in the absence of dilution~\cite{escuderojs,escuderoar}. In the case of the $(1+1)-$dimensional Eden model $d=\gamma=1$, and if it belonged to the KPZ universality class the center of mass would fluctuate according to the law $C_m \sim t^{1/6}$. 
This of course does not agree with the measured behavior $C_m \sim t^{2/5}$. This exponent could be recovered by introducing an \emph{ad hoc} instability mechanism, such as for instance considering a growth equation whose zeroth mode obeyed \begin{equation} \frac{d h_0}{dt}=D \left( \frac{t_0}{t} \right)^{\delta \gamma} h_0 + \gamma F t^{\gamma-1}+ \left( \frac{t_0}{t} \right)^{d \gamma/2} \xi_0(t). \end{equation} The desired exponent is obtained for $\delta=1$ and $Dt_0=2/5$; however, this result is uniform in the spatial dimension and so cannot predict the $(2+1)-$dimensional behavior~\cite{madison}. Additionally, this instability mechanism seems insufficiently justified and too non-generic to be a good explanation of the observed phenomenology. Everything points to the fact that the center of mass fluctuations of the Eden model result from a strong violation of the Family-Vicsek ansatz. As we may see from equation (\ref{comfvnot}), this sort of violation implies stronger center of mass fluctuations. This point will be further discussed in the next section. In summary we can say that the result $C_m \sim t^{2/5}$ suggests a strong violation of the Family-Vicsek scaling by the surface fluctuations of the $(1+1)-$dimensional Eden model. Although the linear law $C_m \sim t^{(1-\gamma d)/2}$ does not quantitatively reproduce the results, we still expect from it a qualitative description of the dynamics, as the strong decrease of this exponent was already reported in $(2+1)-$dimensions. According to the linear law, the center of mass fluctuations should decrease for increasing growth velocity and spatial dimension. Note also that the nonlinearity seems to be a necessary ingredient; the linearization of the KPZ equation proposed in~\cite{singha} reads in Fourier space \begin{equation} \frac{d}{dt}\left< h_n^2 \right>=-A |n|^{3/2}\left< h_n^2 \right>+\frac{B}{|n|^{1/2}}, \end{equation} for some constants $A$ and $B$ and in the case of a non-growing domain.
This equation supports unbounded fluctuations as revealed by the divergent diffusion in the limit $n \to 0$, and so this does not constitute a good model for predicting the center of mass fluctuations. \section{Applications to the Eden Model} In statistical mechanics it has been customary to classify the behavior of discrete models within universality classes defined by continuum field theories. Non-equilibrium growth theories have been by no means an exception to this rule~\cite{vvedensky1,vvedensky2}. In this sense one would be interested in finding the universality class the Eden model belongs to. According to the simulations performed in the planar geometry the Eden model belongs to the KPZ universality class~\cite{barabasi}. This agrees with the measured exponent $\beta=1/3$ in radial systems~\cite{ferreira}. However, as we have already seen, there are at least two possible universality classes associated with the KPZ equation in radial systems: dilution-KPZ and dilation-KPZ. The first one is characterized by a behavior more akin to that of planar systems, and the second one by memory effects which imply the departure from the Family-Vicsek ansatz. According to the measurement of the autocorrelation exponent of the Eden model in~\cite{singha} that yielded $\lambda=1/3$, {\it the Eden model would be in the dilation-KPZ universality class} (one would expect $\lambda=4/3$ for dilution dynamics according to the theory developed herein and in~\cite{escuderojs}). This fact admits a simple explanation. In the Eden model, cells are aggregated to the colony periphery in such a way that the positions of already present cells are not modified. Consequently, as the system grows, no dilution is redistributing its constituents. So the rigidity of the Eden model may well be at the origin of the memory effects present at its interface~\cite{singha}, which presumably place it in the dilation-KPZ universality class.
But to be sure one would still need, of course, to verify that this implies no contradiction with the center of mass fluctuations as discussed in the last section. As we have already mentioned, the Eden model may be thought of as an idealization of a developing cell colony. Of course, as it was completely clear from the very beginning~\cite{eden1,eden2}, there are multiple factors of biological, chemical and even physical nature that are not captured by this model. Apart from these, one could be interested in improving the model in pure statistical mechanical terms. To this end one may look for inspiration in real cell colonies. The structure of a rapidly developing cell colony would be dominated by dilution effects, originated in the birth of new cells whose volume causes the displacement of the existing cells. This feature is not captured by any sort of Eden model (diverse proliferation rules, on/off lattice,...) and is fundamental in preserving the Family-Vicsek scaling, as we have already seen. So it seems quite reasonable to modify the Eden model in order to remove its rigidity, allowing bulk cell proliferation and the displacement of the existing cells, both at the bulk and the interface, by the newborn cells. This would not be interesting just in modelling terms, but also for introducing dilution in the model and consequently shifting its universality class. \section{Conclusions and outlook} In this work we have investigated the role of dilution and decorrelation on radial growth. Dilution drives matter redistribution along the growing interface: as the surface becomes larger the already deposited matter occupies a smaller fraction of the interface, which is being simultaneously complemented with incoming matter, the actual driving force of domain growth in radial systems.
Dilution is important for any rate of domain growth, as it keeps the interfacial density constant, but especially for rapidly growing domains, for which the diffusion mechanism becomes irrelevant and dilution becomes solely responsible for the propagation of correlations on the macroscopic scale. The importance of dilution is such that in its absence, which takes place in the alternative dilation dynamics, strong memory effects arise. These include an enhanced stochasticity, which separates the behavior of the large spatial scale limit of the two-point correlation function from that dictated by the Family-Vicsek ansatz, and the appearance of non-universal critical exponents in the marginally rough regime, characterized by the equality $\zeta=d$. As we have seen, both universality and the Family-Vicsek structure of the correlation function are recovered by virtue of dilution. As dilution propagates correlations at the same speed at which the interface grows, a global correlation becomes impossible for fast domain growth. This leads to decorrelation, or in other words, to a whitening of the interfacial profile in the sense that distant points become uncorrelated. Decorrelation might be macroscopic, which is evident only if we regard the dynamics from a spatial scale of the same order of magnitude as the system size, or microscopic, in which case it is apparent for much smaller length scales. Microscopic decorrelation supports rapid roughening, i. e., growing regimes characterized by $\beta >1/2$. These appear naturally in the context of radial growth, for instance by considering the IS equation, which results from a geometric variational principle and for which $\zeta=d=2$ and $\delta=3$, and thus it shows rapid roughening for all $\gamma > 1$. A consequence of macroscopic decorrelation is the advent of a scale dependent interfacial fractal dimension, which renders the surface polyfractal and which we have conjectured to be self-similar.
There are several theoretical problems that can be straightforwardly analyzed with the techniques introduced here. We have for instance considered radial interfaces whose mean radius grows as a power law of time $\left< r \right> \sim t^\gamma$. This result has been obtained by means of a linear mechanism in which an explicit power law dependence on time has been considered, see Eq. (\ref{rdeposition}). This linear mechanism can be substituted by a nonlinear one in which time does not appear explicitly \begin{equation} \partial_t r = \gamma F^{1/\gamma} r^{1-1/\gamma} +\frac{1}{r^{d/2}J(\vec{\theta})^{1/2}}\xi(\theta,t), \end{equation} which again yields $R=F t^\gamma$ at the deterministic order, but which is the source, at the first stochastic order, of a term (reminiscent of dilution) that may be either stabilizing or destabilizing depending on the value of $\gamma$ \begin{equation} \partial_t \rho = \frac{\gamma-1}{t} \rho +\frac{1}{F^{d/2} t^{\gamma d/2} J(\vec{\theta})^{1/2}}\eta(\theta,t); \end{equation} for small values of $\gamma$ the results of the previous sections are recovered, while for large values of $\gamma$ memory effects and enhanced (power law) stochasticity appear (which are standard effects of instability as we have already seen), with the threshold value of $\gamma$ depending on whether we introduce dilution or not (in this concrete example dilution completely erases instability). Also, this instability mechanism, contrary to the ones studied herein and in~\cite{escudero2} which render the zeroth mode unstable and the $l=1$ ones marginal, is able to destabilize all modes. Different nonlinearities which might destabilize a fixed number of modes lying below some given $l^* \in \mathbb{N}$ can be easily devised too (basically by introducing terms of the form $-r^{-m}$ for some suitable $m \in \mathbb{N}$ in the corresponding equation of motion) and can even be cast in some geometric variational formulation, as the cases considered in~\cite{escudero2}.
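That $R = F t^\gamma$ solves the deterministic part of the nonlinear mechanism can be checked numerically; a minimal sketch with arbitrary illustrative values of $F$ and $\gamma$:

```python
# Finite-difference sketch: R(t) = F t^gamma satisfies the deterministic
# balance dR/dt = gamma F^{1/gamma} R^{1-1/gamma} (F, gamma are sample values).
F, gamma = 2.0, 0.75

def R(t):
    return F * t ** gamma

def rhs(r):
    return gamma * F ** (1 / gamma) * r ** (1 - 1 / gamma)

t, h = 3.0, 1e-6
dRdt = (R(t + h) - R(t - h)) / (2 * h)    # central difference in time
assert abs(dRdt - rhs(R(t))) < 1e-6
```

The right-hand side collapses analytically to $\gamma F t^{\gamma-1}$, which is exactly $dR/dt$, confirming that time has been traded for the radius itself in this mechanism.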
Of course, deciding which model is the correct one must rely on numerical or experimental evidence based on the study of specific models or systems of interest. As mentioned in the introduction, part of the motivation for studying radial growth models such as the Eden or different ones lies in the possible similarity of these with some forms of biological development, such as for instance cell colonies. The results of our study can be translated into this context to obtain some simple conclusions, provided the modelling assumptions make sense for some biological system. The structure of a rapidly developing cell colony would be dominated by dilution effects, originated in the birth of new cells whose volume causes the displacement of the existing cells. If the rate of growth is large enough this motion will dominate over any possible random dispersal of the individual cells. It is remarkable that such a consequence simply appears by considering domain growth, while it is not necessary to introduce corrections coming from the finite size of the constituents. This is the dilution dominated situation we have formalized by means of the (decorrelation) inequality $\gamma > 1/\zeta$ (assuming in this case $\delta=\zeta$). If we were to introduce some control protocol in order to keep the consequences of bacterial development to a minimum we would need to eliminate colony constituents (possibly randomly selected) at a high enough rate that the effective growth velocity reversed the decorrelation inequality. For the one dimensional Eden model, accepting it belongs to the KPZ universality class, one finds $\gamma=1$ and $z =3/2$. If $z$ played the same role for the nonlinear KPZ equation as $\zeta$ for the linear equations considered herein (as it is reasonable to expect), the Eden model would be in the uncorrelated regime. In order to control it we would need to eliminate its cells at a rate such that the effective growth rate obeyed $\gamma < 2/3$.
For the two dimensional Eden model, if its behavior were still analogous to that of the KPZ equation, we would find $z > 3/2$ and thus a greater difficulty for control. Note that for the particular growth rules of the Eden model one would need to eliminate peripheral cells in order to control the system. This would not be so in the case of an actual bacterial colony, for which bulk cells are still able to reproduce, and so cell elimination could be performed randomly across the whole colony. Of course, these conclusions are speculative as long as radial growth equations are not proved to reasonably model some biological system. In more general terms, we have found that the surface fluctuations of the Eden model presumably strongly violate the Family-Vicsek scaling. We have identified the absence of dilution in this model as the reason underlying such a violation. In this sense, this model would not be able to describe growing cell colonies, precisely because it assumes a spurious rigidity of bulk cells. On the other hand, it would be better suited to describe the radial growth of crystalline structures~\cite{einstein}. We have also found that reparametrization invariance~\cite{maritan2} implicitly implies dilation dynamics. Our results call for an extension of the generalization of Langevin dynamics to arbitrary geometries in order to capture both dilution and dilation scenarios, and the associated bifurcation of universality classes. This same remark would affect equilibrium systems as well, but in this case of course the domain evolution will drive them out of equilibrium, unless growth is quasistatic~\cite{parisi}. \section*{Acknowledgments} This work has been partially supported by the MICINN (Spain) through Project No. MTM2008-03754. \appendix \section{Higher order perturbation expansion} \label{horder} As we have mentioned in Sec.~\ref{rrd}, the first order correction in the small noise expansion is a Gaussian stochastic process.
We will try to go beyond this order in this appendix, and we will show the difficulties that arise in doing so. We focus again on the radial random deposition equation~(\ref{rdeposition}) and assume the solution form
\begin{equation}
\label{snoise2}
r(\vec{\theta},t)= R(t)+ \sqrt{\epsilon}\rho_1(\vec{\theta},t) + \epsilon \rho_2(\vec{\theta},t),
\end{equation}
where the noise intensity $\epsilon$ will be used as the small parameter~\cite{gardiner}. Substituting this solution form into Eq.~(\ref{rdeposition}) we obtain the hierarchy of equations
\begin{eqnarray}
\partial_t R &=& F \gamma t^{\gamma-1}, \\
\partial_t \rho_1 &=& \frac{1}{F^{d/2}t^{\gamma d/2}} \frac{\eta(\vec{\theta},t)}{J(\vec{\theta})^{1/2}}, \\
\partial_t \rho_2 &=& -\frac{d}{2 F^{1+d/2}} \frac{\rho_1}{t^{\gamma + d \gamma/2}} \frac{\eta(\vec{\theta},t)}{J(\vec{\theta})},
\end{eqnarray}
where $\xi=\sqrt{\epsilon} \, \eta$ and both $\eta$ and $\xi$ are now zero mean quasiwhite Gaussian processes whose correlations are given by
\begin{equation}
\left< \eta(\vec{\theta},t) \eta(\vec{\theta}',t') \right>= C(\vec{\theta}-\vec{\theta}') \delta(t-t'), \qquad \left< \xi(\vec{\theta},t) \xi(\vec{\theta}',t') \right>= \epsilon C(\vec{\theta}-\vec{\theta}') \delta(t-t'),
\end{equation}
where $C(\cdot)$ is some regular function approximating the Dirac delta; the necessity for the quasiwhite assumption will be clear in a few lines. These equations have been derived assuming $\sqrt{\epsilon} \ll F t^\gamma$, and we will further assume a zero value for both initial perturbations as in Sec.~\ref{rrd}. The solution to the first two equations was characterized in Sec.~\ref{rrd}, where the approximating function $C(\cdot)$ was replaced by the Dirac delta. Here $R$ is a deterministic function and $\rho_1$ is a zero mean Gaussian stochastic process that is completely determined by its correlation function.
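Since $\rho_1$ is driven additively, its variance is just the time integral of the squared deterministic prefactor in its equation (times $C(0)/J$, dropped here). A quick numerical sketch, with illustrative parameter values not taken from the text, checks the closed form of that integral against quadrature:

```python
# Check: Var[rho_1(t)] ~ int_{t0}^{t} s**(-gamma*d) / F**d ds, whose closed
# form for gamma*d != 1 is compared against a trapezoidal quadrature.
# Parameter values are illustrative only.
gamma, d, F, t0, t = 0.4, 2.0, 1.3, 1.0, 5.0

def a2(s):
    """Squared noise prefactor in the rho_1 equation (C(0)/J set to 1)."""
    return s ** (-gamma * d) / F ** d

N = 20000
h = (t - t0) / N
numeric = h * (a2(t0) / 2 + sum(a2(t0 + i * h) for i in range(1, N)) + a2(t) / 2)
closed = (t ** (1 - gamma * d) - t0 ** (1 - gamma * d)) / ((1 - gamma * d) * F ** d)
print(closed, numeric)  # the two agree to several digits
```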
The stochastic function $\rho_2$ is a zero mean process too, but it is not Gaussian this time, and its correlation (which no longer completely determines the process) is given by \begin{eqnarray} \nonumber \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right> = \frac{d^2}{4F^{2+2d}(1-\gamma d)} \times \\ \left[ \frac{(\min\{t,s\})^{2-2\gamma-2\gamma d}-t_0^{2-2\gamma-2\gamma d}}{2-2\gamma-2\gamma d} - t_0^{1-\gamma d} \frac{(\min\{t,s\})^{1-2\gamma-\gamma d}-t_0^{1-2\gamma-\gamma d}}{1-2\gamma-\gamma d} \right] \frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \end{eqnarray} if $\gamma d \neq 1$, $\gamma (1+d) \neq 1$, and $\gamma (2+d) \neq 1$. If $\gamma d = 1$ we find \begin{equation} \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right> = \frac{1}{16 F^{2+2d}\gamma^4} \left\{t_0^{-2\gamma}- \left[ \min\{t,s\} \right]^{-2\gamma}\left[1+2\gamma \mathrm{ln}\left(\frac{\min\{t,s\}}{t_0} \right) \right]\right\}\frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \end{equation} if $\gamma (1+d) = 1$ then \begin{equation} \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right> = \frac{d^2}{4F^{2+2d}\gamma}\left[ \mathrm{ln}\left( \frac{\min \{ t,s \}}{t_0} \right) + \frac{t_0^\gamma}{\gamma} ([\min \{ t,s \}]^{-\gamma}-t_0^{-\gamma}) \right]\frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \end{equation} and if $\gamma (2+d) = 1$ we get \begin{equation} \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right> = \frac{d^2}{8F^{2+2d}\gamma}\left[ \frac{(\min \{t,s\})^{2\gamma}-t_0^{2\gamma}}{2\gamma}-t_0^{2\gamma}\mathrm{ln}\left( \frac{\min \{t,s\}}{t_0} \right) \right]\frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}. 
\end{equation} The long time behavior of the correlations, given by the condition $t,s \gg t_0$, is specified by the following two-times and one-time functions \begin{eqnarray} \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right> &=& \frac{d^2}{4F^{2+2d}(1-\gamma d)} \frac{\left(\min\{ t,s \}\right)^{2-2\gamma-2\gamma d}}{2-2\gamma-2\gamma d} \frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \\ \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',t) \right> &=& \frac{d^2}{4F^{2+2d}(1-\gamma d)} \frac{t^{2-2\gamma-2\gamma d}}{2-2\gamma-2\gamma d} \frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \end{eqnarray} when $\gamma (d+1)<1$, and if $\gamma (d+1)=1$ then \begin{eqnarray} \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right> &=& \frac{d^2}{4F^{2+2d}\gamma} \mathrm{ln}\left( \min\{t,s\} \right) \frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \\ \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',t) \right> &=& \frac{d^2}{4F^{2+2d}\gamma} \mathrm{ln}(t) \frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \end{eqnarray} and finally, when $\gamma (d+1)>1$, we find \begin{equation} \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right> = \frac{d^2}{8F^{2+2d}}\frac{t_0^{2-2\gamma-2\gamma d}}{1-(3+2d)\gamma+ (2+3d+d^2)\gamma^2}\frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \end{equation} a correlation function that vanishes in the limit $t_0 \to \infty$. Now it is clear why we needed the quasiwhite approximation: for a regular function $C(\cdot)$ the expression $C(\cdot)^2$ makes sense, contrary to what happens if we substitute it by the Dirac delta to get $\delta(\cdot)^2$. This is the first indication of the failure of the higher order perturbation theory. 
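As a consistency check on the formulas above, the generic expression should reproduce the special case $\gamma d = 1$ in the limit $\gamma d \to 1$. The following sketch verifies this numerically for $\gamma = 1/2$, setting $F = J = C = 1$ and keeping only the time-dependent bracket (values are illustrative):

```python
# L'Hopital-style check: the generic two-time correlation of rho_2 tends to
# the quoted gamma*d = 1 expression as gamma*d -> 1 (here F = J = C = 1,
# m = min(t, s)).
import math

def generic(gamma, d, m, t0):
    A = 2 - 2 * gamma - 2 * gamma * d
    B = 1 - 2 * gamma - gamma * d
    pref = d ** 2 / (4 * (1 - gamma * d))
    bracket = (m ** A - t0 ** A) / A - t0 ** (1 - gamma * d) * (m ** B - t0 ** B) / B
    return pref * bracket

def special_gd1(gamma, m, t0):
    return (t0 ** (-2 * gamma)
            - m ** (-2 * gamma) * (1 + 2 * gamma * math.log(m / t0))) / (16 * gamma ** 4)

gamma, m, t0, eps = 0.5, 3.0, 1.5, 1e-6
approx = generic(gamma, (1 - eps) / gamma, m, t0)   # gamma*d = 1 - eps
exact = special_gd1(gamma, m, t0)
print(approx, exact)  # the two agree closely
```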
We now examine the effect that dilution has on the random function $\rho_2$, which in this case obeys the equation \begin{equation} \partial_t \rho_2=-\frac{\gamma d}{t} \rho_2 -\frac{d}{2} \frac{(d+1)^{1+d/2}}{F^{1+d/2}t^{\gamma+\gamma d/2}}\frac{\rho_1(\vec{\theta},t)\xi(\vec{\theta},t)}{J(\vec{\theta})}. \end{equation} In this case the long time correlation function reads \begin{equation} \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right>= \frac{d^2(d+1)^{2+2d}}{8F^{2+2d}(\gamma d +1)(1-\gamma)} (ts)^{-\gamma d} \min\{t,s\}^{2-2\gamma} \frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \end{equation} if $\gamma <1$, \begin{equation} \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right>= \frac{d^2(d+1)^{1+2d}}{4F^{2+2d}} (ts)^{-d} \ln[\min\{t,s\}] \frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \end{equation} if $\gamma =1$, \begin{equation} \left< \rho_2(\vec{\theta},t) \rho_2(\vec{\theta}',s) \right>= \frac{d^2(d+1)^{2+2d}}{8F^{2+2d}(\gamma d +1)(\gamma-1)} (ts)^{-\gamma d} \, t_0^{2-2\gamma} \frac{C(\vec{\theta}-\vec{\theta}')^2}{J(\vec{\theta})J(\vec{\theta}')}, \end{equation} if $\gamma >1$. 
The one-time correlation function is then
\begin{equation}
\left< \rho_2(\vec{\ell},t) \rho_2(\vec{\ell}',t) \right>= \frac{d^2(d+1)^{2+2d}}{8F^{2+2d}(\gamma d +1)(1-\gamma)} t^{2-2\gamma} \frac{C(\vec{\ell}-\vec{\ell}')^2}{J(t^{-\gamma}\vec{\ell})J(t^{-\gamma}\vec{\ell}')},
\end{equation}
if $\gamma <1$,
\begin{equation}
\left< \rho_2(\vec{\ell},t) \rho_2(\vec{\ell}',t) \right>= \frac{d^2(d+1)^{1+2d}}{4F^{2+2d}} \ln(t) \frac{C(\vec{\ell}-\vec{\ell}')^2}{J(t^{-\gamma}\vec{\ell})J(t^{-\gamma}\vec{\ell}')},
\end{equation}
if $\gamma =1$,
\begin{equation}
\left< \rho_2(\vec{\ell},t) \rho_2(\vec{\ell}',t) \right>= \frac{d^2(d+1)^{2+2d}}{8F^{2+2d}(\gamma d +1)(\gamma-1)} t_0^{2-2\gamma} \frac{C(\vec{\ell}-\vec{\ell}')^2}{J(t^{-\gamma}\vec{\ell})J(t^{-\gamma}\vec{\ell}')},
\end{equation}
if $\gamma>1$, where $\vec{\ell}-\vec{\ell}'=t^{\gamma} (\vec{\theta}-\vec{\theta}')$, $C(\vec{\ell}-\vec{\ell}')=t^{-\gamma d} C(\vec{\theta}-\vec{\theta}')$, and we have assumed that the approximating function $C(\cdot)$ has the same homogeneity as the Dirac delta. Although it is evident that dilution has a measurable effect, in particular erasing part of the memory effects, the result is far from satisfactory. In all cases the prefactor deviates from the expected random deposition form $t^2$~\cite{footnote}; the unexpected critical value $\gamma=1$ has appeared; for $\gamma \ge 1$ memory effects are present, as signaled by the logarithm and the $t_0$ dependence respectively; and the situation is further complicated by the presence of the factor $C(\cdot)^2$, which becomes singular in the white noise limit. All of these elements suggest the failure of the small noise expansion beyond the first order. Classical results suggest the possibility of constructing a systematic approach to the solution of some nonlinear stochastic differential equations by continuing the small noise expansion to higher orders~\cite{gardiner}.
Our present results suggest the failure of this sort of expansion beyond the Gaussian (which turns out to be the first) order, in very much the same way as the Kramers-Moyal expansion of the master equation~\cite{pawula} and the Chapman-Enskog expansion of the Boltzmann equation~\cite{carlo} fail beyond the Fokker-Planck and Navier-Stokes orders, respectively.
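The obstruction noted above, that $C(\cdot)^2$ has no white-noise limit, can be seen concretely with a Gaussian approximation $C_\epsilon$ of the Dirac delta: its integral stays normalized while the integral of its square diverges like $1/\epsilon$. A small illustrative check:

```python
# For C_eps(x) = exp(-x^2/(2 eps^2)) / (eps sqrt(2 pi)):
#   int C_eps dx   = 1 (delta-like normalization), but
#   int C_eps^2 dx = 1 / (2 sqrt(pi) eps) -> infinity as eps -> 0.
import math

def delta_approx(x, eps):
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def trapz(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

for eps in (0.1, 0.01, 0.001):
    mass = trapz(lambda x: delta_approx(x, eps), -1.0, 1.0, 200000)
    mass_sq = trapz(lambda x: delta_approx(x, eps) ** 2, -1.0, 1.0, 200000)
    print(eps, round(mass, 4), round(mass_sq, 2))  # mass ~ 1, mass_sq grows
```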
TITLE: Did Supernova 2007bi really explode due to antimatter creation?
QUESTION [2 upvotes]: I was watching a video (https://www.youtube.com/watch?v=IZ59_akUUBs) about massive explosions and came across 2007bi. The video stated that this SN happened due to gamma-ray driven antimatter creation. Apparently, its core, being made mostly of oxygen, began releasing energetic photons which converted into electron/positron pairs. Their mutual annihilation caused the core to collapse and triggered the supernova. I have a couple of questions concerning this. A pair-instability supernova happens when a star is about 130 solar masses, but the star here was only at 100 solar masses.... (per Wiki "These stars are large enough to produce gamma rays with enough energy to create electron-positron pairs, but the resulting net reduction in counter-gravitational pressure is insufficient to cause the core-overpressure required for supernova. Instead, the contraction caused by pair-creation provokes increased thermonuclear activity within the star that repulses the inward pressure and returns the star to equilibrium. It is thought that stars of this size undergo a series of these pulses until they shed sufficient mass to drop below 100 solar masses, at which point they are no longer hot enough to support pair-creation. Pulsing of this nature may have been responsible for the variations in brightness experienced by Eta Carinae in 1843, though this explanation is not universally accepted.") [https://en.wikipedia.org/wiki/SN_2007bi ] Is it more likely that the size of the star was wrong, or that it can happen at lower mass, or was there possibly something else at work here? Why wouldn't the extra energy from the electron/positron annihilation add more energy to the star's core? It seems counter-intuitive that adding energy reduces the internal supporting pressure. Can someone explain this?
REPLY [1 votes]: Does gravity or pressure get stronger faster?
Suppose a star is in equilibrium between pressure and gravity. If it compresses slightly, the core is compressed adiabatically and its pressure increases. But gravity also increases. If gravity increases more, the equilibrium is unstable and the collapse will accelerate. How much does gravity increase? Consider the pressure added by the weight of a shell 1 cm thick and 1000 km across. If the shell compresses to 500 km (an 8-fold volume reduction of the core), it experiences 4 times as much gravity on a quarter of the area (16 times the pressure). Thus $P_{grav}$ ~ $V^{-4/3}$ [because 8^(4/3) = 16]. For ideal gases/plasmas at moderate temperatures (enough to be fully ionized) the pressure is $P_{cold}$ ~ $V^{-5/3}$. This is a steeper power law ("stiffer" equation of state) than gravity, and the gas is stable. At high temperatures photons support most of the pressure. Photon pressure follows the power law $P_{rad}$ ~ $V^{-4/3}$. This is right on the boundary of (in)stability. But since the gas pressure still contributes slightly, the actual power law is slightly more negative than -4/3 and the star is slightly stable. But it doesn't take much to destabilize things. In all stars the core is heating and compressing over time as the fuel gets used. When the core temperature gets within a factor of 5 or so of 511 keV, or 5.9 gigakelvin, electrons and positrons start being produced. Some of the heat of compression is "wasted" on pair creation rather than making more energetic photons or increasing particle kinetic energies. This makes the power law "softer" and makes the core unstable. Once electron-positron pairs (but not actual anti-protons) are created en masse, the core destabilizes and collapses. Temperature, pressure, and gravity are all increasing, but gravity is rising faster than pressure. The sudden increase in temperature and pressure causes fusion to accelerate massively. Fusion releases much more heat, 7 MeV per nucleon, than the thermal energy, which is well below 1 MeV.
If the collapse were mild, fusion would gently stop the collapse and the star would reach a new equilibrium. But the collapse is severe. Fusion has to drive pressure far higher than gravity to reverse the inertia of the in-falling matter. At the point of minimum core size, pressure far exceeds gravity and fusion is occurring faster and faster in a thermal runaway. A violent explosion ensues that leaves no remnant behind. The energy source is fusion, not antimatter, but the pair production allows gravity to (temporarily) win and sets the stage for runaway fusion.
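The scaling bookkeeping in this answer can be sketched numerically; the 8-fold compression and the exponents 4/3 and 5/3 are the ones used above:

```python
# Stability bookkeeping for an 8-fold volume compression:
# gravity-driven pressure scales as V**(-4/3), cold-gas pressure as
# V**(-5/3), radiation pressure as V**(-4/3) (the marginal case).
def pressure_gain(exponent, compression=8.0):
    """Factor by which P ~ V**(-exponent) rises when V shrinks `compression`-fold."""
    return compression ** exponent

grav = pressure_gain(4.0 / 3.0)  # 16: weight of the overlying shells
gas = pressure_gain(5.0 / 3.0)   # 32: adiabatic ideal-gas response wins -> stable
rad = pressure_gain(4.0 / 3.0)   # 16: radiation pressure only just keeps up
print(grav, gas, rad)
```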
TITLE: Are all orbits of the conservative pendulum homoclinic?
QUESTION [1 upvotes]: I don't understand this statement: "The homoclinic orbit is characterized by $E = mgl$. When $E < mgl$, the pendulum is tracing other orbits." If energy is conserved, then $E_0 = E$ ($E$ is shorthand for $E(t)$, the energy at time $t$, and $E_0$ is the energy at time 0). Therefore, the homoclinic orbit is traced $\iff$ $E = mgl \iff E_0 = mgl$, which would mean orbits starting from any initial energy $E_0$ are immediately tracing the homoclinic orbit, i.e. all orbits are equivalent to the homoclinic orbit. What's wrong with my reasoning here? See further details about the $E=mgl$ derivation and pendulum description here: http://underactuated.mit.edu/pend.html in section 2.2.2.
REPLY [3 votes]: For a given initial condition $\mathbf{x}_0$ it can be helpful to consider the orbit $\mathbf{x}(t)$ associated with it as the trajectory traced both forward and backward in time (i.e., with $t \to \pm \infty$). This way, every smooth curve in the phase portrait below 1 is one orbit: the ellipses describe back-and-forth oscillations; the wavy lines, rotations; and the separatrix between them, i.e., the lines stemming from the "X" point (the unstable fixed point highlighted by the circle - remember there's just one, given the periodicity of $\theta$), consists of homoclinic orbits, which only asymptotically approach that point. Note that a conservative system is not one that can have only a single value for the energy; it's one that, once initialized at a given energy, cannot move away from it. The phase portrait represents the pendulum's behavior for different starting energies. That is, the total energy does not change between points of a given orbit, so when you start on one of those curves, you're forced to stay on it.
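This conservation statement can be checked numerically. Below, a sketch using the energy convention $E = \frac{1}{2} m l^2 \dot\theta^2 - m g l \cos\theta$ (for which the separatrix is $E = mgl$, matching the quoted statement); the initial condition and parameter values are illustrative:

```python
# RK4 integration of the pendulum theta'' = -(g/l) sin(theta); the energy
# E = 0.5*m*l^2*w^2 - m*g*l*cos(theta) stays on its initial level set, so
# an orbit started with E0 < m*g*l (an ellipse) never reaches the separatrix.
import math

m, g, l = 1.0, 9.81, 1.0

def energy(th, w):
    return 0.5 * m * l**2 * w**2 - m * g * l * math.cos(th)

def rk4_step(th, w, dt):
    f = lambda th, w: (w, -(g / l) * math.sin(th))
    k1 = f(th, w)
    k2 = f(th + 0.5 * dt * k1[0], w + 0.5 * dt * k1[1])
    k3 = f(th + 0.5 * dt * k2[0], w + 0.5 * dt * k2[1])
    k4 = f(th + dt * k3[0], w + dt * k3[1])
    return (th + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            w + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

th, w = 0.5, 0.0                 # small-amplitude start: E0 < m*g*l
E0 = energy(th, w)
for _ in range(20000):
    th, w = rk4_step(th, w, 1e-3)
print(E0, energy(th, w))         # energy is conserved along the orbit
```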
Intuitively it's not surprising, since all that's being prohibited here is that a pendulum that starts at, say, a small amplitude cannot later on display larger oscillations on its own. The second and last ingredient we need to answer the OP's question is gained by noticing that an orbit's energy increases monotonically with the distance to the resting fixed point $(0,0)$. Therefore there is a one-to-one correspondence between each ellipse and a given value of energy and, apart from the degeneracy between clockwise and anticlockwise movements (the mirror symmetry with respect to the $\dot\theta=0$ axis in the plot), the same is true for the remaining orbits. So, to the question: which means orbits starting from any initial energy E0 are immediately tracing the homoclinic orbit, which means all orbits are equivalent to the homoclinic orbit. No, not any initial $E_0$, but specifically those with $E_0=mgl$; but, yes, given the (almost) one-to-one correspondence between energy and orbit for the pendulum, any initial condition with $E=mgl$ will lie on the separatrix (on either its lower or upper branch).
1 Russ Tedrake. Underactuated Robotics: Algorithms for Walking, Running, Swimming, Flying, and Manipulation (Course Notes for MIT 6.832). Downloaded on 14/06/2020 from http://underactuated.mit.edu/.
TITLE: If $\mathbf{AA}^T=\mathbf{I}$, is $\mathbf A$ necessarily square?
QUESTION [3 upvotes]: If $\mathbf{AA}^T=\mathbf{I}$, is $\mathbf A$ necessarily square? I am starting to learn about matrices, and had the above question. When I have tried to think about this, I have not been able to progress using matrix multiplication, since $\textbf{A}$ and its transpose do not have inverses unless they are square. The only conclusion I could come to using matrix multiplication is that the product of a matrix and its transpose, whatever the dimensions, is square and symmetric. I also tried to consider this component-wise; for a 1x3 case, it was easy to see that there are no solutions. But the algebra for a 2x3 case was quite messy because it involved 6 variables. I am not sure how else to think about this. I have seen the proofs that a matrix must be square to have an inverse (here), but the answers all rely on the additional defining property of an inverse being that $\textbf{AA}^{-1}=\textbf{A}^{-1}\textbf{A}$, and if $\textbf{A}$ were not square, $\textbf{AA}^{-1}$ could theoretically be equal to $\textbf{I}$ but then it would not have the same dimensions as $\textbf{A}^{-1}\textbf{A}$, violating the above property. As a similar constraint is applied to orthogonal matrices, these would also have to be square. However, is it possible for a non-square matrix to be such that $\textbf{AA}^T=\textbf{I}$, whether or not $\textbf{A^TA}=\textbf{I}$, where the identity matrix here could be of a different dimension? If so, does $\textbf{AA}^T=\textbf{I}$ mandate that $\textbf{A}^T\textbf{A}=\textbf{I}$? REPLY [0 votes]: $$ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$
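The example in the answer can be checked directly; a short sketch (numpy used for the matrix products) also answers the follow-up question: $AA^T = I$ does not force $A^T A = I$:

```python
# A is 2x3 and A @ A.T is the 2x2 identity, yet A.T @ A is a 3x3
# projection (its bottom-right entry is 0), so A^T A != I.
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
AAT = A @ A.T
ATA = A.T @ A
print(AAT)  # 2x2 identity
print(ATA)  # diag(1, 1, 0): not the 3x3 identity
```

More generally, $AA^T = I$ says the rows of $A$ are orthonormal, which is possible only when $A$ has at least as many columns as rows.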
TITLE: Graph The Solution Of First Order Linear ODE
QUESTION [0 upvotes]: 1. Graph all of the solutions of $y'=-\frac{x}{y}$. 2. Find the values of $x_0$ and $y_0$ such that there is one and only one solution, defined on a region that includes $x_0$, such that $y(x_0)=y_0$. $$\frac{dy}{dx}=-\frac{x}{y}$$ $$\frac{ydy}{dx}=-x$$ $${ydy}=-xdx$$ Integrating both sides: $$\frac{y^2}{2}=-\frac{x^2}{2}+c$$ $$y^2=-{x^2}+k$$ where $k=2c$ $$y=\pm\sqrt{k-x^2}$$ If I raise both sides to the power of 2 and get a circle, am I changing the graph? Should it be just the graph of a square root starting at $y=\sqrt{k}$ and a mirror graph for the negative square root?
REPLY [1 votes]: You found all the real solutions, expressed in the form of the equation $$x^2+y^2=k$$ with $y\neq 0$ (due to the term $\frac{x}{y}$ in the ODE). This implies $k>0\quad\to\quad k=R^2$ $$x^2+y^2=R^2$$ So, the graph is an infinity of circles with a common center $(0,0)$ and any radius $R$. Writing $y=\pm\sqrt{R^2-x^2}$ changes nothing in the real domain. But you cannot draw an infinity of circles. Didn't you forget, in the wording of your question, to mention a condition which allows one to determine a particular value of $R$, hence to draw only one circle?
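A quick numerical sketch confirms that the semicircles found above do satisfy the ODE ($k = 4$ here is an arbitrary positive constant):

```python
# Central-difference check that y(x) = sqrt(k - x**2) solves y' = -x/y
# at a few sample points inside the domain (-sqrt(k), sqrt(k)).
import math

k = 4.0
y = lambda x: math.sqrt(k - x * x)

h = 1e-6
for x in (-1.5, -0.5, 0.0, 0.7, 1.3):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx - (-x / y(x))) < 1e-6
print("y' = -x/y holds at the sample points")
```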
TITLE: For a number field $K$, does there exist totally splitting prime?
QUESTION [3 upvotes]: Let $K$ be a number field. Then does there exist a rational prime number $l$ which splits completely in $K$? I think this follows from the Chebotarev density theorem, but I think there exists a more elementary proof.
REPLY [1 votes]: Wojowu gave a link to an elementary proof that the Galois closure has infinitely many primes splitting completely, because the minimal polynomial of its primitive element, as a function on the integers, has a polynomial growth rate, giving infinitely many $p$ with $f(m) \equiv 0 \bmod p$; those $p$ split completely in the Galois closure as well as in $K$. Now an interesting alternative is to look at the Dedekind zeta function, which contains all the information about those things: Let $\sigma_1,\ldots,\sigma_n$ be the embeddings $K \to \mathbb{C}$, counting each pair of complex embeddings only once. Let $\nu_j = 2$ for a complex embedding, $=1$ for a real embedding. If there are finitely many primes splitting completely then $$\zeta_K(s) = \prod_p \prod_{j=1}^{g(p)} \frac{1}{1-p^{-s f_j(p)}}$$ is analytic at $s=1$, since $g(p) \le N$ and for almost every $p$, $f_j(p) \ge 2$. We know this isn't the case because $$\Gamma(s) \zeta_K(s) \ge \Gamma(s)\sum_{a \in O_K^*/O_K^\times}N(a)^{-s} = \int_{\mathbb{R}^n / \log \iota(O_K^\times)} (\Theta(e^x)-1) |e^x|^sd^n x$$ with $e^x = (e^{x_1},\ldots,e^{x_n}), |e^x| = \prod_{j=1}^n |e^x_j|$ and $$\log\iota(a) = (\log\sigma_1(a),\ldots,\log\sigma_n(a)), \qquad \log \iota(O_K^\times) \text{ a lattice of rank } n-m$$ and $$\Theta(x)= \sum_{a \in O_K}\exp(-\sum_{j=1}^n x_j |\sigma_j(a)|^{ \nu_j})$$ and $\Theta(x)-1 \to \infty $ as $x \to 0$.
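The elementary argument in the first paragraph can be illustrated for $K = \mathbb{Q}(i)$, whose primitive element $i$ has minimal polynomial $f(x) = x^2 + 1$: every odd prime dividing some value $m^2+1$ satisfies $p \equiv 1 \pmod 4$ and hence splits completely in $K$. A brute-force sketch:

```python
# Collect odd prime factors of m**2 + 1 for small m; all of them are
# congruent to 1 mod 4, i.e. they split completely in Q(i).
split_primes = set()
for m in range(1, 200):
    n = m * m + 1
    while n % 2 == 0:          # discard the even (ramified) part
        n //= 2
    p = 3
    while p * p <= n:          # trial division over odd candidates
        while n % p == 0:
            split_primes.add(p)
            n //= p
        p += 2
    if n > 1:
        split_primes.add(n)    # leftover factor is itself an odd prime
print(sorted(split_primes)[:8])
assert all(p % 4 == 1 for p in split_primes)
```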
TITLE: Are there negative prime numbers?
QUESTION [1 upvotes]: It seems generally admitted that there are no negative prime numbers. What are the rules that can affirm this? Thanks in advance and happy new year to all. Best regards,
REPLY [2 votes]: This is false: $-2$ is prime. One of the two following statements (depending a bit on context) is the definition of primality. Indivisibility: A number $p$ is prime if it doesn't have any factors other than itself and $1$, up to unit multiples. Note: "up to unit multiples" allows us to ignore the fact that $-1|7$ or $i|3i$. Primality: A number $p$ is prime if whenever $p|ab$ either $p|a$ or $p|b$. In the integers these definitions are equivalent, but for other sets they might not be; in other sets, we usually call the second one the definition of primality. However, $-7$ is a prime integer according to both of these definitions. Lay people might claim that there aren't any negative primes, but there's no mathematical basis for this claim.
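The second definition can be stress-tested by brute force; a small sketch checking that $p = -7$ satisfies "$p \mid ab \Rightarrow p \mid a$ or $p \mid b$" over a range of integers (exhaustive only over that range, of course):

```python
# Check the primality property for p = -7: no pair (a, b) in the range has
# p | a*b without p | a or p | b. (In Python, a % p == 0 iff p divides a,
# and divisibility by -7 is the same as divisibility by 7.)
p = -7
counterexample = None
for a in range(-40, 41):
    for b in range(-40, 41):
        if (a * b) % p == 0 and a % p != 0 and b % p != 0:
            counterexample = (a, b)
print(counterexample)  # None: -7 behaves as a prime
```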
TITLE: Is the path space of a space homeomorphic to the disjoint union of the path spaces of the path components
QUESTION [2 upvotes]: Let $X$ be an arbitrary wild topological space. Equip the space $\mathcal{C}([0,1],X)$ of continuous paths $[0,1]\rightarrow X$ with the compact open topology. As every path lands in exactly one path component, $\mathcal{C}([0,1],X)$ is in bijection with $\coprod_{X_i\in\pi_0(X)}\mathcal{C}([0,1],X_i)$. However, is this bijection a homeomorphism? As $\mathcal{C}([0,1],\_)$ is a functor using the compact open topology, the map from $\coprod_{X_i\in\pi_0(X)}\mathcal{C}([0,1],X_i)$ to $\mathcal{C}([0,1],X)$ is continuous, but is its inverse continuous as well? I assume this surely holds if the path components are open, i.e. $X$ is locally path connected, but does it hold in any case?
REPLY [2 votes]: No, it is not true if $X$ is not locally path-connected. Consider $X = \{ 1/n \mid n \in \mathbb{N}^* \} \cup \{ 0 \}$ (with the subspace topology $X \subset \mathbb{R}$). Then the path components of $X$ are its singletons, and for each singleton $\{x\}$, $\mathcal{C}([0,1], \{x\})$ is a singleton itself. However, $\mathcal{C}([0,1], X)$ itself is not a countable disjoint union of singletons: it is in fact homeomorphic to $X$ itself, via $$\mathcal{C}([0,1], X) \xrightarrow{\cong} X, \quad f \mapsto f(0)$$ (there is nothing special about $0$: every $f : [0,1] \to X$ is constant, so I could have chosen evaluation at any $t \in [0,1]$). (This can be seen intuitively: if $X$ itself is not homeomorphic to the disjoint union of its path components, then $\mathcal{C}([0,1], X)$ has little chance of being the disjoint union over the path components of $X$ itself. See here for more examples.) However, if $X$ is locally path connected this is true.
The map $$w : \bigsqcup_{X_i \in \pi_0(X)} \mathcal{C}([0,1], X_i) \to \mathcal{C}([0,1], X)$$ is continuous by definition (each inclusion $\mathcal{C}([0,1], X_i) \to \mathcal{C}([0,1], X)$ is continuous; use the universal property of the disjoint union). But it is also open: let $$C(U,K) = \{ f : [0,1] \to X_i \mid f(K) \subset U \}$$ be an open set in the subbasis defining the topology on $\mathcal{C}([0,1], X_i)$ (i.e. $K \subset [0,1]$ is compact and $U \subset X_i$ is open). Then since $X$ is locally path-connected, its path component $X_i$ is open, and thus $U \subset X_i \subset X$ is open in $X$ too. It follows that $w(C(U,K))$ is also open in $\mathcal{C}([0,1], X)$ (by definition). So $w$ sends the open sets of a subbasis to open sets; since $w$ is injective, it also sends their finite intersections to the corresponding intersections of images, so $w$ is an open map. To conclude, $w$ is open, continuous, and bijective, so it's a homeomorphism.
\begin{document} \newcommand{\T}{\mathbb{T}} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\tx}[1]{\quad\mbox{#1}\quad} \title [Escaping Set of Hyperbolic Semigroup]{Escaping Set of Hyperbolic Semigroup} \author[Bishnu Hari Subedi, Ajaya Singh]{Bishnu Hari Subedi $^1$ \and Ajaya Singh $^2$} \address{ $^{1, \; 2}$Central Department of Mathematics, Institute of Science and Technology, Tribhuvan University, Kirtipur, Kathmandu, Nepal \\Email: subedi.abs@gmail.com, singh.ajaya1@gmail.com} \vspace{-.2cm} \thanks{\hspace{-.5cm}\tt This research work of the first author is supported by a PhD faculty fellowship from the University Grants Commission, Nepal. \hfill } \maketitle \thispagestyle{empty} {\footnotesize \noindent{\bf Abstract:} \textit{In this paper, we mainly study hyperbolic semigroups, for which the escaping set is non-empty and Eremenko's conjecture remains valid. We prove that if each generator of a bounded-type transcendental semigroup $ S $ is hyperbolic, then the semigroup is itself hyperbolic and all components of $ I(S) $ are unbounded}. \\ \noindent{\bf Key Words}: Escaping set, Eremenko's conjecture, transcendental semigroup, hyperbolic semigroup. \\ \bf AMS (MOS) [2010] Subject Classification.} {37F10, 30D05} \section{Introduction} Throughout this paper, we denote the \textit{complex plane} by $\mathbb{C}$ and the set of positive integers by $\mathbb{N}$. We assume that the function $f:\mathbb{C}\rightarrow\mathbb{C}$ is a \textit{transcendental entire function} (TEF) unless otherwise stated. For any $n\in\mathbb{N}$, $f^{n}$ always denotes the $n$-th \textit{iterate} of $f$. Let $ f $ be a TEF. The set of the form $$ I(f) = \{z\in \mathbb{C}:f^n(z)\rightarrow \infty \textrm{ as } n\rightarrow \infty \} $$ is called the \textit{escaping set}, and any point $ z \in I(f) $ is called an \textit{escaping point}. For a TEF $f$, the escaping set $I(f)$ was first studied by A. Eremenko \cite{ere}.
He showed that $I(f)\not= \emptyset$; that the boundary of this set is the Julia set $ J(f) $ (that is, $ J(f) =\partial I(f) $); that $I(f)\cap J(f)\not = \emptyset$; and that $\overline{I(f)}$ has no bounded component. Motivated by this last statement, he posed the question: \textit{Is every component of $ I(f) $ unbounded?} This question is considered an important open problem of transcendental dynamics, known as \textit{Eremenko's conjecture}. Note that the complement of the Julia set $ J(f) $ in the complex plane $ \mathbb{C} $ is the \textit{Fatou set} $F(f)$. Recall that the set $CV(f) = \{w\in \mathbb{C}: w = f(z)\;\ \text{for some}\;\ z \;\ \text{with}\;\ f^{\prime}(z) = 0\} $ is the set of \textit{critical values}. The set $AV(f)$, consisting of all $w\in \mathbb{C}$ such that there exists a curve (asymptotic path) $\Gamma:[0, \infty) \to \mathbb{C}$ with $\Gamma(t)\to\infty$ and $f(\Gamma(t))\to w$ as $t\to\infty$, is called the set of \textit{asymptotic values} of $ f $, and the set $SV(f) = \overline{(CV(f)\cup AV(f))}$ is called the set of \textit{singular values} of $ f $. If $SV(f)$ has only finitely many elements, then $f$ is said to be of \textit{finite type}. If $SV(f)$ is a bounded set, then $f$ is said to be of \textit{bounded type}. The sets $$\mathscr{S} = \{f: f\;\ \textrm{is of finite type}\} \;\; \text{and}\; \; \mathscr{B} = \{f: f\;\ \textrm{is of bounded type}\} $$ are called the \textit{Speiser class} and the \textit{Eremenko-Lyubich class}, respectively. A \textit{post-singular point} is a point on the orbit of a singular value. That is, if $z$ is a singular value of an entire function $f$, then $f^n(z)$ is a post-singular point for every $n\geq 0$. The set of all post-singular points is called the \textit{post-singular set}, denoted by $$P(f) =\bigcup_{n\geq 0}f^n(SV( f)).$$ The entire function $f$ is called \textit{post-singularly bounded} if its post-singular set is bounded, and \textit{post-singularly finite} if its post-singular set is finite.
A transcendental entire function $ f $ is \textit{hyperbolic} if the post-singular set $ P(f) $ is a compact subset of the Fatou set $ F(f) $. The main concern of this paper is the study of the escaping set under a transcendental semigroup, so we start our formal study with the notion of a transcendental semigroup. Note that for the complex plane $\mathbb{C}$, the set $\text{Hol}(\mathbb{C})$ denotes the set of all holomorphic functions on $ \mathbb{C} $. If $ f\in \text{Hol}(\mathbb{C}) $, then $ f $ is a polynomial or a transcendental entire function. The set $\text{Hol}(\mathbb{C})$ forms a semigroup with semigroup operation being functional composition. \begin{dfn}[\textbf{Transcendental semigroup}] Let $ A = \{f_i: i\in \mathbb{N}\} \subset \text{Hol}(\mathbb{C})$ be a set of transcendental entire functions $ f_{i}: \mathbb{C}\rightarrow \mathbb{C} $. A \textit{transcendental semigroup} $S$ is a semigroup generated by the set $ A $ with semigroup operation being functional composition. We denote this semigroup by $S = \langle f_{1}, f_{2}, f_{3}, \cdots, f_{n}, \cdots \rangle$. \end{dfn} Here, each $f \in S$ is a transcendental entire function and $S$ is closed under functional composition. Thus each $f \in S$ is constructed through the composition of a finite number of functions $f_{i_k},\; (k=1, 2, 3,\ldots, m) $. That is, $f =f_{i_1}\circ f_{i_2}\circ f_{i_3}\circ \cdots\circ f_{i_m}$. A semigroup generated by finitely many functions $f_i, (i = 1, 2, 3,\ldots, n) $ is called a \textit{finitely generated transcendental semigroup}. We write $S= \langle f_1,f_2,\ldots,f_n\rangle$. If $S$ is generated by a single transcendental entire function $f$, then $S$ is a \textit{cyclic or trivial transcendental semigroup}. We write $S = \langle f\rangle$. In this case each $g \in S$ can be written as $g = f^n$, the $n$-th iterate of $f$ with itself.
The transcendental semigroup $S$ is \textit{abelian} if $f_i\circ f_j =f_j\circ f_i$ for all generators $f_{i}$ and $f_{j}$ of $ S $. Based on the Fatou-Julia-Eremenko theory of a complex analytic function, the Fatou set, Julia set and escaping set in the setting of semigroups are defined as follows. \begin{dfn}[\textbf{Fatou set, Julia set and escaping set}]\label{2ab} The set of normality, or the Fatou set, of the transcendental semigroup $S$ is defined by \[F (S) = \{z \in \mathbb{C}: S\;\ \textrm{is normal in a neighborhood of}\;\ z\}.\] The \textit{Julia set} of $S$ is defined by $J(S) = \mathbb{C} \setminus F(S)$ and the \textit{escaping set} of $S$ by \[I(S) = \{z \in \mathbb{C}: \; f^n(z)\rightarrow \infty \;\ \textrm{as} \;\ n \rightarrow \infty\;\ \textrm{for all}\;\ f \in S\}.\] We call each point of the set $ I(S) $ an \textit{escaping point}. \end{dfn} It is obvious that $F(S)$ is the largest open subset of $ \mathbb{C} $ on which the semigroup $ S $ is normal, and hence its complement $J(S)$ is the smallest such closed set for any transcendental semigroup $S$. The escaping set $ I(S) $, in contrast, is neither an open nor a closed set (if it is non-empty) for any semigroup $S$. If $S = \langle f\rangle$, then $F(S), J(S)$ and $I(S)$ are respectively the Fatou set, Julia set and escaping set of the classical iteration theory of complex dynamics; in this situation we simply write $F(f), J(f)$ and $I(f)$. For existing results of Fatou-Julia theory for transcendental semigroups, we refer to \cite{kri, kum2, kum1, kum3, poo}. \section{Some Fundamental Features of Escaping Set} The following immediate relation between $ I(S) $ and $ I(f) $ for any $ f \in S $ is clear from the definition of the escaping set. \begin{theorem}\label{1c} $I(S) \subset I(f)$ for all $f \in S$ and hence $I(S)\subset \bigcap_{f\in S}I(f)$. \end{theorem} \begin{proof} Let $ z \in I(S) $. Then $ f^{n}(z)\rightarrow \infty $ as $ n \rightarrow \infty $ for all $ f \in S $.
That is, $ z \in I(f) $ for every $ f \in S $, which proves the first inclusion; the second inclusion follows immediately. \end{proof} Note that the same type of relation (Theorem \ref{1c}) holds between $ F(S) $ and $ F(f) $, whereas the opposite relation holds between the sets $ J(S) $ and $ J(f) $. Poon {\cite[Theorem 4.1, Theorem 4.2] {poo}} proved that the Julia set $ J(S) $ is perfect and $ J(S) = \overline{\bigcup_{f \in S} J(f)} $ for any transcendental semigroup $ S $. From the last inclusion of Theorem \ref{1c}, we see that the escaping set may be empty. Note that $I(f)\not = \emptyset$ in classical iteration theory \cite{ere}. Dinesh Kumar and Sanjay Kumar {\cite [Theorem 2.5]{kum2}} exhibited the following transcendental semigroup $S$ for which $I(S)$ is empty. \begin{theorem}\label{e} The transcendental semigroup $S = \langle f_{1},\;f_{2}\rangle$ generated by two functions $f_{1}$ and $ f_{2} $ from the two parameter families $\{e^{-z+\gamma}+c: \gamma, c \in \mathbb{C},\; Re(\gamma)<0, \; Re(c)\geq 1\}$ and $\{e^{z+\mu}+d: \mu, d\in \mathbb{C},\; Re(\mu)<0, \; Re(d)\leq -1\}$, respectively, has empty escaping set $I(S)$. \end{theorem} In the case of a non-empty escaping set $ I(S) $, Eremenko's result $\partial I(f) = J(f)$ \cite{ere} from classical transcendental dynamics can be generalized to the semigroup setting. The following results, due to Dinesh Kumar and Sanjay Kumar {\cite [Lemma 4.2 and Theorem 4.3]{kum2}}, yield the generalization to the semigroup setting. \begin{theorem}\label{3} Let $S$ be a transcendental entire semigroup such that $ I(S) \neq \emptyset $. Then \begin{enumerate} \item $int(I(S))\subset F(S)\;\ \text{and}\;\ ext(I(S))\subset F(S) $, where $int$ and $ext$ respectively denote the interior and exterior of $I(S)$. \item $\partial I(S) = J(S)$, where $\partial I(S)$ denotes the boundary of $I(S)$. 
\end{enumerate} \end{theorem} The last statement is equivalent to $ J(S)\subset \overline{I(S)} $. If $ I(S) \neq \emptyset $, then we {\cite[Theorem 4.6]{sub1}} proved the following result, which generalizes Eremenko's result $I(f)\cap J(f) \neq \emptyset $ {\cite[Theorem 2]{ere}} from classical transcendental dynamics to holomorphic semigroup dynamics. \begin{theorem}\label{lu1} Let $S$ be a transcendental semigroup such that $ F(S)$ has a multiply connected component. Then $I(S)\cap J(S) \neq \emptyset $. \end{theorem} Eremenko and Lyubich \cite{ere1} proved that if a transcendental function $ f$ belongs to the class $ \mathscr{B} $, then $ I(f)\subset J(f) $ and $ J(f) = \overline{I(f)} $. Dinesh Kumar and Sanjay Kumar {\cite [Theorem 4.5]{kum2}} generalized these results to finitely generated transcendental semigroups of bounded type as follows. \begin{theorem}\label{4} Let $ S= \langle f_1, f_2, \ldots,f_n\rangle $ be a finitely generated transcendental semigroup in which each generator $f_i $ is of bounded type. Then $ I(S)\subset J(S) $ and $ J(S) = \overline{I(S)} $. \end{theorem} \begin{proof} Eremenko and Lyubich's result \cite{ere1} shows that $ I(f) \subset J(f) $ for each $ f\in S $ of bounded type. Poon's result {\cite[Theorem 4.2]{poo}} shows that $ J(S) = \overline {\bigcup_{f\in S}J(f)}$. Therefore, by the definition of the escaping set and Theorem \ref{1c}, for every $ f\in S$ we have $ I(S)\subset I(f)\subset J(f)\subset J(S)$. The second part follows from the facts $ J(S)\subset\overline{I(S)} $ and $ I(S)\subset J(S) $. \end{proof} \section{Escaping set of Hyperbolic Semigroup} The definitions of critical values, asymptotic values and singular values, as well as post-singularities, of transcendental entire functions generalize to the setting of arbitrary transcendental semigroups. 
\begin{dfn}[\textbf{Critical point, critical value, asymptotic value and singular value}] A point $z\in \mathbb{C}$ is called a \textit{critical point} of $S$ if it is a critical point of some $g \in S$. A point $w\in \mathbb{C}$ is called a \textit{critical value} of $S$ if it is a critical value of some $g \in S$. A point $w \in \mathbb{C}$ is called an \textit{asymptotic value} of $S$ if it is an asymptotic value of some $g \in S$. A point $w\in \mathbb{C}$ is called a \textit{singular value} of $S$ if it is a singular value of some $g \in S$. If every $g \in S $ belongs to $\mathscr{S}$ (respectively $\mathscr{B}$), we call $ S $ a semigroup of class $\mathscr{S}$ (respectively $\mathscr{B}$), or of finite (respectively bounded) type. \end{dfn} \begin{dfn}[\textbf{Post-singularly bounded (or finite) semigroup}] A transcendental semigroup $ S $ is said to be post-singularly bounded (or post-singularly finite) if each $g \in S$ is post-singularly bounded (or post-singularly finite). The post-singular set of a post-singularly bounded semigroup $ S $ is the set $$P(S) =\overline{\bigcup_{f\in S}\bigcup_{n\geq 0}f^{n}(SV( f))}. $$ \end{dfn} \begin{dfn}[\textbf{Hyperbolic semigroup}]\label{1m} A transcendental entire function $f$ is said to be \textit{hyperbolic} if the post-singular set $P(f)$ is a compact subset of $F(f)$. A transcendental semigroup $S$ is said to be \textit{hyperbolic} if $ P(S)$ is a compact subset of $F(S) $. \end{dfn} Note that if the transcendental semigroup $ S $ is hyperbolic, then each $ f\in S$ is hyperbolic; however, the converse may not be true. The fact that $ P(f^{k}) = P(f) $ for all $ k \in \mathbb{N} $ shows that $ f^{k} $ is hyperbolic if $ f $ is hyperbolic. The following result, in which Eremenko's conjecture holds, is due to Dinesh Kumar and Sanjay Kumar {\cite [Theorem 3.16]{kum2}}. \begin{theorem}\label{2a} Let $f \in \mathscr{B}$ be periodic with period $p$ and hyperbolic, and let $g =f^n+p, \; n \in \mathbb{N}$. 
Then $S =\langle f, g\rangle$ is hyperbolic and all components of $I(S)$ are unbounded. \end{theorem} \begin{exm} $ f(z) = e^{\lambda z} $ is a hyperbolic entire function for each $\lambda \in (0, \frac{1}{e}) $. The semigroup $ S = \langle f, g \rangle $, where $g =f^m +p$ with $ p = \frac{2 \pi i}{\lambda} $, is a hyperbolic transcendental semigroup. \end{exm} We have generalized Theorem \ref{2a} to finitely generated hyperbolic semigroups with some modifications. This theorem is a good source of transcendental semigroups with non-empty escaping set for which Eremenko's conjecture holds. \begin{theorem}\label{hs1} Let $ S =\langle f_{1}, f_{2}, \ldots, f_{n} \rangle$ be an abelian transcendental semigroup of bounded type in which each $ f_{i} $ is hyperbolic for $ i =1, 2, \ldots, n $. Then the semigroup $ S $ is hyperbolic and all components of $ I(S) $ are unbounded. \end{theorem} \begin{lem}\label{hs2} Let $ f $ and $ g $ be transcendental entire functions. Then $ SV(f \circ g) \subset SV(f) \cup f(SV(g)) $. \end{lem} \begin{proof} See for instance {\cite[Lemma 2]{ber}}. \end{proof} \begin{lem}\label{hs3} Let $ f $ and $ g $ be permutable transcendental entire functions. Then $ f^{m}(SV(g)) \subset SV(g) $ and $g^{m}(SV(f))\subset SV(f) $ for all $ m \in \mathbb{N} $. \end{lem} \begin{proof} We first prove that $ f(SV(g)) \subset SV(g) $, and then use induction to prove $ f^{m}(SV(g)) \subset SV(g) $. Let $ w \in f(SV(g)) $. Then $ w = f(z) $ for some $ z \in SV(g) $, so $ z $ is either a critical value or an asymptotic value of the function $ g $. First suppose that $ z $ is a critical value of $ g $. Then $ z = g(u) $ with $ g^{'}(u) =0 $. Since $ f $ and $ g $ are permutable, $ w = f(z) = f(g(u))= (f\circ g)(u) = (g \circ f)(u) $. Also, $ (f\circ g)^{'}(u) = f^{'}(g(u)) g^{'}(u) =0 $. This shows that $ u $ is a critical point of $ f \circ g =g \circ f $ and $ w $ is a critical value of $ f \circ g =g \circ f $. 
By the permutability of $ f $ and $ g $, we can write $f^{'}(g(u)) g^{'}(u) = g^{'}(f(u)) f^{'}(u) =0$ for any critical point $ u $ of $ f \circ g $. Since $ g^{'}(f(u)) f^{'}(u) =0 $, either $ f^{'}(u) =0 $, in which case $ u$ is a critical point of $ f $, or $ g^{'}(f(u)) =0 $, in which case $ f(u) $ is a critical point of $ g$. This shows that $ w = g(f(u)) $ is a critical value of $ g $. Therefore, $w\in SV(g)$. Next, suppose that $ z $ is an asymptotic value of the function $ g $. We have to prove that $ w = f(z) $ is also an asymptotic value of $ g $. There exists a curve $ \gamma: [0, \infty) \to \mathbb{C} $ such that $ \gamma (t) \to \infty $ and $ g(\gamma (t)) \to z $ as $ t \to \infty $. So $ f(g(\gamma (t))) \to f(z) =w $ as $ t \to \infty $ along $ \gamma $. Since $ f\circ g = g \circ f $, it follows that $ g(f(\gamma (t))) \to w$ as $ t \to \infty $ along $ \gamma $. This shows that $ w $ is an asymptotic value of $ g $, which proves our assertion. Now assume that $ f^{k}(SV(g)) \subset SV(g) $ for some $ k \in \mathbb{N} $. Then $$ f^{k +1}(SV(g)) =f(f^{k}(SV(g))) \subset f(SV(g)) \subset SV(g). $$ Therefore, by induction, $ f^{m}(SV(g)) \subset SV(g) $ for all $ m \in \mathbb{N} $. The second part, $g^{m}(SV(f))\subset SV(f) $, can be proved similarly. \end{proof} \begin{lem}\label{hs4} Let $ f $ and $ g $ be two permutable hyperbolic transcendental entire functions. Then their composite $ f\circ g $ is also hyperbolic. \end{lem} \begin{proof} We have to prove that $ P(f \circ g) $ is a compact subset of the Fatou set $ F(f \circ g) $. By {\cite[Lemma 3.2]{kum4}}, $ F(f \circ g) \subset F(f) \cap F(g) $, so $ F(f \circ g) $ is a subset of both $ F(f) $ and $ F(g) $. Thus the lemma will be proved if we show that $ P(f \circ g) $ is a compact subset of $ F(f) \cup F(g) $. 
By the definition of the post-singular set of a transcendental entire function, we can write \begin{align*} P(f \circ g) & =\overline{\bigcup_{m\geq 0}(f \circ g)^m(SV(f \circ g))}\\ & = \overline{\bigcup_{m\geq 0} f^{m}(g^{m}(SV(f \circ g)))}\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\text{by the permutability of $f$ and $g$}) \\ & \subset \overline{\bigcup_{m\geq 0} f^{m}(g^{m}(SV(f) \cup f(SV(g))))} \;\;\;\;\;\;\;\;\;\; (\text{by Lemma \ref{hs2}})\\ & = \overline{\bigcup_{m\geq 0} f^{m}(g^{m}(SV(f))) \cup g^{m}(f^{m + 1}(SV(g)))} \\ & \subset\overline{\bigcup_{m\geq 0} f^{m}(SV(f))} \cup \overline{\bigcup_{m\geq 0} g^{m}(SV(g))}\;\;\;\;(\text{by Lemma \ref{hs3}})\\ & = P(f) \cup P(g). \end{align*} Since $ f $ and $ g $ are hyperbolic, $ P(f) $ and $ P(g) $ are compact subsets of $ F(f) $ and $ F(g) $, respectively. Therefore, the set $P(f) \cup P(g) $ is a compact subset of $ F(f) \cup F(g) $. \end{proof} \begin{proof}[Proof of Theorem \ref{hs1}] Any $ f \in S $ can be written as $ f = f_{i_1}\circ f_{i_2}\circ f_{i_3}\circ \cdots\circ f_{i_m}$. By the permutability of the $ f_{i} $, we can rearrange the $ f_{i_{j}} $ and ultimately represent $ f $ as $$ f = f_{1}^{t_{1}} \circ f_{2}^{t_{2}} \circ \ldots \circ f_{n}^{t_{n}} $$ where each $ t_{k}\geq 0 $ is an integer for $ k = 1, 2, \ldots, n $. Lemma \ref{hs4} can be applied repeatedly to show that each of $f_{1}^{t_{1}}, f_{2}^{t_{2}},\ldots, f_{n}^{t_{n}} $ is hyperbolic. By repeated application of the same lemma, $f = f_{1}^{t_{1}} \circ f_{2}^{t_{2}} \circ \ldots \circ f_{n}^{t_{n}}$ is itself hyperbolic, and so the semigroup $ S $ is hyperbolic. The second part follows from {\cite[Theorem 3.3]{sub2}} under the assumptions of this theorem. \end{proof} \textbf{Acknowledgment}: We express our heartfelt thanks to Prof. Shunshuke Morosawa, Kochi University, Japan, for his thorough reading of this paper and his valuable suggestions and comments.
TITLE: Integration by Trig Substitution - completely stuck QUESTION [4 upvotes]: I'm trying to solve this integral, but after more than an hour I can't figure it out. I've outlined my thinking below. $$ \int \dfrac{dx}{x^2\sqrt{4x^2+9}} $$ If we let $\ a=3 $ and $\ b=2 $, the radical in the denominator fits the form $\ \sqrt{a^2+b^2x^2} $, which makes me think this is a trig substitution problem. I make the substitution $\ x=\dfrac{3}{2}\tan\theta $ and $\ dx=\dfrac{3}{2}\sec^2\theta \, d\theta $. I then have: $$ \int \dfrac{3}{2}\dfrac{\sec^2\theta}{\dfrac{9}{4}\tan^2\theta\sqrt{9+4(\dfrac{9}{4}\tan^2\theta)}}d\theta $$ I pull the constants out of the integral by the constant multiple rule: $$ \dfrac{12}{18} \int \dfrac{\sec^2\theta}{\tan^2\theta\sqrt{9+4(\dfrac{9}{4}\tan^2\theta)}}d\theta $$ After simplifying the radicand, I get $\sqrt{9(1+\tan^2\theta)}$, which allows me to eliminate the radical entirely by the Pythagorean identity (also pulling the 3 out of the denominator): $$ \dfrac{2}{9} \int \dfrac{\sec^2\theta}{\tan^2\theta \sec\theta}d\theta $$ Then I'm stuck after canceling the $\sec\theta$ in both the numerator and the denominator. $$ \dfrac{2}{9} \int \dfrac{\sec\theta}{\tan^2\theta}d\theta $$ I've tried every trig identity I know to try and rewrite $\sec\theta$ and $\tan\theta$ in a way that allows me to simplify or do something, and I'm just lost at this point. Can anyone please help point me in the right direction? REPLY [3 votes]: Rewrite $\sec$ and $\tan$ in terms of sines and cosines; you'll find that $$\int \frac{\sec \theta}{\tan^2 \theta} d\theta = \int\frac{1/\cos \theta}{\sin^2 \theta/\cos^2 \theta} d\theta = \int \frac{\cos \theta}{\sin^2\theta} d\theta$$ Now consider a substitution of $u = \sin \theta$.
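Carrying the answer's hint to completion (my own continuation, not part of the original reply): $u=\sin\theta$ gives $\int \cos\theta/\sin^2\theta \, d\theta = -1/\sin\theta + C$, and with $\tan\theta = 2x/3$ we have $\sin\theta = 2x/\sqrt{4x^2+9}$, so the original integral is $-\sqrt{4x^2+9}/(9x) + C$. A quick finite-difference check that this antiderivative differentiates back to the integrand:

```python
import math

def integrand(x):
    return 1.0 / (x**2 * math.sqrt(4*x**2 + 9))

def antiderivative(x):
    # F(x) = -sqrt(4x^2+9)/(9x), obtained by finishing the u = sin(theta)
    # substitution and back-substituting sin(theta) = 2x/sqrt(4x^2+9)
    return -math.sqrt(4*x**2 + 9) / (9*x)

# central-difference check that F'(x) agrees with the integrand
for x in (0.5, 1.3, 7.0):
    h = 1e-6
    fd = (antiderivative(x + h) - antiderivative(x - h)) / (2*h)
    assert abs(fd - integrand(x)) < 1e-5
print("antiderivative verified")
```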
TITLE: Proving $A^n=\left[\begin{smallmatrix}1&(2^n-1)a\\0&2^n\end{smallmatrix}\right]$ QUESTION [0 upvotes]: Given the matrix \begin{align} A = \begin{bmatrix}1&a\\0&2\end{bmatrix} , \end{align} is it true that \begin{align} A^n=\begin{bmatrix}1&(2^n-1)a\\0&2^n\end{bmatrix} \end{align} for all $n \geq 0$? I found out that $A^2 = \begin{bmatrix}1 & 3a\\ 0&4 \end{bmatrix}$ and that $A^3 = \begin{bmatrix}1 & 7a\\ 0&8 \end{bmatrix}$, which means that my conjecture holds for small $n$. But I do not know how to prove it in general. REPLY [1 votes]: $A$'s characteristic equation is $\lambda^2 -3\lambda +2=0$, whose roots are $1$ and $2$. Hence, $A^n=B+2^n\cdot C$,where $B$ and $C$ are some matrices that you will find by substituting $n=0$ and $n=1$ into this relation. Note: This is a method that works for any $2 \times 2$ matrix provided that the computations do not get messy.
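The conjecture also follows by direct induction: if $A^n$ has the claimed form, then the top-right entry of $A^{n+1}=A^nA$ is $1\cdot a + (2^n-1)a\cdot 2 = (2^{n+1}-1)a$. A brute-force check of the first several powers (an illustration of mine, with an arbitrary choice of $a$):

```python
a = 5  # arbitrary value standing in for the symbol a

A = [[1, a], [0, 2]]

def matmul(X, Y):
    # 2x2 integer matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1, 0], [0, 1]]  # A^0
for n in range(9):
    assert P == [[1, (2**n - 1) * a], [0, 2**n]], n
    P = matmul(P, A)
print("formula holds for n = 0..8")
```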
TITLE: Reweighting probability measures by convex potentials, and contraction in transport distance QUESTION [2 upvotes]: Let $W: \mathbf{R}^d \to \mathbf{R}$ be a convex function such that $\int \exp(-W) = 1$, and define probability measures $\mu_y$ by $$\mu_y (dx) = \exp( - W (x - y)) \,dx,$$ i.e. each $\mu_y$ is a translation of the measure $\mu_0$ in the direction $y$. Now, let $V: \mathbf{R}^d \to \mathbf{R}$ be another convex function which is uniformly quadratically convex with parameter $m > 0$, i.e. $V''(x) \succeq m\cdot I_d$ for all $x$. Define reweighted measures $\nu_y$ by \begin{align} \nu_y (dx) &= \exp( - W (x - y) - V(x) + F(y)) \,dx \\ &= \mu_y (dx) \cdot \exp( - V(x) + F(y)), \end{align} with $F(y)$ chosen so that $\nu_y$ integrates to $1$. For some specific choices of $W$, it is possible to show that this reweighting operation is a contraction, in the sense that \begin{align} d ( \nu_{y_1}, \nu_{y_2}) &\leqslant \kappa_{V, W} \cdot d ( \mu_{y_1}, \mu_{y_2}) \\ &= \kappa_{V, W} \cdot | y_2 - y_1 | \end{align} with $\kappa_{V, W} < 1$, and $d$ some transport distance. For a bit of intuition, one can imagine that $\kappa_{V, W}$ gets smaller as the strength of the reweighting operation grows, e.g. as $m$ increases. I am not making a rigorous claim to this effect. My question is: is there a general result which would guarantee that, given a specific $(V, W)$, there exists a $\kappa_{V, W} < 1$ such that the earlier estimate holds? In the best case, I would also hope for quantitative estimates of $\kappa_{V, W}$. I could believe that one might need to make further assumptions on $W$ as well (e.g. uniform convexity, smoothness), but I would ideally like to avoid this. REPLY [1 votes]: There is a simple sufficient condition: If $\nabla W$ is $L$-Lipschitz, then $y \mapsto \nu_y$ is $(L/m)$-Lipschitz with respect to the quadratic Wasserstein distance $d=\mathcal{W}_2$. 
You thus have a contraction if $L<m$, and this condition fits with your "bit of intuition." Though this may be a stronger assumption than you are willing to impose on $W$. Proof: Identifying $\nu_y$ with its density, the convexity of $W$ and $m$-convexity of $V$ ensure that $(-\log\nu_y)$ is $m$-convex, for each $y$. By the Bakry-Emery criterion, $\nu_y$ satisfies the log-Sobolev inequality $$H(\mu\,|\,\nu_y) \le \frac{1}{2m}I(\mu\,|\,\nu_y),$$ for every probability measure $\mu$ on $\mathbb{R}^d$. Here $H(\mu\,|\,\nu_y) = \int \log \tfrac{d\mu}{d\nu_y}\,d\mu$ denotes the relative entropy (KL divergence) and $I(\mu\,|\,\nu_y) = \int |\nabla \log \tfrac{d\mu}{d\nu_y}|^2\,d\mu$ the relative Fisher information. By Otto-Villani, we also have the quadratic transport inequality $$\mathcal{W}_2^2(\mu,\nu_y) \le \frac{2}{m}H(\mu\,|\,\nu_y),$$ for all $\mu$. Combine these two inequalities to get $$\mathcal{W}_2^2(\mu,\nu_y) \le \frac{1}{m^2}I(\mu\,|\,\nu_y),$$ for all $\mu$. For any $y_1,y_2$, we thus find \begin{align*} \mathcal{W}_2^2(\nu_{y_1},\nu_{y_2}) &\le \frac{1}{m^2}I(\nu_{y_1}\,|\,\nu_{y_2}) \\ &= \frac{1}{m^2} \int |\nabla W(x-y_1) - \nabla W(x-y_2)|^2\,\nu_{y_1}(dx) \\ &\le \frac{L^2}{m^2} |y_1-y_2|^2. \end{align*}
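A toy one-dimensional instance (my own illustration; the specific $W$, $V$ are assumptions, not part of the answer): take $W(x)=x^2/2+\tfrac{1}{2}\log 2\pi$, so $\nabla W$ is $L$-Lipschitz with $L=1$, and $V(x)=mx^2/2$. Then $\nu_y$ is Gaussian with mean $y/(1+m)$ and variance $1/(1+m)$, and since $\mathcal{W}_2$ between equal-variance Gaussians is the distance between their means, the exact contraction factor is $1/(1+m)$, consistent with (and better than) the $L/m$ bound:

```python
# Toy Gaussian case: W(x) = x^2/2 (so grad W is 1-Lipschitz, L = 1),
# V(x) = m x^2/2 (so V'' = m).  Here nu_y ~ N(y/(1+m), 1/(1+m)), and W2
# between equal-variance Gaussians is the distance between their means.
L = 1.0
m = 3.0
y1, y2 = 0.7, -2.1

w2 = abs(y1 - y2) / (1 + m)        # exact W2(nu_{y1}, nu_{y2})
bound = (L / m) * abs(y1 - y2)     # the L/m contraction bound
assert w2 <= bound < abs(y1 - y2)  # contraction, since L < m
print(w2, bound)
```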
TITLE: Prove the following Lemma in the polynomial rings. QUESTION [1 upvotes]: Let $R$ be a ring. Then, the natural inclusion $R \to R[x]$, which just sends an element $r \in R$ to the constant polynomial $r$, is a ring homomorphism. Attempts: Let $r \in R$ and define $g : R \to R[x]$ by $g(r) = f$, where $f$ is the constant polynomial $f(x) = r$. Then, \begin{align*} g(r_1 + r_2) &= r_1 + r_2 \\ &= g(r_1) + g(r_2) \end{align*} and \begin{align*} g(r_1 r_2) &= r_1 r_2 \\ &= g(r_1) g(r_2) \end{align*} Hence, proved. Is the above true? REPLY [1 votes]: Your proof looks fine. Depending on your definition of ring homomorphism, you may also want to verify that $g(0)=0$ and $g(1)=1$. A nice general fact is that if $R$ is a subring of a ring $S$, then the inclusion $R\ \hookrightarrow\ S$ is a ring homomorphism.
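A mechanical check of the homomorphism identities (my own illustration, taking $R=\mathbb{Z}$ and encoding a polynomial as its tuple of coefficients):

```python
def g(r):
    # inclusion Z -> Z[x]: r becomes the constant polynomial (r,)
    return (r,)

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + (0,) * (n - len(p))
    q = q + (0,) * (n - len(q))
    return tuple(a + b for a, b in zip(p, q))

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return tuple(out)

for r1 in range(-4, 5):
    for r2 in range(-4, 5):
        assert g(r1 + r2) == poly_add(g(r1), g(r2))
        assert g(r1 * r2) == poly_mul(g(r1), g(r2))
assert g(0) == (0,) and g(1) == (1,)   # additive and multiplicative units
print("g respects +, *, 0 and 1")
```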
TITLE: ordinary differential equations solution needed QUESTION [0 upvotes]: How to find a non-trivial solution $y$ of the BVP $$y''+xy=0, x\in [a,b],$$ $$y(a)=y(b)=0.$$ So far no method that I know works. Please help, someone. REPLY [1 votes]: The general solution of $[ \partial_{x}^{2} + k^{2} x ] y = 0$ is \begin{align} y(x) = \frac{ \sqrt{x} }{3} \left[ A J_{-1/3}\left( \frac{2k}{3} \ x^{3/2} \right) + B J_{1/3}\left( \frac{2k}{3} \ x^{3/2} \right) \right]. \end{align} The conditions $y(a) = y(b) = 0$ lead to \begin{align} A = - B \frac{J_{1/3}\left( \frac{2k}{3} \ a^{3/2} \right) }{ J_{-1/3}\left( \frac{2k}{3} \ a^{3/2} \right) }, \end{align} for the case of $y(a)=0$. Now the case of $y(b)=0$ yields \begin{align} 0 = B \left[ J_{-1/3}\left( \frac{2k}{3} \ a^{3/2} \right) J_{1/3}\left( \frac{2k}{3} \ b^{3/2} \right) - J_{1/3}\left( \frac{2k}{3} \ a^{3/2} \right) J_{-1/3}\left( \frac{2k}{3} \ b^{3/2} \right) \right]. \end{align} From this equation either $B=0$, which implies $A=0$ and $y(x)=0$, or $B \neq 0$, in which case $a$ and $b$ are connected by the equation \begin{align} J_{-1/3}\left( \frac{2k}{3} \ a^{3/2} \right) J_{1/3}\left( \frac{2k}{3} \ b^{3/2} \right) = J_{1/3}\left( \frac{2k}{3} \ a^{3/2} \right) J_{-1/3}\left( \frac{2k}{3} \ b^{3/2} \right). \end{align} This differential equation is well known and has a good description here http://mathworld.wolfram.com/AiryDifferentialEquation.html
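To see such non-trivial solutions concretely, one can integrate $y''+xy=0$ numerically (a sketch of mine, not part of the original answer): starting from $y(0)=0,\ y'(0)=1$, the solution oscillates for $x>0$, and any two of its zeros $a<b$ give a boundary value pair for which the BVP has a non-trivial solution.

```python
# RK4 integration of y'' = -x*y as the first-order system (y, y')' = (y', -x*y),
# with y(0) = 0, y'(0) = 1 (one particular nontrivial solution; k = 1)
def deriv(x, y, yp):
    return yp, -x * y

x, y, yp, h = 0.0, 0.0, 1.0, 1e-3
zeros = []
prev = y
for _ in range(15000):          # integrate up to x = 15
    k1 = deriv(x, y, yp)
    k2 = deriv(x + h/2, y + h/2 * k1[0], yp + h/2 * k1[1])
    k3 = deriv(x + h/2, y + h/2 * k2[0], yp + h/2 * k2[1])
    k4 = deriv(x + h, y + h * k3[0], yp + h * k3[1])
    y  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    yp += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    x += h
    if prev * y < 0:            # sign change: a zero between x-h and x
        zeros.append(x)
    prev = y
# consecutive entries of `zeros` are admissible (a, b) pairs
assert len(zeros) >= 2
print(len(zeros), "zeros located in (0, 15]")
```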
TITLE: How to derive end-correction value relationship for open-ended air columns? QUESTION [5 upvotes]: According to Young and Freedman's Physics textbook, in open-ended air columns like some woodwind instruments, the position of the displacement antinode extends a tiny amount beyond the end of the column. UCONN's website states that the end correction for a cylinder could be found by: $$d=0.6 r$$ where $d$ is the end correction, the distance which the antinode extends beyond the end of the pipe, and $r$ is the radius of the cylindrical pipe. How does one derive that relationship? REPLY [3 votes]: This is actually a fairly involved calculation that was done by Levine and Schwinger in 1948. If you are interested, the reference is H. Levine and J. Schwinger, "On the radiation of sound from an unflanged circular pipe", Physical Review 73:383-406. I'll not attempt to replicate that calculation here but will try to describe the main points. The main factor leading to the end correction is the boundary condition at the end of the pipe. The continuity of air pressure and velocity at the end of the pipe requires that the mechanical impedance of the wave equal the acoustic radiation impedance of the end of the pipe. So, the acoustic radiation impedance at the end of the pipe determines the end correction on the antinode. The radiation impedance is not zero, but is the radiation impedance of the pipe end. To calculate the radiation impedance of the pipe end, the pipe is treated as an unflanged circular duct radiating into open space. Also it's assumed that the wavelength of sound is much larger than the diameter of the pipe. The resulting radiation impedance has real and imaginary parts, and it's the imaginary part that leads to the end correction. Levine and Schwinger arrived at a value of $0.6133\,r$ for the end correction to the effective length of the pipe.
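As a quick numerical application (my own illustrative figures; the pipe dimensions and speed of sound are assumptions): for a pipe open at both ends, one correction of $0.6r$ is added per open end, so the effective acoustic length is $L+2\times 0.6r$ and the fundamental frequency comes out slightly below the naive $v/2L$ estimate.

```python
# Illustrative values (assumed, not from the answer)
v = 343.0   # speed of sound in air at ~20 C, m/s
L = 0.60    # physical pipe length, m
r = 0.015   # pipe radius, m

L_eff = L + 2 * 0.6 * r      # one 0.6 r end correction per open end
f_naive = v / (2 * L)        # open-open pipe, no end correction
f_corr = v / (2 * L_eff)     # with end corrections

assert f_corr < f_naive      # the correction lowers the pitch
print(round(f_naive, 1), "Hz ->", round(f_corr, 1), "Hz")
```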
\begin{document} \begin{abstract} Internal diffusion-limited aggregation is a growth model based on random walk in~$\Z^d$. We study how the shape of the aggregate depends on the law of the underlying walk, focusing on a family of walks in $\Z^2$ for which the limiting shape is a diamond. Certain of these walks---those with a directional bias toward the origin---have at most logarithmic fluctuations around the limiting shape. This contrasts with the simple random walk, where the limiting shape is a disk and the best known bound on the fluctuations, due to Lawler, is a power law. Our walks enjoy a uniform layering property which simplifies many of the proofs. \end{abstract} \maketitle \section{Introduction and main results} \label{sec:introduction} Internal diffusion-limited aggregation (internal DLA) is a growth model proposed by Diaconis and Fulton~\cite{DF91}. In the original model on~$\Z^d$, particles are released one by one from the origin~$o$ and perform simple symmetric discrete-time random walks. Starting from the set $A(1) = \{o\}$, the clusters $A(i+1)$ for $i\geq1$ are defined recursively by letting the $i$-th particle walk until it first visits a site not in $A(i)$, then adding this site to the cluster. Lawler, Bramson and Griffeath~\cite{LBG92} proved that in any dimension $d\geq2$, the asymptotic shape of the cluster~$A(i)$ is a $d$-dimensional ball. Lawler~\cite{La95} subsequently showed that the fluctuations around a ball of radius~$r$ are at most of order $r^{1/3}$ up to logarithmic corrections. Moore and Machta~\cite{MM00} found experimentally that the fluctuations appear to be at most logarithmic in~$r$, but there is still no rigorous bound to match their simulations. Other studies of internal DLA include~\cite{GQ00,BQR03,BB07,LP09b}. Here we investigate how the shape of an internal DLA cluster depends on the law of the underlying random walk. Perhaps surprisingly, small changes in the law can dramatically affect the limiting shape. 
Consider the walk in~$\Z^2$ with the same law as simple random walk except on the $x$ and $y$-axes, where steps toward the origin are reflected. For example, from a site $(x,0)$ on the positive $x$-axis, the walk steps to $(x+1,0)$ with probability~$1/2$ and to each of $(x,\pm 1)$ with probability~$1/4$; see Figure~\ref{fig:SimpleKernel}. It follows from Theorem~\ref{thm:diamondshape}, below, that when we rescale the resulting internal DLA cluster~$A(i)$ to have area~$2$, its asymptotic shape as $i\to \infty$ is the diamond \[ \D = \{(x,y)\in \R^2 : |x|+|y| \leq 1 \}. \] \begin{figure} \begin{center} \includegraphics{SimpleKernel} \end{center} \caption{Example of a uniformly layered walk. The sites enclosed by the shaded area form the diamond~$\D_3$. Only the transition probabilities from layer~$\L_3$ are shown. Open-headed arrows indicate transitions that take place with probability~$1/2$; all the other transitions have probability~$1/4$.} \label{fig:SimpleKernel} \end{figure} In fact, a rather large family of walks produce this diamond as their limiting shape. The key property shared by the walks we will consider is that their position at any time~$t$ is distributed as a mixture of uniform distributions on diamond layers. To define these walks, for $k\geq0$ let \[ \L_k := \{x \in \Z^2: \norm{x}=k\} \] where for $x=(x_1,x_2)$ we write $\norm{x} = |x_1|+|x_2|$. A \emph{uniformly layered walk} is a discrete-time Markov chain on state space $\Z^2$ whose transition probabilities $Q(x,y)$ satisfy \begin{itemize} \item[(U1)] $Q(x,y)=0$ if $\norm{y}>\norm{x}+1$; \item[(U2)] For all $k\geq 0$ and all $x\in \L_k$, there exists $y\in \L_{k+1}$ with $Q(x,y)>0$; \item[(U3)] For all $k,\ell \geq 0$ and all $y,z \in \L_\ell$, \[ \sum_{x\in \L_k} Q(x,y) = \sum_{x\in \L_k} Q(x,z). \] \end{itemize} In order to state our main results, let us now give a more precise description of the aggregation rules. 
Set $A(1)=\{o\}$, and let $Y^i(t)$ ($i=1,2,\dotsc$) be independent uniformly layered walks with the same law, started from the origin. For $i\geq1$, define the stopping times $\sigma^i$ and the growing cluster~$A(i)$ recursively by setting \[ \sigma^i = \min\{ t\geq0: Y^i(t)\not\in A(i) \} \] and \[ A(i+1) = A(i) \cup \{ Y^i(\sigma^i) \}. \] Now for any real number $r \geq 0$, let \[ \D_r := \left\{x \in \Z^2 : \norm{x}\leq r \right\}. \] We call~$\D_r$ the diamond of radius~$r$ in~$\Z^2$. Note that $\D_r = \D_{\floor{r}}$. For integer $n \geq 0$, we have $\D_n = \union_{k=0}^n \L_k$. Since~$\#\L_k = 4k$ for $k\geq 1$, the volume of~$\D_n$ is \[ v_n := \#\D_n = 2n(n+1)+1. \] Our first result says that the internal DLA cluster of $v_n$~sites based on any uniformly layered walk is close to a diamond of radius~$n$. \begin{theorem} \label{thm:diamondshape} For any uniformly layered walk in $\Z^2$, the internal DLA clusters $A(v_n)$ satisfy \[ \Pr\left( \D_{n-4\sqrt{n\log n}} \subset A(v_n) \subset \D_{n+20\sqrt{n\log n}} \text{ eventually} \right) = 1. \] \end{theorem} Here and throughout this paper \emph{eventually} means ``for all but finitely many~$n$.'' Likewise, we will write \emph{i.o.}\ or \emph{infinitely often} to abbreviate ``for infinitely many~$n$.'' Our proof of Theorem~\ref{thm:diamondshape} in Section~\ref{sec:general} follows the strategy of Lawler~\cite{La95}. The uniform layering property~(U3) takes the place of the Green's function estimates used in that paper, and substantially simplifies some of the arguments. \begin{figure} \begin{center} \includegraphics[width=.46\textwidth]{DDiamond-p0-245700} \includegraphics[width=.46\textwidth]{DDiamond-p1by2-245700} \end{center} \caption{Internal DLA clusters in $\Z^2$ based on the uniformly layered walk with transition kernel $p\,\Qin + q\,\Qout$. Left: $p=0$, walks are directed outward. Right: $p=1/2$, walks have no directional bias. 
Each cluster is composed of $v_{350} = 245\,701$ particles.} \label{fig:DDiamond} \end{figure} Within the family of uniformly layered walks, we study how the law of the walk affects the fluctuations of the internal DLA cluster around the limiting diamond shape. A natural walk to start with is the outward-directed layered walk $X(t)$ satisfying \[ \norm{X(t+1)} = \norm{X(t)} + 1 \] for all $t$. There is a unique such walk satisfying condition (U3) whose transition probabilities are symmetric with respect to reflection about the axes. It is defined in the first quadrant by \begin{align} \label{eq:Qoutbegin} \Qout\bigl( (x,y),(x,y+1) \bigr) &= \frac{y+1/2}{x+y+1} && \text{for $x,y = 1,2,\dotsc$,} \\ \Qout\bigl( (x,y),(x+1,y) \bigr) &= \frac{x+1/2}{x+y+1} && \text{for $x,y = 1,2,\dotsc$,} \end{align} and on the positive horizontal axis by \begin{align} \Qout\bigl( (x,0),(x,\pm1) \bigr) &= \frac{1/2}{x+1} && \text{for $x = 1,2,\dotsc$,} \\ \label{eq:Qoutend} \Qout\bigl( (x,0),(x+1,0) \bigr) &= \frac{x}{x+1} && \text{for $x = 1,2,\dotsc$.} \end{align} In the other quadrants~$\Qout$ is defined by reflection symmetry, and at the origin we set $\Qout(o,z) = 1/4$ for all $z\in\Z^2$ with $\norm{z}=1$. See Figure~\ref{fig:DirKernels}. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{DirKernels} \end{center} \caption{Left: transition probabilities of the outward directed kernel~$\Qout$. Right: transition probabilities for the inward directed kernel~$\Qin$. The origin $o$ is near the lower-left corner.} \label{fig:DirKernels} \end{figure} Likewise one can construct a symmetric Markov kernel defining an inward directed random walk which remains uniformly distributed on diamond layers. 
This kernel is defined in the first quadrant by \begin{align} \label{eq:Qinbegin} \Qin\bigl( (x,y),(x,y-1) \bigr) &= \frac{y-1/2}{x+y-1} && \text{for $x,y = 1,2,\dotsc$,} \\ \Qin\bigl( (x,y),(x-1,y) \bigr) &= \frac{x-1/2}{x+y-1} && \text{for $x,y = 1,2,\dotsc$,} \end{align} and on the positive horizontal axis by \begin{align} \label{eq:Qinend} \Qin\bigl( (x,0),(x-1,0) \bigr) &= 1 && \text{for $x = 1,2,\dotsc$.} \end{align} Again, the definition extends to the other quadrants by reflection symmetry, and is completed by making the origin an absorbing state: $\Qin(o,o) = 1$. See Figure~\ref{fig:DirKernels}. We now choose a parameter $p\in[0,1)$, let $q=1-p$ and define the kernel $Q_p := p\,\Qin + q\,\Qout$. The parameter~$p$ allows us to interpolate between a fully outward directed walk at $p=0$ and a fully inward directed walk at $p=1$. Theorem~\ref{thm:diamondshape} shows that the fluctuations around the limit shape are at most of order $\sqrt{n\log n}$ for the entire family of walks~$Q_p$. However, one may expect that the true size of the fluctuations depends on~$p$. When $p$ is large, particles tend to take a longer time to leave a diamond of given radius, affording them more opportunity to fill in unoccupied sites near the boundary of the cluster. Indeed, in simulations we find that the boundary becomes less ragged as $p$ increases (Figure~\ref{fig:DCloseup}). Our next result shows that when $p>1/2$, the boundary fluctuations are at most logarithmic in~$n$. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=.445\textwidth]{DCloseup-p0} & \includegraphics[width=.445\textwidth]{DCloseup-p1by4} \\ $p=0$ & $p=1/4$ \\ \includegraphics[width=.445\textwidth]{DCloseup-p1by2} & \includegraphics[width=.445\textwidth]{DCloseup-p3by4} \\ $p=1/2$ & $p=3/4$ \end{tabular} \end{center} \caption{Closeups of the boundary of the diamond. 
Fluctuations decrease as the directional bias of the walk tends from outward ($p=0$) to inward ($p=1$).} \label{fig:DCloseup} \end{figure} \begin{theorem} \label{thm:inward} For all $p\in(1/2,1)$, we have \[ \Pr\left( \D_{n-6\log_r n} \subset A(v_n) \subset \D_{n+6\log_r n} \text{ eventually} \right) = 1 \] where the base of the logarithm is $r=p/q$. \end{theorem} We believe that for all $p\in[0,1/2)$ the boundary fluctuations are of order~$\sqrt{n}$ up to logarithmic corrections, and that therefore an abrupt change in the order of the fluctuations takes place at $p=1/2$. At present, however, we are able to prove a lower bound on the order of fluctuations only in the case $p=0$: \begin{theorem} \label{thm:outward} For $p=0$ we have \[ \Pr\left( \D_{n - (1-\eps) \sqrt{2(n\log\log n)/3 }} \not\subset A(v_n) \text{ i.o.} \right) = 1 \qquad \forall\eps>0\phantom{.} \] and \[ \Pr\left( A(v_n) \not\subset \D_{n + (1-\eps) \sqrt{2(n\log\log n)/3}} \text{ i.o.} \right) = 1 \qquad \forall\eps>0. \] \end{theorem} Uniformly layered walks are closely related to the walks studied in~\cite{Du04, Ka07}. Indeed, the diamond shape of the layers does not play an important role in our arguments. A result similar to Theorem~\ref{thm:diamondshape} will hold for walks satisfying (U1)--(U3) for other types of layers $\L_k$, provided the cardinality $\#\L_k$ grows at most polynomially in~$k$. Figure~\ref{fig:Hexagon} shows an example of a walk on the triangular lattice satisfying (U1)--(U3) for hexagonal layers. The resulting internal DLA clusters have the regular hexagon as their asymptotic shape. Blach\`{e}re and Brofferio \cite{BB07} study internal DLA based on uniformly layered walks for which $\#\L_k$ grows exponentially, such as simple random walk on a regular tree. \begin{figure} \begin{center} \includegraphics{TriKernel} \quad \includegraphics[height=2in]{Hexagon} \end{center} \caption{Left: Example of a uniformly layered walk on the triangular lattice with hexagonal layers. 
Only transitions from a single (shaded) layer are shown. Open-headed arrows indicate transitions that take place with probability~$1/2$; all the other transitions have probability~$1/4$. Right: An internal DLA cluster of $100\, 000$ particles based on this uniformly layered walk.} \label{fig:Hexagon} \end{figure} Given how sensitive the shape of an internal DLA cluster is to the law of the underlying walk, it is surprising how robust the shape is to other types of changes in the model. For example, the particles may perform deterministic rotor-router walks instead of simple random walks. These walks depend on an initial choice of rotors at each site in $\Z^d$, but for any such choice, the limiting shape is a ball. Another variant is the divisible sandpile model, which replaces the discrete particles by a continuous amount of mass at each lattice site. Its limiting shape is also a ball. These models are discussed in~\cite{LP09a}. The remainder of the paper is organized as follows. Section~\ref{sec:preliminaries} explores the properties of uniformly layered walks, section~\ref{sec:abelian} discusses an ``abelian property'' of internal DLA which is essential for the proof of Theorem~\ref{thm:diamondshape}, and section~\ref{sec:largedeviations} collects the limit theorems we will use. Sections \ref{sec:general}, \ref{sec:inward} and~\ref{sec:outward} are devoted to the proofs of Theorems \ref{thm:diamondshape}, \ref{thm:inward} and~\ref{thm:outward}, respectively. \section{Uniformly layered walks} \label{sec:preliminaries} Let $\{X(t)\}_{t\geq 0}$ be a uniformly layered walk, that is, a walk on $\Z^2$ satisfying properties (U1)--(U3) of the introduction. Write $\nu_k$ for the uniform measure on the sites of layer~$\L_k$, and let $\Pr_k$ denote the law of the walk started from $X(0) \sim \nu_k$. Likewise, let $\Pr_x$ denote the law of the walk started from $X(0)=x$. 
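Before turning to the lemmas, the inward kernel $\Qin$ from \eqref{eq:Qinbegin}--\eqref{eq:Qinend} is simple enough to simulate directly. The following Python sketch (our own illustration, not part of the paper; the function name \texttt{qin\_step} is ours) implements one step of $\Qin$ using the first-quadrant formulas, extended to the other quadrants by reflection, with the origin absorbing. Since every step decreases the diamond norm $|x|+|y|$ by exactly one, a $\Qin$-walk started at $z$ is absorbed at the origin after exactly $\norm{z}$ steps.

```python
import random

def qin_step(z, rng):
    # One step of the inward kernel Qin: first-quadrant rules,
    # extended to the other quadrants by reflection symmetry.
    # The origin is absorbing. (Illustrative sketch, not from the paper.)
    x, y = z
    sx = -1 if x < 0 else 1            # record signs, reflect into the
    sy = -1 if y < 0 else 1            # closed first quadrant
    x, y = abs(x), abs(y)
    if x == 0 and y == 0:
        return (0, 0)                  # absorbing state o
    if y == 0:
        x -= 1                         # horizontal axis: step inward w.p. 1
    elif x == 0:
        y -= 1                         # vertical axis, by reflection symmetry
    else:
        if rng.random() < (y - 0.5) / (x + y - 1):
            y -= 1                     # down w.p. (y - 1/2)/(x + y - 1)
        else:
            x -= 1                     # left w.p. (x - 1/2)/(x + y - 1)
    return (sx * x, sy * y)

# Each Qin-step lowers |x| + |y| by one, so a walk from (3, -2) is
# absorbed at the origin after exactly 5 steps, whatever the randomness.
rng = random.Random(0)
z = (3, -2)
for _ in range(5):
    z = qin_step(z, rng)
```

Note that the two interior transition probabilities sum to one, matching \eqref{eq:Qinbegin}.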
Consider the stopping times \begin{align*} \tau_z &:= \min\{ t\geq0: X(t) = z \} &&\text{for }z\in\Z^2;\\ \tau_k &:= \min\{ t\geq0: X(t) \in \L_k \} &&\text{for }k\geq0. \end{align*} The key to the diamond shape, as we shall see, is the fact that our random walks have the uniform distribution on diamond layers at all fixed times, and at the particular stopping times~$\tau_k$. The next lemma shows that under~$\Pr_k$, conditionally on $\norm{X(s)}$ for $s \leq t$, the distribution of~$X(t)$ is uniform on~$\L_{\norm{X(t)}}$. We remark that the fact that this conditional distribution depends only on~$\norm{X(t)}$, and not on $\norm{X(s)}$ for $s<t$, implies that~$\norm{X(t)}$ is a Markov chain under~$\Pr_k$; see~\cite{RP81}. \begin{lemma} \label{lem:uniformity} Fix~$k\geq0$. For all $t\geq 0$ and all sequences of nonnegative integers $k=\ell(0),\dotsc,\ell(t)$ satisfying $\ell(s+1) \leq \ell(s) + 1$ for $s=0,\ldots,t-1$, we have for all $z \in \L_{\ell(t)}$ \[ \begin{split} \Pr_k \bigl( X(t) = z \bigm| \norm{X(s)} = \ell(s),\ 0\leq s\leq t \bigr) &= \frac{1}{\#\L_{\ell(t)}} \\ &= \Pr_k \bigl( X(t) = z \bigm| \norm{X(t)} = \ell(t) \bigr). \end{split} \] \end{lemma} \begin{proof} We prove the first equality by induction on~$t$. The base case $t=0$ is immediate. Write \[ \mathcal{E}_t = \bigl\{ \norm{X(s)} = \ell(s), \, 0\leq s\leq t \bigr\}. \] By the Markov property and the inductive hypothesis, we have for $t\geq 1$ and any $y \in \L_{\ell(t)}$ \[ \begin{split} \Pr_k( X(t) = y, \mathcal{E}_t ) &= \sum_{x\in\L_{\ell(t-1)}} \Pr_k( X(t) = y, X(t-1)=x, \mathcal{E}_{t-1} ) \\ &= \sum_{x\in\L_{\ell(t-1)}} Q(x,y) \cdot \Pr_k( X(t-1)=x, \mathcal{E}_{t-1} ) \\ &= \sum_{x\in\L_{\ell(t-1)}} Q(x,y) \cdot \frac{1}{\#\L_{\ell(t-1)}} \cdot \Pr_k( \mathcal{E}_{t-1} ). \end{split} \] By property (U3), the right side does not depend on the choice of $y \in \L_{\ell(t)}$. 
It follows that \[ \Pr_k( X(t) = z \mid \mathcal{E}_t ) =\frac{\Pr_k(X(t)=z, \mathcal{E}_t)}{\sum_{y\in\L_{\ell(t)}} \Pr_k(X(t)=y,\mathcal{E}_t)} = \frac{1}{\#\L_{\ell(t)}}. \] By induction this holds for all $t\geq0$ and all sequences $\ell(0), \dotsc, \ell(t)$. Therefore, for fixed $\ell(t)$ and $z\in\L_{\ell(t)}$ \[ \begin{split} \Pr_k\bigl( X(t) = z \bigr) &= \sum_{\ell(0),\dotsc,\ell(t-1)} \Pr_k\bigl( X(t) = z, \norm{X(s)} = \ell(s)\ \forall s\leq t \bigr) \\ &= \frac{1}{\#\L_{\ell(t)}} \, \sum_{\ell(0),\dotsc,\ell(t-1)} \Pr_k\bigl( \norm{X(s)} = \ell(s)\ \forall s\leq t \bigr) \\ &= \frac{1}{\#\L_{\ell(t)}} \, \Pr_k\bigl( \norm{X(t)} = \ell(t) \bigr) \end{split} \] which implies \[ \Pr_k\big( X(t) = z \,\big| \norm{X(t)} = \ell(t) \big) = \frac{1}{\#\L_{\ell(t)}}. \qedhere \] \end{proof} As a consequence of Lemma~\ref{lem:uniformity}, our random walks have the uniform distribution on layer~$\ell$ at the stopping time~$\tau_\ell$. \begin{lemma} \label{lem:uniformhitting} Fix integers $0 \leq k<\ell$. Then \[ \Pr_k( X(\tau_\ell) = z ) = \frac{1}{\#\L_\ell} = \frac{1}{4\ell} \qquad \text{for every $z\in\L_\ell$}. \] \end{lemma} \begin{proof} Note that property (U2) and Lemma~\ref{lem:uniformity} imply $\tau_\ell < \infty$ almost surely. For $t \geq 0$ we have \[ \{\tau_\ell = t\} = \union_{\ell_0,\dotsc,\ell_t} \bigl\{ \norm{X(s)} = \ell_s, 0\leq s\leq t \bigr\}, \] where the union is over all sequences of nonnegative integers $\ell_0, \ell_1, \dotsc, \ell_t$ with $\ell_0=k$ and $\ell_t=\ell$, such that $\ell_{s+1} \leq \ell_s + 1$ and $\ell_s\neq\ell$ for all $s=0,1,\ldots,t-1$. 
Writing $\mathcal{E}_{\ell_0, \dotsc, \ell_t}$ for the disjoint events in this union, it follows that \[ \begin{split} \Pr_k( X(\tau_\ell) = z ) &= \sum_{t\geq0} \Pr_k( X(t) = z, \,\tau_\ell = t ) \\ &= \sum_{t\geq0} \sum_{\ell_0,\dotsc,\ell_t} \Pr_k( X(t) = z, \,\mathcal{E}_{\ell_0,\dotsc,\ell_t} ) \\ &= \sum_{t\geq0} \sum_{\ell_0,\dotsc,\ell_t} \Pr_k( X(t) = z \mid \mathcal{E}_{\ell_0,\dotsc,\ell_t} ) \Pr_k( \mathcal{E}_{\ell_0,\dotsc,\ell_t} ). \end{split} \] Since $\sum_{t\geq0} \Pr_k(\tau_\ell=t)=1$, the result follows from Lemma~\ref{lem:uniformity}. \end{proof} The previous lemmas show that one can view our random walks as walks that move from layer to layer on the lattice, while remaining uniformly distributed on these layers. This idea can be formalized in terms of an intertwining relation between our two-dimensional walks and a one-dimensional walk that describes the transitions between layers, an idea explored in~\cite{Du04, Ka07} for closely related random walks in wedges. This approach is particularly useful for computing properties of the Green's function. Next we calculate some hitting probabilities for the walk with transition kernel $Q_p = p\,\Qin + q\,\Qout$ defined in the introduction; we will use these in the proof of Theorem~\ref{thm:inward}. We start with the probability of visiting the origin before leaving the diamond of radius~$n$. By the definition of~$Q_p$, this probability depends only on the layer on which the walk is started, not on the particular starting point on that layer. That is, if $0<\ell<n$, then $\Pr_x( \tau_o <\tau_n ) = \Pr_\ell( \tau_o <\tau_n )$ for all $x\in\L_\ell$, since at every site except the origin, the probability to move inward is~$p$ and the probability to move outward is~$q$. This leads to the following well-known gambler's ruin calculation (see, e.g.,~\cite[\S 7]{Bi95}). \begin{lemma} \label{lem:hitorigin} Let $0<\ell<n$ and $x\in\L_\ell$. 
If $p\neq q$, then \[ \Pr_x(\tau_o<\tau_n) = \Pr_\ell(\tau_o<\tau_n) = \frac{r^n-r^\ell}{r^n-1} \] where $r= p/q$. If $p=q=1/2$, then \[ \Pr_x(\tau_o<\tau_n) = \Pr_\ell(\tau_o<\tau_n) = \frac{n-\ell}{n}. \] \end{lemma} Next we bound the probability that the inward-biased walk ($p>1/2$) exits the diamond $\D_{n-1}$ before hitting a given site $z\in \D_{n-1}$. \begin{lemma} \label{lem:youcantavoidz} Write $r= p/q$. For $p\in(1/2,1)$, if $z\in\L_k$ for $0<k<n$, then \[ \Pr_o(\tau_z \geq \tau_n) < (4k-1)r^{k-n}. \] \end{lemma} \begin{proof} Let $T_0 = 0$ and for $i\geq 1$ consider the stopping times \begin{align*} U_i &= \min\{ t>T_{i-1} : X(t)\in\L_k \}; \\ T_i &= \min\{ t>U_i : X(t)=o \}. \end{align*} Let $M = \max\{ i: U_i<\tau_n \}$. For any integer $m\geq 1$ and any $x_1,\dotsc,x_m \in \L_k$, we have by the strong Markov property \begin{multline*} \Pr_o( M=m,\ X(U_1)=x_1, \dotsc, X(U_m)=x_m ) \\ = \prod_{i=1}^{m-1} \bigl[ \Pr_o( X(\tau_k)=x_i ) \, \Pr_{x_i}( \tau_o<\tau_n ) \bigr] \cdot \Pr_o( X(\tau_k)=x_m ) \, \Pr_{x_m}( \tau_n<\tau_o ). \end{multline*} By Lemma~\ref{lem:uniformhitting}, $\Pr_o( X(\tau_k)=x_i ) = 1/4k$ for each $x_i\in\L_k$. Moreover, by Lemma~\ref{lem:hitorigin} we have for any $x\in\L_k$ \[ \Pr_x(\tau_n<\tau_o) = \frac{r^k-1}{r^n-1} < r^{k-n}, \] where we have used the fact that $r=p/q>1$. Hence \[ \Pr_o( M=m,\ X(U_i)\neq z\ \forall i\leq m ) < r^{k-n} \left( 1-\frac{1}{4k} \right)^m. \] Since the event $\{\tau_z\geq\tau_n\}$ is contained in the event $\{ X(U_i)\neq z\ \forall i\leq M \}$, we conclude that \[ \begin{split} \Pr_o(\tau_z\geq\tau_n) &= \sum_{m\geq1} \Pr_o(M=m,\ \tau_z\geq \tau_n) \\ &\leq \sum_{m\geq1} \Pr_o( M=m,\ X(U_i)\neq z\ \forall i\leq m ) \\ &< \sum_{m\geq1} r^{k-n} \left( 1-\frac{1}{4k} \right)^m \\ &= (4k-1)r^{k-n}. 
\qedhere \end{split} \] \end{proof} \section{Abelian property} \label{sec:abelian} In this section we discuss an important property of internal DLA discovered by Diaconis and Fulton~\cite[Theorem~4.1]{DF91}, which gives some freedom in how the clusters $A(i)$ are constructed. We will use this property in the proof of Theorem~\ref{thm:diamondshape}. It was also used in~\cite{La95}. Instead of performing~$i$ random walks one at a time in sequence, start with~$i$ particles at the origin. At each time step, choose a site occupied by more than one particle, and let one particle take a single random walk step from that site. The abelian property says that regardless of these choices, the final set of~$i$ occupied sites has the same distribution as the cluster~$A(i)$. This property is not dependent on the law of the random walk, and in fact holds deterministically in a certain sense. Suppose that at each site $x \in \Z^2$ we place an infinite stack of cards, each labeled by a site in~$\Z^2$. A \emph{legal move} consists of choosing a site~$x$ which has at least two particles, burning the top card at~$x$, and then moving one particle from~$x$ to the site labeled by the card just burned. A finite sequence of legal moves is \emph{complete} if it results in a configuration in which each site has at most one particle. \begin{lemma}[Abelian property] \label{abelianproperty} For any initial configuration of particles on~$\Z^2$, if there is a complete finite sequence of legal moves, then any sequence of legal moves is finite, and any complete sequence of legal moves results in the same final configuration. \end{lemma} In our setting, the cards in the stack at~$x$ have i.i.d.\ labels with distribution $Q(x,\cdot)$. Starting with $i$~particles at the origin, one way to construct a complete sequence of legal moves is to let each particle in turn perform a random walk until reaching an unoccupied site. The resulting set of occupied sites is the internal DLA cluster~$A(i)$. 
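As an illustration, the following Python sketch (our own, not part of the paper; the names \texttt{card}, \texttt{stabilize} and \texttt{pick} are ours) realizes the stack-of-cards construction for simple random walk steps. The stacks are generated lazily but deterministically, so two stabilizations that choose sites in different orders consume exactly the same cards, and by the abelian property they must produce the same final occupied set.

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def card(site, i, seed=0):
    # The i-th card in the (conceptually infinite) stack at `site`:
    # a simple-random-walk neighbor, generated lazily but
    # deterministically, so that different runs see identical stacks.
    rng = random.Random(hash((seed, site, i)))
    dx, dy = rng.choice(STEPS)
    return (site[0] + dx, site[1] + dy)

def stabilize(n, pick):
    # Start with n particles at the origin and perform legal moves:
    # burn the top card at a multiply-occupied site chosen by `pick`,
    # and move one particle to the site named by the burned card.
    count = {(0, 0): n}            # particles currently at each site
    burned = {}                    # cards burned so far at each site
    while True:
        full = [s for s, c in count.items() if c > 1]
        if not full:               # complete: every site holds at most one
            return frozenset(count)
        s = pick(full)
        i = burned.get(s, 0)
        burned[s] = i + 1
        t = card(s, i)
        count[s] -= 1
        count[t] = count.get(t, 0) + 1

# Two very different site-selection policies yield the same cluster.
a = stabilize(50, pick=min)        # always topple the smallest site
b = stabilize(50, pick=max)        # always topple the largest site
```

Here `a == b` by the abelian property, and both sets contain exactly 50 sites, one per particle.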
By the abelian property, any other complete sequence of legal moves yields the same cluster~$A(i)$. For the proof of Theorem~\ref{thm:diamondshape}, it will be useful to define generalized internal DLA clusters for which not all walks start at the origin. Given a (possibly random) sequence $x_1,x_2,\ldots \in \Z^2$, we define the clusters $A(x_1,\ldots,x_i)$ recursively by setting $A(x_1)=\{x_1\}$, and \[ A(x_1,\ldots,x_{i+1}) = A(x_1,\ldots,x_i) \cup \{Y^i(\sigma^i)\}, \qquad i\geq 1, \] where the $Y^i$ are independent uniformly layered walks started from $Y^i(0)=x_{i+1}$, and \[ \sigma^i = \min \{t \geq 0 \,:\, Y^i(t) \notin A(x_1,\ldots,x_i) \}. \] When $x_1 = \cdots = x_i =o$ we recover the usual cluster~$A(i)$. The next lemma gives conditions under which two such generalized clusters can be coupled so that one is contained in the other. Let $x_1, \ldots, x_r$ and $y_1, \ldots, y_s$ be random points in $\Z^2$. For $z \in \Z^2$, let \begin{align*} N_z &= \#\{i \leq r \,:\, x_i=z\} \\ \tilde{N}_z &= \#\{j \leq s \,:\, y_j=z\} \end{align*} and consider the event \[ \mathcal{E} = \bigcap_{z \in \Z^2} \big\{ N_z \leq \tilde{N}_z \big\}. \] \begin{lemma}[Monotonicity] \label{monotonicity} There exists a random set~$A'$ with the same distribution as $A(y_1,\ldots,y_s)$, such that $\mathcal{E} \subset \big\{ A(x_1,\ldots,x_r) \subset A' \big\}$. \end{lemma} The proof follows directly from the abelian property: since the distribution of $A(y_1,\ldots,y_s)$ does not depend on the ordering of the points $y_1,\ldots,y_s$, we can take \[ A' = \begin{cases} A(y'_1, \ldots, y'_s) & \mbox{ on } \mathcal{E} \\ A(y_1,\ldots,y_s) & \mbox{ on } \mathcal{E}^c \end{cases} \] where $y'_1,\ldots,y'_s$ is a (random) permutation of $y_1,\ldots,y_s$ such that $y'_i=x_i$ for all $i\leq r$. \section{Sums of independent random variables} \label{sec:largedeviations} We collect here a few standard results about sums of independent random variables. 
First we consider large deviation bounds for sums of independent indicators, which we will use several times in the proofs of Theorems \ref{thm:diamondshape} and~\ref{thm:inward}. Let $S$ be a finite sum of independent indicator random variables. We start with simple Chernoff-type bounds based on the inequality \[ \Pr(S\geq b) \leq e^{-tb} \, \Ex\left( e^{tS} \right). \] There are various ways to give an upper bound on the right side when the summands of~$S$ are i.i.d.\ indicators; see for example~\cite[Appendix~A]{AS92}. These bounds extend to the case of independent but not necessarily identically distributed indicators by an application of Jensen's inequality, leading to the following bounds~\cite[Theorems 1 and~2]{Ja02}: \begin{lemma}[Chernoff bounds] \label{lem:Chernoff} Let $S$ be a finite sum of independent indicator random variables. For all $b\geq0$, \begin{align*} \Pr(S\geq\Ex S+b) &\leq \exp\left( -\frac{1}{2} \frac{b^2}{\Ex S+b/3} \right), \\ \Pr(S\leq\Ex S-b) &\leq \exp\left( -\frac{1}{2} \frac{b^2}{\Ex S} \right). \end{align*} \end{lemma} Next we consider limit theorems for sums of independent random variables, which we will use in the proof of Theorem~\ref{thm:outward}. For $\{ X_n \}_{n\geq 1}$ a sequence of independent random variables satisfying $\Ex|X_i|^3 < \infty$, we define \begin{align} B_n &= \sum_{1\leq i\leq n} \Var(X_i), \label{eq:Bn} \\ L_n &= B_n^{-3/2} \sum_{1\leq i\leq n} \Ex|X_i-\Ex X_i|^3. \label{eq:Ln} \end{align} It is well known that the partial sums \begin{equation} \label{eq:Sn} S_n = \sum_{1\leq i\leq n} X_i \end{equation} satisfy the Central Limit Theorem when $L_n\to0$; this is a special case of Lyapunov's condition. We are interested in the rate of convergence. Let \begin{equation} \label{eq:Deltan} \Delta_n = \sup_{x\in\R} \left| \Pr\left( S_n-\Ex S_n < x\sqrt{B_n} \right) - \Phi(x) \right|, \end{equation} where $\Phi$ is the standard normal distribution function. 
Esseen's inequality (see \cite[Introduction, equation~(6)]{Es45} and \cite[Chapter~I]{PS00}) gives a bound on~$\Delta_n$ in terms of~$L_n$. This bound can be used to verify the conditions given by Petrov~\cite[Theorem~1]{Pe66} (see also \cite[Chapter~I]{PS00}), under which the partial sums~$S_n$ satisfy the Law of the Iterated Logarithm. \begin{lemma}[Esseen's inequality] \label{lem:Esseen} Let $X_1,\dotsc,X_n$ be independent and such that $\Ex|X_i|^3 < \infty$, and define $B_n$, $L_n$, $S_n$ and~$\Delta_n$ by \eqref{eq:Bn}--\eqref{eq:Deltan}. Then \[ \Delta_n \leq 7.5 \cdot L_n. \] \end{lemma} \begin{lemma}[Petrov's theorem] \label{lem:Petrov} Let $\{X_i\}_{i\geq1}$ be a sequence of independent random variables with finite variances, and define $B_n$, $S_n$ and~$\Delta_n$ by \eqref{eq:Bn}, \eqref{eq:Sn} and~\eqref{eq:Deltan}. If, as $n\to\infty$, \[ B_n\to\infty, \quad \frac{B_{n+1}}{B_n}\to1 \quad\text{and}\quad \Delta_n = O\left( \frac{1}{(\log B_n)^{1+\delta}} \right) \text{ for some $\delta>0$}, \] then \[ \Pr\left( \limsup_{n\to\infty} \frac{S_n-\Ex S_n}{\sqrt{2B_n \log\log B_n}} = 1 \right) = 1. \] \end{lemma} \vspace{0mm} \section{Proof of Theorem~\ref{thm:diamondshape}} \label{sec:general} We control the growth of the cluster $A(i)$ by relating it to two modified growth processes, the \emph{stopped process} $S(i)$ and the \emph{extended process} $E(i)$. In the stopped process, particles stop walking when they hit layer~$\L_n$, even if they have not yet found an unoccupied site. More formally, let $S(1) = \{o\}$, and define the stopping times~$\sigma_S^i$ and clusters $S(i)$ for $i\geq1$ recursively by \begin{equation*} \sigma_S^i = \min\{ t\geq0: Y^i(t) \in \L_n \cup S(i)^c \} \end{equation*} and \begin{equation} \label{eq:stoppedcluster} S(i+1)= S(i) \cup \{ Y^i(\sigma_S^i) \}. \end{equation} Here $Y^i(t)$ for $i=1,2,\ldots$ are independent uniformly layered walks started from the origin in $\Z^2$, all having the same law. 
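To make the recursion \eqref{eq:stoppedcluster} concrete, here is a small Python sketch of the stopped process (our own illustration, with function names of our choosing). The walk kernel is abstracted as a \texttt{step} function; for simplicity we plug in a simple random walk, which is not one of the paper's uniformly layered walks, but the stopping rule $\sigma_S^i = \min\{ t\geq0: Y^i(t) \in \L_n \cup S(i)^c \}$ is implemented exactly as stated.

```python
import random

def norm1(z):
    # diamond norm: ||(x, y)|| = |x| + |y|, so L_k = {z : norm1(z) == k}
    return abs(z[0]) + abs(z[1])

def srw_step(z, rng):
    # stand-in walk step (simple random walk); the paper's walks
    # use the uniformly layered kernels Q_p instead
    dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return (z[0] + dx, z[1] + dy)

def stopped_cluster(n, rng, step=srw_step):
    # Stopped process: each of v_n - 1 particles walks from the origin
    # until it exits the current cluster or reaches layer L_n, and is
    # added where it stops.  Adding an already-occupied site on L_n is
    # a no-op, corresponding to S(i+1) = S(i) on that event.
    v_n = 2 * n * (n + 1) + 1      # number of sites in the diamond D_n
    S = {(0, 0)}
    for _ in range(v_n - 1):
        z = (0, 0)
        while z in S and norm1(z) < n:
            z = step(z, rng)
        S.add(z)
    return S

n = 6
S = stopped_cluster(n, random.Random(42))
```

Since each step changes the diamond norm by at most one, every particle stops inside $\D_n$, so the resulting cluster lies in $\D_n$ and has at most $v_n$ sites.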
Note that $S(i+1)=S(i)$ on the event that the walk~$Y^i$ hits layer $\L_n$ before exiting the cluster $S(i)$. By the abelian property, Lemma~\ref{abelianproperty}, we have $S(i) \subset A(i)$. Indeed, $A(i)$ can be obtained from $S(i)$ by letting all but one of the particles stopped at each site in $\L_n$ continue walking until reaching an unoccupied site. The extended process $E(i)$ is defined by starting with every site in the diamond~$\D_n$ occupied, and letting each of $i$~additional particles in turn walk from the origin until reaching an unoccupied site. More formally, let $E(0)=\D_n$, and for $i\geq 0$ define \[ \sigma_E^i = \min\{ t\geq0: Y^{v_n+i}(t) \notin E(i) \} \] and \[ E(i+1)= E(i) \cup \{ Y^{v_n+i}(\sigma_E^i) \}. \] An outline of the proof of Theorem~\ref{thm:diamondshape} runs as follows. We first show in Lemma~\ref{lem:Sbound} that the stopped cluster $S(v_n)$ contains a large diamond with high probability. Since the stopped cluster is contained in $A(v_n)$, the inner bound of Theorem~\ref{thm:diamondshape} follows. The proof of the outer bound proceeds in three steps. Lemma~\ref{lem:Nzbound} shows that the particles that stop in layer~$\L_n$ during the stopped process cannot be too bunched up at any single site $z \in \L_n$. We then use this to argue in Lemma~\ref{lem:AinsideE} that with high probability, $A(v_n)$ is contained in a suitable extended cluster $E(m)$. Finally, Lemma~\ref{lem:Ebound} shows that this extended cluster is contained in a slightly larger diamond. A notable feature of the argument (also present in \cite{La95}) is that the proof of the outer bound relies on the inner bound: Lemma~\ref{lem:Sbound} is used in the proof of Lemma~\ref{lem:Nzbound}. This dependence is responsible for the larger constant in the outer bound of Theorem~\ref{thm:diamondshape}. It would be interesting to know whether this asymmetry is merely an artifact of the proof, or whether the outer fluctuations are really larger than the inner fluctuations. 
We introduce an auxiliary collection of walks that will appear in the proofs. Let $\{Y^x(t): x\in \Z^2\}$ be independent walks with the same transition probabilities as~$Y^1$, which are independent of the~$Y^i$, and which start from~$Y^x(0)=x$. Now for $i=1,\ldots,v_n-1$ define \[ X^i(t) = \begin{cases} Y^i(t) & \text{for $0\leq t\leq\sigma_S^i$}, \\ Y^{Y^i(\sigma_S^i)}(t-\sigma_S^i) & \text{for $t>\sigma_S^i$}. \end{cases} \] Note that replacing the walks $Y^i$ with~$X^i$ in~\eqref{eq:stoppedcluster} has no effect on the clusters~$S(i)$. Finally, for $i\geq v_n$ we set $X^i(t)=Y^i(t)$ for all $t\geq 0$. We associate the following stopping times with the auxiliary walks~$Y^x(t)$: \begin{align*} \tau^x_z &:= \min\{ t\geq0: Y^x(t) = z \} &&\text{for $z\in\Z^2$}; \\ \tau^x_k &:= \min\{ t\geq0: Y^x(t) \in \L_k \} &&\text{for $k\geq 0$}. \end{align*} Likewise, let \begin{align*} \tau_z^i &:= \min\{ t\geq0: X^i(t) = z \} &&\text{for $z\in\Z^2$};\\ \tau_k^i &:= \min\{ t\geq0: X^i(t) \in \L_k \} &&\text{for $k\geq0$}. \end{align*} \begin{lemma} \label{lem:Sbound} There exists $n_0$ such that for all uniformly layered walks and all $n\geq n_0$ \begin{equation} \label{eq:innerdiamond} \Pr\left( \D_{n-4\sqrt{n\log n}} \not\subset S(v_n) \right) < 6n^{-2}. \end{equation} \end{lemma} \begin{remark} To avoid referring to too many unimportant constants, for the rest of this section we will take the phrase ``for sufficiently large $n$,'' and its variants, to mean that a single bound on~$n$ applies to all uniformly layered walks. \end{remark} \begin{proof} For $z\in \D_{n-1}$, write \[ \mathcal{E}_z(v_n) = \bigcap_{i=1}^{v_n-1} \left\{ \sigma_S^i < \tau^i_z \right\} \] for the event that the site~$z$ does not belong to the stopped cluster $S(v_n)$. We want to show that $\Pr\bigl( \mathcal{E}_z(v_n) \bigr)$ is very small when $z$ is taken too deep inside~$\D_n$. 
To this end, let $\ell = \norm{z}$, and consider the random variables \begin{align*} N_z &= \sum\nolimits_{0<i<v_n} \I\{ \tau^i_z \leq \sigma_S^i \}, \\ M_z &= \sum\nolimits_{0<i<v_n} \I\{ \tau^i_z = \tau^i_{\ell} \}, \\ L_z &= \sum\nolimits_{0<i<v_n} \I\{\sigma_S^i<\tau^i_z=\tau^i_\ell\}. \end{align*} Then $\mathcal{E}_z(v_n) = \{N_z=0\}$. Since $N_z\geq M_z-L_z$, we have for any real number~$a$ \begin{equation} \label{eq:geninfundbound} \begin{split} \Pr\bigl( \mathcal{E}_z(v_n) \bigr) = \Pr(N_z=0) &\leq \Pr(M_z\leq a \text{ or } L_z\geq a) \\ &\leq \Pr(M_z\leq a) + \Pr(L_z\geq a). \end{split} \end{equation} Our choice of~$a$ will be made below. Note that $M_z$ is a sum of i.i.d.\ indicator random variables, and by Lemma~\ref{lem:uniformhitting}, \begin{equation} \label{eq:EM_z} \Ex M_z = 2n(n+1) \, \Pr_o( X(\tau_{\ell}) = z ) = \frac{1}{2} \, \frac{n(n+1)}{\ell}. \end{equation} The summands of~$L_z$ are not independent. Following~\cite{LBG92}, however, we can dominate~$L_z$ by a sum of independent indicators as follows. By property~(U1), a uniformly layered walk cannot exit the diamond $\D_{\ell-1}$ without passing through layer~$\L_\ell$, so the event $\{ \sigma_S^i < \tau^i_z = \tau^i_\ell \}$ is contained in the event $\{ X^i(\sigma_S^i)\in \D_{\ell-1} \}$. Hence \[ \begin{split} L_z &= \sum_{0<i<v_n} \I\left\{ X^i(\sigma_S^i)\in \D_{\ell-1},\; \tau^{X^i(\sigma_S^i)}_z = \tau^{X^i(\sigma_S^i)}_\ell \right\} \\ &\leq \sum_{x\in \D_{\ell-1}-\{o\}} \I\{ \tau_z^x = \tau_\ell^x \} =: \tilde{L}_z \end{split} \] where we have used the fact that the locations $X^i(\sigma_S^i)$ inside $\D_{\ell-1}$ where particles attach to the cluster are distinct. Note that $\tilde{L}_z$ is a sum of independent indicator random variables. 
To compute its expectation, note that for every $0<k<\ell$, by Lemma~\ref{lem:uniformhitting} \[ \sum_{x\in \L_k} \Pr_x( X(\tau_{\ell}) = z ) = 4k \, \Pr_k( X(\tau_{\ell}) = z ) = \frac{k}{\ell}, \] hence \begin{equation} \label{eq:EL_z} \Ex\tilde{L}_z = \sum_{k=1}^{\ell-1} \frac{k}{\ell} = \frac{\ell-1}{2}. \end{equation} Now set $a = \tfrac{1}{2} (\Ex M_z+\Ex \tilde{L}_z)$, and let \[ b=\frac{\Ex M_z-\Ex \tilde{L}_z}{2} > \frac{n^2 - \ell^2}{4\ell} \] where the inequality follows from \eqref{eq:EM_z} and~\eqref{eq:EL_z}. Since $a = \Ex M_z-b = \Ex \tilde{L}_z+b$, we have by Lemma~\ref{lem:Chernoff} \[ \Pr(\tilde{L}_z\geq a) \leq \exp\left( -\frac{1}{2} \, \frac{b^2}{\Ex\tilde{L}_z+b/3} \right) \leq \exp\left( -\frac{1}{2} \, \frac{b^2}{\Ex M_z} \right) \] and \[ \begin{split} \Pr(M_z\leq a) &\leq \exp\left(-\frac{1}{2} \, \frac{b^2}{\Ex M_z} \right) \\ &< \exp \left( -\frac12 \frac{(n^2 - \ell^2)^2}{16 \ell^2} \frac{2\ell}{n(n+1)} \right) \\ &\leq \exp \left( -\frac{1}{16} \frac{(n^2-\ell^2)^2}{n^3} \right) \end{split} \] where in the last line we have used $\ell \leq n-1$. Since $L_z \leq \tilde{L}_z$, we obtain from~\eqref{eq:geninfundbound} \[ \begin{split} \Pr\bigl( \mathcal{E}_z(v_n) \bigr) &\leq \Pr(M_z\leq a) + \Pr(\tilde{L}_z\geq a) \\ &< 2\exp\left( -\frac{1}{16} \, \frac{(n^2-\ell^2)^2}{n^3} \right). \end{split} \] Writing $\ell = n-\rho$, with $\rho\geq \ceil{4\sqrt{n\log n}}$, we obtain for sufficiently large $n$ \[ \begin{split} \Pr\bigl( \mathcal{E}_z(v_n) \bigr) &< 2\exp\left( -\frac{1}{16} \frac{\rho^2(2n-\rho)^2}{n^3} \right) \\ &\leq 2\exp\left( -\frac{\rho^2}{4n} + \frac{\rho^3}{4n^2} \right) \\ &\leq 3n^{-4}. \end{split} \] We conclude that for $n$ sufficiently large \[ \Pr\left( \D_{n-4\sqrt{n\log n}} \not\subset S(v_n) \right) \leq \sum_{z\in\D_{n-4\sqrt{n\log n}}} \Pr\bigl( \mathcal{E}_z(v_n) \bigr) < 6n^{-2}. 
\qedhere \] \end{proof} Turning to the outer bound of Theorem~\ref{thm:diamondshape}, the first step is to bound the number \begin{equation} \label{eq:Nz} N_z := \sum\nolimits_{0<i<v_n} \I\{ \sigma_S^i = \tau_z^i \} \end{equation} of particles stopping at each site $z \in \L_n$ in the course of the stopped process. To get a rough idea of the order of~$N_z$, note that according to Lemma~\ref{lem:Sbound}, with high probability, at least $v_{n-4\sqrt{n\log n}}$ of the $v_n$~particles find an unoccupied site before hitting layer~$\L_n$. The number of particles remaining is of order $n^{3/2} \sqrt{\log n}$. If these remaining particles were spread evenly over~$\L_n$, then there would be order $\sqrt{n \log n}$ particles at each site $z \in \L_n$. The following lemma shows that with high probability, all of the $N_z$ are at most of this order. \begin{lemma} \label{lem:Nzbound} If $n$ is sufficiently large, then \[ \Pr\biggl( \union_{z\in\L_n} \left\{ N_z > 7\sqrt{n\log n} \right\} \biggr) < 13n^{-5/4}. \] \end{lemma} \begin{proof} For $z\in\L_n$, define \begin{align*} M_z &= \sum\nolimits_{0<i<v_n} \I\{\tau^i_z = \tau^i_n \}, \\ L_z &= \sum\nolimits_{0<i<v_n} \I\{ \sigma_S^i<\tau^i_z=\tau^i_n \}, \end{align*} so that $N_z = M_z-L_z$. Write $\eta = \sqrt{n\log n}$ and $\rho = \ceil{4\eta}$, and let \[ \tilde{L}_z = \sum_{y\in\D_{n-\rho}-\{o\}} \I\{ \tau_z^y=\tau_n^y \}. \] Note that $\tilde{L}_z\leq L_z$ on the event $\{ \D_{n-\rho} \subset S(v_n) \}$. Therefore, \begin{multline} \label{eq:Nzbound} \Pr\biggl( \union_{z\in\L_n} \{ N_z>7\eta \} \biggr) = \Pr\biggl( \union_{z\in\L_n} \{ M_z-L_z>7\eta \} \biggr) \\ \leq \sum_{z\in\L_n} \Pr( M_z-\tilde{L}_z > 7\eta ) + \Pr\left( \D_{n-4\sqrt{n\log n}}\not\subset S(v_n) \right). \end{multline} To obtain a bound on $\Pr( M_z-\tilde{L}_z > 7\eta )$, note that \[ \Ex M_z = 2n(n+1)\,\Pr_o( X(\tau_n)=z ) = \frac{n+1}{2}. 
\] Moreover, by Lemma~\ref{lem:uniformhitting} \[ \sum_{y \in \L_k} \Pr(\tau_z^y=\tau_n^y) = 4k \, \Pr_k(\tau_z=\tau_n) = \frac{k}{n}, \] hence \[ \Ex\tilde{L}_z = \sum_{k=1}^{n-\rho} \frac{k}{n} = \frac{n+1}{2} - \rho + \frac{\rho(\rho-1)}{2n}. \] In particular, $\Ex M_z - \Ex\tilde{L}_z < \rho-1 \leq 4\eta$ for large enough~$n$, so that \begin{equation} \label{eq:MzLz} \begin{split} \Pr( M_z-\tilde{L}_z > 7\eta ) &\leq \Pr( M_z-\tilde{L}_z > \Ex M_z - \Ex\tilde{L}_z + 3\eta ) \\ &\leq \Pr\bigl( M_z > \Ex M_z + \tfrac32\eta \quad\text{or}\quad \tilde{L}_z < \Ex\tilde{L}_z - \tfrac32\eta \bigr) \\ &\leq \Pr\bigl( M_z > \Ex M_z + \tfrac32\eta \bigr) + \Pr\bigl( \tilde{L}_z < \Ex\tilde{L}_z - \tfrac32\eta \bigr). \end{split} \end{equation} By Lemma~\ref{lem:Chernoff}, \[ \Pr\bigl( \tilde{L}_z < \Ex\tilde{L}_z - \tfrac32\eta \bigr) \leq \exp\left( -\frac12 \frac{(3\eta/2)^2}{\Ex\tilde{L}_z} \right) < \exp\left( -\frac{9}{8}\frac{\eta^2}{n/2} \right) = n^{-9/4}. \] Likewise, for sufficiently large~$n$ \[ \begin{split} \Pr\bigl( M_z > \Ex M_z + \tfrac32\eta \bigr) &\leq \exp\left( -\frac12\frac{(3\eta/2)^2}{\Ex M_z+\eta/2} \right) \\ &= \exp \left( -\frac94 \frac{n\log n}{n+1+\sqrt{n\log n}} \right) \\ &< 2n^{-9/4}. \end{split} \] Combining \eqref{eq:Nzbound}, \eqref{eq:MzLz} and Lemma~\ref{lem:Sbound} yields for sufficiently large~$n$ \[ \Pr\biggl( \union_{z\in\L_n} \{ N_z>7\eta \} \biggr) < 3n^{-9/4}\#\L_n + 6 n^{-2} < 13n^{-5/4}. \qedhere \] \end{proof} Given random sets $A,B \subset \Z^2$, we write $A \eqindist B$ to mean that $A$ and $B$ have the same distribution. \begin{lemma} \label{lem:AinsideE} Let $m = \ceil{29n\sqrt{n\log n}}$. For all sufficiently large~$n$, there exist random sets $A' \eqindist A(v_n)$ and $E' \eqindist E(m)$ such that \[ \Pr\bigl( A' \not\subset E' \bigr) < 14n^{-5/4}. 
\] \end{lemma} \begin{proof} By the abelian property, Lemma~\ref{abelianproperty}, we can obtain $A(v_n)$ from the stopped cluster~$S(v_n)$ by starting $N_z$ particles at each $z \in \L_n$, and letting all but one of them walk until finding an unoccupied site. More formally, let $x_1 = o$ and $x_{i+1} = Y^i(\sigma_S^i)$ for $0<i<v_n$. Then \[ \# \{i \leq v_n \,:\, x_i=z\} = \begin{cases} N_z, & z \in \L_n \\ 1, & z \in S(v_n)-\L_n \\ 0, & \mbox{else} \end{cases} \] and \[ A(v_n) \eqindist A(x_1,\ldots,x_{v_n}). \] To build up the extended cluster~$E(m)$ in a similar fashion, let $s = v_n+m$, and let $y_1,\ldots,y_s \in \Z^2$ be such that $\{y_1,\ldots,y_{v_n}\} = \D_n$, and \[ y_{v_n+i} = Y^{v_n+i-1}(\tau_n^{v_n+i-1}), \qquad i=1,2,\ldots,m. \] By Lemma~\ref{abelianproperty}, we have \[ E(m) \eqindist A(y_1,\ldots,y_s). \] For each $z \in \L_n$, let \[ \tilde{N}_z = \sum\nolimits_{0\leq i<m} \I \left\{ \tau_z^{v_n+i} = \tau_n^{v_n+i} \right\} \] be the number of extended particles that first hit layer~$\L_n$ at~$z$. Then \[ \#\{i \leq s \,:\, y_i = z\} = \begin{cases} \tilde{N}_z, & z \in \L_n \\ 1, & z \in \D_{n-1} \\ 0, & \mbox{else}. \end{cases} \] Now let $A' = A(x_1,\ldots,x_{v_n})$ and consider the event \[ \mathcal{E} = \inter_{z\in\L_n} \bigl\{ N_z \leq \tilde{N}_z \bigr\}. \] By Lemma~\ref{monotonicity}, on the event~$\mathcal{E}$ there exists a random set $E' \eqindist A(y_1,\ldots,y_s)$ such that $A'\subset E'$. Therefore, to finish the proof it suffices to show that $\Pr( \mathcal{E}^c )< 14n^{-5/4}$. Note that $\tilde{N}_z$ is a sum of independent indicators, and \[ \Ex \tilde{N}_z = \frac{m}{4n} \geq \frac{29}{4} \eta \] where $\eta := \sqrt{n \log n}$. 
Setting $b=\eta/4$ in Lemma~\ref{lem:Chernoff} yields for sufficiently large~$n$ \[ \Pr\left(\tilde{N}_z \leq 7\eta \right) \leq \exp\left( -\frac12 \frac{b^2}{\Ex\tilde{N}_z} \right) = \exp\left( -\frac{1}{232} \sqrt{n\log n} \right) < \frac14 n^{-9/4}, \] hence by Lemma~\ref{lem:Nzbound} \[ \begin{split} \Pr(\mathcal{E}^c) &\leq \Pr\biggl( \bigcup_{z \in \L_n} \bigl\{ N_z > 7\eta \quad\text{or}\quad \tilde{N}_z \leq 7\eta \bigr\} \biggr) \\ & \leq \Pr\biggl( \union_{z\in\L_n} \bigl\{ N_z > 7\eta \bigr\} \biggr) + \sum_{z\in\L_n} \Pr\bigl( \tilde{N}_z \leq 7\eta \bigr) \\ &< 14n^{-5/4}. \qedhere \end{split} \] \end{proof} To finish the argument it remains to show that with high probability, the extended cluster $E(m)$ is contained in a slightly larger diamond. Here we follow the strategy used in the proof of the outer bound in~\cite{LBG92}. \begin{lemma} \label{lem:Ebound} Let $m = \ceil{29n\sqrt{n \log n}}$. For all sufficiently large~$n$, \[ \Pr\left( E(m) \not\subset \D_{n+20\sqrt{n\log n}} \right) < n^{-2}. \] \end{lemma} \begin{proof} For $j,k \geq 1$, let \[ Z_k(j) = \#\bigl( E(j) \cap \L_{n+k} \bigr) \] and let $\mu_k(j) = \Ex Z_k(j)$. Then $\mu_k(j)$ is the expected number of particles that have attached to the cluster in layer $\L_{n+k}$ after the first $j$ extended particles have aggregated. Note that \[ \mu_k(i+1)-\mu_k(i) = \Pr\bigl( Y^{v_n+i}(\sigma_E^{i}) \in \L_{n+k} \bigr). \] By property~(U1), in order for the $(i+1)^{\rm th}$ extended particle to attach to the cluster in layer~$\L_{n+k}$, it must be inside the cluster~$E(i)$ when it first reaches layer~$\L_{n+k-1}$. Therefore, by Lemma~\ref{lem:uniformhitting}, \[ \begin{split} \mu_k(i+1)-\mu_k(i) &\leq \Pr\bigl( Y^{v_n+i}(\tau^{v_n+i}_{n+k-1}) \in E(i) \bigr) \\ &= \sum_{y\in\L_{n+k-1}} \Pr\bigl( Y^{v_n+i}(\tau_{n+k-1}^{v_n+i}) = y \bigr) \cdot \Pr\bigl( y\in E(i) \bigr) \\ &= \frac{1}{4(n+k-1)} \cdot \mu_{k-1}(i) \leq \frac{\mu_{k-1}(i)}{4n}. 
\end{split} \] Since $\mu_k(0)=0$, summing over $i$ yields \[ \mu_k(j) \leq \frac{1}{4n} \sum_{i=1}^{j-1} \mu_{k-1}(i). \] Since $\mu_1(j)\leq j$ and $\sum_{i=1}^{j-1} i^{k-1} \leq j^k/k$, we obtain by induction on~$k$ \[ \mu_k(j) \leq 4n \left( \frac{j}{4n} \right)^k \frac{1}{k!} \leq 4n \left( \frac{je}{4nk} \right)^k, \] where in the last inequality we have used the fact that $k!\geq k^ke^{-k}$. Since $29e/80<1$, setting $j = m$ and $k = \floor{20\sqrt{n\log n}}$ we obtain \[ \mu_{k+1}(m) \leq 4n\left( \frac{\ceil{29 n\sqrt{n\log n}}e}{4n \cdot 20\sqrt{n\log n}} \right)^{k+1} < n^{-2} \] for sufficiently large~$n$. To complete the proof, note that \[ \Pr( E(m) \not\subset \D_{n+k} ) = \Pr( Z_{k+1}(m) \geq1 ) \leq \mu_{k+1}(m). \qedhere \] \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:diamondshape}] Write $\eta = \sqrt{n \log n}$. Since $S(v_n) \subset A(v_n)$, we have by Lemma~\ref{lem:Sbound} \[ \sum_{n\geq1} \Pr\bigl( \D_{n-4\eta} \not\subset A(v_n) \bigr) \leq \sum_{n\geq1} \Pr\bigl( \D_{n-4\eta} \not\subset S(v_n) \bigr) < \infty. \] Likewise, by Lemmas~\ref{lem:AinsideE} and~\ref{lem:Ebound} \[ \begin{split} \sum_{n\geq1} \Pr\bigl( A(v_n) \not\subset \D_{n+20\eta} \bigr) &\leq \sum_{n \geq 1} \Pr\bigl( A(v_n) \not\subset E(m) \bigr) + \sum_{n \geq 1} \Pr\bigl( E(m) \not\subset \D_{n+20\eta} \bigr) \\ &< \infty. \end{split} \] By Borel--Cantelli we obtain Theorem~\ref{thm:diamondshape}. \end{proof} \section{The inward directed case} \label{sec:inward} \begin{proof}[Proof of Theorem~\ref{thm:inward}] Write $\ell = n-\ceil{6\log_r n}$, and denote by \[ \mathcal{A}_n = \inter_{0<i<v_n} \inter_{z\in\D_\ell} \{ \tau^i_z < \tau^i_n \} \] the event that each of the first $v_n-1$ walks visits every site $z\in \D_\ell$ before hitting layer~$\L_n$. Since $\#\D_{n-1} < v_n$, at least one of the first $v_n-1$ particles must exit $\D_{n-1}$ before aggregating to the cluster: $\sigma^i \geq \tau_n^i$ for some $i<v_n$. 
On the event $\mathcal{A}_n$, this particle visits every site $z\in \D_\ell$ before aggregating to the cluster, so $\D_\ell \subset A(i) \subset A(v_n)$. Hence \[ \Pr\bigl( \D_\ell\not\subset A(v_n) \bigr) \leq \Pr( \mathcal{A}_n^c ) \leq \sum_{0<i<v_n} \sum_{z\in\D_\ell} \Pr(\tau_z^i\geq\tau_n^i). \] By Lemma~\ref{lem:youcantavoidz}, \[ \begin{split} \Pr\bigl( \D_\ell\not\subset A(v_n) \bigr) &< 2n(n+1) \sum_{k=1}^{\ell} 4k (4k-1) r^{k-n} \\ &\leq 32 n^3(n+1) \frac{r^{\ell+1}-r}{r^n(r-1)} \\ &\leq \frac{32r}{r-1} n^3(n+1) \cdot n^{-6}, \end{split} \] and by Borel-Cantelli we conclude that $\Pr( \D_\ell \subset A(v_n) \text{ eventually} ) = 1$. Likewise, writing $m = n+\floor{6\log_r n}$, let \[ \mathcal{B}_n = \inter_{0<i<v_n} \inter_{z\in\D_n} \{ \tau^i_z<\tau^i_m \} \] be the event that each of the first $v_n-1$ walks visits every site $z\in\D_n$ before hitting layer~$\L_m$. Since the occupied cluster $A(v_n-1)$ has cardinality $v_n-1 = \#\D_n-1$, there is at least one site $z\in\D_n$ belonging to $A(v_n-1)^c$. On the event $\mathcal{B}_n$, each of the first $v_n-1$ particles visits $z$ before hitting layer~$\L_m$, so \[ \sigma^i \leq \tau_z^i < \tau_m^i, \qquad i=1,\dotsc,v_n-1. \] Therefore, \[ \Pr\bigl( A(v_n)\not\subset \D_m \bigr) \leq \Pr( \mathcal{B}_n^c ) \leq \sum_{0<i<v_n} \sum_{z\in\D_n} \Pr(\tau_z^i\geq\tau_m^i). \] By Lemma~\ref{lem:youcantavoidz}, \[ \begin{split} \Pr\bigl( A(v_n)\not\subset \D_m \bigr) &< 2n(n+1) \sum_{k=1}^{n} 4k (4k-1) r^{k-m} \\ &\leq 32 n^3(n+1) \frac{r^{n+1}-r}{r^m(r-1)} \\ &\leq \frac{32r^2}{r-1} n^3(n+1) \cdot n^{-6}, \end{split} \] and by Borel-Cantelli we conclude that $\Pr( A(v_n) \subset \D_m \text{ eventually} ) = 1$. \end{proof} \section{The outward directed case} \label{sec:outward} To prove Theorem~\ref{thm:outward} we make use of a specific property of the uniformly layered walks for $p=0$. Recall that these walks have transition kernel~$\Qout$. 
By \eqref{eq:Qoutbegin}--\eqref{eq:Qoutend}, such a walk can only reach the site $(m,0)$ for $m\geq1$ by visiting the sites $(0,0),(1,0),\dotsc,(m,0)$ in turn. We can use this to find the exact growth rate of the clusters $A(i)$ along the $x$-axis. Suppose that we count time according to the number of particles we have added to the growing cluster, and for $m\geq1$ set \[ T_m := \min\{ n\geq0: (m,0)\in A(n+1)\}. \] Then we can interpret $T_m$ as the time it takes before the site $(m,0)$ becomes occupied. The following lemma gives the exact order of the fluctuations in~$T_m$ as $m\to\infty$. \begin{lemma} \label{lem:outwardaxis} For $p=0$ we have that \[ \Pr\left( \limsup_{m\to\infty} \frac{T_m-2m(m+1)}{\sqrt{32(m^3 \log\log m)/3}} = 1 \right) = 1 \] and \[ \Pr\left( \liminf_{m\to\infty} \frac{T_m-2m(m+1)}{\sqrt{32(m^3 \log\log m)/3}} = -1 \right) = 1. \] \end{lemma} \begin{proof} Set $X_1=T_1$ and $X_m = T_m-T_{m-1}$ for $m>1$. Consider the aggregate at time~$T_{m-1}$ when $(m-1,0)$ gets occupied. Since a walk must follow the $x$-axis to reach the site $(m-1,0)$, we know that at time~$T_{m-1}$ all sites $\{ (i,0) : i=0,1,\dotsc,m-1 \}$ are occupied and all sites $\{ (i,0) : i\geq m\}$ are vacant. Now consider the additional time $X_m = T_m-T_{m-1}$ taken before the site $(m,0)$ becomes occupied. Each walk visits $(m,0)$ if and only if it passes through the sites $(1,0), (2,0), \dotsc, (m,0)$ during the first~$m$ steps, which happens with probability $1/4m$. Thus $X_m$ has the geometric distribution with parameter~$1/4m$. Moreover, the $X_i$ are independent. Hence $T_m$ is a sum of independent geometric random variables~$X_i$. Since $\Ex X_i = 4i$, $\Var X_i =16i^2-4i$ and $\Ex X_i^3 = 384i^3 - 96i^2 + 4i$, \[ B_m = \sum_{1\leq i\leq m} \Var X_i = \frac{16}{3} m^3 + O(m^2) \] and \[ \sum_{1\leq i\leq m} \Ex\bigl( |X_i-\Ex X_i|^3 \bigr) \leq \sum_{1\leq i\leq m} \bigl(\Ex X_i^3 + (\Ex X_i)^3 \bigr) = O(m^4). 
\] By Lemma~\ref{lem:Esseen}, $\Delta_m = O(m^{-1/2})$, which shows that Petrov's conditions of Lemma~\ref{lem:Petrov} are satisfied. Therefore, \[ \Pr\left( \limsup_{m\to\infty} \frac{T_m-\Ex T_m}{\sqrt{2B_m\log\log B_m}} = 1 \right) = 1. \] Since $\Ex T_m = \sum_{i=1}^m 4i = 2m(m+1)$ and $B_m = 16m^3/3+O(m^2)$, this proves the first statement in Lemma~\ref{lem:outwardaxis}. The second statement is obtained by applying Lemma~\ref{lem:Petrov} to $-T_m = \sum_{i=1}^m (-X_i)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:outward}] Fix $\eps>0$, set $\eta := \sqrt{2(n\log\log n)/3}$ and let $\rho = \ceil{(1-\eps)\eta}$. If we write $m = n-\rho$, then \begin{align*} 2m(m+1) &= 2n(n+1) - 4n\rho + o(n), \\ \sqrt{32(m^3 \log\log m)/3} &= 4n \eta + o(n^{5/4}\log\log n). \end{align*} Hence, setting $m = n-\rho$ in Lemma~\ref{lem:outwardaxis} gives \[ \Pr\left( \limsup_{n\to\infty} \frac{T_{n-\rho}-2n(n+1)+4n\rho}{4n\eta} = 1 \right) = 1. \] Since $\{(n-\rho,0) \not\in A(v_n)\} = \{T_{n-\rho} > v_n-1\}$ and $v_n - 1 = 2n(n+1)$, this implies \[ \Pr\bigl( (n-\rho,0) \not\in A(v_n) \text{ i.o.} \bigr) = 1. \] Likewise, setting $m = n+\rho$ in Lemma~\ref{lem:outwardaxis} gives \[ \Pr\left( \liminf_{n\to\infty} \frac{T_{n+\rho}-2n(n+1)-4n\rho}{4n\eta} = -1 \right) = 1, \] hence \[ \Pr\bigl( (n+\rho,0) \in A(v_n) \text{ i.o.} \bigr) = 1. 
\qedhere \] \end{proof} \section{Concluding Remarks} \begin{figure} \begin{center} \begin{tabular}{ccc} && \includegraphics[height=.1\textheight]{DDiamond-p0-388080-source33.png} \\ && (3,3) \medskip \\ & \includegraphics[height=.1\textheight]{DDiamond-p0-405900-source22.png} & \includegraphics[height=.1\textheight]{DDiamond-p0-405900-source32.png} \\ & (2,2) & (3,2) \medskip \\ \includegraphics[height=.1\textheight]{DDiamond-p0-405900-source11.png} & \includegraphics[height=.1\textheight]{DDiamond-p0-405900-source21.png} & \includegraphics[height=.1\textheight]{DDiamond-p0-405900-source31.png} \\ (1,1) & (2,1) & (3,1) \end{tabular} \end{center} \caption{Internal DLA clusters in the first quadrant of $\Z^2$ based on the outward-directed layered walk $Q_{out}$ started from a point other than the origin. For example, the cluster on the lower left is formed from $405\,900$ particles started at the point $(1,1)$.} \label{Fig:off-center} \end{figure} Lawler, Bramson and Griffeath \cite{LBG92} discovered a key property of the Euclidean ball that characterizes it as the limiting shape of internal DLA clusters based on simple random walk in~$\Z^d$: for simple random walk killed on exiting the ball, any point~$z$ sufficiently far from the boundary of the ball is visited more often in expectation by a walk started at the origin than by a walk started at a uniform point in the ball. Uniformly layered walks have an analogous property with respect to the diamond: the Green's function $g(y,\cdot)$ for a walk started at~$y$ and killed on exiting $\D_n$ satisfies \[ g(o,z) \geq \frac{1}{\# \D_n} \sum_{y \in \D_n} g(y,z) \] for all $z\in \D_n$. Indeed, both the walk started at $o$ and the walk started at a uniform point in~$\D_n$ are uniformly distributed on layer $\L_{\norm{z}}$ at the time~$\tau_{\norm{z}}$ when they first hit this layer, so the expected number of visits to $z$ after time $\tau_{\norm{z}}$ is the same for both walks. 
The inequality comes from the fact that a walk started at the origin must hit layer $\L_{\norm{z}}$ before exiting~$\D_n$. We conclude with two questions. The first concerns uniformly layered walks started from a point other than the origin. Figure~\ref{Fig:off-center} shows internal DLA clusters for six different starting points in the first quadrant of~$\Z^2$. These clusters are all contained in the first quadrant. Our simulations indicate that a limiting shape exists for each starting point, and that no two starting points have the same limiting shape; but we do not know of any explicit characterization of the shapes arising in this way. The second question is, do there exist walks with bounded increments having uniform harmonic measure on $L^1$ spheres in~$\Z^d$ for $d\geq 3$? \section*{Acknowledgement} We thank Ronald Meester for fruitful discussions.
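As an aside, the exact mean and variance of $T_m$ used in the proof of Lemma~\ref{lem:outwardaxis} are easy to check by direct simulation. The sketch below is not part of the paper; it assumes only what the proof establishes, namely that the increments $X_i$ are independent geometric random variables on $\{1,2,\dotsc\}$ with success probability $1/(4i)$, and it compares the empirical mean of $T_m$ with $2m(m+1)$ (all other choices, such as $m=20$ and the seed, are arbitrary):

```python
import random

def sample_T(m, rng):
    # T_m = X_1 + ... + X_m, where X_i is geometric on {1, 2, ...}
    # with success probability 1/(4i), as derived in the proof of
    # Lemma lem:outwardaxis.
    total = 0
    for i in range(1, m + 1):
        p = 1.0 / (4 * i)
        x = 1
        while rng.random() >= p:
            x += 1
        total += x
    return total

rng = random.Random(0)
m, trials = 20, 2000
samples = [sample_T(m, rng) for _ in range(trials)]
mean_T = sum(samples) / trials

exact_mean = 2 * m * (m + 1)                                   # E T_m = sum of 4i
exact_var = sum(16 * i * i - 4 * i for i in range(1, m + 1))   # B_m = sum of Var X_i

print(mean_T, exact_mean, exact_var)
```

For $m=20$ the empirical mean settles near $2m(m+1)=840$, and the fluctuation scale $\sqrt{B_m}\approx 212$ is consistent with the $\frac{16}{3}m^3$ asymptotics used in the lemma.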
\begin{document} \maketitle \begin{abstract} \noi {\it It is shown that irreducible two-state continuous-time Markov chains interacting on a network in a bilinear fashion have a unique stable steady state. The proof is elementary and uses the relative entropy function.} \end{abstract} \bigskip \section{{\bf Description of the main result }} \medskip This is an elementary paper about two-state Markov chains, one attached to each node (vertex) of a finite undirected network (a simple weighted undirected graph). In this paper, we will deal with Markov chains in continuous time (sometimes called Markov jump processes). The interaction between two chains that are linked by an edge of the network is a simple bilinear function of the two opposite states (a coupling constant times the product of the probabilities of the two opposite states). The purpose of this paper is to give an elementary proof of the existence and uniqueness of the steady state (or equilibrium). In short, we will give a simple proof of the following fact: \smallskip \noi {\it There exists a unique steady state of irreducible two-state Markov chains which are linked on an undirected network through an interaction term that depends bilinearly on neighbouring opposite states.} \\ \noi In more precise technical terms: \medskip \noi {\bf Theorem 3.1} {\it Let $\alpha, \beta \in \R^N_+$ , $\gamma_{0 1} , \gamma_{ 1 0} \geq 0$ and $W$ be a symmetric $N \times N$ matrix with non-negative entries and zeros on the diagonal. Then the system of differential equations: $$\frac{dp^i}{dt} = - \, \alpha^i p^i + \beta^i q^i - \gamma_{01} \, p^i \sum_j W_j^i q^j + \gamma_{10} \, q^i \sum_j W_j^i p^j \qquad i = 1, \ldots , N $$ where $q^i = 1 - p^i$, leaves the $N$-dimensional unit cube $[0 , 1]^N$ invariant and possesses a unique globally stable steady state (equilibrium point) in the interior of $I^N$.} \bigskip \noi This is proved in section 3, after setting up the notation in the next section. 
In the final section we make some simple remarks about the steady state distribution and discuss some special cases. In the next paper we plan to deal with the case of directed networks (which is the more interesting case). \bigskip \bigskip \section{ \bf{The structure of the equations}} \medskip \subsection{Basic notation and terminology} \medskip \subsubsection{Markov chains} There is an extensive theory of Markov chains. Here are two introductory textbooks: \cite{la}\cite{no}. We will briefly describe what we need to know. In continuous time, a time-homogeneous two-state Markov chain (or a Markov jump process) is completely determined by a $2 \times 2$ matrix (the infinitesimal transition probability matrix between the two states): $$ Q = \bm -\alpha & \alpha \\ \beta & -\beta \em $$ where $\alpha \geq 0 $ and $\beta \geq 0$. The time evolution of the probabilities $p$ and $q = 1 - p$ at the two states $|0 \rangle$ and $|1 \rangle$ is then determined by solving the linear differential equation (with constant coefficients): $$ \frac{d}{dt}(p,q) = (p,q) Q $$ whose solution is simply $(p(t), q(t)) = (p(0), q(0)) e^{t Q}$, and as $t \rightarrow \infty$, this converges to the steady state: $ ({\bar p}, {\bar q}) = \frac{1}{\alpha + \beta} ( \beta , \alpha ) $. The Markov chain is irreducible and aperiodic provided $\al$ and $\be$ are strictly positive. \bigskip \noi An important function on the one-dimensional simplex ($p+q =1$) is the relative entropy function. With respect to the steady state distribution, it is defined as: $$ E_{\bar{p}}(p) = E_{(\bar{p}, \bar{q})}(p,q) = - \bar{p} \log \frac{p}{\bar{p}} - \bar{q} \log \frac{q}{\bar{q}} $$ This is also known as the Kullback-Leibler ``distance'' (although it is not symmetric and does not satisfy the triangle inequality). 
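As a concrete illustration of this two-state setting (a sketch only; the rates $\alpha = 1$, $\beta = 2$ and the initial condition are arbitrary choices, not taken from the paper), the closed-form solution $p(t) = \bar{p} + (p(0) - \bar{p}) e^{-(\alpha+\beta)t}$ of $\dot{p} = -\alpha p + \beta q$ lets one watch both the convergence to $(\bar{p}, \bar{q}) = \frac{1}{\alpha+\beta}(\beta, \alpha)$ and the monotone decay of the relative entropy along the flow:

```python
import math

alpha, beta = 1.0, 2.0           # arbitrary strictly positive rates (assumption)
p_bar = beta / (alpha + beta)    # steady state probability of state |0>
q_bar = alpha / (alpha + beta)

def p_of_t(p0, t):
    # Closed-form solution of dp/dt = -alpha*p + beta*(1-p):
    # p(t) relaxes to p_bar at exponential rate alpha + beta.
    return p_bar + (p0 - p_bar) * math.exp(-(alpha + beta) * t)

def rel_entropy(p):
    # E_{p_bar}(p) = -p_bar*log(p/p_bar) - q_bar*log(q/q_bar) >= 0
    q = 1.0 - p
    return -p_bar * math.log(p / p_bar) - q_bar * math.log(q / q_bar)

p0 = 0.05
entropies = [rel_entropy(p_of_t(p0, 0.1 * k)) for k in range(40)]

# The relative entropy decreases monotonically along the flow,
# and p(t) converges to p_bar.
assert all(a + 1e-12 >= b for a, b in zip(entropies, entropies[1:]))
print(p_of_t(p0, 10.0), p_bar)
```

The same monotone decay is exactly the $dE/dt \leq 0$ computation carried out below for the interacting system.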
\medskip \noi As a warm-up exercise, let us compute the evolution of this entropy function along the flow: \medskip \noi $$ \frac{dE}{dt} = (\frac{\bar{q}}{q} - \frac{\bar{p}}{p}) \dot{p} = - \frac{(- \alpha p + \beta q)^2}{(\alpha + \beta) p q} = - \frac{\alpha + \beta}{p q} (p - \bar{p})^2$$ which is strictly negative unless $p = \bar{p}$. This proves the uniqueness and global stability of the steady state. \medskip \noi The main purpose of this paper is to do a similar calculation on two-state Markov chains interacting bilinearly on a network. \bigskip \subsubsection{ Networks and Graphs } A finite network (or a graph) is a collection of vertices (or nodes), denoted by $\mathcal{V} = \{ v_1, \ldots, v_N \}$, together with a collection of edges, denoted by $\mathcal{E}$, where each edge joins two vertices. For an undirected network we think of each edge $e \in \mathcal{E} $ as an unordered pair of vertices. An edge connecting a vertex to itself is called a loop. In this paper we will consider undirected finite graphs without loops. We will also assume that there is at most one edge between two different vertices, but we will consider the case where each edge is assigned a weight $w(e) = w_{i j} = W^i_j = W^j_i$, a positive real number. We will set $w_{i j} = 0$ if there is no edge between $v_i$ and $v_j$. In particular $w_{i i} = 0$. By default, if there is no specific weight attached, each edge will have weight $1$ (we then sometimes say it is a graph!). The whole information about an undirected network is therefore completely encoded by a real symmetric matrix $ W^i_j$ with non-negative entries and zeros on the diagonal. The sum of the $i^{th}$ row of $W$ (which is the same as the sum of the $i^{th}$ column), denoted by $d^i$, is the (weighted) number of edges that contain the vertex $v_i$ and is called the degree of that vertex. We will denote the diagonal matrix of these degrees $d^1, \ldots, d^N$ by $D$. 
The (combinatorial) Laplacian of the network is now defined as: $L = D - W$. $L$ is a symmetric matrix with non-positive non-diagonal entries with all row sums (and column sums) equal to zero. $L$ determines $W$, and so encodes all the information about the graph. The (non-negative) quadratic form associated to the Laplacian $L$ is then: $$ < Lx , x > \, = \frac{1}{2} \sum_{i,j} w_{i j} |x^i - x^j|^2 $$ where $x^i = x(v_i), \; i = 1, \ldots, N$, is a function defined on the vertices, thought of as a column vector. Since $L$ vanishes on constant functions, $0$ is always an eigenvalue and it is a simple eigenvalue iff the network is connected. We note that $Lx(v_{max}) \geq 0$ and $ Lx(v_{min}) \leq 0$ if $v_{max}$ and $v_{min}$ are, respectively, local maximum and minimum points of $x$. The inequalities are strict for strict maxima and minima. \medskip \noi These are very basic elementary facts about networks and graphs. There is an extensive theory and here are two introductory books: \cite{ch}\cite{ne}. \bigskip \subsection {Interacting Markov processes on a network} Now suppose that at each node (vertex) $v_i$ of a network with weight matrix $W_j^i$, we have a continuous-time Markov chain with two states $|0 \rangle \, , \, |1 \rangle$ and infinitesimal transition matrix: $Q^i = \bm -\alpha^i & \alpha^i \\ \beta^i & -\beta^i \em$. \\ \medskip \noi We will denote the probabilities at each node by $(p^i, q^i= 1 - p^i)$. If there were no interaction between the Markov processes at different nodes, the infinitesimal transition matrix for the whole system, consisting of $2^N$ states, would be the tensor product acting as a derivation $ {\bf Q} = \sum_i id \otimes \cdots \otimes Q^i \otimes \cdots \otimes id $\,, one node at a time. This describes a random walk on the hypercube $\{ 0, 1 \} ^N$. We will add to this tensor product an interaction term depending on the network as follows. 
At each node $v_i$, we change the probabilities of ${\bf Q } $ by adding the following terms which describe a very simple bilinear interaction between the two states at neighbouring sites (only opposite states will interact). \bear p^i &\mapsto& - \, \gamma_{01} \, p^i \sum_j W_j^i q^j + \gamma_{10} \, q^i \sum_j W_j^i p^j \\ q^i &\mapsto& + \, \gamma_{01} \, p^i \sum_j W_j^i q^j - \gamma_{10} \, q^i \sum_j W_j^i p^j \eear where $\gamma_{01} \geq 0$ and $\gamma_{10} \geq 0$ are coupling constants (not necessarily equal). This means that we are changing the transitional probabilities of the independent tensor product process by a bilinear interaction term that depends on the network and on the coupling constants between the opposite states. \medskip \noi The new system is strictly speaking not a Markov chain on the hypercube with $2^N$ states, but it can be thought of as a ``non-linear Markov process" on the (continuous) space of all probability distributions on the nodes of the network. The state space is therefore $I^N = [0,1]^N$ and we study the following dynamical system of $N$ (independent) differential equations where the non-linearity is of a simple type. \begin{equation} \frac{dp^i}{dt} = - \frac{dq^i}{dt} = - \, \alpha^i p^i + \beta^i q^i - \gamma_{01} \, p^i \sum_j W_j^i q^j + \gamma_{10} \, q^i \sum_j W_j^i p^j \end{equation} \medskip \noi In terms of the Laplacian, these equations can also be written as: \begin{equation} \frac{dp^i}{dt} = - \frac{dq^i}{dt} = - \alpha^i p^i + \beta^i q^i - \hat{\gamma}\, d^i \, p^i \, q^i + \gamma_{01} \, p^i \sum_j L_j^i q^j - \gamma_{10} \, q^i \sum_j L_j^i p^j \end{equation} where $ d_i = \sum_j w_{i j} $ is the degree of the vertex $v_i$ and $ \hat{\gamma} = (\gamma_{01} - \, \gamma_{10}) $. \medskip \noi Note that $\sum_j L^i _{j} q^j = - \sum_j L^i_j p^j $ and that $ \sum_i \sum_j L^i_j x^j = 0$ for any $x^j$.\\ \noi Let us denote the vector $Lp = -Lq$ by $ l $ , i.e., $l^i = \sum_j L^i_j p^j$. 
We then have: \begin{equation} \frac{dp^i}{dt} = \hat{\gamma}\, d^i \,( p^i)^2 - (\alpha^i + \beta^i + \hat{\gamma}\, d^i) \, p^i + \beta^i - \hat{\gamma} \, p^i \, (Lp)^i - \gamma_{10} (Lp)^i \end{equation} \medskip \noi The equations decouple on different connected components of the network. The relation to the bigger system on the ``hypercube'' $I^{2^N}$ is given by the embedding $\Phi: I^N \rightarrow I^{2^N} , \; p= (p^i) \mapsto {\bf p} = \Phi(p) $ defined as: $$ {\bf p}^{\sigma} = p(\sigma) = \prod_i p^i(\sigma^i) $$ where for $(\sigma^i) \in \{0 ,1\}^N$ which is a sequence of $0$'s and $1$'s (corners of $I^N$), we define $ p^i(0) = p^i$ and $ p^i(1) = 1-p^i = q^i$. Obviously, $\sum_{\sigma} p(\sigma) = \prod_i (p^i + q^i) = 1$. \medskip \noi The process on $I^{2^N}$ is now determined by its ``restriction'' to an $N$-dimensional ``quadratic variety''. Note also that the relative entropy (Kullback-Leibler divergence) on the hypercube, restricted to this subvariety, is just the sum of the relative entropies at each node. \begin{equation} E_{{\bf {\bar p}}}({\bf p} ) = \, - \sum_{\sigma} {\bf {\bar p}}_{\sigma} \log \frac{{\bf p}_{\sigma}}{ {{\bf \bar p}}_{\sigma}} = \sum_i E_{{\bar p}^i} (p^i) \end{equation} \noi so the embedding $\Phi$ preserves relative entropies. \bigskip \bigskip \section{ {\bf The existence and stability of the steady state} } \bigskip \noi We will prove existence, uniqueness and stability of the steady state of the differential equation \bear \frac{dp^i}{dt} = F(p)^i \eear on the product space $I^{N}$, where $F$ is the vector field: \beqn F(p)^i = \hat{\gamma}\,d^i \,( p^i)^2 - (\alpha^i + \beta^i + \hat{\gamma}\,d^i) \, p^i + \beta^i - \hat{\gamma} \, p^i (Lp)^i - \gamma_{10} (Lp)^i \eeqn \bigskip \noi We now check $F$ on the boundary of the cube $I^N$, consisting of $2N$ faces where one of the $p^i$'s is equal to $0$ or $1$. \medskip \noi When $p^i = 0$, then $(Lp)^i \leq 0$ and hence $F(p)^i = + \beta^i - \gamma_{10} \, (Lp)^i > 0 $. 
\smallskip \noi When $p^i = 1$, then $(Lp)^i \geq 0$ and hence $F(p)^i = - \alpha^i - \gamma_{01} \, (Lp)^i < 0 $. \medskip \noi So $F$ points inwards and this proves that there exists at least one zero of the vector field (or steady state) in the interior $(0 , 1)^N$. \bigskip \noi We prove now that there is a unique globally stable steady state by showing that the entropy function with respect to any steady state is strictly decreasing along the flow until it reaches that steady state. To that purpose, we first need to establish a little fact about the Laplacian acting on the unit cube: \begin{Lemma} If $x^i \in (0,1)$ and $y^i \in (0,1)$ for all $i = 1, \dots, N$, then $$ \sum_i \left( \frac{y^i}{x^i}\sum_j L_j^i x^j + \frac{x^i}{y^i}\sum_j L_j^i y^j \right) \leq 0$$ and equality holds iff $x^i = y^i$ for all $i$. \end{Lemma} \medskip \noi {\bf Proof}: Using the definition of the Laplacian: $$ \sum_{i,j} (\frac { y^i } {x^i} L^i_j x ^j + \frac {x^i} { y^i} L^i_j y ^j ) = \sum_{i \sim j} w_{i j}( \frac { y^i } {x^i} - \frac {y^j} {x^j} )( x^i - x^j) + w_{i j}(\frac {x^i}{ y ^i} - \frac {x^j}{ y ^j}) ( y^i - y ^j) $$ where $i \sim j$ means that $i$ and $j$ are connected by an edge with weight $w_{i j} = w_{j i} > 0$. 
\noi Now $$ \frac { y^i } {x^i} - \frac { y^j} {x^j} = \frac{ y^i }{x^i x^j} (x^j - x^i) + \frac{1}{x^j} ( y^i - y^j) $$ $$ \frac { x^i } { y^i} - \frac { x^j} { y^j} = \frac { x^i }{ y^i y^j } ( y^j - y ^i) + \frac{1} { y^j} ( x^i - x^j) $$ \noi and hence $$ ( \frac { y^i } {x^i} - \frac {y^j} {x^j} )( x^i - x^j) + (\frac {x^i}{ y ^i} - \frac {x^j}{ y ^j}) ( y^i - y ^j)$$ $$ = - \frac{ y^i }{x^i x^j} (x^j - x^i) ^2 + (\frac{1}{x^j} + \frac{1} { y^j}) ( y^i - y^j) ( x^i - x^j) - \frac { x^i }{ y^i y^j } ( y^j - y^i) ^2 $$ \noi which is a negative definite quadratic form since $$ (\frac{1}{x^j} + \frac{1} { y^j})^2 \geq \frac{4}{ x^j y^j } $$ \hfill QED \\ \bigskip \noi Any steady state $\bar{p}^i$ satisfies: \beqn - \alpha^i \bar{p}^i + \beta^i \bar{q}^i - \hat{\gamma}\,d^i \, \bar{p}^i \, \bar{q}^i - \gamma_{01} \bar{p}^i \, \bar{l}^i - \gamma_{10} \bar{q}^i \, \bar{l}^i = 0 \eeqn for each $i$, where $\bar{l}^i = L^i_j \bar{p}^j$. \bigskip \noi Let \beqn E_{\bar{p}}(p) = - \sum_i ( \bar{p}^i \log p^i + \bar{q}^i \log q^i ) + \sum_i ( \bar{p}^i \log \bar{p}^i + \bar{q}^i \log \bar{q}^i ) \eeqn which is the sum of all the relative entropies to the steady state at each node. \bear \frac{dE }{dt} & = & \sum_i(\frac{\bar{q}^i}{q^i} - \frac{\bar{p}^i}{p^i}) \frac{dp^i}{dt} \\ &=& \sum_i (p^i - \bar{p}^i) \left( - \frac{\alpha^i} {q^i} + \frac{\beta^i} {p^i} - \hat{\gamma}\,d^i - \gamma_{01} \frac{l^i}{q^i} - \gamma_{10} \frac{l^i}{p^i} \right) \\ & = & \sum_i (p^i - \bar{p}^i) \left( \alpha^i ( \frac{1}{\bar{q}^i} - \frac{1}{q^i}) - \beta^i( \frac{1}{\bar{p}^i} - \frac{1}{p^i} ) - \gamma_{01} (\frac{l^i}{q^i} - \frac{\bar{l}^i}{\bar{q}^i} ) - \gamma_{10} (\frac{l^i}{p^i} - \frac{\bar{l}^i}{\bar{p}^i} )\right) \eear where $l^i = L_j^ip^j ,\, \bar{l}^i = L^i_j \bar{p}^j$ and we used the steady state equation 3.6. 
\noi Now \bear \gamma_{0 1} \sum_i (p^i - \bar{p}^i) (\frac{l^i}{q^i} - \frac{\bar{l}^i} {\bar{q}^i} ) = \gamma_{0 1} \sum_i (\bar{q}^i - q^i) (\frac{l^i}{q^i} - \frac{\bar{l}^i} {\bar{q}^i}) = \gamma_{0 1} \sum_i \left( \frac{\bar{q}^i}{q^i} l^i + \frac{q^i}{\bar{q}^i} \bar{l}^i \right)\\ \gamma_{1 0} \sum_i (p^i - \bar{p}^i) (\frac{l^i}{p^i} - \frac{\bar{l}^i} {\bar{p}^i} ) = \gamma_{1 0} \sum_i (p^i - \bar{p}^i) (\frac{l^i}{p^i} - \frac{\bar{l}^i} {\bar{p}^i}) = - \gamma_{1 0} \sum_i \left( \frac{\bar{p}^i}{p^i} l^i + \frac{p^i}{\bar{p}^i} \bar{l}^i \right) \eear since $\sum_i l^i = \sum_i \bar{l}^i = 0$. \medskip \noi Using the fact that $l^i = \sum_j L^i_j p^j = -\sum_j L^i_j q^j \, , \; \, \bar{l}^i = \sum_j L^i_j \bar{p}^j = -\sum_j L^i_j \bar{q}^j $, we can now apply the basic Lemma above to the terms involving the Laplacian to get: \bear \frac{dE }{dt} & \leq & \sum_i (p^i - \bar{p}^i) \left ( - {\alpha^i} (\frac{1}{q^i} - \frac{1}{\bar{q}^i}) + \beta^i ( \frac{1} {p^i} - \frac{1}{\bar{p}^i} ) \right) \\ & = & - \sum_i \, \frac{\alpha^i}{\bar{q}^i q^i} ( \bar{q}^i - q^i)^2 - \sum_i \frac{\beta^i}{\bar{p}^i p^i } (p^i - \bar{p}^i)^2 \\ &\leq& 0 \eear with strict inequality unless $ p^i = {\bar p}^i $, for all $i$. 
\end{theorem} \bigskip \bigskip \section{\bf{Remarks}} \subsection{The spatial distribution of the steady state} \medskip \noi Let us denote the mean (average) of a function $x$ on the network by $\langle x \rangle = \frac{1}{N}\sum_i x^i$. Let $r = x - \langle x \rangle$. Then the variance of $x$ is given by $Var(x) = \langle r^2 \rangle$. We then have, by the basic properties of the Laplacian, the inequality: \beq \frac{1}{N} \langle x , Lx \rangle = \frac{1}{N}\sum_i x^i \, L_j^i x^j = \frac{1}{N}\sum_i r^i \, L_j^i r^j \geq \lambda_1 Var(x) \eeq where $\lambda_1$ is the first positive eigenvalue (which is the same as the second eigenvalue, since we are assuming that the graph is connected) of the Laplacian. \medskip \noi Now since the steady state $\bar{p}$ satisfies: $$- \, \hat{\gamma}\,d^i\, \bar{p}^i \bar{q}^i - (\alpha^i + \beta^i ) \, \bar{p}^i + \beta^i = \hat{\gamma} \bar{p}^i \, L_j^i \bar{p}^j + \gamma_{10} L_j^i \bar{p}^j$$ taking averages, we get the following \beq - \hat{\gamma} \frac{1}{N}\sum_i d^i \bar{p}^i \bar{q}^i - \frac{1}{N}\sum_i (\alpha^i + \beta^i) \bar{p}^i + \langle \beta \rangle = \hat{\gamma} \frac{1}{N}\sum_i \bar{r}^i \, L_j^i \bar{r}^j \eeq where $ \bar{r} = \bar{p} - \langle \bar{p} \rangle$, and hence (trivially): \begin{prop} The variance of the equilibrium distribution satisfies the estimates: $$Var(\bar{p}) \leq \frac{1}{\lambda_1} \frac{\langle \beta \rangle}{\hat{\gamma}} \qquad \mbox{if} \; \; \hat{\gamma} > 0 \qquad \left( Var(\bar{p}) \leq - \frac{1}{\lambda_1} \frac{\langle \alpha \rangle}{\hat{\gamma}} \; \; \mbox{if} \; \; \hat{\gamma} < 0 \right) $$ \end{prop} \medskip \noi These estimates are not very useful unless $\lambda_1 |\hat{\gamma}|$ is very large. 
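Both Laplacian facts used here, that $\lambda_1 > 0$ for a connected network and that $\frac{1}{N}\langle x, Lx\rangle \geq \lambda_1 \, Var(x)$, are easy to confirm numerically. A small sketch follows (not from the paper; the weighted path graph is an arbitrary choice, and numpy is assumed):

```python
import numpy as np

# A small weighted connected graph (arbitrary choice): a path on 4 nodes.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 2., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 0.]])
N = W.shape[0]
L = np.diag(W.sum(axis=1)) - W      # combinatorial Laplacian L = D - W

eigs = np.linalg.eigvalsh(L)        # ascending; eigs[0] is (numerically) 0
lambda1 = eigs[1]                   # first positive eigenvalue: graph is connected

rng = np.random.default_rng(1)
for _ in range(200):
    x = rng.random(N)
    r = x - x.mean()
    # (1/N) <x, Lx> >= lambda_1 Var(x), since r is orthogonal to constants
    assert x @ L @ x / N >= lambda1 * (r @ r) / N - 1e-12

print(eigs)
```

The check relies only on the spectral theorem for the symmetric matrix $L$: the centered vector $r$ is orthogonal to the constant eigenvector, so $\langle r, Lr\rangle \geq \lambda_1 \|r\|^2$.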
We will discuss the special case $\hat{\gamma} = 0$ in the next section.\\ \medskip \noi Let $R^i$ be the quadratic function: \beqn R^i (x) = \hat{\gamma}\,d^i\, x^2 - (\alpha^i + \beta^i + \hat{\gamma}\,d^i) \, x + \beta^i \eeqn defined at each node with a (unique) zero $\rho^i \in (0,1)$. Since $L(\bar{p})^i \geq 0 $ at the nodes where $\bar{p}$ attains a local maximum and $L(\bar{p})^i \leq 0$ at the nodes where $\bar{p}$ attains a local minimum (with strict inequalities for strict (local) maxima and minima), we have the following bounds on the absolute maximum and minimum values $\bar{p}_{max}$ and $\bar{p}_{min}$ of the steady state $\bar{p}$. \beqn R^{i_{max}}(\bar{p}_{max}) \geq 0 \qquad \mbox{and} \qquad R^{i_{min}}(\bar{p}_{min}) \leq 0 \eeqn \medskip \noi To simplify the discussion, let us assume that all the $\alpha^i$'s and the $\beta^i$'s are the same. Then (for $\hat{\gamma} > 0$; the inequality is reversed for $\hat{\gamma} < 0$) $\rho^i < \rho^j$ iff $ d^i > d^j $, and $\rho$ will be close to a constant if the degrees are almost the same. In other words, if the graph is ``almost'' homogeneous, $L(\rho)$ would be small and hence $\rho$ will be close to the true steady state $\bar{p}$. We can set up an iterative procedure starting with the initial guess $p(0) = \rho$ and iterating using the Laplacian: we define, recursively, $ (p(k+1))^i$ to be the solution $\in (0 , 1)$ of the equation: $$R^i(p(k+1)^i) = \hat{\gamma} \, (p(k))^i \, L(p(k))^i + \gamma_{10} \, L(p(k))^i$$ This will converge rapidly to the steady state if the graph is ``almost'' homogeneous. \bigskip \subsection{Some special cases} \subsubsection{The homogeneous case} If all nodes have the same matrix $Q$, the same degree $d$ and all the non-zero weights are equal to $1$ (i.e. 
the network is a regular graph), then the stationary probability $( \bar{p} , \bar{q} ) $ is the same for all nodes and since the Laplacian vanishes on constant functions we get: \bear \hat{\gamma}\,d\, \bar{p}^2 - (\alpha + \beta + \hat{\gamma}\,d) \, \bar{p} + \beta &=& 0 \eear \medskip \noi This quadratic equation has exactly one zero in the interior of $[0 , 1]$, provided $\alpha > 0$ and $\beta > 0$. If $\hat{\gamma} = 0 $ then $\bar{p} = \frac{\beta}{\alpha + \beta} $. It is also easy to check that $\bar{p} < \frac{\beta}{\alpha + \beta} \,$ if $\hat{\gamma} >0$ and $\bar{p} > \frac{\beta}{\alpha + \beta} \,$ if $\hat{\gamma} < 0$, so the probability strictly changes if the Markov chains are linked by a network. In fact, $\bar{p} \rightarrow 0$ as $\hat{\gamma} \rightarrow + \infty$ and $\bar{p} \rightarrow 1$ as $\hat{\gamma} \rightarrow - \infty$. Note also that even if $\alpha = 0$ there is a solution $ \bar{p} = \frac{\beta}{\hat{\gamma} d} \in (0,1)$ provided $ 0 < \beta < \hat{\gamma} d$ and if $\beta = 0$ there is a solution $\bar{p} = 1 + \frac{\alpha}{\hat{\gamma} d} \in (0,1)$ provided $ 0 < \alpha < - \, \hat{\gamma} d$. On the hypercube $\{ 0 ,1 \}^N$, the probabilities are then binomially distributed. The probability at a state with $ k \, |0 \rangle $'s and $ l \, |1 \rangle$'s is $\bar{p}^k \, \bar{q}^l$. \\ The proof of the uniqueness and stability of the steady state in the homogeneous case can be simplified using another useful little fact about the Laplacian which we would like to record here (the proof is elementary). \begin{Lemma} If $x^i \in (0,1)$ for all $i = 1,\ldots, N$, then $$ \sum_{i,j} \frac {x^i} {1 - x^i} L^i_j x^j \geq 0 $$ and equality holds iff $x^i = x^j$ for all $i, j$. \end{Lemma} \medskip \subsubsection {SIS model } This is a simple epidemiological model (see \cite{ne}), corresponding to $\alpha = 0$ and $\gamma_{10} = 0 $ in our notation. 
{\it In the epidemiological literature, what we call $\be$ is $\gamma$, what we call $\gamma_{01}$ is $\be$, the state $|0 \rg$ is called $S$ (susceptible), $|1 \rg$ is $I$ (infected)}. Let us also assume that all the Markov chains are identical, so all the $\beta$'s are the same. $S$ is an absorbing steady state at each site in the absence of connections. If the network is homogeneous (a regular graph) where every node has the same degree $d$, there is another stable steady state solution (endemic equilibrium) $ \bar{p} = \frac{\beta}{ d \gamma} \in (0,1)$ provided $ 0 < \beta < d \gamma$. In the case of a general network, our proof shows that if there is an endemic equilibrium in the interior (this is true in many cases), it will be unique and stable. \subsubsection{The case $\hat{\gamma} = 0$} \bigskip If we assume that the two interaction strengths are the same $\gamma_{0 1} = \gamma_{1 0} = \gamma $ and all the $\alpha$'s and $\beta$'s are equal (but we do not assume that the network is homogeneous), then the equation for the equilibrium state simplifies to: \bear - (\alpha + \beta) \, \bar{p}^i + \beta = \gamma \sum_j L_j^i \bar{p}^j \eear Averaging over $i$ gives $\langle \bar{p} \rangle = \frac{\beta}{\alpha + \beta}$, and evaluating the equation at the nodes where $\bar{p}$ attains its maximum and its minimum (where $(L\bar{p})^i \geq 0$ and $(L\bar{p})^i \leq 0$, respectively) shows that $\bar{p}$ is a constant equal to $\frac{\beta}{\alpha + \beta}$, so the network has no effect in this case and the ``synchronization'' is perfect. \bigskip \bigskip
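The main theorem lends itself to a direct numerical check. The sketch below is not part of the paper; the network, the rates and the couplings are arbitrary choices (numpy assumed). It integrates the system with a crude forward-Euler scheme, confirms that two far-apart initial conditions reach the same interior fixed point, and checks the perfect-synchronization prediction of the case $\hat{\gamma} = 0$ with equal rates:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
# Random symmetric weight matrix with zero diagonal (an arbitrary test network).
W = rng.random((N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def F(p, alpha, beta, g01, g10):
    # Right-hand side of the system in Theorem 3.1, with q^i = 1 - p^i.
    q = 1 - p
    return -alpha * p + beta * q - g01 * p * (W @ q) + g10 * q * (W @ p)

def steady_state(p0, alpha, beta, g01, g10, dt=1e-2, T=100.0):
    # Crude forward-Euler integration; adequate for a sanity check.
    p = p0.copy()
    for _ in range(int(T / dt)):
        p = p + dt * F(p, alpha, beta, g01, g10)
    return p

alpha = rng.uniform(0.5, 1.5, N)
beta = rng.uniform(0.5, 1.5, N)

# Two far-apart initial conditions reach the same interior fixed point.
pa = steady_state(np.full(N, 0.01), alpha, beta, 0.7, 0.3)
pb = steady_state(np.full(N, 0.99), alpha, beta, 0.7, 0.3)
assert np.allclose(pa, pb, atol=1e-6)
assert np.all((pa > 0) & (pa < 1))

# Equal couplings and equal rates: the network drops out and
# p_bar = beta/(alpha + beta) at every node.
pc = steady_state(np.full(N, 0.3), 1.0, 2.0, 0.5, 0.5)
assert np.allclose(pc, 2.0 / 3.0, atol=1e-6)
print(pa)
```

Note that a fixed point of the Euler map $p \mapsto p + dt\,F(p)$ is exactly a zero of $F$, so the discretization introduces no bias at the equilibrium itself.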
TITLE: Upper-bound for nuclear norm of $A \circ (v \otimes v)$ in terms of operator norm (or nuclear norm) of matrix $A$ and $L_\infty$-norm of vector $v$. QUESTION [1 upvotes]: Let $A \in \mathbb R^{n \times n}$ be a psd matrix such that $\|A\|_{op} \le r_1$ and $\|A\|_{*} \le r_2$. Let $v \in \mathbb R^n$ such that $\|v\|_\infty \le r_3$. Let $B:=A \circ V$ be the Hadamard product of $A$ and the outer-product $V := v \otimes v$ of $v$ with itself. Question. Is there a good generic upper-bound for the nuclear norm $\|B\|_*$ of $B$ in terms of $r_3$ and $r_1$ (or $r_2$)? REPLY [1 votes]: As initially observed by user Ben Grossmann in the comments, one has $\|B\|_\star \le r_2 r_3^2$. Indeed, if $D = diag(v)$, then $B = DAD$ and so $$ \|B\|_\star = tr(B) = tr(DAD) = tr(AD^2) \le tr(A)\|D\|_{op}^2 = \|A\|_\star\|D\|_{op}^2 \le r_2 r_3^2, $$ where we have used the fact that $A$ is a symmetric psd matrix and $D$ is symmetric (so that $B = DAD$ and $D^2$ are also psd). Moreover, this inequality is tight as can be seen by taking $v=1_n:=(1,1,\ldots,1) \in \mathbb R^n$, so that $B=A$.
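A quick numerical sanity check of the accepted bound (not part of the answer; numpy is assumed, and the matrix size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 6                               # size is an arbitrary choice
G = rng.standard_normal((n, n))
A = G @ G.T                         # random symmetric psd matrix
v = rng.uniform(-1, 1, n)

B = A * np.outer(v, v)              # Hadamard product A o (v (x) v)
D = np.diag(v)
assert np.allclose(B, D @ A @ D)    # the key identity B = DAD

nuc = lambda M: np.linalg.norm(M, 'nuc')
r2 = nuc(A)
r3 = np.abs(v).max()

assert np.isclose(r2, np.trace(A))  # ||A||_* = tr(A) for psd A
assert nuc(B) <= r2 * r3**2 + 1e-8  # the bound ||B||_* <= r2 * r3^2

# Tightness: v = all-ones gives B = A.
ones = np.ones(n)
assert np.allclose(A * np.outer(ones, ones), A)
```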
{"set_name": "stack_exchange", "score": 1, "question_id": 4319197}
TITLE: Proof integration identity $\int_{0}^{1}dx\int_{0}^{x}e^{x^2}dy=\int_{0}^{1}dy\int_{y}^{1}e^{x^2}dx$ QUESTION [0 upvotes]: I have to prove this identity: $$\int_{0}^{1}dx\int_{0}^{x}e^{x^2}dy=\int_{0}^{1}dy\int_{y}^{1}e^{x^2}dx$$ I've shown that: $$\int_{0}^{1}dx\int_{0}^{x}e^{x^2}dy=\int_{0}^{1}xe^{x^2}dx=\frac{1}{2}(e-1)$$ Then I tried to evaluate the second member of the identity in this way: substituting $x^2$ into the series $$ e^{x} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots $$ yields $$e^{x^2} = \sum_{n=0}^{\infty} \frac{x^{2n}}{n!} $$ By uniform convergence of the series, we can switch the order of integration and summation, that is: $$\int_{y}^{1} \sum_{n=0}^{\infty} \frac{x^{2n}}{n!}dx= \sum_{n=0}^{\infty}\frac{1}{n!}\int_{y}^{1} x^{2n}dx=\sum_{n=0}^{\infty}\frac{1}{n!}\cdot\frac{1-y^{2n+1}}{2n+1}$$ And from here I'm not sure how to proceed. What can I do? Thanks for the help in advance!! REPLY [3 votes]: Trying to calculate the iterated integrals is not the way here - one of those integrals will work in an elementary way, but the other won't. No, this is a case of Fubini's theorem - the two integrals are the same because they're the same double integral over a triangle, integrated in the two possible orders. We wish to show that $$\int_0^1 \int_0^x f(x,y)\,dy\,dx = \int_0^1\int_y^1 f(x,y)\,dx\,dy$$ for nice enough $f$. The function we're trying to integrate is continuous and bounded on this bounded set, and that's certainly nice enough. Draw the picture: Integrating over $y$ first, our condition is that $0\le y\le x$. Integrating over $x$ first, our condition is that $y\le x\le 1$. Then, in both cases, the outer variable runs from $0$ to $1$.
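Since both iterated integrals are the same double integral over the triangle, a crude numeric check against the closed form $(e-1)/2$ is reassuring; a sketch using the midpoint rule (the grid sizes are arbitrary):

```python
import math

N = 2000  # illustrative grid size

def inner_first():
    # ∫₀¹ ∫₀ˣ e^{x²} dy dx = ∫₀¹ x e^{x²} dx, midpoint rule
    h = 1.0 / N
    return sum(((i + 0.5) * h) * math.exp(((i + 0.5) * h) ** 2)
               for i in range(N)) * h

def outer_first():
    # ∫₀¹ ( ∫_y¹ e^{x²} dx ) dy, midpoint rule in both variables
    h = 1.0 / N
    total = 0.0
    for j in range(N):
        y = (j + 0.5) * h
        m = 200                       # inner grid over [y, 1]
        hx = (1.0 - y) / m
        total += sum(math.exp((y + (i + 0.5) * hx) ** 2)
                     for i in range(m)) * hx
    return total * h

exact = (math.e - 1) / 2
assert abs(inner_first() - exact) < 1e-4
assert abs(outer_first() - exact) < 1e-3
```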
{"set_name": "stack_exchange", "score": 0, "question_id": 3084349}
TITLE: Finding the Density Change of a Fluid QUESTION [1 upvotes]: Consider the motion of a fluid with velocity field defined in Eulerian variables by the following equations $$u=kx,\,\,v=-ky,\,\,w=0$$ where $k$ is a constant. Also assume that the density is given by $$\rho = \rho_0 + Aye^{kt}$$ What is the rate of change of density for each individual fluid particle? ($\rho_0$, $A$ are constant) I am pretty unsure what to do with the information that I'm given. I know that the $\textbf{Conservation of Mass}$ states that the rate-of-increase of mass inside a region $\Sigma$ must equal to the mass flux into $\Sigma$ across the surface $S$. Thus $$\iiint_{\Sigma}\frac{\partial\rho}{\partial t}\,dV = \iint_{S}(\rho\underline{u})\cdot\underline{\hat{n}}\,dA$$ From $\textbf{Gauss's Divergence Theorem}$ I know that $$\iint_{S}(\rho\underline{u})\cdot\underline{\hat{n}}\,dA = \iiint_{\Sigma}\underline{\nabla}\cdot(\rho\underline{u})\,dV$$ Leading to $$\iiint_{\Sigma}\Big[\frac{\partial\rho}{\partial t} + \underline{\nabla}\cdot(\rho\underline{u})\Big]\,dV = 0$$ So the $\textbf{mass-conservation equation}$ is $$\frac{\partial\rho}{\partial t} + \underline{\nabla}\cdot(\rho\underline{u}) = 0$$ So in tensor notation $$\frac{\partial\rho}{\partial t} + \frac{\partial}{\partial x_j}(\rho u_j) = 0$$ Now am I right to think that $$\frac{\partial \rho}{\partial t} = Ayke^{kt},\,\,\frac{\partial}{\partial x_j}(\rho u_j) = \frac{\partial\rho}{\partial x_j}u_j + \rho\frac{\partial u_j}{\partial x_j}$$ But I don't know what to do next... REPLY [1 votes]: The rate of change of density $\rho$ for each individual fluid particle is the time derivative of $\rho$ in the Lagrangian frame. Since the velocity field is given in Eulerian variables, this corresponds to the material derivative of $\rho$, which is given by $$ \frac{D\rho}{Dt} = \frac{\partial\rho}{\partial t} + u\cdot\nabla\rho. $$
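Carrying the reply's formula through with the given fields finishes the exercise; a short sketch (using $\partial\rho/\partial t = Akye^{kt}$ and $\partial\rho/\partial y = Ae^{kt}$):

```latex
\begin{align*}
\frac{D\rho}{Dt}
  &= \frac{\partial\rho}{\partial t}
   + u\,\frac{\partial\rho}{\partial x}
   + v\,\frac{\partial\rho}{\partial y}
   + w\,\frac{\partial\rho}{\partial z} \\
  &= Akye^{kt} + (kx)\cdot 0 + (-ky)\cdot Ae^{kt} + 0 \\
  &= 0 .
\end{align*}
```

So the density of each individual fluid particle does not change: the local increase $\partial\rho/\partial t$ is exactly cancelled by advection.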
{"set_name": "stack_exchange", "score": 1, "question_id": 2654317}
\begin{document} \begin{center} \Large{\bf Minimal non-orientable matroids in a projective plane} \normalsize {\sc Rigoberto Fl\'orez} \footnote {The work of the first author was performed at the State University of New York at Binghamton.} \footnotesize {\sc University of South Carolina Sumter\\ Sumter, SC, U.S.A.\ 29150-2498}\\ \normalsize and {\sc David Forge} \footnote {The work of the second author was performed while visiting the State University of New York at Binghamton.} \footnotesize {\sc Laboratoire de recherche en informatique UMR 8623 \\ B\^at 490 Universit\'e Paris-Sud\\ 91405 Orsay Cedex France} \\ {\tt forge@lri.fr} \normalsize \end{center} \footnotesize {\it Abstract:} { We construct a new family of minimal non-orientable matroids of rank three. Some of these matroids embed in Desarguesian projective planes. This answers a question of Ziegler: for every prime power $q$, find a minimal non-orientable submatroid of the projective plane over the $q$-element field.} \normalsize \thispagestyle{empty} \section {introduction} The study of non-orientable matroids has not received very much attention compared with the study of representable matroids or oriented matroids. Proving non-orientability of a matroid is known to be a difficult problem even for small matroids of rank 3. Richter-Gebert \cite{RG} even proved that this problem is NP-complete. In the general case, there are only some necessary conditions for a matroid to be non-orientable (see section 6.6 of \cite{oriented}). In 1991 Ziegler \cite {smnm} constructed a family of minimally non-orientable matroids of rank three which are submatroids of a projective plane over $\mathbb {F}_p$ for $p$ a prime. These matroids are of size $3n+2$ with $n\ge2$ and the smallest is the Mac Lane matroid on 8 elements (the only non-orientable matroid on 8 or fewer elements). 
Ziegler raised this question (\cite {oriented}, page 337): For every prime power $q$, determine a minimal non-orientable submatroid of the projective plane of order $q$. We study an infinite family $\{F(n) : n \in \mathbb N \}$ of line arrangements in the real projective plane (where $\mathbb N$ is the set of positive integers). $F(n)$ consists of $2n+1$ lines constructed by taking the infinite line together with a series of parallel lines going through two points. We give an easy criterion to decide when it is possible to extend the arrangement by a pseudoline passing through given intersection vertices of $F(n)$. This criterion gives a construction of a family of non-orientable matroids with $2n+2$ elements for $n\ge3$. Our smallest example is again the Mac Lane matroid but all others are different from Ziegler's arrangements. Finally, we prove that a subfamily of these non-orientable matroids embeds in Desarguesian projective planes coordinatized by fields of prime-power order. This answers Ziegler's question. The \emph {Reid cycle matroid} $R_{\text {cycle}}[k]$ for $k \geq 3$ is a certain single-element extension of our minimal non-orientable matroid $M(n,\sigma)\mid(C\cup\{c_0,c_1\})$ (given in Theorem \ref {t3}). Kung \cite [page 52]{jk} conjectured that for $k \geq 3$, the matroid $R_{\text {cycle}}[k]$ is non-orientable. McNulty proved this conjecture \cite{jmn,jn}. Our Theorem \ref {t3} shows that the Reid cycle matroid is not minimally non-orientable. \section {Extension of pseudoline arrangements} \label{extension1} We define a family of pseudoline arrangements $F(n)$ of size $2n+1$ in the real projective plane. We then study the possibility of extending such an $F(n)$ by new pseudolines going through given sets of intersection vertices. A pseudoline arrangement $L$ is a set of simple closed curves in the real projective plane $\Pi$, of which each pair intersects at exactly one point, at which they cross.
Two arrangements are \emph{isomorphic} if one is the image of the other by a continuous deformation of the plane. An arrangement is \emph{stretchable} if it is isomorphic to an arrangement of straight lines. The \emph{extension} of an arrangement $L$ by a pseudoline $l$ is the arrangement $L\cup\{l\}$, provided $l$ crosses each pseudoline of $L$ in exactly one point. Given a finite set $V$ of vertices, it is always possible to draw a pseudoline going through the points of $V$. However, given an arrangement $L$ and a set $V$ of points, it may be impossible to construct an extension of $L$ by a pseudoline going through $V$. We will use the following simple case of impossible extension. Let $L=\{l_1,l_2\}$ be an arrangement of two pseudolines meeting at a point $P_1$. These two lines separate the real projective plane into two connected components $C_1$ and $C_2$. Let $P_2$ and $P_3$ be two points, one in each of the two connected components defined by $L$. Then there is no extension of $L$ by a pseudoline going through the set of points $\{P_1,P_2,P_3\}$. Let $n$ be a positive integer. We adopt the notation $[n]:=\{1,2,\ldots,n\}$. Let $c_0$ be a line in the projective plane (in the affine representation of the figures, $c_0$ is the line at infinity), and let $A$ and $B$ be two points not on $c_0$. Let $\{X_i : i\in [n]\}$ be a set of $n$ points of $c_0$ that appear in the order $X_1, X_2,\ldots,X_n$ on $c_0$. Let us call $F(n)$ a pseudoline arrangement with $2n+1$ pseudolines $a_i$ for $ i \in [n]$, $b_i$ for $ i\in [n]$, and $c_0$ such that $$ \displaystyle { \bigcap_{i=1}^n a_i =A \text{ , } \ \ \bigcap_{i=1}^n b_i =B, \ \ \ \ \text{ and } \ \ a_i \cap b_i \cap c_0 = X_i} \ ,\ \forall i \in [n] .$$ Let us denote by $X_{i,j}$ the intersection point of the lines $a_{i}$ and $b_{j}$ for two different integers $i$ and $j$ (in this notation the point $X_i$ corresponds to $X_{i,i}$).
We remark that $F(n)$ is not uniquely defined but is unique up to isomorphism (this is a key remark for what follows). Indeed, since the lines $a_i$ all meet at the vertex $A$, they also cross there and nowhere else. This then gives all the other crossings and their order on the lines. The points $A, X_{i,j}$ for $ j\in [n]$ appear on the line $a_i$ in the order \[ \big ( A, X_{i,1},X_{i,2},\ldots,X_{i,i-1},X_i,X_{i,i+1},\ldots,X_{i,n} \big ) \] and similarly the points $B,X_{i,j}$ for $ i\in [n]$ appear on the line $b_j$ in the order \[ \big ( B, X_{1,j},\ldots,X_{j-1,j},X_j,X_{j+1,j},\ldots,X_{n,j} \big ).\] $F(n)$ is stretchable: one can put $c_0$ at infinity and take for the $a_i$ and $b_i$ $n$ pairs of parallel lines, with each $a_i$ passing through $A$ and each $b_i$ through $B$. In fact, $F(n)$ is rational, i.e., it is isomorphic to an arrangement in the real projective plane of lines defined by equations with integer coefficients. However, in the proofs we will not use the fact that $F(n)$ is stretchable or rational, and for convenience in our figures we may represent $F(n)$ with pseudolines. \begin{figure}[htpb] \begin{center} \includegraphics{fleur.pdf} \caption{The pseudoline arrangement $F(4)$.} \label{f1} \end{center} \end{figure} \begin {lem} \label{lemmaorder} For any integer $n\ge 3$ and any three increasing integers $1\le i_1<i_2<i_3\le n$, there exists an extension of the arrangement $F(n)$ by a pseudoline passing through the three points $X_{i_1,j_1}$, $X_{i_2,j_2}$, and $X_{i_3,j_3}$ if and only if $j_1<j_2<j_3$ or $j_1>j_2>j_3$. \end{lem} \begin{proof} We know the order in which the points $A,X_{i,j} $ for $ j\in [n]$ appear on the line $a_i$ and similarly the order in which the points $B,X_{i,j}$ for $i\in [n]$ appear on the line $b_j$. The two lines $a_i$ and $b_j$ meeting at $X_{i,j}$ separate the projective plane into two connected components.
Hence, point $X_{i,j}$ defines a partition of the point set $S_{i,j}=\{X_{i',j'} : i' \not=i, j' \not=j\}$ into the two parts \[ S_{i,j}^+= \big \{ X_{i',j'} : (i'-i)(j'-j)>0 \big \} \text { and } S_{i,j}^- = \big \{ X_{i',j'} : (i'-i)(j'-j)<0 \big \}. \] There exists a pseudoline passing through $X_{i_2,j_2}$ and the two other points $X_{i_1,j_1}$ and $X_{i_3,j_3}$ if and only if $X_{i_1,j_1}$ and $X_{i_3,j_3}$ belong to the same part of the partition defined by $X_{i_2,j_2}$. Since we know that $i_1<i_2<i_3$, the last statement is equivalent to the conclusion. \end{proof} \begin{lem}\label{alpha} For any integer $n$ and any injective function $f:D\rightarrow [n]$ where $D\subseteq[n]$, there exists an extension of the arrangement $F(n)$ by a pseudoline $c_1$ passing through the points $X_{i,f(i)}$, $i\in D$, if and only if the function $f$ is increasing or decreasing. \end{lem} \begin{proof} The preceding lemma implies the conclusion. \end{proof} \begin{lem}\label{extension} For any integer $n\ge2$ and for any cyclic permutation $\alpha$ of $[n]$, there exists an extension of the arrangement $F(n)$ by a pseudoline passing through the points $X_{\alpha^{i-1}(1),\alpha^i(1)}$, $ i\in [n]$, if and only if $n=2$ and $\alpha = (1 \ 2)$. \end{lem} \begin{proof} If $n\ge3$ then Lemma \ref{alpha} applied to $\alpha$ implies that the bijection $\alpha$ is increasing or decreasing. But a cyclic permutation on more than two elements cannot be increasing or decreasing. If $n=2$ then the only cyclic permutation is $\alpha(1)=2$ and $\alpha(2)=1$. And clearly one can find a pseudoline passing through the two points $X_{1,2}$ and $X_{2,1}$ (in fact, any two points). \end{proof} \section{Orientability of matroids} \label{extension2} From the Folkman-Lawrence Representation Theorem, the orientability of a rank-three matroid is equivalent to its representability by a pseudoline arrangement in the projective plane (see \cite{oriented} for more details). 
In such a representation, the elements of the matroid correspond to pseudolines of the arrangement. Similarly, the rank-two flats of the matroid correspond to vertices of intersection of the pseudolines. In this section we will define a family of minimal non-orientable matroids using Lemma \ref{extension}. Let $A=\{a_i : i\in[n]\}$, $B=\{b_i : i\in[n]\}$ and $\{c_0\}$ be disjoint sets. For $i\in[n]$, let us call $X_i$ the set $\{a_i,b_i,c_0\}$. Let $M'(n)$ be the simple rank-3 matroid on the ground set $E_n=A\cup B\cup \{c_0\}$ defined by the $n+2$ non-trivial rank-two flats: $A$, $B$, and the $n$ sets $X_i$, $i\in[n]$. Let $\tau$ be a permutation of $[n]$. The arrangement $F(n)$ was defined in the previous section after placing the vertices $X_i$ in the natural order on the line $c_0$. The positions of the lines $a_i$ and $b_i$ and of the vertices $X_{i,j}$ were then determined. Let us instead place the vertices $X_i$ on the line $c_0$ in the order $X_{\tau(1)}, X_{\tau(2)}, \ldots, X_{\tau(n)}$. By keeping the rule that $a_i$ and $b_i$ cross on $c_0$ at the vertex $X_i$, we get a new pseudoline arrangement that we denote by $F(n,\tau)$. \begin{lem}\label{perm} The representations of $M'(n)$ by pseudoline arrangements are the arrangements $F(n,\tau)$ where $\tau$ is a permutation of $[n]$. \end{lem} \begin{proof} The permutation $\tau$ fixes the order of the points $X_i$ on the line $c_0$. Once this order is fixed, everything else is determined by the fact that the lines $a_i$, for $i\in[n]$, go through the points $X_i$ and $A$ and that the lines $b_i$, for $i\in[n]$, go through the points $X_i$ and $B$. \end{proof} Let $\sigma$ be a permutation of $[n]$ without fixed elements. We denote by $M(n,\sigma)$ the matroid extension of $M'(n)$ by an element $c_1$ such that the sets $\{a_i,b_{\sigma(i)},c_1\}$, for $i\in [n]$, are the additional non-trivial rank-2 flats.
This means that in $M(n,\sigma)$, the new element $c_1$ is the intersection of the rank-2 flats $\cl(a_i,b_{\sigma(i)})$, $i\in [n]$. Note that in the pseudoline representation of the matroid, the pseudoline $c_1$ will have to pass through the vertices of intersection $a_i\cap b_{\sigma(i)}$. To the permutation $\sigma$ corresponds naturally the bipartite graph $G_\sigma$ with vertex set $A\cup B$ and with edge set \[ \big \{ \{a_i,b_i\} : i\in [n] \big \}\cup \big \{ \{a_i,b_{\sigma(i)}\} : i\in[n]\big \}. \] In the graph $G_\sigma$, two vertices $a_i$ and $b_j$ form an edge $\{a_i,b_j\}$ if and only if they both belong to some 3-point line with $c_0$ or $c_1$. The graph $G_\sigma$ is clearly 2-regular, which implies that it is a union of disjoint cycles. Let us point out that a $2k$-cycle of the graph $G_\sigma$ corresponds to a $k$-cycle of the permutation $\sigma$. \begin{figure} [htbp] \begin{center} \includegraphics{droites.pdf} \caption{ A linear realization of $M(4,(1 \ 3)(2 \ 4)) $} \label{f2} \end{center} \end{figure} \begin{thm} \label{t3} Let $n\ge2$ and let $\sigma$ be a permutation of $[n]$ without fixed elements. The matroid $M(n,\sigma)$ is orientable if and only if the graph $G_\sigma$ has no cycle of length greater than four. Moreover, if for some $k\ge 3$, the graph $G_\sigma$ contains a cycle of length $2k$, say on the vertex set \[ C=\big \{ a_i,a_{\sigma(i)},\ldots,a_{\sigma^{k-1}(i)},b_i,b_{\sigma(i)}, \ldots,b_{\sigma^{k-1}(i)} \big \}, \] then the restriction $M(n,\sigma)\mid(C\cup\{c_0,c_1\})$ is a minimal non-orientable matroid. \end{thm} \begin{proof} If the graph $G_\sigma$ has a decomposition into cycles of length only 4 (hence $n$ must be even), we give an explicit realization (see Figure \ref {f2}). We first relabel the elements using a permutation $\tau $ defining the position of the vertices $X_i$ at infinity. This permutation is defined by the following algorithm. Start with $k=1$ and $S=[n]$.
While $S \neq \emptyset$ do: \hspace{1cm} a) let $i$ be the smallest element of $S$ and set $\tau(i)\leftarrow k$ and $\tau(\sigma(i))\leftarrow n+1-k$; \hspace{1cm} b) put $k\leftarrow k+1$ and $S\leftarrow S\setminus \{i,\sigma(i)\}$. The algorithm stops when the permutation $\tau$ has been completely defined (i.e., when $S$ is finally empty, which will happen after $n/2$ steps). Put the points $A$ and $B$ at $(-1,0)$ and $(1,0)$ respectively. Using the permutation $\tau$, the following realization works: (a) the line $a_{\tau ^{-1}(i)} $ has equation $y=-ix-i$, for $i \leq n/ 2$, (b) the line $b_{\tau ^{-1}(i)} $ has equation $y=-ix+i$, for $i \leq n / 2$, (c) the line $a_{\tau ^{-1}(n-i+1)}$ has equation $y=ix+i$, for $i \leq n / 2$, (d) the line $b_{\tau ^{-1}(n-i+1)}$ has equation $y=ix-i$, for $i \leq n / 2$, (e) the line $c_0$ is at infinity, (f) the line $c_1$ has equation $x=0$. If $G_\sigma$ contains a cycle $C$ of length $2k\ge 6$ then the matroid $M(n,\sigma)\mid(C\cup\{c_0,c_1\})$ is an extension of $M'(k)$ by the element $c_1$. By Lemma \ref{perm}, a representation of $M'(k)$ is a pseudoline arrangement $F(k,\tau)$ for a permutation $\tau$. Then a representation of $M(n,\sigma)\mid(C\cup\{c_0,c_1\})$ is an extension of $M'(k)$ by a pseudoline $c_1$ going through the points $X_{\tau(i),\tau(\sigma(i))}$, $i\in [k]$. By Lemma \ref{extension}, this is impossible. Let us now prove the minimality of $M(n,\sigma)\mid(C\cup\{c_0,c_1\})$ as a non-orientable matroid. If one of the $c_i$ is deleted, we get a matroid isomorphic to $M'(k)$, which is orientable.
If we delete one of the $a_i$ or one of the $b_i$ (say $a_1$) then the matroid $M(n,\sigma)\mid(C\cup\{c_0,c_1\})\setminus a_1$ is realized by the following line arrangement in the real projective plane: put the points $A$ and $B$ at $(0,0)$ and $(1,0)$ respectively and (a) the line $a_i $ has equation $y=ix$, for $2\le i\le k$, (b) the line $b_i $ has equation $y=ix+1$, for $2\le i\le k$, (c) the line $c_0$ is at infinity, (d) the line $c_1$ has equation $y=1$. \qedhere \end{proof} \begin{figure}[htpb] \begin{center} \includegraphics{fig3.pdf} \caption{ A linear realization of $M(3,(1 \ 2 \ 3))\setminus a_1$.} \label{f3} \end{center} \end{figure} \section {Minimal non-orientable matroids contained in a projective plane} \label{last} In this section we will define a simple matroid $M(\mathfrak{G}, g_0,g_1 )$ where the definition of the lines depends on a given group $\mathfrak{G}$ and two fixed elements of $\mathfrak{G}$. We will see that this matroid is a particular case of $M(n,\sigma)$. The special case $M(\mathbb{Z}_n, 0, 1 )$ is a submatroid of a non-orientable matroid given by McNulty \cite {jn}. If a finite field $F$ contains $\mathfrak{G}$ as a multiplicative or an additive subgroup then $M(\mathfrak{G}, g_0,g_1 )$ embeds in the projective plane coordinatized by $F$. In Lemma \ref {bias6} (which follows from Theorems 2.1 and 4.1 in \cite {b4}, because $M(\mathfrak{G}, g_0,g_1 )$ is a bias matroid of a gain graph) we prove this fact for finite fields. With this lemma and Theorems \ref {mg} and \ref{Ziegler} we will answer Ziegler's question. Let $p^t$ be a prime power and let $\mathbb{F}_{p^t}$ be a Galois field. We will denote by $\Pi_{p^t}$ the projective plane coordinatized by $\mathbb{F}_{p^t}$. The points and lines of $\Pi_{p^t}$ will be denoted by $[x,y,z]$ for $x,y,z$ in $\mathbb{F}_{p^t}$, not all zero, and $ \langle a ,b ,c \rangle := \{ [x,y,z] : ax +by + cz = 0 \}$ for $a,b,c$ in $\mathbb{F}_{p^t}$, not all zero.
Let $\mathfrak{G}$ be a finite group of order $n$ and let $g_0,g_1$ be two of its elements. Let $A=\{a_g : g\in\mathfrak{G}\}$, $B=\{b_g : g\in\mathfrak{G}\}$ and $\{c_{g_0},c_{g_1}\}$ be disjoint sets. Let $M( \mathfrak{G}, g_0,g_1 )$ be the simple matroid of rank 3 on the ground set $E:= A\cup B \cup \{ c_{g_0}, c_{g_1} \}$ defined by the $2n+2$ non-trivial rank-2 flats $A$, $B$, and the $2n$ sets $\{ a_{g}, b_{g \cdot g_0}, c_{g_0} \}$, $g\in \mathfrak{G}$, and $ \{ a_{g}, b_{g \cdot g_1}, c_{g_1} \}$, $g\in \mathfrak{G}$. \begin{thm} \label{mg} Let $g_0$ and $g_1$ be two different elements of a finite abelian group $\mathfrak{G}$. Let $r$ be the order of $g_0\cdot g_1^{-1}$. Then $ M( \mathfrak{G}, g_0,g_1 )$ is non-orientable if and only if $r \geq 3$. \end{thm} \begin{proof} Let $n$ be the order of $\mathfrak{G}$. Let us first note that $M( \mathfrak{G}, g_0,g_1 )$ is isomorphic to an $M(n,\sigma)$. Let $\alpha$ be a bijection from $[n]$ to $\mathfrak{G}$. Let $\beta$ be the bijection from $[n]$ to $\mathfrak{G}$ defined by $\beta(i)=\alpha(i)\cdot g_0$. Let $\sigma$ be the permutation on $[n]$ defined by $\sigma (i)= \beta^{-1}\big(\alpha(i)\cdot g_1\big)$. The permutation $\sigma$ is clearly without fixed elements. We now have an isomorphism $\phi$ between $M(n,\sigma)$ and $M(\mathfrak{G}, g_0,g_1 )$ given by $\phi(c_0) = c_{g_0}$, $\phi(c_1) = c_{g_1}$, $\phi(a_i)=a_{\alpha(i)}$ and $\phi(b_i)=b_{\beta(i)}$. Let $G$ be the graph with vertex set $ \{ a_{g} : g \in \mathfrak{G} \} \cup \{ b_{g} : g \in \mathfrak{G} \} $ and edges $\{ a_{g}, b_{g'} \}$ such that $ \{ a_{g}, b_{g'}, c_{g_0} \}$ or $ \{ a_{g}, b_{g'}, c_{g_1} \}$ is a line of $ M( \mathfrak{G}, g_0,g_1 )$. This graph is the graph $G_\sigma$ for the corresponding permutation $\sigma$. 
A cycle of $G$ has the form \[ \big \{ a_g , b_{g\cdot g_0} , a_{g \cdot g_0 \cdot g_1^{-1}} , b_{g\cdot g_0\cdot (g_1^{-1}\cdot g_0)} , a_{g\cdot (g_0\cdot g_1^{-1})^2} ,\ldots , a_{g\cdot (g_0\cdot g_1^{-1})^{r-1}} , b_{g\cdot g_0 \cdot ( g_1^{-1} \cdot g_0)^{r-1}} \big \}.\] Therefore the length of a cycle of $G$ is $2 r$. So, Theorem \ref {t3} implies that $M( \mathfrak{G}, g_0,g_1 )$ is non-orientable if and only if $r\ge3$. \end{proof} \begin{lem} \label{bias6} Let $p$ be a prime number and let $m\ge 2$ and $t\ge 1$ be two integers. (i) $M( \mathbb{Z}_{p}, 0, 1 )$ embeds in $\Pi_p$. (ii) If $m$ divides $p^t -1$, then $M(\mathbb{Z}_m,0, 1)$ embeds in $\Pi_{p^t}$. \end{lem} \begin {proof}[Proof of (i)] Let $\psi$ be the map from the ground set of $M( \mathbb{Z}_p, 0, 1 )$ into the point set of $\Pi_p$ defined as follows: \[ \psi (a_i) = [0,i,1],\ \psi (b_i) = [1,i,1], \ \psi (c_0) = [1,0,0], \ \psi (c_1) = [1,1,0] \text { for } i \in \mathbb{Z}_{p}. \] By the definition of the incidence relation between points and lines in $\Pi_{p}$, $ \big \{ [ 0,i, 1]: i \in \mathbb{Z}_p \big \} \subseteq \langle -1,0, 0 \rangle$, $\big \{ [ 1,i,1]: i \in \mathbb{Z}_p \big \} \subseteq \langle -1,0,1 \rangle $, and $\big \{ [1,0,0], [1,1,0] \big \} \subseteq \langle 0,0,1 \rangle .$ Now, for fixed $i,j \in \mathbb{Z}_p$ and fixed $k \in \{0,1\}$, it is easy to verify that $\psi( \{ a_i,b_j, c_k \})$ is collinear in $\Pi_p$ if and only if $j=i + k$. \end {proof} \begin {proof}[Proof of (ii)] Let $\phi $ be an isomorphism between the group $\mathbb {Z}_m$ and the cyclic subgroup of order $m$ of the multiplicative group $\mathbb{F}_{p^t}^*$ (such an isomorphism exists because $m$ divides $p^t-1$).
Let $\psi$ be a map from the ground set of $M( \mathbb{Z}_m, 0, 1 )$ into the point set of $\Pi_{p^t}$ defined as follows: $$ \psi (a_i) = [\phi (i),0,1], \ \psi (b_i) = [0,- \phi (i),1], $$ $$ \psi (c_0) = [1,\phi (0),0],\ \psi (c_1) = [1,\phi (1),0] \text { for } i \in \mathbb{Z}_{m}.$$ By the definition of the incidence relation between points and lines in $\Pi_{p^t}$, $$\big \{ [\phi(i),0, 1]: i \in \mathbb {Z}_{m} \big \} \subseteq \langle 0,1, 0 \rangle,$$ $$\big \{ [ 0,- \phi (i),1]: i \in \mathbb {Z}_{m} \big \} \subseteq \langle 1,0,0 \rangle , \text { and }$$ $$\big \{[1,\phi (0),0], [1,\phi (1),0] \big \} \subseteq \langle 0,0,1 \rangle.$$ Now, for fixed $i,j \in \mathbb{Z}_m$ and fixed $k \in \{0,1\}$ it is easy to verify that $\psi (\{ a_i,b_j, c_k \}) $ is collinear in $\Pi_{p^t}$ if and only if $j=i+k$. \end {proof} \begin{thm} \label{Ziegler} Let $p \geq 3$ be a prime number and let $m \geq 3$ and $t\ge 1$ be two integers. (i) $M( \mathbb{Z}_p, 0, 1 )$ is a minimal non-orientable matroid that embeds in $\Pi_p$. (ii) If $m $ is a divisor of $p^t -1 $ then $M( \mathbb{Z}_m, 0, 1 )$ is a minimal non-orientable matroid that embeds in $\Pi_{p^t}$. (iii) $M( \mathbb{Z}_{p^t-1}, 0, 1 )$ is a minimal non-orientable matroid that embeds in $\Pi_{p^t}$ and in none of the $\Pi_{p^k}$ for $k < t$. \end{thm} \begin{proof} Parts (i) and (ii) follow by Theorem \ref {mg} and Lemma \ref {bias6}. As a consequence of part (ii) $M( \mathbb{Z}_{p^t-1}, 0, 1 )$ is a minimal non-orientable matroid in $\Pi_{p^t}$. Since $M( \mathbb{Z}_{p^t-1}, 0, 1 )$ has a line with $p^t-1$ points, $M( \mathbb{Z}_{p^t-1}, 0, 1 )$ does not embed in $\Pi_{p^k}$ for $k < t$. \end{proof} The matroids given in parts $(i),$ $ (ii)$, and $(iii)$ of the previous theorem are new minimal non-orientable matroids embeddable in projective planes, except for $M( \mathbb{Z}_3, 0, 1 )$, which is the Mac Lane matroid. Part $(iii)$ answers Ziegler's question. 
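The collinearity claim in part (i) of Lemma \ref{bias6} admits a small computational sanity check: three points of $\Pi_p$ are collinear exactly when the $3\times 3$ matrix of their homogeneous coordinates is singular over $\mathbb{F}_p$. A sketch for the illustrative choice $p=5$:

```python
# Check the embedding psi of M(Z_p, 0, 1) into the projective plane over F_p:
#   psi(a_i) = [0, i, 1],  psi(b_j) = [1, j, 1],  psi(c_k) = [1, k, 0].
# Three points are collinear iff the determinant of their coordinate
# matrix vanishes mod p; here that determinant works out to i + k - j.

p = 5  # any odd prime works; 5 is just an example

def det3(P, Q, R):
    """Determinant of the 3x3 matrix with rows P, Q, R (over the integers)."""
    (a, b, c), (d, e, f), (g, h, i) = P, Q, R
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for i in range(p):
    for j in range(p):
        for k in (0, 1):
            collinear = det3((0, i, 1), (1, j, 1), (1, k, 0)) % p == 0
            # psi({a_i, b_j, c_k}) is collinear iff j = i + k in Z_p
            assert collinear == (j == (i + k) % p)
```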
\section {Concluding remarks} \label{remarks} At no moment in the previous sections did we really need to have a finite set of points. We could have considered infinite rank-3 matroids and infinite pseudoline arrangements. For a permutation $\sigma$ of $\bbN$ without fixed elements, we can define the rank-3 infinite matroid $M(\bbN,\sigma)$ on the set $\{a_i : i\in \bbN\}\cup\{b_i : i\in \bbN\}\cup\{c_0,c_1\}$ by taking for its non-trivial rank-2 flats $A=\{a_i : i\in \bbN\}$, $B=\{b_i:i\in \bbN\}$, $X_i=\{a_i,b_i,c_0\} $, $i\in \mathbb N$, and $\{ a_i,b_{\sigma(i)},c_1\}$, $i\in \mathbb N$. The permutation $\sigma$, as in the finite case, also defines a graph $G_\sigma$ on the vertex set $A \cup B$. This graph is infinite but still of degree 2. This implies that $G_\sigma$ is a union of cycles and infinite 2-way paths. We then have the following results, which are similar to Theorems \ref{t3} and \ref{mg}: \begin{thm} Let $\sigma$ be a permutation of $\bbN$ without fixed elements. The matroid $M(\bbN,\sigma)$ is orientable if and only if the graph $G_\sigma$ has no cycle of length greater than four. \end{thm} \begin{thm} Suppose that $ \mathfrak{G}$ is a finitely generated abelian group. Then $ M( \mathfrak{G}, g_0,g_1 )$ is non-orientable if and only if the order of $g_0\cdot g_1^{-1}$ is finite and greater than $2$. \end{thm} We also remark that $M(\mathbb{Z}_n, 0, 1 )$ is linearly representable over the complex numbers $\mathbb{C}$ (this follows from \cite[Theorem 2.1]{b4}). Therefore, $M( \mathbb{Z}_n, 0, 1 )$ embeds in the projective plane coordinatized by $\mathbb{C}$. \section* {Acknowledgment} We thank Thomas Zaslavsky for his helpful comments and valuable advice.
{"config": "arxiv", "file": "2202.09621/MinimalNon-OrientableMatroidsPP_ARXIV.tex"}
TITLE: intersection of maximal ideals in a polynomial ring QUESTION [2 upvotes]: Given $A=K[x_1,\dots,x_n]$ a polynomial ring over a field $K$, let $p(x)\in A$ be an element, and $M_1,\dots,M_s$ some maximal ideals. Is it true that $$\cap(M_i,p) = (\cap M_i,p)?$$ I obtained that it's true if $K$ is algebraically closed, since one can argue via the corresponding varieties, but I don't know how to dis/prove it in general. Some additional facts: I proved that, in any case, $$\cap(M_i,p) = \sqrt{(\cap M_i,p)}$$ So it's sufficient to prove that the ideal is radical. Another thing I discovered is that this fact is equivalent to: For every radical $0$-dimensional ideal $Q$, if $Q\subseteq J$, then $J$ is radical REPLY [1 votes]: Ok, I just found the solution. It's obvious that $$\cap(M_i,p) \supseteq (\cap M_i,p)$$ but if $p\not\in M_1,\dots,M_r$, $p\in M_{r+1},\dots,M_s$ then $$\exists m_i\in M_i, \exists a_i : m_i+a_ip =1 \quad \forall i\le r$$ so $$q\in \cap(M_i,p) \iff q\in M_{r+1}\cap\dots\cap M_s$$ $$q=q\prod(m_i+a_ip)=q\prod m_i + p(\dots)\in (\cap M_i,p)$$ resulting in $$\cap(M_i,p) \subseteq (\cap M_i,p)$$
{"set_name": "stack_exchange", "score": 2, "question_id": 1358717}
TITLE: Evaluating a logarithmic integral in terms of trilogarithms QUESTION [3 upvotes]: For $a,c\in\mathbb{R}\land-1\le a\land-1<c$, define the function $J{\left(a,c\right)}$ to be the value of the dilogarithmic integral $$J{\left(a,c\right)}:=\int_{0}^{1}\mathrm{d}y\,\frac{\operatorname{Li}_{2}{\left(\frac{c}{1+c}\right)}-\operatorname{Li}_{2}{\left(\frac{ay}{1+ay}\right)}}{c-ay}.$$ In principle, $J{\left(a,c\right)}$ may be evaluated in terms of trilogs, dilogs, and elementary functions. In the process of trying to develop my own solution, I managed to obtain partial solutions valid over various subsets of the parameter space $(a,c)\in[-1,\infty)\times(-1,\infty)$, but I would prefer an alternative approach that eliminates the need for all the casework and produces a single expression valid over all parameter choices (excepting possibly at $a,c=-1,0$). Any suggestion/hints are welcome. Cheers! Progress so far: Defining the auxiliary parameters $A:=1+a,~C:=1+c$, we find: $$\begin{align} J{\left(a,c\right)} &=\int_{0}^{1}\mathrm{d}y\,\frac{\operatorname{Li}_{2}{\left(\frac{c}{1+c}\right)}-\operatorname{Li}_{2}{\left(\frac{ay}{1+ay}\right)}}{c-ay}\\ &=\int_{0}^{1}\mathrm{d}y\,\frac{1}{c-ay}\int_{ay}^{c}\mathrm{d}z\,\frac{\ln{\left(1+z\right)}}{z\left(1+z\right)}\\ &=\int_{0}^{1}\mathrm{d}y\,\frac{1}{c-ay}\int_{1+ay}^{1+c}\mathrm{d}x\,\frac{\ln{\left(x\right)}}{\left(x-1\right)x};~~~\small{\left[1+z=x\right]}\\ &=-\frac{1}{a}\int_{\frac{1}{1+c}}^{\frac{1+a}{1+c}}\mathrm{d}t\,\frac{1}{\left(1-t\right)}\int_{\left(1+c\right)t}^{1+c}\mathrm{d}x\,\frac{\ln{\left(x\right)}}{x\left(1-x\right)};~~~\small{\left[\frac{1+ay}{1+c}=t\right]}\\ &=-\frac{1}{a}\int_{\frac{1}{C}}^{\frac{A}{C}}\mathrm{d}t\,\frac{1}{\left(1-t\right)}\left[\int_{0}^{C}\mathrm{d}x\,\frac{\ln{\left(x\right)}}{x\left(1-x\right)}-\int_{0}^{Ct}\mathrm{d}x\,\frac{\ln{\left(x\right)}}{x\left(1-x\right)}\right]\\ 
&=-\frac{1}{a}\int_{\frac{1}{C}}^{\frac{A}{C}}\mathrm{d}t\,\frac{1}{\left(1-t\right)}\left[\int_{0}^{C}\mathrm{d}x\,\frac{\ln{\left(x\right)}}{x\left(1-x\right)}-\int_{0}^{C}\mathrm{d}w\,\frac{\ln{\left(tw\right)}}{w\left(1-tw\right)}\right];~~~\small{\left[x=tw\right]}\\ &=-\frac{1}{a}\int_{\frac{1}{C}}^{\frac{A}{C}}\mathrm{d}t\int_{0}^{C}\mathrm{d}x\,\left[\frac{\ln{\left(x\right)}}{\left(1-x\right)\left(1-tx\right)}-\frac{\ln{\left(t\right)}}{x\left(1-t\right)\left(1-tx\right)}\right]\\ &=-\frac{1}{a}\int_{0}^{C}\frac{\mathrm{d}x}{1-x}\int_{\frac{1}{C}}^{\frac{A}{C}}\mathrm{d}t\left[\frac{\ln{\left(xt\right)}}{1-xt}-\frac{\ln{\left(t\right)}}{x\left(1-t\right)}\right]\\ &=\small{-\frac{1}{a}\int_{0}^{C}\frac{\mathrm{d}x}{1-x}\left[\frac{\operatorname{Li}_{2}{\left(1-\frac{Ax}{C}\right)}-\operatorname{Li}_{2}{\left(1-\frac{x}{C}\right)}}{x}-\frac{\operatorname{Li}_{2}{\left(1-\frac{A}{C}\right)}-\operatorname{Li}_{2}{\left(1-\frac{1}{C}\right)}}{x}\right]}\\ &=\small{-\frac{1}{a}\int_{0}^{1}\mathrm{d}y\,\frac{\operatorname{Li}_{2}{\left(1-Ay\right)}-\operatorname{Li}_{2}{\left(1-y\right)}-\operatorname{Li}_{2}{\left(1-\frac{A}{C}\right)}+\operatorname{Li}_{2}{\left(1-\frac{1}{C}\right)}}{y\left(1-Cy\right)}};~~~\small{\left[x=Cy\right]}\\ &=...\\ \end{align}$$ This feels like progress of a sort, but how best to proceed from there? 
REPLY [1 votes]: $$J{\left(a,c\right)}=\int_{0}^{1}\mathrm{d}y\,\frac{\operatorname{Li}_{2}{\left(\frac{c}{1+c}\right)}-\operatorname{Li}_{2}{\left(\frac{ay}{1+ay}\right)}}{c-ay}$$ $$J{\left(a,c\right)}=\operatorname{Li}_{2}{\left(\frac{c}{1+c}\right)}\int_{0}^{1}\frac{\mathrm{d}y}{c-ay}-\int_{0}^{1}\mathrm{d}y\,\frac{\operatorname{Li}_{2} \left(\frac{ay}{1+ay}\right)}{c-ay}$$ $$J{\left(a,c\right)}=\operatorname{Li}_{2}{\left(\frac{c}{1+c}\right)} \left(\frac{-1}{a} \ln \frac{c-a}{c} \right) -\int_{0}^{1}\mathrm{d}y\,\frac{\operatorname{Li}_{2} \left(\frac{ay}{1+ay}\right)}{c-ay}$$ With the change of variables $X=ay$ (and $C=c/a$): $$\int_{0}^{1}\mathrm{d}y\,\frac{\operatorname{Li}_{2} \left(\frac{ay}{1+ay}\right)}{c-ay}= \frac{1}{a}\int_{0}^{a}\mathrm{d}X\,\frac{\operatorname{Li}_{2} \left(\frac{X}{1+X}\right)}{c-X}$$ For the last integral, WolframAlpha gives a huge result:
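A quick numerical sanity check of the decomposition above (a rough sketch using a hand-rolled dilogarithm and midpoint quadrature, nothing more; note the elementary integral is $\int_0^1\frac{\mathrm{d}y}{c-ay}=-\frac{1}{a}\ln\frac{c-a}{c}$, well-defined at the sample point since $c>a>0$):

```python
from math import log

def li2(x, n=400):
    """Dilogarithm Li_2(x) = -int_0^x ln(1-t)/t dt, midpoint rule (for 0 < x < 1)."""
    h = x / n
    return sum(-log(1.0 - (k + 0.5) * h) / ((k + 0.5) * h) for k in range(n)) * h

def integrate(f, lo, hi, n=400):
    """Midpoint-rule quadrature on [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

a, c = 0.5, 2.0  # sample point with -1 <= a and -1 < c
li2c = li2(c / (1 + c))

# J(a, c) as originally defined
J = integrate(lambda y: (li2c - li2(a * y / (1 + a * y))) / (c - a * y), 0.0, 1.0)

# split form: J = Li2(c/(1+c)) * I1 - I2, with I1 in closed form
I1 = -(1.0 / a) * log((c - a) / c)
I2 = integrate(lambda y: li2(a * y / (1 + a * y)) / (c - a * y), 0.0, 1.0)
```

At $a=1/2$, $c=2$ the direct evaluation of $J$ and the split form agree to within quadrature error, consistent with the sign $\ln\frac{c-a}{c}$ (the argument $\frac{a-c}{c}$ would be negative here).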
{"set_name": "stack_exchange", "score": 3, "question_id": 1428987}
TITLE: Reference request: the theory of currents QUESTION [14 upvotes]: I am a graduate student and want to study the theory of currents. What is a good reference for a beginner? I am familiar with the theory of distributions and generalized functions on $\mathbb R^n$. REPLY [18 votes]: The theory of currents is a part of geometric measure theory. Unfortunately, Federer made the subject completely inaccessible after he wrote his famous monograph: H. Federer, Geometric measure theory. Die Grundlehren der mathematischen Wissenschaften, Band 153, Springer-Verlag New York Inc., New York, 1969. The problem is that the book contains `everything' (well, almost) and it is unreadable. After this book was published, people did not dare to write other books on the topic and only the bravest hearts dared to read Federer's Bible. In my opinion the first accessible book on the subject is L. Simon, Lectures on geometric measure theory. Proceedings of the Centre for Mathematical Analysis, Australian National University, 3. Australian National University, Centre for Mathematical Analysis, Canberra, 1983. You can find it as a pdf file on the internet. Note that this book was written 14 years after Federer's book and there was nothing in between. I would also suggest: F. Lin, X. Yang, Geometric measure theory—an introduction. Advanced Mathematics (Beijing/Boston), 1. Science Press Beijing, Beijing; International Press, Boston, MA, 2002. I haven't read it, but it looks relatively elementary (relatively, because the subject is by no means elementary). Last, but not least, is F. Morgan, Geometric measure theory. A beginner's guide. Fifth edition. Illustrated by James F. Bredt. Elsevier/Academic Press, Amsterdam, 2016. You will not learn anything from that book as it does not have detailed proofs, but you can read it rather quickly and after that you will have an idea about what it is all about.
REPLY [15 votes]: A beginner-friendly introduction can be found in chapter 7 of the book "Geometric Integration Theory" by Krantz and Parks. It is from 2008, written in a modern and clear style, and it starts nearly from "zero".
{"set_name": "stack_exchange", "score": 14, "question_id": 372196}
TITLE: Show that the set of limits of all subsequences of a bounded sequence contains its sup and inf. QUESTION [5 upvotes]: Let $(X_n)$ be a bounded sequence, and let $E$ be the set of subsequential limits of $(X_n)$. Prove that $E$ is bounded and contains $\sup E$ and $\inf E$. Does this ask us to prove that limsup and liminf exist? Could you help me? REPLY [3 votes]: Since $(x_n)$ is a bounded sequence, by the Bolzano–Weierstrass Theorem it contains a convergent subsequence, so $E$ is non-empty. Now we have to show that $E$ is bounded. Let $M$ be a bound for the entire sequence, i.e., $|x_n|< M$ for all $n\in \mathbb{N}$, and suppose for the sake of contradiction that $E$ is unbounded. Then we can find $x_0\in E$ with $|x_0|>M$. Since $x_0\in E$, by definition of $E$ we can find a subsequence $(x_{n_i})$ which converges to $x_0$, and in particular for $i\ge n_0$ we have $|x_{n_i}-x_0|< |x_0|-M$, so $|x_{n_i}|\ge |x_0|-|x_{n_i}-x_0|>M$, a contradiction. Hence $E$ is non-empty and bounded. Let $s= \sup E$; we shall show that it lies in $E$. We check the claim only for the least upper bound; the other part is similar. First we will show that $s$ is a point of accumulation (a limit point). Let $\varepsilon>0$ and $n_0\ge 0$ be given. Then $s-\varepsilon/2$ is not the least upper bound and so there is $x\in E$ such that $s-\varepsilon/2<x\le s$, but since $x\in E$, we have $|x_{n_i}-x|< \varepsilon/2$ for some $n_i\ge i\ge n_0$. Thus $|x_{n_i}-s|\le |x_{n_i}-x|+|x-s|< \varepsilon$. Hence $s$ is a limit point. Define recursively the sequence: $n_0=0$ and $n_k = \min \{n>n_{k-1}:|x_n-s|<1/k \}$. Notice that since $s$ is a limit point, $\{n>n_{k-1}:|x_n-s|<1/k \} \not= \varnothing$ for all $k$, since otherwise this contradicts what we have shown in the above paragraph. Then $(x_{n_k})$ is a subsequence of $(x_n)$, and $s-1/k<x_{n_k}<s+1/k$, so by the squeeze theorem we conclude $x_{n_k} \to s$. Thus $s\in E$, as was to be shown.
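A toy illustration of the statement (not part of the proof): for $x_n = (-1)^n(1+1/n)$ the set of subsequential limits is $E=\{-1,1\}$, and $\sup E = 1$, $\inf E = -1$ are attained as limits of the even- and odd-indexed subsequences.

```python
def x(n):
    """The sequence x_n = (-1)^n (1 + 1/n), bounded with E = {-1, 1}."""
    return (-1) ** n * (1 + 1 / n)

# tails of the even- and odd-indexed subsequences
even_tail = [x(2 * k) for k in range(1000, 1010)]      # -> sup E = 1
odd_tail = [x(2 * k + 1) for k in range(1000, 1010)]   # -> inf E = -1
```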
{"set_name": "stack_exchange", "score": 5, "question_id": 701846}
TITLE: Question about Wilson's theorem, when $n = 4$. QUESTION [0 upvotes]: Recently I was looking up Wilson's theorem, to find out the values of the function $f(n) \equiv (n-1)! \pmod{n}$ for any $n$. So I know that for prime numbers $\geq $ 2 that would be $-1$, and it looks like for composite values $(n-1)! \equiv 0 \pmod{n} $ unless $n = 4$. So I'm wondering, why does $4$ stick out? REPLY [2 votes]: For $n>4$ and composite, $(n-1)!\equiv0\pmod{n}$. First suppose $n$ can be written as $n=ab$, with $a\ne b$, and both $>1$. Then $a$ and $b$ appear as factors of $(n-1)!$, so we are done. Otherwise $n=p^2$, for a prime $p$. Since $n>4$, we have $p\ge3$ and $(n-1)!$ contains $p$ and $2p$ as factors. Thus the only exceptional case is $n=4$, where $3!\equiv 2\pmod{4}$. The case $n=1$ is not exceptional either.
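The pattern is easy to tabulate with a quick brute-force check of the claim:

```python
from math import factorial

def wilson_residue(n):
    """Return (n-1)! mod n."""
    return factorial(n - 1) % n

# primes give n - 1 (that is, -1 mod n); composites give 0 -- except n = 4
residues = {n: wilson_residue(n) for n in range(2, 16)}
```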
{"set_name": "stack_exchange", "score": 0, "question_id": 2327817}
\section{Third Sylow Theorem} Tags: Sylow Theorems, Third Sylow Theorem \begin{theorem} All the [[Definition:Sylow p-Subgroup|Sylow $p$-subgroups]] of a [[Definition:Finite Group|finite group]] are [[Definition:Conjugate of Group Subset|conjugate]]. \end{theorem} \begin{proof} Suppose $P$ and $Q$ are [[Definition:Sylow p-Subgroup|Sylow $p$-subgroups]] of $G$. By the [[Second Sylow Theorem]], $Q$ is a [[Definition:Subset|subset]] of a [[Definition:Conjugate of Group Subset|conjugate]] of $P$. But since $\order P = \order Q$, it follows that $Q$ must equal a [[Definition:Conjugate of Group Subset|conjugate]] of $P$. {{qed}} \end{proof} \begin{proof} Let $G$ be a [[Definition:Finite Group|finite group]] of [[Definition:Order of Group|order]] $p^n m$, where $p \nmid m$ and $n > 0$. Let $H$ be a [[Definition:Sylow p-Subgroup|Sylow $p$-subgroup]] of $G$. We have that: :$\order H = p^n$ :$\index G H = m$ Let $S_1, S_2, \ldots, S_m$ denote the [[Definition:Left Coset|left cosets]] of $G \pmod H$. We have that $G$ [[Definition:Group Action|acts on]] $G / H$ by the rule: :$g * S_i = g S_i$. Let $H_i$ denote the [[Definition:Stabilizer|stabilizer]] of $S_i$. By the [[Orbit-Stabilizer Theorem]]: :$\order {H_i} = p^n$ while: :$S_i = g H \implies g H g^{-1} \subseteq H_i$ Because $\order {g H g^{-1} } = \order H = \order {H_i}$, we have: :$g H g^{-1} = H_i$ Let $H'$ be a second [[Definition:Sylow p-Subgroup|Sylow $p$-subgroup]] of $G$. Then $H'$ acts on $G / H$ by the same rule as $G$. Since $p \nmid m$, there exists at least one [[Definition:Orbit (Group Theory)|orbit]] under $H'$ whose [[Definition:Cardinality|cardinality]] is not [[Definition:Divisor of Integer|divisible]] by $p$. Suppose that $S_1, S_2, \ldots, S_r$ are the [[Definition:Element|elements]] of an [[Definition:Orbit (Group Theory)|orbit]] where $p \nmid r$. Let $K = H' \cap H_1$. Then $K$ is the [[Definition:Stabilizer|stabilizer]] of $S_1$ under the action of $H'$.
Therefore: :$\index {H'} K = r$ However: :$\order {H'} = p^n$ and: :$p \nmid r$ from which it follows that: :$r = 1$ and: :$K = H'$ Therefore: :$\order K = \order {H'} = \order {H_1} = p^n$ and: :$H' = K = H_1$ Thus $H'$ and $H$ are [[Definition:Conjugate of Group Subset|conjugates]]. \end{proof}
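The theorem can be checked by brute force in a small example. Here is a sketch (plain tuples as permutations, nothing from the proof machinery) verifying that the three Sylow $2$-subgroups of $S_3$ are pairwise conjugate:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))  # S_3, of order 6 = 2 * 3
e = (0, 1, 2)

# each Sylow 2-subgroup of S_3 is {e, t} for an involution t
involutions = [p for p in G if p != e and compose(p, p) == e]
sylow2 = [frozenset({e, t}) for t in involutions]

def conjugate(g, H):
    """The conjugate subgroup g H g^{-1}."""
    return frozenset(compose(compose(g, h), inverse(g)) for h in H)
```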
{"config": "wiki", "file": "thm_942.txt"}
TITLE: How many unit squares can fit on $\mathbb R^2$? QUESTION [0 upvotes]: How many unit squares can fit on $\mathbb R^2$? I have thought about this problem for a while and I've come to a solution, however, I am not sure whether my reasoning is good. Filling the plane with unit squares can be done following this algorithm: 1. Start at $(0,0)$. 2. Place one square to the right, one to the left, one to the right, one to the left and so on, until $\mathbb R \times [0,1]$ has been filled. To fill the mentioned area, we need to repeat this jumping to the left and to the right countably many times, for there are as many squares as there are integers. 3. Return to the origin and move one square down - repeat $2$. 4. Repeat $3$ until $\mathbb R \times \mathbb R$ has been filled. To recap, we have moved from left to right countably many times and up and down countably many times. Therefore, we 'only' need countably many squares to fill the plane. What do you think of my solution? REPLY [1 votes]: I think there is a problem when you say "until" you fill $\mathbb{R}\times[0,1]$ since that would take infinitely many steps (so maybe you would have to use a transfinite induction argument). To fix this you can tile the plane in a "spiral" way. On the other hand maybe you need to make your question more specific: Can the squares overlap? From your question I understand that the squares can only overlap at their boundary. If that is the case, notice that each square will contain an element of the form $(p,q) \in \mathbb{Q}\times \mathbb{Q}$, so there are no more squares than there are rationals, but more than finitely many; therefore they are countably many. Of course if you allow any kind of overlapping you can fit uncountably many. It is a nontrivial fact that you cannot tile the plane with non-overlapping squares with boundary.
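The counting argument can be made completely explicit for the grid tiling: index each unit square by its lower-left corner $(i,j)\in\mathbb Z^2$ and compose two standard bijections, $\mathbb Z\to\mathbb N$ and the Cantor pairing $\mathbb N\times\mathbb N\to\mathbb N$ (an illustrative sketch):

```python
def z_to_n(k):
    """Bijection Z -> N: 0, -1, 1, -2, 2, ... map to 0, 1, 2, 3, 4, ..."""
    return 2 * k if k >= 0 else -2 * k - 1

def cantor_pair(i, j):
    """Cantor pairing function, a bijection N x N -> N."""
    return (i + j) * (i + j + 1) // 2 + j

def square_index(i, j):
    """Natural-number index of the unit square with lower-left corner (i, j)."""
    return cantor_pair(z_to_n(i), z_to_n(j))
```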
{"set_name": "stack_exchange", "score": 0, "question_id": 2631080}
\begin{document} \title{Infinite-dimensional analyticity in quantum physics} \author{Paul~E.~Lammert} \email{lammert@psu.edu} \affiliation{Department of Physics, 104B Davey Lab \\ Pennsylvania State University \\ University Park, PA 16802-6300} \begin{abstract} A study is made of families of Hamiltonians parameterized over open subsets of Banach spaces in a way which renders many interesting properties of eigenstates and thermal states analytic functions of the parameter. Examples of such properties are charge/current densities. The apparatus can be considered a generalization of Kato's theory of analytic families of type B insofar as the parameterizing spaces are infinite dimensional. It is based on the general theory of holomorphy in Banach spaces and an identification of suitable classes of sesquilinear forms with operator spaces associated with Hilbert riggings. The conditions of lower-boundedness and reality appropriate to proper Hamiltonians are thus relaxed to sectoriality, so that holomorphy can be used. Convenient criteria are given to show that a parameterization $x \mapsto {\mathsf{h}}_x$ of sesquilinear forms is of the required sort ({\it regular sectorial families}). The key maps \hbox{${\mathcal R}(\zeta,x) = (\zeta - H_x)^{-1}$} and ${\mathcal E}(\beta,x) = e^{-\beta H_x}$, where $H_x$ is the closed sectorial operator associated to $\frm{h}_x$, are shown to be analytic. These mediate analyticity of the variety of state properties mentioned above. A detailed study is made of nonrelativistic quantum mechanical Hamiltonians parameterized by scalar- and vector-potential fields and two-body interactions. \end{abstract} \date{Aug. 22, 2021} \maketitle \tableofcontents \newpage \section{Introduction} \subsection{Motivation} The mathematical concept of analyticity is ubiquitous in physics. Here is a short list of examples. It is in the background whenever we approximate a function by a few terms of its Taylor series.
The question of whether perturbation series converge or not is of interest in many contexts. Kramers-Kronig relations are a manifestation of analyticity in complex half-planes. In thermodynamics, phase transitions are identified with the locus of points in a phase diagram at which free energy fails to be analytic. In quantum mechanics, analyticity of a resolvent operator in the spectral parameter is important. Those examples, and most other applications, consider regularity with respect to a few variables, often just one. This paper is concerned with analyticity when both domain and codomain are infinite-dimensional. Functional Taylor expansions, so-called, are used in the physics literature, but in a purely formal way so that one is hard-pressed to say anything about their existence or what convergence would even amount to. The original motivation for this investigation emerged from density functional theory\cite{Koch+Holthausen,Capelle06,Dreizler+Gross,Parr+Yang,Burke-12} (DFT), which is the foundation for very practical and successful computation in solid-state physics, chemistry and materials science. The connection to the present work is briefly described to illustrate ``real-world'' relevance. One considers the ground-state energy $E(v)$ of an $N$-electron system as a function of an ``arbitrary'' external one-body potential $v$. Alternatively (and preferentially in DFT) one focuses on the intrinsic energy $F(\rho)$ which is the minimum kinetic-plus-Coulomb-interaction energy consistent with charge density $\rho$. $E(v)$ and $F(\rho)$ stand in a relation of Legendre duality to each other, and their arguments range over certain infinite-dimensional spaces. $F(\rho)$ is {\em everywhere discontinuous}, and one may not expect much better of $E(v)$ due to the duality relation. Surprisingly, that is very far from true.
For instance (see Section~\ref{sec:eigenstate-cc-density}), for an energetically isolated nondegenerate eigenstate (not only ground states), charge density is {\em analytic} in $L^3(\Real^3)\cap L^1(\Real^3)$ as a function of the scalar potential $v$ in $L^{3/2}(\Real^3)\cap L^\infty(\Real^3)$. Thus, as a function of $v$, $\rho$ is so smooth that it has a convergent Taylor series. This fact has significant implications for computational practice, which, however, are outside the scope of this paper and will be taken up elsewhere. Other results have implications for less-common flavors of DFT such as current-density functional theory\cite{Vignale+Rasolt-88,Grayce+Harris-94} and non-zero-temperature DFT\cite{Mermin-65,Dornheim+18}. More generally, suppose we have a family of quantum Hamiltonians parameterized in a natural way by parameter $x$ ranging over an open subset of a Banach space. Under what conditions are physically interesting quantities analytic functions of $x$? Such quantities pertaining to an eigenstate include: the state itself, the corresponding energy eigenvalue, expectations of observables and generalized observables such as charge/current density. And, for nonzero temperature: statistical operator (i.e., the thermal state), free energy, thermal expectations, susceptibilities, and so on. The framework developed here can be used to address such analyticity questions with relative ease, as is demonstrated explicitly. The framework is flexible, powerful, and general due to treating Hamiltonians initially as sesquilinear forms, with a relaxation of the physically-grounded requirements of reality and lower-boundedness to sectoriality so that holomorphy ($\Cmplx$-differentiability) can be invoked, and complex analysis methods brought to bear. Kato's analytic perturbation theory for type B families\cite{Kato} is concerned with similar questions, but only for families with parameterization domains in the complex numbers $\Cmplx$.
The move to infinite-dimensional parameterization domains (Banach spaces, specifically) not only increases flexibility, but triggers a conceptual rearrangement, leading to a rephrasing of everything in terms of compositions of holomorphic maps between Banach spaces. It therefore becomes imperative to repackage appropriate classes of unbounded sesquilinear forms as Banach spaces. Section~\ref{sec:families} develops that key part of the apparatus. \subsection{An operator prototype} It is a familiar and useful fact that the resolvent $\Rmap(\zeta,H) = (H-\zeta)^{-1}$ of operator $H$ is a holomorphic function of the spectral parameter $\zeta$. The extension to a holomorphic dependence on $H$ is worth looking at as a prototype for the theory to be developed. \begin{defn} \label{def:relative-operator-bound} Given: a closed, densely-defined, operator $T$ on Banach space $\sX$ [denoted $T\in\Lincl(\sX)$]. An operator $A$ is {\it $T$-bounded} if $\dom A \supseteq \dom T$ and there are $a,b$ such that \begin{equation} \forall x \in \dom T, \; \|A x\| \le a \|x\| + b\|Tx\|. \end{equation} By increasing $a$, it may be possible to decrease $b$. The infimum of all $b$'s that work is the {\it $T$-bound} of $A$. \end{defn} If $T$ is closed and invertible, $\dom T$ can be turned into a Banach space with the norm $\|x\|_T = \|Tx\|_\sX$; we understand it as such in the following. The following Lemma brings the notion of analyticity to the surface. (A proof is provided at the end of the subsection, which should probably be skipped until it is called upon.) \begin{lem} \label{lem:inverse-of-perturbed-closed-op} Suppose $T$ is closed with range {$\sX$}, and $A$ is $T$-bounded. If \hbox{$\rng(T+A) = \sX$}, then \begin{equation} \label{eq:perturbed-inverse} (T+A)^{-1}= T^{-1}(1 + AT^{-1})^{-1} \in \Lin(\sX). 
\end{equation} This holds in particular when $\|AT^{-1}\| < 1$, which implies convergence of the Neumann series \begin{equation} \label{eq:Neumann-series} (T+A)^{-1}= T^{-1}\sum_{n=0}^\infty (-AT^{-1})^n. \end{equation} \end{lem} One would like to hold up series (\ref{eq:Neumann-series}) as the demonstration that $(T+A)^{-1}$ is analytic at $A=0$. That the terms are not simply multiples of powers of $A$, however, shows the need for at least the rudiments of a more general theory of analyticity, a theory which will be reviewed in Section \ref{sec:Banach-holomorphy}. Similarly, a claim that the series converges uniformly on some ball about the origin raises the question of domain and codomain of the map. The codomain is clearly $\Lin(\sH)$. The domain {\em could} be taken the same, but that would be far too timid. Instead, consider $\dom T$ as a Banach space with norm $\|x\|_T = \|Tx\|$, making $T$ an isomorphism. Then, $A\mapsto (T+A)^{-1}$ can be considered a map from $\Lin(\dom T;\sX)$ to $\Lin(\sX)$. Indeed, the norm of $A$ as an element of $\Lin(\dom T;\sX)$ is precisely $\|AT^{-1}\|$, so the series (\ref{eq:Neumann-series}) is uniformly convergent on any center-zero ball of radius less than 1. It might seem very difficult to extend this to families of operators having {\em differing} domains, but we shall find it possible by working with Hamiltonians in the guise not of operators, but of {\it sesquilinear forms}. Recall that a sesquilinear form (\sqf) in Hilbert space $\sH$ is a complex-valued function ${\frm{h}}[\phi,\psi]$, linear in $\psi$ and conjugate-linear in $\phi$ which range over some subspace of $\sH$. Motivations for working with sesquilinear forms are, first, the increased strength. That is needed for DFT applications, for example, where potentials in $L^{3/2}(\Real^3)$ are considered. 
Secondly, there is a corresponding gain in flexibility in applications, as it becomes easier to verify that a family of {\sqf}s is appropriate for feeding into the automatic abstract machinery. This is illustrated in Section \ref{sec:QM}. And finally, there is the argument that {\sqf}s are more physically natural and meaningful than operators. For, an \sqf\ can be recovered from its diagonal elements, and as expectation values these have a far clearer operational meaning than multiplication by a Hamiltonian operator. \begin{proof}[Proof of Lemma~\ref{lem:inverse-of-perturbed-closed-op}] $T+A$ is closed on $\dom (T+A) = \dom T$ by Lemma~\ref{lem:stability-of-closedness} below, and $AT^{-1} \in \Lin(\sX)$ since $\|AT^{-1} x\| \le \Big( a\|T^{-1}\| + b \Big) \|x\|$. Now, \hbox{$(T+A) = (T+A)T^{-1}T = (1+AT^{-1})T$} gives \hbox{$\rng(T+A) \subseteq \rng(1+AT^{-1})$}. The reverse inclusion follows from \hbox{$(T+A)T^{-1} = (1+AT^{-1})$}. Thus, if either $T+A$ or $1+AT^{-1}$ has a (necessarily bounded) inverse, so does the other, and (\ref{eq:perturbed-inverse}) holds. \end{proof} \begin{lem} \label{lem:stability-of-closedness} If $A$ has $T$-bound strictly less than one, then $T+A$ is closed on $\dom T$. \end{lem} \begin{proof} For any $\psi\in\dom T$, \begin{equation} \label{eq:eq-lem-stability} | \|T\psi\| - \|(T+A)\psi\| | \le \|A\psi\| \le a\| \psi \| + b \|T\psi\|, \end{equation} which yields $(1-b) \|T\psi\| \le a\|\psi\| + \|(T+A)\psi\|$, after rearrangement. Suppose that sequence $(\psi_n)$ in $\dom T$ converges to zero and \hbox{$((T+A)\psi_n)$} is Cauchy. $(T\psi_n)$ is also Cauchy by the preceding inequality, with (because $T$ is closed) limit zero. But, then (\ref{eq:eq-lem-stability}) shows that \hbox{$(T+A)\psi_n \to 0$}, as well.
\end{proof} \subsection{Sketch of the theory} The main ideas are sketched in this subsection, made more concrete with the aid of the example of a nonrelativistic ``spinless electron'' subjected to a variable external vector potential field ${\bm A}(x)$ ($x\in\Real^3$). This application will be treated in depth in Section~\ref{sec:QM}; here we are mostly using it simply as something concrete to fix attention on. The energy of the state with wavefunction $\psi$ is \begin{equation} \label{eq:hA} \frm{h}_{\bm A}[\psi] = \int |(\nabla - i{\bm A})\psi|^2\, dx, \end{equation} well-defined as a real-valued and lower-bounded quadratic form defined on a dense subspace of $L^2(\Real^3)$. Under a technical condition, $\frm{h}_{\bm A}$ is naturally associated to a corresponding self-adjoint operator $\frm{H}_{\bm A}$. This well-known theory is recovered as part of the development in Section~\ref{sec:families}. The resolvent $\Rmap(\zeta,H_{\bm A}) = (H_{\bm A}-\zeta)^{-1}$ is $\Cmplx$-analytic in $\zeta$. Is it also analytic in ${\bm A}$? Merely posing the question shows that we should take ${\bm A}$ in some {\em complex} space, and therefore allow the field ${\bm A}(x)$ to be complex-valued. Thus, we generalize (\ref{eq:hA}) to the sesquilinear form (\sqf) \begin{equation} \label{eq:hA-c} {\frm{h}}_{\bm A}[\phi,\psi] = \int ({\nabla} + i{\bm A})\overline{\phi} \cdot ({\nabla} - i{\bm A})\psi \, dx, \end{equation} the diagonal part \hbox{$\frm{h}_{\bm A}[\psi] \defeq \frm{h}_{\bm A}[\psi,\psi]$} being the associated quadratic form. We were careful to not have the complex conjugate of ${\bm A}$ appear in the form (\ref{eq:hA-c}). Now, we might ask whether \hbox{$(\zeta,{\bm A}) \mapsto \Rmap(\zeta,{\bm A})$} is a \hbox{$\Cmplx$-differentiable}, or {\it holomorphic}, map from some open subset of \hbox{$\Cmplx \times \vec{L}^3(\Real^3)$} into $\Lin(\sH)$ (The arrow on $\vec{L}$ merely indicates a vector, rather than scalar, field). 
Just as in elementary complex analysis, this is enough to guarantee \hbox{$\Cmplx$-analyticity}, and then $\Real$-analyticity for real ${\bm A}$ by restriction. This theory of holomorphy in Banach spaces is reviewed in Section~\ref{sec:Banach-holomorphy}. A resolvent operator holomorphic in this sense is not an end in itself. With its aid, however, one can show that isolated eigenvalues and properties of the associated eigenvectors are analytic functions of the Hamiltonian parameter, i.e., ${\bm A}$ in this case. This kind of application is considered in detail in Section~\ref{sec:eigenvalues}. There are certainly limits to how ${\bm A}$ can be allowed to vary and have all this work. One might imagine that ${\bm A}$ should represent a ``not too big'' perturbation of $\frm{h}_{\bm 0}$. A good idea, which we shall follow, is that all the {\sqf}s $\frm{h}_{\bm A}$ should be mutually relatively bounded. This suggests an abstract study of complete equivalence classes of mutually relatively bounded {\sqf}s on a dense subspace $\sK$ of $\sH$, independently of any concrete parameterization (such as provided here by ${\bm A}$), an idea which turns out to be quite fruitful. As described below, the fundamental $\Rmap$-map $(\zeta,\frm{h}_x)\mapsto (H_x-\zeta)^{-1}$ and $\Emap$-map $(\beta,\frm{h}_x)\mapsto e^{-\beta H_x}$, which are the basis of applications, are shown holomorphic at this abstract level. We therefore know that a concrete parameterization enjoys these holomorphy properties as soon as it is shown to parameterize part of such an equivalence class $\calC$ in the right way, and that turns out to be surprisingly easy. The relation $\relbd$ of relative boundedness induces equivalence classes of {\sqf}s on $\sK$. Suppose $\calC$ is one such (technically: containing some {\it closable sectorial} form).
When ${\bm A}$ is allowed to be complex, $\frm{h}_{\bm A}[\psi]$ is no longer real, but appropriate restrictions ensure that it takes values in a right-facing wedge in $\Cmplx$ as $\psi$ varies among unit vectors in its domain. This property is {\it sectoriality}, and the useful generalization of lower-bounded self-adjointness which allows complex analytic methods to be brought into play. The class $\calC_\relbd$ of {\em all} {\sqf}s bounded relative to $\calC$ has a natural Banach space structure, up to norm equivalence. In fact, it can be identified with $\Lin(\sHp;\sHm)$, where $\sHp\subset\sH\subset\sHm$ is a Hilbert rigging of the ambient Hilbert space $\sH$. Such Hilbert riggings play a very important role in our methodology, and are reviewed in Section~\ref{sec:Hilbert-rigging-abstract}. The sectorial forms in $\calC$, denoted $\sct{\calC}$, comprise an open subset of $\calC_\relbd$, and are the ones of real interest. They induce closed sectorial operators, $H$ corresponding to $\frm{h}$. A central result is that $\frm{h} \mapsto H^{-1}$ is holomorphic from an open subset of $\sct{\calC}$ into $\Lin(\sH)$. Note that the same cannot be said of $\frm{h} \mapsto H$, since the operators have differing domains, hence it is not even clear in what Banach space we can locate them all. This result can be unfolded to display the resolvent explicitly since $\sct{\calC}$ is invariant under translation by multiples of the identity: $(\zeta,\frm{h}) \mapsto \Rmap(\zeta,H)$ is holomorphic on its natural domain in $\Cmplx\times\sct{\calC}$. The other central abstract result is holomorphy of $\frm{h} \mapsto e^{-H}$, as a map into $\Lin(\sH)$. Again, we can unfold this to the map $(\beta,\frm{h})\mapsto e^{-\beta H}$ from a natural domain in $\Cmplx_{\text{rt}}\times \sct{\calC}$, where $\Cmplx_{\text{rt}}$ is the open right half-plane. 
Returning to the vector potential ${\bm A}$, what needs to be done once these abstract results are in place is very simple, as the theory summarized in Section \ref{sec:Banach-holomorphy} shows. Indeed, once an appropriate equivalence class of {\sqf}s is identified --- for instance, those equivalent to $\frm{h}_{\bm 0}$ on $C^\infty_c(\Real^3)$ --- one only needs to check local boundedness and holomorphy on one-complex-dimensional affine slices, and the latter really does reduce simply to noting that the expression contains two ${\bm A}$'s. The conditions seem easy to check in general. We can even anticipate that for a reasonable family of closable sectorial {\sqf}s parameterized over an open subset $\calU$ of a Banach space, if they are all mutually relatively bounded, hence lie in some $\sct{\calC}$, then this map $\calU\to\sct{\calC}$ will be holomorphic, and therefore so will be the $\Rmap$-map and $\Emap$-map. Some uses of these are treated in Sections \ref{sec:eigenvalues} and \ref{sec:free-energy}, respectively. The former can be used to show analyticity of isolated eigenvalues, as well as various properties of the associated eigenstates, such as charge and current density. The negative exponential $e^{-\beta H}$ shows up in quantum physics in two major contexts. One is analytic continuation of time, a topic which will not be addressed here. The other is quantum statistical mechanics, where it represents the thermal statistical operator, if it is trace-class. This is dealt with in Section \ref{sec:free-energy}, where we show that, if the free energy is (well-defined and) locally bounded, it is analytic, and give conditions for that to be the case. \subsection{Organization of the paper} Section~\ref{sec:Banach-holomorphy} provides needed background on analyticity and holomorphy in Banach spaces. Section~\ref{sec:Hilbert-riggings} provides background on Hilbert rigging. 
Prop.~\ref{prop:iso-to-closed} is a nonstandard result there which plays an important r\^ole in the later development. Section~\ref{sec:families} is the technical core of the paper. It develops the Banach space structure associated with equivalence classes of closable {\sqf}s and the general idea of {\RSF}s, and proves holomorphy of the $\Rmap$-map in Thm.~\ref{thm:resolvent-holo}. Section~\ref{sec:QM} is concerned with identifying specific holomorphic families of nonrelativistic Hamiltonians --- magnetic Schr\"odinger forms parameterized by both scalar and vector potentials. Sections~\ref{sec:eigenvalues} and \ref{sec:free-energy}, on the other hand, are concerned with what can be done if one has such a family, that is, what other quantities inherit the holomorphy. Section~\ref{sec:eigenvalues} studies low-energy Hamiltonians and eigenstate perturbation. Holomorphy of the energy and charge/current densities for isolated nondegenerate eigenstates is derived here, among other things. Section~\ref{sec:free-energy} is concerned with holomorphy of the $\Emap$-map and its consequences. Under appropriate conditions, this yields holomorphy of the nonzero-temperature statistical operator in {\em trace-norm}, as well as of free energy and thermal expectations. Special attention is again given to charge/current density. Section~\ref{sec:summary} gives a selective summary. \subsection{Conventions and notations} For convenient reference, some conventions will be listed here. $\sX$, $\sY$ and $\sZ$ denote generic Banach spaces, $\calU$ an open subset of a Banach space, $\sH$ denotes a Hilbert space.
$\Lin(\sX;\sY)$ is the space of bounded linear operators from $\sX$ to $\sY$, with the usual operator norm, and $\Lin(\sX) = \Lin(\sX;\sX)$; $\Lin^1(\sH)$ and $\Lin^2(\sH)$ denote the spaces of trace-class and Hilbert-Schmidt operators, respectively, $\Lincl(\sX)$ the set of densely-defined closed operators in $\sX$, and $\Linv(\sX;\sY)$ that of invertible bounded operators (Banach isomorphisms). Product spaces, e.g., $\sX\times\sY$ are usually denoted that way rather than as $\sX\oplus\sY$ because the product notion matches the informal interpretation better. $\Rmap(z,A) = (A-z)^{-1}$ is the resolvent operator (the notation $\Rmap$ will be overloaded later), $\res A$ the resolvent set, and $\spec A$ the spectrum of $A$. Topological closure is generally denoted by $\cl$, instead of an overbar. Barred arrows specify functions, while plain arrows display the domain and codomain, e.g., $A\mapsto e^{A}\colon \Lin(\sH) \to \Lin(\sH)$ is exponentiation on bounded operators. $(x_n)_{n\in\Nat}$ or $(x_n)_n$, or even just $(x_n)$ if it is unambiguous, denotes a sequence. Additional notations will be defined as need arises. Definitions and all theorem-like environments share a common counter; the numbering is merely a navigational aid. \section{Analyticity in the Banach space setting} \label{sec:Banach-holomorphy} This section reviews the necessary theory of differential calculus and holomorphy in Banach spaces. The material on differential calculus reviewed in Section~\ref{sec:calculus} is quite standard and can be found in many places\cite{Lang-Real_analysis,AMR,AMP,Chae}. The theory of holomorphy in Banach spaces discussed in Section~\ref{sec:holomorphy}, much less so. In-depth treatments are in the monographs of Mujica\cite{Mujica} and Chae\cite{Chae}. Thm.~\ref{thm:holomorphy-summary} is the reason this section is here at all, and the other results are mostly concerned with making the demonstration of holomorphy as easy as possible.
\subsection{Differential calculus} \label{sec:calculus} In this subsection, the base field of the Banach spaces ($\sX$, $\sY$, \dots) may be either $\Real$ or $\Cmplx$. $\calU$ is an open subset of a Banach space (usually $\sX$). \subsubsection{Derivatives} If $\Arr{\calU}{f}{\sY}$ admits a linear approximation near $a\in \calU$ as \begin{equation} f(a+x) = f(a) + Df(a) x + o(\|x\|), \end{equation} for some continuous linear map $\Arr{\sX}{Df(a)}{\sY}$, i.e. $Df(a) \in \Lin(\sX;\sY)$, then $Df(a)$ is said to be the {\it Fr\'{e}chet differential} (or {\it derivative}) of $f$ at $a$. We are interested in differentiability not at isolated points only, but throughout $\calU$. $f$ is $C^1$ on $\calU$ if $Df$ is everywhere defined on $\calU$ and continuous. In that case, $\Arr{\calU}{Df}{\Lin(\sX;\sY)}$ is itself a continuous map into a Banach space (with the usual operator norm) and we may ask about differentiability of $Df$. If the differential of $Df$ at $a$, denoted $D^2f(a)$, exists, it belongs to $\Lin(\sX;\Lin(\sX;\sY))$, by definition. Thus, for $x,x'\in \sX$, $D^2f(a)(x) \in \Lin(\sX;\sY)$ (dropping some parentheses and writing simply `$D^2f(a)\, x$' is a good idea) and $D^2f(a)\, x\, x' \in \sY$. Elements of $\Lin(\sX;\Lin(\sX;\sY))$ are actually {\em bilinear}, that is, linear in each argument with the other held fixed. Moreover, $D^2f(a)$ is symmetric, that is, $D^2f(a)\, x\, x' = D^2f(a)\, x'\, x$. This symmetry continues to higher orders, as long as differentiability holds, and provides good motivation to think primarily in terms of multilinear mappings rather than nested linear mappings. Eliding the distinction between a nested operator in $\Lin(\sX;\cdots \Lin(\sX;\sY)\cdots )$ and the corresponding multilinear map, the $n$-th differential $D^nf(a)\in\Lin(\sX,\ldots,\sX;\sY)$ is a continuous, symmetric, $n$-linear map from $\sX\times\cdots \times \sX$ into $\sY$. 
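These definitions may be illustrated by a simple worked example (an aside, not needed later): the squaring map \hbox{$f(A)=A^2$} on $\Lin(\sH)$. From
\begin{equation*}
f(A+X) = A^2 + AX + XA + X^2,
\end{equation*}
one reads off $Df(A)\,X = AX + XA$, the remainder $X^2$ being $o(\|X\|)$. Differentiating once more, \hbox{$Df(A+X')\,X - Df(A)\,X = X'X + XX'$} exactly, so that
\begin{equation*}
D^2f(A)\, X'\, X = X'X + XX',
\end{equation*}
which is manifestly symmetric in $X$ and $X'$, while $D^nf = 0$ for $n\ge 3$.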
\subsubsection{Taylor series and analyticity} If $f$ is continuously differentiable, then whenever the line segment from $a$ to $a+x$ is in $\calU$, \hbox{$f(a+x) = f(a) + (\int_0^1 Df(a+tx) \, dt)\, x$}. Suspending the question of convergence, one deduces that the Taylor series expansion should be $\sum_{n=0}^\infty \frac{1}{n!} D^nf(a)\, x\cdots x$. If, for every point $a\in \calU$, the Taylor series expansion of $f$ converges to $f$ uniformly and absolutely on a ball of some nonzero ($a$-dependent) radius, $f$ is said to be {\it analytic} on $\calU$. This is the favorable situation in which we are interested. The notion of analyticity, and the actual use of a convergent series expansion, is independent of the base field, but it follows from a {\it prima facie} much weaker condition when the base field is $\Cmplx$, as discussed next. \subsection{Holomorphy} \label{sec:holomorphy} This subsection is concerned with the equivalence between holomorphy and $\Cmplx$-analyticity (Thm.~\ref{thm:holomorphy-summary}) and ways to make the demonstration of holomorphy easy (nearly everything else). \subsubsection{Complex linearity and conjugate linearity} \label{sec:C-linearity} A function of type ${\Real}\rightarrow{\Real}$ can be differentiable to all orders without being analytic, whereas the situation is remarkably otherwise for those of type $\Cmplx \rightarrow \Cmplx$. What is much less well-appreciated is that this contrast persists even in infinite-dimensional Banach spaces. Now we assume that the base field for $\sX$ and $\sY$ is $\Cmplx$. They can still be regarded as real vector spaces $\sX_{\Real}$, $\sY_{\Real}$ by restriction of scalars; in that case $ix$ is considered not a scalar multiple of $x$, but a vector in an entirely different ``direction''. 
Suppose $\Arr{\sX_{\Real}}{f}{\sY_{\Real}}$ is $\Real$-differentiable at $a$, temporarily denote the differential as $D_\Real f(a)$, and define $Df(a)(x) = \frac{1}{2}[D_\Real f(a)(x) - i D_\Real f(a)(ix)]$, $\overline{D}f(a)(x) = \frac{1}{2}[D_\Real f(a)(x) + i D_\Real f(a)(ix)]$. The condition for $\Cmplx$-differentiability is then $\overline{D}f(a) = 0$. This is the analog of the Cauchy-Riemann equation. The function $f$ is said to be {\it holomorphic} on $\calU$ if it is \hbox{$\Cmplx$-differentiable} there. Sometimes (e.g., Chae\cite{Chae}) {\it holomorphic} is instead taken as synonymous with $\Cmplx$-analyticity by definition, but it does not really matter, as the following remarkable theorem shows. \begin{thm} \label{thm:holomorphy-summary} For {\em complex} Banach spaces $\sX$ and $\sY$, $\calU$ open in $\sX$, the following properties of $\Arr{\calU}{f}{ \sY}$ are equivalent: \newline \textnormal{(a)} holomorphy \textnormal{(}$\Cmplx$-differentiability\textnormal{)} \newline \textnormal{(b)} infinite $\Cmplx$-differentiability \newline \textnormal{(c)} $\Cmplx$-analyticity \end{thm} \begin{proof} See \S\S 8 and 14 of Mujica\cite{Mujica}; or Chae\cite{Chae}, Thm.~14.13. \end{proof} Thus, even if we are ultimately interested only in $\Real$-analyticity in some real subspace $\tilde{\sX}$ of a complex space $\sX$, it can be advantageous to work in $\sX$, establish holomorphy (a comparatively simple property) to get $\Cmplx$-analyticity in $\sX$ and thence $\Real$-analyticity in $\tilde{\sX}$ by restriction. This is in the spirit of Jacques Hadamard's famous dictum, ``{Le plus court chemin entre deux v\'{e}rit\'{e}s dans le domaine r\'{e}el passe par le domaine complexe}''. 
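A minimal illustration in the scalar case $\sX = \sY = \Cmplx$ (an aside): for complex conjugation $f(z) = \bar{z}$, one has $D_\Real f(a)(x) = \bar{x}$, whence
\begin{equation*}
Df(a)(x) = \tfrac{1}{2}\bigl[\,\bar{x} - i\,\overline{(ix)}\,\bigr] = \tfrac{1}{2}\bigl[\,\bar{x} - \bar{x}\,\bigr] = 0, \qquad \overline{D}f(a)(x) = \tfrac{1}{2}\bigl[\,\bar{x} + i\,\overline{(ix)}\,\bigr] = \bar{x}.
\end{equation*}
Thus $\overline{D}f(a)\neq 0$ at every $a$: conjugation is $\Real$-differentiable to all orders, yet nowhere holomorphic.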
The following mostly simple permanence properties of holomorphy are important: \begin{itemize} \item Composition: Whenever \hbox{$\Arr{\sX\supset\calU}{f}{\sY}$} and \hbox{$\Arr{\sY\supset {\mathcal V}}{g}{\sZ}$} are holomorphic, so is \hbox{$\Arr{\calU \cap f^{-1}({\mathcal V})}{g\circ f}{\sZ}$}. \item Inversion: \hbox{$\Linv(\sX;\sY) \overset{\mathrm{inv}}{\to} \Linv(\sY;\sX)$} is holomorphic; recall that $\Linv(\sX;\sY)$ is the open set of invertible operators in $\Lin(\sX;\sY)$. \item Products: If the domain of $f$ is in a product space $\sX_1\times \sX_2$, then $f$ is holomorphic iff it is jointly continuous and separately holomorphic. \item Equivalent norms: Holomorphy is stable under equivalent renorming of the domain or codomain space. \item Differentiation: If $f$ is holomorphic, so is $Df$. \item Sequential limits: Sequential convergence uniformly on compact sets preserves holomorphy. (See Prop.~\ref{prop:convergent-sequences}.) \end{itemize} \subsubsection{Reduction to 1D domain or range} The preceding shows that convergence of series expansions can be deduced from the mere existence of a differential. However, the latter is still complicated by the infinite-dimensional setting. Fortunately, a remarkable reduction is possible here as well --- to consideration of one-$\Cmplx$-dimensional subspaces both in the domain and codomain (together with local boundedness). \begin{defn} \label{def:G-holo} $\Arr{\calU}{f}{\sY}$ is {\it G-holomorphic} if for all $x\in \calU$, $y\in\sX$, \hbox{$\zeta \mapsto f(x+\zeta y)$} is an ordinary holomorphic function of $\zeta$ on some neighborhood of zero in $\Cmplx$. \end{defn} The following fundamental theorem is named after Graves, Taylor, Hille and Zorn\cite{Chae}. \begin{thm}[GTHZ] \label{thm:GTHZ} For a map $\Arr{\calU}{f}{\sY}$, the following property equivalence holds: \newline holomorphy $\Leftrightarrow$ G-holomorphy and local boundedness. 
\end{thm} \begin{proof} See Mujica\cite{Mujica}, Prop.~8.6 and Thm.~8.7; Chae\cite{Chae}, Thm.~14.9. \end{proof} \begin{rem} By definition, {\it locally bounded} means bounded on some neighborhood of each point of the domain. In a Banach space (or even a metric space), this is equivalent to boundedness on compact subsets of the domain. \end{rem} Maps into spaces of linear operators will be very important in the following and are considered now. In fact, since any Banach space is isometrically embedded in its bidual, this is not really a special case. \begin{defn} \label{def:wk-st-wo-holo} $\Arr{\calU}{f}{\sY}$ is {\em weakly holomorphic} if \hbox{$x \mapsto \pair{\lambda}{f(x)}\in \Cmplx$} is holomorphic for each $\lambda\in\sY^*$. It is {\em densely weakly holomorphic} if the condition holds for a set of $\lambda$'s dense in $\sY^*$. $\Arr{\calU}{f}{\Lin(\sY;\sZ)}$ is {\em strongly holomorphic} if \hbox{$x \mapsto f(x)y \in \sZ$} is holomorphic for each $y\in\sY$; and {\em weak-operator holomorphic} if \hbox{$x \mapsto \pair{\lambda}{f(x)y}\in \Cmplx$} is holomorphic for each $y\in\sY$ and $\lambda\in\sZ^*$. As for weak holomorphy, these may be modified with {\em dense} to indicate that the set of $y$'s [resp. pairs $(y,\lambda)$] in question is dense in $\sY$ [resp. $\sY\times \sZ^*$]. \end{defn} The following Lemma is preparation for Propositions \ref{prop:wk-holo} and \ref{prop:WO-holo}. Some obvious abbreviations (`st.' for `strong', `loc. bdd.' for `locally bounded', `holo' for `holomorphic') are used. \begin{lem} \label{lem:st-holo} For \hbox{$\Arr{\calU}{f}{\Lin(\sY;\sZ)}$}, the following property implications hold. \newline \textnormal{(a)} st. G-holo. $\Rightarrow$ G-holo. \newline \textnormal{(b)} loc. bdd. \& dense st. holo. $\Rightarrow$ st. holo. \newline \textnormal{(c)} st. holo. $\Rightarrow$ loc. bdd. \newline \textnormal{(d)} loc. bdd. \& dense st. G-holo. $\Rightarrow$ holo. \newline \textnormal{(e)} st. holo. $\Rightarrow$ holo. 
\end{lem} \begin{rem} Parts (a), (b), and (c) are really just preparation for (d) and (e). \end{rem} \begin{proof} \noindent (a): Since G-holomorphy concerns affine planes independently, assume that $\calU \subseteq \Cmplx$ without loss. Assume (to be justified later) that $f$ is also continuous. Then, for every $y\in \sY$ and simple closed contour $\Gamma$ in $\calU$, \begin{equation} 0 = \oint_\Gamma f(\omega) y \frac{d\omega}{2\pi i} = \left[ \oint_\Gamma f(\omega) \frac{d\omega}{2\pi i} \right] y. \end{equation} Continuity of $f$ is used here to justify taking $y$ outside the integral. Since $y$ ranges over $\sY$, which is separating for $\Lin(\sY;\sZ)$, the integral in square brackets is zero. Finally, Morera's theorem implies that $f$ is holomorphic, because $\Gamma$ is arbitrary. To complete the proof of (a), we must show that $f$ is continuous at $\zeta\in\calU$. Suppose not. Then there is a sequence $\calU\ni \zeta_n \to \zeta$ such that \hbox{$\|f(\zeta_n) - f(\zeta)\|/|\zeta_n-\zeta| \to \infty$}, and by the uniform boundedness principle, $y\in\sY$ such that \hbox{$(f(\zeta_n)y - f(\zeta)y)/(\zeta_n-\zeta)$} diverges. However, since $f$ is strongly holomorphic, the limit of the latter is \hbox{$\frac{d}{dz}(f(z)y)|_{z=\zeta}$}. Contradiction. \smallskip \noindent (b): We need to show that, for each $y\in \sY$, $x \mapsto f(x) y$ [abbreviated here $f(\;) y$] is holomorphic near each point of $\calU$. By the dense strong holomorphy assumption, there is $D$ dense in $\sY$ such that, for every $u\in D$, $f(\;) u$ is holomorphic. Also, for any sequence $D \ni y_n \to y$, local boundedness implies that the sequence $f(\;) y_n$ converges not merely pointwise, but locally uniformly, to $f(\;) y$, which is therefore holomorphic by Prop. \ref{prop:convergent-sequences}. \noindent (c): Fix compact $K\subset \calU$. For every $y\in\sY$, ${f(\;)y}$ is holomorphic by hypothesis, therefore continuous, therefore bounded on $K$. 
The uniform boundedness principle secures boundedness of $f$ on $K$. \noindent (d): Local boundedness \& dense strong G-holomorphy implies strong G-holomorphy by the G\^ateaux version of (b), which implies G-holomorphy by (a). Finally, holomorphy follows by Thm.~\ref{thm:GTHZ}. \noindent (e): We have G-holomorphy by (a), and local boundedness by (c). Again, conclude via Thm.~\ref{thm:GTHZ}. \end{proof} \begin{prop} \label{prop:wk*-holo} For $\Arr{\calU}{f}{\sY^*}$, the following are equivalent: \newline \noindent (a) holomorphy \newline \noindent (b) weak-* holomorphy \newline \noindent (c) local boundedness \& dense weak-* G-holomorphy \end{prop} \begin{proof} This follows immediately from Lemma~\ref{lem:st-holo} for the case $\sY^* \simeq \Lin(\sY;\Cmplx)$, realizing that the adjective ``strong'' there specializes to ``weak-*''. \end{proof} \begin{prop} \label{prop:wk-holo} For $\Arr{\calU}{f}{\sY}$, the following are equivalent: \newline \noindent (a) holomorphy \newline \noindent (b) weak holomorphy \newline \noindent (c) local boundedness \& dense weak G-holomorphy \end{prop} \begin{proof} ${\sY}$ is isometrically embedded in its bidual \hbox{${\sY}^{**} \cong {\Lin(\sY^*;\Cmplx)}$.} Now apply Prop.~\ref{prop:wk*-holo}. \end{proof} \begin{prop} \label{prop:WO-holo} For $\Arr{\calU}{f}{\Lin(\sY;\sZ)}$, the following are equivalent: \newline \noindent (a) holomorphy \newline \noindent (b) weak-operator holomorphy \newline \noindent (c) loc. bdd. \& dense weak-operator G-holomorphy \end{prop} \begin{proof} Use the same trick as in Prop.~\ref{prop:wk-holo} to write \hbox{$\Arr{\calU}{f}{ \Lin(\sY;\Lin(\sZ^*;\Cmplx)) }$}, apply Lemma~\ref{lem:st-holo} directly, and then Prop.~\ref{prop:wk-holo}. \end{proof} Although holomorphy for the case $\sX\equiv \sY\equiv \Cmplx$ is not usually discussed in terms of linear operators as here, we may note that it fits in perfectly. 
The operator $Df(a)$ in that case can be construed simply as multiplication by a complex number, $\partial f(a)$, so that $a\mapsto Df(a)$ is identified with the complex function $a \mapsto \partial f(a)$. Differentiation does not generate objects of a fundamentally different type in that case. For higher-dimensional Banach spaces, however, it does so, and part (b) of Thm.~\ref{thm:holomorphy-summary} thereby gains in importance. The $D^nf$, as $n$ varies, all have distinct codomains, yet they are all holomorphic if $f$ is so. We close this Section with a proof of the sequential permanence property mentioned earlier, which is also found as Prop.~9.13 of Mujica\cite{Mujica}. \begin{prop} \label{prop:convergent-sequences} If $\Arr{\calU}{f_n}{\sY}$ is a sequence of holomorphic mappings converging to $f$ uniformly on compact subsets of $\calU$, then $f$ is holomorphic. \end{prop} \begin{proof} Use Thm.~\ref{thm:GTHZ} (\hbox{G-holomorphic} and locally bounded $\Leftrightarrow$ holomorphic). For any compact subset $K$ of $\calU$, the $f_n$'s are bounded on $K$ and converge uniformly to $f$ there, hence $f$ is bounded on $K$. By Prop.~\ref{prop:wk-holo}, G-holomorphy of $f$ reduces to the case \hbox{$\calU\subseteq\Cmplx$, $\sY = \Cmplx$}, which is a well-known result of classical complex analysis. \end{proof} \section{Hilbert riggings} \label{sec:Hilbert-riggings} This section is also primarily background, although Prop.~\ref{prop:iso-to-closed} is not standard and will play an important r\^ole. Section~\ref{sec:kinetic-energy} is a concrete illustration of Hilbert rigging intended primarily for those unfamiliar with the idea. A Hilbert rigging of a Hilbert space $\sH$ is a sandwiching $\sHp\subset \sH \subset\sHm$ by two other Hilbert spaces such that $\sHm$ is the dual space of $\sHp$ with respect to the original inner product on $\sH$. They will be used through the identification of a family $\calC_\relbd$ of {\sqf}s in $\sH$ with $\Lin(\sHp;\sHm)$ for an appropriate $\sHp$. 
Prop.~\ref{prop:iso-to-closed} concerns the identification of isomorphisms from $\sHp$ to $\sHm$ with closed operators on $\sH$. \subsection{Example: kinetic energy} \label{sec:kinetic-energy} Before presenting the abstract construction of Hilbert rigging, we illustrate briefly with the concrete and pertinent example of kinetic energy. The reader unfamiliar with Hilbert riggings may find it helpful to keep this example in mind in Section~\ref{sec:Hilbert-rigging-abstract}. Thus, take $\sHz$ to be $L^2(\Real^n)$; the inner product is \begin{equation} \label{eq:H0} \inpr{u}{v} = \int u(x)^* v(x) \, d^nx = \int \widetilde{u}(p)^* \widetilde{v}(p) \, d^np. \end{equation} Fourier transform will be indicated (in this subsection only) by an over-tilde, as above. Now, a sesquilinear form corresponding to kinetic energy is \begin{equation} \label{eq:KE-form} \inpr{\phi}{\psi}_+ \defeq \inpr{\phi}{\psi} + \sum_{i=1}^{n}\inpr{\partial_i \phi}{\partial_i \psi}. \end{equation} To be precise, $\|{\psi}\|_+^2 = \inpr{\psi}{\psi}_+$ is the kinetic energy of vector state $\psi$, up to the addition of $\|{\psi}\|^2$. The notation suggests, as indeed is the case, that this sesquilinear form is a legitimate inner product. Moreover, it corresponds to a Hilbert space $\sHp$ based on a dense subspace of $\sH$. That this is so is best seen in momentum space, a move which also alleviates the technical complication that we must be careful to {\it a priori} interpret the derivatives in (\ref{eq:KE-form}) in a weak or distributional sense. The momentum space expression is \begin{equation} \label{eq:KE-form-momentum} \inpr{\phi}{\psi}_+ = \int \widetilde{\phi}(p)^* \widetilde{\psi}(p) \, (1+|p|^2) d^np. \end{equation} This clarifies both that there really is a subspace of $\sH$ which is complete for the new inner product $\inpr{\phantom{\phi}}{\phantom{\psi}}_+$ and why we included the term $\inpr{{\phi}}{{\psi}}$ in (\ref{eq:KE-form}). 
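A quick check (an aside) that $\sHp$ is a {\em proper} subspace of $\sH$: for $n=1$, take $\widetilde{\psi}(p) = (1+p^2)^{-1/2}$. Then
\begin{equation*}
\|\psi\|^2 = \int \frac{dp}{1+p^2} = \pi < \infty, \qquad \|\psi\|_+^2 = \int \frac{1+p^2}{1+p^2}\, dp = \infty,
\end{equation*}
so $\psi\in\sH$ but $\psi\notin\sHp$: the states of finite kinetic energy form a dense but proper subspace of $\sH$.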
Authorized by the Riesz-Fr\'echet theorem, we could identify $\sHp$ with its dual space as usual, associating $\phi\in\sHp$ with the functional $\psi \mapsto \inpr{\phi}{\psi}_+$. However, we want to identify the dual with respect not to $\inpr{\phantom{\phi}}{\phantom{\psi}}_+$, but with respect to $\inpr{\phantom{\phi}}{\phantom{\psi}}$. The momentum-space expression (\ref{eq:KE-form-momentum}) makes clear how to do this: Define $J$ by $\widetilde{J\phi}(p) = (1+|p|^2)\widetilde{\phi}(p)$, so that $\inpr{\phi}{\psi}_+ = \inpr{J\phi}{\psi}$, where the right-hand side represents some extension of the inner product on $\sH$. With the inner product \begin{equation} \label{eq:H-minus} \inpr{\phi}{\psi}_{-} = \int \widetilde{\phi}(p)^* \widetilde{\psi}(p) \, (1+|p|^2)^{-1} d^np, \end{equation} we get another Hilbert space $\sHm$ such that \hbox{$\sHp\subset\sH\subset\sHm$}, and $\Arr{\sHp}{J}{\sHm}$ is unitary. All three of these spaces consist of functions in momentum space, but elements of $\sHm$ are actually tempered distributions, in general. For instance, if $n=1$, $\sHm$ contains delta-functions. Now we can clarify the meaning of $\inpr{J\phi}{\psi}$: The map $\sHp\times\sHp \ni (\phi,\psi) \mapsto \inpr{\phi}{\psi}$ admits an extension by continuity to either $\sH$ in both factors (yielding the ordinary inner product), or to $\sHm$ in one factor. \subsection{General construction} \label{sec:Hilbert-rigging-abstract} We now review the abstract idea of a {\it Hilbert rigging} as summarized in the (not commutative!) 
diagram \begin{equation} \label{eq:scale-of-spaces} \begin{tikzcd} \sHp \arrow[r,hookrightarrow,"\iota\subsub{+}"] \arrow[rr,bend right=35, "J"] & \sHz \arrow[r,hookrightarrow,"\iota\subsub{0}"] & \sHm \arrow[ll,bend right=35, "J^{-1}"'] \end{tikzcd} \end{equation} Expositions of this technology can be found in \S II.2 of Simon\cite{Simon-Forms}, \S VIII.6 of Reed~\&~Simon\cite{Reed+Simon}, Ch.~4 of de~Oliveira\cite{deOliveira}, or \S 14.1 of Berezansky\cite{Berezansky-II}. Start with a Hilbert space $\sHz$ with inner product $\inpr{\phantom{u}}{\phantom{v}}$, and a dense subspace equipped with stronger inner product $\inpr{\phantom{u}}{\phantom{v}}_+$, which makes it into a Hilbert space $\sHp$, so that the inclusion of one underlying vector space $\{\sHp\}$ into the other $\{\sHz\}$ induces a continuous injection $\iota_{\scriptscriptstyle{+}} \colon \sHp {\hookrightarrow} \sHz$. The adjoint of $\iota_{_+}$, defined by \begin{equation} \label{eq:i*} \ilinpr{\iota_{_+}^* u}{\psi}_{+} = \inpr{u}{\iota_{_+}\psi} \end{equation} is also injective with dense image, since taking adjoints swaps those properties. Use $\iota_{_+}^*$ to define a new inner product on $\{\sHz\}$ via \begin{equation} \inpr{u}{v}_{-} \defeq \ilinpr{\iota_{_+}^* u}{\iota_{_+}^* v}_{_+}, \end{equation} equipped with which it becomes the preHilbert space $\{\sHz\}_{-}$, with a completion denoted $\sHm$. The inclusion of $\{\sHz\}$ into $\sHm$ is $\iota_0$. By construction, $\iota_{_+}^*$ extends by continuity to a unitary mapping \begin{equation} J^{-1}\colon \sHm\overset{\sim}{\to} \sHp. \end{equation} Thus, suppressing the injection $\iota_+$ of $\sHp$ into $\sH$, we may rewrite (\ref{eq:i*}) as \begin{equation} \label{eq:dual-with-respect-to-H} \inpr{u}{\psi} = \ilinpr{J^{-1} u}{\psi}_{+}. \end{equation} Furthermore, according to the preceding, the right-hand side extends by continuity to a continuous sesquilinear map on $\sHm\times \sHp$ with $J^{-1}\sHm = \sHp$. 
Using (\ref{eq:dual-with-respect-to-H}) then to define an extension of the $\sH$ inner product $\inpr{\;}{\;}$ to $\sHm\times\sHp$, we say that $\sHm$ realizes the dual space of $\sHp$ relative to the original inner product. The maps in (\ref{eq:scale-of-spaces}) naturally induce two {\em bounded} linear mappings \begin{align} \nonumber & { T \mapsto \iota_0 T \iota_+ }\;\colon {\Lin(\sH)}\to {\Lin(\sHp;\sHm)}, \nonumber \\ & {T \mapsto \iota_+ T \iota_0}\;\colon {\Lin(\sHm;\sHp)}\to {\Lin(\sH) }. \nonumber \end{align} These will be useful below. More interesting, though, is a map that takes arbitrary $\hat{T}\in\Lin(\sHp;\sHm)$ into a (generally unbounded) linear operator $T$ on $\sH$ according to the following notational convention. \begin{cnvntn} \label{cnvntn:hats} For \hbox{$\hat{T}\in\Lin(\sHp;\sHm)$}, ${T}$ denotes the restriction of $\hat{T}$ to \hbox{$\dom {T} = \setof{\psi\in\sHp}{\hat{T}\psi\in\sH}$}, considered simply as an operator {\em in} $\sH$. \end{cnvntn} Not every linear operator in $\sH$ comes from an operator in $\Lin(\sHp;\sHm)$ in this way, so one should not think of the hat as a map or transform of some sort; the map actually goes the other way. The following Proposition can be viewed as an analog of Lemma~\ref{lem:inverse-of-perturbed-closed-op}. It plays an important r\^ole in the theory. \begin{prop} \label{prop:iso-to-closed} Let \hbox{$\hat{T}\in\Linv(\sHp;\sHm)$} be given. \newline\noindent\textnormal{(a)} ${T}\in\Lincl(\sH)$, i.e., it is closed with dense domain. \newline\noindent\textnormal{(b)} \hbox{$\Arr{\Linv(\sHp;\sHm)}{\hat{T}\mapsto {T}^{-1} }{\Lin(\sH)}$} is holomorphic. \end{prop} \begin{proof} ${T}^{-1} = {\iota_+}{\hat{T}^{-1}}{\iota_0}$ is bounded with domain $\sH$, hence closed, hence so is ${T}$. Since $\iota_0$ and $\iota_+$ have dense image, $\dom{T}$ is dense in $\sH$. 
Finally, $\hat{T}\mapsto {T}^{-1}$ is holomorphic since it is explicitly a composite of inversion and composition with a linear map, which are holomorphic operations. \end{proof} Certainly ${T}^{-1}$ exists for some operators $\hat{T}$ in $\Lin(\sHp;\sHm)$ which are not invertible, and one may ask whether \hbox{$\hat{T}\mapsto{T}^{-1}$} is holomorphic on a larger domain. Close examination of this question is postponed until Prop.~\ref{prop:final-piece}, when more motivation will be in place. \section{Families of forms and operators} \label{sec:families} This section is the technical core of the paper, preparing for applications in Sections \ref{sec:QM}, \ref{sec:eigenvalues}, and \ref{sec:free-energy}. Section~\ref{sec:sforms-1} recalls some basic ideas and definitions connected with sesquilinear forms ({\sqf}s). That is preparation for consideration of families of sectorial forms parameterized over an open set $\calU$ of some Banach space. We want these parameterizations to be holomorphic, hence the generalization of the $\Real$-centered notion of lower-bounded hermitian to sectorial. However, this can make sense only if relevant classes of {\sqf}s have a Banach space structure themselves. Thm.~\ref{thm:[]+-} solves this problem, showing that the class $\calC_\relbd$ of {\sqf}s relatively bounded with respect to an equivalence class $\calC$ of closable forms is naturally identified with $\Lin(\sHp;\sHm)$, where $\sHp\subset \sH\subset \sHm$ is a Hilbert rigging. Attention is then turned to the closed operators associated with the sectorial forms $\sct{\calC}$ in $\calC$. Thm.~\ref{thm:resolvent-holo} is the second main result, showing that the operator $H$ associated with $\frm{h}\in\sct{\calC}$ is invertible iff $\frm{h}$ viewed as an element of $\Lin(\sHp;\sHm)$ is so. 
This gives holomorphy of the $\Rmap$-map $(\zeta,\frm{h}) \mapsto (H-\zeta)^{-1}$ on its natural domain in $\Cmplx\times\sct{\calC}$, which will be a basic tool in Sections \ref{sec:eigenvalues} and \ref{sec:free-energy}. Attention then swings back to parameterizations and convenient criteria for a family $\frm{h}$ of {\sqf}s to be a \RSF, i.e., holomorphically embedded in some $\sct{\calC}$. \subsection{Sesquilinear forms} \label{sec:sforms-1} This section consists mostly of definitions, as well as some notational conventions. A standard source for this material is \S\S VI.1,2 of Kato's treatise\cite{Kato}. \begin{enumerate}[label={(\arabic*)}] \label{defn:SQF} \item A {\it sesquilinear form} ({\it \sqf} henceforth) $\frm{h}$ on complex vector space $\sK$ is a map \hbox{$(\phi,\psi) \mapsto \frm{h}[\phi,\psi] \Type{\sK\times \sK}{\Cmplx}$} linear in the second variable and conjugate-linear in the first. (Conjugate-linearity distinguishes these from bilinear forms.) Dirac-style notation will also be used: $\Dbraket{\phi}{\frm{h}}{\psi} \equiv \frm{h}[\phi,\psi]$. To a sesquilinear form is associated a {\it quadratic form} $\frm{h}[\psi] \defeq \frm{h}[\psi,\psi]$. The sesquilinear form can be recovered by polarization, so we will always use the term \sqf\ for economy. We write $|\frm{t}|$ for the map $\psi\mapsto |\frm{t}[\psi]|$. This {\em is not} an \sqf, unless $|\frm{t}| = \frm{t}$. \item The {\it adjoint} of the {\sqf} $\frm{h}$ is \hbox{$\frm{h}^*[\phi,\psi] \defeq \overline{\frm{h}[\psi,\phi]}$}. If \hbox{$\frm{h} = \frm{h}^*$}, $\frm{h}$ is {\it hermitian}. $\frm{h}$ is split into {\it real} and {\it imaginary} hermitian parts as $\frm{h} = \frm{h}^r + i \frm{h}^i$ with $\frm{h}^r = \frac{1}{2}(\frm{h} + \frm{h}^*)$, $\frm{h}^i = \frac{1}{2i}(\frm{h} - \frm{h}^*)$. 
Hermitian quadratic forms are partially ordered similarly to self-adjoint operators: $\frm{h} \le \frm{h}'$ means $\forall\psi\in\sK, \; \frm{h}[\psi] \le \frm{h}'[\psi]$. The inner product of the ambient Hilbert space provides the special \sqf\ \hbox{${\bm 1}[\phi,\psi] \defeq \inpr{\phi}{\psi}$}. \item The {\it numerical range} of $\frm{h}$ is the set \begin{equation} \label{eq:numerical-range} \Num \frm{h} \defeq \setof{\frm{h}[\psi]}{\psi\in\dom \frm{h}, \|\psi\|=1}. \end{equation} The role of numerical range for {\sqf}s is somewhat analogous to that of {\it spectrum} for operators. \begin{lem} \label{lem:Num-cvx} $\Num \frm{h}$ is a convex set. \end{lem} \begin{proof} We need to show that the line segment in $\Cmplx$ from $\frm{h}[\psi]$ to $\frm{h}[\phi]$ is in $\Num \frm{h}$, for unit vectors $\psi,\phi \in\dom \frm{h}$. By suitable scaling and translation (replace $\frm{h}$ by $a\frm{h}+b{\bm 1}$), we may assume that $\frm{h}[\psi]=0$ and $\frm{h}[\phi]=1$. Define \hbox{$\varphi(s) = (1-s)\psi + s e^{i\theta}\phi$} for $0\le s\le 1$, with $\theta$ to be chosen. Then, \begin{equation} \nonumber \frm{h}[\varphi(s)] = s^2 + s(1-s)\Big\{ e^{i\theta} \frm{h}[\psi,\phi] + e^{-i\theta} \frm{h}[\phi,\psi]\Big\}. \end{equation} For suitable choice of $\theta$, the quantity in braces, and thus $\frm{h}[\varphi(s)]$, is real. $\frm{h}[\varphi(s)]$ goes continuously from $0$ to $1$ as $s$ increases from $0$ to $1$, and therefore covers at least the segment $[0,1]$. Since $\varphi(0)$ and $\varphi(1)$ are already normalized, normalizing $\varphi(s)$ will not alter this conclusion. \end{proof} \item \label{item:sector} An {\em open sector} is a right-facing wedge, \begin{equation} \nonumber \oSec{c}{\theta} \defeq \setof{c + r e^{i\varphi}}{r > 0,\, |\varphi| < \theta}, \end{equation} in $\Cmplx$ for some {\em vertex} $c\in\Cmplx$ and {\em half-angle} $\theta < \pi/2$, and the {\em closed sector} $\cSec{c}{\theta}$ is its closure. 
If sector $\Sigma$ is contained in the interior of $\Sigma'$ and $\Sigma'$ has a strictly larger half-angle than does $\Sigma$, then $\Sigma'$ is a {\em dilation} of $\Sigma$. \item \label{item:sectorial} $\frm{h}$ is {\it sectorial} if its numerical range is contained in some sector, and any such will be said to be {\it a sector for $\frm{h}$}. $\Sigma$ is an {\em ample sector} for $\frm{h}$ if it is a dilation of some sector for $\frm{h}$. For any sectorial form $\frm{h}$, $\frm{h}^+$ will denote an arbitrary translate $m{\bm 1} + \frm{h}^r$ such that ${\bm 1} \le \frm{h}^+$. (Of course, the choice of $m$ can be standardized, but for our purposes there is no need.) \item Any operator $T$ in $\sH$ naturally induces an \sqf\ on $\dom T$ by $(\phi,\psi) \mapsto \inpr{\phi}{T\psi}$. The numerical range of $T$ is simply the numerical range of this \sqf. Caution: a closed operator is called {\it sectorial} if its spectrum lies in a sector. This is not the same thing as the associated \sqf\ being sectorial; the latter is a stronger condition. The relation between numerical range and spectrum is taken up in Section~\ref{sec:Num-and-spec}. \item The vector space of {\sqf}s on a dense subspace $\sK$ of $\sH$ will be denoted $\SF(\sK)$. The set of sectorial {\sqf}s on $\sK$, denoted $\sct{\SF}(\sK)$, is a cone in $\SF(\sK)$. Generally, superscript `$\triangleleft$' indicates the sectorial members of any class of {\sqf}s. \item {\sqf} $\frm{t}$ is {\it bounded relative to} {\sqf} $\frm{h}$, denoted $\frm{t} \relbd \frm{h}$, if $\dom \frm{t} \supseteq \dom \frm{h}$ and there exist $a,b > 0$ such that $|\frm{t}[\psi]| \le a {\bm 1}[\psi] + b |\frm{h}[\psi]|$ for every $\psi\in\dom\frm{h}$. The relation $\relbd$ is reflexive and transitive. If $\frm{t}\relbd\frm{h}$ and $\frm{h}\relbd\frm{t}$, then $\frm{t}$ and $\frm{h}$ are {\it equivalent}, denoted $\frm{t}\sim\frm{h}$. Equivalent {\sqf}s have the same domain. 
Sectoriality of $\frm{h}$ can be expressed as: $\frm{h}^r$ is bounded below and \hbox{$\frm{h}^i \relbd \frm{h}^r$}. $\relbd$ has a modest but useful calculus. For instance, \begin{alignat}{3} & B\in\Lin(\sH) & \;\Rightarrow\; & B \relbd \frm{h}, \nonumber \\ & c\in\Cmplx\setminus\{0\} & \;\Rightarrow\; & \frm{h} \sim c\frm{h}, \nonumber \\ & \frm{t} \relbd \frm{h} & \;\Rightarrow\; & \frm{h}+\frm{t} \relbd \frm{h}, \nonumber \\ & \frm{t}, \frm{h} \text{ sectorial } & \;\Rightarrow\; & \frm{h} \relbd \frm{h}+\frm{t}. \nonumber \end{alignat} \item \label{item:Cauchy} A sequence $(\psi_n)$ in $\dom \frm{h}$ is {\it $\frm{h}$-Cauchy} if \hbox{$(|\frm{h}| + {\bm 1})[\psi_n-\psi_m]\to 0$}. It \hbox{\it $\frm{h}$-converges} to $\psi$ if \hbox{$(|\frm{h}| + {\bm 1})[\psi_n-\psi]\to 0$}. $\frm{h}$ is {\it closed} if all {$\frm{h}$-Cauchy} sequences $\frm{h}$-converge, {\it closable} if it has a closed extension. Note that $\frm{t}\relbd\frm{h}$ is equivalent to the condition that every $\frm{h}$-Cauchy sequence is $\frm{t}$-Cauchy. \end{enumerate} \subsection{Completion and closure} \label{sec:closure} The notion of Cauchy-ness in item \ref{item:Cauchy} above is common across an equivalence ($\sim$) class of {\sqf}s. This is an important fact, as it points the way to a ``completion'' of an entire equivalence class on a common domain. Therefore, we consider an equivalence class $\calC$ of {\sqf}s defined on a dense subspace $\sK\subseteq \sH$, containing a sectorial {\sqf} $\frm{h}$, and therefore a hermitian {\sqf} $\frm{h}^+ \ge {\bm 1}$. The class of all forms on $\sK$ which are bounded relative to those in $\calC$ is denoted $\calC_\relbd$. The various sets of {\sqf}s involved here are related as \begin{equation} \sct{\calC} = \calC\cap \sct{\SF}(\sK) \subset \calC \subset \calC_\relbd \subset \SF(\sK). \end{equation} The set $\sct{\calC}$, the sectorial forms among $\calC$, is a cone, while $\calC_\relbd$ is a vector space. 
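A finite-dimensional illustration of the numerical range and sectoriality notions above (an aside): on $\sH = \Cmplx^2$, let $\frm{h}[\phi,\psi] = \inpr{\phi}{A\psi}$ with $A = \mathrm{diag}(1,i)$. For a unit vector $\psi = (\psi_1,\psi_2)$,
\begin{equation*}
\frm{h}[\psi] = |\psi_1|^2 + i\,|\psi_2|^2 = (1-s) + is, \qquad s = |\psi_2|^2 \in [0,1],
\end{equation*}
so $\Num \frm{h}$ is the line segment from $1$ to $i$, convex as Lemma~\ref{lem:Num-cvx} requires. Relative to the vertex $c=-1$, each point $(2-s)+is$ of the segment has argument $\arctan\bigl(s/(2-s)\bigr) \le \pi/4$, so $\frm{h}$ is sectorial, with sector $\cSec{-1}{\pi/4}$.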
It will emerge that $\calC_\relbd$ has a natural Banach space structure, up to norm-equivalence. Two ${\mathcal C}$-Cauchy sequences $(x_n)$ and $(y_n)$ are equivalent if \hbox{$(x_n-y_n)$} $\calC$-converges to $0$. This is written as $x \sim y$, and the equivalence class of $(x_n)$ is denoted $x^\sim$. Vectors in $\sK$ are identified with the classes of constant sequences. The completion of ${\mathcal C}$ is constructed on the vector space \begin{equation} \nonumber \sKc \defeq\{\sim\text{-classes of } {\mathcal C}\text{-Cauchy sequences in }\sK\}, \end{equation} and {\sqf}s in $\calC$ are extended to $\sKc$ according to \begin{equation} \label{eq:K-bar-forms} \Dbraket{{x^\sim}}{\frm{t}}{{y^\sim}} \defeq \lim_{n\to\infty} \Dbraket{x_n}{\frm{t}}{y_n}, \end{equation} as we now discuss. The term {\it completion} suggests that we are dealing with the ordinary completion of a relevant preHilbert space structure on $\sK$. That is correct, and the inner product represented by any $\frm{h}^+ \ge {\bm 1}$ in $\calC$ will do. Let $\frm{h} \in \calC$ be sectorial, $\frm{h}^+$ as in item \ref{item:sectorial} above, and $(\sK,\frm{h}^+)$ be the preHilbert space structure consisting of the space $\sK$ with inner product $\inpr{{\phi}}{{\psi}}_{\frm{h}} \defeq \Dbraket{{\phi}}{\frm{h}^+}{{\psi}}$. $\calC$-Cauchy is the same thing as $(\sK,\frm{h}^+)$-Cauchy in the usual sense, and the usual Hilbert space completion of $(\sK,\frm{h}^+)$ can be viewed as being carried on $\sKc$. In order to see that {\sqf}s in $\calC$ can be extended to $\sKc$, we need to know that they satisfy a Cauchy-Schwarz-like inequality. \begin{lem} \label{lem:quasi-Cauchy-Schwarz} Suppose ${\bm 1} \le \frm{h}^+$ and $\frm{t} \relbd \frm{h}$. 
Then, there is some $M > 0$ such that for every $x,y\in\dom \frm{h}^+$, \begin{equation} \label{eq:pseudo-CS} |\Dbraket{x}{\frm{t}}{y}|^2 \le M \frm{h}^+[x] \frm{h}^+[y]. \end{equation} \end{lem} \begin{proof} Only the case $\frm{t}$ hermitian, $|\frm{t}| \le \frm{h}^+$, $\frm{t}[x,y]$ real, $\frm{h}^+[x] = \frm{h}^+[y] = 1$ need be checked, since the general case follows by rescaling, multiplying $x$ by a phase $e^{i\theta}$, and $|\frm{t}[x,y]| \le |\frm{t}^r[x,y]| + |\frm{t}^i[x,y]|$. Here is the verification of the special case: \begin{align} 4|\frm{t}[x,y]| &= \left| \frm{t}[x+y] - \frm{t}[x-y] \right| \nonumber \\ & \le |\frm{t}[x+y]| + |\frm{t}[x-y]| \nonumber \\ & \le \frm{h}^+[x+y] + \frm{h}^+[x-y] = 4 \nonumber \end{align} \end{proof} This lemma asserts that every $\frm{t}\in\calC_\relbd$ is a bounded sesquilinear form on the dense subspace $\sK$ of $(\sKc,\frm{h}^+)$, hence extends by continuity to the full space so as to satisfy (\ref{eq:K-bar-forms}). Each such extended {\sqf} is represented by a bounded operator on $(\sKc,\frm{h}^+)$; for instance, $\frm{h}^+$ itself is represented by the identity. However, we also desire to identify $\sKc$ with a subspace of the ambient Hilbert space $\sH$. Certainly, the inclusion $\iota\colon {\sK} \hookrightarrow {\sH}$ extends by continuity to a bounded operator $\Arr{(\sKc,{\frm{h}^+})}{\tilde{\iota}}{\sH}$. The only question is whether it is injective. It fails to be so only if there are two inequivalent $\calC$-Cauchy sequences in $\sK$ which converge as sequences in $\sH$ to the same vector. By linearity, only the case $x_n \to 0$ in $\sH$ need be considered: $x \sim 0$ fails if and only if $\frm{t}[x_n] \not\to 0$ for some (equivalently, every) $\frm{t}\in\calC$. The test may therefore be performed with any member of the class $\calC$. If $\tilde{\iota}$ is injective, we simply identify $\sKc$ with its image, and thereby obtain a closed {\sqf} in $\sH$ for every $\frm{t}$ in $\calC$. In that case, $\calC$ is said to be closable.
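A standard example of a class failing this injectivity test, essentially as in Kato\cite{Kato} (an illustration; these particular forms are not used elsewhere): take $\sH = L^2(\Real)$, $\sK = C_c^\infty(\Real)$, and
\begin{equation} \nonumber
\frm{h}^+[\psi] = \|\psi\|^2 + |\psi(0)|^2.
\end{equation}
Choose bump functions $\psi_n$ with $\psi_n(0) = 1$, $0 \le \psi_n \le 1$, supported in $[-1/n,1/n]$. Then \hbox{$\frm{h}^+[\psi_n-\psi_m] = \|\psi_n-\psi_m\|^2 \to 0$}, so $(\psi_n)$ is $\calC$-Cauchy, and $\psi_n \to 0$ in $\sH$; but \hbox{$\frm{h}^+[\psi_n] \to 1 \neq 0$}, so $\psi^\sim \neq 0$ while $\tilde{\iota}\,\psi^\sim = 0$. This class is not closable.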
Although it must be checked, only closable classes are of interest to us, so closability is assumed henceforth. \subsection{From {\sqf}s to operators} \label{sec:ops-from-forms} With $(\sKc,{\frm{h}^+})$ in the role of $\sHp$, we obtain a Hilbert rigging as in Section~\ref{sec:Hilbert-rigging-abstract}, from which we now take over various notations. Any bounded sesquilinear form $\frm{t}$ on $\sHp$ (in particular, one in $\calC$) is represented by a unique operator $\Opp{+}{+}{\frm{t}}\in\Lin(\sHp)$ satisfying \begin{equation} \inpr{\phi}{\Opp{+}{+}{\frm{t}}\psi}_{+} = \Dbraket{\phi}{\frm{t}}{\psi} \end{equation} for all $\phi,\psi\in\sK$. Using the unitary isomorphism $\Arr{\sHp}{J}{\sHm}$, we get another ``representation'' \hbox{$\Opp{+}{-}{\frm{t}} \defeq J \Opp{+}{+}{\frm{t}} \in\Lin(\sHp;\sHm)$} of $\frm{t}$ satisfying \begin{equation} \inpr{\phi}{\Opp{+}{-}{\frm{t}}\psi} = \Dbraket{\phi}{\frm{t}}{\psi}. \end{equation} [Recall that the inner product on $\sH$ extends to \hbox{$\sHm\times\sHp \cup \sHp\times\sHm$} as in (\ref{eq:dual-with-respect-to-H}).] The notation $\Opp{a}{b}{\frm{t}}$ indicates the domain and range spaces in the subscript and superscript, respectively. It is unambiguous, but cumbersome. Fortunately, it will not be needed much. Restricting $\Opp{+}{-}{\frm{t}}$ to those $\psi$ such that $\Opp{+}{-}{\frm{t}}\psi$ is in $\sH$ yields yet a third operator, $\Opp{0}{0}{\frm{t}} \in\Lin_0(\sH)$. The domain and range of this operator are subspaces of $\sH$. \subsection{Holomorphy of the $\Rmap$-map} \label{sec:R-map} The operator guise of $\frm{t}$ which is ultimately of most interest is $\Opp{0}{0}{\phantom{\frm{t}}}$. However, the $\Opp{+}{-}{\phantom{\frm{t}}}$ and $\Opp{+}{+}{\phantom{\frm{t}}}$ forms have some especially nice properties, collectively: \begin{thm} \label{thm:[]+-} The map $\frm{t}\mapsto \Opp{+}{-}{\frm{t}}$ is a bijection between $\calC_{\relbd}$ and $\Lin(\sHp;\sHm)$.
The image $\Opp{+}{-}{\sct{\calC}}$ of $\sct{\calC}$ under this map is an open subset of \hbox{$\Linv(\sHp;\sHm) - \Real_+$}. \end{thm} \begin{proof}[Proof of Thm. \ref{thm:[]+-}, part 1] We already know from Section \ref{sec:closure} that there is a natural bijection between $\calC_{\relbd}$ and $\Lin(\sHp)$. By means of the unitary $J$, this becomes a bijection onto $\Lin(\sHp;\sHm)$. \end{proof} \begin{cnvntn} \label{cnvntn:C<-as-Banach-space} From now on, we consider $\calC_\relbd$ to be equipped with this Banach space structure --- up to norm-equivalence. This structure is independent of the choice of $\frm{h}^+$ used to construct $\sHp$, and therefore intrinsic. \end{cnvntn} The proof of the second part of Thm.~\ref{thm:[]+-} relies on the following three Lemmas. \begin{lem} \label{lem:adjoint-ops} If $\frm{t}\in\sct{\calC}$, then $\Opp{a}{b}{\frm{t}^*} = ( \Opp{a}{b}{\frm{t}} )^*$ for all choices of $a$ and $b$. (N.B., the two $*$'s mean slightly different things.) \end{lem} \begin{proof} For $\Opp{+}{+}{\frm{t}}$ and $\Opp{+}{-}{\frm{t}}$, this is a simple matter of checking definitions. $\Opp{0}{0}{\frm{t}}$ involves some consideration of domains. $\psi\in\sHp$ is in $\dom \Opp{0}{0}{\frm{t}^*}$ iff \hbox{$\phi \mapsto \Dbraket{\psi}{\frm{t}}{\phi}$} extends to a bounded functional on $\sH$, whereas $\psi\in\sH$ is in $\dom (\Opp{0}{0}{\frm{t}})^*$ iff \hbox{$\phi \mapsto \inpr{\psi}{\Opp{0}{0}{\frm{t}}\,\phi}$} does so. Hence $\Opp{0}{0}{\frm{t}^*} \subseteq ( \Opp{0}{0}{\frm{t}} )^*$ is clear. To see the opposite inclusion, recognize that these are both closed sectorial operators, and without loss we may suppose that they are both {\em surjective}. \end{proof} \begin{lem} \label{lem:sectorial-to-iso} If $\frm{t}\in\sct{\calC}$ satisfies ${\bm 1}\le \frm{t}^r$, then \hbox{$\Opp{+}{-}{\frm{t}}\in \Linv(\sHp;\sHm)$}. \end{lem} \begin{proof} For notational simplicity, set $T\defeq \Opp{+}{-}{\frm{t}}$.
Also, we may assume that $\frm{t}^r$ dominates $\|\cdot\|_+^2$ without loss, since some multiple does so. \smallskip\newline $\ker T=\{0\}$ and $\rng T$ closed: $\|\psi\|_{+} \le \|{T}\psi\|_{-}$ follows from \hbox{$\|\psi\|_+^2 \le | \Dbraket{\psi}{\frm{t}}{\psi} | = | \ilinpr{\psi}{{T}\psi}_{0}| \le \|{T}\psi\|_{-}\|\psi\|_{+}$}. \smallskip\newline $\rng T$ dense: $(\rng {T})^\perp = \ker {T}^*$ and $|\frm{t}^*| = |\frm{t}|$. By Lemma~\ref{lem:adjoint-ops}, $\ker {T}^* = \{0\}$ follows just as $\ker {T} = \{0\}$ above. \smallskip\newline $\rng T = \sHm$: $\rng {T}$ is both closed and dense in $\sHm$. \end{proof} \begin{lem} \label{lem:sectorial-usc} Suppose $\Sigma$ is an ample sector for $\frm{t}$. Then, $\Sigma$ is an ample sector for all $\frm{s}$ in some neighborhood of $\frm{t}$. \end{lem} \begin{proof} Without loss of generality, we may add a constant to $\frm{t}$ so that ${\bm 1}\le \frm{t}$, and choose the form used to turn $\sKc$ into a Hilbert space such that $\Opp{+}{+}{\frm{t}} = 1+iK$, with $K$ a hermitian operator in $\Lin(\sHp)$. Then, with \hbox{$\Opp{+}{+}{\frm{s}} = (1+A) + i (K+B)$}, \begin{align} \left| \frm{t}[\psi] - \frm{s}[\psi] \right| & = \left| \inpr{\psi}{(A+iB)\psi}_+ \right| \nonumber \\ & \le (\|A\|+\|B\|)\|\psi\|_+^2 \nonumber \\ & \le (\|A\|+\|B\|) \left| \frm{t}[\psi] \right| \nonumber \end{align} \end{proof} \begin{proof}[Proof of Thm.~\ref{thm:[]+-}, part 2] If $\frm{t}\in\sct{\calC}$, then for some $m > 0$, $\frm{t}+m{\bm 1}$ satisfies the hypotheses of Lemma~\ref{lem:sectorial-to-iso}. It follows that $\Opp{+}{-}{\frm{t}} \in \Linv(\sHp;\sHm) - \Real_+$. It only remains to show that some neighborhood of $\Opp{+}{-}{\frm{t}}$ in $\Linv(\sHp;\sHm)$ corresponds to sectorial forms. This follows from Lemma~\ref{lem:sectorial-usc}. \end{proof} The pieces are now in place for a holomorphy-of-resolvent type result. \begin{cnvntn} \label{cnvntn:Rmap} If $H\in\Lincl(\sH)$, then $\Rmap(\zeta,H)$ is the resolvent of $H$ at $\zeta$.
This is thought of as a function of $\zeta$, in a context specified by $H$. We will overload this notation, writing $\Rmap(\zeta,{\frm{h}})$ for $\Rmap(\zeta,\Opp{0}{0}{\frm{h}})$, or in the case of an explicit parameterization, $\Rmap(\zeta,x)$ for $\Rmap(\zeta,H_x)$. In the latter two cases, we use the name $\Rmap$-map for $\Rmap$ (even though that's redundant), rather than {\it resolvent}. The $\Rmap$-map has two arguments; the context is specified by a \RSF. \end{cnvntn} We show now that the $\Rmap$-map is holomorphic on \begin{equation} \label{eq:Omega-def} \Omega \defeq \setof{ (\zeta,\frm{h})\in\Cmplx\times\sct{\calC} }{ \zeta\in\res\Opp{0}{0}{\frm{h}} }. \end{equation} Since \hbox{$(\zeta,\frm{h}) \mapsto \Opp{+}{-}{\frm{h}}-\zeta \in \Lin(\sHp;\sHm)$} is linear, this reduces to the question (recall Convention~\ref{cnvntn:hats}) whether $\hat{T}\mapsto {T}^{-1}$ is holomorphic on the subset of $\Linv(\sHp;\sHm)-\Cmplx$ where it is well-defined. Prop.~\ref{prop:iso-to-closed} addressed the case of $\Linv(\sHp;\sHm)$, and it is now a simple matter to extend it: \begin{prop} \label{prop:final-piece} Suppose \hbox{$\hat{T}\in\Linv(\sHp;\sHm) + \Lin(\sH)$}. \newline\noindent\textnormal{(a)} ${T}\in\Lincl(\sH)$. \newline\noindent\textnormal{(b)} ${T}$ is injective iff $\hat{T}$ is injective. \newline\noindent\textnormal{(c)} If \hbox{$\Arr{ \dom{T} }{{T}}{\sH}$} is bijective, \hbox{$\hat{T}\in\Linv(\sHp;\sHm)$}. \end{prop} \begin{proof} \noindent\textnormal{(a)} This follows immediately from Prop.~\ref{prop:iso-to-closed} and Lemma~\ref{lem:stability-of-closedness}. \noindent\textnormal{(b)} If $\hat{T}\phi=0$, then $\hat{T}\phi\in\sH$, so $\phi\in\dom{T}$ and ${T}\phi = 0$; conversely, ${T}$ is a restriction of $\hat{T}$. \noindent\textnormal{(c)} By assumption, \hbox{$\hat{T}+B\in\Linv(\sHp;\sHm)$} for some \hbox{$B\in\Lin(\sH)$}. Hence, given $\xi\in\sHm$, there is \hbox{$\phi\in\sHp$} such that \hbox{$\xi = (\hat{T}+B)\phi = \hat{T}\phi + B\phi$}.
But $B\phi\in\sH$, so bijectivity of ${T}$ provides \hbox{$\psi\in\dom{T}\subseteq\sHp$} with \hbox{$\hat{T}\psi = {T}\psi = B\phi$}, yielding $\xi = \hat{T}(\phi+\psi)$. That is, $\hat{T}$ is not only bounded, but bijective as well, so $\hat{T}\in\Linv(\sHp;\sHm)$ (Open Mapping Theorem). \end{proof} Therefore, the supposed extension from $\Linv(\sHp;\sHm)$ to $\Linv(\sHp;\sHm) - \Cmplx$ is illusory; all the operators we are interested in here are {\em actually} already in the former set. The following main result now follows immediately from the preceding work. \begin{thm} \label{thm:resolvent-holo} For $\frm{h}\in\sct{\calC}$, the closed operator $H = \Opp{0}{0}{\frm{h}}$ has an inverse in $\Lin(\sH)$ iff $\hat{H} = \Opp{+}{-}{\frm{h}}$ has an inverse in $\Lin(\sHm;\sHp)$, and $\Rmap$ is holomorphic on $\Omega$ \textnormal{[see (\ref{eq:Omega-def})]}. \end{thm} \subsection{Series expansion} \label{sec:series} We can reframe part of the main result in terms of the simplest ideas about series expansions. Suppose $\hat{H}$ is in $\Linv(\sHp;\sHm)$ and $\hat{T}$ is in $\Lin(\sHp;\sHm)$. Then, $\hat{H}^{-1}$ exists in $\Linv(\sHm;\sHp)$. In case \hbox{$\|\hat{T}\|_{\Lin(\sHp;\sHm)} < (\|\hat{H}^{-1}\|_{\Lin(\sHm;\sHp)})^{-1}$}, both \hbox{$\hat{T} \hat{H}^{-1}\in\Lin(\sHm)$} and \hbox{$ \hat{H}^{-1}\hat{T}\in\Lin(\sHp)$} are operators of norm less than one, and \begin{align} (\hat{H}+\hat{T})^{-1} & = \sum_{n=0}^\infty (-\hat{H}^{-1} \hat{T})^n \hat{H}^{-1} \nonumber \\ & = \hat{H}^{-1} \sum_{n=0}^\infty (-\hat{T} \hat{H}^{-1})^n. \label{eq:quasi-resolvent-series} \end{align} Therefore, if $\hat{H} = \Opp{+}{-}{\frm{h}}$ and $\hat{T} = \Opp{+}{-}{\frm{t}}$, we have a more or less explicit formula for $(\Opp{0}{0}{\frm{h}+\frm{t}})^{-1}$, which we write $(H+T)^{-1}$ (recognizing that `$+$' here must be interpreted indirectly): Merely sandwich the expansions in (\ref{eq:quasi-resolvent-series}) between $\iota_0$ and $\iota_+$. This exhibits holomorphy in a very direct way.
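As a quick illustration (a sketch, using nothing beyond the geometric series): replacing $\hat{T}$ by $z\hat{T}$ with $z\in\Cmplx$ small and keeping terms through first order gives
\begin{equation} \nonumber
(\hat{H}+z\hat{T})^{-1} = \hat{H}^{-1} - z\, \hat{H}^{-1}\hat{T}\hat{H}^{-1} + O(z^2) \quad\text{in } \Lin(\sHm;\sHp),
\end{equation}
so the derivative of the inversion map at $\hat{H}$ in the direction $\hat{T}$ is \hbox{$-\hat{H}^{-1}\hat{T}\hat{H}^{-1}$}, the familiar resolvent-expansion shape, with each factor living in the space the rigging dictates.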
However, it does not itself show that $H+T$ (or even $H$) is closed, nor does it show that invertibility of $H$ implies invertibility of $\hat{H}$. \subsection{Holomorphic families} \label{sec:holomorphic-families} We now return to the idea of parameterizing families of sectorial forms by an open set in a Banach space. \begin{defn} \label{def:holofam} Let $\sK$ be a dense subspace of the Hilbert space $\sH$, and $\calU$ a {\em connected} open subset of a Banach space. The map $\Arr{\calU}{\frm{h}}{\SF(\sK)}$ is a \newline \noindent\textnormal{(a)} {\it [G-]holomorphic family} in $\SF(\sK)$ parameterized over $\calU$ iff \hbox{$x \mapsto \frm{h}_x[\psi] \Type{\calU}{\Cmplx}$} is \hbox{[G-]holomorphic} for each $\psi\in \sK$. \newline \noindent\textnormal{(b)} {\it regular sectorial family} in $\sct{\calC}$ parameterized over $\calU$ iff $\Arr{\calU}{\frm{h}}{{\calC}_\relbd}$ is holomorphic with range in $\sct{\calC}$, where $\calC$ is an equivalence class of closable {\sqf}s on $\sK$ and $\calC_\relbd$ has the Banach space structure of Convention~\ref{cnvntn:C<-as-Banach-space}. \end{defn} Various adjectives (``in $\SF(\sK)$/$\sct{\calC}$'', ``parameterized over $\calU$'') may be omitted when context disambiguates. \begin{rem} By polarization, holomorphy of $\frm{h}$ immediately implies that $\frm{h}_x[\phi,\psi]$ is holomorphic in $x$ for every $\phi,\psi \in \sK$. \end{rem} If $\Arr{\calU}{\frm{h}}{\sct{\calC}}$ is a regular sectorial family, then the composition of $\frm{h}$ with any holomorphic function on $\sct{\calC}$, such as $\Rmap$ or (as shown in Section \ref{sec:free-energy}) $\Emap$, is automatically holomorphic: \begin{cor} \label{cor:Rmap-holo} If $\frm{h}$ defined on $\calU$ is a \RSF, then $\Rmap$ is holomorphic from its open domain in $\Cmplx\times \calU$ into $\Lin(\sH)$. \end{cor} On the other hand, the requirement to be merely a holomorphic family is weak and easily checkable in applications.
Hence, to get the abstract machinery appropriately hooked up to specific parameterized families, the only real question is when a holomorphic or G-holomorphic family is actually regular sectorial. \begin{prop} \label{prop:regular-sectorial} A G-holomorphic family \hbox{$\Arr{\calU}{\frm{h}}{\sct{\SF}(\sK)}$} is regular sectorial if any one of the following criteria holds. \begin{enumerate} \item[\textnormal{(a)}] $\frm{h}(\calU)$ consists of equivalent, closed {\sqf}s. \item[\textnormal{(b)}] {$\Arr{\calU}{\frm{h}}{\sct{\calC}\subset\calC_\relbd}$} is locally bounded for some closable class \hbox{$\calC\subseteq \SF(\sK)$}. \item[\textnormal{(c)}] $\frm{h}(\calU)$ consists of equivalent, closable {\sqf}s, and for every $x$, $\frm{h}_y$ is uniformly bounded with respect to $\frm{h}_x$ for $y$ in some neighborhood of $x$. \end{enumerate} \end{prop} \begin{proof} For (a), note that the assumption is that the forms $\frm{h}_x$ are already closed on $\sK$. Hence, holomorphy amounts to weak-operator holomorphy on $\sHp$. Conclude with Prop.~\ref{prop:WO-holo}(b). For (b), appeal to Prop.~\ref{prop:WO-holo}(c). Criterion (c) is a rephrasing of criterion (b) in light of the preceding theory. \end{proof} \subsection{Operator bounded families} \label{sec:self-adjoint} This subsection discusses an important kind of holomorphic family constructed on the basis of a given lower-bounded self-adjoint operator $H$. It is not used until Section \ref{sec:free-energy} and can safely be skipped until then. Take $H$ to be a lower-bounded self-adjoint operator in $\sH$, and assume ${\bm 1} \le H$, which can be arranged without loss by adding a constant. Choose $\sK = \dom H$. On $\sK$, $H$ defines an \sqf\ $\frm{h}$ by \begin{equation} \label{eq:h0} \frm{h}[\psi] \defeq \inpr{\psi}{H\psi}. \end{equation} We denote the equivalence class of {\sqf}s to which $\frm{h}$ belongs by $\calC(H)$, or simply by $\calC$ in this subsection, when there is no ambiguity.
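A familiar instance, for orientation (assuming only standard Sobolev facts): $H = -\Delta + {\bm 1}$ on $\sH = L^2(\Real^d)$ with $\dom H = W_2^2(\Real^d)$. Integration by parts gives
\begin{equation} \nonumber
\frm{h}[\psi] = \inpr{\psi}{(-\Delta + {\bm 1})\psi} = \int_{\Real^d} \left( |\nabla\psi|^2 + |\psi|^2 \right) dx,
\end{equation}
so $\inpr{\cdot}{\cdot}_+$ is the $W_2^1$ inner product, and the completion $\sHp$ of $\dom H$ in this norm is $W_2^1(\Real^d)$, strictly larger than $\dom H$.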
Once it is known that $\calC$ is closable, the theory developed in this section shows that \hbox{$\calC_\relbd \simeq \Lin(\sHp;\sHm)$}, where $\sHp$ is the completion of $\dom H$ under the inner product $\inpr{\phi}{\psi}_+ \defeq \inpr{\phi}{H\psi}$. \begin{lem} $\calC(H)$ is closable. \end{lem} \begin{proof} $\sHp$ exists at least as the abstract completion of $\dom H$. Let $(\psi_n) \subset \dom H$ be an $\sHp$-Cauchy sequence, such that $\|\psi_n\|\to 0$, $\|\psi_n - \psi\|_+ \to 0$. It needs to be shown that $\psi= 0$. Taking the limit of \hbox{$\|\psi_n-\psi_m\|_+^2 = \|\psi_n\|_+^2 + \|\psi_m\|_+^2 - 2 \re \inpr{\psi_n}{H\psi_m}$} first as $n\to\infty$ (the cross term vanishes, since $\psi_n \to 0$ in $\sH$ while $H\psi_m \in \sH$) and then as $m\to\infty$ yields \hbox{$0 = 2\|\psi\|_+^2$}, showing that $\psi=0$, as required. \end{proof} Define the {\em real} subspace $\sX^r(H)$ of $\SF(\dom H)$ to consist of hermitian {\sqf}s such that the norm \begin{equation} \|\frm{t}\|_{H} = \sup \setof{ \frac{ |\Dbraket{\phi}{\frm{t}}{\psi}| }{\|{\phi}\| \|H{\psi}\|} } { 0 \neq \phi,\psi\in\dom H} \end{equation} is finite. $\sX^r(H)$ corresponds precisely to the set of symmetric operators on $\dom H$ which are operator bounded with respect to $H$. Now, let \begin{equation} \label{eq:OF} \OF{H} \defeq \sX^r(H) \oplus i \sX^r(H) \end{equation} be the complexification of $\sX^r(H)$, with the norm extended according to \begin{equation} \|\frm{t}\|_H \defeq \|\frm{t}^r\|_H + \|\frm{t}^i\|_H. \end{equation} We aim to show that $\OF{H}$ is continuously embedded in $\calC(H)_\relbd$. The following Lemma is the key step. \begin{lem} \label{lem:XH} For $\frm{t}\in\sX^r(H)$, \hbox{$|\frm{t}| < \|\frm{t}\|_{H}\, \frm{h}$}. \end{lem} \begin{proof} Assume \hbox{$\| \frm{t}\|_H = 1$}; the general case follows by homogeneity. If for some $\psi$, $|\frm{t}[\psi]| \ge \frm{h}[\psi]$, the numerical range of at least one of $\frm{h} + \frm{t}$ and $\frm{h} - \frm{t}$ contains a nonpositive number.
Therefore, it suffices to show that \hbox{$(-\infty,0] \cap \Num (\frm{h} + \frm{t}) = \varnothing$} (the argument for $\frm{h} - \frm{t}$ is the same), and even, by Prop.~\ref{prop:inf-Num-in-spec} below, that $(-\infty,0]\subseteq\res (H+T)$, where $T$ is the operator on $\dom H$ induced by $\frm{t}$. That will be the case if $\|T\Rmap(x,H)\| < 1$ for $x \le 0$ (Lemma~\ref{lem:inverse-of-perturbed-closed-op}). But this follows immediately from the definition of the norm $\|\cdot \|_H$: \begin{equation} \nonumber \|T\Rmap(x,H)\| < \|H\Rmap(x,H)\| \le 1. \end{equation} \end{proof} The desired result follows immediately. \begin{prop} \label{prop:X(H)} Given lower-bounded self-adjoint operator $H$, $\OF{H}$ is a Banach space continuously embedded in $\calC(H)_\relbd$. Moreover, if $\|\frm{t}-\frm{h}\|_{H} < {1}$, then $\oSec{0}{\frac{\pi}{4}}$ is a sector for $\frm{t}$. \end{prop} \subsection{Numerical range and spectrum} \label{sec:Num-and-spec} This subsection collects somewhat auxiliary results relating the numerical ranges and spectra of operators. In general, the relationship is subtle. Prop.~\ref{prop:resolvent-outside-Num} shows that the spectrum of a closed sectorial operator is contained in the closure of its numerical range, but in general, the spectrum could be much smaller: consider the matrix $ \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} $, with spectrum $\{0\}$ and numerical range a disk of radius $1/2$. \begin{lem} \label{lem:Num-and-spec-1} Let $S$ be a symmetric operator with numerical range in $[0,c]$ with $c < \infty$, and suppose that $(\phi_n)$ is a sequence of vectors such that $\inpr{\phi_n}{S\phi_n} \to 0$. Then, $S\phi_n\to 0$. \end{lem} \begin{proof} Assume for a contradiction that there is a subsequence $n(k)$ such that $\|S\phi_{n(k)}\| > \epsilon > 0$. Without any loss, we may assume that the subsequence is the entire sequence.
Hence, there exists a sequence of {\em unit} vectors $(\eta_n)$ (e.g., $\eta_n = S\phi_n/\|S\phi_n\|$) such that \begin{align} \epsilon^2 & \le |\inpr{\eta_n}{S\phi_n}|^2 \le \inpr{\eta_n}{S\eta_n} \inpr{\phi_n}{S\phi_n} \nonumber \\ & \le c \inpr{\phi_n}{S\phi_n} \to 0. \nonumber \end{align} The second inequality here is Cauchy-Schwarz, and the contradiction finishes the proof. \end{proof} \begin{prop} \label{prop:inf-Num-in-spec} For $T$ a symmetric operator, $\inf\Num T \in \spec T$. \end{prop} \begin{proof} Assume that $\inf \Num T = 0$, since that can be arranged by adding a constant, unless $\Num T$ is unbounded below, in which case the statement is vacuous anyway. Then, there is a sequence $(\psi_n)$ of unit vectors in $\dom T$ such that \begin{equation} \label{eq:numerical-range-tending-to-zero} \inpr{\psi_n}{T\psi_n} \to 0. \end{equation} Assume, for a contradiction, that \hbox{$0\in\res T$}, i.e., \hbox{$T^{-1}\in\Lin(\sH)$}. We will show this implies $\psi_n\to 0$. Multiplying $T$ by a constant if necessary, we may assume $\|T^{-1}\|=1$. Since \begin{equation} \inpr{\psi}{T\psi} = \inpr{T^{-1} T\psi}{T\psi} \in \|T\psi\|^2 \, (\Num T^{-1}), \end{equation} non-negativity of $\Num T$ implies the same for $\Num T^{-1}$, which is moreover contained in $[0,1]$ since $\|T^{-1}\| = 1$, so that Lemma \ref{lem:Num-and-spec-1} will apply to $T^{-1}$. Define \hbox{$\phi_n = T\psi_n$}. Then, \hbox{$\|\phi_n\| \ge \|\psi_n\| = 1$} because $\|T^{-1}\| = 1$, and (\ref{eq:numerical-range-tending-to-zero}) is rewritten as \begin{equation} \nonumber \inpr{\phi_n}{T^{-1}\phi_n} \to 0. \end{equation} By Lemma \ref{lem:Num-and-spec-1}, \hbox{$\psi_n = {T^{-1}\phi_n} \to 0$}, contradicting $\|\psi_n\| = 1$. \end{proof} \begin{prop} \label{prop:resolvent-outside-Num} For an operator $T$, \newline \noindent \textnormal{(a)} Each connected component of the open set \hbox{$\Cmplx \setminus \cl{\Num T}$} is either disjoint from $\res T$, or contained in it.
\newline \noindent \textnormal{(b)} In components contained in $\res T$, the resolvent is bounded as \begin{equation} \label{eq:resolvent-bound} \| \Rmap(\zeta,T) \| \le \frac{1}{\dist(\zeta,\Num T)}. \end{equation} \newline \noindent \textnormal{(c)} If $T$ is {\em closed} sectorial, $\spec T \subseteq \cl\Num T$. \end{prop} \begin{proof} For brevity, write $G$ for the {\em open} set \hbox{$\Cmplx\setminus\cl\Num T$}. We first demonstrate the bound (\ref{eq:resolvent-bound}) for arbitrary \hbox{$\zeta\in G\cap\res T$}, and use that to show that both $G\cap \res T$ and $G\cap \spec T$ are open. That is equivalent to (a), and proves the remaining part of (b). Thus, check for any unit vector $\psi\in\dom T$: \begin{align} \|(T-\zeta)\psi\| & \ge |\inpr{\psi}{(T-\zeta)\psi}| \ge |\inpr{\psi}{T\psi} - \zeta| \nonumber \\ & \ge \dist(\zeta,\Num T). \label{eq:T-zeta-closed-rng} \end{align} This establishes (\ref{eq:resolvent-bound}). \noindent $G\cap\res T$ is open: The open disk with center $\zeta$ and radius $\|\Rmap(\zeta,T)\|^{-1} \ge {\dist(\zeta,\Num T)}$ is contained in $\res T$. (See Lemma~\ref{lem:inverse-of-perturbed-closed-op}.) \noindent $G\cap \spec T$ is open: For $\omega$ in $G\cap\spec T$, if there is $\zeta$ in $\res T$ with \hbox{$|\omega - \zeta| < \frac{1}{2}\dist(\omega,\Num T)$}, then \hbox{$ |\omega - \zeta| < \dist(\zeta,\Num T) $}, contradicting the previous paragraph. For part (c), since $\cl\Num T$ is convex, it is geometrically more-or-less obvious that it is either bounded, a closed sector, or a region bounded by two parallel lines. The last is impossible since $T$ is sectorial, and $G$ has exactly one component in either of the other two cases. The conclusion follows from $\res T\neq\varnothing$ ($T$ is closed). \end{proof} \section{Magnetic Schr\"odinger forms} \label{sec:QM} The core theory of the previous section is inert on its own. To use it, we need some interesting {\RSF}s, and some associated quantities and objects which are holomorphic.
The following two sections will take up the latter issue. This section is concerned with {\RSF}s of nonrelativistic Hamiltonians which are parameterized by scalar and vector potential fields and a two-body interaction. Though intended to be more illustrative than exhaustive, the results are nevertheless nontrivial. See Section~\ref{sec:put-together} for the summary conclusion. \subsection{Nonrelativistic $N$-particle systems} We consider a system of $N$ identical particles moving in three-dimensional Euclidean space. Hence, the ambient Hilbert space is $\sH \equiv L^2((\Real^3)^N)$. As {\sqf}s, the Hamiltonians we wish to consider are sums \begin{equation} \label{eq:total-hamiltonian-form} \frm{h}_{{\bm A},u,v} = \kfrm{{\bm A}_0+{\bm A}}+\ufrm{u_0+u}+\vfrm{v_0+v}, \end{equation} where \begin{equation} \label{eq:kA} {\frm{k}}_{\bm A}[\psi] = \int_{\Real^{3N}} \sum_{\alpha=1}^{N} \left|[\nabla_\alpha - i{\bm A}(x_\alpha)] \psi\right|^2 \, dx \end{equation} is kinetic energy with magnetic vector potential ${\bm A}$; \begin{equation} \label{eq:h1} \ufrm{u}[\psi] = \int_{\Real^{3N}} \left[\sum_\alpha u(x_\alpha)\right] |\psi(x)|^2 \, dx \end{equation} is a one-body potential energy for scalar potential $u$; and \begin{equation} \label{eq:h2} \vfrm{v}[\psi] = \int_{\Real^{3N}} \left[\frac{1}{2}\sum_{\alpha\neq\beta} v(x_\alpha-x_\beta)\right] |\psi(x)|^2 \, dx \end{equation} is a two-body interaction. These are taken to be defined on the space $\sK \equiv C_c^\infty(\Real^{3N})$ of compactly supported, infinitely differentiable functions. In (\ref{eq:total-hamiltonian-form}), ${\bm A}_0$, $u_0$ and $v_0$ are fixed background or unperturbed fields, while ${\bm A}$, $u$ and $v$ are variable, drawn from appropriate Banach spaces (to be determined) so that $\frm{h}_{{\bm A},u,v}$ is a regular sectorial family.
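For orientation, a single-particle special case ($N=1$, hydrogenic, with an assumed coupling constant $Z > 0$): the Coulomb potential $u(x) = -Z|x|^{-1}$ splits as $u = u'' + u'$ with \hbox{$u'' = u\,1[|x|\le 1]$} and \hbox{$u' = u\,1[|x| > 1]$}. Then $\|u'\|_{L^\infty(\Real^3)} \le Z$, while
\begin{equation} \nonumber
\|u''\|_{L^{3/2}(\Real^3)}^{3/2} = Z^{3/2} \int_{|x|\le 1} |x|^{-3/2}\, dx = 4\pi Z^{3/2} \int_0^1 r^{1/2}\, dr = \frac{8\pi}{3}\, Z^{3/2} < \infty.
\end{equation}
So the Coulomb interaction lands in the perturbation spaces $L^\infty(\Real^3)$ and $L^{3/2}(\Real^3)$ treated below, consistent with the remark that it can be handled as a perturbation.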
We do not say anything here about statistics because all the {\sqf}s/operators to be considered are invariant under particle permutations; thus, one can simply restrict attention to the subspace carrying the desired representation of the permutation group. Taking spin explicitly into account is similarly unnecessary since we are concerned with spin-independent Hamiltonians. The unperturbed scalar and interaction potentials are taken to be locally integrable, non-negative functions: \begin{equation} \label{eq:u0-v0} u_0, v_0 \in L^1_{\mathrm{loc}}(\Real^3)_+. \end{equation} An interesting and natural choice for the unperturbed scalar potential $u_0$ is some kind of confining potential, e.g., $|x|^2$ or $|x|^4$. It is actually somewhat artificial to consider an interaction potential which could not be treated as a perturbation, since the Coulomb interaction $v(x) = |x|^{-1}$ can be. Since we are not aiming for an exhaustive treatment, ${\bm A}_0$ is dropped (or taken identically zero). Other choices complicate the analysis considerably. Local integrability of $u_0$ and $v_0$ ensures that $\sK$ is in the domains of $\ufrm{u_0}$ and $\vfrm{v_0}$, while positivity then implies closability on $\sK$. Indeed, $\ufrm{u_0}$ is closed on the space of functions square integrable with respect to $[1 + \sum_\alpha u_0(x_\alpha)]\, dx$, and similarly for $\vfrm{v_0}$. Closability of $\kfrm{0}$ was already considered in Section \ref{sec:kinetic-energy}. Closability of the sum $\kfrm{0} + \ufrm{u_0} + \vfrm{v_0}$ is a new problem (the pieces are not equivalent), which, however, is easily solved as follows (see paragraph VI.1.6 of Kato\cite{Kato}): if $\frm{t},\frm{s}$ are sectorial {\sqf}s closable on a common domain $\sK$ (with closures $\overline{\frm{t}}$, $\overline{\frm{s}}$), then $\frm{t}+\frm{s}$ is also closable on $\sK$.
This is true because a Cauchy sequence with respect to $\frm{t}^+ + \frm{s}^+$ is Cauchy with respect to each of $\frm{t}^+$ and $\frm{s}^+$ separately, hence has a limit in $\dom \overline{\frm{t}} \cap \dom \overline{\frm{s}}$. By induction, this extends to any finite number of closable {\sqf}s. In summary, the unperturbed form $\frm{h}_{{\bm 0},0,0}$ is sectorial and closable on $\sK \equiv C_c^\infty((\Real^3)^N)$, being a member of an equivalence class $\calC$ of forms on $\sK$. We are interested in perturbations $\frm{h}_{{\bm A},u,v}$ lying in $\sct{\calC}$ which, moreover, vary holomorphically with the parameters $({\bm A},u,v)$. Prop.~\ref{prop:regular-sectorial} will be the main tool for demonstrating this. Note that the identification of an unperturbed form is arbitrary. Any form in $\sct{\calC}$ would do. Even on a practical level, this can be so to some extent. For example, one may prefer $u_0(x) = \max(R,|x|)^4$, i.e., $R^4$ for $|x| \le R$ and $|x|^4$ otherwise, to the ``simpler'' $|x|^4$, and if there is a background magnetic field, different gauge choices may recommend themselves. \subsection{Scalar potential} \label{sec:scalar_potential} We begin the search for suitable perturbations with scalar potentials: bounded potentials, $L^{3/2}$ potentials, and proportional modifications $fu_0$ of the background potential. Note that $\ufrm{u_0+u} = \ufrm{u_0} + \ufrm{u}$, so we work with $\ufrm{u}$ by itself, although $\ufrm{u_0}$ plays a role in determining which $u$'s are acceptable. The following new concept, an extreme sort of relative boundedness, is now going to be very important. \begin{defn} For {\sqf}s $\frm{t}$ and $\frm{h}$: $\frm{t}$ is ({\it Kato}) {\it tiny} with respect to $\frm{h}$ iff, for every $b > 0$, there is $a\in\Real_+$ such that $|\frm{t}| \le a {\bm 1} + b |\frm{h}|$. This is denoted $\frm{t} \ktiny \frm{h}$. \end{defn} Here are some simple yet useful properties of $\ktiny$.
\begin{enumerate} \item $\frm{t}\sim {\bm 0} \;\Rightarrow\; \frm{t}\ktiny\frm{h}$ for any $\frm{h}$ \item $\setof{\frm{t}}{\frm{t}\ktiny \frm{h}}$ is a vector space. \item Given $\frm{t} \ktiny \frm{h}$: \begin{enumerate} \item $\frm{h}+\frm{t} \sim \frm{h}$ \item $\frm{h}'\sim\frm{h}\;\Rightarrow\; \frm{t}\ktiny\frm{h}'$ \item $\frm{h}$ sectorial $\Rightarrow\; \frm{h}+\frm{t}$ sectorial \item $\frm{h}$, $\frm{h}'$ sectorial $\Rightarrow \frm{t} \ktiny \frm{h}+\frm{h}'$ \item $\frm{t}'\relbd \frm{t},\; \frm{h}\relbd \frm{h}' \;\Rightarrow \; \frm{t}' \ktiny \frm{h}'$ \end{enumerate} \end{enumerate} The main point here is that we can accumulate tiny perturbations indefinitely without danger of moving out of $\sct{\calC}$. Because of item 3(b), it makes sense to write $\frm{t}\ktiny \calC$. But, beware: a {\em set} of forms tiny with respect to $\calC$ might still be unbounded in $\calC_\relbd$. \subsubsection{Bounded potentials} These are {\em complex} functions, $u\in L^\infty(\Real^3)$, even though ultimately we are (probably) only interested in the real subspace $L^\infty(\Real^3;\Real)$. This expansion is for the sake of holomorphy, as usual; we need to work in the complex space to use the theory of Section \ref{sec:families}. This simple case is a good illustration of the basic method: check that the perturbation does not move $\frm{h}_{{\bm A},u,v}$ out of $\sct{\calC}$; check G-holomorphy; check local boundedness. Now, \begin{equation} \label{eq:bounded-potl-trivial} |\ufrm{u}[\psi]| \le \|u\|_{L^\infty(\Real^3)} {\bm 1}[\psi]. \end{equation} This immediately demonstrates local boundedness of $\frm{h}_{{\bm 0},u,0}$ due to the factor $\|u\|_{L^\infty(\Real^3)}$, as well as that \hbox{$\ufrm{u} \sim {\bm 0}$}, hence \hbox{$\ufrm{u} \ktiny \frm{h}_{{\bm 0},0,0}$} by property 1 above. G-holomorphy of $\ufrm{u}[\psi]$ in $u$ is trivial because it is {\em linear}.
In complete detail: \begin{equation} \nonumber \ufrm{u+zu'}[\psi] = \ufrm{u}[\psi] + z \ufrm{u'}[\psi], \end{equation} so the issue reduces to holomorphy of the right-hand side in $z$, which only requires that $\ufrm{u}[\psi]$ and $\ufrm{u'}[\psi]$ be well-defined. N.B. This argument has nothing to do with the topology of $L^\infty$ and will hold for any vector space of potentials for which $\ufrm{u}\in\SF(\sK)$. Conclusion: $L^\infty(\Real^3) \ni u \mapsto \frm{h}_{{\bm 0},u,0}$ is a \RSF. \subsubsection{Unbounded potentials} Now we move on to a space of unbounded potentials, namely, $u\in L^{3/2}(\Real^3)$. The following Lemma provides a bound playing the same r\^ole as (\ref{eq:bounded-potl-trivial}). The Sobolev inequality used can be found in books on Sobolev spaces\cite{Adams}, partial differential equations\cite{Taylor1} and general analysis\cite{Lieb+Loss}. \begin{lem} \label{lem:L3/2-bound} If $u\in L^{3/2}(\Real^3)$, then \begin{equation} \label{eq:L3/2} |\ufrm{u}| \le c'' \|u\|_{L^{3/2}(\Real^3)} \frm{k}_0. \end{equation} \end{lem} \begin{proof} For fixed $y \equiv (x_2,\ldots,x_N)$, the H\"{o}lder inequality gives \begin{align} \nonumber \int u(x_1) |\psi(x_1,y)|^2 \, dx_1 & \le c \|u\|_{L^{3/2}} \left\{ \int |\psi(x_1,y)|^6 \, dx_1 \right\}^{1/3} \nonumber \\ & = c \|u\|_{L^{3/2}} \| \psi(\cdot ,y) \|_{ L^{6}(\Real^3) }^2 \nonumber \end{align} For the squared norm here, use the Sobolev inequality \begin{equation} \|f\|_{L^q(\Real^d)} \le c' \|f\|_{W_k^p(\Real^d)}, \quad p\le q \le \frac{pd}{d-kp} \end{equation} with the values $d=3$, $k=1$, $p=2$, $q=6$ (at this critical exponent only the gradient term is needed on the right) and integrate over $y$ to obtain \begin{equation} \nonumber \int \| \psi(\cdot ,y) \|_{ L^{6}(\Real^3) }^2 \, dy \le c' \int |\nabla_1\psi(x_1,y) |^2 \, dx_1 \, dy. \end{equation} Adding up the inequalities with each of $x_2,\ldots,x_N$ in place of $x_1$ yields \begin{equation} \nonumber \ufrm{u}[\psi] \le c c' \|u\|_{L^{3/2}} \kfrm{0}[\psi].
\end{equation} \end{proof} This demonstrates local boundedness of \hbox{$L^{3/2}(\Real^3)\ni u \mapsto \frm{h}_{{\bm 0},u,0}$}, but would allow us to conclude that \hbox{$\frm{h}_{{\bm 0},u,0}\in\sct{\calC}$} only for $\|u\|_{L^{3/2}}$ sufficiently small (depending on $c''$). Fortunately, it can be improved by using density of $L^\infty\cap L^{3/2}$ in $L^{3/2}$. \begin{lem} \label{lem:L3/2-tiny} For $u\in L^{3/2}(\Real^3)$, \hbox{$\ufrm{u} \ktiny \frm{k}_0$}. \end{lem} \begin{proof} Split $u$ as $u = u' + u''$, with $u'\in L^\infty(\Real^3)$ and $u'' \in L^{3/2}(\Real^3)$. $\|u''\|_{L^{3/2}(\Real^3)}$ can be made as small as desired by choosing $u'$ appropriately (e.g. $u' = u \, 1[|u|\le M]$ for large $M$). By (\ref{eq:L3/2}), the relative bound of $\ufrm{u''}$ with respect to $\frm{k}_0$ is then as small as desired, while $\ufrm{u'}$ is tiny by (\ref{eq:bounded-potl-trivial}). \end{proof} Conclusion: $L^{3/2}(\Real^3) \ni u \mapsto \frm{h}_{{\bm 0},u,0}$ is a \RSF. \begin{rem} This result is very important to Lieb's framework\cite{Lieb83} for DFT. \end{rem} \subsubsection{Modulating the confining potential} The final kind of scalar potential to be considered is modulation of the background (confining) potential: $L^\infty(\Real^3) \ni f \mapsto \ufrm{fu_0}$. Evidently, \begin{equation} |\ufrm{fu_0}| \le \|f\|_{L^\infty(\Real^3)} \ufrm{u_0}. \end{equation} Local boundedness is thus secure, but $\frm{h}_{{\bm 0},fu_0,0}$ will generally fail to be sectorial if $u_0$ is anything like what we have in mind. Thus, we need to restrict $f$ to the open unit ball $B(L^\infty(\Real^3))$. With that restriction, another \RSF\ is obtained. \subsubsection{Removing redundancy} Combining the preceding three kinds of scalar potential perturbation yields a holomorphic map \begin{equation} \nonumber L^\infty(\Real^3) \oplus L^{3/2}(\Real^3) \oplus L^\infty(\Real^3) \to \calC_\relbd \end{equation} given by \begin{equation} \label{eq:total-scalar-potl-map} (u',u'',f) \mapsto \ufrm{u'}+ \ufrm{u''}+ \ufrm{fu_0} = \ufrm{u'+u''+fu_0}.
\end{equation} However, this should be restricted to the open set \begin{equation} \label{eq:U-for-u} \calU \defeq \setof{(u',u'',f) \in L^\infty \oplus L^{3/2} \oplus L^\infty }{\|f\|_{L^\infty} < 1} \end{equation} to ensure that $\frm{h}_{{\bm 0},u'+u''+fu_0,0}$ is in $\sct{\calC}$. Thus, we have a \RSF\ in $\sct{\calC}$ parameterized over $\calU$ above. However, this is not entirely satisfactory because there is redundancy: many distinct triples $(u',u'',f)$ may give the same total potential $u' + u'' + fu_0$. To cure this infelicity, we pass to a quotient. Recall that the quotient $\sX/{\mathscr M}$ of a Banach space $\sX$ by a closed subspace ${\mathscr M}$ is a Banach space with norm \begin{equation} \nonumber \|\pi x\|_{\sX/{\mathscr M}} \defeq \inf \setof{\|x+m\|_{\sX}}{m\in{\mathscr M}}, \end{equation} where $\Arr{\sX}{\pi}{\sX/{\mathscr M}}$ is the canonical projection. A continuous linear map $\Arr{\sX}{f}{\sY}$ naturally induces a linear map on the quotient $\sX/\ker f$, eliminating directions along which $f$ is constant. This simple picture is complicated in the situations which interest us, for two reasons: $f$ need not be either linear or defined on the entire space $\sX$, so a slightly generalized notion of kernel is needed; and taking the quotient of an open set by a subspace is not an immediately sensible thing to do. \begin{lem} Given ${\calU \subseteq \sX}$ open and {\em convex}, and \hbox{$\Arrtop{\calU}{f}{\sY}$}, holomorphic, let \begin{equation} \nonumber {\mathscr M} = \cap_{x\in\calU} \ker Df(x). \end{equation} Then, $f$ has a unique holomorphic extension to \hbox{$\calU + {\mathscr M}$}, given by \hbox{$f(x+m) = f(x)$} for \hbox{$m\in{\mathscr M}$}. In turn, a holomorphic map \hbox{$\Arr{(\calU+{\mathscr M})/{\mathscr M}}{\tilde{f}}{\sY}$ } is induced on the quotient, given by $\tilde{f}(\pi x) = f(x)$. \end{lem} \begin{proof} First, note that $\ker Df(x)$ is a closed subspace of $\sX$ for each $x\in\calU$, so ${\mathscr M}$ is indeed a closed subspace.
To see that the asserted extension is well-defined, suppose that $y = x+m = x'+m'$, for $x,x'\in\calU$, $m,m'\in{\mathscr M}$. Denote the affine (at most two-$\Cmplx$-dimensional) subspace containing $x,x',y$ by $A$, and consider the restriction of $f$ to $A\cap\calU$, which is convex. The restriction of $Df$ is everywhere zero, hence $f$ is constant on $A\cap\calU$, i.e., $f(x)=f(x')$ and the extended $f$ is well-defined. That the extension is holomorphic follows immediately from $Df(x+m) = Df(x)$, and unicity from $\calU+{\mathscr M}$ being connected and $f$ given on an open set, namely $\calU$. Therefore, $\tilde{f}$ is well-defined on $(\calU+{\mathscr M})/{\mathscr M}$ according to the given formula, and it remains only to show that it is holomorphic. As usual, we use the equivalence with G-holomorphy plus local boundedness (Thm. \ref{thm:GTHZ}). For G-holomorphy, note that $\tilde{f}(\pi x+\zeta \pi y) = f(x+\zeta y)$, so the question reduces to G-holomorphy of $f$ itself. For local boundedness, note that $\|\pi x - \tilde{y}\| < \epsilon$ implies that $x$ is within distance $\epsilon$ of $\pi^{-1} \tilde{y}$. \end{proof} To apply this, one need only check that $\calU$ in (\ref{eq:U-for-u}) is convex, which is immediate. So, define \hbox{$L^\infty(\Real^3)+L^{3/2}(\Real^3)+u_0 L^\infty(\Real^3)$} to be the space of functions (equivalence classes under a.e. equality) $u$ such that \begin{equation} \nonumber \inf\setof{\|u'\|_{L^\infty} +\|u''\|_{L^{3/2}} +\|f\|_{L^{\infty}}} {u=u'+u''+fu_0} \end{equation} is finite. This is a norm $\|u\|$ making $L^\infty+L^{3/2}+u_0 L^\infty$ a Banach space, and the subset $\calU_u$ consisting of $u$ with {\em some} decomposition obeying the constraint $\|f\|_{L^\infty} < 1$ is open. Conclusion: the map $u \mapsto \frm{h}_{{\bm 0},u,0}$ is a \RSF\ parameterized over $\calU_u$.
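For orientation, here is a standard illustration (ours, not needed in the sequel): the attractive Coulomb potential $u(x) = -Z/|x|$ lies in this space, via the decomposition
\begin{equation}
\nonumber
-\frac{Z}{|x|} \;=\; \underbrace{-\frac{Z}{|x|}\, 1[|x|< 1]}_{\in\, L^{3/2}(\Real^3)} \;+\; \underbrace{-\frac{Z}{|x|}\, 1[|x|\ge 1]}_{\in\, L^\infty(\Real^3)},
\end{equation}
the first summand being in $L^{3/2}$ because $\int_{|x|<1} |x|^{-3/2}\, dx = 4\pi \int_0^1 r^{1/2}\, dr$ is finite. Hence, by Lemma~\ref{lem:L3/2-tiny}, Coulomb potentials are Kato tiny with respect to $\frm{k}_0$.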
\subsection{Interaction} \label{sec:interaction} The message of this subsection is that two-body interactions can be treated in much the same way as one-body potentials, an observation that goes back centuries. Indeed, instead of coordinatizing configuration space $(\Real^3)^N$ with $x_1,x_2,\ldots,x_N$, we may use $\tfrac{x_2-x_1}{\sqrt{2}},\tfrac{x_2+x_1}{\sqrt{2}},x_3,\ldots,x_N$, and thereby control an interaction between particles $1$ and $2$ by the kinetic energy, just as we would an external potential for particle $1$. As long as we use only Kato tiny perturbations, as was done in Section~\ref{sec:scalar_potential}, then, owing to property 2 of Section~\ref{sec:scalar_potential}, it cannot happen that each perturbation alone is controllable while the combination is not. We have, for example, an \RSF\ of pair interactions parameterized over $\calU_{v} = L^\infty(\Real^3)+L^{3/2}(\Real^3)$. \subsection{Vector potential} \label{sec:vector_potential} For our purposes, the form of ${\frm{k}}_{\bm A}$ given in (\ref{eq:kA}) is not good for complex vector potentials. In order that ${\frm{k}}_{\bm A}$ be holomorphic in ${\bm A}$, it should {\em not} appear complex-conjugated. The correct definition is \begin{align} \label{eq:complex-A} {\frm{k}}_{\bm A}[\psi] &= \sum_{\alpha=1}^{N} \int_{\Real^{3N}} (\nabla_\alpha + i{\bm A}(x_\alpha)) \overline{\psi}\cdot (\nabla_\alpha - i{\bm A}(x_\alpha)) \psi \, dx \nonumber \\ &= \inpr{ ({\nabla} - i{\overline{\bm A}})\psi } { ({\nabla} - i{\bm A})\psi }. \end{align} We take a somewhat different approach with this than for scalar potentials. $\nabla_\alpha$ is a bounded operator from $W_1^2(\Real^{3N})$ into $\vec{L}^2(\Real^{3N})$ (we use an over-arrow to indicate ordinary, complex, three-dimensional vectors). The integral in (\ref{eq:complex-A}) will be a legitimate $L^2$ inner product if multiplication by ${\bm A}$ (or $\overline{\bm A}$) has the same property.
This is a very natural train of thought, but before pursuing it, we consider bounded vector potentials. \subsubsection{Bounded \texorpdfstring{${\bm A}$}{vector potential}} \label{sec:bounded-A} \begin{lem} If ${\bm A}$ is bounded, then $\kfrm{\bm A}$ is a tiny perturbation of $\frm{k}_0$. \end{lem} \begin{proof} For an arbitrary $\psi\in W_1^2(\Real^{3N})$, \begin{align} \Big|\kfrm{\bm A}[\psi] - \kfrm{0}[\psi]\Big| = & \Big|\inpr{ ({\nabla} - i{\overline{\bm A}})\psi } { ({\nabla} - i{\bm A})\psi } - \|\nabla\psi\|^2\Big| \nonumber \\ \le & \|{\bm A}\|_{L^\infty}^2 \|\psi\|^2 +2 \|{\bm A}\|_{L^\infty} \|\nabla\psi\| \|\psi\|. \label{eq:k-A-tiny-pert} \end{align} Control the final term with the inequality \begin{equation} \nonumber 2 \|\nabla\psi\| \|\psi\| \le \epsilon \|\nabla\psi\|^2 +\frac{1}{\epsilon}\|\psi\|^2, \quad \epsilon > 0. \end{equation} Since $\epsilon$ can be taken as small as desired here, \begin{equation} \kfrm{\bm A} - \kfrm{0} \ktiny \kfrm{0}. \end{equation} \end{proof} Just as G-holomorphy of $\ufrm{u}$ followed from holomorphy of $\Cmplx\ni z \mapsto z$, G-holomorphy of $\kfrm{\bm A}$ follows from holomorphy of $z \mapsto z^2$. Local boundedness of $\kfrm{\bm A}[\psi]$ as a function of \hbox{${\bm A} \in \vec{L}^\infty(\Real^3)$} follows from an estimate like that in (\ref{eq:k-A-tiny-pert}). Thus, $\vec{L}^\infty(\Real^3)\ni {\bm A} \mapsto \kfrm{\bm A}$ is a \RSF. \subsubsection{Sobolev multipliers} Now we return to the idea mentioned at the beginning of this section. Multiplication of elements of $W_1^2(\Real^d)$ by a fixed function $f$ is a linear operation. If it is actually a {\em bounded} linear operator into $L^2(\Real^d)$, then $f$ is a member of the space \hbox{$M(W_1^2(\Real^{d})\to L^2(\Real^{d}))$} of Sobolev multipliers\cite{Mazya-85-book,Mazya-09-book}.
This space is nontrivial (it contains $L^\infty$) and is a Banach space with the norm it inherits from $\Lin(W_1^2(\Real^{d});L^2(\Real^{d}))$: \begin{equation} \|f\|_{M(W_1^2\to L^2)} \defeq \sup\setof{\|f\psi\|_{L^2}}{\|\psi\|_{W_1^2} = 1}. \end{equation} Therefore, we consider \hbox{${\bm A} \in \vec{M}(W_1^2(\Real^{3})\to L^2(\Real^{3}))$}. One needs to check that this lifts from 3-dimensional to $3N$-dimensional space properly, but that is simple: abbreviating $y\equiv (x_2,\ldots,x_N)$, \begin{align} \int |{\bm A}(x_1)\psi(x_1,y)|^2 & \, dx_1 \nonumber \\ & \le \|{\bm A}\|_{\vec{M}(W_1^2\to L^2)}^2 \, \|\psi(\cdot,y)\|_{W_1^2(\Real^3)}^2. \nonumber \end{align} Integration over $y$ then shows that the multiplier norm of ${\bm A}$ on $W_1^2(\Real^{3N})$ does not exceed that on $W_1^2(\Real^{3})$, independently of $N$. G-holomorphy has nothing to do with the topology of the space over which ${\bm A}$ ranges, so it follows for $\vec{M}(W_1^2(\Real^{3N})\to L^2(\Real^{3N}))$ just as for bounded vector potentials. Local boundedness follows from a calculation much like (\ref{eq:k-A-tiny-pert}): \begin{align} \Big|\kfrm{{\bm A}+{\bm a}}[\psi] - \kfrm{\bm A}[\psi]\Big| \le & \| ({\nabla} - i{\bm A})\psi \|\| {\bm a}\psi \| \nonumber \\ & + \| ({\nabla} - i\overline{\bm A})\psi \| \| {\bm a}\psi \| + \| {\bm a}\psi \|^2. \nonumber \end{align} This establishes that, for \hbox{${\bm A}\in\vec{M}(W_1^2(\Real^{3N})\to L^2(\Real^{3N}))$}, $\kfrm{\bm A} \relbd \kfrm{0}$. However, the opposite, $\kfrm{0} \relbd \kfrm{\bm A}$, is problematic in general, although it does hold if $\|{\bm A}\|_{M(W_1^2 \to L^2)} < 1$. The situation looks at first like what we faced with \hbox{$u\in L^{3/2}$} for $\ufrm{u}$. However, $L^\infty$ is not dense in $M(W_1^2 \to L^2)$. The norm is an operator norm and we face the familiar problem that strong convergence does not imply norm convergence. Thus, we settle for what is clear, $\kfrm{0} \sim \kfrm{\bm A}$ for ${\bm A}$ in the unit ball $B(M(W_1^2 \to L^2))$. On one level the preceding is entirely satisfactory. The Sobolev-multiplier norm is natural.
However, one might prefer something more familiar and easier to work with, such as given in the following Lemma. \begin{lem} \label{lem:vec-L3-tiny} For ${\bm A} \in \vec{L}^{3}(\Real^3)$, \hbox{$\frm{k}_{\bm A} \sim \frm{k}_0$}. \end{lem} \begin{proof} Use a H\"older inequality and the Sobolev inequality cited in Lemma~\ref{lem:L3/2-bound} to obtain \begin{equation} \label{eq:Holder-Sobolev} \|{\bm A}\psi\|_{L^2} \le \|{\bm A}\|_{L^3} \|\psi\|_{L^6} \le c \|{\bm A}\|_{L^3} \|\psi\|_{W_{1}^2}. \end{equation} Again, just as in Lemma~\ref{lem:L3/2-tiny}, a bounded vector field can be subtracted from ${\bm A}$ so that the $L^3$ norm of the residual is as small as desired. \end{proof} \subsubsection{Removing redundancy again} As for scalar potentials, there is also redundancy here, since $\vec{L}^\infty$ intersects $\vec{L}^3$, and is contained in ${\vec{M}(W_1^2\to L^2)}$. It can be removed in exactly the same way to obtain a \RSF\ parameterized over an open set $\calU_{\bm A}$ in $\vec{L}^\infty + {\vec{M}(W_1^2\to L^2)}$ or all of $\vec{L}^\infty + \vec{L}^3$. \subsection{Putting it all together} \label{sec:put-together} Here is a summary of the preceding investigation. With lower-bounded locally integrable background potential and interaction (\ref{eq:u0-v0}) and no background vector potential, $\frm{h}_{\bm A,u,v}$ (\ref{eq:total-hamiltonian-form}) is a \RSF\ on {\em all} of \hbox{$(\vec{L}^3 + \vec{L}^\infty)\times (L^{3/2} + L^\infty) \times (L^{3/2} + L^\infty)$}. Alternatively, the $L^3$ summand for ${\bm A}$ can be replaced by $M(W_1^2(\Real^{3})\to L^2(\Real^{3}))$ and summands $u_0L^\infty(\Real^3)$ and $v_0L^\infty(\Real^3)$ added to the potential and interaction factors with restriction to an open neighborhood $\calU$ of the origin. The condition to be in $\calU$ does not factorize.
\section{Low-energy Hamiltonians \& eigenstate properties} \label{sec:eigenvalues} The previous section was concerned with one component of the application, namely the construction of {\RSF}s useful for nonrelativistic quantum mechanics. This section and the next tackle the question: given an \RSF\ $\frm{h}$ defined on $\calU$, what interesting functions/quantities are holomorphic? To a considerable extent, this can be fruitfully discussed without reference to any concrete \RSF. This section uses Riesz-Dunford-Taylor integral methods to discuss ``low-energy Hamiltonians'' in case there is a gap in the spectrum, i.e., a curve $\Gamma$ in the resolvent set of $H_x$ running top-to-bottom in $\Cmplx$ (recall, we deal in ``Hamiltonians'' which are sectorial but not necessarily self-adjoint). The part of the spectrum to the left of $\Gamma$ then corresponds to a bounded Hamiltonian which is holomorphic on some neighborhood of $x$. Properties of nondegenerate eigenstates associated with isolated eigenvalues are considered in section \ref{sec:rank-1}. The eigenvalue itself and expectations of all ordinary observables, as well as of generalized observables such as charge-density and current-density (when they make sense), are holomorphic. Some of the material here, primarily Section \ref{sec:Riesz-Dunford-Taylor} and Prop.~\ref{prop:automatic-holo-Lp}, is appealed to in section \ref{sec:free-energy}. \subsection{Riesz-Dunford-Taylor integrals} \label{sec:Riesz-Dunford-Taylor} Recall that one of the main conclusions of Section \ref{sec:families} was holomorphy of the map $(\zeta,x) \mapsto \Rmap(\zeta,H_x)$. As a function of the single complex variable $\zeta$, it is natural to integrate this around contours. The Riesz-Dunford-Taylor calculus constructs a holomorphic function $f(A)$ of an arbitrary {\em bounded} operator $A$ by integrating $f(\zeta)\Rmap(\zeta,A)$ around a contour encircling the entire spectrum $\spec A$, where $f$ is an ordinary holomorphic function.
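As a numerical aside (our own toy illustration, not part of the formal development; the matrix and contour below are arbitrary choices, and NumPy is assumed), discretizing the contour integral $-\tfrac{1}{2\pi i}\oint (A-\zeta)^{-1}\,d\zeta$ around a single eigenvalue of a small non-normal matrix recovers the corresponding (generally non-orthogonal) spectral projection:

```python
import numpy as np

# Toy check (ours): Riesz projection  P(Gamma) = -(2*pi*i)^{-1} * oint (A - zeta)^{-1} dzeta,
# discretized on a circular contour by the trapezoid rule (spectrally
# accurate for periodic integrands).
def riesz_projection(A, center, radius, n=400):
    dim = A.shape[0]
    P = np.zeros((dim, dim), dtype=complex)
    for k in range(n):
        theta = 2 * np.pi * k / n
        zeta = center + radius * np.exp(1j * theta)       # point on contour
        dzeta = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)
        P -= np.linalg.inv(A - zeta * np.eye(dim)) * dzeta
    return P / (2j * np.pi)

A = np.array([[1.0, 5.0],
              [0.0, 3.0]])                        # eigenvalues 1 and 3; not normal
P = riesz_projection(A, center=1.0, radius=1.0)   # contour encircles only 1
# P is idempotent (P @ P == P) but not self-adjoint, and Tr(A @ P)
# recovers the enclosed nondegenerate eigenvalue, as in the text.
```

The trapezoid rule converges geometrically here because the integrand is analytic in an annulus around the contour, so a few hundred nodes already give machine-precision results.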
Some basic references for this technology are \S III.6 of Kato\cite{Kato}, Chap. 6 of Hislop \& Sigal\cite{Hislop+Sigal}, or \S 3.3 of Kadison \& Ringrose\cite{Kadison+Ringrose-I}. Since we deal with unbounded operators, we cannot do that, but the idea can be modified for some interesting purposes. The first basic idea is that, if $H$ is a closed operator, $E$ is an isolated eigenvalue, and ${\Gamma}$ is a simple anticlockwise closed contour in $\res H$, surrounding $E$ but no other part of $\spec H$, then \begin{equation} \label{eq:P(u)} P(\Gamma) = -\oint_{\Gamma} \Rmap(\zeta,H) \frac{d\zeta}{2\pi i} \end{equation} is a projection onto the corresponding eigenspace. For normal (in particular, self-adjoint) operators this is straightforward, as the relevant part of the resolvent looks like $(E-\zeta)^{-1} P$, where $P$ is an {\em orthogonal} projector onto the eigenspace, so that $P(\Gamma) = P$. The restriction of a non-normal operator to an eigenspace is generally not simply a multiple of the identity if the algebraic multiplicity exceeds one. Consequently, the resolvent generally has higher-order poles. If the eigenvalue is {\em nondegenerate}, however, that cannot happen, and its value can be extracted as \begin{equation} \label{eq:ground-energy-integral} E = -\Tr \oint_{\Gamma} \zeta \Rmap(\zeta, H) \frac{d\zeta}{2\pi i}. \end{equation} We can profitably generalize somewhat. First, we have the basic result \begin{prop} \label{prop:Riesz-Dunford-Taylor} Given: $A\in\Lincl(\sX)$ and $\Gamma$ a simple anticlockwise contour in $\res A$, surrounding the part $\sigma$ of $\spec A$. Then, \newline\noindent \textnormal{(a)} \begin{equation} \label{eq:P(G)} P(\Gamma) \defeq -\oint_\Gamma \Rmap(\zeta,A) \frac{d\zeta}{2\pi i} \end{equation} is a projection with $\rng P(\Gamma) \subseteq \dom A$.
\newline\noindent \textnormal{(b)} \begin{equation} \label{eq:A(G)} A(\Gamma) \defeq -\oint_\Gamma \zeta \Rmap(\zeta,A) \frac{d\zeta}{2\pi i} \end{equation} satisfies $A(\Gamma) = A P(\Gamma) = P(\Gamma) A P(\Gamma)$ (hence maps $\rng P(\Gamma)$ into itself), annihilates $\ker P(\Gamma)$, and its spectrum as an operator on $\rng P(\Gamma)$ is $\sigma$. \newline\noindent \textnormal{(c)} More generally, for open $U$ containing $\Gamma$ and the region it surrounds, and $f$ an ordinary holomorphic function on $U$, \begin{equation} \label{eq:f(A|G)} f(A|\Gamma) \defeq -\oint_\Gamma f(\zeta) \Rmap(\zeta,A) \frac{d\zeta}{2\pi i} \end{equation} maps $\rng P(\Gamma)$ into itself and annihilates $\ker P(\Gamma)$. $P(\Gamma) = 1(A|\Gamma)$ and $A(\Gamma) = \Id(A|\Gamma)$ are special cases. $f\mapsto f(A|\Gamma)$ is a Banach algebra morphism from the space of holomorphic functions on $U$ (with uniform norm) into $\Lin(\rng P(\Gamma))$. \end{prop} \begin{proof} For the parts concerning $P(\Gamma)$ and $A(\Gamma)$, see Hislop \& Sigal\cite{Hislop+Sigal}, Prop.~6.9. For the Banach algebra aspects, see Kadison and Ringrose. \end{proof} We are not nearly so interested in varying $f$ in (\ref{eq:f(A|G)}), however, as in varying $A$ for a few simple cases of $f$, principally $1$ and $\Id$. \begin{thm} \label{prop:f(y|G)-holo} Given \RSF\ $\frm{h}$ and simple closed contour $\Gamma \subset \res H_x$, there is a neighborhood $\calW$ of $x$ such that \hbox{$y\in\calW\;\Rightarrow\; \Gamma\subset \res H_y$} and for each $f$ holomorphic on and inside $\Gamma$, \hbox{$y \mapsto f(H_y|\Gamma) \colon \calW \to\Lin(\sH)$} is holomorphic. \end{thm} \begin{proof} By compactness of $\Gamma$ and holomorphy of \hbox{$(\zeta,y)\mapsto \Rmap(\zeta,H_y)$}. \end{proof} \subsection{Low-energy Hamiltonians} No contour can be drawn around the entire spectrum of an unbounded operator $H_x$. 
However, since $H_x$ is bounded below, it might be possible to surround the part of $\spec H_x$ in some left-half-plane, if there is a gap. That such a contour will continue to surround the ``low energy'' part of the spectrum when $x$ is perturbed is not immediately evident: Each $H_y$ is bounded below, but is it possible that $\spec H_y$ has a part that drifts off to $-\infty$ as $y\to x$? Fortunately, such pathology is ruled out by Lemma~\ref{lem:sectorial-usc}, which says that a slight enlargement of a sector for one member of a \RSF\ is a sector for all sufficiently close members. \begin{figure} \centering \includegraphics[width=40mm]{fig-right-boundary} \caption{The concept of right-boundary. $\Sigma$ is a sector for $\Num A$, and the cross-hatched regions represent $\spec A$. The green region, containing all of $\spec A$ to the left of $\Gamma$, is surrounded by a contour bordered by parts of $\Gamma$ and the edges of $\Sigma$, and the precise choice of $\Sigma$ is irrelevant for Riesz-Dunford-Taylor integrals as in (\ref{eq:f(A|G)}). } \label{fig:right-boundary} \end{figure} More generally than a vertical line, we may start with a continuous curve $\Gamma \subset \Cmplx$ such that each horizontal line \hbox{$\mathrm{Im}\ z = $ constant} intersects $\Gamma$ in exactly one point. In other words, $\Gamma$ goes from bottom to top of the plane without overhangs, as illustrated in Fig.~\ref{fig:right-boundary}. Such a curve, with upward orientation, will be called a {\it right-boundary}. Suppose $\Gamma$ is a right-boundary contained in $\res H_x$, and let $\Sigma$ be a sector for $\frm{h}_x$ with vertex to the left of $\Gamma$ (Fig.~\ref{fig:right-boundary}). Then we may form a closed contour by running along the lower edge of $\Sigma$ away from the vertex until meeting $\Gamma$, then running upward along $\Gamma$ until meeting the upper edge of $\Sigma$, and then back to the vertex. 
This contour, called $\tilde{\Gamma}$, encircles all the numerical range of $\frm{h}_x$ to the left of $\Gamma$, hence the part of $\spec H_x$ in that region. And, therefore, according to the preceding paragraph, $\tilde{\Gamma}$ also encloses all of $\spec H_y$ lying to the left of $\Gamma$, for $y$ in some neighborhood of $x$. Now we extend the notation in (\ref{eq:P(G)}), (\ref{eq:A(G)}), and (\ref{eq:f(A|G)}) (as long as $f$ is holomorphic on the region to the left of $\Gamma$), writing for instance $f(H_y|\Gamma)$ for the integral taken around $\tilde{\Gamma}$. The point is that it does not matter how $\Gamma$ is completed to a closed contour as long as all the spectrum to the left of $\Gamma$ is enclosed. Since that can always be done (assuming $\Gamma \subset \res H_x$), the notation is justified. \subsection{Schatten classes} \label{sec:Schatten} The preceding part of this Section showed how we get a variety of holomorphic maps $\Arr{\calU}{f}{\Lin(\sH)}$. What if the image of $f$ happens to be in some restricted class of operators which has its own Banach space structure, for instance, the trace-class operators $\Lin^1(\sH)$, the Hilbert-Schmidt operators $\Lin^2(\sH)$, or more generally a Schatten $p$-class $\Lin^p(\sH)$? Nearly automatic holomorphy in these situations is shown in Prop.~\ref{prop:automatic-holo-Lp} below. Only the trace-class $\Lin^1(\sH)$ is used in this Section, but other Schatten classes $\Lin^p(\sH)$ will be put to work in Section \ref{sec:free-energy}. First, we recall some basic facts about the Schatten $p$-classes\cite{Schatten,Gohberg+Krein,Simon-trace-ideals} that we will use. \begin{defn} For $1 \le p < \infty$, $\Lin^p(\sH)$ is the set of compact operators $T$ such that $|T|^p\in\Lin^1(\sH)$, where \hbox{$|T| = (T^*T)^{1/2}$}. \end{defn} \begin{prop} \label{prop:Schatten} The classes $\Lin^p(\sH)$ have the following properties. 
\begin{enumerate} \item Equipped with the norm \hbox{$\|T\|_p = (\Tr |T|^p)^{1/p}$}, $\Lin^p(\sH)$ is a Banach space. \item $\Lin^p(\sH)$ is also a two-sided $*$-ideal: \hbox{$\|ACB\|_p \le \|A\| \|C\|_p \|B\|$} and it contains $C^*$ whenever it contains $C$. \item $\Lin^1(\sH)$ is the dual space of the compact operators $\Lin_0(\sH)$ with the usual operator norm, while for \hbox{$1 < p < \infty$}, $\Lin^p(\sH)$ realizes the dual of $\Lin^{q}(\sH)$, where \hbox{$p^{-1} + q^{-1} = 1$}, via the pairing \hbox{$(S,T) \mapsto \Tr ST$}. On the other hand, the finite-rank operators are dense in $\Lin_0(\sH)$ as well as $\Lin^p(\sH)$ for $1 < p < \infty$. Thus, {\em every} $\Lin^p(\sH)$ ($1\le p < \infty$) is the dual space of a Banach space in which the finite-rank operators are dense. \end{enumerate} \end{prop} \begin{prop} \label{prop:automatic-holo-Lp} Given: \hbox{$\Arr{\calU}{f}{\Lin(\sH)}$} holomorphic. If $f$ is a locally bounded map into $\Lin^p(\sH)$ ($1\le p < \infty$), then $f$ is holomorphic into $\Lin^p(\sH)$. \end{prop} \begin{proof} For $B$ finite-rank, $\Tr f(x)B$ is a finite sum of terms of the form $\inpr{\phi_\alpha}{f(x) \psi_\alpha}$, each of which is holomorphic by hypothesis. Hence, the result follows from the remark about density of such operators in the pre-dual which precedes the Proposition together with Prop.~\ref{prop:wk*-holo}. \end{proof} \subsection{Finite rank} \label{sec:finite-rank} Prop.~\ref{prop:automatic-holo-Lp} does not quite give holomorphy due to the hypothesis of local boundedness. However, if we specialize to Riesz-Dunford-Taylor integrals and ask that $P_x(\Gamma)$ have finite rank, holomorphy into $\Lin^1(\sH)$ follows without an explicit local boundedness assumption. The next two well-known Lemmas encapsulate the simple key observations. \begin{lem} \label{lem:rank-stability} If $P$ and $Q$ are projections (not necessarily orthogonal), \hbox{$\|P-Q\| < 1$} implies that $\rank P = \rank Q$. 
\end{lem} \begin{proof} If $\rng Q \ni \phi \mapsto P\phi$ is injective, then \hbox{$\rank P \ge \rank Q$}, which suffices by symmetry of the situation. Indeed, for nonzero $\phi\in\rng Q$, \begin{equation} \nonumber \|P\phi\| = \|Q\phi + (P-Q)\phi\| \ge \|\phi\| - \|P-Q\| \|\phi\| > 0. \end{equation} \end{proof} \begin{lem} \label{lem:cts-into-trace-class} A continuous function into $\Lin(\sH)$ with range in operators of rank $\le N < \infty$ is actually continuous into $\Lin^1(\sH)$. \end{lem} \begin{proof} {$\|A - B\|_1 \le (\rank A + \rank B) \|A - B\|$}. \end{proof} \begin{prop} \label{prop:holo-into-L1} $\rank P_x(\Gamma) = N < \infty$ implies that $x$ has a neighborhood $\calW$ such that $\rank P_y(\Gamma) = N$ for every $y\in\calW$, and \hbox{$y \mapsto f(H_y|\Gamma)\colon \calW \to\Lin^1(\sH)$} is holomorphic. \end{prop} \begin{proof} Lemma~\ref{lem:rank-stability} ensures existence of $\calW$ such that $\rank P_y(\Gamma) = N$ for $y\in\calW$. Therefore $f(H_y|\Gamma)$ also has rank at most $N$, since it maps $\rng P_y(\Gamma)$ into itself while annihilating $\ker P_y(\Gamma)$. Lemma~\ref{lem:cts-into-trace-class} then completes the proof. \end{proof} \subsection{Eigenstate perturbation} \label{sec:rank-1} The extreme case is $\rank P_x(\Gamma)=1$. Then we are in the venerable context of eigenstate perturbation. A general rank-1 projection can be written as \begin{equation} \outpr{\phi}{\eta},\;\text{with}\; \inpr{\eta}{\phi} = 1\;\text{and } \|\phi\|=1, \end{equation} where $\phi$ and $\eta$ are determined up to a {common} phase factor $e^{i\theta}$. Suppose, now, that ${\frm{h}}$ is a \RSF\ such that $H_x$ has an isolated nondegenerate eigenvalue at $E_x$, and let $\Gamma$ be a contour which separates $E_x$ from the rest of $\spec H_x$.
Then, Prop.~\ref{prop:holo-into-L1} shows that as $y$ varies in some neighborhood of $x$, \begin{equation} \label{eq:Py(Gamma)} P_y(\Gamma) = \outpr{\phi_y}{\eta_y}, \; \inpr{\eta_y}{\phi_y} = 1, \; \|\phi_y\|=1, \end{equation} and \begin{equation} H_y(\Gamma) = E_y \outpr{\phi_y}{\eta_y}, \end{equation} with $P_y(\Gamma)$ and $E_y = \Tr H_y(\Gamma)$ holomorphic. {\it A fortiori}, $E_y$ moves continuously with $y$ as long as it remains separated from the rest of $\spec H_y$ --- the {\it isolation condition}, for short. As $y$ moves along any continuous curve in $\sX$ beginning at $x$ and respecting the isolation condition, $E_y$ can be continuously tracked, but if the path returns to $x$, we may not return to $E_x$ unless the path can be contracted to a point without violating the isolation condition. Therefore, we consider $\calW$, a maximal {\em simply connected} open set containing $x$ and with the isolation condition satisfied everywhere in $\calW$. For $y$ in $\calW$, we can simply write $P_y$ and $E_y$, since the particular choice of $\Gamma$ is immaterial. Now, $E_y$ is holomorphic as a $\Cmplx$-valued function and $P_y$ and $H_y(\Gamma)$ as $\Lin^1(\sH)$-valued functions, for $y\in\calW$. Therefore, for any bounded observable $B\in\Lin(\sH)$, its ``expectation'' \begin{equation} y \mapsto \Tr B \outpr{\phi_y}{\eta_y} = \inpr{\eta_y}{B\phi_y} \end{equation} is holomorphic on $\calW$. The quotation marks are because this coincides with the usual notion of expectation only when $\eta_y=\phi_y$, e.g., when $H_y$ is self-adjoint. There are other interesting holomorphic quantities which do not fall into this category, however. $E_y$ itself, \begin{equation} E_y = \inpr{\eta_y}{H_y\phi_y}, \end{equation} is one such. The charge and current density are others when our parameter space includes scalar and vector potentials. This is because these quantities are the derivatives of $E_y$ with respect to scalar and vector potential, respectively.
At a heuristic level, this claim is straightforward, but there are delicate details, which we will now check. \begin{lem}[Hellmann-Feynman] \label{lem:Hellmann-Feynman} Suppose finite-rank projections $P_y$ and bounded operators $A_y$ depend differentiably on parameter $y$, and that $[P_y,A_y]=0$. Then $D_y \Tr P_y A_y = \Tr P_y D_yA_y$. \end{lem} \begin{proof} ($y$ subscripts will be suppressed for notational simplicity) Differentiating $P(1-P)=0$, deduce that $DP$ maps $\rng P$ into $\rng (1-P)$ and vice versa. Since both $\rng P$ and $\rng (1-P)$ are invariant under $A$, it immediately follows that $\Tr (DP)A = 0$ (put $(P + 1 - P)$ on each side and use cyclicity of trace). \end{proof} Since $H_y(\Gamma)$ is analytic, the preceding Lemma gives \begin{align} \label{eq:DE-integral} D_y E_y|_{y=x} &= \inpr{\eta_x}{DH_y(\Gamma)|_x\, \phi_x} \nonumber \\ &= -\oint_\Gamma \inpr{\eta}{D_y\Rmap(\zeta,H_y)\phi} \zeta\frac{d\zeta}{2\pi i}. \end{align} ($x$ subscripts are being omitted now, for simplicity.) Now, may we write $D\Rmap(\zeta,H_y) {=} -\Rmap(\zeta,H_y)D_yH_y\Rmap(\zeta,H_y)$? A priori, this makes no sense since $H_y$ here is the full ({\em not} projected) operator. However, if we understand $\Rmap(\zeta,H_y)$ as being in $\Lin(\sHm;\sHp)$ (see Sections \ref{sec:R-map} and \ref{sec:series}), so that $\ilinpr{\eta}{\Rmap(\zeta,H_y)\phi} {=} \ilinpr{\eta}{(\hat{H}_y-\zeta)^{-1} \phi}$, all is well. Here, $\phi$ is considered as an element of $\sHp$, and $\eta$ of $\sHm$. Then, \begin{equation} D_y\inpr{\eta}{\Rmap(\zeta,H_y)\phi} = -\ilinpr{\eta}{(\hat{H}_y-\zeta)^{-1} D_y\hat{H}_y(\hat{H}_y - \zeta)^{-1} \phi}. \end{equation} To continue, we need \begin{lem} \label{lem:eta-eigenvector} $H_y^*\eta_y = \overline{E}_y\eta_y$ and $\eta_y\in\sHp$. \end{lem} \begin{proof} First, note that $P_y^*\eta_y = \eta_y$. Now (Prop.~\ref{prop:Riesz-Dunford-Taylor}), $H_y = P_y H_y + (1-P_y)H_y(1-P_y)$, and $P_y$ commutes with $H_y$ on $\dom H_y$.
Therefore, \begin{align} \psi\in\dom H_y & \;\Rightarrow\; \nonumber \\ & \inpr{\eta_y}{H_y\psi} = \inpr{\eta_y}{P_y H_y \psi} = \inpr{\eta_y}{H_y P_y \psi} \nonumber \\ = & E_y \inpr{\eta_y}{\phi_y} \inpr{\eta_y}{\psi} = E_y \inpr{\eta_y}{\psi}. \end{align} This shows that $H_y^*\eta = \overline{E_y} \eta$. Also, $\eta_y\in\sHp$, because (Lemma~\ref{lem:adjoint-ops}) \hbox{$H_y^* = \Opp{0}{0}{\frm{h}_y^*}$} and $\frm{h}_y^*\in\sct{\calC}$ even if not in our parameterization. \end{proof} Using this Lemma, the previous display is rewritten as \hbox{$-(\zeta - E_y)^{-2} \ilinpr{\eta}{D\hat{H}_y \phi}$}, which, inserted into the contour integral (\ref{eq:DE-integral}), allows an easy evaluation. In conclusion, \begin{prop} \begin{equation} D_y E_y\Big|_{x} = D_y \Dbraket{\eta_x}{\frm{h}_y}{\phi_x}\Big|_x. \end{equation} \end{prop} To do much with this requires explicit knowledge of $\frm{h}$. \subsubsection{Charge/current density} \label{sec:eigenstate-cc-density} For a concrete case, consider a \RSF\ of Schr\"odinger forms as in Section \ref{sec:QM}. The differentials of $E$ with respect to $u$ and ${\bm A}$ are linear forms on a perturbation $\delta\!{u}$ or $\delta\!{\bm A}$ (the `$\delta$' doesn't actually have any independent meaning from our perspective), given by \begin{align} D_{u} E \cdot\delta\!{u} & = \sum_\alpha \inpr{\eta}{\delta\!{u}(x_\alpha) \phi} \nonumber \\ \label{eq:rho} & \eqdef \int \delta\!{u}\, {\rho}\, d\underline{x}, \end{align} and \begin{align} D_{\bm A} E \cdot\delta\!{\bm A} & = \inpr{ \cc{\delta\!{\bm A}}\, \eta }{ (i{\nabla} + {\bm A})\phi } + \inpr{(i\nabla+\cc{\bm A}) \eta }{ \delta\!{\bm A}\, \phi } \nonumber \\ \label{eq:J} & \eqdef -\int \delta\!{\bm A}\cdot {\bm J}\, d\underline{x} \end{align} using the abbreviated notation of (\ref{eq:complex-A}). These define the charge density $\rho$ and current density ${\bm J}$ of the state in question.
In classical notation, one writes \hbox{$\rho = {\delta E}/{\delta u}$} and \hbox{${\bm J} = -{\delta E}/{\delta {\bm A}}$}. More explicitly, \begin{equation} \label{eq:rho-formula} \rho(x) = \sum_\alpha \int \cc{\eta}\phi|_{(x_{\alpha}=x)} \, d\underline{x}_{-\alpha} \end{equation} and \begin{equation} \label{eq:J-formula} {\bm J}(x) = 2 {\bm A}(x)\rho(x) + \sum_\alpha \int i(\cc{\eta}\overleftrightarrow{\nabla}\phi)|_{(x_{\alpha}=x)} \, d\underline{x}_{-\alpha}, \end{equation} where the notation means that integration is over all positions except those of particle $\alpha$, which is set equal to $x$. Of course, when $\frm{h}$ is not hermitian, the physical interpretation of these as charge/current densities is rather unclear, but the identifications are natural generalizations, indeed analytic continuations. Restricted to hermitian $\frm{h}$, $\rho$ and $\bm J$ are $\Real$-analytic, but as maps into what Banach spaces? Simplifying very slightly what we had in Section~\ref{sec:QM}, we take $u$ and ${\bm A}$ in \hbox{$\sX_u = L^{3/2}(\Real^3)+L^\infty(\Real^3)$} and \hbox{$\sX_{\bm A} = \vec{L}^{3}(\Real^3)+\vec{L}^\infty(\Real^3)$}, respectively. As differentials of a scalar function on $\sX_u\times\sX_{\bm A}$, then, $(\rho,{\bm J})$ is in $\sX_u^*\times\sX_{\bm A}^*$, a priori. This is highly inconvenient due to the presence of the $L^\infty$ summands. Fortunately, we can show that \hbox{$\rho\in \sY_\rho = L^{3}\cap L^{1}$} and \hbox{${\bm J}\in \sY_{\bm J} = \vec{L}^{3/2}\cap \vec{L}^1$}. It then follows that $(u,{\bm A}) \mapsto (\rho,{\bm J})$ is analytic into $\sY_\rho \times \sY_{\bm J}$ because\cite{Liu+Wang-68,Liu+Wang-69} $\sX_u = \sY_\rho^*$, which implies that $\sY_\rho$ is embedded into $\sX_{u}^*$ [$ = \sY_\rho^{**}$] as a closed subspace, and similarly $\sY_{\bm J}$ into $\sX_{\bm A}^*$. Here, we understand $L^p\cap L^q$ to be equipped with the max norm \hbox{$\|f\| = \max(\|f\|_p,\|f\|_q)$}. 
It suffices to show that $\rho$ and ${\bm J}$ are integrable, since the integral forms (\ref{eq:rho},\ref{eq:J}), and the fact that they induce linear functionals on $L^{3/2}$ and $L^3$, respectively, then shows that $\rho\in L^3$ and ${\bm J}\in \vec{L}^{3/2}$. Here are the required bounds: First, from (\ref{eq:rho-formula}), $\|\rho\|_1 \le N \|\eta\|^2 = N \|P^* P\| = N \|P\|^2$, $P$ being the state projector [see (\ref{eq:Py(Gamma)})]. Then, from (\ref{eq:J-formula}), what was just shown establishes that $\rho{\bm A}$ is integrable, and the Cauchy-Bunyakovsky-Schwarz inequality shows that the second term is also, since $\eta,\phi\in\sHp$. As discussed in the Introduction, these conclusions are relevant to density functional theory (DFT), current-density functional theory (CDFT), and magnetic-field density functional theory. \section{Semigroups and statistical operators} \label{sec:free-energy} Whereas the ideas of the previous section trace their lineage back to the primitive notion of inversion, the progenitor of this section is exponentiation. We will study the operator family $e^{-\beta H}$ as $\beta$ ranges over a vertex-zero sector and $H$ over operators associated with a \RSF. In quantum statistical mechanics, $e^{-\beta H}$, assuming it is trace-class, is the unnormalized statistical operator of a system with Hamiltonian $H$ at temperature $T=\beta^{-1}$. The trace, $Z_{\beta,H} = \Tr e^{-\beta H}$, is the partition function, and $F_{\beta,H} = -\beta^{-1} \ln Z_{\beta,H}$ is interpreted as thermodynamic free energy. At nonzero temperature, the statistical operator and free energy play roles analogous to those played by the ground state and ground state energy at zero temperature. Temperature, however, is not the only thermodynamic control parameter. For a system with variable particle number(s), for instance, there are chemical potentials $\mu_i$ for the various species, $i$. 
$\beta H$ should be replaced by $\beta \left( H - \sum \mu_i N_i \right)$, where $N_i$ is the number of particles of species $i$. This can be treated as a Hamiltonian on a Fock space with variable particle number. Another thermodynamic parameter, volume, can be incorporated in the form of a confining potential. In this way, we naturally move in the direction of considering the Hamiltonian as being a highly variable object and studying the dependence of the statistical operator and free energy on it. This statistical interpretation ceases to be viable if the trace-class requirement is dropped, but this more relaxed setting also has physical interest, especially in connection with ideas around ``imaginary time'' evolution. Here, the {\em semigroup} aspects come to the fore. \hbox{$[0,\infty) \ni \beta \mapsto T(\beta) \equiv e^{-\beta H}$} should be the operator semigroup generated by $-H$. Just as Cor.~\ref{cor:Rmap-holo} showed that the $\Rmap$-map $(\zeta,x) \mapsto \Rmap(\zeta,x) = (\zeta-H_x)^{-1}$ is holomorphic on its natural domain in $\Cmplx\times \calU$, Cor.~\ref{cor:exp-holo} shows that the $\Emap$-map $(\beta,x) \mapsto \Emap(\beta,x) = e^{-\beta H_x}$ is holomorphic, where $\beta$ in the right half-plane $\CmplxRt$ is restricted only by the requirement of sectoriality. Section \ref{sec:free-energy-perturbation} considers a case where the statistical interpretation is viable. With $H_0$ a lower-bounded self-adjoint operator with resolvent in some Schatten class, and an \RSF\ in $\OF{H_0}$, $F_{\beta,x}$ is holomorphic for $\beta$ in some neighborhood of $\Real_+$ and $x$ in some neighborhood of zero. Similarly to the case of nondegenerate eigenstates considered in Section \ref{sec:rank-1}, this implies analyticity of (generalized) observables. Charge-density and current-density are again examined in detail. 
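As a toy illustration of these objects (a small hermitian matrix stands in for the Hamiltonian; nothing here is specific to the operators of this paper), the statistical operator, partition function, and free energy can be computed directly by spectral calculus:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
H = (M + M.T) / 2                        # toy self-adjoint Hamiltonian

beta = 2.0                               # inverse temperature
w, v = np.linalg.eigh(H)
rho_un = (v * np.exp(-beta * w)) @ v.T   # unnormalized statistical operator e^{-beta H}
Z = np.trace(rho_un)                     # partition function Z = Tr e^{-beta H}
F = -np.log(Z) / beta                    # free energy
rho = rho_un / Z                         # statistical operator, Tr rho = 1

assert abs(np.trace(rho) - 1) < 1e-12
assert F <= w[0]                         # F lies below the ground state energy
```

The final assertion reflects $Z \ge e^{-\beta E_0}$, so $F \le E_0$, with $F \to E_0$ as $\beta \to \infty$.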
\subsection{Operator semigroups} \label{eq:semigroups} We begin with a recollection of some relevant definitions\cite{Engel+Nagel,Engel+Nagel-big,Goldstein-semigps,Kato}. A map $\Arr{[0,\infty)}{U}{\Lin(\sX)}$ is a {\em strongly continuous operator semigroup} if \newline \noindent \textnormal{(1)} It respects the semigroup structure of $[0,\infty)$: \hbox{$U(0)=\Id$} and \hbox{$U(s+t) = U(s)U(t)$}. \newline \noindent \textnormal{(2)} For each $x\in\sX$, the orbit map $t \mapsto U(t)x$ is continuous. The {\em generator} $A$ of the semigroup is defined by \begin{equation} A x = \lim_{t\downarrow 0} \frac{U(t)x - x}{t}, \end{equation} $\dom A$ being the subspace on which the limit exists. $A$ is a closed operator with dense domain and for $x\in\dom A$, $\frac{d}{dt} U(t)x = U(t) Ax$ (e.g., Engel \& Nagel\cite{Engel+Nagel}, Thm II.1.4 and Lemma II.1.1). The semigroup $U(t)$ is often denoted $e^{tA}$, which can be understood in a very straightforward (power series) sense when $A$ is bounded. A strongly continuous semigroup is necessarily locally bounded in operator norm. If we leave everything above the same, except to expand the domain from $[0,\infty)$ to $\oSec{0}{\theta}\cup\{0\}$ (also a semigroup), $U$ is a {\em holomorphic} semigroup. That the appellation is deserved follows from denseness of $\dom A$ and local boundedness, which implies that $U$ is strongly holomorphic, and therefore [Lemma~\ref{lem:st-holo} (e)] holomorphic $\oSec{0}{\theta} \to \Lin(\sX)$. Now, if $H$ were bounded, $e^{-\beta H}$ could be obtained with a Riesz-Dunford-Taylor integral of the function $e^{-\beta\zeta}$ along a contour surrounding the entire spectrum. If $H$ is sectorial, though, its spectrum is unbounded only toward the right in $\Cmplx$, where $e^{-\beta\zeta}$ is rapidly decreasing, assuming $|\arg \beta|$ is not too large. This suggests that a contour such as $\Gamma$ in Fig.~\ref{fig:free-energy-contour} might work. 
That it does so is the content of the following theorem, for the proof of which we refer to the secondary literature. \begin{figure} \centering \includegraphics[width=65mm]{fig-semigp-contour} \caption{ The contour $\Gamma$ is adapted to the sector $\Sigma$. The dashed line is the boundary of a dilation of $\Sigma$ and $\Gamma$ lies exterior to it. } \label{fig:free-energy-contour} \end{figure} \begin{defn} The contour $\Gamma$ in $\Cmplx$ parameterized by arc-length $s$ is {\em adapted} to sector $\Sigma$ if $\re \Gamma(s) \to + \infty$ as $s \to \pm\infty$, and $\Gamma$ is exterior to some dilation of $\Sigma$ (item~\ref{item:sector}, Sec.~\ref{sec:sforms-1}). \end{defn} \begin{thm} \label{thm:holo-semigp} Let $A$ be a densely-defined operator with $\spec A$ contained in a sector $\Sigma$ of half-angle $\theta$, such that \begin{equation} \zeta \not\in\Sigma' \;\Rightarrow\; \|\Rmap(\zeta,A)\| \le \frac{M(\Sigma')}{|\zeta|+1}. \end{equation} for every dilation $\Sigma'$ of $\Sigma$. Then, with $\Gamma$ a contour adapted to $\Sigma$, a holomorphic semigroup \hbox{$\oSec{0}{\tfrac{\pi}{2}-\theta} \to \Lin(\sH)$} with generator $A$ is defined by \begin{equation} \label{eq:holomorphic-semigroup-integral} \beta \mapsto e^{-\beta A} = \int_\Gamma \Rmap(\zeta,A) e^{-\beta\zeta} \frac{d\zeta}{2\pi i}. \end{equation} \end{thm} \begin{proof} See \S II.4 of Engel \& Nagel\cite{Engel+Nagel}, \S IX.1.6 of Kato\cite{Kato}, or \S X.8 of Reed \& Simon\cite{Reed+Simon}. \end{proof} Because $e^{-\beta A}$ is holomorphic into bounded operators, it has a strong regularizing property not enjoyed by the generic operator semigroup: \begin{cor} \label{cor:maps-into-dom-A} $\beta \mapsto e^{-\beta A}$ is a continuous linear map of $\sH$ into $\dom A$ (with the $A$-norm). 
\end{cor} \subsection{The exponential map $\Emap$} \label{sec:inverse-exponential} Just as we earlier expanded the usual holomorphy of the resolvent $\Rmap(\zeta,H)$ with respect to the spectral parameter to find that it was holomorphic in a parameterization of $H$ via a \RSF, we will in this subsection (Thm.~\ref{thm:exponential-holo}) expand the holomorphy of $\beta \mapsto e^{-\beta H}$ just discussed to include $H$. If we imagine varying $A$ in (\ref{eq:holomorphic-semigroup-integral}), we see that we should restrict to $A$ with spectrum in a sector to which $\Gamma$ is adapted. Since we deal with operators coming from {\sqf}s, we want to consider sectors for the numerical ranges, not the spectra. \begin{notn} For a sector $\Sigma$, $\Op(\Sigma)$ denotes the set of closed, densely defined operators on $\sH$ with numerical range in $\Sigma$. \end{notn} A key ingredient of the theorem is the following lemma, which shows that the resolvent bound in Thm.~\ref{thm:holo-semigp} is respected. \begin{lem} \label{lem:resolvent-bound} Given sector $\Sigma$, and $\Sigma'$, a dilation of $\Sigma$, there is a constant $M(\Sigma,\Sigma')$ such that \begin{equation} \zeta\not\in\Sigma' \;\Rightarrow\; \|\Rmap(\zeta,H)\| < \frac{M(\Sigma,\Sigma')}{|\zeta|+1}. \end{equation} for every $H\in\Op(\Sigma)$. \end{lem} \begin{proof} This is an immediate consequence of Prop.~\ref{prop:resolvent-outside-Num}. \end{proof} \begin{thm} \label{thm:exponential-holo} Let $\frm{h}$ be a \RSF. With the notation $H_y = \Opp{0}{0}{\frm{h}_y}$ as in Sec. \ref{sec:closure}, \begin{equation} \Arr{\calU}{ y \mapsto e^{-H_y} }{\Lin(\sH)} \end{equation} is holomorphic. \end{thm} \begin{proof} Let $\Sigma$ be an ample sector for $\frm{h}_x$. Thus $\cl\Num H_x$ and, {\it a fortiori}, $\spec H_x$ is contained in $\Sigma$. Furthermore, by Lemma~\ref{lem:sectorial-usc}, there is a neighborhood $\calV$ of $x$ such that for $y \in \calV$, the same holds for $\spec H_y$. 
Now, let $\Gamma$ be a contour adapted to $\Sigma$ (Fig.~\ref{fig:free-energy-contour}) parameterized by arc length $s$, and $\Gamma_n$ the restriction to $-n \le s \le n$, for $n\in\Nat$. The integrals \begin{equation} {\mathcal I}_n(y) \defeq \int_{\Gamma_n} \Rmap(\zeta,{H}_y) e^{-\zeta} \frac{d\zeta}{2 \pi i}, \end{equation} and ${\mathcal I}(y)$, the integral over the entire contour, are well-defined on $\calV$. Since $\Gamma_n$ is compact, Thm.~\ref{thm:resolvent-holo} guarantees that $y \mapsto {\mathcal I}_n(y)$ is holomorphic. Finally, holomorphy of $\mathcal I$ will be secured by uniform convergence \hbox{${\mathcal I}_n\to {\mathcal I}$} on $\calV$, according to Prop. \ref{prop:convergent-sequences}. Such convergence holds due to the damping factor $e^{- \re \zeta}$ in the definition of ${\mathcal I}(y)$ combined with the resolvent bound in Lemma~\ref{lem:resolvent-bound}, which holds uniformly on $\calV$. \end{proof} \begin{defn} For a \RSF\ $\frm{h}$, the \hbox{$\Emap$-map} is defined by \begin{equation} \Emap(\beta,x) = e^{-\beta H_x} \end{equation} on the domain \begin{equation} \Omega {\!\defeq\!\!\!} \setof{(\beta,x)\in \Cmplx_{\text{rt}}\times\calU}{\beta H_x \text{ is sectorial}}, \end{equation} where $\Cmplx_{\text{rt}}$ is the open right half-plane $\re \beta > 0$. As with the $\Rmap$-map, we may also write $\Emap(\beta,\frm{t})$ for a particular \sqf\ $\frm{t}$, thinking of $\sct{\calC}$ as an \RSF\ parameterized over itself. \end{defn} \begin{cor} \label{cor:exp-holo} Let $\frm{h}$ be a \RSF\ defined on $\calU$, with associated family $x\mapsto H_x$ of closed operators. Then $\Arr{\Omega}{\Emap}{\Lin(\sH)}$ is holomorphic. 
\end{cor} \subsection{Statistical operator and free energy} \label{sec:free-energy-perturbation} In quantum statistical mechanics, $e^{-\beta H_x}$ is used in the following way: with real $\beta$ interpreted as inverse temperature, the partition function is $Z_{\beta,x} = \Tr e^{-\beta H_x}$, the free energy is $F_{\beta,x} = -\beta^{-1} \ln Z_{\beta,x}$, and the {\it statistical operator} is $\rho_{\beta,x} = Z_{\beta,x}^{-1} e^{-\beta H_x}$. The latter describes the (mixed) thermal state at inverse temperature $\beta$ under Hamiltonian $H_x$, so that the thermal expectation of (bounded, at least) observable $B$ in this state is \begin{equation} \langle B \rangle_x = \Tr \rho_{\beta,x} B. \end{equation} The basic condition for this to make mathematical sense is that $e^{-\beta H_x}$ be trace-class. When we generalize to allow non-real $\beta$ and non-self-adjoint $H_x$, the additional condition that $Z_{\beta,x} \neq 0$ is required. Phase transitions are generally identified with points of non-analyticity of the free energy density in the thermodynamic limit (quantity of matter tends to infinity at fixed temperature and pressure, or whatever parameters are appropriate). For simple lattice models in particular, it is easy to see that free energy density is analytic for finite systems, while (not so easy to see) singularities can occur in the thermodynamic limit. This is strongly connected with the dogma that phase transitions are phenomena purely of the thermodynamic limit\cite{Kadanoff-09}. One may well ask, however, to what extent we may rule out non-analyticity with more realistic Hamiltonians and a greater, possibly infinite, number of parameters, without any thermodynamic limit. This question is addressed here. Thm.~\ref{thm:exponential-holo} is an important stepping stone, but the conclusion of holomorphy into $\Lin(\sH)$ must be strengthened. 
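The contour representation (\ref{eq:holomorphic-semigroup-integral}) can be tested directly in finite dimensions. In the sketch below, a positive-definite toy matrix, a wedge contour with vertex at $-1$ traversed so that the spectrum is enclosed counterclockwise, and trapezoidal quadrature are all illustrative choices; the integral reproduces $e^{-\beta H}$ as computed by spectral calculus:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
H = M @ M.T + np.eye(4)                  # spectrum in [1, infinity)

beta, theta = 1.0, np.pi / 4             # wedge half-angle
s = np.linspace(-40.0, 40.0, 16001)      # arc-length parameter
ds = s[1] - s[0]
I4 = np.eye(4)

E = np.zeros((4, 4), dtype=complex)
for si in s:
    # Gamma(s): Re -> +inf at both ends; traversed top-right -> vertex -> bottom-right
    zeta = -1.0 + abs(si) * np.cos(theta) - 1j * si * np.sin(theta)
    dzeta = np.sign(si) * np.cos(theta) - 1j * np.sin(theta)   # dGamma/ds
    E += np.linalg.solve(zeta * I4 - H, I4) * np.exp(-beta * zeta) * dzeta
E *= ds / (2j * np.pi)

w, v = np.linalg.eigh(H)
E_ref = (v * np.exp(-beta * w)) @ v.T    # spectral-calculus answer
assert np.max(np.abs(E - E_ref)) < 1e-3
```

The damping factor $e^{-\beta\,\re\zeta}$ makes the truncation of the contour at finite arc length harmless, exactly as in the convergence argument above.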
A very useful frame in which to think is that of an ``unperturbed'' Hamiltonian with a polynomial-bounded energy density of states. This seems to be about the right assumption, since it will allow a good perturbation theory as we shall see, while being satisfied in the usual models. For instance, for $N$ distinguishable particles moving in three dimensions, the density of states for a harmonic oscillator Hamiltonian is ${\mathcal O}(E^{3N-1})$, and for the usual kinetic energy in a box with periodic boundary conditions, ${\mathcal O}(E^{(3N-2)/2})$. Nontrivial quantum statistics or repulsive interactions only improve matters, by the min-max principle. The main theorem (\ref{thm:holo-statistical-op}) is framed in the context of the space $\OF{H_0}$ of Sec.~\ref{sec:self-adjoint}, where $H_0$ is a lower-bounded self-adjoint operator with resolvent in $\Lin^p(\sH)$ for some $p$, and says that $e^{-\beta H_x}$ is holomorphic for $\beta$ in a nontrivial sector and $x$ in some neighborhood of $0$. The key ideas involved are Prop.~\ref{prop:automatic-holo-Lp}, bounding the integral (\ref{eq:holomorphic-semigroup-integral}) simply by bounding the $\Rmap(\zeta,H_x)$, and using elementary semigroup properties to get an $\Lin^1(\sH)$ bound from an $\Lin^p(\sH)$ bound. Here is the main result of this subsection. \begin{thm} \label{thm:holo-statistical-op} If self-adjoint $H_0$ is such that $\Rmap(\zeta,H_0)$ is in $\Lin^p(\sH)$ for one (hence every) resolvent point and some $1\le p < \infty$, then \begin{equation} \nonumber \Arr{\oSec{0}{\frac{\pi}{4}}\times B_{1}(\OF{H_0})}{\Emap}{\Lin^1(\sH)} \end{equation} is holomorphic. \end{thm} Proof of the theorem proceeds through four lemmas. The first reduces the context from $\Lin^1(\sH)$ to $\Lin^p(\sH)$. \begin{lem} \label{lem-Exp-from-Lp-to-L1} Suppose \begin{equation} \nonumber \Arr{\oSec{0}{\theta}\times\calU}{\Emap}{\Lin^p(\sH)} \end{equation} is locally bounded. Then, ${\Emap}$ is holomorphic into $\Lin^1(\sH)$. 
\end{lem} \begin{proof} According to Thm.~\ref{thm:exponential-holo} and Prop.~\ref{prop:automatic-holo-Lp}, what needs to be shown is that ${\Emap}$ is a locally bounded map into $\Lin^1(\sH)$. Given the hypotheses, though, that follows from the generalized H\"older inequality \begin{equation} \|e^{-\beta H}\|_1 \le \|e^{-(\beta/p) H}\|_p^p . \end{equation} \end{proof} To make use of this Lemma, we need conditions which will ensure the hypothesized local $\Lin^p$-boundedness. In the next two Lemmas, a sector $\Sigma$, a dilation $\Sigma'$ of $\Sigma$, and a point $\zeta_0 \not\in\Sigma'$ are understood as given, while $H$ is arbitrary in $\Op(\Sigma)$. They reduce the problem to one of bounding $\|\Rmap(\zeta_0,H)\|_p$. \begin{lem} \label{lem:resolvent-bounded-uniformly-in-zeta} \begin{equation} \| \Rmap(\zeta,H)\|_p \le C(\Sigma,\Sigma',\zeta_0) \| \Rmap(\zeta_0,H) \|_p \end{equation} \end{lem} \begin{proof} Lemma~\ref{lem:resolvent-bound} ensures that the factor in square brackets in the resolvent identity \begin{equation} \Rmap(\zeta,H) = [1+\Rmap(\zeta,H)(\zeta-\zeta_0)]\Rmap(\zeta_0,H), \end{equation} is bounded uniformly for $\zeta\not\in\Sigma'$. \end{proof} \begin{lem} \label{lem:exponential-bounded-in-Lp} \begin{equation} \|e^{-\beta H}\|_p \le M(\Sigma,\Sigma',\zeta_0,\beta) \|\Rmap(\zeta_0,H)\|_p, \end{equation} with $M(\Sigma,\Sigma',\zeta_0,\beta)$ locally bounded in \hbox{$\beta\in\oSec{0}{\tfrac{\pi}{2} - \theta}$}, where $\theta$ is the half-angle of $\Sigma'$. \end{lem} \begin{proof} Let contour $\Gamma$ satisfy $\zeta_0 \in \Gamma \subset \Cmplx\setminus\Sigma'$ (hence, $\Gamma$ is adapted to $\Sigma$). 
Then, \begin{align} \| e^{-\beta H} \|_p & = \Big\| \int_\Gamma \Rmap(\zeta,H) e^{-\beta \zeta} \frac{d\zeta}{2\pi i} \Big\|_p \nonumber \\ & \le \int_\Gamma \| \Rmap(\zeta,H) \|_p e^{- \re \beta \zeta} \frac{|d\zeta|}{2\pi} \nonumber \\ & \le C(\Gamma,\beta) \sup_{\zeta\in\Gamma} \| \Rmap(\zeta,H) \|_p \nonumber \\ & \le M(\Sigma,\Sigma',\zeta_0,\beta) \| \Rmap(\zeta_0,H) \|_p. \end{align} The third line follows since \hbox{$\int_\Gamma e^{- \re \beta \zeta} {|d\zeta|} < \infty$}, and the fourth line is by Lemma~\ref{lem:resolvent-bounded-uniformly-in-zeta}. \end{proof} \begin{lem} \label{lem:perturbed-resolvent-Lp-bound} If $\dom H \subseteq \dom A$ and \hbox{$\| A\Rmap(\zeta_0,H) \| < 1$}, then \begin{equation} \nonumber \| \Rmap(\zeta_0,H+A) \|_p \le \| ( 1 + A\Rmap(\zeta_0,H))^{-1}\| \| \Rmap(\zeta_0,H) \|_p \end{equation} \end{lem} \begin{proof} Immediate. \end{proof} \begin{proof}[Completion of Proof of Thm.~\ref{thm:holo-statistical-op}] Now it is merely a matter of stringing the pieces together. Prop.~\ref{prop:automatic-holo-Lp} asserts that local boundedness of $e^{-\beta H_x}$ in $\Lin^1(\sH)$ suffices to establish holomorphy; Lemma~\ref{lem-Exp-from-Lp-to-L1} shows that $\Lin^1(\sH)$ can be replaced by $\Lin^p(\sH)$; Lemma~\ref{lem:exponential-bounded-in-Lp} that we only need a local bound on $\Rmap(\zeta_0,H_x)$; and Lemma~\ref{lem:perturbed-resolvent-Lp-bound} shows how big the perturbation can be. According to the definition of $\OF{H_0}$ [see Section~\ref{sec:self-adjoint}, especially Prop.~\ref{prop:X(H)}], it suffices that $\|\frm{t}-\frm{h}\|_{H_0}< 1$. The restriction on $\beta$ is needed to ensure that $\beta H_x$ is sectorial for all $x$ in $B_1(\OF{H_0})$. \end{proof} \subsection{Thermal expectations} \label{sec:thermal-expectations} This subsection is concerned with consequences of Thm.~\ref{thm:holo-statistical-op}. In other words, what do we do with the holomorphic statistical operator? 
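Before answering, we note that the generalized H\"older inequality of Lemma~\ref{lem-Exp-from-Lp-to-L1} is transparent in finite dimensions, where the Schatten $q$-norm is the $\ell^q$ norm of the singular values; for self-adjoint $H$ the bound even holds with equality. The following sketch (matrix size, $\beta$, and $p$ are illustrative assumptions) confirms this:

```python
import numpy as np

def schatten(A, q):
    """Schatten q-norm: the l^q norm of the singular values."""
    sv = np.linalg.svd(A, compute_uv=False)
    return np.sum(sv ** q) ** (1.0 / q)

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 6))
H = (M + M.T) / 2                               # self-adjoint toy Hamiltonian
beta, p = 1.5, 3

w, v = np.linalg.eigh(H)
E_full = (v * np.exp(-beta * w)) @ v.T          # e^{-beta H}
E_frac = (v * np.exp(-(beta / p) * w)) @ v.T    # e^{-(beta/p) H}

lhs = schatten(E_full, 1)
rhs = schatten(E_frac, p) ** p
assert lhs <= rhs + 1e-9
assert abs(lhs - rhs) < 1e-8 * rhs              # equality in the self-adjoint case
```

Both sides reduce to $\sum_i e^{-\beta\lambda_i}$ when $H = H^*$; the inequality does real work only for non-normal $H$.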
We suppose given a \RSF\ in $B_1(\OF{H_0})$, and adopt the notational convention that $\frm{h}_x$ corresponds to the operator $H_0 + T_x$ on $\dom H_0$ (i.e., this is $\Opp{0}{0}{\frm{h}_x}$). We can be fairly explicit about the Taylor series expansion of $e^{-\beta(H_0+T_x)}$. By appeal to Cor.~\ref{cor:maps-into-dom-A}, \begin{equation} \label{eq:perturbed-semigroup} e^{-\beta(H_0+T_x)} = e^{-\beta H_0} + \int_0^1 e^{-s\beta(H_0+T_x)} (-\beta T_x) e^{-(1-s)\beta H_0} \,ds \end{equation} for any $\frm{h}_x\in\OF{H_0}$. Iteration shows that the $n$-th term of the Taylor series has the familiar form \begin{equation} (-\beta)^n\int_{\substack{ s\ge 0 \\ \sum s_k = 1}} e^{-s_{n+1}\beta H_0} T_x e^{-s_n \beta H_0}\ldots T_x e^{-s_1 \beta H_0} \,d\underline{s}. \end{equation} Thm.~\ref{thm:holo-statistical-op} implies that this actually converges for small enough $x$. In the following, we will be concerned only with the first term, however. For $(\beta,x)$ in some neighborhood of $\Real_+ \times \{0\}$, $Z_{\beta,x}$ is nonzero and therefore the free energy $F_{\beta,x}$ is well-defined and holomorphic. According to (\ref{eq:perturbed-semigroup}), the derivative (also holomorphic) $-\beta D_x F_{\beta,x}$ is the expectation value $\Tr \rho_{\beta,x} D_x T_x$. \subsubsection{charge/current density} \label{sec:thermal-cc-density} Parallel to the treatment of properties of energetically-isolated eigenstates in Section~\ref{sec:rank-1}, we will consider charge and current-density in the thermal context for a system of $N$ nonrelativistic particles in a three-dimensional box, under a periodic-boundary-condition Hamiltonian consisting of a kinetic energy operator $K_{\bm A} = \sum_{\alpha=1}^{N} \left|i\nabla_\alpha + {\bm A}(x_\alpha)\right|^2$, a one-body potential operator $U_u = \sum_\alpha u(x_\alpha)$, and a repulsive two-body interaction $V_v = \tfrac{1}{2}\sum_{\alpha\neq\beta} v(x_\alpha-x_\beta)$. The variables here are $u$ and ${\bm A}$. 
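The Duhamel-type identity $e^{-\beta(H_0+T)} = e^{-\beta H_0} + \int_0^1 e^{-s\beta(H_0+T)}(-\beta T)\,e^{-(1-s)\beta H_0}\,ds$ invoked above can be checked by quadrature for hermitian matrices. All sizes, the quadrature rule, and the tolerance below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
M0 = rng.standard_normal((5, 5)); H0 = (M0 + M0.T) / 2
M1 = rng.standard_normal((5, 5)); T = 0.1 * (M1 + M1.T)   # small perturbation
beta = 1.0

w0, v0 = np.linalg.eigh(H0)
w1, v1 = np.linalg.eigh(H0 + T)
e0 = lambda t: (v0 * np.exp(t * w0)) @ v0.T   # e^{t H0}
e1 = lambda t: (v1 * np.exp(t * w1)) @ v1.T   # e^{t (H0+T)}

s = np.linspace(0.0, 1.0, 4001)
ds = s[1] - s[0]
integral = np.zeros((5, 5))
for i, si in enumerate(s):
    term = e1(-si * beta) @ (-beta * T) @ e0(-(1.0 - si) * beta)
    wgt = 0.5 if i in (0, len(s) - 1) else 1.0  # trapezoid end weights
    integral += wgt * term * ds

lhs = e1(-beta)
rhs = e0(-beta) + integral
assert np.max(np.abs(lhs - rhs)) < 1e-3
```

Iterating the same identity inside the integrand generates the Taylor series term by term.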
$H_0 = K_{\bm A} + V_v$ has a polynomial-bounded density of states, hence Thm.~\ref{thm:holo-statistical-op} applies and the free energy is holomorphic. Charge and current-density are obtained by differentiating the free energy with respect to $u$ and ${\bm A}$, respectively, hence are also holomorphic if the perturbed Hamiltonians comprise a \RSF\ in $\OF{H_0}$. Now, apply the result of Kato (\S~5.5.3 of Kato\cite{Kato}, Thm.~6.2.2 of de~Oliveira\cite{deOliveira} or Example 13.4 of Hislop \& Sigal\cite{Hislop+Sigal}) that a potential in $L^2(\Real^3) + L^\infty(\Real^3)$ is relatively bounded with respect to the ${\bm A}=0$ kinetic energy operator $-\Delta$ with relative bound zero (see Def.~\ref{def:relative-operator-bound}). Since the system is confined to a box, a bounded potential is automatically square-integrable. For the kinetic energy operator \begin{equation} |i\nabla + {\bm A}|^2 = -\Delta + 2i{\bm A}\cdot\nabla + i\,{\mathrm{div}}{\bm A} + |{\bm A}|^2, \end{equation} ${\bm A}$ must be restricted so that each of the last three terms is adequately tame. It will suffice that ${\bm A}\in \vec{L}^4(\text{Box})$ if we work in Coulomb gauge, i.e., $\mathrm{div}{\bm A}=0$, or in Fourier components, ${\bm q}\cdot\tilde{\bm A}({\bm q})=0$. We will denote this subspace of ``transverse'' vector fields by $\vec{L}^4(\text{Box})_{\text{trans}}$. That restriction obviously takes care of the divergence term. For the $|{\bm A}|^2$ term, \begin{equation} \| |{\bm A}|^2 \|_{L^2} \le c \| {\bm A} \|_{L^4}^2 \end{equation} by H\"older's inequality. Finally, since the box is bounded, $\vec{L}^4(\text{Box})$ is continuously embedded in $\vec{L}^3(\text{Box})$, so (\ref{eq:Holder-Sobolev}) demonstrates a suitable bound for ${\bm A}\cdot \nabla\psi$ when \hbox{$\psi \in \dom (-\Delta)$}. For the scalar potential, no trickery is required to apply the result cited above. Simply assume $u,v\in L^2(\text{Box})$. 
Thus, we obtain an \RSF\ in $\OF{H_0}$ defined for $x \equiv (u,{\bm A})$ on some neighborhood of the origin in $L^2(\text{Box}) \times\vec{L}^4(\text{Box})_{\text{trans}}$. The charge/current density $(\rho,{\bm J}) = -\beta D_xF_{\beta,x}$ is then an analytic function of $x$ valued in $L^2(\text{Box}) \times \vec{L}^{4/3}(\text{Box})$. \section{Summary} \label{sec:summary} Here is a summary of the apparatus developed in this paper, from an application-oriented perspective. The starting point is a family \hbox{$\Arr{\sX\supseteq \calU}{\frm{h}}{\sct{\SF}(\sK)}$} of closable, mutually relatively bounded, sectorial {\sqf}s parameterized over $\calU$. Thinking of these as generalized Hamiltonians, we take sectoriality as an appropriate generalization of lower-bounded and hermitian, one which allows use of holomorphy. If quantities related to these forms $\frm{h}_x$ or their associated operators $H_x$ are holomorphic in the parameter $x\in\calU$, then real analyticity results for proper Hamiltonians by restriction. If $x\mapsto\frm{h}_x$ is a \RSF, then holomorphy of $(\zeta,x) \mapsto \Rmap(\zeta,x) = (\zeta-H_x)^{-1}$ and $(\beta,x) \mapsto \Emap(\beta,x) = e^{-\beta H_x}$, as maps into $\Lin(\sH)$, is secured on natural domains. This is the content of Cors.~\ref{cor:Rmap-holo} and \ref{cor:exp-holo}, respectively. Prop.~\ref{prop:regular-sectorial} provides a few sets of convenient criteria for $x\mapsto\frm{h}_x$ to be a \RSF. One of these is: (a) G-holomorphy: for each $x\in\calU$, $w\in \sX$, and $\psi\in\sK$, \hbox{$\zeta \mapsto \frm{h}_{x+\zeta w}[\psi]$} is holomorphic on some neighborhood of the origin in $\Cmplx$; and (b) local boundedness: each $x\in\calU$ has a neighborhood such that $\frm{h}_y$ is bounded uniformly relative to $\frm{h}_x$ for $y$ in that neighborhood. The practicality of these criteria is demonstrated in Section~\ref{sec:QM}, where an \RSF\ of multi-particle Schr\"odinger forms is constructed. 
The $\Rmap$-map and $\Emap$-map are themselves mostly means to an end. An important tool in using them is Prop.~\ref{prop:automatic-holo-Lp}, which says that either is actually holomorphic into the Schatten class $\Lin^p(\sH)$ (not just into $\Lin(\sH)$) if it is merely locally bounded into $\Lin^p(\sH)$. Using this, we can effectively deal with properties of isolated eigenstates, or of thermal states when the $\Emap$-map is verified to be locally bounded into trace-class operators (Thm.~\ref{thm:holo-statistical-op}). Particularly interesting are derivatives of the energy or free energy with respect to scalar potential $u$ or vector potential ${\bm A}$, which give (expectation of) charge-density and current-density, respectively. As differentials of holomorphic functions, these are automatically holomorphic themselves. In the case of isolated eigenstates (Section \ref{sec:eigenstate-cc-density}), $(\rho,{\bm J})$ is analytic in \hbox{$(L^{3}(\Real^3) \cap L^1(\Real^3))\times(\vec{L}^{3/2}(\Real^3)\cap \vec{L}^1(\Real^3))$} as a function of $(u,{\bm A})$ in $(L^{3/2}+L^\infty) \times (\vec{L}^3+\vec{L}^\infty)$. For thermal states, additional restrictions are required on the potentials to ensure existence of the free energy. For a system in a box (Section~\ref{sec:thermal-cc-density}), $(\rho,{\bm J})$ is analytic in $L^{2}(\text{Box}) \times \vec{L}^{4/3}(\text{Box})$ as a function of $(u,{\bm A})$ in $L^{2} \times \vec{L}^4$, with ${\bm A}$ in Coulomb gauge.
TITLE: Prove the tautology $(q \land (p \rightarrow \neg q)) \rightarrow \neg p$
QUESTION [1 upvotes]: I must prove this tautology using logical equivalences but I can't quite figure it out. I know it has something to do with the fact that not p and p have opposite truth values at all times. Any help would be appreciated.
REPLY [0 votes]: Use the fact that $p \rightarrow q \Leftrightarrow \neg p \lor q$
Applied to your statement:
$(q \land (p \rightarrow \neg q)) \rightarrow \neg p \Leftrightarrow$
$\neg (q \land (p \rightarrow \neg q)) \lor \neg p \Leftrightarrow$
$\neg q \lor \neg (p \rightarrow \neg q) \lor \neg p \Leftrightarrow$
$\neg q \lor \neg (\neg p \lor \neg q) \lor \neg p \Leftrightarrow$
$\neg q \lor (p \land q) \lor \neg p \Leftrightarrow$
$((\neg q \lor p) \land (\neg q \lor q)) \lor \neg p \Leftrightarrow$
$((\neg q \lor p) \land \top) \lor \neg p \Leftrightarrow$
$(\neg q \lor p) \lor \neg p \Leftrightarrow$
$\neg q \lor p \lor \neg p$
... and now you're almost there ... do you see it?
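Independently of the equivalence chain, one can confirm the tautology by brute force over all four truth assignments (a small Python sketch; `implies` is a helper defined here, not a library function):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b, i.e., (not a) or b."""
    return (not a) or b

# (q and (p -> not q)) -> not p must hold for every assignment of p, q
assert all(
    implies(q and implies(p, not q), not p)
    for p, q in product([False, True], repeat=2)
)
```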
\begin{document} \baselineskip=16pt \title[Relations in the tautological ring] {Relations in the tautological ring of the moduli space of $K3$ surfaces} \author{Rahul Pandharipande} \address{Department of Mathematics, ETH Z\"urich} \email {rahul@math.ethz.ch} \author{Qizheng Yin} \address{Department of Mathematics, ETH Z\"urich} \email {qizheng.yin@math.ethz.ch} \date{July 2016} \begin{abstract} We study the interplay of the moduli of curves and the moduli of $K3$ surfaces via the virtual class of the moduli spaces of stable maps. Using Getzler's relation in genus 1, we construct a universal decomposition of the diagonal in Chow in the third fiber product of the universal $K3$ surface. The decomposition has terms supported on Noether-Lefschetz loci which are not visible in the Beauville-Voisin decomposition for a fixed $K3$ surface. As a result of our universal decomposition, we prove the conjecture of Marian-Oprea-Pandharipande: the full tautological ring of the moduli space of $K3$ surfaces is generated in Chow by the classes of the Noether-Lefschetz loci. Explicit boundary relations are constructed for all $\kappa$ classes. More generally, we propose a connection between relations in the tautological ring of the moduli spaces of curves and relations in the tautological ring of the moduli space of $K3$ surfaces. The WDVV relation in genus 0 is used in our proof of the MOP conjecture. 
\end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \setcounter{section}{-1} \newpage \section{Introduction} \subsection{$\kappa$ classes} \label{MMM} Let $\mathcal{M}_{2\ell}$ be the moduli space of quasi-polarized $K3$ surfaces $(X,H)$ of degree $2\ell>0$: \begin{enumerate} \item[$\bullet$] $X$ is a nonsingular, projective $K3$ surface over $\com$, \item[$\bullet$] $H\in \text{Pic}(X)$ is a primitive and nef class satisfying $$\langle H, H\rangle_X\, =\, \int_X H^2\, =\, 2\ell\, .$$ \end{enumerate} The basics of quasi-polarized $K3$ surfaces and their moduli are reviewed in Section \ref{ooo}. Consider the universal quasi-polarized $K3$ surface over the moduli space, $$ \pi: \mathcal{X} \rightarrow \mathcal{M}_{2\ell}\, .$$ We define a {canonical} divisor class on the universal surface, $$\mathcal{H} \ \in \ \mathsf{A}^1(\mathcal{X},\mathbb{Q})\, ,$$ which restricts to $H$ on the fibers of $\pi$ by the following construction. Let $\MM_{0,1}(\pi,H)$ be the $\pi$-relative moduli space of stable maps: $\MM_{0,1}(\pi,H)$ parameterizes stable maps from genus 0 curves with 1 marked point to the fibers of $\pi$ representing the fiberwise class $H$. Let $$\epsilon: \MM_{0,1}(\pi,H) \rightarrow \mathcal{X}\, $$ be the evaluation morphism over $\mathcal{M}_{2\ell}$. The moduli space $\MM_{0,1}(\pi,H)$ carries a $\pi$-relative reduced obstruction theory with reduced virtual class of $\pi$-relative dimension $1$. We define $$\mathcal{H} \, = \, \frac{1}{N_0(\ell)} \,\cdot \, \epsilon_*\left[ \MM_{0,1}(\pi,H)\right]^{\text{red}} \ \in \ \mathsf{A}^1(\mathcal{X},\mathbb{Q}) \, ,$$ where $N_0(\ell)$ is the genus 0 Gromov-Witten invariant{\footnote{While $\ell>0$ is required for the quasi-polarization $(X,H)$, the reduced Gromov-Witten invariant~$N_0(\ell)$ is well-defined for all $\ell\geq -1$.}} $$N_0(\ell) = \int_{[\MM_{0,0}(X,H)]^{\text{red}}} 1\,. $$ By the Yau-Zaslow formula{\footnote{The formula was proposed in \cite{yauz}. 
The first proofs in the primitive case can be found in \cite{bea,brl}. We will later require the full Yau-Zaslow formula for the genus 0 Gromov-Witten counts also in imprimitive classes proven in \cite{KMPS}.}}, the invariant $N_0(\ell)$ is never 0 for $\ell\geq -1$, $$\sum_{\ell=-1}^\infty q^\ell N_0(\ell) \,=\, \frac{1}{q}+ 24 + 324 q + 3200 q^2 \ldots\, .$$ The construction of $\mathcal{H}$ is discussed further in Section \ref{fafa2}. The $\pi$-relative tangent bundle of $\mathcal{X}$, $$ \mathcal{T}_\pi\rightarrow \mathcal{X} \, , $$ is of rank 2 and is canonically defined. Using $\mathcal{H}$ and $c_2(\mathcal{T}_\pi)$, we define the $\kappa$ classes, $$\kappa_{[a;b]} \, = \, \pi_*\left(\mathcal{H}^a \cdot c_2(\mathcal{T}_\pi)^b\right) \ \in \mathsf{A}^{a+2b-2}(\mathcal{M}_{2\ell},\mathbb{Q})\, .$$ Our definition follows \cite[Section 4]{MOP3} {\it except for the canonical choice of $\mathcal{H}$}. The construction here requires {\it no} choices to be made in the definition of the $\kappa$ classes. \subsection{Strict tautological classes} \label{hah2} The Noether-Lefschetz loci also define classes in the Chow ring $\AA(\mathcal{M}_{2\ell},\mathbb{Q})$. Let $$\NL(\mathcal{M}_{2\ell}) \subset \AA(\mathcal{M}_{2\ell},\mathbb{Q})$$ be the subalgebra generated by the Noether-Lefschetz loci (of all codimensions). On the Noether-Lefschetz locus{\footnote{We view the Noether-Lefschetz loci as proper maps to $\mathcal{M}_{2\ell}$ instead of subspaces.}} $$\mathcal{M}_{\Lambda}\rightarrow \mathcal{M}_{2\ell}\, ,$$ corresponding to the larger Picard lattice $\Lambda \supset (2\ell)$, richer $\kappa$ classes may be defined by simultaneously using several elements of $\Lambda$. We define {\it canonical} $\kappa$ classes based on the lattice polarization $\Lambda$. 
A nonzero class $L\in \Lambda$ is {\it admissible} if \begin{enumerate} \item[(i)] $L= m \cdot \widetilde{L}$ with $\widetilde{L}$ primitive, $m>0$, and $\langle \widetilde{L}, \widetilde{L}\rangle_\Lambda\geq -2$, \item[(ii)] $\langle H, L\rangle_\Lambda \geq 0$, \end{enumerate} and in case of equality in (ii), which forces equality in (i) by the Hodge index theorem, \begin{enumerate} \item[(ii')] $L$ is effective. \end{enumerate} Effectivity is {\it equivalent} to the condition $$\langle H, L\rangle_\Lambda \geq 0\, $$ for {\it every} quasi-polarization $H\in \Lambda$ for a generic $K3$ surface parameterized by $\mathcal{M}_\Lambda$. For $L\in \Lambda$ admissible, we define $$\mathcal{L} \, = \, \frac{1}{N_0(L)} \,\cdot \, \epsilon_*\left[ \MM_{0,1}(\pi_\Lambda,L)\right]^{\text{red}} \ \in \ \mathsf{A}^1(\mathcal{X}_\Lambda,\mathbb{Q}) \, ,$$ where $\pi_\Lambda:\mathcal{X}_\Lambda \rightarrow \mathcal{M}_{\Lambda}$ is the universal $K3$ surface. The reduced Gromov-Witten invariant $$N_0(L) = \int_{[\MM_{0,0}(X,L)]^{\text{red}}} 1$$ is nonzero for all admissible classes by the full Yau-Zaslow formula proven in \cite{KMPS}, see Section \ref{gen0}. For $L_1,\ldots,L_k\in \Lambda$ admissible classes, we have canonically constructed divisors $$\mathcal{L}_1,\ldots,\mathcal{L}_k \ \in \ \mathsf{A}^1(\mathcal{X}_\Lambda,\mathbb{Q})\, .$$ We define the richer $\kappa$ classes on $\mathcal{M}_\Lambda$ by \begin{equation}\label{xrrx} \kappa_{[L_1^{a_1},\ldots,L_k^{a_k};b]} \, = \, \pi_{\Lambda*}\left( \mathcal{L}_1^{a_1}\cdots \mathcal{L}_k^{a_k} \cdot c_2(\mathcal{T}_{\pi_\Lambda})^b\right) \ \in \ \mathsf{A}^{\sum_i a_i+2b-2}(\mathcal{M}_{\Lambda},\mathbb{Q})\, . 
\end{equation} We will sometimes suppress the dependence on the $L_i$, $$\kappa_{[L_1^{a_1},\ldots,L_k^{a_k}; b]}= \kappa_{[{a_1},\ldots,{a_k}; b]}\, .$$ We define the {\it strict tautological ring} of the moduli space of $K3$ surfaces, $${\mathsf{R}}^\star(\mathcal{M}_{2\ell}) \subset \AA(\mathcal{M}_{2\ell},\mathbb{Q})\, ,$$ to be the subring generated by the push-forwards from the Noether-Lefschetz loci $\mathcal{M}_\Lambda$ of all products of the $\kappa$ classes \eqref{xrrx} obtained from admissible classes of $\Lambda$. By definition, $$\NL(\mathcal{M}_{2\ell}) \subset {\mathsf{R}}^\star(\mathcal{M}_{2\ell})\, .$$ There is no need to include a $\kappa$ index for the first Chern class of $\mathcal{T}_\pi$ since $$c_1(\mathcal{T}_\pi) = -\pi^*\lambda$$ where $\lambda=c_1(\mathbb E)$ is the first Chern class of the Hodge line bundle $$\mathbb{E} \rightarrow \mathcal{M}_{2\ell}$$ with fiber $H^0(X,K_X)$ over the moduli point $(X,H)\in \mathcal{M}_{2\ell}$. The Hodge class $\lambda$ is known to be supported on Noether-Lefschetz divisors.{\footnote{ By \cite[Theorem 1.2]{BKPSB}, $\lambda$ on $\mathcal{M}_\Lambda$ is supported on Noether-Lefschetz divisors for every lattice polarization~$\Lambda$. See also \cite[Theorem 3.1]{DM} for a stronger statement: $\lambda$ on $\mathcal{M}_{2\ell}$ is supported on any infinite collection of Noether-Lefschetz divisors.}} A slightly different {tautological ring} of the moduli space of $K3$ surfaces was defined in \cite{MOP3}. A basic result conjectured in \cite{MP} and proven in \cite{Ber} is the isomorphism $$\mathsf{NL}^1(\mathcal{M}_{2\ell}) = \mathsf{A}^1(\mathcal{M}_{2\ell},\mathbb{Q})\, .$$ In fact, the Picard group of $\mathcal{M}_{\Lambda}$ is generated by the Noether-Lefschetz divisors of $\mathcal{M}_{\Lambda}$ for every lattice polarization $\Lambda$ of rank $\leq 17$ by \cite{Ber}. 
As an immediate consequence, the strict tautological ring defined here is isomorphic to the tautological ring of \cite{MOP3} in all codimensions up to 17. Since the dimension of $\mathcal{M}_{2\ell}$ is 19, the differences in the two definitions are only possible in degrees 18 and 19. We prefer to work with the strict tautological ring. A basic advantage is that the $\kappa$ classes are defined canonically (and not {\it up to twist} as in \cite{MOP3}). Every class of the strict tautological ring $\mathsf{R}^\star(\mathcal{M}_{2\ell})$ is defined explicitly. A central result of the paper is the following generation property conjectured first in \cite{MOP3}. \begin{theorem} \label{dxxd} The strict tautological ring is generated by Noether-Lefschetz loci, $$\NL(\mathcal{M}_{2\ell}) = \mathsf{R}^\star(\mathcal{M}_{2\ell})\, .$$ \end{theorem} Our construction also defines the strict tautological ring $${\mathsf{R}}^\star(\mathcal{M}_{\Lambda})\subset \mathsf{A}^\star(\mathcal{M}_{\Lambda},\mathbb{Q})$$ for every lattice polarization $\Lambda$. 
As before, the subring generated by the Noether-Lefschetz loci corresponding to lattices $\widetilde{\Lambda} \supset \Lambda$ is contained in the strict tautological~ring, $$\NL(\mathcal{M}_{{\Lambda}}) \subset {\mathsf{R}}^\star(\mathcal{M}_{\Lambda})\, .$$ In fact, we prove a generation result parallel to Theorem \ref{dxxd} for every lattice polarization, $$\NL(\mathcal{M}_{\Lambda}) = \mathsf{R}^\star(\mathcal{M}_{\Lambda}) \,.$$ \subsection{Fiber products of the universal surface} \label{ttun} Let $\mathcal{X}^n$ denote the $n^{\text{th}}$ fiber product of the universal $K3$ surface over $\mathcal{M}_{2\ell}$, $$\pi^n: \mathcal{X}^n \rightarrow \mathcal{M}_{2\ell}\, .$$ The strict tautological ring $${\mathsf{R}}^\star(\mathcal{X}^n) \subset \mathsf{A}^\star(\mathcal{X}^n,\mathbb{Q})$$ is defined to be the subring generated by the push-forwards to $\mathcal{X}^n$ from the Noether-Lefschetz loci $$\pi^n_\Lambda: \mathcal{X}^n_\Lambda \rightarrow \mathcal{M}_{\Lambda}\, $$ of all products of \begin{enumerate} \item[$\bullet$] the $\pi^n_\Lambda$-relative diagonals in $\mathcal{X}_\Lambda^n$, \item[$\bullet$] the pull-backs of $\mathcal{L} \in \mathsf{A}^1(\mathcal{X}_\Lambda,\mathbb{Q})$ via the $n$ projections $$\mathcal{X}_\Lambda^n \rightarrow \mathcal{X}_\Lambda$$ for every admissible $L\in \Lambda$, \item[$\bullet$] the pull-backs of $c_2(\mathcal{T}_{\pi_\Lambda}) \in \mathsf{A}^2(\mathcal{X}_\Lambda,\mathbb{Q})$ via the $n$ projections, \item[$\bullet$] the pull-backs of ${\mathsf{R}}^\star({\mathcal{M}}_{\Lambda})$ via $\pi^{n*}_\Lambda$. \end{enumerate} The construction also defines the strict tautological ring $${\mathsf{R}}^\star(\mathcal{X}^n_{\Lambda})\subset \mathsf{A}^\star(\mathcal{X}^n_{\Lambda},\mathbb{Q})$$ for every lattice polarization $\Lambda$. \subsection{Export construction} \label{conjs} Let $\MM_{g,n}(\pi_\Lambda,L)$ be the $\pi_\Lambda$-relative moduli space of stable maps representing the admissible class $L\in \Lambda$. 
The evaluation map at the $n$ markings is $$\epsilon^n: \MM_{g,n}(\pi_\Lambda,L) \rightarrow \mathcal{X}^n_\Lambda\, .$$ \begin{conjecture} \label{conj1} The push-forward of the reduced virtual fundamental class lies in the strict tautological ring, $$\epsilon_*^n \left[ \MM_{g,n}(\pi_\Lambda,L)\right]^{\textup{red}} \ \in \ {\mathsf{R}}^\star(\mathcal{X}^n_\Lambda)\, .$$ \end{conjecture} When Conjecture \ref{conj1} is restricted to a fixed $K3$ surface $X$, another open question is obtained. \begin{conjecture} \label{conj2} The push-forward of the reduced virtual fundamental class, $$\epsilon_*^n \left[ \MM_{g,n}(X,L)\right]^{\textup{red}} \ \in \ {\mathsf{A}}^\star(X^n,\mathbb{Q})\, ,$$ lies in the Beauville-Voisin ring of $X^n$ generated by the diagonals and the pull-backs of $\textup{Pic}(X)$ via the $n$ projections. \end{conjecture} If Conjecture \ref{conj1} could be proven also for descendents (and in an effective form), then we could export tautological relations on $\MM_{g,n}$ to $\mathcal{X}^n_\Lambda$ via the morphisms $$ \MM_{g,n}\ \stackrel{\tau}{\longleftarrow} \ \MM_{g,n}(\pi_\Lambda, L) \ \stackrel{\epsilon_\Lambda^n}{\longrightarrow}\ \mathcal{X}^n_{\Lambda}\, .$$ More precisely, given a relation $\mathsf{Rel}$ among tautological classes on $\MM_{g,n}$, $$\epsilon_*^n \tau^*(\mathsf{Rel}) \,=\, 0 \ \in\ {\mathsf{R}}^\star (\mathcal{X}^n_\Lambda)$$ would then be a relation among strict tautological classes on $\mathcal{X}^n_\Lambda$. We prove Theorem \ref{dxxd} as a consequence of the export construction for the WDVV relation in genus 0 and for Getzler's relation in genus 1. The required parts of Conjectures~\ref{conj1} and \ref{conj2} are proven by hand. \subsection{WDVV and Getzler} We fix an admissible class $L \in \Lambda$ and the corresponding divisor ${\mathcal{L}} \in \mathsf{A}^1(\mathcal{X}_{\Lambda}, \mathbb{Q})$. 
For $i \in \{1, \ldots, n\}$, let $${\mathcal{L}}_{(i)} \ \in \ \mathsf{A}^1(\mathcal{X}_\Lambda^n, \mathbb{Q})$$ denote the pull-back of ${\mathcal{L}}$ via the $i^{\text{th}}$ projection $$\text{pr}_{(i)} : \mathcal{X}_\Lambda^n \to \mathcal{X}_\Lambda \,.$$ For $1 \leq i < j \leq n$, let $$\Delta_{(ij)} \ \in \ \mathsf{A}^2(\mathcal{X}_\Lambda^n, \mathbb{Q})$$ be the $\pi_\Lambda^n$-relative diagonal where the $i^{\text{th}}$ and $j^{\text{th}}$ coordinates are equal. We write $$\Delta_{(ijk)} \, = \, \Delta_{(ij)} \cdot \Delta_{(jk)} \ \in \ \mathsf{A}^4(\mathcal{X}_\Lambda^n, \mathbb{Q}) \,.$$ The Witten-Dijkgraaf-Verlinde-Verlinde relation in genus 0 is \begin{equation} \label{wdvv} \left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,2) [label=above:$3$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; \node[vertex] (v2) at (.5,1.5) [label=right:$0$] {}; \node[vertex] (v1) at (.5,.5) [label=right:$0$] {}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l2) at (1,0) [label=below:$2$] {}; \path (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\right] \ - \ \left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l2) at (0,2) [label=above:$2$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; \node[vertex] (v2) at (.5,1.5) [label=right:$0$] {}; \node[vertex] (v1) at (.5,.5) [label=right:$0$] {}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l3) at (1,0) [label=below:$3$] {}; \path (l2) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l3) ; \end{tikzpicture}\right] \ = \ 0 \ \in \ \mathsf{A}^1(\MM_{0, 4}, \mathbb{Q})\, . 
\end{equation} \vspace{0pt} \begin{theorem} \label{WDVV} For all admissible $L\in \Lambda$, exportation of the WDVV relation yields \begin{multline} \tag{\dag} {\mathcal{L}}_{(1)} {\mathcal{L}}_{(2)} {\mathcal{L}}_{(3)} \Delta_{(34)} + {\mathcal{L}}_{(1)}{\mathcal{L}}_{(3)} {\mathcal{L}}_{(4)} \Delta_{(12)} \\ - {\mathcal{L}}_{(1)}{\mathcal{L}}_{(2)}{\mathcal{L}}_{(3)}\Delta_{(24)} - {\mathcal{L}}_{(1)} {\mathcal{L}}_{(2)}{\mathcal{L}}_{(4)}\Delta_{(13)} + \ldots \, = \, 0 \ \in \ \mathsf{A}^5(\mathcal{X}_\Lambda^4, \mathbb{Q})\,, \end{multline} where the dots stand for strict tautological classes supported over proper Noether-Lefschetz divisors of $\mathcal{M}_\Lambda$. \end{theorem} Getzler \cite{getz} in 1997 discovered a beautiful relation in the cohomology of $\MM_{1,4}$ which was proven to hold in Chow in \cite{pan}: \begin{multline} \label{getzler} 12\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,3) {}; \node[leg] (l4) at (1,3) {}; \node[vertex] (v3) at (.5,2.5) [label=right:$0$] {}; \node[vertex] (v2) at (.5,1.5) [label=right:$1$] {}; \node[vertex] (v1) at (.5,.5) [label=right:$0$] {}; \node[leg] (l1) at (0,0) {}; \node[leg] (l2) at (1,0) {}; \path (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\ \right] \ - \ 4\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,3) {}; \node[leg] (l4) at (1,3) {}; \node[vertex] (v3) at (.5,2.5) [label=right:$0$] {}; \node[leg] (l2) at (0,1.5) {}; \node[vertex] (v2) at (.5,1.5) [label=right:$0$] {}; \node[vertex] (v1) at (.5,.5) [label=right:$1$] {}; \node[leg] (l1) at (0,0) {}; \path (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (l2) edge (v2) (v2) edge (v1) (v1) edge (l1) ; \end{tikzpicture}\ \right] \ - \ 2\left[\ \begin{tikzpicture}[baseline={([yshift=-.3ex]current bounding box.center)}] \node[leg] (l3) at (0,2.5) {}; \node[leg] (l4) at (1,2.5) {}; 
\node[vertex] (v3) at (.5,2) [label=right:$0$] {}; \node[leg] (l2) at (0,1.5) {}; \node[leg] (l1) at (0,.5) {}; \node[vertex] (v2) at (.5,1) [label=right:$0$] {}; \node[vertex] (v1) at (.5,0) [label=right:$1$] {}; \path (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (l2) edge (v2) (l1) edge (v2) (v2) edge (v1) ; \end{tikzpicture}\ \right] \ + \ 6\left[\ \begin{tikzpicture}[baseline={([yshift=-.3ex]current bounding box.center)}] \node[leg] (l2) at (0,2.5) {}; \node[leg] (l3) at (.5,2.5) {}; \node[leg] (l4) at (1,2.5) {}; \node[vertex] (v3) at (.5,2) [label=right:$0$] {}; \node[leg] (l1) at (0,1) {}; \node[vertex] (v2) at (.5,1) [label=right:$0$] {}; \node[vertex] (v1) at (.5,0) [label=right:$1$] {}; \path (l2) edge (v3) (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (l1) edge (v2) (v2) edge (v1) ; \end{tikzpicture}\ \right] \\[3pt] + \ \left[\begin{tikzpicture}[baseline={([yshift=-.3ex]current bounding box.center)}] \node[leg] (l2) at (0,1.5) {}; \node[leg] (l3) at (.5,1.5) {}; \node[leg] (l4) at (1,1.5) {}; \node[vertex] (v2) at (.5,1) [label=right:$0$] {}; \node[leg] (l1) at (0,.5) {}; \node[vertex] (v1) at (.5,0) [label=right:$0$] {}; \path (l2) edge (v2) (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (l1) edge (v1) (v1) edge[in=-135,out=-45,loop] (v1) ; \end{tikzpicture}\right] \ + \ \left[\begin{tikzpicture}[baseline={([yshift=-.3ex]current bounding box.center)}] \node[leg] (l1) at (0,1.5) {}; \node[leg] (l2) at (.33,1.5) {}; \node[leg] (l3) at (.67,1.5) {}; \node[leg] (l4) at (1,1.5) {}; \node[vertex] (v2) at (.5,1) [label=right:$0$] {}; \node[vertex] (v1) at (.5,0) [label=right:$0$] {}; \path (l1) edge (v2) (l2) edge (v2) (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge[in=-135,out=-45,loop] (v1) ; \end{tikzpicture}\right] \ - \ 2\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,2) {}; \node[leg] (l4) at (1,2) {}; \node[vertex] (v2) at (.5,1.5) [label=right:$0$] {}; \node[vertex] (v1) at (.5,.5) 
[label=right:$0$] {}; \node[leg] (l1) at (0,0) {}; \node[leg] (l2) at (1,0) {}; \path (l3) edge (v2) (l4) edge (v2) (v2) edge[bend left=60] (v1) (v2) edge[bend right=60] (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\ \right] \ = \ 0 \ \in \ \mathsf{A}^2(\MM_{1, 4}, \mathbb{Q})\, . \end{multline} \vspace{8pt} \noindent Here, the strata are summed over all marking distributions and are taken in the stack sense (following the conventions of \cite{getz}). \begin{theorem}\label{ggg} For admissible $L\in \Lambda$ satisfying the condition $\langle L, L\rangle_\Lambda \geq 0$, exportation of Getzler's relation yields \begin{multline} \tag{\ddag} {\mathcal{L}}_{(1)}\Delta_{(12)}\Delta_{(34)} + {\mathcal{L}}_{(3)}\Delta_{(12)}\Delta_{(34)} + {\mathcal{L}}_{(1)}\Delta_{(13)}\Delta_{(24)} + {\mathcal{L}}_{(2)}\Delta_{(13)}\Delta_{(24)} + {\mathcal{L}}_{(1)}\Delta_{(14)}\Delta_{(23)} \\ + {\mathcal{L}}_{(2)}\Delta_{(14)}\Delta_{(23)} - {\mathcal{L}}_{(1)}\Delta_{(234)} - {\mathcal{L}}_{(2)}\Delta_{(134)} - {\mathcal{L}}_{(3)}\Delta_{(124)} - {\mathcal{L}}_{(4)}\Delta_{(123)} \\ - {\mathcal{L}}_{(1)}\Delta_{(123)} - {\mathcal{L}}_{(1)}\Delta_{(124)} - {\mathcal{L}}_{(1)}\Delta_{(134)} - {\mathcal{L}}_{(2)}\Delta_{(234)} +\ldots \, = \, 0 \ \in \ \mathsf{A}^5(\mathcal{X}_\Lambda^4, \mathbb{Q})\,, \end{multline} where the dots stand for strict tautological classes supported over proper Noether-Lefschetz loci of $\mathcal{M}_\Lambda$. \end{theorem} The statements of Theorems \ref{WDVV} and \ref{ggg} contain only the {\it principal} terms of the relation (not supported over proper Noether-Lefschetz loci of $\mathcal{M}_\Lambda$). We will write all the terms represented by the dots in Sections \ref{wwww} and \ref{gggg}. The relation of Theorem \ref{WDVV} is obtained from the export construction after dividing by the genus 0 reduced Gromov-Witten invariant $N_0(L)$. The latter never vanishes for admissible classes. 
Similarly, for Theorem \ref{ggg}, the export construction has been divided by the genus 1 reduced Gromov-Witten invariant $$N_1(L) = \int_{[\MM_{1, 1}(X, L)]^{\text{red}}} \text{ev}^*(\mathsf{p}) \,,$$ where $\mathsf{p} \in H^4(X, \mathbb{Q})$ is the class of a point on $X$. By a result of Oberdieck discussed in Section \ref{gen1}, $N_1(L)$ does not vanish for admissible classes satisfying $\langle L, L\rangle_\Lambda \geq 0$. \subsection{Relations on $\mathcal{X}^3_\Lambda$} As a Corollary of Getzler's relation, we have the following result. Let $$\text{pr}_{(123)} : \mathcal{X}_\Lambda^4 \to \mathcal{X}_\Lambda^3$$ be the projection to the first 3 factors. Let $L=H$ and consider the operation $$\text{pr}_{(123)*} (\mathcal{H}_{(4)} \cdot -)$$ applied to the relation ($\ddag$). We obtain a universal decomposition of the diagonal $\Delta_{(123)}$ which generalizes the result of Beauville-Voisin \cite{BV} for a fixed $K3$ surface. \begin{corollary} \label{bvdiag} The $\pi^3_\Lambda$-relative diagonal $\Delta_{(123)}$ admits a decomposition with principal terms \begin{multline} \tag{$\ddag'$} 2\ell \cdot \Delta_{(123)} \, = \, \mathcal{H}_{(1)}^2 \Delta_{(23)} + \mathcal{H}_{(2)}^2 \Delta_{(13)} + \mathcal{H}_{(3)}^2 \Delta_{(12)} \\ - \mathcal{H}_{(1)}^2 \Delta_{(12)} - \mathcal{H}_{(1)}^2 \Delta_{(13)} - \mathcal{H}_{(2)}^2 \Delta_{(23)} +\ldots \ \in \ \mathsf{A}^4(\mathcal{X}_\Lambda^3, \mathbb{Q})\,, \end{multline} where the dots stand for strict tautological classes supported over proper Noether-Lefschetz loci of $\mathcal{M}_\Lambda$. \end{corollary} The diagonal $\Delta_{(123)}$ controls the behavior of the $\kappa$ classes. For instance, we have $$\kappa_{[a;b]} \, = \, \pi^3_*\left(\mathcal{H}_{(1)}^a \cdot \Delta_{(23)}^b \cdot \Delta_{(123)}\right) \ \in \ \mathsf{A}^{a + 2b - 2}(\mathcal{M}_{2\ell}, \mathbb{Q})\,.$$ The diagonal decomposition of Corollary \ref{bvdiag} plays a fundamental role in the proof of Theorem \ref{dxxd}. 
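The last identity for $\kappa_{[a;b]}$ follows from a standard self-intersection computation; here is a short sketch (the small diagonal embedding $\delta$ is our notation, not the paper's):

```latex
% Let \delta : \mathcal{X} \to \mathcal{X}^3 be the \pi-relative small
% diagonal, so \Delta_{(123)} = \delta_*[\mathcal{X}]. By the projection
% formula and \pi^3 \circ \delta = \pi,
\pi^3_*\left(\mathcal{H}_{(1)}^a \cdot \Delta_{(23)}^b \cdot \Delta_{(123)}\right)
  \, = \, \pi_*\,\delta^*\!\left(\mathcal{H}_{(1)}^a \cdot \Delta_{(23)}^b\right)
  \, = \, \pi_*\!\left(\mathcal{H}^a \cdot \left(\delta^*\Delta_{(23)}\right)^b\right).
% The restriction \delta^*\Delta_{(23)} is the self-intersection of the
% \pi-relative diagonal, hence the top Chern class of its normal bundle:
\delta^*\Delta_{(23)} \, = \, c_2(\mathcal{T}_\pi)\,,
\qquad \text{so} \qquad
\pi_*\!\left(\mathcal{H}^a \cdot c_2(\mathcal{T}_\pi)^b\right) \, = \, \kappa_{[a;b]}\,.
```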
\subsection{Cohomological results} Bergeron and Li have announced an independent approach to the generation (in most codimensions) of the tautological ring $\mathsf{RH}^\star(\mathcal{M}_\Lambda)$ by Noether-Lefschetz loci in cohomology. Petersen \cite{Pet} has proven the vanishing{\footnote{We use the complex grading here.}} $$\mathsf{RH}^{18}(\mathcal{M}_{2\ell})= \mathsf{RH}^{19}(\mathcal{M}_{2\ell})=0\, .$$ We expect the above vanishing to hold also in Chow. What happens in codimension 17 is a very interesting question. By a result of van der Geer and Katsura \cite{kvg}, $$\mathsf{RH}^{17}(\mathcal{M}_{2\ell})\neq 0\, .$$ We hope the stronger statement \begin{equation} \label{hope5} \mathsf{RH}^{17}(\mathcal{M}_{2\ell}) =\mathbb{Q} \end{equation} holds. If true, \eqref{hope5} would open the door to a numerical theory of proportionalities in the tautological ring. The evidence for \eqref{hope5} is rather limited at the moment. Careful calculations in the $\ell=1$ and $2$ cases would be very helpful here. \subsection{Acknowledgments} We are grateful to G.~Farkas, G.~van der Geer, D.~Huybrechts, Z.~Li, A.~Marian, D.~Maulik, G.~Oberdieck, D.~Oprea, D.~Petersen, and J.~Shen for many discussions about the moduli of $K3$ surfaces. The paper was completed at the conference {\it Curves on surfaces and threefolds} at the Bernoulli center in Lausanne in June~2016 attended by both authors. R.~P. was partially supported by SNF-200021143\-274, SNF-200020162928, ERC-2012-AdG-320368-MCSK, SwissMAP, and the Einstein Stiftung. Q.~Y. was supported by the grant ERC-2012-AdG-320368-MCSK. \section{$K3$ surfaces} \label{ooo} \subsection{Reduced Gromov-Witten theory} \label{yzc} Let $X$ be a nonsingular, projective $K3$ surface over $\com$, and let $$L \ \in \ \text{Pic}(X) \, =\, H^2(X,\mathbb{Z}) \cap H^{1,1}(X,\com)$$ be a nonzero effective class.
The moduli space ${\MM}_{g,n}(X,L)$ of genus $g$ stable maps with $n$ marked points has expected dimension $$\text{dim}^{\text{vir}}_\com\ {\MM}_{g,n}(X,L) = \int_L c_1(X) + (\text{dim}_\com(X) -3)(1-g) +n = g-1+n\,.$$ However, as the obstruction theory admits a 1-dimensional trivial quotient, the virtual class $[{\MM}_{g,n}(X,L)]^{\text{vir}}$ vanishes. The standard Gromov-Witten theory is trivial. Curve counting on $K3$ surfaces is captured instead by the {\it reduced} Gromov-Witten theory constructed first via the twistor family in \cite{brl}. An algebraic construction following~\cite{BF} is given in \cite{MP}. The reduced class $$\left[{\MM}_{g,n}(X,L)\right]^{\text{red}} \ \in \ \mathsf{A}_{g+n}({\MM}_{g,n}(X,L), \mathbb{Q})$$ has dimension $g+n$. The reduced Gromov-Witten integrals of $X$, \begin{equation}\label{veq} \Big\langle \tau_{a_1}(\gamma_1) \cdots \tau_{a_n}(\gamma_n) \Big\rangle_{g,L}^{X,\text{red}} \, = \, \int_{[{\MM}_{g,n}(X,L)]^{\text{red}}} \prod_{i=1}^n \text{ev}_i^*(\gamma_i)\cup \psi_i^{a_i} \ \in \ \mathbb{Q}\,, \end{equation} are well-defined. Here, $\gamma_i \in H^\star(X,\mathbb{Q})$ and $\psi_i$ is the standard descendent class at the $i^{\text{th}}$ marking. Under deformations of $X$ for which $L$ remains a $(1,1)$-class, the integrals \eqref{veq} are invariant. \subsection{Curve classes on $K3$ surfaces} Let $X$ be a nonsingular, projective $K3$ surface over $\mathbb{C}$.
The second cohomology of $X$ is a rank 22 lattice with intersection form \begin{equation}\label{ccet} H^2(X,\mathbb{Z}) \cong U\oplus U \oplus U \oplus E_8(-1) \oplus E_8(-1)\,, \end{equation} where $$U = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)$$ and $$ E_8(-1)= \left( \begin{array}{cccccccc} -2& 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & -2 & 0 & 1 & 0 & 0 & 0 & 0\\ 1 & 0 & -2 & 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & -2 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & -2 & 1 & 0& 0\\ 0 & 0& 0 & 0 & 1 & -2 & 1 & 0\\ 0 & 0& 0 & 0 & 0 & 1 & -2 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1& -2\end{array}\right)$$ is the (negative) Cartan matrix. The intersection form \eqref{ccet} is even. The {\it divisibility} $m(L)$ is the largest positive integer which divides the lattice element $L\in H^2(X,\mathbb{Z})$. If the divisibility is 1, $L$ is {\it primitive}. Elements with equal divisibility and norm square are equivalent up to orthogonal transformation of $H^2(X,\mathbb{Z})$, see \cite{CTC}. \subsection{Lattice polarization} \label{lpol} A primitive class $H\in \text{Pic}(X)$ is a {\it quasi-polarization} if $$\langle H,H \rangle_X >0 \ \ \ \text{and} \ \ \ \langle H,[C]\rangle_X \geq 0 $$ for every curve $C\subset X$. A sufficiently high tensor power $H^n$ of a quasi-polarization is base point free and determines a birational morphism $$X\rightarrow \widetilde{X}$$ contracting A-D-E configurations of $(-2)$-curves on $X$. Therefore, every quasi-polarized $K3$ surface is algebraic. Let $\Lambda$ be a fixed rank $r$ primitive{\footnote{A sublattice is primitive if the quotient is torsion free.}} sublattice \begin{equation*} \Lambda \subset U\oplus U \oplus U \oplus E_8(-1) \oplus E_8(-1) \end{equation*} with signature $(1,r-1)$, and let $v_1,\ldots, v_r \in \Lambda$ be an integral basis. 
The discriminant is $$\Delta(\Lambda) = (-1)^{r-1} \det \begin{pmatrix} \langle v_{1},v_{1}\rangle & \cdots & \langle v_{1},v_{r}\rangle \\ \vdots & \ddots & \vdots \\ \langle v_{r},v_{1}\rangle & \cdots & \langle v_{r},v_{r}\rangle \end{pmatrix}\,.$$ The sign is chosen so $\Delta(\Lambda)>0$. A {\it $\Lambda$-polarization} of a $K3$ surface $X$ is a primitive embedding $$j: \Lambda \hookrightarrow \mathrm{Pic}(X)$$ satisfying two properties: \begin{enumerate} \item[(i)] the lattice pairs $\Lambda \subset U^3\oplus E_8(-1)^2$ and $\Lambda\subset H^2(X,\mathbb{Z})$ are isomorphic via an isometry which restricts to the identity on $\Lambda$, \item[(ii)] $\text{Im}(j)$ contains a {quasi-polarization}. \end{enumerate} By (ii), every $\Lambda$-polarized $K3$ surface is algebraic. The period domain $M$ of Hodge structures of type $(1,20,1)$ on the lattice $U^3 \oplus E_8(-1)^2$ is an analytic open subset of the 20-dimensional nonsingular isotropic quadric $Q$, $$M\subset Q\subset \proj\big( (U^3 \oplus E_8(-1)^2 ) \otimes_\Z \com\big)\,.$$ Let $M_\Lambda\subset M$ be the locus of vectors orthogonal to the entire sublattice $\Lambda \subset U^3 \oplus E_8(-1)^2$. Let $\Gamma$ be the isometry group of the lattice $U^3 \oplus E_8(-1)^2$, and let $$\Gamma_\Lambda \subset \Gamma$$ be the subgroup restricting to the identity on $\Lambda$. By global Torelli, the moduli space~$\mathcal{M}_{\Lambda}$ of $\Lambda$-polarized $K3$ surfaces is the quotient $$\mathcal{M}_\Lambda = M_\Lambda/\Gamma_\Lambda\,.$$ We refer the reader to \cite{dolga} for a detailed discussion. \subsection{Genus $0$ invariants}\label{gen0} Let $L\in \text{Pic}(X)$ be a nonzero and {\it admissible} class on a $K3$ surface $X$ as defined in Section \ref{hah2}: \begin{enumerate} \item[(i)] $\frac{1}{m(L)^2}\cdot \langle L,L\rangle_X \geq -2$, \item[(ii)] $\langle H, L\rangle_X \geq 0$. \end{enumerate} In case of equalities in both (i) and (ii), we further require $L$ to be effective. 
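To illustrate the admissibility conditions (an illustration of ours, not taken from the paper), consider simple classes on a quasi-polarized $(X,H)$ of degree $2\ell$:

```latex
L = H:\quad m(H)=1\,,\ \ \tfrac{1}{m(H)^2}\langle H,H\rangle_X = 2\ell \geq -2\,,\ \
\langle H,H\rangle_X = 2\ell > 0 \ \Longrightarrow\ \text{admissible}\,,
\\
L = kH\ (k\geq 1):\quad m(L)=k\,,\ \ \tfrac{1}{k^2}\langle kH,kH\rangle_X = 2\ell \geq -2\,,\ \
\langle H,kH\rangle_X = 2k\ell > 0 \ \Longrightarrow\ \text{admissible}\,,
\\
L = -H:\quad \langle H,-H\rangle_X = -2\ell < 0 \ \Longrightarrow\ \text{inadmissible}\,.
```

For the inadmissible class $-H$, the reduced virtual class vanishes by Proposition \ref{vvvv} below.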
\begin{proposition} \label{trtr} The reduced genus $0$ Gromov-Witten invariant $$N_0(L) = \int_{[\MM_{0,0}(X,L)]^{\textup{red}}} 1$$ is nonzero for all admissible classes $L$. \end{proposition} \begin{proof} The result is a direct consequence of the full Yau-Zaslow formula (including multiple classes) proven in \cite{KMPS}. We define $N_0(\ell)$ for $\ell\geq-1$ by $$\sum_{\ell=-1}^\infty q^\ell N_0(\ell)\, = \, \frac{1}{q\prod_{n=1}^\infty (1-q^n)^{24}}\, =\, \frac{1}{q}+ 24 + 324 q + 3200 q^2 \ldots\, .$$ For $\ell<-1$, we set $N_0(\ell)=0$. By the full Yau-Zaslow formula, \begin{equation} \label{pqq2} N_0(L) = \sum_{r|m(L)} \frac{1}{r^3} N_0\left(\frac{ \langle L,L\rangle_X}{2r^2} \right)\, . \end{equation} Since all $N_0(\ell)$ for $\ell\geq -1$ are positive, the right side of \eqref{pqq2} is positive. \end{proof} \subsection{Genus $1$ invariants} \label{gen1} Let $L\in \text{Pic}(X)$ be an {admissible} class on a $K3$ surface $X$. Let $$N_1(L) = \int_{[\MM_{1, 1}(X, L)]^{\text{red}}} \text{ev}^*(\mathsf{p})$$ be the reduced invariant virtually counting elliptic curves passing through a point of $X$. We define $$\sum_{\ell=0}^\infty q^\ell N_1(\ell) \, = \, \frac{ \sum_{k=1}^\infty \sum_{d|k} dk q^k }{q\prod_{n=1}^\infty (1-q^n)^{24}}\, \, =\, 1 + 30 q + 480 q^2 + 5460 q^3 \ldots\, .$$ For $\ell\leq -1$, we set $N_1(\ell)=0$. If $L$ is primitive, $$N_1(L)=N_1\left(\frac{\langle L,L\rangle_X}{2}\right)$$ by a result of \cite{brl}. In particular, $N_1(L)>0$ for $L$ admissible and primitive if $\langle L, L\rangle_X\geq 0$. \begin{proposition}[Oberdieck] \label{trtrtr} The reduced genus $1$ Gromov-Witten invariant $N_1(L)$ is nonzero for all admissible classes $L$ satisfying $\langle L, L\rangle_X\geq 0$. \end{proposition} \begin{proof} The result is a direct consequence of the multiple cover formula for the reduced Gromov-Witten theory of $K3$ surfaces conjectured in \cite{OP}. 
By the multiple cover formula, \begin{equation} \label{pqq3} N_1(L) = \sum_{r|m(L)} r N_1\left(\frac{ \langle L,L\rangle_X}{2r^2} \right)\, . \end{equation} Since all $N_1(\ell)$ for $\ell\geq 0$ are positive, the right side of \eqref{pqq3} is positive. To complete the argument, we must prove the multiple cover formula \eqref{pqq3} in the required genus 1 case. We derive \eqref{pqq3} from the genus 2 case of the Katz-Klemm-Vafa formula for imprimitive classes proven in \cite{PT}. Let $$N_2(L)= \int_{[\MM_{2}(X, L)]^{\text{red}}} \lambda_2\, ,$$ where $\lambda_2$ is the pull-back of the second Chern class of the Hodge bundle on $\MM_{2}$. Using the well-known boundary expression{\footnote{See \cite{Mumford}. A more recent approach valid also for higher genus can be found in \cite{DRcycles}.}} for $\lambda_2$ in the tautological ring of $\MM_{2}$, Pixton \cite[Appendix]{MPT} proves \begin{equation}\label{gww} N_2(L)= \frac{1}{10} N_1(L) + \frac{\langle L,L\rangle^2_X}{960} N_0(L)\,. \end{equation} By \cite{PT}, the multiple cover formula for $N_2(L)$ carries a factor of $r$. By the Yau-Zaslow formula for imprimitive classes \cite{KMPS}, the term $\frac{\langle L,L\rangle^2_X}{960} N_0(L)$ also carries a factor of $$(r^2)^2\cdot \frac{1}{r^3} = r\,. $$ By \eqref{gww}, $N_1(L)$ must then carry a factor of $r$ in the multiple cover formula exactly as claimed in \eqref{pqq3}. \end{proof} \subsection{Vanishing} Let $L\in \text{Pic}(X)$ be an {inadmissible} class on a $K3$ surface $X$. The following vanishing result holds. \begin{proposition} \label{vvvv} For inadmissible $L$, the reduced virtual class is $0$ in Chow, $$\left[\MM_{g,n}(X,L)\right]^{\textup{red}} \,=\, 0 \ \in\ \mathsf{A}_{g+n}(\MM_{g,n}(X,L),\mathbb{Q})\,.$$ \end{proposition} \begin{proof} Consider a 1-parameter family of $K3$ surfaces \begin{equation}\label{q22q} \pi_C:\mathcal{X} \rightarrow (C,0) \end{equation} with special fiber $\pi^{-1}(0)=X$ for which the class $L$ is algebraic on all fibers. 
Let \begin{equation}\label{trrtt} \phi: \MM_{g,n}(\pi_C, L) \rightarrow C \end{equation} be the universal moduli space of stable maps to the fibers of $\pi_C$. Let $$\iota: 0 \hookrightarrow C$$ be the inclusion of the special point. By the construction of the reduced class, $$[\MM_{g,n}(X, L)]^{\text{red}} = \iota^! [\MM_{g,n}(\pi_C, L)]^{\text{red}}\, .$$ Using the argument of \cite[Lemma 2]{MP} for elliptically fibered $K3$ surfaces with a section, such a family \eqref{q22q} can be found for which the fiber of $\phi$ is {\it empty} over a general point of~$C$ since $L$ is not generically effective. The vanishing \begin{equation}\label{dkkd} \left[\MM_{g,n}(X,L)\right]^{\textup{red}} \,=\, 0 \ \in\ \mathsf{A}_{g+n}(\MM_{g,n}(X,L),\mathbb{Q})\, \end{equation} then follows: $\iota^!$ of {\it any} cycle which does not dominate $C$ is 0. If the family \eqref{q22q} consists of projective $K3$ surfaces, the argument stays within the Gromov-Witten theory of algebraic varieties. However, if the family consists of non-algebraic $K3$ surfaces (as may be the case since $L$ is not ample), a few more steps are needed. First, we can assume {\it all} stable maps to the fibers of the family \eqref{q22q} lie over $0\in C$ and map to the algebraic fiber $X$. There is no difficulty in constructing the moduli space of stable maps \eqref{trrtt}. In fact, all the geometry takes place over an Artinian neighborhood of $0\in C$. Therefore the cones and intersection theory are all algebraic. We conclude the vanishing \eqref{dkkd}. \end{proof} \section{Gromov-Witten theory for families of $K3$ surfaces} \label{zzss} \subsection{The divisor $\mathcal{L}$} \label{fafa2} Let $\mathcal{B}$ be any nonsingular base scheme, and let $$\pi_{\mathcal{B}}: \mathcal{X}_{\mathcal{B}} \rightarrow \mathcal{B}$$ be a family of $\Lambda$-polarized $K3$ surfaces.{\footnote{Since the quasi-polarization class may not be ample, $\mathcal{X}_{\mathcal{B}}$ may be a nonsingular algebraic space.
There is no difficulty in defining the moduli space of stable maps and the associated virtual classes for such nonsingular algebraic spaces. Since the stable maps are to the fiber classes, the moduli spaces are of finite type. In the original paper on virtual fundamental classes by Behrend and Fantechi \cite{BF}, the obstruction theory on the moduli space of stable maps was required to have a global resolution (usually obtained from an ample bundle on the target). However, the global resolution hypothesis was removed by Kresch in \cite[Theorem 5.2.1]{kresch}. }} For $L\in \Lambda$ admissible, consider the moduli space \begin{equation}\label{eee} \MM_{g,n}(\pi_{\mathcal{B}},L) \rightarrow \mathcal{B}\, . \end{equation} The relationship between the $\pi_{\mathcal{B}}$-relative standard and reduced obstruction theories of $\MM_{g,n}(\pi_{\mathcal{B}},L)$ yields $$\left[\MM_{g,n}(\pi_{\mathcal{B}},L)\right]^{\text{vir}}= -\lambda \cdot \left[\MM_{g,n}(\pi_{\mathcal{B}},L)\right]^{\text{red}}\, ,$$ where $\lambda$ is the pull-back via \eqref{eee} of the first Chern class of the Hodge line bundle on $\mathcal{B}$. The reduced class is of $\pi_{\mathcal{B}}$-relative dimension $g+n$. The canonical divisor class associated to an admissible $L\in \Lambda$ is $$\mathcal{L} \, = \, \frac{1}{N_0(L)} \,\cdot \, \epsilon_*\left[ \MM_{0,1}(\pi_{\mathcal{B}},L)\right]^{\text{red}} \ \in \ \mathsf{A}^1(\mathcal{X}_{\mathcal{B}},\mathbb{Q}) \, .$$ By Proposition \ref{trtr}, the reduced Gromov-Witten invariant $$N_0(L) = \int_{[\MM_{0,0}(X,L)]^{\text{red}}} 1$$ is not zero. For a family of $\Lambda$-polarized $K3$ surfaces over any base scheme $\mathcal{B}$, we define $$\mathcal{L} \ \in \ \mathsf{A}^1(\mathcal{X}_{\mathcal{B}},\mathbb{Q})$$ by pull-back from the universal family over the nonsingular moduli stack $\mathcal{M}_\Lambda$.
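The normalization by $N_0(L)$ in the definition of $\mathcal{L}$ can be checked on the fibers. The following verification is a sketch added for the reader's convenience (it is the standard divisor axiom argument): for a fiber $X$ of $\pi_{\mathcal{B}}$ and any divisor class $\alpha \in \mathsf{A}^1(X,\mathbb{Q})$,
$$\int_X \alpha \cdot \epsilon_*\left[\MM_{0,1}(X,L)\right]^{\text{red}} \,=\, \int_{[\MM_{0,1}(X,L)]^{\text{red}}} \epsilon^*(\alpha) \,=\, \langle \alpha, L\rangle_X \cdot N_0(L)\, ,$$
by the divisor equation for the reduced genus 0 theory. Since the intersection pairing on $\mathsf{A}^1(X,\mathbb{Q})$ is nondegenerate, $\epsilon_*\left[\MM_{0,1}(X,L)\right]^{\text{red}} = N_0(L)\cdot L$, so $\mathcal{L}$ restricts to the class $L$ on the fibers of $\pi_{\mathcal{B}}$.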
\subsection{The divisor $\w{\mathcal{L}}$} \label{gwgw} Let $\mathcal{X}_\Lambda$ denote the universal $\Lambda$-polarized $K3$ surface over $\mathcal{M}_{\Lambda}$, $$\pi_\Lambda: \mathcal{X}_\Lambda \rightarrow \mathcal{M}_{\Lambda}\, .$$ For $L\in \Lambda$ admissible, let $\MM_{0,0}(\pi_\Lambda, L)$ be the $\pi_\Lambda$-relative moduli space of genus 0 stable maps. Let $$\phi: \MM_{0,0}(\pi_\Lambda, L) \rightarrow \mathcal{M}_\Lambda$$ be the proper structure map. The reduced virtual class $\left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}}$ is of $\phi$-relative dimension 0 and satisfies $$\phi_*\left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \, =\, N_0(L) \cdot [\mathcal{M}_\Lambda]\, \neq\, 0 \,.$$ The universal curve over the moduli space of stable maps, $$\mathsf{C} \rightarrow \MM_{0,0}(\pi_\Lambda, L) \,,$$ carries an evaluation morphism $$\epsilon_{\MM}: \mathsf{C} \rightarrow \mathcal{X}_\MM \,=\, \phi^*\mathcal{X}_\Lambda$$ over $\mathcal{M}_{\Lambda}$. Via the Hilbert-Chow map, the image of $\epsilon_{\MM}$ determines a canonical Chow cohomology class $$\widehat{\mathcal{L}} \ \in \ \mathsf{A}^1( \mathcal{X}_\MM,\mathbb{Q})\,. $$ Via pull-back, we also have the class $${\mathcal{L}} \ \in \ \mathsf{A}^1( \mathcal{X}_\MM,\mathbb{Q})\, $$ constructed in Section \ref{fafa2}. The classes $\widehat{\mathcal{L}}$ and $\mathcal{L}$ are certainly equal when restricted to the fibers of $$\pi_{\MM} \, : \mathcal{X}_{\MM} \, \rightarrow\, \MM_{0,0}(\pi_\Lambda, L)\, .$$ However, more is true. We define the reduced virtual class of $\mathcal{X}_\MM$ by flat pull-back, $$[\mathcal{X}_\MM]^{\text{red}} \, = \, \pi_\MM^*\, \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}}\ \in \ \mathsf{A}_{\mathsf{d}(\Lambda)+2}( \mathcal{X}_\MM,\mathbb{Q})\, ,$$ where $\mathsf{d}(\Lambda)=20- \text{rank}(\Lambda)$ is the dimension of $\mathcal{M}_\Lambda$.
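For orientation, the dimension bookkeeping behind the last definition can be spelled out (a sketch assembled from the relative dimensions quoted above):
\begin{align*}
\left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \ &\in\ \mathsf{A}_{\mathsf{d}(\Lambda)}\left(\MM_{0,0}(\pi_\Lambda, L),\mathbb{Q}\right) && \text{($\phi$-relative dimension 0)}\, ,\\
\pi_\MM^*\left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \ &\in\ \mathsf{A}_{\mathsf{d}(\Lambda)+2}\left(\mathcal{X}_\MM,\mathbb{Q}\right) && \text{($K3$ fibers of dimension 2)}\, .
\end{align*}
Capping $[\mathcal{X}_\MM]^{\text{red}}$ with a divisor class then lands in $\mathsf{A}_{\mathsf{d}(\Lambda)+1}(\mathcal{X}_\MM,\mathbb{Q})$.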
\begin{theorem} \label{dldl} For $L\in \Lambda$ admissible, $$\w{\mathcal{L}}\cap [\mathcal{X}_\MM]^{\textup{red}} \, =\, \mathcal{L} \cap [\mathcal{X}_\MM]^{\textup{red}}\ \in\ \mathsf{A}_{\mathsf{d}(\Lambda)+1}( \mathcal{X}_\MM,\mathbb{Q})\, .$$ \end{theorem} \noindent The proof of Theorem \ref{dldl} will be given in Section \ref{pfpf5}. \section{Basic push-forwards in genus $0$ and $1$} \label{expo} \subsection{Push-forwards of reduced classes} Let $L \in \Lambda$ be a nonzero class. As discussed in Section \ref{conjs}, the export construction requires knowing the push-forward of the reduced virtual class $\left[\MM_{g,n}(\pi_\Lambda,L)\right]^{\text{red}}$ via the evaluation map $$\epsilon^n: \MM_{g,n}(\pi_\Lambda,L) \rightarrow \mathcal{X}^n_\Lambda\, .$$ Fortunately, to export the WDVV and Getzler relations, we only need to analyze three simple cases. \subsection{Case $g = 0$, $n \geq 1$} Consider the push-forward class in genus 0, $$\epsilon^n_*\left[\MM_{0,n}(\pi_\Lambda,L)\right]^{\text{red}} \ \in \ \mathsf{A}^n(\mathcal{X}_\Lambda^n, \mathbb{Q}) \,.$$ For $n = 1$ and $L \in \Lambda$ admissible, we have by definition $$\epsilon_*\left[\MM_{0,1}(\pi_\Lambda,L)\right]^{\text{red}} \, = \, N_0(L) \cdot \mathcal{L} \,.$$ \begin{proposition}\label{zzzr} For all $n \geq 1$, we have $$\epsilon^n_*\left[\MM_{0,n}(\pi_\Lambda,L)\right]^{\textup{red}} \, = \begin{cases} \, N_0(L) \cdot \mathcal{L}_{(1)} \cdots \mathcal{L}_{(n)} & \text{ if $L \in \Lambda$ is admissible} \,, \\ \, 0 & \text{ if not} \,. \end{cases}$$ Here $\mathcal{L}_{(i)}$ is the pull-back of $\mathcal{L}$ via the $i^{\text{th}}$ projection. \end{proposition} \begin{proof} Consider first the case where the class $L\in \Lambda$ is admissible. 
The evaluation map~$\epsilon^n$ factors as $$\MM_{0,n}(\pi_\Lambda,L) \, \stackrel{\epsilon^n_{\MM}}{\longrightarrow} \, \mathcal{X}_\MM^n \, \stackrel{\rho^n}{\longrightarrow}\, \mathcal{X}_\Lambda^n$$ where $\epsilon^n_\MM$ is the lifted evaluation map and $\rho^n$ is the projection. We have \begin{align*} \epsilon^n_*\left[\MM_{0,n}(\pi_\Lambda,L)\right]^{\text{red}} \, & = \, \rho^n_*\epsilon_{\MM*}^n \left[\MM_{0,n}(\pi_\Lambda,L)\right]^{\text{red}} \\ & = \, \rho^n_* \left(\w{\mathcal{L}}_{(1)} \cdots \w{\mathcal{L}}_{(n)} \cap [\mathcal{X}_\MM^n]^{\text{red}}\right) \\ & = \, \rho^n_* \left({\mathcal{L}}_{(1)} \cdots {\mathcal{L}}_{(n)} \cap [\mathcal{X}_\MM^n]^{\text{red}}\right) \\ & = \, N_0(L)\cdot {\mathcal{L}}_{(1)} \cdots {\mathcal{L}}_{(n)} \cap [\mathcal{X}^n_\Lambda]\, , \end{align*} where the third equality is a consequence of Theorem \ref{dldl}. Next, consider the case where $L \in \Lambda$ is inadmissible. By Proposition \ref{vvvv} and a spreading out argument \cite[1.1.2]{Voi}, the reduced class $\left[\MM_{0, n}(\pi_\Lambda, L)\right]^{\text{red}}$ is supported over a proper subset of $\mathcal{M}_\Lambda$. Since $K3$ surfaces are not ruled, the support of $$\epsilon^n_*\left[\MM_{0,n}(\pi_\Lambda,L)\right]^{\text{red}} \ \in \ \mathsf{A}^n(\mathcal{X}_\Lambda^n, \mathbb{Q})$$ has codimension at least $n + 1$ in $\mathcal{X}_\Lambda^n$. The push-forward class therefore vanishes. \end{proof} \subsection{Case $g = 1$, $n= 1$} The push-forward class $$\epsilon_*\left[\MM_{1,1}(\pi_\Lambda,L)\right]^{\text{red}} \ \in \ \mathsf{A}^0(\mathcal{X}_\Lambda, \mathbb{Q})$$ is a multiple of the fundamental class of $\mathcal{X}_\Lambda$. \begin{proposition} \label{g1p1} We have \begin{equation*} \epsilon_*\left[\MM_{1,1}(\pi_\Lambda,L)\right]^{\textup{red}} = \begin{cases} \, N_1(L) \cdot [\mathcal{X}_\Lambda] & \text{ if $L \in \Lambda$ is admissible and $\langle L, L\rangle_\Lambda \geq 0$} \,, \\ \, 0 & \text{ if not} \,.
\end{cases} \end{equation*} \end{proposition} \begin{proof} The multiple of the fundamental class $[\mathcal{X}_\Lambda]$ can be computed fiberwise: it is the genus 1 Gromov-Witten invariant $$N_1(L) = \int_{[\MM_{1, 1}(X, L)]^{\text{red}}} \text{ev}^*(\mathsf{p}) \, .$$ The invariant vanishes for $L \in \text{Pic}(X)$ inadmissible as well as for $L$ admissible and $\langle L, L\rangle_X < 0$. \end{proof} \subsection{Case $g = 1$, $n= 2$} The push-forward class is a divisor, $$\epsilon^2_*\left[\MM_{1,2}(\pi_\Lambda,L)\right]^{\text{red}} \ \in \ \mathsf{A}^1(\mathcal{X}^2_\Lambda, \mathbb{Q}) \,.$$ \begin{proposition} \label{g1g1} We have \begin{multline*} \epsilon^2_*\left[\MM_{1,2}(\pi_\Lambda,L)\right]^{\textup{red}} \\ = \begin{cases} \, N_1(L) \cdot \Big(\mathcal{L}_{(1)} + \mathcal{L}_{(2)} + Z(L)\Big) & \text{ if $L \in \Lambda$ is admissible and $\langle L, L\rangle_\Lambda \geq 0$} \,, \\ \, 0 & \text{ if not} \,. \end{cases} \end{multline*} Here $Z(L)$ is a divisor class in $\mathsf{A}^1(\mathcal{M}_\Lambda, \mathbb{Q})$ depending on $L$.\footnote{We identify $\AA(\mathcal{M}_\Lambda, \mathbb{Q})$ as a subring of $\AA(\mathcal{X}^n_\Lambda, \mathbb{Q})$ via $\pi_\Lambda^{n*}$.} \end{proposition} \noindent In Section \ref{dvdv}, we will compute $Z(L)$ explicitly in terms of Noether-Lefschetz divisors in the moduli space $\mathcal{M}_\Lambda$. \begin{proof} Consider first the case where the class $L\in \Lambda$ is admissible and $\langle L, L\rangle_\Lambda \geq 0$. If $L$ is a multiple of the quasi-polarization $H$, we may assume $\Lambda = (2\ell)$. Then, the relative Picard group $$\text{Pic}(\mathcal{X}_\Lambda/\mathcal{M}_\Lambda)$$ has rank 1. 
Since the reduced class $\left[\MM_{1, 2}(\pi_\Lambda, L)\right]^{\text{red}}$ is $\mathfrak{S}_2$-invariant, the push-forward takes the form \begin{equation} \label{fofofo} \epsilon^2_*\left[\MM_{1,2}(\pi_\Lambda,L)\right]^{\textup{red}} \, = \, c(L) \cdot \left(\mathcal{L}_{(1)} + \mathcal{L}_{(2)}\right) + \widetilde{Z}(L) \ \in \ \mathsf{A}^1(\mathcal{X}^2_\Lambda, \mathbb{Q}) \, , \end{equation} where $c(L) \in \mathbb{Q}$ and $\widetilde{Z}(L)$ is (the pull-back of) a divisor class in $\mathsf{A}^1(\mathcal{M}_\Lambda, \mathbb{Q})$. The constant $c(L)$ can be computed fiberwise: by the divisor equation{\footnote{Since $L$ is a multiple of the quasi-polarization, $\langle L, L\rangle_\Lambda > 0$.}}, we have $$c(L) = N_1(L) \,.$$ Since $N_1(L) \neq 0$ by Proposition \ref{trtrtr}, we can rewrite \eqref{fofofo} as $$\epsilon^2_*\left[\MM_{1,2}(\pi_\Lambda,L)\right]^{\textup{red}} \, = \, N_1(L) \cdot \Big(\mathcal{L}_{(1)} + \mathcal{L}_{(2)} + Z(L)\Big) \ \in \ \mathsf{A}^1(\mathcal{X}^2_\Lambda, \mathbb{Q}) \,,$$ where $Z(L) \in \mathsf{A}^1(\mathcal{M}_\Lambda, \mathbb{Q})$. If $L \neq m \cdot H$, we may assume $\Lambda$ to be a rank 2 lattice with $H,L\in \Lambda$. Then, the push-forward class takes the form \begin{multline} \label{rmrmrm} \epsilon^2_*\left[\MM_{1,2}(\pi_\Lambda,L)\right]^{\textup{red}} \, = \, c_H(L) \cdot \left(\mathcal{H}_{(1)} + \mathcal{H}_{(2)}\right) + c_L(L) \cdot \left(\mathcal{L}_{(1)} + \mathcal{L}_{(2)}\right) \\ + \widetilde{Z}(L) \ \in \ \mathsf{A}^1(\mathcal{X}^2_\Lambda, \mathbb{Q}) \,, \end{multline} where $c_H(L), c_L(L) \in \mathbb{Q}$ and $\widetilde{Z}(L) \in \mathsf{A}^1(\mathcal{M}_\Lambda, \mathbb{Q})$. 
By applying the divisor equation with respect to $$\langle L, L\rangle_\Lambda \cdot H - \langle H, L\rangle_\Lambda \cdot L \,,$$ we find $$c_H(L) \Big(2\ell \langle L, L\rangle_\Lambda - \langle H, L\rangle_\Lambda^2\Big) = 0 \,.$$ Since $2\ell \langle L, L\rangle_\Lambda - \langle H, L\rangle_\Lambda^2 < 0$ by the Hodge index theorem, we have $c_H(L) = 0$. Moreover, by applying the divisor equation with respect to $H$, we find $$c_L(L) = N_1(L) \,.$$ Since $N_1(L) \neq 0$ by Proposition \ref{trtrtr}, we can rewrite \eqref{rmrmrm} as $$\epsilon^2_*\left[\MM_{1,2}(\pi_\Lambda,L)\right]^{\textup{red}} \, = \, N_1(L) \cdot \Big(\mathcal{L}_{(1)} + \mathcal{L}_{(2)} + Z(L)\Big) \ \in \ \mathsf{A}^1(\mathcal{X}^2_\Lambda, \mathbb{Q}) \,,$$ where $Z(L) \in \mathsf{A}^1(\mathcal{M}_\Lambda, \mathbb{Q})$. Next, consider the case where the class $L \in \Lambda$ is inadmissible. As before, by Proposition \ref{vvvv} and a spreading out argument, the reduced class $\left[\MM_{1, 2}(\pi_\Lambda, L)\right]^{\text{red}}$ is supported over a proper subset of $\mathcal{M}_\Lambda$. Since $K3$ surfaces are not elliptically connected\footnote{A nonsingular projective variety $Y$ is said to be {\it elliptically connected} if there is a genus 1 curve passing through two general points of $Y$. In dimension $\geq 2$, elliptically connected varieties are uniruled, see \cite[Proposition 6.1]{Gou}.}, the support of the push-forward class $$\epsilon^2_*\left[\MM_{1,2}(\pi_\Lambda, L)\right]^{\text{red}} \ \in \ \mathsf{A}^1(\mathcal{X}_\Lambda^2, \mathbb{Q})$$ has codimension at least 2. Hence, the push-forward class vanishes. Finally, for $L \in \Lambda$ admissible and $\langle L, L\rangle_\Lambda < 0$, the reduced class $\left[\MM_{1,2}(\pi_\Lambda, L)\right]^{\text{red}}$ is fiberwise supported on the products of finitely many curves in the $K3$ surface.{\footnote{The proof exactly follows the argument of Proposition \ref{vvvv}. 
We find a (possibly non-algebraic) 1-parameter family of $K3$ surfaces for which the class $L$ is generically a multiple of a $(-2)$-curve. The open moduli space of stable maps to the $K3$ fibers which are not supported on the family of $(-2)$-curves (and its limit curve in the special fiber) is constrained to lie over the special point in the base of the family. The specialization argument of Proposition \ref{vvvv} then shows the virtual class is 0 when restricted to the open moduli space of stable maps to the special fiber which are not supported on the limit curve.}} This implies the support of the push-forward class $\epsilon^2_*\left[\MM_{1,2}(\pi_\Lambda, L)\right]^{\text{red}}$ has codimension 2 in~$\mathcal{X}_\Lambda^2$. Hence, the push-forward class vanishes. \end{proof} \section{Exportation of the WDVV relation} \label{wwww} \subsection{Exportation} Let $L \in \Lambda$ be an admissible class. Consider the morphisms $$ \MM_{0,4}\ \stackrel{\tau}{\longleftarrow} \ \MM_{0,4}(\pi_\Lambda, L) \ \stackrel{\epsilon^4}{\longrightarrow}\ \mathcal{X}^4_{\Lambda}\, .$$ Following the notation of Section \ref{conjs}, we export here the WDVV relation with respect to the curve class $L$, \begin{equation}\label{exex} \epsilon_*^4 \tau^*(\mathsf{WDVV}) \,=\, 0 \ \in\ \mathsf{A}^5 (\mathcal{X}^4_\Lambda, \mathbb{Q})\, . \end{equation} We will compute $\epsilon_*^4 \tau^*(\mathsf{WDVV})$ by applying the splitting axiom of Gromov-Witten theory to the two terms of the WDVV relation \eqref{wdvv}. The splitting axiom requires a distribution of the curve class to each vertex of each graph appearing in \eqref{wdvv}. \subsection{WDVV relation: unsplit contributions} The unsplit contributions are obtained from curve class distributions which do {\it not} split $L$. 
The first unsplit contributions come from the first graph of \eqref{wdvv}: $$\left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,2) [label=above:$3$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; \node[vertex] (v2) at (.5,1.5) [label=right:$0$] {}; \node[circ] (v1) at (.5,.5) [label=right:$0$] {$L$}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l2) at (1,0) [label=below:$2$] {}; \path (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\right] \ + \ \left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,2) [label=above:$3$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; \node[circ] (v2) at (.5,1.5) [label=right:$0$] {$L$}; \node[vertex] (v1) at (.5,.5) [label=right:$0$] {}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l2) at (1,0) [label=below:$2$] {}; \path (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\right]$$ $$N_0(L) \cdot \Big({\mathcal{L}}_{(1)} {\mathcal{L}}_{(2)} {\mathcal{L}}_{(3)} \Delta_{(34)} + {\mathcal{L}}_{(1)}{\mathcal{L}}_{(3)} {\mathcal{L}}_{(4)} \Delta_{(12)}\Big) \,.$$ \vspace{8pt} \noindent The unsplit contributions from the second graph of \eqref{wdvv} are: $$- \left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l2) at (0,2) [label=above:$2$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; \node[vertex] (v2) at (.5,1.5) [label=right:$0$] {}; \node[circ] (v1) at (.5,.5) [label=right:$0$] {$L$}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l3) at (1,0) [label=below:$3$] {}; \path (l2) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l3) ; \end{tikzpicture}\right] \ - \ \left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l2) at (0,2) [label=above:$2$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; 
\node[circ] (v2) at (.5,1.5) [label=right:$0$] {$L$}; \node[vertex] (v1) at (.5,.5) [label=right:$0$] {}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l3) at (1,0) [label=below:$3$] {}; \path (l2) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l3) ; \end{tikzpicture}\right]$$ $$-N_0(L) \cdot \Big({\mathcal{L}}_{(1)}{\mathcal{L}}_{(2)}{\mathcal{L}}_{(3)}\Delta_{(24)} + {\mathcal{L}}_{(1)} {\mathcal{L}}_{(2)}{\mathcal{L}}_{(4)}\Delta_{(13)}\Big) \,.$$ \vspace{8pt} The curve class $0$ vertex is not reduced and yields the usual intersection form (which explains the presence of diagonal $\Delta_{(ij)}$). The curve class $L$ vertex is reduced. We have applied Proposition \ref{zzzr} to compute the push-forward to $\mathcal{X}^4_\Lambda$. All terms are of relative codimension 5 (codimension 1 each for the factors $\mathcal{L}_{(i)}$ and codimension 2 for the diagonal~$\Delta_{(ij)}$). The four unsplit terms (divided by $N_0(L)$) exactly constitute the principal part of Theorem \ref{WDVV}. \subsection{WDVV relation: split contributions} The split contributions are obtained from non-trivial curve class distributions to the vertices $$L = L_1 + L_2\, , \ \ L_1\,,\,L_2\neq 0\, .$$ By Proposition \ref{zzzr}, we need only consider distributions where {\it both} $L_1$ and $L_2$ are admissible classes. Let $\ww\Lambda$ be the saturation{\footnote{We work only with primitive sublattices of $U^3 \oplus E_8(-1)^2$.}} of the span of $L_1$, $L_2$, and $\Lambda$. There are two types. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}If $\text{rank}(\ww{\Lambda}) = \text{rank}(\Lambda) + 1$, the split contributions are pushed forward from $\mathcal{X}^4_{\ww{\Lambda}}$ via the map $\mathcal{X}^4_{\ww{\Lambda}} \to \mathcal{X}^4_\Lambda$. Both vertices carry the reduced class by the obstruction calculation of \cite[Lemma 1]{MP}. 
The split contributions are: $$\left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,2) [label=above:$3$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; \node[circ] (v2) at (.5,1.5) [label=right:$0$] {$L_2$}; \node[circ] (v1) at (.5,.5) [label=right:$0$] {$L_1$}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l2) at (1,0) [label=below:$2$] {}; \path (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\right]$$ \vspace{0pt} $$N_0(L_1) N_0(L_2) \langle L_1, L_2\rangle_{\ww{\Lambda}} \cdot \mathcal{L}_{1,(1)}\mathcal{L}_{1,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)}\, ,$$ $$- \left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l2) at (0,2) [label=above:$2$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; \node[circ] (v2) at (.5,1.5) [label=right:$0$] {$L_2$}; \node[circ] (v1) at (.5,.5) [label=right:$0$] {$L_1$}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l3) at (1,0) [label=below:$3$] {}; \path (l2) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l3) ; \end{tikzpicture}\right]$$ \vspace{0pt} $$-N_0(L_1) N_0(L_2) \langle L_1, L_2\rangle_{\ww{\Lambda}} \cdot \mathcal{L}_{1,(1)}\mathcal{L}_{1,(3)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)}\, .$$ \vspace{8pt} \noindent All terms are of relative codimension 5 (codimension 1 for the Noether-Lefschetz condition and codimension 1 each for the factors $\mathcal{L}_{a,(i)}$). \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}If $\ww{\Lambda} = \Lambda$, there is no obstruction cancellation as above. The extra reduction yields a factor of $-\lambda$. 
The split contributions are: $$\left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,2) [label=above:$3$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; \node[circ] (v2) at (.5,1.5) [label=right:$0$] {$L_2$}; \node[circ] (v1) at (.5,.5) [label=right:$0$] {$L_1$}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l2) at (1,0) [label=below:$2$] {}; \path (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\right]$$ \vspace{0pt} $$N_0(L_1) N_0(L_2) \langle L_1, L_2\rangle_{\ww{\Lambda}} \cdot (-\lambda)\mathcal{L}_{1,(1)}\mathcal{L}_{1,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)}\, ,$$ $$- \left[\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l2) at (0,2) [label=above:$2$] {}; \node[leg] (l4) at (1,2) [label=above:$4$] {}; \node[circ] (v2) at (.5,1.5) [label=right:$0$] {$L_2$}; \node[circ] (v1) at (.5,.5) [label=right:$0$] {$L_1$}; \node[leg] (l1) at (0,0) [label=below:$1$] {}; \node[leg] (l3) at (1,0) [label=below:$3$] {}; \path (l2) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l3) ; \end{tikzpicture}\right]$$ \vspace{0pt} $$-N_0(L_1) N_0(L_2) \langle L_1, L_2\rangle_{\ww{\Lambda}} \cdot (-\lambda)\mathcal{L}_{1,(1)}\mathcal{L}_{1,(3)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)}\, .$$ \vspace{8pt} \noindent All terms are of relative codimension 5 (codimension 1 for $-\lambda$ and codimension 1 each for the factors $\mathcal{L}_{a,(i)}$). \subsection{Proof of Theorem \ref{WDVV}} The complete exported relation \eqref{exex} is obtained by adding the unsplit contributions to the summation over all split contributions $$L=L_1+L_2$$ of both types. Split contributions of the first type are explicitly supported over the Noether-Lefschetz locus corresponding to $$\ww{\Lambda} \subset U^3 \oplus E_8(-1)^2\, .$$ Split contributions of the second type all contain the factor $-\lambda$.
The class $\lambda$ is known to be a linear combination of proper Noether-Lefschetz divisors of $\mathcal{M}_\Lambda$ by \cite[Theorem~1.2]{BKPSB}. Hence, we view the split contributions of the second type also as being supported over Noether-Lefschetz loci. For the formula of Theorem \ref{WDVV}, we normalize the relation by dividing by $N_0(L)$. \qed \section{Proof of Theorem \ref{dldl}} \label{pfpf5} \subsection{Overview} Let $L \in \Lambda$ be an admissible class, and let $\MM_{0,0}(\pi_\Lambda, L)$ be the $\pi_\Lambda$-relative moduli space of genus 0 stable maps, $$\phi : \MM_{0,0}(\pi_\Lambda, L) \to \mathcal{M}_\Lambda\,.$$ Let $\mathcal{X}_{\MM}$ be the universal $\Lambda$-polarized $K3$ surface over $\MM_{0,0}(\pi_\Lambda, L)$, $$\pi_{\MM} : \mathcal{X}_{\MM} \to \MM_{0,0}(\pi_\Lambda, L) \,.$$ In Sections \ref{fafa2} and \ref{gwgw}, we have constructed two divisor classes $$\w{\mathcal{L}} \,, \ \mathcal{L} \ \in \ \mathsf{A}^1(\mathcal{X}_{\MM}, \mathbb{Q}) \,.$$ We define the $\kappa$ classes with respect to $\w{\mathcal{L}}$ by $$\w{\kappa}_{[L^a;b]} \,=\, \pi_{\MM*}\left(\w{\mathcal{L}}^a \cdot c_2(\mathcal{T}_{\pi_{\MM}})^b\right) \ \in \ \mathsf{A}^{a + 2b - 2}\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right) \,.$$ Since $\w{\mathcal{L}}$ and $\mathcal{L}$ are equal on the fibers of $\pi_{\MM}$, the difference $\w{\mathcal{L}} - \mathcal{L}$ is the pull-back{\footnote{We use here the vanishing $H^1(X,\mathcal{O}_X)=0$ for $K3$ surfaces $X$ and the base change theorem.}} of a divisor class in $\mathsf{A}^1\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right)$. In fact, the difference is equal{\footnote{We keep the same notation for the pull-backs of the $\kappa$ classes via the structure map $\phi$. 
Also, we identify $\AA\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right)$ as a subring of $\AA(\mathcal{X}_{\MM}^n, \mathbb{Q})$ via $\pi_{\MM}^{n*}$.}} to $$\frac{1}{24} \,\cdot\, \left(\w{\kappa}_{[L;1]} - \kappa_{[L;1]}\right) \ \in \ \mathsf{A}^1\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right) \,. $$ Therefore, \begin{equation} \label{th51} \w{\mathcal{L}} - \frac{1}{24}\,\cdot\,\w{\kappa}_{[L;1]} \,=\, \mathcal{L} - \frac{1}{24}\,\cdot\,\kappa_{[L;1]} \ \in \ \mathsf{A}^1(\mathcal{X}_{\MM}, \mathbb{Q}) \,. \end{equation} Our strategy for proving Theorem \ref{dldl} is to export the WDVV relation via the morphisms $$\MM_{0,4}\ \stackrel{\tau}{\longleftarrow} \ \MM_{0,4}(\pi_\Lambda, L) \ \stackrel{\epsilon_{\MM}^4}{\longrightarrow}\ \mathcal{X}^4_{\MM}\, .$$ We deduce the following identity from the exported relation \begin{equation} \label{wwwu} \epsilon_{\MM*}^4 \tau^*(\mathsf{WDVV}) \,=\, 0 \ \in\ \mathsf{A}_{\mathsf{d}(\Lambda) + 3}(\mathcal{X}^4_{\MM}, \mathbb{Q})\,, \end{equation} where $\mathsf{d}(\Lambda) = 20 - \text{rank}(\Lambda)$ is the dimension of $\mathcal{M}_\Lambda$. \begin{proposition} \label{blab} For $L\in \Lambda$ admissible, \begin{equation*} \label{th52} \w{\kappa}_{[L;1]} \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\textup{red}} \,=\, \kappa_{[L;1]} \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\textup{red}} \ \in \ \mathsf{A}_{\mathsf{d}(\Lambda) - 1}\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right)\, . \end{equation*} \end{proposition} Equation \eqref{th51} and Proposition \ref{blab} together yield $$\w{\mathcal{L}}\cap [\mathcal{X}_\MM]^{\textup{red}} \, =\, \mathcal{L} \cap [\mathcal{X}_\MM]^{\textup{red}}\ \in\ \mathsf{A}_{\mathsf{d}(\Lambda)+1}(\mathcal{X}_\MM,\mathbb{Q})\, ,$$ thus proving Theorem \ref{dldl}. The exportation process is almost identical to the one in Section \ref{wwww}. 
However, since we work over $\MM_{0,0}(\pi_\Lambda, L)$ instead of $\mathcal{M}_\Lambda$, we do {\it not} require Proposition \ref{zzzr} (whose proof uses Theorem \ref{dldl}). \subsection{Exportation} We briefly describe the exportation \eqref{wwwu} of the WDVV relation with respect to the curve class $L$. As in Section \ref{wwww}, the outcome of $\epsilon_{\MM*}^4\tau^*(\mathsf{WDVV})$ consists of unsplit and split contributions: \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}For the unsplit contributions, the difference is that one should replace $\mathcal{L}$ by the corresponding $\w{\mathcal{L}}$. Moreover, since we do {\it not} push-forward to~$\mathcal{X}^4_\Lambda$, there is no overall coefficient~$N_0(L)$. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}For the split contributions corresponding to the admissible curve class distributions $$L = L_1 + L_2 \,,$$ one again replaces $\mathcal{L}_i$ by the corresponding $\w{\mathcal{L}}_i$ and removes the coefficient $N_0(L_i)$. As before, the terms are either supported over proper Noether-Lefschetz divisors of $\mathcal{M}_\Lambda$, or multiplied by (the pull-back of) $-\lambda$. \vspace{8pt} \noindent We obtain the following analog of Theorem \ref{WDVV}. \pagebreak \begin{proposition} For admissible $L\in \Lambda$, exportation of the WDVV relation yields \begin{multline} \label{dada} \Big(\w{\mathcal{L}}_{(1)} \w{\mathcal{L}}_{(2)} \w{\mathcal{L}}_{(3)} \Delta_{(34)} + \w{\mathcal{L}}_{(1)}\w{\mathcal{L}}_{(3)} \w{\mathcal{L}}_{(4)} \Delta_{(12)} - \w{\mathcal{L}}_{(1)}\w{\mathcal{L}}_{(2)}\w{\mathcal{L}}_{(3)}\Delta_{(24)} \\ - \w{\mathcal{L}}_{(1)}\w{\mathcal{L}}_{(2)}\w{\mathcal{L}}_{(4)}\Delta_{(13)} + \ldots\Big) \cap [\mathcal{X}_\MM^4]^{\textup{red}} \, = \, 0 \ \in \ \mathsf{A}_{\mathsf{d}(\Lambda) + 3}(\mathcal{X}_{\MM}^4, \mathbb{Q})\,, \end{multline} where the dots stand for (Gromov-Witten) tautological classes supported over proper Noether-Lefschetz divisors of $\mathcal{M}_\Lambda$. 
\end{proposition} \noindent Here, the Gromov-Witten tautological classes on $\mathcal{X}^n_{\MM}$ are defined by replacing $\mathcal{L}$ by $\w{\mathcal{L}}$ in Section \ref{ttun}. \subsection{Proof of Proposition \ref{blab}} We distinguish two cases. \vspace{8pt} \noindent {\bf Case $\langle L, L\rangle_\Lambda \neq 0$.} \nopagebreak \vspace{8pt} First, we rewrite \eqref{th51} as $$\w{\kappa}_{[L;1]} - \kappa_{[L;1]} \,=\, 24 \cdot (\w{\mathcal{L}} - \mathcal{L}) \ \in \ \mathsf{A}^1(\mathcal{X}_{\MM}, \mathbb{Q}) \,.$$ By the same argument, we also have $$\w{\kappa}_{[L^3;0]} - \kappa_{[L^3;0]} \,=\, 3 \langle L, L\rangle_\Lambda \cdot (\w{\mathcal{L}} - \mathcal{L}) \ \in \ \mathsf{A}^1(\mathcal{X}_{\MM}, \mathbb{Q}) \,.$$ By combining the above equations, we find \begin{equation} \label{mir1} \langle L, L\rangle_\Lambda \cdot \w{\kappa}_{[L;1]} - 8 \cdot \w{\kappa}_{[L^3;0]} \,=\, \langle L, L\rangle_\Lambda \cdot {\kappa}_{[L;1]} - 8 \cdot {\kappa}_{[L^3;0]} \ \in \ \mathsf{A}^1\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right)\,. \end{equation} Next, we apply \eqref{dada} with respect to $L$ and insert $\Delta_{(12)}\Delta_{(34)} \in \mathsf{A}^4(\mathcal{X}_\MM^4,\mathbb{Q})$. The relation $$\Delta_{(12)}\Delta_{(34)} \cap \epsilon^4_{\MM*}\tau^*(\mathsf{WDVV}) \,=\, 0 \ \in \ \mathsf{A}_{\mathsf{d}(\Lambda) - 1}(\mathcal{X}^4_\MM, \mathbb{Q})$$ pushes down via $$\pi^4_\MM: \mathcal{X}_{\MM}^4 \rightarrow \MM_{0,0}(\pi_\Lambda, L)$$ to yield the result \begin{multline} \label{mir2} \Big(2\langle L, L\rangle_\Lambda \cdot \w{\kappa}_{[L;1]} - 2 \cdot \w{\kappa}_{[L^3;0]}\Big) \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \\ \in \ \phi^*\,\mathsf{NL}^1(\mathcal{M}_\Lambda, \mathbb{Q}) \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}}\,. 
\end{multline} Since $\langle L, L\rangle_\Lambda \neq 0$, a combination of \eqref{mir1} and \eqref{mir2} yields $$\w{\kappa}_{[L;1]} \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \ \in \ \phi^*\,\mathsf{A}^1(\mathcal{M}_\Lambda, \mathbb{Q}) \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}}\,.$$ In other words, there is a divisor class ${D} \in \mathsf{A}^1(\mathcal{M}_\Lambda, \mathbb{Q})$ for which \begin{equation*} \w{\kappa}_{[L;1]} \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \,=\, \phi^*({D}) \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \ \in \ \mathsf{A}_{\mathsf{d}(\Lambda) - 1}\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right)\,. \end{equation*} Then, by the projection formula, we find $$\phi_*\left(\w{\kappa}_{[L;1]} \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}}\right) \,=\, N_0(L) \cdot \kappa_{[L;1]} \,=\, N_0(L) \cdot {D} \ \in \ \mathsf{A}^1(\mathcal{M}_\Lambda, \mathbb{Q}) \,.$$ Hence ${D} = \kappa_{[L;1]}$, which proves Proposition \ref{blab} in case $\langle L, L\rangle_\Lambda \neq 0$. \vspace{8pt} \noindent {\bf Case $\langle L, L\rangle_\Lambda = 0$.} \nopagebreak \vspace{8pt} Let $H \in \Lambda$ be the quasi-polarization and let $$\mathcal{H} \ \in \ \mathsf{A}^1(\mathcal{X}_{\MM}, \mathbb{Q})$$ be the pull-back of the class $\mathcal{H} \in \mathsf{A}^1(\mathcal{X}_\Lambda, \mathbb{Q})$. 
We define the $\kappa$ classes $$\w{\kappa}_{[H^{a_1},L^{a_2};b]} \,=\, \pi_{\MM*}\left(\mathcal{H}^{a_1} \cdot \w{\mathcal{L}}^{a_2} \cdot c_2(\mathcal{T}_{\pi_{\MM}})^b\right) \ \in \ \mathsf{A}^{a_1 + a_2 + 2b - 2}\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right) \,.$$ First, by the same argument used to prove \eqref{th51}, we have $$\w{\kappa}_{[H,L^2;0]} - {\kappa}_{[H,L^2;0]} \,=\, 2\langle H, L\rangle_\Lambda \cdot (\w{\mathcal{L}} - \mathcal{L}) \ \in \ \mathsf{A}^1(\mathcal{X}_{\MM}, \mathbb{Q}) \,.$$ By combining the above equation with \eqref{th51}, we find \begin{multline} \label{mir3} \langle H, L\rangle_\Lambda \cdot \w{\kappa}_{[L;1]} - 12 \cdot \w{\kappa}_{[H,L^2;0]} \\ =\, \langle H, L\rangle_\Lambda \cdot {\kappa}_{[L;1]} - 12 \cdot {\kappa}_{[H,L^2;0]} \ \in \ \mathsf{A}^1\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right) \,. \end{multline} Next, we apply \eqref{dada} with respect to $L$ and insert $\mathcal{H}_{(1)}\mathcal{H}_{(2)}\Delta_{(34)} \in \mathsf{A}^4(\mathcal{X}_\MM^4,\mathbb{Q})$. The relation $$\mathcal{H}_{(1)}\mathcal{H}_{(2)}\Delta_{(34)} \cap \epsilon^4_{\MM*}\tau^*(\mathsf{WDVV}) \,=\, 0 \ \in\ \mathsf{A}_{\mathsf{d}(\Lambda) - 1}(\mathcal{X}^4_\MM, \mathbb{Q})$$ pushes down via $\pi^4_\MM$ to yield the result \begin{multline} \label{mir4} \Big(\langle H, L\rangle_\Lambda^2 \cdot \w{\kappa}_{[L;1]} - 2\langle H, L\rangle_\Lambda \cdot \w{\kappa}_{[H,L^2;0]}\Big) \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \\ \in \ \phi^*\,\mathsf{NL}^1(\mathcal{M}_\Lambda, \mathbb{Q}) \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}}\,. 
\end{multline} Since $\langle H, L\rangle_\Lambda \neq 0$ by the Hodge index theorem, a combination of \eqref{mir3} and \eqref{mir4} yields $$\w{\kappa}_{[L;1]} \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \ \in \ \phi^*\,\mathsf{A}^1(\mathcal{M}_\Lambda, \mathbb{Q}) \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}}\,.$$ As in the previous case, we conclude $$\w{\kappa}_{[L;1]} \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \,=\, \kappa_{[L;1]} \cap \left[\MM_{0,0}(\pi_\Lambda, L)\right]^{\text{red}} \ \in \ \mathsf{A}_{\mathsf{d}(\Lambda) - 1}\left(\MM_{0,0}(\pi_\Lambda, L), \mathbb{Q}\right)\,.$$ The proof of Proposition \ref{blab} (and thus Theorem \ref{dldl}) is complete. \qed \section{Exportation of Getzler's relation} \label{gggg} \subsection{Exportation} Let $L \in \Lambda$ be an admissible class satisfying $\langle L,L\rangle_\Lambda\geq 0$. Consider the morphisms $$\MM_{1,4}\ \stackrel{\tau}{\longleftarrow} \ \MM_{1,4}(\pi_\Lambda, L) \ \stackrel{\epsilon^4}{\longrightarrow}\ \mathcal{X}^4_{\Lambda}\, .$$ Following the notation of Section \ref{conjs}, we export here Getzler's relation with respect to the curve class $L$, \begin{equation}\label{exex1} \epsilon_*^4 \tau^*(\mathsf{Getzler}) \,=\, 0 \ \in\ \mathsf{A}^5 (\mathcal{X}^4_\Lambda, \mathbb{Q})\, . \end{equation} We will compute $\epsilon_*^4 \tau^*(\mathsf{Getzler})$ by applying the splitting axiom of Gromov-Witten theory to the 7 terms of Getzler's relation \eqref{getzler}. The splitting axiom requires a distribution of the curve class to each vertex of each graph appearing in \eqref{getzler}. \subsection{Curve class distributions} To export Getzler's relation with respect to the curve class $L$, we will use the following properties for the graphs which arise: \begin{itemize} \item[(i)] Only distributions of admissible classes contribute. 
\item[(ii)] A genus 1 vertex with valence{\footnote{The valence counts all incident half-edges (both from edges and markings).}} 2 or a genus 0 vertex with valence at least 4 must carry a nonzero class. \item[(iii)] A genus 1 vertex with valence 1 cannot be adjacent to a genus 0 vertex with a nonzero class. \item[(iv)] A genus 1 vertex with valence 2 cannot be adjacent to two genus 0 vertices with nonzero classes. \end{itemize} Property (i) is a consequence of Propositions \ref{zzzr}, \ref{g1p1}, and \ref{g1g1}. For Property (ii), the moduli of the contracted 2-pointed genus 1 curve produce a positive dimensional fiber of the push-forward to $\mathcal{X}_\Lambda^4$ (and similarly for contracted 4-pointed genus 0 curves). Properties~(iii) and~(iv) are consequences of positive dimensional fibers of the push-forward to $\mathcal{X}_\Lambda^4$ obtained from the elliptic component. We leave the elementary details to the reader. \subsection{Getzler's relation: unsplit contributions} \label{gusp} We begin with the unsplit contributions. The strata appearing in Getzler's relation are ordered as in \eqref{getzler}.
\vspace{8pt} \noindent {\bf Stratum 1.} $$12\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,3) {}; \node[leg] (l4) at (1,3) {}; \node[vertex] (v3) at (.5,2.5) [label=right:$0$] {}; \node[circ] (v2) at (.5,1.5) [label=right:$1$] {$L$}; \node[vertex] (v1) at (.5,.5) [label=right:$0$] {}; \node[leg] (l1) at (0,0) {}; \node[leg] (l2) at (1,0) {}; \path (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\!\right]$$ \begin{multline*} 12 N_1(L) \cdot \Big(\mathcal{L}_{(1)}\Delta_{(12)}\Delta_{(34)} + \mathcal{L}_{(3)}\Delta_{(12)}\Delta_{(34)} + \mathcal{L}_{(1)}\Delta_{(13)}\Delta_{(24)} \\ + \mathcal{L}_{(2)}\Delta_{(13)}\Delta_{(24)} + \mathcal{L}_{(1)}\Delta_{(14)}\Delta_{(23)} + \mathcal{L}_{(2)}\Delta_{(14)}\Delta_{(23)}\Big) \\ + 12 N_1(L) \cdot Z(L) \Big(\Delta_{(12)}\Delta_{(34)} + \Delta_{(13)}\Delta_{(24)} + \Delta_{(14)}\Delta_{(23)}\Big) \end{multline*} \vspace{8pt} \noindent By Property (ii), the genus 1 vertex must carry the curve class $L$ in the unsplit case. The contribution is then calculated using Propositions \ref{zzzr} and \ref{g1g1}. 
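The index combinatorics of the six diagonal terms above can be spot-checked mechanically (a side remark, not part of the proof): each of the 3 perfect pairings of the markings $\{1,2,3,4\}$ contributes two terms, one for each side of the genus 1 vertex, with the $\mathcal{L}$ index normalized to the smaller element of the chosen diagonal pair (on the diagonal, either index yields the same class). A minimal Python sketch:

```python
from itertools import combinations

points = (1, 2, 3, 4)

# The 3 perfect pairings of the 4 markings: fix the pair containing
# marking 1 to avoid double counting.
pairings = []
for pair in combinations(points, 2):
    if 1 in pair:
        rest = tuple(p for p in points if p not in pair)
        pairings.append((pair, rest))

# One term per pairing and per side of the genus 1 vertex; the L index
# is normalized to the smaller element of the chosen pair.
terms = {(min(side), frozenset(p1), frozenset(p2))
         for (p1, p2) in pairings for side in (p1, p2)}

# The six terms listed in Stratum 1.
listed = {(1, frozenset({1, 2}), frozenset({3, 4})),
          (3, frozenset({1, 2}), frozenset({3, 4})),
          (1, frozenset({1, 3}), frozenset({2, 4})),
          (2, frozenset({1, 3}), frozenset({2, 4})),
          (1, frozenset({1, 4}), frozenset({2, 3})),
          (2, frozenset({1, 4}), frozenset({2, 3}))}
print(terms == listed)  # True
```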
\vspace{8pt} \noindent {\bf Stratum 2.} $$-4\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,3) {}; \node[leg] (l4) at (1,3) {}; \node[vertex] (v3) at (.5,2.5) [label=right:$0$] {}; \node[leg] (l2) at (0,1.5) {}; \node[vertex] (v2) at (.5,1.5) [label=right:$0$] {}; \node[circ] (v1) at (.5,.5) [label=right:$1$] {$L$}; \node[leg] (l1) at (0,0) {}; \path (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (l2) edge (v2) (v2) edge (v1) (v1) edge (l1) ; \end{tikzpicture}\!\right]$$ \begin{multline*} {-12}N_1(L) \cdot \Big(\mathcal{L}_{(1)}\Delta_{(234)} + \mathcal{L}_{(2)}\Delta_{(134)} + \mathcal{L}_{(3)}\Delta_{(124)} + \mathcal{L}_{(4)}\Delta_{(123)} \\ + \mathcal{L}_{(1)}\Delta_{(123)} + \mathcal{L}_{(1)}\Delta_{(124)} + \mathcal{L}_{(1)}\Delta_{(134)} + \mathcal{L}_{(2)}\Delta_{(234)}\Big) \\ -12 N_1(L) \cdot Z(L) \Big(\Delta_{(123)} + \Delta_{(124)} + \Delta_{(134)} + \Delta_{(234)}\Big) \end{multline*} \vspace{8pt} \noindent Again by Property (ii), the genus 1 vertex must carry the curve class $L$ in the unsplit case. The contribution is then calculated using Propositions \ref{zzzr} and \ref{g1g1}. \vspace{8pt} \noindent {\bf Stratum 3.} No contribution by Properties (ii) and (iii). \vspace{8pt} \noindent {\bf Stratum 4.} $$6\left[\ \begin{tikzpicture}[baseline={([yshift=-.3ex]current bounding box.center)}] \node[leg] (l2) at (0,2.5) {}; \node[leg] (l3) at (.5,2.5) {}; \node[leg] (l4) at (1,2.5) {}; \node[circ] (v3) at (.5,2) [label=right:$0$] {$L$}; \node[leg] (l1) at (0,1) {}; \node[vertex] (v2) at (.5,1) [label=right:$0$] {}; \node[vertex] (v1) at (.5,0) [label=right:$1$] {}; \path (l2) edge (v3) (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (l1) edge (v2) (v2) edge (v1) ; \end{tikzpicture}\!\right]$$ \vspace{0pt} $$N_0(L) \cdot \lambda \mathcal{L}_{(1)}\mathcal{L}_{(2)}\mathcal{L}_{(3)}\mathcal{L}_{(4)}$$ \vspace{8pt} \noindent The genus 0 vertex of valence 4 must carry the curve class $L$ in the unsplit case. 
The contracted genus 1 vertex contributes the virtual class \begin{equation}\label{lwwl} \epsilon_*[\overline{\mathsf{M}}_{1,1}(\pi_\Lambda,0)]^{\text{vir}} \,=\, \frac{1}{24}\,\cdot\,\lambda \ \in \ \mathsf{A}^1(\mathcal{X}^1_\Lambda,\mathbb{Q})\, . \end{equation} The coefficient 6 together with the 4 graphs which occur cancels the 24 in the denominator of \eqref{lwwl}. Proposition \ref{zzzr} is then applied to the genus 0 vertex of valence 4. \vspace{8pt} \noindent {\bf Stratum 5.} No contribution by Property (ii) since there are two genus 0 vertices of valence 4. \vspace{8pt} \noindent {\bf Stratum 6.} $$\left[\begin{tikzpicture}[baseline={([yshift=-.3ex]current bounding box.center)}] \node[leg] (l1) at (0,1.5) {}; \node[leg] (l2) at (.33,1.5) {}; \node[leg] (l3) at (.67,1.5) {}; \node[leg] (l4) at (1,1.5) {}; \node[circ] (v2) at (.5,1) [label=right:$0$] {$L$}; \node[vertex] (v1) at (.5,0) [label=right:$0$] {}; \path (l1) edge (v2) (l2) edge (v2) (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge[in=-135,out=-45,loop] (v1) ; \end{tikzpicture}\!\right]$$ $$\frac{1}{2}N_0(L) \cdot \kappa_{[L;1]} \mathcal{L}_{(1)}\mathcal{L}_{(2)}\mathcal{L}_{(3)}\mathcal{L}_{(4)}$$ \vspace{8pt} \noindent The genus 0 vertex of valence 4 must carry the curve class $L$ in the unsplit case. Proposition \ref{zzzr} is applied to the genus 0 vertex of valence 4. The self-edge of the contracted genus 0 vertex yields a factor of $c_2(\mathcal{T}_{\pi_{\Lambda}})$. The contribution of the contracted genus 0 vertex is $$\frac{1}{2}\,\cdot\,\kappa_{[L;1]}$$ where the factor of $\frac{1}{2}$ is included since the self-edge is not oriented. \vspace{8pt} \noindent {\bf Stratum 7.} No contribution by Property (ii) since there are two genus 0 vertices of valence 4. \vspace{8pt} We have already seen that $\lambda$ is expressible in terms of the Noether-Lefschetz divisors of~$\mathcal{M}_{\Lambda}$.
Since we will later express $Z(L)$ and $\kappa_{[L;1]}$ in terms of the Noether-Lefschetz divisors of $\mathcal{M}_{\Lambda}$, the principal terms in the above analysis only occur in Strata 1 and~2. The principal parts of Strata 1 and 2 (divided{\footnote{The admissibility of $L$ together with condition $\langle L,L\rangle_\Lambda \geq 0$ implies $N_1(L)\neq 0$ by Proposition \ref{trtrtr}.}} by $12N_1(L)$) exactly constitute the principal part of Theorem \ref{ggg}. \subsection{Getzler's relation: split contributions} \label{gsp} The split contributions are obtained from non-trivial curve class distributions to the vertices. By Property (i), we need only consider distributions of admissible classes. \vspace{8pt} \noindent {\bf Case A.} The class $L$ is divided into two nonzero parts $$L = L_1 + L_2\,.$$ Let $\ww\Lambda$ be the saturation of the span of $L_1$, $L_2$, and $\Lambda$. \begin{enumerate} \item[$\bullet$] If $\text{rank}(\ww{\Lambda}) = \text{rank}(\Lambda) + 1$, the contributions are pushed forward from $\mathcal{X}^4_{\ww{\Lambda}}$ via the map $\mathcal{X}^4_{\ww{\Lambda}} \to \mathcal{X}^4_\Lambda$. \item[$\bullet$] If $\ww{\Lambda} = \Lambda$, the contributions are multiplied by $-\lambda$. \end{enumerate} \noindent With the above rules, the formulas below address both the $\text{rank}(\ww{\Lambda}) = \text{rank}(\Lambda) + 1$ and the $\text{rank}(\ww{\Lambda}) = \text{rank}(\Lambda)$ cases simultaneously. 
\vspace{8pt} \noindent {\bf Stratum 1.} $$12\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,3) {}; \node[leg] (l4) at (1,3) {}; \node[vertex] (v3) at (.5,2.5) [label=right:$0$] {}; \node[circ] (v2) at (.5,1.5) [label=right:$1$] {$L_1$}; \node[circ] (v1) at (.5,.5) [label=right:$0$] {$L_2$}; \node[leg] (l1) at (0,0) {}; \node[leg] (l2) at (1,0) {}; \path (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (v2) edge (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\!\right]$$ \begin{multline*} 12N_1(L_1)N_0(L_2)\langle L_1, L_2\rangle_{\ww{\Lambda}} \cdot \Big(\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\Delta_{(34)} + \mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)}\Delta_{(12)} \\ + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\Delta_{(24)} + \mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)}\Delta_{(13)} + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(4)}\Delta_{(23)} + \mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\Delta_{(14)}\Big) \end{multline*} \vspace{8pt} \noindent By Property (ii), the genus 1 vertex must carry a nonzero curve class. The contribution is calculated using Propositions \ref{zzzr} and \ref{g1g1}. 
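The six terms above enumerate the $\binom{4}{2}$ choices of the two markings carried by the genus 0 vertex with class $L_2$; the diagonal factor then identifies the complementary pair of markings. A quick mechanical check (a side remark, not part of the proof):

```python
from itertools import combinations

points = (1, 2, 3, 4)

# Each term: the 2-subset of markings on the L2 vertex, paired with the
# complementary 2-subset appearing in the diagonal factor.
expected = {(frozenset(s), frozenset(set(points) - set(s)))
            for s in combinations(points, 2)}

# The six terms listed above (L2 index pair, then the Delta pair).
listed = {(frozenset({1, 2}), frozenset({3, 4})),
          (frozenset({3, 4}), frozenset({1, 2})),
          (frozenset({1, 3}), frozenset({2, 4})),
          (frozenset({2, 4}), frozenset({1, 3})),
          (frozenset({1, 4}), frozenset({2, 3})),
          (frozenset({2, 3}), frozenset({1, 4}))}
print(len(expected), expected == listed)  # 6 True
```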
\vspace{8pt} \noindent {\bf Stratum 2.} $$-4\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,3) {}; \node[leg] (l4) at (1,3) {}; \node[vertex] (v3) at (.5,2.5) [label=right:$0$] {}; \node[leg] (l2) at (0,1.5) {}; \node[circ] (v2) at (.5,1.5) [label=right:$0$] {$L_2$}; \node[circ] (v1) at (.5,.5) [label=right:$1$] {$L_1$}; \node[leg] (l1) at (0,0) {}; \path (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (l2) edge (v2) (v2) edge (v1) (v1) edge (l1) ; \end{tikzpicture}\!\right]$$ \begin{multline*} {-4}N_1(L_1)N_0(L_2)\langle L_1, L_2\rangle_{\ww{\Lambda}} \cdot \Big(\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\Delta_{(23)} + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\Delta_{(24)} + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\Delta_{(34)} \\ + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\Delta_{(13)} + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\Delta_{(14)} + \mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\Delta_{(34)} \\ + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\Delta_{(12)} + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\Delta_{(14)} + \mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\Delta_{(24)} \\ + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(4)}\Delta_{(12)} + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(4)}\Delta_{(13)} + \mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)}\Delta_{(23)}\Big) \end{multline*} $$-4\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,3) {}; \node[leg] (l4) at (1,3) {}; \node[circ] (v3) at (.5,2.5) [label=right:$0$] {$L_2$}; \node[leg] (l2) at (0,1.5) {}; \node[vertex] (v2) at (.5,1.5) [label=right:$0$] {}; \node[circ] (v1) at (.5,.5) [label=right:$1$] {$L_1$}; \node[leg] (l1) at (0,0) {}; \path (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (l2) edge (v2) (v2) edge (v1) (v1) edge (l1) ; \end{tikzpicture}\!\right]$$ \begin{multline*} {-12}N_1(L_1)N_0(L_2) \cdot \Big(\mathcal{L}_{1,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} + 
\mathcal{L}_{1,(2)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} \\ + \mathcal{L}_{1,(3)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)} + \mathcal{L}_{1,(4)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\Big) \\ - 4N_1(L_1)N_0(L_2) \cdot \Big(\mathcal{L}_{1,(1)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)} + \mathcal{L}_{1,(1)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)} + \mathcal{L}_{1,(1)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} \\ + \mathcal{L}_{1,(2)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)} + \mathcal{L}_{1,(2)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)} + \mathcal{L}_{1,(2)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} \\ + \mathcal{L}_{1,(3)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)} + \mathcal{L}_{1,(3)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} + \mathcal{L}_{1,(3)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} \\ \phantom{\Big(} + \mathcal{L}_{1,(4)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)} + \mathcal{L}_{1,(4)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} + \mathcal{L}_{1,(4)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)}\Big) \\ - 12N_1(L_1)N_0(L_2) \cdot Z(L_1) \Big(\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)} + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)} \\ + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} + \mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)}\Big) \, \end{multline*} \vspace{8pt} \noindent By Property (ii), the genus 1 vertex must carry a nonzero curve class. There are two possibilities for the distribution. Both contributions are calculated using Propositions \ref{zzzr} and \ref{g1g1}. \vspace{8pt} \noindent {\bf Stratum 3.} No contribution by Properties (ii) and (iii). 
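For the first sum of Stratum 2 above, one bookkeeping pattern consistent with the twelve listed terms is the following (an observation on the index combinatorics, not a derivation): choose the marking $a$ on the genus 1 leg and the marking $b$ on the middle leg; the remaining pair $\{c,d\}$ sits on the contracted top vertex and is identified by the diagonal, giving the term $\mathcal{L}_{2,(b)}\mathcal{L}_{2,(\min(c,d))}\Delta_{(cd)}$. A Python spot-check of the resulting $4 \cdot 3 = 12$ terms:

```python
points = (1, 2, 3, 4)

def term(a, b):
    """Term for genus-1-leg marking a and middle-leg marking b: the
    remaining pair (c, d) is diagonal-identified, and the second L2
    index is normalized to min(c, d)."""
    cd = tuple(p for p in points if p not in (a, b))
    return (frozenset({b, min(cd)}), frozenset(cd))

expected = {term(a, b) for a in points for b in points if b != a}

# The twelve terms of the first sum (L2 index pair, then the Delta pair).
listed = {(frozenset({1, 2}), frozenset({2, 3})),
          (frozenset({1, 2}), frozenset({2, 4})),
          (frozenset({1, 3}), frozenset({3, 4})),
          (frozenset({1, 2}), frozenset({1, 3})),
          (frozenset({1, 2}), frozenset({1, 4})),
          (frozenset({2, 3}), frozenset({3, 4})),
          (frozenset({1, 3}), frozenset({1, 2})),
          (frozenset({1, 3}), frozenset({1, 4})),
          (frozenset({2, 3}), frozenset({2, 4})),
          (frozenset({1, 4}), frozenset({1, 2})),
          (frozenset({1, 4}), frozenset({1, 3})),
          (frozenset({2, 4}), frozenset({2, 3}))}
print(len(expected), expected == listed)  # 12 True
```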
\vspace{8pt} \noindent {\bf Stratum 4.} $$6\left[\ \begin{tikzpicture}[baseline={([yshift=-.3ex]current bounding box.center)}] \node[leg] (l2) at (0,2.5) {}; \node[leg] (l3) at (.5,2.5) {}; \node[leg] (l4) at (1,2.5) {}; \node[circ] (v3) at (.5,2) [label=right:$0$] {$L_2$}; \node[leg] (l1) at (0,1) {}; \node[vertex] (v2) at (.5,1) [label=right:$0$] {}; \node[circ] (v1) at (.5,0) [label=right:$1$] {$L_1$}; \path (l2) edge (v3) (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (l1) edge (v2) (v2) edge (v1) ; \end{tikzpicture}\!\right]$$ \vspace{0pt} $$24N_1(L_1)N_0(L_2) \cdot \mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)}$$ \vspace{8pt} \noindent By Property (iii), the genus 0 vertex in the middle cannot carry a nonzero curve class. The contribution is calculated using Propositions \ref{zzzr} and \ref{g1p1}. \vspace{8pt} \noindent {\bf Stratum 5.} $$\left[\ \begin{tikzpicture}[baseline={([yshift=-.3ex]current bounding box.center)}] \node[leg] (l2) at (0,1.5) {}; \node[leg] (l3) at (.5,1.5) {}; \node[leg] (l4) at (1,1.5) {}; \node[circ] (v2) at (.5,1) [label=right:$0$] {$L_2$}; \node[leg] (l1) at (0,.5) {}; \node[circ] (v1) at (.5,0) [label=right:$0$] {$L_1$}; \path (l2) edge (v2) (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (l1) edge (v1) (v1) edge[in=-120,out=-60,loopcirc] (v1) ; \end{tikzpicture}\!\right]$$ \begin{multline*} \frac{1}{2} N_0(L_1)N_0(L_2)\langle L_1, L_1\rangle_{\ww{\Lambda}}\langle L_1, L_2\rangle_{\ww{\Lambda}} \cdot \Big(\mathcal{L}_{1,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} + \mathcal{L}_{1,(2)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} \\ + \mathcal{L}_{1,(3)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)} + \mathcal{L}_{1,(4)}\mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\Big) \end{multline*} \vspace{8pt} \noindent The factor $\frac{1}{2} \langle L_1, L_1\rangle_{\ww{\Lambda}}$ is obtained from the self-edge.
The contribution is calculated using Proposition \ref{zzzr}. \vspace{8pt} \noindent {\bf Stratum 6.} $$\left[\ \begin{tikzpicture}[baseline={([yshift=-.3ex]current bounding box.center)}] \node[leg] (l1) at (0,1.5) {}; \node[leg] (l2) at (.33,1.5) {}; \node[leg] (l3) at (.67,1.5) {}; \node[leg] (l4) at (1,1.5) {}; \node[circ] (v2) at (.5,1) [label=right:$0$] {$L_2$}; \node[circ] (v1) at (.5,0) [label=right:$0$] {$L_1$}; \path (l1) edge (v2) (l2) edge (v2) (l3) edge (v2) (l4) edge (v2) (v2) edge (v1) (v1) edge[in=-120,out=-60,loopcirc] (v1) ; \end{tikzpicture}\!\right]$$ $$\frac{1}{2}N_0(L_1)N_0(L_2)\langle L_1, L_1\rangle_{\ww{\Lambda}}\langle L_1, L_2\rangle_{\ww{\Lambda}} \cdot \mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)}$$ \vspace{8pt} \noindent The factor $\frac{1}{2} \langle L_1, L_1\rangle_{\ww{\Lambda}}$ is obtained from the self-edge. The contribution is calculated using Proposition \ref{zzzr}. \vspace{8pt} \noindent {\bf Stratum 7.} $$-2\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,2) {}; \node[leg] (l4) at (1,2) {}; \node[circ] (v2) at (.5,1.5) [label=right:$0$] {$L_2$}; \node[circ] (v1) at (.5,.5) [label=right:$0$] {$L_1$}; \node[leg] (l1) at (0,0) {}; \node[leg] (l2) at (1,0) {}; \path (l3) edge (v2) (l4) edge (v2) (v2) edge[bend left=45] (v1) (v2) edge[bend right=45] (v1) (v1) edge (l1) (v1) edge (l2) ; \end{tikzpicture}\!\right]$$ \begin{multline*} {-}N_0(L_1)N_0(L_2)\langle L_1, L_2\rangle_{\ww{\Lambda}}^2 \cdot \Big(\mathcal{L}_{1,(1)}\mathcal{L}_{1,(2)}\mathcal{L}_{2,(3)}\mathcal{L}_{2,(4)} + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{1,(3)}\mathcal{L}_{1,(4)} \\ + \mathcal{L}_{1,(1)}\mathcal{L}_{1,(3)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(4)} + \mathcal{L}_{2,(1)}\mathcal{L}_{2,(3)}\mathcal{L}_{1,(2)}\mathcal{L}_{1,(4)} \\ + \mathcal{L}_{1,(1)}\mathcal{L}_{1,(4)}\mathcal{L}_{2,(2)}\mathcal{L}_{2,(3)} + 
\mathcal{L}_{2,(1)}\mathcal{L}_{2,(4)}\mathcal{L}_{1,(2)}\mathcal{L}_{1,(3)}\Big) \end{multline*} \vspace{8pt} \noindent The factor $-2 \left(\frac{1}{2} \langle L_1, L_2\rangle^2_{\ww{\Lambda}}\right)$ is obtained from two middle edges (the $\frac{1}{2}$ comes from the symmetry of the graph). The contribution is calculated using Proposition \ref{zzzr}. \vspace{8pt} \noindent {\bf Case B.} The class $L$ is divided into three nonzero parts $$L = L_1 + L_2+L_3\,.$$ Let $\ww\Lambda$ be the saturation of the span of $L_1$, $L_2$, $L_3$, and $\Lambda$. By Properties (ii)-(iv), only Stratum 2 contributes. \begin{enumerate} \item[$\bullet$] If $\text{rank}(\ww{\Lambda}) = \text{rank}(\Lambda) + 2$, the contributions are pushed forward from $\mathcal{X}^4_{\ww{\Lambda}}$ via the map $\mathcal{X}^4_{\ww{\Lambda}} \to \mathcal{X}^4_\Lambda$. \item[$\bullet$] If $\text{rank}(\ww{\Lambda}) = \text{rank}(\Lambda) + 1$, the contributions are pushed forward from $\mathcal{X}^4_{\ww{\Lambda}}$ via the map $\mathcal{X}^4_{\ww{\Lambda}} \to \mathcal{X}^4_\Lambda$ {\it and} multiplied by $-\lambda$. \item[$\bullet$] If $\ww{\Lambda} = \Lambda$, the contributions are multiplied by $(-\lambda)^2$. \end{enumerate} \noindent With the above rules, the formula below addresses all three cases $$\text{rank}(\ww{\Lambda}) = \text{rank}(\Lambda) + 2\, ,\ \ \text{rank}(\ww{\Lambda}) = \text{rank}(\Lambda) + 1\, , \ \ \text{rank}(\ww{\Lambda}) = \text{rank}(\Lambda)$$ simultaneously. 
\vspace{8pt} \noindent {\bf Stratum 2.} $$-4\left[\ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}] \node[leg] (l3) at (0,3) {}; \node[leg] (l4) at (1,3) {}; \node[circ] (v3) at (.5,2.5) [label=right:$0$] {$L_3$}; \node[leg] (l2) at (0,1.5) {}; \node[circ] (v2) at (.5,1.5) [label=right:$0$] {$L_2$}; \node[circ] (v1) at (.5,.5) [label=right:$1$] {$L_1$}; \node[leg] (l1) at (0,0) {}; \path (l3) edge (v3) (l4) edge (v3) (v3) edge (v2) (l2) edge (v2) (v2) edge (v1) (v1) edge (l1) ; \end{tikzpicture}\!\right]$$ \begin{multline*} {-4}N_1(L_1)N_0(L_2)N_0(L_3)\langle L_1, L_2\rangle_{\ww{\Lambda}}\langle L_2, L_3\rangle_{\ww{\Lambda}} \cdot \Big(\mathcal{L}_{2,(1)}\mathcal{L}_{3,(2)}\mathcal{L}_{3,(3)} + \mathcal{L}_{2,(1)}\mathcal{L}_{3,(2)}\mathcal{L}_{3,(4)} \\ + \mathcal{L}_{2,(1)}\mathcal{L}_{3,(3)}\mathcal{L}_{3,(4)} + \mathcal{L}_{2,(2)}\mathcal{L}_{3,(1)}\mathcal{L}_{3,(3)} + \mathcal{L}_{2,(2)}\mathcal{L}_{3,(1)}\mathcal{L}_{3,(4)} + \mathcal{L}_{2,(2)}\mathcal{L}_{3,(3)}\mathcal{L}_{3,(4)} \\ + \mathcal{L}_{2,(3)}\mathcal{L}_{3,(1)}\mathcal{L}_{3,(2)} + \mathcal{L}_{2,(3)}\mathcal{L}_{3,(1)}\mathcal{L}_{3,(4)} + \mathcal{L}_{2,(3)}\mathcal{L}_{3,(2)}\mathcal{L}_{3,(4)} \\ + \mathcal{L}_{2,(4)}\mathcal{L}_{3,(1)}\mathcal{L}_{3,(2)} + \mathcal{L}_{2,(4)}\mathcal{L}_{3,(1)}\mathcal{L}_{3,(3)} + \mathcal{L}_{2,(4)}\mathcal{L}_{3,(2)}\mathcal{L}_{3,(3)}\Big) \end{multline*} \vspace{8pt} \noindent The contribution is calculated using Propositions \ref{zzzr} and \ref{g1g1}. \subsection{Proof of Theorem \ref{ggg}} The complete exported relation \eqref{exex1} is obtained by adding all the unsplit contributions of Section \ref{gusp} to all the split contributions of Section \ref{gsp}. Using the Noether-Lefschetz support{\footnote{To be proven in Section \ref{dvdv}.}} of $$\lambda\, , \ \ \kappa_{[L;1]}\, , \ \ Z(L)$$ the only principal contributions are unsplit and obtained from Strata 1 and 2. 
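The twelve terms above enumerate the choices of the marking $a$ on the $L_2$ vertex together with the unordered pair $\{b,c\}$ of markings on the $L_3$ vertex (the fourth marking sits on the genus 1 vertex). A mechanical check (a side remark, not part of the proof):

```python
from itertools import combinations

points = (1, 2, 3, 4)

# a: marking on the L2 vertex; {b, c}: two of the remaining three
# markings on the L3 vertex -> 4 * C(3,2) = 12 terms.
expected = {(a, frozenset(bc))
            for a in points
            for bc in combinations([p for p in points if p != a], 2)}

# The twelve terms listed above (L2 index, then the L3 index pair).
listed = {(1, frozenset({2, 3})), (1, frozenset({2, 4})), (1, frozenset({3, 4})),
          (2, frozenset({1, 3})), (2, frozenset({1, 4})), (2, frozenset({3, 4})),
          (3, frozenset({1, 2})), (3, frozenset({1, 4})), (3, frozenset({2, 4})),
          (4, frozenset({1, 2})), (4, frozenset({1, 3})), (4, frozenset({2, 3}))}
print(len(expected), expected == listed)  # 12 True
```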
For the formula of Theorem \ref{ggg}, we normalize the relation by dividing by $12N_1(L)$. \qed \subsection{Higher genus relations} In genus 2, there is a basic relation among tautological classes in codimension 2 on $\overline{\mathcal{M}}_{2,3}$, see \cite{BelP}. However, to export in genus 2, we would first have to prove genus 2 analogues of the push-forward results in genus 0 and 1 of Section \ref{expo}. To build a theory which allows the exportation of all the known tautological relations{\footnote{For a survey of Pixton's relations, see \cite{PSLC}.}} on the moduli space of curves to the moduli space of $K3$ surfaces is an interesting direction of research. Fortunately, to prove the Noether-Lefschetz generation of Theorem \ref{dxxd}, only the relations in genus 0 and 1 are required. \section{Noether-Lefschetz generation} \label{pfpf} \subsection{Overview} We present here the proof of Theorem \ref{dxxd}: the strict tautological ring is generated by Noether-Lefschetz loci, $$\NL(\mathcal{M}_{\Lambda}) = \mathsf{R}^\star(\mathcal{M}_{\Lambda})\, .$$ We will use the exported WDVV relation $(\dag)$ of Theorem \ref{wdvv}, the exported Getzler's relation $(\ddag)$ of Theorem \ref{ggg}, the diagonal decomposition $(\ddag')$ of Corollary \ref{bvdiag}, and an induction on codimension. For $(\ddag)$, we will require not only the principal terms which appear in the statement of Theorem \ref{ggg}, but the entire formula proven in Section \ref{gggg}. In particular, for $(\ddag)$ we will {\it not} divide by the factor $12N_1(L)$. \subsection{Codimension $1$} \label{dvdv} The base of the induction on codimension consists of all of the {\it divisorial} $\kappa$ classes: \begin{equation}\label{kakaka} \kappa_{[L^3;0]}\,, \ \kappa_{[L;1]}\,, \ \kappa_{[L_1^2,L_2;0]}\,, \ \kappa_{[L_1,L_2,L_3;0]}\ \in \ \mathsf{R}^1(\mathcal{M}_{\Lambda})\, , \end{equation} for $L, L_1, L_2, L_3 \in \Lambda$ admissible. 
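As elementary bookkeeping (assuming the grading $\sum_i a_i + 2b - 2$ used for the $\kappa$ classes extends to multiple insertions in the evident way), one can check that each class in \eqref{kakaka} is indeed divisorial:

```python
def kappa_codim(degrees, b):
    # Codimension of kappa_[...;b]: total insertion degree plus 2b - 2,
    # following the grading convention for the kappa classes.
    return sum(degrees) + 2 * b - 2

# [L^3;0], [L;1], [L1^2,L2;0], [L1,L2,L3;0]
codims = [kappa_codim([3], 0), kappa_codim([1], 1),
          kappa_codim([2, 1], 0), kappa_codim([1, 1, 1], 0)]
print(codims)  # [1, 1, 1, 1]
```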
Our first goal is to prove the divisorial $\kappa$ classes \eqref{kakaka} are expressible in terms of Noether-Lefschetz divisors in $\mathcal{M}_{\Lambda}$. In addition, we will determine the divisor $Z(L)$ defined in Proposition \ref{g1g1} for all $L \in \Lambda$ admissible and $\langle L, L\rangle_\Lambda \geq 0$. Let $L, L_1, L_2, L_3 \in \Lambda$ be admissible, and let $H \in \Lambda$ be the quasi-polarization with $$\langle H, H\rangle_\Lambda = 2\ell > 0 \,.$$ \vspace{8pt} \noindent {\bf Case A.} $\kappa_{[L^3;0]}$, $\kappa_{[L;1]}$, and $Z(L)$ for $\langle L, L\rangle_\Lambda > 0$. \nopagebreak \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\dag$) with respect to $L$ and insert $\Delta_{(12)}\Delta_{(34)}\in \mathsf{R}^4(\mathcal{X}^4_\Lambda)$. The relation $$\epsilon^4_*\tau^*(\mathsf{WDVV}) \cup \Delta_{(12)}\Delta_{(34)} \,=\, 0 \ \in\ \mathsf{R}^9(\mathcal{X}^4_\Lambda)$$ pushes down via $$\pi^4_\Lambda: \mathcal{X}^4_\Lambda \rightarrow \mathcal{M}_\Lambda$$ to yield the result \begin{equation} \label{wd1} 2\langle L, L\rangle_\Lambda \cdot \kappa_{[L;1]} - 2 \cdot \kappa_{[L^3;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag$) with respect to $L$ and insert $\mathcal{L}_{(1)}\mathcal{L}_{(2)}\mathcal{L}_{(3)}\mathcal{L}_{(4)}\in \mathsf{R}^4(\mathcal{X}^4_\Lambda)$. The relation $$\epsilon^4_*\tau^*(\mathsf{Getzler}) \cup \mathcal{L}_{(1)}\mathcal{L}_{(2)}\mathcal{L}_{(3)}\mathcal{L}_{(4)} \,=\, 0 \ \in\ \mathsf{R}^9(\mathcal{X}^4_\Lambda)$$ pushes down via $\pi^4_\Lambda$ to yield the result \begin{multline*} 72 N_1(L)\langle L, L\rangle_\Lambda \cdot \kappa_{[L^3;0]} + 36 N_1(L) \langle L, L\rangle_\Lambda^2 \cdot Z(L) \\ - 48N_1(L)\langle L, L\rangle_\Lambda \cdot \kappa_{[L^3;0]} + \frac{1}{2}N_0(L) \langle L, L\rangle_\Lambda^4 \cdot \kappa_{[L;1]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. 
\end{multline*} The divisors $Z(L)$ and $\kappa_{[L;1]}$ are obtained from the unsplit contributions of Strata 1, 2, and 6. After combining terms, we find \begin{equation}\label{get1} 24 N_1(L) \cdot \kappa_{[L^3;0]} + \frac{1}{2}N_0(L) \langle L, L\rangle_\Lambda^3 \cdot \kappa_{[L;1]} + 36 N_1(L) \langle L, L\rangle_\Lambda \cdot Z(L) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag$) with respect to $L$ and insert $\mathcal{L}_{(1)}\mathcal{L}_{(2)}\Delta_{(34)}\in \mathsf{R}^4(\mathcal{X}^4_\Lambda)$. After push-down via $\pi^4_\Lambda$ to $\mathcal{M}_\Lambda$, we obtain \begin{multline*} 288N_1(L) \cdot \kappa_{[L^3;0]} + 12N_1(L)\langle L, L\rangle_\Lambda \cdot \kappa_{[L;1]} + 48N_1(L)\cdot \kappa_{[L^3;0]} \\ + 288N_1(L)\langle L, L\rangle_\Lambda \cdot Z(L) + 24N_1(L)\langle L, L\rangle_\Lambda \cdot Z(L) \\ - 24N_1(L)\langle L, L\rangle_\Lambda \cdot \kappa_{[L;1]} - 24N_1(L) \cdot \kappa_{[L^3;0]} - 24N_1(L) \cdot \kappa_{[L^3;0]} \\ - 24N_1(L)\langle L, L\rangle_\Lambda \cdot Z(L) + \frac{1}{2}N_0(L)\langle L, L\rangle_\Lambda^3 \cdot \kappa_{[L;1]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{multline*} After combining terms, we find \begin{multline} \label{get2} 288N_1(L) \cdot \kappa_{[L^3;0]} - \Big( 12N_1(L)\langle L, L\rangle_\Lambda -\frac{1}{2} N_0(L)\langle L, L\rangle_\Lambda^3\Big) \cdot \kappa_{[L;1]} \\ + 288N_1(L)\langle L, L\rangle_\Lambda \cdot Z(L) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{multline} \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag$) with respect to $L$ and insert $\Delta_{(12)}\Delta_{(34)} \in \mathsf{R}^4(\mathcal{X}^4_\Lambda)$. 
After push-down via $\pi^4_\Lambda$ to $\mathcal{M}_\Lambda$, we obtain \begin{multline*} 576N_1(L) \cdot \kappa_{[L;1]} + 48N_1(L) \cdot \kappa_{[L;1]} + 6912N_1(L) \cdot Z(L) + 576N_1(L) \cdot Z(L) \\ - 48N_1(L) \cdot \kappa_{[L;1]} - 48N_1(L) \cdot \kappa_{[L;1]} - 1152N_1(L) \cdot Z(L) \\ + \frac{1}{2} N_0(L)\langle L, L\rangle_\Lambda^2 \cdot \kappa_{[L;1]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{multline*} After combining terms, we find \begin{equation} \label{get3} \Big(528N_1(L) + \frac{1}{2}N_0(L)\langle L, L\rangle_\Lambda^2\Big) \cdot \kappa_{[L;1]} + 6336N_1(L) \cdot Z(L) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} The system of equations \eqref{wd1}, \eqref{get1}, \eqref{get2}, and \eqref{get3} yields the matrix \begin{equation} \label{matx} \left( \begin{array}{ccc} -2 & 2\langle L, L\rangle_\Lambda & 0 \\ 24N_1(L) & \frac{1}{2}N_0(L)\langle L, L\rangle_\Lambda^3 & 36N_1(L)\langle L, L\rangle_\Lambda \\ 288N_1(L) & -12N_1(L)\langle L, L\rangle_\Lambda + \frac{1}{2}N_0(L)\langle L, L\rangle_\Lambda^3 & 288N_1(L)\langle L, L\rangle_\Lambda \\ 0 & 528N_1(L) + \frac{1}{2}N_0(L)\langle L, L\rangle_\Lambda^2 & 6336N_1(L) \end{array} \right)\, . \end{equation} Since $N_0(L), N_1(L) \neq 0$, straightforward linear algebra{\footnote{One may even consider $\lambda$ as a $4^{\text{th}}$ variable in the equations \eqref{wd1}, \eqref{get1}, \eqref{get2}, and \eqref{get3}. For $\Lambda = (2\ell)$ and $L = H$, the only $\lambda$ terms are obtained from the unsplit contribution of Stratum 4 to ($\ddag$). 
We find the matrix \begin{equation*} \left( \begin{array}{cccc} -2 & 2(2\ell) & 0 & 0\\ 24N_1(\ell) & \frac{1}{2}N_0(\ell)(2\ell)^3 & 36N_1(\ell)(2\ell) & N_0(\ell)(2\ell)^3 \\ 288N_1(\ell) & -12N_1(\ell)(2\ell) + \frac{1}{2}N_0(\ell)(2\ell)^3 & 288N_1(\ell)(2\ell) & N_0(\ell)(2\ell)^3 \\ 0 & 528N_1(\ell) + \frac{1}{2}N_0(\ell)(2\ell)^2 & 6336N_1(\ell) & N_0(\ell)(2\ell)^2 \end{array} \right) \end{equation*} whose determinant is easily seen to be nonzero. In particular, we obtain a geometric proof of the fact $$\lambda \ \in \ \mathsf{NL}^1(\mathcal{M}_{2\ell}) \,.$$ The determinant of the $4 \times 4$ matrix is likely nonzero for every $\Lambda$ and $H$ (in which case additional $\lambda$ terms appear). We plan to carry out more detailed computation in the future.}} shows the matrix \eqref{matx} to have maximal rank 3. We have therefore proven $$\kappa_{[L^3;0]}, \ \kappa_{[L;1]}, \ Z(L) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,$$ and completed the analysis of Case A. \vspace{8pt} \noindent {\bf Case B.} $\kappa_{[H^2,L;0]}$ for $\langle L, L\rangle_\Lambda > 0$. \nopagebreak \vspace{8pt} We apply ($\ddag'$) with insertion $\mathcal{L}_{(1)}\mathcal{L}_{(2)}\mathcal{L}_{(3)}\in\mathsf{R}^3(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. Since $$\kappa_{[H;1]}\, ,\ Z(H) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda)$$ by Case A, we find $$2\ell \cdot \kappa_{[L^3;0]} - 3\langle L, L\rangle_\Lambda \cdot \kappa_{[H^2,L;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,.$$ Since $\kappa_{[L^3;0]} \in \mathsf{NL}^1(\mathcal{M}_\Lambda)$ by Case A, we have $$\kappa_{[H^2,L;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,.$$ Case B is complete. \vspace{8pt} \noindent {\bf Case C.} $\kappa_{[L^3;0]}$, $\kappa_{[H^2,L;0]}$, and $\kappa_{[L;1]}$ for $\langle L, L\rangle_\Lambda < 0$. 
\nopagebreak \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag'$) with insertion $\mathcal{L}_{(1)}\mathcal{L}_{(2)}\mathcal{L}_{(3)}\in\mathsf{R}^3(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. Since $$\kappa_{[H;1]}\, ,\ Z(H) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda)$$ by Case A, we find \begin{equation} \label{get4} 2\ell \cdot \kappa_{[L^3;0]} - 3\langle L, L\rangle_\Lambda \cdot \kappa_{[H^2,L;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag'$) with insertion $\mathcal{H}_{(1)}\mathcal{L}_{(2)}\mathcal{L}_{(3)}\in\mathsf{R}^3(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. Since $\kappa_{[H^3;0]} \in \mathsf{NL}^1(\mathcal{M}_\Lambda)$ by Case A, we find \begin{equation} \label{get5} 2\ell \cdot \kappa_{[H,L^2;0]} - 2 \langle H, L\rangle_\Lambda \cdot \kappa_{[H^2,L;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\dag$) with respect to $L$, insert $\mathcal{H}_{(1)}\mathcal{H}_{(2)}\mathcal{L}_{(3)} \mathcal{L}_{(4)}\in\mathsf{R}^4(\mathcal{X}^4_\Lambda)$, and push-down via~$\pi^4_\Lambda$ to $\mathcal{M}_\Lambda$. We find \begin{equation} \label{wd2} \langle H, L\rangle_\Lambda^2 \cdot \kappa_{[L^3;0]} + \langle L, L\rangle_\Lambda^2 \cdot \kappa_{[H^2,L;0]} - 2\langle H, L\rangle_\Lambda \langle L, L\rangle_\Lambda \cdot \kappa_{[H,L^2;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\dag$) with respect to $L$, insert $\Delta_{(12)}\Delta_{(34)}\in\mathsf{R}^4(\mathcal{X}^4_\Lambda)$, and push-down via $\pi^4_\Lambda$ to~$\mathcal{M}_\Lambda$. We find \begin{equation} \label{wd3} 2\langle L, L\rangle_\Lambda \cdot \kappa_{[L;1]} - 2 \cdot \kappa_{[L^3;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,.
\end{equation} \vspace{8pt} The system of equations \eqref{get4}, \eqref{get5}, and \eqref{wd2} for $$\kappa_{[L^3;0]}\, , \ \ \kappa_{[H,L^2;0]}\, , \ \ \kappa_{[H^2,L;0]}$$ yields the matrix $$\left( \begin{array}{ccc} 2\ell & 0 & -3\langle L, L\rangle_\Lambda \\ 0 & 2\ell & -2\langle H, L\rangle_\Lambda \\ \langle H, L\rangle_\Lambda^2 & -2\langle H, L\rangle_\Lambda \langle L, L\rangle_\Lambda & \langle L, L\rangle_\Lambda^2 \end{array} \right)$$ with determinant $$2\ell \langle L, L\rangle_\Lambda \Big(2\ell \langle L, L\rangle_\Lambda - \langle H, L\rangle_\Lambda^2\Big) > 0 \, $$ by the Hodge index theorem applied to the second factor. Therefore, $$\kappa_{[L^3;0]} \,, \ \kappa_{[H,L^2;0]} \,, \ \kappa_{[H^2,L;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda)\, ,$$ and by \eqref{wd3}, we have $\kappa_{[L;1]} \in \mathsf{NL}^1(\mathcal{M}_\Lambda)$. Case C is complete. \vspace{8pt} \noindent {\bf Case D.} $\kappa_{[L^3;0]}$, $\kappa_{[H^2,L;0]}$, $\kappa_{[L;1]}$, and $Z(L)$ for $\langle L, L\rangle_\Lambda = 0$. \nopagebreak \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag'$) with insertion $\mathcal{L}_{(1)}\mathcal{L}_{(2)}\mathcal{L}_{(3)} \in\mathsf{R}^3(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. Since $$\kappa_{[H;1]}\, ,\ Z(H) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda)$$ by Case A, we find $$2\ell \cdot \kappa_{[L^3;0]} - 3\langle L, L\rangle_\Lambda \cdot \kappa_{[H^2,L;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,,$$ hence{\footnote{A direct argument using elliptically fibered $K3$ surfaces shows $\kappa_{[L^3;0]} = 0$ for $\langle L, L\rangle_\Lambda = 0$.}} $\kappa_{[L^3;0]} \in \mathsf{NL}^1(\mathcal{M}_\Lambda)$. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag'$) with insertion $\mathcal{H}_{(1)}\mathcal{L}_{(2)}\mathcal{L}_{(3)}\in\mathsf{R}^3(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. 
We find \begin{equation} \label{get6} 2\ell \cdot \kappa_{[H,L^2;0]} - 2 \langle H, L\rangle_\Lambda \cdot \kappa_{[H^2,L;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\dag$) with respect to $L$, insert $\mathcal{H}_{(1)}\mathcal{H}_{(2)}\Delta_{(34)} \in\mathsf{R}^4(\mathcal{X}^4_\Lambda)$, and push-down via~$\pi^4_\Lambda$ to $\mathcal{M}_\Lambda$. We find $$\langle H, L\rangle_\Lambda^2 \cdot \kappa_{[L;1]} - 2\langle H, L\rangle_\Lambda \cdot \kappa_{[H,L^2;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,.$$ Since $\langle H, L\rangle_\Lambda \neq 0$ by the Hodge index theorem, we have \begin{equation} \label{wd4} \langle H, L\rangle_\Lambda \cdot \kappa_{[L;1]} - 2 \cdot \kappa_{[H,L^2;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag$) with respect to $L$, insert $\mathcal{H}_{(1)}\mathcal{H}_{(2)}\mathcal{H}_{(3)}\mathcal{L}_{(4)} \in\mathsf{R}^4(\mathcal{X}^4_\Lambda)$, and push-down via~$\pi^4_\Lambda$ to $\mathcal{M}_\Lambda$. We find \begin{multline*} 36N_1(L) \langle H, L\rangle_\Lambda \cdot \kappa_{[H^2,L;0]} + 36N_1(L)(2\ell) \cdot \kappa_{[H,L^2;0]} + 36N_1(L)(2\ell) \langle H, L\rangle_\Lambda \cdot Z(L) \\ - 36N_1(L) \langle H, L\rangle_\Lambda \cdot \kappa_{[H^2,L;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{multline*} Since $N_1(L) \neq 0$, we have \begin{equation} \label{get7} \kappa_{[H,L^2;0]} + \langle H, L\rangle_\Lambda \cdot Z(L) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag$) with respect to $L$, insert $\mathcal{H}_{(1)}\mathcal{H}_{(2)}\Delta_{(34)} \in\mathsf{R}^4(\mathcal{X}^4_\Lambda)$, and push-down via~$\pi^4_\Lambda$ to $\mathcal{M}_\Lambda$. 
We find \begin{multline*} 288N_1(L) \cdot \kappa_{[H^2,L;0]} + 12N_1(L)(2\ell) \cdot \kappa_{[L;1]} + 48N_1(L) \cdot \kappa_{[H^2,L;0]} \\ + 288N_1(L)(2\ell) \cdot Z(L) + 24N_1(L)(2\ell) \cdot Z(L) \\- 24N_1(L) \cdot \kappa_{[H^2,L;0]} - 24N_1(L) \cdot \kappa_{[H^2,L;0]} - 24N_1(L)(2\ell) \cdot Z(L) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{multline*} After combining terms, we obtain \begin{equation} \label{get8} 24 \cdot \kappa_{[H^2,L;0]} + 2\ell \cdot \kappa_{[L;1]} + 24(2\ell) \cdot Z(L) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,. \end{equation} \vspace{8pt} We multiply \eqref{get8} by $\langle H, L\rangle_\Lambda$, and make substitutions using \eqref{get6}, \eqref{wd4}, and \eqref{get7}, which yields $$(12 + 2 - 24)(2\ell) \cdot \kappa_{[H,L^2;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,.$$ Therefore, $\kappa_{[H,L^2;0]} \in \mathsf{NL}^1(\mathcal{M}_\Lambda)$. Then, again by \eqref{get6}, \eqref{wd4}, and \eqref{get7}, $$\kappa_{[H^2,L;0]}\,, \ \kappa_{[L;1]}\,, \ Z(L) \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,.$$ Case D is complete. \vspace{8pt} \noindent {\bf Case E.} $\kappa_{[L_1,L_2,L_3;0]}$ for arbitrary $L_1, L_2, L_3 \in \Lambda$. \nopagebreak \vspace{8pt} We apply ($\ddag'$) with insertion $\mathcal{L}_{1,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{3,(3)} \in\mathsf{R}^3(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to~$\mathcal{M}_\Lambda$. The result expresses $2\ell \cdot \kappa_{[L_1,L_2,L_3;0]}$ in terms of Noether-Lefschetz divisors and $\kappa$ divisors treated in the previous cases. Therefore, $$\kappa_{[L_1,L_2,L_3;0]} \ \in \ \mathsf{NL}^1(\mathcal{M}_\Lambda) \,.$$ Case E is complete. \vspace{8pt} Cases A-E together cover all divisorial $\kappa$ classes and prove the divisorial case of Theorem \ref{dxxd}. 
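For completeness, we record the term-by-term substitution used at the end of Case D. Multiplying \eqref{get8} by $\langle H, L\rangle_\Lambda$ and writing $\equiv$ for equality modulo $\mathsf{NL}^1(\mathcal{M}_\Lambda)$, we have
\begin{align*}
24 \langle H, L\rangle_\Lambda \cdot \kappa_{[H^2,L;0]} \ &\equiv \ 12(2\ell) \cdot \kappa_{[H,L^2;0]} && \text{by \eqref{get6}}\, , \\
2\ell \langle H, L\rangle_\Lambda \cdot \kappa_{[L;1]} \ &\equiv \ 2(2\ell) \cdot \kappa_{[H,L^2;0]} && \text{by \eqref{wd4}}\, , \\
24(2\ell) \langle H, L\rangle_\Lambda \cdot Z(L) \ &\equiv \ -24(2\ell) \cdot \kappa_{[H,L^2;0]} && \text{by \eqref{get7}}\, ,
\end{align*}
and summing the right sides yields the coefficient $(12 + 2 - 24)(2\ell) = -10(2\ell) \neq 0$.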
\begin{proposition} \label{pdxxd} The strict tautological ring in codimension $1$ is generated by Noether-Lefschetz loci, $$\mathsf{NL}^1(\mathcal{M}_{\Lambda}) = \mathsf{R}^1(\mathcal{M}_{\Lambda})\, .$$ \end{proposition} In fact, by the result of \cite{Ber}, $\mathsf{NL}^1(\mathcal{M}_\Lambda)$ generates {\it all} of $\mathsf{A}^1(\mathcal{M}_\Lambda)$ for $\text{rank}(\Lambda)\leq 17$. We have given a direct proof of Proposition \ref{pdxxd} using exported relations which is valid for every lattice polarization $\Lambda$ without rank restriction. The same method will be used to prove the full statement of Theorem \ref{dxxd}. \subsection{Second Chern class} \label{scc} The next step is to eliminate the $c_2(\mathcal{T}_{\pi_\Lambda})$ index in the class $\kappa_{[L_1^{a_1},\ldots,L_k^{a_k};b]}$ and reduce to the case $$\kappa_{[L_1^{a_1},\ldots,L_k^{a_k};0]} \,.$$ Our strategy is to express $c_2(\mathcal{T}_{\pi_\Lambda}) \in \mathsf{R}^2(\mathcal{X}_\Lambda)$ in terms of simpler strict tautological classes. From now on, we will require only the decomposition $(\ddag')$. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag'$) with insertion $\mathcal{H}_{(1)}\mathcal{H}_{(2)}\Delta_{(23)} \in\mathsf{R}^4(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. As a result, we find $$2\ell \cdot \kappa_{[H^2;1]} - \kappa_{[H^3;0]}\kappa_{[H;1]} - 2 \cdot \kappa_{[H^4;0]} + 2 \cdot \kappa_{[H^4;0]} \ \in \ \mathsf{NL}^2(\mathcal{M}_\Lambda) \,,$$ where we have used Proposition \ref{pdxxd} for all the non-principal terms corresponding to larger lattices. By Proposition \ref{pdxxd} for $\Lambda$, we have $\kappa_{[H^3;0]}, \,\kappa_{[H;1]} \in \mathsf{NL}^1(\mathcal{M}_{\Lambda})$. 
We conclude $$\kappa_{[H^2;1]} \ \in \ \mathsf{NL}^2(\mathcal{M}_\Lambda) \,.$$ \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}We apply ($\ddag'$) with insertion $\Delta_{(12)}\in\mathsf{R}^2(\mathcal{X}^3_\Lambda)$, and push-forward to $\mathcal{X}_\Lambda$ via the third projection $$\text{pr}_{(3)}: \mathcal{X}^3_\Lambda \rightarrow \mathcal{X}_\Lambda\, .$$ We find \begin{align*} 2\ell \cdot c_2(\mathcal{T}_{\pi_\Lambda}) \, & = \, 2 \cdot \mathcal{H}^2 + 24 \cdot \mathcal{H}^2 - \kappa_{[H^2;1]} - 2 \cdot \mathcal{H}^2 + \ldots \\ & = \, 24 \cdot \mathcal{H}^2 - \kappa_{[H^2;1]} + \ldots \ \in \ \mathsf{R}^2(\mathcal{X}_\Lambda) \,, \end{align*} where the dots stand for strict tautological classes supported over proper Noether-Lefschetz loci of $\mathcal{M}_\Lambda$. \vspace{8pt} We have already proven $\kappa_{[H^2;1]} \in \mathsf{NL}^2(\mathcal{M}_\Lambda)$. Therefore, up to strict tautological classes supported over proper Noether-Lefschetz loci of $\mathcal{M}_\Lambda$, we may replace $c_2(\mathcal{T}_{\pi_\Lambda})$ by $$\frac{24}{2\ell} \,\cdot\, \mathcal{H}^2 \ \in \ \mathsf{R}^2(\mathcal{X}_\Lambda) \,.$$ The replacement lowers the $c_2(\mathcal{T}_{\pi_\Lambda})$ index of $\kappa$ classes. By induction, we need only prove Theorem \ref{dxxd} for $\kappa$ classes with trivial $c_2(\mathcal{T}_{\pi_\Lambda})$ index. \subsection{Proof of Theorem \ref{dxxd}} The $\kappa$ classes with trivial $c_2(\mathcal{T}_{\pi_\Lambda})$ index can be written as $$\kappa_{[H^a,L_1,\ldots,L_k;0]} \ \in \ \mathsf{R}^{a + k - 2}(\mathcal{M}_\Lambda) \,,$$ where the $L_i \in \Lambda$ are admissible classes (not necessarily distinct) that are different from the quasi-polarization $H$. 
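Concretely, each step of the reduction of \S \ref{scc} lowers the $c_2(\mathcal{T}_{\pi_\Lambda})$ index by one:
$$\kappa_{[H^a,L_1,\ldots,L_k;b]} \ \equiv \ \frac{12}{\ell} \cdot \kappa_{[H^{a+2},L_1,\ldots,L_k;b-1]}$$
modulo strict tautological classes supported over proper Noether-Lefschetz loci of $\mathcal{M}_\Lambda$, so after $b$ steps only classes with trivial $c_2(\mathcal{T}_{\pi_\Lambda})$ index remain.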
\vspace{8pt} \noindent {\bf Codimension $2$.} \nopagebreak \vspace{8pt} In codimension $2$, the complete list of $\kappa$ classes (with trivial $c_2(\mathcal{T}_{\pi_\Lambda})$ index) is: $$\kappa_{[H^4;0]}\,, \ \kappa_{[H^3,L;0]}\,, \ \kappa_{[H^2,L_1,L_2;0]}\,, \ \kappa_{[H,L_1,L_2,L_3;0]}\,, \ \kappa_{[L_1,L_2,L_3,L_4;0]} \ \in \ \mathsf{R}^2(\mathcal{M}_\Lambda) \,.$$ \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}For $\kappa_{[H^4;0]}$, we apply ($\ddag'$) with insertion $\mathcal{H}_{(1)}^2\Delta_{(23)}\in\mathsf{R}^4(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. We find $$2\ell \cdot \kappa_{[H^2;1]} - 24 \cdot \kappa_{[H^4;0]} - 2 \cdot \kappa_{[H^4;0]} + 2 \cdot \kappa_{[H^4;0]} + 2\ell \cdot \kappa_{[H^2;1]} \ \in \ \mathsf{NL}^2(\mathcal{M}_\Lambda) \,,$$ where we have used Proposition \ref{pdxxd} for all the non-principal terms corresponding to larger lattices. Since $\kappa_{[H^2;1]} \in \mathsf{NL}^2(\mathcal{M}_\Lambda)$ by Section \ref{scc}, we have $\kappa_{[H^4;0]} \in \mathsf{NL}^2(\mathcal{M}_\Lambda)$. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}For $\kappa_{[H^3,L;0]}$, we apply ($\ddag'$) with insertion $\mathcal{H}_{(1)}^2\mathcal{H}_{(2)}\mathcal{L}_{(3)} \in\mathsf{R}^4(\mathcal{X}^3_\Lambda)$, and push-down via~$\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. We find $$2\ell \cdot \kappa_{[H^3,L;0]} - \langle H, L\rangle_\Lambda \cdot \kappa_{[H^4;0]} - 2 \cdot \kappa_{[H^3;0]} \kappa_{[H^2,L;0]} + 2\ell \cdot \kappa_{[H^3,L;0]} \ \in \ \mathsf{NL}^2(\mathcal{M}_\Lambda) \,,$$ hence $\kappa_{[H^3,L;0]} \in \mathsf{NL}^2(\mathcal{M}_\Lambda)$. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}For $\kappa_{[H^2,L_1,L_2;0]}$, we apply ($\ddag'$) with insertion $\mathcal{H}_{(1)}^2\mathcal{L}_{1,(2)}\mathcal{L}_{2,(3)} \in\mathsf{R}^4(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. 
We find \begin{multline*} 2\ell \cdot \kappa_{[H^2,L_1,L_2;0]} - \langle L_1, L_2\rangle_\Lambda \cdot \kappa_{[H^4;0]} \\ - 2 \cdot \kappa_{[H^2,L_1;0]}\kappa_{[H^2,L_2;0]} + 2\ell \cdot \kappa_{[H^2,L_1,L_2;0]} \ \in \ \mathsf{NL}^2(\mathcal{M}_\Lambda) \,, \end{multline*} hence $\kappa_{[H^2,L_1,L_2;0]} \in \mathsf{NL}^2(\mathcal{M}_\Lambda)$. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}For $\kappa_{[H,L_1,L_2,L_3;0]}$, we apply ($\ddag'$) with insertion $\mathcal{H}_{(1)}\mathcal{L}_{1,(1)}\mathcal{L}_{2,(2)}\mathcal{L}_{3,(3)} \in \mathsf{R}^4(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. We find \begin{multline*} 2\ell \cdot \kappa_{[H,L_1,L_2,L_3;0]} - \langle L_2, L_3\rangle_\Lambda \cdot \kappa_{[H^3,L_1;0]} - \kappa_{[H^2,L_2;0]}\kappa_{[H,L_1,L_3;0]} \\ - \kappa_{[H^2,L_3;0]}\kappa_{[H,L_1,L_2;0]} + \langle H, L_1\rangle_\Lambda \cdot \kappa_{[H^2,L_2,L_3;0]} \ \in \ \mathsf{NL}^2(\mathcal{M}_\Lambda) \,, \end{multline*} hence $\kappa_{[H,L_1,L_2,L_3;0]} \in \mathsf{NL}^2(\mathcal{M}_\Lambda)$. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}For $\kappa_{[L_1,L_2,L_3,L_4;0]}$, we apply ($\ddag'$) with insertion $\mathcal{L}_{1,(1)}\mathcal{L}_{2,(1)}\mathcal{L}_{3,(2)}\mathcal{L}_{4,(3)} \in \mathsf{R}^4(\mathcal{X}^3_\Lambda)$, and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. We find \begin{multline*} 2\ell \cdot \kappa_{[L_1,L_2,L_3,L_4;0]} - \langle L_3, L_4\rangle_\Lambda \cdot \kappa_{[H^2,L_1,L_2;0]} - \kappa_{[H^2,L_3;0]}\kappa_{[L_1,L_2,L_4;0]} \\ - \kappa_{[H^2,L_4;0]}\kappa_{[L_1,L_2,L_3;0]} + \langle L_1, L_2\rangle_\Lambda \cdot \kappa_{[H^2,L_3,L_4;0]} \ \in \ \mathsf{NL}^2(\mathcal{M}_\Lambda) \,, \end{multline*} hence $\kappa_{[L_1,L_2,L_3,L_4;0]} \in \mathsf{NL}^2(\mathcal{M}_\Lambda)$. 
\vspace{8pt} \noindent {\bf Codimension $\geq 3$.} \nopagebreak \vspace{8pt} Our strategy in codimension $c\geq 3$ involves an induction on codimension together with a second induction on the $H$ index $a$ of the kappa class $$\kappa_{[H^a,L_1,\ldots,L_k;0]} \ \in \ \mathsf{R}^{a + k - 2}(\mathcal{M}_\Lambda) \,.$$ For the induction on $c$, we assume the Noether-Lefschetz generation for all {\it lower} codimension. The base case is Proposition \ref{pdxxd}. For the induction on $a$, we assume the Noether-Lefschetz generation for all {\it higher} $H$ index. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}For the base of the induction on $H$ index, consider the class $$\kappa_{[H^a;0]} \ \in \ \mathsf{R}^{a - 2}(\mathcal{M}_\Lambda)\, .$$ We apply ($\ddag'$), insert $$\mathcal{H}_{(1)}^{a - 3}\mathcal{H}_{(2)}^{2}\mathcal{H}_{(3)} \ \in \ \mathsf{R}^a(\mathcal{X}^3_\Lambda)\, \ \ \text{with} \ a-2=c\, ,$$ and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. By the induction on codimension, we obtain \begin{multline} \label{fnfn} 2\ell \cdot \kappa_{[H^a;0]} - 2 \cdot \kappa_{[H^3;0]}\kappa_{[H^{a-1};0]} - \kappa_{[H^4;0]}\kappa_{[H^{a-2};0]} \\ + 2\ell \cdot \kappa_{[H^a;0]} + \kappa_{[H^5;0]}\kappa_{[H^{a-3};0]} \ \in \ \mathsf{NL}^{a - 2}(\mathcal{M}_\Lambda) \,. \end{multline} For both{\footnote{Since $a-2=c\geq 3$, $a\geq 5$.}} $a = 5$ and $a > 5$, the coefficient of $\kappa_{[H^a;0]}$ is positive and the other terms in \eqref{fnfn} are products of $\kappa$ classes of lower codimension. Therefore, by the induction hypothesis, $$\kappa_{[H^a;0]} \ \in \ \mathsf{NL}^{a - 2}(\mathcal{M}_\Lambda) \,.$$ \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}If $a>0$ and $k > 0$, we apply ($\ddag'$), insert $$\mathcal{H}_{(1)}^{a - 1}\mathcal{L}_{1,(1)} \cdots \mathcal{L}_{k - 1,(1)}\mathcal{H}_{(2)}\mathcal{L}_{k,(3)} \ \in \ \mathsf{R}^{a+k}(\mathcal{X}^3_\Lambda)\, \ \ \text{with} \ a+k-2=c\, ,$$ and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. 
By the induction on codimension, we obtain \begin{multline}\label{msms7} 2\ell \cdot \kappa_{[H^a,L_1,\ldots,L_k;0]} - \langle H, L_k\rangle_\Lambda \cdot \kappa_{[H^{a + 1},L_1,\ldots,L_{k-1};0]} \\ - \kappa_{[H^3;0]}\kappa_{[H^{a-1},L_1,\ldots,L_{k-1},L_k;0]} - \kappa_{[H^2,L_k;0]}\kappa_{[H^a,L_1,\ldots,L_{k-1};0]} \\ + \kappa_{[H^3,L_k;0]}\kappa_{[H^{a-1},L_1,\ldots,L_{k-1};0]} \ \in \ \mathsf{NL}^{a + k - 2}(\mathcal{M}_\Lambda) \,. \end{multline} Since the last three terms of \eqref{msms7} are products of $\kappa$ classes of lower codimension (since $a+k\geq 5$), using the induction hypothesis again yields $$2\ell \cdot \kappa_{[H^a,L_1,\ldots,L_k;0]} - \langle H, L_k\rangle_\Lambda \cdot \kappa_{[H^{a + 1},L_1,\ldots,L_{k-1};0]} \ \in \ \mathsf{NL}^{a + k - 2}(\mathcal{M}_\Lambda) \,,$$ which allows us to raise the $H$ index. \vspace{8pt} \noindent \makebox[12pt][l]{$\bullet$}If $a = 0$, we apply ($\ddag'$), insert $$\mathcal{L}_{1,(1)} \cdots \mathcal{L}_{k - 2,(1)}\mathcal{L}_{k - 1, (2)}\mathcal{L}_{k,(3)} \ \in \ \mathsf{R}^k(\mathcal{X}^3_\Lambda)\, \ \ \text{with} \ k-2=c\, ,$$ and push-down via $\pi^3_\Lambda$ to $\mathcal{M}_\Lambda$. By the induction on codimension, we obtain \begin{multline} \label{msms} 2\ell \cdot \kappa_{[L_1,\ldots,L_k;0]} - \langle L_{k - 1}, L_k\rangle_\Lambda \cdot \kappa_{[H^2,L_1,\ldots,L_{k - 2};0]} \\ - \kappa_{[H^2,L_{k - 1};0]}\kappa_{[L_1,\ldots,L_{k - 2},L_k;0]} - \kappa_{[H^2,L_k;0]}\kappa_{[L_1,\ldots,L_{k - 2},L_{k - 1};0]} \\ + \kappa_{[H^2,L_{k-1},L_k;0]}\kappa_{[L_1,\ldots,L_{k-2};0]} \ \in \ \mathsf{NL}^{k - 2}(\mathcal{M}_\Lambda) \,. 
\end{multline} Since the last three terms of \eqref{msms} are products of $\kappa$ classes of lower codimension (since $k\geq 5$), using the induction hypothesis again yields $$2\ell \cdot \kappa_{[L_1,\ldots,L_k;0]} - \langle L_{k - 1}, L_k\rangle_\Lambda \cdot \kappa_{[H^2,L_1,\ldots,L_{k - 2};0]} \ \in \ \mathsf{NL}^{k - 2}(\mathcal{M}_\Lambda) \,,$$ which allows us to raise the $H$ index. \vspace{8pt} The induction argument on codimension and $H$ index is complete. The Noether-Lefschetz generation of Theorem \ref{dxxd} is proven. \qed
\begin{document} \title[class numbers along a Galois representation]{ Asymptotic lower bound of class numbers along a Galois representation} \author{Tatsuya Ohshita} \address{Graduate School of Science and Engineering, Ehime University 2--5, Bunkyo-cho, Matsuyama-shi, Ehime 790--8577, Japan} \email{ohshita.tatsuya.nz@ehime-u.ac.jp} \date{\today} \subjclass[2010]{Primary 11R29; Secondary 11G05, 11G10, 11R23. } \keywords{class number; Galois representation; elliptic curve; abelian variety; Selmer group; Mordell--Weil group; Iwasawa theory} \begin{abstract} Let $T$ be a free $\Z_p$-module of finite rank equipped with a continuous $\Z_p$-linear action of the absolute Galois group of a number field $K$ satisfying certain conditions. In this article, by using a Selmer group corresponding to $T$, we give a lower bound of the additive $p$-adic valuation of the class number of $K_n$, which is the Galois extension field of $K$ fixed by the stabilizer of $T/p^n T$. By applying this result, we prove an asymptotic inequality which describes an explicit lower bound of the class numbers along a tower $K(A[p^\infty]) /K$ for a given abelian variety $A$ with certain conditions in terms of the Mordell--Weil group. We also prove another asymptotic inequality for the cases when $A$ is a Hilbert--Blumenthal or CM abelian variety. \end{abstract} \maketitle \section{Introduction}\label{secintro} Commencing with Iwasawa's class number formula (\cite{Iw} \S 4.2), it is a classical and important problem to study the asymptotic behavior of class numbers along a tower of number fields. Greenberg (\cite{Gr}) and Fukuda--Komatsu--Yamagata (\cite{FKY}) studied Iwasawa's $\lambda$-invariant of a certain (non-cyclotomic) $\Z_p$-extension of a CM field for a prime number $p$: by using the Mordell--Weil group of a CM abelian variety, they gave a lower bound of the $\lambda$-invariant. 
Sairaiji--Yamauchi (\cite{SY1}, \cite{SY2}) and Hiranouchi (\cite{Hi}) studied the asymptotic behavior of class numbers along a $p$-adic Lie extension $\Q (E[p^\infty])/\Q$ generated by coordinates of all $p$-power torsion points of an elliptic curve $E$ defined over $\Q$ satisfying certain conditions, and obtained results analogous to those in \cite{Gr} and \cite{FKY}. In this article, by using the terminology of Selmer groups, we generalize their results to the $p$-adic Lie extension of a number field $K$ along a $p$-adic representation of the absolute Galois group $G_K:=\Gal (\overline{K}/K)$ (Theorem \ref{thmmain}). As an application of this theory, we prove an asymptotic inequality which gives a lower bound of the class numbers along a tower $K(A[p^\infty]) /K$ for a given abelian variety $A$ with certain conditions (Corollary \ref{corab}). We also prove another asymptotic inequality for the cases when $A$ is a Hilbert--Blumenthal or CM abelian variety (Corollary \ref{corCM}). Let us introduce our notation. Fix a prime number $p$, and let $\ord_p \colon \Q^\times \longrightarrow \Z$ be the additive $p$-adic valuation normalized by $\ord_p (p)=1$. Let $K/\Q$ be a finite extension, and $\Sigma$ a finite set of places of $K$ containing all places above $p$ and all infinite places. We denote by $K_\Sigma$ the maximal Galois extension field of $K$ unramified outside $\Sigma$, and put $G_{K,\Sigma}:=\Gal(K_\Sigma/K)$. Let $d \in \Z_{>0}$, and let $T$ be a free $\Z_p$-module of rank $d$ equipped with a continuous $\Z_p$-linear $G_{K,\Sigma}$-action \( \rho\colon G_{K,\Sigma} \longrightarrow \Aut_{\Z_p} (T) \simeq \GL_d (\Z_p) \). We put $V:=T \otimes_{\Z_p} \Q_p$, and $W:=V/T \simeq T \otimes_{\Z_p} \Q_p/\Z_p$. Let $n \in \Z_{\ge 0}$. We denote the $\Z_p[G_{K,\Sigma}]$-submodule of $W$ consisting of all $p^n$-torsion elements by $W[p^n]$. 
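As a basic illustration of this setup (a standard example, included only for orientation), one may take $d=1$ and $T = \Z_p(1)$, the Tate twist with the cyclotomic action; it is unramified outside $p$ and the infinite places, so the action factors through $G_{K,\Sigma}$. Then
\[ W[p^n] \ \simeq \ \mu_{p^n} \]
as $\Z_p[G_{K,\Sigma}]$-modules, where $\mu_{p^n} \subseteq \overline{K}^\times$ denotes the group of $p^n$-th roots of unity.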
We define $K_n:=K(W[p^n])$ to be the maximal subfield of $K_{\Sigma}$ fixed by the kernel of the continuous group homomorphism \( \rho_n \colon G_{K,\Sigma} \longrightarrow \Aut_{\Z/p^n \Z} (W[p^n]) \) induced by $\rho$. We denote by $h_n$ the class number of $K_n$. In this article, we study the asymptotic behavior of the sequence $\{ \ord_p (h_n) \}_{n \ge 0}$ by using a Selmer group of $W$. Let us briefly introduce some notation related to Selmer groups in our setting. (For details, see \S \ref{ssloccond}.) Let \( \cF=\{ H^1_{\cF} (L_w, V) \subseteq H^1 (L_w, V) \}_{L,w} \) be any local condition on $(V, \Sigma)$ in the sense of Definition \ref{defLC}. For instance, we can set $\cF$ to be Bloch--Kato's finite local condition $f$. Let $v \in \Sigma$ be any element, and $H^1_{\cF} (K_v, W)$ be the $\Z_p$-submodule of $H^1 (K_v, W)$ attached to $\cF$. Since the Galois cohomology $H^1 (K_v, W)$ is a cofinitely generated $\Z_p$-module, so is the subquotient \[ \cH_v:= H^1_{\cF} (K_v, W)/(H^1_{\cF} (K_v, W) \cap H^1_{\unram} (K_v, W)), \] where $H^1_{\unram} (K_v, W)$ denotes the unramified part of $H^1 (K_v, W)$. We denote the corank of the $\Z_p$-module $\cH_v$ by $r_v=r_v (T, \cF)$. We define the Selmer group of $W$ over $K$ with respect to the local condition $\cF$ by \[ \Sel_{\cF}(K,W):= \Ker \left( H^1(K_\Sigma /K,W) \longrightarrow \prod_{v \in \Sigma} \frac{H^1 (K_v, W)}{H^1_{\cF} (K_v, W)} \right). \] Since $H^1(K_\Sigma /K,W)$ is a cofinitely generated $\Z_p$-module, so is $\Sel_{\cF}(K,W)$. We define $r_{\Sel}:=r_{\Sel}(T,\cF)$ to be the corank of the $\Z_p$-module $\Sel_{\cF}(K,W)$. For two sequences $\{ a_n \}_{n \ge 0}$ and $\{ b_n \}_{n \ge 0}$ of real numbers, we write $a_n \succ b_n$ if we have $\liminf_{n \to \infty} (a_n -b_n)> - \infty$, namely if the sequence $\{ a_n -b_n \}_{n \ge 0}$ is bounded below. The following theorem is the main result of our article. \begin{thm}\label{thmmain} Assume that $T$ satisfies the following two conditions. 
\begin{itemize} \setlength{\leftskip}{3mm} \item[$\Abs$] The representation $W[p]$ of $G_{K,\Sigma}$ over $\F_p$ is absolutely irreducible. \item[$\NT$] If $d=1$, then $G_{K,\Sigma}$ acts on $W[p]$ non-trivially. \end{itemize} Then, we have \[ \ord_p (h_n) \succ d \left(r_{\Sel} - \sum_{v \in \Sigma}r_v \right) n. \] \end{thm} \begin{rem}\label{remstrver} In this article, we also show a stronger assertion than Theorem \ref{thmmain} which describes not only asymptotic behavior but also a lower bound of each $h_n$ in the strict sense. (See Theorem \ref{thmmainstr}.) \end{rem} Let $A$ be an abelian variety defined over a number field $K$, and put $g:=\dim A$. For each $N \in \Z_{\ge 0}$, we denote by $A[N]$ the $N$-torsion part of $A(\overline{K})$. Put $r_{\Z}(A):=\dim_\Q (A(K)\otimes_{\Z} \Q)$. For each $n \in \Z_{\ge 0}$, we denote by $h_{n}(A;p)$ the class number of $K(A[p^n])$, which is the extension field of $K$ generated by the coordinates of the elements of $A[p^n]$. By applying Theorem \ref{thmmain}, we obtain the following corollary. (See \S \ref{secab}.) \begin{cor}\label{corab} Suppose that $A[p]$ is an absolutely irreducible representation of $G_{K }$ over $\F_p$. Then, it holds that \[ \ord_p (h_n(A;p)) \succ 2g \left( r_\Z (A) -g [K : \Q] \right) n. \] \end{cor} \begin{rem} Suppose that $K=\Q$, and $A$ is an elliptic curve over $\Q$. Then, it follows from Corollary \ref{corab} that \( \ord_p (h_n(A;p)) \succ 2 \left( r_{\Z} (A) - 1 \right) n \). This asymptotic inequality coincides with that obtained by Sairaiji--Yamauchi (\cite{SY1}, \cite{SY2}) and Hiranouchi (\cite{Hi}). Moreover, if $p$ is odd, then Theorem \ref{thmmainstr}, which is a stronger result than Theorem \ref{thmmain} mentioned in Remark \ref{remstrver}, implies the strict estimates obtained in \cite{SY1}, \cite{SY2} and \cite{Hi}. For details, see Example \ref{exellstr}. 
\end{rem} We have another application of Theorem \ref{thmmain} for the cases when $A$ is a Hilbert--Blumenthal abelian variety or a CM abelian variety. Suppose that $K$ is a totally real or CM field which is Galois over $\Q$, and denote by $\cO_K$ the ring of integers in $K$. We denote by $K^+$ the maximal totally real subfield of $K$, and put $g:=[K^+: \Q]$. Let $p$ be a prime number which splits completely in $K$, and has prime ideal decomposition \( p\cO_K = \prod_{\sigma \in \Gal (K/\Q)} \sigma(\pi) \cO_K \) for some element $\pi \in \cO_K$. Let $A$ be a $g$-dimensional Hilbert--Blumenthal (resp.\ CM) abelian variety over $K$ which has good reduction at every place above $p$, and satisfies $\End_K (A) =\cO_K$ if $K$ is totally real (resp.\ CM). For each $n \in \Z_{\ge 0}$, we denote by $h_{n}(A;\pi)$ the class number of $K(A[\pi^n])$. We put $r_{\cO_K}(A):=\dim_K (A(K)\otimes_{\cO_K}K)$. Then, the following holds. \begin{cor}\label{corCM} Suppose that $A[\pi]$ is an absolutely irreducible non-trivial representation of $G_K$ over $\F_p$. Then, we have \[ \ord_p(h_{n}(A;\pi)) \succ \frac{2}{ [K:K^+]}\left( r_{\cO_K}(A) -g \right) n. \] \end{cor} \begin{rem} When $K$ is a CM field, the asymptotic inequality in Corollary \ref{corCM} coincides with that obtained by Greenberg \cite{Gr} and Fukuda--Komatsu--Yamagata \cite{FKY}. \end{rem} The strategy for the proof of our main result, namely Theorem \ref{thmmain}, is quite similar to that of the earlier works \cite{Gr}, \cite{FKY}, \cite{SY1}, \cite{SY2} and \cite{Hi}. For each $n \in \Z_{\ge 0}$, by using elements of the Selmer group, we construct a finite abelian extension $L_n/K_n$ which is unramified outside $\Sigma$, and whose degree is a power of $p$. Then, we compute the degree $[L_n : K_n]$ and the order of inertia subgroups at ramified places. In \S \ref{secGalcoh}, we introduce notation related to Galois cohomology, and prove some preliminary results. 
In \S \ref{secproof}, we prove our main theorem, namely Theorem \ref{thmmain}. In \S \ref{secab}, we apply Theorem \ref{thmmain} to the Galois representations arising from abelian varieties, and prove Corollary \ref{corab} and Corollary \ref{corCM}. We also compare our results with earlier works by Fukuda--Komatsu--Yamagata \cite{FKY}, Sairaiji--Yamauchi \cite{SY2} and Hiranouchi \cite{Hi}. \subsubsection*{Notation} Let $L/F$ be a Galois extension, and $M$ a topological abelian group equipped with a $\Z$-linear action of $\Gal (L/F)$. Then, for each $i \in \Z_{\ge 0}$, we denote by $H^i(L/F , M):=H^i_{\mathrm{cont}}(\Gal (L/F), M)$ the $i$-th continuous Galois cohomology. When $L$ is a separable closure of $F$, we write $H^i(F , M):=H^i(L/F , M)$. Let $F$ be a non-archimedean local field. We denote by $F^{\unram}$ the maximal unramified extension of $F$. For any topological abelian group $M$ equipped with a continuous $\Z$-linear action of $G_{F}$, we define \( H^1_{\unram}(F,M):= \Ker \left( H^1(F,M) \longrightarrow H^1(F^{\unram},M) \right) \). Let $R$ be a commutative ring, and $M$ an $R$-module. We denote by $\ell_R (M)$ the length of $M$. For each $a \in R$, we denote by $M[a]$ the $R$-submodule of $M$ consisting of elements annihilated by $a$. \if0 Let $G$ be a group, and suppose that $M$ is an abelian group equipped with a $\Z$-linear action of $G$. Then, we denote by $M^G$ the $G$-invariant part of $M$. \fi \section*{Acknowledgment} The author would like to thank Takuya Yamauchi for giving information on his works with Fumio Sairaiji (\cite{SY1}, \cite{SY2}), and suggesting related problems on abelian varieties. This work is motivated by Yamauchi's suggestion. \if0 This work is supported by JSPS KAKENHI Grant Number 26800011. \fi \section{Preliminaries on Galois cohomology}\label{secGalcoh} Here, we introduce some notation related to Galois cohomology, and prove preliminary results. 
Let $K$, $\Sigma$ and $T$ be as in \S \ref{secintro}, and assume that $T$ satisfies the conditions $\Abs$ and $\NT$ in Theorem \ref{thmmain}. We denote by $\Fin (K;\Sigma )$ the set of all intermediate fields $L$ of $K_\Sigma /K$ which are finite over $K$. For each $L \in \Fin (K;\Sigma )$, we denote by $P_L$ the set of all places of $L$, and by $\Sigma_L$ the subset of $P_L$ consisting of places above an element of $\Sigma$. \subsection{Local conditions}\label{ssloccond} In this subsection, let us define the notion of local conditions and Selmer groups in our article. \begin{defn}\label{defLC} Recall that we put $V:= T \otimes_{\Z_p} \Q_p$, and $W:=V/T$. \begin{enumerate}[{\rm (i)}]\setlength{\parskip}{1mm} \item A collection \( \cF := \left\{ H^1_{\cF}(L_w,V) \subseteq H^1(L_w , V) \mathrel{\vert} L \in \Fin (K; \Sigma) ,\, w \in \Sigma_L \right\} \) of $\Q_p$-subspaces is called {\em a local condition} on $(V, \Sigma)$ if the following $(*)$ is satisfied. \begin{enumerate}\setlength{\parskip}{1mm} \item[$(*)$] {\em Let $\iota \colon L_1 \hookrightarrow L_2 $ be an embedding of fields belonging to $\Fin (K; \Sigma)$ over $K$. Then, for any $w_1\in P_{L_1}$ and $w_2 \in P_{L_2}$ satisfying $\iota^{-1} w_2 =w_1$, the image of $ H^1_{\cF}(L_{1,w_1},V)$ via the map $ H^1 (L_{1,w_1},V) \longrightarrow H^1 (L_{2,w_2},V)$ induced by $\iota$ is contained in $H^1_{\cF} (L_{2,w_2},V)$.} \end{enumerate} \item Let $L \in \Fin (K; \Sigma)$ and $w \in P_L$. Then, we define $ H^1_{\cF}(L_w,W)$ to be the image of $ H^1_{\cF}(L_w,V)$ via the natural map $ H^1(L_w,V) \longrightarrow H^1(L_w,W)$. For any $n \in \Z_{\ge 0}$, we define $H^1_{\cF}(L_w,W[p^n])$ to be the inverse image of $ H^1_{\cF}(L_w,W)$ via the natural map $ H^1(L_w,W[p^n]) \longrightarrow H^1(L_w,W)$. \item Let $L \in \Fin (K; \Sigma)$, and $n \in \Z_{\ge 0} \cup \{ \infty \}$. 
Then, we define \[ \Sel_{\cF}(L,W[p^n]):= \Ker \left( H^1(K_\Sigma /L,W[p^n]) \longrightarrow \prod_{w \in \Sigma_L} \frac{H^1 (L_w, W[p^n])}{H^1_{\cF} (L_w, W[p^n])} \right). \] \end{enumerate} \end{defn} \begin{rem} Let $\cF$ be a local condition on $(V,\Sigma)$. Then, by definition, for any $L \in \Fin (K; \Sigma)$ and $w \in P_L$, the $\Z_p$-module $H^1_{\cF}(L_w, W)$ is divisible. \end{rem} \begin{rem} Let $L \in \Fin (K; \Sigma)$ be any element, and $w \in \Sigma_L$ an infinite place. Then, we note that $H^1 (L_w, V)=0$. Thus for any local condition $\cF$ on $(V, \Sigma)$, it clearly holds that $H^1_{\cF} (L_w, V)=0$. We also note that $H^1 (L_w, W)$ is annihilated by $2$. In particular, $H^1 (L_w, W)$ has no non-trivial divisible $\Z_p$-submodule, so the corank of $H^1 (L_w, W)$ is zero. When we treat a local condition, we may therefore ignore the infinite places. \end{rem} \begin{ex}[\cite{BK} \S 3] For $L \in \Fin (K; \Sigma)$ and a place $w \in P_L$, we put \[ H^1_f (L_w, V):= \begin{cases} H^1_{\unram} (L_w, V) & (\text{if $w \nmid p$}), \\ \Ker \left( H^1 (L_w, V) \longrightarrow H^1 (L_w, V\otimes_{\Q_p} B_{\mathrm{crys}}) \right) & (\text{if $w \mid p$}), \\ 0 & (\text{if $w \mid \infty$}) \end{cases} \] where $B_{\mathrm{crys}}$ is Fontaine's $p$-adic period ring introduced in \cite{Fo} and \cite{FM}. Then, we can easily verify that the collection \( \{ H^1_f (L'_{w'}, V) \mathrel{\vert} L' \in \Fin (K; \Sigma), \ w' \in \Sigma_{L'} \} \) forms a local condition on $(V,\Sigma)$. We call this collection {\em Bloch--Kato's finite local condition}. \end{ex} \subsection{Global cohomology} In this subsection, we introduce some preliminaries on global Galois cohomology. We put $K_\infty:=\bigcup_{n \ge 0} K_n$. For each $m,n \in \Z_{\ge 0} \cup \{ \infty \}$ with $n \ge m$, we put $G_{n,m} :=\Gal(K_n/K_m)$, and $G_{m} :=\Gal(K_\Sigma /K_m)$. 
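The arguments below repeatedly use the inflation--restriction exact sequence; we recall the form in which it will be applied. Since $\Gal (K_\Sigma/K_n)$ acts trivially on $W[p^n]$ by the definition of $K_n$, for each $n \in \Z_{\ge 0}$ the sequence
\[ 0 \longrightarrow H^1 (K_n/K, W[p^n]) \longrightarrow H^1 (K, W[p^n]) \longrightarrow H^1 (K_n, W[p^n]) \]
is exact, so the kernel of the restriction map on the right is identified with $H^1 (K_n/K, W[p^n])$.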
First, we control the kernel of the restriction map \[ \res_{n,W} \colon H^1 (K,W[p^n]) \longrightarrow H^1 (K_n,W[p^n]) \] for every $n \in \Z_{\ge 0}$. In order to do this, the key ingredients are the following fact and the irreducibility of $V$, which follows from the assumption $\Abs$ on $T$. \begin{thm}[\cite{Ru} Theorem C.1.1 in Appendix C]\label{thmvanish} Let $V'$ be a finite dimensional $\Q_p$-vector space, and suppose that a compact subgroup $H$ of $\GL (V'):=\Aut_{\Q_p} (V')$ acts irreducibly on $V'$ via the standard action. Then, we have $H^1 (H, V')=0$. \end{thm} By using Theorem \ref{thmvanish}, let us prove the following lemma. \begin{lem}\label{lemGlcohfin} There exists a non-negative integer $\nuim$ such that for any $n \in \Z_{\ge 0}$, we have $\ell_{\Z_p}(\Ker \res_{n,W}) \le \nuim$. \end{lem} \begin{proof} Let us show that $\# H^1 (K_\infty /K, W)< \infty$. By $\Abs$ and Theorem \ref{thmvanish}, we have $H^1 (K_\infty /K, V)=0$. So, we obtain an injection $H^1 (K_\infty /K, W) \hookrightarrow H^2 (K_\infty /K, T)$. Note that $G_{\infty,0}=\Gal(K_\infty / K)$ is a compact $p$-adic analytic group since $G_{\infty,0}$ can be regarded as a compact subgroup of $\GL (V)$. So, by Lazard's theorem (for instance, see \cite{DDMS} 8.1 Theorem), it holds that $G_{\infty,0}$ is topologically finitely generated. This implies that the order of $H^2 (K_\infty/K, W[p])$ is finite. Thus $H^2 (K_\infty /K, T)$ is finitely generated over $\Z_p$. (See \cite{Ta} Corollary of (2.1) Proposition.) Since $H^1 (K_\infty /K, W)$ is a torsion $\Z_p$-module, we deduce that the order of $H^1 (K_\infty /K, W)$ is finite. Let $\nu$ be the length of the $\Z_p$-module $H^1 (K_\infty /K, W)$. Take any $n \in \Z_{\ge 0}$. By the assumptions $\Abs$ and $\NT$, the natural map $H^1 (K_n /K, W[p^n]) \longrightarrow H^1 (K_n /K, W)$ is injective. So, by the inflation-restriction exact sequence, we obtain an injection \( \Ker \res_{n,W} \hookrightarrow H^1 (K_\infty /K, W) \).
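For the reader's convenience, let us record the inflation-restriction sequence used in the last step. This is only a sketch of standard bookkeeping, under the assumption (as the notation suggests) that $G_n$ acts trivially on $W[p^n]$:
\[
0 \longrightarrow H^1 (K_n/K, W[p^n]) \overset{\mathrm{inf}}{\longrightarrow} H^1 (K, W[p^n]) \overset{\res_{n,W}}{\longrightarrow} H^1 (K_n, W[p^n]).
\]
Thus $\Ker \res_{n,W} \simeq H^1 (K_n/K, W[p^n])$, and composing the injection $H^1 (K_n /K, W[p^n]) \hookrightarrow H^1 (K_n /K, W)$ above with the injective inflation map into $H^1 (K_\infty /K, W)$ yields the embedding $\Ker \res_{n,W} \hookrightarrow H^1 (K_\infty /K, W)$.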
Hence $\nuim :=\nu$ satisfies the desired properties. \end{proof} \begin{rem}\label{remnuimvanish} Note that if the image $\rho_1 (\Gal (K_1/K)) \subseteq \GL_d (\F_p)$ contains a non-trivial scalar matrix, then we can take $\nuim =0$. Indeed, in such cases, similarly to \cite{LW} \S 2 Lemma 3, we can show that $H^1 (K_\infty /K, W)=0$. \end{rem} \begin{ex}\label{exmnuimvanish} Suppose that $d= \dim_{\Q_p} V=2$, and assume the following hypothesis $\Full$. \begin{itemize}\setlength{\leftskip}{3mm} \item[$\Full$] If $p$ is odd, then the map $\rho_1 \colon G_{K,\Sigma} \longrightarrow \Aut_{\F_p}(W[p]) \simeq \GL_2 (\F_p)$ is surjective. If $p=2$, then $\rho \colon G_{K,\Sigma} \longrightarrow \Aut_{\Z_2}(T) \simeq \GL_2 (\Z_2)$ is surjective. \end{itemize} Then, we may take $\nuim=0$ because of the following Claim \ref{claimvanish}. Note that in \cite{SY1}, \cite{SY2} and \cite{Hi}, Sairaiji, Yamauchi and Hiranouchi assumed the hypothesis $\Full$. So, the constant $\nuim$ does not appear explicitly in these works. \end{ex} \begin{claim}\label{claimvanish} If $d=2$, and if $T$ satisfies $\Full$, then we have $H^1 (K_\infty /K, W)=0$. \end{claim} \begin{proof}[Proof of Claim \ref{claimvanish}] If $p$ is odd, then the claim follows from arguments similar to those in the proof of \cite{LW} \S 2 Lemma 3. So, we may suppose that $p=2$. It suffices to show that $H^0(K, H^1 (K_{n+1} /K_n, W[2]))=0$ for each $n \in \Z_{\ge 0}$. Note that we have $G_{1,0} \simeq \GL_2 (\F_2) \simeq \mathfrak{S}_3$. Let $A$ be the unique normal subgroup of $G_{1,0}$ of order $3$. Then, we have $W[2]^A=0$, and $H^1 (A, W[2])=0$. So, we have $H^1 (K_{1} /K, W[2])=0$. Take any $n\ge 1$, and let us show that \begin{equation}\label{eqKn+1Knvanish} H^0(K, H^1 (K_{n+1} /K_n, W[2]))=\Hom(G_{n+1,n}, W[2])^{G_{n+1,0}}=0.
\end{equation} The map $\rho_{n+1} \colon G_{n+1,0} \longrightarrow \GL_2(\Z/2^{n+1} \Z)$ induces an isomorphism from $G_{n+1, n}$ to a subgroup of \( (1+2^nM_2(\Z_2))/(1+2^{n+1}M_2(\Z_2)) \simeq M_2(\F_2) \) preserving the conjugate action of $G_{n+1,0} \simeq \rho_{n+1}(G_{n+1, 0})$, which factors through $G_{1,0} \simeq \GL_2 (\F_2)$. By the assumption $\Full$, we have $\Gal (K_{n+1}/K_n) \simeq M_2 (\F_2)$. Note that the $\F_2 [\GL_2 (\F_2)]$-submodules of $M_2 (\F_2)$ are $0 \subseteq \F_2 \subseteq \fsl (\F_2) \subseteq M_2 (\F_2)$. So, the $\F_2$-dimension of any quotient of $M_2 (\F_2)$ is $4$, $3$, $1$ or $0$; in particular, $M_2 (\F_2)$ never has a quotient isomorphic to $\F_2^2$. Hence the equality (\ref{eqKn+1Knvanish}) holds. \end{proof} Next, by using Galois cohomology classes contained in the Selmer group, we shall construct certain number fields. Let $n \in \Z_{\ge 0}$ be any element. Clearly, we have a natural isomorphism \( H^1(K_\Sigma/K_n,W[p^n])\simeq \Hom_{\cont}(G_n, W[p^n]) \), where $\Hom_{\cont}(G_n, W[p^n])$ denotes the group consisting of continuous homomorphisms from $G_n$ to $ W[p^n]$. Since $G_n$ is a normal subgroup of $G_{K,\Sigma}=G_0$, we can define a left action \[ G_{n,0} \times \Hom_{\cont}(G_n, W[p^n]) \longrightarrow \Hom_{\cont}(G_n, W[p^n]);\ (\sigma, f) \longmapsto \sigma * f \] of $G_{n,0}$ on $\Hom_{\cont}(G_n, W[p^n])$ by \( (\sigma * f)(x):=\sigma (f(\widetilde{\sigma}^{-1} x \widetilde{\sigma})) \) for each $x \in G_n$, where $\widetilde{\sigma} \in G_0$ is a lift of $\sigma$. Note that the definition of $\sigma * f$ is independent of the choice of $\widetilde{\sigma}$. The following lemma is a key to the proof of Theorem \ref{thmmain}. \begin{lem}\label{lemlengthdeg} Take any $n \in \Z_{\ge 0}$. Let $M$ be a $\Z_p$-submodule of \[ \cH_n:= H^0(K, H^1(K_\Sigma/K_n,W[p^n]))=\Hom_{\cont}(G_n, W[p^n])^{G_{n,0}}. \] We define $K_n (M)$ to be the maximal subfield of $K_\Sigma$ fixed by $\bigcap_{h \in M} \Ker h$. Then $K_n (M) /K_n$ is Galois, and $[K_n (M): K_n]= p^{d \ell_{\Z_p} (M)}$.
Moreover, the evaluation map \( e_{M} \colon M \longrightarrow \Hom_{\Z_p [G_{n,0}]} \left( \Gal (K_n (M)/K_n), W[p^n] \right) \) is an isomorphism of $\Z_p$-modules. \end{lem} \begin{proof} By definition, the extension $K_n (M) /K_n$ is clearly Galois, and $e_M$ is a well-defined injective homomorphism. Let us show the rest of the assertion of Lemma \ref{lemlengthdeg} by induction on $\ell_{\Z_p} (M)$. When $\ell_{\Z_p} (M)=0$, the assertion of Lemma \ref{lemlengthdeg} is clear. Let $\ell$ be a positive integer, and suppose that the assertion of Lemma \ref{lemlengthdeg} holds for any $\Z_p$-submodule $M'$ of $\cH_n$ satisfying $\ell_{\Z_p} (M') < \ell$. Let $M$ be any $\Z_p$-submodule of $\cH_n$ satisfying $\ell_{\Z_p} (M) = \ell$. Take a $\Z_p$-submodule $M_0$ of $M$ such that $\ell_{\Z_p} (M_0) = \ell-1$. By definition, we have $K_n (M_0) \subseteq K_n(M)$. Since $e_{M_0}$ is an isomorphism by the induction hypothesis, and since $e_M$ is an injection, we deduce that \begin{equation}\label{eqdeg>1} [K_n (M): K_n (M_0)]>1. \end{equation} For each $\Z_p$-submodule $N$ of $M$, we put $\mathfrak{K}(N):= \bigcap_{h' \in N} \Ker h' = \Gal (K_\Sigma/K_n({N}))$. Take an element $f \in M$ not contained in $M_0$. Then, we have $\mathfrak{K}(M)=\Ker f \cap \mathfrak{K}(M_0)$. Note that the abelian group $\Gal (K_n (M)/ K_n (M_0))= \mathfrak{K}(M_0)/\mathfrak{K}(M)$ is annihilated by $p$. (Indeed, if there exists an element of $\Gal (K_n (M)/ K_n (M_0))$ which is not annihilated by $p$, then we obtain a sequence $M_0 \subset pM+M_0 \subset M$, which contradicts the fact that $M/M_0$ is a simple $\Z_p$-module.) So, the map $f$ induces an injective $\F_p [G_{n,0}]$-linear map from $\Gal (K_n (M)/ K_n (M_0))=\mathfrak{K}(M_0)/ (\Ker f \cap \mathfrak{K}(M_0))$ into $(W[p^n])[p]=W[p]$. By the inequality (\ref{eqdeg>1}) and the assumption $\Abs$, we deduce that \begin{equation}\label{eqGalsimple} \Gal (K_n (M)/ K_n (M_0)) \simeq W[p].
\end{equation} Since we have $[K_n (M_0) : K_n]=p^{d(\ell-1)}$ by the induction hypothesis, we obtain \[ [K_n (M): K_n]= p^{d(\ell-1)}\cdot \# W[p] =p^{d \ell} = p^{d \ell_{\Z_p} (M)}. \] In order to complete the proof of Lemma \ref{lemlengthdeg}, it suffices to prove that the map $e_M$ is surjective. For each $\Z_p$-submodule $N$ of $M$, we put \[ X(N):= \Hom_{\Z_p [G_{n,0}]} \left( \Gal (K_n (N)/K_n), W[p^n] \right). \] Since $e_M$ is injective, and since $\ell_{\Z_p} (M)=\ell$, it suffices to show that $\ell_{\Z_p} (X(M)) \le \ell$. By the induction hypothesis, we have $\ell_{\Z_p} (X(M_0))=\ell-1$. Put \[ X(M;M_0):= \Hom_{\Z_p [G_{n,0}]} \left( \Gal (K_n (M)/K_n(M_0)), W[p^n] \right). \] Since we have an exact sequence \( 0 \longrightarrow X(M;M_0) \longrightarrow X(M) \longrightarrow X(M_0) \), it suffices to show that the $\Z_p$-module $X(M;M_0)$ is simple. By (\ref{eqGalsimple}), we obtain \[ X(M;M_0) \simeq \Hom_{\Z_p [G_{n,0}]} (W[p],W[p^n]) \simeq \End_{\F_p [G_{n,0}]} (W[p]) . \] Since the representation $W[p]$ of $G_{n,0}$ is absolutely irreducible over $\F_p$ by the assumption $\Abs$, we obtain $\End_{\F_p [G_{n,0}]} (W[p]) =\F_p$. Hence the $\Z_p$-module $X(M;M_0)$ is simple. This completes the proof of Lemma \ref{lemlengthdeg}. \end{proof} \section{Proof of Theorem \ref{thmmain}}\label{secproof} In this section, we prove Theorem \ref{thmmain}. Let us fix our notation. Again, let $K$, $\Sigma$ and $T$ be as in \S \ref{secintro}. Assume that $T$ satisfies the conditions $\Abs$ and $\NT$. Take any local condition $\cF$ on $(V,\Sigma)$. Recall that we put $r_{\Sel}=r_{\Sel} (T, \cF) = \corank_{\Z_p} \Sel_{\cF} (K, W)$. For each $n \in \Z_{\ge 0}$, we denote by $M_n$ the image of \( \Sel_{\cF} (K, W[p^n]) \longrightarrow \Sel_{\cF} (K_n, W[p^n]), \) and we put $L_n:=K_n(M_n)$ in the sense of Lemma \ref{lemlengthdeg}. \begin{prop}\label{propglob} Let $\nuim $ be as in Lemma \ref{lemGlcohfin}.
For any $n \in \Z_{\ge 0}$, the extension $L_n/K_n$ is unramified outside $\Sigma_{K_n}$, and \( \ord_p [L_n : K_n] \ge d (n r_{\Sel} - \nuim) \). \end{prop} \begin{proof} Since every $f \in M_n$ is a map defined on $G_n = \Gal (K_\Sigma/K_n)$, we deduce that $L_n$ is unramified outside $\Sigma$. By Lemma \ref{lemGlcohfin}, the length of the $\Z_p$-module $M_n$ is at least $n r_{\Sel} - \nuim$. So, Lemma \ref{lemlengthdeg} implies that $\ord_p [L_n : K_n] \ge d (n r_{\Sel} - \nuim)$. \end{proof} Let $v \in \Sigma$ be any finite place. We denote by $P_{n,v}$ the set of all places of $K_n$ above $v$. For each $w \in P_{n,v}$, we denote by $I_{w}(L_n/K_n)$ the inertia subgroup of $\Gal (L_n/K_n)$ at $w$. We define $I_{n,v}$ to be the subgroup of $\Gal (L_n/K_n)$ generated by $\bigcup_{w \in P_{n,v}} I_w (L_n /K_n)$. Recall that $r_{v}=r_{v} (T, \cF)$ denotes the corank of the $\Z_p$-module \( \cH_v:= H^1_{\cF} (K_v, W)/(H^1_{\cF} (K_v, W) \cap H^1_{\unram} (K_v, W)) \). Let $\cK /K_v$ be an algebraic extension. Put $W(\cK):=H^0 (\cK,W)$. We define $W(\cK)_{\divi}$ to be the maximal divisible $\Z_p$-submodule of $W (\cK)$. For each $n \in \Z_{\ge 0} \cup \{ \infty \}$, we define \[ \nu_{v,n} := \ell_{\Z_p} \left(H^0 (K_v, W(K_v^{\unram}) / W(K_v^{\unram})_{\divi}) \otimes_{\Z} \Z/p^n\Z\right). \] Note that $\{ \nu_{v, n} \}_{n \ge 0}$ is a bounded increasing sequence, and for any sufficiently large $m$, we have $\nu_{v, m} =\nu_{v,\infty}$. Let us prove the following proposition. \begin{prop}\label{proploc} We have $\ell_{\Z_p}(I_{n,v}) \le d(r_v n + \nu_{v, n })$ for any $v \in \Sigma$ and $n \in \Z_{\ge 0}$. \end{prop} In order to prove Proposition \ref{proploc}, we need the following lemma. \begin{lem}\label{lemunrerro} Let $n \in \Z_{\ge 0}$ be any element, and define \[ \wcH_{v,n}:= H^1_{\cF} (K_v, W[p^n])/(H^1_{\cF} (K_v, W[p^n]) \cap H^1_{\unram} (K_v, W[p^n])). \] Then, we have $\ell_{\Z_p} (\wcH_{v,n}) \le r_{v}n+ \nu_{v,n}$.
\end{lem} \begin{proof}[Proof of Lemma \ref{lemunrerro}] Let $\cK /K_v$ be an algebraic extension, and \[ \iota_{\cK ,n} \colon H^1 (\cK ,W[p^n]) \longrightarrow H^1 (\cK ,W) \] the natural map. By the short exact sequence $0 \longrightarrow W[p^n] \longrightarrow W \overset{p^n}{\longrightarrow} W \longrightarrow 0$, we obtain an isomorphism \( \Ker \iota_{\cK, n} \simeq W(\cK) \otimes_{\Z} \Z/p^n\Z \simeq (W(\cK) / W(\cK)_{\divi})\otimes_{\Z} \Z/p^n\Z. \) We put $\widetilde{Y}_n:=H^1_{\cF} (K_v, W[p^n]) \cap H^1_{\unram} (K_v, W[p^n])$, and $Y:=H^1_{\cF} (K_v, W) \cap H^1_{\unram} (K_v, W)$. By definition, it clearly holds that $\widetilde{Y}_{n}\subseteq \iota_{K_v,n}^{-1} (Y)$, and $H^1_{\cF} (K_v, W[p^n])/\widetilde{Y}_{n} = \wcH_{v,n}$. Moreover, we have an injection \( H^1_{\cF} (K_v, W[p^n])/\iota_{K_v,n}^{-1} (Y) \hookrightarrow \cH_{v}[p^n] \). So, we obtain \[ \ell_{\Z_p}(\wcH_{v,n}) \le \ell_{\Z_p}(\cH_{v}[p^n])+ \ell_{\Z_p}(\iota_{K_v,n}^{-1} (Y)/\widetilde{Y}_{n} ). \] On the one hand, since $H^1_{\cF} (K_v, W)$ is a divisible $\Z_p$-module by definition, so is the quotient module $\cH_{v}$. This implies that $\ell_{\Z_p}(\cH_{v}[p^n])= r_v n$. On the other hand, the restriction map $H^1(K_v, W[p^n]) \longrightarrow H^0(K_v, H^1(K^{\unram}_v, W[p^n]))$ induces an injection \[ \iota_{K_v,n}^{-1} (Y)/\widetilde{Y}_{n} \hookrightarrow H^0(K_v, \Ker \iota_{K^{\unram}_v,n}) \simeq H^0(K_v, W(K^{\unram}_v) / W(K^{\unram}_v)_{\divi}) \otimes_{\Z} \Z/p^n\Z. \] So, we obtain $\ell_{\Z_p}(\iota_{K_v,n}^{-1} (Y)/\widetilde{Y}_{n} ) \le \nu_{v,n}$. Hence $\ell_{\Z_p}(\wcH_{v,n}) \le r_{v}n+ \nu_{v,n}$. \end{proof} \begin{proof}[Proof of Proposition \ref{proploc}] Take any $v \in \Sigma$ and $n \in \Z_{\ge 0}$. Let $w \in P_{n,v}$. We define \[ \res_{I,w} \colon \Hom_{\Z_p} (\Gal(L_n/K_n),W[p^n]) \longrightarrow \Hom_{\Z_p} (I_{w}(L_n/K_n),W[p^n]) \] to be the restriction map, and put $M^{\unram}_{n,w}:= \Ker (\res_{I,w} \circ \res_{D,w} \vert_{M_n}) \subseteq M_n$.
By definition, the extension $K_n(M^{\unram}_{n,w})/K_n$ is unramified at $w$. So, we obtain \begin{equation}\label{eqIwMur} I_w (L_n / K_n) \subseteq \Gal (L_n/ K_n(M^{\unram}_{n,w})). \end{equation} We fix an element $w_0 \in P_{n,v}$. Let $\sigma \in G_{n,0}$ be any element. Then, the diagram \[ \xymatrix{ M_n \ar[rr]^{\sigma *(-) =\id_{M_n}} \ar[d]_{\res_{I,\sigma^{-1} w_0}} & & M_n \ar[d]^{\res_{I, w_0}} \\ \Hom_{\Z_p} (I_{\sigma^{-1}w_0}(L_n/K_n),W[p^n]) \ar[rr]^{\sigma *(-)}_{\simeq} & & \Hom_{\Z_p} (I_{w_0}(L_n/K_n),W[p^n]) } \] commutes. So, for each $w \in P_{n,v}$, we have $M^{\unram}_{n,w}=M^{\unram}_{n,w_0}$. Hence by (\ref{eqIwMur}), we obtain \( I_{n,v} \subseteq \Gal (L_n/ K_n(M^{\unram}_{n,w_0})) \). In order to prove Proposition \ref{proploc}, it suffices to show that \begin{equation}\label{eqineqMnMnur} \ell_{\Z_p} \left( \Gal (L_n/ K_n(M^{\unram}_{n,w_0})) \right) \le d(r_v n + \nu_{v , n}). \end{equation} By Lemma \ref{lemlengthdeg}, we have \( \ell_{\Z_p}(\Gal (L_n/ K_n(M^{\unram}_{n,w_0}))) =d\ell_{\Z_p}(M_n/ M^{\unram}_{n,w_0}) \). Since the natural surjection \( \xymatrix{ \Sel_{\cF} (K,W[p^n]) \ar@{->>}[r] & M_n/ M^{\unram}_{n,w_0} \simeq (\res_{I,w_0} \circ \res_{D,w_0})(M_n) } \) factors through the $\Z_p$-module $\wcH_{v,n}$ in Lemma \ref{lemunrerro}, we obtain the inequality (\ref{eqineqMnMnur}). \end{proof} \begin{proof}[Proof of Theorem \ref{thmmain}] Take any $n \in \Z_{\ge 0}$. Let $I$ be the subgroup of $\Gal (L_n/K_n)$ generated by $\bigcup_{v \in \Sigma} I_{n,v}$. Then, the extension $L_n^I/K_n$ is unramified at every finite place, and the degree $[L^I_n:K_n]$ is a power of $p$. So, by global class field theory, we have \[ \ord_p (h_n) \ge \ord_p [L^I_n: K_n] \ge \ord_p [L_n:K_n] - \sum_{v \in \Sigma} \ord_p (\#I_{n,v}). \] For each $n \in \Z_{\ge 0}$, we put $\nuimn:= \ell_{\Z_p} (H^1(K_n/K, W[p^n]))$. Then, by Lemma \ref{lemGlcohfin}, the sequence $\{ \nuimn \}_{n \ge 0}$ is bounded.
By Proposition \ref{propglob} and Proposition \ref{proploc}, we obtain \begin{equation}\label{eqprec} \ord_p (h_n) \ge d \left(r_{\Sel} \cdot n- \nuimn \right) - d \sum_{v \in \Sigma} \left( r_v n + \nu_{v, n} \right) \succ d \left(r_{\Sel} - \sum_{v \in \Sigma}r_v \right) n. \end{equation} This completes the proof of Theorem \ref{thmmain}. \end{proof} By the above arguments, in particular by the inequality (\ref{eqprec}), we have also obtained a slightly stronger result than Theorem \ref{thmmain}. \begin{thm}\label{thmmainstr} For any $n \in \Z_{\ge 0}$, we have \[ \ord_p (h_n) \ge d \left(r_{\Sel} - \sum_{v \in \Sigma}r_v \right) n - d \nuimn - d\sum_{v \in \Sigma}\nu_{v, n}. \] \end{thm} \section{Application to abelian varieties}\label{secab} In this section, we apply Theorem \ref{thmmain} to the extension defined by an abelian variety $A$. In \S \ref{subsecnoncm}, we prove Corollary \ref{corab}. Moreover, from the viewpoint of Theorem \ref{thmmainstr}, we compare our results (in their stronger form) with earlier results in the cases when $A$ is an elliptic curve. In \S \ref{seccm}, we study the cases when $A$ is a Hilbert--Blumenthal or CM abelian variety, and prove Corollary \ref{corCM}. \subsection{General cases}\label{subsecnoncm} Let $A$ be an abelian variety over a number field $K$, and fix a prime number $p$ such that $A[p]$ becomes an absolutely irreducible representation of the absolute Galois group $G_K$ of $K$ over $\F_p$. We denote the dimension of $A$ over $K$ by $g$. Let $T_pA$ be the $p$-adic Tate module of $A$, namely $T_pA :=\varprojlim_n A[p^n]$, and put $V_p A := T_p A \otimes_{\Z_p} \Q_p$. Note that $T_p A$ is a free $\Z_p$-module of rank $2g$. Let $\Sigma(A)$ be the subset of $P_K$ consisting of all places dividing $p \infty$ and all places where $A$ has bad reduction. Then, the natural action $\rho_A^{(p)}$ of $G_K$ on $T_p A$ is unramified outside $\Sigma(A)$. Let $L/K$ be any finite extension, and $w \in P_L$ any finite place above a prime number $\ell$.
With the aid of the implicit function theorem (for instance, \cite{Se} PART II Chapter III \S 10.2 Theorem) and the Jacobian criterion (for instance, \cite{Li} Chapter 4 Theorem 2.19), the projectivity and smoothness of $A$ imply that $A(L_w)$ is a $g$-dimensional compact abelian analytic group over $L_w$. So, we have \begin{equation}\label{eqstrthm} A(L_w) \simeq \Z_\ell^{g[L_w: \Q_\ell]} \oplus (\text{a finite abelian group}) \end{equation} (See Corollary 4 of Theorem 1 in \cite{Se} PART II Chapter V \S 7.) Related to this fact, the following is known. \begin{prop}[\cite{BK} Example 3.11]\label{corclvsfin} For any finite extension field $L$ of $K$, and any finite place $w \in P_L$, we have a natural isomorphism $H^1_f (L_w, A[p^\infty]) \simeq A(L_w) \otimes_{\Z} \Q_p/\Z_p $. If $w$ lies above $p$, then the corank of the $\Z_p$-module $H^1_{f}(L_w,A[p^\infty])$ is equal to $g [L_w:\Q_p]$. If $w$ does not lie above $p$, then $H^1_{f}(L_w,A[p^\infty])=0$. \end{prop} \begin{proof}[Proof of Corollary \ref{corab}] We define the Tate--Shafarevich group $\Sha (A/K) $ to be the kernel of \( H^1(K,A(\overline{K})) \longrightarrow \prod_{v \in P_K} H^1 (K_v, A(\overline{K}_v)) \). Then, we have a short exact sequence \begin{equation}\label{eqexseqSel} 0 \longrightarrow A(K) \otimes_\Z \Q_p/\Z_p \longrightarrow \Sel_f(K,A[p^\infty]) \longrightarrow \Sha (A/K)[p^\infty] \longrightarrow 0. \end{equation} (See \S C.4 in \cite{HS} Appendix C. Note that by Proposition \ref{corclvsfin}, our $\Sel_f(K,A[p^\infty])$ is naturally isomorphic to $\varinjlim_n \Sel^{([p^n])}(A/K)$ in the sense of \cite{HS} Appendix C, where $[p^n]\colon A \longrightarrow A$ denotes the multiplication-by-$p^n$ isogeny.) So, we obtain \[ \corank_{\Z_p} \Sel_f(K,A[p^\infty]) \ge r_\Z (A):= \rank_{\Z} A(K).
\] Hence by Theorem \ref{thmmain} for $(T,\Sigma, \cF)=(T_pA, \Sigma (A), f)$ and Proposition \ref{corclvsfin}, we obtain \[ \ord_p (h_n(A;p)) \succ 2g \left( r_{\Z}(A) - g \sum_{v \mid p}[K_v : \Q_p] \right) n =2g \left(r_{\Z} (A) - g [K : \Q] \right) n. \] This completes the proof of Corollary \ref{corab}. \end{proof} \begin{rem}\label{remnuimforab} Here, we give remarks on the image of the modulo $p$ representation \( \rho_{A, 1}^{(p)} \colon \Gal (K(A[p])/K) \longrightarrow \Aut (A[p]) \). Let $A$ be a principally polarized abelian variety of dimension $g$ defined over $K$, and take an odd prime number $p$. Then, the image of $\rho_{A,1}^{(p)}$ can be regarded as a subgroup of $\GSp_{2g} (\F_p)$. Clearly if $\mathrm{Im}\, \rho_{A,1}^{(p)}$ contains $\Sp_{2g} (\F_p)$, then $T_p A$ satisfies the conditions $\Abs$ and $\NT$. Since $p$ is odd, the non-trivial scalar $-1$ is contained in $\Sp_{2g} (\F_p)$. So, as noted in Remark \ref{remnuimvanish}, if $\mathrm{Im}\, \rho_{A,1}^{(p)}$ contains $\Sp_{2g} (\F_p)$, then we can take $\nuim=0$, where $\nuim$ denotes the error constant in Theorem \ref{thmmainstr} for $(T,\Sigma, \cF)=(T_pA, \Sigma (A), f)$. It is proved by Banaszak, Gajda and Kraso\'n that $\mathrm{Im}\, \rho_{A,1}^{(p)}$ contains $\Sp_{2g} (\F_p)$ for sufficiently large $p$ if $A$ satisfies the following (i)--(iv). \begin{enumerate}[{\rm (i)}] \item The abelian variety $A$ is simple. \item There is no endomorphism on $A$ defined over $\overline{K}$ except the multiplications by rational integers, namely $\End_{\overline{K}}(A) = \Z$. \item For any prime number $\ell$, the Zariski closure of the image of the $\ell$-adic representation \( \rho^{(\ell)}_{A} \colon G_K \longrightarrow \Aut_{\Q_\ell} (V_\ell A) \simeq \GL_{2g} (\Q_\ell) \) is a connected algebraic group. \item The dimension $g$ of $A$ is odd. \end{enumerate} See \cite{BGK} Theorem 6.16 for the cases when $\End_{\overline{K}}(A) = \Z$. (Note that in \cite{BGK}, they proved more general results.
For details, see loc.\ cit.) \end{rem} \begin{rem}\label{remabred} Here, we shall describe the error terms $\nu_{v,n}$ in Theorem \ref{thmmainstr} for $T=T_pA$ in terminology related to the reduction of $A$ at $v$. Let $\cA$ be the N\'eron model of $A$ over $\cO_K$. Take any finite place $v \in P_K$, and denote by $k_v$ the residue field of $\cO_{K_v}$. We put $A_{0,v}:=\cA \otimes_{\cO_K}k_v$, and define $A_{0,v}^0$ to be the identity component of $A_{0,v}$. Note that we have $H^0( K_v^{\unram}, A[p^\infty]) \simeq A_{0,v}(\overline{k}_v)$. (See \cite{ST} \S 1, Lemma 2.) By the Chevalley decomposition, we have an exact sequence \( 0 \longrightarrow T_v \times U_v \longrightarrow A_{0,v}^0 \longrightarrow B_v \longrightarrow 0 \) of group schemes over $k_v$, where $T_v$ is a torus, $U_v$ is a unipotent group, and $B_v$ is an abelian variety. (For instance, see \cite{Co} Theorem 1.1 and \cite{Wa} Theorem 9.5.) In particular, if $U_v( \overline{k}_v)[p^\infty]=0$ (for instance, if $v$ does not lie above $p$), then the divisible part of $A_{0,v}( \overline{k}_v)[p^\infty]$ coincides with $A^0_{0,v}( \overline{k}_v)[p^\infty]$, and hence \begin{equation}\label{eqnured} \nu_{v,n}=\ell_{\Z_p} (\pi_0 (A_{0,v})(k_v)[p^\infty]\otimes_\Z \Z/p^n \Z) =\ell_{\Z_p} (\pi_0 (A_{0,v})(k_v)[p^n]), \end{equation} where $\pi_0 (A_{0,v})$ denotes the group of the connected components of $A_{0,v}$. (Note that the second equality holds since $\pi_0 (A_{0,v})$ is finite.) We can compute the error factors $\nu_{v,n}$ explicitly if we know the structure of the reduction $A_{0,v}$ of $A$ at each finite place $v$. \end{rem} \begin{ex}[Error factors for elliptic curves]\label{exellstr} Now, we study the error factors $\nu_{v,n}$ in the setting of \cite{SY2} and \cite{Hi}. We set $K=\Q$, and let $A$ be an elliptic curve with minimal discriminant $\Delta$. Let $p$ be a prime number satisfying $\Full$ in Example \ref{exmnuimvanish}. We assume that $p$ is odd.
Moreover, we also assume the following hypotheses. \begin{itemize} \item If $p=3$, then $A$ does not have additive reduction at $p$. \item If $A$ has additive reduction at $p$, then $A(\Q_p)[p]=0$. \item If $A$ has split multiplicative reduction at $p$, then $p$ does not divide $\ord_p (\Delta)$. \end{itemize} (These hypotheses are assumed in \cite{SY2} for $p>2$ and \cite{Hi}.) Note that $\Sigma(A)$ is the set of places dividing $\infty p \Delta$. In this situation, we have $U_p( \overline{\F}_p)[p^\infty]=0$, where $U_p$ is the unipotent part of $A_{0,p}^0$. So, by (\ref{eqnured}), we obtain $\nu_{p, n}=0$ for each $n \in \Z_{\ge 0}$ since by \cite{Si} CHAPTER IV \S 9 Tate's algorithm 9.4, \begin{itemize} \item if $A$ has good reduction at $p$, then $A_{0,p}$ is connected; \item if $A$ has split multiplicative reduction at $p$, then $\pi_0 (A_{0,p})(\F_p) \simeq \Z/\ord_p (\Delta) \Z$; \item if $A$ has non-split multiplicative reduction at $p$, then $\pi_0 (A_{0,p})(\F_p) \simeq 0\ \text{or}\ \Z/2\Z$; \item if $A$ has additive reduction at $p$, then the order of $\pi_0 (A_{0,p})(\F_p)$ is prime to $p$. \end{itemize} Let $\ell$ be a prime number distinct from $p$. Then, we have $U_\ell ( \overline{\F}_\ell)[p^\infty]=0$. So, similarly to the above arguments, we obtain \begin{equation}\label{eqnuell} \nu_{\ell, n} = \begin{cases} \min \{ \ord_p (\ord_\ell (\Delta)), n \} & \left(\begin{array}{l} \text{if $A$ has split multiplicative} \\ \text{reduction at $\ell$} \end{array} \right), \\ 0 & (\text{otherwise}). \end{cases} \end{equation} Combining with Example \ref{exmnuimvanish}, Theorem \ref{thmmainstr} implies that \[ \ord_p (h_n(A;p)) \ge 2 \left( r_{\Z} (A) - 1 \right) n -\sum_{p \ne \ell\mid \Delta} \nu_{\ell, n}, \] where $\nu_{\ell, n}$ is given by (\ref{eqnuell}). This inequality coincides with that obtained in \cite{SY1}, \cite{SY2} and \cite{Hi} when $p$ is odd. In \cite{SY2}, Sairaiji and Yamauchi also treat the cases when $p=2$.
Note that when $p=2$, the inequality following from Theorem \ref{thmmainstr} is weaker than that obtained in \cite{SY2}. Indeed, our $\nu_{p,n}$ is always non-negative by definition, but instead, in \cite{SY2}, they introduced a constant $\delta_2$ related to the local behavior of $A$ at $p=2$ which may be negative. \end{rem} \subsection{RM and CM cases}\label{seccm} Here we shall prove Corollary \ref{corCM}. Let $K/K^+, p, \pi, A$ and $h_n (A;\pi)$ be as in Corollary \ref{corCM}. Take a subset $\Phi =\{\phi_1 , \dots , \phi_g \} \subseteq \Gal (K/\Q)$ such that we have an isomorphism \begin{equation}\label{eqstrCM} \Lie (A/K) \simeq \bigoplus_{i=1}^g (K, \phi_i) \end{equation} of modules over the ring \( K \otimes_\Z\End (A)= K \otimes_\Z \cO_K = \prod_{\sigma \in \Gal (K/\Q)} (K,\sigma) \). Note that $\phi_1\vert_{K^+}, \dots, \phi_g\vert_{K^+}$ are $g$ distinct elements of $\Gal (K^+ /\Q)$. Let us introduce notation related to the formal group law. Take any $\sigma \in \Gal (K/\Q)$, and denote by ${\sigma(\pi)}$ the place of $K$ corresponding to $\sigma(\pi) \cO_K$ (by abuse of notation). We put $k_{\sigma (\pi)}:= \cO_{K}/\sigma (\pi) \cO_{K}=\F_p$. We define $\cA_{\sigma(\pi)}$ to be the N\'eron model of $A_{K_{\sigma (\pi)}}:= A \otimes_K K_{\sigma (\pi)}$ over $\cO_{K_{\sigma(\pi)}}$, and $O_{\sigma (\pi), s}$ (resp.\ $O_{\sigma (\pi), \eta}$) the origin of the special (resp.\ generic) fiber of $\cA_{\sigma(\pi)}$. For each $\star \in \{ s, \eta \}$, let $\mathfrak{m}_{\sigma(\pi),\star}$ be the maximal ideal of the local ring $\sO_{{\cA}_{\sigma(\pi)},O_{\sigma(\pi),\star}}$. Note that $\sO_{{\cA}_{\sigma(\pi)},O_{\sigma(\pi),s}}$ is a regular local ring since $A$ has good reduction at $\sigma (\pi)$. Let $s'_{0}=p, s'_{ 1}, \dots , s'_{g} \in \mathfrak{m}_{{\sigma(\pi)},s}$ be any regular system of parameters for the local ring $(\sO_{{\cA}_{\sigma(\pi)},O_{{\sigma(\pi)},s}}, \mathfrak{m}_{{\sigma(\pi)},s})$.
Put $\mathfrak{n}:=\sO_{{\cA}_{\sigma(\pi)},O_{{\sigma(\pi)},s}} \cap \mathfrak{m}_{\sigma(\pi),\eta}$. Since we have the identity section $\Spec \cO_{K_{\sigma (\pi)}} \longrightarrow \cA_{\sigma (\pi)}$, it holds that $\sO_{{\cA}_{\sigma(\pi)},O_{{\sigma(\pi)},s}}/\mathfrak{n} \simeq \cO_{K_{\sigma (\pi)}}$. So, for each $i \in \Z$ with $1 \le i \le g$, we have a unique element $c_i \in \sigma (\pi) \cO_{K_{\sigma (\pi)}}$ such that $s_i:=s'_i-c_i \in \mathfrak{n}$. Since $\cO_{K_{\sigma (\pi)}}$ is a DVR, and since we have \begin{align} \mathfrak{n}/\mathfrak{n}^2 \otimes_{\cO_{K_{\sigma (\pi)}}} k(\sigma (\pi)) & \simeq \coLie (A_{0,\sigma(\pi)}/k(\sigma (\pi))), \\ \mathfrak{n}/\mathfrak{n}^2 \otimes_{\cO_{K_{\sigma (\pi)}}} K_{\sigma (\pi)} & \simeq \coLie (A_{K_{\sigma (\pi)}}/K_{\sigma (\pi)}), \label{eqcoLie} \end{align} the sequence $s_{ 1}, \dots , s_{g} $ forms a regular system of parameters for the regular local ring $(\sO_{{\cA}_{\sigma(\pi)},O_{{\sigma(\pi)},\eta}}, \mathfrak{m}_{{\sigma(\pi)}, \eta})$, and it holds that \begin{equation}\label{eqn/n2} \mathfrak{n}/\mathfrak{n}^2 = \bigoplus_{i=1}^g \cO_{K_{\sigma (\pi)}} \bar{s}_i , \end{equation} where $A_{0,\sigma(\pi)}$ denotes the special fiber of ${\cA}_{\sigma(\pi)}$, and $\bar{s}_i$ denotes the image of $s_i$ in $\mathfrak{n}/\mathfrak{n}^2$. We denote by $\widehat{\cA}_{\sigma(\pi)}= \Spf \sO_{\widehat{\cA}_{\sigma(\pi)}}$ the completion of $\cA_{\sigma(\pi)}$ along $O_{\sigma(\pi),s}$, and by $\widehat{\mathfrak{m}}_{\sigma(\pi)}$ the maximal ideal of $\sO_{\widehat{\cA}_{\sigma(\pi)}}$. Note that $s_{0}:=p, s_{ 1}, \dots , s_{g} $ forms a regular system of parameters for the complete regular local ring $(\sO_{\widehat{\cA}_{\sigma(\pi)}}, \widehat{\mathfrak{m}}_{\sigma(\pi)})$. We also note that $p$ splits completely (in particular, is unramified) in $K/\Q$ by our assumption. So, we have $\sO_{\widehat{\cA}_{\sigma(\pi)}} = \cO_{K_{\sigma(\pi)}}[[s_1 , \dots , s_g]]$.
(See \cite{Ma} Theorem 29.7.) For each $i \in \Z$ with $1 \le i \le g$, we define a formal power series $\sF_{A, \sigma(\pi),i} \in \cO_{K_{\sigma(\pi)}}[[x_1, \dots , x_g, y_1, \dots, y_g]]$ by \[ \sF_{A, \sigma(\pi),i} := \mathrm{add}_{A, \sigma(\pi)}^{\sharp} (s_i), \] where $\mathrm{add}_{A, \sigma(\pi)}^{\sharp} \colon \sO_{\widehat{\cA}_{\sigma(\pi)} } \longrightarrow \sO_{\widehat{\cA}_{\sigma(\pi)}} \widehat{\otimes}_{\cO_{K_{\sigma(\pi)}}} \sO_{\widehat{\cA}_{\sigma(\pi)}} =\cO_{K_{\sigma(\pi)}}[[x_1, \dots , x_g, y_1, \dots, y_g]]$ is the ring homomorphism corresponding to the group structure of the formal group scheme $\widehat{\cA}_{\sigma (\pi)}$. Note that since $s_1, \dots , s_g$ forms a regular system of parameters for the regular local ring $\sO_{{\cA}_{\sigma(\pi)},O_{{\sigma(\pi)},\eta}}$, the collection $\sF_{A, \sigma(\pi)} =(\sF_{A, \sigma(\pi),i})_{i=1}^g$ is a $g$-dimensional commutative formal group law over $K_{\sigma(\pi)}$, and hence one over $\cO_{K_{\sigma(\pi)}}$. (For details, see Lemma C.2.1 in Appendix C of \cite{HS}.) Let $\alpha \in \cO_K$ be any element, and $[\alpha]_{A, {\sigma(\pi)}}^{\sharp} \colon \sO_{\widehat{\cA}_{\sigma(\pi)}} \longrightarrow \sO_{\widehat{\cA}_{\sigma(\pi)}}$ the ring homomorphism corresponding to the multiplication-by-$\alpha$ endomorphism on the formal group scheme $\widehat{\cA}_{\sigma (\pi)}$. For each $i \in \Z$ with $1 \le i \le g$, we put \[ [\alpha]_{A, {\sigma(\pi)},i}(s_1 , \dots , s_g) := [\alpha]_{A, {\sigma(\pi)}}^\sharp (s_i) \in \sO_{\widehat{\cA}_{\sigma(\pi)}} = \cO_{K_{\sigma(\pi)}}[[s_1 , \dots , s_g]]. \] \begin{lem} \label{lemlocparam} There exists a regular system of parameters $s_{ 0}=p, s_{ 1}, \dots , s_{ g} \in \mathfrak{m}_{\sigma(\pi),s}$ such that for any $i \in \Z$ with $1 \le i \le g$ and any $\alpha \in \cO_K$, it holds that $s_i \in \mathfrak{n}$, and \[ [\alpha]_{A, {\sigma(\pi)},i}(s_1 , \dots , s_g) \equiv \phi_i (\alpha) s_i \mod \mathfrak{n}^2.
\] \end{lem} \begin{proof} Since we have a direct product decomposition \[ \cO_{K_{\sigma (\pi)}} \otimes_{\Z} \cO_K= \Z_p \otimes_{\Z} \cO_K= \varprojlim_{n} \cO_K/p^n \cO_K =\prod_{\tau \in \Gal (K/\Q)} \cO_{K_{\tau (\pi)}} \] by the Chinese remainder theorem, and by (\ref{eqstrCM}), (\ref{eqcoLie}) and (\ref{eqn/n2}), we can take an $\cO_{K_{\sigma (\pi)}}$-basis $\bar{s}_1, \cdots, \bar{s}_g$ of $\mathfrak{n}/\mathfrak{n}^2$ such that for each $i \in \Z$ with $1 \le i \le g$ and each $\alpha \in \cO_K$, the element $\bar{s}_i$ is an eigenvector of the multiplication-by-$\alpha$ map $[\alpha]$ with eigenvalue $\phi_i (\alpha)$. We take any lift $s_1, \cdots, s_g \in \mathfrak{n}$ of $\bar{s}_1, \cdots, \bar{s}_g$. Then the sequence $s_{ 0}=p, s_{ 1}, \dots , s_{ g}$ is the desired one. \end{proof} From now on, let the parameters $s_1, \dots , s_g$ be as in Lemma \ref{lemlocparam}. We define \[ \sF_{A, {\sigma(\pi)}} ( \sigma (\pi) \cO_{K_{\sigma (\pi)}})= \left( (\sigma (\pi) \cO_{K_{\sigma (\pi)}})^g, \sF_{A, {\sigma(\pi)}} \right)\] to be the set $(\sigma (\pi) \cO_{K_{\sigma (\pi)}})^g$ equipped with a group structure defined by the formal group law $\sF_{A, {\sigma(\pi)}}$. Note that by the scalar multiplication defined by the collection $([\alpha]_{A, {\sigma(\pi)},i})_{i=1}^g$ of the power series, we can regard $\sF_{A, {\sigma(\pi)}} ( \sigma( \pi ) \cO_{K_{\sigma (\pi)}})$ as an $\cO_K$-module. By Lemma \ref{lemlocparam}, the following holds. \begin{cor}\label{corformal} Let $\sigma \in \Gal (K/\Q)$ be any element. Then, we have \[ \sF_{A, {\sigma(\pi)}} \left( \sigma(\pi) \cO_{K_{\sigma (\pi)}} \right) \otimes_{\cO_{K}} \varinjlim_{n>0} \pi^{-n} \cO_{K}/\cO_{K} \simeq \begin{cases} \Q_p/\Z_p & (\text{if $\sigma \in \Phi$}), \\ 0 & (\text{if $\sigma \notin \Phi$}). \end{cases} \] \end{cor} Let $A_{0,\sigma(\pi)}$ be the special fiber of ${\cA}_{\sigma(\pi)}$.
Since $A$ has good reduction at $\sigma (\pi)$, we can define the reduction map \( \mathrm{red}_{A,\sigma(\pi)} \colon A(K_{\sigma (\pi)}) \longrightarrow A_{0,\sigma(\pi)}(k_{\sigma (\pi)}) \). \begin{lem}[For instance, see \cite{HS} Theorem C.2.6]\label{lemredformal} The $\cO_K$-module $\Ker \mathrm{red}_{A,\sigma(\pi)}$ is isomorphic to $\sF_{A, {\sigma(\pi)}} (\sigma(\pi) \cO_{K_{\sigma (\pi)}})$. \end{lem} \begin{rem} Note that \cite{HS} Theorem C.2.6 says that $\Ker \mathrm{red}_{A,\sigma(\pi)}$ is isomorphic to $\sF_A (\pi_v \cO_{K_v})$ only as a group, but it is easy to verify that the group isomorphism constructed in the proof of \cite{HS} Theorem C.2.6 preserves the scalar action of $\cO_K$. \end{rem} We define $T_\pi A:=\varprojlim_n A[\pi^n]$. Let $\Sigma(A)$ be the subset of $P_K$ consisting of all places dividing $p \infty$ and all places where $A$ has bad reduction. Since $T_\pi A$ is regarded as a $\Z_p$-submodule of $T_p A$, the action of $\Gal (\overline{K}/K)$ on $T_\pi A$ is unramified outside $\Sigma(A)$. Since $p$ splits completely in $K$, the $\Z_p$-module $T_\pi A$ is free of rank $2 \dim A/[K:\Q]=2/[ K : K^+ ]$. Note that $T_\pi A$ satisfies $\Abs$ and $\NT$ by our assumption. Take a finite place $v \in P_K$ above a prime number $\ell$. Since $H^1(K_v,A[\pi^\infty])$ is a direct summand of $H^1(K_v,A[p^\infty])$ consisting of elements annihilated by $\pi^n$ for some $n \in \Z_{\ge 0}$, Proposition \ref{corclvsfin} implies that \begin{align*} H^1_f(K_v,A[\pi^\infty])& =H^1_f(K_v,A[p^\infty]) \cap H^1(K_v,A[\pi^\infty]) =H^1_f(K_v,A[p^\infty]) [\pi^\infty] \\ & \simeq (A(K_v) \otimes_{\Z_p}\Q_p/\Z_p)[\pi^\infty] \simeq A(K_v) \otimes_{\cO_{K}} K_\pi/ \cO_{K_\pi}. \end{align*} By the isomorphisms (\ref{eqstrthm}) (for $v \nmid p$), Corollary \ref{corformal} and Lemma \ref{lemredformal}, the following holds. \begin{prop}\label{propfinclCM} Let $v \in P_K$ be a finite place. 
Then, we have \[ H^1_f(K_v,A[\pi^\infty])= \begin{cases} \Q_p/\Z_p & (\text{if $v= \sigma(\pi)$ for some $\sigma \in \Phi$}), \\ 0 & (\text{otherwise}). \end{cases} \] \end{prop} \begin{proof}[Proof of Corollary \ref{corCM}] Note that the Selmer group $\Sel_f(K,A[\pi^\infty])$ is the direct summand of the $\Z_p$-module $\Sel_f(K,A[p^\infty])$ consisting of elements annihilated by $\pi^n$ for some $n \in \Z_{\ge 0}$. So, we have \( \Sel_f(K,A[\pi^\infty]) \simeq \Sel_f(K,A[p^\infty]) \otimes_{\cO_K} K_\pi/ \cO_{K_\pi} \). Combining this with the short exact sequence (\ref{eqexseqSel}), we see that $\Sel_f(K,A[\pi^\infty])$ has a $\Z_p$-submodule isomorphic to $A(K) \otimes_{\cO_K} K_\pi/ \cO_{K_\pi}$. Thus it holds that \[ \corank_{\Z_p} \Sel_f(K,A[\pi^\infty]) \ge r_{\cO_K} (A) := \dim_K (A(K) \otimes_{\cO_K} K). \] Hence by Theorem \ref{thmmain} for $(T,\Sigma, \cF)=(T_\pi A, \Sigma (A), f)$ and Proposition \ref{propfinclCM}, we obtain the assertion of Corollary \ref{corCM}, that is, \( h_{n}(A;\pi) \succ 2 [K : K^+]^{-1} \left( r_{\cO_K} (A) -g \right) n \). \end{proof} \begin{rem} Here, let $p$ be an odd prime number, and consider the image of $G_{1,0}:=\Gal (K_1/K)$ in $\Aut_{\F_p}(A[\pi])$. First, let $K$ be a totally real field. By Banaszak--Gajda--Kraso\'n's work (\cite{BGK} Theorem 6.16 for the case where $\End_{\overline{K}} (A)=\cO_K$) it is known that the image of $G_{1,0}$ in $\Aut_{\F_p}(A[\pi])= \GL_2 (\F_p)$ contains $\SL_2 (\F_p)$ if $A$ is principally polarized, and if $A$ satisfies (i) and (iii) in Remark \ref{remnuimforab}. So, if $A$ satisfies the conditions (i) and (iii) in Remark \ref{remnuimforab}, then $T_\pi A$ satisfies $\Abs$ and $\NT$, and we can take $\nuim=0$ by Remark \ref{remnuimvanish}. Next, let $K$ be a CM field. Then $T_\pi A$ obviously satisfies the condition $\Abs$ since $\dim_{\F_p} A[\pi]=1$. Moreover, in this case $G_{1,0}=\Gal (K_1/K) \simeq \F_p^\times$. (See, for instance, \cite{Ro} Proposition 3.1.) 
So, in this situation, the $\pi$-adic Tate module $T_\pi A$ also satisfies the condition $\NT$, and we can take $\nuim =0$ by Remark \ref{remnuimvanish}. \end{rem}
TITLE: Can special relativity be deduced from $E=mc^2$? QUESTION [1 upvotes]: That is, instead of assuming that the velocity $c$ is a maximal velocity, can one prove this while assuming $E=mc^2$? REPLY [0 votes]: Here is a quick result that might interest you. Let's begin from the equation for relativistic energy in terms of the rest mass $m_0$, namely $E=\gamma m_0c^2$. Inserting $\gamma = \sqrt{\frac{1}{1-\frac{v^2}{c^2}}}$, we find: $\hspace{6cm}E^2-E^2\frac{v^2}{c^2}=(m_0)^2c^4.$ Since $p=\gamma m_0 v$ gives $E^2\frac{v^2}{c^2}=p^2c^2$, and $E_0=m_0c^2$, this becomes: $\hspace{6cm}E^2=(E_0)^2+p^2c^2,$ which is the equation that relates relativistic energy and momentum. So maybe the best we can say is that if you start from the equations $E^2=(E_0)^2+p^2c^2$ and $E=\gamma m_0c^2$, you can derive the Lorentz contraction.
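As a numerical cross-check of the algebra in the answer (a plain illustration with arbitrary sample values, not part of the original post), one can verify that $E=\gamma m_0c^2$ and $p=\gamma m_0v$ together imply $E^2=(m_0c^2)^2+(pc)^2$ at several speeds:

```python
import math

# E = gamma m0 c^2 and p = gamma m0 v imply E^2 = (m0 c^2)^2 + (p c)^2;
# check numerically at a few sample speeds (units and values are arbitrary).
def energy(m0, v, c):
    return m0 * c**2 / math.sqrt(1.0 - v**2 / c**2)

def momentum(m0, v, c):
    return m0 * v / math.sqrt(1.0 - v**2 / c**2)

m0, c = 2.0, 3.0
for v in (0.0, 1.0, 2.5, 2.999):
    E, p = energy(m0, v, c), momentum(m0, v, c)
    assert abs(E**2 - ((m0 * c**2)**2 + (p * c)**2)) < 1e-6 * E**2
```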
\begin{document} \begin{frontmatter} \title{A heat flow for the mean field equation on a finite graph} \author{Yong Lin} \ead{yonglin@mail.tsinghua.edu.cn} \address{Yau Mathematical Sciences Center, Tsinghua University, Beijing 100084, China} \author{Yunyan Yang\footnote{Corresponding author}} \ead{yunyanyang@ruc.edu.cn} \address{Department of Mathematics, Renmin University of China, Beijing 100872, P. R. China} \begin{abstract} Inspired by works of Cast\'eras (Pacific J. Math., 2015), Li-Zhu (Calc. Var., 2019) and Sun-Zhu (Calc. Var., 2020), we propose a heat flow for the mean field equation on a connected finite graph $G=(V,E)$. Namely $$\le\{\begin{array}{lll} \p_t\phi(u)=\Delta u-Q+\rho \f{e^u}{\int_Ve^ud\mu}\\[1.5ex] u(\cdot,0)=u_0, \end{array}\ri.$$ where $\Delta$ is the standard graph Laplacian, $\rho$ is a real number, $Q:V\ra\mathbb{R}$ is a function satisfying $\int_VQd\mu=\rho$, and $\phi:\mathbb{R}\ra\mathbb{R}$ is one of certain smooth functions including $\phi(s)=e^s$. We prove that for any initial data $u_0$ and any $\rho\in\mathbb{R}$, there exists a unique solution $u:V\times[0,+\infty)\ra\mathbb{R}$ of the above heat flow; moreover, $u(x,t)$ converges to some function $u_\infty:V\ra\mathbb{R}$ uniformly in $x\in V$ as $t\ra+\infty$, and $u_\infty$ is a solution of the mean field equation $$\Delta u_\infty-Q+\rho\f{e^{u_\infty}}{\int_Ve^{u_\infty}d\mu}=0.$$ Though $G$ is a finite graph, this result is still unexpected, even in the special case $Q\equiv 0$. Our approach reads as follows: the short time existence of the heat flow follows from the ODE theory; various integral estimates give its long time existence; moreover we establish a Lojasiewicz-Simon type inequality and use it to conclude the convergence of the heat flow. 
\end{abstract} \begin{keyword} Heat flow on graph\sep the Lojasiewicz-Simon inequality\sep mean field equation \MSC[2010] 35R02\sep 34B45 \end{keyword} \end{frontmatter} \titlecontents{section}[0mm] {\vspace{.2\baselineskip}} {\thecontentslabel~\hspace{.5em}} {} {\dotfill\contentspage[{\makebox[0pt][r]{\thecontentspage}}]} \titlecontents{subsection}[3mm] {\vspace{.2\baselineskip}} {\thecontentslabel~\hspace{.5em}} {} {\dotfill\contentspage[{\makebox[0pt][r]{\thecontentspage}}]} \setcounter{tocdepth}{2} \section{Introduction} Let us start with the mean field equation on a closed Riemann surface $(\Sigma,g)$, which says \be\label{Mean-0}-\Delta_g u+Q=\rho\f{e^u}{\int_\Sigma e^udv_g},\ee where $\rho\in \mathbb{R}$ is a number, $Q:\Sigma\ra \mathbb{R}$ is a smooth function with $\int_\Sigma Qdv_g=\rho$, and $\Delta_g$ is the Laplacian operator with respect to the metric $g$. This equation arises in various topics such as conformal geometry \cite{KZ}, statistical mechanics \cite{Caglioti}, and the abelian Chern-Simons-Higgs model \cite{Tarantello,Caffarelli, Ding-2}. The existence of solutions of (\ref{Mean-0}) has been extensively investigated for several decades. Landmark achievements have been obtained for the case $\rho\not=8k\pi$, $k\in\mathbb{N}$, \cite{Brezis-Merle,Chen-Lin,Ding-3,Li1999, Li-Shafrir,Malchiodi2008,Struwe-Tarantello,Djadli2008}, and for the case $\rho=8\pi$ \cite{Ding-1}. In 2015, Cast\'eras \cite{Casteras1,Casteras2} proposed and studied the following parabolic equation \be\label{flow-0}\le\{\begin{array}{lll} \f{\p}{\p t}e^u=\Delta_g u-Q+\rho\f{e^u}{\int_\Sigma e^udv_g}\\[1.5ex] u(x,0)=u_0(x), \end{array}\ri.\ee where $u_0\in C^{2,\alpha}(\Sigma)$, $0<\alpha<1$, is the initial data, $\Delta_g$, $Q$ and $\rho$ are described as in (\ref{Mean-0}). 
It is a gradient flow for the energy functional $J_\rho: W^{1,2}(\Sigma,g)\ra\mathbb{R}$ defined by $$J_\rho(u)=\f{1}{2}\int_\Sigma |\nabla_g u|^2dv_g+\int_\Sigma Q udv_g-\rho\log\int_\Sigma e^udv_g,$$ where $\nabla_g$ is the gradient operator with respect to the metric $g$. It was proved by Cast\'eras that for any $\rho\not=8k\pi$, $k=1,2,\cdots$, there exists some initial data $u_0$ such that $u(\cdot,t)$ converges to a function $u_\infty$ in $W^{2,2}(\Sigma)$, where $u_\infty$ is a solution of the mean field equation (\ref{Mean-0}); For $\rho=8\pi$, a sufficient condition for convergence of the flow (\ref{flow-0}) was given by Li-Zhu \cite{Li-Zhu}. This gives a new proof of the result of Ding-Jost-Li-Wang \cite{Ding-1}, which was extended by Chen-Lin \cite{Chen-Lin2} to a general critical case, and generalized by Yang-Zhu \cite{Yang-Zhu} to a non-negative prescribed function case. Recently, using a more refined analysis, Sun-Zhu \cite{Sun-Zhu} studied a modified version of (\ref{flow-0}), i.e. the parabolic equation \be\label{flow-1}\le\{\begin{array}{lll} \f{\p}{\p t}e^u=\Delta_g u-\f{8\pi}{{\rm Area}(\Sigma)}+8\pi\f{he^u}{\int_\Sigma he^udv_g}\\[1.5ex] u(x,0)=u_0(x), \end{array}\ri.\ee where $h(x)\geq 0$, $h\not\equiv 0$ on $\Sigma$, and ${\rm Area}(\Sigma)=\int_\Sigma dv_g$ denotes the area of $\Sigma$. Clearly this is another method of proving the result in \cite{Yang-Zhu}. In this paper, we are concerned with the mean field equation on a finite graph. Let us fix some notations. Assume $G=(V,E)$ is a finite graph, where $V$ denotes the vertex set and $E$ denotes the edge set. For any edge $xy\in E$, we assume that its weight $w_{xy}>0$ and that $w_{xy}=w_{yx}$. Let $\mu:V\ra \mathbb{R}^+$ be a finite measure. For any function $u:V\ra \mathbb{R}$, the Laplacian of $u$ is defined as \be\label{lap}\Delta u(x)=\f{1}{\mu(x)}\sum_{y\sim x}w_{xy}(u(y)-u(x)),\ee where $y\sim x$ means $xy\in E$, or $y$ and $x$ are adjacent. 
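As a quick sanity check of the definition (\ref{lap}), the following sketch (the graph, weights and measure are made-up toy data, not from the paper) computes the graph Laplacian on a weighted path and verifies that $\Delta u$ always integrates to zero against $\mu$, since each weighted difference appears twice with opposite signs:

```python
# Toy graph Laplacian: Delta u(x) = (1/mu(x)) sum_{y~x} w_xy (u(y) - u(x)).
V = ['x1', 'x2', 'x3']                              # path x1 - x2 - x3
w = {('x1', 'x2'): 1.0, ('x2', 'x3'): 2.0}          # symmetric edge weights
mu = {'x1': 1.0, 'x2': 2.0, 'x3': 1.0}              # vertex measure

def weight(x, y):
    # each edge is stored once; look it up in either orientation
    return w.get((x, y), 0.0) + w.get((y, x), 0.0)

def laplacian(u):
    return {x: sum(weight(x, y) * (u[y] - u[x]) for y in V) / mu[x] for x in V}

def integral(g):
    """int_V g dmu = sum_{x in V} mu(x) g(x)."""
    return sum(mu[x] * g[x] for x in V)

u = {'x1': 0.0, 'x2': 1.0, 'x3': 3.0}
# int_V (Delta u) dmu = 0: the identity used repeatedly in the paper
assert abs(integral(laplacian(u))) < 1e-12
```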
The associated gradient form reads $$\Gamma(u,v)(x)=\f{1}{2\mu(x)}\sum_{y\sim x}w_{xy}(u(y)-u(x))(v(y)-v(x)).$$ Write $\Gamma(u)=\Gamma(u,u)$. We denote the length of its gradient by \be\label{gr}|\nabla u|(x)=\sqrt{\Gamma(u)(x)}=\le(\f{1}{2\mu(x)}\sum_{y\sim x}w_{xy}(u(y)-u(x))^2\ri)^{1/2}.\ee For any function $g:V\ra\mathbb{R}$, an integral of $g$ over $V$ is defined by \be\label{int}\int_V gd\mu=\sum_{x\in V}\mu(x)g(x).\ee Let $W^{1,2}(V)$ be a Sobolev space including all real functions $u$ with the norm $$\|u\|_{W^{1,2}(V)}=\le(\int_V(|\nabla u|^2+u^2)d\mu\ri)^{1/2}.$$ As an analog of (\ref{Mean-0}), the mean field equation on the finite graph $G$ reads as \be\label{Mean-1}-\Delta u+Q=\rho\f{e^u}{\int_V e^ud\mu},\ee where $\rho$ is a real number, $Q:V\ra \mathbb{R}$ is a function with $\int_V Qd\mu=\rho$, and $\Delta$ is the graph Laplacian with respect to the measure $\mu$ as in (\ref{lap}). The equation (\ref{Mean-1}) can be viewed as a discrete version of (\ref{Mean-0}). Let $\phi: \mathbb{R}\ra\mathbb{R}$ be a $C^1$ function. We propose the following heat flow \be\label{heat-flow}\le\{\begin{array}{lll} \f{\p}{\p t}\phi(u)=\Delta u-Q+\rho\f{e^u}{\int_V e^ud\mu}\\[1.5ex] u(x,0)=u_0(x),\,\,\, x\in V. \end{array}\ri. \ee This is an analog of (\ref{flow-1}). Obviously it is a gradient flow for the functional $J_\rho: W^{1,2}(V)\ra \mathbb{R}$, which is defined as \be\label{functional}J_\rho(u)=\f{1}{2}\int_V|\nabla u|^2d\mu+\int_VQud\mu-\rho \log\int_Ve^ud\mu,\ee where the notations (\ref{gr}) and (\ref{int}) are used. Our main result is stated as follows: \begin{theorem}\label{thm1} Let $G=(V,E)$ be a connected finite graph. Suppose $\phi:\mathbb{R}\ra\mathbb{R}$ is a $C^1$ function satisfying \be\label{phi} \lim_{s\ra-\infty}\phi(s)=0,\quad \phi^\prime(s)>0\,\,{\rm for\,\,all}\,\, s\in\mathbb{R},\quad \inf_{s\in[0,+\infty)}\phi^\prime(s)>0. \ee Let $\rho$ be any real number, and $Q$ be any function with $\int_VQd\mu=\rho$. 
Then for any initial function $u_0:V\ra\mathbb{R}$, we have the following assertions:\\ $(i)$ there exists a unique solution $u:V\times[0,\infty)\ra\mathbb{R}$ of the heat flow (\ref{heat-flow});\\ $(ii)$ there exists some function $u_\infty:V\ra\mathbb{R}$ such that $u(\cdot,t)$ converges to $u_\infty$ uniformly in $x\in V$ as $t\ra+\infty$; moreover $u_\infty$ is a solution of the mean field equation (\ref{Mean-1}). \end{theorem} There are two interesting special cases of results in Theorem \ref{thm1} as follows: \begin{corollary}\label{Cor} Let $G=(V,E)$ and $\phi$ be as in Theorem \ref{thm1}. If $\int_Vfd\mu=0$, then for any initial function $u_0:V\ra\mathbb{R}$, the heat flow $$\le\{\begin{array}{lll} \f{\p}{\p t}\phi(u)=\Delta u-f\\[1.5ex] u(\cdot,0)=u_0 \end{array}\ri.$$ has a solution $u:V\times[0,+\infty)\ra\mathbb{R}$. Moreover, there exists some function $u^\ast$ such that $u(\cdot,t)$ converges to $u^\ast$ as $t\ra+\infty$ uniformly in $x\in V$, and that $u^\ast$ satisfies $$\le\{\begin{array}{lll} \Delta u^\ast=f\\[1.5ex] \int_V\phi{(u^\ast)}d\mu=\int_V\phi(u_0)d\mu.&& \end{array}\ri.$$ \end{corollary} \begin{corollary}\label{Cor-2} Let $G=(V,E)$ and $\phi$ be as in Theorem \ref{thm1}. Then for any initial function $u_0:V\ra\mathbb{R}$, the heat flow $$\le\{\begin{array}{lll} \f{\p}{\p t}\phi(u)=\Delta u\\[1.5ex] u(\cdot,0)=u_0 \end{array}\ri.$$ has a solution $u:V\times[0,+\infty)\ra\mathbb{R}$; moreover, as $t\ra+\infty$, $u(\cdot,t)$ converges to a constant $c$ uniformly in $x\in V$, in particular $$\phi(c)=\f{1}{|V|}\int_V\phi(u_0)d\mu.$$ \end{corollary} Obviously there are infinitely many examples of $\phi$ in Theorem \ref{thm1}. A typical example is $$\phi(s)=\le\{\begin{array}{lll} e^{\alpha s}+\beta s^p&{\rm when}& s>0\\[1.5ex] e^{\alpha s}&{\rm when}& s\leq 0, \end{array}\ri.$$ where $\alpha>0$, $\beta\geq 0$ and $p>1$ are constants. 
Another example is given, for any real number $a>1$, by $$\phi(s)=\le\{\begin{array}{lll} s^2+(\log a)(s+\cos s-1)+1&{\rm when}& s>0\\[1.5ex] a^{s}&{\rm when}& s\leq 0. \end{array}\ri.$$ Though $G=(V,E)$ is a finite graph, the results in Theorem \ref{thm1} are quite unexpected, even in the special cases $\rho=0$ and $Q\equiv 0$ (Corollaries \ref{Cor} and \ref{Cor-2}). As for the proof, we follow a line of thought from Simon \cite{Simon}, Jendoubi \cite{Jendoubi}, Cast\'eras \cite{Casteras1,Casteras2}, Li-Zhu \cite{Li-Zhu}, and Sun-Zhu \cite{Sun-Zhu}. Firstly, we use ODE theory to conclude the short time existence of the heat flow (\ref{heat-flow}). Secondly, we obtain the global existence of the flow by establishing a uniform bound on $\|u(\cdot,t)\|_{W^{1,2}(V)}$ for all time $t$. This allows us to select a sequence of times $(t_n)\ra+\infty$ such that $u(\cdot,t_n)$ converges to some function $u_\infty$ uniformly in $V$, where $u_\infty$ is a solution of the mean field equation (\ref{Mean-1}). Thirdly, we establish a Lojasiewicz-Simon type inequality along the heat flow by employing an estimate due to Lojasiewicz (\cite{Lojasiewitz}, Theorem 4, page 88), namely \begin{lemma}\label{Lojasie}{\rm (Lojasiewicz, 1963).} Let $\Gamma:\mathbb{R}^\ell\ra\mathbb{R}$ be an analytic function in a neighborhood of a point $\mathbf{a}\in\mathbb{R}^\ell$ with $\nabla\Gamma(\mathbf{a})=\mathbf{0}\in\mathbb{R}^\ell$. Then there exist $\sigma>0$ and $0<\theta<1/2$ such that $$\|\nabla \Gamma(\mathbf{y})\|\geq |\Gamma(\mathbf{y})-\Gamma(\mathbf{a})|^{1-\theta},\quad \forall \mathbf{y}\in\mathbb{R}^\ell,\,\,\|\mathbf{y}-\mathbf{a}\|<\sigma, $$ where $\nabla \Gamma(\mathbf{y})=({\p_{y_1}}\Gamma(\mathbf{y}),\cdots,\p_{y_\ell}\Gamma(\mathbf{y}))$, and $\|\cdot\|$ stands for the standard norm of $\mathbb{R}^\ell$. \end{lemma} \noindent Finally, we conclude the uniform convergence of $u(\cdot,t)$ to $u_\infty$ as $t\ra+\infty$ with the help of the above Lojasiewicz-Simon type inequality. 
Since the graph $G$ is finite, this inequality seems much simpler than its counterparts in \cite{Simon,Jendoubi,Casteras1,Casteras2,Li-Zhu,Sun-Zhu}. Moreover, in our case, all integral estimates are concise and easy to follow. Note that if the mean field equation (\ref{Mean-1}) has a solution, so does the Kazdan-Warner equation \cite{GLY1}. For equations of this kind, see for example \cite{Ge,Keller-Schwarz,Huang-Lin-Yau,Liu-Yang,Sun-Wang,Zhu}. According to \cite{GLY1}, its solvability needs some assumptions. In contrast, Theorem \ref{thm1} implies that (\ref{Mean-1}) is solvable for any real number $\rho$. One may ask whether or not these two conclusions are consistent. Let us answer this question. Suppose that $u_\infty$ is given as in Theorem \ref{thm1} and satisfies the mean field equation (\ref{Mean-1}). Clearly $v=u_\infty-\log\int_Ve^{u_\infty}d\mu$ is a solution of the Kazdan-Warner equation \be\label{K-W}\Delta v=Q-\rho e^v.\ee By the assumption in Theorem \ref{thm1}, $\int_VQd\mu=\rho$. If we assume $Q\equiv c$ for some constant $c$, then $\rho=c|V|$, where $|V|=\sum_{x\in V}\mu(x)$ denotes the volume of the graph. It follows from (\cite{GLY1}, Theorems 2-4) that if $c>0$ or $c<0$, then (\ref{K-W}) has a solution. This implies that the results of Theorem \ref{thm1} and those of \cite{GLY1} are consistent and do not contradict each other. For the flow on infinite graphs, partial existence results for the mean field equation were obtained by Ge-Jiang \cite{Ge-Jiang}. There is also the possibility of solving problems in \cite{GLY2,GLY3,Han-Shao-Zhao,Hou,Lin-Yang,Man,Zhang-Zhao} by the heat flow method. Throughout this paper, we often denote various constants by the same $C$ from line to line, even in the same line. 
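The flow (\ref{heat-flow}) is easy to test numerically on a toy graph. The following sketch (illustrative code with made-up data; the paper itself works abstractly) runs an explicit Euler discretization with $\phi(s)=e^s$ on a path with three vertices and checks the three phenomena used in the proof: $\int_V\phi(u)\,d\mu$ is (approximately) conserved, $J_\rho$ decreases, and $u(\cdot,t)$ approaches a solution of (\ref{Mean-1}).

```python
import math

# Explicit Euler for d/dt phi(u) = Delta u - Q + rho e^u / int_V e^u dmu,
# phi(s) = e^s, on a path graph x0 - x1 - x2 (all data made up for illustration).
V = [0, 1, 2]
nbrs = {0: [1], 1: [0, 2], 2: [1]}           # unit edge weights
mu = [1.0, 1.0, 1.0]
rho = 1.0
Q = [rho / 3.0] * 3                          # so that int_V Q dmu = rho

def M(u):
    """M(u)(x) = Delta u(x) - Q(x) + rho e^{u(x)} / int_V e^u dmu."""
    Z = sum(mu[x] * math.exp(u[x]) for x in V)
    return [sum(u[y] - u[x] for y in nbrs[x]) / mu[x] - Q[x]
            + rho * math.exp(u[x]) / Z for x in V]

def J(u):
    """J_rho(u) = (1/2) int |grad u|^2 dmu + int Q u dmu - rho log int e^u dmu."""
    dirichlet = 0.25 * sum((u[y] - u[x]) ** 2 for x in V for y in nbrs[x])
    Z = sum(mu[x] * math.exp(u[x]) for x in V)
    return dirichlet + sum(mu[x] * Q[x] * u[x] for x in V) - rho * math.log(Z)

u = [0.5, -0.3, 0.1]
inv0, J0 = sum(mu[x] * math.exp(u[x]) for x in V), J(u)
dt = 0.01
for _ in range(20000):                       # phi'(u) = e^u, so u_t = M(u) / e^u
    m = M(u)
    u = [u[x] + dt * m[x] / math.exp(u[x]) for x in V]

assert max(abs(r) for r in M(u)) < 1e-8                       # u solves (Mean-1)
assert J(u) < J0                                              # J_rho decreased
assert abs(sum(mu[x] * math.exp(u[x]) for x in V) - inv0) < 0.05 * inv0  # invariant
```

The conserved quantity drifts only at the discretization error of the explicit Euler scheme; a higher-order or implicit integrator would track it more accurately.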
The rest of this paper is organized as follows: In Section 2, we prove the short time existence of the heat flow; in Section 3, we show that the heat flow exists for all time $t\in[0,+\infty)$; in Section 4, we establish a Lojasiewicz-Simon type inequality and use it to prove the uniform convergence of the heat flow as $t\ra+\infty$. As a consequence, the proof of Theorem \ref{thm1} is finished. \section{Short time existence}\label{sec2} In this section, using the theory of ordinary differential equations, we prove that the heat flow (\ref{heat-flow}) admits a solution on a short time interval. We also establish several properties of the heat flow. Since $G=(V,E)$ is a finite graph, we assume without loss of generality that $V=\{x_1,\cdots,x_\ell\}$ for some integer $\ell\geq 1$. Then any function $u:V\ra\mathbb{R}$ can be represented by $\mathbf{y}=(y_1,\cdots,y_\ell)\in\mathbb{R}^\ell$ with $y_j=u(x_j)$ for $1\leq j\leq \ell$; moreover, we denote \be\label{Mu}\mathcal{M}(u)=\Delta u-Q+\f{\rho e^u}{\int_Ve^{u}d\mu}\ee and a map $\mathcal{F}:\mathbb{R}^\ell\ra\mathbb{R}^\ell$ by $\mathcal{F}(\mathbf{y})=(f_1(\mathbf{y}),\cdots,f_\ell(\mathbf{y}))$, where $f_j(\mathbf{y})=\mathcal{M}(u)(x_j)$ for $1\leq j\leq\ell$. Then the equation (\ref{heat-flow}) is equivalent to the system of ordinary differential equations \be\label{ODEsystem}\le\{\begin{array}{lll} \f{d}{dt}\phi(y_1)&=&f_1(\mathbf{y})\\ &\vdots&\\ \f{d}{dt}\phi(y_\ell)&=&f_\ell(\mathbf{y})\\[1.2ex] \mathbf{y}(0)&=&\mathbf{y}_0, \end{array}\ri.\ee where $\mathbf{y}_0=(u_0(x_1),\cdots,u_0(x_\ell))$ is the initial data. For the map $\mathcal{F}$, we have the following: \begin{lemma}\label{analytic} The map $\mathcal{F}:\mathbb{R}^\ell\ra\mathbb{R}^\ell$ is analytic. 
\end{lemma} \proof At $\mathbf{y}=(u(x_1),\cdots,u(x_\ell))$, we write $$f_j(\mathbf{y})=\mathcal{M}(u)(x_j)=\f{1}{\mu(x_j)}\sum_{z\sim x_j}w_{zx_j}(u(z)-u(x_j))- Q(x_j)+\f{\rho e^{u(x_j)}}{\sum_{i=1}^\ell\mu(x_i)e^{u(x_i)}}.$$ Replacing $(u(x_1),\cdots,u(x_\ell))$ by $\mathbf{y}$ on the right-hand side of the above equality, we see that each $f_j$ is analytic, $j=1,\cdots,\ell$. $\hfill\Box$\\ Concerning the short time existence of solutions of (\ref{heat-flow}), we obtain the following. \begin{lemma}\label{short-time} There exists some constant $T^\ast>0$ such that (\ref{heat-flow}) has a solution $u:V\times [0,T^\ast]\ra\mathbb{R}$. \end{lemma} \proof By the short time existence theorem for ordinary differential equations (\cite{ODE}, page 250), there exist some $T^\ast>0$ and a $C^1$ map $\mathbf{y}:[0,T^\ast]\ra \mathbb{R}^\ell$ such that $\mathbf{y}(t)$ satisfies (\ref{ODEsystem}). Define $u(x_j,t)=y_j(t)$ for $1\leq j\leq \ell$. Then $u:V\times [0,T^\ast] \ra\mathbb{R}$ is a solution of (\ref{heat-flow}). $\hfill\Box$\\ For any $\rho\in\mathbb{R}$, let $J_\rho: W^{1,2}(V)\ra\mathbb{R}$ be the functional defined as in (\ref{functional}). One can easily see that (\ref{heat-flow}) is the negative gradient flow of $J_\rho$. 
In particular \be\label{dJ}\langle dJ_\rho(u(\cdot,t)),\psi\rangle=-\int_V\mathcal{M}(u(\cdot,t))\psi d\mu,\quad\forall\psi\in W^{1,2}(V).\ee Along the heat flow (\ref{heat-flow}), there are two important quantities: one is invariant, the other is monotone, namely \begin{lemma}\label{prop1} $(i)$ For all $t\in[0,T^\ast]$, we have an invariant quantity $$\int_V\phi(u(\cdot,t))d\mu=\int_V\phi(u_0)d\mu.$$ $(ii)$ $J_\rho(u(\cdot,t))$ is nonincreasing in $t$; in particular, if $0\leq t_1< t_2\leq T^\ast$, then $$J_\rho(u(\cdot,t_2))\leq J_\rho(u(\cdot,t_1)).$$ \end{lemma} \proof Since $u(x,t)$ is a solution of (\ref{heat-flow}), we compute \bna \f{d}{dt}\int_V\phi(u(\cdot,t))d\mu&=&\int_V\phi^\prime(u)u_td\mu\\ &=&\int_V\le(\Delta u-Q+\f{\rho e^u}{\int_Ve^ud\mu}\ri)d\mu\\ &=&0. \ena This immediately implies the assertion $(i)$. Integrating by parts, \bea\nonumber \f{d}{dt}J_\rho(u(\cdot,t))&=&\int_V\nabla u\nabla u_td\mu+\int_VQu_td\mu- \f{\rho }{\int_Ve^{u}d\mu}\int_Ve^{u}u_td\mu\\\nonumber &=&-\int_V\mathcal{M}(u)u_td\mu\\\label{deriv} &=&-\int_V\phi^\prime(u)u_t^2d\mu\leq 0, \eea since $\phi^\prime(s)>0$ for all $s\in\mathbb{R}$. Here we denote $u_t={\p}u/{\p t}$. This concludes the assertion $(ii)$. $\hfill\Box$ \section{Long time existence} In this section, we prove the long time existence of the heat flow (\ref{heat-flow}). By Lemma \ref{short-time}, there exists some $T^\ast>0$ such that (\ref{heat-flow}) has a solution $u:V\times [0,T^\ast]\ra \mathbb{R}$. Let \be\label{T}T=\sup\le\{T^\ast>0: u:V\times[0,T^\ast]\ra\mathbb{R}\,\, {\rm solves\,\,} (\ref{heat-flow})\ri\}.\ee Clearly, (\ref{heat-flow}) has a solution $u:V\times[0,T)\ra\mathbb{R}$. \begin{proposition}\label{prop2} Let $T$ be defined as in (\ref{T}). Then there exists some constant $C$ independent of $T$ such that $$\|u(\cdot,t)\|_{W^{1,2}(V)}\leq C,\quad\forall t\in[0,T).$$ \end{proposition} \proof We divide the proof into several steps. {\bf Step 1}. 
{\it There exists some constant $C$ independent of $T$ such that for all $x\in V$ and $t\in[0,T)$, $$u(x,t)\leq C.$$} By (\ref{phi}), $\phi^\prime(s)>0$ for all $s\in\mathbb{R}$ and there exists some constant $a>0$ such that $\phi^\prime(s)\geq a>0$ for all $s\in[0,+\infty)$. We claim that $\phi(s)\geq as$ for all $s\in \mathbb{R}$. Indeed, the mean value theorem implies $$\phi(s)-\phi(0)=\phi^\prime(\xi)s,$$ where $\xi$ lies between $0$ and $s$. Hence $\phi(s)\geq as$ for all $s\geq 0$, since $\phi(0)>0$ and $\phi^\prime(\xi)\geq a$ for $\xi\geq 0$. Obviously $\phi(s)\geq as$ for all $s< 0$, since $\phi(s)>0$ for all $s\in\mathbb{R}$ (note that $\phi^\prime>0$ and $\phi(s)\ra 0$ as $s\ra-\infty$ force $\phi>0$ everywhere). This together with $(i)$ of Lemma \ref{prop1} leads to \bna u(x,t)&\leq&\f{1}{a}\phi(u(x,t)) \\ &\leq&\f{1}{a\min_{x\in V}\mu(x)}\int_V\phi(u(\cdot,t))d\mu\\ &=&\f{1}{a\min_{x\in V}\mu(x)}\int_V\phi(u_0)d\mu\ena for all $x\in V$. This finishes the first step.\\ {\bf Step 2}. {\it There exists a constant $C$ independent of $T$ such that for any $t\in [0,T)$, one finds a subset $A_t\subset V$ satisfying $\|u(\cdot,t)\|_{L^\infty(A_t)}\leq C$ and $|A_t|\geq C^{-1}$.}\\ For any $\epsilon>0$ and $t\in[0,T)$, we define a set $$V_{\epsilon,t}=\le\{x\in V: \phi(u(x,t))<\epsilon\ri\}.$$ This together with $(i)$ of Lemma \ref{prop1} and Step 1 leads to \bea\nonumber \int_V\phi(u_0)d\mu&=&\int_V\phi(u(\cdot,t))d\mu\\\nonumber &=&\int_{V_{\epsilon,t}}\phi(u(\cdot,t))d\mu+\int_{V\setminus V_{\epsilon,t}}\phi(u(\cdot,t))d\mu\\ &\leq&\epsilon|V|+\phi(C)|V\setminus V_{\epsilon,t}|.\label{11} \eea Taking $\epsilon=\epsilon_0=\f{1}{2|V|}\int_V\phi(u_0)d\mu$, we conclude from (\ref{11}) that \be\label{Vt}|V\setminus V_{\epsilon_0,t}|\geq \f{1}{2\phi(C)}\int_V\phi(u_0)d\mu.\ee Set $A_t=V\setminus V_{\epsilon_0,t}$. For any $x\in A_t$, there holds $\phi(u(x,t))\geq \epsilon_0$. Since $\phi(s)\ra 0$ as $s\ra-\infty$, we find some real number $b$ such that $\phi(b)=\epsilon_0$. It follows that $u(x,t)\geq b$ for all $x\in A_t$. 
This together with Step 1 leads to $$\|u(\cdot,t)\|_{L^\infty(A_t)}\leq C$$ If $C$ is chosen larger but independent of $T$, then we have by (\ref{Vt}) that $|A_t|\geq C^{-1}$. \\ {\bf Step 3}. {\it There exists a positive constant $C$ independent of $T$ such that for all $t\in[0,T)$, there holds $$\int_Vu^2(\cdot,t)d\mu\leq C\int_V|\nabla u(\cdot,t)|^2d\mu+C.$$} Recalling the definition of the first eigenvalue of the negative Laplacian, namely $$\lambda_1=\inf_{v\in W^{1,2}(V),\,\int_Vvd\mu=0,\,v\not\equiv 0}\f{\int_V|\nabla v|^2d\mu}{\int_Vv^2d\mu}>0,$$ we obtain for any $v\in W^{1,2}(V)$, \bea\nonumber\int_Vv^2d\mu&=&\int_V(v-\overline{v})^2d\mu+\int_V\overline{v}^2d\mu\\&\leq& \f{1}{\lambda_1}\int_V|\nabla v|^2d\mu+\f{1}{|V|}\le(\int_Vvd\mu\ri)^2,\label{poincare}\eea where $\overline{v}=\f{1}{|V|}\int_Vvd\mu$. By Step 2, one calculates along the heat flow (\ref{heat-flow}), \bea\nonumber \f{1}{|V|}\le(\int_Vu(\cdot,t)d\mu\ri)^2&=&\f{1}{|V|}\le(\int_{A_t}u(\cdot,t)d\mu+\int_{V\setminus A_t}u(\cdot,t)d\mu\ri)^2\\ \nonumber&=&\f{1}{|V|}\le(\int_{A_t}u(\cdot,t)d\mu\ri)^2+\f{1}{|V|}\le(\int_{V\setminus A_t}u(\cdot,t)d\mu\ri)^2\\\nonumber&&+\f{2}{|V|}\int_{A_t}u(\cdot,t)d\mu\int_{V\setminus A_t}u(\cdot,t)d\mu\\ \nonumber&\leq&\f{C^2|A_t|^2}{|V|}+\f{1}{|V|}\le(\int_{V\setminus A_t}u(\cdot,t)d\mu\ri)^2\\\label{est-1} &&+\f{C^2|A_t|^2}{\epsilon |V|}+\f{\epsilon}{|V|}\le(\int_{V\setminus A_t}u(\cdot,t)d\mu\ri)^2, \eea where $\epsilon$ is a positive constant to be determined later. Using the H\"older inequality, one has $$\le(\int_{V\setminus A_t}u(\cdot,t)d\mu\ri)^2\leq |V\setminus A_t|\int_Vu^2(\cdot,t)d\mu.$$ This together with (\ref{poincare}) and (\ref{est-1}) implies \be\int_Vu^2(\cdot,t)d\mu\leq \f{1}{\lambda_1}\int_V|\nabla u(\cdot,t)|^2d\mu+ \f{(1+\epsilon)|V\setminus A_t|}{|V|}\int_Vu^2(\cdot,t)d\mu+{C^2|V|}\le(1+\f{1}{\epsilon}\ri).\label{est-2}\ee By Step 2, we have $|A_t|\geq C^{-1}$ with $0<C^{-1}<|V|$. 
Taking $\epsilon=(2C)^{-1}/(|V|-C^{-1})$ in (\ref{est-2}), one gets $(1+\epsilon)|V\setminus A_t|/|V|\leq 1-(2C|V|)^{-1}$, and thus $$(2C|V|)^{-1}\int_Vu^2(\cdot,t)d\mu\leq \f{1}{\lambda_1}\int_V|\nabla u(\cdot,t)|^2d\mu+ C.$$ This completes the proof of Step 3.\\ {\bf Step 4}. {\it There exists a constant $C$ independent of $T$ such that $\|u(\cdot,t)\|_{W^{1,2}(V)}\leq C$ for all $t\in[0,T)$.}\\ It follows from the Poincar\'e inequality and the Young inequality that \be\label{ep-1}\int_V|u(\cdot,t)-\overline{u}(t)|d\mu\leq \epsilon\int_V|\nabla u(\cdot,t)|^2d\mu+C,\ee where $\epsilon>0$ is chosen later, $C$ is a constant depending on $\epsilon$, but independent of $T$. For any fixed $\rho\in\mathbb{R}$, in view of (\ref{functional}), we have by using (\ref{ep-1}), \bea\nonumber J_\rho(u(\cdot,t))&=&J_\rho(u(\cdot,t)-\overline{u}(t))\\\nonumber &=&\f{1}{2}\int_V|\nabla u(\cdot,t)|^2d\mu+\int_VQ(u(\cdot,t)-\overline{u}(t))d\mu\\\nonumber &&\quad-\rho\log\le(\int_Ve^{u(\cdot,t)-\overline{u}(t)}d\mu\ri)\\\label{tot} &\geq&\le(\f{1}{2}-\epsilon\ri)\int_V|\nabla u(\cdot,t)|^2d\mu-C-\rho\log \int_Ve^{u(\cdot,t)-\overline{u}(t)}d\mu. \eea As in Section \ref{sec2}, we write $V=\{x_1,\cdots,x_\ell\}$. Let $\theta_i=\mu(x_i)/|V|$ and $s_i=u(x_i,t)$, $1\leq i\leq \ell$. Obviously $0<\theta_i<1$ for any $i$ and $\sum_{i=1}^\ell\theta_i=1$. Since $e^s$ is convex in $s\in\mathbb{R}$, we have \bna \f{1}{|V|}\int_Ve^{u(\cdot,t)}d\mu &=&\sum_{i=1}^\ell\f{\mu(x_i)}{|V|}e^{u(x_i,t)}\\ &=&\sum_{i=1}^\ell\theta_i e^{s_i}\\ &\geq&e^{\sum_{i=1}^\ell\theta_is_i}\\ &=&e^{\overline{u}(t)}, \ena where $\overline{u}(t)=\f{1}{|V|}\int_Vu(\cdot,t)d\mu$. 
This immediately gives for $t\in[0,T)$, \be\label{et-01}\log\int_Ve^{u(\cdot,t)-\overline{u}(t)}d\mu\geq \log|V|.\ee According to the Trudinger-Moser embedding (\cite{GLY1}, Lemma 6), for any real number $\beta>0$, there exists some constant $C$ depending only on $\beta$ and the graph $G$ such that $$\int_Ve^{\beta\f{(u(\cdot,t)-\overline{u}(t))^2} {\|\nabla u(\cdot,t)\|_2^2}}d\mu\leq C.$$ As a consequence, \bea\nonumber \log\int_Ve^{u(\cdot,t)-\overline{u}(t)}d\mu&\leq&\log\int_Ve^{\f{(u(\cdot,t)-\overline{u}(t))^2} {4\epsilon\|\nabla u(\cdot,t)\|_2^2}+\epsilon\|\nabla u(\cdot,t)\|_2^2}d\mu\\ &\leq& \epsilon\int_V|\nabla u(\cdot,t)|^2d\mu+C\label{in-2} \eea for some constant $C$ depending on $\epsilon$ and the graph $G$. Combining (\ref{et-01}) and (\ref{in-2}), we have for any fixed real number $\rho$, \be\label{t0}\rho\log\int_Ve^{u(\cdot,t)-\overline{u}(t)}d\mu\leq |\rho|\epsilon\int_V|\nabla u(\cdot,t)|^2d\mu+C.\ee Inserting (\ref{t0}) into (\ref{tot}) and taking $\epsilon={1}/{(4+4|\rho|)}$, we conclude \be\label{J-lower}J_\rho(u(\cdot,t))\geq \f{1}{4}\int_V|\nabla u(\cdot,t)|^2d\mu-C.\ee This together with $(ii)$ of Lemma \ref{prop1} implies $$\int_V|\nabla u(\cdot,t)|^2d\mu\leq C, \quad\forall t\in[0,T).$$ In view of Step 3, we complete the final step and the proof of the proposition. $\hfill\Box$\\ Now we are in a position to prove the long time existence of the heat flow (\ref{heat-flow}). \begin{proposition}\label{prop3} Let $T$ be given as in (\ref{T}). Then $T=+\infty$. \end{proposition} \proof Suppose $T<+\infty$. By Proposition \ref{prop2} and the short time existence theorem for ordinary differential equations (\cite{ODE}, page 250), $u(\cdot,t)$ can be uniquely extended to a time interval $[0,T_2]$ for some $T_2>T$. This contradicts the definition of $T$. Therefore $T=+\infty$. $\hfill\Box$\\ {\it Completion of the proof of $(i)$ of Theorem \ref{thm1}}. An immediate consequence of Proposition \ref{prop3}. 
$\hfill\Box$ \section{Convergence of the heat flow} In this section, we shall prove $(ii)$ of Theorem \ref{thm1}. Since $V$ is finite, all norms of the function space $W^{1,2}(V)$ are equivalent. Then it follows from Proposition \ref{prop2} that there exists a constant $C$ such that for all $t\in [0,+\infty)$, \be\label{u-bd}\|u(\cdot,t)\|_{L^\infty(V)}\leq C.\ee In view of (\ref{deriv}) and (\ref{J-lower}), we have $$\int_0^{+\infty}\int_V\phi^\prime(u)u_t^2d\mu dt\leq J_\rho(u_0)+C.$$ This together with the finiteness of $V$ and $\phi^\prime(s)>0$ for all $s\in\mathbb{R}$ implies that there exists an increasing sequence $t_n\ra+\infty$ such that for all $x\in V$, \be\label{ten-0}\le.\phi^\prime(u(x,t))u_t^2(x,t)\ri|_{t=t_n}\ra 0\quad{\rm as}\quad n\ra\infty.\ee By (\ref{u-bd}), since $\phi^\prime(s)>0$ for all $s\in\mathbb{R}$, we obtain \be\label{lower}0<\min_{s\in[-C,C]}\phi^\prime(s)\leq\phi^\prime(u(x,t_n))\leq \max_{s\in[-C,C]}\phi^\prime(s),\quad\forall n\geq 1.\ee Combining (\ref{ten-0}) and (\ref{lower}), we conclude that for all $x\in V$, \be\label{t-0}\le.\f{\p}{\p t}\phi(u(x,t))\ri|_{t=t_n}\ra 0\quad{\rm as}\quad n\ra\infty.\ee Moreover, up to a subsequence, we can find some function $u_\infty: V\ra\mathbb{R}$ such that $u(x,t_n)$ converges to $u_\infty(x)$ uniformly in $x\in V$ as $n\ra\infty$. This together with (\ref{heat-flow}) and (\ref{t-0}) leads to \be\label{u-infty}\mathcal{M}(u_\infty)=\Delta u_\infty-Q+\f{\rho e^{u_\infty}}{\int_Ve^{u_\infty}d\mu}=0\quad{\rm on} \quad V.\ee In conclusion, we found an increasing sequence $(t_n)\ra+\infty$ such that $u(x,t_n)\ra u_\infty(x)$ uniformly in $x\in V$ as $n\ra\infty$, where $u_\infty$ is a solution of the mean field equation (\ref{u-infty}). Hereafter we further prove that along the heat flow, $u(\cdot,t)$ converges to $u_\infty$ as $t\ra+\infty$ uniformly on $V$. For this purpose, we need an estimate due to Lojasiewicz, namely Lemma \ref{Lojasie}. 
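For orientation, the content of Lemma \ref{Lojasie} is easy to check by hand in one variable. The following toy verification (our own illustration, not from the paper) takes $\Gamma(y)=y^2$, $\mathbf{a}=0$, $\theta=1/4$ and $\sigma=1$, for which the inequality reads $2|y|\ge |y|^{3/2}$ on $|y|<1$:

```python
# One-variable Lojasiewicz check: Gamma(y) = y^2, critical point a = 0,
# theta = 1/4, sigma = 1; |Gamma'(y)| >= |Gamma(y) - Gamma(a)|^(1 - theta)
# becomes 2|y| >= |y|^(3/2), valid for |y| < 1 (indeed for |y| <= 4).
def gamma(y):
    return y * y

def dgamma(y):
    return 2.0 * y

theta = 0.25
ys = [k / 100.0 for k in range(-99, 100)]
assert all(abs(dgamma(y)) >= abs(gamma(y) - gamma(0.0)) ** (1.0 - theta) for y in ys)
```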
The power of Lemma \ref{Lojasie} is shown in the following finite dimensional Lojasiewicz-Simon inequality. \begin{proposition}\label{prop4} Let $\sigma>0$ and $0<\theta<1/2$ be given as in Lemma \ref{Lojasie}, $\mathcal{M}(u)$ be defined as in (\ref{Mu}), and $\ell$ be the number of points of $V$. Along the heat flow (\ref{heat-flow}), if $\|u(\cdot,t)-u_\infty\|_{L^\infty(V)}<\sigma/\sqrt{\ell}$ for some fixed $t$, then there exists some constant $C$ independent of $t$ such that $$|J_\rho(u(\cdot,t))-J_\rho(u_\infty)|^{1-\theta}\leq C\|\mathcal{M}(u)(\cdot,t)\|_{L^2(V)}.$$ \end{proposition} \proof Assume $\|u(\cdot,t)-u_\infty\|_{L^\infty(V)}<\sigma/\sqrt{\ell}$ for some fixed $t$. For the sake of clarity, we denote $\mathbf{y}=(y_1,\cdots,y_\ell)=(u(x_1,t),\cdots,u(x_\ell,t))$, $\mathbf{a}=(u_\infty(x_1),\cdots,u_\infty(x_\ell))$, $\Gamma(\mathbf{y})=J_\rho(u(\cdot,t))$ and $\Gamma(\mathbf{a})= J_\rho(u_\infty)$. Clearly the function $\Gamma:\mathbb{R}^\ell\ra\mathbb{R}$ is analytic due to Lemma \ref{analytic}, and $$\|\mathbf{y}-\mathbf{a}\|=\sqrt{\sum_{i=1}^\ell(y_i-a_i)^2}\leq \sqrt{\ell}\max_{1\leq i\leq\ell}|y_i-a_i|<\sigma.$$ For any $1\leq i\leq \ell$, we define a function $e_i:V\ra\mathbb{R}$ by $$e_i(x)=\le\{\begin{array}{lll} 1,&{\rm if}& x=x_i\\[1.5ex] 0,&{\rm if}& x\not=x_i. \end{array}\ri.$$ Let $\mathbf{e}_i$ be a unit vector in $\mathbb{R}^\ell$, whose $i$-th component is $1$ and the rest are $0$. In view of (\ref{dJ}), one calculates the partial derivative of the analytic function $\Gamma(y)$ as follows. 
For any $1\leq i\leq\ell$, \bea\nonumber \p_{y_i}\Gamma(\mathbf{y})&=&\lim_{h\ra 0}\f{1}{h}\le(\Gamma(\mathbf{y}+h\mathbf{e}_i)-\Gamma(\mathbf{y})\ri)\\ &=&\lim_{h\ra 0}\f{1}{h}\le(J_\rho(u(x,t)+he_i(x))-J_\rho(u(x,t))\ri)\nonumber\\ &=&dJ_\rho(u(x,t))(e_i(x))\nonumber\\ &=&\int_V\mathcal{M}(u)(x,t)e_i(x)d\mu.\label{d-gamma} \eea This together with the fact $\sum_{i=1}^\ell\int_Ve_i^2d\mu=\sum_{i=1}^\ell\mu(x_i)=|V|$ and the Cauchy-Schwarz inequality leads to \bea\|\nabla \Gamma(\mathbf{y})\|&=&\sqrt{\sum_{i=1}^\ell\le(\p_{y_i}\Gamma(\mathbf{y})\ri)^2}\nonumber\\ &\leq& \sqrt{\le(\int_V\mathcal{M}(u)^2d\mu\ri)\sum_{i=1}^\ell\int_Ve_i^2d\mu}\nonumber\\ &=& \sqrt{|V|}\,\|\mathcal{M}(u)(\cdot,t)\|_{L^2(V)}.\label{na-1}\eea Similar to (\ref{d-gamma}), we have by (\ref{u-infty}) that for all $1\leq i\leq\ell$, \be\label{nab-Ga}\p_{y_i}\Gamma(\mathbf{a})=\int_V\mathcal{M}(u_\infty)(x)e_i(x)d\mu=0.\ee In view of the definition of $\Gamma$, (\ref{na-1}) and (\ref{nab-Ga}), we obtain by applying Lemma \ref{Lojasie} that \bna |J_\rho(u(\cdot,t))-J_\rho(u_\infty)|^{1-\theta}&=&|\Gamma(\mathbf{y})-\Gamma(\mathbf{a})|^{1-\theta}\\ &\leq&\|\nabla \Gamma(\mathbf{y})\|\\ &\leq&\sqrt{|V|}\,\|\mathcal{M}(u)(\cdot,t)\|_{L^2(V)}. \ena This ends the proof of the proposition. $\hfill\Box$\\ Finally we prove the uniform convergence of the heat flow (\ref{heat-flow}), namely \begin{proposition}\label{uniform} Along the heat flow (\ref{heat-flow}), there holds \be\label{uni}\lim_{t\ra+\infty}\int_V|u(\cdot,t)-u_\infty|^2d\mu=0.\ee \end{proposition} \proof For the proof of this proposition, we modify an argument of Sun-Zhu (\cite{Sun-Zhu}, Section 5). Suppose that $(\ref{uni})$ does not hold. Then there exist some constant $\epsilon_0>0$ and an increasing sequence of numbers $(t_n^\ast)$ such that $t_n^\ast>t_n$ and \be\label{contr}\int_V|u(\cdot,t_n^\ast)-u_\infty|^2d\mu\geq 2\epsilon_0,\ee where $(t_n)$ is given by (\ref{ten-0}) and satisfies $u(\cdot,t_n)\ra u_\infty$ uniformly on $V$.
Obviously $$\lim_{n\ra \infty}\int_V|u(\cdot,t_n)-u_\infty|^2d\mu=0.$$ Thus there exists $n_1\in\mathbb{N}$ such that if $n\geq n_1$, then \be\label{lim}\int_V|u(\cdot,t_n)-u_\infty|^2d\mu<\epsilon_0.\ee We {\it claim} that $J_\rho(u(\cdot,t))> J_\rho(u_\infty)$ for all $t\in [0,+\infty)$. Indeed, we have by $(ii)$ of Lemma \ref{prop1} that $J_\rho(u(\cdot,t))$ is decreasing with respect to $t$, and in particular $J_\rho(u(\cdot,t))\geq J_\rho(u_\infty)$ for all $t\geq 0$. Suppose there exists some $\tilde{t}>0$ such that $J_\rho(u(\cdot,\tilde{t}))= J_\rho(u_\infty)$. Then $J_\rho(u(\cdot,{t}))\equiv J_\rho(u_\infty)$ and thus $u_t\equiv 0$ on $V$ for all $t\in[\tilde{t},+\infty)$. Hence $u(x,t)\equiv u_\infty(x)$ for all $x\in V$ and all $t\in[\tilde{t},+\infty)$, which contradicts (\ref{contr}). This confirms our claim that $J_\rho(u(\cdot,t))> J_\rho(u_\infty)$ for all $t\geq 0$. For any $n\geq n_1$, we define $$s_n=\inf\le\{t>t_n: \|u(\cdot,t)-u_\infty\|_{L^2(V)}^2\geq 2\epsilon_0\ri\}.$$ It follows from (\ref{contr}) that $s_n<+\infty$, and that for all $t\in[t_n,s_n)$, \be\label{equ-1}\int_V|u(\cdot,t)-u_\infty|^2d\mu<2\epsilon_0=\int_V|u(\cdot,s_n)-u_\infty|^2d\mu.\ee For $t\in[t_n,s_n)$, we calculate by (\ref{deriv}), Proposition \ref{prop4}, the fact $u_t=(\phi^\prime(u))^{-1}\mathcal{M}(u)$, and (\ref{u-bd}) that \bna -\f{d}{dt}(J_\rho(u(\cdot,t))-J_\rho(u_\infty))^{\theta}&=&-\theta (J_\rho(u(\cdot,t))-J_\rho(u_\infty))^{\theta-{1}}\f{d}{dt}J_\rho(u(\cdot,t))\\ &=&\theta (J_\rho(u(\cdot,t))-J_\rho(u_\infty))^{\theta-{1}}\int_V\mathcal{M}(u)u_td\mu\\ &\geq&C\f{\int_V(\phi^\prime(u))^{-1}\mathcal{M}^2(u)d\mu}{\|\mathcal{M}(u)\|_{L^2(V)}}\\ &\geq& C\|u_t\|_{L^2(V)}.
\ena Hence \be\label{est-4}\int_{t_n}^{s_n}\|u_t\|_{L^2(V)}dt\leq C(J_\rho(u(\cdot,t_n))-J_\rho(u_\infty))^{\theta}.\ee By the H\"older inequality, \be\label{est-5}\f{d}{dt}\le(\int_V|u(\cdot,t)-u_\infty|^2d\mu\ri)^{1/2}=\f{1}{\|u(\cdot,t)-u_\infty\|_{L^2(V)}}\int_V(u-u_\infty)u_td\mu\leq \le(\int_Vu_t^2d\mu\ri)^{1/2}.\ee Combining (\ref{est-4}) and (\ref{est-5}), we have $$\|u(\cdot,s_n)-u_\infty\|_{L^2(V)}-\|u(\cdot,t_n)-u_\infty\|_{L^2(V)}\leq C(J_\rho(u(\cdot,t_n))-J_\rho(u_\infty))^{\theta}.$$ This together with (\ref{lim}) and (\ref{equ-1}) leads to $$\epsilon_0\leq C(J_\rho(u(\cdot,t_n))-J_\rho(u_\infty))^{\theta},$$ which is impossible if $n$ is chosen sufficiently large, since $J_\rho(u(\cdot,t_n))\ra J_\rho(u_\infty)$ as $n\ra\infty$. This confirms (\ref{uni}). $\hfill\Box$\\ {\it Completion of the proof of $(ii)$ of Theorem \ref{thm1}}. Recalling $V=\{x_1,\cdots,x_\ell\}$, one concludes from Proposition \ref{uniform} that $$\lim_{t\ra+\infty}\sum_{i=1}^\ell \mu(x_i)|u(x_i,t)-u_\infty(x_i)|^2=0.$$ Since for all $j\in\{1,\cdots,\ell\}$, there holds \bna |u(x_j,t)-u_\infty(x_j)|^2\leq \f{1}{\min_{x\in V}\mu(x)}\sum_{i=1}^\ell \mu(x_i)|u(x_i,t)-u_\infty(x_i)|^2, \ena one concludes that $u(x,t)$ converges to $u_\infty(x)$ uniformly in $x\in V$ as $t\ra+\infty$. By (\ref{u-infty}), $u_\infty$ is a solution of (\ref{Mean-1}). Thus the proof of Theorem \ref{thm1} is complete. $\hfill\Box$\\ {\bf Acknowledgements.} Yong Lin is partly supported by the National Science Foundation of China (Grant No. 12071245). Yunyan Yang is partly supported by the National Science Foundation of China (Grant No. 11721101) and National Key Research and Development Project SQ2020YFA070080. Both authors are supported by the National Science Foundation of China (Grant No. 11761131002). \bigskip
TITLE: Existence and Uniqueness Theorem QUESTION [2 upvotes]: I had a question about how to do one of these problems. So here's the question: Given this equation $y'=\frac{-\cos(t)y(t)}{(t+2)(t-1)}+t$, find if the initial conditions $y(0)=10, y(2)=-1, y(-10)=5$ exist. So I think the first step is just to take the partial derivative with respect to y which gives me: $$y''=\frac{-\cos(t)y'(t)}{(t+2)(t-1)}$$ So the 1'st equation doesn't exist at $t=-2,1$ and the partial derivative doesn't exist at $t=-2,1$ ....so do I conclude that all the initial values exists since none of them are $y(-2)$ or $y(1)$. Don't really know how to do this whole existence and uniqueness thing....so am I right or completely off track? REPLY [4 votes]: What you've done looks perfectly fine. Here's a general outline of what you do when you're looking to find where solutions exist for the first-order differential equation $y'+ p(t) y = g(t)$. Write the differential equation in the form: $y'=f(y, t)$ Find $f_y = \frac{\partial}{\partial y} f$. Determine points of discontinuities of both $f_y$ and $f$. At this point, if you're just looking to see if a particular initial condition ($t_0$) has a solution, just check if $t_0$ is one of the points of discontinuity. If you're looking for where the solution exists: Draw a number line denoting where the discontinuities are (if possible). Find where the initial condition falls on the number line. If the discontinuity to the left of $t_0$ is $a$, and the discontinuity to the right of $t_0$ is $b$, then the solution exists on the interval $(a, b)$. EDIT Based on requests from comments below, here's a statement of the existence and uniqueness theorem: Let the functions $f$ and $\frac{\partial f}{\partial y}$ be continuous in some rectangle $\alpha < t < \beta$, $\gamma < y < \delta$ containing the point $(t_0, y_0)$. 
Then, in some interval $t_0 - h < t < t_0 + h$ contained in $\alpha < t < \beta$, there is a unique solution $y = \phi(t)$ of the initial value problem: $$\begin{array}{cc} y' = f(t,y) & y(t_0) = y_0. \end{array}$$ Source: Elementary Differential Equations and Boundary Value Problems, Boyce and DiPrima, 10th Edition, pg 70.
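For this particular equation the recipe above can be carried out mechanically. Here is a small illustrative Python sketch (the helper `existence_interval` is made up for this example, not part of the quoted answer): $f(t,y)=\frac{-\cos(t)\,y}{(t+2)(t-1)}+t$ and $f_y=\frac{-\cos(t)}{(t+2)(t-1)}$ are both discontinuous exactly at the zeros of the denominator, $t=-2$ and $t=1$.

```python
import math

# Discontinuities of f(t, y) = -cos(t)*y/((t+2)(t-1)) + t and of
# f_y = -cos(t)/((t+2)(t-1)): exactly the zeros of the denominator.
DISCONTINUITIES = (-2.0, 1.0)

def existence_interval(t0, bad=DISCONTINUITIES):
    """Open interval around t0 on which the theorem guarantees a solution."""
    left = max((d for d in bad if d < t0), default=-math.inf)
    right = min((d for d in bad if d > t0), default=math.inf)
    return (left, right)

for t0 in (0.0, 2.0, -10.0):
    print(t0, existence_interval(t0))
# 0.0   -> (-2.0, 1.0)
# 2.0   -> (1.0, inf)
# -10.0 -> (-inf, -2.0)
```

None of the three initial points hits a discontinuity, so each of the three initial value problems has a solution, matching the conclusion above.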
TITLE: Prove: If $A$ and $B$ are closed subsets of $[0,\Omega]$ then at least $A$ or $B$ is bounded QUESTION [0 upvotes]: As usual, I am self studying topology and my knowledge of ordinals is meagre. Have done some research on it. Theorem 5.1 Any countable subset of $[0,\Omega)$ is bounded above. (This exercise requires a knowledge of ordinals) From the problems: 1) a countable space need not be compact; 2) a countably compact space is pseudocompact; it is immediate that the space of countable ordinals is pseudocompact. In fact, it is actually more than just pseudocompact, because a continuous real valued function on $[0,\Omega)$ is ultimately constant, so is more than just bounded. (A function $f$ on $[0, \Omega)$ is ultimately constant if there is an ordinal number $a\in [0,\Omega)$ such that $f(x)=f(y)$ $\forall x,y\in(a, \Omega)$.) Showing that every continuous real valued function on $[0, \Omega)$ is ultimately constant is not easy. Use the following outline, which is adapted from chapters 5 and 6 of Gillman and Jerison [6]. Below is part (a) of a four part question adapted from the above text. (a) Claim If $A$ and $B$ are closed sets of $[0,\Omega)$, then at least $A$ or $B$ is bounded. Attempted proof Let $A$ and $B$ be closed subsets of $[0,\Omega)$. Then there is an ordinal number $a<\Omega$ such that $A \subseteq [0,a]$ or $B\subseteq [0,a]$. Since $A$ and $B$ are infinite, each contains a cluster point. So $A$ or $B$ is countably compact, so by Theorem 5.1 it is bounded. I think my first effort sucks. On second thought, I thought of following something like the argument that closed intervals of $\omega_1$ are compact to do it. Any help to solve this theorem would be appreciated. REPLY [0 votes]: If both $A$ and $B$ are (closed and) unbounded pick $a_0 \in A$ and $b_0 \in B$ with $a_0 < b_0$, and next $a_1 \in A$ with $a_1 > b_0$ and so on, till we have interspersed sequences $a_0 < a_1 < a_2 < \ldots$ in $A$ and $b_0 < b_1 < b_2 < \ldots$ in $B$ with $a_n < b_n$ for all $n$.
Then define $\alpha=\sup_n a_n \in [0,\Omega)$ and also $\beta=\sup_n b_n \in [0,\Omega)$, and by closedness of $A$ and $B$ (and the interspersedness) we have $\alpha=\beta\in A \cap B$, contradicting their disjointness. We only need that a sequence of countable ordinals has a sup that is still a countable ordinal. It is easy to see that $X=[0,\Omega)$ is countably compact: just use a sup argument for a countable subset of $X$. It is also clear that $X$ is not compact, as the initial segments form a cover without a finite (or even countable) subcover. Pseudocompactness follows from countable compactness and needs no separate argument.
TITLE: Finding the inverse of a function using bisection method QUESTION [0 upvotes]: It is said that we can find $f^{-1}(y)$ by solving the equation $y-f(x)=0$ using the bisection method. But all sources that I can find use bisection to find roots, so I can't figure out how and why. Could you explain it? REPLY [3 votes]: For each particular value of $y$, you find the root of the equation $y-f(x)=0$. You will not get a formula for $f^{-1}(y)$, but an (approximate) value.
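To make the answer concrete, here is a minimal Python sketch (the function name `inverse_at` is made up for this illustration; it assumes $y-f(x)$ changes sign on the chosen bracket, e.g. $f$ monotone on $[a,b]$):

```python
# For a fixed y, bisect on g(x) = y - f(x) over a bracket [a, b]
# where g changes sign; the root is x = f^{-1}(y).
def inverse_at(f, y, a, b, tol=1e-12):
    """Approximate f^{-1}(y) by bisection, assuming y - f changes sign on [a, b]."""
    ga, gb = y - f(a), y - f(b)
    if ga == 0.0:
        return a
    if gb == 0.0:
        return b
    assert ga * gb < 0, "y - f(x) must change sign on [a, b]"
    while b - a > tol:
        m = 0.5 * (a + b)
        gm = y - f(m)
        if gm == 0.0:
            return m
        if ga * gm < 0:
            b, gb = m, gm
        else:
            a, ga = m, gm
    return 0.5 * (a + b)

# Example: f(x) = x**3 is invertible, and f^{-1}(8) = 2.
print(inverse_at(lambda x: x**3, 8.0, 0.0, 3.0))  # ~ 2.0
```

Each call returns one numerical value of $f^{-1}(y)$; to tabulate the inverse you repeat this for every $y$ of interest.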
TITLE: How to expand constant function in Fourier sine series? QUESTION [1 upvotes]: If a function is constant, by orthogonality $$\int_{-L}^L C \cdot \sin(n\pi x/L) \, dx = C \cdot \int_{-L}^{L} \sin(n\pi x/L) \, dx=0\text{ ??}$$ REPLY [5 votes]: Extend the constant function $C$ into an odd function: $$f(x) = \begin{cases} C\, \text{ for } 0 < x < L \\ -C\, \text{ for } -L < x < 0 \end{cases} $$ You can then expand this step function into a sine series using Fourier series, and consider its values on $0 < x < L$.
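Working out the coefficients of that odd extension gives $b_n=\frac{2}{L}\int_0^L C\sin(n\pi x/L)\,dx=\frac{2C}{n\pi}(1-\cos n\pi)$, i.e. $4C/(n\pi)$ for odd $n$ and $0$ for even $n$. A short illustrative Python check (not part of the answer):

```python
import math

def b_n(C, L, n, steps=100_000):
    """Midpoint-rule approximation of (2/L) * integral_0^L C*sin(n*pi*x/L) dx."""
    h = L / steps
    total = sum(C * math.sin(n * math.pi * (i + 0.5) * h / L) for i in range(steps))
    return (2.0 / L) * total * h

C, L = 1.0, 2.0
for n in (1, 2, 3, 4):
    exact = 4 * C / (n * math.pi) if n % 2 else 0.0
    print(n, round(b_n(C, L, n), 6), round(exact, 6))
```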
TITLE: Factoring the polynomial $3(2x+3)^2 + 7(2x+3) - 6$ QUESTION [1 upvotes]: Factor $3(2x+3)^2 + 7(2x+3) - 6$ What I did: With the substitution $X=2x+3$: \begin{align} 3(2x+3)^2 + 7(2x+3) - 6 &= 3X^2-9X+2X-6 \\ &= 3X(X-3)+2(X-3) \\ &= (X-3)(3X+2) \\ &=((2x+3)-3)(3(2x+3)+2) \\ &=(2x+0)(6x+9+2) \\ &=(2x)(6x+11) \end{align} REPLY [0 votes]: Your method is correct. You made a sign error: $$...=(X+3)(3X-2)=...$$
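Following the reply, $3X^2+7X-6=(X+3)(3X-2)$, so with $X=2x+3$ the correct factorisation is $(2x+6)(6x+7)=2(x+3)(6x+7)$. A quick numerical spot-check (illustrative only):

```python
# Evaluate both sides of the corrected factorisation at a few sample points.
for x in (-2.0, 0.0, 1.5, 10.0):
    lhs = 3 * (2 * x + 3) ** 2 + 7 * (2 * x + 3) - 6
    rhs = (2 * x + 6) * (6 * x + 7)
    assert abs(lhs - rhs) < 1e-9
print("factorisation checks out")
```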
\begin{document} \font\ef=eufm10 \def\fraktur#1{{\ef {#1}}} \def\bb#1{{\mathbb{#1}}} \def\cal#1{{\mathcal{#1}}} \def\fk#1{{\hbox{\fraktur#1}}} \newtheorem{theorem}{Theorem}[section] \newtheorem{axiom}{Axiom}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definitions}[theorem]{Definitions} \newtheorem{problem}{Problem}[section] \def\Proof{{\par\medskip\noindent\bf Proof. }} \def\qed{{\hfill\vrule height 4pt width 4pt depth 0pt \par\vskip\baselineskip}} \def\Wlg{{Without loss in generality, we may assume that }} \def\wlg{{without loss in generality }} \def\Example{\subsection{Example}} \def\Examples{\subsection{Examples}} \def\Exercise{\subsection{Exercise}} \def\Exercises{\subsection{Exercises}} \def\Remark{\subsection{Remark}} \def\Remarks{\subsection{Remarks}} \def\Notation{\subsection{Notation}} \def\implies{{\ \Rightarrow\ }} \def\mapsonto{{\rightarrow\hbox{\hskip -9pt \hbox{$\rightarrow$}}}} \def\B{{\bb B}} \def\C{{\bb C}} \def\R{{\bb R}} \def\N{{\bb N}} \def\Q{{\bb Q}} \def\R{{\bb R}} \def\Rc{{\R^c}} \def\Rd{{\R^d}} \def\S{{\bb S}} \def\U{{\bb U}} \def\Z{{\bb Z}} \def\mod{{\,\hbox{mod}\,}} \def\Oh{{\hbox{O}}} \def\oh{{\hbox{o}}} \def\coeff{{\rm coeff}} \def\dist{{\hbox{dist}}} \def\dim{{\rm dim}} \def\order{{\rm order}} \def\diam{{\rm diam}} \def\clos{{\rm clos}} \def\bdy{{\rm bdy}} \def\side{{\rm side}} \def\span{{\rm span}} \def\spt{{\rm spt}} \def\dom{{\rm dom}} \def\im{{\rm im}} \def\diag{{\rm diag}} \def\ball{{\rm ball}} \def\Tan{{\rm Tan}} \def\half{{\raise1pt\hbox{$\scriptscriptstyle{1\over 2}$}}} \def\third{{\scriptstyle{1\over3}}} \def\fifth{{\scriptstyle{1\over5}}} \def\twothirds{{\scriptstyle{2\over3}}} \def\quarter{{\scriptstyle{1\over4}}} \def\deg{{^\circ}} \def\ONE{1\hskip-6pt1} \maketitle \footnotetext[1]{Supported by Grant SFI RFP05$/$MAT0003 and the ESF Network HCAA.} \footnotetext[2]{Mathematics Subject Classification 2000: 30D05, 39B32, 37F99, 
30C35.} \section*{Abstract} Let $G$ be a group. We say that an element $f\in G$ is {\em reversible in} $G$ if it is conjugate to its inverse, i.e. there exists $g\in G$ such that $g^{-1}fg=f^{-1}$. We denote the set of reversible elements by $R(G)$. For $f\in G$, we denote by $R_f(G)$ the set (possibly empty) of {\em reversers} of $f$, i.e. the set of $g\in G$ such that $g^{-1}fg=f^{-1}$. We characterise the elements of $R(G)$ and describe each $R_f(G)$, where $G$ is the group of biholomorphic germs in one complex variable. That is, we determine all solutions to the equation $ f\circ g\circ f = g$, in which $f$ and $g$ are holomorphic functions on some neighbourhood of the origin, with $f(0)=g(0)=0$ and $f'(0)\not=0\not=g'(0)$. \section{Introduction} \subsection{General Setting} Let $G$ be a group. We say that an element $f\in G$ is {\em reversible in} $G$ if it is conjugate to its inverse, i.e. there exists $g\in G$ such that $g^{-1}fg=f^{-1}$. We denote the set of reversible elements by $R(G)$. For $f\in G$, we denote by $R_f(G)$ the set (possibly empty) of {\em reversers} of $f$, i.e. the set of $g\in G$ such that $g^{-1}fg=f^{-1}$. The set $R(G)$ always includes the set $I(G)$ of involutions (elements of order at most 2). Indeed, it also includes the larger set $$I^2(G) = \{\tau_1\tau_2: \tau_i\in I(G)\}$$ of {\em strongly-reversible elements}, i.e. elements that are reversed by an involution. If $g\in G$ reverses $f\in G$, then $g^2$ commutes with $f$, i.e. $g^2$ belongs to the centraliser $C_f(G)$. More generally, the composition of any two elements of $R_f(G)$ belongs to $C_f(G)$. For this reason, an understanding of centralisers in $G$ is a prerequisite for an understanding of reversers. The following easily-proved theorem characterises the reversers of an element, in any group. \begin{theorem}[Basic Theorem]\label{theorem-basic} Let $G$ be a group and $f,g\in G$.
Then the following three conditions are equivalent: \begin{enumerate} \item $g\in R_f(G)$; \item there exists $h\in G$ with $g^2=h^2$ and $f=g^{-1}h$; \item there exists $h\in G$ such that $f=gh$ and $f^{-1}=hg$. \end{enumerate} \end{theorem} \qed This then yields two characterisations of reversibility: \begin{corollary} Let $G$ be a group and $f\in G$. Then the following three conditions are equivalent: \begin{enumerate} \item $f\in R(G)$; \item there exist $g,h\in G$ with $g^2=h^2$ and $f=g^{-1}h$; \item there exist $g,h\in G$ such that $f=gh$ and $f^{-1}=hg$. \end{enumerate} \end{corollary} \qed This shows that reversibility is interesting only in nonabelian groups in which there are elements with multiple square roots. In any specific group, it is interesting to give more explicit characterisations of reversibility than those of this theorem. \medskip This paper is about the reversible elements in the group of invertible biholomorphic germs and some of its subgroups. We shall characterise these elements, and their reversers, and the strongly-reversible elements, in explicit ways. We shall also consider some related questions. The theory of reversibility for formal power series in one variable has already been dealt with in \cite{OF}. \medskip We shall see (cf. Section \ref{section-reversers}) that there exist germs $f\in G$ that are formally reversible, but not holomorphically reversible. \subsection{Our specific groups} For the remainder of the paper, we shall denote by $G$ the group of biholomorphic germs at $0$ in one complex variable. Thus an element of $G$ is represented by some function $f$, holomorphic on some neighbourhood (depending on $f$) of $0$, with $f'(0)\not=0$, and two such functions represent the same germ if they agree on some neighbourhood of $0$. The group operation is composition. The identity is the germ of the identity function $\ONE$.
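As a simple illustration in this concrete group (a standard computation, not needed in the sequel): the germ $f(z)=z/(1-z)$ is reversed by the involution $g(z)=-z$, since $$\left(g^{-1}\circ f\circ g\right)(z) = -f(-z) = \frac{z}{1+z} = f^{-1}(z).$$ Taking $h=g^{-1}\circ f$, we have $f=g\circ h$ and $h\circ g=f^{-1}$, as in condition 3 of Theorem \ref{theorem-basic}.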
The {\em multiplier map} $m:f\to m(f)=f'(0)$ is a homomorphism from $G$ onto the multiplicative group $\C^\times$ of the complex field. Obviously, since $\C^\times$ is abelian, the value $m(f)$ depends only on the conjugacy class of $f$ in $G$. We denote $$\begin{array}{rcl} H&=& \{f\in G: m(f) = \exp(i\pi q), \textup{ for some } q\in\Q\},\\ H_0 &=& \{f\in G: m(f)=\pm1\}= \ker m^2,\\ \textup{and }\hskip1cm&&\\ G_1 &=& \ker m. \end{array} $$ These normal subgroups have $ G_1 \le H_0 \le H \le G$. Further, for $p\in\N$, we define $$ G_p=\{f\in G_1: f^{(k)}(0)=0 \textup{ whenever }2\le k\le p\},$$ and $$ A_p = G_p\sim G_{p+1}.$$ Then $G_1$ is the disjoint union of $\{\ONE\}$ and the sets $A_p$. For $f\in G_1$, with $f\not=\ONE$, we denote by $p(f)$ the unique $p$ such that $ f\in A_p$. The natural number $p(f)$ is a conjugacy invariant of $f$ (with respect to conjugation in $G$), so that each $G_p$ is a normal subgroup of $G$. For $f\in G_p$, we may write $f(z)=z+f_{p+1}z^{p+1}+O(z^{p+2})$. The map $f\mapsto f_{p+1}$ is a group homomorphism from $G_p$ onto $(\C,+)$. Thus $f_{p+1}$ is a conjugacy invariant of $f$ in $G_p$. It is even invariant under conjugation in $G_1$, but it is not invariant under conjugation in $G$. Each $f\in A_p$ may be conjugated to the form $g^{-1}fg= z+z^{p+1}+a(f)z^{2p+1}+O(z^{2p+2})$, and then the complex number $a(f)$ is a conjugacy invariant of $f$ in $G$. The invariants $p(f)$ and $a(f)$ classify the elements of $G_1\sim\{\ONE\}$ up to formal conjugacy. The complete biholomorphic conjugacy classification requires additional invariants, and these have been provided by the equivalence class of the EV data $\Phi(f)$ of \'Ecalle-Voronin theory, which is reviewed briefly in Section \ref{section-EV} below. For $f\in H_0\sim G_1$, a complete set of conjugacy invariants (with respect to conjugacy in $G$) is provided by $m(f)=-1$ and the conjugacy class of $f^2$, which belongs to $G_1$. (See Theorem \ref{theorem-powers} below.) 
\subsection{Summary of results} It is obvious that each group homomorphism maps the reversible elements of its domain to reversible elements of its target, and that the only reversible elements in an abelian group are its involutions. Hence $R(G)\subset H_0$. Consequently, the reversible elements in all subgroups of $G$ lie in $H_0$. Also, it is always true that for $f\in G_1$, $p(f)=p(f^{-1})$. Also, by purely formal considerations \cite{OF}, the condition $a(f)=a(f^{-1})$ is equivalent to $a(f)=(p(f)+1)/2$. Thus the short answer to the question of which $f\in G$ are reversible in $G$ is the following: \begin{proposition}\label{proposition-EV} Let $f\in G$. Then $f\in R(G)$ if and only if (exactly) one of the following holds: \begin{enumerate} \item $f'(0)=1$, and $\Phi(f)$ is equivalent to $\Phi(f^{-1})$; \item $f'(0)=-1$, and $f^2\in R(G)$. \end{enumerate} \end{proposition} For Part 2, see Corollary \ref{corollary-square}. However, we can provide much more explicit information about reversibility in $G$. In general groups, a reversible element $f$ may have no reversers of finite order. If there is a reverser of finite order, then there is one whose order is a positive power of $2$. Only involutions can have a reverser of odd order. In our present group $G$, we have the following: \begin{theorem}\label{theorem-reversers} Let $f\in A_p$, for some $p\in \N$, and $g\in R_f(G)$. Then $g$ has finite even order $2s$, for some $s\in \N$ with $p/s$ an odd integer. \end{theorem} We shall give examples (cf. Section \ref{section-reversers}) to show that there are $f\in G$ for which the lowest order of a reverser is any preassigned power of $2$. We can be rather more precise about the order of reversers, but we have to distinguish between \lq\lq flowable" and \lq\lq non-flowable" reversible germs $f$. {\bf Definition}. 
By a {\em flow} in $G_1$ we mean a continuous group homomorphism $t\mapsto f_t$ from $(\R,+)$ (a {\em real flow}) or $(\C,+)$ (a {\em complex flow}) into $G_1$. A germ $f\in G_1$ is called {\em flowable} if and only if there exists a flow $(f_t)$ with $f_1=f$. The more precise result about reversers involves technical parameters that are associated to a reversible germ $f\in G_1$, and we shall give the statement and proof later (cf. Section \ref{section-last}), after we have explained these parameters. \begin{theorem}\label{theorem-reversible} Let $f\in A_p$, for some $p\in \N$. Then $f\in R(G)$ if and only if it may be written as $g^{-1}h$, where $g,h\in H$ are germs of finite even order $2s$, $g^2=h^2$, $s|p$, and $p/s$ is odd. \end{theorem} As is well-known, each germ of finite order in $G$ is conjugate in $H$ to a rotation through a rational multiple of $\pi$ radians. Indeed, an element $g \in G$ of finite order $\delta$ must have multiplier $\beta=m(g)$ a $\delta$-th root of unity, and is conjugate in $H$ to $z\mapsto \beta z$; in fact the function $$ \frac1{\delta}\left({z+ \frac{g(z)}{\beta} + \cdots + \frac{g^{\delta-1}(z)}{\beta^{\delta-1}}}\right)$$ provides a conjugation. \begin{theorem}\label{theorem-series} Let $f\in A_p$, for some $p\in \N$. Then $f\in R(G)$ if and only if there exists $\psi\in H$ such that \begin{equation}\label{equation-series-1} (\psi^{-1}f\psi)(z) = z + z^{p+1} + \sum_{k=1}^{\infty}c_kz^{sk+p+1}, \end{equation} where $p/s$ is an odd integer, and \begin{equation}\label{equation-series-2} (\psi^{-1}f^{-1}\psi)(z) = z - z^{p+1} + \sum_{k=1}^{\infty}(-1)^{k+1}c_kz^{sk+p+1}. \end{equation} (In other words, $ f_1=\psi^{-1}f\psi$ is reversed by $z\mapsto \exp(\pi i /s)z$.) \end{theorem} We shall give examples (cf. Section \ref{section-reversers}) to show that each $p\in\N$ and each $s|p$ with $p/s$ odd may occur.
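For concreteness, we record a standard example of a flow (not taken from the sequel, though it is the case $p=1$ of the model germ used in Section \ref{section-reversers}): the germ $f(z)=z/(1+z)\in A_1$ is complex-flowable, since $$f_t(z)=\frac{z}{1+tz}, \qquad t\in\C,$$ defines a complex flow with $f_1=f$; indeed $$\left(f_t\circ f_s\right)(z)=\frac{z/(1+sz)}{1+tz/(1+sz)}=\frac{z}{1+(s+t)z}=f_{s+t}(z).$$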
These results allow us to understand reversibility in $G$: One reverses a germ $f$ essentially by \lq\lq rotating" it (using a rotation modulo conjugacy), so as to swap the attracting and repelling petals of its Leau flower. We note some consequences: \begin{corollary}\label{corollary-reversible-square} Let $f\in G$. Then $f\in R(G)$ if and only if $f^2\in R(G)$. \end{corollary} \medskip The strongly-reversible elements of $G$ were already identified (in terms of EV data) in \cite{AG}, but we note the result, which follows immediately from Theorem \ref{theorem-reversers} above: \begin{corollary}\label{corollary-strongly-reversible} Let $f\in G$. Then $f\in I^2(G)$ if and only if $f\in R(G)$ and one of the following holds: \begin{enumerate} \item $f\in I(G)$, or \item $f\in A_p$ with $p$ odd. \end{enumerate} \end{corollary} We note that the case $p=1$ was already given by Voronin \cite{V2}. The following summarises our conclusions about reversibility in all the above-named subgroups of $G$: \begin{corollary} For each $p\in\N$, we have $$ (\ONE)=R(G_p) = R(G_1)\subset R(H_0)=I^2(G)\subset R(H)=R(G)\subset H_0,$$ and the three inclusions are proper. \end{corollary} \section{Conjugacy}\label{section-EV} {\bf Definition.} Let $p\in\N$. Let $\fk S$ denote the set of all functions $h$ that are defined and holomorphic on some upper half-plane (depending on $h$), and are such that $h(\zeta)-\zeta$ is bounded and has period 1. By {\it \'Ecalle-Voronin $p$-data} (or just EV data) we mean an ordered $2p$-tuple $\Phi = (\Phi_1,\ldots,\Phi_{2p})$, where $\Phi_1(\zeta)$,$-\Phi_2(-\zeta)$,$\Phi_3(\zeta)$, $\ldots$,$-\Phi_{2p}(-\zeta)\in\fk S$. 
Given EV $p$-data $\Phi$ and $q$-data $\Psi$, we say that they are {\em equivalent} if $p=q$ and there exist $k\in\Z$ and complex constants $c_1$,$\ldots$,$c_{2p}$, such that for each $j$ we have $$\Phi_{j+2k}(\zeta+c_j) = \Psi_j(\zeta)+c_{j+1},$$ (where we define $\Phi_j$, $\Psi_j$ and $c_j$ for all $j\in\Z$ by making them periodic in $j$, with period $2p$). \medskip Let $f\in G_1$. Let $p=p(f)$. Voronin \cite{V1} described how to associate \'Ecalle-Voronin data $\Phi(f) = (\Phi_1,\ldots,\Phi_{2p})$ to $f$. We shall not recapitulate the construction here\footnote{ For a detailed description, see Voronin's paper \cite{V1} or (for full details when $p>1$) \cite[pp.7-19]{AG}. The case $p>1$ was first fully elaborated by Yu. S. Ilyashenko \cite{Ilya}.}, but roughly speaking the $\Phi_j$ are obtained as (analytic extensions of) compositions $F_j\circ F_{j+1}^{-1}$, where the $F_j$ are conformal maps of alternately attracting and repelling Leau petals for $f$, which conjugate $f$ on the petals to translation by $1$ near $\infty$. Essentially the same construction was discovered independently by \'Ecalle \cite{M}. They proved the following: \begin{theorem}[Conjugacy] Let $f,g\in G_1$. Then $f$ is conjugate to $g$ in $G$ if and only if $\Phi(f)$ is equivalent to $\Phi(g)$. \end{theorem} \begin{theorem}[Realization] Given any EV data $\Phi$, there exists a function $f\in G_1$ having equivalent EV data. \end{theorem} For $f\in H$, the expositions in print usually say that the conjugacy classification is easily reduced to the case of multiplier $1$. We need to consider multiplier $-1$, so we need a precise statement. The result goes back to Muckenhoupt \cite[Theorem 8.7.6, p. 359]{KCG}. \begin{theorem}[Muckenhoupt]\label{theorem-powers} Suppose that $f,g\in H$ both have the same multiplier $\lambda$, a primitive $s$-th root of unity, where $s\in \N$. Then $f$ and $g$ are conjugate in $G$ if and only if $f^s$ and $g^s$ are conjugate in $G$.
\end{theorem} We supply a proof, partly for the reader's convenience, but also because we wish to draw a useful corollary from it. \Proof It is evident that if $h^{-1}fh=g$, then $h^{-1}f^sh= g^s$. For the other direction, suppose that there exists $h\in G$ with $h^{-1}f^sh= g^s$. We have $(h^{-1}fh)^s= g^s$, and $m(h^{-1}fh)=\lambda$. So it suffices to show that $$ \left\{ \begin{array}{rcl} m(f)&=&m(g)=\lambda \hbox{ and} \\ f^s&=&g^s \end{array} \right\} \implies f \hbox{ is conjugate to }g.$$ Let $k=f^s$. Then $k\in G_1$. If $k$ is the identity, then $f$ and $g$ are periodic with the same multiplier, so they are conjugate. If $k$ is not the identity, then the centraliser of $k$ is abelian (see Theorems \ref{theorem-3.1} and \ref{theorem-3.2} below). Since $f$ and $g$ belong to it, they commute with each other, hence $(f^{-1}g)^s=f^{-s}g^s=\ONE$. But $f^{-1}g\in G_1$, so $f^{-1}g=\ONE$, and $f$ is actually equal to $g$. \qed \begin{corollary}\label{corollary-powers} If $f,g\in H$ have as multiplier the same $n$-th root of unity, and $f^n\not=\ONE$, then each $h\in G$ that conjugates $f^n$ to $g^n$ will also conjugate $f$ to $g$. \end{corollary} \section{Centralisers} The facts about $C_f(G)$, for $f\in G_1$, were established by Baker and Liverpool \cite{Baker1,Baker2,Baker3,Liverpool} (see also Szekeres \cite{S}). We may summarise the facts about centralisers as follows: \begin{theorem}\label{theorem-3.1} Suppose that $p\in\N$ and $f\in A_p$ is flowable. Then $C_f(G)$ is an abelian group, equal to the inner direct product $$\{ f_t : t\in\C\}\times\{ \omega^j: 0\le j\le p-1\}$$ where $(f_t)_{t\in \C}$ is a complex flow, and $\omega\in H$ has finite order $p$. \end{theorem} It follows from Theorem \ref{theorem-3.1} that if $f\in G_1$ is flowable then $C_f(G_1)$ is the flow $(f_t)_{t\in \C}$. It is a remarkable result of Baker and Liverpool that in the non-flowable case $C_f(G_1)$ is an abelian group with a single generator $g$.
Since $f\in C_f(G_1)$ we have $f=g^d$ for some integer $d$ (which we can assume to be positive, by replacing $g$ by $g^{-1}$ if necessary). This $g$, which is unique, is usually denoted by $f^{\frac1d}$. \begin{theorem}\label{theorem-3.2} Suppose $f\in A_p$ is not flowable. Then $C_f(G)$ is abelian, and there exist positive integers $q$ and $\delta$ with $\delta|q$ and $q|p$ and elements $\tau$ and $\omega \in C_f(G)$ such that \begin{enumerate} \item $C_f(G)/C_f(G_1)$ is cyclic of order $q$, \item $C_f(G)$ is generated by $\tau$ and $f^{1/d}$, \item $\omega$ has finite order $\delta$, \item we have a direct product decomposition $C_f(G)=\langle \tau\rangle \times\langle \omega\rangle $, and finally \item we have the relation $$\tau^{\frac{q}{\delta}}=\omega f^{1/d}.$$ \end{enumerate} \end{theorem} The formal centraliser of an $f\in G_1$ (other than $\ONE$) is always isomorphic to the product of a flow and a finite cyclic group. Thus $C_f(G_1)$ is isomorphic to an additive subgroup of $\C$. The achievement of Baker and Liverpool was to show that the only possible subgroups that can occur are $\C$ itself and an infinite cyclic group $\Z\alpha$, for some $\alpha\in\C$. In the latter case, $f$ has only a finite number of compositional roots. In particular, if $f$ is real-flowable, or infinitely-divisible, or lies in the image of a $\Z^2$ action, then it must be complex-flowable. Voronin \cite{V1} used the EV data to characterise divisibility of the elements $f\in G_1$, i.e. the existence of composition roots. In fact, for a given $f\in G_1$ and $k\in\N$, there exists $g\in G_1$ with $g^k=f$, if and only if $\Phi=\Phi(f)$ satisfies $$ \Phi_j(\zeta+\frac1k)=\Phi_j(\zeta)+\frac1k,$$ for $j=1,\ldots,2p(f)$. In view of the Realization Theorem, this means that generic $f\in G_1$ have no roots at all. The above theorems are deep, but may be proved rather more easily than in the original papers, by using Voronin's approach \cite{V1}.
The flowable $f\in G_1$ are characterised as those that have EV data equivalent to $\Phi_j(\zeta)=\zeta+\lambda_j$, for constant $\lambda_j$, i.e. data that are translations. \section{Reversers}\label{section-reversers} After these preliminaries, we are ready to discuss reversibility in $G$. First, we deal with the case $m(f)=-1$. Then we proceed to prove the results stated in Section 1.3, and to provide the examples promised. \subsection{Multiplier $-1$} First, we deal with the case $m(f)=-1$. From Corollary \ref{corollary-powers} we deduce: \begin{corollary}\label{corollary-square} Let $f\in G$ have $f'(0)=-1$. Then (i) $f$ is an involution or $R_f(G) = R_{f^2}(G)$, and (ii) $f\in R(G)$ $\Leftrightarrow$ $f^2\in R(G)$. \end{corollary} \subsection{Proof of Theorem \ref{theorem-reversers}} We make use of formal series arguments below. It is also possible to prove some of the results by considering separately the flowable and non-flowable germs, and using the Baker-Liverpool theory on the latter. Let $\fk G$ denote the group of formally-invertible series, under the operation of formal composition. To prove Theorem \ref{theorem-reversers}, fix $p\in\N$, a reversible $f\in A_p$, and $g\in R_f(G)$. Since $f\in R(G)$, considered as a formal series it belongs to $R(\fk G)$. Hence \cite[Corollary 6]{OF} there exists a formal series $\tau\in R_f(\fk G)$, of order $2p$. Formally, $f$ is uniquely flowable \cite{Baker1}, i.e. there exists a unique flow $(f^t)_{t\in\C}$ in $\fk G$ with $f^1=f$. Also, $C_f(\fk G)$ is the set generated by $\tau^2$ and the $f^t$, $t\in\C$. This is well-known \cite{Baker1,Liverpool,Lubin}, but quite concretely $f$ is formally-conjugate \cite[Theorem 5]{OF} to $$ \frac{z}{(1+ z^p)^{1/p}},$$ and the same conjugacy takes $f^t(z)$ to $$ \frac{z}{(1+t z^p)^{1/p}}.$$ For all $t\in\C$, the latter commutes with $z\mapsto \exp(2\pi i/p)z$, and is reversed by $z\mapsto \exp(\pi i/p)z$, and $\tau$ is obtained by conjugating the latter back.
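The claim that $z\mapsto \exp(\pi i/p)z$ reverses the model map $z/(1+z^p)^{1/p}$ can be checked numerically near the fixed point $0$. The following sketch (plain Python; the value $p=3$ and the sample point are arbitrary choices) confirms that conjugating the model map by this rotation yields its compositional inverse $z/(1-z^p)^{1/p}$:

```python
import cmath

p = 3  # any positive integer works; 3 is an arbitrary choice for the check
f    = lambda z: z / (1 + z**p) ** (1 / p)          # model map z/(1+z^p)^{1/p}
finv = lambda z: z / (1 - z**p) ** (1 / p)          # its compositional inverse near 0
rho  = lambda z: cmath.exp(1j * cmath.pi / p) * z   # candidate reverser z -> e^{i pi/p} z

z = 0.1 + 0.05j  # sample point near the fixed point 0
lhs = cmath.exp(-1j * cmath.pi / p) * f(rho(z))     # (rho^{-1} o f o rho)(z)

# rho conjugates f to f^{-1}, i.e. rho reverses f:
assert abs(lhs - finv(z)) < 1e-12
# sanity check that finv really inverts f near 0:
assert abs(f(finv(z)) - z) < 1e-12
```

The algebra behind the check: $(e^{i\pi/p}z)^p = -z^p$, so conjugation turns $1+z^p$ into $1-z^p$ exactly.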
In particular, $\tau$ reverses each $f^t$, for $t\in\C$. Now $\tau^{-1}g\in C_f(\fk G)$, and hence $\tau^{-1}g=\tau^{2r}f^t$ for some $r\in\Z$ and $t\in\C$, so $g=\tau^mf^t$ for an odd $m\in\Z$. Since $\tau^m$ reverses $f^t$, we get $g^2=\tau^mf^tf^{-t}\tau^m=\tau^{2m}$, so the order of $g^2$ divides $p$, and the order of $g$ is finite, dividing $2p$. The order of $g$ cannot be odd (since $f$ is not involutive), and hence it is $2s$, for some $s|p$. Finally, if $p/s$ were even, we would have $m(g)^p=1$, but a simple formal calculation shows that $g$ cannot reverse $f$ unless $m(g)^p=-1$. \qed \subsection{Proof of Theorem \ref{theorem-reversible}} This is immediate from Corollary 1.2(2) and Theorem \ref{theorem-reversers}. \subsection{Proof of Theorem \ref{theorem-series}} Suppose $f\in R(G)$. By Theorem \ref{theorem-reversers}, there exists $g\in R_f$, of order $2s$, with $p/s$ odd. Thus there is a function $\psi\in H$ that conjugates $g$ to $\beta z$, where $\beta=m(g)$. Then $\psi^{-1}f\psi$ is reversed by $\beta z$, and commutes with $\beta^2z$. Since $\beta^2$ is a primitive $s$-th root of unity, it follows that $\psi^{-1}f\psi$ takes the form given by equation (\ref{equation-series-1}). Since $\beta z$ reverses it, $$ \psi^{-1}f^{-1}\psi(z) = \beta^{-1} (\psi^{-1}f\psi)(\beta z)$$ takes the form (\ref{equation-series-2}). This proves one direction, and the converse is obvious. \qed \subsection{Proof of Corollary \ref{corollary-reversible-square}} It is true in any group that $f\in R(G)\implies f^2\in R(G)$. For the converse in our specific $G$, there are two cases: $m(f)=\pm1$. If $m(f)=1$, and $f^2\in R(G)$, then we have seen in the proof of Theorem \ref{theorem-reversers} that each reverser of $f^2$ reverses each element of the formal flow $(f^2)^t$, and hence reverses $(f^2)^{1/2}=f$. (Observe that if a convergent series is formally reversed by a convergent series, then it is holomorphically reversed by it, too.)
If $m(f)=-1$, and $f^2\in R(G)$, then we have $f\in R(G)$ by Proposition \ref{proposition-EV}, Part 2. \subsection{Example: Reversible germ, not reversible by any germ of order dividing $2^k$} Fix any even $p\in\N$, and take $s=p$. Let $\mu\in G$ be multiplication by a primitive $s$-th root of $-1$. Take $\phi\in G_1$ commuting with $\mu^2$, but not with $\mu$. (This may be done, for instance, by taking $\phi(z)=z+z^{s+1}$.) Take $g=\mu$, $h=\phi^{-1}\mu\phi$, and $f=g^{-1}h$. Then a calculation shows that $g^2=h^2$ has order $s$ (and hence $g$ is a reverser for $f$ of order $2s$), and that $f\in A_p$. In case $p=2^{k+1}$, we see (by Theorem \ref{theorem-reversers}) that no element of order $2^k$ can reverse $f$. \medskip Another example is provided by the function $z(1+z^p)^{-1/p}$ used in the proof of Theorem 1.4, in view of Corollary 1.8. Examples of this kind may also be constructed (rather less concretely) by appealing to the Realization Theorem. However, the Realization Theorem is the best way to do the next thing: \subsection{Example: Non-flowable reversible germ} Fix any $p\in\N$, and take EV data $\Phi$, where $$\Phi_1(\zeta) = \zeta + \exp(-2\pi i\zeta), \qquad\Phi_2(\zeta) = \zeta - \exp(2\pi i\zeta), $$ and $\Phi_{j+2}=\Phi_j$ for all $j$. By the Realization Theorem, there is some $f\in A_p$ with EV data $\Phi(f)$ equivalent to $\Phi$. Hence this $f$ is reversible, by Proposition \ref{proposition-EV}, because $(-\Phi_{j+1}(-\zeta))$ is the EV data for $f^{-1}$. (This is so, because the consecutive attracting and repelling petals for $f$ are, respectively, repelling and attracting for $f^{-1}$, and because $F_{j+1}$ conjugates $f^{-1}$ in the $j+1$-st petal to $\zeta\mapsto\zeta-1$ near $\infty$, so that $-F_{j+1}(-\cdot)$ conjugates $f^{-1}$ to $\zeta\mapsto\zeta+1$, so that the EV recipe gives $ -F_{j+1}(--F^{-1}_{j+2}(-\zeta)) = -\Phi_{j+1}(-\zeta) $ as EV data for $f^{-1}$.) But since $\Phi_1$ is not a translation, $f$ is not flowable.
\subsection{Example: Formally-reversible germ, not reversible in $G$} Let ${\Phi}_1(\zeta)=\zeta +e^{-2\pi i\zeta}$ and ${\Phi}_2(\zeta)=\zeta$. If $f$ realizes this EV data then $a(f)=1=(p+1)/2$ by the formula on top of page 19 of \cite{AG}, and hence $f$ is formally reversible, but these data do not have the symmetry required of reversible germ data. \section{The Order of a Reverser}\label{section-last} Flowable reversible germs $f\in A_p$ are very special: they form a single conjugacy class -- all are conjugate to $ z/(1+z^p)^{1/p}$, and all reversers for them have order dividing $2p$. The possible orders are precisely the divisors of $2p$ of the form $2^ku$, where $u|p$ is odd, and $2^k$ is the largest power of $2$ dividing $2p$. \medskip In the nonflowable case, we can relate the possible orders for reversers to the centraliser generators $\tau$, $\omega$, and the natural numbers $d$, $q$ and $\delta$ of Theorem \ref{theorem-3.2}. The numbers $d$, $q$, and $\delta$ are uniquely-determined by $f$: the $1/d$-th power of $f$ is the smallest positive power that converges, $q$ is the index of $C_f(H_1)$ in $C_f(G)=C_f(H)$, and $\delta$ is the order of the (cyclic) torsion subgroup of $C_f(G)$. The germ $\omega$ may be any generator of this torsion subgroup; we may specify a unique $\omega$ by requiring that the multiplier $m(\omega) =e^{\frac{2\pi i}{\delta}}$ (as opposed to some other primitive $\delta$-th root of unity). \begin{theorem} Let $p\in\N$, and suppose $f\in A_p$ is reversible but not flowable. Let $\tau,\omega$ and $d,q,\delta$ be as in Theorem \ref{theorem-3.2}. Then \begin{enumerate} \item\label{last-1} If $g\in R_f(G)$ then $g$ commutes with $\omega$, and $g$ reverses $f^{r/d}$, for each $r\in\Z$. \item\label{last-2} $\delta=q$, and $\frac{p}{q}$ is odd.
\item\label{last-3} If we choose $\omega$ such that $m(\omega) =e^{\frac{2\pi i}{\delta}}$, then we have $$\{g^2:g\in R_f(G)\}=\{\omega^l:l \hbox{ is odd}\},$$ and we always have $$\{\ord(g):g\in R_f\}=\{2r\in\N: r| q, \hbox{ and } q/r \hbox{ is odd}\}.$$ \end{enumerate} \end{theorem} \begin{proof} We abbreviate $R_f=R_f(G)$. (\ref{last-1}) Since $g$ (and hence $g^{-1}$) reverses $f$ and $\omega$ commutes with $f$, we see that $g\omega g^{-1}$ commutes with $f$, has order $\delta$ and has the same multiplier as $\omega$, and so it equals $\omega$. To show the second part of \ref{last-1}, it suffices to deal with the case $r=1$. Again $gf^{\frac1d}g^{-1}$ commutes with $f$ and it has multiplier $1$, so $gf^{\frac1d}g^{-1}=f^{\frac{l}d}$ for some $l$. Raise both sides of the last equation to the power $d$ to get $f^{-1}=f^l$, and so $l=-1$ as desired. This proves part \ref{last-1}. \medskip\noindent (\ref{last-2}) We know that if $g\in R_f$ then $g'(0)^p=-1$, $g^2$ commutes with $f$ and that $g$ has finite order. It follows that $g'(0)=e^{\frac{\pi im}{p}}$ where $m$ is odd. Since $g^2$ is periodic and commutes with $f$ we have $g'(0)^{2\delta}=1$, i.e. $e^{2\pi im \delta\over p}=1$. This means that $m={\frac{p}{\delta}}l$ for some integer $l$. Since $m$ is odd, so also are $\frac p{\delta}$ and $l$. So far we have seen that $\frac{p}{\delta}$ is odd. Now we show that $q=\delta$. Now $g\tau g^{-1} {\tau}^{-1}$ commutes with $f$ and has multiplier $1$, so $g\tau g^{-1} {\tau}^{-1}=f^{\frac nd}$ for some integer $n$. If we take this last identity and raise both sides to the power $q$, we get $gf^{\delta\over d}g^{-1}f^{-\delta\over d}=f^{qn\over d}$. Now using the fact that $g$ reverses $f^{l\over d}$, we arrive at $-2\delta=qn$. So $-2={q\over \delta}n$, so that ${q\over \delta}$ is either $1$ or $2$. But $q=2\delta$ is not consistent with the fact that ${p\over \delta}$ is odd. Hence $q=\delta$. \medskip\noindent (\ref{last-3}) Pick any $g\in R_f$.
We already know from Theorem \ref{theorem-reversers} that $g$ has finite order. Since $g^2\in C_f$, it follows that $g^2$ belongs to the torsion subgroup, and hence is a power $\omega^l$. If $l$ were even, then $m(g)^p=1$, but a reverser of $f$ must have $m(g)^p=-1$. This proves that $$\{g^2:g\in R_f\}\subset \{\omega^l:l \hbox{ is odd}\}.$$ To see the opposite inclusion, fix $g_0\in R_f$, with $g_0^2=\omega^l$. Then $\omega^j g_0\in R_f$ whenever $j\in\Z$, and the square of this reverser is $\omega^{2j}g_0^2=\omega^{l+2j}$. Letting $j$ run through $\delta$ consecutive integers, we get each odd power of $\omega$. Thus $$\{g^2:g\in R_f\}= \{\omega^l:l \hbox{ is odd}\}.$$ \medskip We conclude that the possible values of $\ord(g)$ are the numbers $2\,\ord(\omega^l)$, where $l$ ranges over the odd numbers. Since $\omega$ has order $\delta=q$, the order of $\omega^l$ is $r=q/u$, where $u$ is the greatest common divisor of $l$ and $\delta$. Since $l$ is odd, $u$ must be odd as well. Conversely, suppose that $r$ is a divisor of $q$ and $u=q/r$ is odd. Then by the last equation there is a $g\in R_f$ with $g^2={\omega}^u$; since $\omega^u$ has order $r$, this $g$ has order $2r$. \end{proof} \begin{corollary} If $p=2^ku$ where $u$ is odd, and $f\in A_p$ is nonflowable and reversible in $G$, then $\delta=q=2^kn$ where $n$ divides $u$. The largest order for a reverser of $f$ is $2\delta$ and the smallest order is $2^{k+1}$. \end{corollary}\ \qed Note that in the flowable case, this corollary also holds (with, additionally, $q=p$). \medskip\noindent Using EV theory it can be shown that given any positive integer $p$ and any divisor $q$ of $p$ such that ${p\over q}$ is odd, there is a reversible $f\in A_p$ such that the associated ${q}_f=q$, and in fact an infinite-dimensional set of inequivalent ones.
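The set of reverser orders in part (\ref{last-3}) is a purely arithmetic condition on divisors, so it can be enumerated directly. A minimal sketch (the value $q=12$ is an arbitrary illustration):

```python
def reverser_orders(q):
    """Possible orders of reversers: {2r : r divides q and q/r is odd}."""
    return sorted(2 * r for r in range(1, q + 1) if q % r == 0 and (q // r) % 2 == 1)

# For q = 12 = 2^2 * 3 the divisors r with q/r odd are r = 4 and r = 12,
# giving possible reverser orders 8 and 24 -- consistent with the corollary:
# smallest order 2^{k+1} = 8 (here k = 2) and largest order 2*delta = 24.
print(reverser_orders(12))  # [8, 24]
```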
\section{Introduction} This is the third paper in a series concerning a localization of the index of elliptic operators. The localization of an integral is a mechanism by which the integral of a differential form on a manifold becomes equal to the integral of another differential form on a submanifold; it has been formulated under various geometric settings. The {\it submanifold} is either an open submanifold or a closed submanifold. When it is an open submanifold, the localization is closely related to some {\it excision formula}. When it is a closed submanifold, the localization is usually obtained by applying some {\it product formula} to the normal bundle of the submanifold after localizing to an open tubular neighborhood. A typical geometric setting for such localization is given by the action of a compact Lie group, and a localization is formulated in terms of the equivariant de Rham cohomology groups. An example is Duistermaat and Heckman's formula on a symplectic manifold. It is formally possible to replace the equivariant de Rham cohomology groups with the equivariant $K$-cohomology groups, and the resulting localization in terms of the equivariant $K$-cohomology groups is known as Atiyah-Segal's Lefschetz formula for the equivariant index. In our previous papers \cite{Fujita-Furuta-Yoshida1, Fujita-Furuta-Yoshida2} the geometric setting ensuring our localization is typically given by the structure of a torus fiber bundle. Under this setting we consider the Riemann-Roch number, that is, the index of the Dolbeault operator or the Dirac-type operator associated to an almost complex structure or a spin{$^c$} structure, twisted by some vector bundle. We do not assume any global group action. Instead, on the vector bundle, we assume a family of flat connections on the fibers of the torus bundle.
Our setting has been generalized from that of a single torus bundle structure to the case where we have a finite open covering and a family of torus bundle structures on the open sets which satisfy some compatibility condition. The dimensions of the fibers of the family of torus bundle structures can vary. This generalization was necessary to formulate a product formula in full form \cite{Fujita-Furuta-Yoshida2}, and the product formula is used to compute the local contribution in some examples. In this paper we introduce the equivariant version of our localization. When a compact Lie group $G$ acts on everything, it is straightforward to generalize our previous argument. The index takes values in the character ring $R(G)$ and we have the Riemann-Roch character. We go further. Suppose two compact Lie groups $G$ and $K$ act on everything simultaneously and assume that their actions commute. In this paper we formulate another type of equivariant version as follows. The main assumption of our previous papers in our geometric setting was the vanishing of the de Rham cohomology groups with some local coefficients on each fiber of the torus bundles. Our new setting is obtained by weakening this assumption. Roughly speaking, our new assumption is that only the $G$-invariant part of the de Rham cohomology groups vanishes. Under this new, weaker assumption, the full $G \times K$-equivariant index is not well defined. Instead, only the $G$-invariant part of the $G\times K$-equivariant index is well defined, as an element of the character ring of $K$. As an application of the latter equivariant version we give a proof of Guillemin-Sternberg's quantization conjecture in the case of a torus action. Our localization is basically a purely topological statement. It would therefore be desirable to formulate it as an equality between a topological index and an analytical index. The definition of the topological index is, however, not available at present. In this paper we work in the smooth category.
In Section 2 we first describe the orbifold version of our localization in the previous papers \cite{Fujita-Furuta-Yoshida1, Fujita-Furuta-Yoshida2}. We give several definitions under the same names as in the previous papers, though the notions, as well as the propositions there, are generalized. In the latter part of Section 2 we introduce group actions and give our main theorem (Theorem~\ref{equivariant localization for invariant part}). In Section 3, as a typical example of our setting, we explain the construction using an action of a torus $G$ with a simultaneous action of a compact Lie group $K$ on an almost complex manifold. In Section 4, in preparation for the proof of the quantization conjecture, we show a vanishing property of the $G$-invariant part of the equivariant Riemann-Roch number when $G$ is $S^1$, under some condition. In Section 5 we give a proof of the quantization conjecture for torus actions.
TITLE: Scalar product of Gaussian random vector with projection matrix is chi-squared QUESTION [1 upvotes]: We define the chi-square random variable with $n$ degrees of freedom this way: if $Z \sim N(0,I_n)$ is a multivariate Gaussian random vector, then $\lVert Z \rVert ^2 = \sum_{i=1}^n Z_i^2$ (sum of $n$ standard Gaussian RVs squared) is said to be a chi-square random variable with $n$ degrees of freedom. Let $H$ be a projection matrix of rank $k \leq n$ and $Z \sim N(0,I_n)$. Show that $Z^T HZ$ is a chi-square random variable with $k$ degrees of freedom. $Z^T HZ$ is a 1D random variable so its squared norm is equal to $\lvert Z^T HZ \rvert^2$. We want to show that it can be written as the sum of $k$ standard Gaussian RVs squared. Since $H$ is a projection matrix of rank $k$, it can be written as $H=U^T D U$ with $D=diag(1,\dots,1,0,\dots,0)$ with $k$ ones and $n-k$ zeros (because the rank is $k$). It gives $Z^T HZ=(UZ)^TD (UZ)$ and this gives $$\lvert Z^T H Z\rvert =\bigg\lvert \sum_{i=1}^k(UZ)_i^2\bigg\rvert = \sum_{i=1}^k(UZ)_i^2$$ Now I know that $U$ is orthogonal so $UZ \sim N(0,I_n)$ and hence $\lvert Z^T H Z\rvert$ is the sum of $k$ standard Gaussian RVs squared. However, we are interested in $\lvert Z^T H Z\rvert^2=\bigg( \sum_{i=1}^k(UZ)_i^2\bigg)^2$ which is really something else to treat. How can I deal with this? REPLY [0 votes]: $Z^T H Z$ is not univariate normal, so it does not make sense to consider its norm; as said by @StubbornAtom in the comment section.
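As a sanity check on the statement itself (independent of the norm confusion in the question), here is a quick NumPy simulation; the dimensions, seed and sample size are arbitrary choices. It builds a rank-$k$ orthogonal projection, samples $Z \sim N(0,I_n)$ repeatedly, and compares the empirical mean and variance of $Z^T H Z$ with those of a $\chi^2_k$ variable (mean $k$, variance $2k$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3

# Rank-k orthogonal projection H = V V^T, with V having k orthonormal columns.
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
H = V @ V.T

# Many independent draws of Z ~ N(0, I_n); form the quadratic form Z^T H Z per row.
Z = rng.standard_normal((200_000, n))
q = np.einsum('ij,jk,ik->i', Z, H, Z)

# A chi-square variable with k degrees of freedom has mean k and variance 2k.
print(q.mean(), q.var())  # close to 3 and 6
```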
TITLE: Solve for unknown in exponential equation QUESTION [0 upvotes]: How do I solve for an unknown in the base of an exponential equation? In my example $W$ is the unknown: $$PPV=K\cdot\left(\frac{D}{W^{0.5}}\right)^{-1.6}$$ REPLY [1 votes]: I will do the first step. $$PPV=K\cdot\left(\frac{D}{W^{0.5}}\right)^{-1.6} \implies \left(\frac{PPV}{K}\right)^{-5/8}\cdot \frac{1}{D}=\frac{1}{W^{0.5}}.$$ Can you see how very mild manipulation will yield the answer? The idea here is to see how to express one variable ($W$) in terms of the rest.
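Finishing the manipulation gives $W = D^2\,(PPV/K)^{5/4}$. A quick numeric check (the values of $K$, $D$ and $PPV$ below are made up purely for illustration):

```python
# Hypothetical values, chosen only to test the algebra.
K, D, PPV = 1140.0, 30.0, 25.0

# Closed form obtained by completing the answer's first step:
#   (PPV/K)^(-5/8) / D = W^(-1/2)  =>  W = D**2 * (PPV/K)**(5/4)
W = D**2 * (PPV / K) ** 1.25

# Substitute back into the original relation; it should reproduce PPV.
check = K * (D / W**0.5) ** -1.6
print(check)  # ~ 25.0
```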
TITLE: AMC 2011 Coloring Problem QUESTION [9 upvotes]: A 40 X 40 white square is divided into 1 X 1 squares by lines parallel to its sides. Some of these 1 X 1 squares are coloured red so that each of the 1 X 1 squares, regardless of whether it is coloured red or not, shares a side with at most one red square (not counting itself). What is the largest possible number of red squares? What I did is as follows (R is red, w is white). There are only 400 red squares. The answer should be more. (Sorry, the previous diagram was wrong, I missed the empty white lines. Now I fixed it.):
RRwwRRwwRRww...RRww
wwwwwwwwwwww...wwww
wwRRwwRRwwRR...wwRR
wwwwwwwwwwww...wwww
RRwwRRwwRRww...RRww
wwwwwwwwwwww...wwww
wwRRwwRRwwRR...wwRR
...................
RRwwRRwwRRww...RRww
wwwwwwwwwwww...wwww
wwRRwwRRwwRR...wwRR
wwwwwwwwwwww...wwww
REPLY [3 votes]: In a separate answer, user TonyK has shown that it is possible to have as many as $420$ red squares on the board, and that it remains to show that the number of red squares is strictly less than $421$. Below I will complete the proof by showing that the number of red squares must be strictly less than $421$. To see how it is possible to have $420$ red squares, you can look at user TonyK's answer, or you can click the link in his answer, which will take you to this website on Yahoo. Suppose we have a $40\times40$ white board, and we color some of the squares red, so that each square shares a side with at most one red square. Let $R$ be the number of red squares. It has been shown that there is a way of doing this with $R=420$. Our goal is to show that $R<421$. Since the board is $40\times40$, the board consists of $1600$ $1\times1$ squares. Let $\mathcal{S}$ be the set of all $1600$ of these $1\times1$ squares. For each $x\in\mathcal{S}$, let $f(x)$ be the number of red squares that share a side with $x$.
We'll start by examining the sum $$\sum_{x\in\mathcal{S}}f(x).$$ The number of times a red square is counted in this sum depends on whether the red square is a corner square, an edge square, or an interior square. So let $C$ be the number of red corner squares. Let $E$ be the number of red edge squares, and let $I$ be the number of red interior squares. In the above sum, red corner squares are counted twice; red edge squares are counted three times, and red interior squares are counted four times. Hence $$\sum_{x\in\mathcal{S}}f(x)=2C+3E+4I.$$ Since each square shares a side with at most one red square, we have that $f(x)\le1$ for each $x\in\mathcal{S}$. So $$\sum_{x\in\mathcal{S}}f(x)\le1600.$$ Hence $2C+3E+4I\le1600$. We want to maximize $R=C+E+I$ subject to the constraint that $2C+3E+4I\le1600$. We haven't gotten enough information yet to show that $R<421$, so we will have to examine the problem further to come up with some additional constraints. We can get the constraints we need by considering the squares on the border of the board, i.e. the corner and edge squares. In user TonyK's answer, it is shown that it is possible to have as many as $78$ red squares on the border. This is done by working your way around the border, coloring two squares red and then leaving two squares white, until half the border is red. If the border had more than $78$ red squares, then it follows from the pigeon-hole principle that there would be $4$ consecutive border squares, $3$ of which are red, which would contradict the fact that no square shares a side with more than one red square. Hence $C+E\le78$. For our last constraint, we can use the fact that there are four corners. So $C\le4$. So we want to maximize $$R=C+E+I$$ given that $$2C+3E+4I\le1600$$ $$C+E\le78$$ $$C\le4$$ $$C,E,I\ge0.$$ Readers familiar with linear programming will recognize that this is a problem that can be solved with the simplex method.
However, our goal is just to show that $R<421$, which can be done without the simplex method. If we add the three $\le$ constraints, we get that $4C+4E+4I\le1682$. Hence $R=C+E+I\le\frac{1682}{4}=420.5.\;$ So $R<421$. $\;\square$ Acknowledgments: I would like to thank user TonyK for showing me that it was possible to have $R=420$. Without his help, I may not have been able to solve this problem. I would also like to thank user Rob Pratt, who provided a substantial simplification. Closing Remark: If we modify the problem slightly, then the configuration mentioned in the original problem will in fact be optimal. Suppose that the original $40\times40$ board were inscribed on a torus instead of just a $40\times40$ square. Now the squares on the bottom row share sides with squares on the top row, and squares on the leftmost column share sides with squares on the rightmost column. It follows that $$\sum_{x\in\mathcal{S}}f(x)=4R.$$ Hence $R\le400$. So the configuration mentioned in the original problem would be optimal.
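The integer maximization of $C+E+I$ under these three constraints can also be checked by brute force, since the feasible region is tiny. This minimal sketch confirms that the constraints cap the integer count at $420$:

```python
best = 0
for C in range(5):                        # C <= 4
    for E in range(79 - C):               # C + E <= 78
        I = (1600 - 2 * C - 3 * E) // 4   # largest I with 2C + 3E + 4I <= 1600
        if I >= 0:
            best = max(best, C + E + I)
print(best)  # 420
```

The maximum is attained, e.g., at $C=4$, $E=74$, $I=342$, matching the LP bound of $420.5$ rounded down.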
\begin{document} \maketitle \newpage \begin{abstract} We consider planar rotors (XY spins) in $\mathbb{Z}^d$, starting from an initial Gibbs measure and evolving with infinite-temperature stochastic (diffusive) dynamics. At intermediate times, if the system starts at low temperature, Gibbsianness can be lost. Due to the influence of the external initial field, Gibbsianness can be recovered after large finite times. We prove some results supporting this picture. \end{abstract} \section{Introduction} Time evolution of spin systems with different initial Gibbs measures and different dynamics shows various interesting features. In particular, in the transient regime, the structure of the evolved measure can have various properties, which may change in time. For example, in \cite{vEntFerHolRed02}, \cite{vEntRus07}, \cite{KueOpo07}, \cite{KueRed06} and \cite{DerRoe05} the question was investigated whether the time-evolved measure is Gibbsian or not. Results about conservation, loss and recovery of the Gibbs property could be obtained. Ising spin systems were considered in \cite{vEntFerHolRed02} and different types of unbounded spin systems in \cite{DerRoe05} and \cite{KueRed06}. In \cite{vEntRus07} and \cite{KueOpo07} compact continuous spin systems are investigated. In more physical terms, the question is whether one can or cannot associate an effective temperature ($=$ inverse interaction norm) to the system when it is in this non-equilibrium situation \cite{OlPe07}. Variations of both the initial and the dynamical temperature (the temperature of the Gibbs measure(s) to which the system will converge, which is a property of the dynamics) have influence on the existence (or absence) of the quasilocality property of the time-evolved measure of the system. This quasilocality property is a necessary (and almost sufficient) condition to have Gibbsianness \cite{EFS93, Geo88}. 
In \cite{vEntRus07} we showed that the time-evolved measure for planar rotors stays Gibbsian either for short times, starting at arbitrary temperature and with arbitrary-temperature dynamics, or for all times under high- or infinite-temperature dynamics starting from a high- or infinite-temperature initial measure. Furthermore the absence of the quasilocality property is shown for intermediate times for systems starting in a low-temperature regime with zero external field and evolving under infinite-temperature dynamics. The fact that there exist intermediate times where Gibbsianness is lost for XY spins even in two dimensions is remarkable, because those systems do not have a first-order phase transition due to the Mermin-Wagner theorem. However, it turns out that conditionings can induce one. To establish the occurrence of such conditional first-order transitions is a major step in the proof that a certain measure is not Gibbsian. \newline Similar short-time results for more general compact spins can be found in \cite{KueOpo07}. \newline These results about compact continuous spins can be seen as intermediate between those for discrete Ising spins and the results for unbounded continuous spins. Conservation, loss and recovery results can be found in \cite{vEntFerHolRed02} for Ising spins, and conservation for short times and loss for larger times for unbounded spins in \cite{KueRed06}. Conservation for short times for more general dynamics (e.g. Kawasaki) for discrete spins was proven in \cite{LeNRed02}, and for unbounded spins with bounded interactions in \cite{DerRoe05}. \medskip This paper is a continuation of \cite{vEntRus07}. As in that paper, we consider XY-spins living on the lattice sites of $\mathbb{Z}^d$ and evolving with time. The initial Gibbs measure is a nearest-neighbour ferromagnet, but now in a positive external field. So we start in the regime where there is a unique Gibbs measure. The system is evolving under infinite-temperature dynamics.
We expect that, just as in the Ising case, whatever the initial field strength, if the initial temperature is low then, after the short times during which the measure is always Gibbsian, a transition towards a non--Gibbsian regime occurs, and that after another, longer time, the measure becomes Gibbs again. We can prove a couple of results which go some way towards confirming this picture. We prove that when the initial field is small, and $d$ is at least 3, there exists a time interval, depending on the initial field, during which the time-evolved measure is non-Gibbsian. We present a partial result, indicating why we expect the same phenomenon to happen in two dimensions. Furthermore, we argue that the presence of an external field is responsible for the reentrance into the Gibbsian regime for larger times, independently of the initial temperature. We can prove this for the situation in which the original field is strong enough. \section{Framework and Result} Let us introduce some definitions and notations. The state space of one continuous spin is the circle, $\mathbb{S}^1$. We identify the circle with the interval $[0,2\pi)$ where $0$ and $2\pi$ are considered to be the same point. Thus the configuration space $\Omega$ of all spins is isomorphic to $[0,2\pi)^{\Z^d}$. We endow $\Omega$ with the product topology and the natural product probability measure $d\nu_0(x) = \bigotimes_{i \in \mathbb{Z}^d} d\nu_0(x_i)$. In our case we take $d\nu_0(x_i) = \frac{1}{2 \pi} dx_i$. An interaction $\varphi$ is a collection of $\mathcal{F}_{\Lambda}$-measurable functions $\varphi_{\Lambda}$ from $([0,2\pi))^{\Lambda}$ to $\mathbb{R}$, where $\Lambda \subset \mathbb{Z}^d$ is finite. $\mathcal{F}_{\Lambda}$ is the $\sigma$-algebra generated by the canonical projection on $[0,2\pi)^{\Lambda}$. \newline The interaction $\varphi$ is said to be of \textbf{finite range} if there exists an $r > 0$ s.t.
$diam(\Lambda) > r$ implies $\varphi_{\Lambda} \equiv 0$ and it is called \textbf{absolutely summable} if for all $i$, $\sum_{\Lambda \ni i} \parallel \varphi_{\Lambda} \parallel_{\infty} < \infty$. \newline We call $\nu$ a \textbf{Gibbs measure} associated to a reference measure $\nu_0$ and interaction $\varphi$ if the series $H_{\Lambda}^{\varphi}= \underset{\Lambda^{\prime} \cap \Lambda \neq \emptyset} \sum \varphi_{\Lambda^{\prime}} $ converges ($\varphi$ is absolutely summable) and $\nu$ satisfies the DLR equations for all $i$: \begin{equation} d\nu_{\beta}(x_i \mid x_j, j \neq i) = \frac{1}{Z_i} \exp( - \beta H_{i}^{\varphi}(x)) d\nu_0( x_i), \label{Gibbs} \end{equation} where $Z_i = \int_0^{2\pi} \exp( - \beta H_{i}^{\varphi}(x)) d\nu_0(x)$ is the partition function and $\beta$ proportional to the inverse temperature. The set of all Gibbs measures associated to $\varphi$ and $\nu_0$ is denoted by $\mathcal{G}(\beta, \varphi, \nu_0)$. Now, instead of working with Gibbs measures on $[0,2\pi)^{\Z^d}$ we will first investigate Gibbs measures as space-time measures $Q^{\nu_{\beta}}$ on the path space $\overset{\sim} \Omega = C(\mathbb{R}_+, [0,2\pi))^{\mathbb{Z}^d}$. In \cite{Deu87} Deuschel introduced and described infinite-dimen\-sional diffusions as Gibbs measures on the path space $C([0,1])^{\Z^d}$ when the initial distribution is Gibbsian. This approach was later generalized by \cite{CatRoeZes96} who showed that there exists a one-to-one correspondence between the set of initial Gibbs measures and the set of path-space measures $Q^{\nu_{\beta}}$. 
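For the nearest-neighbour XY interaction used below, the single-site conditional probability $\eqref{Gibbs}$ is completely explicit: the local field is a single cosine $R\cos(x_i-\phi)$, so the partition function $Z_i$ is the modified Bessel function $I_0(R)$. The following sketch checks this numerically (the coupling constants and the neighbour angles are arbitrary illustrative values):

```python
import numpy as np

beta, J, h = 1.0, 0.5, 0.3               # illustrative values of the constants
nbrs = np.array([0.2, 1.1, 4.0, 5.5])    # hypothetical neighbour angles x_j

# beta * (J * sum_j cos(x - x_j) + h * cos(x)) = a*cos(x) + b*sin(x) = R*cos(x - phi)
a = beta * (J * np.cos(nbrs).sum() + h)
b = beta * J * np.sin(nbrs).sum()
R = np.hypot(a, b)

# Partition function Z_i = (1/2pi) int_0^{2pi} exp(beta * local energy) dx = I_0(R)
x = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
energy = J * np.cos(x[:, None] - nbrs).sum(axis=1) + h * np.cos(x)
Z = np.exp(beta * energy).mean()         # rectangle rule: mean over a uniform grid

print(Z, np.i0(R))  # the two values agree to high accuracy
```

The rectangle rule is spectrally accurate here because the integrand is smooth and periodic.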
\bigskip We consider the process $X=(X_i(t))_{t \geq 0, i \in \Z^d}$ defined by the following system of stochastic differential equations (SDE) \begin{eqnarray} \begin{cases} & d X_i(t) = d B_i^{\odot}(t) , i \in \mathbb{Z}^d, t > 0 \label{system2-1}\\ & X(0) \sim \nu_{\beta} , t=0 \end{cases} \end{eqnarray} for $\nu_{\beta} \in \mathcal{G}(\beta, \overset{\sim} \varphi, \nu_0)$ and the initial interaction $\overset{\sim} \varphi$ given by \begin{equation} \overset{\sim} \varphi_{\Lambda}(x) = - J \underset{i,j \in \Lambda: i \sim j} \sum \cos(x_i-x_j) - h\sum_{i \in \Lambda}\cos(x_i) \label{interaction} \end{equation} with $J, h$ some non-negative constants and $d\nu_0(x) = \frac{1}{2\pi} dx$. $\overset{\sim}H$ denotes the initial Hamiltonian associated to $\overset{\sim}\varphi$, and $(B_i^{\odot}(t))_{i,t}$ is independent Brownian motion moving on a circle, with transition kernel given, via the Poisson summation formula, by \begin{equation*} p_t^{\odot}(x_i,y_i) = 1 + 2\cdot \sum_{n \geq 1} e^{-n^2 t} \cos(n\cdot(x_i - y_i)) \end{equation*} for each $i \in \mathbb{Z}^d$, just as we used in \cite{vEntRus07}. Note also that the eigenvalues of the Laplacian on the circle, which is the generator of the process, are given by $\lbrace n^2, n \geq 1 \rbrace$, see also \cite{Ros97}. We remark that the normalization factor $1/2\pi$ is absorbed into the single-site measure $\nu_0$. \newline Obviously $\overset{\sim}\varphi$ is of finite range and absolutely summable, so the associated measure $\nu_{\beta}$ given by $\eqref{Gibbs}$ is Gibbs. \medskip For the failure of Gibbsianness we will use the necessary and sufficient condition of finding a point of essential discontinuity of (every version of) the conditional probabilities of $\nu_{\beta}$, i.e. a so-called \textbf{bad configuration}.
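The circle kernel $p_t^{\odot}$ above can be checked numerically: after truncating the series, it should be a probability density with respect to $\nu_0$ and satisfy the semigroup (Chapman--Kolmogorov) property. A minimal sketch (truncation level, grid size and test points are arbitrary choices):

```python
import numpy as np

def p_circle(t, x, y, n_terms=100):
    """Truncated heat kernel on the circle: 1 + 2*sum_n exp(-n^2 t) cos(n (x - y))."""
    n = np.arange(1, n_terms + 1)
    return 1.0 + 2.0 * np.sum(np.exp(-n**2 * t) * np.cos(n * (x - y)))

grid = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)

# Normalization w.r.t. nu_0 = dx/(2 pi):  int p_t(x, y) dnu_0(y) = 1.
mass = np.mean([p_circle(0.3, 1.0, y) for y in grid])

# Semigroup property:  int p_s(x, z) p_t(z, y) dnu_0(z) = p_{s+t}(x, y).
conv = np.mean([p_circle(0.2, 1.0, z) * p_circle(0.3, z, 2.5) for z in grid])

print(mass, conv, p_circle(0.5, 1.0, 2.5))  # mass ~ 1, conv ~ p_{0.5}(1.0, 2.5)
```

Both identities hold essentially to machine precision, since the rectangle rule integrates trigonometric polynomials of this degree exactly.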
It is defined as follows. \begin{defi} A configuration $\zeta$ is called \textbf{bad} for a probability measure $\mu$ if there exists an $\e > 0$ and $i \in \Z^d$ such that for all $\Lambda$ there exists $\Gamma \supset \Lambda$ and configurations $\xi$, $\eta$ such that \begin{equation} |\mu_{\Gamma}(X_i | \zeta_{\Lambda \setminus \lbrace i \rbrace}\eta_{\Gamma \setminus \Lambda}) - \mu_{\Gamma}(X_i | \zeta_{\Lambda \setminus \lbrace i \rbrace}\xi_{\Gamma \setminus \Lambda}) | > \e. \label{badconfig} \end{equation} \end{defi} The measure at time $t$ can be viewed as the restriction of the two-layer system, considered at times $0$ and $t$ simultaneously, to the second layer. In order to prove Gibbsianness or non-Gibbsianness we need to study the joint Hamiltonian for a fixed value $y$ at time $t$. \newline The time-evolved measure is \textbf{Gibbsian} if for every fixed configuration $y$ the joint measure has no phase transition in a strong sense (e.g. via Dobrushin uniqueness, or via cluster expansion/analyticity arguments). In that case, an absolutely summable interaction can be found for which the evolved measure is a Gibbs measure. On the other hand, the measure is \textbf{non-Gibbsian} if there exists a configuration $y$ which induces a phase transition for the conditioned double-layer measure at time $0$ which can be detected via the choice of boundary conditions. In that case no such interaction can be found, see for example \cite{FerPfi97}. \newline The results we want to prove are the following. \begin{Theorem} Let $Q^{\nu_{\beta}}$ be the law of the solution $X$ of the planar rotor system $\eqref{system2-1}$ in $\Z^d$, $\nu_{\beta} \in \mathcal{G}(\beta, \overset{\sim}\varphi, \nu_0)$ and $\overset{\sim} \varphi$ given by $\eqref{interaction}$, with $\beta$ the inverse temperature, $J$ some non-negative constant and $h> 0$ the external field, and $d$ at least 3. 
Then, for $\beta $ large enough and $h$ small enough, there is a time interval $(t_0(h,\beta), t_1(h,\beta))$ such that for all $t_0(h,\beta) < t < t_1(h,\beta)$ the time-evolved measure $\nu^t_{\beta}=Q^{\nu_{\beta}}\circ X(t)^{-1}$ is not Gibbs, i.e. there exists no absolutely summable interaction $\varphi^t$ such that $\nu^t_{\beta} \in \mathcal{G}(\beta, \varphi^t, \nu_0)$. \end{Theorem} \begin{Theorem} For any $h$ chosen such that $\beta h$ is large enough, compared to $\beta$, there exists a time $t_2(h)$, such that for all $t \geq t_2(h)$ the time-evolved measure is Gibbs, $\nu^t_{\beta} \in \mathcal{G}(\beta, \varphi^t, \nu_0)$. \end{Theorem} \textbf{Proof of Theorem 2.1:}\\ We consider the double-layer system, describing the system at times $0$ and $t$. We can rewrite the transition kernel in Hamiltonian form, and we will call the Hamiltonian for the two-layer system the dynamical Hamiltonian (as it contains the dynamical kernel). It is formally given by: \begin{equation*} -\textbf{H}^t_{\beta}(x,y) = - \beta \overset{\sim}H(x) + \sum_{i \in \Z^d} \log(p_t^{\odot}(x_i,y_i)), \end{equation*} where $x, y \in [0,2\pi)^{\Z^d}$, $p_t^{\odot}(x_i,y_i)$ is the transition kernel on the circle and $\overset{\sim}H(x)$ is formally given by \begin{equation*} -\overset{\sim}H(x) = J \sum_{i \sim k} \cos(x_i-x_k) + h\sum_i \cos(x_i). \end{equation*} \textbf{1.} First we want to prove that there exists a time interval where Gibbsianness is lost. For this we have to find a ``\textit{bad configuration}'' such that the conditioned double-layer system has a phase transition at time $0$, which implies $\eqref{badconfig}$ for the time-evolved measure. We expect this to be possible for each strength of the external field, and in each dimension at least 2. At present we can perform the programme only for weak fields, and for dimension at least 3. We also show a partial result, at least indicating how a conditioning also in $d=2$ can induce a phase transition. 
\medskip Thus, given $h>0$, we immediately see that the spins of the initial system prefer to follow the field and point upwards (take the value $x_i=0$ at each site $i$). To compensate for that, we will condition the system on the configuration where all spins point downwards (at time $t$), i.e. $y^{spec}:=(\pi)_{i \in \mathbb{Z}^d}$. Thus the spin configuration in which all spins point in the direction opposite to the initial field will be our ``bad configuration''. We expect that then the minimal configuration of $-\textbf{H}^t_{\beta}(x,y^{spec})$, i.e. the ground states of the conditioned system at time $0$, will need to compromise between the original field and the dynamical (conditioning) term. In the ground state(s) either all spins will point to the right, possibly with a small correction $\e_t$, $(\pi/2 - \e_t)_{i \in \Z^d}$, or to the left, $(3\pi/2 + \e_t)_{i \in \Z^d}$, also with a small correction; $\e_t$ is a function depending on $t$. Finally these two symmetry-related ground states will yield a phase transition of the ``spin-flop'' type, also at low temperatures. It is important to observe that for this intuition to work, it is essential that the rotation symmetry of the zero-field situation will not be restored, due to the appearance of higher-order terms from the expansion of the transition kernel, as we will indicate below. \medskip We perform a little analysis of the logarithm of the transition kernel $p_t^{\odot}$. Let $y^{spec}:=(\pi)_{i \in \mathbb{Z}^d}$. We want to focus on the first three terms coming from the expansion of the logarithm. 
\begin{eqnarray*} & & \log\biggl ( 1 + 2\sum_{n \geq 1}e^{-n^2 t}\cos(n(x_i-\pi)) \biggr) = \\ & & -2e^{-t}\cos(x_i) - 2e^{-2t}\cos^2(x_i) - \frac{8}{3}e^{-3t}\cos^3(x_i) + R_t(x_i) \end{eqnarray*} where \begin{equation*} R_t(x_i) := \biggl[ \sum_{n \geq 1} \frac{(-1)^{n+1}}{n}\biggl ( 2 \sum_{k \geq 1} e^{-k^2 t} \cos(k(x_i-\pi))\biggr)^n \biggr] \1_{\lbrace n \neq 1,2,3 \rbrace \cup \lbrace k \neq 1 \rbrace} \end{equation*} is of order $\mathcal{O}_i(e^{-4t})$; for details see the Appendix. We define $h_t=e^{-t}$. Note that given $\beta h$, there is a time interval where the effect of the initial field is essentially compensated by the field induced by the dynamics (containing $h_t$). For large times the initial field term dominates all the others and the system is expected to exhibit a ground (or Gibbs) state following this field. For intermediate times the other terms are important, too. If we consider a small initial field, it is enough to consider the second- and third-order terms which we indicated above. Those terms create, however, the discrete left-right symmetry for the ground states, which will now prefer to point either to the right or to the left. \newline For the moment we forget about the rest term $R_t(x_i)$ and investigate the restricted Hamiltonian $-\textbf{H}_{res3}^t(x,y^{spec})$, which is formally equal to \begin{equation} \beta J \sum_{i \sim k} \cos(x_i - x_k) + \beta h\sum_i \cos(x_i) + \sum_i \biggl( -2h_t\cos(x_i) - 2h_t^2\cos^2(x_i) - \frac{8}{3}h^{3}_t\cos^3(x_i) \biggr) \label{Cpotential}. \end{equation} To be more precise, the external field including the inverse temperature, $\beta h$, will be chosen small enough, and then the inverse temperature $\beta$ large enough. We first want to find the ground states of the restricted Hamiltonian $\textbf{H}_{res3}^t(x,y^{spec})$, which are points $x=(x_i)_{i \in \mathbb{Z}^d}$. 
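The quality of this third-order truncation is easy to confirm numerically. In the sketch below (a check, not part of the proof; the value of $t$ and the truncation orders are arbitrary) the full logarithm of the kernel conditioned on $y_i=\pi$ is compared with the three displayed terms; the maximal discrepancy over the circle is of the advertised order $e^{-4t}$.

```python
import numpy as np

t = 2.0
x = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
n = np.arange(1, 40)
# kernel conditioned on y_i = pi, via the Poisson summation formula
kernel = 1.0 + 2.0 * np.sum(
    np.exp(-n**2 * t)[:, None] * np.cos(np.outer(n, x - np.pi)), axis=0)
exact = np.log(kernel)
# the three leading terms of the expansion of the logarithm
approx = (-2.0 * np.exp(-t) * np.cos(x)
          - 2.0 * np.exp(-2.0 * t) * np.cos(x)**2
          - (8.0 / 3.0) * np.exp(-3.0 * t) * np.cos(x)**3)
err = np.max(np.abs(exact - approx))   # should be O(exp(-4t))
```

The dominant neglected contributions are the $n=2$ term of the kernel and the fourth power in the logarithm series, both of size $e^{-4t}$, which is consistent with the bound in the Appendix.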
It is fairly immediate to see that in the ground states all spins point in the same direction, so we then only need to minimize the single-site energy terms. The first-order term more or less compensates the external field, and the second-order term is maximal when $\cos^2(x_i)$ is minimal, thus when $x_i$ has the value $\pi/2$ or $3 \pi/2$. The higher-order terms will only minimally change this picture. We can define a function $\e_t $ depending on $t$ such that asymptotically $\beta h=h_t + \e_t$ yields the two maxima $\pi/2 - \e_t$ and $3\pi/2 + \e_t$. The function $\e_t$ is a correction of the ground states pointing to the left or right. We present a schematic illustration of the two ground states. \begin{minipage}[hbt]{5cm} \centering \includegraphics[width= 4 cm, height= 4 cm]{GroundState22.eps} $(3\pi/2 + \e_t)_i$ \end{minipage} \hfill \begin{minipage}[hbt]{5cm} \centering \includegraphics[width= 4 cm, height= 4 cm]{GroundState11.eps} $(\pi/2 - \e_t)_i$ \end{minipage} \medskip Hence for every arbitrarily small external field $h$, we find a time interval depending on $h$ such that we obtain two reflection-symmetric ground states, with all spins pointing either (almost) to the right, $(\pi/2 -\e_t )_{i\in \Z^d}$, or (almost) to the left, $(3\pi/2 + \e_t)_{i\in \Z^d}$. The rest term $R_t(x_i)$ does not change this behaviour since it is suppressed by the first terms and is of order $\mathcal{O}_i(e^{-4t})$. Moreover, it respects the left-right symmetry. We will first, as a partial argument, show that the interaction \begin{equation} J \sum_{i \sim k} \cos(x_i - x_k) + h\sum_i \cos(x_i) + \sum_i \biggl( -2h_t\cos(x_i) - 2h_t^2\cos^2(x_i) - \frac{8}{3}h^{3}_t\cos^3(x_i) \biggr) \end{equation} has a low-temperature transition in $d \geq 2$. \medskip To show this we notice that we are in a similar situation as in \cite{vEntRus07}. 
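A direct maximization of the single-site part of $-\textbf{H}_{res3}^t$ illustrates the two ground states. In this Python sketch (the parameter values are hypothetical; $\beta h$ is chosen so as to nearly compensate the dynamical field term $2h_t$) the two maximizers come out close to $\pi/2$ and $3\pi/2$, and they are exactly reflection-related under $x \mapsto 2\pi - x$, since the potential depends on $x$ only through $\cos x$.

```python
import numpy as np

t = 1.5
ht = np.exp(-t)
beta_h = 2.0 * ht + 0.01   # initial field nearly compensating the h_t term

x = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
# single-site part of -H_res3 (the neighbour term is constant for aligned spins)
g = ((beta_h - 2.0 * ht) * np.cos(x)
     - 2.0 * ht**2 * np.cos(x)**2
     - (8.0 / 3.0) * ht**3 * np.cos(x)**3)
half = len(x) // 2
x_right = x[np.argmax(g[:half])]           # maximizer in (0, pi)
x_left = x[half + np.argmax(g[half:])]     # maximizer in (pi, 2*pi)
```

Both maximizers sit slightly inside the quarter points, which is the correction $\e_t$ described in the text.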
The conditioning of the double-layer system for the XY spins created left-right symmetric ground states. \medskip Now we want to apply a percolation argument for low-energy clusters to prove that spontaneous symmetry breaking occurs. The arguments follow essentially \cite{vEntRus07} and are based on \cite{Geo81}. The potential corresponding to the Hamiltonian $\eqref{Cpotential}$ is clearly a $C$-potential, that is, a potential which is nonzero only on subsets of the unit cube \cite{Geo81}. It is of finite range, translation-invariant and symmetric under reflections. \medskip Including the rest term (which is a translation-invariant single-site term) does not change this. A fortiori the associated measure is reflection positive and we can again use the same arguments as in \cite{vEntRus07} to deduce that for $\beta$ large enough, there is long-range order. This argument indicates how conditioning might induce a phase transition. \bigskip However, to get back to our original problem, that is, to prove the non-Gibbsianness of the evolved state, we need an argument which holds for values not only of $h$, but of $\beta h$, which are small uniformly in temperature. Only then can we deduce that there exists a time interval $(t_0(\beta,h), t_1(\beta,h))$ such that $|\mathcal{G}_{\beta}(\textbf{H}^t_{\beta} (\cdot,y^{spec}),\nu_0)| \geq 2$. To obtain this, for $d=3$, we can invoke a proof using infrared bounds (see e.g. \cite{FILS78,Geo88,Bis08}). Note that the infrared bound proof, although primarily developed for proving continuous symmetry breaking, also applies to models with discrete symmetry breaking as we have here. In fact we may include the rest term without any problem here, as the symmetry properties of the complete dynamical Hamiltonian are the same as those of our restricted one, and adding single-site terms does not spoil the reflection positivity. 
From this an initial temperature interval is established where Gibbsianness is lost after appropriate times. Indeed, the infrared bound provides a lower bound on the two-point function which holds uniformly in the single-site measure (which in our case varies only slightly anyway, as long as the field and the compensating term due to the kernel are small enough). This shows that a phase transition occurs at sufficiently low temperatures, as for decreasing temperatures the periodic-boundary-condition state converges to the symmetric mixture of the right- and left-pointing ground-state configurations. \bigskip \textbf{Comment}: One might expect that, by judiciously looking for other points of discontinuity, the time interval of proven non-Gibbsianness might be extended, hopefully also to $d=2$; however, qualitatively this does not change the picture. In fact, there are various configurations for which one might expect that conditioning on them will induce a first-order transition. For example, the XY model in at least two dimensions in a weak random field which is plus or minus with equal probability is expected to have such transitions \cite{Wehr2006}. The same situation should occur for various appropriately chosen (in particular random) configurations where spins point only up or down. In a somewhat similar vein, if the original field is not so weak, and thus also higher terms are non-negligible, we expect that qualitatively not much changes, and there will again be an intermediate-time regime of non-Gibbsianness at sufficiently low temperatures. \bigskip \textbf{About the proof of Theorem 2.2:} Let us now turn to the second statement. Here the initial temperature does not affect the argument. 
The intuitive idea, as mentioned before, is as follows: after a long time, the term due to the conditioning becomes much weaker than the initial external field (however weak it is), uniformly in the conditioning, and thus the system should behave in the same way as a plane rotor in a homogeneous external field, and have no phase transition. However, the higher-order terms which were helpful for proving the non-Gibbsianness now prevent us from using the ferromagneticity of the interaction. Indeed, we cannot use correlation inequalities of FKG type, and we will have to try analyticity methods. In fact, we expect that the statement should be true for each strength of the initial field. Indeed, once the time is large enough, the dynamical single-site term should be dominated by the initial field, and, just as in that case, one should have no phase transition \cite{Dun79, Dun79a, LieSok81}. However, the conclusion that we can consider the dynamical single-site term as a small perturbation, in which the free energy and the Gibbs measure are analytic, although eminently plausible, does not seem to follow from Dunlop's Yang-Lee theorem. For high fields, we can either invoke cluster expansion techniques, showing that the system is Completely Analytic, or Dobrushin uniqueness statements. Precisely such claims were developed for proving Gibbsianness of evolved measures at short times in \cite{vEntRus07} and in \cite{KueOpo07}. A direct application of those proofs also provides our theorem, which is for long times. \section{Conclusion} In this paper we extended the results from \cite{vEntRus07} and showed some results on loss and recovery of Gibbsianness for XY spin systems in an external field. Given a low-temperature initial Gibbs measure in a weak field and evolving with infinite-temperature dynamics, we find a time interval where Gibbsianness is lost. 
Moreover, at large times and strong initial fields, the evolved measure is a Gibbs measure, independently of the initial temperature. \newline Generalizations are possible to include, for example, more general finite-range, translation-invariant ferromagnetic interactions $\overset{\sim} \varphi$. We conjecture, but at this point cannot prove, that both the loss and recovery statements actually hold for arbitrary strengths of the initial field. \\ \textit{Acknowledgements:} We thank Christof K\"ulske, Alex Opoku, Roberto Fern\'andez, Cristian Spitoni and especially Frank Redig for helpful discussions. We thank Roberto Fern\'andez for a careful reading of the manuscript. We thank Fran\c{c}ois Dunlop for a useful correspondence. \section{Appendix} The logarithm of the transition kernel is given by \begin{equation} \log\biggl ( 1 + 2\sum_{n \geq 1}e^{-n^2 t}\cos(n(x_i-\pi)) \biggr) = \sum_{k \geq 1} \frac{(-1)^{k+1}}{k}\biggl ( 2 \sum_{n \geq 1} e^{-n^2 t} \cos(n(x_i-\pi))\biggr)^k. \label{logExp} \end{equation} Since the first term of the series of $p_t^{\odot}$ dominates, we can write \begin{equation*} 2\sum_{n \geq 1} e^{-n^2 t} \cos(n(x_i-\pi)) = -2 e^{-t}\cos(x_i) + Rest_t(x_i). \end{equation*} The rest term $Rest_t(x_i)$ is smaller than $2e^{-4t}$ uniformly in $x_i$. Then we can bound \begin{equation*} 2\sum_{n \geq 1} e^{-n^2 t} \cos(n(x_i-\pi)) \leq -2 e^{-t}\cos(x_i) + 2e^{-4t}. 
\end{equation*} Furthermore we write $\eqref{logExp}$ as \begin{eqnarray*} & & \biggl ( -2 e^{-t}\cos(x_i) + \mathcal{O}(e^{-4t}) \biggr) - \frac{1}{2}\biggl ( -2 e^{-t}\cos(x_i) + \mathcal{O}(e^{-4t}) \biggr)^2 + \\ & & \frac{1}{3}\biggl ( -2 e^{-t}\cos(x_i) + \mathcal{O}(e^{-4t}) \biggr)^3 + \sum_{k \geq 4} \frac{(-1)^{k+1}}{k}\biggl ( -2 e^{-t}\cos(x_i) + \mathcal{O}(e^{-4t})\biggr)^k \end{eqnarray*} and afterwards bound it by \begin{equation*} -2 e^{-t}\cos(x_i) + \mathcal{O}(e^{-4t}) - 2 e^{-2t}\cos^2(x_i) + \mathcal{O}(e^{-5t}) - \frac{8}{3} e^{-3t}\cos^3(x_i) + \mathcal{O}(e^{-6t}) + \mathcal{O}(e^{-4t}) \end{equation*} thus $\eqref{logExp}$ is then bounded by \begin{equation*} -2e^{-t}\cos(x_i) - 2e^{-2t}\cos^2(x_i) - \frac{8}{3}e^{-3t}\cos^3(x_i) + \mathcal{O}(e^{-4t}). \end{equation*} Altogether we consider the leading terms of the series $\eqref{logExp}$, $-2e^{-t}\cos(x_i) - 2e^{-2t}\cos^2(x_i) - \frac{8}{3}e^{-3t}\cos^3(x_i)$, separately and bound the rest uniformly in $x_i$ for every $i$ by $const \times e^{-4t}$ for large $t$.
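The claimed uniform $\mathcal{O}(e^{-4t})$ order of the remainder can also be checked by scaling: dividing the truncation error by $e^{-4t}$ should remain bounded as $t$ grows. The sketch below (a numerical consistency check only; the grid of times is arbitrary) does exactly that.

```python
import numpy as np

def trunc_err(t, npts=2048, nmax=40):
    # max over x of |log p_t(x, pi) - third-order truncation|
    x = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    n = np.arange(1, nmax + 1)
    kernel = 1.0 + 2.0 * np.sum(
        np.exp(-n**2 * t)[:, None] * np.cos(np.outer(n, x - np.pi)), axis=0)
    approx = (-2.0 * np.exp(-t) * np.cos(x)
              - 2.0 * np.exp(-2.0 * t) * np.cos(x)**2
              - (8.0 / 3.0) * np.exp(-3.0 * t) * np.cos(x)**3)
    return np.max(np.abs(np.log(kernel) - approx))

# the ratios err / exp(-4t) should stay bounded as t increases
ratios = [trunc_err(t) / np.exp(-4.0 * t) for t in (2.0, 2.5, 3.0, 3.5)]
```

Boundedness of these ratios is the numerical counterpart of the uniform bound $const \times e^{-4t}$ stated above.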
\begin{document} \author{Camilo Hern\'andez} \address{ IEOR Department\\ Columbia University\\ NY, USA. } \email{camilo.hernandez@columbia.edu} \author{Mauricio Junca} \address{ Mathematics Department\\ Universidad de los Andes\\ Bogot\'a, Colombia. } \email{mj.junca20@uniandes.edu.co} \author{Harold Moreno-Franco} \address{ Laboratory of Stochastic Analysis and its Applications\\ National Research University Higher School of Economics \\ Moscow, Russia. } \email{hmoreno@hse.ru} \keywords{Dividend payment, Optimal control, Ruin time constraint, Spectrally one-sided L\'evy processes} \title[A time of ruin constrained optimal dividend problem]{A time of ruin constrained optimal dividend problem for spectrally one-sided L\'evy processes} \begin{abstract} We introduce a longevity feature to the classical optimal dividend problem by adding a constraint on the time of ruin of the firm. We extend the results in \cite{HJ15}, now in the context of one-sided L\'evy risk models. We consider de Finetti's problem in both scenarios, with and without fixed transaction costs, e.g. taxes. We also study the constrained analogue of the so-called Dual model. To characterize the solution to the aforementioned models we introduce the dual problem and show that the complementary slackness conditions are satisfied, and therefore there is no duality gap. As a consequence, the optimal value function can be obtained as the pointwise infimum of auxiliary value functions indexed by Lagrange multipliers. Finally, we illustrate our findings with a series of numerical examples. \end{abstract} \maketitle \section{Introduction}\label{Int} Proposed in 1957 by Bruno de Finetti \cite{Definetti}, the problem of finding the dividend payout strategy that maximizes the discounted expected payout throughout the life of an insurance company has been at the core of actuarial science and risk theory. An important element of this problem is how one chooses to model the process describing the reserves of the firm, $X$. 
The solution to de Finetti's problem has been given for the cases where $X$ is assumed to be a compound Poisson process with negative jumps and positive drift, commonly referred to as the \emph{Cram\'er-Lundberg} model, where $X$ is a Brownian motion, and where $X$ is the sum of the previous two, \cite{Schmidli,asta,tak}. Nowadays, the case in which $X$ is assumed to be a spectrally negative L\'evy process is the most general set up for which the problem has been studied (references below). The case when $X$ is spectrally positive is also considered in the literature and is known as the Dual model, \cite{KyYa14}. This set up fits in the context of a company whose income depends on inventions or discoveries. Both settings make strong use of properties of the underlying L\'evy measure and fluctuation theory of L\'evy processes, which requires the study of the so-called \emph{scale functions}. A common result in all these scenarios is that the optimal strategy, in the absence of transaction costs, corresponds to a \emph{barrier/reflection strategy}. In such a strategy the reserves are reduced to the barrier level by paying out dividends. Nevertheless, in general the solution is not necessarily of this type. In \cite{azcuemuler2005} the first example for which no barrier strategy is optimal, in the \emph{Cram\'er-Lundberg} model with a Gamma claim distribution, was presented. Today, it is well known that, in the spectrally negative case, barrier strategies solve the optimal dividend problem if the tail of the L\'evy measure is log-convex\footnote{A function $f$ is said to be log-concave [resp. log-convex] if $\log(f)$ is concave [resp. convex].}, see \cite{Loeffen10}. For the spectrally positive case, the optimal strategy is always a barrier strategy, \cite{KyYa14}. 
In the presence of transaction costs, when $X$ is a spectrally negative L\'evy process and the L\'evy measure has a log-convex density, \cite{LoeffenTrans} shows that the optimal strategy is given by paying out dividends in such a way that the reserves are reduced to a certain level $b_-$ whenever they are above another level $b_+>b_-\geq0$. This strategy is known as a \emph{single band strategy}. The same result holds for the Dual model, \cite{BayraktarImpdual}. However, a missing element in the current set up had long been noticed. The longevity aspect of the firm remained a separate problem, see \cite{schmidli2002} for a survey on this matter. Despite efforts to integrate both features, \cite{Hipp03,Jostein03,ThonAlbr,Grandits}, it was not until very recently that a successful solution to a model that actually accounts for the trade-off between performance and longevity was presented. In \cite{HJ15}, the authors considered de Finetti's problem in the setting of Cram\'er-Lundberg reserves with exponentially distributed jumps, adding a constraint on the expected time of ruin of the firm. The main contribution of this article is to extend the results of \cite{HJ15} to different models. Namely, to the case in which the reserves are modeled by a spectrally negative L\'evy process with completely monotone L\'evy measure, with and without transaction costs, and to the Dual model. As an intermediate step we also show that the scale functions of a spectrally negative L\'evy process with completely monotone L\'evy measure are strictly log-concave on an unbounded interval. This paper is organized as follows: In Section \ref{problem} we present the problem we want to solve and describe the strategy to solve it. In Section \ref{scale} we review the main results in fluctuation theory of spectrally negative L\'evy processes. 
Section \ref{SecdeFinetti} presents the solution to the constrained dividend problem for de Finetti's model, first without transaction costs and later including them. The result on strict log-concavity of scale functions is also included in this section, Corollary \ref{strictlogconcave}. Section \ref{SecDual} presents the solution of the constrained problem for the Dual model. In the following section we illustrate our results through a series of numerical examples. We conclude this article with a section of conclusions and questions. \section{Problem formulation}\label{problem} Let $X$ be the process modeling the reserves of the firm. In the setting of this paper we will assume $X$ to be a \emph{spectrally one-sided L\'evy process}, i.e. a spectrally negative [resp. positive] L\'evy process which has neither monotone paths nor positive [resp. negative] jumps. The above process is defined on the filtered probability space $(\Omega,\F,\FF,\P)$, where $\FF=(\F_t)_{t\geq0}$ is the natural filtration generated by the process $X$. Given the process $X$, we consider the family of probability measures $\{\P_x:x\in\R\}$ such that under $\P_x$ we have $X_0=x$ a.s. (and so $\P_0=\P$), and we denote by $\E_x$ the expectation with respect to $\P_x$. The insurance company is allowed to pay dividends, which are modeled by the process $D=(D_t)_{t\geq0}$ representing the cumulative payments up to time $t$. A dividend process is called admissible if it is a non-decreasing, right continuous with left limits, i.e. c\`adl\`ag, process adapted to the filtration $\FF$ which starts at 0. Therefore, the reserves process under a dividend process $D$ reads as \begin{equation}\label{surplus} L_t^D= X_t-D_t. \end{equation} Let $\tau^D$ denote the time of ruin under the dividend process $D$, i.e., $\tau^D=\inf\{t\geq0: L_t^D<0\}$. 
We also require that the dividend process does not lead to ruin, i.e., $D_{t+}-D_t\leq L_t^D$ for $t<\tau^D$ and $D_t= D_{\tau^D}$ for $t\geq \tau^D$, so no dividends are paid after ruin. We call $\Theta$ the set of such processes. As proposed by de Finetti, the company wants to maximize the expected value of the discounted flow of dividend payments along its lifespan, where the lifespan of the company is determined by its ruin. If we also consider a transaction cost each time dividends are paid, then a continuous dividend process is forbidden; therefore, we require in addition that the dividend processes $D$ are pure jump processes. So, the objective function of the company can be written as \begin{align} \V^D(x):=\E_{x}\left[ \int_0^{\tau^D-}e^{-q t}(d D_t-\beta d N^D_{t})\right], \end{align} where $q$ is the discount factor, $\beta\geq0$ the transaction cost and $N^D$ is the stochastic process that counts the number of jumps of $D$. The purpose of this paper is to add a restriction on the dividend process $D$ to the previous problem, which we model by the constraint: \begin{equation}\label{Rest} \E_{x} \Big[e^{-q \tau^D}\Big]\leq K, \quad 0\leq K\leq1 \text{ fixed.} \end{equation} The motivation behind such a constraint is that it takes into account the time of ruin under the dividend process. One possible way to choose the parameter $K$ is to consider the equivalent constraint $$\E_{x}\left[\int_0^{\tau^D}e^{-qt}dt\right]\geq\int_0^Te^{-qt}dt, \quad T>0,$$ as in \cite{HJ15}. Also, note that $$\E_{x} \Big[e^{-q \tau^D}\1_{\tau^D<\infty}\Big]\leq\E_{x} \Big[\1_{\tau^D<\infty}\Big]=\P_x(\tau^D<\infty),$$ hence, another possibility is to interpret the constraint as a restriction on the probability of ruin weighted by the time of ruin. The advantage of the chosen form of the constraint, as will become clear in the following sections, is that it fits in with the model in a smooth way. 
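To see the equivalence with the lifetime formulation, note that the discounted lifetime integrates in closed form; the following short computation (with $K=e^{-qT}$) makes the correspondence explicit:

```latex
\begin{align*}
\E_{x}\left[\int_0^{\tau^D}e^{-qt}dt\right]
  &= \frac{1}{q}\Big(1-\E_{x}\big[e^{-q\tau^D}\big]\Big)
     \geq \int_0^T e^{-qt}dt = \frac{1-e^{-qT}}{q} \\
  &\iff \E_{x}\big[e^{-q\tau^D}\big] \leq e^{-qT} =: K.
\end{align*}
```

Thus choosing $K$ amounts to prescribing a minimal "discounted lifetime" benchmark $T$.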
Combining all the above components, we state the problem we aim to solve: \begin{align*}\label{P1} \tag{P} V(x):=\underset{D\in \Theta}\sup\quad \V^D(x), \quad \text{s.t.} \quad \E_{x} \Big[e^{-q \tau^D}\Big]\leq K. \end{align*} In order to solve this problem we use Lagrange multipliers to reformulate it. For $ \Lambda \geq 0$ we define the function \begin{equation}\label{lagrangian} \V_{\Lambda}^{D}(x):=\V^D(x)-\Lambda\E_{x}\Big[e^{-q \tau^D }\Big]+ \Lambda K. \end{equation} We will follow the same strategy as in \cite{HJ15} to verify strong duality, which is summarized here: First note that \eqref{P1} is equivalent to $\underset{D\in \Theta}\sup\,\, \underset{\Lambda\geq 0}\inf\,\,\V_{\Lambda}^{D}(x)$ since $$\underset{\Lambda\geq 0}\inf\,\,\V_{\Lambda}^{D}(x)=\begin{cases} \V^D(x), &\mbox{if }\E_{x} \Big[e^{-q\tau^D}\Big]\leq K \\ -\infty, & \mbox{otherwise}. \end{cases} $$ Next, the dual problem of \eqref{P1} is defined as \begin{equation}\label{D} \tag{D} \underset{\Lambda\geq 0}\inf\,\,\underset{D\in \Theta}\sup\,\, \V_{\Lambda}^{D}(x), \end{equation} which is always an upper bound for the primal \eqref{P1}. Therefore, the main goal of this paper is to prove that $$\underset{D\in \Theta}\sup\,\, \underset{\Lambda\geq 0}\inf\,\,\V_{\Lambda}^{D}(x)= \underset{\Lambda\geq 0}\inf\,\,\underset{D\in \Theta}\sup\,\, \V_{\Lambda}^{D}(x).$$ Now, to solve \eqref{D}, we can focus on solving, for fixed $\Lambda\geq0$, the problem \begin{equation}\label{P2} \tag{P$_\Lambda$} V_\Lambda(x):=\underset{D\in \Theta}\sup\,\, \V_{\Lambda}^{D}(x). \end{equation} Note that this is the optimal dividend problem with a particular type of Gerber-Shiu penalty function, as in \cite{avram2015}. There, the authors considered the spectrally negative case under sufficient conditions on the L\'evy measure and proved the optimality of barrier and single band strategies without and with transaction costs, respectively. 
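Weak duality, i.e. the inequality between \eqref{P1} and \eqref{D}, can be illustrated with a toy finite family of strategies. In the Python sketch below (purely illustrative; the payoffs $v_D$, constraint values $g_D = \E_x[e^{-q\tau^D}]$ and level $K$ are made-up numbers, and the multiplier is minimized over a finite grid) the dual value dominates the primal one; showing that the gap closes in the actual control problem is precisely what the complementary slackness argument of this paper achieves.

```python
# toy payoffs v[D] and constraint values g[D] for four candidate strategies
v = [3.0, 5.0, 8.0, 10.0]
g = [0.10, 0.30, 0.55, 0.80]
K = 0.5

# primal: best payoff among strategies satisfying g_D <= K
primal = max(vD for vD, gD in zip(v, g) if gD <= K)

# dual: minimize over a grid of multipliers the sup of the Lagrangian
lams = [0.01 * i for i in range(2001)]
dual = min(max(vD - lam * (gD - K) for vD, gD in zip(v, g)) for lam in lams)
```

Here `primal <= dual` always holds (weak duality); in this discrete toy the inequality is strict, unlike in the continuous setting of the paper.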
The optimal strategy in the Dual model, when $\beta=0$, also corresponds to a barrier strategy regardless of the L\'evy measure, as shown in \cite{Yin}. In both scenarios the value of such a barrier depends on the shape of the well-known scale functions. Not surprisingly, this family of functions is also the tool to characterize the solution of \eqref{P1}. The following section formally presents this family of functions and motivates its introduction in this context. \section{Scale functions of Spectrally Negative L\'evy processes}\label{scale} In this section $X$ will be assumed to be spectrally negative. However, the differences with the spectrally positive case should be clear, since $-X$ is spectrally positive. For the process $X$, its \emph{Laplace exponent} is given by \begin{equation} \psi(\theta):= \log (\E[e^{\theta X_1}]), \end{equation} and it is well defined for $\theta\geq0$. The L\'evy-Khintchine formula guarantees the existence of a unique triplet $(\gamma,\sigma,\nu)$, with $\gamma\in \R$, $\sigma\geq0$ and $\nu$ a measure concentrated on $(-\infty,0)$ satisfying $\int_{(-\infty,0)} (1\wedge x^2)\nu(dx)<\infty$, such that \begin{align*} \psi(\theta)=\gamma \theta +\frac{1}{2}\sigma^2 \theta^2 +\int_{(-\infty,0)}(e^{\theta x}-1-\theta x \1_{\{-1<x\}})\nu(dx), \end{align*} for every $\theta\geq 0$. The triplet $(\gamma,\sigma,\nu)$ is commonly referred to as the \emph{L\'evy triplet}. Scale functions appear naturally in the context of fluctuation theory of spectrally negative L\'evy processes. 
More specifically, they are characterized as the family of functions $W^{(q)}:\R \rightarrow [0,\infty)$ defined for each $q\geq 0$, such that $W^{(q)}(x)=0$ for $x<0$ and, on $[0,\infty)$, $W^{(q)}$ is the unique strictly increasing and continuous function whose Laplace transform satisfies \begin{equation}\label{wqlaplace} \int_0^\infty e^{-\beta x} W^{(q)}(x) dx= \frac{1}{\psi(\beta)-q}, \qquad \beta >\Phi(q), \end{equation} where $\Phi(q):=\sup\{\theta\geq0: \psi(\theta)=q\}$ is the right inverse of $\psi$. Such functions $W^{(q)}$ are referred to as the \emph{$q$-scale functions}. Associated with these functions, we define for $q\geq0$ the functions $Z^{(q)}:\R\rightarrow[1,\infty)$ and $\bar{Z}^{(q)}:\R\rightarrow \R$ as \begin{align*} Z^{(q)}(x)&:=1+q\int_0^x W^{(q)}(z)dz,\\ \bar{Z}^{(q)}(y)&:=\int_0^y Z^{(q)}(z)dz=y+q\int_0^y\int_0^z W^{(q)}(w)dw dz. \end{align*} We now review some properties of the scale functions, available for example in \cite{KKRivero2013}, that will be needed later on. First, it is useful to understand their behaviour at $0$ and at $\infty$. For $q\geq 0$, $W^{(q)}(0)=0$ if and only if $X$ has unbounded variation. Otherwise, $W^{(q)}(0)=1/c$, where $c=\gamma+\int_{-1}^0 |x| \nu(dx)$. Recall that $c$ must be strictly positive to exclude the case of monotone paths. The initial value of the derivative of the scale function is given by \begin{align*} W^{(q)'}(0+)=\begin{cases} 2/\sigma^2,\qquad &\text{if } \sigma > 0 \\ (\nu(-\infty,0)+q)/c^2,\qquad &\text{if } \sigma=0 \text{ and } \nu(-\infty,0)<\infty\\ \infty, & \text{otherwise}. \end{cases} \end{align*} Regarding the behaviour at infinity, we know that \begin{align}\label{limitqfact} \lim_{x\rightarrow \infty} e^{-\Phi(q)x}W^{(q)}(x)=\frac{1}{\psi'(\Phi(q))}. \end{align} \begin{remark}\label{logconcaveprop} From \cite{Loeffen08} we know that $q$-scale functions are always log-concave on $(0,\infty)$. \end{remark} A useful representation of scale functions was provided in \cite{Loeffen08}. 
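For a concrete instance, a Brownian motion with drift ($\psi(\theta)=\mu\theta+\tfrac{1}{2}\sigma^2\theta^2$, no jumps) admits the explicit scale function $W^{(q)}(x)=(e^{\Phi(q)x}-e^{-\rho x})/\sqrt{\mu^2+2q\sigma^2}$, where $-\rho$ is the negative root of $\psi(\theta)=q$; this follows by partial fractions from \eqref{wqlaplace}. The Python sketch below (the parameter values are arbitrary) verifies the Laplace transform identity \eqref{wqlaplace} for this case by numerical quadrature.

```python
import numpy as np

mu, sigma, q = 1.0, 1.0, 0.1
disc = np.sqrt(mu**2 + 2.0 * q * sigma**2)
Phi = (-mu + disc) / sigma**2    # right inverse of psi at q
rho = (mu + disc) / sigma**2     # minus the negative root of psi(theta) = q

def W(x):
    # explicit q-scale function for Brownian motion with drift
    return (np.exp(Phi * x) - np.exp(-rho * x)) / disc

beta = 2.0                       # any beta > Phi(q)
xs = np.linspace(0.0, 60.0, 600001)
ys = np.exp(-beta * xs) * W(xs)
lhs = np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs))   # trapezoid rule
rhs = 1.0 / (mu * beta + 0.5 * sigma**2 * beta**2 - q)
```

The integrand decays like $e^{-(\beta-\Phi(q))x}$, so truncating the integral at a moderate upper limit is harmless.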
Making use of Bernstein's theorem it was proven that when the L\'evy measure $\nu$ has a completely monotone density\footnote{A function $f$ is said to be \emph{completely monotone} if $f \in C [0,\infty)$, $f \in C^\infty (0,\infty)$ and it satisfies $(-1)^n \frac{d^n}{dx^n}f(x)\geq 0$ for all integers $n\geq0$ and all $x>0$.}, and $q>0$, \begin{align}\label{qscalecompmon} W^{(q)}(x)=\frac{e^{\Phi(q)x}}{\psi'(\Phi(q))}- f(x), \quad x>0, \end{align} with $f$ a completely monotone function. Furthermore, from the proof of this result it is known that $f(x)=\int_{0+}^{\infty}e^{-xt}\xi(dt+\Phi(q))$ where $\xi$ is a finite measure on $(0,\infty)$. Using this one can deduce that $f^{(n)}(x)\rightarrow0$ as $x\rightarrow\infty$ for all non-negative integers $n$. It also follows that $q$-scale functions are infinitely differentiable and that their odd derivatives are strictly positive and strictly log-convex. Scale functions are present in the vast majority of fluctuation identities of spectrally negative L\'evy processes and, as we will see next, they appear in the setting of the optimal dividend problem. \section{Solution of the constrained de Finetti's problem}\label{SecdeFinetti} Let us consider the case where the reserves process $X$ is a spectrally negative L\'evy process. We will solve the constrained problem in both scenarios, first without transaction cost, and then for $\beta>0$. \subsection{No transaction cost} As mentioned before, optimal strategies for Problem \eqref{P2} in this setting are barrier strategies. If we consider the dividend barrier strategy at level $b$, $D^b$, we have that $D_t^{b}=(b\vee \overline{X}_t)-b$ for $t\geq 0$, where $\overline{X}_t:=\underset{0\leq s\leq t}{\sup} X_s$, and therefore $X_t^{D^b}=b - [(b\vee \overline{X}_t)-X_t]$. The process in square brackets is a type of reflected process. More generally, for a given process $Y$ we define $\hat{Y}_t^s:=s\vee \overline{Y}_t-Y_t,\, t\geq0$, known as the \emph{reflected process at its supremum with initial value $s$}.
For such a process we also define the exit time $\hat{\sigma}_k^s:=\inf \{t>0:\hat{Y}_t^s>k\}$. From the previous definitions it follows that $X_t^{D^b}=b-\hat{X}_t^{b}$ and $\tau^{D^{b}}=\hat{\sigma}_{b}^{b}$. This simple observation provides a useful identity for the value function when a barrier strategy is followed. The next identity, first presented in \cite{Gerber72}, can be found in \cite{kyprianou2014}. \begin{prop} Let $b>0$ and consider the dividend process $D_t^{b}=X_t -(b- \hat{X}_t)$. For $x\in [0,b]$, \begin{equation}\label{Valuefunctqscale} \V^{D^b}(x)=\E_x\left[\int_0^{\tau^{D^{b}}-} e^{-qt}dD_t^b\right]=\frac{W^{(q)}(x)}{W^{(q)'}_{+}(b)}, \end{equation} where $W^{(q)'}_{+}(b)$ is understood as the right derivative of $W^{(q)}$ at $b$. \end{prop} The previous proposition suggests that in the unconstrained de Finetti's problem, the existence of an optimal barrier strategy boils down to the existence of a minimizer of $W^{(q)'}$, hence the importance of understanding the properties of scale functions and their derivatives. It was first shown in \cite{Loeffen082} that when $W^{(q)}$ is sufficiently smooth, meaning it is once [resp. twice] continuously differentiable when $X$ is of bounded [resp. unbounded] variation, and $W^{(q)'}$ is increasing on $(b^*,\infty)$, where $b^*$ is the largest point where $W^{(q)'}$ attains its minimum, the barrier strategy at level $b^*$ is optimal. A sufficient condition for such properties to be satisfied is for $X$ to have a L\'evy measure with completely monotone density. This work also showed that, in that case, $W^{(q)'}$ is strictly convex on $(0,\infty)$. Later results in \cite{KyprianouRS10} showed that under the weaker assumption of a log-convex density of the L\'evy measure the same result holds on $(b^*,\infty)$ but not necessarily on $(0,\infty)$.
Finally, \cite{Loeffen10} made a further improvement, showing that if the tail of the L\'evy measure is log-convex, then the scale function of the spectrally negative L\'evy process has a log-convex derivative. \subsubsection{Solution of \eqref{P2}}\label{dualclassical} Likewise, scale functions play a central role in the setting of the constrained dividend problem. The next result follows from \cite{Avram04} and is also shown in \cite{avram2015}. \begin{prop}\label{Lagrangianbarrier} For a sufficiently smooth $q$-scale function $W^{(q)}$, the function $\V_\Lambda^{D^b}$, where $D^b$ is the barrier strategy at level $b \geq 0$, for $x\geq 0$ is given by \begin{align}\label{lagrangianbarrier} \V_\Lambda^{D^b}(x)= \begin{cases} W^{(q)}(x)\Big[\frac{1+q\Lambda W^{(q)}(b)}{W^{(q)'}(b)}\Big]-\Lambda Z^{(q)}(x) + \Lambda K &\text{if}\quad x\leq b\\ x-b+\mathcal{V}_\Lambda^{D^b}(b) &\text{if}\quad x>b. \end{cases} \end{align} \end{prop} The solution of \eqref{P2} can be extracted from \cite{Loeffen08}. In that work it was proven that barrier strategies are optimal under the assumption of complete monotonicity of the density of the L\'evy measure, so in this section we will make this assumption. To certify the optimality of an admissible barrier strategy two steps are carried out: first, \eqref{lagrangianbarrier} is used to propose a candidate optimal barrier level; second, optimality is certified through a verification lemma argument. To understand the solution of \eqref{P2}, we will elaborate on how this candidate is proposed. In light of Proposition \ref{Lagrangianbarrier}, define the function $\zeta_{\Lambda}:[0,\infty)\rightarrow \R$ by \begin{align}\label{functionZ} \zeta_{\Lambda}(\varsigma):=\frac{1+q\Lambda W^{(q)}(\varsigma)}{W^{(q)'}(\varsigma)},\quad \varsigma>0, \end{align} and $\zeta_{\Lambda}(0):=\underset{\varsigma\downarrow 0}{\lim}\, \zeta_{\Lambda}(\varsigma)$.
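To make these objects concrete, the following numerical sketch (not part of the paper; the parameters $c=1$, $\lambda=1$, $\alpha=2$, $q=0.1$, $\Lambda=1$ are purely illustrative) builds the classical closed-form $q$-scale function of the Cram\'er--Lundberg model with $\mathrm{Exp}(\alpha)$ claims, checks the Laplace-transform characterization \eqref{wqlaplace} numerically, and observes that $\zeta_\Lambda$ increases up to a unique maximizer and then decreases, in agreement with the cited behaviour:

```python
import numpy as np
from scipy.integrate import quad

# Cramer-Lundberg model with Exp(alpha) claims; parameters are illustrative.
# Laplace exponent: psi(theta) = c*theta - lam*theta/(alpha + theta).
c, lam, alpha, q = 1.0, 1.0, 2.0, 0.1

# Roots of psi(theta) = q, i.e. of c*theta^2 + (c*alpha - lam - q)*theta - q*alpha.
tm, tp = np.sort(np.roots([c, c * alpha - lam - q, -q * alpha]).real)  # tp = Phi(q)
Ap, Am = (alpha + tp) / (tp - tm), (alpha + tm) / (tp - tm)

def W(x):   # closed-form q-scale function, by partial fractions of the transform
    return (Ap * np.exp(tp * x) - Am * np.exp(tm * x)) / c

def Wp(x):  # its derivative
    return (Ap * tp * np.exp(tp * x) - Am * tm * np.exp(tm * x)) / c

psi = lambda t: c * t - lam * t / (alpha + t)

# Check the Laplace transform identity at some beta > Phi(q).
beta = tp + 1.0
lhs, _ = quad(lambda x: np.exp(-beta * x) * W(x), 0, np.inf)
assert abs(lhs - 1.0 / (psi(beta) - q)) < 1e-6
assert abs(W(0.0) - 1.0 / c) < 1e-10      # bounded variation: W(0) = 1/c

# zeta_Lambda rises to a unique interior maximizer, then decreases.
Lam = 1.0
grid = np.linspace(0.01, 20.0, 2000)
zeta = (1.0 + q * Lam * W(grid)) / Wp(grid)
i = int(np.argmax(zeta))
assert 0 < i < len(grid) - 1
assert np.all(np.diff(zeta[: i + 1]) > 0) and np.all(np.diff(zeta[i:]) < 0)
```

Here `grid[i]` approximates the maximizer of $\zeta_\Lambda$; the closed form for $W^{(q)}$ is classical for this model, while the parameter values are arbitrary choices made for the check.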
Now, the barrier strategy at level \begin{align}\label{bLambda} b_\Lambda:=\sup \{b:\zeta_{\Lambda}(b)\geq \zeta_{\Lambda}(\varsigma),\text{ for all } \varsigma\geq 0\} \end{align} is proposed as the candidate optimal strategy for \eqref{P2}. \begin{remark}\label{complmonotprop} From \cite{Loeffen08} we know that when the L\'evy measure $\nu$ is assumed to have a completely monotone density, the set of maxima of the function $\zeta_{\Lambda}$ consists of a single point for all $\Lambda$. In fact, $\zeta_{\Lambda}$ is strictly increasing in $(0,b_{\Lambda})$ and strictly decreasing in $(b_{\Lambda},\infty)$. \end{remark} We now state the theorem that characterizes the solution of \eqref{P2}. \begin{thm}[Optimal strategy for \eqref{P2}] Suppose the L\'evy measure of the spectrally negative L\'evy process $X$ has a completely monotone density. Then the optimal strategy consists of a barrier strategy at level $b_\Lambda$ given by \eqref{bLambda}, and the corresponding value function is given by equation \eqref{lagrangianbarrier}. \end{thm} \subsubsection{Solution of \eqref{P1}}\label{P1classical} We now proceed to solve \eqref{P1} following the same ideas as in \cite{HJ15}. Let $b_0$ be the optimal barrier for \eqref{P2} with $\Lambda=0$, that is, the optimal barrier for the unconstrained problem. Let $\bar{\Lambda}:=\sup \{\Lambda\geq 0: b_\Lambda=0\}\vee0$. Note that if $\bar{\Lambda}>0$, then $b_0=0$. Now, since $b_{\Lambda}$ is the only maximum of $\zeta_{\Lambda}$, we consider the function $\Lambda:[b_0,\infty)\rightarrow \R_+$ defined by \begin{align}\label{btolambdamap} \Lambda(b):=\begin{cases} 0 & \mbox{if } b=b_0\\ \frac{-W^{(q)''}(b)}{q[W^{(q)}(b) W^{(q)''}(b)-[W^{(q)'}(b)]^2]} & \mbox{if } b>b_0. \end{cases} \end{align} We will show that this function establishes a bijection between $b$ and $\Lambda$ such that $b_{\Lambda(b)}=b$. Figure \ref{graphLambda} shows the behavior of the map when $\bar{\Lambda}=0$ and $\bar{\Lambda}>0$.
\begin{figure}[t] \includegraphics[width=0.46\linewidth]{barlambdazero.pdf} \hfill \includegraphics[width=0.46\linewidth]{barlambdapositivo.pdf} \caption{The map $\Lambda(b)$. On the left $\bar{\Lambda}=0$ and $b_0>0$. On the right $\bar{\Lambda}>0$ and $b_0=0$. These maps correspond to different choices of parameters for the Cram\'er-Lundberg model with exponential claims. See \cite{HJ15} for the explicit formula of the map.}\label{graphLambda} \end{figure} \begin{prop}\label{btolambdaprop} For each $b \in (b_0,\infty)$ the barrier strategy at level $b$ is optimal for \eqref{P2} with $\Lambda(b)$. Also, this map is strictly increasing. \begin{proof} We want to show that the function $\Lambda(b)$ is well defined and maps barrier levels $b>b_0$ to $\Lambda(b)$ such that the pair $(D^b,\Lambda(b))$ is optimal for \eqref{P2}. To see this, first recall from \cite{Loeffen08} that $W^{(q)''}(b)$ is strictly positive for $b>b_0$. Also, the log-concavity of the $q$-scale function implies that $W^{(q)}(b) W^{(q)''}(b)-[W^{(q)'}(b)]^2\leq0$, and we claim that it cannot be $0$. Since \begin{align*} \zeta'_{\Lambda}(\varsigma)=-\frac{W^{(q)''}(\varsigma)+\Lambda q[W^{(q)}(\varsigma) W^{(q)''}(\varsigma)-[W^{(q)'}(\varsigma)]^2]}{[W^{(q)'}(\varsigma)]^2}, \end{align*} this would prove that $\zeta'_{\Lambda(b)}(b)=0$ for $b>b_0$ and therefore the pair $(D^b,\Lambda(b))$ is optimal for \eqref{P2}. To prove the claim we argue by contradiction. Let $\hat{b}>b_0$ be the minimum such that $W^{(q)}(\hat{b}) W^{(q)''}(\hat{b})-[W^{(q)'}(\hat{b})]^2=0$. We have two possibilities: either $W^{(q)}(b') W^{(q)''}(b')-[W^{(q)'}(b')]^2<0$ for some value $b'>\hat{b}$, or the expression equals zero on $[\hat{b},\infty)$. In the first case, by the continuity of $\Lambda(\cdot)$ in its domain, there would be two values $b''<\hat{b}<b'$ such that $\Lambda(b'')=\Lambda(b')$.
This implies that those two barrier values in $(b_0,\infty)$ are optimal for \eqref{P2} for the same value of $\Lambda$, which contradicts Remark \ref{complmonotprop}. In the latter case, it follows that the tail of $\log(W^{(q)})$ is linear, and so is the tail of $\log(W^{(q)'})$, which contradicts the strict log-convexity of $W^{(q)'}$ on $(0,\infty)$, see Remark \ref{complmonotprop}. Finally, as $W^{(q)'}(x)$ is strictly log-convex and strictly positive on $(0,\infty)$, \begin{align*} \frac{d\Lambda(b)}{db}=\frac{W^{(q)'}(b)[W^{(q)'}(b)W^{(q)'''}(b)-[W^{(q)''}(b)]^2]}{q[W^{(q)}(b) W^{(q)''}(b)-[W^{(q)'}(b)]^2]^2} \end{align*} is always positive, and so the map is strictly increasing. \end{proof} \end{prop} As a consequence of the proof of the previous proposition we have the following important property of $q$-scale functions. \begin{corollary}\label{strictlogconcave} $W^{(q)}(x)$ is strictly log-concave in $(b_0,\infty)$. \end{corollary} The behavior of the map $\Lambda(b)$ at infinity will be important for the final result of this section. \begin{lemma}\label{lambdainftylemma} $\Lambda(b)\rightarrow \infty$ as $b\rightarrow \infty$. \end{lemma} \begin{proof}Note that \begin{align*} \Lambda(b)=\frac{1}{q} \left[\frac{W^{(q)'}(b)^2}{W^{(q)''}(b)}-W^{(q)}(b)\right]^{-1}, \end{align*} so, in order to prove the result, we need to show that the term in brackets goes to $0$.
Using \eqref{qscalecompmon} we can obtain the following: \begin{align*} \frac{W^{(q)'}(b)^2}{W^{(q)''}(b)}-W^{(q)}(b)=&\frac{\left[\frac{\Phi(q)e^{\Phi(q)b}}{\psi'(\Phi(q))}- f'(b)\right]^2}{\left[\frac{\Phi(q)^2 e^{\Phi(q)b}}{\psi'(\Phi(q))}- f''(b)\right]}-\left[\frac{e^{\Phi(q)b}}{\psi'(\Phi(q))}- f(b)\right]\\ =& \frac{e^{\Phi(q)b}}{\psi'(\Phi(q))}\left[ \frac{\frac{\Phi(q)^2 e^{\Phi(q)b}}{\psi'(\Phi(q))}-\Phi(q)f'(b)}{\frac{\Phi(q)^2 e^{\Phi(q)b}}{\psi'(\Phi(q))}- f''(b)} -1\right]\\ &\hspace{1cm}+f(b)-f'(b)\left[\frac{\frac{\Phi(q)e^{\Phi(q)b}}{\psi'(\Phi(q))}- f'(b)}{\frac{\Phi(q)^2 e^{\Phi(q)b}}{\psi'(\Phi(q))}- f''(b)}\right]. \end{align*} Since $f$ and all its derivatives vanish at infinity, we have that for $b$ large \begin{align*} \frac{W^{(q)'}(b)^2}{W^{(q)''}(b)}-W^{(q)}(b)=&\frac{e^{\Phi(q)b}}{\psi'(\Phi(q))}\left[ \frac{f''(b)-\Phi(q)f'(b)}{\frac{\Phi(q)^2 e^{\Phi(q)b}}{\psi'(\Phi(q))}- f''(b)}\right]+o(1)\\ =&\frac{1}{\psi'(\Phi(q))} \left[\frac{f''(b)-\Phi(q)f'(b)}{\frac{\Phi(q)^2 }{\psi'(\Phi(q))}- \frac{f''(b)}{e^{\Phi(q)b}}}\right]+o(1). \end{align*} Now, since \begin{align*} \frac{f''(b)-\Phi(q)f'(b)}{\frac{\Phi(q)^2 }{\psi'(\Phi(q))}- \frac{f''(b)}{e^{\Phi(q)b}}} \longrightarrow 0,\quad \textrm{as} \quad b\rightarrow\infty, \end{align*} we have the result. \end{proof} From the previous lemma and Proposition \ref{btolambdaprop} we obtain the following corollary. \begin{corollary}\label{bLambdainfty} The map $\Lambda(b)$ is one-to-one onto $(\bar{\Lambda},\infty)$. Furthermore, $b_{\Lambda}$ is strictly increasing and goes to $\infty$ as $\Lambda$ goes to $\infty$. \end{corollary} Now, in order to show the complementary slackness condition (condition \eqref{cond3clas} in the proposition below), we need to understand the behavior of the constraint as a function of the barrier level.
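The monotone blow-up of the map $\Lambda(b)$ can also be seen numerically. The sketch below (an illustration only, reusing the Cram\'er--Lundberg model with $\mathrm{Exp}(\alpha)$ claims and hypothetical parameters) evaluates \eqref{btolambdamap} at increasing barrier levels above $b_0$:

```python
import numpy as np

# Cramer-Lundberg model with Exp(alpha) claims; parameters are illustrative.
c, lam, alpha, q = 1.0, 1.0, 2.0, 0.1
tm, tp = np.sort(np.roots([c, c * alpha - lam - q, -q * alpha]).real)
Ap, Am = (alpha + tp) / (tp - tm), (alpha + tm) / (tp - tm)

def dW(x, n):
    # n-th derivative of the closed form W(x) = (Ap e^{tp x} - Am e^{tm x})/c
    return (Ap * tp**n * np.exp(tp * x) - Am * tm**n * np.exp(tm * x)) / c

# b0: the unique zero of W'', i.e. the minimizer of W'.
b0 = np.log((Am * tm**2) / (Ap * tp**2)) / (tp - tm)

def Lambda(b):
    # the map b -> Lambda(b) from the display above, valid for b > b0
    W0, W1, W2 = dW(b, 0), dW(b, 1), dW(b, 2)
    return -W2 / (q * (W0 * W2 - W1**2))

vals = [Lambda(b0 + s) for s in (0.5, 1.0, 2.0, 4.0)]
assert b0 > 0
assert all(v > 0 for v in vals)          # the map is positive above b0
assert vals == sorted(vals)              # and increasing along these points
```

The denominator $W^{(q)}W^{(q)''}-[W^{(q)'}]^2$ stays strictly negative here, matching the log-concavity argument in the proof above.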
Observing Equations \eqref{Valuefunctqscale} and \eqref{lagrangianbarrier}, we introduce the function \begin{equation}\label{Psi} \varPsi_x(b):=\mathbb{E}_{x}\left[e^{-q \tau^{D^b}}\right]= \begin{cases} Z^{(q)}(x)-q \frac{W^{(q)}(b)}{W^{(q)'}(b)}W^{(q)}(x) & \mbox{if } 0\leq x\leq b\\ \varPsi_b(b) & \mbox{if } x>b. \end{cases} \end{equation} \begin{prop}\label{optimalpair} For each $x\geq0$ there exists $\bar{K}_{x}\geq0$ such that if $K>\bar{K}_{x}$, there exists $b^*$ which satisfies: \begin{enumerate}[(i)] \item\label{cond2clas} $\mathbb{E}_{x}\left[e^{-q \tau^{D^{b^*}}}\right]\leq K$ and \item\label{cond3clas} $\Lambda(b^*)\left(K-\mathbb{E}_{x}\left[e^{-q \tau^{D^{b^*}}}\right]\right)=0$. \end{enumerate} \end{prop} \begin{proof} If $x\leq b$, $\varPsi_x(b)$ is given by \eqref{Psi}. Rewriting this expression as $$\varPsi_x(b)=-q W^{(q)}(x)\Big[\frac{d \log (W^{(q)}(b))}{db}\Big]^{-1}+Z^{(q)}(x),$$ we can easily see that $$\frac{d \varPsi_x(b)}{db}=qW^{(q)}(x)\frac{d^2\log(W^{(q)}(b))}{db^2}\Big[\frac{d \log (W^{(q)}(b))}{db}\Big]^{-2}<0,$$ for $b\in(b_0,\infty)$, from Corollary \ref{strictlogconcave}. Otherwise, if $x>b$, then $\varPsi_x(b)= \varPsi_b(b)$ and some calculations yield that $$\frac{d \varPsi_b(b)}{db}=qW^{(q)}(b)\frac{d^2\log(W^{(q)}(b))}{db^2}\Big[\frac{d \log (W^{(q)}(b))}{db}\Big]^{-2},$$ which is again strictly negative for $b\in(b_0,\infty)$. So, for fixed $x$, $\varPsi_x(b)$ is strictly decreasing as a function of $b$ in $(b_0,\infty)$, and non-increasing before $b_0$. Now, let \begin{equation}\label{Klimitclass} \bar{K}_{x}:=\lim\limits_{b\rightarrow\infty}\varPsi_x(b)=-q\frac{W^{(q)}(x)}{\Phi(q)}+Z^{(q)}(x), \end{equation} where we use \eqref{limitqfact} to find the limit. Now, if $K\geq \varPsi_{x}(b_0)$, then the unconstrained problem satisfies the restriction and therefore $b^*=b_0$ satisfies the conditions.
Otherwise, if $\bar{K}_x< K < \varPsi_{x}(b_0)$, there exists $b^*>b_0$ such that $\varPsi_{x}(b^*)=K$, since $\varPsi_x(\cdot)$ is continuous and strictly decreasing on $(b_0,\infty)$ with limit $\bar{K}_x$. This $b^*$ satisfies the conditions. \end{proof} \begin{remark}\label{remdonothing} Note that $\bar{K}_{x}=\mathbb{E}_{x}\left[e^{-q \tau^0}\right]$, where $\tau^0$ is the time of ruin when no dividends are paid, see also \cite{Loeffen08}. \end{remark} The special case $K=\bar{K}_{x}$ requires the following lemma. \begin{lemma}\label{limitclassical} Let $x\geq0$. If $K=\bar{K}_{x}$ then $\Lambda(b)\Big(K-\mathbb{E}_x\Big[e^{-q \tau^{D^b}}\Big]\Big)\rightarrow 0$ as $b\rightarrow \infty$. \begin{proof} First, note that $\Lambda(b)\left(K-\mathbb{E}_x\Big[e^{-q \tau^{D^{b}}}\Big]\right)\leq 0$ for all $b> b_0$. Also, from \eqref{Valuefunctqscale}, $\V^{D^b}(x)\rightarrow 0$ as $b$ goes to $\infty$. On the other hand, from the previous remark the do-nothing strategy is feasible for \eqref{P1} and hence $0\leq V(x)$. Finally, by weak duality we have that \begin{align*} 0\leq V(x)&\leq\underset{\Lambda\geq 0}\inf\,\,V_{\Lambda}(x)\\ &\leq \underset{b\to \infty}\lim V_{\Lambda(b)}(x)\\ &=\underset{b\to \infty}\lim\Lambda(b)\Big(K-\mathbb{E}_x\Big[e^{-q \tau^{D^b}}\Big]\Big)\leq0. \end{align*} \end{proof} \end{lemma} All this is enough to derive the main result. \begin{thm}\label{strongduality}Let $x\geq 0$, $K\geq0$ and $V(x)$ be the value function of \eqref{P1}. Then $$V(x)\geq \underset{\Lambda\geq 0}\inf\,\,V_{\Lambda}(x),$$ and therefore $\underset{\Lambda\geq 0}\inf\,\,V_{\Lambda}(x)=V(x)$. \end{thm} \begin{proof} Fix $x\geq 0$.
We consider the following cases: \begin{itemize} \item \underline{$K>\bar{K}_{x}$}: By Proposition \ref{optimalpair} there is $b^*$ such that \begin{align*} \underset{\Lambda\geq 0}\inf\,\,V_{\Lambda}(x)&\leq V_{\Lambda(b^*)}(x)\\ &= \V^{D^{b^*}}(x)- \Lambda(b^*) \mathbb{E}_{x}\Big[ e^{-q \tau^{D^{b^*}}} \Big] + \Lambda(b^*)K\\ &= \V^{D^{b^*}}(x)\leq V(x), \end{align*} where the last inequality follows since the barrier strategy $D^{b^*}$ satisfies the constraint. \item \underline{$K=\bar{K}_{x}$}: From the proof of Lemma \ref{limitclassical}, it follows that $$0=\underset{\Lambda\geq 0}\inf\,\,V_{\Lambda}(x)=V(x).$$ \item \underline{$K<\bar{K}_{x}$}: Here, we have that there exists $\epsilon>0$ such that $\mathbb{E}_{x}\left[e^{-q \tau^{D^b}}\right]>K+\epsilon$ for all $b$. Hence $$\Lambda(b)\left(K-\mathbb{E}_x\left[e^{-q \tau^{D^b}}\right]\right)<-\Lambda(b)\epsilon.$$ Letting $b \rightarrow\infty$ we obtain that $\underset{\Lambda\geq 0}\inf\,\,V_{\Lambda}(x)=-\infty\leq V(x)$. Note that in this case \eqref{P1} is infeasible. \end{itemize} \end{proof} \subsection{With transaction cost} We now consider the case where $\beta>0$. We will continue assuming that the L\'evy measure of the spectrally negative process $X$ has a completely monotone density. In this case we need to consider single band strategies $b=(b_-,b_+)$, with $b_+>b_-\geq0$, denoted by $D^b$. Using the two-sided exit above fluctuation identity, \cite{LoeffenTrans} shows the following result. \begin{prop} Let $b$ be a single band strategy and consider the dividend process $D_t^{b}$, with $X$ a spectrally negative L\'evy process.
The function $\V^{D^b}$ with transaction cost $\beta>0$, for $x\geq0$, is given by \begin{equation}\label{ValuefunctqscaleTrans} \V^{D^b}(x)=\begin{cases} W^{(q)}(x)\dfrac{b_{+}-b_{-}-\beta}{W^{(q)}(b_{+})-W^{(q)}(b_{-})}, & \mbox{if } x\leq b_{+}\\ x-b_{-}-\beta+\V^{D^b}(b_{-}), & \mbox{if } x>b_{+}. \end{cases} \end{equation} \end{prop} \subsubsection{Solution of \eqref{P2}} Similarly, \cite{avram2015} shows the analogous result for the function $\V_\Lambda^{D^b}$. \begin{prop}\label{LagrangianbarrierTrans} The function $\V_\Lambda^{D^b}$ with transaction cost $\beta>0$, where $D^b$ is the single band strategy $b=(b_-,b_+)$, for $x\geq0$ is given by \begin{equation}\label{lagrangianbarrierTrans} \V_{\Lambda}^{D^b}(x)=\begin{cases} W^{(q)}(x)G_{\Lambda}(b_{-},b_{+})-\Lambda Z^{(q)}(x)+\Lambda K, & \mbox{if } x\leq b_{+}\\ x-b_{-}-\beta+\V_{\Lambda}^{D^b}(b_{-}), & \mbox{if } x>b_{+} \end{cases} \end{equation} where \begin{equation}\label{functionG} G_{\Lambda}(b_{-},b_{+}):=\frac{b_{+}-b_{-}-\beta+q\Lambda\int_{b_{-}}^{b_{+}}W^{(q)}(z)d z }{W^{(q)}(b_{+})-W^{(q)}(b_{-})}. \end{equation} \end{prop} As expected, there is a close relation between \eqref{functionG} and the function $\zeta_{\Lambda}$ defined by \eqref{functionZ}. \begin{remark} If $\beta=0$, letting $b_{-}\rightarrow b_{+}$ in \eqref{functionG} we can see that \begin{align*} \lim_{b_{-}\rightarrow b_{+}}G_{\Lambda}(b_{-},b_{+})&=\lim_{b_{-}\rightarrow b_{+}}\frac{1+\frac{q\Lambda}{b_{+}-b_{-}}\int_{b_{-}}^{b_{+}}W^{(q)}(z)d z }{\frac{W^{(q)}(b_{+})-W^{(q)}(b_{-})}{b_{+}-b_{-}}}\\ &=\frac{1+q\Lambda W^{(q)}(b_{+})}{W^{(q)'}(b_{+})}=\zeta_{\Lambda}(b_{+}). \end{align*} \end{remark} Now, from Proposition \ref{LagrangianbarrierTrans} we note that a candidate optimal single band strategy would be a maximizer of the function $G_{\Lambda}$.
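The limit in the remark above is easy to confirm numerically. The sketch below (an illustration only: the same closed-form Cram\'er--Lundberg scale function as in the earlier examples, with $\beta=0$ and an arbitrary $\Lambda=1$) shows the gap $|G_\Lambda(b_+-\epsilon,b_+)-\zeta_\Lambda(b_+)|$ shrinking as $\epsilon\downarrow0$:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative Cramer-Lundberg parameters; beta = 0 as in the remark.
c, lam, alpha, q, Lam, beta = 1.0, 1.0, 2.0, 0.1, 1.0, 0.0
tm, tp = np.sort(np.roots([c, c * alpha - lam - q, -q * alpha]).real)
Ap, Am = (alpha + tp) / (tp - tm), (alpha + tm) / (tp - tm)

W  = lambda x: (Ap * np.exp(tp * x) - Am * np.exp(tm * x)) / c
Wp = lambda x: (Ap * tp * np.exp(tp * x) - Am * tm * np.exp(tm * x)) / c

def G(bm, bp):
    # the function G_Lambda from the display above, with transaction cost beta
    intW, _ = quad(W, bm, bp)
    return (bp - bm - beta + q * Lam * intW) / (W(bp) - W(bm))

zeta = lambda s: (1.0 + q * Lam * W(s)) / Wp(s)

bp = 3.0
gaps = [abs(G(bp - eps, bp) - zeta(bp)) for eps in (1e-2, 1e-3, 1e-4)]
assert gaps[0] > gaps[1] > gaps[2]      # the gap shrinks as eps -> 0
assert gaps[2] < 1e-3
```

The gap decays linearly in $\epsilon$, consistent with a first-order Taylor expansion of numerator and denominator around $b_+$.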
The candidate optimal levels $b^{\Lambda}=(b_-^{\Lambda},b_+^{\Lambda})$ are defined as follows: \begin{equation}\label{bLambdaTrans} \begin{cases} b^{\Lambda}_{-}=b^{*}(d^{*}),\\ b^{\Lambda}_{+}= b^{\Lambda}_{-}+d^{*}, \end{cases} \end{equation} where \begin{equation} \begin{cases} b^{*}(d):=\sup\{\eta\geq0:G_{\Lambda}(\eta,\eta+d)\geq G_{\Lambda}(\varsigma,\varsigma+d),\forall \varsigma\geq0 \},\ \text{with}\ d>0,\\ d^{*}:=\sup\{d\geq0:G_{\Lambda}(b^{*}(d),b^{*}(d)+d)\geq G_{\Lambda}(b^{*}(\varsigma),b^{*}(\varsigma)+\varsigma),\forall \varsigma\geq0 \}. \end{cases} \end{equation} It can be verified that \begin{equation} G_{\Lambda}(b^{\Lambda}_{-},b^{\Lambda}_{+})\geq G_{\Lambda}(b_{-},b_{+}),\ \text{for any}\ (b_{-},b_{+})\ \text{with}\ 0\leq b_{-}<b_{+}, \end{equation} and from \cite{avram2015}, we get the following statement. \begin{thm}[Optimal strategy for \eqref{P2}]\label{L2} Let $b^{\Lambda}=(b^{\Lambda}_{-},b^{\Lambda}_{+})$ be defined as in \eqref{bLambdaTrans}. Then, $b^{\Lambda}_{+}<\infty$ and \begin{equation}\label{p1} G_{\Lambda}(b^{\Lambda}_{-},b^{\Lambda}_{+})=\zeta_{\Lambda}(b^{\Lambda}_{+}), \end{equation} where $\zeta_{\Lambda}$ is given by \eqref{functionZ}. In particular, it is optimal to adopt the strategy $D^{b^{\Lambda}}$. \end{thm} \subsubsection{Solution of \eqref{P1}} In this section we take a slightly different approach than in the previous case: we consider the parametric curve given by $\Lambda\mapsto b^{\Lambda}=(b_-^{\Lambda},b_+^{\Lambda})$ for $\Lambda\geq0$. The following lemma gives the relationship between $b_{\Lambda}$ and the optimal pair $(b_{-}^{\Lambda},b_{+}^{\Lambda})$, where $b_{\Lambda}$ is given by \eqref{bLambda}. \begin{lemma}\label{bLambdaypair} Let $(b^{\Lambda}_{-},b^{\Lambda}_{+})$ be defined as in \eqref{bLambdaTrans}, where $\Lambda\geq0$. Then: \begin{enumerate}[(1)] \item If $b_{\Lambda}>0$, then $0\leq b^{\Lambda}_{-}< b_{\Lambda}<b^{\Lambda}_{+}$.
\item If $b_{\Lambda}=0$, then $b_{-}^{\Lambda}=b_{\Lambda}<b_{+}^{\Lambda}$. \end{enumerate} \end{lemma} \begin{proof} First note that under the assumption of complete monotonicity of the density of $\nu$, we have that $G_{\Lambda}$ is smooth, and therefore we can compute its stationary points. So, if $(b^{\Lambda}_{-},b^{\Lambda}_{+})$ is an interior maximum point, i.e. $0<b_{-}^{\Lambda}<b_{+}^{\Lambda}$, we must have that \begin{equation}\label{p13} \nabla G_{\Lambda}(b_-,b_+)=\begin{pmatrix} \dfrac{W^{(q)'}(b_{-})}{W^{(q)}(b_{+})-W^{(q)}(b_{-})}\left(G_{\Lambda}(b_-,b_+)-\zeta_{\Lambda}(b_-)\right)\\ -\dfrac{W^{(q)'}(b_{+})}{W^{(q)}(b_{+})-W^{(q)}(b_{-})}\left(G_{\Lambda}(b_-,b_+)-\zeta_{\Lambda}(b_+)\right) \end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}. \end{equation} Suppose now that $b_{\Lambda}>0$. If $b_-^{\Lambda}$ is strictly positive, by \eqref{p13} it follows that \begin{equation}\label{p15} \zeta_{\Lambda}(b^{\Lambda}_{-})=G_{\Lambda}(b^{\Lambda}_{-},b^{\Lambda}_{+})=\zeta_{\Lambda}(b^{\Lambda}_{+}), \end{equation} and by Remark \ref{complmonotprop}, this means that $b^{\Lambda}_{-}< b_{\Lambda}<b^{\Lambda}_{+}$. If $b_{-}^{\Lambda}=0$, define the function $g_{\Lambda}:(0,\infty)\longrightarrow\R$ as $g_{\Lambda}(\varsigma):=G_{\Lambda}(0,\varsigma)$. Then, \begin{equation*} g'_{\Lambda}(\varsigma)=\frac{W^{(q)'}(\varsigma)}{W^{(q)}(\varsigma)-W^{(q)}(0)}[\zeta_{\Lambda}(\varsigma)-g_{\Lambda}(\varsigma)], \end{equation*} and since $b_{+}^{\Lambda}$ is a maximizer of $g_{\Lambda}$, it follows that $\zeta_{\Lambda}(b_{+}^{\Lambda})=g_{\Lambda}(b_{+}^{\Lambda})$ and \begin{equation*} \begin{cases} \zeta_{\Lambda}(\varsigma)>g_{\Lambda}(\varsigma),& \text{if}\ \varsigma< b_{+}^{\Lambda}\\ \zeta_{\Lambda}(\varsigma)<g_{\Lambda}(\varsigma),& \text{if}\ \varsigma> b_{+}^{\Lambda}, \end{cases} \end{equation*} which shows that $b_{+}^{\Lambda}>b_{\Lambda}$, again by Remark \ref{complmonotprop}.
In the case where $b_{\Lambda}=0$, we must have that $b_{-}^{\Lambda}=0$. Otherwise, if $b_{-}^{\Lambda}\neq0$, from \eqref{p13} we have that $\zeta_{\Lambda}(b_{-}^{\Lambda})=\zeta_{\Lambda}(b_{+}^{\Lambda})$, which is a contradiction since $\zeta_{\Lambda}$ is a strictly decreasing function on $(0,\infty)$. \end{proof} \begin{prop}\label{lambdainftylemmaTrans} The curve $\Lambda\mapsto(b_-^{\Lambda},b_+^{\Lambda})$ for $\Lambda\geq0$ is continuous and unbounded. \end{prop} \begin{proof} The previous lemma and Corollary \ref{bLambdainfty} show that $b_{+}^{\Lambda}\rightarrow\infty$ as $\Lambda\rightarrow\infty$, so the curve is unbounded. The continuity follows from the Implicit Function Theorem by considering two cases. First, suppose $b_-^{\Lambda}=0$; then by Theorem \ref{L2} we can define $b_+^{\Lambda}$ by the equation $$F(\Lambda,b_+^{\Lambda}):=G_{\Lambda}(0,b_+^{\Lambda})-\zeta_{\Lambda}(b_+^{\Lambda})=0.$$ Simple calculations show that $$\frac{\partial F}{\partial b_+}(\Lambda,b_+^{\Lambda})=\frac{\partial G_{\Lambda}}{\partial b_+}(0,b_+^{\Lambda})-\zeta'_{\Lambda}(b_+^{\Lambda})=-\zeta'_{\Lambda}(b_+^{\Lambda})>0,$$ since $b_+^{\Lambda}>b_{\Lambda}$, so the conditions of the Implicit Function Theorem are satisfied. Now, if $b_-^{\Lambda}>0$, the optimal pair is defined by the equations $F(\Lambda,b_-^{\Lambda},b_+^{\Lambda})=(F_1(\Lambda,b_-^{\Lambda},b_+^{\Lambda}),F_2(\Lambda,b_-^{\Lambda},b_+^{\Lambda}))=(0,0)$, where \begin{align*} F_1(\Lambda,b_-^{\Lambda},b_+^{\Lambda}):=G_{\Lambda}(b_-^{\Lambda},b_+^{\Lambda})-\zeta_{\Lambda}(b_-^{\Lambda})&=0,\\ F_2(\Lambda,b_-^{\Lambda},b_+^{\Lambda}):=G_{\Lambda}(b_-^{\Lambda},b_+^{\Lambda})-\zeta_{\Lambda}(b_+^{\Lambda})&=0. \end{align*} Again, simple calculations show that the Jacobian determinant of this system of equations is $\zeta'_{\Lambda}(b_+^{\Lambda})\zeta'_{\Lambda}(b_-^{\Lambda})<0$, since $b^{\Lambda}_{-}< b_{\Lambda}<b^{\Lambda}_{+}$, implying the continuity of the curve.
\end{proof} Next, we proceed to analyze the level curves of the constraint. From Equations \eqref{ValuefunctqscaleTrans} and \eqref{lagrangianbarrierTrans} we observe that for $b=(b_-,b_+)$ \begin{align}\label{PsiTrans}\nonumber \varPsi_x(b_{-},b_{+}):&=\E_x\left[e^{-q\tau^{D^b}}\right]\\ &= \begin{cases} Z^{(q)}(x)-W^{(q)}(x)\dfrac{q\int_{b_{-}}^{b_{+}}W^{(q)}(z)d z}{W^{(q)}(b_{+})-W^{(q)}(b_{-})}, & \mbox{if } 0\leq x\leq b_{+}\\ \dfrac{Z^{(q)}(b_{-})W^{(q)}(b_{+})-Z^{(q)}(b_{+})W^{(q)}(b_{-})}{W^{(q)}(b_{+})-W^{(q)}(b_{-})}, & \mbox{if } x>b_{+}. \end{cases} \end{align} \begin{remark} Note that \begin{equation*} \lim_{b_{-}\rightarrow b_{+}}\varPsi_{x}(b_{-},b_{+})=\varPsi_x(b_+), \end{equation*} where $\varPsi_x(b)$ is defined in \eqref{Psi}. \end{remark} The next few lemmas describe the properties of the level curves of the function \eqref{PsiTrans}. \begin{lemma}\label{Psidecr} Let $x\geq0$ be fixed. \begin{enumerate}[(i)] \item If $b_-\geq0$ is fixed, the function $\varPsi_x(b_{-},b_{+})$, given in \eqref{PsiTrans}, is non-increasing in $b_{+}$ for $b_{+}>b_{-}$, and \begin{equation}\label{KlimitTrans} \lim_{b_{+}\rightarrow\infty}\varPsi_x(b_{-},b_{+})=\bar{K}_{x}, \end{equation} where $\bar{K}_x$ is defined in \eqref{Klimitclass}. \item If $b_{+}>0$ is fixed, $\varPsi_x(b_{-},b_{+})$ is non-increasing in $b_{-}$ for $b_{-}\in[0,b_{+})$. \end{enumerate} \end{lemma} \begin{proof} First, assume that $x\leq b_+$. To show that $\varPsi_x(b_{-},b_{+})$ is non-increasing, it is sufficient to verify that \begin{equation}\label{p5.0} \frac{\int_{b_{-}}^{b_{+}}W^{(q)}(z) d z}{W^{(q)}(b_{+})-W^{(q)}(b_{-})}, \end{equation} is non-decreasing, which is true if \begin{align}\nonumber \frac{\partial}{\partial b_{+}}\biggl[&\frac{\int_{b_{-}}^{b_{+}}W^{(q)}(z)d z}{W^{(q)}(b_{+})-W^{(q)}(b_{-})}\biggr]\\\label{p5} &=\frac{W^{(q)}(b_{+})}{W^{(q)}(b_{+})-W^{(q)}(b_{-})}-\frac{W^{(q)'}(b_{+})\int_{b_{-}}^{b_{+}}W^{(q)}(z) d z}{[W^{(q)}(b_{+})-W^{(q)}(b_{-})]^{2}}\geq0.
\end{align} Since $W^{(q)}$ is a log-concave function on $[0,\infty)$, we have that \begin{equation}\label{p4} \frac{W^{(q)'}(\eta)}{W^{(q)}(\eta)}\geq\frac{W^{(q)'}(\varsigma)}{W^{(q)}(\varsigma)},\ \text{for any}\ \eta \ \text{and}\ \varsigma\ \text{with}\ \eta\leq\varsigma. \end{equation} Taking $\varsigma=b_{+}$ in the above inequality, it follows that \begin{equation*} W^{(q)'}(\eta)\geq\frac{W^{(q)'}(b_{+})}{W^{(q)}(b_{+})}W^{(q)}(\eta),\ \text{for any}\ \eta\in[b_{-},b_{+}]. \end{equation*} Then, integrating between $b_{-}$ and $b_{+}$ yields \eqref{p5}, and hence \eqref{p5.0} is non-decreasing. For the case $x>b_+$, if $b_-=0$ and $W^{(q)}(0)=0$ we obtain the constant $1$. Otherwise, similar calculations as above show that the function is non-increasing. Proceeding in a similar way as before, we also obtain (ii). Now, by \eqref{Klimitclass} and L'H\^opital's rule, it is easy to see \eqref{KlimitTrans} for any value of $b_-$. \end{proof} \begin{remark} Note that if $b_+>b_0$ we obtain strictly decreasing functions in the above lemma, by Corollary \ref{strictlogconcave}. \end{remark} \begin{lemma}\label{Kcurve} Let $x\geq0$. Then, for each $K\in(\bar{K}_{x},\varPsi_x(0))$ there exist $\underline{b}$ and $\bar{b}$ such that the level curve $L_K(\varPsi_x)=\{(b_-,b_+):\varPsi_x(b_-,b_+)=K\}$ is continuous and contained in $[0,\underline{b}]\times[\underline{b},\bar{b}]$. \end{lemma} \begin{proof} The continuity of the level curve is an immediate consequence of the continuity of $\varPsi_x(\cdot,\cdot)$. First, observe that by Proposition \ref{optimalpair} we know the existence of $\underline{b}\geq0$ such that $\varPsi_x(\underline{b})=K$ (if it is not unique, we take the smallest). On the other hand, by Lemma \ref{Psidecr} there exists $\bar{b}\in[\underline{b},\infty)$ such that $\varPsi_x(0,\bar{b})=K$ (again, if it is not unique, we take the largest).
Now, the fact that the curve is contained in $[0,\underline{b}]\times[\underline{b},\bar{b}]$ is again a consequence of Lemma \ref{Psidecr}. \end{proof} We now prove the result analogous to Proposition \ref{optimalpair}. \begin{prop}\label{optLambda} Let $x\geq0$. Then, for each $K>\bar{K}_x$ there exists $\Lambda^*\geq0$ such that: \begin{enumerate}[(i)] \item $\varPsi_x(b_-^{\Lambda^*},b_+^{\Lambda^*})=\mathbb{E}_{x}\left[e^{-q \tau^{D^{b^{\Lambda^*}}}}\right]\leq K$ and \item\ $\Lambda^*\left(K-\mathbb{E}_{x}\left[e^{-q \tau^{D^{b^{\Lambda^*}}}}\right]\right)=0$. \end{enumerate} \end{prop} \begin{proof} If $K\geq\varPsi_x(b_-^0,b_+^0)$, then the unconstrained problem satisfies the restriction and $\Lambda^*=0$ satisfies the conditions. Otherwise, by Proposition \ref{lambdainftylemmaTrans} and Lemma \ref{Kcurve} we deduce that the parametric curve $\Lambda\mapsto b^{\Lambda}=(b_-^{\Lambda},b_+^{\Lambda})$ and the level curve $L_K(\varPsi_x)$ must intersect, that is, there exists $\Lambda^*$ such that $\varPsi_x(b_-^{\Lambda^*},b_+^{\Lambda^*})=K$, which satisfies the conditions. \end{proof} By similar arguments as in the previous case we can show the absence of a duality gap in this case as well. \section{Solution of the constrained Dual Model}\label{SecDual} Let us consider the Dual model, where the reserve process $X$ is a spectrally positive L\'evy process. In this model we will only study the case without transaction cost; the other case should be a straightforward application of the ideas presented in this article. In order to consider the barrier strategy at level $b$, we need to construct the reflected process at its supremum with initial value $b$, as before. To do so, we note that \begin{equation}\label{posnegrel} \hat{X}^{b}=(b\vee \overline{X})-X=Y-(0\wedge\underline{Y}), \end{equation} where $Y=b-X$ and $\underline{Y}_t:=\underset{0\leq s\leq t}{\inf} Y_s$.
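The identity \eqref{posnegrel} can be checked pathwise: with $Y=b-X$ we have $\underline{Y}_t=b-\overline{X}_t$, so

```latex
0\wedge\underline{Y}_t \;=\; 0\wedge(b-\overline{X}_t)\;=\; b-(b\vee\overline{X}_t),
\qquad\text{and hence}\qquad
Y_t-(0\wedge\underline{Y}_t)\;=\;(b-X_t)-b+(b\vee\overline{X}_t)\;=\;\hat{X}_t^{b}.
```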
Note that $Y$ is now a spectrally negative L\'evy process; hence the useful identities in this case concern the \emph{reflected process at its infimum}. Therefore, when we refer to the Dual model, $q$-scale functions and other quantities correspond to the process $-X$. We now present the equivalent version of Proposition \ref{Valuefunctqscale} for the Dual model. The proof of the following proposition is available in \cite{AvPaPi07,KyYa14}. \begin{prop} Let $b>0$ and consider the dividend process $D_t^{b}=X_t -(b- \hat{X}^b_t)$, where $X$ is a spectrally positive L\'evy process. For $x\in [0,b]$, \begin{equation}\label{ValuefunctqscaleDual} \V^{D^b}(x)=\E_x\left[\int_0^{\tau^{D^b}-} e^{-qt}dD_t^b\right]=-k(b-x)+\frac{Z^{(q)}(b-x)}{Z^{(q)}(b)}k(b), \end{equation} where $k(\varsigma):=\bar{Z}^{(q)}(\varsigma)-\dfrac{1}{\Phi(q)}Z^{(q)}(\varsigma)+\dfrac{\psi'(0+)}{q},\, \varsigma\geq 0$. \end{prop} \subsection{Solution of \eqref{P2}} We also need the analogous result for the function $\V_\Lambda^{D^b}$. \begin{prop} The function $\V_\Lambda^{D^b}$, where $D^b$ is the barrier strategy at level $b \geq 0$, for $x\geq0$ is given by \begin{align}\label{lagrangianbarrierDual} \V_\Lambda^{D^b}(x)= \begin{cases} -k(b-x)+\frac{Z^{(q)}(b-x)}{Z^{(q)}(b)}\left[k(b)-\Lambda\right] + \Lambda K, &\text{if}\quad x\leq b\\ x-b+\mathcal{V}_\Lambda^{D^b}(b), &\text{if}\quad x>b. \end{cases} \end{align} \end{prop} \begin{proof} First note that $Z^{(q)}(z)=1$ and $\bar{Z}^{(q)}(z)=z$ for $z<0$. Now, from \cite{AvPaPi07} we have that if $Y$ is a spectrally negative L\'evy process and $\tilde{Y}=Y-(0\wedge\underline{Y})$ is the process reflected at its past infimum below $0$, then $\E_{y}\left[e^{-q \tau_b}\right]=\frac{Z^{(q)}(y)}{Z^{(q)}(b)}$, where $\tau_b$ is the first hitting time of $\tilde{Y}$ at $\{b\}$.
Therefore, for $X$ a spectrally positive L\'evy process, by \eqref{posnegrel} one gets \begin{equation}\label{eqrestdual} \E_{x}\left[e^{-q \tau^{D^b}}\right]=\frac{Z^{(q)}(b-x)}{Z^{(q)}(b)}. \end{equation} Combining this with the previous proposition yields the result. \end{proof} The last result is also included in \cite{Yin}. Now, to solve \eqref{P2} in the setup of the Dual model we follow \cite{KyYa14} closely. Again, the idea is to propose a candidate for the optimal barrier and run it through a verification lemma. In contrast to the approach taken in subsection \ref{dualclassical}, the candidate barrier will be such that the corresponding value function is $C^1$ [resp. $C^2$] in the case of bounded [resp. unbounded] variation. This approach is commonly referred to as \emph{smooth fit}. In this section no assumption about the L\'evy measure $\nu$ is made. From Equation \eqref{lagrangianbarrierDual} we get \begin{align*} (\V_\Lambda^{D^b})'(x)=Z^{(q)}(b-x)-q W^{(q)}(b-x)\xi_\Lambda(b), \end{align*} and \begin{align*} (\V_\Lambda^{D^b})''(x)=-q W^{(q)}(b-x)+qW^{(q)'}(b-x)\xi_\Lambda(b), \end{align*} where \begin{align} \xi_\Lambda(\varsigma):=\frac{1}{\Phi(q)}+\frac{k(\varsigma)-\Lambda}{Z^{(q)}(\varsigma)}. \end{align} It is easy to check that the smooth fit condition is equivalent to $\xi_\Lambda(b)=0$ in both the bounded and unbounded variation cases. This is equivalent to $b$ satisfying the relation $\bar{Z}^{(q)}(b)=\Lambda-\frac{\psi'(0+)}q$. Finally, since $\bar{Z}^{(q)}$ is strictly increasing and $\bar{Z}^{(q)}(0)=0$, the candidate optimal barrier is given by \begin{align}\label{optbarrierdual} b_\Lambda:= \begin{cases} (\bar{Z}^{(q)})^{-1}\left(\Lambda-\frac{\psi'(0+)}{q}\right)& \mbox{if } \frac{\psi'(0+)}{q}<\Lambda\\ 0 & \mbox{otherwise.} \end{cases} \end{align} This level is indeed optimal. To see this, we can use the standard verification lemma approach, as in Proposition 5 in \cite{AvPaPi07} and Theorem 2.1 in \cite{KyYa14}. 
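In practice $(\bar{Z}^{(q)})^{-1}$ rarely has a closed form, so \eqref{optbarrierdual} is evaluated by numerically inverting the strictly increasing function $\bar{Z}^{(q)}$. The sketch below is a generic bisection inverter; the function `Zbar` is a hypothetical stand-in (an arbitrary strictly increasing function with $\bar{Z}^{(q)}(0)=0$, not an actual scale function).

```python
def invert_increasing(f, target, lo=0.0, hi=1.0, tol=1e-10):
    """Solve f(b) = target for a strictly increasing f by bisection,
    growing the bracket until it contains the target."""
    while f(hi) < target:
        lo, hi = hi, 2.0 * hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical stand-in for Zbar^{(q)}: any strictly increasing map with Zbar(0)=0.
Zbar = lambda b: b + 0.5 * b * b

def b_Lambda(Lam, psi_prime0, q):
    """Candidate barrier (optbarrierdual): zero unless Lambda exceeds psi'(0+)/q."""
    gap = Lam - psi_prime0 / q
    return invert_increasing(Zbar, gap) if gap > 0 else 0.0
```

Any root solver for monotone functions would do here; bisection is shown only because it needs nothing beyond evaluations of $\bar{Z}^{(q)}$, which are themselves numerical in this setting.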
This is also shown in \cite{Yin}. \begin{thm}[Optimal strategy for \eqref{P2}] The optimal strategy for \eqref{P2} consists of a barrier strategy at level $b_\Lambda$ given by \eqref{optbarrierdual}, and the corresponding value function is given by \eqref{lagrangianbarrierDual}. \end{thm} \subsection{Solution of \eqref{P1}} As in the previous section, let $b_0$ be the optimal barrier for \eqref{P2} with $\Lambda=0$, that is, the optimal barrier for the unconstrained problem, and let $\bar{\Lambda}:=\sup \{\Lambda\geq 0: b_\Lambda=0\}\vee0$. We also consider the function $\Lambda:[b_0,\infty)\rightarrow \R_{+}$ defined by $$\Lambda(b):= \begin{cases} 0& \mbox{if } b=b_0\\ \bar{Z}^{(q)}(b)+\frac{\psi'(0+)}{q} & \mbox{if } b>b_0. \end{cases} $$ \begin{prop} For each $b \in (b_0,\infty)$ the barrier strategy at level $b$ is optimal for \eqref{P2} with $\Lambda(b)$. Moreover, this map is one-to-one onto $(\bar{\Lambda},\infty)$. \begin{proof} For $b\in (b_0,\infty)$, the map $\Lambda(b)$ is strictly increasing and goes to $\infty$ as $b$ goes to $\infty$, since $\bar{Z}^{(q)}$ does. Finally, $\Lambda(b)\geq0$ and the optimality condition is satisfied by \eqref{optbarrierdual}. \end{proof} \end{prop} Now we show that the complementary slackness condition is satisfied. \begin{prop}\label{optimalpairdual} For each $x\geq0$ there exists $\bar{K}_{x}\geq0$ such that if $K>\bar{K}_{x}$ there exists $b^*$ such that: \begin{enumerate}[(i)] \item $\mathbb{E}_{x}\left[e^{-q \tau^{D^{b^*}}}\right]\leq K$ and \item $\Lambda(b^*)\left(K-\mathbb{E}_{x}\left[e^{-q \tau^{D^{b^*}}}\right]\right)=0$. \end{enumerate} \begin{proof} Again, let $\varPsi_x(b):=\mathbb{E}_{x}\left[e^{-q \tau^{D^b}}\right]$, which is given by $$\varPsi_x(b)=\frac{Z^{(q)}(b-x)}{Z^{(q)}(b)}.$$ Note that this expression is valid for any $b\geq0$. Let $x>0$. A simple calculation shows that $\frac{d \varPsi_x(b)}{db}< 0$ is equivalent to $q\frac{W^{(q)}(b-x)}{Z^{(q)}(b-x)}<q\frac{W^{(q)}(b)}{Z^{(q)}(b)}$. 
This is true since $q\frac{W^{(q)}(z)}{Z^{(q)}(z)}=\frac{d}{dz}\ln Z^{(q)}(z)$ is strictly increasing. Now, using \eqref{limitqfact}, we define $\bar{K}_{x}:=\lim\limits_{b\rightarrow\infty}\varPsi_x(b)=e^{-\Phi(q) x}$. The proof then proceeds exactly as in Proposition \ref{optimalpair}. For the case $x=0$, note that $\bar{K}_{0}=\varPsi_0(b)=1$ for all $b$, so $b^*=b_0$ satisfies the conditions. \end{proof} \end{prop} A remark analogous to Remark \ref{remdonothing} also holds in this model, and therefore we can also prove the next lemma. \begin{lemma}\label{limit} Let $x\geq0$. If $K=\bar{K}_{x}$ then $\Lambda(b)\Big(K-\mathbb{E}_x\Big[e^{-q \tau^{D^b}}\Big]\Big)\rightarrow 0$ as $b\rightarrow \infty$. \end{lemma} As in the previous section, we derive the main result. \begin{thm}\label{strongduality} Let $x\geq 0$, $K\geq0$ and let $V(x)$ be the optimal solution to \eqref{P1}. Then $$V(x)\geq \underset{\Lambda\geq 0}\inf\,\,V_{\Lambda}(x)$$ and therefore $\underset{\Lambda\geq 0}\inf\,\,V_{\Lambda}(x)=V(x)$. \end{thm} \section{Numerical example}\label{numerics} In this section we illustrate the previous results with numerical examples. The main difficulty here is that, in most cases, there is no closed-form expression for the scale functions. Hence, we follow a numerical procedure presented in \cite{surya2008} to approximate the scale functions by Laplace transform inversion of \eqref{wqlaplace}. We do the same to approximate the derivatives of the scale functions, and use the trapezoidal rule to compute the integrals involving them. \begin{ex}\label{exdeFin} In this example we consider the Cram\'er-Lundberg model with income premium rate $c=1$, Poisson process intensity $\lambda=1$ and heavy-tailed Pareto Type II distributed claims with density function $p(x)=1.5\left(1+x\right)^{-2.5}$, also known as Lomax(1,1.5). Note that this density is a completely monotone function. In this example $q=0.05$. In this case the optimal barrier for the unconstrained problem is $b_0=0.42$. 
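The Laplace inversion step mentioned above can be sketched in code. The paper follows the method of \cite{surya2008}; what follows is a generic Gaver--Stehfest inversion instead, shown only to convey the idea, assuming \eqref{wqlaplace} is the usual identity $\int_0^\infty e^{-\theta x}W^{(q)}(x)\,dx=1/(\psi(\theta)-q)$ so that $F$ would be $\theta\mapsto 1/(\psi(\theta)-q)$ in the scale-function application. Here it is checked on a textbook transform pair.

```python
import math

def stehfest_weights(N=12):
    """Gaver-Stehfest weights V_k (N must be even)."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)) / (
                math.factorial(half - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k)
            )
        V.append((-1) ** (half + k) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F via Gaver-Stehfest."""
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Quick check on a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
assert abs(invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0) - math.exp(-1.0)) < 1e-4
```

Gaver--Stehfest works in real arithmetic only, which is convenient for completely monotone inputs such as the scale functions here, but it is ill-conditioned for large $N$; this is one reason dedicated procedures such as \cite{surya2008} are preferred in the paper.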
Figure \ref{figrest} shows the function $\Psi_x(b)$, as a function of $x$, for different values of $b$. The figure also shows pairs $(x,K)$ for which the problem has binding, infeasible and inactive constraints, and those for which the do-nothing strategy is optimal. The latter corresponds to pairs of the form $(x,\bar{K}_x)$. A plot of the map $\Lambda(b)$ is presented in Figure \ref{figmap}. In this case we obtain $\bar{\Lambda}=0$, since $b_0>0$, and a strictly increasing map on $[b_0,\infty)$. \begin{figure}[h!] \begin{subfigure}[b]{.48\linewidth} \includegraphics[width=1\textwidth]{restParetoII.pdf} \caption{Example \ref{exdeFin}} \end{subfigure} \begin{subfigure}[b]{.48\linewidth} \includegraphics[width=1\textwidth]{restDual.pdf} \caption{Example \ref{exDual}} \end{subfigure} \caption{$\Psi_x(b)$ as a function of $x$ for different values of $b$.}\label{figrest} \end{figure} \begin{figure}[t!] \begin{subfigure}[b]{.48\linewidth} \includegraphics[width=1\textwidth]{mapParetoII.pdf} \caption{Example \ref{exdeFin}} \end{subfigure} \begin{subfigure}[b]{.48\linewidth} \includegraphics[width=1\textwidth]{mapDual.pdf} \caption{Example \ref{exDual}} \end{subfigure} \caption{Map $\Lambda(b)$.}\label{figmap} \end{figure} Figure \ref{figbs} shows the value of $b^*$ from Proposition \ref{optimalpair} for different values of $K$, as a function of $x$. From this figure we can extract the optimal policy for the constrained problem. Note that for some values of $x$ there is no $b^*$; for these values the problem is infeasible. The figure also shows the value of $b_0$ for reference. The value function for the unconstrained and constrained problems, for a few levels of $K$, is shown in Figure \ref{figoptimal}. \end{ex} \begin{figure}[t!] 
\begin{subfigure}[b]{.48\linewidth} \includegraphics[width=1\textwidth]{bstarParetoII.pdf} \caption{Example \ref{exdeFin}} \end{subfigure} \begin{subfigure}[b]{.48\linewidth} \includegraphics[width=1\textwidth]{bstarDual.pdf} \caption{Example \ref{exDual}} \end{subfigure} \caption{$b^*$ for fixed values of $K$ as a function of $x$.}\label{figbs} \end{figure} \begin{figure}[t!] \begin{subfigure}[b]{.48\linewidth} \includegraphics[width=1\textwidth]{optParetoII.pdf} \caption{Example \ref{exdeFin}} \end{subfigure} \begin{subfigure}[b]{.48\linewidth} \includegraphics[width=1\textwidth]{optDual.pdf} \caption{Example \ref{exDual}} \end{subfigure} \caption{Optimal value function $V$ for the constrained problem with different values of $K$.}\label{figoptimal} \end{figure} \begin{ex}\label{exDual} We now consider the problem in the Dual model. In this example the reserves process is $-X$, where $X$ follows the Cram\'er-Lundberg model plus a diffusion. The parameters are $c=1$, the intensity of the jumps is $\lambda=0.4$, the distribution of the jumps is Gamma(2,1) (note that this distribution does not have a completely monotone density) and $\sigma=0.5$. In this example $q=0.03$. In this case $b_0=0$ and $\bar{\Lambda}=6.71$. The results are shown in the same figures as in the previous example. Since $b_0=0$, if the problem is feasible then the constraint is active; see Figure \ref{figrest}. \end{ex} \begin{ex} We now consider an example of the problem with transaction costs. The reserves process follows an $\alpha$-stable L\'evy process with $\alpha=1.5$. Note that this is a pure-jump unbounded variation process. We take $q=0.1$ and the transaction cost $\beta=0.01$. Figure \ref{figPhiTran} shows contour plots of the function $\Psi_x(b_-,b_+)$ for $x=3,10$. This figure also shows the curve described by the map $\Lambda\mapsto(b_-^{\Lambda},b_+^{\Lambda})$. The values of the optimal pair as a function of $\Lambda$ and the values of $b_{\Lambda}$ are shown in Figure \ref{figoptb}. 
\begin{figure}[t!] \begin{subfigure}[b]{.495\linewidth} \includegraphics[width=1\textwidth]{Phi3.pdf} \caption{$x=3$, $\bar{K}_x=0.303$} \end{subfigure} \begin{subfigure}[b]{.485\linewidth} \includegraphics[width=1\textwidth]{Phi10.pdf} \caption{$x=10$, $\bar{K}_x=0.097$} \end{subfigure} \caption{Contour plots of $\Psi_x(b_-,b_+)$.}\label{figPhiTran} \end{figure} \begin{figure}[t!] \includegraphics[width=0.5\textwidth]{mapLbopt.pdf} \caption{Optimal pair $(b_-^{\Lambda},b_+^{\Lambda})$ and $b_{\Lambda}$.}\label{figoptb} \end{figure} Finally, Figure \ref{figoptL} shows the value of $\Lambda^*$ from Proposition \ref{optLambda} for different values of $K$, as a function of $x$, and the value functions for the unconstrained and constrained problems. \begin{figure}[t!] \begin{subfigure}[b]{.495\linewidth} \includegraphics[width=1\textwidth]{Lstar.pdf} \caption{$\Lambda^*$} \end{subfigure} \begin{subfigure}[b]{.485\linewidth} \includegraphics[width=1\textwidth]{optStable.pdf} \caption{$V$} \end{subfigure} \caption{$\Lambda^*$ and value function for the constrained problem with different values of $K$.}\label{figoptL} \end{figure} \end{ex} \section{Conclusions and future work} In the framework of the classical dividend problem there exists a trade-off between stability and profitability. We were able to continue the work started in \cite{HJ15} in order to solve the optimal dividend problem subject to a constraint on the time of ruin. Using the fundamental tools of scale functions and fluctuation theory, we improved the previous result for spectrally one-sided L\'evy processes and included the case of a fixed transaction cost. New questions arise from this work. The first is whether the same results hold for band strategies instead of barrier strategies; this would probably require knowing the number of bands in advance. 
Staying within barrier strategies, another challenging question is to find other constraints that fit the tools developed in this work. Finally, an interesting question is the existence of a Hamilton-Jacobi-Bellman-like equation that characterizes the value function of the constrained problem; this could open the door to a new theory of constrained stochastic optimal control. \section*{Acknowledgments} Mauricio Junca was supported by Universidad de los Andes under the Grant Fondo de Apoyo a Profesores Asistentes (FAPA). Harold Moreno-Franco acknowledges financial support from HSE, which was given within the framework of a subsidy granted to the HSE by the government of the Russian Federation for the implementation of the Global Competitiveness Program. \bibliographystyle{alpha} \bibliography{ref} \end{document}
{"config": "arxiv", "file": "1608.02550/DividendsConstraint.tex"}
TITLE: Algebraic of degree $n$ over $F$ implies algebraic of degree at most $n$ over $F(a)$? QUESTION [0 upvotes]: Suppose $a, b$ in the field $K$ are algebraic over the subfield $F \subseteq K$. Suppose $a$ is algebraic of degree $m$ over $F$ and $b$ is algebraic of degree $n$ over $F$. Let $F(a)$ be the field obtained by adjoining $a$ to $F$. My question is, is it true that $b$ is algebraic of degree at most $n$ over $F(a)$? If so, how come? REPLY [1 votes]: If $b$ is algebraic of degree $n$ over $F$, then there is a polynomial $P$ of degree $n$ with coefficients in $F$ such that $P(b) = 0$. Since $F \subseteq F(a)$, the coefficients of $P$ also lie in $F(a)$, so $b$ is algebraic of degree at most $n$ over $F(a)$.
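The bound can be illustrated numerically (a sketch using floating point to check the polynomial identities, not exact algebra): take $b=\sqrt2+\sqrt3$, which has degree 4 over $\mathbb Q$ with minimal polynomial $x^4-10x^2+1$; over $\mathbb Q(\sqrt2)$ the quadratic $x^2-2\sqrt2\,x-1$ already kills $b$, so the degree drops from 4 to 2, consistent with "at most $n$".

```python
import math

b = math.sqrt(2) + math.sqrt(3)

# Minimal polynomial of b over Q: degree 4.
P = lambda x: x**4 - 10 * x**2 + 1

# A degree-2 polynomial whose coefficients lie in Q(sqrt(2)) and which kills b:
# x^2 - 2*sqrt(2)*x - 1.
Q = lambda x: x**2 - 2 * math.sqrt(2) * x - 1

assert abs(P(b)) < 1e-9   # b is a root of P, with coefficients in Q
assert abs(Q(b)) < 1e-9   # b is a root of Q, with coefficients in Q(sqrt(2)): degree 2 <= 4
```

The answer only guarantees the inequality; as this example shows, the degree over the larger field can genuinely drop.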
{"set_name": "stack_exchange", "score": 0, "question_id": 4001192}
TITLE: Derivative of dot product? QUESTION [2 upvotes]: What's the derivative ${\partial \over \partial x} \langle x, f(x)\rangle$? According to the product rule it should be $1\cdot f(x) + x \cdot f'(x) $ but in my previous post I was told that this makes no sense. Here $f: \mathbb R \to \mathbb R^2$ and $1$ is the constant one vector and $x \in \mathbb R^2$. REPLY [8 votes]: Let $f,g: I\subset \Bbb R \to \Bbb R^n$ be smooth maps, and $\langle \cdot, \cdot\rangle$ be the usual dot product in $\Bbb R^n$. So: $$\begin{align}\frac{\rm d}{{\rm d}x}\langle f(x),g(x)\rangle &= \frac{\rm d}{{\rm d}x}\sum_{i=1}^n f_i(x)g_i(x) \\ &= \sum_{i=1}^n \frac{\rm d}{{\rm d}x}(f_i(x)g_i(x)) \\ &= \sum_{i=1}^n(f'_i(x)g_i(x)+f_i(x)g'_i(x)) \\ &= \sum_{i=1}^nf'_i(x)g_i(x)+\sum_{i=1}^nf_i(x)g'_i(x)\\ &= \langle f'(x),g(x)\rangle + \langle f(x),g'(x)\rangle.\end{align}$$
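The identity derived in the answer is easy to check numerically with a central difference (a sketch; the curves $f,g:\mathbb R\to\mathbb R^2$ below are arbitrary smooth examples chosen for illustration, not from the question):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

f  = lambda x: (math.sin(x), x * x)
fp = lambda x: (math.cos(x), 2 * x)      # f'
g  = lambda x: (x, math.cos(x))
gp = lambda x: (1.0, -math.sin(x))       # g'

def num_deriv(h_fn, x, h=1e-6):
    """Central-difference derivative of a scalar-valued function."""
    return (h_fn(x + h) - h_fn(x - h)) / (2 * h)

x0 = 0.7
lhs = num_deriv(lambda x: dot(f(x), g(x)), x0)
rhs = dot(fp(x0), g(x0)) + dot(f(x0), gp(x0))
assert abs(lhs - rhs) < 1e-6
```

The two sides agree to finite-difference accuracy, which is exactly the product rule $\langle f,g\rangle' = \langle f',g\rangle + \langle f,g'\rangle$ from the answer.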
{"set_name": "stack_exchange", "score": 2, "question_id": 1362611}
TITLE: Do sound waves pick up the speed of their source? QUESTION [5 upvotes]: I googled the speed of sound and found that it only depends on the medium (just like the speed of light but with different parameters). I can't see how it doesn't pick up the speed of its source! I mean, for the constancy of the speed of light the whole addition rule of velocities was modified to the relativistic one. So how is the constancy of the speed of sound maintained? P.S. I'm so grateful to everyone who answered this question, because they truly helped me understand the subject. REPLY [0 votes]: The following are the three bold statements that I make. Out of the speeds of the source and the observer/receiver, the wavelength of a sound depends only upon the speed of the source. Out of the speeds of the source and the observer/receiver, the frequency of a sound depends upon the speeds of both of them. Out of the speeds of the source and the observer/receiver, the "apparent speed" of a sound (i.e. its speed with respect to an observer) depends only upon the speed of the observer/receiver.
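The three statements can be made concrete with the classical Doppler formulas for a medium-fixed wave speed (a sketch; the numeric value of the speed of sound and the sign conventions — source and observer moving toward each other — are assumptions for illustration):

```python
V = 343.0  # speed of sound in air, m/s (approximate, at ~20 C)

def wavelength(f_src, v_src):
    """Wavelength ahead of a source moving toward the observer at v_src."""
    return (V - v_src) / f_src

def observed_frequency(f_src, v_src, v_obs):
    """Frequency heard by an observer moving toward the source at v_obs."""
    return (V + v_obs) / wavelength(f_src, v_src)

def apparent_speed(v_obs):
    """Propagation speed measured by an observer moving toward the wave."""
    return V + v_obs

f0 = 440.0
# (1) wavelength changes with the source speed only:
assert wavelength(f0, 30.0) != wavelength(f0, 0.0)
# (2) observed frequency changes with either speed:
assert observed_frequency(f0, 30.0, 0.0) != observed_frequency(f0, 0.0, 0.0)
assert observed_frequency(f0, 0.0, 30.0) != observed_frequency(f0, 0.0, 0.0)
# (3) apparent speed changes with the observer speed only:
assert apparent_speed(30.0) == V + 30.0
```

The wave speed $V$ is set by the medium, which is why it never appears multiplied by the source speed: a moving source compresses the wavelength, not the propagation speed.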
{"set_name": "stack_exchange", "score": 5, "question_id": 237676}
TITLE: Which schemes can be presented as limits of smooth varieties? QUESTION [4 upvotes]: I can prove a certain statement for any scheme that can be presented as the limit of an essentially affine (filtering) projective system of smooth varieties over a perfect field such that the connecting morphisms are dominant. In this text I only treat schemes that are excellent, separated, and of finite Krull dimension. So, I have the following questions. Is there a shorter description of schemes that can be presented as limits of this sort (either of all such schemes, or of those that are excellent, separated, and of finite Krull dimension)? Is there an interesting subclass in the class of all limit schemes of this sort? I don't want to restrict myself to affine schemes. If no nice answers to questions 1 and 2 are given, I will need a term for limits of this sort. Any suggestions?:) REPLY [8 votes]: Here is an answer for the affine case. Assume $f:A\to B$ is a homomorphism of Noetherian rings. Then $B$ is a filtered colimit of smooth (finitely generated) $A$-algebras iff $f$ is regular (flat with geometrically regular fibers). This is due to Popescu and Spivakovsky; see for instance Teissier's Bourbaki talk http://www.math.jussieu.fr/~teissier/documents/Approx.BBk.pdf and the references therein (the above result is Thm. 1.1). If $A=k$ is a field, this says that $B$ is a colimit of smooth $k$-algebras iff it is geometrically regular over $k$. If $k$ is perfect (e.g. the prime field!) you can remove "geometrically". The field case might possibly be simpler than the general case.
{"set_name": "stack_exchange", "score": 4, "question_id": 123716}
TITLE: What is the analytical expression which shows the convergence of a 6 sided fair dice's expected value to 3.5 as a function of the number of rolls(N)? QUESTION [1 upvotes]: What is the analytical expression which shows the convergence of a 6-sided fair die's expected value to 3.5 as a function of the number of rolls ($N$)? I realize that there may be a need for a confidence interval. Here is how to apply the central limit theorem as an approximation for large $N$ rolls. The distribution of the sample mean multiplied by $\sqrt{N}$ is, thanks to the Central Limit Theorem, approximately normal with the population mean $\mu$ and standard deviation $\sigma$. So the distribution of the mean (or the sum) is approximately normal for large $N$ with mean $\mu$ (or $N\mu$) and standard deviation $\sigma/\sqrt{N}$ (or $\sigma\sqrt{N}$). The Central Limit Theorem can be demonstrated with characteristic functions. The characteristic function for a discrete uniform distribution such as a 6-sided die is at en.wikipedia.org/wiki/Discrete_uniform_distribution. Any help is greatly appreciated. REPLY [2 votes]: The Central Limit Theorem is an asymptotic result. I am not sure that it is what you are looking for. I imagine you are looking for some finite sample result. I will present one possible way of doing it. Let $X_i$ be a random variable taking values in $\{1,2,3,4,5,6\}$, which denotes the output of the $i^{th}$ trial. Let $S_n = \sum_{i=1}^nX_i$; then by applying Chebyshev's inequality (and noting that $var(S_n)=n \times var(X_1)$, since $(X_i)_{i=1}^n$ are i.i.d.), we have $$P\left(\left|\frac{S_n}{n}-\mu\right| > \epsilon\right) \leq \frac{1}{n\epsilon^2}var(X_1).$$ Thus for large values of $n$, this probability goes to zero for any $\epsilon >0$. It is possible to get much tighter results than this by using better concentration inequalities.
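The Chebyshev bound in the answer is easy to verify by simulation (a sketch; for a fair die $\mu = 3.5$ and $var(X_1) = 35/12$, and the trial counts below are arbitrary illustration choices):

```python
import random

MU, VAR = 3.5, 35.0 / 12.0   # mean and variance of one fair-die roll

def chebyshev_bound(n, eps):
    """Chebyshev bound on P(|S_n/n - mu| > eps)."""
    return VAR / (n * eps * eps)

def sample_mean(n, rng):
    return sum(rng.randint(1, 6) for _ in range(n)) / n

rng = random.Random(42)
n, eps, trials = 2000, 0.1, 200
misses = sum(abs(sample_mean(n, rng) - MU) > eps for _ in range(trials))
freq = misses / trials

# The empirical miss frequency should respect the (loose) Chebyshev bound.
assert freq <= chebyshev_bound(n, eps)
```

In practice the empirical frequency is far below the bound, which is the answer's point: Chebyshev gives a valid finite-sample guarantee, while sharper concentration inequalities (e.g. Hoeffding's, since the rolls are bounded) track the true decay much more closely.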
{"set_name": "stack_exchange", "score": 1, "question_id": 1933507}