\begin{document}
\title
{Convergent Iteration in Sobolev Space
for Time Dependent Closed Quantum
Systems}
\author{Joseph W. Jerome\footnotemark[1]}
\date{}
\maketitle
\pagestyle{myheadings}
\markright{Convergent Iteration in Sobolev Space}
\medskip
\vfill
{\parindent 0cm}
{\bf 2010 AMS classification numbers:}
35Q41; 47D08; 47H09; 47J25; 81Q05.
\bigskip
{\bf Key words:}
Time dependent quantum systems;
time-ordered evolution operators; Newton iteration; quadratic convergence.
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{Department of Mathematics, Northwestern University,
Evanston, IL 60208. \\In residence:
George Washington University,
Washington, D.C. 20052\\
{URL: {\tt {http://www.math.northwestern.edu/$\sim$jwj}}} $\;$
{Email: {\tt {jwj620@gwu.edu}}}
}
\begin{abstract}
Time dependent quantum systems have become indispensable in science and
its applications, particularly at the atomic and molecular levels.
Here, we discuss the approximation of closed
time dependent quantum systems on bounded domains, via
iterative methods in Sobolev space based upon evolution operators.
Recently, existence and uniqueness of weak solutions were demonstrated
by a contractive fixed point mapping defined by the evolution operators.
Convergent successive approximation is then guaranteed.
This article uses the same mapping to define
quadratically convergent Newton and approximate Newton methods.
Estimates for the constants appearing in the convergence analysis are provided.
The evolution operators are ideally suited to serve as the
framework for this operator approximation theory,
since the Hamiltonian is time-dependent. In addition, the
hypotheses required to guarantee
quadratic convergence of the Newton iteration build naturally upon the
hypotheses used for the existence/uniqueness theory.
\end{abstract}
\section{Introduction}
Time dependent density functional theory (TDDFT) dates from the seminal
article \cite{RG}.
A current account of the subject
and a development of the mathematical model may be found in \cite{U}.
For the earlier density functional theory (DFT),
we refer the reader to \cite{KS,KV}.
Our focus in this article is TDDFT only.
Closed quantum systems on bounded domains of
${\mathbb R}^{3}$ were analyzed in \cite{JP,J1},
via time-ordered evolution operators. The article \cite{JP} includes
simulations
based on approximations of the evolution operator, employing a spectral
algorithm.
The article \cite{J1} extended the existence results of \cite{JP}
to include weak
solutions via a strict contraction argument for an operator $K$;
\cite{J1} also includes the exchange-correlation component of the
Hamiltonian potential not included in \cite{JP}.
TDDFT is a significant field for applications (see
\cite{CMR,tddft,YB,LHP,MG}), including the expanding field of chemical
physics, which studies the response of atoms and molecules to external
stimuli.
By permitting time dependent potentials,
TDDFT extends the nonlinear Schr\"{o}dinger equation,
which has been studied extensively (\cite{CH,Caz}).
In this article, we build upon \cite{J1} by introducing a Newton iteration
argument for $I -K$. This is examined at several levels, including
classical Newton iteration and also
that of approximate Newton iteration, defined by residual
estimation.
It is advantageous that the existence/uniqueness theory of \cite{J1}
employs strict contraction on a domain with an `a priori' norm bound.
One consequence is that the local requirements of Newton's method can, in
principle, be satisfied by preliminary Picard iteration (successive
approximation).
The results of this article should be viewed as advancing understanding
beyond that of an existence/uniqueness theory;
they are directed toward ultimately
identifying a successful constructive approach to obtaining solutions.
In the following subsections of the introduction, we familiarize the reader with
the model, and summarize the basic results of \cite{J1},
which serve as the starting point for the present article.
In this presentation, we provide explicit estimates
for the domain and contraction constant used for the application of the
Banach contraction theorem.
Section two cites and derives essential operator results regarding Newton
iteration in Banach spaces.
Section three is devoted to an exact Newton iteration, with
quadratic convergence, for the quantum TDDFT model, whereas section four
considers an approximate quadratically convergent Newton
iteration for the TDDFT model.
Section five is a brief `Conclusion' section.
Appendices are included, which state the hypotheses used in \cite{J1}
(Appendix A),
basic definitions of the norms and function spaces adopted for this
article (Appendix B), and a general Banach space lemma
characterizing quadratic convergence for approximate Newton iteration
(Appendix C).
The constants which appear in the analysis are directly
related to the components of the potential. The external potential and the
Hartree potential present no problem.
However, as observed in \cite{U}, there is no explicit universally
accepted representation of the exchange-correlation potential, which is
required to be non-local in our approach.
It follows that the explicit convergence estimates we present are
strongest in the absence of this potential, or
in the case when concrete approximations are employed.
\subsection{The model}
TDDFT includes an external potential, the Hartree potential,
and the exchange-correlation potential.
If $\hat H$ denotes
the Hamiltonian operator of the system, then the state $\Psi(t)$ of the
system obeys the nonlinear Schr\"{o}dinger equation,
\begin{equation}
\label{eeq}
i \hbar \frac{\partial \Psi(t)}{\partial t} = \hat H \Psi(t).
\end{equation}
Here, $\Psi = \{\psi_{1}, \dots, \psi_{N}\}$ and the charge
density $\rho$ is defined by
$$ \rho({\bf x}, t) = |\Psi({\bf x}, t)|^{2} =
\sum_{k = 1}^{N} |\psi_{k} ({\bf x}, t)|^{2}.
$$
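As a small numerical illustration of this definition (not from the paper; the orbital values are random stand-ins), the charge density is obtained by summing the squared moduli of the $N$ orbitals pointwise on a spatial grid:

```python
import numpy as np

# Hedged sketch: rho = sum_k |psi_k|^2 for N orbitals sampled on a grid.
# The orbital values below are illustrative random stand-ins.
N, n_grid = 3, 8
rng = np.random.default_rng(0)
Psi = rng.normal(size=(N, n_grid)) + 1j * rng.normal(size=(N, n_grid))
rho = np.sum(np.abs(Psi) ** 2, axis=0)
# rho is a real, nonnegative function of the grid point
```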
For well-posedness, an initial condition,
\begin{equation}
\label{ic}
\Psi(0) = \Psi_{0},
\end{equation}
consisting of $N$ orbitals,
and boundary conditions must be adjoined.
We will assume that the particles are confined to a
bounded region $\Omega \subset {\mathbb R}^{3}$ and that
homogeneous Dirichlet boundary conditions hold for the evolving quantum
state within a closed system. In general, $\Psi$ denotes a finite vector
function of space and time.
\subsection{Specification of the Hamiltonian operator}
We study effective potentials $V_{\rm e}$
which are real scalar functions of the form,
$$
V_{\rm e} ({\bf x},t, \rho) = V({\bf x}, t) +
W \ast \rho + \Phi({\bf x}, t, \rho).
$$
Here,
$W({\bf x}) = 1/|{\bf x}|$
and the convolution
$W \ast \rho$
denotes the Hartree potential. If $\rho$ is extended as zero outside
$\Omega$, then, for ${\bf x} \in \Omega$,
$$
(W \ast \rho)({\bf x})=\int_{{\mathbb R}^{3}}
W({\bf x} -{\bf y}) \rho({\bf y})\;d {\bf y},
$$
which depends only upon values $W({\xi})$,
$\|{\xi}\|\leq
\mbox{diam}(\Omega)$. We may redefine $W$
smoothly outside this set,
so as to obtain a function of compact support for which Young's inequality
applies.
$\Phi$ represents a time history of $\rho$:
$$
\Phi({\bf x}, t, \rho)= \Phi_{0}({\bf x}, 0, \rho) +
\int_{0}^{t} \phi({\bf x}, s, \rho) \; ds.
$$
As explained in \cite[Sec.\thinspace 6.5]{U}, $\Phi_{0}$
is determined by
the initial state of the Kohn-Sham system and the initial state of the
interacting reference system with the same density and divergence of the
charge-current.
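The memory term $\Phi$ can be accumulated on a time grid by quadrature of its defining integral. The following sketch uses the trapezoidal rule with an illustrative stand-in for $\phi$ at a fixed spatial point (all numerical values are assumptions, not from the paper):

```python
import numpy as np

# Hedged sketch: Phi(t) = Phi0 + int_0^t phi(s) ds accumulated by the
# trapezoidal rule; phi(s) = exp(-s) is an illustrative stand-in.
t = np.linspace(0.0, 1.0, 101)
phi_vals = np.exp(-t)              # stand-in for phi(x, s, rho) at fixed x
dt = t[1] - t[0]
Phi0 = 0.5                         # assumed initial value Phi_0
increments = 0.5 * dt * (phi_vals[1:] + phi_vals[:-1])
Phi = Phi0 + np.concatenate(([0.0], np.cumsum(increments)))
# Phi[-1] approximates Phi0 + (1 - e^{-1})
```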
The Hamiltonian operator is given by, for effective
mass $m$ and normalized Planck's constant $\hbar$,
\begin{equation}
\hat H
= -\frac{\hbar^{2}}{2m} \nabla^{2}
+V({\bf x}, t) +
W \ast \rho + \Phi({\bf x}, t, \rho).
\label{Hamiltonian}
\end{equation}
\subsection{Definition of weak solution}
The solution $\Psi$ is continuous from the time interval $J$, to be
defined shortly,
into the finite energy Sobolev space
of complex-valued
vector functions which vanish in a generalized sense on the boundary,
denoted $H^{1}_{0}(\Omega)$. One writes: $\Psi \in C(J; H^{1}_{0})$.
The time derivative is continuous from $J$
into the dual $H^{-1}$ of $H^{1}_{0}$.
One writes: $\Psi \in C^{1}(J; H^{-1})$.
The spatially dependent test functions $\zeta$ are
arbitrary in $H^{1}_{0}$.
The duality bracket is denoted $\langle f, \zeta \rangle$.
\begin{definition}
For $J=[0,T]$,
the vector-valued function $\Psi = \Psi({\bf x}, t)$ is a
weak solution of (\ref{eeq}, \ref{ic}, \ref{Hamiltonian}) if
$\Psi \in C(J; H^{1}_{0}(\Omega)) \cap C^{1}(J;
H^{-1}(\Omega)),$ if $\Psi$ satisfies the initial condition
(\ref{ic}) for $\Psi_{0} \in H^{1}_{0}$,
and if $\forall \; 0 < t \leq T$:
\vspace{.25in}
\begin{equation}
i \hbar\langle \frac{\partial\Psi(t)}{\partial t},
\zeta \rangle =
\int_{\Omega} \frac{{\hbar}^{2}}{2m}
\nabla \Psi({\bf x}, t)\cdotp \nabla { \zeta}({\bf x})
+ V_{\rm e}({\bf x},t,\rho) \Psi({\bf x},t) { \zeta}({\bf x})
d{\bf x}.
\label{wsol}
\end{equation}
\end{definition}
\subsection{The associated linear problem}
The approach to solve the nonlinear problem (\ref{wsol}) is to define a
fixed point mapping $K$. For each $\Psi^{\ast}$ in the domain
$C(J;H^{1}_{0})$ of $K$
we produce the image $K \Psi^{\ast} = \Psi$ by the following
steps.
\begin{enumerate}
\item
$\Psi^{\ast} \mapsto \rho =
\rho({\bf x}, t) = |\Psi^{\ast}({\bf x}, t)|^{2}$.
\item
$\rho \mapsto \Psi$ by the solution of the
{\it associated} linear problem (\ref{wsol})
where the potential $V_{\rm e}$ uses $\rho$
in its final argument.
\end{enumerate}
In general, $\Psi \not= \Psi^{\ast}$ unless $\Psi$ is a fixed
point of $K$.
In order to construct a fixed point, one introduces the linear evolution
operator $U(t,s)$: for given $\Psi^{\ast}$ in
$C(J;H^{1}_{0})$,
set $U(t,s) = U^{\rho}(t,s)$ so that
\begin{equation}
\label{evolution}
\Psi(t) = U^{\rho}(t,0) \Psi_{0}.
\end{equation}
For each $t$, we interpret $\Psi(t)$
as a function (of ${\bf x}$).
Moreover, the effect of the evolution operator is to obtain
$\Psi = K \Psi^{\ast}$ since the operator is generated
from (\ref{wsol}) with specified $\rho$.
\subsection{Discussion of the evolution operator}
The evolution operator used here and in \cite{J1}
was introduced in two fundamental
articles \cite{K1,K2}
by Kato in the 1970s. A description of Kato's theory can be
found in \cite{J2}. For the application to (\ref{wsol}), one identifies
the frame space
with the dual space $H^{-1}$ and the smooth space with the finite energy space
$H^{1}_{0}$. A significant step is to show that the operators
$(-i/{\hbar}) \hat H(\rho)$ generate contraction semigroups on the frame space,
which remain stable on the smooth space. If one can demonstrate these
properties, then the evolution operator exists and can be used as in
(\ref{evolution}) to retrieve the solution of the initial value problem.
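A minimal finite-dimensional sketch of a time-ordered evolution operator (assuming $\hbar = m = 1$, a 1-D grid, and an illustrative time-dependent potential, none of which come from the paper) approximates $U(t,0)$ by short Crank--Nicolson steps, each freezing the Hamiltonian at the midpoint time; since each step is a Cayley transform of a Hermitian matrix, the discrete $L^{2}$ norm is conserved, mirroring the contraction-semigroup property:

```python
import numpy as np

# Hedged sketch: Dirichlet Laplacian on a 1-D grid plus a time-dependent
# potential; Crank-Nicolson time stepping with midpoint-frozen Hamiltonian.
n, L, dt, steps = 64, 1.0, 1e-3, 50
dx = L / (n + 1)
x = np.linspace(dx, L - dx, n)
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2

def H(t):
    V = 10.0 * np.sin(np.pi * x) * (1.0 + t)   # illustrative potential
    return -0.5 * lap + np.diag(V)

psi = np.sin(np.pi * x).astype(complex)
psi /= np.sqrt(dx) * np.linalg.norm(psi)       # discrete L^2 norm one
for k in range(steps):
    Hm = H((k + 0.5) * dt)
    A = np.eye(n) + 0.5j * dt * Hm
    B = np.eye(n) - 0.5j * dt * Hm
    psi = np.linalg.solve(A, B @ psi)
# Each step is unitary, so the discrete L^2 norm of psi stays equal to one
```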
\subsection{Discussion of the result}
The following theorem was proved in \cite{J1}. We include a short appendix
which describes the hypotheses under which Theorem \ref{EU} holds.
\begin{theorem}
\label{EU}
There is a closed ball ${\overline {B(0, r)}} \subset C(J; H^{1}_{0})$
on which $K$ is invariant.
For $t$ sufficiently small, $K$ defines a strict contraction. The contraction
constant is independent of the restricted time interval, so that the unique
fixed point can be continued globally in time.
In particular, for any interval $[0,T]$, the system (\ref{wsol})
has a unique solution
which coincides with this fixed point.
\end{theorem}
The proof uses the
Banach contraction mapping theorem.
We will elaborate on this after the following subsection
because it is required for later estimates.
\subsection{A variant of conservation of energy}
\label{cofe}
If the functional ${\mathcal E}(t)$ is defined
for $0 < t \leq T$ by,
$$
\int_{\Omega}\left[\frac{{\hbar}^{2}}{4m}|\nabla \Psi|^{2}
+
\left(\frac{1}{4}(W \ast |\Psi|^{2})+ \frac{1}{2}
(V + \Phi)\right) |\Psi|^{2}\right]d{\bf x},
$$
then the following identity holds for the unique solution:
\begin{equation}
{\mathcal E}(t)={\mathcal E}(0)
+
\frac{1}{2}\int_{0}^{t}\int_{\Omega}[(\partial V/\partial s)({\bf x},s)
+ \phi({\bf x}, s, \rho)]
|\Psi|^{2}\;d{\bf x}ds,
\label{consener}
\end{equation}
where
${\mathcal E}(0)$
is given by
$$
\int_{\Omega}\left[\frac{{\hbar}^{2}}{4m}|\nabla
\Psi_{0}|^{2}+\left(\frac{1}{4}
(W\ast|\Psi_{0}|^{2})+\frac{1}{2}
(V + \Phi_{0}) \right)|\Psi_{0}|^{2}\right]
\;d{\bf x}.
$$
The functional ${\mathcal E}(t)$ is related to the physical
energy $E(t)$ of the system, defined by
$
E(t) = \langle{\hat H}(t) \Psi(t), {\bar \Psi(t)} \rangle:
$
$$
{\mathcal E}(t) = \frac{1}{2} \left(E(t) - \frac{1}{2} \langle \Psi(t),
(W \ast |\Psi(t)|^{2}) {\bar \Psi(t)} \rangle \right).
$$
\subsection{Restriction of the domain of $K$}
We discuss the invariant closed ball cited in Theorem \ref{EU}.
Consider the evolution operator $U^{v}$ corresponding to
$v = 0$ (zero charge),
and use this as reference:
$$
U^{\rho}(t,0)\Psi_{0} =
[U^{\rho}(t,0)\Psi_{0} -
U^{v}(t,0)\Psi_{0}] +
U^{v}(t,0)\Psi_{0}.
$$
This is valid for the first stage of the continuation process indicated in
Theorem \ref{EU}. For subsequent stages, $\Psi_{0}$ is replaced by the
solution evaluated at discrete points $t_{k}$.
We define $r$ as follows.
\begin{equation}
\label{definitionr}
r = 2 \|U\|_{\infty, H^{1}_{0}} \max (\|\Psi_{0}\|_{H^{1}_{0}}, r_{0}).
\end{equation}
Here, $r_{0}$ is a bound for the $C(J; H^{1}_{0})$ norm of the
solution, derived from the preceding subsection, and discussed in the
following corollary. By use of identity
(\ref{IDENTITY}) below, the difference term in the
above representation can be controlled by the size of $t$,
so as not to exceed $r/2$. In particular,
the closed ball is invariant.
\begin{corollary}
The number $r^{2}_{0}$ can be chosen as any upper bound for
\begin{equation}
\label{firstr}
\frac{4m}{\hbar^{2}} \sup_{0\leq t \leq T}{\cal E}(t) +
\|\Psi_{0}\|^{2}_{L_{2}}.
\end{equation}
In particular, if we denote by ${\cal V}$ the sum $V + \Phi$, then
we may make the choice,
\begin{equation}
\label{secondr}
r^{2}_{0} =
\frac{4m}{\hbar^{2}}\left[{\cal E}_{0} + \frac{T}{2}\sup_{x\in \Omega,
t\leq T} \left|\partial {\cal V}/\partial t \right|
\|\Psi_{0}\|^{2}_{L_{2}}\right]
+ \|\Psi_{0}\|^{2}_{L_{2}}.
\end{equation}
\end{corollary}
\begin{proof}
The upper bound contained in (\ref{firstr}) uses definitions and
nonnegativity. The choice in (\ref{secondr}) uses the identity
(\ref{consener}). Both use $L^{2}$ norm invariance for $\Psi$.
\end{proof}
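The choice (\ref{secondr}) is elementary arithmetic once the constants are fixed. The following sketch evaluates it with made-up values of ${\cal E}_{0}$, $\sup |\partial {\cal V}/\partial t|$, and $\|\Psi_{0}\|_{L^{2}}$ (all assumptions, purely for illustration):

```python
import math

# Hedged numerical illustration of (secondr); every value below is assumed.
hbar, m, T = 1.0, 1.0, 0.5
E0 = 2.0                # initial value of the energy functional (assumed)
dVdt_sup = 3.0          # sup over Omega x [0,T] of |d cal V / dt| (assumed)
psi0_L2_sq = 1.0        # ||Psi_0||_{L^2}^2 (assumed)
r0_sq = (4 * m / hbar**2) * (E0 + 0.5 * T * dVdt_sup * psi0_L2_sq) + psi0_L2_sq
r0 = math.sqrt(r0_sq)
```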
\subsection{The contractive property: Role of the evolution operator}
\label{CP}
There is an integral operator identity satisfied by the evolution operator
which permits the estimation of the metric distance between
$K \Psi^{\ast}_{1}$ and
$K \Psi^{\ast}_{2}$. Note that separate evolution operators
$U^{\rho_{1}}(t,s)$
and $U^{\rho_{2}}(t,s)$ are generated, for $\rho_{1} = |\Psi_{1}^{\ast}|^{2},
\rho_{2} = |\Psi_{2}^{\ast}|^{2}$.
One has the following identity (see \cite{J2}):
\begin{equation}
U^{\rho_{1}}\Psi_{0}(t) - U^{\rho_{2}}\Psi_{0}(t) =
\frac{i}{\hbar}\int_{0}^{t} U^{\rho_{1}}(t,s)[{\hat H}(s, \rho_{1}) -
{\hat H}(s, \rho_{2})]U^{\rho_{2}}(s,0)\Psi_{0} \; ds.
\label{IDENTITY}
\end{equation}
The $H^{1}_{0}$-norm of the evolution operators can be
uniformly bounded in $t,s$ by a
constant (see below).
The following Lipschitz condition,
on the (restricted) domain
${\overline {B(0, r)}} \subset C(J; H^{1}_{0})$ of $K$
was used in \cite{J1}:
\begin{equation}
\label{LipforV}
\|[V_{\rm e}({\rho_{1}})-
V_{\rm e}({\rho_{2}})]\psi\|_{C(J;H^{1}_{0})} \leq
C\|\Psi^{\ast}_{1} -
\Psi^{\ast}_{2}
\|_{C(J;H^{1}_{0})} \|\psi\|_{H^{1}_{0}}.
\end{equation}
Here $C$ is a fixed positive constant (see below)
and $\psi$ is arbitrary in $H^{1}_{0}$.
Inequality (\ref{LipforV}) (see
Theorem \ref{hartreeLip} and (\ref{ecfollowsH}) to follow)
can be used to verify contraction, for sufficiently small $t$,
in terms of the distance between
$\Psi^{\ast}_{1}$ and
$\Psi^{\ast}_{2}$. In other words, one begins by replacing
$J$ by $[0, t]$ so that the contraction constant $\gamma$, estimated below, satisfies $\gamma < 1$.
Continuation to $t = T$ occurs
in a finite number of steps.
We now present a lemma which estimates the Lipschitz
(contraction) constant $\gamma$
of $K$ on ${\overline {B(0,r)}}$.
In the lemma, and elsewhere, we use the notation,
\begin{equation}
\label{uniformstU}
\|U\|_{\infty,H^{1}_{0}} :=
\sup_{t \in J, s \in J} \|U(t,s)\|_{H^{1}_{0}}.
\end{equation}
\begin{lemma}
\label{contractionconstant}
The mapping $K$ is Lipschitz continuous on
${\overline {B(0,r)}}$, with Lipschitz constant $\gamma = \gamma_{t}$
estimated by
\begin{equation}
\label{gamma}
\gamma = \gamma_{t}
\leq \frac{Ct}{\hbar} \|U\|_{\infty, H^{1}_{0}}^{2}\;\|\Psi_{0}\|_{H^{1}_{0}}.
\end{equation}
Here, $C$ is the constant of (\ref{LipforV}). $C$ can be estimated
precisely in the case when $V_{\rm e}(\rho)$ is independent of $\Phi$.
If $E_{1}$ is the Sobolev embedding constant associated
with the embedding of $H^{1}_{0}$ into $L^{6}$, then $C = 2r C_{0}$, where
\begin{equation}
\label{constantC}
C_{0} = E_{1}^{2}[ E_{1}\|\nabla W\|_{L^{1}} + |\Omega|^{2/3}\|W\|_{L^{2}}
+ E_{1}^{2}|\Omega|].
\end{equation}
Here, $W$ is the Hartree convolution kernel, and $r$ has the meaning of
(\ref{definitionr}). Also, in this case, we have the estimate,
\begin{equation}
\label{estEVY}
\|U\|_{\infty, H^{1}_{0}} \leq \exp \left(\frac{E_{1}\hbar^{2}T}{2 m}
\left[2r E_{1}^{2}
\|\nabla W\|_{L^{1}} + \|\nabla V\|_{C(J; L^{3})}\right]\right).
\end{equation}
If $\Phi$ is included, the estimate for $C_{0}$ in (\ref{constantC}) is
incremented by the constant appearing in (\ref{ecfollowsH}).
Also, the rhs of (\ref{estEVY}) is modified: within $[\,\cdot\,]$,
one adds a uniform estimate for $\|\nabla \Phi \|_{L^{3}}$.
\end{lemma}
\begin{proof}
The estimate (\ref{gamma}) is a direct consequence of estimating the
$H^{1}_{0}$ norm of (\ref{IDENTITY}).
The estimation of the constant $C$ in (\ref{gamma}) follows from tracking
the constants appearing in
the proofs of Lemma \ref{3.1} and Theorem \ref{hartreeLip} to
follow. The uniform bound for
$ \|U\|_{\infty, H^{1}_{0}}$ is established in the construction of Kato;
we used the explicit estimate given in \cite[Corollary 6.3.6, p. 229]{J2},
together with the fact that the evolution operators are contractive on the
dual space, and the fact that we employ the canonical isomorphism between
$H^{1}_{0}$ and $H^{-1}$.
\end{proof}
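The bound (\ref{gamma}) also determines how small the time step must be taken in the continuation process. The sketch below evaluates the bound and the largest $t$ guaranteeing $\gamma_{t} \leq 1/2$, using assumed values of $C$, $\|U\|_{\infty, H^{1}_{0}}$, and $\|\Psi_{0}\|_{H^{1}_{0}}$ (all hypothetical, for illustration only):

```python
# Hedged illustration of gamma_t <= (C t / hbar) ||U||^2 ||Psi_0||;
# every constant below is an assumption, not from the paper.
hbar = 1.0
C = 4.0                 # Lipschitz constant of (LipforV) (assumed)
U_norm = 1.5            # uniform H^1_0 bound on the evolution operators
psi0_norm = 1.0         # ||Psi_0||_{H^1_0}

def gamma_bound(t):
    return (C * t / hbar) * U_norm**2 * psi0_norm

# Largest t with gamma_t <= 1/2, ensuring strict contraction on [0, t]:
t_star = 0.5 * hbar / (C * U_norm**2 * psi0_norm)
```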
\section{Background Results for Newton Iteration in Banach Space}
We have seen previously that,
by appropriate use of successive approximation, we can, in principle,
determine approximations of arbitrary prescribed accuracy.
In fact, if it is required of the $n$th iterate $\psi_{n}$
of a contractive map with contraction constant $q$ and fixed point
$\psi$ that
\begin{equation}
\label{SA}
\|\psi_{n} - \psi \| \leq \epsilon,
\end{equation}
then the well-known successive approximation estimate,
$$
\|\psi_{n} - \psi \| \leq
\frac{q^{n}}{1 - q}
\|\psi_{1} - \psi_{0} \| \leq \epsilon,
$$
determines the smallest admissible value of $n$.
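The required number of Picard steps can be computed directly from the contraction constant. A minimal sketch (the function name and inputs are illustrative):

```python
import math

# Smallest n with q**n / (1 - q) * ||psi_1 - psi_0|| <= eps,
# obtained by solving the successive approximation estimate for n.
def picard_steps(q, first_step_norm, eps):
    assert 0 < q < 1
    target = eps * (1 - q) / first_step_norm
    return max(0, math.ceil(math.log(target) / math.log(q)))
```

For example, with $q = 1/2$, $\|\psi_{1} - \psi_{0}\| = 1$, and $\epsilon = 10^{-3}$, eleven iterations suffice.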
It is natural then to investigate rapidly converging local methods,
specifically, Newton's method.
In this article, we will discuss
both exact and approximate quadratically convergent Newton methods. The latter
permit approximate derivative inverses, and thus approximate
Newton iterations.
\subsection{The core convergence result}
We have included in Appendix \ref{appendixC} a
core Lemma
which permits both exact
and approximate Newton methods in Banach spaces. It is based upon earlier
work of the author \cite{J3} and will serve as a resource result.
In the estimates, $h\leq 1/2$ should be viewed as a discretionary
parameter, and $\kappa, \sigma$ as constraining parameters to be
determined. The
locality of Newton's method is incorporated in the requirement that the
initial residual not exceed $\sigma^{-1}$.
The parameter $\alpha < 1$ positions $u_{0}$ in the interior of an
appropriate closed ball. The parameter $\tau < 1$ is part of the defining
equation for invertibility of the derivative map.
\subsection{An exact Newton method for fixed point maps}
We discuss the case where the exact inverse is employed for the
Fr\'{e}chet derivative. We derive a general operator result,
applied in section three.
\begin{proposition}
\label{VK}
Let ${\mathcal O}$ be an open subset of a Banach space $X$,
and let $P: {\mathcal O} \mapsto X$ be such that $P$ is Lipschitz continuously
Fr\'{e}chet differentiable on ${\mathcal O}$,
and such that
$S^{\prime}(x_{0}) = I - P^{\prime}(x_{0})$
is invertible for some $x_{0} \in {\mathcal O}$.
Then there is a suitable closed ball
$B_{\delta} = {\overline {B(x_{0}, \delta)}}$
for which
$S^{\prime}(v)$ is invertible for each $v \in B_{\delta}$.
If $0 < \alpha < 1$ and $h \leq 1/2$, then there are choices of $\kappa$ and
$\sigma$
such that, if $u_{0} \in B_{\alpha \delta}$ satisfies
the consistency condition,
\begin{equation}
\label{smallres}
\|Su_{0}\| \leq \sigma^{-1},
\end{equation}
then the hypotheses (\ref{Kanone}, \ref{Kantwo})
hold with $G_{v} = [S^{\prime}(v)]^{-1}$.
If $P$ is a strict contraction on $B_{\delta}$, with unique fixed point
$x_{0}$, then a starting iterate $u_{0} \in B_{\alpha \delta}$ can be
found for which (\ref{smallres}) holds.
In particular, (\ref{R-quadratic}) holds for the Newton iteration with
$u$ identified with $x_{0}$.
\end{proposition}
\begin{proof}
According to the Lipschitz property satisfied
by $P^{\prime}$ on ${\mathcal O}$,
say Lipschitz constant $c$,
for any specified positive $\tau < 1$,
we can find $\delta > 0$ such that
$$
\|[I - P^{\prime}(x_{0})]^{-1} \| \;
\|P^{\prime}(x_{0}) - P^{\prime}(v)\|
\leq \tau < 1, \; \mbox{if} \;
\|x_{0} - v\| \leq \delta.
$$
By a standard perturbation lemma \cite{G},
it follows that $I - P^{\prime}(v)$ is invertible in the closed
ball of radius $\delta$, centered at $x_{0}$.
The perturbation lemma gives the uniform bound
for the norms of the inverses $[S^{\prime}(v)]^{-1}$:
\begin{equation}
\label{definitionkappa}
\kappa:= \frac{\|[S^{\prime}(x_{0})]^{-1}\|}{1 - \tau}.
\end{equation}
This gives (\ref{Kanone}).
Now choose $\sigma$ sufficiently large so that the following two
inequalities hold:
$$
c \kappa^{2} \leq h \sigma, \; \frac{\kappa}{h \sigma} \leq (1-
\alpha)\delta.
$$
Suppose $u_{0} \in B_{\alpha \delta}$.
To obtain (\ref{Kantwo}),
we employ a version of the fundamental theorem of
calculus for Fr\'{e}chet derivatives (valid in Fr\'{e}chet spaces \cite{H}):
\begin{equation*}
S(u_{k}) =
\int_{0}^{1} [S^{\prime}(u_{k-1} +t (u_{k} - u_{k-1})) - S^{\prime}(u_{k-1})]
(u_{k} - u_{k-1}) \; dt.
\end{equation*}
By estimating this integral, we obtain via (\ref{Kanone}):
$$
\|S(u_{k})\| \leq \frac{c}{2} \|u_{k} - u_{k-1}\|^{2}
\leq \frac{c \kappa^{2}}{2}\|S(u_{k-1})\|^{2}.
$$
By the choice of $\sigma$, we thus obtain (\ref{Kantwo}).
We now consider the residual condition for $u_{0}$.
Suppose $P$ is a strict contraction, with contraction constant $q$ and
fixed point $x_{0}$.
In order to obtain the consistency and convergence of the iterates, we
select $\epsilon = \sigma^{-1}/(1 + q)$ in (\ref{SA}) and identify $u_{0}$
with the
$n$th successive approximation $p_{n} = P p_{n-1}$ defined by $P$.
This works since, by interpreting (\ref{SA}), we have
$$
\|Sp_{n}\| = \|Pp_{n} - p_{n}\| = \|(Pp_{n} - Px_{0}) + (x_{0} - p_{n})\|
\leq (q + 1) \epsilon \leq \sigma^{-1}.
$$
The Lemma of Appendix C now applies.
\end{proof}
\begin{corollary}
\label{exactcombined}
Suppose that $P$ is a strict contraction, with contraction constant $q$.
Suppose the hypotheses of Proposition \ref{VK} are satisfied and suppose
$c$ is the Lipschitz constant of $P^{\prime}$.
If $S = I - P$, define
$\kappa$ by (\ref{definitionkappa}) and $\sigma$ by
\begin{equation}
\label{explicitsigma}
\sigma = \max \left(\frac{c \kappa^{2}}{h},
\frac{c \kappa^{2} (1 - \tau)}{h (1- \alpha) \tau} \right).
\end{equation}
If $u_{0}$ is defined by successive approximation to be the iterate
$p_{n}$ satisfying (\ref{SA}) with $\epsilon = \frac{1}{\sigma (1 + q)}$,
then the exact Newton iteration is quadratically convergent
as in (\ref{R-quadratic}).
\end{corollary}
\begin{remark}
Note that, along the curve $1 - \tau = \tau (1 - \alpha)$, in the open square
$1/2 < \tau < 1, 0 < \alpha < 1$,
both expressions on the rhs of (\ref{explicitsigma}) are equal.
\end{remark}
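The iteration of Proposition \ref{VK} and Corollary \ref{exactcombined} can be sketched in finite dimensions: for $S = I - P$ with $P$ a strict contraction, Newton's method updates $u_{k+1} = u_{k} - [I - P^{\prime}(u_{k})]^{-1} S(u_{k})$. The map $P$ below is a hypothetical smooth contraction on ${\mathbb R}^{2}$, chosen only to make the scheme concrete:

```python
import numpy as np

# Hypothetical contraction P(x) = 0.25 * tanh(A x) + b on R^2;
# ||P'|| <= 0.25 ||A|| < 1, so P has a unique fixed point.
A = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([0.3, -0.1])

def P(x):
    return 0.25 * np.tanh(A @ x) + b

def P_prime(x):
    # Jacobian of P: 0.25 * diag(sech^2(A x)) @ A
    s = 1.0 / np.cosh(A @ x) ** 2
    return 0.25 * (s[:, None] * A)

def newton_for_fixed_point(x, steps=6):
    # Newton iteration for S(x) = x - P(x) = 0
    for _ in range(steps):
        Sx = x - P(x)
        x = x - np.linalg.solve(np.eye(2) - P_prime(x), Sx)
    return x

x_star = newton_for_fixed_point(np.zeros(2))
```

Here the starting iterate $u_{0} = 0$ already satisfies the residual condition because $P$ is strongly contractive; in general one would first run Picard iteration to reach the ball $B_{\alpha\delta}$, as the proposition prescribes.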
\section{Classical Newton Iteration for the Quantum System}
This section is devoted to the exact Newton method for our
model, as based upon the fixed point mapping $K$, which is identified with
$P$ in Proposition \ref{VK}.
\subsection{Fundamental inequality for the Hartree potential}
The results of this subsection were used in the estimation of the
contraction constant $\gamma$ of $K$ (see Lemma
\ref{contractionconstant}).
We begin with a lemma for the convolution of $W = 1/|x|$ with products of
$H^{1}_{0}(\Omega)$ functions. This is later applied to the Hartree
potential.
\begin{lemma}
\label{3.1}
Suppose that $f,g$ and $\psi$ are arbitrary functions in $H^{1}_{0}$, and
set $w = W \ast (fg)$. Then
\begin{equation}
\label{conprod}
\|w \psi\|_{H^{1}_{0}} \leq C \|f\|_{H^{1}_{0}} \|g\|_{H^{1}_{0}}
\|\psi\|_{H^{1}_{0}}
\end{equation}
where $C$ is a generic constant.
\end{lemma}
\begin{proof}
We claim that
the following two inequalities are sufficient to prove
(\ref{conprod}).
\begin{eqnarray*}
\|w\|_{W^{1,3}} &\leq& C_{1} \|f\|_{H^{1}_{0}}\|g\|_{H^{1}_{0}} \\
\|w\|_{L^{\infty}} &\leq& C_{2} \|f\|_{H^{1}_{0}}\|g\|_{H^{1}_{0}}.
\end{eqnarray*}
Suppose that these two inequalities hold. We show that the lemma follows.
We use duality \cite[Theorem 4.3, p. 89]{Rudin} to estimate the
$H^{1}_{0}$ norm of $w \psi$. To begin, we have
$$
\|w \psi \|_{H^{1}_{0}} =
\sup_{\|\omega\|_{H^{1}_{0}} \leq 1}\left\{\left|\int_{\Omega}\nabla
(w\psi) \cdotp \nabla \omega +
\int_{\Omega}
w \psi \omega \right| \right\}.
$$
Furthermore,
$$
\nabla (w \psi) = (\nabla w) \psi + w \nabla \psi,
$$
so that
$$
\nabla (w \psi) \cdotp \nabla \omega =
(\nabla w) \psi \cdotp \nabla \omega + w \nabla \psi \cdotp
\nabla \omega.
$$
Upon taking the $L^{1}$ norm of both sides of this relation, we obtain,
$$
\|\nabla (w \psi) \cdotp \nabla \omega \|_{L^{1}} \leq
\|(\nabla w) \psi \cdotp \nabla \omega \|_{L^{1}} +
\|w \nabla \psi \cdotp
\nabla \omega\|_{L^{1}}.
$$
We now estimate each of these terms via the H\"{o}lder and Sobolev
inequalities. For the first term,
\begin{equation}
\label{I}
\|(\nabla w) \psi \cdotp \nabla \omega \|_{L^{1}} \leq
\|\psi\|_{L^{6}} \|\nabla w \|_{L^{3}} \|\nabla \omega \|_{L^{2}}
\leq E_{1} \|\psi \|_{H^{1}_{0}} C_{1} \|f \|_{H^{1}_{0}} \|g \|_{H^{1}_{0}},
\end{equation}
where $E_{1}$ is a Sobolev embedding constant
($H^{1}_{0} \hookrightarrow L^{6}$),
and where we have used the
first inequality of the claim, together with the norm assumption on
$\omega$.
For the second term above,
\begin{equation}
\label{II}
\|w \nabla \psi \cdotp
\nabla \omega\|_{L^{1}} \leq \|w\|_{L^{\infty}} \|\nabla \psi
\|_{L^{2}} \|\nabla \omega\|_{L^{2}}
\leq C_{2} \|f \|_{H^{1}_{0}} \|g\|_{H^{1}_{0}}
\|\psi \|_{H^{1}_{0}},
\end{equation}
where we have used the
second inequality of the claim, together with the norm assumption on
$\omega$.
We estimate the final term in the supremum.
\begin{equation}
\label{III}
\|w \psi
\omega\|_{L^{1}} \leq \|w\|_{L^{3}} \|\psi\|_{L^{3}}
\|\omega\|_{L^{3}}
\leq E_{2}^{2}
\|\psi \|_{H^{1}_{0}} C_{1} \|f\|_{H^{1}_{0}}
\|g\|_{H^{1}_{0}},
\end{equation}
where $E_{2}$ is a Sobolev embedding constant
($H^{1}_{0} \hookrightarrow L^{3}$),
and we have used the norm
assumption on $\omega$.
By assembling inequalities (\ref{I}, \ref{II}, \ref{III}), we obtain the
estimate of the lemma if the claim is valid.
We now verify each of the inequalities of the claim.
For the first, we have
\begin{eqnarray}
\nabla w = \nabla
[W \ast (fg)]
&=& \nabla W \ast (fg) \nonumber \\
\|\nabla w\|_{L^{3}} &\leq& \|\nabla W\|_{L^{1}}
\|fg\|_{L^{3}} \nonumber \\
&\leq&
\|\nabla W\|_{L^{1}}
\|f\|_{L^{6}}\; \|g\|_{L^{6}} \nonumber \\
&\leq&
\|\nabla W\|_{L^{1}}
E_{1}^{2}
\|f\|_{H^{1}_{0}}\; \|g\|_{H^{1}_{0}},
\label{CI}
\end{eqnarray}
where we have used the Young, H\"{o}lder, and Sobolev inequalities.
For the second inequality of the claim, we have, by similar inequalities,
\begin{eqnarray}
\|w\|_{L^{\infty}} &=& \sup_{x \in \Omega}\left| \int_{\Omega}
\frac{f(y)g(y) dy}{|x - y|}\right| \nonumber \\
&\leq& \|W\|_{L^{2}} \|fg\|_{L^{2}} \nonumber \\
&\leq&
\|W\|_{L^{2}}
\|f\|_{L^{4}}\; \|g\|_{L^{4}} \nonumber \\
&\leq&
\|W\|_{L^{2}}
E_{3}^{2}
\|f\|_{H^{1}_{0}}\; \|g\|_{H^{1}_{0}}.
\label{CII}
\end{eqnarray}
Here, $E_{3}$ is a Sobolev embedding constant
($H^{1}_{0} \hookrightarrow L^{4}$).
This establishes the claim, with specific estimates for $C_{1}, C_{2}$,
and the proof is concluded.
\end{proof}
\begin{remark}
Since convolution with $W/(4 \pi)$ provides a right inverse for the (negative)
Laplacian, we could infer the $L^{\infty}$ property of $w$ from the theory
of elliptic equations (see \cite[Th. 8.16, p.181]{GT}). However, we
require the explicit inequality (\ref{CII}).
\end{remark}
\begin{theorem}
\label{hartreeLip}
For the Hartree potential,
$$
H = W \ast \rho, \; \rho = |\Psi|^{2} \; (\Psi \in H^{1}_{0}),
$$
and $\psi \in H^{1}_{0}$,
we have,
$$
\|[W \ast (\rho_{1} - \rho_{2})]\psi \|_{H^{1}_{0}} \leq C
\|\Psi_{1} - \Psi_{2}\|_{H^{1}_{0}} \|\psi\|_{H^{1}_{0}}.
$$
This inequality remains valid when $\Psi_{1}, \Psi_{2}$ are
functions in $C(J; H^{1}_{0})$. The appropriate norm subscripts are
replaced by
$C(J; H^{1}_{0})$. In this case,
$C$ is explicitly discussed in
Lemma \ref{contractionconstant}.
\end{theorem}
\begin{proof}
We apply the previous lemma after the simple factorization,
$$
\rho_{1} - \rho_{2} =
(|\Psi_{1}| - |\Psi_{2}|)
(|\Psi_{1}| + |\Psi_{2}|).
$$
In the lemma, we select
$$
f = |\Psi_{1}| - |\Psi_{2}|, \;
g = |\Psi_{1}| + |\Psi_{2}|.
$$
The use of the reverse triangle inequality applied to $\|f\|_{H^{1}_{0}}$
and the standard triangle inequality applied to
$\|g\|_{H^{1}_{0}}$ implies the estimate.
\end{proof}
\subsection{Hypothesis
for the exchange-correlation potential}
The hypotheses required of the exchange-correlation potential $\Phi$ in
\cite{J1} are listed in the appendix. An additional hypothesis is required
if Fr\'{e}chet derivatives are required, as is the case in this article.
The hypothesis mirrors (\ref{conprod}). Note that $\Phi$ is defined in
section 1.2. The integrand $\phi$ may be a functional of $\rho$, as is the
case for the Hartree potential.
\begin{assumption}
\label{3.1def}
In addition to the hypotheses on $\Phi$ expressed in the appendix, we
assume the following.
\begin{itemize}
\item
The functional derivative of $\Phi$ with respect to $\rho$ exists, is
defined on products of
$C(J; H^{1}_{0})$ functions, is linear both in
the product and in each factor,
and satisfies, for $z = (\partial \Phi/\partial
\rho)(fg)$, and $f,g \in
C(J; H^{1}_{0}),
\psi \in H^{1}_{0}$,
\begin{equation}
\label{conprod2}
\|z \psi\|_{C(J; H^{1}_{0})}
\leq C \|f\|_{C(J; H^{1}_{0})} \|g\|_{C(J; H^{1}_{0})}
\|\psi\|_{H^{1}_{0}},
\end{equation}
for some constant $C$.
\end{itemize}
\end{assumption}
\subsection{G\^{a}teaux differentiability}
For application in this section, we rewrite
(\ref{IDENTITY}) and
(\ref{LipforV})
in slightly simplified notation:
\begin{equation}
U^{\rho_{1}}\Psi_{0}(t) - U^{\rho_{2}}\Psi_{0}(t) =
\frac{i}{\hbar}\int_{0}^{t} U^{\rho_{1}}(t,s)[V_{\rm e}(s, \rho_{1}) -
V_{\rm e}(s, \rho_{2})]U^{\rho_{2}}(s,0)\Psi_{0} \; ds.
\label{IDENTITY2}
\end{equation}
\begin{equation}
\label{LipV}
\|[V_{\rm e}({\rho_{1}})-
V_{\rm e}({\rho_{2}})]\psi\|_{C(J;H^{1}_{0})} \leq
C\|\Psi_{1} -
\Psi_{2}
\|_{C(J;H^{1}_{0})} \|\psi\|_{H^{1}_{0}}.
\end{equation}
\begin{remark}
The constant $C$ in inequality (\ref{LipV})
and a uniform bound for the operator norm
$\|U^{\rho}(t,s)\|_{\infty, H^{1}_{0}}$
are discussed in
Lemma \ref{contractionconstant}.
We will represent such a bound by $\sup \|U^{\rho}\|$ below.
\end{remark}
\begin{lemma}
\label{uniform}
Suppose that $\Psi_{\epsilon}$ converges to $\Psi$ in $C(J; H^{1}_{0})$
as $\epsilon \rightarrow 0$. Then $U^{\rho_{\epsilon}}$ converges to
$U^{\rho}$ in the operator topology, uniformly in $t,s$.
In fact, the convergence is of order
$O(\|\Psi_{\epsilon} - \Psi\|_{C(J;H^{1}_{0})})$.
\end{lemma}
\begin{proof}
We use the following operator representation for
$U^{\rho_{\epsilon}}(t,s) - U^{\rho}(t,s)$, for $0\leq s < t$, which is the
appropriate substitute for (\ref{IDENTITY2}):
\begin{equation}
U^{\rho_{\epsilon}}(t,s) - U^{\rho}(t,s) =
\frac{i}{\hbar}\int_{s}^{t} U^{\rho_{\epsilon}}(t,r)
[V_{\rm e}(r, \rho_{\epsilon}) -
V_{\rm e}(r, \rho)]U^{\rho}(r,s) \; dr.
\label{IDENTITY3}
\end{equation}
Applying this operator to an arbitrary $\psi \in
H^{1}_{0}$ of norm not exceeding one yields the upper bound:
\begin{equation}
\label{upperbd}
(T/\hbar)\;
\sup \|U^{\rho}\|
\sup_{0 \leq s \leq r \leq T} \|
[V_{\rm e}(r,\rho_{\epsilon})-V_{\rm e}(r,\rho)]U^{\rho}(r,s)\psi\|_{H^{1}_{0}}.
\end{equation}
An application of (\ref{LipV}) yields the
bound of a constant times
$$
\|\Psi_{\epsilon} - \Psi\|_{C(J;H^{1}_{0})},
$$
which completes the proof.
\end{proof}
\begin{proposition}
\label{Dif}
The operator $K$ is G\^{a}teaux differentiable on
$C(J;H^{1}_{0})$. The derivative is given by (\ref{defKprime}) below, and
is a bounded linear operator on
$C(J;H^{1}_{0})$,
with bound given by (\ref{adapest}) below.
\end{proposition}
\begin{proof}
Let $\Psi$ be a given element of $C(J; H^{1}_{0})$ and set $\rho = |\Psi|^{2}$.
We begin with the formula (\ref{IDENTITY2})
and make the
identification,
$$\rho_{\epsilon} =
|\Psi + \epsilon \omega|^{2}, \; \mbox{for} \; \omega \in H^{1}_{0}, \;
\epsilon \in {\mathbb R}, \epsilon \not=0. $$
By direct calculation, this gives for the G\^{a}teaux derivative of $K$
at $\Psi$, evaluated at arbitrary $\omega$:
$$
K^{\prime}(\Psi)[\omega] = \lim_{\epsilon \rightarrow 0}
\frac{U^{\rho_{\epsilon}}\Psi_{0}(t) -
U^{\rho}\Psi_{0}(t)}{\epsilon} =
$$
\begin{equation}
\frac{2i}{\hbar}\int_{0}^{t}
U^{\rho}(t,s)\left[\mbox{Re}({\bar \Psi} \omega) \ast W
+ (\partial \Phi/\partial \rho)
\mbox{Re}({\bar \Psi} \omega) \right]
U^{\rho}(s,0)\Psi_{0} \; ds.
\label{defKprime}
\end{equation}
Indeed, by direct calculation, we obtain
$$
\frac{U^{\rho_{\epsilon}}\Psi_{0}(t) - U^{\rho}\Psi_{0}(t)}{\epsilon} =
$$
$$
\frac{2i}{\hbar}\int_{0}^{t}
U^{\rho_{\epsilon}}(t,s)\left[\mbox{Re}({\bar \Psi} \omega) \ast W
+ (\partial \Phi/\partial \rho)
\mbox{Re}({\bar \Psi} \omega) \right]
U^{\rho}(s,0)\Psi_{0} \; ds \; +
$$
$$
\frac{i\epsilon}{\hbar}\int_{0}^{t}
U^{\rho_{\epsilon}}(t,s)\left[|\omega|^{2} \ast W
+ (\partial \Phi/\partial \rho)
|\omega|^{2} \right]
U^{\rho}(s,0)\Psi_{0} \; ds.
$$
An application of Lemma \ref{uniform} yields the limit. In fact, the first term
converges to the derivative, and the second term converges to zero; note
that the multiplier of $\epsilon$ remains bounded.
We now verify that
$K^{\prime}(\Psi)$ is a bounded linear operator on $C(J; H^{1}_{0})$.
By a direct estimate of the representation for
$K^{\prime}(\Psi)[\omega]$, as given in (\ref{defKprime}), we have
the norm estimate,
\begin{equation}
\label{adapest}
\|K^{\prime}(\Psi)\| \leq (2 C_{0} T/\hbar)
\|U^{\rho}\|_{\infty, H_{0}^{1}}^{2}
\|\Psi_{0}\|_{H^{1}_{0}}.
\end{equation}
The constant $C_{0}$ and the operator norm
$ \|U^{\rho}\|_{\infty, H_{0}^{1}}^{2}$ are discussed in
Lemma \ref{contractionconstant}.
This concludes the proof.
\end{proof}
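The structure of the G\^{a}teaux derivative can be mimicked in a toy finite-dimensional setting. The map $F$ below is a hypothetical stand-in (not the operator $K$ of this paper): it shares the quadratic dependence on $|\psi|^{2}$, so its directional derivative carries the factor $2\,\mbox{Re}({\bar \Psi}\omega)$ appearing in (\ref{defKprime}), and the finite-difference quotient differs from it by a term of order $\epsilon$ carrying $|\omega|^{2}$, exactly as in the proof above.

```python
import numpy as np

# Toy analogue (not the paper's operator K): for a complex vector psi, define
# F(psi) = |psi|^2 * psi0 componentwise. Its Gateaux derivative in direction
# omega is 2*Re(conj(psi)*omega) * psi0, mirroring the Re(bar{Psi} omega)
# factor in the derivative formula of the proposition.
rng = np.random.default_rng(0)
psi = rng.standard_normal(5) + 1j * rng.standard_normal(5)
omega = rng.standard_normal(5) + 1j * rng.standard_normal(5)
psi0 = rng.standard_normal(5) + 1j * rng.standard_normal(5)

def F(p):
    return (p * p.conj()).real * psi0

exact = 2.0 * (psi.conj() * omega).real * psi0

for eps in [1e-3, 1e-5]:
    fd = (F(psi + eps * omega) - F(psi)) / eps
    # The O(eps) remainder carries the |omega|^2 term, as in the proof above.
    print(eps, np.max(np.abs(fd - exact)))
```

The error printed shrinks proportionally to $\epsilon$, consistent with the bounded multiplier of $\epsilon$ noted in the proof.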
\subsection{Lipschitz continuous Fr\'{e}chet differentiability}
\begin{proposition}
\label{LF}
The operator $K$ is continuously G\^{a}teaux differentiable
and thus continuously Fr\'{e}chet differentiable. The derivative is,
in fact, Lipschitz continuous on
${\overline {B(0, r)}} \subset C(J; H^{1}_{0})$.
\end{proposition}
\begin{proof}
It is sufficient to prove the Lipschitz continuity of the G\^{a}teaux
derivative, as obtained in (\ref{defKprime}).
For given $\Psi_{1}$ and $\Psi_{2}$, and
arbitrary $\omega, \|\omega\|_{H^{1}_{0}} \leq 1$,
we write the difference of
$K^{\prime}[\Psi_{1}](\omega)$ and
$K^{\prime}[\Psi_{2}](\omega)$ as:
$$
K^{\prime}[\Psi_{1}](\omega) -
K^{\prime}[\Psi_{2}](\omega) =
$$
$$
\frac{2i}{\hbar}\int_{0}^{t}
U^{\rho_{1}}(t,s)\left[\mbox{Re}({\bar \Psi_{1}} \omega) \ast W
+ \frac{\partial \Phi}{\partial \rho}
\mbox{Re}({\bar \Psi_{1}} \omega) \right]
U^{\rho_{1}}(s,0)\Psi_{0} \; ds \;-
$$
$$
\frac{2i}{\hbar}\int_{0}^{t}
U^{\rho_{2}}(t,s)\left[\mbox{Re}({\bar \Psi_{2}} \omega) \ast W
+ \frac{\partial \Phi}{\partial \rho}
\mbox{Re}({\bar \Psi_{2}} \omega) \right]
U^{\rho_{2}}(s,0)\Psi_{0} \; ds.
$$
The latter difference can be written as the sum of the three differences,
$D_{1}, D_{2}, D_{3}$, where $D_{1} =$
$$
\frac{2i}{\hbar}\int_{0}^{t}
U^{\rho_{1}}(t,s)\left[\mbox{Re}({\bar \Psi_{1}} \omega) \ast W
+ \frac{\partial \Phi}{\partial \rho}
\mbox{Re}({\bar \Psi_{1}} \omega) \right]
(U^{\rho_{1}}(s,0)-U^{\rho_{2}}(s,0))\Psi_{0} \; ds,
$$
where $D_{2} = $
$$
\frac{2i}{\hbar}\int_{0}^{t}
U^{\rho_{1}}(t,s)\left[\mbox{Re}(({\bar \Psi_{1}}-{\bar\Psi_{2}}) \omega)\ast W
+ \frac{\partial \Phi}{\partial \rho}
\mbox{Re}(({\bar \Psi_{1}} -{\bar \Psi_{2}}) \omega) \right]
U^{\rho_{2}}(s,0)\Psi_{0} \; ds,
$$
and where $D_{3} =$
$$
\frac{2i}{\hbar}\int_{0}^{t}
[U^{\rho_{1}}(t,s)-
U^{\rho_{2}}(t,s)]
\left[\mbox{Re}({\bar \Psi_{2}} \omega) \ast W
+ \frac{\partial \Phi}{\partial \rho}
\mbox{Re}({\bar \Psi_{2}} \omega) \right]
U^{\rho_{2}}(s,0)\Psi_{0} \; ds.
$$
\begin{itemize}
\item
Estimation of $D_{1}$.
\end{itemize}
We estimate from left to right inside the integral as follows.
The uniform boundedness of the evolution operator, followed by the
combination of Lemma \ref{3.1} and Assumption \ref{3.1def},
gives the estimate for the $D_{1}$ contribution:
$$
\frac{2rT\gamma_{T}}{\hbar} \beta \|U\|_{\infty, H^{1}_{0}}
\|\Psi_{1} - \Psi_{2}\|_{C(J; H^{1}_{0})},
$$
where $\beta$ is defined below. Here, $r$ is defined in the
introduction in (\ref{definitionr})
and $\gamma_{T}, \|U\|_{\infty, H^{1}_{0}}$ are discussed in Lemma
\ref{contractionconstant}.
\begin{itemize}
\item
Estimation of $D_{2}$.
\end{itemize}
Again, we estimate from left to right inside the integral, and utilize
the uniform boundedness of the evolution operator.
The
combination of Lemma \ref{3.1} and Assumption \ref{3.1def} yields the
result. Specifically, we have
the estimate for the $D_{2}$ contribution:
$$
\frac{2T}{\hbar} \beta \|U\|_{\infty, H^{1}_{0}}^{2}
\|\Psi_{0}\|_{H^{1}_{0}}
\|\Psi_{1} - \Psi_{2}\|_{C(J; H^{1}_{0})}.
$$
\begin{itemize}
\item
Estimation of $D_{3}$.
\end{itemize}
The reasoning is similar to that of the estimation of $D_{1}$.
We have
the estimate for the $D_{3}$ contribution:
$$
\frac{2rT\gamma_{T}}{\hbar} \beta \|U\|_{\infty, H^{1}_{0}}
\|\Psi_{1} - \Psi_{2}\|_{C(J; H^{1}_{0})}.
$$
It remains to define $\beta$.
If $\Phi$ is not included in the effective potential, then $\beta$ is
simply the constant $C_{0}$ appearing in (\ref{constantC}).
If $\Phi$ is included, then this value must be incremented by the constant
appearing in (\ref{conprod2}).
\end{proof}
\subsection{Invertibility}
The following proposition addresses the invertibility at the fixed point
of
$I - K^{\prime}(\Psi)$, on the space $C(J; H^{1}_{0})$.
\begin{proposition}
\label{INV}
We denote by $\Psi$ the unique fixed point of $K$.
The operator,
$$
S^{\prime}(\Psi) = I - K^{\prime}(\Psi),
$$
is invertible on $C(J; H^{1}_{0})$.
\end{proposition}
{\it Proof:}
We consider, in turn, the injective and surjective properties. By the open
mapping theorem, the inverse then exists and is a continuous linear
operator.
\begin{description}
\item[Injective Property]
\end{description}
We assume that there is $\omega \in C(J; H^{1}_{0})$,
such that
$$
S^{\prime}(\Psi)[\omega] = (I - K^{\prime}(\Psi))\omega = 0.
$$
We can apply Gronwall's inequality to the estimate,
\begin{equation}
\label{Gronpre}
\|\omega (\cdotp, t)\| \leq C \int_{0}^{t} \|\omega (\cdotp, s)\| \; ds,
\end{equation}
where the norm is the $H^{1}_{0}$ norm, and $C$ is a fixed positive constant.
Gronwall's inequality yields that
$\omega \equiv 0$.
The derivation of (\ref{Gronpre}) proceeds directly from
Proposition \ref{Dif}.
\begin{description}
\item[Surjective Property]
\end{description}
We use a fixed point argument (even though the problem is linear).
Suppose $f$ is given in
$X = C(J; H^{1}_{0})$.
We consider the equation,
$$
S^{\prime}(\Psi)[\psi] = (I - K^{\prime}(\Psi))\psi = f,
$$
for $\psi \in X$. This is equivalent to a fixed point for
$$
L \psi =
K^{\prime}(\Psi)\psi + f.
$$
$L$ is seen to be a strict contraction for $t$ sufficiently small by an
application of (\ref{adapest}) with $T \mapsto t$.
By continuation in $t$, we obtain a fixed point $\psi$. \hfill $\Box$
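The surjectivity argument can be illustrated in finite dimensions. In this sketch (illustrative only), a matrix $A$ with spectral norm $1/2$ stands in for $K^{\prime}(\Psi)$; the map $L\psi = A\psi + f$ is then a strict contraction, and Picard iteration delivers the solution of $(I - A)\psi = f$.

```python
import numpy as np

# Finite-dimensional sketch of the surjectivity argument: with A standing in
# for K'(Psi) and ||A|| < 1, iterate L(psi) = A @ psi + f, a strict
# contraction, until the fixed point solves (I - A) psi = f.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A *= 0.5 / np.linalg.norm(A, 2)          # enforce spectral norm 1/2
f = rng.standard_normal(6)

psi = np.zeros(6)
for _ in range(200):
    psi = A @ psi + f                    # Picard step for the linear problem

residual = np.linalg.norm(psi - A @ psi - f)
print(residual)                          # essentially zero at the fixed point
```

The continuation-in-$t$ step of the proof has no analogue here, since the toy problem is globally contractive.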
\subsection{Exact Newton iteration for the system}
We have obtained the following result.
\begin{theorem}
\label{NCT}
Under the hypotheses of Appendix A,
the TDDFT model admits Picard iteration
(successive approximation)
for the mapping $K$, restricted to ${\overline {B(0, r)}}$,
which converges to the solution
$\Psi$ in $C(J; H^{1}_{0})$.
If $\Phi$ is explicitly present in the potential $V_{\rm e}$, we also
assume Assumption \ref{3.1def}, and define $\kappa, \sigma$
by (\ref{definitionkappa}), (\ref{explicitsigma}), resp. Here, $S = I -
K$. If the starting iterate $u_{0}$ is chosen as stated in Corollary
\ref{exactcombined}, where $q = \gamma_{t} < 1$, then exact Newton iteration is
consistent and quadratically convergent
in $C(J;H^{1}_{0})$.
\end{theorem}
\begin{proof}
We have demonstrated the Lipschitz continuity of $K^{\prime}$ in
Proposition \ref{LF} and the invertibility of $I - K^{\prime}$ at the
solution $\Psi$ in Proposition \ref{INV}.
Moreover, the Lipschitz constant
can be estimated explicitly by the estimations of $D_{1}, D_{2}, D_{3}$ in
the proof of Proposition \ref{LF}.
The result follows from Corollary
\ref{exactcombined}.
\end{proof}
\begin{remark}
If
$[I - K^{\prime}(\Psi)]^{-1}$ has a convergent Neumann series, then
$\kappa$ can be estimated explicitly. This case is discussed in the
following section.
\end{remark}
\section{Approximate Newton Methods}
In this
section, we consider the reduction of the complexity
of Newton iteration to
that of the approximation of the evolution operator itself. Given the
framework with which we have studied the TDDFT model,
this is the ultimate reduction possible.
In order to do this, it will be necessary to interpret the mappings
$G_{z}$ of the Lemma of Appendix \ref{appendixC}
as approximate right inverses of the
derivative of $S$, as used in that lemma. This, in turn, is accomplished
by the truncated Neumann series, which is becoming increasingly
relevant in the study of the Schr\"{o}dinger equation \cite{MF}.
\subsection{Properties of an approximate Newton method}
\begin{definition}
\label{prec}
The operators $S$ and $\{G_{z}\}$ of the
Lemma of Appendix \ref{appendixC}
are said to satisfy the conditions of an
approximate Newton method if, on the domain $B_{\delta}$,
there is $M$ such that:
\begin{enumerate}
\item
$S^{\prime}$ is Lipschitz continuous with constant $2M$.
\item
$G_{z}$ is uniformly bounded in norm by $M$.
\item
$G_{z}$ satisfies the following approximation of the identity condition:
For each $z \in B_{\delta}$, the operator $G_{z}$ satisfies the operator
inequality,
$$
\|I - S^{\prime}(z) G_{z} \| \leq M \|S(z)\|.
$$
It is understood that, if $S(z) = 0$, then $G_{z} = [S^{\prime}(z)]^{-1}$.
\end{enumerate}
\end{definition}
The following proposition is a consequence of \cite[Theorem 2.3]{J3}.
\begin{proposition}
\label{quadapp}
Suppose that the conditions of
Definition \ref{prec} are satisfied
for a given closed ball $B_{\delta}$
with $0 < \alpha < 1$ selected.
Given $h \leq 1/2$,
choose $\sigma$,
$$
\sigma = \frac{2(M + M^{3})}{h}\max\left(1, \; \frac{1}{(1 - \alpha)
\delta}\right),
$$
and choose $\kappa = M$.
If
$$
\|Su_{0}\| \leq \sigma^{-1},
$$
then the approximate Newton sequence satisfies
the inequalities (\ref{Kanone}, \ref{Kantwo}), and
is quadratically convergent, as specified in
(\ref{R-quadratic}).
\end{proposition}
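The constants of Proposition \ref{quadapp} are elementary to evaluate. The following sketch uses purely hypothetical values of $M, h, \alpha, \delta$ (none are taken from the paper) and also evaluates $t^{\ast}$ from the lemma of Appendix \ref{appendixC}.

```python
import math

# Hypothetical constants only: evaluate sigma of Proposition "quadapp" and
# t* of the Appendix C lemma.
M, h, alpha, delta = 2.0, 0.5, 0.5, 1.0

sigma = (2.0 * (M + M**3) / h) * max(1.0, 1.0 / ((1.0 - alpha) * delta))
t_star = (1.0 - math.sqrt(1.0 - 2.0 * h)) / (h * sigma)

# The starting iterate must satisfy ||S u0|| <= 1/sigma.
print(sigma, t_star)   # 80.0 0.025
```

With these numbers the admissible starting residual is $\|Su_{0}\| \leq 1/80$, illustrating how the constants constrain the choice of $u_{0}$.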
\subsection{An approximate Newton method based on residual estimation}
Definition \ref{prec} of the preceding section reveals the
properties required of an approximate
Newton method in order to maintain classical quadratic convergence.
In this section, we introduce such a method associated with TDDFT.
It has the desired effect of
simplifying the formulation of the algorithm in the case when the Neumann
series exists.
general result in
Appendix C, which applies to a
(closed) neighborhood $B_{\delta}$ of the fixed point.
We require a preliminary lemma, which establishes a condition under which
the Neumann series exists. Mathematically, this involves the possible
restrictions of $H^{1}_{0}$-valued
functions to $[0, T^{\prime}], T^{\prime} \leq T$.
For convenience, we will retain the notation, $B_{\delta}$, used in
Appendix C.
\begin{lemma}
\label{Neumannlocal}
There is a terminal
time $T = T^{\prime}$ such that
\begin{equation*}
\|K^{\prime}(\Psi)\| < 1,
\end{equation*}
uniformly in $\Psi \in B_{\delta}$.
In fact, if the constant
$C_{0}$,
appearing in (\ref{adapest}),
is chosen as discussed in
Lemma \ref{contractionconstant},
and $\|U^{\rho}\|_{\infty, H^{1}_{0}}$ is bounded
as in Lemma \ref{contractionconstant}, then $T^{\prime}$ may be chosen
to make the resulting rhs of
(\ref{adapest}) less than one.
\end{lemma}
\begin{proof}
This is immediate from the estimate (\ref{adapest})
and the
formulation of Lemma
\ref{contractionconstant}.
\end{proof}
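Concretely, the bound (\ref{adapest}) can be solved for the largest admissible terminal time. The numbers below are placeholders, not constants from the paper; the sketch merely shows how $T^{\prime}$ scales with $\hbar$, $C_{0}$, $\|U^{\rho}\|$ and $\|\Psi_{0}\|$.

```python
# Illustrative only: solve the bound (adapest),
#   ||K'(Psi)|| <= (2 C0 T / hbar) * ||U||^2 * ||Psi0||,
# for the largest T' keeping the right-hand side below a target K0 < 1.
# All numerical values are placeholders.
hbar, C0, normU, normPsi0, K0 = 1.0, 3.0, 1.5, 1.0, 0.9

T_prime = K0 * hbar / (2.0 * C0 * normU**2 * normPsi0)
bound = (2.0 * C0 * T_prime / hbar) * normU**2 * normPsi0
print(T_prime, bound)   # bound equals K0 = 0.9 < 1 by construction
```

This is the quantitative content of the lemma: shrinking the terminal time shrinks the norm of $K^{\prime}(\Psi)$ proportionally.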
\begin{definition}
\label{Neumannapp}
By the truncated Neumann series approximation to $(I -
K^{\prime}(\Psi))^{-1}$ is meant the expression,
\begin{equation}
\label{Neumannappeq}
G_{\Psi}\omega = [I + \sum_{k=1}^{n} (K^{\prime}(\Psi))^{k}](\omega),
\end{equation}
where $K^{\prime}(\Psi)[\omega]$ has been defined in (\ref{defKprime}).
The choice of $n$ may depend on $\Psi$. We will assume that
\begin{equation}
\label{Neumannsatisfied}
\|K^{\prime}(\Psi)\| \leq K_{0} < 1,
\end{equation}
uniformly in $\Psi$,
whenever this approximation is used.
\end{definition}
This will play the role of the approximate inverse.
We consider
the magnitude of the lhs of condition 3 of Definition \ref{prec}.
\begin{lemma}
\label{AID}
The identity,
$$
I - (I - K^{\prime}(\Psi))
(I + \sum_{k=1}^{n} (K^{\prime}(\Psi))^{k}) =
(K^{\prime}(\Psi))^{n+1},
$$
holds.
The term
$
\|K^{\prime}(\Psi)\|
$
is estimated, as in
(\ref{adapest}),
by a positive constant times
$T^{\prime}$.
\end{lemma}
\begin{proof}
The identity is routine.
The estimation of
$
\|K^{\prime}(\Psi)\|
$
is given by the inequality (\ref{adapest}) with $T \mapsto T^{\prime}$.
The conclusion follows.
\end{proof}
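The algebraic identity of Lemma \ref{AID} is easily checked numerically in finite dimensions, with a contractive matrix $A$ in place of $K^{\prime}(\Psi)$:

```python
import numpy as np

# Numerical check of the identity in the lemma, with a matrix A in place of
# K'(Psi):  I - (I - A)(I + A + ... + A^n) = A^{n+1}.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
A *= 0.8 / np.linalg.norm(A, 2)   # ||A|| < 1, so the Neumann series converges

n = 4
I = np.eye(5)
partial = I + sum(np.linalg.matrix_power(A, k) for k in range(1, n + 1))
lhs = I - (I - A) @ partial
rhs = np.linalg.matrix_power(A, n + 1)
print(np.max(np.abs(lhs - rhs)))  # zero up to rounding error
```

Since $\|A\| < 1$, the defect $A^{n+1}$ of the truncated series can be made as small as desired by increasing $n$, which is precisely the mechanism exploited in condition 3 of Definition \ref{prec}.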
\begin{proposition}[Choice of constants in Definition \ref{prec}]
\label{adaptivepre}
For an appropriate terminal time, $T = T^{\prime}$,
chosen so that (\ref{Neumannsatisfied}) holds,
we define
$$
M = \max \left(
\frac{1}{1- K_{0}}, \frac{c}{2} \right).
$$
Here, $c$ is the Lipschitz constant of $K^{\prime}$, estimated in
Proposition \ref{LF}.
Then conditions one and two of Definition \ref{prec} hold.
The third condition holds if, for $\Psi$ not the fixed point of $K$,
we choose $n$ to be the smallest positive integer such that
\begin{equation}
\label{choiceofN}
\|K^{\prime}(\Psi)\|_{C(J^{\prime}; H^{1}_{0})}^{n} \leq
\|(I - K)\Psi\|_{C(J^{\prime}; H^{1}_{0})}.
\end{equation}
\end{proposition}
\begin{proof}
As previously remarked, $M$ is a bound for the norm of the approximate
inverse, and $2M$ is a bound for the Lipschitz constant of $S^{\prime}$.
It follows directly that conditions one and two hold.
If we combine the identity in Lemma \ref{AID} with the definition of $n$
in (\ref{choiceofN}), we obtain
$$
\|I - (I - K^{\prime}(\Psi)) G_{\Psi}\|
_{C(J^{\prime}; H^{1}_{0})}
\leq K_{0} \|(I - K)\Psi\|
_{C(J^{\prime}; H^{1}_{0})}.
$$
Since $K_{0} \leq M$, we conclude that condition 3 holds.
\end{proof}
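The truncation rule (\ref{choiceofN}) amounts to choosing the smallest $n \geq 1$ with $\|K^{\prime}(\Psi)\|^{n}$ below the current residual. A minimal sketch, with illustrative numbers:

```python
# Sketch of the truncation rule: pick the smallest n >= 1 with
# k_norm**n <= residual, where k_norm stands for ||K'(Psi)|| < 1 and
# residual for ||(I - K)Psi||. Input values below are illustrative.
def truncation_order(k_norm, residual):
    n = 1
    while k_norm**n > residual:
        n += 1
    return n

print(truncation_order(0.5, 0.1))   # 4, since 0.5**4 = 0.0625 <= 0.1
print(truncation_order(0.5, 0.6))   # 1, already satisfied
```

As the Newton residual shrinks, the required truncation order grows only logarithmically in the residual, so the cost per step remains modest.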
The following theorem is a direct consequence of the discussion of this
section,
specifically Propositions \ref{quadapp} and
\ref{adaptivepre}.
\begin{theorem}
Consider a time interval $J^{\prime} = [0, T^{\prime}]$,
where $T^{\prime}$ has been
selected so that (\ref{Neumannsatisfied}) holds and, further, so that
the contraction constant $\gamma_{T^{\prime}}$, associated with $K$, is less
than one on the closed ball $B_{r} = {\overline {B(0, r)}} \subset
C(J^{\prime}; H^{1}_{0})$.
Suppose that a family $\{G_{\Psi}\}$
of approximate inverses is selected as in
Definition \ref{Neumannapp}, with $n = n(\Psi)$ chosen according to
(\ref{choiceofN}). Suppose that $M$ is defined as in Proposition
\ref{adaptivepre}
and $\sigma$ as in
Proposition \ref{quadapp}, where we identify $\delta$ with $r$.
If $u_{0}$ is selected by
Picard iteration (successive approximation) to satisfy
$\|(I - K)u_{0}\|_{C(J^{\prime}; H^{1}_{0})} \leq \sigma^{-1}$, then
approximate Newton iteration is quadratically convergent. Specifically,
the sequence, for $k \geq 1$, and $n = n(k)$,
$$
u_{k} = u_{k-1} -
(I + \sum_{j=1}^{n} (K^{\prime}(u_{k-1}))^{j}) (I - K)(u_{k-1}),
$$
converges quadratically to the unique solution of the system.
\end{theorem}
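A toy finite-dimensional analogue conveys the shape of this iteration. In the sketch below (illustrative only, not the TDDFT operator), $K(u) = 0.1\sin(u) + b$ on ${\mathbb R}^{5}$ is a contraction with $\|K^{\prime}(u)\| \leq 0.1 < 1$, and $(I - K^{\prime})^{-1}$ is replaced by the truncated Neumann series, as in (\ref{Neumannappeq}) with $n = 3$ fixed.

```python
import numpy as np

# Toy analogue of the theorem's iteration (not the TDDFT operator):
# u_k = u_{k-1} - (I + sum_{j=1}^{3} K'(u_{k-1})^j) (I - K)(u_{k-1}).
rng = np.random.default_rng(3)
b = rng.standard_normal(5)

K = lambda u: 0.1 * np.sin(u) + b              # contraction: ||K'|| <= 0.1
Kprime = lambda u: np.diag(0.1 * np.cos(u))

u = np.zeros(5)
for k in range(6):
    res = u - K(u)                             # S(u) = (I - K)u
    A = Kprime(u)
    G = np.eye(5) + sum(np.linalg.matrix_power(A, j) for j in range(1, 4))
    u = u - G @ res                            # approximate Newton step
    print(k, np.linalg.norm(u - K(u)))         # residual decreases rapidly

final_res = np.linalg.norm(u - K(u))
```

The residual collapses within a few steps; the defect of the truncated inverse, of size $\|K^{\prime}\|^{4}$, is what limits the asymptotic rate, in accordance with Lemma \ref{AID}.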
\section{Conclusion}
The current article
combines successive approximation, whose convergence was established in
\cite{J1}, with operator Newton iteration.
The underlying operator and convergence
theory for this analysis were demonstrated in \cite{J2,J3}.
Our results establish the classical Newton iteration theory, with
quadratic convergence, under hypotheses which naturally extend those
used to prove existence and uniqueness. Moreover, our
theory also permits an approximate inverse for the derivative mapping, and
this is implemented via the truncated Neumann series.
The norm condition, for the use of the truncated Neumann series as
approximate inverse, appears severe,
since it represents an implicit restriction on the
length of the time interval. However, a similar restriction
appears in the existence/uniqueness theory of \cite{J1};
continuation in a finite number of time steps is possible to obtain a
global solution. It is thus natural to employ the exact or approximate Newton
method in combination with Picard iteration (successive approximation).
Although this intermediate study is quite far from the algorithms of
scientific computation,
it is compatible with numerical methods as illustrated in
\cite{CP} and implemented in \cite{JP}. It is also compatible with the
numerical methods employed in the scientific literature cited here.
Newton methods have been applied to quantum models previously; see \cite{CBDW}
for a control theory application to
the quantum Liouville-von Neumann master equation. For TDDFT, however,
we believe that we have laid the foundation for future refinements defined
by the methods of approximation theory.
\appendix
\section{Hypotheses for the Hamiltonian}
We make the following assumptions for the existence/uniqueness theory
described in Theorem \ref{EU}.
\begin{itemize}
\item
$\Phi$ is assumed nonnegative
for each $t$, and continuous in $t$ on
$H^{1}$ and bounded in $t$ into $W^{1,3}$.
The continuity on $H^{1}$ is consistent with the zero-force law as
defined in \cite[Eq.\thinspace (6.9)]{U}:
$$
\int_{\Omega} \rho({\bf x},t) \nabla \Phi({\bf x}, t, \rho) = 0.
$$
\item
The derivative $\partial \Phi/\partial t =
\phi$ is assumed measurable, and bounded in its arguments.
\item
Furthermore, the following smoothing condition
is assumed,
expressed in a (uniform) Lipschitz norm condition:
$\forall t \in [0,T]$,
\begin{equation}
\|[\Phi(t, |\Psi_{1}|^{2}) - \Phi(t, |\Psi_{2}|^{2})]\psi\|_{H^{1}}
\leq
C \|\Psi_{1} - \Psi_{2}\|_{H^{1}} \|\psi\|_{H^{1}_{0}}.
\label{ecfollowsH}
\end{equation}
Here, $\psi$ is arbitrary in $H^{1}_{0}$.
\item
The so-called external potential $V$ is assumed to be nonnegative and
continuously
differentiable on the closure of the space-time domain.
\end{itemize}
We remark that the hypotheses of nonnegativity for $V$ and $\Phi$ are for
convenience only.
\section{Notation and Norms}
We employ complex Hilbert spaces
in this article.
$$
L^{2}(\Omega) = \{f = (f_{1}, \dots, f_{N})^{T}: |f_{j}|^{2} \;
\mbox{is integrable on} \; \Omega \}.
$$
Here, $f: \Omega \mapsto {\mathbb C}^{N}$ and
$
|f|^{2} = \sum_{j=1}^{N} f_{j} {\overline {f_{j}}}.
$
The inner product on $L^{2}$ is
$$
(f,g)_{L^{2}}=\sum_{j=1}^{N}\int_{\Omega}f_{j}(x){\overline {g_{j}(x)}} \; dx.
$$
However, $\int_{\Omega} fg$ is interpreted as
$$
\sum_{j=1}^{N} \int_{\Omega} f_{j} g_{j} \;dx.
$$
For $f \in L^{2}$, as just defined, if each component $f_{j}$ satisfies
$
f_{j} \in H^{1}_{0}(\Omega; {\mathbb C}),
$
we write $f \in H^{1}_{0}(\Omega; {\mathbb C}^{N})$, or simply,
$f \in H^{1}_{0}(\Omega)$.
The inner product in $H^{1}_{0}$ is
$$
(f,g)_{H^{1}_{0}}=
(f,g)_{L^{2}}+\sum_{j=1}^{N}\int_{\Omega}
\nabla f_{j}(x) \cdotp {\overline {\nabla g_{j}(x)}} \; dx.
$$
$\int_{\Omega} \nabla f \cdotp \nabla g$ is interpreted as
$$
\sum_{j=1}^{N}\int_{\Omega}
\nabla f_{j}(x) \cdotp \nabla g_{j}(x) \; dx.
$$
Finally, $H^{-1}$ is defined as the dual of $H^{1}_{0}$, and its
properties are discussed at length in \cite{Adams}.
The Banach space $C(J; H^{1}_{0})$ is defined in the traditional manner:
$$
C(J; H^{1}_{0}) = \{u:J \mapsto H^{1}_{0}: u(\cdotp) \; \mbox{is
continuous}\}, \;
\|u\|_{C(J; H^{1}_{0})} = \sup_{t \in J} \|u(t)\|_{H^{1}_{0}}.
$$
\section{A General Quadratic Convergence Result in Banach Space}
\label{appendixC}
We cite a weaker version, with different notation,
of a result proved in \cite[Lemma 2.2]{J3}.
Additional references to the earlier literature can be found there.
{\bf Lemma}. {\it
Suppose that $B_{\delta}:= \overline {B(x_{0}, \delta)}$
is a closed ball in a Banach space $X$ and
$S$ is a continuous mapping from $B_{\delta}$ to a Banach space $Z$.
Suppose that
$u_{0} \in B_{\alpha \delta}$ where $0 < \alpha < 1$, and
$\|S(u_{0})\| \leq \sigma^{-1}$. Suppose that a family
$\{G_{z}\}$ of bounded linear
operators is given, for each $z$ in the range of $S$,
where $G_{z}: Z \mapsto X$.
Define the iterates,
$$
u_{k} - u_{k-1} = -G_{u_{k-1}}S(u_{k-1}), \; k = 1,2, \dots
$$
Let $h$ be chosen so that $h \leq 1/2$, and set
$$
t^{\ast} =(h \sigma)^{-1} (1 - \sqrt{1 - 2h}).
$$
The procedure is consistent, i.\thinspace e.\thinspace, $\{u_{k}\} \subset
B_{\delta}$, if the inequalities,
\begin{eqnarray}
\|u_{k}-u_{k-1}\|& \leq & \kappa
\|{S}(u_{k-1})\|, \;\; k \geq 1, \label{Kanone} \\
\|{S}(u_{k})\| & \leq &
\frac{h\sigma}{2}\|{S}(u_{k-1})\|^{2},
\;\; k \geq 1, \label{Kantwo}
\end{eqnarray}
hold for some $\kappa \leq (1 - \alpha)\delta/t^{\ast}$.
Moreover, the sequence converges to a root $u$ with
the error estimate,
\begin{equation}
\|u - u_{k}\| \leq \frac{\kappa}{h \sigma}
\frac{(1-\sqrt{1-2h})^{2^{k}}}{2^{k}}.
\label{R-quadratic}
\end{equation}
}
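The error estimate (\ref{R-quadratic}) can be tabulated for sample constants. The values of $h$, $\sigma$, $\kappa$ below are hypothetical (chosen only so that $h \leq 1/2$); the sketch exhibits the characteristic quadratic decay of the bound.

```python
import math

# Evaluate t* and the error bound (R-quadratic) of the lemma for
# illustrative constants h = 3/8, sigma = 80, kappa = 2 (hypothetical).
h, sigma, kappa = 0.375, 80.0, 2.0
q = 1.0 - math.sqrt(1.0 - 2.0 * h)     # q = 1/2 for h = 3/8
t_star = q / (h * sigma)

bounds = [kappa / (h * sigma) * q**(2**k) / 2**k for k in range(5)]
print(t_star, bounds)                  # bounds decay at a quadratic rate
```

Each successive bound is roughly the square of its predecessor (up to the factor $2^{-k}$), which is the hallmark of R-quadratic convergence.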
\section{Double Negation/Double Negation Elimination}
Tags: Double Negation
\begin{theorem}
The rule of '''double negation elimination''' is a [[Definition:Valid Argument|valid]] deduction [[Definition:Sequent|sequent]] in [[Definition:Propositional Logic|propositional logic]].
=== [[Double Negation/Double Negation Elimination/Proof Rule|Proof Rule]] ===
{{:Double Negation/Double Negation Elimination/Proof Rule}}
=== [[Double Negation/Double Negation Elimination/Sequent Form|Sequent Form]] ===
{{:Double Negation/Double Negation Elimination/Sequent Form}}
\end{theorem}
| {"config": "wiki", "file": "thm_6397.txt"} |
TITLE: Sufficient Requirement to be a Galois Extension of $\mathbb{Q}$
QUESTION [1 upvotes]: Related to my last question:
In Silverman and Tate, Rational Points on Elliptic Curves, there is the following proposition.
Proposition 6.5. Let $E$ be an elliptic curve given by a Weierstrass equation $$E:y^2=x^3+ax^2+bx+c, \quad a,b,c \in \mathbb{Q}.$$
(a) Let $P=(x_1,y_1)$ be a point of order dividing $n$. Then $x_1$ and $y_1$ are algebraic over $\mathbb{Q}$.
(b) Let $$E[n]=\{(x_1,y_1) , \cdot \cdot \cdot , (x_m,y_m) , \mathcal{O} \}$$be the complete set of points of $E(\mathbb{C})$ of order dividing $n$. Let $$K=\mathbb{Q}(x_1,y_1 , \cdot \cdot \cdot , x_m,y_m )$$ be the field generated by the coordinates of all of the points in $E[n]$. Then $K$ is a Galois extension of $\mathbb{Q}$.
So, (a) proves that the extension $K=\mathbb{Q}(x_1,y_1 , \cdot \cdot \cdot , x_m,y_m )$ is an algebraic extension.
Wikipedia says
A Galois extension is an algebraic field extension $E/F$ that is normal and separable; or equivalently, $E/F$ is algebraic, and the field fixed by the automorphism group Aut$(E/F)$ is precisely the base field $F$.
Since all embeddings of extensions of $\mathbb{Q}$ into $\mathbb{C}$ fix $\mathbb{Q}$, can we immediately conclude that $K=\mathbb{Q}(x_1,y_1 , \cdot \cdot \cdot , x_m,y_m )$ is a Galois extension of $\mathbb{Q}$? (The proof in the text shows that all field homomorphisms $\sigma: K \rightarrow \mathbb{C}$ are automorphisms of $K$, and concludes this shows that $K$ is a Galois extension of $\mathbb{Q}$.)
REPLY [1 votes]: Yes the multiplication by $n$ map $(a,b)\mapsto [n](a,b)$ is given by a pair of rational functions in $a,b$ with coefficients in $\Bbb{Q}$ thus for $\sigma \in Gal(\overline{\Bbb{Q}}/\Bbb{Q})$ then $$[n](x_j,y_j) =O\implies \text{ the denominator vanishes} \implies [n](\sigma(x_j),\sigma(y_j))=O$$ ie. $(\sigma(x_j),\sigma(y_j))\in K^2$ and $\sigma(K)\subset K$ ie. $K/\Bbb{Q}$ is normal, in characteristic $0$ it is separable thus it is Galois. Moreover $\sigma$ commutes with the addition of $E$ so that $Gal(K/\Bbb{Q})$ is a subgroup of $Aut(E[n])$. | {"set_name": "stack_exchange", "score": 1, "question_id": 3482260} |
TITLE: Sketch the Region of specific Volume $\frac{512\pi}{15}$
QUESTION [2 upvotes]: I have already calculated the volume (I hope my answer is right). However, I'm having a little trouble deciding which one of the "Sketching the solid, and a typical disk or washer." is correct? Can some one help me?
Please view attached pictures (Zoom to Enlarge):
REPLY [1 votes]: These two together are correct:
At lest these two belong to the given volume: $\frac{512\pi}{15}.$ These two belong to the equations given: $y^2=2x$,$x=2y$. | {"set_name": "stack_exchange", "score": 2, "question_id": 1508427} |
\begin{document}
\title{Deep learning versus $\ell^1$-minimization for compressed sensing photoacoustic tomography}
\maketitle
\begin{abstract}
We investigate compressed sensing (CS) techniques for reducing the number of measurements in photoacoustic tomography (PAT). High resolution imaging from CS data requires particular image reconstruction algorithms. The most established reconstruction techniques for that purpose use sparsity and $\ell^1$-minimization. Recently, deep learning appeared as a new paradigm for CS and other inverse problems. In this paper, we compare a recently invented joint $\ell^1$-minimization algorithm with two deep learning methods, namely a residual network and an approximate nullspace network. We present numerical results showing that all developed techniques perform well for deterministic sparse measurements as well as for random Bernoulli measurements. For the deterministic sampling, deep learning shows more accurate results, whereas for Bernoulli measurements the $\ell^1$-minimization algorithm performs best. Comparing the implemented deep learning approaches, we show that the nullspace network uniformly outperforms the residual network in terms of the mean squared error (MSE).
\bigskip\noindent
\textbf{Keywords:}
Compressed sensing, sparsity, $\ell^1$-minimization, deep learning, residual learning, nullspace network
\end{abstract}
\section{Introduction}
Compressed sensing (CS) allows to reduce the number of
measurements in photoacoustic tomography (PAT) while preserving high spatial resolution.
A reduced number of measurements can increase the measurement speed and reduce system
costs \cite{haltmeier2018sparsification,sandbichler2015novel,haltmeier2016compressed,betcke2016acoustic,provost2009application} .
However, CS PAT image reconstruction requires special algorithms to achive high resolution
imaging. In this work, we compare $\ell^1$-minimization and deep learning algorithms for
2D PAT. Among others, the two-dimensional case arises in PAT with integrating line detectors \cite{paltauf2007photacoustic,burgholzer2007temporal}.
In the case that a sufficiently large number of detectors is used, according to Shannon's sampling theory,
implementations of full data methods yield almost artifact free reconstructions \cite{haltmeier2016sampling}. As the fabrication of an array of detectors is demanding,
experiments using integrating line detectors are often carried out using a single line detector, scanned on circular paths using scanning stages~\cite{NusEtAl10, GruEtAl10}, which is very time consuming. Recently, systems using arrays of $64$ parallel line detectors have been demonstrated~\cite{gratt201564line, beuermarschallinger2015photacoustic}. To keep production costs low and to allow fast imaging, the number of measurements will typically be kept much smaller than advised by Shannon's sampling theory and one has to deal with highly under-sampled data.
After discretization, image reconstruction in CS PAT consists in solving the inverse
problem
\begin{equation} \label{eq:ip}
\Data = \Fullop \fv + \noise \,,
\end{equation}
where $\fv\in \R^{n}$ is the discrete photoacoustic (PA) source to be reconstructed,
$\Data \in \R^{mQ}$ are the given CS data, $\noise$ is the noise in the
data and $\Fullop \colon \R^n \to \R^{mQ} $ is the forward matrix. The forward matrix is the
product of the PAT full data problem and the compressed sensing measurement matrix.
Making CS measurements in PAT implies that $mQ \ll n$ and therefore,
even in the case of exact data, solving \eqref{eq:ip} requires particular
reconstruction algorithms.
\subsection{CS PAT recovery algorithms}
Standard CS reconstruction techniques for \eqref{eq:ip}
are based on sparse recovery via $\ell^1$-minimization.
These algorithms rely on sparsity of the unknowns
in a suitable basis or dictionary and special incoherence of the forward matrix.
See \cite{sandbichler2015novel,haltmeier2016compressed,betcke2016acoustic,provost2009application} for different CS approaches in PAT.
To guarantee sparsity of the unknowns, in
\cite{haltmeier2018sparsification} a new sparsification and corresponding joint
$\ell^1$-minimization have been derived.
Recently, deep learning appeared as a new reconstruction paradigm
for CS and other inverse problems.
Deep learning approaches for PAT can be found in
\cite{antholzer2018deep,antholzer2018photoacoustic,kelly2017deep,allman2018photoacoustic,schwab2018fast,hauptmann2018model,waibel2018reconstruction}.
In this work, we compare the performance of the joint $\ell^1$-minimization algorithm of \cite{haltmeier2018sparsification} with deep learning
approaches for CS PAT image reconstruction. For the latter we use the residual network \cite{jin2016deep,han2016deep,antholzer2018deep}
and the nullspace network \cite{mardani2017deep,schwab2018deep}. The nullspace network
includes a certain data consistency layer and even has been shown to be a
regularization method in~\cite{schwab2018deep}. Our results show that the
nullspace network uniformly outperforms the residual network for CS PAT in terms of the mean squared error (MSE).
\subsection{Outline}
In Section~\ref{sec:cspat}, we present the required background
from CS PAT. The sparsification strategy and the joint $\ell^1$-minimization algorithm are summarized in Section \ref{sec:sparse}.
The employed deep learning image reconstruction strategies using a residual network
and an (approximate) nullspace network are described in Section~\ref{sec:deep}.
In Section~\ref{sec:num} we present reconstruction results for sparse
measurements and Bernoulli measurements.
The paper ends with a discussion in Section~\ref{sec:conclusion}.
\begin{psfrags}
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=1\columnwidth]{pics/PATprinciple-eps-converted-to.pdf}
\caption{\label{fig:pat} (a) An object is illuminated with a short optical pulse; (b) the absorbed light distribution causes an acoustic pressure; (c) the acoustic pressure is measured outside the object and used to reconstruct an image of the interior.}
\end{center}
\end{figure}
\end{psfrags}
\section{Compressed photoacoustic tomography}
\label{sec:cspat}
\subsection{Photoacoustic tomography}
As illustrated in Figure~\ref{fig:pat}, PAT is based
on generating an acoustic wave inside some investigated object using short optical pulses.
Let us denote by $\source \colon \R^d \to \R$ the initial pressure
distribution which provides diagnostic information about the
patient and which is the quantity of interest in PAT \cite{kuchment2011mathematics,paltauf2007photacoustic,wang2006photoacoustic}.
To keep the presentation simple and focus on the main ideas, we only consider the case $d=2$.
Among others, the two-dimensional case arises in PAT with so called integrating line detectors \cite{paltauf2007photacoustic,burgholzer2007temporal}.
Further, we restrict ourselves to the case of a circular measurement geometry, where the acoustic measurements are made on a circle surrounding the investigated object.
In two spatial dimensions, the induced pressure in PAT satisfies the 2D wave equation
\begin{multline} \label{eq:wave}
\partial^2_t p (\rr,t) - c^2 \Delta p(\rr,t) \\
= \delta'(t) \source (\rr) \quad \text{ for } (\rr,t) \in \R^2 \times \R_+ \,.
\end{multline}
Here $\rr \in \R^2$ is the spatial location, $t \in \R$ the time variable, $\Delta$ the spatial Laplacian, $c$ the speed of sound, and $\source (\rr)$ the PA source, which is assumed to vanish
outside the disc $B_R \triangleq \sset{x \in \R^2 \mid \norm{x} < R}$ and has to be recovered.
The wave equation \eqref{eq:wave} is augmented with
$p(\rr, t) =0$ on $\set{t < 0}$. The acoustic pressure is then uniquely defined and referred to as the causal solution of~\eqref{eq:wave}.
PAT in a circular measurement geometry consists in recovering the function $\source$ from measurements of $p(\rss,t) $ on $\partial B_R \times (0,\infty)$. In the case of full data, exact and stable PAT image reconstruction is possible \cite{haltmeier2017iterative,stefanov2009thermoacoustic}, and several efficient methods for recovering
$\source$ are available. As an example, we mention the FBP
formula derived in \cite{FinHalRak07},
\begin{equation} \label{eq:fbp2d}
\source(\rr)
=
- \frac{1}{\pi R}
\int_{\partial B_R}
\int_{\abs{\rr-z}}^\infty
\frac{ (\partial_t t p)(\rss, t)}{ \sqrt{t^2-\sabs{\rr-\rss}^2}} \, \rmd t
\rmd S(\rss)
\,.
\end{equation}
Note that the inversion operator in \eqref{eq:fbp2d} is also the adjoint of the
forward operator; see \cite{FinHalRak07}.
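For illustration, the inversion formula \eqref{eq:fbp2d} can be discretized directly with simple quadrature rules. The following sketch is deliberately naive (function and variable names are our own, and the crude truncation near the singularity is only a stand-in for the properly filtered implementations of \cite{FinHalRak07,haltmeier2011mollification}):

```python
import numpy as np

def fbp2d(p, sensors, times, grid, R):
    """Naive discretization of the 2D FBP formula: for each image point r,
    integrate (d/dt)(t p)(s, t) / sqrt(t^2 - |r - s|^2) over t > |r - s|,
    then sum over the sensor positions s on the circle of radius R.
    p: (M, Q) pressure samples, sensors: (M, 2), times: (Q,), grid: (n, 2)."""
    dt = times[1] - times[0]
    dS = 2 * np.pi * R / len(sensors)                   # arc length per sensor
    dtp = np.gradient(times[None, :] * p, dt, axis=1)   # d/dt (t p)
    f = np.zeros(len(grid))
    for k, s in enumerate(sensors):
        dist = np.linalg.norm(grid - s, axis=1)         # |r - s_k| for all grid points
        for i, d in enumerate(dist):
            mask = times > d + dt                       # stay away from the singularity
            f[i] += np.sum(dtp[k, mask] / np.sqrt(times[mask]**2 - d**2)) * dt
    return -f * dS / (np.pi * R)
```

A practical implementation would vectorize the inner loop and apply a proper mollified filter; the sketch only mirrors the structure of \eqref{eq:fbp2d}.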
\subsection{Discretization}
In practical applications, the acoustic pressure can only be
measured with a finite number of acoustic detectors.
The standard sampling scheme for PAT in circular geometry assumes
uniformly sampled values
\begin{equation} \label{eq:data}
p \kl{ \rss_k, t_\ell}
\text{ for }
( k, \ell) \in \set{ 1, \dots, M} \times \set{ 1, \dots, Q }\,,
\end{equation}
with
\begin{align}
\rss_k
&\triangleq
\begin{bmatrix} R \cos \kl{2\pi(k-1)/M} \\ R\sin \kl{2\pi(k-1)/M} \end{bmatrix}
\\
t_\ell
&\triangleq
2R (\ell-1) /(Q-1)
\,.
\end{align}
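The sampling scheme \eqref{eq:data} is straightforward to generate; the following sketch (with names of our own choosing) produces the sensor positions and time samples:

```python
import numpy as np

def pat_sampling_grid(M, Q, R):
    """Sensor positions s_k on the circle of radius R and uniform time
    samples t_l in [0, 2R], matching the standard PAT sampling scheme
    (0-based indices in place of k-1 and l-1)."""
    phi = 2 * np.pi * np.arange(M) / M
    sensors = np.stack([R * np.cos(phi), R * np.sin(phi)], axis=1)
    times = 2 * R * np.arange(Q) / (Q - 1)
    return sensors, times
```

For sound speed $c=1$, the time window $[0, 2R]$ covers travel distances up to the diameter of the disc $B_R$.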
The number $M$ of detector positions in \eqref{eq:data} is directly related to the resolution of the final reconstruction. Namely,
\begin{equation} \label{eq:samplingcondition}
M \geq 2 R_0 \lambda_0
\end{equation}
equally spaced transducers are required to stably recover any
PA source $\source$ that has maximal essential wavelength $\lambda_0$ and is supported in a disc
$B_{R_0} \subseteq B_{R}$; see \cite{haltmeier2016sampling}.
Image reconstruction in this case can be performed by discretizing the inversion formula \eqref{eq:fbp2d}.
The sampling condition \eqref{eq:samplingcondition} requires a very high sampling rate, especially when the PA source contains narrow features, such as blood vessels or sharp interfaces.
Note that temporal samples can easily be collected at a high sampling rate compared to the spatial sampling, where each sample requires a separate sensor.
It is therefore beneficial to keep $M$ as small as possible.
Consequently, full sampling in PAT is costly and time consuming and strategies for
reducing the number of detector locations are desirable.
\subsection{Compressive measurements in PAT}
To reduce the number of measurements we use CS measurements.
Instead of collecting $M$ individually sampled signals as in \eqref{eq:data}, we take general linear measurements
\begin{equation} \label{eq:cs}
\Data(j, \ell ) \triangleq \sum_{k=1}^M
\samp[j, k] p(\rss_k, t_\ell )
\; \text{ for } j \in \set{ 1, \dots, m} \,,
\end{equation}
with $m \ll M$. Several choices for the measurement matrix $\samp$ are possible and have been used for CS PAT
\cite{sandbichler2015novel,haltmeier2016compressed,betcke2016acoustic}.
In this work, we take $\samp$ as a deterministic sparse subsampling matrix or a
random Bernoulli matrix; see Subsection~\ref{eq:setup}.
Let us denote by $\Wave \in \R^{MQ \times n} $ the discretized solution operator
of the wave equation and by $ \Samp \triangleq \samp \otimes \Io \in \R^{ mQ \times MQ }$
the Kronecker (or tensor) product between the CS measurement matrix $\samp$
and the identity matrix $\Io$. Then the CS data \eqref{eq:cs}, written as a column vector
$\Data \in \R^{mQ}$ are given by
\begin{equation} \label{eq:ip0}
\Data = \Fullop \fv \quad \text { with } \Fullop \triangleq \Samp \circ \Wave \in \R^{mQ \times n} \,.
\end{equation}
In the case of CS measurements we have $mQ \ll n$; therefore,
\eqref{eq:ip0} is highly underdetermined and image reconstruction
requires special reconstruction algorithms.
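The block structure of \eqref{eq:ip0} can be sketched with a toy example; the dense random matrix standing in for $\Wave$ below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M, Q, m, n = 12, 10, 3, 40            # toy dimensions with m << M and mQ << n
W = rng.standard_normal((M * Q, n))   # stand-in for the discrete wave operator
S = rng.choice([-1.0, 1.0], size=(m, M)) / np.sqrt(m)   # CS measurement matrix

A = np.kron(S, np.eye(Q))             # S (x) Id: mixes sensors, keeps time samples
F = A @ W                             # combined CS PAT forward matrix
y = F @ rng.standard_normal(n)        # CS data for a random source

assert A.shape == (m * Q, M * Q) and F.shape == (m * Q, n)
assert m * Q < n                      # the system is underdetermined
```

The Kronecker product applies the same sensor combination $\samp$ at every time step, exactly as in \eqref{eq:cs}.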
\section{Joint $\ell^1$-minimization for CS PAT}
\label{sec:sparse}
Standard CS image reconstruction is based on $\ell^1$ minimization and
sparsity of the unknowns to be recovered. In \cite{haltmeier2018sparsification} we introduced
a sparse recovery strategy that we will use in the present paper and recall below.
\subsection{Background from $\ell^1$-minimization}
An element $\Lsource \in \R^n$ is called $s$-sparse if
it contains at most $s$ nonzero elements. If we are given measurements $\Fullop \Lsource = \Data$
where $\Lsource \in \R^n$ and $\Data \in\R^{mQ}$ with $mQ \ll n$,
then stable recovery of $\Lsource$ from $\Data$ via $\ell^1$-minimization
can be guaranteed if $\Lsource$ is sparse and the matrix $\Fullop$ satisfies the
restricted isometry property of order $2s$.
The latter property means that for all $2s$-sparse vectors $\hh \in \R^n$ we have
\begin{equation} \label{eq:RIP}
(1-\delta) \norm{\hh}^2\leq \norm{ \Fullop \hh}^2 \leq(1+\delta) \norm{\hh}^2 \,,
\end{equation}
for an RIP constant $\delta < 1 / \sqrt{2}$; see~\cite{foucart2013mathematical}.
Bernoulli random matrices satisfy the RIP with
high probability~\cite{baraniuk2008simple} whereas the
subsampling matrix clearly does not satisfy the RIP.
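The RIP \eqref{eq:RIP} can at least be probed empirically. The sketch below estimates the deviation for a Bernoulli matrix on randomly drawn sparse vectors; it samples only a few supports and directions and is no substitute for a proof:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 80, 200, 5
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # Bernoulli matrix

ratios = []
for _ in range(200):
    h = np.zeros(n)
    idx = rng.choice(n, size=2 * s, replace=False)      # random 2s-sparse support
    h[idx] = rng.standard_normal(2 * s)
    ratios.append(np.linalg.norm(A @ h) ** 2 / np.linalg.norm(h) ** 2)

delta_hat = max(max(ratios) - 1.0, 1.0 - min(ratios))   # empirical RIP-type deviation
```

For this normalization the squared norms concentrate around one, so the observed deviation stays well below one for moderate sparsity.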
In the case of CS PAT, the forward matrix is given by
$ \Fullop = (\samp \otimes \Io ) \circ \Wave$.
It is not known whether $\Fullop$ satisfies the RIP for either
the Bernoulli or the subsampling matrix. In such situations one may use
the following stable reconstruction result from inverse problems theory.
\begin{theorem}[$\ell^1$-minimization] \label{thm:ell1}
Let $\Fullop \in \R^{mQ \times n}$ and $\Lsource \in \R^{n}$.
Assume
\begin{align} \label{eq:ssc-1}
&\exists \Eeta \in \R^{mQ} \colon \Fullop^\trans \Eeta \in \sign(\Lsource)
\\ \label{eq:ssc-2}
&\abs{(\Fullop^\trans \Eeta)_i} < 1
\text{ for } i \not \in \supp (\Lsource)\,,
\end{align}
where $\sign(\Lsource)$ is the set-valued signum function and
$\supp (\Lsource)$ the set of indices of the nonzero entries of $\Lsource$,
and that the restriction of $\Fullop$ to the subspace spanned by $e_i$ for
$ i \in \supp (\Lsource)$ is injective.
Then for any $\Data^\delta \in \R^{mQ}$ with $\snorm{ \Fullop \Lsource - \Data^\delta}_2 \leq \delta$,
any minimizer of the $\ell^1$-Tikhonov functional
\begin{equation} \label{eq:tikhonov}
\Lsource_\beta^\delta \in \argmin_{\hh} \frac{1}{2} \norm{ \Fullop \hh - \Data^\delta }_2^2 + \beta \norm{\hh}_1
\end{equation}
satisfies $\snorm{ \Lsource_\beta^\delta - \Lsource }_2 = \mathcal{O} (\delta)$
provided $\beta \asymp \delta$. In particular, $\Lsource$ is the unique
$\norm{\edot}_1$-minimizing solution of $\Fullop \hh= \Data$.
\end{theorem}
\begin{proof}
See \cite{Gra11}.
\end{proof}
In \cite{candes2005decoding,Gra11} it is shown that the RIP implies the conditions in
Theorem~\ref{thm:ell1}. Moreover, the smaller $\supp (\Lsource)$, the easier the
conditions in Theorem~\ref{thm:ell1} are to satisfy.
Therefore, sufficient sparsity of the unknowns is a crucial condition
for the success of $\ell^1$-minimization.
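A minimal way to solve the $\ell^1$-Tikhonov problem \eqref{eq:tikhonov} is iterative soft-thresholding (ISTA); the toy matrix and parameter choices below are our own:

```python
import numpy as np

def ista(A, y, beta, mu, niter=500):
    """Iterative soft-thresholding for
    min_h (1/2) ||A h - y||^2 + beta ||h||_1, step size mu < 1/||A||^2."""
    h = np.zeros(A.shape[1])
    for _ in range(niter):
        h = h - mu * A.T @ (A @ h - y)                         # gradient step
        h = np.sign(h) * np.maximum(np.abs(h) - mu * beta, 0)  # soft thresholding
    return h

rng = np.random.default_rng(2)
m, n, s = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = 1.0   # sparse ground truth
h = ista(A, A @ x, beta=0.01, mu=0.1)
```

For a sufficiently sparse ground truth and a well-conditioned random matrix, the iteration recovers the source up to a small bias of order $\beta$.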
\subsection{Sparsification strategy}
The CS PAT approach of \cite{haltmeier2018sparsification} used here
is based on the following theorem, which brings
sparsity into play.
\begin{theorem} \label{thm:laplace}
Let $\source$ be a given PA source vanishing outside $B_R$,
and let $p$ denote the causal solution of~\eqref{eq:wave}.
Then $\partial_t^2 p$ is the causal solution of
\begin{multline} \label{eq:wavesparse}
\partial^2_t q (\rr,t) - c^2 \Delta q(\rr,t) \\
= \delta'(t) c^2 \Delta \source (\rr) \quad \text{ for } (\rr,t) \in \R^2 \times \R_+ \,.
\end{multline}
In particular, up to discretization error, we have
\begin{equation}
\forall \Source \in \R^n \colon \quad \ppartial_t^2 \Fullop [\Source] = \Fullop [c^2 \DDelta_\rr \Source] \,,
\end{equation}
where $\Fullop = (\samp \otimes \Io ) \circ \Wave$ denotes the discrete
CS PAT forward operator defined by \eqref{eq:ip0}, $\DDelta_\rr$ is the discretized
Laplacian, and $\ppartial_t$ the discretized temporal derivative.
\end{theorem}
\begin{proof}
See \cite{haltmeier2018sparsification}.
\end{proof}
Typical phantoms consist of smoothly varying parts
and rapid changes at interfaces. For such PA sources,
the modified source $c^2 \DDelta_{\rr} \Source$
is sparse or at least compressible. The theory of
CS therefore predicts that the modified source
can be recovered via $\ell^1$-minimization by solving
\begin{equation} \label{eq:L1exact}
\min_{ \Lsource} \norm{\Lsource}_1
\quad \text{such that } \Fullop \Lsource = \ppartial_t^2 \Data \,.
\end{equation}
Having obtained an approximate minimizer $\Lsource$ by either solving
\eqref{eq:L1exact} or its relaxed version, one can recover the original
PA source $\Source $ by subsequently solving the Poisson
equation $ \DDelta_{\rr} \Source = \Lsource/c^2$ with zero
boundary conditions. With this two-stage procedure, however,
we observed disturbing low-frequency artifacts in the reconstructions.
Therefore, in \cite{haltmeier2018sparsification} we
introduced a different joint $\ell^1$-minimization approach based on
Theorem~\ref{thm:laplace} that jointly recovers $\Source$ and $ c^2 \DDelta_{\rr} \Source$.
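The sparsifying effect of the Laplacian behind Theorem~\ref{thm:laplace} is easy to verify numerically on a toy phantom; the five-point stencil below is a simple choice of our own:

```python
import numpy as np

def laplacian(f):
    """Five-point discrete Laplacian with zero extension at the boundary."""
    L = -4.0 * f
    L[1:, :] += f[:-1, :]
    L[:-1, :] += f[1:, :]
    L[:, 1:] += f[:, :-1]
    L[:, :-1] += f[:, 1:]
    return L

n = 64
yy, xx = np.mgrid[0:n, 0:n]
f = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)  # disc phantom

Lf = laplacian(f)
# the disc itself is not sparse, but its Laplacian vanishes away from the interface
assert np.count_nonzero(Lf) < np.count_nonzero(f) / 2
```

The nonzeros of the Laplacian scale with the length of the interfaces rather than the area of the phantom, which is exactly the compressibility exploited above.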
\subsection{Joint $\ell^1$-minimization framework}
The modified data $\ppartial_t^2 \Data$ is well suited
to recover singularities of $\Source$, but hardly contains low-frequency
components of $\Source$. On the other hand, the low frequency
information is contained in the original data, which is still available to us.
This motivates the following joint $\ell^1$-minimization problem
\begin{equation} \label{eq:joint2}
\begin{aligned}
&\min_{(\Source, \Lsource)} \norm{\Lsource}_1 + I_{C} (\Source) \\
&\text{such that }
\begin{bmatrix} \Fullop\Source, \Fullop\Lsource , \DDelta_{\rr} \Source - \Lsource/c^{2} \end{bmatrix} =
\begin{bmatrix} \Data , \ppartial_t^2 \Data ,0 \end{bmatrix} \,.
\end{aligned}
\end{equation}
Here $I_C $ is the indicator function of
$ C \triangleq [0, \infty)^n$, defined by
$I_C(\Source) = 0 $ if $\Source \in C$ and $I_C(\Source) =\infty $ otherwise, and
guarantees non-negativity.
\begin{theorem} \label{thm:recovery}
Assume that $\Source \in \R^n$ is non-negative,
that the measurement matrix $\Fullop$ and the modified PA source
$\Lsource = c^2 \DDelta_{\rr} \Source$ satisfy Equations \eqref{eq:ssc-1}, \eqref{eq:ssc-2},
and denote $\Data = \Fullop\Source$.
Then, the pair $[\Source, c^2 \DDelta_{\rr} \Source]$ can be recovered as the
unique solution of the joint $\ell^1$-minimization problem~\eqref{eq:joint2}.
\end{theorem}
\begin{proof}
See \cite{haltmeier2018sparsification}.
\end{proof}
In the case that the data are only approximately sparse or noisy, we propose, instead of
\eqref{eq:joint2}, to solve the $\ell^2$-relaxed version
\begin{multline} \label{eq:joint-pen}
\frac{1}{2} \norm{\Fullop \Source- \Data}_2^2
+
\frac{1}{2} \norm{\Fullop\Lsource - \ppartial_t^2 \Data}_2^2
+
\frac{\alpha}{2} \norm{\DDelta_{\rr} \Source - \Lsource/c^{2}}_2^2
\\ +
\beta \norm{\Lsource}_1 + I_{C} (\Source)
\to \min_{(\Source, \Lsource)} \,.
\end{multline}
Here $\alpha>0$ is a tuning parameter and $\beta >0$ a regularization
parameter.
\subsection{Numerical minimization}
We will solve~\eqref{eq:joint-pen} using a proximal forward-backward splitting method~\cite{combettes2011proximal}, which is well suited for minimizing
the sum of a smooth and a non-smooth but convex part.
In the case of \eqref{eq:joint-pen} we take the smooth part as
\begin{multline}
\Phi(\Source,\Lsource) \triangleq \frac{1}{2} \norm{\Fullop \Source- \Data}_2^2
\\ +
\frac{1}{2} \norm{\Fullop\Lsource - \ppartial_t^2\Data}_2^2
+
\frac{\alpha}{2} \norm{\DDelta_{\rr} \Source - \Lsource/c^{2}}_2^2
\end{multline}
and the non-smooth part as $\Psi(\Source,\Lsource) \triangleq \beta \norm{\Lsource}_1 + I_{C}(\Source)$.
The proximal gradient algorithm then alternately performs an explicit gradient step
for $\Phi$ and an implicit proximal step for $\Psi$. For the proximal step, the proximity operator
of a function must be computed. The proximity operator of a given convex function $F\colon \R^n \to\R$ is defined by~\cite{combettes2011proximal}
\[\prox_{F}(\Source) \triangleq
\operatorname{argmin}
\set{ F(\hh)+\tfrac{1}{2} \| \Source - \hh \|_2^2 \mid \hh \in\R^n } \,.\]
The regularizers we are considering here have the advantage that their proximity operators can be computed explicitly and do not cause significant computational overhead.
The gradient $[\nabla_\Source \Phi, \nabla_\Lsource \Phi]$ of the smooth part can
easily be computed to be
\begin{align*}
\nabla_\Source \Phi (\Source,\Lsource)
&= \Fullop^* (\Fullop \Source- \Data) + \alpha \DDelta_{\rr} (\DDelta_{\rr} \Source - \Lsource/c^{2})\\
\nabla_\Lsource \Phi (\Source,\Lsource)
&= \Fullop^* (\Fullop \Lsource- \ppartial_t^2\Data)- \frac{\alpha }{c^2}(\DDelta_{\rr} \Source - \Lsource/c^{2}) \,.
\end{align*}
The proximal operator of the non-smooth part is given by
\begin{align*}
\prox_{\Psi}(\Source,\Lsource) &\triangleq [\prox_{I_C}(\Source), \prox_{\beta\|\cdot\|_1}(\Lsource)] \,,\\
\prox_{I_C}(\Source)_i &= \max(\Source_i,0) \,,\\
\prox_{\beta\|\cdot\|_1}(\Lsource)_i &= \max(|\Lsource_i|-\beta,0)\,\sign(\Lsource_i) \,.
\end{align*}
With this, the proximal gradient algorithm is given by
\begin{align} \label{eq:prox1}
\Source^{k+1} &= \prox_{I_C}\left(\Source^k - \mu \nabla_\Source \Phi (\Source^k, \Lsource^k) \right)
\\ \label{eq:prox2}
\Lsource^{k+1} &= \prox_{\mu\beta\|\cdot\|_1}\left(\Lsource^k - \mu \nabla_\Lsource \Phi (\Source^k, \Lsource^k)\right),
\end{align}
where $(\Source^k, \Lsource^k)$ is the $k$-th iterate and $\mu$ the step size.
We initialize the proximal gradient algorithm with $\Source^0=\Lsource^0=0$.
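The iteration \eqref{eq:prox1}, \eqref{eq:prox2} can be sketched as follows. The operators and parameters are placeholders for the discrete objects defined above, and the gradient of the smooth part is computed directly from $\Phi$:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_prox_grad(Fop, Lap, y, y2, alpha, beta, mu, c=1.0, niter=100):
    """Proximal gradient iteration for the relaxed joint problem: a gradient
    step on the smooth part Phi, then projection onto [0, inf)^n for f and
    soft thresholding for h."""
    f = np.zeros(Fop.shape[1])
    h = np.zeros(Fop.shape[1])
    for _ in range(niter):
        r = Lap @ f - h / c ** 2
        gf = Fop.T @ (Fop @ f - y) + alpha * (Lap.T @ r)    # grad of Phi w.r.t. f
        gh = Fop.T @ (Fop @ h - y2) - (alpha / c ** 2) * r  # grad of Phi w.r.t. h
        f = np.maximum(f - mu * gf, 0.0)                    # prox of the indicator I_C
        h = soft(h - mu * gh, mu * beta)                    # prox of mu*beta*||.||_1
    return f, h
```

For a step size below the reciprocal Lipschitz constant of the gradient, the objective decreases monotonically.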
\section{Deep learning for CS PAT}
\label{sec:deep}
As an alternative to the joint $\ell^1$-minimization algorithm,
we use deep learning for CS image reconstruction.
We thereby use a trained residual network as well as a corresponding
(approximate) nullspace network, which offers improved data consistency.
\subsection{Image reconstruction by deep learning}
\label{sec:nn}
Deep learning is a recent paradigm for solving inverse problems of the form
\eqref{eq:ip0}. In this case, image reconstruction is performed by an explicit reconstruction
function
\begin{equation} \label{eq:nn}
\rec_{\theta} = \NN_\theta \circ \Fullop^\sharp \colon \R^{mQ} \to \R^n \,.
\end{equation}
The reconstruction operator $\rec_{\theta}$ is the composition of a backprojection
operator and a convolutional neural network
\begin{align}
\Fullop^\sharp \colon \R^{mQ} \to \R^n \\
\NN_\theta \colon \R^{n} \to \R^n \,.
\end{align}
The backprojection $\Fullop^\sharp$ performs an initial reconstruction that is
subsequently improved by the CNN $\NN_\theta$.
In this work, we use the filtered backprojection (FBP) algorithm \cite{FinHalRak07} for
$\Fullop^\sharp$, which is a discretization of the inversion formula \eqref{eq:fbp2d}.
For the CNN $\NN_\theta$ we use the residual network
(see Subsection~\ref{sec:resnet}) and the nullspace network (see Subsection~\ref{sec:nullnet}).
The CNN is taken from a parameterized family, where the parameterization
$ \theta \in \Theta \mapsto \NN_\theta $ is determined by the network architecture.
To adjust the parameters, one assumes that a family of training
data $ ((\Xin_k, \Xout_k))_{k=1}^N$ is given, where each
training example consists of an artifact-free output image $\Xout_k$
and a corresponding input image $ \Xin_k = \Fullop^\sharp \Fullop (\Xout_k) $. The free parameters
$\theta$ are chosen in such a way that the overall error
of the network for predicting $\Xout_k$ from $\Xin_k$ is minimized.
The minimization procedure used in this paper is described in
Subsection~\ref{sec:training}.
\subsection{Residual network}
\label{sec:resnet}
The architecture of the CNN is crucial
for the performance of tomographic image reconstruction with deep learning.
A common architecture in that context is the following residual network
\begin{equation}\label{eq:resnet}
\rec_{\theta}^{\rm res} = (\Id + \UU_\theta) \Fullop^\sharp \,,
\end{equation}
where $ \UU_\theta$ is the Unet, originally introduced in
\cite{ronneberger2015unet} for biomedical image segmentation. The
residual network \eqref{eq:resnet} has been used successfully for various tomographic
image reconstruction tasks \cite{antholzer2018deep,jin2016deep,han2016deep}
including PAT.
\begin{figure}[htb!]
\centering
\includegraphics[width=\columnwidth]{pics/u-net-eps-converted-to.pdf}
\caption{ Architecture of the residual network $\Id + \UU_\theta$. \label{fig:net}
The number written above each layer denotes the number of convolution kernels (channels).
The numbers written on the left are the image sizes.
The long arrows indicate direct connections with subsequent concatenation or summation.}
\end{figure}
Using $\Id + \UU_\theta$ instead of $\UU_\theta$ has the effect that the Unet
actually learns the residual images $\Xout - \Xin$. The residual images
often have a simpler structure than the original outputs $\Xout$.
As argued in~\cite{han2016deep}, learning the residuals and adding them to the inputs after the last layer is more effective than directly training for the outputs.
The resulting deep neural network architecture is shown in Figure~\ref{fig:net}.
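The point that fitting $\Id + \UU_\theta$ amounts to fitting $\UU_\theta$ to the residuals can be illustrated with a deliberately simple linear "network"; everything below is a toy of our own construction, not the actual Unet:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 8, 50
Xin = rng.standard_normal((n, N))                # columns: input images
Xout = Xin + 0.1 * rng.standard_normal((n, N))   # outputs close to the inputs

# fitting (Id + U) to the pairs in least squares is the same as
# fitting the linear map U to the residuals Xout - Xin
U = (Xout - Xin) @ np.linalg.pinv(Xin)
pred = Xin + U @ Xin                             # (Id + U) applied to the inputs

# the fitted residual map can only improve on predicting Xout by Xin itself
assert np.linalg.norm(pred - Xout) <= np.linalg.norm(Xout - Xin) + 1e-9
```

When the outputs are close to the inputs, the residual target is small and structurally simpler, which is the motivation for the residual connection.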
\subsection{Nullspace network}
\label{sec:nullnet}
Especially when applying $\rec_{\theta}^{\rm res}$ to objects very different from the training set,
the residual network \eqref{eq:resnet} lacks data consistency, in the sense that
$\rec_{\theta}^{\rm res} \Data$ is not necessarily a solution of the given equation
$\Fullop \Source = \Data$.
To overcome this limitation, as an alternative we use the nullspace network
\cite{schwab2018deep},
\begin{equation}\label{eq:nullnet}
\rec_{\theta}^{\rm null} = (\Id + \Po_{ \Kern (\Fullop)} \UU_\theta ) \Fullop^\sharp \,.
\end{equation}
One strength of the nullspace network is that the term
$\Po_{ \Kern (\Fullop)} \UU_\theta$ only adds information that is
consistent with the given data. For example, if $\Fullop^\sharp = \Fullop^+$ equals the
pseudoinverse, then $\rec_{\theta}^{\rm null} (\Data)$ is even
fully data consistent, as implied by the following theorem.
\begin{theorem} \label{thm:null}
Let $\Data = \Fullop(\Source^\star)$ be in the range of the forward operator, write $L(\Fullop, \Data)$ for the set of solutions of the equation
$\Fullop \Source =\Data$ and take $\Fullop^\sharp = \Fullop^+$ as the
pseudoinverse.
\begin{enumerate}
\item $ \rec_{\theta}^{\rm null} (\Data) $ is a solution of $\Fullop \Source = \Data$.
\item We have $ \rec_{\theta}^{\rm null}(\Data) = \Po_{ L(\Fullop, \Data)} \rec_{\theta}^{\rm res}(\Data)$.
\item Consider the iteration
\begin{align} \label{eq:iter1}
\Source^{(0)} &= \rec_{\theta}^{\rm res}(\Data)
\\ \label{eq:iter2}
\Source^{(k+1)} &= \Source^{(k)} - s\Fullop^\trans ( \Fullop \Source^{(k)} - \Data ) \,,
\end{align}
with step size $0 < s < \snorm{\Fullop}^{-2}$.
Then:
\begin{enumerate}
\item $\snorm{\Source^\star - \Source^{(k)}}$ is monotonically decreasing
\item $\lim_{k \to \infty} \Source^{(k)} = \rec_{\theta}^{\rm null} (\Data)$
\item $\snorm{\Source^\star - \Source^{(k)}} \leq
\snorm{\Source^\star - \rec_{\theta}^{\rm res}(\Data)}$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}
Will be presented elsewhere.
\end{proof}
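The data consistency mechanism behind the nullspace network \eqref{eq:nullnet} is elementary to check numerically: adding any element of $\Kern(\Fullop)$ leaves the data unchanged. A small sketch with a random toy matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 6, 15                                   # underdetermined toy operator
A = rng.standard_normal((m, n))
P = np.eye(n) - np.linalg.pinv(A) @ A          # orthogonal projector onto ker(A)

x = rng.standard_normal(n)                     # "true" source
u = rng.standard_normal(n)                     # arbitrary network output
y = A @ x

# the pseudoinverse solution plus any nullspace component stays data consistent
rec = np.linalg.pinv(A) @ y + P @ u
assert np.allclose(A @ rec, y)
```

In the network, $u$ is the output of the trained Unet, and the projector restricts the learned correction to the invisible part of the operator.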
Theorem~\ref{thm:null} implies that iteration \eqref{eq:iter1}, \eqref{eq:iter2}
defines a sequence
\begin{equation} \label{eq:iter3}
\rec_{\theta}^{\mathrm{null},(k)}(\Data) \triangleq \Source^{(k)}
\end{equation}
that monotonically converges to $\rec_{\theta}^{\rm null}(\Data) $.
It also implies that the nullspace network as well as the approximate nullspace network
$\rec_{\theta}^{\mathrm{null},(k)}$ have a reconstruction
error no larger than the residual network. Moreover, according to Theorem~\ref{thm:null}
the nullspace network yields a solution of the equation $\Fullop \Source =\Data$
even for elements very different from the training data.
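The monotonicity in Theorem~\ref{thm:null} is easy to observe for the iteration \eqref{eq:iter1}, \eqref{eq:iter2}; the sketch below replaces the residual network output by a random starting point:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 6, 15
A = rng.standard_normal((m, n))
xstar = rng.standard_normal(n)                  # a solution of A x = y
y = A @ xstar
s = 0.9 / np.linalg.norm(A, 2) ** 2             # step size s < ||A||^{-2}

x = rng.standard_normal(n)                      # stand-in for the network output
errs = [np.linalg.norm(xstar - x)]
for _ in range(50):
    x = x - s * A.T @ (A @ x - y)               # iteration step as in the theorem
    errs.append(np.linalg.norm(xstar - x))

# the distance to the solution xstar never increases along the iteration
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(errs, errs[1:]))
```

Since the iteration map contracts in the directions orthogonal to $\Kern(A)$ and is the identity on $\Kern(A)$, the error to any fixed solution is nonincreasing.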
\begin{figure}[htb!]
\includegraphics[width=0.45\columnwidth]{pics/vessel_true-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/head_true-eps-converted-to.pdf}\\
\includegraphics[width=0.45\columnwidth]{pics/vessel_full-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/head_full-eps-converted-to.pdf}
\caption{Test phantoms for results presented below.
Top: vessel phantom (left) and head phantom (right).
Bottom: FBP reconstruction from full data of vessel phantom (left)
and head phantom (right).\label{fig:phantoms}}
\end{figure}
\section{Numerical results}
\label{sec:num}
In this section we numerically compare the joint $\ell^1$-minimization approach
with the residual network and the nullspace network. We also compare the results with plain
FBP. We use Keras~\cite{keras} with TensorFlow~\cite{tensorflow}
to train and evaluate the CNN. The FBP, the $\ell^1$-minimization algorithm, and the
iterative update \eqref{eq:iter2} are implemented in MATLAB.
We ran all our experiments on a computer using an
Intel i7-6850K and an NVIDIA 1080Ti. The phantoms as well as the FBP
reconstruction from fully sampled data are shown in
Figure~\ref{fig:phantoms}. Note that we use limited-view data, which implies that
the reconstructions in Figure~\ref{fig:phantoms} contain some artefacts.
\subsection{Measurement setup}
\label{eq:setup}
The entries of any discrete PA source $\Source \in \R^{n}$ with
$n= 256^2$ correspond to discrete samples of the continuous source at a
$256 \times 256$ Cartesian grid covering the square $[\SI{-5}{\micro \meter} , \SI{9}{\micro\meter}] \times
[ \SI{-12.5}{\micro \meter}, \SI{1.5}{\micro \meter}]$.
The full wave data $\Data \in \R^{ M Q}$ corresponds to $Q= 747$ equidistant
temporal samples in $[0,T]$ with $T=\SI{4.9749d-2}{\micro \second} $ and $M=240 $ equidistant sensor locations on the circle of radius $\SI{40}{\micro \meter}$ and polar angles in the interval $[\SI{35}{\degree}, \SI{324}{\degree}] $. The sound speed is taken as $c= \SI{1.4907d3}{\meter \per \second}$.
The forward operator is evaluated by discretizing the solution formula of the wave equation, and the
inversion formula \eqref{eq:fbp2d} is discretized using the standard FBP procedure described in \cite{FinHalRak07,haltmeier2011mollification}. Recall that in the continuous setting the inversion integral in \eqref{eq:fbp2d} equals the adjoint of the forward operator. Therefore the above procedure gives a pair
\begin{align*}
\Wave \colon \R^{n} \to \R^{MQ} \\
\BP \colon \R^{MQ} \to \R^{n}
\end{align*}
of forward operator and unmatched adjoint.
We consider $m = 60$ spatial measurements, which corresponds to a
compression factor of four. For the sampling matrices $\samp \in \R^{m \times M}$ we use
the following instances:
\begin{itemize}
\item Deterministic sparse subsampling matrix
with
entries
\begin{equation} \label{eq:subsampling}
\samp[i,j]
=
\begin{cases}
2 & \text{ if } j = 4(i-1) + 1 \\
0 & \text{ otherwise} \,.
\end{cases}
\end{equation}
\item Random Bernoulli matrix where each entry is taken
independently as $\pm 1/\sqrt{m}$ with equal probability.
\end{itemize}
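Both measurement matrices can be generated in a few lines (0-based indexing; the weight 2 in the subsampling matrix follows \eqref{eq:subsampling}):

```python
import numpy as np

def subsampling_matrix(m, M):
    """Deterministic sparse subsampling: row i picks detector 4*i with weight 2
    (0-based version of S[i, j] = 2 for j = 4(i-1)+1)."""
    S = np.zeros((m, M))
    S[np.arange(m), 4 * np.arange(m)] = 2.0
    return S

def bernoulli_matrix(m, M, rng):
    """Random Bernoulli matrix with i.i.d. entries +-1/sqrt(m)."""
    return rng.choice([-1.0, 1.0], size=(m, M)) / np.sqrt(m)

S_sub = subsampling_matrix(60, 240)
S_ber = bernoulli_matrix(60, 240, np.random.default_rng(7))

assert np.all(np.count_nonzero(S_sub, axis=1) == 1)   # one detector per row
assert np.allclose(np.abs(S_ber), 1 / np.sqrt(60))    # entries +-1/sqrt(m)
```

The subsampling matrix keeps every fourth detector, matching the compression factor of four, while the Bernoulli matrix mixes all detectors in every compressed channel.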
The Bernoulli matrix satisfies the RIP with high probability,
whereas the sparse subsampling matrix does not. Therefore, we expect the
$\ell^1$-minimization approach to work better for Bernoulli measurements.
On the other hand, in the subsampling case the artefacts have more structure,
which is expected to favor the deep learning approaches.
Our findings below confirm these conjectures.
\subsection{Construction of reconstruction networks}
\label{sec:training}
For the residual and the nullspace network
we use the backprojection layer
\begin{equation}\label{eq:bo}
\Fullop^\sharp = \BP \circ \Samp^\trans \,,
\end{equation}
and the same trained CNN.
For that purpose, we construct $N = 5000$ training examples
$ (\Xin_k,\Xout_k)_{k=1}^N$ where $\Xout_k$ are taken as projection
images from three dimensional lung blood vessel data as described in \cite{schwab2018fast}.
All images $\Xout_k$ are normalized to have maximal intensity one.
The corresponding input images
are computed by $\Xin_k = \Fullop^\sharp \Fullop \Xout_k $.
The CNN is constructed by minimizing the mean absolute error
\begin{equation} \label{eq:err-nn}
E_N(\theta)
\triangleq
\frac{1} {N}
\sum_{k=1}^N \norm{ ( \Id + \UU_\theta) (\Xin_k) - \Xout_k }_1
\end{equation}
using stochastic gradient descent with batch size 1 and momentum 0.9. We trained for 200 epochs
and used a learning rate decaying from $0.005$ to $0.0025$.
Having computed the minimizer of \eqref{eq:err-nn} we use the trained residual network
$\rec_{\theta}^{\rm res}$ as well as the corresponding approximate nullspace network
$\rec_{\theta}^{\mathrm{null}, (10)}$ for image reconstruction.
\subsection{Blood vessel phantoms}
First we investigate the performance on 50 blood vessel phantoms
that are not contained in the training set. We consider sparse sampling
as well as Bernoulli measurements. For the joint recovery approach,
we use 70 iterations of the iterative thresholding procedure with coupling parameter
$\al = 0.001$, regularization parameter $\beta = 0.005$ and step size $\mu = 0.125$.
For the (approximate) nullspace network
$\rec_{\theta}^{\mathrm{null}, (10)}$ we use 10 iterations to approximately compute the projection.
Results for one of the vessel phantoms are visualized in Figure \ref{fig:vessel}.
To quantitatively evaluate the results
we computed the MSE (mean square error), the PSNR (peak signal to noise ratio) and the SSIM (structural similarity index \cite{wang2004image}) averaged over all 50 blood vessel phantoms.
The reconstruction errors are summarized in
Table~\ref{tab:vessel} where the best results are framed.
\begin{figure}[htb!]
\includegraphics[width=0.45\columnwidth]{pics/vessel_sparse_bp-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/vessel_bern_bp-eps-converted-to.pdf}\\
\includegraphics[width=0.45\columnwidth]{pics/vessel_sparse_joint-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/vessel_bern_joint-eps-converted-to.pdf}\\
\includegraphics[width=0.45\columnwidth]{pics/vessel_sparse_unet-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/vessel_bern_unet-eps-converted-to.pdf}\\
\includegraphics[width=0.45\columnwidth]{pics/vessel_sparse_nullnet-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/vessel_bern_nullnet-eps-converted-to.pdf}
\caption{Reconstructions of blood vessel image from sparse measurements (left) and Bernoulli measurements (right).
Top row: FBP reconstruction.
Second row: Joint $\ell^1$-minimization.
Third row: Residual network.
Bottom row: Nullspace network.
\label{fig:vessel}}
\end{figure}
\begin{table}[htb!]
\caption{Performance averaged over 50 blood vessel images.\label{tab:vessel}}
\centering
\begin{tabular}{ l | c c c}
\toprule
& MSE & PSNR & SSIM \\
\midrule
\multicolumn{2}{c}{Sparse measurements}\\
FBP & \colorbox{white}{\SI{15.1d-4}{}} & \colorbox{white}{28.6} & \colorbox{white}{\SI{4.83d-1}{}} \\
$\ell^1$-minimization & \colorbox{white}{\SI{3.35d-4}{}} & \colorbox{white}{35.0} & \colorbox{white}{\SI{8.50d-1}{}} \\
residual network & \colorbox{white}{\SI{3.08d-4}{}} & \colorbox{white}{35.6} & \colorbox{green}{ \SI{9.30d-1}{} } \\
nullspace network & \colorbox{green}{ \SI{2.22d-4}{}} & \colorbox{green}{ 37.0 }& \SI{9.17d-1}{} \\
\midrule
\multicolumn{2}{c}{Bernoulli measurements}\\
FBP &\colorbox{white}{ \SI{20.2d-4}{}} & \colorbox{white}{27.3} & \colorbox{white}{\SI{4.18d-1}{}} \\
$\ell^1$-minimization & \colorbox{green}{ \SI{1.89d-4}{} } & \colorbox{green}{ 37.5 } & \colorbox{green}{ \SI{9.06d-1}{} } \\
residual network & \colorbox{white}{\SI{6.32d-4}{}} & \colorbox{white}{32.6} & \colorbox{white}{\SI{8.89d-1}{}} \\
nullspace network & \colorbox{white}{\SI{2.21d-4}{}} & \colorbox{white}{36.9} & \colorbox{white}{\SI{8.89d-1}{}} \\
\bottomrule
\end{tabular}
\end{table}
From Table~\ref{tab:vessel} we see that the joint $\ell^1$-minimization
as well as the deep learning based methods significantly
outperform the FBP reconstruction.
Moreover, the deep learning approaches even outperform
the joint recovery approach for the sparse sampling.
The nullspace network in all cases decreases the MSE (increases the PSNR)
compared to the residual network.
\subsection{Shepp-Logan type phantom}
Next we investigate the performance on a Shepp-Logan type phantom
that contains structures completely different from the training data.
For the joint recovery approach, we use 50 iterations of the iterative
thresholding procedure with
coupling parameter $\al = 0.001$, regularization parameter $\beta = 0.005$, and step size $\mu = 0.1$.
For the nullspace network we use
$\rec_{\theta}^{\mathrm{null}, (10)}$. Results are shown in
Figure~\ref{fig:head}. Table~\ref{tab:head} shows the MSE, the PSNR and the SSIM
for the head phantom, where the best results are again framed.
\begin{figure}[htb!]
\includegraphics[width=0.45\columnwidth]{pics/head_sparse_bp-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/head_bern_bp-eps-converted-to.pdf}\\
\includegraphics[width=0.45\columnwidth]{pics/head_sparse_joint-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/head_bern_joint-eps-converted-to.pdf}\\
\includegraphics[width=0.45\columnwidth]{pics/head_sparse_unet-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/head_bern_unet-eps-converted-to.pdf}\\
\includegraphics[width=0.45\columnwidth]{pics/head_sparse_nullnet-eps-converted-to.pdf}
\includegraphics[width=0.45\columnwidth]{pics/head_bern_nullnet-eps-converted-to.pdf}
\caption{Reconstructions of Shepp-Logan type phantom from sparse measurements (left) and Bernoulli measurements (right).
Top row: FBP reconstruction.
Second row: Joint $\ell^1$-minimization.
Third row: Residual network.
Bottom row: Nullspace network.
\label{fig:head}}
\end{figure}
\begin{table}[htb!]
\caption{Performance for the Shepp-Logan phantom.\label{tab:head}}
\centering
\begin{tabular}{ l | c c c}
\toprule
& MSE & PSNR & SSIM \\
\midrule
\multicolumn{2}{c}{Sparse measurements}\\
$\ell^1$-minimization & \colorbox{white}{\SI{6.73d-4}{}} & \colorbox{white}{31.7} & \colorbox{white}{\SI{8.04d-1}{}} \\
residual network & \colorbox{white}{\SI{6.32d-4}{}} & \colorbox{white}{32.0} & \colorbox{green}{ \SI{8.90d-1}{} } \\
nullspace network & \colorbox{green}{ \SI{5.29d-4}{} } & \colorbox{green}{ 32.8 } & \colorbox{white}{\SI{8.59d-1}{}} \\
\midrule
\multicolumn{4}{c}{Bernoulli measurements}\\
$\ell^1$-minimization & \colorbox{green}{ \SI{6.03d-4}{} } & \colorbox{green}{ 32.2 } & \colorbox{green}{ \SI{8.19d-1}{} } \\
residual network & \colorbox{white}{\SI{19.2d-4}{} } & \colorbox{white}{ 27.2 } & \colorbox{white}{ \SI{7.63d-1}{} } \\
nullspace network & \colorbox{white}{ \SI{6.92d-4}{} } & \colorbox{white}{ 31.6 } & \colorbox{white}{ \SI{7.67d-1}{} } \\
\bottomrule
\end{tabular}
\end{table}
As the considered Shepp-Logan type phantom is very different from
the training data, it is not surprising that the standard residual network
does not perform well for the Bernoulli measurements.
Surprisingly, the residual network still works well in the sparse data case.
The nullspace network yields significantly improved
results compared to the residual network, especially in the
Bernoulli case. In the Bernoulli case, the $\ell^1$-minimization approach
performs best, though only slightly better than the approximate nullspace
network.
\section{Conclusion}
\label{sec:conclusion}
In this paper we compared $\ell^1$-minimization
with deep learning for CS PAT image reconstruction.
The two approaches have been tested on blood vessel
data (test data not contained in the training set that consists of similar objects)
as well as a Shepp-Logan type phantom (with structures very different from the training data).
For the CS PAT measurements we considered deterministic subsampling
as well as random Bernoulli measurements. As reconstruction networks, we considered
the U-net with residual connection and an approximate nullspace network, which contains an
additional data consistency layer.
In terms of reconstruction quality, our findings can be summarized as follows:
\begin{enumerate}
\item
Sparse recovery and deep learning both significantly outperform filtered backprojection
for both measurement matrices.
If the training data are not representative of the object to be reconstructed, this conclusion
holds for the deep learning approach only in the case of the nullspace network.
\item
In the case of the sparse measurement matrix, the deep learning approach outperforms
$\ell^1$-minimization. In the case of Bernoulli measurements, the $\ell^1$-minimization algorithm
yields better performance.
\item
The nullspace network contains a data consistency layer and yields good results even
for phantoms very different from the training data. Even for test data similar to the training
data it yields an improved PSNR compared to the residual network.
\end{enumerate}
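The data consistency layer of the nullspace network admits a simple linear-algebra description: the network output is corrected so that it exactly reproduces the measured data, while the component invisible to the measurements (filled in by the CNN) is left untouched. A hedged sketch of this principle using the Moore-Penrose pseudoinverse (an illustration, not the trained layer itself):

```python
import numpy as np

def data_consistency(x, A, y):
    """Correct x so that A @ x_new == y (for full-row-rank A),
    modifying only the component of x in the row space of A."""
    return x + np.linalg.pinv(A) @ (y - A @ x)
```

For a full-row-rank measurement matrix $A$ the output satisfies $A x_{\mathrm{new}} = y$ exactly, so consistency with the data is enforced regardless of the network's prediction.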
According to the above results, we recommend the $\ell^1$-minimization algorithm in the case of random measurements and the nullspace network in the case of sparse measurements.
We point out that application of the CNN only takes fractions of a second
(actually, less than $0.01$ seconds) in Keras, whereas the joint recovery approach requires around
2 minutes for 50 iterations in Matlab. Note that this comparison is not completely
fair; with a recent GPU implementation in PyTorch we have been able to reduce the
computation time to about one second for 50 iterations.
Nevertheless, the deep learning based methods are still significantly faster.
Therefore, especially the nullspace network is very promising for high quality real-time CS PAT
imaging.
\section*{Acknowledgement}
The work of M.H. and S.A. has been supported by the Austrian Science Fund (FWF), project P 30747-N32.
TITLE: Formal proof for closed set.
QUESTION [0 upvotes]: I'm trying to prove that for $K = Z \cap E \subset \mathbb{R}^3$ with:
$Z = \{(x,y,z) \in \mathbb{R}^3 | x^2+y^2 = 1 \}$ and
$E = \{(x,y,z) \in \mathbb{R}^3 | x+z = 0 \}$
the set $K$ is closed. Now I know and can use that the intersection of closed sets is closed, which would leave me with proving that $E$ and $Z$ are closed. Isn't it enough to state that the two sets obviously contain their boundary because of the equality in the restriction?
Edit: Sorry for the confusion, the original title asking for proof of open set was wrong, since I'm trying to prove that the set is closed.
REPLY [2 votes]: Since $Z$ and $E$ are closed sets, $K$ is closed too. The set $Z$ is closed since $Z=f^{-1}\bigl(\{1\}\bigr)$, where $f(x,y,z)=x^2+y^2$ and $f$ is continuous. And $E$ is closed for a similar reason.
\begin{document}\title{Double Schubert polynomials
for the classical groups}\author{Takeshi Ikeda} \address{Department of Applied Mathematics,
Okayama University of Science, Okayama 700-0005, Japan} \email{ike@xmath.ous.ac.jp}
\author{Leonardo~C.~Mihalcea}
\address{Department of Mathematics, Duke University, P.O. Box 90320 Durham, NC 27708-0320 USA}
\email{lmihalce@math.duke.edu}
\author{Hiroshi Naruse}\address{Graduate School of Education, Okayama University, Okayama 700-8530, Japan}\email{rdcv1654@cc.okayama-u.ac.jp}
\subjclass[2000]{Primary 05E15; Secondary 14N15, 14M15,05E05}
\date{October 5, 2008}
\maketitle
\begin{abstract}
For each infinite series of the classical Lie groups of type $\mathrm{B}$, $\mathrm{C}$ or $\mathrm{D}$,
we introduce a family of polynomials parametrized by the elements of the
corresponding Weyl group of infinite rank.
These polynomials represent the Schubert classes in the equivariant
cohomology of the appropriate flag variety. They satisfy a stability property,
and are a natural extension of the (single) Schubert polynomials of Billey
and Haiman, which represent non-equivariant
Schubert classes. They are also positive in a certain sense, and when indexed by maximal Grassmannian elements, or by the longest element in a finite Weyl group, these polynomials can be expressed in terms of the factorial analogues of
Schur's $Q$- or $P$-functions defined earlier
by Ivanov.
\end{abstract}
\section{Introduction}
A classical result of Borel states that
the (rational) cohomology ring of
the flag variety of any
simple complex Lie group $G$
is isomorphic, as a graded ring, to the {\it coinvariant\/} algebra
of the corresponding Weyl group $W$,
i.e. to the quotient of a polynomial
ring modulo the ideal generated by
the $W$-invariant polynomials
of positive degree.
The Schubert classes form a
distinguished additive basis of the cohomology ring
indexed by the elements in the Weyl group.
Bernstein-Gelfand-Gelfand \cite{BGG} (see also Demazure \cite{D})
showed that if one starts with a polynomial that represents the
cohomology class of
highest codimension (the Schubert
class of a point), one obtains all the other Schubert
classes by applying a succession of
{\it divided difference\/} operators
corresponding to simple roots. This construction depends on
the choice of a polynomial representative for the ``top'' cohomology class.
For $SL(n,\C)$, Lascoux and Sch\"utzenberger \cite{LS} considered one particular choice, which yielded polynomials - the Schubert polynomials -
with particularly good combinatorial and geometric
properties.
It is a natural problem to extend
the construction in \cite{LS} to the other
classical Lie groups.
To this end, Fomin and Kirillov
\cite{FK} listed five properties that characterize
the Schubert polynomials in type $\mathrm{A},$
but they showed that it is impossible to
construct a theory of ``Schubert polynomials''
in type $\mathrm{B}$ satisfying the same properties.
For type $\mathrm{B}_{n}$, they constructed several families
of polynomials which satisfy all but one of these properties.
There is another approach to this
problem due to Billey and Haiman
\cite{BH}. Consider one of the series of Lie types
$\mathrm{B}_{n},\mathrm{C}_{n}$, and $\mathrm{D}_{n}$
and denote by $G_{n},T_{n},$ and $B_{n}$
the corresponding classical Lie group,
a maximal torus, and a Borel subgroup
containing the maximal torus. The associated flag variety is $\mathcal{F}_{n}=G_{n}/B_{n}$, and the cohomology Schubert classes $\sigma_{w}^{(n)}$ are labeled by elements $w$ in the Weyl group
$W_{n}$ of $G_n$. There is a natural embedding of groups $G_{n}\hookrightarrow G_{n+1}$, and this induces an embedding
of the flag varieties
$\mathcal{F}_{n}\hookrightarrow \mathcal{F}_{n+1}$
and induced maps in cohomology $H^*(\F_{n+1},\Q) \to H^{*}(\F_n,\Q)$ in the reverse direction, compatible with the Schubert classes.
For each element $w$ in the infinite Weyl group $W_\infty = \bigcup_{n \ge 1} W_n$, there is
a {\it stable\/} Schubert class
$\sigma_{w}^{(\infty)}=\invlim \,
\sigma_{w}^{(n)}$
in the inverse system
$\invlim \,H^{*}(\mathcal{F}_{n},\Q)$
of the cohomology rings.
A priori, the class $\sigma_{w}^{(\infty)}$
is represented by a homogeneous element in the ring of
power series $\Q[[z_1, z_2, \dots ]]$, but
Billey and Haiman showed in \cite{BH} that
it is represented by
a {\em unique} element $\mathfrak{S}_{w}$
in the subring\footnote{
The elements $\{z_{1},z_{2},\ldots;p_{1}(z),p_{3}(z),p_{5}(z),\ldots\}$ are algebraically independent, so
the ring (\ref{eq:BHring}) can also be
regarded simply as the polynomial
ring in $z_{i}$ and $p_{k}$ ($k=1,3,5,\ldots)$. }
\begin{equation}
\mathbb{Q}[z_1,z_2, \ldots;
p_{1}(z),p_{3}(z),p_{5}(z),\ldots],\label{eq:BHring}
\end{equation}
where $p_{k}(z)=\sum_{i=1}^{\infty}z_{i}^{k}$
denotes the power-sum
symmetric function.
Note that
the images of the even power-sums $p_{2i}(z)$ in the limit of the coinvariant rings
vanish for each of the types $\mathrm{B,C}$ and $\mathrm{D}$.
The elements $\{\mathfrak{S}_{w}\}$ are obtained as
the {\em unique} solutions of a system of equations involving infinitely many
divided difference operators.
These polynomials satisfy the main combinatorial properties of the type $\mathrm{A}$ Schubert polynomials, suitably interpreted.
In particular, the polynomial
$\mathfrak{S}_{w}$
is stable, i.e.
it represents the Schubert classes
$\sigma_{w}^{(n)}$
in $H^*(\F_n,\Q)$ simultaneously for all positive integers $n.$
The flag varieties admit an action of
the maximal torus $T_{n}$, and the inclusion $\F_n \hookrightarrow \F_{n+1}$ is equivariant
with respect to $T_{n}\hookrightarrow T_{n+1}
$. Therefore one can define an {\em equivariant} version
of the stable Schubert classes, and one can ask whether
we can `lift' the polynomials of Billey and Haiman
to the equivariant setting.
These will be the ``double Schubert polynomials", which we will define and study next.
The terminology comes from type $\mathrm{A}$, where Lascoux and Sch\"utzenberger defined a double version of their Schubert polynomials in \cite{LS} (see also \cite{MacSchubert}).
As shown by Fulton in \cite{F:deg}, the type $\mathrm{A}$ double Schubert polynomials can also be constructed as polynomials which represent the cohomology classes of some degeneracy loci. More recently, two related constructions connecting double Schubert polynomials to equivariant cohomology of flag manifolds, using either Thom polynomials or Gr\"obner degenerations were obtained independently by Feh\'er and Rim\'anyi \cite{FR} and by Knutson and Miller \cite{KM}. The degeneracy locus construction was extended to other types by Fulton \cite{F}, Pragacz-Ratajski \cite{PR} and Kresch-Tamvakis \cite{KT}. The resulting polynomials are expressed in terms of Chern classes canonically associated to the geometric situation at hand.
Their construction depends again on the choice of a
polynomial to represent the ``top class" - the diagonal class
in the cohomology of a flag bundle.
Unfortunately
different choices lead to polynomials having some desirable combinatorial properties - but not all. In particular, the polynomials in \cite{KT,F} do not satisfy the
stability property.
In this paper we will work in the equivariant cohomology of flag varieties.
As in \cite{BH}, there is a unique family of stable polynomials,
which is the unique solution of a system of divided difference operators. In our study we will make full use of {\it localization\/} techniques in equivariant cohomology. In the process, we will reprove, and put on a more solid geometric foundation, the results from \cite{BH}.
\subsection{Infinite hyperoctahedral groups}\label{ssec:infinite_weyl} To fix notations, let $W_\infty$ be the infinite hyperoctahedral group, i.e.~the Weyl group of type $\mathrm{C}_\infty$ (or $\mathrm{B}_\infty$). It is generated by elements $s_0,s_1, \ldots$ subject to the braid relations described in (\ref{eq:Wrel}) below. For each nonnegative integer $n$, the subgroup $W_n$ of $W_\infty$ generated by $s_0, \ldots , s_{n-1}$ is the Weyl group of type $\mathrm{C}_n$. $W_\infty$ contains a distinguished subgroup $W_\infty'$ of index $2$ - the Weyl group of $\mathrm{D}_\infty$ - which is generated by $s_{\hat{1}},s_{1}, s_2, \ldots$, where $s_{\hat{1}} = s_0s_1s_0$. The corresponding finite subgroup $W_n'=W_\infty' \cap W_n$ is the type $\mathrm{D}_n$ Weyl group. To be able to make statements which are uniform across all classical types, we use $\W_{\infty}$ to denote $W_{\infty}$ when we consider types $\mathrm{C}$ or $\mathrm{B}$ and
$W_{\infty}'$ for type $\mathrm{D}$; similar notation is used for $\W_{n}\subset \W_{\infty}$. Finally,
set $\I_{\infty}$ to be the indexing set $\{0,1,2,\ldots\}$
for types $\mathrm{B,C}$ and $\{\hat{1},1,2,\ldots\}$ for type $\mathrm{D}.$
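For orientation, the relations in question are the standard Coxeter relations of the hyperoctahedral group; we record them here informally (the precise form used is (\ref{eq:Wrel}) below):

```latex
s_i^2 = e \quad (i \ge 0), \qquad
(s_0 s_1)^4 = e, \qquad
(s_i s_{i+1})^3 = e \quad (i \ge 1), \qquad
(s_i s_j)^2 = e \quad (|i-j| \ge 2).
```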
\subsection{Stable Schubert classes}\label{ssec:SSch} To each element $w \in \W_n$ there corresponds a torus-fixed point $e_{w}$ in the flag variety $\mathcal{F}_{n}=G_{n}/B_{n}$ (see \S \ref{ssec:NotationFlag} below).
The {\em Schubert variety}
$X_{w}$ is defined to be the closure of the {\em Schubert cell}
$B^{-}_{n}e_{w}$ in $\mathcal{F}_{n},$
where $B^{-}_{n}$ is the Borel subgroup opposite
to $B_{n}$. The fundamental class of $X_w$ determines a {\em Schubert class}
$\sigma_{w}^{(n)}$ in the
equivariant cohomology ring\begin{footnote}{
Unless otherwise stated, from now on we work with cohomology with coefficients in $\Z$.}\end{footnote} $H_{T_{n}}^{2 \ell(w)}(\mathcal{F}_{n})$, where $\ell(w)$ is the length of $w$. The classes $\{\sigma_{w}^{(n)}\}_{w\in \W_{n}}$
form an $H_{T_{n}}^{*}(pt)$-basis
of the equivariant cohomology ring
$H_{T_{n}}^{*}(\mathcal{F}_{n}).$
Note that $H_{T_{n}}^{*}(pt)$ is canonically
identified with the polynomial ring
$\Z[t]^{(n)}:=\Z[t_{1},\ldots,t_{n}]$ generated by
a standard basis $\{t_{1},\ldots,t_{n}\}$ of the character group of $T_{n}.$
Since the torus $T_n$ acts trivially on $e_v$, the inclusion map $\iota_v:e_v \to \F_n$ is equivariant, and induces the {\em localization map} $\iota_{v}^{*}: H_{T_{n}}^{*}(\mathcal{F}_{n})
\rightarrow H_{T_{n}}^{*}(e_{v})$.
It is well-known (cf. e.g. \cite{A}) that
the product map
$$
\iota^{*}=(\iota_{v}^{*})_{v}: H_{T_{n}}^{*}(\mathcal{F}_{n})
\longrightarrow
\prod_{v\in \W_{n}}
H_{T_{n}}^{*}(e_{v})
=\prod_{v\in \W_{n}}
\Z[t]^{(n)}$$
is injective, so we will often
identify $\sigma_{w}^{(n)}$ with an
element in $\prod_{v\in \W_{n}}
\Z[t]^{(n)}$ via $\iota^{*}.$
It turns out that the localizations of Schubert classes stabilize, in the sense that for $v,w \in \W_n \subset \W_m$, the polynomial $\iota_v^*(\sigma_w^{(m)})$ in $H^*_{T_{m}}(e_v)=\Z[t]^{(m)}$ remains constant as $m$ varies.
Therefore, we can pass to the limit to define the {\em stable (equivariant) Schubert class} $\sigma_{w}^{(\infty)}
$
in $\prod_{v\in \W_{\infty}}\Z[t]$,
where $\Z[t]$ is the polynomial ring
in the variables $t_i$ ($i \ge 1$).
Denote by $H_{\infty}$ the $\Z[t]$-submodule of $\prod_{v\in \W_{\infty}}\Z[t]$ spanned by the stable Schubert classes. We will show that $H_{\infty}$ is actually a subalgebra of $\prod_{v\in \W_{\infty}}\Z[t]$ which has a $\Z[t]$-basis consisting of stable Schubert classes $\{\sigma_{w}^{(\infty)}\}.$
A crucial part in the theory of Schubert polynomials of classical types is played by the $P$ and $Q$ Schur functions \cite{Schur}. These are symmetric functions $P_\lambda(x)$ and $Q_\lambda(x)$ in a new set of variables $x=(x_1, x_2, \ldots)$, and are indexed by strict partitions $\lambda$ (see \S \ref{sec:Schur} below for details). The $P$ or $Q$ Schur function corresponding to $\lambda$ with one part of length $i$ is denoted respectively by $P_i(x)$ and $Q_i(x)$.
Define $\Gamma=\Z[Q_{1}(x),Q_{2}(x),\ldots]$ and
$\Gamma'=\Z[P_{1}(x),P_{2}(x),\ldots]$. Note that $\Gamma$ and $\Gamma'$ are not polynomial rings, since $Q_i(x)$ respectively $P_i(x)$ are not algebraically independent (see \S \ref{sec:Schur} for the relations among them), but they have canonical $\Z$-bases consisting of the $Q$-Schur functions $Q_\lambda(x)$ (respectively $P$-Schur functions $P_\lambda(x)$).
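For orientation, the relations among the one-row functions are the classical quadratic relations, recorded here as a standard fact (with the convention $Q_0(x)=1$):

```latex
Q_i(x)^2 \;=\; 2\sum_{j=1}^{i} (-1)^{j-1}\, Q_{i+j}(x)\, Q_{i-j}(x)
\qquad (i \ge 1).
```

For instance $Q_1(x)^2 = 2\,Q_2(x)$; since $Q_{\lambda}(x)=2^{\ell(\lambda)}P_{\lambda}(x)$ for strict $\lambda$, analogous relations hold among the $P_i(x)$.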
We define next the $\Z[t]$-algebras of {\em Schubert polynomials} $$R_{\infty}=\Z[t]\otimes_{\Z}\Gamma\otimes_{\Z}\Z[z],\quad
R'_{\infty}=\Z[t]\otimes_{\Z}\Gamma'\otimes_{\Z}\Z[z],
$$
where $\Z[z]=\Z[z_{1},z_{2},\ldots]$ is the polynomial
ring in $z=(z_{1},z_{2},\ldots).$ We will justify the terminology in the next paragraph.
Again, in order to state results uniformly in all types, we use the bold letter $\R_{\infty}$
to denote $R_{\infty}$ for type $\mathrm{C}$ and $R'_{\infty}$
for types $\mathrm{B}$ and $\mathrm{D}$.
There exists a homomorphism
$$
\Phi=(\Phi_{v})_{v\in \W_{\infty}} : \R_{\infty}\longrightarrow
\prod_{v\in \W_{\infty}}\Z[t]
$$
of graded $\Z[t]$-algebras,
which we call {\em universal localization map}. Its precise (algebraic) definition is given in \S \ref{sec:UnivLoc} and it has a natural geometrical interpretation explained in \S \ref{sec:geometry}. One of the main results in this paper is that $\Phi$ is an isomorphism
from $\R_{\infty}$ onto $H_{\infty}$
(cf. Theorem \ref{thm:isom} below). While injectivity is easily proved algebraically, surjectivity is more subtle. It uses the {\em transition equations}, which are recursive formulas which allow writing any stable Schubert class $\sigma_w^{(\infty)}$ in terms of
the (Lagrangian or Orthogonal) Grassmannian Schubert classes. Once reduced to the Grassmannian case, earlier results of the first and third author \cite{Ik,IN}, which show that the classes in question are represented by Ivanov's {\em factorial $Q$ (or $P$)} Schur functions \cite{Iv}, finish the proof of surjectivity.
By pulling back - via $\Phi$ - the stable Schubert classes,
we introduce polynomials $\mathfrak{S}_{w}
=\mathfrak{S}_{w}(z,t;x)$ in $\R_{\infty}$, which are uniquely determined by
the ``localization equations''
\begin{equation}
\Phi_{v}\left(
\mathfrak{S}_{w}(z,t;x)\right)
=\iota_{v}^{*}(\sigma_{w}^{(\infty)}),
\label{eq:Phisigma}
\end{equation}
where
$\iota_{v}^{*}(\sigma_{w}^{(\infty)})
\in \Z[t]$ is the stable limit
of $\iota_{v}^{*}(\sigma_{w}^{(n)})$.
\subsection{Divided difference operators}
Alternatively, $\{\mathfrak{S}_{w}(z,t;x)\}$ can be characterized
in a purely algebraic manner by using the divided difference operators.
There are two families of operators $\partial_i, \delta_i$ ($i \in \I_\infty$) on $\R_\infty$, such that operators from one family commute with those from the other (see \S \ref{ssec:divdiff}
for the definition). Then:
\begin{thm}
\label{existC}
There exists a unique family of elements $\s_{w}= \s_w(z,t;x)$ in $\R_\infty$, where $w\in \W_{\infty}$,
satisfying the equations
\begin{equation}\label{E:dd} \partial_{i}\mathfrak{S}_{w}=\begin{cases}
\mathfrak{S}_{ws_{i}}\quad\mbox{if}\quad \ell(ws_{i})<\ell(w)\\
0 \quad \mbox{otherwise}
\end{cases},\quad
\delta_{i}\mathfrak{S}_{w}=\begin{cases}
\mathfrak{S}_{s_{i}w}\quad\mbox{if}\quad \ell(s_{i}w)<\ell(w)\\
0 \quad \mbox{otherwise}
\end{cases},
\end{equation}
for all $i\in \I_{\infty}$,
and such that $\mathfrak{S}_{w}$ has no
constant term except for
$\mathfrak{S}_{e}=1$.
\end{thm}
The operators $\partial_{i},\delta_{i}$
are the limits of the same operators on the equivariant
cohomology $\eqcoh (\F_n)$, since the latter are compatible with the projections $H^*_{T_{n+1}}(\F_{n+1}) \to \eqcoh(\F_n)$. In this context, the operator $\partial_i$ is an equivariant generalization of the operator defined in \cite{BGG,D}, and it can be shown that it is induced by the right action of the Weyl group on the equivariant cohomology (cf. \cite{KK,Kn}). The operator $\delta_i$ exists only in equivariant cohomology, and it was used in \cite{Kn,Ty} to study equivariant Schubert classes. It turns out that it corresponds to a left Weyl group action on $\eqcoh (\F_n)$.
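For background, the non-equivariant prototype of these operators is the divided difference of \cite{BGG,D}: for a root $\alpha$ with associated reflection $s_{\alpha}$ acting on polynomials,

```latex
\partial_{\alpha} f \;=\; \frac{f - s_{\alpha} f}{\alpha}.
```

Roughly speaking, the operators $\partial_i$ and $\delta_i$ of the present paper extend this construction to the equivariant setting; the precise definitions are given in \S \ref{ssec:divdiff}.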
\subsection{Billey and Haiman's polynomials and a change of variables}
The polynomials from Theorem \ref{existC} are lifts of Billey-Haiman polynomials from the non-equivariant
to equivariant cohomology.
Concretely, if we forget the torus action,
then
$$
\mathfrak{S}_{w}(z,0;x)=\mathfrak{S}_{w}^{BH}(z;x)
$$
where $\mathfrak{S}_{w}^{BH}(z;x)$ denotes
the Billey-Haiman polynomial. This follows immediately from Theorem \ref{existC}, since $\{\mathfrak{S}_{w}^{BH}(z;x)\}$
are the unique elements in
$\Gamma\otimes_{\Z}\Z[z]$ (for type $\mathrm{C}$)
or $\Gamma'\otimes_{\Z}\Z[z]$ (for type $\mathrm{B,D}$)
which satisfy the equation involving the right divided difference operators
$\partial_{i}.$
The variables $z_i$ in $\mathfrak{S}_w(z,t;x)$ correspond geometrically to the limits of Chern classes of the tautological line bundles, while the variables $t_i$ correspond to the equivariant parameters. To understand why the variables $x_i$ are needed - both algebraically and geometrically - we comment next on the ``change of variables'' which relates the Billey-Haiman polynomials to those defined by Fomin and Kirillov in \cite[Thm. 7.3]{FK}. The general formula in our situation - along with its geometrical explanation - will be given in section \ref{sec:geometry} below. The relation between
$x$ and $z$ is given by
$$
\prod_{i=1}^{\infty}\frac{1+x_{i}u}{1-x_{i}u}
=\prod_{i=1}^{\infty}(1-z_{i}u),
$$
or equivalently
$p_{2i}(z)=0$ and $-2p_{2i+1}(x)=p_{2i+1}(z).$ In type $\mathrm{C}$, it is known that the $p_{2i}(z)$ generate the (limit of the) ideal of relations in cohomology; therefore such a variable change eliminates the ambiguity of representatives
coming from $p_{2i}(z)=0.$ Note that the change of variables can also be
expressed as $(-1)^{i}e_{i}(z)=Q_{i}(x)=2P_{i}(x)\;(i\geq 1).$
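The equivalence between the product identity and the power-sum relations is an elementary logarithm computation:

```latex
\log \prod_{i=1}^{\infty}\frac{1+x_{i}u}{1-x_{i}u}
 \;=\; 2\sum_{k\ \mathrm{odd}} \frac{p_{k}(x)}{k}\,u^{k},
\qquad
\log \prod_{i=1}^{\infty}(1-z_{i}u)
 \;=\; -\sum_{k\geq 1} \frac{p_{k}(z)}{k}\,u^{k},
```

so comparing coefficients of $u^{k}$ yields $p_{2i}(z)=0$ and $p_{2i+1}(z)=-2\,p_{2i+1}(x)$, as claimed.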
It follows that after extending the scalars from $\Z$ to $\Q$, the ring $\Gamma\otimes_{\Z}\Z[z]$
(or $\Gamma'\otimes_{\Z}\Z[z]$) is identified with
the ring (\ref{eq:BHring}).
Since both $\Gamma$ and $\Gamma'$ have
distinguished $\Z$-bases,
the polynomials $\mathfrak{S}^{BH}_{w}(z;x)$ will expand uniquely as a combination of $Q$-Schur (or $P$-Schur) functions with coefficients in $\Q[z]$.
In type $\mathrm{C}$, the change of variables corresponds to making $Q_i(x)=c_i(\mathcal{S}^*)$ - where $c_i(\mathcal{S}^*)$ is the limit of the Chern classes of the duals of the tautological subbundles of the Lagrangian Grassmannians, regarded as elements in $\invlim H^*(\F_n)$. This is the identification which was used by Pragacz (see e.g. \cite[pag. 32]{FP}) to study the cohomology of the Lagrangian Grassmannian.
\subsection{Combinatorial properties of the double Schubert polynomials} We state next
the combinatorial properties of the double Schubert polynomials $\s_w(z,t;x)$:
\begin{itemize}
\item(Basis) $\s_w(z,t;x)$ form a $\Z[t]$-basis of $\R_\infty$;
\item(Symmetry) $\s_w(z,t;x) = \s_{w^{-1}}(-t,-z;x)$;
\item(Positivity) The double Schubert polynomial $\s_w(z,t;x)$ can be uniquely written as \[ \s_w(z,t;x) = \sum_{\lambda} f_\lambda(z,t) F_\lambda(x) \/, \] where the sum is over strict partitions $\lambda=(\lambda_1, \dots , \lambda_r)$ such that $\lambda_1+ \dots +\lambda_r \le \ell(w)$, $f_\lambda(z,t)$ is a homogeneous polynomial in $\N[z,-t]$,
and $F_\lambda(x)$ is the $Q$-Schur function $Q_\lambda(x)$ in type $\mathrm{C}$, respectively the $P$-Schur function $P_\lambda(x)$ in types $\mathrm{B,D}$.
For a precise combinatorial formula for the
coefficients $f_\lambda(z,t)$ see Cor. \ref{cor:typeAexpand} and Lem. \ref{lem:S00} below.
\end{itemize}
The basis property implies that we can define the structure constants
$c_{uv}^{w}(t)\in \Z[t]$ by the
expansion
$$
\mathfrak{S}_{u}(z,t;x)
\mathfrak{S}_{v}(z,t;x)
=\sum_{w \in \W_{\infty}}c_{uv}^{w}(t)\mathfrak{S}_{w}(z,t;x).
$$ These coincide with the structure constants in equivariant cohomology of $\F_n$, written in a stable form. The same phenomenon happens in \cite{LS,BH}.
\subsection{Grassmannian Schubert classes}
To each strict partition $\lambda$ one can associate a {\em Grassmannian element} $w_\lambda \in \W_\infty$.
Geometrically these arise as the elements
in $\W_{\infty}$ which index the pull-backs of the Schubert classes from the appropriate Lagrangian or Orthogonal Grassmannian, via the natural projection from the flag variety. For the Lagrangian Grassmannian,
the first author \cite{Ik} identified the equivariant Schubert
classes with the factorial analogues
of Schur $Q$-function defined by Ivanov \cite{Iv}.
This result was extended
to the maximal isotropic
Grassmannians of orthogonal types $\mathrm{B}$ and $\mathrm{D}$
by Ikeda and Naruse \cite{IN}.
See \S \ref{sec:Schur} for the definition
of Ivanov's functions
$Q_{\lambda}(x|t),P_{\lambda}(x|t).$
We only mention here that if
all $t_{i}=0$ they coincide with the ordinary $Q$ or $P$
functions;
in that case, these results recover Pragacz's results from \cite{Pr} (see also \cite{Jo}).
We will show in Theorem \ref{PhiFacQ}
that the polynomial $\s_{w_\lambda}(z,t;x)$ coincides with $Q_\lambda(x|t)$
or $P_\lambda(x|t)$, depending on the type at hand.
In particular, the double Schubert polynomials
for the Grassmannian elements
are Pfaffians - this is a {\em Giambelli formula} in this case.
\subsection{Longest element formulas}
Next we present the combinatorial formula for the double Schubert polynomial indexed by $w_0^{(n)}$, the longest element in $\W_n$ (regarded as a subgroup of $\W_\infty$). This formula has a particular significance since this is the top class mentioned in the first section.
We denote by $\mathfrak{B}_{w},\mathfrak{C}_{w},
\mathfrak{D}_{w}$ the double Schubert polynomial
$\mathfrak{S}_{w}$ for types $\mathrm{B,C},$ and $\mathrm{D}$ respectively.
Note that $\mathfrak{B}_{w}=2^{-s(w)}
\mathfrak{C}_{w},$ where $s(w)$ is the number of
signs changed by $w$ (cf. \S \ref{sec:signed.permut} below).
\begin{thm}[Top classes]\label{thm:Top} The double Schubert polynomial
associated with the longest element $w_{0}^{(n)}$ in $\W_{n}$
is equal to:
\begin{enumerate}
\item
$\mathfrak{C}_{w_{0}^{(n)}}(z,t;x) =
Q_{\rho_{n}+\rho_{n-1}}
(x|t_{1},-z_{1},t_{2},-z_{2},\ldots,t_{n-1},-z_{n-1}),\label{longC}$
\item $\mathfrak{D}_{w_{0}^{(n)}}(z,t;x) =P_{2\rho_{n-1}}
(x|t_{1},-z_{1},t_{2},-z_{2},\ldots,t_{n-1},-z_{n-1}),\label{longD}$
\end{enumerate}
where $\rho_{k}=(k,k-1,\ldots,1).$
\end{thm}
\subsection{Comparison with degeneracy loci formulas}
One motivation for the present paper was to give a
geometric interpretation
to the factorial Schur $Q$-function by means of degeneracy loci formulas.
In type $\mathrm{A}$,
this problem was treated by the second author in \cite{Mi},
where the Kempf-Laksov formula for degeneracy loci
is identified with the Jacobi-Trudi type
formula for the factorial (ordinary) Schur function. To this end,
we will reprove a multi-Pfaffian expression for $\sigma_{w_\lambda}$ (see \S \ref{sec:Kazarian} below) obtained by Kazarian \cite{Ka} while studying Lagrangian degeneracy loci.
\subsection{Organization}
Section \ref{sec:EqSch} is devoted to some general
facts about the equivariant cohomology of the
flag variety.
In section \ref{sec:Classical} we fix notation concerning
root systems and Weyl groups, while in section \ref{sec:Schur} we give the definitions and some
properties of
$Q$- and $P$-Schur functions, and of their factorial analogues.
The stable (equivariant) Schubert classes
$\{\sigma_{w}^{(\infty)}\}$
and the ring $H_{\infty}$ spanned by these classes are introduced in section \ref{sec:StableSchubert}.
In section \ref{sec:UnivLoc} we define the ring of Schubert polynomials $\R_{\infty}$
and establish the isomorphism
$\Phi: \R_{\infty}\rightarrow H_{\infty}.$
In the course of the proof, we recall
the previous results on isotropic Grassmannians
(Theorem \ref{PhiFacQ}).
In section \ref{sec:WactsR} we define the left and right actions of the infinite Weyl group on the ring
$\R_{\infty},$ and then use them to define the divided
difference operators.
We also discuss the compatibility
of the actions on both $\R_{\infty}$ and $H_{\infty}$
under the isomorphism $\Phi.$
We will prove the existence and
uniqueness theorem for the double Schubert polynomials
in section \ref{sec:DSP}, along with some basic combinatorial properties
of them. The formula for the Schubert polynomials indexed by the longest Weyl group element is proved in
section \ref{sec:Long}.
Finally, in section \ref{sec:geometry} we give an alternative geometric
construction of our universal localization map $\Phi$, and
in section \ref{sec:Kazarian}, we prove the formula for $Q_{\lambda}(x|t)$
in terms of a multi-Pfaffian.
\subsection{Note}
After the present work was completed
we were informed that
A. Kirillov \cite{K} had introduced
double Schubert polynomials
of type $\mathrm{B}$ (and $\mathrm{C}$) in 1994
by using Yang-Baxter operators (cf. \cite{FK}), independently of us,
although no connection with (equivariant) cohomology had been established.
His approach is quite different from ours, nevertheless
the polynomials are the same, after a suitable identification
of variables. Details will be given elsewhere.
\bigskip
This is the full paper version of the
`extended abstract' \cite{fpsac} for the FPSAC 2008
conference held in Vi\~na del Mar, Chile, June 2008.
Some results in this paper
were announced without proof in \cite{rims}.
\subsection{Acknowledgements}
We would like to thank S.~Billey and H.~Tamvakis for stimulating conversations
that motivated this work, and S.~Kumar, K.~Kuribayashi, M.~Mimura, M.~Nakagawa, T.~Ohmoto, N.~Yagita and M.~Yasuo for helpful comments. This work was facilitated by the ``Workshop on Contemporary Schubert Calculus and Schubert Geometry'' organized at Banff in March 2007. We are grateful to the organizers J.~Carrell and F.~Sottile for inviting all the authors there.
\section{Equivariant Schubert classes of the flag variety}
\label{sec:EqSch}
\setcounter{equation}{0}
In this section we will recall some basic facts about the equivariant cohomology of the flag variety $\F=G/B$.
The main references are \cite{A} and \cite{KK} (see also \cite{Ku}).
\subsection{Schubert varieties and equivariant cohomology}\label{ssec:NotationFlag}
Let $G$ be a complex connected semisimple Lie group,
$T$ a maximal torus, $W=N_{G}(T)/T$ its Weyl group,
and $B$ a Borel subgroup such that $T\subset B.$ The flag variety is the variety $\F=G/B$ of translates of the Borel subgroup $B$, and it admits a $T$-action, induced by the left $G$-action. Each Weyl group element determines a $T$-fixed point $e_w$ in the flag variety (by taking a representative of $w$), and these are all the torus-fixed points. Let $B^{-}$ denote the opposite Borel subgroup. The {\em Schubert variety} $X_w$ is the closure of $B^- e_w$ in the flag variety; it has codimension $\ell(w)$ - the length of $w$ in the Weyl group $W$.
In general, if $X$ is a topological space with a left $T$-action, the equivariant cohomology of $X$ is the ordinary cohomology of a ``mixed space'' $(X)_T$, whose definition (see e.g.
\cite{GKM} and references therein) we recall. Let $ET
\longrightarrow BT$ be the universal $T$-bundle. The $T$-action on
$X$ induces an action on the product $ET \times X$ by $t\cdot
(e,x)=(et^{-1},tx)$. The quotient space $(X)_T=(ET \times X)/T$ is
the ``homotopic quotient'' of $X$ and the ($T$-)equivariant
cohomology of $X$ is by definition
\[ H^{i}_T(X)=H^i(X_T). \] In particular, the equivariant
cohomology of a point, denoted by $\mathcal{S}$, is equal to the
ordinary cohomology of the classifying space $BT$. If $\chi$ is a
character in $\hat{T}=Hom(T, \C^*)$ it determines a line
bundle $L_\chi: ET\times_T \C_\chi \longrightarrow BT$ where
$\C_\chi$ is the $1-$dimensional $T-$module determined by $\chi$.
It turns out that the morphism $\hat{T} \longrightarrow
H^2_T(pt)$ taking the character $\chi$ to the first Chern
class $c_1(L_\chi)$ extends to an isomorphism from the symmetric
algebra of $\hat{T}$ to $H^*_T(pt)$. Therefore, if one chooses a basis $t_1, \ldots , t_n$ for $\hat{T}$, then $\mathcal{S}$ is the polynomial ring $\Z[t_1, \ldots , t_n]$.
Returning to the situation when $X=\F$, note that $X_{w}$ is $T$-stable;
therefore its fundamental class determines
the {\em (equivariant) Schubert class} $\sigma_{w}=[X_{w}]_T$ in
$H_{T}^{2\ell(w)}(\mathcal{F}).$
It is well-known that the Schubert classes form an $H_{T}^{*}(pt)$-basis of $H_{T}^{*}(\mathcal{F}).$
\subsection{Localization map}
Denote by $\F^T= \{e_v| v \in W\}$ the set of $T$-fixed points in $\F$; the inclusion
$\iota: \mathcal{F}^{T}\hookrightarrow \mathcal{F}$
is $T$-equivariant and
induces
a homomorphism
$\iota^{*}: H_{T}^{*}(\mathcal{F})\longrightarrow
H_{T}^{*}(\mathcal{F}^{T})=\prod_{v\in W}H_{T}^{*}(e_{v}).$
We identify each $H_{T}^{*}(e_{v})$ with $\cS$ and for $\eta \in H^*_T (\F)$ we denote its localization in $H_{T}^{*}(e_{v})$ by $\eta|_v$.
Let $R^{+}$ denote the set of positive roots corresponding
to $B$ and set $R^{-}=-R^{+},
R=R^{+}\cup R^{-}.$
Each root $\alpha$ in $R$ can be
regarded as a linear
form in $\cS.$ Let $s_{\alpha}$ denote the reflection
corresponding to the root $\alpha.$ Remarkably, the localization map $\iota^*$ is injective,
and the elements $\eta=(\eta|_{v})_{v}$ in $\prod_{v\in W}\cS$ in the image of $\iota^*$
are characterized by the {\em GKM conditions} (see e.g. \cite{A}):
$$
\eta|_{v}-\eta|_{s_{\alpha}v}\;\mbox{is a multiple
of}\;\alpha
$$
for all $v$ in $W$ and all $\alpha\in R^{+}.$
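{\bf Example.} In the simplest case $G=SL_{2}(\C)$ we have $W=\{e,s_{\alpha}\},$ where $\alpha$ is the unique positive root, and the image of $\iota^{*}$ consists of the pairs $(\eta|_{e},\eta|_{s_{\alpha}})$ in $\cS\times \cS$ such that $\eta|_{e}-\eta|_{s_{\alpha}}$ is a multiple of $\alpha.$ For instance the pair $(0,\alpha)$ satisfies this condition; it is the image of the Schubert class $\sigma_{s_{\alpha}}$ (cf. Prop. \ref{xi} below).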
\subsection{Schubert classes}
We recall a characterization of the Schubert class $\sigma_{w}.$
Let $\leq$ denote the Bruhat-Chevalley ordering
on $W$; then $e_{v}\in X_{w}$ if and only if
$w\leq v.$
\begin{prop}\cite{A},\cite{KK} \label{xi}
The Schubert class $\sigma_{w}$ is characterized by the
following conditions:
\begin{enumerate}
\item $\sigma_w|_{v}$ vanishes unless $w\leq v,$
\item If $w\leq v$ then $\sigma_{w}|_{v}$ is homogeneous of degree $\ell(w)$,
\item $\sigma_w|_{w}=\prod_{\alpha\in R^{+}\cap wR^{-}}\alpha.$
\end{enumerate}
\end{prop}
\begin{prop}\label{prop:basis}
Any cohomology class $\eta$ in $H_{T}^{*}(\mathcal{F})$
can be written uniquely as an
$H_{T}^{*}(pt)$-linear
combination of $\sigma_{w}$ using only
those $w$ such that $w\geq u$ for some
$u$ with $\eta|_{u}\ne 0.$
\end{prop}
\begin{proof}
The corresponding fact
for the Grassmann variety
is proved in \cite{KnTa}.
The same proof works for the general flag variety also.
\end{proof}
\subsection{Actions of Weyl group}\label{ssec:WeylAction}
There are two actions of the Weyl group on the equivariant cohomology ring $H_{T}^{*}(\F)$, which are used to define corresponding divided-difference operators.
In this section we will follow the approach presented in \cite{Kn}. Identify $\eta \in H_{T}^{*}(\F)$ with the sequence of polynomials $(\eta|_v)_{v \in W}$ arising from the localization map. For $w \in W$ define
\[ (w^{R}\eta)|_v = \eta|_{vw}, \quad (w^{L}\eta)|_v = w \cdot (\eta|_{w^{-1}v}) \/.\]
It is proved in \cite{Kn} that these are well-defined actions on $H_{T}^{*}(\F),$ and that $w^{R}$ is $H^*_{T}(pt)$-linear, while $w^{L}$ is not (precisely because it acts on the coefficients of the polynomials).
\subsection{Divided difference operators}\label{ssec:divdiff}
For each simple root $\alpha_{i}$, we define
the {\it divided difference operators\/}
$\partial_{i}$ and $\delta_{i}$
on
$H_{T}^{*}(\mathcal{F})$
by
$$
(\partial_{i}\eta)|_{v}
=\frac{\eta|_{v}-(s_{i}^{R}\eta)|_{v}}{-v(\alpha_{i})},\quad
(\delta_{i}\eta)|_{v}=\frac{\eta|_{v}-(s_{i}^{L}\eta)|_{v}}{\alpha_{i}}\quad (v\in W).
$$
These rational functions are in fact polynomials. They satisfy the GKM conditions, and thus
give elements in $H_{T}^{*}(\mathcal{F})$
(see \cite{Kn}).
We call the operators $\partial_{i}$ (resp. $\delta_{i}$) the {\it right\/}
(resp. {\it left\/}) divided difference operators.
The operator $\partial_{i}$ was introduced
in \cite{KK}.
On ordinary cohomology, operators analogous
to the $\partial_{i}$ were introduced
independently by Bernstein et al. \cite{BGG}
and Demazure \cite{D}.
The left divided difference operators
$\delta_{i}$ were studied by Knutson in \cite{Kn}
(see also \cite{Ty}). Note that $\partial_{i}$ is $H_{T}^{*}(pt)$-linear
whereas $\delta_{i}$ is not. The next proposition was stated in \cite[Prop. 2]{Kn} (see also \cite{KK,Ty}).
\begin{prop}
\label{prop:propertiesdiv}
\begin{enumerate}
\item \label{welldef} The operators $\partial_{i}$ and $\delta_{i}$
are well-defined on the ring $H_{T}^{*}(\mathcal{F})$;
\item The left and right divided difference
operators commute with each other;
\item \label{eq:partial_sigma} We have
\begin{equation}
\partial_{i}\sigma_{w}=\begin{cases}
\sigma_{ws_{i}} &\mbox{if}\;\ell(ws_{i})=\ell(w)-1\\
0 &\mbox{if}\;\ell(ws_{i})=\ell(w)+1
\end{cases},\quad
\delta_{i}\sigma_{w}=\begin{cases}
\sigma_{s_{i}w} &\mbox{if}\;\ell(s_{i}w)=\ell(w)-1\\
0 &\mbox{if}\;\ell(s_{i}w)=\ell(w)+1
\end{cases}.\label{eq:divdiffsigma}
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof} We only prove (\ref{eq:divdiffsigma})
for $\delta_{i}$ here, as the rest is proved in \cite{Kn}.
By Prop. \ref{xi},
$(\delta_{i}{\sigma}_{w})|_{v}$ is nonzero
only for $v$ such that $v\geq w$ or $s_{i}v\geq w.$
This implies that
the element $\delta_{i}{\sigma}_{w}$
is an $H_{T}^{*}(pt)$-linear combination of
$\{{\sigma}_{v}\,|\,v\geq w\;\mbox{or}\; s_{i}v\geq w\}$
by Prop. \ref{prop:basis}.
Moreover, the classes $\sigma_{v}$ appearing in this linear combination
have degree at most $\ell(w)-1.$
Thus if $\ell(s_{i}w)=\ell(w)+1$ then
$\delta_{i}{\sigma}_{w}$ must vanish.
If $\ell(s_{i}w)=\ell(w)-1$
the only possible
term is a multiple of ${\sigma}_{s_{i}w}.$
In this case we calculate
$$
\left(\delta_{i}{\sigma}_{w}\right)|_{s_{i}w}
=\frac{{\sigma}_{w}|_{s_{i}w}
-s_{i}({\sigma}_{w}|_{w})}{\alpha_{i}}
=-\frac{s_{i}({\sigma}_{w}|_{w})}{\alpha_{i}},
$$
where we used
${\sigma}_{w}|_{s_{i}w}=0$
since $s_{i}w<w.$
Here we recall the well-known fact that
$$w>s_{i}w\Longrightarrow
s_{i}(R^{+}\cap w R^{-})=
(R^{+}\cap s_{i}w R^{-})\sqcup \{-\alpha_{i}\}.$$
So we have $$s_{i}({\sigma}_{w}|_{w})
=\prod_{\beta\in s_{i}(R^{+}\cap w R^{-})}\beta
=(-\alpha_{i})\prod_{\beta\in (R^{+}\cap s_{i}w R^{-})}\beta
=(-\alpha_{i})\cdot{\sigma}_{s_{i}w}|_{s_{i}w}.$$
By the characterization (Prop. \ref{xi}),
we have $\delta_{i}{\sigma}_{w}=\sigma_{s_{i}w}.$
\end{proof}
\section{Classical groups}
\label{sec:Classical}
\setcounter{equation}{0}
In this section, we fix the notation
for the root systems and Weyl groups
of the classical groups
used throughout the paper.
\subsection{Root systems}\label{ssection:root_systems} Let $G_n$ be the classical Lie group of one of the types $\mathrm{B}_n,\mathrm{C}_n$ or $\mathrm{D}_n$, i.e. the symplectic group $Sp(2n,\C)$ in type $\mathrm{C}_n$, the odd orthogonal group $SO(2n+1,\C)$
in type $\mathrm{B}_n$, and the even orthogonal group $SO(2n,\C)$ in type $\mathrm{D}_n$.
Correspondingly we have the set $R_{n}$ of
roots, and the set of simple roots. These are subsets of
the character group
$\hat{T}_{n}=\bigoplus_{i=1}^{n}\Z t_{i}$
of $T_{n},$ the maximal torus of $G_{n}.$
The positive roots $R_{n}^{+}$
(the negative roots being $R_{n}^{-}:=-R_{n}^{+}$)
are given by
\begin{eqnarray*}
\mbox{Type}\; \mathrm{B}_{n}:&
R_{n}^{+}&=\{t_{i}\;|\; 1\leq i\leq n\}
\cup\{t_{j}\pm t_{i}\;|\;1\leq i<j\leq n\},\\
\mbox{Type}\; \mathrm{C}_{n}:&
R_{n}^{+}&=\{2t_{i}\;|\; 1\leq i\leq n\}
\cup\{t_{j}\pm t_{i}\;|\;1\leq i<j\leq n\},\\
\mbox{Type}\; \mathrm{D}_{n}:&
R_{n}^{+}&=\{t_{j}\pm t_{i}\;|\;1\leq i<j\leq n\}.
\end{eqnarray*}
The following are the simple roots:
\begin{eqnarray*}
\mbox{Type}\; \mathrm{B}_{n}:&
\alpha_{0}&=t_{1},
\quad \alpha_{i}=t_{i+1}-t_{i}\quad(1\leq i\leq n-1) ,\\
\mbox{Type}\; \mathrm{C}_{n}:&
\alpha_{0}&=2t_{1},
\quad
\alpha_{i}=t_{i+1}-t_{i}\quad(1\leq i\leq n-1),\\
\mbox{Type}\; \mathrm{D}_{n}:&
\alpha_{\hat{1}}&=t_{1}+t_{2},
\quad \alpha_{i}=t_{i+1}-t_{i}\quad(1\leq i\leq n-1).
\end{eqnarray*}
We introduce a symmetric
bilinear form $(\cdot,\cdot)$ on
$\hat{T}_{n}\otimes_{\Z}\Q$
by $(t_{i},t_{j})=\delta_{i,j}.$
The simple {\it coroots\/} $\alpha_{i}^{\vee}$
are defined to be $\alpha_{i}^{\vee}=2\alpha_{i}/(\alpha_{i},\alpha_{i}).$
Let $\omega_{i}$ denote the {\it fundamental weights}, i.e. those elements in $\hat{T}_{n}\otimes_{\Z}\Q$ such that $(\omega_{i},\alpha_{j}^{\vee})=\delta_{i,j}$.
They are explicitly given as follows:
\begin{eqnarray*}
\mbox{Type}\; \mathrm{B}_{n}:&
\omega_{0}&={\textstyle\frac{1}{2}}\left(t_{1}+t_{2}+\cdots+t_{n}\right),
\quad \omega_{i}=t_{i+1}+\cdots+t_{n}\quad(1\leq i\leq n-1),\\
\mbox{Type}\; \mathrm{C}_{n}:&
\omega_{i}&=t_{i+1}+\cdots+t_{n}\quad(0\leq i\leq n-1),\\
\mbox{Type}\; \mathrm{D}_{n}:&
\omega_{\hat{1}}&={\textstyle\frac{1}{2}}\left(t_{1}+t_{2}+\cdots+t_{n}\right),\quad
\omega_{1}={\textstyle\frac{1}{2}}\left(-t_1+t_2+\cdots+t_n\right),\\
&\omega_{i}&=t_{i+1}+\cdots+t_{n}\quad(2\leq i\leq n-1).
\end{eqnarray*}
\subsection{Weyl groups}\label{ssec:Weyl}
Set $I_{\infty}=\{0,1,2,\ldots\}$ and $I_{\infty}'
=\{\hat{1},1,2,\ldots\}.$
We define the Coxeter
group $(W_{\infty},I_{\infty})$
(resp. $(W_{\infty}',I_{\infty}')$)
of infinite rank, and its finite {\it parabolic\/} subgroup
$W_{n}$
(resp. $W'_{n}$)
by the following Coxeter graphs:
\noindent
$\mathrm{C}_{n}\subset \mathrm{C}_{\infty}$ ($\mathrm{B}_{n}\subset \mathrm{B}_{\infty}$)
\setlength{\unitlength}{0.4mm}
\begin{center}
\begin{picture}(200,25)
\thicklines
\put(0,15){$\circ$}
\put(4,16.5){\line(1,0){12}}
\put(4,18.5){\line(1,0){12}}
\multiput(15,15)(15,0){4}{
\put(0,0){$\circ$}
\put(4,2.4){\line(1,0){12}}}
\put(75,15){$\circ$}
\put(120,10)
{\put(0,5){$\circ$}
\put(4,6.5){\line(1,0){12}}
\put(4,8.5){\line(1,0){12}}
\multiput(15,5)(15,0){5}{
\put(0,0){$\circ$}
\put(4,2.4){\line(1,0){12}}}
\put(90,5){$\circ$}}
\put(0,8){\tiny{$s_0$}}
\put(15,8){\tiny{$s_1$}}
\put(30,8){\tiny{$s_2$}}
\put(72,8){\tiny{$s_{n-1}$}}
\put(120,8){\tiny{$s_0$}}
\put(135,8){\tiny{$s_1$}}
\put(150,8){\tiny{$s_2$}}
\put(190,8){\tiny{$s_{n-1}$}}
\put(210,8){\tiny{$s_{n}$}}
\put(170,8){\tiny{$\cdots$}}
\put(50,8){\tiny{$\cdots$}}
\put(95,13){$\hookrightarrow$}
\put(214.5,17.5){\line(1,0){10}}
\put(226,15){$\cdots$}
\end{picture}
\end{center}
$\mathrm{D}_{n}\subset \mathrm{D}_{\infty}$
\setlength{\unitlength}{0.4mm}
\begin{center}
\begin{picture}(200,20)
\thicklines
\put(0,25){$\circ$}
\put(0,5){$\circ$}
\put(4,26){\line(3,-2){12}}
\put(4,8.5){\line(3,2){12}}
\multiput(15,15)(15,0){4}{
\put(0,0){$\circ$}
\put(4,2.4){\line(1,0){12}}}
\put(75,15){$\circ$}
\put(120,0)
{\put(0,25){$\circ$}
\put(0,5){$\circ$}
\put(4,26){\line(3,-2){12}}
\put(4,8.5){\line(3,2){12}}
\multiput(15,15)(15,0){4}{
\put(0,0){$\circ$}
\put(4,2.4){\line(1,0){12}}}
\put(75,15){$\circ$}
\put(120,10)}
\put(0,20){\tiny{$s_{\hat{1}}$}}
\put(0,0){\tiny{$s_1$}}
\put(15,8){\tiny{$s_2$}}
\put(30,8){\tiny{$s_3$}}
\put(72,8){\tiny{$s_{n-1}$}}
\put(120,20){\tiny{$s_{\hat{1}}$}}
\put(120,0){\tiny{$s_1$}}
\put(135,8){\tiny{$s_2$}}
\put(150,8){\tiny{$s_3$}}
\put(190,8){\tiny{$s_{n-1}$}}
\put(210,8){\tiny{$s_{n}$}}
\put(170,8){\tiny{$\cdots$}}
\put(50,8){\tiny{$\cdots$}}
\put(95,13){$\hookrightarrow$}
\put(199,17.5){\line(1,0){12}}
\put(210,15){$\circ$}
\put(214.5,17.5){\line(1,0){10}}
\put(226,15){$\cdots$}
\end{picture}
\end{center}
More explicitly,
the group $W_{\infty}$ (resp. $W_{\infty}'$ ) is
generated by the simple reflections
$s_{i}\,(i\in I_{\infty})$ (resp. $s_{i}\,(i\in I_{\infty}')$)
subject to the relations:
\begin{equation}
\begin{cases}s_{i}^{2}=e\;(i\in I_{\infty})\\
s_{0}s_{1}s_{0}s_{1}=s_{1}s_{0}s_{1}s_{0}\\
s_{i}s_{i+1}s_{i}=s_{i+1}s_{i}s_{i+1}
\; (i\in I_{\infty}\setminus \{0\})\\
s_{i}s_{j}=s_{j}s_{i}\;(|i-j|\geq 2)
\end{cases},\quad
\begin{cases}
s_{i}^{2}=e\;( i\in I_{\infty}')\\
s_{\hat{1}}s_{2}s_{\hat{1}}=s_{2}s_{\hat{1}}s_{2}\\
s_{i}s_{i+1}s_{i}=s_{i+1}s_{i}s_{i+1}
\; (i\in I_{\infty}'\setminus \{\hat{1}\})\\
s_{\hat{1}}s_{i}=s_{i}s_{\hat{1}}\;(i\in I_{\infty}',\;i\ne 2)\\
s_{i}s_{j}=s_{j}s_{i}\;(\;i,j\in I_{\infty}'\setminus \{\hat{1}\},
\,|i-j|\geq 2)
\end{cases}.\label{eq:Wrel}
\end{equation}
For general facts on Coxeter groups, we refer to
\cite{BjBr}.
Let $\leq $ denote the {\it Bruhat-Chevalley order\/}
on $W_{\infty}$ or $W_{\infty}'.$ The {\it length\/} $\ell(w)$ of $w\in W_{\infty}$ (resp. $w\in W'_{\infty}$) is the number of simple reflections
in any reduced expression of $w$.
The subgroups $W_{n}
\subset W_{\infty}$,
$W_{n}'\subset W_{\infty}'$ are
the Weyl groups of the following types:
$$
\mbox{Type}\; B_{n},C_{n}:
W_{n}=\langle s_{0},s_{1},s_{2},\ldots,s_{n-1}
\rangle,\quad
\mbox{Type}\; D_{n}:
W_{n}'=\langle s_{\hat{1}},s_{1},s_{2},\ldots,s_{n-1}\rangle.
$$
It is known that the inclusion $W_{n}\subset W_{\infty}$
(resp. $W_{n}'\subset W_{\infty}'$) preserves the length and the Bruhat-Chevalley order, while the inclusions $W_{\infty}'\subset W_{\infty}$
and $W_{n}'\subset W_{n}$ do not (in the terminology of \cite{BjBr} this says that $W_n$ is a {\em parabolic subgroup} of $W_\infty$, while $W'_\infty$ is not). From now on, whenever possible, we will employ the notation explained in \S \ref{ssec:infinite_weyl}, and use the bold fonts $\W_\infty$ and $\W_n$ to make uniform statements.
\subsection{Signed permutations}\label{sec:signed.permut}
The group $W_\infty$
is identified with the set of all permutations $w$ of the set $\{1,2,\ldots\}\cup \{\bar{1},\bar{2},\ldots\}$ such that $w(i)\ne i$ for only finitely many $i$, and $\overline{w(i)}=w(\bar{i})$ for all $i$. These can also be considered as
signed (or barred) permutations of $\{1,2,\ldots\}$; we often use the one-line notation
$w=(w(1),w(2),\ldots)$ to denote an element $w\in W_{\infty}$. The simple reflections are
identified with the transpositions
$s_{0}=(1,\bar{1})$ and $s_{i}=(i+1,i)(\overline{i},\overline{i+1})$ for $i\geq 1.$
The subgroup $W_n\subset W_{\infty}$
is described as
$$
W_{n}=\{w\in W_{\infty}\;|\; w(i)=i\;\mbox{for} \; i>n\}.
$$
In one-line notation,
we often denote
an element $w\in W_{n}\subset W_{\infty}$
by the finite sequence
$(w(1),\ldots,w(n)).$
The group $W_{\infty}'$, as a (signed) permutation group,
can be realized
as the subgroup of $W_{\infty}$
consisting of elements with an even number of sign changes.
The simple reflection $s_{\hat{1}}$ is identified with $s_{0}s_{1}s_{0}
\in W_{\infty}$, so as a permutation
$s_{\hat{1}}=(1,\bar{2})(2,\bar{1}).$
\subsection{Grassmannian elements}
An element $w\in W_{\infty}$ is
a {\it Grassmannian element\/}
if $$w(1)<w(2)<\cdots<w(i)<\cdots$$ in the ordering
$ \cdots<\bar{3}<\bar{2}<\bar{1}<1<2<3<\cdots
.$
Let $W_{\infty}^{0}$ denote the set of all Grassmannian elements in $W_{\infty}.$
For $w\in W_{\infty}^{0}$, let $r$ be the number such that
\begin{equation} w(1)<\cdots<w(r)<1\quad\mbox{and}
\quad \bar{1}<w(r+1)<w(r+2)<\cdots.
\label{eq:GrassPerm}
\end{equation}
Then we define the $r$-tuple of positive integers
$\lambda=(\lambda_{1},\ldots,\lambda_{r})$ by
$ \lambda_{i}=\overline{w(i)}$
for $1\leq i\leq r.$
This is a {\it strict partition\/}
i.e. a partition with distinct parts:
$\lambda_{1}>\cdots>\lambda_{r}>0.$
Let $\mathcal{SP}$ denote the set of all
strict partitions.
The correspondence gives a bijection
$
W_{\infty}^{0}\longrightarrow
\mathcal{SP}.
$
We denote by $w_{\lambda}\in W_{\infty}^{0}$
the Grassmannian element corresponding
to $\lambda\in \mathcal{SP}$; then
$\ell(w_{\lambda})=|\lambda|=\sum_{i}\lambda_{i}$.
Note that this bijection preserves the partial order,
when $\mathcal{SP}$ is partially ordered
by the inclusion $\lambda\subset \mu$
of strict partitions.
We denote by $W_{\infty}^{\hat{1}}$
the set of all
Grassmannian elements contained in $W_{\infty}'.$
For $w\in W_{\infty}^{\hat{1}},$
the number $r$ in (\ref{eq:GrassPerm}) is always even.
Define the strict partition
$\lambda'=(\lambda'_{1}>\cdots>\lambda'_{r}\geq 0)$ by setting $ \lambda'_{i}=\overline{w(i)}-1$
for $1\leq i\leq r.$
Note that $\lambda'_{r}$ can be zero
this time.
This correspondence also gives a
bijection $
W_{\infty}^{\hat{1}}
\longrightarrow
\mathcal{SP}.$
We denote by $w'_{\lambda}\in W_{\infty}^{\hat{1}}$
the element corresponding
to $\lambda\in \mathcal{SP}.$
As before, $\ell(w'_{\lambda})=|\lambda|$ where
$\ell(w)$ denotes the length of $w$ in $W_\infty'$.
\bigskip
{\bf Example.} Let $\lambda=(4,2,1).$
Then the corresponding Grassmannian elements are given by
$w_{\lambda}=\bar{4}\bar{2}\bar{1}3
=s_{0}s_{1}s_{0}s_{3}s_{2}s_{1}s_{0}$
and $w'_{\lambda}=\bar{5}\bar{3}\bar{2}\bar{1}4
=s_{\hat{1}}s_{2}s_{1}s_{4}s_{3}s_{2}s_{\hat{1}}.$
\bigskip
The group $W_{\infty}$ (resp. $W_{\infty}'$)
has a parabolic subgroup
generated by $s_{i}\;(i\in I_{\infty}\setminus \{0\})$
(resp. $s_{i}\,(i\in I'_{\infty}\setminus \{\hat{1}\})$).
In both cases we denote this subgroup by $S_{\infty}
=\langle s_{1},s_{2},\ldots\rangle$,
since it is isomorphic to the infinite Weyl group of type $\mathrm{A}.$
The product map
$$
W_{\infty}^{0}\times S_{\infty}
\longrightarrow W_{\infty}
\quad(\mbox{resp.}\;
W_{\infty}^{\hat{1}}\times S_{\infty}
\longrightarrow W'_{\infty}),
$$
given by $(u,w)\mapsto uw$
is a bijection satisfying $\ell(uw)=\ell(u)+\ell(w)$
(cf. \cite[Prop. 2.4.4]{BjBr}).
As a consequence, $w_{\lambda}$ (resp. $w_{\lambda}'$) is the unique element
of minimal length
in the left coset $w_{\lambda}S_{\infty}$
(resp. $w_{\lambda}'S_{\infty}$).
\section{Schur's $Q$-functions and their factorial
analogues}\label{sec:Schur}
\setcounter{equation}{0}
\subsection{Schur's $Q$-functions}
Our main reference for symmetric functions is
\cite{Mac}.
Let $x=(x_{1},x_{2},\ldots)$
be an infinite sequence of
indeterminates.
Define $Q_i(x)$ as the coefficient of $u^i$ in
the generating function
$$f(u)=
\prod_{i=1}^{\infty}
\frac{1+x_{i}u}{1-x_{i}u}
=\sum_{k\geq 0}Q_{k}(x)u^{k}.
$$
Note that $Q_{0}=1.$
Define $\Gamma$ to be $\Z[Q_1(x),Q_2(x),\ldots].$
The identity
$f(u)f(-u)=1$ yields
\begin{equation}\label{eq:quadraticQ}
Q_{i}(x)^{2}+2\sum_{j=1}^{i}(-1)^{j}Q_{i+j}(x)Q_{i-j}(x)=0
\quad \mbox{for}\quad i\geq 1.
\end{equation}
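For instance, taking $i=1$ in (\ref{eq:quadraticQ}) gives $Q_{1}(x)^{2}-2Q_{2}(x)Q_{0}(x)=0,$ that is, $Q_{1}(x)^{2}=2Q_{2}(x);$ in particular the generators $Q_{k}(x)$ of $\Gamma$ are not algebraically independent.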
It is known that the ideal of relations among the functions $Q_k(x)$ is
generated by the relations (\ref{eq:quadraticQ}).
For $i\geq j\geq 0,$ define elements
$$
Q_{i,j}(x):=Q_{i}(x)Q_{j}(x)+2\sum_{k=1}^{j}(-1)^{k}
Q_{i+k}(x)Q_{j-k}(x).
$$
Note that $Q_{i,0}(x)=Q_{i}(x)$
and $Q_{i,i}(x)\,(i\geq 1)$ is identically zero.
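{\bf Example.} For $(i,j)=(2,1)$ the definition reads
$$
Q_{2,1}(x)=Q_{2}(x)Q_{1}(x)-2Q_{3}(x)Q_{0}(x)=Q_{2}(x)Q_{1}(x)-2Q_{3}(x).
$$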
For $\lambda$ a strict partition
we write $\lambda=(\lambda_{1}>\lambda_{2}>\cdots
>\lambda_{r}\geq 0)$ with $r$ even.
Then the corresponding
Schur's $Q$-function
$Q_{\lambda}=Q_{\lambda}(x)$ is defined by
$$
Q_{\lambda}(x)=\mathrm{Pf}(Q_{\lambda_{i},\lambda_{j}}(x))_{1\leq i<j\leq r},
$$
where $\mathrm{Pf}$ denotes the Pfaffian.
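{\bf Example.} When $r=2$ the Pfaffian of a $2\times 2$ skew-symmetric matrix is its upper off-diagonal entry, so that $Q_{(\lambda_{1},\lambda_{2})}(x)=Q_{\lambda_{1},\lambda_{2}}(x).$ A strict partition with an odd number of nonzero parts is padded with a zero part; for instance $\lambda=(3)$ is written as $(3,0),$ and $Q_{(3)}(x)=Q_{3,0}(x)=Q_{3}(x),$ consistent with the one-row case.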
It is known that the functions
$Q_\lambda(x)$ for $\lambda\in \mathcal{SP}$ form
a $\Z$-basis of $\Gamma.$
The {\em $P$-Schur function} is defined to be $P_{\lambda}(x)=2^{-\ell(\lambda)}Q_{\lambda}(x)$ where $\ell(\lambda)$ is the number of non-zero parts in $\lambda.$
The next lemma shows that every element of $\Gamma$, in particular each $Q$-Schur function, is {\em supersymmetric\/}.
\begin{lem}\label{lem:super}
Each element
$\varphi(x)$ in $\Gamma$
satisfies
$$
\varphi(t,-t,x_{1},x_{2},\ldots)
=\varphi(x_{1},x_{2},\ldots)
$$
where $t$ is an indeterminate.
\end{lem}
\begin{proof}
It suffices to show this for the ring generators
$Q_{i}(x).$ This follows immediately from the generating function.
\end{proof}
\subsection{Factorial $Q$ and $P$-Schur functions}\label{ssec:FacSchur} In this section we recall the definition and some properties of the factorial $Q$-Schur and $P$-Schur functions defined by V.N. Ivanov in \cite{Iv}. Fix an integer $n \ge 1$, a strict partition $\lambda$ of length $r \le n$, and an infinite sequence $a=(a_i)_{i \ge 1}$. By $(x|a)^k$ we denote the product $(x-a_1)\cdots (x-a_k)$. According to \cite[Def. 2.10]{Iv} the factorial $P$-Schur function $P_\lambda^{(n)}(x_1, \ldots, x_n|a)$ is defined by: \[ P_\lambda^{(n)}(x_1, \ldots, x_n|a) = \frac{1}{(n-r)!} \sum_{w \in S_n} w\cdot \bigl( \prod_{i=1}^r (x_i|a)^{\lambda_i} \prod_{i \le r, i < j \le n} \frac{x_i+x_j}{x_i-x_j} \bigr) \/, \] where $w$ acts on the variables $x_i$. If $a_1=0$ this function is stable, i.e. $P_\lambda^{(n+1)}(x_1, \ldots, x_n,0|a) = P_\lambda^{(n)}(x_1, \ldots, x_n|a)$, therefore there is a well-defined limit denoted $P_\lambda(x|a)$. It was proved in \cite[Prop. 8]{IN} that if $a_1 \neq 0$ then $P_\lambda^{(n)}(x_1, \ldots, x_n|a)$ is stable modulo $2$, i.e. $P_\lambda^{(n+2)}(x_1, \ldots, x_n,0,0|a)=P_\lambda^{(n)}(x_1, \ldots, x_n|a)$; so in this case there are well-defined even and odd limits. From now on we will denote by $P_\lambda(x|a)$ the {\em even} limit of these functions. Define also the factorial $Q$-Schur function $Q_\lambda(x|a)$ to be \[ Q_\lambda(x|a) = 2^{\ell(\lambda)}P_\lambda(x|0,a) \/,\] where $\ell(\lambda)$ is the number of non-zero parts of $\lambda$. As explained in \cite{IN}, the case $a_1 \neq 0$ is needed to study type $\mathrm{D}$; in types $\mathrm{B,C}$, the case $a_1=0$ will suffice. For simplicity, we will also denote $P_\lambda^{(n)}(x_1, \ldots, x_n|a)$ by $P_\lambda(x_1, \ldots, x_n|a)$.
Let now $t=(t_{1},t_{2},\ldots)$ be indeterminates. Define:
$$
t_{\lambda}=
(t_{\lambda_{1}},\ldots,
t_{\lambda_{r}},0,0,\ldots),
\quad
t_{\lambda}'=\begin{cases}
(t_{\lambda_{1}+1},\ldots,
t_{\lambda_{r}+1},0,0,\ldots) &\mbox{if}\;r\;\mbox{is even}\\
(t_{\lambda_{1}+1},\ldots,
t_{\lambda_{r}+1},t_{1},0,\ldots)&\mbox{if}\;r\;\mbox{is odd}
\end{cases}.
$$
Let also $w_{\lambda}\in W_{\infty}^{0}$ (resp. $w_{\lambda}'\in W_{\infty}^{\hat{1}}$) be the Grassmannian element corresponding
to $\lambda\in\mathcal{SP}.$
We associate to $\lambda$ its
{\it shifted Young diagram} $Y_\lambda$
as the set of {\it boxes}
with coordinates $(i,j)$
such that $1\leq i\leq r$ and $i\leq j\leq i+\lambda_i-1.$
We set $\lambda_j=0$ for $j>r$ by convention.
Define
$$
H_{\lambda}(t)=\prod_{(i,j)\in Y_{\lambda}}
(t_{\overline{w_{\lambda}(i)}}
+t_{\overline{w_{\lambda}(j)}}),\quad
H_{\lambda}'(t)=\prod_{(i,j)\in Y_{\lambda}}
(t_{\overline{w_{\lambda}'(i)}}
+t_{\overline{w_{\lambda}'(j+1)}}).
$$
{\bf Example.} Let $\lambda=(3,1).$
Then $w_{\lambda}=\bar{3}\bar{1}2,$
$w_{\lambda}'=\bar{4}\bar{2}13,$ and
$$H_{\lambda}(t)=4t_{1}t_{3}(t_{3}+t_{1})(t_{3}-t_{2}),\quad
H_{\lambda}'(t)=(t_{4}+t_{2})(t_{4}-t_{1})(t_{4}-t_{3})(t_{2}-t_{1}).$$
\setlength{\unitlength}{0.6mm}
\begin{center}
\begin{picture}(200,45)
\put(-25,30){Type $\mathrm{C}$}
\put(5,35){\line(1,0){60}}
\put(5,25){\line(1,0){60}}
\put(25,15){\line(1,0){20}}
\put(5,25){\line(0,1){10}}
\put(25,15){\line(0,1){20}}
\put(45,15){\line(0,1){20}}
\put(65,25){\line(0,1){10}}
\put(11,28){\small{$2t_{3}$}}
\put(31,18){\small{$2t_{1}$}}
\put(27,28){\small{$t_{3}\!+\!t_{1}$}}
\put(47,28){\small{$t_{3}\!-\!t_{2}$}}
\put(75,30){Type $\mathrm{D}$}
\put(105,35){\line(1,0){60}}
\put(105,25){\line(1,0){60}}
\put(125,15){\line(1,0){20}}
\put(105,25){\line(0,1){10}}
\put(125,15){\line(0,1){20}}
\put(145,15){\line(0,1){20}}
\put(165,25){\line(0,1){10}}
\put(107,28){\small{$t_{4}\!+\!t_{2}$}}
\put(127,18){\small{$t_{2}\!-\!t_{1}$}}
\put(127,28){\small{$t_{4}\!-\!t_{1}$}}
\put(147,28){\small{$t_{4}\!-\!t_{3}$}}
\put(0,28){\small{$3$}}
\put(0,18){\small{$1$}}
\put(13,38){\small{$3$}}
\put(33,38){\small{$1$}}
\put(53,38){\small{$\bar{2}$}}
\put(100,28){\small{$4$}}
\put(100,18){\small{$2$}}
\put(113,38){\small{${2}$}}
\put(133,38){\small{$\bar{1}$}}
\put(153,38){\small{$\bar{3}$}}
\put(5,5){$w_{\lambda}=s_{0}s_{2}s_{1}s_{0}=\bar{3}\bar{1}2$}
\put(105,5){$w_{\lambda}'=s_{1}s_{3}s_{2}s_{\hat{1}}=\bar{4}\bar{2}13$}
\end{picture}
\end{center}
\begin{prop} (\cite{Iv})
For any strict partition $\lambda$,
the factorial $Q$-Schur function $Q_{\lambda}(x|t)$
(resp. $P_{\lambda}(x|t)$)
satisfies the following properties:
\begin{enumerate}
\item $Q_{\lambda}(x|t)$ (resp. $P_{\lambda}(x|t)$) is
homogeneous of degree $|\lambda|=\sum_{i=1}^{r}\lambda_{i},$
\item $Q_{\lambda}(x|t)=Q_{\lambda}(x)+\mbox{lower order terms in}\;x$\\
(resp. $P_{\lambda}(x|t)=P_{\lambda}(x)+\mbox{lower order terms in}\;x$),
\item $Q_{\lambda}(t_{\mu}|t)=0$ (resp. $P_{\lambda}(t_{\mu}'|t)=0$) unless $\lambda\subset \mu,$
\item $Q_{\lambda}(t_{\lambda}|t)=
H_{\lambda}(t)$
(resp. $P_{\lambda}(t'_{\lambda}|t)=
H'_{\lambda}(t)$).
\end{enumerate}
Moreover $Q_{\lambda}(x|t)$
(resp. $P_{\lambda}(x|t)$)
($\lambda\in \mathcal{SP}$)
form a $\Z[t]$-basis of $\Z[t]\otimes_{\Z}
\Gamma$ (resp.
$\Z[t]\otimes_{\Z}
\Gamma'$).
\end{prop}
\begin{proof} In the case $t_1=0$ this was proved in \cite[Thm. 5.6]{Iv}. If $t_1 \neq 0$, identity (3) follows from the definition (cf. \cite[Prop.~9]{IN}), while (4) follows from a standard computation.\end{proof}
\begin{remark}{\rm The statement in the previous proposition can be strengthened by showing that the properties (1)-(4) characterize the factorial $Q$-Schur (respectively $P$-Schur) functions. For $t_1=0$ this was shown in \cite[Thm.~5.6]{Iv}. A similar proof can be given for $t_1 \neq 0$, but it also follows from Thm. \ref{thm:isom} below. The characterization statement will not be used in this paper.} \end{remark}
\begin{remark}{\rm The function
$Q_{\lambda}(x|t)$ actually belongs to
$\Gamma\otimes_{\Z}\Z[t_{1},t_{2},\ldots,t_{\lambda_{1}-1}]$, and $P_{\lambda}(x|t)$ to
$\Gamma'\otimes_{\Z}\Z[t_{1},t_{2},\ldots,t_{\lambda_{1}}]$. For example we have
$$
Q_{i}(x|t)=\sum_{j=0}^{i-1}(-1)^{j}
e_{j}(t_{1},\ldots,t_{i-1})Q_{i-j}(x),\quad
P_{i}(x|t)=\sum_{j=0}^{i-1}(-1)^{j}
e_{j}(t_{1},\ldots,t_{i})P_{i-j}(x).
$$}
\end{remark}
\begin{remark}
{\rm
An alternative formula for $Q_\lambda(x|t)$, in terms of a {\em multi-Pfaffian}, will be given below in \S \ref{sec:Kazarian}}.
\end{remark}
The following proposition will only be used within the proof of the formula for the Schubert polynomial for the longest element in each type, presented in \S \ref{sec:Long} below.
\begin{prop} [\cite{Iv}]\label{prop:Pf}
Let $\lambda=(\lambda_{1}>\cdots>\lambda_{r}\geq 0)$ be a strict partition
with $r$ even. Then
$$
Q_{\lambda}(x|t)=\mathrm{Pf}\left(
Q_{\lambda_{i},\lambda_{j}}(x|t)
\right)_{1\leq i<j\leq r},\quad
P_{\lambda}(x|t)=\mathrm{Pf}\left(
P_{\lambda_{i},\lambda_{j}}(x|t)
\right)_{1\leq i<j\leq r}.
$$
\end{prop}
\begin{proof} Again, for $t_1=0$, this was proved in \cite[Thm.3.2]{Iv}, using the approach described in \cite[III.8 Ex.13]{Mac}. The same approach works in general, but for completeness we briefly sketch an argument. Lemma \ref{injective} below shows that there is an injective universal localization map $\Phi: \Z[z] \otimes \Z[t] \otimes \Gamma' \to \prod_{w \in W_\infty'} \Z[t]$. The image of $P_\lambda(x|t)$ is completely determined by the images at Grassmannian Weyl group elements $w_\mu'$ and it is given by $P_\lambda(t'_\mu|t)$. But by the results from \cite[\S 10]{IN} we have that $P_\lambda(t'_\mu|t)= \mathrm{Pf}( P_{\lambda_i, \lambda_j}(t'_\mu|t))_{1 \le i< j \le r}$. The result follows by injectivity of $\Phi$.\end{proof}
We record here the following formula, which will be used later. The proof is a standard computation (see e.g. the proof of \cite[Thm. 8.4]{Iv}).
\begin{lem}
We have
\begin{equation}
P_{k,1}(x|t)=
P_{k}(x|t)P_{1}(x|t)
-P_{k+1}(x|t)-(t_{k+1}+t_{1})P_{k}(x|t)\quad
\mbox{for}\quad k\geq 1.\label{eq:P2row}
\end{equation}
\end{lem}
\subsection{Factorization formulae}
In this section we present several
{\em factorization} formulas for the factorial $P$ and $Q$-Schur functions,
which will be used later in \S \ref{sec:Long}. To this end, we first consider the case of ordinary factorial Schur functions.
\subsubsection{Factorial Schur polynomials}
Let $\lambda=(\lambda_{1},\ldots,\lambda_{n})$
be a partition.
Define the {\it factorial
Schur polynomial\/} by
$$s_{\lambda}(x_{1},\ldots,x_{n}|t)
=\frac{\det((x_{j}|t)^{\lambda_{i}+n-i})_{1\leq i,j\leq n}}{\prod_{1\leq i<j\leq n}(x_{i}-x_{j})} \/, $$
where $(x|t)^{k}$ denotes $\prod_{i=1}^{k}(x-t_{i})$.
It turns out that $s_{\lambda}(x|t)$ is an element
in
$\Z[x_{1},\ldots,x_{n}]^{S_{n}}
\otimes \Z[t_{1},\ldots,t_{\lambda_{1}+n-1}].$
For some basic properties of these polynomials,
the reader can consult \cite{MS}.
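{\bf Example.} For $n=2$ and $\lambda=(1,0)$ the exponents $\lambda_{i}+n-i$ are $2$ and $0$, so the definition gives
$$
s_{(1)}(x_{1},x_{2}|t)=\frac{(x_{1}|t)^{2}-(x_{2}|t)^{2}}{x_{1}-x_{2}}
=x_{1}+x_{2}-t_{1}-t_{2},
$$
which recovers the ordinary Schur polynomial $s_{(1)}(x_{1},x_{2})=x_{1}+x_{2}$ upon setting $t=0.$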
The following formula will be used to prove
Lem. \ref{lem:piDelta} in \S \ref{sec:Long}.
\begin{lem}\label{lem:A-long} We have
$
s_{\rho_{n-1}}(t_{1},\ldots,t_{n}|t_{1},-z_{1},t_{2},-z_{2},
\ldots,
t_{n-1},-z_{n-1})=\prod_{1\leq i<j\leq n}(t_{j}+z_{i}).
$
\end{lem}
\begin{proof} When the variables $z_i,t_i$ are specialized as in the Lemma,
the matrix in the numerator becomes
lower triangular with respect to the anti-diagonal. The anti-diagonal entry in the $i$-th row
is given by
$\prod_{j=1}^{i-1}(t_{i}-t_{j})(t_{i}+z_{j}).$
The Lemma follows immediately from this.
\end{proof}
The next formula is a version of Lem. \ref{lem:A-long}
which will be used in the proof of Lem. \ref{lem:piDeltaD}.
\begin{lem}\label{lem:A-longOdd}
If $n$ is odd
then we have
$$
s_{\rho_{n-1}+1^{n-1}}(t_{2},\ldots,t_{n}|t_{1},-z_{1},\ldots,
t_{n-1},-z_{n-1})
=\prod_{j=2}^{n}(t_{j}-t_{1})
\prod_{1\leq i<j\leq n}(t_{j}+z_{i}).
$$
\end{lem}
\begin{proof}
Similar to the proof of Lem. \ref{lem:A-long}.
\end{proof}
\subsubsection{$P$ and $Q$-Schur functions}
We need the following factorization formula.
\begin{lem}\label{lem:factorization} Let $\lambda=(\lambda_{1},\ldots,\lambda_{n})$ be a partition.
Then
we have
\begin{eqnarray*}
Q_{\rho_{n}+\lambda}(x_{1},\ldots,x_{n}|t)
&=&\prod_{i=1}^{n}2x_{i}\prod_{1\leq i< j\leq n}
(x_{i}+x_{j})\times
s_{\lambda}(x_{1},\ldots,x_{n}|t).
\end{eqnarray*}
\end{lem}
\begin{proof}
By their very definition
$$Q_{\rho_{n}+\lambda}(x_{1},\ldots,x_{n}|t)
=2^{n}\sum_{w\in S_{n}}w\left[\prod_{i=1}^{n}
x_{i}(x_{i}|t)^{\lambda_{i}+n-i}
\prod_{1\leq i<j\leq n}\frac{x_{i}+x_{j}}{x_{i}-x_{j}}
\right],
$$
where $w$ acts as permutation of the variables
$x_{1},\ldots,x_{n}.$
Since the polynomial
$\prod_{i=1}^{n}x_{i}
\prod_{1\leq i<j\leq n}(x_{i}+x_{j})$ inside the brackets
is symmetric in $x$, the last expression factorizes into
$$
2^{n}\prod_{i=1}^{n}x_{i}
\prod_{1\leq i<j\leq n}(x_{i}+x_{j})\times
\sum_{w\in S_{n}}w\left[\prod_{i=1}^{n}
(x_{i}|t)^{\lambda_{i}+n-i}
\prod_{1\leq i<j\leq n}({x_{i}-x_{j}})^{-1}
\right].$$
Then by the definition of $s_{\lambda}(x_{1},\ldots,x_{n}|t)$
we have the lemma.
\end{proof}
The following two lemmas are proved in the same way:
\begin{lem}\label{lem:factorizationD} Assume $n$ is even.
Let
$\lambda=(\lambda_{1},\ldots,\lambda_{n})$
be a partition.
Then we have
$$
P_{\rho_{n-1}+\lambda}
(x_{1},\ldots,x_{n}|t)
=
\displaystyle{
\prod_{1\leq i<j\leq n}}(x_{i}+x_{j})
\cdot s_{\lambda}(x_{1},\ldots,x_{n}|t).
$$
\end{lem}
\begin{lem}\label{lem:factorizationDodd}
Assume $n$ is odd. Let
$\lambda=(\lambda_{1},\ldots,\lambda_{n-1})$
be a partition.
Then we have
$$
P_{\rho_{n-1}+\lambda}(x_{1},\ldots,x_{n-1}|t)
=
\prod_{1\leq i<j\leq n-1}(x_{i}+x_{j})
\times s_{\lambda+1^{n-1}}
(x_{1},\ldots,x_{n-1}|t).
$$
\end{lem}
\section{Stable Schubert classes}\label{sec:StableSchubert}
\setcounter{equation}{0}
The aim of this section is to introduce
{\it stable\/} Schubert classes
indexed by the Weyl group of
infinite rank $\W_{\infty}$. Recall that the embeddings of Dynkin diagrams shown in \S \ref{ssec:Weyl} induce embeddings $i:\W_n \to \W_{n+1}$; then $\W_\infty = \bigcup_{n \ge 1} \W_n$.
\subsection{Stable Schubert classes}\label{ssec:SSC}
Let us denote by $\sigma_{w}^{(n)}$ the equivariant Schubert
class on $\mathcal{F}_{n}$ labeled by $w\in \W_{n}.$
\begin{prop}\label{prop:StabSch}
The localization of Schubert classes is stable, i.e.
$$\sigma_{i(w)}^{(n+1)}|_{i(v)}=\sigma_w^{(n)}|_v
\quad\mbox{for all}\quad
w,v\in \W_{n}.$$
\end{prop}
\begin{proof}
First we claim that
$\sigma_{i(w)}^{(n+1)}|_{i(v)}\in \Z[t]^{(n)}$
for any $w,v\in \W_{n}.$
Let $w_{0}^{(n)}$ be
the longest element in $\W_{n}$.
By Prop. \ref{xi}, we have for $v\in \W_{n}$
$$
\sigma_{i(w_{0}^{(n)})}^{(n+1)}|_{i(v)}
=\begin{cases}
\prod_{\beta\in R^{+}_{n}}\beta
&\mbox{if} \;v= w_{0}^{(n)}
\\
0 &\mbox{if} \;v\ne w_{0}^{(n)}
\end{cases}.
$$
In particular these polynomials belong to
$\Z[t]^{(n)}.$
For arbitrary $w\in \W_{n}$,
any reduced expression of $i(w)$ contains only simple reflections $s_0, \dots , s_{n-1}$ (in type $D$, $s_0$ is replaced by $s_{\hat{1}}$).
Hence we obtain
the Schubert
class $\sigma_{i(w)}^{(n+1)}$
by applying the divided difference
operators $\partial_{0},\ldots,\partial_{n-1}$
(in type $D$, $\partial_0$ is replaced by $\partial_{\hat{1}}$)
successively to the class $\sigma_{i(w_{0}^{(n)})}^{(n+1)}.$
In this process only the variables $t_{1},\ldots,t_{n}$
are involved to compute
$\sigma_{i(w)}^{(n+1)}|_{i(v)}\;(v\in \W_{n}).$
Hence the claim is proved.
For $w\in \W_{n},$
we consider the element $\eta_{w}$
in $\prod_{v\in \W_{n}}\Z[t]^{(n)}$
given by
$\eta_{w}|_{v}=\sigma_{i(w)}^{(n+1)}|_{i(v)}\;(v\in \W_{n}).$
We will show
the element $\eta_{w}$
satisfies the conditions in Prop. \ref{xi} that
characterize $\sigma_{w}^{(n)}.$
In fact, the vanishing condition
holds since $i(w)\leq i(v)$
if and only if $w\leq v.$
The homogeneity and degree conditions are
satisfied because $\ell(i(w))=\ell(w).$
The normalization follows from the fact
$R_{n}^{+}\cap w R^{-}_{n}=
R_{n+1}^{+}\cap i(w) R^{-}_{n+1}.$
Thus we have $\eta_{w}=\sigma_{w}^{(n)}$
and the proposition is proved.
\end{proof}
Fix $w$ in $\W_{\infty}.$
Then, by the previous proposition, for any $v\in \W_{\infty}$, and for any sufficiently
large $n$ such that $w,v\in \W_{n}$,
the polynomial
$\sigma_{w}^{(n)}|_{v}$ does not depend on
the choice of $n.$
Thus we can introduce
a unique element
$\sigma_{w}^{(\infty)}
=(\sigma_{w}^{(\infty)}|_{v})_{v}$ in $\prod_{v\in \W_{\infty}}\Z[t]$ such that
$$
\sigma_{w}^{(\infty)}|_{v}
=\sigma_{w}^{(n)}|_{v}$$
for all sufficiently large $n.$
We call this element the {\it stable\/} Schubert class.
\begin{Def} Let $H_{\infty}$ be the $\Z[t]$-submodule of $\prod_{v\in \W_{\infty}}\Z[t]$ spanned by the stable Schubert classes $\sigma_{w}^{(\infty)},w\in \W_{\infty}$, where the $\Z[t]$-module structure is given by the diagonal multiplication.
\end{Def}
We will show later in Cor. \ref{cor:Hsubalg} that
$H_{\infty}$ is actually a $\Z[t]$-subalgebra
in the product ring $\prod_{v\in \W_{\infty}}\Z[t].$
The properties of the (finite-dimensional) Schubert classes extend immediately to the stable case. For example, the classes $\sigma_{w}^{(\infty)}\;(w\in W_{\infty})$ are
linearly independent over $\Z[t]$
(Prop. \ref{prop:basis}), and they satisfy the properties from Prop. \ref{xi}. To state the latter, define $R^{+}=\bigcup_{n\geq 1}R_{n}^{+}$, regarded
as a subset of $\Z[t].$ Then:
\begin{prop}\label{StableClass}
The stable Schubert class satisfies the following:
\begin{enumerate}
\item[(1)](Homogeneity) $\sigma_w^{(\infty)}|_v$ is homogeneous of degree $\ell(w)$ for each $v\geq w,$
\item[(2)](Normalization) \label{char:normal}$\sigma_w^{(\infty)}|_w=\prod_{\beta\in R^{+}\cap w(R^{-})}\beta,$
\item[(3)](Vanishing) $\sigma_w^{(\infty)}|_{v}$ vanishes unless $v\geq w.$
\end{enumerate}
\end{prop}
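To make the normalization concrete, here is a small example we add for illustration (it follows from the standard fact that a simple reflection $s_{i}$ permutes the positive roots other than $\alpha_{i}$):

```latex
% For a simple reflection w = s_i, the set R^+ \cap s_i(R^-)
% consists of the single root \alpha_i, so the normalization reads
\sigma_{s_{i}}^{(\infty)}\big|_{s_{i}}
  \;=\; \prod_{\beta\in R^{+}\cap s_{i}(R^{-})}\beta
  \;=\; \alpha_{i}.
```

This agrees with the Chevalley multiplicity computation of Prop. \ref{prop:codim1}, since $\omega_{i}-s_{i}(\omega_{i})=(\omega_{i},\alpha_{i}^{\vee})\,\alpha_{i}=\alpha_{i}.$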
It is natural to
consider the following stable version of the GKM conditions
in the ring $\prod_{v\in\W_{\infty}}\Z[t]$:
$$\eta|_{v}-\eta|_{s_{\alpha}v}\;\;\mbox{is a multiple
of}\;\alpha\;\mbox{for all}\;
\alpha\in R^{+},\; v\in \W_{\infty}.
$$
Then the stable Schubert class $\sigma_{w}^{(\infty)}$
is the unique element in
$\prod_{v\in W_{\infty}}\Z[t]$ that
satisfies the GKM conditions and the
three conditions of Prop. \ref{StableClass}. It follows that all the elements
from $H_\infty$ satisfy the GKM conditions. In particular, the proofs from \cite{Kn} can be retraced, and one
can define the left and right actions of
$\W_{\infty}$ on $H_{\infty}$
by the same formulas as in \S \ref{ssec:WeylAction}
but for $i\in \I_{\infty}.$
Using these actions, we define also the divided difference operators
$\partial_{i},\delta_{i}$ on $H_{\infty}$
(see \S \ref{ssec:divdiff}). The next result follows again from the finite dimensional case (Prop. \ref{prop:propertiesdiv}).
\begin{prop}\label{prop:divdiff}
We have
$$
\partial_{i}\sigma_{w}^{(\infty)}=\begin{cases}
\sigma_{ws_{i}}^{(\infty)}& \ell(ws_{i})=\ell(w)-1\\
0 &\ell(ws_{i})=\ell(w)+1
\end{cases},\quad
\delta_{i}\sigma_{w}^{(\infty)}=\begin{cases}
\sigma_{s_{i}w}^{(\infty)}& \ell(s_{i}w)=\ell(w)-1\\
0 &\ell(s_{i}w)=\ell(w)+1
\end{cases}.
$$
\end{prop}
\subsection{Inverse limit of cohomology groups}\label{ssec:invlim}
Let $H_{n}$ denote the image of
the localization map
$$
\iota_{n}^{*}: H_{T_{n}}^{*}(\mathcal{F}_{n})
\longrightarrow H_{T_{n}}^{*}(\mathcal{F}_{n}^{T_{n}})
=\prod_{v\in \W_{n}}\Z[t]^{(n)}.
$$
By the stability property for the localization of Schubert classes, the natural projections $H_\infty \to H_n \simeq \eqcoh (\F_n)$ are compatible with the homomorphisms $H^*_{T_{n+1}}(\F_{n+1}) \to \eqcoh(\F_n)$ induced by the equivariant embeddings $\F_n \to \F_{n+1}$. Therefore there is a $\Z[t]$-module homomorphism \[j: H_{\infty}\hookrightarrow \invlim\, H_{T_{n}}^{*}(\mathcal{F}_{n})\/.\] The injectivity of localization maps in the finite-dimensional setting implies that $j$ is injective as well.
\section{Universal localization map}\label{sec:UnivLoc}
In this section, we introduce a $\Z[t]$-algebra $\R_\infty$
and establish an explicit isomorphism from
$\R_{\infty}$ onto
$H_\infty,$ the $\Z[t]$-module spanned by the stable Schubert
classes. This isomorphism will be used
in the proof of the existence of
the double Schubert polynomials from \S \ref{sec:DSP}.
\subsection{The ring $\R_{\infty}$ and the universal localization map}\label{ssec:UnivLoc}
Set $\Z[z]=\Z[z_{1},z_{2},z_{3},\ldots]$ and define the following rings:
$$R_{\infty}:=\Z[t]
\otimes_{\Z}\Z[z]
\otimes_{\Z} \Gamma,\quad \textrm{ and } \quad
R_{\infty}':=\Z[t]
\otimes_{\Z}\Z[z]
\otimes_{\Z} \Gamma' \/.
$$
As usual, we will use $\R_{\infty}$ to denote $R_\infty$ for type $\mathrm{C}$ and
$R_{\infty}'$ for types $\mathrm{B}$ and $\mathrm{D}$.
We introduce next the most important algebraic tool of the paper.
Let $v$ be in $W_{\infty}.$
Set $t_{v}=(t_{v,1},t_{v,2},\ldots)$ to be
$$t_{v,i}=\begin{cases}t_{\overline{v(i)}}&
\mbox{if}\;v(i)\;\mbox{is negative}\\
0 & \mbox{otherwise}
\end{cases},
$$
where we set $t_{\overline{i}}$ to be $-t_{i}.$
Define a homomorphism of $\Z[t]$-algebras
$$
\Phi_{v}: R_{\infty}'\longrightarrow \Z[t]
\quad
\left(x\mapsto
t_{v},\quad
z_{i}\mapsto t_{v(i)}\right).$$
Note that since $v(i)=i$ for all
sufficiently large $i$, only finitely many entries of $t_{v}$ are nonzero, so the substitution $x\mapsto t_{v}$
in $P_\lambda(x)$ gives a {\em polynomial} $P_{\lambda}(t_{v})$ in $\Z[t]$ (rather than a formal power series). Since $R_{\infty}$ is a subalgebra of $R_{\infty}'$, the restriction map
$\Phi_v:R_{\infty}\longrightarrow \Z[t]$ sends $Q_{\lambda}(x)$ to $Q_{\lambda}(t_{v}).$
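As a small illustration (our own example, in type $\mathrm{C}$): take $v=s_{0}$, so $v(1)=\overline{1}$ and $v(i)=i$ for $i\geq 2$. Then $t_{v}=(t_{1},0,0,\ldots)$, and $\Phi_{s_{0}}$ acts on generators as follows:

```latex
% v(1) = \bar{1} is negative, hence t_{v,1} = t_{\overline{v(1)}} = t_1;
% v(i) = i is positive for i >= 2, hence t_{v,i} = 0.
\Phi_{s_{0}}(z_{1}) = t_{v(1)} = t_{\overline{1}} = -t_{1},
\qquad
\Phi_{s_{0}}\bigl(Q_{1}(x)\bigr) = Q_{1}(t_{1},0,0,\ldots) = 2t_{1},
% using Q_1(x) = 2(x_1 + x_2 + ...), read off from the
% generating function \prod_i (1+x_i u)/(1-x_i u).
```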
\begin{Def} Define the {\it universal localization map\/}
to be the homomorphism of $\Z[t]$-algebras given
by
$$\Phi:
\R_{\infty}\longrightarrow
\prod_{v\in \W_{\infty}}\Z[t],\quad
f\mapsto (\Phi_{v}(f))_{v\in \W_{\infty}}.
$$
\end{Def}
\begin{remark}{\rm A geometric interpretation of the map $\Phi$, in terms of the usual localization map, will be given later in \S \ref{sec:geometry}.}\end{remark}
The main result of this section is:
\begin{thm}\label{thm:isom}
The map
$\Phi$ is an isomorphism
of graded $\Z[t]$ algebras
from $\R_\infty$ onto its image. Moreover, the image of $\Phi$ is equal to $H_\infty$.
\end{thm}
\begin{cor}\label{cor:Hsubalg}
$H_{\infty}$ is a $\Z[t]$-subalgebra in $\prod_{v\in \W_{\infty}}\Z[t].$
\end{cor}
The proof of the theorem will be given in several lemmata and propositions, and occupies the remainder of this section. The more involved part is the surjectivity, which relies on an analysis of the {\it transition equations\/} implied by the equivariant Chevalley rule, and on a study of the factorial $P$- and $Q$-Schur functions. The proof of injectivity is rather short, and we present it next.
\begin{lem} \label{injective}
The map $\Phi$ is injective.
\end{lem}
\begin{proof}
We first consider the type $\mathrm{B}$ case.
Write $f\in R_\infty'$ as
$f=\sum_{\lambda}c_\lambda(t,z) P_\lambda(x).$
Suppose $\Phi(f)=0.$
There are $m,n$ such that
$$c_\lambda\in \Z[t_1,\ldots,t_m,z_1,\ldots,z_n]$$
for all $\lambda$ such that $c_\lambda\ne 0.$
Define $v\in W_\infty$ by
$v(i)=m+i\;(1\leq i\leq n),$
$ v(n+i)=i\;(1\leq i\leq m),$
$v(m+n+i)=\overline{m+n+i}\;(1\leq i\leq N),$
and $
v(i)=i\;(i>N),$
where $N\geq m+n+1.$
Then we have
$\Phi_v(f)=\sum_\lambda c_\lambda(t_1,\ldots,t_m;t_{m+1},\ldots,t_{m+n})
P_\lambda(t_{m+n+1},t_{m+n+2},\ldots,t_{m+n+N})=0.$
Since this holds for all sufficiently large $N,$
we have $$\sum_\lambda c_\lambda(t_1,\ldots,t_m;t_{m+1},\ldots,t_{m+n})
P_\lambda(t_{m+n+1},t_{m+n+2},\ldots)=0.$$
Since $P_\lambda(t_{m+n+1},t_{m+n+2},\ldots)$ are linearly
independent over $\Z$
(see \cite{Mac}, III, (8.9)),
we have $$c_\lambda(t_1,\ldots,t_m;t_{m+1},\ldots,t_{m+n})=0$$
for all $\lambda.$
This implies $c_\lambda(t_1,\ldots,t_m;z_{1},\ldots,z_{n})=0$
for all $\lambda.$
Since $R_{\infty}\subset R_{\infty}',$
the type $\mathrm{C}$ case follows immediately.
The type $\mathrm{D}$ case is proved by a minor modification:
take $N$ above to be even, and let it range over
sufficiently large {\it even\/} integers.
\end{proof}
\subsection{Factorial $Q$-(and $P$-)functions
and Grassmannian Schubert classes}
Recall that there is a natural bijection between
$W_{\infty}^{0},W_{\infty}^{\hat{1}}$ and the set of strict partitions $\mathcal{SP}$.
The next result was proved by Ikeda in \cite{Ik} for type
$\mathrm{C}$, and Ikeda-Naruse in \cite{IN} for types $\mathrm{B,D}$.
\begin{thm}[\cite{Ik},\cite{IN}] \label{PhiFacQ}
Let $\lambda\in \mathcal{SP}$ and $w_{\lambda}
\in W_{\infty}^{0}$ and $w'_{\lambda}
\in W_{\infty}^{\hat{1}}$ be the corresponding Grassmannian
elements.
Then we have
\begin{enumerate}
\item $\Phi\left(Q_{\lambda}(x|t)\right)=\sigma_{w_{\lambda}}^{(\infty)}\;
\mbox{for type}\;C,$
\item $\Phi\left(P_{\lambda}(x|0,t)\right)
=\sigma_{w_{\lambda}}^{(\infty)}\;\mbox{for type}\;B,$
\item
$\Phi\left(P_{\lambda}(x|t)\right)
=\sigma_{w'_{\lambda}}^{(\infty)}\;
\mbox{for type}\;D.
$
\end{enumerate}
\end{thm}
\begin{proof} We consider first the type $\mathrm{C}$ case.
The map on $W_{\infty}$
given by $v\mapsto \sigma_{w_{\lambda}}^{(\infty)}|_{v}$
is constant on each left coset of $W_{\infty,0}\cong
S_{\infty}$
and it is determined by the values at the Grassmannian elements.
Let $v\in W_{\infty}$ and $w_{\mu}$ be the minimal length representative
of the coset $vS_{\infty}$ corresponding to
a strict partition $\mu.$
Then $t_{v}$ defined in \S \ref{ssec:UnivLoc}
is a permutation of $t_{\mu}.$
Since $Q_{\lambda}(x|t)$ is
symmetric with respect to $x$
we have
$\Phi_{v}(Q_{\lambda}(x|t))=Q_{\lambda}(t_{v}|t)
=Q_{\lambda}(t_{\mu}|t).
$
In \cite{Ik}, it was shown that
$Q_{\lambda}(t_{\mu}|t)=\sigma_{w_{\mu}}^{(\infty)}|_{w_{\mu}},$
which is equal to $\sigma_{w_{\mu}}^{(\infty)}|_v.$
This completes the proof in this case.
The proofs of the other cases are the same,
with the appropriate identification
of the functions and strict partitions.
\end{proof}
\subsection{Equivariant Chevalley formula}
The {\em
Chevalley formula} is a rule to multiply a Schubert class with a divisor class.
To state it we need some notation.
For a positive root $\alpha\in R^{+}$ and
a simple reflection $s_{i}$,
set
$$
c_{\alpha,s_{i}}=(\omega_i,\alpha^{\vee}),\quad
\alpha^{\vee}=2\alpha/(\alpha,\alpha),
$$
where $\omega_{i}$
is the $i$-th fundamental weight
of one of the classical types $\mathrm{A_{n}-D_n}$ for sufficiently large $n$.
The number $c_{\alpha,s_{i}}$, called the {\it Chevalley multiplicity\/}, does not depend on the choice of $n.$
\begin{prop}[cf. \cite{KK}]\label{prop:codim1}
For any $w\in \W_{\infty}$, the Chevalley multiplicity $\sigma_{s_{i}}^{(\infty)}|_{w}$ is given
by $\omega_{i}-w(\omega_{i})$, where
$\omega_{i}$ is the fundamental weight
for a classical type $\mathrm{A_n-D_n}$ such that $n\geq i.$
\end{prop}
\begin{lem}[Equivariant Chevalley formula]
\label{lem:eqCh}
$$
\sigma_{s_{i}}^{(\infty)}
\sigma_{w}^{(\infty)}
=
\displaystyle\sum_{\alpha\in R^{+},\;\ell(w s_\alpha)=\ell(w)+1}
c_{\alpha,s_{i}}\,\sigma_{w s_\alpha}^{(\infty)} + \sigma_{s_{i}}^{(\infty)}|_{w} \cdot \sigma_{w}^{(\infty)}. $$
\end{lem}
\begin{proof}
The non-equivariant case is due to Chevalley \cite{C}, but the stable version of this formula was given in \cite{B93}.
An easy argument using localization shows that the only difference in the equivariant case is the appearance of the
equivariant term
$\sigma_{s_{i}}^{(\infty)}|_{w} \cdot \sigma_{w}^{(\infty)}.$
\end{proof}
\begin{remark}{\rm
There are only finitely many nonzero terms in the sum
on the right-hand side.}
\end{remark}
\begin{lem} \label{lem:z}
The elements $\Phi(z_{i}) \in H_\infty$
are expressed in terms of Schubert classes as follows:
\noindent Type $\mathrm{B}$:
$\Phi(z_1)=\sigma_{s_1}^{(\infty)}-2\sigma_{s_{0}}^{(\infty)}+t_{1},\;
\Phi(z_i)=\sigma_{s_i}^{(\infty)}-\sigma_{s_{i-1}}^{(\infty)}+t_{i}
\; (i\geq 2),$
\noindent Type $\mathrm{C}$:
$\Phi({z}_i)=\sigma_{s_i}^{(\infty)}-\sigma_{s_{i-1}}^{(\infty)}+t_{i}
\; (i\geq 1),$
\noindent Type $\mathrm{D}$:
$\Phi({z}_1)=\sigma_{s_{1}}^{(\infty)}-\sigma_{s_{\hat{1}}} ^{(\infty)} +t_{1},\;
\Phi({z}_2)=\sigma_{s_{2}} ^{(\infty)}-\sigma_{s_{1}} ^{(\infty)}
-\sigma_{s_{\hat{1}}} ^{(\infty)} +t_{2}$, \quad and \\
$\Phi({z}_i)=\sigma_{s_{i} }^{(\infty)}-\sigma_{s_{i-1}} ^{(\infty)} +t_{i}\;(i\geq 3).
$
\end{lem}
\begin{proof}
This follows by localizing both sides of the formulas, and then using Prop. \ref{prop:codim1}.
\end{proof}
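As a sanity check (added here for illustration), one can localize the type $\mathrm{C}$ formula at $v=e$: the left-hand side is $\Phi_{e}(z_{i})=t_{e(i)}=t_{i}$, while on the right-hand side the vanishing property (Prop. \ref{StableClass}) kills both Schubert terms:

```latex
% Localizing \Phi(z_i) = \sigma_{s_i}^{(\infty)} - \sigma_{s_{i-1}}^{(\infty)} + t_i
% at v = e (type C): e is not >= s_i or s_{i-1}, so both classes vanish at e.
\left(\sigma_{s_{i}}^{(\infty)}-\sigma_{s_{i-1}}^{(\infty)}+t_{i}\right)\Big|_{e}
  = 0 - 0 + t_{i} = t_{i} = \Phi_{e}(z_{i}).
```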
\begin{lem}\label{lem:PhisubH} We have $\mathrm{Im}(\Phi)\subset H_\infty.$
\end{lem}
\begin{proof}
The ring $R_{\infty}$ has a $\Z[t]$-basis
$z^{\alpha}Q_{\lambda}(x|t)$ where $z^{\alpha}$
are monomials in $\Z[z]$ and $\lambda$
are strict partitions.
Since $\Phi$ is $\Z[t]$-linear, it is enough to show that
$\Phi\left(z^{\alpha}Q_{\lambda}(x|t)\right)$
belongs to $H_{\infty}.$
We use induction on the degree $d$ of the monomial $z^{\alpha}.$
The case $d=0$ holds by Thm. \ref{PhiFacQ}.
Let $d\geq 1$ and assume that
$\Phi(z^{\alpha}Q_{\lambda}(x|t))$ lies in $H_{\infty}$
for any monomial $z^{\alpha}$ of degree less than $d.$
Note that, by Lem. \ref{lem:z}, we have
$\Phi(z_{i})\in H_{\infty}.$
Choose any index $i$ such
that $z^{\alpha}=z_{i}\cdot z^{\beta}.$
By the induction hypothesis, $\Phi\left(z^{\beta}Q_{\lambda}(x|t)
\right)$ is an element of $H_{\infty},$
i.e., a linear combination
of $\sigma_{w}^{(\infty)}$'s with coefficients
in $\Z[t].$ Lem. \ref{lem:z} together with the
equivariant Chevalley formula imply
that $\Phi(z_{i})\sigma_{w}^{(\infty)}$
belongs to $H_{\infty}.$
It follows that $\Phi\left(z^{\alpha}Q_{\lambda}(x|t)\right)$
belongs to $H_{\infty}.$
\end{proof}
\subsection{Transition equations}\label{ssec:trans} To finish the proof of the surjectivity of $\Phi$, we need certain recursive relations for the Schubert classes, the {\it transition equations\/}, implied by the (equivariant) Chevalley formula. The arguments in this subsection are very similar to those given by S. Billey in \cite{B93}.
Let $t_{ij}$ denote the reflection with respect to the root $t_{j}-t_{i}$,
$s_{ij}$ the reflection with respect to $t_{i}+t_{j}$ and $s_{ii}$ the reflection
with respect to $t_{i}$ or $2t_{i}$ (depending on type).
From now on we regard $\Z[z]$
as a subalgebra of $H_{\infty}$ via $\Phi$
and we identify $z_i$ with its image $\Phi(z_{i})$ in $H_{\infty}$
(cf. Lem \ref{lem:PhisubH}).
\begin{prop}[Transition equations]
The Schubert classes $\sigma_{w}$ of types
$\mathrm{B,C}$ and $\mathrm{D}$ satisfy the following recursion formula:
\begin{equation}
\sigma_w ^{(\infty)} =({z}_r-v(t_r))\;\sigma_v ^{(\infty)}
+
\sum_{1\leq i<r} \sigma^*_{v t_{ir}}+
\sum_{i\neq r}\sigma^*_{v s_{ir}}+
\chi\sigma^*_{v s_{rr}},\label{Transition}
\end{equation}
where $r$ is the last descent of $w$,
$s$ is the largest index such that $w(s)<w(r)$,
$v=wt_{rs}$,
$\chi=2,1,0$ according to the types $\mathrm{B,C,D}$,
and for $v,t\in \W_{\infty}$ we set
$\sigma_{vt}^{*}=\sigma_{vt}^{(\infty)}$ if $\ell(vt)
=\ell(v)+1=\ell(w)$,
and $\sigma_{vt}^{*}=0$ otherwise.
\end{prop}
\begin{proof}
The proof is the same as in \cite[Thm.4]{B93},
using the equivariant Chevalley formula (Lemma \ref{lem:eqCh}).
\end{proof}
\begin{remark}{\rm
The precise recursive nature of equation (\ref{Transition}) will be explained
in the proof of the next proposition.}
\end{remark}
\begin{prop} \label{TransExp}
If $w\in \W_{n}$ then
the Schubert class $\sigma_{w}^{(\infty)}$
is expressed as a $\Z[z,t]$-linear
combination of the Schubert classes of maximal Grassmannian type.
More precisely we have
\begin{equation}
\sigma_{w}^{(\infty)}=\sum_{\lambda}
g_{w,\lambda}({z},t)\sigma_{\lambda}^{(\infty)},\label{expansion}
\end{equation}
for some polynomials
$g_{w,\lambda}({z},t)$ in the variables $t_i$ and $z_i$; the sum is over strict partitions $\lambda$ such that $|\lambda|\leq n.$
\end{prop}
\begin{proof}
We will show that the recursion (\ref{Transition})
terminates in a finite number of steps
to get the desired expression.
Following \cite{B93}, we define a partial ordering on the elements of $\W_{\infty}.$
Given $w$ in $\W_{\infty}$,
let $LD(w)$ be the position of the last descent.
Then set $w<_{LD}u$
if $LD(u)<LD(w)$, or if $LD(u)=LD(w)$ and
$u(LD(u))<w(LD(w))$.
In \cite{B93} it was shown that each element
appearing on the right hand side of
(\ref{Transition}) is less than $w$ under this ordering.
Moreover it was proved in \cite[Thm.4]{B93} that
recursive applications of
(\ref{Transition}) give only terms which correspond to the elements in
$\W_{n+r}$ where $r$ is the last descent of $w.$
Therefore we obtain the expansion (\ref{expansion}).
\end{proof}
\subsection{Proof of Theorem \ref{thm:isom}}
\begin{proof} By Lem. \ref{lem:PhisubH} we know
$\mathrm{Im}(\Phi)\subset H_{\infty}.$
Clearly $\Phi$ preserves the degree.
So it remains to show
$H_{\infty}\subset\mathrm{Im}(\Phi).$
For this, it suffices to show that
$\sigma_{w}^{(\infty)}\in \mathrm{Im}(\Phi)$ for each $w\in \W_{\infty}.$
In fact, since $\Phi$ is $\Z[z,t]$-linear,
Prop. \ref{TransExp} and Thm. \ref{PhiFacQ} give
\begin{equation}
\Phi\left(\sum_{\lambda}
g_{w,\lambda}(z,t)Q_{\lambda}(x|t)
\right)=\sigma_{w}^{(\infty)}.
\label{eq:DefSch}
\end{equation}
\end{proof}
\section{Weyl group actions and divided difference operators on $\R_{\infty}$}\label{sec:WactsR}
We define two commuting actions of $\W_{\infty}$ on the ring $\R_{\infty}.$
It is shown that the Weyl group actions
are compatible with the action on $H_{\infty}$
via $\Phi.$
\subsection{Weyl group actions on $R_{\infty}$}
We start from type $\mathrm{C}$. We make $W_{\infty}$ act as ring automorphisms
on $R_{\infty}$
by letting
$s_{i}^{z}$ interchange $z_{i}$ and $z_{i+1},$
for $i>0,$ and letting
$s_{0}^{z}$ replace $z_{1}$ by $-z_{1},$
and also
$$
s_{0}^{z}Q_{i}(x)=Q_{i}(x)+2\sum_{j=1}^{i}z_{1}^{j}Q_{i-j}(x).
$$
The operator $s_{0}^{z}$ was introduced in
\cite{BH}.
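For instance (an example we add), taking $i=1$ in the formula gives

```latex
% i = 1: the sum contributes the single term 2 z_1 Q_0(x), and Q_0(x) = 1, so
s_{0}^{z}Q_{1}(x) = Q_{1}(x) + 2z_{1}.
```

This is consistent with Lem. \ref{lem:s0super} below, since $Q_{1}(z_{1},x_{1},x_{2},\ldots)=2z_{1}+2(x_{1}+x_{2}+\cdots)=Q_{1}(x)+2z_{1}.$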
Let $\omega: R_{\infty}\rightarrow R_{\infty}$ be an involutive ring automorphism
defined by
$$
\omega(z_{i})=-t_{i},\quad
\omega(t_{i})=-z_{i},\quad
\omega(Q_{k}(x))=Q_{k}(x).
$$
Define the operators $s_{i}^{t}$ on $R_{\infty}$
by $s_{i}^{t}=\omega s_{i}^{z}\omega$
for $i\in I_{\infty}.$
More explicitly, $s_{i}^{t}$ interchanges $t_{i}$ and
$t_{i+1}$, for $i>0$, and $s_{0}^{t}$
replaces $t_{1}$ by $-t_{1}$, and also
$$
s_{0}^{t}Q_{i}(x)=Q_{i}(x)+2\sum_{j=1}^{i}(-t_{1})^{j}Q_{i-j}(x).
$$
\begin{lem}\label{lem:s0super} The actions of the operators
$s_{0}^{z},s_{0}^{t}$ on any $\varphi(x)\in \Gamma$
are given by
$$s_{0}^{z}\varphi(x_{1},x_{2},\ldots)
=\varphi(z_{1},x_{1},x_{2},\ldots),\quad
s_{0}^{t}\varphi(x_{1},x_{2},\ldots)
=\varphi(-t_{1},x_{1},x_{2},\ldots).$$
\end{lem}
Note that the right-hand sides
of both formulas above belong to
$R_{\infty}.$
\begin{proof}
We show this for the generators $\varphi(x)=Q_{k}(x)$
of $\Gamma.$
By the definition of $s_{0}^{z}$ we have
$$
\sum_{k=0}^{\infty}
s_{0}^{z}
Q_{k}(x)\cdot u^{k}
=
\left(\prod_{i=1}^{\infty}
\frac{1+x_{i}u}{1-x_{i}u}
\right)\frac{1+z_{1}u}{1-z_{1}u}
=\sum_{k=0}^{\infty} Q_{k}(z_{1},x_{1},x_{2},\ldots)u^{k}.
$$
Thus we have the result for $s_{0}^{z}Q_{k}(x)$
for $k\geq 1.$
The proof for $s_{0}^{t}$ is similar.
\end{proof}
\begin{prop}
\begin{enumerate}
\item The operators
$s_{i}^{z}\;(i\geq 0)$ give an action of $W_{\infty}$ on $R_{\infty},$
\item The operators
$s_{i}^{t}\;(i\geq 0)$ give an action of $W_{\infty}$ on $R_{\infty},$
\item The two actions of $W_{\infty}$ commute with
each other.
\end{enumerate}
\end{prop}
\begin{proof}
We show that $s_{i}^{z}$ satisfy the Coxeter relations
for $W_{\infty}.$
The calculation for $s_{i}^{t}$ is the same.
We first show that $(s_{0}^{z})^{2}=1.$
For $f(z)\in \Z[z]$, $(s_{0}^{z})^{2}f(z)=f(z)$ is obvious.
We have for $\varphi(x)\in \Gamma,$
$$
(s_{0}^{z})^{2}
(\varphi(x))=
s_{0}^{z}\varphi(z_{1},x_{1},x_{2},\ldots)
=\varphi(z_{1},-z_{1},x_{1},x_{2},\ldots)
=\varphi(x_{1},x_{2},\ldots),
$$
where we used the super-symmetry
(Lemma \ref{lem:super})
at the last equality.
The verification of the remaining relations and of the commutativity is left to the reader.
\end{proof}
In type $\mathrm{B}$, the action of $W_\infty$ on $R_\infty'$ is obtained by extending the action on $R_{\infty}$ in the canonical way. Finally, we consider the type $\mathrm{D}$ case.
In this case, the action is given by restricting the action of $W_{\infty}$ on $R'_\infty$ to the subgroup $W_{\infty}'.$
Namely, if we
set $s_{\hat{1}}^{z}=s_{0}^{z}s_{1}^{z}s_{0}^{z}$
and $s_{\hat{1}}^{t}=s_{0}^{t}s_{1}^{t}s_{0}^{t}$, then
we have the corresponding
formulas for $s_{\hat{1}}^{z}$ and $s_{\hat{1}}^{t}$ (in type $\mathrm{D}$):
$$
s_{\hat{1}}^{z}\varphi(x_{1},x_{2},\ldots)
=\varphi(z_{1},z_{2},x_{1},x_{2},\ldots),\quad
s_{\hat{1}}^{t}\varphi(x_{1},x_{2},\ldots)
=\varphi(-t_{1},-t_{2},x_{1},x_{2},\ldots).
$$
\subsection{Divided difference operators}\label{ssec:divdiffgeom}
The divided difference operators
on $\R_{\infty}$ are defined by
$$
\partial_{i}f=\frac{f-s_{i}^{z}f}{\omega(\alpha_{i})},\quad
\delta_{i}f=\frac{f-s_{i}^{t}f}{\alpha_{i}},
$$
where $s_{i}$ and $\alpha_{i}\;(i\in \I_{\infty})$ are the
simple reflections and the corresponding simple roots.
Clearly we have
$\delta_{i}=\omega\partial_{i}\omega \quad (i\in \I_{\infty}).$
\subsection{Weyl group action and
commutativity with divided difference operators}
\begin{prop}\label{prop:comm}
We have
$
(1)\;s_{i}^{L}\Phi=\Phi s_{i}^{t},\;
(2)\; s_{i}^{R}\Phi=\Phi s_{i}^{z}.
$
\end{prop}
\begin{proof} We will only prove this for type $\mathrm{C}$; the other types can be treated similarly.
We first show $(1).$
This is equivalent to
$s_{i}\left(\Phi_{s_{i}v}(f)\right)=\Phi_{v}(s_{i}^{t}f)$
for all $f\in R_{\infty}.$
If $f\in \Z[z,t]$ the computation is
straightforward and we omit the proof.
Suppose $f=\varphi(x)\in \Gamma$.
We will only show
$s_{0}\left(\Phi_{s_{0}v}(f)\right)=\Phi_{v}(s_{0}^{t}f)$
since the case $i\geq 1$ is straightforward.
By Lem. \ref{lem:s0super}, the right hand side of this
equation is written as
\begin{equation}
\varphi(-t_1,x_1,x_2,\ldots)|_{x_j=t_{v,j}}.
\label{eq:rhs-t}
\end{equation}
Let $k$ be the (unique) index such that $v(k)=
1$ or $\overline{1}.$
Then the string $t_{s_0v}$ differs
from $t_v$ only in the $k$-th position.
If $v(k)=\overline{1}$,
then
$t_{v,k}=t_1,\, t_{s_0v,k}=0$
and $t_{v,j}=t_{s_0v,j}$ for $j\ne k.$
In this case (\ref{eq:rhs-t}) is
$$
\varphi(-t_1,t_{v,1},\ldots,t_{v,k-1},t_1,t_{v,k+1},\ldots).
$$
This polynomial is equal to
$\varphi(t_{v,1},\ldots,t_{v,k-1},t_{v,k+1},\ldots)$
because $\varphi(x)$ is supersymmetric.
It is straightforward to see that
$s_{0}\Phi_{s_{0}v}(\varphi(x))$
is equal to $\varphi(t_{v,1},\ldots,t_{v,k-1},t_{v,k+1},\ldots).$ The case $v(k)={1}$ is easier, so
we leave it to the reader.
Next we show (2), i.e. $
\Phi_{vs_{i}}(f)=\Phi_{v}(s_{i}^{z}f)
$ for all $f\in R_{\infty}.$
Again, the case $f\in \Z[z,t]$ is straightforward,
so we omit the proof.
We show $\Phi_{vs_{0}}\left(\varphi(x)\right)
=\Phi_{v}(s_{0}^{z}\varphi(x))$
for $\varphi(x)\in \Gamma.$
The right hand side is
\begin{equation}
\varphi(z_{1},x_{1},x_{2},\ldots)|_{z_{1}=v(t_{1}),\,
x_{j}=t_{v,j}},
\label{eq:rhs}
\end{equation}
where $t_{v,j}=t_{\overline{v(j)}}$ if
$v(j)$ is negative and otherwise $t_{v,j}$ is zero.
If $v(1)=-k$ is negative, the above function (\ref{eq:rhs}) is
$$
\varphi(-t_{k},t_{k},t_{v,2},t_{v,3},\ldots).
$$
This is equal to
$\varphi(0,0,t_{v,2},t_{v,3},\ldots)$
because $\varphi$ is supersymmetric,
and the latter equals
$\varphi(0,t_{v,2},t_{v,3},\ldots)$ by the stability
property.
Now since $\overline{v(1)}$ is positive we have
$t_{vs_{0}}=(0,t_{v,2},t_{v,3},\ldots).$
Therefore the polynomial (\ref{eq:rhs})
coincides with $\Phi_{vs_{0}}(\varphi(x)).$
If $v(1)$ is positive, then
$t_{v}=(0,t_{v,2},t_{v,3},\ldots)$ and
$t_{vs_{0}}=(t_{v(1)},t_{v,2},t_{v,3},\ldots).$
Hence the substitution
$x\mapsto t_{vs_{0}}$ to
the function $\varphi(x_{1},x_{2},\ldots)$
gives rise to the polynomial (\ref{eq:rhs}).
Next we show $
\Phi_{vs_{i}}(\varphi(x))=\Phi_{v}(s_{i}^{z}\varphi(x))
$
for $i\geq 1.$
First recall that $s_{i}^{z}\varphi(x)=\varphi(x).$
In this case
$t_{vs_{i}}$ is obtained from
$t_{v}$ by exchanging $t_{v,i}$ and $t_{v,i+1}.$
So $\varphi(t_{vs_{i}})=\varphi(t_{v}).$
This completes the proof.
\end{proof}
From the above proposition, we obtain the following:
\begin{prop}\label{prop:PhiCommD} The localization map
$\Phi:\R_{\infty}\rightarrow H_{\infty}$
commutes with the divided difference operators
both on $\R_{\infty}$ and $H_{\infty},$ i.e.,
$$\Phi \,\partial_{i}=\partial_{i}\,\Phi,\quad
\Phi\,\delta_{i}=\delta_{i}\,\Phi$$
\end{prop}
\begin{proof}
Let $f\in R_{\infty}.$
Applying $\Phi$ to both sides of the
equation
$
{\omega(\alpha_{i})}\cdot
\partial_{i}f={f-s_{i}^{z}f}
$ we have
$\Phi(-\omega(\alpha_{i}))\cdot \Phi(\partial_{i}f)
=\Phi(f)-s_{i}^{R}\Phi(f)$,
where we used Prop. \ref{prop:comm} and linearity.
Localizing at $v$ we obtain
$v(\alpha_{i})\cdot \Phi_{v}(\partial_{i}f)
=\Phi_{v}(f)-\Phi_{vs_{i}}(f).$
Note that we used the definition of $s_{i}^{R}$ and
$\Phi_{v}(\omega(\alpha_{i}))=-v(\alpha_{i}).$
The proof for the statement regarding $\delta_{i}$
is similar, using $\Phi(\alpha_{i})=\alpha_{i}.$
\end{proof}
\subsection{Proof of the existence and uniqueness Theorem \ref{existC}}
\begin{proof} (Uniqueness)
Let
$\{\mathfrak{S}_{w}\}$ and $\{\mathfrak{S}_{w}'\}$
be two families
both satisfying the defining conditions
of the double Schubert polynomials.
By induction on the length of $w$,
we see
$\partial_{i}(\mathfrak{S}_{w}-\mathfrak{S}'_{w})=
\delta_{i}(\mathfrak{S}_{w}-\mathfrak{S}'_{w})=0$
for all $i\in \I_{\infty}.$
This implies that
the difference $\mathfrak{S}_{w}-\mathfrak{S}'_{w}$ is
invariant for both left and right actions of $\W_{\infty}.$
It is easy to see that the only such invariants
in $\R_{\infty}$ are the constants.
So $\mathfrak{S}_{w}-\mathfrak{S}'_{w}=0$
by the constant term condition.
(Existence) Define $\mathfrak{S}_{w}(z,t;x)
=\Phi^{-1}(\sigma_{w}^{(\infty)}).$
By Prop. \ref{prop:PhiCommD}
and Prop. \ref{prop:divdiff},
$\mathfrak{S}_{w}(z,t;x)$
satisfies the defining equations for the double
Schubert polynomials.
The conditions on the constant term are
satisfied since $\sigma_{w}^{(\infty)}$
is homogeneous of degree $\ell(w)$ (Prop. \ref{StableClass})
and we have $\mathfrak{S}_{e}=1.$
\end{proof}
\begin{remark}\label{rem:TransDSP}
{\rm By construction, $\mathfrak{S}_{w}(z,t;x)$
satisfies the transition equation (\ref{Transition})
with $\sigma_{w}^{(\infty)}$ replaced by $\mathfrak{S}_{w}(z,t;x)$.
This equation provides an effective way to calculate
the double Schubert polynomials.}
\end{remark}
\begin{remark}\label{rem:A}{\rm The ring
$\Z[z]\otimes_{\Z}\Z[t]$ is
stable under the actions of the divided
difference operators $\partial_{i},\delta_{i}\,(i\geq 1)$ of type $\mathrm{A}$, and the type $\mathrm{A}$ double Schubert polynomials $\mathfrak{S}_{w}^{A}(z,t)$,
$w\in S_{\infty}$ form the unique family of solutions of the system of equations
involving only $\partial_{i},\delta_{i}$ for $i\geq 1$, and which satisfy the constant term conditions.}\end{remark}
\subsection{Projection to the cohomology of flag manifolds}\label{ssec:projection}
We close this section with a brief discussion
of the projection from $\R_{\infty}$ onto
$H_{T_{n}}^{*}(\mathcal{F}_{n}).$
For $f\in \Z[t]$, we denote by $f^{(n)}\in \Z[t]^{(n)}$
the polynomial given
by setting $t_{i}=0$ for $i>n$ in $f.$
Let $\mathrm{pr}_{n}:
H_{\infty}\rightarrow H_{n}$ be the projection
given by
$(f_{v})_{v\in \W_{\infty}}\mapsto
(f_{v}^{(n)})_{v\in \W_{n}}.$
Consider the following composition of maps
$$
\pi_{n}: \R_{\infty}\overset{\Phi}{\longrightarrow} H_{\infty}
\overset{\mathrm{pr}_{n}}{\longrightarrow} H_{n}
\cong
H_{T_{n}}^{*}
(\mathcal{F}_{n}).
$$
Explicitly, we have
$
\pi_{n}(f)|_{v}=\Phi_{v}(f)^{(n)}\;
(f\in \R_{\infty},\; v\in \W_{n}).
$
We will give an alternative geometric description
for $\pi_{n}$ in Section \ref{sec:geometry}.
\begin{prop}\label{prop:piCommD}
We have
$\pi_{n}(\mathfrak{S}_{w})=\sigma_{w}^{(n)}$
for $w\in \W_{n}$ and $\pi_{n}(\mathfrak{S}_{w})=0$
for $w\notin\W_{n}.$
Moreover $\pi_{n}$ commutes with divided difference operators
$$
\partial_{i}^{(n)}\circ\pi_{n}=\pi_{n}\circ\partial_{i},\quad
\delta_{i}^{(n)}\circ\pi_{n}=\pi_{n}\circ\delta_{i}
\quad
(i\in \I_{\infty}),
$$
where $\partial_{i}^{(n)},\delta_{i}^{(n)}$ are divided difference
operators on $H_{T_{n}}^{*}(\mathcal{F}_{n}).$
\end{prop}
\begin{proof}
The first statement follows from the construction
of $\sigma_{w}^{(\infty)}$
and the vanishing property (Prop. \ref{StableClass}).
The second statement follows from Prop. \ref{prop:PhiCommD}
and the commutativity $\partial_{i}^{(n)}\circ\mathrm{pr}_{n}=
\mathrm{pr}_{n}\circ\partial_{i},\,
\delta_{i}^{(n)}\circ\mathrm{pr}_{n}=
\mathrm{pr}_{n}\circ\delta_{i}$
which is obvious from the construction of
$\partial_{i},\delta_{i}.$
\end{proof}
\begin{cor} There exists an injective homomorphism of $\Z[t]$-algebras $\pi_\infty: \R_\infty \to \invlim \eqcoh(\F_n)$.\end{cor}
\begin{proof} The proof follows from the above construction and \S \ref{ssec:invlim}.\end{proof}
\section{Double Schubert polynomials}\label{sec:DSP}
\subsection{Basic properties} Recall that the {\em double Schubert polynomial} $\mathfrak{S}_w(z,t;x)$ is equal to the inverse image of the stable Schubert class $\sigma_w^{(\infty)}$ under the algebra isomorphism $\Phi:\R_\infty \to H_\infty$. In the next two sections we will study the algebraic properties of these polynomials.
\begin{thm}\label{T:properties}
The double Schubert
polynomials satisfy the following:
\begin{enumerate}
\item (Basis)
The double Schubert polynomials $\{\mathfrak{S}_{w}\}_{w
\in \W_{\infty}}$
form a $\Z[t]$-basis of
$\R_{\infty}.$
\item (Relation to Billey-Haiman's polynomials) For all $w\in \W_{\infty}$ we have
\begin{equation}
\mathfrak{S}_{w}(z,0;x)=\mathfrak{S}_{w}(z;x),
\label{eq:relBH}
\end{equation}
where $\mathfrak{S}_{w}(z;x)$ denotes
Billey-Haiman's polynomial.
\item (Symmetry) We have
$\mathfrak{S}_{w}(-t,-z;x)=\mathfrak{S}_{w^{-1}}(z,t;x).$
\end{enumerate}
\end{thm}
\begin{proof}
Property (1) holds because the stable Schubert classes $\sigma_w^{(\infty)}$ form a $\Z[t]$-basis for $H_\infty$ (cf. Prop. \ref{prop:basis}).
Property (2) holds because $\mathfrak{S}_{w}(z,0;x)\in \Z[z]\otimes \Gamma'$
satisfies the defining conditions for
Billey-Haiman's polynomials
involving the right divided difference operators $\partial_{i}.$
Then by the uniqueness of Billey-Haiman's polynomials,
we obtain the result.
For (3), set $\mathfrak{X}_{w}=\omega(\mathfrak{S}_{w^{-1}}).$
Then by the relation $\delta_{i}=\omega\partial_{i}\omega$
we can show that
$\{\mathfrak{X}_{w}\}$ satisfies the defining
conditions of the double Schubert polynomials.
So the uniqueness of the double Schubert polynomials
implies $\mathfrak{X}_{w}=\mathfrak{S}_{w}.$
Then we have
$\omega(\mathfrak{S}_{w})
=\omega(\mathfrak{X}_{w})=
\omega(\omega\mathfrak{S}_{w^{-1}})=
\mathfrak{S}_{w^{-1}}.$
\end{proof}
\begin{remark}{\rm
For type $D$ we have
$s_{0}^{z}s_{0}^{t}\mathfrak{D}_{w}=\mathfrak{D}_{\hat{w}}$
where $\hat{w}$ is the image of $w$ under the involution
of $W_{\infty}'$ given by interchanging
$s_{1}$ and $s_{\hat{1}}.$
This is shown by the uniqueness of
solution as in the proof of the symmetry property.
See \cite[Cor. 4.10]{BH} for the corresponding
fact for the Billey-Haiman polynomials.}
\end{remark}
\subsection{Relation to type $\mathrm{A}$ double Schubert polynomials}
Let $\mathfrak{S}_{w}^{A}(z,t)$ denote the
type $\mathrm{A}$ double Schubert polynomials.
Recall that $\W_{\infty}$
has a parabolic subgroup
generated by $s_{i}\;(i\geq 1)$ which is
isomorphic to
$S_{\infty}.$
\begin{lem}\label{lem:typeA}
Let $w\in \W_{\infty}.$
If $w\in S_{\infty}$ then
$\mathfrak{S}_{w}(z,t;0)=\mathfrak{S}_{w}^{A}(z,t);$
otherwise we have
$\mathfrak{S}_{w}(z,t;0)=0.$
\end{lem}
\begin{proof} The polynomials
$\{\mathfrak{S}_{w}(z,t;0)\}$, $w\in S_{\infty},$
in $\Z[t]\otimes_{\Z}\Z[z]\subset \R_{\infty}$ satisfy the defining divided difference
equations for the double Schubert polynomials of type $\mathrm{A}$
(see Remark \ref{rem:A}). This proves the first statement.
Suppose $w\not\in S_{\infty}.$
In order to show
$\mathfrak{S}_{w}(z,t;0)=0$,
we use the universal localization map
$\Phi^{A}: \Z[z]\otimes \Z[t] \to \prod_{v \in S_\infty}\Z[t]$ of type $\mathrm{A}$, which is defined in the obvious manner.
An argument similar to the proof of Lem. \ref{injective}
shows that the map
$\Phi^{A}$ is injective.
For any $v\in S_{\infty}$ we have
$\Phi_{v}(\mathfrak{S}_{w}(z,t;0))
=\Phi_{v}(\mathfrak{S}_{w}(z,t;x))
$, which is equal to $\sigma_{w}^{(\infty)}|_{v}$
by construction of $\mathfrak{S}_{w}(z,t;x).$
Since $v\not\geq w,$ we have
$\sigma_{w}^{(\infty)}|_{v}=0.$
This implies
that the image of $\mathfrak{S}_{w}$
under the universal localization map
$\Phi^{A}$ is zero, thus $\mathfrak{S}_{w}(z,t;0)=0.$
\end{proof}
\subsection{Divided difference operators and the double Schubert polynomials}
We collect here some properties concerning
actions of the divided difference operators
on the double Schubert polynomials.
These will be used in the next section.
\begin{prop}\label{prop:divdifSch}
Let $w=s_{i_{1}}\cdots s_{i_{r}}$ be a reduced expression
of $w\in \W_{\infty}.$
Then the operators
$$\partial_{w}=\partial_{i_{1}}\cdots \partial_{i_{r}},
\quad
\delta_{w}=\delta_{i_{1}}\cdots \delta_{i_{r}}$$
do not depend on the reduced expressions and are
well-defined for $w\in \W_{\infty}.$
Moreover we have
\begin{eqnarray}
\partial_{w}\mathfrak{S}_{u}&=&\begin{cases}
\mathfrak{S}_{uw^{-1}} & \mbox{if}\;\ell(uw^{-1})=\ell(u)-\ell(w)\\
0 &\mbox{otherwise}
\end{cases},\label{eq:partialSch}\\
\delta_{w}\mathfrak{S}_{u}&=&\begin{cases}
\mathfrak{S}_{wu}& \mbox{if}\;\ell(wu)=\ell(u)-\ell(w)
\\
0&\mbox{otherwise}
\end{cases}.
\end{eqnarray}
\end{prop}
\begin{proof}
Since $\{\mathfrak{S}_{u}\}$ is a
$\Z[t]$-basis of $\R_{\infty}$,
the equation (\ref{eq:partialSch}) uniquely determines
a $\Z[t]$-linear operator, which we denote
by $\varphi_{w}.$
One can prove
$\partial_{i_{1}}\cdots \partial_{i_{r}}=\varphi_{w}$
by induction on the length of $w.$
The proof for $\delta_{i}$ is similar.
\end{proof}
\begin{remark}{\rm
The argument here is based on the
existence of $\{\mathfrak{S}_{w}\}$, but one can also prove it in the classical way, using braid relations (cf. e.g. \cite{BGG}),
by a direct calculation.}
\end{remark}
The next result will be used in the next section
(Prop. \ref{prop:InterP}).
\begin{lem}\label{Phie}
We have $\Phi_{e}(\partial_{u}\mathfrak{S}_{w})=\delta_{u,w}.$
\end{lem}
\begin{proof}
First note that
$\Phi_{e}(\mathfrak{S}_{w})=\sigma_{w}^{(\infty)}|_{e}
=\delta_{w,e}.$
If $\ell(wu^{-1})=\ell(w)-\ell(u)$ is satisfied
then
by Prop. \ref{prop:divdifSch}, we have
$\Phi_{e}(\partial_{u}\mathfrak{S}_{w})
=\Phi_{e}(\mathfrak{S}_{wu^{-1}})=\delta_{w,u}.$
Otherwise we have $\partial_{u}\mathfrak{S}_{w}=0$
again by Prop. \ref{prop:divdifSch}.
\end{proof}
\subsection{Interpolation formulae and their applications}
In this section we obtain an explicit combinatorial formula for the double Schubert polynomials, based on the explicit formulas for the single Schubert polynomials from \cite{BH}. The main tool for doing this is the interpolation formula, presented next.
\begin{prop}[Interpolation formula]\label{prop:InterP} For any $f\in \R_{\infty}$, we have
$$
f=\sum_{w\in \W_{\infty}}\Phi_{e}(\partial_{w}(f))\mathfrak{S}_{w}(z,t;x).
$$
\end{prop}
\begin{proof}
Since the double Schubert polynomials
$\{\mathfrak{S}_{w}\}$
form a $\Z[t]$-basis
of the ring $\R_{\infty}$, we write
$
f=\sum_{w\in \W_{\infty}}c_{w}\mathfrak{S}_{w},\;
c_{w}(t)\in \Z[t].
$
As
$\partial_{w}$ is $\Z[t]$-linear
we obtain by using Lemma \ref{Phie}
$$
\Phi_{e}(\partial_{w}f)=\sum_{u\in \W_{\infty}}c_{u}(t)
\Phi_{e}(\partial_{w}\mathfrak{S}_{u})
=\sum_{u\in \W_{\infty}}c_{u}(t)
\delta_{w,u}=c_{w}(t).
$$
\end{proof}
\begin{remark}\label{rem:y}{\rm
Let $y=(y_{1},y_{2},\ldots)$ be formal parameters.
On the extended ring $\Z[y]\otimes \R_{\infty}$,
we can introduce the Weyl group actions, the divided difference operators, and the localization map
by extending them $\Z[y]$-linearly.
Since the elements $\mathfrak{S}_{w} \,(w\in \W_{\infty})$ clearly
form a $\Z[y]\otimes \Z[t]$-basis of $\Z[y]\otimes \R_{\infty},$ the interpolation formula holds also
for any $f\in \Z[y]\otimes \R_{\infty}.$}
\end{remark}
\begin{prop}\label{prop:LS-BH}
Let $y=(y_{1},y_{2},\ldots)$ be formal parameters. Then
$$
\mathfrak{S}_{w}(z,t;x)=\sum_{u,v}\mathfrak{S}_{u}^{A}(y,t)
\mathfrak{S}_{v}(z,y;x)
$$
summed over all $
u\in S_{\infty},\,v\in \W_{\infty}$ such that
$w=uv,\,
\ell(u)+\ell(v)=\ell(w).$
\end{prop}
\begin{proof}
By the interpolation formula (see Remark \ref{rem:y}),
we have $$\mathfrak{S}_{w}(z,y;x)=\sum_{v}\Phi_{e}(\partial_{v}\mathfrak{S}_{w}(z,y;x))\mathfrak{S}_{v}(z,t;x).$$
By Prop. \ref{prop:divdifSch}, we see
$\partial_{v}\mathfrak{S}_{w}(z,y;x)
$ is equal to $\mathfrak{S}_{wv^{-1}}(z,y;x)$
if $\ell(wv^{-1})=\ell(w)-\ell(v)$,
and zero otherwise. Suppose $\ell(wv^{-1})=\ell(w)-\ell(v)$,
then
$\Phi_{e}\left(\mathfrak{S}_{wv^{-1}}(z,y;x)\right)=\mathfrak{S}_{wv^{-1}}(t,y;0)$ by the definition of $\Phi_{e}.$
By Lemma \ref{lem:typeA} this is
$\mathfrak{S}_{wv^{-1}}^{A}(t,y)$
if $wv^{-1}=u\in S_{\infty}$ and zero otherwise.
Then interchanging $t$ and $y$ we have the Proposition.
\end{proof}
Setting $y=0$ in the previous proposition, and using that $\mathfrak{S}_u^A(y,t) = \mathfrak{S}_{u^{-1}}^A(-t,-y)$ (cf. Theorem \ref{T:properties} (3) above, for type $\mathrm{A}$ double Schubert polynomials), we obtain:
\begin{cor}\label{cor:typeAexpand} Let $\mathfrak{S}_{w}^{A}(z)$ denote the
(single) Schubert polynomial of type $\mathrm{A}.$
We have
$$
\mathfrak{S}_{w}(z,t;x)=\sum_{u,v}
\mathfrak{S}_{u^{-1}}^{A}(-t)\mathfrak{S}_{v}(z;x)
$$
summed over all $u\in S_{\infty}, v\in \W_{\infty}$
such that $w=uv$ and $\ell(w)=\ell(u)+\ell(v).$
\end{cor}
There is an explicit combinatorial expression
for the Billey-Haiman polynomials $\mathfrak{S}_{w}(z;x)$
in terms of Schur $Q$-functions and type $\mathrm{A}$ (single)
Schubert polynomials (cf. Thms. 3 and 4 in \cite{BH}).
This, together with the above corollary
implies also an explicit formula in our case.
Moreover, the formula for
$\mathfrak{S}_{w}(z;x)$ is {\em positive},
and therefore this yields a positivity property for the double Schubert polynomials (see Thm. \ref{thm:positivity} below).
We will give an alternative proof for this
positivity result, independent of the results from {\em loc.cit.}
\subsection{Positivity property} To prove the positivity of the double Schubert polynomials, we begin with the following lemma (compare with Thms. 3 and 4 in \cite{BH}):
\begin{lem}\label{lem:S00}
We have
$
\mathfrak{S}_{w}(z;x)=
\sum_{u,v}\mathfrak{S}_{u}^{A}(z)\mathfrak{S}_{v}(0,0;x)
$
summed over all $u\in S_{\infty}, v\in \W_{\infty}$
such that $w=vu$ and $\ell(w)=\ell(u)+\ell(v).$
\end{lem}
\begin{remark}{\rm
The function $\mathfrak{S}_{v}(0,0;x)$
is the Stanley symmetric function
involved in the combinatorial expression for $\mathfrak{S}_{w}(z;x)$
from \cite{BH}. This follows from comparing the present
lemma with Billey-Haiman's formulas (4.6) and (4.8).}
\end{remark}
\begin{proof}
By (\ref{eq:relBH}) and the symmetry
property we have
$
\mathfrak{S}_{w}(z;x)=
\mathfrak{S}_{w}(z,0;x)
=\mathfrak{S}_{w^{-1}}(0,-z;x).$
Applying Prop. \ref{prop:LS-BH} with $y=0$
we can rewrite this as follows:
$$
\sum_{w^{-1}=u^{-1}v^{-1}}\mathfrak{S}_{u^{-1}}^{A}
(0,-z)\mathfrak{S}_{v^{-1}}(0,0;x)
=\sum_{w=vu}\mathfrak{S}_{u}^{A}
(z)\mathfrak{S}_{v}(0,0;x),$$
where the sum is over
$v\in \W_{\infty}, u\in S_{\infty}$ such that
$w^{-1}=u^{-1}v^{-1}$ and $\ell(w^{-1})=\ell(u^{-1})+\ell(v^{-1}).$ The last equality
follows from the symmetry property.
\end{proof}
We are finally ready to prove the positivity property of $\mathfrak{S}_{w}(z,t;x)$. Expand $\mathfrak{S}_{w}(z,t;x)$ as $$
\mathfrak{S}_{w}(z,t;x)=\sum_{\lambda
\in \mathcal{SP}}
f_{w,\lambda}(z,t)
F_{\lambda}(x),
$$ where
$F_{\lambda}(x)=Q_{\lambda}(x)$ for type
$\mathrm{C}$ and $P_{\lambda}(x)$ for type $\mathrm{D}.$
\begin{thm}[Positivity of double Schubert polynomials]\label{thm:positivity} For any $w \in W_n$, the coefficient $f_{w,\lambda}(z,t)$ is a polynomial in
$ \mathbb{N}[-t_{1},\ldots,-t_{n-1},
z_{1},\ldots,z_{n-1}]$.
\end{thm}
\begin{proof} The proof follows from the expression in Corollary \ref{cor:typeAexpand} and Lemma \ref{lem:Stanley} below, combined with Lemma \ref{lem:S00} and the fact that
$\mathfrak{S}_{u}^A(z)\in \N[z]$.
\end{proof}
\begin{lem}\label{lem:Stanley}
$\mathfrak{S}_{v}(0,0;x)$
is a linear combination of
Schur $Q$- (respectively $P$-) functions
with nonnegative integral coefficients. \end{lem}
\begin{proof} This follows from the transition equations
in \S \ref{ssec:trans}
(see also Remark \ref{rem:TransDSP}).
In fact, the functions
$\mathfrak{S}_{w}(0,0;x)$ satisfy the transition equations
specialized at $z=t=0$, with the Grassmannian
Schubert classes identified with the Schur $Q$- or $P$-functions.
More precisely, the
recursive formula for $F_{w}(x)=\mathfrak{S}_{w}(0,0;x)$ is positive, in the sense that the right-hand side of the
equation is a nonnegative integral
linear combination of the functions $\{F_{w}(x)\}.$
This implies that
$F_{w}(x)=\mathfrak{S}_{w}(0,0;x)$ can be expressed as
a linear combination of Schur $Q$- (or $P$-)
functions with nonnegative integer coefficients.
\end{proof}
\section{Formula for the longest element}\label{sec:Long}
\setcounter{equation}{0}
In this section, we give an explicit formula
for the double Schubert polynomials
associated with the longest element
$w_{0}^{(n)}$ in $W_{n}$ (and $W_{n}'$).
We note that our proof
of Theorem \ref{existC} is independent of this section.
\subsection{Removable boxes}
We start this section with some combinatorial
properties of factorial $Q$ and $P$-Schur functions.
The goal is to prove Prop. \ref{prop:deltaQ}, which shows how the divided difference operators act on the aforementioned functions.
See \S \ref{ssec:FacSchur} to recall the convention
for the shifted Young diagram $Y_{\lambda}.$
\begin{Def}
A box $x\in Y_\lambda$ is removable
if
$Y_\lambda-\{x\}$ is again a
shifted Young diagram of a strict partition.
Explicitly,
$x=(i,j)$ is removable if
$j=\lambda_i+i-1$ and $\lambda_{i+1}\leq \lambda_i-2.$
\end{Def}
To each box $x=(i,j)$ in $Y_\lambda$ we
associate its {\it content} $c(x)\in I_\infty$
(resp. $c'(x)\in I_\infty'$), defined
by $c(x)=j-i,$
and
$c'(x)=j-i+1$ if $i\ne j,$
$c'(i,i)=\hat{1} $ if $i$ is odd, and $c'(i,i)=1$ if $i$ is even.
Let $i\in I_\infty$ (resp. $i\in I_\infty'$).
We call $\lambda$ $i$-{\it removable\/}
if there is a removable box $x$ in $Y_\lambda$
such that $c(x)=i$ (resp. $c'(x)=i$).
Note that there is at most one
such $x$ for each $i\in I_\infty$
(resp. $i\in I_\infty').$
We say $\lambda$ is $i$-{\it unremovable\/}
if it is not $i$-{\it removable}.
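{\bf Example.}
To illustrate the definitions above, let $\lambda=(3,1)$, so that
$Y_{\lambda}$ consists of the boxes $(1,1),(1,2),(1,3),(2,2).$
The box $x=(1,3)$ is removable: here $j=\lambda_{1}+1-1=3$ and
$\lambda_{2}=1\leq \lambda_{1}-2$, and $Y_{\lambda}-\{x\}$ is the
shifted Young diagram of the strict partition $(2,1).$
The box $(2,2)$ is not removable, since $\lambda_{3}=0>\lambda_{2}-2.$
As $c(1,3)=2$ and $c'(1,3)=3$, the partition $\lambda$ is
$2$-removable in type $\mathrm{C}$ (resp. $3$-removable in type $\mathrm{D}$),
and $i$-unremovable for every other $i$.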
\setlength{\unitlength}{0.5mm}
\begin{center}
\begin{picture}(100,70)
\put(5,60){Type $\mathrm{C}$}
\put(5,55){\line(1,0){50}}
\put(5,45){\line(1,0){50}}
\put(15,35){\line(1,0){40}}
\put(25,25){\line(1,0){20}}
\put(35,15){\line(1,0){10}}
\put(5,45){\line(0,1){10}}
\put(15,35){\line(0,1){20}}
\put(25,25){\line(0,1){30}}
\put(35,15){\line(0,1){40}}
\put(45,15){\line(0,1){40}}
\put(55,35){\line(0,1){20}}
\put(8.5,47){\small{$0$}}
\put(18.5,37){\small{$0$}}
\put(28.5,27){\small{$0$}}
\put(18.5,47){\small{$1$}}
\put(28.5,37){\small{$1$}}
\put(38.5,27){\small{$1$}}
\put(38.5,17){\small{$0$}}
\put(38.5,37){\small{$2$}}
\put(28.5,47){\small{$2$}}
\put(38.5,47){\small{$3$}}
\put(48.5,47){\small{$4$}}
\put(48.5,37){\small{$3$}}
\put(5,5){$0$ or $3$\,\mbox{-removable}}
\end{picture}
\begin{picture}(100,70)
\put(5,60){Type $D$}
\put(5,55){\line(1,0){50}}
\put(5,45){\line(1,0){50}}
\put(15,35){\line(1,0){40}}
\put(25,25){\line(1,0){20}}
\put(35,15){\line(1,0){10}}
\put(5,45){\line(0,1){10}}
\put(15,35){\line(0,1){20}}
\put(25,25){\line(0,1){30}}
\put(35,15){\line(0,1){40}}
\put(45,15){\line(0,1){40}}
\put(55,35){\line(0,1){20}}
\put(8.5,47){\small{$\hat{1}$}}
\put(18.5,37){\small{$1$}}
\put(28.5,27){\small{$\hat{1}$}}
\put(18.5,47){\small{$2$}}
\put(28.5,37){\small{$2$}}
\put(38.5,27){\small{$2$}}
\put(38.5,17){\small{$1$}}
\put(38.5,37){\small{$3$}}
\put(28.5,47){\small{$3$}}
\put(38.5,47){\small{$4$}}
\put(48.5,47){\small{$5$}}
\put(48.5,37){\small{$4$}}
\put(5,5){$1$ or $4$\,\mbox{-removable}}
\end{picture}
\end{center}
The following facts are well-known (see e.g. \S 7 in \cite{IN}).
\begin{lem} \label{lem:s_iGrass}
Let $w_\lambda\in W_\infty^0$ (resp. $w'_\lambda\in W_\infty^{\hat{1}}$) denote the Grassmannian
element corresponding to $\lambda\in \mathcal{SP}.$
For $i\in I_\infty$ (resp. $i\in I_\infty'$),
a strict partition $\lambda$ is $i$-removable if and only if
$\ell(s_iw_\lambda)=\ell(w_\lambda)-1$
(resp. $\ell(s_iw_\lambda')=\ell(w_\lambda')-1$).
If $\lambda$ is $i$-removable
then $s_iw_\lambda$ (resp. $s_iw_\lambda'$) is also a
Grassmannian
element and the corresponding
strict partition is the one obtained from $\lambda$
by removing a (unique) box of content $i$.
\end{lem}
\begin{prop} \label{prop:deltaQ} Let $\lambda$ be a strict
partition and $i\in I_\infty$ (resp. $i\in I_\infty'$).
\begin{enumerate}
\item \label{remove} If $\lambda$ is $i$-removable,
then $\delta_{i}Q_{\lambda}(x|t)=Q_{\lambda'}(x|t),$
(resp. $\delta_{i}P_{\lambda}(x|t)=P_{\lambda'}(x|t)$)
where $\lambda'$ is the strict partition obtained by removing
the (unique) box of content $i$ from $\lambda;$
\item \label{unremov} If $\lambda$ is $i$-unremovable, then
$\delta_{i}Q_{\lambda}(x|t)=0$
(resp. $\delta_{i}P_{\lambda}(x|t)=0$),
that is to say
$s_{i}^{t}Q_{\lambda}(x|t)=Q_{\lambda}(x|t)$
(resp. $s_{i}^{t}P_{\lambda}(x|t)=P_{\lambda}(x|t)$).
\end{enumerate}
\end{prop}
\begin{proof} This follows from Lemma \ref{lem:s_iGrass} and from the fact that $\mathfrak{C}_{w_\lambda}=Q_\lambda(x|t)$ and $\mathfrak{D}_{w_\lambda'}=P_\lambda(x|t)$, hence we can apply the divided difference equations from Theorem \ref{existC}.
\end{proof}
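{\bf Example.}
To illustrate Prop. \ref{prop:deltaQ} in type $\mathrm{C}$, let
$\lambda=(3,1).$ Its unique removable box is $(1,3)$, of content
$c(1,3)=2$, so
$$
\delta_{2}Q_{3,1}(x|t)=Q_{2,1}(x|t),
\qquad
\delta_{i}Q_{3,1}(x|t)=0\quad (i\in I_{\infty},\; i\neq 2).
$$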
\subsection{Type $\mathrm{C}_{n}$ case}\label{ssec:LongC}
For $\lambda\in \mathcal{SP}$ we define
$$K_{\lambda}=
K_{\lambda}(z,t;x)
=Q_{\lambda}(x|t_{1},-z_{1},t_{2},-z_{2},\ldots,
t_{n},-z_{n},\ldots).
$$
We need the following two lemmata to
prove Thm. \ref{thm:Top}.
\begin{lem}\label{lem:deltaK} Set $\Lambda_{n}=\rho_{n}+\rho_{n-1}.$
We have
$
\delta_{n-1}\cdots\delta_{1}\delta_{0}
\delta_{1}\cdots\delta_{n-1}
K_{\Lambda_{n}}=K_{\Lambda_{n-1}}.
$
\end{lem}
\begin{lem}\label{lem:piDelta} We have
$
\pi_{n}(K_{\Lambda_{n}})
=\sigma_{w_{0}^{(n)}}^{(n)}$, where $\pi_n:R_\infty \to \eqcoh (\F_n)$ is the projection defined in \S \ref{ssec:projection}.
\end{lem}
\subsubsection{Proof of Theorem \ref{thm:Top} for type $\mathrm{C}$}
\label{ssec:PfLongC}
\begin{proof}
Let $w_{0}^{(n)}$ in $W_{n}$ be the longest
element in $W_{n}.$
We need to show that
\begin{equation}
\mathfrak{C}_{w_{0}^{(n)}}(z,t;x)=K_{\Lambda_{n}}
(z,t;x).\label{LongElmFormula}
\end{equation}
Let $w\in W_{\infty}.$ Choose any
$n$ such that $w\in W_{n}$
and set
$$
F_{w}:=\delta_{ww_{0}^{(n)}}K_{\Lambda_{n}}.
$$ Since $\ell(w w_0^{(n)}) + 2n+1 = \ell(w w_0^{(n+1)})$ and $ww_0^{(n)} s_n \cdots s_1s_0s_1 \cdots s_n = ww_0^{(n+1)}$, it follows that $\delta_{ww_0^{(n)}} \cdot \delta_n \cdots \delta_{1}\delta_0 \delta_{1}\cdots \delta_n = \delta_{ww_0^{(n+1)}}$. Then
Lem. \ref{lem:deltaK} yields
$\delta_{ww_{0}^{(n+1)}}K_{\Lambda_{n+1}}
=\delta_{ww_{0}^{(n)}}K_{\Lambda_{n}}$
for any $w\in W_{n}$, so $F_{w}$ is
independent of the choice of $n.$
In order to prove the theorem
it is enough to prove $F_{w}=\mathfrak{C}_{w}$
for all $w\in W_{\infty}.$
By definition of $F_{w}$
and basic properties of divided differences
we can show that
\begin{equation}
\delta_{i}F_{w}=\begin{cases}
F_{s_{i}w}& \ell(s_{i}w)=\ell(w)-1\\
0& \mbox{otherwise}
\end{cases}.\label{eq:divF}
\end{equation}
Now we claim that
$\pi_{n}(F_{w})=\sigma_{w}^{(n)}$
(for any $n$ such that $w\in W_{n}$).
In fact,
by commutativity of $\pi_{n}$ and
divided difference operators (Prop. \ref{prop:piCommD}), we have
$$
\pi_{n}(F_{w})=\delta_{ww_{0}^{(n)}}\pi_{n}(K_{\Lambda_{n}})
=\delta_{ww_{0}^{(n)}}\sigma_{w_{0}^{(n)}}^{(n)}
=\sigma_{w}^{(n)}.
$$
In the second equality we used
Lem. \ref{lem:piDelta},
and the last equality
is a consequence of (\ref{eq:divF}). Thus the claim is proved.
Since the claim holds for any
sufficiently large $n$,
we have $\Phi(F_{w})=\sigma_{w}^{(\infty)}$
(cf. Prop. \ref{prop:StabSch}).
Since $\Phi$ is an isomorphism, this implies $F_{w}=\mathfrak{C}_{w}$, as required.
\end{proof}
\subsubsection{Proof of Lemma \ref{lem:deltaK}}
\begin{proof} The
Lemma follows from the successive use of the following
equations (see the example below):
\begin{enumerate}
\item \label{deltaQ1}
$
\delta_{i}K_{\Lambda_{n}-1^{n-i-1}}
=K_{\Lambda_{n}-1^{n-i}}\quad (0\leq i\leq n-1),
$
\item \label{deltaQ2}
$
\delta_{i}K_{\Lambda_{n}-1^{n}-0^{n-i}1^{i-1}}
=K_{\Lambda_{n}-1^{n}-0^{n-i-1}1^{i}}\quad (1\leq i\leq n-1).
$
\end{enumerate}
We first prove (\ref{deltaQ1}).
For the case $i=0$,
we can apply Prop. \ref{prop:deltaQ} (\ref{remove})
directly to get the equation.
Suppose $1\leq i\leq n-1.$
Before applying $\delta_{i}$
to $K_{\Lambda_{n}-1^{n-i-1}}$
we switch the parameters
at $(2i-1)$-th and $2i$-th positions to get
$$
K_{\Lambda_{n}-1^{n-i-1}}
=Q_{\Lambda_{n}-1^{n-i-1}}
(x|t_{1},-z_{1},\ldots,-z_{i},t_{i},t_{i+1},-z_{i+1},\ldots,t_{n},-z_{n}).
$$
This is valid in view of Prop. \ref{prop:deltaQ} (\ref{unremov}) and
the fact that
$\Lambda_{n}-1^{n-i-1}$ is $(2i-1)$-unremovable.
In the right hand side,
the parameters $t_{i}$ and $t_{i+1}$
are on $2i$-th and $(2i+1)$-th
positions. Thus the operator $\delta_{i}$ on
this function is equal to the $2i$-th divided difference
operator ``$\delta_{2i}$'' with respect to
the sequence of the rearranged parameters
$(t_{1},-z_{1},\ldots,-z_{i},t_{i},t_{i+1},-z_{i+1},\ldots,t_{n},-z_{n})$
(see the example below).
Thus we have by Prop. \ref{prop:deltaQ} (\ref{remove})
$$
\delta_{i}K_{\Lambda_{n}-1^{n-i-1}}
=Q_{\Lambda_{n}-1^{n-i}}(x|t_{1},-z_{1},\ldots,-z_{i},t_{i},t_{i+1},-z_{i+1},\ldots,t_{n},-z_{n}),
$$
namely we remove the box of content $2i$
from $\Lambda_{n}-1^{n-i-1}.$
Then again by Prop. \ref{prop:deltaQ} (\ref{unremov}), the last
function is equal to
$K_{\Lambda_{n}-1^{n-i}}$;
here we notice $\Lambda_{n}-1^{n-i}$ is $(2i-1)$-unremovable.
Next we prove (\ref{deltaQ2}).
In this case, by Prop. \ref{prop:deltaQ} (\ref{unremov}),
we can switch $2i$-th and $(2i+1)$-th
parameters to get
$$
K_{\Lambda_{n}-1^{n}-0^{n-i}1^{i-1}}
=Q_{\Lambda_{n}-1^{n}-0^{n-i}1^{i-1}}(x|t_{1},-z_{1},\ldots,-z_{i-1},t_{i},t_{i+1},-z_{i},\ldots,
t_{n},-z_{n}).
$$
Here we used the fact that
$\Lambda_{n}-1^{n}-0^{n-i}1^{i-1}$
is $2i$-unremovable.
Now we apply $\delta_i$ to the function.
The operator $\delta_i$
is now ``$\delta_{2i-1}$''
with respect to
the sequence of the rearranged parameters
$(t_{1},-z_{1},\ldots,-z_{i-1},t_{i},t_{i+1},-z_{i},\ldots,
t_{n},-z_{n}).
$
By applying Prop.
\ref{prop:deltaQ} (\ref{remove}),
we have
$$
\delta_i
K_{\Lambda_{n}-1^{n}-0^{n-i}1^{i-1}}
=Q_{\Lambda_{n}-1^{n}-0^{n-i-1}1^{i}}
(x|t_{1},-z_{1},\ldots,-z_{i-1},t_{i},t_{i+1},-z_{i},\ldots,
t_{n},-z_{n}).
$$
The last expression is equal to
$K_{\Lambda_{n}-1^{n}-0^{n-i-1}1^{i}}$ since
$\Lambda_{n}-1^{n}-0^{n-i-1}1^{i}$ is $2i$-unremovable.
\end{proof}
{\bf Examples.}
Here we illustrate the process showing
$\delta_{2}\delta_{1}\delta_{0}\delta_{1}\delta_{2}
K_{\Lambda_{3}}=K_{\Lambda_{2}}$
(case $n=3$ in Lem. \ref{lem:deltaK}).
\setlength{\unitlength}{0.4mm}
\begin{center}
\begin{picture}(800,50)
\put(5,45){\line(1,0){50}}
\put(5,35){\line(1,0){50}}
\put(15,25){\line(1,0){30}}
\put(25,15){\line(1,0){10}}
\put(5,35){\line(0,1){10}}
\put(15,25){\line(0,1){20}}
\put(25,15){\line(0,1){30}}
\put(35,15){\line(0,1){30}}
\put(45,25){\line(0,1){20}}
\put(55,35){\line(0,1){10}}
\put(8.5,38){\small{$0$}}
\put(18.5,28){\small{$0$}}
\put(28.5,18){\small{$0$}}
\put(18.5,38){\small{$1$}}
\put(28.5,28){\small{$1$}}
\put(38.5,28){\small{$2$}}
\put(28.5,38){\small{$2$}}
\put(38.5,38){\small{$3$}}
\put(48.5,38){\small{$4$}}
\put(75,45){\line(1,0){40}}
\put(75,35){\line(1,0){40}}
\put(85,25){\line(1,0){30}}
\put(95,15){\line(1,0){10}}
\put(75,35){\line(0,1){10}}
\put(85,25){\line(0,1){20}}
\put(95,15){\line(0,1){30}}
\put(105,15){\line(0,1){30}}
\put(115,25){\line(0,1){20}}
\put(78.5,38){\small{$0$}}
\put(88.5,28){\small{$0$}}
\put(98.5,18){\small{$0$}}
\put(88.5,38){\small{$1$}}
\put(98.5,28){\small{$1$}}
\put(108.5,28){\small{$2$}}
\put(98.5,38){\small{$2$}}
\put(108.5,38){\small{$3$}}
\put(130,0){
\put(5,45){\line(1,0){40}}
\put(5,35){\line(1,0){40}}
\put(15,25){\line(1,0){20}}
\put(25,15){\line(1,0){10}}
\put(5,35){\line(0,1){10}}
\put(15,25){\line(0,1){20}}
\put(25,15){\line(0,1){30}}
\put(35,15){\line(0,1){30}}
\put(45,35){\line(0,1){10}}
\put(8.5,38){\small{$0$}}
\put(18.5,28){\small{$0$}}
\put(28.5,18){\small{$0$}}
\put(18.5,38){\small{$1$}}
\put(28.5,28){\small{$1$}}
\put(28.5,38){\small{$2$}}
\put(38.5,38){\small{$3$}}
}
\put(190,0){
\put(5,45){\line(1,0){40}}
\put(5,35){\line(1,0){40}}
\put(15,25){\line(1,0){20}}
\put(5,35){\line(0,1){10}}
\put(15,25){\line(0,1){20}}
\put(25,25){\line(0,1){20}}
\put(35,25){\line(0,1){20}}
\put(45,35){\line(0,1){10}}
\put(8.5,38){\small{$0$}}
\put(18.5,28){\small{$0$}}
\put(18.5,38){\small{$1$}}
\put(28.5,28){\small{$1$}}
\put(28.5,38){\small{$2$}}
\put(38.5,38){\small{$3$}}
}
\put(250,0){
\put(5,45){\line(1,0){40}}
\put(5,35){\line(1,0){40}}
\put(15,25){\line(1,0){10}}
\put(5,35){\line(0,1){10}}
\put(15,25){\line(0,1){20}}
\put(25,25){\line(0,1){20}}
\put(35,35){\line(0,1){10}}
\put(45,35){\line(0,1){10}}
\put(8.5,38){\small{$0$}}
\put(18.5,28){\small{$0$}}
\put(18.5,38){\small{$1$}}
\put(28.5,38){\small{$2$}}
\put(38.5,38){\small{$3$}}
}
\put(310,0){
\put(5,45){\line(1,0){30}}
\put(5,35){\line(1,0){30}}
\put(15,25){\line(1,0){10}}
\put(5,35){\line(0,1){10}}
\put(15,25){\line(0,1){20}}
\put(25,25){\line(0,1){20}}
\put(35,35){\line(0,1){10}}
\put(8.5,38){\small{$0$}}
\put(18.5,28){\small{$0$}}
\put(18.5,38){\small{$1$}}
\put(28.5,38){\small{$2$}}}
\put(56,35){$\longrightarrow$}
\put(116,35){$\longrightarrow$}
\put(176,35){$\longrightarrow$}
\put(236,35){$\longrightarrow$}
\put(296,35){$\longrightarrow$}
\put(60,43){\small{$\delta_{2}$}}
\put(120,43){\small{$\delta_{1}$}}
\put(180,43){\small{$\delta_{0}$}}
\put(240,43){\small{$\delta_{1}$}}
\put(300,43){\small{$\delta_{2}$}}
\put(19,5){\small{$K_{5,3,1}$}}
\put(89,5){\small{$K_{4,3,1}$}}
\put(149,5){\small{$K_{4,2,1}$}}
\put(209,5){\small{$K_{4,2}$}}
\put(269,5){\small{$K_{4,1}$}}
\put(319,5){\small{$K_{3,1}$}}
\end{picture}
\end{center}
We spell out the first arrow:
$\delta_{2}K_{5,3,1}=K_{4,3,1}$
(equation (\ref{deltaQ1}) in
Lem. \ref{lem:deltaK} for $n=3,\,i=2$).
As is indicated in the proof, we divide
this equality into the following
four steps:
\begin{eqnarray*}
K_{5,3,1}=Q_{5,3,1}(x|t_{1},-z_{1},\underline{t_{2},-z_{2}},t_{3},-z_{3})
\underset{(a)}{=}
Q_{5,3,1}(x|t_{1},-z_{1},-z_{2},t_{2},t_{3},-z_{3})\\
\overset{\delta_{2}}{\longrightarrow}
Q_{4,3,1}(x|t_{1},-z_{1},\underline{-z_{2},t_{2}},t_{3},-z_{3})
\underset{(b)}{=}Q_{4,3,1}(x|t_{1},-z_{1},t_{2},-z_{2},t_{3},-z_{3})=K_{4,3,1}.
\end{eqnarray*}
In the equality $(a)$ we used the fact that
$\Lambda_{3}=(5,3,1)$ is $3$-unremovable,
so the underlined pair of variables can be exchanged
(by Prop. \ref{prop:deltaQ}, (2)).
Then we apply $\delta_{2}$ to this function.
Note that the variables $t_{2},t_{3}$ are in the
$4$-th and $5$-th positions in the parameters of
the function. So if we rename the parameters as
$
f=Q_{5,3,1}(x|t_{1},-z_{1},-z_{2},t_{2},t_{3},-z_{3})
=Q_{5,3,1}(x|u_{1},u_{2},u_{3},u_{4},u_{5},u_{6}),$
then $\delta_{2}$ is ``$\delta_{4}$''
with respect to the parameter sequence $(u_{i})_{i}.$
Namely we have
$$
\delta_{2}f=\frac{f-s_{2}^{t}f}{t_{3}-t_{2}}
=\frac{f-s_{4}^{u}f}{u_{5}-u_{4}},
$$
where $s_{4}^{u}$ exchanges $u_{4}$ and $u_{5}.$
Since $\Lambda_{3}=(5,3,1)$ is $4$-removable,
we see from Prop. \ref{prop:deltaQ}, (1) that
$\delta_{2}=$``$\delta_{4}$'' removes the box of content $4$
from $(5,3,1)$ to obtain the shape $(4,3,1).$
Then finally, in the equality $(b)$, we exchange the variables
$-z_{2},t_{2}$ again using Prop. \ref{prop:deltaQ}, (2).
This is valid since $(4,3,1)$ is $3$-unremovable.
Thus we obtain $K_{4,3,1}.$
\subsubsection{Proof of Lemma \ref{lem:piDelta}}
\begin{proof}
We calculate $\Phi_{v}(K_{\Lambda_{n}})$ for $v\in W_{n}.$
Recall that the map $\Phi_{v}: R_{\infty}\rightarrow
\Z[t]$ is the $\Z[t]$-algebra homomorphism
given by $x_{i}\mapsto t_{v,i}$
and $z_{i}\mapsto t_{v(i)}.$
So we have
$$
\Phi_{v}(K_{\Lambda_{n}})
=Q_{\Lambda_{n}}
(t_{v,1},\ldots,t_{v,n}|t_{1},-{t_{v(1)}},
\ldots, t_{n},-{t_{v(n)}}).
$$
Note that $t_{v,i}=0$ for $i>n$ since $v$ is an element
in $W_{n}.$
From the factorization formula (Lem. \ref{lem:factorization}),
this is equal to
$$\prod_{1\leq i\leq n}2t_{v,i}
\prod_{1\leq i<j\leq n}(t_{v,i}+t_{v,j})
\times s_{\rho_{n-1}}
(t_{v,1},\ldots,t_{v,n}|t_{1},-{t_{v(1)}},
\ldots, t_{n},-{t_{v(n)}}).
$$
The presence of the
factor $\prod_{i}2t_{v,i}$
implies that
$\Phi_{v}(K_{\Lambda_{n}})$ vanishes unless
$v(1),\ldots,v(n)$ are all negative.
So from now on we assume
$v=(\overline{\sigma(1)},\ldots,\overline{\sigma(n)})$
for some permutation $\sigma \in S_{n}.$
Then we have $t_{v,i}=t_{\sigma(i)}$
and $t_{v(i)}=-t_{\sigma(i)}$ so the last factor
of factorial Schur polynomial becomes
$$s_{\rho_{n-1}}
(t_{\sigma(1)},\ldots,t_{\sigma(n)}|t_{1},{t_{\sigma(1)}},
\ldots, t_{n},{t_{\sigma(n)}}).$$
This is equal to
$s_{\rho_{n-1}}
(t_{1},\ldots,t_{n}|t_{1},{t_{\sigma(1)}},
\ldots, t_{n},{t_{\sigma(n)}})$ because
$s_{\rho_{n-1}}$ is symmetric in the first set of variables.
From Lem. \ref{lem:A-long} we know that this polynomial factors into
$\prod_{1\leq i<j\leq n}
(t_{j}-t_{\sigma(i)}).$
This is zero except for the case $\sigma=\mathrm{id},$
namely $v=w_{0}^{(n)}.$
If $\sigma=\mathrm{id}$ then
$\Phi_{v}(K_{\Lambda_{n}})$ becomes
$\prod_{1\leq i\leq n}2t_{i}
\prod_{1\leq i<j\leq n}(t_{i}+t_{j})
\prod_{1\leq i<j\leq n}
(t_{j}-t_{i})=\sigma_{w_{0}^{(n)}}^{(n)}|_{w_{0}^{(n)}}.$
\end{proof}
\subsection{Type $\mathrm{D}_{n}$ case}
Set
$K'_{\lambda}(z,t;x)=
P_{\lambda}
(x|t_{1},-z_{1},\ldots,t_{n-1},-z_{n-1},\ldots).
$
Our goal in this section is to prove the formula
$$
\mathfrak{D}_{w_{0}^{(n)}}
=K'_{2\rho_{n-1}}
(z,t;x).
$$
We use the same strategy
as in \S \ref{ssec:LongC} to
prove this.
Actually the proof in \S \ref{ssec:PfLongC}
works also in this case
using the following two lemmata,
which will be proved below.
\begin{lem}\label{lem:DeltaKD} We have
$\delta_{n-1}\cdots\delta_{2}\delta_{\hat{1}}\delta_{1}\delta_{2}
\cdots\delta_{n-1}K_{2\rho_{n-1}}'
=K_{2\rho_{n-2}}'.$
\end{lem}
\begin{lem}\label{lem:piDeltaD} We have
$\pi_{n}(K'_{2\rho_{n-1}})=\sigma_{w^{(n)}_{0}}^{(n)}.$
\end{lem}
\subsubsection{A technical lemma}
We need the following technical lemma which is used
in the proof of Lem. \ref{lem:DeltaKD}.
Throughout this section,
$(u_{1},u_{2},u_{3},\ldots)$
denotes an arbitrary sequence of variables independent of $t_{1},t_{2}.$
\begin{lem}\label{lem:hat1}
Let $\lambda=(\lambda_{1},\ldots,\lambda_{r})$
be a strict partition such that $r$ is odd and $\lambda_{r}\geq 3.$
Set $\tilde{t}=(u_{1},t_{1},t_{2},u_{2},u_{3},\ldots).$
Then
$
\delta_{\hat{1}}P_{\lambda_{1},\ldots,\lambda_{r},1}(x|\tilde{t})
=P_{\lambda_{1},\ldots,\lambda_{r}}(x|\tilde{t}).
$
\end{lem}
\begin{sublem}\label{lem:deltahat1} Suppose
$\lambda$ is $1$-, $2$- and $\hat{1}$-unremovable.
Then we have
$
\delta_{\hat{1}}
P_{\lambda}(x|\tilde{t})=0.\label{eq:deltahat1}
$
\end{sublem}
\begin{proof}
Since
$\lambda$
is $1$- and $2$-unremovable,
we can rearrange the first three parameters
by using Prop. \ref{prop:deltaQ}, so we have
$P_{\lambda}(x|\tilde{t})=P_{\lambda}(x|t_{1},t_{2},u_{1},u_{2},u_{3},\ldots).$
Because $\lambda$ is also $\hat{1}$-unremovable,
it follows from Prop. \ref{prop:deltaQ} that $\delta_{\hat{1}}P_{\lambda}(x|t_{1},t_{2},u_{1},u_{2},u_{3},\ldots)=0.$
\end{proof}
\begin{sublem}[Special case of Lem. \ref{lem:hat1} for $r=1$]\label{lem:Pk1}
We have $
\delta_{\hat{1}}P_{k,1}(x|\tilde{t})=P_{k}(x|\tilde{t})$
for $k\geq 3.$
\end{sublem}
\begin{proof}
Substituting $\tilde{t}$ for $t$ into (\ref{eq:P2row})
we have
$$P_{k,1}(x|\tilde{t})=
P_{k}(x|\tilde{t})P_{1}(x|\tilde{t})
-P_{k+1}(x|\tilde{t})-(u_{k-1}+u_{1})P_{k}(x|\tilde{t}).$$
By the explicit formula $P_{1}(x|\tilde{t})=P_{1}(x),$
we have
$\delta_{\hat{1}}P_{1}(x|\tilde{t})=1.$
We also have $\delta_{\hat{1}}P_{k}(x|\tilde{t})=\delta_{\hat{1}}P_{k+1}(x|\tilde{t})=0$
by Sublemma \ref{lem:deltahat1}.
Then we use the Leibniz rule
$\delta_{\hat{1}}(fg)=\delta_{\hat{1}}(f)g+(s_{\hat{1}}f)\delta_{\hat{1}}(g)$
to get $\delta_{\hat{1}}P_{k,1}(x|\tilde{t})=P_{k}(x|\tilde{t}).$
\end{proof}
\begin{proof}[Proof of Lem. \ref{lem:hat1}.]
From the definition of the Pfaffian
it follows that
$$
P_{\lambda_{1},\ldots,\lambda_{r},1}(x|\tilde{t})
=\sum_{j=1}^{r}(-1)^{r-j}P_{\lambda_{j},1}(x|\tilde{t})P_{\lambda_{1},\ldots,
\widehat{\lambda_{j}},\ldots,\lambda_{r}}(x|\tilde{t}).
$$
Then the Leibniz rule
combined with Sublemma
\ref{lem:deltahat1} and Sublemma \ref{lem:Pk1}
implies
$$
\delta_{\hat{1}}
P_{\lambda_{1},\ldots,\lambda_{r},1}(x|\tilde{t})
=\sum_{j=1}^{r}(-1)^{r-j}P_{\lambda_{j}}(x|\tilde{t})P_{\lambda_{1},\ldots,
\widehat{\lambda_{j}},\ldots,\lambda_{r}}(x|\tilde{t})
=P_{\lambda_{1},\ldots,\lambda_{r}}(x|\tilde{t}),
$$
where in the last equality we used the expansion
formula for Pfaffians again. \end{proof}
\subsubsection{Proof of Lem. \ref{lem:DeltaKD}}
\begin{proof}
Consider the case when $n$ is even.
By applying the same method of calculation as in
type $\mathrm{C}$ case, we have
$$
\delta_{1}\delta_{2}\cdots\delta_{n-1}K'_{2\rho_{n-1}}
=P_{\rho_{n-1}+\rho_{n-2}}(x|-z_{1},t_{1},t_{2},-z_{2},t_{3},-z_{3},\ldots).
$$
The problem here is that $\rho_{n-1}+\rho_{n-2}$ is not
$1$-unremovable when $n$ is even,
so we cannot rewrite the function
as $K_{\rho_{n-1}+\rho_{n-2}}'.$
Nevertheless, by using Lem. \ref{lem:hat1},
we can show
$$
\delta_{\hat{1}}P_{\rho_{n-1}+\rho_{n-2}}(x|-z_{1},t_{1},t_{2},-z_{2},t_{3},-z_{3},\ldots)
=K'_{\rho_{n-1}+\rho_{n-2}-0^{n-2}1}.
$$
The rest of the calculation is similar to the type $\mathrm{C}$ case.
If $n$ is odd, we can show this equation
using only Prop. \ref{prop:deltaQ},
as in the type $\mathrm{C}$ case.
\end{proof}
\subsubsection{Proof of Lem. \ref{lem:piDeltaD}}
\begin{proof}
The proof is similar to that of Lem. \ref{lem:piDelta}, using
Lem. \ref{lem:A-long}, \ref{lem:A-longOdd},
\ref{lem:factorizationD}, and
\ref{lem:factorizationDodd}.
We calculate $\Phi_{v}(K'_{2\rho_{n-1}})$ for $v\in W'_{n}.$
We have
$$
\Phi_{v}(K'_{2\rho_{n-1}})
=P_{2\rho_{n-1}}
(t_{v,1},\ldots,t_{v,n}|t_{1},-{t_{v(1)}},
\ldots, t_{n},-{t_{v(n)}}).
$$
Note that $t_{v,i}=0$ for $i>n$ since $v$ is an element
in $W'_{n}.$
Assume now that $n$ is even.
From the factorization formula (Lem. \ref{lem:factorizationD}),
this is equal to
$$
\prod_{1\leq i<j\leq n}(t_{v,i}+t_{v,j})
\times s_{\rho_{n-1}}
(t_{v,1},\ldots,t_{v,n}|t_{1},-{t_{v(1)}},
\ldots, t_{n},-{t_{v(n)}}).
$$
The factorial Schur polynomial
factorizes further into linear terms by Lem. \ref{lem:A-long},
and we finally obtain $$
\Phi_{v}(K'_{2\rho_{n-1}})=
\prod_{1\leq i<j\leq n}(t_{v,i}+t_{v,j})
\prod_{1\leq i<j\leq n}
(t_{j}+t_{v(i)}).
$$
We may assume
$v=(\overline{\sigma(1)},\ldots,\overline{\sigma(n)})$
for some $\sigma \in S_{n}$, since otherwise
$t_{v,i}=t_{v,j}=0$ for some $i,j$ with $i\ne j$,
and then the factor $\prod_{1\leq i<j\leq n}
(t_{v,i}+t_{v,j})$ vanishes.
Then
the factor $\prod_{1\leq i<j\leq n}
(t_{j}+t_{v(i)})$ is $\prod_{1\leq i<j\leq n}
(t_{j}-t_{\sigma(i)}).$
This is zero except for the case $\sigma=\mathrm{id},$
namely $v=w_{0}^{(n)}.$
If $v=w_{0}^{(n)}$, we have
$\Phi_{v}(K'_{2\rho_{n-1}})=
\prod_{1\leq i<j\leq n}(t_{i}+t_{j})
\prod_{1\leq i<j\leq n}
(t_{j}-t_{i})=\sigma_{w_{0}^{(n)}}^{(n)}|_{w_{0}^{(n)}}.$
Next we consider the case when $n$ is odd.
Note that the longest element $w_{0}^{(n)}$ in this case
is $1\bar{2}\bar{3}\cdots\bar{n}.$
Let $s(v)$ denote the number of
nonzero entries in $t_{v,1},\ldots,t_{v,n}.$
Then we have
$s(v)\leq n-1$ since $v\in W_{n}'$ and $n$ is odd.
We use the following identity:
\begin{eqnarray}
&&P_{2\rho_{n-1}}(x_{1},\ldots,x_{n-1}|t_{1},-z_{1},\ldots,t_{n-1},-z_{n-1})\nonumber\\
&=&\prod_{1\leq i<j\leq n-1}(x_{i}+x_{j})
\times s_{\rho_{n-1}+1^{n-1}}(x_{1},\ldots,x_{n-1}|t_{1},-z_{1},\ldots,t_{n-1},-z_{n-1}).\label{eq:facD}
\end{eqnarray}
If $s(v)<n-1$,
then $s(v)\leq n-3$ because $v\in W_{n}'.$
This means that there are at least $3$
zeros in $t_{v,1},\ldots,t_{v,n}.$
Since the factor
$\prod_{1\leq i<j\leq n-1}(x_{i}+x_{j})$
appears in (\ref{eq:facD}), we have $\Phi_{v}(K'_{2\rho_{n-1}})=0.$
So we suppose $s(v)=n-1.$
By a calculation using the definition of the factorial
Schur polynomial, we see that
$ s_{\rho_{n-1}+1^{n-1}}(x_{1},\ldots,x_{n-1}|t_{1},-z_{1},\ldots,
t_{n-1},-z_{n-1})$
is divisible by the factor $\prod_{i=1}^{n-1}(t_{1}-x_{i}).$
By this fact we may assume
$t_{v,1},\ldots,t_{v,n}$ is a permutation of
$0,t_{2},t_{3},\ldots,t_{n}$ since
otherwise $\Phi_{v}(K'_{2\rho_{n-1}})$ is zero.
Thus under the assumption, we have
$$
\Phi_{v}(K'_{2\rho_{n-1}})
=\prod_{2\leq i<j\leq n}(t_{i}+t_{j})
s_{\rho_{n-1}+1^{n-1}}(t_{2},\ldots,t_{n}|t_{1},-t_{v(1)},t_{2},-t_{v(2)},\ldots,
t_{n-1},-t_{v(n-1)}).
$$
By Lem. \ref{lem:A-longOdd}
this factorizes into
$
\prod_{2\leq i<j\leq n}(t_{i}+t_{j})
\prod_{j=2}^{n}(t_{j}-t_{1})
\prod_{1\leq i<j\leq n}(t_{j}+t_{v(i)}).
$
Now our assumption is that
the negative elements in
$\{v(1),\ldots,v(n)\}$ are exactly $\{2,3,\ldots,n\}.$
Among these elements, only
$w_{0}^{(n)}$ gives a non-zero
polynomial, which
is shown to be
$\prod_{1\leq i<j\leq n}(t_{i}+t_{j})(t_{j}-t_{i})
=\sigma_{w_{0}^{(n)}}^{(n)}|_{w_{0}^{(n)}}.$
\end{proof}
\section{Geometric construction
of the universal localization map}\label{sec:geometry}
\setcounter{equation}{0}
In this section, we construct the morphism of $\Z[t]$-algebras $\tilde{\pi}_{\infty}:\R_\infty \longrightarrow \invlim\, \eqcoh(\F_n)$
from a geometric point of view.
We start this section by describing
the embedding $\F_{n}\hookrightarrow \F_{n+1}$
explicitly, and by calculating the localizations
of the Chern roots of the tautological bundles.
Then we introduce some particular cohomology classes
$\beta_{i}$ in $H_{T_{n}}^{*}(\F_{n}) $, by using the geometry of isotropic flag varieties. These classes satisfy the relations of the $Q$-Schur functions $Q_{i}(x)$, and mapping $Q_i(x)$ to $\beta_i$ ultimately leads to the homomorphism
$\tilde{\pi}_{\infty}.$ In particular, this explains why
the Schur $Q$-functions enter into our
theory (cf. Prop. \ref{prop:pi}).
The final goal is to establish the connection
between $\tilde{\pi}_{\infty}$ and the universal
localization map $\Phi$ (Thm. \ref{thm:piQpiQ}).
The arguments in the preceding sections are
logically independent of this section.
However, we believe that the results in this section provide
the reader with some
insight into the underlying geometric idea
of the algebraic construction.
\subsection{Flag varieties of isotropic flags}\label{ssection:flagv} Each group $G_n$ is the group of automorphisms preserving a non-degenerate bilinear form $\langle \cdot, \cdot \rangle$ on a complex vector space $V_n$. The pair $(V_n, \langle \cdot , \cdot \rangle)$ is the following:
\begin{enumerate}
\item In type $\mathrm{C}_n$, $V_n = \C^{2n}$; fix $\e_n^{*}, \dots, \e_1^{*}, \e_1, \dots ,\e_n$ a basis for $V_n$. Then $\langle\cdot , \cdot \rangle$ is the skew-symmetric form given by $\langle\e_i, \e_j^*\rangle = \delta_{i,j}$ (the ordering of the basis elements will be important later, when we will embed $G_n$ into $G_{n+1}$).
\item In types $\mathrm{B}_n$ and $\mathrm{D}_n$, $V_n$ is an odd, respectively even-dimensional complex vector space. Let $\e_n^{*}, \dots , \e_1^{*}, \e_0, \e_1, \dots , \e_n$ respectively $\e_n^{*}, \dots , \e_1^{*}, \e_1, \dots , \e_n$ be a basis of $V_n$. Then $\langle \cdot , \cdot \rangle$ is the symmetric form such that $\langle\e_i,\e_j^*\rangle=\delta_{i,j}$.
\end{enumerate}
A subspace $V$ of $V_n$ will be called {\em isotropic} if $\langle \vecu,\vecv\rangle = 0 $ for any $\vecu,\vecv \in V$. Then $\F_n$ is the variety consisting of complete isotropic flags with respect to the appropriate bilinear form. For example, in type $\mathrm{C}_n$, $\F_n$ consists of nested sequences of vector spaces
\[ F_1 \subset F_2 \subset \dots \subset F_n \subset V_n = \C^{2n} \/, \] such that each $F_i$ is isotropic and $\dim F_i =i$. Note that the maximal dimension of an isotropic subspace of $\C^{2n}$ is $n$; but the flag above can be completed to a full flag of $\C^{2n}$ by taking $F_{n+i}= F_{n-i}^\perp$, using the non-degeneracy of the form $\langle \cdot , \cdot \rangle$. A similar description can be given in types $\mathrm{B}_n$ and $\mathrm{D}_n$, with the added condition that, in type $\mathrm{D}_n$, \[\dim F_n \cap \langle \e_n^{*}, \dots , \e_1^*\rangle \equiv 0 \mod 2 \/; \] in this case we say that all $F_n$ are {\em in the same family} (cf. \cite[p.~68]{FP}).
The flag variety $\F_{n}$ carries
a transitive left action of the group $G_n,$
and can be identified with the homogeneous space $G_n/B_n$, where $B_{n}$ is the Borel subgroup consisting of
upper triangular matrices in $G_{n}$.
Let $T_{n}$ be the maximal torus
in $G_{n}$ consisting of
diagonal matrices in $G_{n}.$
Let $t = \diag(\xi_n^{-1}, \dots ,\xi_1^{-1},\xi_1, \dots , \xi_n)$ be a torus element in types $\mathrm{C_n,D_n}$, and $t= \diag(\xi_n^{-1}, \dots ,\xi_1^{-1},1,\xi_1, \dots , \xi_n)$ in type $\mathrm{B}_n$. We denote by $t_{i}$ the character
of $T_{n}$ defined by $t\mapsto \xi_{i}^{-1}\;(t\in T_{n}).$
Then the weight of $\C \,\e_i$ is $-t_i$ and that of $\C\, \e_i^*$ is $t_i$.
We identify $t_i \in H^2_{T_n}(pt)$ with $c_1^T(\C\e_i^*)$, where $\C\e_i^*$ is the (trivial, but not equivariantly trivial) line bundle over $pt$
with fibre $\C\e_i^*$. For $v \in W_n$, the corresponding $T_{n}$-fixed point ${e}_v$ is \[{e}_v: \langle \e_{v(n)}^{*} \rangle \subset \langle \e_{v(n)}^{*}, \e_{v(n-1)}^{*}\rangle \subset \dots \subset \langle \e_{v(n)}^{*}, \e_{v(n-1)}^{*}, \dots , \e_{v(1)}^{*}\rangle \subset V_n \/. \]
\subsection{Equivariant embeddings of flag varieties}\label{ssec:embeddings}
There is a natural embedding $G_n \hookrightarrow G_{n+1}$, given explicitly by
$$
g\mapsto
\left(\begin{array}{c|c|c}
1 & & \\
\hline
& g & \\
\hline
& & 1
\end{array}\right).
$$
This corresponds to the embedding of Dynkin diagrams in each type.
This also induces embeddings $B_{n}\hookrightarrow B_{n+1},$ $T_{n}\hookrightarrow T_{n+1},$ and ultimately $\varphi_n
:\F_{n}\hookrightarrow \F_{n+1}.$
The embedding $\varphi_n$ sends
the complete isotropic flag $F_1 \subset \cdots \subset F_n$ of $V_n$ to the complete isotropic flag of $V_{n+1}=
\C\, \e_{n+1}^{*} \oplus V_{n}\oplus \C\, \e_{n+1}$:
\[ \C\, \e_{n+1}^* \subset \C\, \e_{n+1}^* \oplus F_1 \subset \cdots \subset \C\, \e_{n+1}^* \oplus F_n \/. \]
Clearly $\varphi_n$ is
equivariant with respect to the embedding
$T_{n}\hookrightarrow T_{n+1}.$
\subsection{Localization of Chern classes of tautological bundles}
Consider the flag of tautological (isotropic) vector bundles
$$
0=\mathcal{V}_{n+1}\subset \mathcal{V}_{n}\subset
\cdots
\subset \mathcal{V}_{1}\subset \mathcal{E},
\quad \mathrm{rank}\,\mathcal{V}_{i}=n-i+1,
$$
where $\mathcal{E}$ is the trivial bundle
with fiber $V_{n}$ and
$\mathcal{V}_{i}$ is defined to be the vector
subbundle of $\mathcal{E}$ whose
fiber over the point $F_{\bullet}= F_1 \subset \dots \subset F_n$ in $\mathcal{F}_{n}$
is $F_{n-i+1}.$
Let $z_{i}=c_{1}^{T}(\mathcal{V}_{i}/\mathcal{V}_{i+1})$
\begin{footnote}{The bundle $\mathcal{V}_i/\mathcal{V}_{i+1}$ is in fact negative; for example, in type $\mathrm{C}$, if $n=1$, $\F_1=\mathbb{P}^1$ and $\mathcal{V}_1 = \mathcal{O}(-1)$. The reason for choosing positive sign for $z_i$ is to be consistent with the conventions used by Billey-Haiman in \cite{BH}.}\end{footnote}denote the equivariant Chern class of the line bundle
$\mathcal{V}_{i}/\mathcal{V}_{i+1}$.
\begin{prop}\label{prop:loc} Let $v \in W_n$. Then the localization map $\iota_v^*: \eqcoh (\F_n) \to \eqcoh ({e}_v)$ satisfies $\iota_v^*(z_i) = t_{v(i)}$. \end{prop}
\begin{proof} The pull-back of the line bundle $\mathcal{V}_{i}/\mathcal{V}_{i+1}$ via $\iota_v^*$ is the line bundle over ${e}_v$ with fibre $\C\e_{v(i)}^{*}$, which has (equivariant) first Chern class $t_{v(i)}$. \end{proof}
\subsection{The cohomology class $\beta_{i}$}
In this section we introduce the cohomology classes
$\beta_{i}$, which will later be identified with the $Q$-Schur functions $Q_i(x)$.
The torus action on $V_{n}$ induces a $T_{n}$-equivariant splitting $\mathcal{E}=\oplus_{i=1}^{n}\mathcal{L}_{i}\oplus\mathcal{L}_{i}^{*}$ ($\mathcal{E}=\oplus_{i=1}^{n}\mathcal{L}_{i}\oplus\mathcal{L}_{i}^{*}\oplus\mathcal{L}_{0}$ for type $\mathrm{B}_{n}$)
where $\mathcal{L}_{i}$ (resp. $\mathcal{L}_{i}^{*}$) is the trivial line bundle over $\F_n$
with fiber $\C\e_{i}$ (resp. $\C\e_{i}^{*}$). Recall from \S \ref{ssection:flagv} that $T_n$ acts on $\mathcal{L}_i^*$ by weight $t_i$ and that $t_i = c_1^T(\mathcal{L}_i^*)$.
Let $\F_n$ be the flag variety of type $\mathrm{C}_{n}$ or $\mathrm{D}_{n}$ and
set $\mathcal{V}=\mathcal{V}_{1}$.
We have the following
exact sequence of $T_{n}$-equivariant
vector bundles:
\begin{equation}
0\longrightarrow \mathcal{V}\longrightarrow
\mathcal{E}
\longrightarrow
\mathcal{V}^{*}\longrightarrow 0,\label{tauto}
\end{equation}
where $\mathcal{V}^{*}$ denotes the dual bundle
of $\mathcal{V}$ in $\mathcal{E}$ with respect to
the bilinear form.
Let $\mathcal{L}=\oplus_{i=1}^{n}\mathcal{L}_{i}$
and $\mathcal{L}^{*}=\oplus_{i=1}^{n}\mathcal{L}_{i}^{*}.$
Since $\mathcal{E}=\mathcal{L}\oplus
\mathcal{L}^{*},$
we have $c^{T}(\mathcal{E})=c^{T}(\mathcal{L})c^{T}(\mathcal{L}^{*}).$ Define
the class $\beta_{i}\in H_{T_{n}}^{*}(\F_{n})$ by
$$
\beta_{i}=c_{i}^{T}(\mathcal{V}^{*}-\mathcal{L}),
$$
where $c_{i}^{T}(\mathcal{A}-\mathcal{B})$
is the term of degree $i$ in the formal expansion
of $c^{T}(\mathcal{A})/c^{T}(\mathcal{B}).$
Using the relation $c^{T}(\mathcal{L})c^{T}(\mathcal{L}^{*})
=c^{T}(\mathcal{V})c^{T}(\mathcal{V}^{*})$,
we also have the expression:
$$
\beta_{i}=c_{i}^{T}(\mathcal{L}^{*}-\mathcal{V}).
$$
In terms of the Chern classes $z_{i},t_{i}$,
the class $\beta_{i}$ has
the following two equivalent expressions:
\begin{equation}
\sum_{i=0}^{\infty}\beta_{i}u^{i}=\prod_{i=1}^{n}\frac{1-z_{i}u}{1-t_{i}u}=
\prod_{i=1}^{n}\frac{1+t_{i}u}{1+z_{i}u}.\label{eq:twoexpr}
\end{equation}
\begin{lem}\label{lem:beta}
The classes $\beta_{i}$ satisfy
the same relations as the $Q$-Schur functions
$Q_{i}(x)$, i.e.
$$
\beta_{i}^{2}+2\sum_{j=1}^{i}(-1)^{j}\beta_{i+j}\beta_{i-j}=0\quad
\mbox{for}\; i\geq 1.
$$
\end{lem}
\begin{proof}
We have the following two expressions:
$$
\sum_{i=0}^{\infty}\beta_{i}u^{i}
=
\prod_{i=1}^{n}\frac{1-z_{i}u}{1-t_{i}u},\quad
\sum_{j=0}^{\infty}(-1)^{j}\beta_{j}u^{j}=
\prod_{i=1}^{n}\frac{1-t_{i}u}{1-z_{i}u}.
$$
The lemma follows by multiplying these two expressions, whose product equals $1$,
and then extracting the degree $2i$ part for each $i\geq 1$.
\end{proof}
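As a quick sanity check of these quadratic relations (not part of the argument), one can verify them numerically from the generating function $\sum_{k\geq 0}Q_{k}(x)u^{k}=\prod_{i}(1+x_{i}u)/(1-x_{i}u)$, truncated to three variables with generic rational values. The script below is an illustrative sketch; the variable names and the choice of test values are ours.

```python
from fractions import Fraction

def q_series(xs, order):
    """Coefficients Q_0(x), ..., Q_order(x) of prod_i (1+x_i u)/(1-x_i u)."""
    c = [Fraction(1)] + [Fraction(0)] * order
    for x in xs:
        # multiply the truncated series by (1 + x u)
        for k in range(order, 0, -1):
            c[k] = c[k] + x * c[k - 1]
        # divide the truncated series by (1 - x u): c'_k = c_k + x c'_{k-1}
        for k in range(1, order + 1):
            c[k] = c[k] + x * c[k - 1]
    return c

# three generic rational values standing in for x_1, x_2, x_3
xs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)]
Q = q_series(xs, 8)

# Q_i^2 + 2 sum_{j=1}^{i} (-1)^j Q_{i+j} Q_{i-j} = 0 for i >= 1
for i in range(1, 5):
    rel = Q[i] ** 2 + 2 * sum((-1) ** j * Q[i + j] * Q[i - j]
                              for j in range(1, i + 1))
    assert rel == 0
```

Since the classes $\beta_i$ have a generating function of the same product form, the identical cancellation is what drives the proof above.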
Minor modifications are needed if $\F_n$ is the flag variety of type $\mathrm{B}_{n}$. In this case the tautological sequence of isotropic flag subbundles consists of $
0=\mathcal{V}_{n+1}\subset \mathcal{V}_{n}\subset
\cdots
\subset \mathcal{V}_{1}\subset \mathcal{E}= \C^{2n+1} \times \F_n$, but the dual bundle $\mathcal{V}_1^*$ of $\mathcal{V}_1$ is not isomorphic to $\mathcal{E}/\mathcal{V}_1$, which has rank $n+1$.
However, the line bundle $\mathcal{V}_1^\perp/\mathcal{V}_1$ is equivariantly isomorphic to $\bigwedge^{2n+1} \mathcal{E}$ (cf. \cite[p.~75]{FP}), so $c_1^T(\mathcal{V}_1^\perp/\mathcal{V}_1)=0$; here $\mathcal{V}_1^\perp$ denotes the bundle whose fibre over $V_1 \subset \cdots \subset V_n$ is the subspace of vectors in $\C^{2n+1}$ perpendicular to those in $V_n$ with respect to the non-degenerate form $\langle \cdot , \cdot \rangle$.
It follows that the bundle $\mathcal{E}/\mathcal{V}_{1}$ has (equivariant) total Chern class $(1-z_1u) \cdots (1-z_nu)$, which is the same as the total Chern class of $\mathcal{V}_{1}^*.$ Similarly, the total Chern class of $\mathcal{E}/\mathcal{L}$
with $\mathcal{L}=\oplus_{i=1}^{n}\mathcal{L}_{i}$ is $(1+t_1u)\cdots (1+t_nu)$ and equals $c^T(\mathcal{L}^*)$. So the definition of $\beta_i$ and the proofs of its properties remain unchanged.
Recall that in \S \ref{ssec:projection} we introduced $\pi_{n}:\R_{\infty}
\longrightarrow H_{T_{n}}^{*}(\mathcal{F}_{n})$
by using the universal localization map $\Phi$. The following is the key fact used in the proof of the main
result of this section.
\begin{lem}\label{lem:key} We have
$
\pi_{n}(Q_{i}(x))=\beta_{i}.
$
\end{lem}
\begin{proof}
It is enough to show that
$\iota_{v}^{*}(\beta_{i})=Q_{i}(t_{v})
$ for $v\in \W_{n}.$
By Prop. \ref{prop:loc} and the definition of
$\beta_{i}$, we have
$$
\iota_{v}^{*}\left(
\sum_{i=0}^{\infty}\beta_{i}u^{i}
\right)
=\iota_{v}^{*}\left(\prod_{i=1}^{n}\frac{1-z_{i}u}{1-t_{i}u}
\right)
=\prod_{i=1}^{n}\frac{1-t_{v(i)}u}{1-t_{i}u}.$$
For each $i$ with $v(i)$ positive, the factor $1-t_{v(i)}u$ cancels, and
the last expression
becomes
$$
\prod_{v(i)\;{\small \mbox{negative}}}\frac{1-t_{v(i)}u}{1+t_{v(i)}u}
=\sum_{i=0}^{\infty}
Q_{i}(t_{v})u^{i}$$
where the last equality follows
from the definition of $Q_{i}(x)$ and
that of $t_{v}.$
\end{proof}
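To illustrate the computation in the proof in the smallest case, take type $\mathrm{C}_{1}$ (so $n=1$) and the nontrivial element $v=\bar{1}$, so that $t_{v(1)}=-t_{1}$. Then
$$
\iota_{v}^{*}\left(\sum_{i=0}^{\infty}\beta_{i}u^{i}\right)
=\frac{1-t_{v(1)}u}{1-t_{1}u}
=\frac{1+t_{1}u}{1-t_{1}u}
=1+2t_{1}u+2t_{1}^{2}u^{2}+\cdots
=\sum_{i=0}^{\infty}Q_{i}(t_{1})u^{i},
$$
in agreement with $Q_{0}=1$ and $Q_{i}(x_{1})=2x_{1}^{i}$ for $i\geq 1$ in a single variable.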
\subsection{Homomorphism $\tilde{\pi}_{n}$}
We consider $\F_{n}$ of one of the types
$\mathrm{B}_{n},\mathrm{C}_{n},$ and $\mathrm{D}_{n}.$
We next define the projection homomorphism
from $\R_{\infty}$ to $H_{T_{n}}^{*}(\F_{n})$, which will be used to construct the geometric analogue $\tilde{\pi}_n$ of $\pi_n$.
Note that $R_{\infty}$ is a proper subalgebra of $\R_{\infty}$ in types $\mathrm{B}$ and $\mathrm{D}.$
We regard $H_{T_{n}}^{*}(\F_{n})$
as $\Z[t]$-module via the natural projection
$\Z[t]\rightarrow \Z[t_{1},\ldots,t_{n}].$
\begin{prop}\label{prop:pi}
There exists a homomorphism of graded
$\Z[t]$-algebras
$\tilde{\pi}_{n}:R_{\infty}\rightarrow H_{T_{n}}^{*}(\F_{n})$
such that
$$
\tilde{\pi}_{n}(Q_{i}(x))=\beta_{i}\quad
(i\geq 1),
\quad
\tilde{\pi}_{n}(z_{i})=z_{i}\quad
(1\leq i\leq n)\quad\mbox{and}\quad
\tilde{\pi}_{n}(z_{i})=0\quad(i>n).$$
\end{prop}
\begin{proof} This follows from the fact that $R_\infty$ is generated as a $\Z[t]$-algebra by $Q_{i}(x),z_{i}\;(i\geq 1)$, and that the ideal of relations among $Q_i(x)$ is generated by those in (\ref{eq:quadraticQ}) (see \cite{Mac}, III \S 8). Since the elements $\beta_i$ satisfy also those relations by Lemma \ref{lem:beta}, the result follows.\end{proof}
\subsection{Types $\mathrm{B}$ and $\mathrm{D}$} In this section, we extend $\tilde{\pi}_{n}$
to a homomorphism from $R_{\infty}'$ to $H_{T_{n}}^{*}(\F_{n})$. The key to this is the identity $P_i(x) = \frac{1}{2} Q_i(x)$.
\begin{prop} Let
$\F_{n}$ be the flag variety of type $\mathrm{B}_{n}$ or $\mathrm{D}_{n}.$
Then there is an (integral) cohomology class
$\gamma_{i}$ such that
$2\gamma_{i}=\beta_{i}.$
Moreover, the classes $\gamma_i$ satisfy the following quadratic relations:
$$
\gamma_{i}^{2}+2\sum_{j=1}^{i-1}(-1)^{j}
\gamma_{i+j}\gamma_{i-j}
+(-1)^{i}\gamma_{2i}=0 \quad(i>0).
$$
\end{prop}
\begin{proof} Define $\gamma_i=\frac{1}{2} \beta_i$. Then, as in the proof of Lemma \ref{lem:key}, the localization $\iota^*_v(\gamma_i) = \frac{1}{2} Q_i(t_v) = P_i(t_v)$ which is a polynomial with integer coefficients.
The quadratic relations follow immediately from Lem. \ref{lem:beta}.
\end{proof}
The proposition implies immediately the following:
\begin{prop}\label{prop:piBD}
Let $\F_{n}$ be the flag variety of type $\mathrm{B}_{n}$ or $\mathrm{D}_{n}.$
There exists a homomorphism of graded
$\Z[t]$-algebras
$\tilde{\pi}_{n}:R_{\infty}'\rightarrow H_{T_{n}}^{*}(\F_{n})$
such that
$$
\tilde{\pi}_{n}(P_{i}(x))=\gamma_{i}\quad
(i\geq 1)
\quad\mbox{and}\quad
\tilde{\pi}_{n}(z_{i})=z_{i}\quad
(1\leq i\leq n)\quad\mbox{and}\quad
\tilde{\pi}_{n}(z_{i})=0\quad(i>n).$$
\end{prop}
\begin{remark} {\rm It is easy to see (cf. \cite[\S 6.2]{FP}) that the morphism $\tilde{\pi}_n:R_\infty \to \eqcoh(\F_n)$ is surjective in type $\mathrm{C}$, and also in types $\mathrm{B,D}$, but with coefficients over $\Z[1/2]$. But in fact, using that $\Phi:R_\infty' \to H_\infty$ is an isomorphism, one can show that surjectivity holds over $\Z$ as well.}\end{remark}
\subsection{The geometric interpretation of the universal localization map $\Phi$}
From Prop. \ref{prop:pi} and Prop. \ref{prop:piBD},
we have a $\Z[t]$-algebra homomorphism
$
\tilde{\pi}_{n}: \R_{\infty}\longrightarrow
H_{T_{n}}^{*}(\F_{n})
$
for all types $\mathrm{B,C,D}.$
Since $\tilde{\pi}_n$ is compatible with the maps $\varphi_{n}^{*}:\eqcoh(\F_{n+1}) \to \eqcoh(\F_n)$ induced by the embeddings $\F_n \hookrightarrow \F_{n+1}$,
there is an induced homomorphism $$\tilde{\pi}_{\infty}:\R_\infty \longrightarrow \invlim \eqcoh (\F_n).$$
Recall from \S \ref{ssec:projection} that we have the natural embedding
$\pi_\infty: R_{\infty}\hookrightarrow
\invlim \eqcoh (\F_n)$, defined via the localization map $\Phi$. Then:
\begin{thm}\label{thm:piQpiQ}
We have that $\tilde{\pi}_{\infty}=\pi_\infty$.
\end{thm}
\begin{proof}
It is enough to show that $\tilde{\pi}_{n}=\pi_{n}$. To do that, we
compare both maps on the generators
of $\R_{\infty}.$
We know that $\tilde{\pi}_{n}(Q_{i}(x))={\pi}_{n}(Q_{i}(x))
=\beta_{i}$
by Lem. \ref{lem:key} and this implies that
$\tilde{\pi}_{n}(P_{i}(x))={\pi}_{n}(P_{i}(x))$
for types $\mathrm{B}_{n}$ and $\mathrm{D}_{n}.$
It remains to show
$\pi_{n}(z_{i})=\tilde{\pi}_{n}(z_{i}).$
In this case, for $v\in \W_{n}$,
\[
\iota_{v}^{*}\pi_{n}(z_{i})
=\Phi_{v}(z_{i})^{(n)}=t_{v(i)}^{(n)}
=\iota_{v}^{*}\tilde{\pi}_{n}(z_{i})\/.
\]
This completes the proof.
\end{proof}
\subsection{Integrality of Fulton's classes $c_{i}$}
We take the opportunity to briefly discuss, in the present setting, an integrality property of some
cohomology classes considered by Fulton in \cite{F}, in relation to degeneracy loci in classical types. This property was proved before, in a more general setting, by Edidin and Graham \cite{EG}, using the geometry of quadric bundles.
Let $\mathcal{F}_{n}$ be the flag variety of type
$\mathrm{B}_{n}$ or $\mathrm{D}_{n}.$ Recall that $c_i$ is the equivariant cohomology class in $H_{T_{n}}^{*}(\F_{n},\Z[{\textstyle\frac{1}{2}}])$ defined by:
$$
c_{i}={\textstyle\frac{1}{2}}
\left(
e_{i}(-z_{1},\ldots,-z_{n})+e_{i}(t_{1},\ldots,t_{n})
\right)\quad(0\leq i\leq n).
$$
\begin{prop}
We have
$
c_{i}=
\sum_{j=1}^{i-\epsilon}(-1)^{j}e_{j}(t_{1},\ldots,t_{n})\gamma_{i-j}\;(0\leq i\leq n),
$
where $\epsilon=0$ if $i$ is even and $\epsilon=1$
if $i$ is odd.
In particular, $c_{i}$ are classes defined over $\Z$.
\end{prop}
\begin{proof}
Using the definition of $\beta_{i}$
and $
\sum_{i=0}^{n}2c_{i}u^{i}
=\prod_{i=1}^{n}(1-z_{i}u)
+\prod_{i=1}^{n}(1+t_{i}u)
$
we have
$$\sum_{i=0}^{n}2c_{i}u^{i}
=\prod_{i=1}^{n}(1+t_{i}u)+
\sum_{i=0}^{\infty}\beta_{i}u^{i}\prod_{i=1}^{n}(1-t_{i}u).
$$
Comparing the degree-$i$ parts of both sides and using $2\gamma_{i}=\beta_{i}$,
we obtain the stated equation.
\end{proof}
\section{Kazarian's formula for Lagrangian Schubert classes}
\label{sec:Kazarian}
\setcounter{equation}{0}
In this section, we give a brief discussion
of a ``multi-Schur Pfaffian'' expression for the Schubert classes
of the Lagrangian Grassmannian.
This formula appeared in a preprint of Kazarian \cite{Ka},
in the context of a degeneracy locus formula for
Lagrangian vector bundles.
\subsection{Multi-Schur Pfaffian}
We recall the definition of the multi-Schur Pfaffian from \cite{Ka}.
Let $\lambda=(\lambda_{1}>\cdots>\lambda_{r}\geq 0)$
be any strict partition with $r$ even.
Consider an $r$-tuple of infinite sequences
$c^{(i)}=\{c^{(i)}_k\}_{k=0}^\infty \;(i=1,\ldots,r)$,
where each $c^{(i)}_k$ is an element in
a commutative ring with unit.
For $a\geq b\geq 0$, we set
$$
c_{a,b}^{(i),(j)}
:=c_{a}^{(i)}
c_{b}^{(j)}
+2\sum_{k=1}^{b}(-1)^{k}
c_{a+k}^{(i)}
c_{b-k}^{(j)}.
$$
Assume that the matrix $(c_{\lambda_i,\lambda_j}^{(i),(j)})_{i,j}$
is skew-symmetric, i.e.
$c_{\lambda_i,\lambda_j}^{(i),(j)}
=-c_{\lambda_j,\lambda_i}^{(j),(i)}$ for
$1\leq i,j\leq r.$
Then we consider its Pfaffian
$$\mathrm{Pf}_\lambda(c^{(1)},\ldots,c^{(r)})
=
\mathrm{Pf}\left(
c_{\lambda_i,\lambda_j}^{(i),(j)}
\right)_{1\leq i<j\leq r},
$$
called the {\it multi-Schur Pfaffian}.
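For example, when $r=2$ the Pfaffian of a $2\times 2$ skew-symmetric matrix is its $(1,2)$ entry, so the definition unwinds to
$$
\mathrm{Pf}_{\lambda}(c^{(1)},c^{(2)})
=c_{\lambda_{1},\lambda_{2}}^{(1),(2)}
=c_{\lambda_{1}}^{(1)}c_{\lambda_{2}}^{(2)}
+2\sum_{k=1}^{\lambda_{2}}(-1)^{k}
c_{\lambda_{1}+k}^{(1)}c_{\lambda_{2}-k}^{(2)}.
$$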
\subsection{Factorial Schur functions
as a multi-Schur Pfaffian}
We introduce the following
versions of factorial $Q$-Schur functions $Q_k(x|t)$:
$$
\sum_{k=0}^\infty Q_k^{(l)}(x|t)u^k=
\prod_{i=1}^\infty
\frac{1+x_iu}{1-x_iu}
\prod_{j=1}^{l-1}(1-t_ju).
$$
Note that, by definition,
$Q_k^{(k)}(x|t)=Q_k(x|t)$ and
$Q_k^{(1)}(x|t)=Q_k(x).$
\begin{prop} \label{prop:multiPf} Let $\lambda=(\lambda_{1}>\cdots>\lambda_{r}\geq 0)$
be any strict partition with $r$ even.
Set $c^{(i)}_k=Q_{k}^{(\lambda_i)}(x|t)$
for $i=1,\ldots,r.$
Then the matrix $(c_{\lambda_i,\lambda_j}^{(i),(j)})_{i,j}$
is skew-symmetric
and
we have
$$
\mathrm{Pf}_\lambda(c^{(1)},\ldots,c^{(r)})
=Q_\lambda(x|t).
$$
\end{prop}
\begin{proof}
In view of the Pfaffian formula for $Q_{\lambda}(x|t)$ (Prop. \ref{prop:Pf}),
it suffices to show the following identity:
\begin{equation}
Q_{k,l}(x|t)
=Q_{k}^{(k)}(x|t)Q_{l}^{(l)}(x|t)
+2\sum_{i=1}^{l}(-1)^{i}Q_{k+i}^{(k)}(x|t)Q_{l-i}^{(l)}(x|t).\label{Fkl}
\end{equation}
By induction we can show that for $k\geq 0$
\begin{eqnarray*}
Q_{j}^{(j+k)}(x|t)&=&\sum_{i=0}^{j}(-1)^{i}
e_{i}(t_{j+k-1},t_{j+k-2},\ldots,t_{j-i+1})Q_{j-i}(x|t),\\
Q_{j}^{(j-k)}(x|t)&=&\sum_{i=0}^{k}
h_{i}(t_{j-k},t_{j-k+1},\ldots,t_{j-i})Q_{j-i}(x|t).
\end{eqnarray*}
Substituting these
expressions into (\ref{Fkl}), we get
a quadratic expression
in $Q_i(x|t)$'s.
The obtained expression
coincides with a formula for
$Q_{k,l}(x|t)$ proved in \cite[Prop.7.1]{Ik}.
\end{proof}
\subsection{Schubert classes in the Lagrangian Grassmannian
as multi-Pfaffians} We use the notation from \S \ref{sec:geometry}.
The next formula expresses the equivariant Schubert class $\sigma_{w_{\lambda}}^{(n)}$ in a flag variety of type $\mathrm{C}$ in terms of a multi-Pfaffian. Recall that this is also the equivariant Schubert class for the Schubert variety indexed by $\lambda$ in the Lagrangian Grassmannian, so this is a ``Giambelli formula'' in this case. Another such expression, in terms of ordinary Pfaffians, was proved by the first author in \cite{Ik}.
\begin{prop}[cf. \cite{Ka}, Thm. 1.1] Set $\mathcal{U}_{k}
=\oplus_{j=k}^n \mathcal{L}_{j}.$
Then $
\sigma_{w_{\lambda}}^{(n)}
=\mathrm{Pf}_{\lambda}(c^{T}(\mathcal{E}-\mathcal{V}-\mathcal{U}_{\lambda_{1}}),\ldots,
c^{T}(\mathcal{E}-\mathcal{V}-\mathcal{U}_{\lambda_{r}})).$
\end{prop}
\begin{proof} By Thm. \ref{PhiFacQ}, we know
$\pi_{n}(Q_{\lambda}(x|t))=\sigma_{w_{\lambda}}^{(n)}.$
On the other hand the formula of Prop. \ref{prop:multiPf}
writes $Q_{\lambda}(x|t)$ as a multi-Pfaffian.
So it is enough to show that:
$$
c^{T}_{i}(\mathcal{E}-\mathcal{V}-\mathcal{U}_{k})=
\pi_{n}(Q_{i}^{(k)}(x|t)).
$$
We have
$$
c^{T}(\mathcal{E}-\mathcal{V}-\mathcal{U}_{k})
=\frac{\prod_{i=1}^n(1-t_i^2u^{2})}{
\prod_{i=1}^{n}(1+z_{i}u)\prod_{j=k}^n(1-t_ju)}
=\prod_{i=1}^n
\frac{1+t_iu}{1+z_{i}u}
\prod_{j=1}^{k-1}(1-t_ju).
$$
The first factor of the
right hand side is the generating
function for $\beta_{i}=\pi_{n}(Q_{i}(x))\;
(i\geq 0).$
So the last expression is
$$
\sum_{i=0}^{\infty}\pi_{n}(Q_{i}(x))u^{i}
\prod_{j=1}^{k-1}(1-t_ju)
=\sum_{i=0}^{\infty}\pi_{n}(Q_{i}^{(k)}(x|t))u^{i}.
$$
Hence the proposition is proved.
\end{proof}
\section{Type $\mathrm{C}$ double Schubert polynomials for $w \in W_3$}
\tiny
\renewcommand{\arraystretch}{1.2}
$\begin{array}{|c|l|}
\hline
123 & 1 \\
\hline
\123 & Q_1 \\
\hline
213 & Q_1+(z_1-t_1) \\
\hline
\213 & Q_2+Q_1(-t_1) \\
\hline
2\13 & Q_2+Q_1z_1 \\
\hline
\2\13 & Q_{21} \\
\hline
1\23 & Q_3+Q_2(z_1-t_1)+Q_1(-z_1t_1) \\
\hline
\1\23 & Q_{31}+Q_{21}(z_1-t_1) \\
\hline
132 & Q_{1}+(z_1+z_2-t_1-t_2) \\
\hline
\132 & 2Q_{2}+Q_{1}(z_1+z_2-t_1-t_2) \\
\hline
312 & Q_2+Q_1(z_1-t_1-t_2)+(z_1-t_1)(z_1-t_2) \\
\hline
\312 & Q_3+Q_2(-t_1-t_2)+Q_1 t_1t_2 \\
\hline
3\12 & Q_{3}+Q_{21}+Q_{2}(2z_1-t_1-t_2)+Q_{1}(z_1)(z_1-t_1-t_2) \\
\hline
\3\12 & Q_{31}+Q_{21}(-t_1-t_2) \\
\hline
1\32 & Q_{4}+Q_{3}(z_1-t_1-t_2)+Q_{2}(t_1t_2-z_1(t_1+t_2))+Q_1z_1t_1t_2 \\
\hline
\1\32 & Q_{41}+Q_{31}(z_1-t_1-t_2)+Q_{21}(t_1t_2-z_1(t_1+t_2)) \\
\hline
231 & Q_2+Q_{1}(z_1+z_2-t_1)+(z_1-t_1)(z_2-t_1) \\
\hline
\231 & Q_{3}+Q_{21}+Q_{2}(z_1+z_2-2t_1)+Q_{1}(-t_1)(z_1+z_2-t_1) \\
\hline
321 & Q_{3}+Q_{21}+Q_{2}(2z_1+z_2-2t_1-t_2)+Q_{1}(z_1+z_2-t_1)(z_1-t_1-t_2)+(z_1-t_1)(z_1-t_2)(z_2-t_1) \\
\hline
\321 & Q_{4}+Q_{31}+Q_{3}(z_1+z_2-2t_1-t_2)+Q_{21}(-t_1-t_2)+\\ & Q_{2}(t_1t_2-(t_1+t_2)(z_1+z_2-t_1))+Q_{1}t_1t_2(z_1+z_2-t_1) \\
\hline
3\21 & Q_{31}+Q_{3}(z_1-t_1)+Q_{21}(z_1-t_1)+Q_{2}(z_1-t_1)^2+Q_1 z_1(-t_1)(z_1-t_1) \\
\hline
\3\21 & Q_{32}+Q_{31}(-t_1)+Q_{21} t_1^2 \\
\hline
2\31 & Q_{41}+Q_{4}(z_1-t_1)+Q_{31}(z_1-t_1-t_2)+Q_{3}(z_1-t_1)(z_1-t_1-t_2)+Q_{21}(t_1t_2-z_1(t_1+t_2))+\\ &
Q_{2}(z_1-t_1)(t_1t_2-z_1(t_1+t_2))+Q_{1}(z_1-t_1)z_1t_1t_2 \\
\hline
\2\31 & Q_{42}+Q_{32}(z_1-t_1-t_2)+Q_{41}(-t_1)+Q_{31}(-t_1)(z_1-t_1-t_2)+Q_{21}t_1^2(z_1-t_2) \\
\hline
23\1 & Q_{3}+Q_{2}(z_1+z_2)+Q_{1} z_1z_2 \\
\hline
\23\1 & Q_{31}+Q_{21}(z_1+z_2) \\
\hline
32\1 & Q_{4}+Q_{31}+Q_{3}(2z_1+z_2-t_1-t_2)+Q_{21}(z_1+z_2)+\\ & Q_{2}((z_1+z_2)(z_1-t_1-t_2)+z_1z_2)+Q_{1}z_1z_2(z_1-t_1-t_2) \\
\hline
\32\1 & Q_{32}+Q_{41}+Q_{31}(z_1+z_2-t_1-t_2)+Q_{21}(z_1+z_2)(-t_1-t_2) \\
\hline
3\2\1 & Q_{32}+Q_{31}z_1+Q_{21}z_1^2 \\
\hline
\3\2\1 & Q_{321} \\
\hline
2\3\1 & Q_{42}+Q_{32}(z_1-t_1-t_2)+Q_{41}z_1+Q_{31} z_1(z_1-t_1-t_2)+Q_{21}z_1^2(-t_1-t_2) \\
\hline
\2\3\1 & Q_{421}+Q_{321}(z_1-t_1-t_2) \\
\hline
13\2 & Q_{4}+Q_{3}(z_1+z_2-t_1)+Q_{2}(z_1z_2-t_1(z_1+z_2))+Q_1(-t_1)z_1z_2 \\
\hline
\13\2 & Q_{41}+Q_{31}(z_1+z_2-t_1)+Q_{21}(z_1z_2-t_1(z_1+z_2)) \\
\hline
31\2 & Q_{41}+Q_{4}(z_1-t_1)+Q_{31}(z_1+z_2-t_1)+Q_3(z_1-t_1)(z_1+z_2-t_1)+Q_{21}(z_1z_2-t_1(z_1+z_2))+\\ & Q_{2}(z_1-t_1)(z_1z_2-t_1(z_1+z_2))+Q_{1}(z_1-t_1)z_1z_2(-t_1) \\
\hline
\31\2 & Q_{42}+Q_{32}(z_1+z_2-t_1)+Q_{41}(-t_1)+Q_{31}(z_1+z_2-t_1)(-t_1)+Q_{21}t_1^2(z_1+z_2) \\
\hline
3\1\2 & Q_{42}+Q_{41}z_1+Q_{32}(z_1+z_2-t_1)+Q_{31}z_1(z_1+z_2-t_1)+Q_{21}z_1^2(z_2-t_1) \\
\hline
\3\1\2 & Q_{421}+Q_{321}(z_1+z_2-t_1) \\
\hline
\end{array}$
$\begin{array}{|c|l|}
\hline
1\3\2 & Q_{43}+Q_{42}(z_1-t_1)+Q_{32}(z_1^2+t_1^2-z_1t_1)+Q_{41}(-z_1t_1)+Q_{31}z_1(-t_1)(z_1-t_1)+Q_{21}(z_1^2t_1^2) \\
\hline
\1\3\2 & Q_{431}+Q_{421}(z_1-t_1)+Q_{321}(z_1^2-z_1t_1+t_1^2) \\
\hline
12\3 & Q_{5}+Q_4(z_1+z_2-t_1-t_2)+Q_3(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2))+\\ & Q_2(z_1z_2(-t_1-t_2)+t_1t_2(z_1+z_2) )+Q_1 z_1z_2t_1t_2 \\
\hline
\12\3 & Q_{51}+Q_{41}(z_1+z_2-t_1-t_2)+Q_{31}(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2))+Q_{21}(z_1z_2(-t_1-t_2)+t_1t_2(z_1+z_2)) \\
\hline
21\3 & Q_{51}+Q_{5}(z_1-t_1)+Q_{41}(z_1+z_2-t_1-t_2)+Q_{4}(z_1-t_1)(z_1+z_2-t_1-t_2)+ \\ & Q_{31}(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2)) +Q_{3}(z_1-t_1)(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2))+ \\ & Q_{21}(z_1z_2(-t_1-t_2)+t_1t_2(z_1+z_2)) +Q_2(z_1-t_1)(z_1z_2(-t_1-t_2)+t_1t_2(z_1+z_2))+Q_1 z_1z_2t_1t_2(z_1-t_1) \\
\hline
\21\3 & Q_{52}+Q_{42}(z_1+z_2-t_1-t_2)+Q_{32}(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2))+Q_{51}(-t_1)+\\ & Q_{41}(-t_1)(z_1+z_2-t_1-t_2)+Q_{31}(-t_1)(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2)) \\
& +Q_{21}(t_1)^2(z_1z_2-(z_1+z_2)t_2) \\
\hline
2\1\3 & Q_{52}+Q_{42}(z_1+z_2-t_1-t_2)+Q_{32}(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2))+Q_{51}z_1+Q_{41}z_1(z_1+z_2-t_1-t_2)+\\ & Q_{31}z_1(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2)) +Q_{21} z_1^2(t_1t_2-z_2(t_1+t_2)) \\
\hline
\2\1\3 & Q_{521}+Q_{421}(z_1+z_2-t_1-t_2)+Q_{321}(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2)) \\
\hline
1\2\3 & Q_{53}+Q_{52}(z_1-t_1)+Q_{51}(-z_1t_1)+Q_{43}(z_1+z_2-t_1-t_2)+Q_{42}(z_1-t_1)(z_1+z_2-t_1-t_2)+\\ & Q_{41}(-z_1t_1)(z_1+z_2-t_1-t_2) +Q_{32}(z_1(z_1-t_1)(z_2-t_1-t_2)+t_1^2(z_2-t_2))+\\ & Q_{31}(-z_1t_1)(z_1(z_2-t_1-t_2)-t_1(z_2-t_2))+Q_{21}z_1^2t_1^2(z_2-t_2) \\
\hline
\1\2\3 & Q_{531}+Q_{431}(z_1+z_2-t_1-t_2)+Q_{521}(z_1-t_1)+
Q_{421}(z_1-t_1)(z_1+z_2-t_1-t_2)+\\ & Q_{321}((z_1^2-z_1t_1+t_1^2)(z_2-t_2)+z_1t_1(t_1-z_1)) \\
\hline
\end{array}
$
\normalsize
\section{Double Schubert polynomials in type $\mathrm{D}$ for $w \in W'_3$}
\tiny
\renewcommand{\arraystretch}{1.2}
$\begin{array}{|c|l|}
\hline
123 & 1 \\
\hline
213 & P_1+(z_1-t_1) \\
\hline
\2\13 & P_1 \\
\hline
\1\23 & P_2+P_1(z_1-t_1) \\
\hline
132 & 2P_1+(z_1+z_2-t_1-t_2) \\
\hline
312 & P_2+P_1(2z_1-t_1-t_2)+(z_1-t_1)(z_1-t_2) \\
\hline
\3\12 & P_2+P_1(-t_1-t_2) \\
\hline
\1\32 & P_3+P_2(z_1-t_1-t_2)+P_1(t_1t_2-z_1t_1-z_1t_2) \\
\hline
231 & P_2+P_1(z_1+z_2-2t_1)+(z_1-t_1)(z_2-t_1) \\
\hline
321 & P_{3}+P_{21}+P_2(2z_1+z_2-2t_1-t_2)+P_1(z_1^2+2z_1z_2+t_1^2+2t_1t_2-3t_1z_1-t_1z_2-t_2z_1-t_2z_2)+\\ & (z_1-t_1)(z_1-t_2)(z_2-t_1) \\
\hline
\3\21 & P_{21}+P_2(-t_1)+P_1t_1^2 \\
\hline
\2\31 & P_{31}+P_{21}(z_1-t_1-t_2)+P_3(-t_1)+P_2(-t_1)(z_1-t_1-t_2)+P_1 t_1^2(z_1-t_2) \\
\hline
\23\1 & P_2+P_1(z_1+z_2) \\
\hline
3\2\1 & P_{21}+P_{2}z_1+P_{1}z_1^2 \\
\hline
\32\1 & P_3+P_{21}+P_2(z_1+z_2-t_1-t_2)+P_1(z_1+z_2)(-t_1-t_2) \\
\hline
2\3\1 & P_{31}+P_{3}z_1+P_{21}(z_1-t_1-t_2)+P_2 z_1(z_1-t_1-t_2)+P_1 z_1^2(-t_1-t_2) \\
\hline
\13\2 & P_3+P_2(z_1+z_2-t_1)+P_1(z_1z_2-t_1(z_1+z_2)) \\
\hline
3\1\2 & P_{31}+P_{21}(z_1+z_2-t_1)+P_3 z_1+P_2z_1(z_1+z_2-t_1)+P_1 z_1^2(z_2-t_1) \\
\hline
\31\2 & P_{31}+P_{21}(z_1+z_2-t_1)+P_3(-t_1)+P_2(-t_1)(z_1+z_2-t_1)+P_1(z_1+z_2)t_1^2 \\
\hline
1\3\2 & P_{32}+P_{31}(z_1-t_1)+P_3(-z_1t_1)+P_{21}(z_1^2-z_1t_1+t_1^2)+P_2(-z_1t_1)(z_1-t_1)+P_1z_1^2t_1^2 \\
\hline
\12\3 & P_4+P_3(z_1+z_2-t_1-t_2)+P_2(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2))+P_1(z_1z_2(-t_1-t_2)+t_1t_2(z_1+z_2)) \\
\hline
2\1\3 & P_{41}+P_{4}z_1+P_{31}(z_1+z_2-t_1-t_2)+P_3(z_1)(z_1+z_2-t_1-t_2)+P_{21}(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2)) \\
& +P_2z_1(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2))+P_1z_1^2(t_1t_2-z_2t_1-z_2t_2) \\
\hline
\21\3 & P_{41}+P_{4}(-t_1)+P_{31}(z_1+z_2-t_1-t_2)+P_{3}(-t_1)(z_1+z_2-t_1-t_2)+P_{21}(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2)) \\
& +P_2(-t_1)(z_1z_2+t_1t_2-(z_1+z_2)(t_1+t_2))+P_1t_1^2(z_1z_2-z_1t_2-z_2t_2) \\
\hline
1\2\3 & P_{42}+P_{32}(z_1+z_2-t_1-t_2)+P_{41}(z_1-t_1)+P_{31}(z_1-t_1)(z_1+z_2-t_1-t_2)+ \\ & P_{21}(z_1^2z_2-t_1^2t_2+z_1t_1t_2-z_1z_2t_1+z_1^2(-t_1-t_2)+t_1^2(z_1+z_2)) +P_4(-z_1t_1)+\\ & P_3(-z_1t_1)(z_1+z_2-t_1-t_2)+P_2(-z_1t_1)(-z_1t_1-z_2t_1-z_1t_2+z_1z_2+t_1t_2)+P_1(z_1^2t_1^2)(z_2-t_2) \\
\hline
\end{array}
$
\nocite{*}
\begin{document}
\begin{plainfootnotes}
\maketitle
\end{plainfootnotes}
\thispagestyle{scrplain}
\begin{abstract}
\noindent
We establish Sturm bounds for degree~$g$ Siegel modular forms modulo a prime $p$, which are vital for explicit computations. Our inductive proof exploits Fourier-Jacobi expansions of Siegel modular forms and properties of specializations of Jacobi forms to torsion points. In particular, our approach is completely different from the proofs of the previously known cases $g=1,2$, which do not extend to the case of general~$g$.
\vspace{.25em}
\noindent{\sffamily\tbf MSC 2010: Primary 11F46; Secondary 11F33}
\end{abstract}
\vspace*{2em}
\Needspace*{4em}
\lettrine[lines=2,nindent=.5em]{L}{et} $p$ be a prime. A celebrated theorem of Sturm~\cite{Sturm-LNM1987} implies that an elliptic modular form with $p$-integral rational Fourier series coefficients is determined by its ``first few'' Fourier series coefficients modulo $p$. Sturm's theorem is an important tool in the theory of modular forms (for example, see~\cite{Ono, Stein} for some of its applications). Poor and Yuen~\cite{P-Y-paramodular} (and later \cite{C-C-K-Acta2013} for $p\geq 5$) proved a Sturm theorem for Siegel modular forms of degree~$2$. Their work has been applied in different contexts, and for example, it allowed~\cite{C-C-R-Siegel, D-R-Ramanujan} to confirm Ramanujan-type congruences for specific Siegel modular forms of degree $2$. In~\cite{RR-MRL}, we gave a characterization of $U(p)$ congruences of Siegel modular forms of arbitrary degree, but (lacking a Sturm theorem) we could only discuss one explicit example that occurred as a Duke-Imamo\u glu-Ikeda lift. If a Siegel modular form does not arise as a lift, then one needs a Sturm theorem to justify its $U(p)$ congruences.
In this paper, we provide such a Sturm theorem for Siegel modular forms of degree~$g\geq 2$. Our proof is totally different from the proofs of the cases $g=1,2$ in~\cite{Sturm-LNM1987, P-Y-paramodular, C-C-K-Acta2013}, which do not have visible extensions to the case~$g>2$. More precisely, we perform an induction on the degree~$g$. As in~\cite{Bru-WR-Fourier-Jacobi}, we employ Fourier-Jacobi expansions of Siegel modular forms, and we study vanishing orders of Jacobi forms. However, in contrast to~\cite{Bru-WR-Fourier-Jacobi} we consider restrictions of Jacobi forms to torsion points (instead of their theta decompositions), which allow us to relate mod~$p$ diagonal vanishing orders (defined in Section~\ref{sec: preliminaries}) of Jacobi forms and Siegel modular forms. We deduce the following theorem.
\begin{maintheorem}
\label{thm:maintheorem}
Let $F$ be a Siegel modular form of degree~$g\geq 2$, weight~$k$, and with $p$-integral rational Fourier series coefficients~$c(T)$.
Suppose that
\begin{gather*}
c(T) \equiv 0 \pmod{p}
\quad
\text{for all $T=(t_{i\!j})$ with}
\quad
t_{i\!i} \le \Big(\frac{4}{3}\Big)^g \frac{k}{16}
\text{.}
\end{gather*}
Then $c(T) \equiv 0 \pmod{p}$ for all $T$.
\end{maintheorem}
If a Siegel modular form arises as a lift, then one can sometimes infer that it has integral Fourier series coefficients (see~\cite{P-R-Y-BullAust09}). The situation is more complicated for Siegel modular forms that are not lifts. However, if the ``first few diagonal" coefficients of a Siegel modular form are integral (or $p$-integral rational), then Theorem~\ref{thm:maintheorem} implies that all of its Fourier series coefficients are integral (or $p$-integral rational).
\begin{maincorollary}
\label{cor:maincorollary}
Let $F$ be a Siegel modular form of degree~$g\geq 2$, weight~$k$, and with rational Fourier series coefficients $c(T)$. Suppose that
\begin{gather}
\label{eq:maincorollary-assumption}
c(T) \in \ZZ
\quad
\text{for all $T=(t_{i\!j})$ with}
\quad
t_{i\!i} \le \Big(\frac{4}{3}\Big)^g \frac{k}{16}
\text{.}
\end{gather}
Then $c(T) \in\ZZ$ for all $T$.
\end{maincorollary}
\begin{mainremarksenumerate}
\item
Theorem~\ref{thm:maintheorem} and Corollary~\ref{cor:maincorollary} are effective for explicit calculations with Siegel modular forms, since only finitely many $T$ satisfy the condition $t_{i\!i} \le (\frac{4}{3})^g \frac{k}{16}$ for all~$i$.
\item
If $p\geq 5$, then Theorem~\ref{thm:slope-bounds} shows that the bounds $(\frac{4}{3})^g \frac{k}{16}$ in Theorem~\ref{thm:maintheorem} and in Corollary~\ref{cor:maincorollary} can be replaced by the slightly better bounds $(\frac{4}{3})^g \frac{9k}{160}$.
\item
If \eqref{eq:maincorollary-assumption} in~Corollary~\ref{cor:maincorollary} is replaced by the assumption that $c(T)$ is $p$-integral rational for all $T=(t_{i\!j})$ with $ t_{i\!i} \le (\frac{4}{3})^g \frac{k}{16}$, then considering the case $q=p$ in the proof of Corollary~\ref{cor:maincorollary} yields that $c(T)$ is $p$-integral rational for all $T$.
\item
One can remove the assumption that $c(T) \in \QQ$ in~Corollary~\ref{cor:maincorollary}.
More precisely, if $F$ is a Siegel modular form of degree~$g\geq 2$, weight~$k$, and with Fourier series coefficients $c(T)\in\CC$ such that \eqref{eq:maincorollary-assumption} holds, then results of~\cite{Chai-Fal} show that $F$ is a linear combination of Siegel modular forms of degree~$g\geq 2$, weight~$k$, and with rational Fourier series coefficients, and applying Corollary~\ref{cor:maincorollary} yields that $c(T) \in \ZZ$ for all $T$.
\end{mainremarksenumerate}
The paper is organized as follows. In Section~\ref{sec: preliminaries}, we give some background on Jacobi forms and Siegel modular forms. In Section~\ref{sec: Vanishing orders of Jacobi forms}, we explore diagonal vanishing orders of Jacobi forms and of their specializations to torsion points. In Section~\ref{sec: Slope bounds for Siegel modular forms}, we inductively establish diagonal slope bounds for Siegel modular forms of arbitrary degree, and we prove Theorem~\ref{thm:maintheorem} and Corollary~\ref{cor:maincorollary}.
\vspace{1ex}
\section{Preliminaries}
\label{sec: preliminaries}
Throughout, $g, k, m \geq 1$ are integers, and $p$ is a rational prime. We work over the maximal unramified extension $\Qpur$ of $\QQ_p$. Note that $\Qpur$ contains all $N$\thdash\ roots of unity if $N$ and $p$ are relatively prime. We always write $\fp$ to denote a prime ideal in $\Qpur$, and $\Op$ stands for the localization of $\Qpur$ at~$\fp$. Moreover, we refer to the elements of the local ring $\ZZ_p\cap\QQ$ as $p$-integral rational numbers.
Finally, let $\HS_g$ be the Siegel upper half space of degree~$g$, $\Sp{g}(\ZZ)$ be the symplectic group of degree~$g$ over the integers, and $\rho$ be a representation of $\Sp{g}(\ZZ)$ with representation space $V(\rho)$, and such that $\big[ \ker\rho : \Sp{g}(\ZZ) \big]<\infty$.
\subsection{Siegel modular forms}
\label{sec: Siegel modular forms}
Let $\rmM^{(g)}_k (\rho)$ denote the vector space of Siegel modular forms of degree~$g$, weight~$k$, type $\rho$, and with coefficients in~$\Op$ (see \cite{Sh-Acta78}). If $\rho$ is trivial, then we simply write $\rmM^{(g)}_k$. Recall that an element $F\in\rmM^{(g)}_k(\rho)$ is a holomorphic function $F:\HS_g\ra V(\rho)$ with transformation law
\begin{gather*}
F\big( (A Z + B) (C Z + D)^{-1} \big)
=
\rho(M)\,\det(C Z + D)^{k}\, F(Z)
\end{gather*}
for all $M=\left(\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right) \in \Sp{g}(\ZZ)$. Furthermore, $F$ has a Fourier series expansion of the form
\begin{gather*}
F(Z)= \sum_{T=\rT{T}\geq 0} \hspace{-.5em}
c(T)\, e^{2\pi i\, \tr(T Z) }
\text{,}
\end{gather*}
where $\tr$ denotes the trace, $\rT T$ is the transpose of $T$, and where the sum is over symmetric, positive semi-definite, and rational $g\times g$ matrices $T$.
If $F\in\rmM^{(g)}_k (\rho)$ such that $F\not\equiv 0\pmod{\,\fp}$, i.e., if there exists a Fourier series coefficient $c(T)$ of $F$ such that $c(T)\not\equiv 0\pmod{\,\fp}$, then the mod~$\fp$ diagonal vanishing order of $F$ is defined by
\begin{gather}
\label{eq:def:Siegel-diagonal-vanishing-order}
\ord_{\fp}\,F
:=
\max \big\{0 \le l \in \ZZ \,:\, \forall T=(t_{i\!j}), t_{i\!i} \le l\;\tx{ for all } 1 \le i \le g
\,:\, c(T) \equiv 0 \pmod{\,\fp} \big\}
\tx{.}
\end{gather}
If $F$ has $p$-integral rational coefficients such that $F\not\equiv 0\pmod{p}$, then $\ord_{p}\,F$ is defined likewise. Finally, the mod~$\fp$ diagonal slope bound for degree~$g$ (scalar-valued) Siegel modular forms is given by
\begin{gather}
\label{eq:def:diagonal-slope-bound}
\rho_{\diag,\,\fp}^{(g)}
:=
\inf_{k}
\inf_{\substack{F \in \rmM^{(g)}_{k} \\ F \not\equiv 0 \pmod{\,\fp}}}\,
\frac{k}{\ord_{\fp}\,F}
\tx{,}
\end{gather}
and the definition of the mod~$p$ diagonal slope bound $\rho_{\diag, p}^{(g)}$ for degree~$g$ (scalar-valued) Siegel modular forms with $p$-integral rational coefficients is completely analogous.
\subsection{Jacobi forms}
\label{sec: Jacobi forms}
Ziegler~\cite{Zi} introduced Jacobi forms of higher degree (extending~\cite{EZ}). Let $\rmJ_{k, m}^{(g)}(\rho)$ denote the ring of Jacobi forms of degree~$g$, weight~$k$, index~$m$, type $\rho$, and with coefficients in~$\Op$. If $\rho$ is trivial, then we suppress it from the notation. Recall that Jacobi forms occur as Fourier-Jacobi coefficients of Siegel modular forms: Let $F\in\rmM^{(g+1)}_k (\rho)$, and write $Z =\left(\begin{smallmatrix} \tau & \rT z\\ z & \tau' \end{smallmatrix}\right)\in\HS_{g+1}$,
where $\tau \in \HS_{g}$, $z \in \CC^{g}$ is a row vector, and $\tau'\in \HS_{1}$ to find the Fourier-Jacobi expansion:
\begin{gather*}
F(Z)
=
F(\tau,z,\tau')
=
\sum_{0 \le m \in \ZZ}
\phi_m(\tau,z)\, e^{2\pi i m \tau'}
\text{,}
\end{gather*}
where $\phi_m\in\rmJ_{k, m}^{(g)}(\rho)$. We now briefly recollect some defining properties of such Jacobi forms.
Let $G^\rmJ:=\Sp{g}(\RR)\ltimes (\RR^{2g}\tilde{\times}\RR)$ be the real Jacobi group of degree $g$ (see~\cite{Zi}) with group law
\begin{gather*}
[M,(\lambda,\mu),\kappa]\cdot[M',(\lambda',\mu'),\kappa']:=[MM',(\tilde{\lambda}+\lambda', \tilde{\mu}+\mu'),\kappa+\kappa'+\tilde{\lambda} \rT{\mu'}-\tilde{\mu}\rT{\lambda'}]
\text{,}
\end{gather*}
where $(\tilde{\lambda},\tilde{\mu}):=(\lambda, \mu)M'$. For fixed $k$ and $m$, define the following slash operator on functions $\phi:\HS_g \times\CC^g\rightarrow V(\rho)$\,:
\begin{align}
\label{Jacobi-slash}
&
\Big(\phi\,\big|_{k,m}
\left[\left(\begin{smallmatrix}A & B\\C & D\end{smallmatrix}\right),\,
(\lambda, \mu),x\right]
\Big)(\tau,z)
\;:=\;
\rho^{-1} \left(\begin{smallmatrix}A & B\\C & D\end{smallmatrix}\right)\,
\det(C\tau+D)^{-k}
\\\nonumber
&
\quad\cdot\,
\exp\Big( 2\pi im\big(
- (C\tau+D)^{-1} (z+\lambda\tau+\mu) \,C\, \rT(z+\lambda\tau+\mu)\;
+ \lambda \tau \rT\lambda
+ 2\lambda\rT z
+\mu\rT\lambda
+x \big) \Big)
\\\nonumber
&
\quad\cdot\,
\phi\big( (A\tau+B)(C\tau+D)^{-1},\, (z+\lambda\tau+\mu)(C\tau+D)^{-1} \big)
\end{align}
for all $\left[\left(\begin{smallmatrix}A & B\\C & D\end{smallmatrix}\right), (\lambda, \mu),x\right]\in G^\rmJ$. A Jacobi form of degree~$g$, weight~$k$, and index~$m$ is invariant under \eqref{Jacobi-slash} when restricted to $\left(\begin{smallmatrix}A & B\\C & D\end{smallmatrix}\right)\in\Sp{g}(\ZZ)$, $(\lambda, \mu)\in\ZZ^{2g}$, and $\kappa=0$. Moreover, every $\phi\in\rmJ_{k, m}^{(g)}(\rho)$ has a Fourier series expansion of the form
\begin{gather*}
\phi(\tau,z)
=
\sum_{T,R}
c(T,R)\, e^{2\pi i\, \tr(T \tau + zR)}
\text{,}
\end{gather*}
where the sum is over symmetric, positive semi-definite, and rational $g\times g$ matrices~$T$ and over column vectors $R\in\QQ^g$ such that $4mT - R \rT R$ is positive semi-definite.
Finally, we state the analog of~\eqref{eq:def:Siegel-diagonal-vanishing-order} for Jacobi forms. Let $\phi\in\rmJ_{k, m}^{(g)}(\rho)$ such that $\phi\not\equiv 0\pmod{\,\fp}$, i.e., there exists a Fourier series coefficient $c(T,R)$ of $\phi$ such that $c(T,R)\not\equiv 0\pmod{\,\fp}$. Then the mod~$\fp$ diagonal vanishing order of $\phi$ is defined by
\begin{gather}
\label{eq:def:Jacobidiagonal-vanishing-order}
\ord_{\fp}\,\phi
:=
\max \big\{0 \le l \in \ZZ \,:\, \forall R, T=(t_{i\!j}), t_{i\!i} \le l\;\tx{ for all } 1 \le i \le g
\,:\, c(T,R) \equiv 0 \pmod{\,\fp} \big\}
\tx{,}
\end{gather}
and if $\phi$ has $p$-integral rational coefficients such that $\phi\not\equiv 0\pmod{p}$, then one defines $\ord_{p}\,\phi$ in the same way.
\vspace{1ex}
\section{Vanishing orders of Jacobi forms}
\label{sec: Vanishing orders of Jacobi forms}
In this section, we discuss diagonal vanishing orders of Jacobi forms and of their evaluations at torsion points.
Throughout, $N$ is a positive integer that is not divisible by $p$. Consider the $\CC$ vector space
\begin{gather}
V\big( \rho_{[N]} \big)
:=
\CC\Big[\big(\tfrac{1}{N} \ZZ^g \slashdiv N \ZZ^g \big)^2 \Big]
=
\lspan_\CC\big\{ \frake_{\alpha, \beta} \,:\,
\alpha, \beta \in \tfrac{1}{N} \ZZ^g / N \ZZ^g
\big\}
\tx{,}
\end{gather}
and the representation~$\rho_{[N]}$ on $V\big( \rho_{[N]} \big)$, which is defined by the action of $\Sp{g}(\ZZ)$ on $(\tfrac{1}{N} \ZZ^g / N \ZZ^g)^2$:
\begin{gather}
\rho_{[N]}\big( M^{-1} \big)\, \frake_{\alpha, \beta}
:=
\frake_{\alpha', \beta'}
\text{,}
\qquad
\text{where}\;\;
\big( \alpha',\beta' \big)
:=
\big( \alpha,\beta \big)\, M
\quad
\text{for $M\in\Sp{g}(\ZZ)$}
\tx{.}
\end{gather}
If $\phi \in \rmJ^{(g)}_{k,m}$, then $\phi[N]$ is its restriction to torsion points of denominator at most~$N$, i.e.,
\begin{align}
\label{eq:def:jacobi-forms-restriction-to-torsion-points}
\nonumber
\phi[N]
&:\,
\HS_g \lra V\big( \rho_{[N]} \big)
\\
\phi[N](\tau)
&:=
\Big(
\big( \phi \big|_{k, m} [ I_g, (\alpha, \beta), 0 ] \big) (\tau, 0)
\Big)_{\alpha, \beta \in \frac{1}{N}\ZZ^g / N \ZZ^g }
\tx{,}
\end{align}
where $I_g$ stands for the $g\times g$ identity matrix. It is easy to see that $\phi[N]$ is a vector-valued Siegel modular form (see also Theorem~1.3 of~\cite{EZ} and Theorem~1.5 of~\cite{Zi}):
\begin{lemma}
Let $\phi \in \rmJ^{(g)}_{k,m}$. Then $\phi[N] \in \rmM^{(g)}_{k}(\rho_{[N]})$.
\end{lemma}
\begin{proof}
We first argue that $\phi[N]$ is well-defined: If $a, b \in \ZZ^g$, then
\begin{gather*}
\phi \big|_{k, m} [ I_g, (\alpha + N a, \beta + N b ), 0 ]
=
\phi
\big|_{k, m} [ I_g, (N a, N b), N \alpha \rT b - N \beta \rT a]
\big|_{k, m} [ I_g, (\alpha, \beta), 0 ]
\text{.}
\end{gather*}
Note that $\kappa := N \alpha \rT{b} - N \beta \rT{a} \in \ZZ$ does not contribute to the action, and we find that the defining expression for $\phi[N]$ is independent of the choice of representatives of $\alpha, \beta \in \frac{1}{N} \ZZ^g \slashdiv N \ZZ^g$.
Next we verify the behavior under modular transformation of~$\phi[N]$. Let $M \in \Sp{g}(\ZZ)$. Then
\begin{gather*}
[ I_g, (\alpha, \beta ), 0 ] \cdot [M, (0,0), 0]
=
[M, (0,0), 0]\cdot [ I_g, (\alpha', \beta'), 0]
\end{gather*}
with $\big(\alpha', \beta' \big)=\big(\alpha, \beta \big)\, M$, which implies that
\begin{align*}
\big( \phi[N]_{\alpha, \beta} \big) \big|_k\, M
& =
\big( \big( \phi \big|_{k, m} [ I_g, (\alpha, \beta), 0 ] \big) (\,\cdot\,, 0) \big)
\big|_k\, M
=
\big( \phi \big|_{k, m} [M, (0,0), 0] \cdot [ I_g, (\alpha', \beta'), 0] \big) (\,\cdot\,, 0)\\
& =
\big( \phi[N]_{\alpha', \beta'} \big)
\text{.}
\end{align*}
\end{proof}
The next lemma relates the mod~$\,\fp$ diagonal vanishing orders of a Jacobi form $\phi$ and its specialization $\phi[N]$.
\begin{lemma}
\label{la:vanishing-order-of-jaocbi-specializations}
Let $\phi \in \rmJ^{(g)}_{k,m}$. Then $\ord_{\fp}\,\phi[N]\geq \ord_{\fp}\,\phi - \frac{m}{4}$.
\end{lemma}
\begin{proof}
Let $\phi(\tau,z)=\sum_{T,R} c(T,R)\, e^{2\pi i\, (\tr(T \tau)+zR)}$. Then the $(\alpha,\beta)$-component of $\phi[N](\tau)$ equals
\begin{gather}
\label{eq:jacobi-forms-restriction-to-torsion-points:fourier-coefficients}
\begin{aligned}
\big( \phi\big|_{k,m} [I_g, (\alpha, \beta), 0] \big) (\tau, 0)
&=
e^{2\pi im (\alpha \tau \rT\alpha +\beta\rT\alpha)}\,
\sum_{T,R}
c(T,R)\, e^{2\pi i\, \big(\tr(T \tau)+ (\alpha\tau + \beta)R\big)}
\\
&=
e^{2\pi im\,\beta \rT\alpha}\,
\sum_{T,R}
c(T,R) e^{2\pi i\,\beta R}\,
e^{2\pi i\,\tr\Big(
\big(T - \frac{1}{4m}R\rT R
\,+\,
\frac{1}{m}\rT\big(m \alpha +\frac{1}{2}\rT R\big)\,\big(m \alpha + \frac{1}{2}\rT R\big)
\big) \tau \Big)}
\tx{.}
\end{aligned}
\end{gather}
Observe that $c(T,R)\, e^{2\pi i\,(m\beta \rT\alpha + \beta R)}\in\Op$. It suffices to show that $c(T,R)$ vanishes mod~$\,\fp$ if the diagonal entries $t'_{i\!i}$ of $T':= T - \frac{1}{4m}R\rT R$ are at most $\ord_{\fp}\,\phi- \frac{m}{4}$.
Consider $T, R$ such that $t'_{i\!i} \le \ord_{\fp}\,\phi - \frac{m}{4}$ for some fixed~$i$. Note that $c(T, R)$ remains unchanged when replacing $T\mapsto T + \frac{1}{2}(R \lambda+\rT\lambda\rT R) +m \rT \lambda \lambda$ and $R\mapsto R + 2m \rT \lambda$, which corresponds to the invariance of $\phi$ under~$\big|_{k,m}\, [I_g, (\lambda,0) ,0]$. Hence we only have to consider the case of $R = \rT(r_1, \ldots, r_g)$ with $-m \le r_i \le m$. In this case, $t'_{i\!i} = t_{i\!i} - \frac{1}{4m}r_i^2 \le \ord_{\fp}\,\phi- \frac{m}{4}$ implies that $t_{i\!i} \le \ord_{\fp}\,\phi$, i.e., $c(T, R) \equiv 0 \pmod{\,\fp}$.
\end{proof}
The following lemma associates the mod~$\,\fp$ diagonal vanishing orders of scalar-valued and vector-valued Siegel modular forms.
\begin{lemma}
\label{la:slope-bound-vector-valued}
Suppose that there exists a mod~$\,\fp$ diagonal slope bound $\rhop$ for degree~$g \ge 1$. Let $\rho$ be a representation of~$\Sp{g}(\ZZ)$ defined over~$\Op$, and assume that its dual $\rho^\ast$ is also defined over~$\Op$. If $F \in \rmM^{(g)}_k(\rho)$ such that $\ord_{\fp}\,F>k \big\slash \rhop$, then $F \equiv 0 \pmod{\,\fp}$.
\end{lemma}
\begin{proof}
Let $v$ be a linear form on $V(\rho)$, i.e., $v \in V(\rho)^\ast (\Op)$. Then $\langle F, v \rangle := v \circ F$ is a scalar-valued Siegel modular form of weight $k$ for the group $\ker \rho$. We obtain a scalar-valued Siegel modular form for the full group $\Sp{g}(\ZZ)$ via the standard construction (see also the proof of Proposition~1.4 of~\cite{Bru-WR-Fourier-Jacobi})
\begin{gather*}
F_v
:=
\prod_{M :\, \ker \rho \backslash \Sp{g}(\ZZ)} \langle F, v \rangle |_k\, M
=
\prod_{M :\, \ker \rho \backslash \Sp{g}(\ZZ)} \langle F, \rho^\ast(M) v \rangle\in\rmM^{(g)}_{dk}
\text{,}
\end{gather*}
where $d := \big[ \ker\rho : \Sp{g}(\ZZ) \big]$. Observe that $\rho^\ast(M) v \in V(\rho)^\ast (\Op)$, and hence the Fourier series coefficients of $F_v$ do belong to $\Op$. The assumption $\ord_{\fp}\,F>k \big\slash \rhop$ implies that $\ord_{\fp}\,F_v> dk \big\slash \rhop$, and since $F_v$ is of weight $d k$, we find that $F_v \equiv 0 \pmod{\,\fp}$ for all $v$. Hence $\langle F, v \rangle$ vanishes mod~$\,\fp$ for every $v$, which proves that $F \equiv 0 \pmod{\,\fp}$.
\end{proof}
The final result in this section on the mod~$\,\fp$ diagonal vanishing orders of scalar-valued Jacobi forms and Siegel modular forms is an important ingredient in the proof of Theorem~\ref{thm:maintheorem} in the next section.
\begin{proposition}
\label{prop:slope-bound-jacobi}
Suppose that there exists a mod~$\,\fp$ diagonal slope bound $\rhop$ for degree~$g \ge 1$. Let $\phi \in \rmJ^{(g)}_{k,m}$ such that $\ord_{\fp}\,\phi>\frac{m}{4} + k \big\slash \rhop$. Then $\phi\equiv 0\pmod{\,\fp}$.
\end{proposition}
\begin{proof}
Let $\phi(\tau,z)=\sum_{T,R} c(T,R)\, e^{2\pi i\, (\tr(T \tau)+zR)}$. Lemmata~\ref{la:vanishing-order-of-jaocbi-specializations} and~\ref{la:slope-bound-vector-valued} imply that $\phi[N]\equiv 0\pmod{\,\fp}$ for all~$N$ that are relatively prime to $p$. We prove by induction on the diagonal entries $(t_{i\!i})$ of $T$ that $c(T,R) \equiv 0 \pmod{\,\fp}$. The constant Fourier series coefficient of $\phi[1]$ equals $c(0,0)$. Hence $c(0,0) \equiv 0 \pmod{\,\fp}$, i.e., the base case holds. Next, let $T$ be positive semi-definite and suppose that $c(T', R) \equiv 0 \pmod{\,\fp}$ for all $T'=(t'_{i\!j})$ with $t'_{i\!i} < t_{i\!i}$ for all~$i$. If $R = \rT(r_1, \ldots, r_g)$ such that $|r_i| > m$ for some~$i$, then (as in the proof of Lemma~\ref{la:vanishing-order-of-jaocbi-specializations}) use the modular invariance of $\phi$ to relate $c(T,R)$ to some $c(T',R')$ with $t'_{i\!i} < t_{i\!i}$. That is, it suffices to show that $c(T, R) \equiv 0 \pmod{\,\fp}$ for $R$ with $-m \le r_i \le m$ for all~$i$. Now, fix a prime~$N\not=p$ such that $2m < N - 2$. If $\beta= \rT(\beta_1, \ldots, \beta_g) \in \frac{1}{N} \ZZ^g$, then $\phi[N]\equiv 0\pmod{\,\fp}$ implies that (see also~\eqref{eq:jacobi-forms-restriction-to-torsion-points:fourier-coefficients})
\begin{gather*}
\sum_{\substack{R\\ |r_i| \le \frac{N - 1}{2}}} c(T,R) e^{2\pi i\,\beta R}
\equiv
\sum_{R} c(T,R) e^{2\pi i\,\beta R}
\equiv 0\pmod{\,\fp}
\text{,}
\end{gather*}
where the first congruence follows from the induction hypothesis and the assumption that $2m < N - 2$ (see also the proof of Lemma~\ref{la:vanishing-order-of-jaocbi-specializations}). Note that $e^{2\pi i\,\beta R}$ are integers in the $N$\nbd th cyclotomic field. Moreover, if
\begin{gather*}
A:=\big( e^{2\pi i\,\beta R} \big)
_{\substack{R \in \ZZ^g,\, \frac{1-N}{2} < r_i \le \frac{N-1}{2}\\
\beta \in \frac{1}{N}\ZZ^g,\, 0 \le N \beta_i \le N-2}}
\text{,}
\end{gather*}
then (observing that $N$ is prime) $\det A=(-1)^{N-1}N^{N-2}$ is the discriminant of the $N$\nbd th cyclotomic field. In particular, $\det A \not\equiv 0 \pmod{\,\fp}$, and we conclude that $c(T,R) \equiv 0 \pmod{\,\fp}$.
\end{proof}
\vspace{1ex}
\section{Slope bounds for Siegel modular forms}
\label{sec: Slope bounds for Siegel modular forms}
We prove by induction that there exists a diagonal slope bound $\rhop$ for Siegel modular forms of degree~$g\geq 1$, which then yields Theorem~\ref{thm:maintheorem} and Corollary~\ref{cor:maincorollary}.
\begin{proposition}
\label{prop:relative-sturm-bound}
If $\varrho^{(g-1)}_{\diag,\,\fp}$ is a diagonal slope bound for degree~$g-1$ Siegel modular forms, then $\rhop:= \frac{3}{4} \varrho^{(g-1)}_{\diag,\,\fp}$ is a diagonal slope bound for degree $g$ Siegel modular forms.
\end{proposition}
\begin{proof}
Suppose that there exists $0 \not\equiv F \in \rmM^{(g)}_k$ whose diagonal slope modulo~$\fp$ is less than $\rhop= \frac{3}{4} \varrho^{(g-1)}_{\diag,\,\fp}$, i.e., the diagonal vanishing order of $F$ is greater than $k \big\slash \rhop$. Consider Fourier-Jacobi coefficients $0\not\equiv\phi_m\in \rmJ^{(g-1)}_{k,m}$ of~$F$. If $m \leq k \big\slash \rhop$, then
\begin{gather*}
\ord_{\fp}\,\phi_m
>
\frac{k}{\rhop}
\geq
\frac{m}{4} + \frac{3}{4} \frac{k}{\rhop}
=
\frac{m}{4} + \frac{k}{\varrho^{(g-1)}_{\diag, \fp}}
\,\text{,}
\end{gather*}
and Proposition~\ref{prop:slope-bound-jacobi} implies that $\phi_m \equiv 0 \pmod{\,\fp}$.
If $m > k \big\slash \rhop$, then an induction on $m$ shows that $\phi_m \equiv 0 \pmod{\,\fp}$. More specifically, fix an index $m$ and suppose that $\phi_{m'} \equiv 0 \pmod{\,\fp}$ for all $m' < m$. Thus, the mod~$\fp$ diagonal vanishing order of $\phi_m$ is at least $m$, and we again apply Proposition~\ref{prop:slope-bound-jacobi} to find that $\phi_m \equiv 0 \pmod{\,\fp}$. Hence $F\equiv 0 \pmod{\,\fp}$, which yields the claim.
\end{proof}
Proposition~\ref{prop:relative-sturm-bound} holds for any prime ideal $\fp$ in $\Qpur$, and hence also for the rational prime $p$. As a consequence we discover explicit slope bounds, which immediately imply Theorem~\ref{thm:maintheorem}.
\begin{theorem}
\label{thm:slope-bounds}
Let $g\geq 1$. There exists a diagonal slope bound $\varrho^{(g)}_{\diag, p}$ such that
\begin{gather*}
\varrho^{(g)}_{\diag, p}
\ge
16 \cdot \Big(\frac{3}{4}\Big)^g
\text{.}
\end{gather*}
If, in addition, $g \ge 2$ and $p \ge 5$, then
\begin{gather*}
\varrho^{(g)}_{\diag, p}
\ge
\frac{160}{9} \cdot \Big(\frac{3}{4}\Big)^g
\text{.}
\end{gather*}
\end{theorem}
\begin{proof}
We apply Proposition~\ref{prop:relative-sturm-bound} to the base case $\varrho^{(1)}_{\diag, p} = 12$ (see~\cite{Sturm-LNM1987}), and if $p\geq 5$, to the base case $\varrho^{(2)}_{\diag, p} = 10$ (see~\cite{C-C-K-Acta2013}).
\end{proof}
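Unwinding the recursion makes the bounds explicit; the following is merely the proof restated in closed form, starting from the two cited base cases:

```latex
\begin{gather*}
\varrho^{(g)}_{\diag, p}
\;\ge\; \Big(\frac{3}{4}\Big)^{g-1} \varrho^{(1)}_{\diag, p}
\;=\; 12 \cdot \Big(\frac{3}{4}\Big)^{g-1}
\;=\; 16 \cdot \Big(\frac{3}{4}\Big)^{g}
\text{,}
\\
\varrho^{(g)}_{\diag, p}
\;\ge\; \Big(\frac{3}{4}\Big)^{g-2} \varrho^{(2)}_{\diag, p}
\;=\; 10 \cdot \Big(\frac{3}{4}\Big)^{g-2}
\;=\; \frac{160}{9} \cdot \Big(\frac{3}{4}\Big)^{g}
\qquad (p \ge 5,\; g \ge 2)
\text{.}
\end{gather*}
```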
\begin{example}
If $p\geq 5$, then for $g=3,4,5,6$ we obtain
\begin{gather*}
\varrho^{(3)}_{\diag, p} \ge 7.5
\text{,}\quad
\varrho^{(4)}_{\diag, p} \ge 5.6
\text{,}\quad
\varrho^{(5)}_{\diag, p} \ge 4.2
\text{,}\quad
\varrho^{(6)}_{\diag, p} \ge 3.1
\text{.}
\end{gather*}
\end{example}
Finally, we prove Corollary~\ref{cor:maincorollary}.
\begin{proof}[Proof of Corollary~\ref{cor:maincorollary}]
Let $F \in \rmM^{(g)}_{k}$ with rational Fourier series coefficients $c(T)$ such that $c(T)\in\ZZ$ for all $T=(t_{ij})$ with $t_{ii} \le \big( \frac{4}{3} \big)^g \frac{k}{16}$ for all~$i$. Note that $F$ has bounded denominators (this follows from~\cite{Chai-Fal}), i.e., there exists $0 < l \in \ZZ$ such that $l F \in \rmM^{(g)}_{k}$ has integral Fourier series coefficients. Let $l$ be minimal with this property. We need to show that $l = 1$. If $l\not=1$, then there exists a prime $q$ such that $q \isdiv l$. Hence $lc(T)\equiv 0\pmod{q}$ for all $T$ with $t_{ii} \le \big( \frac{4}{3} \big)^g \frac{k}{16}$, and Theorem~\ref{thm:maintheorem} asserts that $l c(T)\equiv 0\pmod{q}$ for all $T$. This contradicts the minimality of~$l$, and we conclude that $l = 1$.
\end{proof}
\renewbibmacro{in:}{}
\renewcommand{\bibfont}{\normalfont\small\raggedright}
\renewcommand{\baselinestretch}{.8}
\Needspace*{4em}
\printbibliography[heading=bibnumbered]
\addvspace{1em}
\titlerule[0.15em]\addvspace{0.5em}
{\noindent\small
Department of Mathematics,
University of North Texas,
Denton, TX 76203, USA\\
E-mail: \url{richter@unt.edu}\\
Homepage: \url{http://www.math.unt.edu/~richter/}
\\[1.5ex]
Max Planck Institute for Mathematics,
Vivatsgasse~7,
D-53111, Bonn, Germany\\
E-mail: \url{martin@raum-brothers.eu}\\
Homepage: \url{http://raum-brothers.eu/martin}
}
\end{document}
\section{Analytical Apparatus}\label{sec:2}
In this section we provide insight into the rank structure of the Schur complements that arise in the construction of the preconditioner. To make the discussion precise, let us consider the boundary value problem:
\begin{subequations}\label{pb:0}
\begin{alignat}{3}
\mathcal{L} \, u & = f \qquad && \text{in } &&\Omega \\
\mathcal{B} \, u & = g && \text{on } &&\Gamma := \partial \Omega
\end{alignat}
\end{subequations}
where $\mathcal{L}$ is a linear second order differential operator, and $\mathcal{B}$ is a linear boundary condition. Whenever the problem is well-posed, there exists a Green's function $G$ such that:
\[
u(x) =\int_\Omega G(x,y) \, f(y) \, dy + \int_\Gamma H(x,y) \, g(y) \, dS(y)
\]
where $H$ is related to the Green's function through the boundary condition $\mathcal{B}$. The right-hand side of the previous equality defines the so-called solution operator for problem (\ref{pb:0}). A rigorous derivation of the previous integral equation rests upon a generalized Green's formula of the second type, while a characterization of $G$ as the solution of a boundary value problem involves the notion of formal adjoint operator, see, e.g., Chapter 6.7 of \cite{oden2010applied}.
When $\mathcal{L}$ is a uniformly elliptic operator with smooth coefficients and $\Omega$ has a smooth boundary, then the long-range interactions of $G$ are rank-deficient. Recently, the case of non-smooth $L^\infty$ coefficients, with $\Omega$ a bounded Lipschitz domain, has been addressed in \cite{bebendorf2003existence}. The authors show that, even in this general setting, the Green's function $G$ can be approximated to high precision by an $\mathcal{H}$-matrix. In the case of homogeneous boundary conditions, this property immediately extends to the solution operator. When non-homogeneous boundary conditions are considered, we postulate that the nature of the solution operator is preserved.
Let $A$ be the stiffness matrix arising from a finite element discretization of problem (\ref{pb:0}). Let us suppose that, up to a permutation we shall omit, we can partition $A$ as follows, and define the (aggregated) submatrices $A^{(k)}$:
\[
A =
\begin{pmatrix}
{A^{(1)}}_{ii} & & {A^{(1)}}_{ib} & \\
& {A^{(2)}}_{ii} & & {A^{(2)}}_{ib} \\
{A^{(1)}}_{bi} & & {A^{(1)}}_{bb} & {A^{(1,2)}} \\
& {A^{(2)}}_{bi} & {A^{(2,1)}} & {A^{(2)}}_{bb}
\end{pmatrix} \qquad , \qquad
A^{(k)} = \begin{pmatrix}
{A^{(k)}}_{ii} & {A^{(k)}}_{ib} \\
{A^{(k)}}_{bi} &{A^{(k)}}_{bb}
\end{pmatrix} \quad k =1,2
\]
In the partitioning of $A$, we assume the blocks on the diagonal to be square matrices. We proceed to characterize matrix $A^{(k)}$ as the discretization of a boundary value problem.
In the finite element method, each degree of freedom $j$ of the stiffness matrix corresponds to a unique finite element basis function $\varphi_j$. We define the following subdomains of $\Omega$ \emph{via} the supports of the basis functions:
\[
\overline{\Omega_i^{(k)}} = \cup \, \{ \supp \varphi_j \: : \: j \in \ind ( {A^{(k)}}_{ii}) \} \quad , \quad
\overline{\Omega_b^{(k)}} = \cup \, \{ \supp \varphi_j \: : \: j \in \ind ( {A^{(k)}}_{bb}) \}
\]
The notation $\ind (B)$ refers to the (row or column) indices of $A$ that are indices of its submatrix $B$ as well. We also define $\overline{\Omega^{(k)}} = \overline{\Omega_i^{(k)}} \cup \overline{\Omega_b^{(k)}}$. When the previous discretization scheme is applied to the boundary value problem
\begin{subequations}\label{pb:1}
\begin{alignat}{3}
\mathcal{L} \, u & = f \qquad && \text{in } &&\Omega^{(k)} \\
\mathcal{B} \, u & = g && \text{on } &&\Gamma \cap \partial \Omega^{(k)} \\
u & = 0 && \text{on } &&\inte (\partial \Omega^{(k)} \setminus \Gamma )
\end{alignat}
\end{subequations}
it gives rise to $A^{(k)}$ as a stiffness matrix.
An $LDM^t$ block-factorization of $A^{(k)}$ is realized as follows:
\[
A^{(k)} =
\underset{L}{
\begin{pmatrix}
I & \\
{A^{(k)}}_{bi} \, {{A^{(k)}}_{ii}}^{-1} & I
\end{pmatrix}
}
\begin{pmatrix}
{A^{(k)}}_{ii} & \\
& S^{(k)}
\end{pmatrix}
\underset{M^t}{
\begin{pmatrix}
I & {{A^{(k)}}_{ii}}^{-1} \, {A^{(k)}}_{ib} \\
& I
\end{pmatrix}
}
\]
Matrices $L$, $M$ are Gauss transforms, and the Schur complement $S^{(\cdot)}$ is defined as:
\[
S^{(k)} = {A^{(k)}}_{bb} - {A^{(k)}}_{bi} \, {{A^{(k)}}_{ii}}^{-1} \, {A^{(k)}}_{ib}
\]
By inverting the factorization, it immediately follows that the bottom-right block of ${A^{(k)}}^{-1}$ coincides with ${S^{(k)}}^{-1}$. Since ${A^{(k)}}^{-1}$ is the discrete analog of the solution operator of problem (\ref{pb:1}), we conclude that ${S^{(k)}}^{-1}$ is the restriction to $\Omega_b^{(k)}$ of the discrete solution operator. Consequently, the inverse Schur complement has rank-deficient off-diagonal blocks. Since this class of matrices is closed under inversion, the Schur complement has rank-deficient off-diagonal blocks as well.
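The two identities just used, namely the block $LDM^t$ factorization and the fact that the bottom-right block of the inverse is the inverse Schur complement, are easy to verify numerically. The following NumPy sketch (not part of the paper; the test matrix and block sizes are arbitrary stand-ins) checks both:

```python
import numpy as np

# arbitrary symmetric positive definite test matrix, partitioned 2x2
rng = np.random.default_rng(0)
n, ni = 7, 4                                 # total size and "interior" block size
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                  # well conditioned, so inverses are safe
Aii, Aib = A[:ni, :ni], A[:ni, ni:]
Abi, Abb = A[ni:, :ni], A[ni:, ni:]

# Schur complement and the block LDM^t factors
S = Abb - Abi @ np.linalg.solve(Aii, Aib)
L = np.block([[np.eye(ni), np.zeros((ni, n - ni))],
              [Abi @ np.linalg.inv(Aii), np.eye(n - ni)]])
D = np.block([[Aii, np.zeros((ni, n - ni))],
              [np.zeros((n - ni, ni)), S]])
Mt = np.block([[np.eye(ni), np.linalg.inv(Aii) @ Aib],
               [np.zeros((n - ni, ni)), np.eye(n - ni)]])

assert np.allclose(L @ D @ Mt, A)            # the factorization reproduces A
assert np.allclose(np.linalg.inv(A)[ni:, ni:], np.linalg.inv(S))  # (A^{-1})_bb = S^{-1}
```

The derivation never uses symmetry, so the same check goes through for a nonsymmetric $A$, with the two Gauss transforms then distinct.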
We employ the factorizations of the sub-problems (\ref{pb:1}) to obtain an $LDM^t$ factorization of the stiffness matrix $A$ of the original problem (\ref{pb:0}):
\[ A =
L^{(2)}L^{(1)}
\underset{D}{
\left(
\begin{array}{cc | cc}
{A^{(1)}}_{ii} & & & \\
& {A^{(2)}}_{ii} & & \\ \hline
& & {S^{(1)}} & {A^{(1,2)}} \\
& & {A^{(2,1)}} & {S^{(2)}}
\end{array}\right)
}
{M^{(1)}}^t {M^{(2)}}^t
\]
Here $L^{(\cdot)}$ and $M^{(\cdot)}$ are accumulated Gauss transforms, and the Schur complements $S^{(\cdot)}$ are defined as previously. To proceed further with the factorization, we manipulate $D_{BR}$, namely the bottom-right block of $D$, in the following way:
\begin{multline*}
D_{BR} =
\left(
\begin{array}{cc}
{S^{(1)}} & {A^{(1,2)}} \\
{A^{(2,1)}} & {S^{(2)}}
\end{array}
\right)
\xrightarrow{\text{repartition}}
\left(
\begin{array}{ cc | cc}
{S^{(1)}}_{ii} & {S^{(1)}}_{ib} & {A^{(1,2)}}_{ii} & {A^{(1,2)}}_{ib} \\
{S^{(1)}}_{bi} & {S^{(1)}}_{bb} & {A^{(1,2)}}_{bi} & {A^{(1,2)}}_{bb} \\ \hline
{A^{(2,1)}}_{ii} & {A^{(2,1)}}_{ib} & {S^{(2)}}_{ii} & {S^{(2)}}_{ib} \\
{A^{(2,1)}}_{bi} & {A^{(2,1)}}_{bb} & {S^{(2)}}_{bi} & {S^{(2)}}_{bb} \\
\end{array}\right) \\
\xrightarrow{\text{permute}}
\left(
\begin{array}{cc | cc}
{S^{(1)}}_{ii} & {A^{(1,2)}}_{ii} & {S^{(1)}}_{ib} & {A^{(1,2)}}_{ib} \\
{A^{(2,1)}}_{ii} & {S^{(2)}}_{ii} & {A^{(2,1)}}_{ib} & {S^{(2)}}_{ib} \\ \hline
{S^{(1)}}_{bi} & {A^{(1,2)}}_{bi} & {S^{(1)}}_{bb} & {A^{(1,2)}}_{bb} \\
{A^{(2,1)}}_{bi} & {S^{(2)}}_{bi} & {A^{(2,1)}}_{bb} & {S^{(2)}}_{bb}
\end{array}\right)
\xrightarrow{\text{regroup}}
\left(
\begin{array}{cc}
{\hat{A}{}^{(0)}}_{ii} & {\hat{A}^{(1,2)}} \\
{\hat{A}^{(2,1)}} & {\hat{A}{}^{(0)}}_{bb}
\end{array}
\right)
\end{multline*}
Up to a permutation we shall omit, we obtain the final factorization:
\[ A =
L^{(0)} L^{(2)} L^{(1)}
\left(
\begin{array}{cc cc}
{A^{(1)}}_{ii} & & & \\
& {A^{(2)}}_{ii} & & \\
& & {\hat{A}{}^{(0)}}_{ii} & \\
& & & {S^{(0)}}
\end{array}\right)
{M^{(1)}}^t {M^{(2)}}^t {M^{(0)}}^t
\]
where the Schur complement is defined as:
\[
{S^{(0)}} = {\hat{A}{}^{(0)}}_{bb} - {\hat{A}^{(2,1)}} \, {{\hat{A}{}^{(0)}}_{ii}}^{-1} \, {\hat{A}^{(1,2)}}
\]
If we define $\overline{\Omega_b} = \cup \, \{ \supp \varphi_j \: : \: j \in \ind ({\hat{A}{}^{(0)}}_{bb}) \}$, the previous reasoning allows us to characterize ${S^{(0)}}^{-1}$ as the restriction of the solution operator of problem (\ref{pb:0}) to $\Omega_b$. We conclude that ${S^{(0)}}^{-1}$ and ${S^{(0)}}$ have low-rank off-diagonal blocks.
Finally, let us discuss the case of $\mathcal{L}$ being the Helmholtz operator. Although the interpretation of the inverse Schur complements as restrictions of the solution operator is preserved, the underlying Green's function no longer exhibits long-range low-rank interactions. However, in the case of scattering problems that involve relatively thin and elongated structures, the Green's function still exhibits low-rank behavior, see \cite{martinsson2007fast}. As remarked in \cite{engquist2011sweeping}, the rank depends on the boundary condition employed to terminate the domain, which in our scenario is $\mathcal{B} \, u = g$ on $\Gamma \cap \partial \Omega^{(k)}$, and only a PML condition guarantees consistently low ranks. Let us interpret the sets $\Omega_b^{(1)}$, $\Omega_b^{(2)}$ and $\Omega_b$ as elongated scatterers. Although we are not using a PML boundary condition, we conjecture that the observed numerical ranks are sufficiently small to yield a cheap and reliable approximation of the solution operator. | {"config": "arxiv", "file": "1508.07798/section2.tex"}
TITLE: 3rd Order Taylor expansion of $e^x\cos(y)\sin(z)$
QUESTION [1 upvotes]: I'm looking for the 3rd.-order Taylor approximation of
$(x,y,z) \mapsto e^x\cos(y)\sin(z)$ at $(x_0,y_0,z_0) = (0,0,0)$
I've got this piece of advice at hand: $\quad\textit{Use the Taylor series of known functions.}$
I can conclude that:
$\exp(x) = \sum_{n = 0}^\infty \frac{x^n}{n!}$
$\sin (x) = \sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{(2n+1)!}$
$\cos (x) = \sum_{n=0}^\infty (-1)^n\frac{x^{2n}}{(2n)!}$
But how am I supposed to use this?
Running it through Python (SymPy), I get:
$z + xz + \frac{x^2 z}{2} - \frac{y^2 z}{2} - \frac{z^3}{6}$
Adding all three 2-dimensional Taylor approximations with $n = 3$ doesn't yield the same...
Any hints - not solutions at first - would be greatly appreciated.
REPLY [1 votes]: To get what Python says, try again by expanding $$\left(1 + x + x^2/2 + x^3/6\right)\left(1-y^2/2\right)\left(z - z^3/6\right)$$
and dropping monomials with degree more than $3$ (like $x^3z/6$ whose degree is $4$). | {"set_name": "stack_exchange", "score": 1, "question_id": 819487} |
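For reference, the reply's expand-and-truncate recipe can be carried out in SymPy; the truncation step below (filtering monomials by total degree) is just one convenient way to do it:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Expand the product of the one-variable truncations of exp, cos, sin...
approx = sp.expand((1 + x + x**2/2 + x**3/6) * (1 - y**2/2) * (z - z**3/6))

# ...then drop every monomial of total degree greater than 3.
third_order = sum(t for t in approx.as_ordered_terms()
                  if sp.Poly(t, x, y, z).total_degree() <= 3)

expected = z + x*z + x**2*z/2 - y**2*z/2 - z**3/6
assert sp.simplify(third_order - expected) == 0
```

This works because each one-variable truncation contains every term of degree at most 3 of its factor, so the product, truncated at total degree 3, is the exact multivariate Taylor polynomial.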
TITLE: $m(E) \geq 0$ instead of $m(E) = 0$?
QUESTION [1 upvotes]: There's this lemma in Real Analysis by Royden and Fitzpatrick that goes:
Lemma 16: Let $E$ be a bounded measurable set of real numbers. Suppose there is a bounded countably infinite set of real numbers $\Lambda$ for which the collection of translates of $E$, $\{\lambda + E\}_{\lambda \in \Lambda}$, is disjoint. Then $m(E) = 0$.
There's this claim in the proof that I do not understand.
Isn't this supposed to be $\geq 0$ rather than $> 0$? I don't really see how positive numbers add up to zero...
If no (this is), thanks, and ignore what follows.
If yes (this isn't), how does $m(E)=0$ follow? If $m(E) = 0.0001$, $0.0001+0.0001+\dots = \infty$, right?
REPLY [2 votes]: You are correct, $m(E) \geq 0$. Alternatively, you could interpret it as a proof by contradiction where the assumption that is made is that $m(E) > 0$. I'm pretty sure it is just a typo though.
Added Later: You can check the errata to see if it is indeed a typo. I don't have the page number so I can't do it myself. | {"set_name": "stack_exchange", "score": 1, "question_id": 819094} |
TITLE: What is the connection between metric distances and the norm in a normed vector space in the given example?
QUESTION [0 upvotes]: Let $v_{0}$ be the space containing each sequence $y = (y_{n})_{n=1}^{\infty}$ with $\displaystyle{\lim_{n \to \infty}} y_{n} = 0$, equipped with the norm $\Vert y \Vert_{\infty} = \sup_{n} |y_{n}|$.
I'm having difficulty understanding the connection between metric distances and the norm in a normed vector space. So considering the vector space above, do we write $\displaystyle{\lim_{n \to \infty}} y_{n} = 0$ as $\Vert y_{n} - 0 \Vert < \epsilon$ for all $n \ge N$, where $\Vert y_{n} - 0 \Vert$ is the Euclidean norm of each point of the sequence $y$ to $0$? Then, is the sup norm just greater than or equal to the Euclidean norm?
REPLY [1 votes]: A norm is a function from $X$ (a vector space) to $\mathbb{R}_+$; that is, a function that assigns a non-negative number to each $x \in X$.
A metric, on the other hand, is a function from $X \times X$ to $\mathbb{R}_+$.
Your $v_0$ is a vector space on which you can use the norm $\Vert \cdot \Vert_\infty$.
From a norm you always get a metric: $\rho(x,y)=||x-y||$.
Informally, the norm gives you a way to "measure" the "length" of a vector, while the metric gives you a "distance" between two points, and in a vector space, your points are vectors, and in this case, they are sequences. | {"set_name": "stack_exchange", "score": 0, "question_id": 3524843} |
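A small numerical illustration of the answer, using finite truncations as a stand-in for the infinite sequences (the particular sequences are illustrative; for them the sup is attained at a small index, so the truncated value is exact):

```python
import numpy as np

# Finite truncations of two sequences in c_0.
n = 1000
x = 1.0 / np.arange(1, n + 1)         # x_j = 1/j   -> 0
y = 1.0 / np.arange(1, n + 1) ** 2    # y_j = 1/j^2 -> 0

sup_norm = lambda v: float(np.max(np.abs(v)))   # the norm ||.||_inf
rho = lambda u, v: sup_norm(u - v)              # the metric it induces

assert sup_norm(x) == 1.0     # "length" of x, attained at j = 1
assert rho(x, y) == 0.25      # "distance", attained at j = 2: 1/2 - 1/4
assert rho(x, x) == 0.0       # a point is at distance 0 from itself
```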
TITLE: Continuity in weak star topology
QUESTION [2 upvotes]: If $K$ is $\mathbb R$ or $\mathbb C$, consider $$\varphi : \ell^{1}(\mathbb N) \to K$$ that is defined by $(a_{j})\mapsto \varphi((a_{j})):=\sum_{j=1}^{\infty} a_{j}.$
I want to prove that $\varphi$ is continuous in norm but not with respect to the weak* topology of $\ell^{1}=(c_{0})'$. Continuity in norm is easy to see: since the sequence is in $\ell^{1}$, the series converges absolutely, so the linear functional is norm bounded. But I am not clear on the definition of continuity with respect to the weak* topology. How can I proceed?
REPLY [4 votes]: Consider the Kronecker-delta sequence $(\delta^n):=((\delta_i^n)_{i\in\mathbb N})_{n\in\mathbb N}\subset \ell^1$, where $\delta_n^n=1$ and $\delta_i^n=0$ if $i\neq n$. Then for every $c=(c_i)\in c_0$ we know that $\delta^n(c)=c_n\to 0$, meaning that $\delta^n$ converges to $0$ in the weak* topology (it is crucial that you understand why). However,
$$\varphi(\delta^n)=\sum_{i=1}^\infty\delta_i^n=1\neq 0=\varphi(0).$$
Thus $\varphi$ is not sequentially continuous, so definitely not continuous (in $\ell^1(\mathbb N)$ these are, in fact, equivalent). | {"set_name": "stack_exchange", "score": 2, "question_id": 4576039}
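A finite-truncation illustration of the answer's computation (the truncation length and the test sequence $c = (1/j)_j$ are illustrative): the pairing $\delta^n(c) = c_n$ shrinks to $0$ while $\varphi(\delta^n)$ stays pinned at $1$.

```python
import numpy as np

N = 10**5
c = 1.0 / np.arange(1, N + 1)              # c = (1/j)_j lies in c_0

pairings, phis = [], []
for n in (10, 1000, N):
    delta_n = np.zeros(N)
    delta_n[n - 1] = 1.0                   # the Kronecker-delta sequence
    pairings.append(float(delta_n @ c))    # delta^n acting on c equals c_n
    phis.append(float(delta_n.sum()))      # phi(delta^n) = sum of entries

assert pairings[0] > pairings[1] > pairings[2] > 0.0   # c_n -> 0
assert phis == [1.0, 1.0, 1.0]                          # phi(delta^n) = 1
```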
TITLE: Exist unique $f \in C([0, 1])$ such that $f(x) = \int_0^x K(x, y)\,f(y)\,dy + g(x)?$
QUESTION [4 upvotes]: Let $K \in C([0, 1] \times [0, 1])$ and $g \in C([0, 1])$. Does there exist a unique $f \in C([0, 1])$ such that, for all $x \in [0, 1]$, we have$$f(x) = \int_0^x K(x, y)\,f(y)\,dy + g(x)?$$
REPLY [0 votes]: $$f(x)=\int_0^x K(x,y)f(y)\,dy+g(x)$$
Differentiating both sides with respect to $x$ (note: this keeps only the boundary term from the integral, which is exact when $K$ does not depend on its first argument; in general the extra term $\int_0^x \partial_x K(x,y)f(y)\,dy$ also appears) we get
$$f'(x)=K(x,x)f(x)+g'(x)$$
$$f'(x)-K(x,x)f(x)=g'(x)$$
Let $\mu(x)=\exp(\int -K(x,x)\,dx)$ and multiply both sides by it
$$\mu(x)f'(x)-K(x,x)\mu(x)f(x)=\mu(x)g'(x)$$
Notice that $-K(x,x)\mu(x)=\mu'(x)$ so:
$$\mu(x)f'(x)+\mu'(x)f(x)=\mu(x)g'(x)$$
$$\frac{d}{dx}(\mu(x)f(x))=\mu(x)g'(x)$$
$$\mu(x)f(x)=\int_0^x \mu(y)g'(y)\,dy+C$$
$$f(x)=\frac{\int_0^x \mu(y)g'(y)\,dy+C}{\mu(x)}$$
Then we find $C$ from the relation:
$$f(0)=\int_0^0 K(0,y)f(y)\,dy+g(0)=g(0)$$
So $f$ must be unique. | {"set_name": "stack_exchange", "score": 4, "question_id": 1550737} |
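Independently of the ODE derivation above, the equation can also be solved by successive approximation (Picard iteration), which is the standard route for Volterra equations of the second kind. A sketch with the illustrative choices $K \equiv 1$, $g \equiv 1$, for which the exact solution is $f(x) = e^x$:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule; returns 0 for a single node."""
    if len(x) < 2:
        return 0.0
    return float(np.sum(0.5 * (x[1:] - x[:-1]) * (y[1:] + y[:-1])))

# Picard iteration for f(x) = int_0^x K(x,y) f(y) dy + g(x)
# with K = 1, g = 1, whose solution is f = exp.
m = 201
xs = np.linspace(0.0, 1.0, m)
g = np.ones(m)

f = g.copy()
for _ in range(60):
    # f_{k+1}(x_i) = int_0^{x_i} f_k(y) dy + g(x_i)
    f = np.array([trapezoid(f[:i + 1], xs[:i + 1]) for i in range(m)]) + g

assert np.max(np.abs(f - np.exp(xs))) < 1e-3
```

The iteration converges for any continuous $K$ and $g$ because the Volterra operator's iterates contract factorially; only the quadrature error remains.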
TITLE: Let $A$ be a real $n\times n$ matrix. Suppose $A$ satisfies the equation $x^3 + 3x^2 - 4 = 0$. Find $A^{-1}$
QUESTION [0 upvotes]: Let $A$ be a real $n\times n$ matrix. Suppose $A$ satisfies the equation $x^3 + 3x^2 - 4 = 0$. Prove that $A$ is non-singular and find $A^{-1}$.
I can find two different $A$ that satisfy this; $x^3 + 3x^2 - 4=(x-1)(x+2)^2$. So surely $I$ and $-2I$ are two such $A$. But how do I know that there are not more? Could we not have some $A\ne I, -2I$ such that $(A-I)(A+2I)^2=0$ and if this was the case, wouldn't such an $A$ be singular?
REPLY [3 votes]: By assumption, $A^3+3A^2-4 I=0$. We can rewrite this as
$$A(A^2+3A)=A^3+3A^2=4I.$$
Thus, $A$ is nonsingular with inverse $\frac{1}{4}(A^2+3A)$. (Recall that given matrices $A$ and $B$ having $AB=I$, then we can conclude that $A$ is nonsingular and $B=A^{-1}$.) | {"set_name": "stack_exchange", "score": 0, "question_id": 2430119} |
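A quick numerical check of the answer; the diagonal matrix below is just one convenient instance of a matrix satisfying the equation, built from the polynomial's roots $1$ and $-2$:

```python
import numpy as np

# A diagonal mix of the two roots 1 and -2 satisfies x^3 + 3x^2 - 4 = 0.
A = np.diag([1.0, -2.0, -2.0])
I = np.eye(3)

assert np.allclose(A @ A @ A + 3 * A @ A - 4 * I, 0)   # A satisfies the equation
A_inv = (A @ A + 3 * A) / 4                            # the claimed inverse
assert np.allclose(A @ A_inv, I)
```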
\section{Numerical Experiments}
\label{sec:nuex}
\begin{figure*}[t]
\begin{tabular}{ccc}
\begin{minipage}[t]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=55mm]{Images/dst_5_20_50_iter.eps}}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=55mm]{Images/S_5_20_50_iter.eps}}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=55mm]{Images/L_5_20_50_iter.eps}}
\end{minipage}
\end{tabular}
\caption{
\textcolor{black}{
Transition of approximation errors in the case of SNR $50$ dB and $\kappa
= 50$.
}
}
\label{fig:iter}
\end{figure*}
\begin{table*}[t]
\centering
\caption{
\textcolor{black}{
Computation time, number of iterations, until
$\norm{S^{\star}_{\mathrm{I}}-\tS_{\mathrm{I,II}}(t)}_{F}^{2}/\norm{S_{\mathrm{I}}^{\star}}_{F}^{2} \le 10^{-2},
10^{-3}$, or $10^{-4}$, and rate of successful trials (from left to
right) in the case of SNR $50$ dB and $\kappa = 50$. Note that we
use ``-'' if the rate of successful trials is smaller than $10 \%$.
}
}
\label{tbl:cmptime2}
\begin{tabular}{llllllllll}
\hline
$\norm{S^{\star}_{\mathrm{I}}-\tS_{\mathrm{I,II}}(t)}_{F}^{2}/\norm{S_{\mathrm{I}}^{\star}}_{F}^{2}$ & \multicolumn{3}{l}{$\le 10^{-2}$} & \multicolumn{3}{l}{$\le 10^{-3}$} & \multicolumn{3}{l}{$\le 10^{-4}$} \\
sh-rt~\cite{Fu2006} & $4.298$ ms & $52.69$ & $75 \%$ & - & - & $8 \%$ & - & - & $2 \%$ \\
JDTM~\cite{Luciani2014} & $\mathbf{1.552}$ {\bf ms} & $18.85$ & $100 \%$ & $\mathbf{1.873}$ {\bf ms} & $22.99$ & $72 \%$ & - & - & $8 \%$ \\
Algorithm~\ref{alg:ovralg} & $4.004$ ms & $\mathbf{1.610}$ & $100 \%$ & $5.212$ ms & $\mathbf{3.540}$ & $\mathbf{100 \%}$ & $12.32$ ms & $\mathbf{14.79}$ & $\mathbf{100 \%}$ \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[t]
\begin{tabular}{ccc}
\begin{minipage}[t]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=55mm]{Images/dst_5_20_100.eps}}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=55mm]{Images/S_5_20_100.eps}}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=55mm]{Images/L_5_20_100.eps}}
\end{minipage}
\end{tabular}
\caption{
Comparison of ASD algorithms for $S^{\star}$ of condition number $\kappa =
50$.
}
\label{fig:asd2}
\end{figure*}
\begin{table*}[t]
\centering
\caption{
\textcolor{black}{
Computation time, number of iterations, until
$\norm{S^{\star}_{\mathrm{I}}-\tS_{\mathrm{I,II}}(t)}_{F}^{2}/\norm{S_{\mathrm{I}}^{\star}}_{F}^{2} \le 10^{-2},
10^{-3}$, or $10^{-4}$, and rate of successful trials (from left to
right) in the case of SNR $50$ dB and $\kappa = 5$.
}
}
\label{tbl:cmptime1}
\begin{tabular}{llllllllll}
\hline
$\norm{S^{\star}_{\mathrm{I}}-\tS_{\mathrm{I,II}}(t)}_{F}^{2}/\norm{S_{\mathrm{I}}^{\star}}_{F}^{2}$ & \multicolumn{3}{l}{$\le 10^{-2}$} & \multicolumn{3}{l}{$\le 10^{-3}$} & \multicolumn{3}{l}{$\le 10^{-4}$} \\
sh-rt~\cite{Fu2006} & $2.139$ ms & $21.27$ & $100 \%$ & $2.542$ ms & $26.83$ & $100 \%$ & $2.777$ ms & $29.72$ & $100 \%$ \\
JDTM~\cite{Luciani2014} & $\mathbf{1.227}$ {\bf ms} & $14.09$ & $100 \%$ & $\mathbf{1.425}$ {\bf ms} & $17.13$ & $100 \%$ & $\mathbf{1.603}$ {\bf ms} & $19.06$ & $100 \%$ \\
Algorithm~\ref{alg:ovralg} & $3.208$ ms & $\mathbf{1.000}$ & $100 \%$ & $3.334$ ms & $\mathbf{1.000}$ & $100 \%$ & $3.338$ ms & $\mathbf{1.000}$ & $100 \%$ \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[t]
\begin{tabular}{ccc}
\begin{minipage}[t]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=55mm]{Images/dst_5_20_5.eps}}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=55mm]{Images/S_5_20_5.eps}}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=55mm]{Images/L_5_20_5.eps}}
\end{minipage}
\end{tabular}
\caption{
Comparison of ASD algorithms for $S^{\star}$ of condition number $\kappa =
5$.
}
\label{fig:asd1}
\end{figure*}
\textcolor{black}{
To see the numerical performance of Algorithm~\ref{alg:ovralg}, in comparison
to the two Jacobi-like methods~(sh-rt~\cite{Fu2006} and
JDTM~\cite{Luciani2014}), under several
conditions~(e.g., noise levels, condition numbers of an ideal common
diagonalizer), we conduct numerical experiments for a perturbed version
$\mbA = (A_1, \ldots , A_K) \in \btK \Rnn\ (n = 5, K = 20)$ of $\mbA^{\star} = (A^{\star}_{1}, \ldots , A^{\star}_{K}) \coloneqq (S^\star \Lambda_k^\star (S^\star)^{-1})_{k=1}^{K} \in \SD$ with $A_k \coloneqq
A_{k}^{\star} + \sigma N_k \ (k = 1, \ldots , K)$, where the
diagonal entries of diagonal matrices $\Lambda_k^\star \in \Rnn$ and all the entries of $N_k \in \Rnn$ are drawn
from the standard normal distribution $\mcN(0,1)$ and $\sigma > 0$ is used to define the Signal
to Noise Ratio~(SNR). To conduct numerical
experiments for $S^{\star} \in \Rnn$ of various condition numbers, say
$\kappa > 1$, we design $S^{\star}$ by replacing singular values, of a matrix
whose entries are drawn from the standard normal distribution $\mcN(0,1)$, with
$\sigma_{i}(S^{\star}) \coloneqq (\kappa - 1) (n - i) / (n - 1) + 1 (i = 1,
\ldots , n)$ implying thus $\sigma_{1}(S^{\star}) / \sigma_{n}(S^{\star}) =
\kappa$.
For Algorithm~\ref{alg:ovralg}, we use, as the $t$th estimates, $\tS(t) \coloneqq $ \texttt{DODO}($P_{\Xi^{-1}(\hA(t))}(\mbA), n$) if $P_{\Xi^{-1}(\hA(t))}(\mbA) \in \SD$; otherwise $\tS(t) \coloneqq$ \texttt{PCD}($P_{\Xi^{-1}(\hA(t))}(\mbA)$)~(see function \texttt{PCD} in Algorithm~\ref{alg:ovralg}),\footnotemark[8] $\tbA(t) = (\tA_{1}(t), \ldots , \tA_{K}(t)) \coloneqq P_{\Xi^{-1}(\hA(t))}(\mbA)$ if $P_{\Xi^{-1}(\hA(t))}(\mbA) \in \SD$; otherwise $\tbA(t) \coloneqq P_{\SD(\tS(t))}(\mbA)$, and $\tLam_{k}(t) \coloneqq (\tS(t))^{-1} \tA_{k}(t) \tS(t) \ (k = 1, \ldots , K; t = 0, 1, \ldots)$.
For the
Jacobi-like methods, on the other hand, we use, as the $t$th estimates, $\tS(t) \coloneqq \breve{S}(t)$, $\tLam_{k}(t) \coloneqq
\breve{\Lambda}_{k}(t)$, where $\breve{S}(t)$ and $\breve{\Lambda}_{k}(t)$ are respectively the $t$th updates of $\breve{S}$ and $\breve{\Lambda}$ in the Jacobi-like methods~(sh-rt and JDTM; see Section~\ref{ssec:jcb} for sh-rt), and $\tA_{k}(t)
\coloneqq \tS(t) \tLam_{k}(t) (\tS(t))^{-1}\ (k = 1, \ldots , K)$.
The Jacobi-like
methods are terminated when the number of iteration
exceeds $2 \times 10^{4}$ or when
$\abs{f_\mbA(\tS(t))-f_{\mbA}(\tS(t-1))}/\abs{f_{\mbA}(\tS(t))} \le 10^{-6}\
(t \in \N)$, where $f_\mbA(\tS(t)) = \sum_{k = 1}^K
\off((\tS(t))^{-1} A_k \tS(t))$. We choose $\varepsilon = 10^{-6}$ and
$t_{\mathrm{max}} = 2 \times 10^{4}$ in Algorithm~\ref{alg:ovralg}. For each algorithm, we use $t_{\mathrm{end}} \in \N$ to indicate the iteration when the algorithm is terminated. We
evaluate the approximation errors of $\tS(t)$, $(\tLam_{1}(t), \ldots , \tLam_{K}(t))$, $\tbA(t) = (\tA_{1}(t), \ldots , \tA_{k}(t))$ by
$\norm{S^{\star}_{\mathrm{I}}-\tS_{\mathrm{I, II}}(t)}^{2}_{F} / \norm{S^{\star}_{\mathrm{I}}}^{2}_{F}$,
$\sum_{k=1}^{K} \norm{\Lambda^{\star}_{k}-\mbox{$\tLam_{k}$}_{\mathrm{II}}(t)}^{2}_{F} /
\norm{\Lambda^{\star}_{k}}^{2}_{F}$, and $\sum_{k=1}^{K}
\norm{A_{k}-\tA_{k}(t)}^{2}_{F} / \norm{A_{k}}^{2}_{F}$, respectively, where (i)~$S^{\star}_{\mathrm{I}}$ stands for the column-wise normalization of $S^{\star}$ and $\tS_{\mathrm{I, II}}(t)$ stands for the column-wise permutation applied to achieve the best approximation to $S_{I}^{\star}$ after the column-wise normalization of $\tS(t)$, and (ii)~$\mbox{$\tLam_{k}$}_{\mathrm{II}}(t)$ stands for the diagonal matrix after applying the corresponding permutation for $\tS_{\mathrm{I,II}}(t)$ to diagonal entries of $\tLam_{k}(t)$.
}
\footnotetext[8]{
\textcolor{black}{
In our numerical experiments, we have not seen any exceptional case where
Assumption~1 for $P_{\Xi^{-1}(\hA(t))}(\mbA)$ is not satisfied.
}
}
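The construction of $S^{\star}$ and of the perturbed tuple described above can be sketched as follows (a Python/NumPy sketch for illustration; all names are ours and the noise level is arbitrary, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_common_diagonalizer(n, kappa):
    """Draw a Gaussian matrix and replace its singular values with
    sigma_i = (kappa - 1)(n - i)/(n - 1) + 1, so that cond = kappa."""
    U, _, Vt = np.linalg.svd(rng.standard_normal((n, n)))
    sigma = (kappa - 1) * (n - 1 - np.arange(n)) / (n - 1) + 1
    return U @ np.diag(sigma) @ Vt

n, K, kappa, noise = 5, 20, 50.0, 1e-3
S = make_common_diagonalizer(n, kappa)
assert np.isclose(np.linalg.cond(S), kappa)

# Perturbed simultaneously diagonalizable tuple A_k = S Lam_k S^{-1} + noise * N_k.
S_inv = np.linalg.inv(S)
A = [S @ np.diag(rng.standard_normal(n)) @ S_inv
     + noise * rng.standard_normal((n, n)) for _ in range(K)]
```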
\textcolor{black}{
We conducted numerical experiments on Intel Core i7-8559U running at $2.7$ GHz
with $4$ cores and $32$ GB of main memory. By using Matlab, we implemented all the ASD
algorithms by ourselves. We measured the computation times of
all the ASD algorithms by Matlab's \texttt{tic/toc} functions.
We compared the computation time and the number of iterations, of all the algorithms, until
$\norm{S^{\star}_{\mathrm{I}}-\tS_{\mathrm{I,II}}(t)}_{F}^{2}/\norm{S^{\star}_{\mathrm{I}}}_{F}^{2} \le 10^{-2},
$10^{-3}$, and $10^{-4}\ (t = 0, 1, \ldots)$ as shown in Tables~\ref{tbl:cmptime2} and~\ref{tbl:cmptime1}.
For each ASD algorithm, a successful trial means a trial in which the
algorithm achieves an approximation error of $\tS(t)$ smaller than the prescribed values $10^{-2}, 10^{-3}$, and $10^{-4}$ before its termination.
}
\textcolor{black}{
Since it is reported in~\cite{Luciani2014} that the Jacobi-like methods tend to suffer from the cases where the input tuple is a perturbed version of a simultaneously diagonalizable tuple with a common diagonalizer of large condition number, we first compared all the ASD algorithms for $\kappa = 50$. Figure~\ref{fig:iter} depicts the transition of the mean values of the relative squared errors of $\tbA(t), \tS(t)$ and $(\tLam_{1}(t), \ldots , \tLam_{K}(t))$, after proper column-wise normalization/permutation for $\tS(t)$ and $(\tLam_{k}(t))_{k=1}^{K}$, over $100$ trials in the case of SNR $50$ dB.
Figure~\ref{fig:iter} illustrates that, compared with the Jacobi-like methods, Algorithm~\ref{alg:ovralg} achieves estimates (i)~$\tbA(t) \in \SD$, of $\mbA^{\star}$, closer to $\mbA$, (ii)~$\tS_{\mathrm{I,II}}(t)$, of $S_{\mathrm{I}}^{\star}$, closer to $S_{\mathrm{I}}^{\star}$, and (iii)~$(\mbox{$\tLam_{k}$}_{\mathrm{II}}(t))_{k=1}^{K}$, of $(\Lambda^{\star}_{k})_{k=1}^{K}$, closer to $(\Lambda^{\star}_{k})_{k=1}^{K}$ with a smaller number of iterations.
Table~\ref{tbl:cmptime2} depicts the mean values of (i)~the computation times
and (ii)~the numbers of iterations taken until
$\norm{S^{\star}_{\mathrm{I}}-\tS_{\mathrm{I,II}}(t)}_{F}^{2}/\norm{S_{\mathrm{I}}^{\star}}_{F}^{2} \le 10^{-2},
10^{-3}$, or $10^{-4}$ over successful trials in $100$ trials and (iii)~the rates
of successful trials over $100$ trials.
This result shows that Algorithm~\ref{alg:ovralg} takes around
$3$ times as much computation time as JDTM, but its $\tS(t_{\mathrm{end}})$ achieves
the prescribed conditions even for the trials where the Jacobi-like
methods fail to achieve them. Figure~\ref{fig:asd2} depicts the mean values of the relative squared errors of $\tbA(t_{\mathrm{end}}), \tS(t_{\mathrm{end}})$, and $(\tLam_{k}(t_{\mathrm{end}}))_{k=1}^{K}$, after proper column-wise normalization/permutation for $\tS(t)$ and $(\tLam_{k}(t))_{k=1}^{K}$, over $100$ trials in the cases of SNR from $0$ dB to $50$ dB. Figure~\ref{fig:asd2} illustrates that Algorithm~\ref{alg:ovralg} outperforms the
Jacobi-like methods in terms of the approximation errors achieved at $t_{\mathrm{end}}$, especially when SNR is higher than $10$ dB.
}
\textcolor{black}{
We also compared all the algorithms in the case of $\kappa = 5$.
All the values in Table~\ref{tbl:cmptime1} and Figure~\ref{fig:asd1} are
calculated by the same way as done for $\kappa = 50$.
Figure~\ref{fig:asd1} and Table~\ref{tbl:cmptime1} show that Algorithm~\ref{alg:ovralg} takes around $3$ times as much computation time as JDTM but can outperform the
Jacobi-like methods in terms of the approximation errors achieved at $t_{\mathrm{end}}$ for SNR from $5$ dB to $50$ dB.
From these experiments, we see that Algorithm~\ref{alg:ovralg} is robust against a wider range of condition numbers of $S^{\star}$ than the Jacobi-like methods.
} | {"config": "arxiv", "file": "2010.06305/Sections/nmex.tex"} |
TITLE: Show that Cov(X,Y)=0 while X and Y are dependent
QUESTION [2 upvotes]: Problem:
Show that $Cov(X,Y)=0$ while X and Y are dependent.$$f_{X,Y}(x,y)=\begin{cases}1 \ &\text{ for } -y<x<y,0<y<1\\0\ &\text{ otherwise } \end{cases}$$
Attempt:
Starting by drawing the domain for the joint density function I derive a triangle with corners in (0,0), (1,0) and (1,1).
My solution strategy hence becomes to compute the corresponding marginal density functions from:
$$f_{X}(x)=\int_{\in D} f_{X,Y}(x,y) \text{ }dy $$ and to compute the covariance from:
$$Cov(X,Y)=E[XY]-E[X]E[Y].$$ The expected value for X is derived from:
$$E[X]=\int_{-\infty}^\infty x f_{X}(x) \text{ }dx $$ and E[Y] is similarly computed. Likewise
$$E[XY]=\int_{-\infty}^\infty\int_{-\infty}^\infty xy f_{X,Y}(x,y) \text{ }dxdy. $$
Lastly, dependent variables would not fulfil the independence criterion:
$$f_{X,Y}(x,y)=f_X(x)f_Y(y).$$
I suspect that my marginal density functions are incorrect since I fail to get the equality:
$$E[XY]=E[X]E[Y]$$
Which in turn implies that I have misinterpreted the domain or the boundaries from which the marginal densities are computed.
I am again solving this as an exercise in my probability course and any help would be greatly appreciated!
REPLY [2 votes]: The support is the triangle $\triangle (0,0)(-1,1)(1,1)$ because, notice for any $y\in(0;1)$ that $x\in(-y;y)$. Thus establishing the bounds: $$\mathsf E(g(X,Y)) =\int_0^1\int_{-y}^y g(x,y) \operatorname d x\operatorname d y$$
However, you can demonstrate the required result without integration.
The support is : $\{(x,y):−y<x<y,0<y<1\}$ and the density is uniform within the support, so we may use the known properties of uniform distributions. Mainly: $$\begin{align}\because~&X\mid Y~\sim~\mathcal U(-Y;Y)\\[2ex]\therefore ~&\mathsf E(X\mid Y) ~=~ \dfrac{Y-(-Y)}{2}~=~0\end{align}$$
Now $\mathsf {Cov}(X,Y) = \mathsf E(Y\,\mathsf E(X\mid Y))- \mathsf E(Y)\,\mathsf E(\mathsf E(X\mid Y))$ by the tower property (law of iterated expectation). So $$\mathsf {Cov}(X,Y)=0$$
There is clearly a relation between $X,Y$, as the larger the value of $Y$ the wider the range of $X$, however the relation is not linear as $X$ is symmetrically distributed around zero regardless of the value of $Y$. | {"set_name": "stack_exchange", "score": 2, "question_id": 1897562} |
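A Monte Carlo sketch makes this concrete (sample size and tolerance are illustrative): sample $Y$ with density $2y$ on $(0,1)$ via the inverse CDF, then $X \mid Y \sim \mathcal U(-Y, Y)$; the sample covariance is near zero even though the spread of $X$ visibly depends on $Y$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6

# Marginal of Y: f_Y(y) = 2y on (0,1), so Y = sqrt(U) by inverse CDF;
# conditional: X | Y ~ Uniform(-Y, Y).
Y = np.sqrt(rng.uniform(size=N))
X = rng.uniform(-Y, Y)

cov = np.mean(X * Y) - np.mean(X) * np.mean(Y)
assert abs(cov) < 5e-3                      # Cov(X, Y) is approximately 0

# ...yet X and Y are dependent: the spread of X grows with Y.
assert np.var(X[Y < 0.5]) < np.var(X[Y > 0.5])
```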
TITLE: Confusion with differential equations
QUESTION [0 upvotes]: We have the following ODE: $ \dfrac{dy}{dx} = \dfrac{x}{y}$
If we want to solve it we just rewrite this to the form: $ydy = xdx$ and we then take the integral, but here is where I get confused. My textbook says this is the integral:
$\dfrac{y^2}{2} + C_1 = \dfrac{x^2}{2} + C_2 $, but isn't the integral of dx just x and the integral of dy just y? Where are these left out?
REPLY [0 votes]: The integral of $x \, dx$ is $\frac{1}{2} x^2+c$; the $dx$ only indicates the variable of integration, so nothing is left out. | {"set_name": "stack_exchange", "score": 0, "question_id": 310493}
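To see where the constants end up, one can check directly in SymPy that $y = \sqrt{x^2 + C}$ (one branch of the separated solution, with the two integration constants merged into a single $C$) satisfies the ODE:

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)

# Separating gives y dy = x dx, hence y^2/2 = x^2/2 + C_1; the single
# constant C here absorbs the constants from both antiderivatives.
y_expr = sp.sqrt(x**2 + C)   # one branch of the implicit solution

# Check it satisfies dy/dx = x/y.
assert sp.simplify(sp.diff(y_expr, x) - x / y_expr) == 0
```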
TITLE: The relation between the homology of a 1-cycle and its projections
QUESTION [0 upvotes]: Let $U\subseteq\mathbb{C}^n$ be open and $0$-connected, and let $\Gamma\in Z_1(U,\mathbb{Z})$ be a $1$-cycle in $U$. Let $p_k:\mathbb{C}^n\to\mathbb{C}$ be the $k$-th projection; it is continuous and open and induces a map in homology $(p_k)_*:H_1(U,\mathbb{Z})\to H_1(p_k(U),\mathbb{Z})$.
I am looking for conditions on the projected $1$-cycles $\Gamma_k:=(p_k)_\#\Gamma$ that would ensure $[\Gamma]=0$. Clearly, if $[\Gamma]=0$, then $[\Gamma_k]=0$, so it is a necessary condition, but not sufficient, unless $U$ is so nice that $\bigcap_{1\leq k\leq n}\ker(p_k)_* = 0$.
If $U$ were a product, then one could use the Künneth formula. In general, $U$ can be covered by a countable family of products of open sets, and I imagine there is a countable version of Mayer-Vietoris, but this seems like an overkill approach. On the other hand, open sets can be very complicated, so maybe it's hopeless?
If it helps, you can assume that $U$ is bounded.
EDIT: Nick L's comments made me realize that the previous title formulation was ambiguous.
I am looking to express the condition $[\Gamma]=0$ through $\Gamma_1,\dots,\Gamma_n$ rather than $[\Gamma_1],\dots,[\Gamma_n]$.
Sorry for my oversight!
REPLY [1 votes]: The following example illustrates that the condition $[\gamma] = 0$ cannot be recovered from the $[\gamma_{k}]$. In the following example there are infinitely many non-zero $[\gamma]$ for which $[\gamma_{k}] = 0, \forall k$.
Let $U$ be the complement of the set $\{z_{1}^{3} + z_{2}^{2} = 0\}$ in $\mathbb{C}^{2}$, intersected with an open ball of radius $r>0$ (centred at $(0,0)$). Then $U$ is diffeomorphic to $$ (S^{3} \setminus K) \times (0,r), $$ where $K$ is the trefoil knot*. Hence, $H_{1}(U , \mathbb{Z}) \cong \mathbb{Z}$ (this follows from the fact that the first homology of any knot complement in $S^{3}$ is $\mathbb{Z}$, together with the Künneth formula).
The image of $U$ by $p_{i}$ is an open ball in $\mathbb{C}$ with radius $r$ (centred at the origin), hence $H_{1}(p_{i}(U),\mathbb{Z})= 0$ for $i = 1, 2$.
*See here https://en.wikipedia.org/wiki/Torus_knot (section: connection to complex hypersurfaces) for a description of the trefoil knot as the link of the singularity $z_{1}^{3} + z_{2}^{2} = 0$.
Perhaps there is something that can be said if one imposed some additional assumptions on $U$... In any case, I thought this example might be helpful and it was too long for a comment. | {"set_name": "stack_exchange", "score": 0, "question_id": 2729477} |
\begin{document}
\pagestyle{fancy}
\title{Global Convergence of Policy Gradient for Sequential \\Zero-Sum Linear Quadratic Dynamic Games}
\author{Jingjing Bu \and Lillian J. Ratliff \and Mehran Mesbahi\thanks{The authors are with the University of Washington, Seattle, WA, 98195; Emails: \{bu,ratliffl,mesbahi\}@uw.edu}}
\date{}
\maketitle
\begin{abstract}
We propose projection-free sequential algorithms for linear-quadratic dynamic games. These policy gradient based algorithms are akin to the Stackelberg leadership model and can be extended to model-free settings. We show that if the ``leader'' performs natural gradient descent/ascent, then the proposed algorithm has a global sublinear convergence to the Nash equilibrium. Moreover, if the leader adopts a quasi-Newton policy, the algorithm enjoys a global quadratic convergence. Along the way, we examine and clarify the intricacies of adopting sequential policy updates for LQ games, namely, issues pertaining to stabilization, indefinite cost structure, and circumventing projection steps.
\end{abstract}
{\bf Keywords:} Dynamic LQ games; stabilizing policies; sequential algorithms
\section{Introduction}
\label{sec:intro}
Linear-quadratic (LQ) dynamic and differential games exemplify situations where
two players influence an underlying linear dynamics in order to, respectively, minimize and maximize a given quadratic cost on the state and the control over an infinite time-horizon.\footnote{We will adopt the convention of referring to the continuous time scenario as differential games. Moreover, in this paper, we focus on infinite horizon LQ games without a discount factor.} This LQ game setup has a rich history in system and control theory, not only due to its wide range of applications but also since it directly extends its popular one-player twin, the celebrated linear quadratic regulator (LQR) problem~\cite{Bernhard2013-aa,Engwerda2005-np,Zhang2005-pe}.
LQR on the other hand, is one of the foundations of modern system theory~\cite{willems1971least, Anderson2007-mn}. This partially stems from the fact that the elegant analysis of minimizing a quadratic (infinite horizon) cost over an infinite dimensional function space leads to a solution that is in the constant feedback form, that can be obtained via solving the \emph{algebraic Riccati equation} (ARE)~\cite{Lancaster1995algebraic}.
As such, solving LQR and its variants have often been approached from the perspective
of exploiting the structure of ARE~\cite{bini2012numerical}.
The ARE facilitates solving for the so-called cost-to-go (encoded by a positive semi-definite matrix), that
can subsequently be used to characterize the optimal state feedback gain.
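For the discrete-time case, this pipeline (solve the ARE for the cost-to-go $P$, then read off the constant feedback gain) can be sketched with SciPy; the system matrices below are illustrative:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time LQR instance.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Cost-to-go P solves the (discrete) ARE; the optimal gain follows from it:
#   K = (R + B' P B)^{-1} B' P A,   u_t = -K x_t.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The constant state feedback stabilizes the closed loop.
assert np.max(np.abs(np.linalg.eigvals(A - B @ K))) < 1.0
```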
This general point of view has also influenced the ``data-driven'' approaches for solving
the generic LQR problem. For instance, in the value-iteration for
reinforcement learning (RL)---e.g., $Q$ learning---one aims to first estimate
the cost-to-go at a given time instance and through this estimate, update the state feedback gain.
In recent years, RL has witnessed major advances for wide range of
decision-making problems (see, e.g.,~\cite{silver2016mastering,mnih2015human}).
Direct policy update is another algorithmic pillar
for decision-making over time.
The conceptual simplicity of policy optimization offers advantages in terms of
computational scalability and potential extensions to model-free settings.
As such, there is a renewed interest in analyzing the classical LQR problem under the RL framework from the perspective of direct policy updates~\cite{dean2017sample, fazel2018global}.
The extension of LQR control to multiple-agent settings, i.e., LQ dynamic and differential
games, has also been explored by the game theory and optimal control
communities~\cite{basar1999dynamic,Vamvoudakis2017-ls}.
Two-person zero-sum LQ dynamic and differential games
are particular instances of this more general setting where two players aim to optimize an objective with
directly opposing goals subject to a shared linear dynamical system.
More
precisely, one player attempts to minimize the objective while the other aims to
maximize it. This framework has important applications in $\ca H_{\infty}$
optimal control~\cite{basar2008h}. In fact, as it is well-known in control theory,
the saddle point solution---i.e., a Nash equilibrium (NE)---of an
infinite horizon LQ game can be obtained via the corresponding \emph{generalized algebraic
Riccati equation} (GARE). As such, seeking the NE in a zero-sum LQ dynamic
game revolves around solving the
GARE~\cite{stoorvogel1994discrete}. Recently, multi-agent
RL~\cite{srinivasan2018actor, jaderberg2019human} has also achieved impressive
performance by using direct policy gradient updates. Since LQ dynamic games have
explicit solutions via the GARE, understanding the performance of policy gradient
based algorithms for LQ games could serve as a benchmark, and provide deeper insights into theoretical
guarantees of multi-agent RL in more general settings~\cite{Vamvoudakis2017-ls}.
In the meantime, the application of policy optimization algorithms in the game
setting proves to require more intricate analysis due to the fact that the infinite horizon cost is undiscounted and potentially unbounded per stage.
As such, it is well known that devising direct policy iterations for undiscounted and unbounded per stage cost functions in the RL setting is nontrivial~\cite{bertsekas2005dynamic}. The cost structure of standard LQR, however, streamlines the design of policy based iterations~\cite{hewer1971iterative, fazel2018global, bu2019lqr}.\footnote{In~\cite{fazel2018global, bu2019lqr} it has been assumed that the quadratic state cost is via a positive definite $Q$; this is not ``standard'' as only the detectability of the pair $(Q, A)$ and $Q \succeq 0$ is required for LQR synthesis.} Nevertheless in policy iteration, special care has to be exercised to ensure that the iterative policy updates are in fact stabilizing.
The stabilization issue is particularly relevant in LQ dynamic games.
Note that the policy space for LQ games is an open set admitting a Cartesian product structure.
Hence, in the policy updates for LQ dynamic games (say in the RL setting), we must guarantee that the iterates jointly stay in the open set, as otherwise the ``simulator'' would diverge. Recently, Zhang \emph{et al.}~\cite{zhang2019policy}, using certain assumptions and relying on a projection step, have proposed sequential direct policy updates with sublinear convergence for LQ dynamic games.
\paragraph{{\bf Contributions.}} In this paper, we first clarify the setting for
discussing sequential LQ dynamic games, particularly addressing issues pertaining to stabilization.
We then propose leader-follower type algorithms that resemble the Stackelberg leadership model~\cite{basar1999dynamic}.
Specifically, in the proposed iterative algorithms for LQ games, one player is designated as a leader
and the other as a follower.
We require that the leader plays natural gradient or quasi-Newton policies while the follower can play any first-order based policies. In particular, we do not require a specific player to be the leader as long as the algorithm can be initialized appropriately. We prove that if the leader performs a natural gradient policy update, then the proposed leader-follower algorithm has a global sublinear convergence and asymptotic linear convergence rate. In the meantime, if the leader adopts a quasi-Newton policy update, the algorithm converges at a global quadratic rate.
Moreover, we show that gradient policy (respectively, natural gradient
and quasi-Newton policies) has a global linear (respectively, linear and
quadratic) convergence to the optimal stabilizing feedback gains even when the
state cost matrix $Q$ is indefinite. This result essentially extends
the results on standard LQR investigated in~\cite{fazel2018global, bu2019lqr},
where the analysis relies on the assumption of having $Q \succ 0$; this
extension is of independent interest for various optimal control
applications as well, e.g., control with conflicting
objectives~\cite{willems1971least}. Compared with the results presented
in~\cite{zhang2019policy}, the contributions of this work include
the following:
\begin{enumerate}
\item[(1)] We remove the ``nonstandard assumption'' that the NE point $(K_*, L_*)$ satisfies $Q-L_*^{\top}R_2 L_* \succ 0$.\footnote{It is noted in~\cite{zhang2019policy} that the projection step is not generally required in implementations; however, the convergence analysis presented in~\cite{zhang2019policy} is based on such a projection.} We note that such an assumption is not standard in the LQ literature and, as such, needs further justification beyond its algorithmic implications. In fact, in the analysis presented in~\cite{zhang2019policy}, it is crucial for the convergence of the algorithm to know a priori a positive number $\varepsilon > 0$ defining the set $\{L: Q - L^{\top}R_2 L \succeq \varepsilon I\}$; moreover, one has to be able to project onto this set.
\item[(2)] Our setting allows for larger stepsizes for the policy iteration in LQ dynamic games, greatly improving its practical performance. This is facilitated by providing insights into the stabilizing policy updates through a careful analysis of the corresponding indefinite GARE.
\item[(3)] We clarify the interplay between key concepts in control and optimization in the convergence analysis of the proposed iterative algorithms for LQ dynamic games. This is in line with our belief that identifying the role of concepts such as stabilizability and detectability in the convergence analysis of ``data-guided'' algorithms for decision making problems with an embedded dynamic system is of paramount importance.
\item[(4)] We show that the quasi-Newton policy has a global quadratic convergence rate for LQ dynamic games. This result might be of independent interest for discrete-time GARE, data-driven or not. To the best of our knowledge, the proposed algorithm is the first iterative approach for discrete-time GARE with a global quadratic convergence.
\item[(5)] Finally, we show that in the proposed iterative algorithms for LQ dynamic games, any player can assume the role of the ``leader" whereas in~\cite{zhang2019policy}, it is required that the player maximizing the cost be designated as the leader. As such, we clarify the algorithmic source of asymmetry in the leader-follower setup for solving this class of dynamic game problems.
\end{enumerate}
\section{Notation and Background}
\label{sec:notations}
We use the symbols $\prec, \preceq, \succ, \succeq$ to denote the ordering induced by the positive semidefinite (p.s.d.) cone. Namely, $A \succeq B$ means that $A - B$ is positive semidefinite. For a symmetric matrix $M \in \bb R^{n \times n}$, we denote its eigenvalues in non-decreasing order, i.e., $\lambda_1(M) \le \ldots \le \lambda_n(M)$.
\par
Let us recall relevant definitions and results from control theory. A matrix $A \in \bb R^{n \times n}$ is Schur if all the eigenvalues of $A$ are \emph{inside} the \emph{open unit disk} of $\bb C$, i.e., $\rho(A) < 1$ where $\rho(\cdot)$ denotes the spectral radius. A pair $(A, B)$ with $A \in \bb R^{n \times n}$ and $B \in \bb R^{n \times m}$ is stabilizable if there exists some $K \in \bb R^{m \times n}$ such that $A-BK$ is Schur. A pair $(C, A)$ is detectable if $(A^{\top}, C^{\top})$ is stabilizable. An eigenvalue $\lambda$ of $A$ is $(C, A)$-observable if
\begin{align*}
\rank \begin{pmatrix}
\lambda I - A \\
C
\end{pmatrix} = n.
\end{align*}
A matrix $K \in \bb R^{m \times n}$ is stabilizing for system pair $(A, B)$ if $A-BK$ is Schur; it is \emph{marginally stabilizing} if $\rho(A-BK) = 1$.
For fixed $A \in \bb R^{n \times n}$ and $Q \in \bb R^{n \times n}$,
the Lyapunov matrix equation (in $X$) is of the form
\begin{align*}
A^{\top} X A + Q - X= 0.
\end{align*}
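As a quick numerical illustration (a NumPy sketch with an arbitrary Schur matrix $A$; the helper name \texttt{dlyap} is ours), the Lyapunov equation above can be solved by vectorization, since $\vect(A^{\top}XA) = (A^{\top}\otimes A^{\top})\vect(X)$:

```python
import numpy as np

def dlyap(A, Q):
    """Solve A^T X A + Q - X = 0 by vectorization.

    Since vec(A^T X A) = (A^T kron A^T) vec(X), the equation reads
    (I - A^T kron A^T) vec(X) = vec(Q), solvable whenever A is Schur.
    """
    n = A.shape[0]
    T = np.eye(n * n) - np.kron(A.T, A.T)
    return np.linalg.solve(T, Q.reshape(-1)).reshape(n, n)

A = np.array([[0.5, 0.2], [0.0, 0.4]])   # Schur: spectral radius < 1
Q = np.eye(2)
X = dlyap(A, Q)
assert np.allclose(A.T @ X @ A + Q - X, 0)                 # residual check
assert np.allclose(X, X.T) and np.all(np.linalg.eigvalsh(X) > 0)
```

The unique solution is symmetric, and positive definite here since $Q \succ 0$.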
For a system pair $(A, B)$, $Q \in \bb R^{n \times n}$, and invertible $R \in \bb R^{m \times m}$, the discrete algebraic Riccati equation (DARE) is of the form
\begin{align}
\label{eq:dare}
A^{\top} X A - X - A^{\top} XB (R+B^{\top}XB)^{-1} B^{\top} XA + Q = 0.
\end{align}
Next, we recall a result on standard linear-quadratic-regulator (LQR) control.
\begin{theorem}
If $Q \succeq 0$, $R \succ 0$, $(A,B)$ is stabilizable and the spectrum of $A$ on the unit circle (centered at the origin) in $\bb C$ is $(Q, A)$-observable, then there exists a unique maximal solution $X^+$ to DARE~\eqref{eq:dare}.\footnote{Where the notion of maximality is with respect to the p.s.d. ordering.}
Moreover, the infinite-horizon LQR cost is $x_{0}^{\top} X^{+} x_{0}$ and the optimal feedback control $K_*$ is stabilizing and characterized by $K_* = (R+B^{\top}X^+ B)^{-1} B^{\top} X^+ A$.
\end{theorem}
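Concretely, the maximal solution $X^+$ of~\eqref{eq:dare} can be approximated by the classical Riccati (value) iteration $X_{k+1} = A^{\top} X_k A - A^{\top} X_k B (R + B^{\top} X_k B)^{-1} B^{\top} X_k A + Q$ started from $X_0 = 0$. The sketch below (NumPy, with arbitrary illustrative data) verifies the fixed point and that the resulting gain is stabilizing:

```python
import numpy as np

def dare_fixed_point(A, B, Q, R, iters=500):
    """Riccati (value) iteration for the DARE; under the theorem's
    hypotheses it converges to the maximal solution X^+."""
    X = np.zeros_like(Q)
    for _ in range(iters):
        G = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
        X = A.T @ X @ A - A.T @ X @ B @ G + Q
    return X

A = np.array([[1.1, 0.3], [0.0, 0.9]])    # unstable open loop
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
X = dare_fixed_point(A, B, Q, R)
K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)   # optimal gain K_*
res = A.T @ X @ A - X - A.T @ X @ B @ K + Q          # DARE residual
assert np.allclose(res, 0, atol=1e-8)
assert max(abs(np.linalg.eigvals(A - B @ K))) < 1.0  # K is stabilizing
```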
In the presentation, we shall refer to a solution $X_0$ of~\eqref{eq:dare} as \emph{stabilizing} if the corresponding feedback gain $K_0 = (R+B^{\top}X_0B)^{-1}B^{\top} X_0 A$ is stabilizing; a solution $X_0$ is \emph{almost stabilizing} if the corresponding gain is marginally stabilizing.\par
In the sequential LQ game setup, one key difference from standard LQR is that $Q$ may be indefinite in the corresponding DARE. As such, the following generalization of the above theorem becomes particularly relevant.
\begin{theorem}
Suppose that $Q = Q^{\top}$, $R \succ 0$, $(A,B)$ is stabilizable and there exists a solution $X$ to DARE~\eqref{eq:dare}.
Then there exists a maximal solution $X^+$ to the DARE such that the LQR cost is given by $x_0^{\top} X^{+}x_0$. Moreover, the optimal feedback control is given by $K_* = (R+B^{\top}X^+B)^{-1} B^{\top} X^+ A$ and the eigenvalues of $A-BK_*$ lie inside the {closed} unit disk of $\bb C$.
\end{theorem}
For DAREs with an indefinite $Q$, we recall a theorem concerning the existence of solutions.
\begin{theorem}[Theorem $13.1.1$ in~\cite{Lancaster1995algebraic}]
\label{thrm:dare_existence}
Suppose that $(A, B)$ is stabilizable, $R=R^{\top}$ is invertible, $Q = Q^{\top}$ (no definiteness assumption), and there exists a symmetric solution $\hat{X}$ to the matrix inequality,
\begin{align*}
\ca R(X) \coloneqq A^{\top} X A + Q - X - A^{\top} X B (R+B^{\top} X B)^{-1}B^{\top} X A\succeq 0,
\end{align*}
with $R + B^{\top} \hat{X} B \succeq 0$.
Then there exists a maximal solution $X^+$ to~\eqref{eq:dare} such that $R + B^{\top} X^+ B \succ 0$. Moreover, all eigenvalues of $A-B (R+B^{\top}X^+B)^{-1} B^{\top} X^+A$ are inside the closed unit disk.
\end{theorem}
The map $\ca R_{A, B, Q, R}: \bb R^{n \times n} \to \bb R^{n \times n}$ will be referred to as the \emph{Riccati map} in our analysis; we will also suppress its dependence on the system parameters $A, B, Q, R$. In our subsequent analysis, these system parameters will not remain constant, as the corresponding feedback gains are iteratively updated.
\section{LQ Dynamic Games and Some of Their Analytic Properties}
In this section, we review the setup of zero-sum LQ games. In particular, we discuss the modified sequential formulation of LQ games and make a few analytical observations that are of independent interest. We note that some of these observations have become necessary only in the context of sequential policy updates for LQ dynamic games.
\subsection{Zero-sum LQ Dynamic Games}
In the standard setup of an LQ game, we consider a (discrete-time) linear time invariant model of the form,
\begin{align*}
x(k+1)= A x(k) - B_1 u_1(k) - B_2 u_2(k), \qquad x(0) = x_0,
\end{align*}
where $A \in \bb R^{n \times n}$, $B_1 \in \bb R^{n \times m_1}$, $B_2 \in \bb R^{n \times m_2}$, and $u_1(k)$ and $u_2(k)$ are the strategies played by the two players. The cost incurred by both players is the quadratic cost
\begin{align*}
J(u_1, u_2, x_0) &= \sum_{k=0}^{\infty} \left( \langle x(k), Q x(k) \rangle + \langle u_1(k), R_1 u_1(k) \rangle - \langle u_2(k), R_2 u_2(k) \rangle \right),
\end{align*}
where $Q \in \bb S_n^{+}$, $R_1 \in \bb S_{m_1}^{++} , R_2 \in \bb S_{m_2}^{++},$ and $x_0$ is the initial condition;\footnote{The notations $\bb S_n^{+}$ and $\bb S_n^{++}$ designate, respectively, $n \times n$ symmetric positive semidefinite and positive definite matrices.} the underlying inner product is denoted by $\langle \cdot, \cdot \rangle$. In this setting, player one chooses its policy to minimize $J$ while player two aims to maximize it. \par
The players' strategies that we will be particularly interested in are closed-loop static linear policies, namely, policies of the form $u_1(k) = K x(k)$ and $u_2(k) = Lx(k)$, where $K \in \bb R^{m_1 \times n}$ and $L \in \bb R^{m_2 \times n}$. Note that the cost function is guaranteed to be finite over the set of \emph{Schur stabilizing feedback gains},
\begin{align*}
\ca S = \{(K, L) \in \bb R^{m_1 \times n} \times \bb R^{m_2 \times n} : \rho(A-B_1 K - B_2 L) < 1\}.
\end{align*}
Indeed, if $(K, L) \in \ca S$, with initial condition $x_0$, the cost will be given by,
\begin{align*}
J(u_1, u_2, x_0) &= x_0^{\top} \left( \sum_{j=0}^{\infty} [(A-B_1K-B_2 L)^{\top}]^{j} (Q+K^{\top}R_1 K-L^{\top}R_2L) (A-B_1K-B_2L)^j\right)x_0 \\
&= x_0^{\top} X x_0,
\end{align*}
where $X$ solves the \emph{Lyapunov matrix equation},
\begin{align}
\label{eq:lyapunov_matrix}
(A-B_1K-B_2L)^{\top} X(A-B_1K-B_2L) + Q+ K^{\top}R_1K-L^{\top}R_2 L = 0.
\end{align}
Note that~\eqref{eq:lyapunov_matrix} has a unique solution if $(K, L) \in \ca S$. We say that a pair of strategies $(K, L)$ is \emph{admissible} if $(K, L) \in \ca S$.
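As a sanity check (a NumPy sketch; the matrices below are arbitrary illustrative data, and the helper name \texttt{game\_cost} is ours), the cost of an admissible pair obtained from the Lyapunov equation~\eqref{eq:lyapunov_matrix} can be compared against a truncation of the defining series:

```python
import numpy as np

def game_cost(A, B1, B2, Q, R1, R2, K, L, x0):
    """Cost x0^T X x0 for admissible (K, L), with X solving
    Acl^T X Acl - X + Q + K^T R1 K - L^T R2 L = 0 (via vectorization)."""
    Acl = A - B1 @ K - B2 @ L
    assert max(abs(np.linalg.eigvals(Acl))) < 1.0    # (K, L) lies in S
    Qt = Q + K.T @ R1 @ K - L.T @ R2 @ L
    n = A.shape[0]
    X = np.linalg.solve(np.eye(n * n) - np.kron(Acl.T, Acl.T),
                        Qt.reshape(-1)).reshape(n, n)
    return x0 @ X @ x0

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B1 = np.array([[1.0], [0.0]]); B2 = np.array([[0.0], [1.0]])
Q = np.eye(2); R1 = np.array([[1.0]]); R2 = np.array([[5.0]])
K = np.array([[0.2, 0.0]]); L = np.array([[0.0, 0.1]])
x0 = np.array([1.0, -1.0])
J = game_cost(A, B1, B2, Q, R1, R2, K, L, x0)

# truncated series  sum_j x(j)^T (Q + K^T R1 K - L^T R2 L) x(j)
Acl = A - B1 @ K - B2 @ L
Qt = Q + K.T @ R1 @ K - L.T @ R2 @ L
x, J_trunc = x0.copy(), 0.0
for _ in range(2000):
    J_trunc += x @ Qt @ x
    x = Acl @ x
assert np.allclose(J, J_trunc)
```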
\begin{remark}
An elegant result in $\ca H_{\infty}$ control theory states that the minimax problem,
\begin{align*}
\inf_{u_1 \in \ell_2(\bb N)} \sup_{u_2 \in \ell_2(\bb N)} \{J(u_1, u_2) \, | \, x_{u_1, u_2} \in \ell_2(\bb N)\},
\end{align*}
where $\ell_2(\bb N)$ denotes the Banach space of all square summable sequences and $x_{u_1, u_2}$ denotes the state trajectory after adopting control signals $u_1, u_2$, has a unique saddle point for all initial conditions if and only if there exist two static linear feedback gains $K_*$ and $L_*$ such that $u_1(k) = K_* x(k)$ and $u_2(k)=L_* x(k)$ satisfy the saddle point condition~\cite{stoorvogel1990h}. Hence, the restriction of the optimization process to static linear policies in the LQ game setting is without loss of generality.
\end{remark}
A stabilizing \emph{Nash equilibrium} for the zero-sum game is the pair of actions $\{u_1^{*}(k), u_2^{*}(k)\} =\{K_* x(k), L_*x(k)\}$ such that,
\begin{align}
\label{eq:nash_ineq}
J(u_1^*(k), u_2(k)) \le J(u_1^*(k), u_2^*(k)) \le J(u_1(k), u_2^*(k)),
\end{align}
for all initial states $x_0$ and all $u_1(k), u_2(k)$ for which $(u_1(k), u_2^*(k))$ and $(u_1^*(k), u_2(k))$ are both admissible pairs. We \emph{emphasize} that it is important that $(u_1(k), u_2^*(k))$ and $(u_1^*(k), u_2(k))$ are stabilizing action pairs in the inequality~\eqref{eq:nash_ineq}. To demonstrate this delicate situation, we denote by $\ca S_{\pi_i}$ the projection of $\ca S$ onto the $i$th coordinate, i.e.,
\begin{align*}
\ca S_{\pi_1} = \{K: \exists L \text{ such that } A-B_1 K-B_2 L \text{ is Schur}\},\\
\ca S_{\pi_2} = \{L: \exists K \text{ such that } A-B_1 K-B_2 L \text{ is Schur}\},
\end{align*}
and $\ca S_{\hat{K}}, \ca S_{\hat{L}}$ as sets defined by,
\begin{align*}
\ca S_{\hat{K}} = \{L: A-B_1 \hat{K}-B_2 L \text{ is Schur}\}, \\
\ca S_{\hat{L}} = \{K: A-B_2 \hat{L}-B_1 K \text{ is Schur}\}.
\end{align*}
Clearly, $\ca S_{K_*} \subset \ca S_{\pi_2}$ and $\ca S_{L_*} \subset \ca S_{\pi_1}$, and these inclusions are in general strict. Hence, it is not the case that for all $K \in \ca S_{\pi_1}$ and all $L \in \ca S_{\pi_2}$, the corresponding actions $u_1(k) = K x(k)$ and $u_2(k)=Lx(k)$ yield
\begin{align*}
J(u_1^*(k), u_2(k)) \le J(u_1^*(k), u_2^*(k)) \le J(u_1(k), u_2^*(k)).
\end{align*}
This is simply due to the fact that $(\hat{K}, L_*)$ is not guaranteed to be stabilizing for all $\hat{K} \in \ca S_{\pi_1}$. \par
Note that the cost function $J$ is a function of the policies $K, L$ and the initial condition $x_0$. Since we are interested in a Nash equilibrium independent of the initial condition, naturally, we should formulate cost functions for both players that reflect this independence. Indeed, this point has been discussed in~\cite{bu2019lqr}, where it has been argued that such a formulation is in general necessary for the cost functions to be well defined (see \S III of~\cite{bu2019lqr} for details). The independence with respect to initial conditions can be achieved either by sampling $x_0$ from a distribution with full-rank covariance~\cite{fazel2018global}, or by choosing a spanning set $\{w_1, \dots, w_n\} \subseteq \bb R^n$~\cite{bu2019lqr} and defining the value function over $\ca S$ as,
\begin{align*}
f(K, L) = \sum_{i=1}^n J_{w_i}(K, L),
\end{align*}
where $J_{w_i}(K, L)$ is the cost incurred by choosing the initial state $w_i$, $u_1(k) = Kx(k)$, and $u_2(k) = Lx(k)$. Note that over the set $\ca S$ the function $f$ admits the compact form,
\begin{align*}
f(K, L) = \Tr( X {\bf \Sigma}),
\end{align*}
where $ {\bf \Sigma} = \sum_{i=1}^n w_i w_i^{\top}$ and $X$ is the solution to~\eqref{eq:lyapunov_matrix}.
\paragraph{Behavior of $f$ on $\ca S^{c}$:}
How the cost function $f$ behaves near the boundary $\partial \ca S$ is of paramount importance in the design of iterative algorithms for LQ games. In the standard LQR problem (corresponding to the single player case in the game setup), the cost function diverges to $+\infty$ as the feedback gain approaches the boundary of this set (see~\cite{bu2019lqr} for details). In fact, this property guarantees stability of the solution obtained via first order iterative algorithms for a suitable choice of stepsize. However, the behavior of $f$ on the boundary $\partial \ca S$ can be more intricate. For example, if $(K, L) \in \partial \ca S$, i.e., $\rho(A-B_1 K-B_2 L) = 1$, it is possible that the cost is still finite for both players. This happens when an eigenvalue of $A-B_1K-B_2 L$ on the unit circle in the complex plane is not $(Q+K^{\top} R_1 K - L^{\top} R_2 L, A-B_1K -B_2L)$-observable. To see this, we observe that for every $w_i$, the cost is given by the series
\begin{align*}
J_{w_i}(K, L) = w_i^{\top} \left(\sum_{j=0}^{\infty} ((A-B_1K-B_2L)^\top)^{j} (Q+K^{\top}R_1 K-L^{\top}R_2 L)(A-B_1K-B_2L)^{j} \right)w_i,
\end{align*}
which converges to a finite (real) number whenever the marginally stable modes are not observable with respect to the indicated pair.
Even on $\bar{\ca S}^c$ (the complement of the closure of $\ca S$), $f$ can be finite if all the unstable modes of $A-B_1K-B_2L$ are not $(Q+K^{\top} R_1 K - L^{\top} R_2 L, A-B_1K -B_2L)$-observable. This complication suggests that the function value is no longer a valid indicator of stability. We remark that such a situation does not occur in the LQ setting examined in~\cite{fazel2018global, bu2019lqr}, as it is assumed there that $Q$ is positive definite.
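This phenomenon is easy to exhibit numerically (a contrived sketch for illustration): take a closed-loop matrix with $\rho = 1$ whose marginal mode is annihilated by the effective state cost, so that the series remains bounded even though the spectral radius is one:

```python
import numpy as np

# closed-loop matrix with rho = 1, and an effective cost Qt that does
# not "see" the marginal mode e_1 (the eigenvalue 1 is unobservable)
Acl = np.diag([1.0, 0.5])
Qt = np.diag([0.0, 1.0])
w = np.array([1.0, 1.0])

# partial sums of  sum_j w^T (Acl^T)^j Qt Acl^j w
S, x = 0.0, w.copy()
for _ in range(200):
    S += x @ Qt @ x
    x = Acl @ x
# the marginal mode contributes nothing; the geometric series of the
# stable mode converges to 1 / (1 - 0.25) = 4/3
assert abs(S - 4.0 / 3.0) < 1e-10
```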
\subsection{Stabilizing Policies in Sequential Zero-Sum LQ Games}
Another subtle situation arising in sequential zero-sum LQ dynamic games is as follows: there is clearly no incentive for player $1$ to destabilize the dynamics. However, from the perspective of player $2$, making the states diverge to infinity could be desirable, as this player aims to maximize the cost. For player $1$, when $Q-L^{\top} R_2 L$ is not positive semidefinite, it is also possible that the best response does not lie in $\ca S_{\pi_1}$. Hence, in each round $j$, in order to guarantee that the game can continue, it is important that both players choose their respective policies in $\ca S_{\pi_1}$ and $\ca S_{\pi_2}$. We may then stipulate that both players play \emph{Schur stabilizing} policies. We can justify this constraint by insisting that both players have an incentive to stabilize the system in the first place. This can also be encoded in the cost function for each player. That is, we may define the cost functions for player $1$ and player $2$ by,
\begin{align*}
f_1(K, L) = \delta_{\ca S_{\pi_1}}(K) + f(K, L),\\
f_2(K, L) = -\delta_{\ca S_{\pi_2}}(L) + f(K, L),
\end{align*}
where $\delta_{\ca S_{\pi_i}}$ is the indicator function of the set $\ca S_{\pi_i}$,
\begin{align*}
\delta_{\ca S_{\pi_i}}(x) = \begin{cases}
0, & x \in \ca S_{\pi_i},\\
+\infty, & x \notin \ca S_{\pi_i}.
\end{cases}
\end{align*}
Then we have two cost functions, defined everywhere for both players, that assume a finite common value on $\ca S$, namely $f(K, L)$. We still need to be careful in realizing that there are points at which the function value is indeterminate. For example, it is possible to find a point $(\hat{K}, \hat{L})$ such that $f(\hat{K}, \hat{L}) = -\infty$; then $f_1(\hat{K}, \hat{L}) = +\infty - \infty$. To resolve this complication, we shall declare the function value to be the first summand; namely, if $f_1(\hat{K}, \hat{L}) = + \infty - \infty$, then $f_1(\hat{K}, \hat{L}) \equiv +\infty$. \par
From the perspective of sequential algorithm design, these newly introduced cost functions constrain both players to play policies in $\ca S$. It might be tempting to design projection based algorithms. However, this can be difficult, since describing the sets $\ca S_{\pi_1}$ and $\ca S_{\pi_2}$ for a given system $(A, B)$ is not straightforward. We shall see later that, by exploiting the problem structure, we can design sequential algorithms for both players that guarantee this condition without any projection step.
\subsection{Analytic Properties of the Cost Function}
In this section, we clarify analytic properties of the cost function in terms of the policies played by the two players; that is, we consider the cost function $f(K, L)$ over $\ca S$.\footnote{In our formulation, the two players have different cost functions; over the set $\ca S$, however, these cost functions coincide.}
To begin with, we observe that the set $\ca S$, even though not convex, still possesses nice topological properties.
\begin{proposition}
\label{prop:domain_topo}
The set $\ca S$ is open, contractible (in particular, path-connected and simply connected), and in general non-convex.
\end{proposition}
\begin{proof}
It suffices to note that by Kalman Decomposition~\cite{wonham2012linear}, there exists some $T \in GL_n(\bb R)$ such that,
\begin{align*}
T A T^{-1} = \begin{pmatrix}
\tilde{A}_{11} & \tilde{A}_{12}\\
0 & \tilde{A}_{22}
\end{pmatrix}, T[B_1, B_2] = \begin{pmatrix}
\tilde{B}_1 \\
0
\end{pmatrix},
\end{align*}
where $(\tilde{A}_{11},\tilde{B}_1)$ is controllable and $\tilde{A}_{22}$ is Schur. Suppose that $\tilde{B}_{1} \in \bb R^{n_1 \times (l_1 + l_2)}$, and further observe that $\ca S$ can be diffeomorphically identified with $\ca S_{(\tilde{A}_{11}, \tilde{B}_1)} \times \bb R^{(n-n_1) \times (l_1 + l_2)}$. The statement then follows from the results reported in~\cite{bu2019topological}.
\end{proof}
Although the set $\ca S$ is generally not convex, Proposition~\ref{prop:domain_topo} assures us that algorithms based on local search (\emph{e.g.}, gradient descent) can potentially reach the Nash equilibrium. If $\ca S$ had more than one path-connected component, it would be impossible to guarantee convergence to the Nash equilibrium under random initialization. Moreover, this observation implies that $f$ is not convex-concave, as its domain is not even convex.\par
We next observe that the value function is smooth and indeed real analytic, i.e., $f \in C^{\omega}(\ca S)$.
\begin{proposition}
One has $f \in C^{\omega}(\ca S)$.
\end{proposition}
\begin{proof}
For $(K, L) \in \ca S$, $f$ is the composition
\begin{align*}
(K, L) \mapsto X(K, L) \mapsto \Tr(X {\bf \Sigma}),
\end{align*}
where $X$ solves~\eqref{eq:lyapunov_matrix}. But
\begin{align*}
\vect(X) = \left( I \otimes I - A_{K, L}^{\top} \otimes A_{K, L}^{\top}\right)^{-1} \vect(Q+ K^{\top} R_1 K - L^{\top} R_2 L),
\end{align*}
by Cramer's Rule, the entries of $X(K, L)$ are rational (hence real analytic) functions on $\ca S$; the proof thus follows.
\end{proof}
As $f$ is smooth, its partial derivatives with respect to $K$ and $L$ can be characterized as follows.
\begin{proposition}
On the set $\ca S$, the gradients of $f$ with respect to its arguments are given by,
\begin{align*}
&\partial_{K} f(K, L) = 2(R_1 K - B_1^{\top} X A_{K, L}) Y, \\
&\partial_{L} f(K, L) = 2(-R_2 L - B_2^{\top} X A_{K, L}) Y,
\end{align*}
where $X$ solves the Lyapunov equation~\eqref{eq:lyapunov_matrix} and $Y$ solves the Lyapunov equation,
\begin{align}
\label{eq:lyapunov_matrix_Y}
A_{K, L} Y A_{K, L}^{\top} - Y + {\bf \Sigma} = 0.
\end{align}
\end{proposition}
\begin{proof}
It suffices to rewrite the Lyapunov equation in the form
\begin{align*}
\left( A - \begin{pmatrix}B_1 & B_2\end{pmatrix} \begin{pmatrix} K \\ L \end{pmatrix}\right)^{\top} X\left( A - \begin{pmatrix}B_1 & B_2\end{pmatrix} \begin{pmatrix} K \\ L \end{pmatrix}\right) - X + Q + \begin{pmatrix} K \\ L \end{pmatrix}^{\top} \begin{pmatrix} R_1 & 0 \\ 0 & -R_2 \end{pmatrix} \begin{pmatrix} K \\ L \end{pmatrix} = 0.
\end{align*}
By the result in~\cite{fazel2018global, bu2019lqr}, the gradient of $f$ is given by
\begin{align*}
\nabla f(K, L) = \begin{pmatrix} 2(R_1 K - B_1^{\top}XA_{K, L})Y \\ 2(-R_2 L - B_2^{\top}XA_{K, L})Y \end{pmatrix}.
\end{align*}
\end{proof}
We now observe that $Y(K,L)$ is a smooth function of $(K, L)$ and is positive definite everywhere on $\ca S$. Hence, $Y(K, L)$ is a well-defined Riemannian metric on $\ca S$, and under this metric we can identify the corresponding gradient. In the learning and statistics literature, such a gradient is referred to as a ``natural gradient.'' We shall use $N_{f, K}$ and $N_{f, L}$ to denote the natural gradients of $f$ over $K$ and $L$, respectively. Namely,
\begin{align*}
N_{f, K}(K, L) = R_1 K - B_1^{\top} X A_{K, L}, \\
N_{f, L}(K, L) = -R_2 L - B_2^{\top} X A_{K, L}.
\end{align*}
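These expressions are straightforward to validate numerically. The NumPy sketch below (arbitrary illustrative data; the overall factor of $2$ in the Euclidean gradient comes from differentiating the quadratic dependence of $\Tr(X {\bf \Sigma})$ on the policies) compares the $K$-gradient against central finite differences; the check for the $L$-gradient is analogous:

```python
import numpy as np

def dlyap(A, Q):
    """Solve A^T X A + Q - X = 0 by vectorization."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n * n) - np.kron(A.T, A.T),
                           Q.reshape(-1)).reshape(n, n)

def cost_and_grads(A, B1, B2, Q, R1, R2, K, L, Sigma):
    Acl = A - B1 @ K - B2 @ L
    X = dlyap(Acl, Q + K.T @ R1 @ K - L.T @ R2 @ L)
    Y = dlyap(Acl.T, Sigma)          # Acl Y Acl^T - Y + Sigma = 0
    f = np.trace(X @ Sigma)
    gK = 2 * (R1 @ K - B1.T @ X @ Acl) @ Y    # gradient w.r.t. K
    gL = 2 * (-R2 @ L - B2.T @ X @ Acl) @ Y   # gradient w.r.t. L
    return f, gK, gL

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B1 = np.array([[1.0], [0.0]]); B2 = np.array([[0.0], [1.0]])
Q = np.eye(2); R1 = np.array([[1.0]]); R2 = np.array([[5.0]])
K = np.array([[0.2, 0.0]]); L = np.array([[0.0, 0.1]])
Sigma = np.eye(2)

_, gK, gL = cost_and_grads(A, B1, B2, Q, R1, R2, K, L, Sigma)
# central finite differences of f over the entries of K
h, fdK = 1e-6, np.zeros_like(K)
for i in range(K.shape[0]):
    for j in range(K.shape[1]):
        E = np.zeros_like(K); E[i, j] = h
        fp = cost_and_grads(A, B1, B2, Q, R1, R2, K + E, L, Sigma)[0]
        fm = cost_and_grads(A, B1, B2, Q, R1, R2, K - E, L, Sigma)[0]
        fdK[i, j] = (fp - fm) / (2 * h)
assert np.allclose(fdK, gK, rtol=1e-4, atol=1e-8)
```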
\subsection{A Key Assumption and its Implications}
Throughout the manuscript, we have the following standing assumption.
\begin{assumption}{1}
\label{assump1}
There exists a \emph{stabilizing Nash equilibrium} $(K_*, L_*) \in \ca S$ for the zero-sum game over the system dynamics $(A,[B_1, B_2])$. Moreover, the corresponding value matrix $X_* = X_*(K_*, L_*)$ satisfies at least one of the following conditions:
\begin{itemize}
\item[(a1):]
$R_1 + B_1^{\top} X_* B_1 \succ 0$ and $R_2 - B_2^{\top}X_* B_2 + B_2^{\top} X_* B_1 (R_1 + B_1^{\top}X_* B_1)^{-1}B_1^{\top} X_* B_2 \succ 0$.
\item[(a2):]
$-R_2 + B_2^{\top}X_* B_2 \prec 0$ and $R_1 + B_1^{\top} X_* B_1 - B_1^{\top}X_* B_2 (-R_2 + B_2^{\top}X_* B_2)^{-1} B_2^{\top}X_* B_1 \succ 0$.
\end{itemize}
\end{assumption}
\begin{remark}
The existence of a stabilizing Nash equilibrium is a standard assumption adopted in the LQ literature~\cite{basar2008h}. However, we do not constrain the value matrix $X_*$ to be positive semidefinite, as assumed in~\cite{stoorvogel1994discrete, basar2008h, zhang2019policy}. The definiteness is useful when the LQ game formulation is tied to $\ca H_{\infty}$ control; from the LQ game perspective, however, this association seems unnecessary. Conditions $(a1)$ or $(a2)$ become relevant if it is desired to extract \emph{unique} policies from the optimal value matrix $X_*$. Namely, if the \emph{total derivative} of $f$ vanishes, i.e.,
\begin{align*}
\begin{pmatrix} R_1 & 0 \\ 0 & -R_2 \end{pmatrix} \begin{pmatrix} K \\ L \end{pmatrix} - \begin{pmatrix} B_1^{\top} \\ B_2^{\top} \end{pmatrix} X A + \begin{pmatrix} B_1^{\top} \\ B_2^{\top} \end{pmatrix} X \begin{pmatrix} B_1 & B_2 \end{pmatrix} \begin{pmatrix} K \\ L \end{pmatrix} = 0,
\end{align*}
conditions $(a1)$ or $(a2)$ are sufficient to guarantee the uniqueness of the solution in $\ca S$. Indeed, assumptions $(a1)$ and $(a2)$ are ``almost necessary.'' If $(K_*, L_*)$ is a NE, then $f(\cdot, L_*)$ achieves a local minimum at $K_*$, i.e., $\nabla_{KK} f(K_*, L_*)[E, E] = \langle E, (R_1 + B_1^{\top}X_* B_1)EY_* \rangle \ge 0$ (note that, by assumption, $K_*$ is in the interior of $\ca S$ and thus the second-order partial derivative is well-defined). Similarly, $\nabla_{LL} f(K_*, L_*)[E, E] = \langle E, (-R_2 + B_2^{\top}X_* B_2)EY_* \rangle \le 0$, i.e., $-R_2 + B_2^{\top}X_* B_2 \preceq 0$. We strengthen these two necessary semidefiniteness conditions to strict positive (respectively, negative) definiteness.\footnote{Without this strengthening, the NE would correspond to solutions of a GARE involving the Moore--Penrose inverse, introducing complications beyond what is practically needed.} In fact, in the sequential LQ formulation, the inequalities in these two conditions correspond to certain ``quasi-Newton'' directions and, as such, play a central role in our convergence analysis (see \S\ref{sec:ng} and \S\ref{sec:qn} for details).\par
Moreover, we shall subsequently see that assumptions $(a1)$ and $(a2)$ lead to distinct choices of leaders in the sequential algorithms. More specifically, if we assume condition $(a1)$, the leader of the sequential algorithm should be player $L$; for assumption $(a2)$, player $K$ should be the designated leader.
\end{remark}
We observe several implications of this assumption.
\begin{proposition}
Under Assumption $1$, we have the following implications:
\begin{enumerate}
\item The pair $(A, [B_1, B_2])$ is stabilizable.
\item $X_*$ is symmetric and solves the Generalized Algebraic Riccati Equation (GARE),
\begin{align}
\label{eq:gare}
A^{\top} X A - X + Q - \begin{pmatrix} B_1^{\top} X A \\ B_2^{\top}X A \end{pmatrix}^{\top} \begin{pmatrix} R_1 + B_1^{\top}X B_1 & B_1^{\top}XB_2 \\ B_2^{\top}X B_1 & -R_2 + B_2^{\top}XB_2 \end{pmatrix}^{-1} \begin{pmatrix} B_1^{\top} X A \\ B_2^{\top}X A \end{pmatrix}= 0.
\end{align}
\item $X_*$ is unique among all \emph{almost stabilizing solutions} of~\eqref{eq:gare}.
\end{enumerate}
\end{proposition}
\begin{proof}
The statement in $(1)$ is immediate, since $A-B_1K_*-B_2L_*$ is Schur.\\
In order to show $(2)$, we note that since $(K_*, L_*)$ is a stabilizing Nash equilibrium, $X_*$ is the solution of the Lyapunov equation~\eqref{eq:lyapunov_matrix}; it thus follows that $X_*$ is symmetric. Further, note that the partial gradients of $f$ vanish at $(K_*, L_*)$; namely, $(K_*, L_*) \in \ca S$ solves the equations
\begin{align*}
&\nabla_{K} f(K, L) = (R_1 K - B_1^{\top} X A_{K, L}) Y = 0, \\
&\nabla_{L} f(K, L) = (-R_2 L - B_2^{\top} XA_{K, L}) Y = 0.
\end{align*}
Substituting this in the Lyapunov equation~\eqref{eq:lyapunov_matrix}, it follows that $(K_*, L_*)$ solves the GARE~\eqref{eq:gare}. Note that the inverse,
\begin{align*}
\begin{pmatrix} R_1 + B_1^{\top}X B_1 & B_1^{\top}XB_2 \\ B_2^{\top}X B_1 & -R_2 + B_2^{\top}XB_2 \end{pmatrix}^{-1}
\end{align*}
is well-defined at $X_*$ by condition $(a1)$ or $(a2)$ of the assumption.\\
For the statement in $(3)$, by Lemma $3.1$ of~\cite{stoorvogel1994discrete}, $X_*$ is the unique stabilizing solution. It remains to show that no almost stabilizing solution to~\eqref{eq:gare} exists other than $X_*$. Suppose there exists a pair $(K, L) \in \partial \ca S$, i.e., $\rho(A-B_1K-B_2L) = 1$, solving~\eqref{eq:gare} with solution $X$. Then, taking the difference between the identity~\eqref{eq:gare} at $(K_*, L_*)$ and at $(K, L)$, we have,
\begin{align*}
A_*^{\top} (X_* - X) A_{K, L} = X_*-X.
\end{align*}
Since $A_{K, L}$ is marginally stable and $A_*$ is Schur, $I \otimes I - A_{K, L}^{\top} \otimes A_*^{\top}$ is invertible, and thus $X_* - X = 0$.
\end{proof}
\section{Oracle Models for Sequential LQ Games}
In this work, we assume that both players have access to oracles that return gradient, natural gradient, or quasi-Newton directions. Suppose that $\ca O_K$ and $\ca O_L$ are the oracles for the two players, respectively. The players query their respective oracles in a sequential manner: if player $1$ queries its oracle, we assume that the policy played by player $2$ is fixed during the query and that this policy is transparent to the oracle $\ca O_K$ of player $1$. As $f$ is in general not convex-concave, if the two players have the same oracles and play greedily using the information they have acquired, there is no theoretical guarantee that they will eventually converge to the Nash equilibrium. In order to obtain theoretical guarantees, we assume that player $1$ can access an oracle that computes the minimizer of $f(K, L)$ over $K$ for a fixed $L$. This oracle can be constructed out of simple first-order oracles by repeatedly performing gradient descent, natural gradient descent, or quasi-Newton type steps. More explicitly, we shall assume that for player $1$, if player $2$'s policy is $\hat{L}$, the oracle can return $K \leftarrow \argmin_{K} f(K, \hat{L})$.
\subsection{Motivation}
We shall present the motivation for equipping player $1$ with a more powerful oracle model. As finding the Nash equilibrium is equivalent to finding the saddle point of $f(K, L)$, from the perspective of player $2$, we may associate a value function independent of player $1$. Namely, we may define a function of the form,
\begin{align*}
g(L) = \begin{cases}
\inf_{K \in \ca S_{L}} f(K, L), & \text{if } L \in \ca S_{\pi_2},\\
-\infty, & \text{otherwise}.
\end{cases}
\end{align*}
If $g(L)$ possessed a \emph{smoothness} property, we could consider \emph{projected gradient ascent} over the policy space. However, a closer look reveals that $g(L)$ is not necessarily even continuous on $\ca S_{\pi_2}$. For example, if $Q-L^{\top} R_2 L \prec 0$, then $\inf_{K} f(K, L)$ could be $-\infty$. On the other hand, by Danskin's Theorem, $g(L)$ is differentiable at every $L \in \ca S_{\pi_2}$ at which $f(K, L)$ admits a unique minimizer over $K$.
\begin{lemma}
\label{lemma:differentiable}
Suppose that $U \subseteq \dom(g)$ is an open subset such that for every $L \in U$,
$$\argmin_{K \in \ca S_{L}} f(K, L)$$
exists and is unique. Then $g(L)$ is differentiable on $U$ and its gradient is,
\begin{align*}
\nabla g(L) = \nabla_L f(K_L, L), \text{ where } K_L = \argmin_{K} f(K, L).
\end{align*}
\end{lemma}
Traditionally, Danskin's theorem requires that for every $L$, the minimization over $K$ be over a common compact set. This is not the situation in our case, as $\ca S_L$ is neither compact nor common across $L$.\footnote{Namely, it is not necessarily true that $\ca S_{L_1} = \ca S_{L_2}$ if $L_1 \neq L_2$.} The statement of Lemma~\ref{lemma:differentiable} instead follows from a variant of Danskin's Theorem in~\cite{bernhard1995theorem}.
\begin{proof}
We only need to observe that $f(K, L)$ is $C^{\infty}$ in both variables and thus Fr\'{e}chet differentiable. Hence, Hypothesis $D2$ in~\cite{bernhard1995theorem} is satisfied. By Theorem $D2$ in~\cite{bernhard1995theorem}, $g(L)$ is directionally differentiable in every direction. As the minimizer $K_L$ is unique, the directional derivative is linear in the direction and, consequently, $g(L)$ is differentiable.
\end{proof}
The next issue that needs to be addressed is whether $\ca U$ is empty. It turns out that by standard LQR theory, $\{L \in \dom(g): Q-L^{\top} R_2 L \succ 0\}$ is a subset in $U$.
Assuming that $g(L)$ is Lipschitz, we can thus outline an update rule, namely a
\emph{projected gradient ascent} over $L$:
\begin{align*}
L_{j+1} = P_{\ca S_{\pi_2}} \left(L_j + \eta_j \nabla g(L_j)\right),
\end{align*}
where $\nabla g(L_j)$ is given by
\begin{align*}
\nabla g(L_j) = 2(-R_2 L_j - B_2^{\top}X_{K_{L_j}, L_j} A_{K_{L_j}, L_j}) Y_{K_{L_j}, L_j} \text{ with } K_{L_j} = \argmin_{K} f(K, L_j).
\end{align*}
As already noted, we do not have a full description of the nonconvex set $\ca S_{\pi_2}$, and a projection onto it would be computationally prohibitive. What we propose instead are update rules whose iterates are guaranteed to stay in the set $\ca S_{\pi_2}$; this is achieved without a projection step by exploiting the problem structure.\par
Another interesting interpretation of our setup is to consider the game from the perspective of player $2$: we have a game played by player $2$ against a greedy adversary. Each time player $2$ chooses a policy $L'$, the adversary (player $1$) acts greedily, i.e., minimizes the cost $f(K, L')$ over $K$. The goal for player $2$ is to reach the Nash equilibrium between himself/herself and this greedy adversary. The information player $2$ can acquire from the game (i.e., the oracle) is first-order information (function value and gradient). As such, player $2$ must guarantee that along the iterates $\{L_j\}$, the oracle can return meaningful first-order information on $g(L)$, i.e., that $g$ is differentiable at every $L_j$.
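As a quick sanity check of Lemma~\ref{lemma:differentiable}, the following sketch verifies the Danskin gradient formula on a scalar instance. All parameter values are hypothetical, chosen only so that the inner minimization is well posed; the inner oracle is realized by value iteration on the fixed-$L$ Riccati map, assuming convergence to the maximal stabilizing solution.

```python
# Hypothetical scalar game: x+ = a x + b1 u + b2 w, stage cost
# q x^2 + r1 u^2 - r2 w^2, with Sigma = 1 so that g(L) = X_{K_L, L}.
a, b1, b2, q, r1, r2 = 0.9, 1.0, 0.5, 1.0, 1.0, 5.0

def inner_value(L, iters=400):
    # Value iteration on the fixed-L Riccati map; assumes convergence to the
    # maximal stabilizing solution of the inner ARE.
    abar = a - b2 * L
    X = 0.0
    for _ in range(iters):
        X = q - r2 * L ** 2 + abar ** 2 * X * r1 / (r1 + b1 ** 2 * X)
    return X

def g_and_grad(L):
    # g(L) = min_K f(K, L) and Danskin's formula nabla g(L) = nabla_L f(K_L, L).
    X = inner_value(L)
    abar = a - b2 * L
    K = b1 * X * abar / (r1 + b1 ** 2 * X)       # unique minimizer K_L
    Acl = abar - b1 * K                          # closed-loop A_{K_L, L}
    Y = 1.0 / (1.0 - Acl ** 2)                   # Y_{K_L, L} for Sigma = 1
    grad = 2.0 * (-r2 * L - b2 * X * Acl) * Y
    return X, grad

L0, h = 0.1, 1e-5
gval, grad = g_and_grad(L0)
fd = (g_and_grad(L0 + h)[0] - g_and_grad(L0 - h)[0]) / (2.0 * h)
```

The central finite difference of $g$ agrees with the envelope-theorem formula, as Lemma~\ref{lemma:differentiable} predicts.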
\section{Algorithm: Natural Gradient Policy on $L$}
\label{sec:ng}
Throughout \S\ref{sec:ng} and \S\ref{sec:qn}, we assume that condition $(a1)$ in our assumption holds, i.e.,
\begin{align*}
R_1 + B_1^{\top}X_* B_1 \succ 0, -R_2 + B_2^{\top}X_* B_2 + B_2^{\top} X_* B_1(R_1 + B_1^{\top}X_* B_1)^{-1}B_1^{\top}X_* B_2 \prec 0,
\end{align*}
and the availability of an oracle $\ca O_K$ that returns the stabilizing minimizer of $f(K, L)$ over $K$ for any fixed $L$, provided that such a minimizer exists. Note that the unique minimizer corresponds to the maximal solution $X^+$ of the algebraic Riccati equation (with fixed $L$), namely,
\begin{align*}
(A-B_2 L)^{\top} X (A-B_2 L) - X + Q-L^{\top} R_2 L - (A-B_2 L)^{\top} X B_1 (R_1+B_1^{\top} X B_1)^{-1} B_1^{\top} X (A-B_2 L) = 0.
\end{align*}
We shall subsequently discuss how to construct this oracle by policy gradient based algorithms in \S\ref{sec:pg_K}.
\subsection{Algorithm}
The algorithm is given by:
\begin{algorithm}[H]
\caption{Natural Gradient Policy for LQ Game}
\label{alg1}
\begin{algorithmic}[1]
\State Initialize $L_0$ such that $(A-B_2 L_0, B_1)$ is stabilizable and the DARE
\begin{align*}
(A-B_2L_0)^{\top}X(A-B_2L_0) - X + Q - L_0^{\top}R_2 L_0 - (A-B_2 L_0)^{\top} X B_1(R_1 + B_1^{\top} X B_1)^{-1}B_1^{\top}X(A-B_2L_0) = 0
\end{align*}
has a stabilizing solution $X^+$ with $R_1 + B_1^{\top} X^{+} B_1 \succ 0$.
\For{$j = 1, 2, \ldots$}
\State Set: $K_{j-1} \leftarrow \argmin_K f(K, L_{j-1})$.
\State Set: $L_{j} = L_{j-1} + \eta_{j-1} N_g(L_{j-1}) \equiv L_{j-1} + \eta_{j-1} N_{f, L} (K_{j-1}, L_{j-1})$.
\EndFor
\end{algorithmic}
\end{algorithm}
We note that the initialization step is generally nontrivial. However, if we further assume that $(A, B_1)$ is stabilizable, $Q \succeq 0$, and the eigenvalues of $A$ lying on the unit circle are $(Q, A)$-detectable, then we can choose $L_0 = 0$\footnote{This is indeed the standard assumption in the LQ literature.}. For the general case, we may need to check invariant subspaces of the system parameters (see~\cite{Lancaster1995algebraic} for details).
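For illustration, the following is a minimal sketch of Algorithm~\ref{alg1} on a scalar instance (hypothetical data; since $Q > 0$ here, $L_0 = 0$ is a valid initialization, and the oracle $\ca O_K$ is realized by value iteration on the fixed-$L$ Riccati map):

```python
a, b1, b2, q, r1, r2 = 0.9, 1.0, 0.5, 1.0, 1.0, 5.0

def oracle_K(L, iters=400):
    # Returns the maximal stabilizing ARE solution X and the minimizer K_L.
    abar = a - b2 * L
    X = 0.0
    for _ in range(iters):
        X = q - r2 * L ** 2 + abar ** 2 * X * r1 / (r1 + b1 ** 2 * X)
    return X, b1 * X * abar / (r1 + b1 ** 2 * X)

L = 0.0                               # L_0 = 0 (valid here since Q > 0)
g_init = oracle_K(L)[0]
for _ in range(200):
    X, K = oracle_K(L)
    Acl = a - b1 * K - b2 * L
    V = -r2 * L - b2 * X * Acl        # V_{K_{j-1}, L_{j-1}}; N_g(L) = 2 V
    O = r2 - b2 ** 2 * X * r1 / (r1 + b1 ** 2 * X)   # scalar O_{j-1}
    L = L + (1.0 / (2.0 * O)) * 2.0 * V              # natural gradient ascent
g_final = oracle_K(L)[0]
```

In this run the natural gradient $N_g(L_j) = 2{\bf V}$ vanishes and $g$ increases monotonically, consistent with the analysis below.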
\subsection{Convergence Analysis}
To simplify the notation let,
\begin{align*}
{\bf U}_{K, L} = R_1K - B_1^{\top} X_{K, L} A_{K, L}, \qquad {\bf V}_{K, L} = -R_2 L - B_2^{\top} X_{K, L}A_{K, L};
\end{align*}
namely $2{\bf U}_{K, L} = N_{f, K}(K, L)$ and $2{\bf V}_{K, L} = N_{f, L}(K, L)$.
First a useful observation.
\begin{lemma}[NG Comparison Lemma]
\label{lemma:ng_comparison}
Suppose that $(K, L)$ and $(\hat{K}, \hat{L})$ are both stabilizing and let $X$ and $\hat{X}$ be the corresponding value matrices. Then
\begin{enumerate}
\item[(a)]
\begin{align*}
X-\hat{X} = A_{\hat{K}, \hat{L}}^{\top} (X - \hat{X}) A_{\hat{K}, \hat{L}} + (K - \hat{K})^{\top} {\bf U}_{K, L} + {\bf U}^{\top}_{K, L} (K- \hat{K}) - (K-\hat{K})^{\top} R_1 (K-\hat{K}) \\
+ (L - \hat{L})^{\top} {\bf V}_{K, L} + {\bf V}_{K, L}^{\top} (L-\hat{L}) + (L-\hat{L})^{\top} R_2 (L-\hat{L}) - (A_{K, L}-A_{\hat{K}, \hat{L}})^{\top}X(A_{K, L}- A_{\hat{K}, \hat{L}}).
\end{align*}
\item[(b)]
\begin{align*}
X-\hat{X} = A_{K, {L}}^{\top} (X - \hat{X}) A_{{K}, {L}} + (K - \hat{K})^{\top} {\bf U}_{\hat{K}, \hat{L}} + {\bf U}^{\top}_{\hat{K}, \hat{L}} (K- \hat{K}) + (K-\hat{K})^{\top} R_1 (K-\hat{K}) \\
+ (L - \hat{L})^{\top} {\bf V}_{\hat{K}, \hat{L}} + {\bf V}_{\hat{K}, \hat{L}}^{\top} (L-\hat{L}) - (L-\hat{L})^{\top} R_2 (L-\hat{L}) + (A_{K,L}-A_{\hat{K}, \hat{L}})^{\top} \hat{X}(A_{K, L}-A_{\hat{K}, \hat{L}}).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{remark}
Item $(b)$ of this lemma was observed in~\cite{zhang2019policy}. Our presentation offers a control-theoretic perspective on its proof.
\end{remark}
\begin{proof}
We prove item $(a)$; item $(b)$ can be proved in a similar manner. \\
It suffices to take the difference of the Lyapunov equations:
\begin{align*}
&A_{K, L}^{\top} X A_{K, L} + Q + K^{\top} R_1 K - L^{\top} R_2 L = X, \\
& A_{\hat{K}, \hat{L}}^{\top} \hat{X} A_{\hat{K}, \hat{L}} + Q + \hat{K}^{\top} R_1 \hat{K} - \hat{L}^{\top} R_2 \hat{L} = \hat{X}.
\end{align*}
Indeed, a few algebraic operations reveal that
\begin{align*}
X - \hat{X} &= A_{\hat{K}, \hat{L}}^{\top} (X - \hat{X}) A_{\hat{K}, \hat{L}} + (K - \hat{K})^{\top} {\bf U}_{K, L} + {\bf U}^{\top}_{K, L} (K- \hat{K})
- (K-\hat{K})^{\top} R_1 (K-\hat{K}) \\
&\quad + (L - \hat{L})^{\top} {\bf V}_{K, L} + {\bf V}_{K, L}^{\top} (L-\hat{L}) + (L-\hat{L})^{\top} R_2 (L-\hat{L})- (A_{K, L}- A_{\hat{K}, \hat{L}})^{\top} X (A_{K, L}- A_{\hat{K}, \hat{L}}) .
\end{align*}
\end{proof}
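Since the identity in item $(a)$ is purely algebraic, it can be spot-checked numerically. The following scalar sketch (hypothetical data) evaluates both sides for two stabilizing pairs:

```python
a, b1, b2, q, r1, r2 = 0.9, 1.0, 0.5, 1.0, 1.0, 5.0

def value(K, L):
    # Scalar discrete Lyapunov equation X = A_cl^2 X + q + r1 K^2 - r2 L^2.
    Acl = a - b1 * K - b2 * L
    return (q + r1 * K ** 2 - r2 * L ** 2) / (1.0 - Acl ** 2), Acl

K, L, Kh, Lh = 0.5, 0.1, 0.3, -0.05   # two stabilizing pairs
X, A = value(K, L)
Xh, Ah = value(Kh, Lh)
U = r1 * K - b1 * X * A               # U_{K, L}
V = -r2 * L - b2 * X * A              # V_{K, L}
lhs = X - Xh
rhs = (Ah ** 2 * (X - Xh) + 2.0 * (K - Kh) * U - r1 * (K - Kh) ** 2
       + 2.0 * (L - Lh) * V + r2 * (L - Lh) ** 2 - (A - Ah) ** 2 * X)
```

The two sides agree to machine precision, as the lemma asserts.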
We now record another version of the comparison lemma, for $L, \tilde{L} \in \dom(g)$. Indeed, this lemma will play a more prominent role in our convergence analysis.
\begin{lemma}[Comparison Lemma $2$]
\label{lemma:ng_comparison_2}
Suppose that $L, \tilde{L} \in \dom(g)$, namely there exist $K, \tilde{K}$ such that
\begin{align*}
K = \argmin_{K' \in \ca S_{L}} f(K', L), \qquad \tilde{K} = \argmin_{K' \in \ca S_{\tilde{L}}} f(K', \tilde{L}).
\end{align*}
Further, suppose that the algebraic Riccati map $\ca R_{A-B_2 L, B_1, Q-L^{\top}R_2L, R_1}(\tilde{X})$ is well-defined, i.e., $R_1 + B_1^{\top} \tilde{X} B_1$ is invertible. Recall that the Riccati map is given by,
\begin{align*}
\ca R_{A-B_2L, B_1, Q-L^{\top}R_2 L, R_1}(\tilde{X}) &= (A-B_2L)^{\top} \tilde{X} (A-B_2L) - \tilde{X} + Q-L^{\top}R_2 L \\
&\quad - (A-B_2L)^{\top} \tilde{X} B_1 (R_1 + B_1^{\top} \tilde{X} B_1)^{-1} B_1^{\top} \tilde{X} (A-B_2L).
\end{align*}
Let $X$ and $\tilde{X}$ be the corresponding value matrices. Putting
\begin{align*}
{\bf E} \coloneqq R_1 + B_1^{\top} \tilde{X} B_1, \qquad {\bf F} \coloneqq B_1^{\top} \tilde{X} (A-B_2 L),
\end{align*}
then
\begin{align*}
X - \tilde{X} = A_{K, L}^{\top}(X-\tilde{X}) A_{K, L} + \ca R_{A-B_2 L, B_1, Q-L^{\top}R_2 L, R_1}(\tilde{X}) + ({\bf E} K - {\bf F})^{\top} {\bf E}^{-1}({\bf E}K - {\bf F}).
\end{align*}
Moreover,
\begin{align*}
\ca R_{A-B_2 L, B_1, Q-L^{\top}R_2 L, R_1}(\tilde{X}) = (L - \tilde{L})^{\top} {\bf V}_{\tilde{K}, \tilde{L}} + {\bf V}_{\tilde{K}, \tilde{L}}^{\top}(L-\tilde{L}) - (L-\tilde{L})^{\top} {\bf O}_{\tilde{X}} (L-\tilde{L}),
\end{align*}
where
\begin{align*}
{\bf O}_{\tilde{X}} \coloneqq R_2-B_2^{\top} \tilde{X} B_2 +B_2^{\top} \tilde{X} B_1(R_1 + B_1^{\top} \tilde{X} B_1)^{-1} B_1^{\top} \tilde{X} B_2.
\end{align*}
\end{lemma}
\begin{proof}
Note that $X$ solves the Lyapunov matrix equation,
\begin{align*}
X = (A-B_1K-B_2L)^{\top} X (A-B_1K-B_2L) + Q - L^{\top} R_2 L + K^{\top} R_1 K,
\end{align*}
with $K = (R_1 + B_1^{\top} X B_1)^{-1} B_1^{\top} X (A-B_2 L)$. Then
\begin{align*}
X - \tilde{X} - A_{K, L}^{\top}(X-\tilde{X}) A_{K, L} &= A_{K, L}^{\top} \tilde{X} A_{K, L} - \tilde{X} + Q - L^{\top} R_2 L + K^{\top} R_1 K \\
&= (A-B_2 L)^{\top} \tilde{X} (A-B_2 L) -\tilde{X} + Q -L^{\top} R_2 L + K^{\top}(R_1 + B_1^{\top} \tilde{X} B_1)K \\
&\quad - K^{\top}B_1^{\top} \tilde{X} (A-B_2L) - (A-B_2L)^{\top}\tilde{X}B_1 K \\
&= \ca R_{A-B_2L, B_1, Q-L^{\top}R_2L, R_1}(\tilde{X})\\
&\quad + (A-B_2L)^{\top}\tilde{X}B_1 (R_1 + B_1^{\top} \tilde{X}B_1)^{-1}B_1^{\top}\tilde{X}(A-B_2L) \\
&\quad + K^{\top}(R_1 + B_1^{\top} \tilde{X} B_1)K - K^{\top}B_1^{\top} \tilde{X} (A-B_2L) - (A-B_2L)^{\top}\tilde{X}B_1 K \\
&= \ca R_{A-B_2L, B_1, Q-L^{\top}R_2L, R_1}(\tilde{X}) + ({\bf E} K - {\bf F})^{\top} {\bf E}^{-1}({\bf E}K - {\bf F}).
\end{align*}
Since $\tilde{X}$ satisfies the algebraic Riccati equation
\begin{align*}
(A-B_2 \tilde{L})^{\top} \tilde{X} (A-B_2 \tilde{L}) + Q - \tilde{L}^{\top}R_2 \tilde{L} - (A-B_2 \tilde{L})^{\top}\tilde{X}B_1 (R_1 + B_1^{\top} \tilde{X} B_1)^{-1} B_1^{\top} \tilde{X}(A-B_2\tilde{L}) = \tilde{X},
\end{align*}
it follows that,
\begin{align*}
\ca R_{A-B_2 L, B_1, Q-L^{\top}R_2 L, R_1}(\tilde{X}) &= (A-B_2L)^{\top} \tilde{X} (A-B_2L) -L^{\top}R_2 L \\
&\quad - (A-B_2L)^{\top} \tilde{X} B_1 (R_1 + B_1^{\top} \tilde{X} B_1)^{-1} B_1^{\top} \tilde{X} (A-B_2L) \\
&\quad - (A-B_2 \tilde{L})^{\top} \tilde{X} (A-B_2 \tilde{L}) + \tilde{L}^{\top}R_2 \tilde{L} \\
&\quad + (A-B_2 \tilde{L})^{\top}\tilde{X}B_1 (R_1 + B_1^{\top} \tilde{X} B_1)^{-1} B_1^{\top} \tilde{X}(A-B_2\tilde{L}) \\
&= (L-\tilde{L})^{\top}(-R_2 \tilde{L} - B_2^{\top} \tilde{X} A_{\tilde{K}, \tilde{L}}) + (-R_2 \tilde{L}-B_2^{\top}\tilde{X}A_{\tilde{K}, \tilde{L}})^{\top}(L-\tilde{L}) \\
& \quad - (L-\tilde{L})^{\top}(R_2 - B_2^{\top} \tilde{X} B_2 + B_2^{\top} \tilde{X} B_1(R_1+B_1^{\top}\tilde{X} B_1)^{-1}B_1^{\top}\tilde{X} B_2) (L-\tilde{L}).
\end{align*}
\end{proof}
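Both identities of Comparison Lemma $2$ can likewise be spot-checked numerically. The following scalar sketch (hypothetical data) evaluates them for two policies $L, \tilde{L} \in \dom(g)$, with the inner ARE solved by value iteration:

```python
a, b1, b2, q, r1, r2 = 0.9, 1.0, 0.5, 1.0, 1.0, 5.0

def inner(L, iters=400):
    # Maximal stabilizing ARE solution for fixed L, plus the minimizer K_L.
    abar = a - b2 * L
    X = 0.0
    for _ in range(iters):
        X = q - r2 * L ** 2 + abar ** 2 * X * r1 / (r1 + b1 ** 2 * X)
    return X, b1 * X * abar / (r1 + b1 ** 2 * X)

L, Lt = 0.1, -0.05
X, K = inner(L)
Xt, Kt = inner(Lt)
abar, at = a - b2 * L, a - b2 * Lt
A = abar - b1 * K                     # A_{K, L}
At = at - b1 * Kt                     # A_{Kt, Lt}
E, F = r1 + b1 ** 2 * Xt, b1 * Xt * abar
# Riccati map of the lemma, evaluated at Xt with the L-dependent data:
Rmap = (abar ** 2 * Xt - Xt + q - r2 * L ** 2
        - abar ** 2 * Xt ** 2 * b1 ** 2 / (r1 + b1 ** 2 * Xt))
id1 = (X - Xt) - (A ** 2 * (X - Xt) + Rmap + (E * K - F) ** 2 / E)
Vt = -r2 * Lt - b2 * Xt * At          # V_{Kt, Lt}
Ot = r2 - b2 ** 2 * Xt * r1 / (r1 + b1 ** 2 * Xt)   # O_{Xt}
id2 = Rmap - (2.0 * (L - Lt) * Vt - (L - Lt) ** 2 * Ot)
```

Both residuals vanish up to the accuracy of the value iteration.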
Let us now prove the convergence of the proposed algorithm. In the following analysis, the $K_j$'s are exclusively used as the unique \emph{stabilizing minimizers}\footnote{Depending on the structure of our problem, it is possible that there exist non-stabilizing minimizers. But here we are only concerned with minimizers in the set $\ca S$.} of $f(K, L_j)$ over $K$.\footnote{Of course, we need to guarantee that for each $L_j$, there exists a unique minimizer.} To simplify the notation, let
\begin{align*}
\Delta &\coloneqq X_{K_{j-1}, L_{j-1}}, \\
{\bf O}_{j-1} &\coloneqq R_2-B_2^{\top}X_{K_{j-1}, L_{j-1}}B_2 +B_2^{\top}X_{K_{j-1}, L_{j-1}} B_1(R_1 + B_1^{\top} X_{K_{j-1}, L_{j-1}} B_1)^{-1} B_1^{\top} X_{K_{j-1}, L_{j-1}} B_2.
\end{align*}
We shall first show that if Algorithm~\ref{alg1} is initialized appropriately, then with stepsize
$$\eta_{j-1} \le \frac {1}{\lambda_n({\bf O}_{j-1})},$$
Algorithm~\ref{alg1} generates a sequence $\{L_j\}$ satisfying properties listed in the following lemma.
\begin{lemma}
\label{lemma:ng_key_lemma}
Suppose that Algorithm~\ref{alg1} is initialized appropriately.
With stepsize $\eta_{j-1} \le \frac{1}{\lambda_n({\bf O}_{j-1})}$, we then have,
\begin{enumerate}
\item[(a)] $(A-B_2 L_j, B_1)$ is stabilizable for every $j \ge 1$.
\item[(b)] ${\bf O}_j \succ 0$ for every $j \ge 1$.
\item[(c)] For every $j \ge 1$, $f(K, L_j)$ is bounded below over $K$ and there exists a unique minimizer $K_j$, which forms a stabilizing pair $(K_j, L_j)$. Namely, the DARE
\begin{align}
\label{eq:dare_iterate_j}
\begin{split}
0 &= (A-B_2L_j)^{\top} X (A-B_2L_j) -X\\
& \quad + Q - L_j^{\top}R_2 L_j - (A-B_2L_j)^{\top}X B_1(R_1 + B_1^{\top}X B_1)^{-1} B_1^{\top} X (A-B_2L_j) ,
\end{split}
\end{align}
admits a stabilizing maximal solution $X^+$ satisfying $R_1 + B_1^{\top} X^{+} B_1 \succ 0$.
\item[(d)] Putting $\Lambda = X_{K_j, L_j}$, ${\bf E}_j = R_1+B_1^{\top} X_{K_j, L_j} B_1$ and ${\bf F}_j = B_1^{\top} X_{K_j, L_j}(A-B_2 L_j)$ we have
\begin{align*}
\Lambda - \Delta &=
A_{K_j, L_j}^{\top}(\Lambda - \Delta) A_{K_j, L_j} + {\bf V}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \eta_{j-1} I - 4 \eta_{j-1}^2 {\bf O}_{j-1} \right) {\bf V}_{K_{j-1}, L_{j-1}} \\
& \quad + ({\bf E}_{j-1} K_j - {\bf F}_{j-1} )^{\top}{\bf E}_{j-1}^{-1} ( {\bf E}_{j-1} K_j - {\bf F}_{j-1}).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
It suffices to prove the lemma by induction, since all items hold at $j=0$ (by the initialization of the algorithm). We shall first suppose that $(A-B_2 L_j, B_1)$ is stabilizable, i.e., that $(a)$ holds. Note that this property is not automatically guaranteed, and we subsequently provide an analysis that carefully removes this assumption.\footnote{A side remark on our proof strategy: in linear system theory, a number of synthesis results are developed under the assumption of stabilizability of the system. We utilize these observations here; however, to use those tools, we must assume that $(A-B_2L_j, B_1)$ is stabilizable. But this is also one of the claims we aim to establish. The reader might recognize a certain circular line of reasoning here. Indeed, one of our contributions is a device for circumventing this issue: we first assume stabilizability and then use the results so developed to arrive at a contradiction if the system had not been stabilizable.}\\
First note that by our assumption, $\Delta$ is the maximal stabilizing solution of the DARE,\footnote{This can be considered as an LQR problem for system $(A-B_2 L_{j-1}, B_1)$ with state cost matrix $Q-L_{j-1}^{\top}R_2 L_{j-1}$.}
\begin{align*}
0 &=(A-B_2 L_{j-1})^{\top} X (A-B_2 L_{j-1}) - X\\
&\quad + Q - L_{j-1}^{\top} R_2 L_{j-1} - (A-B_2 L_{j-1})^{\top} X B_1 (R_1 + B_1^{\top} X B_1)^{-1} B_1^{\top} X (A-B_2 L_{j-1}),
\end{align*}
and $K_{j-1} = (R_1 + B_1^{\top} \Delta B_1)^{-1} B_1^{\top} \Delta (A-B_2 L_{j-1})$.
Now adopt the update rule,
\begin{align*}
L_j = L_{j-1} + \eta_{j-1} N_g(L_{j-1}) = L_{j-1} + 2\eta_{j-1} {\bf V}_{K_{j-1}, L_{j-1}}.
\end{align*}
By Lemma~\ref{lemma:ng_comparison_2}, it follows that
\begin{align}
\label{eq:inequality_existence}
\begin{split}
\ca R_{A-B_2L_j, B_1, Q-L_j^{\top}R_2 L_j, R_1}(X_{j-1}) = {\bf V}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \eta_{j-1} I - 4 \eta_{j-1}^2 {\bf O}_{j-1} \right) {\bf V}_{K_{j-1}, L_{j-1}}.
\end{split}
\end{align}
Thereby with the stepsize
$\eta_{j-1} \le \frac{1}{\lambda_n({\bf O}_{j-1})}$, we have
\begin{align}
\label{eq:existence_ng}
\ca R_{A-B_2L_j, B_1, Q-L_j^{\top}R_2 L_j, R_1}(X_{j-1}) = {\bf V}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \eta_{j-1} I - 4 \eta_{j-1}^2 {\bf O}_{j-1} \right) {\bf V}_{K_{j-1}, L_{j-1}} \succeq 0.
\end{align}
By Theorem~\ref{thrm:dare_existence} (note that the inequality~\eqref{eq:existence_ng} is crucial for applying the theorem), there exists a maximal solution $X^+ \succeq \Delta$ to the DARE~\eqref{eq:dare_iterate_j} and moreover, with
$$K^+ = (R_1 +B_1^{\top} X^+ B_1 )^{-1} B_1^{\top} X^+(A-B_2 L_j),$$ the eigenvalues of $A-B_2 L_j - B_1 K^+$ are in the closed unit disk of $\bb C$. Equivalently, $X^+$ solves the following Lyapunov equation,
\begin{align*}
(A-B_1K^+ -B_2 L_j)^{\top} X^+ (A-B_1 K^+ - B_2 L_j) + Q + (K^{+})^{\top} R_1 K^{+} - L_j^{\top} R_2 L_j = X^+.
\end{align*}
Item $(d)$ thereby follows from the first part of Lemma~\ref{lemma:ng_comparison_2}. We now observe that $K^+$ is indeed stabilizing. Suppose that this is not the case; then there exists $v \in \bb C^n$ such that $(A-B_2 L_j - B_1 K^+) v = \lambda v$ with $|\lambda| = 1$. Hence,
\begin{align*}
&v^{\top} \left( A_{K^{+}, L_j}^{\top}(X^{+} - \Delta) A_{K^+, L_j} \right)v + v^{\top} \left({\bf V}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \eta_{j-1} I - 4 \eta_{j-1}^2 {\bf O}_{j-1} \right) {\bf V}_{K_{j-1}, L_{j-1}} \right) v \\
&\le v^{\top} (X^+-\Delta) v.
\end{align*}
This would imply that ${\bf V}_{K_{j-1}, L_{j-1}} v = 0$. But this means that,
\begin{align}
\label{eq:ng_eq1}
L_j v = L_{j-1} v.
\end{align}
By Lemma~\ref{lemma:ng_comparison}, we have
\begin{align*}
A_{K^+, L_j}^{\top}(X^+ - \Delta) A_{K^+, L_j} - (X^+ - \Delta) + {\bf V}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \eta_{j-1} I - 4 \eta_{j-1}^2 R_2 \right) {\bf V}_{K_{j-1}, L_{j-1}} \\
+ (K^+-K_{j-1})^{\top} R_1 (K^+ - K_{j-1}) + (A_{K^+, L_j}-A_{K_{j-1}, L_{j-1}})^{\top}\Delta (A_{K^+, L_j}-A_{K_{j-1}, L_{j-1}}) = 0.
\end{align*}
Multiplying $v^{\top}$ and $v$ on each side and combining the resulting expression with~\eqref{eq:ng_eq1}, we obtain,
\begin{align*}
K_{j-1}v = K^+ v.
\end{align*}
But now we have
\begin{align*}
(A-B_1 K_{j-1} -B_2 L_{j-1})v = Av - B_1 K^{+} v - B_2 L_j v = \lambda v.
\end{align*}
This is a contradiction to the Schur stability of $A_{K_{j-1}, L_{j-1}}$. For item $(b)$, note that $X_j \preceq X_*$, so $R_2 - B_2^{\top} X_j B_2 \succ 0$ and consequently ${\bf O}_j \succ 0$, as $R_1 + B_1^{\top}X_j B_1 \succeq R_1 + B_1^{\top} X_{j-1}B_1 \succ 0$. Hence, we have completed the proof of items $(b), (c), (d)$ under the assumption of item $(a)$.\par
We now argue that with the stepsize $\eta_{j-1} \le 1/\lambda_n({\bf O}_{j-1})$, this assumption of stabilizability is indeed valid; namely, item $(a)$ holds.
Consider the ray $\{L_t: L_t = L_{j-1} + 2t {\bf V}_{K_{j-1}, L_{j-1}}\}$. We first note that there exists a maximal half-open interval $[0, \sigma)$ such that $(A-B_2L_t, B_1)$ is stabilizable for every $t \in [0, \sigma)$ and $(A-B_2 L_{\sigma}, B_1)$ is not stabilizable (this is due to the fact that stabilizability is an open condition; see Proposition~\ref{prop:stabilizability_open} for a proof). Now suppose that $\sigma < \frac{1}{\lambda_n({\bf O}_{j-1})}$. We may take a sequence $t_l \uparrow \sigma$, and note that $(A-B_2 L_{t_l}, B_1)$ is stabilizable for every $t_l$. Let us denote the corresponding sequence of solutions to the DARE by $\{Z_{t_l}\}$. By our previous arguments, $\Delta \preceq Z_{t_l} \preceq X_*$, where $X_*$ is the corresponding value matrix at the Nash equilibrium point $(K_*, L_*)$. Denote by $\ca L$ the set of all limit points of the sequence $\{Z_{t_l}\}_{l=1}^{\infty}$. By Bolzano--Weierstrass, the set $\ca L$ is nonempty, as the sequence is bounded. Clearly, for every $Z \in \ca L$, $\Delta \preceq Z \preceq X_*$. By continuity, $Z$ must solve the DARE,
\begin{align}
\label{eq:dare_1}
(A-B_2L_{\sigma})^{\top} X (A-B_2 L_{\sigma}) + Q - L_{\sigma}^{\top} R_2 L_{\sigma} - (A-B_2 L_{\sigma})^{\top}X B_1 (R_1+B_1^{\top}XB_1)^{-1}B_1^{\top} X(A-B_2L_{\sigma}) = X.
\end{align}
Putting $K' = (R_1+B_1^{\top}ZB_1)^{-1} B_1^{\top} Z(A-B_2 L_{\sigma})$,
we claim that $A-B_2 L_{\sigma} - B_1 K'$ is Schur stable. This is a consequence of $(d)$ and the Comparison Lemma~\ref{lemma:ng_comparison}: it suffices to observe that $A_{K', L_{\sigma}}$ is marginally stable, that $(d)$ holds, and that,
\begin{align*}
A_{K', L_\sigma}^{\top}(Z - \Delta) A_{K', L_{\sigma}} - (Z-\Delta) + {\bf V}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \sigma I - 4 \sigma^2 {\bf O}_{j-1}\right) {\bf V}_{K_{j-1}, L_{j-1}} \preceq 0.
\end{align*}
Proceeding along the above line of reasoning, we can show that $A-B_2 L_\sigma - B_1 K'$ is Schur stable. But this contradicts our supposition that $(A-B_2 L_{\sigma}, B_1)$ is not stabilizable. Hence, for all $\eta_{j-1} \le 1/\lambda_n({\bf O}_{j-1})$, the pair $(A-B_2 L_j, B_1)$ is indeed stabilizable.
\end{proof}
We are now ready to state the convergence rate for the algorithm.
\begin{theorem}
If the stepsize is taken as $\eta_{j-1} = 1/(2\lambda_n({\bf O}_{j-1}))$, then,
\begin{align*}
\sum_{j=0}^{\infty} \|N_g(L_j)\|_F^2 \le \frac{1}{\eta} \left( g(L_*) - g(L_0)\right),
\end{align*}
where $\eta \in \bb R_+$ is some positive constant.
\end{theorem}
\begin{remark}
This theorem shows that the gradient vanishes at a sublinear rate. As we know, $g$ has a unique stationary point; this implies sublinear convergence to the global maximum, i.e., to the Nash equilibrium point.
\end{remark}
\begin{proof}
Let $\eta = \inf_{j} \lambda_1({\bf \Sigma})/(4\lambda_n({\bf O}_j))$. Since $X_j \preceq X_*$ for every $j$, the sequence $\{\lambda_n({\bf O}_j)\}$ is bounded above, so $\eta > 0$.
It suffices to note that by Lemma~\ref{lemma:ng_key_lemma} (with stepsize $\eta_{j-1} = 1/(2\lambda_n({\bf O}_{j-1}))$, item $(d)$ gives $4\eta_{j-1} I - 4\eta_{j-1}^2 {\bf O}_{j-1} \succeq \frac{1}{\lambda_n({\bf O}_{j-1})} I$) and $Y_{K_j, L_j} \succeq {\bf \Sigma}$, we have
\begin{align*}
g(L_j) - g(L_{j-1}) &= \Tr((X_{K_{j}, L_j} - X_{K_{j-1}, L_{j-1}}) {\bf \Sigma}) \\
&\ge \frac{1}{4\lambda_n({\bf O}_{j-1})}\Tr\left[ Y_{K_j, L_j}\left( N_g(L_{j-1})^{\top} N_g(L_{j-1})\right)\right] \\
&\ge \eta \|N_g(L_{j-1})\|_F^2.
\end{align*}
Telescoping the sum and noting that $g(L)$ is bounded above by $g(L_*)$, we have
\begin{align*}
\sum_{j=0}^{\infty} \|N_g(L_j)\|_F^2 \le \frac{1}{\eta} \left( g(L_*) - g(L_0)\right) < \infty.
\end{align*}
\end{proof}
We observe that the convergence rate is asymptotically linear. This is a consequence of the local curvature of $g(L)$. Indeed, if we compute the Hessian at $L_*$, its action (see Appendix~\ref{sec:hessian} for details) is given by
\begin{align*}
\nabla^2 g(L_*)[E, E] = -2\langle {\bf O}_{X_*} E, EY_*\rangle.
\end{align*}
As $\nabla^2 g(L_*)$ is negative definite, $-g(L)$ is strongly convex on a convex neighborhood of $L_*$. It thus follows that gradient ascent enjoys a linear convergence rate around $L_*$.
\section{Algorithm: quasi-Newton Iterations of $L$}
\label{sec:qn}
In this section, we shall assume that an oracle $\ca O_L$ returns the quasi-Newton direction.
The motivation for the quasi-Newton update is the second-order local approximation of $g(L)$. Indeed, we may observe that,
\begin{align*}
g(L+ \Delta L) \approx g(L) + 2\langle Y_{L + \Delta L} N_g(L), \Delta L\rangle - \langle {\bf O}_L \Delta L, \Delta L \rangle.
\end{align*}
\begin{algorithm}[H]
\caption{quasi-Newton Policy for LQ Game}
\label{alg2}
\begin{algorithmic}[1]
\State Initialize $(K_0, L_0) \in \ca S$ such that $(Q-L_0^{\top}R_2 L_0, A-B_2 L_0)$ is detectable and the DARE
\begin{align*}
(A-B_2L_0)^{\top} X (A-B_2 L_0) - X + Q - L_0^{\top}R_2 L_0 - (A-B_2L_0)^{\top}X B_1(R_1 + B_1^{\top}X B_1)^{-1}B_1^{\top}X (A-B_2L_0) = 0
\end{align*}
is solvable in $\ca S_{L_0} \equiv \{K \in \bb R^{m_1 \times n}: A-B_2L_0 - B_1 K \text{ is Schur}\}$.
\For{$j = 1, 2, \ldots$}
\State Set: $K_{j-1} \leftarrow \argmin_K f(K, L_{j-1})$.
\State Set: $L_{j} = L_{j-1} + \eta_{j-1} {\bf O}_{j-1}^{-1}2(-R_2L_{j-1}- B_2^{\top} X_{K_{j-1}, L_{j-1}} A_{K_{j-1}, L_{j-1}})$.
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Convergence Analysis}
We first prove a result that can be considered as a counterpart to Lemma~\ref{lemma:ng_key_lemma}.
\begin{lemma}
\label{lemma:qn_key_lemma}
Suppose that Algorithm~\ref{alg2} is initialized appropriately.
With stepsize $\eta_{j-1} \le \frac{1}{\lambda_n({\bf O}_{j-1})}$, we then have,
\begin{enumerate}
\item[(a)] $(A-B_2 L_j, B_1)$ is stabilizable for every $j \ge 1$.
\item[(b)] ${\bf O}_j \succ 0$ for every $j \ge 1$.
\item[(c)] For every $j \ge 1$, $f(K, L_j)$ is bounded below over $K$ and there exists a unique minimizer $K_j$, which forms a stabilizing pair $(K_j, L_j)$. Namely, the DARE
\begin{align*}
(A-B_2L_j)^{\top} X (A-B_2L_j) + Q - L_j^{\top}R_2 L_j - (A-B_2L_j)^{\top}X B_1(R_1 + B_1^{\top}X B_1)^{-1} B_1^{\top} X (A-B_2L_j) = X,
\end{align*}
admits a stabilizing maximal solution $X^+$ satisfying $R_1 + B_1^{\top} X^{+} B_1 \succ 0$.
\item[(d)] Putting $\Lambda = X_{K_j, L_j}$, ${\bf E}_j = R_1+B_1^{\top} X_{K_j, L_j} B_1$ and ${\bf F}_j = B_1^{\top} X_{K_j, L_j}(A-B_2 L_j)$ we have
\begin{align*}
\Lambda - \Delta &=
A_{K_j, L_j}^{\top}(\Lambda - \Delta) A_{K_j, L_j} + {\bf V}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \eta_{j-1} {\bf O}_{j-1}^{-1} - 4 \eta_{j-1}^2 {\bf O}_{j-1}^{-1} \right) {\bf V}_{K_{j-1}, L_{j-1}} \\
& \quad + ({\bf E}_{j-1} K_j - {\bf F}_{j-1} )^{\top}{\bf E}_{j-1}^{-1} ({\bf E}_{j-1} K_j - {\bf F}_{j-1}).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
The proof proceeds similarly to that of Lemma~\ref{lemma:ng_key_lemma}. The key difference is that the algebraic Riccati map assumes a new form under the quasi-Newton update. Namely, with the quasi-Newton iteration,
\begin{align*}
\ca R_{A-B_2L_j, B_1, Q-L_j^{\top}R_2 L_j, R_1}(X_{j-1}) = \left( 4 \eta_{j-1} - 4 \eta_{j-1}^2 \right) {\bf V}_{K_{j-1}, L_{j-1}}^{\top} {\bf O}_{j-1}^{-1} {\bf V}_{K_{j-1}, L_{j-1}}.
\end{align*}
The statements then follow from essentially the same arguments as in Lemma~\ref{lemma:ng_key_lemma}.
\end{proof}
We are now ready to state the convergence rate for the algorithm.
\begin{theorem}
If the stepsize is taken as $\eta = 1/2$, then
\begin{align*}
g(L_*) - g(L_j) \le q (g(L_*) - g(L_{j-1}))^2,
\end{align*}
for some $q >0$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma:qn_key_lemma}, the sequence of value matrices $\{X_j\}$ is monotonically nondecreasing and bounded above. Thereby $X_j \to X_*$ as $j \to \infty$. It follows that the set $\ca E = \{X_j\} \cup \{X_*\}$ is compact. Substituting $K_{j-1} = (R_1 + B_1^{\top}X_{j-1}B_1)^{-1} B_1^{\top}X_{j-1}(A-B_2 L_{j-1})$ into the update rule, we get
\begin{align*}
L_j = - {\bf O}_{j-1}^{-1}B_2^{\top}X_{j-1} (A-B_1 (R_1+B_1^{\top}X_{j-1}B_1)^{-1}B_1^{\top}X_{j-1} A).
\end{align*}
By Lemma~\ref{lemma:ng_comparison_2} (take $\tilde{X} = X_*$ and $X = X_j$ and note ${\bf V}_{K_*, L_*} = 0$),
\begin{align*}
X_* - X_j \preceq \sum_{\nu = 0}^{\infty} (A_j^{\top})^\nu \left( (L_*-L_j)^{\top}{\bf O}_{*} (L_*-L_j) \right)A_j^{\nu},
\end{align*}
where
\begin{align*}
{\bf O}_* = R_2 - B_2^{\top} X_* B_2 + B_2^{\top} X_* B_1(R_1+B_1^{\top} X_* B_1)^{-1}B_1^{\top}X_*B_2.
\end{align*}
It follows that,
\begin{align*}
g(L_*) - g(L_j) &\le \Tr(Y_* (L_*-L_j)^{\top} {\bf O}_{*} (L_* - L_j)) \\
&\le \lambda_n(Y_*) \lambda_n({\bf O}_{*}) \Tr((L_* - L_j)^{\top} (L_* - L_j)).
\end{align*}
We observe
\begin{align*}
L_* - L_j = -{\bf O}_*^{-1} B_2^{\top} X_* (A-B_1(R_1+B_1^{\top}X_* B_1)^{-1}B_1^{\top}X_* A)+ {\bf O}_{j-1}^{-1}B_2^{\top}X_{j-1} (A-B_1 (R_1+B_1^{\top}X_{j-1}B_1)^{-1}B_1^{\top}X_{j-1} A),
\end{align*}
and further, note that the map $\phi$ given by
\begin{align*}
X \mapsto -{\bf O}_X^{-1}B_2^{\top} X \left( A-B_1 (R_1 + B_1^{\top} X B_1)^{-1} B_1^{\top} X A\right)
\end{align*}
is smooth where,
\begin{align*}
{\bf O}_{X} =R_2 - B_2^{\top} X B_2 + B_2^{\top} X B_1(R_1+B_1^{\top}XB_1)^{-1} B_1^{\top}X B_2.
\end{align*}
So over the compact set $\ca E$, we can find a Lipschitz constant $\beta$ of $\phi$, namely, for every $X, X' \in \ca E$, we have
$\|\phi(X)-\phi(X')\|_F \le \beta \|X-X'\|_F$.
Then
\begin{align*}
\|L_* - L_j\|_F^2 = \|\phi(X_*)-\phi(X_{j-1})\|_F^2 \le \beta^2\|X_*-X_{j-1}\|_F^2.
\end{align*}
Hence
\begin{align*}
g(L_*)-g(L_j) &\le c \|X_*-X_{j-1}\|_F^2 \le q \left( g(L_*)-g(L_{j-1})\right)^2,
\end{align*}
where $c, q > 0$ are constants.
\end{proof}
\section{Policy Gradient Algorithms for Solving $K \leftarrow \argmin_{K} f(K, L)$}
\label{sec:pg_K}
In this section, we describe how policy gradient based oracles can be used to solve the minimization problem for fixed $L$. If the iterates $Q-L_j^{\top}R_2 L_j$ were positive definite, policy based updates, {e.g.}, gradient descent, natural gradient descent and quasi-Newton iterations for the standard LQR problem as treated in~\cite{fazel2018global,bu2019lqr}, could be adopted. The oracle $\ca O_K$ could then be constructed by repeatedly performing the procedure to the desired precision. However, there is no guarantee that this condition holds in the LQ dynamic game setup. We shall prove that, under the assumption that the algebraic Riccati equation admits a maximal stabilizing solution, gradient descent (respectively, natural gradient descent and quasi-Newton) converges to the maximal solution of the ARE at a linear (respectively, linear and quadratic) rate. In this direction, recall that the gradient, natural gradient and quasi-Newton directions for fixed $L$ are given by,
\begin{align*}
\nabla_K f(K, L) &= 2(R_1 K- B_1^{\top} X A_{K, L}) Y \eqqcolon {\bf g}(K),\\
N_{f, K}(K, L) &= 2(R_1 K- B_1^{\top} X A_{K, L}) \eqqcolon {\bf n}(K),\\
qN_{f, K}(K, L) &= 2(R_1+B_1^{\top} X B_1)^{-1}(R_1K-B_1^{\top}XA_{K, L}) \eqqcolon {\bf qn}(K).
\end{align*}
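For fixed $L$, these directions are straightforward to evaluate. The following scalar sketch (hypothetical data) computes all three and verifies the gradient formula against a finite difference of $f(\cdot, L)$:

```python
a, b1, b2, q, r1, r2 = 0.9, 1.0, 0.5, 1.0, 1.0, 5.0
L = 0.1

def fval(M):
    # f(M, L) with Sigma = 1: solve the scalar discrete Lyapunov equation.
    Acl = a - b2 * L - b1 * M
    return (q - r2 * L ** 2 + r1 * M ** 2) / (1.0 - Acl ** 2)

M = 0.3
Acl = a - b2 * L - b1 * M
X = fval(M)
Y = 1.0 / (1.0 - Acl ** 2)                 # Y_{M, L} for Sigma = 1
n = 2.0 * (r1 * M - b1 * X * Acl)          # natural gradient direction
grad = n * Y                               # gradient direction
qn = n / (r1 + b1 ** 2 * X)                # quasi-Newton direction
h = 1e-6
fd = (fval(M + h) - fval(M - h)) / (2.0 * h)
```

All three directions vanish simultaneously at the stationary policy, differing only in their preconditioning.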
We first provide the convergence analysis for the case of natural gradient descent.
\begin{theorem}[Natural Gradient Analysis]
\label{thrm:ng_inner_convergence}
Suppose that with fixed $L$, the ARE
\begin{align*}
(A-B_2 L)^{\top} X (A-B_2 L) - X + (A-B_2L)^{\top}XB_1(R_1 + B_1^{\top} X B_1)^{-1} B_1^{\top} X(A-B_2L) + Q-L^{\top} R_2 L = 0,
\end{align*}
has a maximal stabilizing solution $X^+$. Then the update rule,
\begin{align*}
M_{i+1} = M_i - \frac{1}{2\lambda_n(R_1 + B_1^{\top} X_i B_1)} 2(R_1 M_i - B_1^{\top} X_i (A-B_2L-B_1 M_i)),
\end{align*}
where $X_i$ solves the Lyapunov equation
\begin{align*}
(A-B_2 L - B_1 M_{i})^{\top} X_i (A-B_2 L - B_1 M_i) - X_i + Q - L^{\top} R_2 L + M_i^{\top} R_1 M_i = 0,
\end{align*}
converges linearly to $K^+$ provided $M_0 \in \ca S_{L}$. That is,
\begin{align*}
\|M_{i} - K^+\|_F^2 \le q^{i}\|M_0 - K^+\|_F^2,
\end{align*}
for some $q \in (0, 1)$.
\end{theorem}
\begin{remark}
The difference between the above theorem and the standard results in~\cite{fazel2018global,bu2019lqr} is that we do not assume $Q$ to be positive definite. Mind that the matrix $Q-L_j^{\top} R_2 L_j$, which corresponds to the state penalization in the standard LQR, can be indefinite in our sequential algorithms. The one-step progression follows from the corresponding result in~\cite{bu2019lqr}. The important difference, however, is that the cost function is no longer coercive, requiring a separate analysis to establish the stability of the iterates.\footnote{We note that stability of the iterates in the natural gradient update was not explicitly shown in~\cite{fazel2018global}. Some of the perturbation arguments for gradient descent in~\cite{fazel2018global} can however be applied to argue for this property. Such an argument would, however, rely on the strict positivity of the minimum eigenvalue of $Q$.}
\end{remark}
\begin{proof}
The analysis in~\cite{bu2019lqr} on the one-step progression of natural gradient descent holds here, and thus the convergence rate remains the same provided we can show that the iterates remain stabilizing. \par
By induction, it suffices to argue that with the chosen stepsize, $M_i$ is stabilizing provided that $M_{i-1}$ is. Consider the ray $\{M_t = M_{i-1} - t {\bf n}(M_{i-1}): t \ge 0\}$. Note that by openness of $\ca S_L$ and continuity of eigenvalues, there is a maximal interval $[0, \zeta)$ such that $M_{i-1} - t {\bf n}(M_{i-1})$ is stabilizing for $t \in [0, \zeta)$ and $M_{i-1} - \zeta {\bf n}(M_{i-1})$ is marginally stabilizing. Now suppose that $\zeta \le \frac{1}{2\lambda_n(R_1+B_1^{\top} X_{i-1} B_1)}$ and take a sequence $t_l \in [0, \zeta)$ such that $t_l \to \zeta$. Consider the sequence of value matrices $\{X_{t_l}\}$ and denote by $\ca L$ the set of all limit points of $\{X_{t_l}\}$. Note that $\ca L$ is nonempty since the sequence is bounded: $X^{+} \preceq X_{t_l} \preceq X_{i-1}$ for every $t_l$.\footnote{Note that it is not guaranteed that $X_{t_l}$ is convergent. The limit points are also not necessarily well-ordered in the ordering induced by the p.s.d. cone.} By continuity, any $Z \in \ca L$ solves,
\begin{align*}
(A-B_1 M_{\zeta} - B_2 L)^{\top} Z (A-B_1 M_{\zeta}-B_2 L) -Z + Q - L^{\top} R_2 L + M_{\zeta}^{\top} R_1 M_{\zeta} = 0.
\end{align*}
But this is a contradiction, since by the Comparison Lemma~\ref{lemma:ng_comparison}, we have
\begin{align*}
(A-B_1 M_{\zeta}-B_2 L)^{\top} (Z-X^{+}) (A-B_1 M_{\zeta} - B_2 L) - (Z-X^{+}) + (M_{\zeta}-K^+)^{\top} R_1(M_{\zeta}-K^+) = 0.
\end{align*}
Suppose that $(\lambda, v)$ is an eigenvalue-eigenvector pair of $A-B_1 M_{\zeta}-B_2 L$ such that $(A-B_1 M_{\zeta}-B_2 L) v = \lambda v$ and $|\lambda| = 1$. Then, as $Z-X^{+} \succeq 0$, we would have $M_{\zeta} v = K^+ v$. But this contradicts the assumption that $K^+$ is a stabilizing solution. Hence $\{X_i\}$ is a monotonically non-increasing sequence bounded below by $X^{+}$. As such, the sequence of iterates $\{M_i\}$ converges linearly to $K^+$, following the arguments in~\cite{bu2019lqr}.
\end{proof}
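The following scalar sketch (hypothetical data) runs the natural gradient inner iteration of Theorem~\ref{thrm:ng_inner_convergence} and checks that the stationarity residual $R_1 M - B_1^{\top} X A_{M, L}$ vanishes while the iterates remain stabilizing:

```python
a, b1, b2, q, r1, r2 = 0.9, 1.0, 0.5, 1.0, 1.0, 5.0
L = 0.1
abar = a - b2 * L

M = 0.5                                   # stabilizing start: |abar - b1 M| < 1
for _ in range(500):
    Acl = abar - b1 * M
    X = (q - r2 * L ** 2 + r1 * M ** 2) / (1.0 - Acl ** 2)  # value of (M, L)
    step = 1.0 / (2.0 * (r1 + b1 ** 2 * X))
    M = M - step * 2.0 * (r1 * M - b1 * X * Acl)

Acl = abar - b1 * M
X = (q - r2 * L ** 2 + r1 * M ** 2) / (1.0 - Acl ** 2)
residual = r1 * M - b1 * X * Acl          # vanishes at the stationary K^+
```

The limit satisfies $M = (R_1 + B_1^{\top} X B_1)^{-1} B_1^{\top} X (A - B_2 L)$, i.e., it is the gain associated with the maximal stabilizing ARE solution.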
We mention that the above stability argument applies to the sequence generated by the quasi-Newton iteration as well. The quadratic convergence rate for such a sequence then follows from the proof in~\cite{bu2019lqr}.
\begin{theorem}[Quasi-Newton Analysis]
\label{thrm:qn_inner_convergence}
Suppose that with a fixed $L$, the ARE
\begin{align*}
(A-B_2 L)^{\top} X (A-B_2 L) - X + (A-B_2L)^{\top}XB_1(R_1 + B_1^{\top} X B_1)^{-1} B_1^{\top} X(A-B_2L) + Q-L^{\top} R_2 L = 0,
\end{align*}
has a maximal stabilizing solution $X^+$. Then the update rule
\begin{align*}
M_{i+1} = M_i - (R_1 + B_1^{\top} X_{i}B_1)^{-1}(R_1 M_i - B_1^{\top} X_i (A-B_2L-B_1 M_i)),
\end{align*}
where $X_i$ solves the Lyapunov equation
\begin{align*}
(A-B_2 L - B_1 M_{i})^{\top} X_i (A-B_2 L - B_1 M_i) - X_i + Q - L^{\top} R_2 L + M_i^{\top} R_1 M_i = 0,
\end{align*}
converges quadratically to $K^+$ provided $M_0 \in \ca S$. That is,
\begin{align*}
\|M_{i+1} - M_*\|_F \le q\|M_i - M_*\|_F^2,
\end{align*}
for some $q > 0$.
\end{theorem}
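As a sanity check — a minimal numerical sketch, not taken from the paper; all names below are our own — note that the update above simplifies algebraically to $M_{i+1} = (R_1+B_1^{\top}X_iB_1)^{-1}B_1^{\top}X_i(A-B_2L)$, i.e., exact policy (Riccati) iteration on the inner LQR problem. Taking $L=0$ and randomly generated stable data:

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

rng = np.random.default_rng(0)
n, m1 = 3, 2
A = rng.standard_normal((n, n))
A *= 0.5 / max(abs(np.linalg.eigvals(A)))   # make A Schur so that M_0 = 0 is stabilizing
B1 = rng.standard_normal((n, m1))
Q, R1 = np.eye(n), np.eye(m1)               # with L = 0 the inner problem is a plain LQR

M = np.zeros((m1, n))
for _ in range(30):
    Acl = A - B1 @ M
    # value matrix: Acl^T X Acl - X + Q + M^T R1 M = 0
    X = solve_discrete_lyapunov(Acl.T, Q + M.T @ R1 @ M)
    # quasi-Newton step (stepsize 1/2 applied to twice the gradient direction)
    M = M - np.linalg.solve(R1 + B1.T @ X @ B1, R1 @ M - B1.T @ X @ Acl)

P = solve_discrete_are(A, B1, Q, R1)
M_star = np.linalg.solve(R1 + B1.T @ P @ B1, B1.T @ P @ A)
print(np.linalg.norm(M - M_star))           # essentially zero
```

In exact arithmetic the iterates coincide with Hewer's policy iteration, which is the classical source of the quadratic rate.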
The gradient policy analysis requires more work since the stepsize developed in~\cite{bu2019lqr} involves the smallest eigenvalue $\lambda_1(Q)$. However, by carefully replacing ``$\lambda_1(Q)$ related quantities'' in~\cite{bu2019lqr}, one can still prove the global linear convergence rate as follows.
\begin{theorem}[Gradient Analysis]
\label{thrm:gd_inner_convergence}
Suppose that with a fixed $L$, the ARE
\begin{align*}
(A-B_2 L)^{\top} X (A-B_2 L) - X + (A-B_2L)^{\top}XB_1(R_1 + B_1^{\top} X B_1)^{-1} B_1^{\top} X(A-B_2L) + Q-L^{\top} R_2 L = 0,
\end{align*}
has a maximal stabilizing solution $X^+$. Then the update rule
\begin{align*}
M_{i+1} = M_i - \eta_i 2(R_1 M_i - B_1^{\top} X_i (A-B_2L-B_1 M_i)) Y_{M_i, L},
\end{align*}
where $X_i$ solves the Lyapunov equation
\begin{align*}
(A-B_2 L - B_1 M_{i})^{\top} X_i (A-B_2 L - B_1 M_i) - X_i + Q - L^{\top} R_2 L + M_i^{\top} R_1 M_i = 0,
\end{align*}
converges linearly to $K^+$ provided $M_0 \in \ca S$. That is,
\begin{align*}
\|M_{i} - M_*\|_F^2 \le q^i \|M_0 - M_*\|_F^2,
\end{align*}
for some $q \in (0,1)$.
\end{theorem}
The convergence analysis of the gradient policy follows closely the idea presented in~\cite{bu2019lqr}, where the compactness of sublevel sets was used to devise a stepsize rule guaranteeing a sufficient decrease in the cost and stability of the iterates. The proof of compactness in~\cite{bu2019lqr}, however, relies on the positive definiteness of $Q-L_j^{\top} R_2 L_j$.\footnote{Or the observability of $(Q-L_j^{\top} R_2 L_j, A)$.} It is also not valid to assume that the function is coercive. Nevertheless, we can show that a strategy analogous to the one adopted in~\cite{bu2019lqr} can be employed to derive a suitable stepsize for the game setup. The detailed analysis is deferred to Appendix~\ref{sec:gd_inner}.
\section{Comments on Adopting Gradient Polices for $L$}
Gradient policy update for LQ games has been discussed in~\cite{zhang2019policy}, where a projection step is required in updating the policy $L$. In particular, in~\cite{zhang2019policy}, it has been stated that a projection step onto the set $ \Omega = \{L: Q - L^{\top} R_2 L \succeq 0\}$ would guarantee that $L$ is stabilizing. The key issue however is the stabilizability of $(A-B_2 L, B_1)$; as such, it is not valid to assume that every $L \in \Omega$ would yield a stabilizable pair $(A-B_2 L, B_1)$. In fact, the approach adopted in our work would not work for gradient policy either, as we rely on a monotonicity property of the corresponding value matrix; gradient policy would only decrease the cost function without any guarantees to decrease the value matrix (with respect to the p.s.d. ordering).
If we assume that the Nash equilibrium $L_* \in \Omega$ and could guarantee that $(A-B_2L_j, B_1)$ is stabilizable, then it would be warranted that $f(K, L_j)$ has a unique minimizer for every $L_j$; in this case, the approach adopted in this paper would provide a simpler proof for convergence of gradient policies for LQ games.
\section{Switching the Leader in the Sequential Algorithms}
We shall demonstrate in this section that if condition $(a2)$ in the assumption holds, it might not be guaranteed that $g(L)$ is differentiable in a neighborhood of $L_*$. In this case, however, the sequential algorithm with player $K$ as the leader would still converge. The analysis proceeds in a similar manner. First, we observe that we can define a value function,
\begin{align*}
h(K) = \sup_{L \in \ca S_{K}} f(K, L).
\end{align*}
Following virtually the same argument, we can then establish the following.
\begin{proposition}
Suppose that $\ca U \subseteq \dom(h)$ is an open set such that for every $K \in \ca U$, there is a unique maximizer of $f(K, L)$ over $L$. Then $h(K)$ is differentiable on $\ca U$ and the gradient is given by
\begin{align*}
\nabla h(K) = \nabla_{K} f(K, L_K), \text{ where } L_K = \argmax_{L \in \ca S_K} f(K, L).
\end{align*}
\end{proposition}
The algorithm for player $K$ using natural gradient policy can be described similarly to Algorithm~\ref{alg1}.
\begin{algorithm}[H]
\caption{Natural Gradient Policy for LQ Game}
\label{alg3}
\begin{algorithmic}[1]
\State Initialize $K_0$ such that $(A-B_1 K_0, B_2)$ is stabilizable and the DARE
\begin{align*}
(A-B_1K_0)^{\top} Z (A-B_1 K_0) - Q - K_0^{\top}R_1 K_0 - (A-B_1K_0)^{\top}ZB_2(R_2 + B_2^{\top}ZB_2)^{-1}B_2^{\top}Z(A-B_1 K_0) = Z.
\end{align*}
has a maximal symmetric solution $Z^+$ with $R_2 + B_2^{\top}Z^+ B_2 \succ 0$.
\If{ $j \ge 1$}
\State Set: $L_{j-1} \leftarrow \argmax_L f(K_{j-1}, L)$.
\State Set: $K_{j} = K_{j-1} - \eta_j N_h(K_{j-1}) \equiv K_{j-1} - \eta_j N_{f, K} (K_{j-1}, L_{j-1})$.
\EndIf
\end{algorithmic}
\end{algorithm}
We observe that for fixed $K'$, if $L'$ is the unique stabilizing maximizer of $f(K', L)$ over $L$, then substituting $\nabla_L f(K', L') = 0$ into the Lyapunov matrix equation, $L'$ solves the following ARE,
\begin{align}
\label{eq:max_are}
(A-B_1K')^{\top} Z (A-B_1 K') + Q + K'^{\top}R_1 K' - (A-B_1K')^{\top}ZB_2(-R_2 + B_2^{\top}ZB_2)^{-1}B_2^{\top}Z(A-B_1 K') = Z.
\end{align}
To utilize the theory developed for the standard ARE, which concerns a minimization problem, we may consider the following modification:
\begin{align}
\label{eq:max_mod_are}
(A-B_1K')^{\top} W (A-B_1 K') - Q - K'^{\top}R_1 K' - (A-B_1K')^{\top}WB_2(R_2 + B_2^{\top}WB_2)^{-1}B_2^{\top}W(A-B_1 K') = W.
\end{align}
We observe that if $W$ solves~\eqref{eq:max_mod_are}, then $-W$ solves~\eqref{eq:max_are}. Now the analysis can be done almost in parallel.
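The sign relation between \eqref{eq:max_are} and \eqref{eq:max_mod_are} is purely algebraic and can be checked numerically: for any symmetric $W$ (not only a solution), the residual of \eqref{eq:max_are} at $-W$ is exactly the negative of the residual of \eqref{eq:max_mod_are} at $W$. A hedged sketch with stand-in matrices (our own names; $A_0$ stands for $A-B_1K'$, $Q_h$ for $Q+K'^{\top}R_1K'$, and the left factor carries the transpose, as in a standard DARE):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m2 = 3, 2
A0 = rng.standard_normal((n, n))            # stands in for A - B1 K'
B2 = rng.standard_normal((n, m2))
R2 = np.eye(m2)
Qh = np.eye(n)                              # stands in for Q + K'^T R1 K'
W = rng.standard_normal((n, n))
W = W + W.T                                 # arbitrary symmetric test matrix

def res_max(Z):
    """Residual of the max-ARE (eq:max_are) at Z."""
    G = np.linalg.solve(-R2 + B2.T @ Z @ B2, B2.T @ Z @ A0)
    return A0.T @ Z @ A0 + Qh - A0.T @ Z @ B2 @ G - Z

def res_mod(W):
    """Residual of the modified ARE (eq:max_mod_are) at W."""
    G = np.linalg.solve(R2 + B2.T @ W @ B2, B2.T @ W @ A0)
    return A0.T @ W @ A0 - Qh - A0.T @ W @ B2 @ G - W

print(np.max(np.abs(res_max(-W) + res_mod(W))))   # ~ 0: the residuals are exact negatives
```

In particular, $W$ solves \eqref{eq:max_mod_are} if and only if $-W$ solves \eqref{eq:max_are}.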
\begin{lemma}
Suppose that $L_{j-1}$ is the unique stabilizing maximizer of $f(K_{j-1}, L)$, i.e., $L_{j-1} = \argmax_{L} f(K_{j-1}, L)$. Putting $\Delta = X_{K_{j-1}, L_{j-1}}$ and
\begin{align*}
{\bf O}_{j-1} = R_1 + B_1^{\top} \Delta B_1 + B_1^{\top} \Delta B_2 (R_2 - B_2^{\top}\Delta B_2)^{-1} B_2^{\top} \Delta B_1,
\end{align*}
with stepsize $\eta_{j-1} \le \frac{1}{\lambda_n({\bf O}_{j-1})}$, we then have,
\begin{enumerate}
\item $(A-B_1 K_j, B_2)$ is stabilizable.
\item $f(K_j, L)$ is bounded above over $L$ and there exists a unique stabilizing maximizer $L_j$, namely $(K_j, L_j)$ is a stabilizing pair.
\item Putting $\Lambda = X_{K_j, L_j}$, we have
\begin{align*}
\Delta - \Lambda &\succeq
A_{K_j, L_j}^{\top}(\Delta - \Lambda) A_{K_j, L_j} + {\bf U}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \eta_{j-1} I - 4 \eta_{j-1}^2 {\bf O}_{j-1} \right) {\bf U}_{K_{j-1}, L_{j-1}}.
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
The proof proceeds similarly to Lemma~\ref{lemma:ng_key_lemma}. Indeed, putting $\tilde{\Delta} = - \Delta$ and $\tilde{\Lambda} = -\Lambda$, we observe that $\tilde{\Delta}$ and $\tilde{\Lambda}$ solve the DARE~\eqref{eq:max_mod_are}, and that the DARE~\eqref{eq:max_mod_are} has the same form as the DARE considered in Lemma~\ref{lemma:ng_key_lemma}. Further, note that the update rule is equivalent to
\begin{align*}
K_{j} &= K_{j-1} - \eta_{j-1} 2 (R_1 K_{j-1} - B_1^{\top}\Delta A_{K_{j-1}, L_{j-1}}) = K_{j-1} - \eta_{j-1} 2(R_1 K_{j-1} + B_1^{\top} \tilde{\Delta} A_{K_{j-1}, L_{j-1}})\\
&=K_{j-1} + 2 \eta_{j-1}(-R_1K_{j-1} - B_1^{\top}\tilde{\Delta}A_{K_{j-1}, L_{j-1}}).
\end{align*}
In view of these observations, by the same machinery we employed in Lemma~\ref{lemma:ng_key_lemma}, we conclude
\begin{align*}
\tilde{\Lambda} - \tilde{\Delta} &\succeq A_{K_j, L_j}^{\top}( \tilde{\Lambda} - \tilde{\Delta}) A_{K_j, L_j} + \tilde{{\bf U}}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \eta_{j-1} I - 4 \eta_{j-1}^2 \tilde{{\bf O}}_{j-1} \right) \tilde{{\bf U}}_{K_{j-1}, L_{j-1}},
\end{align*}
where
\begin{align*}
\tilde{\bf U}_{K_{j-1}, L_{j-1}} &= -R_1 K_{j-1} - B_1^{\top} \tilde{\Delta} A_{K_{j-1}, L_{j-1}}, \\
\tilde{{\bf O}}_{j-1} &= R_1 - B_1^{\top} \tilde{\Delta} B_1 + B_1^{\top} \tilde{\Delta} B_2 (R_2 + B_2^{\top}\tilde{\Delta} B_2)^{-1} B_2^{\top} \tilde{\Delta} B_1.
\end{align*}
It thus follows that,
\begin{align*}
-\Lambda + \Delta &\succeq
A_{K_j, L_j}^{\top}(\Delta - \Lambda) A_{K_j, L_j} + {\bf U}_{K_{j-1}, L_{j-1}}^{\top} \left( 4 \eta_{j-1} I - 4 \eta_{j-1}^2 {\bf O}_{j-1} \right) {\bf U}_{K_{j-1}, L_{j-1}}.
\end{align*}
\end{proof}
Now it is straightforward to conclude the sublinear convergence rate of Algorithm~\ref{alg3}.
\begin{lemma}
Suppose $\{K_j\}$ are the iterates generated by Algorithm~\ref{alg3}. Then we have
\begin{align*}
\sum_{j=0}^{\infty} \|N_h(K_j)\|_F^2 \le \eta \left( h(K_0) - h(K_*)\right),
\end{align*}
where $\eta > 0$ is some positive number.
\end{lemma}
The analysis of the quasi-Newton method with $K$ as the leader proceeds in a similar manner to Algorithm~\ref{alg2}; as such, we omit the details here.
\section{Concluding Remarks}
This paper considers sequential policy-based algorithms for LQ dynamic games. We prove global convergence of several mixed-policy algorithms and identify the role of control-theoretic constructs in their analysis. Moreover, we clarify a number of intricate issues pertaining to stabilization for LQ games and indefinite cost structures, while removing restrictive assumptions and circumventing the projection step.
\section*{Acknowledgements}
The authors thank Henk van Waarde and Shahriar Talebi for many helpful discussions.
\section*{Appendix}
\begin{appendix}
\section{Hessian of $g(L)$}
\label{sec:hessian}
In this section, we compute the Hessian of $g(L)$ at a point of differentiability $L_0$. Indeed, we shall impose stronger assumptions on $L_0$: there is a unique stabilizing minimizer of $f(K, L_0)$ over $K$, denoted by $K_0$. Throughout the section, we denote $A_0 = A - B_1K_0 - B_2L_0$. Note that, by assumption, the DARE
\begin{align*}
(A-B_2L_0)^{\top}X(A-B_2L_0) + Q-L_0^{\top}R_2L_0 + (A-B_2L_0)^{\top}XB_1(R_1+B_1^{\top}XB_1)^{-1}B_1^{\top}X(A-B_2L_0)=X,
\end{align*}
and the maximal solution $X_0$ is stabilizing, i.e., $K_0 = (R_1+B_1^{\top}X_0B_1)^{-1}B_1^{\top}X_0(A-B_2L_0)$ stabilizes the pair $(A-B_2L_0, B_1)$. As we have noted, the gradient of $g$ at $L_0$ is given by
\begin{align*}
\nabla g(L_0) = 2(-R_2 L_0 - B_2^{\top} X_0 (A-B_2 L_0 - B_1 K_0))Y_0,
\end{align*}
where $Y_0$ is the solution to the Lyapunov matrix equation
\begin{align*}
A_0Y A_0^{\top} + {\bf \Sigma} = Y.
\end{align*}
We now compute the Fr\'{e}chet derivative of $\phi(L_0) = 2(-R_2 L_0 - B_2^{\top}X_0 A_0)$. Since $\phi: \bb R^{m_2 \times n} \to \bb R^{m_2 \times n}$, the Fr\'{e}chet derivative $D\phi(L_0)$ is a bounded linear map in $\ca L(\bb R^{m_2 \times n}, \bb R^{m_2 \times n})$. The action of $D\phi(L_0)$ at any $E \in \bb R^{m_2 \times n}$, denoted by $D\phi(L_0)[E] \eqqcolon {\bf D} \in \bb R^{m_2 \times n}$, is given by
\begin{align*}
{\bf D} = 2(-R_2 E - B_2^{\top}X_0'(E)(A-B_2L_0 - B_1 K_0) -B_2^{\top}X_0(-B_2 E) - B_2^{\top}X_0 (-B_2 K_0'(E))),
\end{align*}
where $X_0'(E) \in \bb R^{n \times n}$ (respectively, $K_0'(E)$) is the action of the Fr\'{e}chet derivative of $X_0$ (respectively, $K_0$) with respect to $L_0$. Here we regard $X_0$ and $K_0$ as maps of $L$. Now $X_0'(E)$ satisfies
\begin{align*}
A_{0}^{\top} X_0'(E) A_0 - X_0'(E) - E^{\top} R_2 L_0 - L_0^{\top}R_2 E + K_0'(E)^{\top}R_1 K_0 + K_0^{\top}R_1 K_0'(E)\\
-(B_2 E + B_1 K_0'(E))^{\top}X_0 A_0 -A_0^{\top}X_0 (B_1K_0'(E)+ B_2 E) = 0.
\end{align*}
Noting that $R_1K_0 - B_1^{\top}X_0 A_0 =0$, we conclude that $X_0'(E)$ is the solution to the following Lyapunov equation
\begin{align*}
A_{0}^{\top} X_0'(E) A_0 - X_0'(E) - E^{\top} (R_2 L_0 + B_2^{\top}X_0 A_0) - (L_0^{\top}R_2 + A_0^{\top}X_0 B_2) E = 0.
\end{align*}
As $A_0$ is Schur, the solution exists, is unique, and is given by
\begin{align*}
X_0'(E) = \sum_{j=0}^{\infty} (A_0^{\top})^j [- E^{\top} (R_2 L_0 + B_2^{\top}X_0 A_0) - (L_0^{\top}R_2 + A_0^{\top}X_0 B_2) E ] A_0^j.
\end{align*}
Similarly, we may compute
\begin{align*}
K_0'(E) &= (R_1 + B_1^{\top}X_0B_1)^{-1}B_1^{\top}X_0(-B_2E) + (R_1+B_1^{\top}X_0B_1)^{-1}B_1^{\top}X_0'(E) (A-B_2L_0)\\
&\quad + (R_1+B_1^{\top}X_0 B_1)^{-1} B_1^{\top}X_0'(E)B_1 (R_1 +B_1^{\top}X_0B_1)^{-1} B_1^{\top}X_0 (A-B_2L_0),
\end{align*}
and
\begin{align*}
Y_0'(E) = \sum_{j=0}^{\infty} A_0^j \left[ A_0Y_0(-B_2E)^{\top} + (-B_2E)Y_0 A_0^{\top}\right](A_0^{\top})^j.
\end{align*}
Combining these computations, the action of the Hessian is given by
\begin{align*}
\langle \nabla^2 g(L) E, E \rangle &= 2\langle (-R_2 + B_2^{\top}X_0B_2 - B_2^{\top}X_0 B_1(R_1+B_1^{\top}X_0 B_1)^{-1}B_1^{\top}X_0B_2)EY_0, E\rangle \\
&\quad + 2\langle (-B_2^{\top}X_0'(E)A_0-B_2^{\top}X_0 B_1 (R_1+B_1^{\top}X_0B_1)^{-1}B_1^{\top}X_0'(E) (A-B_2L_0))Y_0, E\rangle \\
&\quad + 2\langle B_2^{\top}X_0B_1(R_1+B_1^{\top}X_0 B_1)^{-1} B_1^{\top}X_0'(E) B_1 (R_1 +B_1^{\top}X_0B_1)^{-1} B_1^{\top}X_0 (A-B_2L_0)Y_0,E\rangle \\
&\quad + 2 \langle (-R_2 L_0 - B_2^{\top} X_0 A_0)Y_0'(E), E\rangle.
\end{align*}
It is instructive to note that at $L_*$ we have $X_*'(E) = 0$, since $-R_2 L_* - B_2^{\top}X_* A_* = 0$. So the action of the Hessian at $L_*$ is given by
\begin{align*}
\nabla^2 g(L_*)[E, E] = \langle \left( -R_2 + B_2^{\top}X_*B_2 - B_2^{\top}X_* B_1(R_1+B_1^{\top}X_* B_1)^{-1}B_1^{\top}X_*B_2\right)EY_*, E\rangle.
\end{align*}
That is, $\nabla^2 g(L_*)$ is a negative definite operator by condition $(a1)$ in the assumption. Hence, $-g(L)$ is locally strongly convex in a convex neighborhood of $L_*$.
\begin{remark}
Formality aside, the above computation is nothing but a first-order (linear) approximation.
\end{remark}
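The Neumann-series representations used above (valid whenever $A_0$ is Schur) can be checked against a direct Lyapunov solve. The sketch below — our own code, not from the paper — uses a generic symmetric forcing term in place of the $E$-dependent one:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(2)
n = 4
A0 = rng.standard_normal((n, n))
A0 *= 0.6 / max(abs(np.linalg.eigvals(A0)))   # Schur: spectral radius 0.6
C = rng.standard_normal((n, n))
C = C + C.T                                   # generic symmetric forcing term

# truncated Neumann series: sum_j (A0^T)^j C A0^j
S, T = np.zeros((n, n)), C.copy()
for _ in range(200):
    S += T
    T = A0.T @ T @ A0

X = solve_discrete_lyapunov(A0.T, C)          # solves A0^T X A0 - X + C = 0
print(np.max(np.abs(S - X)))                  # effectively zero
```

The series converges geometrically at rate $\rho(A_0)^2$, so two hundred terms are far more than enough here.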
\section{Gradient Policy Analysis for Nonstandard LQR}
\label{sec:gd_inner}
This section is devoted to the proof of Theorem~\ref{thrm:gd_inner_convergence}. As pointed out previously, the strategy for devising a stepsize guaranteeing linear convergence in the LQ game setup is similar to the one presented in~\cite{bu2019lqr}. However, the convergence analysis for the game setup is more involved, as one cannot estimate the needed quantities using the current function values due to the indefiniteness of the term $Q-L^{\top}R_2L$. As we will show, a perturbation bound circumvents this issue and allows deriving the required stepsize.\par
In order to simplify the notation, let
\begin{align*}
\psi(M) \coloneqq f(M, L), \qquad {\bf U} = R_1M-B_1^{\top}X(A-B_2L-B_1M), \qquad A_M = A-B_2L-B_1 M.
\end{align*}
Note that in our analysis $L$ is always fixed; we also adopt the notation $A_L \coloneqq A-B_2L$. \\
If $M_\eta = M - 2\eta {\bf U} Y$ is such that $A_\eta \coloneqq A-B_2 L - B_1 M_{\eta}$ is Schur stable, then \begin{align*}
\psi(M) - \psi(M_\eta) = 4\eta \Tr\left( {\bf U}^{\top} {\bf U} (Y Y(\eta) - \eta a Y Y(\eta) Y) \right),
\end{align*}
where $a = \lambda_n(R_1 + B_1^{\top}X B_1)$, and $Y(\eta)$ solves the Lyapunov matrix equation
\begin{align*}
(A-B_2L-B_1 M_{\eta})^{\top} Y(\eta) (A-B_2L-B_1 M_{\eta}) + {\bf \Sigma} = Y(\eta).
\end{align*}
Now define a univariate function $\phi$ as,
\begin{align*}
\phi(\eta) = \Tr\left( {\bf U}^{\top} {\bf U} (Y Y(\eta) - \eta a Y Y(\eta) Y) \right).
\end{align*}
We observe that $\phi(\eta)$ is well-defined locally around $0$ by openness of the set $\ca S_{L}$.\footnote{$\phi$ is well-defined only if $A-B_2 L - B_1 M_{\eta}$ is Schur.}
Further, observe that $\phi(0) > 0$ if the gradient does not vanish at $M$. Our goal now is to characterize a stepsize such that $\phi(\eta) > 0$. In this direction, we first establish a perturbation bound on $Y(\eta)$.
\begin{proposition}
Putting $\mu_1 = \|Y\|_2 \|B_1 {\bf U} Y\|_2^2/\lambda_1({\bf \Sigma})$ and $\mu_2 = \|Y\|_2 \|B_1 {\bf U}Y\|_2\|A-B_2L\|_2/\lambda_1({\bf \Sigma}) $, if we let
\begin{align*}
\eta_0 = \frac{\sqrt{1+ \frac{\mu_2^2}{\mu_1}}}{2\sqrt{\mu_1}} - \frac{\mu_2}{2\mu_1},
\end{align*} and supposing that
$A_{\eta} = A-B_2L-B_1 M_{\eta}$ is Schur stable for every $\eta \le \eta_0$, then for all $\eta \le \eta_0$,
\begin{align*}
\|Y(\eta)\|_2 \le \beta_0 \|Y\|_2,
\end{align*}
where $\beta_0 = \frac{1}{1- 4 \mu_1 \eta_0^2 - 4\mu_2 \eta_0} > 0$.
\end{proposition}
\begin{proof}
Taking the difference of the corresponding Lyapunov matrix equations, we have
\begin{align*}
Y(\eta) -Y - (A_L-B_1M) (Y(\eta)-Y) (A_L-B_1M)^{\top} &= A_L Y(\eta) 2 \eta (B_1{\bf U} Y)^{\top} + 2\eta B_1{\bf U} Y Y(\eta) A_L \\
&\quad + 4\eta^2 B_1{\bf U}Y Y(\eta) (B_1 {\bf U} Y)^{\top}\\
&\preceq \|Y(\eta)\|_2 \left( 4\eta \|B_1 {\bf U} Y\|_2\|A_L\|_2 + 4\eta^2 \|B_1 {\bf U}Y\|_2^2 \right) I \\
&\preceq \|Y(\eta)\|_2 \left( 4\eta \|B_1 {\bf U} Y\|_2\|A_L\|_2 + 4\eta^2 \|B_1 {\bf U}Y\|_2^2 \right) \frac{{\bf \Sigma}}{\lambda_1({\bf \Sigma})}.
\end{align*}
It thus follows that,
\begin{align*}
Y(\eta) - Y \preceq \frac{\|Y(\eta)\|_2 \left( 4\eta \|B_1 {\bf U} Y\|_2\|A_L\|_2 + 4\eta^2 \|B_1 {\bf U}Y\|_2^2 \right) }{\lambda_1({\bf \Sigma})} Y.
\end{align*}
Hence,
\begin{align*}
\|Y(\eta)\|_2 \left( 1- \frac{\|Y\|_2 \left( 4\eta \|B_1 {\bf U} Y\|_2\|A_L\|_2 + 4\eta^2 \|B_1 {\bf U}Y\|_2^2 \right) } {\lambda_1({\bf \Sigma})} \right) \le \|Y\|_2.
\end{align*}
The proof is completed by a direct computation and by noting that $1/\beta_0 = 1-4\mu_1 \eta_0^2 - 4 \mu_2 \eta_0 > 0$ with the choice of $\eta_0$, and that for every $\eta \le \eta_0$,
\begin{align*}
1-4\mu_1 \eta^2 - 4 \mu_2 \eta \ge 1-4\mu_1 \eta_0^2 - 4 \mu_2 \eta_0.
\end{align*}
\end{proof}
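For reference, the $\eta_0$ defined in the proposition is exactly the positive root of $4\mu_1\eta^2 + 4\mu_2\eta = 1$, so for $\eta$ strictly below $\eta_0$ the quantity $1-4\mu_1\eta^2-4\mu_2\eta$ is positive. This is easy to confirm numerically (the values of $\mu_1, \mu_2$ below are arbitrary assumptions):

```python
import numpy as np

mu1, mu2 = 2.7, 1.3          # assumed positive values for the constants
eta0 = np.sqrt(1 + mu2**2 / mu1) / (2 * np.sqrt(mu1)) - mu2 / (2 * mu1)
print(4 * mu1 * eta0**2 + 4 * mu2 * eta0)   # = 1, i.e. eta0 is the positive root
```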
We now present an important result for our analysis. The basic idea of this lemma is as follows: if $[0,c)$ is the largest interval such that $A_{t}$ is Schur stable for every $t \in [0, c)$ and $A_c$ is marginally Schur stable,\footnote{Such a $c$ exists by the openness of the set of stabilizing gains.} then we can find a number $c_0 < c$ such that $\psi(M_s) \le \psi(M)$ for every $s \in [0, c_0]$.
\begin{lemma}
Let $c$ be the largest real positive number such that $A_{t}$ is Schur stable for every $t \in [0, c)$ and $A_c$ is marginally Schur stable\footnote{Here we have assumed that $c$ is not $+\infty$. Of course, if $c=+\infty$, then any stepsize would remain stabilizing.}. Let
\begin{align*}
a_1 &= a \beta_0\lambda_n(Y) + 4\|{\bf U}\|_2 \beta_0 [\lambda_n(Y)]^2,\\
a_2 &= 4 a \|{\bf U}\|_2 \beta_0 [\lambda_n(Y)]^2;
\end{align*}
then with
\begin{align*}
\eta_1 \le \min(c-\varepsilon, \eta_0, c_0),
\end{align*}
where $\varepsilon > 0$ is an arbitrary positive real number and
\begin{align*}
c_0 < \sqrt{\frac{ 1}{a_2} + \frac{a_1^2}{4a_2^2}} - \frac{a_1}{2a_2},
\end{align*}
one has $\phi(\eta_1) \ge 0$.
\end{lemma}
\begin{proof}
The computations follow a method similar to that used in~\cite{bu2019lqr}, replacing the estimate of $Y(\eta)$ by the bound in the above proposition (see Lemma $5.5$ in~\cite{bu2019lqr} for details).
\end{proof}
If one could explicitly compute $c$ in the above result, then a deterministic stepsize could be chosen; however, this is not feasible. Fortunately, we can show that $c > \min(\eta_0, c_0)$. This then implies that one can choose the stepsize $\eta = \min(\eta_0, c_0)$.
\begin{theorem}
With the stepsize $\eta = \min(\eta_0, c_0)$, $M_{\eta}$ remains stabilizing and $\phi(\eta) \ge 0$.
\end{theorem}
\begin{proof}
Let $\eta = \min(\eta_0,c_0)$. It suffices to prove that for every $t \in [0, \eta]$, $A_t$ is Schur stable and $\phi(t) \ge 0$. We prove this by contradiction. Suppose that this is not the case. Then by continuity of eigenvalues, there exists a number $\eta' \le \eta$ such that $A_s$ is Schur stable for every $s \in [0, \eta')$ and $A_{\eta'}$ is only marginally stable. In this case, the choice of $\eta_0, c_0$ guarantees that for every $s \in [0, \eta')$, $\phi(s)$ is well-defined and $\phi(s) \ge 0$.
Now take a sequence $t_i \to \eta'$ and consider the corresponding sequence of value matrices $\{X_{t_i}\}$. Note that the sequence of function values $\Tr(X_{t_i} {\bf \Sigma})$ satisfies,
\begin{align*}
\Tr(X^+{\bf \Sigma}) \le \Tr(X_{t_i}{\bf \Sigma}) \le \Tr(X{\bf \Sigma}),
\end{align*}
since $\phi(t) \ge 0$.
But this implies that $\{X_{t_i}\}$ is a bounded sequence (note that the above inequality on function values does not guarantee the boundedness of the sequence; it is crucial that $X_{t_i} \succeq X^+$).
Hence by a similar argument adopted in the proof of Theorem~\ref{thrm:ng_inner_convergence}, these observations establish
a contradiction, and as such, the proposed stepsize guarantees stabilization.
\end{proof}
It is now straightforward to conclude the convergence rate.
\section{A Useful Control Theoretic Observation}
In the proofs of Lemmas~\ref{lemma:ng_key_lemma} and~\ref{lemma:qn_key_lemma}, we used the fact that the set $\{L: (A-B_2L, B_1) \text{ is stabilizable}\}$ is open; here is the justification.
\begin{proposition}
\label{prop:stabilizability_open}
Suppose $A \in \bb R^{n \times n}$, $B_1 \in \bb R^{n \times m_1}$ and $B_2 \in \bb R^{n \times m_2}$ are fixed. Then the set
\begin{align*}
\ca L = \{L \in \bb R^{m_2 \times n}: (A-B_2 L, B_1) \text{ is stabilizable}\}
\end{align*}
is open in $\bb R^{m_2 \times n}$.
\end{proposition}
\begin{proof}
Recall a pair $(A-B_2L, B_1)$ is stabilizable if and only if there exists some $F \in \bb R^{m_1 \times n}$ such that $A-B_2 L-B_1 F$ is Schur. So $(A-B_2L, B_1)$ is stabilizable if and only if there exists $X \succ 0$ and $F \in \bb R^{m_1 \times n}$ such that
$$(A-B_2L-B_1F)^{\top} X (A-B_2L-B_1F) - X \prec 0.$$
Now consider the map $\psi: \bb S_n^{++} \times \bb R^{m_2 \times n} \times \bb R^{m_1 \times n} \to \bb R$ by
\begin{align*}
(X, L, F) \mapsto \lambda_{\max}\left( (A-B_2L-B_1F)^{\top} X (A-B_2L-B_1F) - X \right).
\end{align*}
The map $\psi$ is continuous as it is a composition of continuous maps. It thus follows that $\psi^{-1}( (-\infty, 0) )$ is open. We now observe that $\ca L \equiv \pi_2(\psi^{-1}((-\infty, 0)))$, where $\pi_2$ is the projection onto the second coordinate. Since the projection map is an open map\footnote{A map $f: X \to Y$ is open if $f(U)$ is open in $Y$ whenever $U \subseteq X$ is an open set.}, $\ca L$ is open.
\end{proof}
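The proposition can also be seen constructively: a single stabilizing $F$ certifies membership in $\ca L$ for a whole neighborhood of $L$, since the closed-loop spectral radius depends continuously on $L$. A minimal numerical sketch (randomly generated data, our own code, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(3)
n, m1, m2 = 3, 2, 2
A = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, m1))
B2 = rng.standard_normal((n, m2))
L = 0.1 * rng.standard_normal((m2, n))

AL = A - B2 @ L
P = solve_discrete_are(AL, B1, np.eye(n), np.eye(m1))   # (AL, B1) is stabilizable here
F = np.linalg.solve(np.eye(m1) + B1.T @ P @ B1, B1.T @ P @ AL)
rho = max(abs(np.linalg.eigvals(AL - B1 @ F)))
print(rho)                                              # < 1: F stabilizes (AL, B1)

# the same F keeps stabilizing after any sufficiently small perturbation of L
for _ in range(20):
    dL = 1e-4 * rng.standard_normal((m2, n))
    assert max(abs(np.linalg.eigvals(A - B2 @ (L + dL) - B1 @ F))) < 1
```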
\end{appendix}
\bibliographystyle{alpha}
\bibliography{ref}
\end{document}
TITLE: Laplace transform of $f(t)=\left|\sin\frac{t}{2}\right|$?
QUESTION [2 upvotes]: If you are given a rectified sine wave, $$f(t)=\left|\sin\frac{t}{2}\right|$$ how do you find the Laplace transform of this?
I tried using the equation $$L\{ f(t)\} = \frac{1}{1-e^{-sT}} \int_0^T f(t) e^{-st}\, dt$$ with period $T= 2\pi$ but I am not sure if that's correct.
REPLY [1 votes]: If you're not sure go back to first principles:
$$L[f](s) = \int_0^\infty f(t)e^{-st} \ dt = \sum_{n=0}^\infty \int_{2n\pi}^{2(n+1)\pi} |\sin(t/2)| e^{-st} \ dt \ = \ ...$$
Can you take it from here?
Added: The last expression is equal to
$$\sum_{n=0}^\infty \int_0^{2\pi} \sin(t/2) e^{-s(t+2n\pi)} \ dt = \sum_{n=0}^\infty e^{-2n\pi s} \int_0^{2\pi} \sin(t/2) e^{-st} \ dt $$
$$ = \frac{1}{1 - e^{-2\pi s}} \int_0^{2\pi} \sin(t/2) e^{-st} \ dt$$
How about now?
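A quick numerical check (Python/SciPy; this is just an independent verification, not part of the derivation) that the periodic-function formula agrees with a brute-force integration:

```python
import numpy as np
from scipy.integrate import quad

s = 1.0
# brute force: integrate |sin(t/2)| e^{-st} period by period (tail beyond 15 periods ~ e^{-94})
direct = sum(quad(lambda t: abs(np.sin(t / 2)) * np.exp(-s * t),
                  2 * np.pi * k, 2 * np.pi * (k + 1))[0] for k in range(15))
# periodic-function formula: one period divided by 1 - e^{-sT}, with T = 2*pi
one_period, _ = quad(lambda t: np.sin(t / 2) * np.exp(-s * t), 0, 2 * np.pi)
periodic = one_period / (1 - np.exp(-2 * np.pi * s))
print(direct, periodic)   # both ≈ 0.4015 at s = 1
```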
TITLE: Why $\mathbb Q[\sqrt 2]$ is a field?
QUESTION [2 upvotes]: I was reading this question (that has been changed a bit). By definition $\mathbb Q[\sqrt 2]$ is a ring. It's the ring $\{a+b\sqrt 2\mid a,b\in\mathbb Q\}$.
Q1) Does $\mathbb Q[\sqrt 2]=\{a+b\sqrt 2\mid a,b\in\mathbb Q\}$ by definition or $\mathbb Q[\sqrt 2]\cong \{a+b\sqrt 2\mid a,b\in\mathbb Q\}$ ?
Q2) why does $\mathbb Q[\sqrt 2]=\mathbb Q(\sqrt 2)$ (where $\mathbb Q(\sqrt 2)$ is the fraction field of $\mathbb Q[\sqrt 2]$) ?
REPLY [0 votes]: $\mathbb Q[\sqrt 2]$ is a field because $\sqrt{2}$ is an algebraic number i.e. it is the solution of a polynomial with rational coefficients, namely $x^2=2$. This means, for example, that $\frac{1}{\sqrt{2}}=\frac{\sqrt{2}}{2} \in \mathbb Q[\sqrt 2]$.
On the other hand $\mathbb Q[\pi]$ is not a field because $\pi$ is transcendental, so $\frac{1}{\pi} \notin \mathbb Q[\pi]$.
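A small computational illustration of why every nonzero element is invertible — rationalize by the conjugate, $(a+b\sqrt 2)^{-1} = \frac{a-b\sqrt 2}{a^2-2b^2}$, where the denominator is nonzero because $\sqrt 2$ is irrational. The class below is illustrative only:

```python
from fractions import Fraction

class QSqrt2:
    """Exact arithmetic in Q[sqrt(2)]: numbers a + b*sqrt(2) with a, b rational."""
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)
    def __mul__(self, o):
        # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
        return QSqrt2(self.a * o.a + 2 * self.b * o.b, self.a * o.b + self.b * o.a)
    def inverse(self):
        d = self.a**2 - 2 * self.b**2   # never 0 for (a, b) != (0, 0): sqrt(2) is irrational
        return QSqrt2(self.a / d, -self.b / d)

x = QSqrt2(3, 5)                        # 3 + 5*sqrt(2)
y = x * x.inverse()
print(y.a, y.b)                         # 1 0  -> every nonzero element has an inverse
```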
\section{Decomposition of $W_P(N)$}
There are several ways to obtain the formula for $|W_P(N)|$. Using the Chinese remainder theorem, we obtain it easily. Here it is.
\begin{theorem}
The size of $W_P(N)$ is
\begin{eqnarray}
|W_P(N)| = \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{2N}} (p-2) = \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6N}} (p-2)
\end{eqnarray}
\end{theorem}
\begin{proof}
If $2\mid P_N$, let $\mathcal{J}_2 = \{0\}$. For $p\mid P_{2N}$, let $\mathcal{J}_p = \mathcal{I}_{0,p-1}\setminus\{a,b\}$ where $a$ and $b\in \mathcal{I}_{1,p-1}$ are the solutions $s$ of $p\mid s + N$ and $p\mid s - N$ respectively. Since $a\ne b$, then $|\mathcal{J}_p|=p-2$. For $n\in W_P(N)$ and $p\mid P$, let $h_p\equiv n$ mod $p$ and $h_p\in \mathcal{I}_{0,p-1}$. We are going to prove $h_p\in \mathcal{I}_{1,p-1}$ if $p\mid N_P$ and $h_p\in \mathcal{J}_p$ if $p\mid P_N$. First we assume $p\mid N_P$, then $p\nmid (N-n)(N+n)$ means $p\nmid n$ and $h_p\in \mathcal{I}_{1,p-1}$. Next we assume $p\mid P_N$. If $p = 2$, then $2\nmid N$ and $2\nmid(N-n)(N+n)$ mean $n$ is even and $h_2 = 0$. If $p>2$, then $p\nmid (N-n)(N+n)$ means $h_p\not\equiv \pm N$ mod $p$ and therefore, $h_p\in \mathcal{J}_p$. If $n'\in W_P(N)$ and $p\mid n-n'$ for all $p\mid P$, then $n'=n$ and
\begin{eqnarray}
|W_P(N)| \leq \prod_{p\mid N_P}|\mathcal{I}_{1,p-1}| \prod_{p\mid P_N} |\mathcal{J}_p| = \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{2N}} (p-2)
\end{eqnarray}
Here we have $|\mathcal{J}_2|=1$ if $2\mid P_N$. Now we pick one value $f_p\in \mathcal{I}_{1,p-1}$ for $p\mid N_P$ and one value $f_p\in \mathcal{J}_p$ for $p\mid P_N$, then the system of equations
\begin{eqnarray}
z \equiv f_p \mbox{ mod } p \mbox{ for all } p\mid P
\end{eqnarray}
has one solution $n$ between 1 and $P$ by Chinese remainder theorem. Thus, $z = n\in W_P(N)$ and
\begin{eqnarray}
|W_P(N)| \geq \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{2N}} (p-2) = \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6N}} (p-2)
\end{eqnarray}
Here we have $|\mathcal{J}_3| = 1$ if $3\mid P_{2N}$. That completes the proof.
\end{proof}
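The theorem can be checked by brute force for small primorials. The sketch below is our own code, with $W_P(N)$ taken as the set of $n\in[1,P]$ such that $(N-n)(N+n)$ is coprime to $P$; it compares the enumeration with the product formula:

```python
from math import gcd

def W(P, N):
    """Brute-force W_P(N): n in [1, P] with (N-n)(N+n) coprime to P."""
    return [n for n in range(1, P + 1) if gcd(P, abs((N - n) * (N + n))) == 1]

def predicted(P, N):
    """Product formula: (p-1) for p | gcd(P, N), (p-2) for odd p | P with p not dividing N."""
    primes = [p for p in range(2, P + 1) if P % p == 0 and all(p % q for q in range(2, p))]
    out = 1
    for p in primes:
        if N % p == 0:
            out *= p - 1
        elif p > 2:
            out *= p - 2        # p = 2 with N odd contributes a factor 1
    return out

P = 2310                        # 2*3*5*7*11, square-free
for N in (1, 3, 10, 15, 77):
    assert len(W(P, N)) == predicted(P, N)
print(len(W(15, 3)), predicted(15, 3))   # 6 6
```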
\begin{definition}
For $d\mid P_{6N}$, a factor of $P_{6N}$, let $W_P^d(N)$ be the set of $n\in W_P(N)$ such that $\gcd(P_{6N},n) = d$.
\end{definition}
\begin{theorem}
{\bf (Decomposition of $W_P(N)$)} $W_P(N)$ is the disjoint union of $W_P^d(N)$ for all $d\mid P_{6N}$.
\end{theorem}
\begin{proof}
For $n\in W_P(N)$, let $d=\gcd(P_{6N}, n)$, then $n\in W_P^d(N)$ and $W_P(N)$ is the union of $W_P^d(N)$ for all $d\mid P_{6N}$. If $n\in W_P^d(N)\cap W_P^{d'}(N)$, then $d = \gcd(P_{6N}, n) = d'$; hence the sets $W_P^d(N)$ are pairwise disjoint.
\end{proof}
We define $c = \frac{\gcd(NP,6)}{\gcd(N,6)}$, the \emph{index} of $(P,N)$. Clearly $c\mid 6$.
\begin{lemma}
{\rm i.} $c\mid n$ if $n\in W_P(N)$. {\rm ii.} $c\perp N$. {\rm iii.} $P_{cN} = P_{6N}$.
\end{lemma}
\begin{proof}
Assume $n\in W_P(N)$, then $P\perp (N-n)(N+n)$. If $2 \mid c$, then, by definition of $c$, $N$ is odd, $P$ is even and $2\mid n$ since $2\nmid (N-n)$. If $3\mid c$, then $3\mid P$, $3\nmid N$ and $3\mid n$ since $3\nmid (N-n)(N+n)$. Thus, part i and part ii are proved. For part iii, we only need to prove $2\nmid P_{cN}$ and $3\nmid P_{cN}$; it is obvious by definition of $c$.
\end{proof}
\begin{definition}
For $d\mid P_{6N}$, let $P_{cd}^\perp$ be the set of $k\perp P_{cd}$ and $1\leq k\leq P_{cd}$.
\end{definition}
\begin{theorem}
Assume $n\in W_P(N)$ and $d\mid P_{6N}$. Then $n\in W_P^d(N)$ if and only if $n=cdk$ and $k\in P_{cd}^\perp$.
\end{theorem}
\begin{proof}
Assume $n\in W_P^d(N)$, then $d = \gcd(P_{6N},n)$. Since $cd \mid n$ and $n \leq P$, we have $n=cdk$ for some $k\leq P_{cd}$.
Since $n\in W_P(N)$, then $n\perp N_P$ and $k\perp N_P$. Since $\gcd(P_{cN}, cdk) = d$, then $\gcd(P_{cdN}, k) = 1$ and $k\perp P_{cdN}$. Thus, $k\perp N_PP_{cdN}=P_{cd}$ and $k\in P_{cd}^\perp$. Now assume $n=cdk$ and $k\in P_{cd}^\perp$. Let $d'= \gcd(P_{6N},n) = \gcd(P_{cN},cdk)$, then $d'/d = \gcd(P_{cdN},ck) = \gcd(P_{cdN},k)$. Since $\gcd(P_{cd},k) = 1$, then $\gcd(P_{cdN},k) = 1$ and $d'=d$.
\end{proof}
For $d\mid P_{6N}$, let $V_P^d(N)$ be the set of $k\in P_{cd}^\perp$ such that $cdk\in W_P^d(N)$. Clearly $|V_P^d(N)| = |W_P^d(N)|$.
\begin{theorem}
For $d\mid P_{6N}$, the size of $V_P^d(N)$ is
\begin{eqnarray}
|V_P^d(N)| = \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6dN}} (p-3)
\end{eqnarray}
\end{theorem}
\begin{proof}
For $p\mid P_{6dN}$, let $\mathcal{K}_p = \mathcal{I}_{1,p-1} \setminus \{a,b\}$ where $a$ and $b\in \mathcal{I}_{1,p-1}$ are the solutions $s$ of $p\mid cds+N$ and $p\mid cds-N$ respectively. Since $a\ne b$, then $|\mathcal{K}_p| = p-3$. For $k\in V_P^d(N)$, let $h_p\equiv k$ mod $p$ and $h_p\in \mathcal{I}_{0,p-1}$ for $p\mid P$. Since $V_P^d(N)\subset P_{cd}^\perp$, then $h_p\in \mathcal{I}_{1,p-1}$. For $p\mid P_{6dN}$, we have $p\nmid (N-cdk)(N+cdk)$; thus, $cdh_p\not\equiv \pm N$ mod $p$ and $h_p\in \mathcal{K}_p$. Therefore,
\begin{eqnarray}
|V_P^d(N)| \leq \prod_{p\mid N_P}|\mathcal{I}_{1,p-1}| \prod_{p\mid P_{6dN}} |\mathcal{K}_p| = \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6dN}} (p-3)
\end{eqnarray}
Now we pick one value $f_p\in \mathcal{I}_{1,p-1}$ for $p\mid N_P$ and one value $f_p\in \mathcal{K}_p$ for $p\mid P_{6dN}$. Since $P_{cd} = N_PP_{6dN}$, then the system of equations
\begin{eqnarray}
z \equiv f_p \mbox{ mod } p \mbox{ for all } p \mid P_{cd}
\end{eqnarray}
has one solution $k$ between 1 and $P_{cd}$ by Chinese remainder theorem. Thus, $z = k\in V_P^d(N)$ and the size of $V_P^d(N)$ is
\begin{eqnarray}
|V_P^d(N)| \geq \prod_{p\mid N_P}|\mathcal{I}_{1,p-1}| \prod_{p\mid P_{6dN}} |\mathcal{K}_p| = \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6dN}} (p-3)
\end{eqnarray}
That completes the proof.
\end{proof}
\begin{theorem}
Let $K$ be a square-free integer and let $h_p$ be given for prime $p\mid K$, then
\begin{eqnarray}
\prod_{p\mid K} (h_p+1) = \sum_{d\mid K} \prod_{p\mid K_d} h_p
\end{eqnarray}
where $d$ goes over all factors of $K$ and $K_d = K/d$.
\end{theorem}
An easy way to understand this formula is to treat each $h_p$ as an indeterminate in a polynomial. This formula will be applied in the proof of the Goldbach cosine sum-product formula over $W_P^d(N)$ and in two more places: the following example and the proof of the Goldbach momentum formula.
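Numerically, the identity is just the expansion of a product of binomials over the divisors of $K$; a quick check of this (with arbitrary assumed values $h_p$, our own sketch) runs over subsets of the prime factors:

```python
from itertools import combinations
from functools import reduce

primes = [2, 3, 5, 7]                    # K = 210, square-free
h = {2: 0.5, 3: -2.0, 5: 3.25, 7: 1.5}   # arbitrary values h_p

# left side: product of (h_p + 1) over p | K
lhs = reduce(lambda x, p: x * (h[p] + 1), primes, 1.0)

# right side: for each divisor d of K, the product of h_p over primes of K/d
rhs = 0.0
for r in range(len(primes) + 1):
    for d_primes in combinations(primes, r):    # primes dividing the divisor d
        rhs += reduce(lambda x, p: x * h[p],
                      [p for p in primes if p not in d_primes], 1.0)

print(lhs, rhs)   # identical up to rounding
```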
\begin{example}
We take $K = P_{6N}$ and $h_p = p-3$ for $p\mid P_{6N}$, then
\begin{eqnarray}
\prod_{p\mid P_{6N}} (p-2) & = & \sum_{d\mid P_{6N}} \prod_{p\mid P_{6dN}} (p-3) \\
\prod_{p\mid N_P}(p-1)\prod_{p\mid P_{6N}} (p-2) & = & \sum_{d\mid P_{6N}} \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6dN}} (p-3) \\
|W_P(N)| & = & \sum_{d\mid P_{6N}} |W_P^d(N)|
\end{eqnarray}
\end{example}
\begin{theorem}
{\bf (Goldbach momentum formula)} For any $f_p$ defined for each $p\mid P$,
\begin{eqnarray}
\sum_{n\in W_P(N)} \prod_{p\mid P_n} \frac{f_p-1}{p-1} = \prod_{p\mid N_P}(f_p-1) \prod_{p\mid P_{6N}} \left(f_p - 2\frac{f_p-1}{p-1}\right)
\end{eqnarray}
\end{theorem}
\begin{proof}
For $p\mid P$ and given $f_p$, we define
\begin{eqnarray}
h_p = \frac{f_p-1}{p-1}(p-3) = f_p - 2\frac{f_p-1}{p-1} - 1
\end{eqnarray}
Thus,
\begin{eqnarray}
\prod_{p\mid P_{6N}}\left(f_p - 2\frac{f_p-1}{p-1}\right) = \prod_{p\mid P_{6N}} (h_p+1) & = & \sum_{d\mid P_{6N}} \prod_{p\mid P_{6dN}} h_p = \sum_{d\mid P_{6N}} \prod_{p\mid P_{6dN}} \frac{f_p-1}{p-1}(p-3) \\
\prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6N}}\left(f_p - 2\frac{f_p-1}{p-1}\right) & = & \sum_{d\mid P_{6N}} \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6dN}} \frac{f_p-1}{p-1}(p-3) \\
& = & \sum_{d\mid P_{6N}} |V_P^d(N)| \prod_{p\mid P_{6dN}} \frac{f_p-1}{p-1}
\end{eqnarray}
Since $P_{6dN} = P_{cdkN}$ for any $d\mid P_{6N}$ and $k\in V_P^d(N)$, then
\begin{eqnarray}
\prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6N}}\left(f_p - 2\frac{f_p-1}{p-1}\right) = \sum_{d\mid P_{6N}} \sum_{k\in V_P^d(N)} \prod_{p\mid P_{cdkN}} \frac{f_p-1}{p-1}
= \sum_{n\in W_P(N)} \prod_{p\mid P_{nN}} \frac{f_p-1}{p-1}
\end{eqnarray}
For $n\in W_P(N)$, we have $n\perp N_P$ and $P_n = N_PP_{nN}$; thus,
\begin{eqnarray}
\prod_{p\mid P_n} \frac{f_p-1}{p-1} & = & \prod_{p\mid N_P} \frac{f_p-1}{p-1}\prod_{p\mid P_{nN}} \frac{f_p-1}{p-1} \\
\sum_{n\in W_P(N)} \prod_{p\mid P_n} \frac{f_p-1}{p-1} & = & \prod_{p\mid N_P} \frac{f_p-1}{p-1} \sum_{n\in W_P(N)} \prod_{p\mid P_{nN}} \frac{f_p-1}{p-1} \\
& = & \prod_{p\mid N_P} \frac{f_p-1}{p-1} \prod_{p\mid N_P}(p-1) \prod_{p\mid P_{6N}}\left(f_p - 2\frac{f_p-1}{p-1}\right)
\end{eqnarray}
and the Goldbach momentum formula follows.
\end{proof}
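The elementary identity defining $h_p$ at the start of the proof can be verified exactly over the rationals; a quick sketch:

```python
from fractions import Fraction

# Exact check, over the rationals, of the identity defining h_p:
#   (f-1)/(p-1) * (p-3)  =  f - 2*(f-1)/(p-1) - 1.
for p in (5, 7, 11, 13):
    for f in (Fraction(3), Fraction(7, 2), Fraction(-1, 3)):
        lhs = (f - 1) / (p - 1) * (p - 3)
        rhs = f - 2 * (f - 1) / (p - 1) - 1
        assert lhs == rhs
print("h_p identity verified")
```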
The Goldbach momentum formula was first discovered for $f_p = p^s$ by another method \cite{Wu}. We are going to prove the Goldbach cosine sum-product formula in the next several sections.
TITLE: The size of the universe and the scale factor of $\Lambda$CDM model
QUESTION [1 upvotes]: I wonder is there a relation between the size of the universe and the scale factor calculated by solving Friedmann equations.
I mean, if the volume of the universe nowadays is around $V = 10^{78}\ \mathrm{m}^3$, does this mean the current value of the cosmological scale factor is around $10^{26}\ \mathrm{m}$? Can we say $V = a^3~\mathrm{m}^3$?
When solving Friedmann equation:
$$\left( \frac{\dot{a}}{a} \right) = H_0 \sqrt{\Omega_ra(t)^{-4}+\Omega_ma(t)^{-3}+\Omega_{\Lambda}}$$
According to this thread: The scale factor of ΛCDM as a function of time
Or according to this code:
The scale factor of ΛCDM
It gives the normalized dimensionless scale factor with $a(t_0)=1$, where $t_0$ is the current age of the universe, $\sim 13$ Gyr.
Now I think that if we wish to get a dimensionful scale factor with units of length, we should use an alternative form of the Friedmann equation. I tried
$$\dot{a}(\eta) = \frac{H_0}{c} \left(\Omega_m a_0^3 a + \Omega_r a_0^4 + \Omega_\Lambda a^4\right)^{1/2}$$
where $\eta$ is a dimensionless conformal time. This formula is equation (28) of these Notes. But when using NDSolve in Mathematica, as in this thread, the equation was not solved.
So, can anyone help me understand this? I thought that when the equation is solved, it would give $a(\eta_0) = a(13) = 10^{26}$ meters?
REPLY [1 votes]: The size of the universe and the size of the observable universe are different things. The radius of the observable universe is equal to the conformal time times the scale factor if they're appropriately normalized.
If $k=\pm 1$, the scale factor is the radius of curvature of the spatial slices. Commonly it's called $R$ instead of $a$ in that case. If $k=1$ then $R$ is the size of the universe (or its reduced circumference). If $k=-1$, it at least sets a characteristic scale, though the total volume is of course infinite.
If $k=0$, as it is in ΛCDM, then there is no geometrical basis for assigning units to the scale factor. Your formula for $a'(\eta)$, although correct, doesn't help, since it's invariant under a rescaling of $a$ (keeping in mind that the unitless conformal time also depends on $a_0$).
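To make the normalization point concrete: with $a(t_0)=1$ in flat ΛCDM, all observable quantities follow from the dimensionless Friedmann equation; a minimal numerical sketch (the parameter values below are assumptions, not from the question):

```python
import math

# Minimal sketch: in flat LCDM with a(t0) = 1, the age of the universe is
#   t0 = integral_0^1 da / (a * H(a)),
#   H(a) = H0 * sqrt(Om/a^3 + Or/a^4 + OL).
H0 = 70.0 / 978.0            # 70 km/s/Mpc expressed in 1/Gyr (assumed value)
Om, Or = 0.3, 8.5e-5         # assumed matter / radiation density parameters
OL = 1.0 - Om - Or           # flatness fixes Omega_Lambda

def H(a):
    return H0 * math.sqrt(Om / a**3 + Or / a**4 + OL)

n = 100000                   # midpoint rule on (0, 1); integrand ~ sqrt(a) near 0
t0 = sum(1.0 / (a * H(a)) / n for a in ((i + 0.5) / n for i in range(n)))
print(f"age ~ {t0:.2f} Gyr") # ~13.5 Gyr
```

Nothing in this computation assigns a unit of length to $a$ itself; numbers like $10^{26}\,$m enter only through a separately chosen normalization.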
\begin{document}
\title[Summability of formal solutions for some generalized moment PDEs]{Summability of formal solutions for some generalized moment partial differential equations}
\author{Alberto Lastra}
\address{Departamento de F\'isica y Matem\'aticas\\
University of Alcal\'a\\
Ap. de Correos 20, E-28871 Alcal\'a de Henares (Madrid), Spain}
\email{alberto.lastra@uah.es}
\author{S{\l}awomir Michalik}
\address{Faculty of Mathematics and Natural Sciences,
College of Science\\
Cardinal Stefan Wyszy\'nski University\\
W\'oycickiego 1/3,
01-938 Warszawa, Poland}
\email{s.michalik@uksw.edu.pl}
\urladdr{\url{http://www.impan.pl/~slawek}}
\author{Maria Suwi\'nska}
\address{Faculty of Mathematics and Natural Sciences,
College of Science\\
Cardinal Stefan Wyszy\'nski University\\
W\'oycickiego 1/3,
01-938 Warszawa, Poland}
\email{m.suwinska@op.pl}
\date{}
\keywords{summability, formal solution, moment estimates, moment derivatives, moment partial differential equations}
\subjclass[2010]{35C10, 35G10}
\begin{abstract}
The concept of moment differentiation is extended to the class of moment summable functions, giving rise to moment differential properties. The main result relies on accurate upper estimates for the integral representation of the moment derivatives of functions under exponential-like growth at infinity, and on an appropriate deformation of the integration paths. The theory is applied to obtain summability results for a certain family of generalized linear moment partial differential equations with variable coefficients.
\end{abstract}
\maketitle
\thispagestyle{empty}
\section{Introduction}
This work is devoted to the study of the summability properties of formal solutions of moment partial differential equations in the complex domain. The purpose of this work is twofold. On the one hand, a deeper knowledge of the moment derivative operator acting on certain functional spaces of analytic functions is brought to light; on the other hand, this knowledge serves as a tool to attain summability results for the formal solutions of concrete families of Cauchy problems.
The study of moment derivatives, generalizing classical ones, and the solution of moment partial differential equations are of increasing interest in the scientific community. The concept of moment derivative was put forward by W. Balser and M. Yoshino in 2010, in~\cite{BY}. Given a sequence of positive real numbers (in practice a sequence of moments), say $m:=(m(p))_{p\ge0}$, the moment derivative operator $\partial_{m,z}:\C[[z]]\to\C[[z]]$ maps the space of formal power series with complex coefficients into itself in the following way (see Definition~\ref{def260}):
$$\partial_{m,z}\left(\sum_{p\ge0}\frac{a_p}{m(p)}z^p\right)=\sum_{p\ge0}\frac{a_{p+1}}{m(p)}z^p.$$
This definition can be naturally extended to holomorphic functions defined on a neighborhood of the origin.
The choice $m=(\Gamma(1+p))_{p\ge0}=(p!)_{p\ge0}$ determines the usual derivative operator, whereas $m=\left(\Gamma\left(1+\frac{p}{s}\right)\right)_{p\ge0}$ is linked to the Caputo $1/s$-fractional differential operator $\partial_{z}^{1/s}$ (see~\cite{M}, Remark 3). Given $q\in (0,1)$ and $m=([p]_{q}!)_{p\ge0}$, with $[p]_{q}!=[1]_q[2]_q\cdots[p]_{q}$ and $[h]_q=\sum_{j=0}^{h-1}q^{j}$, the operator $\partial_{m,z}$ coincides with the $q$-derivative $D_{q,z}$ defined by
$$D_{q,z}f(z)=\frac{f(qz)-f(z)}{qz-z}.$$
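As an illustration (a sketch not contained in the paper), $\partial_{m,z}$ can be implemented on truncated power series via coefficient lists, recovering the two special cases just mentioned:

```python
import math

# A truncated series f = sum_p c_p z^p is stored as c = [c_0, c_1, ...].
# Writing a_p = c_p * m(p), the definition of the moment derivative gives
#   (d_m f)_p = c_{p+1} * m(p+1) / m(p).
def moment_derivative(c, m):
    return [c[p + 1] * m(p + 1) / m(p) for p in range(len(c) - 1)]

c = [5.0, 4.0, 3.0, 2.0, 1.0]             # f(z) = 5 + 4z + 3z^2 + 2z^3 + z^4

# m(p) = p! recovers the ordinary derivative f'(z) = 4 + 6z + 6z^2 + 4z^3:
print(moment_derivative(c, math.factorial))   # [4.0, 6.0, 6.0, 4.0]

# m(p) = [p]_q! recovers the q-derivative, since D_q z^p = [p]_q z^{p-1}:
q = 0.5
def q_factorial(p):
    out = 1.0
    for h in range(1, p + 1):
        out *= (1 - q**h) / (1 - q)       # [h]_q = 1 + q + ... + q^{h-1}
    return out
print(moment_derivative(c, q_factorial))  # coefficients c_{p+1} * [p+1]_q
```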
Several recent works have studied the previous functional equations in the complex domain and the summability of their formal solutions, such as~\cite{michalik10} regarding summability of fractional linear partial differential equations;~\cite{immink,immink2} in the study of difference equations; or~\cite{dreyfus,ichinobeadachi,lastramalek} in the study of $q$-difference-differential equations.
In the more general framework of moment partial differential equations, the seminal work~\cite{BY} was followed by other studies such as~\cite{M} where the second author solves certain families of Cauchy problems under the action of two moment derivatives. We also refer to~\cite{michalik13jmaa,michalik17fe} and~\cite{sanzproceedings} (Section 7), where conditions on the convergence and summability of formal solutions to homogeneous and inhomogeneous linear moment partial differential equations in two complex variables with constant coefficients are stated. Further studies of moment partial differential equations with constant coefficients are described in~\cite{lastramaleksanz}, or in~\cite{michaliktkacz} when dealing with the Stokes phenomenon, closely related to the theory of summability. We also cite~\cite{lastramaleksanz2}, where the moments govern the growth of the elements involved in the problem under study.
A first step towards the study of summability of the formal solution of a functional equation is that of determining the growth rate of its coefficients, which is described in the works mentioned above, and also more specifically in the recent works~\cite{LMS,michaliksuwinska,suwinska} when dealing with moment partial differential equations with constant and time-dependent coefficients. See also the references therein for a further knowledge on the field.
The present work takes a step forward into the theory of generalized summability of formal solutions of moment partial differential equations. The first main result (Theorem~\ref{lemma1}) determines the integral representation of the moment derivatives ($m$-derivatives) of an analytic function defined on an infinite sector with the vertex at the origin together with a neighborhood of the origin, with prescribed exponential-like growth governed by a second sequence, say $\tilde{\mathbb{M}}$. In addition to this, accurate upper estimates of such derivatives are provided showing the same exponential-like growth at infinity, but also its dependence on the moment sequence $m$. This result entails that the set of $\tilde{\mathbb{M}}$-summable formal power series along certain direction $d\in\R$, $\C\{z\}_{\tilde{\mathbb{M}},d}$ (see Definition~\ref{defi271} and Theorem~\ref{teo1}), is closed under the action of the operator $\partial_{m,z}$. As a consequence, it makes sense to extend the definition of $\partial_{m,z}$ to $\C\{z\}_{\tilde{\mathbb{M}},d}$ (Definition~\ref{def487}) and also to provide analogous estimates as above for the $m$-derivatives of the elements in $\C\{z\}_{\tilde{\mathbb{M}},d}$ (Proposition~\ref{prop497}).
We apply the previous theory to achieve summability results on moment partial differential equations of the form
\begin{equation}\label{epralintro}
\left\{ \begin{array}{lcc}
\left(\partial_{m_1,t}^{k}-a(z)\partial_{m_2,z}^{p}\right)u(t,z)=\hat{f}(t,z)&\\
\partial_{m_1,t}^{j}u(0,z)=\varphi_j(z),&\quad j=0,\ldots,k-1,
\end{array}
\right.
\end{equation}
where $1\le k<p$ are integer numbers and $m_1,\,m_2$ are moment sequences under additional assumptions. The elements $a(z),\,a(z)^{-1},\,\varphi_j(z)$ for $j=0,\ldots,k-1$ are assumed to be holomorphic functions in a neighborhood of the origin, and $\hat{f}\in\C[[t,z]]$. The second main result of this research (Theorem~\ref{teopral}) states that summability of the unique formal solution of (\ref{epralintro}) $\hat{u}(t,z)$ (with respect to $z$ variable) along direction $d\in\R$ is equivalent to summability of $\hat{f}$ and $\partial_{m_2,z}^{j}\hat{u}(t,0)$, for $j=0,\ldots,p-1$, along $d$. A result on the convergence of the formal solution is also provided (Corollary~\ref{corofinal}). It is worth mentioning that the results on the upper estimates of formal solutions obtained in~\cite{LMS} remain coherent with these results, and also with those in~\cite{remy2016}, in the Gevrey classical settings. The study of more general moment problems remains open and it is left for future research of the authors.
The paper is structured as follows: After a section describing the notation followed in the present study (Section~\ref{secnot}), we recall the main concepts and results on the generalized moment differentiation of formal power series. Section~\ref{sec31} is devoted to recalling the main tools associated with strongly regular sequences and some of their related elements. In Section~\ref{secfun}, based on the general moment summability methods, we state the first main result of the paper (Theorem~\ref{lemma1}) and its main consequences. The work is concluded in Section~\ref{secapp} with the application of the theory to the summability of formal solutions of certain family of Cauchy problems involving moment partial differential equations in the complex domain (Theorem~\ref{teopral}).
\section{Notation}\label{secnot}
Let $\mathbb{N}$ denote the set of natural numbers
$\{1,2,\cdots\}$ and $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$.
$\mathcal{R}$ stands for the Riemann surface of the logarithm.
Let $\theta>0$ and $d\in\R$. We write $S_{d}(\theta)$ for the open infinite sector contained in the Riemann surface of the logarithm with the vertex at the origin, bisecting direction $d\in\R$ and opening $\theta>0$, i.e.,
$$S_{d}(\theta):=\left\{z\in\mathcal{R}: |\hbox{arg}(z)-d|<\frac{\theta}{2}\right\}.$$
We write $S_d$ in the case when the opening $\theta>0$ does not need to be specified. A sectorial region $G_d(\theta)$ is a subset of $\mathcal{R}$ such that $G_d(\theta)\subseteq S_{d}(\theta)\cap D(0,r)$ for some $r>0$, and for all $0<\theta'<\theta$ there exists $0<r'<r$ such that $(S_d(\theta')\cap D(0,r'))\subseteq G_d(\theta)$. We denote by $\hbox{arg}(S)$ the set of arguments of $S$, in particular $\hbox{arg}(S_{d}(\theta))=\left(d-\frac{\theta}{2},d+\frac{\theta}{2}\right)$.
We put $\hat{S}_{d}(\theta;r):=S_{d}(\theta)\cup D(0,r)$. Analogously, we write $\hat{S}_{d}(\theta)$ (resp. $\hat{S}_{d}$) whenever the radius $r>0$ (resp. the radius and the opening $r,\,\theta>0$) can be omitted. We write $S\prec S_d(\theta)$ whenever $S$ is an infinite sector with the vertex at the origin with $\overline{S}\subseteq S_d(\theta)$. Analogously, we write $\hat{S}\prec \hat{S}_d(\theta;r)$ if $\hat{S}=S\cup D(0,r')$, with $S\prec S_{d}(\theta)$ and $0<r'<r$. Given two sectorial regions $G_d(\theta)$ and $G_{d'}(\theta')$, we use notation $G_d(\theta)\prec G_{d'}(\theta')$ whenever this relation holds for the sectors involved in the definition of the corresponding sectorial regions.
Given a complex Banach space $(\mathbb{E},\left\|\cdot\right\|_{\mathbb{E}})$, the set $\mathcal{O}(U,\mathbb{E})$ stands for the set of holomorphic functions in a set $U\subseteq\C$, with values in $\mathbb{E}$. If $\mathbb{E}=\C$, then we simply write $\mathcal{O}(U)$. We denote the formal power series with coefficients in $\mathbb{E}$ by $\mathbb{E}[[z]]$.
Given $\hat{f},\hat{g}\in\mathbb{E}[[z]]$, with $\hat{f}(z)=\sum_{p\ge0}f_pz^p$ and $\hat{g}(z)=\sum_{p\ge0}g_pz^p$, such that $g_p\ge0$ for all $p\ge 0$, we write $\hat{f}(z)\ll\hat{g}(z)$ if $|f_p|\le g_p$ for all $p\ge0$.
\section{On generalized summability and moment differentiation}
The aim of this section is to recall the concept and main results on the so-called generalized moment differentiation of formal power series. Certain algebraic properties associated with the families of analytic functions related to this notion allow us to go further by defining the moment differentiation associated with the sum of a given formal power series.
\subsection{Strongly regular sequences and related elements}\label{sec31}
As a first step, we recall the main tools associated with strongly regular sequences and some of their related elements. The concept of a strongly regular sequence was put forward by V. Thilliez in~\cite{thilliez}.
\begin{defin}
Let $\mathbb{M}:=(M_{p})_{p\ge0}$ be a sequence of positive real numbers with $M_0=1$.
\begin{itemize}
\item[$(lc)$] The sequence $\mathbb{M}$ is logarithmically convex if
$$M_p^2\le M_{p-1}M_{p+1},\hbox{ for all }p\ge1.$$
\item[$(mg)$] The sequence $\mathbb{M}$ is of moderate growth if there exists $A_1>0$ such that $$M_{p+q}\le A_1^{p+q}M_{p}M_{q},\hbox{ for all }p,q\ge 0.$$
\item[$(snq)$] The sequence $\mathbb{M}$ satisfies the strong non-quasianalyticity condition if there exists $A_2>0$ such that
$$\sum_{q\ge p}\frac{M_q}{(q+1)M_{q+1}}\le A_2\frac{M_p}{M_{p+1}},\hbox{ for all }p\ge0.$$
\end{itemize}
Any sequence satisfying properties $(lc)$, $(mg)$ and $(snq)$ is known as a \textit{strongly regular sequence}.
\end{defin}
It is worth recalling that given a (lc) sequence $\mathbb{M}=(M_p)_{p\ge0}$, one has
\begin{equation}\label{e136}
M_pM_q\le M_{p+q}, \hbox{ for all }p,q\in\N_0
\end{equation}
(see Proposition 2.6 (ii.2)~\cite{sanzproceedings}), which entails that given $s\in\N_0$, one has
\begin{equation}\label{e140}
M_p^{s}\le M_{ps},\hbox{ for all }p\in\N_0,
\end{equation}
following an induction argument.
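Both consequences (\ref{e136}) and (\ref{e140}) can be checked numerically on a concrete (lc) sequence, e.g.\ the Gevrey-type sequence $M_p=(p!)^{\alpha}$ (an assumed example, $\alpha=1.5$):

```python
import math

# Check of (e136) and (e140) for the (lc) sequence M_p = (p!)**alpha:
alpha = 1.5
M = lambda p: math.factorial(p) ** alpha

for p in range(8):
    for q in range(8):
        assert M(p) * M(q) <= M(p + q)    # (e136): M_p M_q <= M_{p+q}
for s in range(1, 4):
    for p in range(8):
        assert M(p) ** s <= M(p * s)      # (e140): M_p^s <= M_{ps}
print("inequalities (e136) and (e140) hold on the tested range")
```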
Examples of such sequences are of great importance in the study of formal and analytic solutions of differential equations. Gevrey sequences are predominant among them, appearing as upper bounds for the growth of the coefficients of the formal solutions of such equations. Given $\alpha>0$, the Gevrey sequence of order $\alpha$ is defined by $\mathbb{M}_{\alpha}:=(p!^{\alpha})_{p\ge0}$. A natural generalization of the previous ones are the sequences defined by $\mathbb{M}_{\alpha,\beta}:=(p!^{\alpha}\prod_{m=0}^{p}\log^{\beta}(e+m))_{p\ge0}$ for $\alpha>0$ and $\beta\in\R$. These sequences turn out to be strongly regular, provided that the first terms are slightly modified in the case $\beta<0$, without affecting their asymptotic behavior. Formal solutions of difference equations are closely related to the level $1+$, associated with the sequence $\mathbb{M}_{1,-1}$, see~\cite{immink,immink2}.
Given a strongly regular sequence $\mathbb{M}=(M_p)_{p\ge0}$, one can define the function
\begin{equation}\label{e123}
M(t):=\sup_{p\ge0}\log\left(\frac{t^p}{M_p}\right),\quad t>0,\qquad M(0)=0,
\end{equation}
which is non-decreasing, and continuous in $[0,\infty)$ with $\lim_{t\to\infty}M(t)=+\infty$. J. Sanz (\cite{sanz},~Theorem 3.4) proves that the order of the function $M(t)$, defined in~\cite{goldbergostrovskii} by
$$\rho(M):=\limsup_{r\to\infty}\max\left\{0,\frac{\log (M(r))}{\log(r)}\right\}$$
is a positive real number. Moreover, its inverse $\omega(\mathbb{M}):=1/\rho(M)$ determines the limit opening for a sector in such a way
that nontrivial flat functions in ultraholomorphic classes of functions defined on such sectors exist; see Corollary 3.16,~\cite{jjs19}. Indeed, $\omega(\mathbb{M})$ can be recovered directly from $\mathbb{M}$ under some admissibility conditions on the sequence $\mathbb{M}$ (Corollary 3.10,~\cite{jjs17}):
$$\lim_{p\to\infty}\frac{\log(M_{p+1}/M_p)}{\log(p)}=\omega(\mathbb{M}).$$
Such conditions are satisfied by the sequences of general use in the asymptotic theory of solutions to functional equations.
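For the Gevrey sequence $\mathbb{M}_\alpha=(p!^{\alpha})_{p\ge0}$ one has $M_{p+1}/M_p=(p+1)^{\alpha}$, so the limit above evaluates to $\alpha=\omega(\mathbb{M}_\alpha)$; a quick numerical illustration (not from the text):

```python
import math

# For M_p = (p!)**alpha one has M_{p+1}/M_p = (p+1)**alpha, so
#   log(M_{p+1}/M_p) / log(p) = alpha * log(p+1) / log(p)  ->  alpha,
# i.e. the limit recovers omega(M_alpha) = alpha.
alpha = 2.0

def ratio(p):
    return alpha * math.log(p + 1) / math.log(p)

print([round(ratio(p), 4) for p in (10, 100, 10000)])  # approaches 2.0
```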
The next results can be found in~\cite{chaumatchollet,thilliez} under more general assumptions.
\begin{lemma}\label{lemaaux1}
Let $\mathbb{M}$ be a strongly regular sequence, and let $s\ge1$. Then, there exists $\rho(s)\ge1$ which only depends on $\mathbb{M}$ and $s$, such that
$$ \exp\left(-M(t)\right)\le \exp(-sM(t/\rho(s))),$$
for all $t\ge0$.
\end{lemma}
In view of Lemma 1.3.4 in~\cite{thilliez}, given a strongly regular sequence $\mathbb{M}=(M_{p})_{p\ge0}$ and $s>0$, the sequence $\mathbb{M}^s=(M_p^{s})_{p\ge0}$ is strongly regular, and $\omega(\mathbb{M}^s)=s\,\omega(\mathbb{M})$.
Following~\cite{petzsche}, one has the next definition.
\begin{defin}
Given two sequences of positive real numbers, $\mathbb{M}=(M_p)_{p\ge0}$ and $\tilde{\mathbb{M}}:=(\tilde{M}_p)_{p\ge 0}$, we say that $\mathbb{M}$ and $\tilde{\mathbb{M}}$ are \textit{equivalent} if there exist $B_1,B_2>0$ with
\begin{equation}\label{e195}
B_1^p M_p\le \tilde{M}_p\le B_2^p M_p,
\end{equation}
for every $p\ge0$.
\end{defin}
The next result is a direct consequence of the definition of the function $M$ in (\ref{e123}).
\begin{lemma}\label{lemma2}
Let $\mathbb{M},\,\tilde{\mathbb{M}}$ be two strongly regular sequences which are equivalent. Let $M(t)$ (resp. $\tilde{M}(t)$) be the function associated with $\mathbb{M}$ (resp. $\tilde{\mathbb{M}}$) through (\ref{e123}). Then, it holds that
$$ M\left(\frac{t}{B_2}\right)\le \tilde{M}(t)\le M\left(\frac{t}{B_1}\right) \hbox{ for all }t\ge0,$$
where $B_1,\,B_2$ are the positive constants in (\ref{e195}).
\end{lemma}
\begin{defin}
Let $(M_p)_{p\ge0}$ be a sequence of positive real numbers with $M_0=1$, and let $s\in\R$. A sequence of positive real numbers $(m(p))_{p\ge0}$ is said to be an $(M_p)$-sequence of order $s$ if there exist $A_3,A_4>0$ with
\begin{equation}\label{e100}
A_3^p(M_p)^{s}\le m(p)\le A_4^p(M_p)^s,\quad p\ge0.
\end{equation}
\end{defin}
\subsection{Function spaces and generalized summability}\label{secfun}
In the whole subsection $(\mathbb{E},\left\|\cdot\right\|_{\mathbb{E}})$ stands for a complex Banach space.
\begin{defin}
Let $r,\,\theta>0$ and $d\in\R$. We also fix a sequence $\mathbb{M}$ of positive real numbers. The set $\mathcal{O}^{\mathbb{M}}(\hat{S}_d(\theta;r),\mathbb{E})$ consists of all functions $f\in\mathcal{O}(\hat{S}_d(\theta;r),\mathbb{E})$ such that for every $0<\theta'<\theta$ and $0<r'<r$ there exist $\tilde{c},\tilde{k}>0$ with
\begin{equation}\label{e144}
\left\|f(z)\right\|_{\mathbb{E}}\le \tilde{c}\exp\left(M\left(\frac{|z|}{\tilde{k}}\right)\right),\quad z\in \hat{S}_d(\theta',r').
\end{equation}
Analogously, the set $\mathcal{O}^{\mathbb{M}}(S_d(\theta),\mathbb{E})$ consists of all $f\in\mathcal{O}(S_d(\theta),\mathbb{E})$ such that for all $0<\theta'<\theta$, there exist $\tilde{c},\tilde{k}>0$ such that (\ref{e144}) holds for all $z\in S_d(\theta')$.
\end{defin}
The aforementioned definition generalizes that of functions of exponential growth at infinity of some positive order. Indeed, if $\mathbb{M}=\mathbb{M}_{\alpha}$ for some $\alpha>0$, then the property (\ref{e144}) states that $f$ is of exponential growth of order at most $1/\alpha$.
The general moment summability methods developed by W. Balser (see Section 5.5,~\cite{balser}) were adapted by J. Sanz to the framework of strongly regular sequences (see Section 5,~\cite{sanz}; or Definition 6.2.,~\cite{sanzproceedings}).
\begin{defin}
Let $\mathbb{M}$ be a strongly regular sequence with $\omega(\mathbb{M})<2$. Let $M$ be the function associated with $\mathbb{M}$, defined by (\ref{e123}). The complex functions $e,\,E$ define \textit{kernel functions for $\mathbb{M}$-summability} if the following properties hold:
\begin{enumerate}
\item $e\in\mathcal{O}(S_0(\omega(\mathbb{M})\pi))$. The function $e(z)/z$ is locally uniformly integrable at the origin, i.e., there exists $t_0>0$, and for all $z_0\in S_{0}(\omega(\mathbb{M})\pi)$ there exists a neighborhood $U$ of $z_0$, with $U\subseteq S_0(\omega(\mathbb{M})\pi)$, such that
\begin{equation}\label{e192}
\int_0^{t_0}\frac{\sup_{z\in U}\left|e\left(\frac{t}{z}\right)\right|}{t}dt<\infty.
\end{equation}
Moreover, for all $\epsilon>0$ there exist $c, k>0$ such that
\begin{equation}\label{e162}
|e(z)|\le c\exp\left(-M\left(\frac{|z|}{k}\right)\right)\quad \hbox{ for all }z\in S_0(\omega(\mathbb{M})\pi-\epsilon).
\end{equation}
We also assume that $e(x)\in\R$ for all $x>0$.
\item $E\in\mathcal{O}(\C)$ and satisfies that
\begin{equation}\label{e191}
|E(z)|\le \tilde{c}\exp\left(M\left(\frac{|z|}{\tilde{k}}\right)\right),\quad z\in \C,
\end{equation}
for some $\tilde{c},\,\tilde{k}>0$. There exists $\beta>0$ such that for all $0<\tilde{\theta}<2\pi-\omega(\mathbb{M})\pi$ and $M_E>0$, there exist $\tilde{c}_2>0$ with
\begin{equation}\label{e202}
|E(z)|\le \frac{\tilde{c}_2}{|z|^{\beta}},\quad z\in S_\pi(\tilde{\theta})\setminus D(0,M_E).
\end{equation}
\item Both kernel functions are related via the Mellin transform of $e$. More precisely, the \textit{moment function} associated with $e$, defined by
\begin{equation}\label{e166}
m_{e}(z):=\int_{0}^{\infty} t^{z-1}e(t)dt
\end{equation}
is a complex continuous function in $\{z\in\C:\hbox{Re}(z)\ge 0\}$ and holomorphic in $\{z\in\C:\hbox{Re}(z)> 0\}$. The kernel function $E$ has the power series expansion at the origin given by
\begin{equation}\label{e167}
E(z)=\sum_{p\ge0}\frac{z^p}{m_{e}(p)},\quad z\in\C.
\end{equation}
\end{enumerate}
\end{defin}
\textbf{Remark:} In the remainder of the work we will only mention the kernel function $e$, rather than the pair $e,\,E$, since $E$ is determined by $e$ through its Taylor expansion at the origin.
\vspace{0.3cm}
\textbf{Remark:} The growth condition of the kernel function $E(z)$ at infinity (\ref{e202}) is usually substituted in the literature (\cite{balser,michalik17fe,sanz,sanzproceedings}) by the less restrictive condition:
\textit{``The function $E(1/z)/z$ is locally uniformly integrable at the origin in the sector $S_{\pi}((2-\omega(\mathbb{M}))\pi)$. Namely, there exists $t_0>0$, and for all $z_0\in S_{\pi}((2-\omega(\mathbb{M}))\pi)$ there exists a neighborhood $U$ of $z_0$, $U\subseteq S_{\pi}((2-\omega(\mathbb{M}))\pi)$, such that
$$\int_0^{t_0}\frac{\sup_{z\in U}\left|E\left(\frac{z}{t}\right)\right|}{t}dt<\infty.\hbox{''}$$}
Condition (\ref{e202}) has already been used and justified in~\cite{jkls} (see Lemma 4.10, Remark 4.11 and Remark 4.12 in~\cite{jkls}), in order to obtain convolution kernels for multisummability.
\begin{defin}
Let $\mathbb{M}$ be a strongly regular sequence and let $e,\,E$ be a pair of kernel functions for $\mathbb{M}$-summability. Let $m_e$ be the moment function given by (\ref{e166}). The sequence $(m_{e}(p))_{p\ge0}$ is the so-called \textit{sequence of moments} associated with $e$.
\end{defin}
\textbf{Remark:}
The previous definition can be adapted to the case $\omega(\mathbb{M})\ge2$ by means of a ramification of the kernels (see~\cite{sanzproceedings}, Remark 6.3 (iii)). For practical purposes, we will focus on the case that $\omega(\mathbb{M})<2$, taking into consideration that all the results can be adapted to the general case.
\vspace{0.3cm}
\textbf{Remark:}\label{r223}
Given a strongly regular sequence $\mathbb{M}$, the existence of pairs of kernel functions for $\mathbb{M}$-summability is guaranteed, provided that $\mathbb{M}$ admits a nonzero proximate order (see~\cite{jjs17,lastramaleksanz}).
\vspace{0.3cm}
\begin{example}\label{ex187}
Let $\alpha>0$. We consider a Gevrey sequence $\mathbb{M}_{\alpha}$. Then, the functions $e_{\alpha}(z):=\frac{1}{\alpha}z^{\frac{1}{\alpha}}\exp\left(-z^{\frac{1}{\alpha}}\right)$ and $E_{\alpha}(z):=\sum_{p\ge0}\frac{z^p}{\Gamma(1+\alpha p)}$ are kernel functions for $\mathbb{M}_{\alpha}$-summability. Indeed, the moment function is given by $m_{\alpha}(z):=\Gamma(1+\alpha z)$.
\end{example}
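The moment function of this example can be checked numerically; the sketch below (quadrature parameters are ad hoc choices) compares the Mellin integral (\ref{e166}) of $e_\alpha$ with $\Gamma(1+\alpha p)$:

```python
import math

# Numerical check that the kernel e_alpha has moment function
#   m_alpha(p) = \int_0^infty t^{p-1} e_alpha(t) dt = Gamma(1 + alpha*p),
# where e_alpha(t) = (1/alpha) t^{1/alpha} exp(-t^{1/alpha}).
alpha = 0.5

def e_kernel(t):
    return (1.0 / alpha) * t ** (1.0 / alpha) * math.exp(-t ** (1.0 / alpha))

def moment(p, n=100000, T=8.0):
    h = T / n                                 # midpoint rule on (0, T];
    return sum(((i + 0.5) * h) ** (p - 1)     # the tail beyond T is ~exp(-T^2)
               * e_kernel((i + 0.5) * h) * h
               for i in range(n))

for p in (1, 2, 3):
    print(p, round(moment(p), 4), round(math.gamma(1 + alpha * p), 4))
```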
The definition of moment differentiation, moment (formal) Borel and moment Laplace transformation generalize the classical concepts of differentiation, formal Borel and Laplace transformations, respectively. In the classical setting of the Gevrey sequence of order $\alpha>0$, the moment sequence is $(\Gamma(1+\alpha p))_{p\ge0}$ seen in Example~\ref{ex187}. Classical differentiation corresponds to $\alpha=1$.
At this point, given a strongly regular sequence $\mathbb{M}$, a sequence of moments can be constructed from any pair of kernel functions for $\mathbb{M}$-summability, say $e$ and $E$. The associated sequence of moments $m_e:=(m_e(p))_{p\ge0}$ is a strongly regular sequence (see~\cite{sanzproceedings}, Remark 6.6), which is equivalent to $\mathbb{M}$ (see~\cite{sanzproceedings}, Proposition 6.5). Therefore, $\omega(\mathbb{M})=\omega(m_{e})$. The definition of generalized derivatives is given in terms of a sequence of moments, rather than the initial sequence itself, and we will work directly with this sequence, leaving aside the initial strongly regular sequence and the pair of kernel functions. Hereinafter, when referring to a sequence of moments, we will assume without further mention that such a sequence is indeed the sequence of moments associated with some pair of kernel functions, and therefore with a strongly regular sequence (under conditions that admit such a pair of kernels, e.g., if the strongly regular sequence admits a nonzero proximate order).
Departing from a sequence of moments $m_e$, one can consider the formal $m_e$-moment Borel transform. This definition can be extended for any sequence of positive numbers, and not only to a sequence of moments. We present it in this way for the sake of clarity.
\begin{defin}\label{def259}
Let $m_e=(m_e(p))_{p\ge0}$ be a sequence of moments. The formal $m_e$-moment Borel transform $\hat{\mathcal{B}}_{m_e,z}:\mathbb{E}[[z]]\to\mathbb{E}[[z]]$ is defined by
$$\hat{\mathcal{B}}_{m_e,z}\left(\sum_{p\ge0}a_pz^p\right)=\sum_{p\ge0}\frac{a_p}{m_e(p)}z^p.$$
\end{defin}
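For illustration (a sketch on truncated series, not contained in the paper), with the Gevrey-1 moments $m(p)=\Gamma(1+p)=p!$ of Example~\ref{ex187}, the transform maps the classical divergent Euler series to a convergent geometric series:

```python
import math

# Sketch of the formal m-moment Borel transform acting on coefficient
# lists, with the Gevrey-1 moments m(p) = Gamma(1+p) = p!.
def formal_borel(a, m):
    return [a_p / m(p) for p, a_p in enumerate(a)]

# Classical example: the divergent Euler series sum_p p!(-z)^p is mapped
# to the geometric series sum_p (-z)^p, whose sum is 1/(1+z).
N = 10
euler = [math.factorial(p) * (-1) ** p for p in range(N)]
borel = formal_borel(euler, math.factorial)
print(borel)  # [1.0, -1.0, 1.0, -1.0, ...]
```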
There are several different equivalent approaches to the general moment summability of formal power series. We refer to~\cite{balser}, Section 6.5, under Gevrey-like settings, and~\cite{sanzproceedings}, Section 6, in the framework of strongly regular sequences.
\begin{defin}\label{defi271}
Let $\mathbb{M}$ be a strongly regular sequence admitting a nonzero proximate order. The formal power series $\hat{u}\in\mathbb{E}[[z]]$ is $\mathbb{M}$-summable in direction $d\in\R$ if the formal power series $\hat{\mathcal{B}}_{\mathbb{M},z}(\hat{u}(z))$ is convergent in a neighborhood of the origin and can be extended to an infinite sector of bisecting direction $d$, say $\hat{S}_d$, such that the extension belongs to $\mathcal{O}^{\mathbb{M}}(\hat{S}_d,\mathbb{E})$. We write $\mathbb{E}\{z\}_{\mathbb{M},d}$ for the set of $m_e$-summable formal power series in $\mathbb{E}[[z]]$. Here we have assumed that $e$ is a kernel function for $\mathbb{M}$-summability and $m_e$ is its associated sequence of moments.
\end{defin}
\textbf{Remark:} We recall that, given a sequence of moments associated with a strongly regular sequence $\mathbb{M}$ via a kernel function $e$, say $m_e$, $\mathbb{M}$ and $m_e$ are equivalent sequences. Regarding Lemma~\ref{lemma2} and the definition of the formal Borel transformation, it is easy to check that the set $\mathbb{E}\{z\}_{m_e,d}$ does not depend on the choice of the kernel function $e$, and therefore one can write $\mathbb{E}\{z\}_{\mathbb{M},d}:=\mathbb{E}\{z\}_{m_e,d}$ for any choice of kernel function $e$. Moreover, we observe that the formal power series $\hat{\mathcal{B}}_{m_e,z}(\hat{u})$ has a positive radius of convergence with independence of the kernel function considered, associated with $\mathbb{M}$.
\vspace{0.3cm}
The statements in the next proposition can be found in detail in~\cite{sanzproceedings}, Section 6.
\begin{prop}\label{prop316}
Let $d\in\R$ and let $e,\,E$ be a pair of kernel functions for $\mathbb{M}$-summability. Let $\theta>0$. For every $f\in\mathcal{O}^{\mathbb{M}}(S_d(\theta),\mathbb{E})$, the $e$-\emph{Laplace transform} of $f$ along a direction $\tau\in\hbox{arg}(S_d(\theta))$ is defined by
$$(T_{e,\tau}f)(z)=\int_0^{\infty(\tau)}e(u/z)f(u)\frac{du}{u},$$
for $|\hbox{arg}(z)-\tau|<\omega(\mathbb{M})\pi/2$, and $|z|$ small enough. The variation of $\tau\in\hbox{arg}(S_d)$ defines a function denoted by $T_{e,d}f$ in a sectorial region $G_d(\theta+\omega(\mathbb{M})\pi)$.
Under the assumption that $\omega(\mathbb{M})<2$ let $G=G_d(\theta)$ be a sectorial region with $\theta>\omega(\mathbb{M})\pi$. Given $f\in\mathcal{O}(G,\mathbb{E})$ and continuous at 0, and $\tau\in\R$ with $|\tau-d|<(\theta-\omega(\mathbb{M})\pi)/2$, the operator $T^{-}_{e,\tau}$, known as the $e$-\emph{Borel transform} along direction $\tau$ is defined by
$$(T^{-}_{e,\tau}f)(u):=\frac{-1}{2\pi i}\int_{\delta_{\omega(\mathbb{M})}(\tau)}E(u/z)f(z)\frac{dz}{z},\quad u\in S_{\tau},$$
where $S_{\tau}$ is an infinite sector of bisecting direction $\tau$ and small enough opening, and $\delta_{\omega(\mathbb{M})}(\tau)$ is the Borel-like path consisting of the concatenation of a segment from the origin to a point $z_0$ with $\hbox{arg}(z_0)=\tau+\omega(\mathbb{M})(\pi+\epsilon)/2$, for some small enough $\epsilon\in(0,\pi)$, followed with the arc of circle centered at 0, joining $z_0$ and the point $z_1$, with $\hbox{arg}(z_1)=\tau-\omega(\mathbb{M})(\pi+\epsilon)/2$, clockwise, and concluding with the segment of endpoints $z_1$ and the origin.
Let $G_{d}(\theta)$ and $f$ be as above. The family $\{T^{-}_{e,\tau}\}_{\tau}$, with $\tau$ varying among the real numbers with $|\tau-d|<(\theta-\omega(\mathbb{M})\pi)/2$ defines a holomorphic function denoted by $T^{-}_{e,d}f$ in the sector $S_{d}(\theta-\omega(\mathbb{M})\pi)$ and $T^{-}_{e,d}f\in\mathcal{O}^{\mathbb{M}}(S_{d}(\theta-\omega(\mathbb{M})\pi),\mathbb{E})$.
\end{prop}
\textbf{Remark:} We recall that if $\lambda\in\C$, with $\hbox{Re}(\lambda)\ge0$, then $T^{-}_{e,d}(u\mapsto u^{\lambda})(z)=\frac{z^{\lambda}}{m_e(\lambda)}$, which relates $T^{-}_{e,d}$ with the formal Borel operator $\hat{\mathcal{B}}_{m_e,u}$.
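For orientation, this remark can be made concrete in the classical Gevrey case; the following is a minimal worked example, assuming the standard kernel choice $e(z)=ze^{-z}$ for $\mathbb{M}=(p!)_{p\ge0}$:

```latex
% Illustrative example (classical Gevrey case), assuming the kernel e(z) = z e^{-z}:
\[
  m_e(\lambda)=\int_0^{\infty}t^{\lambda-1}e(t)\,dt
              =\int_0^{\infty}t^{\lambda}e^{-t}\,dt
              =\Gamma(1+\lambda),
\]
\[
  \text{so that}\qquad
  T^{-}_{e,d}\bigl(u\mapsto u^{\lambda}\bigr)(z)
  =\frac{z^{\lambda}}{\Gamma(1+\lambda)},
\]
% recovering the classical (level-one) Borel transform of the monomial.
```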
Theorem 30 in~\cite{balser} can be adapted to the framework of strongly regular sequences, under minor modifications.
\begin{theo}\label{teo324}
Let $S=S_d(\theta)$ for some $\theta>0$. Let $\mathbb{M}$ be a strongly regular sequence with $\omega(\mathbb{M})<2$ admitting a nonzero proximate order, and choose a kernel function $e$ for $\mathbb{M}$-summability. Let $f\in\mathcal{O}^{\mathbb{M}}(S,\mathbb{E})$ and define $g(z)=(T_{e,d}f)(z)$ for $z$ in a sectorial region $G_{d}(\theta+\omega(\mathbb{M})\pi)$. Then it holds that $f\equiv T^{-}_{e,d}g$.
\end{theo}
The following provides an equivalent characterization of $\mathbb{M}$-summable formal power series (see Theorem 6.18 in~\cite{sanzproceedings}).
\begin{theo}\label{teo1}
Let $\mathbb{M}=(M_p)_{p\ge0}$ be a strongly regular sequence admitting a nonzero proximate order. Let $\hat{u}=\sum_{p\ge0}u_pz^p\in\mathbb{E}[[z]]$ and $d\in\R$. The following statements are equivalent:
\begin{itemize}
\item[(a)] $\hat{u}(z)$ is $\mathbb{M}$-summable in direction $d$.
\item[(b)] There exist a sectorial region $G_d(\theta)$ with $\theta>\omega(\mathbb{M})\pi$ and $u\in\mathcal{O}(G_d(\theta),\mathbb{E})$ such that for all $0<\theta'<\theta$ and $r>0$ with $S_d(\theta';r)\subseteq G_d(\theta)$, there exist $C,\,A>0$ such that for every integer $N\ge1$,
$$\left\|u(z)-\sum_{p=0}^{N-1}u_{p}z^p\right\|_{\mathbb{E}}\le C A^{N}M_N|z|^{N},\qquad z\in S_d(\theta';r).$$
\end{itemize}
If one of the previous equivalent statements holds, the function $u$ in Definition~\ref{defi271} can be constructed as the $e$-Laplace transform of $\hat{\mathcal{B}}_{\mathbb{M},z}(\hat{u}(z))$ along direction $\tau\in\hbox{arg}(S_d)$.
The previous construction can be carried out for any choice of the kernel $e$ for $\mathbb{M}$-summability, and it is independent of that choice.
\end{theo}
The function $u$ satisfying the previous equivalent properties is unique (see Corollary 4.30 in~\cite{sanzproceedings}), and it is known as the \textit{$\mathbb{M}$-sum} of $\hat{u}$ along direction $d$. We write $\mathcal{S}_{\mathbb{M},d}(\hat{u})$ for the $\mathbb{M}$-sum of $\hat{u}$ along direction $d$.
The concept of a moment differential operator was put forward by W. Balser and M. Yoshino in~\cite{BY}, and extended in~\cite{LMS} to $m_e$-moment differential operators, which lean on moment sequences of some positive order.
\begin{defin}\label{def260}
Let $(\mathbb{E},\left\|\cdot\right\|_{\mathbb{E}})$ be a complex Banach space. Given a sequence of moments $(m_{e}(p))_{p\ge0}$, the $m_e$-moment differentiation $\partial_{m_e,z}$ is the linear operator $\partial_{m_e,z}:\mathbb{E}[[z]]\to\mathbb{E}[[z]]$ defined by
$$\partial_{m_e,z}\left(\sum_{p\ge0}\frac{u_{p}}{m_e(p)}z^p\right):=\sum_{p\ge0}\frac{u_{p+1}}{m_e(p)}z^p.$$
\end{defin}
This definition can be naturally extended to $f\in\mathcal{O}(D,\mathbb{E})$, for some complex Banach space $(\mathbb{E},\left\|\cdot\right\|_{\mathbb{E}})$ and any neighborhood of the origin $D$, by applying the definition of $\partial_{m_e,z}$ to the Taylor expansion of $f$ at the origin. Moreover, one defines the linear operator $\partial_{m_e,z}^{-1}:\mathbb{E}[[z]]\to\mathbb{E}[[z]]$ as the inverse operator of $\partial_{m_e,z}$, i.e. $\partial_{m_e,z}^{-1}(z^j)=\frac{m_e(j)}{m_e(j+1)}z^{j+1}$ for every $j\ge0$.
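As a quick sanity check of the definition above on truncated power series, the following Python sketch (the helper names are ours, purely illustrative) acts on coefficient lists, verifies that for $m_e(p)=p!$ the operator reduces to ordinary differentiation, and that $\partial_{m_e,z}^{-1}$ is a right inverse:

```python
from fractions import Fraction
from math import factorial

def moment_derivative(coeffs, m):
    # If f = sum_p a_p z^p, then by the definition above the moment
    # derivative has coefficient a_{p+1} * m(p+1)/m(p) at z^p.
    return [coeffs[p + 1] * Fraction(m(p + 1), m(p))
            for p in range(len(coeffs) - 1)]

def moment_antiderivative(coeffs, m):
    # Right inverse of the moment derivative: z^j -> (m(j)/m(j+1)) z^{j+1}.
    return [Fraction(0)] + [coeffs[j] * Fraction(m(j), m(j + 1))
                            for j in range(len(coeffs))]

# For m(p) = p! the moment derivative is the ordinary derivative d/dz.
f = [Fraction(c) for c in (5, 1, 4, 1, 3)]      # 5 + z + 4z^2 + z^3 + 3z^4
assert moment_derivative(f, factorial) == \
       [Fraction(c) for c in (1, 8, 3, 12)]     # 1 + 8z + 3z^2 + 12z^3

# The derivative undoes the antiderivative for any moment sequence.
m2 = lambda p: factorial(2 * p)
assert moment_derivative(moment_antiderivative(f, m2), m2) == f
```

Working with exact rationals avoids floating-point noise in these coefficientwise identities.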
\begin{lemma}\label{lema268}
Let $m_1=(m_1(p))_{p\ge0},\,m_2=(m_2(p))_{p\ge0}$ be two sequences of moments. The following statements hold:
\begin{itemize}
\item The sequence of products $m_1m_2:=(m_1(p)m_2(p))_{p\ge0}$ is a sequence of moments.
\item $\hat{\mathcal{B}}_{m_1,z}\circ \partial_{m_2,z}\equiv \partial_{m_1m_2,z}\circ\hat{\mathcal{B}}_{m_1,z}$ as operators defined in $\mathbb{E}[[z]]$.
\end{itemize}
\end{lemma}
\begin{proof}
The first part is a direct consequence of Proposition 4.15 in~\cite{jkls}. The second part follows directly from the definition of the formal Borel transform (Definition~\ref{def259}) and of the moment differentiation (Definition~\ref{def260}).
\end{proof}
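The second identity of the lemma can be checked coefficientwise on truncated series; the following Python sketch does so, assuming the normalization $\hat{\mathcal{B}}_{m,z}(z^p)=z^p/m(p)$ of the formal Borel transform (the helper names and moment sequences below are ours, for illustration):

```python
from fractions import Fraction
from math import factorial

def borel(coeffs, m):
    # Formal Borel transform, assuming the normalization B_{m,z}(z^p) = z^p/m(p).
    return [c * Fraction(1, m(p)) for p, c in enumerate(coeffs)]

def mder(coeffs, m):
    # Moment derivative on coefficient lists: a_{p+1} m(p+1)/m(p) at z^p.
    return [coeffs[p + 1] * Fraction(m(p + 1), m(p))
            for p in range(len(coeffs) - 1)]

m1 = factorial                       # a moment sequence (Gevrey-type, illustrative)
m2 = lambda p: factorial(2 * p)      # another moment sequence (illustrative)
m12 = lambda p: m1(p) * m2(p)        # the product sequence from the lemma

u = [Fraction(k * k + 1) for k in range(8)]
# Check B_{m1,z} o d_{m2,z} == d_{m1 m2,z} o B_{m1,z} coefficientwise:
assert borel(mder(u, m2), m1) == mder(borel(u, m1), m12)
```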
The first statement of the next result extends Proposition 6 in~\cite{michalik17fe} to the framework of strongly regular sequences. Its proof rests heavily on that of Proposition 3 in~\cite{M}. The second statement will be crucial in the sequel when giving a coherent meaning to the moment derivatives of the sum of a formal power series.
\begin{theo}\label{lemma1}
Let $m_e=(m_e(p))_{p\ge0}$ be a sequence of moments, and let $\tilde{\mathbb{M}}$ be a strongly regular sequence. We also fix $d,\,\theta,\,r\in\R$, with $\theta,\,r>0$, and $\varphi\in\mathcal{O}^{\tilde{\mathbb{M}}}(\hat{S}_{d}(\theta;r),\mathbb{E})$. Then there exists $0<\tilde{r}<r$ such that for all $0<\theta_1<\theta$, all $z\in \hat{S}_d(\theta_1;\tilde{r})$ and $n\in\N_0$, the following statements hold:
\begin{enumerate}
\item
\begin{equation}\label{e244}
\partial_{m_e,z}^{n}\varphi(z)=\frac{1}{2\pi i}\oint_{\Gamma_z}\varphi(w)\int_0^{\infty(\tau)}\xi^n E(z\xi)\frac{e(w\xi)}{w\xi}d\xi dw,
\end{equation}
with $\tau=\tau(w)\in (-\arg(w)-\frac{\omega(m_e)\pi}{2},-\arg(w)+\frac{\omega(m_e)\pi}{2})$, which depends on $w$. The integration path $\Gamma_z$ is a deformation of the circle $\{|w|=r_1\}$, for any choice of $0<r_1<r$, and depends on $z$.
\item There exist constants $C_1,C_2,C_3>0$ such that
\begin{equation}\label{e279}
\left\|\partial_{m_e,z}^{n}\varphi(z)\right\|_{\mathbb{E}}\le C_1 C_2^n m_e(n)\exp\left(\tilde{M}(C_3|z|)\right),
\end{equation}
for all $n\in\N_0$ and $z\in \hat{S}_d(\theta_1;\tilde{r})$.
\end{enumerate}
\end{theo}
\begin{proof}
We first give a proof of the first statement. From the Taylor expansion of $\varphi$ at the origin and the definition of the $m_e$-moment derivatives one has
$$\varphi(z)=m_{e}(0)\sum_{p\ge0}\frac{\partial_{m_e,z}^{p}\varphi(0)}{m_e(p)}z^p,$$
for all $z\in D(0,r)$. The application of the Cauchy integral formula for the derivatives yields
$$\partial_{m_e,z}^{p}\varphi(0)=\frac{m_e(p)}{p!m_e(0)}\varphi^{(p)}(0)=\frac{m_e(p)}{2\pi i m_e(0)}\oint_{|w|=r_1}\frac{\varphi(w)}{w^{p+1}}dw,$$
for any $0<r_1<r$.
Let $w\in\C$ with $|w|=r_1$. Following (\ref{e166}) and the change of variables $x=\xi w$ we derive
$$\frac{m_e(p)}{w^{p+1}}=\int_0^{\infty}x^{p-1}\frac{e(x)}{w^{p+1}}dx=\int_0^{\infty(\tau)}\xi^{p}\frac{e(\xi w)}{\xi w}d\xi,$$
where $\tau=-\arg(w)$. We observe from (\ref{e162}) that the previous equality can be extended to any direction of integration $\tau\in\left(-\arg(w)-\frac{\omega(m_e)\pi}{2},-\arg(w)+\frac{\omega(m_e)\pi}{2}\right)$. Therefore, regarding the definition of the kernel function $E(z)$ in (\ref{e167}), one has
\begin{multline}\varphi(z)=m_e(0)\sum_{p\ge0}\frac{\partial_{m_e,z}^{p}\varphi(0)}{m_e(p)}z^{p}=\frac{1}{2\pi i}\oint_{|w|=r_1}\varphi(w)\int_0^{\infty(\tau)}\frac{e(\xi w)}{\xi w}\sum_{p\ge0}\frac{\xi^pz^{p}}{m_e(p)}d\xi dw \\
=\frac{1}{2\pi i}\oint_{|w|=r_1}\varphi(w)\int_0^{\infty(\tau)}E(\xi z)\frac{e(\xi w)}{\xi w}d\xi dw.\label{e309}
\end{multline}
We conclude the first part of the proof, at least at the formal level, by observing that
$$\partial_{m_e,z}^{n}E(\xi z)=\partial_{m_e,z}^{n}\left(\sum_{p\ge0}\frac{(\xi z)^{p}}{m_e(p)}\right)=\sum_{p\ge0}\frac{\xi^{p+n}z^{p}}{m_e(p)}=\xi^{n}E(\xi z),$$
for every $\xi,z\in\C$. It only remains to guarantee that the formal interchange of sum and integrals in (\ref{e309}) can also be made with analytic meaning. We give details about this issue in the second part of the proof.
We proceed to give a proof of the second statement of the result. Let us first consider the integral
$$\int_0^{\infty(\tau)}\xi^nE(z\xi)\frac{e(w\xi)}{w\xi}d\xi,$$
for $z$ belonging to some neighborhood of the origin, $w\in\C$ with $|w|=r_1$ as above. We choose $\tau\in\left(-\arg(w)-\frac{\omega(m_e)\pi}{2},-\arg(w)+\frac{\omega(m_e)\pi}{2}\right)$. We first prove that
\begin{equation}\label{e327}
\left|\int_0^{\infty(\tau)}\xi^nE(z\xi)\frac{e(w\xi)}{w\xi}d\xi\right|\le A_0 B_0^n m_e(n),
\end{equation}
for some $A_0,\,B_0>0$ and all $n\ge0$. This can be done following arguments analogous to those in the proof of Lemma 7.2 in~\cite{sanzproceedings}. Let us consider the parametrization $[0,\infty)\ni s\mapsto se^{i\tau}$. In view of (\ref{e162}) and (\ref{e191}), we have
\begin{multline}
\left|\int_0^{\infty(\tau)}\xi^nE(z\xi)\frac{e(w\xi)}{w\xi}d\xi\right|\le \int_0^{\infty}s^{n}|E(se^{i\tau}z)|\frac{|e(se^{i\tau}w)|}{sr_1}ds\\
\le \frac{c\tilde{c}}{r_1}\int_0^{\infty}s^{n-1}\exp\left(M\left(\frac{s|z|}{\tilde{k}}\right)\right)\exp\left(-M\left(\frac{sr_1}{k}\right)\right)ds,
\end{multline}
for some $c,\,\tilde{c},\,k,\,\tilde{k}>0$. We apply Lemma~\ref{lemaaux1} to arrive at
\begin{equation}\label{e329}
\exp\left(M\left(\frac{s|z|}{\tilde{k}}\right)\right)\exp\left(-M\left(\frac{sr_1}{k}\right)\right)\le \exp\left(M\left(\frac{s|z|}{\tilde{k}}\right)-2M\left(\frac{sr_1}{\rho(2)k}\right)\right).
\end{equation}
We recall that the function $M$ is a monotone increasing function. Therefore, if $|z|\le \tilde{r}:=\frac{r_1 \tilde{k}}{k\rho(2)}$, then (\ref{e329}) is bounded from above by $\exp(-M(sr_1/(\rho(2)k)))$. Let us write
\begin{equation}\label{e352}
\int_{0}^{\infty}s^{n-1}e^{-M\left(\frac{sr_1}{\rho(2)k}\right)}ds= \int_0^1s^{n-1}e^{-M\left(\frac{sr_1}{\rho(2)k}\right)}ds+\int_1^\infty s^{n-1}e^{-M\left(\frac{sr_1}{\rho(2)k}\right)}ds=I_1+I_2.
\end{equation}
The definition of $M$ guarantees upper bounds for $|I_1|$ which do not depend on $n$. We proceed to study $I_2$.
Bearing in mind the definition of $M$, we arrive at
$$\int_1^\infty s^{n-1}e^{-M\left(\frac{sr_1}{\rho(2)k}\right)}ds\le \left(\frac{\rho(2)k}{r_1}\right)^{n+2}m_e(n+2)\int_1^\infty \frac{1}{s^{3}}ds.$$
The application of the $(mg)$ condition, in the form $m_e(n+2)\le A_1^{n+2}m_{e}(2)m_e(n)$, allows us to conclude (\ref{e327}) for $z\in D(0,\tilde{r})$. The estimate (\ref{e279}) is attained by applying (\ref{e327}) to (\ref{e244}). More precisely, we have
\begin{equation}\label{e358}
\left\|\partial_{m_e,z}^{n}\varphi(z)\right\|_{\mathbb{E}}\le \left(\sup_{|w|=r_1}\left\|\varphi(w)\right\|_{\mathbb{E}}\right)A_0B_0^nm_{e}(n),
\end{equation}
which entails (\ref{e279}) for $z\in D(0,\tilde{r})$.
Let $0<\theta_1<\theta$, and $z\in S_d(\theta_1)$ with $|z|\ge \tilde{r}$. We study (\ref{e279}) in such a domain. We deform the integration path $\{|w|=r_1\}$ as follows. Let $\theta_1<\theta_2<\theta$ and let $R=R(z)=\frac{\rho(2)k}{\tilde{k}}|z|$. We write $\Gamma_z=\Gamma(R)=\Gamma_1+\Gamma_2(R)+\Gamma_3(R)+\Gamma_4(R)$, where $\Gamma_1$ is the arc of the circle $\{|w|=r_1\}$ joining the points $r_1e^{i(d+\theta_2/2)}$ and $r_1e^{i(d-\theta_2/2)}$ counterclockwise, $\Gamma_2(R)$ is the segment $[r_1,R]e^{i(d-\theta_2/2)}$, $\Gamma_3(R)$ is the arc of the circle joining the points $Re^{i(d-\theta_2/2)}$ and $Re^{i(d+\theta_2/2)}$ counterclockwise, and $\Gamma_4(R)$ is the segment $[r_1,R]e^{i(d+\theta_2/2)}$. Figure~\ref{fig1} illustrates this deformation path.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figura1.pdf}
\caption{Deformation path}
\label{fig1}
\end{figure}
We first study the case $w\in\Gamma_1$, i.e., $|w|=r_1$. Observe that for every $w\in\Gamma_1$ it is always possible to choose $\tau$ such that
\begin{equation}\label{e379}
\tau\in\left(-\hbox{arg}(w)-\frac{\omega(m_e)\pi}{2},-\hbox{arg}(w)+\frac{\omega(m_e)\pi}{2}\right)\cap \left(-\hbox{arg}(z)+\frac{\omega(m_e)\pi}{2},-\hbox{arg}(z)+2\pi-\frac{\omega(m_e)\pi}{2}\right).
\end{equation}
For one of such directions $\tau$ we have
$$\int_0^{\infty(\tau)}\xi^n E(z\xi)\frac{e(w\xi)}{w\xi}d\xi =\int_0^{\infty}(se^{i\tau})^n E(zse^{i\tau})\frac{e(wse^{i\tau})}{wse^{i\tau}}e^{i\tau}ds.$$
We split the previous integral into two parts. Let $M_E>0$. On the one hand, one has
\begin{multline}
\left|\int_{0}^{M_E/|z|}(se^{i\tau})^n E(zse^{i\tau})\frac{e(wse^{i\tau})}{wse^{i\tau}}e^{i\tau}ds\right|\le \int_{0}^{M_E/|z|}s^n |E(zse^{i\tau})|\frac{|e(wse^{i\tau})|}{r_1s}ds\\
\le \left(\max_{y\in\overline{D}(0,M_E)}|E(y)|\right)\frac{1}{r_1}\int_{0}^{M_E/|z|}s^{n-1} |e(wse^{i\tau})|ds.
\end{multline}
Bearing in mind (\ref{e162}), we have
$$\int_{0}^{M_E/|z|}s^{n-1} |e(wse^{i\tau})|ds\le c\int_{0}^{M_E/|z|}s^{n-1} \exp\left(-M\left(\frac{r_1 s}{k}\right)\right)ds.$$
Analogous estimates as in (\ref{e352}) allow us to arrive at
\begin{equation}\label{e383}
\left|\int_{0}^{M_E/|z|}(se^{i\tau})^n E(zse^{i\tau})\frac{e(wse^{i\tau})}{wse^{i\tau}}e^{i\tau}ds\right|\le C_{1.1}C_{2.1}^{n}m_e(n)\left(\max_{y\in\overline{D}(0,M_E)}|E(y)|\right)\frac{1}{r_1^{n+3}},
\end{equation}
for some $C_{1.1},\,C_{2.1}>0$. Analogously, we estimate the second integral by means of the upper bounds in (\ref{e202}). Indeed, one has
\begin{align*}
\left|\int_{M_E/|z|}^{\infty}(se^{i\tau})^n E(zse^{i\tau})\frac{e(wse^{i\tau})}{wse^{i\tau}}e^{i\tau}ds\right|&\le \int_{M_E/|z|}^{\infty}s^n \frac{\tilde{c}_2}{(|z|s)^{\beta}}\frac{|e(wse^{i\tau})|}{r_1s}ds\\
&\le\frac{\tilde{c}_2}{r_1(M_E)^{\beta}}\int_{M_E/|z|}^{\infty}s^{n-1} |e(wse^{i\tau})|ds,
\end{align*}
for some $\tilde{c}_2,\,\beta>0$. The estimates in (\ref{e352}) can be applied again in order to arrive at
\begin{equation}\label{e384}
\left|\int_{M_E/|z|}^{\infty}(se^{i\tau})^n E(zse^{i\tau})\frac{e(wse^{i\tau})}{wse^{i\tau}}e^{i\tau}ds\right|\le C_{1.2}C_{2.2}^{n}m_e(n)\frac{1}{r_1^{n+3}},
\end{equation}
for some $C_{1.2},\,C_{2.2}>0$. From (\ref{e383}) and (\ref{e384}) one can conclude in the spirit of (\ref{e358}) that
\begin{equation}\label{sol1}
\left\|\frac{1}{2\pi i}\int_{\Gamma_1}\varphi(w)\int_0^{\infty(\tau)}\xi^n E(z\xi)\frac{e(w\xi)}{w\xi}d\xi dw\right\|_{\mathbb{E}}\le D_1 D_2^{n}m_e(n)\exp\left(\tilde{M}(D_3|z|)\right),
\end{equation}
for some $D_1,\,D_2,\,D_3>0$.
We continue with the case $w\in\Gamma_2(R)$. The same choice for $\tau$ in (\ref{e379}) holds. We parametrize $\Gamma_2(R)$ by $[r_1,R]\ni\rho\mapsto \rho e^{i(d-\theta_2/2)}$ to arrive at
$$\left|\int_0^{\infty(\tau)}\xi^n E(z\xi)\frac{e(w\xi)}{w\xi}d\xi\right|\le
\frac{1}{\rho}\int_0^{\infty}s^{n-1}\left|E(zse^{i\tau})\right|\left|e(\rho se^{i\left(\tau+d-\theta_2/2\right)})\right|ds.$$
The same splitting of the path into the segment $[0,M_E/|z|]$ and the ray $[M_E/|z|,\infty)$, together with arguments analogous to those in the first part of the proof, yields
$$\left|\int_{0}^{\infty(\tau)}\xi^n E(z\xi)\frac{e(w\xi)}{w\xi}d\xi\right|\le C_{1.3}C_{2.3}^{n}m_e(n)\frac{1}{\rho^{n+3}},$$
for some $C_{1.3},\,C_{2.3}>0$, and $w=\rho e^{i(d-\theta_2/2)}$ for some $r_1\le \rho\le R$. We derive that
\begin{multline}
\left\|\int_{\Gamma_2(R)}\varphi(w)\int_0^{\infty(\tau)}\xi^nE(z\xi)\frac{e(w \xi)}{w\xi}d\xi dw\right\|_{\mathbb{E}}\le C_{1.3}C_{2.3}^{n}m_e(n)\frac{1}{2\pi}\int_{r_1}^{R}\left\|\varphi(\rho e^{i(d-\theta_2/2)})\right\|_{\mathbb{E}}\frac{1}{\rho^{n+3}}d\rho\\
\le \frac{c_{\varphi}C_{1.3}}{2\pi r_1^3}\left(\frac{C_{2.3}}{r_1}\right)^{n}m_e(n) R\exp\left(\tilde{M}\left(\frac{R}{\tilde{k}_{\varphi}}\right)\right)
\end{multline}
for some $c_{\varphi},\,\tilde{k}_{\varphi}>0$ associated with the growth of $\varphi$ near infinity. A direct consequence of the definition of the function $\tilde{M}$ and of the radius $R=R(|z|)$ yields
$$R\exp\left(\tilde{M}\left(\frac{R}{\tilde{k}_{\varphi}}\right)\right)\le \exp\left(\tilde{M}\left(\frac{cR}{\tilde{k}_{\varphi}}\right)\right)=\exp\left(\tilde{M}\left(\frac{c\rho(2) k|z|}{\tilde{k}\tilde{k}_{\varphi}}\right)\right),$$
which allows us to conclude this part of the proof. We get that
\begin{equation}\label{sol2}
\left\|\frac{1}{2\pi i}\int_{\Gamma_2(R)}\varphi(w)\int_0^{\infty(\tau)}\xi^n E(z\xi)\frac{e(w\xi)}{w\xi}d\xi dw\right\|_{\mathbb{E}}\le D_4 D_5^{n}m_e(n)\exp\left(\tilde{M}(D_6|z|)\right),
\end{equation}
for some $D_4,\,D_5,\,D_6>0$.
The case $w\in\Gamma_4(R)$ can be treated analogously, to arrive at
\begin{equation}\label{sol3}
\left\|\frac{1}{2\pi i}\int_{\Gamma_4(R)}\varphi(w)\int_0^{\infty(\tau)}\xi^n E(z\xi)\frac{e(w\xi)}{w\xi}d\xi dw\right\|_{\mathbb{E}}\le D_7 D_8^{n}m_e(n)\exp\left(\tilde{M}(D_9|z|)\right),
\end{equation}
for some $D_7,\,D_8,\,D_9>0$.
We conclude the proof with the case that $w\in\Gamma_3(R)$. We parametrize $\Gamma_3(R)$ by $[d-\theta_2/2,d+\theta_2/2]\ni t\mapsto Re^{it}$ and choose $w\in\Gamma_3(R)$. Let $\tau\in \left(-\arg(w)-\frac{\omega(m_e)\pi}{2},-\arg(w)+\frac{\omega(m_e)\pi}{2}\right)$. Then, one has
\begin{align*}
\left|\int_0^{\infty(\tau)}\xi^n E(z\xi)\frac{e(w\xi)}{w\xi}d\xi\right|&=\left|\int_0^{\infty}(se^{i\tau})^n E(zse^{i\tau})\frac{e(wse^{i\tau})}{wse^{i\tau}}e^{i\tau}ds\right|\\
&\le \frac{1}{R}\int_0^{\infty}s^{n-1} |E(zse^{i\tau})||e(wse^{i\tau})|ds.
\end{align*}
In view of (\ref{e191}) and (\ref{e162}), together with the application of the same argument as in (\ref{e329}) (for $r_1$ substituted by $R$), the previous expression is bounded from above by
\begin{multline}\frac{c\tilde{c}}{R}\int_0^{\infty}s^{n-1}\exp\left(M\left(\frac{|z|s}{\tilde{k}}\right)\right)\exp\left(-M\left(\frac{Rs}{k}\right)\right) ds\\
\le \frac{c\tilde{c}}{R}\int_0^{\infty}s^{n-1}\exp\left(M\left(\frac{s|z|}{\tilde{k}}\right)-2M\left(\frac{sR}{\rho(2)k}\right)\right)ds.
\end{multline}
The function $M$ is monotone increasing in $[0,\infty)$. We recall that $R=\frac{\rho(2)k}{\tilde{k}}|z|$, so the previous expression is bounded from above by
$$\frac{c\tilde{c}}{R}\int_0^{\infty}s^{n-1}\exp\left(-M\left(\frac{sR}{\rho(2)k}\right)\right)ds.$$
At this point, one can take into account (\ref{e352}), together with $R=\frac{\rho(2)k}{\tilde{k}}|z|\ge \frac{\rho(2)k}{\tilde{k}}\tilde{r}$, and
$$\sup_{|w|=R}\left\|\varphi(w)\right\|_{\mathbb{E}}\le c_{\varphi}\exp\left(\tilde{M}\left(\frac{R}{\tilde{k}_{\varphi}}\right)\right)=c_{\varphi}\exp\left(\tilde{M}\left(\frac{\rho(2)k}{\tilde{k}\tilde{k}_{\varphi}}|z|\right)\right)$$
to get that
\begin{equation}\label{sol4}
\left\|\frac{1}{2\pi i}\int_{\Gamma_3(R)}\varphi(w)\int_0^{\infty(\tau)}\xi^n E(z\xi)\frac{e(w\xi)}{w\xi}d\xi dw\right\|_{\mathbb{E}}\le D_{10} D_{11}^{n}m_e(n)\exp\left(\tilde{M}(D_{12}|z|)\right),
\end{equation}
for some $D_{10},\,D_{11},\,D_{12}>0$.
Statement (\ref{e279}) follows from the application of (\ref{sol1}), (\ref{sol2}), (\ref{sol3}) and (\ref{sol4}). We observe that the identity in (\ref{e244}) acquires analytic meaning after the deformation of the integration path with respect to $w$ and the appropriate choice of $\tau$ described in the proof, for each $z\in \hat{S}_d(\theta_1;\tilde{r})$.
\end{proof}
\begin{corol}\label{coro1}
Let $m_e$ be a sequence of moments, and let $\tilde{\mathbb{M}}$ be a strongly regular sequence admitting a nonzero proximate order. Given $d\in\R$, the space $\mathbb{E}\{z\}_{\tilde{\mathbb{M}},d}$ is closed under $m_e$-differentiation.
\end{corol}
\begin{proof}
Let $\tilde{e},\,\tilde{E}$ be a pair of kernel functions for $\tilde{\mathbb{M}}$-summability, whose existence is guaranteed (see the Remark on page~\pageref{r223}). Let $m_{\tilde{e}}$ be its associated sequence of moments. We write $\overline{m}:=(m_e(p)m_{\tilde{e}}(p))_{p\ge0}$, which is a sequence of moments in view of Lemma~\ref{lema268}.
Let $\hat{f}\in \mathbb{E}\{z\}_{\tilde{\mathbb{M}},d}$. Then, $\hat{\mathcal{B}}_{m_{\tilde{e}},z}\hat{f}$ defines a function on some neighborhood of the origin, say $U$, which can be extended to an infinite sector of bisecting direction $d$, say $S_{d}$. Therefore, $\hat{\mathcal{B}}_{m_{\tilde{e}},z}\hat{f}\in\mathcal{O}^{\mathbb{M}}(\hat{S}_d,\mathbb{E})$. We apply the second part of Theorem~\ref{lemma1} to the sequence of moments $\overline{m}$ and $n=1$ in order to achieve that $\partial_{\overline{m},z}\hat{\mathcal{B}}_{m_{\tilde{e}},z}\hat{f}$, which is an element of $\mathcal{O}(U,\mathbb{E})$, satisfies $\partial_{\overline{m},z}\hat{\mathcal{B}}_{m_{\tilde{e}},z}\hat{f}\in \mathcal{O}^{\mathbb{M}}(\hat{S}_d,\mathbb{E})$. Lemma~\ref{lema268} yields that
$$\partial_{\overline{m},z}\hat{\mathcal{B}}_{m_{\tilde{e}},z}\hat{f}\equiv \hat{\mathcal{B}}_{m_{\tilde{e}},z}\partial_{m_{e},z}\hat{f}.$$
We conclude that $\partial_{m_{e},z}\hat{f}$ defines a holomorphic function on some neighborhood of the origin, and admits an analytic extension to an infinite sector of bisecting direction $d$, with adequate growth at infinity. This entails that $\partial_{m_{e},z}\hat{f}\in\mathbb{E}\{z\}_{\tilde{\mathbb{M}},d}$.
\end{proof}
As a consequence of Corollary~\ref{coro1} and the alternative characterization of summable formal power series stated in Theorem~\ref{teo1}, the following definition makes sense.
\begin{defin}\label{def487}
Let $\tilde{\mathbb{M}}$ be a strongly regular sequence admitting a nonzero proximate order. Assume that $\hat{u}\in\mathbb{E}\{z\}_{\tilde{\mathbb{M}},d}$, for some $d\in\R$. Let $m_e$ be a sequence of moments. Then, we define the operator of $m_e$-moment differentiation of $\mathcal{S}_{\tilde{\mathbb{M}},d}(\hat{u})$ by
$$\partial_{m_e,z}(\mathcal{S}_{\tilde{\mathbb{M}},d}(\hat{u})):=\mathcal{S}_{\tilde{\mathbb{M}},d}(\partial_{m_e,z}(\hat{u})).$$
\end{defin}
The previous definition allows us to determine upper bounds for the moment derivatives of the sum of a formal power series, in the same way as in Theorem~\ref{lemma1}.
\begin{prop}\label{prop497}
Let $m_{e}=(m_e(p))_{p\ge 0}$ be a sequence of moments. Let $\tilde{\mathbb{M}}$ be a strongly regular sequence admitting a nonzero proximate order and $d\in\R$. We choose $\hat{u}\in\mathbb{E}\{z\}_{\tilde{\mathbb{M}},d}$ and write $u=\mathcal{S}_{\tilde{\mathbb{M}},d}(\hat{u})\in\mathcal{O}(G,\mathbb{E})$ for some sectorial region $G=G_d(\theta)$ with $\theta>\pi\omega(\tilde{\mathbb{M}})$. Then for every $G'\prec G$ there exist $C_4,\,C_5>0$ such that
\begin{equation}\label{e496}
\left\|\partial_{m_e,z}^{n}u(z)\right\|_{\mathbb{E}}\le C_4C_5^nm_e(n)\tilde{M}_n,
\end{equation}
for all $n\in\N_0$ and $z\in G'$.
\end{prop}
\begin{proof}
In view of Theorem~\ref{teo1} one can write $u=\mathcal{S}_{\tilde{\mathbb{M}},d}(\hat{u})=T_{\tilde{e},\tau}(\hat{\mathcal{B}}_{m_{\tilde{e}},z}(\hat{u}))$, for some direction $\tau$ close to $d$, with $\tilde{e}$ being any kernel for $\tilde{\mathbb{M}}$-summability and $m_{\tilde{e}}$ its associated sequence of moments. Taking into account Definition~\ref{def487} and Lemma~\ref{lema268}, one has that
$$\partial_{m_e,z}^{n}u(z)=T_{\tilde{e},\tau}(\hat{\mathcal{B}}_{m_{\tilde{e}},z}(\partial_{m_e,z}^{n}\hat{u}))(z)=T_{\tilde{e},\tau}(\partial_{m_{\tilde{e}}m_e,z}^{n}(\hat{\mathcal{B}}_{m_{\tilde{e}},z}\hat{u}))(z),$$
for all $n\in\N_0$ and $z\in G_d(\theta)$.
We observe that $\hat{\mathcal{B}}_{m_{\tilde{e}},z}\hat{u}\in\mathcal{O}^{\tilde{\mathbb{M}}}(\hat{S}_d(\delta;r),\mathbb{E})$ for some $\delta>0$ and $r>0$. Therefore, one may apply Theorem~\ref{lemma1} to the sequence of moments $m_em_{\tilde{e}}$ (see Lemma~\ref{lema268}) to arrive at
\begin{equation}\label{e507}
\left\|\partial_{m_em_{\tilde{e}},z}^n(\hat{\mathcal{B}}_{m_{\tilde{e}},z}\hat{u}(z))\right\|_{\mathbb{E}}\le C_1C_2^nm_e(n)m_{\tilde{e}}(n)\exp\left(\tilde{M}(C_3|z|)\right),
\end{equation}
for some $C_1,\,C_2,\,C_3>0$ and $z\in\hat{S}_d(\delta_1;\tilde{r})$ for $0<\delta_1<\delta$, $0<\tilde{r}<r$. Let $f(z):=\partial_{m_em_{\tilde{e}},z}^n(\hat{\mathcal{B}}_{m_{\tilde{e}},z}\hat{u}(z))$. Then, there exists $C_6>0$ such that
$$\left\|\int_0^{\infty(\tau)}\tilde{e}(u/z)f(u)\frac{du}{u}\right\|_{\mathbb{E}}\le\left\|\int_0^{t_0}\tilde{e}(u/z)f(u)\frac{du}{u}\right\|_{\mathbb{E}}+\left\|\int_{t_0}^{\infty(\tau)}\tilde{e}(u/z)f(u)\frac{du}{u}\right\|_{\mathbb{E}}=I_3+I_4\le C_6,$$
for some $\tau\in\hbox{arg}(S_d(\delta_1))$. Usual estimates for the $\tilde{e}$-Laplace transform lead to the conclusion: the integrability of $\tilde{e}$ at the origin (see~(\ref{e192})) provides upper bounds for $I_3$, while $I_4$ is bounded from above in view of (\ref{e162}) and the very definition of the function $M$. More precisely, this holds for $|\hbox{arg}(z)-\tau|<\omega(\tilde{\mathbb{M}})\pi/2$ and small enough $|z|$. One may vary $\tau$ among the arguments of $S_d(\delta_1)$ following the usual procedure.
Finally, the bounds in (\ref{e496}) are attained taking into account that $\tilde{\mathbb{M}}$ and $m_{\tilde{e}}$ are equivalent sequences, in view of the remark after Definition~\ref{defi271}.
\end{proof}
\section{Application: Summability of formal solutions of moment partial differential equations}\label{secapp}
This section is devoted to the study of summability properties of the formal solutions of a certain family of moment partial differential equations.
Let $\mathbb{M}$ be a strongly regular sequence which admits a nonzero proximate order. We assume that $\mathbb{M}_1$ and $\mathbb{M}_2$ are strongly regular sequences which admit a nonzero proximate order. Let $e_1$ (resp. $e_2$) be a kernel function for $\mathbb{M}_1$-summability (resp. $\mathbb{M}_2$-summability), and write $m_1$ (resp. $m_2$) for its associated sequence of moments. Additionally, we assume that $m_1$ and $m_2$ are $\mathbb{M}$-sequences of orders $s_1>0$ and $s_2>0$, respectively.
Let $1\le k<p$ be integers such that $s_2p>s_1k$. Let $r>0$. We denote $D:=D(0,r)$ and assume that $a(z)\in\mathcal{O}(\overline{D})$ and $a(z)^{-1}\in\mathcal{O}(\overline{D})$. We also fix $\hat{f}\in\C[[t,z]]$ and $\varphi_{j}(z)\in\mathcal{O}(\overline{D})$ for $j=0,\ldots,k-1$.
We consider the following Cauchy problem.
\begin{equation}\label{epral}
\left\{ \begin{array}{lcc}
\left(\partial_{m_1,t}^{k}-a(z)\partial_{m_2,z}^{p}\right)u(t,z)=\hat{f}(t,z)&\\
\partial_{m_1,t}^{j}u(0,z)=\varphi_j(z),&\quad j=0,\ldots,k-1.
\end{array}
\right.
\end{equation}
\begin{lemma}
Under the previous assumptions there exists a unique formal solution $\hat{u}(t,z)\in\C[[t,z]]$ of the Cauchy problem (\ref{epral}). Moreover, in the case that $\hat{f}(t,z)\in\mathcal{O}(\overline{D})[[t]]$, we have $\hat{u}(t,z)\in\mathcal{O}(\overline{D})[[t]]$.
\end{lemma}
\begin{proof}
Let $\hat{u}(t,z)\in\C[[t,z]]$. We write $\hat{u}(t,z)=\sum_{n\ge0}\frac{u_{n,\star}(z)}{m_1(n)}t^{n}$, for some $u_{n,\star}(z)\in\C[[z]]$. The initial conditions of (\ref{epral}) force $u_{j,\star}(z)=m_1(0)\varphi_j(z)$ for $j=0,\ldots,k-1$ in order that $\hat{u}(t,z)$ be a formal solution of (\ref{epral}). We plug the formal power series $\hat{u}(t,z)$ into the problem to arrive at the recurrence formula
\begin{equation}\label{e547}
u_{n+k,\star}(z)=a(z)\partial_{m_2,z}^{p}u_{n,\star}(z)+\hat{f}_{n,\star}(z),
\end{equation}
where we write $\hat{f}(t,z)=\sum_{n\ge0}\frac{\hat{f}_{n,\star}(z)}{m_1(n)}t^{n}$. Therefore, the elements $u_{n,\star}(z)$ for $n\ge k$ are uniquely determined by the initial data and by $\hat{f}$. Furthermore, under the convergence assumption on $\hat{f}$, every coefficient $u_{n,\star}(z)$ obtained from (\ref{e547}) belongs to $\mathcal{O}(\overline{D})$.
\end{proof}
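The recurrence (\ref{e547}) is effective, and can be run on truncated coefficient lists. The following Python sketch specializes it (all choices ours, purely for illustration) to $k=1$, $p=2$, $m_1=m_2=(p!)_{p\ge0}$, $a\equiv1$, $\hat{f}\equiv0$ and $\varphi_0(z)=\sum_{j\ge0}z^j$, where it reproduces the formal heat-equation solution with $u_{n,\star}(0)=(2n)!$:

```python
from fractions import Fraction
from math import factorial

def mder(coeffs, m):
    # moment derivative on coefficient lists (each application drops one term)
    return [coeffs[j + 1] * Fraction(m(j + 1), m(j))
            for j in range(len(coeffs) - 1)]

def formal_solution(phi, m2, k, p, n_terms):
    # Recurrence u_{n+k,*} = a(z) d_{m2,z}^p u_{n,*} + f_{n,*}, specialized
    # (for illustration) to a(z) = 1 and f = 0.
    u = [list(c) for c in phi]       # truncated u_{0,*}, ..., u_{k-1,*}
    while len(u) < n_terms:
        d = u[len(u) - k]
        for _ in range(p):
            d = mder(d, m2)
        u.append(d)
    return u

# Toy case k = 1, p = 2, m1 = m2 = (p!): the classical heat equation with
# phi_0(z) = sum_j z^j, whose formal solution satisfies u_{n,*}(0) = (2n)!.
u = formal_solution([[Fraction(1)] * 9], factorial, 1, 2, 4)
assert [c[0] for c in u] == [factorial(2 * n) for n in range(4)]
```

Note that each application of $\partial_{m_2,z}^{p}$ shortens the truncation by $p$ terms, so the truncation order bounds the number of computable coefficients.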
From now on, the pair $(\mathbb{E},\left\|\cdot\right\|_{\mathbb{E}})$ denotes the Banach space of holomorphic functions on $\overline{D}$, where $\left\|\cdot\right\|_{\mathbb{E}}$ stands for the norm
$$\left\|\sum_{n=0}^{\infty}a_nz^n\right\|_r:=\sum_{n=0}^{\infty}|a_n|r^n.
$$
\begin{lemma}
\label{le:5}
Let $m=(m(p))_{p\ge0}$ be a (lc) sequence and $f\in\mathcal{O}(\overline{D})$.
If there exist $C>0$ and $n\in\mathbb{N}_0$ such that
\begin{equation}
\label{eq:f}
\left\|f(z)\right\|_{\tilde{r}}\leq \frac{|z|^n}{m(n)}C\quad\textrm{for every}\quad z\in\overline{D},\quad \tilde{r}=|z|
\end{equation}
then
\begin{equation*}
\left\|\partial_{m,z}^{-k}f(z)\right\|_{\tilde{r}}\leq \frac{|z|^{n+k}}{m(n+k)}C\quad\textrm{for every}\quad k\in\mathbb{N}_0
\quad\textrm{and}\quad z\in\overline{D},\quad \tilde{r}=|z|.
\end{equation*}
\end{lemma}
\begin{proof}
By (\ref{eq:f}) we may write
$f(z)=\sum_{j=n}^{\infty}f_jz^j\in\mathcal{O}(\overline{D})$. We define the auxiliary function $g(z)\in\mathcal{O}(\overline{D})$ as
$$
g(z):=\sum_{j=0}^{\infty}|f_{j+n}|m(n)z^j.
$$
By (\ref{eq:f}) we get $\left\|g(z)\right\|_{\tilde{r}}\le C$
and $f(z)\ll\frac{z^n}{m(n)}g(z)$. Since $m$ is a (lc) sequence
we conclude that
$$
\partial_{m,z}^{-k}f(z)=\frac{z^{n+k}}{m(n+k)}\sum_{j=0}^{\infty}\frac{m(n+j)m(n+k)}{m(j+n+k)}f_{j+n}z^{j}
\ll \frac{z^{n+k}}{m(n+k)}\sum_{j=0}^{\infty}|f_{j+n}|m(n)z^j=\frac{z^{n+k}g(z)}{m(n+k)}.
$$
Hence
$$
\left\|\partial_{m,z}^{-k}f(z)\right\|_{\tilde{r}}\le \frac{|z|^{n+k}}{m(n+k)}\left\|g(z)\right\|_{\tilde{r}}
\le\frac{|z|^{n+k}}{m(n+k)}C
$$
for every $k\in\mathbb{N}_0$ and $z\in\overline{D}$, $\tilde{r}=|z|$.
\end{proof}
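The decisive step above is the log-convexity inequality $m(n+j)\,m(n+k)\le m(n)\,m(n+j+k)$, which makes each coefficient factor $\frac{m(n+j)m(n+k)}{m(j+n+k)}$ at most $m(n)$. A quick numeric check, taking $m(p)=p!$ as an illustrative (lc) sequence:

```python
from math import factorial

# The coefficientwise domination in the lemma rests on the (lc) inequality
# m(n+j) m(n+k) <= m(n) m(n+j+k); checked here for m(p) = p! (illustrative).
m = factorial
for n in range(8):
    for j in range(8):
        for k in range(8):
            assert m(n + j) * m(n + k) <= m(n) * m(n + j + k)
```

For the factorial the inequality amounts to $\binom{n+j+k}{k}\ge\binom{n+k}{k}$, which explains why it holds with equality only when $j=0$ or $k=0$.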
\begin{theo}\label{teopral}
Under the assumptions made on the elements involved in the Cauchy problem (\ref{epral}) let $\hat{u}(t,z)$ be the formal solution of (\ref{epral}) and $d\in\R$. We define the strongly regular sequence $\overline{\mathbb{M}}=(M_n^{\frac{s_2p}{k}-s_1})_{n\ge0}$. The following statements are equivalent:
\begin{enumerate}
\item[(i)] $\hat{u}(t,z)$ is $\overline{\mathbb{M}}$-summable along direction $d$ as a formal power series in $\mathbb{E}[[t]]$.
\item[(ii)] $\hat{f}(t,z)\in\mathbb{E}[[t]]$ and $\partial_{m_2,z}^{j}\hat{u}(t,0)\in\C[[t]]$ for $j=0,\ldots,p-1$ are $\overline{\mathbb{M}}$-summable along direction $d$.
\end{enumerate}
If one of the previous equivalent statements holds then the sum of $\hat{u}(t,z)$ is an actual solution of (\ref{epral}).
\end{theo}
\begin{proof}
(i)$\Rightarrow$ (ii). Equation (\ref{e547}) entails that $\hat{f}(t,z)=\sum_{n\ge0}\frac{\hat{f}_{n,\star}(z)}{m_1(n)}t^{n}\in\mathbb{E}[[t]]$ whenever $\hat{u}(t,z)\in\mathbb{E}[[t]]$. In addition to this, the space $\mathbb{E}\{t\}_{\overline{\mathbb{M}},d}$ is a differential algebra, with $\mathcal{S}_{\overline{\mathbb{M}},d}$ respecting the operations of addition, product and derivation (see Proposition 6.20 (i) in~\cite{sanzproceedings}), and it is also stable under the action of the operator $\partial_{m_2,z}$ (see Corollary~\ref{coro1}). Regarding equation (\ref{epral}), we conclude that $\hat{f}(t,z)\in\mathbb{E}\{t\}_{\overline{\mathbb{M}},d}$.
Let $0\le j\le p-1$. The same argument as before yields that the formal power series $\partial_{m_2,z}^{j}\hat{u}(t,z)\in\mathbb{E}\{t\}_{\overline{\mathbb{M}},d}$. A direct application of the definition of summable formal power series guarantees summability of its evaluation at $z=0$ along direction $d$.
\vspace{0.3cm}
(ii)$\Rightarrow$ (i). Let $\hat{\psi}_0(t):=\hat{u}(t,0)$, and for all $1\le j\le p-1$ let $\hat{\psi}_{j}(t):=\frac{m_2(0)}{m_2(j)}\partial_{m_2,z}^{j}\hat{u}(t,0)$. We also write $\hat{\omega}(t,z):=\partial_{m_2,z}^{p}\hat{u}(t,z)$. We observe that the formal power series $\hat{\omega}(t,z)$ satisfies the equation
$$\left(1-\frac{1}{a(z)}\partial_{m_1,t}^{k}\partial_{m_2,z}^{-p}\right)\hat{\omega}(t,z)=\hat{g}(t,z),$$
with
$$\hat{g}(t,z):=\frac{1}{a(z)}(\partial_{m_1,t}^k\hat{\psi}_0(t)+z\partial_{m_1,t}^k\hat{\psi}_1(t)+\ldots+z^{p-1}\partial_{m_1,t}^k\hat{\psi}_{p-1}(t)-\hat{f}(t,z)).$$
We write $\hat{\omega}(t,z)$ in the form
$$\hat{\omega}(t,z)=\sum_{q\ge0}\hat{\omega}_q(t,z),$$
where
$$\hat{\omega}_0(t,z):=\hat{g}(t,z),\quad \hbox{ and }\quad \hat{\omega}_q(t,z):=\frac{1}{a(z)}\partial_{m_1,t}^{k}\partial_{m_2,z}^{-p}\hat{\omega}_{q-1}(t,z)\hbox{ for all }q\ge1.$$
Observe that the hypotheses in (ii) together with the properties of differential algebra of $\mathbb{E}\{t\}_{\overline{\mathbb{M}},d}$ guarantee that $\hat{\omega}_0(t,z)\in\mathbb{E}[[t]]$ is $\overline{\mathbb{M}}$-summable in direction $d$. Let $\omega_0(t,z)\in\mathcal{O}(G\times \overline{D})$ denote its sum, where $G$ stands for a sectorial region of opening larger than $\pi\omega(\overline{\mathbb{M}})$ bisected by direction $d$. By Proposition~\ref{prop497}, for all $G'\prec G$, there exist $C_{4},\,C_5>0$ such that
$$\left\|\partial_{m_1,t}^n\omega_0(t,z)\right\|_{r}\le C_4C_5^{n}m_{1}(n)M_n^{\frac{s_2p}{k}-s_1}\le \tilde{C}_1\tilde{C}_2^{n}M_{n}^{\frac{s_2p}{k}},$$
for some $\tilde{C}_1,\,\tilde{C}_2>0$, all $n\in\N_0$ and $t\in G'$. An induction argument allows us to state that for every $q\ge0$ the formal power series $\hat{\omega}_{q}(t,z)\in\mathbb{E}[[t]]$ is $\overline{\mathbb{M}}$-summable in direction $d$, with
$$\left\|\partial_{m_1,t}^n\omega_q(t,z)\right\|_{\tilde{r}}\le \tilde{C}_1C^{q}\tilde{C}_2^{qk+n}M_{qk+n}^{\frac{s_2p}{k}}\frac{|z|^{pq}}{m_2(pq)},$$
for $t\in G'\prec G$, $z\in\overline{D}$ with $\tilde{r}=|z|$ and $C=\left\|\frac{1}{a(z)}\right\|_r$.
Indeed, by Lemma \ref{le:5} and by the inductive hypothesis we get
\begin{multline*}
\left\|\partial_{m_1,t}^n\omega_{q+1}(t,z)\right\|_{\tilde{r}}=\left\|\frac{1}{a(z)}\partial_{m_2,z}^{-p}\partial_{m_1,t}^{k+n}\omega_{q}(t,z)\right\|_{\tilde{r}}\le C\frac{|z|^{pq+p}}{m_2(pq+p)}\tilde{C}_1C^{q}\tilde{C}_2^{qk+n+k}M_{qk+n+k}^{\frac{s_2p}{k}}
\end{multline*}
for $t\in G'\prec G$, $z\in\overline{D}$ and $\tilde{r}=|z|$.
We have the following upper bound:
$$\sum_{q\ge0}\left\|\partial_{m_1,t}^n\omega_q(t,z)\right\|_{\tilde{r}}\le
\tilde{C}_1\tilde{C}_2^n\sum_{q\ge0}(C\tilde{C}_2^k|z|^p)^{q}M_{qk+n}^{\frac{s_2p}{k}}\frac{1}{m_2(pq)}.$$
The $(mg)$ condition and the fact that $m_2$ is an $\mathbb{M}$-sequence of order $s_2$ (see also Lemma 8 in \cite{LMS}), together with (\ref{e140}), yield
\begin{multline}
M_{qk+n}^{\frac{s_2p}{k}}\frac{1}{m_2(pq)}\le (A_{1}^{qk+n}M_{qk}M_n)^{\frac{s_2p}{k}}\frac{1}{A_3^{pq}M_{pq}^{s_2}}=\frac{A_1^{\frac{s_2p(qk+n)}{k}}}{A_{3}^{pq}}M_{n}^{\frac{s_2p}{k}}\frac{M_{qk}^{\frac{s_2p}{k}}}{M_{pq}^{s_2}}\\
\le \frac{A_1^{\frac{s_2p(qk+n)}{k}}}{A_{3}^{pq}}M_{n}^{\frac{s_2p}{k}}\frac{A_1^{qps_2(k+1)k/2}M_{qp}^{s_2}}{M_{qp}^{s_2}}=\frac{A_1^{\frac{s_2p(qk+n)}{k}}A_1^{qps_2(k+1)k/2}}{A_{3}^{pq}}M_{n}^{\frac{s_2p}{k}},
\end{multline}
for some $A_1,\,A_3>0$.
We finally have
$$\sum_{q\ge0}\left\|\partial_{m_1,t}^n\omega_q(t,z)\right\|_{\tilde{r}}\le
\tilde{C}_1\tilde{C}_4^nM_{n}^{\frac{s_2p}{k}}\sum_{q\ge0}(A_{3}^{-p}A_1^{ps_2+ps_2(k+1)k/2}C\tilde{C}_2^k|z|^p)^{q}.$$
The previous series is convergent for $|z|<\frac{A_{3}}{A_1^{s_2+s_2(k+1)k/2}}\left(\frac{1}{C\tilde{C}_2^k}\right)^{1/p}=:r'$. Therefore, one has that
$$\omega(t,z):=\sum_{q\ge0}\omega_q(t,z)$$
defines an analytic function on $G\times D(0,r')$. Shrinking $r$ if necessary so that $r\le r'$, we arrive at
\begin{equation}\label{e635}
\sum_{q\ge0}\left\|\partial_{m_1,t}^n\omega_q(t,z)\right\|_{\mathbb{E}}\le
\tilde{C}_3\tilde{C}_4^nM_{n}^{\frac{s_2p}{k}},
\end{equation}
for some $\tilde{C}_3>0$, which is valid for all $t\in G'$.
We show that $\omega(t,z)$ is the $\overline{\mathbb{M}}$-sum of $\hat{\omega}(t,z)=\sum_{q\ge0}\hat{\omega}_q(t,z)\in\mathbb{E}[[t]]$ along direction $d$.
Let $e$ be a kernel function for $\overline{\mathbb{M}}$-summability. Then, for all $q\in\N_0$ it holds that $\omega_q(t,z)=T_{e,d}\hat{\mathcal{B}}_{m_e,t}\hat{\omega}_q(t,z)$ and $\omega(t,z)=\sum_{q\ge0}T_{e,d}\hat{\mathcal{B}}_{m_e,t}\hat{\omega}_q(t,z)$.
By (\ref{e635}) we get that $T^{-}_{e,d}\omega(t,z)\in\mathcal{O}(D'\times D)$ for some disc at the origin $D'$. Proposition~\ref{prop316} can be applied to arrive at $T^{-}_{e,d}\omega(t,z)\in\mathcal{O}^{\overline{\mathbb{M}}}(S_d,\mathbb{E})$, for some infinite sector $S_d$ with bisecting direction $d$. Hence, $T^{-}_{e,d}\omega(t,z)\in\mathcal{O}^{\overline{\mathbb{M}}}(\hat{S}_d,\mathbb{E})$.
Finally, convergence of the series and Theorem~\ref{teo324} allow us to write
\begin{multline}
T^{-}_{e,d}\omega(t,z)=T^{-}_{e,d}\sum_{q\ge0}T_{e,d}\hat{\mathcal{B}}_{m_e,t}(\hat{\omega}_q(t,z))=T^{-}_{e,d}T_{e,d}\sum_{q\ge0}\hat{\mathcal{B}}_{m_e,t}(\hat{\omega}_q(t,z))\\
=\sum_{q\ge0}\hat{\mathcal{B}}_{m_e,t}(\hat{\omega}_q(t,z))=\hat{\mathcal{B}}_{m_e,t}\left(\sum_{q\ge0}\hat{\omega}_q(t,z)\right)=\hat{\mathcal{B}}_{m_e,t}\hat{\omega}(t,z).
\end{multline}
Therefore, $\hat{\mathcal{B}}_{m_e,t}\hat{\omega}(t,z)\in\mathcal{O}^{\overline{\mathbb{M}}}(\hat{S}_d\times D)$ and the formal power series $\hat{\omega}(t,z)$ is $\overline{\mathbb{M}}$-summable along direction $d$ (as an element in $\mathbb{E}[[t]]$), with sum given by $\omega(t,z)$.
Assume that one of the equivalent statements holds. Let $f(t,z)$ (resp. $u(t,z)$) be the sum of $\hat{f}(t,z)\in\mathbb{E}[[t]]$ (resp. $\hat{u}(t,z)\in\mathbb{E}[[t]]$) in direction $d$. Then the function $t\mapsto(\partial_{m_1,t}^{k}-a(z)\partial_{m_2,z}^{p})u(t,z)-f(t,z)$ with values in $\mathbb{E}$ admits a null $\overline{\mathbb{M}}$-asymptotic expansion in a sector of opening larger than $\omega(\overline{\mathbb{M}})\pi$. Watson's lemma (see Corollary 4.12 in~\cite{sanz}) then shows that it is the null function, which entails that $u(t,z)$ is an analytic solution of (\ref{epral}) satisfying the Cauchy data.
\end{proof}
Estimates analogous to those in the proof of Theorem~\ref{teopral} yield the next result.
\begin{corol}\label{corofinal}
Assume that $s_1k\ge s_2p$. Under the assumptions made on the elements of the Cauchy problem (\ref{epral}), let $\hat{u}(t,z)$ be the formal solution of (\ref{epral}) and let $d\in\R$. The following statements are equivalent:
\begin{enumerate}
\item[(i)] $\hat{u}(t,z)$ is convergent in a neighborhood of the origin.
\item[(ii)] $\hat{f}(t,z)$ and $\partial_{m_2,z}^{j}\hat{u}(t,0)$ for $j=0,\ldots,p-1$ are convergent in a neighborhood of the origin.
\end{enumerate}
\end{corol}
\textbf{Remark:} Theorem~\ref{teopral} is compatible with the results obtained in~\cite{LMS}. Indeed, equation (\ref{epral}) falls into the case $\Gamma=\{(0,p)\}$ of Section 5 in~\cite{LMS}, where the associated Newton polygon has one positive slope $k_1$ if and only if $s_2p>s_1k$, and no positive slope otherwise. More precisely,
$$\frac{1}{k_1}=\max\left\{0,\frac{s_2p}{k}-s_1\right\}.$$
Theorem 1 in~\cite{LMS} states that the formal solution $\hat{u}(t,z)=\sum_{n\ge0}u_n(z)t^{n}$ of the equation satisfies the following: for some $0<r'<r$ there exist $C,\,H>0$ such that
$$\sup_{z\in D(0,r')}|u_n(z)|\le CH^n(M_n)^{1/k_1},\quad n\in\N_0.$$
The result is also consistent with Theorem 2 in~\cite{LMS}.
\textbf{Remark:} Theorem~\ref{teopral} is also consistent with the results obtained in~\cite{remy2016} in the classical Gevrey setting; see Theorems 2 and 3 in~\cite{remy2016}.
TITLE: Basic buoyancy question: Man in a boat with a stone
QUESTION [33 upvotes]: This comes from a brain teaser but I'm not sure I can solve it:
You are in a rowing boat on a lake. A large heavy rock is also in the boat. You heave the rock overboard. It sinks to the bottom of the lake. What happens to the water level in the lake? Does it rise, fall or stay the same?
I'd say that the water level drops: when you drop the stone into the lake, the level rises by the stone's volume, BUT when you take the stone off the boat, the level first falls by the volume of water that weighs as much as the stone.
Is that correct?
REPLY [7 votes]: When the rock is in the boat, the boat must displace extra water weighing as much as the rock, which raises the water level. When the rock lies at the bottom of the lake, it displaces only its own volume of water. Since the rock is denser than water, the volume of water that weighs as much as the rock is larger than the rock's own volume, so the water level falls.
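The reasoning above can be checked with a toy calculation. All numbers below (rock mass and density, lake area) are made up for illustration; only the two displacement rules come from the answer.

```python
# Toy check of the boat-and-rock puzzle (illustrative numbers only).
RHO_WATER = 1000.0   # kg/m^3
LAKE_AREA = 10000.0  # m^2, assumed constant with depth

rock_mass = 50.0       # kg
rock_density = 2500.0  # kg/m^3 (denser than water, so it sinks)

# Rock in the boat: buoyancy supports it, so the displaced water
# must weigh as much as the rock -> displaced volume = mass / rho_water.
displaced_in_boat = rock_mass / RHO_WATER          # 0.05 m^3

# Rock on the bottom: it displaces only its own volume.
displaced_on_bottom = rock_mass / rock_density     # 0.02 m^3

# Change in lake level when the rock goes overboard.
level_change = (displaced_on_bottom - displaced_in_boat) / LAKE_AREA
print(level_change)  # negative -> the level falls
```

Because the rock's density exceeds that of water, `displaced_on_bottom` is always smaller than `displaced_in_boat`, so the level change is negative regardless of the particular numbers.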
TITLE: Zeros of a continuously differentiable function
QUESTION [0 upvotes]: Let $f: D\rightarrow \mathbb{R}$ be a continuously differentiable function defined on a domain $D$. Is it true that if $x$ is a non-critical point of $f$, then there is a neighborhood of $x$ which contains no accumulation point of the set of zeros of $f$?
Thanks in advance!
REPLY [1 votes]: If $f(x_0)\ne0$, then there is a neighborhood of $x_0$ with no zeros of $f$. Without loss of generality, assume $f\colon(-a,a)\to\Bbb R$, $f(0)=0$ and $f'(0)\ne0$. Then
$$
f(x)=f'(0)\,x+h(x)\quad\text{with}\quad \lim_{x\to0}\frac{h(x)}{x}=0.
$$
There exists $\delta>0$ such that
$$
|x|<\delta\implies\Bigl|\frac{h(x)}{x}\Bigr|\le\frac{|f'(0)|}{2}.
$$
Then, if $0<|x|<\delta$,
$$
|f(x)|\ge|f'(0)|\,|x|-|h(x)|\ge|f'(0)|\,|x|-\frac{|f'(0)|}{2}\,|x|\ge\frac{|f'(0)|}{2}\,|x|>0.
$$
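A quick numerical sanity check of the final bound, for the sample choice $f(x)=x+x^2$ (so $f(0)=0$, $f'(0)=1$, $h(x)=x^2$): here $|h(x)/x|=|x|\le 1/2$ whenever $|x|\le\delta=1/2$, so the estimate $|f(x)|\ge|x|/2$ should hold on $0<|x|<1/2$.

```python
# Check |f(x)| >= |f'(0)|/2 * |x| near a noncritical zero,
# for the sample function f(x) = x + x^2 with f'(0) = 1.
def f(x):
    return x + x * x

# Sample points filling 0 < |x| < delta = 1/2.
xs = [k / 1000.0 for k in range(-499, 500) if k != 0]
assert all(abs(f(x)) >= 0.5 * abs(x) for x in xs)
print("bound verified at", len(xs), "points")
```

In particular $f$ has no zero other than $x=0$ on $(-1/2,1/2)$, which is exactly the isolation of zeros claimed in the answer.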
\begin{document}
\title{Torus Knots and the Chern-Simons path integral: a rigorous treatment}
\maketitle
\begin{center} \large
Atle Hahn
\end{center}
\begin{center}
\it \large Group of Mathematical Physics, University of Lisbon \\
Av. Prof. Gama Pinto, 2\\
PT-1649-003 Lisboa, Portugal\\
Email: atle.hahn@gmx.de
\end{center}
\begin{abstract}
In 1993 Rosso and Jones computed for every simple, complex Lie algebra $\cG_{\bC}$
and every colored torus knot in $S^3$
the value of the corresponding $U_q(\cG_{\bC})$-quantum invariant
by using the machinery of quantum groups.
In the present paper we derive a $S^2 \times S^1$-analogue of the Rosso-Jones formula
(for colored torus ribbon knots) directly from a rigorous realization of the corresponding
(gauge fixed) Chern-Simons path integral.
In order to compare the explicit expressions obtained for torus knots in $S^2 \times S^1$
with those for torus knots in $S^3$ one can perform a suitable surgery operation.
By doing so we verify that the original Rosso-Jones formula is indeed recovered
for every $\cG_{\bC}$.
\end{abstract}
\medskip
\section{Introduction}
\label{sec1}
Let $\cG_{\bC}$ be an arbitrary simple complex Lie algebra, let
$q \in \bC \backslash \{0\}$ be either generic or a root of unity of sufficiently high order,
and let $U_q(\cG_{\bC})$ be the corresponding quantum group.
In \cite{RoJo} an explicit formula for the values
of the $U_q(\cG_{\bC})$-quantum invariant of an arbitrary colored torus knot in $S^3$
was found and proven using the representation theory of $U_q(\cG_{\bC})$.\par
In the special case where $q$ is a root of unity
the quantum invariants studied in \cite{RoJo} are normalized
versions of the Reshetikhin-Turaev invariants associated to $M = S^3$ and $U_q(\cG_{\bC})$
(cf. Eq. \eqref{eq_QI_RT} in the Appendix).
It is widely believed that the Reshetikhin-Turaev invariants associated to
a closed oriented 3-manifold $M$ and to $U_q(\cG_{\bC})$
are equivalent to Witten's heuristic path integral expressions based on the Chern-Simons action function
associated to $(M,G,k)$ where
$G$ is the simply connected, compact Lie group corresponding to the compact real form
$\cG$ of $\cG_{\bC}$ and $k \in \bN$ is chosen suitably (cf. Remark \ref{rm_shift_in_level} below).
Accordingly, it is natural to ask whether it is possible to derive the Rosso-Jones formula
(or analogues/generalizations for base manifolds $M$ other than $S^3$)
directly from Witten's path integral expressions. \par
In the present paper I will show how one can do this
for a large class of colored torus (ribbon) knots $L$ in the manifold
$M=S^2 \times S^1$ in a rigorous way.
The approach of the present paper is based on the so-called torus gauge fixing procedure
which was introduced in \cite{BlTh1,BlTh3} for the study of the CS path integral on manifolds of the form
$M=\Sigma \times S^1$. In \cite{Ha3c,Ha4} the basic heuristic formula of \cite{BlTh1}
was generalized to general colored links $L$ in $M$.
The generalized formula in \cite{Ha3c,Ha4} was recently simplified in \cite{Ha7a}, cf. the heuristic equation \eqref{eq2.48_Ha7a} below, which will be the starting point for the rigorous treatment of the present paper.\par
In order to make rigorous sense of the RHS of the aforementioned Eq. \eqref{eq2.48_Ha7a}
we will work within the simplicial setting developed in \cite{Ha7a}.
The simplicial setting not only allows a completely rigorous treatment but also one that is essentially elementary:
apart from some\footnote{In fact, even most of the Lie theoretic results appear only
after the path integral expressions have already been evaluated explicitly (cf. Steps 1--4 in the proof
of Theorem \ref{theorem1}) and we compare the explicit expressions with
those in the Rosso-Jones formula (cf. Step 5 in the proof
of Theorem \ref{theorem1} and Sec. \ref{subsec6.2})}
basic results from general Lie theory only a few quite
simple results on oscillatory Gauss-type integrals on Euclidean vector spaces
will be needed, cf. Sec. \ref{sec4} below.\par
The paper is organized as follows: \par
In Sec. \ref{sec2} we first recall the aforementioned heuristic formula Eq. \eqref{eq2.48_Ha7a}
for the CS path integral in the torus gauge and later
give a ribbon version of Eq. \eqref{eq2.48_Ha7a}, cf. Eq. \eqref{eq2.48} below. \par
In Sec. \ref{sec3} we introduce a (rigorous) simplicial realization $\WLO^{disc}_{rig}(L)$
of the RHS of the heuristic formula Eq. \eqref{eq2.48} in Sec. \ref{sec2} for generic colored ribbon links $L$.
The definition of $\WLO^{disc}_{rig}(L)$ is similar to the one in \cite{Ha7a}
but incorporates some improvements and simplifications. \par
In Sec. \ref{sec4} we recall the relevant results from \cite{Ha7b}
on oscillatory Gauss-type integrals on Euclidean vector spaces
which we will use in Sec. \ref{sec5}.\par
In Sec. \ref{sec5} we compute $\WLO^{disc}_{rig}(L)$ (and the normalized version $\WLO^{disc}_{norm}(L)$)
explicitly for a large class of colored torus ribbon knots $L$ in $S^2 \times S^1$,
see Theorem \ref{theorem1} and its proof.
Apart from Theorem \ref{theorem1} a straightforward
generalization is proven, cf. Theorem \ref{theorem2}.\par
In Sec. \ref{sec6} we combine Theorem \ref{theorem2} with a suitable surgery argument.
The explicit expressions obtained in this way are then compared with those in the Rosso-Jones
formula for colored torus knots in $S^3$.
We find agreement for all $\cG_{\bC}$. \par
The paper concludes with Sec. \ref{sec7} and a short appendix.
\begin{comment} \label{comm1} \rm The present paper pursues two closely related but still quite different goals:
\begin{description}
\item[\it Goal 1 (= Main Goal):] Make progress with the simplicial program for Chern-Simons theory, cf. Sec. 3 in \cite{Ha7a} and Remark \ref{rm_sec3.9b} below. \par
Goal 1 is achieved by Theorem \ref{theorem1}, Theorem \ref{theorem2}, and the partial verification of Conjecture~\ref{conj0} given in Sec. \ref{subsec6.2}.
\item[\it Goal 2:] Give a (new) heuristic derivation of the original Rosso-Jones formula
for general $\cG_{\bC}$. \par
Goal 2 is achieved by combining Theorem \ref{theorem2} (and Remark \ref{rm_sec3.9c}) below
with the modification of Sec. \ref{subsec6.2} which is obtained by rewriting Sec. \ref{subsec6.2}
using Witten's heuristic surgery argument instead of the (rigorous) surgery argument for
the Reshetikhin-Turaev invariant, cf. Remark \ref{rm_Goal1_vs_Goal2} below.
[Of course, if we were only interested in Goal 2 and not in Goal 1
we could significantly reduce the amount of work and simply state and ``prove'' a heuristic continuum version
of Theorem \ref{theorem2}. This would take only a few pages.
In particular, Sec. \ref{sec3} could then be omitted.]
\end{description}
Regarding Goal 2 it should be noted
that there are already several quite general heuristic approaches for calculating the CS path integral expressions
for a large class of knots \& links and base manifolds $M$, cf. the perturbative approach in \cite{GMM,BN,Kon,AxSi1,AxSi2,BoTa} based on Lorentz gauge fixing
and the approach in \cite{BeaWi,Bea} which is based on non-Abelian localization.
It is expected that in the special case of torus knots in $M=S^3$
the approach in \cite{BeaWi,Bea} leads to the Rosso-Jones formula
but to my knowledge this has so far only been shown explicitly in the special case $\cG_{\bC} = sl(2,\bC)$. \par
Apart from these (heuristic) path integral approaches
one should also mention the approach in \cite{LabLlaRam,IsLabRam,LabMa1,LabMa2,LabPer,Ste}
where Witten's CS path integral expressions are evaluated for torus knots \& links
using the heuristic ``knot operator'' approach introduced in \cite{LabRam}.
The knot operator approach allows the derivation of the Rosso-Jones formula
for arbitrary simple complex Lie algebras but
involves only a few genuine path integral arguments
(i.e. arguments which deal directly/explicitly with the CS path integral).
Instead, a variety of quite different heuristic arguments, some of them from Conformal Field Theory, are used in the knot operator approach.
\end{comment}
\section{The heuristic Chern-Simons path integral in the torus gauge}
\label{sec2}
Let $G$ be a simple, simply-connected, compact Lie group
and $T$ a maximal torus of $G$.
By $\cG$ and $\ct$ we will denote the Lie algebras of $G$ and $T$
and by $\langle \cdot , \cdot \rangle$
the unique $\Ad$-invariant scalar product
on $\cG$ such that $\langle \Check{\alpha} , \Check{\alpha} \rangle = 2$
for every short coroot $\Check{\alpha} \in \ct$.\par
Let $M$ be a compact oriented 3-manifold
of the form $M = \Sigma \times S^1$ where $\Sigma$ is a compact oriented surface.
(From Sec. \ref{sec5} on we will only consider the special case $\Sigma = S^2$.)
Finally, let $L$ be a fixed (ordered and oriented) ``link'' in $M$,
i.e. a finite tuple $(l_1, \ldots, l_m)$, $m \in \bN$, of pairwise non-intersecting
knots $l_i$. We equip each $l_i$ with a ``color'', i.e. an irreducible,
finite-dimensional, complex representation $\rho_i$ of $G$.
Recall that a ``knot'' in $M$ is an embedding $l:S^1 \to M$.
Using the surjection $[0,1] \ni t \mapsto e^{2 \pi i t} \in \{ z \in \bC \mid |z| =1 \} \cong S^1$
we can consider each knot as a loop $l:[0,1] \to M$, $l(0) = l(1)$, in the obvious way.
\subsection{Basic spaces}
\label{subsec2.1}
As in \cite{Ha7a,Ha7b} we will use the following notation\footnote{Here $\Omega^p(N,V)$ denotes the space of $V$-valued $p$-forms
on a smooth manifold $N$}
\begin{subequations} \label{eq_basic_spaces_cont}
\begin{align}
\cB & = C^{\infty}(\Sigma,\ct) \\
\cA & = \Omega^1(M,\cG)\\
\cA_{\Sigma} & = \Omega^1(\Sigma,\cG) \\
\cA_{\Sigma,\ct} & = \Omega^1(\Sigma,\ct), \quad \cA_{\Sigma,\ck} = \Omega^1(\Sigma,\ck) \\
\cA^{\orth} & = \{ A \in \cA \mid A(\partial/\partial t) = 0\}\\
\label{eq_part_f}
\Check{\cA}^{\orth} & = \{ A^{\orth} \in \cA^{\orth} \mid \int A^{\orth}(t) dt \in \cA_{\Sigma,\ck} \} \\
\label{eq_part_g} \cA^{\orth}_c & = \{ A^{\orth} \in \cA^{\orth} \mid \text{ $A^{\orth}$ is constant and
$\cA_{\Sigma,\ct}$-valued}\}
\end{align}
\end{subequations}
where $\ck$ is the orthogonal complement of $\ct$ in $\cG$ w.r.t.
$\langle \cdot, \cdot \rangle$.
Above $dt$ denotes the normalized translation-invariant volume form on $S^1$
and $\partial/\partial t$ the vector field on $M=\Sigma \times S^1$
obtained by ``lifting'' in the obvious way
the normalized translation-invariant vector field $\partial/\partial t$ on $S^1$.
In Eqs. \eqref{eq_part_f} and \eqref{eq_part_g}
we used the ``obvious'' identification (cf. Sec. 2.3.1 in \cite{Ha7a})
\begin{equation}
\cA^{\orth} \cong C^{\infty}(S^1,\cA_{\Sigma})
\end{equation}
where $C^{\infty}(S^1,\cA_{\Sigma})$ is the space of maps
$f:S^1 \to \cA_{\Sigma}$ which are ``smooth'' in the sense that
$\Sigma \times S^1 \ni (\sigma,t) \mapsto (f(t))(X_{\sigma}) \in \cG$
is smooth for every smooth vector field $X$ on $\Sigma$.
Note that we have
\begin{equation} \label{eq_cAorth_decomp}
\cA^{\orth} = \Check{\cA}^{\orth} \oplus \cA^{\orth}_c
\end{equation}
\subsection{The original Chern-Simons path integral}
\label{subsec2.1b}
The Chern-Simons action function $S_{CS}: \cA \to \bR$
associated to $M$, $G$, and the ``level''\footnote{cf. Remark \ref{rm_shift_in_level} below}
$k \in \bN$ is given by
\begin{equation} \label{eq2.2'} S_{CS}(A) = - k \pi \int_M \langle A \wedge dA \rangle
+ \tfrac{1}{3} \langle A\wedge [A \wedge A]\rangle, \quad
A \in \cA \end{equation}
Here $[\cdot \wedge \cdot]$ denotes the wedge product associated to the
Lie bracket $[\cdot,\cdot] : \cG \times \cG \to \cG$
and $\langle \cdot \wedge \cdot \rangle$
the wedge product associated to the
scalar product $\langle \cdot , \cdot \rangle : \cG \times \cG \to \bR$.\par
The (expectation value of the) ``Wilson loop observable'' associated to the colored link $L=(l_1,l_2,\ldots,l_m)$
fixed above is the informal integral expression given by
\begin{equation} \label{eq_WLO_orig}
\WLO(L) := \int_{\cA} \left( \prod_{i=1}^m \Tr_{\rho_i}(\Hol_{l_i}(A)) \right) \exp( i S_{CS}(A)) DA
\end{equation}
where $\Hol_{l}(A) \in G$ is the holonomy of $A \in \cA$ around the loop $l = l_i$, $i \le m$,
and $DA$ is the (ill-defined) ``Lebesgue measure'' on the infinite-dimensional space $\cA$.
A useful explicit formula for $\Hol_l(A)$ is
\begin{equation} \label{eq_Hol_heurist}
\Hol_l(A) = \lim_{n \to \infty} \prod_{j=1}^n \exp\bigl(\tfrac{1}{n} A(l'(t))\bigr)_{| t=j/n}
\end{equation}
where $\exp:\cG \to G$ is the exponential map of $G$.
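As an aside, the ordered-product formula above is easy to probe numerically. The sketch below is an illustration, not part of the paper's formalism: the connection values along the loop, the product ordering (later factors on the left), and the normalization $T_a=-i\sigma_a/2$ of the $su(2)$ generators are all assumptions made for the example. It checks that the finite products stabilize as $n$ grows and remain special unitary.

```python
import numpy as np

# Assumed su(2) generators T_a = -i*sigma_a/2.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T1, T3 = -0.5j * s1, -0.5j * s3

def A_along_loop(t):
    """Sample connection evaluated on the loop's tangent, A(l'(t)) in su(2)."""
    return np.sin(2 * np.pi * t) * T1 + np.cos(2 * np.pi * t) * T3

def expm_su2(M):
    """exp(M) for traceless 2x2 M, using M^2 = -det(M)*I (Cayley-Hamilton)."""
    theta = np.sqrt(np.linalg.det(M) + 0j)
    if abs(theta) < 1e-14:
        return np.eye(2, dtype=complex) + M
    return np.cos(theta) * np.eye(2) + (np.sin(theta) / theta) * M

def holonomy(n):
    """Ordered product of exp((1/n) A(l'(j/n))), j = 1..n, later factors left."""
    U = np.eye(2, dtype=complex)
    for j in range(1, n + 1):
        U = expm_su2(A_along_loop(j / n) / n) @ U
    return U

U1, U2 = holonomy(400), holonomy(800)
print(np.linalg.norm(U1 - U2))       # small: the finite products stabilize
print(abs(np.linalg.det(U2) - 1.0))  # U2 is (numerically) in SU(2)
```

Each factor is exactly unitary with unit determinant (the exponent is traceless and anti-Hermitian), so the approximation error comes only from the non-commutativity of the factors and shrinks like $1/n$.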
\begin{remark} In the physics literature the notation $Z(M,L)$ and $P \exp(\int_l A)$ is usually
used instead of $\WLO(L)$ and $\Hol_{l}(A)$.
\end{remark}
\subsection{The torus gauge fixed Chern-Simons path integral}
\label{subsec2.2}
Let $\pi_{\Sigma}: \Sigma \times S^1 \to \Sigma$ be the canonical projection.
For each loop $l_i$ appearing in the link $L$ we set $l^i_{\Sigma}:= \pi_{\Sigma} \circ l_i$.
Moreover, we fix $\sigma_0 \in \Sigma$ such that
$$\sigma_0 \notin \bigcup_i \arc(l^i_{\Sigma})$$
By applying ``abstract torus gauge fixing'' (cf. Sec. 2.2.4 in \cite{Ha7a})
and a suitable change of variables (cf. Sec. 2.3.1 and Appendix B.3 in \cite{Ha7a})
one can derive at a heuristic level (cf. Eq. (2.53) in \cite{Ha7a})
\begin{multline} \label{eq2.48_Ha7a} \WLO(L)
\sim \sum_{y \in I} \int_{\cA^{\orth}_c \times \cB} \biggl\{
1_{C^{\infty}(\Sigma,\ct_{reg})}(B) \Det_{FP}(B)\\
\times \biggl[ \int_{\Check{\cA}^{\orth}} \biggl( \prod_i \Tr_{\rho_i}\bigl(
\Hol_{l_i}(\Check{A}^{\orth} + A^{\orth}_c, B)\bigr) \biggr)
\exp(i S_{CS}( \Check{A}^{\orth}, B)) D\Check{A}^{\orth} \biggr] \\
\times \exp\bigl( - 2\pi i k \langle y, B(\sigma_0) \rangle \bigr) \biggr\}
\exp(i S_{CS}(A^{\orth}_c, B)) (DA^{\orth}_c \otimes DB)
\end{multline}
where ``$\sim$'' denotes equality up to a multiplicative ``constant''\footnote{``constant'' in the sense
that $C$ does not depend on $L$.
By contrast, $C$ may depend on $G$, $\Sigma$, and $k$.} $C$,
where $I:= \ker(\exp_{| \ct}) \subset \ct$, where $DB$ and $DA^{\orth}_c$ are the
informal ``Lebesgue measures'' on the infinite-dimensional spaces $\cB$ and $\cA^{\orth}_c$,
and where we have set $\ct_{reg} := \exp^{-1}(T_{reg})$,
$T_{reg}$ being the set of ``regular'' elements\footnote{i.e. the set of all $t \in T$ which are not contained
in a different maximal torus $T'$} of $T$.
Moreover, we have set for each $B \in \cB$, $A^{\orth} \in \cA^{\orth}$
\begin{align}
S_{CS}(A^{\orth},B) & := S_{CS}(A^{\orth} + B dt), \\
\label{eq4.17}\Hol_{l}(A^{\orth}, B) & := \Hol_{l}(A^{\orth} + B dt) \nonumber \\
& = \lim_{n \to \infty} \prod_{j = 1}^n
\exp\bigl( \tfrac{1}{n} [ A^{\orth}(l_{S^1}(t))(l'_{\Sigma}(t)) +
B(l_{\Sigma}(t)) \cdot dt(l'_{S^1}(t))] \bigr)_{t = j/n}
\end{align}
where $dt$ is the real-valued 1-form on $M=\Sigma \times S^1$
obtained by pulling back the 1-form $dt$ on $S^1$ by means of the canonical projection
$\pi_{S^1}: \Sigma \times S^1 \to S^1$ and where
$l_{S^1}: [0,1] \to S^1$ and $l_{\Sigma}: [0,1] \to \Sigma$
are the projected loops given by
$l_{S^1} := \pi_{S^1} \circ l$ and $l_{\Sigma} := \pi_{\Sigma} \circ l$.\par
Finally, the expression $\Det_{FP}(B)$ in Eq. \eqref{eq2.48_Ha7a} is the informal expression given by
\begin{equation} \label{eq_DetFP} \Det_{FP}(B) := \det\bigl(1_{\ck}-\exp(\ad(B))_{|\ck}\bigr)
\end{equation}
where $1_{\ck}-\exp(\ad(B))_{|\ck} $
is the linear operator on $C^{\infty}(\Sigma,\ck)$ given by
\begin{equation}
(1_{\ck}-\exp(\ad(B))_{|\ck} \cdot f)(\sigma) = (1_{\ck}-\exp(\ad(B(\sigma)))_{|\ck}) \cdot f(\sigma)
\quad \quad \forall \sigma \in \Sigma, \quad \forall f \in C^{\infty}(\Sigma,\ck)
\end{equation}
where on the RHS $1_{\ck}$ is the identity on $\ck$.
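For orientation, here is a finite-dimensional toy computation (an illustrative aside, not part of the paper's formalism). For $G=SU(2)$ with the assumed generators $T_a=-i\sigma_a/2$ (so $[T_a,T_b]=\epsilon_{abc}T_c$), take $b=\theta T_3\in\ct$; then $\ad(b)_{|\ck}$ on $\ck=\operatorname{span}(T_1,T_2)$ is the rotation generator with parameter $\theta$, and $\det(1_{\ck}-\exp(\ad(b))_{|\ck})=2-2\cos\theta=4\sin^2(\theta/2)$, which vanishes exactly when $\exp(b)$ fails to be regular.

```python
import numpy as np

theta = 0.9  # sample value with exp(b) regular (theta not in 2*pi*Z)

# ad(theta*T3) restricted to k = span(T1, T2), using [T3,T1]=T2, [T3,T2]=-T1.
ad_b = theta * np.array([[0.0, -1.0],
                         [1.0,  0.0]])

# exp(ad_b) via complex eigendecomposition (plain numpy has no expm).
w, V = np.linalg.eig(ad_b.astype(complex))
exp_ad_b = (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

det_fp = np.linalg.det(np.eye(2) - exp_ad_b)
print(det_fp, 4 * np.sin(theta / 2) ** 2)  # the two values agree
```

This is the pointwise quantity whose product over $\sigma\in\Sigma$ the informal determinant $\Det_{FP}(B)$ is meant to capture; the indicator $1_{C^{\infty}(\Sigma,\ct_{reg})}(B)$ in the path integral excludes exactly the fields for which some factor vanishes.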
\smallskip
It will be convenient to generalize the definition of $1_{\ck}-\exp(\ad(B))_{|\ck}$ above.
For every $p \in \{0,1,2\}$ we define
$(1_{\ck}-\exp(\ad(B))_{|\ck})^{(p)}$ to be the linear operator on $\Omega^p(\Sigma,\ck)$ given by
\begin{multline} \label{eq_def_det(p)}
\forall \alpha \in \Omega^p(\Sigma,\ck): \forall \sigma \in \Sigma: \forall X_{\sigma} \in \wedge^p
T_{\sigma} \Sigma: \\
\quad \quad \bigl( \bigl(1_{\ck}-\exp(\ad(B))_{|\ck}\bigr)^{(p)} \cdot \alpha\bigr)(X_{\sigma}) = (1_{\ck}-\exp(\ad(B(\sigma))_{|\ck}) \cdot \alpha(X_{\sigma})
\end{multline}
Note that under the identification $C^{\infty}(\Sigma,\ck) \cong \Omega^0(\Sigma,\ck)$
the operator $(1_{\ck}-\exp(\ad(B))_{|\ck})^{(0)}$ coincides with what above we call $1_{\ck}-\exp(\ad(B))_{|\ck}$.\par
As in \cite{Ha7a} we will now fix
an auxiliary Riemannian metric ${\mathbf g_{\Sigma}}$ on $\Sigma$.
Let $\ll \cdot , \cdot \gg_{\cA_{\Sigma}}$ and
$\ll \cdot , \cdot \gg_{\cA^{\orth}}$ be the scalar products on $\cA_{\Sigma}$ and
$\cA^{\orth} \cong C^{\infty}(S^1, \cA_{\Sigma})$ induced by ${\mathbf g_{\Sigma}}$,
and let $\star: \cA_{\Sigma} \to \cA_{\Sigma}$ be the corresponding Hodge star operator.
By $\star$ we will also denote the linear automorphism
$\star: C^{\infty}(S^1, \cA_{\Sigma}) \to C^{\infty}(S^1, \cA_{\Sigma})$ given
by $(\star A^{\orth})(t) = \star (A^{\orth}(t))$ for all $A^{\orth} \in \cA^{\orth}$ and $t \in S^1$.
We then have (cf. Eq. (2.48) in \cite{Ha7a})
\begin{equation} \label{eq_SCS_expl0} S_{CS}(A^{\orth},B) = \pi k \ll A^{\orth},
\star \bigl(\tfrac{\partial}{\partial t} + \ad(B) \bigr) A^{\orth} \gg_{\cA^{\orth}}
+ 2 \pi k \ll\star A^{\orth}, dB \gg_{\cA^{\orth}}
\end{equation}
for all $B \in \cB$ and $A^{\orth} \in \cA^{\orth}$,
and in particular,
\begin{align} \label{eq_SCS_expl}
S_{CS}(\Check{A}^{\orth},B) & = \pi k \ll \Check{A}^{\orth},
\star \bigl(\tfrac{\partial}{\partial t} + \ad(B) \bigr) \Check{A}^{\orth} \gg_{\cA^{\orth}} \\
\label{eq_SCS_expl2}
S_{CS}(A^{\orth}_c,B) & = 2 \pi k \ll\star A^{\orth}_c, dB \gg_{\cA^{\orth}}
\end{align}
for $B \in \cB$, $\Check{A}^{\orth} \in \Check{\cA}^{\orth}$, and $A^{\orth}_c \in \cA^{\orth}_c$.
\subsection{Ribbon version of Eq. \eqref{eq2.48_Ha7a}}
\label{subsec2.4}
Recall that our goal is to find a rigorous realization of Witten's CS path integral expressions
which reproduces the Reshetikhin-Turaev invariants (in the special situation described in the Introduction).
Since the Reshetikhin-Turaev invariants are defined for ribbon links (or, equivalently\footnote{From the knot theory point of view the framed link picture and the ribbon link picture are equivalent.
However, the ribbon picture seems to be better suited for the
study of the Chern-Simons path integral in the torus gauge},
for framed links) we will now write down a ribbon analogue of Eq. \eqref{eq2.48_Ha7a}. \par
A closed ribbon $R$ in $\Sigma \times S^1$ is a smooth embedding
$R: S^1 \times [0,1] \to \Sigma \times S^1$.
A ribbon link in $\Sigma \times S^1$ is
a finite tuple of non-intersecting closed ribbons in $\Sigma \times S^1$.
We will replace the link $L =(l_1,l_2, \ldots, l_m)$
by a ribbon link $L_{ribb} = (R_1,R_2, \ldots, R_m)$ where each $R_i$, $i \le m$,
is chosen such that $l_i(t)= R_i(t,1/2)$ for all $t \in S^1$. Instead of $L_{ribb}$ we will
simply write $L$ in the following.
From now on we will assume that $\sigma_0 \in \Sigma$ was chosen such that
$$\sigma_0 \notin \bigcup_i \Image(R^i_{\Sigma})$$
where $R^i_{\Sigma}:= \pi_{\Sigma} \circ R_i$.
For every $R \in \{R_1, R_2, \ldots, R_m\}$
we define
$$\Hol_{R}(A) := \lim_{n \to \infty} \prod_{j=1}^n \exp\bigl(\tfrac{1}{n} \int_{0}^1 A(l'_u(t)) du \bigr)_{| t=j/n}
\in G$$
where $l_u$, $u \in [0,1]$, is the knot
$l_u := R(\cdot,u)$, considered as a loop $[0,1] \to \Sigma \times S^1$.
Moreover, for $A^{\orth} \in \cA^{\orth}$ and $B \in \cB$ we set
\begin{multline} \label{eq_sec3.3_1}
\Hol_{R}(A^{\orth}, B) := \Hol_{R}(A^{\orth} +B dt) \\
= \lim_{n \to \infty} \prod_{j=1}^n \exp\bigl(\tfrac{1}{n} \int_{0}^1
[ A^{\orth}(l^u_{S^1}(t))((l^u_{\Sigma})'(t)) +
B(l^u_{\Sigma}(t)) \cdot dt((l^u_{S^1})'(t))] du\bigr)_{| t = j/n}
\end{multline}
where $l^u_{S^1} := \pi_{S^1} \circ l_u$ and $l^u_{\Sigma} := \pi_{\Sigma} \circ l_u$ for each $u \in [0,1]$.\par
We now obtain the aforementioned ribbon analogue of Eq. \eqref{eq2.48_Ha7a}
by replacing the expression $\Hol_{l_i}(\Check{A}^{\orth} + A^{\orth}_c,B)$ in Eq. \eqref{eq2.48_Ha7a}
with $\Hol_{R_i}(\Check{A}^{\orth} + A^{\orth}_c, B)$:
\begin{multline} \label{eq2.48} \WLO(L)
\sim \sum_{y \in I} \int_{\cA^{\orth}_c \times \cB} \biggl\{
1_{C^{\infty}(\Sigma,\ct_{reg})}(B) \Det(B)\\
\times \biggl[ \int_{\Check{\cA}^{\orth}} \left( \prod_i \Tr_{\rho_i}\bigl(
\Hol_{R_i}(\Check{A}^{\orth} + A^{\orth}_c, B)\bigr) \right)
d\mu^{\orth}_B(\Check{A}^{\orth}) \biggr] \\
\times \exp\bigl( - 2\pi i k \langle y, B(\sigma_0) \rangle \bigr) \biggr\}
\exp(i S_{CS}(A^{\orth}_c, B)) (DA^{\orth}_c \otimes DB)
\end{multline}
where, as a preparation for Sec. \ref{subsec2.5}, we have set, for each $B \in \cB$,
\begin{align} \label{eq_def_Z_B}
\Check{Z}(B) & := \int \exp(i S_{CS}( \Check{A}^{\orth}, B)) D\Check{A}^{\orth},\\
\label{eq_def_mu_B}
d\mu^{\orth}_B & := \tfrac{1}{\Check{Z}(B)} \exp(i S_{CS}( \Check{A}^{\orth}, B)) D\Check{A}^{\orth}
\end{align}
and
\begin{equation} \label{eq_def_det(B)}
\Det(B) := \Det_{FP}(B) \Check{Z}(B)
\end{equation}
\subsection{Rewriting $\Det(B)$}
\label{subsec2.5}
Informally, we have for $B \in \cB_{reg}:= C^{\infty}(\Sigma,\ct_{reg})$
\begin{equation} \label{eq_sec2.5_1}
\Check{Z}(B) \sim \det\bigl(\tfrac{\partial}{\partial t} + \ad(B)\bigr)^{ -1/2} \overset{(*)}{\sim}
\det\bigl(\bigl(1_{\ck}-\exp(\ad(B))_{|\ck}\bigr)^{(1)}\bigr)^{-1/2}
\end{equation}
where $\tfrac{\partial}{\partial t} + \ad(B):\Check{\cA}^{\orth} \to \Check{\cA}^{\orth}$
is the operator appearing in Eq. \eqref{eq_SCS_expl} above,
and where $(1_{\ck}-\exp(\ad(B))_{|\ck})^{(1)}$ is the linear operator on $\cA_{\Sigma,\ck}=
\Omega^1(\Sigma,\ck)$ given by Eq. \eqref{eq_def_det(p)} above with $p=1$.
Here step $(*)$ is suggested by
$$\det(\tfrac{\partial}{\partial t} + \ad(b)\bigr) \sim \det\bigl(\bigl(1_{\ck}-\exp(\ad(b))_{|\ck}\bigr)\bigr) \quad \forall b \in \ct_{reg}$$
where $\tfrac{\partial}{\partial t} + \ad(b): C^{\infty}(S^1,\ck) \to C^{\infty}(S^1,\ck)$
and where $\det(\tfrac{\partial}{\partial t} + \ad(b))$ is
defined with the help of a standard $\zeta$-function regularization argument. \par
Observe also that
$\bigl(1_{\ck}-\exp(\ad(B))_{|\ck}\bigr)^{(0)} = \star^{-1} \circ \bigl(1_{\ck}-\exp(\ad(B))_{|\ck}\bigr)^{(2)} \circ \star$ where $(1_{\ck}-\exp(\ad(B))_{|\ck})^{(2)}$ is the linear operator on
$\Omega^2(\Sigma,\ck)$ given by Eq. \eqref{eq_def_det(p)} above with $p=2$
and where $\star: \Omega^0(\Sigma,\ck) \to \Omega^2(\Sigma,\ck)$ is the Hodge star operator induced by ${\mathbf g_{\Sigma}}$.
Thus we obtain, informally,
\begin{equation} \label{eq_sec2.5_2}
\det\bigl(\bigl(1_{\ck}-\exp(\ad(B))_{|\ck}\bigr)^{(0)}\bigr) = \det\bigl(\bigl(1_{\ck}-\exp(\ad(B))_{|\ck}\bigr)^{(2)}\bigr)
\end{equation}
Combining Eq. \eqref{eq_def_det(B)}, Eq. \eqref{eq_sec2.5_1}, and Eq. \eqref{eq_sec2.5_2} we obtain
\begin{equation} \label{eq_Det(B)_rewrite}
\Det(B) = \prod_{p=0}^2 \det\bigl(\bigl(1_{\ck}-\exp(\ad(B))_{|\ck}\bigr)^{(p)}\bigr)^{(-1)^p /2}
\end{equation}
\section{Simplicial realization $\WLO^{disc}_{rig}(L)$ of $\WLO(L)$}
\label{sec3}
\subsection{Some polyhedral cell complexes}
Let $\cP$ be a finite oriented polyhedral cell complex (cf. Appendix C in \cite{Ha7a}).
\begin{itemize}
\item We denote by $\face_p(\cP)$, $ p \in \bN_0$, the set of $p$-faces of $\cP$.
The elements of $\face_0(\cP)$ ($\face_1(\cP)$, respectively)
will be called the ``vertices'' (``edges'', respectively) of $\cP$.
\item For every fixed real vector space $V$
we denote by $C^p(\cP,V)$, $ p \in \bN_0$, the space of maps $\face_p(\cP) \to V$ (``$V$-valued
$p$-cochains of $\cP$''). Instead of $C^p(\cP,\bR)$ we will often write $C_p(\cP)$.
\item We identify $\face_p(\cP)$, $ p \in \bN_0$, with a subset of $C_p(\cP) = C^p(\cP,\bR)$
in the obvious way, i.e. each $\alpha \in \face_p(\cP)$ is identified with
$\delta_{\alpha} \in C^p(\cP,\bR)$ given by $\delta_{\alpha}(\beta) = \delta_{\alpha\beta}$
for all $\beta \in \face_p(\cP)$.
\item By $d_{\cP}$, $ p \in \bN_0$, we will denote the coboundary operator\footnote{in the special case
$p=0$, which is the only case relevant for us, $d_{\cP}: C^0(\cP,V) \to C^{1}(\cP,V)$
is given explicitly by $d f(e) = f(end(e))-f(start(e))$ for all $f \in C^0(\cP,V)$ and $e \in \face_1(\cP)$
where $start(e), end(e) \in \face_0(\cP)$ denote the starting/end point of the (oriented) edge $e$}
$C^p(\cP,V) \to C^{p+1}(\cP,V)$.
\end{itemize}
i) As a discrete analogue of the Lie group $S^1$ we will
use the finite cyclic group $\bZ_{N}$, $N \in \bN$.
The number $N$ will be kept fixed throughout the rest of this paper.
We will identify $\bZ_N$ with the subgroup $\{e^{2 \pi i k/N} \mid 1 \le k \le N\}$ of
$S^1$. The points of $\bZ_N$ induce a polyhedral cell decomposition of $S^1$. The (1-dimensional oriented\footnote{we equip each edge of $\bZ_N$ with the orientation
induced by the orientation $dt$ of $S^1$}) polyhedral cell complex obtained in this way
will also be denoted by $\bZ_N$ in the following.\par
ii) We fix a finite oriented smooth polyhedral cell decomposition
$\cC$ of $\Sigma$.
By $\cC'$ we will denote the ``canonical dual'' of the polyhedral cell decomposition $\cC$ (cf. the end of
Appendix C in \cite{Ha7a}), again equipped with an orientation.
By $\cK$ and $\cK'$ we will denote the (oriented)
polyhedral cell complexes associated to $\cC$ and $\cC'$,
i.e. $\cK = (\Sigma,\cC)$ and $\cK' = (\Sigma,\cC')$.
Instead of $\cK$ and $\cK'$
we often write $K_1$ and $K_2$ and we set $K:= (K_1,K_2)$. \par
iii) We introduce a joint subdivision $q\cK$ of $\cK$ and $\cK'$
which is uniquely determined by the conditions
\begin{align*} \face_0(q\cK) & = \face_0(b\cK), \\
\face_1(q\cK) & = \face_1(b\cK) \backslash \{e \in \face_1(b\cK) \mid \text{ both endpoints of $e$
lie in $\face_0(\cK) \cup \face_0(\cK')$} \},
\end{align*}
$b\cK$ being the barycentric subdivision of $\cK$ (cf. Sec. 4.4.3 in \cite{Ha7a} for more details).
We equip the faces of $q\cK$ with an orientation.
For convenience we choose the orientation on the edges of $q\cK$ to be ``compatible''\footnote{more precisely,
for each $e \in \face_1(q\cK)$ we choose the orientation which is induced by orientation of the unique
edge $e' \in \face_1(\cK) \cup \face_1(\cK')$ which contains $e$} with
the orientation on the edges of $\cK$ and $\cK'$.\par
iv) By $\cK \times \bZ_N$ and $q\cK \times \bZ_N$ we will denote the obvious product (polyhedral) cell complexes.
\subsection{The basic spaces}
\label{subsec3.0}
\subsubsection*{a) The spaces $\cB(q\cK)$, $\cA_{\Sigma}(q\cK)$, and $\cA^{\orth}(q\cK)$}
We first introduce the following simplicial analogues of the spaces $\cB$, $\cA_{\Sigma}$, and $\cA^{\orth}$
in Sec. \ref{subsec2.1} above:
\begin{subequations} \label{eq_basic_spaces}
\begin{align}
\cB(q\cK) & := C^0(q\cK,\ct) \\
\cA_{\Sigma}(q\cK) & := C^1(q\cK,\cG) \\
\cA^{\orth}(q\cK) & := \Map(\bZ_N,\cA_{\Sigma}(q\cK))
\end{align}
\end{subequations}
\noindent
The scalar product $\langle \cdot, \cdot \rangle$ on $\cG$ induces
scalar products $\ll \cdot, \cdot\gg_{\cB(q\cK)}$ and $\ll \cdot, \cdot\gg_{\cA_{\Sigma}(q\cK)}$
on $\cB(q\cK)$ and $\cA_{\Sigma}(q\cK)$ in the standard way.
We define a scalar product
$\ll \cdot , \cdot \gg_{\cA^{\orth}(q\cK)}$ on $\cA^{\orth}(q\cK) = \Map(\bZ_N,\cA_{\Sigma}(q\cK))$ by
\begin{equation} \label{eq_norm_scalarprod}
\ll A^{\orth}_1 , A^{\orth}_2 \gg_{\cA^{\orth}(q\cK)} = \tfrac{1}{N} \sum_{t \in \bZ_N} \ll
A^{\orth}_1(t) , A^{\orth}_2(t) \gg_{\cA_{\Sigma}(q\cK)}
\end{equation} for all $A^{\orth}_1 , A^{\orth}_2 \in \cA^{\orth}(q\cK)$.
\subsubsection*{b) The subspaces $\cB(\cK)$, $\cA_{\Sigma}(K)$, and $\cA^{\orth}(K)$}
For technical reasons (cf. Remark \ref{rm_3.1} below) we will now introduce suitable subspaces
of $\cB(q\cK)$, $\cA_{\Sigma}(q\cK)$, and $\cA^{\orth}(q\cK)$.
It will be convenient to first define these three spaces in an ``abstract'' way and
then to explain how they are embedded into the three aforementioned spaces. We set
\begin{subequations}
\begin{align}
\cB(\cK) & := C^0(\cK,\ct)\\
\cA_{\Sigma}(K) & :=C^1(K_1,\cG) \oplus C^1(K_2,\cG)\\
\cA^{\orth}(K) & := \Map(\bZ_N,\cA_{\Sigma}(K))
\end{align}
\end{subequations}
\begin{itemize}
\item We will identify $\cA_{\Sigma}(K) \cong (C_1(K_1) \oplus C_1(K_2)) \otimes_{\bR} \cG$ with a linear subspace of $\cA_{\Sigma}(q\cK) \cong C_1(q\cK) \otimes_{\bR} \cG$
by means of the linear injection $\psi \otimes \id_{\cG}$
where $\psi:C_1(K_1) \oplus C_1(K_2) \to C_1(q\cK)$ is the linear injection given by
$\psi(e)=e_1 + e_2$ for all $e \in \face_1(K_1) \cup \face_1(K_2)$
where $e_1, e_2 \in \face_1(q\cK)$ are the two edges of $q\cK$
``contained'' in $e$.
\item Since $\cA_{\Sigma}(K)$ is now identified with a subspace of $\cA_{\Sigma}(q\cK) $
the space $\cA^{\orth}(K)$ can be considered as a subspace of $\cA^{\orth}(q\cK)$ in the obvious way.
\item Finally, the space $\cB(\cK)$ will be identified with a subspace of $\cB(q\cK)$
via the linear injection $\psi: \cB(\cK) \to \cB(q\cK)$
which associates to each $B \in \cB(\cK)$ the extension $\bar{B} \in \cB(q\cK)$
given by $\bar{B}(x) = \mean_{y \in S_x} B(y)$ for all $x \in \face_0(q\cK)$.
Here ``mean'' refers to the arithmetic mean and $S_x$ denotes the set of all $y \in \face_0(\cK)$ which
lie in the closure of the unique open cell of $\cK$ containing $x$.
\end{itemize}
\begin{remark} \label{rm_3.1} \rm
i) The reason for introducing the subspaces
$\cA_{\Sigma}(K)$ and $\cA^{\orth}(K)$ is that
these spaces allow us to obtain a nice simplicial analogue of the Hodge star operator,
cf. Sec. \ref{subsec3.2} below.
\medskip
ii) In order to motivate the introduction of the subspace $\cB(\cK)$ of $\cB(q\cK)$ we remark that
$\ker(\pi \circ d_{q\cK}) \neq \cB_c(q\cK)$ where
$ \cB_c(q\cK):= \{B \in \cB(q\cK) \mid B \text{ constant}\}$ and where
\begin{equation} \label{eq_pi_proj} \pi: \cA_{\Sigma}(q\cK) \to \cA_{\Sigma}(K)
\end{equation}
denotes the orthogonal projection. The advantage of working with the space $\cB(\cK)$ is that
\begin{equation} \label{eq_obs1}
\ker((\pi \circ d_{q\cK})_{|\cB(\cK)}) = \cB_c(q\cK)
\end{equation}
(Observe that $\cB_c(q\cK) \subset \cB(\cK)$).
Eq. \eqref{eq_obs1} will play an important role in Step 2 in the proof of Theorem \ref{theorem1} below.
\end{remark}
\subsubsection*{c) The spaces $\Check{\cA}^{\orth}(K)$ and $\cA^{\orth}_c(K)$}
In order to obtain a simplicial analogue of the decomposition $\cA^{\orth} = \Check{\cA}^{\orth}
\oplus \cA^{\orth}_c$ in Eq. \eqref{eq_cAorth_decomp} above
we introduce the following spaces:
\begin{subequations}
\begin{align}
\cA_{\Sigma,\ct}(K) & := C^1(K_1,\ct) \oplus C^1(K_2,\ct)\\
\cA_{\Sigma,\ck}(K) & := C^1(K_1,\ck) \oplus C^1(K_2,\ck)\\
\label{eq_CheckcA_disc} \Check{\cA}^{\orth}(K) & :=
\{ A^{\orth} \in \cA^{\orth}(K) \mid
\sum\nolimits_{t \in \bZ_{N}} A^{\orth}(t) \in \cA_{\Sigma,\ck}(K) \} \\
\cA^{\orth}_c(K) & := \{ A^{\orth} \in \cA^{\orth}(K) \mid
\text{ $A^{\orth}(\cdot)$ is constant and $ \cA_{\Sigma,\ct}(K)$-valued}\} \overset{(*)}{\cong} \cA_{\Sigma,\ct}(K)
\end{align}
\end{subequations}
where in step $(*)$ we made the obvious identification.
Observe that we have
\begin{equation} \label{eq4.30}
\cA^{\orth}(K) = \Check{\cA}^{\orth}(K) \oplus \cA^{\orth}_c(K).
\end{equation}
\begin{convention} \label{conv_EucSpaces} \rm
In the following we will always consider $\cB(\cK)$, $\cA^{\orth}(K)$
and their subspaces
as Euclidean vector spaces in the ``obvious''\footnote{More precisely, we will assume that the space
$\cB(\cK)$ (or any subspace of $\cB(\cK)$) is equipped with the (restriction of the) scalar product $\ll \cdot, \cdot\gg_{\cB(q\cK)}$ on $\cB(q\cK)$, and the space $\cA^{\orth}(K)$ (or any subspace of $\cA^{\orth}(K)$)
is equipped with the restriction of the scalar product $\ll \cdot, \cdot\gg_{\cA^{\orth}(q\cK)}$,
introduced in Sec. \ref{subsec3.0} above} way.
\end{convention}
\subsection{Discrete analogue of the operator $\tfrac{\partial}{\partial t} + \ad(B)$ in Eq. \eqref{eq_SCS_expl0}}
\label{subsec3.1}
\subsubsection*{a) Discrete analogue(s) of the operator
$\tfrac{\partial}{\partial t} + \ad(b): C^{\infty}(S^1,\cG) \to C^{\infty}(S^1,\cG)$, $b \in \ct$}
As a preparation for the next subsection, let us introduce, for fixed $b \in \ct$, two simplicial analogues
$\hat{L}^{(N)}(b): \Map(\bZ_N,\cG) \to \Map(\bZ_N,\cG)$ and $\Check{L}^{(N)}(b): \Map(\bZ_N,\cG) \to \Map(\bZ_N,\cG)$ of the continuum operator $L(b):= \tfrac{\partial}{\partial t} + \ad(b): C^{\infty}(S^1,\cG) \to C^{\infty}(S^1,\cG)$ by
\begin{align} \label{eq_def_LOp}
\hat{L}^{(N)}(b) & := N( \tau_1 e^{\ad(b)/N} - \tau_{0})\\
\Check{L}^{(N)}(b) & := N( \tau_0 - \tau_{-1} e^{-\ad(b)/N})
\end{align}
where $\tau_x$, for $x \in \bZ_N$, denotes the translation operator $\Map(\bZ_N,\cG) \to \Map(\bZ_N,\cG)$ given by $(\tau_x f)(t) = f(t +x)$ for all $t \in \bZ_N$.
We want to emphasize that $\hat{L}^{(N)}(b)$ and $\Check{L}^{(N)}(b)$
are indeed totally natural simplicial analogues of $L(b)$; see Sec. 5 in \cite{Ha7a} for a detailed motivation.
\begin{remark} \rm The operator $\bar{L}^{(N)}(b):\Map(\bZ_N,\cG) \to \Map(\bZ_N,\cG)$
given by
$$\bar{L}^{(N)}(b) := \tfrac{N}{2}( \tau_1 e^{\ad(b)/N} - \tau_{-1} e^{-\ad(b)/N} )$$
might at first appear to be the natural candidate for a simplicial analogue of the continuum operator
$L(b)$. However, there are several problems with $\bar{L}^{(N)}(b)$.
Firstly, the properties of $\bar{L}^{(N)}(b)$ depend on whether $N$ is odd or even.
Secondly, when $N$ is odd then $\bar{L}^{(N)}(b)$ seems to have the ``wrong'' determinant.
Most probably, part ii) of Remark \ref{rm_Sec4.3} below will no longer be true if we redefine the operator $L^{(N)}(B)$
given in Eq. \eqref{def_LN} below using $\bar{L}^{(N)}(b)$ instead of $\hat{L}^{(N)}(b)$ and $\Check{L}^{(N)}(b)$.
On the other hand, if $N$ is even then $\bar{L}^{(N)}(b)$ has the ``wrong''\footnote{For example, if $b \in \ct_{reg}$ then we have
$\ker(L(b)) = \{ f \in C^{\infty}(S^1,\cG) \mid f \text{ constant and $\ct$-valued}\}$.
Similarly, we have
$\ker(\hat{L}^{(N)}(b)) = \ker(\Check{L}^{(N)}(b)) = \{f \in \Map(\bZ_N,\cG) \mid f \text{ constant and $\ct$-valued}\}$.
By contrast, $\ker(\bar{L}^{(N)}(b))$ is strictly larger than $ \{f \in \Map(\bZ_N,\cG) \mid f \text{ constant and $\ct$-valued}\}$.} kernel.
\end{remark}
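To make the naturality claim plausible (a heuristic sketch, not a substitute for the detailed motivation in Sec. 5 of \cite{Ha7a}): if $f \in C^{\infty}(S^1,\cG)$ is evaluated on the points of $\bZ_N \subset S^1$, so that the translation $\tau_1$ corresponds to the shift $t \mapsto t + \tfrac{1}{N}$ on $S^1$, then a formal expansion in powers of $\tfrac{1}{N}$ gives
$$ (\hat{L}^{(N)}(b) f)(t) = N\bigl( e^{\ad(b)/N} f(t + \tfrac{1}{N}) - f(t)\bigr) = \dot{f}(t) + \ad(b) f(t) + O(\tfrac{1}{N}) $$
and similarly for $\Check{L}^{(N)}(b)$, so both operators reduce formally to $L(b) = \tfrac{\partial}{\partial t} + \ad(b)$ in the continuum limit $N \to \infty$.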
\subsubsection*{b) Discrete analogue of the operator $\tfrac{\partial}{\partial t} + \ad(B):\cA^{\orth} \to \cA^{\orth}$}
For every fixed $B \in \cB(q\cK)$ we first introduce
the linear operators
\begin{align*}
\hat{L}^{(N)}(B) & \text{ on } \Map(\bZ_N, C^1(K_1,\cG)) \cong \oplus_{e \in \face_1(K_1)} \Map(\bZ_N, \cG), \text{ and } \\
\Check{L}^{(N)}(B) & \text{ on } \Map(\bZ_N, C^1(K_2,\cG)) \cong \oplus_{e \in \face_1(K_2)} \Map(\bZ_N, \cG)
\end{align*}
which are given by
\begin{subequations}
\begin{align} \label{eq_LN_ident1}
\hat{L}^{(N)}(B) & \cong \oplus_{e \in \face_1(K_1)}
\hat{L}^{(N)}(B(\bar{e})) \\
\label{eq_LN_ident2} \Check{L}^{(N)}(B) & \cong \oplus_{e \in \face_1(K_2)}
\Check{L}^{(N)}(B(\bar{e}))
\end{align}
\end{subequations}
where $\bar{e} \in \face_0(q\cK)$ for $e \in \face_1(K_1) \cup \face_1(K_2)$
is the barycenter of $e$.
\smallskip
As the simplicial analogue of the operator $\tfrac{\partial}{\partial t} + \ad(B):\cA^{\orth} \to \cA^{\orth}$
we now take the operator $L^{(N)}(B):\cA^{\orth}(K) \to \cA^{\orth}(K)$
which, under the identification
$$\cA^{\orth}(K) \cong \Map(\bZ_N, C^1(K_1,\cG)) \oplus \Map(\bZ_N, C^1(K_2,\cG)),$$
is given by (cf. Remark \ref{rm_Sec4.3} below for the motivation)
\begin{equation} \label{def_LN} L^{(N)}(B) = \left( \begin{matrix}
\hat{L}^{(N)}(B) && 0 \\
0 && \Check{L}^{(N)}(B)
\end{matrix} \right)
\end{equation}
Note that $L^{(N)}(B)$ leaves the subspace $\Check{\cA}^{\orth}(K)$ of $\cA^{\orth}(K)$
invariant. The restriction of $L^{(N)}(B)$ to $\Check{\cA}^{\orth}(K)$
will also be denoted by $L^{(N)}(B)$ in the following.
\subsection{Definition of $S^{disc}_{CS}(A^{\orth},B)$}
\label{subsec3.2}
Recall that $\cK = K_1$ and $\cK' = K_2$ are dual to each other.
As in \cite{Ad1} we can therefore introduce simplicial Hodge star operators
$\star_{K_1}: C^1(K_1,\bR) \to C^{1}(K_2,\bR)$ and $\star_{K_2}: C^1(K_2,\bR) \to C^{1}(K_1,\bR)$.
These are the linear isomorphisms given by
\begin{equation} \label{eq_Hodge_star_concrete}
\star_{K_j} e = \pm \Check{e} \quad \quad \forall e \in \face_1(K_j)
\end{equation}
for $j=1,2$ where $\Check{e} \in \face_1(K_{3-j})$ is the edge dual to $e$.
The sign $\pm$ above is ``$+$'' if the orientation of $\Check{e}$ is the one induced
by the orientation of $e$ and the orientation of $\Sigma$, and it is ``$-$'' otherwise.
The two simplicial Hodge star operators above induce
``$\cG$-valued versions'' $\star_{K_1}: C^1(K_1,\cG) \to C^{1}(K_2,\cG)$ and $\star_{K_2}: C^1(K_2,\cG) \to C^{1}(K_1,\cG)$ in the obvious way.\par
Let $\star_K$ be the linear automorphism of
$\cA_{\Sigma}(K) = C^1(K_1,\cG) \oplus C^1(K_2,\cG)$ which is given by
\begin{equation} \label{eq_Hodge_matrix}
\star_K := \left(\begin{matrix} 0 && \star_{K_2} \\ \star_{K_1} && 0 \end{matrix}
\right)
\end{equation}
By $\star_K$ we will also denote the linear automorphism
of $\cA^{\orth}(K)$ given by
\begin{equation} \label{eq_star_K_vor_rm}
(\star_K A^{\orth})(t) = \star_K (A^{\orth}(t)) \quad \quad \forall A^{\orth} \in \cA^{\orth}(K), t \in \bZ_N
\end{equation}
As the simplicial analogue of the continuum expression
$S_{CS}(A^{\orth},B)$ in Eq. \eqref{eq_SCS_expl0} above
we use the expression
\begin{subequations} \label{eq_SCS_expl_disc}
\begin{equation} S^{disc}_{CS}(A^{\orth},B) := \pi k \biggl[ \ll A^{\orth},
\star_K L^{(N)}(B)
A^{\orth} \gg_{\cA^{\orth}(q\cK)}
+ 2 \ll \star_K A^{\orth}, d_{q\cK} B \gg_{\cA^{\orth}(q\cK)} \biggr]
\end{equation}
for $B \in \cB(q\cK)$, $A^{\orth} \in \cA^{\orth}(K) \subset \cA^{\orth}(q\cK)$.
Observe that this implies
\begin{align} \label{eq_SCS_expl_discb} S^{disc}_{CS}(\Check{A}^{\orth},B) & = \pi k \ll \Check{A}^{\orth},
\star_K L^{(N)}(B) \Check{A}^{\orth} \gg_{\cA^{\orth}(q\cK)} \\
\label{eq_SCS_expl_discc} S^{disc}_{CS}(A^{\orth}_c,B) & = 2 \pi k \ll \star_K A^{\orth}_c, d_{q\cK} B \gg_{\cA^{\orth}(q\cK)}
\end{align}
\end{subequations}
for $B \in \cB(q\cK)$, $\Check{A}^{\orth} \in \Check{\cA}^{\orth}(K)$, $A^{\orth}_c \in \cA^{\orth}_c(K)$.
\begin{remark} \label{rm_Sec4.3} \rm
i) The operator $\star_K L^{(N)}(B): \cA^{\orth}(K) \to \cA^{\orth}(K)$
is symmetric w.r.t. the scalar product $\ll \cdot, \cdot \gg_{\cA^{\orth}(q\cK)}$,
cf. Proposition 5.3 in \cite{Ha7a}. This would not be the case if on the RHS
of Eq. \eqref{def_LN} we had used $\hat{L}^{(N)}(B)$ twice (or $\Check{L}^{(N)}(B)$ twice). \par
ii) According to Proposition 5.1 in \cite{Ha7b} we have
$\det\bigl(\star_K L^{(N)}(B)_{| \Check{\cA}^{\orth}(K)} \bigr) \neq 0$ for all
\begin{equation}B \in \cB_{reg}(q\cK) := \{ B \in \cB(q\cK) \mid
B(x) \in \ct_{reg} \text{ for all $x \in \face_0(q\cK)$}\}
\end{equation}
where $\star_K L^{(N)}(B)_{| \Check{\cA}^{\orth}(K)}$ is the restriction
of $\star_K L^{(N)}(B)$ to the invariant
subspace $\Check{\cA}^{\orth}(K)$ of $\cA^{\orth}(K)$.
\end{remark}
\subsection{Definition of $\Hol^{disc}_{R}(A^{\orth}, B)$ }
\label{subsec3.3}
\subsubsection*{a) Preparation: The simplicial loop case}
A ``simplicial curve'' in a finite oriented polyhedral cell complex
$\cP$ is a finite sequence $c=(x^{(k)})_{0 \le k \le n}$, $n \in \bN$, of vertices in $\cP$
such that for every $1 \le k \le n$
the two vertices $x^{(k)}$ and $x^{(k-1)}$
either coincide or are the two endpoints
of an edge $e \in \face_1(\cP)$.
We will call $n$ the ``length'' of the simplicial curve $c$.
If $x^{(n)} = x^{(0)}$ we will call
$c= (x^{(k)})_{0 \le k \le n}$ a ``simplicial loop'' in $\cP$.\par
Every simplicial curve $c= (x^{(k)})_{0 \le k \le n}$ induces a
sequence $(e^{(k)})_{1 \le k \le n}$ of ``generalized edges'', i.e. elements of
$\face_1(\cP) \cup \{0\} \cup (- \face_1(\cP))\subset
C_1(\cP)$ in a natural way. More precisely, we have
$e^{(k)} = 0$ if $x^{(k-1)} = x^{(k)}$ and $e^{(k)} = \pm e$ if $x^{(k-1)} \neq x^{(k)}$
where $e \in \face_1(\cP)$ is the unique edge connecting the vertices $x^{(k-1)}$ and $x^{(k)}$
and where the sign $\pm$ is $+$ if $x^{(k-1)}$ is the starting point of $e$ and $-$ if it is the endpoint.
\begin{convention} \label{conv_loop_pic} For a given simplicial loop $l= (x^{(k)})_{0 \le k \le n}$
we will usually write $\start l^{(k)}$ instead of $x^{(k-1)}$
and $l^{(k)}$ instead of $e^{(k)}$ (for $1 \le k \le n$) where $(e^{(k)})_{1 \le k \le n}$
is the corresponding sequence of generalized edges.
\end{convention}
Let $l= (x^{(k)})_{0 \le k \le n}$, $n \in \bN$,
be a simplicial loop in $q\cK \times \bZ_N$
and let $l_{q\cK}$ and $l_{\bZ_N}$ be the ``projected'' simplicial loops in $q\cK$ and $\bZ_N$.
Instead of $l_{q\cK}$ and $l_{\bZ_N}$ we will usually write
$l_{\Sigma}$ and $l_{S^1}$. (Recall that $\Sigma$ and $S^1$ are the topological spaces
underlying $q\cK$ and $\bZ_N$.) \par
For $A^{\orth} \in \cA^{\orth}(K) \subset \cA^{\orth}(q\cK)$ and $B \in \cB(q\cK)$
we now define the following simplicial analogue of the expression $\Hol_{l}(A^{\orth}, B)$
in Eq. \eqref{eq4.17} (cf. Convention \ref{conv_loop_pic}):
\begin{equation} \label{eq4.18}
\Hol^{disc}_{l}(A^{\orth}, B) := \prod_{k=1}^n \exp\biggl(
A^{\orth}(\start l^{(k)}_{S^1})(l^{(k)}_{\Sigma}) +
B(\start l^{(k)}_{\Sigma}) \cdot dt^{(N)}(l^{(k)}_{S^1}) \biggr)
\end{equation}
with $dt^{(N)} \in C^1(\bZ_{N},\bR)$ given by
$dt^{(N)}(e)= \tfrac{1}{N}$ for all $e \in \face_1(\bZ_N)$
and where we made the identification $C^1(\bZ_{N},\bR) \cong \Hom_{\bR}(C_1(\bZ_{N}),\bR)$
and $\cA_{\Sigma}(q\cK) = C^1(q\cK,\cG) \cong \Hom_{\bR}(C_1(q\cK),\cG)$.
\subsubsection*{b) The simplicial ribbon case}
A ``closed simplicial ribbon'' in a finite oriented polyhedral cell complex $\cP$ is a finite sequence $R = (F_i)_{i \le n}$ of 2-faces of $\cP$ such that every $F_i$ is a tetragon
and such that $F_i \cap F_{j} = \emptyset$ unless $i = j$ or $j = i \pm 1$ (mod $n$).
In the latter case $F_i$ and $F_{j}$ intersect in a (full) edge (cf. Remark 4.3 in Sec. 4.3 in \cite{Ha7a}
and the paragraph before Remark 4.3 in \cite{Ha7a}). \par
From now on we will consider only the special case
where $\cP = \cK \times \bZ_N$.
Observe that if $R = (F_k)_{k \le \bar{n}}$, $\bar{n} \in \bN$,
is a closed simplicial ribbon in $\cK \times \bZ_N$
then either all the edges $e_{ij} := F_i \cap F_{j}$, $j= i \pm 1$ (mod $\bar{n}$),
are parallel to $\Sigma$ or they are all parallel to $S^1$.
In the first case we will call $R$ ``regular''.\par
In the following let $R = (F_k)_{k \le \bar{n}}$, $\bar{n} \in \bN$, be a fixed regular
closed simplicial ribbon in $\cK \times \bZ_N$.
Observe that $R$ induces three simplicial loops $l^j=(x^{j(k)})_{0 \le k \le n}$, $j=0,1,2$,
(with\footnote{The common length $n$ of the three loops is given by $n = 2n_{\Sigma} + n_{S^1}$
where $n_{\Sigma}$ (and $n_{S^1}$, respectively) is the number of those faces appearing in $R = (F_k)_{k \le \bar{n}}$ which are parallel to $\Sigma$ (or are parallel to $S^1$, respectively).
Observe that since $\bar{n} = n_{\Sigma} + n_{S^1}$ we have
$\bar{n} \le n \le 2 \bar{n}$.} $n \le 2 \bar{n}$) in $q\cK \times \bZ_N$ in a natural way,
$l^1$ and $l^2$ being the two boundary loops of $R$ and $l^0$ being the loop ``inside'' $R$.
[Here we consider $R$ as a subset of $\Sigma \times S^1$ in the obvious way.
Note that the vertices $(x^{j(k)})_{0 \le k \le n}$, $j=0,1,2$, appearing above
are just the elements of $R \cap \face_0(q\cK \times \bZ_N)$.
The ``starting'' points $x^{j(0)}$, $j=0,1,2$, of the three simplicial loops $l^j=(x^{j(k)})_{0 \le k \le n}$
are the three elements of $e \cap \face_0(q\cK \times \bZ_N)$ where $e \in \face_1(\cK \times \bZ_N)$ is
the edge $e = e_{1 \bar{n}} = F_1 \cap F_{\bar{n}}$.] \par
By $l^j_{\Sigma}$ and $l^j_{S^1}$, $j=0,1,2$, we will denote
the corresponding ``projected'' simplicial loops in $q\cK$ and $\bZ_N$.\par
Let $A^{\orth} \in \cA^{\orth}(K) \subset \cA^{\orth}(q\cK)$ and $B \in \cB(q\cK)$.
As the simplicial analogue of the continuum expression
$\Hol_{R}(A^{\orth}, B)$ in Eq. \eqref{eq_sec3.3_1} we will take
\begin{multline} \label{eq4.21_full_kurz}
\Hol^{disc}_{R}(A^{\orth}, B) :=
\prod_{k=1}^n \exp\biggl( \sum_{j=0}^2 w(j) \cdot \bigl( A^{\orth}(\start l^{j(k)}_{S^1})(l^{j(k)}_{\Sigma}) + B(\start l^{j(k)}_{\Sigma}) \cdot dt^{(N)}(l^{j(k)}_{S^1}) \bigr) \biggr)
\end{multline}
where we use again Convention \ref{conv_loop_pic} and
where we have introduced three weight factors
$$w(0)=1/2, \quad \quad w(1)=1/4, \quad \quad w(2)=1/4$$
\begin{remark} \label{rm_subsec3.3} \rm
Other natural choices would be
$$(w(0),w(1),w(2)) = (1/3,1/3,1/3) \quad \quad \text{or} \quad \quad (w(0),w(1),w(2)) = (0,1/2,1/2)$$
However, these two choices would not even lead to the correct values for $\WLO^{disc}_{rig}(L)$ in the special
situation of Sec. \ref{sec5}.
\end{remark}
\subsection{Definition of $\Det^{disc}(B)$}
\label{subsec3.4}
Let us first try the following
ansatz for the discrete analogue $\Det^{disc}(B)$ of the heuristic expression $\Det(B)$ given by Eq.
\eqref{eq_Det(B)_rewrite} above. For every $B \in \cB_{reg}(q\cK)$ we set
\begin{equation} \label{eq_def_DetFPdisc_0}
\Det^{disc}(B) := \prod_{p=0}^2 \biggl(
\det\bigl(\bigl(1_{{\ck}}-\exp(\ad(B))_{| {\ck}}\bigr)^{(p)}\bigr)\biggr)^{(-1)^p /2}
\end{equation}
where
$\bigl(1_{{\ck}}-\exp(\ad(B))_{| {\ck}}\bigr)^{(p)}: C^p(\cK,\ck) \to C^p(\cK,\ck)$
is the linear operator given by
\begin{equation}\label{eq_3.21} \bigl(\bigl(1_{{\ck}}-\exp(\ad(B))_{| {\ck}}\bigr)^{(p)}(\alpha)\bigr)(X) = \bigl(1_{{\ck}}-\exp(\ad(B(\sigma_X)))_{| {\ck}}\bigr) \cdot \alpha(X) \quad
\forall \alpha \in C^p(\cK,\ck), X \in \face_p(\cK)
\end{equation}
where $\sigma_X \in \face_0(q\cK)$ is the barycenter of $X$.
Observe that we can rewrite Eq. \eqref{eq_def_DetFPdisc_0} in the following way:
\begin{equation}
\Det^{disc}(B) = \prod_{p=0}^2 \biggl( \prod_{F \in \face_p(\cK)}
\det\bigl(1_{{\ck}}-\exp(\ad(B(\bar{F})))_{| {\ck}}\bigr)^{1/2}\biggr)^{(-1)^p}
\end{equation}
where $\bar{F}$ is the barycenter of $F$.
\smallskip
It turns out, however, that this ansatz would not lead to the correct values for
$\WLO^{disc}_{rig}(L)$ defined below. This is why we will modify our original ansatz.
In order to do so we
first choose a smooth function $\det^{1/2}\bigl(1_{\ck}-\exp(\ad(\cdot))_{|\ck}\bigr): \ct \to \bR$
with the property $\forall b \in \ct: \bigl(\det^{1/2}\bigl(1_{\ck}-\exp(\ad(b))_{|\ck}\bigr)\bigr)^2 =
\det\bigl(1_{\ck}-\exp(\ad(b))_{|\ck}\bigr)$.
Observe that every such function will necessarily take both positive and negative values.
Motivated by the formula
$$ \det\nolimits\bigl(1_{{\ck}}-\exp(\ad({b}))_{|{\ck}}\bigr)
= \prod_{{\alpha} \in {\cR}} (1 - e^{2 \pi i \langle \alpha, b \rangle})
= \prod_{{\alpha} \in {\cR}_+} \bigl( 4 \sin^2( \pi \langle \alpha, b \rangle ) \bigr)$$
(with $\cR$ and $\cR_+$ as in Sec. \ref{subsec5.2} below)
we will make the choice
\begin{equation} \label{eq_det_in_terms_of_roots}
\det\nolimits^{1/2}\bigl(1_{{\ck}}-\exp(\ad({b}))_{|{\ck}}\bigr) = \prod_{{\alpha} \in {\cR_+}} \bigl( 2 \sin( \pi \langle \alpha, b \rangle ) \bigr)
\end{equation}
and then redefine $\Det^{disc}(B)$ for $B \in \cB_{reg}(q\cK)$ by
\begin{equation} \label{eq_def_DetFPdisc}
\Det^{disc}(B) := \prod_{p=0}^2 \biggl( \prod_{F \in \face_p(\cK)}
\det\nolimits^{1/2}\bigl(1_{{\ck}}-\exp(\ad(B(\bar{F})))_{| {\ck}}\bigr)\biggr)^{(-1)^p}
\end{equation}
\begin{remark} \rm In the published versions of \cite{Ha7a,Ha7b}
there is a notational inaccuracy. When we write $\det\bigl(1_{{\ck}}-\exp(\ad({b}))_{|{\ck}}\bigr)^{1/2}$
in \cite{Ha7a,Ha7b} we actually mean $\det\nolimits^{1/2}\bigl(1_{{\ck}}-\exp(\ad({b}))_{|{\ck}}\bigr)$
given as in Eq. \eqref{eq_det_in_terms_of_roots} above.
\end{remark}
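To illustrate Eq. \eqref{eq_det_in_terms_of_roots} in the simplest non-trivial case (an example added here for illustration): take $\cG = \mathfrak{su}(2)$ with the usual maximal torus, so that $\cR_+$ contains exactly one positive root $\alpha$. Then
$$ \det\nolimits^{1/2}\bigl(1_{{\ck}}-\exp(\ad({b}))_{|{\ck}}\bigr) = 2 \sin( \pi \langle \alpha, b \rangle ), $$
which vanishes and changes sign whenever $\langle \alpha, b \rangle \in \bZ$, i.e. on $\ct_{sing}$. This makes explicit why any smooth square root of $\det\bigl(1_{\ck}-\exp(\ad(b))_{|\ck}\bigr)$ must take both positive and negative values.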
\subsection{Discrete version of $1_{C^{\infty}(\Sigma,\ct_{reg})}(B)$}
\label{subsec3.6}
Let us fix an $s>0$ which is sufficiently small\footnote{\label{ft_sec3.6} $s$ needs to be smaller than the distance
between the two sets $\ct_{sing}$ and $\ct_{reg} \cap \tfrac{1}{k} \Lambda$
where $k$ is as in Sec. \ref{sec2} and $\Lambda \subset \ct$ is the weight lattice,
cf. Sec. \ref{subsec5.2} below}
and choose $1^{\smooth}_{\ct_{reg}} \in C^{\infty}(\ct,\bR)$
such that
\begin{itemize}
\item $0 \le 1^{\smooth}_{\ct_{reg}} \le 1$
\item $1^{\smooth}_{\ct_{reg}} = 0$ on a neighborhood of $\ct_{sing}:= \ct \backslash \ct_{reg} $
\item $1^{\smooth}_{\ct_{reg}} = 1$ outside the $s$-neighborhood of $\ct_{sing}$
\item $1^{\smooth}_{\ct_{reg}}$ is invariant under the operation of the affine Weyl group $\cW_{\aff}$
on $\ct$ (cf. Sec. \ref{subsec5.2} below).
\end{itemize}
\noindent
For fixed $B \in \cB(q\cK)$ we will now take
the expression
\begin{equation} \label{eq_sec3.6}
\prod_{x} 1^{\smooth}_{\ct_{reg}}(B(x)):=
\prod_{x \in \face_0(q\cK)}
1^{\smooth}_{\ct_{reg}}(B(x))
\end{equation}
as the discrete analogue of $1_{C^{\infty}(\Sigma,\ct_{reg})}(B)$.
\subsection{Oscillatory Gauss-type measures}
\label{subsec3.7}
i) An ``oscillatory
Gauss-type measure'' on a Euclidean vector space $(V, \langle \cdot, \cdot \rangle)$
is a complex Borel measure $d\mu$ on $V$
of the form
\begin{equation} \label{eq3.1}
d\mu(x) = \tfrac{1}{Z} e^{ - \tfrac{i}{2} \langle x - m, S (x-m) \rangle} dx
\end{equation}
with $Z \in \bC \backslash \{0\}$,
$m \in V$, and where $S$ is a symmetric endomorphism of $V$ and
$dx$ the normalized\footnote{i.e. unit hyper-cubes have volume $1$ w.r.t. $dx$}
Lebesgue measure on $V$.
Note that $Z$, $m$ and $S$ are uniquely determined by $d\mu$. We will often use
the notation $m_{\mu}$ and $S_{\mu}$ in order to refer to $m$ and $S$.
\begin{itemize}
\item We call $d\mu$ ``centered'' iff $m_{\mu}=0$.
\item We call $d\mu$ ``degenerate'' iff $S_{\mu}$ is not invertible.
\end{itemize}
\smallskip
ii) Let $d\mu$ be an oscillatory
Gauss-type measure on a Euclidean vector space $(V, \langle \cdot, \cdot \rangle)$.
A (Borel) measurable function
$f: V \to \bC$ will be called improperly integrable w.r.t. $d\mu$
iff\footnote{Observe that
$\int_{\ker(S_{\mu})} e^{- \eps \|x\|^2} dx =
(\tfrac{\eps}{\pi})^{-n/2}$.
In particular, the factor
$(\tfrac{\eps}{\pi})^{n/2}$ in Eq. \eqref{eq3.2} above ensures
that the improper integral $\int\nolimits_{\sim} 1 \, d\mu$ exists also for degenerate oscillatory
Gauss-type measures}
\begin{equation}\label{eq3.2} \int\nolimits_{\sim} f \, d\mu := \int\nolimits_{\sim} f(x)
d\mu(x) :=
\lim_{\eps \to 0} (\tfrac{\eps}{\pi})^{n/2} \int f(x) e^{- \eps |x|^2} d\mu(x)
\end{equation}
exists. Here we have set $n:=\dim(\ker(S_{\mu}))$.
Note that if $d\mu$ is non-degenerate we have $n=0$, so the factor $(\tfrac{\eps}{\pi})^{n/2}$
is then trivial.
\begin{itemize}
\item
We call $d\mu$ ``normalized'' iff $\int\nolimits_{\sim} 1 d\mu = 1$.
\end{itemize}
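As a simple illustration of these notions (a standard Fresnel-integral computation, included here for the reader's convenience): let $V = \bR$ with the standard scalar product, $m = 0$, and $S = s \neq 0$. Then $d\mu$ is non-degenerate and
$$ \int\nolimits_{\sim} 1 \, d\mu = \tfrac{1}{Z} \lim_{\eps \to 0} \int_{\bR} e^{- \tfrac{i}{2} s x^2} e^{- \eps x^2} \, dx = \tfrac{1}{Z} \sqrt{2\pi/|s|} \; e^{-i \tfrac{\pi}{4} \operatorname{sgn}(s)}, $$
so $d\mu$ is normalized iff $Z = \sqrt{2\pi/|s|} \, e^{-i \tfrac{\pi}{4} \operatorname{sgn}(s)}$.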
\subsection{Simplicial versions of the two Gauss-type measures in Eq. \eqref{eq2.48}}
\label{subsec3.8}
i) As the simplicial analogue of the heuristic complex measure $d\Check{\mu}^{\orth}_B =
\tfrac{1}{\Check{Z}(B)} \exp(i S_{CS}(\Check{A}^{\orth},B)) D\Check{A}^{\orth}$
in Eq. \eqref{eq2.48} we will take the (rigorous) complex measure
\begin{equation} \label{eq_def_mu_orth_disc} d\Check{\mu}^{\orth,disc}_{B}(\Check{A}^{\orth}):= \tfrac{1}{\Check{Z}^{disc}(B)} \exp(iS^{disc}_{CS}(\Check{A}^{\orth},B)) D\Check{A}^{\orth}
\end{equation}
on $\Check{\cA}^{\orth}(K)$ where
$D\Check{A}^{\orth}$ denotes the (normalized) Lebesgue measure
on $\Check{\cA}^{\orth}(K)$ and where we have set
$\Check{Z}^{disc}(B) := \int\nolimits_{\sim} \exp(iS^{disc}_{CS}(\Check{A}^{\orth},B)) D\Check{A}^{\orth}$.
Observe that $d\Check{\mu}^{\orth,disc}_{B}(\Check{A}^{\orth})$ need not be well-defined
for all $B$. However, if $B \in \cB_{reg}(q\cK)$, which is the only case relevant for us (cf. Sec. \ref{subsec3.6} above), Eq. \eqref{eq_SCS_expl_disc} and Remark \ref{rm_Sec4.3} above imply that
the complex measure in Eq. \eqref{eq_def_mu_orth_disc} is indeed a well-defined,
non-degenerate, centered, normalized oscillatory Gauss type measure on $\Check{\cA}^{\orth}(K)$.
\medskip
\noindent ii) As the simplicial analogue of the heuristic complex measure
$\exp(i S_{CS}(A^{\orth}_c, B)) (DA^{\orth}_c \otimes DB)$ in Eq. \eqref{eq2.48} we will take the (rigorous) complex measure on $\cA^{\orth}_c(K) \oplus \cB(\cK)$
\begin{equation} \label{eq_def_mu2_disc} \exp(i S^{disc}_{CS}(A^{\orth}_c,B)) (DA^{\orth}_c \otimes DB)
\end{equation}
where $DA^{\orth}_c$ denotes the (normalized) Lebesgue measure on $\cA^{\orth}_c(K)$
and $DB$ the (normalized) Lebesgue measure on $\cB(\cK)$.\par
According to Eq. \eqref{eq_SCS_expl_disc} above, the (rigorous) complex measure in Eq. \eqref{eq_def_mu2_disc}
is a centered oscillatory Gauss type measure on $\cA^{\orth}_c(K) \oplus \cB(\cK)$.
\subsection{Definition of $\WLO^{disc}_{rig}(L)$ and $\WLO^{disc}_{norm}(L)$}
\label{subsec3.9}
A finite tuple $L= (R_1, R_2, \ldots, R_m)$, $m \in \bN$, of closed simplicial ribbons in
$\cK \times \bZ_{N}$ which do not intersect each other
will be called a ``simplicial ribbon link'' in $\cK \times \bZ_{N}$.
For every such simplicial ribbon link $L= (R_1, R_2, \ldots, R_m)$ in
$\cK \times \bZ_{N}$ equipped with a tuple of ``colors''
$(\rho_1,\rho_2,\ldots,\rho_m)$, $m \in \bN$,
we now introduce the following
simplicial analogue $\WLO^{disc}_{rig}(L)$
of the heuristic expression $\WLO(L)$ in Eq. \eqref{eq2.48}:
\begin{multline} \label{eq_def_WLOdisc}
\WLO^{disc}_{rig}(L) :=
\sum_{y \in I}\int\nolimits_{\sim} \bigl( \prod_{x} 1^{\smooth}_{\ct_{reg}}(B(x))
\bigr) \Det^{disc}(B)\\
\times \biggl[
\int\nolimits_{\sim} \prod_{i=1}^m \Tr_{\rho_i}\bigl( \Hol^{disc}_{R_i}(\Check{A}^{\orth} + A^{\orth}_c, B)\bigr) d\Check{\mu}^{\orth,disc}_{B}(\Check{A}^{\orth}) \biggr] \\
\times \exp\bigl( - 2 \pi i k \langle y, B(\sigma_0) \rangle \bigr)
\exp(i S^{disc}_{CS}(A^{\orth}_c,B)) (DA^{\orth}_c \otimes DB)
\end{multline}
{where $\sigma_0$ is an arbitrary fixed point of $\face_0(q\cK)$
which does not lie in $\bigcup_{i \le m} \Image(\pi_{\Sigma} \circ R_i)$.
Here we consider each $R_i$ as a continuous map $[0,1] \times S^1 \to \Sigma \times S^1$
in the obvious way (cf. Remark 4.3 in Sec. 4.3 in \cite{Ha7a}).
\smallskip
Apart from considering the simplicial analogue $\WLO^{disc}_{rig}(L)$ of the heuristic expression
$\WLO(L)$ in Eq. \eqref{eq2.48} it will also
be convenient to introduce a simplicial analogue of the normalized heuristic expression
$$\WLO_{norm}(L):= \frac{\WLO(L)}{\WLO(\emptyset)}$$
where $\emptyset$ is the ``empty link'' in $M = \Sigma \times S^1$.
Accordingly, we will now define
\begin{equation} \label{eq_def_WLO_norm}
\WLO^{disc}_{norm}(L):= \frac{\WLO^{disc}_{rig}(L)}{\WLO^{disc}_{rig}(\emptyset)}
\end{equation}
where $L$ is the colored simplicial ribbon link fixed above
and where $\WLO^{disc}_{rig}(\emptyset)$ is defined in the obvious way, i.e.
by the expression we get from the RHS of Eq. \eqref{eq_def_WLOdisc}
after replacing the product $\prod_{i=1}^m \Tr_{\rho_i}\bigl( \Hol^{disc}_{R_i}(\Check{A}^{\orth} + A^{\orth}_c, B)\bigr)$ by $1$.
\medskip
We conclude this section with four important remarks.
In Remark \ref{rm_sec3.9} we compare the main aims \& results of the present paper
with those in \cite{Ha7a,Ha7b}. In Remark \ref{rm_sec3.9'} we make some comments regarding
the case of general ribbon links $L$.
In Remark \ref{rm_sec3.9b} we describe how the main result of the present paper fits
into the bigger picture of the ``simplicial program'' for Chern-Simons theory (cf. also ``Goal 1''
of the Introduction). Finally, in Remark \ref{rm_sec3.9c} we clarify some points related to ``Goal 2''
of the Introduction.
\begin{remark} \label{rm_sec3.9} \rm
In \cite{Ha7a,Ha7b} a simplicial analogue of $\WLO_{norm}(L)$
which is closely related to the simplicial analogue $\WLO^{disc}_{norm}(L)$ above was
evaluated explicitly for a simple type of simplicial ribbon links $L$, cf. Theorem 6.4 in \cite{Ha7a}.
An analogous result can be obtained for our $\WLO^{disc}_{norm}(L)$ above\footnote{or rather for
$\WLO^{disc}_{norm}(L)$ after making the modification (M2) described in Sec. \ref{subsec3.10} below}.
More precisely, it can be shown that
for every simplicial ribbon link $L= (R_1, R_2, \ldots, R_m)$ in $\cK \times \bZ_{N}$
which fulfills an analogue of Conditions (NCP)' and (NH)' in \cite{Ha7a}
we have $\WLO^{disc}_{norm}(L) = |L|/|\emptyset|$
where $|\cdot|$ is the shadow invariant on $M= \Sigma \times S^1$ associated to $\cG$ and $k$ as above.
This result is interesting because it shows how major ``building blocks''
of the shadow invariant arise within the torus gauge approach to the CS path integral.
However, from a knot theoretic point of view the class of simplicial ribbon links $L$
fulfilling (the analogue of) Condition (NCP)' in \cite{Ha7a} is not very interesting.
In particular, this class of (simplicial ribbon) links does not include any non-trivial knots. \par
One of the main aims of the present paper is to show that the torus gauge approach to the CS path integral
also allows the treatment of non-trivial knots, namely a large class of torus (ribbon)
knots in $S^2 \times S^1$, cf. Definition \ref{def5.3} and Theorem \ref{theorem1} in Sec. \ref{sec5} below.
\end{remark}
\begin{remark} \label{rm_sec3.9'} \rm
The simplicial ribbon knots/links $L= (R_1, R_2, \ldots, R_m)$, $m \in \bN$,
mentioned in Remark \ref{rm_sec3.9} above have the special property that
the projected ribbons $\pi_{\Sigma} \circ R_i$, $i \le m$, in $\Sigma$ have either no (self-)intersections
(= the situation in \cite{Ha7a,Ha7b}) or only ``longitudinal'' self-intersections (= the situation in Definition \ref{def5.3}, Theorem \ref{theorem1} and Theorem \ref{theorem2} below).
As explained in Sec. 6 in \cite{Ha7b}, if we want to have a chance of
obtaining the correct values for the rigorous version of $\WLO_{norm}(L)$
for general simplicial ribbon links $L= (R_1, R_2, \ldots, R_m)$
(where the projected ribbons $\pi_{\Sigma} \circ R_i$, $i \le m$,
are allowed to have ``transversal'' intersections) we will probably have
to modify our approach in a suitable way. One way to do so is to make what in Sec. 7 in \cite{Ha7b}
was called the ``transition to the $BF$-theory setting''.
Alternatively, one can use a ``mixed'' approach where
some of the simplicial spaces are embedded naturally into suitable continuum
spaces\footnote{For example, we can exploit the embeddings
$C^p(\cK,V) \hookrightarrow C^p(b\cK,V) \overset{W}{\hookrightarrow} \Omega^p(\Sigma^{(2)},V)$
for $p=0,1,2$ and $V \in \{\cG,\ct\}$, where $W$ is the Whitney map of the simplicial complex
$b\cK$ and $\Sigma^{(2)}$ is the complement of the 1-skeleton of $b\cK$ in $\Sigma$.
Observe that $b\cK$ induces in a natural way a Riemannian metric on $\Sigma^{(2)}$,
which gives rise to a Hodge star operator $\star:\Omega^p(\Sigma^{(2)},V) \to \Omega^{2-p}(\Sigma^{(2)},V)$},
cf. \cite{Ha10}. This leads to a greater flexibility
and allows us, for example, to work with (continuum) Hodge star operators and continuum ribbons instead
of the simplicial Hodge star operators and simplicial ribbons mentioned above.
\end{remark}
\begin{remark} \label{rm_sec3.9b} \rm
The long-term goal of what in Sec. \ref{sec1} was called the ``simplicial program'' for Chern-Simons theory
(cf. Sec. 3 in \cite{Ha7a} and see also \cite{Mn2})
is to find for every oriented closed 3-manifold $M$ and every colored (ribbon) link $L$ in $M$
a rigorous simplicial realization $\WLO^{disc}(L)$
of the original or gauge-fixed CS path integral for the WLOs associated
to $L$ such that $\WLO^{disc}(L)$ coincides with the corresponding Reshetikhin-Turaev invariant $RT(M,L)$.
In the present paper we are much less ambitious. Firstly, we only consider
the special case $M= \Sigma \times S^1$ (and from Sec. \ref{sec5} on we restrict ourselves
to the case $\Sigma = S^2$) and secondly, we only deal with a restricted
class of simplicial ribbon links $L$ (cf. Theorem \ref{theorem1} and Theorem \ref{theorem2} below).
\end{remark}
\begin{remark} \label{rm_sec3.9c} \rm In view of ``Goal 2''
in Comment \ref{comm1} of the Introduction note that $\WLO^{disc}_{norm}(L)$
can be interpreted as a (convenient) ``lattice regularization'' of the heuristic continuum expression
$\WLO_{norm}(L)$ above.
Usually, when one works with a lattice regularization in Quantum Gauge Field Theory
one has to perform a suitable continuum limit.
We can do this here as well\footnote{\label{ft_distinguished} There is, however, a major difference compared to the standard situation in QFT where the continuum limit is usually independent of the lattice regularization.
In the case of the Chern-Simons path integral (in the torus gauge) the value of the continuum limit
will depend on the lattice regularization. In particular, only a distinguished subclass of
lattice regularizations will lead to the correct result, cf. \cite{Ha10}
for an interpretation of this phenomenon}.
So, instead of working with a fixed $\cK$ and
$\bZ_N$ with fixed $N \in \bN$ let us now consider a sequence $(\cK^{(n)})_{n \in \bN}$ of consecutive refinements
of $\cK$ and $(\bZ_{N^{(n)}})_{n \in \bN}$ where $N^{(n)}:= n \cdot N$.
By doing so we can approximate every ``horizontal''\footnote{i.e. a ribbon link in $M = \Sigma \times S^1$
which, when considered as a framed link instead of a ribbon link,
is ``horizontally framed'' in the sense that the framing vector field
is ``parallel'' to the $\Sigma$-component of $M = \Sigma \times S^1$}
ribbon link $L$ in $M = \Sigma \times S^1$
by a suitable sequence $(L^{(n)})_{n \in \bN}$ of simplicial ribbon links $L^{(n)}$ in $\cK^{(n)} \times \bZ_{N^{(n)}}$.\par
Let us now restrict our attention to horizontal ribbon links $L$ in $M = \Sigma \times S^1$
which are analogous to the simplicial ribbon links
appearing in Theorem \ref{theorem1} and Theorem \ref{theorem2} below
and let $(L^{(n)})_{n \in \bN}$ be a suitable approximating sequence as above.
Then we obtain, informally\footnote{\label{ft_distinguished2}and under the assumption that the simplicial framework we use in Sec. \ref{sec3}
indeed belongs to the ``distinguished subclass'' of lattice regularizations mentioned in Footnote \ref{ft_distinguished} above},
\begin{equation} \label{eq_WLO_appr}
\WLO_{norm}(L) = \lim_{n \to \infty} \WLO^{disc}_{norm}(L^{(n)})
\end{equation}
where $\WLO^{disc}_{norm}(L^{(n)})$ is defined in an analogous way
as $\WLO^{disc}_{norm}(L)$ above (with $\cK^{(n)}$ playing the role of
$\cK$ and $\bZ_{N^{(n)}}$ playing the role of $\bZ_{N}$).\par
But from (the proof of) Theorem \ref{theorem1} and Theorem \ref{theorem2}
it follows that $\WLO^{disc}_{norm}(L^{(n)})$
does not depend on $n$ (provided that $\cK$ was chosen fine enough and $N$ large enough).
Accordingly, the $n \to \infty$-limit in Eq. \eqref{eq_WLO_appr} is trivial
and we simply obtain
$$\WLO_{norm}(L) = \WLO^{disc}_{norm}(L^{(1)})$$
So in order to evaluate the heuristic expression $\WLO_{norm}(L)$
(for the special type of continuum ribbon links $L$ we are considering here) it is enough to compute
$\WLO^{disc}_{norm}(L^{(1)})$. And this is exactly what is done in
Theorem \ref{theorem1} and Theorem \ref{theorem2} (with $\cK^{(1)}$ replaced by $\cK$).
\end{remark}
\subsection{Modification of the definition of $\WLO^{disc}_{rig}(L)$ and $\WLO^{disc}_{norm}(L)$}
\label{subsec3.10}
As we will see later, the definitions of $\WLO^{disc}_{rig}(L)$ and $\WLO^{disc}_{norm}(L)$
above need\footnote{I consider this to be a purely technical issue which can probably be resolved by using an alternative way for making rigorous sense
of the RHS of Eq. \eqref{eq2.48}, cf. Remark \ref{rm_subsec3.11} below}
to be modified slightly if we want to obtain the correct values for $\WLO^{disc}_{norm}(L)$.
Without such a modification a ``wrong'' factor $1_{\ct_{reg}}(B(Z_0))$ will appear at the end of the computations
in Step 4 in Sec. \ref{subsec5.4} below.
Here are two modifications of the current approach, for each of which this extra factor does not appear and one indeed obtains the correct values for $\WLO_{norm}^{disc}(L)$:
\begin{enumerate}
\item[(M1)] Instead of working with closed simplicial ribbons in $\cK \times \bZ_N$
we could work with closed simplicial ribbons in $q\cK \times \bZ_N$.
In fact, this is exactly what was done in \cite{Ha7a,Ha7b} in the situation studied there.
The disadvantage of this kind of modification is that the space $\cB(\cK)$ above needs to be replaced by a
less natural space. Moreover, the proof of the analogue of Lemma \ref{lem2} in Sec. \ref{subsec5.4}
will become unnaturally complicated.
\item[(M2)] We regularize the RHS of Eq. \eqref{eq_def_WLOdisc} in a suitable way. In order to do so we
first choose a fixed vector $v \in \ct$ which is not orthogonal
to any of the roots $\alpha \in \cR$. Then we define $B_{displace} \in \cB(q\cK)$ by
$$ B_{displace}(x) =
\begin{cases} 0 & \text{ if } x \in \face_0(\cK) \\
v & \text{ if } x \in \face_0(q\cK) \backslash \face_0(\cK)
\end{cases} $$
and set, for each $\beta >0$ and each $B \in \cB(\cK) \subset \cB(q\cK)$,
$$B(\beta) = B + \beta B_{displace}$$
After that we replace $B$ by $B(\beta)$ in each of the three terms
$\prod_{x} 1^{\smooth}_{\ct_{reg}}(B(x))$,
$\Det^{disc}(B)$, and $d\Check{\mu}^{\orth,disc}_{B}(\Check{A}^{\orth})$
appearing on the RHS of Eq. \eqref{eq_def_WLOdisc}.
Finally, we let\footnote{without letting $s\to 0$ first,
the $\beta \to 0$ limit has no effect} $s \to 0$ and later $\beta \to 0$.
More precisely, we add $\lim_{\beta \searrow 0} \lim_{s \searrow 0} \cdots $
in front of the (modified) RHS of Eq. \eqref{eq_def_WLOdisc}.
($\WLO^{disc}_{norm}(L)$ is again defined by Eq. \eqref{eq_def_WLO_norm}.)
\end{enumerate}
During the proof of Theorem \ref{theorem1} below, which will be given in Sec. \ref{subsec5.4} below
we will first work with the original definition of $\WLO^{disc}_{rig}(L)$ in Sec. \ref{subsec3.9}
until the end of ``Step 4''. This is instructive because we see how the factor $1_{\ct_{reg}}(B(Z_0))$
arises. After that we switch to the modified definition of $\WLO^{disc}_{rig}(L)$
(using either of the two options (M1) and (M2) above) and complete the proof.
\begin{remark} \label{rm_subsec3.11} \rm
The simplicial approach described above for obtaining a rigorous realization
of $\WLO(L)$ is simple and fairly natural and it will be sufficient
for the goals of the present paper, cf. ``Goal 1'' and ``Goal 2'' in Comment \ref{comm1}
in the Introduction. \par
That said I want to emphasize that
even though the approach above is probably one of the simplest ways
for making rigorous sense of the RHS of Eq. \eqref{eq2.48}
(for the special simplicial ribbon links we are interested in in the present paper)
I do not claim that it is the best way of obtaining such a rigorous realization. It is likely that
several improvements are possible and that, in particular,
there is an alternative to modification (M1) or modification (M2) which is more natural.
\end{remark}
\section{Some useful results on oscillatory Gauss-type measures}
\label{sec4}
In the present section we will review (without proof and in a slightly modified form)
some of the basic definitions and results in \cite{Ha7b}
on oscillatory Gauss-type measures.
\smallskip
In the following let $(V, \langle \cdot, \cdot \rangle)$ be a Euclidean vector space
and $d\mu$ an oscillatory Gauss-type measure on $(V, \langle \cdot, \cdot \rangle)$,
cf. Sec. \ref{subsec3.7} above.
\begin{proposition} \label{obs1}
If $d\mu$ is normalized and non-degenerate then we have for all $v, w \in V$
\begin{equation} \label{eq3.4}
\int\nolimits_{\sim} \langle v, x \rangle \ d\mu(x) = \langle v, m \rangle , \quad \quad
\int\nolimits_{\sim} \langle v, x \rangle \langle w, x \rangle \ d\mu(x)
= \tfrac{1}{i} \langle v, S^{-1} w \rangle + \langle v, m \rangle \langle w, m \rangle
\end{equation}
where $m = m_{\mu}$ and $S = S_{\mu}$.
\end{proposition}
\begin{definition} \label{def3.3}
By $\cP_{exp}(V)$ we denote the subalgebra
of $\Map(V,\bC)$
which is generated by the polynomial functions $f:V \to \bC$
and all functions $f:V \to \bC$ of the form
$f = \theta \circ \exp_{\End(\bC^n)} \circ \varphi$, $n \in \bN$, where
$\theta: \End(\bC^n) \to \bC$ is linear, $\varphi: V \to \End(\bC^n)$ is affine, and
$\exp_{\End(\bC^n)}:\End(\bC^n) \to \End(\bC^n)$ is the exponential map of
the (normed) $\bR$-algebra $\End(\bC^n)$.
\end{definition}
\begin{proposition} \label{prop3.1} For every $f \in \cP_{exp}(V)$ the improper integral $\int\nolimits_{\sim} f \ d\mu \in \bC$ exists.
\end{proposition}
\begin{proposition} \label{prop3.3} If $d\mu$ is normalized and non-degenerate and if
$(Y_k)_{k \le n}$, $n \in \bN$, is a sequence of affine maps $V \to \bR$ such that
\begin{equation} \label{eq3.10} \int\nolimits_{\sim} Y_i Y_j d\mu = \bigl( \int\nolimits_{\sim} Y_i d\mu \bigr) \bigl(
\int\nolimits_{\sim} Y_j d\mu \bigr) \quad \forall i,j \le n
\end{equation}
then we have for every $\Phi \in \cP_{exp}(\bR^n)$
\begin{equation} \label{eq3.11} \int\nolimits_{\sim} \Phi((Y_k)_{k}) d\mu
=\Phi\bigl(\bigl( \int\nolimits_{\sim} Y_k d\mu \bigr)_{k} \bigr)
\end{equation}
A totally analogous statement holds in the situation where
instead of the sequence $(Y_k)_{k \le n}$,
$n \in \bN$, we have a family $(Y^{a}_k)_{k \le n,
a \le d}$ of affine maps fulfilling the obvious analogue of Eq.
\eqref{eq3.10} and where $\Phi \in \cP_{exp}(\bR^{n \times d})$.
\end{proposition}
\begin{definition} \label{conv3.1} \rm
Let $f:V \to \bC$ be a continuous function, let $d:= \dim(V)$,
and let $dx$ be the normalized Lebesgue measure on $V$.
We set
\begin{equation}
\int^{\sim}_{V} f(x) dx := \tfrac{1}{\pi^{d/2}} \lim_{\eps \to 0} \eps^{d/2} \int_{V} e^{-\eps |x|^2} f(x) dx
\end{equation}
whenever the expression on the RHS of the previous equation is well-defined.
\end{definition}
\begin{remark}\label{rm_last_sec3} \rm
Let $\Gamma$ be a lattice in $V$ and $f:V \to \bC$ a $\Gamma$-periodic continuous function.
Then $\int^{\sim}_{V} f(x) dx$ exists and we have
\begin{equation} \label{eq2_lastrmsec3}
\int^{\sim}_{V} f(x) dx = \frac{1}{vol(Q)} \int_{Q} f(x) dx
\end{equation}
with $Q :=\{\sum_i x_i e_i \mid 0 \le x_i \le 1 \ \forall i \le d\}$
where $(e_i)_{i \le d}$ is an arbitrary basis of the lattice $\Gamma$
and where $vol(Q)$ denotes the volume of $Q$.
Clearly, Eq. \eqref{eq2_lastrmsec3} implies
\begin{equation} \label{eq1_lastrmsec3} \forall y \in V: \quad
\int^{\sim}_{V} f(x) dx = \int^{\sim}_{V} f(x + y) dx
\end{equation}
\end{remark}
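For a concrete illustration of Definition \ref{conv3.1} and Remark \ref{rm_last_sec3}, the following sketch (not part of the paper's arguments; the test function $f(x) = 2 + \cos(2\pi x)$ and all numerical parameters are our own choices) checks Eq. \eqref{eq2_lastrmsec3} numerically in the simplest case $d = 1$, $\Gamma = \bZ$:

```python
import math

# Numerical check of Remark rm_last_sec3 in the simplest case d = 1, Gamma = Z:
# for a Z-periodic continuous f, the regularized integral of Definition conv3.1,
#   (1/pi^{d/2}) * lim_{eps -> 0} eps^{d/2} * int_V exp(-eps |x|^2) f(x) dx,
# should equal the average of f over one period.
# Test function (our own choice): f(x) = 2 + cos(2*pi*x), period average 2.

def f(x):
    return 2.0 + math.cos(2.0 * math.pi * x)

def regularized_integral(f, eps, cutoff=20.0, step=5e-4):
    """Trapezoidal approximation of sqrt(eps/pi) * int e^{-eps x^2} f(x) dx."""
    n = int(2.0 * cutoff / step)
    total = 0.0
    for i in range(n + 1):
        x = -cutoff + i * step
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.exp(-eps * x * x) * f(x)
    return math.sqrt(eps / math.pi) * total * step

val = regularized_integral(f, 0.5)
```

Already for $\eps = 0.5$ the exact value differs from the period average $2$ only by $e^{-\pi^2/\eps} \approx 3 \cdot 10^{-9}$, so a single moderate value of $\eps$ suffices for the check.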
\begin{proposition} \label{prop3.5}
Assume that $V= V_0 \oplus V_1 \oplus V_2$ where $V_0$, $V_1$, $V_2$ are pairwise
orthogonal subspaces of $V$. (We will denote the $V_j$-component of $x \in V$ by $x_j$ in the following.)
Assume also that $d\mu$ is a (centered) normalized oscillatory Gauss-type measure on $(V, \langle \cdot, \cdot \rangle)$ of the form $d\mu(x)= \tfrac{1}{Z} \exp(i \langle x_2,M x_1 \rangle) dx$
for some linear isomorphism $M: V_1 \to V_2$.
Then, for every $v \in V_2$
and every bounded uniformly continuous function $F:V_0 \oplus V_1 \to \bC$
the LHS of the following equation exists iff the RHS exists
and in this case we have
\begin{equation} \label{eq3.18}
\int\nolimits_{\sim} F(x_0 + x_1) \exp(i \langle x_2,v \rangle ) d\mu(x) = \int^{\sim}_{V_0} F(x_0 - M^{-1} v) dx_0,
\end{equation}
where $dx_0$ is the normalized Lebesgue measure on $V_0$.
\end{proposition}
\section{Evaluation of $\WLO^{disc}_{rig}(L)$ for torus ribbon knots $L$ in $S^2\times S^1$}
\label{sec5}
From now on we will only consider the special case $\Sigma = S^2$.
\subsection{A certain class of torus (ribbon) knots in $S^2\times S^1$}
\label{subsec5.1}
Recall that a torus knot in $S^3$ is a knot
which is contained in an unknotted torus $\tilde{\cT} \subset S^3$.
Motivated by this definition we will now introduce an analogous notion
for knots in the manifold $M = S^2 \times S^1$.
\begin{definition} \label{def5.1}
A torus knot in $S^2 \times S^1$ of standard type is a knot in $S^2 \times S^1$
which is contained in a torus $\cT$ in $ S^2 \times S^1$ fulfilling
the following condition
\begin{description}
\item[(T)] $\cT$ is of the form $\cT = \psi(\cT_0)$ with $\cT_0 := C_0 \times S^1$
where $C_0$ is an embedded circle in $S^2$ and $\psi:S^2 \times S^1 \to S^2 \times S^1$ is a diffeomorphism.
\end{description}
\end{definition}
\begin{remark} \rm \label{rm_sec5.1} Note that every unknotted torus $\tilde{\cT}$ in $S^3$ can be obtained
from a torus $\cT$ in $ S^2 \times S^1$ fulfilling condition (T) by performing a suitable
Dehn surgery on a separate knot in $ S^2 \times S^1$.
Consequently, every torus knot $\tilde{K}$ in $S^3$ can be obtained from a torus knot $K$ in $S^2 \times S^1$ of standard type by performing such a Dehn surgery.
Moreover, even if we restrict ourselves to the special situation where
$K$ lies in $\cT_0 = C_0 \times S^1$ for $C_0$ as above
we can still obtain all torus knots in $S^3$ up to equivalence by performing a suitable Dehn surgery.
We will exploit this fact in Sec. \ref{subsec6.2} below.
\end{remark}
Let us now go back to the simplicial setting introduced in Sec. \ref{sec3}.
Recall that in Sec. \ref{sec3} we fixed two polyhedral cell complexes $\cK$ and $\bZ_N$
and considered also their product $\cK \times \bZ_N$.
The topological space underlying $\cK \times \bZ_N$ is $\Sigma \times S^1 = S^2 \times S^1$.
We want to find a ``simplicial analogue'' of Definition \ref{def5.1} above.
In view of Remark \ref{rm_sec5.1} we will work with the following definition:
\begin{definition} \label{def5.2} Let $l$ be a simplicial loop in $\cK \times \bZ_N$
(which we will consider as a continuous map $S^1 \to S^2 \times S^1$ in the obvious way).
We say that $l$ is a simplicial torus knot of standard type iff $l: S^1 \to S^2 \times S^1$ is an embedding
and also the following condition is fulfilled:
\begin{description}
\item[(TK)] $\Image(l)$ is contained in $\cT_0 := C_0 \times S^1$
where $C_0$ is some embedded circle in $S^2$ which lies on the 1-skeleton of $\cK$.
\end{description}
By ${\mathbf p}(l)$ and ${\mathbf q}(l)$ we will denote the
winding numbers of $\pi_i \circ l:S^1 \to S^1$, $i = 1,2$, where
$\pi_1$ and $\pi_2$ are the two canonical projections $\cT_0 = C_0 \times S^1 \cong S^1 \times S^1 \to S^1$
where for the identification $C_0 \cong S^1$ we picked an orientation on $C_0$.
(Observe that ${\mathbf p}(l)$ and ${\mathbf q}(l)$ will always be coprime.)
\end{definition}
\begin{definition} \label{def5.3} Let $R$ be a closed simplicial ribbon in $\cK \times \bZ_N$
(which we will consider as a continuous map $S^1 \times [0,1] \to S^2 \times S^1$
in the obvious way). We say that $R$ is a simplicial torus ribbon knot of standard type
iff it is regular (cf. Sec. \ref{subsec3.3} above) and also the following condition is fulfilled:
\begin{description}
\item[(TRK)] Each of the two simplicial loops $l_1$ and $l_2$ on the boundary of $R$ fulfills condition (TK) above.
\end{description}
The two integers ${\mathbf p}:= {\mathbf p}(l_1) = {\mathbf p}(l_2)$ and
${\mathbf q}:= {\mathbf q}(l_1) = {\mathbf q}(l_2)$ will be called the winding numbers of $R$.
\end{definition}
\begin{definition} \label{def5.4} Let $R=(F_i)_{i \le n}$ be a closed simplicial ribbon in $\cK \times \bZ_N$.
We say that $R$ is vertical
iff $R$ is regular and moreover, every 2-face $F_i \in \face_2(\cK \times \bZ_N)$ is ``parallel'' to $S^1$.
In this case the three simplicial loops $l^{j}$ ($j=0,1,2$) in $q\cK \times \bZ_N$,
associated to $R$ (cf. Sec. \ref{subsec3.3} above) will be ``parallel'' to $S^1$ as well.
More precisely, for each $l^{j}$ the image of the projected simplicial loop $l^{j}_{\Sigma}$ in $q\cK$
will simply consist of a single point $\sigma^j \in \face_0(q\cK)$.\par
Observe that every vertical closed simplicial ribbon is a simplicial torus ribbon knot of standard type with ${\mathbf p}=0$ and ${\mathbf q}=\pm1$.
If ${\mathbf q}=1$ we say that $R$ has standard orientation.
\end{definition}
\subsection{Some notation}
\label{subsec5.2}
Recall that in Sec. \ref{sec2} above we fixed a scalar product $\langle \cdot, \cdot \rangle$
on $\ct$. Using this scalar product we will now make the obvious identification $\ct \cong \ct^*$.
\begin{itemize}
\item $\cR \subset \ct^*$ will denote the set of real roots associated to $(\cG, \ct)$.
\item $\Check{\cR}$ denotes the set of real coroots, i.e. $\Check{\cR} := \{\Check{\alpha} \mid \alpha \in \cR\} \subset \ct$
where $\Check{\alpha}: = \frac{ 2 \alpha}{\langle \alpha, \alpha \rangle}$.
\item $\Lambda \subset \ct^*$ denotes the real weight lattice associated to $(\cG,\ct)$.
\item $\Gamma \subset \ct$ will denote the lattice generated by the set of real coroots.
\item A Weyl alcove is a connected component
of the set
\smallskip
$\ct_{\reg} = \exp^{-1}(T_{reg}) = \ct \backslash \bigcup_{\alpha \in \cR, k \in \bZ} H_{\alpha,k}
\quad \text{ where $H_{\alpha,k}:= \alpha^{-1}(k)$.} $
\item $\cW$ will denote the Weyl group associated to $(\cG,\ct)$
\item $\cW_{\aff}$ will denote the affine Weyl group associated to $(\cG,\ct)$, i.e.
the group of isometries of $\ct \cong \ct^*$
generated by the orthogonal reflections on the hyperplanes $H_{\alpha,k}$, $\alpha \in \cR$, $k \in \bZ$,
defined above.
Equivalently, one can define $\cW_{\aff}$ as the group of isometries of $\ct \cong \ct^*$ generated by
$\cW$ and the translations associated to the coroot lattice $\Gamma$.
For $\tau \in \cW_{\aff}$ we will denote the sign
of $\tau$ by $(-1)^{\tau}$.
\end{itemize}
Recall that in Sec. \ref{sec2} above we fixed $k \in \bN$.
Let us now also fix a Weyl chamber $\CW$.
\begin{itemize}
\item $\cR_+$ denotes the set of positive (real) roots
associated to $(\cG,\ct)$ and $\CW$.
\item $\Lambda_+$ denotes the set of dominant (real) weights
associated to $(\cG,\ct)$ and $\CW$.
\item $\rho$ denotes the half-sum of the positive (real) roots
\item $\theta$ denotes the unique long (real) root in $\overline{\CW}$.
\item We set $ \cg:= 1 + \langle \theta,\rho \rangle$ ($\cg$ is the dual Coxeter number of $\cG$)
\item For $\lambda \in \Lambda_+$ let $\lambda^* \in
\Lambda_+$ denote the weight conjugated to $\lambda$ and
$\bar{\lambda} \in \Lambda_+ $ the weight conjugated to $\lambda$
``after applying a shift by $\rho$''. More precisely, $\bar{\lambda}$
is given by $\bar{\lambda} + \rho = (\lambda + \rho)^*$.
\item We set
$ \Lambda_+^k := \{ \lambda \in \Lambda_+ \mid \langle \lambda + \rho ,\theta \rangle < k \}
= \{ \lambda \in \Lambda_+ \mid \langle \lambda,\theta \rangle\leq k - \cg\}$.
\end{itemize}
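As an orientation for the reader, here is how the objects above look in the standard example $\cG = su(2)$ (standard Lie theory, not specific to the present paper), with the scalar product normalized so that $\langle \alpha, \alpha \rangle = 2$:

```latex
\begin{align*}
\cR &= \{\alpha, -\alpha\}, \quad \Check{\alpha} = \alpha, \quad
\Gamma = \bZ \alpha, \quad \Lambda = \bZ \tfrac{\alpha}{2}, \quad
\cW = \{1, s_{\alpha}\} \cong \bZ_2,\\
\rho &= \tfrac{\alpha}{2}, \quad \theta = \alpha, \quad
\cg = 1 + \langle \theta, \rho \rangle = 1 + 1 = 2.
\end{align*}
% Writing lambda = n*alpha/2 with n a non-negative integer we have
%   <lambda + rho, theta> = n + 1,
% so that
%   Lambda_+^k = { n*alpha/2 | 0 <= n <= k - 2 },
% which contains exactly k - 1 elements.
```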
\begin{remark} \label{rm_shift_in_level} \rm In Sec. \ref{sec1} I mentioned that for a given
oriented closed 3-manifold $M$ the Reshetikhin-Turaev invariants associated to $(M,\cG_{\bC},q)$
are widely believed to be equivalent to Witten's heuristic path integral expressions
based on the Chern-Simons action function associated to $(M,G,k)$ where
$G$ is the simply connected, compact Lie group corresponding to the compact real form
$\cG$ of $\cG_{\bC}$ and $k \in \bN$ is chosen suitably.
It is commonly believed that this relationship between $q$ and $k$ is given by
$$q = e^{2 \pi i/(k+\cg)}, \quad \quad k \in \bN$$
The appearance of $k + \cg$ instead of $k$ (i.e. the replacement $k \to k + \cg$)
is the famous ``shift of the level'' $k$.
However, several authors have argued (cf., e.g., \cite{GMM2}) that the occurrence
(and magnitude) of such a shift in the level depends on the regularization procedure and
renormalization prescription which is used for making sense of the heuristic path integral.
Accordingly, it should not be surprising that there are several papers (cf. the references in \cite{GMM2}) where the shift $k \to k + \cg$ is not observed and one is therefore led to
the following relationship between $q$ and\footnote{In view of the definition of the set $\Lambda_+^k$
above it is clear that the situation $k \le \cg$ is not interesting} $k$:
$$q = e^{2 \pi i/k}, \quad \quad k \in \bN \text{ with\ } k > \cg$$
This is also the case in \cite{Ha7a,Ha7b} and the present paper\footnote{by contrast, in \cite{HaHa}
a shift $k \to k + \cg$ was inserted by hand into several formulas. Accordingly, several definitions
in the present paper differ from the definitions in \cite{HaHa}}.
\end{remark}
Let $C$ and $S$ be the $\Lambda_+^k \times \Lambda_+^k$ matrices with complex entries given by
\begin{subequations} \label{eq_def_C+S}
\begin{align}
C_{\lambda \mu} & := \delta_{\lambda \bar{\mu}}, \\
\label{eq_def_S} S_{\lambda \mu} & :={i^{\# \cR _{+}}\over k^{\dim(\ct)/2}} \frac{1}{|\Lambda/\Gamma|^{1/2}}
\sum_{\tau \in \cW} (-1)^{\tau} e^{- {2\pi i\over k} \langle \lambda + \rho , \tau \cdot
(\mu + \rho) \rangle }
\end{align}
\end{subequations}
for all $\lambda, \mu \in \Lambda_+^k$
where $\# \cR _{+}$ is the number of elements of $\cR _{+}$.
We have
\begin{equation} \label{eq_S2=C} S^2 = C
\end{equation}
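Eq. \eqref{eq_S2=C} can be checked numerically in a concrete case. For $\cG = su(2)$ (with the normalization $\langle \alpha, \alpha \rangle = 2$, so $\# \cR_+ = 1$, $|\Lambda/\Gamma| = 2$, and $\cW = \{1, s_{\alpha}\}$) Eq. \eqref{eq_def_S} reduces to $S_{nm} = \sqrt{2/k}\, \sin\bigl(\tfrac{\pi (n+1)(m+1)}{k}\bigr)$ for $\lambda = n \tfrac{\alpha}{2}$, $\mu = m \tfrac{\alpha}{2}$, and $C$ is the identity matrix since every $su(2)$-weight is self-conjugate. The following sketch (the value $k = 7$ is an arbitrary choice) verifies this:

```python
import math

# Numerical check of S^2 = C for G = SU(2) in the unshifted convention of this
# section.  With <alpha,alpha> = 2 one finds from the definition of S
#   S_{nm} = sqrt(2/k) * sin(pi*(n+1)*(m+1)/k),   n, m = 0, ..., k-2,
# and C is the identity matrix (all su(2)-representations are self-conjugate).

k = 7          # arbitrary choice with k > 2 = dual Coxeter number of su(2)
dim = k - 1    # number of elements of Lambda_+^k for su(2)

S = [[math.sqrt(2.0 / k) * math.sin(math.pi * (n + 1) * (m + 1) / k)
      for m in range(dim)] for n in range(dim)]

# S squared, entrywise
S2 = [[sum(S[n][j] * S[j][m] for j in range(dim)) for m in range(dim)]
      for n in range(dim)]

max_dev = max(abs(S2[n][m] - (1.0 if n == m else 0.0))
              for n in range(dim) for m in range(dim))
```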
It will be convenient to generalize the definition of $S_{\lambda \mu}$ to the situation
of general $\lambda, \mu \in \Lambda$ using again Eq. \eqref{eq_def_S}. \par
Let $\theta_{\lambda}$ and $d_{\lambda}$ for $\lambda \in \Lambda$
be given by\footnote{\label{ft_warning}For $r \in \bQ$ we will write $\theta_{\lambda}^r$ instead of
$e^{r \cdot\frac{\pi i}{k} \langle \lambda,\lambda+2\rho \rangle}$.
Note that this notation is somewhat dangerous since $\theta_{\lambda_1} = \theta_{\lambda_2}$
does, of course, in general not imply $\theta_{\lambda_1}^r = \theta_{\lambda_2}^r$}
\begin{subequations} \label{eq_def_th+d}
\begin{align}
\label{eq_def_th}
\theta_{\lambda} & := e^{\frac{\pi i}{k} \langle \lambda,\lambda+2\rho \rangle}\\
\label{eq_def_d}
d_{\lambda} & := \frac{S_{\lambda 0}}{S_{00}}
\overset{(*)}{=} \prod_{\alpha \in \cR_+}
\frac{\sin(\frac{\pi}{k} \langle \lambda+\rho,\alpha \rangle) }{\sin(\frac{\pi}{k} \langle \rho,\alpha \rangle)}
\end{align}
\end{subequations}
where step $(*)$ follows, e.g., from part iii) in Theorem 1.7 in Chap. VI in \cite{Br_tD}. \par
For every $\lambda \in \Lambda_+$ we denote by $\rho_{\lambda}$
the (up to equivalence) unique
irreducible, finite-dimensional, complex representation of $G$
with highest weight $\lambda$.
For every $\mu \in \Lambda$ we will denote by
$m_{\lambda}(\mu)$ the multiplicity of $\mu$
as a weight in $\rho_{\lambda}$.
It will be convenient to introduce $ \bar{m}_{\lambda}: \ct \to \bZ$ by
\begin{equation}\label{eq_mbar_def}
\bar{m}_{\lambda}(b) =
\begin{cases} m_{\lambda}(b) & \text{ if } b \in \Lambda\\
0 & \text{ otherwise }
\end{cases}
\end{equation}
Instead of $\bar{m}_{\lambda}$ we will simply write $m_{\lambda}$ in the following. \par
Finally, let us define $\ast: \cW_{\aff} \times \ct \to \ct$ by
\begin{equation} \label{eq_def_ast}
\tau \ast b = k \bigl( \tau \cdot \tfrac{1}{k} (b+\rho)\bigr) - \rho, \quad \quad
\text{for all $\tau \in \cW_{\aff}$ and $b \in \ct$}
\end{equation}
and set, for all $\lambda \in \Lambda_+$, $\mu, \nu \in \Lambda$, $\mathbf p \in \bZ \backslash \{0\}$,
and $\tau \in \cW_{\aff}$
\begin{equation} \label{eq_def_plethysm}
m^{\mu \nu }_{\lambda, \mathbf p}(\tau) := (-1)^{\tau} m_{\lambda}\bigl(\tfrac{1}{\mathbf p} (\mu - \tau \ast \nu)\bigr) \in \bZ
\end{equation}
and
\begin{equation} \label{eq_def_plethysm_org}
M^{\mu \nu }_{\lambda, \mathbf p} := \sum_{\tau \in \cW_{\aff}}
m^{\mu \nu }_{\lambda, \mathbf p}(\tau) \in \bZ
\end{equation}
\subsection{The two main results}
\label{subsec5.3}
From now on we will always assume that $k > \cg$, cf. Remark \ref{rm_shift_in_level} above.
\begin{theorem} \label{theorem1} Let $L=(R_1)$ be a simplicial ribbon link in $\cK \times \bZ_N$
colored with $\rho_1$ where $R_1$ is a simplicial torus ribbon knot
of standard type with winding numbers ${\mathbf p} \in \bZ \backslash \{0\}$ and $\mathbf q \in \bZ$ (cf. (TRK) in Sec. \ref{subsec5.1}).
Assume that $\lambda_1 \in \Lambda_+^k$ where $\lambda_1$ is the highest weight of $\rho_1$.
Then $\WLO^{disc}_{norm}(L)$
is well-defined and we have
\begin{equation} \label{eq_theorem1}
\WLO^{disc}_{norm}(L) = S_{00}^2
\sum_{\eta_1, \eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}} m^{\eta_1\eta_2}_{\lambda_1,\mathbf p}(\tau) \
d_{\eta_1} d_{\eta_2} \ \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}}
\end{equation}
\end{theorem}
The following generalization of Theorem \ref{theorem1} will play a crucial role in Sec. \ref{subsec6.2} below.
\begin{theorem} \label{theorem2}
Let $L=(R_1, R_2)$ be a simplicial ribbon link in $\cK \times \bZ_N$ colored with $(\rho_1,\rho_2)$
where $R_1$ is a simplicial torus ribbon knot of standard type
with winding numbers ${\mathbf p} \in \bZ \backslash \{0\}$ and $\mathbf q \in \bZ$
and $R_2$ is vertical with standard orientation.
Let us assume that $R_1$ winds around $R_2$ in the ``positive direction''\footnote{cf. Sec. \ref{subsec5.5}
below for a precise definition}
and that $\lambda_1, \lambda_2 \in \Lambda_+^k$
where $\lambda_1$ and $\lambda_2$ are the highest weights of $\rho_1$ and $\rho_2$.
Then $\WLO^{disc}_{norm}(L)$ is well-defined and we have
\begin{equation} \label{eq_theorem2}
\WLO^{disc}_{norm}(L) = S_{00} \sum_{\eta_1, \eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}} m^{\eta_1\eta_2}_{\lambda_1,\mathbf p}(\tau) \
d_{\eta_1} S_{\lambda_2 \eta_2} \ \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}}
\end{equation}
\end{theorem}
\begin{remark} \label{rm_theorems} \rm
In the special case where $\mathbf p = 1$ we have
$\theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}} = \theta_{\tau \ast \eta_2}^{- {\mathbf q}} =
\theta_{\eta_2}^{- {\mathbf q}} $ (cf. Eq. \eqref{eq_theta_inv} below)
and Eq. \eqref{eq_theorem1} can be rewritten as
$$ \WLO^{disc}_{norm}(L) = S_{00}^2
\sum_{\eta_1, \eta_2 \in \Lambda_+^k} M^{\eta_1\eta_2}_{\lambda_1,1}
d_{\eta_1} d_{\eta_2} \ \theta_{\eta_1}^{ {\mathbf q} } \theta_{\eta_2}^{- {\mathbf q} }$$
where $M^{\mu \nu}_{\lambda,1}$ is as in Eq. \eqref{eq_def_plethysm_org} above.
(A totally analogous remark applies to Eq. \eqref{eq_theorem2}). But
\begin{equation} \label{eq_rm5.8_1}
M^{\mu \nu}_{\lambda,1 } = \sum_{\tau \in \cW_{\aff}} (-1)^{\tau} m_{\lambda}\bigl(\mu - \tau \ast \nu\bigr) \overset{(*)}{= } N_{\lambda \nu}^{\mu}
\end{equation}
where $N_{\lambda \nu}^{\mu}$, $\lambda, \mu, \nu \in \Lambda_+^k$,
are the so-called fusion coefficients, see e.g., \cite{Saw} for the
definition of $N_{\lambda \nu}^{\mu}$ and for the proof of the equality $(*)$ (called
the ``quantum Racah formula'' in \cite{Saw}).\par
Eq. \eqref{eq_rm5.8_1} implies that the RHS of both theorems can be rewritten in terms of Turaev's shadow invariant
(or, equivalently, the Reshetikhin-Turaev invariant), cf. Remark \ref{rm_sec3.9} above.
Accordingly, in this case it is clear\footnote{Note, for example, that
if $\mathbf p = 1$ then the simplicial ribbon link
$L=(R_1)$ appearing in Theorem \ref{theorem1} will fulfill the analogue of
Conditions (NCP)' and (NH)' in \cite{Ha7a} mentioned
in Remark \ref{rm_sec3.9} above.} that both theorems give the results expected in the literature, i.e. Conjecture \ref{conj0} below is indeed true if $\mathbf p = 1$.
\end{remark}
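The quantum Racah formula \eqref{eq_rm5.8_1} can also be tested by a direct computation for $\cG = su(2)$. In the sketch below (our own illustration; it identifies the weight $n \tfrac{\alpha}{2}$ with the integer $n$ and $\rho$ with $1$, so that an element $\tau: x \mapsto \varepsilon x + 2j$ of $\cW_{\aff}$ has sign $(-1)^{\tau} = \varepsilon$ and acts via the $\ast$-operation of Eq. \eqref{eq_def_ast} by $\tau \ast \nu = \varepsilon(\nu + 1) + 2jk - 1$) the affine Weyl sum $M^{\mu\nu}_{\lambda,1}$ is compared with the standard $su(2)$ fusion coefficients at level $k - \cg$:

```python
# Check of the quantum Racah formula  M^{mu nu}_{lambda,1} = N_{lambda nu}^{mu}
# for su(2).  Weights are identified with integers (lambda = n*alpha/2 <-> n),
# rho <-> 1; an element tau: x -> eps*x + 2j of the affine Weyl group has sign
# (-1)^tau = eps and acts via the *-operation by tau * nu = eps*(nu+1) + 2jk - 1.

def m_weight(lam, t):
    # weight multiplicities of the su(2) irrep with highest weight lam:
    # the weights are lam, lam - 2, ..., -lam, each with multiplicity 1
    return 1 if abs(t) <= lam and (t - lam) % 2 == 0 else 0

def M(lam, mu, nu, k):
    # M^{mu nu}_{lam,1} = sum over tau of (-1)^tau * m_lam(mu - tau * nu)
    total = 0
    for eps in (1, -1):
        for j in range(-5, 6):  # only finitely many tau contribute
            total += eps * m_weight(lam, mu - (eps * (nu + 1) + 2 * j * k - 1))
    return total

def fusion(lam, mu, nu, k):
    # standard su(2) fusion coefficients N_{lam nu}^{mu} at level k - 2
    if (lam + mu + nu) % 2 != 0:
        return 0
    return 1 if abs(lam - nu) <= mu <= min(lam + nu, 2 * (k - 2) - lam - nu) else 0

k = 5  # arbitrary choice with k > 2 = dual Coxeter number of su(2)
ok = all(M(l, m, n, k) == fusion(l, m, n, k)
         for l in range(k - 1) for m in range(k - 1) for n in range(k - 1))
```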
\subsection{Proof of Theorem \ref{theorem1}}
\label{subsec5.4}
Let $L=(R_1)$ where $R_1$ is a simplicial torus ribbon knot
of standard type in $\cK \times \bZ_N$, colored with $\rho_1$ and with winding numbers ${\mathbf p} \in \bZ \backslash \{0\}$ and $\mathbf q \in \bZ$. (In the following we sometimes write $R$ instead of $R_1$.)
Let $n \in \bN$ be the length of the three simplicial loops $l^j$, $j =0,1,2$, in $q\cK \times \bZ_N$ associated
to $R=R_1$, cf. Sec. \ref{subsec3.3} above.\par
The symbol $\sim$ will denote equality up to a multiplicative (non-zero) constant
which is allowed to depend on $G$, $k$, $\cK$ and $N$ but not
on the colored ribbon knot considered\footnote{in particular, it will depend neither on $\mathbf p$ nor on $\mathbf q$ nor on $\rho_1$}.\par
Recall that, as mentioned in Sec. \ref{subsec3.10} above, until the end of ``Step 4'' below
we will work with the original definition of $\WLO^{disc}_{rig}(L)$.
Then we will explain how Steps 1--4 need to be modified if the new definition is used.
In Steps 5--6 we then work with the new definition of $\WLO^{disc}_{rig}(L)$.
\subsubsection*{a) Step 1: Performing the $\int\nolimits_{\sim} \cdots
d\Check{\mu}^{\orth,disc}_{B}(\Check{A}^{\orth}) $
integration in Eq. \eqref{eq_def_WLOdisc}}
We will prove below that under the assumptions on $L=(R_1)$
made above we have for every fixed $A^{\orth}_c \in \cA^{\orth}_c(K)$ and $B \in \cB_{reg}(q\cK)$
\begin{equation} \label{eq5.1}\int\nolimits_{\sim} \Tr_{\rho_1}\bigl( \Hol^{disc}_{R_1}(\Check{A}^{\orth} + A^{\orth}_c, B)\bigr) d\Check{\mu}^{\orth,disc}_{B}(\Check{A}^{\orth}) = \Tr_{\rho_1}\bigl(
\Hol^{disc}_{R_1}(A^{\orth}_c, B)\bigr)
\end{equation}
By taking into account that
$ \prod_{x} 1^{\smooth}_{\ct_{reg}}(B(x))
\neq 0$ for $B \in \cB(\cK) \subset \cB(q\cK)$ implies $B \in \cB_{reg}(q\cK)$
we then obtain from Eq. \eqref{eq5.1} and Eq. \eqref{eq_def_WLOdisc}
\begin{multline} \label{eq5.14}
\WLO^{disc}_{rig}(L) = \sum_{y \in I} \int\nolimits_{\sim} \biggl\{
\bigl( \prod_{x} 1^{\smooth}_{\ct_{reg}}(B(x)) \bigr)
\Tr_{\rho_1}\bigl(\Hol^{disc}_{R_1}( A^{\orth}_c, B)\bigr) \Det^{disc}(B) \biggr\}\\
\times \exp\bigl( - 2 \pi i k \langle y, B(\sigma_0) \rangle \bigr)
\exp(i S^{disc}_{CS}(A^{\orth}_c,B)) (DA^{\orth}_c \otimes DB)
\end{multline}
{\it Proof of Eq. \eqref{eq5.1}:} Let $A^{\orth}_c \in \cA^{\orth}_c(K)$ and $B \in \cB_{reg}(q\cK)$ be fixed.
We will prove Eq. \eqref{eq5.1} by applying Proposition \ref{prop3.3}
to the special situation where
\begin{itemize}
\item $V= \Check{\cA}^{\orth}(K)$ \quad and \quad $d\mu= d\Check{\mu}^{\orth,disc}_{B}$,
\item $(Y^{a}_{k})_{k \le n,a \le \dim(\cG)}$ is the
family of maps $Y^{a}_{k}: \Check{\cA}^{\orth}(K) \to \bR$ given by
\begin{multline} \label{eq5.4}
Y^{a}_k(\Check{A}^{\orth}) = \bigl\langle e_a,
\sum_{j=0}^2 w(j) \bigl( \Check{A}^{\orth}(\start l^{j(k)}_{S^1})(l^{j(k)}_{\Sigma})
+ A^{\orth}_c(l^{j(k)}_{\Sigma})
+ B(\start l^{j(k)}_{\Sigma}) \cdot dt^{(N)}(l^{j(k)}_{S^1}) \bigr) \bigr\rangle
\end{multline}
for all $\Check{A}^{\orth} \in \Check{\cA}^{\orth}(K)$,
\item $\Phi: \bR^{n \times \dim(\cG)} \to \bC$ is given by
\begin{equation} \label{eq5.5} \Phi((x^{a}_k)_{k,a}) = \Tr_{\rho_1}(\prod_{k=1}^n \exp(\sum_{a=1}^{\dim(\cG)} e_a x^{a}_k))
\quad \quad \text{for all $(x^{a}_k)_{k,a} \in \bR^{n \times \dim(\cG)}$}
\end{equation}
Here $(e_a)_{a \le \dim(\cG)}$ is an arbitrary but fixed
$\langle \cdot, \cdot \rangle$-orthonormal basis of $\cG$.
\end{itemize}
Note that
$d\Check{\mu}^{\orth,disc}_{B}$ is a well-defined normalized, non-degenerate, centered oscillatory Gauss-type measure. (Since $B \in \cB_{reg}(q\cK)$ by assumption, this follows from
the remarks in Sec. \ref{subsec3.8}.)
Moreover, we have
\begin{equation}
\Tr_{\rho_1}\bigl( \Hol^{disc}_{R_1}(\Check{A}^{\orth} + A^{\orth}_c, B)\bigr) = \Tr_{\rho_1}(\prod_{k=1}^n \exp(\sum_{a=1}^{\dim(\cG)} e_a Y^{a}_k(\Check{A}^{\orth}))) \quad \forall \Check{A}^{\orth} \in \Check{\cA}^{\orth}(K)
\end{equation}
Finally, since $ d\Check{\mu}^{\orth,disc}_{B}$ is centered
and normalized we have for every $k \le n$ and $a \le \dim(\cG)$
\begin{equation} \label{eq5.7}
\int\nolimits_{\sim} Y^{a}_k \ d\Check{\mu}^{\orth,disc}_{B} = Y^{a}_k(0)
\end{equation}
Consequently, we obtain
\begin{multline} \label{eq5.11} \int\nolimits_{\sim}
\Tr_{\rho_1}\bigl( \Hol^{disc}_{R_1}(\Check{A}^{\orth} + A^{\orth}_c,
B)\bigr) d\Check{\mu}^{\orth,disc}_{B}(\Check{A}^{\orth})\\
= \int\nolimits_{\sim} \Tr_{\rho_1}(\prod_k \exp(\sum_a e_a Y^{a}_k))
d\Check{\mu}^{\orth,disc}_{B} = \int\nolimits_{\sim} \Phi((Y^{a}_k)_{k,a}) \ d\Check{\mu}^{\orth,disc}_{B} \overset{(*)}{=}
\Phi((\int\nolimits_{\sim} Y^{a}_k \ d\Check{\mu}^{\orth,disc}_{B})_{k,a}) \\
= \Phi((Y^{a}_k(0))_{k,a}) = \Tr_{\rho_1}( \prod_k
\exp( \sum_a e_a Y^{a}_k(0))) = \Tr_{\rho_1}\bigl(
\Hol^{disc}_{R_1}(A^{\orth}_c, B)\bigr)
\end{multline}
where step $(*)$ follows from Proposition \ref{prop3.3} above.
The following remarks show that the assumptions of Proposition \ref{prop3.3} are indeed fulfilled.
\begin{enumerate}
\item We have $\Phi \in \cP_{exp}(\bR^{n \times \dim(\cG)})$. In order to see this note first that
\begin{multline} \label{eq5.6}
\Phi((x^{a}_k)_{k,a}) = \Tr_{\rho_1}(\prod_k
\exp(\sum_a e_a x^{a}_k))
= \Tr_{\End(V_1)} \bigl( \prod_k \rho_1(\exp( \sum_a e_a x^{a}_k )) \bigr) \\
= \Tr_{\End(V_1)} \bigl( \prod_k \exp_{\End(V_1)}( \sum_a
(\rho_1)_* (e_a) x^{a}_k ) \bigr)
\end{multline}
where $V_1$ is the representation space of $\rho_1$, $\exp_{\End(V_1)}$ is the exponential map of the associative algebra $\End(V_1)$, and $(\rho_1)_* : \cG \to \gl(V_1)$ is the Lie algebra representation
induced by $\rho_1: G \to \GL(V_1)$. Without loss of generality we can assume that
$V_1 = \bC^d$ where $d=\dim(V_1)$. From Definition \ref{def3.3} it now easily follows that we have indeed $\Phi \in \cP_{exp}(\bR^{n \times \dim(\cG)})$.
\item For all $k,k' \le n$, $a,a' \le \dim(\cG)$ we have
\begin{equation} \label{eq5.8} \int\nolimits_{\sim} Y^{a}_k Y^{a'}_{k'} \ d\Check{\mu}^{\orth,disc}_{B}
= \int\nolimits_{\sim} Y^{a}_k \ d\Check{\mu}^{\orth,disc}_{B}
\int\nolimits_{\sim} Y^{a'}_{k'} \ d\Check{\mu}^{\orth,disc}_{B}
\end{equation}
This follows from Eq. \eqref{eq5.7} above and
\begin{align} \label{eq5.9}
& \int\nolimits_{\sim} (Y^{a}_k - Y^{a}_k(0))
(Y^{a'}_{k'} - Y^{a'}_{k'}(0)) d\Check{\mu}^{\orth,disc}_{B} =
\int\nolimits_{\sim} \ll \cdot , f \gg \ll \cdot ,
f' \gg d\Check{\mu}^{\orth,disc}_{B} \nonumber \\
& \quad \overset{(*)}{\sim} \quad
\ll f, \bigl(\star_K L^{(N)}(B)_{| \Check{\cA}^{\orth}(K)} \bigr)^{-1}
f' \gg \overset{(**)}{=} 0
\end{align}
where $\ll \cdot, \cdot \gg$ is the scalar product on\footnote{which, according to
Convention \ref{conv_EucSpaces} above, is the scalar product induced by
$\ll \cdot , \cdot \gg_{\cA^{\orth}(q\cK)}$} $\Check{\cA}^{\orth}(K)$
and, for given $k,k' \le n$, $a,a' \le \dim(\cG)$,
$f, f' \in \Check{\cA}^{\orth}(K)$ are chosen such that $Y^{a}_k(\Check{A}^{\orth}) - Y^{a}_k(0)
= \ll \Check{A}^{\orth} ,f \gg$
and $Y^{a'}_{k'}(\Check{A}^{\orth}) - Y^{a'}_{k'}(0)
= \ll \Check{A}^{\orth} ,f' \gg$ for all $\Check{A}^{\orth} \in \Check{\cA}^{\orth}(K)$.
Here in step $(*)$ we have used Proposition \ref{obs1} (cf. also Remark \ref{rm_Sec4.3}),
and in step $(**)$ we have used that for all non-trivial
$l^{j(k)}_{\Sigma}$ and $l^{j'(k')}_{\Sigma}$, $k, k' \le n, j, j' \in \{0,1,2\}$,
appearing on the RHS of Eq. \eqref{eq5.4} we have
\begin{equation} \label{eq_Ende_Step1}
\star_K \pi(l^{j(k)}_{\Sigma}) \neq \pm \pi(l^{j'(k')}_{\Sigma})
\end{equation}
where $\pi: C^1(q\cK,\bR) \to C^1(K,\bR)$
is the real analogue of the orthogonal projection given in Eq. \eqref{eq_pi_proj}.
Eq. \eqref{eq_Ende_Step1} follows from Eq. \eqref{eq_Hodge_star_concrete} in Sec. \ref{subsec3.2} above
and from our assumption that $R_1$ is a simplicial torus ribbon knot of standard type in $\cK \times \bZ_N$.
\end{enumerate}
\subsubsection*{b) Step 2: Performing the $\int\nolimits_{\sim} \cdots
\exp(i S^{disc}_{CS}(A^{\orth}_c,B)) (DA^{\orth}_c \otimes DB)$-integration in \eqref{eq5.14}}
Note that the remaining fields $A^{\orth}_c$ and $B$ in Eq. \eqref{eq5.14}
take values in the Abelian Lie algebra $\ct$.
For fixed $A^{\orth}_c$ and $B$ we can therefore rewrite $\Hol^{disc}_{R_1}(A^{\orth}_c,B)$ as
\begin{equation} \label{eq5.16} \Hol^{disc}_{R_1}( A^{\orth}_c, B)
= \exp\bigl( \Psi(B) + \sum_{k=1}^n \sum_{j=0}^2 w(j) A^{\orth}_c(l^{j(k)}_{\Sigma}) \bigr)
\end{equation}
where we have set
\begin{equation} \label{eq5.17} \Psi(B) := \sum_k \sum_{j=0}^2 w(j)
B(\start l^{j(k)}_{\Sigma}) dt^{(N)}(l^{j(k)}_{S^1}) \quad \in \ct
\end{equation}
\begin{observation} \label{obs_red_loops} \rm From the assumption that $R = R_1$ is a simplicial torus ribbon knot in $\cK \times \bZ_N$
of standard type with first winding number ${\mathbf p} \neq 0$
it follows that
for each $j=0,1,2$ there is a simplicial
loop ${\mathfrak l}^j_{\Sigma} = ({\mathbf x}^{j(k)}_{\Sigma})_{0 \le k \le \mathbf n}$, $\mathbf n \in \bN$,
in $q\cK$ which, considered as a continuous map $S^1 \to S^2$, is an embedding and fulfills
(cf. Convention \ref{conv_loop_pic})
$$ \sum_{k=1}^n l^{j(k)}_{\Sigma} = {\mathbf p} \sum_{k=1}^{\mathbf n} {\mathfrak l}^{j(k)}_{\Sigma} $$
\end{observation}
Since $\rho_1 = \rho_{\lambda_1}$ it follows from the definitions
in Sec. \ref{subsec5.2} above that
\begin{equation} \label{eq5.18}
\Tr_{\rho_1}(\exp(b)) = \sum_{\alpha \in
\Lambda} m_{\lambda_1}(\alpha) e^{2 \pi i \langle \alpha, b \rangle} \quad \forall b \in \ct
\end{equation}
Combining Eqs. \eqref{eq5.16} -- \eqref{eq5.18} with Observation \ref{obs_red_loops} we obtain
\begin{align} \label{eq5.19}
& \Tr_{\rho_1}\bigl( \Hol^{disc}_{R_1}( A^{\orth}_c, B) \bigr) \nonumber \\
& = \sum_{\alpha \in \Lambda} m_{\lambda_1}(\alpha)
\bigl( \exp(2 \pi i \langle \alpha, \Psi(B) \rangle ) \bigr)
\exp\bigl( 2 \pi i \ll A^{\orth}_c , \alpha \, {\mathbf p} \, (w{\mathfrak l})_{\Sigma} \gg_{\cA^{\orth}(q\cK)} \bigr)
\end{align}
where we have set
\begin{equation} \label{eq_mathfrak_l_def}
(w{\mathfrak l})_{\Sigma} := \sum_{k=1}^{\mathbf n} \sum_{j=0}^2 w(j) {\mathfrak l}^{j(k)}_{\Sigma} \in C_1(q\cK)
\end{equation}
Let us now introduce for each $y \in I$,
$\alpha \in \Lambda$ and $B \in \cB(\cK) \subset \cB(q\cK)$:
\begin{multline} \label{eq5.22}
F_{\alpha,y}(B)
:= \bigl( \prod_{x}
1^{\smooth}_{\ct_{reg}}(B(x)) \bigr)
\bigl( \exp(2 \pi i \langle \alpha, \Psi(B) \rangle ) \bigr) \Det^{disc}(B) \exp\bigl( - 2 \pi i k \langle y, B(\sigma_0) \rangle \bigr)
\end{multline}
After doing so we can rewrite Eq. \eqref{eq5.14} as
\begin{multline} \label{eq5.21} \WLO^{disc}_{rig}(L)
= \sum_{\alpha \in \Lambda} m_{\lambda_1}(\alpha)
\sum_{y \in I}\\
\times \int_{\sim} F_{\alpha,y}(B)
\exp\bigl( 2 \pi i \ll A^{\orth}_c , \alpha {\mathbf p} (w{\mathfrak l})_{\Sigma} \gg_{\cA^{\orth}(q\cK)} \bigr) \exp(i S^{disc}_{CS}(A^{\orth}_c,B)) (DA^{\orth}_c \otimes DB)
\end{multline}
For each fixed $y \in I$ and $\alpha \in \Lambda$
we will now evaluate the corresponding integral in Eq. \eqref{eq5.21} by
applying Proposition \ref{prop3.5} above in the special situation where
\begin{itemize}
\item $V = \cA^{\orth}_c(K) \oplus \cB(\cK)$. For $V$ we use the decomposition
$V = V_0 \oplus V_1 \oplus V_2$ given by
\begin{align*}
V_0 & := \cB_c(q\cK) \oplus (\Image(\star_K \circ \pi \circ d_{q\cK}) )^{\orth}\\
V_1 & := (\ker(\star_K \circ
\pi \circ d_{q\cK}))^{\orth} = (\ker(\pi \circ d_{q\cK}))^{\orth} \overset{(*)}{=} (\cB_c(q\cK))^{\orth} \subset \cB(\cK)\\
V_2 & := \Image(\star_K \circ \pi \circ d_{q\cK}) \subset \cA^{\orth}_c(K)
\end{align*}
where $\pi:\cA_{\Sigma,\ct}(q\cK) \to \cA_{\Sigma,\ct}(K) \cong \cA^{\orth}_c(K)$ is the orthogonal
projection of Eq. \eqref{eq_pi_proj}, $d_{q\cK}$ is a short notation for $(d_{q\cK})_{| \cB(\cK)}$,
and $(\cdot)^{\orth}$ denotes the orthogonal complement in $\cA^{\orth}_c(K)$ and $\cB(\cK)$, respectively.
Note that step $(*)$ follows from Eq. \eqref{eq_obs1}.
\item $d\mu = d\nu^{disc} := \tfrac{1}{Z^{disc}} \exp(iS^{disc}_{CS}(A^{\orth}_c,B))
(DA^{\orth}_c \otimes DB)$ where
$$Z^{disc}:= \int_{\sim} \exp(i S^{disc}_{CS}(A^{\orth}_c,B)) (DA^{\orth}_c \otimes DB)$$
\item $F = F_{\alpha,y} \circ p$
where $p: V_0 \oplus V_1 = \cB(\cK) \oplus (V_2)^{\orth} \to \cB(\cK)$
is the canonical projection.
\item $v = 2 \pi \alpha {\mathbf p} (w{\mathfrak l})_{\Sigma}$
\end{itemize}
\noindent Before we continue we need to verify that the assumptions of Proposition \ref{prop3.5} above are indeed fulfilled.
\begin{enumerate}
\item[1.] $d\nu^{disc}$ is a normalized, centered oscillatory Gauss-type measure on $\cA^{\orth}_c(K) \oplus \cB(\cK)$. In order to see this we rewrite $d\nu^{disc}$ as\footnote{recall Eq. \eqref{eq_SCS_expl_discc}
and observe that $ \ll \star_K A^{\orth}_c, d_{q\cK} B \gg_{\cA^{\orth}(q\cK)}
= \ll \star_K A^{\orth}_c, \pi( d_{q\cK} B) \gg_{\cA^{\orth}(q\cK)}
= - \ll A^{\orth}_c, \star_K ( \pi( d_{q\cK} B)) \gg_{\cA^{\orth}(q\cK)}$}
$$ d\nu^{disc} = \tfrac{1}{Z^{disc}} \exp( i
\ll A^{\orth}_c, - 2 \pi k (\star_K \circ \pi \circ d_{q\cK}) B \gg_{\cA^{\orth}(q\cK)} ) (DA^{\orth}_c \otimes DB)$$
Accordingly, $d\nu^{disc}$ has the form as in Proposition \ref{prop3.5}
with $V_0$, $V_1$, and $V_2$ as above and where
$M: V_1 \to V_2$ is the well-defined linear isomorphism given by
\begin{equation} \label{eq_def_M} M = - 2 \pi k (\star_K \circ \pi \circ d_{q\cK})_{| V_1}
\end{equation}
\item[2.] $F = F_{\alpha,y} \circ p$ is a bounded and uniformly continuous function.
\item[3.] $v$ is an element of $V_2$. In order to prove this
we introduce a linear map
$$m_{\bR}: C^0(q\cK,\bR) \to C^1(K,\bR)$$
by $m_{\bR} := \star_K \circ \pi \circ d_{q\cK}$
where $\pi: C^1(q\cK,\bR) \to C^1(K,\bR)$, $d_{q\cK}:C^0(q\cK,\bR) \to C^1(q\cK,\bR)$,
and $\star_{K}:C^1(K,\bR) \to C^1(K,\bR)$
are the ``real analogues'' of the three maps appearing on the RHS of Eq. \eqref{eq_def_M} above.
From Lemma \ref{lem2_pre} in Step 3 below it follows that
there is a unique\footnote{here $C^0(\cK,\bR)$ is embedded into $C^0(q\cK,\bR)$
in the same way as $\cB(\cK)$ is embedded into $\cB(q\cK)$, cf. Sec. \ref{subsec3.0}}
$f \in C^0(\cK,\bR) \subset C^0(q\cK,\bR)$
such that
\begin{equation} \label{eq5.28}
(w{\mathfrak l})_{\Sigma} = m_{\bR} \cdot f
\end{equation}
and also
\begin{equation} \label{eq5.28''} \sum_{x \in \face_0(q\cK)} f(x) = 0
\end{equation}
That $v \in V_2$ now follows from
\begin{equation} \label{eq_def_v} v = 2 \pi \alpha {\mathbf p} (w{\mathfrak l})_{\Sigma} =
2 \pi \alpha {\mathbf p} ( m_{\bR} \cdot f) = (\star_K \circ \pi \circ d_{q\cK}) \cdot \bigl( 2 \pi \alpha {\mathbf p} f\bigr)
\end{equation}
\end{enumerate}
\noindent
By applying Proposition \ref{prop3.5} we now obtain
\begin{align} \label{eq5.33} & \tfrac{1}{ Z^{disc}} \int_{\sim} F_{\alpha,y}(B)
\exp\bigl( 2 \pi i \ll A^{\orth}_c , \alpha {\mathbf p} (w{\mathfrak l})_{\Sigma} \gg_{\cA^{\orth}(q\cK)} \bigr) \exp(iS^{disc}_{CS}(A^{\orth}_c,B)) (DA^{\orth}_c \otimes DB) \nonumber \\
& = \int_{\sim} F(B)
\exp\bigl( i \ll A^{\orth}_c , v \gg_{\cA^{\orth}(q\cK)} \bigr)
d\nu^{disc}(A^{\orth}_c,B) \nonumber \\
& \quad \quad \sim \int^{\sim}_{V_0}
F( x_0 - M^{-1} v) dx_0 \overset{(*)}{\sim}
\int^{\sim}_{\ct} F_{\alpha,y}(b - M^{-1} v) db
\end{align}
Here $\int^{\sim} \cdots dx_0$ and
$\int^{\sim} \cdots db$ are improper integrals
defined according to Definition \ref{conv3.1} above.
(Remark \ref{rm_last_sec3} above and a ``periodicity argument'', which will be given in Step 4 below,
imply that these improper integrals are well-defined.)
Step $(*)$ above follows because $F( x_0 - M^{-1} v)$ depends only on the $\cB_{c}(q\cK)$-component of
$x_0 \in V_0 = \cB_{c}(q\cK) \oplus (V_2)^{\orth} \cong \ct \oplus (V_2)^{\orth}$.
\smallskip
According to Eq. \eqref{eq5.28''} we have $\alpha f \in V_1 = (\cB_c(q\cK))^{\orth}$, so Eq. \eqref{eq_def_v} implies that
\begin{equation} \label{eq5.34} M^{-1} v = - \tfrac{1}{k} \alpha {\mathbf p} f
\end{equation}
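Spelled out explicitly: by Eq. \eqref{eq_def_v} and the definition of $M$ in Eq. \eqref{eq_def_M}, and since $\alpha {\mathbf p} f \in V_1$, we have
\begin{equation*}
v = (\star_K \circ \pi \circ d_{q\cK}) \cdot (2 \pi \alpha {\mathbf p} f)
= - \tfrac{1}{2 \pi k} M \cdot (2 \pi \alpha {\mathbf p} f) = - \tfrac{1}{k} M \cdot (\alpha {\mathbf p} f)
\end{equation*}
and applying $M^{-1}$ to both sides yields Eq. \eqref{eq5.34}.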
\noindent Combining Eqs. \eqref{eq5.22}, \eqref{eq5.21}, \eqref{eq5.33}, and \eqref{eq5.34}
we obtain
\begin{multline} \label{eq5.35}
\WLO^{disc}_{rig}(L) \sim \sum_{\alpha \in \Lambda} m_{\lambda_1}(\alpha)
\sum_{y \in I} \\
\times \int^{\sim}_{\ct} db \ \biggl[ \exp\bigl( - 2 \pi i k \langle y, B(\sigma_0) \rangle \bigr) \bigl( \prod_{x}
1^{\smooth}_{\ct_{reg}}(B(x)) \bigr) \\
\times \bigl( \exp(2 \pi i \langle \alpha, \Psi(B) \rangle ) \bigr) \Det^{disc}(B)
\biggr]_{| B = b + \tfrac{1}{k} \alpha {\mathbf p} f }
\end{multline}
It is not difficult to see that
Eq. \eqref{eq5.35} also holds if we redefine $f$
using the normalization condition
\begin{equation} \label{eq5.28'} f(\sigma_0)=0
\end{equation}
instead of the normalization condition \eqref{eq5.28''} above\footnote{this follows from Eq. \eqref{eq1_lastrmsec3}
in Remark \ref{rm_last_sec3} above
and the periodicity properties of the integrand in
$\int^{\sim}_{\ct} \cdots db$ (for fixed $y$ and $\alpha$),
cf. Step 4 below}.
\subsubsection*{c) Step 3: Rewriting Eq. \eqref{eq5.35}}
\begin{lemma} \label{lem2_pre}
Assume that $ (w{\mathfrak l})_{\Sigma} \in C_1(q\cK) \cong C^1(q\cK,\bR)$ is as in Eq. \eqref{eq_mathfrak_l_def} above. Then there is an $f \in C^0(\cK,\bR) \subset C^0(q\cK,\bR)$, unique up to an additive constant,
such that $ (w{\mathfrak l})_{\Sigma} = m_{\bR} \cdot f $.
\end{lemma}
\begin{proof} That $f$ is unique up to an additive constant follows by combining the definition of $m_{\bR}$
with the real analogue of Eq. \eqref{eq_obs1} in Sec. \ref{subsec3.0} above and
the fact that $\star_{K}:C^1(K,\bR) \to C^1(K,\bR)$ is a bijection.\par
In order to show the existence of $f$ we observe first that the assumption that
$R = R_1$ is a simplicial torus ribbon knot of standard type in $\cK \times \bZ_N$ implies that
$\Sigma \backslash (\arc({\mathfrak l}^1_{\Sigma}) \cup \arc({\mathfrak l}^2_{\Sigma}))$
has three connected components.
Let us denote these three connected components by
$Z_0, Z_1, Z_2$. The enumeration is chosen such that
$Z_0$ is the connected component containing $\arc({\mathfrak l}^0_{\Sigma})$
and $Z_1$ is the other connected component having $ \arc({\mathfrak l}^1_{\Sigma})$ on its boundary.
Let $f: \face_0(\cK) \to \bR$ be given by
\begin{equation} \label{eq_def_f}
f(\sigma):=
\begin{cases} c & \text{ if } \sigma \in \overline{Z_1} \\
c \pm \tfrac{1}{2} & \text{ if } \sigma \in Z_0 \\
c \pm 1 & \text{ if } \sigma \in \overline{Z_2} \\
\end{cases}
\end{equation}
for all $\sigma \in \face_0(\cK)$
where $\overline{Z_i}$ is the closure of $Z_i$ in $\Sigma$,
$c \in \bR$ is an arbitrary constant,
and the sign $\pm$ is ``$+$'' if for any $k \le \mathbf n$
the edge $\star_K ( \pi ( {\mathfrak l}^{0(k)}_{\Sigma}))$
points from $Z_2$ to $Z_1$ and ``$-$'' otherwise.
In order to conclude the proof of Lemma \ref{lem2_pre}
we have to show that
\begin{equation} \label{eq_lemma3_crucial}
(w{\mathfrak l})_{\Sigma} = m_{\bR} \cdot f = \star_K(\pi(d_{q\cK}f))
\end{equation}
Let $S $ be the set of those $e \in \face_1(q\cK)$ which are contained in $Z_0$
but do not lie on $\arc({\mathfrak l}^0_{\Sigma})$.
Now observe that $(d_{q\cK} f)(e) = 0$ if $e \notin S$.
On the other hand if $e \in S$ we have
$$(d_{q\cK} f)(e) = \pm \tfrac{1}{2} \sgn(e) $$
where the sign $\pm$ is the same as in Eq. \eqref{eq_def_f} and
where $\sgn(e) = 1$ if the (oriented) edge $e$ ``points from'' the region $Z_1$ to the region $Z_2$ and $\sgn(e) = -1$ otherwise. \par
From the definition of $q\cK$ it follows that every $e \in \face_1(q\cK)$ has
exactly one endpoint in $ \face_0(K_1) \cup \face_0(K_2) \subset \face_0(q\cK)$
and one endpoint in $(\face_0(K_1) \cup \face_0(K_2))^c := \face_0(q\cK) \backslash (\face_0(K_1) \cup \face_0(K_2))$.
On the other hand, if $e \in S$ then both endpoints of $e$ will be in $\bigcup_{j=0}^2 \arc({\mathfrak l}^{j}_{\Sigma})$.
Accordingly, for every $e \in S$ we can distinguish between exactly three types:
$$\text{ $e$ is of type $j$ ($j=0,1,2$) if $e$ has an endpoint in $\arc({\mathfrak l}^{j}_{\Sigma}) \cap (\face_0(K_1) \cup \face_0(K_2))^c$ }$$
Next we observe that for each fixed $e \in S$ of type $j$ there exist exactly two indices $k \le \mathbf n$
such that
\begin{equation} \label{eq_lemma_step3} \star_K (\pi(\sgn(e) \cdot e)) = \pi({\mathfrak l}^{j(k)}_{\Sigma})
\end{equation}
Moreover, if we let $e \in S$ vary then for $j =1,2$ every $k \le \mathbf n$ arises exactly once on the RHS
of Eq. \eqref{eq_lemma_step3}
and for $j=0$ every $k \le \mathbf n$ arises exactly twice.
Taking this into account we arrive at
\begin{multline*} \star_K(\pi(d_{q\cK}f)) = \sum_{e \in S} \tfrac{1}{2} \star_K (\pi(\sgn(e) \cdot e))
= \sum_{k=1}^{\mathbf n} \bigl( \tfrac{1}{4} \pi({\mathfrak l}^{1(k)}_{\Sigma}) + \tfrac{1}{4} \pi({\mathfrak l}^{2(k)}_{\Sigma}) + 2 \tfrac{1}{4} \pi({\mathfrak l}^{0(k)}_{\Sigma}) \bigr) \\
= \sum_{k=1}^{\mathbf n} \bigl( \tfrac{1}{4} {\mathfrak l}^{1(k)}_{\Sigma} + \tfrac{1}{4} {\mathfrak l}^{2(k)}_{\Sigma} + \tfrac{1}{2} {\mathfrak l}^{0(k)}_{\Sigma} \bigr) =
\sum_{k=1}^{\mathbf n} \sum_{j=0}^2 w(j) {\mathfrak l}^{j(k)}_{\Sigma} =
(w{\mathfrak l})_{\Sigma}
\end{multline*}
\end{proof}
In the following we assume that $B \in \cB(\cK) \subset \cB(q\cK) = C^0(q\cK,\ct) $ is of the form
\begin{equation} \label{eq5.37}
B= b + \tfrac{1}{k} \alpha {\mathbf p} f
\end{equation}
with $b \in \ct$, $\alpha \in \Lambda$ and where $f$ is given by
Eq. \eqref{eq5.28} above in combination with \eqref{eq5.28'}.
\begin{observation} \label{obs3} Let $Z_0, Z_1, Z_2$ be as in the proof of Lemma \ref{lem2_pre} above.
Then the restriction of $B: \face_0(q\cK) \to \ct$
to $\face_0(q\cK) \cap Z_i$ is constant for each $i = 0,1,2$.
Moreover, if $i=1,2$ then also the restriction of
$B: \face_0(q\cK) \to \ct$
to $\face_0(q\cK) \cap \overline{Z_i}$ is constant.
\end{observation}
We set $B(Z_i) := B(x)$ for any $x \in Z_i \cap \face_0(q\cK)$, which
-- according to Observation \ref{obs3} -- is well-defined.
\begin{lemma} \label{lem2} For every $B \in \cB(\cK)$
of the form \eqref{eq5.37} and fulfilling $\prod_{x}
1^{\smooth}_{\ct_{reg}}(B(x)) \neq 0$ we have\footnote{Note that the expression on the RHS of Eq. \eqref{eq5.70} is well-defined since by assumption $\prod_{x} 1^{\smooth}_{\ct_{reg}}(B(x)) \neq 0$, which implies that
$B(x) \in \ct_{reg}$ and therefore $\det\bigl(1_{\ck}-\exp(\ad(B(x)))_{|\ck}\bigr) \neq 0$
(recall the definition of $\ct_{reg}$ and cf. Eq. \eqref{eq_det_in_terms_of_roots} above)}
\begin{align} \label{eq5.70}
\Det^{disc}(B) & = \prod_{i=0}^2 \det\nolimits^{1/2}\bigl(1_{\ck}-\exp(\ad(B(Z_i)))_{|\ck}\bigr)^{\chi(Z_i)}
\end{align}
where $Z_i$, $i=0,1,2$ are as in the proof of Lemma \ref{lem2_pre}
and $\chi(Z_i)$ is the Euler characteristic of $Z_i$.
\end{lemma}
\begin{proof} We set $\face_p(\overline{Z_i}) :=
\{F \in \face_p(\cK) \mid \bar{F} \subset \overline{Z_i}\} = \{F \in \face_p(\cK) \mid F \subset \overline{Z_i}\}$,
for $i=0,1,2$, and $\face_p(Z_0) := \{F \in \face_p(\cK) \mid \bar{F} \subset Z_0\}$.
Since $\Sigma = S^2$ is the disjoint union of the three sets $Z_0$, $\overline{Z_1}$, and $\overline{Z_2}$
we obtain from Eq. \eqref{eq_def_DetFPdisc}
in Sec. \ref{subsec3.4} and Observation \ref{obs3}
\begin{align} \label{eq_5.44b}
\Det^{disc}(B) & = \prod_{p=0}^2 \biggl[ \biggl( \prod_{i=1}^2
\det\nolimits^{1/2}\bigl(1_{\ck}-\exp(\ad(B(Z_i)))_{|\ck}\bigr)^{(-1)^p} \biggr)^{\#
\face_p(\overline{Z_i})} \nonumber \\
& \quad \quad \quad \times \biggl(\det\nolimits^{1/2}\bigl(1_{\ck}-\exp(\ad(B(Z_0)))_{|\ck}\bigr)^{(-1)^p} \biggr)^{\# \face_p(Z_0)} \biggr] \nonumber \\
& \overset{(*)}{=} \prod_{i=0}^2 \det\nolimits^{1/2}\bigl(1_{\ck}-\exp(\ad(B(Z_i)))_{|\ck}\bigr)^{\sum_{p=0}^2 (-1)^p \# \face_p(\overline{Z_i})}
\end{align}
where in step $(*)$ we have used that
$\sum_{p=0}^2 (-1)^p \# \face_p(Z_0) = \sum_{p=0}^2 (-1)^p \# \face_p(\overline{Z_0})$.
The assertion of the lemma now follows by combining Eq. \eqref{eq_5.44b} with
$$ \chi(Z_i)= \chi(\overline{Z_i}) = \sum_{p=0}^2 (-1)^p \# \face_p(\overline{Z_i}), \quad i = 0,1,2 $$
where we have used that $\overline{Z_i}$ is a subcomplex of the CW complex $\cK$.
\end{proof}
Taking into account that $\arc(l^j_{\Sigma}) \subset \overline{Z_j}$ for $j=1,2$ and
$\arc(l^{0}_{\Sigma}) \subset Z_0$ we see that
Observation \ref{obs3} implies that
\begin{equation} \label{eq5.38} B(Z_j) =
B(\start l^{j(k)}_{\Sigma}) \quad \forall k \le n
\end{equation}
for every $B$ of the form in Eq. \eqref{eq5.37}. Moreover, for such $B$ we have
\begin{equation} B(Z_0) = \tfrac{1}{2} ( B(Z_1) + B(Z_2))
\end{equation}
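This last identity can be read off directly from Eq. \eqref{eq5.37} and the explicit form of $f$ in Eq. \eqref{eq_def_f}: since $f(Z_1) = c$, $f(Z_0) = c \pm \tfrac{1}{2}$, and $f(Z_2) = c \pm 1$, we have $f(Z_0) = \tfrac{1}{2}(f(Z_1) + f(Z_2))$ and therefore
\begin{equation*}
B(Z_0) = b + \tfrac{1}{k} \alpha {\mathbf p} f(Z_0)
= \tfrac{1}{2} \bigl( (b + \tfrac{1}{k} \alpha {\mathbf p} f(Z_1)) + (b + \tfrac{1}{k} \alpha {\mathbf p} f(Z_2)) \bigr) = \tfrac{1}{2} ( B(Z_1) + B(Z_2))
\end{equation*}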
Finally, note that for $j=0,1,2$ we have
\begin{equation} \label{eq5.39} {\mathbf q} = \wind(l^j_{S^1}) = \sum_k dt^{(N)}(l^{j(k)}_{S^1})
\end{equation}
In order to see this recall that
${\mathbf q}$ is the second winding number of the simplicial torus ribbon knot of standard type $R_1$,
which coincides with the winding number
$\wind(l^j_{S^1})$ of $l^j_{S^1}$, considered as a continuous map $S^1 \to S^1$.\par
Combining Eq. \eqref{eq5.35} and Eq. \eqref{eq5.17} with
Eqs. \eqref{eq5.38} -- \eqref{eq5.39} and Lemma \ref{lem2}, and taking into account
Eq. \eqref{eq5.28'} above (cf. the paragraph before Observation \ref{obs3}), we obtain
\begin{equation} \label{eq5.50} \WLO^{disc}_{rig}(L)
\sim \sum_{\alpha \in \Lambda}
m_{\lambda_1}(\alpha) \ \sum_{y \in I}
\int^{\sim}_{\ct} db \ e^{ - 2\pi i k \langle y, b \rangle} F_{\alpha}(b)
\end{equation}
where for $b \in \ct$ and $\alpha \in \Lambda$ we have set
\begin{multline} \label{eq5.49}
F_{\alpha}(b) := \bigl[ \bigl( \prod_{x}
1^{\smooth}_{\ct_{reg}}(B(x)) \bigr) \bigl( \exp( \pi i {\mathbf q} \langle \alpha,
B(Z_1) + B(Z_2) \rangle ) \\
\times \prod_{i=0}^2 \det\nolimits^{1/2}\bigl(1_{\ck}-\exp(\ad(B(Z_i)))_{|\ck}\bigr)^{\chi(Z_i)} \bigr) \bigr]_{| B = b + \tfrac{1}{k} \alpha {\mathbf p} f }
\end{multline}
\subsubsection*{d) Step 4: Performing $\int^{\sim} \cdots db$ and $\sum_{y \in I}$ in Eq. \eqref{eq5.50}}
For all $y \in I$ and $\alpha \in \Lambda$ the function
$\ct \ni b \mapsto e^{ - 2\pi i k \langle y, b \rangle} F_{\alpha}(b)
\in \bC$ is invariant under all translations of the form
$b \mapsto b + x$ where $x \in I = \ker(\exp_{| \ct}) \cong \bZ^{\dim(\ct)}$.
In order to prove this it is enough to show that
for all $b \in \ct$ and $x \in I$ we have
\begin{subequations} \label{eq5.51}
\begin{align}
1^{\smooth}_{\ct_{reg}}(b + x) & = 1^{\smooth}_{\ct_{reg}}(b)\\
e^{ 2\pi i \eps \langle \alpha, b + x \rangle} & = e^{ 2\pi i \eps \langle \alpha, b \rangle}
\quad \text{ for all $\alpha \in \Lambda$, $\eps \in \bZ$} \\
\det\nolimits^{1/2}(1_{\ck} - \exp(\ad(b + x))_{| \ck}) & = \det\nolimits^{1/2}(1_{\ck} - \exp(\ad(b))_{| \ck})\\
e^{ - 2\pi i k \langle y, b + x \rangle} & = e^{ - 2\pi i k \langle y, b\rangle} \quad \text{
for all $y \in I$}
\end{align}
\end{subequations}
Note that because of the assumption that
$G$ is simply-connected we have $\Gamma = I$.
The first of the four equations above
therefore follows from the assumption in Sec. \ref{subsec3.6} that $1^{\smooth}_{\ct_{reg}}$
is invariant under $\cW_{\aff}$.
The second equation follows because by definition,
$\Lambda$ is the lattice dual to $\Gamma = I$.
The third equation follows from Eq. \eqref{eq_det_in_terms_of_roots} above
by taking into account that $(-1)^{ \sum_{\alpha \in \cR_+} \langle \alpha, x \rangle}
= (-1)^{ 2 \langle \rho, x \rangle} = 1$ for $x \in \Gamma = I$
because $\rho = \tfrac{1}{2} \sum_{\alpha \in \cR_+} \alpha$
is an element of the weight lattice $\Lambda$.
Finally, in order to see that the fourth equation holds, observe
that it is enough to show that
\begin{equation} \label{eq_CartanMatrix} \langle \Check{\alpha}, \Check{\beta} \rangle \in \bZ \quad \text{ for all coroots $\Check{\alpha}, \Check{\beta}$}
\end{equation}
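Indeed, since $G$ is simply-connected we have $\Gamma = I$, i.e. the lattice $I$ is spanned by the coroots, so Eq. \eqref{eq_CartanMatrix} gives $\langle y, x \rangle \in \bZ$ for all $x, y \in I$, and therefore
\begin{equation*}
e^{- 2\pi i k \langle y, b + x \rangle} = e^{- 2\pi i k \langle y, b \rangle} \, e^{- 2\pi i k \langle y, x \rangle} = e^{- 2\pi i k \langle y, b \rangle}
\end{equation*}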
According to the general theory of semi-simple Lie algebras we have
$2\tfrac{\langle \Check{\alpha}, \Check{\beta} \rangle}{\langle \Check{\alpha}, \Check{\alpha} \rangle}
\in \bZ$. Moreover, there are at most two different (co)root lengths
and the quotient of the squared lengths of the long and short coroots is either 1, 2, or 3.
Since the normalization of $\langle
\cdot, \cdot \rangle$ was chosen such that
$\langle \Check{\alpha}, \Check{\alpha} \rangle = 2$ holds
if $\Check{\alpha}$ is a short coroot we therefore have
$\langle \Check{\alpha}, \Check{\alpha} \rangle/2 \in \{1,2,3\}$
and \eqref{eq_CartanMatrix} follows.
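For the reader's convenience, here is how the two facts combine to give Eq. \eqref{eq_CartanMatrix}:
\begin{equation*}
\langle \Check{\alpha}, \Check{\beta} \rangle =
\underbrace{2 \tfrac{\langle \Check{\alpha}, \Check{\beta} \rangle}{\langle \Check{\alpha}, \Check{\alpha} \rangle}}_{\in \, \bZ} \cdot
\underbrace{\tfrac{\langle \Check{\alpha}, \Check{\alpha} \rangle}{2}}_{\in \, \{1,2,3\}} \ \in \ \bZ
\end{equation*}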
\medskip
From Eqs. \eqref{eq5.49} and \eqref{eq5.51} we conclude that
$\ct \ni b \mapsto e^{ - 2\pi i k \langle y, b \rangle} F_{\alpha}(b)
\in \bC$ is indeed $I$-periodic
and we can therefore apply Eq. \eqref{eq2_lastrmsec3} in Remark \ref{rm_last_sec3} above and obtain
\begin{equation} \label{eq5.57}
\int^{\sim} db \
e^{ - 2\pi i k \langle y, b \rangle} F_{\alpha}(b)
\sim \int_{Q} db \ e^{ - 2\pi i k \langle y, b \rangle} F_{\alpha}(b)
= \int db \ e^{ - 2\pi i k \langle y, b \rangle} 1_{Q}(b) F_{\alpha}(b)
\end{equation}
where on the RHS $\int_{Q} \cdots db$ and $\int \cdots db$ are now ordinary integrals
and where we have set
\begin{equation} \label{eq5.54}
Q:= \{ \sum_i \lambda_i e_i \mid \lambda_i \in (0,1) \text{ for all $i \le m$} \} \subset \ct,
\end{equation}
Here $(e_i)_{i \le m}$ is an (arbitrary) fixed basis of $I$.\par
According to Eq. \eqref{eq5.57} we can now rewrite Eq. \eqref{eq5.50} as
\begin{align} \label{eq5.63}
\WLO^{disc}_{rig}(L)&\sim \sum_{\alpha \in \Lambda}
m_{\lambda_1}(\alpha) \sum_{y \in I}
\int db \ e^{ - 2\pi i k \langle y, b \rangle} 1_{Q}(b) F_{\alpha}(b) \nonumber \\
& \overset{(*)}{\sim} \sum_{\alpha \in \Lambda}
m_{\lambda_1}(\alpha) \sum_{b \in \tfrac{1}{k} \Lambda} 1_{Q}(b) F_{\alpha}(b)
\end{align}
where in step $(*)$ we have used,
for each $\alpha \in \Lambda$, the Poisson summation formula
\begin{equation} \label{eq5.61}
\sum_{y \in I} e^{ - 2\pi i k \langle y, b \rangle}
= c_{\Lambda} \sum_{x \in \tfrac{1}{k} \Lambda} \delta_x(b)
\end{equation}
where $\delta_x$ is the delta distribution in $x \in \ct$ and $c_{\Lambda}$ a constant depending on the lattice $\Lambda$. (Recall that the lattice $\Lambda$ is dual to $\Gamma = I$.)
Observe also that $1_{Q} F_{\alpha}$ clearly has compact support
and that $1_{Q} F_{\alpha}$ is smooth because $ \partial Q \subset \ct_{sing} = \ct \backslash \ct_{reg}$ and
$F_{\alpha}$ vanishes\footnote{note that according to the definition of $F_{\alpha}$ and Eq. \eqref{eq5.28'}
there is a factor $1^{(s)}_{\ct_{reg}}(b)$ appearing in $F_{\alpha}(b)$
which vanishes on a neighborhood of $\ct_{sing}$, cf. Sec. \ref{subsec3.6}}
on a neighborhood of $\ct_{sing}$. \par
Finally, note that since $s>0$ above was chosen small enough (cf. Footnote \ref{ft_sec3.6} in Sec. \ref{subsec3.6})
we have for every $B \in \cB(q\cK)$ of the form Eq. \eqref{eq5.37} with $b \in \tfrac{1}{k} \Lambda$
\begin{equation} \label{eq_lem3}
\prod_{x \in \face_0(q\cK)} 1^{\smooth}_{\ct_{reg}}(B(x)) =
\prod_{x \in \face_0(q\cK)} 1_{\ct_{reg}}(B(x)) \overset{(+)}{=} \prod_{i=0}^2 1_{\ct_{reg}}(B(Z_i))
\end{equation}
where step $(+)$ follows from Observation \ref{obs3}.\par
Using this we obtain from Eq. \eqref{eq5.63} and Eq. \eqref{eq5.49} after the change of variable
$b \to k b =: \alpha_0$ and writing $\alpha_1$ instead of $\alpha$
(and by taking into account that $\chi(Z_0) = 0$ and
$\chi(Z_1) = \chi(Z_2) = 1$):
\begin{align} \label{eq5.89_org} \WLO^{disc}_{rig}(L) & \sim
\sum_{\alpha_0, \alpha_1 \in \Lambda} 1_{k Q}(\alpha_0)
m_{\lambda_1}(\alpha_1) \nonumber \\
& \quad \quad \quad \times \biggl[ \biggl( \prod_{i=0}^2 1_{\ct_{reg}}(B(Z_i))\biggr)
\biggl(\prod_{i=1}^2 \det\nolimits^{1/2}\bigl(1_{\ck}-\exp(\ad(B(Z_i)))_{|\ck}\bigr) \biggr) \nonumber \\
& \quad \quad \quad \times \exp( \pi i {\mathbf q} \langle \alpha_1,
B(Z_1) + B(Z_2) \rangle ) \biggr]_{|
B = \tfrac{1}{k} ( \alpha_0 + \alpha_1 {\mathbf p} f)}
\end{align}
Recall from the paragraph at the beginning of Sec. \ref{subsec5.4} that so far we have been working with
the original definition of $\WLO^{disc}_{rig}(L)$ given in Sec. \ref{subsec3.9}. \par
By examining the calculations above it becomes clear\footnote{for example, if one works with the
first modification (M1) on the list in Sec. \ref{subsec3.10}
then this point is obvious after examining the proof of Theorem 3.5
in \cite{Ha3b} where simplicial ribbons
in $q\cK \times \bZ_N$ are used. For modification (M2) this is also not difficult to see}
that if one modifies the definition of $\WLO^{disc}_{rig}(L)$
in one of the possible ways listed in Sec. \ref{subsec3.10} then instead of Eq. \eqref{eq5.89_org} one arrives at
\begin{align} \label{eq5.89} \WLO^{disc}_{rig}(L) & \sim
\sum_{\alpha_0, \alpha_1 \in \Lambda} 1_{k Q}(\alpha_0)
m_{\lambda_1}(\alpha_1) \nonumber \\
& \quad \quad \quad \times \biggl[ \biggl( \prod_{i=1}^2 1_{\ct_{reg}}(B(Z_i)) \biggr)
\biggl( \prod_{i=1}^2 \det\nolimits^{1/2}\bigl(1_{\ck}-\exp(\ad(B(Z_i)))_{|\ck}\bigr)
\biggr) \nonumber \\
& \quad \quad \quad \times \exp( \pi i {\mathbf q} \langle \alpha_1,
B(Z_1) + B(Z_2) \rangle ) \biggr]_{|
B = \tfrac{1}{k} ( \alpha_0 + \alpha_1 {\mathbf p} f)}
\end{align}
Note that the only difference between Eq. \eqref{eq5.89} and \eqref{eq5.89_org}
is that the $1_{\ct_{reg}}(B(Z_0))$-factor appearing in Eq. \eqref{eq5.89_org}
no longer appears in Eq. \eqref{eq5.89}.
\subsubsection*{e) Step 5: Some algebraic/combinatorial arguments}
For each $\alpha_0, \alpha_1 \in \Lambda$ we define
$$\eta_{(\alpha_0,\alpha_1)}: \{1, 2\} \to \Lambda$$
by
\begin{equation}\eta_{(\alpha_0,\alpha_1)}(i)= k B(Z_i) - \rho \quad \quad i =1,2
\end{equation}
where $B= \tfrac{1}{k}(\alpha_0 + \alpha_1 {\mathbf p} f)$.
Observe that for $\eta=\eta_{(\alpha_0,\alpha_1)}$ and $B= \tfrac{1}{k}(\alpha_0 + \alpha_1 {\mathbf p} f)$
we have
\begin{subequations}
\begin{equation}
\det\nolimits^{1/2}\bigl(1_{\ck}-\exp(\ad(B(Z_i)))_{|\ck}\bigr) = \det\nolimits^{1/2}\bigl(1_{\ck}-\exp(\ad(\tfrac{1}{k}(\eta(i)+\rho)))_{|\ck}\bigr) \overset{(*)}{\sim} d_{\eta(i)},
\end{equation}
where in step $(*)$ we used Eq. \eqref{eq_det_in_terms_of_roots} and Eq. \eqref{eq_def_th+d}. \par
In the following we will assume, without loss of generality\footnote{one can check easily that
the final result for $\WLO_{rig}^{disc}(L)$ does not depend on this assumption }
that $\sigma_0 \in Z_1$ and therefore
\begin{equation} \label{eq_sigma_0_in_Z1}
B(\sigma_0) = B(Z_1)
\end{equation}
Moreover, we will assume, without loss of generality\footnote{again one can check easily that
the final result for $\WLO_{rig}^{disc}(L)$ does not depend on this assumption }
that the enumeration of $Z_1$ and $Z_2$ was chosen such that we have the situation where
in Eq. \eqref{eq_def_f} in the proof of
Lemma \ref{lem2_pre} the ``$-$''-sign appears. Then we have
\begin{equation}
\alpha_0 = k B(\sigma_0) = k B(Z_1) = \eta(1)+\rho,
\end{equation}
\begin{equation}
\alpha_1 = \tfrac{1}{\mathbf p} (\eta(1)-\eta(2))
\end{equation}
\begin{multline} \label{eq_prep_diagonal_argument}
\exp\bigl( \pi i {\mathbf q} \langle \alpha_1,B(Z_1) + B(Z_2)\rangle \bigr)
= \exp\bigl( \pi i {\mathbf q} \langle \tfrac{1}{\mathbf p} (\eta(1)-\eta(2)),\tfrac{1}{k}(\eta(1)+\eta(2) + 2 \rho)\rangle \bigr) \\
= \exp\Bigl( \tfrac{\pi i}{k} \tfrac{\mathbf q}{\mathbf p}
\bigl( \langle \eta(1),\eta(1) + 2\rho \rangle - \langle \eta(2),\eta(2) + 2\rho \rangle \bigr) \Bigr)
= \theta_{\eta(1)}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\eta(2)}^{-\tfrac{\mathbf q}{\mathbf p}}
\end{multline}
\end{subequations}
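The middle equality in Eq. \eqref{eq_prep_diagonal_argument} is nothing but bilinearity and symmetry of $\langle \cdot, \cdot \rangle$. As a quick numerical sanity check (with an ad hoc stand-in form, namely the standard dot product; any symmetric bilinear form behaves the same way) one can verify the underlying identity $\langle \tfrac{1}{p}(x-y), \tfrac{1}{k}(x+y+2r)\rangle = \tfrac{1}{kp}\bigl(\langle x, x+2r\rangle - \langle y, y+2r\rangle\bigr)$:

```python
import random

def check_bilinear_identity(dim=3, trials=200):
    """Numerically check
       <(x - y)/p, (x + y + 2r)/k> = (1/(k p)) (<x, x+2r> - <y, y+2r>)
    for the standard dot product (any symmetric bilinear form behaves
    the same way, by bilinearity and symmetry)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    for _ in range(trials):
        x = [random.randint(-5, 5) for _ in range(dim)]
        y = [random.randint(-5, 5) for _ in range(dim)]
        r = [random.randint(-5, 5) for _ in range(dim)]
        p, k = random.randint(1, 7), random.randint(3, 12)
        lhs = dot([a - b for a, b in zip(x, y)],
                  [a + b + 2 * c for a, b, c in zip(x, y, r)]) / (p * k)
        rhs = (dot(x, [a + 2 * c for a, c in zip(x, r)])
               - dot(y, [b + 2 * c for b, c in zip(y, r)])) / (p * k)
        assert abs(lhs - rhs) < 1e-9
    return True

assert check_bilinear_identity()
```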
In view of the previous equations
it is clear that we can rewrite Eq. \eqref{eq5.89} in the following form
\begin{multline} \label{eq5.89b} \WLO^{disc}_{rig}(L) \sim
\sum_{\alpha_0, \alpha_1 \in \Lambda} \biggl[
m_{\lambda_1}\bigl(\tfrac{1}{\mathbf p} (\eta(1)-\eta(2))\bigr)
1_{k Q}(\eta(1)+\rho) \bigl( \prod_{i=1}^2 1_{\ct_{reg}}(\tfrac{1}{k}(\eta(i)+\rho)) \bigr) \\
\quad \quad \quad \times d_{\eta(1)} d_{\eta(2)} \theta_{\eta(1)}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\eta(2)}^{-\tfrac{\mathbf q}{\mathbf p}} \biggr]_{|\eta = \eta_{( \alpha_0,\alpha_1)}}
\end{multline}
But
\begin{multline*}
1_{k Q}(\eta(1)+\rho) \ \bigl( \prod_{i=1}^2 1_{\ct_{reg}}(\tfrac{1}{k}(\eta(i)+\rho)) \bigr)\\
= 1_{k Q}(\eta(1)+\rho) 1_{k \ct_{reg}}(\eta(1)+\rho) 1_{k \ct_{reg}}(\eta(2)+\rho)
= 1_{k (Q \cap \ct_{reg})}(\eta(1)+\rho) 1_{k \ct_{reg}}(\eta(2) + \rho)
\end{multline*}
Combining this with Eq. \eqref{eq5.89b} we obtain
\begin{align} \label{eq5.89c} \WLO^{disc}_{rig}(L) & \sim
\sum_{\eta_1, \eta_2 \in \Lambda } \biggl[
m_{\lambda_1}\bigl(\tfrac{1}{\mathbf p} (\eta_1-\eta_2)\bigr)
1_{k (Q \cap \ct_{reg})}(\eta_1 + \rho) 1_{k \ct_{reg}}(\eta_2 + \rho)
d_{\eta_1} d_{\eta_2} \theta_{\eta_1}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\eta_2}^{-\tfrac{\mathbf q}{\mathbf p}} \biggr] \nonumber \\
& \sim \sum_{\eta_1 \in (k (Q \cap \ct_{reg}) - \rho) \cap \Lambda, \eta_2 \in
(k \ct_{reg} - \rho) \cap \Lambda} \biggl[
m_{\lambda_1}\bigl(\tfrac{1}{\mathbf p} (\eta_1-\eta_2)\bigr)
d_{\eta_1} d_{\eta_2} \theta_{\eta_1}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\eta_2}^{-\tfrac{\mathbf q}{\mathbf p}} \biggr]
\end{align}
Let $P$ be the unique Weyl alcove which is contained in the Weyl chamber $\CW$ fixed above
and which has $0 \in \ct$ on its boundary. Explicitly, $P$ is given by
\begin{equation} \label{eq_P_formula} P = \{b \in \CW \mid \langle b,\theta \rangle < 1 \}
\end{equation}
Note that the map
\begin{subequations}
\begin{equation}\cW_{\aff} \times P \ni (\tau,b) \mapsto \tau \cdot b \in \ct_{reg}
\end{equation}
is a well-defined bijection.
Moreover, there is a finite subset $W$ of $\cW_{\aff}$ such that
\begin{equation}W \times P \ni (\tau,b) \mapsto \tau \cdot b \in Q \cap \ct_{reg}
\end{equation}
\end{subequations}
is a bijection, too.
Clearly, these two bijections above induce two other bijections
\begin{subequations}
\begin{equation}\cW_{\aff} \times (k P - \rho) \ni (\tau,b) \mapsto \tau \ast b \in k \ct_{reg} - \rho
\end{equation}
\begin{equation}W \times (k P - \rho) \ni (\tau,b) \mapsto \tau \ast b \in k (Q \cap \ct_{reg}) - \rho
\end{equation}
\end{subequations}
where $\ast$ is given as in Eq. \eqref{eq_def_ast} in Sec. \ref{subsec5.2} above.\par
Observe that for $\eta \in \Lambda$ and $\tau \in \cW_{\aff}$ we have
\begin{subequations} \label{eq_d+th_inv}
\begin{align}
\label{eq_theta_inv}
\theta_{\tau \ast \eta} &= \theta_{\eta}\\
\label{eq_d_inv}
d_{\tau \ast \eta} & = (-1)^{\tau} d_{\eta}
\end{align}
\end{subequations}
[Since
$\cW_{\aff}$ is generated by $\cW$ and the translations associated to the lattice
$\Gamma$ it is enough to check Eq. \eqref{eq_d_inv} and Eq. \eqref{eq_theta_inv} for elements of $\cW$ and the aforementioned translations. If $\tau \in \cW $ then $\tau \ast \eta = \tau \cdot \eta + \tau \cdot \rho - \rho$. On the other hand if $\tau$ is the translation by $y \in \Gamma$ we have
$\tau \ast \eta = \eta + k y$. Using this\footnote{and taking into account the relations
$\rho \in \Lambda$, $\forall x,y \in \Gamma: \langle x, y \rangle \in \bZ$, and
$\forall x \in \Gamma: \langle x, x \rangle \in 2\bZ$ (cf. Eq. \eqref{eq_CartanMatrix} above and the paragraph following Eq. \eqref{eq_CartanMatrix})} Eq. \eqref{eq_theta_inv} follows from
Eq. \eqref{eq_def_th} and Eq. \eqref{eq_d_inv} follows from Eq. \eqref{eq_def_d}
and Eq. \eqref{eq_def_S}].\par
In the special case where $\tau \in \cW$ we also have
$\theta_{\tau \ast \eta}^{\tfrac{\mathbf q}{\mathbf p}} = \theta_{\eta}^{\tfrac{\mathbf q}{\mathbf p}}$.
However, if ${\mathbf p} \neq \pm 1$ we cannot expect the last relation to hold
for a general element $\tau$ of $\cW_{\aff}$ (cf. Footnote \ref{ft_warning} in Sec. \ref{subsec5.2} above). \par
On the other hand, by taking into account\footnote{this is relevant only in the special
case where $\tau_1$ is a translation by $y \in \Gamma$.
If $\tau_1 \in \cW$
then the validity of Eq. \eqref{eq_diagonal_inv} follows from the $\cW$-invariance of $m_{\lambda_1}(\cdot)$
and the relations mentioned above}
Eq. \eqref{eq_prep_diagonal_argument} above and\footnote{recall that -- according to the conventions made above -- we have $m_{\lambda_1}(\cdot) = \bar{m}_{\lambda_1}(\cdot)$} Eq. \eqref{eq_mbar_def} above
we see that we always have
\begin{equation} \label{eq_diagonal_inv}
m_{\lambda_1}\bigl(\tfrac{1}{\mathbf p} (\tau_1 \ast \eta_1- \tau_1 \ast \eta_2)\bigr)
\theta_{\tau_1 \ast \eta_1}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\tau_1 \ast \eta_2}^{-\tfrac{\mathbf q}{\mathbf p}} =
m_{\lambda_1}\bigl(\tfrac{1}{\mathbf p} (\eta_1- \eta_2)\bigr)
\theta_{\eta_1}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\eta_2}^{-\tfrac{\mathbf q}{\mathbf p}}
\end{equation}
for all $\tau_1 \in \cW_{\aff}$ and $\eta_1,\eta_2 \in \Lambda$.
Combining this with Eq. \eqref{eq5.89c} we finally obtain
\begin{align} \label{eq5.89d}
& \WLO^{disc}_{rig}(L) \nonumber \\
& \sim \sum_{\eta_1, \eta_2 \in (k P - \rho) \cap \Lambda} \sum_{\tau_1 \in W, \tau_2 \in \cW_{\aff}}
\biggl[ m_{\lambda_1}\bigl(\tfrac{1}{\mathbf p} (\tau_1 \ast \eta_1- \tau_2 \ast \eta_2)\bigr)
d_{\tau_1 \ast \eta_1} d_{\tau_2 \ast \eta_2} \theta_{\tau_1 \ast \eta_1}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\tau_2 \ast \eta_2}^{-\tfrac{\mathbf q}{\mathbf p}} \biggr] \nonumber \\
& = \sum_{\eta_1, \eta_2 \in (k P - \rho) \cap \Lambda} \sum_{\tau_1 \in W, \tau_2 \in \cW_{\aff}}
\biggl[ m_{\lambda_1}\bigl(\tfrac{1}{\mathbf p} (\tau_1 \ast \eta_1- \tau_2 \ast \eta_2)\bigr)
(-1)^{\tau_1} (-1)^{\tau_2} d_{\eta_1} d_{\eta_2} \theta_{\tau_1 \ast \eta_1}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\tau_2 \ast \eta_2}^{-\tfrac{\mathbf q}{\mathbf p}} \biggr] \nonumber \\
& \overset{(*)}{=} \sum_{\eta_1, \eta_2 \in (k P - \rho) \cap \Lambda} \sum_{\tau_1 \in W, \tau \in \cW_{\aff}}
\biggl[ m_{\lambda_1}\bigl(\tfrac{1}{\mathbf p} ( \eta_1- \tau \ast \eta_2)\bigr) (-1)^{\tau}
d_{\eta_1} d_{\eta_2} \theta_{\eta_1}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\tau \ast \eta_2}^{-\tfrac{\mathbf q}{\mathbf p}} \biggr] \nonumber \\
& \overset{(**)}{\sim}
\sum_{\eta_1, \eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}}
\biggl[ (-1)^{\tau} m_{\lambda_1}\bigl(\tfrac{1}{\mathbf p} ( \eta_1- \tau \ast \eta_2)\bigr)
d_{\eta_1} d_{\eta_2} \theta_{\eta_1}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\tau \ast \eta_2}^{-\tfrac{\mathbf q}{\mathbf p}} \biggr] \nonumber\\
& = \sum_{\eta_1, \eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}}
m^{\eta_1\eta_2}_{\lambda_1,\mathbf p}(\tau)
d_{\eta_1} d_{\eta_2} \theta_{\eta_1}^{\tfrac{\mathbf q}{\mathbf p}} \theta_{\tau \ast \eta_2}^{-\tfrac{\mathbf q}{\mathbf p}}
\end{align}
where in step $(*)$ we applied Eq. \eqref{eq_diagonal_inv}
and made the change of variable $\tau_2 \to \tau := \tau_1^{-1} \tau_2$
and where in step $(**)$ we have used that (cf. \eqref{eq_P_formula} above)
\begin{align*}
(k P - \rho)
& = \{k b - \rho \mid b \in \CW \text{ and } \langle b ,\theta \rangle < 1 \}\\
& = \{\bar{b} \in \ct \mid \bar{b} + \rho \in \CW \text{ and } \langle \bar{b} + \rho,\theta\rangle < k \}
\end{align*}
and therefore
\begin{align*}
\Lambda \cap (k P - \rho)
& = \{\lambda \in \Lambda \mid \lambda + \rho \in \CW \text{ and } \langle \lambda + \rho,\theta\rangle < k \}\\
& \overset{(*)}{=} \{\lambda \in \Lambda \cap \overline{\CW} \mid \langle \lambda + \rho,\theta\rangle < k \} = \Lambda^{k}_+
\end{align*}
where step $(*)$ follows because for each $\lambda \in \Lambda$,
$\lambda + \rho $ is in the open Weyl chamber $\CW$ iff $\lambda$
is in the closure $\overline{\CW}$ (cf. the last remark in Sec. V.4 in \cite{Br_tD}).
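The identification $\Lambda \cap (kP - \rho) = \Lambda_+^k$ can be made concrete in the simplest case $G = SU(2)$: in standard conventions (an assumption made here for illustration, not the paper's own normalization) $\Lambda$ may be identified with $\bZ$, $\rho$ with $1$, and $\langle \lambda + \rho, \theta \rangle$ with $\lambda + 1$, so that both descriptions single out $\{0, 1, \ldots, k-2\}$. A small sketch:

```python
# Sketch for G = SU(2): identify Lambda with Z, rho with 1, and the
# pairing <. , theta> with the identity map, so <lambda + rho, theta>
# becomes lambda + 1 (assumed standard conventions).
def lambda_plus_k(k):
    """Lambda_+^k = {lambda in Lambda cap closure(CW) :
    <lambda + rho, theta> < k}, i.e. lambda >= 0 and lambda + 1 < k."""
    return [lam for lam in range(k) if lam + 1 < k]

def lattice_points_in_kP_minus_rho(k):
    """Lambda cap (k P - rho): P corresponds to 0 < <b, theta> < 1,
    so the condition reads 0 < lam + 1 < k."""
    return [lam for lam in range(-2, k + 2) if 0 < lam + 1 < k]

for k in range(3, 12):
    assert lambda_plus_k(k) == lattice_points_in_kP_minus_rho(k)
print(lambda_plus_k(5))  # [0, 1, 2, 3]
```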
\subsubsection*{f) Step 6: Final step}
In Steps 1--5 we showed that for $L$ as in Theorem \ref{theorem1} we have
\begin{equation} \label{eq_WLO_value_gen}
\WLO^{disc}_{rig}(L) \sim \sum_{\eta_1, \eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}} m^{\eta_1\eta_2}_{\lambda_1,\mathbf p}(\tau) \
d_{\eta_1} d_{\eta_2} \ \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}}
\end{equation}
For the empty simplicial ribbon link $L = \emptyset$ the computations in Steps 1--5 simplify drastically and we obtain
\begin{equation} \label{eq_WLO_value_empty} \WLO^{disc}_{rig}(\emptyset) \sim \frac{1}{S^2_{00}}
\end{equation}
where the multiplicative (non-zero) constant represented by $\sim$ is the same as that in Eq. \eqref{eq_WLO_value_gen}
above.
Combining Eq. \eqref{eq_WLO_value_gen} and Eq. \eqref{eq_WLO_value_empty} and recalling the meaning of $\sim$ we conclude
$$ \WLO^{disc}_{norm}(L) = \frac{\WLO^{disc}_{rig}(L)}{\WLO^{disc}_{rig}(\emptyset)}
= S^2_{00} \sum_{\eta_1, \eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}} m^{\eta_1\eta_2}_{\lambda_1,\mathbf p}(\tau) \
d_{\eta_1} d_{\eta_2} \ \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}}
$$
\begin{remark} \rm An alternative (and more explicit) derivation of
Eq. \eqref{eq_WLO_value_empty} can be obtained as follows.
We can apply Eq. \eqref{eq_WLO_value_gen} to the special situation
$L = L':=(R'_1)$ where $R'_1$ is a simplicial torus ribbon knot in $\cK \times \bZ_N$
of standard type with winding numbers ${\mathbf p} =1$ and $\mathbf q =0$
and which is colored with the trivial representation $\rho_0$.
Since $\WLO^{disc}_{rig}(\emptyset) = \WLO^{disc}_{rig}(L')$ we obtain
\begin{multline}
\WLO^{disc}_{rig}(\emptyset) = \WLO^{disc}_{rig}(L') \sim \sum_{\eta_1, \eta_2 \in \Lambda_+^k}
\sum_{\tau \in \cW_{\aff}} m^{\eta_1\eta_2}_{0,1}(\tau) \
d_{\eta_1} d_{\eta_2} \ \theta_{\eta_1}^{ \frac{{0} }{{1}}} \theta_{\tau \ast \eta_2}^{- \frac{{0} }{{1}}} \\
\overset{(*)}{=}
\sum_{\eta_1 \in \Lambda_+^k} d_{\eta_1}^2 = \sum_{\eta_1 \in \Lambda_+^k} \tfrac{S_{\eta_1 0} S_{\eta_1 0}}{S_{00} S_{00}} = C_{0 0} \tfrac{1}{S^2_{00}} =
\delta_{0 \bar{0}} \tfrac{1}{S^2_{00}} = \tfrac{1}{S^2_{00}}
\end{multline}
where in step $(*)$ we used $m_{0}(\alpha) = \delta_{0 \alpha}$
which implies that $m^{\eta_1\eta_2}_{0,1}(\tau)$ vanishes
unless $\tau = 1$ and $\eta_2 = \eta_1$.
\end{remark}
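As a numerical sanity check of the last chain of equalities one can verify $\sum_{\eta} d_{\eta}^2 = 1/S_{00}^2$ in the case $G = SU(2)$, assuming the standard $SU(2)$ $S$-matrix $S_{mn} = \sqrt{2/k}\, \sin(\pi(m+1)(n+1)/k)$, $m,n = 0, \ldots, k-2$ (a convention not fixed in this section):

```python
import math

def su2_S(k):
    """Assumed standard SU(2) S-matrix, indices m, n = 0, ..., k-2:
    S_{mn} = sqrt(2/k) sin(pi (m+1)(n+1) / k)."""
    return [[math.sqrt(2.0 / k) * math.sin(math.pi * (m + 1) * (n + 1) / k)
             for n in range(k - 1)] for m in range(k - 1)]

for k in range(3, 12):
    S = su2_S(k)
    d = [S[m][0] / S[0][0] for m in range(k - 1)]  # quantum dimensions d_eta
    assert abs(sum(x * x for x in d) - 1.0 / S[0][0] ** 2) < 1e-9
```

The check works because this $S$-matrix is orthogonal, so $\sum_{\eta} S_{\eta 0}^2 = 1$ and hence $\sum_{\eta} d_{\eta}^2 = 1/S_{00}^2$.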
\subsection{Proof of Theorem \ref{theorem2}}
\label{subsec5.5}
We will now sketch how a proof of Theorem \ref{theorem2} can be obtained
by a straightforward modification of the proof of Theorem \ref{theorem1}.
\smallskip
Recall that $R_2$ appearing in Theorem \ref{theorem2} is a closed simplicial ribbon which is vertical
and has standard orientation.
Let $\sigma_2^j \in \face_0(q\cK)$, $j=0,1,2$, be the three points associated to $R_2$ as explained
in Definition \ref{def5.4}. We then have
\begin{equation} \label{eq_sec5.5} \Tr_{\rho_2}\bigl( \Hol^{disc}_{R_2}(\Check{A}^{\orth} + A^{\orth}_c, B)\bigr)
= \Tr_{\rho_2}\bigl(\exp({\mathbf q_2}\sum_{j=0}^2 w(j) B(\sigma_2^j))\bigr)
\end{equation}
where $w(j)$, $j=0,1,2$, is as in Sec. \ref{subsec3.3} and where ${\mathbf q_2}$ is the second winding number of $R_2$.
Since $R_2$ has -- by assumption --
standard orientation (cf. again Definition \ref{def5.4}) we have ${\mathbf q_2} =1$. \par
Clearly, the last expression in Eq. \eqref{eq_sec5.5} above does not depend on $\Check{A}^{\orth}$ and $ A^{\orth}_c$.
Thus when proving Theorem \ref{theorem2} we can repeat the Steps 1--3 in the proof of
Theorem \ref{theorem1} almost without modifications,
the only difference being that now
an extra factor $\Tr_{\rho_2}\bigl(\exp(\sum_{j=0}^2 w(j) B(\sigma_2^j))\bigr)$ appears in several equations.
For example, we obtain again Eq. \eqref{eq5.50} at the end of Step 3 where this time the function $F_{\alpha}(b)$
contains an extra factor $\Tr_{\rho_2}\bigl(\exp(\sum_{j=0}^2 w(j) B(\sigma_2^j))\bigr)$
inside the $[\cdots]$ brackets.
According to Observation \ref{obs3} in Step 3 for all $B$ appearing in $F_{\alpha}(b)$
we have $B(\sigma_2^0) = B(\sigma_2^1) = B(\sigma_2^2)$, which implies that
$\Tr_{\rho_2}\bigl(\exp(\sum_{j=0}^2 w(j) B(\sigma_2^j))\bigr)
= \Tr_{\rho_2}\bigl(\exp( B(\sigma_2^0))\bigr)$ for the relevant $B$.
This extra factor appears again later, in the analogue of Eq. \eqref{eq5.89_org} (and of the modified version Eq.
\eqref{eq5.89}) at the end of Step 4.
(In order to arrive at the modified version of Eq. \eqref{eq5.89_org}
we need to add the equation $\Tr_{\rho_2}\bigl(\exp(b + x)\bigr) = \Tr_{\rho_2}\bigl(\exp(b)\bigr)$ for all $b \in \ct$, $x \in I$ to the list of equations in Eq. \eqref{eq5.51}.)\par
As in Sec. \ref{subsec5.4} above let us assume again
(without loss of generality)
that the enumeration of $Z_1$ and $Z_2$ was chosen such that in Eq. \eqref{eq_def_f} in the proof of
Lemma \ref{lem2_pre} we have the situation where the ``$-$''-sign appears.
Then it follows from our assumption that $R_1$ winds around $R_2$ in ``positive direction''\footnote{by
which we mean that the winding number of the projected ribbon
$(R_1)_{\Sigma}:= \pi_{\Sigma} \circ R_1: S^1 \times [0,1] \to S^2$ around $\sigma_2^0$ is positive}
that $\sigma_2^0 \in Z_2$.
So if we now, in Step 5, replace the variable $B$ by $\eta: \{1,2\} \to \Lambda$
given by $\eta(i) = k B(Z_i) - \rho$ then
$B(\sigma_2^0)$ gets replaced by
$\tfrac{1}{k} (\eta(2) + \rho)$ and the term $\Tr_{\rho_2}\bigl(\exp( B(\sigma_2^0))\bigr)$
is replaced by $\Tr_{\rho_2}\bigl(\exp(\tfrac{1}{k} (\eta(2) + \rho))\bigr)$.
From Weyl's character formula it follows that
$$\Tr_{\rho_2}\bigl(\exp(\tfrac{1}{k} (\eta(2) + \rho))\bigr) = \frac{S_{\lambda_2 \eta(2)}}{S_{0 \eta(2)}}$$
where $\lambda_2$ is the highest weight of $\rho_2$.\par
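For $G = SU(2)$ this instance of Weyl's character formula can be checked numerically: with the assumed standard $S$-matrix $S_{mn} = \sqrt{2/k}\,\sin(\pi(m+1)(n+1)/k)$ (a convention introduced here for illustration) the ratio $S_{\lambda_2 \eta}/S_{0 \eta}$ coincides with the character $\sin((\lambda_2+1)t)/\sin(t)$ evaluated at $t = \pi(\eta+1)/k$:

```python
import math

def su2_S(k):
    """Assumed standard SU(2) S-matrix, m, n = 0, ..., k-2."""
    return [[math.sqrt(2.0 / k) * math.sin(math.pi * (m + 1) * (n + 1) / k)
             for n in range(k - 1)] for m in range(k - 1)]

def chi(lam, t):
    """Weyl character formula for SU(2): character of the
    (lam+1)-dimensional irrep at eigenvalue angles +-t."""
    return math.sin((lam + 1) * t) / math.sin(t)

k = 7
S = su2_S(k)
for lam2 in range(k - 1):
    for eta in range(k - 1):
        t = math.pi * (eta + 1) / k   # angle corresponding to (eta + rho)/k
        assert abs(chi(lam2, t) - S[lam2][eta] / S[0][eta]) < 1e-9
```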
Accordingly, at a later stage in Step 5 an extra factor
$ \frac{S_{\lambda_2 \eta_2}}{S_{0 \eta_2}}$ appears in several equations
where $\eta_2$ is one of the two summation variables.
Taking into account that apart from Eq. \eqref{eq_d+th_inv} we also
have $S_{\lambda_2 (\tau \ast \eta)} = (-1)^{\tau} S_{\lambda_2 \eta}$ we finally arrive at
\begin{align} \label{eq_5.73}
\WLO^{disc}_{rig}(L) & \sim
\sum_{\eta_1, \eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}}
m^{ \eta_1 \eta_2}_{\lambda_1,\mathbf p}(\tau)
d_{\eta_1} d_{\eta_2} \frac{S_{\lambda_2 \eta_2}}{S_{0 \eta_2}} \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}} \nonumber \\
& = \sum_{\eta_1, \eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}} m^{ \eta_1 \eta_2}_{\lambda_1,\mathbf p}(\tau)
d_{\eta_1} \tfrac{1}{S_{00}} S_{\lambda_2 \eta_2} \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}}
\end{align}
Combining this with Eq. \eqref{eq_WLO_value_empty} above (where $\sim$ represents equality up to the same multiplicative constant as in Eq. \eqref{eq_5.73}) we arrive at Eq. \eqref{eq_theorem2}.
\section{Comparison of $\WLO^{disc}_{norm}(L)$ with the Reshetikhin-Turaev invariant $RT(L)$}
\label{sec6}
\subsection{Conjecture \ref{conj0}}
\label{subsec6.1}
For every $\cG$ as in Sec. \ref{sec2}, every $k \in \bN$ with $k > \cg$,
and every colored ribbon link $L$ in $S^2 \times S^1$
let us denote by $RT(S^2 \times S^1,L)$, or simply by $RT(L)$,
the corresponding Reshetikhin-Turaev invariant
associated to $U_{q}(\cG_{\bC})$ where $\cG_{\bC} := \cG \otimes_{\bR} \bC$
and $q = e^{2 \pi i/k}$ (cf. Remark \ref{rm_shift_in_level} above).\par
According to Remark \ref{rm_sec3.9c} above it is plausible\footnote{in view of Footnote \ref{ft_distinguished}
and Footnote \ref{ft_distinguished2} in Remark \ref{rm_sec3.9c} above we do not have the guarantee that
this really is the case} to expect that the values of
$\WLO^{disc}_{norm}(L)$ computed above coincide with the
corresponding values of Witten's heuristic path integral expressions\footnote{\label{ft_Z=1}
Here we used Witten's heuristic argument that $Z(S^2 \times S^1) = 1$}
$\WLO_{norm}(L) = Z(S^2 \times S^1,L)/Z(S^2 \times S^1) = Z(S^2 \times S^1,L)$ in the special case considered.
In view of the expected equivalence between Witten's heuristic path integral expressions $Z(S^2 \times S^1,L)$
and the rigorously defined Reshetikhin-Turaev invariants $RT(L)$
we arrive at the following (rigorous) conjecture:
\begin{conjecture} \label{conj0} For every colored simplicial
ribbon link $L$ as in Theorem \ref{theorem1} or Theorem \ref{theorem2}
we have
$$\WLO^{disc}_{norm}(L) = RT(L)$$
where on the RHS we consider $L$ as a continuum ribbon link in $S^2 \times S^1$
in the obvious way.
\end{conjecture}
As already mentioned in Remark \ref{rm_theorems} above,
Conjecture \ref{conj0} is true in the special case ${\mathbf p} =1$.
Since, at present, I have not found in the standard literature a concrete formula for $RT(L)$ for $L$ as in Theorem \ref{theorem1} or Theorem \ref{theorem2} with ${\mathbf p} > 1$,
I cannot prove Conjecture \ref{conj0} in general
(cf. also Sec. \ref{subsec6.3} below).
We can, however, make a ``consistency check'' and
compute -- assuming the validity of Conjecture \ref{conj0} --
the value of $RT(S^3,\tilde{L})$ for an arbitrary colored torus knot $\tilde{L}$ in $S^3$.
We will do this in Sec. \ref{subsec6.2} below with the help of a standard surgery argument.
It turns out that we indeed obtain the correct value for $RT(S^3,\tilde{L})$
(which is given by the Rosso-Jones formula, cf. Eq. \eqref{eq_my_RossoJones3} below).
\subsection{Derivation of the Rosso-Jones formula}
\label{subsec6.2}
Let us now combine Theorem \ref{theorem2} with a simple surgery argument
in order to derive\footnote{As explained in Remark \ref{rm_Goal1_vs_Goal2} below, we will do this in two different ways. Firstly, in a rigorous way in order to obtain a consistency check of Conjecture \ref{conj0} above
and, secondly, in a heuristic way (where we do not need Conjecture \ref{conj0})
in order to obtain a heuristic derivation of the Rosso-Jones formula}
the Rosso-Jones formula for general colored torus knots in $S^3$. \par
In the following it will be convenient to switch back and forth between
the framed link picture and the ribbon link picture.
\smallskip
Let us first recall Witten's (heuristic) surgery formula.
For our purposes it will be sufficient to consider the following special case
of Witten's surgery formula\footnote{here we use a notation which is very
similar to Witten's notation; one important difference is that we write $(C,\rho_{\alpha})$ where Witten writes $R_{\alpha}$}
\begin{equation} \label{eq_surgery_formula0}
Z(S^3,\tilde{L}) = \sum_{\alpha \in \Lambda_+^k} K_{\alpha 0} \ Z(S^2 \times S^1, L, (C,\rho_{\alpha}))
\end{equation}
where
\begin{itemize}
\item $L$ is a colored, framed link in $S^2 \times S^1$,
\item $\tilde{L}$ is the colored, framed link in $S^3$ obtained from $L$ by performing a surgery on a separate
(framed) knot $C$ in $S^2 \times S^1$,
\item $\rho_{\alpha}$ is the irreducible, finite-dimensional, complex representation of $G$ with highest weight
$\alpha \in \Lambda_+^k$ (we assume that $C$ is colored with $\rho_{\alpha}$),
\item $(K_{\mu \nu})_{\mu,\nu \in \Lambda_+^k}$ is the matrix associated to the surgery mentioned
above.
\end{itemize}
Let us now restrict to the special case where $L$ is the colored knot
$L = (T_{{\mathbf p},{\mathbf q}},\rho_{\lambda})$
where $\lambda \in \Lambda_+^k$ and where $T_{{\mathbf p},{\mathbf q}}$
is a torus knot of standard type in $S^2 \times S^1$ with winding numbers ${\mathbf p} \in \bZ \backslash \{0\}$ and ${\mathbf q} \in \bZ$ (cf. Definition \ref{def5.1} and Definition \ref{def5.2}) and equipped with a ``horizontal'' framing\footnote{a special case of $T_{{\mathbf p},{\mathbf q}}$ is any simplicial torus ribbon knot of standard type (cf. Definition \ref{def5.3})
when considered as a framed knot instead of a ribbon knot}, i.e. a normal vector field on
$T_{{\mathbf p},{\mathbf q}}$ which is parallel to the $S^2$-component of $S^2 \times S^1$.
Moreover, let $C$ be a vertical loop in $S^2 \times S^1$ (equipped with a horizontal framing).
Let $\tilde{T}_{{\mathbf p},{\mathbf q}}$ be the framed torus knot in
$S^3$ which is obtained from $T_{{\mathbf p},{\mathbf q}}$
by performing the surgery on $C$ which transforms $S^2 \times S^1$ into $S^3$
and has the matrix $K=S$ associated to it (cf. p. 389 in \cite{Wi}).
\begin{remark} \label{rm_sec6.2_0} \rm Note that up to equivalence and a change of framing
every framed torus knot in $S^3$ can be obtained in this way.
\end{remark}
In the special situation described above
formula \eqref{eq_surgery_formula0} reads
\begin{equation} \label{eq_surgery_formula1}
Z(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda})) = \sum_{\alpha \in \Lambda_+^k} S_{\alpha 0} \ Z(S^2 \times S^1, (T_{{\mathbf p},{\mathbf q}},\rho_{\lambda}), (C,\rho_{\alpha}))
\end{equation}
Clearly, Eq. \eqref{eq_surgery_formula1} is not rigorous.
We can obtain a rigorous version of Eq. \eqref{eq_surgery_formula1} by replacing
the two heuristic path integral expressions
$Z(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda}))$ and $Z(S^2 \times S^1, (T_{{\mathbf p},{\mathbf q}},\rho_{\lambda}), (C,\rho_{\alpha}))$ with the corresponding Reshetikhin-Turaev invariants. Doing so we arrive at
\begin{equation} \label{eq_surgery_formula2}
RT(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda})) = \sum_{\alpha \in \Lambda_+^k} S_{\alpha 0} \ RT(S^2 \times S^1, (T_{{\mathbf p},{\mathbf q}},\rho_{\lambda}), (C,\rho_{\alpha}))
\end{equation}
\begin{remark} \rm \label{rm_Goal1_vs_Goal2} Even though Eq. \eqref{eq_surgery_formula1}
is only heuristic it is sufficient/appropriate for achieving ``Goal 2'' of Comment \ref{comm1} in the
Introduction. It is straightforward to rewrite the next paragraphs
using Eq. \eqref{eq_surgery_formula1}
instead of Eq. \eqref{eq_surgery_formula2} and using\footnote{cf. the argument at the beginning of Sec. \ref{subsec6.1} where we used Witten's heuristic equation $Z(S^2 \times S^1)=1$, cf. Footnote \ref{ft_Z=1} above.}
$Z(S^2 \times S^1, (T_{{\mathbf p},{\mathbf q}},\rho_{\lambda}), (C,\rho_{\alpha})) = \WLO_{norm}(L) = \WLO^{disc}_{norm}(L)$ where $L$ is given as in the paragraph after the present remark. Doing so we arrive at
\begin{equation} Z(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda})) = S_{00} \sum_{\mu \in \Lambda_+}
c^{ \mu}_{\lambda,\mathbf p} d_{\mu} \theta_{\mu}^{ \frac{{\mathbf q} }{{\mathbf p}}},
\end{equation}
which is the (heuristic) ``Chern-Simons path integral version'' of the
Rosso-Jones formula, cf. Eq. \eqref{eq_my_RossoJones3} below. \par
On the other hand, for ``Goal 1'' we should use the rigorous formula Eq. \eqref{eq_surgery_formula2}
in order to make the aforementioned ``consistency check'' of Conjecture \ref{conj0} above.
Since Goal 1 is our main goal we will now work with Eq. \eqref{eq_surgery_formula2}.
\end{remark}
Let us now consider the special case where $ T_{{\mathbf p},{\mathbf q}}$
``comes from''\footnote{i.e. $T_{{\mathbf p},{\mathbf q}}$ agrees
with $R_1$ when $R_1$ is considered as a framed knot instead of a ribbon knot}
a simplicial torus ribbon knot $R_1$ in $\cK \times \bZ$ of standard type
and $C$ ``comes from'' a vertical closed simplicial ribbon $R_2$ in $\cK \times \bZ$.
Moreover, set $\rho_1 := \rho_{\lambda}$ and $\rho_2 := \rho_{\alpha}$ where $\alpha \in \Lambda_+^k$
is fixed (temporarily).
Finally, assume that for the colored simplicial ribbon link $L := ((R_1,R_2), (\rho_1,\rho_2))$ in $\cK \times \bZ$ the assumptions of Theorem \ref{theorem2} above are fulfilled.
If Conjecture \ref{conj0} is true we have
\begin{equation} \label{eq_RT_WLO_spec} RT(S^2 \times S^1, (T_{{\mathbf p},{\mathbf q}},\rho_{\lambda}), (C,\rho_{\alpha})) = \WLO^{disc}_{norm}(L)
\end{equation}
Combining Eq. \eqref{eq_RT_WLO_spec} with Theorem \ref{theorem2} (for every $\alpha \in \Lambda_+^k$)
and using Eq. \eqref{eq_surgery_formula2} we obtain
\begin{align} \label{eq_surgery_formula}
RT(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda}))
& = \sum_{\alpha \in \Lambda_+^k} S_{\alpha 0} \biggl(S_{00} \sum_{\eta_1,\eta_2 \in \Lambda_+^k}
\sum_{\tau \in \cW_{\aff}} m^{ \eta_1 \eta_2}_{\lambda,\mathbf p}(\tau)
d_{\eta_1} S_{\alpha \eta_2} \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}} \biggr) \nonumber\\
& = S_{00} \sum_{\eta_1,\eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}}
\biggl(\sum_{\alpha \in \Lambda_+^k} S_{\alpha 0} S_{\alpha \eta_2} \biggr)
m^{ \eta_1 \eta_2}_{\lambda,\mathbf p}(\tau)
d_{\eta_1} \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}} \nonumber\\
& \overset{(*)}{=} S_{00} \sum_{\eta_1,\eta_2 \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}} \bigl(C_{0 \eta_2} \bigr) m^{ \eta_1 \eta_2}_{\lambda,\mathbf p}(\tau)
d_{\eta_1} \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \eta_2}^{- \frac{{\mathbf q} }{{\mathbf p}}} \nonumber\\
& \overset{(**)}{=} S_{00} \sum_{\eta_1\in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}}
m^{ \eta_1 0}_{\lambda,\mathbf p}(\tau)
d_{\eta_1} \theta_{\eta_1}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast 0}^{- \frac{{\mathbf q} }{{\mathbf p}}}
\end{align}
Here in Step $(*)$ we used $S^2 = C$ and the fact that $S$ is a symmetric matrix (cf. Sec. \ref{subsec5.2}),
and in Step $(**)$ we used $C_{0 \mu} = \delta_{\bar{0} \mu} = \delta_{0 \mu}$. \par
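The two facts used in steps $(*)$ and $(**)$ can be verified numerically for $G = SU(2)$ (again assuming the standard $SU(2)$ $S$-matrix; there every irrep is self-conjugate, so $C = S^2$ is the identity matrix):

```python
import math

def su2_S(k):
    """Assumed standard SU(2) S-matrix, m, n = 0, ..., k-2."""
    return [[math.sqrt(2.0 / k) * math.sin(math.pi * (m + 1) * (n + 1) / k)
             for n in range(k - 1)] for m in range(k - 1)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

for k in range(3, 10):
    S = su2_S(k)
    n = len(S)
    # S is symmetric ...
    assert all(abs(S[i][j] - S[j][i]) < 1e-9
               for i in range(n) for j in range(n))
    # ... and S^2 = C; for SU(2) every irrep is self-conjugate, so C = 1
    S2 = matmul(S, S)
    assert all(abs(S2[i][j] - (1.0 if i == j else 0.0)) < 1e-9
               for i in range(n) for j in range(n))
```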
By renaming the index $\eta_1$ as $\mu$ we obtain from Eq. \eqref{eq_surgery_formula}
\begin{equation} \label{eq_RT_rewrite0}
RT(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda})) =
S_{00} \sum_{\mu \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}}
m^{ \mu 0}_{\lambda,\mathbf p}(\tau) d_{\mu} \theta_{\mu}^{ \frac{{\mathbf q} }{{\mathbf p}}}
\theta_{\tau \ast 0}^{- \frac{{\mathbf q} }{{\mathbf p}}}
\end{equation}
For simplicity we will now assume that $k$ is ``large'' (cf. Remark \ref{rm_general_k} below for the case of general $k > \cg$). If $k$ is ``large enough''\footnote{i.e. $k \ge k(\lambda,{\mathbf p})$ where
$k(\lambda,{\mathbf p})$ is a constant depending only on $\lambda$ and ${\mathbf p}$}
the sum $\sum_{\tau \in \cW_{\aff}} \cdots $ appearing in Eq. \eqref{eq_RT_rewrite0}
can be replaced by $\sum_{\tau \in \cW} \cdots $. On the other hand, for $\tau \in \cW$
we do have $\theta_{\tau \ast 0}^{- \frac{{\mathbf q} }{{\mathbf p}}} = \theta_{0}^{- \frac{{\mathbf q} }{{\mathbf p}}} = 1$ and so Eq. \eqref{eq_RT_rewrite0} simplifies and we obtain
\begin{align} \label{eq_RT_rewrite}
RT(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda})) & =
S_{00} \sum_{\mu \in \Lambda_+^k} \sum_{\tau \in \cW}
m^{ \mu 0}_{\lambda,\mathbf p}(\tau) d_{\mu} \theta_{\mu}^{ \frac{{\mathbf q} }{{\mathbf p}}} \nonumber \\
& = S_{00} \sum_{\mu \in \Lambda_+^k} \bar{M}^{ \mu 0}_{\lambda,\mathbf p} d_{\mu} \theta_{\mu}^{ \frac{{\mathbf q} }{{\mathbf p}}}
\end{align}
where we have set
\begin{equation} \label{eq_def_Mbar}
\bar{M}^{ \mu 0}_{\lambda,\mathbf p} := \sum_{\tau \in \cW}
m^{ \mu 0}_{\lambda,\mathbf p}(\tau) = \sum_{\tau \in \cW} (-1)^{\tau} m_{\lambda}\bigl(\tfrac{1}{\mathbf p} (\mu - \tau \cdot \rho + \rho)\bigr)
\end{equation}
Observe that (for fixed $\lambda$ and ${\mathbf p}$) the coefficients $\bar{M}^{ \mu 0}_{\lambda,\mathbf p}$
are non-zero only for a finite number of values of $\mu$.
So if $k$ is large enough
we can replace the index set $\Lambda_+^k$ in Eq. \eqref{eq_RT_rewrite} by $\Lambda_+$ and obtain
\begin{equation} \label{eq_my_RossoJones2}
RT(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda})) = S_{00} \sum_{\mu \in \Lambda_+}
\bar{M}^{ \mu 0}_{\lambda,\mathbf p} d_{\mu} \theta_{\mu}^{ \frac{{\mathbf q} }{{\mathbf p}}}
\end{equation}
\begin{remark} \rm \label{rm_general_k} For simplicity, we considered here (and in the paragraph before Eq. \eqref{eq_RT_rewrite}) the case where $k$ is ``large''. However,
it is not too difficult to see that this restriction on $k$ can be dropped, i.e.
assuming the validity of Conjecture \ref{conj0} we can actually derive
Eq. \eqref{eq_my_RossoJones2} for all $k > \cg$.
\end{remark}
According to Lemma 2.1 in \cite{GaVu} we have
\begin{equation} \label{eq_cond1}
\forall \mu, \lambda \in \Lambda_+: \forall {\mathbf p} \in \bN: \quad
\bar{M}^{ \mu 0}_{\lambda,\mathbf p} = c^{\mu}_{\lambda,{\mathbf p}}
\end{equation}
where $(c^{\mu}_{\lambda,{\mathbf p}})_{\mu, \lambda \in \Lambda_+,{\mathbf p} \in \bN }$ are the ``plethysm coefficients'' appearing in
the Rosso-Jones formula, cf. \cite{RoJo} and Eq. (10)
in \cite{GaMo}. Accordingly, we can rewrite Eq. \eqref{eq_my_RossoJones2} as
\begin{equation} \label{eq_my_RossoJones3}
RT(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda})) = S_{00} \sum_{\mu \in \Lambda_+}
c^{ \mu}_{\lambda,\mathbf p} d_{\mu} \theta_{\mu}^{ \frac{{\mathbf q} }{{\mathbf p}}},
\end{equation}
which is a version of the Rosso-Jones formula.
(Note that the original Rosso-Jones formula deals with unframed torus knots
rather than framed torus knots. In Appendix \ref{appA} below we will show
that Eq. \eqref{eq_my_RossoJones3} above is indeed equivalent to the original Rosso-Jones formula).
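For $G = SU(2)$ the coefficients $\bar{M}^{\mu 0}_{\lambda, \mathbf p}$ of Eq. \eqref{eq_def_Mbar} can be checked directly against the plethysm (Adams) operation: $\chi_{\lambda}(z^{\mathbf p})$ should decompose as $\sum_{\mu} \bar{M}^{\mu 0}_{\lambda,\mathbf p}\, \chi_{\mu}$, in accordance with Eq. \eqref{eq_cond1}. A sketch assuming the standard $SU(2)$ conventions (weight lattice $\bZ$, $\rho = 1$, $\cW = \{\pm 1\}$, weight multiplicities $m_{\lambda}(a) = 1$ iff $|a| \le \lambda$ and $a \equiv \lambda \bmod 2$):

```python
from collections import Counter
from fractions import Fraction

def m_weight(lam, a):
    """SU(2) weight multiplicity of V_lam: 1 iff |a| <= lam and
    a = lam (mod 2); 'a' may be a Fraction (then 0 unless integral)."""
    if a != int(a):
        return 0
    a = int(a)
    return 1 if abs(a) <= lam and (lam - a) % 2 == 0 else 0

def Mbar(lam, p, mu):
    """Eq. (eq_def_Mbar) for SU(2): W = {1, s} with s . rho = -rho,
    so Mbar = m_lam(mu/p) - m_lam((mu + 2)/p)."""
    return m_weight(lam, Fraction(mu, p)) - m_weight(lam, Fraction(mu + 2, p))

def adams_decomposition(lam, p):
    """Decompose chi_lam(z^p) into SU(2) characters chi_mu by peeling
    off the highest remaining weight (coefficients may be negative)."""
    f = Counter({p * (lam - 2 * j): 1 for j in range(lam + 1)})
    coeffs = {}
    while any(f.values()):
        mu = max(w for w, c in f.items() if c != 0)
        c = f[mu]
        coeffs[mu] = c
        for w in range(-mu, mu + 1, 2):   # weights of V_mu
            f[w] -= c
    return coeffs

for lam in range(5):
    for p in range(1, 5):
        dec = adams_decomposition(lam, p)
        assert all(dec.get(mu, 0) == Mbar(lam, p, mu)
                   for mu in range(p * lam + 3))
```

For example, $\chi_1(z^2) = z^2 + z^{-2} = \chi_2 - \chi_0$, matching $\bar{M}^{2\,0}_{1,2} = 1$ and $\bar{M}^{0\,0}_{1,2} = -1$.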
\subsection{Reformulation of Conjecture \ref{conj0}}
\label{subsec6.3}
Note that Eq. \eqref{eq_surgery_formula2} above can be generalized to
\begin{equation} \label{eq_surgery_formula2_gen}
RT(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda}), (\tilde{C},\rho_{\beta})) = \sum_{\alpha \in \Lambda_+^k} S_{\alpha \beta} \ RT(S^2 \times S^1, (T_{{\mathbf p},{\mathbf q}},\rho_{\lambda}), (C,\rho_{\alpha})) \quad \forall \beta \in \Lambda_+^k
\end{equation}
where $((\tilde{T}_{{\mathbf p},{\mathbf q}},\rho_{\lambda}), (\tilde{C},\rho_{\beta}))$ on the LHS is the
(framed, colored) two-component-link in $S^3$ obtained from the two-component-link
$((T_{{\mathbf p},{\mathbf q}},\rho_{\lambda}), (C,\rho_{\alpha}))$ in $S^2 \times S^1$ after applying the same surgery operation as the one described in the paragraph before Remark \ref{rm_sec6.2_0} in Sec. \ref{subsec6.2} above. \par
By modifying the arguments and calculations after Remark \ref{rm_Goal1_vs_Goal2}
in Sec. \ref{subsec6.2} above in the obvious way
we can show that Conjecture \ref{conj0} implies that for all $\lambda, \beta \in \Lambda_+^k$ we have
\begin{equation} \label{eq_RT_rewrite0_gen}
RT(S^3,(\tilde{T}_{{\mathbf p},{\mathbf q}}, \rho_{\lambda}), (\tilde{C},\rho_{\beta})) =
S_{00} \sum_{\mu \in \Lambda_+^k} \sum_{\tau \in \cW_{\aff}}
m^{ \mu \bar{\beta}}_{\lambda,\mathbf p}(\tau) d_{\mu} \theta_{\mu}^{ \frac{{\mathbf q} }{{\mathbf p}}} \theta_{\tau \ast \bar{\beta}}^{- \frac{{\mathbf q} }{{\mathbf p}}}
\end{equation}
The converse is also true: If Eq. \eqref{eq_RT_rewrite0_gen} holds for every (framed) colored two-component-link $((\tilde{T}_{{\mathbf p},{\mathbf q}}, \rho_{\lambda}), (\tilde{C},\rho_{\beta}))$, $\lambda, \beta \in \Lambda_+^k$ in $S^3$ obtained as above
then Conjecture \ref{conj0} will be true.
I expect that Eq. \eqref{eq_RT_rewrite0_gen} (and therefore Conjecture \ref{conj0})
can be proven for arbitrary\footnote{Note that according to Sec. \ref{subsec6.2} above
in the special case where $\beta = 0$ (and $\lambda \in \Lambda_+^k$ is arbitrary)
Eq. \eqref{eq_RT_rewrite0_gen} is indeed true.} $\beta \in \Lambda_+^k$ by using techniques similar to the ones used in \cite{RoJo}.
\section{Conclusions}
\label{sec7}
In the present paper we introduced and studied -- for every simple, simply-connected, compact Lie group $G$,
and a large class of colored torus (ribbon) knots\footnote{In the present paper we have restricted ourselves to the case of torus knots but it should not be difficult to generalize
our main results to torus links.} $L$ in $S^2 \times S^1$ --
a rigorous realization $\WLO^{disc}_{norm}(L)$ of the torus gauge fixed version
of Witten's heuristic CS path integral expressions $Z(S^2\times S^1,L)$.
Moreover, we computed the values of $\WLO^{disc}_{norm}(L)$ explicitly, cf. Theorem \ref{theorem1}. \par
As a by-product we obtained an elementary, heuristic
derivation\footnote{\label{ft_sec7_1} Our derivation is ``almost'' a pure path
integral derivation. Essentially all our arguments
are based on (the rigorous realization of) the Chern-Simons path integral
in the torus gauge introduced in Sec. \ref{sec3}.
The only two exceptions are the arguments involving
Witten's heuristic surgery formula and
Witten's formula $Z(S^2 \times S^1)=1$ mentioned in Remark \ref{rm_Goal1_vs_Goal2} above.
(Both formulas were derived by Witten using arguments from Conformal Field Theory.)}
of the original Rosso-Jones formula for arbitrary colored torus knots $\tilde{L}$ in $S^3$
(and arbitrary simple complex Lie algebras $\cG_{\bC}$).
This means that we have achieved ``Goal 2'' of Comment \ref{comm1} in the Introduction. \par
Apart from achieving ``Goal 2''
we have also made progress towards achieving ``Goal 1'' of Comment \ref{comm1}.
The rigorous computation in Sec. \ref{subsec6.2}
provides strong evidence in favor of Conjecture \ref{conj0} above, i.e. the conjecture that the explicit values
of $\WLO^{disc}_{norm}(L)$ obtained in Theorem \ref{theorem1} and Theorem \ref{theorem2} indeed coincide
with the values of the corresponding Reshetikhin-Turaev invariants $RT(L)$.
If this is indeed the case
Theorem \ref{theorem1} can be considered as a step forward in the simplicial program for Chern-Simons theory, cf. Sec. 3 of \cite{Ha7a} and cf. also Remark \ref{rm_sec3.9b} in Sec. \ref{sec3} of the present paper.
\renewcommand{\thesection}{\Alph{section}}
\setcounter{section}{0}
\section{Appendix: The original Rosso-Jones formula for unframed torus knots in $S^3$}
\label{appA}
In this appendix we will recall the original Rosso-Jones formula (which deals with unframed torus knots in $S^3$
rather than framed torus knots) and show that it is equivalent
to Eq. \eqref{eq_my_RossoJones3} in Sec. \ref{subsec6.2} above.
\smallskip
Recall that above we set $\cG_{\bC} = \cG \otimes_{\bR} \bC$ and $q = e^{ 2 \pi i/k}$
and denoted by $RT(M,\cdot)$ (for $M=S^3$ and $M=S^2 \times S^1$) the
Reshetikhin-Turaev invariant associated to $U_{q}(\cG_{\bC})$.
Let us write $RT(\cdot)$ instead of $RT(S^3,\cdot)$. \par
We will now compare $RT(\cdot)$ with $QI(\cdot)$
where $QI(\cdot)$ is the $U_{q}(\cG_{\bC})$-analogue\footnote{Note that
in \cite{Tu0,RoJo} the letter $q$ refers to a complex variable or, equivalently,
a generic element of $\bC$. We now replace this variable by the root of unity $e^{ 2 \pi i/k}$.
Doing so we obtain a complex valued topological invariant of all colored links $L$ in $S^3$
whose colors $\rho_i$ fulfill the condition $\lambda_i \in \Lambda_+^k$ where $\lambda_i$
is the highest weight of $\rho_i$.
Recall that if $\lambda_i \notin \Lambda_+^k$
then $RT(L)$ is not defined and $QI(L)$ need not be defined either since division by $0$ may occur.}
of the topological invariant for colored links in $S^3$
which appeared in \cite{Tu0,RoJo}. \par
The two invariants $RT(\cdot)$ and ${QI}(\cdot)$ are very closely related
but there are some important differences:
\begin{itemize}
\item ${QI}(\cdot)$ is an invariant of (unframed) colored links.
More precisely, it is an ambient isotopy invariant (i.e. invariant under\footnote{Here we consider every link $L \subset S^3$ as a link in $\bR^3$ and project it down to a suitable fixed plane $P$.} Reidemeister I, II and III moves)
\item ${QI}(\cdot)$ is normalized such that ${QI}(U_{\lambda}) = 1$
where $U_{\lambda}$ is the unknot colored with (the irreducible complex representation $\rho$ with highest weight) $\lambda \in \Lambda_+^k$.
\end{itemize}
By contrast we have
\begin{itemize}
\item $RT(\cdot)$ is an invariant of framed, colored links.
More precisely, it is a regular isotopy invariant (i.e. invariant only under Reidemeister II and III moves)
\item For every framed knot $L$ with color $\rho_{\lambda}$, $\lambda \in \Lambda_+^k$, the value of $RT(L)$ changes by a factor
${\theta}_{\lambda}^{\pm 1}$ when we perform a Reidemeister I move on $L$.
\item $RT(\cdot)$ is normalized such that $RT(U_{\lambda}) = {S}_{\lambda 0}$ where
$U_{\lambda}$ is the 0-framed unknot colored with $\lambda$.
\end{itemize}
Taking this into account one can deduce
the following relation between $RT(\cdot) = RT(S^3,\cdot)$ and ${QI}(\cdot)$
\begin{equation} \label{eq_QI_RT}
{QI}(L^{0}) = \frac{1}{{S}_{\lambda 0}} {\theta}_{\lambda}^{- \writhe(D(L))} RT(S^3,L)
\end{equation}
for every framed knot $L$ with color $\rho_{\lambda}$, $\lambda \in \Lambda_+^k$, where $L^0$ is the unframed, colored knot obtained from $L$ by forgetting the framing and where $D(L)$ is any
``admissible''\footnote{Here by ``admissible'' I mean a knot diagram which is obtained in the following way
(here we use the ribbon picture, i.e. we consider the framed knot $L$ as a ribbon in the obvious way):
We press the ribbon $L$ flat onto a fixed plane $P$.
After that the ribbon width is sent to zero.} knot diagram of $L$.
[Note that the writhe is also a regular isotopy invariant and the effect of a Reidemeister I move
on the exponential on the RHS of Eq. \eqref{eq_QI_RT} cancels out the effect of the move on the factor $RT(L)$.
Accordingly, the RHS of Eq.\ \eqref{eq_QI_RT} will be invariant under Reidemeister I--III moves.] \par
Let us now consider the special case $L = (\tilde{T}_{{\mathbf p}, {\mathbf q}},\rho_{\lambda})$
where $\tilde{T}_{{\mathbf p}, {\mathbf q}}$ is the framed torus knot in $S^3$
that appeared in Sec. \ref{subsec6.2} above.
It can be shown that for a torus knot $\tilde{T}_{{\mathbf p}, {\mathbf q}}$
obtained by surgery from a torus knot $T_{{\mathbf p}, {\mathbf q}}$ in $S^2 \times S^1$ of standard type
with horizontal framing we have
$$\writhe(D(\tilde{T}_{{\mathbf p}, {\mathbf q}})) = {\mathbf p}{\mathbf q}$$
for every admissible knot diagram $D(\tilde{T}_{{\mathbf p}, {\mathbf q}})$ of $\tilde{T}_{{\mathbf p}, {\mathbf q}}$
(in the sense above).
Accordingly, Eq. \eqref{eq_QI_RT} specializes to
\begin{equation} \label{eq_A.4} {QI}((\tilde{T}^0_{{\mathbf p}, {\mathbf q}},\rho_{\lambda}))
= \frac{1}{{S}_{\lambda0}} {\theta}_{\lambda}^{-{\mathbf p}{\mathbf q}}
RT(S^3,( \tilde{T}_{{\mathbf p}, {\mathbf q}},\rho_{\lambda}))
\end{equation}
In view of Eq. \eqref{eq_A.4} it is now clear that Eq. \eqref{eq_my_RossoJones3} in Sec. \ref{subsec6.2} above
is equivalent to
\begin{equation} \label{eq_RossoJones0}
QI((\tilde{T}^0_{{\mathbf p}, {\mathbf q}},\rho_{\lambda})) = \frac{1}{d_{\lambda}} \theta_{\lambda}^{-{\mathbf p}{\mathbf q}} \sum_{\mu \in \Lambda_+} c^{\mu}_{\lambda,{\mathbf p}} d_{\mu} \theta_{\mu}^{ \frac{{\mathbf q} }{{\mathbf p}}},
\end{equation}
which is the original Rosso-Jones formula, cf. Eq. (10)
in \cite{GaMo}.
\section{Embedding Delay Parameter Selection} \label{sec:timeLag}
Delay embedding is a uniform subsampling of the original time series according to the embedding parameter $\tau$.
For example, the subsampled sequence $X$ with elements $\{x_i: i \in \mathbb{N}\cup 0 \}$ subject to the delay $\tau$ is defined as $X(\tau) = [x_0, x_{\tau}, x_{2\tau}, \ldots]$.
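The uniform subsampling can be written in one line; the following sketch (not from the paper's code, with a hypothetical function name) illustrates $X(\tau)$:

```python
# Minimal sketch of delay subsampling: X(tau) = [x_0, x_tau, x_2tau, ...].
def delay_subsample(x, tau):
    """Keep every tau-th sample of the series x, starting at index 0."""
    return x[::tau]

# With tau = 3, the indices 0, 3, 6, 9 of a length-10 series are kept.
print(delay_subsample(list(range(10)), 3))  # [0, 3, 6, 9]
```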
Riedl et al.~\cite{riedl2013practical} showed that PE is sensitive to the time delay, which prompts the need for a robust method for determining an appropriate value for $\tau$.
For estimating the optimal $\tau$, we investigate the following methods in the subsequent sections: Mutual Information (MI) (Section~\ref{sec:MI}), Fourier analysis combined with $0$-D persistence and the modified $z$-score (Section~\ref{sec:0dpers_zscore}), and Sliding Window for One-Dimensional Persistence Scoring (SW1PerS) (Section~\ref{sec:sw1pers_delay}).
We recognize, but do not investigate, some other commonly used methods for finding $\tau$.
These include the autocorrelation function~\cite{grassberger1983measuring} and the phase space expansion~\cite{buzug1992optimal}.
\begin{document}
\title{A class of non-ergodic probabilistic cellular automata with unique invariant measure and quasi-periodic orbit}
\author{
Benedikt Jahnel
\footnote{ Ruhr-Universit\"at Bochum, Fakult\"at f\"ur Mathematik, D44801 Bochum, Germany,
\newline
\texttt{Benedikt.Jahnel@ruhr-uni-bochum.de},
\newline
\texttt{http://www.ruhr-uni-bochum.de/ffm/Lehrstuehle/Kuelske/jahnel.html }}
\, and Christof K\"ulske
\footnote{ Ruhr-Universit\"at Bochum, Fakult\"at f\"ur Mathematik, D44801 Bochum, Germany,
\newline
\texttt{Christof.Kuelske@ruhr-uni-bochum.de},
\newline
\texttt{http://www.ruhr-uni-bochum.de/ffm/Lehrstuehle/Kuelske/kuelske.html}}\,
\,
}
\maketitle
\begin{abstract}
We provide an example of a discrete-time Markov process on the three-dimensional infinite integer lattice with
$\Z_q$-invariant Bernoulli-increments which has as local state space the cyclic group $\Z_q$.
We show that the system has a unique invariant measure, but remarkably
possesses an invariant set of measures on which the dynamics is conjugate to an irrational rotation on the continuous sphere $S^1$.
The update mechanism we construct is exponentially well localized on the lattice.
\end{abstract}
\smallskip
\noindent {\bf AMS 2000 subject classification:} 82B20,
82C20, 60K35.
\smallskip
\noindent {\bf Keywords:} Markov chain, probabilistic cellular automaton, interacting particle system, non-equilibrium, non-ergodicity, rotation, discretization, Gibbs measures, XY-model, clock model.
\vfill\eject
\newpage
\section{Introduction}
The possible long-time behavior of infinite lattice systems under stochastic dynamics is a subject of ongoing research.
The situation is non-trivial already for reversible dynamics and
becomes even more difficult when one leaves the assumption of reversibility
of the dynamics and enters the realm of driven systems.
Infinite lattice systems
may possess different equilibria for the same stochastic dynamics.
The first question which comes to mind is to estimate
the approach to equilibrium, if there is a unique equilibrium (using for example spectral gap analysis, logarithmic Sobolev inequalities, etc. see \cite{Ch04,Ch05}).
If there are multiple equilibria one
may be interested in their domains of attraction.
An interesting question in this context is whether a unique equilibrium has to be an attractor for a stochastic lattice dynamics (see for example \cite{Li85,ChMa11,Mo95,RaVa96}).
Ideally one would like to understand possible behavior of invariant sets and attractors.
Under what circumstances can there be oscillatory behavior and closed orbits of measures?
This is difficult to answer for infinite lattice systems.
On the one hand, motivation comes from the study of given models describing, e.g., systems of coupled neurons. These show characteristic patterns of spiking, phases of long-range order, periodicity, synchronization as well as disordered phases. A mathematical analysis
of these interesting phenomena so far has been restricted mainly to mean-field models (see for example \cite{DeGaLoPr14} or
the Kuramoto model \cite{DH96,AcBoPeRiSp05,GiPaPePo12,GPP12}).
On the other hand, there is the theoretical interest to make progress by restricting the possible
forms of limiting behavior, or to explore possible forms and
provide model systems which illustrate possible forms of ``non-standard'' limit behavior.
Ultimately one may strive for a classification of models into classes which look similar, but in the beginning we are forced
to work with model systems, which serve as illustrations, their universality being a question that hopefully can be tackled later.
\medskip
These types of question may be posed for continuous-time Markov processes or discrete-time Markov processes.
In a previous paper \cite{JaKu12} we were able to settle an old question raised by Liggett \cite{Li85} whether it is possible to have a continuous-time Markovian dynamics with
a unique invariant measure which does not attract all initial conditions, but has a closed
orbit of measures.
An example of a probabilistic cellular automaton (PCA) which shows that non-ergodicity is possible in one dimension with positive rates was given in \cite{Ga01}.
An example of a PCA which shows that non-ergodicity in one dimension is possible even with a unique stationary measure was given in \cite{ChMa11} (albeit with some deterministic updatings).
In \cite{DaLoRo02} another example of a non-ergodic, non-degenerate, Ising-type PCA on the $d$-dimensional lattice with $d\geq2$ is exhibited.
Both examples have periodic orbits of period two.
To compare, in the case of an interacting particle system (IPS) where the updating is in continuous time, non-ergodicity with unique invariant measure was proved to be impossible for local rates in one lattice dimension in \cite{Mo95,RaVa96}. In two lattice dimensions the question remains open. For general background, see also \cite{LeMaSp90,To01}.
\medskip
The aim of the present paper is to provide an example of a discrete-time Markov process with
{\em discrete} $\Z_q$-invariant Bernoulli-updates having a unique invariant measure,
but a non-trivial quasi-periodic orbit under the dynamics.
It is remarkable that a quasi-periodic orbit which is conjugate to an irrational rotation on a sphere can arise from exponentially localized dynamical rules on a discrete local spin space.
To do so, we consider a class of discrete $q$-state spin equilibrium models defined in terms of a translation-invariant
quasilocal specification with discrete clock-rotation invariance having extremal Gibbs measures $\mu'_{\phi}$ labeled by the uncountably many values of $\phi$ in the one-dimensional sphere (introduced
in \cite{EnKuOp11}).
Next we
construct an associated discrete-time Markov process as a
PCA with exponentially localized updating rule.
The process has the property to reproduce a deterministic rotation of the extremal Gibbs measures and preserve macroscopic coherence by a quasilocal and time-synchronous updating mechanism in discrete time without deterministic transitions.
We prove that, depending on an updating-velocity parameter $\t$, there is either a continuum of time-stationary measures and closed orbits of rotating states, or a unique time-stationary measure and a dense orbit of rotating states. In both cases the process is non-ergodic.
Our paper partially builds on constructions of \cite{JaKu12} where we considered an IPS which can be seen as a
continuous-time analogue of the present Markov process.
The bulk of the present work consists in the proof of uniqueness of the invariant measure in the absence of continuous time,
which heavily draws on techniques relating entropy loss, the inverse transition operator and the Gibbs property of the invariant measure.
\subsection{Discrete-time infinite lattice models -- Main result}
We consider Markovian dynamics on the infinite-volume state space
$\O=E^G$, where $E$ is a finite local state space sitting at each site $i\in G$ and $G$ is a countable set.
In the examples we want to construct we will specifically choose
$E=\Z_q=\{0,1,\dots, q-1\}$ to be the cyclic group and
$G=\Z^d$ to be the $d$-dimensional integer lattice, so that the infinite volume configuration space
$\O$ is an abelian group
w.r.t. sitewise addition modulo $q$.
Recall the following definitions.
A {\em (deterministic) cellular automaton} is given by a deterministic local updating rule
$f: E^\L \mapsto E$ where $\L\subset G$ is a finite set containing the origin $\L\ni 0$.
The simultaneous application of $f$ yields a corresponding function $F:\O\mapsto \O$ acting on infinite-volume configurations $\s\in \O$, where the $i$-th coordinate of the image $F(\s)$ is
given by $F(\s)_i=f(\th_i\s)$ for all sites $i \in G$
and $\th_i \s$ is the configuration obtained from $\s$ by a lattice shift by the vector $i\in G$, that is
$(\th_i \s)_j = \s_{j+i}$. One then likes to study properties of the discrete dynamical system obtained by
iterates of $F$.
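As a concrete illustration (not taken from the paper, with a hypothetical example rule), the simultaneous update $F(\s)_i=f(\th_i\s)$ can be sketched in a finite volume with periodic boundary conditions:

```python
# Finite-volume sketch (hypothetical rule, periodic boundary) of the
# simultaneous update F(sigma)_i = f(theta_i sigma): every site applies the
# same local rule f to the shifted configuration, restricted to a window
# Lambda containing the origin.
def apply_ca(sigma, f, window, q):
    n = len(sigma)
    return [f(tuple(sigma[(i + j) % n] for j in window)) % q for i in range(n)]

# Example rule on E = Z_3 with Lambda = {0, 1}: add the right neighbour mod q.
f = lambda w: w[0] + w[1]
print(apply_ca([0, 1, 2, 0], f, window=(0, 1), q=3))  # [1, 0, 2, 0]
```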
We will not discuss such deterministic cellular automata any further, but mention that
there are recent developments concerning very interesting questions about simple cellular automata which are fed
random initial configurations, bootstrap percolation being
one of these (see for example \cite{AiLe88,BoDuMoSm14,BaBoMo09,DuEn13}).
A {\em (strict) probabilistic cellular automaton} (PCA) is
given by a probabilistic single-site updating kernel $p$ from the infinite-volume
state space $\O$ to single-site probability measures,
which plays the role of a random version of the map $f$.
So one has
$p: \O \mapsto \PP(E)$ where $\PP(E)$ denotes the probability measures on the finite set $E$
and $p$ is assumed to be a strictly local probability kernel, which means that for each $a\in E$ the functions
$\s\mapsto p(\s,a)$ are strictly local functions for all $a\in E$, that is functions depending
only on finitely many coordinates of $\s$.
This probabilistic updating rule is applied (stochastically) independently over all sites
to yield a Markovian transition operator $M$ on infinite lattice
configurations (which generalizes the deterministic function $F$) of the form
\begin{equation*}
\begin{split}
M(\s,d\eta)=\otimes_{i\in G}p(\theta_i\s, d\eta_i).
\end{split}
\end{equation*}
This transition operator describes
simultaneous updates on the infinite lattice which are performed in a sitewise independent way according
to a lattice-shift invariant rule.
There is a huge literature on probabilistic cellular automata (see for example \cite{ChMa11,LeMaSp90,Ga01,DaLoRo02,To01,ToVaStMiKuPi90}).
A {\em probabilistic cellular automaton with exponentially localized update kernel}
is given by a Markovian kernel $M$ from the set of infinite-volume configurations
$\O$ to the probability measures on $\O$
allowing for exponentially suppressed non-localities in the following sense.
1. The update kernel $M(\s, d\eta)=:M_\s(d\eta)$ is {\em uniformly spatially mixing in the future with exponent
$\a_2$}.
This means by definition that there is a positive
constant $K_2$, such that at any fixed starting configuration $\s$
the following decay estimate holds:
For all finite volumes $\L \subset \D \subset G$, for all sets $A$ in the sigma-algebra
generated
by the $\eta$-coordinates in $\L$ and sets $B$ in the sigma-algebra generated by the $\eta$-coordinates in $G\setminus\D$ one has
\begin{equation}\label{in the future}
\begin{split}
&|M_\s(A|B) - M_\s(A)| \leq K_2 \sum_{i\in \L, j\in G\setminus\D }e^{- \a_2 |i-j| }.
\end{split}
\end{equation}
We remark that this uniform mixing property is much stronger (closer to independence) than simply correlation decay.
While correlation decay
holds, e.g., whenever the conditional measure on the $\eta$ variables is an extremal Gibbs measure even
in the phase-transition regime, phase transitions are excluded by \eqref{in the future}.
2. The update kernel $M$ satisfies the property of {\em exponential locality from the past with exponent $\a_1$}.
This means by definition that there is a positive constant $K_1$ such that
the variation w.r.t. $\s_i$ of the probability $M(\s,\eta_\L)$ with finite $\L$
is bounded by the exponential estimate
\begin{equation*}
\begin{split}
&\d_{i}( M(\cdot ,\eta_\L)):= \sup_{{\s,\tilde \s:}\atop{\s_{i^c}=\tilde\s_{i^c}}}(M(\s,\eta_\L)-M(\tilde \s,\eta_\L))
\leq K_1e^{-\a_1|i-\L|}.
\end{split}
\end{equation*}
Here we used the notation $i^c=G\setminus \{i\}$, $|i-\L|=\min_{j\in \L}|i-j|$ and $\eta_\L:=\{\xi\in\O: 1_{\eta_\L}(\xi)=1\}$.
Notice that kernels with these two properties are related to the infinite-volume Markovian kernels studied in \cite{Ku84}.
\medskip
Specifying now to Markov kernels $M$ on the state space $(\Z_q)^G$ we say that $M$
is {\em $\Z_q$-invariant} iff it is compatible with
joint rotation on the local state space and we can write $M(\s+a, A+a)= M(\s, A)$ for all $a \in \Z_q$.
We say that $M$ has {\em Bernoulli-increments} if $M(\s, \s+\{0,1\}^G)=1$.
In that case the updated configuration is obtained from an initial configuration $\s$
by the site-wise addition modulo $q$ of a $\{0,1\}^G$-valued
field $N$ of Bernoulli-increments whose distribution conditional on $\s$ we denote
by the symbol with a hat, namely $\hat M(\s, dn)$.
To exclude degeneracies we only consider kernels in this paper which are {\em uniformly non-null} meaning by definition that
$\hat M_\s(n_{\L})\geq c^{|\L|}$ for all $n_{\L}\in \{0,1\}^{\L}$
for some strictly positive uniform constant $c$.
Given such a Bernoulli updating kernel
$\hat M(\s, dn)$ from $(\Z_q)^{\Z^3}$ to $\{0,1\}^{\Z^3}$,
we then look at
the associated discrete-time Markov process
\begin{equation*}
\begin{split}X^t=(X^{t-1}+N^{t})\text{ mod } q
\end{split}
\end{equation*}
on the state space $\O=(\Z_q)^{\Z^3}$
with increment distribution
\begin{equation*}
\begin{split}\LL\bigl(N^t=\cdot \bigl |X^{t-1}=\s\bigr)=\hat M(\s, \cdot)
\end{split}
\end{equation*}
and use the symbol $M$ (without the hat) for the corresponding transition kernel, i.e.
\begin{equation*}
\begin{split}M(\s, \cdot )=\LL\bigl(X^t=\cdot \bigl |X^{t-1}=\s\bigr).
\end{split}
\end{equation*}
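The shape of this update rule can be illustrated by a toy finite-volume simulation. The sketch below uses i.i.d. Bernoulli$(p)$ increments purely for simplicity; the kernel $\hat M$ constructed in the paper depends on the current configuration and is only exponentially localized, so this is not the paper's kernel, only the form $X^t=(X^{t-1}+N^t) \bmod q$:

```python
import random

# Toy finite-volume illustration of the update X^t = (X^{t-1} + N^t) mod q
# with {0,1}-valued increments. The increments here are i.i.d. Bernoulli(p),
# which is NOT the sigma-dependent kernel \hat M of the paper; the sketch
# only shows the shape of the dynamics.
def step(sigma, q, p, rng):
    increments = [1 if rng.random() < p else 0 for _ in sigma]  # N^t
    return [(s + n) % q for s, n in zip(sigma, increments)]

rng = random.Random(0)
sigma = [0] * 8          # start in the constant-0 configuration
for _ in range(5):
    sigma = step(sigma, q=4, p=0.5, rng=rng)
print(sigma)             # entries remain in {0, ..., 3}
```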
We write $\PP_{\theta}(\O)$ for the lattice-translation invariant probability measures on $\O$.
\medskip
The main result of the paper is the following theorem which states that we may
construct exponentially localized transition kernels with Bernoulli updates
which are arbitrarily well localized,
but whose time-evolutions show macroscopic coherence and non-ergodicity
in the PCA-sense.
\begin{thm}\label{PCA_Theorem} For any arbitrarily large prescribed mixing exponents
$\a_1,\a_2\in (0,\infty)$ there exists
an integer $q_0\in \N$ such that, for all $q\geq q_0$
the following is true.
There exists an updating kernel $M$ with Bernoulli increments
on $(\Z_q)^{\Z^3}$,
which satisfies the properties of uniform exponential mixing in the future with an exponent of (at least)
$\a_2$ and uniform spatial locality from the past with an exponent of (at least) $\a_1$,
such that the associated discrete-time stochastic dynamics
possesses a set $\II\subset \PP_{\theta}(\O)$ of lattice translation-invariant measures
on which the dynamics acts like a rotation. More precisely:
\begin{enumerate}
\item
There is a quasilocal observable $\psi:\O\rightarrow \R^2$ such that the expectation map
$$\nu \mapsto \nu(\psi)$$ is a bijection from $\II$ to the sphere $S^1\subset \R^2$.
\item The stochastic dynamics restricted to the invariant set
$\II$ is conjugate to a rotation by some angle $\t=\t(M)$:
$$\forall \nu \in \II:\quad (M\nu)(\psi)=\nu(\psi)+ \t$$
where we have written the rotation on $S^1$ on the r.h.s.\ in an additive way.
\item
The kernel $M$ can always be chosen so that the cases, $\t/2\pi$ is rational, or irrational can both occur.
\begin{enumerate}
\item If $\t/2\pi$ is irrational, there exists a unique invariant measure $\nu^*$ among the translation-invariant
measures,
but the dynamics is quasiperiodic. In particular $M^n \nu \not\rightarrow \nu^*$ (in the sense of local convergence)
and the PCA is non-ergodic.
\item If $\t/2\pi$ is rational, there are uncountably many periodic orbits and uncountably
many invariant measures.
\end{enumerate}
\end{enumerate}
\end{thm}
\subsection{Comparison with continuous-time dynamics}\label{Comp}
We believe that it is instructive to compare this result to our previous work
on the existence of non-ergodic
{\em continuous-time} dynamics in \cite{JaKu12}. For easy comparison let us also present the following result which is based on the work of \cite{JaKu12}.
We consider a continuous-time interacting particle system (IPS) on $(\Z_q)^{\Z^3}$ given in terms
of a generator $U$ acting on observables $\psi:(\Z_q)^{\Z^3}\mapsto \R$
of the form
\begin{equation}\label{IPS}
U\psi(\o)=\sum_{i \in \Z^d}\Bigl( c_i^+(\o)\bigl(\psi(\o+1_i)-\psi(\o) \bigr) +c_i^-(\o)\bigl(\psi(\o-1_i)-\psi(\o) \bigr) \Bigr)
\end{equation}
where $(\o\pm 1_i)_j=(\o_j \pm \d_{i,j})$ mod $q$, so that it allows both for increments $\pm 1$ at each site $i$.
In analogy to the notion of uniform exponential locality from the past in the discrete-time setup
we say that the rates $ c_i^\pm (\o)$ satisfy the property of {\em exponential locality with exponent $\a_1$} whenever
there is a finite $K_1$ such that
\begin{equation}\label{Exp_Loc}
\begin{split}
&\d_{i} c_j^\pm(\cdot):
= \sup_{{\o,\tilde \o:}\atop{\o_{i^c}=\tilde\o_{i^c}}}
(c_j^\pm(\o)- c_j^\pm(\tilde\o) )\leq K_1 e^{- \a_1 |i - j |}.
\end{split}
\end{equation}
This in particular makes the dynamics well-defined by standard methods.
Then we have the following theorem.
\begin{thm} \label{PCA_IPS} For arbitrarily large exponent
$\a_1\in (0,\infty)$ there exists
an integer $q_0\in \N$ such that, for all $q\geq q_0$
the following is true: There exists a generator
$Q$ on $\O=(\Z_q)^{\Z^3}$ with lattice-translation invariant and $\Z_q$-invariant rates which
satisfy exponential locality with exponent (at least) $\a_1$
such that the associated Markov process with continuous-time semigroup $(S_t)_{t\geq0}$
possesses a set $\II\subset \PP_{\theta}(\O)$ of lattice translation-invariant measures
on which the dynamics acts like a rotation. More precisely:
\begin{enumerate}
\item (This is identical to the corresponding point for the discrete-time dynamics.)
There is a quasilocal observable $\psi:\O\rightarrow \R^2$ such that the expectation map
$$\nu \mapsto \nu(\psi)$$ is a measurable bijection from $\II$ to the sphere $S^1\subset \R^2$.
\item The continuous-time stochastic dynamics restricted to the invariant set
$\II$ is conjugate to a continuous rotation by $t\in \R$:
$$\forall \nu \in \II:\quad (S_t\nu)(\psi)=\nu(\psi)+ t.$$
\item There is a unique time-stationary measure $\nu^*$ among the lattice-translation invariant measures,
namely the uniform mixture over $\II$.
\end{enumerate}
\end{thm}
Note that in the case of the continuous-time Markov process we do not need a requirement analogous to property \eqref{in the future} posed in the PCA case.
Neither does Theorem \ref{PCA_Theorem} imply Theorem \ref{PCA_IPS} nor does Theorem \ref{PCA_IPS} imply Theorem \ref{PCA_Theorem}.
\subsection{Ideas of the proof}
There are common parts (see A below) and essential differences
(see C, and the more difficult D) in the treatment of discrete-time dynamics
and continuous-time dynamics.
A. First we give a one-parameter family of measures $\II= \II(\b,q)\subset \PP_{\theta}((\Z_q)^{\Z^3})$,
depending on an inverse temperature parameter $\b$ for each sufficiently large $q$.
This family is used in both cases of discrete-time and continuous-time dynamics.
We will shortly review its construction which was given in \cite{JaKu12}, based on arguments concerning
the preservation of Gibbsianness, in Section \ref{The equilibrium model}.
B. We construct a discrete-time Bernoulli update kernel for which $\II$ is an invariant set in Section \ref{The updating mechanism}.
This is analogous to but different from the construction of $U=U(\b,q)$ given in \cite{JaKu12}.
C. We prove locality, mixing, and further properties also in Section \ref{The updating mechanism}.
D. We prove uniqueness of the invariant measure
(where it is claimed to hold). The discrete case does not follow
from the continuous case since time-derivatives are not available, and it necessitates the use of a different chain of arguments.
\medskip
Physically the construction of the rotating-states mechanism
is inspired by conjectures in \cite{MaSh11} in the context of IPS based on a clock model in an intermediate-temperature regime \cite{FrSp82}.
To carry out our construction, as in \cite{JaKu12,JaKu14} we draw on the relation to the planar rotator model which has the one-dimensional sphere $S^1$ as local state space. On the lattice in three or more space dimensions, for a sufficiently strong coupling constant this system exhibits the breaking of the rotation symmetry in spin-space, see \cite{FrSiSp76,Pf82,FrPf83,MaSh11}. To arrive at a system of discrete spins (or particles) with finite local state space
$\Z_q$ a local discretization is applied for $q$ sufficiently large but finite. Then the interplay between the systems of discrete and continuous spins is exploited. In particular we use the fact that the discretization map bijectively maps the lattice translation-invariant extremal Gibbs measures $\mu_{\phi}$ of the continuous system to the extremal lattice translation-invariant Gibbs measures $\mu'_{\phi}$ of the discrete system where $\phi$ runs over the one-dimensional sphere $S^1$. Note also the non-trivial fact that the discrete system has uncountably many extremal Gibbs measures.
As we will see below we can define an associated discrete-time Markov process with transition kernel $M_\t$, where $\t$ is a continuous parameter carrying also the meaning of an angle.
The kernel $M_\t$ assigns to a particle configuration
in $(\Z_q)^{\Z^d}$ a random particle configuration in $(\Z_q)^{\Z^d}$ and will be obtained by a natural three-step procedure.
We call this procedure {\em Sample-Rotate-Project}, to be described in detail below. Here it is the rotation step which
carries the dependence on the continuous angle $\t$. For $0\leq\t\leq2\pi/q$ the updating is Bernoulli.
The dynamics works nicely on the Gibbs measures and we have the rotation property:
\begin{enumerate}
\item An application of the transition operator $M_\t$ to a discrete Gibbs measure $\mu'_{\phi}$
yields a rotation by an angle $\t$, so that we have $M_{\t}\mu'_{\phi}=\mu'_{\phi+\t}$.
\end{enumerate}
The proof of this fact is more straightforward than the proof of the analogous statement in the IPS setup \cite{JaKu12}, given the previous work on preservation of Gibbsianness under discretizations. Property 1 already implies that the symmetric mixture $\mu'_*=\frac{1}{2\pi}\int_0^{2\pi} d\phi\mu'_{\phi}$ is invariant under the dynamics.
Note also that we can play with the velocity-parameter $\t$ now, and consider the action of the dynamics on the Gibbs measures.
Rational values of $\t/2\pi$ yield finite closed orbits, of which there are uncountably many,
so that there are uncountably many time-stationary measures obtained as the equal-weight measures on these orbits.
Irrational values of $\t/2\pi$ yield a quasiperiodic orbit and a unique time-stationary measure.
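The orbit structure under repeated rotation by $\t$ is purely arithmetic and can be checked in a few lines of code (a sketch with an illustrative helper of our own naming; it only computes the closing time of the rotation orbit):

```python
from fractions import Fraction

def orbit_length(p: int, q: int) -> int:
    """Number of steps after which the orbit of phi -> phi + tau (mod 2*pi)
    closes, when tau/(2*pi) = p/q is rational: the orbit length is the
    denominator of p/q in lowest terms."""
    return Fraction(p, q).denominator

# For instance, tau = 2*pi * 1/4 gives a 4-cycle, tau = 2*pi * 2/4 a 2-cycle.
```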
Next we note:
\begin{enumerate}
\setcounter{enumi}{1}
\item The translation-invariant measures $\mu'$ which have no
decrease of relative entropy density relative to the invariant measure $\mu_*'$ (equivalently, to one of the Gibbs measures $\mu'_{\phi}$) in one time step, are necessarily Gibbs measures for the same specification as $\mu'_*$.
\end{enumerate}
In particular, translation-invariant and dynamically stationary measures are Gibbs measures for the same potential as $\mu'_*$ and their relative entropy density is also zero. In short: zero entropic loss implies Gibbsianness. Using such a connection is similar in spirit to the IPS case (where the analogous connection was termed ``Holley's argument'' \cite{Ho71,Li85}).
However, the proof is different
and in fact we get stronger statements than for the IPS case (compare Theorem \ref{ZeroEntrGibbs} below). We are inspired in this part by the paper \cite{DaLoRo02} about probabilistic cellular automata with strictly local update rules (see also \cite{Ku84}). We also use the relation between, on the one hand, the decrease of relative entropy density between a general starting measure and the invariant measure $\mu_*'$ under application of the dynamics, and, on the other hand, relative entropies between corresponding time-reversed transition operators. Using K\"unsch's ideas from \cite{Ku84} we then derive the desired DLR equation to identify the Gibbs measures. Technically, our treatment also differs from these papers: we need to take proper care of non-localities, but we are able to bypass a K\"unsch-type representation of the transition operator in terms of double-Gibbs potentials and work directly with specifications, taking advantage of the properties at our disposal in our case, which considerably simplifies matters. (For background on how to go from specifications to potentials see \cite{Su73,Ko74,Ku01}.)
The non-reversible time-evolutions we consider here suggest another set of questions, namely whether there are any non-Gibbsian pathologies
along the trajectories, depending on the starting measure, as were found for reversible dynamics in \cite{EnFeHoRe02,EnRu09,KuRe06,KuNy07,ErKu10,EnFeHoRe10,EnKuOpRu10,JaKuRuWe14,FeHoMa13a}.
\bigskip
\textbf{Acknowledgement: }
This work is supported by the Sonderforschungsbereich SFB $|$ TR12-Symmetries and Universality in Mesoscopic Systems. Christof K\"ulske thanks Universit\'{e} Paris Diderot - Paris 7 for kind hospitality and Giambattista Giacomin for stimulating discussions.
\section{Equilibrium model and rotation dynamics}
In this section we present the equilibrium model also exhibited in \cite{JaKu12}. Further we define the updating via the three-step procedure {\em Sample-Rotate-Project} and show locality properties.
\subsection{The equilibrium model}\label{The equilibrium model}
\textit{The first-layer model: }We first introduce a continuous-spin model given in terms of a Gibbsian specification for an absolutely summable Hamiltonian acting on lattice configurations with continuous local state space. More precisely we consider an $S^1$-rotation invariant and translation-invariant Gibbsian specification $\g^\Phi$ on the lattice $G=\Z^d$, with local state space $S^1=[0,2\pi)$. Let this specification $\g^\Phi=(\g^\Phi_\L)_{\L\subset G}$
be given in the standard way by an absolutely summable, $S^1$-invariant and translation-invariant potential
$\Phi=(\Phi_A)_{A\sb G, A\text{ finite}}$, w.r.t.\ the Lebesgue measure $\l$ on the spheres.
This means that the Gibbsian specification is given by the family of probability kernels
\begin{equation*}\label{First_Layer_Specification*}
\begin{split}
\g^\Phi_\L(B|\eta)=\frac{\int 1_B(\s_\L\eta_{\L^c}) \exp(-H_\L(\s_\L\eta_{\L^c}))\l^{\otimes\L}(d\s_\L)}{\int \exp(-H_\L(\s_\L\eta_{\L^c}))\l^{\otimes\L}(d\s_\L)}
\end{split}
\end{equation*}
for finite $\L\sb G$ and Hamiltonian $H_\L=\sum_{A\cap\L\neq\emptyset}\Phi_A$ applied to a measurable set $B\sb(S^1)^G$ and a boundary condition $\eta\in(S^1)^G$ (for details on Gibbsian specifications see \cite{Ge11}). We use notation $\L^c:=G\setminus\L$.
A standard example of such a model is the nearest-neighbor scalar-product rotator model with Hamiltonian
\begin{equation}\label{Metric_Family}
H_{\Lambda}(\s_\L\eta_{\L^c}) = -\beta \sum_{i,j\in \Lambda: i\sim j }\cos(\s_i-\s_j) -
\beta \sum_{i\in \Lambda,j \in \Lambda^c: i\sim j } \cos(\s_i-\eta_j).
\end{equation}
Denote by $\GG(\g^\Phi)$ the simplex of the Gibbs measures corresponding to this specification,
which are the probability
measures $\mu$ on $(S^1)^G$ which satisfy the DLR-equation $\int\mu(d\eta)
\g^\Phi_\L(B|\eta)=\mu(B)$ for all finite $\L$. Denote by $\GG_{\theta}(\g^\Phi)$ the lattice
translation-invariant Gibbs measures.
We moreover assume that the class of potentials (Hamiltonians)
we discuss exhibits continuous symmetry breaking in the following sense. Assume that the extremal translation-invariant Gibbs measures can be obtained as weak limits with
homogeneous boundary conditions, i.e.\ with $\eta_\phi\in(S^1)^G$ defined by $(\eta_\phi)_i=\phi$ for all $i\in G$ and $\phi \in S^1$ we have $$\text{ex } \GG_{\theta}(\g^\Phi)=\{ \mu_\phi \mid \mu_\phi = \lim_{\L\nearrow G} \g^{\Phi}_{\L}(\cdot|\eta_\phi) , \phi \in S^1\}.$$
We further assume that
different boundary conditions $\eta_\phi$ yield different measures so that there is a unique labelling of states $ \mu_\phi $
by the angles $\phi$ in the sphere $S^1$. It is a non-trivial proven fact that
this assumption holds for the standard rotator model \eqref{Metric_Family} in $d=3$ for $\l$-a.a.\ temperatures in the low-temperature region, as discussed in \cite{FrSiSp76,MaSh11,Pf82}. Here
the unique labelling can be given by the local magnetization $\mu_\phi(\s_0)=m e_\phi$, where $0<m<1$ is the temperature-dependent magnetization length and $e_\phi\in S^1$ is the unit vector with angle $\phi$.
\medskip
\textit{The second-layer model: }We will now describe the discretization transformation which maps the continuous-spin model
to a discrete-spin model.
Denote by $T$ the local coarse-graining with equal arcs, i.e.\
$T:[0,2\pi)\to \{1, \dots, q\}$ where $T(\phi):=k$ iff $2\pi(k-1)/q\leq\phi<2\pi k/q$.
Extend this map to infinite-volume configurations
by performing it sitewise. We will refer to the image space $\O:=\{1,\dots,q\}^G$ as the
coarse-grained layer.
In particular we will consider images of infinite-volume measures under $T$.
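For concreteness, the equal-arc coarse-graining and its sitewise extension can be sketched as follows (function names are ours, chosen for illustration):

```python
import math

def T_local(phi: float, q: int) -> int:
    """Equal-arc coarse-graining: T(phi) = k iff
    2*pi*(k-1)/q <= phi < 2*pi*k/q, mapping [0, 2*pi) onto {1, ..., q}."""
    return int(phi * q / (2 * math.pi)) + 1

def T_sitewise(config: dict, q: int) -> dict:
    """Sitewise extension of T to a finite piece of a lattice configuration,
    given as a dict {site: angle}."""
    return {site: T_local(phi, q) for site, phi in config.items()}
```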
We will need to choose the parameter of this discretization $q\geq q_0(\Phi)$ large enough so that the image measures of first-layer Gibbs measures are again Gibbs measures for a discrete specification on the coarse-grained layer.
That this is always possible follows from the earlier works \cite{KuOp08,EnKuOp11}.
More precisely, we assume that the condition from Theorem 2.1 of
\cite{EnKuOp11} is fulfilled
(ensuring a regime where the Dobrushin uniqueness condition holds
for the so-called constrained first-layer models; the Dobrushin condition is a weak dependence condition implying uniqueness and locality properties).
Note that, since in our notation the usual inverse temperature parameter $\b$ is incorporated into $\Phi$, $q_0(\Phi)$ tends to infinity as $\b$ does.
To talk about the correspondence between the continuous and the discrete system we need
to make explicit the relevant Gibbsian specification for the latter.
To do so define a family of kernels $\g'=(\g'_\L)_{\L\subset G, \L \text{ finite}}$ for the discretized model by
\begin{equation}\label{Coarse_Specification}
\begin{split}
\g'_{\L} (\s'_{\L} | \s' _{\L^c})
&=\frac{\int\mu_{\L^c}[\s'_{\L^c}](d\s_{\L^c})\int\l^{\otimes\L}(d\s_\L)e^{-H_{\L}(\s_\L\s_{\L^c})}
1_{T(\s_\L)=\s'_{\L}}}{\int\mu_{\L^c}[\s'_{\L^c}](d\s_{\L^c})\int\l^{\otimes\L}(d\s_\L)e^{-H_{\L}(\s_\L\s_{\L^c})}
}
\cr
&=\frac{\mu_{\L^c}[\s'_{\L^c}](\l^{\L} (e^{-H_{\L}}
1_{\s'_{\L}}) )
}{\mu_{\L^c}[\s'_{\L^c}](\l^{\L} (e^{-H_{\L}}))}
\cr
\end{split}
\end{equation}
where the second line is a short notation that we will adopt in the sequel. The measure $\mu_{\L^c}[\s'_{\L^c}]$ is the unique continuous-spin Gibbs measure for a system on the smaller volume $\L^c$, with conditional specification obtained by deleting all interactions with $\L$, constrained to take values $\s_{\L^c}$ with discretization
images $T(\s_{\L^c})=\s'_{\L^c}$.
For more details and an explicit definition of $\mu_{\L^c}[\s'_{\L^c}]$ in terms of formulas
see \cite{JaKu12}, Section 2.
Note that these constrained Gibbs measures are well-defined and have nice locality properties for sufficiently fine discretization $q\geq q_0(\Phi)$, see \cite{EnKuOp11,KuOp08} and below. For general background on constrained Gibbs measures in the context of preservation of Gibbsianness see \cite{EnFeSo93,Fe05,KuLeRe04}. We note that $\g'$ is indeed a quasilocal specification and the discretized Gibbs measures are Gibbs for $\g'$.
\medskip
\textit{The relation between first- and second-layer models: }
The infinite-volume discretization map $T$ is injective when applied to the set of translation-invariant extremal Gibbs states in the continuum model $\text{ex } \GG_{\theta}(\g^\Phi)$.
More precisely we have the following proposition proved in \cite{JaKu12}.
\begin{prop}\label{Bijection} Let $q\geq q_0(\Phi)$, then $T$ is a bijection from $\text{ex } \GG_{\theta}(\g^\Phi)$ to $\text{ex } \GG_{\theta}(\g')$
with inverse given by the kernel
$\mu_G[\s'](d\s)$.
\end{prop}
Here $\mu_G[\s'](d\s)$ is the unique conditional continuous-spin Gibbs measure on the whole volume $G$.
Regarding part 1 of Theorem \ref{PCA_Theorem} we have the following corollary.
\begin{cor}\label{QuasilocalBijection}
Let $\Phi$ be in the phase-transition region, $q\geq q_0(\Phi)$ and $m=|\mu(\s_0)|$ the uniform local magnetization length for all $\mu\in\text{ex } \GG_{\theta}(\g^\Phi)$. Under these assumptions, the mapping $\psi: \O\to S^1$
\begin{equation*}\label{psi}
\begin{split}
\psi(\s'):=\mu_G[\s'](\s_0)/m
\end{split}
\end{equation*}
is quasilocal and the expectation map
$\nu \mapsto \nu(\psi)$ is a measurable bijection from $\text{ex } \GG_{\theta}(\g')$ to the sphere $S^1\subset \R^2$.
\end{cor}
\textbf{Proof: }
The quasilocality of $\psi$ follows from Dobrushin uniqueness arguments given in \cite{JaKu12} for $q\geq q_0(\Phi)$. By the continuous symmetry breaking of the first layer model and the surjectivity of $T$, for every $\phi\in S^1$ there exists a $\mu'_\phi\in\text{ex } \GG_{\theta}(\g')$. Further we have
\begin{equation*}\label{psi_bij}
\begin{split}
\mu'_\phi(\psi)=\frac{1}{m}\int_\O\mu'_\phi(d\s')\mu_G[\s'](\s_0)=\frac{1}{m}\mu_\phi(\s_0)=e_\phi
\end{split}
\end{equation*}
and different $\phi$ yield different $\mu'_\phi$.
$\Cox$
\subsection{The updating mechanism}\label{The updating mechanism}
We define an updating at finite time $0\leq\t<2\pi$, according to the {\em Sample-Rotate-Project} algorithm, as follows: Given a discrete-spin configuration $\s'$ we perform the following steps:
\begin{enumerate}
\item Sample a continuous-spin configuration $\s$ according to the conditional measure $\mu_G[\s'](d\s)$.
\item Rotate deterministically the resulting continuous-spin configuration $\s \mapsto \s +\t$ jointly in all sites by the same angle $\t$.
\item Project the rotated configuration using the discretization map $T$, i.e.\ look at the coarse-grained configuration $T(\s+\t)$.
\end{enumerate}
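As a minimal illustration of the three steps, consider the zero-Hamiltonian case, in which $\mu_G[\s']$ makes the spins independent and uniform on the arcs prescribed by $\s'$; in general the sampling step requires the full constrained Gibbs measure. The following sketch, with function names of our own choosing, assumes this simplified setting:

```python
import math
import random

def sample_rotate_project(sigma_prime: dict, q: int, tau: float) -> dict:
    """One updating step for the zero-Hamiltonian case: under mu_G[sigma']
    each spin is independent and uniform on the arc prescribed by its
    discrete value.  Returns the coarse-grained configuration T(sigma + tau)."""
    eta_prime = {}
    for site, k in sigma_prime.items():
        # 1. Sample: uniform on the arc [2*pi*(k-1)/q, 2*pi*k/q).
        phi = 2 * math.pi * (k - 1 + random.random()) / q
        # 2. Rotate: jointly by the same angle tau.
        phi = (phi + tau) % (2 * math.pi)
        # 3. Project: apply the equal-arc map T (the % q guards against
        #    floating-point wrap-around at 2*pi).
        eta_prime[site] = int(phi * q / (2 * math.pi)) % q + 1
    return eta_prime
```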
The resulting kernel on discrete spins, describing the probability distribution of $T(\s+\t)$ where $\s$ is distributed according to $\mu_G[\s'](d\s)$ for a given initial configuration $\s'$, we denote by $M_\t(\s', \cdot\,)$. This \textit{transition operator} can be expressed via
\begin{equation}\label{OneStepUpdate}
\begin{split}
M_\t(\s',\eta'_\L)=\mu_G[\s'](\eta'_{\L,\t})
\end{split}
\end{equation}
where $\L$ is a finite set of sites and
\begin{equation*}\label{Eta_tau}
\begin{split}
\eta'_{\L,\t}:=T^{-1}(\eta'_\L)-\t \in (S^1)^\L.
\end{split}
\end{equation*}
In words: $\eta'_{\L,\t}$ is obtained by joint rotation by $-\t$ of the segments of the sphere prescribed by $\eta'_\L$.
It will be convenient to use the rewriting
\begin{equation}\label{rewriting}
\begin{split}
M_\t(\s',\eta'_\L)
=\frac{\mu_{\L^c}[\s'_{\L^c}](\l^{\L}(e^{-H_\L}1_{\s'_\L}1_{\eta'_{\L,\t}}))}{\mu_{\L^c}[\s'_{\L^c}](\l^{\L}(e^{-H_\L}1_{\s'_\L}))}
\end{split}
\end{equation}
which follows from reorganization of terms in the Hamiltonian in the Dobrushin uniqueness regime, see \cite{JaKu12} Section 2.
\medskip
Notice that $M_\t(\s',\eta'_\L)=0$ if and only if $\s'_{\L,0}\cap\eta'_{\L,\t}=\emptyset$. If $M_\t(\s',\eta'_\L)>0$ we call the configuration $\eta'$ \textit{accessible for $\s'$ in $\L$}. For $0\leq\t\leq2\pi/q$ the updating indeed has Bernoulli increments, since in this case the numerator of \eqref{rewriting} is zero for $\eta'_\L\neq\s'_\L+\{0,1\}^\L$.
\medskip
The following two propositions verify the locality properties addressed in Theorem \ref{PCA_Theorem}.
\begin{prop}\label{Mixing_Past}
For any $\a_2>0$ we can choose the discretization $q=q(\a_2)$ fine enough such that the update kernel $M_\t(\s', d\eta')$ is uniformly spatially mixing in the future with exponent $\a_2>0$.
\end{prop}
{\bf Proof: } Let $\s'$ be any starting configuration. Further let $A$ be a set of configurations measurable w.r.t.\ the sigma-algebra corresponding to a finite volume $\L$ and $B$ be a set of configurations measurable w.r.t.\ the sigma-algebra corresponding to a volume $\D^c$ where $\D$ is a finite volume. Further assume
$M_\t(\s',B)>0$, then we have
\begin{equation*}\label{zwei}
\begin{split}
M_\t(\s',A|B) - M_\t(\s',A)
&=\mu_G[\s'](B_{\t})^{-1}\mu_G[\s'](A_\t\cap B_\t)-\mu_G[\s'](A_{\t}).\cr
\end{split}
\end{equation*}
where we used notation analogous to \eqref{OneStepUpdate} for $A_\t, B_\t$.
Notice that
$\mu_G[\s']$ is uniquely specified by the constrained specification $\g^{\s'}$ (for details see \cite{JaKu12}) and thus we can write
\begin{equation*}\label{zwei}
\begin{split}
\mu_G[\s'](A_\t\cap B_\t)=\int_{B_{\t}}\g^{\s'}_{\D}(A_{\t}|\s)\mu_G[\s'](d\s).\cr
\end{split}
\end{equation*}
Since $\g^{\s'}$ is in the Dobrushin uniqueness region with Dobrushin matrix $\bar C$, uniformly in $\s'$,
Theorem 8.23 (ii) in \cite{Ge11} yields
\begin{equation*}\label{zwei}
\begin{split}
\sup_{A_\t}\Vert\g^{\s'}_{\D}(A_\t|\cdot)-\mu_G[\s'](A_\t)\Vert\leq\sum_{i\in \L, j\in \D^c }\bar D_{ij}
\end{split}
\end{equation*}
with $\bar D:=\sum_{n\geq0} \bar C^n$. Now we can pick $q\geq q(\a_2)$ large enough such that the Dobrushin matrix has exponential decay (see \cite{Ge11} Remark 8.26 and \cite{JaKu12} Lemma 2.8) and hence $\bar D_{ij}\leq K_2 e^{- \a_2 |i-j| }$ which finishes the proof.
$\Cox$
\begin{prop}\label{Mixing_Futur}
The update kernel $M_\t$ satisfies the property of exponential locality from the past with exponent $\a_1$.
\end{prop}
{\bf Proof: } Using notation as in \eqref{OneStepUpdate} we have
\begin{equation*}\label{Product}
\begin{split}
\d_{i}( M(\cdot ,\eta'_\L))=\sup_{{\s',\tilde \s':}\atop{\s'_{i^c}=\tilde\s'_{i^c}}}|\mu_G[\s'](\eta'_{\L,\t})-\mu_G[\tilde\s'](\eta'_{\L,\t})|
\leq \sum_{j\in\L}\bar D_{ji}
\end{split}
\end{equation*}
since $\mu_G[\s']$ is the unique Gibbs measure for the specification $\g^{\s'}$, which is in the Dobrushin uniqueness region; see Theorem 8.20 in \cite{Ge11}.
As above we can choose $q\geq q(2\a_1)$ to be large enough such that
$\bar D_{ji}\leq C_1e^{-2\a_1 |i-j|}$ and thus for $i\notin\L$ and $\a_1$ sufficiently large
\begin{equation*}\label{Product}
\begin{split}
\sum_{j\in\L}\bar D_{ji}&\leq C_1 \sum_{j\in\L} e^{-2\a_1 |i-j|}
\leq C_1 \sum_{k=|i-\L|}^\infty e^{-2\a_1 k}\#\{j\in \Z^d, |j-i|=k \}\cr
& \leq C_2 e^{-\a_1 |i-\L|}.
\end{split}
\end{equation*}
$\Cox$
\bigskip
As already addressed in Section \ref{Comp}, in \cite{JaKu12} we present a rotation dynamics in continuous-time as an IPS. Specifically we exhibit a Markov generator $L$ with the property that for the associated semigroup $(S_t)_{t\geq0}$ we have $S_t\mu'_\phi=\mu'_{\phi+t}$ where $\mu'_\phi\in\text{ex } \GG_{\theta}(\g')$ (see \cite{JaKu12} Theorem 1.3). The exponential locality property \eqref{Exp_Loc} is a consequence of Lemma 3.4 in \cite{JaKu12}.
More precisely, given any arbitrarily large decay exponent $\a_1$
we may ensure the decay \eqref{Exp_Loc} by choosing $\b$ and $q$ in \cite{JaKu12}
such that the corresponding Dobrushin-constant $\bar c=\bar c (\b,q)$ (of formula (28) of \cite{JaKu12})
is sufficiently small.
Now choose $\b$ sufficiently large such that the initial rotator model has a phase transition.
Since $\bar c (\b,q)\downarrow 0$ as $q\uparrow \infty$ for fixed $\b$, the statement
now follows for $q$ large.
\medskip
In the present discrete-time setting the mapping $\t \mapsto M_\t$ cannot be expected to be a semigroup. In particular at finite $\t$ the transition operator $M_\t$ differs from the continuous rotation semigroup $(S_t)_{t\geq0}$. They agree at infinitesimal $\t$, and this is the route to the definition of $(S_t)_{t\geq0}$ (see the proof of Theorem 1.3 in \cite{JaKu12} Section 3.2). Nevertheless $M_\t$ also possesses the rotation property presented in the following proposition.
\begin{prop}\label{RotationProp}
Discrete Gibbs measures transform in a covariant way, i.e.\ \linebreak
$M_\t \mu'_{\phi}= \mu'_{\phi+\t}$ for all $\mu'_\phi\in\text{ex } \GG_{\theta}(\g')$ and $0\leq\t<2\pi$.
\end{prop}
In particular $\II:=\GG_{\theta}(\g')$ is the set of translation-invariant measures on which the discrete-time process acts like a rotation. Applying the function $\psi$ from Section \ref{The equilibrium model} this proves parts 1 and 2 of Theorem \ref{PCA_Theorem}.
\bigskip
\textbf{Proof:} It suffices to prove that for all $\mu'_\phi\in\text{ex } \GG_{\theta}(\g')$ and discrete-spin test-functions $f$ the following equality holds:
\begin{equation*}\label{zwei}
\begin{split}
&\int \mu'_\phi(d\s')\int M_\t(\s',d\eta')f(\eta')=\int\mu'_{\phi+\t}(d\eta')f(\eta')
\end{split}
\end{equation*}
By Proposition \ref{Bijection} the conditional probability under coarse-graining of a continuous-spin measure ordering in direction $\phi$ does not depend on $\phi$, i.e.\ for continuous-spin test-functions $g$ we have
\begin{equation}\label{Kernel}
\begin{split}
&\int \mu_\phi(d\s)g(\s)=\int\mu'_{\phi}(d\s')\mu[\s'](d\s)g(\s).
\end{split}
\end{equation}
Thus we can write
\begin{equation*}\label{vier2}
\begin{split}
\int \mu'_\phi(d\s')\int M_\t(\s',d\eta')f(\eta')&=\int \mu'_\phi(d\s')\int \mu[\s'](d\s)f(T(\s+\t) )\cr
&=\int \mu_\phi(d\s)f(T(\s+\t) )\cr
&=\int \mu_{\phi+\t}(d\s)f(T(\s) )\cr
&=\int\mu'_{\phi+\t}(d\eta')f(\eta')\cr
\end{split}
\end{equation*}
where the first equality is the definition of $M_\t$, the second equality is the independence of the conditional probability of $\phi$ (see \eqref{Kernel}), the third equality is the transformation property of the continuous-spin Gibbs measures under rotations and the last equality is the definition of the coarse-grained measures.
$\Cox$
\bigskip
To summarize the interplay between the discretization and the dynamics let us consider the joint rotation of a first-layer measure $\mu$ by an angle $\t$ written as $R_\t\mu$. Then, under $R_\t$ and the Markov transition operator $M_\t(\s',d \eta')$, the diagram in Figure \ref{Diagram} is commutative.
\begin{figure}[h]
$$
\begin{xy}
\xymatrix{
\PP_\theta((S^1)^{\Z^3})\supset \ar[dd]^T & \text{ex } \GG_{\theta}(\g^\Phi) \ar[rrrr]^{\mu\mapsto R_\t\mu} \ar@/_0,5cm/[dd] & & & & \text{ex } \GG_{\theta}(\g^\Phi) \ar@/_0,5cm/[dd] \\ \\
\PP_\theta((\Z_q)^{\Z^3})\supset \ar[dd]^{\mu'\mapsto \mu'(\psi)} & \text{ex } \GG_{\theta}(\g') \ar[rrrr]^{\mu'\mapsto \int\mu'(d\s')M_\t(\s',\cdot)} \ar@/_0,5cm/[dd] \ar@/_0,5cm/[uu] & & & & \text{ex } \GG_{\theta}(\g') \ar@/_0,5cm/[dd] \ar@/_0,5cm/[uu]\\ \\
\R^2\supset & S^1 \ar[rrrr]^{\phi\mapsto\phi+\t} \ar@/_0,5cm/[uu] & & & & S^1 \ar@/_0,5cm/[uu]
}
\end{xy}
$$
\caption{\scriptsize{The discretization map $T$ and labelling map $\mu'\mapsto \mu'(\psi)$ become bijections when applied to the extremal translation-invariant Gibbs measures. The transition kernel $M_\t$ reproduces the deterministic rotation actions $R_\t$ and $\phi\mapsto\phi+\t$.}}
\label{Diagram}
\end{figure}
\medskip
In order to further explain the transition kernel $M_\t$, let us consider the case of the zero initial continuous-spin Hamiltonian
as an example. In this case the spins become independent and it is sufficient to consider a single spin with the Gibbs distribution
being the uniform distribution on the circle $\s_0\sim \hbox{Unif}([0,2\pi))$. The image measure under $T$ of the continuous-spin Gibbs measure is again the uniform distribution on the discretized circle $\s'_0\sim \hbox{Unif}(\{1,\dots,q\})$.
Hence, for $0\leq\t<2\pi/q$ we have that
\begin{equation*}\label{zwei}
\begin{split}
M_\t(\s'_0,\{\s'_0,\s'_0+1\})=1\hspace{0.2cm}\text{ and }\hspace{0.2cm}M_\t(\s'_0,\s'_0+1)=\t\frac{q}{2\pi}.\cr
\end{split}
\end{equation*}
This describes a random walk $(\s'_0(n\t))_{n\in\N}$ on the discretized circle $\{1,\dots,q\}$ which can only move up by one step with probability proportional to $\t$ or stay where it is. Clearly for larger parameters $\t$ it is more probable to jump. In that sense $\t$ controls the velocity of the walker.
Taking the limit $\t\downarrow 0$ the process converges to a Poisson process which moves around the discretized circle where jumps are made with intensity $q/2\pi$.
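The claimed jump probability $\t q/2\pi$ can be confirmed by a direct Monte Carlo sketch (an illustrative helper of our own naming, not part of the construction):

```python
import math
import random

def jump_probability(q: int, tau: float, n_samples: int = 200_000) -> float:
    """Monte Carlo estimate of M_tau(k, k+1) for the zero-Hamiltonian
    single-spin updating with 0 <= tau < 2*pi/q: a spin uniform on its
    arc of length 2*pi/q jumps up iff the rotation by tau carries it
    out of the arc, which happens with probability tau*q/(2*pi)."""
    arc = 2 * math.pi / q
    jumps = sum(1 for _ in range(n_samples)
                if random.uniform(0.0, arc) + tau >= arc)
    return jumps / n_samples
```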
\bigskip
Finally let us point out that the uniformly mixed Gibbs measure $\mu_*':=\frac{1}{2\pi}\int_0^{2\pi} d\phi \mu'_{\phi}$ with $\mu'_\phi\in\text{ex } \GG_{\theta}(\g')$ is time-stationary for $M_\t$. Indeed we have
\begin{equation}\label{MixedMeasure}
\begin{split}
M_\t\mu_*'=\frac{1}{2\pi}\int_0^{2\pi} d\phi M_\t\mu'_{\phi}=\frac{1}{2\pi}\int_0^{2\pi} d\phi\mu'_{\phi+\t}=\frac{1}{2\pi}\int_0^{2\pi} d\phi\mu'_{\phi}=\mu_*'.
\end{split}
\end{equation}
In other words, there exists a translation-invariant Gibbs measure that is also time-stationary for the dynamics.
In the following section we prove that any translation-invariant and time-stationary measure must be a Gibbs measure for the same specification.
\section{Entropic loss and reversed transition operators}
We want to connect two properties: having zero entropy loss under the time evolution and being Gibbs w.r.t.\ the same specification as $\mu'_*$. In order to do this, relative entropy densities will be considered. More precisely, we want to employ an extension to our Gibbsian updating mechanism of arguments carried out in \cite{DaLoRo02}, Proposition 2.1 and Proposition 2.2, involving time-reversed transition operators. From now on we will always consider translation-invariant measures.
Let us introduce some notation. For infinite-volume probability measures $\nu',\mu'\in\PP(\{1,\dots,q\}^G)$ and a finite set of sites $\L$ the \textit{local relative entropy} is defined as
\begin{equation*}\label{LRE}
\begin{split}
h_\L(\nu'|\mu'):=\sum_{\s'_\L\in\{1,\dots,q\}^\L}\nu'(1_{\s'_\L})\log\frac{\nu'(1_{\s'_\L})}{\mu'(1_{\s'_\L})}.
\end{split}
\end{equation*}
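In finite volume this is the Kullback-Leibler divergence of the $\L$-marginals; a minimal sketch (with distributions represented as dictionaries, a representation of our own choosing) reads:

```python
import math

def local_relative_entropy(nu: dict, mu: dict) -> float:
    """h_Lambda(nu | mu) for two probability distributions on the finite
    set of local configurations, given as dicts {configuration: probability};
    every configuration charged by nu must be charged by mu."""
    return sum(p * math.log(p / mu[s]) for s, p in nu.items() if p > 0)
```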
If $\mu'\in\GG_{\theta}(\g')$, existence of the \textit{specific relative entropy}
\begin{equation*}\label{SRE}
\begin{split}
h(\nu'|\mu'):=\limsup_{\L\uparrow G}\frac{1}{|\L|}h_\L(\nu'|\mu')
\end{split}
\end{equation*}
is guaranteed, where $\L$ varies over hypercubes centered at the origin. Similarly one can define the \textit{specific relative entropy between transition operators w.r.t.\ a base measure $\nu'$} by
\begin{equation*}\label{SRETrans}
\begin{split}
&\HH_{\nu'}(M|\tilde M)=\int \nu'(d\s')\limsup_{\L}\frac{1}{|\L|}h_\L\big( M(\s',\cdot)|\tilde M(\s',\cdot)\big).
\end{split}
\end{equation*}
Let us define the \textit{joint two-step distribution} $Q_{\nu'}(d\s',d\eta'):=M_\t(\s',d\eta')\nu'(d\s')$. We will consider different conditionings of $Q_{\nu'}$. To keep notation reasonably simple we adopt the convention of writing \textit{$\s'$ for the present configuration} and \textit{$\eta'$ for the future configuration}, just as in $Q_{\nu'}(d\s',d\eta')$. With this we have $Q_{\nu'}(\eta'_\L|\s')=M_\t(\s',\eta'_\L)$, which is independent of $\nu'$. The \textit{backwards transition operator} is given by
\begin{equation*}\label{BackwardsTrans}
\begin{split}
\hat M_{\t,\nu'}(\eta',\s'_\L):=Q_{\nu'}(\s'_\L|\eta').
\end{split}
\end{equation*}
Using the short notation $M_\t\nu'(\eta'_\L)=\int\nu'(d\s')M_\t(\s',\eta'_\L)$, the backwards transition operator is characterized by the requirement that
\begin{equation}\label{BackwardsTransChar}
\begin{split}
\int\nu'(d\s')\int M_\t(\s',d\eta')f(\s',\eta')=\int M_\t\nu'(d\eta')\int\hat M_{\t,\nu'}(\eta',d\s')f(\s',\eta')
\end{split}
\end{equation}
holds for all local test-functions $f$.
\begin{lem}\label{GibbsBackwards}
The backwards transition operator for any translation-invariant Gibbs measure
is given by $M_{-\t}$ where $M_{-\t}$ is obtained from formula \eqref{OneStepUpdate} for negative $\t$.
\end{lem}
\textbf{Proof: }Let us first check \eqref{BackwardsTransChar} for the extremal Gibbs measures, i.e.\ let $\mu'_\phi\in\text{ex }\GG_{\theta}(\g')$, then
independently of the angle $\phi$ we have
\begin{equation*}\label{BackwardsGibbs}
\begin{split}
\int \mu'_\phi(d\s')\int M_\t(\s',d\eta')f(\s',\eta')&=\int \mu'_\phi(d\s')\int \mu_G[\s'](d\eta)f(\s',T(\eta+\t))\cr
&=\int \mu'_\phi(d\s')\int \mu_G[\s'](d\eta)f(T(\eta),T(\eta+\t))\cr
&=\int \mu_\phi(d\eta)f(T(\eta),T(\eta+\t))\cr
&=\int \mu_{\phi+\t}(d\eta)f(T(\eta-\t),T(\eta))\cr
&=\int \mu'_{\phi+\t}(d\eta')\int \mu_G[\eta'](d\s)f(T(\s-\t),\eta')\cr
&=\int \mu'_{\phi+\t}(d\eta')\int M_{-\t}(\eta',d\s') f(\s',\eta')
\end{split}
\end{equation*}
where we used equation \eqref{Kernel} two times. By linearity of the integrals the above equation also holds for any convex combination of the extremal Gibbs measures and hence for all $\mu'\in\GG_{\theta}(\g')$.
$\Cox$
\bigskip
Next we consider the entropy loss under $M_\t$ which can be expressed in terms of the backward transition operators.
\begin{lem}\label{Extension of Proposition 2.1. from DaPrLoRo02}
Suppose that $\mu'$ is a translation-invariant Gibbs measure w.r.t.\ the specification $\g'$ and also time-stationary w.r.t.\ $M_\t$. Then, for any translation-invariant measure $\nu'$ the entropic loss
can be expressed via
\begin{equation}\label{Lemma1}
\begin{split}
&h(\nu'|\mu')-h(M_\t\nu'|\mu' )=\HH_{M_\t\nu'}(\hat M_{\t,\nu'}| M_{-\t}).
\end{split}
\end{equation}
\end{lem}
In \cite{DaLoRo02} this result is proved for the case of a PCA with sitewise independent local updating. Here we extend it to our case of a weak PCA with quasilocal updating.
\bigskip
{\bf Proof: }
Let us suppress all primes in the notation.
First notice that the error we make by replacing the starting configuration outside some finite volume is of boundary order. Indeed, let $\xi$, whenever it appears, be an arbitrary but fixed configuration in $\{1,\dots,q\}^{G}$, then
\begin{equation}\label{OneStepError}
\begin{split}
\frac{M_\t(\s_{\L}\xi_{\L^c},\eta_\L)}{M_\t(\s,\eta_\L)}=\frac{\frac{\mu_{G\ba\L}[\xi_{G\ba\L}](\l^{\L}(e^{-H_\L}1_{\s_\L}1_{\eta_{\L,\t}}))}{\mu_{G\ba\L}[\xi_{G\ba\L}](\l^{\L}(e^{-H_\L}1_{\s_\L}))}}{\frac{\mu_{G\ba\L}[\s_{G\ba\L}](\l^{\L}(e^{-H_\L}1_{\s_\L}1_{\eta_{\L,\t}}))}{\mu_{G\ba\L}[\s_{G\ba\L}](\l^{\L}(e^{-H_\L}1_{\s_\L}))}}\leq e^{4\sum_{A\cap\L\neq\emptyset,A\cap\L^c\neq\emptyset}\Vert\Phi_A\Vert}=e^{o(|\L|)}
\end{split}
\end{equation}
where of course we assumed $M_\t(\s,\eta_\L)\neq0$.
Then, since $\mu$ is assumed to be time-stationary,
the local entropic loss can be expressed in terms of the backwards transition operator and an error term of boundary order. Indeed,
\begin{equation*}\label{EntropyProductionAbove}
\begin{split}
&h_{\L}(\nu|\mu)-h_\L(M_\t\nu|\mu)\cr
&=\sum_{\eta_\L\in\{1,\dots,q\}^\L}M_\t\nu(\eta_\L)\log\frac{\mu(\eta_\L)}{M_\t\nu(\eta_\L)}+\sum_{\s_{\L}\in\{1,\dots,q\}^\L}\nu(\s_{\L})\log\frac{ \nu(\s_{\L})}{\mu(\s_{\L})}\cr
&=\sum_{\eta_\L}M_\t\nu(\eta_\L)\sum_{\s_{\L}}Q_\nu(\s_{\L}|\eta_\L)\log\frac{M_\t(\s_{\L}\xi_{\L^c},\eta_\L)\nu(\s_{\L})}{M_\t\nu(\eta_\L)}\frac{M_\t\mu(\eta_\L)}{M_\t(\s_{\L}\xi_{\L^c},\eta_\L)\mu(\s_{\L})}\cr
&=\sum_{\eta_\L}M_\t\nu(\eta_\L)\sum_{\s_{\L}}Q_\nu(\s_{\L}|\eta_\L)\log\frac{\int\nu(d\s) \frac{M_\t(\s_{\L}\xi_{\L^c},\eta_\L)}{M_\t(\s,\eta_\L)}M_\t(\s,\eta_\L)1_{\s_{\L}}(\s)}{\int\mu(d\s) \frac{M_\t(\s_{\L}\xi_{\L^c},\eta_\L)}{M_\t(\s,\eta_\L)}M_\t(\s,\eta_\L)1_{\s_{\L}}(\s)}\frac{M_\t\mu(\eta_\L)}{M_\t\nu(\eta_\L)}\cr
&\leq\sum_{\eta_\L}M_\t\nu(\eta_\L)\sum_{\s_{\L}}Q_\nu(\s_{\L}|\eta_\L)\log\frac{Q_\nu(\s_{\L},\eta_\L)}{M_\t\nu(\eta_\L)}\frac{M_\t\mu(\eta_\L)}{Q_\mu(\s_{\L},\eta_\L)}+\log\frac{\sup_{\s_{\L}=\tilde\s_{\L},\eta}\frac{M_\t(\s,\eta_\L)}{M_\t(\tilde\s,\eta_\L)}}{\inf_{\s_{\L}=\tilde\s_{\L},\eta}\frac{M_\t(\s,\eta_\L)}{M_\t(\tilde\s,\eta_\L)}}\cr
&=\sum_{\eta_\L,\s_{\L}}Q_\nu(\s_{\L},\eta_\L)\log\frac{ Q_\nu(\s_{\L}|\eta_\L)}{Q_\mu(\s_{\L}|\eta_\L)}+2 o(|\L|)\cr
\end{split}
\end{equation*}
where we used \eqref{OneStepError} and $\sum_{\eta_\L\in\{1,\dots,q\}^\L}M_\t\nu(\eta_\L)Q_\nu(\s_{\L}|\eta_\L)=\sum_{\eta_\L}Q_\nu(\s_{\L},\eta_\L)=\nu(\s_{\L})$. Notice that we get a similar bound from below, i.e.
\begin{equation*}\label{EntropyProduction}
\begin{split}
&h_{\L}(\nu|\mu)-h_\L(M_\t\nu|\mu)
\geq\sum_{\eta_\L,\s_{\L}}Q_\nu(\s_{\L},\eta_\L)\log\frac{Q_\nu(\s_{\L}|\eta_\L)}{Q_\mu(\s_{\L}|\eta_\L)}-o(|\L|).\cr
\end{split}
\end{equation*}
Together we have the following identity
\begin{equation}\label{EntropyProduction_2}
\begin{split}
&h_{\L}(\nu|\mu)-h_\L(M_\t\nu|\mu)=\E^{Q_\nu}[\log\frac{Q_\nu(\s_{\L}|\eta_\L)}{Q_\mu(\s_{\L}|\eta_\L)}]\pm o(|\L|)\cr
&=\E^{Q_\nu}[\log\frac{Q_\nu(\s_{\L}|\eta_\L)}{Q_\nu(\s_{\L}|\eta)}]+\E^{Q_\nu}[\log\frac{\hat M_{\t,\nu}(\eta, \s_{\L})}{M_{-\t}(\eta,\s_{\L})}]+\E^{Q_\nu}[\log\frac{Q_\mu(\s_{\L}|\eta)}{Q_\mu(\s_{\L}|\eta_\L)}]\pm o(|\L|).\cr
\end{split}
\end{equation}
Under the volume limit, the l.h.s of \eqref{EntropyProduction_2} becomes the l.h.s of \eqref{Lemma1} and for the second summand on the r.h.s of \eqref{EntropyProduction_2} we have
\begin{equation*}\label{EntropyProduction_3}
\begin{split}
\limsup_{\L\uparrow G}\frac{1}{|\L|}\E^{Q_\nu}[\log\frac{\hat M_{\t,\nu}(\eta, \s_{\L})}{M_{-\t}(\eta,\s_{\L})}]
&=\limsup_{\L\uparrow G}\frac{1}{|\L|}\int M_\t\nu(d\eta)h_\L[\hat M_{\t,\nu}(\eta, \cdot)|M_{-\t}(\eta,\cdot)]\cr
&=\mathcal H_{M_\t\nu}(\hat M_{\t,\nu}|M_{-\t})
\end{split}
\end{equation*}
which is the r.h.s of \eqref{Lemma1}. Hence, in order to prove \eqref{Lemma1} it suffices to show that the first and the third summand on the r.h.s of \eqref{EntropyProduction_2} are $o(|\L|)$-functions.
Since the third summand is not a special case of the first summand, we have to treat them separately.
\medskip
\textbf{For the first summand in \eqref{EntropyProduction_2}} we can follow closely the arguments of \cite{DaLoRo02}, Proposition 2.1. For the reader's convenience we provide them here as well.
Let $\{i_1,\dots, i_{|\L|}\}$ be the lexicographic ordering of the elements of $\L$ and define $\L_k=\{i_1,\dots,i_k\}$ for $1\leq k\leq|\L|$ with $\L_0=\emptyset$. By Bayes' theorem we have
\begin{equation*}\label{44}
\begin{split}
Q_\nu(\s_{\L}|\eta_\L)
&=Q_\nu(\s_{i_1}|\eta_\L)Q_\nu(\s_{i_2}|\eta_\L,\s_{i_1})\cdots Q_\nu(\s_{i_{|\L|}}|\eta_\L,\s_{\L_{|\L|-1}})\cr
\end{split}
\end{equation*}
and hence we can write
\begin{equation*}\label{45}
\begin{split}
\log Q_\nu(\s_{\L}|\eta_\L)=\sum_{k=1}^{|\L|}\log Q_\nu(\s_{i_k}|\eta_\L,\s_{\L_{k-1}}).
\end{split}
\end{equation*}
By translation invariance of $Q_\nu$ we have
\begin{equation*}\label{46}
\begin{split}
\E^{Q_\nu}[\log Q_\nu(\s_{i_k}|\eta_\L,\s_{\L_{k-1}})]=\E^{Q_\nu}[\log Q_\nu(\s_0|\eta_{\theta_{-i_k}\L},\s_{\theta_{-i_k}\L_{k-1}})]
\end{split}
\end{equation*}
where $\theta_{i}\L$ denotes a lattice translation of the set $\L\subset G$ by $i\in G$.
We want to use the Shannon-McMillan-Breiman Theorem from \cite{Ba85} as used in \cite{DaLoRo02} and therefore view the conditional densities as a sequence of densities belonging to a stochastic process under the invariant measure $Q_\nu$.
Let $G_{-}:=\{i\in G:i\prec0\}$, where $\prec$ is the lexicographic order. By the Shannon-McMillan-Breiman Theorem the stationary and ergodic process has the ``Asymptotic Equipartition Property'' and thus the sequence of conditional measures has an almost sure limit. In particular it is a Cauchy sequence and hence, for every $\e>0$ there are finite sets $A\subset G$, $B\subset G_{-}$ such that whenever $A\subset V$ and $B\subset W\subset G_{-}$ we have
\begin{equation*}\label{Shannon-Breiman-McMillan}
\begin{split}
\Bigr|\E^{Q_\nu}[\log Q_\nu(\s_0|\eta_{V},\s_{W})]-\E^{Q_\nu}[\log Q_\nu(\s_0|\eta_{A},\s_{B})]\Bigl|<\e.
\end{split}
\end{equation*}
Also notice that for all $\L$ and $i_k$ there is the simple bound
\begin{equation*}\label{BrutalBound}
\begin{split}
\E^{Q_\nu}[\log Q_\nu(\s_0|\eta_{\theta_{-i_k}\L},\s_{\theta_{-i_k}\L_{k-1}})]\leq\log q.
\end{split}
\end{equation*}
Now we can separate bulk and boundary terms. This gives
\begin{equation}\label{TogetherA}
\begin{split}
&\limsup_{\L\uparrow G}\frac{1}{|\L|}\sum_{k=1}^{|\L|}
\E^{Q_\nu}[\log Q_\nu(\s_0|\eta_{\theta_{-i_k}\L},\s_{\theta_{-i_k}\L_{k-1}})]\cr
&=\limsup_{\L\uparrow G}\frac{1}{|\L|}\Bigl\{\sum_{k:A\subset\theta_{-i_k}\L,B\subset\theta_{-i_k}\L_{k-1}}
\E^{Q_\nu}[\log Q_\nu(\s_0|\eta_{\theta_{-i_k}\L},\s_{\theta_{-i_k}\L_{k-1}})]\cr
&\hspace{5cm}+|\{k:A\not\subset\theta_{-i_k}\L\text{ or }B\not\subset\theta_{-i_k}\L_{k-1}\}|\log q\Bigr\}.\cr
\end{split}
\end{equation}
For large $\L$ the first term on the r.h.s of \eqref{TogetherA} contains the bulk of the summands. The second term on the r.h.s of \eqref{TogetherA} is of boundary order. Hence
\begin{equation*}\label{Together2}
\begin{split}
&\limsup_{\L\uparrow G}\frac{1}{|\L|}\E^{Q_\nu}[\log Q_\nu(\s_{\L}|\eta_\L)]
\leq\E^{Q_\nu}[\log Q_\nu(\s_0|\eta_{A},\s_{B})]+\e\hspace{1cm}\text{ and }\cr
&\limsup_{\L\uparrow G}\frac{1}{|\L|}\E^{Q_\nu}[\log Q_\nu(\s_{\L}|\eta_\L)]
\geq\E^{Q_\nu}[\log Q_\nu(\s_0|\eta_{A},\s_{B})]-\e.\cr
\end{split}
\end{equation*}
Letting $\e$ go to zero we have
\begin{equation*}\label{Together1}
\begin{split}
&\limsup_{\L\uparrow G}\frac{1}{|\L|}\E^{Q_\nu}[\log Q_\nu(\s_{\L}|\eta_\L)]
=\E^{Q_\nu}[\log Q_\nu(\s_0|\eta,\s_{G_{-}})].
\end{split}
\end{equation*}
Using the same arguments as above one also shows
\begin{equation*}\label{Together3}
\begin{split}
&\limsup_{\L\uparrow G}\frac{1}{|\L|}\E^{Q_\nu}[\log Q_\nu(\s_{\L}|\eta)]
=\E^{Q_\nu}[\log Q_\nu(\s_0|\eta,\s_{G_{-}})].
\end{split}
\end{equation*}
\textbf{For the third summand in \eqref{EntropyProduction_2}} recall
equation \eqref{OneStepError}. Of course, $M_{-\t}$ satisfies such an error bound as well, and we have
\begin{equation*}\label{BackwardsGibbsError}
\begin{split}
\frac{M_{-\t}(\eta_{\L}\xi_{\L^c},\s_{\L})}{M_{-\t}(\eta,\s_{\L})}\leq e^{o(|\L|)}.
\end{split}
\end{equation*}
Hence with Lemma \ref{GibbsBackwards} we can write
\begin{equation*}\label{BackwardsGibbs3}
\begin{split}
\frac{Q_{\mu}(\s_{\L}|\eta_\L)}{Q_{\mu}(\s_{\L}|\eta)}
=\int M_\t\mu(d\tilde\eta|\eta_\L)\frac{M_{-\t}(\eta_\L\tilde\eta_{\L^c},\s_\L)}{M_{-\t}(\eta,\s_{\L})}
\leq e^{o(|\L|)}
\end{split}
\end{equation*}
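For the two-sided estimate asserted below one also needs a matching lower bound. A hedged sketch, assuming the one-step error bound \eqref{OneStepError} (and hence \eqref{BackwardsGibbsError}) is two-sided, so that the integrand above is also bounded below by $e^{-o(|\L|)}$ pointwise:
\begin{equation*}
\frac{Q_{\mu}(\s_{\L}|\eta_\L)}{Q_{\mu}(\s_{\L}|\eta)}
=\int M_\t\mu(d\tilde\eta|\eta_\L)\frac{M_{-\t}(\eta_\L\tilde\eta_{\L^c},\s_\L)}{M_{-\t}(\eta,\s_{\L})}
\geq e^{-o(|\L|)}
\end{equation*}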
and thus $\E^{Q_{\nu}}[\log\frac{Q_{\mu}(\s_{\L}|\eta)}{Q_{\mu}(\s_{\L}|\eta_\L)}]=\pm o(|\L|)$.
As a result, in the infinite-volume limit, the $o(|\L|)$ terms vanish and this completes the proof.
$\Cox$
\bigskip
Recall that our goal is to show that time-stationary measures must be Gibbs measures. The following result is slightly more general. Using the characterization of the entropic loss from the preceding lemma we will show that zero entropic loss implies the single site DLR equation and thus the Gibbs property. The main tools for the proof are an adaptation of the Gibbs variational principle and an argument from \cite{Ku84} to infer the DLR equation from the backwards transition operator. For more general background about the Gibbs variational principle also in the context of generalized Gibbsian measures see \cite{Ge11,KuLeRe04,EnVe04}.
\begin{thm}\label{ZeroEntrGibbs}
Suppose that $\mu'$ is a translation-invariant Gibbs measure w.r.t the specification $\g'$ and also time-stationary w.r.t $M_\t$ where $0<\t<2\pi/q$. Then, for any translation-invariant measure $\nu'$ with
\begin{equation*}\label{zwei}
\begin{split}
&h(\nu'|\mu')=h(M_\t\nu'|\mu' )
\end{split}
\end{equation*}
we have that $\nu'$ is also a Gibbs measure for the same specification, i.e $\nu'\in \GG_\theta(\g')$.
Conversely if $\nu'\in \GG_\theta(\g')$ then also $h(\nu'|\mu')=h(M_\t\nu'|\mu' )$.
\end{thm}
Note that Theorem \ref{ZeroEntrGibbs} and the following results are valid for general $\t\in\R$; we set $0<\t<2\pi/q$ only to keep the presentation more transparent.
\bigskip
{\bf Proof: }Let us again suppress all primes in the notation whenever unambiguous.
\textbf{Step 1: }We show that
\begin{equation}\label{Step1}
\begin{split}
\hat M_{\t,\nu}(\eta,\s_{\L})=M_{-\t}(\eta,\s_{\L})\hspace{0.5cm}\text{for }M_\t\nu\text{-a.a. }\eta
\end{split}
\end{equation}
under the hypothesis of the theorem. To do so we extend the proof of the Gibbs variational principle in \cite{Ge11} Chapter 15.4 to our situation, where we have to deal with an additional dependence on the future configuration $\eta$ distributed according to $M_\t\nu$.
Define $h_\L^\eta(\hat M_\nu|\hat M_\mu):=h_\L[\hat M_{\t,\nu}(\eta,\cdot)|M_{-\t}(\eta,\cdot)]$, then by Fatou's Lemma and Lemma \ref{Extension of Proposition 2.1. from DaPrLoRo02} we have
\begin{equation}\label{Zero}
\begin{split}
0=\HH_{M_\t\nu}(\hat M_\nu|\hat M_\mu)&=\int M_\t\nu(d\eta) \limsup_{\L}\frac{1}{|\L|}h_\L^\eta(\hat M_\nu|\hat M_\mu)\cr
&\geq\limsup_{\L}\frac{1}{|\L|}\int M_\t\nu(d\eta) h_\L^\eta(\hat M_\nu|\hat M_\mu)=0.\cr
\end{split}
\end{equation}
In particular for all cofinal sequences of finite sets
$0=\lim_{\L_n\uparrow G}\frac{1}{|\L_n|}\int M_\t\nu(d\eta) h_{\L_n}^\eta(\hat M_\nu|\hat M_\mu)$.
Since the local relative entropy $h_\L^\eta(\hat M_\nu|\hat M_\mu)$ must be finite for all $\L$ and $M_\t\nu$-a.a. $\eta$, there exists a density
\begin{equation*}\label{vier2}
\begin{split}
f_\L^\eta:=\frac{d\hat M_\nu(\eta, \cdot |_{\L})}{d\hat M_\mu(\eta, \cdot |_{\L})}\cr
\end{split}
\end{equation*}
depending on configurations inside $\L$ only.
\medskip
\textbf{Step 1a: }We show that expectations of local entropy densities behave nicely w.r.t volumes in the following sense:
for any $\d>0$ and any cube $C\supset\L$ there exists a finite set $\D$ with
$\D\supset C$ such that
\begin{equation*}\label{funf}
\begin{split}
\int M_\t\nu(d\eta) h_{\D}^\eta(\hat M_\nu|\hat M_\mu)-\int M_\t\nu(d\eta) h_{\D\setminus\L}^\eta(\hat M_\nu|\hat M_\mu)\leq\d.\cr
\end{split}
\end{equation*}
The proof is an integrated version of the first step of the proof of the variational principle in \cite{Ge11} Chapter 15.4. Indeed, by \eqref{Zero} we can find $n\geq1$ such that for the centered hypercube $\L_n$ we have $|\L_n|\geq|C|$ and $|\L_n|^{-1}\int M_\t\nu(d\eta) h_{\L_n}^\eta(\hat M_\nu|\hat M_\mu)\leq \d/(2^d|C|)$. Now we can choose an integer $m\geq1$ in such a way that
\begin{equation*}\label{sechs}
\begin{split}
m^d|C|\leq |\L_n|\leq (2m)^d|C|.
\end{split}
\end{equation*}
Further let us choose $m^d$ lattice sites $i(1),\dots, i(m^d)$ in such a way that the translates $C(k) = C + i(k)$, $1\leq k\leq m^d$ are pairwise disjoint subsets of $\L_n$. For
each $1\leq k\leq m^d$ we put $W(k) = C(1)\cup\dots\cup C(k)$ and $\L(k) = \L + i(k)$. Then
using the monotonicity of the relative density we can write
\begin{equation*}\label{sieben}
\begin{split}
\frac{1}{m^d}&\sum_{k=1}^{m^d}[\int M_\t\nu(d\eta) h_{W(k)}^\eta(\hat M_\nu|\hat M_\mu)-\int M_\t\nu(d\eta) h_{W(k)\setminus\L(k)}^\eta(\hat M_\nu|\hat M_\mu)]\cr
&\leq\frac{1}{m^d}\sum_{k=1}^{m^d}[\int M_\t\nu(d\eta) h_{W(k)}^\eta(\hat M_\nu|\hat M_\mu)-\int M_\t\nu(d\eta) h_{W(k)\setminus C(k)}^\eta(\hat M_\nu|\hat M_\mu)]\cr
&=\frac{1}{m^d}\int M_\t\nu(d\eta) h_{W(m^d)}^\eta(\hat M_\nu|\hat M_\mu)\cr
&\leq\frac{1}{m^d}\int M_\t\nu(d\eta) h_{\L_n}^\eta(\hat M_\nu|\hat M_\mu)\leq 2^d|C||\L_n|^{-1}\int M_\t\nu(d\eta) h_{\L_n}^\eta(\hat M_\nu|\hat M_\mu)\leq\d.\cr
\end{split}
\end{equation*}
Consequently, there exists an index $k$ such that $$\int M_\t\nu(d\eta) h_{W(k)}^\eta(\hat M_\nu|\hat M_\mu)-\int M_\t\nu(d\eta) h_{W(k)\setminus\L(k)}^\eta(\hat M_\nu|\hat M_\mu)\leq\d.$$ The claim of Step 1a thus follows by putting $\D:= W(k)-i(k)$ and using the translation-invariance of $\hat M_\nu(\eta, \cdot |_{\L})$ and $\hat M_\mu(\eta, \cdot |_{\L})$ under $M_\t\nu(d\eta)$.
\medskip
\textbf{Step 1b: }We want to show that closeness of integrated entropy densities implies closeness of integrated densities. This again is an integrated version of the analogous statement in \cite{Ge11} Chapter 15.4. More precisely we show that for each $\e>0$ there exists some $\d>0$ such that $$\int M_\t\nu(d\eta)\int\hat M_\mu(d\s,\eta)|f_\D^\eta(\s)-f_{\D\setminus\L}^\eta(\s)|\leq\e$$ whenever $\L\subset\D$ and $$\int M_\t\nu(d\eta) h_{\D}^\eta(\hat M_\nu|\hat M_\mu)-\int M_\t\nu(d\eta) h_{\D\setminus\L}^\eta(\hat M_\nu|\hat M_\mu)\leq\d.$$
Indeed, let $\psi(x):=x\log x-x+1$ for $x\geq0$, then
\begin{equation*}\label{acht}
\begin{split}
&\int M_\t\nu(d\eta) h_{\D}^\eta(\hat M_\nu|\hat M_\mu)-\int M_\t\nu(d\eta) h_{\D\setminus\L}^\eta(\hat M_\nu|\hat M_\mu)\cr
&=\int M_\t\nu(d\eta)\int \hat M_\nu(\eta,d\s)\log\frac{f_\D^\eta(\s)}{f_{\D\setminus\L}^\eta(\s)}\cr
&=\int M_\t\nu(d\eta)\int \hat M_\mu(\eta,d\s)f_{\D\setminus\L}^\eta(\s)\psi(\frac{f_\D^\eta(\s)}{f_{\D\setminus\L}^\eta(\s)}).\cr
\end{split}
\end{equation*}
Notice that there is a number $0<r<\infty$ such that $|x-1|\leq r\psi(x) + \e/2$ for all $x\geq0$. By putting $\d = \e/(2r)$ we can thus write
\begin{equation*}\label{neun}
\begin{split}
&\int M_\t\nu(d\eta)\int\hat M_\mu(d\s,\eta)|f_\D^\eta(\s)-f_{\D\setminus\L}^\eta(\s)|\cr
&=\int M_\t\nu(d\eta)\int\hat M_\mu(d\s,\eta)f_{\D\setminus\L}^\eta(\s)|\frac{f_\D^\eta(\s)}{f_{\D\setminus\L}^\eta(\s)}-1|\cr
&\leq r\int M_\t\nu(d\eta)\int \hat M_\mu(\eta,d\s)f_{\D\setminus\L}^\eta(\s)\psi(\frac{f_\D^\eta(\s)}{f_{\D\setminus\L}^\eta(\s)})+\frac{\e}{2}\leq\e.\cr
\end{split}
\end{equation*}
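The elementary inequality $|x-1|\leq r\psi(x)+\e/2$ used above can be seen as follows (a sketch of a standard argument): $\psi$ is strictly convex with $\psi(1)=\psi'(1)=0$, so $\psi>0$ away from $1$. For $|x-1|\leq\e/2$ the inequality is trivial, while on $\{x\geq0:|x-1|\geq\e/2\}$ the function $x\mapsto|x-1|/\psi(x)$ is continuous and tends to zero as $x\to\infty$ (since $\psi(x)/x\to\infty$), so
\begin{equation*}
r:=\sup_{x\geq0,\,|x-1|\geq\e/2}\frac{|x-1|}{\psi(x)}<\infty
\end{equation*}
does the job.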
\medskip
\textbf{Step 1c: }To prove the claim of Step 1, i.e that $M_\t\nu$-a.s. we have $\hat M_{\t,\nu}(\eta,\s_{\L})=M_{-\t}(\eta,\s_{\L})$,
we show that the backwards operator corresponding to $\nu$ satisfies the DLR equation for the specification given by the backwards transition operator corresponding to the Gibbs measure $\mu$. Indeed, let $g$ be a local test-function and $\e>0$.
Since $\mu$ is a Gibbs measure
there exists a local function $\tilde g^\eta$, with support independent of $\eta$ and depending only on configurations outside $\L$, such that
$\sup_{\eta}\Vert\g^\eta_\L(g|\cdot)-\tilde g^\eta(\cdot)\Vert<\e$ where $$\g^\eta_\L(d\tilde\s|\s_{\L^c}):=Q_\mu(d\tilde\s|\s_{\L^c},\eta).$$
To see this we note that $M_{-\t}(\eta,d\s)$, viewed as a measure in the variable $\s$, lies in the Dobrushin region uniformly in $\eta$.
This is true by arguments provided in the proof of Step 2 below equation \eqref{TauCoarse}.
Let $C\supset \L$ be a cube such that $g$ depends only on configuration on $C$ and $\tilde g^\eta$ depends only on $C\setminus\L$. Choose $\d$ in terms of $\e$ as in Step 1b, and define $\D$ in terms of $C$ and $\d$ as in Step 1a.
\begin{equation*}\label{zehn}
\begin{split}
&\int M_\t\nu(d\eta)|\int\hat M_{\t,\nu}(\eta,d\s)g(\s)-\int\hat M_{\t,\nu}(\eta,d\s)\g^\eta_\L(g|\s)|\cr
&\leq\int M_\t\nu(d\eta)\Bigl[\int\hat M_{\t,\nu}(\eta,d\s)|\g^\eta_\L(g|\s)-\tilde g^\eta(\s)|\cr
&\hspace{1cm}+|\int\hat M_{\t,\nu}(\eta,d\s)\tilde g^\eta(\s)-\int M_{-\t}(\eta,d\s)f^\eta_{\D\setminus\L}(\s)\tilde g^\eta(\s)|\cr
&\hspace{1cm}+\int M_{-\t}(\eta,d\s)f^\eta_{\D\setminus\L}(\s)|\tilde g^\eta(\s)-\g^\eta_\L(g|\s)|\cr
&\hspace{1cm}+|\int M_{-\t}(\eta,d\s)(f^\eta_{\D\setminus\L}(\s)(\g^\eta_\L(g|\s)-g(\s))|\cr
&\hspace{1cm}+\Vert g\Vert\int M_{-\t}(\eta,d\s)|f^\eta_{\D\setminus\L}(\s)-f^\eta_{\D}(\s)|\cr
&\hspace{1cm}+|\int M_{-\t}(\eta,d\s)f^\eta_{\D}(\s)g(\s)-\int\hat M_{\t,\nu}(\eta,d\s)g(\s)|\Bigr]\cr
\end{split}
\end{equation*}
Since $\tilde g^\eta$ depends only on $\D\setminus\L$ and $g$ depends only on $\D$, the second and the last term on the right are zero. The fourth term vanishes because $\int M_{-\t}(\eta,d\s)\int\g^\eta(d\tilde\s|\s)=\int M_{-\t}(\eta,d\s)$ and $f^\eta_{\D\setminus\L}$ depends only on $\L^c$. Due to the choice of $\tilde g^\eta$, the first and the third term are each at most $\e$. The only non-trivial term is the fifth one. This term is not larger than $\Vert g\Vert\e$ because of
our choice of $\D$, see Step 1b. As $\e$ was arbitrary, we conclude that $\int M_\t\nu(d\eta)|\int\hat M_{\t,\nu}(\eta,d\s)g(\s)-\int\hat M_{\t,\nu}(\eta,d\s)\g^\eta_\L(g|\s)|=0$. Hence $M_\t\nu$-a.s. we have $\int\hat M_{\t,\nu}(\eta,d\s)g(\s)=\int\hat M_{\t,\nu}(\eta,d\s)\g^\eta_\L(g|\s)$, and thus $\hat M_{\t,\nu}(\eta,\cdot)$ is the unique Gibbs measure for $\g_\L^\eta$.
In particular equation \eqref{Step1} follows.
\bigskip
\textbf{Step 2: }
As a consequence of Step 1, equation \eqref{Step1} holds and we can conclude that $Q_\nu$-a.s.
\begin{equation*}\label{Together_5}
\begin{split}
Q_\nu(\s_{\L}|\s_{\L^c},\eta)=Q_\mu(\s_{\L}|\s_{\L^c},\eta).
\end{split}
\end{equation*}
In what follows, we use the basic idea from \cite{Ku84} to infer from this the single-site DLR equation for $\nu$. Notice that for $i\in G$, finite sets $\L\subset G\setminus i$, $\bar\L\subset G$ and configurations with $\tilde\s_{i^c}=\s_{i^c}$ we have
\begin{equation*}\label{Together_6}
\begin{split}
\mu(\s_i|\s_{\L})Q_\mu(\eta_{\bar\L}|\s_{\L\cup i})Q_\mu(\tilde\s_{i}|\s_{\L},\eta_{\bar\L})=
\mu(\tilde\s_i|\s_{\L})Q_\mu(\eta_{\bar\L}|\tilde\s_{\L\cup i})Q_\mu(\s_{i}|\s_{\L},\eta_{\bar\L})
\end{split}
\end{equation*}
where $\mu$ is a Gibbs measure for the specification $\g'$ given in \eqref{Coarse_Specification}.
Letting $\L$ go to $G\setminus i$ and $\bar\L$ go to $G$ we get $Q_\mu$-a.s.
\begin{equation}\label{Kuensch1}
\begin{split}
\frac{\mu(\s_i|\s_{i^c})}{\mu(\tilde\s_i|\s_{i^c})}=\frac{\g'(\s_i|\s_{i^c})}{\g'(\tilde\s_i|\s_{i^c})}=\lim_{\bar\L\uparrow G}\frac{M_\t(\tilde\s,\eta_{\bar\L})}{M_\t(\s,\eta_{\bar\L})}\frac{Q_\mu(\s_{i}|\s_{i^c},\eta)}{Q_\mu(\tilde\s_{i}|\s_{i^c},\eta)}
\end{split}
\end{equation}
where we note that $Q_\mu$-a.s. $\eta$ is accessible from $\s,\tilde\s$ and vice versa. We will also prove below that the limit
$\lim_{\bar\L\uparrow G}\frac{M_\t(\tilde\s,\eta_{\bar\L})}{M_\t(\s,\eta_{\bar\L})}$ indeed exists. In the same way one gets $Q_\nu$-a.s.
\begin{equation}\label{Kuensch2}
\begin{split}
\frac{\nu(\s_i|\s_{i^c})}{\nu(\tilde\s_i|\s_{i^c})}=\lim_{\bar\L\uparrow G}\frac{M_\t(\tilde\s,\eta_{\bar\L})}{M_\t(\s,\eta_{\bar\L})}\frac{Q_\nu(\s_{i}|\s_{i^c},\eta)}{Q_\nu(\tilde\s_{i}|\s_{i^c},\eta)}
\end{split}
\end{equation}
and hence $Q_\nu$-a.s. $\frac{\nu(\s_i|\s_{i^c})}{\nu(\tilde\s_i|\s_{i^c})}=\frac{\g'(\s_i|\s_{i^c})}{\g'(\tilde\s_i|\s_{i^c})}$. Summing over the $\tilde\s_i$ we get the desired single-site DLR-equation $$\nu(\s_i|\s_{i^c})=\g'(\s_i|\s_{i^c})$$ $Q_\nu$-a.s. and thus $\nu$ is a Gibbs measure for $\g'$.
\medskip
What remains to be proved is that the limit in \eqref{Kuensch1} and \eqref{Kuensch2} indeed exists. Let us therefore consider $M_\t$, $\tilde\s'_{i^c}=\s'_{i^c}$ and $\eta'_{\bar\L}$ such that $M_\t(\s',\eta'_{\bar\L})\neq0\neq M_\t(\tilde\s',\eta'_{\bar\L})$ for all $\bar\L$. Then we have, using the notation from \eqref{OneStepUpdate}
\begin{equation*}\label{Together_9}
\begin{split}
\frac{M_\t(\tilde\s',\eta_{\bar\L}')}{M_\t(\s',\eta_{\bar\L}')}&=\frac{\mu_{G\ba i}[\s'_{G\ba i}](\l(e^{-H_i}1_{\tilde\s'_i}1_{\eta'_{\bar\L,\t}}))}{\mu_{G\ba i}[\s'_{G\ba i}](\l(e^{-H_i}1_{\s'_i}1_{\eta'_{\bar\L,\t}}))}\frac{\mu_{G\ba i}[\s'_{G\ba i}](\l(e^{-H_i}1_{\s'_i}))}{\mu_{G\ba i}[\s'_{G\ba i}](\l(e^{-H_i}1_{\tilde\s'_i}))}\cr
&=\frac{\mu_{G\ba i}[\s'_{G\ba i}](\l(e^{-H_i}1_{\tilde\s'_i}1_{\eta'_{i,\t}})|1_{\eta'_{\bar\L\setminus i,\t}})}{\mu_{G\ba i}[\s'_{G\ba i}](\l(e^{-H_i}1_{\s'_i}1_{\eta'_{i,\t}})|1_{\eta'_{\bar\L\setminus i,\t}})}\frac{\g'(\s'_i|\s'_{G\ba i})}{\g'(\tilde\s'_i|\s'_{G\ba i})}.\cr
\end{split}
\end{equation*}
Now recall that we are already in the Dobrushin region for the coarse-graining $T$ with $q\geq q_0(\Phi)$. The coarse-graining
\begin{equation}\label{TauCoarse}
\begin{split}
[0,2\pi)\mapsto \{[0,\t),[\t,2\pi/q),[2\pi/q,2\pi/q+\t),\dots,[2\pi-\t,2\pi)\}
\end{split}
\end{equation}
is even finer, and thus the conditional first-layer specifications lie even deeper in the Dobrushin region. The idea here is that by conditioning on configurations in small segments of the sphere, one can control the possible interaction strength between spins in such a way that the conditional specification in the first-layer model is in a parameter regime where there is a unique conditional Gibbs measure. Since the individual segments of the sphere under the equal-arc discretization with $q$ become even smaller when we discretize the sphere as in \eqref{TauCoarse}, a fortiori the Dobrushin condition of weak interaction is satisfied. In particular there is a unique limiting measure given the conditioning $\s'_{G\ba i}\cap\eta'_{{G\ba i},\t}$. In other words
\begin{equation*}\label{Together_10}
\begin{split}
\lim_{\bar\L\uparrow G}&\frac{\mu_{G\ba i}[\s'_{G\ba i}](\l(e^{-H_i}1_{\tilde\s'_i}1_{\eta'_{i,\t}})|1_{\eta'_{\bar\L\setminus i,\t}})}{\mu_{G\ba i}[\s'_{G\ba i}](\l(e^{-H_i}1_{\s'_i}1_{\eta'_{i,\t}})|1_{\eta'_{\bar\L\setminus i,\t}})}\cr
&=\frac{\mu_{G\ba i}[\s'_{G\ba i}\cap\eta'_{{G\ba i},\t}](\l(e^{-H_i}1_{\tilde\s'_i}1_{\eta'_{i,\t}}))}{\mu_{G\ba i}[\s'_{G\ba i}\cap\eta'_{{G\ba i},\t}](\l(e^{-H_i}1_{\s'_i}1_{\eta'_{i,\t}}))}
\end{split}
\end{equation*}
exists where the limiting measure $\mu_{G\ba i}[\s'_{G\ba i}\cap\eta'_{{G\ba i},\t}]$ is the unique Gibbs measure conditional to the configuration $\s'_{G\ba i}\cap\eta'_{{G\ba i},\t}$ which is in the $\t$-dependent coarse-graining \eqref{TauCoarse}.
\medskip
\textbf{Step 3: }The converse statement follows from the fact that if $\nu'\in \GG_\theta(\g')$ then $\hat M_{\t,\nu'}=M_{-\t}$ by Lemma \ref{GibbsBackwards} and hence the l.h.s of equation \eqref{Lemma1} in Lemma \ref{Extension of Proposition 2.1. from DaPrLoRo02} is zero. We conclude $h(\nu'|\mu')=h(M_\t\nu'|\mu' )$.
$\Cox$
\bigskip
Notice that the uniform mixture $\mu'_*$ as in \eqref{MixedMeasure} is a Gibbs measure for $\g'$, translation-invariant and time-stationary. Hence we can apply Theorem \ref{ZeroEntrGibbs} to our model with $\mu'=\mu'_*$ and the following corollary holds.
\begin{cor}\label{MainCorollary}
Take $0<\t<\frac{2\pi}{q}$ fixed,
then the lattice-translation invariant measures which are invariant under the one time-step updating with the transition operator $M_\t$ must be contained in the set of lattice-translation invariant measures in $\GG_{\th}(\g')$.
\end{cor}
Let us come to the main conclusion regarding ergodicity and uniqueness of time-stationary measures for our model.
\section{Stationary measures and rotating states}
Notice that we have assembled essentially two properties of our process. In simple words those are: Translation-invariant time-stationary measures must be Gibbs measures and translation-invariant Gibbs measures rotate. The rotation property alone already induces non-ergodicity. Using the combination of the two properties we can draw conclusions about the number of time-stationary measures and prove part 3 in Theorem \ref{PCA_Theorem}.
\begin{thm}\label{IrrationRational}
Let $0<\t<\frac{2\pi}{q}$; then there are two scenarios.
\begin{enumerate}
\item If $\frac{\t}{2\pi}$ is rational, the Markov process with transition operator $M_\t$ has a continuum of translation-invariant and time-stationary measures. Further the dynamics is non-ergodic with periodic orbits given by $$\O_\a:=\{\mu'_\phi\in\text{ex }\GG_{\th}(\g'):\phi=n\t+\a\text{ for some }n\in\N\}.$$
\item If $\frac{\t}{2\pi}$ is irrational, the process has a unique translation-invariant and time-stationary measure $\mu_*'=\frac{1}{2\pi}\int_0^{2\pi}d\phi\mu'_\phi$ where $\mu'_\phi\in\text{ex }\GG_{\th}(\g')$. The dynamics is non-ergodic since $M^n_\t \mu'_{\phi} \not\rightarrow \mu'_*$ as $n\to\infty$.
All orbits are dense in
$\text{ex }\GG_{\th}(\g')$ w.r.t the weak topology.
\end{enumerate}
\end{thm}
\textbf{Proof: }By Corollary \ref{MainCorollary} the investigation of translation-invariant and time-stationary measures can be restricted to the set $\GG_{\th}(\g')$.
In case $\frac{\t}{2\pi}$ is rational there exist integers $n,m$ such that $n\t=2\pi m$. Hence for all $0\leq\a<2\pi$ the equal-weight measure $\frac{1}{n}\sum_{k=1}^{n}\mu'_{k\t+\a}$
is time-stationary, where $\mu'_{k\t+\a}$ denotes the extremal translation-invariant Gibbs measure with order parameter $(k\t+\a)\text{mod}_{2\pi}\in S^1$. Further we have $M_\t^{n}\mu'_\phi=\mu'_{\phi+n\t}=\mu'_{\phi+2\pi m}=\mu'_\phi$ for all $\mu'_\phi\in\text{ex }\GG_{\th}(\g')$, and hence for a given starting measure $\mu'_\phi$ the set $\{\mu'_\psi\in\text{ex }\GG_{\th}(\g'): \psi=(\phi+n\t)\text{mod}_{2\pi}\text{ for some }n\in\N\}$ is a periodic orbit for the dynamics. In particular the process cannot be ergodic.
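For instance (our own illustrative choice), take $\t=\pi/q$, which satisfies $0<\t<2\pi/q$ and $\t/2\pi=1/(2q)$. Then $2q\,\t=2\pi$, the orbit of $\mu'_\a$ is
\begin{equation*}
\O_\a=\{\mu'_{\a},\mu'_{\a+\pi/q},\dots,\mu'_{\a+(2q-1)\pi/q}\},
\end{equation*}
and the corresponding time-stationary measure is the equal-weight mixture $\frac{1}{2q}\sum_{k=1}^{2q}\mu'_{k\pi/q+\a}$.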
\medskip
If $\frac{\t}{2\pi}$ is irrational the only possible symmetrically mixed measure as in the first case
is the measure $\mu'_*$. A more detailed measure-theoretic proof of this fact can be found in \cite{JaKu12} Proposition 5.1, where one can replace the Markov semigroup by $(M_\t^n)_{n\in\N}$. For the second statement notice that $M^n_\t \mu'_{\phi}=\mu'_{\phi+n\t}\in\text{ex }\GG_{\th}(\g')$ for all $n\in\N$, but $\mu'_*\not\in\text{ex }\GG_{\th}(\g')$, and hence $M^n_\t \mu'_{\phi} \not\rightarrow \mu'_*$. For the third statement note that for any $0\leq\phi,\psi<2\pi$ there exists a sequence $n_k\to\infty$ such that $(\psi+n_k\t)\text{mod}_{2\pi}\to\phi$ as $k\to\infty$. Hence it suffices to show $\mu'_\phi\to\mu'_0$ weakly as $\phi\to0$. But this is true since for the test-functions $1_{\o'_\L}$ we have
\begin{equation*}\label{WeakConv}
\begin{split}
\lim_{\phi\to0}\mu'_\phi(\o'_\L)&=\lim_{\phi\to0}\int\mu_\phi(d\o)1_{\o'_\L}(\o)
=\int\mu_0(d\o)\lim_{\phi\to0}\g_{\L}(1_{\o'_\L-\phi}|\o_{\L^c})\cr
\end{split}
\end{equation*}
where we used the DLR-equation and the dominated convergence theorem. Now by the continuity of the Hamiltonian and the fact that the a priori measure is the Lebesgue measure on $S^1$, we have $\lim_{\phi\to0}\g_{\L}(1_{\o'_\L-\phi}|\o_{\L^c})=\g_{\L}(1_{\o'_\L}|\o_{\L^c})$ for any boundary condition $\o_{\L^c}$. In particular the process again cannot be ergodic.
$\Cox$
\bigskip
\textbf{Proof of Theorem \ref{PCA_Theorem}: }
The proofs of the locality properties in the past and in the future are given in Propositions \ref{Mixing_Past} and \ref{Mixing_Futur}. For $0\leq\t\leq2\pi/q$ the updating is Bernoulli as can be seen from the representation \eqref{rewriting}. Further defining $\II:=\text{ex }\GG_{\th}(\g')$ in Corollary \ref{QuasilocalBijection} we prove the desired properties of the function $\psi$ (this is part 1). The rotation property is proved to hold in Proposition \ref{RotationProp} (this is part 2). The two possible scenarios for time-stationary measures and periodic orbits are proved in Theorem \ref{IrrationRational} (this is part 3).
$\Cox$ | {"config": "arxiv", "file": "1404.3314.tex"} |
\subsection{Robustness to Heterogeneous Interference vs. Estimation Efficiency: a Bias-Variance Tradeoff}
\label{subsection:efficiency-loss}
In this section, we formally demonstrate that the specification of interference structures discussed in Section \ref{subsec:exchangeability} induces a bias-variance tradeoff in treatment effect estimation. We start with a simple example to illustrate the intuition. Suppose we are interested in estimating the average treatment effect from observational data for a population with a clustering structure, and we suspect that there \emph{may} be (homogeneous) interference within clusters. In this case, we can set $m=1$ in our conditional exchangeability framework and define the overall average treatment effect as\footnote{When all units are i.i.d. and also independent of the clustering structure, this effect $\beta$ has a simpler representation $\mathbb{E}[Y_{c,i}(1,G_{c,i})-Y_{c,i}(0,G_{c,i})]$, which is a direct generalization of the classical average treatment effect.}
\begin{align}
\label{eq:tradeoff-simple}
\beta \coloneqq \sum_{g_1=0}^{n-1} P(g_1) \cdot \beta_1(g_1),
\end{align}
where $P(g_1):=P(G_{c,i} = g_1)$ is the marginal probability that unit $i$ has $g_1$ treated neighbors, assumed here to be the same for all $i$. We can consistently estimate $\beta$ by estimating the direct effect $\beta_1(g_1)$ with AIPW estimators and the marginal probability $P(g_1)$ with its empirical counterpart for each $g_1$, and plugging these into the sum. We refer to this estimator as the plug-in estimator.
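In symbols, with $\hat\beta_1(g_1)$ the AIPW estimates, $\hat P(g_1)$ the empirical frequencies, and $N$ the total number of units (our notation), the plug-in estimator described above reads
\begin{align*}
\hat\beta^{\mathrm{plug}} \coloneqq \sum_{g_1=0}^{n-1} \hat P(g_1)\cdot\hat\beta_1(g_1),
\qquad
\hat P(g_1)=\frac{1}{N}\sum_{c,i}\mathbf{1}\{G_{c,i}=g_1\}.
\end{align*}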
If, however, there is in fact \emph{no} interference among units and hence SUTVA holds, then $\beta_1(g_1) = \beta$ for all $g_1$, where $\beta$ can be interpreted as the classical average treatment effect (e.g., \cite{imbens2015causal}).\footnote{Under SUTVA, the potential outcomes can be written as $Y_{c,i}(z)$ for $z \in \{0,1\}$ and $\beta = \frac{1}{n} \sum_{i=1}^n \+E\left[Y_{c,i}(1) - Y_{c,i}(0) \right]$. If $(X_{c,i}, Y_{c,i}, Z_{c,i})$ is randomly sampled from the same superpopulation for all $i$, then $\beta = \+E\left[Y_{c,i}(1) - Y_{c,i}(0) \right]$, which is identical to the classical definition of average treatment effects (e.g., \cite{imbens2015causal}). }
In this case, the conventional AIPW estimator (e.g., \cite{robins1994estimation}) for $\beta$ is also consistent. Moreover, it is semiparametrically efficient, and is therefore in general more efficient than the plug-in estimator.
On the other hand, if there is indeed homogeneous interference, then in general only the plug-in estimator is consistent for $\beta$.\footnote{When SUTVA fails, the conventional AIPW estimator is a consistent estimator of $\beta$ only under some special settings, e.g., completely random treatment assignments \citep{savje2021average}, which are generally not satisfied in observational studies, the primary settings studied in this paper. }
We propose that this tradeoff between estimation bias and efficiency holds for a more general \emph{hierarchy} of interference structures on which estimators are built:
\begin{center}
\begin{tabular}{ll}
\multirow{5}*{\rotatebox[origin=c]{270}{{\large $\xRightarrow[\text{Efficiency Loss}]{\text{Reduced Bias}}$} }} & No Interference \\
& Partial Interference with Full Exchangeability \\ & Partial Interference with Conditional Exchangeability \\ & Network Interference with Conditional Exchangeability \\ & General Interference
\end{tabular}
\end{center}
Specifically, let us consider two estimation approaches: one is a sophisticated approach (like the plug-in estimator) that accounts for possibly complex interference structures, and the other is a naive approach (like the conventional AIPW estimator) that neglects such potential interference. Intuitively, the sophisticated approach generally has less bias than the naive approach in the presence of complex interference, but is generally less efficient when the true interference structure is simple.
In this section, we formally show the tradeoff among the top three levels in the hierarchy above, for which the asymptotic theory is fully understood. To start, let us consider two \emph{nested} candidate partitions of a cluster. The first partition is $\mathcal{I}_1, \cdots, \mathcal{I}_m$, which is granular. The second partition is $\mathcal{I}^\prime_1, \cdots,\mathcal{I}^\prime_\ell$, for $0 \leq \ell < m$, which combines some of the subsets of the first partition and is therefore coarser. For the second partition, $\ell = 1$ corresponds to homogeneous interference (i.e., the second level in the hierarchy), and
$\ell = 0$ denotes no interference (i.e., the top level in the hierarchy). This setup thus covers all of the top three levels of interference structure.
Given a vector of neighbors' treatments $\mathbf{z}_{c,(i)}\in \{0,1\}^{n-1}$ for any $c,i$, let $\gvec \in \+Z_{\geq 0}^m$ be the number of treated neighbors (calculated using $\mathbf{z}_{c,(i)}$) in each of the $m$ subsets in $\mathcal{I}_1, \cdots, \mathcal{I}_m$, and $\*h \in \+Z_{\geq 0}^\ell$ be the number of treated neighbors in each of the $\ell$ subsets in $\mathcal{I}^\prime_1, \cdots,\mathcal{I}^\prime_\ell$, for $0 \leq \ell < m$.
Note that since the two partitions are nested, i.e., the second partition combines some of the subsets in the first partition together, $\*g$ and $\*h$ are related through the following binary matrix $A = [A_{kj}] \in \{0,1\}^{\ell \times m}$:
\[\sum_{k=1}^\ell \sum_{j=1}^m A_{kj} = m \qquad \text{and} \qquad \*h = A \cdot \gvec. \]
For example, if $\ell = 1$, then $A = \bm{1}_{1 \times m}$ is a $1\times m$ vector of ones, and $\*h$ is the scalar that equals the total number of treated neighbors, i.e.,
$\*h = \bm{1}_{1 \times m} \cdot \gvec = \sum_{j = 1}^m g_j$. If $\ell = m$, then $A = \*I_{m \times m}$ is the $m \times m$ identity matrix, and $\*h$ equals $\*g$, i.e., $\*h = \*I_{m \times m} \cdot \gvec$.
Note that the mapping from $\gvec$ to $\*h$ is a many-to-one mapping, so we use $\mathcal{G}_A(\*h) = \{\gvec: \*h = A \gvec\}$ to denote the set of all $\gvec$ that map to $\*h$. If $\ell = 1$, then $\*h$ is a scalar and $\mathcal{G}_A(\*h) $ denotes all possible $\*g$ that satisfy $ \|\gvec\|_1 = \*h$. If $\ell = 0$, we let $\mathcal{G}_A(\*h)$ denote all possible values of $\*g$.\footnote{Specifically, if $\ell = 0$, then $\mathcal{G}_A(\*h) = \{\*g: \sum_{j=1}^m g_j \leq n, 0\leq g_j \leq |\mathcal{I}_j \backslash i|\}$. }
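As a concrete illustration (our own example): take $m=3$, $\ell=2$, and combine $\mathcal{I}_1,\mathcal{I}_2$ into $\mathcal{I}^\prime_1$ while keeping $\mathcal{I}^\prime_2=\mathcal{I}_3$. Then
\[A=\begin{pmatrix}1&1&0\\0&0&1\end{pmatrix},\qquad \*h=A\cdot\gvec=\begin{pmatrix}g_1+g_2\\ g_3\end{pmatrix},\]
and, e.g., ignoring the cap constraints $g_j\leq|\mathcal{I}_j \backslash i|$, we have $\mathcal{G}_A((2,1)^\top)=\{(0,2,1),(1,1,1),(2,0,1)\}$.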
We will formally show the bias-variance tradeoff in the estimation of the following estimand, which is related to both candidate partitions:
\begin{align}\label{eqn:weighted-estimand}
\tilde{\beta}_j(\*h) \coloneqq
\sum_{\gvec \in \mathcal{G}_A(\*h)} \omega_{A}(\gvec) \cdot \beta_j(\gvec),
\end{align}
where $\omega_A(\gvec) \geq 0$ is a generic weight of $\gvec$ that satisfies $\sum_{\gvec \in \mathcal{G}_A(\*h)} \omega_{A}(\gvec) = 1$, and $j \in [m]$. For example, $\omega_A(\gvec)$ can be the same for every possible $\gvec$, or $\omega_A(\gvec)$ can be proportional to $P(\Gvec_{c,i} = \gvec)$.\footnote{If $\omega_A(\gvec)$ is the same for every $\gvec$, then $\omega_A(\gvec) = \frac{1}{|\mathcal{G}_A(\*h)|}$. If $\omega_A(\gvec)$ is proportional to $P(\Gvec_{c,i} = \gvec)$, then $\omega_A(\gvec) = \frac{P(\Gvec_{c,i} = \gvec)}{P(\Gvec_{c,i} \in \mathcal{G}_A(\*h))}$ with the assumption that $P(\Gvec_{c,i} = \gvec)$ is the same for every $i \in \mathcal{I}_j$. }
\begin{remark}
\label{remark:tradeoff-alpha-allocation}
\normalfont
If $\ell = 0$, then $\tilde{\beta}_j(\*h)$ does not depend on $\*h$, and a natural choice of $\omega_A(\gvec)$ is $P(\Gvec_{c,i} = \gvec)$ for any $i$, which generalizes the example in \eqref{eq:tradeoff-simple} with $m=1$.
Thus our framework includes the case of no interference vs. homogeneous partial interference as a special case. On the other hand, if $\ell=0, m=n$, and $\omega_A(\gvec)=\alpha^{\|\gvec\|_1}\cdot(1-\alpha)^{n-1-\|\gvec\|_1}$, $\tilde{\beta}_j(\*h)$ reduces to the direct effect under the $\alpha$-allocation strategy \citep{hudgens2008toward,park2020efficient}. Thus our framework includes this setting as a special case, and in particular, our bias-variance tradeoff analysis applies.
\end{remark}
We will consider two estimation approaches for $\tilde{\beta}_j(\*h)$, each based on one of the two candidate partitions. The first approach is a sophisticated approach that is based on the granular partition $\mathcal{I}_1, \cdots, \mathcal{I}_m$.
This approach first estimates $\beta_j(\gvec)$ and (if necessary) $\omega_{A}(\gvec)$ for $\gvec \in \mathcal{G}_A(\*h)$, and then plugs them into \eqref{eqn:weighted-estimand} to estimate $\tilde \beta_j(\*h)$. Denote this estimator as $\hat\beta^\ind_j(\*h)$.
The second approach is a simplified approach that is based on the coarse partition $\mathcal{I}^\prime_1, \cdots, \mathcal{I}^\prime_\ell$. If this coarse partition is sufficient to capture the interference structure, i.e., it satisfies Assumption \ref{ass:cond-outcome-partial-exchangeable}, then units in $\mathcal{I}^\prime_k$ are exchangeable for $k \in \{1,\cdots, \ell\}$, and the potential outcomes satisfy
\[Y_{c,i}(z,\gvec) = Y_{c,i}(z,\gvec^\prime) \qquad \forall \gvec, \gvec^\prime \in \mathcal{G}_A(\*h).\]
Consequently, $\beta_j(\gvec)$ is the same for all $\gvec \in \mathcal{G}_A(\*h)$, and $\tilde{\beta}_j(\*h) $ is invariant to any choice of $\omega_A(\gvec)$, and is in fact equal to the ADE $\beta_j(\*h)$ based on the coarse partition.\footnote{The direct effect for the subset of units corresponding to $\mathcal{I}_j$ in the granular partition is still well defined under the coarse partition based on \eqref{eqn:beta-g-short-definition}, even if they are included in a larger $\mathcal{I}^{\prime}_k$ in the coarse partition, i.e., $\mathcal{I}_j \subsetneq \mathcal{I}_k^\prime$.} The second approach then uses our generalized AIPW estimator to estimate $\tilde{\beta}_j(\*h)$. Denote this estimator as $\hat\beta^\agg_j(\*h)$.
If the coarse partition satisfies Assumption \ref{ass:cond-outcome-partial-exchangeable} (hence so will the granular partition), we will show in Theorem \ref{thm:flexible-model-efficiency} that both $\hat\beta^\ind_j(\*h)$ and $\hat\beta^\agg_j(\*h)$ are consistent estimators of $\tilde\beta_j(\*h)$, but $\hat{\beta}_j^\ind(\*h)$ is weakly less efficient than $\hat{\beta}_j^\agg(\*h)$. The key insight is that when both partitions satisfy Assumption \ref{ass:cond-outcome-partial-exchangeable},
\begin{align*}
\Var\left(\hat{\beta}_j^\ind(\*h) \right) \propto\+E\Bigg[ \underbrace{ \sum_{\gvec \in \mathcal{G}_A(\*h)} \frac{\omega_{A}(\gvec)^2 }{p_{i,(z,\gvec)}(\c(X))}}_{\textbf{I}} \Bigg] \qquad \Var\left(\hat{\beta}_j^\agg(\*h) \right) \propto \+E\Bigg[\underbrace{ \frac{1 }{p_{i,(z,\*h)}(\c(X))} }_{\textbf{II}} \Bigg],
\end{align*}
where the expectation is taken with respect to $\c(X)$. For any fixed $\c(X)$, we have
\begin{align*}
\textbf{I}= \sum_{\gvec \in \mathcal{G}_A(\*h)} \omega_{A}(\gvec) \cdot \frac{1 }{p_{i,(z,\gvec)}(\c(X))/\omega_{A}(\gvec)} \geq \frac{1 }{\sum_{\gvec \in \mathcal{G}_A(\*h)} \omega_{A}(\gvec) \cdot p_{i,(z,\gvec)}(\c(X))/\omega_{A}(\gvec)} =\textbf{II},
\end{align*}
where we have used the \emph{convexity} of the function $x \mapsto 1/x$ for $x > 0$.
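The same bound can also be seen from the Cauchy--Schwarz inequality; writing it out makes the role of the weights explicit. Assume for illustration that the coarse-partition propensity score aggregates the granular ones, i.e., $p_{i,(z,\*h)}(\c(X)) = \sum_{\gvec \in \mathcal{G}_A(\*h)} p_{i,(z,\gvec)}(\c(X))$. Then

```latex
\begin{align*}
1 = \Bigg( \sum_{\gvec \in \mathcal{G}_A(\*h)} \frac{\omega_{A}(\gvec)}{\sqrt{p_{i,(z,\gvec)}(\c(X))}} \cdot \sqrt{p_{i,(z,\gvec)}(\c(X))} \Bigg)^2
\leq \textbf{I} \cdot \sum_{\gvec \in \mathcal{G}_A(\*h)} p_{i,(z,\gvec)}(\c(X)) = \frac{\textbf{I}}{\textbf{II}},
\end{align*}
```

so that $\textbf{I} \geq \textbf{II}$, with equality if and only if $\omega_A(\gvec)/p_{i,(z,\gvec)}(\c(X))$ is constant in $\gvec$, which is consistent with condition (2) of Theorem \ref{thm:flexible-model-efficiency}.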
On the other hand, if the coarse partition $\mathcal{I}^\prime_1, \cdots, \mathcal{I}^\prime_\ell$ does not satisfy Assumption \ref{ass:cond-outcome-partial-exchangeable} but the granular partition $\mathcal{I}_1, \cdots, \mathcal{I}_m$ does, then $\hat{\beta}_j^\ind(\*h)$ is consistent, but $\hat{\beta}_j^\agg(\*h)$ is generally inconsistent. A similar estimation bias result when $\ell=0$ and $m=1$ was observed in \cite{forastiere2020identification}, but in Theorem \ref{thm:flexible-model-efficiency} below we provide the complete bias-variance tradeoff between robustness to interference and estimation efficiency in the general conditional exchangeability framework.\footnote{For exposition, we will assume that if a partition does not satisfy Assumption \ref{ass:cond-outcome-partial-exchangeable}, it also does not satisfy Assumption \ref{ass:partial-prop-exchangeable}. If the coarse partition satisfies Assumption \ref{ass:partial-prop-exchangeable} but not Assumption \ref{ass:cond-outcome-partial-exchangeable}, a similar tradeoff result holds if $\omega(\gvec)=\frac{1}{| \mathcal{G}_A(\*h)|}$ for all $\gvec \in \mathcal{G}_A(\*h)$, due to the double robustness of AIPW estimators.}
\begin{theorem}[Bias-Variance Tradeoff between Robustness vs. Efficiency]\label{thm:flexible-model-efficiency}
Suppose Assumptions \ref{ass:network}-\ref{ass:overlap} hold, and that the granular partition $\mathcal{I}_1, \cdots, \mathcal{I}_m$ satisfies Assumptions \ref{ass:cond-outcome-partial-exchangeable}-\ref{ass:continuity-boundedness}.
If the coarse partition $\mathcal{I}^\prime_1, \cdots, \mathcal{I}^\prime_\ell$ also satisfies Assumptions \ref{ass:cond-outcome-partial-exchangeable}-\ref{ass:continuity-boundedness}, then both $\hat\beta^{\agg}_j(\*h)$ and $\hat\beta^{\ind}_j(\*h)$ based on sieve estimators are consistent estimators for $\tilde{\beta}_j(\*h)$; otherwise,
only $\hat\beta^{\ind}_j(\*h)$ is consistent for $\tilde{\beta}_j(\*h)$, unless for any $\gvec$, $\frac{p_{i,(z,\gvec)}(\c(X))}{\sum_{\gvec^\prime \in \mathcal{G}_A(\*h)}p_{i,(z,\gvec^\prime)}(\c(X)) } \equiv \omega_A(\gvec)$ for all $\c(X)$ and $z$.
On the other hand, if $\mu_{i,(z,\gvec)}(\c(X))$ and $\sigma^2_{i,(z,\gvec)}(\c(X))$ are the same for $\gvec \in \mathcal{G}_A(\*h) $, then $\hat \beta^{\ind}_j(\*h) $ is weakly less efficient than $\hat \beta_j^{\agg}(\*h)$, i.e.,
the asymptotic variance of $\hat \beta^{\ind}_j(\*h)$ is bounded below by the asymptotic variance of $\hat \beta^{\agg}_j(\*h)$.
The inequality is strict unless the following two conditions hold: (1)
$\big( \hat \omega_A(\gvec) - \omega_A(\gvec) \big)\cdot \beta_j(\gvec) = o_p \big(M^{-1/2} \big)$; (2)
for any $\gvec$, $\frac{ \omega_A(\gvec)}{p_{i,(z,\gvec)}(\c(X))}$ is the same for all $\c(X)$ and $z$.
\end{theorem}
For the variance comparison in Theorem \ref{thm:flexible-model-efficiency}, if the coarse partition is correctly specified, i.e., it satisfies Assumption \ref{ass:cond-outcome-partial-exchangeable}, then the condition for $\hat \beta_j^{\ind}(\*h) $ to be weakly less efficient is satisfied. In this case, $\hat \beta_j^{\ind}(\*h) $ is generally \emph{strictly} less efficient than $\hat \beta_j^{\agg}(\*h) $ except for some special cases (see Remark \ref{remark:efficiency}). Theorem \ref{thm:flexible-model-efficiency} highlights an important aspect of interference that, although intuitive, has not been formalized in the literature before. It implies that there is no free lunch when modeling interference: if we specify a general structure that allows complex interference patterns, the resulting estimator is robust to bias, but at the cost of efficiency loss. A less sophisticated specification potentially increases estimation efficiency, but at the cost of increased risk of bias. As an example, recall the causal estimands defined based on the $\alpha$-allocation strategy discussed in Remark \ref{remark:tradeoff-alpha-allocation}.
Theorem \ref{thm:flexible-model-efficiency} implies that estimation strategies that assume \emph{no} units are exchangeable, i.e. $m=n$, are potentially inefficient.
In Table \ref{tab:tradeoff}, we illustrate this bias-variance tradeoff in a simulation study.
Therefore, we suggest using the most parsimonious, but correct, conditional exchangeability structure, whenever possible.
In Section \ref{sec:testing}, we develop hypothesis tests that can be used to test for the heterogeneity of interference and treatment effects, that can help practitioners determine the appropriate specification of the interference structure.
\begin{remark}[Bias]
\normalfont
In general, estimators of $\tilde \beta_j(\*h)$ are consistent only if they are based on a partition that satisfies Assumption \ref{ass:cond-outcome-partial-exchangeable}. One exception is when treatment assignments are completely random: $\hat\beta^{\agg}_j(\*h)$ is then a consistent estimator of $\tilde \beta_j(\*h)$,
even if the partition $\mathcal{I}^\prime_1, \cdots,\mathcal{I}^\prime_\ell$ is \emph{mis-specified}. This type of robustness result has been observed, for example, in \cite{savje2021average}. Our results highlight the fundamental difference in the observational setting.
\end{remark}
\begin{remark}[Efficiency]\label{remark:efficiency}
\normalfont
The two conditions for efficiency equality are usually violated. Specifically, if $\hat{\omega}_A(\gvec)$ is estimated from a sample with $O(M)$ observations, then the convergence rate of most estimators is no more than $\sqrt{M}$, violating the first condition for efficiency equality. Furthermore, if the propensity score $p_{i,(z,\gvec)}(\c(X))$ varies with either the covariates $\c(X)$ or $z$ (which is commonly the case), then the second condition for efficiency equality is violated.
Only in some special cases can the two conditions hold. For example, if for each $\gvec \in \mathcal{G}_A(\*h)$, we either know the true value of $\omega_A(\gvec)$ or $\beta_j(\gvec) = 0$, then the first condition holds. Furthermore, if the treatment assignments are completely random, then the second condition holds.
\end{remark}
\begin{table}
\caption{Bias-variance tradeoff in a simulation study}
\begin{centering}
\begin{adjustbox}{max width=\linewidth,center}
\begin{tabular}{cccccccccc}
\toprule
& \multicolumn{3}{c}{Bias} & \multicolumn{3}{c}{Variance} & \multicolumn{3}{c}{MSE}
\tabularnewline
\cmidrule(l){2-4} \cmidrule(l){5-7} \cmidrule(l){8-10}
DGP & $\hat{\beta}_{\text{no}}$ & $\hat{\beta}_{\text{homo}}$ & $\hat{\beta}_{\text{heter}}$ & $\hat{\beta}_{\text{no}}$ & $\hat{\beta}_{\text{homo}}$ & $\hat{\beta}_{\text{heter}}$ & $\hat{\beta}_{\text{no}}$ & $\hat{\beta}_{\text{homo}}$ & $\hat{\beta}_{\text{heter}}$ \tabularnewline
\midrule
no interference & \textbf{0.001} & 0.0009 & 0.0003 & \textbf{0.002} & 0.003 & 0.006 &\textbf{0.002} &0.003 &0.005\tabularnewline
\midrule
homo. interference & -0.076 & 0.001 & \textbf{0.0001} & --- & \textbf{0.0008} &0.0017 &0.084 & \textbf{0.001}& 0.002 \tabularnewline
\midrule
heter. interference & -0.123 & -0.049 & \textbf{0.001} & --- & --- &\textbf{0.001} &0.146 &0.018 & \textbf{0.002} \tabularnewline
\bottomrule
\end{tabular}
\end{adjustbox}
\par\end{centering}
\bnotetab{Average bias, variance, and MSE of estimators based on three different specifications of the interference
structure. The simulation setting is similar to that in Section \ref{section:simulations}.
Variance estimators under mis-specification (below the diagonal) are invalid and therefore omitted.}
\label{tab:tradeoff}
\end{table} | {"config": "arxiv", "file": "2107.12420/bias-variance.tex"} |
TITLE: inflexion point how to find it?
QUESTION [0 upvotes]: Can someone explain me this part of wikipedia I don't understand it
https://en.wikipedia.org/wiki/Inflection_point#A_necessary_but_not_sufficient_condition
REPLY [0 votes]: There are two kinds of POSSIBLE inflection points: (i) where $f''(x_0)=0$ and (ii) where $f''(x_0)$ does not exist. First, find all the points where $f''(x_0)=0$. At each such point, investigate whether $f''(x)$ is positive on one side of $x_0$ and negative on the other for $x$ close enough to $x_0.$ Such points are inflection points. Second, find all the points where $f''(x_0)$ does not exist. At each such point, investigate whether $f''(x)$ is positive on one side of $x_0$ and negative on the other for $x$ close enough to $x_0.$ Such points are inflection points. | {"set_name": "stack_exchange", "score": 0, "question_id": 3563609}
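The sign-change test described in the answer above can be sketched numerically (the function name `changes_sign` and the tolerance `eps` are illustrative choices, not standard API):

```python
def changes_sign(fpp, x0, eps=1e-4):
    """Check whether the second derivative fpp changes sign across x0."""
    return fpp(x0 - eps) * fpp(x0 + eps) < 0

# f(x) = x**3: f''(x) = 6x vanishes at 0 AND changes sign -> inflection point
print(changes_sign(lambda x: 6 * x, 0.0))        # True

# f(x) = x**4: f''(x) = 12x**2 vanishes at 0 but stays nonnegative -> not an inflection point
print(changes_sign(lambda x: 12 * x ** 2, 0.0))  # False
```

This illustrates why $f''(x_0)=0$ alone is only necessary, not sufficient.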
TITLE: Expected length of unit segments
QUESTION [0 upvotes]: Suppose we draw $x, y, z$ from the uniform distribution on $(0, 5)$. How would we calculate the expected value of $|(x, x + 1) \cup (y, y + 1) \cup (z, z+1)|$?
REPLY [1 votes]: Evan's answer to this question might be of some interest to you. I'll use a similar method to talk about the points covered by at least one segment. Let $p_1(t)$ be the probability that a point $t\in[0, 6]$ is covered by a particular segment. If the segment's start is placed at position $x$, $p_1(t)$ is the ratio between the length of the interval of $x$ values that result in $t$ being covered and the length of the total interval from which $x$ is drawn. From this, we can obtain a piecewise expression for $p_1(t)$:
$$
p_1(t) =
\begin{cases}
\frac {t}{5} & 0 \le t < 1 \newline
\frac {1}{5} & 1 \le t < 5 \newline
\frac {6-t}{5} & 5 \le t \le 6 \newline
\end{cases}
$$
If we integrate $p_1(t)$ on $[0, 6]$, we obtain the expected size of the interval of points covered, which is of course $1$. To obtain the expected length covered by 3 segments, we need to obtain the probability that a point be covered by at least one of 3 such segments, chosen independently, which I'll call $q(t)$.
To do this, we can note that the probability of not being covered by a particular segment is $1-p_1(t)$. Since the probability of being covered by a particular segment is independent of being covered by another segment, the probability of not being covered by any of the 3 segments is simply $(1-p_1(t))^3$. Thus, the probability of being covered by at least one segment is the complement, $q(t) = 1-(1-p_1(t))^3$. We can plug our piecewise expression for $p_1(t)$ into this relation to obtain $q(t)$.
$$
q(t) =
\begin{cases}
1 - (1 - \frac t 5)^3 & 0 \le t < 1 \newline
1 - (1 - \frac 1 5)^3 & 1 \le t < 5 \newline
1 - (1 - \frac{6-t}5)^3 & 5 \le t \le 6 \newline
\end{cases}
=
\begin{cases}
\frac{1}{125} t^3 - \frac{3}{25} t^2 + \frac{3}{5} t & 0 \le t < 1 \newline
\frac{61}{125} & 1 \le t < 5 \newline
\frac{1}{125} (6-t)^3 - \frac{3}{25} (6-t)^2 + \frac{3}{5} (6-t) & 5 \le t \le 6 \newline
\end{cases}
$$
To obtain the expected length, all we need to do is evaluate the integral:
$$\int_0^6q(t) dt = \int_0^1q(t) dt + \int_1^5q(t) dt + \int_5^6q(t) dt$$
Notice that the first and last integrals are equivalent due to symmetry. After a bit of calculus, we obtain the expected length of $\frac{619}{250}$. | {"set_name": "stack_exchange", "score": 0, "question_id": 2371307} |
TITLE: Suppose that X is a cell complex with $\tilde{H}_{*}(X) = 0.$ Prove that the suspension $SX$ is contractible.
QUESTION [1 upvotes]: The question is:
Suppose that X is a cell complex with $\tilde{H}_{*}(X) = 0.$ Prove that the suspension $SX$ is contractible.
I feel like this link contains a part (or maybe all) of the solution to this question, am I correct? if so I just need a recap of the general idea of the solution please. If not could you give me a hint for the solution?
Suspension: if $X$ is $(n-1)$-connected CW, is $SX$ $n$-connected?
REPLY [3 votes]: Consider the map $f: \{*\} \to SX$ sending the single point to one of the 0-cells of $X$, which lies in $SX$ (So $f$ is WLOG an inclusion of a zero-cell). Then consider $f_*: \tilde{H}_n(\{*\}) \to \tilde{H}_n (SX)$. The first of these groups is zero for all $n$, and the second of these satisfies $\tilde{H}_n (SX) \cong \tilde{H}_{n-1}(X) \cong 0$ for all $n\geq 1$. For $n=0$, by hypothesis we know that $X$ is path-connected (because $\tilde{H}_0(X) \cong 0$), so $SX$ is too, and thus $\tilde{H}_0(SX) \cong 0$ as well. Since $f_*$ is a map of trivial groups it's automatically an isomorphism.
So we know that $f_*$ is an isomorphism on all reduced homology groups. It's also an isomorphism on all (absolute) homology groups. (To see why, note that we only need to check the case $n=0$ because reduced homology coincides with the absolute homology in positive degrees. But in degree zero, the induced map on absolute homology is given by $f_* \oplus \text{id}_\mathbb{Z}: \tilde{H}_0(\{*\}) \oplus \mathbb{Z} \to \tilde{H}_0(SX)\oplus \mathbb{Z}$ which is the identity map on $\mathbb{Z}$ since the reduced homology groups are zero.)
Now we know $f_*$ induces isomorphisms on all homology groups. Then since $X$ is path-connected, we have that $SX$ is simply connected (this is in the question you linked). Certainly the one-point space is also simply connected. So $\pi_1(SX, \{*\}) \cong 0$. Then the Hurewicz theorem tells you that the first nonzero relative homotopy and homology groups of the pair $(SX, \{*\})$ coincide. But all of the relative homology groups are zero, so $\pi_n(SX, \{*\}) \cong H_n(SX, \{*\}) \cong \tilde{H}_n(SX) \cong 0$ for all $n$.
Then since $f$ is an inclusion, we can look at the long exact homotopy sequence for the pair to see that $f$ induces an isomorphism on all homotopy groups (remember that the map induced by inclusion is one of the maps in the long exact sequence). Then Whitehead's theorem asserts that $f$ is a homotopy equivalence. | {"set_name": "stack_exchange", "score": 1, "question_id": 3464358} |
TITLE: Meaning of dipole approximation for selection rules
QUESTION [5 upvotes]: This is a really tough one:
I would like to understand what it really means to apply the dipole approximation when deriving the selection rules. This question is purely about intuitive understanding because the derivation itself seems way above my level of quantum mechanics.
What I know:
In classical electrodynamics we can expand our potentials in a multipole expansion. Often, we do not need to consider higher orders of this expansion as they decrease with additional $\frac{1}{r^2}$ factors and thus at reasonable distances won't make much of a difference anymore
To derive the selection rules, we have to consider the transition dipole moment which looks roughly like this: $M_{if} = \int \psi^*_f \boldsymbol{\mu} \psi_i d\vec{r}$ where $\boldsymbol{\mu}$ is the transition operator
Now, I have looked at a lot of stuff and I have seen you can write this operator down without really taking much benefit. I suppose it somehow contains our potentials and we can write in the form of a multipole expansion so that leaving out all terms is a viable thing.
The confusion: Classically, I would leave out terms if I wanted to know the potential of some charge distribution at a relatively distant point but what is the reasoning of neglecting higher order terms of the multipole expansion in the quantum mechanical picture for let's say absorption or emission? If it is about distances $r$ again then where are these distances?
And ultimately, what does it really mean not to consider for example a magnetic dipole in this case. I have absolutely no intuition for it. I have read that considering those higher order terms, there could be transitions that do not follow the $\Delta l = \pm 1$ rule for example and I wonder wether there is any way to imagine this without going to math right away.
Progress: I know now that the approximation is based on the wavelength of the photon being significantly larger than the extension of an atom and that higher order terms in the expansion scale with this factor $R / \lambda$. This would imply that this approximation should not hold for short wavelengths (X-rays, for instance). Unfortunately, I still don't have any explanation for the questions above.
REPLY [5 votes]: It seems to me you have asked a cluster of questions. You asked why electric dipole transition is often the only interaction we are interested in, with all other terms corresponding to higher multipole moments ignored. You also asked (effectively) what is the smallness parameter that justifies this choice. I believe the best way to answer them is to go through the derivation of various terms of contributions to the interaction between an atom and a radiation field. I will omit some details in order to present a clear outline of the argument. My derivations here mostly draws from Cohen-Tannoudji's Quantum Mechanics Vol. 2, Complement A13.
The Hamiltonian of an electron in an electromagnetic field described by vector potential $\mathbf A(\mathbf r, t)$ and scalar potential $q\phi(\mathbf r, t)$ is
$$
H = \frac{1}{2m} [\mathbf p - q \mathbf A(\mathbf r, t)]^2 + q\phi(\mathbf r, t)- \frac{q\hbar}{2m} \boldsymbol \sigma \cdot \nabla\times \mathbf A (\mathbf r, t) \>.
$$
We can expand it into
$$
H(t) = \frac{\mathbf p^2}{2m} + q\phi(\mathbf r, t)- \frac{q}{m}\mathbf p \cdot \mathbf A - \frac{q\hbar}{2m} \boldsymbol \sigma\cdot \nabla\times \mathbf A + \frac{q^2}{2m} \mathbf A^2 \>.
$$
The last term can be ignored for ordinary light sources, since the intensity is sufficiently low. Call the third term $W_1$ and the fourth term $W_2$. We can now treat $W_1 +W_2$ as a perturbation to the atomic Hamiltonian (the first and second terms), and analyse it by attempting to expand it with respect to some smallness parameter.
Assuming we are dealing with plane waves polarized in one direction, we can estimate and compare the magnitude of the terms $W_1$ and $W_2$,
$$
\frac{W_2}{W_1} \simeq \frac{\frac{q}{m}\hbar |\mathbf k| \mathcal A_0}{\frac{q}{m}p \mathcal A_0} = \frac{\hbar |\mathbf k|}{p} \>,
$$
where $\mathcal A_0$ is the amplitude of the vector potential field, $\mathbf k$ its wave vector, and $p$ the momentum of the electron. Since $\hbar/p$ is at most of the order of the size of the atom, which is of the scale of the Bohr radius $a_0$, and $|\mathbf k| = 2\pi/\lambda$, where $\lambda$ is typically much greater than the size of the atom, this ratio $W_2/W_1$ is about $a_0 /\lambda$, which is very small. This is a good enough justification for ignoring $W_2$ when we are performing the computation to the zeroth order in $a_0/\lambda$, which is the starting point of many analyses you can find in books or online that are only trying to obtain the electric dipole Hamiltonian. However we are not satisfied with that, so we shall perform a full expansion.
Both $W_1$ and $W_2$ contain an exponential factor $e^{\pm i\mathbf k\cdot \mathbf r}$ as the spatial dependence of $\mathbf A(\mathbf r, t)$. We can then expand it in powers of $\mathbf k\cdot \mathbf r$. Note again that $|\mathbf k| = 2\pi/\lambda$, and $\mathbf r$ is of the order of the size of the atom, so this is of the same order as $W_2/W_1$. Thus if we expand $W_1$ into $W_1^0 + W_2^1 + \dots$ and $W_2$ into $W_2^0 + W_2^1 + \dots$, we will find that to the zeroth order of $\mathbf k \cdot \mathbf r$, we have $W_1^0$, and to the first order we have $W_1^1 + W_2^0$ and so on.
The form of the relevant terms are $W_1^0 = -\frac{q}{m} \mathbf p \cdot \mathbf A(\mathbf 0, t) = - q \mathbf r \cdot \mathbf E(\mathbf 0, t)$ (It takes a bit of work to show that these two forms are equivalent, and I'll have Cohen-Tannoudji do that for me.) This is the electric dipole term. To the next order, the magnetic dipole term is $W_2^0 = - \frac{q}{2m}(\mathbf L + 2 \mathbf S)\cdot \mathbf B(\mathbf 0, t)$, and the electric quadrupole term is $W_1^1 = - \frac{q}{m} \mathbf r\mathbf r\colon \nabla\mathbf E$, where $\colon$ represents the double contraction between the quadrupole moment tensor and the gradient of the electric field. These terms are labeled by the electric and magnetic multipole moments because the respective multipole moment operators appear in them. For the derivation of the multipole moment operators, again refer to Complement E10 of Cohen-Tannoudji.
Now we have illustrated where each of the terms came from, and more importantly how their magnitudes compare. The answer your question as to why electric dipole transitions are prominent is simply that the Hamiltonian for electric dipole transition is much larger in magnitude than the magnetic dipole Hamiltonian, as well as transitions corresponding to higher multipole moments.
As a final note, in your question you described what you knew about multipole expansion in classical electrodynamics, but what you described is how things work in the far field regime, where the frequency is high and we are interested in the radiation far from the source, or $\mathbf k \cdot \mathbf r \gg 1$. In what we are discussing here, we are working in the opposite limit where $\mathbf k \cdot \mathbf r \ll 1$. Though to be more precise, we are not studying radiation from a source but how radiation interacts with the atom, so the situations are not exactly comparable. For more on far and near field approximations and multipole fields, refer to Chapter 9 of Jackson's Electrodynamics. | {"set_name": "stack_exchange", "score": 5, "question_id": 312365} |
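A back-of-the-envelope check of the smallness parameter $a_0/\lambda$ discussed in the answer above, using the standard Bohr radius and two representative wavelengths (the X-ray value is a typical hard-X-ray choice, picked here for illustration):

```python
a0 = 5.29177210903e-11    # Bohr radius in meters (CODATA value)
lam_visible = 500e-9      # representative visible-light wavelength in meters
lam_xray = 0.1e-9         # representative hard-X-ray wavelength in meters

print(f"visible: a0/lambda ~ {a0 / lam_visible:.1e}")  # ~1e-4: dipole approximation good
print(f"x-ray:   a0/lambda ~ {a0 / lam_xray:.1e}")     # ~0.5: dipole approximation fails
```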
TITLE: Find the value of $a^{2020}+b^{2020}+c^{2020}$.
QUESTION [5 upvotes]: Question: Let $f(x)=x^3+ax^2+bx+c$ and $g(x)=x^3+bx^2+cx+a$ where $a,b,c\in\mathbb{Z}$, $c\neq 0$. Suppose $f(1)=0$ and roots of $g(x)=0$ are squares of the roots of $f(x)=0$. Find the value of $a^{2020}+b^{2020}+c^{2020}$.
My approach: Let the other roots of $f(x)$ except $1$ be $\alpha$ and $\beta$. This implies that the roots of $g(x)$ are $1,\alpha^2$ and $\beta^2$. Thus, we have $1+\alpha+\beta=-a,$ $\alpha+\beta+\alpha\beta=b,$ $\alpha\beta=-c$ and $1+\alpha^2+\beta^2=-b,$ $\alpha^2+\beta^2+\alpha^2\beta^2=c,$ $\alpha^2\beta^2=-a.$ Note that thus we have $c^2=-a$. Again, $$a^2=(1+\alpha+\beta)^2=1+\alpha^2+\beta^2+2(\alpha+\beta+\alpha\beta)=-b+2b=b.$$ Also, since $f(1)=0\implies a+b+c=-1$. Now since we have $a=-c^2$ and $b=a^2\implies b=c^4,$ which in turn implies that $$-c^2+c^4+c+1=0\implies c^4-c^3+c^3-c^2+c-1=-2\\\implies (c^3+c^2+1)(c-1)=-2.$$ Thus, we either have $$\begin{cases}c^3+c^2+1=1\\ c-1=-2\end{cases} \text{ or }\begin{cases}c^3+c^2+1=-1\\ c-1=2\end{cases} \text{ or }\begin{cases}c^3+c^2+1=2\\ c-1=-1\end{cases} \text{ or }\begin{cases}c^3+c^2+1=-2\\ c-1=1\end{cases}.$$ Note that only the first diophantine equation yields a solution, that is $c=-1$. Thus, $a=-1$ and $b=1$. Therefore, $$a^{2020}+b^{2020}+c^{2020}=1+1+1=3.$$
Is this solution correct and rigorous enough and is there any other way to solve the problem? Also, does anyone know the original source of this problem?
REPLY [1 votes]: Another way to reach the solution is this:
Since $a=-c^2$ and $b=c^4$ you have $f(x) =x^3-c^2x^2+c^4x+c$ and since $f(1)=0$ you obtain $1-c^2+c^4+c=0$, i.e. $c^2(c^2-1)+c+1=(c+1)(c^3-c^2+1)=0$; by observing that $c^3-c^2+1=0$ has neither an odd nor an even integer solution, you obtain $c=-1$ and your solution. | {"set_name": "stack_exchange", "score": 5, "question_id": 3808144}
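The solution $(a,b,c)=(-1,1,-1)$ found above can be sanity-checked numerically; the sketch below uses NumPy root finding and compares the root multisets up to floating-point tolerance:

```python
import numpy as np

a, b, c = -1, 1, -1
f_roots = np.roots([1, a, b, c])   # roots of f(x) = x^3 + a x^2 + b x + c
g_roots = np.roots([1, b, c, a])   # roots of g(x) = x^3 + b x^2 + c x + a

# f(1) = 0, and the roots of g are the squares of the roots of f
assert abs(1 + a + b + c) < 1e-12
assert np.allclose(np.sort_complex(f_roots ** 2),
                   np.sort_complex(g_roots), atol=1e-8)

print(a ** 2020 + b ** 2020 + c ** 2020)  # 3
```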
TITLE: Global uniqueness and determinism in classical mechanics
QUESTION [5 upvotes]: Something always bugged me about Newton's equations (or, equivalently, Euler-Lagrange/Hamilton's):
Determinism, which is the philosophical framework of classical mechanics, requires that, by completely knowing the state of a system at a given instant, $\textbf{x}(t_0)$ and the law by which the system evolves, which, in dynamics, looks something like $$m\ddot{\textbf{x}}=f(\textbf{x},\dot{\textbf{x}},t)$$
You know the exact state of the system at any instant, forwards in time and, when defined, backwards. But global uniqueness theorems state that, for this to be true, the function $f$ needs some properties, namely that it doesn't "blow up" anywhere in the domain (iirc it's enough for $f$ to be Lipschitz continuous). My question then can be posed as such: are there any systems in which the forces that naturally arise violate the global existence/uniqueness theorems? And if so, then what does this tell us about the system?
REPLY [7 votes]: If a mathematical model "blows up" at some point in the future (or past) for physically-reasonable initial conditions, then we generally regard the model as being an imperfect representation of nature. The model may still be useful as an approximation for many things, but we don't expect it to be the final word, because nature shouldn't behave that way.
A famous example is the singularity theorems in general relativity. With physically-reasonable initial conditions, general relativity often predicts that a singularity will develop in the curvature of spacetime, such as the singularity that is hidden behind the event horizon of a black hole. This is reviewed in Witten's "Light Rays, Singularities, and All That" (https://arxiv.org/abs/1901.03928). This property of general relativity is regarded as a sign that general relativity cannot be the final word: it must be just an approximation to something else, albeit an excellent approximation under less-extreme conditions. By the way, that diagnosis is consistent with a completely different type of evidence that general relativity is incomplete, namely the fact that general relativity doesn't include quantum effects. Most (all?) physicists expect that a proper quantum theory of gravity will not have such singularities — probably because the usual concept of spacetime itself is only an approximation that becomes a bad approximation in cases where non-quantum GR would have predicted a singularity.
Quantum theory isn't deterministic, but even in quantum theory, good models are required to respect a general principle called the time-slice principle, more presumptuously called unitarity. This principle says that all observables at any time in the past or future can be expressed (using sums, products, and limits) in terms of observables associated with any one time. | {"set_name": "stack_exchange", "score": 5, "question_id": 585376} |
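A minimal illustration of the finite-time blow-up the question alludes to: the ODE $\dot x = x^2$ with $x(0)=1$ (a force law growing faster than linearly, so the global existence theorem does not apply) has the exact solution $x(t) = 1/(1-t)$, which ceases to exist at $t=1$. A quick numerical sanity check:

```python
def x(t):
    """Exact solution of x' = x**2 with x(0) = 1."""
    return 1.0 / (1.0 - t)

def xdot(t, h=1e-6):
    """Centered finite-difference approximation of x'(t)."""
    return (x(t + h) - x(t - h)) / (2.0 * h)

# the exact solution satisfies the ODE away from the blow-up time...
for t in (0.0, 0.25, 0.5, 0.75):
    assert abs(xdot(t) - x(t) ** 2) < 1e-3

# ...but grows without bound as t -> 1: no global-in-time solution exists
print(x(0.99), x(0.9999))   # roughly 1e2 and 1e4: the solution diverges
```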
\subsection{\large Difference operator from $C_1$ trinion.}
\label{app2}
In this Appendix we present some details of the derivations of van Diejen operator from $C_1$ trinion summarized in the
Section \ref{sec:vandiejen}.
We start by S gluing the trinion \eqref{C1:trinion} to a tube with no defect \eqref{usptube} along the $y$ puncture by gauging
the corresponding $\SU(2)_y$ global symmetry.
This gluing is S-confining, {\it i.e.} the resulting $\SU(2)_y$ gauge node can be eliminated using $A_1$ elliptic beta integral
\eqref{elliptic:beta}. We end up with $\SU(3)\times \SU(2)$ gauge theory. Then we perform Seiberg
duality on the $\SU(3)$ node which has $7$ flavors, so the duality results in $\SU(4)$ gauge node with the same number of flavors.
Once we do that the $\SU(2)$ node becomes
S-confining. Integrating it out we are left with $\SU(4)$ gauge theory with $7$ flavors, one antisymmetric and some flip singlets.
This chain of dualities is shown on the Figure \ref{pic:c1dualities}. Calculation results in the following index of a trinion theory:
\begin{figure}[!btp]
\centering
\includegraphics[width=\linewidth]{figures/c1dualities-crop.pdf}
\caption{On this figure we show the chain of Seiberg dualities used in the derivation of the $C_1$ trinion given in
\eqref{c1:trin2}. On the Figure (a) we start with the S gluing of the $C_1$ trinion \eqref{C1:trinion} with the tube \eqref{usptube}
obtained by closing $\SU(2)_z$ minimal puncture with no defect. Then we use Seiberg duality on the S-confining $\SU(2)_y$ node. Resulting
gauge theory is shown on the Figure (b). After this we perform two Seiberg dualities, on one gauge node after the other. In the
end we arrive at the $\SU(4)$ gauge theory with $7$ flavors, one antisymmetric and some singlets. The corresponding quiver is shown on the
Figure (d).}
\label{pic:c1dualities}
\end{figure}
\begin{shaded}
\be
&&K_{(3;0)}^{C_1}(x,x_1,z)=\frac{\kappa^3}{4!}\oint\frac{dt_1}{2\pi i t_1}\frac{dt_2}{2\pi i t_2}\frac{dt_3}{2\pi i t_3}\frac{1}{\prod_{i\neq j}^4\gama{t_i/t_j }}
\gama{ (pq)^\half w^\frac{-9}{2} a_1^{-1} \widetilde{x}^{\pm1}}
\nn\\
&&
\prod_{j=1}^4\gama{ (pq)^\frac14 w^{-\frac{27}{8}}a_1^\frac14 t_j x^{\pm1}}\gama{ (pq)^\frac14 w^{-\frac{27}{8}}a_1^\frac14 t_j z^{\pm1}}
\gama{ (pq)^{-\frac14} w^\frac{117}{8} a_1^\frac14 t_j }
\nn\\
&&
\prod_{i=2}^8 \gama{ (pq)^\frac14 w^{-\frac98}a_1^{-\frac14} a_i^{-1} t_j^{-1}}
\gama{(pq)^\frac14 w^{\frac{-27}{8}} a_1^{\frac14}t_j \widetilde{x}^{\pm1} }
\prod_{k<j}^4\gama{ (pq)^\half w^\frac{27}{4} a_1^{-\half} t_j t_k}
\nn\\
&&
\prod_{i=2}^8
\gama{ pq w^{-\frac{27}{2}} a_i} \gama{ (pq)^\half w^\frac92 a_i x^{\pm1} } \gama{ (pq)^\half w^\frac92 a_i z^{\pm1}}
\label{c1:trin2}
\ee
\end{shaded}
Now we glue this to a general theory with $\SU(2)_{\widetilde{x}}$ global symmetry by gauging
it and close the $z$-puncture with the defect introduced.
It can be seen from the expression above that the mesonic vev, which corresponded to setting
$z=(pq)^{-\half}w^{-\frac92}a_1^{-1}q^{-1}$, is now a baryonic one. At this value of $z$ the
trinion written above has a pole. So we have to calculate the following residue:
\be
\cO_x^{(z;w^{\frac{9}{2}}a_1;1,0)}\cdot\cI(x)\propto \mathrm{Res}_{z\rightarrow (pq)^{-\half}w^{-\frac92}a_1^{-1}q^{-1}}\kappa
\oint\frac{d\widetilde{x}}{4\pi i \widetilde{x}}\frac{1}{\GF{\widetilde{x}^{\pm 2}}}K_{(3;0)}^{C_1}(x,\widetilde{x},z)\cI(\widetilde{x})
\ee
As always, this pole comes from a contour pinching. Without loss of generality,
let us consider the $t_1$ integration in \eqref{c1:trin2}. The integrand has a sequence of poles outside the contour at
\be \label{poles}
t_1^{out}= (pq)^{-\frac14} w^{\frac{27}{8}} a_1^{-\frac14} z^{-1} q^{-l_1}=(pq)^\frac14 q w^\frac{63}{8} a_1^\frac34 q^{-l_1}
\ee
On the other hand, the remaining integrations have poles at the following positions,
\be
&t_2=(pq)^{-\frac14} w^\frac{27}{8} a_1^{-\frac14} xq^{-l_2}, \;& t_3= (pq)^{-\frac14} w^\frac{27}{8} a_1^{-\frac14} x^{-1}q^{-l_3},\;
t_4= (pq)^\frac14 w^{-\frac{117}{8}} a_1^{-\frac14}q^{-l_4}\,;
\nn\\
&&\mathrm{and}\label{poles1}\\
&t_2=(pq)^{-\frac14} w^\frac{27}{8} a_1^{-\frac14} \tx q^{-l_2}, \;& t_3= (pq)^{-\frac14} w^\frac{27}{8} a_1^{-\frac14}\tx^{-1}q^{-l_3},\;
t_4= (pq)^\frac14 w^{-\frac{117}{8}} a_1^{-\frac14}q^{-l_4}\,;
\nn
\ee
Using the $\SU(4)$ condition $\prod_{i=1}^4 t_i=1$, both of these sets of poles can be rewritten as poles in $t_1$, but this time inside the integration contour:
\be
t_1^{in}=t_2^{-1}t_3^{-1}t_4^{-1}= (pq)^\frac14 w^\frac{63}{8} a_1^\frac34 q^{l_2+l_3+l_4}
\ee
Poles inside and outside the contour collide whenever $l_1+l_2+l_3+l_4=1$.
Thus, we have $4$ contributions, $(l_1,l_2,l_3,l_4)=(1,0,0,0)$, $(0,1,0,0)$, $(0,0,1,0)$, or $(0,0,0,1)$.
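This collision condition can be checked symbolically (an illustrative sketch with sympy; the numerical substitutions for $p,q,w,a_1$ are arbitrary test values):

```python
import sympy as sp

p, q, w, a1 = sp.symbols('p q w a1', positive=True)
l1, l2, l3, l4 = sp.symbols('l1 l2 l3 l4', integer=True, nonnegative=True)

# pole outside the contour (eq. (poles)) and pole inside (from the SU(4) constraint)
t1_out = (p*q)**sp.Rational(1, 4) * q * w**sp.Rational(63, 8) * a1**sp.Rational(3, 4) * q**(-l1)
t1_in = (p*q)**sp.Rational(1, 4) * w**sp.Rational(63, 8) * a1**sp.Rational(3, 4) * q**(l2 + l3 + l4)

# the ratio t1_out/t1_in reduces to q**(1 - l1 - l2 - l3 - l4), so the two
# pole positions coincide exactly when l1 + l2 + l3 + l4 = 1
nums = {p: sp.Rational(1, 3), q: sp.Rational(1, 2), w: 2, a1: 3}
for ls in [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]:
    vals = dict(zip((l1, l2, l3, l4), ls))
    assert sp.simplify((t1_out - t1_in).subs(vals).subs(nums)) == 0
```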
We get $4!$ copies of each such contribution due to the permutations of the $t$'s. There is also a factor of $2$ due to the sum over the equal contributions of the poles in
the two lines of \eqref{poles1}\footnote{The fact that the poles with $x$ and with $\tx$ contribute equally has to be shown in a
detailed calculation, which we leave to the interested reader.}. However, this overall factor is not relevant for the form of the operator and, as elsewhere in the
text of the paper, we disregard it.
Finally, once we evaluate the residue at the pole in $z$, we are left only with the $\tx$ integration and an overall factor of zero.
This zero is cancelled by noticing that the $\tx$ integration contour is pinched either at $\tx=x^{\pm1}$ or at $\tx=(q^{\pm1}x)^{\pm1}$.
Once we compute the residues corresponding to these pinchings, we obtain the A$\Delta$O \eqref{uspdeff}.
TITLE: Integral yielding part of a harmonic series
QUESTION [6 upvotes]: Why is this true?
$$\int_0^\infty x \frac{M}{c} e^{(\frac{-x}{c})} (1-e^{\frac{-x}{c}})^{M-1} \,dx = c \sum_{k=1}^M \frac{1}{k}.$$
I already tried substituting $u = \frac{-x}{c}$. Thus, $du = \frac{-dx}{c}$ and $-c(du) = dx$. Then, the integral becomes (after cancellation) $\int_0^\infty c u M e^u (1-e^u)^{M-1}\,du$.
I looked at integral-table.com, and this wasn't there, and I tried wolfram integrator and it told me this was a "hypergeometric integral".
Thanks,
REPLY [4 votes]: Suppose that $X_1,\ldots,X_M$ are independent exponential random variables with mean $c$, so that their pdf and cdf are given by $f(x)=c^{-1}e^{-x/c}$ and $F(x)=1-e^{-x/c}$, $x \geq 0$, respectively. Let $Y_M=\max \{X_1,\ldots,X_M\}$. Then, $Y_M$ has cdf $F_M (x) = F(x)^M$ and hence pdf $f_M (x) = M F(x)^{M-1} f(x)$. Thus, the expectation of $Y_M$ is given by
$$
{\rm E}[Y_M ] = \int_0^\infty {xMF(x)^{M - 1} f(x)dx} = \int_0^\infty {x\frac{M}{c}e^{ - x/c} (1 - e^{ - x/c} )^{M-1}dx}.
$$
So you want to know why
$$
{\rm E}[Y_M] = c\sum\limits_{k = 1}^M {\frac{1}{k}}.
$$
Now, $Y_M$ is equal in distribution to $E_1 + \cdots + E_M$ where the $E_k$ are independent exponentials and $E_k$ has mean $c/k$; for this fact, see James Martin's answer to this MathOverflow question (where $c=1$). The result is thus established.
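As a quick sanity check (an illustrative sketch; the choice $c=2$ and the range of $M$ are arbitrary), the integral can be evaluated symbolically and compared with $c\,H_M$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
c = sp.Integer(2)  # arbitrary illustrative mean
for M in range(1, 5):
    # pdf of the maximum of M iid exponentials with mean c: M * F^(M-1) * f
    integrand = sp.expand(x * M / c * sp.exp(-x / c) * (1 - sp.exp(-x / c))**(M - 1))
    mean_max = sp.integrate(integrand, (x, 0, sp.oo))
    H_M = sum(sp.Rational(1, k) for k in range(1, M + 1))
    assert sp.simplify(mean_max - c * H_M) == 0
```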
EDIT: As an additional reference, see Example 4.22 on p. 157 in the book Probability, stochastic processes, and queueing theory by Randolph Nelson.
EDIT: It is interesting to note that
$$
\int_0^\infty {x\frac{M}{c}e^{ - x/c} (1 - e^{ - x/c} )^{M - 1} dx} = c\int_0^1 { - \log (1 - x^{1/M} )dx} .
$$
This follows by using a change of variable $x \mapsto x/c$ and then $x \mapsto (1-e^{-x})^M$.
So, this gives the following integral representation for the $M$-th harmonic number $H_M := \sum\nolimits_{k = 1}^M {\frac{1}{k}}$:
$$
H_M = \int_0^1 { - \log (1 - x^{1/M} )dx}.
$$
Finally, it is both interesting and useful to note that
$$
H_M = \int_0^1 {\frac{{1 - x^M }}{{1 - x}}dx} = \sum\limits_{k = 1}^M {( - 1)^{k - 1} \frac{1}{k}{M \choose k}},
$$
see Harmonic number.
With the above notation, the right-hand side corresponds to
$$
{\rm E}[Y_M] = \int_0^\infty {{\rm P}(Y_M > x)dx} = \int_0^\infty {[1 - F_M (x)]dx} .
$$
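The alternating binomial-sum identity quoted above can be checked exactly with rational arithmetic (a quick illustrative sketch):

```python
from fractions import Fraction
from math import comb

for M in range(1, 10):
    H_M = sum(Fraction(1, k) for k in range(1, M + 1))
    # alternating binomial sum from the identity for the M-th harmonic number
    alt = sum(Fraction((-1)**(k - 1) * comb(M, k), k) for k in range(1, M + 1))
    assert H_M == alt
```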
TITLE: For any $x>0$, does $|a-b| < x$ imply that $|a-b|=0$?
QUESTION [0 upvotes]: Let $a,b\in \mathbb{R}$. Suppose that for every $x > 0$:
$$
|a-b| < x
$$
implies that:
$$
a=b
$$
Is this statement true? Why?
My attempt:
I think it is true. My reasoning is that, if $x$ is allowed to be anything larger than $0$, but not $0$, it will contain an infinitesimal that is closest possible to $0$, but not exactly $0$. I know such thing doesn't exist in the set of real numbers, but this is my intuition.
Then, if we say that there are numbers $a$ and $b$, such that their absolute difference $|a-b|$ is even smaller than $x$, then what possibilities do we have? We already said that $x$ could be the smallest positive number that is not $0$, so if $|a-b| < x$ is true, then $|a-b|$ is even smaller than the smallest positive number $x$.
If a number, e.g. $|a-b|$ (non-negative) is smaller than the smallest positive number, then it must be that $|a-b| = 0$. I cannot find any other possibility.
REPLY [0 votes]: If possible, let $a \neq b$.
$$\Rightarrow |a-b|>0$$
$\Rightarrow$ for any $x \in (0, |a-b|)$ we clearly have $|a-b|>x>0$, which contradicts the given condition.
$\therefore$ our assumption is wrong, and $a=b$.
TITLE: Reissner–Nordström blackhole and irreducible mass
QUESTION [5 upvotes]: I'm learning the Reissner–Nordström metric and confused by the notion of irreducible mass $M_\text{irr}$. It is said that the "total mass" $M$ contains both the irreducible mass and the equivalent mass from the electric field,
$$
M = M_\text{irr} + \frac{Q^2}{16\pi \varepsilon_0 G M_\text{irr}} \ .
$$
I wonder what is the physical meaning of the irreducible mass?
To make my confusion more precise, suppose I have a Schwarzschild black hole with mass $M_0$, and I start throwing $10$ electrons into it. I guess I will eventually get a Reissner-Nordstrom black hole, with a charge $Q = -10$. What are the total mass $M$ and the irreducible mass $M_\text{irr}$ of the resulting RN black hole? Is $M_\text{irr} = M_0$?
Conversely, given a RN black hole, is there any classical/quantum process that reduces its charge to zero and reduces its mass $M \to M_\text{irr}$? (One candidate process would be to first throw opposite charges into the RN black hole, increasing its total mass, and then let it evaporate to reach $M_\text{irr}$ at some point; but I would love to see a process that doesn't increase the total mass along the way).
REPLY [5 votes]: I wonder what is the physical meaning of the irreducible mass?
This concept naturally arises when formalizing the irreversibility of classical processes involving black holes (classical here is meant as non-quantum; in what follows we completely ignore quantum effects around black holes, such as Hawking radiation). Intuition for this irreversibility is quite simple: we can drop a particle into a black hole and there is no way of getting it back. However, if we start considering rotating and charged black holes, it is easy to see that during a black hole's evolution its mass, angular momentum and charge (i.e. all the parameters that according to “no-hair” theorems specify a stationary black hole state) can either increase or decrease. But there is no way to decrease the irreducible mass: during black hole evolution it either stays the same or increases. And the irreducible mass stays the same not only when the state of the black hole does not change but also when the black hole evolves via a special kind of reversible process.
To understand the difference between reversible and irreversible changes of black hole states let us consider “Penrose process” as a model for a black hole's change of state following the discussion in lectures by T. Damour & M. Lilley, section 2.1. Particle falls toward black hole, splits in two near (but outside) its event horizon with one part being absorbed by the black hole while the other escapes away (see the figure, note that for RN black holes we can ignore angular momentum $p_\varphi$)
The split allows on one hand for the in-falling particle to be in a state incompatible with falling from infinity (such as having negative total energy) and on the other hand escaping particle may carry energy, charge, angular momentum away from the black hole (which is needed if we expect that some of the changes in black hole states could be reversed).
When particle 3 gets absorbed by the black hole its charge and energy would correspond to the changes in charge ($\delta Q$) and mass ($\delta M$) of the black hole. Omitting the calculations (which could be found in the cited lectures) those changes are related via the equation:
$$
\delta M=\frac{Q \delta Q}{r_+(M,Q)}+|p^r|,
$$
where $r_+(M,Q)$ is the black hole event horizon radius and $p^r$ is the radial component of the particle's momentum (it remains finite on the horizon). The nonnegativity of $|p^r|$ means that
$$
\delta M \ge \frac{Q \delta Q}{r_+(M,Q)}.\tag{*}
$$
This inequality is what provides the distinction between reversible (corresponding to the “$=$” sign) and irreversible (strict “$>$” sign) processes. The reversible process of absorbing a charged particle with $p^r=0$ and subsequently absorbing an antiparticle with the opposite charge, again with $p^r=0$, would leave the black hole in the original state. Integrating the equation for a reversible process,$$
\delta M = \frac{Q \delta Q}{r_+(M,Q)},
$$ produces the mass formula from the OP, which in geometric units $G=c=4\pi\varepsilon_0=1$ reads:
$$M=M_\text{irr}+\frac{Q^2}{4 M_\text{irr}},
$$
where the irreducible mass arises as an integration constant. Inserting this formula into ($*$) gives us
$$
\delta M_\text{irr}\ge 0.\tag{**}
$$
There is a strong similarity/connection between thermodynamics and the laws of black hole mechanics; within this paradigm, reversible processes for black holes are equivalent to adiabatic processes for ordinary systems. The irreducible mass then must be some monotonic function of the black hole entropy, so that ($**$) is a restatement of the second law of thermodynamics. And indeed we can express the surface area $A$ of the stationary black hole horizon through the irreducible mass:
$$A=16\pi M_\text{irr}^2,$$
which together with the Bekenstein formula, $S_\text{BH}=A/4$, gives us another interpretation of the concept: the irreducible mass $M_\text{irr}$ of a charged/rotating black hole is the mass of the Schwarzschild black hole with the same event horizon area (or, equivalently, with the same entropy).
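As a small numerical illustration (a sketch in the same geometric units $G=c=4\pi\varepsilon_0=1$; the values of $M$ and $Q$ are arbitrary), the horizon-area relation gives $M_\text{irr}=r_+/2$ with $r_+=M+\sqrt{M^2-Q^2}$, and the mass formula is recovered:

```python
import math

def irreducible_mass(M, Q):
    # outer horizon radius of a Reissner-Nordstrom black hole
    r_plus = M + math.sqrt(M**2 - Q**2)
    # A = 4*pi*r_+^2 = 16*pi*M_irr^2  =>  M_irr = r_+ / 2
    return r_plus / 2

M, Q = 1.0, 0.6
M_irr = irreducible_mass(M, Q)
assert abs(M_irr - 0.9) < 1e-12
# the mass formula M = M_irr + Q^2/(4*M_irr) is recovered
assert abs(M_irr + Q**2 / (4 * M_irr) - M) < 1e-12
```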
Armed with this knowledge we can answer further questions of OP:
To make my confusion more precise, suppose I have a Schwarzschild black hole with mass $M_0$, and I start throwing $10$ electrons into it. I guess I will eventually get a Reissner-Nordstrom black hole, with a charge $Q = -10$. What are the total mass $M$ and the irreducible mass $M_\text{irr}$ of the resulting RN black hole? Is $M_\text{irr} = M_0$?
Ignoring the radiation of the falling electrons and assuming the black hole does not gain considerable linear momentum, the new mass would be:
$$
M=M_0+\sum_{i=1}^{10} \varepsilon _i,
$$
where $\varepsilon _i$ is the energy of the $i$-th electron. The irreducible mass of this new black hole state would be larger than $M_0$ but smaller than the new mass $M$; the difference $M-M_\text{irr}$ can be interpreted as the energy of the electric field outside the horizon.
Conversely, given a RN black hole, is there any classical/quantum process that reduces its charge to zero and reduces its mass $M \to M_\text{irr}$? (One candidate process would be to first throw opposite charges into the RN black hole, increasing its total mass, and then let it evaporate to reach $M_\text{irr}$ at some point; but I would love to see a process that doesn't increase the total mass along the way).
A reversible process for black holes is an idealization (just like other adiabatic processes encountered in thermodynamics textbooks) but it is possible to describe a process that would come close. Take a spherical shell of matter (or just a system of charges) with very small mass $\mu$ carrying the charge $-Q$ (minus the charge of the black hole). Lower this shell using a system of ropes toward the black hole horizon, stopping at a very small (but positive) distance $d$ away from it. Release the shell so that it falls into the black hole, neutralizing it. In the limit $\mu\to 0$, $d\to 0$ the result would be a Schwarzschild black hole with $M=M_\text{irr}$. To see that it is indeed so, note that the ropes lowering the shell transfer the energy of the electromagnetic field away from the black hole (and if we put a generator at the winch that lets this rope out we can convert this energy into useful work). In the idealized limit the entirety of energy available outside the horizon is removed, leaving us only with the irreducible mass.
TITLE: Pseudo metric spaces are not Hausdorff.
QUESTION [6 upvotes]: I know that every metric space is Hausdorff,and every metric space is Pseudo metric.
How can I prove that every Pseudo metric space is not Hausdorff??
REPLY [9 votes]: A pseudometric satisfies all of the requirements of a metric except one: it need not separate points. Suppose that $d$ is a pseudometric on $X$ that is not a metric; then there must be points $x,y\in X$ such that $d(x,y)=0$, but $x\ne y$. That’s the only way that a pseudometric can fail to be a metric.
Now suppose that $x\in U$, where $U$ is an open set in $X$. Then by the definition of the pseudometric topology there is a real number $r>0$ such that $B(x,r)\subseteq U$, where as usual $$B(x,r)=\big\{z\in X:d(x,z)<r\big\}\;.$$ But $d(x,y)=0<r$, so $y\in B(x,r)\subseteq U$. In other words, every open neighborhood of $x$ also contains $y$, and we cannot possibly find open sets $U$ and $V$ such that $x\in U$, $y\in V$, and $U\cap V=\varnothing$: no matter which $U$ and $V$ we pick, it will always be the case that $y\in U\cap V$. (It will also be true that $x\in U\cap V$.)
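To make the failure of separation concrete, here is a minimal sketch (the particular pseudometric on $\mathbb{R}^2$, which ignores the second coordinate, is just an illustrative choice):

```python
# d ignores the second coordinate, so it is a pseudometric on R^2:
# symmetric, triangle inequality holds, d(p, p) = 0 -- but distinct
# points can be at distance 0.
def d(p, q):
    return abs(p[0] - q[0])

x, y = (0.0, 1.0), (0.0, 2.0)  # x != y, yet d(x, y) == 0
assert x != y and d(x, y) == 0

# every open ball around x, however small, already contains y,
# so x and y can never be separated by disjoint open sets
for r in (1.0, 1e-3, 1e-12):
    assert d(x, y) < r
```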
You might find it useful to recognize that the pseudometric $d$ induces an equivalence relation $\sim$ on $X$: for $x,y\in X$ put $x\sim y$ iff $d(x,y)=0$. I’ll leave to you the easy proof that $\sim$ is an equivalence relation and that its equivalence classes are closed sets. The pseudometric $d$ treats the points in a single equivalence class as if they were identical: if $x\sim y$ and $u\sim v$, then $d(x,u)=d(y,v)$. From the standpoint of $d$, $x$ and $y$ are interchangeable, as are $u$ and $v$.
I don’t know whether you’ve studied quotient topologies at all yet, but when you do, you can check that the quotient space $X/\sim$ whose points are the $\sim$-equivalence classes actually becomes a metric space with a metric $\hat d$ when you define $\hat d\big([x],[y]\big)=d(x,y)$ (where $[x]$ and $[y]$ are the equivalence classes of $x$ and $y$, respectively). It’s what you get if you simply pretend that all of the points that are $d$-distance $0$ from one another are really the same point.
\begin{document}
This paper has been accepted for publication on the IEEE Transactions on Cognitive Communications and Networking, special issue on "Intelligent Surfaces for Smart Wireless Communications". It was originally submitted for publication on September 15, 2020, and revised on February 3, 2021 and on March 12, 2021. It has been finally accepted for publication on March 14, 2021.
\bigskip
\copyright 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
\newpage
\bstctlcite{IEEE_nodash:BSTcontrol}
\title{RIS Configuration, Beamformer Design, and Power Control in Single-Cell and Multi-Cell Wireless Networks}
\author{Stefano Buzzi,~\IEEEmembership{ Senior Member,~IEEE,}
Carmen D'Andrea,~\IEEEmembership{ Member,~IEEE,} Alessio Zappone,~\IEEEmembership{ Senior Member,~IEEE,} Maria Fresia,
Yong-Ping Zhang, and Shulan Feng
\thanks{This paper has been partly presented at 2020 IEEE PIMRC.} \thanks{This work was supported by HiSilicon through cooperation agreement YBN2018115022.}
\thanks{S. Buzzi, C. D'Andrea and A. Zappone are with the Department
of Electrical and Information Engineering, University of Cassino and Southern Latium, Cassino,
Italy, and with Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT), Parma, Italy.
M. Fresia is with Huawei Technol. Duesseldorf GmbH Wireless Terminal Chipset Technology Lab, Munich, Germany. Y. Zhang and Shulan Feng are with HiSilicon Technologies Balong Solution Department Bejing, China}}
\maketitle
\begin{abstract}
Reconfigurable Intelligent Surfaces (RISs) are recently attracting a wide interest due to their capability of tuning wireless propagation environments in order to increase the system performance of wireless networks.
In this paper, a multiuser wireless network assisted by a RIS is studied and resource allocation algorithms are presented for several scenarios.
First of all, the problem of channel estimation is considered, and an algorithm that permits separate estimation of the mobile-user-to-RIS and RIS-to-base-station channel components is proposed. Then,
for the special case of a single-user system, three possible approaches are shown in order to optimize the Signal-to-Noise Ratio with respect to the beamformer used at the base station and to the RIS phase shifts. {Then, for a multiuser system with two cells, assuming channel-matched beamforming, the geometric mean of the downlink Signal-to-Interference plus Noise Ratios across users is maximized with respect to the base stations transmit powers and RIS phase shifts configurations.
In this scenario, the RIS is placed at the cell-edge and some users are jointly served by two base stations to increase the system performance.}
Numerical results show that the proposed procedures are effective and that the RIS brings substantial performance improvements to wireless systems.
\end{abstract}
\begin{IEEEkeywords}
reconfigurable intelligent surfaces, resource allocation, MIMO, multicell systems
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
While Massive multiple input multiple output (MIMO) has been a breakthrough technology that has significantly contributed to the evolution of wireless networks in the last decade and is currently being deployed worldwide by several telecom operators, new technologies and solutions have recently started to appear and gather attention as possible evolution of massive MIMO systems. These include, among others, cell-free massive MIMO systems \cite{ngo2015cell,BuzziWCL2017}, the use of massive MIMO arrays for joint communication and sensing \cite{BuzziAsilomar2019}, large distributed antenna arrays \cite{amiri2018extremely}, and, also, reconfigurable intelligent surfaces (RISs) \cite{hu2017potential,hu2018beyond,di2019smart}.
RISs are thin surfaces that can be used to coat buildings, ceilings, or other surfaces; they have electromagnetic properties that can be tuned electronically through a software controller, and their use permits modifying the propagation environment of wireless signals, so as to be able to concentrate information signals where needed and thus to improve the
Signal-to-Interference plus Noise Ratio (SINR). RIS elements are passive, in the sense that they have no RF chains and no transmit and receive antennas; they just add a tunable phase offset to the impinging and reflected waves.
Prototypes of reconfigurable metasurfaces are currently being developed in many parts of the world
\cite{VISOSURF_project,NTT_DoCoMo_LIS2019}.
\noindent
Recent surveys and tutorials on RIS-based communications have appeared in \cite{JSAC_RIS,Liaskos_COMMAG,RuiZhang_COMMAG,BasarMag2019,HuangMag2020}, where the fundamentals, main research results, and future lines of research of RIS-based systems are discussed. {Innovative and emerging RIS applications include multicell networks\cite{Pan_MulticellNetworks_TWC2020}, simultaneous wireless information and power transfer\cite{Pan_SWIPT_JSAC2020},
mobile edge computing networks\cite{Bai_MEC_JSAC2020}, multicast networks\cite{Zhou_Multicas_TSP2020}, physical layer security systems\cite{Hong_PLS_TCOM2020} and cognitive radio networks\cite{Zhang_Cognitive_TVT2020}.}
In the following, we direct our attention to the issues of resource allocation and channel estimation in RIS-based wireless networks.
In \cite{EE_RISs}, the rate and energy efficiency of RIS-based multiple input single output (MISO) downlink systems are optimized with respect to the base station transmit powers and to the RIS phase shifts. The optimization is carried out by means of the alternating optimization, fractional programming, and sequential optimization frameworks. In \cite{Wu2018}, a similar scenario is addressed, with the difference that the problem of power minimization subject to minimum rate constraints is tackled. Also in this case, the tool of alternating optimization is used to allocate the base station transmit power and the RIS phase shifts. A RIS-based MISO downlink system is also analyzed in \cite{Yang2019}, assuming the orthogonal frequency division multiplexing (OFDM) transmission scheme. In \cite{Yu2019}, algorithms are devised for the maximization of the sum-rate in an RIS-based MISO system. Alternating optimization is employed to optimize the transmit beamformer and the RIS phase shifts. Alternating optimization methods are also used in \cite{Guo2019} to address the problem of sum-rate maximization in a RIS-based MISO downlink system. The phase shifts applied by the RIS are assumed to take on discrete values, and the base station beamformer and the RIS phase shifts are optimized. In \cite{Jiang2019}, over-the-air computations in a multi-user RIS-based MISO channel are considered, and alternating optimization is merged with difference convex (DC) programming to develop a method that is shown to outperform the traditional use of semi-definite relaxation methods. In \cite{Li2019b}, a massive MIMO system aided by the presence of multiple RISs is considered. Assuming that each RIS is equipped with a large number of reflectors, the problem of maximizing the minimum signal-to-interference-plus-noise ratio at the users is tackled with respect to the transmit precoding vector and the RISs' phase shifts.
In \cite{Liu2019}, a RIS with a discrete phase resolution is assumed, and the problem of precoding design in an RIS-based multi-user MISO wireless system is investigated. Rate maximization in a RIS-assisted MIMO link is tackled in \cite{Ning2019b}, assuming that the RIS is used to aid the communication between the transmitter and the receiver. In \cite{Han2019b}, power control for physical-layer broadcasting in RIS-empowered networks is discussed, with the constraints of quality of service for the mobile users. In \cite{Pan_MulticellNetworks_TWC2020}, a multi-cell scenario is considered, and a RIS is deployed at the boundary between multiple cells to aid cell-edge users. In this context, the problem of weighted sum-rate maximization is tackled by alternating optimization of the base station transmit powers and of the RIS phase shifts. A single-user, RIS-based MISO system using millimeter waves transmissions is considered in \cite{Wang2019b}, studying the problem of transmit beamforming and RIS phase shifts allocation considering the presence of both a single RIS and multiple RISs.
In \cite{You2019} the problem of joint channel estimation and sum-rate maximization is tackled in the uplink of a single-user RIS-based system. First, a channel estimation method based on the discrete Fourier transform and on a truncation of Hadamard matrices is developed, and then rate optimization is tackled by a low-complexity successive refinement algorithm. In \cite{Nadeem2019}, the minimum SINR achieved by linear detection in downlink RIS-based systems is characterized considering line-of-sight between the base station and the RIS, with the corresponding channel being either full-rank or unit-rank. In \cite{Nadeem2019b}, a minimum mean square error channel estimation method is devised for RIS-based networks and, based on the estimated channels, the RIS phase shifts are optimized by a gradient-based algorithm. In \cite{He2019}, the problem of channel estimation in RIS-based systems is addressed by developing a method for the estimation of the cascaded Tx-RIS and RIS-Rx channels.
A novel RIS architecture is proposed in \cite{alexandropoulos2020hardware}, wherein, based on the existence of an active RF chain at the RIS, explicit channel estimation at the RIS side is realized.
Uplink channel estimation in RIS-based wireless networks is discussed in \cite{Chen2019b}, where the cascade channel including the channel from the transmitter to the RIS and from the RIS to the receiver is estimated by compressive sensing techniques. In \cite{Lin2019}, channel estimation for RIS-based networks is approached as a constrained estimation error minimization problem, which is tackled by Lagrange multipliers and dual ascent-based schemes. In \cite{Ning2019}, beam training is used for channel estimation in a massive MIMO, RIS-assisted wireless system operating in the THz regime. {Reference \cite{Wang2019d} proposes a three-phase pilot-based channel estimation framework in which the channels involved in the communication between RIS, BS, and users are estimated by switching the RIS elements on and off.} In \cite{cui2019efficient}, a low-complexity channel estimation method for RIS-based communication systems using millimeter waves is developed exploiting the sparsity of the millimeter wave channel and the large size of the RIS. In \cite{wei2020parallel}, channel estimation in the downlink of a RIS-assisted MISO communication system is devised using the alternating least squares algorithm to iteratively estimate the channel between the base station and the RIS, as well as the channels between the RIS and the users. In \cite{liu2019matrix}, channel estimation for a RIS-based multiuser MIMO system is formulated as a message-passing algorithm that enables factorizing the cascaded channels from transmitter to receiver. In \cite{Han2019}, a robust approach to the design of RIS-based networks is proposed, thus reducing the overhead required for channel estimation. However, robust and statistical approaches do not allow exploiting the reconfiguration capabilities of RISs, which enable dynamic compensation for the random fluctuations of wireless channels.
{References \cite{Zhou_Robust_WCL2020,Zhou_Robust_TSP2020} consider robust beamforming based on the imperfect cascaded BS-RIS-user channels at the transmitter, formulating transmit power minimization problems subject to worst-case rate constraints corresponding to the bounded CSI error model.} In \cite{ZapTWC2020}, a model is developed to quantify the overhead required for channel estimation in RIS-based networks, and for deploying an optimized phase configuration on a RIS. Next, based on this model, an overhead-aware optimization of RIS-based systems is performed, for the maximization of the system rate, energy efficiency, and their trade-off. It is shown that, despite the overhead, resource allocation based on the instantaneous channel realizations performs better than other allocations, provided the transmit and receive antennas do not grow too large. Similar results are obtained in \cite{ZapArXiv2020}, with reference to the problem of optimizing the number of reflectors to activate at the RIS, for the maximization of the rate, energy efficiency, and their trade-off.
Following on this track, this paper considers resource allocation problems for several instances of wireless networks assisted by a RIS. The contribution of this paper can be summarized as follows. {First of all, we tackle the problem of channel estimation and develop a protocol that permits estimating decoupled channel coefficients. The proposed protocol is based on the transmission of pilot signals from the mobile stations (MSs) and allows to compute the channel impulse response for any arbitrary RIS configuration.} This feature of the proposed channel estimation algorithm is crucial in order to enable practical implementation of the proposed resource allocation procedures. {Then, we consider the special case of a single-cell network with a single-user and focus on maximizing the signal-to-noise ratio (SNR) with respect to the beamformer at the BS and the phase shifts at the RIS.} Three different algorithms are proposed to this end, one based on a classical alternating-maximization approach, and two based on the maximization of a lower bound and of an upper bound of the SNR. For the latter two cases, the optimal solution is obtained in closed form, and this enables a fast and computationally-efficient computation of the sought solution.
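To build intuition for why closed-form phase choices can work well in the single-user case, the following toy sketch (our own illustration, not one of the algorithms developed in this paper; the dimensions, channel statistics, and alignment heuristic are all assumptions) compares a random RIS configuration with phases chosen to add coherently along the dominant direction of the RIS-to-BS channel:

```python
import numpy as np

rng = np.random.default_rng(0)
N_B, N_R = 8, 64  # illustrative numbers of BS antennas and RIS elements

def cplx(*shape):
    # i.i.d. CN(0,1) entries
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h_d = cplx(N_B)      # direct BS-MS channel
H = cplx(N_B, N_R)   # RIS-to-BS channel
h_r = cplx(N_R)      # MS-to-RIS channel

def snr(phases):
    # composite channel; with channel-matched beamforming the SNR is
    # proportional to ||h_eff||^2 (unit transmit power and noise assumed)
    h_eff = h_d + H @ (np.exp(1j * phases) * h_r)
    return np.linalg.norm(h_eff) ** 2

# heuristic closed-form choice: make every reflected term add coherently
# with the direct path along the top left singular vector u of H
u = np.linalg.svd(H)[0][:, 0]
aligned = np.angle(u.conj() @ h_d) - np.angle((u.conj() @ H) * h_r)

random_phases = rng.uniform(0.0, 2.0 * np.pi, N_R)
assert snr(aligned) > snr(random_phases)
```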
Finally, for a multi-user multi-cell system, we consider a scenario where some of the users may be jointly served by multiple BSs to improve performance, and maximize the geometric mean of the {downlink SINRs} with respect to the transmit powers and to the RIS phases,
using the gradient algorithm and the alternating maximization methodology.
One further distinguishing feature of our study is that we consider the general case in which, for each MS, the direct BS-MS and the reflected BS-RIS-MS paths may or may not be simultaneously active, whereas in many of the previously mentioned papers the assumption that the direct BS-MS path is blocked is necessary in order to solve the considered optimization problems.
Numerical results will show that the proposed resource allocation procedures provide substantial performance improvements, as well as that they blend well with the proposed channel estimation procedures.
This paper is organized as follows. The next section is devoted to the description of the system model. Sections III and IV describe the proposed optimization procedures, for the single-user and the multiuser case, respectively.
In Section V numerical results are presented, showing the effectiveness of the proposed procedures, while concluding remarks are given in Section VI.
\subsection*{Notation}
We use non-bold letters, $a$ and $A$, for scalars, lowercase boldface letters, $\mathbf{a}$, for vectors and uppercase boldface letters, $\mathbf{A}$, for matrices. The transpose, the inverse and the conjugate transpose of a matrix $\mathbf{A}$ are denoted by $\mathbf{A}^T$, $\mathbf{A}^{-1}$ and $\mathbf{A}^H$, respectively. The trace and the main diagonal of the matrix $\mathbf{A}$ are denoted as tr$\left(\mathbf{A}\right)$ and diag$\left(\mathbf{A}\right)$, respectively. The diagonal matrix obtained by the scalars $a_1,\ldots, a_N$ is denoted by diag$( a_1,\ldots, a_N)$. The $N$-dimensional identity matrix is denoted as $\mathbf{I}_N$, the $(N \times M)$-dimensional matrix with all zero entries is denoted as $\mathbf{0}_{N \times M}$ and $\mathbf{1}_{N \times M} $ denotes a $(N \times M)$-dimensional matrix with unit entries. The vectorization operator is denoted by vec$(\cdot)$ and the Kronecker product is denoted by $\otimes$. Given the matrices $\mathbf{A}$ and $\mathbf{B}$, with proper dimensions, their horizontal concatenation is denoted by $\left[\mathbf{A}, \mathbf{B}\right]$. The $(m,\ell)$-th entry and the $\ell$-th column of the matrix $\mathbf{A}$ are denoted as $\left[\mathbf{A}\right]_{(m,\ell)}$ and $\left[\mathbf{A}\right]_{(:,\ell)}$, respectively. The block-diagonal matrix obtained from matrices $\mathbf{A}_1, \ldots, \mathbf{A}_N$ is denoted by blkdiag$\left( \mathbf{A}_1, \ldots, \mathbf{A}_N\right)$. The statistical expectation operator is denoted as $\mathbb{E}[\cdot]$; $\mathcal{CN}\left(\mu,\sigma^2\right)$ denotes a complex circularly symmetric Gaussian random variable with mean $\mu$ and variance $\sigma^2$.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.7]{Scenario_MultiBS_RIS-eps-converted-to.pdf}
\caption{RIS-assisted multicell wireless network. Two BSs serve a set of users in the same frequency band, relying both on a direct link and on a further link reflected by a RIS, a planar array of reflecting devices with tunable reflection phase shifts.}
\label{Fig:Scenario_multi_BS}
\end{figure}
\section{System model}
We consider a wireless cellular network and focus in particular on a system formed by two BSs, equipped with $N_{B,1}$ and $N_{B,2}$ antennas, respectively. The two BSs may cooperate to serve $K$ single-antenna mobile stations (MSs) {\footnote{ {Cooperation between BSs is a well-investigated topic in the literature; in reference \cite{Lozano_Cooperation_2013}, the authors show that, in the presence of practical impairments such as channel estimation errors and out-of-cell interference, cooperation is essential to improve system performance.}}}, and both exploit a shared RIS with $N_R$ passive elements to improve the performance of the users\footnote{The extension to the case in which there are multiple RISs and more than two BSs is straightforward and is not considered here for the sake of simplicity.} -- see Fig. \ref{Fig:Scenario_multi_BS}.
We denote by $\widetilde{\bH}_1$ the $(N_{B,1} \times N_R)$-dimensional matrix representing the fast fading component of the wireless channel from the RIS to the first BS, by $\widetilde{\bh}_{k}$ the $N_R$-dimensional vector representing the fast fading component of the wireless channel from the $k$-th MS to the RIS, and by $\widetilde{\bh}_{1,k}^{(d)}$ the $N_{B,1}$-dimensional vector representing the fast fading component of the direct link channel from the $k$-th MS to the first BS. Similarly, $\widetilde{\bH}_2$ denotes the $(N_{B,2} \times N_R)$-dimensional matrix representing the fast fading component of the wireless channel from the RIS to the second BS, and $\widetilde{\bh}_{2,k}^{(d)}$ denotes the $N_{B,2}$-dimensional vector representing the fast fading component of the direct link channel from the $k$-th MS to the second BS. The RIS behaviour is modelled through a set of $N_R$ complex coefficients, representing the loss and phase shift imposed on the reflected wave by each RIS element. These coefficients are compactly represented through the diagonal matrix
$
\bPhi= \textrm{diag}\left(\rho e^{j\phi_1}, \ldots, \rho e^{j\phi_{N_R}}\right) \, .
$
The positive real-valued constant $\rho$ accounts for possible reflection losses and is assumed to be constant across the RIS elements, while the phase offsets $\phi_1, \ldots, \phi_{N_R}$ can be controlled via software.
Based on the above notation, the composite uplink channel from the $k$-th MS to the $i$-th BS (with $i \in \{1,2\}$), when both the direct link and the RIS-reflected link exist, can be written as
\begin{equation}
\overline{\bh}_{i,k, \bPhi}=\bH_i \bPhi \bh_k + \bh_{i,k}^{(d)} \, ,
\label{comp_channel}
\end{equation}
where, to make the distance-dependent path-loss explicit, we also let
\begin{equation}
\bH_i \bPhi \bh_k= \sqrt{\beta_{i,k}}\widetilde{\bH}_i \bPhi \widetilde{\bh}_k \; , \; \bh_{i,k}^{(d)}=\sqrt{\beta_{i,k}^{(d)}}\widetilde{\bh}_{i,k}^{(d)} \, ,
\end{equation}
where $\beta_{i,k}$ and $\beta_{i,k}^{(d)}$ denote the power attenuation coefficients for the RIS-reflected and direct links between the $k$-th MS and the $i$-th BS, respectively.
The Time Division Duplex (TDD) protocol is used to multiplex uplink (UL) and downlink (DL) data on the same carrier, thus implying that the UL and DL channels coincide. Each channel coherence interval is thus used to perform UL training for channel estimation (CE), DL data transmission, and UL data transmission.
\subsection{Signal model during UL training}
Let us now focus on the signal model during the UL training phase.
Denote by $\bp_k$ the unit energy vector pilot sequence assigned to the $k$-th MS, by $\tau_p < \tau_c$ the length (in discrete time samples) of the UL training phase and by $\tau_c$ the length
(again in discrete time) of the coherence interval. During the UL training phase, each MS transmits its own pilot sequence; the discrete-time version of the baseband equivalent of the signal received at the $i$-th BS, with $i \in \{1,2\}$, can be thus represented through the following $(N_{B,i} \times \tau_p)$-dimensional matrix:
\begin{equation}
\bY_i=\ds \sum_{k=1}^K \sqrt{\eta_k} \overline{\bh}_{i,k, \bPhi} \bp_k^H + \bW_i \; .
\label{eq:Y}
\end{equation}
In the above equation, the power coefficient $\eta_k$ is defined as $\eta_k=\tau_p \bar{\eta}_k$, with $\bar{\eta}_k$ the power transmitted by the $k$-th MS for each UL training symbol, and the $(N_{B,i} \times \tau_p)$-dimensional matrix $\bW_i$ represents the additive white Gaussian noise (AWGN) contribution. It is assumed that the entries of $\bW_i$ are independent and identically distributed as ${\cal CN}(0, \sigma^2_w)$, with $\sigma^2_w$ the thermal noise variance.
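As an illustration of the training model in Eq. \eqref{eq:Y}, the following sketch (illustrative sizes; the use of orthogonal DFT pilots, the specific parameter values, and NumPy itself are our assumptions, not choices made in the paper) generates the received matrix $\bY_i$ and checks that, with orthogonal pilots and low noise, projecting onto a pilot sequence recovers the corresponding composite channel:

```python
import numpy as np

rng = np.random.default_rng(7)
N_B, tau_p, K, sigma2_w = 4, 8, 3, 1e-6     # illustrative values
n = np.arange(tau_p)
F = np.exp(-2j * np.pi * np.outer(n, n) / tau_p) / np.sqrt(tau_p)  # orthonormal DFT
P = F[:, :K]                                # unit-energy, mutually orthogonal pilots
eta = tau_p * np.ones(K)                    # eta_k = tau_p * etabar_k (etabar_k = 1 here)
h = [rng.standard_normal(N_B) + 1j * rng.standard_normal(N_B) for _ in range(K)]
W = np.sqrt(sigma2_w / 2) * (rng.standard_normal((N_B, tau_p))
                             + 1j * rng.standard_normal((N_B, tau_p)))

# Y = sum_k sqrt(eta_k) hbar_k p_k^H + W, as in Eq. (eq:Y)
Y = sum(np.sqrt(eta[k]) * np.outer(h[k], P[:, k].conj()) for k in range(K)) + W

# With orthogonal pilots, projecting onto p_k isolates the k-th composite channel
h0_hat = Y @ P[:, 0] / np.sqrt(eta[0])
assert np.allclose(h0_hat, h[0], atol=1e-2)
```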
\subsection{Signal model for DL data transmission}
Focusing now on the DL data transmission phase, let $\eta_{i,k}^{\rm DL}$ denote the DL transmit power
reserved for the $k$-th MS by the $i$-th BS, with $i \in \{1,2\}$, and let $q_k^{\rm DL}$ denote the information symbol to be delivered to the $k$-th MS. We consider the case in which the two BSs cooperate and perform a joint transmission to the users that are located at the cell edge. To this end, we define the following $\{0,1\}$-valued variable:
\begin{equation}
I_{i,\ell}=\left \lbrace
\begin{array}{llll}
1 \quad \text{if} \; \text{the } \ell\text{-th user is served by the } i\text{-th BS}
\\
0 \quad \text{otherwise}
\end{array}\right. \, .
\label{I_definition}
\end{equation}
Denoting by $\mathbf{w}_{i,k}$ the $N_{B,i}$-dimensional beamforming vector used at the $i$-th BS for the signal intended for the $k$-th MS, the signal transmitted by the $i$-th BS in the generic symbol interval can thus be expressed as
\begin{equation}
\mathbf{s}_{i}^{\rm DL}= \ds \sum_{\ell=1}^K I_{i,\ell} {\sqrt{\eta_{i,\ell}^{\rm DL}} \mathbf{w}_{i,\ell} q_{\ell}^{\rm DL}} \, ,
\end{equation}
while the signal received at the $k$-th MS is
\begin{equation}
r_{k}^{\rm DL}= \overline{\bh}_{1, k, \bPhi}^H \mathbf{s}_{1}^{\rm DL} + \overline{\bh}_{2, k, \bPhi}^H \mathbf{s}_{2}^{\rm DL} \, .
\end{equation}
It thus follows that the soft estimate of the data symbol $q_k^{\rm DL}$ at the $k$-th MS can be written as
\begin{equation}
\begin{array}{lll}
&\!\!\!\!\!\widehat{q}_k^{\rm DL}=\underbrace{\left(I_{1,k}\sqrt{\eta_{1,k}^{\rm DL}}\overline{\bh}_{1, k, \bPhi}^H \mathbf{w}_{1,k} + I_{2,k}\sqrt{\eta_{2,k}^{\rm DL}}\overline{\bh}_{2, k, \bPhi}^H \mathbf{w}_{2,k} \right)q_k^{\rm DL}}_{\mbox{useful contribution}}\\ &+ \ds \underbrace{\sum_{\substack{\ell=1 \\ \ell \neq k}}^K {\left(I_{1,\ell}\sqrt{\eta_{1,\ell}^{\rm DL}} \overline{\bh}_{1, k, \bPhi}^H \mathbf{w}_{1,\ell} + I_{2,\ell}\sqrt{\eta_{2,\ell}^{\rm DL}} \overline{\bh}_{2, k, \bPhi}^H \mathbf{w}_{2,\ell}\right) q_{\ell}^{\rm DL}}}_{\mbox{interference}} \\ &+ z_k \, ,
\end{array}
\label{DL_signal_k}
\end{equation}
where $z_k \sim \mathcal{CN}(0,\sigma^2_z)$ is the AWGN contribution in the generic symbol interval.
Based on the above expression, we can thus define the DL SINR as reported in Eq. \eqref{DL_SINR} at the top of the next page.
\begin{figure*}
\begin{equation}
\text{SINR}_{k, \bPhi}^{\rm DL}= \frac{\left|I_{1,k}\sqrt{\eta_{1,k}^{\rm DL}}\left(\bH_1 \bPhi \bh_k + \bh_{1,k}^{(d)}\right)^H \mathbf{w}_{1,k} + I_{2,k}\sqrt{\eta_{2,k}^{\rm DL}}\left( \bH_2 \bPhi \bh_k + \bh_{2,k}^{(d)} \right)^H \mathbf{w}_{2,k}\right|^2}{\ds \sum_{\substack{\ell=1 \\ \ell \neq k}}^K {\left|I_{1,\ell}\sqrt{\eta_{1,\ell}^{\rm DL}}\left(\bH_1 \bPhi \bh_k + \bh_{1,k}^{(d)}\right)^H \mathbf{w}_{1,\ell} + I_{2,\ell}\sqrt{\eta_{2,\ell}^{\rm DL}}\left(\bH_2 \bPhi \bh_k + \bh_{2,k}^{(d)}\right)^H \mathbf{w}_{2,\ell}\right|^2}+ \sigma^2_z} \, .
\label{DL_SINR}
\end{equation}
\hrulefill
\end{figure*}
\section{Channel estimation procedures} \label{Section_CE}
Let us now tackle the problem of channel estimation. Based on the observation of $\bY_i$ in \eqref{eq:Y}, and relying on the knowledge of the pilot sequences $\bp_1, \ldots, \bp_K$, the $i$-th BS is faced with the task of performing CE. Since this phase is implemented locally at the BSs without any cooperation, in the following we focus on the processing at the generic BS and omit the subscript $i$ in order to simplify the notation. {Since, in this paper, the RIS is a completely passive device controlled by the BSs, we only entrust the BSs with the task of estimating the channels involved in the communication.} It is worth noting that the unknown quantities to be estimated here are the $(N_B \times N_R)$-dimensional RIS-BS channel and the vectors containing the RIS-MS and BS-MS channels, for all the MSs in the system. A moment's thought, however, reveals that, given the model in Eq. \eqref{eq:Y} and the definition in Eq. \eqref{comp_channel}, the matrix $\bH$ and the channels $\bh_{k}$ cannot be individually estimated; indeed, these quantities always appear as a product, thus implying that what can actually be estimated is the componentwise product between the rows of $\bH$ and the vectors $\bh_{k}$.
Fortunately, this is enough to predict the overall channel response when the RIS configuration changes.
More precisely, given the identity
\beq
\bH \bPhi \bh_k= \bD_k \bm{\phi}\; ,
\label{eq:identity}
\eeq
where $\bD_k(i,j) \triangleq \bH(i,j) \bh_k(j)$, for all $i=1, \ldots, N_B$, $j=1, \ldots, N_R$ and $\bm{\phi}=\textrm{diag}(\bPhi)$ is the $N_R$-dimensional column vector containing the diagonal entries of $\bPhi$, the matrix $\bY$ in Eq. \eqref{eq:Y} is easily shown to contain a linear combination of the channel coefficients.
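As a quick numerical sanity check of the identity in Eq. \eqref{eq:identity} (a minimal sketch with illustrative dimensions and our own variable names; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N_B, N_R = 4, 8                   # illustrative dimensions
H = rng.standard_normal((N_B, N_R)) + 1j * rng.standard_normal((N_B, N_R))
h = rng.standard_normal(N_R) + 1j * rng.standard_normal(N_R)
rho = 0.9
phi = rho * np.exp(1j * rng.uniform(-np.pi, np.pi, N_R))  # diagonal of Phi

# D(i, j) = H(i, j) * h(j): componentwise product of the rows of H with h
D = H * h[np.newaxis, :]
lhs = H @ np.diag(phi) @ h        # H Phi h_k
rhs = D @ phi                     # D_k phi
assert np.allclose(lhs, rhs)
```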
For the estimation of the composite channel for the generic $k$-th MS, we focus on the $N_B$-dimensional observable $\overline{\by}_k=\bY \bp_k/\sqrt{\eta_k}$. Using Eqs. \eqref{comp_channel}, \eqref{eq:Y} and \eqref{eq:identity}, $\overline{\by}_k$ can be written as
\begin{equation}
\begin{array}{llll}
\overline{\by}_k= &\ds \bD_k \bm{\phi} + \bh_{k}^{(d)} + \overline{\bw}_k \\ &+ \ds \sum_{\substack{j=1 \\ j \neq k}}^K \sqrt{\frac{\eta_j}{\eta_k}}\left( \bD_j \bm{\phi} +
\bh_{j}^{(d)}\right) p_{j,k}\; ,
\end{array}
\label{y_k1}
\end{equation}
with $\overline{\bw}_k=\bW \bp_k \sim \mathcal{CN}\left(0, \sigma^2_w \mathbf{I}_{N_B}\right)$ and $p_{j,k}=\bp_j^H \bp_k$.
Given Eq. \eqref{y_k1}, the CE problem is now well defined as the problem of estimating the $(N_B \times N_R)$-dimensional matrices $\bD_k$ and the $N_B$-dimensional vectors $ \bh_{k}^{(d)}$ for all $k=1, \ldots, K$.
In order to conveniently cast the problem of CE, we write the noiseless part of the observables as the product of a known matrix times a column vector containing all the unknowns to be estimated. Otherwise stated, letting
$
\widetilde{\by}= \left[ \begin{array}{llll}
\overline{\by}_1^T, \; \ldots , \; \overline{\by}_K^T
\end{array} \right]^T
$
be the $K N_B$-dimensional observable containing the projection of the received data $\bY$ onto the
pilot sequences $\bp_1, \ldots, \bp_K$,
upon defining the $N_B(N_R+1)$-dimensional vector $\widetilde{\mathbf{d}}_k$ as
$$
\widetilde{\mathbf{d}}_k= \left[
\text{vec}( \bD_k);
\bh_{k}^{(d)}\right] \, ,
$$
the $K N_B(N_R+1)$-dimensional vector $
\mathbf{d}= \left[ \widetilde{\mathbf{d}}_1^T, \; \ldots ,\; \widetilde{\mathbf{d}}_K^T \right]^T
$, and letting $\widetilde{\bA}$ be a matrix whose $(k,j)$-th block, say $\widetilde{\bA}_{k,j}$,
with dimension $[N_B \times N_B(N_R+1)]$,
is defined as
\begin{equation}
\widetilde{\bA}_{k,j}= \sqrt{\frac{\eta_j}{\eta_k}} p_{j,k}\left[ \bm{\phi}^T \otimes \mathbf{I}_{N_B}, \mathbf{I}_{N_B} \right]\; ,
\end{equation}
for all $k, j = 1, \ldots, K$, then it can be shown that
the observable $\widetilde{\by}$ can be expressed as follows
\begin{equation}
\widetilde{\by}= \widetilde{\bA} \mathbf{d} + \widetilde{\mathbf{w}} \; .
\label{y_tilde}
\end{equation}
Expression \eqref{y_tilde} reveals the linear relationship between the data $\widetilde{\by}$ and the $K N_B(N_R+1)$-dimensional vector
$\mathbf{d}$ to be estimated. In the following, we outline two possible CE strategies.
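As a numerical check of the construction above (illustrative dimensions and variable names are ours; NumPy assumed), the following sketch builds $\widetilde{\bA}$ block by block from its definition and verifies that the noiseless projected observable equals $\widetilde{\bA}\mathbf{d}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N_B, N_R, K, tau_p = 2, 3, 2, 4   # illustrative dimensions
eta = rng.uniform(0.5, 2.0, K)
P = rng.standard_normal((tau_p, K)) + 1j * rng.standard_normal((tau_p, K))
P /= np.linalg.norm(P, axis=0)    # unit-energy pilot sequences
D = [rng.standard_normal((N_B, N_R)) + 1j * rng.standard_normal((N_B, N_R))
     for _ in range(K)]
hd = [rng.standard_normal(N_B) + 1j * rng.standard_normal(N_B) for _ in range(K)]
phi = np.exp(1j * rng.uniform(-np.pi, np.pi, N_R))

# d = [vec(D_1); h_1^(d); ... ; vec(D_K); h_K^(d)]  (column-stacking vec)
d = np.concatenate([np.concatenate([Dk.flatten(order='F'), hk])
                    for Dk, hk in zip(D, hd)])

# (k, j)-th block: sqrt(eta_j / eta_k) p_{j,k} [phi^T kron I_{N_B}, I_{N_B}]
blk = lambda k, j: np.sqrt(eta[j] / eta[k]) * (P[:, j].conj() @ P[:, k]) \
    * np.hstack([np.kron(phi[None, :], np.eye(N_B)), np.eye(N_B)])
A = np.block([[blk(k, j) for j in range(K)] for k in range(K)])

# Noiseless y_k = sum_j sqrt(eta_j / eta_k) p_{j,k} (D_j phi + h_j^(d))
y = np.concatenate([sum(np.sqrt(eta[j] / eta[k]) * (P[:, j].conj() @ P[:, k])
                        * (D[j] @ phi + hd[j]) for j in range(K))
                    for k in range(K)])
assert np.allclose(y, A @ d)
```

The check relies on the Kronecker identity $(\bm{\phi}^T \otimes \mathbf{I}_{N_B})\,\text{vec}(\bD_j) = \bD_j \bm{\phi}$.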
\subsection{Least squares (LS) CE} \label{LS_CE_Section}
Since in \eqref{y_tilde} the number of parameters to be estimated is larger than the number of available measurements, a least squares procedure cannot be directly applied. To circumvent this problem, we assume that the pilot sequences are transmitted by the MSs $Q$ times, each time with a different RIS configuration. This assumption is needed to generate a number of observables not smaller than the number of unknown coefficients, so that a linear estimation rule can be used.
We thus assume that the RIS phase shifts assume $Q$ different configurations, and denote by $$\bPhi^{(q)}= \textrm{diag}\left(\rho e^{j\phi_1^{(q)}}, \ldots, \rho e^{j\phi_{N_R}^{(q)}}\right) \; , \forall q=1,\ldots,Q, $$
the diagonal matrix representing the RIS in the $q$-th configuration. Letting $\widetilde{\by}^{(q)}$ denote the
$(K N_B)$-dimensional observable vector when the RIS is in the $q$-th state, we form the following $QN_B$-dimensional observable
\begin{equation}
\widetilde{\by}_Q= \left[ \begin{array}{llll}
\widetilde{\by}^{(1)\, T}, \; \ldots, \; \widetilde{\by}^{(Q)\, T}
\end{array} \right]^T= \widetilde{\bA}_Q \mathbf{d} + \widetilde{\mathbf{w}}_Q \; ,
\label{y_tilde_Q}
\end{equation}
with $\widetilde{\mathbf{w}}_Q= \left[\widetilde{\mathbf{w}}^{(1) \, T}, \, \ldots ,\, \widetilde{\mathbf{w}}^{(Q) \, T} \right]^T$ and
$
\widetilde{\bA}_Q= \left[ \widetilde{\bA}^{(1) \, T}\, , \ldots , \; \widetilde{\bA}^{(Q)\, T} \right]^T
$.
The LS-based estimator of $\mathbf{d}$ can be thus simply written as
\begin{equation}
\widehat{\mathbf{d}}_{\rm LS} =\left(\widetilde{\bA}_Q^H \widetilde{\bA}_Q\right)^{-1} \widetilde{\bA}_Q^H \widetilde{\by}_Q \, .
\label{d_estimation_LS}
\end{equation}
The minimum number of RIS configurations needed for $\widetilde{\bA}_Q$ to have full column rank, and thus for the estimator in Eq. \eqref{d_estimation_LS} to be well defined, is $Q=N_R+1$.
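The LS procedure can be sketched as follows (a noiseless, single-MS simplification with $Q=N_R+1$ random RIS configurations; all sizes and names are illustrative, NumPy assumed). In this setting the stacked matrix is square and invertible with probability one, so the LS estimate recovers $\mathbf{d}$ exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
N_B, N_R = 2, 3
Q = N_R + 1                         # minimum number of RIS configurations

D = rng.standard_normal((N_B, N_R)) + 1j * rng.standard_normal((N_B, N_R))
hd = rng.standard_normal(N_B) + 1j * rng.standard_normal(N_B)
d = np.concatenate([D.flatten(order='F'), hd])   # unknowns [vec(D); h^(d)]

A_blocks, y_blocks = [], []
for q in range(Q):
    phi_q = np.exp(1j * rng.uniform(-np.pi, np.pi, N_R))   # q-th RIS state
    A_q = np.hstack([np.kron(phi_q[None, :], np.eye(N_B)), np.eye(N_B)])
    A_blocks.append(A_q)
    y_blocks.append(D @ phi_q + hd)              # noiseless observable
A_Q = np.vstack(A_blocks)                        # stacked model matrix
y_Q = np.concatenate(y_blocks)

# LS solve of y_Q = A_Q d (here A_Q is square and generically invertible)
d_ls, *_ = np.linalg.lstsq(A_Q, y_Q, rcond=None)
assert np.allclose(d_ls, d)
```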
\subsection{Linear minimum mean square error (MMSE) CE} \label{MMSE_CE_Section}
Another possible approach is based on the use of linear MMSE estimation. To this end,
we assume perfect knowledge of the large-scale power attenuation coefficients for all the MSs, i.e., the quantities $\beta_k$ and $\beta_{k,d}, \; \forall \; k=1,\ldots,K$, are known at the BS.
With linear MMSE estimation, the channel can be estimated even with just one configuration of the RIS elements. In the following, we thus consider both the case in which the number $Q$ of RIS configurations is larger than one and the case in which it is exactly one.
\subsubsection{Linear MMSE CE with $Q>1$ configurations of the RIS}
Based on \eqref{y_tilde_Q}, the linear MMSE estimate for the vector $\mathbf{d}$ can be written as \cite{kay1993fundamentals}
\begin{equation}
\widehat{\mathbf{d}}_{\rm MMSE, Q} =\mathbf{E}_Q^H\widetilde{\by}_Q \, ,
\label{d_estimation_MMSE}
\end{equation}
where $\mathbf{E}_Q$ is a suitable $\left[K N_B Q \times K N_B (N_R+1)\right]$-dimensional matrix, such that the statistical expectation of the squared estimation error $\|\widehat{\mathbf{d}}_{\rm MMSE, Q} - \mathbf{d}\|^2$ is minimized.
Applying well-known statistical signal processing results \cite{kay1993fundamentals}, we have:
\begin{equation}
\mathbf{E}_Q=\left( \widetilde{\bA}_Q \mathbf{R}_{d} \widetilde{\bA}_Q^H + \sigma^2_w \mathbf{I}_{K N_B Q}\right)^{-1} \widetilde{\bA}_Q \mathbf{R}_{d} \, ,
\label{E_Q}
\end{equation}
where $\mathbf{R}_{d}=\mathbb{E}\left[ \mathbf{d} \mathbf{d}^H \right]=\text{blkdiag}\left( \mathbf{R}_{d}^{(1)}, \ldots, \mathbf{R}_{d}^{(K)}\right)$,
with
\begin{equation}
\mathbf{R}_{d}^{(k)}=\text{diag}\left( \underbrace{\beta_k, \ldots, \beta_k}_{N_B N_R}, \underbrace{\beta_{k,d},\ldots, \beta_{k,d}}_{N_B}\right) \; .
\label{R_d_twolinks}
\end{equation}
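A minimal sketch of the estimator in Eqs. \eqref{d_estimation_MMSE}--\eqref{E_Q} for a single MS (illustrative sizes, large-scale coefficients, and noise level are our choices; NumPy assumed). With vanishing noise, the linear MMSE estimate approaches the true channel vector:

```python
import numpy as np

rng = np.random.default_rng(3)
N_B, N_R, Q = 2, 3, 4
beta, beta_d, sigma2_w = 1.0, 0.5, 1e-8   # illustrative large-scale/noise values

# Prior covariance R_d and a channel vector d drawn with matching variances
R_d = np.diag([beta] * (N_B * N_R) + [beta_d] * N_B)
d = np.sqrt(np.diag(R_d) / 2) * (rng.standard_normal(N_B * (N_R + 1))
                                 + 1j * rng.standard_normal(N_B * (N_R + 1)))

A_blocks, y_blocks = [], []
for q in range(Q):
    phi_q = np.exp(1j * rng.uniform(-np.pi, np.pi, N_R))
    A_q = np.hstack([np.kron(phi_q[None, :], np.eye(N_B)), np.eye(N_B)])
    w_q = np.sqrt(sigma2_w / 2) * (rng.standard_normal(N_B)
                                   + 1j * rng.standard_normal(N_B))
    A_blocks.append(A_q)
    y_blocks.append(A_q @ d + w_q)            # noisy observable, q-th RIS state
A_Q = np.vstack(A_blocks)
y_Q = np.concatenate(y_blocks)

# E_Q = (A_Q R_d A_Q^H + sigma_w^2 I)^{-1} A_Q R_d; estimate d_hat = E_Q^H y_Q
E_Q = np.linalg.solve(A_Q @ R_d @ A_Q.conj().T + sigma2_w * np.eye(N_B * Q),
                      A_Q @ R_d)
d_mmse = E_Q.conj().T @ y_Q
assert np.linalg.norm(d_mmse - d) / np.linalg.norm(d) < 1e-2
```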
\subsubsection{Linear MMSE CE with $Q=1$ RIS configuration}
With regard to the case in which the RIS assumes just one configuration and the pilot sequences are transmitted by each MS just once, the linear MMSE estimator for the channel coefficients can be obtained by simply specializing the derivations of Eq. \eqref{d_estimation_MMSE} to the case $Q=1$. We omit further details for the sake of brevity.
\section{Joint beamformer and RIS configuration design in a single-user system} \label{Single_user_Resource}
This section focuses on the special case of a single-user system served by just one BS, which may be representative of a network with an orthogonal multiple access scheme and with negligible co-channel interference.
We tackle the maximization of the system SNR with respect to the base station beamforming vector $\mathbf{w}$ (active beamforming) and to the RIS phase shifts (which we refer to as passive beamforming). In agreement with the previously defined notation, in the following we denote by $\eta^{\rm DL}$ the BS transmit power during the data transmission phase and by $q^{\rm DL}$ the information symbol intended for the MS in the generic (discrete) symbol interval. The soft estimate of such data symbol at the MS can be expressed as
\begin{equation}
\begin{array}{lll}
\widehat{q}^{\rm DL}=\sqrt{\eta^{\rm DL}} \left( \bH \bPhi \bh +\bh^{(d)} \right)^H \mathbf{w} q^{\rm DL} + z \, ,
\end{array}
\label{DL_signal_k_SU}
\end{equation}
with $z \sim \mathcal{CN}(0,\sigma^2_z)$ denoting thermal noise.
Our first step is to exploit the identity in Eq. \eqref{eq:identity}, which enables us to rewrite \eqref{DL_signal_k_SU} in the more convenient form
\begin{equation}
\begin{array}{lll}
\widehat{q}^{\rm DL}=\sqrt{\eta^{\rm DL}} \left( \bD \bm{\phi} +\bh^{(d)} \right)^H \mathbf{w} q^{\rm DL} + z \, .
\end{array}
\label{DL_signal_k2_SU}
\end{equation}
Based on \eqref{DL_signal_k2_SU}, the system SNR can be defined as
\begin{equation}
\text{SNR}= \frac{\eta^{\rm DL}}{\sigma_z^2} \left| \mathbf{w}^H \left( \bD \bm{\phi} +\bh^{(d)} \right) \right|^2 \, .
\end{equation}
In practice, the BS has access only to estimates of $\bD$ and $\bh^{(d)}$, denoted in the following by $\widehat{\bD}$ and $\widehat{\bh}^{(d)}$, respectively; the function that can be optimized at the transmit side is thus
\begin{equation}
\widehat{\text{SNR}}= \frac{\eta^{\rm DL}}{\sigma_z^2} \left| \mathbf{w}^H \left( \widehat{\bD} \bm{\phi} +\widehat{\bh}^{(d)} \right) \right|^2 \, .
\end{equation}
Then, the problem to solve is stated as
\begin{subequations}\label{Prob:MaxSNR}
\begin{align}
&\ds\max_{\mathbf{w}, \bm{\phi}}\; \; \; \; \left| \mathbf{w}^H \left( \widehat{\bD} \bm{\phi} +\widehat{\bh}^{(d)} \right) \right|^2 \label{Prob:Max_SNR}\\
&\;\textrm{s.t.}\; \; [\bm{\phi}]_i= \rho e^{j \phi_{i}}, \\
&\;\; \; \;\; \; \; \phi_{i} \, \in \, [-\pi, \pi], \, \forall \; i=1, \ldots, N_R \\
&\;\; \;\;\; \; \; \|\mathbf{w}\|^2=1
\end{align}
\end{subequations}
{Problems} of the form of \eqref{Prob:MaxSNR} are usually tackled by alternating optimization methods which iterate between the optimization of the base station beamforming vector $\mathbf{w}$ and of the RIS phase shifts $\bm{\phi}$. This approach could be used also for the case at hand, but it has the drawback of requiring a numerical iterative algorithm. Instead, in the following, we propose two optimization methods that optimize an upper-bound and a lower-bound of the objective of \eqref{Prob:MaxSNR}, and which have the advantage of leading to closed-form expressions of $\mathbf{w}$ and $\bm{\phi}$.
\subsection{Upper-bound maximization} \label{UB_max_Section}
Assume, without loss of generality, that $N_B<N_R$, and consider the singular value decomposition of $\widehat{\bD}$, i.e.,
\begin{equation}
\widehat{\bD}=\sum_{i=1}^{N_B} \lambda_i \mathbf{u}_i \mathbf{v}_i^H\;.
\end{equation}
Next, let us express $\widehat{\bh}^{(d)}$ in terms of its projection on the orthonormal basis vectors $\mathbf{u}_1, \ldots, \mathbf{u}_{N_B}$, i.e.,
\begin{equation}
\widehat{\bh}^{(d)}=\sum_{i=1}^{N_B} \alpha_i \mathbf{u}_i,
\end{equation}
where $\alpha_i=\mathbf{u}_i^H\widehat{\bh}^{(d)}$.
At this point, we observe that an upper bound of the objective of Problem \eqref{Prob:MaxSNR} can be written as
\begin{equation}
\begin{array}{lllll}
\left| \mathbf{w}^H \left( \widehat{\bD} \bm{\phi} +\widehat{\bh}^{(d)} \right) \right|^2 &= \left| \mathbf{w}^H \left[ \ds \sum_{i=1}^{N_B} \mathbf{u}_i \left( \lambda_i \mathbf{v}_i^H \bm{\phi} +\alpha_i\right) \right] \right|^2 \\ & \leq N_B \ds \sum_{i=1}^{N_B} \left|\mathbf{w}^H\mathbf{u}_i\right|^2 \left| \lambda_i \mathbf{v}_i^H \bm{\phi} +\alpha_i \right|^2
\end{array}
\label{SNR}
\end{equation}
The upper-bound in \eqref{SNR} can be jointly maximized with respect to both $\bm{\phi}$ and $\mathbf{w}$. To see this, let us first consider, for all $i=1,\ldots,N_{B}$, the following optimization problem
\begin{equation}
\ds\max_{\bm{\phi}} \left| \lambda_i \mathbf{v}_i^H \bm{\phi} +\alpha_i \right|^2 = \ds\max_{\bm{\phi} } \left| \lambda_i \mathbf{v}_i^H \bm{\phi} e^{-j \angle{\alpha_i}} +|\alpha_i| \right|^2\;,
\label{max_phi_i_UB}
\end{equation}
whose optimal solution $\bm{\phi}_i^{\rm opt}$ is found by noticing that the phase of the $n$-th entry of $\bm{\phi}_i^{\rm opt}$, say $\phi_{n,i}^{\rm opt}$, is given by
\begin{equation}
\phi_{n,i}^{\rm opt}= -\angle{[\mathbf{v}_i^*]_n} + \angle{\alpha_i}\;.
\end{equation}
Next, let us define the index $i^{+}=\text{argmax}_{i}\left| \lambda_i \mathbf{v}_i^H \bm{\phi}_i^{\rm opt} +\alpha_i \right|^2$ and $c_{i^{+}}=\left| \lambda_{i^{+}} \mathbf{v}_{i^{+}}^H \bm{\phi}_{i^{+}}^{\rm opt} +\alpha_{i^{+}}\right|^2$. Thus, it follows that
\begin{equation}
\ds \sum_{i=1}^{N_B} \left|\mathbf{w}^H\mathbf{u}_i\right|^2 \left| \lambda_i \mathbf{v}_i^H \bm{\phi} +\alpha_i \right|^2\leq
c_{i^{+}}\sum_{i=1}^{N_{B}}|\mathbf{w}^{H}\mathbf{u}_{i}|^{2}\leq c_{i^{+}}\;,\label{Eq:UpperBound}
\end{equation}
where we have also exploited the fact that both $\mathbf{w}$ and $\mathbf{u}_{i}$ have unit norm. Finally, we observe that all inequalities in \eqref{Eq:UpperBound} turn into equalities by choosing
$
\bm{\phi}^{\rm opt}=\bm{\phi}_{i^+}^{\rm opt}$ and $\mathbf{w}^{\rm opt}=\mathbf{u}_{i^+}$,
which therefore are the maximizers of the right-hand-side of \eqref{SNR}.
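The whole procedure of this subsection can be condensed in a short numerical sketch (illustrative dimensions and variable names are ours; NumPy assumed): compute the SVD of $\widehat{\bD}$, the per-mode optimal phases, select the best mode $i^+$, and set $\mathbf{w}=\mathbf{u}_{i^+}$. The final assertion checks that the chosen pair attains the bound in \eqref{Eq:UpperBound} with equality:

```python
import numpy as np

rng = np.random.default_rng(4)
N_B, N_R, rho = 3, 6, 1.0
Dhat = rng.standard_normal((N_B, N_R)) + 1j * rng.standard_normal((N_B, N_R))
hd = rng.standard_normal(N_B) + 1j * rng.standard_normal(N_B)

U, lam, Vh = np.linalg.svd(Dhat, full_matrices=False)  # Dhat = sum_i lam_i u_i v_i^H
alpha = U.conj().T @ hd                                # alpha_i = u_i^H h^(d)

best_val, w_opt, phi_opt = -np.inf, None, None
for i in range(N_B):
    # optimal phases for mode i: phi_n = angle(v_i[n]) + angle(alpha_i)
    phases = np.angle(Vh[i].conj()) + np.angle(alpha[i])
    phi_i = rho * np.exp(1j * phases)
    val = np.abs(lam[i] * (Vh[i] @ phi_i) + alpha[i]) ** 2  # |lam_i v_i^H phi + alpha_i|^2
    if val > best_val:
        best_val, w_opt, phi_opt = val, U[:, i], phi_i      # keep mode i+

# With w = u_{i+} and phi = phi_{i+}, the objective attains the bound
achieved = np.abs(w_opt.conj() @ (Dhat @ phi_opt + hd)) ** 2
assert np.isclose(achieved, best_val)
```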
\subsection{Lower-bound Maximization} \label{LB_Max_Section}
Define $\mathbf{g}_{w}^H=\mathbf{w}^H \widehat{\bD}$, and $t_{w}=\mathbf{w}^H\widehat{\bh}^{(d)}$. Then, it holds:
\begin{align}
&\ds\max_{\mathbf{w},\bm{\phi}}\left|\mathbf{w}^{H}(\widehat{\bD}\bm{\phi}+\widehat{\bh}^{(d)})\right|^{2}= \ds \max_{\mathbf{w}}\left(\max_{\bm{\phi}}\left|\mathbf{g}_{w}^{H}\bm{\phi}+t_{w}\right|^{2}\right)\stackrel{(a)}{=}\notag\\
&\ds\rho^2 \max_{\mathbf{w}}\left|\sum_{i=1}^{N_{R}}|\mathbf{g}_{w}(i)|+|t_{w}|\right|^{2}\stackrel{(b)}\geq
\ds \rho^2\max_{\mathbf{w}}\left|\mathbf{w}^{H}\left(\sum_{i=1}^{N_{R}}\widehat{\mathbf{d}}_{i}+\widehat{\bh}^{(d)}\right)\right|^{2}
\label{SNR_LB}
\end{align}
where the equality $(a)$ holds since, for any given $\mathbf{w}$, the optimal $\bm{\phi}$ is the one that aligns the phases of $t_{w}$ and of the components of $\mathbf{g}_{w}$, denoted by $\mathbf{g}_{w}(i)$ with $i=1,\ldots,N_{R}$, while inequality (b) holds by removing the inner absolute values and since $\mathbf{g}_{w}(i)=\mathbf{w}^{H}\widehat{\mathbf{d}}_i$, with $\widehat{\mathbf{d}}_i$ the $i$-th column of $\widehat{\bD}$.
From \eqref{SNR_LB}, we see that the optimal $\mathbf{w}$ has the form:
\begin{equation}
\mathbf{w}^{\rm opt}=\frac{\ds \sum_{i=1}^{N_R} \widehat{\mathbf{d}}_i + \widehat{\bh}^{(d)}}{\norm{\ds \sum_{i=1}^{N_R} \widehat{\mathbf{d}}_i + \widehat{\bh}^{(d)}}}\;,
\end{equation}
from which we can obtain the optimal phases of the RIS as
\begin{equation}
\phi_{i}^{\rm opt}= -\angle{\mathbf{g}_{w}^{*}(i)} + \angle{t_w}\;,\;\forall\;i=1,\ldots,N_{R} .
\end{equation}
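A compact numerical sketch of the lower-bound solution (illustrative dimensions; NumPy assumed), verifying that the closed-form $\mathbf{w}^{\rm opt}$ and phases achieve the aligned-phase value $\left(\rho\sum_i|\mathbf{g}_w(i)|+|t_w|\right)^2$:

```python
import numpy as np

rng = np.random.default_rng(5)
N_B, N_R, rho = 3, 6, 1.0
Dhat = rng.standard_normal((N_B, N_R)) + 1j * rng.standard_normal((N_B, N_R))
hd = rng.standard_normal(N_B) + 1j * rng.standard_normal(N_B)

# Lower-bound beamformer: sum of the columns of D-hat plus h^(d), normalized
w = Dhat.sum(axis=1) + hd
w /= np.linalg.norm(w)

g_w = Dhat.conj().T @ w          # g_w^H = w^H D-hat
t_w = w.conj() @ hd              # t_w = w^H h^(d)
# optimal phases align each term of g_w^H phi with the phase of t_w
phi = rho * np.exp(1j * (np.angle(g_w) + np.angle(t_w)))

val = np.abs(w.conj() @ (Dhat @ phi + hd)) ** 2
target = (rho * np.abs(g_w).sum() + np.abs(t_w)) ** 2
assert np.isclose(val, target)
```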
\section{Joint RIS configuration design and power allocation in multiuser systems with joint transmission} \label{Joint_Resource}
{In the general multi-user scenario, we assume that the users may be jointly served by the two BSs to improve performance, and tackle the problem of maximizing the geometric mean of the downlink SINRs expressed as in Eq. \eqref{DL_SINR} with respect to the transmit powers and to the RIS phases. Exploiting the definition in Eq. \eqref{eq:identity} and rewriting Eq. \eqref{DL_SINR} in a more compact form, the problem to solve is stated as follows:}
\begin{subequations}\label{Prob:MaxSNR_MU}
\begin{align}
\ds\max_{\bm{\eta}_1^{\rm DL}, \bm{\eta}_2^{\rm DL}, \bm{\phi}} & \prod_{k=1}^K \! \frac{\ds \sum_{i=1}^2\left|I_{i,k}\sqrt{\eta_{i,k}^{\rm DL}}\left(\widehat{\bD}_{i,k} \bm{\phi} + \widehat{\bh}_{i,k}^{(d)}\right)^H\!\!\! \mathbf{w}_{i,k}\right|^2}{\ds \sum_{\substack{\ell=1 \\ \ell \neq k}}^K {\left|
\ds \sum_{i=1}^2
I_{i,\ell}\sqrt{\eta_{i,\ell}^{\rm DL}}\left(\widehat{\bD}_{i,k} \bm{\phi} + \widehat{\bh}_{i,k}^{(d)}\right)^H \!\!\!\!\mathbf{w}_{i,\ell}\right|^2}\!\!+\! \sigma^2_z} , \\
\textrm{s.t.} \quad & [\bm{\phi}]_n= \rho e^{j \phi_{n}}, \\
& \phi_{n} \, \in \, [-\pi, \pi], \, \forall \; n=1, \ldots, N_R \\
& \sum_{\ell=1}^K { I_{i,\ell}\eta_{i,\ell}^{\rm DL}} \leq P_{\rm max}^{{\rm BS},i}, \; i=1,2\\
& {\eta_{i,\ell}^{\rm DL}} \geq 0, \; \forall \ell=1,\ldots,K, \; i=1,2
\end{align}
\end{subequations}
where $\bm{\eta}_1^{\rm DL}= \left[ \eta_{1,1}^{\rm DL}, \ldots, \eta_{1,K}^{\rm DL}\right]^T$, $\bm{\eta}_2^{\rm DL}= \left[ \eta_{2,1}^{\rm DL}, \ldots, \eta_{2,K}^{\rm DL}\right]^T$, and we are assuming again that the BSs treat the channel estimates as the true channels. We substitute $\bm{\phi}= \rho e^{j \bm{\widetilde{\phi}}}$, with $\bm{\widetilde{\phi}}= \left[ \phi_1, \ldots, \phi_{N_R}\right]^T$, and assume channel-matched beamforming (CM-BF), i.e., the beamforming vector at the $i$-th BS for the $k$-th MS is chosen as
\begin{equation}
\mathbf{w}_{i,k}= \displaystyle \frac{\rho \widehat{\bD}_{i,k} e^{j\bm{\widetilde{\phi}}}+\widehat{\bh}_{i,k}^{(d)}}{\norm{\rho \widehat{\bD}_{i,k} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,k}^{(d)}}} \; .
\label{CM_BF}
\end{equation}
Under these assumptions, Problem \eqref{Prob:MaxSNR_MU} can be rewritten as Problem \eqref{Prob:MaxSNR_MU2} at the top of the next page.
\begin{figure*}
\begin{subequations}\label{Prob:MaxSNR_MU2}
\begin{align}
\ds\max_{\bm{\eta}_1^{\rm DL}, \bm{\eta}_2^{\rm DL}, \bm{\widetilde{\phi}}} \; & \prod_{k=1}^K \frac{
\ds \sum_{i=1}^2\left|I_{i,k}\sqrt{\eta_{i,k}^{\rm DL}}\norm{\rho \widehat{\bD}_{i,k} e^{j \bm{\widetilde{\phi}}} +\widehat{\bh}_{i,k}^{(d)}}\right|^2}{\ds \sum_{\substack{\ell=1 \\ \ell \neq k}}^K \left|
\ds \sum_{i=1}^2 I_{i,\ell}\sqrt{\eta_{i,\ell}^{\rm DL}}\frac{\left(\rho \widehat{\bD}_{i,k} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,k}^{(d)}\right)^{\!\!H}\!\!\!\!\left( \rho \widehat{\bD}_{i,\ell} e^{j\bm{\widetilde{\phi}}} \!\! +\!\!\widehat{\bh}_{i,\ell}^{(d)} \right)}{\norm{\rho \widehat{\bD}_{i,\ell} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,\ell}^{(d)}}} \right|^2 \!\! + \!\! \sigma^2_z} \, , \\
\textrm{s.t.} \quad & \left[\bm{\widetilde{\phi}}\right]_{n} \, \in \, [-\pi, \pi], \, \forall \; n=1, \ldots, N_R , \\
& \sum_{\ell=1}^K { I_{i,\ell}\eta_{i,\ell}^{\rm DL}} \leq P_{\rm max}^{{\rm BS},i}, \; i=1,2\\
& {\eta_{i,\ell}^{\rm DL}} \geq 0, \; \forall \ell=1,\ldots,K, \; i=1,2.
\end{align}
\end{subequations}
\end{figure*}
Solving \eqref{Prob:MaxSNR_MU2} optimally appears challenging, due to the presence of multi-user interference. This motivates us to resort to alternating optimization to find a candidate solution of \eqref{Prob:MaxSNR_MU2} \cite[Section 2.7]{BertsekasNonLinear}, i.e., we alternately solve the problem with respect to $\bm{\widetilde{\phi}}$ and the problem with respect to $\bm{\eta}_1^{\rm DL}$ and $\bm{\eta}_2^{\rm DL}$. At each step, the objective of \eqref{Prob:MaxSNR_MU2} does not decrease, and so the algorithm converges in the value of the objective function.
\subsection{Solution of the problem with respect to $\bm{\widetilde{\phi}}$} \label{Phi_opt_Section}
Determining the optimal solution of Problem \eqref{Prob:MaxSNR_MU2} appears challenging even with respect to only the RIS phases, due to the fact that multiple users, all served through the same RIS phase vector $\bm{\phi}$, are present. This prevents us from obtaining significant insight into the optimal $\bm{\Phi}$. On the other hand, it was shown in \cite{EE_RISs} that good results are obtained when employing a gradient-based search to optimize the phase shift matrix in RIS-based networks. Here we take a similar approach, applying the gradient algorithm in order to find a candidate solution for Problem \eqref{Prob:MaxSNR_MU2}. Before applying the gradient algorithm, we equivalently reformulate the problem by taking the logarithm of the objective, which leads us to Problem \eqref{Prob:MaxSNR_MU1_phi}, shown on the next page\footnote{Without loss of generality, we have neglected the constraint $\left[\bm{\widetilde{\phi}}\right]_{n} \, \in \, [-\pi, \pi], \, \forall \; n=1, \ldots, N_R$, since the objective is periodic with respect to each phase, with period $2\pi$, and thus any phase can be restricted to this fundamental period after the optimization routine has converged.}.
\begin{figure*}
\begin{subequations}\label{Prob:MaxSNR_MU1_phi}
\begin{align}
&\ds\max_{\bm{\widetilde{\phi}}} \sum_{k=1}^K \log_2 \left(\frac{\left|
\ds \sum_{i=1}^2
I_{i,k}\sqrt{\eta_{i,k}^{\rm DL}}\norm{\rho \widehat{\bD}_{i,k} e^{j \bm{\widetilde{\phi}}} +\widehat{\bh}_{i,k}^{(d)}} \right|^2}{\ds \sum_{\substack{\ell=1 \\ \ell \neq k}}^K \left|
\ds \sum_{i=1}^2
I_{i,\ell}\sqrt{\eta_{i,\ell}^{\rm DL}}\frac{\left(\rho \widehat{\bD}_{i,k} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,k}^{(d)}\right)^{\!\!H}\!\!\!\!\left( \rho \widehat{\bD}_{i,\ell} e^{j\bm{\widetilde{\phi}}} \!\! +\!\!\widehat{\bh}_{i,\ell}^{(d)} \right)}{\norm{\rho \widehat{\bD}_{i,\ell} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,\ell}^{(d)}}} \right|^2 \!\! + \!\! \sigma^2_z} \right) \, ,
\end{align}
\end{subequations}
\end{figure*}
Solving \eqref{Prob:MaxSNR_MU1_phi} still appears challenging even with respect to only the RIS phases, due to the fact that multiple users and base stations are present. This motivates the use of the gradient algorithm to find a candidate solution for $\bPhi$; this approach was shown to yield good results in \cite{EE_RISs}, although in a less challenging scenario than the one considered here.
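To illustrate the gradient-based phase update, the following sketch is a deliberate simplification of our own: a single BS, channel-matched beamforming, fixed powers, and a finite-difference gradient with a backtracking step size in place of the closed-form derivative (NumPy assumed). By construction, the objective value is non-decreasing across iterations:

```python
import numpy as np

rng = np.random.default_rng(6)
N_B, N_R, K, rho, sigma2_z = 3, 5, 2, 1.0, 0.1   # illustrative values
Dhat = [rng.standard_normal((N_B, N_R)) + 1j * rng.standard_normal((N_B, N_R))
        for _ in range(K)]
hd = [rng.standard_normal(N_B) + 1j * rng.standard_normal(N_B) for _ in range(K)]
eta = np.ones(K)                  # DL powers, fixed during the phase update

def objective(angles):
    """Sum of log2-SINRs under channel-matched beamforming."""
    phi = rho * np.exp(1j * angles)
    h = [Dhat[k] @ phi + hd[k] for k in range(K)]          # effective channels
    w = [h[k] / np.linalg.norm(h[k]) for k in range(K)]    # CM beamformers
    val = 0.0
    for k in range(K):
        sig = eta[k] * np.abs(h[k].conj() @ w[k]) ** 2
        intf = sum(eta[l] * np.abs(h[k].conj() @ w[l]) ** 2
                   for l in range(K) if l != k)
        val += np.log2(sig / (intf + sigma2_z))
    return val

angles = rng.uniform(-np.pi, np.pi, N_R)
eps, step = 1e-6, 0.1
history = [objective(angles)]
for _ in range(50):
    grad = np.array([(objective(angles + eps * e) - objective(angles)) / eps
                     for e in np.eye(N_R)])     # forward-difference gradient
    t, cand = step, angles + step * grad
    while objective(cand) < history[-1] and t > 1e-9:      # backtracking
        t /= 2
        cand = angles + t * grad
    if objective(cand) >= history[-1]:          # accept only improving steps
        angles = cand
    history.append(objective(angles))
```

No projection of the phases is needed, since the objective is $2\pi$-periodic in each component.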
To elaborate, let us define, for $i\in\{1,2\}$,
\begin{equation}
F_{k,\ell}^{(i)}=I_{i,\ell}\eta_{i,\ell}^{DL}\left( \rho \widehat{\bD}_{i,k} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,k}^{(d)}\right)^{H}\left( \rho \widehat{\bD}_{i,\ell} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,\ell}^{(d)} \right)\;.
\label{F_k_ell_def}
\end{equation}
Then, denoting by $G(\bm{\widetilde{\phi}})$ the objective of \eqref{Prob:MaxSNR_MU1_phi}, it holds that
\begin{equation}
G(\bm{\widetilde{\phi}})\!=\!\!\sum_{k=1}^{K}\!\log_{2}\!\!\left(\!\!\frac{F_{k,k}^{(1)}+F_{k,k}^{(2)}+2\sqrt{F_{k,k}^{(1)}F_{k,k}^{(2)}}}{\ds\sigma_{z}^{2}\!+\!\sum_{\ell\neq k}\frac{|F_{k,\ell}^{(1)}|^{2}}{F_{\ell,\ell}^{(1)}}\!+\!\frac{|F_{k,\ell}^{(2)}|^{2}}{F_{\ell,\ell}^{(2)}}\!+\!\frac{2\Re\left\{F_{k,\ell}^{(1)}F_{k,\ell}^{*(2)}\right\}}{\sqrt{F_{\ell,\ell}^{(1)}F_{\ell,\ell}^{(2)}}}}\!\!\right)
\end{equation}
Therefore, denoting by $\widetilde{\phi}_{n}$ the $n$-th component of $\bm{\widetilde{\phi}}$, for any $n=1,\ldots,N_{R}$, the derivative of $G$ with respect to $\widetilde{\phi}_{n}$ can be expressed as shown in Eq. \eqref{Eq:DerG} at the bottom of next page,
\begin{figure*}
\begin{align}\label{Eq:DerG}
&\frac{\partial G}{\partial \widetilde{\phi}_{n}}=\log_{2}(e)\sum_{k=1}^{K}\frac{\ds\sigma_{z}^{2}\!+\!\sum_{\ell\neq k}\frac{|F_{k,\ell}^{(1)}|^{2}}{F_{\ell,\ell}^{(1)}}\!+\!\frac{|F_{k,\ell}^{(2)}|^{2}}{F_{\ell,\ell}^{(2)}}\!+\!\frac{2\Re\left\{F_{k,\ell}^{(1)}F_{k,\ell}^{*(2)}\right\}}{\sqrt{F_{\ell,\ell}^{(1)}F_{\ell,\ell}^{(2)}}}}{F_{k,k}^{(1)}+F_{k,k}^{(2)}+2\sqrt{F_{k,k}^{(1)}F_{k,k}^{(2)}}}\\
&\times\Bigg\{\left(\frac{\partial F_{k,k}^{(1)}}{\partial \widetilde{\phi}_{n}}+\frac{\partial F_{k,k}^{(2)}}{\partial \widetilde{\phi}_{n}}+2\frac{\partial \sqrt{F_{k,k}^{(1)}F_{k,k}^{(2)}}}{\partial \widetilde{\phi}_{n}}\right)\left(\sigma_{z}^{2}\!+\!\sum_{\ell\neq k}\frac{|F_{k,\ell}^{(1)}|^{2}}{F_{\ell,\ell}^{(1)}}\!+\!\frac{|F_{k,\ell}^{(2)}|^{2}}{F_{\ell,\ell}^{(2)}}\!+\!\frac{2\Re\left\{F_{k,\ell}^{(1)}F_{k,\ell}^{*(2)}\right\}}{\sqrt{F_{\ell,\ell}^{(1)}F_{\ell,\ell}^{(2)}}}\right)\notag\\
&-\left(\!\sum_{\ell\neq k}\frac{\frac{\partial |F_{k,\ell}^{(1)}|^{2}}{\partial \widetilde{\phi}_{n}}F_{\ell,\ell}^{(1)}\!-\!\frac{\partial F_{\ell,\ell}^{(1)}}{\partial \widetilde{\phi}_{n}}|F_{k,\ell}^{(1)}|^{2}}{(F_{\ell,\ell}^{(1)})^{2}}\!+\!\frac{\frac{\partial |F_{k,\ell}^{(2)}|^{2}}{\partial \widetilde{\phi}_{n}}F_{\ell,\ell}^{(2)}\!-\!\frac{\partial F_{\ell,\ell}^{(2)}}{\partial \widetilde{\phi}_{n}}|F_{k,\ell}^{(2)}|^{2}}{(F_{\ell,\ell}^{(2)})^{2}} \right.\notag\\
&\left. +\!2\left[\frac{\partial\Re\left\{F_{k,\ell}^{(1)}F_{k,\ell}^{*(2)}\right\}}{\partial\widetilde{\phi}_{n}}\frac{1}{\sqrt{F_{\ell,\ell}^{(1)}F_{\ell,\ell}^{(2)}}}\!-\!\frac{\partial\sqrt{F_{\ell,\ell}^{(1)}F_{\ell,\ell}^{(2)}}}{\partial\widetilde{\phi}_{n}}\frac{\Re\left\{F_{k,\ell}^{(1)}F_{k,\ell}^{*(2)}\right\}}{F_{\ell,\ell}^{(1)}F_{\ell,\ell}^{(2)}}\right]\right)\notag\\
&\times\left(F_{k,k}^{(1)}+F_{k,k}^{(2)}+2\sqrt{F_{k,k}^{(1)}F_{k,k}^{(2)}}\right)\Bigg\}\frac{1}{\left(\ds\sigma_{z}^{2}\!+\!\sum_{\ell\neq k}\frac{|F_{k,\ell}^{(1)}|^{2}}{F_{\ell,\ell}^{(1)}}\!+\!\frac{|F_{k,\ell}^{(2)}|^{2}}{F_{\ell,\ell}^{(2)}}\!+\!\frac{2\Re\left\{F_{k,\ell}^{(1)}F_{k,\ell}^{*(2)}\right\}}{\sqrt{F_{\ell,\ell}^{(1)}F_{\ell,\ell}^{(2)}}}\right)^{2}}\notag
\end{align}
\end{figure*}
wherein, for $i\in\{1,2\}$,
\begin{align}
\frac{\partial |F_{k,\ell}^{(i)}|^{2}}{\partial \widetilde{\phi}_{n}}&=2\Re\left\{\frac{\partial F_{k,\ell}^{(i)}}{\partial \widetilde{\phi}_{n}}F_{k,\ell}^{(i)*}\right\} \label{partial_derivative_F_k_ell_2} \\
\frac{\partial F_{k,\ell}^{(i)}}{\partial \widetilde{\phi}_{n}}&=I_{i,\ell}\eta_{i,\ell}^{DL}\rho j\Bigg[\rho \sum_{m\neq n}\left[\mathbf{\widehat{D}}_{i,k}^{H}\mathbf{\widehat{D}}_{i,\ell}\right]_{(m,n)}e^{j(\widetilde{\phi}_{n}-\widetilde{\phi}_{m})} \notag\\
&-\rho \sum_{m\neq n}\left[\mathbf{\widehat{D}}_{i,k}^{H}\mathbf{\widehat{D}}_{i,\ell}\right]_{(n,m)}e^{-j(\widetilde{\phi}_{n}-\widetilde{\phi}_{m})}\notag\\
&+e^{j\widetilde{\phi}_{n}}\left[\mathbf{\widehat{D}}_{i,\ell}^{T}\mathbf{\widehat{h}}_{i,k}^{(d)*}\right]_{(n)}-e^{-j\widetilde{\phi}_{n}}\left[\mathbf{\widehat{D}}_{i,k}^{H}\mathbf{\widehat{h}}_{i,\ell}^{(d)}\right]_{(n)} \Bigg] \label{partial_derivative_F_k_ell} \\
&\hspace{-1cm}\frac{\partial\Re\left\{F_{k,\ell}^{(1)}F_{k,\ell}^{*(2)}\right\}}{\partial\widetilde{\phi}_{n}}=\Re\left\{\frac{\partial F_{k,\ell}^{(1)}}{\partial \widetilde{\phi}_{n}}F_{k,\ell}^{*(2)}+\frac{\partial F_{k,\ell}^{*(2)}}{\partial \widetilde{\phi}_{n}}F_{k,\ell}^{(1)}\right\}\label{partial_derivative_F_k_ell_product}\\
&\hspace{-1cm}\frac{\partial\sqrt{F_{k,k}^{(1)}F_{k,k}^{(2)}}}{\partial\widetilde{\phi}_{n}}=\frac{\frac{\partial F_{k,k}^{(1)}}{\partial\widetilde{\phi}_{n}}F_{k,k}^{(2)}+\frac{\partial F_{k,k}^{(2)}}{\partial\widetilde{\phi}_{n}}F_{k,k}^{(1)}}{2\sqrt{F_{k,k}^{(1)}F_{k,k}^{(2)}}} \label{partial_derivative_F_k_ell_product_sqrt}
\end{align}
Equipped with the above derivatives, the gradient algorithm can be implemented by means of any off-the-shelf software routine.
{
\subsubsection*{Computational complexity}
The gradient algorithm iteratively updates the variable according to the update rule:
\begin{equation}
\bm{\widetilde{\phi}}^{(s)}=\bm{\widetilde{\phi}}^{(s-1)}+\nu \nabla G(\bm{\widetilde{\phi}}^{(s-1)})\;,
\end{equation}
wherein $\bm{\widetilde{\phi}}^{(s)}$ denotes the vector of RIS phases at the $s$-th iteration, $\bm{\widetilde{\phi}}^{(s-1)}$ denotes the vector of RIS phases at the $(s-1)$-th iteration, $G$ is the objective function to maximize, and $\nabla G(\bm{\widetilde{\phi}}^{(s-1)})$ is the gradient of $G$ evaluated at $\bm{\widetilde{\phi}}^{(s-1)}$. Finally, $\nu$ is the step-size that defines the magnitude of each update.
For the case at hand, before the gradient method starts, it is possible to compute the matrices $\mathbf{\widehat{D}}_{i,k}$, the vectors $\mathbf{\widehat{h}}_{i,k}^{(d)}$, and all the operations involving only these quantities that appear in Eqs. \eqref{F_k_ell_def}-\eqref{partial_derivative_F_k_ell_product_sqrt}. Thus, the complexity associated with these operations will be neglected in the sequel, since these are simple initializations that need not be repeated in each iteration of the gradient method.
Instead, the bulk of the complexity is due to the computation of $\bm{\widetilde{\phi}}^{(s)}$ in each iteration, times the number of iterations until convergence. The former can be computed by the formulas in Eq. \eqref{Eq:DerG}, which, although cumbersome to write, contains only elementary functions and thus can be easily evaluated based on Eqs. \eqref{F_k_ell_def} and \eqref{partial_derivative_F_k_ell}. Inspecting \eqref{F_k_ell_def}, we can see that its complexity scales as $O(N_{R}N_{B})$, because $N_{R}N_{B}$ multiplications are required to compute each of the products $\mathbf{\widehat{D}}_{i,k}e^{j\bm{\widetilde{\phi}}}$ and $\mathbf{\widehat{D}}_{i,\ell}e^{j\bm{\widetilde{\phi}}}$. Then, \eqref{F_k_ell_def} must be computed $K^{2}$ times, i.e., for every pair $(k,\ell)$. As for \eqref{partial_derivative_F_k_ell}, its complexity is linear in $N_{R}$, due to the sum over the number of RIS elements, and again quadratic in $K$, since it must be computed for every $k$ and $\ell$. Once \eqref{F_k_ell_def} and \eqref{partial_derivative_F_k_ell} have been computed, \eqref{partial_derivative_F_k_ell_2}, \eqref{partial_derivative_F_k_ell_product}, and \eqref{partial_derivative_F_k_ell_product_sqrt} can be computed by one, two, or three additional multiplications for each $n=1,\ldots,N_{R}$ and $k,\ell=1,\ldots,K$. Finally, all these quantities can be plugged into \eqref{Eq:DerG}, which requires an additional number of computations that is quadratic in $K$, due to the nested sums over the number of users. Thus, asymptotically, the complexity of computing $\bm{\widetilde{\phi}}^{(s)}$ in each iteration scales as the complexity of computing \eqref{F_k_ell_def}, which is $O(K^{2}N_{R}N_{B})$, and the overall asymptotic complexity of the considered instance of the gradient method is $O(N_{it}K^{2}N_{R}N_{B})$, where $N_{it}$ is the number of iterations until convergence.
As for $N_{it}$, it is challenging to give a closed-form expression, as it heavily depends on the choice of the step-size $\nu$. Moreover, $\nu$ is typically handled in an adaptive way, i.e. updating its value during the execution of the algorithm, in order to achieve the best trade-off between convergence speed and performance. Empirically, in our simulations we have observed convergence in a handful of iterations.}
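As an illustrative sketch (not part of the original implementation), the update rule with an adaptive step size can be coded as follows. The objective used here is a simple concave placeholder rather than the actual sum-rate $G$, and the step is taken in the ascent direction since $G$ is to be maximized; the step-halving rule is one of many possible adaptive choices.

```python
import numpy as np

def gradient_ascent(grad_G, G, phi0, nu=0.1, max_iter=200, tol=1e-8):
    """Gradient ascent on the RIS phase vector.

    grad_G : callable returning the gradient of the objective G
    G      : callable returning the objective value (used for step adaptation)
    phi0   : initial phase vector (length N_R)
    nu     : initial step size, halved whenever a step would decrease G
    """
    phi = np.asarray(phi0, dtype=float)
    for _ in range(max_iter):
        g = grad_G(phi)
        step = nu
        # simple adaptive step size: shrink until the objective improves
        while step > 1e-12 and G(phi + step * g) < G(phi):
            step *= 0.5
        phi_new = phi + step * g
        if np.linalg.norm(phi_new - phi) < tol:
            return phi_new
        phi = phi_new
    return phi

# Toy stand-in objective: G(phi) = -||phi - c||^2 has maximizer phi = c.
c = np.array([0.3, -1.2, 2.0])
G = lambda p: -np.sum((p - c) ** 2)
grad_G = lambda p: -2.0 * (p - c)
phi_star = gradient_ascent(grad_G, G, np.zeros(3))
```

For the actual objective, `G` and `grad_G` would be replaced by evaluations of the sum-rate and of Eq. \eqref{Eq:DerG}, respectively.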
\subsection{Solution of the problem with respect to $\bm{\eta}_1^{\rm DL}$ and $\bm{\eta}_2^{\rm DL}$} \label{Eta_Opt_Section}
After the solution of Problem \eqref{Prob:MaxSNR_MU1_phi}, we now solve the problem with respect to the variables $\bm{\eta}_1^{\rm DL}$ and $\bm{\eta}_2^{\rm DL}$ for fixed $\bm{\widetilde{\phi}}$. To begin with, we again take the logarithm of the objective function of \eqref{Prob:MaxSNR_MU2}, which causes no optimality loss, since the logarithm is an increasing function. Thus, the optimization problem with respect to the transmit powers can be equivalently reformulated as
\begin{subequations}\label{Prob:MaxSNR_MU2_eta}
\begin{align}
\ds\max_{\bm{\eta}^{\rm DL}} \sum_{k=1}^K \; & \ds \log_2 \left(\frac{\left|\ds \sum_{i=1}^2 I_{i,k}\sqrt{\eta_{i,k}^{\rm DL}} a_{k,k}^{(i)}\right|^2}{\ds \sum_{\substack{\ell=1 \\ \ell \neq k}}^K {\left|\ds \sum_{i=1}^2 I_{i,\ell}\sqrt{\eta_{i,\ell}^{\rm DL}} a_{k,\ell}^{(i)}\right|^2} + \sigma^2_z}\right) \, , \\
\textrm{s.t.} \quad & \sum_{\ell=1}^K {I_{i,\ell}\eta_{i,\ell}^{\rm DL}} \leq P_{\rm max}^{{\rm BS},i}\;, \; i=1,2\\
& {\eta_{i,\ell}^{\rm DL}} \geq 0 \;\; \forall \ell=1,\ldots,K \; , \; i=1,2,
\end{align}
\end{subequations}
with
\begin{equation}
a_{k,\ell}^{(i)}= \ds \frac{\left( \rho \widehat{\bD}_{i,k} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,k}^{(d)}\right)^{H}\left( \rho \widehat{\bD}_{i,\ell} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,\ell}^{(d)} \right)}{\norm{\rho \widehat{\bD}_{i,\ell} e^{j\bm{\widetilde{\phi}}} +\widehat{\bh}_{i,\ell}^{(d)}}} \, .
\end{equation}
Problem \eqref{Prob:MaxSNR_MU2_eta} can be tackled by means of the sequential optimization framework. To see this, let us expand the objective of \eqref{Prob:MaxSNR_MU2_eta} as the difference of two logarithms, which yields
\begin{align}\label{Eq:ObjPower}
\text{SR}&=\sum_{k=1}^{K}\log_{2}\left(I_{1,k} \eta_{1,k}^{DL}(a_{k,k}^{(1)})^{2}+I_{2,k} \eta_{2,k}^{DL}(a_{k,k}^{(2)})^{2}\right.\notag\\
&\hspace{3cm}\left.+2 I_{1,k}I_{2,k}\sqrt{\eta_{1,k}^{DL}\eta_{2,k}^{DL}}a_{k,k}^{(1)}a_{k,k}^{(2)}\right)\notag\\
&-\sum_{k=1}^{K}\log_{2}\Bigg(\sigma_{z}^{2}+\sum_{\ell\neq k}I_{1,\ell}\eta_{1,\ell}^{DL}(a_{k,\ell}^{(1)})^{2}+I_{2,\ell}\eta_{2,\ell}^{DL}(a_{k,\ell}^{(2)})^{2}\notag\\
&\hspace{3cm}+2I_{1,\ell}I_{2,\ell}\sqrt{\eta_{1,\ell}^{DL}\eta_{2,\ell}^{DL}}a_{k,\ell}^{(1)}a_{k,\ell}^{(2)}\Bigg)
\end{align}
By virtue of \cite[Lemma 1]{Dandrea2020}, \eqref{Eq:ObjPower} is the difference of two concave functions. Thus, it can be maximized by an instance of the sequential optimization method, in which at the $j$-th iteration the second sum in \eqref{Eq:ObjPower} is linearized around the point $(\bm{\tilde{\eta}}_{1},\bm{\tilde{\eta}}_{2})$ obtained as the solution of the linearized problem at the $(j-1)$-th iteration.
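A minimal sketch of the sequential optimization idea on a one-dimensional stand-in problem (maximizing the difference of concave terms $\log(1+x)-\log(1+x/2)$ over $[0,P]$, not the actual power allocation problem): each iteration replaces the subtracted concave term by its linearization at the previous iterate and maximizes the resulting concave surrogate, whose maximizer is available in closed form for this toy objective.

```python
import numpy as np

def surrogate_argmax(x_prev, P):
    """Maximize the concave surrogate
        log(1+x) - [log(1+x_prev/2) + (x - x_prev)/(2 + x_prev)]
    over [0, P].  Setting the derivative to zero gives
        1/(1+x) = 1/(2 + x_prev)  =>  x = 1 + x_prev,
    then clip to the feasible interval.
    """
    return float(np.clip(1.0 + x_prev, 0.0, P))

def sequential_opt(P, x0=0.0, max_iter=100, tol=1e-9):
    # successive lower-bound maximization: iterate surrogate maximizations
    x = x0
    for _ in range(max_iter):
        x_new = surrogate_argmax(x, P)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# log((1+x)/(1+x/2)) is increasing, so the true maximizer on [0, 10] is x = 10.
x_star = sequential_opt(P=10.0)
```

Each surrogate is a global lower bound of the true objective that is tight at the expansion point, which is what guarantees the monotonic improvement of the sequential method.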
\begin{table}
\centering
\caption{Simulation parameters}
\label{table:parameters}
\def\arraystretch{1.2}
\begin{tabulary}{\columnwidth}{ |p{3cm}|p{4.5cm}| }
\hline
MSs distribution & Horizontal: uniform in each cell, vertical: 1.5~m\\ \hline
BS height & 25~m\\ \hline
RIS height & 40~m\\ \hline
Carrier freq., bandwidth & $f_0=3$ GHz, $B = 20$ MHz \\ \hline
BSs antenna array & 16-element with $\lambda/2$ spacing\\ \hline
RIS antenna array & $N_R$-element with $\lambda/2$ spacing\\ \hline
MS antennas & Omnidirectional with 0~dBi gain\\ \hline
Thermal noise & -174 dBm/Hz spectral density \\ \hline
Noise figure & 9 dB at BS/MS\\ \hline
\end{tabulary}
\end{table}
\section{Numerical Results}
In order to provide numerical results, we refer to the scenario depicted in Fig. \ref{Fig:Scenario_multi_BS}, considering an inter-site distance between the two BSs of 300 meters. We consider BSs with $N_{B,1}=N_{B,2}=64$ antennas and a RIS with $N_R$ reflecting elements; the number of MSs in each cell is 10, and thus the number of users simultaneously served on the same frequency is $K=20$ in the multi-user scenario. All the remaining simulation parameters are summarized in Table \ref{table:parameters}.
The channel coefficients $\beta_{i,k}$ and $\beta_{i,k}^{(d)}$, modeling the attenuation on the reflected $k$-th MS--RIS--BS path and on the direct $k$-th MS--BS path with respect to the $i$-th BS, respectively, are chosen according to \cite{EE_RISs}. In particular,
\begin{equation}
\beta_{i,k}=\frac{10^{-3.53}}{\left(d_{{\rm BS}_i ,\rm{RIS}}+d_{{\rm RIS},k}\right)^{3.76}} \, , \; \; \text{and} \; \; \beta_{i,k}^{(d)}=\frac{10^{-3.53}}{d_{{\rm BS}_i,k}^{3.76}},
\label{Beta_k}
\end{equation}
where $d_{{\rm BS}_i ,\rm{RIS}}$ and $d_{{\rm RIS},k}$ are the distance between the $i$-th BS and the RIS and the distance between the $k$-th MS and the RIS in meters, respectively, while $d_{{\rm BS}_i,k}$ is the distance between the $i$-th BS and the $k$-th MS.
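A quick numerical sanity check of the attenuation model in \eqref{Beta_k}; the distances below are illustrative placeholders, not values from the simulation setup. Note that, in this model, the reflected-path attenuation depends only on the total path length $d_{{\rm BS}_i ,\rm{RIS}}+d_{{\rm RIS},k}$.

```python
def beta_reflected(d_bs_ris, d_ris_ms):
    # attenuation on the reflected BS-RIS-MS path, as in Eq. (Beta_k)
    return 10 ** (-3.53) / (d_bs_ris + d_ris_ms) ** 3.76

def beta_direct(d_bs_ms):
    # attenuation on the direct BS-MS path
    return 10 ** (-3.53) / d_bs_ms ** 3.76

b_r = beta_reflected(100.0, 50.0)   # BS-RIS 100 m, RIS-MS 50 m (placeholders)
b_d = beta_direct(150.0)            # same total length => same attenuation
```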
We assume that the maximum power transmitted by the BS is $P_{\rm max}^{{\rm BS},i}=10$ W. In the following figures, we report the results of the resource allocation strategies proposed in the paper in the case of {perfect channel state information (PCSI)} and in the case in which channel estimation (CE) is performed at each BS according to the procedures detailed in Section \ref{Section_CE}. With regard to the CE procedure, orthogonal sequences with length $\tau_p=K$ are assumed at the MS and the power transmitted during the uplink training is $\eta_k=\tau_p\widetilde{\eta}_k$ with $\widetilde{\eta}_k=100$ mW. The number of RIS configurations during the CE procedures is $Q=N_R+1$ in the case of LS CE and MMSE with $Q$ realizations of the RIS (MMSEQ), and $Q=1$ in the case of MMSE with one random realization of the RIS configuration (MMSE1).
\begin{figure}[!t]
\centering
\includegraphics[scale=0.63]{NMSE_comparison-eps-converted-to.pdf}
\caption{NMSE versus $N_R$ in the multiuser scenario with $N_B=64$ and $K=20$.}
\label{Fig:NMSE_comparison}
\end{figure}
{
\subsection{Channel estimation performance}
In this subsection, we evaluate the performance of the channel estimation procedures proposed in the paper and provide a comparison with the channel estimation framework proposed in reference \cite{Wang2019d}. The procedure of \cite{Wang2019d} is based on the idea of switching groups of RIS elements on and off and assumes a three-phase protocol in which i) the RIS is switched off and the BS estimates the direct channels between BS and MSs, ii) the RIS is switched on and only one user transmits pilot signals, and iii) groups of elements are switched on and all the other users transmit pilot signals in turn. In reference \cite{Wang2019d} the authors also use an MMSE approach and assume knowledge of the statistics of the correlation between the channels of different users. The pilot training required by the channel estimation in \cite{Wang2019d} is lower: indeed, its minimum number of pilots is $K+N_R+\max\left( K-1, \lceil \frac{(K-1)N_R}{N_B} \rceil \right)$, while our approach requires $K(N_R+1)$ total pilot signals for LS CE and MMSEQ CE, and $K$ for MMSE1 CE.
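For concreteness, with the parameters used in the numerical results ($K=20$, $N_R=64$, $N_B=64$), the two training overheads can be compared with a back-of-the-envelope computation (a sketch of the pilot-counting formulas above):

```python
from math import ceil

def pilots_wang2019(K, N_R, N_B):
    # minimum pilot count of the three-phase scheme of [Wang2019d]
    return K + N_R + max(K - 1, ceil((K - 1) * N_R / N_B))

def pilots_ls_mmseq(K, N_R):
    # LS / MMSEQ estimators: K users over Q = N_R + 1 RIS configurations
    return K * (N_R + 1)

def pilots_mmse1(K):
    # MMSE1 estimator: a single random RIS configuration
    return K

n_wang = pilots_wang2019(20, 64, 64)    # 20 + 64 + max(19, 19) = 103
n_ours = pilots_ls_mmseq(20, 64)        # 20 * 65 = 1300
```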
We report the performance of the considered channel estimation procedures in terms of the normalized mean-square-error (NMSE), defined as in Eq. \eqref{NMSE_definition} at the top of next page.
\begin{figure*}
{\begin{equation}
\text{NMSE}_i=\frac{\ds \sum_{k=1}^K \mathbb{E} \left[ \norm{\bh_{i,k}^{(d)}-\widehat{\bh}_{i,k}^{(d)}}^2 \right] + \ds \sum_{n=1}^{N_R} \ds \sum_{k=1}^K \mathbb{E} \left[ \norm{\left[\bD_{i,k}\right]_{(:,n)}-\left[\widehat{\bD}_{i,k}\right]_{(:,n)}}^2 \right]}{\ds \sum_{k=1}^K \mathbb{E} \left[ \norm{\bh_{i,k}^{(d)}}^2\right] + \ds \sum_{n=1}^{N_R} \ds \sum_{k=1}^K \mathbb{E} \left[ \norm{\left[\bD_{i,k}\right]_{(:,n)}}^2 \right]}
\label{NMSE_definition}
\end{equation}}
\end{figure*}
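The NMSE in \eqref{NMSE_definition} is simply the ratio of the total squared estimation error to the total channel energy, accumulated over the $K$ direct channels and the $N_R$ columns of each cascaded channel matrix. A minimal sketch, with the expectations replaced by a single random draw of placeholder channels:

```python
import numpy as np

def nmse(h_true, h_est, D_true, D_est):
    """NMSE over K direct channels and K cascaded channel matrices.

    h_true, h_est : arrays of shape (K, N_B)        direct channels
    D_true, D_est : arrays of shape (K, N_B, N_R)   cascaded channel matrices
    """
    err = np.sum(np.abs(h_true - h_est) ** 2) + np.sum(np.abs(D_true - D_est) ** 2)
    ref = np.sum(np.abs(h_true) ** 2) + np.sum(np.abs(D_true) ** 2)
    return err / ref

rng = np.random.default_rng(0)
K, N_B, N_R = 4, 8, 16                      # small placeholder dimensions
h = rng.standard_normal((K, N_B)) + 1j * rng.standard_normal((K, N_B))
D = rng.standard_normal((K, N_B, N_R)) + 1j * rng.standard_normal((K, N_B, N_R))
noise = 0.1 * (rng.standard_normal((K, N_B)) + 1j * rng.standard_normal((K, N_B)))
val = nmse(h, h + noise, D, D)              # estimation error on the direct part only
```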
In Fig. \ref{Fig:NMSE_comparison}, we report the performance of the LS CE, MMSE1 CE and MMSEQ CE schemes in terms of NMSE versus the number of RIS elements $N_R$. We can note that the MMSE-based approaches offer better performance than the LS one, which does not assume knowledge of the large-scale coefficients of the channels. Assuming the same amount of channel knowledge and the same length of the pilot signals, we also report the performance of the procedure proposed in reference \cite{Wang2019d}, and we note that our MMSE approaches outperform it.
Our approaches thus exhibit the following advantages: first, better performance in terms of NMSE is obtained in the considered scenario; second, they do not require switching the RIS elements on and off or making them completely absorbing, which may be difficult in practice.}
\subsection{Single-user and single BS}
We start evaluating the performance of the optimization procedure detailed in Section \ref{Single_user_Resource}.
In Fig. \ref{Fig:SNR_SingleUser_Opt}, we report the cumulative distribution functions (CDFs) of the SNR. In particular, we compare the performance obtained in the following cases: closed-form optimization (CF-Opt) via the upper-bound maximization (UB Max) in Section \ref{UB_max_Section}; closed-form optimization (CF-Opt) via lower-bound maximization (LB Max) in Section \ref{LB_Max_Section}; the alternating maximization (AM) approach; and random configuration of the RIS (No Opt.). {In the AM approach we use alternating optimization to find a candidate solution of Problem \eqref{Prob:MaxSNR} \cite[Section 2.7]{BertsekasNonLinear}, i.e., we alternately solve the problem with respect to $\bm{\phi}$ and then with respect to $\mathbf{w}$.} We can see that the proposed strategies are effective, since the gap with respect to the performance corresponding to a random RIS configuration is of several dB.
Moreover, the performance of the closed-form solutions is very close (within a few dB) to that obtained using the AM methodology, with a significantly lower complexity, given that we have been able to express the solution in closed form. The figure also shows that, when MMSE CE with $Q>1$ RIS configurations is used, the performance is quite close (within fractions of a dB) to that attained in the case of perfect CSI.
Fig. \ref{Fig:SNR_SingleUser_vsNR} shows the impact of the number of RIS reflecting elements on the system performance. The figure shows that, for the considered optimization procedures, when $N_R$ is increased from 8 to 128, the SNR gain is in the order of 15 dB. The SNR value corresponding to the point $N_R=8$ is also an upper bound on the system performance when no RIS is present in the system. This plot thus also shows that installing a RIS may bring remarkable performance improvements.
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.45]{DL_SNR_SingleUser_CE-eps-converted-to.pdf}
\caption{CDFs of the SNR in the single-user scenario with $N_R=64$ and $N_B=64$. Performance obtained in the following cases: closed-form optimization via the upper-bound maximization (CF-Opt, UB Max); closed-form optimization via the lower-bound maximization (CF-Opt, LB Max); alternating maximization (AM) approach and random configuration of the RIS (No Opt.).}
\label{Fig:SNR_SingleUser_Opt}
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.55]{Av_SNR_SingleUser_vsNR-eps-converted-to.pdf}
\caption{Average SNR versus $N_R$ in the single-user scenario with $N_B=64$. Performance obtained in the following cases: closed-form optimization via the upper-bound maximization (CF-Opt, UB Max); closed-form optimization via the lower-bound maximization (CF-Opt, LB Max); alternating maximization (AM) approach and random configuration of the RIS (No Opt.).}
\label{Fig:SNR_SingleUser_vsNR}
\end{figure}
\subsection{Performance of multi-user system with joint transmission}
We now evaluate the performance obtained by the optimization procedure described in the general multiuser scenario with two BSs and joint transmission, as reported in Section \ref{Joint_Resource}.
First of all, we detail the procedure that we use to define the binary variables $I_{i,\ell}, \, \forall i =1,2 \, \text{and} \, \ell=1,\ldots,K$, i.e., to determine which users are to be jointly served by the two BSs in the system.
Basically, we choose to adopt joint transmission for a fixed percentage, say $p_{\rm JT}$, of the users in the system, and denote by $K_{\rm JT}$ the number of MSs served by joint transmission. More precisely, the procedure starts by associating each MS to the BS with the largest direct power attenuation coefficient; otherwise stated, the generic $k$-th MS is associated to the $i^*$-th BS such that
\begin{equation}
i^*= \arg \max_{i=1,2} \beta_{i,k}^{(d)} \, .
\end{equation}
Then, the $K_{\rm JT}$ users with the lowest value of the metric
\begin{equation}
\gamma_{k}^{(d)}=\frac{\max \left( \beta_{1,k}^{(d)}, \beta_{2,k}^{(d)}\right)}{\min \left( \beta_{1,k}^{(d)}, \beta_{2,k}^{(d)}\right)} \,
\end{equation}
are served with joint transmission.
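The selection rule above — associate each MS with the BS having the stronger direct link, then serve jointly the $K_{\rm JT}$ users whose two direct links are most balanced (smallest $\gamma_k^{(d)}$) — can be sketched as follows, with placeholder attenuation values:

```python
import numpy as np

def select_jt_users(beta1, beta2, p_jt):
    """beta1, beta2 : direct attenuation coefficients to BS 1 and BS 2 (length K)
    p_jt          : fraction of users served with joint transmission
    Returns (assoc, jt_set): per-user serving BS (0 or 1) and the set of
    jointly served user indices.
    """
    beta1, beta2 = np.asarray(beta1, float), np.asarray(beta2, float)
    K = beta1.size
    K_jt = int(round(p_jt * K))
    assoc = np.where(beta1 >= beta2, 0, 1)          # stronger direct link wins
    gamma = np.maximum(beta1, beta2) / np.minimum(beta1, beta2)
    jt_set = set(np.argsort(gamma)[:K_jt])          # smallest ratio = cell edge
    return assoc, jt_set

# Placeholder coefficients for K = 5 users; users 1 and 3 have balanced links.
beta1 = [1.0, 0.50, 0.10, 0.30, 0.90]
beta2 = [0.1, 0.45, 0.80, 0.31, 0.05]
assoc, jt = select_jt_users(beta1, beta2, p_jt=0.4)
```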
Fig. \ref{Fig:SNR_MultiUser_Opt} reports the CDFs of the geometric mean of the SINRs in the multi-user scenario. In particular, we compare the performance obtained in the following cases: {joint passive beamforming design and power allocation (Joint Opt.); optimization of the RIS phase shifts only (Only RIS Opt.); optimization of the transmit powers only (Only Powers Opt.); and no optimization (No Opt.). In Joint Opt., we use the whole procedure proposed in Section \ref{Joint_Resource}; in Only RIS Opt., we optimize the RIS configuration as in Section \ref{Phi_opt_Section} and consider uniform power allocation; in Only Powers Opt., we assume a random configuration of the phase shifts and optimize the transmit powers according to the procedure in Section \ref{Eta_Opt_Section}; and in No Opt., we assume a random configuration of the RIS and uniform power allocation.} It is assumed that 20\% of the users are served with joint transmission, i.e., $p_{\rm JT}= 20 \%$. The results confirm that the proposed procedures are effective, since noticeable performance gains are obtained with respect to the case in which the RIS has a random configuration. The joint optimization of the RIS phase shifts and of the transmit powers achieves the best performance; also in this case, MMSEQ channel estimation exhibits the smallest gap with respect to the ideal performance obtained with perfect CSI.
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.45]{SINR_Geom_mean_MultiUser-eps-converted-to.pdf}
\caption{CDFs of the geometric mean of the SINRs in the multi-user scenario with $N_R=64$, $N_{B,1}=N_{B,2}=64$, $K=20$ and $p_{\rm JT}= 20 \%$. Performance obtained in the following cases: joint passive beamforming design and power allocation (Joint Opt.); optimization of the RIS phase shifts only (Only RIS Opt.); optimization of the transmit powers only (Only Powers Opt.); and random configuration of the RIS with uniform power allocation (No Opt.).}
\label{Fig:SNR_MultiUser_Opt}
\end{figure*}
Fig. \ref{Fig:Av_SNR_vs_NR} shows the impact of the number of reflecting elements at the RIS, $N_R$, on the average SINR per user, using the same resource allocation techniques reported in Fig. \ref{Fig:SNR_MultiUser_Opt}. One interesting remark that can be made here is that, in a multiuser environment, when the RIS phase shifts are not optimized, the performance slightly decreases as the RIS size increases. This result can be justified by noticing that a RIS with random phase shifts increases the overall interference level and ultimately degrades the system performance. When, instead, the RIS phases are optimized, the system performance correctly increases, even though the improvements are now smaller than those observed in Fig. \ref{Fig:SNR_SingleUser_vsNR} for the single-user scenario. This behavior can be explained by noticing that in a single-user system all the RIS elements can be exploited to improve the performance of the only user in the system, while in the present multiuser system the RIS benefit must be shared among the several users in the system.
Finally, Fig. \ref{Fig:CDF_SINR_geom_mean_CoMP} shows the CDF of the geometric mean of the SINRs for several values of the parameter $p_{\rm JT}$. The general behaviour observed is that increasing the number of users enjoying joint transmission usually brings a performance improvement for most of the users in the system, even though a small fraction of them (the ones represented in the highest part of the CDF) experience some performance degradation due to the increased level of interference. These are indeed the users very close to the serving BS, and the activation of joint transmission to cell-edge users causes a reduction of their received useful power.
Overall, the plots show that the proposed resource allocation procedures bring considerable performance improvements, that the presence of the RIS is beneficial to the system, and that the proposed CE techniques blend well with the described optimization procedures.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.47]{Av_SINR_MultiUser_vsNR-eps-converted-to.pdf}
\caption{Average SINR versus the number of reflecting elements at the RIS, $N_R$, with $N_{B,1}=N_{B,2}=64$, $K=20$, $p_{\rm JT}= 20 \%$ and MMSEQ CE. Performance obtained in the following cases: joint passive beamforming design and power allocation (Joint Opt.); optimization of the RIS phase shifts only (Only RIS Opt.); optimization of the transmit powers only (Only Powers Opt.); and random configuration of the RIS with uniform power allocation (No Opt.).}
\label{Fig:Av_SNR_vs_NR}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.47]{SINR_GeomMean_MultiUser_CoMP-eps-converted-to.pdf}
\caption{CDFs of the geometric mean of the SINRs in the multi-user scenario with different values of $p_{\rm JT}$, $N_R=64$, $N_{B,1}=N_{B,2}=64$, $K=20$, MMSEQ CE and with the joint passive beamforming design and power allocation (Joint Opt.) technique.}
\label{Fig:CDF_SINR_geom_mean_CoMP}
\end{figure}
\section{Conclusion}
For a wireless network assisted by a RIS, channel estimation algorithms and resource allocation procedures have been presented in this paper. In particular, the paper has tackled the problem of SNR maximization with respect to the RIS configuration and to the BS beamformer for a single-user setting. Moreover, for a multi-user multi-cell scenario, the geometric mean of the SINRs has been maximized with respect to the BS transmit power vectors and to the RIS configuration, assuming that some of the users are jointly served by two BSs. The obtained results have shown the beneficial impact on the system performance of the presence of a RIS and of the described optimization procedures. Current research on this topic is centred on the study of RISs in conjunction with innovative network deployments such as cell-free massive MIMO systems.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\bibliography{LISreferences,FracProg,RIS_Survey}
\end{document} | {"config": "arxiv", "file": "2103.11165/Paper-TCCN-ISSWC-20-0006_ArXiv.tex"} |
TITLE: plane curve of degree 4
QUESTION [3 upvotes]: I was dealing with exercise IV.3.2 in Hartshorne, which says the following:
Let $X$ be a plane curve of degree 4.
a) Show that the canonical divisors on $X$ are exactly the hyperplane divisors.
b) If $D$ is any effective divisor of degree 2 on $X$, then $\dim |D|=0$.
c) Conclude $X$ is not hyperelliptic.
To prove that $X$ is not hyperelliptic it is sufficient to use adjunction formula to obtain $\omega_X=O_X(1)$, so the canonical divisor $K$ is very ample, hence $X$ is not hyperelliptic. But I wanted to follow the exercise to get a more geometric view of the problem.
a) The curve has genus 3, so $l(K)=3$ and $\deg(K)=4$. Set $D:=X\cdot L$, where $L$ is a line. We then have $D=p_1+\dots+p_4$ and thus by Riemann-Roch $$l(D)=2+l(K-D).$$ Since $\deg(K-D)=0$, we have $l(K-D)\in \{0,1\}$, and if it is $1$, we get $$K-D\sim 0.$$
Now, to prove $l(K-D)=1$, I would like to prove $l(D)=3$. Is there a way of doing it by exploiting the fact that $D=X\cdot L$?
b) Let $P+Q$ be our degree 2 effective divisor. By Riemann-Roch $$l(P+Q)=l(K-P-Q)$$ and by adjunction we have K bpf as seen before and so $l(P+Q)=1,$ as required.
Is there a way to prove it exploiting a)?
c) follows by b).
REPLY [0 votes]: The more geometric answer I was looking for is the following:
a) We are left to prove $\dim|X\cdot L|=2$, and this follows from the geometric version of Riemann-Roch, which gives $\dim \operatorname{span}\{p_1,\ldots, p_4\}=\deg(X\cdot L)-1-\dim|X\cdot L|$. Since the $p_i$ lie on a line, they span a line, so we get $\dim|X\cdot L|=2$, as wanted.
b) Write $D=P_1+P_2$ and suppose there exists an effective divisor linearly equivalent to $D$, say $Q_1+Q_2$. The line through $P_1,P_2$ intersects $X$ in two further points $P_3,P_4$, and by a), $$K\sim P_1+P_2+P_3+P_4\sim Q_1+Q_2+P_3+P_4,$$ hence $Q_1,Q_2$ belong to the same line containing $P_1,P_2,P_3,P_4$. Therefore $Q_1,Q_2$ must coincide with $P_1,P_2$. In conclusion, we cannot find any other such divisor, so $\dim|D|=0$.
\begin{document}
\maketitle
\def\HDS{half-disc sum}
\def\irred{irreducible}
\def\half{spinal pair }
\def\spinal{\half}
\def\spinals{\halfs}
\def\halfs{spinal pairs }
\def\reals{\mathbb R}
\def\rationals{\mathbb Q}
\def\complex{\mathbb C}
\def\naturals{\mathbb N}
\def\integers{\mathbb Z}
\def\proj{P}
\def\hyp {\hbox {\rm {H \kern -2.8ex I}\kern 1.15ex}}
\def\weight#1#2#3{{#1}\raise2.5pt\hbox{$\centerdot$}\left({#2},{#3}\right)}
\def\intr{{\rm int}}
\def\inter{\ \raise4pt\hbox{$^\circ$}\kern -1.6ex}
\def\Cal{\cal}
\def\from{:}
\def\inverse{^{-1}}
\def\id{{\rm Id}}
\def\Max{{\rm Max}}
\def\Min{{\rm Min}}
\def\sing{{\rm Sing}}
\def\fr{{\rm fr}}
\def\embed{\hookrightarrow}
\def\Genus{{\rm Genus}}
\def\Z{Z}
\def\X{X}
\def\roster{\begin{enumerate}}
\def\endroster{\end{enumerate}}
\def\intersect{\cap}
\def\definition{\begin{defn}}
\def\enddefinition{\end{defn}}
\def\subhead{\subsection\{}
\def\theorem{thm}
\def\endsubhead{\}}
\def\head{\section\{}
\def\endhead{\}}
\def\example{\begin{ex}}
\def\endexample{\end{ex}}
\def\ves{\vs}
\def\mZ{{\mathbb Z}}
\def\M{M(\Phi)}
\def\bdry{\partial}
\def\hop{\vskip 0.15in}
\def\mathring{\inter}
\def\trip{\vskip 0.09in}
\begin{abstract}
We classify isotopy classes of automorphisms (self-homeomorphisms) of
3-manifolds satisfying the Thurston Geometrization Conjecture. The
classification is similar to the classification of automorphisms of
surfaces developed by Nielsen and Thurston, except an automorphism of
a reducible manifold must first be written as a suitable composition
of two automorphisms, each of which fits into our classification.
Given an automorphism, the goal is to show, loosely speaking, either
that it is periodic, or that it can be decomposed on a surface
invariant up to isotopy, or that it has a ``dynamically nice"
representative, with invariant laminations that ``fill" the manifold.
We consider automorphisms of irreducible and boundary-irreducible
3-mani\-folds as being already classified, though there are some
exceptional manifolds for which the automorphisms are not understood.
Thus the paper is particularly aimed at understanding automorphisms of
reducible and/or boundary reducible 3-manifolds.
Previously unknown phenomena are found even in the case of connected
sums of $S^2\times S^1$'s. To deal with this case, we prove that a
minimal genus Heegaard decomposition is unique up to isotopy, a result
which apparently was previously unknown.
Much remains to be understood about some of the automorphisms of the
classification.
\end{abstract}
\section{Introduction}\label{Intro}
There is a large body of work studying the mapping class groups and
homeomorphism spaces of reducible and $\bdry$-reducible manifolds.
Without giving references, some of the researchers who have
contributed are: E. C{\'e}sar de S{\'a}, D. Gabai, M. Hamstrom,
A. Hatcher, H. Hendriks, K. Johannson, J. Kalliongis, F. Laudenbach,
D. McCullough, A. Miller, C. Rourke. Much less attention has been
given to the problem of classifying or describing individual
automorphisms up to isotopy. The paper \cite{JWW:Homeomorphisms} shows
that if $M$ is a Haken manifold or a manifold admitting one of
Thurston's eight geometries, then any homeomorphism of $M$ is isotopic
to one realizing the Nielsen number.
In this paper we use the word ``automorphism" interchangeably to mean
either a self-homeomorphism or its isotopy class.
The paper \cite{UO:Autos} gives a classification up to isotopy of
automorphisms (self-homeo\-morph\-isms) of 3-di\-men\-sion\-al
handlebodies and compression bodies, analogous to the Nielsen-Thurston
classification of automorphisms of surfaces. Indecomposable
automorphisms analogous to pseudo-Anosov automorphisms were identified
and called {\it generic}. Generic automorphisms were studied further
in \cite{LNC:Generic} and \cite{LNC:Tightness}. Our understanding of
generic automorphisms remains quite limited.
The goal of this paper is to represent an isotopy class of an
automorphism of any compact, orientable 3-manifold $M$ by a
homeomorphism which is either periodic, has a suitable invariant
surface on which we can decompose the manifold and the automorphism,
or is dynamically nice in the sense that it has invariant laminations
which ``fill" the manifold. We do not quite achieve this goal. In
some cases we must represent an automorphism as a composition of two
automorphisms, each of which fits into the classification.
Throughout this paper $M$ will denote a compact, orientable
3-manifold, possibly with boundary.
If an automorphism $f\from M \to M$ is not periodic (up to
isotopy), then we search for a {\it reducing surface}, see Section
\ref{Reducing}. This is a suitable surface $F\embed M$ invariant
under $f$ up to isotopy. In many cases, we will have to decompose
$M$ and $f$ by cutting $M$ on the surface $F$. Cutting $M$ on $F$
we obtain a manifold $M|F$ ($M$-cut-on-$F$) and if we isotope $f$
such that it preserves $F$, we obtain an induced automorphism
$f|F$ ($f$-cut-on-$F$) of $M|F$.
If there are no reducing surfaces for $f$ (and $f$ is not periodic),
then the goal is to find a ``dynamically nice" representative in the
isotopy class of $f$. This representative will have associated
invariant laminations. Also, just as in the case of pseudo-Anosov
automorphisms, we will associate to the dynamically nice
representative a growth rate. A ``best" representative should, of
course, have minimal growth rate. In the case of automorphisms of
handlebodies it is still not known whether, in some sense, there is a
canonical representative.
The reducing surfaces used in the analysis of automorphisms of
handlebodies and compression bodies, see \cite{UO:Autos}, have a nice
property we call {\it rigidity}: A reducing surface $F$ for $f\from M
\to M$ is {\it rigid} if $f$ uniquely determines the isotopy class of
the induced automorphism $f|F$ on $M|F$. As we shall see, the
analysis of automorphisms of many 3-manifolds requires the use of
non-rigid reducing surfaces $F$. This means that there are many
choices for the induced isotopy class of automorphisms $f|F$, and it
becomes more difficult to identify a ``canonical" or ``best"
representative of $f|F$. For example, if $F$ cuts $M$ into two
components, then on each component of $M|F$, the induced automorphism
$f|F$ is not uniquely determined. If there are no reducing surfaces
for $f|F$ on each of these components, and they are not periodic, we
again seek a dynamically nice representative, and in particular, the
best representative should at least achieve the minimal growth rate of
all possible ``nice" representatives. But now we search for the best
representative not only from among representatives of the isotopy
class of $f|F$, but from among all representatives of all possible
isotopy classes of $f|F$. This means one may have to isotope $f$
using an isotopy not preserving $F$ to find a better representative of
$f|F$.
There is a subtlety in the definition of reducing surface. One
must actually deal with automorphisms of {\it pairs} or {\it Haken
pairs}, of the form $(M,V)$ where $M$ is irreducible and $V$ is an
incompressible surface in $\bdry M$. An automorphism $f:(M,V)\to
(M,V)$ of a Haken pair is an automorphism which restricts on $V$
to an automorphism of $V$. Even in the case of automorphisms of
handlebodies, Haken pairs play a role: The kind of Haken pair
which arises is called a compression body.
A {\it compression body} is a Haken pair $(Q,V)$ constructed from
a product $V\times [0,1]$ and a collection of balls by attaching
1-handles to $V\times 1$ on the product and to the boundaries of
the balls to obtain a 3-manifold $Q$. Letting $V=V\times 0$, we
obtain the pair $(Q,V)$. Here $V$ can be a surface with boundary.
The surface $V$ is called the {\it interior boundary} $\bdry_iQ$
of the compression body, while $W=\bdry Q-\intr(V)$ is called the
{\it exterior boundary}, $W=\bdry_eQ$. We regard a handlebody $H$
as a compression body whose interior boundary $V$ is empty. An
automorphism of a compression body is an automorphism of the pair
$(Q,V)$.
A reducing surface $F$ for $f:(M,V)\to (M,V)$ is always $f$-invariant
up to isotopy, but may lie entirely in $\bdry M$. If $F\subseteq\bdry
M$, $F$ is incompressible in $M$ and $f$-invariant up to isotopy, and
if $F\supset V$ up to isotopy, then $F$ is a reducing surface. In
other words, if it is possible to rechoose the $f$-invariant Haken
pair structure for $(M,V)$ such that $V$ is replaced by a larger $F$
to obtain $(M,F)$, then $f$ is reducible, and $F$ is regarded as a
reducing surface, and is called a {\it peripheral reducing surface}.
We shall give a more complete account of reducing surfaces in Section
\ref{Reducing}.
An automorphism is {\it (rigidly) reducible} if it has a (rigid)
reducing surface.
Returning to the special case of automorphisms of handlebodies, when
the reducing surface is not peripheral, then the handlebody can
actually be decomposed to yield an automorphism $f|F$ of $H|F$. When
the reducing surface is peripheral, and incompressible in the
handlebody, the reducing surface makes it possible to regard the
automorphism as an automorphism of a compression body whose underlying
topological space is the handlebody, and the automorphism is then
deemed ``reduced."
We continue this introduction with an overview of our current
understanding of automorphisms of handlebodies. Without
immediately giving precise definitions (see Sections
\ref{Reducing} and \ref{Laminations} for details), we state one of
the main theorems of \cite{UO:Autos}, which applies to
automorphisms of handlebodies.
\begin{thm}\label{HandlebodyClassificationThm}
Suppose $f\from H\to H$ is an automorphism of a connected
handlebody. Then the automorphism is:
1) rigidly reducible,
2) periodic, or
3) generic on the handlebody.
\end{thm}
A {\it generic automorphism} $f$ of a handlebody is defined as an
automorphism which is not periodic and does not have a reducing
surface (see Section \ref{Reducing} for the precise definition).
One can show that this amounts to requiring that $f|_{\bdry H}$ is
pseudo-Anosov and that there are no closed reducing surfaces.
There is a theorem similar to Theorem
\ref{HandlebodyClassificationThm}, classifying automorphisms of
compression bodies.
In the paper \cite{UO:Autos}, the first steps were taken towards
understanding generic automorphisms using invariant laminations of
dimensions 1 and 2. The theory has been extended and refined in
\cite{LNC:Generic} and \cite{LNC:Tightness}. We state here only the
theorem which applies to automorphisms of handlebodies; there is an
analogous theorem for arbitrary compression bodies, though in that
case no 1-dimensional invariant lamination was described. We shall
remedy that omission in this paper.
If $H$ is a handlebody, in the following theorem $H_0$ is a {\it
concentric} handlebody in $H$, meaning that $H_0$ is embedded in the interior of
$H$ such that $H-\intr(H_0)$ has the structure of a product $\bdry
H\times I$.
\begin{thm}\label{HandlebodyLaminationThm}
Suppose $f\from H\to H$ is a generic automorphism of a $3$-dimensional
handlebody. Then there is a 2-dimensional measured lamination
$\Lambda\embed \intr (H)$ with transverse measure $\mu$ such that, up
to isotopy, $f((\Lambda,\mu))=(\Lambda,\lambda \mu)$ for some
$\lambda> 1$. The lamination has the following properties:
1) Each leaf $\ell$ of $\Lambda$ is an open $2$-dimensional disk.
2) The lamination $\Lambda$ fills $H_0$, in the sense that the
components of $H_0-\Lambda$ are contractible.
3) For each leaf $\ell$ of $\Lambda$, $\ell-\intr (H_0)$ is
incompressible in $H-\intr(H_0)$.
4) $\Lambda\cup \bdry H$ is a closed subset of $H$.
There is also a 1-dimensional lamination $\Omega$, transverse to
$\Lambda$, with transverse measure $\nu$ and a map $\omega\from
\Omega\to \intr (H_0)$ such that
$f(\omega(\Omega,\nu))=\omega(\Omega,\nu/\lambda)$. The map $f$ is an
embedding on $f\inverse(N(\Lambda))$ for some neighborhood
$N(\Lambda)$. The statement that
$f(\omega(\Omega,\nu))=\omega(\Omega,\nu/\lambda)$ should be
interpreted to mean that there is an isomorphism $h\from
(\Omega,\nu)\to (\Omega,\nu/\lambda)$ such that
$f\circ\omega=\omega\circ h$.
\end{thm}
We note that the laminations in the above statement are ``essential"
only in a rather weak sense. For example, lifts of leaves of
$\Lambda$ to the universal cover of $H$ are not necessarily properly
embedded.  The map $\omega$ need not be proper either: If we regarded
$\omega\from \Omega\to H$ as a homotopy class, in some examples there would
be much unravelling. The lamination $\Omega$ can also be regarded as
a ``singular" embedded lamination, with finitely many singularities.
\begin{remark}
The laminations $\Lambda$ and $\Omega$ should be regarded as a
pair of dual laminations. Both are constructed from a chosen
``complete system of essential discs," $\cal E$, see Section
\ref{Laminations}. One attempts to find the most desirable pairs
$\Lambda$ and $\Omega$. For example, it is reasonable to require
laminations with minimal $\lambda$, but even with this requirement
it is not known whether the invariant laminations are unique in
any sense.
\end{remark}
One of the authors has shown in his dissertation,
\cite{LNC:Generic} and in the preprint \cite{LNC:Tightness}, that
it is often possible to replace the invariant laminations in the
above theorem by laminations satisfying a more refined condition.
The condition on the 2-dimensional lamination is called {\it
tightness} and is defined in terms of both invariant laminations.
He shows that invariant laminations having this property
necessarily achieve the minimum eigenvalue $\lambda$ (growth rate)
among all (suitably constructed) measured invariant laminations.
One remaining problem is to show that every generic automorphism
admits a tight invariant lamination; this is known in the case of
an automorphism of a genus 2 handlebody and under certain other
conditions. The growth rate $\lambda$ for a tight invariant
lamination associated to a generic automorphism $f$ is no larger
than the growth rate for the induced pseudo-Anosov automorphism on
$\bdry H$, see \cite{LNC:Generic}. In particular, this means that
for a genus 2 handlebody a growth rate can be achieved which is no
larger than the growth rate of the pseudo-Anosov on the boundary.
We hope that the tightness condition on invariant laminations will
be sufficient to yield some kind of uniqueness. The most
optimistic goal (perhaps too optimistic) is to show that a tight
invariant 2-dimensional lamination for a given generic
automorphism $f\from H\to H$ is unique up to isotopy.
\hop
In our broader scheme for classifying automorphisms of arbitrary
compact 3-manifolds, an important test case is to classify up to
isotopy automorphisms $f\from M\to M$, where $M$ is a connected sum of
$S^2\times S^1$'s. In calculations of the mapping class group, this
was also an important case, see \cite{FL:Spheres} and
\cite{FL:HomotopyIsotopy}. The new idea for dealing with this case is
that there is {\em always} a surface invariant up to isotopy, a
reducing surface which, as we shall see, is non-rigid. This invariant
surface is the Heegaard splitting surface coming from the Heegaard
splitting in which $M$ is expressed as the double of a handlebody. We
call this a {\it symmetric} Heegaard splitting. A remark by
F. Waldhausen at the end of his paper \cite{FW:Heegaard} outlines a
proof that every minimal genus Heegaard splitting of a connected sum
of $S^2\times S^1$'s is symmetric. We prove that there is only one
symmetric splitting up to isotopy, and we call this the {\it canonical
splitting} of $M$. We were surprised not to find this result in the
literature, and it is still possible that we have overlooked a
reference.
\begin{thm} \label{CanonicalHeegaardTheorem}
Any two symmetric Heegaard splittings of a connected sum of $S^2\times
S^1$'s are isotopic.
\end{thm}
Thus when an automorphism $f$ of $M$ is isotoped so that it preserves the canonical splitting surface $F$,
it induces automorphisms of $H_1$ and $H_2$, which are mirror images
of each other. However, there is not a unique way of isotoping an
automorphism $f$ of $M$ to preserve $F$, see
\cite{FL:ConnectedS2TimesS1} and Example \ref{NonUniqueExample}. Thus
the canonical splitting of a connected sum of $S^2\times S^1$'s gives
a class of examples of non-rigid reducing surfaces.
In the important case where $M$ is a connected sum of $k$ copies
of $S^2\times S^1$, if $f:M\to M$ is an automorphism, we next
consider the best choice of representative of $f|F$, which is the
induced automorphism on $M|F$, a pair of genus $k$ handlebodies.
From the symmetry of the canonical splitting, the induced
automorphisms of the two handlebodies are conjugate, and we denote
each of these by $g$.
To state our theorem about automorphisms of connected sums of
$S^2\times S^1$'s, we first mention that given a generic automorphism
$g$ of a handlebody $H$ and an invariant 2-dimensional lamination as
in Theorem \ref{HandlebodyLaminationThm}, the lamination gives a
quotient map $H\to G$ where $G$ is a graph, together with a homotopy
equivalence of $G$ induced by $g$. Typically, the 2-dimensional
lamination {\em cannot} be chosen such that the induced homotopy
equivalence on the quotient graph is a train track map in the sense of
\cite{BH:Tracks}. In the exceptional cases when there exists a
2-dimensional invariant lamination so the quotient homotopy
equivalence is a train track map, we say $g$ is a {\it train track
generic automorphism}.
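As an aside (our formulation, following the standard conventions of \cite{BH:Tracks} rather than anything specific to this paper): when $g$ is train track generic, the growth rate can be read off the quotient graph.  If $G$ has edges $e_1,\ldots,e_m$ and the induced homotopy equivalence $g_*\from G\to G$ is a train track map, then its transition matrix and growth rate are

```latex
\[
T_{ij} \;=\; \#\{\text{times } g_*(e_j) \text{ crosses } e_i\},
\qquad
\lambda \;=\; \text{spectral radius of } T,
\]
```

and when $T$ is irreducible this spectral radius is its Perron--Frobenius eigenvalue, so edge lengths grow like $\lambda^n$ under iteration.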
\begin{thm}[Train Track Theorem]\label{TrainTrackTheorem}
If $M$ is a connected sum of $S^2\times S^1$'s, $M=H_1\cup H_2$ is the
canonical splitting, and $f\from M\to M$ is an automorphism, then $f$
is non-rigidly reducible on the canonical splitting surface, and can
be isotoped so that $f$ preserves the splitting and the induced
automorphism $g$ on $H_1$ is
1) periodic,
2) rigidly reducible, or
3) train track generic.
\end{thm}
In the above, not so much information is lost in passing to the
induced automorphism on $\pi_1(H_1)$, which is not surprising in view
of Laudenbach's calculation of the mapping class group of $M$.
F. Laudenbach showed in \cite{FL:Spheres} that the map from the
mapping class group of $M$ to $\text{Out}(\pi_1(M))$ has kernel
$\integers_2^n$ generated by rotations in a maximal collection of $n$
disjoint non-separating 2-spheres in $M$.
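In other words (our summary of Laudenbach's result, writing $\pi_1(M)\cong F_n$ for the free group of rank $n$ and $\mathcal{M}(M)$ for the mapping class group of $M$), there is a short exact sequence

```latex
\[
1 \longrightarrow \integers_2^{\,n} \longrightarrow \mathcal{M}(M)
\longrightarrow \text{Out}(F_n) \longrightarrow 1,
\]
```

with the kernel $\integers_2^{\,n}$ generated by the sphere rotations described above.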
Note that when the automorphism $g$ is generic, we have not shown that
there is a unique pair of invariant laminations such that the quotient
is a train track map.
For a typical generic automorphism, any invariant 1-dimensional
lamination has much self-linking and is somewhat pathological. Even
for a train-track generic map, some of the self-linking and pathology
may remain. We say a 1-dimensional lamination $\Omega\embed M$ is
{\it tame} if any point of $\Omega$ has a flat chart neighborhood in
$M$.
\begin{conj}[Tameness Conjecture]\label{TamenessConjecture}
If $M$ is a connected sum of $S^2\times S^1$'s, $M=H_1\cup H_2$ is the
canonical splitting, and $f\from M\to M$ is a train-track generic
automorphism, then $f$ can be isotoped so that $f$ preserves the
splitting and the induced automorphism $g$ on $H_1$ is train-track
generic and has a tame 1-dimensional invariant lamination.
\end{conj}
To proceed, we must say something about automorphisms of
compression bodies. Generic automorphisms of compression bodies
are defined like generic automorphisms of handlebodies; in
particular, if $f\from Q\to Q$ is an automorphism, the induced
automorphism $\bdry_ef$ on $\bdry_eQ$ is pseudo-Anosov. Also,
there should be no (suitably defined) reducing surfaces (see
Section \ref{Reducing}).
We will adopt some further terminology for convenience. A connected
sum of $S^2\times S^1$'s is a {\it sphere body}. Removing open
ball(s) from a manifold is called {\it holing} and the result is a
{\it holed manifold}, or a manifold with sphere boundary components.
Thus holing a sphere body yields a {\it holed sphere body}. A {\it
mixed body} is a connected sum of $S^2\times S^1$'s and handlebodies.
If we accept a ball as a special case of a handlebody, then a holed
mixed body is actually also a mixed body, since it can be obtained by
forming connected sums with balls.  Nevertheless, we distinguish a
{\it holed mixed body}, which has sphere boundary components, from a
mixed body, which does not.
It is often convenient to cap sphere boundary components of a
holed manifold with balls, distinguishing the caps from the
remainder of the manifold. Thus given a manifold $M$ with sphere
boundary components, we cap with balls, whose union we denote
$\mathcal{D}$, to obtain a {\it spotted manifold} (with
3-dimensional balls). For classifying automorphisms, spotted
manifolds are just as good as holed manifolds.
Suppose now that $M$ is a mixed body. Again, there is a canonical
Heegaard splitting $M=H\cup Q$, where $H$ is a handlebody and $Q$ is a
compression body, see Theorem \ref{CanonicalHeegaardTheorem2}.
Whereas in the decomposition of an irreducible manifold with
compressible boundary the Bonahon characteristic compression body has
exterior boundary in the boundary of the manifold, our compression
body $Q$ here has interior boundary $\bdry_iQ=\bdry M$. Again, this
splitting lacks rigidity for the same reasons.
In Section \ref{MixedBodies}, we undertake the study of automorphisms
of mixed bodies. We prove a theorem analogous to Theorem
\ref{TrainTrackTheorem} (the Train Track Theorem), see Theorem
\ref{CompressionTrainTrackTheorem}.
In a classification of automorphisms of compact 3-manifolds, the
most difficult case is the case of an arbitrary reducible
3-manifold. We will assume the manifold has no sphere boundary
components; if the manifold has boundary spheres, we cap them with
balls. Let $B$ be the 3-ball. Suppose that a (possibly holed)
mixed body $R\neq B$ is embedded in $M$. We say that $R$ is {\em
essential} if any essential sphere in $R$ or sphere component of
$\bdry R$ is also essential in $M$. An automorphism $f$ of $M$
which preserves an essential mixed body $R$ up to isotopy and
which is periodic on $M-\intr(R)$ is called an {\it adjusting
automorphism}. The surface $\bdry R$ is $f$-invariant up to
isotopy, but we do not regard it as a reducing surface, see
Section \ref{Reducing}.
Note that an essential mixed body $R$ in an irreducible manifold
is a handlebody. Since an adjusting automorphism preserving $R$
is isotopic to a periodic map on $\bdry R$, it is then also
isotopic to a periodic map on $R$, so such an adjusting
automorphism is not especially useful.
For the classification of automorphisms of reducible compact
3-manifolds, we will use the following result, which is based on a
result due to E. C{\'e}sar de S{\'a} \cite{EC:Automorphisms}, see also
M. Scharlemann in Appendix A of \cite{FB:CompressionBody}, and
D. McCullough in \cite{DM:MappingSurvey} (p. 69).
\begin{thm}\label{AdjustingTheorem}
Suppose $M$ has irreducible summands $M_i$, $i=1,\ldots k$, and
suppose $\hat M_i$ is $M_i$ with one hole, and one boundary sphere
$S_i$. Suppose the $\hat M_i$ are disjoint submanifolds of $M$,
$i=1,\ldots k$. Then the closure of $M-\cup_i \hat M_i$ is an
essential holed sphere body $\hat M_0$. ($\hat M_0$ is a holed
connected sum of $S^2\times S^1$'s or possibly a holed sphere.) If
$f\from M\to M$ is an automorphism, then there is an adjusting
automorphism $\hslash$ preserving an essential (possibly holed)
mixed body $R\supset \hat M_0$ in $M$ with the property that
$g=\hslash\circ f$ is rigidly reducible on the spheres of $\bdry
\hat M_0$.
The automorphism $g$ is determined by $f$ only up to application of an
adjusting automorphism preserving $\bdry \hat M_0$.
Note that we can write $f$ as a composition $f=\hslash\inverse\circ
g=h\circ g$, where $h=\hslash\inverse$ is an adjusting automorphism
and $g$ preserves each $\hat M_i$.
The automorphism $\hat g_i=g|_{\hat M_i}$ clearly uniquely determines
an automorphism $g_i$ of $M_i$ by capping (for $i=0$, $M_0$ is $\hat
M_0$ capped). On the other hand $g_i$ determines $\hat g_i$ only up to
application of an adjusting automorphism in $\hat M_i$.
\end{thm}
The main ingredients used in this result are {\em slide automorphisms
of $M$}. There are two types, one of which is as defined in
\cite{DM:MappingSurvey}. We shall describe them briefly in Section
\ref{Adjusting}.
With a little work, see Section \ref{Adjusting}, we can reformulate
Theorem \ref{AdjustingTheorem} in terms of 4-dimensional compression
bodies as follows. Let $Q$ be a 4-dimensional compression body
constructed from a disjoint union of products $M_i\times I$ and a
4-dimensional ball $K_0$ by attaching one 1-handle joining $\bdry
K_0=M_0$ to $M_i\times 1$ and one 1-handle from $\bdry K_0$ to itself
for every $S^2\times S^1$ summand of $M$. Then $\bdry_eQ$ is
homeomorphic with and can be identified with $M$, while $\bdry_iQ$ is
homeomorphic to and can be identified with the disjoint union of
$M_i$'s, $i\ge 1$.
\begin{corollary} \label{AdjustingCorollary} With the
hypotheses of Theorem \ref{AdjustingTheorem} and with the
4-dimensional compression body $Q$ constructed as above, there is
an automorphism $\bar f:Q\to Q$ of the compression body such that
$\bar f|_{\bdry_eQ}=f$. Also, there is an automorphism $\bar
\hslash:Q\to Q$ of the compression body such that $\bar
\hslash|_{\bdry_eQ}=\hslash$, $\bar
\hslash|_{\bdry_iQ}=\text{id}$. Also $\bar\hslash\circ \bar f$
gives the adjusted automorphisms on $\bdry_iQ$, which is the union
of $M_i$'s, $i\ge 1$.
In other words, the automorphism $f:M\to M$ is cobordant over $Q$ to
an automorphism of the disjoint union of the $M_i$'s via an automorphism
$\bar \hslash$ of $Q$ rel $\bdry_iQ$.
\end{corollary}
We will refer to the automorphism $\bar \hslash$ as an {\it adjusting automorphism of the compression body $Q$}.
Before turning to the classification of automorphisms, we mention
another necessary and important ingredient.
\begin{thm} \label{BonahonTheorem} (F. Bonahon's characteristic compression body) Suppose $M$
is a compact manifold with boundary. Then there is a compression body
$Q\embed M$ which is unique up to isotopy with the property that
$\bdry_eQ=\bdry M$ and $\bdry_iQ$ is incompressible in
$\overline{M-Q}$.
\end{thm}
Bonahon's theorem gives a canonical decomposition of any compact
irreducible 3-manifold with boundary, dividing it into a compression
body and a compact irreducible manifold with incompressible boundary.
The interior boundary of the characteristic compression body is a
rigid reducing surface.
Finally, in the classification, we often encounter automorphisms of
{\it holed manifolds}. That is to say, we obtain an automorphism
$f:\hat M\to \hat M$ where $\hat M$ is obtained from $M$ by removing a
finite number of open balls. We have already observed that an
automorphism of a holed manifold $\hat M$ is essentially the same as
an automorphism of the manifold pair $(M,\mathcal{D})$ where $\mathcal{D}$
is the union of closures of the removed open balls,
i.e. $\mathcal{D}=\overline{M-\hat M}$. An automorphism of a spotted
manifold $(M,\mathcal{D})$ or holed manifold $\hat M$ determines an
automorphism of the manifold $M$ obtained by forgetting the spots or
capping the boundary spheres of $\hat M$. Clearly, there is a loss of
information in passing from the automorphism $\hat f$ of the spotted
or holed manifold to the corresponding automorphism $f$ of the
unspotted or unholed manifold. The relationship is that $\hat f$
defined on the spotted manifold is isotopic to $f$ if we simply regard
$\hat f$ as an automorphism of $M$. To obtain $\hat f$ from $f$, we
compose with an isotopy $h$ designed to ensure that $h\circ f$
preserves the spots. There are many choices for $h$, and two such
choices $h_1$ and $h_2$ differ by an automorphism of $(M,\mathcal{D})$ which can
be realized as an isotopy of $M$.  This means $h_1\circ h_2\inverse$
is an automorphism of $(M,\mathcal{D})$ realized by an isotopy of $M$.  In
describing the classification, we will not comment further on the
difference between automorphisms of manifolds and automorphisms of
their holed counterparts. Of course any manifold with sphere boundary
components can be regarded as a holed manifold. Much more can be said
about automorphisms of spotted manifolds, but we will leave this topic
to another paper.
\subsection*{Automorphisms of Irreducible and $\partial$-irreducible Manifolds}
The following outlines the classification of automorphisms of
irreducible and $\partial$-irre\-ducible manifolds. If the
characteristic manifold (Jaco-Shalen, Johannson) is non-empty or
the manifold is Haken, we use \cite{WJPS:Characteristic},
\cite{KJ:Characteristic}. For example, if the characteristic
manifold is non-empty and not all of $M$, then the boundary of the
characteristic manifold is a rigid incompressible reducing
surface. There are finitely many isotopy classes of automorphisms
of the complementary ``simple'' pieces \cite{KJ:Characteristic}.
For automorphisms of Seifert-fibered pieces we refer to
\cite{PS:Geometries,JWW:Homeomorphisms}. The idea is that in most
cases an automorphism of such a manifold preserves a Seifert
fibering, so that the standard projection yields an
automorphism of the base orbifold. The article
\cite{JWW:Homeomorphisms} gives a classification of automorphisms
of orbifolds similar to Nielsen-Thurston's classification of
automorphisms of surfaces. It is then used for a purpose quite
distinct from ours: their goal is to realize the Nielsen number in
the isotopy class of a $3$-manifold automorphism. Still, their
classification of orbifold automorphisms is helpful in our
setting. The idea is that invariant objects identified by the
classification in the base orbifold lift to invariant objects in
the original Seifert fibered manifold.
In the following we let $T^n$ be the $n$-torus, $D^2$ the
$2$-disc, $I$ the interval ($1$-disc), $\mathbb{P}(2,2)$ the
orbifold with the projective plane as underlying space and two
cone points of order $2$, and $M_{\mathbb{P}(2,2)}$ the Seifert
fibered space over $\mathbb{P}(2,2)$.
\begin{thm}[\cite{JWW:Homeomorphisms}]\label{T:PreservedFibering}
Suppose that $M$ is a compact orientable Seifert fibered space
which is not $T^3$, $M_{\mathbb{P}(2,2)}$, $D^2\times S^1$ or
$T^2\times I$. Given $f\colon M\to M$ there exists a Seifert
fibration which is preserved by $f$ up to isotopy.
\end{thm}
We discuss the classification of automorphisms of these four
exceptional cases later. In the general case we consider the
projection $M\to X$ over the fiber space $X$, which is an
orbifold, and let the surface $Y$ be the underlying space of $X$.
If $f\colon M\to M$ preserves the corresponding fiber structure we
let $\hat f\colon X\to X$ be the projected ``automorphism'' of
$X$. An {\em automorphism of $X$} is an automorphism of $Y$
preserving the orbifold structure. Let $\sing(X)$ be the set of
singular points of $X$. Note that we are assuming that $M$ is
orientable, so there are no reflector lines in $\sing(X)$,
which then consists of isolated cone points. Moreover $\hat f$ has
to map a cone point to another of the same order. An isotopy of
$\hat f$ is an isotopy restricted to $X-\sing(X)$.
The classification of automorphisms of $X$ is divided into three
cases, depending on the sign of $\chi(X)$ (as an orbifold Euler
characteristic). Below we essentially follow
\cite{JWW:Homeomorphisms}.
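Recall the standard formula (not from this paper, but useful for the trichotomy): since $M$ is orientable, $\sing(X)$ consists of cone points, say of orders $n_1,\ldots,n_m$, and the orbifold Euler characteristic is

```latex
\[
\chi(X) \;=\; \chi(Y) \;-\; \sum_{i=1}^{m}\Bigl(1-\frac{1}{n_i}\Bigr).
\]
```

For example, $\chi(S^2(2,2,2,2)) = 2 - 4\cdot\frac12 = 0$, placing that orbifold in the second case below.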
\begin{enumerate}
\item $\chi(X)>0$.
In this case, unless $\sing(X)=\emptyset$,
the underlying space $Y$ is either $S^2$ or $\mathbb{P}^2$ and $X$ has a few singular points.
Any map is isotopic to a periodic one.
\item $\chi(X)=0$.
For most orbifolds $X$ any automorphism is isotopic to a periodic one.
The only two exceptions are $T^2$ (without singularities) and $S^2(2,2,2,2)$, the sphere with
four cone points of order 2.
If $X=T^2$ then an automorphism $\hat f\colon X\to X$
is isotopic to either i) a periodic one, ii) reducible, preserving a curve or
iii) an Anosov map. In ii) the lift of an invariant curve in $X$
yields a torus reducing surface for $f$ in $M$ (if the lift of the curve is a 1-sided Klein
bottle we consider the torus boundary of its regular neighborhood). Such a torus may separate
a $K\widetilde{\times} I$ piece, which is the orientable $I$-bundle over the Klein bottle $K$.
An automorphism of such a piece is isotopic to a periodic map.
In iii) the invariant foliations lift to foliations on $M$ invariant
under $f$. The leaves are either open annuli or open M\"obius bands.
If $X=S^2(2,2,2,2)$ then $X'=X-\sing(X)$ is a four times
punctured sphere. Since $\chi(X')<0$ the automorphism $\hat f|_{X'}$ is subject to
Nielsen-Thurston's classification, which isotopes it to
be either i) periodic, ii) reducible or iii) pseudo-Anosov. As
above, in ii) a reduction of $\hat f$ yields a reduction of $f$ along
tori, maybe separating $K\widetilde{\times} I$ pieces. In iii)
we consider the foliations on $X$ invariant under $\hat f$.
They may have 1-prong singularities in $\sing(X)$. We can assume that there are
no other singularities since $\chi(X)=0$, with $X$ covered by
$T^2$. But all cone points have order 2, therefore the lifts of the foliations
to $M$ are non-singular foliations invariant under $f$.
\item $\chi(X)<0$. Also in this case $X'=X-\sing(X)$ is a surface
with $\chi(X')<0$, therefore $\hat f|_{X'}$ is subject to
Nielsen-Thurston's classification. A reducing curve system for
$\hat f|_{X'}$ is a reducing curve system for $\hat
f$. As before, this yields a reducing surface for $f$ consisting of
tori. Some may be discarded when there are parallel copies.
In the pseudo-Anosov case we consider the corresponding
invariant laminations on $X'$, which lift to invariant laminations on
$M$ by vertical open annuli or M\"obius bands.
\end{enumerate}
In the above we did not say anything about $f\colon M\to M$ when
$\hat f\colon X\to X$ is periodic. There is not much hope in
trying to isotope $f$ to be periodic, since there are
homeomorphisms of $M$ preserving each fiber but not periodic (e.g.,
``vertical'' Dehn twists along vertical tori). What can be done is
to write $f$ as a composition $f=g\circ h$ where $h$ is periodic
and $g$ preserves each fiber.
The outline above deals with Seifert fibered spaces $M$ for which
there is a fibering structure preserved by $f\colon M\to M$.
Recall that this excludes $T^3$, $M_{\mathbb{P}(2,2)}$, $D^2\times
S^1$ and $T^2\times I$ (Theorem \ref{T:PreservedFibering}), which we consider
now. If $M=D^2\times S^1$ then $f$ is isotopic to a power of a
Dehn twist along the meridional disc. If $M=T^2\times I$ then $f$
is isotopic to a product $g\times(\pm\id_I)$, where $\id_I$ is the
identity and $-\id_I$ is the orientation reversing isometry of
$I$. We may assume that $g\colon T^2\to T^2$ is either periodic,
reducible (along a curve) or Anosov.
In the case $M=T^3$ any automorphism is isotopic to a {\em linear
automorphism} (i.e., an automorphism with a lift to
$\widetilde{M}=\mathbb{R}^3$ which is linear). The study of
eigenspaces of such an automorphism yields invariant objects.
Roughly, such an automorphism may be periodic, reducible
(preserving an embedded torus) or may have invariant 2-dimensional
foliations, whose leaves may consist of dense open annuli or
planes. In some cases it may also be relevant to further consider
invariant 1-dimensional foliations.
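To illustrate (our example, with the matrix chosen only for concreteness), consider the linear automorphism of $T^3=\mathbb{R}^3/\integers^3$ induced by

```latex
\[
A=\begin{pmatrix} 2 & 1 & 0\\ 1 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}
\in GL(3,\integers),
\qquad
\text{eigenvalues } \tfrac{3+\sqrt{5}}{2},\ \tfrac{3-\sqrt{5}}{2},\ 1.
\]
```

The plane spanned by the expanding eigenvector and $e_3$ is $A$-invariant and descends to a 2-dimensional foliation of $T^3$ preserved by the automorphism; since the expanding eigenvector has irrational slope, each leaf is a dense open annulus, matching the description above.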
In the case $M=M_{\mathbb{P}(2,2)}$ there is a cell decomposition
which is preserved by any automorphism up to isotopy. It thus
follows that any automorphism is isotopic to a periodic one.
This last case completes the outline of the classification for
Seifert fibered spaces.
If the irreducible and $\bdry$-irreducible manifold $M$ is closed
and hyperbolic, it is known that any automorphism is isotopic to a
periodic one, see \cite{WPT:Notes}. If the manifold is closed and
spherical, recent results of D. McCullough imply that any automorphism
is isotopic to a periodic one, see \cite{DM:Elliptic}.
\subsection*{The Classification} The following outline
classifies automorphisms of 3-manifolds in terms of automorphisms of
compression bodies, automorphisms of mixed bodies, and automorphisms
of irreducible, $\bdry$-irreducible 3-manifolds. (Here we regard
handlebodies as compression bodies.) In every case, we consider an
automorphism $f\from M\to M$.
i) If $M$ is reducible, use Theorem \ref{AdjustingTheorem} and write
$f=h\circ g$, where $g$ is rigidly reducible on essential spheres, and
$h$ is non-rigidly reducible. The reductions yield automorphisms of
holed irreducible 3-manifolds (the irreducible summands of $M$ with
one hole each), and of (holed) mixed bodies, which we consider in the
following cases. The automorphism $h$ also yields, after cutting on
$\bdry R$ of Theorem \ref{AdjustingTheorem}, periodic automorphisms
of (holed) irreducible manifolds with incompressible boundary.
Alternatively, using Corollary \ref{AdjustingCorollary}, the
automorphism $f:M\to M$ is cobordant over a 4-dimensional compression
body to an ``adjusted'' automorphism $f_a$ of the disjoint union of the
irreducible summands $M_i$.
ii) If $M$ is irreducible, with $\bdry M\ne \emptyset$, let $Q$ be the
characteristic compression body of Theorem \ref{BonahonTheorem}; then
$f$ induces unique automorphisms on $Q$ and $\overline{M-Q}$, see
\cite{FB:CompressionBody} and \cite{UO:Autos}. $Q$ is a compression
body and $\overline{M-Q}$ is irreducible and $\bdry$-irreducible,
whose automorphisms we consider in the following cases.
iii) If $M$ is a holed sphere body (a connected sum of $S^2\times
S^1$'s with open balls removed), as is the case when the automorphism
is obtained by decomposition from an adjusting automorphism for a
different automorphism of a different 3-manifold, then we first cap
the boundary spheres to obtain an induced automorphism of a sphere
body. Then we use the Train Track Theorem (Theorem
\ref{TrainTrackTheorem} and Theorem
\ref{CompressionTrainTrackTheorem}) to obtain a particularly nice
representative of $f$. If the Tameness Conjecture, Conjecture
\ref{TamenessConjecture}, is true, then one can find an even nicer
representative.
iv) If $M$ is a (holed) mixed body, as is the case when the
automorphism is an adjusting automorphism for a different
automorphism of a different 3-manifold, then we use the canonical
splitting surface (Theorem \ref{CanonicalHeegaardTheorem2}) as a
non-rigid reducing surface for $f$ which cuts $M$ into a
handlebody $H$ and a compression body $Q$. In the handlebody,
there is another non-rigid reducing surface (see Section
\ref{MixedBodies}), which makes it possible to cut the handlebody
$H$ into a compression body $Q'$ isomorphic to $Q$, and a possibly
disconnected handlebody $H'$, with a component of genus $g$
corresponding to each genus $g$ component of $\bdry M$. Theorem
\ref{CompressionTrainTrackTheorem} gives a best representative,
yielding the simplest invariant laminations for $f$ restricted to
$Q$, $Q'$, and $H'$. The restrictions of $f$ to $Q$ and $Q'$ are
then automorphisms of compression bodies periodic on the interior
boundary.
v) If $M$ is a compression body, the automorphisms are classified by
Theorem \ref{HandlebodyClassificationThm} and its analogue, see
\cite{UO:Autos}, which says the automorphism is periodic, rigidly
reducible or generic. See Section \ref{Laminations} for a description
of generic automorphisms.
vi) If $M$ is irreducible and $\bdry$-irreducible, then the
automorphism $f$ is classified following the previous subsection.
\centerline {---}
\hop
If it were true that every automorphism of a reducible manifold had a
closed reducing surface, possibly a collection of essential spheres,
then decomposition on the reducing surface would make it possible to
analyze the automorphism of the resulting manifold-with-boundary in a
straightforward way, without the necessity of first writing the
automorphism as a composition of an adjusting automorphism and an
automorphism preserving a system of reducing spheres. This would
yield a dramatic improvement of our classification. For this reason,
we pose the following:
\begin{question}\label{ReducingQuestion} Is it true that every automorphism of a reducible manifold admits a closed reducing
surface, which cannot be isotoped to be disjoint from a system of essential spheres cutting $M$ into holed irreducible
manifolds? In particular, does every automorphism of a closed reducible 3-manifold admit a closed reducing surface of this
kind?
\end{question}
To understand the significance of this question, one needs a more
detailed understanding of reducing surfaces. Assuming that an
automorphism $f:M\to M$ has no reducing surfaces consisting of an
$f$-invariant collection of essential spheres, a closed reducing
surface $F$ is required to have the property that $M|F$ is
irreducible, see Section \ref{Reducing}.
We briefly describe the much improved classification of automorphisms
which would be possible if the answer to Question
\ref{ReducingQuestion} were positive. If $f:M\to M$ is an
automorphism and $M$ is reducible, we first seek reducing surfaces
consisting of essential spheres. If these exist, we decompose and cap
the spheres. If there are no sphere reducing surfaces, we seek closed
reducing surfaces. Decomposing on such a reducing surface gives an
automorphism of an irreducible manifold with non-empty boundary, where
the irreducibility of the decomposed manifold comes from the
requirement that cutting on a reducing surface should yield an
irreducible manifold. For the resulting irreducible manifold with
boundary, we decompose on the interior boundary of the characteristic
compression body as before, obtaining an automorphism of a manifold
with incompressible boundary and an automorphism of a compression
body. (Either one could be empty, and the compression body could have
handlebody component(s).) Automorphisms of compression bodies are
analyzed using the existing (incomplete) theory, the automorphism of
the irreducible, $\bdry$-irreducible 3-manifold is analyzed as in the
classification given above, using mostly well-established 3-manifold
theory.
\section{Canonical Heegaard splittings}\label{Splitting}
Given a handlebody $H$ of genus $g$ we can double the handlebody to
obtain a connected sum $M$ of $g$ copies of $S^2\times S^1$, a genus
$g$ sphere body. We choose $g$ non-separating discs in $H$ whose
doubles give $g$ non-separating spheres in $M$. We then say that the
Heegaard splitting $M=H\cup (M- \intr(H))$ is a {\it symmetric
Heegaard splitting}. The symmetric splitting is characterized by the
property that any curve in $\bdry H$ which bounds a disc in $H$ also
bounds a disc in $M- \intr(H)$.
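In symbols, the construction just described reads as follows (writing
$D(H)$ for the double of $H$ along its boundary):
\[
M=D(H)=H\cup_{\id_{\bdry H}}H
\;\cong\;\underbrace{(S^2\times S^1)\,\#\cdots\#\,(S^2\times S^1)}_{g\
\text{summands}},
\]
with each chosen non-separating disc $D_i\subseteq H$ doubling to a
non-separating sphere $D_i\cup D_i$ in $M$.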
The following is our interpretation of a comment at the end of a paper
of F. Waldhausen, see \cite{FW:Heegaard}.
\begin{lemma} (F. Waldhausen)
Any minimal genus Heegaard splitting of a sphere body is symmetric.
\end{lemma}
We restate a theorem which we will prove in this section:
\begin{restate} [Theorem \ref{CanonicalHeegaardTheorem}]
Any two symmetric Heegaard splittings of a sphere body are isotopic.
\end{restate}
One would expect, from the fundamental nature of the result, that this
theorem would have been proved much earlier. A proof may exist in the
literature, but we have not found it.
In the first part of this section, $M$ will always be a genus $g$
sphere body. An {\it essential system of spheres} is the isotopy
class of a system of disjointly embedded spheres with the property
that none of the spheres bounds a ball. The system is {\it
primitive} if no two spheres of the system are isotopic. A system
$\cal S$ is said to be {\it complete} if cutting $M$ on $\cal S$
yields a collection of holed balls. There is a cell complex of
weighted primitive systems of essential spheres in $M$ constructed as
follows. For any primitive essential system $\cal S$ in $M$, the
space contains a simplex corresponding to projective classes of
systems of non-negative weights on the spheres. Two simplices
corresponding to primitive essential systems ${\cal S}_1$ and ${\cal
S}_2$ are identified on the face corresponding to the maximal
primitive essential system ${\cal S}_3$ of spheres common to ${\cal
S}_1$ and ${\cal S}_2$. We call the resulting space the {\it sphere
space} of $M$. In the sphere space, the union of open cells
corresponding to complete primitive systems is called the {\it
complete sphere space} of $M$.
\begin{lemma}\label{SphereSpaceLemma}
The complete sphere space of a sphere body $M$ is connected. In other
words, it is possible to pass from any primitive complete essential
system to any other by moves which replace a primitive complete
essential system by a primitive complete subsystem or a primitive
complete essential supersystem.
\end{lemma}
Note: A. Hatcher proved in \cite{AH:HomologicalStability} that the
sphere space is contractible. It may be possible to adapt the
following proof to show that the complete sphere space is contractible.
\begin{proof}
Suppose ${\cal S}={\cal S}_0$ and ${\cal S}'$ are two primitive
complete systems of essential spheres. We will abuse notation by
referring to the union of spheres in a system by the same name as the
system. Without loss of generality, we may assume ${\cal S}$ and
${\cal S}'$ are transverse.
Consider ${\cal S}\cap{\cal S}'$ and choose a curve of intersection
innermost in ${\cal S}'$. Remove the curve of intersection by isotopy
of ${\cal S}$ if possible. Otherwise, surger $\cal S$ on a disc
bounded in ${\cal S}'$ by the curve to obtain ${\cal S}_2$. Now
${\cal S}_2$ is obtained from ${\cal S}={\cal S}_0$ by removing one
sphere, the surgered sphere $S_0$ of ${\cal S}_0$, say, and adding two
spheres, $S_{21}$ and $S_{22}$, which result from the surgery. We can
do this in two steps, first adding the two new spheres to obtain
${\cal S}_1$, then removing the surgered sphere to obtain ${\cal
S}_2$. If one of the spheres $S_{21}$ and $S_{22}$ is inessential,
this shows that the curve of intersection between ${\cal S}={\cal
S}_0$ and ${\cal S}'$ could have been removed by isotopy. We note
that $S_{21}$ or $S_{22}$, or both, might be isotopic to spheres of
${\cal S}_0$. If the new spheres are essential, we claim that ${\cal
S}_1$ and ${\cal S}_2$ are complete systems of essential spheres. It
is clear that ${\cal S}_1$ is complete, since it is obtained by adding
the two essential spheres $S_{21}$ and $S_{22}$ to ${\cal S}_0$. We then
remove $S_0$ from ${\cal S}_1$ to obtain ${\cal S}_2$. There is, of
course, a danger that removing an essential sphere from an essential
system ${\cal S}_1$ yields an incomplete system. However, if we
examine the effect of surgery on the complementary holed balls, we see
that first we cut one of them on a properly embedded disc (the surgery
disc), which clearly yields holed balls. Then we attach a 2-handle to
the boundary of another (or the same) holed ball, which again yields a
holed ball. Thus, ${\cal S}_2$ is complete. If ${\cal S}_2$ is not
primitive, i.e. contains two isotopic spheres, we can remove one,
until we are left with a primitive complete essential system.
Inductively, removing curves of intersection of ${\cal S}_{k}$ with
${\cal S}'$ either by isotopy or by surgery on an innermost disc in
${\cal S}'$ as above, we must obtain a sequence ending with ${\cal
S}_{n-2}$ disjoint from ${\cal S}'$. Next, we add the spheres of
${\cal S}'$ not in ${\cal S}_{n-2}$ to ${\cal S}_{n-2}$ to obtain
${\cal S}_{n-1}$, which is still complete. Finally we remove all
spheres from ${\cal S}_{n-1}$ except those of ${\cal S}'$ to get
${\cal S}_{n}={\cal S}'$.
\end{proof}
A {\it ball with $r$ spots}, or an {\it $r$-spotted ball}, is a pair
$(K,E)$ where $K$ is a 3-ball and $E$ consists of $r$ discs in
$\bdry K$. Let $P$ be a holed ball with, say, $r\ge 1$ boundary
components. Then we say $(K,E)\embed (P,\bdry P)$ is {\it standard} if
the closure of $P-K$ in $P$ is another $r$-spotted ball. In that case,
the complementary spotted ball is also standard. In practical terms,
a standard $r$-spotted ball is ``unknotted.'' Another point of view
is the following: If $(K,E)\embed (P,\bdry P)$ is standard, then
$(P,\bdry P)=(K,E)\cup (K',E')$, where $(K',E')$ is complementary to
$(K,E)$, is a ``symmetric'' splitting of $(P,\bdry P)$, similar to a
symmetric splitting of a sphere body.
\begin{lemma} \label{SpottedBallStandardLemma}
(a) A standard embedded $r$-spotted ball $(K,E)$ in an $r$-holed
3-sphere $(P,\bdry P)$ ($r\ge 1$) is unique up to isotopy of pairs.
(b) Suppose $(K,E)$ and $(K',E')$ are two different standard spotted
balls in $(P,\bdry P)$ and suppose $\hat E\subseteq E$ is a union of
spots in $E$, which coincides with a union of spots in $E'$. Then
there is an isotopy rel $\hat E$ of $(K,E)$ to $(K',E')$.
\end{lemma}
\begin{proof}
(a) We will use the following result. (It is stated, for example, in
\cite{DM:MappingSurvey}.) The (mapping class) group of isotopy
classes of automorphisms of $(P,\bdry P)$ which map each component of
$\bdry P$ to itself is trivial. It is then also easy to verify that
the mapping class group of $(P,\bdry P)$ rel $\bdry P$ is trivial.
Suppose now that $(K,E)$ and $(K',E')$ are two different standard
spotted balls in $(P,\bdry P)$. We will construct an automorphism $f$
of $(P,\bdry P)$ by defining it first on $K$, then extending to the
complement. We choose a homeomorphism $f:(K,E)\to (K',E')$. Next, we
extend to $\bdry P- E$ such that $f(\bdry P- E)=\bdry P-E'$. Since
$(K,E)$ is standard, the sphere $\fr(K)\cup (\bdry P-E)$ bounds a ball
$K^c$ (with interior disjoint from $\intr(K)$), and similarly
$\fr(K')\cup (\bdry P-E')$ bounds a ball $(K')^c$. Thus we can extend
$f$ so that the ball $K^c$ is mapped homeomorphically to $(K')^c$.
Then $f$ is isotopic to the identity, whence $(K,E)$ is isotopic to
$(K',E')$.
(b) To prove this, we adjust the isotopy of (a) in a collar of $\bdry
P$. To see that such an adjustment is possible, note that we may
assume the collar is chosen sufficiently small so that by
transversality the isotopy of (a) is a ``product isotopy'' in the
collar, i.e., has the form $h_t\times \text{id}$ for an isotopy $h_t$
of $\bdry P$. The adjustment of the isotopy in the collar is achieved
using isotopy extension.
\end{proof}
\begin{proof} [Proof of Theorem \ref{CanonicalHeegaardTheorem}]
Any complete system of essential spheres $\cal S$ gives a symmetric
Heegaard splitting as follows. Each holed 3-sphere $(P_i,\bdry P_i)$
obtained by cutting $M$ on $\cal S$ (where $P_i$ has $r_i$ holes, say)
can be expressed as a union of two standard $r_i$-spotted balls
$(K_i,E_i)$ and $(K_i^c,E_i^c)$. We assemble the spotted balls
$(K_i,E_i)$ by isotoping and identifying the two spots on opposite
sides of each essential sphere of the system $\cal S$, extending the
isotopy to $K_i$ and to $K_i^c$. The result is a handlebody
$H=\cup_iK_i$ with a complementary handlebody $H^c=\cup_iK_i^c$.
We claim that $H$ (depending on $\cal S$) is unique up to isotopy in
$M$. First replace each sphere $S$ of $\cal S$ by $\bdry N(S)$,
i.e. by two parallel copies of itself. The result is that no holed
ball obtained by cutting $M$ on $\cal S$ has two different boundary
spheres identified in $M$. Suppose $H'$ is constructed in the same
way as $H$ above. Inductively, we isotope $H'\cap P_i=K_i'$ to $K_i$
using Lemma \ref{SpottedBallStandardLemma}, extending the isotopy to
$H'$. Having isotoped $K_i'$ to $K_i$ for $i<k$, we isotope $K_k'$ to
$K_k$ in $P_k$ rel $\cup_{i<k} K_i'\cap \bdry P_k$. This means we
perform the isotopy rel spots on the boundary of $P_k$ where previous
$K_i'$'s have already been isotoped to coincide with $K_i$'s. Now it
is readily verified that the $H$ associated to the non-primitive $\cal
S$ obtained by replacing each sphere with two copies of itself is the
same as the $H$ associated to the original $\cal S$.
Next we must show that $H$ does not depend on the choice of a complete
system $\cal S$ of essential spheres. By Lemma \ref{SphereSpaceLemma}
it is possible to find a sequence $\{{\cal S}_i\}$ of complete systems
of essential spheres such that any two successive systems differ by
the addition or removal of spheres. It is then easy to verify that
after each move, the handlebody $H_i$, constructed from ${\cal S}_i$
as $H$ was constructed from $\cal S$ above, will be the same up to
isotopy. For example, when removing an essential sphere from the
system ${\cal S}_i$ to obtain ${\cal S}_{i+1}$, two spotted balls are
identified on a disc on the removed sphere, yielding a new spotted
ball in a new holed sphere, and $H_{i+1}=H_i$. The reverse process of
adding an essential sphere must then also leave $H_{i+1}=H_i$
unchanged.
\end{proof}
We offer an alternative, simpler proof of the theorem. Although it is
simpler, this approach has the disadvantage that it does not appear to
work for our more general result about canonical splittings of mixed
bodies.
\begin{proof}[Alternate proof of Theorem \ref{CanonicalHeegaardTheorem}]
Let $M=H_1\cup_{\partial H_1=\partial H_2} H_2$ and
$M=H_1'\cup_{\partial H_1'=\partial H_2'} H_2'$ determine two
symmetric Heegaard splittings of $M$. Our goal is to build an
automorphism $h\colon M\to M$ taking $H_1$ to $H_1'$ (and hence $H_2$
to $H_2'$) which is isotopic to the identity.
Choose any automorphism $f\colon H_1\to H_1'$. It doubles to an
automorphism $Df\colon M\to M$ which takes the first Heegaard
splitting to the second. Now consider $Df_*\colon \pi_1(M)\to
\pi_1(M)$. It is a known fact that any automorphism of a free group is
realizable by an automorphism of a handlebody (e.g., \cite{HG:64}).
Let $g\colon H_1\to H_1$ be such that $g_*=(Df_*)^{-1}$ and double
$g$, obtaining $Dg\colon M\to M$. Then it is clear that $h= (Df\circ
Dg)\colon M\to M$ takes the first Heegaard splitting to the
second. Also, $h_*=\text{Id}_{\pi_1(M)}$.
Now let $\mathcal{S}$ be a complete system of spheres symmetric with
respect to the splitting $M=H_1\cup H_2$. From
$h_*=\text{Id}_{\pi_1(M)}$ it follows that $h$ is isotopic to a
composition of twists on spheres of $\mathcal{S}$
\cite{FL:Spheres}. But such a composition preserves $H_1$, $H_2$ (a
twist on such a sphere restricts to each handlebody as a twist on a
disc), therefore $h$ is isotopic to an automorphism $h'$ preserving
$H_1$, $H_2$. The isotopy from $h$ to $h'$ then takes the second
splitting to the first one.
\end{proof}
Our next task is to describe canonical Heegaard splittings of mixed
bodies. Until further notice, $M$ will be a mixed body without holes.
We obtain a canonical splitting of a holed mixed body simply by
capping the holes.
\begin{thm}\label{CanonicalHeegaardTheorem2}
Let $M$ be a mixed body. There is a surface $F$ bounding a handlebody
$H$ in $M$ with the property that every curve on $F$ which bounds a
disc in $H$ also bounds a disc in $M-\intr (H)$ or is boundary
parallel in $M$. $F$ and $H$ are unique up to isotopy. $M-\intr(H)$
is a compression body. Also $\pi_1(H)\to \pi_1(M)$ is an isomorphism.
\end{thm}
The proof is entirely analogous to the proof of Theorem
\ref{CanonicalHeegaardTheorem}, but we must first generalize the
definitions given above, for $M$ as sphere body, to definitions for
$M$ a mixed body. Let $M$ be a connected sum of handlebodies $H_i$,
$i=1,\ldots, k$, $H_i$ having genus $g_i$, and $N$, which is a
connected sum of $n$ copies of $S^2\times S^1$.
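In symbols, the mixed bodies under consideration are thus of the form
\[
M\;\cong\;H_1\,\#\,H_2\,\#\cdots\#\,H_k\,\#\,N,
\qquad
N\;\cong\;\underbrace{(S^2\times S^1)\,\#\cdots\#\,(S^2\times
S^1)}_{n\ \text{summands}},
\]
where $H_i$ is a handlebody of genus $g_i$.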
An {\it essential reducing system} or {\it essential system} for $M$
is the isotopy class of a system of disjointly embedded spheres and
discs with the property that none of the spheres bounds a ball and
none of the discs is $\bdry$-parallel. The system is {\it primitive}
if no two surfaces of the system are isotopic. A system $\cal S$ is
said to be {\it complete} if cutting $M$ on $\cal S$ yields a
collection of holed balls $P_i$ satisfying an additional condition
which we describe below. Cutting on $\cal S$ yields components of the
form $(P_i,S_i)$, where $S_i$ is the union of discs and spheres in
$\bdry P_i$ corresponding to the cutting locus: if $P_i$ is viewed as
an immersed submanifold of $M$, then $\intr(S_i)$ is mapped to the
interior of $M$. For the system to be complete we also require that
$S_i$ contain all but at most one boundary sphere of $P_i$. Thus at
most one boundary component contains discs of $S_i$. There is a cell
complex of weighted primitive systems of essential spheres and discs
in $M$ constructed as follows. For any primitive essential system
$\cal S$ in $M$, the space contains a simplex corresponding to
projective classes of systems of non-negative weights on the spheres
and discs. Two simplices corresponding to primitive essential systems
${\cal S}_1$ and ${\cal S}_2$ are identified on the face corresponding
to the maximal primitive essential system ${\cal S}_3$ of spheres and
discs common to ${\cal S}_1$ and ${\cal S}_2$. We call the resulting
space the {\it
reducing space} of $M$. In the reducing space, the union of open
cells corresponding to complete primitive systems is called the {\it
complete reducing space} of $M$.
\begin{lemma}\label{ReducingSpaceLemma}
The complete reducing space of a mixed body $M$ is connected. In
other words, it is possible to pass from any primitive complete
essential system to any other by moves which replace a primitive
complete essential system by a primitive complete subsystem or a
primitive complete essential supersystem.
\end{lemma}
\begin{proof}
The proof is, of course, similar to that of Lemma
\ref{SphereSpaceLemma}. Suppose ${\cal S}={\cal S}_0$ and ${\cal S}'$
are two primitive complete systems. We will use the term ``reducing
surface'' or just ``reducer'' to refer to a properly embedded surface
which is either a sphere or a disc.
Consider ${\cal S}\cap{\cal S}'$ and choose a curve of intersection
innermost in ${\cal S}'$. Remove the curve of intersection by isotopy
of ${\cal S}$ if possible. Otherwise, surger $\cal S$ on a disc
(half-disc) bounded in ${\cal S}'$ by the curve to obtain ${\cal
S}_2$. Now ${\cal S}_2$ is obtained from ${\cal S}={\cal S}_0$ by
removing one reducer, the surgered reducer $S_0$ of ${\cal S}_0$, say,
and adding two reducers, $S_{21}$ and $S_{22}$, which result from the
surgery. We can do this in two steps, first adding the two new
reducers to obtain ${\cal S}_1$, then removing the surgered reducer to
obtain ${\cal S}_2$. If one of the reducers $S_{21}$ and $S_{22}$ is
inessential, this shows that the curve of intersection between ${\cal
S}={\cal S}_0$ and ${\cal S}'$ could have been removed by isotopy.
Note that surgery on arcs is always between two different discs of the
system, but surgery on closed curves may involve any pair of reducers
(disc and sphere, two discs, or two spheres). We note that $S_{21}$
or $S_{22}$, or both, might be isotopic to reducers of ${\cal
S}_0$. If the new reducers are essential, we claim that ${\cal S}_1$
and ${\cal S}_2$ are complete systems. It is clear that ${\cal S}_1$
cuts $M$ into holed balls, since it is obtained by adding the two
reducers $S_{21}$ and $S_{22}$ to ${\cal S}_0$. The property that cutting on
reducers yields pairs $(P_i,S_i)$ with at most one boundary component
of $P_i$ not in $S_i$ is preserved. This is clearly true when adding
a sphere. When adding a disc, a sphere of $\bdry P_i$ not contained
in $S_i$ is split, and so is $P_i$, yielding two holed balls with
exactly one boundary sphere not contained in $S_i$. We then remove
$S_0$ from ${\cal S}_1$ to obtain ${\cal S}_2$. There is, of course,
a danger that removing an essential reducer from an essential system
${\cal S}_1$ yields a system which does not cut $M$ into holed balls,
or does not preserve the other completeness property. As before, if
the surgery is on a disc, then the effect on the complementary
$(P_i,S_i)$'s is first to cut a $P_i$ on a disc with boundary in
$S_i$. Clearly this has the effect of cutting $P_i$ into two holed
balls. If the surgery is on a disc of $S_i$, we obtain one new holed
ball $(P_j,S_j)$ with all of its boundary in $S_j$, and one holed
ball with one boundary component not contained in $S_j$. Then we
attach a 2-handle to an $S_i$ of another $(P_i,S_i)$, which, as
before, yields the required types of holed balls. If the surgery is on
a half-disc, we first cut a holed ball $(P_i,S_i)$ on a half disc,
i.e. we cut on a disc intersecting the distinguished component of
$\bdry P_i$ which is not in $S_i$. This yields the required types of
holed balls. Then we attach a half 2-handle whose boundary intersects
$S_i$ in a single arc to a $(P_i,S_i)$, which has the effect of
splitting a disc of $S_i$ into two discs. Thus, ${\cal S}_2$ cuts $M$
into holed balls of the desired type. If ${\cal S}_2$ is not
primitive, i.e. contains two isotopic reducers, we can remove one,
until we are left with a primitive complete essential system.
Inductively, removing curves of intersection of ${\cal S}_{k}$ with
${\cal S}'$ either by isotopy or by surgery on an innermost disc
(half-disc) in ${\cal S}'$ as above, we must obtain a sequence ending
with ${\cal S}_{n-2}$ disjoint from ${\cal S}'$. Next, we add the
reducers of ${\cal S}'$ not in ${\cal S}_{n-2}$ to ${\cal S}_{n-2}$ to
obtain ${\cal S}_{n-1}$, which is still complete. Finally we remove
all reducers from ${\cal S}_{n-1}$ except those of ${\cal S}'$ to get
${\cal S}_{n}={\cal S}'$.
\end{proof}
Let $P$ be a 3-sphere with $r\ge 1$ holes and let $S\subseteq \bdry P$
be a union of discs and spheres such that each of $r-1\ge 1$
components of $\bdry P$ is a sphere of $S$, and the remaining sphere
of $\bdry P$ either is also in $S$ or contains a non-empty set of
discs of $S$. ($(P,S)$ is the type of pair obtained by cutting on a
complete system.) If $S$ contains discs, a {\it standard} $r$-spotted
ball $(K,E)$ in $(P,S)$ is one with the property that each component
of $S$ contains a component of $E$, and the complement
$\overline{P-K}$ is a spotted product, $(V\times I, F)$, where $F$ is
a union of discs in $V\times 1$. This means that $\overline{P-K}$ has
the form $V\times I$, where $V$ is a planar surface, $V\times 0$ is
mapped to $\bdry P-\intr(S)$, $\bdry V\times I$ is mapped to the
annular components of $S-\intr(E)$, and exactly one disc component of
$F$ is mapped to each sphere of $S$. We have already defined what it
means for $(K,E)$ to be standard in $(P,S)$ if $\bdry P=S$. Note that
the spots of $F$ are in one-one correspondence with the sphere
components of $S$, and each is mapped to the complement of $E$ in its
sphere component.
\begin{lemma} \label{SpottedBallStandardLemma2}
Let $P$ be a 3-sphere with $r\ge 1$ holes and let $S\subseteq \bdry P$
be a union of discs and spheres such that each of $r-1\ge 1$
components of $\bdry P$ is a sphere of $S$, and the remaining sphere
of $\bdry P$ either is also in $S$ or contains a non-empty set of
discs of $S$.
(a) A standard embedded $r$-spotted ball $(K,E)$ in $(P,S)$ is unique
up to isotopy of pairs.
(b) Suppose $(K,E)$ and $(K',E')$ are two different standard spotted
balls in $(P,S)$ and suppose $\hat E\subseteq E$ is a union of spots
in $E$, which coincides with a union of spots in $E'$. Then there is
an isotopy rel $\hat E$ of $(K,E)$ to $(K',E')$.
\end{lemma}
\begin{proof}
(a) The (mapping class) group of isotopy classes of automorphisms of
$(P,S)$ which map each component of $S$ to itself is trivial. It is
then also easy to verify that the mapping class group of $(P,S)$ rel
$S$ is trivial.
We may assume that $S$ contains discs; otherwise we have already
proved the result. Suppose now that $(K,E)$ and $(K',E')$ are two
different standard spotted balls in $(P,S)$. We will construct an
automorphism $f$ of $(P,S)$ by defining it first on $K$, then
extending to the complement. Isotope $E'$ to $E$ extending the
isotopy to $K'$. We choose a homeomorphism $f:(K,E)\to (K',E')$
sending a disc of $E$ to the corresponding disc of $E'$. Next, we
extend to $S- E$ such that $f(S- E)=S-E'$. Since $(K,E)$ is standard,
its complement has a natural structure as a spotted product, and the
complement of $K'$ has an isomorphic structure. Thus we can extend
$f$ so that the complementary spotted product $K^c$ is mapped
homeomorphically to $(K')^c$. Then $f$ is isotopic to the identity,
whence $(K,E)$ is isotopic to $(K',E')$.
(b) To prove (b) we adjust the isotopy of (a) in a collar of $\hat E$
in $K$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{CanonicalHeegaardTheorem2}]
Any complete system $\cal S$ for $M$ gives a Heegaard splitting as
follows. In each holed 3-sphere $(P_i,S_i)$ obtained by cutting $M$
on $\cal S$ we insert an $s_i$-spotted ball $(K_i,E_i)$, where $s_i$
is the number of components of $S_i$. Each component of $S_i$ contains
exactly one disc of $E_i$. We assemble the spotted balls $(K_i,E_i)$ by
isotoping and identifying the two spots on opposite sides of each
essential sphere or disc of the system $\cal S$, extending the isotopy
to $K_i$. The result is a handlebody $H=\cup_iK_i$ with a
complementary handlebody $H^c=\cup_iK_i^c$.
We claim that $H$ (depending on $\cal S$) is unique up to isotopy in
$M$. First replace each sphere or disc $S$ of $\cal S$ by $\bdry
N(S)$, i.e. by two parallel copies of itself. The result is that no
$(P_i,S_i)$ obtained by cutting $M$ on $\cal S$ has two different disc
or sphere components of $S_i$ identified in $M$. Suppose $H'$ is
constructed in the same way as $H$ above. Inductively, we isotope
$H'\cap P_i=K_i'$ to $K_i$ using Lemma
\ref{SpottedBallStandardLemma2}, extending the isotopy to $H'$.
Having isotoped $K_i'$ to $K_i$ for $i<k$, we isotope $K_k'$ to $K_k$
in $P_k$ rel $\cup_{i<k} K_i'\cap \bdry P_k$. This means we perform
the isotopy rel spots on the boundary of $P_k$ where previous $K_i'$'s
have already been isotoped to coincide with $K_i$'s. Now it is
readily verified that the $H$ associated to the non-primitive $\cal S$
obtained by replacing each sphere or disc with two copies of itself is
the same as the $H$ associated to the original $\cal S$.
Next we must show that $H$ does not depend on the choice of a complete
system $\cal S$. By Lemma \ref{ReducingSpaceLemma} it is possible to
find a sequence $\{{\cal S}_i\}$ of complete systems such that any two
successive systems differ by the addition or removal of reducers. It
is then easy to verify that after each move, the handlebody $H_i$,
constructed from ${\cal S}_i$ as $H$ was constructed from $\cal S$
above, will be the same up to isotopy. For example, when removing an
essential reducer from the system ${\cal S}_i$ to obtain ${\cal
S}_{i+1}$, two spotted balls are identified on a disc on the removed
reducing surface, yielding a new spotted ball in a new holed sphere,
and $H_{i+1}=H_i$. The reverse process of adding an essential reducer
must then also leave $H_{i+1}=H_i$ unchanged.
\end{proof}
The canonical splitting surface for $M$ a sphere body or a mixed body
yields an automatic reducing surface for any automorphism $f:M\to
M$. However, this reducing surface is non-rigid. There are
automorphisms of $M$ isotopic to the identity which induce
automorphisms not isotopic to the identity on the splitting
surface. The non-rigidity (without our terminology) is well-known, but
rather than giving the example described in
\cite{FL:ConnectedS2TimesS1}, we describe a simpler example, which in
some sense is the source of the phenomenon.
\begin{example}\label{NonUniqueExample}
Let $M$ be a sphere body and let $S$ be a non-separating sphere in $M$
intersecting the canonical splitting surface in one closed curve. Let
$H$ be one of the handlebodies bounded by the splitting surface, so
the splitting surface is $\bdry H$. We describe an isotopy of $H$ in
$M$ which moves $H$ back to coincide with itself and induces an
automorphism not isotopic to the identity on $\bdry H$. In other
words, we describe $f:M\to M$ isotopic to the identity, such that
$f|_{\bdry H}$ is not isotopic to the identity. Figure
\ref{Aut3UhrMove} shows the intersection of $H$ with $S$. We have
introduced a kink in a handle of $H$ such that the handle runs
parallel to $S$ for some distance. The ends of the handle project to
(neighborhoods of) distinct points while the handle projects to a
neighborhood of an arc $\alpha$ in $S$. There is an isotopy of
$\alpha$ in $S$ fixing endpoints, in which the arc rotates about each
endpoint by an angle of $2\pi$ and sweeps over the entire 2-sphere,
returning to itself. We perform the corresponding isotopy of the
1-handle in $H$. Calling the result of this isotopy $f$, we note that
$f|_{\bdry H}$ is a double Dehn twist as shown. As one can easily
imagine, by composing automorphisms of this type on different spheres
intersecting $H$ in a single meridian, one can obtain $f$ such that
$f$ restricted to $\bdry H$ is a more interesting automorphism, e.g. a
pseudo-Anosov, see \cite{FL:ConnectedS2TimesS1}.
\begin{figure}[ht]
\centering
\psfrag{S}{\fontsize{\figurefontsize}{12}$S$}\psfrag{H}{\fontsize{\figurefontsize}{12}$H$}
\psfrag{a}{\fontsize{\figurefontsize}{12}$\alpha$}
\scalebox{1.0}{\includegraphics{Aut3UhrMove}} \caption{\small
Non-rigidity of canonical splitting.} \label{Aut3UhrMove}
\end{figure}
\end{example}
We shall refer to an isotopy of the type described above as a {\it
double-Dehn isotopy}. If, as above, $M$ is a sphere body or
mixed body, and $f\from M\to M$ is an automorphism, we shall use
double-Dehn isotopies to choose a good representative of the
isotopy class of $f: M\to M$ preserving the canonical splitting.
\section{Reducing surfaces} \label{Reducing}
The purpose of this section is to define reducing surfaces in the
general setting of automorphisms of compact 3-manifolds. In addition,
we review reducing surfaces which arise in the setting of
automorphisms of handlebodies and compression bodies, \cite{UO:Autos},
using the somewhat more inclusive definitions given here.
We will also consider manifolds with some additional structure,
namely holed or spotted manifolds.
Suppose $M$ is a compact 3-manifold without sphere boundary
components, and $f:M\to M$ is an automorphism. If $\mathcal{S}$ is an
$f$-invariant (up to isotopy) union of essential spheres, then
$\mathcal{S}$ is a special kind of reducing surface. We can decompose
$M$ on $\mathcal{S}$ to obtain $M|\mathcal{S}$ and cap boundary
spheres with balls to obtain a {\it manifold $(\hat M, {\cal D})$
with spots in the interior}, where $\mathcal{D}$ denotes the union of
the capping balls, and an induced automorphism $(\hat M, {\cal D})\to
(\hat M, {\cal D})$. We could equally well not cap boundary spheres
after decomposing on $\mathcal{S}$ and consider holed manifolds rather
than spotted manifolds.
\begin{definition} Let
$(M,\mathcal{D})$ be a spotted 3-manifold with spots in the interior
($\mathcal{D}$ a union of 3-balls disjointly embedded in the interior
of $M$). We say a sphere is {\it essential} in $(M,\mathcal{D})$ if
it is embedded in $M-\mathcal{D}$, it does not bound a ball in
$M-\mathcal{D}$, and it is not isotopic in $M-\mathcal{D}$ to a
component of $\bdry \mathcal{D}$. The spotted manifold is {\it
irreducible} if it has no essential spheres.
\end{definition}
Clearly a connected spotted manifold $(M,\mathcal{D})$ is irreducible
if and only if the underlying manifold $M$ is irreducible and
$\mathcal{D}$ is empty or consists of a single ball.
\begin{definition} Let $(M,\mathcal{D})$ be a spotted manifold with spots in the interior. Let $f:(M,\mathcal{D})\to (M,\mathcal{D})$
be an automorphism of the spotted manifold. Then $f$ is {\it
reducible on spheres} if there is an $f$-invariant union $\mathcal{S}$
of disjoint essential spheres in $(M,\mathcal{D})$. $\cal S$ is
called a {\it sphere reducing surface}. {\it Decomposing} on
$\mathcal{S}$ and capping with balls gives a new spotted manifold with
an induced automorphism. The caps of duplicates of spheres of the
reducing surface are additional spots on the decomposed manifold.
\end{definition}
Clearly by repeatedly decomposing on sphere reducing surfaces we
obtain automorphisms of manifolds with interior spots which do not
have sphere reducing surfaces.
Automorphisms of a spotted 3-manifold $(M,\mathcal{D})$ (with
3-dimensional spots) induce automorphisms of the underlying spotless
3-manifold $M$, and in the cases of interest to us it is relatively
easy to understand the relationship between the two. For this reason,
we can henceforth consider only 3-manifolds without 3-dimensional
spots and without sphere reducing surfaces.
\begin{definition} Suppose $f:M\to M$ is an automorphism,
without sphere reducing surfaces, of a 3-manifold. Then a closed surface
$F\embed M$ which is $f$-invariant up to isotopy is called a {\it
closed reducing surface} provided $M|F$ is irreducible. The reducing
surface $F$ is {\it rigid} if $f$ induces uniquely an automorphism of
the decomposed manifold $M|F$.
\end{definition}
In particular, the canonical splitting surface for $M$ a sphere body
or mixed body is a (non-rigid) reducing surface for any automorphism
of $M$.
The existence of any closed reducing surface for $f:M\to M$ (without
sphere components) gives a decomposition into automorphisms of
compression bodies and manifolds with incompressible boundary, as the
following theorems show. We give a simple version for closed
manifolds separately to illustrate the ideas.
\begin{thm} \label{ReducingDecompositionTheorem}
Let $M$ be a closed reducible manifold, $f:M\to M$ an automorphism,
and suppose $F$ is a closed reducing surface, with no sphere
components. Then either
i) $F$ is incompressible,
ii) there is a reducing surface $F'$ such that the reducing surface
$F\cup F'$ cuts $M$ into a non-product compression body $Q$ (possibly
disconnected) and a manifold $M'$ with incompressible boundary, or
iii) $F$ is a Heegaard surface.
\end{thm}
\begin{proof} We suppose $M$ is a closed manifold, $f:M\to M$ is an automorphism
and suppose $F$ is a closed reducing surface. Cutting $M$ on $F$ we
obtain $M|F$ which is a manifold whose boundary is not in general
incompressible. Let $Q$ be the characteristic compression body in
$M|F$. This exists because $M|F$ is irreducible by the definition of
a reducing surface. Then $f|F$ preserves $Q$ and $F'=\bdry_i Q$ is
another reducing surface, unless it is empty. In case $\bdry_iQ$ is
empty, we conclude that $Q$ must consist of handlebodies, and we have
a Heegaard decomposition of $M$. Otherwise, letting
$M'=\overline{M-Q}$, $M'$ has incompressible boundary.
\end{proof}
Here is a more general version:
\begin{thm} \label{ReducingDecompositionTheorem2}
Let $M$ be a reducible manifold, $f:M\to M$ an automorphism, and
suppose $F$ is a closed reducing surface, with no sphere components.
Then either
i) $F$ is incompressible,
ii) there is a reducing surface $F'$ such that the reducing surface
$F\cup F'$ cuts $M$ into a non-product compression body $Q$ (possibly
disconnected) and a manifold $M'$ with incompressible boundary, or
iii) $F$ is a Heegaard surface dividing $M$ into two compression
bodies.
\end{thm}
A priori, there is no particular reason why every automorphism $f:M\to
M$ without sphere reducing surfaces should have a closed reducing
surface. Recall we posed Question \ref{ReducingQuestion}. In any
case, if $M$ is reducible, we deal only with automorphisms for which
this is true. Therefore we henceforth consider only automorphisms of
irreducible 3-manifolds of the types resulting from the previous two
theorems: Namely, we must consider automorphisms of irreducible,
$\bdry$-irreducible manifolds, and we must consider automorphisms of
compression bodies (and handlebodies).
It might be interesting to learn more about all possible reducing
surfaces associated to a given automorphism of a given manifold,
especially for answering Question \ref{ReducingQuestion}. In our
classification, however, we often use reducing surfaces guaranteed by
the topology of the 3-manifold: reducing surfaces coming from
F. Bonahon's characteristic compression body and the
Jaco-Shalen-Johannson characteristic manifold, and reducing surfaces
coming from the canonical Heegaard splittings of sphere bodies or mixed
bodies. To analyze automorphisms of compression bodies, one uses
reducing surfaces not guaranteed by the topology, see \cite {UO:Autos}.
The definition of reducing surfaces for irreducible,
$\bdry$-irreducible manifolds should be designed such that the
frontier of the characteristic manifold is a reducing surface. The
following is a natural definition.
\begin{definition} Suppose $f:M\to M$ is an automorphism of an irreducible, $\bdry$-irreducible compact manifold. Then an invariant
surface $F$ is a {\it reducing surface} if it is incompressible and
$\bdry$-incompressible and not boundary-parallel.
\end{definition}
It only remains to describe reducing surfaces for automorphisms of
handlebodies and compression bodies, and this has already been done in
\cite{UO:Autos}. We give a review here, taken from that paper.
\begin{defns} We have already defined a {\it spotless compression body} $(Q,V)$:
The space $Q$ is obtained from a disjoint union of balls and a product
$V\times I$ by attaching 1-handles to the boundaries of the balls and
to $V\times 1$; the surface $V\embed \bdry Q$ is the same as $V\times
0$. We require that $V$ contain no disc or sphere components. When
$Q$ is connected and $V=\emptyset$, $(Q,V)$ is a handlebody or ball.
If $(Q,V)$ has the form $Q=V\times I$ with $V\times 0=V$, then $(Q,V)$
is called a {\it spotless product compression body} or a {\it trivial
compression body}. As usual, $\bdry_eQ=\bdry Q-\intr (V)$ is the {\it
exterior boundary}, even if $V=\emptyset$.
A {\it spotted compression body} is a triple $(Q,V,\mathcal{D})$ where
$(Q,V)$ is a spotless compression body and $\mathcal{D}\ne \emptyset$
denotes a union of discs or ``spots" embedded in $\bdry_e Q=\bdry
Q-\intr(V)$.
A {\it spotted product} is a spotted compression body
$(Q,V,\mathcal{D})$ of the form $Q=V\times I$ with $V=V\times 0$ and
with $\mathcal{D}\ne \emptyset$ a union of discs embedded in $V\times
1$.
A {\it spotted ball} is a spotted compression body whose underlying
space is a ball. It has the form $(B,\mathcal{D})$ where $B$ is a
ball and $\mathcal{D}\ne \emptyset$ is a disjoint union of discs in
$\bdry B=\bdry_eB$.
An {\it $I$-bundle pair} is a pair $(H,V)$ where $H$ is the total
space of an $I$-bundle $p\from H \to S$ over a surface $S$ and $V$ is
$\bdry_iH$, the total space of the associated $\bdry I$-bundle.
\end{defns}
\begin{com}
A spotless compression body or $I$-bundle pair is an example of a {\it
Haken pair}, i.e. a pair $(M,F)$ where $M$ is an irreducible
3-manifold and $F\subset \bdry M$ is incompressible in $M$.
\end{com}
\begin{definition} If $(Q, V)$ is a compression body, and $f\from(Q, V)\to (Q, V)$ is an automorphism, then an
$f$-invariant surface $(F,\bdry F)\embed (Q,\bdry Q)$, having no
sphere components, is a {\it reducing surface for $f$}, if:
i) $F$ is a union of essential discs, $\bdry F\subset \bdry_eQ$,
or no component of $F$ is a disc and:
ii) $F$ is the interior boundary of an invariant compression body
$(X,F)\embed (Q,V)$, with $\bdry_eX\embed \bdry_eQ$ but $F$ is not
isotopic to $V$, or
iii) $F\subset \bdry Q$, $F$ is incompressible, $F$ strictly contains
$V$, and $(Q,F)$ has the structure of an $I$-bundle pair, or
iv) $F$ is a union of annuli, each having one boundary component in
$\bdry_eQ$ and one boundary component in $\bdry_iQ$.
In case i) we say $F$ is a {\it disc reducing surface}; in case ii) we
say $F$ is a {\it compressional reducing surface}; in case iii) we say
$F$ is a {\it peripheral reducing surface}; in case iv) we say $F$ is
an {\it annular reducing surface}.
In case i) we {\it decompose} on $F$ by replacing $(Q,V)$ by the {\it
spotted compression body $(M|F,V,\mathcal D)$}, and replacing $f$ by
the induced automorphism on this spotted compression body. ($\cal D$
is the union of duplicates of the discs in $F$.) An automorphism $g$
of a spotted compression body $(Q,V,\mathcal D)$ can be {\it
decomposed} on a {\it spot-removing reducing surface}, which is $\bdry_e
Q$ isotoped to become a properly embedded surface. This yields an
automorphism of a {\it spotted product} and an automorphism of a
spotless compression body.
In case ii) we {\it decompose} on $F$ by cutting on $F$ to obtain
$Q|F$, which has the structure $(X,F)\cup (Q',V')$ of a compression
body, and by replacing $f$ by the induced automorphism. The duplicate
of $F$ not attached to $X$ belongs to $\bdry_eQ'$. We throw away any
trivial product compression bodies in $(Q',V')$.
In case iii) we {\it decompose on $F$} by replacing $(Q,V)$ by
$(Q,F)$.
In case iv), cutting on $F$ yields another compression body with
induced automorphism.
For completeness, if $(H,V)$ is a spotless $I$-bundle pair, we define
reducing surfaces for automorphisms $f\from (H,V)\to (H,V)$. Any
$f$-invariant non-$\bdry$-parallel union of annuli with boundary in
$V$ is such an {\it annular reducing surface}, and these are the only
reducing surfaces.
\end{definition}
Non-rigid reducing surfaces are quite common, as the following example
shows.
\begin{example}
Let $M$ be a mapping torus of a pseudo-Anosov automorphism $\phi$,
$M=S\times I/ \sim$ where $(x,0)\sim (\phi(x),1)$ describes the
identification. Let $\tau$ be an invariant train track for the
unstable lamination of $\phi$ in $S=(S\times 0)/\sim$. Let $f:M\to M$
be the automorphism induced on $M$ by $\phi\times \text{Identity}$.
Then $N(\tau)$ is invariant under $f$ up to isotopy and $F=\bdry
N(\tau)$ is a reducing surface for $f$. Isotoping $\tau$ and $F=\bdry
N(\tau)$ through levels $S_t=S\times t$ from $t=0$ to $t=1$ and
returning it to its original position induces a non-trivial
automorphism on $N(\tau)$. Hence $F$ is non-rigid.
It is easy to modify this $M$ by Dehn surgery operations such that
$F=\bdry N(\tau)$ becomes incompressible.
\end{example}
The results of Section \ref{Splitting} guarantee reducing surfaces in
the special case where $M$ is a mixed body without sphere boundary
components, i.e. a connected sum of $S^2\times S^1$'s and
handlebodies, and this is what we use in the classification. The
canonical splitting surfaces of Section \ref{Splitting} are examples
of reducing surfaces such that applying Theorem
\ref{ReducingDecompositionTheorem} and Theorem
\ref{ReducingDecompositionTheorem2} yields a Heegaard splitting.
We state the obvious:
\begin{proposition} If $M$ is a mixed body,
then there is a canonical Heegaard decomposition $M=H\cup Q$ on a
reducing surface $F=\bdry H$, which is unique up to isotopy. $F$ is a
non-rigid closed reducing surface for any automorphism $f:M\to M$.
\end{proposition}
\section{Adjusting automorphisms} \label{Adjusting}
In this section we will prove Theorem \ref{AdjustingTheorem} and
Corollary \ref{AdjustingCorollary}.
The adjusting automorphisms are constructed from {\em slide
automorphisms of $M$}. There are two types, one of which is as
defined in \cite{DM:MappingSurvey}, see the reference for details.
For a fixed $i$, construct $\check M$ by removing
$\intr(\hat{M}_i)$ from $M$ and capping the resulting boundary
sphere with a ball $B$. Let $\alpha\subseteq \check M$ be an
embedded arc meeting $B$ only at its endpoints. One can ``slide''
$B$ along $\alpha$ and back to its original location. By then
replacing $B$ with $\hat M_i$ we obtain an automorphism of $M$
which is the identity except in a holed solid torus $T$ (a closed
neighborhood of $\alpha\cup S_i$ in $M-\intr(\hat M_i)$). Call
such an automorphism a {\em simple slide}. It is elementary to
verify that $\hat M_0 \cup T$ is a mixed body.
We also need another type of slide which is not described explicitly
in \cite{DM:MappingSurvey}. First, consider the classes of $\hat
M_i$'s which are homeomorphic. For each class, fix an abstract
manifold $N$ homeomorphic to the elements of the class. For each
$\hat M_i$ in the class fix a homeomorphism $f_i\colon \hat M_i\to N$.
If $i\ne j$, and $\hat M_i$ is homeomorphic to $\hat M_j$,
construct $\check M$ by removing $\intr(\hat{M}_i\cup\hat{M}_j)$
and capping the boundary spheres $S_i$, $S_j$ with balls $B_i$,
$B_j$. Let $\alpha$, $\alpha'\subseteq\check M$ be two disjoint
embedded arcs in $\check M$ linking $S_i$ and $S_j$. One can slide
$B_i$ along $\alpha$ and $B_j$ along $\alpha'$ simultaneously,
interchanging them. Now remove the balls and re-introduce $\hat
M_i$, $\hat M_j$, now interchanged. Denote by $s$ the resulting
automorphism of $M$. Isotope $s$ further to ensure that it
restricts to $\hat M_i$ as $f_j^{-1}\circ f_i$ (and to $\hat M_j$
as $f_i^{-1}\circ f_j$). Call such an automorphism an {\em
interchanging slide}. The support of an interchanging slide is a
closed neighborhood of $\hat M_i\cup\hat
M_j\cup\alpha\cup\alpha'$. Note that a sequence of interchanging
slides that fixes an $\hat M_i$ restricts to it as the identity.
\begin{remark}
An argument in \cite{DM:MappingSurvey} makes use of automorphisms
called {\em interchanges of irreducible summands}. Our interchanging
slides are a particular type of such automorphisms.
\end{remark}
\begin{proof} (Theorem \ref{AdjustingTheorem}.)
We start by noting that $\hat M_0$ is an essential holed sphere
body.
Let $\mathcal{S}=\bigcup_i S_i$ and consider $f(\mathcal{S})$. The
strategy is to use isotopies and both types of slides to take
$f(S_i)$ to $S_i$. If only isotopies are necessary we take
$R=\emptyset$, $\hslash=\id_M$ and $f=g$ in the statement.
Therefore suppose that some $f(S_i)$ is not isotopic to $S_i$. The
first step is to simplify $f(\mathcal{S})\cap\mathcal{S}$. The
argument of \cite{DM:MappingSurvey} yields a sequence of simple
slides (and isotopies) whose composition is $\sigma$ satisfying
$\sigma\circ f(\mathcal{S})\cap\mathcal{S}=\emptyset$. We skip the
details. We can then assume that $\sigma\circ
f(\mathcal{S})\subseteq\hat M_0$, otherwise some $S_i$ would bound
a ball. Note that $\sigma$ preserves $K\subseteq M$, where $K$
consists of $\hat M_0$ to which is attached a collection of
disjoint 1-handles $J\subseteq\cup\hat M_i$ along $\mathcal{S}$.
Each 1-handle consists of a neighborhood (in $\hat M_i$) of a
component of $\alpha\cap\hat M_i$, where $\alpha$ is a path along
which a simple slide is performed. Perturb these paths so that
they are pairwise disjoint.
Now consider $\sigma\circ f(S_i)\subseteq\hat M_0$. It must bound a
(once-holed) irreducible summand on one side. It is then easy to check
that $\sigma\circ f(S_i)$ is isotopic to an $S_j$. Indeed,
$\sigma\circ f(S_i)$ separates $\hat M_0$, bounding (in $\hat M_0$) on
both sides a holed sphere body. On the other hand $\sigma\circ f(S_i)$
must bound (in $M$) an irreducible summand. Therefore in $\hat M_0$
it must bound a once-holed ball, whose hole corresponds to some
$S_j$. We can then make $\sigma\circ f(\mathcal{S})=\mathcal{S}$.
If $f(S_i)=S_j$, $i\neq j$ we use a slide to interchange $\hat M_i$
and $\hat M_j$. A composition $\iota$ of these interchanging slides
yields $\iota\circ\sigma\circ f(S_i)=S_i$ for each $i$.
Now consider the collection of 1-handles $J=(\cup\hat M_i)\cap K$,
where we recall that $K$ is the union of the support of $\sigma$
with $\hat M_0$ ($K$ is preserved by $\sigma$). We note that there
is some $n$ such that, for any $i\geq 1$, $\iota^n|_{\hat
M_i}=\text{Id}$. In particular, $\iota^n(J)=J$. By perturbing the
paths along which the simple slides of $\sigma$ are performed we
can assume that $\iota^k(J)\cap J=\emptyset$ for any $k$, $1\le
k<n$. Therefore $\cup_k\iota^k(J)$ consists of finitely many
1-handles. Let $R=K\cup\left(\cup_k\iota^k(J)\right)$. Clearly
$R$ is preserved by $\iota$. The restriction of $\iota$ to $\cup_i
\hat M_i$ is periodic, therefore its restriction to
$\overline{M-R}\subseteq(\cup_i \hat M_i)$ also is. Finally, $R$
is a mixed body. Either $\iota$ or $\sigma$ is not the identity by
hypothesis, therefore $R\neq B$, the 3-ball. It is not hard to see
that it is essential. Indeed, if $S\subseteq R$ is an essential
sphere then an argument similar to the one above yields a sequence
of slides $\tau$ which preserves $R$ and takes $S$ to
$\tau(S)\subseteq \hat M_0$. Essentiality of $S$ implies
essentiality of $\tau(S)$ in $R$. Therefore $\tau(S)$ is essential
in $\hat M_0$ (which is itself essential in $M$), and hence in
$M$. But essentiality of $\tau(S)$ in $M$ implies essentiality of
$S$ in $M$. It remains to consider $S$ a sphere boundary component
of $R$, but this is parallel to an $S_i\subseteq \mathcal{S}$,
which is essential since $M$ has no sphere boundary components.
Let $\hslash=\iota\circ\sigma$, $h=\hslash^{-1}$ and $g=\hslash\circ
f$. The following are immediate from the construction:
1) $R$ is preserved by $\hslash$ (and hence by $h$).
2) $\hslash|_{(\overline{M-R})}=\iota|_{(\overline{M-R})}$, which is
periodic.
3) $g$ preserves each $\hat M_i$.
The other conclusions of the theorem are clear. \end{proof}
\begin{proof} (Corollary \ref{AdjustingCorollary}.) We follow the proof of Theorem
\ref{AdjustingTheorem}. In that proof, a composition of simple slides
and interchanging slides has the effect of moving $f(S_i)$ to $S_i$.
A 4-dimensional handlebody $P_0$ is obtained from a 4-ball $K_0$ by
attaching $\ell$ 1-handles. The boundary is a connected sum of $\ell$
$S^2\times S^1$'s. The 4-dimensional compression body $Q$ is obtained
from a disjoint union of the products $M_i\times I$ and from $P_0$ by
attaching a 1-handle to $P_0$ at one end and to $M_i\times 1$ at the
other end, one for each $i\ge 1$. Then $\bdry_eQ$ is the connected
sum of the $M_i$'s and the sphere body $\bdry P_0$. The cocore of
each 1-handle has boundary equal to an $S_i$, $i=1,\ldots, k$. Each
$S_i$ bounds a 3-ball $E_i$ in $Q$. A simple slide in $\bdry_eQ=M$
using some $S_i$ can be realized as the (restriction to the) exterior
boundary of an automorphism of $Q$ as follows: Cut a 1-handle of $Q$
on the ball $E_i$. This gives a lower genus compression body $\hat Q$
with two 3-ball {\it spots} on its exterior boundary. We slide one of
the spots along the curve $\alpha$ associated to the slide, extending
the isotopy to $\hat Q$ and returning the spot to its original
position, then we reglue the pair of spots. This defines an
automorphism of the compression body fixing the interior boundary and
inducing the slide on the exterior boundary. An interchanging slide
can be realized similarly.
Suppose now that $f$ is given as in the proof of Theorem
\ref{AdjustingTheorem}, and we perform the slides in the theorem
so that $\iota\circ\sigma\circ f(S_i)=S_i$ for each $i$. Capping
each $S_i$ in $\bdry \hat M_i$ with a ball $E_i$ to obtain $M_i$,
and capping the other copy of $S_i$ in the holed sphere body $\hat
M_0$ with a ball $E_i'$, we obtain a disjoint union of the $M_i$'s
and the sphere body $\bdry P_0=M_0$. The automorphism $f$ induces
an automorphism of this disjoint union, regarded as spotted
manifolds. Thus we have an induced automorphism of a spotted
manifold $ f_s:((\sqcup_iM_i\sqcup \bdry P_0),\cup_i(E_i\cup
E_i'))\to((\sqcup_iM_i\sqcup \bdry P_0),\cup_i(E_i\cup E_i'))$.
($f_s$ for $f$ spotted.) If we ignore the spots, we have an
automorphism $f_a$ of $(\sqcup_iM_i\sqcup \bdry P_0)$. ($f_a$ for
$f$ adjusted.) These automorphisms yield an automorphism $f_p$ of
the disjoint union of the spotted products $M_i\times I$ with
spots $E_i$ in $M_i\times 1$ where $f_p$ restricts to $(M_i\times
1,E_i\times 1)$ as $f_s$ and to $M_i\times 0$ as $f_a$. ($f_p$ for
$f$ product.) We can also extend $f_s$ to $f_p$ on all of
$(P_0,\cup_i E_i')$ so that the restriction of $f_p$ to $(\bdry
P_0, \cup_iE_i')$ is $f_s$. Now $f_p$ is an automorphism of a
disjoint union of products $M_i\times I$ with spots on one end of
each product, together with $P_0$ with spots on $\bdry P_0$.
Gluing $E_i$ to $E_i'$, we obtain $Q$ and $f_p$ yields an
automorphism $f_c$ of the compression body.
Now it only remains to apply the inverses of the automorphisms of $Q$
realizing $\iota\circ\sigma$ on $\bdry_eQ$ as a composition of slides
of both kinds. This composition is $\bar\hslash\inverse$, and we have designed $\bar\hslash$ such that
$\bar\hslash\inverse\circ f_c$ restricts to $f$ on $\bdry_eQ=M$. Then take $\bar f=\bar\hslash\inverse\circ f_c$.
Note that the support of the slides is actually in $R$ of the previous
proof.
\end{proof}
\section{One-dimensional invariant laminations}\label{Laminations}
If $f\from H\to H$ is a generic automorphism of a handlebody, there is
an invariant 1-dimensional lamination for $f$ described in
\cite{UO:Autos}, associated to an incompressible 2-dimensional
invariant lamination of a type which we shall describe below, see
Proposition \ref{GoodLaminationProp}. The invariant ``lamination" is
highly pathological, not even being embedded. One of our goals in this
section is to describe various types of 1-dimensional laminations in
3-manifolds, ranging from ``tame" to ``wild embedded" and to the more
pathological ``wild non-embedded" laminations which arise as invariant
laminations for automorphisms of handlebodies. All invariant
1-dimensional laminations we construct are dual to (and associated
with) an invariant 2-dimensional lamination. In general, it seems
most useful to regard the 1-dimensional and the 2-dimensional
invariant laminations of a generic automorphism as a dual pair of
laminations.
In the case of an automorphism of a compression body, only a
2-dimensional invariant lamination was described in
\cite{UO:Autos}. In this section we shall also describe invariant
1-dimensional laminations for (generic) automorphisms of compression
bodies.
\begin{defns}
A 1-dimensional {\it tame lamination} $\Omega\embed M$ in a
3-manifold $M$ is a subspace $\Omega$ with the property that $\Omega$
is the union of sets of the form $T\times I$ in {\it flat charts} of
the form $D^2\times I$ in $M$ for a finite number of these charts
which cover $\Omega$. ($T$ is the {\it transversal} in the flat
chart.) A tame lamination is {\it smooth} if the charts can be chosen
to be smooth relative to a smooth structure on $M$.
A 1-dimensional {\it abstract lamination} is a topological space which
can be covered by finitely many flat charts of the form $T\times I$,
where $T$ is a topological space, usually, in our setting, a subspace
of the 2-dimensional disc.
A 1-dimensional {\it wild embedded lamination} in a 3-manifold $M$ is
an embedding of an abstract lamination. Usually wild will also mean
``not tame."
A 1-dimensional {\it wild non-embedded lamination} in a 3-manifold is
a mapping of an abstract lamination into $M$.
\end{defns}
We observe that in general the 1-dimensional invariant laminations
constructed in \cite{UO:Autos} for $f\from H \to H$, $H$ a handlebody,
are wild non-embedded, though they fail to be embedded only at
finitely many {\it singular points} in $H$.
We will now construct invariant laminations for generic automorphisms
of arbitrary compression bodies.
We will deal with a connected, spotless compression body $(H,V)$.
``Spotless" means in particular that $V$ contains no disc components.
The surface $\bdry H-\intr (V)$ will always be denoted $W$. Also,
throughout the section, we assume $\text{genus}(H,V)>0$, where
$\text{genus}(H,V)$ denotes the number of 1-handles which must be
attached to $V\times I$ to build $Q$.
Let $(H_0, V_0) \subseteq \intr (H)$ be a ``concentric compression
body" with a product structure on its complement. If the compression
body has the form $V\times [0,1]$ with handles attached to $V\times
1$, then $H_0=V\times [0,1/2]$ with handles attached to $V\times 1/2$, in
such a way that $(H-H_0)\cup \text{fr}(H_0)$ has the structure of a
product $W\times [0,1]$. Here $V=V\times 0$ and $W= W\times 1$.
We shall sketch the proof of the following proposition, which is
proven in \cite{UO:Autos}.
\begin {proposition} \label{InvtLamProp1}
Suppose $f:(H,V) \to (H, V)$ is a generic automorphism of a
$3$-dimen\-sional compression body. Then there is a 2-dimensional
measured lamination $\Lambda\subseteq \intr (H)$ with transverse
measure $\mu$, such that $f(\Lambda,\mu) = (\Lambda,\lambda \mu)$, up
to isotopy, for some stretch factor $\lambda > 1$. The leaves of
$\Lambda$ are planes. Further, $\Lambda$ ``fills" $H_0$, in the sense
that each component of $H_0-\Lambda$ is either contractible or
deformation retracts to $V$.
\end{proposition}
The statement of Proposition \ref{InvtLamProp1} will be slightly
expanded later, to yield Proposition \ref{InvtLamProp2}.
An automorphism $f: (H,V)\to (H,V)$ is called {\it outward expanding}
with respect to $(H_0,V)$ if $f|_{W \times I} = f|_{W\times 0} \times
h$, where $h \from I \to I$ is a homeomorphism moving every point
towards the fixed point 1, so that $h(1)=1$ and $h(t)>t$ for all
$t<1$. We define $H_t=f^t(H_0)$ for all integers $t\ge 0$ and
reparametrize the interval $[0,1)$ in the product $W\times [0,1]$ such
that $\bdry H_t=W\times t=W_t$, and the parameter $t$ now takes values
in $[0, \infty)$. Finally, for any $t\ge 0$ we define $(H_t,V)$ to be
the compression body cut from $(H,V)$ by $W\times t=W_t$.
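For concreteness, one homeomorphism $h$ with the required properties (an illustration of ours; any such choice serves) is
\[
h(t)=\frac{1+t}{2}, \qquad t\in[0,1],
\]
for which $h(1)=1$ and $h(t)-t=(1-t)/2>0$ for every $t<1$.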
Since any automorphism $f\from (H,V)\to (H,V)$ of a compression body,
after a suitable isotopy, agrees within a collar $W \times I$ of
$W\subseteq \bdry H$ with a product homeomorphism, a further
``vertical" isotopy of $f$ within this collar gives:
\begin{lemma}
Every automorphism of a compression body is isotopic to an outward
expanding automorphism.
\end{lemma}
Henceforth, we shall always assume that automorphisms $f$ of
compression bodies have been isotoped such that they are outward
expanding.
Let ${\cal E}=\{E_i,\ i=1,\ldots, q\}$ be a collection of discs essential
in $(H_0,V)$, with $\bdry E_i\subseteq W_0$, cutting $H_0$ into a
product of the form $V\times I$, possibly together with one or more
balls. Such a collection of discs is called a {\it complete}
collection of discs. (When $V=\emptyset$, $\cal E$ cuts $H$ into one
or more balls.) We abuse notation by also using $\cal E$ to denote
the union of the discs in $\cal E$. We further abuse notation by
often regarding $\cal E$ as a collection of discs properly embedded in
$H$ rather than in $H_0$, using the obvious identification of $H_0$
with $H$. Thus, for example, we shall speak of $W$-parallel discs in
$\cal E$, meaning discs isotopic to discs in $W_0$. Corresponding to
the choice of a complete $\cal E$, there is a dual object $\Gamma$
consisting of the surface $V$ with a graph attached to $\intr (V)$ at
finitely many points, which are vertices of the graph. The edges
correspond to discs of $\cal E$; the vertices, except those on $V$,
correspond to the complementary balls; and the surface $V$ corresponds
to the complementary product. If $V=\emptyset$ and $H$ is a handlebody,
then $\Gamma$ is a graph.
Now $H_0$ can be regarded as a regular neighborhood of $\Gamma$, when
$\Gamma$ is embedded in $H_1$ naturally, with $V=V\times0$. By
isotopy of $f$ we can arrange that $f({\cal E})$ is transverse to the
edges of $\Gamma$ and meets $H_0=N(\Gamma)$ in discs, each isotopic to
a disc of $\cal E$. Any collection $\cal E$ of discs properly
embedded in $H_0$ (or $H$) with $\bdry{\cal E}\subseteq W_0$ (or
$\bdry{\cal E}\subseteq W$), not necessarily complete and not
necessarily containing only essential discs, is called {\it
admissible} if every component of $f({\cal E})\cap H_0$ is a disc
isotopic to a disc of $\cal E$. We have shown:
\begin{lemma}\label{AdmissibleExistsLemma}
After a suitable isotopy every outward expanding automorphism $f\from
(H, V) \to (H, V)$ admits a complete admissible collection ${\cal E}
\subseteq H_0$ as above, where every $E_i\in \cal E$ is a
compressing disc of $W$.
\end{lemma}
We shall refer to admissible collections $\cal E$ of discs $(E_i,
\bdry E_i)\embed (H,W)$ as {\it systems} of discs. Sometimes, we
shall retain the adjective ``admissible" for emphasis, speaking of
``admissible systems." A system may contain discs which are not
compressing discs of $W$. Also, even if a system contains these
$W$-parallel discs, the definition of completeness of the system
remains the same. We will say a system is {\it $W$-\-parallel} if
every disc in the system is $W$-parallel.
Let $P_j$ denote the holed disc $f(E_j)-H_0$. Let $m_{ij}$ denote the
number of parallel copies of $E_i$ in $f(E_j)$, and let $M = M({\cal
E})$ denote the matrix $(m_{ij})$, which will be called the {\it
incidence matrix} for $\cal E$ with respect to $f$. A system $\cal E$
is {\it irreducible} if the incidence matrix is irreducible. In terms
of the discs $E_i$, the system $\cal E$ is irreducible if for each
$i,j$ there exists a $k\ge 1$ with $f^k(E_j)\cap H_0$ containing at
least one disc isotopic to $E_i$. It is a standard fact that a matrix
$M$ with non-negative integer entries has an eigenvector $x$ with
non-negative entries and that the corresponding eigenvalue
$\lambda=\lambda({\cal E})$ satisfies $\lambda\ge 1$. If the matrix is
irreducible, the eigenvector is unique and its entries are positive.
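The Perron-Frobenius facts quoted here are easy to check numerically. The following sketch tests irreducibility and computes the growth rate $\lambda({\cal E})$ by power iteration; the matrix shown is a hypothetical incidence matrix chosen for illustration, not one computed from an actual disc system.

```python
import numpy as np

# Hypothetical incidence matrix M = (m_ij), where m_ij counts the
# parallel copies of E_i in f(E_j).
M = np.array([[1, 1],
              [1, 0]], dtype=float)
n = M.shape[0]

# Irreducibility: for each (i, j) some power M^k has a positive
# (i, j) entry; equivalently (I + M)^(n-1) is entrywise positive.
irreducible = bool(np.all(np.linalg.matrix_power(np.eye(n) + M, n - 1) > 0))

# Power iteration for the Perron-Frobenius eigenpair (lambda, x).
x = np.ones(n)
for _ in range(500):
    x = M @ x
    x /= x.sum()          # normalize so the entries stay bounded
lam = (M @ x)[0] / x[0]   # growth rate lambda(E)
```

For this matrix the iteration converges to the positive eigenvector and $\lambda$ is the golden ratio, consistent with the claim that $\lambda\ge 1$ with unique positive eigenvector in the irreducible case.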
It turns out that the lack of reducing surfaces for $f$ in $(H,V)$ is
related to the existence of irreducible complete systems. The
following are proven in \cite{UO:Autos}.
\begin{lemma}\label{SystemsReducingLemma}
Suppose $f\from (H,V)\to (H,V)$ is an automorphism of a compression
body, and suppose that there is an (admissible) system which is not
complete and not $W$-parallel. Then there is a reducing surface for
$f$.
\end{lemma}
\begin{proposition}\label{IrreducibleReducingProp}
If the automorphism $f\from (H,V)\to (H,V)$ is generic, then there is
a complete irreducible system $\cal E$ for $f$. Also, any
non-$W$-parallel complete system $\cal E$ has a complete irreducible
subsystem ${\cal E}'$ with no $W$-parallel discs. Further,
$\lambda({\cal E}')\le \lambda({\cal E})$, and $\lambda({\cal E}')<
\lambda({\cal E})$ if $\cal E$ contains $W$-parallel discs.
\end{proposition}
We always assume, henceforth, that $f$ is generic, so there are no
reducing surfaces. Assuming $\cal E$ is {\it any} (complete)
irreducible system of discs for $f$ in the compression body $(H,V)$,
we shall construct a branched surface $B=B({\cal E})$ in $\intr
(H)$. First we construct $B_1=B\cap H_1$: it is obtained from
$f({\cal E})$ by identifying all isotopic discs of $f({\cal E})\cap
H_0$. To complete the construction of $B$ we note that $f(B_1-\intr
(H_0))$ can be attached to $\bdry B_1$ to obtain $B_2$ and
inductively, $f^i(B_1-\intr (H_0))$ can be attached to $B_i$ to
construct a branched surface $B_{i+1}$. Alternatively, $B_i$ is
obtained by identifying all isotopic discs of $f^i({\cal E})\cap H_j$
successively for $j=i-1,\ldots,0$. Up to isotopy, $B_i\cap H_r=B_r,\
r<i$, so we define $B=\cup_iB_i$. The branched surface $B$ is a
non-compact branched surface with infinitely many sectors. (Sectors
are completions of components of the complement of the branch locus.)
Note that the branched surface $B$ does not have boundary on $W_i$;
this follows from the irreducibility of $\cal E$. If $\cal E$ is
merely admissible, the same construction works, but $B$ may have
boundary on $W_i,\ i\ge 0$.
If $x$ is an eigenvector corresponding to the irreducible system $\cal
E$, each component $x_i$ of the eigenvector $x$ can be regarded as a
weight on the disc $E_i\in \cal E$. The eigenvector $x$ now yields an
infinite weight function $w$ which assigns a positive weight to each
sector:
$$w(E_i)=x_i\text{ and }w(f^t(P_i))=x_i/\lambda^{t+1}\text{ for } t\ge
1.$$
Recall that $P_i$ is a planar surface, $P_i=f(E_i)-\intr (H_0)$. As is
well known, a weight vector defines a measured lamination $(\Lambda,
\mu)$ carried by $B$ if the entries of the weight vector satisfy the
{\it switch conditions}. This means that the weight on the sector in
$H_t-H_{t-1}$ adjacent to a branch circle in $W_t$ equals the sum of
weights on sectors in $H_{t+1}-H_t$ adjacent to the same branch
circle, with appropriate multiplicity if a sector abuts the branch
circle more than once. It is not difficult to check that our weight
function satisfies the switch conditions, using the fact that it is
obtained from an eigenvector for $M({\cal E})$.
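To sketch the verification at a typical branch circle: a circle in $W_t$ ($t\ge 2$) lying over $\bdry E_i$ separates the sector $f^{t-1}(P_i)$ in $H_t-H_{t-1}$ from the sectors $f^t(P_j)$ in $H_{t+1}-H_t$, the latter abutting with multiplicity $m_{ij}$. Since $Mx=\lambda x$,

```latex
\[
\sum_j m_{ij}\, w\bigl(f^{t}(P_j)\bigr)
  \;=\; \sum_j m_{ij}\,\frac{x_j}{\lambda^{t+1}}
  \;=\; \frac{(Mx)_i}{\lambda^{t+1}}
  \;=\; \frac{x_i}{\lambda^{t}}
  \;=\; w\bigl(f^{t-1}(P_i)\bigr),
\]
```

which is the switch condition at that circle; the branch circles at the lowest levels are handled in the same way.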
At this point, we have constructed a measured lamination
$(\Lambda,\mu)$ which is $f$-invariant up to isotopy and fully carried
by the branched surface $B$. Applying $f$, by construction we have
$f(\Lambda,\mu)=(\Lambda,\lambda\mu)$. We summarize the results of
our construction in the following statement, which emphasizes the fact
that the lamination depends on the choice of non-$W$-parallel system:
\begin{proposition}\label{InvtLamProp2}
Suppose $f:(H,V)\to (H,V)$ is a generic automorphism of a
$3$\-dimensional compression body. Given an irreducible system $\cal
E$ which is not $W$-parallel, there exists a lamination
$(\Lambda,\mu)$, carried by $B({\cal E})$, which is uniquely
determined, up to isotopy of $\Lambda$ and up to scalar multiplication
of $\mu$.
The lamination $\Lambda$ ``fills'' $H_0$, in the sense that each
component of the complement is either contractible or deformation
retracts to $V$. Also, $\Lambda\cup W$ is closed.
\end{proposition}
It will be important to choose the best possible systems $\cal E$ to
construct our laminations. With further work, it is possible to
construct an invariant lamination with good properties as described in
the following proposition. This is certainly not the last word on
constructing ``good'' invariant laminations; see \cite{LNC:Generic}.
\begin{proposition}\label{GoodLaminationProp}
Suppose $f\from (H,V) \to (H,V) $ is a generic automorphism of a
compression body. Then there is a system $\cal E$ such that there is
an associated 2-dimensional measured lamination $\Lambda\subseteq
\intr (H)$ as follows: It has a transverse measure $\mu$ such that,
up to isotopy, $f((\Lambda,\mu))=(\Lambda,\lambda \mu)$ for some
$\lambda> 1$. Further, the lamination has the following properties:
1) Each leaf $\ell$ of $\Lambda$ is an open $2$-dimensional disc.
2) The lamination $\Lambda$ fills $H_0$, in the sense that each
component of $H_0-\Lambda$ is contractible or deformation retracts to
$V$.
3) For each leaf $\ell$ of $\Lambda$, $\ell-\intr (H_0)$ is
incompressible in $H-(\intr (H_0)\cup \intr(V))$.
4) $\Lambda\cup (\bdry H-\intr (V))$ is closed in $H$.
\end{proposition}
Given a ``good" system $\cal E$, e.g. as in Proposition
\ref{GoodLaminationProp}, we now construct a dual 1-dimensional
invariant lamination. This was done in \cite{UO:Autos} for
automorphisms $f:H\to H$ where $H$ is a handlebody. We now construct
an invariant lamination in the case of an automorphism $f\from
(H,V)\to (H,V)$ where $(H,V)$ is a compression body.
\begin{thm}\label{CompressionBodyClassificationTheorem}
Suppose $f\from (H,V)\to (H,V)$ is a generic automorphism of a
compression body with invariant 2-dimensional measured lamination
$(\Lambda,\mu)$ satisfying the incompressibility property. There is an
abstract measured non-compact 1-dimensional lamination
$(\Omega,\nu)$, and a map $\omega\from \Omega\to H_0-V$ such that
$f(\omega(\Omega,\nu))=\omega(\Omega,\nu/\lambda)$. The map $\omega$
is an embedding on $\omega\inverse\big(N(\Lambda)\big)$ for some
neighborhood $N(\Lambda)$. The statement that
$f(\omega(\Omega,\nu))=\omega(\Omega,\nu/\lambda)$ should be
interpreted to mean that there is an isomorphism $h\from
(\Omega,\nu)\to (\Omega,\nu/\lambda)$ such that
$f\circ\omega=\omega\circ h$.
\end{thm}
\begin{proof}
As in \cite{UO:Autos} we define $H_n$, $\mathcal{E}_n$ (for
$n\in\mathbb{Z}$) to be $f^n(H_0)$ and $f^n({\cal E})$
respectively. We choose a two-dimensional invariant lamination
$(\Lambda, \mu)$ with the incompressibility property. It can be shown,
as in \cite{UO:Autos}, that $\Lambda$ can be put in Morse position
without centers with respect to a height function $t$ whose levels
$t\in \integers$ correspond to $\bdry_e H_t$. It can also be shown as
in \cite{UO:Autos} that for $n>t$, $f^n({\cal E})\cap H_t$ consists of
discs belonging to a system $\mathcal{E}_t$ in $H_t$. This implies
that for all real $t$, $(H_t,V)$ is a compression body homeomorphic to
$(H,V)$, and we can take a quotient of $(H_t,V)$ which gives
$\Gamma_t$, a graph attached to $V$, the quotient map being $q_t\from
H_t\to \Gamma_t$. More formally, let $H_t^\Lambda$ denote the
completion of the complement of $\Lambda$ in $H_t$. Points of the
graph in $\Gamma_t$ correspond to leaves of the 2-dimensional
lamination $\Lambda_t$ in $(H_t,V)$, or they correspond to balls in
$H_t^\Lambda$. Points of $V\subseteq \Gamma_t$ correspond to interval
fibers in the product structure for the product component of
$H_t^\Lambda$. We also clearly have maps $\omega_{st}\from
\Gamma_s\to \Gamma_t$ such that if $i_{st}:H_s\to H_t$ is the
inclusion, then $q_t\circ i_{st}=\omega_{st}\circ q_s$. Then the
inverse limit of the maps $\omega_{st}$, for integer values of $t=s+1$
say, yields a 1-dimensional lamination limiting on $V$. The
1-dimensional lamination is the desired lamination. More details of
the ideas in this proof can be found in the proof in \cite{UO:Autos}
of the special case when $V=\emptyset$.
\end{proof}
We next investigate the 1-dimensional invariant lamination $\Omega$
further. In \cite{UO:Autos} it is pointed out that (in the case of an
automorphism of a handlebody) $\Omega$ has, in general, much
self-linking. An issue which was not addressed is whether the leaves
of $\Omega$ might be ``knotted.'' Knotting sounds unlikely, and indeed
it does not occur, but it is not even clear a priori what it means for a
leaf to be knotted. To clarify this notion, we first make some
further definitions.
\begin{defn}
A {\it spotted ball} is a pair $(K,\mathcal{D})$ where $K$ is a ball
and $\mathcal{D}$ is a collection of discs (or {\em spots}) in
$\partial K$. A {\it spotted product} is a triple $(K,V,\mathcal{D})$
of the form $K=V\times I$ with $V=V\times 0$ and with $\mathcal{D}$ a
collection of discs in the interior of $V\times 1$. The spotted
product can also be regarded as a pair $(K,R)$, where $R=\bdry
K-\intr(V)-\intr(\mathcal{D})$. If $(K,\mathcal{D})$ is a spotted ball
we let the corresponding $V=\emptyset$. If $(K,V,\mathcal{D})$ is a
spotted ball or a connected spotted product (where $V=\emptyset$ if $K$ is a
ball), we say that it is a {\em spotted component}.
\end{defn}
Spotted balls occur naturally in our setting as follows: Suppose $H$
is a handlebody. Define $\widehat{H}_1$ to be $H_1$ cut open along
$\mathcal{E}_1$, or $H_1|\mathcal{E}_1$. We let
$\Gamma_t\cap\widehat{H}_1$ (or $H_t\cap\widehat{H}_1$) denote
$\Gamma_t|\mathcal{E}_1$ (or $H_t|\mathcal{E}_1$). The disc system
$\mathcal{E}_1$ determines a collection of spots
$\widehat{\mathcal{E}}_1\subseteq\partial\widehat{H}_1$ ($2$ spots for
each disc of $\mathcal{E}_1$). If $K$ is a component of
$\widehat{H}_1$ and $\mathcal{D}=K\cap\widehat{\mathcal{E}}_1$ then it
is clear that $(K,\mathcal{D})$ is a spotted ball. If we make similar
definitions for a compression body $(H,V)$, then we also obtain
spotted product components $K$ in $\widehat{H}_1$.
In general, for $t\leq 1$, the triple $(K_t,V_t,\mathcal{D}_t)$ will
represent a spotted component where $K_t$ is a component of
$H_t\cap\widehat{H}_1$ and
$\mathcal{D}_t=K_t\cap\widehat{\mathcal{E}}_1$. Usually a spotted ball
$(K_t,\mathcal{D}_t)$ is a ball-with-two-spots, corresponding to arc
components $L_t$ of $\Gamma_t\cap\widehat{H}_1$. The other spotted
balls have more spots, with $K_t$ corresponding to star components $L_t$ of
$\Gamma_t\cap\widehat{H}_1$. We denote $L_t\cap\partial K_t$ by
$\partial L_t$. It is clear that the spots of $\mathcal{D}_t$
correspond to points of $\partial L_t$ or to the edges of $L_t$. We
also note that $L_t\subseteq K_t$ is uniquely defined, up to isotopy
(rel $\mathcal{D}_t$) by the property that it is a star with a vertex
in each spot and that it is ``unknotted''. We say $L_t$ is {\em
unknotted} in $K_t$ if it can be isotoped in $K_t$ (rel $\partial K_t$) into
$\bdry K_t$.
When we are dealing with a compression body ($V_t\ne \emptyset$), we
also have component(s) of $L_t$ consisting of a component $X$ of $V$
with some edges attached at a single point of $X$. In fact, it is
often useful to consider the union of star graphs $\hat L_t$, which is
the closure of $L_t-V$ in $L_t$. These components of $L_t$
correspond to connected spotted products $K_t$ and the edges
correspond to spots. The natural equivalence classes for the $ L_t$'s
are isotopy classes (rel $\mathcal{D}_t$), and for the $\hat L_t$'s
isotopy classes (rel $\mathcal{D}_t\cup X$). Given $K_t$ corresponding
to a spotted product, there are many such equivalence classes, even if
restricted to ``unknotted'' $\hat L_t$'s. Here our $\hat L_t$ is
unknotted if it can be isotoped rel $\mathcal{D}_t$ into $V\times 1$.
We will often use ``handle terminology.'' $H_t$ is constructed from
the 0-handles $K_t$ (or spotted products), with 1-handles
corresponding to edges of $\Gamma_t$ attached to the spots.
\begin{defn}
If $\Omega$ is the 1-dimensional dual lamination associated to a
2-dimensional invariant lamination $\Lambda$ for a generic
automorphism $f\from (H,V)\to (H,V)$, which in turn is associated to a
complete system $\cal E$, then a leaf $\ell$ of $\Omega$ is {\it
unknotted} if every component of $\ell\cap\widehat H_1$, contained in
a component $K$, say, of $\widehat H_1$, is $\bdry$-parallel in $K$, or
isotopic to an arc embedded in $\bdry K$, whether $K$ is a spotted
ball or a spotted product. In a spotted product $(K,V,\mathcal{D})$,
we require that the arc be parallel to $R=\bdry
K-\intr(V)-\intr(\mathcal{D})$.
\end{defn}
\begin{thm}\label{UnknottedTheorem}
Let $f\from (H,V)\to (H,V)$ be a generic automorphism of a compression
body. The 1-dimensional dual lamination $\Omega$ associated to a
system $\mathcal{E}$ is unknotted in $(H,V)$.
\end{thm}
To prove the theorem, we prove some lemmas from which the theorem
follows almost immediately. We first make a few more definitions.
If a spotted ball $K_t$ has $k$ spots then we can embed a polygonal
disc $T_t$ with $2k$ sides in $R_t=\partial K_t -
\intr({\mathcal{D}}_t)$, see Figure \ref{Aut3HalfSurface}.
Alternate sides of $T_t$ are identified with embedded subarcs of each
component of $\mathcal{D}_t$ in $\partial K_t$. We let $P_t=\bdry
K_t -\intr(T_t)$. Then the pair $(P_t,\mathcal{D}_t)$ is a {\em
\half for $(K_t,\mathcal{D}_t)$}. From another point of view, $P_t$
is a disc embedded in $\bdry K_t$ containing $\mathcal{D}_t$ and with
$\bdry P_t$ meeting the boundary of each disc in $\mathcal{D}_t$ in
an arc. The reason we call $(P_t,\mathcal{D}_t)$ a spinal pair is
that there is a homotopy equivalence from $(P_t,\mathcal{D}_t)$ to
$(L_t,\text{closed edges of }L_t)$, which first collapses $K_t$ to
$P_t$, then replaces the discs of $\mathcal{D}_t$ by edges, and
$P_t-\mathcal{D}_t$ by a central vertex for the star graph $L_t$.
Thus the \spinal is something like a spine. There is of course a
similar homotopy equivalence between $(K_t,\mathcal{D}_t)$ and $L_t$.
\begin{figure}[ht]
\centering \scalebox{1.0}{\includegraphics{Aut3HalfSurface}}
\caption{\small Polygonal disc. The complement is $P_t$.}
\label{Aut3HalfSurface}
\end{figure}
If $(K_t,V_t,\mathcal{D}_t)$ is a spotted product, we let
$P_t=\bdry K_t-\intr(V_t)$ (which is just $V_t\times 1$ when $K_t$ is
given the product structure $K_t=V_t\times I$) and the {\it \half} for
$(K_t,V_t,\mathcal{D}_t)$ is $(P_t, \mathcal{D}_t)$. In the case of
spotted products, there is no choice for $P_t$. Again, there is an
obvious homotopy equivalence between the \half and $\Gamma_t$, taking
discs of $\mathcal{D}_t$ to closed edges.
Finally, it will be convenient later in the paper to have another
notion, used mostly in the case of a spotted ball
$(K_t,\mathcal{D}_t)$. A {\it subspinal pair} for
$(K_t,V_t,\mathcal{D}_t)$ is a pair $(\breve P_t,\breve{\mathcal{D}}_t)$
embedded in a spinal pair $(P_t,\mathcal{D}_t)$, such
that $\breve{\mathcal{D}}_t$ is (the union of) a subcollection of the
discs in $\mathcal{D}_t$, and such that $\breve P_t\embed P_t$ is an
embedded subdisc whose boundary meets each boundary of a disc in
$\breve{\mathcal{D}}_t$ in a single arc. Exceptionally, we also allow
a subspinal pair consisting of a single disc component $D$ of
$\mathcal{D}_t$. In this case the subspinal pair is $(D,D)$.
\begin{definition} Fix $s\leq t$ and consider a spotted component $(K_{s},\mathcal{D}_s)$
contained in a spotted component $(K_t,\mathcal{D}_t)$. We say that
$(K_{s},\mathcal{D}_s)$ is {\em $\bdry$-parallel} to
$(K_t,\mathcal{D}_t)$ if there exist \halfs $P_s$ and $P_t$ and an
isotopy of $(K_{s},\mathcal{D}_{s})$ in
$(K_t,\mathcal{D}_t)$ such that, after the isotopy,
$(P_{s},\mathcal{D}_{s})$ is contained in $(P_t,\mathcal{D}_{t})$.
Then we say the \spinal $(P_{s},\mathcal{D}_{s})$ is {\em parallel} to
the \spinal $(P_t,\mathcal{D}_{t})$. Similarly, we say a {\it
subspinal pair} $(\breve P_s,\breve{\mathcal{D}}_s)$ is {\em parallel}
to $(P_t,\mathcal{D}_{t})$ if there is an isotopy from the first pair
into the second. Finally, we also say that the graph $(L_s, \bdry
L_s)$ is {\em parallel} to $(P_t,\mathcal{D}_{t})$ if it can be
isotoped in $(K_t,\mathcal{D}_{t})$ to $(P_t,\mathcal{D}_{t})$.
We will need one more variation of the definition. In Section
\ref{MixedBodies} we will have a spotted product $K_t$ with closed
interior boundary and contained in a handlebody $K_t\cup H'$, with
$H'$ a handlebody; the interior boundary of $K_t$ is ``capped'' with a
handlebody $H'$. Then we speak of $(L_s, \bdry L_s)$,
$(P_{s},\mathcal{D}_{s})$, or $(\breve P_s,\breve{\mathcal{D}}_s)$
being {\it parallel in $K_t\cup H'$} to $(P_t,\mathcal{D}_{t})$.
\end{definition}
\begin{lemma} \label{GraphToSpinalLemma}
For $s<t\le 1$, suppose $K_t$ is a spotted component of $H_t\cap \widehat
H_1$; suppose $(P_t,\mathcal{D}_t)$ is a \spinal for $K_t$; and
suppose $K_s$ is a spotted component of $H_s\cap \widehat H_1$ with
$K_s\subseteq K_t$. (If $K_t$ is a spotted product, it might be
capped by $H'$ on its interior boundary.)
Then:
i) If $K_s$ is a spotted ball, and the star graph $L_s$ is parallel to
$(P_t,\mathcal{D}_{t})$ (in $K_t\cup H'$) then there exists a \spinal
$(P_{s},\mathcal{D}_{s})$ parallel to $(P_t,\mathcal{D}_t)$ (in
$K_t\cup H'$).
ii) If $K_s$ is a spotted ball, and the embedded star subgraph $\breve
L_s$ of $L_s$ with ends in discs of $\breve{\mathcal{D}}_s\subseteq
\mathcal{D}_s$ is parallel to $(P_t,\mathcal{D}_{t})$ in $K_t$ (in
$K_t\cup H'$) then there exists a subspinal pair $(\breve
P_s,\breve{\mathcal{D}}_s)$ parallel to $(P_t,\mathcal{D}_t)$ in $K_t$
(in $K_t\cup H'$).
iii) If $K_s$ is a spotted product, and the embedded star subgraph
$\breve L_s$ of $\hat L_s$ with ends in discs of
$\breve{\mathcal{D}}_s\subseteq \mathcal{D}_s$ is parallel to
$(P_t,\mathcal{D}_{t})$ in $K_t$ (in $K_t\cup H'$) then there exists a
subspinal pair $(\breve P_s,\breve{\mathcal{D}}_s)$ parallel to
$(P_t,\mathcal{D}_t)$ in $K_t$ (in $K_t\cup H'$). In particular, if
$\breve L_s=\hat L_s$, then we obtain a subspinal pair $(\hat
P_s,\mathcal{D}_s)$ including all spots of $K_s$.
iv) If $K_t$ is a spotted ball, a choice of isotopy of pairs of
$(L_t,\bdry L_t)$ in $(K_t,\mathcal{D}_t)$ to $\bdry K_t$ corresponds
to a choice of a \spinal $P_t$.
v) If $K_t$ is a connected spotted product, a choice of isotopy of
pairs of $(\hat L_t,\bdry \hat L_t)$ in $(K_t,\mathcal{D}_t)$ to
$(P_t,\mathcal{D}_t)$ corresponds to a choice of a subspinal pair
$(\hat P_t,\mathcal{D}_t)$, including all spots, for $K_t$.
\end{lemma}
\begin{proof}
i) The argument is the same regardless of whether $K_t$ is capped with $H'$
or not. Isotope $L_s$ to $L\subseteq P_t$ (possibly passing through
$H'$). Now note that a regular neighborhood $N(L)$ of $L$ in $K_t$ is
a spotted ball isotopic to $K_{s}$, the component of $H_s\cap
\widehat{H}_t$ corresponding to (and containing) $L_{s}$. We let
$P=N(L)\cap \bdry K_t$, then $(P,P\cap \mathcal{D}_t)$ is a spinal
pair for the spotted ball $(N(L),N(L)\cap \mathcal{D}_t)$. We can
use the inverse of the isotopy taking $L_{s}$ to $L$ to obtain an
isotopy taking $N(L)$ to $K_{s}$. This isotopy takes $(P,P\cap
\mathcal{D}_t)$ to a \spinal $(P_s,\mathcal{D}_s)$ in $K_{s}$. By
construction, $(K_{s},\mathcal{D}_{s})$ is parallel to
$(K_t,\mathcal{D}_{t})$ as desired and we have proved the isotopy of
$(L_s,\bdry L_s)$ to $(P_t,\mathcal{D}_t)$ determines a $P_s$ such
that $(P_s,\mathcal{D}_s)$ is isotopic to $(P_t, \mathcal{D}_t)$.
ii), iii): The proofs are very similar to the proof of i).
iv) If we consider $L_t$ as a core graph of $K_t$, an isotopy of
$(L_t,\bdry L_t)$ in $(K_t,\mathcal{D}_t)$ to $\bdry K_t$ gives a
graph $(L,\bdry L)\subseteq (\bdry K_t,\mathcal{D}_t)$. Again we let
$P=N(L)\cap \bdry K_t$ containing $\mathcal{D}_t$ to form a \spinal
$(P,\mathcal{D}_t)$ for the spotted ball $(N(L),\mathcal{D}_t)$.
$N(L)$ is the same as $K_t$ up to isotopy, and $N(L)\cap \bdry K_t$
gives a \spinal for $N(L)$, so we have a \spinal for $K_t$. It is
easy to recover the $\hat L_t$ and $L_t$ from the spinal pair.
v) The proof is similar to the proof of iv).
\end{proof}
The following lemma analyzes the ``events" which change the $L_t$'s
corresponding to spotted balls or products $K_t$.
\begin{figure}[ht]
\centering
\psfrag{L}{\fontsize{\figurefontsize}{12}$L$} \psfrag{H}{\fontsize{\figurefontsize}{12}$H$} \psfrag{t}{\fontsize{\figuresmallfontsize}{12}$t$} \psfrag{s}{\fontsize{\figuresmallfontsize}{12}$s$}
\scalebox{1.0}{\includegraphics{Aut3Splitting}} \caption{\small
Event in $H_t$.} \label{Aut3Splitting}
\end{figure}
\begin{lemma} \label{EventLemma}
Recall $K_t$ is a component of $H_t\cap \widehat H_1$ and $L_t$ is the corresponding
component of $\Gamma_t\cap \widehat H_1$. Suppose $s$ and $t$ are
consecutive regular values, in the sense that there is exactly one
critical value between them. Then one of the following holds (see
Figure \ref{Aut3Splitting}):
\begin{enumerate}
\item If the corresponding critical point is not contained in $K_t$ then
$L_t=\Gamma_t\cap K_t$ is the same as $L_s=\Gamma_s\cap K_t$, otherwise
\item $L_t$ is obtained from two components $L_s$ and
$L_s'$ by isotoping an edge of $L_s$ to an edge of $L_s'$ (in the
complement of the remainder of $\Gamma_s\cap K_t$) and identifying
the two edges, or
\item $L_t$ is obtained from a component $L_s$ of
$\Gamma_s\cap \widehat H_1$ by isotoping an edge of $L_s$ to another
edge of $L_s$ (in the complement of the remainder of $\Gamma_s\cap
K_t$) and identifying the two edges.
\end{enumerate}
Figure \ref{Aut3Splitting} shows the events in case $K_t$ is a spotted
ball.
\end{lemma}
\begin{proof} Recall that in the context of Theorem \ref{UnknottedTheorem} we
are assuming that the discs $\mathcal{E}_1$ are in Morse position
without centers with respect to the height function in
$H_1-\intr({H}_0)$. The lemma follows from the fact that there is
exactly one critical value between $s$ and $t$ for the Morse height
function on the system of discs in $H_1$.
\end{proof}
We classify critical values according to the three cases of Lemma
\ref{EventLemma}.
In the following statement ``consecutive" regular values are again
regular values separated by exactly one critical value.
\begin{lemma} \label{SpinalPairLemma} For $s<t\le 1$ consecutive regular values, suppose $K_t$ is a spotted component
of $H_t\cap \widehat H_1$ and suppose $K_s$ is a spotted component of
$H_s\cap \widehat H_1$ with $K_s\subseteq K_t$. Then for any choice of
\spinal $(P_{t},\mathcal{D}_{t})$ for $(K_t,\mathcal{D}_t)$ there is a
choice of \spinal $(P_{s},\mathcal{D}_{s})$ for $(K_s,\mathcal{D}_s)$
such that the \spinal $(P_{s},\mathcal{D}_{s})$ is parallel to the
\spinal $(P_{t},\mathcal{D}_{t})$, and hence $(K_s,\mathcal{D}_s)$ is
$\bdry$-parallel in $(K_t,\mathcal{D}_t)$.
\end{lemma}
\begin{proof}
We first consider the case that $K_t$ is a spotted ball, and we
choose a \spinal $(P_t, \mathcal{D}_t)$.
We are assuming that the discs $\mathcal{E}_1$ are in Morse position
without centers with respect to the height function in
$H_1-\intr({H}_0)$. We assume $s$ and $t$ are ``consecutive''
regular values, in the sense that there is exactly one critical value
between them, and that the corresponding critical point lies in $K_t$.
Lemma \ref{EventLemma} classifies the events corresponding to the
critical value: A component $L_{s}$ of $\Gamma_s\cap K_t$ is obtained
from a component $L_t$ of $\Gamma_t\cap K_t$ as described in the
lemma. In Case 1, the component $L_{s}$ coincides with the
corresponding $L_t$. In particular $L_{s}\subseteq L_t$ in this case.
In Case 2, where $L_{s}$ is isotopic to a subspace of $L_t$, which is
split, we can isotope it so that $L_{s}\subseteq L_t$. In Case 3,
$L_{s}$ is obtained from $L_t$ by splitting the edge $e$, and we can
isotope $L_{s}$ so that it is contained in $L_t$ away from a
neighborhood $N(e)\subseteq K_t$ of $e$.
But $L_t$ is clearly {\em parallel to $P_t$}, in the sense that there
exists an isotopy of $(L_t,\partial L_t)$ in $(K_t,\mathcal{D}_t)$
taking $(L_t,\bdry L_t)$ to $(P_t,\mathcal{D}_t)$. Therefore, in
Cases 1 and 2, when $L_{s}\subseteq L_t$, the parallelism of $L_t$
realizes the parallelism of $L_{s}$ and by Lemma
\ref{GraphToSpinalLemma}, we obtain a \spinal
$(P_{s},\mathcal{D}_{s})$ which is parallel to the \spinal
$(P_{t},\mathcal{D}_{t})$.
We deal with Case 3 as follows. We can regard the isotopy of $L_t$
into $P_t$ as a composition of two isotopies: first isotope a
neighborhood of $L_t$ in $K_t$ (containing $e$) into a neighborhood
$N(P_t)\simeq P_t\times I$. We may assume that $L_t$ is taken to a
horizontal slice $P_t\times\{x\}$. The isotopy of $L_t$ is completed
by vertical projection. Now we can assume $L_{s}\subseteq K_t$ is
contained in $L_t$ away from $N(e)$. After the first isotopy we then
have that $\big(L_{s}-N(e)\big)\subseteq P_t\times\{x\}$. But we can
isotope $L_{s}\cap N(e)$ (rel $\partial N(e)$) so that $L_{s}\subseteq
P_t\times\{x\}$. Vertical projection gives $L_{s}\subseteq P_t$.
Again, by Lemma \ref{GraphToSpinalLemma} we obtain a \spinal
$(P_{s},\mathcal{D}_{s})$ which is parallel to the \spinal
$(P_{t},\mathcal{D}_{t})$.
Now we consider the case that $K_t$ is a spotted product and $K_s$ is
a spotted ball. There is no choice for the \spinal $(P_t,
\mathcal{D}_t)$. Again, analysis of the cases of Lemma
\ref{EventLemma} shows that in each case $L_s$ can be isotoped to
$P_t$, so by Lemma \ref{GraphToSpinalLemma} we again get a \spinal
$(P_{s},\mathcal{D}_{s})$ which is parallel to $(P_t, \mathcal{D}_t)$.
Finally, if both $K_t$ and $K_s$ are spotted products, then there is
no choice for either spinal pair $(P_t, \mathcal{D}_t)$ or $(P_s,
\mathcal{D}_s)$. The surface $P_s$ is isotopic to $P_t$ and
$\mathcal{D}_s$ represents a subcollection of the discs of
$\mathcal{D}_t$. Hence $(P_s, \mathcal{D}_s)$ is automatically
parallel to $(P_t, \mathcal{D}_t)$.
\end{proof}
\begin{proposition}\label{ParallelLemma} For all $s<t\le 1$, for any component $K_t$ of $H_t\cap \widehat H_1$ and for any
component $K_s$ of $H_s\cap \widehat H_1$ with $K_s\subseteq K_t$, $K_s$
is $\bdry$-parallel in $K_t$.
\end{proposition}
\begin{proof} There is a sequence $t_i$, $1\le i\le n$ where
$t_1=s$, $t_n=t$ and with consecutive values $t_i$ separated by a
single critical value. By Lemma \ref{SpinalPairLemma}, given a
\spinal $(P_{t_n}, \mathcal{D}_{t_n})$ for $K_{t_n}$, there is a
parallel \spinal $(P_{t_{n-1}}, \mathcal{D}_{t_{n-1}})$ for
$K_{t_{n-1}}$. We use induction with decreasing $i$: if we have
found a spinal pair $(P_{t_{i}}, \mathcal{D}_{t_{i}})$ for $K_{t_{i}}$
which is parallel to $(P_{t_{i+1}}, \mathcal{D}_{t_{i+1}})$ for
$K_{t_{i+1}}$, then we can find a \spinal $(P_{t_{i-1}},
\mathcal{D}_{t_{i-1}})$ for $K_{t_{i-1}}$ which is parallel to the
pair $(P_{t_{i}}, \mathcal{D}_{t_{i}})$ for $K_{t_{i}}$. This then proves
we have a sequence of parallel \spinals. It is easy to see that
parallelism for \spinals is transitive, whence we obtain the statement
of the proposition.
\end{proof}
\begin{proof}[Proof of Theorem \ref{UnknottedTheorem}] For a leaf $\ell$ of $\Omega$, every component of $\ell
\intersect \widehat H_1$ is the core of a spotted ball $K_s$ with two
spots, for some $s$. $K_s$ is contained in some component $K$ of
$\widehat H_1$. Letting $t=1$ in the previous proposition, we see that $K_s$ is
$\bdry$-parallel in $K$, which implies the arc of $\ell$ in $\widehat
H_1$ is $\bdry$-parallel, hence unknotted.
\end{proof}
\section{Automorphisms of sphere bodies, the Train Track Theorem}\label{TrainTrackSection}
Our goal in this section is to prove Theorem \ref{TrainTrackTheorem}.
We recall from the Introduction that $g$ is {\em train track generic}
if it is generic on $H_1$ and the corresponding invariant
2-dimensional lamination determines, through a quotient map $H\to G$
on a graph $G$, a homotopy equivalence which is a train track map.
Our strategy for proving Theorem \ref{TrainTrackTheorem} is similar
to that of \cite{BH:Tracks} for improving homotopy equivalences of
graphs. Given an automorphism $f\from M\to M$ we first isotope it so
that it preserves the canonical splitting $M=H\cup H'$ (see Section
\ref{Splitting} and Theorem \ref{CanonicalHeegaardTheorem}). Let
$g=f|_{H}\colon H\to H$. By Theorem \ref{HandlebodyClassificationThm}
$g$ is either periodic, reducible or generic. If it is periodic or
reducible the proof is done, so assume that it is generic. We
consider the corresponding 2-dimensional lamination and homotopy
equivalence of a graph. If it is a train-track generic map, the proof
is also done, so assume otherwise. This means that a power of the
induced homotopy equivalence on the graph admits ``back tracking'', in
the sense of \cite{BH:Tracks}. In that context, there is a sequence of
moves that simplify the homotopy equivalence. Similarly in our
setting there are analogous moves that realize this
simplification. The result of this simplification is an automorphism
$g'$ of $H$ which is either periodic, reducible or generic. In the
last case, the growth rate of $g'$ is smaller than that of
$g$. Proceeding inductively, this process yields the proof of the
theorem. In general $g$ and $g'$, which are automorphisms of $H$, will
not be isotopic. The key here is that they are both restrictions of
automorphisms $f$, $f'$ of $M$ to the handlebody $H$. These $f$ and
$f'$ will be isotopic (through an isotopy that does not leave
invariant the splitting, see e.g. Example \ref{NonUniqueExample}).
In the light of the sketch above, it is clear that Theorem
\ref{TrainTrackTheorem} will follow from the result below.
\begin{thm}\label{BackTrackTheorem}
Let $f\colon M\to M$ be an automorphism preserving the canonical
splitting $M=H\cup H'$. Assume that $g=f|_{H}$ is generic and not
train track. Then $f$ is isotopic to an $f'$ preserving the splitting
such that $g'=f'|_{H}$ is either periodic, reducible or generic with
growth rate smaller than $g$.
\end{thm}
There are two technical ingredients in proving Theorem
\ref{BackTrackTheorem}. The first is to prove the existence of
``half-discs'', which are geometric objects that represent some of the
homotopy operations performed in the algorithm of \cite{BH:Tracks}
(these operations are ``folding'' and ``pulling-tight''). In special
situations, when the half-discs are ``unholed,'' we can use them to
realize these homotopies by isotopies. These isotopies are described
in \cite{UO:Autos} (a ``down-move'' corresponds to a ``fold'' and a
``diversion'' to a ``pulling-tight''). We can then proceed to the next
step of the algorithm remaining in the same isotopy class, simplifying
the automorphism by either reducing the growth rate or turning it into
a periodic or reducible automorphism.
The second ingredient is an operation called ``unlinking''; the aim is
to isotope $f$ so that we can obtain these special unholed half-discs
from general ones. As noted above, this makes it possible to proceed
with the algorithm. An unlinking preserves the canonical splitting
but, unlike the other moves, it is an isotopy which does not leave
invariant the canonical splitting. In the unlinking process we will
decompose the given half-disc into ``rectangles'' and a ``smaller''
half-disc.
We have sketched the proof of Theorem \ref{BackTrackTheorem} (which
implies Theorem \ref{TrainTrackTheorem}). We now turn to details. We
fix $f\colon M\to M$ preserving the splitting $M=H\cup H'$ and
consider its restriction $g\colon H\to H$. We assume that $g$ is
generic. Now consider the construction of the invariant laminations,
determining $H_t$, $\mathcal{E}_t$ and $\Gamma_t$ for $t\in
\mathbb{R}$, as in Section \ref{Laminations}. We also use the notions
of spotted balls as in Section \ref{Laminations}, and we work in the
space $\widehat{H}_1$, i.e., $H_1|\mathcal{E}_1$ as before. In
general, for $t\leq 1$, the pair $(K_t,\mathcal{D}_t)$ will represent
a spotted ball where $K_t$ is a component of $H_t\cap\widehat{H}_1$
and $\mathcal{D}_t=K_t\cap\widehat{\mathcal{E}}_1$.
\begin{defns}\label{HalfDisc}
A {\em half-disc} is a triple $(\Delta,\alpha,\beta)$ where $\Delta$
is a disc and $\alpha$, $\beta\subseteq\partial\Delta$ are
complementary closed arcs. We may abuse notation and say that $\Delta$
is a half-disc.
Let $s<t$. A {\em half-disc for $H_s$ in $H_t$} is a half-disc
$(\Delta,\alpha,\beta)$ satisfying:
\begin{enumerate}[\upshape (i)]
\item $\Delta\subseteq K_t$ is embedded for some $K_t$,
\item $\beta$ is contained in a spot $D_t$ of $K_t$,
\item $\alpha\subseteq\partial K_s$ for some $K_s\subseteq K_t$ and
$\Delta\cap K_s=\alpha$,
\item $\alpha$ {\em connects} (i.e., intersects the boundaries of) two
distinct spots of $(K_s,\mathcal{D}_s)$.
\end{enumerate}
We refer to $\alpha$, the arc in $\partial K_s$, as the {\em upper
boundary of the half-disc for $H_s$ in $H_t$, $s<t$}, denoted by
$\partial_u\Delta$, and $\beta$ as its {\em lower boundary}, denoted
by $\partial_l\Delta$.
Similarly, a {\em rectangle} is a triple $(R,\gamma,\gamma')$ where
$R$ is a disc and $\gamma$, $\gamma'\subseteq\partial R$ are disjoint
closed arcs. We may abuse notation and say that $R$ is a rectangle.
Let $s<t$. A {\em rectangle for $H_s$ in $H_t$} is a rectangle
$(R_s,\gamma_s,\gamma_t)$ satisfying:
\begin{enumerate} [\upshape (a)]
\item $R_s\subseteq K_t$ is embedded for some $K_t$,
\item $\gamma_s\subseteq\partial K_s$ for some $K_s\subseteq K_t$
and $R_s\cap K_s=\gamma_s$,
\item $\gamma_s$ connects distinct spots of $(K_s,\mathcal{D}_s)$,
\item $\gamma_t\subseteq\partial K_t-\intr({\mathcal{D}}_t)$,
\item $\partial R_s-(\gamma_s\cup\gamma_t)\subseteq\mathcal{D}_t$.
\end{enumerate}
\end{defns}
We note that a half-disc $\Delta$ (for $H_s$) may be {\em holed}, in
the sense that $H_s$ may intersect the interior of $\Delta$. If this
intersection is essential then $\Delta$ also intersects $H_0$. We
define {\it holed rectangles} similarly.
\begin{defn}
Let $0\leq s<t\leq 1$ and let $\Delta$ be a half-disc for $H_s$ in $H_t$.
Suppose that $s=t_0<t_1<\cdots<t_k=t$ is a sequence of regular
values. We say that $\Delta$ is {\em standard with respect to
$\{\,t_i\,\}$} if there is $0\leq m<k$ such that the following
holds. The half-disc $\Delta$ is the union of rectangles $R_{t_i}$,
$s\leq t_i\leq t_{m-1}$, and a half-disc $\Delta_{t_m}$ (see
Figure~\ref{Aut3StandardDisc}) with the properties that:
\begin{enumerate}
\item each $R_{t_i}$, $s\leq t_i< t_{m}$, is a rectangle for
$H_{t_i}$ in $H_{t_{i+1}}$,
\item the holes $\intr({R}_{t_i})\cap H_{t_i}$ are essential
discs in $H_{t_i}$,
\item $\Delta_{t_m}$ is an unholed half-disc for $H_{t_m}$ in
$H_{t_{m+1}}$.
\end{enumerate}
\begin{figure}[ht]
\centering
\psfrag{R}{\fontsize{\figurefontsize}{12}${\hskip-1.7pt}\raise0.8pt\hbox{$R$}$}
\psfrag{t}{\fontsize{\figuresmallfontsize}{12}$t$}\psfrag{m}{\fontsize{\figuresmallerfontsize}{12}$m$}
\psfrag{i}{\fontsize{\figuresmallerfontsize}{12}${\hskip-0.7pt}i$}
\psfrag{g}{\fontsize{\figurefontsize}{12}$\gamma$}\psfrag{D}{\fontsize{\figurefontsize}{12}$\Delta$}
\scalebox{1.0}{\includegraphics{Aut3StandardDisc}}
\caption{\small Standard half disc.} \label{Aut3StandardDisc}
\end{figure}
\end{defn}
We want to prove the following:
\begin{proposition}\label{P:half-disc}
Let $0=t_0<\dots<t_k=1$ be a complete sequence of regular values. If a
component of $H_0\cap\widehat{H}_1$ intersects a disc of
$\widehat{\mathcal{E}}_1$ in two distinct components $D_0$, $D_1$,
then there exists a half-disc $\Delta$ for $H_0$ connecting $D_0$ and
$D_1$ which is standard with respect to $\{\,t_i\,\}$.
\end{proposition}
To prove the proposition, we need:
\begin{lemma}\label{L:half-disc}
Suppose that there exist \halfs\ $P_{t_i}$ such that
$(K_{t_i},P_{t_i})$ is parallel to $(K_{t_{i+1}},P_{t_{i+1}})$. If a
component $K_0$ intersects a disc of $\mathcal{D}_1$ in two distinct
components $D_0$, $D_1$, then there exists a half-disc for $H_0$
connecting $D_0$ and $D_1$ which is standard with respect to
$\{\,t_i\,\}$.
\end{lemma}
\begin{proof}
Since $P_{t_0}=P_{0}$ is a \half, it intersects both $D_0$ and $D_1$.
Let $\gamma_{t_0}\subseteq P_{t_0}$ be an arc connecting $D_0$ to
$D_1$.
For any $i$, let $\gamma_{t_i}\subseteq P_{t_i}$ be an embedded arc
with endpoints in distinct spots of $K_{t_i}$. Consider an isotopy
realizing the parallelism between $(K_{t_i},P_{t_i})$ and
$(K_{t_{i+1}},P_{t_{i+1}})$, taking $P_{t_i}$ to a subsurface of
$P_{t_{i+1}}$. The arc $\gamma_{t_i}$ is taken to an arc
$\gamma_{t_{i+1}}\subseteq P_{t_{i+1}}$. It is clear that the isotopy
yields, via the Loop Theorem, a rectangle
$(R_{t_{i}},\gamma_{t_{i}},\gamma_{t_{i+1}})$ for $H_{t_{i}}$ in
$H_{t_{i+1}}$. Here we note that indeed $R_{t_i}\cap
K_{t_i}=\gamma_{t_i}$, as needed. It can happen, though, that
$H_{t_i}$ intersects $R_{t_i}$ in other components. In this case there
exists another $K_{t_i}'\neq K_{t_i}$ contained in
$K_{t_{i+1}}$. Consider then the graph $L_{t_i}'$ corresponding to
$K_{t_i}'$ and perturb $R_{t_i}$ so that $L_{t_i}'\cap R_{t_i}$ is
transverse. Now make $K_{t_i}'\cap R_{t_i}$ consist of essential discs
(in $H_{t_i}$).
From $\gamma_{t_0}$ defined in the beginning of the proof we use the
process described above to proceed inductively, obtaining rectangles
$(R_{t_{i}},\gamma_{t_{i}},\gamma_{t_{i+1}})$. We do that for as long
as $\gamma_{t_i}$ has endpoints in distinct spots of $K_{t_{i+1}}$
(and hence in distinct spots of $K_{t_i}$).
It is clear that eventually $\gamma_{t_{i}}$ will have both endpoints
in the same spot of the corresponding $K_{t_{i+1}}$. To see this
simply note that all $\gamma_{t_j}$'s are parallel (as pairs
$(\gamma_{t_j},\partial\gamma_{t_j})$ in $(K_1,\mathcal{D}_1)$) and
hence have endpoints in the same spot of $K_{1}$ by hypothesis.
So let $\gamma_{t_{m}}$ be the first such arc which has both endpoints
in the same spot of $K_{t_{m+1}}$, say $D$. In this case we still
build $\gamma_{t_{m+1}}\subseteq P_{t_{m+1}}$ through parallelism. Now
consider $\beta\subseteq (\partial D\cap\partial P_{t_{m+1}})$ with
$\partial\beta=\partial\gamma_{t_{m+1}}$. Then
$\beta\cup\gamma_{t_{m+1}}$ bounds a disc $D'\subseteq
P_{t_{m+1}}$. As before, the parallelism between $\gamma_{t_m}$ and
$\gamma_{t_{m+1}}$ defines a rectangle
$(R_{t_m},\gamma_{t_m},\gamma_{t_{m+1}})$. We consider
$\Delta_{t_m}=R_{t_m}\cup D'$ and push $(\Delta_{t_m},\beta)$ into
$(K_{t_{m+1}},D)$ as a pair. Now $(\Delta_{t_m},\gamma_{t_m},\beta)$
is a half-disc for $H_{t_m}$ in $H_{t_{m+1}}$.
From the construction it is clear that if $R_{t_m}$ is unholed then so
is $\Delta_{t_m}$. We claim that this is the case. Indeed, recall that
$\{\,t_i\,\}$ is complete and that $\gamma_{t_{m+1}}$ is the first
$\gamma_{t_i}$ to have both endpoints in the same spot of
$K_{t_{i+1}}$. Therefore $K_{t_m}$ may be regarded as obtained from
$K_{t_{m+1}}$ by the event pictured in Figure \ref{Aut3Splitting}, Case
3. In particular, there is no other $K'_{t_m}\neq K_{t_m} $ in
$K_{t_{m+1}}$. Therefore $R_{t_m}$ is unholed, so the same holds for
$\Delta_{t_m}$.
It is now easy to see that $\Delta=\big(\,\bigcup_{t_0\leq t_i\leq
t_{m-1}} R_{t_i}\,\big)\cup\Delta_{t_m}$ 1) is a half-disc for
$H_0$, 2) connects $D_0$ and $D_1$, and 3) is standard with respect to
$\{\,t_i\,\}$.
\end{proof}
\begin{remark}\label{UnholedHalf-Disc}
Note from the construction of $\Delta$ above that the rectangles
$R_{t_i}$ are holed only if there are two distinct $K_{t_i}$,
$K_{t_i}'$ contained in $K_{t_{i+1}}$, as pictured in Figure
\ref{Aut3Splitting}, Case 2.
\end{remark}
\begin{proof}[Proof of Proposition \ref{P:half-disc}]
This follows directly from the result above, Theorem \ref{UnknottedTheorem}
and Lemma \ref{ParallelLemma}.
\end{proof}
We wish to use a standard half-disc to obtain a simplified
representative of the automorphism.
Let $\Delta$ be a half-disc for $H_0$, with upper boundary on some
$K_0$. If its interior did not intersect $H_0$, the half-disc $\Delta$
could be used to drag $H_0$ so it intersects ${\cal E}_1$ in fewer
discs, but recall that $\Delta$ may be holed. A solution would be to
isotope $1$-handles of $H_0$, to avoid $\Delta$. One might worry that
the handles that make holes in $\Delta$ may be ``linked'' with
$K_0$. More precisely, it may be impossible to isotope the handles of
$H_0$ in $H$ (leaving $K_0$ fixed) in order to avoid holes in
$\Delta$. We emphasize that the word ``linked'' is abused here: the
property it describes is not intrinsic to the $K_0$'s in $K_1$, but
rather relative to the half-disc $\Delta$. In any case, these
``essential links'' do appear in the setting of handlebodies (see
\cite{UO:Autos,LNC:Generic, LNC:Tightness}). But if we allow the
isotopies to run through the whole manifold $M$, not just $H$, the
unlinking is always possible. The idea is to use isotopies of the kind
described in Example \ref{NonUniqueExample}.
\hop
More specifically, we shall describe an isotopy of the type in Example
\ref{NonUniqueExample} and how it can be used to unlink. We call an
isotopy of this type {\em an unlinking move} or just an {\em
unlinking}.
At this point we do not worry about the automorphism $f\colon M\to
M$. Although $f$ is the object of interest, we will intentionally
disregard it initially, thinking instead of the various $H_t\subseteq
H$, disc systems and sphere systems, as geometric objects embedded in
$M$. We will bring $f$ back into the picture later in the argument.
As usual, let $K$ be a component of $\widehat{H}_1$. Let $0\leq s\leq
1$ and consider a component $K_s$ of $H_s\cap K$. Recall that a spot
$D_s$ of $(K_s,\mathcal{D}_s)$ corresponds to a disc $E_s\subseteq
H_s$. Such a disc extends to an essential sphere $S_s\embed M$ with
the following properties: 1) $S_s\cap H_s=E_s$ and 2) for $s<t\leq 1$,
$S_s\cap H_t$ consists of a single disc. We say that such a sphere is
{\em dual to $H_s$}, or just a {\em dual sphere} in case the value of
$s$ is clear.
The unlinking move will be performed along a rectangle (coming from a
standard half-disc) and a sphere. Let $K$ be a component of
$\widehat{H}_1$. Fix $0\leq s<t\leq 1$, with exactly one critical
point between $s$ and $t$ as usual. Consider a component $K_s$ of
$H_s\cap K$ and the component $K_t$ of $H_t\cap K$ containing
$K_s$. In this situation, most spots of $K_t$ contain exactly one spot of
$K_s$, see Figure \ref{Aut3Splitting}. We shall take advantage of this
fact. Consider a rectangle $(R_s,\gamma_s,\gamma_t)$ for $H_s$ in
$H_t$ with $R_s\subseteq K_t$. Let $K_s$ be the component that
contains $\gamma_s\subseteq\partial R_s$. Assume that $\gamma_s$
connects distinct spots of $K_s$ (which is the case if the rectangle
comes from a standard half-disc). Finally, since most spots of $K_t$
contain exactly one spot of $K_s$, it is reasonable also to suppose
that at least one of the two sides of $R_s$ in a spot of $K_t$ lies in
a spot containing only one spot of $K_s$. If this is the case, we can
assume that the dual sphere $S_s$ containing $D_s$ has the property
that $S_s\cap K_t$ is a spot of $K_t$. In other words, we are assuming
that $S_s$ is also dual to $H_t$, which is a strong restriction. Thus,
$S_s\cap R_s\subseteq\partial R_s$, a key property we need for the
unlinking move.
Consider a component $C$ of $H_s\cap R_s$ distinct from
$\gamma_s$. Assume that $C$ is an essential disc in $H_s$, which is
the case if the rectangle comes from a half-disc in standard
position. Let $K'_s$ be the component containing $C$. The move that we
will describe isotopes $K'_s$ along $R_s$ and $S_s$, see Figure
\ref{Aut3Move}. More precisely, consider an embedded path
$\sigma\subseteq R_s-\intr({H}_s)$ connecting $C$ to $D_s$. We first
isotope $C$ along $\sigma$ so it is close to $D_s\subseteq S_s$. This
move changes $K'_s$ in a neighborhood $N\subseteq K'_s$ of $C$. We
note that the condition imposed on $S_s$ that $S_s\cap K_t$ is a spot
of $K_t$ implies that $\sigma\cap S_s=\sigma\cap
D_s\subseteq\partial\sigma$. Hence the move described does not
introduce intersections of $K'_s$ with $S_s$. So $S_s$ is preserved
as a dual sphere and the disc $\check D_s=S_s-\intr({D}_s)$ does not
intersect $H_s$ (except at $\partial\check D_s=\partial D_s$).
We now regard $N$ as the neighborhood of an arc $\alpha$ transverse to
$R_s$ and close to an arc $\alpha'\subseteq \partial D_s$ (this
$\alpha'$ also intersecting $R_s$). Let $\beta'\subseteq\partial D_s$
be the closed complement of $\alpha'$.
\begin{figure}[ht]
\centering
\psfrag{R}{\fontsize{\figurefontsize}{12}$R$}\psfrag{C}{\fontsize{\figurefontsize}{12}$C$}
\psfrag{H }{\fontsize{\figurefontsize}{12}$H$} \psfrag{S }{\fontsize{\figurefontsize}{12}${\hskip2pt}S$}
\psfrag{t}{\fontsize{\figuresmallfontsize}{12}${\hskip2pt}t$}\psfrag{s}{\fontsize{\figuresmallfontsize}{12}$s$}
\psfrag{m}{\fontsize{\figuresmallerfontsize}{12}$m$}\psfrag{i}{\fontsize{\figuresmallerfontsize}{12}$i$}
\psfrag{g}{\fontsize{\figurefontsize}{12}$\gamma$}\psfrag{D}{\fontsize{\figurefontsize}{12}$D$}
\psfrag{N}{\fontsize{\figurefontsize}{12}${\hskip-3pt}N$}\psfrag{N'}{\fontsize{\figurefontsize}{12}$N'$}
\scalebox{1.0}{\includegraphics{Aut3Move}} \caption{\small
Unlinking move.} \label{Aut3Move}
\end{figure}
Now recall that the disc $\check{D}_s$ is disjoint from $H_s$. We can
push the arc $\alpha$ close to $\alpha'$ and isotope it along
$\check{D}_s$ to an arc $\beta\subseteq K_t-\intr({H}_s)$ close to
$\beta'$. More precisely, consider an arc $\beta\subseteq
\intr({K}_t)-{H}_s$ such that 1)
$\alpha\cap\beta=\partial\alpha=\partial\beta$ and that 2) the circle
$\alpha\cup\beta$ is parallel in $K_t-\intr({H}_s)$ to $\partial
D_s=\alpha'\cup\beta'$. Then $\alpha$ is isotopic to $\beta$ (along
the disc $\check{D}_s$). If $N'$ is a neighborhood of $\beta$ we can
isotope $N$ to $N'$. But $\beta\cap R_s=\emptyset$, so also $N'\cap
R_s=\emptyset$. Clearly, this operation does not change $H_s$ away
from $N$; therefore it reduces $|H_s\cap R_s|$. We call
this move an {\em unlinking along $R_s$ and $S_s$}. We also say that
this unlinking is {\em performed in $H_t$}. We emphasize that $H_t$
remains unchanged and that the resulting $H_s$ still is contained in
$H_t$.
The following summarizes properties of an unlinking that are either
clear or were already mentioned above.
\begin{lemma}\label{L:preserve}
Consider $H_s'$ obtained from $H_s$ by an unlinking along $R_s$ and
$S_s$ realized in $H_t$. Then:
\begin{enumerate}[\upshape (i)]
\item $|H_s'\cap R_s|<|H_s\cap R_s|$,
\item $H_t\subseteq H_1$ remains unchanged and $H_s'\subseteq H_t$
is isotopic to $H_s$ in $M$, and
\item the spots ${D}_s$ of $K_s$ also remain unchanged, therefore
$H_s'\cap\mathcal{E}_1=H_s\cap\mathcal{E}_1$.
\end{enumerate}
\end{lemma}
\begin{proposition}\label{P:unlinking}
Let $0=t_0<t_1<\dots<t_{k-1}<t_k=1$ be a complete increasing sequence
of regular values. Suppose that $\Delta$ is a half-disc for
$H_0\subseteq H_1$, standard with respect to $\{\,t_i\,\}$. Then
there exists a sequence of unlinkings taking $H_0$ to $H_0'\subseteq
H_1$ for which $\Delta$ is an unholed half-disc. Moreover
$H_0'\cap\mathcal{E}_1=H_0\cap\mathcal{E}_1$.
\end{proposition}
\begin{proof}
Let $(R_{t_i},\gamma_{t_i}, \gamma_{t_{i+1}})$, $t_0\leq t_i\leq
t_{m-1}$ and $\Delta_{t_m}$ be the rectangles and half-disc contained
in $\Delta$ determined by $\{\,t_i\,\}$. For each of the rectangles
$R_{t_i}$ we will perform a sequence of unlinkings along $R_{t_i}$ and
a carefully chosen sphere $S_{t_i}$ attached to one side of
$R_{t_i}$. The effect will be to unlink $K_{t_i}$ in the $K_{t_{i+1}}$
containing $R_{t_i}$. The unlinkings will be performed successively
using induction on $i$, this time with $i$ increasing. At the end of
the inductive process, all rectangles will be unholed, leading to an
unholed $\Delta$ (recall that $\Delta_{t_m}$ is unholed from the start
and this property will be preserved).
As we have mentioned before, an important part of the argument will be
aimed at the following. Each $R_{t_i}$ is contained in a $K_{t_{i+1}}$
and intersects a $K_{t_i}$. It is necessary to find a spot $D_{t_i}$
of $K_{t_i}$ which 1) intersects $R_{t_i}$ and 2) extends to a dual
sphere $S_{t_i}$ with the property that $S_{t_i}\cap K_{t_{i+1}}=
D_{t_{i+1}}$ for some spot $D_{t_{i+1}}$ of $K_{t_{i+1}}$ (see Lemma
\ref{L:spots} below). This means that one side of $R_{t_i}$ intersects
a spot of $K_{t_{i+1}}$ containing just one spot of $K_{t_i}$. Recall
that this property is necessary to perform an unlinking.
Let $K$ be the component of $\widehat{H}_1$ that contains $\Delta$ and
let $K_0$ be the component of $H_0\cap K$ that contains the upper
boundary $\alpha$ of $\Delta$. We assume that $\alpha$ intersects two
distinct $1$-handles of $H_0$ and thus a $0$-handle. If it intersects
just one $1$-handle (hence no $0$-handle) then we replace a disc of
$\mathcal{E}_0$, the one corresponding to this $1$-handle, by two
parallel copies, which has the effect of introducing a $0$-handle in
$K_0$. This amounts to choosing a new system ${\cal E}$, and there
are choices in how $f({\cal E}_0)={\cal E}_1$ intersects $H_0$ in
discs parallel to discs of ${\cal E}_0$, i.e. choices for the
incidence matrix, see Section \ref{Laminations}.
Our modification ensures that each $K_{t_i}\supseteq K_0$, $t_i>0$,
corresponds to a $0$-handle, with $1$-handles giving distinct spots on
$\bdry K_{t_i}$. Therefore each rectangle $R_{t_i}$ has the following
property: the spots of $K_{t_i}$ that $\gamma_{t_i}\subseteq\partial
R_{t_i}$ connects correspond to distinct discs of
$\mathcal{E}_{t_i}$. That property will be important in finding the
sphere $S_{t_i}$ intersecting $K_{t_{i+1}}$ at a spot.
Consider two consecutive terms $s<t$ of the sequence (so there is
precisely one singular value between them). We need the following.
\begin{lemma}\label{L:spots}
A holed rectangle $R_s$ intersects a spot of $K_s$ with the property
that the corresponding dual sphere intersects $K_t$ in a single spot.
\end{lemma}
\begin{proof}
It clearly suffices to show that $R_s$ intersects a spot $D_s$ with
the following property. If $D_t$ is the spot of $K_t$ containing $D_s$
then $K_s\cap D_t=D_s$.
Recall that we are assuming that no disc of $\mathcal{E}_1\subseteq
H_1$ is represented twice as a spot of a single component $K_1$ of
$\widehat{H}_1$.
There are three cases. The case analysis is exactly the same as in
the proof of Lemma \ref{ParallelLemma}, pictured in Figure
\ref{Aut3Splitting}. The simplest and most common is Case 1, where a
single component $K_s$ of $H_s\cap K$ is contained in $K_t$ and with
the same isotopy type (in $K$) as $K_t$. Another possibility is Case
2: there are two distinct $K_s$'s in $K_t$. In Case 3, there is a
single $K_s$ in $K_t$ but $K_s$ is not isotopic to $K_t$. In Cases 1
and 3, there is only one $K_s$ in $K_t$, so in fact there is no real
need for unlinking; there may be a rectangle, but there is no danger
that it is holed by a $K'_s$ in $K_t$, see Remark
\ref{UnholedHalf-Disc}. Therefore we only need to deal with Case 2.
In Case 2 there are two distinct components of $H_s\cap K$ contained
in $K_t$: $K_s$ and $K'_s$. Here one spot $D_t$ of $K_t$ contains
precisely one spot of each of $K_s$, $K'_s$. Denote these by $D_s$
and $D'_s$ respectively. The other spots of $K_t$ contain a single
spot, of either $K_s$ or $K'_s$. Now all spots of $K_s$, $K'_s$ other
than $D_s$, $D'_s$ extend to distinct spots of $K_t$. But each of
$K_s$, $K_s'$ contains just one of these exceptional spots. Since
$\gamma_s$ connects two distinct spots, at least one of them has
the desired property. The illustration of Case 2 in Figure
\ref{Aut3Splitting} is somewhat misleading, since it is not easy to
picture a holed rectangle there. Figure \ref{Aut3Linked} is more
realistic. It shows a rectangle $R_s$ essentially intersected by
$K_s'$.
\begin{figure}[ht]
\centering
\psfrag{K}{\fontsize{\figurefontsize}{12}$K$}\psfrag{K'}{\fontsize{\figurefontsize}{12}$K'$}
\psfrag{R}{\fontsize{\figurefontsize}{12}${\hskip1pt}R$}
\psfrag{s}{\fontsize{\figuresmallfontsize}{12}${\hskip1pt}\raise1.8pt\hbox{$s$}$}
\psfrag{t}{\fontsize{\figuresmallfontsize}{12}${\hskip-1pt}\raise2.5pt\hbox{$t$}$}
\scalebox{1.0}{\includegraphics{Aut3Linked}} \caption{\small
Linking and the unlinking move.} \label{Aut3Linked}
\end{figure}
\end{proof}
We continue the proof of Proposition \ref{P:unlinking}. We are
considering two consecutive terms $s<t$ of the sequence. Suppose that
for any $t_i<s$ the rectangle $R_{t_i}$ is unholed. Suppose also that
this process of removing holes did not alter $H_s$. Therefore the
spheres dual to $H_s$ are preserved and so the conclusions of Lemma
\ref{L:spots} above still hold for $R_s$.
The goal now is to prove the induction step and remove components
(holes) of $H_s\cap R_s$ using unlinkings. We shall do this while also
preserving $H_t$. Consider $K_s$ containing $\gamma_s$. The arc
$\gamma_s\subseteq\partial R_s$ connects two spots of $K_s$. As
mentioned in the previous paragraph, at least one of these has the
property that the corresponding sphere $S_s$ intersects $K_t$ at a
spot, therefore an unlinking along $R_s$ and $S_s$ is possible.
We consider components of $H_s\cap R_s$ disjoint from $\gamma_s$.
Note that these are essential discs in $H_s$: this held before any
unlinking moves were performed, since $\Delta$ was standard at the
start, and our inductive procedure ensures that $H_s$ remains
unchanged as we do unlinkings at lower levels, so the property is
preserved. We perform the unlinking
move to remove such a disc component of $H_s\cap R_s$. As stated in
Lemma \ref{L:preserve}, this reduces $|H_s\cap R_s|$. The idea is to
continue performing unlinking moves inductively until $H_s\cap
R_s=\gamma_s$. Here again we note that unlinkings preserve spots of
$K_s$ and $K_t$ (Lemma \ref{L:preserve}), therefore the property in
the conclusion of Lemma \ref{L:spots} is preserved.
Lemma \ref{L:preserve} also ensures that each unlinking does not
introduce intersections of $H_0$ with the other rectangles $R_{t_i}$,
$t_i<s$, and that $\Delta$ is preserved as a half-disc. It also shows
that the resulting $H_0\subseteq H_s$ is contained in $H_1$ and that
$H_0\cap\mathcal{E}_1$ is left unchanged. We now have all $R_{t_i}$
for $t_i\leq s$ unholed, preserving the other desired properties of
the conclusion of the proposition. This proves the induction step.
We recall that $\Delta_{t_m}$ is unholed originally. Since unlinkings
along $R_{t_i}$, $t_i<t_m$ preserve $H_{t_m}$, $\Delta_{t_m}$ remains
unholed, completing the proof.
\end{proof}
\hop
So far, we have introduced the unlinking move and used it to obtain an
unholed half-disc out of a holed one. As mentioned before, we omitted
any reference to the automorphism $f\colon M\to M$, which is the
object of interest. We now use Proposition \ref{P:unlinking} to
obtain the desired unholed half-disc by isotopies of $f$.
The reader may have anticipated the idea for dealing with $f$: A
sequence of unlinkings is isotopic to the identity, hence the
composition is realizable by an isotopy of $f$. The catch here is that
it is not obvious that the resulting automorphism preserves some
important features of the original one. Among these is the property of
preserving the canonical Heegaard splitting of $M$, restricting to an
automorphism of $H$. It is also necessary that this handlebody
automorphism is outward expanding with respect to $H'_0$ and that the
growth rate does not increase (in fact, it will remain unchanged).
Recall that handlebodies $H\subseteq H'$ are ``concentric'' if
$H'-\intr(H)\simeq(\partial H\times I)$. The main technical lemma
in this discussion is the following:
\begin{lemma}\label{L:concentric}
In the situation described in Proposition \ref{P:unlinking} we have
that $H_0'\subseteq H_1$ are concentric.
\end{lemma}
\begin{proof}
With the same complete increasing sequence
$0=t_0<t_1<\dots<t_{k-1}<t_k=1$ used in the proof of Proposition
\ref{P:unlinking}, suppose $s,t$ are consecutive terms of the complete
sequence. Suppose we can show that an unlinking move replacing $H_s$
by $H_s'$ in $H_t$ leaves $H_s'$ concentric in $H_t$. Then the
concentricity of each $H_{t_i}$ in $H_{t_{i+1}}$ is preserved by the
unlinking move and, inductively, the whole sequence of unlinkings
preserves concentricity.
To prove that an unlinking move replacing $H_s$ by $H_s'$ in $H_t$
leaves $H_s'$ concentric in $H_t$, we first observe that the
concentricity of $H_s$ in $H_t$ follows from the fact that for all but
one $K_t$, there is just one component $K_s\subseteq K_t$ of $H_s\cap
K_t$. For all but two components $K_t$, the unique $K_s\subseteq K_t$
is concentric, i.e. there is a product structure for $K_t-K_s$. The
two exceptional $K_t$'s are illustrated in Figure \ref{Aut3Splitting},
on the right in Cases 2 and 3. The transition to a picture before the
event, replacing $H_s$ by $H_s'$, which intersects every $K_t$
concentrically, is made by adding a neighborhood of a pinching
half-disc, as illustrated in Figure \ref{Aut3PinchingHalfDisc}.
\begin{figure}[ht]
\centering
\psfrag{H}{\fontsize{\figurefontsize}{12}$H$}\psfrag{H'}{\fontsize{\figurefontsize}{12}${\hskip1.3pt}H'$}
\psfrag{s}{\fontsize{\figuresmallfontsize}{12}${\hskip1.5pt}\raise1pt\hbox{$s$}$}
\psfrag{t}{\fontsize{\figuresmallfontsize}{12}${\hskip-0.7pt}\raise2pt\hbox{$t$}$}
\psfrag{pinching half-disc}{\fontsize{\figurefontsize}{12}$\hbox{pinching half-disc}$}
\scalebox{1.0}{\includegraphics{Aut3PinchingHalfDisc}}
\caption{\small The pinching half-disc.}
\label{Aut3PinchingHalfDisc}
\end{figure}
Now we need only observe that the unlinking move does not affect the
pinching half-disc. Namely, the rectangle $R_s$ can easily be chosen
to be disjoint from the pinching half-disc. Thus after the unlinking
move, we can still add a neighborhood of the pinching half-disc to
obtain an $H_s'$ perfectly concentric in each $K_t$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{BackTrackTheorem}]
Let $G$ be the quotient graph determined by the 2-dimensional
lamination invariant under $g$. Then $g$ determines a homotopy
equivalence $h_g\colon G\to G$. By hypothesis, $h_g$ is not
train-track. Consider the algorithm of Bestvina-Handel to simplify the
homotopy equivalence. It clearly suffices to show that we can carry
out the algorithm using isotopies of $f$ that preserve the canonical
splitting.
The following moves of \cite{BH:Tracks} are realizable by isotopies of
$g$ (and hence isotopies of $f$): {\em collapses}, {\em valence-one}
and {\em valence-two homotopies} and {\em subdivisions}. We refer the
reader to \cite{UO:Autos} for details on these isotopies.
We shall consider the other moves, {\em folds} and {\em
pulling-tights}, more carefully. For a fold, two edges $e$, $e'$ of
$G$ incident to the same vertex are mapped to the same edge. In the
context of $g\colon H\to H$ this corresponds to the following
situation. There is a component $K$ of $\widehat{H}_1$ and a
$K_0\subseteq H_0\cap K$ with the property that two distinct spots
$D_0$, $D_0'$ of $K_0$ are contained in the same spot $D_1$ of $K$,
see Figure \ref{Aut3Fold}.
\begin{figure}[ht]
\centering
\psfrag{G}{\fontsize{\figurefontsize}{12}$G$}\psfrag{K}{\fontsize{\figurefontsize}{12}$K$}
\psfrag{D}{\fontsize{\figurefontsize}{12}$D$}
\psfrag{D '}{\fontsize{\figurefontsize}{12}$D'$}\psfrag{e}{\fontsize{\figurefontsize}{12}$e$}
\psfrag{e'}{\fontsize{\figurefontsize}{12}$e'$}
\scalebox{1}{\includegraphics{Aut3Fold}} \caption{\small A fold in
the quotient graph and the corresponding $H_0$ in $H_1$.}
\label{Aut3Fold}
\end{figure}
The spots $D_0$, $D_0'$ correspond to the edges $e$, $e'$ and $D_1$ to
the edge to which they are mapped. By Proposition \ref{P:half-disc},
there exists a standard half-disc $\Delta$ for $H_0$ connecting these
two spots.
If $\Delta$ is unholed, the fold is realizable by a {\em down-move}
\cite{UO:Autos}, which is an isotopy of $g$. Roughly speaking, such a
move changes $g$ so that the disc corresponding to the spot $D_1$,
which is the image of a disc $E\subseteq\mathcal{E}_0$, is isotoped
along $\Delta$. The two discs of $\mathcal{E}_0$ corresponding to the
spots $D_0$, $D_0'$ are replaced by their band-sum along the upper
boundary of $\Delta$. Such a move realizes the desired fold.
If $\Delta$ is holed then we use Proposition \ref{P:unlinking} and
obtain $H_0'$. Lemma \ref{L:preserve} assures that $H_1$ (and the
canonical splitting) are preserved. Lemma \ref{L:concentric} shows
that $H_0'$ and $H$ are concentric, so the resulting automorphism of
$M$ can be adjusted to preserve the canonical splitting $M=H\cup H'$.
Moreover the resulting restriction $g'\colon H\to H$ can be made
outward expanding with respect to $H_0'$ (see Section
\ref{Laminations}). The assertion of Proposition \ref{P:unlinking}
that $H_0'\cap\mathcal{E}_1=H_0\cap\mathcal{E}_1$ implies that the
original disc system yields invariant laminations with the same growth
rate. In fact more holds. Let $H\to G'$ be the new quotient map and
$h_{g'}\colon G'\to G'$ the new homotopy equivalence. Then $h_{g'}$
and $h_g$ are conjugate by a homeomorphism $G'\to G$. Therefore the
half-disc $\Delta$ still corresponds to the fold being considered. But
now $\Delta$ is not holed so we fall in the previous case and realize
the fold.
For a pulling-tight the argument is similar, so we will skip
details. In this case there exists an edge of $G$ in whose interior
$h_g$ fails to be locally injective. As before, in the setting of the
handlebody $H$, this corresponds to a $K_0\subseteq K$ with two
distinct spots in the same spot of $K$ (in this case $K_0$ will have
just two spots). So, here again, that corresponds to a standard
half-disc $\Delta$. If it is not holed we perform a {\em diversion}
\cite{UO:Autos}, which realizes the pulling-tight move. If it is holed
we remove the holes with unlinkings and return to the previous case.
\end{proof}
\begin{remark}
We can say more about the resulting $f'$ and $g'=f'|_{H}$ in the
conclusion of Theorem \ref{TrainTrackTheorem}. Bestvina-Handel's
algorithm stops when either 1) the homotopy equivalence admits no
back-tracking and the corresponding incidence matrix is irreducible or
2) when the incidence matrix, after collapsing of invariant forests,
is reducible. In this last case $g'$ is either reducible or
periodic. In the first case, if the growth rate $\lambda=1$ then $g'$ may
be isotoped to permute (transitively) the discs of
$\mathcal{E}_0$. This means that a power $(g')^k$ fixes
$\mathcal{E}_0$. Therefore $(f')^k$ fixes the sphere system
$\mathcal{S}_0\subseteq M$ corresponding to $\mathcal{E}_0$. By
reducing $(f')^k$ along $\mathcal{S}_0$ we obtain automorphisms of
holed balls preserving each boundary component, therefore isotopic to
the identity. This proves that $(f')^k$ is isotopic to a composition
of twists on spheres of $\mathcal{S}_0$. In particular $(f')^{2k}$ is
isotopic to $Id_{M}$ (although in general $g'$ will not be
periodic). If $\lambda>1$ then $g'$ may be either reducible or
generic. If it is generic, as stated in the theorem, it is
train-track. If it is reducible, we believe we can prove that it is a
lift of a pseudo-Anosov automorphism of a surface $F$ to the total
space of an $I$-bundle over $F$ (in this case $f$ restricted to the
splitting surface ${\partial H}$ is reducible). The proof of this
last statement is not written formally yet.
\end{remark}
\section{Automorphisms of mixed bodies} \label{MixedBodies}
In this section we suppose $M$ is a mixed body. Automorphisms of
holed mixed bodies can be understood by the same method. In fact,
recall from the introduction that the ``difference'' between an
automorphism of a holed or spotted manifold and an unspotted manifold
is an automorphism of the spotted manifold which can be realized by an
isotopy of the unspotted manifold $M$.
Recall from Theorem \ref{CanonicalHeegaardTheorem2} that there exists
a canonical closed surface $F\subseteq M$ separating $M$ into a
handlebody $H$ and a compression body $Q$. The splitting $M=H\cup_F Q$
has the property that every curve in $F$ bounding a disc in $Q$ also
bounds a disc in $H$. A curve in $F$ bounding a disc in $H$ may either
bound a disc in $Q$ or be parallel to $\partial_i Q=\partial M$. As in
the special case treated in Section \ref{TrainTrackSection} we can use
the canonicity of $F$ to isotope an automorphism $f\colon M\to M$ to
preserve the splitting, so it induces restricted automorphisms on $Q$
and $H$. Here, again, $F$ is not ``rigid'' (e.g., a double twist on a
sphere $S$ intersecting $F$ on an essential curve is isotopic to the
identity, see Example \ref{NonUniqueExample}).
But, in this case, there will be another canonical surface $V\subseteq
H$ which can also be made invariant. From another point of view, we
shall prove that the restriction of $f$ to $H$ is rigidly
reducible. More specifically, we shall prove that if $f$ preserves
$F$, then $f|_{H}$ always preserves a compression body $Q'\subseteq H$
with $\partial_e Q'=\partial H$ and $\partial_i Q'\neq\emptyset$ (see
Section \ref{Intro}). Roughly speaking, $Q'$ is the ``mirror image''
of $Q$ through $F$ and $V$ is the mirror image of $\partial
M=\partial_i Q$.
\begin{proposition}\label{FurtherReducingSurface}
Let $f$ be an automorphism of $M$. Then $f$ can be isotoped to
preserve the canonical splitting $M=H\cup_F Q$. Moreover, $f$ can be
further isotoped rel $F$ to preserve a surface $V\subseteq H$ with the
following properties. It separates $H$ into a (possibly disconnected)
handlebody $H'$ and a compression body $Q'$ such that a curve in $F$
bounds a disc in $Q$ if and only if it bounds a disc in $Q'$. Then $f$
preserves $Q$, $Q'$ and $H'$. Also, $f|_{Q}$ and $f|_{Q'}$ are
conjugate, and $f$ restricted to $V$ (and hence $H'$) is unique, in
the sense that it depends only on the isotopy class of $f$, and not on
$f|_H$.
\end{proposition}
To prove the proposition above we consider a compression body
$Q'\subseteq M$ with the following properties: 1) $Q'\subseteq H$, 2)
$\partial_e Q'=\partial H=F$ and 3) a curve in $F$ bounds a disc in
$Q'$ if and only if it bounds a disc in $Q$. We can regard $Q\cup Q'$
as the double of $Q$ along $\partial_e Q$.
\begin{lemma}\label{MirrorCompressionBody}
The compression body $Q'$ is well-defined up to isotopy. In
particular, so is $V=\partial_i Q'$.
\end{lemma}
\begin{proof}
Consider a complete system of essential discs $\mathcal{E}$ in $Q$,
i.e., $\mathcal{C}=\partial\mathcal{E}\subseteq F$ and $\mathcal{E}$
cuts $Q$ open into a union of a product $(\partial_i Q)\times I$ and
balls. Then $\mathcal{C}$ also bounds discs $\mathcal{E}'\subseteq
H$. Let $Q'\subseteq H$ be the compression body determined by $F$ and
$\mathcal{E}'$. More precisely, consider a neighborhood $N\subseteq H$
of $F\cup\mathcal{E}'$. Let $B\subseteq H$ be the union of ball
components of $\overline{H-N}$. Then $Q'=N\cup B$.
We need to prove ``symmetry'', i.e. that curves of $F$ bound discs in
$Q'$ if and only if they also bound in $Q$. Here we require that the
system $\mathcal{E}$ is minimal, in the sense that the removal of any
discs yields a system which is not complete. It is clear that any
essential disc is contained in some such system. It is a standard fact
that any pair of such minimal disc systems in a compression body are
connected by a sequence of {\em disc slides} (see
e.g. \cite{FB:CompressionBody}, appendix B). But a disc slide in any
of the compression bodies $Q$, $Q'$ corresponds to a disc slide in the
other whose results coincide in $F$. We need not consider
$\partial$-parallel discs in the compression bodies, so the proof that
$Q$ and $Q'$ are symmetric is complete.
The above argument with disc slides, together with irreducibility of
$H$, also proves that the construction of $Q'$ does not depend on
$\mathcal{E}$, implying its uniqueness.
\end{proof}
Let $f\colon M\to M$ be an automorphism of a mixed body. Denote by
$\mathcal{A}_f$ the class of automorphisms $f'\colon M\to M$ which are
isotopic to $f$ and preserve $Q$, $Q'$, $H'\subseteq M$.
\begin{lemma}\label{RigidHandlebody}
If $f$, $f'\in\mathcal{A}_f$ then $f|_{V}$ and $f'|_{V}$ are isotopic
(hence also $f|_{H'}$ and $f'|_{H'}$ are isotopic).
\end{lemma}
\begin{proof}
Let $h=(f^{-1}\circ f')\in\mathcal{A}_{\rm Id}$. Therefore
$h|_{\partial M}={\rm Id}_{\partial M}$.
It is a simple and known fact that an automorphism of a compression
body is uniquely determined by its restriction to the exterior
boundary. Recall that $F$ is the exterior boundary of both $Q$ and
$Q'$. Symmetry of $Q$ and $Q'$ implies that $h|_{Q\cup Q'}$ may be
regarded as the double of $h|_{Q}$. Since $\partial_i Q=\partial M$,
we have $h|_{\partial_i Q}={\rm Id}$. Therefore $h|_{\partial_i Q'}={\rm
Id}$, completing the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{FurtherReducingSurface}]
Use Lemma \ref{MirrorCompressionBody} and note the following. By
cutting $Q'$ along the cocores of its $1$-handles it is easy to see
that $V=\partial_i Q'$ separates $H$ into $Q'$ and a handlebody
$H'=\overline{H-Q'}$. This proves the conclusions on the existence of
the canonical decomposition. The conclusions on $f|_{V}$ and $f|_{H'}$
follow from Lemma \ref{RigidHandlebody}. The conclusion on conjugacy
of $g=f|_{Q}$ and $g'=f|_{Q'}$ is clear from the symmetry of $Q$ and
$Q'$.
\end{proof}
\begin{remark}
We emphasize that while $V$ is a rigid reducing surface for $f|_{H}$
it is not a rigid reducing surface for $f$, even with unique $f|_{H'}$
and $f|_{V}$. The reason is that $f$ need not restrict uniquely to the
complement $Q\cup Q'$.
Recall from the introduction that some of the automorphisms $f\colon
M\to M$ being considered arise from adjusting automorphisms of general
reducible $3$-manifolds. In this case $f|_{\partial M}$ is periodic,
permuting components. The argument above in Lemma
\ref{RigidHandlebody} shows, then, that in this case $f$ will always
be periodic on the (possibly disconnected) handlebody $H'\subseteq
H\subseteq M$. But we also consider cases without the periodic
hypothesis on $f|_{\partial M}$. Here there is a constraint on
$f|_{\partial M}$. To see this recall from the argument that
$f|_{\partial M}$ determines $f|_{V}$ uniquely. But $V=\partial H'$,
therefore $f|_{V}$ is a surface automorphism that extends over the
handlebody $H'$, which is a constraint.
\end{remark}
The goal is to find a nice representative for the class of an
automorphism $f\colon M\to M$. Use Proposition
\ref{FurtherReducingSurface} to decompose $f\colon M\to M$ into
automorphisms $f|_{Q}$, $f|_{Q'}$ and $f|_{H'}$. The first two are
conjugate and they will be dealt with using unlinkings analogous to
those of Section \ref{TrainTrackSection}. The last is dealt with in
\cite{UO:Autos} (see also Sections \ref{Intro} and \ref{Laminations}).
We now proceed to find desirable restrictions of $f$ to the canonical
compression bodies $Q$, $Q'$. The two restrictions are conjugate so it
suffices to deal with one of them. We choose $Q'$ because $Q'\cup H'$
is a handlebody, which will be of technical relevance. So consider $g=f|_{Q'}$. We need to define a
convenient notion of ``back-tracking'' for an automorphism of
$Q'$. Here suppose that $g\colon Q'\to Q'$ is generic. Similarly to
what is done in Section \ref{TrainTrackSection}, consider concentric
$Q_i\subseteq Q'$, disc systems $\mathcal{E}_i\subseteq Q_i$,
laminations $\Lambda$ and $\Omega$, and growth rate $\lambda$ as in
Section \ref{Laminations}. Also assume that $\Lambda$ has the
incompressibility property. Consider $\widehat{Q}_1=Q_1|\mathcal{E}_1$
and let $K_1$ be a component. Recall that $(K_1,V_1,\mathcal{D}_1)$
denotes either a spotted ball or spotted product, with spots
determined as duplicates of $\mathcal{E}_1$.
The new phenomenon for back-tracking in $Q'$ is roughly the
following. If one follows a leaf of the mapped 1-dimensional
lamination it may enter the product part of $Q'$ through a spot $D$
and also leave through $D$. Such a leaf may ``back-track'' in $Q'$, in
the sense that it can be pulled tight (homotopically) in $Q'$ to
reduce intersections with that spot. Other (portions of) leaves may
not admit pulling tight in this way, being somehow linked with
$\partial_i Q'$. But among these some can still be pulled tight if
they are allowed to pass through the handlebody $H'$ (recall that
$H=Q'\cup H'$).
\begin{defn}\label{BackTrackingCB}
Suppose that $\sigma\subseteq\Omega$ is an arc such that
$\sigma'=\omega(\sigma)\subseteq Q_0\subseteq Q'$ has the following
properties: 1) $\partial\sigma'$ is contained in a single disc $E_0$
of the system $\mathcal{E}_0\subseteq Q_0$, and 2) $\sigma'$ is
isotopic in $H$ (rel $\partial\sigma'$) to an arc in $E_0$. If there
exists such an arc we say that $\Omega$ {\em back-tracks}. In this
case we say that $g=f|_{Q'}$ {\em admits back-tracking in $H$} or just
{\em admits back-tracking}.
\end{defn}
\begin{remark}
Property 1) above is intrinsic to $Q'$ while property 2) is not, since
it refers to the entire handlebody $H$.
\end{remark}
\begin{lemma}\label{L:back-track}
If $\Omega$ back-tracks then there exists an integer $i\leq 0$ such
that $Q_i$ has the following properties. There is a component $K_i$ of
$Q_i\cap K_1$ which is a twice-spotted ball (in other words, $K_i$ is
contained in a 1-handle of $Q_i$), with both spots contained in the same
spot $D_1$ of $\mathcal{D}_1$. Moreover if $(P_i,\mathcal{D}_i)$ is the
unique \spinal for $Q_i$, then it is parallel in $K_1\cup H'$ to the
subspinal pair $(D_1,D_1)$ for $(K_1,V_1,\mathcal{D}_1)$, where $D_1$
is a disc of $\mathcal{D}_1$.
\end{lemma}
We skip the proof, which follows directly from unknottedness and
parallelism (see Section \ref{Laminations}).
\begin{remark}
The converse is obviously true.
\end{remark}
In the situation of the lemma above we say that $Q_i$ {\em back-tracks
in $Q_1$}. More generally, we say that {\em $Q_i$ back-tracks in
$Q_{j+1}$}, $i<j$, if $g^{-j}(Q_i)$ back-tracks in
$g^{-j}(Q_{j+1})=Q_1$.
\begin{defn}
If $f\in\mathcal{A}_f$ and $g=f|_{Q'}$ is generic and admits no
back-tracking in $H$ then we say that $g$ is {\em train-track generic
in $H$}, or just {\em train-track generic}.
\end{defn}
The importance of considering train-track generic automorphisms comes
from the result below. Its proof is an adaptation of an argument of
\cite{LNC:Tightness}.
\begin{thm}\label{MinimalGrowth}
Let $f$ be an automorphism of a mixed body $M$. If $g=f|_{Q'}$ is
train-track generic in $H$ then its growth rate is minimal among the
restrictions to $Q'$ of all automorphisms in $\mathcal{A}_f$.
\end{thm}
\begin{proof}
We prove the contrapositive, so assume that there exists
$f'\in\mathcal{A}_f$ such that $g'=f'|_{Q'}$ has growth $\lambda'$
less than the growth $\lambda$ of $g$.
Consider the original disc system $\mathcal{E}_0\subseteq Q_0$,
determining a dual object $\Gamma_0$. For simplicity denote it by
$\Gamma$. Recall that $\Gamma=\hat\Gamma\cup V$, where $\hat \Gamma$
is a graph with one vertex on each component of $V$. Consider the
mapped-in 1-dimensional lamination $\omega\colon\Omega\to Q_0$ (see
Section \ref{Laminations}, Theorem
\ref{CompressionBodyClassificationTheorem}). Recall that $\nu$ denotes
its transverse measure. Use $(\Omega,\nu)$ to give weights $\hat\nu$
to the edges of $\Gamma$. More precisely, if $E_i\subseteq
\mathcal{E}_0$ is the disc corresponding to an edge $e_i$ of $\Gamma$
then the weight on $e_i$ is $\int_{(\omega^{-1}(E_i))}\nu$. For
simplicity, call the pair $(\Gamma,\hat\nu)$ a {\em weighted graph},
although it is really $\hat \Gamma$ which is a weighted graph.
Extend $\mathcal{E}_0$ to a disc system $\mathcal{E}\subseteq Q'$
through the product in $Q'-\intr({Q}_0)$. Apply powers of $g^{-1}$ to
$\Gamma$. One can ``push forward'' the weights on $\Gamma$ to
$\Gamma_n=g^{-n}(\Gamma)$ through $g^{-n}$. More precisely, the weight
on the image $g^{-n}(e)$ of an edge $e$ is the same weight as that on
$e$. Denote the resulting weighted graph as $(\Gamma_n,\hat\nu)$. It
follows that
\begin{equation}\label{Eq:eigenvalue}
\frac{\weight{\mathcal{E}}{\Gamma_n}{\hat\nu}}{\lambda^n}=c>0
\end{equation}
is constant, where $\weight{\mathcal{E}}{\Gamma_n}{\hat\nu}$ denotes
the sum of points of intersection $\mathcal{E}\cap\Gamma_n$ weighted
by $\hat\nu$.
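One schematic way to read (\ref{Eq:eigenvalue}) is the following; we
assume here, consistently with Section \ref{Laminations}, that the
transverse measure $\nu$ scales by the factor $\lambda$ under the
dynamics (the Perron--Frobenius property). Each application of
$g^{-1}$ then multiplies the weighted intersection number by
$\lambda$, so
\[
\weight{\mathcal{E}}{\Gamma_{n}}{\hat\nu}
=\lambda\,\weight{\mathcal{E}}{\Gamma_{n-1}}{\hat\nu}
=\cdots
=\lambda^{n}\,\weight{\mathcal{E}}{\Gamma_{0}}{\hat\nu},
\]
and the constant in (\ref{Eq:eigenvalue}) is
$c=\weight{\mathcal{E}}{\Gamma_0}{\hat\nu}$.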
Now consider $f'$ and $g'=f'|_{Q'}$, with corresponding $Q_0'$, disc
system $\mathcal{E}'_0\subseteq Q_0'$ with dual graph $\Gamma'$ and
growth $\lambda'$. As before, the corresponding 1-dimensional
lamination determines a weighted graph $(\Gamma',\hat\nu')$.
But $Q_0$, $Q_0'$ are concentric in $Q'$, therefore one can apply an
ambient isotopy rel $V$ taking $\Gamma$ to some $\bar\Gamma\subseteq
Q_0'$. We can also assume that $\bar\Gamma$ does not ``back-track'' in
the 1-handles of $Q_0'$, in the sense that no vertex of $\bar\Gamma$
is contained in such a 1-handle $K$ and that the edges of $\bar\Gamma$
are transverse to the foliation of $K$ by cocores. The isotopy from
$\Gamma$ to $\bar\Gamma$ pushes forward $\hat\nu$ to yield
$(\bar\Gamma,\hat\nu)$.
We are interested in applying powers of $(g')^{-1}$ to
$\bar\Gamma$. Denote $\bar\Gamma_n=(g')^{-n}(\bar\Gamma)$. By the
construction of $\bar\Gamma\subseteq Q_0'$,
\[
\frac{\weight{\mathcal{E}}{\bar\Gamma_n}{\hat\nu}}{(\lambda')^n}<\bar{C}
\]
is bounded. This is true because the number of intersections of the
images under $(g')^{-n}$ of {\em any} weighted graph with any
essential disc grows at rate $\lambda'$. Since $\lambda'<\lambda$,
\begin{equation}\label{Eq:convergence}
\frac{\weight{\mathcal{E}}{\bar\Gamma_n}{\hat\nu}}{(\lambda)^n}\to 0.
\end{equation}
By combining (\ref{Eq:eigenvalue}) and (\ref{Eq:convergence}) it
follows that there exists $N$ such that
\begin{equation}\label{Eq:inequality}
\frac{\weight{\mathcal{E}}{\bar\Gamma_N}{\hat\nu}}{\lambda^N}<
\frac{\weight{\mathcal{E}}{\Gamma_N}{\hat\nu}}{\lambda^N}
\end{equation}
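Explicitly, combining the bound above with (\ref{Eq:eigenvalue}):
\[
\frac{\weight{\mathcal{E}}{\bar\Gamma_N}{\hat\nu}}{\lambda^N}
=\Big(\frac{\lambda'}{\lambda}\Big)^{N}\,
\frac{\weight{\mathcal{E}}{\bar\Gamma_N}{\hat\nu}}{(\lambda')^{N}}
<\Big(\frac{\lambda'}{\lambda}\Big)^{N}\bar{C}
\longrightarrow 0,
\]
while the right-hand side of (\ref{Eq:inequality}) equals $c>0$ for
every $N$, so the inequality holds for $N$ sufficiently large.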
But recall that there is an isotopy rel $V$ taking $\Gamma$ to
$\bar\Gamma$. There is also an isotopy (not necessarily rel $V$)
taking $g^{-N}(\bar\Gamma)$ to $\bar\Gamma_N$ ($f$ and $f'$, which
restrict to $Q'$ as $g$ and $g'$, are isotopic). Therefore there is an
isotopy $\iota$ taking $\Gamma_N$ to $\bar\Gamma_N$. Note that this
is an isotopy in $M$, not just $Q'$.
Regard $Q_{-N}$ as a neighborhood of $\Gamma_N$. Extend the isotopy
$\iota$ to $Q_{-N}$, taking it to a neighborhood $\bar Q_N$ of
$\bar\Gamma_N$. Define $\bar\omega=\iota\circ\omega \colon\Omega \to
Q_{-N}$. The inequality (\ref{Eq:inequality}) is restated in terms of
laminations as
\[
\int_{(\bar\omega)^{-1}(\mathcal{E})}\nu<\int_{\omega^{-1}(\mathcal{E})}\nu.
\]
Now let $\mathcal{S}\subseteq M$ be a ``dual sphere system''
corresponding to $\mathcal{E}$, i.e. $\mathcal{S}\cap
Q'=\mathcal{E}$. The inequality above clearly implies
\begin{equation}\label{Eq:inequality2}
\int_{(\bar\omega)^{-1}(\mathcal{S})}\nu<\int_{\omega^{-1}(\mathcal{S})}\nu.
\end{equation}
The isotopy $\iota$ determines a map $p\colon \Omega\times I\to M$
such that $p|_{\Omega\times\{0\}}=\omega$ and
$p|_{\Omega\times\{1\}}=\bar\omega$. Perturb it to put it in general
position. Consider $p^{-1}(\mathcal{S})$. Let $L\subseteq\Omega$ be a
leaf that embeds through $\omega$. Then $(L\times
I)\subseteq(\Omega\times I)$ is a product $\mathbb{R}\times I$ and
$(L\times I)\cap p^{-1}(\mathcal{S})$ consists of embedded closed
curves and arcs. Ignore closed curves for the moment. Let $A_L$ be
the set of such arcs $\alpha$ intersecting the lower boundary
$L\times\{0\}$. Recall that the set of leaves that do not embed has
measure zero. If for all leaves $L$ that embed and all $\alpha\in A_L$
the other endpoint of $\alpha$ is in $L\times\{1\}$ then
\[
\int_{(\bar\omega)^{-1}(\mathcal{S})}\nu\geq\int_{\omega^{-1}(\mathcal{S})}\nu,
\]
contradicting inequality (\ref{Eq:inequality2}). Therefore there
exists $L$ that embeds and $\alpha\in A_L$ with both endpoints at
$L\times\{0\}$.
Let $\alpha$ be an edgemost such arc. Then there is an arc
$\beta\subseteq (L\times\{0\})$ such that
$\partial\beta=\partial\alpha$. Let $D$ be the half-disc bounded by
$\alpha\cup \beta$ in $L\times I$. Then $p|D$ is a map which sends
$\alpha$ to a sphere $S$, $S \subseteq\mathcal{S}$, and embeds $\beta$
in the leaf $L$. We can homotope the map $p|D$ rel $\beta$ to obtain
a map $d:D\to M$ such that $d(\alpha)\subseteq E_0\subseteq S$, where
$E_0\subseteq \mathcal{E}_0$ is the disc in $S$. Thus one has
$d(\alpha\cup\beta)\subseteq Q_0$.
Recall the Heegaard surface $F$ separates $M$ into $Q$ and $H$. Let
$d$ be transverse to $F$ and let $\Delta=d^{-1}(Q)\subseteq D$. On
$\Delta$ redefine $d$ to be its mirror image $r\circ d$, where $r$ is
the reflection map $r:Q\to Q'$. Then homotope $d$ by pushing slightly
into $Q'$. By post-composing $d$ with an isotopy along the $I$-fibers
of the product $Q'-\intr({Q}_{-1})$ one obtains $d$ mapping into
$Q_0\cup H'$.
Using the irreducibility of $Q_0\cup H'$ (which is a handlebody)
we remove closed curves from $d^{-1}(\mathcal{E}_0)$, which will
then consist of $\alpha$. By unknottedness (Theorem
\ref{UnknottedTheorem}), the embedded image of
$d|\beta\subseteq\Omega$, which lies in a leaf, is parallel to the
subspinal pair $(D_0,D_0)$ for some spotted component
$(K_0,V_0,\mathcal{D}_0)$, where $D_0$ is a duplicate of $E_0$,
proving that $\Omega$ back-tracks.
\end{proof}
\begin{thm}[Compression Body Train Track Theorem]\label{CompressionTrainTrackTheorem}
Let $f\colon M\to M$ be an automorphism of a mixed body. Then there
exists $f'\in\mathcal{A}_f$ such that $g=f'|_{Q'}$ is
1) periodic,
2) rigidly reducible or
3) train-track generic in $H$.
On the other pieces, $Q$ and $H'$, we have:
4) $f'|_{Q}$ is isotopic to a conjugate of $g$, and
5) the class of $f'|_{H'}$ does not depend on
$f'\in\mathcal{A}_f$.
\end{thm}
The conclusions concerning $f'|_{Q}$, $f'|_{H'}$ are proved in
Proposition \ref{FurtherReducingSurface}. We proceed now to prove that
one can always find $f'\in\mathcal{A}_f$ such that $f'|_{Q'}$ is
either periodic, reducible or train-track generic, i.e. admits no
back-tracking. To do so we need to perform unlinkings analogous to
those of Section \ref{TrainTrackSection}. As before, assume that
$g=f|_{Q'}$ is generic and that the leaves of the 2-dimensional
lamination are in Morse position with respect to the height function
determined by the product structure in $Q'-\intr(Q_0)$. We refer
the reader to Section \ref{Laminations} and \cite{UO:Autos} for
details.
For $t\leq 1$ consider spotted balls or products $(K_t,\mathcal{D}_t)$
determined by $Q_t\cap \widehat{Q}_1$.
As before, to define linking we need half-discs. Let
$(\Delta,\alpha,\beta)$ be a half-disc and $(R,\gamma,\gamma')$ a
rectangle (Definitions \ref{HalfDisc}).
\begin{defns}
Let $s<t$. A {\em half-disc for $Q_s$ in $Q_t$} is a half-disc
$(\Delta,\alpha,\beta)$ satisfying:
\begin{enumerate}[\upshape (i)]
\item $\Delta\subseteq K_t\cup H'$ is embedded for some $K_t$,
\item $\beta$ is contained in a spot $D_t$ of $K_t$,
\item $\alpha\subseteq\partial K_s$ for some $K_s\subseteq K_t$ and
$\Delta\cap K_s=\alpha$,
\item $\alpha$ connects two distinct spots of $(K_s,\mathcal{D}_s)$.
\end{enumerate}
As before we refer to $\alpha$ as the {\em upper boundary of
$\Delta$}, denoted by $\partial_u\Delta$, and $\beta$ as its {\em
lower boundary}, denoted by $\partial_l\Delta$.
A {\em rectangle for $Q_s$ in $Q_t$} is a rectangle
$(R_s,\gamma_s,\gamma_t)$ satisfying:
\begin{enumerate}[\upshape (a)]
\item $R_s\subseteq K_t\cup H'$ is embedded for some $K_t$,
\item $\gamma_s\subseteq\partial K_s$ for some $K_s\subseteq K_t$
and $R_s\cap K_s=\gamma_s$,
\item $\gamma_s$ connects distinct spots of $(K_s,\mathcal{D}_s)$,
\item $\gamma_t\subseteq \partial K_t -
    \intr(\mathcal{D}_{t})- \intr(V_t)$ (recall that when $K_t$
    is a spotted ball then $V_t=\emptyset$),
\item $\partial R_s-(\gamma_s\cup\gamma_t)\subseteq\mathcal{D}_t$.
\end{enumerate}
\end{defns}
Note that a half-disc $\Delta$ in $Q_t$ has the property that
$\partial\Delta\cap\partial_i Q'=\emptyset$. Similarly $\partial
R\cap\partial_i Q'=\emptyset$, where $R$ is a rectangle in $Q_t$.
Also, holes now may appear due not just to intersections with $Q_s$
but also with $H'$.
\begin{defn}\label{D:standard2}
Let $0\leq s<t\leq 1$ and $\Delta$ a half-disc for $Q_s$ in $Q_t$.
Suppose that $s=t_0<t_1<\cdots<t_k=t$ is a sequence of regular
values. We say that $\Delta$ is {\em standard with respect to
$\{\,t_i\,\}$} if there is $0\leq m<k$ such that the following
holds. The half-disc $\Delta$ is the union of rectangles $R_{t_i}$,
$s\leq t_i\leq t_{m-1}$, and a half-disc $\Delta_{t_m}$ (see
Figure~\ref{Aut3StandardDisc}) with the properties that:
\begin{enumerate}
\item each $R_{t_i}$, $s\leq t_i< t_{m}$, is a rectangle for
$Q_{t_i}$ in $Q_{t_{i+1}}$,
\item the holes $\intr({R}_{t_i})\cap Q_{t_i}$ are essential
discs in $Q_{t_m}\cup H'$, each consisting of either an essential
disc in $Q_{t_m}$ or of a vertical annulus in the product part of
$Q_{t_m}$ union an essential disc in $H'$,
\item $\Delta_{t_m}$ is an unholed half-disc for $K_{t_m}$ in
$H_{t_{m+1}}$.
\end{enumerate}
\end{defn}
The following is the analogue of Proposition \ref{P:half-disc} needed
here.
\begin{proposition}\label{P:half-disc2}
Let $0=t_0<\dots<t_k=1$ be a complete sequence of regular values.
Suppose that a component $K_0$ of $Q_0\cap \widehat{Q}_1$ intersects a
disc $D$ of $\widehat{\mathcal{E}}_1$ in two distinct components
$D^0$, $D^1$. If an arc $\sigma\subseteq K_0$ having an endpoint at
each $D^i$ is parallel (rel $\partial\sigma$) in $\widehat{Q}_1\cup
H'$ to an arc $\sigma'\subseteq D$ then there exists a half-disc
$\Delta$ for $Q_0$ connecting $D^0$ and $D^1$ which is standard with
respect to $\{t_i\}$.
\end{proposition}
\begin{proof}
The proof is similar to that of Proposition \ref{P:half-disc}, using
Theorem \ref{UnknottedTheorem}, Lemma \ref{SpinalPairLemma} and
techniques of Lemma \ref{L:half-disc}.
If $K_1$ is a spotted ball then all $K_{t_i}$'s containing $\sigma$
also are spotted balls and the result follows directly from the proof
of Lemma \ref{L:half-disc}. In fact, suppose that $t_m$ is the biggest
$t_i$ with the property that $D^0$ and $D^1$ are contained in distinct
spots $D^0_{t_i}$, $D^1_{t_i}$ of $K_{t_i}$. If this $K_{t_m}$ is a
spotted ball then $K_{t_{m+1}}$ also is a spotted ball, as one can
verify by considering the possible events described in Lemma
\ref{EventLemma} changing $K_{t_m}$ to $K_{t_{m+1}}$. The argument of
Lemma \ref{L:half-disc} still works.
We can therefore assume that $K_{t_m}$ is a spotted product. In this
case $K_{t_{m+1}}$ is also a spotted product. Let $D'$ be the spot of
$K_{t_{m+1}}$ that contains both $D^0_{t_m}$, $D^1_{t_m}$. There is a
single arc $\gamma_{t_{m+1}}\subseteq D'-\intr(D^0_{t_m}\cup
D^1_{t_m})$ connecting $D^0_{t_m}$ to $D^1_{t_m}$ (up to isotopy in
$D'-\intr(D^0_{t_m}\cup D^1_{t_m})$, rel $\partial(D^0_{t_m}\cup
D^1_{t_m})$). Recall that there are unambiguous spinal pairs
$(P_{t_m},\mathcal{D}_{t_m})$, $(P_{t_{m+1}},\mathcal{D}_{t_{m+1}})$
for $K_{t_m}$, $K_{t_{m+1}}$ respectively, and that they are parallel.
We use this parallelism to build $\gamma_{t_m}\subseteq P_{t_m}$
isotopic to $\gamma_{t_{m+1}}$. Note that the parallelism determines
an unholed half-disc $(\Delta_{t_m},\gamma_{t_m},\gamma_{t_{m+1}})$
using the Loop Theorem.
Now proceed inductively ``downwards'', building $\gamma_{t_{i-1}}$
from $\gamma_{t_i}$ in the analogous manner for as long as
$K_{t_{i-1}}$ is a spotted product. More precisely, to each spotted
product $K_{t_i}$ corresponds a spinal pair
$(P_{t_{i}},\mathcal{D}_{t_{i}})$. The parallelism of spinal pairs
$(P_{t_{i-1}},\mathcal{D}_{t_{i-1}})$ and
$(P_{t_i},\mathcal{D}_{t_{i}})$ yields $\gamma_{t_{i-1}}$ from
$\gamma_{t_i}$. Note that this parallelism defines a rectangle
$(R_{t_{i-1}},\gamma_{t_{i-1}},\gamma_{t_i})$, which is also holed
only if there is a spotted ball $K_{t_{i-1}}'\subseteq K_{t_i}$
(i.e. in a situation analogous to Case 2 in Figure
\ref{Aut3Splitting}). In any case, $\gamma_{t_{i-1}}$ has the property
of connecting the spots $D^0_{t_{i-1}}$ and $D^1_{t_{i-1}}$ that
contain $D^0$, $D^1$.
If no $K_{t_j}$ containing $\sigma$ is a spotted ball then
$K_0$ is a spotted product and we obtain an arc $\gamma_0\subseteq
P_0$. But $\gamma_0$ clearly connects the spots $D^0$ and $D^1$ of
$K_0$. Hence the union of all rectangles $R_{t_i}$'s and the half-disc
$\Delta_{t_m}$ yields the desired standard half-disc. This takes care
of the case when there is no spotted ball $K_{t_j}$ containing
$\sigma$.
Therefore assume that $\sigma$ is contained in a spotted ball
$K_{t_k}$. Assume that the spotted ball $K_{t_k}$ is maximal, in the
sense that it is contained in a spotted product $K_{t_{k+1}}$. We also
assume from the construction above that there is an arc
$\gamma_{t_{k+1}}\subseteq P_{t_{k+1}}$ isotopic (rel $D'$) to
$\gamma_{t_{m+1}}$. The original arc $\sigma$ determines an embedded
arc $\sigma_{t_k}\subseteq L_{t_k}$. Now let $H''$ be the component
of $H'$ which intersects $K_{t_{k+1}}$. Then
$H_{t_{k+1}}=K_{t_{k+1}}\cup H''$ is a (spotted) handlebody. We first
note that $\gamma_{t_{k+1}}$ is isotopic (rel $D'$) in $H_{t_{k+1}}$
to $\sigma$. Therefore $\sigma_{t_k}$ is isotopic in $H_{t_{k+1}}$ to
$\gamma_{t_{k+1}}$. But since $\sigma_{t_k}$ is a subgraph of
$\Gamma_{t_k}$, we can apply Lemma \ref{GraphToSpinalLemma} to obtain
a subspinal pair $(\breve P_{t_k}, \breve{\mathcal{D}}_{t_k})$ (a disk
with two spots) which is parallel in $H_{t_{k+1}}$ to
$(P_{t_{k+1}},\mathcal{D}_{t_{k+1}})$.
The parallelism described above determines $\gamma_{t_k}\subseteq
P_{t_k}$. It also determines a rectangle
$(R_{t_k},\gamma_{t_k},\gamma_{t_{k+1}})$ which may be holed. In this
case the holes come from intersections with $H'_{t_k}$, which is the
spotted handlebody corresponding to the spotted product component
$K'_{t_k}\subseteq K_{t_{k+1}}$. As usual, these holes may be adjusted
to be essential discs either in $Q_{t_k}$ or in $H'_{t_k}$, each of
these last consisting of a disc in $H'$ and a vertical annulus in
$K'_{t_k}$.
Now enlarge $(\breve P_{t_k}, \breve{\mathcal{D}}_{t_k})$ to a \spinal
for $K_{t_k}$. We use Lemma \ref{SpinalPairLemma} to obtain \spinals
$(P_{t_i},\mathcal{D}_{t_i})$ for each $t_i<t_k$. We now proceed as in
Lemma \ref{L:half-disc}. Recalling, consider a path
$\gamma_{t_0}=\gamma_0\subseteq P_0$ connecting $D^0$ and $D^1$. Now
use parallelism to go ``upwards'', obtaining $\gamma_{t_i}\subseteq
P_{t_i}$ for any $t_i<t_k$. In the same way $\gamma_{t_{k-1}}$
determines through parallelism an arc $\gamma_{t_k}'$ in
$P_{t_k}$. Now $\gamma_{t_k}$, $\gamma_{t_k}'\subseteq P_{t_k}$ both
connect the same spots of $K_{t_k}$ therefore they are isotopic (rel
$\partial\mathcal{D}_{t_k}$) in $P_{t_k}$. We adjust them to coincide,
obtaining $\gamma_{t_k}'=\gamma_{t_k}$.
Finally, let $\Delta$ be the union of the rectangles $R_{t_i}$'s
and the half-disc $\Delta_{t_m}$, which is the desired standard
half-disc in this last case.
\end{proof}
\begin{remark}\label{UnholedHalf-Disc2}
Note from the argument that, as in the sphere body case, a rectangle
$R_{t_i}$ is holed only when $K_{t_i}\subseteq K_{t_{i+1}}$ is in a
situation like Case 2 of Figure \ref{Aut3Splitting}.
\end{remark}
The next step is to describe the unlinking move in this setting of
mixed bodies. Suppose that there is a half-disc $\Delta$ for $Q_0$ in
$Q_1$, which we assume to be in standard position with respect to a
complete sequence of regular values $\{t_i\}$. The goal is to remove
holes of the rectangles $R_{t_i}$ and the half-disc
$\Delta_{t_m}$. Let $s<t$ be consecutive terms of the sequence and
consider $R_s\subseteq K_t$. If $K_t$ is a spotted ball then the
operation works precisely as in Section \ref{TrainTrackSection}.
The new situation appears when $K_t$ is a spotted product, and even
there the argument is essentially the same, so we will be brief.
Recall from Section \ref{TrainTrackSection} that the main condition
for the unlinking to be possible was that one side of the holed
rectangle $R_{t_i}$ needed to intersect a spot of $K_{t_{i+1}}$ in a
spot containing a single spot $D$ of $K_{t_i}$. If that is the case,
as before, we consider the sphere $S$ in the mixed body $M$ which is
dual to $D$. The holes of $R_{t_i}$ are removed by ``isotoping them''
along $R_{t_i}$ close to $S$ and then along $S-\intr(D)$. Here the
only new feature is that a hole may correspond to a disc $E\subseteq
Q_{t_i}\cup H'$.
Let $N$ be a neighborhood (in $Q_{t_i}\cup H'$) of a component $E$ of
$R_{t_i}\cap (Q_{t_i}\cup H')$. One can consider it as a neighborhood
of an arc $\alpha$ either in $H'$ (in case $E$ is a disc intersecting
$H'$) or in $Q_{t_i}$ (otherwise). Isotope $\alpha$ (rel
$\partial\alpha$) along $R_{t_i}$, bringing it close to $D$. Now perform
the isotopy along $S-\intr(D)$ as before. Note that this isotopy does
not introduce intersections of $\alpha$ with $R_{t_i}$, for $S$
intersects $R_{t_i}$ only at its boundary. Call the composition of
these isotopies $\iota$ and let $\alpha'=\iota(\alpha)\subseteq
K_{t_i}$. Extend $\iota$ to $N$ and let $N'=\iota(N)\subseteq
K_{t_i}$, a neighborhood of $\alpha'$.
We separate two cases. If $\alpha\subseteq Q_{t_i}$ the unlinking
move is done, as in Section \ref{TrainTrackSection}. If
$\alpha\subseteq H'$ note that $\alpha$ and $\alpha'$ are isotopic
(rel $\partial\alpha$) as arcs in $K_{t_{i+1}}\cup H'$. Consider
an ambient isotopy $\kappa$ of $K_{t_{i+1}}\cup H'$ such that
$\kappa(\alpha')=\alpha$ and fixing $\partial K_{t_{i+1}}\cup H'$.
Adjust $\kappa$ so that $\kappa\circ\iota(N)=N$. Now replace
$K_{t_i}$ by $\kappa(K_{t_i})$, so $H'$ is preserved. This
completes the description of the operation, which we call an {\em
unlinking along $R_{t_i}$ and $S$}. Clearly such an unlinking
reduces the number of holes in $R_{t_i}$.
\begin{proposition}\label{P:unlinking2}
Let $0=t_0<t_1<\dots<t_{k-1}<t_k=1$ be a complete sequence of regular
values. Suppose that $\Delta$ is a half-disc for $Q_0\subseteq Q_1$,
standard with respect to $\{\,t_i\,\}$. Then there exists a sequence
of unlinkings taking $Q_0$ to $Q_0'\subseteq Q_1$ for which $\Delta$
is an unholed half-disc. Moreover
$Q_0'\cap\mathcal{E}_1=Q_0\cap\mathcal{E}_1$.
\end{proposition}
\begin{proof}[Sketch of proof]
The proof is essentially the same as that of Proposition
\ref{P:unlinking}. We just note that here also, as in Section
\ref{TrainTrackSection}, a rectangle $R_{t_i}$ will be holed only if
$K_{t_i}\subseteq K_{t_{i+1}}$ is in a situation like Case 2 of Figure
\ref{Aut3Splitting} (see Remark \ref{UnholedHalf-Disc2}
above). Therefore one can always find a spot of $K_{t_{i+1}}$
containing a single spot of $K_{t_i}$ and containing a side of
$R_{t_i}$. A finite sequence of unlinkings in $K_{t_{i+1}}$ yields an
unholed rectangle $R_{t_i}$. As before, the final unholed half-disc
$\Delta$ is obtained by induction on $i$. The other conclusion of the
proposition clearly also holds here.
\end{proof}
The last ingredient needed for the proof of Theorem
\ref{CompressionTrainTrackTheorem} is concentricity. The following is
the analogue of Lemma \ref{L:concentric} needed in this setting.
\begin{lemma}\label{L:concentric2}
In the situation described in Proposition \ref{P:unlinking2} we have
that $Q_0'\subseteq Q_1$ are concentric.
\end{lemma}
\begin{proof}[Sketch of proof]
As in Section \ref{TrainTrackSection} the rectangles $R_{t_i}$ in the
original half-disc $\Delta$ will be holed only in a situation like
Case 2 of Figure \ref{Aut3Splitting} (see Remark
\ref{UnholedHalf-Disc2}). Therefore, as in Lemma \ref{L:concentric},
an unlinking does not interfere with the pinching half-discs. The
argument is completed as before.
\end{proof}
\begin{proof}[Proof of Theorem \ref{CompressionTrainTrackTheorem}]
As usual, the proof is analogous to that of Theorem
\ref{TrainTrackTheorem} (which was done as the proof of
Theorem~\ref{BackTrackTheorem}).
Consider $f\in\mathcal{A}_f$. If $g=f|_{Q'}$ is reducible or periodic,
the proof is over, so assume that $g$ is generic, with growth rate
$\lambda$. If it is not train-track generic there is an integer $m<1$
such that $Q_m$ back-tracks in $Q_1$ (Lemma \ref{L:back-track}). Let
$K_m\subseteq K_1$ be the spotted ball in $K_1$ (itself a spotted ball or
product) as in the lemma, and assume that $m$ is the greatest such
value. Let $D_m^0$, $D_m^1$ be the two spots of $K_m$ as in the lemma,
contained in a single spot $D$ of $K_1$. Consider the increasing
sequence of consecutive integers $m=i_0, i_1,\dots,i_{1-m}=1$. Then
there are two spots $D^0$, $D^1$ of $K_{i_{(2-m)}}=K_0$ containing
$D_m^0$, $D_m^1$ respectively (hence $D^0$, $D^1\subseteq D$). The
parallelism in the conclusion of Lemma \ref{L:back-track} gives an arc
$\sigma\subseteq\partial K_0$ as in the statement of Proposition
\ref{P:half-disc2}, yielding a half-disc $\Delta$ for $Q_0$ in $Q_1$
connecting $D^0$ and $D^1$.
Choose a complete sequence $0=t_0<\dots<t_k=1$ of regular values and
assume that $\Delta$ is in standard position with respect to
$\{t_i\}$. Apply Proposition \ref{P:unlinking2} to obtain $\Delta$
unholed. Lemma \ref{L:concentric2} assures that an unlinking preserves
$Q'$, $H'$. This operation does not change the growth rate $\lambda$
of $g$.
Skipping details, one proceeds as in \cite{UO:Autos} and Theorem
\ref{TrainTrackTheorem}, using $\Delta$ to realize down-moves or
diversions, respective analogues of folds and pulling tight, as in
\cite{BH:Tracks} (note that ``splittings'' may be needed as a
preparation for these moves). After performing a down-move the
resulting automorphism $g'$ may be either periodic, reducible or
generic. In the first two cases the proof is done, so assume it is
generic. The operation increases the greatest value $m$ in the
beginning of the proof, therefore repetition of this process has to
stop eventually (when $m=0$).
After performing a diversion, which is the case when $m=0$, the
resulting $g'$ may also be periodic, reducible or generic. Again
assuming it is generic, its growth rate $\lambda'$ is strictly
smaller than $\lambda$, therefore repetition of this process also has
to stop.
The procedure above, which can be carried out as long as $g'$ is
generic admitting back-tracking, yields $g'$ either periodic,
reducible or generic without back-tracking in $H$, completing the
proof for $g'$.
We recall that the conclusions on $f'|_{Q}$, $f'|_{H'}$ follow from
Proposition \ref{FurtherReducingSurface}.
\end{proof}
\hop
\hop
\bibliographystyle{hplain} \bibliography{ReferencesUO3}
\end{document}
\begin{document}
\maketitle
\vskip 0.5in
\def\HDS{half-disc sum}
\def\Im{\text{Im}}
\def\im{\text{Im}}
\def\rel{\text{ rel }}
\def\irred{irreducible}
\def\half{spinal pair }
\def\spinal{\half}
\def\spinals{\halfs}
\def\halfs{spinal pairs }
\def\reals{\mathbb R}
\def\rationals{\mathbb Q}
\def\complex{\mathbb C}
\def\naturals{\mathbb N}
\def\integers{\mathbb Z}
\def\id{\text{id}}
\def\proj{P}
\def\hyp {\hbox {\rm {H \kern -2.8ex I}\kern 1.15ex}}
\def\Diff{\text{Diff}}
\def\weight#1#2#3{{#1}\raise2.5pt\hbox{$\centerdot$}\left({#2},{#3}\right)}
\def\intr{{\rm int}}
\def\inter{\ \raise4pt\hbox{$^\circ$}\kern -1.6ex}
\def\Cal{\cal}
\def\from{:}
\def\inverse{^{-1}}
\def\Max{{\rm Max}}
\def\Min{{\rm Min}}
\def\fr{{\rm fr}}
\def\embed{\hookrightarrow}
\def\Genus{{\rm Genus}}
\def\Z{Z}
\def\X{X}
\def\interior{\text{int}}
\def\roster{\begin{enumerate}}
\def\endroster{\end{enumerate}}
\def\intersect{\cap}
\def\definition{\begin{defn}}
\def\enddefinition{\end{defn}}
\def\subhead{\subsection\{}
\def\theorem{thm}
\def\endsubhead{\}}
\def\head{\section\{}
\def\endhead{\}}
\def\example{\begin{ex}}
\def\endexample{\end{ex}}
\def\ves{\vs}
\def\mZ{{\mathbb Z}}
\def\M{M(\Phi)}
\def\bdry{\partial}
\def\hop{\vskip 0.15in}
\def\jerk{\vskip 0.08in}
\def\mathring{\inter}
\def\trip{\vskip 0.09in}
\def\H{\mathscr{H}}
\def\S{\mathscr{S}}
\def\E{\mathscr{E}}
\def\K{\mathscr{K}}
\def\B{\mathscr{B}}
\def\cl{\text{Cl}}
\begin{abstract} We analyze the mapping class group $\H_x(W)$ of automorphisms of the exterior boundary $W$ of a compression
body $(Q,V)$ of dimension 3 or 4 which extend over the compression body. Here $V$ is the interior boundary of the compression body $(Q,V)$. Those
automorphisms which extend as automorphisms of
$(Q,V)$ rel
$V$ are called discrepant automorphisms, forming the mapping class group
$\H_d(W)$ of discrepant automorphisms of $W$ in $Q$. If $\H(V)$ denotes the mapping class group of $V$, we describe a short exact sequence of
mapping class groups relating $\H_d(W)$, $\H_x(W)$, and $\H(V)$.
For an orientable, compact, reducible 3-manifold $W$, there is a canonical ``maximal" 4-dimensional compression body
whose exterior boundary is
$W$ and whose interior boundary is the disjoint union of the irreducible summands of $W$. Using the canonical compression body
$Q$, we show that the mapping class group
$\H(W)$ of
$W$ can be identified with
$\H_x(W)$. Thus we obtain a short exact sequence for the mapping class group of a 3-manifold, which gives
the mapping class group of the disjoint union of irreducible summands as a quotient of the entire mapping class
group by the group of discrepant automorphisms. The group of discrepant automorphisms is described in terms of
generators, namely certain ``slide" automorphisms, ``spins," and ``Dehn twists" on 2-spheres.
Much of the motivation for the results in this paper comes from a research
program for classifying automorphisms of compact 3-manifolds, in the spirit of the Nielsen-Thurston classification of automorphisms of
surfaces.
\end{abstract}
\section{Introduction}\label{Intro}
\begin{definition} A {\it compression body} is a manifold triple $(Q,W,V)$ of any dimension $n\ge 3$ constructed as
$V\times I$ with 1-handles attached to $V\times 1$, and with $V=V\times 0$. Here $V$ is a manifold, possibly with
boundary, of dimension $n-1\ge 2$. We use the symbol $A$ to denote $\bdry
V\times I$ and
$W$ to denote
$\bdry Q-(V\cup
\intr(A))$. See Figure \ref{MapCompression}. When $W$ is closed, we can regard the compression body as a manifold pair, and even
when $\bdry W\ne\emptyset$, we will often denote the compression body simply as $(Q,V)$.
\end{definition}
\begin{figure}[ht]
\centering
\psfrag{S}{\fontsize{\figurefontsize}{12}$S$}\psfrag{H}{\fontsize{\figurefontsize}{12}$H$}\psfrag{a}{\fontsize{\figurefontsize}{12}$\alpha$}
\scalebox{1.0}{\includegraphics{MapCompression.eps}} \caption{\small
Compression body.} \label{MapCompression}
\end{figure}
\noindent{\bf Remark:} In most applications, a compression body satisfies the condition that $V$ contains no sphere components, but we choose not to
make this a part of the definition.
\hop
We note that, when $\bdry V\ne\emptyset$, it is crucial in this paper that there be a product ``buffer zone" $A$ between $\bdry V$ and
$\bdry W$. Some applications of the isotopy extension theorem would fail without this buffer.
\begin{definition} The symbol $\H$ will be used to denote the mapping class group of orientation preserving
automorphisms of an orientable manifold, a manifold pair, or a manifold triple. When $(Q,V)$ is a compression body, and $\bdry
V=\emptyset$,
$\H(Q,V)$ is the mapping class group of the manifold pair; when $\bdry V\ne\emptyset$ by abuse of notation, we use $\H(Q,V)$ to
denote $\H(Q, W,V)$.
\end{definition}
\noindent{\bf Convention:} Throughout this paper, manifolds will be compact and orientable; automorphisms will be orientation preserving.
\begin{definition}\label{BasicDefinitions} Suppose $(Q,V)$ is a compression body. Let
$\H_x (W)$ denote the image of the restriction homomorphism $\H(Q,V)\to \H(W)$, i.e. the subgroup of $\H(W)$ consisting of automorphisms of
$W$ which extend over
$Q$ to yield an element of $\H (Q,V)$. We call the elements of $\H_x (W)$ {\it extendible} automorphisms of $W$.
Let $\H_d(W)$ denote the image of the restriction map $\H(Q
\rel V)\to \H(W)$, i.e. the subgroup of $\H_x(W)$ consisting of
isotopy classes of automorphisms
$f$ of
$W$ which extend over $Q$ such that $f|_V$ becomes the identity. We call the elements of $\H_d (W)$ {\it discrepant} automorphisms of $W$.
\end{definition}
A fundamental fact well known for compression bodies of dimension 3 is that an automorphism of $W$ which
extends over a compression body $Q$ with $\bdry_eQ=W$ uniquely determines up to isotopy the restriction of the
extended automorphism to $V=\bdry_iQ$. In general we have a weaker statement:
\begin{thm}\label{FundamentalTheorem} Suppose $(Q,V)$ is a compression body of dimension $n\ge 3$. Suppose at most one component of $V$ is
homeomorphic to a sphere
$S^{n-1}$ and suppose each component of
$V$ has universal cover either contractible or homeomorphic to $S^{n-1}$.
If $f:(Q,V)\to (Q,V)$ is an automorphism with
$f|_W$ isotopic to the identity, then:
\jerk
\noindent (a) {$f|_V$ is homotopic to the identity. It follows that for an automorphism $f$ of $(Q,V)$,
$f|_W$ determines
$f|_V$ up to homotopy.}
\jerk
\noindent (b) {If $Q$ is 3-dimensional, $f|_W$ determines $f|_V$ up to isotopy, and $\H_x(W)\approx \H(Q,V)$. }
\jerk
\noindent (c) {If $Q$ is 4-dimensional, and each component of $V$ is irreducible, then $f|_W$ determines $f|_V$ up to isotopy.}
\end{thm}
Suppose $(Q,V)$ is a compression body of dimension 3 or 4 satisfying the conditions of Theorem \ref{FundamentalTheorem} (b) or (c). Then
there is an {\it eduction} homomorphism $e: \H_x(W)\to \H(V)$. For $f\in \H_x(W)$, we extend $f$ to $\bar
f:(Q,V)\to (Q,V)$, then we restrict $\bar f$ to $V$ obtaining $e(f):V\to V$. By the theorem, $e$ is well-defined.
\begin{thm}\label{SequenceThm} Suppose $(Q,V)$ is a compression body of dimension 3 or 4. Suppose at most one component of $V$ is a sphere. In case
$Q$ has dimension 4, suppose every component of
$V$ is a 3-manifold whose universal cover is contractible or $S^3$. Then the
following is a short exact sequence of mapping class groups.
$$1\to \H_d(W)\to \H_x(W)\xrightarrow{e} \H(V)\to 1.$$
\noindent The first map is inclusion, the second map is eduction.
\end{thm}
There is a {\it canonical 4-dimensional compression body} $(Q,V)$ associated to a compact reducible 3-manifold $W$ without
boundary spheres, see Section \ref{ThreeManifoldSection}. It has the following properties.
\begin{proposition}\label{UniquenessProposition} The canonical compression body $(Q,V)$ associated to a compact reducible 3-manifold $W$ has exterior
boundary
$W$ and interior boundary the disjoint union of its irreducible summands. It is unique in the following sense: If $(Q_1,W,V_1)$ and $(Q_2,W,V_2)$ are
two canonical compression bodies, and $v:V_1\to V_2$ is any homeomorphism, then there is a homeomorphism $g:(Q_1,V_1)\to (Q_2,V_2)$
with $g|_{V_1}=v$.
\end{proposition}
We will initially construct a canonical compression body using a ``marking" by a certain type of system $\S$ of essential spheres in
$W$, called a {\it symmetric system}. The result is the
{\it canonical compression body associated to $W$ with the symmetric system $\S$}. The canonical compression body associated to $W$ with $\S$ is
unique in a different sense: If $(Q_1,W,V_1)$ and $(Q_2,W,V_2)$ are canonical compression bodies associated to $W$ with $\S$, then the identity on
$W$ extends to a homeomorphism $g:Q_1\to Q_2$, and $g|_{V_1}$ is unique up to isotopy. Only later will we recognize the uniqueness described in
Proposition \ref{UniquenessProposition} when we compare canonical compression bodies constructed from different symmetric systems of spheres.
There are well-known automorphisms of
$W$ which, as we shall see, lie in
$\H_d(W)$ with respect to the canonical compression body $Q$. The most important ones are called {\it slides}. Briefly, a slide
is an automorphism of
$W$ obtained by cutting on an essential sphere $S$ to obtain $W|S$ ($W$ cut on $S$), capping one of the duplicate
boundary spheres coming from $S$ with a ball $B$ to obtain $W'$, sliding the ball along a closed curve in this capped manifold and
extending the isotopy of $B$ to the manifold, then removing
$B$ and reglueing on the two boundary spheres. An {\it interchanging slide} is done using
two separating essential spheres cutting homeomorphic manifolds from $W$. Cut on the two spheres $S_1$ and $S_2$, remove the
homeomorphic submanifolds, cap the two boundary spheres with two balls $B_1$ and $B_2$, choose two paths to move $B_1$ to $B_2$
and $B_2$ to $B_1$, extend the isotopy, remove the balls, and reglue. Interchanging slides are used to interchange
prime summands. A {\it spin} does a ``half rotation" on an
$S^2\times S^1$ summand. In addition, for each embedded essential sphere, there is an automorphism of at most order 2 called a {\it Dehn twist}
on the sphere. If slides, Dehn twists, interchanging slides, and spins are done using spheres from a given symmetric system $\S$ (or
certain separating spheres associated to non-separating spheres of $\S$) then they are called {\it
$\S$-slides},
{\it $\S$-Dehn twists},
{\it $\S$-interchanging slides}, and
{\it $\S$-spins}. An $\S$-spin is a ``half-Dehn-twist" on a separating sphere associated to a non-separating
sphere of $\S$, not on a sphere of $\S$.
\begin{proposition}\label{CharacterizationProp} If $Q$ is the canonical compression body associated to the compact
orientable, 3-manifold
$W$ with symmetric system $\S$ and $W=\bdry_eQ$, $V=\bdry_iQ$, then
\jerk
\noindent (a) $\H_x(W)=\H(W)$.
\jerk
\noindent (b) The mapping class group $\H_d(W)$ is generated by $\S$-slides, $\S$-Dehn twists, $\S$-slide
interchanges of $S^2\times S^1$ summands, and $\S$-spins.
\end{proposition}
Proposition \ref{CharacterizationProp} (b) is a version of a standard result due to E. C{\'e}sar de S{\'a}
\cite{EC:Automorphisms}, see also M. Scharlemann in Appendix A of \cite{FB:CompressionBody}, and
D. McCullough in \cite{DM:MappingSurvey} (p. 69).
From Proposition \ref{CharacterizationProp} and Theorem \ref{SequenceThm} we obtain:
\begin{thm} \label{ThreeManifoldSequenceThm} If $W$ is a compact, orientable, reducible 3-manifold, and $V$ is the disjoint
union of its irreducible summands, then the following sequence is exact:
$$1\to \H_d(W)\to \H(W)\xrightarrow{e} \H(V)\to 1.$$
\noindent Here $\H_d(W)$ is the group of discrepant automorphisms for a canonical compression body, the first map is inclusion, and the second map
is eduction with respect to a canonical compression body.
\end{thm}
Using automorphisms of compression bodies, one can reinterpret some other useful results. Suppose
$W$ is an
$m$-manifold ($m=2$ or $m=3$) with some sphere boundary components. Let
$V_0$ denote the manifold obtained by capping the sphere boundary components with balls and let $P$ denote the union of capping
balls in $V_0$. There is another appropriately chosen canonical compression body $(Q,V)$ for $W$, such that $V$ is the disjoint
union $V_0\sqcup P$. With respect to this compression body we have a group of discrepant automorphisms, $\H_d(W)$. We say the
pair $(V_0, P)$ is a {\it spotted manifold}. The mapping class group of this spotted manifold, as a pair, is the same as
$\H(W)$. The following result probably exists in the literature in some form, and can be proven more directly using other terminology.
For example, a special case is mentioned
in
\cite{JB:Braids}. The reason for including it here is to make an analogy with Theorem \ref{SequenceThm}. Let
$f:W\to W$ be an automorphism. Then $f$ clearly induces a {\it capped automorphism}
$f_c:V_0\to V_0$. Simply extend $f$ to the balls of $P$ to obtain $f_c$. Then $f_c$ is determined up to isotopy by $f$.
\begin{thm}\label{SpottedThm} With $W$ and $(Q,V)$ as above, and $V_0$ a connected surface or a connected irreducible 3-manifold,
there is an exact sequence
$$1\to \H_d(W)\to \H(W)\xrightarrow{e} \H(V_0)\times \H(P)\to 1.$$
\noindent $\H(P)$ is the group of
permutations of the balls of $P$; $\H(V_0)$ is the mapping class group of the capped manifold. The group $\H_d(W)$ of discrepant
automorphisms is the subgroup of automorphisms $f$ of $W$ which map each boundary sphere to itself such that the capped automorphism $f_c$ is isotopic
to the identity. The map
$\H_d(W)\to
\H(W)$ is inclusion. The eduction map $e:\H(W)\to \H(V_0)\times \H(P)$ takes $f\in \H(W)$ to $(f_c,\rho)$ where $\rho$ is the
permutation of
$P$ induced by $f$.
\end{thm}
In Section \ref{Sequence} we prove a lemma about mapping class groups of triples. This is the basis for the results about mapping class
groups of compression bodies. Let
$M$ be an
$n$-ma\-ni\-fold, and let $W$ and $V$ be disjoint compact submanifolds of any (possibly different) dimensions $\le n$.
The submanifolds $W$ and
$V$ need not be properly embedded (in the sense that $\bdry W\embed \bdry M$) but they must be submanifolds for
which the isotopy extension theorem applies, for example smooth submanifolds with respect to some smooth structure.
In this more general setting, we make some of the same definitions as in Definition \ref{BasicDefinitions}, as well as some new ones.
\begin{defns} Let $(M,W,V)$ be a triple as above.
We define $\H_x(V)$, the group of {\it extendible automorphisms} of $V$, as the image of the restriction map $\H(M,W,V)\to \H(V)$. Similarly, we
define
$\H_x(W,V)$, the group of {\it extendible automorphisms} of the pair $(W,V)$, as the image of the restriction map $\H(M,W,V)\to \H(W,V)$, and the group
$\H_x (W)$ of {\it extendible automorphisms} of $W$ as the image of the restriction homomorphism $\H(M,W,V)\to \H(W)$. The group $\H_d(W)$ of
{\it discrepant} automorphisms of $W$ is the image of the restriction map $\H((M,W) \rel V)\to \H(W)$.
\end{defns}
\begin{lemma}\label{SequenceGeneralLemma} Suppose $(M,W,V)$ is a triple as above. The
following is a short exact sequence of mapping class groups.
$$1\to \H_d(W)\to \H_x(W,V)\to \H_x(V)\to 1.$$
\noindent The first map is inclusion of an element of $\H_d(W)$ extended to $(W,V)$ by the identity on $V$, the second map is restriction.
\end{lemma}
Let $M$ be an irreducible,
$\bdry$-reducible 3-manifold. There is a 3-dimensional characteristic compression body embedded in $M$ and containing $\bdry M$ as its exterior
boundary, first described by Francis Bonahon, which is unique up to isotopy, see
\cite{FB:CompressionBody}, or see \cite{UO:Autos}, \cite{OC:Autos}. Removing this from $M$ yields an irreducible
$\bdry$-irreducible 3-manifold. The following is an application of Lemma \ref{SequenceGeneralLemma} which describes how mapping
class groups interact with the characteristic compression body.
\begin{thm}\label{MappingCharacteristicThm} Let $M$ be an irreducible, $\bdry$-reducible 3-manifold with characteristic compression
body
$Q$ having exterior boundary $W=\bdry M$ and interior boundary $V$. Cutting $M$ on $V$ yields $Q$ and an irreducible, $\bdry$-irreducible
manifold
$\hat M$. There is an exact sequence
$$1\to \H_d(W)\to \H_x(W)\to \H_x(V)\to 1,$$
\noindent where $ \H_d(W)$ denotes the usual group of discrepant automorphisms for $Q$, $\H_x(W)$ denotes the group of automorphisms
which extend to $M$ and $\H_x(V)$ denotes the automorphisms of $V$ which extend to $\hat M$.
\end{thm}
To a large degree, this paper was motivated by a research program of the author and Leonardo N. Carvalho, to classify
automorphisms of compact 3-manifolds in the spirit of the Nielsen-Thurston classification. This began with a paper of the author
classifying automorphisms of 3-dimensional handlebodies and compression bodies, see \cite{UO:Autos}, and continued with a paper of
Carvalho, \cite{LNC:Tightness}. Recently, Carvalho and the author completed a paper giving a classification of
automorphisms for compact 3-manifolds, see \cite {OC:Autos}. At this writing, the paper \cite {OC:Autos} needs to be revised to make it
consistent with this paper. In the case of a reducible manifold
$W$, the classification applies to elements of $\H_d(W)$ and $\H(V)$, where $V$ is the disjoint union of the irreducible summands of $W$, as
above. The elements of $\H_d(W)$ are represented by automorphisms of $W$ each having support in a connected sum embedded in $W$ of $S^2\times S^1$'s
and handlebodies. It is these automorphisms, above all, which are remarkably difficult to understand from the point of view of finding dynamically
nice representatives. The theory developed so far describes these in terms of automorphisms of compression bodies and handlebodies. Throughout the
theory, both in \cite{UO:Autos} and in \cite {OC:Autos}, the notion of a ``spotted manifold" arises as an irritating technical detail, which we point
out, but do not fully explain. Theorem \ref{SpottedThm} gives some explanation of the phenomenon.
I thank Leonardo N. Carvalho and Allen Hatcher for helpful discussions related to this paper.
\section{Proof of Theorem \ref{FundamentalTheorem}.}\label{Fundamental}
\begin{proof}[Proof of Theorem \ref{FundamentalTheorem}] Suppose $(Q,V)$ is a compression body of dimension $n\ge 3$. We suppose that
every component $V_i$ of $V$ has universal cover either contractible or homeomorphic to $S^{n-1}$. We suppose also that at most one component of
$V$ is homeomorphic to a sphere. A compression body structure for $Q$ is defined by the system $\E$ of $(n-1)$-balls together with a product structure on some
components of $Q|\E$, and an identification of each of the remaining components of $Q|\E$ with the $n$-ball. As elsewhere in this paper, we choose a
particular type of compression body structure corresponding to a system of balls $\E$ such that exactly one duplicate $E_i'$ of a ball $E_i$ of $\E$
appears on
$V_i\times 1$ for each product component of $V_i\times I$ of $Q|\E$. In terms of handles, this means we are regarding
$Q$ as being obtained from $n$-balls and products by attaching just one 1-handle to each $V_i\times I$. For each
component $V_i$ of $V$ we have an embedding $V_i\times I\embed Q$.
Suppose $f:(Q,V)\to (Q,V)$ is the identity on $W$. We will eventually show that for every component $V_0$ of $V$, $f|_{V_0}$ is homotopic to the
identity on $V_0$ (via a homotopy of pairs preserving the boundary). If $V$ contains a single sphere component and
$V_0$ is this sphere component, then clearly $f(V_0)=V_0$ and $f|_{V_0}$ is homotopic to the identity, since every degree 1 map of $S^{n-1}$ is
homotopic to the identity.
If $V_0$ is not a sphere, we will begin by showing that $f(V_0)=V_0$. Consider the case where $\pi_1(V_0)$ is non-trivial. Clearly $\pi_1(Q)$ can be
expressed as a free product of the
$\pi_1(V_i)$'s and some $\integers$'s. If $f$ non-trivially permuted $V_i$'s homeomorphic to $V_0$, the induced map $f_*:\pi_1(Q)\to \pi_1(Q)$ would
not be the identity, since it would conjugate factors of the free product. On the other hand $f_*$ must be the identity, since the inclusion $j:W\to
Q$ induces a surjection on $\pi_1$ and $f:W\to W$ induces the identity on $\pi_1$. Next we consider the case that
$V_0$ is contractible. Then $\bdry V_0\ne \emptyset$. In this case also, we see that $f(V_0)=V_0$, since $f$ is the identity on
$(\bdry V_0\times 1)\subset \bdry W$, and so $f(\bdry V_0)=\bdry V_0$. From our assumptions, it is always the case that $V_0$ is contractible, is a
sphere, or has non-trivial $\pi_1$. We conclude that
$f(V_0)=V_0$ in all cases.
It remains to show that if $V_0$ is not a sphere, then $f|_{V_0}$ is homotopic to the identity. When $Q$ has dimension 3, this is a standard
result, see for example \cite{UO:Autos}, \cite{KJ:Combinatorics3Mfds}. The method of proof, using ``vertical essential surfaces" in $V_0\times
I$, depends on the fact that $V_0$ is not a sphere. In fact, the method shows that $f|_{V_0}$ is isotopic to the identity, which proves the theorem in
this dimension.
Finally, it remains to show that $f|_{V_0}$ is homotopic to the identity when $n\ge 4$ and $V_0$ is not a sphere. To do this, we will work
in a cover of
$Q$. We may assume that $V_0$ either is finitely covered by the sphere (but is not itself a sphere) or has a contractible universal cover. We will denote by
$\tilde Q_0$ the cover of $Q$ corresponding to the subgroup $\pi_1(V_0)$ in $\pi_1(Q)$. The compression body $Q$ can be constructed from a disjoint
union of connected product components and balls, joined by 1-handles. Covering space theory then shows that the cover
$\tilde Q_0$ can also be expressed as a disjoint union of connected products and balls, joined by 1-handles. The balls and 1-handles are lifts of the
balls and 1-handles used to construct
$Q$. The products involved are: one product of the form $V_0\times I$, products of the form
$S^{n-1}\times I$, and products of the form
$U\times I$, where
$U$ is contractible, these being connected components of the preimages of the connected products used to construct $Q$. There is a distinguished
component of the preimage of
$V_0$ in
$\tilde Q_0$ which is homeomorphic to $V_0$; we denote it $V_0$ rather than $\tilde V_0$, since the latter would
probably be interpreted as the entire preimage of $V_0$ in $\tilde Q_0$. If $V_0$ has non-trivial fundamental group, then this is
the only component of the preimage of $V_0$ homeomorphic to $V_0$, otherwise it is distinguished simply by the construction of
the cover corresponding to $\pi_1(V_0)$, via choices of base points. Now let
$\breve Q_0$ denote the manifold obtained from $\tilde Q_0$ by capping all $S^{n-1}$ boundary spheres by $n$-balls.
Evidently, $\breve Q_0$ has the homotopy type of $V_0$. This can be seen, for example, by noting that $\tilde Q_0$ is
obtained from $V_0\times I$ and a collection of $n$-balls by using 1-handles to attach universal covers of $V_i\times
I$, $i\ne 0$. The universal covers of $V_i\times I$ have the form $S^{n-1}\times I$ or $U\times I$ where $U$ is
contractible. When we cap the boundary spheres of $\tilde Q_0$ to obtain $\breve Q_0$, the $(S^{n-1}\times I)$'s are
capped and become balls.
Having described the cover $\tilde Q_0$, and the capped cover $\breve Q_0$, we begin the argument for showing that if
$f:(Q,V)\to (Q,V)$ is an automorphism which extends the identity on $W$, and $V_0$ is a non-sphere component of $V$, then $f|_{V_0}$ is homotopic to the
identity. We have chosen a compression body structure for $Q$
in terms of $\E$, the product $V\times I$, and the collection of balls of $Q|\E$. Using the automorphism $f$
we obtain another compression body structure: $Q|f(\E)$ contains the product $f(V\times I)$, which includes
$f(V_0\times I)$ as a component. Recall that we chose $\E$
such that exactly one duplicate $E_0'$ of exactly one ball $E_0$ of
$\E$ appears on $V_0\times I$. We can lift the inclusion map
$i:V_0\times I\to Q$ to an embedding
$\tilde i: V_0\times I\embed
\tilde Q_0\subset \breve Q_0$. Again we abuse notation by denoting the image product as
$V_0\times I$, and we use $E_0$ to denote the lift of $E_0$ as well. Similarly we lift $f\circ i:V_0\times I\to
Q$ to an embedding
$\widetilde{f\circ i}:V_0\times I\to
\tilde Q_0\subset \breve Q_0$.
\begin{claim} The lifts $\tilde i$ and $\widetilde
{f\circ i}$ coincide on $(V_0\times 1)-\intr(E_0)$. In other words, $\tilde f$ is the identity on $(V_0\times 1)-\intr(E_0)$.
\end{claim}
\begin{proof}[Proof of Claim] We consider two cases:
\noindent{\it Case 1: $\bdry V_0\ne \emptyset$.} The lift $\tilde f$ of $f$ must take $\bdry (V_0\times I)-\intr(E_0)$ in $\tilde Q_0$
to itself by uniqueness of lifting, since in this case $\bdry (V_0\times I)-\intr(E_0)$ is connected.
\noindent{\it Case 2: $V_0$ has nontrivial fundamental group.} In this case, lifting $f\circ i$ and $i$, we see that $\widetilde {f\circ
i}(V_0\times 1-E_0)$ and $\tilde {i}(V_0\times 1-E_0)$ are components in the cover $\tilde
Q_0$ of the preimage of $V_0\times 1-E_0\subset W\subset Q$. From the description
of the cover $\tilde Q_0$, there is only one such component homeomorphic to $V_0-E_0$, hence both lifts must map
$(V_0\times 1)-E_0$ to the same component, and the lifts $\widetilde{f\circ i}$ and $\tilde i$ must coincide on $(V_0\times 1)-\intr(E_0)$ by
uniqueness of lifting.
These two cases cover all the possibilities for $V_0$, for if $V_0$ has trivial fundamental group, then $V_0$ has a contractible
cover by assumption, since we are ruling out the case that $V_0$ is a sphere. But this implies that the boundary is non-empty, and we apply Case 1.
Otherwise, we apply Case 2.
This completes the proof of the claim.
\end{proof}
Following our convention of writing $V_0\times I$ for $\tilde i(V_0\times I)$, we now conclude that though in general $\tilde i|_{E_0}\ne
\widetilde {f\circ i}|_{E_0}$, these lifts are equal on $\bdry E_0$.
\begin{claim} The map $\widetilde{f\circ i}|_{E_0}$ is homotopic rel boundary to $\tilde i|_{E_0}$ in $\breve Q_0$.
\end{claim}
\begin{proof}[Proof of Claim] We consider two cases; in Case 1 the universal cover of $V_0$ is contractible and in
Case 2 the universal cover of $V_0$ is $S^{n-1}$. Let $C\embed \tilde W$ denote $\bdry E_0$, an $(n-2)$-sphere, and
$\tilde W_0$ denote the preimage of $W$ in $\tilde Q_0\subset \breve Q_0$, and choose a base point $w_0$ in $C$.
\hop
\noindent{\it Case 1: The universal cover of $V_0$ is contractible.} In this case $\breve Q_0$ deformation retracts to $V_0\subset V_0\times I$ and
the universal cover $\widetilde{\breve Q}_0$ of $\breve Q_0$ is contractible, by the discussion above. Hence all higher homotopy groups of $\breve Q_0$
are trivial. Using a trivial application of the long exact sequence for relative homotopy groups we obtain:
$$\cdots \pi_{n-1}(\breve Q_0,w_0)\to \pi_{n-1}(\breve Q_0,C,w_0)\to \pi_{n-2}(C,w_0)\to \pi_{n-2}(\breve
Q_0,w_0)\cdots $$
or
$$\cdots 0\to \pi_{n-1}(\breve Q_0,C,w_0)\to\integers\to 0 \cdots, $$
\noindent which implies that all maps of $(n-1)$-balls to $(\breve Q_0,C)$ which are degree one on the boundary are
homotopic.
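Spelled out, the vanishing of the flanking groups says that the boundary map in the sequence above is an isomorphism,
$$\bdry: \pi_{n-1}(\breve Q_0,C,w_0)\xrightarrow{\ \approx\ }\pi_{n-2}(C,w_0)\approx\integers,$$
\noindent so the relative homotopy class of a map of an $(n-1)$-ball is determined by the degree of its boundary on $C$. In particular the classes of $\tilde i|_{E_0}$ and $\widetilde{f\circ i}|_{E_0}$ coincide, since both restrict to the same degree one map on $\bdry E_0=C$.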
\hop
\noindent{\it Case 2: The universal cover of $V_0$ is $S^{n-1}$ (but $V_0$ is not homeomorphic to $S^{n-1}$).} In this case again $\breve
Q_0$ deformation retracts to
$V_0\subset V_0\times I\subset \breve Q_0$ by the discussion above. The universal cover $\widetilde{\breve Q}_0$ of $\breve Q_0$ is homotopy equivalent
to $S^{n-1}$, and it has the structure of $S^{n-1}\times I$ attached by 1-handles to products of the form
$U\times I$ with $U$ contractible, and $n$-balls, some of the $n$-balls coming from capped preimages of $V_i\times I$
where $V_i$ is covered by $S^{n-1}$. Thus the higher homotopy groups of $\breve Q_0$ are the same as those of $\widetilde{\breve Q}_0$ or $S^{n-1}$.
Again using an application of the long exact sequence for relative homotopy groups we obtain:
$$\cdots\pi_{n-1}(C,w_0)\to \pi_{n-1}(\breve Q_0,w_0)\to \pi_{n-1}(\breve Q_0,C,w_0)\to \pi_{n-2}(C,w_0)\to
\pi_{n-2}(\breve Q_0,w_0)\cdots $$
\noindent Here, $\pi_{n-1}(C,w_0)$ need not be trivial, but $C$ bounds a ball in $\breve Q_0$, hence the image of $\pi_{n-1}(C,w_0)$ in
$\pi_{n-1}(\breve Q_0,w_0)$ is trivial. On the other hand, $\pi_{n-2}(\breve Q_0,w_0)=0$ since $\breve Q_0$ has the
homotopy type of $S^{n-1}$. Thus we obtain the sequence:
$$\cdots 0\to\integers \to \pi_{n-1}(\breve Q_0,C,w_0)\to \integers\to 0\cdots $$
\noindent This sequence splits, since we can define a homomorphism\hfil \break$\pi_{n-2}(C,w_0)\to \pi_{n-1}(\breve
Q_0,C,w_0)$ which takes the generator $[C]$ to the class $[E_0]$, and the composition with the boundary operator is
the identity. Thus $\pi_{n-1}(\breve Q_0,C,w_0)\approx \integers\oplus \integers$. At first glance it appears that the
lift $\tilde f$ of $f$ could take the class $\beta= [\tilde i|_{E_0}]$ to a different class obtained by adding a multiple
of the image of the generator of $\pi_{n-1}(\breve Q_0,w_0)$. We will show this is impossible.
If the universal cover $S^{n-1}$ of $V_0$ is a
degree
$s$ finite cover, then we observe that the generator $\alpha$ of $\pi_{n-1}(\breve Q_0,w_0)$ is
represented by a degree
$s$ covering map $\varphi_1:S^{n-1}\to \tilde i(V_0\times 1)$, whose image includes $s$ copies of $E_0$. Specifically,
$\varphi_1$ is the covering map $\widetilde{\breve Q}_0\to \breve Q_0$ restricted to the lift to $\widetilde{\breve Q}_0$ of $V_0\times 1=\tilde
i(V_0\times 1)$. The map $\varphi_1$ is homotopic to a similar map $\varphi_0:S^{n-1}\to \tilde i(V_0\times 0)$. Since $\tilde f$ preserves
orientation on $V_0\times 0$, it takes the homotopy class of $\varphi_0$ to itself, so $\tilde f_*(\alpha)=\alpha$ in $\pi_{n-1}(\breve Q_0)$. It
follows that $\tilde f$ also preserves the homotopy class of $\varphi_1$ viewed as an element of $\pi_{n-1}(\breve Q_0,C,w_0)$, since
$\tilde f(C)=C$. We denote the image of $\alpha$ in the relative homotopy group by the same symbol $\alpha$. The preimage $\varphi_1\inverse(E_0)$
consists of $s$ discs. Corresponding to the decomposition of $S^{n-1}$ into these $s$ discs and the complement $X$ of their union, there is an equation
in $\pi_{n-1}(\breve Q_0,C,w_0)$: $\alpha=x+s\beta$, where $x$ is represented by $\varphi_1|_X$ and $\beta=[\tilde i|_{E_0}]$. To interpret $\varphi_1|_X$
as an element of relative
$\pi_{n-1}$, we can use the relative Hurewicz theorem, which says that $\pi_{n-1}(\breve Q_0,C,w_0)\approx H_{n-1}(\breve Q_0,C)$.
Applying
$\tilde f$ to the equation $\alpha=x+s\beta$, we obtain the equation $\tilde f_*(\alpha)=\tilde f_*(x)+s\gamma$, where
$\gamma=\tilde f_*(\beta)=[\widetilde
{f\circ i}|_{E_0}]$. Since ${\tilde f}_*(\alpha)=\alpha$, and also $\tilde f_*(x)=x$ since $\tilde f$ is the identity on $(V_0\times 1)-E_0$, we obtain
the equation $\alpha=x+s\gamma$. Combining with the earlier equation $\alpha=x+s\beta$, we obtain $s\beta=s\gamma$. Since this holds in the torsion
free group $\integers\oplus \integers$, it follows that $\beta=\gamma$.
This completes the proof of the claim.
\end{proof}
From the claim, we now know that
$\widetilde {f\circ i}|_{E_0}$ is homotopic to $\tilde i|_{E_0}$. We perform the homotopy rel boundary
on $\widetilde {f\circ i}|_{E}$ and extend to $V_0\times I$ rel $[\bdry (V_0\times I)-E_0]$. We obtain a map
$\tilde i':V_0\times I\to \breve Q$ homotopic to $\widetilde {f\circ i}$ rel $(\bdry (V_0\times
I)-E_0)$ such that
$\tilde i=\tilde i'$ on $V_0\times 1$.
Pasting the two maps $\tilde i$ and $\tilde i'$ so that the domain is two copies of $V_0\times I$ doubled on $V_0\times 1$, we obtain a new map
$H:V_0\times I\to \breve Q_0$ with $H|_{V_0\times 0}=\id$ and $H|_{V_0\times 1}=\tilde f|_{V_0\times 0}$.
Finally
$H$ can be homotoped rel $(V_0\times \bdry I)$ to $ V_0\subset \breve Q_0$ by applying a deformation retraction from
$\breve Q_0$ to $V_0=V_0\times 0$. After this homotopy of $H$, we have $H:V_0\times I\to V_0$ a homotopy in $V_0$ from the identity
to
$f$. In case $\bdry V_0\ne \emptyset$ in the above argument (for homotoping $H$ to $V_0$), one must perform the homotopy as
a homotopy of pairs, or first homotope $H|_{\bdry V_0\times I}$ to
$\bdry V_0$. Whenever $\bdry V_0\ne \emptyset$, our construction gives a homotopy of pairs $(f,\bdry f)$ on $(V_0,\bdry V_0)$.
For $Q$ of dimension 4, with components of $V$ irreducible, Grigori Perelman's proof of Thurston's Geometrization Conjecture shows that the universal cover
of each closed component of $V$ is homeomorphic to $\reals^3$ or $S^3$. Components of $V$ which have non-empty boundary are Haken, possibly
$\bdry$-reducible, and also have contractible universal covers. By the first part of the lemma, it follows that
$f|_V$ is uniquely determined up to homotopy. But it is known that automorphisms of such 3-manifolds which are homotopic (homotopic as
pairs) are isotopic, see
\cite{FW:SufficientlyLarge}, \cite{HS:HomotopyIsotopy}, \cite{BO:HomotopyIsotopy}, \cite{BR:HomotopyIsotopy}, \cite{DG:Rigidity},
\cite{GMT:HomotopyHyperbolic}. It follows that
$f|_V$ is determined by $f|_W$ up to isotopy.
\end{proof}
\section{Proof of Theorem \ref{SequenceThm}.}\label{Sequence}
We begin with a basic lemma which is a consequence of the isotopy extension theorem.
Let $M$ be an $n$-manifold, and let $W$ and $V$ be disjoint compact submanifolds of any (possibly different) dimensions $\le n$.
The submanifolds $W$ and
$V$ need not be properly embedded but they must be submanifolds for which isotopy extension holds, e.g. smooth submanifolds.
Various mapping class groups of the triple $(M,W,V)$ were defined in the introduction, before the statement of Lemma \ref{SequenceGeneralLemma}, which
we now prove.
\begin{proof}[Proof of Lemma \ref{SequenceGeneralLemma}] We need to prove the exactness of the sequence
$$1\to \H_d(W)\to \H_x(W,V)\to \H_x(V)\to 1,$$
\noindent where the first map is inclusion of an element of $\H_d(W)$ extended to $(W,V)$ by the identity on $V$, the second map is
restriction. The restriction map takes the group $\H_x(W,V)$ to $\H_x(V)$ and is clearly surjective. It remains to show that the kernel of this
restriction map is $\H_d(W)$. If $(f,g)\in \H_x(W,V)$ maps to the identity in $\H_x(V)$, then $g$ is isotopic to the identity. By isotopy
extension, we can assume it actually is the identity. Thus the kernel consists of elements $(f,\id)\in \H_x(W,V)$ such that $(f,\id)$ extends
over
$M$, which is the definition of $\H_d(W)$.
\end{proof}
We note that $ \H(W,V)\approx \H(W)\times \H(V)$ is a product, but this does not show that the short exact sequence of Lemma
\ref{SequenceGeneralLemma} splits, nor that the group $\H_x(W,V)$ is a product.
\begin{proof} [Proof of Theorem \ref{SequenceThm}] We will use Lemma \ref{SequenceGeneralLemma}. We apply the lemma to the triple $(Q,V,W)$,
obtaining the exact sequence $$1\to \H_d(W)\to \H_x(W,V)\to \H_x(V)\to 1.$$ Given an element
$f\in \H_x(W,V)$, this extends to an automorphism of
$(Q,V)$, and by Theorem \ref{FundamentalTheorem}, $f|_V$ is determined up to isotopy, so $\H_x(W,V)$ can be replaced by
$\H_x(W)$.
To see that $\H_x(V)$ in the sequence can be replaced by $\H(V)$, we must show that every element of $\H(V)$ is realized as the restriction of an
automorphism of $(Q,V)$. We begin by supposing $f$ is an automorphism of $V$. Observe that $(Q,V)$ can be represented as a (connected)
handlebody
$H$ attached to $V\times 1\subset V\times I$ by 1-handles, with one 1-handle connecting $H$ to each component of $V\times 1$. In other words,
if $V$ has $c$ components, we obtain $Q$ by identifying $c$ disjointly embedded balls in $\bdry H$ with $c$ balls in $V\times 1$, one in each
component of $V\times 1$. Denote the set, or union, of balls in $\bdry H$ by $\B_0$ and the set, or union, of balls in $V\times 1$ by $\B_1$.
There is a homeomorphism $\phi:\B_0\to \B_1$ which we use as a glueing map. Now $(f\times \id)(\B_1)$ is another collection of balls in
$V\times 1$, again with the property that there is exactly one ball in each component of $V\times 1$. We can then isotope $f\times \id$ to
obtain $F:V\times I\to V\times I$ such that $F(\B_1)=\B_1$. Then $F$ permutes the balls of $\B_1$, and $\phi\inverse\circ F\circ \phi$ permutes the balls
of $\B_0$. We use isotopy extension to obtain $G:H\to H$ with $G|_{\B_0}=\phi\inverse\circ F\circ \phi$. Finally, we can combine $F$ and $G$ by
identifying $F(\B_1)$ with $G(\B_0)$ to obtain an automorphism $g:(Q,V)\to (Q,V)$ with $g|_V=f$.
\end{proof}
Using ideas in the above proof, we can also prove Theorem \ref{MappingCharacteristicThm}.
\begin{proof}[Proof of Theorem \ref{MappingCharacteristicThm}]
We apply Lemma \ref{SequenceGeneralLemma}, letting $W=\bdry M=\bdry_eQ$ and $V=\bdry_iQ$, where $Q$ is the characteristic compression
body for $M$. Thus we know that the sequence
$$1\to \H_d(W)\to \H_x(W,V)\to \H_x(V)\to 1$$
\noindent is exact.
In the statement of the theorem, $\H_x(V)$ is defined to be the group of automorphisms of $V$ which extend to $\hat M$, whereas in Lemma
\ref{SequenceGeneralLemma}, it is defined to be the group of automorphisms of $V$ which extend to maps of the triple $(M,W,V)$. In fact, these are the same,
since any automorphism of $V$ also extends to $Q$, as we showed in the above proof of Theorem \ref{SequenceThm}.
To verify the exactness of the sequence in the statement, it remains to show that $\H_x(W,V)$ is the same as $\H_x(W)$. To obtain an
isomorphism
$\phi:\H_x(W,V)\to\H_x(W)$, we simply restrict to $W$, $\phi(h)=h|_W$. Given an automorphism $f$ of $\H_x(W)$ we know there is an
automorphism $g$ of $M$ which extends
$f$. Since the characteristic compression body is unique up to isotopy, we can, using isotopy extension, replace $g$ by an
isotopic automorphism preserving the pair $(W,V)$, and thus preserving $Q$, then restrict $g$ to $(W,V)$ to obtain an automorphism $\psi(f)$ of
the pair. This is well-defined by the well-known case of Theorem \ref{FundamentalTheorem}; namely, for an automorphism of a 3-dimensional
compression body, the restriction to the exterior boundary determines the restriction to the interior boundary up to isotopy. Clearly
$\phi\circ\psi$ is the identity. Now using Theorem
\ref{FundamentalTheorem} again, we can check that $\psi\circ \phi$ is the identity.
\end{proof}
\section{{The Canonical Compression Body\hfill} \newline {for a
3-Manifold.\hfill}
\label{ThreeManifoldSection}}
\begin{defns} Let $W$ be a compact 3-manifold with no sphere boundary components. A
{\it symmetric system} $\S$ of disjointly embedded essential spheres is a system with the property
that
$W|\S$ ($W$ cut on $\S$) consists of one ball-with-holes, $\hat V_*$, and other components $\hat V_i$,
$i=1,\ldots k$, bounded by separating spheres $S_i$ of $\S$, each of which is an irreducible 3-manifold $V_i$ with
one open ball removed.
$\hat V_*$ has one boundary sphere
$S_i$ for each $V_i$, $i=1,\ldots k$, and in addition it has $2\ell$ more spheres, two for each non-separating $S_i$ of $\S$. If
$W$ has
$\ell$
$S^2\times S^1$ prime summands, and $k$ irreducible summands, then there are $\ell$ non-separating $S_i$'s in $\S$
and $k$ separating spheres $S_i$ in $\S$. The symbol $\S$ denotes the set or the union of the spheres of $\S$.
We denote by $\S'$ the set of duplicates of spheres of $\S$ appearing on $\bdry \hat V_*$. Choose an orientation for $\hat V_*$. Each sphere of $\S'$
corresponds either to a separating sphere of $\S$, which obtains an orientation from $\hat V_*$, or it corresponds to a non-separating sphere of $\S$
with an orientation.
We construct a {\it canonical compression body} associated to the 3-manifold $W$ together with a symmetric system $\S$ by
thickening to replace $W$ by $W\times I$, then attaching a 3-handle along $S_i\times 0$ for each $S_i\in \S$ to obtain a
4-manifold with boundary. Thus for each 3-handle attached to $S_i\times 0$, we have a 3-ball $E_i$ in the 4-manifold
with
$\bdry E_i=S_i$. The boundary of the resulting 4-manifold consists of $W$, one
3-sphere
$V_*$, and the disjoint union of the irreducible non-sphere $V_i$'s. We cap the 3-sphere $V_*$ with
a 4-ball to obtain the {\it canonical compression body} $Q$ associated to $W$ marked by $\S$. Note that the
exterior boundary of $Q$ is
$W$ and the interior boundary is $V=\sqcup V_i$.
For each non-separating sphere $S$ in $\S$ we have an {\it associated
separating sphere}
$C(S)$, where $C(S)$ lies in $\hat V_*$ and separates a 3-holed ball containing the duplicates of $S$ coming from cutting on $S$.
Also, we define $ C(E)$ to be the obvious 3-ball in $Q$ bounded by $C(S)$.
\end{defns}
We have defined the canonical compression body as an object associated to the 3-manifold $W$ {\it together with} the system $\S$. This will be our
working definition. Later, we prove the uniqueness result of Proposition \ref{UniquenessProposition}, which allows us to view the canonical
compression body as being associated to $W$.
In some ways, the dual construction of the canonical compression body may be easier to understand: Let $V_i$, $i=1,\ldots,k$, be the
irreducible summands of $W$. Begin with the disjoint union of the $V_i\times I$ together with a 4-ball $Z$. For each $i$, identify a 3-ball
$E_i'$ in
$V_i\times 1$ with a 3-ball $E_i''$ in $\bdry Z$ to obtain a ball $E_i$ in the quotient space. Then, for each $S^2\times S^1$ summand of $W$
identify a 3-ball $E_j'$ in $\bdry Z$ with another 3-ball $E_j''$ in $\bdry Z$ to obtain a ball $E_j$ in the quotient. (Of course new $E_j'$'s and
$E_j''$'s are chosen to be disjoint in $\bdry Z$ from previous ones.) The result of all the identifications is the canonical compression body $Q$
for $W$ with balls $E_i$ properly embedded. We denote by $\E$ the union of $E_i$'s. The system of spheres $\S=\bdry \E$ is a symmetric system.
We now describe uniqueness for compression bodies associated to a 3-manifold $W$ with a symmetric system.
\begin{proposition}\label{UniquenessProp} If the canonical compression bodies $(Q_1,V_1)$ and
$(Q_2,V_2)$ are associated to a compact 3-manifold
$W$ with symmetric systems $\S_1$ and $\S_2$ respectively, and $f:W\to W$ is any automorphism with $f(\S_1)=\S_2$, then there is a homeomorphism
$g:(Q_1,V_1)\to (Q_2,V_2)$ with $g|_W=f$. Further, $g|_{V_1}$ is determined up to isotopy.
\jerk
\noindent In particular, if $(Q_1,V_1)$ and
$(Q_2,V_2)$ are canonical compression bodies associated to $W$ with the system $\S$, then there is a homeomorphism $g:(Q_1,V_1)\to (Q_2,V_2)$ with
$g|_W=\text{id}$. Further, $g|_{V_1}$ is determined up to isotopy.
\end{proposition}
\begin{proof} Starting with $W\times I$, if we attach 3-handles along the spheres of $\S_1\times 0$, then cap with a 4-ball we obtain $Q_1$.
Similarly we obtain $Q_2$ from $W\times I$ and $\S_2\times 0$. Starting with $f\times \text{id}:W\times I\to W\times I$, we can then construct the
homeomorphism $g$ by mapping each 3-handle attached on a sphere of $\S_1$ to a 3-handle attached to the image sphere in $\S_2$. Finally we map the
4-ball in $Q_1$ to the 4-ball in
$Q_2$. This yields $g$ with $g|_W=f$. The uniqueness up to isotopy of $g|_{V_1}$ follows from Theorem \ref{FundamentalTheorem}.
Letting $\S_1=\S_2$ and $f=\text{id}$, we get the second statement.
\end{proof}
\begin{lemma}\label{BoundingLemma} The canonical compression body $(Q,V)$ associated to a compact
3-manifold
$W$ with symmetric system $\S$ has the property that every 2-sphere $S\embed W$ bounds a 3-ball properly
embedded in
$Q$.
\end{lemma}
\begin{proof} Make $S$ transverse to $\S$ and consider a curve of intersection innermost in $S$ and
bounding a disc $D$ whose interior is disjoint from $\S$. If $D$ lies in a holed irreducible $\hat V_i$, $i\ge 1$, then we can
isotope to reduce the number of curves of intersection. If $D$ lies in $\hat V_*$, then $D$ union a
disc $D'$ in $\S$ forms a sphere bounding a 3-ball in $Q$. (The 3-ball is embedded in the 4-ball $Z$.) We replace
$S$ by
$S'=S\cup D'-D$, which can be pushed slightly to have fewer intersections with $\S$ than $S$. Now if $S'$ bounds a 3-ball,
so does $S$, so by induction on the number of circles of intersection of $S$ with $\S$, we have
proved the lemma if we can start the induction by showing that a 2-sphere $S$ whose intersection with $\S$ is empty must bound a ball
in $Q$. If such a 2-sphere $S$ lies in $\hat V_*$, then it is obvious that it bounds a 3-ball in $Z$. If it lies in $\hat V_i$ for some
$i$, then it must be inessential or isotopic to $S_i$, hence it also bounds a ball (isotopic to $E_i$ if $S$ is isotopic to $S_i$).
\end{proof}
\begin{definition} Let $W$ be a reducible 3-manifold. Let $S$ be an essential sphere in
$W$ belonging to a symmetric system $\S$, or let $S$ be a sphere
associated to a non-separating sphere in $\S$. An {\it
$\S$-slide automorphism} of
$W$ is an automorphism obtained by cutting on $S$, capping one of the resulting boundary spheres of $W|S$ with a ball $B$ to obtain
$W'$, isotoping the ball along a simple closed curve
$\gamma$ in $W'$ returning the ball to itself, extending the isotopy to $W'$, removing the interior of the ball, and reglueing the
two boundary spheres of
the manifold thus obtained from $W'$. We emphasize that when $S$ separates a holed irreducible summand, either that irreducible summand can be
replaced by a ball or the remainder of the manifold can be replaced by a ball. An
$\S$-slide automorphism on a sphere
$S$ can be realized as the boundary of an {\it $\E$-slide automorphism} of
$Q$ rel $V$: cut $Q$ on the ball $B$ bounded by $S$ ($B$ is either an $E$ in $\E$ or it is a ball $C(E)$ associated to a
non-separating ball $E$ in $\E$) to obtain a compression body
$Q'$ with two spots $B'$ and $B''$, duplicate copies of $B$. Slide the
duplicate $B'$ in $\bdry_eQ'$ along a closed curve
$\gamma$ in
$\bdry_e Q'$, extending the isotopy to $Q'$. Then reglue $B'$ to $B''$ to recover the
compression body
$Q$.
There is another kind of automorphism, called an {\it $\S$-interchanging slide} defined by cutting $W$ on
two separating spheres $S_1$ and $S_2$, either both in $\S$ cutting off holed irreducible summands $\hat V_1$ and $\hat V_2$ from
the remainder $\hat W_0$ of $W$, or both spheres associated to different non-separating $S$ in $\S$ and cutting
off holed $S^2\times S^1$ summands $\hat V_1$ and $\hat V_2$ from $W$. Cap both boundary spheres in
$\hat W_0$ and slide the capping balls along two paths from one ball to the other, interchanging them. Then
reglue. The interchanging slide can also be realized as the boundary of an {\it $\E$-interchanging slide} of $Q$ rel $V$.
We will need to distinguish {\it $\S$-interchanging slides of $S^2\times S^1$ summands}.
We define a further type of automorphism of $W$ or $Q$. It is called an $\S$-{\it spin}. Let $S$ be
a non-separating sphere in $\S$. We do a half Dehn twist on the separating sphere $C(S)$ to map $S$ to
itself with the opposite orientation. Again, this spin can be realized as a half Dehn twist on the
3-ball
$C(E)$ bounded by $C(S)$ in $Q$, which is an automorphism of $Q$ rel $V$.
Finally, we consider Dehn twists on spheres of $\S$. These are well known to have order at most two in the mapping class group, and we
call them {\it $\S$-Dehn twists}. These also are boundaries of {\it $\E$-Dehn twists}, automorphisms of $Q$ rel $V$.
\end{definition}
\begin{remark} Slides, spins, Dehn twists on non-separating spheres, and interchanging slides of $S^2\times S^1$ summands all have
the property that they extend over $Q$ as automorphisms rel $V$, so they represent elements of $\H_d(W)$.
\end{remark}
We have defined $\S$-slides, and other automorphisms, in terms of the system $\S$ in order to be able to describe the
corresponding $\E$-slides in the canonical maximal compression body $Q$ associated to $W$ with the marking system $\S$. If we
have not chosen a system $\S$ with the associated canonical compression bodies, there are obvious definitions of {\it slides} (or
interchanging slides, Dehn twists, or spins), without the requirement that the essential spheres involved be in $\S$ or be
associated to spheres in $\S$.
\begin{definition} Let $\S_1$ and $\S_2$ be two symmetric systems for a 3-manifold $W$ and let $\S_1'$ and $\S_2'$ denote the
corresponding sets of duplicates appearing as boundary spheres of the holed spheres $\hat V_{1*}$ and $\hat V_{2*}$, which are components of $W|\S_1$ and
$W|\S_2$ respectively. Then an {\it allowable assignment} is a bijection
$a:\S_1'\to \S_2'$ such that
$a$ assigns to a duplicate of a non-separating sphere $S$, another duplicate of a non-separating sphere $a(S)$, and it assigns to the duplicate in $\S_1'$
of a separating sphere of
$\S_1$ a duplicate in $\S_2'$ of a separating sphere of $\S_2$ cutting off a homeomorphic holed irreducible summand.
\end{definition}
If an automorphism $f:W\to W$ satisfies $f(\S_1)=\S_2$, then $f$ induces an allowable assignment $a:\S_1'\to \S_2'$.
The following is closely related to results due to E. C{\'e}sar de S{\'a} \cite{EC:Automorphisms}, see also
M. Scharlemann in Appendix A of \cite{FB:CompressionBody}, and
D. McCullough in \cite{DM:MappingSurvey} (p. 69).
\begin{lemma}\label{DiscrepantLemma} Suppose $\S$ is a symmetric system for $W$ and $Q$ the
corresponding compression body. Given any symmetric system
$\S^\sharp$ for $W$, and an allowable assignment $a:\S'\to {\S^\sharp}'$, there is a composition $f$ of $\S$-slides, $\S$-slide
interchanges, and $\S$-spins such that
$f(\S)=\S^\sharp$, respecting orientation and inducing the map $a:\S'\to {\S^\sharp}'$.
The composition $f$ extends over $Q$ as a
composition of
$\E$-slides,
$\E$-slide interchanges and $\E$-spins, all of which are automorphisms of $Q\rel V$.
\end{lemma}
\begin{proof} $W$ cut on the system $\S$ is a manifold with components $\hat V_i$ and $\hat V_*$; $W$ cut on the system $\S^\sharp$ is a
manifold with components $\hat V_i^\sharp$ and $\hat V_*^\sharp$. Make
$\S^\sharp$ transverse to
$\S$ and consider a curve
$\alpha$ of intersection innermost on
$\S^\sharp$ and bounding a disc $D^\sharp$ in $\S^\sharp$. The curve $\alpha$ also bounds a disc $D$ in $\S$; in fact, there are two choices
for $D$. If for either of these choices $D\cup D^\sharp$ bounds a ball, then we can isotope $\S^\sharp$ to eliminate some
curves of intersection. If $D^\sharp$ is contained in a $\hat V_i$, $i\ge 1$, so that $\hat V_i$ is an irreducible
3-manifold with one hole, then one of the choices of $D$ must give $D\cup D^\sharp$ bounding a ball, so we may assume now
that $D^\sharp\subset \hat V_*$, the holed 3-sphere. We can also assume that for both choices of $D$, $D\cup D^\sharp$ is a
sphere which (when pushed away from $\bdry \hat V_*$) cuts $\hat V_*$ into two balls-with-holes, neither of which is
a ball. Now we perform slides to move boundary spheres of $\hat V_*$ contained in a chosen ball-with-holes bounded by $D\cup
D^\sharp$ out of that ball-with-holes. That is, for each boundary sphere of $\hat V_*$ in the ball-with-holes, perform a slide by
cutting
$W$ on the sphere, capping to replace $\hat V_i$ with a ball, sliding, and reglueing. One must use a slide path
disjoint from $\S^\sharp$, close to $\S^\sharp$, to return to a different component of $\hat V_*-\S^\sharp$, but this of course does not yield a
closed slide path. To form a closed slide path, it is necessary to isotope $\S^\sharp\cap V_*$ rel $\bdry V_*$. Seeing the
existence of such a slide path requires a little thought; the key is to follow $\S^\sharp$ without intersecting it.
After performing slides to remove all holes of $\hat V_*$ in the holed ball in $V_*$ bounded by $D^\sharp\cup D$, we can
finally isotope $D^\sharp$ to $D$, and beyond, to eliminate curves of intersection in $\S\cap \S^\sharp$. Repeating, we can apply
slide automorphisms and isotopies of $\S^\sharp$ to eliminate all curves of intersection.
We know that $\S^\sharp$ also cuts $W$ into manifolds $\hat V_i^\sharp$ homeomorphic to $\hat V_i$, and a holed sphere $\hat V_{*}^\sharp$
and so we conclude that each sphere $S$ of $\S$ bounding $\hat V_i$, which is an irreducible manifold with an open ball removed, is
isotopic to a sphere
$S^\sharp$ in $\S^\sharp$ bounding $\hat V_i^\sharp$ which is also an irreducible manifold with an open ball removed. Clearly also the
$\hat V_i$ bounded by $S$ is homeomorphic to the $\hat V_i^\sharp$. Further isotopy therefore makes it possible to
assume $f(\S)=\S^\sharp$. At this point, however, $f$ does not necessarily assign each $S'\in \S'$ to the desired $a(S')$. This
can be achieved using interchanging slides and/or spins. The interchanging slides are used to permute spheres of $\S^\sharp$; the spins are used to
reverse the orientation of a non-separating $S^\sharp \in \S^\sharp$, which switches two spheres of ${\S^\sharp}'$.
Clearly we have constructed $f$ as a composition of automorphisms which extend over $Q$ as $\E$-slides,
$\E$-slide interchanges and $\E$-spins, all of which are automorphisms of $Q\rel V$, hence $f$ also extends as an
automorphism of $Q\rel V$.
\end{proof}
Before proving Proposition \ref{CharacterizationProp}, we mention some known results we will need. The first result
says that an automorphism of a 3-sphere with $r$ holes which maps each boundary sphere to itself is isotopic to the identity.
The second result concerns an automorphism of an arbitrary 3-manifold: If it is isotopic to the identity and the isotopy moves the
point along a null-homotopic closed path in the 3-manifold, then the isotopy can be replaced by an isotopy fixing the point at all times. These
facts can be proved by applying a result of R. Palais, \cite{RP:RestrictionMaps},\cite{EL:RestrictionMaps}.
\begin{proof}[Proof of Proposition \ref{CharacterizationProp}] Let $g:W\to W$ be an automorphism. Then by Lemma
\ref{DiscrepantLemma}, we conclude that there is a discrepant automorphism $f:(Q,V)\to (Q,V)$ mapping $\S$ to
$g(\S)$ according to the assignment
$\S'\to g(\S)'$ induced by $g$. Thus $f\inverse\circ g$ is the identity on $\S$, preserving orientation. Now
we construct $Q$ by attaching 3-handles to $W\times 0\subset W\times I$, a handle attached to each sphere of
$\S$. The automorphisms
$f\inverse\circ g$ and $f$ clearly both extend over $Q$, hence so does $g$. This shows $\H(W)=\H_x(W)$.
Showing that $\H_d(W)$ has the indicated generators is more subtle. Suppose we are given $g:W\to W$ with the property
that
$g$ extends to $Q$ as an automorphism rel $V$. We also use $g$ to denote the extended automorphism. We apply the method of proof in
Lemma \ref{DiscrepantLemma} to find a discrepant automorphism $f$, a composition of slides of holed irreducible summands $\hat V_i$,
interchanging slides of $S^2\times S^1$ summands and spins of $S^2\times S^1$ summands to make
$f\circ g(\S)$ coincide with $\S$, preserving orientation. Note that $f$ extends over $Q$, and that will be the case below as well, as we
modify
$f$. Thus we can regard $f$ as an automorphism of $Q$. Notice also that in the above we do not need interchanging slides of irreducible
summands, since $g$ is the identity on $V$, hence maps each $V_i$ to itself.
At this point $f\circ g$ maps each $\hat V_i$ to itself, and of course it maps $\hat V_*$ to itself, also mapping each boundary sphere of $\hat V_*$ to
itself. An automorphism of a holed sphere $\hat V_*$ which maps each boundary sphere to itself is isotopic to the identity, as we mentioned above.
However, we must consider this as an automorphism rel the boundary spheres, and so it may be a composition of twists on boundary spheres. On
$W-\cup_i
\interior(\hat V_i)$ (a holed connected sum of $S^2\times S^1$'s) this yields some twists on non-separating spheres of $\S$ corresponding to
$S^2\times S^1$ summands. Precomposing these twists with
$f$, we can now assume that
$f\circ g$ is the identity on
$W-\cup_i \interior(\hat V_i)$.
It remains to analyze $f\circ g$ restricted to $\hat V_i$. We do this by considering the restriction of $f\circ g$ to $V_i\times I$, noting that we
should regard this automorphism as an automorphism of the triple $(V_i\times I, E_i', V_i\times 0)$, where $E_i'$ is a duplicate of the disc $E_i$
cutting
$V_i\times I$ from $Q$. The product gives a homotopy $h_t=f\circ g|_{V_i\times t}$ from $h_0=f\circ g|_{V_i\times 0}=\text{id}$ to $h_1=f\circ
g|_{V_i\times 1}$, where we regard these maps as being defined on $V_i$. Since $V_i$ is an irreducible 3-manifold, we conclude as in the proof of
Theorem
\ref{FundamentalTheorem}, that $h_0$ is isotopic to $h_1$ via an isotopy $h_t$. From this isotopy, we obtain a path $\gamma(t)=h_t(p)$ where $p$ is a
point in $E_i'$ when $V_i\times 1$ is identified with $V_i$. Consider a slide $s$ along $\bar \gamma$: Cut $Q$ on $E_i$, then slide the
duplicate $E_i'$ along the path $\bar \gamma$, extending the isotopy as usual. By construction, $s$ is isotopic to the identity via $s_t$,
$s_0=\text{id}$,
$s_1=s$, and $s_t(p)$ is the path $\bar \gamma$. Combining the isotopies $h_t$ and $s_t$, we see that $(s\circ f\circ g)|_{V_i\times 1}$ is isotopic
to the identity via an isotopy $r_t$ with $r_t(p)$ tracing the path $\gamma\bar\gamma$. This isotopy can be replaced by an isotopy fixing $p$ or
$E_i'$. Thus after replacing $f$ by $s\circ f$, we have $f\circ g$ isotopic to the identity on $\hat V_i$. Doing the same thing for every $i$, we
finally obtain $f$ so that $f\circ g$ is isotopic to the identity on $W$. This shows that
$g$ is a composition of the inverses of the generators used to construct $f$.
\end{proof}
Now we can give a proof that the canonical compression body is unique in a stronger sense.
\begin{proof} [Proof of Proposition \ref{UniquenessProposition}] We suppose $(Q_1,V_1)$ is the canonical compression body associated to $W$ with
symmetric system $\S_1$ and $(Q_2,V_2)$ is the canonical compression body associated to $W$ with
symmetric system $\S_2$. From Proposition \ref{UniquenessProp} we obtain a homeomorphism $h:(Q_1,V_1)\to (Q_2,V_2)$ such that $h(\S_1)=\S_2$. We are
given a homeomorphism $v:V_1\to V_2$. From the exact sequence of Theorem \ref{SequenceThm}, we know that there is an automorphism $k:(Q_2,V_2)\to
(Q_2,V_2)$ which extends the automorphism $v\circ (h\inverse|_{V_2}):V_2\to V_2$. Then $g= k\circ h$ is the required homeomorphism $g:(Q_1,V_1)\to
(Q_2,V_2)$ with the property that $g|_{V_1}=v$.
\end{proof}
\section{Spotted Manifolds.}
\label{SpottedSection}
In this section we present a quick ``compression body point of view" on automorphisms of manifolds with sphere boundary components, or ``spotted
manifolds."
Suppose $W$ is an $m$-manifold, $m=2$ or $m=3$, with some sphere boundary components. For such a manifold, there is a canonical collection
$\S$ of essential spheres, namely a collection of spheres isotopic to the boundary spheres. We can construct the compression body
associated to this collection of spheres, and we obtain an $n=m+1$ dimensional compression body $Q$ whose interior boundary consists
of
$m$-balls, one for each sphere of $\S$, and a closed manifold $V_0$, which is the manifold obtained by capping the sphere boundary
components. We denote the union of the $m$-balls by $P=\cup P_i$ where each $P_i$ is a ball. Thus the interior boundary is
$V=V_0\cup P$. See Figure \ref{MapSpotted}.
\begin{figure}[ht]
\centering
\psfrag{S}{\fontsize{\figurefontsize}{12}$S$}\psfrag{H}
{\fontsize{\figurefontsize}{12}$H$}\psfrag{a}{\fontsize{\figurefontsize}{12}$\alpha$}
\scalebox{1.0}{\includegraphics{MapSpotted.eps}} \caption{\small
Spotted product.} \label{MapSpotted}
\end{figure}
Suppose the universal cover of $V_0$ is either $S^m$ or contractible. Applying Theorem \ref{SequenceThm}, we obtain the exact
sequence
$$1\to \H_d(W)\to \H_x(W)\to \H(V)\to 1.$$
It remains to interpret the sequence. Notice that we can view $Q$ as a {\it spotted product} $(V_0\times I,P)$, where $P$ is a
collection of balls in $V_0\times 1$. However, in our context we have a buffer zone of the form $S^{m-1}\times I\subset A$
separating each $P_i$ from $W$.
We refer to
$W$ as a holed manifold, since it is obtained from the manifold
$V_0$ by removing open balls. $\H_d(W)$ is the mapping class group of automorphisms which extend to $Q$ as
automorphisms rel
$V=V_0\cup P$. Either directly, or using Theorem \ref{FundamentalTheorem}, we see that the elements of $\H_d(W)$ are automorphisms $f:W\to
W$ such that the capped automorphism $f_c$ is homotopic, hence isotopic to the identity.
The mapping class group
$\H_x(W)$ can be identified with $\H(W)$. This is because any sphere of $\S$ is mapped up to isotopy by any element $f\in \H(W)$ to another sphere of $\S$,
hence $f$ extends to $Q$. Finally
$\H(V)$ can be identified with
$\H(V_0)\times
\H(P)$, where
$\H(P)$ is clearly just a permutation group. We obtain the exact sequence
$$1\to \H_d(W)\to \H(W)\to \H(V_0)\times \H(P)\to 1,$$
\noindent and we have proved Theorem \ref{SpottedThm}.
\bibliographystyle{amsplain}
\bibliography{ReferencesUO3}
\end{document}
TITLE: I am having problems understanding this notation:
QUESTION [2 upvotes]: I have this set defined as follows:
$$A_n=\{(i,j)\in Z^2: (i,j)=1, 0\le i,j \le n\} $$
What does $(i,j)=1$ mean?
Thanks in advance
Now that we know it means $\gcd(i,j)=1$, how can I calculate the size of this set?
REPLY [0 votes]: Let $\displaystyle A_N=\{(i,j)\in{\mathbb{Z}}^2: \gcd(i,j)=1, \ 0\leq i,j\leq N \}$. Prove the existence of
$$\lim_{N\to\infty}\frac{|A_N|}{N^2}$$
and compute that limit.
Perhaps we should start by reasoning about the convergence. We do that by moving the question into probabilistic territory, asking ourselves what $\frac{|A_N|}{N^2}$ actually means. If we fix $N=K$, then $\frac{|A_K|}{K^2}$ is the probability that two integers $i$ and $j$, where $0 \leq i,j\leq K$, have no common factor. Therefore,
$$\lim_{N\to\infty}\frac{|A_N|}{N^2}$$
answers the question: For two random integers $i$ and $j$, what is the probability that they have no common factor?
Now, we want to compute that limit. Here are two different approaches to the problem:
Approach $1$: probabilistic
Two integers $i$ and $j$ are coprime if no prime number $p$ divides both $i$ and $j$. For a particular prime $p$, the probability that $i$ is divisible by $p$ is $\frac{1}{p}$, and the same for $j$, so the probability that both are divisible by $p$ is $\frac{1}{p^2}$, and the probability that $p$ does not divide both of them is $1-\frac{1}{p^2}$. The probability that no prime divides both $i$ and $j$ is therefore the Euler product (with $s=2$):
$$\prod_{p\in\mathbb{P}}(1-\frac{1}{p^2})$$
Now, with the help of the famous identity
$$\zeta(2)=\sum_{n=1}^{\infty}\frac{1}{n^2}=\prod_{p\in\mathbb{P}}\frac{1}{(1-\frac{1}{p^2})}$$
And the fact that,
$$\zeta(2)=\frac{\pi^2}{6}$$
Combining all of these, we get
$$\prod_{p\in\mathbb{P}}(1-\frac{1}{p^2})=\frac{6}{\pi^2}$$
Thus,
$$\lim_{N\to\infty}\frac{|A_N|}{N^2}=\prod_{p\in\mathbb{P}}(1-\frac{1}{p^2})=\frac{6}{\pi^2}$$
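A quick brute-force check of this limit (a Python sketch of mine, not part of the original answer) counts the pairs directly and compares the ratio to $6/\pi^2$:

```python
from math import gcd, pi

def coprime_density(N):
    # |A_N| / N^2: count pairs (i, j) with 0 <= i, j <= N and gcd(i, j) = 1
    count = sum(1 for i in range(N + 1)
                  for j in range(N + 1) if gcd(i, j) == 1)
    return count / N**2

print(coprime_density(500))  # ≈ 0.608
print(6 / pi**2)             # ≈ 0.6079
```

The agreement improves roughly like $\log N/N$ as $N$ grows, consistent with the error term below.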
Approach $2$: number-theoretic
Let us define three new sets:
$$P_N=\{(i,j)\in{\mathbb{Z}}^2: \gcd(i,j)=1, \ 0\leq i>j\leq N \}$$
$$O_N=\{(i,j)\in{\mathbb{Z}}^2: \gcd(i,j)=1, \ 0\leq i=j\leq N \}$$
$$N_N=\{(i,j)\in{\mathbb{Z}}^2: \gcd(i,j)=1, \ 0\leq i<j\leq N \}$$
Now, noticing that
$$|P_N|=|\{(i,j)\in{\mathbb{Z}}^2: \gcd(i,j)=1, \ 0\leq i>j\leq N \}|=\sum_{i=1}^{N}\varphi(i)$$
$$|O_N|=|\{(i,j)\in{\mathbb{Z}}^2: \gcd(i,j)=1, \ 0\leq i=j\leq N \}|=1$$
$$|N_N|=|\{(i,j)\in{\mathbb{Z}}^2: \gcd(i,j)=1, \ 0\leq i<j\leq N \}|=\sum_{j=1}^{N}\varphi(j)$$
where $\varphi$ is Euler's totient function.
Therefore,
$$|A_N|=1+2\sum_{n=1}^{N}\varphi(n)$$
Using Mertens' theorem ($1874$),
$$\sum_{n\leq x}\varphi(n)=\frac{3}{\pi^2}x^2+\mathcal{O}(x\log x) $$
Then,
$$\lim_{N\to\infty}\frac{|A_N|}{N^2}=\lim_{N\to\infty}(\frac{1}{N^2}+\frac{6}{\pi^2}+\mathcal{O}(\frac{\log N}{N}))=\frac{6}{\pi^2}$$
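The counting identity $|A_N|=1+2\sum_{n=1}^{N}\varphi(n)$ is easy to sanity-check numerically; the Python sketch below (my own, not from the original answer) compares it against a brute-force count:

```python
from math import gcd

def phi(n):
    # Euler's totient via trial division (sufficient for small n)
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

N = 200
brute = sum(1 for i in range(N + 1)
              for j in range(N + 1) if gcd(i, j) == 1)
print(brute == 1 + 2 * sum(phi(n) for n in range(1, N + 1)))  # True
```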
TITLE: Proof of an interesting matrix property
QUESTION [0 upvotes]: Suppose you have a square matrix $M$ with $n$ rows and $n$ columns. Suppose $M$ enjoys a property $p$ defined as follows: $M_{i,j} = 0$ if $i + j$ is odd, and $M_{i,j} \neq 0$ otherwise.
Question: if square matrix $M$ satisfies property $p$ then prove that $M^{-1}$ satisfies $p$ as well.
PS: Thank you to those who suggested how I can improve the clarity of this question.
REPLY [1 votes]: Your property does not imply that $M$ is invertible, so $M^{-1}$ might fail to exist at all.
However suppose $M^{-1}$ does exist. Your property (ignoring the non-zero requirement) says that every "odd" standard basis vector has its image in the span (call it $V_1$) of all odd standard basis vectors, and every "even" standard basis vector has its image in the span (call it $V_0$) of all even standard basis vectors. It follows that $M(V_1)\subseteq V_1$ and $M(V_0)\subseteq V_0$. Since we assumed $M$ invertible, one in fact has equalities here, for dimension reasons. Can you see that $M^{-1}$ has the same properties?
The condition you stated that entries that are allowed to be nonzero (namely those with $i+j$ even) are in fact required to be nonzero complicates matters. In the end you will want to know if an invertible matrix with all its entries nonzero has an inverse with all its entries nonzero. That is true up to size $2\times2$, but fails for larger sizes. Therefore, for sufficiently large matrices (can you see how large?), you will get failure on this point.
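A small numerical illustration of the invariant-subspace argument (using NumPy; the particular matrix is my own example): after grouping indices by parity the matrix is block diagonal, so the inverse inherits the same zero pattern.

```python
import numpy as np

# 0-based indices here: entries with i + j odd are zero, the rest nonzero.
M = np.array([[2., 0., 1., 0.],
              [0., 3., 0., 1.],
              [5., 0., 4., 0.],
              [0., 2., 0., 5.]])

Minv = np.linalg.inv(M)

# Entries of the inverse at positions with i + j odd are (numerically) zero.
odd = np.add.outer(np.arange(4), np.arange(4)) % 2 == 1
print(np.allclose(Minv[odd], 0.0))  # True
```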
TITLE: On one graphical sequence implying another sequence is graphical
QUESTION [2 upvotes]: I read this statement related to Havel and Hakimi.
Suppose that $\textbf{d}=(d_1,d_2,\ldots,d_n)$ be a nonincreasing sequence of nonnegative integers. Let $\textbf{d'}=(d_2-1,d_3-1,\ldots, d_{d_1+1}-1,d_{d_1+2},\ldots,d_n)$. Then $\textbf{d}$ is graphical if and only if $\textbf{d'}$ is graphical.
Why is this the case? For suppose $G$ is the graph associated to the first sequence. It seems that the second sequence is obtained by removing the vertex $v_1$ from $G$, which induces a subgraph by deleting all $d_1$ edges adjacent to $v_1$, and this would decrease the degree of $d_1$ vertices, giving a graph $G'$ with the second degree sequence. But this argument seems to assume that $v_1$ is connected to the next $d_1$ vertices of highest degree. What if $v_1$ is adjacent to $v_n$, or to any $v_k$ for $k>{d_1+1}$? Why would the claim still be true? Thank you.
REPLY [2 votes]: Both the statement in the question and a little don's answer seem to rely on the following theorem:
If a sequence $d$ is graphic, then there is a graph with degree sequence $d$ such that a vertex with highest degree is adjacent to $d_1$ vertices whose degrees are greater or equal to the degrees of the remaining vertices.
Let a graph with degree sequence $d$ be given, and let $v_1$ be a vertex with highest degree. If there are vertices $v_2$ and $v_3$ such that $v_2$ has higher degree than $v_3$ but $v_1$ is connected to $v_3$ and not to $v_2$, then choose any edge incident to $v_2$ but not to $v_3$. (There is one, since the degree of $v_2$ is greater than that of $v_3$.) Let $v_4$ be the vertex it leads to, with $v_4\neq v_3$ and $v_4\neq v_1$ by construction. Now replace the edges $(v_1,v_3)$ and $(v_2,v_4)$ by the edges $(v_1,v_2)$ and $(v_3,v_4)$. This leaves all degrees invariant. Now $v_1$ is adjacent to $v_2$ as required. Since we have increased the sum of the degrees of the vertices adjacent to $v_1$ and that sum is bounded, we can only perform this operation (with the same $v_1$ but varying $v_2$ to $v_4$) finitely many times, so this will necessarily construct the desired graph.
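The reduction in the quoted statement is exactly the Havel–Hakimi test, and it turns directly into an algorithm. Here is a short Python sketch of that algorithm (my own implementation, for illustration):

```python
def is_graphical(seq):
    """Havel-Hakimi test: repeatedly remove the largest degree d1 and
    subtract 1 from the next d1 largest degrees, as in the statement."""
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d1 = seq.pop(0)
        if d1 > len(seq):
            return False  # not enough remaining vertices to attach to
        for k in range(d1):
            seq[k] -= 1
            if seq[k] < 0:
                return False
        seq.sort(reverse=True)
    return True  # all remaining degrees are zero

print(is_graphical([3, 3, 2, 2, 2]))  # True
print(is_graphical([3, 3, 3, 1]))     # False
```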
\begin{document}
\title{Topological optimization via cost penalization}
\author{Cornel Marius Murea$^1$, Dan Tiba$^2$\\
{\normalsize $^1$ D\'epartement de Math\'ematiques, IRIMAS,}\\
{\normalsize Universit\'e de Haute Alsace, France,}\\
{\normalsize cornel.murea@uha.fr}\\
{\normalsize $^2$ Institute of Mathematics (Romanian Academy) and}\\
{\normalsize Academy of Romanian Scientists, Bucharest, Romania,}\\
{\normalsize dan.tiba@imar.ro}
}
\maketitle
\begin{abstract}
We consider general shape optimization problems governed by Dirichlet boundary value problems.
The proposed approach may be extended to other boundary conditions as well. It is based on a recent
representation result for implicitly defined manifolds, due to the authors, and it is formulated
as an optimal control problem. The discretized approximating problem is introduced and we give
an explicit construction of the associated discrete gradient. Some numerical examples are also indicated.
\textbf{Keywords: geometric optimization; optimal design; topological variations; optimal control methods;
discrete gradient}
\end{abstract}
\section{Introduction}
\setcounter{equation}{0}
Shape optimization is a relatively young branch of mathematics, with important
modern applications in engineering and design. Certain optimization problems in
mechanics, thickness optimization for plates or rods, geometric optimization
of shells, curved rods, drag minimization in fluid mechanics, etc. are some examples.
Many appear naturally in the form of control by coefficients problems, due
to the formulation of the mechanical models, with the geometric characteristics
entering the coefficients of the differential operators. See \cite{Tiba2006}, Ch. 6,
where such questions are discussed in detail.
It is the aim of this article to develop an optimal control approach, using
penalization methods, to general shape optimization problems as investigated in
\cite{Pironneau1984}, \cite{Sokolowski1992}, \cite{Bucur2005}, \cite{Henrot2005},
\cite{Delfour2001}, etc. We underline that our methodology allows simultaneous topological
and boundary variations.
Here, we fix our attention on the case of Dirichlet boundary conditions and we study
the typical problem (denoted by ($\mathcal{P}$)):
\begin{eqnarray}
\min_\Omega \int_E j\left(\mathbf{x}, y_\Omega(\mathbf{x})\right)d\mathbf{x},
\label{1.1}\\
-\Delta y_\Omega = f \hbox{ in }\Omega,
\label{1.2}\\
y_\Omega = 0 \hbox{ on }\partial\Omega,
\label{1.3}
\end{eqnarray}
where $E\subset\subset D \subset \mathbb{R}^2$ are given bounded domains,
$D$ is of class $\mathcal{C}^{1,1}$ and the minimization parameter,
the unknown domain $\Omega$, satisfies $E \subset \Omega \subset D$
and other possible conditions defining a class of admissible domains.
Notice that the case of dimension
two is of interest in shape optimization.
Moreover, $f\in L^2(D)$, $j:D\times\mathbb{R}\rightarrow\mathbb{R}$ is
some Carath\'eodory mapping. More assumptions
or constraints will be imposed later. Other boundary conditions or differential operators
may be handled as well via this control approach and we shall examine such questions in
a subsequent paper.
For fundamental properties and methods in optimal control theory, we quote
\cite{Lions1969}, \cite{Clarke1983}, \cite{Barbu1986}, \cite{Tiba1994}.
The problem (\ref{1.1})-(\ref{1.3}) and its approximation are strongly non convex and
challenging both from the numerical and theoretical points of view.
The investigation from this paper continues the one
in \cite{Tiba2018a} and is essentially based on the recent implicit
parametrization method as developed in \cite{Tiba2018}, \cite{Tiba2015}, \cite{Tiba2013},
that provides an efficient analytic representation of the unknown domains.
The Hamiltonian approach to implicitly defined manifolds will be briefly recalled
together with other preliminaries in Section 2.
The precise formulation of the problem and its approximation is analyzed in Section 3
together with its differentiability properties. In Section 4, we study the discretized version
and find the general form of the discrete gradient.
The last section is devoted to some numerical experiments, using this paper approach.
The method studied in this paper has a certain complexity due to the use of Hamiltonian systems, and its main advantage is the possibility to extend it to other boundary conditions or boundary observation problems. This will be performed in a subsequent article.
\section{Preliminaries}
\setcounter{equation}{0}
Consider the Hamiltonian system
\begin{eqnarray}
x_1^\prime(t) & = & -\frac{\partial g}{\partial x_2}\left(x_1(t),x_2(t)\right),\quad t\in I,
\label{2.1}\\
x_2^\prime(t) & = & \frac{\partial g}{\partial x_1}\left(x_1(t),x_2(t)\right),\quad t\in I,
\label{2.2}\\
\left(x_1(0),x_2(0)\right)& = &\left(x_1^0,x_2^0\right),
\label{2.3}
\end{eqnarray}
where $g:D\rightarrow\mathbb{R}$ is in $\mathcal{C}^1(\overline{D})$,
$\left(x_1^0,x_2^0 \right)\in D$ and $I$ is the local existence interval
for (\ref{2.1})--(\ref{2.3}), around the origin, obtained via the Peano theorem.
The conservation property \cite{Tiba2013} of the Hamiltonian gives:
\begin{proposition}\label{prop:2.1}
We have
\begin{equation}\label{2.4}
g\left(x_1(t),x_2(t)\right)=g\left(x_1^0,x_2^0\right), \quad t\in I.
\end{equation}
\end{proposition}
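As a numerical illustration of Proposition \ref{prop:2.1} (not part of the analysis above), one may integrate the Hamiltonian system (\ref{2.1})--(\ref{2.3}) for a concrete choice of $g$ and observe that $g$ remains constant along the computed trajectory. The sample Hamiltonian $g(x_1,x_2)=x_1^2+x_2^2-1$ and the classical RK4 integrator in the sketch below are our own choices:

```python
import numpy as np

# Sample Hamiltonian g(x1, x2) = x1^2 + x2^2 - 1 (an assumption for
# illustration).  The system (2.1)-(2.2) then reads
#   x1' = -dg/dx2 = -2 x2,   x2' = dg/dx1 = 2 x1.

def rhs(x):
    return np.array([-2.0 * x[1], 2.0 * x[0]])

def rk4_step(x, h):
    # One step of the classical fourth-order Runge-Kutta scheme
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * h * k1)
    k3 = rhs(x + 0.5 * h * k2)
    k4 = rhs(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1.0, 0.0])          # initial point on the level set {g = 0}
for _ in range(2000):
    x = rk4_step(x, 1e-3)

g = x[0]**2 + x[1]**2 - 1.0
print(abs(g))  # stays near 0: the trajectory remains on {g = 0}
```

This reflects the conservation property (\ref{2.4}): the trajectory parametrizes the level curve through the initial point.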
In the sequel, we assume that
\begin{equation}\label{2.5}
g\left(x_1^0,x_2^0\right)=0,\quad \nabla g\left(x_1^0,x_2^0\right)\neq 0.
\end{equation}
Under condition (\ref{2.5}), i.e. in the noncritical case, it is known that the solution
of (\ref{2.1})--(\ref{2.3}) is also unique (by applying the implicit function
theorem to (\ref{2.4}), \cite{4}).
\begin{remark}\label{rem:1}
In higher dimension, iterated Hamiltonian systems were introduced in \cite{Tiba2018}
and uniqueness and regularity properties are proved. Some relevant examples
in dimension three are discussed in \cite{Tiba2015}. In the critical case, generalized solutions can be obtained \cite{Tiba2013}, \cite{Tiba2018}.
\end{remark}
We define now the family $\mathcal{F}$ of admissible functions
$g\in \mathcal{C}^2(\overline{D})$ that satisfy the conditions:
\begin{eqnarray}
g(x_1,x_2) & > & 0,\quad\hbox{on }\partial D,
\label{2.6}\\
|\nabla g(x_1,x_2)| & > & 0,\quad\hbox{on }
\mathcal{G}=\left\{ (x_1,x_2)\in D;\ g(x_1,x_2)=0\right\},
\label{2.7}\\
g(x_1,x_2) & < & 0,\quad\hbox{on } \overline{E}.
\label{2.8}
\end{eqnarray}
Condition (\ref{2.6}) says that $\mathcal{G}\cap \partial D=\emptyset$ and condition
(\ref{2.7}) is an extension of (\ref{2.5}). In fact, it is related to the hypothesis
on the nonexistence of equilibrium points in the Poincar\'e-Bendixson theorem, \cite{hsd},
Ch. 10, and the same is valid for the next proposition.
The family $\mathcal{F}$ defined by (\ref{2.6})--(\ref{2.8}) is obviously very rich, but
it is not ``closed''
(we have strict inequalities). Our approach here, gives a descent algorithm for the shape
optimization problem
($\mathcal{P}$) and existence of optimal shapes is not discussed.
Following \cite{Tiba2018a}, we have the following two propositions:
\begin{proposition}\label{prop:2.2}
Under hypotheses (\ref{2.6}), (\ref{2.7}), $\mathcal{G}$ is a finite union of closed curves of
class $\mathcal{C}^2$, without self intersections, parametrized by
(\ref{2.1})--(\ref{2.3}), when some initial point $(x_1^0,x_2^0)$ is chosen on each
component of $\mathcal{G}$.
\end{proposition}
If $r\in \mathcal{F}$ as well, we define the perturbed set
\begin{equation}\label{2.9}
\mathcal{G}_\lambda=\left\{ (x_1,x_2)\in D;\ (g+\lambda r)(x_1,x_2)=0,\ \lambda\in \mathbb{R}\right\}.
\end{equation}
We also introduce the neighborhood $V_\epsilon$, $\epsilon>0$
\begin{equation}\label{2.10}
V_\epsilon=\left\{ (x_1,x_2)\in D;\ d[(x_1,x_2),\mathcal{G}] < \epsilon\right\},
\end{equation}
where $d[(x_1,x_2),\mathcal{G}]$ is the distance from a point to $\mathcal{G}$.
\begin{proposition}\label{prop:2.3}
If $\epsilon>0$ is small enough, there is $\lambda(\epsilon)>0$ such that,
for $|\lambda| < \lambda(\epsilon)$ we have $\mathcal{G}_\lambda \subset V_\epsilon$
and $\mathcal{G}_\lambda$ is a finite union of class $\mathcal{C}^2$ closed curves.
\end{proposition}
\begin{remark}\label{rem:2}
The inclusion $\mathcal{G}_\lambda \subset V_\epsilon$ shows that $\mathcal{G}_\lambda\rightarrow \mathcal{G}$ for
$\lambda\rightarrow 0$, in the Hausdorff-Pompeiu metric \cite{Tiba2006}.
In a ``small'' neighborhood of each component of $\mathcal{G}$ there is exactly one component
of $\mathcal{G}_\lambda$ if $|\lambda| < \lambda (\epsilon)$, due to this convergence
property and the implicit functions theorem applied in the initial condition of the
perturbed Hamiltonian system derived from (\ref{2.1})--(\ref{2.3}).
\end{remark}
\begin{proposition}\label{prop:2.4}
Denote by $T_g$, $T_{g+\lambda r}$ the periods of the trajectories of (\ref{2.1})--(\ref{2.3}),
corresponding to $g$, $g+\lambda r$ respectively. Then $T_{g+\lambda r} \rightarrow T_g$ as
$\lambda \rightarrow 0$.
\end{proposition}
\noindent
\textbf{Proof.}
If $(x_1,x_2)$, respectively $(x_{1\lambda},x_{2\lambda})$ are the corresponding trajectories of
(\ref{2.1})--(\ref{2.3}) respectively, then they are bounded by Proposition \ref{prop:2.3},
if $|\lambda| < \lambda(\epsilon)$.
Consequently, $\nabla g$ may be assumed Lipschitzian with constant $L_g$ and we have
\begin{equation}\label{2.11}
|(x_1,x_2)-(x_{1\lambda},x_{2\lambda}) |(t) \leq
\lambda C + L_g \int_0^t |(x_1,x_2)-(x_{1\lambda},x_{2\lambda}) | dt,
\end{equation}
where we also use that $\nabla r(x_{1\lambda},x_{2\lambda})$ is bounded since
$(x_{1\lambda},x_{2\lambda})$ is bounded on $\mathbb{R}$.
We infer by (\ref{2.11}) that
\begin{eqnarray}
|(x_1,x_2)-(x_{1\lambda},x_{2\lambda}) |(t) \leq\lambda\, ct, t\in \mathbb{R},\label{2.12}\\
|(x_1^\prime,x_2^\prime)-(x_{1\lambda}^\prime,x_{2\lambda}^\prime) |(t) \leq\lambda\, ct, t\in \mathbb{R},\label{2.13}
\end{eqnarray}
for $|\lambda| < \lambda(\epsilon)$ and with some constant independent of $\lambda$, by Gronwall lemma.
Both trajectories start from $(x_1^0,x_2^0)$, surround $\overline{E}$, and have no self intersections (but $(x_{1\lambda},x_{2\lambda})$ may intersect $(x_1,x_2)$, even infinitely many times).
We study them on $[0,jT_g]$, $j<2$, for instance.
Assume that $(x_{1\lambda},x_{2\lambda})$ has the period $T_{g+\lambda r} > jT_g$. Since $(x_1,x_2)$ is periodic with period $T_g$ and
relations (\ref{2.12})--(\ref{2.13}) show that $(x_{1\lambda},x_{2\lambda})$ is very close to $(x_1,x_2)$ for every $t \in [0,jT_g]$, it follows that $(x_{1\lambda},x_{2\lambda})$ surrounds $\overline{E}$ at least once as well.
Since it may have no self intersections, it follows that $(x_{1\lambda},x_{2\lambda})$ behaves like a limit cycle around $\overline{E}$.
Such arguments appear in the proof of the Poincar\'e-Bendixson
theorem, \cite{hsd}, Ch. 10.
That is $(x_{1\lambda},x_{2\lambda})$ cannot be periodic - and
this is a false conclusion due to Proposition \ref{prop:2.3}.
Consequently, we get
\begin{equation}\label{2.14}
T_{g+\lambda r} \leq jT_g,\ |\lambda| < \lambda(\epsilon).
\end{equation}
On a subsequence, by (\ref{2.14}), we obtain $T_{g+\lambda r} \rightarrow T^* \leq jT_g$. We assume that $T^* \neq T_g$.
It is clear that $\left(x_{1}(T^*),x_{2}(T^*)\right)\neq
\left(x_{1}(T_g),x_{2}(T_g)\right)$ by the definition of the
period. However, relation (\ref{2.12}) and the related
convergence properties give the opposite conclusion.
This contradiction shows that $T^* = T_g$ and the convergence
is valid on the whole sequence.\quad$\Box$
\begin{remark}\label{rem:2.7}
Usually, the perturbation of a periodic solution may not
be periodic and just asymptotic convergence properties are
valid, under certain assumptions, Sideris
\cite{Sideris2013}.
A natural
question, taking into account (\ref{2.12})--(\ref{2.13}),
is whether $|T_{g+\lambda r}-T_g| \leq c |\lambda|$, for
$c>0$ independent of $\lambda$,
$|\lambda| < \lambda(\epsilon)$.
\end{remark}
\section{The optimization problem and its approximation}
\setcounter{equation}{0}
Starting from the family $\mathcal{F}$ of admissible functions, we define
the family $\mathcal{O}$ of admissible domains in the shape optimization
problem (\ref{1.1})-(\ref{1.3}) as the connected component of the open set
$\Omega_g$, $g\in \mathcal{F}$
\begin{equation}\label{3.1}
\Omega_g=\left\{ (x_1,x_2)\in D;\ g(x_1,x_2) < 0\right\}
\end{equation}
that contains $E$. Clearly $E\subset \Omega_g$ by (\ref{2.8}).
Notice as well that the domain $\Omega_g$ defined by (\ref{3.1})
(we use this notation for the domain as well) is not simply connected, in general.
This is the reason why the approach to (\ref{1.1})-(\ref{1.3}) that we discuss here is related to
topological optimization in optimal design problems.
But, it also combines topological and boundary variations.
The penalized problem, $\epsilon >0$, is given by:
\begin{equation}\label{3.2}
\min_{g\in \mathcal{F},\, u\in L^2(D)}
\left\{
\int_E j\left(\mathbf{x}, y_\epsilon(\mathbf{x})\right)d\mathbf{x}
+\frac{1}{\epsilon}\int_{I_g} \left(y_\epsilon(\mathbf{z}_g(t))\right)^2|\mathbf{z}_g^\prime(t)|dt
\right\}
\end{equation}
subject to
\begin{eqnarray}
-\Delta y_\epsilon & = & f+ (g+\epsilon)_+^2u,\quad\hbox{in } D,
\label{3.3}\\
y_\epsilon & = & 0,\quad\hbox{on } \partial D,
\label{3.4}
\end{eqnarray}
where $\mathbf{z}_g=(z_g^1,z_g^2)$ satisfies the Hamiltonian system (\ref{2.1})-(\ref{2.3})
in $I_g$ with some $(x_1^0,x_2^0)\in D\setminus\overline{E}$ such that $g(x_1^0,x_2^0)=0$
\begin{eqnarray}
\left(z_g^1\right)^\prime(t) & = & -\frac{\partial g}{\partial x_2}\left(\mathbf{z}_g(t)\right),\quad t\in I_g,
\label{3.5}\\
\left(z_g^2\right)^\prime(t) & = & \frac{\partial g}{\partial x_1}\left(\mathbf{z}_g(t)\right),\quad t\in I_g,
\label{3.6}\\
\mathbf{z}_g(0)& = &\left(x_1^0,x_2^0\right)
\label{3.7}
\end{eqnarray}
and $I_g=[0,T_g]$ is the period interval for (\ref{3.5})-(\ref{3.7}), due to Proposition \ref{prop:2.2}.
The problem (\ref{3.2})-(\ref{3.7}) is an optimal control problem with controls $g\in \mathcal{F}$
and $u\in L^2(D)$ distributed in $D$. The state is given by
$[y_\epsilon,z_g^1,z_g^2]\in H^2(D)\times \left(C^2(I_g)\right)^2$. We also have $y_\epsilon \in H_0^1(D)$.
In case the corresponding domain $\Omega_g$ is not simply connected, in (\ref{3.7}) one has
to choose initial conditions on each component of $\partial\Omega_g$ and the penalization term becomes
a finite sum due to Proposition \ref{prop:2.2}.
The method enters the class of fixed domain methods in shape optimization and can be compared with
\cite{Tiba1992}, \cite{Tiba2009}, \cite{Tiba2012}. It is essentially different from the
level set method
of Osher and Sethian \cite{Osher1988}, Allaire \cite{Allaire2002} or the SIMP approach of
Bendsoe and Sigmund \cite{Bendsoe2003}.
From the computational point of view, it is easy to find initial condition (\ref{3.7}) on
each component
of $\mathcal{G}$ and the corresponding period intervals $I_g$ associated to (\ref{3.5})-(\ref{3.7}). See the last section as well.
We have the following general suboptimality property:
\begin{proposition}\label{prop:3.1}
Let $j(\cdot,\cdot)$ be a Carath\'eodory function on $D\times\mathbb{R}$, bounded from below
by a constant. Denote by $[y_n^\epsilon, g_n^\epsilon, u_n^\epsilon]$ a minimizing sequence in
(\ref{3.2})-(\ref{3.7}). Then, on a subsequence $n(m)$, the (not necessarily admissible) pairs
$[\Omega_{g_{n(m)}^\epsilon}, y_{n(m)}^\epsilon]$ give a minimizing sequence in (\ref{1.1}),
$y_{n(m)}^\epsilon$ satisfies (\ref{1.2}) in
$\left\{ (x_1,x_2)\in D;\ g(x_1,x_2) < -\epsilon\right\}$ and (\ref{1.3}) is fulfilled with
a perturbation of order $\epsilon^{1/2}$.
\end{proposition}
\noindent
\textbf{Proof.}
Let $[y_{g_m}, g_m]\in H^2(\Omega_{g_{m}})\times \mathcal{F}$ be a minimizing sequence in the problem
(\ref{1.1})-(\ref{1.3}), (\ref{3.1}). By the trace theorem, since $\partial \Omega_{g_{m}}$ and
$D$ are at least $\mathcal{C}^{1,1}$ under our assumptions, there is
$\widetilde{y}_{g_{m}}\in H^2(D)\cap H_0^1(D)$, not unique, such that
$\widetilde{y}_{g_{m}}=y_{g_m}$ in $\Omega_{g_{m}}$. We define the control $u_{g_{m}}\in L^2(D)$ as follows:
\begin{eqnarray*}
u_{g_{m}} & = & 0,\quad\hbox{in } \Omega_{g_{m}},
\\
u_{g_{m}} & = & -\frac{\Delta \widetilde{y}_{g_{m}} + f}{(g_m+\epsilon)_+^2},
\quad\hbox{in } D\setminus \Omega_{g_{m}},
\end{eqnarray*}
where $\Omega_{g_{m}}$ is the open set defined in (\ref{3.1}).
Notice that on the second line in the above formula, we have no singularity.
It is clear that the triple $[\widetilde{y}_{g_{m}},g_m,u_{g_{m}}]$ is admissible for the
problem (\ref{3.2})-(\ref{3.7}) with the same cost as in the original problem
(\ref{1.1})-(\ref{1.3}) since the penalization term in (\ref{3.2}) is null due
to the boundary condition (\ref{1.3}) satisfied by $\widetilde{y}_{g_{m}}$.
Consequently, there is $n(m)$ sufficiently big, such that
\begin{eqnarray}
&&\int_E j\left(\mathbf{x}, y_{n(m)}^\epsilon(\mathbf{x})\right)d\mathbf{x}
+\frac{1}{\epsilon}\int_{I_{g_{n(m)}^\epsilon}} \left(y_{n(m)}^\epsilon
(\mathbf{z}_{g_{n(m)}^\epsilon}(t))\right)^2|\mathbf{z}_{g_{n(m)}^\epsilon}^\prime(t)|dt
\label{3.8}\\
&\leq &
\int_E j\left(\mathbf{x}, \widetilde{y}_{g_{m}}(\mathbf{x})\right)d\mathbf{x}
=\int_E j\left(\mathbf{x}, y_{g_{m}}(\mathbf{x})\right)d\mathbf{x}
\rightarrow \inf (\mathcal{P}).
\nonumber
\end{eqnarray}
Since $j$ is bounded from below, we get from (\ref{3.8}):
\begin{equation}\label{3.9}
\int_{\partial \Omega_{g_{m}}}\left(y_{n(m)}^\epsilon\right)^2 d\sigma \leq C\epsilon
\end{equation}
with $C$ a constant independent of $\epsilon >0$. Then, (\ref{3.9}) shows that
(\ref{1.3}) is fulfilled with a perturbation of order $\epsilon^{1/2}$.
Moreover, again by (\ref{3.8}), we see the minimizing property of $\{y_{n(m)}^\epsilon\}$
in the original problem $(\mathcal{P})$.
We notice that in the state equation (\ref{3.3}), the right-hand side coincides with $f$
in the set
$\left\{ (x_1,x_2)\in D;\ g(x_1,x_2) < -\epsilon\right\},$
which is an approximation of $\Omega_{g_{n(m)}^\epsilon}$. Namely, we notice that for any
$g\in\mathcal{F}$, the open sets
$\left\{ (x_1,x_2)\in D;\ g(x_1,x_2) < -\epsilon\right\}$
form a nondecreasing sequence contained in $\overline{\Omega}_g$, when $\epsilon\rightarrow 0$.
Take $(x_1,x_2)$ such that $g(x_1,x_2)=0$ and take some sequence
$(x_1^n,x_2^n)\rightarrow (x_1,x_2)$,
$(x_1^n,x_2^n)\in \Omega_g$. We have $g(x_1^n,x_2^n)<0$ by (\ref{3.1}) and $g(x_1^n,x_2^n) \rightarrow 0$. Moreover, $(x_1^n,x_2^n)\in \Omega_{\epsilon_n}=
\Omega_{g+\epsilon_n}$, for $\epsilon_n>0$ sufficiently small.
Consequently, we have the desired convergence property by \cite{Tiba2006}, p. 461.
This ends the proof.
\quad$\Box$
\begin{remark}\label{rem:3}
A detailed study of the approximation properties in the penalized problem is performed
in \cite{Tiba2018a}, in a slightly different case.
\end{remark}
We consider now variations $u+\lambda v$, $g+\lambda r$, where $\lambda\in \mathbb{R}$,
$u,v\in L^2(D)$,
$g,r\in\mathcal{F}$, $g(x_1^0,x_2^0)=r(x_1^0,x_2^0)=0$. Notice that $u+\lambda v \in L^2(D)$ and
$g+\lambda r\in \mathcal{F}$ for $|\lambda|$ sufficiently small. The conditions (\ref{2.6}), (\ref{2.7}), (\ref{2.8}) from the
definition
of $\mathcal{F}$ are satisfied for $|\lambda|$ sufficiently small (depending on $g$) due to
the Weierstrass theorem and the fact that $\overline{E}$, $\partial D$ and $\mathcal{G}$ are compacts.
Here, we also use
Proposition \ref{prop:2.3}. Consequently, we assume $|\lambda|$ ``small''.
We study first the differentiability properties of the state system (\ref{3.3})-(\ref{3.7}):
\begin{proposition}\label{prop:3.2}
The system of variations corresponding to (\ref{3.3})-(\ref{3.7}) is
\begin{eqnarray}
-\Delta q_\epsilon & = & (g+\epsilon)_+^2v + 2(g+\epsilon)_+u\,r,\quad\hbox{in } D,
\label{3.10}\\
q_\epsilon & = & 0,\quad\hbox{on } \partial D,
\label{3.11}\\
w_1^\prime& = & -\nabla\partial_2 g(\mathbf{z}_g)\cdot \mathbf{w}
-\partial_2 r(\mathbf{z}_g),\quad\hbox{in } I_g,
\label{3.12}\\
w_2^\prime& = & \nabla\partial_1 g(\mathbf{z}_g)\cdot \mathbf{w}
+\partial_1 r(\mathbf{z}_g),\quad\hbox{in } I_g,
\label{3.13}\\
w_1(0)& = &0,\ w_2(0) = 0,
\label{3.14}
\end{eqnarray}
where $q_\epsilon=\lim_{\lambda\rightarrow 0}\frac{y_\epsilon^\lambda-y_\epsilon}{\lambda}$,
$\mathbf{w}=[w_1,w_2]=\lim_{\lambda\rightarrow 0} \frac{\mathbf{z}_{g+\lambda r} - \mathbf{z}_g}{\lambda}$
with $y_\epsilon^\lambda\in H^2(D)\cap H^1_0(D)$ being the solution of (\ref{3.3})-(\ref{3.4})
corresponding to $g+\lambda r$, $u+\lambda v$, and $\mathbf{z}_{g+\lambda r}\in \mathcal{C}^1(I_g)^2$
is the solution of (\ref{3.5})-(\ref{3.7}) corresponding to $g+\lambda r$. The limits exist
in the above spaces. We denote by ``$\cdot$'' the scalar product on $\mathbb{R}^2$.
\end{proposition}
\noindent
\textbf{Proof.}
We subtract the equations corresponding to $y_\epsilon^\lambda$ and $y_\epsilon$ and divide
by $\lambda\neq 0$, small:
\begin{equation}\label{3.15}
-\Delta \frac{y_\epsilon^\lambda-y_\epsilon}{\lambda}
= \frac{1}{\lambda}\left[ (g+\lambda r+\epsilon)_+^2(u+\lambda v)
- (g+\epsilon)_+^2 u\right],\quad\hbox{in } D,
\end{equation}
with 0 boundary conditions on $\partial D$.
The regularity conditions on $\mathcal{F}$ and $u,v\in L^2(D)$ give the convergence
of the right-hand side in (\ref{3.15}) to the right-hand side in (\ref{3.10})
(strongly in $L^2(D)$) via some calculations. Then, by elliptic regularity,
we have $\frac{y_\epsilon^\lambda-y_\epsilon}{\lambda}\rightarrow q_\epsilon$ strongly in
$H^2(D)\cap H^1_0(D)$ and (\ref{3.10}), (\ref{3.11}) follows.
For (\ref{3.12})-(\ref{3.14}), the argument is the same as in Proposition 6, \cite{Tiba2013}.
The convergence of the ratio $\frac{\mathbf{z}_{g+\lambda r} - \mathbf{z}_g}{\lambda}$
is in $\mathcal{C}^1(I_g)^2$
on the whole sequence $\lambda\rightarrow 0$, due to the uniqueness property of the
linear system (\ref{3.12})-(\ref{3.14}).
Here, we also use Remark \ref{rem:2}, on the convergence $\mathcal{G}_\lambda \rightarrow \mathcal{G}$ and the
continuity with respect to the perturbations of $g$ in the Hamiltonian system
(\ref{2.1})-(\ref{2.3}), according to \cite{Tiba2013}.
\quad$\Box$
\begin{remark}\label{rem:4}
We have as well imposed the condition
\begin{equation}\label{3.16}
g(x_1^0,x_2^0)=0,\quad \forall g \in \mathcal{F},
\end{equation}
where $(x_1^0,x_2^0)\in D\setminus E$ is some given point.
Similarly, constraints like (\ref{3.16}) may be imposed on a finite number of points
or on some curves in $D\setminus E$ and their geometric meaning is that the boundary
$\partial \Omega_g$ of the admissible unknown domains should contain these points, curves, etc.
\end{remark}
\begin{proposition}\label{prop:3.3}
Assume that $f\in L^p(D)$, $j(\mathbf{x},\cdot)$ is of class $\mathcal{C}^1(\mathbb{R})$
and bounded, $g \in \mathcal{F}$, $u\in L^p(D)$, $p>2$ and
$y_\epsilon (z_g (t)) = 0$ in $[0,T_g]$. Then, for any
direction $[r,v]\in \mathcal{F}\times L^p(D)$, the derivative of the penalized cost
(\ref{3.2}) is given by:
\begin{eqnarray}
&&
\quad \int_E \partial_2 j\left(\mathbf{x}, y_\epsilon(\mathbf{x})\right)q_\epsilon(\mathbf{x}) d\mathbf{x}
+\frac{2}{\epsilon}\int_{I_g} y_\epsilon(\mathbf{z}_g(t))
q_\epsilon(\mathbf{z}_g(t)) |\mathbf{z}_g^\prime(t)| dt
\label{3.17}\\
& + & \frac{2}{\epsilon}\int_{I_g} y_\epsilon(\mathbf{z}_g(t))\nabla y_\epsilon(\mathbf{z}_g(t))
\cdot \mathbf{w}(t) |\mathbf{z}_g^\prime(t)| dt
+\frac{1}{\epsilon}\int_{I_g} \left(y_\epsilon(\mathbf{z}_g(t))\right)^2
\frac{\mathbf{z}_g^\prime(t)\cdot \mathbf{w}^\prime(t)}{|\mathbf{z}_g^\prime(t)|} dt
\nonumber
\end{eqnarray}
where $q_\epsilon\in W^{2,p}(D)\cap W_0^{1,p}(D)$, $\mathbf{w}\in \mathcal{C}^1(I_g)^2$,
$\mathbf{z}_g\in \mathcal{C}^1(I_g)^2$ satisfy (\ref{3.10})-(\ref{3.14})
and (\ref{2.1})-(\ref{2.3})
respectively, and $I_g=[0,T_g]$ is the period interval for $\mathbf{z}_g(\cdot)$.
\end{proposition}
\noindent
\textbf{Proof.}
In the notations of Proposition \ref{prop:3.2}, we compute
\begin{eqnarray}
&&\lim_{\lambda \rightarrow 0}
\left\{ \frac{1}{\lambda}
\int_E \left[ j\left(\mathbf{x}, y_\epsilon^\lambda(\mathbf{x})\right)
-j\left(\mathbf{x}, y_\epsilon(\mathbf{x})\right)
\right] d\mathbf{x}
\right.
\label{3.18}\\
&&
\left.
+\frac{1}{\epsilon\lambda}
\int_{I_g}
\left[
\left(y_\epsilon^\lambda(\mathbf{z}_{g+\lambda r}(t))\right)^2 |\mathbf{z}_{g+\lambda r}^\prime(t)|
-\left(y_\epsilon(\mathbf{z}_{g}(t))\right)^2 |\mathbf{z}_g^\prime(t)|
\right] dt
\right\} .
\nonumber
\end{eqnarray}
In (\ref{3.18}), $\lambda >0$ is ``small'' and Proposition \ref{prop:2.3} ensures that
$g+\lambda r\in \mathcal{F}$ (see \cite{Tiba2018} as well). By Proposition \ref{prop:2.2}
we know that the trajectories associated to $g+\lambda r$ are periodic, that is, the
functions in the second integral are defined on $I_g$.
Moreover, since $f,u\in L^p(D)$, the functions $y_\epsilon^\lambda,\ y_\epsilon$ defined as in
(\ref{3.3}), (\ref{3.4}) belong to $W^{2,p}(D)\subset \mathcal{C}^1(\overline{D})$, by the Sobolev
embedding theorem and elliptic regularity. Consequently, all the integrals
appearing in (\ref{3.17}), (\ref{3.18}) make sense.
Moreover, in (\ref{3.18}), we have neglected the term
\begin{eqnarray*}
L & = & \lim_{\lambda\rightarrow 0}\frac{1}{\lambda\epsilon}
\int_{T_g}^{T_{g+\lambda r}} y_\epsilon^\lambda
\left(\mathbf{z}_{g+\lambda r}(t)\right)^2 |\mathbf{z}_{g+\lambda r}^\prime(t)| dt\\
&=& \lim_{\lambda\rightarrow 0}\frac{1}{\lambda\epsilon}
\int_{T_g}^{T_{g+\lambda r}}\left[
y_\epsilon^\lambda\left(\mathbf{z}_{g+\lambda r}(t)\right)^2
|\mathbf{z}_{g+\lambda r}^\prime(t)|
-y_\epsilon\left(\mathbf{z}_{g}(t)\right)^2
|\mathbf{z}_{g}^\prime(t)|
\right]dt
\end{eqnarray*}
due to the hypothesis on $y_\epsilon\left(\mathbf{z}_{g}(t)\right)$. We study it term by term:
\begin{eqnarray*}
\int_{T_g}^{T_{g+\lambda r}}
\left[
\frac{y_\epsilon^\lambda\left(\mathbf{z}_{g+\lambda r}(t)\right)^2
-y_\epsilon\left(\mathbf{z}_{g+\lambda r}(t)\right)^2}{\lambda}
|\mathbf{z}_{g+\lambda r}^\prime(t)|
\right] dt,\\
\int_{T_g}^{T_{g+\lambda r}}
\left[
\frac{y_\epsilon\left(\mathbf{z}_{g+\lambda r}(t)\right)^2
-y_\epsilon\left(\mathbf{z}_{g}(t)\right)^2}{\lambda}
|\mathbf{z}_{g+\lambda r}^\prime(t)|
\right] dt,\\
\int_{T_g}^{T_{g+\lambda r}}
\left[
y_\epsilon\left(\mathbf{z}_{g}(t)\right)^2
\frac{|\mathbf{z}_{g+\lambda r}^\prime(t)|
-|\mathbf{z}_{g}^\prime(t)|}{\lambda}
\right] dt .
\end{eqnarray*}
By Proposition \ref{prop:3.2}, each of the above three
integrands is uniformly bounded and its limit can be
easily computed, for instance on
$[0,2T_g]$ due to Proposition \ref{prop:2.4}.
Notice, in the last term, that
$|\mathbf{z}_{g}^\prime(t)|
=|\nabla g\left(\mathbf{z}_{g}(t)\right)|\neq 0$ due to
(\ref{3.5}), (\ref{3.6}) and (\ref{3.7}), so that we can
differentiate here as well.
Again by Proposition \ref{prop:2.4} and the above uniform
boundedness, we infer that each of the above three terms
has null limit as $\lambda \rightarrow 0$, i.e. $L=0$.
Consequently, it is enough to study the limit (\ref{3.18}).
We also have $y_\epsilon^\lambda \rightarrow y_\epsilon$ in $\mathcal{C}^1(\overline{D})$ as
$\lambda\rightarrow 0$, by elliptic regularity. Then, under the assumptions on $j(\cdot,\cdot)$,
we get
\begin{equation}\label{3.19}
\frac{1}{\lambda}
\int_E \left[ j\left(\mathbf{x}, y_\epsilon^\lambda(\mathbf{x})\right)
-j\left(\mathbf{x}, y_\epsilon(\mathbf{x})\right)
\right] d\mathbf{x}
\rightarrow
\int_E \partial_2 j\left(\mathbf{x}, y_\epsilon(\mathbf{x})\right)q_\epsilon(\mathbf{x}) d\mathbf{x} .
\end{equation}
For the second integral in (\ref{3.18}), we insert certain intermediate terms and
compute their limits as $\lambda \rightarrow 0$:
\begin{eqnarray}
&&
\lim_{\lambda \rightarrow 0}
\frac{1}{\epsilon\lambda}
\int_{I_g}
\left[
\left(y_\epsilon^\lambda(\mathbf{z}_{g+\lambda r}(t))\right)^2 |\mathbf{z}_{g+\lambda r}^\prime(t)|
-\left(y_\epsilon(\mathbf{z}_{g+\lambda r}(t))\right)^2 |\mathbf{z}_{g+\lambda r}^\prime(t)|
\right] dt
\label{3.20}
\\
&=& \frac{2}{\epsilon}\int_{I_g} y_\epsilon(\mathbf{z}_g(t)) q_\epsilon(\mathbf{z}_g(t))
|\mathbf{z}_g^\prime(t)| dt
\nonumber
\end{eqnarray}
due to the convergence $\mathbf{z}_{g+\lambda r}\rightarrow\mathbf{z}_g$ in $\mathcal{C}^1(I_g)^2$
by $g,r\in \mathcal{C}^2(\overline{D})$ and the continuity properties in (\ref{2.1})-(\ref{2.3});
\begin{eqnarray}
&&
\lim_{\lambda \rightarrow 0}
\frac{1}{\epsilon\lambda}
\int_{I_g}
\left[
\left(y_\epsilon(\mathbf{z}_{g+\lambda r}(t))\right)^2 |\mathbf{z}_{g+\lambda r}^\prime(t)|
-\left(y_\epsilon(\mathbf{z}_{g}(t))\right)^2 |\mathbf{z}_{g+\lambda r}^\prime(t)|
\right] dt
\label{3.21}
\\
&=& \frac{2}{\epsilon}\int_{I_g} y_\epsilon(\mathbf{z}_g(t))\nabla y_\epsilon(\mathbf{z}_g(t))
\cdot \mathbf{w}(t) |\mathbf{z}_g^\prime(t)| dt ,
\nonumber
\end{eqnarray}
where $\mathbf{w}=(w_1,w_2)$ satisfies (\ref{3.12})-(\ref{3.14}), and again we use the regularity and the
convergence properties in $\mathcal{C}^1(\overline{D})$ and $\mathcal{C}^1(I_g)^2$, respectively.
\begin{eqnarray}
&&
\lim_{\lambda \rightarrow 0}
\frac{1}{\epsilon\lambda}
\int_{I_g}
\left[
\left(y_\epsilon(\mathbf{z}_{g}(t))\right)^2 |\mathbf{z}_{g+\lambda r}^\prime(t)|
-\left(y_\epsilon(\mathbf{z}_{g}(t))\right)^2 |\mathbf{z}_{g}^\prime(t)|
\right] dt
\label{3.22}
\\
&=& \frac{1}{\epsilon}
\int_{I_g} \left(y_\epsilon(\mathbf{z}_g(t))\right)^2
\frac{\mathbf{z}_g^\prime(t)\cdot \mathbf{w}^\prime(t)}{|\mathbf{z}_g^\prime(t)|} dt,
\nonumber
\end{eqnarray}
where we recall that $|\mathbf{z}_g^\prime(t)|=\sqrt{\left((z_g^1)^\prime(t)\right)^2+\left((z_g^2)^\prime(t)\right)^2}$ is non zero by (\ref{2.7})
and the Hamiltonian system,
and standard differentiation rules may be applied under our regularity conditions.
By summing up (\ref{3.19})-(\ref{3.22}), we end the proof of (\ref{3.17}).
\quad$\Box$
\begin{remark}\label{rem:5}
In the case that $\Omega_g$ is not simply connected, the penalization integral in
(\ref{3.2}) is in fact a finite sum and each of these terms can be handled separately,
in the same way as above, due to Proposition \ref{prop:2.3} and Remark \ref{rem:2}.
The significance of the hypothesis
$y_\epsilon(\mathbf{z}_g(t))=0$ is that one should first
minimize the penalization term with respect to the control
$u$ (this is possible due to the arguments in the proof
of Proposition \ref{prop:3.1}). Then, the obtained control
should be fixed and the minimization with respect to
$g\in\mathcal{F}$ is to be performed.
In case $T_{g+\lambda r}$ can be evaluated as in
Remark \ref{rem:2.7}, the hypothesis can be relaxed
to $y_\epsilon(x_1^0,x_2^0)=0$ via a variant of the above
arguments.
\end{remark}
Now, we denote by
$A:\mathcal{C}^2(\overline{D})\times L^p(D)\rightarrow W^{2,p}(D)\cap W_0^{1,p}(D)$
the linear continuous operator given by
$r, v\rightarrow q_\epsilon$, defined in (\ref{3.10}), (\ref{3.11}).
We also denote by $B: \mathcal{C}^2(\overline{D}) \rightarrow \mathcal{C}^1(I_g)^2$
the linear continuous operator given by (\ref{3.12})-(\ref{3.14}),
$Br=[w_1,w_2]$. In these definitions, $g\in \mathcal{C}^2(\overline{D})$ and
$u\in L^p(D)$ are fixed. We have:
\begin{corollary}\label{cor:1}
The relation (\ref{3.17}) can be rewritten as:
\begin{eqnarray}
&&
\int_E \partial_2 j\left(\mathbf{x}, y_\epsilon(\mathbf{x})\right)A(r,v)(\mathbf{x}) d\mathbf{x}
+\frac{2}{\epsilon}\int_{I_g} y_\epsilon(\mathbf{z}_g(t))
A(r,v)(\mathbf{z}_g(t)) |\mathbf{z}_g^\prime(t)| dt
\label{3.23}\\
&& +\frac{2}{\epsilon}\int_{I_g} y_\epsilon(\mathbf{z}_g(t))\nabla y_\epsilon(\mathbf{z}_g(t))
\cdot Br(t) |\mathbf{z}_g^\prime(t)| dt
\nonumber\\
&&
+\frac{1}{\epsilon}\int_{I_g}
\frac{\left(y_\epsilon(\mathbf{z}_g(t))\right)^2}{|\mathbf{z}_g^\prime(t)|}
\mathbf{z}_g^\prime(t)\cdot [-\partial_2 r,\partial_1 r](\mathbf{z}_g(t)) dt
\nonumber\\
&&
+\frac{1}{\epsilon}\int_{I_g}
\frac{\left(y_\epsilon(\mathbf{z}_g(t))\right)^2}{|\mathbf{z}_g^\prime(t)|}
C(t) \cdot \mathbf{w}(t) dt ,
\nonumber
\end{eqnarray}
where the vector $C(t)$ is explained below.
\end{corollary}
\noindent
\textbf{Proof.}
In the last integral in (\ref{3.17}), we replace $\mathbf{w}^\prime(t)$ by the
right-hand side in (\ref{3.12}), (\ref{3.13}). We compute:
\begin{eqnarray}
&& \mathbf{z}_g^\prime(t) \cdot \mathbf{w}^\prime(t)
\label{3.24}
\\
&=&
\mathbf{z}_g^\prime(t) \cdot
[-\nabla\partial_2 g(\mathbf{z}_g(t))\cdot \mathbf{w}(t)-\partial_2 r(\mathbf{z}_g(t)),
\nabla\partial_1 g(\mathbf{z}_g(t))\cdot \mathbf{w}(t) +\partial_1 r(\mathbf{z}_g(t))
]
\nonumber
\\
&=&
\mathbf{z}_g^\prime(t) \cdot[-\partial_2 r(\mathbf{z}_g(t)),\partial_1 r(\mathbf{z}_g(t))]
\nonumber
\\&+&
\mathbf{z}_g^\prime(t) \cdot
[-\partial_{1,2}^2 g(\mathbf{z}_g(t)) w_1(t),
\partial_{1,1}^2 g(\mathbf{z}_g(t)) w_1(t)
]
\nonumber
\\&+&
\mathbf{z}_g^\prime(t) \cdot
[
-\partial_{2,2}^2 g(\mathbf{z}_g(t)) w_2(t),
\partial_{2,1}^2 g(\mathbf{z}_g(t)) w_2(t)
].
\nonumber
\end{eqnarray}
We denote by $C(t)$ the (known) vector
\begin{eqnarray*}
C(t)&=&[-(z_g^1)^\prime(t)\partial_{1,2}^2 g(\mathbf{z}_g(t))
+(z_g^2)^\prime(t)\partial_{1,1}^2 g(\mathbf{z}_g(t)) ,
\\
&&
-(z_g^1)^\prime(t)\partial_{2,2}^2 g(\mathbf{z}_g(t))
+(z_g^2)^\prime(t)\partial_{2,1}^2 g(\mathbf{z}_g(t))
]
\end{eqnarray*}
and together with (\ref{3.24}), we get (\ref{3.23}). This ends the proof.
\quad$\Box$
\section{Finite element discretization}
\setcounter{equation}{0}
We assume that $D$ and $E$ are polygonal. Let $\mathcal{T}_h$ be a triangulation of $D$ with vertices $A_i$,
$i\in I=\{1,\dots,n\}$. We consider that $\mathcal{T}_h$ is compatible with $E$, i.e.
$$
\forall T\in \mathcal{T}_h, T \subset \overline{E} \hbox{ or } T\subset \overline{D\setminus E}
$$
where $T$ denotes a triangle of $\mathcal{T}_h$ and $h$ is the size of $\mathcal{T}_h$.
We consider a triangle as a closed set.
For simplicity, we employ piecewise linear finite elements and we denote
$$
\mathbb{W}_h=\{ \varphi_h\in \mathcal{C}(\overline{D});
\ {\varphi_h}_{|T} \in \mathbb{P}_1(T),\ \forall T \in \mathcal{T}_h \}.
$$
We use a standard basis of $\mathbb{W}_h$, $\{ \phi_i\}_{i\in I}$, where $\phi_i$ is the hat function
associated to the vertex $A_i$, see for example \cite{Ciarlet2002}, \cite{Raviart2004}.
The finite element approximations of $g$ and $u$ are
$g_h(\mathbf{x})=\sum_{i\in I} G_i \phi_i(\mathbf{x})$,
$u_h(\mathbf{x})=\sum_{i\in I} U_i \phi_i(\mathbf{x})$,
for all $\mathbf{x}\in \overline{D}$. We set the vectors
$G=(G_i)_{i\in I}^T\in\mathbb{R}^n$, $U=(U_i)_{i\in I}^T\in\mathbb{R}^n$
and $g_h$ can be identified with $G$, etc.
The function $u$ is in $L^p(D)$, as in Proposition \ref{prop:3.3}.
Alternatively, for $u_h$, we can use discontinuous piecewise constant $\mathbb{P}_0$ finite elements.
In order to approximate $g\in \mathcal{C}^2(\overline{D})$, we can use higher-order finite elements.
\subsection{Discretization of the optimization problem}
We introduce
$$
\mathbb{V}_h=\{ \varphi_h\in \mathbb{W}_h;
\ \varphi_h = 0\hbox{ on }\partial D \},
$$
$I_0=\{i\in I;\ A_i\notin \partial D \}$ and $n_0=card(I_0)$.
The finite element weak formulation of (\ref{3.3})-(\ref{3.4}) is:
find $y_h\in \mathbb{V}_h$ such that
\begin{equation}\label{4.1}
\int_D \nabla y_h \cdot \nabla \varphi_h d\mathbf{x}
= \int_D \left(f + (g_h+\epsilon)_+^2u_h\right) \varphi_h d\mathbf{x},\quad \forall \varphi_h \in \mathbb{V}_h.
\end{equation}
As before, for $y_h(\mathbf{x})=\sum_{j\in I_0} Y_j \phi_j(\mathbf{x})$,
we set $Y=(Y_j)_{j\in I_0}^T\in\mathbb{R}^{n_0}$.
In order to obtain the linear system, we take the basis functions $\varphi_h=\phi_i$ in (\ref{4.1}) for
$i\in I_0$.
Let us consider the vector
$$
F=\left( \int_D f\phi_i d\mathbf{x}\right)_{i\in I_0}^T\in\mathbb{R}^{n_0},
$$
the $n_0\times n_0$ matrix $K$ defined by
$$
K=(K_{ij})_{i\in I_0,j\in I_0},\quad K_{ij}= \int_D \nabla \phi_j \cdot \nabla \phi_i d\mathbf{x}
$$
and the $n_0\times n$ matrix $B^1(G,\epsilon)$ defined by
$$
B^1(G,\epsilon)=(B_{ij}^1)_{i\in I_0,j\in I},\quad
B_{ij}^1= \int_D (g_h+\epsilon)_+^2 \phi_j\phi_i d\mathbf{x} .
$$
The matrix $K$ is symmetric, positive definite.
The finite element approximation of the state system (\ref{3.3})-(\ref{3.4}) is the linear system:
\begin{equation}\label{4.2}
KY=F+B^1(G,\epsilon)U.
\end{equation}
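The assembly and solution of a system of the form (\ref{4.2}) can be illustrated with a short script. The following is a minimal sketch, not the authors' code: it assumes the model case $-\Delta y=f$ with homogeneous Dirichlet data (the state system with the control term dropped), $\mathbb{P}_1$ elements on a uniform triangulation of the unit square, and a lumped (vertex) quadrature for the load vector; all names are hypothetical.

```python
import numpy as np

def p1_stiffness_local(p):
    """Local stiffness matrix of the P1 element on the triangle
    with vertex coordinates p (a 3x2 array)."""
    B = np.array([p[1] - p[0], p[2] - p[0]]).T   # Jacobian of the affine map
    area = abs(np.linalg.det(B)) / 2.0
    gref = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])  # reference gradients
    g = gref @ np.linalg.inv(B)                  # physical gradients (rows)
    return area * g @ g.T

# Uniform triangulation of the unit square, n x n vertices.
n = 17
h = 1.0 / (n - 1)
xs = np.linspace(0.0, 1.0, n)
verts = np.array([[x, y] for y in xs for x in xs])
idx = lambda i, j: j * n + i
tris = []
for j in range(n - 1):
    for i in range(n - 1):
        tris.append((idx(i, j), idx(i + 1, j), idx(i + 1, j + 1)))
        tris.append((idx(i, j), idx(i + 1, j + 1), idx(i, j + 1)))

# Assemble the global stiffness matrix K, as in the left-hand side of (4.2).
N = n * n
K = np.zeros((N, N))
for t in tris:
    Kl = p1_stiffness_local(verts[list(t)])
    for a in range(3):
        for b in range(3):
            K[t[a], t[b]] += Kl[a, b]

# Lumped load vector F_i ~ h^2 f(A_i), with f chosen so that the exact
# solution is sin(pi x1) sin(pi x2).
f = lambda v: 2 * np.pi**2 * np.sin(np.pi * v[:, 0]) * np.sin(np.pi * v[:, 1])
F = h * h * f(verts)

# Impose the homogeneous Dirichlet condition: solve on interior nodes only.
interior = [idx(i, j) for j in range(1, n - 1) for i in range(1, n - 1)]
Y = np.zeros(N)
Y[interior] = np.linalg.solve(K[np.ix_(interior, interior)], F[interior])

exact = np.sin(np.pi * verts[:, 0]) * np.sin(np.pi * verts[:, 1])
err = np.max(np.abs(Y - exact))   # nodal error, O(h^2) for this model case
print(err)
```

On this particular mesh the assembled $K$ coincides with the classical five-point stencil, and the printed nodal error is of order $h^2$.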
Now, we shall discretize the objective function (\ref{3.2}).
We denote $I_E=\{i\in I;\ A_i\in \overline{E} \}$ and $n_E=card(I_E)$.
For the first term of (\ref{3.2}), we introduce
$$
J_1(Y)=\int_E j(\mathbf{x}, y_h(\mathbf{x})) d\mathbf{x}.
$$
We shall study the second term of (\ref{3.2}).
In order to solve numerically the ODE system (\ref{3.5})-(\ref{3.7}), we use a
partition $[t_0,\dots,t_k,\dots,t_m]$ of $[0,T_g]$, with $t_0=0$ and $t_m=T_g$.
We can use the forward Euler scheme:
\begin{eqnarray}
Z_{k+1}^1 & = & Z_{k}^1 - (t_{k+1}-t_k) \frac{\partial g_h}{\partial x_2}\left(Z_{k}^1,Z_{k}^2\right),
\label{4.3}\\
Z_{k+1}^2 & = & Z_{k}^2 + (t_{k+1}-t_k) \frac{\partial g_h}{\partial x_1}\left(Z_{k}^1,Z_{k}^2\right),
\label{4.4}\\
(Z_0^1,Z_0^2) & = & \left(x_1^0,x_2^0\right),
\label{4.5}
\end{eqnarray}
for $k=0,\dots , m-2$. We set $Z_k=(Z_k^1,Z_k^2)$ and we impose $Z_m=Z_0$.
In fact, $Z_k$ is an approximation of $\mathbf{z}_g(t_k)$.
We do not need to store $Z_0$ and we set $Z=(Z^1,Z^2)\in \mathbb{R}^m\times \mathbb{R}^m$, with
$Z^1=(Z_k^1)_{1\leq k\leq m}^T$ and $Z^2=(Z_k^2)_{1\leq k\leq m}^T$.
In applications, one can use more accurate numerical methods for the ODEs,
such as explicit Runge-Kutta or backward Euler schemes,
but here we prefer to keep the exposition simple.
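The scheme (\ref{4.3})-(\ref{4.5}) can be sketched in a few lines. The following is a minimal illustration under a model assumption, with hypothetical names: for $g(x_1,x_2)=(x_1^2+x_2^2-1)/2$ the exact trajectory from $(1,0)$ is the unit circle with period $T_g=2\pi$, so the computed $Z_m$ should nearly return to $Z_0$ before the closure $Z_m=Z_0$ is imposed.

```python
import numpy as np

def euler_trajectory(grad_g, z0, T, m):
    """Forward Euler for the Hamiltonian system (4.3)-(4.5):
    z1' = -dg/dx2, z2' = dg/dx1."""
    dt = T / m
    Z = np.zeros((m + 1, 2))
    Z[0] = z0
    for k in range(m):
        g1, g2 = grad_g(Z[k])
        Z[k + 1, 0] = Z[k, 0] - dt * g2
        Z[k + 1, 1] = Z[k, 1] + dt * g1
    return Z

# Model case: g = (x1^2 + x2^2 - 1)/2, so grad g = (x1, x2) and the exact
# trajectory from (1, 0) is the unit circle, with period T_g = 2*pi.
grad_g = lambda z: (z[0], z[1])
Z = euler_trajectory(grad_g, np.array([1.0, 0.0]), 2 * np.pi, 10000)
closure = np.linalg.norm(Z[-1] - Z[0])   # small: the trajectory nearly closes
print(closure)
```

The closure defect is the forward Euler discretization error over one period; it shrinks as the partition is refined.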
Without risk of confusion, we introduce the function $Z:[0,T_g]\rightarrow \mathbb{R}^2$ defined by
$$
Z(t)=\frac{t_{k+1}-t}{(t_{k+1}-t_k)} Z_k + \frac{t-t_k}{(t_{k+1}-t_k)} Z_{k+1},\quad t_k \leq t < t_{k+1}
$$
for $k=0,1,\dots ,m-1$. We have $Z(t_k)=Z_k$ and we can identify the function $Z(\cdot)$ with the vector
$Z\in \mathbb{R}^m\times \mathbb{R}^m$. We remark that $Z(\cdot)$ is differentiable on each interval $(t_k, t_{k+1})$
and $Z^\prime(t)=\frac{1}{(t_{k+1}-t_k)}(Z_{k+1}^1-Z_k^1,Z_{k+1}^2-Z_k^2)$ for $t_k \leq t < t_{k+1}$.
We introduce the $n_0\times n_0$ matrix $N(Z)$ defined by
$$
N(Z)=\left(
\int_0^{T_g} \phi_j(Z(t)) \phi_i(Z(t)) |Z^\prime(t)| dt
\right)_{i\in I_0,j\in I_0}
$$
and the second term of (\ref{3.2}) is approximated by
$\frac{1}{\epsilon}Y^TN(Z)Y$; then the discrete form of the optimization problem (\ref{3.2})-(\ref{3.7})
is
\begin{equation}\label{4.6}
\min_{G,U\in \mathbb{R}^n} J(G,U)=J_1(Y)+\frac{1}{\epsilon}Y^TN(Z)Y
\end{equation}
subject to (\ref{4.2}). We point out that $Y$ depends on $G$ and $U$ by (\ref{4.2})
and $Z$ depends on $G$ by (\ref{4.3})-(\ref{4.5}).
\subsection{Discretization of the derivative of the objective function}
Let $r_h,v_h$ be in $\mathbb{W}_h$ and $R,V\in \mathbb{R}^n$ be the associated vectors.
The finite element weak formulation of (\ref{3.10})-(\ref{3.11}) is:
find $q_h\in \mathbb{V}_h$ such that
\begin{equation}\label{4.7}
\int_D \nabla q_h \cdot \nabla \varphi_h d\mathbf{x}
= \int_D \left( (g_h+\epsilon)_+^2 v_h + 2(g_h+\epsilon)_+ u_h r_h\right)
\varphi_h d\mathbf{x},
\quad \forall \varphi_h \in \mathbb{V}_h.
\end{equation}
Let $Q\in \mathbb{R}^{n_0}$ be the associated vector to $q_h$ and we construct
the $n_0\times n$ matrix $C^1(G,\epsilon,U)$ defined by
$$
C^1(G,\epsilon,U)=\left(
\int_D 2(g_h+\epsilon)_+ u_h\phi_j\phi_i d\mathbf{x}
\right)_{i\in I_0,j\in I}.
$$
The linear system of (\ref{4.7}) is
\begin{equation}\label{4.8}
KQ=B^1(G,\epsilon)V + C^1(G,\epsilon,U)R.
\end{equation}
In order to approximate $\partial_2 j(\mathbf{x}, y_\epsilon(\mathbf{x}))$, $y_\epsilon$
given by (\ref{3.3})-(\ref{3.4}),
we consider the nonlinear mapping
$$
Y\in \mathbb{R}^{n_0} \rightarrow L(Y)\in \mathbb{R}^{n_E}
$$
such that
$\partial_2 j(\mathbf{x}, y_h(\mathbf{x}))
=\sum_{i\in I_E} \left(L(Y)\right)_i {\phi_i}_{|E} (\mathbf{x})$
where ${\phi_i}_{|E}$ is the restriction of $\phi_i$ to $E$.
We introduce the $n_E\times n_0$ matrix $M_{ED}$ defined by
$$
M_{ED}=\left(
\int_E \phi_i\phi_j d\mathbf{x}
\right)_{i\in I_E,j\in I_0}.
$$
The first term of (\ref{3.17}) is approximated by
\begin{equation}\label{4.9}
\left(L(Y)\right)^T M_{ED} Q
\end{equation}
and the second term of (\ref{3.17}) is approximated by
\begin{equation}\label{4.10}
\frac{2}{\epsilon}Y^T N(Z)Q
\end{equation}
where the matrix $N(Z)$ was introduced in the previous subsection.
Next, we introduce the partial derivative for a piecewise linear function.
Let $g_h\in \mathbb{W}_h$ and $G\in \mathbb{R}^n$ its associated vector, i.e.
$g_h(\mathbf{x})=\sum_{i\in I} G_i \phi_i(\mathbf{x})$.
Let $\Pi_h^1G \in \mathbb{R}^n$ be defined by
$$
\left(\Pi_h^1 G\right)_i
=\frac{1}{\sum_{j\in J_i} area(T_j)}
\sum_{j\in J_i} area(T_j)\partial_1 {g_h}_{|T_j}
$$
where $J_i$ is the set of indices $j$ such that the triangle $T_j$
has the vertex $A_i$.
Since $g_h$ is a linear function in each triangle $T_j$, the derivative $\partial_1 {g_h}_{|T_j}$
is constant.
Similarly, we construct $\Pi_h^2 G \in \mathbb{R}^n$ for $\partial_2$.
In fact, $\Pi_h^1$ and $\Pi_h^2$ are two $n\times n$ matrices depending on $\mathcal{T}_h$.
Then, we set
$$
\partial_1^h g_h(\mathbf{x})=\sum_{i\in I} \left(\Pi_h^1G\right)_i \phi_i(\mathbf{x})
$$
and similarly for $\partial_2^h g_h$.
Finally, we put $\nabla_h g_h = (\partial_1^h g_h, \partial_2^h g_h)$.
Since $y_h\in \mathbb{V}_h\subset \mathbb{W}_h$, we can define $\partial_1^h y_h$
and $\partial_2^h y_h$.
\begin{example}
We give a simple example to illustrate the discrete derivative of $\mathbb{W}_h$ functions.
We consider the square $[A_1A_2A_4A_3]$ of vertices $A_1=(0,0)$, $A_2=(1,0)$, $A_4=(1,1)$,
$A_3=(0,1)$ and the triangulation of two triangles $T_1=[A_1A_2A_4]$ and $T_2=[A_1A_4A_3]$.
We shall present the discrete derivative of the hat function
$$
\phi_4(x_1,x_2) =
\left\{
\begin{array}{ll}
x_2 &\hbox{ in }T_1\\
x_1 &\hbox{ in }T_2 .
\end{array}
\right.
$$
We have $J_1=\{1,2\}$ and
\begin{eqnarray*}
\left(\Pi_h^1 \phi_4 \right)_1
&=&\frac{1}{area(T_1)+area(T_2)}
\left( area(T_1)\partial_1 {\phi_4}_{|T_1}
+area(T_2)\partial_1 {\phi_4}_{|T_2}
\right)\\
&=&\frac{1}{1/2+1/2}
\left( 1/2 \times 0
+1/2 \times 1
\right)
=1/2.
\end{eqnarray*}
Similarly, $J_2=\{1\}$, $J_3=\{2\}$, $J_4=\{1,2\}$,
\begin{eqnarray*}
\left(\Pi_h^1 \phi_4 \right)_2
&=&\frac{1}{1/2}
\left( 1/2 \times 0
\right)
=0,\\
\left(\Pi_h^1 \phi_4 \right)_3
&=&\frac{1}{1/2}
\left( 1/2 \times 1
\right)
=1,\\
\left(\Pi_h^1 \phi_4 \right)_4
&=&\frac{1}{1/2+1/2}
\left( 1/2 \times 0
+1/2 \times 1
\right)
=1/2
\end{eqnarray*}
then
$$
\partial_1^h \phi_4(x_1,x_2)= 1/2 \times \phi_1(x_1,x_2)+ 0\times \phi_2(x_1,x_2)
+ 1 \times \phi_3(x_1,x_2)+ 1/2 \times \phi_4(x_1,x_2).
$$
\end{example}
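The area-weighted averaging defining $\Pi_h^1$ can be written in a few lines; the following minimal sketch (hypothetical names, not the authors' code) reproduces the values of the example on the two-triangle mesh.

```python
def discrete_partial(tri_areas, tri_derivs, vertex_tris):
    """Area-weighted average, at each vertex, of the per-triangle
    constant derivative, as in the definition of Pi_h^1."""
    out = []
    for tris in vertex_tris:
        total = sum(tri_areas[j] for j in tris)
        out.append(sum(tri_areas[j] * tri_derivs[j] for j in tris) / total)
    return out

# Mesh of the example: T1 = [A1 A2 A4], T2 = [A1 A4 A3] (indices 0 and 1);
# phi_4 = x2 on T1 and x1 on T2, so d/dx1 phi_4 is 0 on T1 and 1 on T2.
areas = [0.5, 0.5]
d1_phi4 = [0.0, 1.0]
J_sets = [[0, 1], [0], [1], [0, 1]]      # triangles touching A1, A2, A3, A4
coeffs = discrete_partial(areas, d1_phi4, J_sets)
print(coeffs)   # [0.5, 0.0, 1.0, 0.5]
```

The printed coefficients are exactly those of $\partial_1^h\phi_4$ in the example.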
In order to solve the ODE system (\ref{3.12})-(\ref{3.14}), we use
the forward Euler scheme on the same partition as for (\ref{4.3})-(\ref{4.5}):
\begin{eqnarray}
W_{k+1}^1 & = & W_{k}^1
- (t_{k+1}-t_k) \nabla_h \partial_2^h g_h(Z_{k})\cdot (W_k^1,W_k^2)\label{4.11}\\
&&
- (t_{k+1}-t_k) \partial_2^h r_h(Z_{k}),
\nonumber\\
W_{k+1}^2 &= & W_{k}^2
+ (t_{k+1}-t_k) \nabla_h \partial_1^h g_h \left(Z_{k}\right)\cdot (W_k^1,W_k^2)\label{4.12}\\
&& + (t_{k+1}-t_k) \partial_1^h r_h\left(Z_{k}\right),
\nonumber\\
W_0^1 & = & 0,\ W_0^2=0,
\label{4.13}
\end{eqnarray}
for $k=0,\dots , m-1$.
We set $W_k=(W_k^1,W_k^2)$; now, in general, $W_m \neq W_0$.
In fact, $W_k$ is an approximation of $\mathbf{w}(t_k)$.
We do not need to store $W_0$ and we set
$W=(W^1,W^2)\in \mathbb{R}^m\times \mathbb{R}^m$, with
$W^1=(W_k^1)_{1\leq k\leq m}^T$ and $W^2=(W_k^2)_{1\leq k\leq m}^T$.
As mentioned before, we can use more accurate numerical methods for the ODE,
such as explicit Runge-Kutta or backward Euler schemes.
We construct $W:[0,T_g]\rightarrow \mathbb{R}^2$ in the same way as for $Z(t)$
$$
W(t)=\frac{t_{k+1}-t}{(t_{k+1}-t_k)} W_k + \frac{t-t_k}{(t_{k+1}-t_k)} W_{k+1},\quad t_k \leq t < t_{k+1}
$$
for $k=0,1,\dots ,m-1$. We have $W(t_k)=W_k$ and
$W^\prime(t)=\frac{1}{(t_{k+1}-t_k)}(W_{k+1}^1-W_k^1,W_{k+1}^2-W_k^2)$ for $t_k \leq t < t_{k+1}$.
If $\psi_k$ is the one-dimensional piecewise linear hat function associated to the point $t_k$ of the
partition $[t_0,\dots,t_k,\dots,t_m]$, we can write equivalently
$W(t)=\sum_{k=0}^m W_k \psi_k(t)$ for $t\in [0,T_g]$.
The third term of (\ref{3.17}) is approximated by
\begin{eqnarray}
&&\frac{2}{\epsilon}\sum_{k=0}^{m-1}\int_{t_k}^{t_{k+1}} y_h(Z(t))\nabla_h y_h(Z(t))
\cdot W(t) |Z^\prime(t)| dt
\label{4.14}\\
&=& \frac{2}{\epsilon}\sum_{k=0}^{m-1}\int_{t_k}^{t_{k+1}} y_h(Z(t))\nabla_h y_h(Z(t))
\cdot \left( W_k\psi_k(t)+W_{k+1}\psi_{k+1}(t)\right) |Z^\prime(t)| dt
\nonumber\\
&=& \frac{2}{\epsilon}\sum_{k=0}^{m-1}\int_{t_k}^{t_{k+1}} y_h(Z(t))\partial_1^h y_h(Z(t))
\left( W_k^1\psi_k(t)+W_{k+1}^1\psi_{k+1}(t)\right)\frac{|Z_kZ_{k+1}|}{(t_{k+1}-t_k)} dt
\nonumber\\
&+& \frac{2}{\epsilon}\sum_{k=0}^{m-1}\int_{t_k}^{t_{k+1}} y_h(Z(t))\partial_2^h y_h(Z(t))
\left( W_k^2\psi_k(t)+W_{k+1}^2\psi_{k+1}(t)\right)\frac{|Z_kZ_{k+1}|}{(t_{k+1}-t_k)} dt
\nonumber
\end{eqnarray}
where $|Z_kZ_{k+1}|$ is the length of the segment in $\mathbb{R}^2$ with ends $Z_k$ and $Z_{k+1}$.
We have
\begin{eqnarray}
&&
\quad\int_{t_k}^{t_{k+1}}
y_h(Z(t))\partial_1^h y_h(Z(t))
\left( W_k^1\psi_k(t)+W_{k+1}^1\psi_{k+1}(t)\right)
\frac{|Z_kZ_{k+1}|}{(t_{k+1}-t_k)} dt
\label{4.15}\\
&=& W_k^1\int_{t_k}^{t_{k+1}}
\left(\sum_{i\in I_0} Y_i \phi_i(Z(t))\right)
\left(\sum_{j\in I} (\Pi_h^1Y)_j \phi_j(Z(t)) \right)
\psi_k(t)
\frac{|Z_kZ_{k+1}|}{(t_{k+1}-t_k)} dt
\nonumber\\
&+& W_{k+1}^1\int_{t_k}^{t_{k+1}}
\left(\sum_{i\in I_0} Y_i \phi_i(Z(t))\right)
\left(\sum_{j\in I} (\Pi_h^1Y)_j \phi_j(Z(t)) \right)
\psi_{k+1}(t)
\frac{|Z_kZ_{k+1}|}{(t_{k+1}-t_k)} dt.
\nonumber
\end{eqnarray}
We introduce the $n_0\times n$ matrices $N_k^{[k,k+1]}(Z)$ and $N_{k+1}^{[k,k+1]}(Z)$ defined by
\begin{eqnarray*}
N_k^{[k,k+1]}(Z)&=&\left(
\int_{t_k}^{t_{k+1}} \phi_i(Z(t)) \phi_j(Z(t)) \psi_k(t) \frac{|Z_kZ_{k+1}|}{(t_{k+1}-t_k)} dt
\right)_{i\in I_0,j\in I}\\
N_{k+1}^{[k,k+1]}(Z)&=&\left(
\int_{t_k}^{t_{k+1}} \phi_i(Z(t)) \phi_j(Z(t)) \psi_{k+1}(t) \frac{|Z_kZ_{k+1}|}{(t_{k+1}-t_k)} dt
\right)_{i\in I_0,j\in I}
\end{eqnarray*}
then (\ref{4.15}) can be rewritten as
$$
Y^T\left( W_k^1N_k^{[k,k+1]}(Z)+ W_{k+1}^1N_{k+1}^{[k,k+1]}(Z)\right)(\Pi_h^1Y)
$$
and finally the third term of (\ref{3.17}) is approximated by
\begin{eqnarray}\label{4.16}
&& \frac{2}{\epsilon}
Y^T
\sum_{k=0}^{m-1}\left( W_k^1 N_k^{[k,k+1]}(Z)+ W_{k+1}^1 N_{k+1}^{[k,k+1]}(Z)\right)(\Pi_h^1 Y)\\
&+&
\frac{2}{\epsilon}
Y^T
\sum_{k=0}^{m-1}\left( W_k^2 N_k^{[k,k+1]}(Z)+ W_{k+1}^2 N_{k+1}^{[k,k+1]}(Z)\right)(\Pi_h^2 Y).
\nonumber
\end{eqnarray}
We can introduce the linear operators $T^1(Z)$ and $T^2(Z)$ by
\begin{eqnarray}\label{4.17}
\qquad W^1 \in \mathbb{R}^m \rightarrow T^1(Z)W^1&=&
\sum_{k=0}^{m-1}\left( W_k^1 N_k^{[k,k+1]}(Z)+ W_{k+1}^1 N_{k+1}^{[k,k+1]}(Z)\right)\\
\qquad W^2 \in \mathbb{R}^m \rightarrow T^2(Z)W^2&=&
\sum_{k=0}^{m-1}\left( W_k^2 N_k^{[k,k+1]}(Z)+ W_{k+1}^2 N_{k+1}^{[k,k+1]}(Z)\right)
\nonumber
\end{eqnarray}
then (\ref{4.16}) can be rewritten as
\begin{eqnarray}\label{4.18}
\frac{2}{\epsilon}
Y^T \left( T^1(Z)W^1\right) (\Pi_h^1 Y)
+\frac{2}{\epsilon}
Y^T \left(T^2(Z)W^2\right) (\Pi_h^2 Y).
\end{eqnarray}
The fourth term of (\ref{3.17}) is approximated by
\begin{eqnarray}\label{4.19}
\qquad
\frac{1}{\epsilon}\sum_{k=0}^{m-1}\int_{t_k}^{t_{k+1}}
\left(\sum_{i\in I_0} Y_i \phi_i(Z(t))\right)
\left(\sum_{j\in I_0} Y_j \phi_j(Z(t))\right)
\frac{Z^\prime(t)\cdot W^\prime(t)}{|Z^\prime(t)|} dt.
\end{eqnarray}
But $Z^\prime(t)$ and $W^\prime(t)$ are constant for $t_k \leq t < t_{k+1}$, so
$$
\frac{Z^\prime(t)\cdot W^\prime(t)}{|Z^\prime(t)|}
=\frac{(Z_{k+1}^1-Z_k^1,Z_{k+1}^2-Z_k^2)\cdot(W_{k+1}^1-W_k^1,W_{k+1}^2-W_k^2)}
{(t_{k+1}-t_k) |Z_kZ_{k+1}|}
$$
where $|Z_kZ_{k+1}|$ is the length of the segment in $\mathbb{R}^2$ with ends $Z_k$ and $Z_{k+1}$.
We introduce the $n_0\times n_0$ matrix $R_k(Z)$ defined by
$$
R_k(Z)=\left(
\int_{t_k}^{t_{k+1}} \phi_i(Z(t)) \phi_j(Z(t)) |Z^\prime(t)| dt
\right)_{i\in I_0,j\in I_0}
$$
and the linear operator $T^3(Z)$ by
\begin{eqnarray}\label{4.20}
&& \qquad W \in \mathbb{R}^m \times \mathbb{R}^m \rightarrow T^3(Z)W\\
&& T^3(Z)W =
\sum_{k=0}^{m-1}\frac{(Z_{k+1}^1-Z_k^1,Z_{k+1}^2-Z_k^2)\cdot(W_{k+1}^1-W_k^1,W_{k+1}^2-W_k^2)}
{|Z_kZ_{k+1}|^2} R_k(Z) .
\nonumber
\end{eqnarray}
Then (\ref{4.19}) can be rewritten as
\begin{eqnarray}\label{4.21}
\frac{1}{\epsilon} Y^T \left(T^3(Z)W\right) Y.
\end{eqnarray}
The study of this subsection can be summarized as follows:
\begin{proposition}\label{prop:dJ}
The discrete version of (\ref{3.17}) is
\begin{eqnarray}\label{4.22}
\ \qquad dJ_{(G,U)}(R,V) & = & \left(L(Y)\right)^T M_{ED} Q
+\frac{2}{\epsilon}Y^T N(Z)Q\\
&+& \frac{2}{\epsilon} Y^T \left( T^1(Z)W^1\right) (\Pi_h^1 Y)
+\frac{2}{\epsilon} Y^T \left(T^2(Z)W^2\right) (\Pi_h^2 Y)\nonumber\\
&+&\frac{1}{\epsilon} Y^T \left(T^3(Z)W\right) Y\nonumber
\end{eqnarray}
which represents the derivative of $J$ at $(G,U)$ in the direction $(R,V)$.
\end{proposition}
\noindent
\textbf{Proof.}
We get (\ref{4.22}) just by assembling (\ref{4.9}), (\ref{4.10}), (\ref{4.18}) and (\ref{4.21}).
\quad$\Box$
\subsection{Discretization of the formula (\ref{3.23})}
From (\ref{4.8}), we get
$$
Q=K^{-1}B^1(G,\epsilon)V + K^{-1}C^1(G,\epsilon,U)R
$$
and the discrete version of the operator $A$ in Corollary \ref{cor:1} is
$$
(R,V)\in \mathbb{R}^n \times \mathbb{R}^n \rightarrow
A^1(R,V)=K^{-1}B^1(G,\epsilon)V + K^{-1}C^1(G,\epsilon,U)R.
$$
Replacing $Q$ in the first two terms of (\ref{4.22}), we get
\begin{eqnarray}\label{4.23}
&& \left( \left(L(Y)\right)^T M_{ED} +\frac{2}{\epsilon}Y^T N(Z)\right)
K^{-1}B^1(G,\epsilon)V
\\
&+& \left( \left(L(Y)\right)^T M_{ED} +\frac{2}{\epsilon}Y^T N(Z)\right)
K^{-1}C^1(G,\epsilon,U)R.
\nonumber
\end{eqnarray}
We denote
\begin{eqnarray*}
\Lambda_1(t)&=& y_h(Z(t))\nabla y_h(Z(t))|Z^\prime(t)|\\
\Lambda_2(t)&=&\frac{\left(y_h(Z(t))\right)^2}{|Z^\prime(t)|} Z^\prime(t)\\
\Lambda_3(t)&=&\frac{\left(y_h(Z(t))\right)^2}{|Z^\prime(t)|} C(t).
\end{eqnarray*}
The third term of (\ref{3.23}) is approximated by
$$
\frac{2}{\epsilon}\int_0^{T_g} \Lambda_1(t) \cdot W(t)\, dt
$$
and using the trapezoidal quadrature formula on each sub-interval $[t_k,t_{k+1}]$, we get
\begin{equation}\label{4.24}
\frac{1}{\epsilon}\sum_{k=0}^{m-1}(t_{k+1}-t_k)\left[
\Lambda_1(t_k)\cdot (W_k^1,W_k^2)
+\Lambda_1(t_{k+1})\cdot (W_{k+1}^1,W_{k+1}^2)
\right].
\end{equation}
Similarly, for the fourth and fifth terms of (\ref{3.23}), we get
\begin{equation}\label{4.25}
\frac{1}{2\epsilon}\sum_{k=0}^{m-1}(t_{k+1}-t_k)\left[
\Lambda_2(t_k)\cdot [-\partial_2^h r_h,\partial_1^h r_h](Z_k)
+\Lambda_2(t_{k+1})\cdot [-\partial_2^h r_h,\partial_1^h r_h](Z_{k+1})
\right]
\end{equation}
and
\begin{equation}\label{4.26}
\frac{1}{2\epsilon}\sum_{k=0}^{m-1}(t_{k+1}-t_k)\left[
\Lambda_3(t_k)\cdot (W_k^1,W_k^2)
+\Lambda_3(t_{k+1})\cdot (W_{k+1}^1,W_{k+1}^2)
\right].
\end{equation}
In order to write (\ref{4.24})-(\ref{4.26}) more compactly, we introduce the vectors:\\
\quad $\widetilde{\Lambda}_1^1 \in \mathbb{R}^m$ with $k$-th component
$(t_{k+1}-t_{k-1})\Lambda_1^1 (t_{k})$ for $1\leq k\leq m-1$
and last component $(t_{m}-t_{m-1})\Lambda_1^1 (t_{m})$,\\
\quad $\widetilde{\Lambda}_1^2 \in \mathbb{R}^m$ with $k$-th component
$(t_{k+1}-t_{k-1})\Lambda_1^2 (t_{k})$ for $1\leq k\leq m-1$
and last component $(t_{m}-t_{m-1})\Lambda_1^2 (t_{m})$,\\
\quad $\widetilde{\Lambda}_3^1 \in \mathbb{R}^m$ with $k$-th component
$\frac{1}{2}(t_{k+1}-t_{k-1})\Lambda_3^1 (t_{k})$ for $1\leq k\leq m-1$
and last component $\frac{1}{2}(t_{m}-t_{m-1})\Lambda_3^1 (t_{m})$,\\
\quad $\widetilde{\Lambda}_3^2 \in \mathbb{R}^m$ with $k$-th component
$\frac{1}{2}(t_{k+1}-t_{k-1})\Lambda_3^2 (t_{k})$ for $1\leq k\leq m-1$
and last component $\frac{1}{2}(t_{m}-t_{m-1})\Lambda_3^2 (t_{m})$.
Also, we introduce the vectors in $\mathbb{R}^n$:
\begin{eqnarray*}
\widetilde{\Lambda}_2^1 & = & \frac{1}{2}
\sum_{0\leq k\leq m-1}(t_{k+1}-t_k)\left( \Lambda_2^1 (t_k)\Phi(Z_k)+\Lambda_2^1 (t_{k+1})\Phi(Z_{k+1})\right)
\\
\widetilde{\Lambda}_2^2 & = & \frac{1}{2}
\sum_{0\leq k\leq m-1}(t_{k+1}-t_k) \left( \Lambda_2^2 (t_k)\Phi(Z_k)+\Lambda_2^2 (t_{k+1})\Phi(Z_{k+1})\right)
\end{eqnarray*}
where $\Phi(Z_k)=(\phi_i(Z_k))_{i\in I}^T \in \mathbb{R}^n$.
\begin{proposition}\label{prop:3.23}
The discrete version of (\ref{3.23}) is
\begin{eqnarray}\label{4.26a}
\qquad dJ_{(G,U)}(R,V) & = &
\left( \left(L(Y)\right)^T M_{ED} +\frac{2}{\epsilon}Y^T N(Z)\right)
K^{-1}B^1(G,\epsilon)V
\\
&+& \left( \left(L(Y)\right)^T M_{ED} +\frac{2}{\epsilon}Y^T N(Z)\right)
K^{-1}C^1(G,\epsilon,U)R
\nonumber\\
&+&\frac{1}{\epsilon}\left( (\widetilde{\Lambda}_1^1)^T W^1 + (\widetilde{\Lambda}_1^2)^T W^2 \right)
\nonumber\\
&+&\frac{1}{\epsilon}\left( -(\widetilde{\Lambda}_2^1)^T (\Pi_h^2 R)
+ (\widetilde{\Lambda}_2^2)^T (\Pi_h^1 R)
\right)
\nonumber\\
&+&\frac{1}{\epsilon}\left( (\widetilde{\Lambda}_3^1)^T W^1 + (\widetilde{\Lambda}_3^2)^T W^2 \right).
\nonumber
\end{eqnarray}
\end{proposition}
\noindent
\textbf{Proof.}
We obtain (\ref{4.26a}) by summing (\ref{4.23})-(\ref{4.26}).
\quad$\Box$
\bigskip
Next, we give more details about the relationship between $W$ and $R$.
Let us introduce the $2\times 2$ matrices
$$
M_2(k)=\left(
\begin{array}{cc}
1 -(t_{k+1}-t_k) \partial_1^h\partial_2^h g_h(Z_k) & -(t_{k+1}-t_k) \partial_2^h\partial_2^h g_h(Z_k)\\
(t_{k+1}-t_k) \partial_1^h\partial_1^h g_h(Z_k) & 1 +(t_{k+1}-t_k) \partial_2^h\partial_1^h g_h(Z_k)
\end{array}
\right),
$$
$$
I_2=\left(
\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}
\right)
$$
and the $2\times n$ matrix
$$
N_2(k)=\left(
\begin{array}{r}
-(t_{k+1}-t_k) \Phi^T(Z_k) \Pi_h^2\\
(t_{k+1}-t_k) \Phi^T(Z_k) \Pi_h^1
\end{array}
\right).
$$
We remark that $M_2$ depends on $G$ and $Z$, while $N_2$ depends only on $Z$.
The system (\ref{4.11})-(\ref{4.12}) can be written as
$$
\left(
\begin{array}{c}
W_{k+1}^1\\
W_{k+1}^2
\end{array}
\right)
=M_2(k)
\left(
\begin{array}{c}
W_{k}^1\\
W_{k}^2
\end{array}
\right)
+N_2(k)R.
$$
\begin{proposition}\label{prop:B23}
We have the following equality
\begin{eqnarray}
\left(
\begin{array}{c}
\begin{array}{c}
W_{1}^1\\
W_{1}^2
\end{array}
\\
\vdots
\\
\begin{array}{c}
W_{m}^1\\
W_{m}^2
\end{array}
\end{array}
\right)
&=& M_{2m}
\times
\left(
\begin{array}{c}
N_2(0)\\
N_2(1)\\
\vdots \\
N_2(m-1)
\end{array}
\right)
R
\label{4.B23}
\end{eqnarray}
where at the right-hand side, $M_{2m}$ is a $2m\times 2m$ matrix defined by
$$
\left(
\begin{array}{ccccc}
I_2 & 0 & \cdots & 0 & 0\\
M_2(1) & I_2 & \cdots & 0& 0\\
\vdots & \vdots& & \vdots& \vdots \\
M_2(m-1)\cdots M_2(1), & M_2(m-1)\cdots M_2(2), & \cdots & M_2(m-1), & I_2
\end{array}
\right)
$$
and the size of the second matrix, which contains $N_2$, is $2m\times n$.
\end{proposition}
\noindent
\textbf{Proof.}
From (\ref{4.13}) and the recurrence relation, we have
\begin{eqnarray*}
\left(
\begin{array}{c}
W_{1}^1\\
W_{1}^2
\end{array}
\right) & = & N_2(0)R\\
\left(
\begin{array}{c}
W_{2}^1\\
W_{2}^2
\end{array}
\right) & = & M_2(1) \left(
\begin{array}{c}
W_{1}^1\\
W_{1}^2
\end{array}
\right)
+N_2(1)R
= M_2(1)N_2(0)R + N_2(1)R\\
\vdots &&\\
\left(
\begin{array}{c}
W_{m-1}^1\\
W_{m-1}^2
\end{array}
\right) & = &
M_2(m-2)\cdots M_2(1)N_2(0)R
+ M_2(m-2)\cdots M_2(2)N_2(1)R\\
&&
+\cdots + M_2(m-2)N_2(m-3)R
+ N_2(m-2)R
\\
\left(
\begin{array}{c}
W_{m}^1\\
W_{m}^2
\end{array}
\right) & = &
M_2(m-1)\left(
\begin{array}{c}
W_{m-1}^1\\
W_{m-1}^2
\end{array}
\right)
+N_2(m-1)R\\
&=&
M_2(m-1)M_2(m-2)\cdots M_2(1)N_2(0)R\\
&&
+ M_2(m-1)M_2(m-2)\cdots M_2(2)N_2(1)R \\
&&
+\cdots+ M_2(m-1)M_2(m-2)N_2(m-3)R\\
&&
+ M_2(m-1)N_2(m-2)R + N_2(m-1)R
\end{eqnarray*}
which gives (\ref{4.B23}).
\quad$\Box$
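The equivalence between the recurrence $W_{k+1}=M_2(k)W_k+N_2(k)R$ and the stacked form (\ref{4.B23}) can be checked numerically with random data; the following is a minimal sketch with hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
M2 = [rng.standard_normal((2, 2)) for _ in range(m)]
N2 = [rng.standard_normal((2, n)) for _ in range(m)]
R = rng.standard_normal(n)

# Direct recurrence (cf. (4.11)-(4.13)): W_0 = 0, W_{k+1} = M2(k) W_k + N2(k) R.
W = np.zeros(2)
blocks = []
for k in range(m):
    W = M2[k] @ W + N2[k] @ R
    blocks.append(W.copy())
W_rec = np.concatenate(blocks)          # stacked (W_1, ..., W_m), length 2m

# Stacked form (4.B23): block (i, k) of M_{2m} is M2(i) M2(i-1) ... M2(k+1),
# the empty product (k = i) being the 2x2 identity; M2(0) never appears
# because W_0 = 0.
M2m = np.zeros((2 * m, 2 * m))
for i in range(m):                      # block row i corresponds to W_{i+1}
    P = np.eye(2)
    for k in range(i, -1, -1):          # fill the blocks from right to left
        M2m[2*i:2*i+2, 2*k:2*k+2] = P
        P = P @ M2[k]
W_stacked = M2m @ (np.vstack(N2) @ R)
print(np.allclose(W_rec, W_stacked))    # True
```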
Since $W$ depends on $R$ by (\ref{4.B23}), we can introduce
the linear operator approximating $B$ from Corollary \ref{cor:1}
$$
R \in \mathbb{R}^n \rightarrow W=(W^1,W^2)=\left(B^2(G,Z)R,
B^3(G,Z)R \right)\in \mathbb{R}^m \times \mathbb{R}^m.
$$
If we denote by $\ell_i$ the $i$-th row of the matrix $M_{2m}$ on the right-hand side
of (\ref{4.B23}), for $1\leq i \leq 2m$, then
$$
B^2(G,Z)=
\left(
\begin{array}{c}
\ell_1\\
\ell_3\\
\vdots \\
\ell_{2m-1}
\end{array}
\right)
\left(
\begin{array}{c}
N_2(0)\\
N_2(1)\\
\vdots \\
N_2(m-1)
\end{array}
\right),
$$
$$
B^3(G,Z)=
\left(
\begin{array}{c}
\ell_2\\
\ell_4\\
\vdots \\
\ell_{2m}
\end{array}
\right)
\left(
\begin{array}{c}
N_2(0)\\
N_2(1)\\
\vdots \\
N_2(m-1)
\end{array}
\right)
$$
and $B^2(G,Z)$, $B^3(G,Z)$ are $m\times n$ matrices.
The size of the matrix containing $N_2$ is $2m\times n$.
\subsection{Gradient type algorithm}
We start by presenting the algorithm.
\bigskip
\textbf{Step 1} Start with $k=0$, $\epsilon >0$ some given ``small'' parameter and select
some initial $(G^k,U^k)$.
\textbf{Step 2} Compute $Y^k$ the solution of (\ref{4.2}) and $Z^k$ solution of
(\ref{4.3})-(\ref{4.5}).
\textbf{Step 3} Find $(R^k,V^k)$ such that $dJ_{(G^k,U^k)}(R^k,V^k) <0$.
We say that $(R^k,V^k)$ is a descent direction.
\textbf{Step 4} Define $(G^{k+1},U^{k+1})=(G^k,U^k)+\lambda_k (R^k,V^k)$,
where $\lambda_k >0$ is obtained via some line search
$$
\lambda_k \in \arg\min_{\lambda >0} J\left((G^k,U^k)+\lambda (R^k,V^k)\right).
$$
\textbf{Step 5} If $| J(G^{k+1},U^{k+1}) - J(G^k,U^k)|$ is below some prescribed
tolerance parameter, then \textbf{Stop}.
If not, update $k:=k+1$ and go to \textbf{Step 3}.
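For concreteness, the loop above can be sketched as follows. This is only an illustrative skeleton (the routines J, solve_state, and descent_direction are placeholders, not the paper's discrete operators), with the sampled line search $\lambda=\rho^i\lambda_0$ that is used later in the examples:

```python
# Schematic sketch of Steps 1-5 (illustrative only: J, solve_state and
# descent_direction are placeholders for the discrete cost, the state
# solves of Step 2 and the choice made at Step 3).  The line search
# samples lambda = rho^i * lambda_0, as in the numerical tests.

def gradient_type_algorithm(J, solve_state, descent_direction, x0,
                            tol=1e-6, lam0=1.0, rho=0.5, max_back=30,
                            max_iter=200):
    x = x0                                   # x stands for the pair (G, U)
    Jx = J(x)
    for _ in range(max_iter):
        state = solve_state(x)               # Step 2: compute Y^k and Z^k
        d = descent_direction(x, state)      # Step 3: direction (R^k, V^k)
        best_lam, best_J = 0.0, Jx           # keep x if no sampled step improves J
        for i in range(max_back + 1):        # Step 4: sampled line search
            lam = rho ** i * lam0
            J_try = J(tuple(xi + lam * di for xi, di in zip(x, d)))
            if J_try < best_J:
                best_lam, best_J = lam, J_try
        x = tuple(xi + best_lam * di for xi, di in zip(x, d))
        if abs(best_J - Jx) < tol:           # Step 5: stopping test
            return x, best_J
        Jx = best_J
    return x, Jx

# demo on a toy quadratic cost (purely illustrative)
x_opt, J_opt = gradient_type_algorithm(
    J=lambda z: z[0] ** 2 + z[1] ** 2,
    solve_state=lambda z: None,
    descent_direction=lambda z, s: (-2 * z[0], -2 * z[1]),
    x0=(3.0, -2.0))
```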
\bigskip
At \textbf{Step 3}, we have to provide a descent direction.
In the following, we present a partial result.
Let us introduce a simplified adjoint system: find $p_h\in \mathbb{V}_h$ such that
\begin{equation}\label{4.27}
\int_D \nabla \varphi_h \cdot \nabla p_h d\mathbf{x}
= \int_E \partial_2 j(\mathbf{x}, y_h(\mathbf{x}))\varphi_h d\mathbf{x}
+\frac{2}{\epsilon}\int_0^{T_g} y_h(Z(t))\varphi_h(Z(t))|Z^\prime(t)| dt
\end{equation}
for all $\varphi_h \in \mathbb{V}_h$ and with $Z(t)$ given by (\ref{4.3})-(\ref{4.5}).
We have $p_h(\mathbf{x})=\sum_{i\in I_0} P_i \phi_i(\mathbf{x})$ and
$P=(P_i)_{i\in I_0}^T\in \mathbb{R}^{n_0}$.
The linear system associated to (\ref{4.27}) is
$$
KP=M_{ED}^T L(Y)+\frac{2}{\epsilon} N(Z)Y.
$$
We recall that $K$ and $N(Z)$ are symmetric matrices.
\begin{proposition}\label{prop:4.1}
Given $g_h,u_h\in\mathbb{W}_h$, let $y_h\in\mathbb{V}_h$ be the solution of
(\ref{4.1}). If $r_h=-p_hu_h$ and $v_h=-p_h$, where $p_h\in\mathbb{V}_h$
is the solution of
(\ref{4.27}), then
\begin{equation}\label{4.28}
\int_E \partial_2 j(\mathbf{x}, y_h(\mathbf{x})) q_h d\mathbf{x}
+\frac{2}{\epsilon}\int_0^{T_g} y_h(Z(t))q_h(Z(t))|Z^\prime(t)| dt
\leq 0,
\end{equation}
where $q_h\in\mathbb{V}_h$ is the solution of (\ref{4.7}) depending on $r_h$
and $v_h$.
\end{proposition}
\noindent
\textbf{Proof.}
Putting $\varphi_h=p_h$ in (\ref{4.7}) and $\varphi_h=q_h$ in (\ref{4.27}), we get
\begin{eqnarray}\label{4.29}
&& \int_D \left( (g_h+\epsilon)_+^2 v_h + 2(g_h+\epsilon)_+ u_h r_h\right)
p_h d\mathbf{x}
=\int_D \nabla q_h \cdot \nabla p_h d\mathbf{x}
\\
&=&\int_E \partial_2 j(\mathbf{x}, y_h(\mathbf{x}))q_h d\mathbf{x}
+\frac{2}{\epsilon}\int_0^{T_g} y_h(Z(t))q_h(Z(t))|Z^\prime(t)| dt .
\nonumber
\end{eqnarray}
For $v_h=-p_h$, we have
$$
\int_D (g_h+\epsilon)_+^2 v_hp_hd\mathbf{x}
=-\int_D (g_h+\epsilon)_+^2 p_h^2 d\mathbf{x}
\leq 0
$$
and for $r_h=-p_hu_h$, we have
$$
\int_D 2(g_h+\epsilon)_+ u_h r_h p_h d\mathbf{x}
=-\int_D 2(g_h+\epsilon)_+ (u_h p_h)^2 d\mathbf{x}
\leq 0
$$
since $(g_h+\epsilon)_+ \geq 0$ in $D$.
This ends the proof.
\quad$\Box$
\begin{remark}\label{rem:4.2}
The left-hand side of (\ref{4.28}) represents the first two terms of
(\ref{4.22}).
We can obtain a result similar to Proposition \ref{prop:4.1}, without using the adjoint system,
by taking
\begin{eqnarray*}
(V^*)^T &=& - \left( \left(L(Y)\right)^T M_{ED} +\frac{2}{\epsilon}Y^T N(Z)\right)
K^{-1}B^1(G,\epsilon) \\
(R^*)^T &=& - \left( \left(L(Y)\right)^T M_{ED} +\frac{2}{\epsilon}Y^T N(Z)\right)
K^{-1}C^1(G,\epsilon,U),
\end{eqnarray*}
in place of $V$ and $R$ in (\ref{4.23}). In this case, (\ref{4.23}) becomes
$-\| V^*\|_{\mathbb{R}^n}^2 -\| R^*\|_{\mathbb{R}^n}^2 \leq 0$.
We point out that $(V^*)^T=-P^TB^1(G,\epsilon)$ and
$(R^*)^T=-P^TC^1(G,\epsilon,U)$, so $(V^*,R^*)$ is different from the direction given by
Proposition \ref{prop:4.1}.
\end{remark}
Now, we present a descent direction, obtained from the complete gradient of the discrete cost (\ref{4.22}).
\begin{proposition}\label{prop:4.2}
For $(R^{**},V^*)\in \mathbb{R}^n \times \mathbb{R}^n$ given by
\begin{eqnarray*}
(V^*)^T&=& - \left( \left(L(Y)\right)^T M_{ED} +\frac{2}{\epsilon}Y^T N(Z)\right)
K^{-1}B^1(G,\epsilon) \\
(R^{**})^T&=& - \left( \left(L(Y)\right)^T M_{ED} +\frac{2}{\epsilon}Y^T N(Z)\right)
K^{-1}C^1(G,\epsilon,U) \\
&& - \frac{1}{\epsilon}\left( (\widetilde{\Lambda}_1^1)^T B^2(G,Z) + (\widetilde{\Lambda}_1^2)^T B^3(G,Z) \right)
\nonumber\\
&& -\frac{1}{\epsilon}\left( -(\widetilde{\Lambda}_2^1)^T \Pi_h^2
+ (\widetilde{\Lambda}_2^2)^T \Pi_h^1
\right)
\nonumber\\
&& -\frac{1}{\epsilon}\left( (\widetilde{\Lambda}_3^1)^T B^2(G,Z) + (\widetilde{\Lambda}_3^2)^T B^3(G,Z) \right),
\nonumber
\end{eqnarray*}
we obtain a descent direction for $J$ at $(G,U)$.
\end{proposition}
\noindent
\textbf{Proof.}
Replacing $W^1$ by $B^2(G,Z)R$ and $W^2$ by $B^3(G,Z)R$ in (\ref{4.26a}), we obtain
$dJ_{(G,U)}(R,V)=-(V^*)^T V - (R^{**})^T R$; then
$dJ_{(G,U)}(R^{**},V^*)=-\| V^*\|_{\mathbb{R}^n}^2 -\| R^{**}\|_{\mathbb{R}^n}^2 \leq 0$.
\quad$\Box$
\section{Numerical tests}
\setcounter{equation}{0}
Shape optimization problems and their penalizations are strongly nonconvex. The computed optimal domain depends on the starting domain, as well as on the penalization parameter
$\epsilon$ and other numerical parameters, and it may be just a local optimal solution.

Moreover, the final computed value of the penalization integral is small, but not zero. This allows differences between the optimal computed domain $\Omega_g$ and the zero level curves of the computed optimal state $y_{\epsilon}$. Consequently, we compare the optimal cost obtained in the penalized problem with the costs in the original problem (\ref{1.1}) - (\ref{1.3}) corresponding to the optimal computed domain $\Omega_g$ and to the zero level curves of $y_{\epsilon}$. Injecting the approximate optimal solution into the original problem in this way is a standard procedure. Notice that in all the experiments, the cost corresponding to $\Omega_g$ is the best one, but the differences with respect to the other computed cost values are small. This shows that the rather complex approximation/penalization that we use is reasonable. Its advantage is that it may also be used in the case of boundary observation or for Neumann boundary conditions; this will be pursued in a subsequent paper.
In the examples, we have employed the software FreeFem++, \cite{freefem++}.
\medskip
\textbf{Example 1.}
The computational domain is $D=]-3,3[\times ]-3,3[$ and
the observation zone $E$ is the disk of center $(0,0)$ and radius $0.5$.
The load is $f=1$, $j(g)=(y_\epsilon-y_d)^2$, where
$y_d(x_1,x_2)=-(x_1-0.5)^2 -(x_2-0.5)^2+\frac{1}{16}$, then the cost function (\ref{3.2}) becomes
\begin{equation}\label{5.1}
\min_{g\in \mathcal{F},\, u\in L^2(D)} J(g,u)=
\left\{
\int_E (y_\epsilon-y_d)^2d\mathbf{x}
+\frac{1}{\epsilon}\int_{I_g} \left(y_\epsilon(\mathbf{z}_g(t))\right)^2|\mathbf{z}_g^\prime(t)|dt
\right\} .
\end{equation}
The mesh of $D$ has 73786 triangles and 37254 vertices.
The penalization parameter is $\epsilon=10^{-3}$ and the tolerance
parameter for the stopping test
at \textbf{Step 5} of the algorithm is $tol=10^{-6}$.
The initial domain is the disk of center $(0,0)$ and radius $2.5$
with a circular hole of center $(-1,-1)$ and radius $0.5$.
At \textbf{Step 3} of the algorithm,
we use $(R^k,V^k)$ given by Proposition \ref{prop:4.1}.
At \textbf{Step 4}, in order to have $E\subset \Omega_k$,
we use a projection $\mathcal{P}$ in the line search
$$
\lambda_k \in \arg\min_{\lambda >0}
J\left( \mathcal{P}(G^k+\lambda R^k),
U^k+\lambda V^k\right)
$$
and $G^{k+1}=\mathcal{P}(G^k+\lambda_k R^k)$.
If the value of $g_h^{k}+\lambda r_h^k$ at a vertex from $\overline{E}$
is positive, then we set this value to $-0.1$.
We recall that the left-hand side of (\ref{4.28}) represents only the first two
terms of (\ref{4.22}), not the whole gradient.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=5.5cm]{ex2_k_30_y1.pdf}
\
\includegraphics[width=5.5cm]{ex2_k_30_y2.pdf}
\
\includegraphics[width=5.5cm]{ex2_y_bis.pdf}
\end{center}
\caption{Example 1.
The solution of the elliptic problem (\ref{1.2})-(\ref{1.3})
in the domain $\Omega_g$ (left),
in the domain bounded by the zero level sets of $y_\epsilon$ (right) and
the final computed state $y_\epsilon$ in $D$ (bottom).
\label{fig:ex2_y}}
\end{figure}
If $r_h,\ v_h$ are given by Proposition \ref{prop:4.1} and $\gamma >0$ is a scaling
parameter, then $\gamma r_h$ and $v_h$ verify (\ref{4.28}); that is, they also give a descent direction.
We take the scaling parameter for $r_h$ given by $\gamma=\frac{1}{\max(r_h)}$, that is, a normalization of $r_h$.
In this way we avoid the appearance of very large values of the objective function, which may stop the
algorithm even at the first iteration. For the line search at \textbf{Step 4}, we use
$\lambda=\rho^i\lambda_0$, with $\lambda_0=1$, $\rho=0.5$ for $i=0,1,\dots,30$.
The stopping test is obtained for $k=94$ and some values of the objective
function are:
$J(G^0,U^0)=33110.5$, $J(G^{30},U^{30})=54.725$, $J(G^{94},U^{94})=14.9851$.
At the final iteration, the first term of the
optimal objective function is $1.03796$ and
$\int_{\partial\Omega_g} y_\epsilon^2(s)ds=1.39471 \times 10^{-2}$.
We point out that the optimal $\Omega_g$ has a hole and the penalization term is a sum of two integrals
$$
\int_{\partial\Omega_g} y_\epsilon^2(s)ds=
\sum_{j=1}^2\int_{I_j} \left(y_\epsilon(\mathbf{z}_g(t))\right)^2|\mathbf{z}_g^\prime(t)|dt
$$
where the integral over $I_1$ corresponds to the exterior boundary of $\Omega_g$
and the integral over $I_2$ to the boundary of the hole.
At the bottom of Figure \ref{fig:ex2_y}, we can see the computed optimal state $y_\epsilon$ at iteration 94.
We also compute the costs $\int_E (y_1-y_d)^2d\mathbf{x}=0.998189$ where $y_1$ is the solution
of the initial elliptic problem (\ref{1.2})-(\ref{1.3})
in the domain $\Omega_g$ with $g$ obtained at iteration 94
and $\int_E (y_2-y_d)^2d\mathbf{x}=1.04032$ where $y_2$ is the solution
of the elliptic problem (\ref{1.2})-(\ref{1.3})
in the domain bounded by the zero level sets of $y_\epsilon$ in iteration 94,
see Figure~\ref{fig:ex2_y}.
\medskip
\textbf{Example 2.}
The domains $D$, $E$ and the mesh of $D$ are the same as in Example 1.
For $f=4$ and $y_d(x_1,x_2)=-x_1^2 -x_2^2+1$,
we have the exact optimal state $y=y_d$ defined in the disk of
center $(0,0)$ and radius $1$, that gives an optimal domain of the problem (\ref{1.1})-(\ref{1.3}).
We have used $\epsilon=10^{-1}$ and the starting configuration:
the disk of center $(0,0)$ and radius $2.5$
with the circular hole of center $(-1,-1)$ and radius $0.5$.
We use $(R^k,V^k)$ given by Proposition \ref{prop:4.1}.
The parameters for the line search and $\gamma$ are the same as in the preceding example.
The stopping test is obtained for $k=64$.
The initial and the final computed values of the objective function
are $5368.84$ and $11.2311$.
We obtain a local minimum that is different from the above global solution. The first term of the
final computed objective function is $0.472856$.
The term $\int_{\partial\Omega_g} y_\epsilon^2(s)ds$ is $1.07583$ and it was computed over the exterior
boundary as well as over the boundaries of the two holes. The length of the total boundary
of the optimal domain is
$23.9714$, while that of the initial domain is $2\pi(2.5+0.5)=18.8495$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=5.5cm]{ex3_k_30_y1.pdf}
\
\includegraphics[width=5.5cm]{ex3_k_30_y2.pdf}
\
\includegraphics[width=5.5cm]{ex3_k_30_y_bis.pdf}
\end{center}
\caption{Example 2.
The numerical solution of the elliptic problem (\ref{1.2})-(\ref{1.3})
in the optimal domain $\Omega_g$ (left),
in the domain bounded by the zero level sets of $y_\epsilon$ (right) and
the computed optimal state $y_\epsilon$ (bottom).
\label{fig:ex3_1hole_y_exact}}
\end{figure}
The domain changes its topology.
The computed optimal state $y_\epsilon$ is presented at the bottom of Figure \ref{fig:ex3_1hole_y_exact}.
On the left, we show $y_1$, the solution of the elliptic problem (\ref{1.2})-(\ref{1.3})
in the domain $\Omega_g$, which gives $\int_E (y_1-y_d)^2d\mathbf{x}=0.295178$;
on the right, we show $y_2$, the solution of the elliptic problem (\ref{1.2})-(\ref{1.3})
in the domain bounded by the zero level sets of $y_\epsilon$,
which gives $\int_E (y_2-y_d)^2d\mathbf{x}=0.471788$.
\medskip
\textbf{Example 3.}
We have also used the descent direction given by Proposition \ref{prop:4.2},
for the starting configuration the disk of center $(0,0)$ and radius $1.5$,
$\epsilon=10^{-1}$, $\gamma=\frac{1}{\|r_h\|_\infty}$ and a mesh of $D$ of 32446 triangles
and 16464 vertices.
For solving the ODE systems
(\ref{4.3})-(\ref{4.5}) and (\ref{4.11})-(\ref{4.13}) we use $m=30$.
At the initial iteration, we have
$\int_E (y_\epsilon-y_d)^2d\mathbf{x}=72.3767$,
$\int_{\partial\Omega_g} y_\epsilon^2(s)ds=658.459$ and the value of the objective function is
$J_0=6656.98$. The algorithm stops after 12 iterations and we have at the final iteration
$\int_E (y_\epsilon-y_d)^2d\mathbf{x}=1.22861$,
$\int_{\partial\Omega_g} y_\epsilon^2(s)ds=0.557556$ and the value of the penalized objective function is
$J_{12}=6.80521$. The final domain is a perturbation of the initial one;
see the circular non-smooth curve in the
top left image of Figure \ref{fig:total_k_20_bnd_g_y_bis}.
We have $\int_E (y_1-y_d)^2d\mathbf{x}=1.20398$ for
$y_1$ the solution of the elliptic problem (\ref{1.2})-(\ref{1.3})
in the final domain $\Omega_g$ and
$\int_E (y_2-y_d)^2d\mathbf{x}=1.21767$ for
$y_2$ the solution of the elliptic problem (\ref{1.2})-(\ref{1.3})
in the domain bounded by the zero level sets of $y_\epsilon$;
see Figure \ref{fig:total_k_20_bnd_g_y_bis},
bottom right.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=4.5cm]{total_k_20_bnd_g_y_bis.pdf}
\
\includegraphics[width=6.5cm]{total_k_20_yh_final.pdf}
\
\includegraphics[width=5.5cm]{total_k_20_y1.pdf}
\
\includegraphics[width=5.5cm]{total_k_20_y2.pdf}
\end{center}
\caption{Example 3. The zero level sets of the computed optimal $g$, $y_\epsilon$ (top, left),
the final state $y_\epsilon$ (top, right),
the solution of the elliptic problem (\ref{1.2})-(\ref{1.3})
in the domain $\Omega_g$ (bottom, left) and
in the domain bounded by the zero level sets of $y_\epsilon$ (bottom, right).
\label{fig:total_k_20_bnd_g_y_bis}}
\end{figure}
Finally, we notice that the hypothesis of Proposition \ref{prop:3.3} is obviously
fulfilled by the null level sets of $y_\epsilon$ with a corresponding parametrization. In an approximate sense, it is also
fulfilled by the computed optimal domain $\Omega_g$ since the penalization
integral is ``small'' in all the examples.
\section{Formulas}
\subsection{Transplant Codes}
\label{code}
This section concerns chains whose sequence ends in $11$.
These correspond to geodesic segments which connect a point
in $\Pi$ with a point in $A(\Pi)$. Given a chain $C$ there
is a $6$-digit {\it transplant code\/}
$(c_0,...,c_5)$ with the following property.
Given $p \in \Pi$ the point $A(p) \in A(\Pi)$ develops
out to the point
\begin{equation}
\label{transcode}
\langle C,p \rangle= \sum_{k=0}^4 c_k \exp(2 \pi i k/5) + \exp(\pi i c_5/5) \overline p.
\end{equation}
Here, as usual, we identify $\Pi$ with the pentagon in $\C$
whose vertices are the $5$th roots of unity.
We call $\langle C,p \rangle$ the
{\it transplant\/} of $p$ with respect to $C$.
In our proof of the Correspondence Lemma, we considered
$6 + 60$ chains. The first $6$ chains let us define
the vertices $p_1,p_3,p_4,p_6,p_8,p_9$ of
the hexagon $H_p$. The other $60$ chains are the
competing chains which we eliminate.
The point $p_j$ is given by $\langle c_j,p \rangle$, where
\begin{itemize}
\item $c_1: (0,3,3,1,0,7)$.
\item $c_3: (3,3,1,0,0,3)$.
\item $c_4: (3,3,0,0,1,1)$.
\item $c_6: (3,0,0,1,3,7)$.
\item $c_8: (0,0,1,3,3,3)$.
\item $c_9: (0,0,3,3,1,1)$.
\end{itemize}
Here we list $6$ of the sequences corresponding
to the $60$ competing chains, together with the
corresponding transplant codes.
\begin{itemize}
\item $0,2,1,9,11 \to 2,4,3,0,1,4$
\item $2,10,9,11 \to 1,4,3,2,0,6$
\item $0,3,2,9,11 \to 1,3,4,2,0,6$
\item $0,3,10,9,11 \to 0,2,4,3,1,8$
\item $0,3,2,10,9,11 \to 2,2,5,3,0,7$
\item $0,4,3,10,9,11 \to 0,2,3,5,2,9$.
\end{itemize}
We can deduce the remaining transplant codes
by symmetry.
Given a chain $C$ we define $C^{\#}$ to be the mirror image of $C$.
We define $\omega C$ to be the chain whose developing image
is obtained from that of $C$ by multiplying the whole picture
by $\exp(2 \pi i/5)$. For instance if the sequence associated
to $C$ is $0,2,9,11$ then the sequence associated to
$\omega C$ is $0,3,10,11$. Given any chain $C$ we have
the $10$ chains $\omega^k C$ and $\omega^k C^{\#}$ for
$k=0,1,2,3,4$. We call these new chains the
{\it dihedral images\/} of $C$. Here are the rules for figuring out the
transplant codes for the dihedral images. Assume that $C$ has
transplant code $c_0,...,c_5$ as above. Then...
\begin{enumerate}
\item $C^{\#}$ has transplant code $c_2,c_1,c_0,c_4,c_3,8-c_5$.
\item $\omega C$ has transplant code $c_4,c_0,c_1,c_2,c_3,c_5+4$.
\end{enumerate}
\subsection{Formulas for the City Boundaries}
\label{formula}
We give formulas for the curves considered above.
Every formula for a city edge can be obtained from
the ones below by pre-composing these formulas with
a dihedral symmetry of the pentagon $\Pi$.
We have the relation
$g_{1683}(x,y)=g_{8316}(x,-y)$, so we won't give the formula explicitly
for $g_{1683}$. This leaves us with the quadruples above which begin
with $83$.
We will simply supply the matrices $\{s_{ij}\}$ and $\{a_{ij}\}$ and $\{b_{ij}\}$ in all
the relevant cases, and in this order.
Since it is easy to mix up matrices with their transposes,
let me say explicitly that the top horizontal row corresponds to the
monomials $1,x,x^2,x^3$. With that said, here is the data for
$g_{8316}$.
{\tiny
\begin{equation}
\left[\begin{matrix} + & - & - & +\cr + & + & - & 0 \cr -&+&0&0\cr - & 0 & 0 & 0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 50 & 585 & 225 & 60 \cr 283 & 12 & 8 & 0 \cr 25 & 40 &0 &0 \cr 8&0&0&0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 20 & 171 & -99 & -16 \cr 105 & -4 & 0 & 0 \cr 0 & 40 &0 &0 \cr 8&0&0&0 \end{matrix}\right]
\end{equation}
\/}
Here are the matrices for $g_{8369}$:
{\tiny
\begin{equation}
\left[\begin{matrix} + & - & - & +\cr + & + & - & 0 \cr -&+&0&0\cr - & 0 & 0 & 0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 4700 & 41160 & 210 & 175 \cr 14180 & 1220 & 5 & 0 \cr 2010 & 175 &0 &0 \cr 5&0&0&0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 2100 & 18400 & 80 & 75 \cr 6316 & 524 & 1 & 0 \cr 880 & 75 &0 &0 \cr 1&0&0&0 \end{matrix}\right]
\end{equation}
\/}
Here are the matrices for $g_{8349}$:
{\tiny
\begin{equation}
\left[\begin{matrix} + & - & - & +\cr + & - & + & 0 \cr -&+&0&0\cr + & 0 & 0 & 0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 940 & 9480 & 235 & 60 \cr 7780 & 100 & 20 & 0 \cr 75 & 60 & 0 & 0 \cr 20 & 0 & 0 & 0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 420 & 4400 & 105 & 20 \cr 3476& 44 & -4 & 0 \cr 25 & 20 &0 &0 \cr -4&0&0&0 \end{matrix}\right]
\end{equation}
\/}
Here are the matrices for $g_{8319}$:
{\tiny
\begin{equation}
\left[\begin{matrix} 0 & - & + & -\cr + & + & - & 0 \cr -& - &0&0\cr - & 0 & 0 & 0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 0 & 85 & 130 & 5 \cr 29 & 206 & 9 & 0 \cr 20 & 5 & 0 & 0 \cr 9 & 0 & 0 & 0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 0 & 38 & 58 &2 \cr 12 & 90 & 4 & 0 \cr 8 & 2 &0 &0 \cr 4&0&0&0 \end{matrix}\right]
\end{equation}
\/}
Here are the matrices for $g_{8346}$:
{\tiny
\begin{equation}
\left[\begin{matrix} 0 & - & + & +\cr + & + & + & 0 \cr - & + &0&0\cr + & 0 & 0 & 0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 0 & 126075 & 58835 & 16810 \cr 109265 & 336200 & 16810 & 0 \cr 294175 & 5 & 0 & 0 \cr 16810 & 0 & 0 & 0 \end{matrix}\right]
\hskip 10 pt
\left[ \begin{matrix} 0 & 42045 & 25215 &0 \cr 31939 & 26896 & 6724 & 126075 \cr 0 & 2 &0 &0 \cr 6724&0&0&0 \end{matrix}\right]
\end{equation}
\/}
The quadruple point in $\Sigma_0^o$ is
$(\cos(\pi/5)t,\sin(\pi/5)t)$, where $t=0.25016...$ is a root of the following cubic.
$$(5 + 3 \sqrt 5) + (-24 - 10 \sqrt 5) t + (-5 + \sqrt 5) t^2 + 4t^3.$$
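As a quick sanity check (not part of the argument), one can confirm by bisection that this cubic indeed has a root at $t \approx 0.25016$:

```python
# Quick check (illustrative, not from the paper) that the stated cubic
# has a root near t = 0.25016, found by bisection on [0.2, 0.3].
from math import sqrt

def cubic(t):
    s5 = sqrt(5.0)
    return (5 + 3 * s5) + (-24 - 10 * s5) * t + (-5 + s5) * t ** 2 + 4 * t ** 3

a, b = 0.2, 0.3
assert cubic(a) > 0 > cubic(b)   # sign change, so a root lies in (a, b)
for _ in range(60):
    mid = 0.5 * (a + b)
    if cubic(mid) > 0:
        a = mid
    else:
        b = mid
root = 0.5 * (a + b)
assert abs(root - 0.25016) < 1e-4
```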
In the case of the triple points in $\Sigma_1^o$,
I don't know how to prove that the formulas I got from
Mathematica are correct, but I list them anyway.
In the equations below, the list $(a_0,...,a_{10})$ stands for the
polynomial $$a_0+a_1t + ... + a_{10}t^{10}.$$
The two triple points in $\Sigma_1^o$ are
$$(\cos(2 \pi/5) u_1,\sin(2\pi/5) v_1), \hskip 30 pt
(\cos(2 \pi/5) u_2,\sin(2\pi/5) v_2),$$
where
$u_1,v_1,u_2,v_2$ respectively are the roots of the following
polynomials.
\newline
{\tiny
$$(316255, -1021235 , 1187259 , 628411 , -2861623 ,
3126530 , -1726141 , 440390 , -15077 , -8998 ,
604).$$
$$(-495, 9045 , -59511,
170103 , -171269 , -112328 , 339489 , -267720 ,
108905 , -25870 , 3020)$$
$$(-1044164,
4232724 , -10713465 , 20137044 , -23128795 ,
14627289 , -5047850 , 960889 , -66285 , -10636 ,
724 )$$
$$(-3820, 14590 , 3825 , -149495 , 131854 ,
97712 , -165546 , -51200 , 15 , -10750 ,
3620 10)$$
\/}
To specify the roots exactly it is enough to note that
$$u_1=1.4799..., \hskip 20 pt
v_1=.21542..., \hskip 20 pt u_2=1.4984..., \hskip 20 pt v_2=.23169...$$
These $4$ degree $10$ polynomials are irreducible, and Sage tells us that
their Galois groups are all degree $2$ extensions of $S_5 \times S_5$ where $S_5$ is the symmetric
group on $5$ symbols. Hence the coordinates for these triple points are
not solvable numbers. I also found the polynomials for $x_1,y_1,x_2,y_2$.
The formulas for $x_1,x_2$ are similar to the ones for
$u_1,u_2$. The formulas for $y_1,y_2$ are degree $20$ even polynomials with
enormous integer coefficients.
\subsection{Triangle Maps}
In the proof of the
Comparison Lemma and the Voronoi Structure Lemma,
we relied on certain polynomial maps
from $[0,1]^2$ to certain triangles $\Upsilon$.
We call these the {\it triangle maps\/}.
Here are the domains:
\begin{enumerate}
\item $\Upsilon_0=\Upsilon_{1638}$. This contains the cities $C_{834}$ and $C_{839}$.
\item $\Upsilon_1=\Upsilon_{8349}$. This contains the cities $C_{831}$ and $C_{836}$.
\item $\Upsilon_{8316}$. This contains the cities $C_{163}$ and $C_{168}$.
\item $\Upsilon_2$. This triangle is such that $\Upsilon_1$ and $\Upsilon_2$ partition the state $\Sigma_1$.
\end{enumerate}
\subsection{The Rhombus Maps}
We will give, in some sense, an algebraic description
of all the maps from Theorem \ref{main} that correspond
to cities in the fundamental domain $T$ described
in \S \ref{divide}.
The formulas for the rhombus maps in Theorem \ref{main} are
rather nasty, but here we explain a method for deriving the
formulas. One method, of course, is to follow the geometric
definition out to its bitter end. We describe a more
algebraic method here.
Given two triples
$(R_1,v_1,e_1)$ and $(R_2,v_2,e_2)$, there is a unique
similarity which maps the first triple to the second.
This similarity conjugates the rhombus map based on
the first triple to the rhombus map based on the
second triple.
Accordingly, we first describe the formula for a
prototypical triple $(R_0,v_0,e_0)$ and then we
describe all the triples associated to maps
whose domain is contained in $T$. We stop
short of actually writing down the similarities.
\newline
\newline
{\bf Prototypical Example:\/}
For this example we find it convenient
to work in $\R^2$ rather than in $\C$.
We take $R_0=[-1,1] \times [0,2]$,
and $v_0=(-1,0)$ and $e_0$ the vertical
edge connecting $(-1,0)$ to $(-1,2)$.
In this case, the map is given by
\begin{equation}
f_0(x,y)=\bigg(x,\frac{x^2-1+2y}{y}\bigg).
\end{equation}
Notice that $f_0$ is just barely more
complicated than a projective transformation,
and that $f_0$ preserves each vertical segment
and acts there as a projective
transformation.
\newline
Now we describe the $6$ triples associated to
maps whose domains lie in $T$.
Let $s_k=\sin(k \pi/10)$.
Define
$$c_1=s_3-1/2, \hskip 30 pt
x_1=3 s_4, \hskip 30 pt
y_1=3 s_4s_3/s_2.
$$
Let $R_1$ be the rhombus with vertices
$$(c_1, \pm y_1), \hskip 15 pt
(c_1 \pm x_1,0).$$
Up to rotation, this is the ``fat rhombus'' in Figure 1.2.
We have the following.
\begin{itemize}
\item The triple associated to the mapping $p \to (168;p)$ is
$(R_1,v,e)$ where $v=(c_1,y_1)$ and $e=\overline{(c_1,y_1)(c_1-x_1,0)}$.
\item The triple associated to the mapping $p \to (163;p)$ is
$(R_1,v,e)$ where $v=(c_1,-y_1)$ and $e=\overline{(c_1,-y_1)(c_1+x_1,0)}$.
\item The triple associated to the mapping $p \to (831;p)$ is
$(R_1,v,e)$ where $v=(c_1,-y_1)$ and $e=\overline{(c_1,-y_1)(c_1-x_1,0)}$.
\item The triple associated to the mapping $p \to (836;p)$ is
$(R_1,v,e)$ where $v=(c_1,y_1)$ and $e=\overline{(c_1,y_1)(c_1+x_1,0)}$.
\end{itemize}
Define
$$c_2=-s_7, \hskip 30 pt
x_2=3 s_7 s_4/s_1, \hskip 30 pt
y_2=3 s_7.
$$
Let $R_2$ be the rhombus with vertices
$$(c_2, \pm y_2), \hskip 15 pt
(c_2 \pm x_2,0).$$
Up to rotation, this is the ``thin rhombus'' in Figure 1.2.
Let $\omega=\exp(2 \pi i/5)$.
We have the following.
\begin{itemize}
\item The triple associated to the mapping $p \to (834;p)$ is
$(\omega^3 R_2,\omega^3 v,\omega^3 e)$ where
$v=(c_2+x_2,0)$ and $e=\overline{(c_2+x_2,0)(c_2,y_2)}$.
\item The triple associated to the mapping $p \to (839;p)$ is
$(\omega^3 R_2,\omega^3 v,\omega^3 e)$ where
$v=(c_2-x_2,0)$ and $e=\overline{(c_2-x_2,0)(c_2,-y_2)}$.
\end{itemize}
This gives a description of all the maps in
the fundamental domain $T$ from
\S \ref{divide}.
\end{document} | {"config": "arxiv", "file": "2104.02567/Extra/8formulas.tex"} |
TITLE: What does this symbol mean for the MLE method?
QUESTION [0 upvotes]: This is an implementation of the maximum likelihood method on $\hat \pi$.
I am unsure what that $\mathbf 1$ looking symbol means.
MLE estimate of $\pi_y$ is
$$\hat\pi_y=\frac{1}{n}\sum_{i=1}^n\mathbf 1\{y_i = y\}.$$
Source.
REPLY [1 votes]: That is an indicator random variable.
It means
$$\mathbf 1\{y_i = y\} = \begin{cases} 1,&\text{if } y_i = y\\0,&\text{if } y_i\neq y\end{cases}$$
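For instance, with a small made-up sample, $\hat\pi_y$ is just the empirical frequency of the label $y$:

```python
# A tiny numerical illustration (hypothetical data): the MLE of pi_y is
# the empirical frequency of the label y in the sample.
def pi_hat(ys, y):
    n = len(ys)
    return sum(1 if yi == y else 0 for yi in ys) / n   # (1/n) * sum of 1{y_i = y}

ys = [0, 1, 1, 2, 1, 0]
assert pi_hat(ys, 1) == 0.5     # 3 of the 6 labels equal 1
assert pi_hat(ys, 2) == 1 / 6
```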
TITLE: If two convex sets have the same closure then their relative interiors are the same
QUESTION [0 upvotes]: I am having trouble seeing this. I have read and understood the proofs that cl(ri(C))=cl(C) and ri(cl(C))=ri(C). But how do I conclude that cl(C1)=cl(C2) iff ri(C1)=ri(C2) from the above two equalities? Do I need to show cl(ri(C))=ri(cl(C))? I'm not sure how I'd show this. All sets in consideration are in Rn and convex. Thanks in advance!
REPLY [0 votes]: The if part: $cl(C_1)=cl(ri(C_1))=cl(ri(C_2))=cl(C_2)$.
The only if part: $ri(C_1)=ri(cl(C_1))=ri(cl(C_2))=ri(C_2)$.
See proof of Proposition 1.3.5 in Convex Optimization Theory by Bertsekas.
TITLE: How do you get the upper bound over this recurrence?
QUESTION [0 upvotes]: $$T(n) = 4T\left(\frac{n}{2}\right) + \frac{n^2}{\log n}$$
I have the solution here (see example 4 in that pdf), but the problem is that they have solved it by guessing. I couldn't make that guess. So if you are going to go by the guess method too, tell me how should I have made that guess?
Or, I'm actually more interested in knowing some other method that can possibly be used to solve that.
Thanks!
REPLY [1 votes]: Let $n=2^m$ and $T(2^m) = f(m)$.
We then have
\begin{align}
f(m) & = 4 f(m-1) + \dfrac{4^m}m = 4 \left( 4f(m-2) + \dfrac{4^{m-1}}{m-1}\right) + \dfrac{4^m}m\\
& = 16 f(m-2) + 4^m \left( \dfrac1{m-1} + \dfrac1m \right)\\
& = 16 \left( 4f(m-3) + \dfrac{4^{m-2}}{m-2}\right) + 4^m \left( \dfrac1{m-1} + \dfrac1m \right)\\
& = 64f(m-3) + 4^m \left( \dfrac1{m-2} + \dfrac1{m-1} + \dfrac1m \right)\\
\end{align}
So proceeding like this we finally get
\begin{align}
f(m) & = 4^{m} f(0) + 4^m \left(1+\dfrac12 + \dfrac13 + \cdots + \dfrac1m \right) \\
& \approx 4^{m} f(0) + 4^m \left(\log_e(m) + \gamma\right)
\end{align}
Plugging in $m = \log_2(n)$, we get $$T(n) \approx n^2 \log_e(\log_2(n)) = \mathcal{O} \left( n^2 \log(\log n)\right)$$
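As a sanity check (not in the original answer), the unrolled closed form can be verified exactly in rational arithmetic before making the harmonic-number approximation:

```python
# Exact check (in rationals) that the unrolled formula
#   f(m) = 4^m f(0) + 4^m * (1 + 1/2 + ... + 1/m)
# satisfies the recurrence f(m) = 4 f(m-1) + 4^m / m.
from fractions import Fraction

f0 = Fraction(1)          # arbitrary initial value
f = f0
H = Fraction(0)           # harmonic number H_m
for m in range(1, 31):
    f = 4 * f + Fraction(4 ** m, m)      # the recurrence
    H += Fraction(1, m)
    assert f == 4 ** m * f0 + 4 ** m * H  # the unrolled closed form
```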
\begin{document}
\title{A structure-preserving finite element method for compressible ideal and resistive MHD}
\author{Evan S. Gawlik\thanks{\noindent Department of Mathematics, University of Hawai`i at M\textoverline{a}noa, \href{egawlik@hawaii.edu}{egawlik@hawaii.edu}} \; and \; Fran\c{c}ois Gay-Balmaz\thanks{\noindent CNRS - LMD, Ecole Normale Sup\'erieure, \href{francois.gay-balmaz@lmd.ens.fr}{francois.gay-balmaz@lmd.ens.fr}}}
\date{}
\maketitle
\begin{abstract}
We construct a structure-preserving finite element method and time-stepping scheme for compressible barotropic magnetohydrodynamics (MHD) both in the ideal and resistive cases, and in the presence of viscosity. The method is deduced from the geometric variational formulation of the equations. It preserves the balance laws governing the evolution of total energy and magnetic helicity, and preserves mass and the constraint $ \operatorname{div}B = 0$ to machine precision, both at the spatially and temporally discrete levels. In particular, conservation of energy and magnetic helicity hold at the discrete levels in the ideal case. It is observed that cross helicity is well conserved in our simulation in the ideal case.
\end{abstract}
\section{Introduction}
In this paper we develop a structure-preserving finite element method for the compressible barotropic MHD equations with viscosity and resistivity on a bounded domain $ \Omega \subset \mathbb{R} ^d$, $d \in \{2,3\}$. These equations seek a velocity field $u$, density $ \rho $, and magnetic field $B$ such that
\begin{align}
\rho(\partial_t u + u \cdot \nabla u) - \operatorname{curl} B \times B &= -\nabla p + \mu \Delta u + ( \lambda + \mu ) \nabla \operatorname{div}u & \text{ in } \Omega \times (0,T), \label{velocity0} \\
\partial_t B - \operatorname{curl}(u \times B) &= - \nu \operatorname{curl}\operatorname{curl}B, & \text{ in } \Omega \times (0,T), \label{magnetic0} \\
\partial_t \rho + \dv (\rho u) &= 0, & \text{ in } \Omega \times (0,T), \label{density0} \\
\dv B &= 0, & \text{ in } \Omega \times (0,T), \label{incompressible0} \\
u = B \cdot n = \curl B \times n &= 0, & \text{ on } \partial\Omega \times (0,T), \label{BC} \\
u(0) = u_0, \, B(0) = B_0, \, \rho(0) &= \rho_0, & \text{ in } \Omega, \label{IC}
\end{align}
where $p=p( \rho )$ is the pressure, $ \mu $ and $ \lambda $ are the fluid viscosity coefficients satisfying $ \mu >0$ and $2 \mu + 3 \lambda \geq 0$, and $ \nu >0$ is the resistivity coefficient.
The case $ \mu = \lambda = \nu =0$ corresponds to ideal non-viscous barotropic MHD, for which the boundary conditions \eqref{BC} are replaced by $u \cdot n|_{ \partial \Omega }= B \cdot n|_{ \partial \Omega }=0$.
Much of the literature on structure-preserving methods in MHD simulation has focused on the incompressible and ideal case, with constant density \cite{GaMuPaMaDe2011,HiLiMaZh2018,HuLeXu2020,HuMaXu2017,KrMa2017,LiWa2001,HuXu2019} and with variable density \cite{GaGB2021}. These methods have succeeded in preserving at the discrete levels several invariants and constraints of the continuous system. For instance, in \cite{GaGB2021} a finite element method was proposed which preserves energy, cross-helicity (when the fluid density is constant), magnetic helicity, mass, total squared density, pointwise incompressibility, and the constraint $ \operatorname{div}B=0$ to machine precision, both at the spatially and temporally discrete levels. Little attention has been paid to the development of structure-preserving methods for MHD in the compressible ideal or resistive case. For instance, in the resistive case, energy, magnetic helicity, and cross-helicity are not preserved and their evolution is governed by balance laws showing the impact of resistivity on the dynamics of these quantities. In order to accurately simulate the effect of resistivity in simulations of compressible MHD, it is highly desirable to exactly reproduce these laws at the discrete level. The discrete conservation laws for these quantities are then automatically satisfied in the ideal case, which extend similar properties obtained earlier in the incompressible setting.
In this paper, we construct a structure-preserving finite element method and time-stepping scheme for the compressible MHD system \eqref{velocity0}--\eqref{IC}. The method is deduced from the geometric variational formulation of the equations arising from the Hamilton principle on the diffeomorphism group of fluid motion. It preserves the balance laws governing the evolution of total energy and magnetic helicity, and preserves mass and the constraint $ \operatorname{div}B = 0$ to machine precision, both at the spatially and temporally discrete levels. In particular, conservation of energy and magnetic helicity hold at the discrete levels in the ideal case.
The approach we develop in this paper is built on our earlier work on conservative methods for compressible fluids \cite{GaGB2020} and for incompressible MHD with variable density in \cite{GaGB2019,GaGB2021}. Two notable differences that arise in the viscous, resistive, compressible setting are the change in boundary conditions for the velocity and magnetic fields, and the fact that the magnetic field is not advected as a vector field when the fluid is compressible; that is, $\curl(B \times u)$ does not coincide with the Lie derivative of the vector field $B$ along $u$ when $\dv u \neq 0$.
\section{Geometric variational formulation for MHD}
In this section we review the Hamilton principle for ideal MHD as well as the associated Euler-Poincar\'e variational formulation. We then extend the resulting form of equations to include viscosity and resistivity and examine how the balance of energy, magnetic helicity, and cross-helicity emerge from this formulation.
\paragraph{Lagrangian variational formulation for ideal MHD.} Assume that the fluid moves in a compact domain $ \Omega \subset \mathbb{R} ^3$ with smooth boundary. We denote by $\operatorname{Diff}( \Omega )$ the group of diffeomorphisms of $ \Omega $ and by $ \varphi :[0,T] \rightarrow \operatorname{Diff}( \Omega )$ the fluid flow. The associated motion of a fluid particle with label $X \in \Omega $ is $x= \varphi (t,X)$.
\medskip
When $ \nu =0$, the equation for the magnetic field reduces to $ \partial _tB - \operatorname{curl} (u \times B)=0$, which can be equivalently rewritten in geometric terms as $\partial _t (B \cdot {\rm d}s)+ \pounds _u ( B \cdot {\rm d} s)=0$ with $ \pounds _u(B \cdot {\rm d}s)$ the Lie derivative of the closed $2$-form $B \cdot {\rm d}s$.
Consequently, from the properties of Lie derivatives, the time evolution of the magnetic field is given by the push-forward operation on 2-forms as
\begin{equation}\label{B_frozen}
B(t) \cdot {\rm d}s= \varphi (t)_* ( \mathcal{B} _0 \cdot {\rm d} S)
\end{equation}
for some time independent reference magnetic field $ \mathcal{B} _0(X)$. This describes the fact that the magnetic field is frozen in the flow. Similarly, from the continuity equation $ \partial _t \rho + \operatorname{div}( \rho u)=0$, the evolution of the mass density is given by the push-forward operation on 3-forms as
\[
\rho (t) {\rm d}^3 x = \varphi (t)_* ( \varrho _0 {\rm d}^3X),
\]
for some time independent reference mass density $ \varrho _0(X)$.
\medskip
From these considerations, it follows that the ideal MHD motion is completely characterized by the fluid flow $ \varphi (t) \in \operatorname{Diff}( \Omega )$ and the given reference fields $ \varrho _0$ and $ \mathcal{B} _0$. The Hamilton principle for this system reads
\begin{equation}\label{HP_MHD}
\delta \int_0^T L( \varphi , \partial _t \varphi , \varrho_0, \mathcal{B}_0) {\rm d}t=0,
\end{equation}
with respect to variations $ \delta \varphi $ vanishing at $t=0,T$, and yields the equations of motion in Lagrangian coordinates. In \eqref{HP_MHD} the Lagrangian function $L$ depends on the fluid flow $ \varphi (t)$ and its time derivative $ \partial _t\varphi (t)$ forming an element $( \varphi , \partial _t \varphi )$ in the tangent bundle $T \operatorname{Diff}( \Omega ) $ to $\operatorname{Diff}( \Omega ) $, and also parametrically on the given $ \varrho _0$, $ \mathcal{B} _0$. From the relabelling symmetries, $L$ must be invariant under the subgroup $ \operatorname{Diff}( \Omega )_{ \varrho _0, \mathcal{B}_0 } \subset \operatorname{Diff}( \Omega ) $ of diffeomorphisms that preserve $ \varrho _0$ and $ \mathcal{B} _0$, i.e., diffeomorphisms $ \psi \in \operatorname{Diff}( \Omega )$ such that
\[
\psi ^*( \varrho _0 {\rm d}^3X) = \varrho _0 \quad\text{and}\quad \psi ^* ( \mathcal{B} _0 \cdot {\rm d}S) = \mathcal{B} _0 \cdot {\rm d}S,
\]
so that
\begin{equation}\label{invariance}
L( \varphi \circ \psi , \partial _t( \varphi \circ \psi ), \varrho _0, \mathcal{B} _0)= L( \varphi , \partial _t \varphi , \varrho_0, \mathcal{B}_0), \quad \forall\; \psi \in \operatorname{Diff}( \Omega )_{ \varrho _0, \mathcal{B}_0 } \subset \operatorname{Diff}( \Omega ) .
\end{equation}
From this invariance, $L$ can be written in terms of Eulerian variables as
\begin{equation}\label{L_ell}
L( \varphi , \partial _t \varphi , \varrho_0, \mathcal{B}_0)=\ell(u, \rho , B)
\end{equation}
where
\begin{equation} \label{Eul_variables}
u = \partial _t \varphi \circ \varphi ^{-1} , \qquad \rho \, {\rm d}^3 x = \varphi _* ( \varrho _0 {\rm d}^3 X), \qquad B \cdot {\rm d s} = \varphi _* ( \mathcal{B} _0 \cdot {\rm d} S),
\end{equation}
thereby yielding the symmetry reduced Lagrangian $\ell(u, \rho , B)$ in the Eulerian description.
In terms of $\ell$, Hamilton's principle \eqref{HP_MHD} reads
\begin{equation}\label{EP_MHD}
\delta \int_0^T\ell(u, \rho , B) {\rm d}t=0,
\end{equation}
with respect to variations of the form
\begin{equation}\label{EP_constraints}
\delta u = \partial _t v+ \pounds _uv, \qquad \delta \rho = - \operatorname{div}( \rho v), \qquad \delta B = \operatorname{curl} ( v \times B)
\end{equation}
where $v:[0,T] \rightarrow \mathfrak{X} ( \Omega )$ is an arbitrary time dependent vector field with $v(0)=v(T)=0$ and $ \pounds _uv=[u,v]$ is the Lie derivative of vector fields. Here $ \mathfrak{X} ( \Omega )$ denotes the space of vector fields $u$ on $ \Omega $ with $u \cdot n=0$ on $ \partial \Omega $, viewed as the Lie algebra of $ \operatorname{Diff}( \Omega )$. We recall that $B \cdot n=0$ on $ \partial \Omega $, a condition that is preserved by the evolution \eqref{B_frozen}. The passage from \eqref{HP_MHD} to \eqref{EP_MHD} is a special instance of the process of Euler-Poincar\'e reduction for invariant systems on Lie groups, see \cite{HoMaRa1998}. A direct application of \eqref{EP_MHD}--\eqref{EP_constraints} yields the fluid momentum equations in the form
\begin{equation}\label{velocity}
\left\langle \partial_t \frac{\delta \ell}{\delta u} , v \right\rangle + a\left(\frac{\delta \ell}{\delta u}, u, v\right) + b\left( \frac{\delta \ell}{\delta \rho},\rho,v \right) + c\left(\frac{\delta \ell}{\delta B},B,v\right)=0,
\end{equation}
for all $v$ with $v \cdot n=0$, with the trilinear forms
\begin{align*}
a(w,u,v)&= -\int_ \Omega w \cdot [u,v]\,{\rm d}x \\
b( \sigma , \rho ,v)&= - \int_ \Omega \rho \nabla \sigma \cdot v \,{\rm d}x\\
c(C,B,v)&= \int_ \Omega C \cdot \operatorname{curl} (B \times v) \,{\rm d}x.
\end{align*}
The equations for $ \rho $ and $B$ follow from their definition in \eqref{Eul_variables}, which are expressed in terms of $b$ and $c$ as
\begin{align}
\langle \partial_t \rho, \sigma \rangle + b(\sigma,\rho,u) &= 0, \quad \forall\; \sigma \label{density} \\
\langle \partial_t B, C \rangle + c(C,B,u) &= 0, \quad \forall\; C, \quad C \cdot n|_{ \partial \Omega }=0.\label{magnetic}
\end{align}
Equation \eqref{velocity} yields the general Euler-Poincar\'e form of the equations for arbitrary Lagrangian $\ell(u, \rho , B)$ as
\begin{equation}\label{EP_equations}
\partial _t \frac{\delta \ell}{\delta u} + \pounds _u \frac{\delta \ell}{\delta u} = \rho \nabla \frac{\delta \ell}{\delta \rho }+ B \times \operatorname{curl}\frac{\delta \ell}{\delta B}
\end{equation}
where in the second term we employed the notation $\pounds _u m= \operatorname{curl} m\times u + \nabla ( u \cdot m) + m \operatorname{div}u$.
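The stated formula for $\pounds_u m$ can be checked against the coordinate expression $(u \cdot \nabla)m + (\nabla u)^{\mathsf T} m + m \operatorname{div}u$ of the Lie derivative of a 1-form density. The following SymPy sketch verifies the identity on arbitrarily chosen smooth fields (the particular fields $u$, $m$ are illustrative only):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
# arbitrary smooth fields, chosen only for illustration
u = sp.Matrix([y * z, x**2, sp.sin(z)])
m = sp.Matrix([x * y, z**3, x + y])

grad = lambda f: sp.Matrix([sp.diff(f, c) for c in X])
div = lambda F: sum(sp.diff(F[i], X[i]) for i in range(3))
curl = lambda F: sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])

# coordinate form of the Lie derivative of a 1-form density:
# (u . grad) m + (grad u)^T m + m div u
lie = sp.Matrix([
    sum(u[j] * sp.diff(m[i], X[j]) + m[j] * sp.diff(u[j], X[i])
        for j in range(3))
    for i in range(3)
]) + m * div(u)

# the notation used in the text: curl m x u + grad(u . m) + m div u
claimed = curl(m).cross(u) + grad(u.dot(m)) + m * div(u)
assert sp.simplify(lie - claimed) == sp.zeros(3, 1)
```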
\medskip
The Lagrangian for barotropic MHD is
\begin{equation}\label{barotropic_ell}
\ell(u, \rho , B)= \int_ \Omega \Big[\frac{1}{2} \rho |u|^2 - \epsilon ( \rho ) - \frac{1}{2} |B| ^2 \Big]\,{\rm d}x
\end{equation}
with $ \epsilon ( \rho )$ the energy density. Using
\[
\frac{\delta \ell}{\delta u}= \rho u, \qquad \frac{\delta \ell}{\delta \rho } = \frac{1}{2} |u| ^2 - \frac{\partial \epsilon }{\partial \rho } , \qquad \frac{\delta \ell}{\delta B} = - B
\]
in \eqref{EP_equations} yields the barotropic MHD equations \eqref{velocity0} with $ \mu = \lambda =0$.
\medskip
Extension to full compressible ideal MHD subject to gravitational and Coriolis forces is easily achieved by including the entropy density $s$ in the variational formulation and considering the Lagrangian function
\begin{equation} \label{baroclinic_ell}
\ell(u, \rho ,s , B)= \int_ \Omega \Big[\frac{1}{2} \rho |u|^2 + \rho R \cdot u - \epsilon ( \rho ,s ) - \rho \phi - \frac{1}{2} |B| ^2\Big] \,{\rm d} x,
\end{equation}
with $ \phi $ the gravitational potential and a vector field $R$ such that $ \operatorname{curl}R= 2 \omega $ with $ \omega $ the angular velocity of the fluid domain.
\paragraph{Viscous and resistive MHD.} Viscosity and resistivity are included in the formulation \eqref{velocity}--\eqref{magnetic}
by defining the symmetric bilinear forms
\begin{equation}\label{bilinear_form}
d(u,v)= - \int_ \Omega \Big[\mu\nabla u: \nabla v + ( \lambda + \mu ) \operatorname{div}u \operatorname{div}v\Big]\, {\rm d}x , \qquad e(B, C) = -\nu \int_ \Omega \operatorname{curl}B \cdot \operatorname{curl} C \,{\rm d} x
\end{equation}
and considering the no slip boundary condition $u|_{ \partial \Omega }=0$ for the velocity. This corresponds in the Lagrangian description to the choice of the subgroup $ \operatorname{Diff}_0( \Omega ) $ of diffeomorphisms fixing the boundary pointwise.
The viscous and resistive barotropic MHD equations with Lagrangian $\ell(u, \rho , B)$ can be written as follows: seek $u$, $ \rho $, $B$ with $u|_{ \partial \Omega }=0$ and $B \cdot n|_{ \partial \Omega }=0$ such that
\begin{align}
\left\langle \partial_t \frac{\delta \ell}{\delta u} , v \right\rangle + a\left(\frac{\delta \ell}{\delta u}, u, v\right) + b\left( \frac{\delta \ell}{\delta \rho},\rho,v \right) + c\left(\frac{\delta \ell}{\delta B},B,v\right)&= d(u,v) & &\forall\,v, \;\; \text{$v|_{ \partial \Omega }=0$}\label{velocity_res_NS} \\
\langle \partial_t \rho, \sigma \rangle + b(\sigma,\rho,u) &=0, \quad & &\forall\,\sigma \label{density_res_NS}\\
\phantom{\int}\langle \partial_t B, C \rangle + c(C,B,u) &= e(B,C), \quad & &\forall\, C, \;\; C \cdot n|_{ \partial \Omega }=0.\label{magnetic_res_NS}
\end{align}
The boundary condition $ \operatorname{curl}B \times n|_{ \partial \Omega }=0$ emerges from the last equation, while the condition $ \operatorname{div}B(t)=0$ holds for all $t$ if it holds at the initial time. For the Lagrangian \eqref{barotropic_ell}, the system \eqref{velocity0}--\eqref{BC} is recovered. While the system \eqref{velocity_res_NS}--\eqref{magnetic_res_NS} is obtained by simply appending the bilinear forms $d$ and $e$ to the Euler-Poincar\'e equations, this system can also be obtained by a variational formulation of Lagrange-d'Alembert type, which extends the Euler-Poincar\'e formulation \eqref{EP_MHD}--\eqref{EP_constraints}, see Appendix~\ref{LdA}.
\paragraph{Balance laws for important quantities.} The balance of total energy $ \mathcal{E} = \left\langle \frac{\delta \ell}{\delta u}, u \right\rangle -\ell(u, \rho , B)$ associated to a Lagrangian $\ell$ is found as
\[
\frac{d}{dt} \mathcal{E} = d(u,u) - e \left( B, \frac{\delta \ell}{\delta B} \right)
\]
and, in the Euler-Poincar\'e formulation \eqref{velocity_res_NS}--\eqref{magnetic_res_NS}, follows from the property
\[
a(w,u,v)=- a(w,v,u), \quad \forall\; u,v,w
\]
of the trilinear form $a$.
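The antisymmetry of $a$ in its last two arguments is inherited from the antisymmetry of the Lie bracket, $[u,v]=-[v,u]$. A quick symbolic spot-check with SymPy (the 2D fields below are chosen arbitrarily, for illustration only):

```python
import sympy as sp

x, y = sp.symbols('x y')
X = (x, y)

def bracket(u, v):
    # Lie bracket of vector fields: [u, v]^i = u^j d_j v^i - v^j d_j u^i
    return [sum(u[j] * sp.diff(v[i], X[j]) - v[j] * sp.diff(u[i], X[j])
                for j in range(2))
            for i in range(2)]

# arbitrary smooth 2D fields, for illustration only
u = [x**2 * y, sp.sin(x) + y]
v = [y**3, x * y]

lhs = bracket(u, v)
rhs = bracket(v, u)
# [u, v] = -[v, u], hence a(w, u, v) = -a(w, v, u)
assert all(sp.simplify(lhs[i] + rhs[i]) == 0 for i in range(2))
```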
The conservation of total mass $\int_ \Omega \rho \, {\rm d} x$ follows from the property
\[
b(1, \rho , v)=0, \quad \forall\; \rho , v.
\]
If $A$ is any vector field satisfying $ \operatorname{curl}A=B$ and $A \times n|_{ \partial \Omega }=0$, the balance of magnetic helicity $\int_ \Omega A \cdot B{\rm d}x$ is found as follows:
\begin{align*}
\frac{d}{dt} \int_ \Omega A \cdot B{\rm d}x&= \left\langle \partial _t A, B \right\rangle + \left\langle A, \partial _t B \right\rangle \\
&= \left\langle \partial _t A, \operatorname{curl} A \right\rangle + \left\langle A, \partial _t B \right\rangle\\
&= \left\langle \operatorname{curl} \partial _t A, A \right\rangle + \left\langle A, \partial _t B \right\rangle\\
&= 2 \left\langle \partial _t B, A \right\rangle \\
&= - 2 c(A,B,u)+ 2e(B,A)\\
&= 2e(B,A),
\end{align*}
where in the third equality we used $A \times n|_{ \partial \Omega }=0$, in the fifth equality we used \eqref{magnetic_res_NS}, and in the last one we used the following property of $c$:
\[
c(A,B,u)=0 \text{ if } B= \operatorname{curl}A \text{ and } u|_{ \partial \Omega }=0.
\]
In the absence of viscosity, $u|_{ \partial \Omega }=0$ does not hold and one uses
\[
c(A,B,u)=0 \text{ if } B= \operatorname{curl}A \text{ and } u \cdot n|_{ \partial \Omega }= B \cdot n|_{ \partial \Omega }=0.
\]
\section{Spatial variational discretization} \label{sec:spatial}
We will now construct a spatial discretization of~(\ref{velocity0}-\ref{IC}) using finite elements.
We make use of the following function spaces:
\begin{align*}
H^1_0(\Omega) &= \{f \in L^2(\Omega) \mid \nabla f \in L^2(\Omega)^d, \, f=0 \text{ on } \partial\Omega \}, \\
H_0(\curl,\Omega) &=
\begin{cases}
\{ u \in L^2(\Omega)^2 \mid \partial_x u_y - \partial_y u_x \in L^2(\Omega), \, u_x n_y - u_y n_x = 0 \text{ on } \partial\Omega \}, &\mbox{ if } d = 2, \\
\{ u \in L^2(\Omega)^3 \mid \curl u \in L^2(\Omega)^3, \, u \times n = 0 \text{ on } \partial\Omega \}, &\mbox{ if } d = 3, \\
\end{cases} \\
H_0(\dv,\Omega) &= \{u \in L^2(\Omega)^d \mid \dv u \in L^2(\Omega), \, u \cdot n = 0 \text{ on } \partial\Omega \}.
\end{align*}
Let $\mathcal{T}_h$ be a triangulation of $\Omega$. We regard $\mathcal{T}_h$ as a member of a family of triangulations parametrized by $h = \max_{K \in \mathcal{T}_h} h_K$, where $h_K = \operatorname{diam}K$ denotes the diameter of a simplex $K$. We assume that this family is shape-regular, meaning that the ratio $\max_{K \in \mathcal{T}_h} h_K/\rho_K$ is bounded above by a positive constant for all $h>0$. Here, $\rho_K$ denotes the inradius of $K$.
When $r \ge 0$ is an integer and $K$ is a simplex, we write $P_r(K)$ to denote the space of polynomials on $K$ of degree at most $r$.
Let $r,s \ge 0$ be fixed integers. To discretize the velocity $u$, we use the continuous Galerkin space
\[
U_h^{\grad} = CG_{r+1}(\mathcal{T}_h)^d := \{u \in H^1_0(\Omega)^d \mid \left. u \right|_K \in P_{r+1}(K)^d, \, \forall K \in \mathcal{T}_h\}.
\]
To discretize the magnetic field $B$, we use the Raviart-Thomas space
\[
U_h^{\dv} = RT_r(\mathcal{T}_h) := \{u \in H_0(\dv,\Omega) \mid \left. u \right|_K \in P_r(K)^d + xP_r(K), \, \forall K \in \mathcal{T}_h\}.
\]
To discretize the density $\rho$, we use the discontinuous Galerkin space
\[
F_h = DG_s(\mathcal{T}_h) := \{ f \in L^2(\Omega) \mid \left. f \right|_K \in P_s(K), \, \forall K \in \mathcal{T}_h\}.
\]
Our method will also make use of an auxiliary space, the Nedelec finite element space of the first kind,
\begin{align*}
U_h^{\curl} &\!= NED_r(\mathcal{T}_h) := \begin{cases}
\{u \in H_0(\curl,\Omega) \mid \left. u \right|_K \in P_r(K)^2 + (x_2,-x_1) P_r(K), \, \forall K \in \mathcal{T}_h\},\!\!&\!\!\mbox{ if } d=2,\\
\{u \in H_0(\curl,\Omega) \mid \left. u \right|_K \in P_r(K)^3 + x \times P_r(K)^3, \, \forall K \in \mathcal{T}_h\}, \!\!&\!\!\mbox{ if } d=3,
\end{cases}
\end{align*}
which satisfies $\curl U_h^{\curl} \subset U_h^{\dv}$.
We will need consistent discretizations of the trilinear forms $a,b,c$ and the bilinear forms $d,e$. To construct these, we introduce some notation. Let $\mathcal{E}_h$ denote the set of interior $(d-1)$-dimensional faces in $\mathcal{T}_h$. On a face $e = K_1 \cap K_2 \in \mathcal{E}_h$, we denote the jump and average of a piecewise smooth scalar function $f$ by
\[
\llbracket f \rrbracket = f_1 n_1 + f_2 n_2, \quad \{f\} = \frac{f_1+f_2}{2},
\]
where $f_i = \left. f \right|_{K_i}$, $n_1$ is the normal vector to $e$ pointing from $K_1$ to $K_2$, and similarly for $n_2$. We let $\pi_h^{\grad} : L^2(\Omega)^d \rightarrow U_h^{\grad}$, $\pi_h^{\curl} : L^2(\Omega)^d \rightarrow U_h^{\curl}$, $\pi_h^{\dv} : L^2(\Omega)^d \rightarrow U_h^{\dv}$, and $\pi_h : L^2(\Omega) \rightarrow F_h$ denote the $L^2$-orthogonal projectors onto $U_h^{\grad}$, $U_h^{\curl}$, $U_h^{\dv}$, and $F_h$, respectively. We define $\curl_h : U_h^{\dv} \rightarrow U_h^{\curl}$ by
\[
\langle \curl_h u, v \rangle = \langle u, \curl v \rangle, \quad \forall v \in U_h^{\curl}.
\]
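In coefficients, computing $\curl_h u$ amounts to a single mass-matrix solve on $U_h^{\curl}$. The NumPy sketch below illustrates the mechanics with synthetic placeholder matrices (a random SPD matrix $M$ standing in for the mass matrix on $U_h^{\curl}$ and a random matrix $K$ standing in for the entries $\langle \phi_k, \curl \psi_i \rangle$; these are not real finite element data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_curl, n_div = 6, 5
# placeholder matrices, NOT real finite element data:
# M ~ mass matrix on U_h^curl (symmetric positive definite)
# K ~ coupling matrix with entries <phi_k, curl psi_i>
A = rng.standard_normal((n_curl, n_curl))
M = A @ A.T + n_curl * np.eye(n_curl)
K = rng.standard_normal((n_curl, n_div))

b = rng.standard_normal(n_div)    # coefficients of u in U_h^div
xc = np.linalg.solve(M, K @ b)    # coefficients of curl_h u in U_h^curl

# <curl_h u, psi_i> = <u, curl psi_i> for every basis function psi_i
assert np.allclose(M @ xc, K @ b)
```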
We define trilinear forms $b_h : F_h \times F_h \times U_h^{\dv} \rightarrow \mathbb{R}$, $c_h : L^2(\Omega)^d \times U_h^{\dv} \times U_h^{\grad} \rightarrow \mathbb{R}$, and a bilinear form $e_h : U_h^{\dv} \times U_h^{\dv} \rightarrow \mathbb{R}$ by
\begin{align*}
b_h(f,g,u) &= -\sum_{K \in \mathcal{T}_h} \int_K (u \cdot \nabla f) g \, {\rm d}x + \sum_{e \in \mathcal{E}_h} \int_e u \cdot \llbracket f \rrbracket \{g\} \, {\rm d}s, \\
c_h(C,B,v) &= \langle C, \curl \pi_h^{\curl} (\pi_h^{\curl}B \times \pi_h^{\curl}v) \rangle, \\
e_h(B,C) &= -\nu \langle \curl_h B, \curl_h C \rangle.\phantom{\int}
\end{align*}
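The face terms in $b_h$ follow the standard discontinuous Galerkin treatment of advection, whose consistency arguments typically rest on the algebraic "product rule" for jumps and averages, $\llbracket fg \rrbracket = \llbracket f \rrbracket \{g\} + \{f\} \llbracket g \rrbracket$, a consequence of $n_2 = -n_1$. A symbolic verification:

```python
import sympy as sp

f1, f2, g1, g2, n1 = sp.symbols('f1 f2 g1 g2 n1')
n2 = -n1  # the two outward normals on an interior face are opposite

jump = lambda a1, a2: a1 * n1 + a2 * n2
avg = lambda a1, a2: (a1 + a2) / 2

# [[f g]] = [[f]] {g} + {f} [[g]]
lhs = jump(f1 * g1, f2 * g2)
rhs = jump(f1, f2) * avg(g1, g2) + avg(f1, f2) * jump(g1, g2)
assert sp.expand(lhs - rhs) == 0
```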
Our choice of $c_h$ is motivated in part by the following lemma.
\begin{lemma} \label{lemma:ch_curl}
The trilinear form $c_h$ satisfies
\begin{equation}
c_h(w,u,v) = 0\; \text{ if } \; \curl w = u.
\end{equation}
\end{lemma}
\begin{proof}
If $\curl w = u$, then we can integrate $c_h$ by parts and use the fact that $\left.n \times \pi_h^{\curl} (\pi_h^{\curl}u \times \pi_h^{\curl}v)\right|_{\partial\Omega} = 0$ to obtain
\begin{align*}
c_h(w,u,v)
&= \langle w, \curl \pi_h^{\curl} (\pi_h^{\curl}u \times \pi_h^{\curl}v) \rangle & \\
&= \langle \curl w, \pi_h^{\curl} (\pi_h^{\curl}u \times \pi_h^{\curl}v) \rangle & \\
&= \langle u, \pi_h^{\curl} (\pi_h^{\curl}u \times \pi_h^{\curl}v) \rangle & \\
&= \langle \pi_h^{\curl} u, \pi_h^{\curl}u \times \pi_h^{\curl}v \rangle & \\
&= 0. &
\end{align*}
\end{proof}
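The final step of the proof is the pointwise scalar triple product identity $w \cdot (w \times v) = 0$, applied with $w = \pi_h^{\curl}u$ and $v = \pi_h^{\curl}v$. A numeric spot-check on arbitrary vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(3)
v = rng.standard_normal(3)
# w . (w x v) = 0: a cross product is orthogonal to both of its factors
assert abs(np.dot(w, np.cross(w, v))) < 1e-12
```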
\medskip
In the spatially discrete, temporally continuous setting, our method seeks $u : [0,T] \rightarrow U_h^{\grad}$, $\rho : [0,T] \rightarrow F_h$, and $B : [0,T] \rightarrow U_h^{\dv}$ such that
\begin{align}
\langle \sigma, \partial_t \rho \rangle &= -b_h(\sigma,\rho,u), && \forall \sigma \in F_h, \label{rhodoth} \\
\langle C, \partial_t B \rangle &= -c_h(C,B,u), && \forall C \in U_h^{\dv}, \label{Bdoth}
\end{align}
and
\[
\delta \int_0^T \ell(u,\rho,B) \, {\rm d}t = 0
\]
for all variations $\delta u : [0,T] \rightarrow U_h^{\grad}$, $\delta \rho : [0,T] \rightarrow F_h$, and $\delta B : [0,T] \rightarrow U_h^{\dv}$ satisfying
\begin{align}
\langle w, \delta u \rangle &= \langle w, \partial_t v \rangle - a(w,u,v), && \forall w \in U_h^{\grad}, \label{deltauh} \\
\langle \sigma, \delta \rho \rangle &= -b_h(\sigma,\rho,u), && \forall \sigma \in F_h, \label{deltarhoh} \\
\langle C, \delta B \rangle &= -c_h(C,B,u), && \forall C \in U_h^{\dv}, \label{deltaBh}
\end{align}
where $v : [0,T] \rightarrow U_h^{\grad}$ is an arbitrary vector field satisfying $v(0)=v(T)=0$.
Note that~(\ref{rhodoth}-\ref{Bdoth}) and~(\ref{deltauh}-\ref{deltaBh}) are discrete counterparts of the advection laws
\[
\partial_t \rho = -\dv(\rho u), \quad \partial_t B = \curl (u \times B)
\]
and the constraints
\[
\delta u = \partial_t v + [u,v], \quad \delta \rho = -\dv(\rho v), \quad \delta B = \curl(v \times B)
\]
on the variations.
As shown in \cite{GaGB2020}, in the absence of $B$ this variational principle follows from the Hamilton principle on a discrete diffeomorphism group $G_h \subset GL(F_h)$ by applying Euler-Poincar\'e reduction. In particular, the discrete version of $a$ emerging from the Euler-Poincar\'e variational formulation in \cite{GaGB2020} coincides with $a$ on the finite element space $U_h^{\grad}$ used here for the velocity.
The variational principle above yields the following equations for $u \in U_h^{\grad}$, $\rho \in F_h$, $B \in U_h^{\dv}$:
\begin{align}
\left\langle \partial_t \frac{\delta \ell}{\delta u}, v \right\rangle + a\left(\pi_h^{\grad} \frac{\delta \ell}{\delta u}, u, v\right) + b_h\left( \pi_h \frac{\delta \ell}{\delta \rho},\rho,v \right) + c_h\left(\pi_h^{\dv} \frac{\delta \ell}{\delta B},B,v\right)&= 0, && \forall v \in U_h^{\grad}, \label{velocity_nores_NS_h} \\
\langle \partial_t \rho, \sigma \rangle + b_h(\sigma,\rho,u) &=0, && \forall \sigma \in F_h,\label{density_nores_NS_h} \\
\langle \partial_t B, C \rangle + c_h(C,B,u) &= 0, && \forall C \in U_h^{\dv}. \label{magnetic_nores_NS_h}
\end{align}
We introduce viscosity and resistivity by adding $d(u,v)$ and $e_h(B,C)$ to the right-hand sides of~(\ref{velocity_nores_NS_h}) and~(\ref{magnetic_nores_NS_h}). The resulting equations read
\begin{align}
\hspace{-0.5cm}\left\langle \partial_t \frac{\delta \ell}{\delta u}, v \right\rangle + a\left(\pi_h^{\grad} \frac{\delta \ell}{\delta u}, u, v\right) + b_h\left( \pi_h \frac{\delta \ell}{\delta \rho},\rho,v \right) + c_h\left(\pi_h^{\dv} \frac{\delta \ell}{\delta B},B,v\right)&= d(u,v), \!\!\!&\!\!&\!\!\! \forall v \in U_h^{\grad}\!\!, \label{velocity_res_NS_h} \\
\langle \partial_t \rho, \sigma \rangle + b_h(\sigma,\rho,u) &=0, &\!\!&\!\!\!\forall \sigma \in F_h,\label{density_res_NS_h} \\
\langle \partial_t B, C \rangle + c_h(C,B,u) &= e_h(B,C), &\!\!& \!\!\!\forall C \in U_h^{\dv}. \label{magnetic_res_NS_h}
\end{align}
These equations are not implementable in their present form, since the terms involving $c_h$ and $e_h$ contain projections of the test functions $v$ and $C$. To handle these terms, we use the following lemma.
\begin{lemma} \label{lemma:rectify}
Let $u,B \in U_h^{\dv}$ be arbitrary, and let $J,H,U,E,\alpha,j \in U_h^{\curl}$ be defined by the relations
\begin{align}
\langle J, K \rangle &= -\left\langle \frac{\delta \ell}{\delta B}, \curl K \right\rangle, && \forall K \in U_h^{\curl}, \\
\langle H, G \rangle &= \langle B, G \rangle, && \forall G \in U_h^{\curl}, \\
\langle U, V \rangle &= \langle u, V \rangle, && \forall V \in U_h^{\curl}, \\
\langle E, F \rangle &= -\langle U \times H, F \rangle, && \forall F \in U_h^{\curl}, \label{Edef} \\
\langle \alpha, \beta \rangle &= -\langle J \times H, \beta \rangle, && \forall \beta \in U_h^{\curl}, \\
\langle j, k \rangle &= \left\langle B, \curl k \right\rangle, && \forall k \in U_h^{\curl}.
\end{align}
Then, for every $C \in U_h^{\dv}$ and every $v \in U_h^{\grad}$, we have
\begin{align}
c_h(C,B,u) &= \langle \curl E, C \rangle, \\
c_h\left(\pi_h^{\dv} \frac{\delta \ell}{\delta B},B,v\right) &= \langle \alpha, v \rangle, \label{ch_identity2} \\
e_h(B,C) &= -\nu\langle \curl j, C \rangle. \label{eh_identity}
\end{align}
\end{lemma}
\begin{proof}
We have $H=\pi_h^{\curl} B$ and $U = \pi_h^{\curl} u$ by definition. Thus,~(\ref{Edef}) implies that
\[
E = -\pi_h^{\curl}(U \times H) = -\pi_h^{\curl} (\pi_h^{\curl} u \times \pi_h^{\curl} B).
\]
It follows that
\[
\langle \curl E, C \rangle = -\langle \curl \pi_h^{\curl} (\pi_h^{\curl} u \times \pi_h^{\curl} B), C \rangle = c_h(C,B,u).
\]
To prove~(\ref{ch_identity2}), we use the fact that $\curl U_h^{\curl} \subset U_h^{\dv}$ to write
\begin{align*}
\langle \alpha, v \rangle
&= \langle \alpha, \pi_h^{\curl} v \rangle \\
&= -\langle J \times \pi_h^{\curl} B, \pi_h^{\curl} v \rangle \\
&= -\langle J, \pi_h^{\curl} B \times \pi_h^{\curl} v \rangle \\
&= -\langle J, \pi_h^{\curl} (\pi_h^{\curl} B \times \pi_h^{\curl} v) \rangle \\
&= \left\langle \frac{\delta\ell}{\delta B}, \curl \pi_h^{\curl} (\pi_h^{\curl}B \times \pi_h^{\curl}v) \right\rangle \\
&= \left\langle \pi_h^{\dv} \frac{\delta\ell}{\delta B}, \curl \pi_h^{\curl} (\pi_h^{\curl}B \times \pi_h^{\curl}v) \right\rangle \\
&= c_h\left( \pi_h^{\dv} \frac{\delta\ell}{\delta B},B,v\right).
\end{align*}
Finally,~(\ref{eh_identity}) follows from the fact that $j = \curl_h B$ by definition, so
\[
-\nu\langle \curl j, C \rangle = -\nu \langle j, \curl_h C \rangle = e_h(B,C).
\]
\end{proof}
\medskip
The preceding lemma shows that~(\ref{velocity_res_NS_h}-\ref{magnetic_res_NS_h}) can be rewritten in the following equivalent way. We seek $u,w \in U_h^{\grad}$, $B \in U_h^{\dv}$, $\rho,\theta \in F_h$, and $J,H,U,E,\alpha,j \in U_h^{\curl}$ such that
\begin{align}
\left\langle \partial_t \frac{\delta \ell}{\delta u}, v \right\rangle + a\left(w, u, v\right) + b_h(\theta,\rho,v) + \langle \alpha, v \rangle &= d(u,v), && \forall v \in U_h^{\grad}, \label{velocityh_comp} \\
\langle \partial_t \rho, \sigma \rangle + b_h(\sigma,\rho,u) &= 0, && \forall \sigma \in F_h, \label{densityh_comp} \\
\langle \partial_t B, C \rangle + \langle \curl E, C \rangle &= -\nu \langle \curl j, C \rangle, && \forall C \in U_h^{\dv}, \label{magnetich_comp} \\
\langle w, z \rangle &= \left\langle \frac{\delta \ell}{\delta u}, z \right\rangle, && \forall z \in U_h^{\grad}, \label{wh_comp} \\
\langle \theta, \tau \rangle &= \left\langle \frac{\delta \ell}{\delta \rho}, \tau \right\rangle, && \forall \tau \in F_h, \label{thetah_comp} \\
\langle J, K \rangle &= -\left\langle \frac{\delta \ell}{\delta B}, \curl K \right\rangle, && \forall K \in U_h^{\curl}, \\
\langle H, G \rangle &= \langle B, G \rangle, && \forall G \in U_h^{\curl}, \label{Bproj_comp} \\
\langle U, V \rangle &= \langle u, V \rangle, && \forall V \in U_h^{\curl}, \label{uproj_comp} \\
\langle E, F \rangle &= -\langle U \times H, F \rangle, && \forall F \in U_h^{\curl}, \label{Eh_comp} \\
\langle \alpha, \beta \rangle &= -\langle J \times H, \beta \rangle, && \forall \beta \in U_h^{\curl}, \label{alphah_comp} \\
\langle j, k \rangle &= \left\langle B, \curl k \right\rangle, && \forall k \in U_h^{\curl}. \label{jh_comp}
\end{align}
\begin{remark} \label{remark:jJ}
For Lagrangians that satisfy $\frac{\delta \ell}{\delta B} = -B$, we have $j=J$, so~(\ref{jh_comp}) can be omitted.
\end{remark}
\begin{remark}
The above discretization has several commonalities with the one proposed in~\cite{HuXu2019} for a stationary MHD problem. The finite element spaces we use for $u$ and $B$ match the ones used there, and our discretization of the term $-\nu \langle \curl B, \curl C\rangle$ matches the one used in Equation 4.3(b-c) of~\cite{HuXu2019}.
\end{remark}
\begin{remark} \label{remark:dingma} The above discretization also has a few commonalities with one that appears in~\cite{DiMa2020}, where a stable finite element method for compressible MHD is proposed and proved to be convergent. There, the space $CG_2(\mathcal{T}_h)^d$ is used for $u$ and $DG_0(\mathcal{T}_h)$ is used for $\rho$. These choices coincide with ours when $r=1$ and $s=0$. However, the authors of~\cite{DiMa2020} use $NED_0(\mathcal{T}_h)$ rather than $RT_r(\mathcal{T}_h)$ for $B$ and treat the boundary condition $\left. B \times n \right|_{\partial\Omega}=0$ rather than $\left. B \cdot n\right|_{\partial\Omega} = \left. \curl B \times n \right|_{\partial\Omega} = 0$.
\end{remark}
\begin{proposition} \label{prop:invariantsh}
If $B(0)$ is exactly divergence-free, then the solution to~(\ref{velocityh_comp}-\ref{jh_comp}) satisfies
\begin{align}
\dv B(t) &\equiv 0, \\
\frac{d}{dt} \int_\Omega \rho \, {\rm d}x &= 0, \\
\frac{d}{dt} \mathcal{E} &= d(u,u) - e_h\left(B, \pi_h^{\dv} \frac{\delta \ell}{\delta B} \right), \label{energyh} \\
\frac{d}{dt} \int_\Omega A \cdot B \, {\rm d}x &= 2e_h(B,\pi_h^{\dv} A), \label{magnetichelicityh}
\end{align}
for all $t$. Here, $\mathcal{E} = \langle \frac{\delta\ell}{\delta u}, u \rangle - \ell(u,\rho,B)$ denotes the total energy of the system, and $A$ denotes any vector field satisfying $\nabla \times A = B$ and $\left.A \times n \right|_{\partial\Omega} = 0$.
\end{proposition}
\begin{proof}
Since $\curl U_h^{\curl} \subset U_h^{\dv}$, the magnetic field equation~(\ref{magnetich_comp}) implies that the relation
\[
\partial_t B + \curl E = -\nu \curl j
\]
holds pointwise. Taking the divergence of both sides shows that $\partial_t \dv B = 0$, so $B(t)$ is divergence-free for all $t$.
Taking $\sigma=1$ in the density equation~(\ref{densityh_comp}) shows that
\[
\frac{d}{dt} \int_\Omega \rho \, {\rm d}x = \langle \partial_t \rho, 1 \rangle = -b_h(1,\rho,u) = 0.
\]
To compute the rate of change of the energy, we take $v=u$ in~(\ref{velocity_res_NS_h}), $\sigma=-\pi_h \frac{\delta\ell}{\delta \rho}$ in~(\ref{density_res_NS_h}), and $C=-\pi_h^{\dv} \frac{\delta\ell}{\delta B}$ in~(\ref{magnetic_res_NS_h}). Adding the three equations yields
\begin{equation*}
\left\langle \partial_t \frac{\delta \ell}{\delta u}, u \right\rangle - \left\langle \partial_t \rho, \pi_h \frac{\delta\ell}{\delta \rho} \right\rangle - \left\langle \partial_t B, \pi_h^{\dv} \frac{\delta \ell}{\delta B} \right\rangle = d(u,u) - e_h\left(B, \pi_h^{\dv} \frac{\delta \ell}{\delta B} \right).
\end{equation*}
Since $\partial_t \rho \in F_h$ and $\partial_t B \in U_h^{\dv}$, this simplifies to
\begin{equation*}
\left\langle \partial_t \frac{\delta \ell}{\delta u}, u \right\rangle - \left\langle \partial_t \rho, \frac{\delta\ell}{\delta \rho} \right\rangle - \left\langle \partial_t B, \frac{\delta \ell}{\delta B} \right\rangle = d(u,u) - e_h\left(B, \pi_h^{\dv} \frac{\delta \ell}{\delta B} \right),
\end{equation*}
which is equivalent to
\[
\frac{d}{dt} \left( \left\langle \frac{\delta \ell}{\delta u}, u \right\rangle - \ell(u,\rho,B) \right) = d(u,u) - e_h\left(B, \pi_h^{\dv} \frac{\delta \ell}{\delta B} \right).
\]
For the magnetic helicity, we compute
\begin{align*}
\frac{d}{dt} \int_ \Omega A \cdot B \, {\rm d}x &= \left\langle \partial _t A, B \right\rangle + \left\langle A, \partial _t B \right\rangle \\
&= \left\langle \partial _t A, \curl A \right\rangle + \left\langle A, \partial _t B \right\rangle\\
&= \left\langle \curl \partial _t A, A \right\rangle + \left\langle A, \partial _t B \right\rangle\\
&= 2 \left\langle \partial _t B, A \right\rangle \\
&= 2 \langle \partial _t B, \pi_h^{\dv} A \rangle \\
&= - 2 c_h(\pi_h^{\dv} A,B,u)+ 2e_h(B,\pi_h^{\dv} A).
\end{align*}
Since $\curl U_h^{\curl} \subset U_h^{\dv}$, we have
\begin{align*}
c_h(\pi_h^{\dv} A,B,u)
&= \langle \pi_h^{\dv} A, \curl \pi_h^{\curl}(\pi_h^{\curl}B \times \pi_h^{\curl} u) \rangle \\
&= \langle A, \curl \pi_h^{\curl}(\pi_h^{\curl}B \times \pi_h^{\curl} u) \rangle \\
&= c_h(A,B,u).
\end{align*}
Thus,
\[
\frac{d}{dt} \int_ \Omega A \cdot B \, {\rm d}x = - 2 c_h(A,B,u)+ 2e_h(B, \pi_h^{\dv} A).
\]
The first term vanishes by Lemma~\ref{lemma:ch_curl}, yielding~(\ref{magnetichelicityh}).
\end{proof}
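The first claim of the proposition ultimately rests on the vector calculus identity $\operatorname{div} \operatorname{curl} = 0$, applied to the pointwise relation $\partial_t B + \curl E = -\nu \curl j$. A symbolic check on an arbitrarily chosen smooth field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x**2 * z, sp.sin(y) * x, y * z**3])  # arbitrary smooth field

curl_F = sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])
# div(curl F) vanishes identically
div_curl_F = sum(sp.diff(curl_F[i], c) for i, c in enumerate((x, y, z)))
assert sp.simplify(div_curl_F) == 0
```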
\begin{remark}
The proposition above continues to hold if we omit the projection of $\frac{\delta \ell}{\delta u}$ onto $U_h^{\grad}$ in~(\ref{wh_comp}). We find it advantageous to do this for efficiency. As an illustration, let us consider the setting where $\ell(u,\rho,B) = \int_\Omega [\frac{1}{2}\rho|u|^2 - \epsilon(\rho) - \frac{1}{2}|B|^2] \, {\rm d}x$. If we omit~(\ref{wh_comp}) and invoke Remark~\ref{remark:jJ}, then the method seeks $u \in U_h^{\grad}$, $B \in U_h^{\dv}$, $\rho,\theta \in F_h$, and $J,H,U,E,\alpha \in U_h^{\curl}$ such that
\begin{align}
\left\langle \partial_t (\rho u), v \right\rangle + a\left(\rho u, u, v\right) + b_h(\theta,\rho,v) + \langle \alpha, v \rangle &= d(u,v), && \forall v \in U_h^{\grad}, \label{velocityh_mhd} \\
\langle \partial_t \rho, \sigma \rangle + b_h(\sigma,\rho,u) &= 0, && \forall \sigma \in F_h, \label{densityh_mhd} \\
\langle \partial_t B, C \rangle + \langle \curl E, C \rangle &= -\nu \langle \curl J, C \rangle, && \forall C \in U_h^{\dv}, \label{magnetich_mhd} \\
\langle \theta, \tau \rangle &= \left\langle \frac{1}{2}|u|^2 - \frac{\partial \epsilon}{\partial \rho}, \tau \right\rangle, && \forall \tau \in F_h, \label{thetah_mhd} \\
\langle J, K \rangle &= \left\langle B, \curl K \right\rangle, && \forall K \in U_h^{\curl}, \\
\langle H, G \rangle &= \langle B, G \rangle, && \forall G \in U_h^{\curl}, \label{Bproj_mhd} \\
\langle U, V \rangle &= \langle u, V \rangle, && \forall V \in U_h^{\curl}, \label{uproj_mhd} \\
\langle E, F \rangle &= -\langle U \times H, F \rangle, && \forall F \in U_h^{\curl}, \label{Eh_mhd} \\
\langle \alpha, \beta \rangle &= -\langle J \times H, \beta \rangle, && \forall \beta \in U_h^{\curl}. \label{alphah_mhd}
\end{align}
In this setting, the energy identity~(\ref{energyh}) becomes
\[
\frac{d}{dt} \int_\Omega \Big[\frac{1}{2} \rho |u|^2 + \epsilon(\rho) + \frac{1}{2} |B|^2 \Big]\, {\rm d}x = d(u,u) + e_h(B,B).
\]
\end{remark}
\medskip
\section{Temporal discretization}
In this section, we design a temporal discretization of~(\ref{velocityh_mhd}-\ref{alphah_mhd}) for Lagrangians of the form
\[
\ell(u,\rho,B) = \int_\Omega \Big[\frac{1}{2}\rho|u|^2 - \epsilon(\rho) - \frac{1}{2}|B|^2\Big] \, {\rm d}x.
\]
Our temporal discretization will retain all of the structure-preserving properties of our spatial discretization: energy balance, magnetic helicity balance, total mass conservation, and $\dv B = 0$.
We adopt the following notation. For a fixed time step $\Delta t > 0$, we denote $t_k = k\Delta t$. The value of the approximate solution $u \in U_h^{\grad}$ at time $t_k$ is denoted $u_k$, and likewise for $\rho$ and $B$. The auxiliary variables $\theta \in F_h$ and $J,H,U,E,\alpha \in U_h^{\curl}$ will play a role in our calculations, but we do not index them with a subscript $k$. We write $u_{k+1/2} = \frac{u_k+u_{k+1}}{2}$, $\rho_{k+1/2} = \frac{\rho_k+\rho_{k+1}}{2}$, $B_{k+1/2} = \frac{B_k+B_{k+1}}{2}$, and $(\rho u)_{k+1/2} = \frac{\rho_k u_k+\rho_{k+1} u_{k+1}}{2}$. We will also make use of the bivariate function
\[
\delta(x,y) = \frac{\epsilon(y)-\epsilon(x)}{y-x}.
\]
Given $u_k, \rho_k, B_k$, our method steps from time $t_k$ to $t_{k+1}$ by solving
\begin{align}
\left\langle \frac{\rho_{k+1} u_{k+1} - \rho_k u_k}{\Delta t}, v \right\rangle + a\left((\rho u)_{k+1/2}, u_{k+1/2}, v\right) &&\nonumber\\+\, b_h(\theta,\rho_{k+1/2},v) + \langle \alpha, v \rangle &= d(u_{k+1/2},v), && \forall v \in U_h^{\grad}, \label{velocityh_dt} \\
\left\langle \frac{\rho_{k+1}-\rho_k}{\Delta t}, \sigma \right\rangle + b_h(\sigma,\rho_{k+1/2},u_{k+1/2}) &= 0, && \forall \sigma \in F_h, \label{densityh_dt} \\
\left\langle \frac{B_{k+1}-B_k}{\Delta t}, C \right\rangle + \langle \curl E, C \rangle &= -\nu \langle \curl J, C \rangle, && \forall C \in U_h^{\dv}, \label{magnetich_dt}
\end{align}
for $u_{k+1},\rho_{k+1},B_{k+1}$. Here, $\theta$, $\alpha$, $E$, and $J$ (as well as $H$ and $U$) are defined by
\begin{align}
\langle \theta, \tau \rangle &= \left\langle \frac{1}{2}u_k \cdot u_{k+1} - \delta(\rho_k,\rho_{k+1}), \tau \right\rangle, && \forall \tau \in F_h, \label{thetah_dt} \\
\langle J, K \rangle &= \left\langle B_{k+1/2}, \curl K \right\rangle, && \forall K \in U_h^{\curl}, \label{Jh_dt} \\
\langle H, G \rangle &= \langle B_{k+1/2}, G \rangle, && \forall G \in U_h^{\curl}, \label{Bproj_dt} \\
\langle U, V \rangle &= \langle u_{k+1/2}, V \rangle, && \forall V \in U_h^{\curl}, \label{uproj_dt} \\
\langle E, F \rangle &= -\langle U \times H, F \rangle, && \forall F \in U_h^{\curl}, \label{Eh_dt} \\
\langle \alpha, \beta \rangle &= -\langle J \times H, \beta \rangle, && \forall \beta \in U_h^{\curl}. \label{alphah_dt}
\end{align}
Notice that we used the midpoint rule everywhere above except in the definition of $\theta$, where we used
\[
\frac{1}{2}u_k \cdot u_{k+1} - \delta(\rho_k,\rho_{k+1})
\]
instead of
\[
\frac{1}{2} |u_{k+1/2}|^2 - \left.\frac{\partial\epsilon}{\partial\rho}\right|_{\rho=\rho_{k+1/2}}
\]
to discretize $\frac{1}{2} |u|^2 -\frac{\partial\epsilon}{\partial\rho}$. This will allow us to take advantage of the identity
\begin{equation} \label{rhousquared}
\begin{split}
&\frac{1}{\Delta t} \int_\Omega \Big[ \frac{1}{2}\rho_{k+1} |u_{k+1}|^2 + \epsilon(\rho_{k+1}) - \frac{1}{2} \rho_k |u_k|^2 - \epsilon(\rho_k)\Big] \, {\rm d}x \\
&= \left\langle \frac{\rho_{k+1} u_{k+1} - \rho_k u_k}{ \Delta t }, \frac{u_k+u_{k+1}}{2} \right\rangle - \left\langle \frac{\rho_{k+1}-\rho_k}{\Delta t}, \frac{1}{2} u_k \cdot u_{k+1} - \delta(\rho_k,\rho_{k+1}) \right\rangle
\end{split}
\end{equation}
when we prove energy conservation below.
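The identity~(\ref{rhousquared}) is purely algebraic and holds pointwise. The following Python snippet (ours, not part of the scheme) spot-checks it for scalar samples, using the barotropic law $\epsilon(\rho)=\rho^{5/3}$ that appears later in Section~\ref{sec:numerical}:

```python
# Pointwise check of the discrete "product rule" identity above (this snippet
# is ours, not part of the paper; eps is the sample law eps(rho) = rho^(5/3)).
def eps(rho):
    return rho ** (5.0 / 3.0)

def delta(x, y):
    # divided difference of the internal energy, as defined in the text
    return (eps(y) - eps(x)) / (y - x)

rho0, rho1 = 1.3, 2.1   # rho_k, rho_{k+1}  (arbitrary positive samples)
u0, u1 = 0.7, -0.4      # u_k, u_{k+1}      (arbitrary samples)
dt = 0.005

lhs = (0.5 * rho1 * u1**2 + eps(rho1) - 0.5 * rho0 * u0**2 - eps(rho0)) / dt
rhs = ((rho1 * u1 - rho0 * u0) / dt) * 0.5 * (u0 + u1) \
    - ((rho1 - rho0) / dt) * (0.5 * u0 * u1 - delta(rho0, rho1))

assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```

The cancellation is exact in exact arithmetic, so only floating-point rounding remains.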
\begin{proposition} \label{prop:invariantsh_dt}
If $B_0$ is exactly divergence-free, then the solution to~(\ref{velocityh_dt}-\ref{alphah_dt}) satisfies
\begin{align}
\dv B_k &\equiv 0, \label{divBh_dt} \\
\int_\Omega \rho_{k+1} \, {\rm d}x &= \int_\Omega \rho_k \, {\rm d}x, \label{massh_dt} \\
\frac{\mathcal{E}_{k+1}-\mathcal{E}_k}{\Delta t} &= d(u_{k+1/2},u_{k+1/2}) + e_h(B_{k+1/2}, B_{k+1/2}), \label{energyh_dt} \\
\frac{1}{\Delta t} \left( \int_\Omega A_{k+1} \cdot B_{k+1} \, {\rm d}x - \int_\Omega A_k \cdot B_k \, {\rm d}x \right) &= 2e_h(B_{k+1/2},\pi_h^{\dv} A_{k+1/2}), \label{magnetichelicityh_dt}
\end{align}
for all $k$. Here, $\mathcal{E}_k = \langle \frac{\delta\ell}{\delta u_k}, u_k \rangle - \ell(u_k,\rho_k,B_k)$ denotes the total energy of the system, and $A_k$ denotes any vector field satisfying $\nabla \times A_k = B_k$ and $\left.A_k \times n \right|_{\partial\Omega} = 0$.
\end{proposition}
\begin{proof}
The magnetic field equation~(\ref{magnetich_dt}) implies that the relation
\[
\frac{B_{k+1}-B_k}{\Delta t} + \curl E = -\nu \curl J
\]
holds pointwise, so taking the divergence of both sides proves~(\ref{divBh_dt}). Conservation of total mass (i.e.~(\ref{massh_dt})) is proved by taking $\sigma \equiv 1$ in the density equation~(\ref{densityh_dt}).
To prove~(\ref{energyh_dt}-\ref{magnetichelicityh_dt}), we introduce some notation. Let $D_{\Delta t} (\rho u) = \frac{\rho_{k+1} u_{k+1} - \rho_k u_k}{\Delta t}$, $D_{\Delta t} B = \frac{B_{k+1}-B_k}{\Delta t}$, etc. To reduce notational clutter, we will suppress the subscript $k+1/2$ on quantities evaluated at $t_{k+1/2}$. Thus, we abbreviate $u_{k+1/2}$, $B_{k+1/2}$, $\rho_{k+1/2}$, and $(\rho u)_{k+1/2}$ as $u$, $B$, $\rho$, and $\rho u$, respectively. Using Lemma~\ref{lemma:rectify}, equations~(\ref{velocityh_dt}-\ref{alphah_dt}) can be rewritten in the form
\begin{align}
\left\langle D_{\Delta t} (\rho u), v \right\rangle + a\left(\rho u, u, v\right) + b_h\left( \theta,\rho,v \right) - c_h\left(B,B,v\right)&= d(u,v), && \forall v \in U_h^{\grad}, \label{velocity_res_NS_h_dt} \\
\langle D_{\Delta t} \rho, \sigma \rangle + b_h(\sigma,\rho,u) &=0, && \forall \sigma \in F_h,\label{density_res_NS_h_dt} \\
\langle D_{\Delta t} B, C \rangle + c_h(C,B,u) &= e_h(B,C), && \forall C \in U_h^{\dv}, \label{magnetic_res_NS_h_dt}
\end{align}
where
\[
\theta = \pi_h \left( \frac{1}{2}u_k \cdot u_{k+1} - \delta(\rho_k,\rho_{k+1}) \right).
\]
Taking $v=u$, $\sigma=-\theta$, and $C=B$ in~(\ref{velocity_res_NS_h_dt}-\ref{magnetic_res_NS_h_dt}) and adding the three equations gives
\[
\langle D_{\Delta t} (\rho u), u \rangle - \langle D_{\Delta t} \rho, \theta \rangle + \langle D_{\Delta t} B, B \rangle = d(u,u) + e_h(B,B).
\]
Written in full detail, this reads
\[\begin{split}
\left\langle \frac{\rho_{k+1} u_{k+1} - \rho_k u_k}{ \Delta t }, \frac{u_k+u_{k+1}}{2} \right\rangle - \left\langle \frac{\rho_{k+1}-\rho_k}{\Delta t}, \pi_h \left( \frac{1}{2} u_k \cdot u_{k+1} - \delta(\rho_k,\rho_{k+1}) \right) \right\rangle \\
+ \left\langle \frac{B_{k+1}-B_k}{\Delta t}, \frac{B_k+B_{k+1}}{2} \right\rangle = d(u_{k+1/2},u_{k+1/2}) + e_h(B_{k+1/2},B_{k+1/2}).
\end{split}\]
Since $\frac{\rho_{k+1}-\rho_k}{\Delta t} \in F_h$, we can remove $\pi_h$ from the second term above and use the identity~(\ref{rhousquared}) to rewrite the equation above as
\[\begin{split}
&\frac{1}{\Delta t} \int_\Omega \left( \frac{1}{2}\rho_{k+1} |u_{k+1}|^2 + \epsilon(\rho_{k+1}) - \frac{1}{2} \rho_k |u_k|^2 - \epsilon(\rho_k) \right) \, {\rm d}x \\ &+ \frac{1}{\Delta t} \int_\Omega \left( \frac{1}{2}|B_{k+1}|^2 - \frac{1}{2}|B_k|^2 \right) \, {\rm d}x = d(u_{k+1/2},u_{k+1/2}) + e_h(B_{k+1/2},B_{k+1/2}).
\end{split}\]
This proves~(\ref{energyh_dt}).
To prove~(\ref{magnetichelicityh_dt}), we revert to our abbreviated notation and compute
\begin{align*}
\frac{1}{\Delta t} \left( \langle A_{k+1}, B_{k+1} \rangle - \langle A_k, B_k \rangle \right)
&= \langle D_{\Delta t} A, B \rangle + \langle A, D_{\Delta t} B \rangle \\
&= \langle D_{\Delta t} A, \curl A \rangle + \langle A, D_{\Delta t} B \rangle \\
&= \left\langle \curl D_{\Delta t} A, A \right\rangle + \left\langle A, D_{\Delta t} B \right\rangle\\
&= 2 \left\langle D_{\Delta t} B, A \right\rangle \\
&= 2 \left\langle D_{\Delta t} B, \pi_h^{\dv} A \right\rangle \\
&= - 2 c_h(\pi_h^{\dv} A,B,u)+ 2e_h(B,\pi_h^{\dv} A) \\
&= - 2 c_h(A,B,u)+ 2e_h(B,\pi_h^{\dv} A).
\end{align*}
The first term vanishes by Lemma~\ref{lemma:ch_curl}, yielding~(\ref{magnetichelicityh_dt}).
\end{proof}
\subsection{Enhancements and Extensions}
Below we discuss several enhancements and extensions of the numerical method~(\ref{velocityh_dt}-\ref{alphah_dt}).
\paragraph{Two dimensions.}
The two-dimensional setting can be treated exactly as above, except one must distinguish between vector fields in the plane ($u$, $B$, $H$, $U$, and $\alpha$) and those orthogonal to it ($J$ and $E$). We thus identify $J$ and $E$ (as well as the test functions $K$ and $F$ appearing in~(\ref{Jh_dt}) and~(\ref{Eh_dt})) with scalar fields, and we use the continuous Galerkin space
\[
\{u \in H^1_0(\Omega) \mid \left. u \right|_K \in P_{r+1}(K), \, \forall K \in \mathcal{T}_h \}
\]
to discretize them.
\paragraph{Upwinding.} To help reduce artificial oscillations in discretizations of scalar advection laws like~(\ref{densityh_dt}), it is customary to incorporate upwinding. As discussed in~\cite{GaGB2021}, this can be accomplished without interfering with any balance laws by introducing a $u$-dependent trilinear form
\[
\widetilde{b}_h(u; f,g,v) = b_h(f,g,v) + \sum_{e \in \mathcal{E}_h} \int_e \beta_e(u) \left( \frac{v \cdot n}{u \cdot n} \right) \llbracket f \rrbracket \cdot \llbracket g \rrbracket \, \mathrm{d}s,
\]
where $\{\beta_e(u)\}_{e \in \mathcal{E}_h}$ are nonnegative scalars. One then replaces every appearance of $b_h(\cdot,\cdot,\cdot)$ in~(\ref{velocityh_dt}-\ref{densityh_dt}) by $\widetilde{b}_h(u_{k+1/2};\,\cdot,\cdot,\cdot)$. It is not hard to see that this enhancement has no effect on the balance laws~(\ref{divBh_dt}-\ref{magnetichelicityh_dt}). That is, Proposition~\ref{prop:invariantsh_dt} continues to hold. We used this upwinding strategy with \[
\beta_e(u) = \frac{1}{\pi} (u \cdot n) \arctan\left( \frac{u \cdot n}{0.01} \right) \approx \frac{1}{2}|u \cdot n|
\]
in all of the numerical experiments that appear in Section~\ref{sec:numerical}.
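A quick check (ours, not from the paper) confirms that this smoothed coefficient is nonnegative and stays within a few percent of $\frac{1}{2}|u\cdot n|$ away from $u\cdot n = 0$:

```python
# Verify (numerically) that beta_e is a nonnegative smooth stand-in for
# 0.5*|u.n|; x below plays the role of u.n.
import math

def beta(x):
    return (1.0 / math.pi) * x * math.atan(x / 0.01)

assert all(beta(x) >= 0.0 for x in [-2.0, -0.5, 0.0, 0.5, 2.0])
for x in [0.5, 1.0, 2.0]:
    # relative deviation from 0.5*|x| is below 2% at these magnitudes
    assert abs(beta(x) - 0.5 * abs(x)) < 0.02 * 0.5 * abs(x)
```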
\paragraph{Zero viscosity.}
When the fluid viscosity coefficients $\mu$ and $\lambda$ vanish, the boundary condition $\left. u \right|_{\partial\Omega} = 0$ changes to $\left. u \cdot n \right|_{\partial\Omega} = 0$, and the term $d(u,v)$ on the right-hand side of~(\ref{velocity_res_NS}) vanishes.
To handle this setting, we modify the scheme~(\ref{velocityh_dt}-\ref{alphah_dt}) as follows. We use the space $U_h^{\dv}$ instead of $U_h^{\grad}$ to discretize $u$ (as well as the test function $v$ appearing on the right-hand side of~(\ref{velocityh_dt})), and we replace the term $a((\rho u)_{k+1/2}, u_{k+1/2}, v)$ by
\[
a_h(u_{k+1/2}; (\rho u)_{k+1/2}, u_{k+1/2}, v),
\]
where $a_h(U;\cdot,\cdot,\cdot)$ denotes the $U$-dependent trilinear form
\[\begin{split}
a_h(U; w,u,v) = \sum_{K \in \mathcal{T}_h} \int_K w \cdot (v \cdot \nabla u - u \cdot \nabla v) \, {\rm d}x + \sum_{e \in \mathcal{E}_h} \int_e n \times (\{w\}+\alpha_e(U)\llbracket w \rrbracket) \cdot \llbracket u \times v \rrbracket \, {\rm d}s.
\end{split}\]
Here, $\llbracket w\rrbracket$ and $\{w\}$ denote the jump and average, respectively, of $w$ across the edge $e$, and $\{\alpha_e(U)\}_{e \in \mathcal{E}_h}$ are nonnegative scalars. We took $\alpha_e(U)=\beta_e(U)/(U \cdot n)$, which is a way of incorporating upwinding in the momentum advection; see~\cite{GaGB2020}. Note that $a_h(U;w,u,v)$ reduces to $a(w,u,v)$ when $u,v \in U_h^{\grad}$ since the term $\llbracket u \times v \rrbracket$ vanishes. Here also, Proposition~\ref{prop:invariantsh_dt} continues to hold (with $b_h$ or $\widetilde{b}_h$), where we now have $d=0$ in \eqref{energyh_dt}. If additionally $\nu=0$, i.e. resistivity is absent, then we have $e_h=0$ in~(\ref{energyh_dt}-\ref{magnetichelicityh_dt}) as well.
\paragraph{Entropy and gravitational forces.}
The extension of our scheme to full compressible ideal MHD subject to gravitational and Coriolis forces is straightforward. Here we describe the incorporation of the entropy density $s$ and the gravitational potential $\phi$, omitting Coriolis forces for simplicity. The Lagrangian~(\ref{baroclinic_ell}) is thus
\[
\ell(u, \rho ,s , B)= \int_ \Omega \Big[\frac{1}{2} \rho |u|^2 - \epsilon ( \rho ,s ) - \rho \phi - \frac{1}{2} |B| ^2\Big] {\rm d} x,
\]
and $s$ is treated as an advected parameter: $\partial_t s + \dv (su) = 0$.
In the discrete setting, this leads to the following modifications of our basic scheme. We introduce an additional unknown $s_k \in F_h$, the discrete entropy density, which is advected according to
\begin{align}
\left\langle \frac{s_{k+1}-s_k}{\Delta t}, \sigma \right\rangle + \widetilde{b}_h(u_{k+1/2};\sigma,s_{k+1/2},u_{k+1/2}) &= 0, && \forall \sigma \in F_h. \label{entropyh_dt_baro}
\end{align}
In place of~(\ref{thetah_dt}), we define two auxiliary variables $\theta_1,\theta_2 \in F_h$ by
\begin{align}
\langle \theta_1, \tau \rangle &= \left\langle \frac{1}{2}u_k \cdot u_{k+1} - \phi - \frac{\delta_1(\rho_k,\rho_{k+1},s_k) + \delta_1(\rho_k,\rho_{k+1},s_{k+1})}{2}, \tau \right\rangle, && \forall \tau \in F_h, \label{thetah_dt_baro1} \\
\langle \theta_2, \tau \rangle &= \left\langle - \frac{\delta_2(s_k,s_{k+1},\rho_k) + \delta_2(s_k,s_{k+1},\rho_{k+1})}{2}, \tau \right\rangle, && \forall \tau \in F_h, \label{thetah_dt_baro2}
\end{align}
where
\begin{align*}
\delta_1(\rho,\rho',s) &= \frac{\epsilon(\rho',s)-\epsilon(\rho,s)}{\rho'-\rho}, \\
\delta_2(s,s',\rho) &= \frac{\epsilon(\rho,s')-\epsilon(\rho,s)}{s'-s}.
\end{align*}
Then we replace the term
\[
b_h(\theta,\rho_{k+1/2},v)
\]
in~(\ref{velocityh_dt}) by
\[
\widetilde{b}_h(u_{k+1/2};\theta_1,\rho_{k+1/2},v) + \widetilde{b}_h(u_{k+1/2};\theta_2,s_{k+1/2},v).
\]
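As a sanity check (ours, not in the paper), the symmetrized divided differences $\delta_1,\delta_2$ reproduce the exact jump of $\epsilon(\rho,s)$ — the discrete chain rule one expects the energy balance to rest on. The snippet below uses the sample law $K e^{s/(C_v\rho)}\rho^\gamma$ with $K=C_v=1$, $\gamma=5/3$:

```python
# Check the discrete chain rule: the symmetrized divided differences
# reconstruct eps(rho1, s1) - eps(rho0, s0) exactly.
import math

def eps(rho, s):
    return math.exp(s / rho) * rho ** (5.0 / 3.0)

def d1(r0, r1, s):
    return (eps(r1, s) - eps(r0, s)) / (r1 - r0)

def d2(s0, s1, r):
    return (eps(r, s1) - eps(r, s0)) / (s1 - s0)

r0, r1, s0, s1 = 1.2, 1.9, 0.3, 0.8
jump = eps(r1, s1) - eps(r0, s0)
recon = (r1 - r0) * 0.5 * (d1(r0, r1, s0) + d1(r0, r1, s1)) \
      + (s1 - s0) * 0.5 * (d2(s0, s1, r0) + d2(s0, s1, r1))

assert abs(jump - recon) < 1e-12 * max(1.0, abs(jump))
```

The reconstruction telescopes exactly, independent of the choice of $\epsilon$.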
The resulting scheme satisfies all of the balance laws~(\ref{divBh_dt}-\ref{magnetichelicityh_dt}), this time with
\begin{align*}
\mathcal{E}_k
&= \left\langle \frac{\delta\ell}{\delta u_k}, u_k \right\rangle - \ell(u_k,\rho_k,s_k,B_k) \\
&= \int_\Omega \Big[ \frac{1}{2}\rho_k |u_k|^2 + \frac{1}{2}|B_k|^2 + \epsilon(\rho_k,s_k) + \rho_k \phi \Big] \, {\rm d}x.
\end{align*}
Note that this scheme is especially relevant in the absence of viscosity and resistivity, i.e. $d(u,v)=e_h(B,C)=0$, since the entropy density $s$ is treated as an advected parameter above. Nevertheless, we have found it advantageous in some of our numerical experiments to continue to include the terms $d(u,v)$ and $e_h(B,C)$ to promote stability.
\section{Numerical examples} \label{sec:numerical}
To illustrate the structure-preserving properties of our numerical method, we solved the compressible barotropic MHD equations~(\ref{velocity0}-\ref{IC}) with $\mu=\nu=\lambda=0$ and $\epsilon(\rho) = \rho^{5/3}$ on a three-dimensional domain $\Omega = [-1,1]^3$ with initial conditions
\begin{align}
u(x,y,z,0) &= \left( \sin(\pi x)\cos(\pi y)\cos(\pi z), \, \cos(\pi x)\sin(\pi y)\cos(\pi z), \, \cos(\pi x)\cos(\pi y)\sin(\pi z) \right), \label{u03d} \\
B(x,y,z,0) &= \curl \left( (1-x^2)(1-y^2)(1-z^2) v \right), \label{B03d} \\
\rho(x,y,z,0) &= 2 + \sin(\pi x)\sin(\pi y)\sin(\pi z), \label{rho03d}
\end{align}
where $v = \frac{1}{2}(\sin \pi x, \sin \pi y, \sin \pi z)$. We used a uniform triangulation $\mathcal{T}_h$ of $\Omega$ with maximum element diameter $h = \sqrt{3}/2$, and we used the finite element spaces specified in Section~\ref{sec:spatial} of order $r=s=0$. We used a time step $\Delta t = 0.005$. Figure~\ref{fig:invariants3d} shows that the scheme preserves energy, magnetic helicity, mass, and $\dv B = 0$ to machine precision, while cross helicity drifts slightly.
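Since the initial field in~(\ref{B03d}) is defined as a curl, it is exactly divergence-free, as required by Proposition~\ref{prop:invariantsh_dt}. The following spot check (ours) verifies this numerically; with nested central differences the discrete div and curl commute, so only rounding error remains:

```python
# Spot-check that B(.,0) = curl(phi * v) is divergence-free at a sample point.
import math

H = 1e-3  # central-difference step

def vfield(p):
    x, y, z = p
    phi = (1 - x*x) * (1 - y*y) * (1 - z*z)
    return [0.5 * phi * math.sin(math.pi * x),
            0.5 * phi * math.sin(math.pi * y),
            0.5 * phi * math.sin(math.pi * z)]

def d(f, p, i):
    # central difference of scalar-valued f in direction i
    q, r = list(p), list(p)
    q[i] += H
    r[i] -= H
    return (f(q) - f(r)) / (2 * H)

def B(p):
    return [d(lambda q: vfield(q)[2], p, 1) - d(lambda q: vfield(q)[1], p, 2),
            d(lambda q: vfield(q)[0], p, 2) - d(lambda q: vfield(q)[2], p, 0),
            d(lambda q: vfield(q)[1], p, 0) - d(lambda q: vfield(q)[0], p, 1)]

div_B = sum(d(lambda q, i=i: B(q)[i], (0.3, -0.2, 0.5), i) for i in range(3))
assert abs(div_B) < 1e-8
```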
\begin{figure}
\centering
\hspace{-0.5in}
\begin{tikzpicture}
\begin{groupplot}[
group style={
group name=my plots,
group size=1 by 1,
xlabels at=edge bottom,
ylabels at=edge left,
horizontal sep=2cm,vertical sep=2cm},
ymode=log,xlabel=$t$,ylabel=$|F(t)-F(0)|$,
ylabel style={yshift=-0.1cm},
ymin = 10^-20,
ymax = 1,
legend style={at={(1.0,0.0)},
anchor=south,
column sep=1ex,
text=black},
legend columns=6
]
\nextgroupplot[legend to name=testLegend3d,title={$$}]
\addplot[blue,thick] table [x expr=\coordindex*0.005, y expr=abs(\thisrowno{0})]{Data/3d_invariants.dat};
\addplot[red,thick] table [x expr=\coordindex*0.005, y expr=abs(\thisrowno{1})]{Data/3d_invariants.dat};
\addplot[black,thick] table [x expr=\coordindex*0.005, y expr=abs(\thisrowno{2})]{Data/3d_invariants.dat};
\addplot[blue,dashed,thick] table [x expr=\coordindex*0.005, y expr=abs(\thisrowno{3})]{Data/3d_invariants.dat};
\addplot[red,dashed,thick] table [x expr=\coordindex*0.005, y expr=abs(\thisrowno{4})]{Data/3d_invariants.dat};
\legend{$\int \rho \, {\rm d}x$ \\ $\int[ \frac{1}{2} \rho |u|^2 + \frac{1}{2}|B|^2 + \epsilon(\rho) ]\, {\rm d}x$ \\$\int u \cdot B \, {\rm d}x$ \\ $\|\dv B\|_{L^2(\Omega)}$ \\ $\int A \cdot B \, {\rm d}x$\\}
\end{groupplot}
\end{tikzpicture}
\ref{testLegend3d}
\caption{Evolution of mass, energy, cross-helicity, $\|\dv B\|_{L^2(\Omega)}$, and magnetic helicity during a simulation in three dimensions. The absolute deviations $|F(t)-F(0)|$ are plotted for each such quantity $F(t)$.}
\label{fig:invariants3d}
\end{figure}
Next, we simulated a magnetic Rayleigh-Taylor instability on the domain $\Omega = [0,L] \times [0,4L]$ with $L=\frac{1}{4}$. We chose
\[
\epsilon(\rho,s) = K e^{s/(C_v \rho)} \rho^\gamma
\]
with $C_v=K=1$ and $\gamma=\frac{5}{3}$, and we set $\mu=\nu=\lambda=0.01$ and $\phi=-y$, which corresponds to an upward gravitational force.
As initial conditions, we took
\begin{align*}
\rho(x,y,0) &= 1.5-0.5\tanh\left( \frac{y-0.5}{0.02} \right), \\
u(x,y,0) &= \left( 0, -0.025\sqrt{\frac{\gamma p(x,y)}{\rho(x,y,0)}} \cos(8\pi x) \exp\left( -\frac{(y-0.5)^2}{0.09}\right) \right), \\
s(x,y,0) &= C_v \rho(x,y,0) \log\left( \frac{p(x,y)}{(\gamma-1)K\rho(x,y,0)^\gamma} \right), \\
B(x,y,0) &= (B_0,0),
\end{align*}
where
\[
p(x,y) = 1.5y + 1.25 + (0.25-0.5y) \tanh\left( \frac{y-0.5}{0.02} \right).
\]
This system is known to exhibit instability when $B_0 < B_c = \sqrt{(\rho_h - \rho_l) g L}$, where $\rho_h = 2$, $\rho_l = 1$, $g = 1$, and $L = 1/4$~\cite{Ch1961}.
We imposed boundary conditions $u = (B-(B_0,0)) \cdot n = \curl B \times n = 0$ on $\partial\Omega$. We triangulated $\Omega$ with a uniform triangulation $\mathcal{T}_h$ having maximum element diameter $h=2^{-7}$, and we used the finite element spaces specified in Section~\ref{sec:spatial} of order $r=s=0$. We ran simulations from $t=0$ to $t=5$ using a time step $\Delta t = 0.005$. Plots of the computed mass density for various choices of $B_0$ are shown in Figures~\ref{fig:B0p2}-\ref{fig:B0p8}. The figures indicate that the scheme correctly predicts instability for $B_0<B_c=0.5$ (Figures~\ref{fig:B0p2}-\ref{fig:B0p4}) and stability for $B_0>B_c=0.5$ (Figures~\ref{fig:B0p6}-\ref{fig:B0p8}).
The evolution of energy, cross-helicity, mass, and $\|\dv B\|_{L^2(\Omega)}$ during these simulations is plotted in Figure~\ref{fig:invariants}. (Magnetic helicity is not plotted since it is trivially preserved in two dimensions if we take $A$ to be orthogonal to the plane.) As predicted by Proposition~\ref{prop:invariantsh_dt}, Figure~\ref{fig:invariants} shows that energy decayed monotonically, while mass and $\dv B = 0$ were preserved to machine precision. Interestingly, the cross-helicity drifted by less than $2.5 \times 10^{-4}$ in these experiments, even though the scheme is not designed to preserve it.
\begin{figure}
\centering
\plotsnapshots{Figures/B0p2}
\caption{Contours of the mass density at $t=0,1,2,3,4,5$ when $B_0=0.2$.}
\label{fig:B0p2}
\end{figure}
\begin{figure}
\centering
\plotsnapshots{Figures/B0p4}
\caption{Contours of the mass density at $t=0,1,2,3,4,5$ when $B_0=0.4$.}
\label{fig:B0p4}
\end{figure}
\begin{figure}
\centering
\plotsnapshots{Figures/B0p6}
\caption{Contours of the mass density at $t=0,1,2,3,4,5$ when $B_0=0.6$.}
\label{fig:B0p6}
\end{figure}
\begin{figure}
\centering
\plotsnapshots{Figures/B0p8}
\caption{Contours of the mass density at $t=0,1,2,3,4,5$ when $B_0=0.8$.}
\label{fig:B0p8}
\end{figure}
\begin{figure}
\centering
\hspace{-0.5in}
\begin{tikzpicture}
\begin{groupplot}[
group style={
group name=my plots,
group size=2 by 2,
xlabels at=edge bottom,
ylabels at=edge left,
horizontal sep=2cm,vertical sep=2cm},
ymode=log,xlabel=$t$,ylabel=$|F(t)-F(0)|$,
ylabel style={yshift=-0.1cm},
ymin = 10^-20,
ymax = 1,
legend style={at={(1.0,0.0)},
anchor=south,
column sep=1ex,
text=black},
legend columns=6
]
\nextgroupplot[legend to name=testLegend,title={$B_0=0.2$}]
\plotinvariants{Data/B0p2_invariants.dat}
\legend{$\int \rho \, {\rm d}x$ \\ $\int [\frac{1}{2} \rho |u|^2 + \frac{1}{2}|B|^2 + \epsilon(\rho, s) + \rho\phi ]\, {\rm d}x$ \\$\int u \cdot B \, {\rm d}x$ \\ $\|\dv B\|_{L^2(\Omega)}$ \\}
\nextgroupplot[title={$B_0=0.4$}]
\plotinvariants{Data/B0p4_invariants.dat}
\nextgroupplot[title={$B_0=0.6$}]
\plotinvariants{Data/B0p6_invariants.dat}
\nextgroupplot[title={$B_0=0.8$}]
\plotinvariants{Data/B0p8_invariants.dat}
\end{groupplot}
\end{tikzpicture}
\ref{testLegend}
\caption{Evolution of mass, energy, cross-helicity, and $\|\dv B\|_{L^2(\Omega)}$ during simulations of the magnetic Rayleigh-Taylor instability with $B_0=0.2,0.4,0.6,0.8$. The absolute deviations $|F(t)-F(0)|$ are plotted for each such quantity $F(t)$.}
\label{fig:invariants}
\end{figure}
\appendix
\section{Lagrange-d'Alembert formulation of resistive MHD}\label{LdA}
In this appendix we explain how viscosity and resistivity can be included in the Lagrangian variational formulation by using the Lagrange-d'Alembert principle for forced systems. While viscosity can be quite easily included in the variational formulation by adding the corresponding virtual force term to the Euler-Poincar\'e principle, resistivity breaks the transport equation \eqref{B_frozen}, hence the previous Euler-Poincar\'e approach for $B$ must be appropriately modified.
We first observe that in absence of viscosity and resistivity, equations \eqref{EP_equations} can also be obtained from a variational formulation in which the variations $ \delta B$ are unconstrained. It suffices to consider, instead of \eqref{HP_MHD}, the variational principle
\begin{equation}\label{HP_modified}
\delta \int_0^TL( \varphi , \partial _t \varphi , \varrho _0, \mathcal{B} ) - \left\langle \partial _t \mathcal{B} , \mathcal{C} \right\rangle {\rm d} t=0,
\end{equation}
with respect to arbitrary variations $ \delta \varphi $, $ \delta \mathcal{B} $, $ \delta \mathcal{C}$ with $ \delta \varphi $ and $ \delta \mathcal{B} $ vanishing at $t=0,T$. The second term in the action functional imposes $ \partial _t \mathcal{B} =0$, i.e., $ \mathcal{B} (t) =\mathcal{B} _0$. In Eulerian form, we get
\begin{equation}\label{EP_MHD_force_ideal}
\delta \int_0^T\ell(u, \rho , B) - \left\langle \partial _t B - \operatorname{curl} (u \times B), C \right\rangle {\rm d}t =0
\end{equation}
with constrained variations $\delta u = \partial _t v+ \pounds _uv$, $\delta \rho = - \operatorname{div}( \rho v)$ and free variations $ \delta B$, $ \delta C$ with $ v, \delta B$ vanishing at $t=0,T$. In \eqref{EP_MHD_force_ideal}, the magnetic field equation appears as a constraint with Lagrange multiplier $C$. The variational principle \eqref{EP_MHD_force_ideal} yields the three equations
\begin{align}
\partial _t \left( \frac{\delta \ell}{\delta u} + B \times \operatorname{curl} C\right) + \pounds _u \left( \frac{\delta \ell}{\delta u} +B \times \operatorname{curl} C \right) & = \rho \nabla \frac{\delta \ell}{\delta \rho } \label{momentum_eq_2}\\
\partial_t B - \operatorname{curl} (u \times B) &= 0 \label{B_eq_2}\\
\partial _t C + \operatorname{curl} C \times u + \frac{\delta \ell}{\delta B} &=0\label{C_eq_2}
\end{align}
which correspond to the variations associated to $v$, $ \delta B$, and $\delta C$, respectively.
Using the formula
\[
\pounds _u ( B \times \operatorname{curl}C ) = B \times \operatorname{curl} ( \operatorname{curl}C \times u) + \operatorname{curl} (B \times u) \times \operatorname{curl}C
\]
and \eqref{B_eq_2}--\eqref{C_eq_2} in equation \eqref{momentum_eq_2} yields \eqref{EP_equations}.
\medskip
Using the Lagrange-d'Alembert approach, the variational principle \eqref{HP_modified} can be modified as
\begin{equation}\label{HP_modified_force}
\delta \int_0^TL( \varphi , \partial _t \varphi , \varrho _0, \mathcal{B} ) - \left\langle \partial _t \mathcal{B} , \mathcal{C} \right\rangle {\rm d} t + \int_0^T D(\varphi , \partial _t \varphi , \delta \varphi )+ E(\varphi , \mathcal{B} , \delta \mathcal{C}) {\rm d}t=0,
\end{equation}
for some expressions $D$ and $E$, bilinear in their last two arguments and invariant under the right action of $ \operatorname{Diff}( \Omega )$. In the Eulerian form, one gets
\begin{equation}\label{EP_MHD_force}
\delta \int_0^T\ell(u, \rho , B) - \left\langle \partial _t B - \operatorname{curl} (u \times B), C \right\rangle {\rm d}t + \int_0^T d( u,v) + e(B, \delta C + \operatorname{curl} C \times v ){\rm d}t=0,
\end{equation}
with $d$ and $e$ given by the expressions of $D$ and $E$ evaluated at $ \varphi = \operatorname{id}$. To model viscosity and resistivity we choose \eqref{bilinear_form} and change the boundary condition of velocity to $u|_{ \partial \Omega }=0$. This boundary condition corresponds in the Lagrangian description to the choice of the subgroup $ \operatorname{Diff}_0( \Omega ) $ of diffeomorphisms fixing the boundary pointwise. Application of \eqref{EP_MHD_force} yields the viscous and resistive barotropic MHD equations in the form
\begin{align}
\left\langle \partial_t \frac{\delta \ell}{\delta u} , v \right\rangle + a\left(\frac{\delta \ell}{\delta u}, u, v\right) + b\left( \frac{\delta \ell}{\delta \rho},\rho,v \right) + c\left(\frac{\delta \ell}{\delta B},B,v\right)&= d(u,v) \label{velocity_res_NS2} \\
\langle \partial_t \rho, \sigma \rangle + b(\sigma,\rho,u) &=0\label{density_res_NS2} \\
\phantom{\int}\langle \partial_t B, C \rangle + c(C,B,u) &= e(B,C).\label{magnetic_res_NS2}
\end{align} | {"config": "arxiv", "file": "2105.00785/GaGB_Compressible_MHD_submitted.tex"} |
TITLE: Some properties of capacity
QUESTION [2 upvotes]: Let $\Omega\subset\mathbb{R}^N$. For compact $K\subset \Omega$ we can define the $p$-capacity, $p\in (1,\infty)$ as the number $$\operatorname{cap}_p(K)=\inf \int_\Omega |\nabla u|^p$$
where the infimum is taken over all $C_0^\infty(\Omega)$ with $u\ge 1$ in $K$. If $U\subset\Omega$ is open, set $$\operatorname{cap}_p(U)=\sup_{K\subset U}\operatorname{cap}_p (K)$$
where $K$ is compact.
My question is: how to find a open set $U\subset \Omega$ with $\overline{U}\subset \Omega$ such that $$\operatorname {cap}_p(U)\ne\operatorname {cap}_p(\overline {U})$$
I am aware of some equivalent definitions of capacity, for example, $u$ can be chosen in $W_0^{1,p}(\Omega)$ and $u\ge 1$ a.e. in $K$, or $u=1$ in $K$ or $u=1$ in a neighbourhood of $K$, so feel free to choose any of them.
I think I do not understand quite well what capacity measures, so I was unable to find such an example. I was thinking of taking some $U$ whose boundary has positive measure; however, this does not seem to change the value of the capacity, since the derivative of $u$ will be zero on the boundary of $U$.
REPLY [3 votes]: In the (most interesting) case $p < n$, you can argue as follows. For arbitrary $x \in \Omega$, you have $\mathrm{cap}_p(U_r(x)) \to 0$ as $r \to 0$, where $U_r(x)$ is the open ball with center $x$ and radius $r$.
Fix a closed set $K \subset \Omega$ and set $k = \mathrm{cap}_p(K)$.
Now, fix a countable set $M = \{x_1, x_2, \ldots\} \subset \Omega$ with $K \subset \overline M$. For each $n \in \mathbb{N}$, choose $r_n$, such that $\mathrm{cap}_p(U_{r_n}(x_n)) \le k \, 2^{-n-1}$. Set $U = \bigcup_{n=1}^\infty U_{r_n}(x_n)$.
By subadditivity of the capacity, we get $\mathrm{cap}_p(U) \le k/2$ and, since $K \subset \overline U$, we have $\mathrm{cap}_p(\overline U) \ge k$.
If you drop the requirement of $U$ being open, you can simply set $U = M$. Then $\mathrm{cap}_p(U) = 0$, whereas $\mathrm{cap}_p(\overline U) \ge k$.
TITLE: Determining Spot Rates
QUESTION [1 upvotes]: A three-year, 4%, par-value bond with annual coupons sells for $990$, a two-year, $1000$, 3% bond with annual coupons sells for $988$, and a one-year, zero-coupon, $1000$ bond sells for $974$. Determine the spot rates $r_1$, $r_2$ and $r_3$.
This comes from Mathematical Interest Theory textbook section 8.3 #2. I understand how to compute similar problems, however I am unsure how to solve this given that the bonds have annual coupons(not zero coupon bonds). Any help would be appreciated thank you!
REPLY [0 votes]: For the one year bond we have:
$$
974=\frac{1000}{1+r_1}\qquad\Longrightarrow\qquad r_1=\frac{1000}{974}-1\approx 2.66940\%
$$
For the two years bond we have the coupon $3\%\times 1000=30$ and
$$
988=\frac{30}{1+r_1}+\frac{1030}{(1+r_2)^2}
$$
Observing that $\frac{1}{1+r_1}=\frac{974}{1000}=0.974$ we have
$$
988=\underbrace{30\times 0.974}_{29.22}+\frac{1030}{(1+r_2)^2}\qquad\Longrightarrow\qquad r_2=\left(\frac{1030}{958.78}\right)^{1/2}-1\approx 3.64757\%
$$
Can you now find $r_3$?
$990=\underbrace{\frac{40}{1+r_1}}_{40\times 0.974}+\underbrace{\frac{40}{(1+r_2)^2}}_{40\times\frac{958.78}{1030}}+\frac{1040}{(1+r_3)^3} \qquad\Longrightarrow\qquad r_3\approx 4.4062\%$
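For what it's worth, here is a short Python check of the whole bootstrap (my snippet, not part of the original answer; the bond data are as in the question):

```python
# Bootstrap the spot rates from the three bond prices and check the answer.
r1 = 1000 / 974 - 1                               # one-year zero-coupon bond
r2 = (1030 / (988 - 30 / (1 + r1))) ** 0.5 - 1    # two-year 3% coupon bond
pv_coupons = 40 / (1 + r1) + 40 / (1 + r2) ** 2   # first two coupons of 4% bond
r3 = (1040 / (990 - pv_coupons)) ** (1 / 3) - 1   # three-year 4% par bond

assert abs(r1 - 0.0266940) < 1e-6
assert abs(r2 - 0.0364757) < 1e-5
assert abs(r3 - 0.0440620) < 1e-5
```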
TITLE: Are two lines in the exact same position one line?
QUESTION [2 upvotes]: I was just now reading the answers to the question "Is a line parallel with itself?" and that lead me to the question are two lines in exactly the same position in fact just one line?
One can think of two lines that happen to be in the same place. Taking two parallel lines, keeping one fixed, moving the other to it, then have it change sides. Are there always two lines? Or do we have an instant when there is only one?
REPLY [0 votes]: I think so, yes. If two lines are in the exact same position then certainly they have the same set of points. The same line can be parameterised in two different ways, but as Ryan G pointed out in the comments, the lines represented by the two different parameterisations are in fact the same line.
So in a non-parameterised (usual) context, $y = 2x+1$ and $4x-2y=-2$ are considered to be the "same line" because they represent the same set of points.
With parameterisation: $(x_1(t),y_1(t)): x_1(t) = t$ and $y_1(t) = t $ corresponds to the same set of points as $(x_2(t),y_2(t)): x_2(t) = 2t$ and $y_2(t) = 2t $, despite the value of $(x_1, y_1)$ being different to the value of $(x_2, y_2)$ for a given value of $t$. This means that, despite the two lines having different paths, the two lines are still considered to be the same.
TITLE: Relation between slant edge and height of a pyramid with an equlateral triangle as its base
QUESTION [0 upvotes]: Consider a pyramid with an equilateral triangle as its base. Suppose
each side of the base is $a$ and each slant edge is $s$. How to find
the height, $h$.
if the base is a square, we can use the following approach.
Suppose the base is a square with each side $a$ and slant edge $s$.
Then we can easily find the diagonal($d$) of the base from sides. then
there is a right angled triangle where $(d/2)^2+h^2=s^2$
So, there might be a similar approach but i am not able to find how this concept can be applied when the base is an equilateral triangle.
We may be able to use the same concept.
First find the height of the base, $h_b$, using the formula
$\dfrac{\sqrt{3}a}{2}$.
Then compute the distance from a base vertex to the centroid, $c = \dfrac{2h_b}{3}$.
Now use $c^2+h^2=s^2$. Is this a correct solution?
Please help.
REPLY [1 votes]: You want to drop a line from the vertex of this pyramid down to the base. Where does it hit the base? If the pyramid is regular, it is going to come smack down in the center of the equilateral triangle. Draw the equilateral triangle. And draw the medians of the triangle. Where do they intersect? That is your center.
As it turns out, it is 2/3 of the way from the vertex to the base along any of the medians.
That distance from the center to a base vertex, together with the height $h$ and the slant edge $s$, forms a right triangle; now use the Pythagorean theorem.
$s^2 = (\frac{2}{3} a \frac{\sqrt 3}{2})^2 +h^2$
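As a numerical sanity check of this formula, the short script below (with arbitrary sample values $a=2$, $s=3$ of my own choosing) compares the resulting height against explicit 3D coordinates of the pyramid:

```python
import math

def pyramid_height(a, s):
    # distance from the centroid to a base vertex: 2/3 of the median,
    # i.e. (2/3) * (sqrt(3)/2) * a = a / sqrt(3)
    c = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * a
    return math.sqrt(s * s - c * c)  # from c^2 + h^2 = s^2

# cross-check against explicit coordinates: base vertices on a circle of
# radius a/sqrt(3) around the origin, apex at height h above the centroid
a, s = 2.0, 3.0
h = pyramid_height(a, s)
for angle in (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0):
    x = a / math.sqrt(3.0) * math.cos(angle)
    y = a / math.sqrt(3.0) * math.sin(angle)
    slant_edge = math.sqrt(x * x + y * y + h * h)
    assert abs(slant_edge - s) < 1e-9
```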
TITLE: Let $m$ be arbitrary positive rational number. Show that the sequence $(mn)_{n=1}^{\infty}$ is not $\epsilon$-steady for any $\epsilon>0$
QUESTION [1 upvotes]: The proposition I would like to prove:
Proposition 1. For arbitrary positive rational $m$, the sequence $(nm)_{n=1}^{\infty}$ is not $\epsilon$-steady for any positive rational $\epsilon$.
Couple of notes:
I will use notation $d(x,y)$ to refer to the distance between rational numbers $x,y$, i.e., $d(x,y) = |x-y|$.
The definition of $\epsilon$-steadiness presented in the book: For some $\epsilon > 0$, the sequence $(a_{n})_{n=m}^{\infty}$ is $\epsilon$-steady if and only if for all natural numbers $j,k$ larger or equal to $m$, $d(a_{j},a_k) ≤ \epsilon$
My attempt:
We will first prove the following lemma:
Lemma 1. For arbitrary positive rational $m$, the sequence $(nm)_{n=1}^{\infty}$ is not $\epsilon$-steady for any positive natural number $\epsilon$.
We will use induction. For the base case, consider $\epsilon := 1$ and suppose that the sequence is $\epsilon$-steady. If $m ≥ 1$, then $d(3m,m) = 2m > 1$, a contradiction. Suppose $m < 1$. Since $m$ is non-zero, there must be some natural number $k$ such that $\frac{1}{m} < k$, and so we have $1 < mk$. But then $d(m(k+1),m)= mk > 1$, a contradiction. So for any positive rational $m$, the sequence cannot be $1$-steady. Now inductively suppose that for some positive natural number $\epsilon$, the sequence is not $\epsilon$-steady. We want to show that the sequence cannot be $(\epsilon+1)$-steady. Since the sequence is not $\epsilon$-steady, it follows that the sequence must contain some elements $a_j,a_k$ (without loss of generality, let's suppose that $a_j > a_k$), such that $|a_j - a_k| = a_j - a_k > \epsilon$. Since $a_j,a_k$ are multiples of $m$, the rationals $3a_{j},3a_{k}$ will also be multiples of $m$, and thus $3a_{j},3a_{k}$ are also elements of the sequence $(nm)_{n=1}^{\infty}$. Furthermore, we have $d(3a_j,3a_k) = |3a_j - 3a_k| = 3a_j - 3a_k > 3\epsilon$, and since $\epsilon ≥ 1$, $3\epsilon > \epsilon + 1$, which implies that $d(3a_j,3a_k) > \epsilon + 1$, but that means that the sequence is not $(\epsilon+1)$-steady. $\Box$
Now back to the proposition 1:
Take arbitrary positive rationals $m,\epsilon$ and suppose that the sequence $(nm)_{n=1}^{\infty}$ is $\epsilon$-steady. Since $\epsilon$ is rational, we know that there must exist some natural number $k$ such that $\epsilon < k$. By lemma 1, we know that the sequence is not $k$-steady, implying that there are some elements $a_i, a_j$ such that $d(a_i,a_j) > k$. But then clearly, $d(a_i,a_j) > \epsilon$, a contradiction. $\Box$
Question 1.
Is the proof correct?
Question 2.
Are there other alternatives to proving the proposition?
REPLY [0 votes]: Possibly an easier approach: If $(a_n)_{n=m}^\infty$ is $\epsilon$-steady, then $(a_n)_{n=m}^\infty$ is bounded. That implies $(a_n)_{n=1}^\infty$ is bounded. But in your situation, $(nm)_{n=1}^\infty$ is unbounded.
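To make the unboundedness concrete: for any positive rational $m$ and any proposed bound $\epsilon$, the sketch below (the sample values of $m$ and $\epsilon$ are my own) finds two elements of $(nm)$ more than $\epsilon$ apart, so no $\epsilon$ can work:

```python
from fractions import Fraction

def witness_not_steady(m, eps):
    """Return indices (j, k) with d(j*m, k*m) = |j - k| * m > eps,
    witnessing that the sequence (n*m), n >= 1, is not eps-steady."""
    k, j = 1, 2
    while (j - k) * m <= eps:  # terminates because m > 0
        j += 1
    return j, k

m, eps = Fraction(3, 7), Fraction(1000)
j, k = witness_not_steady(m, eps)
assert abs(j * m - k * m) > eps
```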
\begin{document}
\begin{center}
{\large \bf Modules cofinite with respect to ideals of small dimensions}
\vspace{0.5cm} Xiaoyan Yang and Jingwen Shen\\
Department of Mathematics, Northwest Normal University, Lanzhou 730070,
China
E-mails: yangxy@nwnu.edu.cn and 3284541957@qq.com
\end{center}
\bigskip
\centerline { \bf Abstract}
\leftskip10truemm \rightskip10truemm \noindent Let $\mathfrak{a}$ be an ideal of a noetherian (not necessarily local) ring $R$ and $M$ an $R$-module with $\mathrm{Supp}_RM\subseteq\mathrm{V}(\mathfrak{a})$. We show that if $\mathrm{dim}_RM\leq2$, then $M$ is $\mathfrak{a}$-cofinite if and only if $\mathrm{Ext}^i_R(R/\mathfrak{a},M)$ are finitely generated for all $i\leq 2$, which generalizes one of the main results in [Algebr. Represent. Theory 18 (2015) 369--379]. Some new results concerning cofiniteness of local cohomology modules $\mathrm{H}^i_\mathfrak{a}(M)$ for any finitely generated $R$-module $M$ are obtained.\\
\vbox to 0.3cm{}\\
{\it Key Words:} cofinite module; local cohomology\\
{\it 2020 Mathematics Subject Classification:} 13D45; 13E05.
\leftskip0truemm \rightskip0truemm
\bigskip
\section* { \bf Introduction and Preliminaries}
Throughout this paper, $R$ is a commutative noetherian ring with identity and $\mathfrak{a}$ an ideal of $R$. For an $R$-module $M$, the $i$th local cohomology of $M$ with respect to $\mathfrak{a}$ is defined as
\begin{center}$\mathrm{H}^i_\mathfrak{a}(M)=\underrightarrow{\textrm{lim}}_{t>0}\mathrm{Ext}^i_R(R/\mathfrak{a}^t,M)$.\end{center}
The reader can refer to \cite{BS} or \cite{Gr} for more details about local cohomology.
It is a well-known result that if $(R,\mathfrak{m})$ is a local ring, then
the $R$-module $M$ is artinian if and only if $\mathrm{Supp}_RM\subseteq\{\mathfrak{m}\}$ and $\mathrm{Ext}^i_R(R/\mathfrak{m},M)$ are finitely generated for all $i\geq 0$ (see \cite[Proposition 1.1]{H}).
In 1968, Grothendieck \cite{G} conjectured that for any finitely generated $R$-module $M$, the $R$-modules $\mathrm{Hom}_R(R/\mathfrak{a},\mathrm{H}^i_\mathfrak{a}(M))$
are finitely generated for
all $i$. One year later, Hartshorne \cite{H} provided a counterexample to show that this conjecture
is false even when $R$ is regular, and then he defined an $R$-module $M$ to be $\mathfrak{a}$-cofinite
if $\mathrm{Supp}_RM\subseteq\mathrm{V}(\mathfrak{a})$ and $\mathrm{Ext}^i_R(R/\mathfrak{a},M)$ is finitely generated for all $i\geq 0$, and
asked:
\vspace{2mm} \noindent{\bf Question 1.}\label{Th1.4} {\it{Are the local cohomology modules $\mathrm{H}^i_\mathfrak{a}(M)$, $\mathfrak{a}$-cofinite for every finitely generated
$R$-module $M$ and every $i\geq 0$?}}
\vspace{2mm} \noindent{\bf Question 2.}\label{Th1.4} {\it{Is the category $\mathcal{M}(R,\mathfrak{a})_{cof}$ of $\mathfrak{a}$-cofinite $R$-modules an abelian subcategory of the
category $\mathcal{M}(R)$ of $R$-modules?}}
\vspace{2mm}
If the module $\mathrm{H}^i_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite, then the set $\mathrm{Ass}_R\mathrm{H}^i_\mathfrak{a}(M)$
of associated primes
and the Bass numbers $\mu^j_{R}(\mathfrak{p},\mathrm{H}^i_\mathfrak{a}(M))$
are finite for all $\mathfrak{p}\in\mathrm{Spec}R$ and $i,j\geq0$. This observation highlights the significance of Question 1.
In the following years, Question 1 was systematically
studied and refined in several stages by commutative algebra practitioners. For example, see \cite{BN,BNS1,DM,K}.
With respect to Question 2, Hartshorne showed by an example that this is not true
in general. However, he proved that if $R$ is a
complete regular local ring and $\mathfrak{a}$ a prime ideal with $\mathrm{dim}R/\mathfrak{a}=1$, then the answer to his question is yes. Delfino and Marley \cite{DM}
extended this result to arbitrary complete local rings. Kawasaki \cite{K} generalized Delfino and Marley's result to an arbitrary
ideal $\mathfrak{a}$ of dimension one in a local ring $R$ by using a spectral sequence argument. In 2014, Bahmanpour et al. \cite{BNS} proved that
Question 2 is true for the category of all $\mathfrak{a}$-cofinite $R$-modules $M$ with $\mathrm{dim}_RM\leq1$
for all ideals $\mathfrak{a}$ of $R$. The proof of this result is based
on \cite[Proposition 2.6]{BNS} which states that an $R$-module
$M$ with $\mathrm{dim}_RM\leq1$ and $\mathrm{Supp}_RM\subseteq\mathrm{V}(\mathfrak{a})$ is $\mathfrak{a}$-cofinite if and only if
$\mathrm{Hom}_R(R/\mathfrak{a},M)$ and $\mathrm{Ext}^1_R(R/\mathfrak{a},M)$ are finitely generated. Recently, Bahmanpour et al. \cite{BNS1} extended the above result to the
ideals $\mathfrak{a}$ of dimension two, i.e. $\mathrm{dim}R/\mathfrak{a}=2$, in a local ring $R$.
One of the main goals of the present paper is to generalize the results of Bahmanpour, Naghipour and Sedghi to not necessarily local rings. More precisely, we shall prove that:
\vspace{2mm} \noindent{\bf Theorem.}\label{Th1.4} {\it{Let $\mathfrak{a}$ be an ideal of $R$ with $\mathrm{dim}R/\mathfrak{a}\leq2$. Then an $\mathfrak{a}$-torsion $R$-module $M$ is $\mathfrak{a}$-cofinite if and only if $\mathrm{Ext}^i_R(R/\mathfrak{a},M)$ are finitely
generated for all $i\leq2$.}}
\vspace{2mm}
We also answer Question 1 completely in the case
$\mathrm{cd}(\mathfrak{a})\leq2$, and study
the cofiniteness of modules in $FD_{\leq n}$ for $n=0,1,2$.
Next we recall some notions which we will need later.
We write $\mathrm{Spec}R$ for the set of
prime ideals of $R$ and $\mathrm{Max}R$ for the set of
maximal ideals of $R$. For an ideal $\mathfrak{a}$ in $R$, we set
\begin{center}$\mathrm{V}(\mathfrak{a})=\{\mathfrak{p}\in\textrm{Spec}R\hspace{0.03cm}|\hspace{0.03cm}\mathfrak{a}\subseteq\mathfrak{p}\}$.
\end{center}
Let $M$ be an $R$-module. The set $\mathrm{Ass}_RM$ of associated primes of $M$ is
the set of prime ideals $\mathfrak{p}$ of $R$
for which there exists a cyclic submodule $N$ of $M$ with $\mathfrak{p}=\mathrm{Ann}_RN$, the annihilator of $N$. The support of an $R$-module $M$ is the set \begin{center}$\mathrm{Supp}_RM=\{\mathfrak{p}\in\mathrm{Spec}R\hspace{0.03cm}|\hspace{0.03cm}
M_\mathfrak{p}\neq0\}$.\end{center}
Recall that a prime ideal $\mathfrak{p}$ of $R$ is said to be an attached prime
of $M$ if $\mathfrak{p}=\mathrm{Ann}_RM/L$ for some submodule $L$ of $M$ (see \cite{MS}). The set of attached
primes of $M$ is denoted by $\mathrm{Att}_RM$. If $M$ is artinian, then $M$ admits a
minimal secondary representation $M=M_1+\cdots+M_r$ so that $M_i$
is $\mathfrak{p}_i$-secondary for $i=1,\cdots,r$. In this case, $\mathrm{Att}_RM=\{\mathfrak{p}_1,\cdots,\mathfrak{p}_r\}$.
The arithmetic
rank of the ideal $\mathfrak{a}$, denoted by
$\mathrm{ara}(\mathfrak{a})$, is the least number of elements of $R$ required to generate an ideal which has
the same radical as $\mathfrak{a}$, i.e.,
\begin{center}$\mathrm{ara}(\mathfrak{a})=\mathrm{min}\{n\geq0\hspace{0.03cm}|\hspace{0.03cm}\exists\ a_1,\cdots,a_n\in R\ \textrm{with}\ \mathrm{Rad}(a_1,\cdots,a_n)=\mathrm{Rad}(\mathfrak{a})\}$.\end{center} The arithmetic rank of $\mathfrak{a}$ in $R$ with respect
to an $R$-module $M$, denoted by $\mathrm{ara}_M(\mathfrak{a})$, is defined by the arithmetic rank of the ideal $\mathfrak{a}+
\mathrm{Ann}_RM/\mathrm{Ann}_RM$ in the ring $R/\mathrm{Ann}_RM$.
For each
$R$-module $M$, set
\begin{center}$\mathrm{cd}(\mathfrak{a},M)=\mathrm{sup}\{n\in\mathbb{Z}\hspace{0.03cm}|\hspace{0.03cm}\mathrm{H}_\mathfrak{a}^n(M)\neq0\}$.\end{center}
The cohomological dimension of the ideal $\mathfrak{a}$ is
\begin{center}$\mathrm{cd}(\mathfrak{a})=\mathrm{sup}\{\mathrm{cd}(\mathfrak{a},M)\hspace{0.03cm}|\hspace{0.03cm}M\ \textrm{is\ an}\ R\textrm{-module}\}$.\end{center}
\bigskip
\section{\bf Main results}
One of the main results of this work is to
examine the cofiniteness of modules with respect to ideals of dimension 2 in an arbitrary noetherian ring,
which is a generalization of \cite[Proposition 2.6]{BNS} and \cite[Theorem 3.5]{BNS1}.
\begin{thm}\label{lem:2.1}{\it{Let $M$ be an $\mathfrak{a}$-torsion $R$-module with $\mathrm{dim}_RM\leq2$. Then
$M$ is $\mathfrak{a}$-cofinite if and only if $\mathrm{Hom}_R(R/\mathfrak{a},M)$, $\mathrm{Ext}^1_R(R/\mathfrak{a},M)$ and $\mathrm{Ext}^2_R(R/\mathfrak{a},M)$ are finitely generated.}}
\end{thm}
\begin{proof} ``Only if'' part is obvious.
``If'' part. By \cite[Proposition 2.6]{BNS} we may assume $\mathrm{dim}_RM=2$ and let
$t=\mathrm{ara}_M(\mathfrak{a})$. If $t=0$, then $\mathfrak{a}^n\subseteq\mathrm{Ann}_RM$ for
some $n\geq0$ by definition, and so $M=(0:_M\mathfrak{a}^n)$ is finitely generated by \cite[Lemma 2.1]{ANS} and the assertion follows. Next, assume that $t>0$. Let\begin{center}
$T_M=\{\mathfrak{p}\in\mathrm{Supp}_RM\hspace{0.03cm}|\hspace{0.03cm}\mathrm{dim}R/\mathfrak{p}=2\}$.
\end{center}
As $\mathrm{Ass}_R\mathrm{Hom}_R(R/\mathfrak{a},M)=\mathrm{Ass}_RM$ is finite, the set $T_M$ is finite. Also, since for each $\mathfrak{p}\in T_M$ the $R_\mathfrak{p}$-module $\mathrm{Hom}_{R_\mathfrak{p}}(R_\mathfrak{p}/\mathfrak{a}R_\mathfrak{p},M_\mathfrak{p})$ is finitely generated and $M_\mathfrak{p}$ is an $\mathfrak{a}R_\mathfrak{p}$-torsion $R_\mathfrak{p}$-module with $\mathrm{Supp}_{R_\mathfrak{p}}M_\mathfrak{p}\subseteq\mathrm{V}(\mathfrak{p}R_\mathfrak{p})$, it follows from \cite[Proposition 4.1]{LM}
that $M_\mathfrak{p}$ is an artinian $\mathfrak{a}R_\mathfrak{p}$-torsion $R_\mathfrak{p}$-module. Let
$T_M=\{\mathfrak{p}_1,\cdots,\mathfrak{p}_n\}$.
It follows from \cite[Lemma 2.5]{BN} that $\mathrm{V}(\mathfrak{a}R_{\mathfrak{p}_j})\cap\mathrm{Att}_{R_{\mathfrak{p}_j}}M_{\mathfrak{p}_j}
\subseteq\mathrm{V}(\mathfrak{p}_jR_{\mathfrak{p}_j})$ for $j=1,\cdots,n$. Let \begin{center}
$\mathrm{U}_M=\bigcup_{j=1}^n\{\mathfrak{q}\in\mathrm{Spec}R\hspace{0.03cm}|\hspace{0.03cm}\mathfrak{q}R_{\mathfrak{p}_j}
\in\mathrm{Att}_{R_{\mathfrak{p}_j}}M_{\mathfrak{p}_j}\}$.
\end{center}Then $\mathrm{U}_M\cap\mathrm{V}(\mathfrak{a})\subseteq T_M$. Since $t=\mathrm{ara}_M(\mathfrak{a})\geq 1$, there exist $y_1,\cdots,y_t\in\mathfrak{a}$
such that\begin{center}
$\mathrm{Rad}(\mathfrak{a}+\mathrm{Ann}_RM/\mathrm{Ann}_RM)=\mathrm{Rad}((y_1,\cdots,y_t)+\mathrm{Ann}_RM/\mathrm{Ann}_RM)$.
\end{center}Since $\mathfrak{a}\nsubseteq\bigcup_{\mathfrak{q}\in\mathrm{U}_M\backslash\mathrm{V}(\mathfrak{a})}\mathfrak{q}$, it follows that $(y_1,\cdots,y_t)+\mathrm{Ann}_RM\nsubseteq\bigcup_{\mathfrak{q}\in\mathrm{U}_M\backslash\mathrm{V}(\mathfrak{a})}\mathfrak{q}$.
On the other hand, for each $\mathfrak{q}\in\mathrm{U}_M$ we have $\mathfrak{q}R_{\mathfrak{p}_j}
\in\mathrm{Att}_{R_{\mathfrak{p}_j}}M_{\mathfrak{p}_j}$ for some $1\leq j\leq n$. Thus
\begin{center}
$(\mathrm{Ann}_RM)R_{\mathfrak{p}_j}\subseteq\mathrm{Ann}_{R_{\mathfrak{p}_j}}M_{\mathfrak{p}_j}\subseteq \mathfrak{q}R_{\mathfrak{p}_j}$,
\end{center}and so $\mathrm{Ann}_RM\subseteq\mathfrak{q}$. Consequently, $(y_1,\cdots,y_t)\nsubseteq\bigcup_{\mathfrak{q}\in\mathrm{U}_M\backslash\mathrm{V}(\mathfrak{a})}\mathfrak{q}$ as $\mathrm{Ann}_RM\subseteq\bigcap_{\mathfrak{q}\in\mathrm{U}_M\backslash\mathrm{V}(\mathfrak{a})}\mathfrak{q}$. Hence \cite[Ex.16.8]{M}
provides an element $a_1\in(y_2,\cdots,y_t)$ such that $y_1+a_1\not\in\bigcup_{\mathfrak{q}\in\mathrm{U}_M\backslash\mathrm{V}(\mathfrak{a})}\mathfrak{q}$. Set $x_1=y_1+a_1$. Then $x_1\in\mathfrak{a}$ and
\begin{center}
$\mathrm{Rad}(\mathfrak{a}+\mathrm{Ann}_RM/\mathrm{Ann}_RM)=\mathrm{Rad}((x_1,y_2,\cdots,y_t)+\mathrm{Ann}_RM/\mathrm{Ann}_RM)$.
\end{center}Let $M_1=(0:_Mx_1)$. Then $\mathrm{ara}_{M_1}(\mathfrak{a})=
\mathrm{Rad}((y_2,\cdots,y_t)+\mathrm{Ann}_RM_1/\mathrm{Ann}_RM_1)\leq t-1$ as $x_1\in\mathrm{Ann}_RM_1$. Applying the same reasoning to $M_1$, there exists $x_2\in\mathfrak{a}$ such that $\mathrm{ara}_{M_2}(\mathfrak{a})=
\mathrm{Rad}((y_3,\cdots,y_t)+\mathrm{Ann}_RM_2/\mathrm{Ann}_RM_2)\leq t-2$.
Continuing this process, one obtains elements $x_1,\cdots,x_t\in\mathfrak{a}$ and the sequences
\begin{center}
$0\rightarrow M_{i}\rightarrow M_{i-1}\rightarrow x_iM_{i-1}\rightarrow0$,
\end{center}where $M_{0}=M$ and $M_{i}=(0:_{M_{i-1}}x_i)$ such that $\mathrm{ara}_{M_i}(\mathfrak{a})\leq t-i$ for $i=1,\cdots,t$, and these
exact sequences induce an exact sequence
\begin{center}
$0\rightarrow M_t\rightarrow M\rightarrow (x_1,\cdots,x_t)M\rightarrow0$.
\end{center}As $\mathrm{Hom}_R(R/\mathfrak{a},M_t)$ is finitely generated and $\mathrm{ara}_{M_t}(\mathfrak{a})=0$, one concludes that $M_t$ is $\mathfrak{a}$-cofinite. Moreover, the above sequence implies that
$\mathrm{Ext}^1_R(R/\mathfrak{a},(x_1,\cdots,x_t)M)$ and $\mathrm{Ext}^2_R(R/\mathfrak{a},(x_1,\cdots,x_t)M)$ are finitely generated. Also the exact sequence \begin{center}$0\rightarrow (x_1,\cdots,x_t)M\rightarrow M\rightarrow M/(x_1,\cdots,x_t)M\rightarrow0$ \end{center} yields that $\mathrm{Hom}_R(R/\mathfrak{a},M/(x_1,\cdots,x_t)M)$ and $\mathrm{Ext}^1_R(R/\mathfrak{a},M/(x_1,\cdots,x_t)M)$ are finitely generated. On the other hand, for $i=1,\cdots,t$, the exact sequence \begin{center}
$0\rightarrow x_iM_{i-1}\rightarrow (x_1,\cdots,x_i)M\rightarrow (x_1,\cdots,x_{i-1})M\rightarrow0$
\end{center}induces the following commutative diagram
\begin{center}$\xymatrix@C=23pt@R=20pt{
& 0\ar[d] & 0 \ar[d] & \\
0\ar[r]&x_iM_{i-1}\ar[r] \ar[d]& M_{i-1}\ar[d] \ar[r]& M_{i-1}/x_iM_{i-1}\ar[d]^\cong\ar[r]&0\\
0 \ar[r] &(x_1,\cdots,x_i)M \ar[d] \ar[r] &M \ar[d]\ar[r] &M/(x_1,\cdots,x_i)M \ar[r] & 0 \\
& (x_1,\cdots,x_{i-1})M \ar[d]\ar@{=}[r] & (x_1,\cdots,x_{i-1})M\ar[d] \\
& 0& \hspace{0.15cm}0. &}$
\end{center}Let $T_{M_{t-1}}=\{\mathfrak{p}\in\mathrm{Supp}_RM_{t-1}\hspace{0.03cm}|\hspace{0.03cm}\mathrm{dim}R/\mathfrak{p}=2\}
=\{\mathfrak{q}_{1},\cdots,\mathfrak{q}_{m}\}$. Since \begin{center}$\mathrm{Hom}_R(R/\mathfrak{a},M_{t-1}/x_tM_{t-1})\cong\mathrm{Hom}_R(R/\mathfrak{a},M/(x_1,\cdots,x_t)M)$\end{center} is finitely generated, it follows from \cite[Lemma 2.4]{BN} that $(M_{t-1}/x_tM_{t-1})_{\mathfrak{q}_{j}}$ has finite length for $j=1,\cdots,m$. Thus there is a finitely generated submodule $L_{j}$ of $M/(x_1,\cdots,x_t)M$ such that $(M/(x_1,\cdots,x_t)M)_{\mathfrak{q}_{j}}=(L_{j})_{\mathfrak{q}_{j}}$. Set $L=L_{1}+\cdots+L_{m}$. Then $L$ is a finitely generated submodule of $M/(x_1,\cdots,x_t)M$ so that $\mathrm{Supp}_R(M/(x_1,\cdots,x_t)M)/L\subseteq\mathrm{Supp}_RM_{t-1}\backslash\{\mathfrak{q}_{1},\cdots,\mathfrak{q}_{m}\}$ and hence $\mathrm{dim}_R(M/(x_1,\cdots,x_t)M)/L\leq1$. Now the exact sequence \begin{center}$0\rightarrow L\rightarrow M/(x_1,\cdots,x_t)M\rightarrow (M/(x_1,\cdots,x_t)M)/L\rightarrow0$\end{center}induces the following exact sequence
\begin{center}
$\mathrm{Hom}_{R}(R/\mathfrak{a},M/(x_1,\cdots,x_t)M)\rightarrow
\mathrm{Hom}_{R}(R/\mathfrak{a},(M/(x_1,\cdots,x_t)M)/L)\rightarrow\mathrm{Ext}^1_{R}(R/\mathfrak{a},L)\rightarrow
\mathrm{Ext}^1_{R}(R/\mathfrak{a},M/(x_1,\cdots,x_t)M)\rightarrow\mathrm{Ext}^1_{R}(R/\mathfrak{a},(M/(x_1,\cdots,x_t)M)/L)
\rightarrow\mathrm{Ext}^2_{R}(R/\mathfrak{a},L)$.\end{center} Hence $\mathrm{Hom}_{R}(R/\mathfrak{a},(M/(x_1,\cdots,x_t)M)/L)$ and $\mathrm{Ext}^1_{R}(R/\mathfrak{a},(M/(x_1,\cdots,x_t)M)/L)$ are finitely generated, and so $(M/(x_1,\cdots,x_t)M)/L$ is $\mathfrak{a}$-cofinite by \cite[Proposition 2.6]{BNS}. Consequently, $M/(x_1,\cdots,x_t)M$ is $\mathfrak{a}$-cofinite. As $M_t\cong\mathrm{Hom}_{R}(R/(x_1,\cdots,x_t)R,M)$ and $M/(x_1,\cdots,x_t)M$ are $\mathfrak{a}$-cofinite, it follows from \cite[Corollary 3.3]{LM} that $M$ is $\mathfrak{a}$-cofinite, as desired.
\end{proof}
\begin{cor}\label{lem:2.0}{\it{Let $\mathfrak{a}$ be a proper ideal of $R$ with $\mathrm{dim}R/\mathfrak{a}\leq2$ and $M$ an $R$-module such that $\mathrm{Supp}_RM\subseteq\mathrm{V}(\mathfrak{a})$. Then the
following are equivalent:
$(1)$ $M$ is $\mathfrak{a}$-cofinite;
$(2)$ $\mathrm{Hom}_R(R/\mathfrak{a},M)$, $\mathrm{Ext}^1_R(R/\mathfrak{a},M)$ and $\mathrm{Ext}^2_R(R/\mathfrak{a},M)$ are finitely generated.}}
\end{cor}
\begin{cor}\label{lem:2.2}{\it{Let $\mathfrak{a}$ be a proper ideal of $R$ with $\mathrm{dim}R/\mathfrak{a}=2$, and let $M$ be a finitely generated $R$-module. Then $\mathrm{H}^i_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite for every $i\in\mathbb{Z}$ if and only if $\mathrm{Hom}_R(R/\mathfrak{a},\mathrm{H}^i_\mathfrak{a}(M))$ is finitely
generated for every $i\in\mathbb{Z}$.}}
\end{cor}
\begin{proof} This follows from \cite[Theorem 2.9]{BA} and Theorem \ref{lem:2.1}.
\end{proof}
\begin{cor}\label{lem:2.3}{\it{Let $\mathfrak{a}$ be a proper ideal of $R$ such that $\mathrm{dim}R/\mathfrak{a}=2$, and let $M$ be a finitely generated $R$-module.
$(1)$ If $\mathrm{H}^{2i}_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite for every $i\geq0$, then $\mathrm{H}^i_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite for every $i\geq0$.
$(2)$ If $\mathrm{H}^{2i+1}_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite for every $i\geq0$, then $\mathrm{H}^i_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite for every $i\geq0$.}}
\end{cor}
We are now ready to answer Question 1 in the case $\mathrm{cd}(\mathfrak{a})=2$, which is a generalization of \cite[Theorem 7.4]{LM}.
\begin{lem}\label{lem:4.2}{\it{Let $\mathfrak{a}$ be a proper ideal of $R$ with $\mathrm{ara}(\mathfrak{a})\leq2$. Then $\mathrm{H}^i_\mathfrak{a}(M)$ are $\mathfrak{a}$-cofinite for all $i\in\mathbb{Z}$ and all finitely generated $R$-modules $M$.}}
\end{lem}
\begin{proof} Note that $\mathrm{H}^i_\mathfrak{a}(M)=0$ for all $i\geq \mathrm{ara}(\mathfrak{a})$. If $\mathrm{ara}(\mathfrak{a})=0$, then the result obviously holds. If $\mathrm{ara}(\mathfrak{a})=1$, then $\mathrm{H}^0_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite. Hence \cite[Proposition 3.11]{LM} implies that $\mathrm{H}^1_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite. Now suppose that $\mathrm{ara}(\mathfrak{a})=2$. Then $\mathrm{Rad}(\mathfrak{a})=\mathrm{Rad}(a_1,a_2)$ where $a_1,a_2\in R$. Set $\mathfrak{b}=(a_1)$ and $\mathfrak{c}=(a_2)$. By Mayer-Vietoris sequence, one has an exact sequence\begin{center}$\mathrm{H}^1_{\mathfrak{b}\cap\mathfrak{c}}(M)\rightarrow \mathrm{H}^2_\mathfrak{a}(M)\rightarrow\mathrm{H}^2_\mathfrak{b}(M)\oplus\mathrm{H}^2_\mathfrak{c}(M)=0$.\end{center}By the preceding proof, $\mathrm{H}^1_{\mathfrak{b}\cap\mathfrak{c}}(M)$ is $(\mathfrak{b}\cap\mathfrak{c})$-cofinite and so $\mathrm{Ext}^i_R(R/\mathfrak{a},\mathrm{H}^1_{\mathfrak{b}\cap\mathfrak{c}}(M))$ are finitely generated for all $i\geq 0$ by \cite[Proposition 7.2]{WW}. Consequently, the above exact sequence implies that $\mathrm{Ext}^i_R(R/\mathfrak{a},\mathrm{H}^2_{\mathfrak{a}}(M))$ are finitely generated for all $i\geq 0$. Hence $\mathrm{H}^i_\mathfrak{a}(M)$ are $\mathfrak{a}$-cofinite for all $i$ by \cite[Proposition 3.11]{LM} again.
\end{proof}
\begin{thm}\label{lem:4.4}{\it{Let $\mathfrak{a}$ be a proper ideal of $R$ with $\mathrm{cd}(\mathfrak{a})\leq2$. Then $\mathrm{H}^i_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite for all $i$ and all finitely generated $R$-modules $M$.}}
\end{thm}
\begin{proof} When $\mathrm{cd}(\mathfrak{a})=0,1$, there is nothing to prove. So suppose that $\mathrm{cd}(\mathfrak{a})=2$. We argue by induction on $d=\mathrm{ara}(\mathfrak{a})\geq2$. If $d=2$ then the result holds by Lemma \ref{lem:4.2}. Now suppose $d>2$ and the result has been proved for smaller values of
$d$. Let $a_1,\cdots,a_d\in R$ with $\mathrm{Rad}(\mathfrak{a})=\mathrm{Rad}(a_1,\cdots,a_d)$, and set $\mathfrak{b}=(a_1,\cdots,a_{d-1})$ and $\mathfrak{c}=(a_d)$. By Mayer-Vietoris sequence, one has the following exact sequence\begin{center}$\mathrm{H}^1_{\mathfrak{b}\cap\mathfrak{c}}(M)\rightarrow \mathrm{H}^2_\mathfrak{a}(M)\rightarrow\mathrm{H}^2_\mathfrak{b}(M)$.\end{center}By the induction, $\mathrm{H}^1_{\mathfrak{b}\cap\mathfrak{c}}(M)$ is $\mathfrak{b}\cap\mathfrak{c}$-cofinite and $\mathrm{H}^2_{\mathfrak{b}}(M)$ is $\mathfrak{b}$-cofinite, so $\mathrm{Ext}^i_R(R/\mathfrak{a},\mathrm{H}^1_{\mathfrak{b}\cap\mathfrak{c}}(M))$ and $\mathrm{Ext}^i_R(R/\mathfrak{a},\mathrm{H}^2_{\mathfrak{b}}(M))$ are finitely generated for all $i\geq 0$. Thus the above exact sequence implies that $\mathrm{Ext}^i_R(R/\mathfrak{a},\mathrm{H}^2_{\mathfrak{a}}(M))$ are finitely generated for all $i\geq 0$, and therefore $\mathrm{H}^i_\mathfrak{a}(M)$ are $\mathfrak{a}$-cofinite for all $i\in\mathbb{Z}$.
\end{proof}
\begin{cor}\label{lem:4.5}{\it{Let $R$ be a ring with $\mathrm{dim}R\leq2$. Then $\mathrm{H}^i_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite for all $i$ and all finitely generated $R$-modules $M$.}}
\end{cor}
\begin{proof} This follows from the fact that $\mathrm{cd}(\mathfrak{a})\leq\mathrm{dim}R$.
\end{proof}
Let $n\geq-1$ be an integer. Recall that an $R$-module $M$ is said to be $FD_{\leq n}$
if there is a finitely generated submodule $N$ of $M$ such that $\mathrm{dim}_RM/N\leq n$.
By definition, any finitely generated $R$-module and any $R$-module with dimension
at most $n$ are $FD_{\leq n}$.
Next, we
study the cofiniteness of modules in $FD_{\leq n}$ for $n=0,1,2$; these results generalize \cite[Corollary 2.6]{BNS1} and \cite[Theorem 2.6]{BN}.
\begin{prop}\label{lem:3.6}{\it{Let $n=0,1,2$, and let $M$ be an $R$-module in $FD_{\leq n}$ with $\mathrm{Supp}_RM\subseteq\mathrm{V}(\mathfrak{a})$. Then
$M$ is $\mathfrak{a}$-cofinite if and only if $\mathrm{Ext}^i_R(R/\mathfrak{a},M)$ are finitely generated for all $i\leq n$.}}
\end{prop}
\begin{proof} ``Only if'' part is obvious.
``If'' part. Let $N$ be a finitely generated submodule of $M$ such that $\mathrm{dim}_RM/N\leq n$. Then the exact sequence
\begin{center}
$\mathrm{Hom}_R(R/\mathfrak{a},M)\rightarrow\mathrm{Hom}_R(R/\mathfrak{a},M/N)
\rightarrow\mathrm{Ext}^1_R(R/\mathfrak{a},N)\rightarrow\mathrm{Ext}^1_R(R/\mathfrak{a},M)\rightarrow \mathrm{Ext}^1_R(R/\mathfrak{a},M/N)\rightarrow\mathrm{Ext}^2_R(R/\mathfrak{a},N)\rightarrow\mathrm{Ext}^2_R(R/\mathfrak{a},M)\rightarrow \mathrm{Ext}^2_R(R/\mathfrak{a},M/N)\rightarrow\mathrm{Ext}^3_R(R/\mathfrak{a},N)$
\end{center}implies that $\mathrm{Ext}^i_R(R/\mathfrak{a},M/N)$ are finitely generated for all $i\leq n$. Hence $M/N$ is $\mathfrak{a}$-cofinite by \cite[Proposition 4.1]{LM}, \cite[Proposition 2.6]{BNS} and Theorem \ref{lem:2.1} and then $M$ is $\mathfrak{a}$-cofinite.
\end{proof}
An $R$-module $M$ is minimax if there is a
finitely generated submodule $N$ of $M$ such that $M/N$ is artinian.
\begin{cor}\label{lem:3.6'}{\rm (\cite[Proposition 4.3]{LM}.)} {\it{Let $M$ be a minimax $R$-module with $\mathrm{Supp}_RM\subseteq\mathrm{V}(\mathfrak{a})$. Then
$M$ is $\mathfrak{a}$-cofinite if and only if $\mathrm{Hom}_R(R/\mathfrak{a},M)$ is finitely generated.}}
\end{cor}
An $R$-module $M$ is said to be weakly Laskerian if the set $\mathrm{Ass}_RM/N$
is finite for each submodule $N$ of $M$.
\begin{cor}\label{lem:3.7'} {\it{Let $M$ be a weakly Laskerian $R$-module with $\mathrm{Supp}_RM\subseteq\mathrm{V}(\mathfrak{a})$. Then
$M$ is $\mathfrak{a}$-cofinite if and only if $\mathrm{Hom}_R(R/\mathfrak{a},M)$ and $\mathrm{Ext}^1_R(R/\mathfrak{a},M)$ are finitely generated.}}
\end{cor}
\begin{proof} This follows since, by \cite[Theorem 3.3]{KB}, $M$ is $FD_{\leq 1}$.
\end{proof}
\begin{cor}\label{lem:3.7}{\it{Let $n=0,1$, and let $M$ be a non-zero finitely generated $R$-module and $t$ a non-negative integer such that $\mathrm{H}^i_\mathfrak{a}(M)$ are $FD_{\leq n}$ for all $i<t$. Then
$(1)$ $\mathrm{H}^i_\mathfrak{a}(M)$ are $\mathfrak{a}$-cofinite for all $i=0,\cdots,t-1$;
$(2)$ $\mathrm{Ext}^j_R(R/\mathfrak{a},\mathrm{H}^t_\mathfrak{a}(M))$ are finitely generated for all $j\leq n$.}}
\end{cor}
\begin{proof} We use induction on $t$.
If $t=1$ then $\mathrm{Ext}^j_R(R/\mathfrak{a},\mathrm{H}^0_\mathfrak{a}(M))$ are finitely generated for all $j\leq n$ by \cite[Theorem 2.9]{BA}. Hence $\mathrm{H}^0_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite by Proposition \ref{lem:3.6} and $\mathrm{Ext}^j_R(R/\mathfrak{a},\mathrm{H}^1_\mathfrak{a}(M))$ are finitely generated for all $j\leq n$ by \cite[Theorem 2.9]{BA}.
Now suppose that $t>1$ and that the case $t-1$ is settled.
The induction hypothesis implies that $\mathrm{H}^i_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite for $i<t-1$, and so $\mathrm{Ext}^j_R(R/\mathfrak{a},\mathrm{H}^{t-1}_\mathfrak{a}(M))$ are finitely generated for all $j\leq n$ by \cite[Theorem 2.9]{BA}. Consequently, $\mathrm{H}^{t-1}_\mathfrak{a}(M)$ is $\mathfrak{a}$-cofinite by Proposition \ref{lem:3.6} and $\mathrm{Ext}^j_R(R/\mathfrak{a},\mathrm{H}^t_\mathfrak{a}(M))$ are finitely generated for all $j\leq n$ by \cite[Theorem 2.9]{BA} again.
\end{proof}
\bigskip \centerline {\bf ACKNOWLEDGEMENTS} This research was partially supported by National Natural Science Foundation of China (11761060,11901463).
\bigskip
TITLE: Proving convergence of a sum give an inequality
QUESTION [1 upvotes]: Let $\alpha>0$ be a real number and let $N$ be an integer. Let's assume that $a_n$ is a positive sequence such that for every $n>N$:
$(n-1)a_n-na_{n+1}\ge\alpha{a_n}$
I want to prove that the sequence of partial sums $s_k=\sum_{n=1}^k{\alpha a_n}$ is bounded and from that to prove that $\sum_{n=1}^\infty{a_n}$ converges.
I've managed to prove that $a_n$ is monotone decreasing:
$(n-1)a_n-na_{n+1}\ge\alpha{a_n}$
$(n-1)a_n-\alpha a_n\ge na_{n+1}$
$na_n>a_n(n-1-\alpha)\ge na_{n+1}$
$a_n>a_{n+1}$
I thought of using the following lemma:
If $(na_n)_{n=1}^\infty$ is a monotone increasing sequence then $\sum_{n=1}^\infty{a_n}$ is divergent.
REPLY [2 votes]: Since you wrote $na_n>a_n(n-1-\alpha)$, I'm assuming the $a_i$ are $\geq 0$.
Summing the inequalities $(n-1)a_n-na_{n+1}\ge\alpha{a_n}$ from $n=N+1$ to $n=M$ yields $$Na_{N+1}\geq \alpha \sum_{k=N+1}^Ma_k +Ma_{M+1}\geq \alpha \sum_{k=N+1}^Ma_k $$
Since $Na_{N+1}\geq \alpha \sum_{k=N+1}^Ma_k$ holds for every $M\geq N+1$, $\sum_{k\geq N} a_k$ converges, and so does $\sum_{k\geq 2} a_k$.
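A quick numerical illustration of the telescoping step; the concrete choices $a_n = 1/n^2$, $\alpha = 1/2$, $N = 5$ below are my own (one can check the hypothesis holds for these values whenever $n > N$):

```python
alpha, N, M = 0.5, 5, 10_000
a = lambda n: 1.0 / (n * n)  # sample sequence satisfying the hypothesis

# the hypothesis (n-1)a_n - n a_{n+1} >= alpha * a_n holds for N < n <= M
assert all((n - 1) * a(n) - n * a(n + 1) >= alpha * a(n)
           for n in range(N + 1, M + 1))

# summing those inequalities telescopes on the left-hand side:
#   N*a_{N+1} - M*a_{M+1} >= alpha * sum_{k=N+1}^{M} a_k
partial = sum(a(k) for k in range(N + 1, M + 1))
assert N * a(N + 1) >= alpha * partial + M * a(M + 1)
```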
\begin{document}
\maketitle
\begin{abstract}
We introduce the notion of \emph{pattern} in the context of
lattice paths, and investigate it in the specific case of Dyck
paths. Similarly to the case of permutations, the
pattern-containment relation defines a poset structure on the set
of all Dyck paths, which we call the \emph{Dyck pattern poset}.
Given a Dyck path $P$, we determine a formula for the number of
Dyck paths covered by $P$, as well as for the number of Dyck paths
covering $P$. We then address some typical pattern-avoidance
issues, enumerating some classes of pattern-avoiding Dyck paths.
Finally, we offer a conjecture concerning the asymptotic behavior
of the sequence counting Dyck paths avoiding a generic pattern and
we pose a series of open problems regarding the structure of the
Dyck pattern poset.
\end{abstract}
\section{Introduction}
One of the most investigated and fruitful notions in contemporary
combinatorics is that of a \emph{pattern}. Historically it was
first considered for permutations \cite{Kn}, then analogous
definitions were provided in the context of many other structures,
such as set partitions \cite{Go,Kl,Sa}, words \cite{Bj,Bu}, and
trees \cite{DPTW,Gi,R}. Perhaps all of these examples have been
motivated or informed by the more classical notion of graphs and
subgraphs. Informally speaking, given a specific class of
combinatorial objects, a pattern can be thought of as an
occurrence of a small object inside a larger one; the word
``inside" means that the pattern is suitably embedded into the
larger object, depending on the specific combinatorial class of
objects. The main aim of the present work is to introduce the
notion of pattern in the context of lattice paths and to begin its
systematic study in the special case of Dyck paths.
\bigskip
For our purposes, a \emph{lattice path} is a path in the discrete
plane starting at the origin of a fixed Cartesian coordinate
system, ending somewhere on the $x$-axis, never going below the
$x$-axis and using only a prescribed set of steps $\Gamma$. We
will refer to such paths as \emph{$\Gamma$-paths}. This definition
is extremely restrictive if compared to what is called a lattice
path in the literature, but it will be enough for our purposes.
Observe that a $\Gamma$-path can be alternatively described as a
finite word on the alphabet $\Gamma$ obeying certain conditions.
Using this language, we say that the \emph{length} of a
$\Gamma$-path is simply the length of the word which encodes such
a path. Among the classical classes of lattice paths, the most common are those
using only steps $U(p)=(1,1)$, $D(own)=(1,-1)$ and $H(orizontal)=(1,0)$;
with these definitions, Dyck, Motzkin and Schr\"oder paths correspond
respectively to the set of steps $\{ U,D\}$, $\{ U,H,D\}$ and $\{
U,H^2 ,D\}$.
\bigskip
Consider the class $\mathcal{P}_\Gamma$ of all $\Gamma$-paths, for
some choice of the set of steps $\Gamma$. Given $P,Q\in
\mathcal{P}_\Gamma$ having length $k$ and $n$ respectively, we say
that \emph{$Q$ contains (an occurrence of) the pattern $P$}
whenever $P$ occurs as a subword of $Q$, that is, as a (not
necessarily contiguous) subsequence of the steps of $Q$. So, for instance, in the
class of Dyck paths, $UUDUDDUDUUDD$ contains the pattern $UUDDUD$,
whereas in the class of Motzkin paths, $UUHDUUDHDDUDHUD$ contains
the pattern $UHUDDHUD$. When $Q$ does not contain any occurrence
of $P$ we will say that $Q$ \emph{avoids} $P$. In the Dyck case,
the previously considered path $UUDUDDUDUUDD$ avoids the pattern
$UUUUDDDD$.
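Since containment is occurrence as a subword (a subsequence of steps), it can be checked greedily in linear time. The following Python sketch (an illustration, not part of the paper) verifies the three examples above:

```python
def contains(Q, P):
    """True if the path P occurs as a subword (i.e., a subsequence) of Q."""
    steps = iter(Q)
    # greedy matching: each step of P is searched for, left to right, in Q
    return all(step in steps for step in P)

# examples from the text
print(contains("UUDUDDUDUUDD", "UUDDUD"))       # True  (Dyck case)
print(contains("UUHDUUDHDDUDHUD", "UHUDDHUD"))  # True  (Motzkin case)
print(contains("UUDUDDUDUUDD", "UUUUDDDD"))     # False (the path avoids it)
```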
This notion of pattern gives rise to a partial order in a very
natural way, by declaring $P\leq Q$ when $P$ occurs as a pattern
in $Q$. In the case of Dyck paths, the resulting poset will be
denoted by $\mathcal{D}$. It is immediate to see that
$\mathcal{D}$ has a minimum (the empty path), does not have a
maximum, is locally finite and is ranked (the rank of a Dyck path
is given by its semilength). As an example, in Figure
\ref{interval} we provide the Hasse diagram of an interval in the
Dyck pattern poset.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.4]{interval}
\end{center}
\caption{An interval of rank 3 in the Dyck pattern poset.}
\label{interval}
\end{figure}
Observe that this notion of pattern for paths is very close to the
analogous notion for words (considered, for instance, in
\cite{Bj}, where the author determines the M\"obius function of
the associated pattern poset). Formally, instead of considering
the set of \emph{all} words of the alphabet $\{ U,D\}$, we
restrict ourselves to the set of Dyck words (so what we actually
do is to consider a subposet of Bj\"orner's poset). However, the
conditions a word has to obey in order to belong to this subposet
(which translate into the fact of being a Dyck word) make this
subposet highly nontrivial, and fully justify our approach,
consisting of the study of its properties independently of its
relationship with the full word pattern poset.
\section{The covering relation in the Dyck pattern poset}
In the Dyck pattern poset $\mathcal{D}$, following the usual
notation for covering relation, we write $P\prec Q$ ($Q$
\emph{covers} $P$) to indicate that $P\leq Q$ and the rank of $P$
is one less than the rank of $Q$ (i.e., $\mathrm{rank}(P)=\mathrm{rank}(Q)-1$). Our
first result concerns the enumeration of Dyck paths covered by a
given Dyck path $Q$. We need some notation before stating it. Let
$k+1$ be the number of points of $Q$ lying on the $x$-axis (call
such points $p_0,p_1,\ldots,p_k$). Then $Q$ can be factorized into
$k$ Dyck factors $F_1,\ldots,F_k$, each $F_i$ starting at
$p_{i-1}$ and ending at $p_i$. Let $n_i$ be the number of ascents
in $F_i$ (an ascent being a consecutive run of $U$ steps; $n_i$
also counts both the number of descents and the number of peaks in
$F_i$). Moreover, we denote by $|UDU|$ and $|DUD|$ the number of
occurrences in a Dyck path of a consecutive factor $UDU$ and
$DUD$, respectively. In the path $Q$ of Figure \ref{notation}, we
have $n_1=2$, $n_2=1$, $n_3=2$, $|UDU|=3$, and $|DUD|=2$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.5]{notation}
\end{center}
\caption{A Dyck path having three factors.} \label{notation}
\end{figure}
\begin{prop}
If $Q$ is a Dyck path with $k$ factors $F_1 ,\ldots F_k$, with
$F_i$ having $n_i$ ascents, then the number of Dyck paths covered
by $Q$ is given by
\begin{equation}\label{covered}
\frac{\sum_{i=1}^k{n_i}^2+(\sum_{i=1}^k
n_i)^2}{2}-|UDU|-|DUD|\quad .
\end{equation}
\end{prop}
\emph{Proof.} We proceed by induction on $k$. If $Q$ is any Dyck
path having only one factor (and so necessarily $n_1$ ascents),
then a path $P$ such that $P\prec Q$ is obtained by choosing (and
then removing) a $U$ step from an ascent of $Q$ and a $D$ step from
a descent of $Q$. Note that the resulting path $P$ depends only on
the ascent and the descent selected, and not on which $U$ step
(resp., $D$ step) is removed within them; hence this can be done in
$n_1^2$ different ways. Moreover, for each $UDU$ (resp., $DUD$) occurring in
$Q$, removing the $D$ step from the $UDU$ (resp., the $U$ step
from the $DUD$) and a $U$ (resp., $D$) step from the ascent
(resp., descent) either immediately before $D$ (resp., $U$) or
immediately after $D$ (resp., $U$) produces the same path $P$
covered by $Q$. Therefore, these paths would be counted twice if
the term $n_1^2$ were not corrected by subtracting both $|UDU|$
and $|DUD|$. This leads to formula (\ref{covered}) in the case
$k=1$.
Now suppose that $\tilde{Q}$ is a Dyck path which has $k>1$
factors $F_1,\ldots,F_k$, each factor $F_i$ having $n_i$ ascents.
Let $l$ be the total number of $UDU$ and $DUD$ (i.e.
$l=|UDU|+|DUD|$) in $\tilde{Q}$. If a new factor $F_{k+1}$ having
$n_{k+1}$ ascents and a total number $l_{k+1}$ of $UDU$ and $DUD$
factors is appended to $\tilde{Q}$ (after $F_k$), then the paths
covered by the new path $Q$ can be obtained by removing a $D$ step
and a $U$ step either both belonging to $\tilde{Q}$, or both
belonging to $F_{k+1}$, or one belonging to $\tilde{Q}$ and the
other one belonging to $F_{k+1}$.
We start by supposing that the two factors $F_k$ and $F_{k+1}$ are
both different from $UD$. In the first of the above cases, the
number of covered paths is given by formula (\ref{covered}) thanks
to our inductive hypothesis (since the removal of the steps $U$
and $D$ involves only the first $k$ factors of the Dyck path). The
second case is easily dealt with using the induction hypothesis as
well, namely applying the base case ($k=1$) to the last factor
$F_{k+1}$. Finally, concerning the last case, notice that the step
$D$ must be removed from $\tilde{Q}$, and the step $U$ must be
removed from $F_{k+1}$, otherwise the resulting path would fall
below the $x$-axis. Then, the $D$ step can be selected from
$\sum_{i=1}^kn_i$ different descents of $\tilde{Q}$, while the $U$
step can be chosen among the steps of the $n_{k+1}$ ascents of
$F_{k+1}$, leading to $n_{k+1}\cdot \sum_{i=1}^{k}n_i$ different
paths covered by $Q$. Summing the contributions of the three
cases considered above, we obtain:
\begin{eqnarray}\label{partial}
& &\frac{\sum_{i=1}^k{n_i}^2+(\sum_{i=1}^k n_i)^2}{2}-l+n_{k+1}^2-l_{k+1}+n_{k+1}\sum_{i=1}^kn_i \nonumber\\
&=&\frac{\sum_{i=1}^{k+1}{n_i}^2+(\sum_{i=1}^{k+1}
n_i)^2}{2}-l-l_{k+1}\quad.
\end{eqnarray}
However, we still have to take into account the cases in which
$F_k$ and/or $F_{k+1}$ are equal to $UD$. If $F_k =F_{k+1}=UD$,
then in formula (\ref{partial}) we have to subtract $2$ (since we
have one more factor $UDU$ and one more factor $DUD$ than those
previously counted). In the remaining cases, there is only one
more factor (either $UDU$ or $DUD$), thus in formula
(\ref{partial}) we have to subtract $1$. In all cases, what we get
is precisely formula (\ref{covered}).\cvd
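Formula (\ref{covered}) is easy to check by computer for small semilengths. The following Python sketch (illustrative only, not part of the paper) compares it with a brute-force count of the covered paths:

```python
from itertools import groupby

def dyck_paths(n):
    """All Dyck paths of semilength n, as strings over {'U','D'}."""
    def rec(path, u, d):
        if u == 0 and d == 0:
            yield path
            return
        if u > 0:
            yield from rec(path + 'U', u - 1, d)
        if d > u:  # a 'D' step is allowed only at positive height
            yield from rec(path + 'D', u, d - 1)
    yield from rec('', n, n)

def contains(Q, P):
    it = iter(Q)
    return all(s in it for s in P)

def factors(Q):
    """Split Q at its returns to the x-axis."""
    out, h, start = [], 0, 0
    for i, s in enumerate(Q):
        h += 1 if s == 'U' else -1
        if h == 0:
            out.append(Q[start:i + 1])
            start = i + 1
    return out

def covered_formula(Q):
    # n_i = number of ascents (maximal runs of U's) in the i-th factor
    ns = [sum(1 for key, _ in groupby(F) if key == 'U') for F in factors(Q)]
    udu = sum(Q[i:i + 3] == 'UDU' for i in range(len(Q)))
    dud = sum(Q[i:i + 3] == 'DUD' for i in range(len(Q)))
    return (sum(x * x for x in ns) + sum(ns) ** 2) // 2 - udu - dud

def covered_bruteforce(Q):
    m = len(Q) // 2 - 1
    return sum(contains(Q, P) for P in dyck_paths(m))

assert all(covered_formula(Q) == covered_bruteforce(Q)
           for n in range(1, 6) for Q in dyck_paths(n))
```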
In a similar fashion, we are also able to find a formula for the
number of all Dyck paths which cover a given path.
\begin{prop} If $Q$ is a Dyck path of semilength $n$ with $k$ factors $F_1 ,\ldots F_k$, with
$F_i$ having semilength $f_i$, then the number of Dyck paths
covering $Q$ is given by
\begin{equation}\label{covering}
1+\sum_{i}f_i ^2 +\sum_{i<j}f_i f_j .
\end{equation}
\end{prop}
\emph{Proof.}\quad A path $P$ covers $Q$ if and only if it is
obtained from $Q$ by suitably inserting an up step $U$ and a down
step $D$. Thus the set of all Dyck paths covering $Q$ can be
determined by choosing, in all possible ways, two positions
(inside $Q$) in which to insert an up step and a down step.
Clearly, in performing these insertions, we must take care not to
fall below the $x$-axis.
Let $P$ cover $Q$ and denote with $R$ the first occurrence (from
the left) of $Q$ in $P$. There are precisely two steps
in $P$ (a $U$ and a $D$) which do not belong to $R$. We
distinguish three distinct cases.
\begin{enumerate}
\item The last step of $R$ is the third-to-last step of $P$ (so that
$R$ is a prefix of $P$). This means that the two added steps
are the last two steps of $P$ (which therefore ends with $UD$),
and it is clear that there is precisely one path $P$ covering $Q$
which falls into this case.
\item The last step of $R$ is the second-to-last step of $P$. This
means that the $D$ step inserted into $R$ is the last step of $P$.
Thus $P$ is obtained by inserting a $U$ step somewhere in $R$
(except at the end, since that would bring us back to the
previous case). The number of paths $P$ of this form is then given
by the number of different places of $R$ in which we are allowed
to insert a new up step. Since $R$ is required to be the first
occurrence of $Q$ in $P$, it can be shown that a new up step can
be inserted immediately before each down step of $R$. There are
precisely $n$ Dyck paths of this form.
\item The last step of $R$ is the last step of $P$. In this case,
$P$ is obtained from $R$ by suitably inserting an up step and a
down step. We can consider two distinct cases. If $U$ and $D$ are
inserted into the same factor of $R$, then, since $R$ has to be the
first occurrence of $Q$ inside $P$, $U$ can be inserted immediately
before each $D$ step of the factor, whereas $D$ can be inserted
either immediately before each $U$ step (except for the very first
step of the factor) or at the end of the factor. There is however one factor that behaves
in a slightly different way. If we choose to insert the two new
steps into the last factor of $P$, then we cannot insert a $D$ at
the end of the factor (since we are supposing that the last step
of $R$ is also the last step of $P$). Thus, if we insert $U$ and
$D$ into the factor $F_i$, $i<k$, then we obtain $f_i ^2$
different paths $P$ of this form, whereas if we insert $U$ and $D$
into $F_k$ we get a total of $f_k (f_k -1)$ paths. So, in this
specific case, the total number of paths thus obtained is
$\sum_{i=1}^{k}f_i ^2 -f_k$. On the other hand, if we choose to
insert $U$ and $D$ into two distinct factors, then $U$ must be
inserted before $D$ (otherwise the resulting path would fall below
the $x$-axis). If we decide to insert $D$ into the factor $F_i$,
$i<k$ (for which, by an argument similar to the above one, we have
$f_i$ possibilities), then we can insert $U$ into any of the
preceding factors, whence in $\sum_{j=1}^{i-1}f_j$ ways. Instead,
if $D$ is inserted into $F_k$, we only have $f_k -1$
possibilities, and we can then insert $U$ in any of the first
$k-1$ factors, for a total of $\sum_{j=1}^{k-1}f_j$ different
paths thus obtained. Thus, in this last case, the total number of
paths $P$ having this form is given by $\sum_{i=1}^{k-1}\left( f_i
\cdot \sum_{j=1}^{i-1}f_j \right) +(f_k -1)\cdot
\sum_{j=1}^{k-1}f_j$.
\end{enumerate}
Finally, summing up all the quantities obtained so far, we find
the following expression for the number of paths covering a given
path $Q$:
\begin{eqnarray*}
& &1+n+\sum_{i=1}^{k}f_i ^2 -f_k +\sum_{i=1}^{k}\left( f_i \cdot
\sum_{j=1}^{i-1}f_j \right) -\sum_{i=1}^{k-1}f_i \nonumber
\\ &=&1+\sum_{i=1}^{k}f_i ^2 +\sum_{i<j}f_i f_j .
\end{eqnarray*}
This is precisely formula (\ref{covering}).\cvd
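Formula (\ref{covering}) can be checked mechanically as well. A Python sketch (illustrative only; `dyck_paths` and `contains` are the obvious brute-force helpers):

```python
def dyck_paths(n):
    def rec(path, u, d):
        if u == 0 and d == 0:
            yield path
            return
        if u > 0:
            yield from rec(path + 'U', u - 1, d)
        if d > u:
            yield from rec(path + 'D', u, d - 1)
    yield from rec('', n, n)

def contains(Q, P):
    it = iter(Q)
    return all(s in it for s in P)

def factor_semilengths(Q):
    """Semilengths f_1, ..., f_k of the factors of Q."""
    out, h, start = [], 0, 0
    for i, s in enumerate(Q):
        h += 1 if s == 'U' else -1
        if h == 0:
            out.append((i + 1 - start) // 2)
            start = i + 1
    return out

def covering_formula(Q):
    f = factor_semilengths(Q)
    return (1 + sum(x * x for x in f)
            + sum(f[i] * f[j] for i in range(len(f)) for j in range(i + 1, len(f))))

def covering_bruteforce(Q):
    m = len(Q) // 2 + 1
    return sum(contains(P, Q) for P in dyck_paths(m))

assert all(covering_formula(Q) == covering_bruteforce(Q)
           for n in range(0, 5) for Q in dyck_paths(n))
```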
\section{Enumerative results on pattern avoiding Dyck paths}
In the present section we will be concerned with the enumeration
of some classes of pattern avoiding Dyck paths. Similarly to what
has been done for other combinatorial structures, we are going to
consider classes of Dyck paths avoiding a single pattern, and we
will examine the cases of short patterns. Specifically, we will
count Dyck paths avoiding any single path of length $\leq 3$; each
case will arise as a special case of a more general result
concerning a certain class of patterns.
Given a pattern $P$, we denote by $D_n (P)$ the set of all Dyck
paths of semilength $n$ avoiding the pattern $P$, and by $d_n (P)$
the cardinality of $D_n (P)$.
\subsection{The pattern $(UD)^k$}
This is one of the easiest cases.
\begin{prop} For any $k\in \mathbb{N}$, $Q\in D_n ((UD)^k)$ if and only if
$Q$ has at most $k-1$ peaks.
\end{prop}
\emph{Proof.}\quad A Dyck path
$Q=U^{a_1}D^{b_1}U^{a_2}D^{b_2}\cdots U^{a_h}D^{b_h}$ contains the
pattern $(UD)^k$ if and only if $h\geq k$, that is $Q$ has at
least $k$ peaks.\cvd
Since it is well known that the number of Dyck paths of semilength
$n$ and having $k$ peaks is given by the Narayana number $N_{n,k}$
(sequence A001263 in \cite{Sl}), we have that $d_n
((UD)^k)=\sum_{i=0}^{k-1}N_{n,i}$ (partial sums of Narayana
numbers). Thus, in particular:
\begin{itemize}
\item[-] $d_n (UD)=0$;
\item[-] $d_n (UDUD)=1$;
\item[-] $d_n (UDUDUD)=1+{n\choose 2}$.
\end{itemize}
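These values, and the partial-sum formula itself, are easy to confirm computationally. A Python sketch (illustrative only; the brute-force helpers are as before):

```python
from math import comb

def dyck_paths(n):
    def rec(path, u, d):
        if u == 0 and d == 0:
            yield path
            return
        if u > 0:
            yield from rec(path + 'U', u - 1, d)
        if d > u:
            yield from rec(path + 'D', u, d - 1)
    yield from rec('', n, n)

def contains(Q, P):
    it = iter(Q)
    return all(s in it for s in P)

def narayana(n, k):
    """N_{n,k}: Dyck paths of semilength n >= 1 with exactly k peaks."""
    return comb(n, k) * comb(n, k - 1) // n if k >= 1 else 0

def d_bruteforce(n, P):
    return sum(not contains(Q, P) for Q in dyck_paths(n))

# d_n((UD)^k) = N_{n,0} + ... + N_{n,k-1}
for n in range(1, 7):
    for k in range(1, 5):
        assert d_bruteforce(n, 'UD' * k) == sum(narayana(n, i) for i in range(k))

assert d_bruteforce(5, 'UDUDUD') == 1 + comb(5, 2)  # = 11
```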
\subsection{The pattern $U^{k-1}DUD^{k-1}$}
Let $Q$ be a Dyck path of length $2n$ and $P=U^{k-1}DUD^{k-1}$.
Clearly if $n<k$, then $Q$ avoids $P$, and if $n=k$, then all Dyck
paths of length $2n$ except one ($P$ itself) avoid $P$. Therefore:
\begin{itemize}
\item $d_n(P)=C_n$ if $n<k$, and
\item $d_n(P)=C_n -1$ if $n=k$,
\end{itemize}
\noindent where $C_n$ is the $n$-th Catalan number.
Now suppose $n>k$. Denote by $A$ the end point of the $(k-1)$-th
$U$ step of $Q$. It is easy to verify that $A$ belongs to the line
$r$ having equation $y=-x+2k-2$. Denote with $B$ the starting
point of the $(k-1)$-th-to-last $D$ step of $Q$. An analogous
computation shows that $B$ belongs to the line $s$ having equation
$y=x-\left(2n-2k+2\right)$.
Depending on how the two lines $r$ and $s$ intersect, it is
convenient to distinguish two cases.
\begin{enumerate}
\item If $2n-2k+2\geq2k-4$ (i.e. $n\geq 2k-3$), then $r$ and $s$
intersect at height $\leq 1$, whence $x_A\leq x_B$ (where $x_A$
and $x_B$ denote the abscissas of $A$ and $B$, respectively). The
path $Q$ can be split into three parts (see Figure
\ref{av_UUDUDD_}): a prefix $Q_A$ from the origin $(0,0)$ to
$A$, a path $X$ from $A$ to $B$, and a suffix $Q_B$ from $B$ to
the last point $(2n,0)$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.3]{avoiding_UDUD_}
\end{center}
\vspace{-0.5cm} \caption{Avoiding $U^{k-1}DUD^{k-1}$, with
$n\geq2k-3$} \label{av_UUDUDD_}
\end{figure}
We point out that $Q_A$ has exactly $k-1$ $U$ steps and its last
step is a $U$ step. Analogously, $Q_B$ has exactly $k-1$ $D$ steps
and its first step is a $D$ step. Notice that there is a clear
bijection between the set $\mathcal A$ of Dyck prefixes having
$k-1$ $U$ steps and ending with a $U$ and the set $\mathcal B$ of
Dyck suffixes having $k-1$ $D$ steps and starting with a $D$,
since each element of $\mathcal{B}$ can be read from right to left
thus obtaining an element of $\mathcal{A}$. Moreover,
$\mathcal{A}$ is in bijection with the set of Dyck paths of
semilength $k-1$ (just complete each element of $\mathcal{A}$ with
the correct sequence of $D$ steps), hence $|\mathcal A|=C_{k-1}$.
If we require $Q$ to avoid $P$, then necessarily $X=U^iD^j$, for
suitable $i,j$ (for, if a valley $DU$ occurred in $X$, then $Q$
would contain $P$ since $U^{k-1}$ and $D^{k-1}$ already occur in
$Q_A$ and $Q_B$, respectively). In other words, $A$ and $B$ can be
connected only in one way, using a certain number (possibly zero)
of $U$ steps followed by a certain number (possibly zero) of $D$
steps. Therefore, a path $Q$ avoiding $P$ is essentially
constructed by choosing a prefix $Q_A$ from $\mathcal A$ and a
suffix $Q_B$ from $\mathcal B$, whence:
\begin{equation}\label{ck-1}
d_n(P)=C_{k-1}^2,\quad (\mbox{if}\quad n\geq2k-3).
\end{equation}
\item Suppose now $k+1\leq n<2k-3$ (which means that $r$ and $s$
intersect at height $>1$). Then either $x_A\leq x_B$ or
$x_A>x_B$.
\begin{itemize}
\item[a)] If $x_A\leq x_B$, then we can count all Dyck paths $Q$
avoiding $P$ using an argument analogous to the previous one.
However, in this case the set of allowable prefixes of each such
$Q$ is a proper subset of $\mathcal{A}$. More specifically, we
have to consider only those for which $x_A=k-1,k,k+1,\ldots,n$
(see Figure \ref{av_UUDUDD_2_a}). In other words, an allowable
prefix has $k-1$ $U$ steps and $0,1,2,\ldots$ or $n-k+1$ $D$
steps.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.3]{avoiding_UDUD_2_a}
\end{center}
\vspace{-0.5cm} \caption{Avoiding $U^{k-1}DUD^{k-1}$, with
$x_A\leq x_B$} \label{av_UUDUDD_2_a}
\end{figure}
If $b_{i,j}$ denotes the number of Dyck prefixes with $i$ $U$
steps and $j$ $D$ steps ($i\geq j$), then the contribution to $d_n
(P)$ in this case is
$$d_n ^{(1)}(P)=\left(\sum_{j=0}^{n-k+1}b_{k-2,j}\right)^2\quad.$$
The coefficients $b_{i,j}$ are the well-known \emph{ballot
numbers} (sequence A009766 in \cite{Sl}), whose first values are
reported in Table \ref{ballot}.
\item[b)] If $x_A>x_B$, then it is easy to see that $Q$
necessarily avoids $P$, since $A$ occurs after $B$, and so
there are strictly fewer than $k-1$ $D$ steps from $A$ to $(2n,0)$.
Observe that, in this case, the path $Q$ lies below the profile
drawn by the four lines $y=x$, $r$, $s$ and $y=-x+2n$. In order to
count these paths, referring to Figure \ref{av_UUDUDD_2_b}, just
split each of them into a prefix and a suffix of equal length $n$
and call $C$ the point having abscissa $n$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.3]{avoiding_UDUD_2_b}
\end{center}
\caption{Avoiding $U^{k-1}DUD^{k-1}$, with $x_A>x_B$}
\label{av_UUDUDD_2_b}
\end{figure}
Since $C$ must lie under the point where $r$ and $s$ intersect,
then its ordinate $y_C$ equals $-n+2k-2-2t$ with $t\geq 1$ (and
also recalling that $y_C =-n+2k-2-2t\geq 0$). A prefix whose final
point is $C$ has $k-j$ $U$ steps and $n-k+j$ $D$ steps, with
$j\geq 2$. Since, in this case, a path $Q$ avoiding $P$ is
constructed by gluing a prefix and a suffix chosen among
$b_{k-j,n-k+j}$ possibilities $(j\geq 2)$, we deduce that the
contribution to $d_n (P)$ in this case is:
$$d_n ^{(2)}(P)=\sum_{j\geq2}b_{k-j,n-k+j}^2.$$
\end{itemize}
Summing up the two contributions we have obtained in a) and b), we
get:
\begin{align}\label{general_1}
d_n(P)&=d_n ^{(1)}(P)+d_n
^{(2)}(P)\nonumber \\
&=\left(\sum_{j=0}^{n-k+1}b_{k-2,j}\right)^2+\sum_{j\geq2}b_{k-j,n-k+j}^2,\quad
\mbox{if}\quad k+1\leq n<2k-3.
\end{align}
\end{enumerate}
Notice that formula (\ref{general_1}) reduces to the first sum if
$n\geq2k-3$, since in that case $n-k+j>k-j$, for $j\geq2$. We then
have a single formula including both cases 1. and 2.:
\begin{equation}\label{general_2}
d_n(P)=\left(\sum_{j=0}^{n-k+1}b_{k-2,j}\right)^2+\sum_{j\geq2}b_{k-j,n-k+j}^2,
\quad \mbox{if}\quad n\geq k+1\quad.
\end{equation}
Formula (\ref{general_2}) can be further simplified by recalling a
well known recurrence for ballot numbers, namely that
$$
b_{i+1,j}=\sum_{s=0}^{j}b_{i,s}.
$$
Therefore, we get the following interesting expression for $d_n
(P)$ (when $n\geq k+1$) in terms of sums of squares of ballot
numbers along a skew diagonal (see also Tables \ref{ballot} and
\ref{alfa}):
\begin{equation}\label{ultima}
d_n (P)=\sum_{j\geq 1}b_{k-j,n-k+j}^2 .
\end{equation}
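The ballot numbers also admit the well-known closed form $b_{i,j}=\frac{i-j+1}{i+1}\binom{i+j}{j}$ (the Catalan triangle). The following Python sketch (illustrative only, not part of the paper) checks this closed form against both a direct count of Dyck prefixes and the recurrence just used:

```python
from math import comb
from functools import lru_cache

def ballot(i, j):
    """Closed form for b_{i,j}: Dyck prefixes with i U's and j D's (0 <= j <= i)."""
    return (i - j + 1) * comb(i + j, j) // (i + 1)

@lru_cache(maxsize=None)
def prefixes(u, d):
    """Direct count of words with u U's and d D's never going below the x-axis."""
    if d > u or u < 0 or d < 0:
        return 0
    if u == 0 and d == 0:
        return 1
    # classify by the last step of the prefix
    return prefixes(u - 1, d) + prefixes(u, d - 1)

for i in range(9):
    for j in range(i + 1):
        assert ballot(i, j) == prefixes(i, j)

# the recurrence b_{i+1,j} = sum_{s=0}^{j} b_{i,s} (with b_{i,s} = 0 for s > i)
for i in range(8):
    for j in range(i + 2):
        assert ballot(i + 1, j) == sum(ballot(i, s) for s in range(min(i, j) + 1))
```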
\begin{table}
\begin{tabular}{c|cccccccccc}
\backslashbox{$i$}{$j$} & 0 & 1 &2&3&4&5&6&7&8&9\\
\hline
0 & 1 & & & & & & & &&\\
1 & 1 & 1 & & & & & & &&\\
2 & 1 & 2 & 2& & & & & &&\\
3 & 1 & 3 & 5& 5& & & & &&\\
4 & 1 & 4 & 9& 14& 14& & & &&\\
5 & 1 & 5 & 14& 28& \textbf{42}& 42& && \\
6&\multicolumn{1}{>{\columncolor[gray]{.7}}c}{\emph{1}} &
\multicolumn{1}{>{\columncolor[gray]{.7}}c}{\emph{6}} &
\multicolumn{1}{>{\columncolor[gray]{.7}}c}{\emph{20}}&
\textbf{48}& 90& 132& 132&&& \\
7 & 1 & 7 & \textbf{27}& 75& 165& 297& 429& 429&&\\
8 & 1 & 8 & 35 & 110 & 275 & 572 & 1001 & 1430 & 1430\\
9 & 1 & 9 & 44 & 154 & 429 & 1001 & 2002 & 3432 & 4862 & 4862\\
\end{tabular}
\caption{The sum of the gray entries gives the bold entry in the
line below. The sum of the squares of the bold entries gives an
appropriate element of Table \ref{alfa}.} \label{ballot}
\end{table}
\begin{table}
{\footnotesize
\begin{tabular}{c|cccccccccccccccc}
\backslashbox{$k$}{$n$} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 &8&9&10&11&12&13&\ldots\\
\hline
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0& 0& 0& 0& 0& 0& \ldots \\
2 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1& 1& 1& 1& 1& 1& \ldots \\
3 & 1 & 1 & 2 & {\slshape 4} & 4 & 4 & 4 & 4 & 4& 4& 4& 4& 4& 4& \ldots \\
4 & 1 & 1 & 2 & 5 & {\slshape 13} & 25 & 25 & 25 & 25& 25& 25& 25& 25& 25 & \ldots \\
5 & 1 & 1 & 2 & 5 & 14 & {\slshape 41} & \textbf{106} & 196 & 196 & 196 & 196 & 196 & 196 & 196 & \ldots \\
6 & 1 & 1 & 2 & 5 & 14 & 42 & {\slshape 131} & \textbf{392} & \textbf{980} & 1764 & 1764 & 1764 & 1764 & 1764 & \ldots \\
7 & 1 & 1 & 2 & 5 & 14 & 42 & 132 & {\slshape 428} & \textbf{1380} & \textbf{4068} & \textbf{9864} & 17424 & 17424 & 17424 & \ldots \\
8 & 1 & 1 & 2 & 5 & 14 & 42 & 132 & 429 & {\slshape 1429} & \textbf{4797} & \textbf{15489} & \textbf{44649} & \textbf{105633} & 184041 & \ldots\\
9 & 1 & 1 & 2 & 5 & 14 & 42 & 132 & 429 & 1430 & {\slshape 4861} & \textbf {16714} & \textbf{56749} & \textbf{181258} & \textbf{511225} & \ldots\\
\end{tabular}
} \vspace{1cm} \caption{Number of Dyck paths of semilength $n$
avoiding $U^{k-1}DUD^{k-1}$. Entries in boldface are the
nontrivial ones ($k+1\leq n<2k-3$).} \label{alfa}
\end{table}
Therefore we obtain in particular:
$$d_n (UUDUDD)=4,\textnormal{ when $n\geq 3$.}$$
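Formula (\ref{ultima}) can be verified mechanically against a brute-force count. In the sketch below (illustrative only, not part of the paper) the $j=1$ term is read, as dictated by the first sum in formula (\ref{general_2}), with the second index saturating at $k-1$; all other out-of-range ballot numbers are zero:

```python
from math import comb

def ballot(i, j):
    """b_{i,j}, with the zero convention for j outside [0, i]."""
    if j < 0 or j > i:
        return 0
    return (i - j + 1) * comb(i + j, j) // (i + 1)

def dyck_paths(n):
    def rec(path, u, d):
        if u == 0 and d == 0:
            yield path
            return
        if u > 0:
            yield from rec(path + 'U', u - 1, d)
        if d > u:
            yield from rec(path + 'D', u, d - 1)
    yield from rec('', n, n)

def contains(Q, P):
    it = iter(Q)
    return all(s in it for s in P)

def d_formula(k, n):
    """Formula (ultima), valid for n >= k+1; the j = 1 term saturates at b_{k-1,k-1}."""
    total = ballot(k - 1, min(n - k + 1, k - 1)) ** 2
    total += sum(ballot(k - j, n - k + j) ** 2 for j in range(2, k + 1))
    return total

def d_bruteforce(k, n):
    P = 'U' * (k - 1) + 'DU' + 'D' * (k - 1)
    return sum(not contains(Q, P) for Q in dyck_paths(n))

for k in range(3, 6):
    for n in range(k + 1, 8):
        assert d_formula(k, n) == d_bruteforce(k, n)
```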
\subsection{The pattern $U^k D^k$}
The case $P=U^kD^k$ is very similar to the previous one. We just
observe that, when $x_A\leq x_B$, the two points $A$ and $B$ can
be connected only using a sequence of $D$ steps followed by a
sequence of $U$ steps. This is possible only if $n\leq 2k-2$,
which means that $r$ and $s$ do not intersect below the $x$-axis.
Instead, if $n\geq 2k-1$, $Q$ cannot avoid $P$. Therefore we get
(see also Table \ref{beta}):
$$
d_n(P)= \left\{
\begin{array}{cc}
0 & \mbox{if $n\geq 2k-1$};\\
\sum_{j\geq1}b_{k-j,n-k+j}^2 & \mbox{otherwise}.
\end{array}
\right.
$$
In particular, we then find:
\begin{itemize}
\item[-] $d_n (UUDD)=0$, when $n\geq 3$;
\item[-] $d_n (UUUDDD)=0$, when $n\geq 5$.
\end{itemize}
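The same kind of mechanical check applies to this case. A Python sketch (illustrative only; helpers as before):

```python
from math import comb

def ballot(i, j):
    if j < 0 or j > i:
        return 0
    return (i - j + 1) * comb(i + j, j) // (i + 1)

def dyck_paths(n):
    def rec(path, u, d):
        if u == 0 and d == 0:
            yield path
            return
        if u > 0:
            yield from rec(path + 'U', u - 1, d)
        if d > u:
            yield from rec(path + 'D', u, d - 1)
    yield from rec('', n, n)

def contains(Q, P):
    it = iter(Q)
    return all(s in it for s in P)

def d_formula(k, n):
    if n >= 2 * k - 1:
        return 0
    return sum(ballot(k - j, n - k + j) ** 2 for j in range(1, k + 1))

def d_bruteforce(k, n):
    P = 'U' * k + 'D' * k
    return sum(not contains(Q, P) for Q in dyck_paths(n))

for k in range(2, 5):
    for n in range(1, 8):
        assert d_formula(k, n) == d_bruteforce(k, n)
```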
\begin{table}
{\footnotesize
\begin{tabular}{c|cccccccccccccccc}
\backslashbox{$k$}{$n$} &
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & \ldots\\
\hline
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots \\
2 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots \\
3 & 1 & 1 & 2 & \slshape{4} & 4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots \\
4 & 1 & 1 & 2 & 5 & \slshape{13} & 25 & 25 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots \\
5 & 1 & 1 & 2 & 5 & 14 & \slshape{41} & \textbf{106} & 196 & 196 & 0 & 0 & 0 & 0 & 0 & \ldots \\
6 & 1 & 1 & 2 & 5 & 14 & 42 & \slshape{131} & \textbf{392} & \textbf{980} & 1764 & 1764 & 0 & 0 & 0 & \ldots \\
7 & 1 & 1 & 2 & 5 & 14 & 42 & 132 & \slshape{428} & \textbf{1380} & \textbf{4068} & \textbf{9864} & 17424 & 17424 & 0 &\ldots \\
8 & 1 & 1 & 2 & 5 & 14 & 42 & 132 & 429 & \slshape{1429} & \textbf{4797} & \textbf{15489} & \textbf{44649} & \textbf{105633} & 184041 & \ldots\\
9 & 1 & 1 & 2 & 5 & 14 & 42 & 132 & 429 & 1430 & \slshape{4861} & \textbf{16714} & \textbf{56749} & \textbf{181258} & \textbf{511225} & \ldots\\
\end{tabular}
} \vspace{1cm} \caption{Number of Dyck paths of semilength $n$
avoiding $U^{k}D^{k}$. Entries in boldface are the nontrivial ones
($k+1\leq n<2k-3$).} \label{beta}
\end{table}
\subsection{The pattern $U^{k-1}D^{k-1}UD$}
This is by far the most challenging case.
Let $Q$ be a Dyck path of length $2n$ and $P=U^{k-1}D^{k-1}UD$. If
$Q$ avoids $P$, then there are two distinct options: either $Q$
avoids $U^{k-1}D^{k-1}$ or $Q$ contains such a pattern. In the
first case, we already know that $d_n (U^{k-1}D^{k-1})$ is
eventually equal to zero. So, for the sake of simplicity, we will
just find a formula for $d_n (P)$ when $n$ is sufficiently large,
i.e. $n\geq 2k-3$. Therefore, for the rest of this section, we
will suppose that $Q$ contains $U^{k-1}D^{k-1}$.
\bigskip
The $(k-1)$-th $D$ step of the first occurrence of
$U^{k-1}D^{k-1}$ in $Q$ lies on the line having equation
$y=-x+2n$. This is due to the fact that $Q$ has length $2n$ and
there cannot be any occurrence of $UD$ after the first occurrence
of $U^{k-1}D^{k-1}$. The path $Q$ touches the line of equation
$y=-x+2k-2$ for the first time with the end point $A$ of its
$(k-1)$-th $U$ step. After that, the path $Q$ must reach the
starting point $B$ of the $(k-1)$-th $D$ step occurring after $A$.
Finally, a sequence of consecutive $D$ steps terminates $Q$ (see
Figure \ref{challenging_1}). Therefore, $Q$ can be split into
three parts: the first part, from the beginning to $A$, is a Dyck
prefix having $k-1$ $U$ steps and ending with a $U$ step; the
second part, from $A$ to $B$, is a path using $n-k+1$ $U$ steps
and $k-2$ $D$ steps; and the third part, from $B$ to the end, is a
sequence of $D$ steps (whose length depends on the coordinates of
$A$). However, both the first and the second part of $Q$ have to
obey some additional constraints.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.3]{challenging_1}
\end{center}
\caption{A path $Q$ avoiding $P=U^{k-1}D^{k-1}UD$}
\label{challenging_1}
\end{figure}
The height of the point $A$ (where the first part of $Q$ ends)
must allow $Q$ to have at least $k-1$ $D$ steps after $A$. Thus,
the height of $A$ plus the number of $U$ steps from $A$ to $B$
minus the number of $D$ steps from $A$ to $B$ must be greater than
or equal to 1 (to ensure that the pattern $U^{k-1}D^{k-1}$ occurs
in $Q$). Hence, denoting with $x$ the maximum number of $D$ steps
which can occur before $A$, either $x=k-2$ or the following
equality must be satisfied:
$$
(k-1)-x+(n-k+1)-(k-2)=1.
$$
Therefore, $x=\min \{ n-k+1,k-2\}$. Observe however that, since we
are supposing that $n\geq 2k-3$, we always have $x=k-2$.
Concerning the part of $Q$ between $A$ and $B$, since we have to
use $n-k+1$ $U$ steps and $k-2$ $D$ steps, there are ${n-1\choose
k-2}$ distinct paths connecting $A$ and $B$. However, some of them
must be discarded, since they fall below the $x$-axis. In order to
count these ``bad'' paths, we split each of them into two parts:
if $A'$ and $B'$ are the starting and ending points of the
first (necessarily $D$) step going below the $x$-axis, the two
parts are the one from $A$ to $A'$ and the remaining one from $B'$
to $B$ (see Fig. \ref{challenging_2}).
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.3]{challenging_2}
\end{center}
\caption{A forbidden subpath from $A$ to $B$.}
\label{challenging_2}
\end{figure}
It is not too hard to realize that the number of possibilities we
have to choose the first part is given by a ballot number
(essentially because, reading the path from right to left, we have
to choose a Dyck prefix from $A'$ to $A$), whereas the number of
possibilities we have to choose the second part is given by a
binomial coefficient (essentially because, after having discarded
the step starting at $A'$, we have to choose an unrestricted path
from $B'$ to $B$). After a careful inspection, we thus get to the
following expression for the total number $d_n (P)$ of Dyck paths
of semilength $n\geq 2k-3$ avoiding $P$:
\begin{align}\label{bad}
d_n (P)=&{n-1\choose k-2}C_{k-1}\nonumber \\
&-\sum_{s=2}^{k-2}b_{k-2,s}\cdot \left(
\sum_{i=0}^{s-2}b_{k-3-i,s-2-i}{n-k-s+3+2i\choose i}\right) .
\end{align}
Formula (\ref{bad}) specializes to the following expressions for
low values of $k$ (see also Table \ref{difficile}):
\begin{itemize}
\item[-] when $k=3$, $d_n (P)=2n-2$ for $n\geq 3$;
\item[-] when $k=4$, $d_n (P)=\frac{5n^2 -15n+6}{2}$ for $n\geq
5$;
\item[-] when $k=5$, $d_n (P)=\frac{14n^3 -84n^2 +124n-84}{6}$ for
$n\geq 7$.
\end{itemize}
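The specializations above (and the corresponding entries of Table \ref{difficile}) can be confirmed by brute force. A Python sketch, illustrative only:

```python
def dyck_paths(n):
    def rec(path, u, d):
        if u == 0 and d == 0:
            yield path
            return
        if u > 0:
            yield from rec(path + 'U', u - 1, d)
        if d > u:
            yield from rec(path + 'D', u, d - 1)
    yield from rec('', n, n)

def contains(Q, P):
    it = iter(Q)
    return all(s in it for s in P)

def d_bruteforce(k, n):
    P = 'U' * (k - 1) + 'D' * (k - 1) + 'UD'
    return sum(not contains(Q, P) for Q in dyck_paths(n))

# k = 3: d_n(P) = 2n - 2 for n >= 3
assert all(d_bruteforce(3, n) == 2 * n - 2 for n in range(3, 8))
# k = 4: d_n(P) = (5n^2 - 15n + 6)/2 for n >= 5
assert all(d_bruteforce(4, n) == (5 * n * n - 15 * n + 6) // 2 for n in range(5, 8))
```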
\begin{table}
\begin{tabular}{c|cccccccccc}
\backslashbox{$k$}{$n$} &
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\hline
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
3 & 1 & 1 & 2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 \\
4 & 1 & 1 & 2 & 5 & 13 & 28 & 48 & 73 & 103 & 138 \\
5 & 1 & 1 & 2 & 5 & 14 & 41 & 110 & 245 & 450 & 739 \\
6 & 1 & 1 & 2 & 5 & 14 & 42 & 131 & 397 & 1069 & 2427 \\
\end{tabular}
\caption{Avoiding $U^{k-1}D^{k-1}UD$} \label{difficile}
\end{table}
\section{Some remarks on the asymptotics of pattern avoiding Dyck paths}
In this final section we collect some thoughts concerning the
asymptotic behavior of integer sequences counting pattern-avoiding
Dyck paths. Unlike the case of permutations, for Dyck paths it
seems plausible that a sort of ``master theorem'' exists, at least
in the case of single avoidance. This means that all the sequences
which count Dyck paths avoiding a single pattern $P$ have the same
asymptotic behavior (with some parameters, such as the leading
coefficient, depending on the specific path $P$). We have some
computational evidence which leads us to formulate a conjecture,
whose proof we have not been able to complete, and so we leave it
as an open problem.
\bigskip
Let $P$ denote a fixed Dyck path of semilength $x$. We are
interested in the behavior of $d_n (P)$ when $n\rightarrow
\infty$. Our conjecture is the following:
\bigskip
\textbf{Conjecture.}\quad \emph{Suppose that $P$ starts with $a$ $U$
steps and ends with $b$ $D$ steps. Then, setting $k=2x-2-a-b$, we have
that $d_n (P)$ is asymptotic to
$$\frac{\alpha_P \cdot C_a \cdot
C_b}{k!}n^{k},$$ where $C_m$ denotes the $m$-th Catalan number
and $\alpha_P$ is the number of saturated chains in the Dyck
lattice of order $x$ (see \cite{FP}) from $P$ to the maximum $U^x
D^x$.}
\bigskip
Equivalently, $\alpha_P$ is the number of standard Young tableaux
whose Ferrers shape is determined by the region delimited by the
path $P$ and the path $U^x D^x$, as shown in Figure \ref{young}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=.3]{young}
\end{center}
\caption{An instance of a standard Young tableau determined by a
Dyck path.} \label{young}
\end{figure}
In the above conjecture, the only parts of the formula we are able
to justify are the coefficients $C_a$ and $C_b$. Indeed, suppose
that $Q$ is a Dyck path of semilength $n$, with $n$ very large.
Then we can consider the minimum prefix $Q_{pref}$ of $Q$
containing exactly $a$ $U$ steps and the minimum suffix $Q_{suff}$
of $Q$ containing exactly $b$ $D$ steps. They certainly exist, due
to the hypothesis that $n$ is very large. As we have already shown
in the previous section, the number of Dyck prefixes having $a$
$U$ steps and ending with $U$ is precisely equal to $C_a$. Of
course, an analogous fact holds for suffixes as well.
\bigskip
We close our paper with some further open questions concerning the
order structure of the Dyck pattern poset.
\begin{itemize}
\item What is the M\"obius function of the Dyck pattern poset
(from the bottom element to a given path, or over a generic interval)?
\item How many (saturated) chains are there up to a given path? Or
in a general interval?
\item Does there exist an infinite antichain in the Dyck pattern
poset?
\end{itemize}
The last question was suggested by an analogous one for the
permutation pattern poset, which has been answered in the affirmative
(see \cite{SB} and the accompanying comment). In the present
context we have no intuition on what could be the answer, though
we are a little bit less optimistic than in the permutation case. | {"config": "arxiv", "file": "1303.3785/path_patterns_4.tex"} |
\section{Improving stability through repartitioning}
\label{sec:repartitioning}
We now introduce a simple repartitioning strategy to eliminate the stability limitations described in the previous two sections. In short, we propose to repartition the system (\ref{eq:semilinear_ode}) as
\begin{align}
\mathbf{y}' = \widehat{\mathbf{L}} \mathbf{y} + \widehat{N}(t,\mathbf{y})
\label{eq:repartioned_semilinear_ode}
\end{align}
where
\begin{align}
\widehat{\mathbf{L}} = \mathbf{L} + \epsilon \mathbf{D}, \quad
\widehat{N}(t,\mathbf{y}) = N(t,\mathbf{y}) - \epsilon \mathbf{D}\mathbf{y},
\label{eq:partitioned_operators}
\end{align}
and $\mathbf{D}$ is a diffusive operator. The repartitioned system (\ref{eq:repartioned_semilinear_ode}) is mathematically equivalent to (\ref{eq:semilinear_ode}); however, a partitioned exponential integrator now exponentiates $\widehat{\mathbf{L}}$ instead of the original matrix $\mathbf{L}$.
To improve stability, we seek a matrix $\mathbf{D}$ so that the eigenvalues of $\widehat{\mathbf{L}}$ have a small negative real component. We start by first assuming that $\mathbf{L}$ is diagonalizable, so that $\mathbf{L} = \mathbf{U \Lambda U}^{-1}$, and then select
\begin{align}
\mathbf{D} = -\mathbf{U} \text{abs}(\mathbf{\Lambda}) \mathbf{U}^{-1}.
\label{eq:D_eigen}
\end{align}
Although this choice may be difficult to apply in practice, it is very convenient for analyzing the stability effects of repartitioning. If we apply (\ref{eq:D_eigen}) to the Dahlquist test problem (\ref{eq:dispersive-dahlquist}), then $\mathbf{D} = -|\lambda_1|$ and we obtain the repartitioned non-diffusive Dahlquist equation
\begin{align}
y' = \underbrace{(i \lambda_1 - \epsilon|\lambda_1|)}_{\hat{\mathbf{L}}}y + \underbrace{(i\lambda_2 + \epsilon |\lambda_1|) y}_{\hat{N}(t,y)}.
\label{eq:repartioned_dispersive_dahlquist}
\end{align}
The associated stability region for an exponential integrator is
\begin{align}
S &= \left\{ (k_1, k_2) \in \mathbb{R}^2 : \left| \hat{R}(k_1,k_2) \right| \le 1 \right\}, \\
\hat{R}(k_1,k_2) &= R\left(ik_1 - \epsilon|k_1|, i k_2 + \epsilon |k_1|\right),
\label{eq:stability_region_damped}
\end{align}
where $R$ is the stability function of the unmodified integrator. By choosing
\begin{align}
\epsilon = \tan(\rho) \quad \text{for} \quad \rho \in [0, \tfrac{\pi}{2}),
\label{eq:epsilon_of_rho}
\end{align}
the single eigenvalue of the partitioned linear operator $\hat{\mathbf{L}}$
is now angled $\rho$ radians off the imaginary axis into the left half-plane. Conversely, the ``nonlinear'' operator $\hat{N}(t,y)$
has been rotated and scaled into the right half-plane. Therefore, in order for the method to stay stable, the exponential functions of $\hat{\mathbf{L}}$ must damp the excitation that was introduced in the nonlinear component.
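The geometry can be checked directly. The sketch below assumes $\epsilon = \tan(\rho)$, so that for the Dahlquist problem the repartitioned eigenvalue $ik_1 - \epsilon|k_1|$ sits exactly $\rho$ radians off the imaginary axis, and its exponential damps every nonzero mode.

```python
import cmath, math

def repartitioned_lambda1(k1, rho):
    """Linear eigenvalue i*k1 - eps*|k1| of the repartitioned Dahlquist
    problem, with eps = tan(rho)."""
    return 1j * k1 - math.tan(rho) * abs(k1)

rho = math.pi / 128
lam = repartitioned_lambda1(4.0, rho)
# angle from the positive real axis is pi/2 + rho for k1 > 0
assert abs(cmath.phase(lam) - (math.pi / 2 + rho)) < 1e-12
# the exponential of the repartitioned operator now damps this mode
assert abs(cmath.exp(lam)) < 1.0
```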
In Figure \ref{fig:repartitioned-stability} we show the stability regions for ERK4, ESDC6, and EPBM5 after repartitioning with different values of $\rho$. Rotating the linear operator by only $\rho = \pi/2048$ radians ($\approx 0.088$ degrees) already leads to large connected linear stability regions for all three methods. This occurs because the magnitude of the partitioned stability function $\hat{R}(k_1,k_2)$ along the line $k_2 = 0$ is now less than one for any $k_1 \ne 0$. In fact, by increasing $\rho$ further, one introduces additional damping for large $k_1$. Therefore, under repartitioning, high-frequency modes will be damped, while low frequency modes will be integrated in a nearly identical fashion to the unmodified integrator with $\rho = 0$. Excluding scenarios where energy conservation is critical, the damping of high-frequency modes is not a serious drawback since large phase errors in the unmodified integrator would still lead to inaccurate solutions (supposing the method is stable in the first place).
Repartitioning cannot be applied indiscriminately, and as $\rho$ approaches $\tfrac{\pi}{2}$ one obtains an unstable integrator. To highlight this phenomenon more clearly, we show magnified stability regions in Figure \ref{fig:exp-overpartitioning}, in which we selected sufficiently large $\rho$ values to cause stability region separation for each method. From these results we can see that the maximum amount of allowed repartitioning is integrator dependent, with ESDC6 allowing for the most repartitioning and EPBM5 the least.
Finally, we note that this repartitioning technique can also be applied to implicit-explicit methods. However, on all the methods we tried, we found that the repartitioning rapidly destabilizes the integrator. In Figure \ref{fig:imex-partitioning} we present the stability regions for IMRK4 using different $\rho$ values and show that stability along the $k_2=0$ line is lost even for small $\rho$ values. The stability region corresponding to $\rho = 0$ can be compared with the stability regions of the exponential integrators in Figure \ref{fig:repartitioned-stability} to see how the damping properties of repartitioned exponential methods compare to those of an IMEX method.
\begin{figure}[h!]
\begin{center}
\setlength{\tabcolsep}{0.25em}
\begin{tabular}{lcccl}
& ERK4 & ESDC6 & EPBM5 \\
\rotatebox{90}{ \hspace{2.75em} $\rho = 0$} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/erk-rho-1} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/esdc-rho-1} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/epbm-rho-1} &
\includegraphics[align=b,width=0.079\linewidth,trim={100 0 0 0},clip]{figures/stability/rho/amp-colorbar} \\
\rotatebox{90}{ \hspace{2.75em} $\rho = \pi/2048$} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/erk-rho-2} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/esdc-rho-2} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/epbm-rho-2} & \\
\rotatebox{90}{ \hspace{2.75em} $\rho = \pi/512$} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/erk-rho-3} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/esdc-rho-3} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/epbm-rho-3} & \\
\rotatebox{90}{ \hspace{2.75em} $\rho = \pi/128$} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/erk-rho-4} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/esdc-rho-4} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/epbm-rho-4} \\
\rotatebox{90}{ \hspace{2.75em} $\rho = \pi/32$} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/erk-rho-5} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/esdc-rho-5} &
\includegraphics[align=b,width=0.24\linewidth]{figures/stability/rho/epbm-rho-5} &
\end{tabular}
\end{center}
\vspace{-1em}
\caption{
Stability regions $\mathcal{S}$ of the ERK4, ESDC6, and EPBM5 methods for various $\rho$ values. On each plot, the x-axis represents $k_1$ and the y-axis represents $k_2$. The color represents the log of $|\hat{R}(k_1,k_2)|$ from (\ref{eq:stability_region_damped}).
}
\label{fig:repartitioned-stability}
\end{figure}
\begin{figure}[h!]
\begin{center}
\setlength{\tabcolsep}{0.25em}
\begin{tabular}{ccccr}
ERK4 $(\rho = \tfrac{\pi}{4})$ & ESDC6 $(\rho = \tfrac{\pi}{3})$ & EPBM5 $(\rho = \tfrac{\pi}{6})$ \\
\includegraphics[width=0.3\linewidth]{figures/stability/rho-zoom/erk-zoom-rho-12} &
\includegraphics[width=0.3\linewidth]{figures/stability/rho-zoom/esdc-zoom-rho-13} &
\includegraphics[width=0.3\linewidth]{figures/stability/rho-zoom/epbm-zoom-rho-11} &
\includegraphics[align=b,width=0.085\linewidth,trim={100 0 0 0},clip]{figures/stability/rho/amp-colorbar}
\end{tabular}
\end{center}
\caption{
Magnified stability regions that show the effects of adding too much repartitioning. Each integrator family has a different amount of maximal repartitioning before the stability regions split along the $k_2 = 0$ line. Amongst the three methods, ESDC6 was the most robust and EPBM5 was the least robust.
}
\label{fig:exp-overpartitioning}
\end{figure}
\begin{figure}[h!]
\begin{center}
\setlength{\tabcolsep}{0.25em}
\begin{tabular}{ccccr}
$\rho = 0$ & $\rho = \tfrac{\pi}{256}$ & $\rho = \tfrac{\pi}{128}$ & $\rho = \tfrac{\pi}{64}$ \\
\includegraphics[width=0.23\linewidth]{figures/stability/rho/imrk-rho-1} &
\includegraphics[width=0.23\linewidth]{figures/stability/rho/imrk-rho-2} &
\includegraphics[width=0.23\linewidth]{figures/stability/rho/imrk-rho-3} &
\includegraphics[width=0.23\linewidth]{figures/stability/rho/imrk-rho-4} &
\includegraphics[align=b,width=0.063\linewidth,trim={100 0 0 0},clip]{figures/stability/rho/amp-colorbar}
\end{tabular}
\end{center}
\caption{
Stability regions for the repartitioned IMRK4 method with four values of $\rho$. For even small values of $\rho$, instabilities form along the $k_2 = 0$ line near the origin, and for large values of $\rho$, the stability regions separate. Instabilities already appear for values of $\rho$ smaller than those shown; however, we did not include these plots here since the effect is not visible at this level of magnification.
}
\label{fig:imex-partitioning}
\end{figure}
\clearpage
\subsection{Solving ZDS with repartitioning}
We now validate our stability results by solving the ZDS equation (\ref{eq:zds}) using several different choices for the diffusive operator $\mathbf{D}$. Since we are solving in Fourier space where $\mathbf{L} = \text{diag}(i \mathbf{k}^3)$, we can easily implement (\ref{eq:D_eigen}) by selecting $\mathbf{D} = -\text{diag}(|\mathbf{k}|^3)$. However, for more general problems, it will often not be possible to compute the eigenvalue decomposition of the operator $\mathbf{L}$. Therefore, to develop a more practical repartitioning, we consider a generic, repartitioned, semilinear partial differential equation
\begin{align}
u_t = \underbrace{L[u] + \epsilon D[u]}_{\hat{L}[u]} + \underbrace{N(t,u) - \epsilon D[u]}_{\hat{N}(t,u)}
\label{eq:repartitioned-semilinear-pde}
\end{align}
where the spatial discretizations of $\hat{L}[u]$ and $\hat{N}(t,u)$ become the $\hat{\mathbf{L}}$ and $\hat{N}(t, \mathbf{y})$ in (\ref{eq:partitioned_operators}), and the linear operator $D[u]$ is the continuous equivalent of the matrix $\mathbf{D}$. A natural choice for a diffusive operator in one spatial dimension is the even-powered derivative
\begin{align}
D[u] =
\begin{cases}
-\frac{\partial^k u}{\partial x^k} & k \equiv 0\pmod{4}, \\
\frac{\partial^k u}{\partial x^k} & k \equiv 2\pmod{4}. \\
\end{cases}
\end{align}
This operator can be easily implemented for different spatial discretizations and boundary conditions. Moreover, it can be generalized to higher dimensional PDEs by adding partial derivatives in other dimensions.
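The sign convention can be checked from the Fourier symbol: $\partial^k/\partial x^k$ acting on $e^{i\xi x}$ multiplies it by $(i\xi)^k$, which is real for even $k$, and a diffusive operator should have a non-positive symbol. The sketch below verifies that placing the minus sign on the $k \equiv 0 \pmod 4$ branch achieves this.

```python
import numpy as np

def diffusive_symbol(xi, k):
    """Fourier symbol of the even derivative with the diffusive sign:
    -d^k/dx^k for k = 0 (mod 4) and +d^k/dx^k for k = 2 (mod 4)."""
    sign = -1.0 if k % 4 == 0 else 1.0
    return sign * (1j * xi) ** k

xi = np.linspace(-8.0, 8.0, 101)
for k in (2, 4, 6, 8):
    sym = diffusive_symbol(xi, k)
    assert np.allclose(sym.imag, 0.0, atol=1e-6)  # even derivatives have real symbols
    assert np.all(sym.real <= 1e-6)               # this sign choice makes them diffusive
```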
The only remaining question is how to choose $k$. To avoid increasing the number of required boundary conditions, it is advantageous if $k$ is smaller than or equal to the highest derivative order found in $L[u]$. Therefore, in addition to (\ref{eq:D_eigen}), we also consider two additional repartitionings for the ZDS equation that are based on the zeroth and the second spatial derivative of $u(x,t)$. Below, we describe each repartitioning in detail, and in Figure \ref{fig:zds_partitioned_linear_operator} we also plot the spectrum of the corresponding repartitioned linear operators $\hat{\mathbf{L}}$.
\begin{itemize}
\item {\bf Third-order repartitioning}. The diffusive operators are
\begin{align}
D[u] &= -\mathcal{F}^{-1}(|k|^3) \ast u &&\text{(Continuous -- physical space),} \\
\mathbf{D} &= -\text{diag}(|\mathbf{k}|^3) &&\text{(Discrete -- Fourier space),}
\end{align}
where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform and $\ast$ is a convolution. This choice is equivalent to (\ref{eq:D_eigen}). We choose $\epsilon$ according to (\ref{eq:epsilon_of_rho}), so that the eigenvalues of $\hat{\mathbf{L}} = \mathbf{L} + \epsilon \mathbf{D}$ lie on the curve
\begin{align*}
r e^{i(\pi/2 + \rho)} \cup r e^{i(3 \pi/2 - \rho)} \quad (r \ge 0).
\end{align*}
For this repartitioning we select $\epsilon$ using $\rho = \frac{\pi}{2048}$, $\frac{\pi}{512}$, $\frac{\pi}{128}$, and $\frac{\pi}{32}$.
\item {\bf Second-order repartitioning}. The diffusive operators are
\begin{align}
D[u] &= u_{xx} &&\text{(Continuous -- physical space),}\\
\mathbf{D} &= -\text{diag}(\mathbf{k}^2) &&\text{(Discrete -- Fourier space),} &
\end{align}
and we again choose $\epsilon$ according to (\ref{eq:epsilon_of_rho}). Compared to the previous choice, second-order repartitioning over-rotates eigenvalues with magnitude less than one, and under-rotates eigenvalues with magnitude larger than one. Therefore we require larger $\rho$ values to achieve damping effects similar to those of third-order repartitioning (see Figure \ref{fig:zds_partitioned_linear_operator}); in particular we select $\epsilon$ using $\rho = \frac{\pi}{256}$, $\frac{\pi}{64}$, $\frac{\pi}{16}$, and $\frac{\pi}{4}$.
\item {\bf Zeroth-order repartitioning}. The diffusive operators are
\begin{align}
D[u] &= -u &&\text{(Continuous -- physical space),}\\
\mathbf{D} &= -\mathbf{I} &&\text{(Discrete -- Fourier space).}
\end{align}
This choice translates every eigenvalue of the linear operator $\mathbf{L}$ by a fixed amount $\epsilon$ into the left half-plane. We consider $\epsilon = 1, 2, 4$, and $8$.
\end{itemize}
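The three discrete operators are one line each in Fourier space. The sketch below assumes integer wavenumbers on a $2\pi$-periodic grid and $\epsilon = \tan\rho$, and verifies the rotation angles discussed above: third-order repartitioning rotates every nonzero eigenvalue by exactly $\rho$, while second-order repartitioning rotates mode $k$ by $\arctan(\epsilon/|k|)$.

```python
import numpy as np

n, rho = 128, np.pi / 128
k = np.fft.fftfreq(n) * n          # integer wavenumbers on a 2*pi-periodic grid
L = 1j * k ** 3                    # ZDS linear operator, diagonal in Fourier space

eps = np.tan(rho)
D3 = -np.abs(k) ** 3               # third-order repartitioning
D2 = -k ** 2                       # second-order repartitioning (u_xx)
D0 = -np.ones(n)                   # zeroth-order repartitioning (-u)

Lhat = L + eps * D3
nz = k != 0
# every nonzero eigenvalue is rotated exactly rho radians off the imaginary axis
assert np.allclose(np.abs(np.angle(Lhat[nz])), np.pi / 2 + rho)
# second-order repartitioning instead rotates mode k by arctan(eps/|k|)
assert np.allclose(np.abs(np.angle((L + eps * D2)[nz])),
                   np.pi / 2 + np.arctan(eps / np.abs(k[nz])))
```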
In Figure \ref{fig:repartitioned-stability-a} we present convergence diagrams for ERK4, ESDC6, and EPBM5 using each of the three repartitioning strategies. Overall, repartitioning resolved the stability issues and enabled the use of exponential integrators for efficiently solving the ZDS equation. We summarize the results as follows.
\vspace{1em}
{\em \noindent Third-order repartitioning.}
Adding even a small amount of third-order repartitioning immediately improves the convergence properties of the exponential integrators. For $\rho = \pi/128$, all integrators achieve proper convergence across the full range of stable timesteps. Moreover, adding additional repartitioning does not damage the accuracy so long as the underlying method remains stable.
\vspace{1em}
{\em \noindent Second-order repartitioning.}
Second-order repartitioning is able to achieve nearly identical results to third-order repartitioning, provided that larger $\rho$-values are used.
Overall, the results are not surprising since the spectra of the corresponding linear operators shown in Figure \ref{fig:zds_partitioned_linear_operator} look very similar. The main disadvantage of second-order repartitioning is that $\rho$ needs to be tuned to ensure that the highest modes have been sufficiently rotated.
\vspace{1em}
{\em \noindent Zeroth-order repartitioning.} Zeroth-order repartitioning is extremely simple to implement; however, it is also the least effective at improving convergence and preserving accuracy. A small $\epsilon$ does not introduce enough damping, and the convergence curves are improved but not fully restored. On the other hand, large $\epsilon$ values stabilize stiff modes; however, since all the eigenvalues are shifted by an equal amount, the repartitioning damages the accuracy of non-stiff modes. This leads to convergence curves that are shifted to the left, since we have effectively worsened the error constant of the exponential integrator. Zeroth-order repartitioning also negatively impacted the sensitivity of the integrator to roundoff errors, and we were unable to obtain the solution with a relative error of less than approximately $10^{-8}$.
\begin{figure}[h!]
\centering
{\small \hspace{4em} $\mathbf{D} = -\text{diag}\left(|\mathbf{k}|^3\right)$}
\includegraphics[width=\linewidth,trim={20 0 52 0},clip]{figures/spectrum/spectrum-zds-rotate}
\vspace{1em}
{\small \hspace{4em} $\mathbf{D} = -\text{diag}\left(\mathbf{k}^2\right)$}
\includegraphics[width=\linewidth,trim={20 0 52 0},clip]{figures/spectrum/spectrum-zds-underrotate}
\vspace{1em}
{\small \hspace{4em} $\mathbf{D} = -\mathbf{I}$}
\includegraphics[width=\linewidth,trim={20 0 52 0},clip]{figures/spectrum/spectrum-zds-translate}
\caption{Spectrum of the repartitioned linear operator $\hat{\mathbf{L}} = \mathbf{L} + \epsilon \mathbf{D}$, for three choices of $\mathbf{D}$ and multiple $\epsilon$. In the first two plots $\epsilon$ is selected according to (\ref{eq:epsilon_of_rho}). The choices of $\rho$ for the second-order repartitioning were selected to achieve damping comparable to third-order repartitioning.}
\label{fig:zds_partitioned_linear_operator}
\end{figure}
\begin{figure}[h!]
\begin{center}
\setlength{\tabcolsep}{0.1em}
\begin{tabular}{llcccc}
& \hspace{.5em} & $\mathbf{D} = -\text{diag}\left(|\mathbf{k}|^3\right)$ & $\mathbf{D} = -\text{diag}\left(\mathbf{k}^2\right)$ & $\mathbf{D} = -\mathbf{I}$ \\
\rotatebox{90}{ \hspace{5.9em} ERK4 } & &
\includegraphics[align=b,width=0.30\linewidth]{figures/experiment/zds-128-ERK-rotation} &
\includegraphics[align=b,width=0.30\linewidth]{figures/experiment/zds-128-ERK-rotation-uxx} &
\includegraphics[align=b,width=0.30\linewidth]{figures/experiment/zds-128-ERK-translation} \\
\rotatebox{90}{ \hspace{5.6em} ESDC6 } & &
\includegraphics[align=b,width=0.30\linewidth]{figures/experiment/zds-128-ESDC-rotation} &
\includegraphics[align=b,width=0.30\linewidth]{figures/experiment/zds-128-ESDC-rotation-uxx} &
\includegraphics[align=b,width=0.30\linewidth]{figures/experiment/zds-128-ESDC-translation} \\
\rotatebox{90}{ \hspace{5.45em} EPBM5 } & &
\includegraphics[align=b,width=0.30\linewidth]{figures/experiment/zds-128-EPBM_PMFCmS-rotation} &
\includegraphics[align=b,width=0.30\linewidth]{figures/experiment/zds-128-EPBM_PMFCmS-rotation-uxx} &
\includegraphics[align=b,width=0.30\linewidth]{figures/experiment/zds-128-EPBM_PMFCmS-translation}
\end{tabular}
\begin{small}
\hspace{3.5em}
\begin{tabular}{c|c|c}
\begin{tabular}{ll}
\includegraphics[align=m,width=0.7cm,trim={560 190 115 14},clip]{figures/experiment/legend-rotation} IMRK4 ~~~~ &
\includegraphics[align=m,width=0.7cm,trim={100 190 580 14},clip]{figures/experiment/legend-rotation} $\rho = 0$ \\
\includegraphics[align=m,width=0.7cm,trim={180 190 505 14},clip]{figures/experiment/legend-rotation} $\rho = \frac{\pi}{2048}$ &
\includegraphics[align=m,width=0.7cm,trim={280 190 400 14},clip]{figures/experiment/legend-rotation} $\rho = \frac{\pi}{512}$ ~~ \\
\includegraphics[align=m,width=0.7cm,trim={375 190 305 14},clip]{figures/experiment/legend-rotation} $\rho = \frac{\pi}{128}$ ~~ &
\includegraphics[align=m,width=0.7cm,trim={475 190 205 14},clip]{figures/experiment/legend-rotation} $\rho = \frac{\pi}{32}$
\end{tabular} \hspace{0.75em}
&
\hspace{0.75em}
\begin{tabular}{ll}
\includegraphics[align=m,width=0.7cm,trim={560 190 115 14},clip]{figures/experiment/legend-rotation} IMRK4 ~~~~ &
\includegraphics[align=m,width=0.7cm,trim={100 190 580 14},clip]{figures/experiment/legend-rotation} $\rho = 0$ \\
\includegraphics[align=m,width=0.7cm,trim={180 190 505 14},clip]{figures/experiment/legend-rotation} $\rho = \frac{\pi}{512}$ &
\includegraphics[align=m,width=0.7cm,trim={280 190 400 14},clip]{figures/experiment/legend-rotation} $\rho = \frac{\pi}{128}$ ~~ \\
\includegraphics[align=m,width=0.7cm,trim={375 190 305 14},clip]{figures/experiment/legend-rotation} $\rho = \frac{\pi}{32}$ ~~ &
\includegraphics[align=m,width=0.7cm,trim={475 190 205 14},clip]{figures/experiment/legend-rotation} $\rho = \frac{\pi}{4}$
\end{tabular} \hspace{0.75em}
&
\hspace{0.75em}
\begin{tabular}{ll}
\includegraphics[align=m,width=0.7cm,trim={560 190 115 14},clip]{figures/experiment/legend-rotation} IMRK4 ~~~~ &
\includegraphics[align=m,width=0.7cm,trim={100 190 580 14},clip]{figures/experiment/legend-rotation} $\epsilon = 0$ \\
\includegraphics[align=m,width=0.7cm,trim={180 190 505 14},clip]{figures/experiment/legend-rotation} $\epsilon = 1$ &
\includegraphics[align=m,width=0.7cm,trim={280 190 400 14},clip]{figures/experiment/legend-rotation} $\epsilon = 2$ ~~ \\
\includegraphics[align=m,width=0.7cm,trim={375 190 305 14},clip]{figures/experiment/legend-rotation} $\epsilon = 4$ ~~ &
\includegraphics[align=m,width=0.7cm,trim={475 190 205 14},clip]{figures/experiment/legend-rotation} $\epsilon = 8$
\end{tabular} \hspace{1.2em}
\end{tabular}
\end{small}
\end{center}
\vspace{-1em}
\caption{
Convergence diagrams for the repartitioned ERK4, ESDC6, and EPBM5 integrators on the ZDS equation using third, second, and zeroth order repartitioning. For reference we also include the unpartitioned IMRK4 integrator in each plot. The columns correspond to different choices for the matrix $\mathbf{D}$, and each row to different integrators. Color denotes the value of $\rho$ from (\ref{eq:epsilon_of_rho}) for the first two columns and the value of $\epsilon$ for the third column.
}
\label{fig:repartitioned-stability-a}
\end{figure}
\clearpage
\subsection{Comparing repartitioning to hyperviscosity}
Time dependent equations with no diffusion are known to cause stability issues for numerical methods, and a commonly applied strategy is to add a hyperviscous term to the right-hand side of the equation (see for example \cite{jablonowski2011pros,Ullrich2018-fs}). To avoid destroying the convergence properties of an integrator of order $q$, the magnitude of this term is typically proportional to the integrator stepsize $\Delta t$ raised to the power $q+1$.
In the context of the continuous semilinear partial differential equation
\begin{align}
u_t = L[u] + N(t,u)
\label{eq:generic-semilinear-pde}
\end{align}
this is equivalent to considering a new equation with a vanishing diffusive operator $\tilde{D}[u]$ added to the right-hand-side so that
\begin{align}
u_t = L[u] + (\Delta t)^{q+1} \gamma \tilde{D}[u] + N(t,u),
\label{eq:hyperviscous-semilinear-pde}
\end{align}
where $\gamma$ is a constant that controls the strength of the diffusion. One then approximates the solution to (\ref{eq:generic-semilinear-pde}) by numerically integrating (\ref{eq:hyperviscous-semilinear-pde}). The improvement to stability comes from the fact that we have replaced the original discretized linear operator $\mathbf{L}$ with ${\tilde{\mathbf{L}} = \mathbf{L} + (\Delta t)^{q+1}\gamma \tilde{\mathbf{D}}}$.
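A minimal sketch of the modified operator for the ZDS problem makes the scaling concrete; it assumes eighth-order hyperviscosity with $q = 4$ and $\gamma = 10^{10}$ (the values used later in Figure \ref{fig:erk-hyperviscosity-convergence}), and checks that the vanishing viscous term barely perturbs slow modes while strongly damping the fastest ones.

```python
import numpy as np

def hyperviscous_L(k, dt, q=4, gamma=1e10, p=8):
    """Modified ZDS linear operator Ltilde = L + dt^(q+1)*gamma*Dtilde in
    Fourier space, with p-th order hyperviscosity Dtilde = -diag(|k|^p)."""
    return 1j * k ** 3 - dt ** (q + 1) * gamma * np.abs(k) ** p

k = np.fft.fftfreq(128) * 128
Lt = hyperviscous_L(k, dt=1e-3)
low, high = np.abs(k) == 1, np.abs(k) == 64
# dt^(q+1)*gamma*|k|^p is tiny relative to |L| = |k|^3 for the slowest modes...
assert np.all(np.abs(Lt[low].real) < 1e-4 * np.abs(Lt[low].imag))
# ...but dominates it for the fastest modes
assert np.all(np.abs(Lt[high].real) > np.abs(Lt[high].imag))
```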
Unlike repartitioning, we are no longer adding and subtracting the new operator. We must therefore ensure that $\tilde{D}[u]$ does not damage the accuracy of slow modes, as they typically contain the majority of useful information. For this reason, $\tilde{D}[u]$ is generally chosen to be a high-order even derivative, since these operators have a negligible effect on low frequencies while causing significant damping of high frequencies.
To compare the differences between repartitioning and hyperviscosity, we re-solve the ZDS equation using ERK4 with hyperviscosity of orders four, six, and eight. Since ERK4 is a fourth-order method, we take $q=4$. In Figure \ref{fig:erk-hyperviscosity-convergence} we show convergence diagrams for these experiments. We immediately see that hyperviscosity is only effective when a sufficiently high-order spatial derivative is used. In particular, fourth-order hyperviscosity fails to improve stability for small $\gamma$ and completely damages the accuracy of the integrator for larger $\gamma$. Sixth-order hyperviscosity offers a marginal improvement at coarse stepsizes, but also damages accuracy at fine stepsizes. Eighth-order hyperviscosity with $\gamma=10^{10}$ is the only choice that achieves results comparable to repartitioning.
In summary, repartitioning offers two key advantages. First, it does not require the use of high-order spatial derivatives, and second, it is less sensitive to overdamping. These advantages are both due to the fact that repartitioning does not modify the underlying problem, while hyperviscosity is only effective if the modified problem (\ref{eq:hyperviscous-semilinear-pde}) closely approximates the original problem (\ref{eq:generic-semilinear-pde}). We discuss both points in more detail below.
\vspace{1em}
{\em \noindent Sensitivity to overdamping.} When adding hyperviscosity, it is critical to select the smallest possible $\gamma$ that suppresses instabilities. Selecting larger $\gamma$ causes the solutions of (\ref{eq:generic-semilinear-pde}) and (\ref{eq:hyperviscous-semilinear-pde}) to grow unnecessarily far apart, and leads to a time integration scheme that converges more slowly to the solution of (\ref{eq:generic-semilinear-pde}). In a convergence diagram, excessive hyperviscosity does not reduce the order-of-accuracy of an integrator, but it will lead to a method with a larger error constant. This phenomenon appears prominently in Figure \ref{fig:erk-hyperviscosity-convergence}, where ERK4 methods with too much hyperviscosity consistently performed worse than all other methods at fine timesteps (e.g. see graphs for $\omega = 10^6$).
In contrast, second-order and third-order repartitioning are significantly more flexible since they allow for a greater amount of over-partitioning without any significant damage to the accuracy or stability. Excessively large $\epsilon$ values can still cause the stability region separation shown in Figure \ref{fig:exp-overpartitioning}, however such values are unlikely to be used in practice, since they lead to a partitioned linear operator with eigenvalues that have a larger negative real part than imaginary part. Zeroth-order repartitioning is most similar to hyperviscosity since large values of $\epsilon$ also damage the error constant of the method; however, the effects are significantly less pronounced.
\vspace{1em}
{\em \noindent Importance of high-order spatial derivatives.} When adding hyperviscosity we must ensure that the small eigenvalues of the modified linear operator $\tilde{\mathbf{L}}$ closely approximate those of $\mathbf{L}$, or we risk altering the dynamics of slow modes. Therefore, we require a small $\gamma$ for low-order hyperviscous terms. However, this creates a dilemma: choosing a small $\gamma$ may not eliminate the instabilities while choosing a large $\gamma$ damages accuracy. This is exactly why we were not able to efficiently stabilize the ZDS equation using fourth-order hyperviscosity.
In contrast, repartitioning does not require that the small eigenvalues of $\hat{\mathbf{L}}$ closely approximate those of $\mathbf{L}$, since the nonlinear term counteracts any changes. This is perhaps most easily explained by considering the Dahlquist equation (\ref{eq:dispersive-dahlquist}). If $|\lambda| = |\lambda_1 + \lambda_2|$ is small (i.e. the mode is slow), then an exponential integrator will integrate the system accurately so long as $|\lambda_1|$ and $|\lambda_2|$ are also small. Hence, we can freely redistribute the wavenumber between $\lambda_1$ and $\lambda_2$. This allows us to repartition using second-order diffusion without losing accuracy.
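This insensitivity is easy to demonstrate numerically. The sketch below uses exponential Euler (chosen only as the simplest exponential integrator, not a method from this paper): moving a real shift between $\lambda_1$ and $\lambda_2$ leaves the one-step error on a slow mode small in both cases.

```python
import numpy as np

def phi1(z):
    """phi_1(z) = (e^z - 1)/z."""
    return (np.exp(z) - 1) / z

def exp_euler_step(l1, l2, h, y0=1.0):
    """One step of exponential Euler applied to y' = l1*y + l2*y."""
    return np.exp(l1 * h) * y0 + h * phi1(l1 * h) * l2 * y0

h = 0.1
exact = np.exp(1.1j * h)                 # lam = l1 + l2 = 1.1i is held fixed

# original partition vs. one with a real shift of 0.3 moved between the terms
e1 = abs(exp_euler_step(1j, 0.1j, h) - exact)
e2 = abs(exp_euler_step(1j - 0.3, 0.1j + 0.3, h) - exact)
assert e1 < 1e-2 and e2 < 1e-2           # the slow mode stays well resolved
```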
\begin{figure}[h]
\begin{center}
\begin{footnotesize}
\setlength{\tabcolsep}{0.1em}
\begin{tabular}{ccc}
$\tilde{\mathbf{D}} = -\text{diag}(\mathbf{k}^4)$, $\gamma=10^4 \omega$ & $\tilde{\mathbf{D}} = - \text{diag}(\mathbf{k}^6)$, $\gamma=10^2\omega$ & $\tilde{\mathbf{D}} = -\text{diag}(\mathbf{k}^8)$, $\gamma = \omega$ \\
\includegraphics[align=b,width=0.33\linewidth]{figures/experiment/zds-128-ERK-hyperdiffusion-4} &
\includegraphics[align=b,width=0.33\linewidth]{figures/experiment/zds-128-ERK-hyperdiffusion-6} &
\includegraphics[align=b,width=0.33\linewidth]{figures/experiment/zds-128-ERK-hyperdiffusion-8} \\
\end{tabular}
\end{footnotesize}
\includegraphics[align=b,width=0.9\linewidth,trim={125 190 100 10},clip]{figures/experiment/legend-hyperdiffusion-8}
\end{center}
\vspace{-1em}
\caption{
Convergence plots for ERK4 with hyperviscosity added to the linear operator; for convenience we write $\gamma$ in terms of a new parameter $\omega$. Low-order hyperviscosity is unable to stabilize the integrator, while a properly selected eighth-order diffusion yields results that are nearly identical to repartitioning.
}
\label{fig:erk-hyperviscosity-convergence}
\end{figure}
\subsection{Long-time integration of Korteweg-de Vries}
\label{subsec:kdv-long-time}
The ZDS equation (\ref{eq:zds}) causes considerable difficulties for unmodified exponential integrators even on short timescales when the dynamics are simple. However, this does not imply that repartitioning is always necessary for solving dispersive equations. In fact, there are many reported results in the literature where exponential integrators converge successfully without requiring any modifications; several examples include \cite{KassamTrefethen05ETDRK4}, \cite{montanelli2016solving}, and \cite{grooms2011IMEXETDCOMP}.
As discussed in Section \ref{sec:linear_stability}, the instabilities of exponential integrators are very small in magnitude and can therefore go unnoticed for long periods of time. To explore this further, we present a final numerical experiment where we solve the Korteweg-de Vries (KDV) equation from \cite{zabusky1965interaction}
\begin{align}
\begin{aligned}
& \frac{\partial u}{\partial t} = -\left[ \delta \frac{\partial^3 u}{\partial x^3} + \frac{1}{2}\frac{\partial}{\partial x}(u^2) \right] \\ & u(x,t=0) = \cos(\pi x), \hspace{1em} x \in [0,2]
\end{aligned}
\label{eq:kdv}
\end{align}
where $\delta = 0.022$. The boundary conditions are periodic, and we discretize in space using a 512 point Fourier spectral method. As with the ZDS equation, we solve in Fourier space where the linear operator $\mathbf{L} = \text{diag}(i\delta \mathbf{k}^3)$.
This exact problem was used in both \cite{buvoli2019esdc} and \cite{buvoli2021epbm} to validate the convergence of ERK, ESDC, and EPBM methods. In the original experiments the equation was integrated to time $3.6 / \pi$. On these timescales ERK4, ESDC6, and EPBM5 all converge properly and show no signs of instability.
We now make the problem more difficult to solve by extending the integration interval to $t=160$. The longer time interval increases the complexity of the solution and allows for instabilities to fully manifest.
In Figure \ref{fig:kdv-long-time-experiment} we show how the relative error of both unpartitioned and partitioned ERK4, ESDC6, and EPBM5 methods evolves in time. To produce the plot, we run all integrators using 56000 timesteps and compare their outputs to a numerically computed reference solution at 30 equispaced times between $t=0$ and $t=160$.
On short timescales all unmodified exponential integrators are stable and no repartitioning is required. However, on longer timescales, repartitioning becomes necessary. Moreover, the maximum time horizon before instabilities dominate differs amongst the integrators. The unmodified EPBM5 method is the first integrator to become unstable, around $t=20$. The unmodified ERK4 method is more robust and remains stable until approximately time $t=55$, while the unmodified ESDC6 method remains stable across almost the entire time interval. Unlike the ZDS example, the time to instability is now correlated with the size of the method's stability region.
Adding zeroth-order, second-order, or third-order repartitioning stabilizes all the methods, and does not damage the accuracy in regions where the unmodified integrator converged. Furthermore, the accuracy differences between the three repartitioning strategies are effectively negligible. Lastly, the repartitioning parameters described in the legend of Figure \ref{fig:kdv-long-time-experiment} allow us to compute the solution at even longer times; we tested all methods out to time $t=1000$ and found that every partitioned method configuration remained stable.
\begin{figure}[h!]
\begin{center}
\hspace{3em} KDV Solution Plot
\vspace{1em}
\includegraphics[trim={40 0 55 20},clip,align=b,align=b,width=1\linewidth]{figures/snapshots/kdv-solution}
\vspace{1em}
\hspace{3em} Error vs. Time
\vspace{1em}
\includegraphics[trim={40 0 55 20},clip,align=b,align=b,width=1\linewidth]{figures/experiment-kdv/kdv-error-T160}
\renewcommand{\arraystretch}{2}
\begin{tabular}[t]{ll} \hline
Integrator: &
\begin{tabular}{ccc}
\includegraphics[trim={250 92 417 7},clip,align=b,height=1em]{figures/experiment-kdv/ltc-legend-integrator} ERK4 &
\includegraphics[trim={322 90 344 5},clip,align=b,height=1em]{figures/experiment-kdv/ltc-legend-integrator} ESDC6 &
\includegraphics[trim={398 90 270 5},clip,align=b,height=1em]{figures/experiment-kdv/ltc-legend-integrator} EPBM5
\end{tabular} \\[0em] \hline
Repartitioning: &
\begin{tabular}[t]{ll}
\includegraphics[trim={90 95 560 8},clip,align=b,align=b,height=1em]{figures/experiment-kdv/ltc-legend-repartition.pdf} ~ $\epsilon = 0$ &
\includegraphics[trim={165 95 487 8},clip,align=b,height=1em]{figures/experiment-kdv/ltc-legend-repartition.pdf} ~ $\mathbf{D} = -\text{diag}(|\mathbf{k}|^3)$, $\rho = \frac{\pi}{64}$ \\[0em]
\includegraphics[trim={335 95 317 8},clip,align=b,height=1em]{figures/experiment-kdv/ltc-legend-repartition.pdf} ~ $\mathbf{D} = -\text{diag}(\mathbf{k}^2)$, $\rho = \frac{\pi}{3}$ &
\includegraphics[trim={493 95 159 8},clip,align=b,height=1em]{figures/experiment-kdv/ltc-legend-repartition.pdf} ~ $\mathbf{D} = -\mathbf{I}$, $\epsilon = 16$
\end{tabular} \\[0.5em] \hline
\end{tabular}
\end{center}
\vspace{-1em}
\caption{Relative error versus time for the KDV equation (\ref{eq:kdv}) solved using ERK4, ESDC6, and EPBM5 with 56000 timesteps. Each integrator is shown using a different line marker. The blue curves denote unpartitioned integrators, while shades of gray denote various repartitionings. The differences in relative error between different repartitionings are very small, so the gray lines almost entirely overlap.}
\label{fig:kdv-long-time-experiment}
\end{figure}
TITLE: Axis of rotation of composition of rotations (Artin's Algebra)
QUESTION [4 upvotes]: Say $R_1$, $R_2$ are rotations in $\mathbb{R}^3$ with axes and angles $(v_1,\theta_1), (v_2,\theta_2)$ respectively. Since $SO_3$ is a group, we have that $R_2 \circ R_1$ is a rotation with some axis $v$. Is there a geometric way of finding $v$? This is problem 4.5.10 in Artin's Algebra.
My attempts have included looking at $v$ as a differentiable function of $\theta_1$ and $\theta_2$, writing out the corresponding matrix equations, staring at a wall, and guessing. Not sure where to go from here.
REPLY [4 votes]: A rotation around an axis is a composition of two symmetries (reflections) wrt planes which intersect at this line at half the angle. If your two rotations are represented as $R_i=S_{i1}\circ S_{i2}$ so that each plane $S_{12}$ and $S_{21}$ contains both axes, then $S_{12}=S_{21}$, and $R_1\circ R_2=S_{11}\circ S_{22}$.
REPLY [1 votes]: $\def\\#1{{\bf#1}}$We can find the axis of $R_2R_1$ by writing each of the rotations as a product of two reflections. We may assume without loss of generality that $\\v_1$ and $\\v_2$ are unit vectors, and are independent (if they are scalar multiples of each other then $\\v_1=\pm\\v_2$ and the problem is easy). Define the following vectors:
$$\eqalign{
\\w&=\frac{\\v_1\times\\v_2}{|\\v_1\times\\v_2|}\cr
\\x&=\\w\times\\v_1\cr
\\w_1&=\\w\cos(\theta_1/2)+\\x\sin(\theta_1/2)\cr}$$
(suggestion: draw them in order to visualise what we are doing; note that $\\w$, $\\x$ and $\\w_1$ are all unit vectors, which the reflection matrices below require) and let
$$S_1=(I-2\\w\\w^T)(I-2\\w_1\\w_1^T)\ .$$
Then $S_1$ is a product of two reflections in $\Bbb R^3$ and is therefore a rotation. In fact, we shall show that $S_1$ is the given rotation $R_1$. This takes a bit of algebra: it is helpful to note that we have a lot of perpendicular vectors here, and in particular
$$\\w\cdot\\v_1=\\x\cdot\\v_1=\\w_1\cdot\\v_1=0\ .$$
Also note that if $\\a,\\b,\\c$ are column vectors of the same size then
$$\\a\\b^T\\c=\\a(\\b\cdot\\c)=(\\b\cdot\\c)\\a\ .$$
Now we have
$$(I-2\\w_1\\w_1^T)\\v_1=\\v_1-2(\\w_1\cdot\\v_1)\\w_1=\\v_1$$
and so
$$S_1\\v_1=(I-2\\w\\w^T)\\v_1=\\v_1-2(\\w\cdot\\v_1)\\w=\\v_1\ ;$$
therefore the rotation $S_1$ has axis $\\v_1$. Moreover,
$$\eqalign{S_1\\x
&=(I-2\\w\\w^T)(I-2\\w_1\\w_1^T)\\x\cr
&=(I-2\\w\\w^T)(\\x-2\\w_1\sin(\theta_1/2))\cr
&=\\x-2\\w_1\sin(\theta_1/2)+4\\w\cos(\theta_1/2)\sin(\theta_1/2)\cr
&=\\x\cos\theta_1+\\w\sin\theta_1\ .\cr}$$
Since $\\v_1,\\x,\\w$, in that order, form a right-handed system of axes, this shows that the angle of rotation is $\theta_1$.
Now define also
$$\eqalign{
\\y&=\\w\times\\v_2\cr
\\w_2&=\\w\cos(\theta_2/2)-\\y\sin(\theta_2/2)\cr}$$
and let
$$S_2=(I-2\\w_2\\w_2^T)(I-2\\w\\w^T)\ .$$
The same sort of calculations as above show that $S_2=R_2$. Hence
$$\eqalign{R_2R_1=S_2S_1
&=(I-2\\w_2\\w_2^T)(I-2\\w\\w^T)(I-2\\w\\w^T)(I-2\\w_1\\w_1^T)\cr
&=(I-2\\w_2\\w_2^T)(I-2\\w_1\\w_1^T)\ ;\cr}$$
the fact that the middle terms cancel is because they give the product of a reflection with itself; alternatively we can do the algebra to show that the product is the identity.
All this gives $R_2R_1$ as the product of the reflections in the planes with normal vectors $\\w_1$ and $\\w_2$; the composition of two such reflections is a rotation about the planes' line of intersection, whose direction is $\\w_1\times\\w_2$.
Thus, the required axis is $\\w_1\times\\w_2$, where $\\w_1$ and $\\w_2$ are given by the above equations.
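The whole construction can be verified numerically. The sketch below (hypothetical helper names; $\\w$ is normalised so each $I-2\\n\\n^T$ is a genuine reflection) checks both that $R_2R_1$ equals the product of the two final reflections and that $\\w_1\times\\w_2$ is fixed by the composite rotation:

```python
import numpy as np

# Numerical check of the reflection construction above (hypothetical names).
def rotation(axis, theta):
    """Rodrigues rotation matrix about a unit axis."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reflection(n):
    """Reflection in the plane through the origin with unit normal n."""
    n = n / np.linalg.norm(n)
    return np.eye(3) - 2 * np.outer(n, n)

rng = np.random.default_rng(0)
v1 = rng.normal(size=3); v1 /= np.linalg.norm(v1)
v2 = rng.normal(size=3); v2 /= np.linalg.norm(v2)
t1, t2 = 0.7, 1.3

w = np.cross(v1, v2); w /= np.linalg.norm(w)   # normalised common-plane normal
x = np.cross(w, v1)
y = np.cross(w, v2)
w1 = w * np.cos(t1 / 2) + x * np.sin(t1 / 2)
w2 = w * np.cos(t2 / 2) - y * np.sin(t2 / 2)

R1 = rotation(v1, t1)
R2 = rotation(v2, t2)
# The shared middle reflections cancel: R2 R1 = S(w2) S(w1).
assert np.allclose(R2 @ R1, reflection(w2) @ reflection(w1))
# The composite rotation fixes the planes' intersection line, spanned by w1 x w2.
axis = np.cross(w1, w2)
assert np.allclose(R2 @ R1 @ axis, axis)
```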
TITLE: Equivalent norms on $C[0,1]$
QUESTION [1 upvotes]: For each $f\in C[0,1]$ set $$\|f\|_1 = \left(\int_0^1 |f(x)|^2 dx\right)^{1/2},\quad\quad \|f\|_2 = \left(\int_0^1 (1+x)|f(x)|^2 dx\right)^{1/2}$$
Then prove that $\|\cdot\|_1$ and $\|\cdot\|_2$ are equivalent norms on $C[0,1]$.
So we want to show that for positive real numbers $a,b$ that $$a\|f\|_2 \leq \|f\|_1 \leq b\|f\|_2$$
Since $\|f\|_2^2 = \int_0^1 |f(x)|^2 dx + \int_0^1 x|f(x)|^2 dx= \|f\|_1^2+ \int_0^1 x|f(x)|^2 dx$ and because we know that $\int_0^1 x|f(x)|^2 dx\geq 0$ we have that:
$$\|f\|_1 \leq 1\times \|f\|_2$$
Now we want to find some $a\gt 0$ such that:
$$a\cdot\|f\|_2\leq \|f\|_1 $$
This I am not sure how to do.
REPLY [2 votes]: In this case $1+x \leq 2$, so $\| f \|_2 \leq \sqrt{2}\, \| f \|_1$; hence $a = 1/\sqrt{2}$ works.
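The two-sided bound ($a = 1/\sqrt{2}$, $b = 1$) can also be sanity-checked numerically; this is a sketch with hypothetical helper names, not part of the proof:

```python
import numpy as np

# Numerical sanity check of ||f||_1 <= ||f||_2 <= sqrt(2) ||f||_1 for a few
# sample continuous functions, using a hand-rolled trapezoidal rule on [0, 1].
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]

def weighted_l2(f, weight):
    y = weight * np.abs(f(x)) ** 2
    return np.sqrt(np.sum((y[:-1] + y[1:]) / 2) * dx)

def norm1(f):
    return weighted_l2(f, np.ones_like(x))

def norm2(f):
    return weighted_l2(f, 1 + x)          # weight 1 <= 1 + x <= 2 on [0, 1]

for f in (np.sin, np.exp, lambda t: t**3 - t):
    n1, n2 = norm1(f), norm2(f)
    assert n1 <= n2 <= np.sqrt(2) * n1 + 1e-12
```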
TITLE: Subspaces of Quotient Spaces
QUESTION [1 upvotes]: Let $X$ be a topological vector space (not necessarily Hausdorff), with topology $\tau$, and $M, N$ linear subspaces of $X$. Let $\pi:X \rightarrow X/N$ be the quotient map, which associates to each $x \in X$ the coset $x + N \in X/N$, and consider the linear subspace $\pi(M)$ of $X/N$.
Define the map $\Phi: M /(M \cap N) \rightarrow \pi(M)$ by associating to each coset $x+ (M \cap N)$, with $x \in M$, the coset $x+N$. It is immediate to see that $\Phi$ is well defined (that is, $\Phi(x+(M \cap N))$ does not depend on the particular $x$ chosen in the coset $x+ (M \cap N)$) and that it is an isomorphism of vector spaces.
Moreover, in Note (1) below, it is shown that $\Phi$ is continuous. Anyway, in general $\Phi$ is not a homeomorphism: in Note (3) below a counterexample is given in which $X$ is Hausdorff and $M$ is closed.
Can you find a counterexample in which $X$ is Hausdorff and both $M$ and $N$ are closed?
Thank you very much in advance for your attention.
NOTE (1) Suppose that $V$ is open in $\pi(M)$, that is $V=E \cap \pi(M)$, with $\pi^{-1}(E)=W \in \tau$. Then $\Phi^{-1}(V)= \{ x + (M \cap N) \in M/(M \cap N): x + N \in V \}$. So, if $\rho : M \rightarrow M/(M \cap N)$ is the quotient map (which associates to $x \in M$ the coset $x + (M \cap N)$), then we have
\begin{equation}
\rho^{-1}(\Phi^{-1}(V))= \{ x \in M : x+N \in V \}= M \cap W.
\end{equation}
So $\rho^{-1}(\Phi^{-1}(V))$ is an open set of $M$, and we conclude by definition of quotient topology that $\Phi^{-1}(V)$ is an open set of $M/(M\cap N)$.
NOTE (2) If $N \subset M$, then $\Phi$ is open, so it is a homeomorphism. Indeed in this case $M \cap N = N$. Let $F \subset M/N$, with $\rho^{-1}(F)=M \cap W$ and $W \in \tau$. Put $E= \pi(W)=\pi(W+N)$. We have $\pi^{-1}(E)= W + N \in \tau$, so $E$ is an open set of $X/N$.
Note that we have $M \cap W =M \cap (W + N)$. Indeed if $x \in M \cap W$, then $x+N \in F$, so that $x + n \in \rho^{-1}(F)=M \cap W$ for all $n \in N$. So
\begin{equation}
\Phi(F)= F = \{x + N \in M/N: x \in M \cap (W+N) \}.
\end{equation}
If $x+N \in \Phi(F)$, then $x \in M \cap (W + N)$, so that $x+N \in E \cap \pi(M)$. Conversely, if $x+ N \in E \cap \pi(M)$, then for some $m \in M$ and $w \in W$, we have $x+N=m+N=w+N$, so that $m \in M \cap (W+N)$, and $x+N=m+N \in \Phi(F)$. We conclude that $\Phi(F)=E \cap \pi(M)$, which is an open set of $\pi(M)$.
NOTE (3) Let $X$ be a Hausdorff TVS, and $M$ a closed linear subspace which is not complemented (for the definition of "complemented subspace" see Rudin, Functional Analysis, Second Edition, Section 4.20, while for example of uncomplemented subspaces see Rudin, cit., pp. 132-138). Let $\{ v_{\alpha} \}$ be a Hamel basis of $M$, and let $\{ w_{\beta} \}$ a subset of $X$ such that $\{ v_{\alpha} \} \cup \{ w_{\beta} \}$ is a Hamel basis of $X$. Let $N$ be the linear subspace spanned by the $\{ w_{\beta} \}$. Then we have $M + N =X$ and $M \cap N = \{0 \}$. Since $M$ is uncomplemented, $N$ cannot be closed. We have $\pi(M)=X/N$ and $M/(M \cap N)= M/\{0 \}$.
Now, let us remember that if $T$ is a TVS and $S$ a linear subspace of $T$, then the quotient space $T/S$ is Hausdorff if and only if $S$ is closed (see e.g. Horvath, Topological Vector Spaces and Distributions, Proposition (5) at p. 105 or Treves, Topological Vector Spaces, Distributions and Kernels, Proposition (4.5) at p. 34). So $X/N$ is not Hausdorff, while $M/ \{0 \}$ is Hausdorff (since $X$ is Hausdorff, also $M$ is Hausdorff, so $\{0 \}$ is a closed subset of $M$). We conclude that $X/N$ and $M/ \{0 \}$ cannot be homeomorphic. So $\Phi$ cannot be a homeomorphism.
REPLY [1 votes]: I develop here the suggestion given by Bill Johnson in the comment above.
Take $X=\ell^2$, and let $M$ be the subspace of all complex sequences $x=(x_0, x_1,x_2, \dots) \in \ell^2$ such that $x_{2n}=0$ for all non-negative integers $n$, and let $N$ be the subspace of all complex sequences $x=(x_0, x_1,x_2, \dots) \in \ell^2$ such that $x_{2n+1}=nx_{2n}$ for all non-negative integers $n$ (this example is taken from Robert Israel's answer to the post The Direct Sum).
We prove that $M$ and $N$ are quasi-complements, but not complements. Indeed, clearly $M$ and $N$ are closed linear subspaces, $M \cap N= \{ 0 \}$ and $M+N$ is dense (since $M+N$ contains all sequences which are definitively zero). But $M + N \neq \ell^2$. Indeed $M+N$ does not contain the sequence $x_n=1/(n+1)$: if it were so, then $x=u+v$, with $u \in M$ and $v \in N$, then $v_{2n}=1/(2n+1)$ and $v_{2n+1}=n/(2n+1)$. But then $v \notin \ell^2$.
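The divergence claim at the end can be illustrated numerically (a sketch, not part of the argument): the squared odd-indexed entries $n^2/(2n+1)^2$ tend to $1/4$, so the partial sums of $\sum_j |v_j|^2$ grow linearly and $v \notin \ell^2$:

```python
# Numerical illustration: the sequence v with v_{2n} = 1/(2n+1) and
# v_{2n+1} = n/(2n+1) is not square-summable, because the odd-indexed
# terms squared tend to 1/4.  Partial sums of |v_j|^2 grow linearly.
def partial_sum(N):
    s = 0.0
    for n in range(N):
        s += (1.0 / (2 * n + 1)) ** 2      # even-indexed entries (summable)
        s += (n / (2 * n + 1)) ** 2        # odd-indexed entries (terms -> 1/4)
    return s

# Growth per index pair approaches 1/4, i.e. the series diverges.
growth = (partial_sum(2000) - partial_sum(1000)) / 1000
assert abs(growth - 0.25) < 0.01
```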
Now, let us consider our original question. We shall prove that, with this choice of $X, M$ and $N$, the map $\Phi$ is not a homeomorphism.
Indeed, since $M$ is a closed subspace of $X$, then $M/(M \cap N)= M/\{0 \}=M$ is a sequentially complete TVS. On the other hand, $X/N$ is a Banach space (see e.g. Rudin, Functional Analysis, Second Edition, Theorem (1.41)). Now, if $\Phi$ were a homeomorphism, then $\pi(M)$ and $M$ would isomorphic as topological vector spaces. So $\pi(M)$ would be sequentially complete. But then $\pi(M)$ should be a closed subspace of the Banach space $X/N$. Since $\Phi$ is continuous, we would get that $\pi^{-1}(\pi(M))=M+N$ is closed, which is not the case.
QED
\section{Continued Fraction Expansion of Irrational Number Converges to Number Itself}
Tags: Continued Fractions
\begin{theorem}
Let $x$ be an [[Definition:Irrational Number|irrational number]].
Then the [[Definition:Continued Fraction Expansion of Irrational Number|continued fraction expansion]] of $x$ [[Definition:Convergent Continued Fraction|converges]] to $x$.
\end{theorem}
\begin{proof}
Let $(a_0, a_1, \ldots)$ be its [[Definition:Continued Fraction Expansion of Irrational Number|continued fraction expansion]].
Let $(p_n)_{n\geq 0}$ and $(q_n)_{n\geq 0}$ be its [[Definition:Numerators and Denominators of Continued Fraction|numerators and denominators]].
Then $C_n = p_n/q_n$ is the $n$th [[Definition:Convergent of Continued Fraction|convergent]].
By [[Accuracy of Convergents of Continued Fraction Expansion of Irrational Number]], for $n \ge 2$:
:$\left|{x - \dfrac {p_n} {q_n}}\right| < \dfrac 1 {q_n q_{n + 1} }$
By [[Lower Bounds for Denominators of Simple Continued Fraction]]:
:$q_nq_{n+1} \geq n$ for $n \geq 5$.
So from [[Definition:Basic Null Sequence|Basic Null Sequences]] and the [[Squeeze Theorem]]:
: $\dfrac 1 {q_n q_{n+1} } \to 0$
as $n \to \infty$.
Thus $C_n = p_n/q_n$ [[Definition:Convergent Real Sequence|converges]] to $x$.
That is, $(a_0, a_1, \ldots)$ [[Definition:Convergent Continued Fraction|converges]] to $x$.
{{qed}}
\end{proof}
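As a numerical illustration of the theorem (not part of the entry above), the convergents produced by the standard recurrences $p_n = a_n p_{n-1} + p_{n-2}$ and $q_n = a_n q_{n-1} + q_{n-2}$ do approach the expanded number; here $x = \sqrt 2$, whose continued fraction expansion is $(1, 2, 2, 2, \ldots)$:

```python
from fractions import Fraction
import math

# Convergents p_n/q_n from the standard recurrence, using exact rationals.
def convergents(partial_quotients):
    p_prev, p = 1, partial_quotients[0]
    q_prev, q = 0, 1
    yield Fraction(p, q)
    for a in partial_quotients[1:]:
        p, p_prev = a * p + p_prev, p      # p_n = a_n p_{n-1} + p_{n-2}
        q, q_prev = a * q + q_prev, q      # q_n = a_n q_{n-1} + q_{n-2}
        yield Fraction(p, q)

x = math.sqrt(2)
cs = list(convergents([1] + [2] * 15))     # 1, 3/2, 7/5, 17/12, ...
errors = [abs(float(c) - x) for c in cs]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))  # steadily improving
assert errors[-1] < 1e-9                                   # converging to x
```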
TITLE: Is there any alternative/simplified formula on deducting multiple percentage of a number from itself?
QUESTION [0 upvotes]: EDIT: Sorry for the unclear description of what I really wanted. I have completely rewritten my question to make it clearer..
On a specific software I am creating, I want to know if there is an alternative formula on finding x here:
d = a / 100
e = b / 100
f = c / 100
g = z - (z * d)
h = g - (g * e)
x = h - (h * f)
The values of a, b, c and z are user inputted. For example, if a=5, b=4, c=3 and z=100:
d = 0.05
e = 0.04
f = 0.03
g = 95
h = 91.2
x = 88.464
Can you guys suggest of a simpler/alternative formula to find x?
EDIT: The purpose of this formula is to apply a certain discount to a product price. For example, if product's price is \$100, and 5/4/3% discount is applied, the new product price should be \$88.464
REPLY [2 votes]: User enters $N $ and $x/y/z $.
Return $N *(1-\frac x {100})*(1-\frac y{100})*(1-\frac z {100}) $
This is assuming you actually wanted $[1 - x - y (1-x)-z (1-y (1-x))]100 $ and not $1-x -xy-xyz $ (which yields 94.794 instead of 88.464).
If you did want the other
Return $N (1-\frac x {100}-\frac {xy} {100^2}-\frac {xyz}{100^3}) $.
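The compound-discount formula in the accepted answer can be written as a short function (hypothetical helper name), which reproduces the asker's worked example:

```python
# Apply each percentage discount in sequence by multiplying by (1 - p/100).
# This is the product formula N * (1 - x/100) * (1 - y/100) * (1 - z/100).
def apply_discounts(price, percents):
    for p in percents:
        price *= 1 - p / 100
    return price

# OP's example: 5/4/3% off $100 -> $88.464
assert abs(apply_discounts(100, [5, 4, 3]) - 88.464) < 1e-9
```

The loop also handles any number of successive discounts, not just three.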
REPLY [1 votes]: By your example:
Suppose we want to find the $a/b/c$ percent of a number $n $, then the new number, $n_1$ is given by, $$n_1 =n -\frac {an}{100} - \frac {b}{100}[\frac {an}{100}] - \frac {c}{100}[\frac {b}{100}[\frac {an}{100}]] $$ $$\Rightarrow n_1 =n [1- \frac {a}{10^2} - \frac {ab}{10^4} - \frac {abc}{10^6}] $$
EDIT:
Just backtracking the expressions gives us the new number as: $$x =z [1-d-e-f+de+ef+df-efd] $$
Hope it helps.
REPLY [1 votes]: Your solution is wrong. In second step
5% of 100 = 5. Not 95.
And try this -
Take 5% of 100 common from last three terms,
100 - 5% of 100 [ 1 + 4% + 3% × 4%]
= 100 - 5 [ 1 + 0.04 + 0.0012]
= 100 - 5 [ 1.0412]
= 100 - 5.206 = 94.794
TITLE: How plasma ball works?
QUESTION [2 upvotes]: I am always confused by the concept of earthing in electricity. I have read about it and done a few experiments, and I understand that electricity needs a closed circuit. I can understand that in the context of a battery.
But what always confuses me is earthing. With a battery we can see that the electrons flow from the negative terminal to the positive terminal to close the circuit.
From that analogy I am unable to understand how a plasma ball works. When I read about it, the explanation given is that the electrons will flow through your body to the earth when you touch it. How does that help to complete the circuit?
I am also unable to understand the earthing in household electricity. What I have read and probably misunderstood is that the Earth is a big capacitor which can store charges. But even then how does it help to complete the circuit in the above two examples like plasma ball or household earthing?
REPLY [1 votes]: I'm assuming that "plasma ball" means these things
"the electrons will flow through your body to the earth when you touch it"
That works because the plasma ball is itself connected to earth (via the power cord), so there is a closed circuit when you touch the ball. A current can only flow if there is a continuous circuit from one power terminal to the other.
In general earthing is used in mains power circuits, to give the current a return path without going through your body, as that could be fatal. With AC equipment, earthing is also used to keep the equipment at a constant voltage level, in order to reduce induced 50Hz hum. If the equipment is battery-powered, earthing is not normally required as there are no high voltages with respect to ground. Even if the equipment generates a high voltage internally, that is not dangerous, unless the current can flow through you.
If proper precautions are used, it is quite safe to touch equipment at a potential of thousands of volts above ground. Just make sure your body cannot become part of a closed circuit, by insulating yourself from ground. Alternatively, make certain that the current through your body will be very small - it's the current that kills, not the voltage.
\begin{document}
\title{A note on higher regularity boundary Harnack inequality}
\author{D. De Silva}
\address{Department of Mathematics, Barnard College, Columbia University, New York, NY 10027}
\email{\tt desilva@math.columbia.edu}
\author{O. Savin}
\address{Department of Mathematics, Columbia University, New York, NY 10027}\email{\tt savin@math.columbia.edu}
\thanks{ D.~D.~ and O.~ S.~ are supported by the ERC starting grant project 2011 EPSILON. D.~D. is supported by NSF grant DMS-1301535. O.~S.~ is supported by NSF grant DMS-1200701.}
\begin{abstract}
We show that the quotient of two positive harmonic functions vanishing on the boundary of a $C^{k,\alpha}$ domain is of class $C^{k,\alpha}$ up to the boundary.
\end{abstract}
\maketitle
\section{Introduction}
In this note we obtain a higher order boundary Harnack inequality for harmonic functions, and more generally, for solutions to linear elliptic equations.
Let $\Omega$ be a $C^{k,\alpha}$ domain in $\R^n$, $k \ge 1$. Assume for simplicity that
$$\Omega := \{(x', x_n) \in \R^n \ | \ x_n > g(x')\}$$ with $$g: \R^{n-1} \to \R, \quad g \in C^{k,\alpha}, \quad \|g\|_{C^{k,\alpha}} \leq 1, \quad g(0)=0.$$
Our main result is the following.
\begin{thm}\label{main} Let $u>0$ and $v$ be two harmonic functions in $\Omega \cap B_1$ that vanish continuously on $\p \Omega \cap B_1$. Assume $u$ is normalized so that $u\left( e_n/2\right)=1,$ then
\begin{equation}\label{BHI}\left\|\frac v u\right \|_{C^{k,\alpha}(\Omega \cap B_{1/2})} \leq C \|v\|_{L^\infty},\end{equation} with $C$ depending on $n,k,\alpha.$
\end{thm}
We remark that if $v>0$, then the right hand side of \eqref{BHI} can be replaced by $\dfrac v u \left(e_n/2\right)$ as in the classic boundary Harnack inequality.
For a more general statement for solutions to linear elliptic equations, we refer the reader to Section 3.
The classical Schauder estimates imply that $u, v$ are of class $C^{k,\alpha}$ up to the boundary. Using that on $\p \Omega$ we have $u=v=0$ and $u_\nu>0$, one can easily conclude that $v / u$ is of class $C^{k-1,\alpha}$ up to the boundary.
Theorem \ref{main} states that the quotient of two harmonic functions is in fact one derivative better than the quotient of two arbitrary $C^{k,\alpha}$ functions that vanish on the boundary. To the best of our knowledge the result of Theorem \ref{main} is not known in the literature for $k \ge 1$. The case when $k=0$ is well known as boundary Harnack inequality: the quotient of two positive harmonic functions as above must be $C^\alpha$ up to the boundary if $\p \Omega$ is Lipschitz, or the graph of a H\"older function, see \cite{HW, CFMS, JK, F}.
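As a toy illustration of the flat case (not part of the paper's argument), take $\Omega=\{x_2>0\}\subset\R^2$, $u=x_2$ and $v=\mathrm{Im}\,(x_1+ix_2)^2=2x_1x_2$: both are harmonic and vanish on $\p\Omega$, and the quotient $v/u=2x_1$ extends smoothly up to the boundary. A minimal numerical check:

```python
import numpy as np

# Toy check of the flat-boundary case: u = y and v = 2*x*y are harmonic in the
# upper half-plane and vanish on {y = 0}; their quotient equals 2*x, hence it
# stays bounded and extends smoothly up to the boundary.
xs = np.linspace(-0.5, 0.5, 101)
ys = np.linspace(1e-8, 0.5, 100)   # grid approaching the boundary from inside
X, Y = np.meshgrid(xs, ys)
u = Y
v = 2 * X * Y
assert np.allclose(v / u, 2 * X)   # the quotient is the linear function 2*x
```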
A direct application of Theorem \ref{main} gives smoothness of $C^{1,\alpha}$ free boundaries in the classical obstacle problem without making use of a hodograph transformation, see \cite{KNS, C}.
\begin{cor} Let $\p \Omega \in C^{1,\alpha}$ and let $u$ solve
$$\Delta u= 1\quad \text{in $\Omega$}, \quad u=0, \ \nabla u=0 \quad \text{on $\p \Omega \cap B_1.$}$$ Assume that $u$ is increasing in the $e_n$ direction. Then $\p \Omega \in C^\infty.$
\end{cor}
The corollary follows by repeatedly applying Theorem \ref{main} to the quotient $u_i/u_n$.
Our motivation for the results of this paper comes from the question of higher regularity in thin free boundary problems, which we recently began investigating in \cite{DS}.
The idea of the proof of Theorem \ref{main} is the following. Let $v$ be a harmonic function vanishing on $\p \Omega$. The pointwise $C^{k+1,\alpha}$ estimate at $0 \in \p \Omega$ is achieved by approximating $v$ with polynomials of the type $x_n P$ with $\deg P=k$. It turns out that we may use the same approximation if we replace $x_n$ by a given positive harmonic function $u \in C^{k,\alpha}$ that vanishes on $\p \Omega$. Moreover, the regularity of $\p \Omega$ does not play a role since the approximating functions $u \, P$ already vanish on $\p \Omega$.
In order to fix ideas we treat the case $k=1$ separately in Section 2, and we deal with the general case in Section 3.
\section{The case $k=1$ -- $C^{1,\alpha}$ estimates.}
In this section, we provide the proof of our main Theorem \ref{main} in the case $k=1$. We also extend the result to more general elliptic operators.
Let $\Omega \subset \R^n$ with $\p \Omega \in C^{1,\alpha}$. Precisely,
$$\p \Omega = \{(x', g(x')) \ | \ x' \in \R^{n-1}\}, \quad g(0)=0, \quad \nabla_{x'} g(0)=0, \quad \|g\|_{C^{1,\alpha}} \leq 1.$$ Let $u$ be a positive harmonic function in $\Omega \cap B_1$, vanishing continuously on $\p \Omega \cap B_1$. Normalize $u$ so that $u(e_n/2)=1$. Throughout this section, we refer to positive constants depending only on $n,\alpha$ as universal.
\begin{thm}\label{k1} Let v be a harmonic function in $\Omega \cap B_1$ vanishing continuously on $\p \Omega \cap B_1.$ Then,
$$\left\|\frac v u \right\|_{C^{1,\alpha}(\Omega \cap B_{1/2})} \leq C \|v\|_{L^\infty(\Omega \cap B_1)}$$ with $C$ universal.
\end{thm}
First we remark that from the classical Schauder estimates and Hopf lemma, $u$ satisfies
\begin{equation}\label{prop_u_1} u \in C^{1,\alpha}, \quad \|u\|_{C^{1,\alpha}(\Omega \cap B_{1/2})} \leq C, \quad u_{\nu} > c >0 \quad \text{on $\p \Omega \cap B_{1/2}$.}
\end{equation}
Thus, after a dilation and multiplication by a constant we may assume that
\begin{equation}\label{dil}
\|g\|_{C^{1,\alpha}(B_1)} \leq \delta, \quad \nabla u(0)=e_n, \quad [\nabla u]_{C^\alpha} \leq \delta,
\end{equation}
where the constant $\delta$ will be specified later.
We claim that Theorem \ref{k1} will follow, if we show that there exists a linear function \begin{equation}\label{P}P(x) = a_0 + \sum_{i=1}^n a_i x_i, \quad a_n=0\end{equation} such that
\begin{equation}\label{est1} \left|\frac v u(x) - P(x)\right| \leq C |x|^{1+\alpha}, \quad x \in \Omega \cap B_1\end{equation} for $C$ universal.
To obtain \eqref{est1}, we prove the next lemma.
\begin{lem} \label{imp}Assume that, for some $r \leq 1$ and $P$ as in \eqref{P} with $|a_i| \leq 1,$
$$\|v - uP\|_{L^\infty(\Omega \cap B_r)} \leq r^{2+\alpha}.$$ Then, there exists a linear function $$\bar P(x) = \bar a_0 + \sum_{i=1}^n \bar a_i x_i, \quad \bar a_n=0$$ such that
$$\|v - u \bar P\|_{L^\infty(\Omega \cap B_{\rho r})} \leq (\rho r)^{2+\alpha},$$ for some $\rho >0$ universal, and $$\|P-\bar P\|_{L^\infty(B_r)} \leq C r^{1+\alpha}, $$ with $C$ universal.
\end{lem}
\begin{proof} We write
$$v(x) = u(x)P(x) + r^{2+\alpha} \tilde v\left(\frac x r \right), \quad x \in \Omega \cap B_r,$$ with
$$\|\tilde v\|_{L^\infty(\tilde \Omega \cap B_1)} \leq 1, \quad \tilde \Omega := \frac 1 r \Omega.$$
Define also,
$$\tilde u(x) := \frac{u(rx)}{r}, \quad x\in \tilde \Omega \cap B_1.$$
We have,
$$0= \Delta v = \Delta (uP) + r^\alpha \Delta \tilde v\left(\frac x r \right), \quad x\in \Omega \cap B_r,$$
and
$$\Delta (uP) = 2 \nabla u \cdot \nabla P = 2 \sum_{i=1}^{n-1} a_i u_i, \quad x \in \Omega \cap B_r.$$
Moreover, from \eqref{dil} we have
$$\|\nabla u - e_n\|_{L^\infty (\Omega \cap B_r)} \leq \delta r^\alpha.$$
Thus, $\tilde v$ solves
\begin{equation}\label{tildev}|\Delta \tilde v | \leq 2\delta \quad \text{in $\tilde \Omega \cap B_1$}, \quad \tilde v = 0 \quad \text{on $\p \tilde \Omega \cap B_1,$}\end{equation}
and $$\|\tilde v\|_{L^\infty(\tilde \Omega \cap B_1)} \leq 1.$$
Hence, as $\delta \to 0$ (using also \eqref{dil}) $\tilde v$ must converge (up to a subsequence) uniformly to a solution $v_0$ of
$$\Delta v_0 = 0 \quad \text{in $B_1^+$}, \quad v_0 = 0 \quad \text{on $\{x_n=0\} \cap \bar B_1^+$}$$
and
$$|v_0| \leq 1 \quad \text{in $B_1^+$}. $$
Such a $v_0$ satisfies,
$$\|v_0 - x_n Q\|_{L^\infty(B_\rho^+)} \leq C \rho^3 \leq \frac 1 4 \rho^{2+\alpha},$$
for some $\rho$ universal and $Q = b_0 + \sum_{i=1}^n b_i x_i, |b_i| \leq C.$ Notice that $b_n=0$ since $x_n Q$ is harmonic.
By compactness, if $\delta$ is chosen sufficiently small, then
$$\|\tilde v - x_n Q\|_{L^\infty(\tilde \Omega \cap B_\rho)} \leq \frac 1 2 \rho^{2+\alpha}.$$
From \eqref{dil}, $$|\tilde u - x_n| \leq \delta$$ thus
$$\|\tilde v - \tilde u Q\|_{L^\infty(\tilde \Omega \cap B_\rho)} \leq \rho^{2+\alpha}$$
from which the desired conclusion follows by choosing
$$\bar P(x) = P(x) + r^{1+\alpha} Q\left(\frac x r \right).$$
\end{proof}
\begin{rem}\label{betterest} Notice that, from boundary Harnack inequality, $\tilde v$ satisfies (see \eqref{tildev} and recall that $u(\frac 1 2 e_n) =1$)
$$|\tilde v | \leq C \tilde u \quad \text{in $\tilde \Omega \cap B_{1/2}$},$$ with $C$ universal. Thus our assumption can be improved in $B_{r/2}$ to
$$|v(x) - uP(x)| \leq C u(x)r^{1+\alpha} \quad \text{in $\Omega \cap B_{r/2}$.}$$
Moreover,
$$\left[\frac{\tilde v}{\tilde u}\right]_{C^{1,\alpha}(\tilde \Omega \cap B_{1/4}(\frac 12 e_n))} \leq C$$ since $\tilde u$ is bounded below in such region. This, together with the identity
$$\frac{v}{u} = P + r^{1+\alpha} \frac{\tilde v}{\tilde u}\left(\frac x r\right) \quad x \in \Omega \cap B_r$$
implies
\begin{equation}\label{**} \left[\nabla \left(\frac{v}{u}\right)\right]_{C^{\alpha}(\Omega \cap B_{r/4}(\frac r 2 e_n))}=\left[\frac{\tilde v}{\tilde u}\right]_{C^{1,\alpha}(\tilde \Omega \cap B_{1/4}(\frac 12 e_n))} \leq C.\end{equation}
\end{rem}
\
\textit{Proof of Theorem $\ref{k1}.$} After multiplying $v$ by a small constant, the assumptions of the lemma are satisfied with $P=0$ and $r=r_0$ small. Thus, if we choose $r_0$ small universal, we can apply the lemma indefinitely and obtain a limiting linear function $P_0$ such that $$|v - u P_0| \leq Cr^{2+\alpha}, \quad r \leq r_0.$$ In fact, from Remark \ref{betterest} we obtain
$$\left|\frac v u - P_0\right| \leq C |x|^{1+\alpha}$$ which together with \eqref{**} gives the desired conclusion.
\qed
\
It is easy to see that our proof holds in greater generality. For example, if $v$ solves $\Delta v=f \in C^\alpha$ in $\Omega \cap B_1$ and vanishes continuously on $\p \Omega \cap B_1,$ then we get
$$\left\|\frac v u \right\|_{C^{1,\alpha}(\Omega \cap B_{1/2})} \leq C( \|v\|_{L^\infty} + \|f\|_{C^\alpha}).$$ To obtain this estimate it suffices to take in Lemma \ref{imp} linear functions $P(x)= a_0 + \sum_{i=1}^n a_i x_i$ satisfying $2a_n u_n(0) = f(0).$ In fact, the following more general Theorem holds.
\begin{thm}\label{gen} Let $$\mathcal L u := Tr(A \, D^2 u) + b\cdot \nabla u + c \, u,$$ with $A \in C^\alpha, b, c \in L^\infty$ and $$\lambda I \leq A \leq \Lambda I, \quad \|A\|_{C^\alpha}, \|b\|_{L^\infty}, \|c\|_{L^\infty} \leq \Lambda.$$ Assume $$\mathcal L u = 0, \, u>0 \quad \text{in $\Omega \cap B_1$}, \quad u=0 \quad \text{on $\p \Omega \cap B_1$}$$ and
$$\mathcal Lv = f \in C^\alpha \quad \text{in $\Omega \cap B_1,$} \quad v=0\quad \text{on $\p \Omega \cap B_1$}.$$ Then, if $u$ is normalized so that $u(\frac 1 2 e_n)=1$
$$\left\|\frac v u \right\|_{C^{1,\alpha}(\Omega \cap B_{1/2})} \leq C(\|v\|_{L^\infty} + \|f\|_{C^\alpha}) $$ with $C$ depending on $\alpha, \lambda, \Lambda$ and $n$.
\end{thm}
\begin{rem} We emphasize that the conditions on the matrix $A$ and the right hand side $f$ are those that guarantee interior $C^{2,\alpha}$ Schauder estimates. However the conditions on the domain $\Omega$ and the lower order coefficients $b, c$ are those that guarantee interior $C^{1,\alpha}$ Schauder estimates. \end{rem}
\begin{rem} The theorem holds also for divergence type operators
$$Lu=\text{div}(A \nabla u + b u), \quad \quad A \in C^\alpha, \quad b \in C^ \alpha.$$
\end{rem}
The proof of Theorem \ref{gen} follows the same argument of Theorem \ref{k1}. For convenience of the reader, we give a sketch of the proof.
\
\textit{Sketch of the proof of Theorem $\ref{gen}$.} After a dilation we may assume that \eqref{dil} holds and also
\begin{equation}\label{dil2} A(0) = I, \quad \max\{[A]_{C^\alpha},\|b\|_{L^\infty},\|c\|_{L^\infty}, [f]_{C^\alpha}\} \leq \delta
\end{equation}
with $\delta$ to be chosen later. Again, it suffices to show the analogue of Lemma \ref{imp} in this context, with the $x_n$ coefficient of $P$ and $\bar P$ satisfying $$2a_n = 2 \bar a_n = f(0).$$
Define $\tilde v$ as before. Then
$$f = \mathcal L v = \mathcal L(uP) + r^\alpha \tilde{\mathcal L} \tilde v \left(\frac x r\right) \quad x\in \Omega \cap B_r$$
with
$$ \tilde{\mathcal L} \tilde v := Tr(\tilde A \, D^2 \tilde v) + r \tilde b \cdot \nabla \tilde v + r^2 \tilde c \, \tilde v,$$
$$\tilde A(x) = A(rx), \quad \tilde b (x) = b(rx), \quad \tilde c (x) = c(rx), \quad x \in \tilde \Omega \cap B_1.$$
On the other hand,
$$\mathcal L(uP)= (\mathcal L u)P + 2 (\nabla u)^T A \nabla P + u \, b \cdot \nabla P$$
thus, using \eqref{dil}-\eqref{dil2} and the fact that $2a_n=f(0)$
$$|\mathcal L(uP) - f | \leq C \delta r^\alpha, \quad x \in \Omega \cap B_r.$$ From this we conclude that $$|\tilde{\mathcal{L}} \tilde v| \leq C \delta \quad \text{in $\tilde \Omega \cap B_1$}$$
and we can argue by compactness exactly as before. \qed
\section{The general case, $k \geq 2.$}
Let $\Omega \subset \R^n$ with $\p \Omega \in C^{k,\alpha}$. Precisely,
$$\p \Omega = \{(x', g(x')) \ | \ x' \in \R^{n-1}\}, \quad g(0)=0, \quad \nabla_{x'} g(0)=0, \quad \|g\|_{C^{k,\alpha}} \leq 1.$$
\begin{thm}\label{genk2} Let $$\mathcal L u := Tr(A D^2 u) + b \cdot \nabla u + cu$$ with $$\lambda I \leq A \leq \Lambda I,$$ and $$\max\{ \|A\|_{C^{k-1, \alpha}}, \|b\|_{C^{k-2,\alpha}}, \|c\|_{C^{k-2,\alpha}} \}\leq \Lambda.$$ Assume \begin{equation}\label{u}\mathcal Lu = 0, u>0 \quad \text{in $\Omega \cap B_1$}, \quad u=0 \quad \text{on $\p \Omega \cap B_1$}\end{equation} and
\begin{equation}\label{v}\mathcal Lv = f \in C^{k-1,\alpha} \quad \text{in $\Omega \cap B_1,$} \quad v=0\quad \text{on $\p \Omega \cap B_1$}.\end{equation} Then, if $u$ is normalized so that $u(\frac 1 2 e_n)=1$
$$\left\|\frac v u \right \|_{C^{k,\alpha}(\Omega \cap B_{1/2})} \leq C(\|v\|_{L^\infty} + \|f\|_{C^{k-1,\alpha}}) $$ with $C$ depending on $k, \alpha, \lambda, \Lambda$ and $n$.
\end{thm}
From now on, a positive constant depending on $n, k, \alpha, \lambda, \Lambda$ is called universal.
\begin{rem} If we are interested only in $C^{k,\alpha}$ estimates for $\dfrac v u$ on $\p \Omega \cap B_{1/2}$, then the regularity assumption on $c$ can be weakened to $\|c\|_{C^{k-3, \alpha}} \leq \Lambda.$
\end{rem}
If $u$ and $v$ solve \eqref{u}-\eqref{v} respectively, the rescalings
$$\tilde u(x) = \frac{1}{r_0} u(r_0 x), \quad \tilde v(x) = \frac{1}{r_0} v(r_0 x)$$ satisfy the same problems with $\Omega, A, b, c$ and $f$ replaced by
\begin{align*} &\tilde \Omega = \frac{1}{r_0} \Omega, \quad \tilde A(x) = A(r_0x), \quad \tilde b(x)=r_0 b(r_0 x), \\ &\tilde c(x) = r_0^2 c(r_0 x), \quad \tilde f(x)= r_0 f(r_0 x).\end{align*}
Thus, as in the case $k=1$, we may assume that
$$\nabla u(0)=e_n, \quad A(0)=I$$
and that the following norms are sufficiently small:
\begin{equation}\label{small}
\max\{\|g\|_{C^{k,\alpha}}, \|A-I\|_{C^{k-1,\alpha}}, \|b\|_{C^{k-2,\alpha}}, \|c\|_{C^{k-2,\alpha}}, \|f\|_{C^{k-1,\alpha}}, \|u-x_n\|_{C^{k,\alpha}} \} \leq \delta,
\end{equation} with $\delta$ to be specified later.
The proof of Theorem \ref{genk2} is essentially the same as in the case $k=1$. However, we now need to work with polynomials of degree $k$ rather than linear functions.
We introduce some notation. A polynomial $P$ of degree $k$ is denoted by
$$P(x)= a_m x^m, \quad m=(m_1,m_2,\ldots, m_n), |m|=m_1+\ldots+m_n,$$
with the $a_m$ non-zero only if $m \geq 0$ and $|m| \leq k.$
We use here the summation convention over repeated indices and the notation
$$x^{m}=x_1^{m_1}\ldots x_{n}^{m_n}.$$
Also, in what follows, $\bar i $ denotes the multi-index with $1$ on the $i$th position and zeros elsewhere and $\|P\|=\max|a_m|$.
Given $u$ a solution to \eqref{u}, we will approximate a solution $v$ to \eqref{v} with polynomials $P$
such that $\mathcal L(uP)$ and $f$ are tangent at 0 of order $k-1$.
Below we show that the coefficients of such polynomials must satisfy a certain linear system.
Indeed,
$$\mathcal L(uP) = (\mathcal L u)P + 2 (\nabla u)^T A \nabla P + u \, tr(A D^2P) + u \, b \cdot \nabla P.$$
Since $\mathcal L u=0,$ we find
$$\mathcal L(uP) = g^i P_i + g^{ij}P_{ij}, \quad g^i \in C^{k-2,\alpha}, \, \, g^{ij} \in C^{k-1,\alpha}.$$
Using the first-order terms in the expansions below ($l.o.t.$ = lower order terms),
$$A=I + l.o.t., \quad u=x_n + l.o.t., \quad \nabla u= e_n + l.o.t., $$ we write each $g^i, g^{ij}$ as a sum of a polynomial of degree $k-1$ and a remainder of order $O(|x|^{k-1+\alpha})$. We find
$$g^i=2\delta_{in} + l.o.t, \quad g^{ij}=\delta_{ij} x_n + l.o.t.$$
In the case $P=x^m$ we obtain
\begin{align*} \mathcal L(u \, x^m) = & \, m_n(m_n+1) x^{m-\bar n} + \sum_{i \neq n} m_i(m_i-1) x^{m-2\bar i +\bar n} \\ &+ c_l^m x^l + w_m(x), \end{align*}
with \begin{equation} \label{c1}c_l^m \neq 0 \quad \text{only if $|m| \leq |l| \leq k-1$}, \quad \quad \mbox{and} \quad w_m=O(|x|^{k-1+\alpha}).\end{equation}
Also in view of \eqref{small}
\begin{equation}\label{c2}|c_l^m| \leq C \delta, \quad |w_m| \le C \delta |x|^{k-1+\alpha}, \quad \|w_m\|_{C^{k-2,\alpha}(B_r)} \leq C\delta r.\end{equation}
Thus, if $P=a_m x^m, $ with $\|P\| \leq 1$ then
$$\mathcal L(uP) = R(x) + w(x), \quad R(x) = d_lx^l, \quad \deg R=k-1,$$ with $w$ as above and the coefficients of $R$ satisfying
\begin{equation}\label{Q}
d_l = (l_n +1)(l_n+2)a_{l+\bar n} + \sum_{i \neq n} (l_i+1)(l_i+2) a_{l+2\bar i -\bar n} +c_l^m a_m.
\end{equation}
\begin{defn} We say that $P$ is an approximating polynomial for $v/u$ at $0$ if the coefficients $d_l$ of $R(x)$ coincide with the coefficients of the Taylor polynomial of order $k-1$ for $f$ at 0.
\end{defn}
We think of \eqref{Q} as an equation for $a_{l+\bar n}$ in terms of $d_l$ and a linear combination of $a_m$'s with either $|m| < |l| +1$, or with $|m| = |l|+1$ and $m_n < l_n+1.$
Thus the $a_m$'s are uniquely determined from the system \eqref{Q} once $d_l$ and $a_m$ with $m_n=0$ are given.
The proof of Theorem \ref{genk2} now follows as in the case $k=1$, once we establish the next lemma.
\begin{lem} Assume that for some $r \leq 1$ and an approximating polynomial $P$ for $v/u$ at $0$, with $\|P\| \le 1$, we have
$$\|v-uP\|_{L^\infty(\Omega \cap B_r)} \leq r^{k+1+\alpha}.$$ Then, there exists an approximating polynomial $\bar P$ for $v/u$ at $0$, such that
$$\|v-u\bar P\|_{L^\infty(\Omega \cap B_{\rho r})} \leq (\rho r)^{k+1+\alpha}$$
for $\rho>0$ universal, and
$$\|P-\bar P\|_{L^\infty(B_r)} \leq Cr^{k+\alpha},$$ with $C$ universal.
\end{lem}
\begin{proof}
We write
$$v(x) = u(x)P(x) + r^{k+1+\alpha} \tilde v\left(\frac x r \right), \quad x \in \Omega \cap B_r,$$ with
$$\|\tilde v\|_{L^\infty(\tilde \Omega \cap B_1)} \leq 1, \quad \tilde \Omega := \frac 1 r \Omega.$$
Define also,
$$\tilde u(x) := \frac{u(rx)}{r}, \quad x\in \tilde \Omega \cap B_1.$$
Then
$$f = \mathcal L v = \mathcal L(uP) + r^{k+\alpha-1} \tilde{\mathcal L} \tilde v \left(\frac x r\right) \quad x\in \Omega \cap B_r$$
with
$$ \tilde{\mathcal L} \tilde v := Tr(\tilde A \, D^2 \tilde v) + r \tilde b \cdot \nabla \tilde v + r^2 \tilde c \, \tilde v,$$
$$\tilde A(x) = A(rx), \quad \tilde b (x) = b(rx), \quad \tilde c (x) = c(rx), \quad x \in \tilde \Omega \cap B_1.$$
Using that $P$ is approximating, we conclude that
\begin{equation}\label{tildeL}
\tilde{\mathcal L} \tilde v = \tilde w \quad \text{in $\tilde \Omega \cap B_1$, } \quad \tilde v=0 \quad \text{on $\p \tilde \Omega \cap B_1,$}
\end{equation}
with
$$\|\tilde v \|_{L^\infty} \leq 1, \quad \|\tilde w\|_{C^{k-2,\alpha}} \leq C\delta.$$
By compactness $\tilde v \to v_0$ with $v_0$ harmonic. Thus we find,
$$\|\tilde v - x_n Q\|_{L^\infty(\tilde \Omega \cap B_\rho)} \leq C \rho^{k+2} \leq \frac 1 2 \rho^{k+1+\alpha}, \quad \deg Q=k, \quad \quad \|Q\| \le C,$$
with $x_n Q$ a harmonic polynomial and $\rho$ universal.
Thus,
$$\|v - u(P+r^{k+\alpha}Q(\frac x r))\|_{L^\infty(\Omega \cap B_{\rho r})} \leq \frac 1 2 (\rho r)^{k+1+\alpha}.$$
However, $P+r^{k+\alpha}Q(\frac x r)$ is not approximating for $v/u$ at $0$, and we need to modify $Q$ into a slightly different polynomial $\bar Q$.
We want the coefficients $\bar q_l$ of $\bar Q$ to satisfy (see \eqref{Q})
\begin{align}\label{eqq}
0&=(l_n+1)(l_n+2)\bar q_{l+\bar n} + \sum_{i \neq n } (l_i +1)(l_i+2)\bar q_{l+2\bar i-\bar n}+
\bar c_l^m \bar q_m, \end{align}
with (see \eqref{c1}-\eqref{c2})
$$\bar c_l^m=r^{|l|+1-|m|} c_l^m, \quad |\bar c_l^m| \leq C \delta.$$
Moreover, since in the flat case, i.e. $A=I$, $u=x_n$, and $g$, $b$, $c$, $f$ all vanishing, $Q$ is approximating for $v_0/x_n$ at 0, the coefficients of $Q$ satisfy the system \eqref{Q} with $c_l^m=0$ and $d_l=0$, i.e.
$$0=(l_n+1)(l_n+2) q_{l+\bar n} + \sum_{i \neq n } (l_i+1)(l_i+2)q_{l+2 \bar i-\bar n}.$$
Thus, by subtracting the last two equations, the coefficients of $Q-\bar Q$ solve the system \eqref{eqq} with left hand side bounded by $C\delta$, and we can find $\bar Q$ such that
$$\|Q-\bar Q\|_{L^\infty(B_1)} \leq C\delta.$$
\end{proof}
TITLE: The cohomology of a product of sheaves and a plea.
QUESTION [14 upvotes]: The question: Consider a topological space $X$ and a family of sheaves (of abelian groups, say) $\; \mathcal F_i \;(i\in I)$ on $X$. Is it true that
$$H^*(X,\prod \limits_{i \in I} \mathcal F_i)=\prod \limits_{i \in I} H^*(X,\mathcal F_i) \;?$$
According to Godement's and to Bredon's monographs this is correct if the family of sheaves is locally finite (In particular if $I$ is finite). [Bredon also mentions in an exercise that equality holds for spaces in which every point has a smallest open neighbourhood.]
What about the general case?
A variant: Same question for $\check{C}$ech cohomology: is it true that
$$\check{H}^*(X,\prod \limits_{i \in I} \mathcal F_i)=\prod \limits_{i \in I} \check{H}^*(X,\mathcal F_i) \;?$$
(Of course, $\check{C}$ech cohomology often coincides with derived functor cohomology but still the question should be considered independently)
A prayer: Godement's book Topologie algébrique et théorie des faisceaux was published in 1960 and is still, with Bredon's, the most complete book on the subject. I certainly appreciate the privilege of working in a field where a book released half a century ago is still relevant: programmers and molecular biologists are not so lucky. Still I feel that a new treatise is due, in which naïve/foundational questions like the above would be addressed, and which would take the research and shifts in emphasis of half a century into account: one book on sheaf theory every 50 years does not seem an unreasonable frequency. So might I humbly suggest to one or several of the awesome specialists on MathOverflow to write one? I am sure I'm not the only participant here whose eternal gratitude they would earn.
REPLY [8 votes]: The answer to the first question is almost always no; see Jan-Erik Roos, Derived functors of inverse limits revisited, J. London Math. Soc. (2) 73 (2006), no. 1, 65--83.
Addendum: The crucial point is that infinite products are not exact. The most precise counterexample statement is Cor 1.11 combined with Prop 1.6, which identifies the stalks of the higher derived functors of the product with what you are interested in. Formally, it doesn't give a counterexample for a single $X$, but Cor 1.11 shows that for any paracompact space with positive cohomological dimension there is some open subset for which your question has a negative answer. It seems clear that one could construct examples for specific $X$.
TITLE: Calculating $\pi_2(X\cup_\alpha e_\alpha)$ using Hurewics theorem and covering spaces
QUESTION [2 upvotes]: Consider the CW-complex $X$ obtained by wedging two circles. Denote by $a$ and $b$ the generators of $\pi_1(X)$. On $X$, attach two discs with attaching maps
\begin{align*}S^1\stackrel{a^5(ab)^{-2}}{\longrightarrow}X\quad\quad S^1\stackrel{b^3(ab)^{-2}}{\longrightarrow}X\end{align*}
Call the resulting CW-complex $W$. I want to calculate $\pi_2(W)$. My idea is as follows: Let $\widetilde{W}\stackrel{p}{\longrightarrow}W$ be the universal covering space of $W$. In dimensions 2 or higher, $p_*$ is an isomorphism so by Hurewicz Theorem we obtain $H_2(\widetilde{W})\cong \pi_2(\widetilde{W})\cong\pi_2(W)$. Having a good understanding of the CW-complex structure on $\widetilde{W}$ could then potentially solve the problem, but this is where I get stuck: Since $X\hookrightarrow W$ does not induce isomorphism in $\pi_1$, we do not get from the (general) theory, see eg. Lemma 4.38+proof in Hatcher, that $\widetilde{W}$ is obtained by lifting the cells of $W$ via the covering of $X$. I know that the covering space of $X$ is the Cayley graph, and I have been considering how to appropriately attach 2-cells to obtain a universal cover of $W$, but in vain.
My question is, can someone tell me how to obtain a universal cover of $W$? Otherwise, if the strategy appears to be a dead end, I would appreciate hints to proceed in a more fruitful direction.
Thanks in advance.
Edit: For an alternative reference for finding an explicit CW-structure on a universal cover in a situation such as above, see Hatcher section 1.3 "Cayley Complexes".
REPLY [0 votes]: The process you are looking for is called free Fox differentiation. See for example page 3 of this book.
First fix a lift of the basepoint $*\in \widetilde{W}$.
Each directed $1$-cell $e_i$ in $W$ lifts to a unique $1$-cell $\widetilde{e_i}$ of $\widetilde{W}$, rooted at $*$. In general, the $1$-cells of $\widetilde{W}$ are the translates of these. That is $\widetilde{e_i}g$, for $g\in \pi_1(W)$.
The $1$-cell $\widetilde{e_i}g$ goes from $*g$ to $*g_ig$, where $e_i$ represents $g_i\in \pi_1(W)$.
For each 2-cell of $W$, with boundary a word in the $g_i$ (and their inverses), we can draw a lift in $\widetilde{W}$ as a polygon with edges translates of the directed $1$-cells in the word. If we set the first vertex to represent $*$, what we must translate the remaining vertices and edges by is determined inductively.
The coefficient on $\widetilde{e_i}$ of the boundary of a $2$-cell with boundary word $w_j$ is then the free Fox derivative $\frac{\partial w_j}{\partial g_i}$.
In general, for a word $w$ in symbols $g_i^{\pm1}$, representing elements of a group $G$,the symbol $\frac{\partial w_j}{\partial g_i}\in \mathbb{Z}[G]$ denotes the sum of group elements represented by the word to the right of an instance of $g_i$ in $w_j$, minus group elements represented by the word to the right of an instance of $g_i^{-1}$ in $w_j$, starting from the $g_i^{-1}$.
Note, if the group action on the cover is a right action, then we must read words right to left, when interpreting as paths.
Thus $\pi_2(W)$ for a $2$-complex $W$ may be computed algebraically as the kernel of the map over $\mathbb{Z}[\pi_1(W)]$, represented by the matrix with entries $\frac{\partial w_j}{\partial g_i}$.
There is a similar purely algebraic calculation for $\pi_3(W)$, when $W$ is a $2$-complex - see for example this paper.
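Since the free Fox derivative described above is purely combinatorial, it can be checked mechanically. Below is a small Python sketch (not part of the original answer; the word encoding and helper names are my own) that computes $\frac{\partial w}{\partial g}$ as a formal $\mathbb{Z}[F]$-linear combination of freely reduced words, following the suffix convention stated above.

```python
from collections import Counter

def reduce_word(word):
    """Freely reduce a word given as a sequence of (generator, ±1) letters."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()                    # cancel adjacent g g^{-1}
        else:
            out.append((g, e))
    return tuple(out)

def fox_derivative(word, gen):
    """Formal derivative of `word` w.r.t. `gen`, following the convention
    above: +(suffix strictly after each instance of gen), -(suffix starting
    at each instance of gen^{-1}).  Returns {reduced word: integer coeff}."""
    deriv = Counter()
    for j, (g, e) in enumerate(word):
        if g == gen and e == 1:
            deriv[reduce_word(word[j + 1:])] += 1
        elif g == gen and e == -1:
            deriv[reduce_word(word[j:])] -= 1
    return {w: c for w, c in deriv.items() if c != 0}

# the boundary word a^5 (ab)^{-2} from the question, read left to right
a, A, b, B = ("a", 1), ("a", -1), ("b", 1), ("b", -1)
w1 = [a, a, a, a, a, B, A, B, A]
```

As a quick consistency check, applying the augmentation (sending every group element to $1$) to $\partial w/\partial g$ recovers the exponent sum of $g$ in $w$.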
TITLE: "Normalizing" a Function
QUESTION [1 upvotes]: One of our homework problems for my linear algebra course this week is:
On $\mathcal{P}_2(\mathbb{R})$, consider the inner product given by
$$
\left<p, q\right> = \int_0^1 p(x)q(x) \, dx
$$
Apply the Gram-Schmidt procedure to the basis $\left\{1, x, x^2\right\}$ to produce an orthonormal basis for $\mathcal{P}_2(\mathbb{R})$.
Generating an orthogonal basis is trivial but I'm not quite sure how to go about normalizing the functions I get to "length one". For vectors, it's easy since you just divide by the magnitude but what about for functions?
REPLY [1 votes]: The norm in this space is
$$\|u\| = \sqrt{\langle u, u\rangle} = \sqrt{\int_0^1 \left(u(x)\right)^2 dx}$$
So once you have a basis of three functions, compute the norms (i.e. compute the integral of the square, and square root it) and divide the function by the norm. In particular, show that
$$\left\| \frac{u}{\|u\|}\right\| = 1$$
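To make the "divide by the norm" step concrete, here is a short Python check (my own illustration, not part of the answer; the helper names are invented) that runs Gram-Schmidt on $\{1, x, x^2\}$ with this inner product. Since $\int_0^1 x^i x^j\,dx = \frac{1}{i+j+1}$, everything stays exact with rational arithmetic.

```python
from fractions import Fraction as F

def inner(p, q):
    """<p, q> = ∫₀¹ p(x) q(x) dx, polynomials as coefficient lists."""
    return sum(F(pi) * F(qj) / (i + j + 1)
               for i, pi in enumerate(p) for j, qj in enumerate(q))

def sub_proj(v, u):
    """v - (<v,u>/<u,u>) u, padding coefficient lists to equal length."""
    c = inner(v, u) / inner(u, u)
    n = max(len(v), len(u))
    v = list(v) + [F(0)] * (n - len(v))
    u = list(u) + [F(0)] * (n - len(u))
    return [vi - c * ui for vi, ui in zip(v, u)]

basis = [[F(1)], [F(0), F(1)], [F(0), F(0), F(1)]]   # 1, x, x^2
ortho = []
for v in basis:
    for u in ortho:
        v = sub_proj(v, u)
    ortho.append(v)

norms_sq = [inner(v, v) for v in ortho]   # divide each v by √<v,v> to normalize
```

This reproduces $v_1 = x - \tfrac12$ and $v_2 = x^2 - x + \tfrac16$ with $\|v_1\|^2 = \tfrac1{12}$ and $\|v_2\|^2 = \tfrac1{180}$, so the orthonormal basis is $1,\ 2\sqrt3\,(x-\tfrac12),\ 6\sqrt5\,(x^2-x+\tfrac16)$.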
TITLE: Solving characteristic equation with matrix coefficients.
QUESTION [0 upvotes]: Given the equation \begin{equation}\det(\lambda^2 I+ B \lambda +K )=0\end{equation} where $I,B,K\in \mathbb R^{m \times m}$. $B$ and $K$ are symmetric matrices with no zero eigenvalues and $B>0$. Let $n^+(A)$ and $n^-(A)$ denote the number of eigenvalues of $A$ with positive real parts and negative real parts respectively. Is the number of roots of the above equation with positive real parts equal to $n^-(B)+n^-(K)$ and the number of roots with negative real parts equal to $n^+(B)+n^+(K)$?
My trial was as follows: let $v \in \mathbb{ R^m}$ if $K$ is positive definite
then $$\det(\lambda^2 I+ B \lambda +K )=0 \implies \exists v: v^T(\lambda^2 I+ B \lambda +K)v=0 \implies \lambda^2 \|v\|^2+ v^TBv \lambda +v^TKv=0.$$
All the coefficients are positive, so real part of $\lambda$ must be negative. The same argument can be used when $K<0$, the remaining case is when $K$ is indefinite.
Edit
As Ben Grossmann showed the system can be seen as the characteristic equation of:
$$
J = \pmatrix{0&&I \\ -K && -B}.
$$
where $B>0$. $J$ cannot have an eigenvalue with purely imaginary part: if it did, then $\exists v: v^H((bi)^2 I+ B (bi) +K)v=0 \implies (v^HBv)\,b=0 \iff b=0$
Let $n_0(A)$ denote the number of zero eigenvalues of A.
lemma 1: $n_0(K)=n_0(J)$
we observe that the reduced echelon form of $J$ is:
$$
J_{red} = \pmatrix{I&&0\\0 && K}.
$$
so $n_0(K)=n_0(J)$.
Let $G(t)=K+t I$ and let
$$
J(t)=\pmatrix{0&&I\\-G(t) &&-B}
$$
observe that as $t\rightarrow \infty$, $G(t)>0 \implies n_-(J(t))=2m$, and as $t\rightarrow -\infty$, $G(t)<0 \implies n_-(J(t))=n_+(J(t))=m$.
Let the eigenvalues of $J(t)$ be $\{\lambda_i(t)\}$.
As $t$ goes from $-\infty$ to $+\infty$, the eigenvalues
of $G(t)$ become zero $m$ times which is when they change signs. By lemma 1, the eigenvalues
of $J(t)$ must also become zero $m$ times, and those times are exactly when eigenvalues of $G(t)$ are zero. It is not difficult to see that the eigenvalues of $J(t)$ are zero exactly when one of the positive eigenvalues changes sign to negative; if not, then one of the positive eigenvalues of $J(t)$ would not change sign and would remain positive as $t \rightarrow +\infty$, which is a contradiction.
So every time an eigenvalue of $G(t)$ changes sign from negative to positive, an eigenvalue of $J(t)$ changes sign from positive to negative, so we have that $\forall t$, $n_-(J(t))=m+n_+(G(t))=n_+(B)+n_+(G(t))$; taking $t=0$ yields $n_-(J)=n_+(B)+n_+(K)$, as required.
Is there something wrong with this proof?
REPLY [1 votes]: A partial (but non-trivial) answer: if $K$ is negative definite, the number of positive/negative roots can be determined as follows.
First, observe that $\det(\lambda^2 I + \lambda B + K) = \det(\lambda I - M)$, where
$$
M_1 = \pmatrix{-B & -K\\I & 0}.
$$
This holds regardless of the definiteness of $K$. If $K$ is negative definite, then there exists a unique positive definite square-root $P$ of $-K$ (i.e. $P^2 = -K$). We note that the above matrix $M_1$ is similar to
$$
M_2 = \pmatrix{I\\ & P} \pmatrix{-B&-K\\I & 0}\pmatrix{I \\ & P}^{-1} = \pmatrix{-B & P\\P & 0}.
$$
The above matrix is symmetric and thus has real eigenvalues. Furthermore, since $B$ is invertible, we have
$$
n_\pm(M_2) = n_{\pm}(-B) + n_{\pm}(C),
$$
where $C$ denotes the Schur complement
$$
C = M_2/(-B) = PB^{-1}P.
$$
Note that $n_{\pm}(PB^{-1}P) = n_{\pm}(B^{-1}) = n_{\pm}(B)$. With that, we can conclude that regardless of the definiteness of $B$, the number of roots with positive real part and negative real part are both equal to $m$.
For the case where $K$ is positive definite, there is a positive definite (and symmetric) $P$ for which $K = P^2$. Applying the same manipulation as before leaves us with the matrix
$$
M_2 = \pmatrix{-B & -P\\ P & 0}.
$$
We note that the symmetric part of this matrix,
$$
M_S = \frac 12(M_2 + M_2^T) = \pmatrix{-B&0\\0 & 0},
$$
is negative semidefinite whenever $B$ is positive definite. It follows that $M_2$ will only have eigenvalues with non-positive real part in the case that $B$ is positive definite.
Thoughts on the general case:
For the general case, with $B$ positive definite, let $P$ be such that $PKP^T$ is of the form $\operatorname{diag}(I_{n_+},-I_{n_-})$.
$$
M_2 = \pmatrix{P\\& P^{-T}} \pmatrix{-B & -K\\ I & 0} \pmatrix{P \\ & P^{-T}}^{-1}
= \pmatrix{-PBP^{-1} & -PKP^T\\ (PP^T)^{-1} & 0}
$$
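As a numeric sanity check of the claimed counts (my own addition, not part of the answer), consider the scalar case $m=1$, where $\det(\lambda^2 I + \lambda B + K)=\lambda^2 + b\lambda + k$: for $b>0$, $k<0$ we expect $n_-(B)+n_-(K)=1$ root with positive real part and $n_+(B)+n_+(K)=1$ with negative real part, while for $b>0$, $k>0$ both roots should lie in the open left half-plane.

```python
import cmath

def quad_roots(b, k):
    """Roots of λ² + bλ + k = 0 (the m = 1 case of det(λ²I + λB + K) = 0)."""
    disc = cmath.sqrt(b * b - 4 * k)
    return (-b + disc) / 2, (-b - disc) / 2

def sign_count(roots, tol=1e-12):
    """(# roots with positive real part, # roots with negative real part)."""
    pos = sum(1 for r in roots if r.real > tol)
    neg = sum(1 for r in roots if r.real < -tol)
    return pos, neg
```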
TITLE: Name for a property in a brutally elementary presentation of a monad
QUESTION [5 upvotes]: For evil reasons of my own, I'm trying to give a presentation of a monad in primitive terms, assuming only the notion of a category. More honestly, I looked at this post and got intrigued by the notion that, in some sense, a monad is a monad simply because its Kleisli category is a category, and went on to check to what extent that is true.
I started out with this: let $\mathcal{C}$ be a category, $T:\mathrm{Ob}(\mathcal{C})\to\mathrm{Ob}(\mathcal{C})$ an object assignment, and $\mathcal{K}$ another category (with identity and composition denoted as $\eta$ and $\rhd$, respectively), such that:
$\mathrm{Ob}(\mathcal{K})=\mathrm{Ob}(\mathcal{C})$
$\mathcal{K}(A,B)=\mathcal{C}(A,TB)$
I then tried to derive the Kleisli triple $(T, \eta, \mu)$ (with $Tf = (\eta\circ f)\rhd \mathrm{id}$ and $\mu=\mathrm{id}\rhd\mathrm{id}$), but got quickly stumped when trying to show that $T$ is a functor. My assumptions, it seems, give no relation whatsoever between the compositions in both categories. After some trial and error, I found that adding this assumption:
For every $f:C\to TD$, $g:B\to TC$, and $h:A\to B$ (in $\mathcal{C}$): $(f\rhd g)\circ h = f\rhd(g\circ h)$
yielded back all the monad goodness. I also checked that this property holds for the Kleisli category of every monad, so I'm not adding any extraneous restrictions here.
My question is, does this property have a name? Additionally: is it of any interest whatsoever? is it somehow trivial (for some reason I missed)?
PS: I know this is very close to the Kleisli triple construction (as in this question) which also doesn't depend in any notion beyond a category; for some reason this one feels more symmetrical to me.
REPLY [1 votes]: This property is called "Triple Product Law" in E. Manes, Algebraic Theories, Springer-Verlag, 1976, page 26, 3.8.
It is trivial that this property holds in any Kleisli category: we have $(f \rhd g) \circ h = (f \rhd g) \rhd (\eta_B \circ h) = f \rhd (g \rhd (\eta_B \circ h)) = f \rhd (g \circ h)$.
It is also easy to see that this property can fail if there is no assumption relating the respective compositions: without any assumption, you cannot deduce any property. For instance, $\mathcal{C}$ has exactly one object $*$ and exactly four morphisms ($id_{*}$, $b$, $c$, $\eta_\ast$), and, for any morphism $k$, $k \rhd k = \eta_\ast$; moreover $c \circ b = \eta_\ast$, $\eta_\ast \circ b = id_\ast$ and $c \circ c = id_\ast$. I take $f = g = c$ and $h = b$.
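The Triple Product Law is also easy to test concretely. Here is a small Python sketch (my own illustration, not from the answer) using the list monad, where the Kleisli composition $f \rhd g$ first applies $g$ and then binds $f$; the law $(f \rhd g)\circ h = f \rhd (g\circ h)$ for an ordinary map $h$ then holds by inspection of the definition, and the unit laws can be checked the same way.

```python
def unit(x):
    """η for the list monad."""
    return [x]

def kleisli(f, g):
    """f ▷ g : apply g : B → T C, then bind f : C → T D."""
    return lambda x: [z for y in g(x) for z in f(y)]

h = lambda n: n + 1        # an ordinary map h : A → B
g = lambda n: [n, -n]      # a Kleisli map g : B → T C
f = lambda n: [n * 10]     # a Kleisli map f : C → T D

lhs = lambda x: kleisli(f, g)(h(x))    # (f ▷ g) ∘ h
rhs = kleisli(f, lambda x: g(h(x)))    # f ▷ (g ∘ h)
```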
TITLE: examples of completely positive order zero maps to demonstrate a theorem
QUESTION [1 upvotes]: I'm interested explicit examples which can be used to demonstate the theorem:
Theorem: Let $A$ and $B$ be $C^*$-algebras and $\phi:A\to B$ be a completely positive map of order zero. Set $C:=C^*(\phi(A))\subset B$.
Then there is a positive element $h\in M(C)\cap C'$ with $\|h\|=\|\phi\|$ and a $*-$homomorphism
$$\pi_{\phi}:A\to M(C)\cap \{h\}'$$ such that $$\phi(a)=\pi_{\phi}(a)h $$ for $a\in A$.
If $A$ is unital, then $\phi(1_A)=h\in C$.
It can be found in the paper "Completely positive maps of order zero", by Winter and zacharias, theorem 2.3.
Three examples are: a trivial one, if $A$ and $B$ are unital and $\phi:A\to B$ is a unital $\ast$-homomorphism. Then nothing exciting happens: it is $\phi=\pi_{\phi}$ (and $h=\phi(1_A)=1_B$).
A second one, for $g\in C([0,1])$ a positive function, $\phi_g:C([0,1])\to C([0,1]),\; f\mapsto gf$, it is $\phi_g(f)=g\pi_{\phi}(f)$ with a $\ast$-homomorphism $\pi_{\phi}:C([0,1])\to C_b(\{x\in [0,1]: g(x)\neq 0\}),\; f\mapsto f$ (here I was inspired by another question on MSE).
A third one, which can be found here https://math.stackexchange.com/questions/1171402/order-zero-maps-in-matrix-algebra .
My question is: do you know another (nontrivial) example (something with compact operators on a Hilbert space or integral operators, for example)? Regards.
REPLY [1 votes]: Let $K$ be the C*-algebra of compact operators, $A$ a C*-algebra and $\phi:A\to K$ a c.p.c. order zero map.
$A$ is commutative: assume for simplicity that $A$ is unital. Then there exists a compact Hausdorff space $X$ such that $A\cong C(X)$. In this case $\phi$ is a compression by a positive element $h=\phi(1)\in K$ of a representation of $C(X)$. Observe that the eigenspaces of $h$, which commutes with the representation, are finite-dimensional (I'm excluding the zero from the spectrum, for that simply account for the degeneracy of the representation) and the corresponding projections commute with the representation by functional calculus. This should give you an idea of what $\phi$ looks like.
$A$ is $M_2,M_3,\ldots,K$: this is essentially the case of the third example referred to in the OP.
$A$ is simple, non-elementary: consider the CAR algebra as an example. The support $*$-homomorphism $\pi_\phi$ must either be injective or the zero $*$-homomorphism because of the simplicity of $A$. Furthermore $h_\phi=\phi(1)\in K$ commutes with $\pi_\phi$ which implies that there exists a finite rank projection that commutes with $\pi_\phi$. But the CAR algebra has no finite dimensional representations, hence $\phi=0$.
TITLE: Proving that $K= \{x \in \mathbb R^n: \exists t>0 \text{ with } t^{-1}x \in A\}$ is convex, given a convex set $A\subset \mathbb R^n$
QUESTION [1 upvotes]: I have a convex set $ A \subset \mathbb{R^n}$ and $K= \{x \in \mathbb {R^n}\mid \exists t>0:\ t^{-1} x \in A\}.$
I want to show, that $K$ is convex.
My idea:
$y \in K \Rightarrow \exists t>0: t^{-1} y \in A$ and $z \in K \Rightarrow \exists t'>0: t'^{-1} z \in A$.
Since $A$ is convex, there is a $t''>0$ such that $t''^{-1}( \lambda t^{-1} y + (1- \lambda) t'^{-1} z) \in A$ with $\lambda \in [0,1]$.
How can I argue now, that this convex combination is in $K$?
REPLY [2 votes]: The best way to do things, is to note that if $a \in A$, then every element of the form $at, t > 0$ belongs in $K$, since $(at)t^{-1} = a \in A$. Conversely, if $l \in K$, then for some $t > 0$ we have $lt^{-1} = b \in A$, or that $l=tb$ for some $b \in A$ and $t > 0$.
All this tells us that $K = \{at : a \in A, t > 0\}$. If $A$ is convex, we claim this is convex.
With this description, indeed if $at \in K$ and $bt' \in K$, then if $\lambda \in [0,1]$ we want to show that $l = at\lambda + bt'(1-\lambda) \in K$.
For this, note that $t \lambda \geq 0$ and $t'(1-\lambda) \geq 0$, with $k=t\lambda + t'(1-\lambda) > 0$, since one of $\lambda,1-\lambda$ is strictly positive.
Then, $$l = \frac{l}{k}k = k\left(a\frac{t\lambda}{k} + b \frac{t'(1-\lambda)}{k}\right)$$
is the product of $k>0$ and a convex combination of $a$ and $b$ which lies inside $A$ since it is convex. By definition of $K$, we get that $l \in K$, or that $K$ is convex.
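A quick randomized check of this argument (my own addition; the disk and all names are an arbitrary illustrative choice) for $A$ the closed disk of radius $1$ centred at $(2,0)$ in $\mathbb{R}^2$: for points $ta, t'b \in K$, the proof predicts that $l/k \in A$ with $k = t\lambda + t'(1-\lambda) > 0$.

```python
import random

CENTER, RAD = (2.0, 0.0), 1.0

def in_disk(p, tol=1e-9):
    """Membership in the closed disk A, with a small floating tolerance."""
    return (p[0] - CENTER[0])**2 + (p[1] - CENTER[1])**2 <= RAD**2 + tol

def rand_disk(rng):
    """Rejection-sample a point of A from its bounding box."""
    while True:
        p = (rng.uniform(1, 3), rng.uniform(-1, 1))
        if in_disk(p):
            return p

rng = random.Random(0)
ok = True
for _ in range(500):
    a, b = rand_disk(rng), rand_disk(rng)
    t, tp = rng.uniform(0.1, 5), rng.uniform(0.1, 5)
    lam = rng.random()
    l = (lam * t * a[0] + (1 - lam) * tp * b[0],
         lam * t * a[1] + (1 - lam) * tp * b[1])
    k = lam * t + (1 - lam) * tp          # k > 0, as in the proof
    ok = ok and k > 0 and in_disk((l[0] / k, l[1] / k))
```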
TITLE: Regularity of the set of orbits through a submanifold under a tangential flow
QUESTION [2 upvotes]: Let $M$ be a smooth manifold, and $\mathfrak{X}$ be a global smooth vector field. Let $D \subseteq \mathbb{R} \times M$ be the flow domain of the maximal flow $\Phi$ of $\mathfrak{X}$, and for any $p \in M$ let $D_p = \{ t \in \mathbb{R}: (t,p) \in D \}$ and $\mathcal{O}_p = \Phi(D_p)$ (i.e., the maximal orbit of $p$).
It is easy to come up with examples showing that, in general, if $N$ is a smooth embedded submanifold of $M$ then the set
$$ \mathcal{V} = \bigcup_{p \in N} \mathcal{O}_p $$
is not a smooth embedded, or even immersed, submanifold. However, the only examples I've come up with involve $N$'s on which $\mathfrak{X}$ fails to be tangent.
I was wondering to what extent, if at all, the situation is improved by having $\mathfrak{X}$ everywhere tangent to $N$. Is this sufficient to demonstrate that $\mathcal{V}$ is an immersed submanifold of $M$? Or even an embedded one? Are there simple further conditions one can impose on the pair $(N, \mathfrak{X})$ to ensure that $\mathcal{V}$ is even embedded? If $\Phi(D_p) \subseteq N$ for all $p \in N$ then this is trivial, so I'm concerned about the situation where $N$ is not invariant under the maximal flow.
Any help or pointer to a reference would be much appreciated!
REPLY [1 votes]: Consider the torus $T^2$; it is the quotient of $R^2$ by the group $G$ generated by two translations $t_a, t_b$ of respective directions $a$ and $b$. Consider an irrational number $\alpha$. The vector field $X(x)=a+\alpha b$ on $R^2$ is invariant by $G$, so it induces a vector field $Y$ on $T^2$. The orbits of $Y$ are dense in $T^2$. Take $x\in T^2$; there is a map $f:[0,1]\rightarrow T^2$ such that $f$ is injective and $f([0,1])$ is contained in the orbit of $x$ by the flow of $Y$. The saturated set $V$ of $N=f((0,1))$ is the orbit of $x$, so it is a dense subset of $T^2$; thus it is not a submanifold of $T^2$.
TITLE: Question about finite group
QUESTION [0 upvotes]: We consider $A$ an abelian $p$-group and we denote $A[m]=\{a\in A\mid am=0\}$.
If there exists $n>1$ such that
$$
A[p]\cap (pA[p^2])\cap (p^2A[p^3])\cap\dots\cap (p^{n-1}A[p^{n}])=\{0\}
$$
does that imply that $A$ is finite?
REPLY [1 votes]: If the group $A$ has finite exponent, that is, $p^mx=0$ for every $x$, then
$$
p^mA[p^{m+1}]=\{0\}
$$
Consider an infinite direct sum of copies of $\mathbb{Z}/p\mathbb{Z}$ and you have a counterexample.
TITLE: Show that if $f$ is continuous at $a$ and $f(a)≠0$ then $f$ is nonzero in an open ball around $a$.
QUESTION [4 upvotes]: Here is the question I'm dealing with:
Let $U$ be an open set of $\mathbb{R}^{n}$, $f:U\rightarrow\mathbb{R}^{n}$ a function and $a\in U$ a given point.
Show that if $f$ is continuous at $a$ and $f(a)\neq 0$ then there exists $r> 0$ such that $B'_{r}(a)\sqsubseteq$ $U$ and $f(x)\neq0$ for each $x\in$ $B'_{r}(a)$.
I am not sure of anything I did so far but I think since $f$ is continuous and $f(a)\neq 0$, $|f(a)|>0$ for every $a$. If I choose $r$ as $1/n$ for some
$n\in\mathbb{N}$ (archimedean property) such that $|f(a)|>1/n>0$ then $x$ would be both in $B'_{r}(a)$ and $f(x)\neq 0$. Is my way of thinking correct? I feel like I'm missing something, because I didn't use f being continuous as I should.
REPLY [0 votes]: $\left \| f(a)) \right \|\neq 0\Rightarrow \exists V$ open in $R^{n}$ which contains $f(a)$ and s.t $v\in V\Rightarrow \left \| v \right \|> 0$. This is because the norm is continuous, i.e. the map $v\rightarrow \left \| v \right \|$ is continuous. But now since $f$ is continuous, there is a $U'$ open in $R^{n}$ s.t. $a\in U'$ and $f(U')\sqsubseteq V$. Now just take any ball contained in $U\cap U'$ and the result follows. | {"set_name": "stack_exchange", "score": 4, "question_id": 1293239} |
TITLE: Solutions to a differential equation
QUESTION [0 upvotes]: I need to group some solution for a given differential equation based on their type (as a homework), and I have a problem with it. The available types are general, particular, singular. What does the singular mean exactly? Can we call, for example, the $\pm\sqrt{x+1}$ a general solution because of the $\pm$ sign? Is the $y=c e^x$ a general solution to the $\frac{\mathrm{d}^2y}{dx^2}=y$ equation, or just the $y=c_1e^x+c_2e^{-x}$? Can the constant be a complex number in the Cauchy-problem?
I think the answer to the last question is yes, but we were only doing it with real numbers in the class.
REPLY [1 votes]: The general solution of this equation is given by $$y(x)=C_1e^x+C_2e^{-x}$$
TITLE: Why is the value of $\lim_{x\to0} \frac{1}{x^n}$ depend on $n$ being even or odd?
QUESTION [1 upvotes]: I have the limit
$$\lim_{x\to0} \frac{1}{x^n}$$
Why, when $n$ is even this limit is infinity and when $n$ is odd this limit is indeterminate?
REPLY [2 votes]: Recall that in order for $\lim_{x\to a}f(x)$ to exist we require that $\lim_{x\to a^-}f(x)$ and $\lim_{x\to a^+}f(x)$ exist. Consider the limit of $1/x^n$ as $x\to 0$ where $n$ is even. Since $n$ is even then $1/x^n$ will always be positive. Thus, we will have $\lim_{x\to0^-}1/x^n = \lim_{x\to0^+}1/x^n = \infty$ so $\lim_{x\to0}1/x^n=\infty$. Now consider the limit of $1/x^n$ as $x\to0$ when $n$ is odd. Since $n$ is odd then $1/x^n$ will be positive or negative depending on the sign of $x$. In particular, $1/x^n$ is negative for $x<0$ and positive for $x>0$. As a result, the limits from above and below will differ so the two-sided limit does not exist. | {"set_name": "stack_exchange", "score": 1, "question_id": 1037010} |
TITLE: Taylor Series Substitution $e^{x^2-1}$
QUESTION [0 upvotes]: If I'm using substitution to find a Taylor series about $x=1$ for $e^{x^2-1}$, but I'm given the Maclaurin series for $e^x$, how come the fact that the Taylor series is about $x=1$ doesn't matter when computing the Taylor series? If it were about $x=0$, it would be:
$$1+(x^2-1)+(x^2-1)^2/2+(x^2-1)^3/6+\dots$$
and if it were about $x=1$ it would still be
$$1+(x^2-1)+(x^2-1)^2/2+(x^2-1)^3/6+\dots$$
Shouldn't the Taylor series depend on the center $a$, since each term contains the $n$th derivative at $a$? ($e^1$ is different from $e^0$, for example.) Also, $(x-a)^k$ is a factor, which would be $x^k$ for $0$ and $(x-1)^k$ for $1$. Why are the Taylor series the same for $a=0$ and $a=1$?
REPLY [1 votes]: To get the series around $x=1$ start by rewriting
$\exp(x^2-1) = \exp[(x-1)^2] \exp[2(x-1)] = \left(\sum_{m=0}^\infty \frac{(x-1)^{2m}}{m!}\right)\left(\sum_{n=0}^\infty \frac{2^n(x-1)^n}{n!}\right)$
Now by multiplying out the two sums we get the desired Taylor series
$\exp(x^2-1) = \sum_{k=0}^\infty c_k (x-1)^k$
where
$c_{k} = \sum_{2m + n = k} \frac{1}{m!}\frac{2^n}{n!}$
Here the sum is over all pairs of integers $n,m\geq 0$ such that $2m + n = k$.
For the even terms this gives us
$c_{2k} = \sum_{m + n = k} \frac{1}{m!}\frac{4^n}{(2n)!} = \sum_{n = 0}^k \frac{1}{(k-n)!}\frac{4^{n}}{(2n)!}$
and for the odd terms we get
$c_{2k+1} = \sum_{m + n = k} \frac{2}{m!}\frac{4^n}{(2n+1)!} = \sum_{n = 0}^k \frac{2}{(k-n)!}\frac{4^n}{(2n+1)!}$
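These closed forms can be verified with exact arithmetic (my own check, not part of the answer): multiply out the two exponential series in $t = x-1$ with rational coefficients and compare coefficient by coefficient.

```python
from fractions import Fraction as F
from math import factorial

D = 10   # compare the coefficients of (x-1)^0, ..., (x-1)^D

# exp(t^2) = Σ t^(2m)/m!   and   exp(2t) = Σ 2^n t^n/n!,  with t = x - 1
s1 = [F(0)] * (D + 1)
for m in range(D // 2 + 1):
    s1[2 * m] = F(1, factorial(m))
s2 = [F(2**n, factorial(n)) for n in range(D + 1)]

# Cauchy product: coefficient of t^k in exp(t^2) exp(2t) = exp(x^2 - 1)
prod = [sum(s1[i] * s2[k - i] for i in range(k + 1)) for k in range(D + 1)]

def c_even(k):   # the closed form for c_{2k} above
    return sum(F(4**n, factorial(k - n) * factorial(2 * n)) for n in range(k + 1))

def c_odd(k):    # the closed form for c_{2k+1} above
    return sum(F(2 * 4**n, factorial(k - n) * factorial(2 * n + 1))
               for n in range(k + 1))
```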
TITLE: Extendibility of spacetime
QUESTION [0 upvotes]: Say we have a metric $g_{\mu\nu}$ and we consider a light-like geodesic. If we integrate over the time coordinate $t$ and find that it takes a finite amount of time for light to travel to $r= \infty$, and then when we integrate over an affine parameter $\tau$ it takes an infinite amount of time to travel to $r= \infty$, then what does this suggest about our spacetime?
My (GR) intuition tells me that there is a "coordinate" singularity at $r=\infty$ (for the coordinate $\tau$). Is that what is happening here?
REPLY [3 votes]: Coordinates don't mean anything. There is not necessarily a coordinate that is "the time coordinate." If you have a coordinate whose gradient happens to be in a timelike direction, you could say it's a timelike coordinate, but the finiteness of a change in such a coordinate doesn't necessarily have any physical significance.
If we integrate over the time coordinate t and find that it takes a finite amount of time for light to travel to r=∞
It doesn't have the interpretation of time to travel.
and then when we integrate over an affine parameter τ it takes an infinite amount of time to travel to r=∞
Nor does this have the interpretation of time to travel.
My (GR) intuition tells me that there is a "coordinate" singularity at r=∞(for coordinate τ).
This affine parameter is not a coordinate.
In the situation you describe, the interpretation is probably that $t$ just doesn't blow up very fast as you approach null infinity. That's not something that has a physical interpretation, it's just a fact about your coordinate system.
TITLE: Is this set empty?
QUESTION [1 upvotes]: Let
$$M=\{(x_1,x_2,x_3) \in \mathbb{C}^3\;\hbox{such that}\;2\Re e(x_1\overline{x_2})=0,\;\;|x_1|^2-|x_2|^2=0,\;\hbox{and}\;|x_1|^2+|x_2|^2=1\},$$
Is this set empty? Thank you.
REPLY [5 votes]: It is not empty, since $\left(\frac1{\sqrt2},\frac i{\sqrt2},0\right)\in M$.
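For the skeptical reader, all three defining conditions can be checked numerically at this witness point (a quick sketch; the variable names are mine):

```python
import math

# witness point (1/sqrt(2), i/sqrt(2), 0) from the answer
x1, x2 = 1 / math.sqrt(2), 1j / math.sqrt(2)

cond1 = 2 * (x1 * x2.conjugate()).real   # 2 Re(x1 * conj(x2)), should be 0
cond2 = abs(x1)**2 - abs(x2)**2          # should be 0
cond3 = abs(x1)**2 + abs(x2)**2          # should be 1
print(cond1, cond2, cond3)
```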
\begin{document}
\title[Porosities of Mandelbrot percolation]
{Porosities of Mandelbrot percolation}
\author[A. Berlinkov]{Artemi Berlinkov$^1$}
\address{Department of Mathematics, Bar-Ilan University, Ramat Gan, 5290002, Israel$^1$}
\email{berlinkov@technion.ac.il$^1$}
\thanks{AB was partially supported by the Department of Mathematics at
University of Jyv\"askyl\"a; DFG-Graduirtenkolleg ``Approximation und
algorithmische Verfahren'' at the University of Jena; University ITMO; ISF
grant 396/15; Center for Absorption in Science, Ministry of Immigrant
Absorption, State of Israel}
\author[E. J\"arvenp\"a\"a]{Esa J\"arvenp\"a\"a$^2$}
\address{Department of Mathematical Sciences, PO Box 3000, 90014 University
of Oulu, Finland$^2$}
\email{esa.jarvenpaa@oulu.fi$^2$}
\thanks{EJ acknowledges the support of the Centre of Excellence in
Analysis and Dynamics Research funded by the Academy of Finland and
thanks the ICERM semester program on ``Dimension and Dynamics''.}
\thanks{We thank Maarit J\"arvenp\"a\"a for interesting discussions and
many useful comments.}
\subjclass[2010]{28A80, 37A50, 60D05, 60J80}
\keywords{Random sets, porosity, mean porosity}
\date{\today}
\dedicatory{}
\begin{abstract}
We study porosities in the Mandelbrot percolation process. We show that, almost
surely at almost all points with respect to the natural measure, the mean
porosities of the set and the natural measure exist and are equal to each other
for all parameter values outside of a countable exceptional set. As a
corollary, we obtain that, almost surely at almost all points, the lower
porosities of the set and the natural measure are equal to zero, whereas the
upper porosities attain their maximum values.
\end{abstract}
\maketitle
\section{Introduction}\label{intro}
The porosity of a set describes the sizes of holes in the set. The concept
dates back to the 1920's when Denjoy introduced a notion which he called index
(see \cite{De}).
In today's terminology, this index is called the upper porosity (see
Definition~\ref{defporoset}). The term porosity was introduced by Dol\v zenko
in \cite{Do}. Intuitively, if the upper porosity of a set equals $\alpha$,
then, in the set, there are
holes of relative size $\alpha$ at arbitrarily small distances.
On the other hand, the lower porosity (see
Definition~\ref{defporoset}) guarantees the existence of holes of certain
relative size at all sufficiently small distances.
The upper porosity turned out to be useful in order to describe properties
of exceptional sets, for example, for measuring sizes of sets where certain
functions are non-differentiable. For more details about the upper porosity,
we refer to an article of Zaj\'\i\v cek \cite{Z}.
Mattila \cite{M} utilised the lower porosity to find upper bounds for Hausdorff
dimensions of sets, and Salli \cite{S} verified the corresponding results for
packing and box counting dimensions.
It turns out that the upper porosity cannot be used to estimate the dimension
of a set (see \cite[Section 4.12]{M2}). The observation that there are sets
which are not lower porous but
nevertheless contain so many holes that their dimension is smaller than
that of the ambient space leads to the concept of mean
porosity of a set, introduced by Koskela and Rohde \cite{KR} in order to study
the boundary behaviour of conformal and quasiconformal mappings.
Mean porosity guarantees that a certain percentage
of scales, that is, distances which are integer powers of some fixed number,
contain holes of a fixed relative size. Koskela and Rohde showed that, if a
subset of the $m$-dimensional Euclidean space is mean porous,
then its Hausdorff and packing dimensions are smaller than $m$. For a
modification of their definition, see Definition \ref{defmean}.
The lower porosity of a measure was introduced by
Eckmann and E. and M. J\"arvenp\"a\"a in \cite{EJJ}, the upper one by
Mera and Mor\'an in \cite{MM} and the mean porosity by Beliaev and Smirnov in
\cite{BS}. The relations between porosities and dimensions of sets
and measures have been investigated, for example, in
\cite{BS,BJJKRSS,JJ,JJ2,JJKRRS,JJKS,KRS,Shm2}. For further information on this
subject, we refer to a survey by Shmerkin \cite{Shm}.
Porosity has also been used for studying the conical densities of measures
(see \cite{KS,KS2}).
Note that sets with the same dimension may have different
porosities. In \cite{JJM}, E. and M. J\"arvenp\"a\"a and Mauldin and, in
\cite{U}, Urba\'nski characterised deterministic iterated function systems
whose attractors have positive porosity. Porosities of random recursive
constructions were studied in \cite{JJM}. Particularly interesting random
constructions are those in which the copies of the seed set are glued
together in such a way that there are no holes left. Thus, the corresponding
deterministic system would be non-porous and the essential question is whether
the randomness in the construction makes the set or measure porous.
A classical example is the Mandelbrot percolation process (also known as the
fractal percolation) introduced by Mandelbrot
in 1974 in \cite{Man} (see Section \ref{perco}). In \cite{JJM}, it was shown
that, almost surely, the points with minimum porosity as well as
those with maximum porosity are dense in the
limit set. However, the question about porosity of typical points and that of
the natural measure remained open. Later, it turned out \cite{CORS}
that, for typical points, the lower porosity equals 0 and the upper
one is equal to $\frac 12$ as conjectured in \cite{JJM}. Indeed, this is a
corollary of the results of Chen et al. in \cite{CORS} dealing with estimates
on the dimensions of sets of exceptional points regarding the porosity.
In this paper, we prove that the mean porosities of the natural measure and of
the limit set exist and are equal to each other almost surely at almost all
points with respect to the natural measure for all parameter values outside of
a countable set (see Theorem~\ref{equal}). We also show that the mean
porosities are continuous as functions of the parameter outside this
exceptional set (see
Theorem~\ref{meanporosityexists}). Unlike the upper and lower porosities, the
mean porosities of the set and the natural measure at typical points
are non-trivial. Indeed, we prove that almost surely
the mean porosities of the set and the natural measure are positive and less
than one for almost all points with respect to the natural measure for all
non-trivial parameter values (see Corollaries~\ref{positive} and
\ref{mainmeasure}). As an application of our results, we
solve the conjecture of \cite{JJM} completely (and give a new proof for the
part solved in \cite{CORS}) by showing that, almost surely for almost all points
with respect to the natural measure, the lower porosities of the
limit set and of the measure are equal to the minimum value of 0,
the upper porosity of the set attains its maximum value of
$\frac 12$ and the upper porosity of the measure also attains its
maximum value of 1 (see Corollary \ref{infsup}).
The article is organised as follows. In Section~\ref{perco}, we explain some
basic facts about the Mandelbrot percolation and, in Section~\ref{porosities},
we define porosities and mean porosities and describe some
of their properties. Finally, in Section~\ref{results}, we prove
our results about mean porosities of the limit set and of the natural measure
in the Mandelbrot percolation process.
\section{Mandelbrot percolation}\label{perco}
We begin by recalling some basic facts about Mandelbrot percolation in the
$m$-dimensional Euclidean space $\R^m$, where
$m\in\N=\{1,2,\dots\}$. Let $k\ge 2$ be an integer,
$I=\{1,\dots,k^m\}$ and $I^*=\bigcup_{i=0}^\infty I^i$, where $I^0=\emptyset$. An
element $\sigma\in I^i$ is called a word and its length is $|\sigma|=i$. For
all $\sigma\in I^i$ and $\sigma'\in I^j$, we denote by
$\sigma\ast\sigma'$ the element of $I^{i+j}$ whose first $i$ coordinates are
those of $\sigma$ and the last $j$ coordinates are those of $\sigma'$. For all
$i\in\N$ and $\sigma\in I^*\cup I^\N,$ denote by
$\sigma|_i$ the word in $I^i$ formed by the first $i$ elements of $\sigma$. For
$\sigma\in I^*$ and $\tau\in I^*\cup I^\N$, we write $\sigma\prec\tau$ if
the sequence $\tau$ starts with $\sigma$.
Let $\Omega$ be the set of
functions $\omega\colon I^*\to\{c,n\}$ equipped with the topology induced by
the metric $\rho(\omega,\omega')=k^{-|\omega\wedge\omega'|}$, where
\[
|\omega\wedge\omega'|=\min\{j\in\N\mid\exists \sigma\in I^j
\text{ with }\omega(\sigma)\ne\omega'(\sigma)\}.
\]
Each $\omega\in\Omega$ can be thought of as a code
that tells us which cubes we choose (c) and which we neglect (n). More
precisely, let $\omega\in\Omega$. We start with the unit cube $[0,1]^m$ and
denote it by $J_\emptyset$. We divide $J_\emptyset$ into $k^m$ closed $k$-adic
cubes with side length $k^{-1}$, enumerate them with letters from alphabet $I$
and repeat this procedure inside each subcube. For all
$\sigma\in I^i$, we use the notation $J_\sigma$ for the unique closed
subcube of $J_\emptyset$ with side length $k^{-i}$ coded by $\sigma$.
The image of $\eta\in I^\N$ under the natural projection from
$I^\N$ to $[0,1]^m$ is denoted by $x(\eta)$, that is,
\[
x(\eta)=\bigcap_{i=0}^\infty J_{\eta|_i},
\]
where $\eta|_0=\emptyset$. If $\omega(\sigma)=n$ for $\sigma\in I^i$ then
$J_\sigma(\omega)=\emptyset$, and if $\omega(\sigma)=c$ then
$J_\sigma(\omega)=J_\sigma$. Define
\[
K_\omega=\bigcap_{i=0}^\infty\bigcup_{\sigma\in I^i}J_\sigma(\omega).
\]
Fix $0\le p\le 1$. We make the above construction random by demanding that
if $J_\sigma$ is chosen then $J_{\sigma\ast j}$, $j=1,\dots,k^m$, are
chosen independently with probability $p$. Let $P$ be the natural Borel
probability measure on $\Omega$, that is, for all $\sigma\in I^*$ and
$j=1,\dots,k^m$,
\[
\begin{split}
&P(\omega(\emptyset)=c)=1,\\
&P(\omega(\sigma*j)=c\mid\omega(\sigma)=c)=p,\\
&P(\omega(\sigma*j)=n\mid\omega(\sigma)=n)=1.
\end{split}
\]
It is a well-known result in the theory of branching processes that if the
expectation of the number of chosen cubes of side length $k^{-1}$
does not exceed one, then the limit set $K_\omega$ is $P$-almost surely an empty
set (see \cite[Theorem 1]{AN}). In our case, this expectation equals
$k^mp$ and, thus, with positive probability, $K_\omega\ne\emptyset$ provided
that $k^{-m}<p\le 1$. According to \cite[Theorem 1.1]{MW} (see also \cite{KP}),
the Hausdorff dimension of $K_\omega$ is $P$-almost surely equal to
\begin{equation}\label{dim}
d=\frac{\log(k^mp)}{\log k}=m+\frac{\log p}{\log k}
\end{equation}
provided that $K_\omega\ne\emptyset$. For $P$-almost all $\omega\in\Omega$,
there exists a natural Radon measure $\nu_\omega$ on $K_\omega$ (see
\cite[Theorem 3.2]{MW})
and, moreover, there is a natural Radon probability measure $Q$
on $I^\N\times\Omega$ such that, for every Borel set
$B\subset I^\N\times\Omega$, we have
\begin{equation}\label{Qdef}
Q(B)=\frac 1{(\diam J_\emptyset)^d}\int\mu_\omega(B_\omega)dP(\omega),
\end{equation}
where $B_\omega=\{\eta\in I^\N\mid (\eta,\omega)\in B\}$, $\nu_\omega$ is the
image of $\mu_\omega$ under the natural projection and $\diam A$ is the diameter
of a set $A$ (see \cite[(1.13)]{GMW}).
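As an informal numerical illustration of the construction (not part of the formal development; the helper name `retained_counts` and the parameter choices below are ours), the branching of retained cubes and the dimension formula \eqref{dim} can be sketched as follows:

```python
import math
import random

def retained_counts(depth, k=2, m=2, p=0.7, seed=0):
    """Number of retained k-adic cubes at each level: every retained cube
    spawns k**m children, each kept independently with probability p."""
    rng = random.Random(seed)
    counts, n = [1], 1
    for _ in range(depth):
        n = sum(1 for _ in range(n * k**m) if rng.random() < p)
        counts.append(n)
        if n == 0:           # extinction: the limit set is empty
            break
    return counts

k, m, p = 2, 2, 0.7          # supercritical, since p > k**(-m) = 0.25
d = m + math.log(p) / math.log(k)
print(retained_counts(6), round(d, 4))
```

For $p\le k^{-m}$ the simulated population dies out almost surely, in line with the extinction criterion of \cite[Theorem 1]{AN}.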
We denote by $\card A$ the number of elements in a set $A$.
For a word $\sigma\in I^*$, consider the martingale
$\{N_{j,\sigma}k^{-jd}\}_{j\in\N}$, where
\[
N_{j,\sigma}=\card\{\tau\in I^*\mid |\tau|=|\sigma|+j,\tau\succ\sigma
\text{ and }\omega(\tau)=c\},
\]
and denote its almost sure (finite) limit by $X_\sigma(\omega)$.
For all $l\in\N\cup\{0\}$, define a random variable $X_l$ on
$I^\N\times\Omega$ by $X_l(\eta,\omega)=X_{\eta|_l}(\omega)$.
It is easy to see that, for $P$-almost all $\omega\in\Omega$,
\[
X_\sigma(\omega)=\sum_{\tau\in I^j}k^{-jd}X_{\sigma*\tau}(\omega)
\1_{\{\omega(\sigma*\tau)=c\}}
\]
for all $j\in\N$, where the characteristic function of a set $A$ is
denoted by $\1_A$. Further, for $\sigma,\tau\in I^*$, the random
variables $X_\sigma$ and $X_\tau$ are identically
distributed (see \cite[Proposition 1]{Be}) and, if
$\sigma\not\prec\tau$ and $\tau\not\prec\sigma$, they are independent.
Thus, $X_l$, $l\in\N\cup\{0\}$, have the same distribution.
According to \cite[Theorem 3.2]{MW}, the variables $X_\sigma(\omega)$ are related
to the measure $\nu_\omega$ for $P$-almost all $\omega\in\Omega$ by the formulae
\begin{align}
&\nu_\omega(J_\sigma)=(\diam J_\sigma)^dX_\sigma(\omega)\text{ for all }\sigma\in I^*
\text{ and}\label{nuversusX}\\
&\sum_{\substack{\tau\in I^j\\J_\tau\cap B\ne\emptyset}}l_\tau^dX_\tau(\omega)
\searrow\nu_\omega(B)\text{ as }j\to\infty\text{ for all Borel sets }B\subset K_\omega,\label{nudef}
\end{align}
where $l_\tau=\diam J_\tau=\diam J_\emptyset\,k^{-j}\1_{\{\omega(\tau)=c\}}$.
By \eqref{Qdef} and \eqref{nuversusX}, expectations with respect to the
measures $P$ and $Q$ are connected in the following way (see also
\cite[(1.16)]{GMW}): if $j\in\N$ and
$Y\colon I^\N\times\Omega\to\R$ is a random variable such that
$Y(\eta,\omega)=Y(\eta^\prime,\omega)$ provided that $\eta|_j=\eta^\prime|_j$, then
\begin{equation}\label{expectationqtop}
E_Q[Y]=E_P\Bigl[\sum_{\substack{\sigma\in I^j\\
\omega(\sigma)=c}} k^{-jd}X_\sigma Y(\sigma,\cdot)\Bigr].
\end{equation}
Hence, we have
\begin{equation}\label{expectationXzero}
Q(X_l=0)=0\text{ and }E_Q[X_l]=E_P[X_0^2]<\infty
\end{equation}
for all $l\in\N\cup\{0\}$ (see \cite[Theorem 2.1]{MW}).
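The normalisation behind $X_\sigma$ can also be illustrated numerically (an informal sketch, not part of the formal development; the function name and parameter choices are ours): averaging independent samples of $N_jk^{-jd}=N_j/(k^mp)^j$ over many realisations should give a value close to $E_P[X_\emptyset]=1$.

```python
import random

def normalised_count(depth=8, k=2, m=2, p=0.7, seed=0):
    """One sample of the martingale N_j * k**(-j*d) = N_j / (k**m * p)**j;
    its almost sure limit is X with E[X] = 1."""
    rng = random.Random(seed)
    n = 1
    for _ in range(depth):
        n = sum(1 for _ in range(n * k**m) if rng.random() < p)
        if n == 0:
            return 0.0       # extinction contributes a zero sample
    return n / (k**m * p)**depth

samples = [normalised_count(seed=s) for s in range(400)]
print(sum(samples) / len(samples))   # close to E[X] = 1
```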
\section{Porosities}\label{porosities}
In this section, we define porosities and mean porosities of sets and
measures and prove some basic properties for them.
\begin{definition}\label{defporoset}
Let $A\subset\R^m$, $x\in\R^m$ and $r>0$. The local
porosity of $A$ at $x$ at distance $r$ is
\[
\begin{split}
\por(A,x,r)=\sup\{\alpha\ge 0\mid\, &\text{there is }z\in\R^m
\text{ such that }\\
&B(z,\alpha r)\subset B(x,r)\setminus A\},
\end{split}
\]
where the open ball centred at $x$ and with radius $r$ is denoted by $B(x,r)$.
The lower and upper porosities of $A$ at $x$ are defined as
\[
\lpor(A,x)=\liminf_{r\to 0}\por(A,x,r)\text{ and }
\upor(A,x)=\limsup_{r\to 0}\por(A,x,r),
\]
respectively. If $\lpor(A,x)=\upor(A,x)$, the common value, denoted by
$\por(A,x)$, is called the porosity of $A$ at $x$.
\end{definition}
\begin{definition}\label{defporomeas}
The lower and upper porosities of a Radon measure $\mu$ on $\R^m$
at a point $x\in\R^m$ are defined by
\[
\begin{split}
&\lpor(\mu,x)=\lim_{\varepsilon\to 0}\liminf_{r\to 0}\por(\mu,x,r,\varepsilon)
\text{ and}\\
&\upor(\mu,x)=\lim_{\varepsilon\to 0}\limsup_{r\to 0}
\por(\mu,x,r,\varepsilon),
\end{split}
\]
respectively, where for all $r,\varepsilon>0$,
\[
\begin{split}
\por(\mu,x,r,\varepsilon)
=&\sup\{\alpha\ge 0\mid\text{ there is }z\in\R^m\text{ such that }\\
&B(z,\alpha r)\subset B(x,r)\text{ and }\mu(B(z,\alpha r))
\le\varepsilon\mu(B(x,r))\}.
\end{split}
\]
If the upper and lower porosities agree, the common value is called the porosity
of $\mu$ at $x$ and denoted by $\por(\mu,x)$.
\end{definition}
\begin{remark}\label{difdef}
(a) In some sources, the condition $B(z,\alpha r)\subset B(x,r)\setminus A$
in Definition~\ref{defporoset} is replaced by the condition
$B(z,\alpha r)\cap A=\emptyset$, leading to the definition
\[
\begin{split}
\widetilde{\por}(A,x,r)=\sup\{\alpha\ge 0\mid\, &\text{there is }z\in B(x,r)
\text{ such that }\\
&B(z,\alpha r)\cap A=\emptyset\}.
\end{split}
\]
It is not difficult to see that
\[
\widetilde{\por}(A,x)=\frac{\por(A,x)}{1-\por(A,x)},
\]
which is valid both for the lower and upper porosity. Indeed, this follows
from two geometric observations. First, $B(z,\alpha r)\cap A=\emptyset$
with $z\in\partial B(x,r)$ if and only if
$B(z,\alpha r)\subset B(x,(1+\alpha)r)\setminus A$ with
$\partial B(z,\alpha r)\cap\partial B(x,(1+\alpha)r)\ne\emptyset$, where
the boundary of a set $B$ is denoted by $\partial B$. Second, at local minima
and maxima of the function $r\mapsto\por(A,x,r)$, we have
$\partial B(z,\alpha r)\cap\partial B(x,(1+\alpha)r)\ne\emptyset$, and at
local minima and maxima of the function $r\mapsto\widetilde\por(A,x,r)$, we
have $z\in\partial B(x,r)$.
(b) Unlike the dimension, the porosity is sensitive to the metric. For
example, defining cube-porosities by using cubes instead of balls in the
definition, there is no formula to convert porosities to cube-porosities or vice
versa. It is easy to construct a set
such that the cube-porosity attains its maximum value (at some point) but the
porosity does not. Take, for example, the union of the x- and y-axes in the
plane. However, the lower porosity is positive if and only if the lower
cube-porosity is positive.
(c) In general metric spaces, in addition to
$B(z,\alpha r)\subset B(x,r)\setminus A$, it is sometimes useful to require
that the empty ball $B(z,\alpha r)$ is inside the reference ball $B(x,r)$ also
algebraically, that is, $d(x,z)+\alpha r\le r$. For further discussion about
this matter, see \cite{MMPZ}.
\end{remark}
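To make Definition~\ref{defporoset} and the conversion formula of Remark~\ref{difdef}(a) concrete, consider the closed lower half-plane $A=\{(y_1,y_2)\in\R^2\mid y_2\le 0\}$ and $x=0$, for which $\por(A,x)=\frac 12$ and $\widetilde\por(A,x)=1$. An informal numerical sketch (ours, not part of the paper; the grid search uses the symmetry of the example):

```python
# A = closed lower half-plane {y <= 0} in R^2, x = the origin, r = 1.
# By symmetry the largest empty ball is centred on the positive y-axis:
# B((0, y), a) lies in B(0, 1) \ A iff a <= y (above A) and a <= 1 - y.
por = max(min(y, 1 - y) for y in (i / 1000 for i in range(1, 1000)))
por_tilde = por / (1 - por)   # conversion formula of the remark
print(por, por_tilde)         # -> 0.5 1.0
```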
The lower and upper porosities give the relative sizes of the smallest and
largest holes, respectively. Taking into consideration the frequency of the
scales at which holes appear leads to the notion of mean porosity. We proceed
by giving a definition which is adapted to the Mandelbrot percolation process.
We will use the maximum metric $\varrho$, that is,
$\varrho(x,y)=\max_{i\in\{1,\dots,m\}}\{|x_i-y_i|\}$, and denote by $B_\varrho(y,r)$
the open ball centred at $y$ and with radius $r$ with respect to this metric.
Recall that the balls in the maximum metric are cubes whose
faces are parallel to the coordinate planes.
\begin{definition}\label{ourmean}
Let $A\subset\R^m$, $\mu$ be a Radon measure on $\R^m$, $x\in\R^m$,
$\alpha\in[0,1]$ and $\varepsilon>0$. For $j\in\N$, we say that $A$ has an
$\alpha$-hole at scale $j$ near $x$ if there is a point $z\in Q_j^k(x)$
such that
\[
B_\varrho(z,\tfrac 12\alpha k^{-j})\subset Q_j^k(x)\setminus A.
\]
Here $Q_j^k(x)$ is the half-open $k$-adic cube of side length $k^{-j}$
containing $x$ and $B_\varrho(z,\tfrac 12\alpha k^{-j})$ is called
an $\alpha$-hole. We say that $\mu$ has an $(\alpha,\varepsilon)$-hole at scale
$j$ near $x$ if there is a point $z\in Q_j^k(x)$ such that
\[
B_\varrho(z,\tfrac 12\alpha k^{-j})\subset Q_j^k(x)\text{ and }
\mu(B_\varrho(z,\tfrac 12\alpha k^{-j}))\le\varepsilon\mu(Q_j^k(x)).
\]
\end{definition}
\begin{remark}\label{whytwo}
Note that, unlike in Definition \ref{defporoset}, we have divided the radius of
the ball in the complement of $A$ as well as that with small measure
by $2$ and, therefore, $\alpha$ may attain values between $0$ and $1$. The
reason for this is that the point $x$ may be arbitrarily close to the boundary
of $Q_j^k(x)$ and, if the whole cube $Q_j^k(x)$ is empty, it is natural to say
that there is a hole of relative size $1$.
\end{remark}
\begin{definition}\label{defmean}
Let $\alpha\in[0,1]$. The lower $\alpha$-mean
porosity of a set $A\subset\R^m$ at a point $x\in\R^m$ is
\[
\underline{\kappa}(A,x,\alpha)=\liminf_{i\to\infty}\frac{N_i(A,x,\alpha)}i
\]
and the upper $\alpha$-mean porosity is
\[
\overline{\kappa}(A,x,\alpha)=\limsup_{i\to\infty}\frac{N_i(A,x,\alpha)}i,
\]
where
\[
\begin{split}
N_i(A,x,\alpha)=\card\{j\in\N\mid\, & j\le i\text{ and }A\text{ has
an }\alpha\text{-hole at scale }\\
&j\text{ near }x\}.
\end{split}
\]
In the case the limit exists, it is called the $\alpha$-mean porosity and
denoted by $\kappa(A,x,\alpha)$. The lower $\alpha$-mean porosity of a Radon
measure $\mu$ on $\R^m$ at $x\in\R^m$ is
\[
\underline{\kappa}(\mu,x,\alpha)=\lim_{\e\to 0}\liminf_{i\to\infty}
\frac{\widetilde N_i(\mu,x,\alpha,\e)}i
\]
and the upper one is
\[
\overline{\kappa}(\mu,x,\alpha)=\lim_{\e\to 0}\limsup_{i\to\infty}
\frac{\widetilde N_i(\mu,x,\alpha,\e)}i,
\]
where
\[
\begin{split}
\widetilde N_i(\mu,x,\alpha,\varepsilon)=\card\{j\in\N\mid\, & j\le i
\text{ and }\mu\text{ has an }(\alpha,\varepsilon)\text{-hole at}\\
&\text{scale }j\text{ near } x\}.
\end{split}
\]
If the lower and upper mean porosities coincide, the common value, denoted by
$\kappa(\mu,x,\alpha)$, is called the $\alpha$-mean porosity of $\mu$.
\end{definition}
\begin{remark}\label{meanunstable}
Mean porosity is highly sensitive to the choice of
parameters. The definition is given in terms of $k$-adic cubes. For the
Mandelbrot percolation, this is natural. For general sets, fixing an integer
$h>1$, a natural choice is to say that $A$ has an $\alpha$-hole at scale $j$
near $x$, if there is $z\in\R^m$ such that
$B(z,\alpha h^{-j}r_0)\subset B(x,h^{-j}r_0)\setminus A$ for some (or for all)
$h^{-1}<r_0\le 1$. However, the choice of $r_0$ and $h$ matters as will be shown
in Example~\ref{norel} below. Shmerkin proposed in \cite{Shm2} the following
base and starting scale independent notion of lower mean porosity of a measure
(which can be adapted for sets and upper porosity as well): a measure $\mu$ is
lower $(\alpha,\kappa)$-mean porous at a point $x\in\R^m$ if
\[
\liminf_{\rho\to 1}\,(\log\tfrac 1\rho)^{-1}\int_\rho^1
\1_{\{r\mid\por(\mu,x,r,\e)\ge\alpha\}}r^{-1}\,dr\ge\kappa\text{ for all }\e>0.
\]
The disadvantage of this definition is that it is more complicated to calculate
than the discrete version. To avoid these problems, one option
is to aim at qualitative results concerning all parameter values, as our
approach will show.
\end{remark}
Next we give a simple example demonstrating the dependence of
mean porosity on the starting scale and the base of scales.
\begin{example}\label{norel}
Fix an integer $h>1$. In this example, we use a modification of
Definitions~\ref{ourmean} and \ref{defmean} where $A\subset\R^m$ has an
$\alpha$-hole at scale $j$ near $x$, if there exists
$z\in\R^m$ such that $B(z,\alpha h^{-j})\subset B(x,h^{-j})\setminus A$.
Let $x\in\R^2$. We define a set $A\subset\R^2$ as follows.
For all $i\in\N\cup\{0\}$, consider the half-open annulus
$D(i)=\{y\in\R^2\mid h^{-i-1}<|y-x|\le h^{-i}\}$. Let
$A=\bigcup_{i=0}^\infty D(3i+1)\cup D(3i+2)$, that is, we choose
two annuli out of every three successive ones and leave the third one empty.
In this case, $\kappa(A,x,\frac 12(1-h^{-1}))=\frac 13$. If we replaced $h$ by
$h^3$ in the definition of scales, we would conclude that
$\kappa(A,x,\frac 12(1-h^{-1}))=1$. (Note that the lower and upper
porosities are equal.) If we define $A$ by starting with the two
filled annuli, that is, $A=\bigcup_{i=0}^\infty D(3i)\cup D(3i+1)$, then
$\kappa(A,x,\frac 12(1-h^{-1}))=\frac 13$ using scales determined by $h$ and
$\kappa(A,x,\frac 12(1-h^{-1}))=0$ if scales are determined by powers of $h^3$.
By mixing these constructions in a suitable way, one easily finds an example
where $\kappa(A,x,\frac 12(1-h^{-1}))=\frac 13$ for scales given by $h$, but
$\underline{\kappa}(A,x,\frac 12(1-h^{-1}))=0$ and
$\overline{\kappa}(A,x,\frac 12(1-h^{-1}))=1$ if the scales are determined by
$h^3$.
\end{example}
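The scale counts in Example~\ref{norel} can be tallied directly (an informal sketch, ours): in the first construction the empty annuli are $D(3i)$, $i\ge 0$, so a base-$h$ scale $j$ carries a hole exactly when $j\equiv 0\pmod 3$, while a base-$h^3$ scale $j$ probes $D(3j)$, which is always empty.

```python
# First construction: the empty annuli are D(3i), so base-h scale j has
# a hole iff j % 3 == 0; with base h^3, scale j probes D(3j) (always empty).
N = 3000
frac_base_h = sum(1 for j in range(1, N + 1) if j % 3 == 0) / N
frac_base_h3 = sum(1 for j in range(1, N + 1) if (3 * j) % 3 == 0) / N
print(frac_base_h, frac_base_h3)   # -> 0.3333333333333333 1.0
```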
We finish this section with some measurability results. For that we need some
notation.
\begin{definition}\label{holeindicator}
For all $j\in\N$ and $\alpha\in[0,1]$, define a function
$\chi_j^\alpha\colon I^\N\times\Omega\to\{0,1\}$ by setting
$\chi_j^\alpha(\eta,\omega)=1$, if and only if
$K_\omega$ has an $\alpha$-hole at scale $j$ near $x(\eta)$. Define a function
$\overline{\chi}_j^\alpha\colon I^\N\times\Omega\to\{0,1\}$ in the same
way except that the $\alpha$-hole is a closed ball instead of an open one.
For all $\alpha\in (0,1)$, $\e>0$ and $j\in\N$, define a function
$\chi_j^{\alpha,\epsilon}\colon I^\N\times\Omega\to\{0,1\}$ by setting
$\chi_j^{\alpha,\epsilon}(\eta,\omega)=1$, if and only if
$\nu_\omega$ has an $(\alpha,\epsilon)$-hole at scale $j$ near $x(\eta)$.
Finally, define a function
$\overline\chi_j^{\alpha,\epsilon}\colon I^\N\times\Omega\to\{0,1\}$ by setting
$\overline\chi_j^{\alpha,\epsilon}(\eta,\omega)=1$, if and only if there exists
$z\in Q_j^k(x(\eta))$ such that
$\nu_\omega(\overline B_\varrho(z,\tfrac 12\alpha k^{-j}))<\varepsilon
\nu_\omega(Q_j^k(x(\eta)))$.
Here the closed ball in metric $\varrho$ centred at $z\in\R^m$ with radius
$r>0$ is denoted by $\overline B_\varrho(z,r)$.
\end{definition}
\begin{lemma}\label{measurability}
The maps
\[
\begin{split}
&(\eta,\omega)\mapsto\underline{\kappa}(K_\omega,x(\eta),\alpha),\\
&(\eta,\omega)\mapsto\overline{\kappa}(K_\omega,x(\eta),\alpha),\\
&(\eta,\omega)\mapsto\underline{\kappa}(\nu_\omega,x(\eta),\alpha)
\text{ and}\\
&(\eta,\omega)\mapsto\overline{\kappa}(\nu_\omega,x(\eta),\alpha)
\end{split}
\]
are Borel measurable for all $\alpha\in[0,1]$.
\end{lemma}
\begin{proof}
Note that $\overline{\chi}_j^\alpha(\cdot,\omega)$ is locally constant for all
$\omega\in\Omega$, that is, its value depends only on $\eta|_j$.
Further, suppose that $\overline{\chi}_j^\alpha(\eta,\omega)=1$. Then $K_\omega$
has a closed $\alpha$-hole $H$ at scale $j$ near $x(\eta)$. Since $H$
and $K_\omega$ are closed, their distance is positive. So there exists a finite
set $T\subset I^*$ such that $\omega(\tau)=n$ for all $\tau\in T$ and
\[
H\subset\bigcup_{\tau\in T}J_\tau.
\]
If $\omega'\in\Omega$ is close to $\omega$, then
$\omega'(\tau)=n$ for all $\tau\in T$, which implies that
$\overline{\chi}_j^\alpha(\eta,\omega')=1$. We conclude that
$\overline{\chi}_j^\alpha$ is continuous at $(\eta,\omega)$. Trivially,
$\overline{\chi}_j^\alpha$ is lower semi-continuous at those points where
$\overline{\chi}_j^\alpha(\eta,\omega)=0$. Therefore, $\overline{\chi}_j^\alpha$
is lower semi-continuous.
Let $\alpha_i$ be a strictly increasing sequence approaching $\alpha$. We claim
that
\begin{equation}\label{alphalim}
\lim_{i\to\infty}\overline{\chi}_j^{\alpha_i}(\eta,\omega)=\chi_j^\alpha(\eta,\omega)
\end{equation}
for all $(\eta,\omega)\in I^\N\times\Omega$. Indeed, obviously
$\chi_j^\alpha(\eta,\omega)\le\overline{\chi}_j^{\alpha_i}(\eta,\omega)$ for all
$i\in\N$,
and the sequence $(\overline{\chi}_j^{\alpha_i}(\eta,\omega))_{i\in\N}$ is
decreasing. Thus, it is enough to study the case
$\lim_{i\to\infty}\overline{\chi}_j^{\alpha_i}(\eta,\omega)=1$. Let
$(\overline B_\varrho(z_i,\frac 12\alpha_ik^{-j}))_{i\in\N}$ be a corresponding
sequence of closed holes. In this case, one may find a subsequence
of $(z_i)_{i\in\N}$ converging to some $z\in\R^m$ with
$B_\varrho(z,\frac 12\alpha k^{-j})\subset Q_j^k(x(\eta))\setminus K_\omega$,
completing the proof of \eqref{alphalim}. As a limit of semi-continuous
functions, $\chi_j^\alpha$ is Borel measurable. Now
$N_i(K_\omega,x(\eta),\alpha)=\sum_{j=1}^i\chi_j^\alpha(\eta,\omega)$, implying
that the map
$(\eta,\omega)\mapsto\underline{\kappa}(K_\omega,x(\eta),\alpha)$
(as well as the upper mean porosity) is Borel measurable.
By construction, the map $\omega\mapsto X_\tau(\omega)$ is Borel measurable for
all $\tau\in I^*$. Therefore, $\omega\mapsto\nu_\omega(B)$ is a Borel map for
all Borel sets $B\subset\R^m$ by \eqref{nudef}. In particular, the map
$(\eta,\omega)\mapsto\nu_\omega\bigl(\overline B_\varrho(z,\tfrac 12\alpha k^{-j})
\bigr)-\varepsilon\nu_\omega\bigl(Q_j^k(x(\eta))\bigr)$
is Borel measurable for all $z\in\R^m$, $\alpha\in[0,1]$, $\e>0$ and $j\in\N$.
Let $(z_i)_{i\in\N}$ be a dense set in $[0,1]^m$. Let $s>0$, $\alpha\in[0,1]$
and $j\in\N$. Suppose that there exists $z\in Q_j^k(x(\eta))$ such that
$\nu_\omega(\overline B_\varrho(z,\tfrac 12\alpha k^{-j}))<s$. Since the map
$x\mapsto\nu_\omega(\overline B_\varrho(x,r))$ is upper semicontinuous, there
exists $z_i\in Q_j^k(x(\eta))$ such that
$\nu_\omega(\overline B_\varrho(z_i,\tfrac 12\alpha k^{-j}))<s$. Thus,
$\overline\chi_j^{\alpha,\e}$ is Borel measurable. Further,
$\chi_j^{\alpha,\e}(\eta,\omega)=1$ if and only if there exist an increasing
sequence $(\alpha_i)_{i\in\N}$ tending to $\alpha$ and a decreasing sequence
$(\e_i)_{i\in\N}$ tending to $\e$ such that
$\overline\chi_j^{\alpha_i,\e_i}(\eta,\omega)=1$. Therefore, $\chi_j^{\alpha,\e}$
is Borel measurable, and the claim follows as in the case of mean porosities
of sets.
\end{proof}
\begin{remark}\label{monotonicity}
(a) Note that, for all $(\eta,\omega)\in I^\N\times\Omega$, the function
$\alpha\mapsto\chi_0^\alpha(\eta,\omega)$ is decreasing
and, thus, the lower and upper mean porosity functions are also
decreasing as functions of $\alpha$.
(b) Later, we will need modifications of the functions $\chi_j^\alpha$
defined in the proof of Lemma~\ref{measurability}. Their Borel measurability
can be proven analogously to that of $\chi_j^\alpha$.
\end{remark}
\section{Results}\label{results}
\renewcommand{\theenumi}{\roman{enumi}}
In this section, we state and prove our results concerning mean
porosities of Mandelbrot percolation and its natural measure.
To prove the existence of mean porosity and to
compare the mean porosities of the limit set and the construction measure,
we need a tool to establish the validity of the strong law of large numbers for
certain sequences of random variables. We will use \cite[Theorem 1]{HRV} (see
also \cite[Corollary 11]{L}), which we state (in a simplified form) for the
convenience of the reader.
\begin{theorem}\label{HRV}
Let $\{Y_n\}_{n\in\N}$ be a sequence of square-integrable random variables
and suppose that there exists a sequence of constants $(\rho_k)_{k\in\N}$
such that
\[
\sup_{n\in\N}|\Cov(Y_n,Y_{n+k})|\le\rho_k
\]
for all $k\in\N$. Assume that
\[
\sum_{n=1}^\infty\frac{\Var(Y_n)\log^2n}{n^2}<\infty\text{ and }
\sum_{k=1}^\infty\rho_k<\infty.
\]
Then $\{Y_n\}_{n\in\N}$ satisfies the strong law of large numbers. Here the
covariance and variance are denoted by $\Cov$ and $\Var$, respectively.
\end{theorem}
We will apply Theorem~\ref{HRV} to stationary sequences of random variables
which are indicator functions of events with equal probabilities.
In this setup, all conditions of the theorem will be satisfied if
\begin{equation}\label{convergencecovariances}
\sum_{j=1}^\infty \Cov(Y_0,Y_j)<\infty
\end{equation}
and, in particular, if $Y_i$ is independent of $Y_j$ whenever
$|i-j|$ is greater than some fixed integer.
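As an informal illustration of this special case (ours, not part of the formal development), consider the stationary $1$-dependent indicator sequence $Y_j=\1_{\{U_j+U_{j+1}>1\}}$ with $U_j$ i.i.d.\ uniform on $[0,1]$: the covariance series \eqref{convergencecovariances} reduces to a finite sum, and the empirical mean converges to $E[Y_0]=\frac 12$.

```python
import random

def empirical_mean(n=100_000, seed=1):
    """Empirical mean of Y_j = 1{U_j + U_{j+1} > 1}: the sequence is
    stationary and 1-dependent, so Cov(Y_0, Y_j) = 0 for j >= 2."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n + 1)]
    return sum(1 for j in range(n) if u[j] + u[j + 1] > 1) / n

print(empirical_mean())   # close to E[Y_0] = 1/2
```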
For all $\alpha\in[0,1]$ and $j,r\in\N$, define
$\chi_{j,r}^\alpha\colon I^\N\times\Omega\to\{0,1\}$ similarly to
$\chi_j^\alpha$, with the exception that the entire hole is required to lie in
$J_{\eta|_j}\setminus J_{\eta|_{j+r}}$. Observe that
$\chi_{j,r}^\alpha(\eta,\omega)=\chi_{j,r}^\alpha(\eta',\omega)$ provided that
$\eta|_{j+r}=\eta'|_{j+r}$. Therefore, for any $\tau\in I^{j+r}$, we may define
$\chi_{j,r}^\alpha(\tau,\omega)=\chi_{j,r}^\alpha(\eta,\omega)$, where
$\eta|_{j+r}=\tau$. Note that, given $\omega(\tau|_j)=c$, the value of the
function $\chi_{j,r}^\alpha(\tau,\cdot)$ depends only on the restriction of
$\omega$ to $J_{\tau|_j}\setminus J_{\tau|_{j+r}}$.
\begin{lemma}\label{independence}
Let $\alpha\in[0,1]$ and $r\in\N$. The random variables $\chi_{j,r}^\alpha$
and $\chi_{i,r}^\alpha$ are $Q$-independent for all $i,j\in\N$ with
$|i-j|\ge r$ and
\[
\Cov(\chi_{n,r}^\alpha,\chi_{n+k,r}^\alpha)=\Cov(\chi_{0,r}^\alpha,\chi_{k,r}^\alpha)
\]
for all $n,k,r\in\N$.
\end{lemma}
\begin{proof}
Observe that, for any $\tau\in I^{j+r}$, the variables $X_\tau$ and
$\chi_{0,r}^\alpha(\tau,\cdot)$ are $P$-independent given $\omega(\tau)=c$.
Recalling that $E_P[X_\tau\mid\omega(\tau)=c]=1$ by \eqref{Qdef} and
\eqref{nuversusX}, we obtain by \eqref{expectationqtop} that
\begin{align*}
E_Q[\chi_{0,r}^\alpha]&=E_P\bigl[\sum_{\tau\in I^r}
k^{-rd}\1_{\{\omega(\tau)=c\}}X_\tau
\chi_{0,r}^\alpha(\tau,\cdot)\bigr]\\
&=E_P\bigl[\sum_{\tau\in I^r}k^{-rd}\1_{\{\omega(\tau)=c\}}
E_P[\chi_{0,r}^\alpha(\tau,\cdot)\mid\omega(\tau)=c]\bigr].
\end{align*}
Further, for any $j\in\N$ (using the above calculation in the third equality),
\begin{align*}
&E_Q[\chi_{j,r}^\alpha]=E_P\bigl[\sum_{\sigma\in I^j}k^{-jd}\1_{\{\omega(\sigma)=c\}}
\sum_{\tau\in I^r}k^{-rd}\1_{\{\omega(\sigma\ast\tau)=c\}}X_{\sigma\ast\tau}
\chi_{j,r}^\alpha(\sigma\ast\tau,\cdot)\bigr]\\
&=E_P\Bigl[\sum_{\sigma\in I^j}k^{-jd}\1_{\{\omega(\sigma)=c\}}
E_P\bigl[\sum_{\tau\in I^r}k^{-rd}\1_{\{\omega(\sigma\ast\tau)=c\}}X_{\sigma\ast\tau}
\chi_{j,r}^\alpha(\sigma\ast\tau,\cdot)\mid\omega(\sigma)=c\bigr]\Bigr]\\
&=E_P\bigl[\sum_{\sigma\in I^j}k^{-jd}\1_{\{\omega(\sigma)=c\}}E_Q[\chi_{0,r}^\alpha]
\bigr]=E_Q[\chi_{0,r}^\alpha].
\end{align*}
Let $i,j\in\N$ be such that $j-i\ge r$. Then (using the above calculation in
the third and fourth equality)
\begin{align*}
&E_Q[\chi_{i,r}^\alpha\chi_{j,r}^\alpha]\\
&=E_P\bigl[\sum_{\tau\in I^{i+r}}k^{-(i+r)d}\1_{\{\omega(\tau)=c\}}
\chi_{i,r}^\alpha(\tau,\cdot)
\sum_{\sigma\in I^{j-i}}k^{-(j-i)d}\1_{\{\omega(\tau\ast\sigma)=c\}}X_{\tau\ast\sigma}
\chi_{j,r}^\alpha(\tau\ast\sigma,\cdot)\bigr]\\
&=E_P\Bigl[\sum_{\tau\in I^{i+r}}k^{-(i+r)d}\1_{\{\omega(\tau)=c\}}
E_P[\chi_{i,r}^\alpha(\tau,\cdot)\mid\omega(\tau)=c]\\
&\phantom{llllll}\times E_P\bigl[
\sum_{\sigma\in I^{j-i}}k^{-(j-i)d}X_{\tau\ast\sigma}\1_{\{\omega(\tau\ast\sigma)=c\}}
\chi_{j,r}^\alpha(\tau\ast\sigma,\cdot)\mid\omega(\tau)=c\bigr]\Bigr]\\
&=E_P\bigl[\sum_{\tau\in I^{i+r}}k^{-(i+r)d}\1_{\{\omega(\tau)=c\}}
E_P[\chi_{i,r}^\alpha(\tau,\cdot)\mid\omega(\tau)=c] E_Q[\chi_{0,r}^\alpha]\bigr]\\
&=E_Q[\chi_{i,r}^\alpha]E_Q[\chi_{j,r}^\alpha].
\end{align*}
The last claim follows from a similar calculation.
\end{proof}
Next we prove a lemma which gives lower and upper bounds for mean porosities
at typical points.
\begin{lemma}\label{holecontinuity} For all $\alpha\in(0,1)$, we have
\[
E_Q[\overline{\chi}_0^\alpha] \le
\underline{\kappa}(K_\omega,x(\eta),\alpha)
\le\overline{\kappa}(K_\omega,x(\eta),\alpha)
\le E_Q[ \chi_0^\alpha]
\]
for $Q$-almost all $(\eta,\omega)\in I^{\N}\times\Omega$.
\end{lemma}
\begin{proof}
Note that for every $\alpha\in(0,1)$ and $r\in\N$ with $k^{-r}<\alpha$, we have
\[
\chi_{j,r}^\alpha(\eta,\omega)\le\chi_j^\alpha(\eta,\omega)\le
\chi_{j,r}^{\alpha-k^{-r}}(\eta,\omega)
\]
for all $(\eta,\omega)\in I^{\N}\times\Omega$ satisfying $x(\eta)\in K_\omega$.
Recall that $\nu_\omega$ is supported on $K_\omega$ for $P$-almost all
$\omega\in\Omega$.
Combining Lemma~\ref{independence} and Theorem~\ref{HRV}, we conclude that,
for all $r\in\N$ and $Q$-almost all $(\eta,\omega)\in I^{\N}\times\Omega$,
we have
\begin{align*}
E_Q[\chi_{0,r}^\alpha]&=\lim_{n\to\infty}\frac 1n \sum_{j=1}^n
\chi_{j,r}^\alpha(\eta,\omega)\le\underline\kappa(K_\omega,x(\eta),\alpha)\\
&\le\overline\kappa(K_\omega,x(\eta),\alpha)\le\lim_{n\to\infty}\frac 1n
\sum_{j=1}^n \chi_{j,r}^{\alpha-k^{-r}}(\eta,\omega)=E_Q[\chi_{0,r}^{\alpha-k^{-r}}].
\end{align*}
Observe that, for all $(\eta,\omega)\in I^{\N}\times\Omega$ satisfying
$x(\eta)\in K_\omega$, we have
$\lim_{r\to\infty}\chi_{0,r}^\alpha(\eta,\omega)
\ge\overline\chi_0^\alpha(\eta,\omega)$, since the distance between a closed
$\alpha$-hole and $K_\omega$ is positive. Further, the inequality
$\chi_{0,r}^{\alpha-k^{-r}}\le\overline{\chi}_0^{\alpha-2k^{-r}}$ is always valid and
$\lim_{r\to\infty}\overline{\chi}_0^{\alpha-2k^{-r}}=\chi_0^\alpha$ by \eqref{alphalim}.
Hence,
\[
E_Q[\overline{\chi}_0^\alpha]\le\underline\kappa(K_\omega,x(\eta),\alpha)
\le\overline\kappa(K_\omega,x(\eta),\alpha)\le E_Q[ \chi_0^\alpha]
\]
for $Q$-almost all $(\eta,\omega)\in I^{\N}\times\Omega$.
\end{proof}
In fact, the upper bound obtained above holds with equality.
\begin{proposition}\label{exactupper}
For all $\alpha\in (0,1)$, we have that
\[
\overline{\kappa}(K_\omega,x(\eta),\alpha)=E_Q[ \chi_0^\alpha]
\]
for $Q$-almost all $(\eta,\omega)\in I^{\N}\times\Omega$.
\end{proposition}
\begin{proof}
We start by proving that, for all $\alpha\in (0,1)$,
\[
\lim_{j\to\infty}\Cov(\chi_0^\alpha,\chi_j^\alpha)=0.
\]
Let $\alpha\in (0,1)$. Note that, for all $(\eta,\omega)\in I^{\N}\times\Omega$
satisfying $x(\eta)\in K_\omega$, we have
\[
\chi_0^\alpha(\eta,\omega)\le\chi_{0,r}^{\alpha-k^{-r}}(\eta,\omega)
\le\overline{\chi}_0^{\alpha-2k^{-r}}(\eta,\omega)
\]
for all $r\in\N$ such that $2k^{-r}<\alpha$. Therefore, for all $j\in\N$ with
$2k^{-j}<\alpha$, we have the following estimate
\[
\Cov(\chi_0^\alpha,\chi_j^\alpha)=E_Q[\chi_0^\alpha\chi_j^\alpha]
-E_Q[\chi_0^\alpha]E_Q[\chi_j^\alpha]\le E_Q[\chi_{0,j}^{\alpha-k^{-j}}\chi_j^\alpha]-E_Q[\chi_0^\alpha]^2.
\]
Next we note that the random variables $\chi_{0,j}^{\alpha-k^{-j}}$ and
$\chi_j^\alpha$ are $Q$-independent (compare Lemma~\ref{independence}),
hence
\[
\Cov(\chi_0^\alpha,\chi_j^\alpha)\le E_Q[\chi_0^\alpha]E_Q[\chi_{0,j}^{\alpha-k^{-j}}
-\chi_0^\alpha]
\le E_Q[\chi_0^\alpha]E_Q[\overline{\chi}_0^{\alpha-2k^{-j}}-\chi_0^\alpha] .
\]
Since $\lim_{j\to\infty}(\alpha-2k^{-j})=\alpha$, the equality \eqref{alphalim}
and the dominated convergence theorem imply that
$\lim_{j\to\infty}E_Q[\overline{\chi}_0^{\alpha-2k^{-j}}-\chi_0^\alpha]=0$. Now, by
Bernstein's theorem, the sequence
$\frac 1n N_n(A,x,\alpha)=\frac 1n\sum_{i=1}^n\chi_i^\alpha$
converges in probability to $E_Q[\chi_0^\alpha]$. Once we have the convergence
in probability, we can find a subsequence converging almost surely and,
therefore, the upper bound in Lemma~\ref{holecontinuity} is attained.
\end{proof}
Define
\[
D=\{\alpha\in (0,1)\mid\beta\mapsto E_Q[\chi_0^\beta]\text{ is discontinuous at }
\beta=\alpha\}.
\]
Since $\beta\mapsto E_Q[\chi_0^\beta]$ is decreasing, the set $D$ is countable.
\begin{theorem}\label{meanporosityexists}
For $Q$-almost all $(\eta,\omega)\in I^{\N}\times\Omega$, we have
\[
\kappa(K_\omega,x(\eta),\alpha)=E_Q[ \chi_0^\alpha]
\]
for all $\alpha\in(0,1)\setminus D$. In particular, for $Q$-almost all
$(\eta,\omega)\in I^{\N}\times\Omega$, the function
$\alpha\mapsto\kappa(K_\omega,x(\eta),\alpha)$ is defined and continuous at all
$\alpha\in(0,1)\setminus D$.
\end{theorem}
\begin{proof}
Since $\chi_0^{\alpha'}\le\overline\chi_0^\alpha\le\chi_0^\alpha$ for all
$\alpha'>\alpha$, we have that $E_Q[\overline\chi_0^\alpha]=E_Q[\chi_0^\alpha]$ for
all $\alpha\in (0,1)\setminus D$. Lemma~\ref{holecontinuity} implies that, for
all $\alpha\in (0,1)\setminus D$, there exists a Borel set
$B_\alpha\subset I^{\N}\times\Omega$ such that
$\kappa(K_\omega,x(\eta),\alpha)=E_Q[ \chi_0^\alpha]$ for all
$(\eta,\omega)\in B_\alpha$ and $Q(B_\alpha)=1$. Let $(\alpha_i)_{i\in\N}$ be a
dense set in $(0,1)$. Since the functions
$\alpha\mapsto\underline \kappa(K_\omega,x(\eta),\alpha)$ and
$\alpha\mapsto\overline\kappa(K_\omega,x(\eta),\alpha)$ are decreasing, we have
for all $(\eta,\omega)\in\bigcap_{i=1}^\infty B_{\alpha_i}$ that
$\kappa(K_\omega,x(\eta),\alpha)=E_Q[ \chi_0^\alpha]$ for all
$\alpha\in(0,1)\setminus D$. Since $Q(\bigcap_{i=1}^\infty B_{\alpha_i})=1$, the
proof is complete.
\end{proof}
\begin{proposition}\label{comparison}
Suppose that $m=2$ and $p>k^{-1}$. Then the set $D$ is non-empty.
\end{proposition}
\begin{proof}
Since $E_Q[\chi_0^{\alpha'}]\le E_Q[\overline\chi_0^\alpha]\le E_Q[\chi_0^\alpha]$
for all $\alpha'>\alpha$, it is enough to show that there exists
$\alpha\in (0,1)$ such that $E_Q[\overline\chi_0^\alpha]<E_Q[\chi_0^\alpha]$.
This, in turn, follows if
\begin{equation}\label{positivemeasure}
Q(\{(\eta,\omega)\in I^\N\times\Omega\mid\overline\chi_0^\alpha(\eta,\omega)=0
\text{ and }\chi_0^\alpha(\eta,\omega)=1\})>0,
\end{equation}
since $\overline\chi_0^\alpha\le\chi_0^\alpha$.
It is shown in \cite{FG} that, if $m=2$ and $p>k^{-1}$, the projection of
$K_\omega$ onto the $x$-axis is the whole unit interval $[0,1]$ with positive
probability. In particular, $K_\omega$ intersects all the faces of $[0,1]^2$
with positive probability. Let $\alpha=k^{-r}$ for some $r\in\N$. Fix
$\sigma\in I^r$. Then there exists a Borel set $B\subset\Omega$ with $P(B)>0$
such that, for all $\omega\in B$, we have $\omega(\sigma)=n$ and
$K_\omega$ intersects all the faces of $J_\tau$ for all $\tau\in I^{r+1}$ with
$\tau|_r\ne\sigma$. In this case, for all $\omega\in B$ and for
$\mu_\omega$-almost all $\eta\in B_\omega$, we have $\chi_0^\alpha(\eta,\omega)=1$
and $\overline\chi_0^\alpha(\eta,\omega)=0$. This implies inequality
\eqref{positivemeasure}.
\end{proof}
\begin{remark}\label{otherdiscontinuities}
A construction similar to the one in the proof of Proposition~\ref{comparison} can
be done for any positive $\alpha=\sum_{j=1}^nq_jk^{-r_j}<1$, where $r_j\in\N$ and
$q_j\in\mathbb Z$, that is, for any hole which is a finite union of
construction squares. We do not know whether $\kappa(K_\omega,x(\eta),\alpha)$
exists for $\alpha\in D$.
\end{remark}
\begin{corollary}\label{positive}
For $P$-almost all $\omega\in\Omega$ and for
$\nu_\omega$-almost all $x\in K_\omega$, we have that
\[
0<\underline{\kappa}(K_\omega,x,\alpha)\le \overline{\kappa}(K_\omega,x,\alpha)<1
\]
for all $\alpha\in (0,1)$, $\kappa(K_\omega,x,0)=1$ and $\kappa(K_\omega,x,1)=0$.
\end{corollary}
\begin{proof}
Since $0<E_Q[\chi_0^\alpha]<1$ for all $\alpha\in (0,1)$ and the functions
$\alpha\mapsto\underline\kappa(K_\omega,x(\eta),\alpha)$ and
$\alpha\mapsto\overline\kappa(K_\omega,x(\eta),\alpha)$ are decreasing,
the first claim follows from Theorem~\ref{meanporosityexists}.
The claim $\kappa(K_\omega,x,0)=1$ is obvious. Finally, if
$\overline\kappa(K_\omega,x,1)>0$, then $K_\omega$ has a $1$-hole
near $x$ at scale $j$ for some $j\in\N$. Hence $x$ must lie on the common
boundary of the hole and $J_{\eta|_j}$ which, in turn, implies
that $K_\omega$ has a $1$-hole near $x$ at all scales larger than $j$. Thus
$\kappa(K_\omega,x,\alpha)=1$ for all $\alpha\le 1$, contradicting
the first claim.
\end{proof}
To study the mean porosities of the natural measure, we need some auxiliary
results.
\begin{proposition}\label{LLN}
For all $s>0$, the sequence $\{\1_{\{X_j\le s\}}\}_{j\in\N}$
satisfies the strong law of large numbers.
\end{proposition}
\begin{proof}
Since the sequence $(X_j)_{j\in\N}$ is stationary, we only have to check that the
series \eqref{convergencecovariances} converges with $Y_j=\1_{\{X_j\le s\}}$.
Since $X_j$ and $X_0-k^{-jd}X_j$ are $Q$-independent (compare
Lemma~\ref{independence} or see the remark before \cite[Lemma 10]{Be2}),
recalling that $X_j$
and $X_0$ have the same distribution, we can make the following estimate
\[
\begin{split}
\Cov&(\1_{\{X_0\le s\}},\1_{\{X_j\le s\}})\\
&=Q(X_0\le s\text{ and }X_j\le s)-Q(X_0\le s)Q(X_j\le s)\\
&\le Q(X_0-k^{-jd}X_j\le s\text{ and }X_j\le s)-Q(X_0\le s)Q(X_j\le s)\\
&=Q(X_j\le s)\bigl(Q(X_0-k^{-jd}X_j\le s)-Q(X_0\le s)\bigr)\\
&=Q(X_0\le s)Q(s<X_0\le s+k^{-jd}X_j)\\
&\le Q(X_0\le s)\bigl(Q(s<X_0\le s+k^{-\frac 12jd})+Q(X_0>k^{\frac 12jd})\bigr).\\
\end{split}
\]
By a result of Dubuc and Seneta \cite{DS} (see also \cite[Theorem II.5.2]{AN}),
the distribution of $X_0$ has a continuous $P$-density $q(x)$ on $(0,+\infty)$.
From formula \eqref{expectationqtop}, we obtain
\[
\begin{split}
Q(s<X_0\le s+k^{-\frac 12jd})&=E_Q\bigl[\1_{\{s<X_0\le s+k^{-\frac 12jd}\}}\bigr]\\
&=E_P\bigl[X_0\1_{\{s< X_0\le s+k^{-\frac 12jd}\}}\bigr]\\
&\le (s+k^{-\frac 12d})P(s< X_0\le s+k^{-\frac 12jd})\\
&\le (s+k^{-\frac 12d})k^{-\frac 12jd} \max\limits_{x\in[s,s+k^{-\frac 12d}]}q(x).\\
\end{split}
\]
Therefore, by Markov's inequality,
\begin{multline*}
\sum_{j=1}^\infty \Cov(\1_{\{X_0\le s\}},\1_{\{X_j\le s\}})\le\\
Q(X_0\le s)\bigl((s+k^{-\frac 12d})\max\limits_{x\in[s,s+k^{-\frac 12d}]}q(x)
+E_Q(X_0)\bigr)\sum_{j=1}^\infty k^{-\frac 12jd}<\infty.
\end{multline*}
\end{proof}
For all $\alpha\in (0,1)$, $\e,\delta>0$ and $j\in\N$, define a function
$H_j^{\alpha,\e,\delta}\colon I^{\N}\times\Omega\to\{0,1\}$ by setting
$H_j^{\alpha,\e,\delta}(\eta,\omega)=1$ if and only if $\nu_\omega$ has an
$(\alpha,\e)$-hole at scale $j$ near $x(\eta)$ but $K_\omega$ does not
have an $(\alpha-\delta)$-hole at scale $j$ near $x(\eta)$.
\begin{lemma}\label{holedif}
Let $\alpha\in (0,1)$. For all $\delta>0$, there exists $\e_0>0$ such that, for
$Q$-almost all $(\eta,\omega)\in I^{\N}\times\Omega$, we have
\[
\limsup_{n\to\infty}\frac 1n \sum_{j=1}^n H_j^{\alpha,\e,\delta}(\eta,\omega)\le\delta
\]
for all $0<\e\le\e_0$.
\end{lemma}
\begin{proof}
Fix $0<\delta<\alpha$. Let $r\in\N$ be the smallest integer such that
$2k^{-r}<\delta$. Let $\e>0$. Assume that $H_j^{\alpha,\e,\delta}(\eta,\omega)=1$
and denote by $H$ the $(\alpha,\e)$-hole at scale $j$ near $x(\eta)$.
Considering the relative positions of $H$ and $J_{\eta|_{j+r}}$, we will argue
that one of the following two possibilities occurs:
\begin{enumerate}
\item If $J_{\eta|_{j+r}}\subset H$, we have
$\nu_\omega(J_{\eta|_{j+r}})\le\e\nu_\omega(J_{\eta|_j})$.
\item In the case $J_{\eta|_{j+r}}\not\subset H$, there exists $\tau_j\in I^{j+r}$
such that $\tau_j\ne\eta|_{j+r}$, $J_{\tau_j}\subset H$,
$K_\omega\cap J_{\tau_j}\ne\emptyset$ and
$\nu_\omega(J_{\tau_j})\le\e\nu_\omega(J_{\eta|_j})$.
\end{enumerate}
Suppose that (i) is not valid. Since $H_j^{\alpha,\e,\delta}(\eta,\omega)=1$, the
set $K_\omega$ does not have an $(\alpha-\delta)$-hole at scale $j$ near
$x(\eta)$. Observe that
\[
H\setminus\bigcup_{\substack{\sigma\in I^{j+r}\\ J_\sigma\not\subset H}}J_\sigma
\]
contains a cube with side length $(\alpha-\delta) k^{-j}$ since $2k^{-r}<\delta$.
Since $J_{\eta|_{j+r}}\not\subset H$, there exists $\tau_j\in I^{j+r}$ as in (ii).
Next we estimate how often (i) or (ii) may happen for $Q$-typical
$(\eta,\omega)\in I^\N\times\Omega$. We start by
considering the case (i). We denote by $A_j^{1,\e}$ the
event that $\nu_\omega(J_{\eta|_{j+r}})\le\e\nu_\omega(J_{\eta|_j})$, that is,
according to \eqref{nuversusX},
\begin{multline*}
A_j^{1,\e}=\bigl\{(\eta,\omega)\in I^\N\times\Omega\mid \\
X_{\eta|_{j+r}}(\omega)\le
\frac {\e}{1-\e}\sum_{\substack{\tau\in I^{j+r}\\
\eta|_j\prec\tau,\ \tau\ne\eta|_{j+r},\ \omega(\tau)=c}}X_\tau(\omega)\bigr\}.
\end{multline*}
For all $s>0$, let
\[
\begin{split}
A_{j,1}^{s,\e}&=\bigl\{(\eta,\omega)\in I^\N\times\Omega\mid
X_{\eta|_{j+r}}(\omega)\le \frac{\e s}{1-\e}\bigr\}\\
\text{ and }A_{j,2}^s&=\bigl\{(\eta,\omega)\in I^\N\times\Omega\mid
\sum_{\substack{\tau\in I^{j+r}\\\eta|_j\prec\tau,\ \tau\ne\eta|_{j+r},\
\omega(\tau)=c}}X_\tau(\omega)>s\bigr\}.
\end{split}
\]
In the case (ii), let
\begin{multline*}
A_j^{2,\e}=\{(\eta,\omega)\in I^\N\times\Omega\mid\,\exists\tau\in I^{j+r}
\text{ such that }\tau\succ\eta|_j,\\
\tau\ne\eta|_{j+r}\text{ and }0<k^{-rd}X_\tau(\omega)\le\e X_{\eta|_j}(\omega)\}.
\end{multline*}
Recall that, for any $\tau\in I^*$, we have
$P(X_\tau(\omega)>0\mid K_\omega\cap J_\tau\ne\emptyset)=1$ by
\cite[Theorem 3.4]{MW}. Defining
\[
\begin{split}
&A^s_{j,3}=\{(\eta,\omega)\in I^\N\times\Omega\mid
X_{\eta|_j}(\omega)>s\}\text{ and}\\
&A_{j,4}^{s,\e}=\{(\eta,\omega)\in I^\N\times\Omega\mid\exists
\tau\in I^{j+r}\text{ such that }\tau\succ\eta|_j, \\
&\phantom{kkkkkkkkkkkppppppppan}\tau\ne\eta|_{j+r}\text{ and }0<k^{-rd}
X_\tau(\omega)\le\e s\},
\end{split}
\]
we have
\[
H_j^{\alpha,\e,\delta}\le\1_{A_j^{1,\e}} + \1_{A_j^{2,\e}}
\le\1_{A_{j,1}^{s,\e}}+\1_{A_{j,2}^s}+\1_{A_{j,3}^s}+\1_{A_{j,4}^{s,\e}}.
\]
By Proposition~\ref{LLN}, the functions $\1_{A_{j,1}^{s,\e}}$ and $\1_{A_{j,3}^s}$
satisfy the strong law of large numbers. The same is true for $\1_{A_{j,2}^s}$ and
$\1_{A_{j,4}^{s,\e}}$ by Theorem~\ref{HRV}, since $A_{j,2}^s$ and $A_{i,2}^s$ as
well as $A_{j,4}^{s,\e}$ and $A_{i,4}^{s,\e}$ are $Q$-independent if $|i-j|\ge r$.
This can be seen similarly as in the proof of Lemma~\ref{independence}. Hence,
we obtain the estimate
\begin{equation*}
\limsup_{n\to\infty}\frac 1n \sum_{j=1}^n H_j^{\alpha,\e,\delta}(\eta,\omega)\le
Q(A_{0,1}^{s,\e})+Q(A_{0,2}^s)+Q(A_{0,3}^s)+Q(A_{0,4}^{s,\e})
\end{equation*}
for $Q$-almost all $(\eta,\omega)\in I^{\N}\times\Omega$. Observe that the left
hand side of the above inequality decreases as $\e$ decreases for all
$(\eta,\omega)\in I^{\N}\times\Omega$. For all large enough $s$, the value of
$Q(A_{0,2}^s)+Q(A_{0,3}^s)$ is less than $\frac 12\delta$.
Fix such an $0<s<\infty$. According to \eqref{expectationXzero}, we have
$Q(X_r=0)=0$. Therefore, for all $\e$ small enough, we have
$Q(A_{0,1}^{s,\e})+Q(A_{0,4}^{s,\e})<\frac 12\delta$, completing the proof.
\end{proof}
Now we are ready to prove that the mean porosity of the natural measure equals
that of the Mandelbrot percolation set.
\begin{theorem}\label{equal}
For $Q$-almost all $(\eta,\omega)\in I^{\N}\times\Omega$, we have
\[
\kappa(K_\omega,x(\eta),\alpha)=\kappa(\nu_\omega,x(\eta),\alpha)
\]
for all $\alpha\in (0,1)\setminus D$.
\end{theorem}
\begin{proof}
For all $\alpha\in (0,1)$, $\e>0$ and $j\in\N$, let $\chi_j^\alpha$ and
$\chi_j^{\alpha,\e}$ be as in Definition~\ref{holeindicator} and
$H_j^{\alpha,\e,\delta}$ as in Lemma~\ref{holedif}. The inequalities
\[
\underline{\kappa}(K_\omega,x(\eta),\alpha)\le
\underline{\kappa}(\nu_\omega,x(\eta),\alpha)\text{ and }
\overline{\kappa}(K_\omega,x(\eta),\alpha)\le
\overline{\kappa}(\nu_\omega,x(\eta),\alpha)
\]
are obvious for all $\alpha\in (0,1)$ and $(\eta,\omega)\in I^{\N}\times\Omega$
since $\chi_j^\alpha\le \chi_j^{\alpha,\e}$. Therefore, for $Q$-almost all
$(\eta,\omega)\in I^{\N}\times\Omega$, we have
$\kappa(K_\omega,x(\eta),\alpha)\le\underline\kappa(\nu_\omega,x(\eta),\alpha)$
for all $\alpha\in (0,1)\setminus D$ by Theorem~\ref{meanporosityexists}.
Let $0<\delta<\alpha$ and $\e>0$. Since
$\chi_j^{\alpha,\e}\le\chi_j^{\alpha-\delta}+H_j^{\alpha,\e,\delta}$ for all $j\in\N$,
we have, by Lemma~\ref{holedif}, for $Q$-almost all
$(\eta,\omega)\in I^{\N}\times\Omega$
\[
\begin{split}
\overline{\kappa}(\nu_\omega,x(\eta),\alpha)=&\lim_{\e\to 0}\limsup_{n\to\infty}
\frac 1n\sum_{j=1}^n\chi_j^{\alpha,\e}(\eta,\omega)\\
\le&\limsup_{n\to\infty}\frac 1n\sum_{j=1}^n\chi_j^{\alpha-\delta}(\eta,\omega)+
\lim_{\e\to 0}\limsup_{n\to\infty}\frac 1n\sum_{j=1}^n
H_j^{\alpha,\e,\delta}(\eta,\omega)\\
\le&\overline{\kappa}(K_\omega,x(\eta),\alpha-\delta)+\delta.
\end{split}
\]
Since $\alpha\mapsto\overline\kappa(K_\omega,x(\eta),\alpha)$ is continuous at
all $\alpha\in (0,1)\setminus D$ by Theorem~\ref{meanporosityexists}, we
conclude that, for all $\alpha\in (0,1)\setminus D$, we have
for $Q$-almost all $(\eta,\omega)\in I^{\N}\times\Omega$ that
$\overline\kappa(\nu_\omega,x(\eta),\alpha)\le\kappa(K_\omega,x(\eta),\alpha)$.
As in the proof of Theorem~\ref{meanporosityexists}, we see that the order of
the quantifiers may be reversed.
\end{proof}
Before stating a corollary of the previous theorem, we prove a lemma which
is well known but for which we could not find a reference.
\begin{lemma}\label{zero}
Let $V$ be a coordinate hyperplane and let $e$ be the unit vector perpendicular
to $V$. Then, for all $t\in[0,1]$,
\[
P\bigl(\nu_\omega(K_\omega\cap (te+V))>0\bigr)=0.
\]
\end{lemma}
\begin{proof}
Fix $t\in [0,1]$. According to \eqref{nudef},
\[
\nu_\omega(K_\omega\cap (te+V))=\lim_{j\to\infty}(\diam J_\emptyset)^d
\sum_{\substack{\tau\in I^j\\J_\tau\cap (te+V)\ne\emptyset}}k^{-jd}X_\tau(\omega)
\1_{\{\omega(\tau)=c\}},
\]
and the above sequence decreases monotonically as $j$ tends to infinity. Hence,
\begin{align*}
&E_P[\nu_\omega(K_\omega\cap (te+V))]\\
&\le(\diam J_\emptyset)^d
\lim_{j\to\infty}E_P\Bigl[\sum_{\substack{\tau\in I^j\\J_\tau\cap (te+V)\ne\emptyset}}
k^{-jd}X_\tau(\omega)\1_{\{\omega(\tau)=c\}}\Bigr].
\end{align*}
Note that, without the restriction $J_\tau\cap (te+V)\ne\emptyset$, the
expectation on the right hand side equals 1. Since the restriction
$J_\tau\cap (te+V)\ne\emptyset$ determines an exponentially decreasing
proportion of indices as $j$ tends to infinity and since the random variables
$k^{-jd}X_\tau\1_{\{\omega(\tau)=c\}}$ have the same distribution, the limit of the
expectation equals 0.
\end{proof}
The next corollary is the counterpart of Corollary~\ref{positive} for mean
porosities of the natural measure.
\begin{corollary}\label{mainmeasure}
For $P$-almost all $\omega\in\Omega$ and for $\nu_\omega$-almost all
$x\in K_\omega$, we have
\[
0<\underline\kappa(\nu_\omega,x,\alpha)
\le \overline\kappa(\nu_\omega,x,\alpha)<1
\]
for all $\alpha\in (0,1)$, $\kappa(\nu_\omega,x,0)=1$ and
$\kappa(\nu_\omega,x,1)=0$.
\end{corollary}
\begin{proof}
The first claim follows from Corollary~\ref{positive}, Theorem~\ref{equal} and
the monotonicity of the functions
$\alpha\mapsto\underline\kappa(\nu_\omega,x,\alpha)$ and
$\alpha\mapsto\overline\kappa(\nu_\omega,x,\alpha)$. Since
$\underline\kappa(K_\omega,x,0)\le\underline\kappa(\nu_\omega,x,0)$, the second
claim follows from Corollary~\ref{positive}. Note that
$\chi_j^{1,\epsilon}(\eta,\omega)=1$ only if $\nu_\omega(\partial J_{\eta|_j})>0$.
Therefore, the last claim follows from Lemma~\ref{zero}.
\end{proof}
The following corollary completely settles Conjecture 3.2 stated in \cite{JJM}.
\begin{corollary}\label{infsup}
For $P$-almost all $\omega\in\Omega$ and for $\nu_\omega$-almost all
$x\in K_\omega$, we have
\[
\lpor(K_\omega,x)=\lpor(\nu_\omega,x)=0,\quad\upor(K_\omega,x)=\frac 12
\text{ and }\upor(\nu_\omega,x)=1.
\]
\end{corollary}
\begin{proof}
By Corollary~\ref{positive}, for $Q$-almost all
$(\eta,\omega)\in I^{\N}\times\Omega$, we have that
$\overline\kappa(K_\omega,x(\eta),\alpha)<1$ for all $\alpha>0$. Hence, for
$P$-almost all $\omega\in\Omega$ and for $\nu_\omega$-almost all $x\in K_\omega$,
there are, for all $\alpha>0$, arbitrarily large $i\in\N$ such that $K_\omega$
does not have an $\alpha$-hole at scale $i$ near $x$ which is contained in
$Q_i^k(x)$. Note that, in Definitions~\ref{defporoset} and \ref{defporomeas},
the holes are defined using balls while the mean porosities are defined in
terms of $k$-adic cubes (see Definitions~\ref{ourmean} and \ref{defmean}).
Therefore, it is possible that $B(x,k^{-i})$ contains an $\alpha$-hole which is
outside the construction cube $Q_i^k(x)$ if $x$ is
close to the boundary of $Q_i^k(x)$. We show that there are infinitely many
$i\in\N$ such that this will not happen.
Fix $\alpha\in(0,\frac 14)$ and $r>8$ large enough so that $2k^{-r}<\alpha$.
Let $I'\subset I^r$ be the set of words such that, for all $\tau\in I'$, the
$\varrho$-distance from all points of $J_\tau$ to the centre of $J_\emptyset$ is
at most $\frac 14$. For all $i\in\N$, define
$Y_i^\alpha\colon I^{\N}\times\Omega\to\{0,1\}$ by setting
$Y_i^\alpha(\eta,\omega)=1$ if and only if $J_{\eta|_i}$ is chosen, $\eta|_{i+r}$
ends with a word from $I'$ and $K_\omega$ does not have a $\frac 12\alpha$-hole
at scale $i$ near $x(\eta)$ which lies completely inside
$J_{\eta|_i}\setminus J_{\eta|_{i+r}}$. Note that if $x(\eta)\in K_\omega$ and
$K_\omega$ has an $\alpha$-hole
at scale $i$ near $x(\eta)$, then at least half of this hole is in
$J_{\eta|_i}\setminus J_{\eta|_{i+r}}$. Thus, $K_\omega$ does not have an
$\alpha$-hole at scale $i$ near $x(\eta)$ if $Y_i^\alpha(\eta,\omega)=1$.
Since for indices $i$ and $j$ with $|i-j|\ge r$, the events
$\{(\eta,\omega)\in I^{\N}\times\Omega\mid Y_i^\alpha(\eta,\omega)=1\}$ and
$\{(\eta,\omega)\in I^{\N}\times\Omega\mid Y_j^\alpha(\eta,\omega)=1\}$
are $Q$-independent (compare with Lemma~\ref{independence}),
the averages of random variables $Y_i^\alpha(\eta,\omega)$ converge
to $E_Q(Y_0^\alpha)>0$ for $Q$-almost all $(\eta,\omega)\in I^\N\times\Omega$. If
$Y_i^\alpha(\eta,\omega)=1$, then $B(x(\eta),\frac 14k^{-i})\subset J_{\eta|_i}$
and there is no $z\in B(x(\eta),\frac 14k^{-i})$ such that
$B(z,\frac 12\alpha k^{-i})\subset B(x(\eta),\frac 14k^{-i})\setminus K_\omega$.
Therefore, $\por(K_\omega,x(\eta),\frac 14k^{-i})\le 2\alpha$. A similar argument
shows that $\por(\nu_\omega,x(\eta),\frac 14k^{-i})\le 2\alpha$. Let
$(\alpha_j)_{j\in\N}$ and $(\e_k)_{k\in\N}$ be sequences tending to 0.
For $Q$-almost all $(\eta,\omega)\in I^\N\times\Omega$, we have for all
$j,k\in\N$ that there are infinitely many scales $i\in\N$ such that
\[
\por(K_\omega,x(\eta),\frac 14k^{-i})<\alpha_j\text{ and }
\por(\nu_\omega,x(\eta),\frac 14k^{-i},\e_k)<\alpha_j.
\]
Thus, we conclude that
\[
\lpor(K_\omega,x)=0=\lpor(\nu_\omega,x)
\]
for $P$-almost all $\omega\in\Omega$ and for $\nu_\omega$-almost all
$x\in K_\omega$.
Since
$\underline{\kappa}(K_\omega,x,\alpha)>0$ and
$\underline{\kappa}(\nu_\omega,x,\alpha)>0$ for all $\alpha<1$, we deduce that
\[
\upor(K_\omega,x)=\frac 12\text{ and }\upor(\nu_\omega,x)=1
\]
for $P$-almost all $\omega\in\Omega$ and for $\nu_\omega$-almost all
$x\in K_\omega$.
\end{proof}
\begin{remark}
Theorems~\ref{meanporosityexists} and \ref{equal} should extend to
homogeneous random self-similar sets satisfying the random strong open set
condition.
\end{remark}
\providecommand{\bysame}{\leavevmode\hbox
to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} }
\providecommand{\href}[2]{#2} | {"config": "arxiv", "file": "1606.08655.tex"} |
TITLE: Center of universal enveloping algebra of nilpotent Lie algebra
QUESTION [3 upvotes]: Let $\mathfrak g$ be a finite dimensional nilpotent Lie algebra over a field $k$ of characteristic zero. Let $U(\mathfrak g)$ be the universal enveloping algebra and $Z(\mathfrak g)$ be its center. Denote by $Z_1(\mathfrak g)$ the augmentation ideal of $Z(\mathfrak g)$. If $\mathfrak g$ is not abelian, is the extension of this ideal of infinite index in $U(\mathfrak g)$? In other words, is the quotient $U(\mathfrak g)/(Z_1(\mathfrak g))$ infinite dimensional over $k$?
REPLY [6 votes]: Yes. Let $J(\mathfrak{g}) := (Z(\mathfrak{g}) \cap \mathfrak{g} U( \mathfrak{g} ) ) \cdot U(\mathfrak{g})$ denote the ideal of $U(\mathfrak{g})$ you're interested in.
First note that if $\mathfrak{g}$ is the $2n + 1$ dimensional Heisenberg Lie algebra $\mathfrak{h}_n$ with basis vectors $x_1,\ldots, x_n, y_1,\ldots, y_n, z$ and the only non-zero Lie brackets $[x_1,y_1] = [x_2,y_2] = \cdots = [x_n,y_n] = z$, then it's not hard to calculate that $Z(\mathfrak{h}_n) = k[z]$ (since $k$ has characteristic zero), so that in this case $J(\mathfrak{h}_n) = zU(\mathfrak{h}_n)$ has infinite codimension in $U(\mathfrak{h}_n)$.
Now if $\varphi : \mathfrak{g} \twoheadrightarrow \mathfrak{h}$ is a quotient of $\mathfrak{g}$, then $\varphi(J(\mathfrak{g})) \subseteq J(\mathfrak{h})$.
Thus it suffices to show that every finite dimensional non-abelian nilpotent Lie algebra $\mathfrak{g}$ admits some $\mathfrak{h}_n$ as a quotient.
Since $\mathfrak{g}$ is non-abelian, by replacing $\mathfrak{g}$ by $\mathfrak{g} / [\mathfrak{g}, [\mathfrak{g},\mathfrak{g}]]$ if necessary, we may assume that $\mathfrak{g}$ has nilpotence class two. Therefore $0 \neq [\mathfrak{g}, \mathfrak{g}]\subseteq z(\mathfrak{g})$ (the centre of the Lie algebra $\mathfrak{g}$).
Pick some non-zero $z \in [\mathfrak{g}, \mathfrak{g}]$ and let $\mathfrak{a}$ be a vector space complement to $kz$ in $z(\mathfrak{g})$. By replacing $\mathfrak{g}$ by $\mathfrak{g} / \mathfrak{a}$, we may therefore assume that $[\mathfrak{g},\mathfrak{g}] = kz$. But now $[\bar{x},\bar{y}] = (x,y)z$ defines a non-trivial alternating bilinear form $(-,-)$ on $\mathfrak{g}/kz$. By the classification of such forms, we can find a quotient $\mathfrak{h}$ of $\mathfrak{g}$ such that $[\mathfrak{h}, \mathfrak{h}] = kz$, and moreover the corresponding bilinear form on $\mathfrak{h} / kz$ is non-degenerate. This $\mathfrak{h}$ is isomorphic to some $\mathfrak{h}_n$. | {"set_name": "stack_exchange", "score": 3, "question_id": 93039} |
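For $n=1$, the calculation $Z(\mathfrak{h}_1)=k[z]$ mentioned above can be sketched as follows (my sketch, not part of the original answer; it uses the PBW basis and characteristic zero):

```latex
On the PBW basis $\{x^a y^b z^c\}$ of $U(\mathfrak{h}_1)$, the relation
$[x,y]=z$ with $z$ central gives, by the Leibniz rule,
\[
  [x,\, x^a y^b z^c] = b\, x^a y^{b-1} z^{c+1},
  \qquad
  [y,\, x^a y^b z^c] = -a\, x^{a-1} y^b z^{c+1},
\]
so $\operatorname{ad} x$ and $\operatorname{ad} y$ act as $z\,\partial_y$ and
$-z\,\partial_x$, respectively. If $P$ is central, then
$z\,\partial_y P = z\,\partial_x P = 0$; since $U(\mathfrak{h}_1)$ is a domain
and $\operatorname{char} k = 0$, both partial derivatives vanish, forcing
$P \in k[z]$. The general case $\mathfrak{h}_n$ is analogous, with
$\operatorname{ad} x_i = z\,\partial_{y_i}$ and
$\operatorname{ad} y_i = -z\,\partial_{x_i}$.
```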
TITLE: Calculating lateral/longitudinal acceleration/jerk
QUESTION [0 upvotes]: I know how to calculate the lateral and longitudinal velocities given the velocity $v$ and heading angle $\theta$:
$v_{\mathrm{lat}} = v \sin \theta$
$v_{\mathrm{long}} = v \cos \theta$
But does this extend to acceleration $a$ and jerk $j$, i.e.,
$a_{\mathrm{lat}} = a \sin \theta$
$a_{\mathrm{long}} = a \cos \theta$
$j_{\mathrm{lat}} = j \sin \theta$
$j_{\mathrm{long}} = j \cos \theta$
?
Thanks for your time
REPLY [0 votes]: Yes, since the acceleration and jerk are both vector quantities, your equations should be fine, provided $\theta$ is the angle between the vector in question and the longitudinal axis.
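A quick numerical sketch of this decomposition (the function and variable names are mine, purely illustrative):

```python
import math

# Decompose a planar vector of given magnitude into lateral/longitudinal
# components; theta is the angle the vector makes with the longitudinal axis.
def components(magnitude, theta):
    return magnitude * math.sin(theta), magnitude * math.cos(theta)

theta = math.radians(30)
v_lat, v_long = components(10.0, theta)

# The two components recombine to the original magnitude.
assert abs(math.hypot(v_lat, v_long) - 10.0) < 1e-9

# The same projection applies to acceleration and jerk magnitudes.
a_lat, a_long = components(2.5, theta)
assert abs(a_lat - 1.25) < 1e-9      # 2.5 * sin(30 deg)
```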
TITLE: Construction of a convex function nondifferentiable on a countable set
QUESTION [1 upvotes]: Let $H$ be a countable subset of $[0,1]$. Construct a convex function $f:[0,1]\rightarrow\mathbb{R}$ such that $f$ fails to be differentiable at each point of $H$ and is differentiable everywhere else.
REPLY [2 votes]: Enumerate $H$ as $h_1, h_2, \ldots$, and take
$f(x) = \sum_{j=1}^\infty 2^{-j} |x - h_j|$. | {"set_name": "stack_exchange", "score": 1, "question_id": 141624} |
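A small numerical check of this construction (illustrative only, not a proof): truncating the series at finitely many terms with the sample choice $h_j = 1/2, 1/3, \dots$, the sum is convex and the one-sided derivatives at $h_1 = 1/2$ differ by $2\cdot 2^{-1} = 1$:

```python
# Truncated version of f(x) = sum_j 2^{-j} |x - h_j|: each term is convex,
# so the sum is convex, and term j contributes a one-sided derivative jump
# of 2 * 2^{-j} at x = h_j.
N = 12
H = [1.0 / (j + 1) for j in range(1, N + 1)]   # h_j = 1/2, 1/3, ... (sample H)

def f(x):
    return sum(2.0 ** -j * abs(x - h) for j, h in zip(range(1, N + 1), H))

# Midpoint convexity on a grid: f((a+b)/2) <= (f(a)+f(b))/2.
pts = [i / 100 for i in range(101)]
assert all(f((a + b) / 2) <= (f(a) + f(b)) / 2 + 1e-12 for a in pts for b in pts)

# One-sided derivatives at h_1 = 1/2 differ by 2 * 2^{-1} = 1.
eps = 1e-7
jump = (f(0.5 + eps) - f(0.5)) / eps - (f(0.5) - f(0.5 - eps)) / eps
assert abs(jump - 1.0) < 1e-4
```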
TITLE: What does a nucleus look like?
QUESTION [14 upvotes]: It's a Christmas time and so I hope I'll be pardoned for asking a question which probably doesn't make much sense :-)
In standard undergraduate nuclear physics course one learns about models such as Liquid Drop model and Shell model that explain some properties of nucleus.
But as I understand it these models are purely empirical and most importantly incompatible. Perhaps it's just my weak classical mind but I can't imagine same object being described as a liquid with nucleons being the constituent particles floating freely all around the nucleus and on the other hand a shell model where nucleons occupy discrete energy levels and are separated from each other.
Now I wonder whether these empirical models are really all we've got or whether there are some more precise models. I guess one can't really compute the shape of the nucleus from the first principles as one can do with hydrogen atom in QM. Especially since first principles here probably means starting with QCD (or at least nucleons exchanging pions, but that is still QFT). But I hope there has been at least some progress since the old empirical models. So we come to my questions:
Do we have a better model for description of a nucleus than the ones mentioned?
How would some nuclei (both small and large) qualitatively look in such a better model? Here, "look" means whether enough is known that I could picture a nucleus in the same way as I can picture an atom (i.e. a hard nucleus with electrons orbiting around it in various orbitals).
What is the current state of first-principles QCD computations of the nucleus?
REPLY [2 votes]: In standard undergraduate nuclear physics course one learns about models such as Liquid Drop model and Shell model that explain some properties of nucleus.
But as I understand it these models are purely empirical and most importantly incompatible.
This is not really true.
They're not incompatible. The liquid drop model can be obtained as the classical limit of the shell model.
They're not purely empirical. The shell model is usually done using a residual interaction that is empirically determined, but that doesn't mean the model is purely empirical.
We can't define what a nucleus "looks like" in the optical sense because it doesn't interact with photons in ways that would allow that. But the mean field basically has the form of a sphere or prolate ellipsoid in almost all cases. This shape can fluctuate about the equilibrium, just like with any quantum-mechanical oscillator.
What is the current state of first-principles QCD computations of the nucleus?
Very small systems like the deuteron are somewhat doable. But in general QCD is neither necessary nor sufficient for doing nuclear structure calculations. You're dealing with an n-body problem that is essentially intractable unless you can find tricks to make it tractable. For a nucleus with mass number $A$, we have $n=A$ if we treat it using nucleons as the particles, $n=3A$ if we use quarks (not even counting the gluons). There is no reason to make the problem even more intractable by tripling $n$. Low-energy nuclear structure is nonrelativistic, and there is simply no advantage to using a quark model.
TITLE: Contingency table
QUESTION [1 upvotes]: I'm trying to solve a question regarding a contingency table. As far as I know, contingency tables show counts, not densities, and I'm having a hard time comprehending this simple table.
My attempt was basically calculating the marginal distribution, but the probabilities didn't sum to 1.
For example, I tried solving the first question by:
$$
P(A, B) = \int P(A,B,C)\, dC
$$
but I'm missing something: is $dC$ in terms of probabilities or just counts?
Contingency table
REPLY [1 votes]: This is discrete. You just need to add things up, no integrals required. For the conditional ones you have to scale by the probability of the given event.
In the one that asks for $\Pr(A,B|C=0)$, we see from that table that $\Pr(C=0)=.45$, by adding up all the entries with $C=0$. We also know that $\Pr(A=1,B=1,C=0)=.05$. In the new contingency table, the entry for $A=1,B=1$ will be $\frac{.05}{.45}=\frac19$. The values in the table always have to add up to $1$.
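To make the computation concrete, here is a quick check in Python. The full table from the question is not reproduced above, so all joint values except $\Pr(C=0)=.45$ and $\Pr(A=1,B=1,C=0)=.05$ are made-up placeholders, chosen only so that everything sums to 1:

```python
# Hypothetical joint table P(A, B, C). Only the C=0 total (.45) and the
# entry P(A=1, B=1, C=0) = .05 come from the answer above; the rest is
# invented for illustration.
joint = {
    (0, 0, 0): 0.20, (0, 1, 0): 0.10, (1, 0, 0): 0.10, (1, 1, 0): 0.05,
    (0, 0, 1): 0.15, (0, 1, 1): 0.15, (1, 0, 1): 0.15, (1, 1, 1): 0.10,
}

def conditional_on_c(joint, c):
    """Return the table P(A, B | C=c): keep the entries with C=c, rescale by P(C=c)."""
    p_c = sum(p for (a, b, cc), p in joint.items() if cc == c)
    return {(a, b): p / p_c for (a, b, cc), p in joint.items() if cc == c}

cond = conditional_on_c(joint, 0)
print(cond[(1, 1)])        # 0.05 / 0.45, i.e. approximately 1/9
print(sum(cond.values()))  # the conditional table sums to 1 (up to float error)
```

The same pattern (filter, then rescale) answers every conditional question on the table; no integrals are involved since everything is discrete.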
TITLE: Difference between intersection of infinite sets having finite, and having infinite elements
QUESTION [1 upvotes]: I could find individual answers for both of these, but can someone compare how being a finite set or an infinite changes the final outcome?
(a) If $A_1 \supseteq A_2 \supseteq A_3 \supseteq A_4 \supseteq \cdots$ are all sets containing an infinite number of
elements, then the intersection $$ \bigcap_{n=1}^{\infty} A_n $$ is infinite as well. - False
(b) If $A_1 \supseteq A_2 \supseteq A_3 \supseteq A_4 \supseteq \cdots$ are all finite, nonempty sets of real numbers,
then the intersection $$ \bigcap_{n=1}^{\infty} A_n $$ is finite and nonempty. - True
This is from the book Stephen Abbott, Understanding Analysis
REPLY [2 votes]: In (b) the sequence is bound to stabilize: some $n$ exists such that $k\geq n\implies A_k=A_n$.
The finite number of elements of $A_1$ can only be 'diminished' a finite number of times. This combined with the condition that $A_n\neq\varnothing$ assures that the intersection is not empty.
In (a) that obstacle does not appear. Every element in $A_1$ can be "thrown out" at some time.
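A standard explicit counterexample for (a), making the "thrown out" idea concrete:

```latex
% Each A_n is infinite and the sets are nested, yet no natural number
% survives in every A_n:
A_n = \{n,\, n+1,\, n+2,\, \dots\} \subseteq \mathbb{N},
\qquad A_1 \supseteq A_2 \supseteq A_3 \supseteq \cdots,
\qquad \bigcap_{n=1}^{\infty} A_n = \varnothing,
% since any fixed m is missing from A_{m+1}.
```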
TITLE: Subsets of a Cartesian product are disjoint iff there exist projections that are disjoint
QUESTION [2 upvotes]: The Theorem?
Suppose $\{X_\alpha\}_{\alpha \in J}$ is a family of non-empty sets. Let $X = \prod_{\alpha \in J} X_\alpha$. For each $\alpha \in J$ define $p_\alpha : X \to X_\alpha$ to be the canonical projection from $X$ to $X_\alpha$.
Suppose $U,V$ are subsets of $X$. I want to show that $U$ and $V$ are disjoint if and only if there exists $\beta \in J$ such that $p_\beta(U) \cap p_\beta(V) = \emptyset$.
Background
I wanted to use this "theorem" to show that if the product space $X$ is $T_4$ then so is each $X_\alpha$.
My Thoughts
The $\Longleftarrow$ direction seems straightforward from the definition of the Cartesian product, although I haven't worked it out precisely since it is really $\Longrightarrow$ I care about. Likewise, it seems intuitively true that $\Longrightarrow$ is true. I've done a bit of googling and searching through textbooks, but I haven't quite found what I am looking for. Moreover, I am unsure how to proceed with a proof. Here is my problem:
I want to say that if $U$ and $V$ are disjoint subsets of $X$, then I can write $U$ is a product of subsets of $X_\alpha$ for each $\alpha \in J$, but this can't be correct. For example, $U$ could be the union of subsets of $X$ which cannot, in general, be written as a product of a union of subsets.
REPLY [3 votes]: Simple counterexample: Let $J = \{0,1\}, X_0 = X_1 = \{0,1\}$; let $U = \{(0,0), (1,1)\}$, and let $V = \{(0,1)\}$. Then $p_0(U) = p_1(U) = \{0,1\}$, $p_0(V) = \{0\}$, and $p_1(V) = \{1\}$, but $U \cap V = \emptyset$.
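The counterexample is small enough to verify mechanically; here is a quick Python check (the names `U`, `V`, `proj` are ours):

```python
# The counterexample from the answer: U and V are disjoint, yet no
# coordinate projection separates them.
U = {(0, 0), (1, 1)}
V = {(0, 1)}

def proj(S, i):
    """Image of S under the canonical projection onto coordinate i."""
    return {x[i] for x in S}

assert U & V == set()                  # U and V are disjoint ...
assert proj(U, 0) & proj(V, 0) == {0}  # ... but the projections overlap
assert proj(U, 1) & proj(V, 1) == {1}  # in both coordinates
print("counterexample verified")
```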
\begin{document}
\maketitle
\abstract{Let $G$ be a simple graph of order $n$ and let $k$ be an integer such that $1\leq k\leq n-1$.
The $k$-token graph $G^{\{k\}}$ of $G$ is the graph whose vertices are the $k$-subsets of $V(G)$, where two vertices are adjacent in $G^{\{k\}}$ whenever their symmetric difference is a pair of adjacent vertices in $G$. In this paper we study the Hamiltonicity of the $k$-token graphs of some join graphs. As a consequence, we provide an infinite family of graphs (containing Hamiltonian and non-Hamiltonian graphs) for which their $k$-token graphs are Hamiltonian. Our result provides, to our knowledge, the first family of non-Hamiltonian graphs for which their $k$-token graphs are Hamiltonian, for $2<k<n-2$.}
\section{Introduction}
Throughout this paper, $G$ is a simple graph of order $n \geq 2$ and $k$ is an integer such that $1\leq k\leq n-1$.
The \emph{$k$-token graph} of $G$, that is denoted as $G^{\{k\}}$ or $F_k(G)$, is the graph whose vertices are all the $k$-subsets of $V(G)$,
where two of such vertices are adjacent whenever their symmetric difference is a pair of adjacent vertices in $G$. A classical example of token graphs is the Johnson graph $J(n, k)$ that is, in fact, the $k$-token graph of
the complete graph $K_n$.
The $2$-token graphs are usually called \emph{double vertex graphs} (see Figure~\ref{fig:definition-example} for an example). In this paper, we use $G^{\{k\}}$ to denote the $k$-token graph of $G$.
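As a quick illustration of the definition (not part of the paper's formal development), the $k$-token graph of a small graph can be generated directly from the symmetric-difference rule; the function name below is our own:

```python
from itertools import combinations

def token_graph(vertices, edges, k):
    """Vertices of G^{k}: the k-subsets of V(G); two k-subsets are
    adjacent iff their symmetric difference is an edge of G."""
    E = {frozenset(e) for e in edges}
    V = [frozenset(c) for c in combinations(vertices, k)]
    adj = {(u, v) for u, v in combinations(V, 2) if (u ^ v) in E}
    return V, adj

# K_4: its 2-token graph is the Johnson graph J(4, 2), which is
# k(n-k) = 4-regular on C(4,2) = 6 vertices, hence has 12 edges.
V, adj = token_graph(range(4), combinations(range(4), 2), 2)
print(len(V), len(adj))  # 6 12

# K_{1,3} (a star): its double vertex graph is the 6-cycle C_6.
star = [(0, 1), (0, 2), (0, 3)]
V2, adj2 = token_graph(range(4), star, 2)
degrees = {v: sum(v in e for e in adj2) for v in V2}
print(sorted(degrees.values()))  # [2, 2, 2, 2, 2, 2]
```

The second example matches the remark made later in the introduction that $K_{1,3}^{\{2\}}\simeq C_6$.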
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{tokengraphs-example.pdf}
\caption{\small A graph $G$ and its double vertex graph $G^{\{2\}}$}
\label{fig:definition-example}
\end{figure}
The study of the combinatorial properties of the double vertex graphs began in the 90's with the works of Alavi and his coauthors (see, e.g., \cite{alavi1, alavi2, alavi3, alavi4, zhu}). They studied the connectivity, planarity, regularity and Hamiltonicity of some double vertex graphs. Several years later, Rudolph \cite{aude, rudolph} redefined the token graphs, under the name of {\it symmetric powers of graphs}, with the aim of studying the graph isomorphism problem and for its possible applications to quantum mechanics. Several authors have continued the study of the possible applications of token graphs in physics (see, e.g., \cite{fisch2, fisch, ouy}).
Fabila-Monroy et al.,~\cite{FFHH} reintroduced the concept of $k$-token graph of $G$ as a model in which $k$
indistinguishable tokens move on the vertices of a graph $G$ along the edges of $G$. They began
a systematic study of some combinatorial parameters of $G^{\{k\}}$ such as connectivity, diameter, cliques, chromatic number,
Hamiltonian paths and Cartesian product. This line of research has been continued by different authors, see, e.g.,
\cite{dealba2, token2, deepa, deepa2, ruyanalea, soto, leancri, leatrujillo}.
A \emph{Hamiltonian cycle} of a graph $G$ is a cycle containing each vertex of $G$ exactly once. A graph $G$ is \emph{Hamiltonian} if it contains a Hamiltonian cycle. The problem of determining if a graph $G$ is Hamiltonian is NP-Complete \cite{np-complete-problems}.
It is well known that the Hamiltonicity of $G$ does not imply the Hamiltonicity of $G^{\{k\}}$.
For example, for the complete bipartite graph $K_{m,m}$, Fabila-Monroy et al.~\cite{FFHH} showed that if $k$ is even, then
$K_{m, m}^{\{k\}}$ is non-Hamiltonian.
A simpler and more traditional example is the case of a cycle graph; it is known (see, e.g.,~\cite{alavi4}) that if $n=4$ or $n \geq 6$, then $C_n^{\{2\}}$ is not Hamiltonian. On the other hand,
there exist non-Hamiltonian graphs whose double vertex graph is Hamiltonian;
a simple example is the graph $K_{1,3}$, for which $K_{1,3}^{\{2\}}\simeq C_6$, and so
$K_{1,3}^{\{2\}}$ is Hamiltonian. Some results about the Hamiltonicity of double vertex graphs can be found, for example, in the survey of Alavi et al.~\cite{alavi3}.
There are several applications of token graphs to Physics and Coding Theory; see, e.g.,~\cite{fisch2, fisch, Jo, soto}.
Regarding Hamiltonicity, there is a direct relationship between the Hamiltonicity of token graphs and Gray codes for combinations. Consider the set $[n]:=\{1,2,\dots,n\}$ and let $S$ be the set of $k$-subsets of $[n]$. Consider also a closeness relation $C$ for the elements of $S$; for example, two subsets are close under $C$ if their symmetric difference is a pair of elements in $[n]$. A cyclic Gray code for $S$ with respect to $C$ is a sequence $s_1,s_2,\dots,s_{\binom{n}{k}}$ of the elements of $S$ in which any two consecutive elements $s_i$ and $s_{i+1}$ are close under $C$, and also $s_1$ is close to $s_{\binom{n}{k}}$ under $C$. For a closeness relation $C$ (with underlying graph $G$), a cyclic Gray code for $S$ with respect to $C$ corresponds to a Hamiltonian cycle of the $k$-token graph of $G$. A more detailed explanation of this fact is given in Appendix A.
In order to formulate our results, we need the following definitions.
Given two disjoint graphs $G$ and $H$, the \textit{join graph} $G+H$
of $G$ and $H$ is the graph whose vertex set is $V(G)\cup V(H)$ and its edge set is
$E(G)\cup E(H)\cup \{xy:x\in G\text{ and }y\in H\}$. The \emph{generalized fan graph}, $F_{m,n}$, or simply \emph{fan graph}, is the join graph $F_{m,n}=E_m +P_n$,
where $E_m$ denotes the graph of $m$ isolated vertices and $P_n$ denotes the path graph of $n$ vertices (the graph $G$ in Figure~\ref{fig:definition-example} is isomorphic to $F_{1, 3}$).
The last two authors of this article studied the Hamiltonicity of the $k$-token graphs of the fan graphs $F_{1, n}$~\cite{rive-tru}. In this paper, we continue with this line of research but now for the case of generalized fan graphs $F_{m, n}$, for $m>1$. Our main result for the case $k=2$ is the following:
\begin{theorem}
\label{thm:main1}
The double vertex graph of $F_{m,n}$ is Hamiltonian if and only if $n\geq 2$ and $1\leq m \leq 2\,n$, or $n=1$ and $m=3$.
\end{theorem}
For the general case $k \geq 2$, our main result is the following.
\begin{theorem}
\label{thm:general-case-main}
Let $k, n, m$ be positive integers such that $k\geq 2$, $n\geq k$ and $1\leq m\leq 2n$. Then $F_{m, n}^{\{k\}}$ is Hamiltonian.
\end{theorem}
As a consequence of our theorem we obtain the following more general result.
\begin{cor}
\label{cor:main1}
Let $k, n, m$ be positive integers such that $k\geq 2$, $n\geq k$ and $1\leq m\leq 2n$. Let $G_1$ and $G_2$ be two graphs
of order $m$ and $n$, respectively, such that $G_2$ has a Hamiltonian path. Let $G=G_1+G_2$. Then $G^{\{k\}}$ is Hamiltonian.
\end{cor}
We point out that this Corollary provides
an infinite family of non-Hamiltonian graphs (when $n< m \leq 2n$) with Hamiltonian $k$-token graphs. To our knowledge, apart from some complete bipartite graphs, this is the first family of non-Hamiltonian graphs (indeed, without even a Hamiltonian path when $n+1 <m \leq 2n$)
for which the Hamiltonicity of $G^{\{k\}}$ has been proven.
Let us establish some notation that we will use in our proofs. Let $V(P_n):=\{v_1,\ldots,v_n\}$
and $V(E_m):=\{w_1,\ldots, w_m\}$, so we have
$V(F_{m,n})=\{v_1,\ldots,v_n,w_1,\ldots,w_m\}$.
For a path $T=a_1 a_2 \dots a_{l-1}a_{l}$, we denote by $\overleftarrow{T}$ the reverse path $a_la_{l-1}\dots a_2a_1$.
As usual, for a positive integer $r$, we denote by $[r]$ the set $\{1,2,\ldots,r\}$.
For a graph $G$, we denote by $\mu(G)$ the number of components of $G$. If $u$ and $v$ are adjacent vertices, then we write $u \sim v$.
The rest of the paper is organized as follows. In Section~\ref{sec:double} we
present the proof of Theorem~\ref{thm:main1} and in Section~\ref{sec:main} the proof of Theorem~\ref{thm:general-case-main}. Our strategy to prove these results is the following: for $k=2$, we show explicit Hamiltonian cycles, and for $k>2$, we use induction on $k$ and $m$. In Appendix A, we give a more detailed explanation of the relationship between the Hamiltonicity of token graphs and Gray codes for combinations.
\section{Proof of Theorem~\ref{thm:main1}}
\label{sec:double}
This section is devoted to the proof of Theorem~\ref{thm:main1}. For the case $1\leq m\leq 2n$, we construct an explicit Hamiltonian cycle in $F_{m,n}^{\{2\}}$,
in which the vertices $\{w_1,v_1\}$ and $\{v_1,v_2\}$ are adjacent (this fact will be useful in the proof for the case $k>2$).
If $n=1$ then $G\simeq K_{1,m}$, and it is known that $K_{1,m}^{\{2\}}$ is Hamiltonian if and only if $m=3$
(see, e.g., Proposition 5 in~\cite{alavi3}). From now on, assume $n\geq 2$. We distinguish four cases: either $m=1$, $m=2n$,
$1<m<2n$ or $m>2n$.
\begin{itemize}
\item {\bf Case} $\mathbf{m=1.}$
For $n=2$ we have $F_{1,2}^{\{2\}}\simeq F_{1,2}$, and so $F_{1,2}^{\{2\}}$ is Hamiltonian.
Now we work the case $n\geq 3$. For $1\leq i< n$ let
\[T_i:=\{v_i,w_1\}\{v_i,v_{i+1}\}\{v_i, v_{i+2}\} \dots \{v_i,v_n\}\]
and let $T_n:=\{v_n,w_1\}$.
It is clear that every $T_i$ is a path in $F_{1,n}^{\{2\}}$ and that $\{T_1, \dots, T_n\}$
is a partition of $V\left(F_{1,n}^{\{2\}}\right)$.
Let
\[
C:=
\begin{cases}
\overleftarrow{T_1}\,T_2\overleftarrow{T_3}\,T_4\dots \overleftarrow{T_{n-1}}\,T_n\{v_1, v_n\} & \text{ if $n$ is even,} \\
\overleftarrow{T_1}\,T_2\overleftarrow{T_3}\,T_4\,\dots \,T_{n-1}\,\overleftarrow{T_{n}}\,\{v_1, v_n\} & \text{ if $n$ is odd.} \\
\end{cases}
\]
We are going to show that $C$ is a Hamiltonian cycle of $F_{1,n}^{\{2\}}$. Suppose that $n$ is even, so
\[
C=\underbrace{\{v_1, v_n\} \dots \{v_1, w_1\}}_{\overleftarrow{T_1}}\underbrace{\{v_2, w_1\} \dots \{v_2, v_n\}}_{T_2}\,\dots\,
\underbrace{\{v_{n-1}, v_n\}\{v_{n-1}, w_1\}}_{\overleftarrow{T_{n-1}}}\underbrace{\{v_n, w_1\}}_{T_n}\{v_1, v_n\}.
\]
For $i$ odd, the final
vertex of $\overleftarrow{T_i}$ is $\{v_i,w_1\}$, while the initial vertex of $T_{i+1}$ is $\{v_{i+1},w_1\}$, and
since these two vertices are adjacent in $F_{1,n}^{\{2\}}$, the concatenation $\overleftarrow{T_i}\,T_{i+1}$ corresponds to
a path in $F_{1,n}^{\{2\}}$. Similarly, for $i$ even, the final vertex of $T_i$ is $\{v_i,v_n\}$ while the initial
vertex of $\overleftarrow{T_{i+1}}$ is $\{v_{i+1},v_n\}$, so again, the concatenation $T_i\,\overleftarrow{T_{i+1}}$
corresponds to a path in $F_{1,n}^{\{2\}}$. Also note that the unique vertex of $T_n$ is $\{v_n,w_1\}$, which is adjacent
to $\{v_1,v_n\}$. As the first vertex of $\overleftarrow{T_1}$ is $\{v_1,v_n\}$, we have that $C$ is a cycle in $F_{1,n}^{\{2\}}$.
Suppose now that $n$ is odd, then
\[
C=\underbrace{\{v_1, v_n\} \dots \{v_1, w_1\}}_{\overleftarrow{T_1}}\underbrace{\{v_2, w_1\} \dots \{v_2, v_n\}}_{T_2}\,\dots\,
\underbrace{\{v_{n-1}, w_1\}\{v_{n-1}, v_n\}}_{T_{n-1}}\underbrace{\{v_n, w_1\}}_{\overleftarrow{T_n}}\{v_1, v_n\}.
\]
In a similar way to the case $n$ even, we can prove that $C$ is a Hamiltonian cycle of $F_{1,n}^{\{2\}}$. Note that, in both cases, the vertices $\{w_1,v_1\}$ and $\{v_1,v_2\}$ are adjacent in $C$, since they are adjacent in $\overleftarrow{T_1}$.
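This construction is easy to verify mechanically for small $n$. The following sketch (our own code, encoding $w_1$ as the vertex $0$, in line with the convention $v_0:=w_1$ used later in the paper) rebuilds the paths $T_i$, reversed for odd $i$ and forward for even $i$, and checks the resulting cycle:

```python
from itertools import combinations

def m1_cycle(n):
    """The cycle C of the case m = 1: T_i reversed for odd i, forward for even i."""
    T = [[frozenset({i, 0})] + [frozenset({i, j}) for j in range(i + 1, n + 1)]
         for i in range(1, n)]
    T.append([frozenset({n, 0})])  # T_n is the single vertex {v_n, w_1}
    cycle = []
    for i, path in enumerate(T, start=1):
        cycle.extend(reversed(path) if i % 2 == 1 else path)
    return cycle

def fan_edge(x, y):
    """Edges of F_{1,n} with w_1 = 0: the hub 0 meets every v_i, and v_i ~ v_{i+1}."""
    return 0 in (x, y) or abs(x - y) == 1

for n in range(3, 9):
    C = m1_cycle(n)
    # all C(n+1, 2) token vertices appear exactly once ...
    assert len(set(C)) == len(C) == (n + 1) * n // 2
    # ... and consecutive vertices (cyclically) differ by an edge of F_{1,n}
    for u, v in zip(C, C[1:] + C[:1]):
        assert len(u ^ v) == 2 and fan_edge(*(u ^ v))
print("Hamiltonian cycle verified for n = 3..8")
```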
\item {\bf Case} $\mathbf{m=2n.}$
Let $C$ be the cycle defined in the previous case depending on the parity of $n$.
Let
\[
P_1:=\{v_n,w_1\} \xrightarrow{C} \{v_1,v_n\}
\]
be the path obtained from $C$ by deleting the edge between $\{v_n, w_1\}$ and
$\{v_1, v_n\}$.
For $1< i\leq n$ let
\[
\begin{split}
P_i:=&\{w_i,v_n\}\{w_i,w_1\}\{w_i,v_{n-1}\}\{w_i,w_{i+(n-1)}\}\{w_i,v_{n-2}\}\{w_i,w_{i+(n-2)}\}\{w_i,v_{n-3}\}\{w_i,w_{i+(n-3)}\}\ldots \\
&\{w_i,v_2\}\{w_i,w_{i+2}\}\{w_i,v_1\}\{w_i,w_{i+1}\}.
\end{split}
\]
We can observe that after $\{w_i, w_1\}$ the vertices in $P_i$ follow the pattern $\{w_{i}, v_j\}\{w_{i}, w_{i+j}\}$, from $j=n-1$ down to $1$.
For $n+1\leq i\leq 2n$ let
\[
\begin{split}
P_i:=&\{w_i,v_n\}\{w_i,w_{i+n}\}\{w_i,v_{n-1}\}\{w_i,w_{i+(n-1)}\}\{w_i,v_{n-2}\}\{w_i,w_{i+(n-2)}\}\ldots\\
& \{w_i,v_2\}\{w_i,w_{i+2}\}\{w_i,v_1\}\{w_i,w_{i+1}\},
\end{split}
\]
where the sums are taken mod $2n$, with the convention that $2n\pmod{2n}=2n$. In this case, the vertices in $P_i$ after $\{w_i,w_{i+n}\}$ follow the pattern $\{w_{i}, v_j\}\{w_{i}, w_{i+j}\}$, from $j=n-1$ down to $1$.
We claim that the concatenation
\[
C_2:=P_1\,P_2\,\ldots\,P_{2n}\{v_n,w_1\}
\]
is a Hamiltonian cycle in $F_{m,n}^{\{2\}}$. First we prove that $\{P_1,\ldots,P_{2n}\}$ is a partition of $V\left(F_{m,n}^{\{2\}}\right)$. It is clear that the paths $P_1,\ldots,P_{2n}$ are pairwise disjoint in $F_{m,n}^{\{2\}}$. Now, we are going to show that every vertex of $F_{m,n}^{\{2\}}$ belongs to exactly one of the paths $P_1,\ldots,P_{2n}$.
\begin{itemize}
\item $\{v_i, v_j\}$ belongs to $P_1$, for any $i,j\in [n]$ with $i\neq j$.
\item $\{w_i,v_j\}$ belongs to $P_i$, for any $i\in [m]$ and $j\in [n]$.
\item $\{w_i, w_1\}$ belongs to $P_i$, for any $i\in [m]$ with $i>1$.
\item Consider now the vertices of type $\{w_i,w_j\}$, for $1<i<j\leq 2n$,
\begin{itemize}
\item $\{w_i,w_j\}$ belongs to $P_i$, for any $1<i\leq n$ and $i<j\leq i+n-1$.
\item $\{w_i,w_j\}$ belongs to $P_j$, for any $1<i\leq n$ and $i+n-1< j\leq 2n$.
\item $\{w_i,w_j\}$ belongs to $P_i$, for any $n< i< 2n$ and $i<j\leq 2n$.
\end{itemize}
\end{itemize}
Next, we show that $C_2$ is a cycle in $F_{m,n}^{\{2\}}$. The final vertex of $P_1$ is $\{v_1,v_n\}$, while the initial vertex of $P_2$ is $\{w_2, v_n\}$, and these
two vertices are adjacent in $F_{m,n}^{\{2\}}$. Also, for $1<i<2n$, the final vertex of $P_i$ is $\{w_i,w_{i+1}\}$ while
the initial vertex of $P_{i+1}$ is $\{w_{i+1},v_n\}$, and again these two vertices are adjacent in $F_{m,n}^{\{2\}}$.
On the other hand, the final vertex of $P_{2n}$ is $\{w_{2n},w_1\}$ while the initial vertex of $P_1$ is $\{v_n,w_1\}$, and
these two vertices are adjacent in $F_{m,n}^{\{2\}}$. These four observations together imply that $C_2$ is a cycle in $F_{m,n}^{\{2\}}$, and hence, $C_2$ is a Hamiltonian cycle of $F_{m,n}^{\{2\}}$. Note that the vertices $\{w_1,v_1\}$ and $\{v_1,v_2\}$ are adjacent in $C_2$, since they are adjacent in $P_1$.
\item {\bf Case} $\mathbf{1<m<2\,n.}$
Consider again the paths $P_1,\dots, P_m$ defined in the previous case and let us modify them slightly in the following
way:
\begin{itemize}
\item $P_1'=P_1$;
\item for $1<i<m$, let $P'_i$ be the path obtained from $P_i$ by deleting the vertices of type $\{w_i, w_j\}$, for each $j>m$;
\item let $P_m'$ be the path obtained from $P_m$ by first interchanging the vertices $\{w_m, w_{m+1}\}$ and $\{w_m, w_1\}$ from their current positions in $P_m$, and then deleting the vertices of type $\{w_m, w_j\}$, for every $j>m$.
\end{itemize}
Given this construction of $P'_i$ we have the following:
\begin{itemize}
\item[(A1)] $P'_i$ induces a path in $F_{m,n}^{\{2\}}$;
\item[(A2)] for $1\leq i<m$ the path $P'_i$ has the same initial and final vertices as the path $P_i$, and $P'_m$ has the same initial vertex as $P_m$, and its final vertex is $\{w_m, w_1\}$;
\item[(A3)] since we have deleted only the vertices of type $\{w_i, w_j\}$ from $P_i$ to obtain $P_i'$, for each $j>m$ and $i \in [m]$, it follows that $\{V(P'_1),\ldots,V(P'_m)\}$ is a partition of $V\left(F_{m,n}^{\{2\}}\right)$.
\end{itemize}
By (A1) and (A2) we can concatenate the paths $P'_1,\ldots,P'_m$ into a cycle $C'$ as follows:
\[
C':=P'_1\,P'_2\,\ldots\,P'_m\{v_n, w_1\}
\]
and then by (A3) it follows that $C'$ is a Hamiltonian cycle in $F_{m,n}^{\{2\}}$. Again, the vertices $\{w_1,v_1\}$ and $\{v_1,v_2\}$ are adjacent in $C'$ since they are adjacent in $P'_1$.
\item {\bf Case} $\mathbf{m>2\,n.}$
Here, our aim is to show that $F_{m,n}^{\{2\}}$ is not Hamiltonian, using
the following known result stated in West's book~\cite{west}.
\begin{prop}[Prop. 7.2.3, \cite{west}] If $G$ has a Hamiltonian cycle, then for each
nonempty set $S\subset V(G)$, the graph $G-S$ has at most $|S|$ components.
\end{prop}
Then, we are going to exhibit a subset $A\subset V\left(F_{m,n}^{\{2\}}\right)$ such that
$\mu(F_{m,n}^{\{2\}}-A)>|A|$.
Let
\[
A:=\big\{\{w_i,v_j\}:i\in [m] \text{ and }j\in [n]\big\}.
\]
Note that for any $i,j\in [m]$ with $i\neq j$, $\{w_i,w_j\}$ is an isolated vertex of $F_{m,n}^{\{2\}}-A$,
and there are $\binom{m}{2}$ vertices of this type. Also note that
the subgraph induced by the vertices of type $\{v_i,v_j\}$, for $i,j\in [n]$ and $i\neq j$,
is a component of $F_{m,n}^{\{2\}}-A$, and since
$|A|=mn$ and $m>2n$, we have
\[\mu(F_{m,n}^{\{2\}}-A)\geq \binom{m}{2}+1=\frac{m(m-1)}{2}+1\geq mn+1>mn=|A|,\]
as required. This completes the proof of Theorem~\ref{thm:main1}.
\end{itemize}
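The component count in this last argument can be checked computationally for small instances. The sketch below (our own code and names) removes the set $A$ of mixed pairs from $F_{m,n}^{\{2\}}$ and counts the components of what remains by graph search:

```python
from itertools import combinations

def components_after_removal(m, n):
    """Token-graph vertices of F_{m,n}^{2} minus the set A of mixed pairs
    {w_i, v_j}; returns (number of components, |A|)."""
    W = [('w', i) for i in range(1, m + 1)]
    V = [('v', j) for j in range(1, n + 1)]
    edges = {frozenset((w, v)) for w in W for v in V}             # join edges
    edges |= {frozenset((V[j], V[j + 1])) for j in range(n - 1)}  # path edges
    A = {frozenset((w, v)) for w in W for v in V}                 # removed tokens
    nodes = [frozenset(c) for c in combinations(W + V, 2) if frozenset(c) not in A]
    seen, comps = set(), 0
    for s in nodes:                        # DFS over the induced token graph
        if s in seen:
            continue
        comps += 1
        stack = [s]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack += [v for v in nodes if v not in seen and (u ^ v) in edges]
    return comps, len(A)

comps, a = components_after_removal(5, 2)  # m = 5 > 2n = 4
print(comps, a)  # C(5,2) + 1 = 11 components, versus |A| = 10
```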
\section{Proof of Theorem~\ref{thm:general-case-main}}
\label{sec:main}
The Hamiltonicity of $F_{1,n}^{\{k\}}$ was proved in~\cite{rive-tru}. However, in order to prove Theorem~\ref{thm:general-case-main} we need one special Hamiltonian cycle in $F_{1, n}^{\{k\}}$.
\begin{lemma}
\label{lemma:m=1}
Let $n\geq 3$ and $2\leq k\leq n-1$. Then $F_{1,n}^{\{k\}}$ has a Hamiltonian cycle
$C$ in which the vertices $\{w_1,v_1,v_2,\ldots,v_{k-1}\}$ and $\{v_1,v_2,\ldots,v_k\}$
are adjacent.
\end{lemma}
\begin{proof}
We proceed by induction on $n$ and $k$. The case $k=2$ is considered in Section~\ref{sec:double}.
For $n=3$ we have $k=2$, and this case is considered in Section~\ref{sec:double}.
Suppose $n>3$ and $2<k<n-1$. Let $v_0:=w_1$, so $V(F_{1,n})=\{v_0,v_1,\ldots,v_n\}$. For $i$ with $k-1\leq i\leq n$, let $H_i$ be the subgraph of $F_{1,n}^{\{k\}}$ induced by the set
\[S_i:=\{\{v_{j_1},v_{j_2},\ldots,v_{j_k}\}\in V\left(F_{1,n}^{\{k\}}\right) \colon 0\leq j_1< j_2 < \dots < j_k=i\}.
\]
It is easy to see that
\begin{remark}
\label{rem:partition}
$\{S_{k-1},S_k,\ldots,S_n\}$ is a partition of $V\left(F_{1,n}^{\{k\}}\right)$.
\end{remark}
The subgraph $H_i$ can be understood as the subgraph of $F_{1,n}^{\{k\}}$ in which there is
one token fixed at $v_i$ and the remaining $k-1$ tokens move on the subgraph induced by $\{v_0,v_1,\ldots,v_{i-1}\}$
(which is isomorphic to $F_{1,i-1}$). Thus, $H_i\simeq F_{1,i-1}^{\{k-1\}}$. By induction, for $i\geq k+1$,
there is a Hamiltonian cycle
$C_i$ of $H_i$ in which the vertices $X_i:=\{v_0,v_1,v_2,\ldots,v_{k-2},v_i\}$ and $Y_i:=\{v_1,v_2,\ldots,v_{k-1},v_i\}$
are adjacent. Let $P_i$ be the path in $F_{1,n}^{\{k\}}$ obtained from $C_i$ by deleting the edge $(X_i,Y_i)$. Assume, without loss of generality, that the start vertex of $P_i$ is $X_i$ and its final vertex is $Y_i$. Thus, we have the following
\begin{remark}
\label{rem:hamiltonianpath}
For $i$ with $k+1\leq i\leq n$, $P_i$ is a Hamiltonian path of $H_i$.
\end{remark}
Let us now proceed by cases depending on the parity of $n-k$.
\begin{itemize}
\item \textbf{$\boldsymbol{n-k}$ is odd:}
Let us consider the subgraphs $H_{k-1}$ and $H_k$.
For $0\leq j\leq k$, let $Z_j:=\{v_0,v_1,\ldots,v_k\}\setminus \{v_j\}$.
Then, $V(H_k)=\{Z_0,Z_1,\ldots,Z_{k-1}\}$ and $V(H_{k-1})=\{Z_k\}$,
with the following adjacencies in $F_{1,n}^{\{k\}}$: $Z_1\sim Z_2\sim \dots \sim Z_k$ and $Z_0$
is adjacent to the vertices $Z_1,\ldots, Z_k$.
Let \[P_k=Z_k\,Z_0\,Z_1\,\ldots\,Z_{k-1},\] so we have the following
\begin{remark}
\label{rem:secondhamiltonianpath}
$P_k$ is a Hamiltonian path of the subgraph induced by $S_{k-1}\cup S_k$.
\end{remark}
Let
\[C:=P_k\, P_{k+1}\, \overleftarrow{P_{k+2}}\,P_{k+3}\,\ldots\,\overleftarrow{P_{n-1}}\,P_n.
\]
We claim that $C$ is a Hamiltonian cycle of $F_{1,n}^{\{k\}}$.
It is straightforward to show that the paths $P_k,P_{k+1},P_{k+2},\ldots,P_n$ can be concatenated
as is defined in $C$, implying that $C$ is a cycle in $F_{1,n}^{\{k\}}$.
Remarks \ref{rem:partition}, \ref{rem:hamiltonianpath} and \ref{rem:secondhamiltonianpath}
together imply that the cycle $C$ is Hamiltonian.
Finally, note that the vertices $Z_k=\{v_0,v_1,v_2,\ldots,v_{k-1}\}=\{w_1,v_1,v_2,\ldots,v_{k-1}\}$
and $Z_0=\{v_1,v_2,\ldots,v_k\}$ are adjacent in $C$ (since they are adjacent in $P_k$), as claimed.
\item \textbf{$\boldsymbol{n-k}$ is even:}
Here we use the paths $P_{k+2},P_{k+3},\ldots,P_n$ and we construct a new Hamiltonian path $P$ of the subgraph $H$
induced by $S:=S_{k-1}\cup S_k\cup S_{k+1}$. Note that $H\simeq F_{1,k+1}^{\{k\}}\simeq F_{1,k+1}^{\{2\}}$, since, in general, for a graph $G$ of order $n$, $G^{\{k\}}$ is isomorphic to $G^{\{n-k\}}$.
For $i,j\in \{0,1,\ldots,k+1\}$ with $i\neq j$, let $A_{i,j}=\{v_0,v_1,\ldots,v_{k+1}\}\setminus \{v_i,v_j\}$. Then,
two vertices $A_{i,j}$ and $A_{r,t}$ are adjacent if $\{v_i,v_j\}$ and $\{v_r,v_t\}$ are adjacent
in $F_{1,k+1}^{\{2\}}$.
For $1\leq t\leq k$, let
\[R_t:=\begin{cases}
A_{1,k}A_{1,k+1}A_{1,0}A_{1,k-1}A_{1,k-2}A_{1,k-3}\ldots A_{1,2} & \text{ if $t=1$, } \\
A_{t,0}A_{t,k+1}A_{t,k}A_{t,k-1}A_{t,k-2}\ldots A_{t,t+1} & \text{ if $1<t<k$, and } \\
A_{k,0} & \text{ if $t=k$. }
\end{cases}
\]
Note that $R_1,R_2,\ldots,R_k$ are paths in $H$, and the concatenation
$R:=R_1\,R_2\,\ldots R_k$ is a cycle in $H$, where the vertices $A_{k-1,k}$ and $A_{k,0}$
are adjacent in $R$. Let $R'$ be the path obtained from $R$ by deleting the edge $(A_{k-1,k},A_{k,0})$,
and assume, without loss of generality, that the initial vertex of $R'$ is $A_{k,0}$ and the final
vertex is $A_{k-1,k}$. Now, let
\[P:=A_{k,k+1}\,A_{0,k+1}\,R',\]
we have that
\begin{remark}
\label{rem:secondpart-hamiltonianpath}
$P$ is a Hamiltonian path of $H$ with initial vertex $A_{k,k+1}=\{v_0,v_1,\ldots,v_{k-1}\}$
and final vertex $A_{k-1,k}=\{v_0,v_1,\ldots,v_{k-2},v_{k+1}\}$.
\end{remark}
Let
\[C:=P\, P_{k+2}\,\overleftarrow{P_{k+3}}\,P_{k+4}\,\dots\,\overleftarrow{P_{n-1}}\,P_n.\]
Remarks \ref{rem:partition}, \ref{rem:hamiltonianpath} and \ref{rem:secondpart-hamiltonianpath}
together imply that $C$ is a Hamiltonian cycle of $F_{1,n}^{\{k\}}$. Finally, the vertices
$A_{k,k+1}=\{v_0,v_1,\ldots,v_{k-1}\}=\{w_1,v_1,\ldots,v_{k-1}\}$ and $A_{0,k+1}=\{v_1,v_2,\ldots,v_k\}$
are adjacent in $C$ (since they are adjacent in $P$), as claimed.
\end{itemize}
\end{proof}
Now, we prove our main result for the fan graph $F_{m,n}$, with $m>1$.
\begin{proof}[Proof of Theorem~\ref{thm:general-case-main}]
We claim that $F_{m,n}^{\{k\}}$ has a Hamiltonian cycle $C$ in which the vertices\linebreak
$\{w_1,v_1,v_2,\ldots,v_{k-1}\}$ and $\{v_1,v_2,\ldots,v_k\}$ are adjacent.
Clearly, this claim implies the theorem.
To show the claim, we use induction twice: the first induction is on $k$,
and once we have fixed $k$, for a fixed $n\geq k$, the second induction is on $m$, for $1\leq m\leq 2n$.
The case $k=2$ is proved in Section~\ref{sec:double}, so the first induction starts. Suppose $k>2$. Fix $n\geq k$. The base case of the second induction corresponds to the case $m=1$, and this case
is proved in Lemma~\ref{lemma:m=1}, so the second induction starts.
Assume $k>2$ and $m>1$. Let
\[S_1:=\{A\in V\left(F_{m,n}^{\{k\}}\right) \colon w_1\in A\}\]
and
\[S_2:=\{B\in V\left(F_{m,n}^{\{k\}}\right) \colon w_1\notin B\}.\]
Clearly, $\{S_1,S_2\}$ is a partition of $V(F_{m,n}^{\{k\}})$.
Let $H_1$ and $H_2$ be the subgraphs of $F_{m,n}^{\{k\}}$ induced by $S_1$ and $S_2$, respectively.
Note that $H_1\simeq F_{m-1,n}^{\{k-1\}}$ and $H_2\simeq F_{m-1,n}^{\{k\}}$.
By the induction hypothesis there are cycles $C_1$ and $C_2$ such that
\begin{itemize}
\item[(i)] $C_1$ is a Hamiltonian cycle of $H_1$, where the vertices
$X_1:=\{w_1,w_{2},v_1,v_2,\ldots,v_{k-2}\}$ and $Y_1:=\{w_1,v_1,v_2,\ldots,v_{k-1}\}$
are adjacent in $C_1$; and
\item[(ii)] $C_2$ is a Hamiltonian cycle of $H_2$, where the vertices
$X_2:=\{w_{2},v_1,v_2,\ldots,v_{k-1}\}$ and $Y_2:=\{v_1,v_2,\ldots,v_k\}$
are adjacent in $C_2$.
\end{itemize}
For $i=1,2$, let $P_i$ be the subpath of $C_i$, obtained by deleting the edge
$(X_i,Y_i)$. Note that $P_i$ is a Hamiltonian path of $H_i$ joining the vertices $X_i$ and $Y_i$.
On the other hand, we have $X_1\sim X_2$ and $Y_1\sim Y_2$; these two facts together imply
that the concatenation
\[C:=X_1\underset{P_1}{\longrightarrow}Y_1\, Y_2\underset{P_2}{\longrightarrow}X_2\]
corresponds to a cycle in $F_{m,n}^{\{k\}}$.
Since $\{S_1,S_2\}$ is a partition of $V\left(F_{m,n}^{\{k\}}\right)$, it follows that $C$ is
Hamiltonian. Finally, note that the vertices
$Y_1=\{w_1,v_1,v_2,\ldots,v_{k-1}\}$ and $Y_2=\{v_1,v_2,\ldots,v_k\}$ are adjacent in $C$ and this completes the proof.
\end{proof}
\begin{appendices}
\section{A relationship between Gray codes for combinations and the Hamiltonicity of token graphs}
\label{app:Gray}
Consider the problem of generating all the subsets of an $n$-set, which can be reduced to the problem of generating all possible binary strings of length $n$ (since each subset can be transformed into an $n$-bit binary string by placing a $1$ in the $j$-th entry if $j$ belongs to the subset, and a $0$ otherwise). The most straightforward way of generating all these $n$-bit binary strings is counting in binary; however, many elements may change from one string to the next. Thus, it is desirable that only a few elements change between successive
strings. The case when successive strings differ by a single bit is commonly known as a \emph{Gray code}. Similarly, the problem of generating all the $k$-subsets of an $n$-set reduces to the problem of generating all the $n$-bit binary strings of constant weight $k$ (with exactly $k$ $1$'s).
The term ``Gray'' derives from Frank Gray, a research physicist at the Bell Telephone Laboratories, who used these codes in a patent he obtained for \textit{pulse code communication}.
Gray codes are known to have applications in different areas, such as cryptography, circuit testing, statistics and exhaustive combinatorial searches. For more detailed information on Gray codes, we refer the reader to \cite{baylis, ruskey-graycodes, savage}.
Next, we present a formal definition of Gray codes.
Let $S$ be a set of $n$ combinatorial objects and let $C$ be a relation on $S$, called the \textit{closeness relation}. A \emph{combinatorial Gray code} (or simply \emph{Gray code}) for $S$ is a listing $s_1,s_2,\ldots,s_n$ of the elements of $S$ such that $(s_i,s_{i+1})\in C$, for $i=1,2,\ldots,n-1$. Additionally, if $(s_n,s_1)\in C$, the Gray code is said to be \emph{cyclic}.
In other words, a Gray code for $S$ with respect to $C$ is a listing of the elements of $S$ in which successive elements are close (with respect to $C$). There is a digraph $G(S,C)$, the \emph{closeness graph}, associated to $S$ with respect to $C$, where the vertex set and edge set of $G(S,C)$ are $S$ and $C$, respectively. If the closeness relation is symmetric, then $G(S,C)$ is an undirected graph. A Gray code (resp. cyclic Gray code) for $S$ with respect to $C$ is a Hamiltonian path (resp. a Hamiltonian cycle) in $G(S,C)$.
We are interested in Gray codes for combinations. A $k$-combination of the set $[n]=\{1,2,\ldots,n\}$ is a $k$-subset of $[n]$, which, in turn, can be thought of as a binary string of length $n$ and constant weight $k$ (it has $k$ $1$'s and $n-k$ $0$'s). Consider the set $S=S(n,k)$ of all the $k$-combinations of $[n]$. Next, we mention three closeness relations that can be applied to $S$; for other closeness relations we refer to \cite{ruskey-graycodes}.
\begin{itemize}
\item[1)] The \emph{transposition} condition: two $k$-subsets are close if they differ in exactly two elements. Example: $\{1,2,5\}$ and $\{2,4,5\}$ are close, while $\{1,2,5\}$ and $\{1,3,4\}$ are not.
\item[2)] The \emph{adjacent transposition} condition: two $k$-subsets are close if they differ in exactly two consecutive elements $i$ and $i+1$. Example: $\{1,2,5\}$ and $\{1,3,5\}$ are close, while $\{1,2,5\}$ and $\{1,4,5\}$ are not.
\item[3)] The \emph{one or two apart transposition} condition: two $k$-subsets are close if they differ in exactly two elements $i$ and $j$, with $|i-j|\leq 2$. Example: $\{1,2,5\}$ and $\{1,4,5\}$ are close, while $\{1,2,5\}$ and $\{2,4,5\}$ are not.
\end{itemize}
The relationship between the closeness graph associated to $S$ with respect to one of these closeness conditions and some token graphs is the following: for the transposition condition, the associated closeness graph is isomorphic to the $k$-token graph of $K_n$; for the adjacent transposition condition, it is isomorphic to the $k$-token graph of $P_n$; and for the one or two apart transposition condition, it is isomorphic to the $k$-token graph of $P_n^2$. This relationship can be generalized as follows: for a closeness relation $C$ (with underlying graph $G$), the closeness graph associated to $S$ with respect to $C$ is isomorphic to the $k$-token graph of $G$. A direct consequence of this fact is that a Gray code and a cyclic Gray code for $S$ with respect to $C$ correspond to a Hamiltonian path and a Hamiltonian cycle, respectively, of the $k$-token graph of $G$.
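As a small worked example of this correspondence (our own code, not taken from the cited references): a cyclic Gray code for the $2$-subsets of $[4]$ under the transposition condition is exactly a Hamiltonian cycle of the $2$-token graph of $K_4$, which a tiny backtracking search finds:

```python
from itertools import combinations

def hamiltonian_cycle(nodes, adjacent):
    """Naive backtracking search -- fine for the 6-vertex example below."""
    start = nodes[0]
    def extend(path, used):
        if len(path) == len(nodes):
            return path if adjacent(path[-1], start) else None
        for v in nodes:
            if v not in used and adjacent(path[-1], v):
                r = extend(path + [v], used | {v})
                if r:
                    return r
        return None
    return extend([start], {start})

# Transposition closeness on 2-subsets of [4] = adjacency in the 2-token
# graph of K_4 (the Johnson graph J(4, 2)).
S = [frozenset(c) for c in combinations(range(1, 5), 2)]
close = lambda a, b: len(a ^ b) == 2
code = hamiltonian_cycle(S, close)
print([sorted(s) for s in code])  # a cyclic Gray code for the 2-subsets of [4]
```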
\end{appendices}
\begin{document}
\maketitle
\begin{abstract}
Survival analysis consists of studying the elapsed time until an event of interest, such as the death or recovery of a patient in medical studies. This work explores the potential of neural networks in survival analysis from clinical and RNA-seq data. Although the neural network approach to survival analysis is not recent, these methods were classically designed for low-dimensional input data. With the emergence of high-throughput sequencing data, the number of covariates of interest has become very large, raising new statistical issues. We present and test a few recent neural network approaches for survival analysis adapted to high-dimensional inputs.
\end{abstract}
\keywords{Survival analysis \and Neural networks \and High-dimension \and Cancer \and Transcriptomics}
\section{Introduction}
\label{sec:Intro}
Survival analysis consists of studying the elapsed time until an event of interest, such as the death or recovery of a patient in medical studies.
This paper aims to compare methods to predict a patient's survival from clinical and gene expression data.
The Cox model \citep{coxregression1972} is the reference model in the field of survival analysis. It relates the survival duration of an individual to the set of explanatory covariates. It also makes it possible to take into account the censored data that are often present in clinical studies. With high-throughput sequencing techniques, transcriptomics data are more and more often used as covariates in survival analysis. Adding these covariates raises issues of high-dimensional statistics, since we then have more covariates than individuals in the sample.
Methods based on regularization or screening \citep{tibshiranilasso1997,fanhigh-dimensional2010} have been developed and used to solve this issue.
The Cox model relies on the proportional hazard hypothesis, and in its classical version, does not account for nonlinear effects or interactions, which proves limited in some real situations. Therefore, in this paper, we focus on another type of methods: neural networks. Deep learning methods are more and more popular, notably due to their flexibility and their ability to handle interactions and nonlinear effects, including in the biomedical field~\citep{rajkomarscalable2018,kwongclinical2017,suodeep2018}.
The use of neural networks for survival analysis is not recent, since it dates back to the 90's \citep{faraggineural1995,biganzolifeed1998}, but it began being widely used only recently. We can distinguish two strategies. The first one relies on a neural network based on the Cox partial log-likelihood, such as those developed by \cite{faraggineural1995,chingcox-nnet:2018,katzmandeepsurv:2018,kvamme2019}. The second strategy consists of using a neural network based on a discrete-time survival model, as introduced by \cite{biganzolifeed1998}.
\cite{biganzolifeed1998} studied this neural network only in the low-dimensional setting. In this paper, our objective is to study and adapt this model to high-dimensional cases, and to compare its performance to two other methods:
the two-step procedure with the classical estimation of the parameters of the Cox model with a Lasso penalty to estimate the regression parameter and a kernel estimator of the baseline function (as in \cite{guilloux2016adaptive}) and the Cox-nnet neural network \citep{chingcox-nnet:2018} based on the partial likelihood of the Cox model.
Section 2 recalls the notations used in survival analysis and presents the different models. Then, we introduce the simulation plan created to compare the models. Finally, we present the results and conclude on the potential of neural networks in survival analysis.
\section{Models}
\label{sec:Models}
First, we introduce the following notations:
\begin{itemize}
\item $Y_i$ the survival time
\item $C_i$ the censorship time
\item $T_i = \min(Y_i, C_i)$ the observed time
\item $\delta_i$ the censoring indicator (equal to 1 if the event of interest is observed and to 0 otherwise).
\end{itemize}
\subsection{The Cox model}
\label{sec:Cox}
The Cox model~\citep{coxregression1972} predicts the survival probability of an individual from explanatory covariates $X_{i.} = (X_{i1} ,\dots, X_{ip})^{T} \in \mathbb{R}^p$. The hazard function $\lambda$ is given by:
\begin{equation}
\lambda(t|X_{i.}) = \alpha_0(t)\exp(\beta^TX_{i.}),
\label{eq:Cox}
\end{equation}
where $\alpha_0(t)$ corresponds to the baseline hazard and $\beta =(\beta_1,\dots,\beta_p)^T\in\mathbb{R}^p$ is the vector of regression coefficients.
A benefit of this model is that only $\alpha_0(t)$ depends on time, while the second factor on the right-hand side of (\ref{eq:Cox}) depends only on the covariates (proportional hazards model). The Cox model structure can be helpful when we are interested in the prognostic factors, because $\beta$ can be estimated without knowing the function $\alpha_0$. This is possible thanks to the Cox partial log-likelihood, the part of the total log-likelihood that does not depend on $\alpha_0(t)$, defined by:
$$\mathcal{L}\left(\beta\right)=\sum_{i=1}^n \delta_i\left(\beta^TX_{i.} \right) - \sum_{i=1}^{n} \delta_i \log\left(\sum_{l\in R_i} \exp\left( \beta^TX_{l.} \right)\right),$$
with $R_i$ the individuals at risk at the observation time $T_i$ of individual $i$, and $\delta_i$ the censorship indicator of individual $i$.
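As an illustration, this partial log-likelihood can be evaluated directly with risk sets $R_i = \{l : T_l \ge T_i\}$. The following is a minimal NumPy sketch, with $\delta_i$ weighting each uncensored term as in the standard definition; the function and variable names are ours, not taken from any package:

```python
import numpy as np

def cox_partial_loglik(beta, X, times, delta):
    """Cox partial log-likelihood with risk sets R_i = {l : T_l >= T_i}.

    beta  : (p,) regression coefficients
    X     : (n, p) covariate matrix
    times : (n,) observed times T_i
    delta : (n,) censoring indicators (1 = event observed)
    """
    eta = X @ beta                                    # linear predictors beta^T X_i
    # risk_mask[i, l] = True if individual l is still at risk at time T_i
    risk_mask = times[None, :] >= times[:, None]
    log_risk = np.log((risk_mask * np.exp(eta)[None, :]).sum(axis=1))
    return float(np.sum(delta * (eta - log_risk)))
```

A quadratic program or gradient ascent on this function (possibly with an $L_1$ penalty) recovers the Lasso-Cox estimation discussed below.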
The Lasso procedure was proposed by \cite{tibshiranilasso1997} for the estimation of $\beta$ in the high-dimensional setting. The coefficients of non-relevant variables are set to zero thanks to the $L_1$-penalty: the estimator minimizes the penalized criterion $-\mathcal{L}(\beta) + \lambda||\beta||_1.$
However, to predict the survival function $S$, we need to fully estimate the hazard function $\lambda(s|X_{i.})$, since:
$$S(t|X_{i.}) = \mathbb{P}(Y_i > t | X_{i.}) = \exp\left(- \int_0^t \alpha_0(s) \exp(\beta^TX_{i.} )ds \right).$$
We follow the two-step procedure of \cite{guilloux2016adaptive}: first, we estimate $\beta$ from the penalized Cox partial likelihood, and then we estimate $\alpha_0(t)$ from the kernel estimator introduced by \cite{ramlau-hansensmoothing1983}, in which we have plugged the Lasso estimate of $\beta$.
\subsection{Neural networks}\label{sec:NN}
The neural networks studied in this paper are fully-connected multi-layer perceptrons, composed of an input layer, one or several hidden layers, and an output layer.
\subsubsection{Cox-nnet}
\label{subsec:chap3_coxnn}
\cite{faraggineural1995} developed a neural network based on the proportional hazards model. Their idea was to replace the linear predictor of the Cox regression with the output of the neural network's hidden layer.
\cite{faraggineural1995} only applied their neural network to survival analysis from clinical data, in low dimension. More recently, several authors revisited this method \citep{chingcox-nnet:2018,katzmandeepsurv:2018,kvamme2019}. However, only Cox-nnet \citep{chingcox-nnet:2018} was applied in a high-dimensional setting. We will thus use this model as a benchmark in our study.
The principle of Cox-nnet is that its output layer corresponds to a Cox regression: the output of the hidden layer replaces the linear function of the covariates in the exponential of the Cox model equation.
To estimate the neural network weights, \cite{chingcox-nnet:2018} uses the Cox partial log-likelihood as the neural network loss:
\begin{equation}
\mathcal{L}(\beta, W, b) = \sum_{i=1}^n \delta_i \theta_i - \sum_{i=1}^{n} \delta_i \log\left(\sum_{l\in R_i} \exp\left(\theta_l\right)\right)
\end{equation}
with $\delta_i$ the censoring indicator and $\theta_i =\beta^TG(W^TX_{i.}+b)$, where $G$ is the activation function of the hidden layer, $W = (w_{dh})_{1\le d \le p, 1 \le h \le H}$ with $H$ the number of neurons in the hidden layer, $\beta = (\beta_1, \ldots, \beta_H)^T$ the weights, and $b$ the biases of the neural network to be estimated. In this network, the activation function $\tanh$ is used. To the negative partial log-likelihood, \cite{chingcox-nnet:2018} add a ridge penalty on the parameters. Thus, the final cost function for this neural network is:
\begin{equation}
Loss(\beta, W, b) = -\mathcal{L}(\beta,W,b) + \lambda(\|\beta\|_2^2 + \|W\|_2^2 + \|b\|_2^2). \label{eq:Cout_coxnn}
\end{equation}
We minimize this loss function to deduce estimators of $\beta$, $W$ and $b$. Since the output layer corresponds to a Cox regression, we have:
\begin{equation}
\label{eq:h}
\hat{h}_i = \exp \left( \underbrace{\sum_{h=1}^{H} \hat\beta_h G \left(\hat b_h + \hat W^TX_{i.}\right)}_{\hat\theta_i = \hat\beta^TG(\hat W^TX_{i.}+\hat b)} \right).
\end{equation}
The output of the neural network $\hat{h}_i$ corresponds to the part of the Cox regression that does not depend on time. \cite{chingcox-nnet:2018} only used $\hat{h}_i$, but in our study, we are interested in the complete survival function, and thus we need to estimate the complete hazard function $\widehat{h}(X_{i.},t)$. For that purpose, we estimate the baseline risk $\alpha_0(t)$ with the kernel estimator introduced by \cite{ramlau-hansensmoothing1983}.
As for the Cox model, we estimate $\alpha_0(t)$ with the two-step procedure of \cite{guilloux2016adaptive}; this estimator is defined by:
\begin{equation}
\widehat{\alpha}_{m}(t) = \frac{1}{n m} \sum_{i=1}^{n} K \left(\frac{t-T_i}{m} \right) \frac{\delta_i}{\sum_{l \in R_i} \widehat{h}_l},
\label{eq:RH}
\end{equation}
with $\hat{h}_l$ the estimator defined by (\ref{eq:h}), $K: \mathbb{R} \rightarrow \mathbb{R}$ a kernel (a positive function with integral equal to 1), and $m > 0$ the bandwidth. The bandwidth $m$ can be obtained by cross-validation or by the Goldenshluger and Lepski method~\cite{goldenshluger_bandwidth_2011}, for instance; we choose the latter.
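For concreteness, here is a small NumPy sketch of this kernel estimator with an Epanechnikov kernel. We use the $1/m$ normalization of the classical Ramlau-Hansen estimator (normalization conventions vary across references), and the names are ours:

```python
import numpy as np

def baseline_hazard_kernel(t_grid, times, delta, h_hat, m):
    """Kernel estimator of the baseline hazard alpha_0(t): each event at
    T_i contributes K((t - T_i)/m) * delta_i / sum_{l in R_i} h_hat_l."""
    t_grid = np.atleast_1d(np.asarray(t_grid, dtype=float))
    kernel = lambda u: 0.75 * (1.0 - u**2) * (np.abs(u) <= 1)  # Epanechnikov
    # denom[i] = sum of the fitted relative risks over the risk set R_i
    risk_mask = times[None, :] >= times[:, None]
    denom = (risk_mask * h_hat[None, :]).sum(axis=1)
    out = np.empty_like(t_grid)
    for j, t in enumerate(t_grid):
        out[j] = np.sum(kernel((t - times) / m) * delta / denom) / m
    return out
```

The fitted relative risks `h_hat` can be either $\exp(\hat\beta^T X_{l.})$ for the Cox-Lasso, or the Cox-nnet outputs $\hat h_l$ of (\ref{eq:h}).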
We can finally derive an estimator of the survival function for individual $i$:
\begin{equation}
\widehat{S}(t|X_{i.}) = \exp \left( - \int_{0}^t \widehat{\alpha}_{m}(s) \hat h_i ds\right).
\end{equation}
\subsubsection{Discrete time neural network}
\label{subsec:chap3_NNsurv1}
\cite{biganzolifeed1998} have proposed a neural network based on a discrete-time model. They introduced $L$ time intervals $A_l = ]t_{l-1},t_l]$ and built a model predicting in which interval the failure event occurs. The discrete hazard is the conditional probability of failure:
\begin{equation}
h_{il} = P(Y_i \in A_l|Y_i> t_{l-1}), \label{eq:hazDis}
\end{equation}
with $Y_i$ the survival time of individual $i$. \cite{biganzolifeed1998} duplicate the individuals at the input of the neural network, which gives it a less standard structure than that of a classical multi-layer perceptron. The network takes as input the set of variables of the individual together with an additional variable corresponding to the mid-point $a_l$ of each interval. Due to the addition of this variable, the $p$ variables of each individual are repeated for each time interval. The output is the estimated hazard ${h}_{il} = h_l(X_{i.}, a_l)$ for individual $i$ at time $a_l.$ We schematize the structure of this neural network in \textsc{Figure}~\ref{fig:NNsurv}.
\begin{figure}
\includegraphics[width=\textwidth]{NN.eps}
\caption{Structure of the neural network based on the discrete-time model of \cite{biganzolifeed1998}}
\label{fig:NNsurv}
\end{figure}
\cite{biganzolifeed1998} initially used a three-layer neural network with a logistic activation function for both the hidden and output layers.
The output of the neural network with $H$ neurons in the hidden layer and $p+1$ input variables is given by:
$${h}_{il} = {h}(X_{i.}, a_l) = f_2\left(a + \beta^T f_1\left(b + W^T \tilde{X}_{il} \right)\right), \quad \mbox{with } \tilde{X}_{il} = (X_{i.}^T, a_l)^T,$$
where $W = (w_{dh})_{1\le d \le p+1, 1 \le h \le H}$ and $\beta = (\beta_{1}, \ldots, \beta_{H})^T$ are the weights of the neural network, $a$ and $b$ are the biases to be estimated, and $f_1$ and $f_2$ are the sigmoid activation functions. The target of this neural network is the death indicator $d_{il}$, which indicates whether individual $i$ dies in the interval $A_l$. We introduce $l_i \le L$, the number of intervals in which individual $i$ is observed; $d_{i0}=\ldots=d_{i(l_i-1)} = 0$ whatever the status of individual $i$, and $d_{il_i}$ is equal to $0$ if individual $i$ is censored and to $1$ otherwise.
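The duplication described above can be made concrete with a short sketch that expands each individual into one row per interval, appending the interval mid-point and producing the targets $d_{il}$. This is a minimal illustration with our own names, not the paper's implementation:

```python
import numpy as np

def make_discrete_time_dataset(X, times, delta, cuts):
    """Expand (X_i, T_i, delta_i) into one row per (individual, interval):
    covariates are repeated for every interval A_l = (t_{l-1}, t_l] up to
    the one containing T_i, with the interval mid-point appended as an
    extra input and the death indicator d_il as target."""
    mids = (cuts[:-1] + cuts[1:]) / 2
    rows, targets = [], []
    for x, t, d in zip(X, times, delta):
        # l_i = index of the interval containing the observed time
        l_i = int(np.searchsorted(cuts[1:], t, side="left"))
        for l in range(l_i + 1):
            rows.append(np.append(x, mids[l]))
            # d_il = 1 only in the last interval, and only if uncensored
            targets.append(1 if (l == l_i and d == 1) else 0)
    return np.array(rows), np.array(targets)
```

The resulting long-format table is what the network of \textsc{Figure}~\ref{fig:NNsurv} is trained on.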
The cost function used by \cite{biganzolifeed1998} is the cross-entropy function and the weights of the neural network can be estimated by minimizing it:
\begin{eqnarray}
\mathcal{L}(\beta,W,a,b) = -\sum_{i=1}^{n} \sum_{l=1}^{l_i} \left[ d_{il} \log({h}_{il}) + (1-d_{il}) \log(1 - {h}_{il}) \right]. \label{eq:crossEntropy}
\end{eqnarray}
The duplication of the individuals for each time interval increases the sample size in the neural network, which is an advantage in a high-dimensional framework. Moreover, \cite{biganzolifeed1998} added a ridge penalty to the cross-entropy function (\ref{eq:crossEntropy}):
\begin{equation}
\label{eq:loss}
Loss(\beta,W,a,b) = \mathcal{L}(\beta,W,a,b)+ \lambda (\|\beta\|_2^2+\|W\|_2^2+\|a\|_2^2+\|b\|_2^2).
\end{equation}
In \cite{biganzolifeed1998}, $\lambda$ was chosen by deriving an information criterion. We choose instead to use cross-validation, since it improved the predictive capacity of the model.
After estimating the parameters of the neural network by minimizing the loss function (\ref{eq:loss}), the output obtained is the estimate of the discrete risk $\widehat{h}_{il}$ for each individual $i$ and the survival function of individual $i$ is estimated using:
\begin{equation}
\widehat{S}(t_{l_i}|X_{i.}) = \prod_{l=1}^{l_i} (1 - \widehat{h}_{il}). \label{eq:hazTosurv}
\end{equation}
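Converting the estimated discrete hazards $\widehat{h}_{il}$ into a survival curve is then a cumulative product, as in (\ref{eq:hazTosurv}); a minimal sketch (our naming):

```python
import numpy as np

def discrete_survival(h):
    """Survival at the interval endpoints: S(t_l) = prod_{j <= l} (1 - h_j)."""
    return np.cumprod(1.0 - np.asarray(h, dtype=float))
```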
This model was only applied to low-dimensional inputs, and this paper investigates its performance and its capacity to adapt to high-dimensional settings. We denote this network NNsurv. We noticed an improvement of the performance when using a ReLU activation function for the hidden layers and thus used it instead of the original sigmoid functions. Moreover, the original neural network only has one hidden layer. We propose to add one supplementary hidden layer to study whether a deeper structure could improve the prediction capacity of the neural network. We call the deeper version NNsurv-deep. Its structure is similar to the one schematized in Figure~\ref{fig:NNsurv}, but with two hidden layers instead of one. The input layer does not change, and the individuals are always duplicated at the input of the neural network. The output layer also has a single neuron corresponding to the discrete hazard estimate. These neural networks are implemented in a package available at \url{https://github.com/mathildesautreuil/NNsurv}.
We will compare the performances of these four models (Cox-Lasso, Cox-nnet, NNsurv, NNsurv-deep) on simulated data and then to a real dataset.
\section{Simulations}
\label{sec:Simu}
We create a simulation design to compare the different neural network approaches for predicting survival time in high dimension. We divide the simulation plan into two parts. The first part concerns a simulation study based on \citep{bendergenerating2005}, which proposes to generate the survival data from a Cox model. Data simulated with this model naturally favor the two methods based on the Cox model. We also consider a model with a more complex behavior: the Accelerated Hazards (AH) model~\citep{chen_analysis_2000}. In the AH model, variables accelerate or decelerate the hazard risk. The survival curves of the AH model can therefore cross each other. Other choices of models were also possible, and in Appendix \ref{appendix}, we also present the results for the Accelerated Failure Time (AFT) model~\citep{kalbfleischstatistical2002}, which does not satisfy the proportional hazards assumption either, but does not allow the survival curves of different patients to cross.
In all cases, the baseline risk function of the models is assumed to be known and to follow a particular probability distribution. We use the Weibull distribution for the Cox model and the log-normal distribution for the AH model. Several simulations are considered, varying the sample size, the total number of explanatory variables, and the number of relevant explanatory variables in the model. We use the package that we have developed, called \verb?survMS?, available on CRAN and at \url{https://github.com/mathildesautreuil/survMS}.
\subsection{Generation of survival times}
Considering the survival models (Cox, AFT, and AH models), the survival function $S(t|X)$ can be written as:
\begin{equation}
S(t|X) = \exp(-H_0(\psi_1(X)t)\psi_2(X)),\;\mbox{with}
\end{equation}
$H_0$ the baseline cumulative hazard and
$$
(\psi_1(X), \psi_2(X)) = \left\{
\begin{array}{ll}
(1, \exp(\beta^TX)) & \mbox{for the Cox model } \\
(\exp(\beta^TX), \exp(-\beta^TX)) & \mbox{for the AH model} \\
(\exp(\beta^TX), 1) & \mbox{for the AFT model. }
\end{array}
\right.
$$
The distribution function is deduced from the survival function from the following formula:
\begin{equation}
F(t|X) = 1 - S(t|X). \label{eq:F}
\end{equation}
For data generation, if $Y$ is a random variable with cumulative distribution function $F,$ then $U = F(Y)$ follows a uniform distribution on the interval $[0,1],$ and $(1-U)$ also follows a uniform distribution $\mathcal{U}[0,1].$ From Equation (\ref{eq:F}), we finally obtain that:
\begin{equation}
1 - U = \exp(-H_0(\psi_1(X)t)\psi_2(X)).
\end{equation}
If $\alpha_0(t)$ is positive for all $t,$ then $H_0$ is invertible, and we can express the survival time of each of the models considered (Cox, AFT and AH) using $H_0^{-1}.$ In a general form, the random survival times of the survival models write:
\begin{equation}
T = \frac{1}{\psi_1(X)} H^{-1}_0 \left( \frac{-\log(1-U)}{\psi_2(X)} \right).
\end{equation}
Two distributions are used for the cumulative hazard function $H_0(t)$ to generate the survival data. If the survival times are distributed according to a Weibull distribution $\mathcal{W}(a, \lambda),$ the baseline hazard is of the form:
\begin{equation}
\alpha_0(t) = a\lambda t^{a-1}, \lambda > 0, a > 0.
\end{equation}
The inverse of the cumulative hazard function is expressed as follows:
\begin{equation}
H_0^{-1}(u) = \left( \frac{u}{\lambda} \right)^{1/a}.
\end{equation}
For survival times following a log-normal distribution $\mathcal{LN}(\mu, \sigma)$, where $\mu$ and $\sigma$ are the mean and standard deviation of the logarithm of the survival time, the baseline hazard function writes:
\begin{equation}
\alpha_0(t) = \frac{\frac{1}{t\sigma\sqrt{2\pi}} \exp\left[-\frac{(\log t - \mu)^2 }{2 \sigma^2}\right]}{1 - \Phi\left[\frac{\log t - \mu}{\sigma}\right]},
\end{equation}
with $\Phi(t)$ the distribution function of a standard Normal distribution.
The inverse of the cumulative hazard function is expressed by:
\begin{equation}
H_0^{-1}(u) = \exp(\sigma\Phi^{-1}(1-\exp(-u))+\mu), \label{eq:InvHazLN}
\end{equation}
with $\Phi^{-1}$ the quantile function of the standard normal distribution.
\subsection{Simulation with the Cox - Weibull model}
\paragraph{\textbf{Survival times and baseline function: }}
The generation of survival times from a variety of parametric distributions was described by \cite{bendergenerating2005}.
In the case of a Cox model with a baseline function distributed from a Weibull distribution, the inverse cumulative hazard function is $H_0^{-1}(t) = (\frac{t}{\lambda})^{\frac{1}{a}}$ and the survival time T of the Cox model is expressed as:
\begin{equation}
T = \left( -\frac{1}{\lambda} \log(1-U) \exp(-X_{i.} \beta) \right)^{\frac{1}{a}}, \label{eq:Times}
\end{equation}
where $U$ is a random variable with $U \sim \mathcal{U}[0,1].$
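A sketch of this inverse-transform simulation for the Cox-Weibull case, with baseline cumulative hazard $H_0(t)=\lambda t^a$ (names are ours, not the \verb?survMS? API):

```python
import numpy as np

def simulate_cox_weibull(X, beta, a, lam, rng):
    """Draw T_i = (-log(1-U_i) * exp(-beta^T X_i) / lam)^(1/a),
    i.e. survival times from a Cox model with Weibull baseline."""
    U = rng.uniform(size=X.shape[0])
    return (-np.log(1.0 - U) * np.exp(-X @ beta) / lam) ** (1.0 / a)

rng = np.random.default_rng(0)
X = rng.normal(size=(100000, 2))
# with beta = 0, a = 1, lam = 1, T reduces to a standard exponential (mean 1)
T = simulate_cox_weibull(X, np.array([0.0, 0.0]), a=1.0, lam=1.0, rng=rng)
```

Censoring times can then be drawn independently (e.g. exponentially) to produce $(T_i \wedge C_i, \delta_i)$.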
\paragraph{\textbf{Choice of parameters of the Weibull distribution:}}
\label{sec:Param}
We chose the Weibull distribution parameters so that our simulation design is close to real datasets. The mean and the standard deviation of the breast cancer real dataset (available at \url{www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE6532}) are around $2325$ days and $1304$ days, respectively. As the survival times follow a Weibull distribution, the mean and the variance of $T$ write:
$$\mathbb{E}(T) = \frac{1}{\sqrt[a]{\lambda}} \Gamma\left(\frac{1}{a}+1\right) \,\; \mbox{and} \,\; \mathbb{V}(T) = \frac{1}{\sqrt[a]{\lambda^2}} \left[ \Gamma\left(\frac{2}{a}+1\right) - \Gamma^2\left(\frac{1}{a}+1\right)\right],$$
where $\Gamma$ is the Gamma function. We set $a = 2$ and $\lambda = 1.3\times 10^{-7}$ so that the mean and variance of our simulated datasets are close to those of the breast cancer real dataset.
\subsection{Simulation with the AH - Log-Normal model}
\paragraph{\textbf{Survival times and baseline function: }}
Building on the work of \cite{bendergenerating2005}, we also simulate survival data from the AH model. We perform this simulation to generate data whose survival curves intersect.
For this simulation, we consider that the survival times follow a log-normal distribution $\mathcal{LN}(\mu,\sigma).$ In this case, the inverse of the cumulative hazard function is expressed as (\ref{eq:InvHazLN}), and we have:
\begin{equation}
T = \frac{1}{\exp(\beta^T X_{i.})} \exp\left(\sigma \Phi^{-1}\left(1-\exp\left(\frac{\log(1-U)}{\exp(-\beta^T X_{i.})}\right)\right) + \mu\right) \label{eq:Tah}
\end{equation}
with $\Phi^{-1}$ the quantile function of the standard normal distribution.
\paragraph{\textbf{Choice of parameters of the Log-Normal distribution:}}
As in the previous simulation, we ensure that the distribution of the simulated data is close to that of the real data, using the formulas:
\begin{equation}
\mu = \ln(\mathrm{E}(T))- \frac{1}{2} \sigma^2 \label{eq:mu} \,\;\, \mbox{and} \,\;\,
\sigma^2 = \ln\left(1+\frac{\mathrm{Var}(T)}{(\mathrm{E}(T))^2}\right).
\end{equation}
Since the expectation and the standard deviation are respectively $2325$ and $1304$, the values of $\mu$ and $\sigma$ used for the simulation of the survival data should be $\mu = 7.73$ and $\sigma = 0.1760.$ However, to have survival curves crossing rapidly, we take a higher value, $\sigma=0.7$.
\subsection{Metrics}
\label{subsec:Metric}
To assess the performance of the survival models, we use two classical metrics, the Concordance Index (CI) and the Integrated Brier Score (IBS).
\subsubsection{Concordance Index}
This index measures whether the predictions of the model under study match the ranking of the survival data. If the event time of an individual $i$ is shorter than that of an individual $j$, a good model predicts a higher probability of survival for individual $j$. This metric takes censored data into account and takes values between $0$ and $1$. If the C-index is equal to $0.5$, the model is equivalent to random guessing.
The time-dependent C-index proposed by \cite{antolini_time-dependent_2005}, adapted to non-proportional hazard models, is chosen in this study.
Consider $n$ individuals, and for $1\leq i\leq n$, $T_i$ their observation times (either survival or censoring times) and $\delta_i$ their censorship indicators.
For $i,j = 1, \ldots,n,\ i\ne j,$ we define the indicators:
\begin{equation}
comp_{ij} = \mathbb{1}_{\{(T_i < T_j; \delta_i = 1) \cup (T_i = T_j; \delta_i = 1, \delta_j = 0)\}} \nonumber
\end{equation}
and
\begin{equation}
conc_{ij}^{td} = \mathbb{1}_{\{S(T_i | X_{i.}) < S(T_i| X_{j.})\}} comp_{ij}. \nonumber
\end{equation}
The estimate of the time-dependent C-index for survival models
is equal to:
\begin{equation}
\widehat{C}_{td} = \frac{\sum_{i=1}^n \sum_{j \ne i} conc_{ij}^{td}}{\sum_{i=1}^n \sum_{j \ne i} comp_{ij}}
\label{eq:ctd}
\end{equation}
In the case of proportional hazards or linear transformation models, the metric $\widehat{C}_{td}$ of equation (\ref{eq:ctd}) is equivalent to the usual C-index \cite{gerdsestimating2013}.
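The estimator $\widehat{C}_{td}$ can be computed directly from the pair indicators; a minimal sketch (our own naming), where, following Antolini's time-dependent definition, both survival probabilities of a comparable pair are evaluated at the event time $T_i$ of the earlier individual:

```python
import numpy as np

def c_td(times, delta, surv):
    """Time-dependent concordance in the spirit of Antolini et al.
    surv[i, j] = predicted S(T_i | X_j); a comparable pair (i, j) is
    concordant when individual i gets the lower survival at T_i."""
    n = len(times)
    conc = comp = 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            comparable = (times[i] < times[j] and delta[i] == 1) or (
                times[i] == times[j] and delta[i] == 1 and delta[j] == 0)
            if comparable:
                comp += 1
                conc += surv[i, i] < surv[i, j]
    return conc / comp
```

The double loop is quadratic in $n$; vectorized implementations exist in survival analysis libraries, but this form mirrors the indicators above.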
\subsubsection{Integrated Brier Score}
The Brier score measures the squared error between the indicator function of surviving at time $t$, $\mathbb{1}_{\{T_i\ge t\}}$, and its prediction by the model $\widehat{S}(t|X_{i.})$. \cite{graf_assessment_1999} adapted the Brier score~\cite{brier_verification_1950} for censored survival data using the inverse probability of censoring weights (IPCW) and \cite{gerds_consistent_2006} subsequently proposed a consistent estimator of the Brier score in the presence of censored data.
The Brier score is defined by:
\begin{equation}
BS(t, \widehat{S}) = \mathbb{E}\left[ \left( Y_i (t) - \widehat{S}(t|X_{i.} )\right)^2 \right],
\end{equation}
where $Y_i(t) = \mathbb{1}_{\{T_i \ge t\}}$ is the status of individual $i$ at time $t$ and $\widehat{S}(t|X_{i.} )$ is the predicted survival probability at time $t$ for individual $i$. Unlike the C-index, a lower value of this score shows a better predictive ability of the model.
As mentioned above, \cite{gerds_consistent_2006} gave an estimate of the Brier score in the presence of censored survival data.
The estimate of the Brier score under right censoring is:
\begin{equation}
\widehat{BS}(t,\widehat{S}) = \frac{1}{n} \sum_{i=1}^n \widehat{W}_i(t)(Y_i(t) - \widehat{S}(t|X_{i.}))^2, \label{eq:BS}
\end{equation}
with $n$ the number of individuals in the test set. In the presence of censored data, it is necessary to adjust the score by weighting it with the inverse probability of censoring weights (IPCW), defined by:
\begin{equation}
\widehat{W}_i(t) = \frac{(1-Y_i (t))\delta_i}{\widehat{G}(T_{i}|X_{i.})} + \frac{Y_i (t)}{\widehat{G}(t|X_{i.})}, \label{eq:Weights}
\end{equation}
where $\delta_i$ is the censoring indicator, equal to 1 if the survival time is observed and to 0 if it is censored, and $\widehat{G}(t|x)$ is the Kaplan-Meier~\cite{kaplan_nonparametric_1958} estimator of the survival function of the censoring times at time $t.$
The integrated Brier score~\cite{mogensen_evaluating_2012} summarizes the predictive performance estimated by the Brier score~\cite{brier_verification_1950}:
\begin{equation}
\widehat{IBS} = \frac{1}{\tau} \int_0^{\tau} \widehat{BS}(t, \widehat{S})dt, \label{eq:IBS}
\end{equation}
where $\widehat{BS}(t, \widehat{S})$ is the estimated Brier score and $\tau > 0$. We take $\tau$ as the maximum of the observed times, so that the Brier score is averaged over the interval $[0,\tau].$
As for the Brier score, a lower value of the $IBS$ indicates a better predictive ability of the model.
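A compact sketch of the IPCW Brier score (\ref{eq:BS})-(\ref{eq:Weights}), with the Kaplan-Meier estimator $\widehat{G}$ of the censoring survival built from the censoring indicators. This is our own minimal implementation (ties and edge cases handled naively), not a library routine:

```python
import numpy as np

def km_censoring(times, delta):
    """Kaplan-Meier estimator G(t) of the censoring survival function:
    the 'events' here are the censorings (delta == 0)."""
    order = np.argsort(times)
    t_sorted, d_sorted = times[order], delta[order]
    n = len(times)
    at_risk = n - np.arange(n)                     # # individuals with T_l >= t_(i)
    G = np.cumprod(1.0 - (d_sorted == 0) / at_risk)
    def G_of(t):
        idx = np.searchsorted(t_sorted, t, side="right") - 1
        return np.where(idx < 0, 1.0, G[np.clip(idx, 0, n - 1)])
    return G_of

def brier_ipcw(t, times, delta, surv_pred, G_of):
    """IPCW Brier score at time t; surv_pred[i] = predicted S(t | X_i)."""
    Y = (times >= t).astype(float)                 # 1 if still at risk at t
    W = (1.0 - Y) * delta / np.maximum(G_of(times), 1e-12) \
        + Y / np.maximum(G_of(t), 1e-12)
    return float(np.mean(W * (Y - surv_pred) ** 2))
```

The $\widehat{IBS}$ of (\ref{eq:IBS}) is then obtained by averaging `brier_ipcw` over a grid of times up to $\tau$.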
\section{Results}
\label{sec:results}
In this section, we compare the performances of the Cox model with Lasso penalty (denoted CoxL1) \cite{tibshiranilasso1997}, the neural network based on the Cox partial log-likelihood, Cox-nnet \cite{chingcox-nnet:2018}, presented in Section~\ref{sec:Models}, and the discrete-time neural networks (NNsurv and NNsurv-deep), adapted from \cite{biganzolifeed1998} and also presented in Section~\ref{sec:Models}. The performances are compared on simulated data (with the Cox and AH models, and for several parametric configurations) and on a real data case presented below. The time-dependent C-index ($C_{td}$) and Integrated Brier Score ($IBS$) are used for this purpose. We can compute reference $C_{td}$ and $IBS$ values from our simulations, based on the exact model used for the simulation. Note, however, that the models under comparison can sometimes ``beat'' these reference values by chance (due to the random generation of survival times).
\subsection{Simulation study}
We denote by $n$ the number of samples, $n\in\{200, 1000\}$, and by $p$ the number of covariates, $p\in\{10, 100, 1000\}$. Note that even if our objective is to apply our models to predict survival from RNA-seq data, we present simulation results up to $1000$ covariates (instead of the several tens of thousands usually available with RNA-seq). Indeed, when we performed tests with 10,000 inputs, none of the models was able to perform well, thus underlining the necessity of a preliminary filtering, as classically done when handling RNA-seq data \citep{conesa2016}.
\subsubsection{Results for the Cox - Weibull simulation}
The Cox-Weibull simulation corresponds to a Cox model's data with a baseline risk modeled by a Weibull distribution. In this simulation, the model satisfies the proportional hazards assumption.
The results of this simulation in \textsc{Table}~\ref{tab_CoxWeib} show that Cox-nnet performs best concerning the $C_{td}$ in all settings (regardless of the number of variables or sample size) and in most settings for the $IBS$. As we can see from \textsc{Table}~\ref{tab_CoxWeib}, Cox-nnet obtains the best $IBS$ values for a sample size equal to 200 with 10 and 100 variables, and for a sample size equal to 1000 with 100 and 1000 variables.
CoxL1 also has the best $IBS$ (\textit{i.e.} the lowest) for a sample size of 1000 and 10 variables. These good results of CoxL1 and Cox-nnet are not surprising because we simulated the data from a Cox model. We can observe in \textsc{Table}~\ref{tab_CoxWeib} that NNsurv-deep obtains the lowest $IBS$ value for 200 individuals and 1000 variables. We can also see that the $IBS$ values of NNsurv and NNsurv-deep are very close to the reference $IBS$ values. This phenomenon is also true when the sample size is equal to 1000, and the number of variables is equal to 100.
Moreover, we can observe in \textsc{Table}~\ref{tab_CoxWeib} that some of the $C_{td}$ values obtained for NNsurv and NNsurv-deep are close to those of Cox-nnet. We notice this notably when the sample size is equal to 200 and the number of variables is equal to 10, and when the number of samples is 1000 and the number of variables is 10 or 100. We can also see that some of the $C_{td}$ values for the discrete-time neural networks are better than those obtained with the Cox model, for example for a sample size equal to 200 with 100 variables, or for a sample size equal to 1000, whatever the number of variables.
\begin{table*}[!h]
\begin{center}
\tabcolsep = 1.5\tabcolsep
\begin{tabular}{p{1.5cm}|p{1.5cm}||p{1.4cm}p{1.4cm}p{1.4cm}|p{1.4cm}p{1.4cm}p{1.4cm}|}
\hline\hline
& n & \multicolumn{3}{|c|}{200} & \multicolumn{3}{|c|}{1000} \\
\hline
Method & p & 10 & 100 & 1000 & 10 & 100 & 1000\\
\hline
\hline
Reference & $C_{td}^{\star}$ &\textbf{ 0.7442} & \textbf{0.7428} & \textbf{0.7309} & \textbf{0.7442} & \textbf{0.7428} & \textbf{0.7309 }\\
\cline{2-8} & $IBS$$^{\star}$ & \textbf{ 0.0471} & \textbf{0.0549} & \textbf{0.0582} & \textbf{0.0471} & \textbf{0.0549} & \textbf{0.0582} \\
\hline
\hline
NNsurv & $C_{td}$ & 0.7137 & 0.6224 & 0.5036 & 0.7398 & 0.7282& 0.5700 \\
\cline{2-8} & $IBS$ & 0.0980 & 0.0646 & 0.1359 & 0.0759 & 0.0537 & 0.1007 \\
\hline
\hline
NNsurv & $C_{td}$ & 0.7225 & 0.5982 & 0.5054 & 0.7424 & 0.7236 & 0.5741\\
\cline{2-8} deep & $IBS$ & 0.0878 & 0.0689 & \textbf{0.1080} & 0.0591 & 0.0555 & 0.1185 \\
\hline
\hline
Cox & $C_{td}$ & \textbf{0.7313} & \textbf{0.6481} & \textbf{0.5351} & \textbf{0.7427} & \textbf{0.7309} & \textbf{0.6110} \\
\cline{2-8} -nnet & $IBS$ & \textbf{0.0688} & \textbf{0.0622} & 0.1402 & 0.0640 & \textbf{0.0498} & \textbf{0.0710} \\
\hline
\hline
CoxL1 & $C_{td}$ & 0.7292 & 0.5330 & 0.5011 & 0.7419 & 0.7243 & 0.5 \\
\cline{2-8} & $IBS$ & 0.0715 & 0.0672 & 0.1175 & \textbf{0.0541} & 0.0509 & 0.0770 \\
\hline
\hline
\end{tabular}
\caption{Results of predicting methods on Cox-Weibull simulation}
\label{tab_CoxWeib}
\end{center}
\end{table*}
\paragraph{Synthesis: }
Not surprisingly, Cox-nnet has the best results on this dataset simulated from a Cox model with a Weibull distribution. However, the neural networks based on a discrete-time model (NNsurv and NNsurv-deep) have very comparable performances, and clearly outperform the CoxL1 model when the number of variables increases.
\subsubsection{Results for the AH - Log-Normal simulation}
The results presented in \textsc{Table}~\ref{tab_simAHLN} are those obtained on the AH simulation with the baseline hazard following a log-normal distribution. In this simulation, the risks are not proportional, and the survival functions of different individuals can cross.
We can observe that the neural networks based on a discrete-time model have the best performances for both the $C_{td}$ and the $IBS$, and their values are close to the reference $C_{td}$ and $IBS$. This is particularly true for the $IBS$ when the sample size is equal to 1000: the $IBS$ values of NNsurv and NNsurv-deep are then lower than the reference $IBS$.
On the other hand, the methods based on the Cox partial likelihood have the highest $C_{td}$ values for a small sample size (n=200) and a small number of variables (p=10) or, on the contrary, for a large sample size (n=1000) and a large number of variables (p=1000).
For a sample size equal to 200, neural networks based on a discrete-time model have higher $C_{td}$ values than those obtained by CoxL1 and Cox-nnet.
For a large sample size (n=1000), the $IBS$ values obtained by the two methods using the Cox partial likelihood are good. For a small number of individuals (n=200), however, the $IBS$ values of CoxL1 and Cox-nnet are very high. For example, Cox-nnet obtains $IBS$ values equal to 0.2243 and 0.1609 for 100 and 1000 variables, respectively, and CoxL1 gets $IBS$ values equal to 0.2278 and 0.1614, respectively. These values are very high compared to the reference $IBS$. CoxL1 and Cox-nnet therefore have more difficulty with a small number of samples, and their predictions are not as good as those given by the discrete-time neural networks.
\begin{table*}[!h]
\begin{center}
\tabcolsep = 1.5\tabcolsep
\begin{tabular}{p{1.5cm}|p{1.5cm}||p{1.4cm}p{1.4cm}p{1.4cm}|p{1.4cm}p{1.4cm}p{1.4cm}|}
\hline\hline
& n & \multicolumn{3}{|c|}{200} & \multicolumn{3}{|c|}{1000} \\
\hline
Method & p & 10 & 100 & 1000 & 10 & 100 & 1000\\
\hline
\hline
Reference & $C_{td}^{\star}$ & \textbf{0.7225} & \textbf{0.6857} & \textbf{0.7070} & \textbf{0.7225} & \textbf{0.6867} & \textbf{0.7070} \\
\cline{2-8} & $IBS$$^{\star}$ & \textbf{ 0.0755} & \textbf{0.0316} & \textbf{0.0651} & \textbf{0.0755} & \textbf{0.0316} & \textbf{0.0651} \\
\hline
\hline
NNsurv & $C_{td}$ & 0.6863 & \textbf{0.5971} & \textbf{0.5358} & 0.7084 & 0.6088 & 0.5654 \\
\cline{2-8} & $IBS$ & 0.1247 & \textbf{0.0780} & \textbf{0.0859} & 0.0699 & 0.0347 & 0.0533 \\
\hline
\hline
NNsurv & $C_{td}$ & 0.7042 & 0.5793 & 0.5325 & \textbf{0.7155} & \textbf{0.6450} & 0.5702 \\
\cline{2-8} deep & $IBS$ & 0.1789 & 0.2529 & 0.1554 & \textbf{0.0602} & \textbf{0.0303} & \textbf{0.0484} \\
\hline
\hline
Cox & $C_{td}$ & \textbf{0.7128} & 0.5812 & 0.5356 & 0.7097 & 0.6047 & \textbf{0.5720} \\
\cline{2-8} -nnet & $IBS$ & 0.1342 & 0.2243 & 0.1609 & 0.0843 & 0.0875 & 0.0553 \\
\hline
\hline
CoxL1 & $C_{td}$ & 0.7042 & 0.5219 & 0.5112 & 0.7088 & 0.5597 & 0.5 \\
\cline{2-8} & $IBS$ & 0.1350 & 0.2278 & 0.1614 & 0.0608 & 0.0408 & 0.0553 \\
\hline
\hline
\end{tabular}
\caption{Results of the prediction methods on the AH/Log-normal simulation}
\label{tab_simAHLN}
\end{center}
\end{table*}
\paragraph{Synthesis: }
On the dataset simulated from an AH model with a log-normal distribution, neural networks based on the discrete-time model have the best performances in most situations. The deep version of the model is also better than the one with only one hidden layer. In this simulation, the data do not check the proportional hazards assumption, and survival curves exhibit complex patterns for which the more versatile NNsurv-deep appears more adapted.
\subsection{Application on real datasets}
\subsubsection{Breast cancer dataset}
\paragraph{Description of data: }
The METABRIC data (for Molecular Taxonomy of Breast Cancer International Consortium) \cite{curtis_genomic_2012} include 2509 patients with early breast cancer. These data are available at \url{https://www.synapse.org/#!Synapse:syn1688369/wiki/27311}. Survival time, clinical variables, and expression data were available for 1981 patients, with six clinical variables (age, tumor size, hormone therapy, chemotherapy, tumor grades) and 863 pre-filtered genes. The percentage of censored individuals is high, equal to $55\%$.
\paragraph{Results: }
The comparison results of the METABRIC dataset are summarized in \textsc{Table}~\ref{tab:metabric}. NNsurv-deep manages to get the highest value of $C_{td}$. The $C_{td}$ of NNsurv is equivalent to that of Cox, but Cox-nnet has a lower value.
The integrated Brier score is very close for NNsurv-deep, Cox-nnet, and CoxL1, although the latter has the lowest $IBS$ value.
On this real dataset, the differences between the models are not striking, despite the small superiority of NNsurv-deep.
\begin{table}[!h]
\centering
\begin{tabular}{l|c|c|c|c|c|c|}
\hline
& & CoxL1 &Cox-nnet & NNsurv-deep & NNsurv \\
\hline\hline
Metabric & $C_{td}$ & 0.6757 & 0.6676 & \textbf{0.6853} & 0.6728 \\
\cline{2-6}& $IBS$ & \textbf{0.1937} & 0.1965 & 0.1972 & 0.2038 \\
\hline\hline
\end{tabular}
\caption{Results of different methods on the breast dataset (METABRIC)}
\label{tab:metabric}
\end{table}
\section{Discussions}
This work is a study of neural networks for the prediction of survival in high-dimension.
In this context, usual methods such as the estimation in a Cox model with the Cox partial likelihood can no longer be performed. Several methods (such as dimension reduction or machine learning methods, like Random Survival Forests \cite{ishwaran2008}) have been proposed, but our interest in this study has been directed towards neural networks and their potential for survival analysis from RNAseq data.
Two neural-network based approaches have been proposed. The first one is based on the Cox model but introduces a neural network for risk determination \citep{faraggineural1995}. The second approach is based on a discrete-time model \cite{biganzolifeed1998} and its adaptation to the high-dimensional setting was the main contribution of our work.
In section \ref{sec:results}, we compared the standard Cox model with Lasso penalty and a neural network based on the Cox model (Cox-nnet) with those based on a discrete-time model adapted to the high dimension (NNsurv and NNsurv-deep). To evaluate this comparison rigorously, we created a simulation design. We simulated data from different models (Cox, AH, and, in the appendix, AFT) with varying numbers of variables and sample sizes, allowing diverse levels of complexity.
We concluded from this study that the best neural network in most situations is Cox-nnet. It can handle nonlinear effects as well as interactions. However, the neural network based on discrete-time modeling with several hidden layers (NNsurv-deep), which directly predicts the discrete-time hazard, has shown its superiority in the most complex situations, especially in the presence of non-proportional hazards and intersecting survival curves.
On the METABRIC data, NNsurv-deep performs the best, but only marginally better than the Lasso estimation procedure based on the Cox partial log-likelihood, suggesting slight non-linearity and interactions.
The neural networks seem to be interesting methods for predicting survival in high dimension, in particular in the presence of complex data. The effect of censoring in these models was not studied in this work, but \cite{roblin2020use} evaluated several methods to cope with censoring in neural network models for survival analysis. For practical applications, a disadvantage of neural networks is their difficulty of interpretation. On the contrary, the output of a Cox model associated with the Lasso procedure is easily interpretable, which is why practitioners in the field still favor the Cox model nowadays. The interpretability of neural networks is more and more studied \citep{hao2019interpretable} and is an exciting research avenue to explore.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}
\appendix
\section{Appendix: supplementary results}
\label{appendix}
\subsection{Simulation from the AFT - Log-Normal model}
\paragraph{\textbf{Survival times and baseline function: }}
To simulate the data from the AFT/Log-normal model, we relied on \cite{leemis_variate_1990}. We chose to perform this simulation to generate survival data that do not respect the proportional hazards assumption.
For this simulation, we consider that the survival times follow a log-normal distribution $\mathcal{LN}(\mu,\sigma).$ In this case, the inverse of the cumulative hazard function is expressed as (\ref{eq:InvHazLN}).
Survival times can therefore be simulated from:
\begin{equation}
T = \frac{1}{\exp(\beta^T X_{i.})} \exp(\sigma \Phi^{-1}(U) + \mu). \label{eq:Taft}
\end{equation}
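As an illustration, the sampling step above can be sketched in a few lines of Python. The coefficient vector and covariate matrix below are hypothetical placeholders, and we use the fact that the normal quantile of a uniform draw, $\Phi^{-1}(U)$ with $U \sim \mathcal{U}(0,1)$, is exactly a standard normal draw:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10                       # illustrative sample size and covariate count
sigma, mu = 0.1760, 7.73             # log-normal parameters chosen in the text
beta = np.r_[0.5, 0.5, np.zeros(p - 2)]   # hypothetical coefficient vector
X = rng.standard_normal((n, p))      # hypothetical covariate matrix

# Phi^{-1}(U) with U ~ Uniform(0, 1) is a standard normal draw, so we
# sample Z ~ N(0, 1) directly instead of inverting the normal CDF.
Z = rng.standard_normal(n)
# AFT/log-normal survival times, following the formula above
T = np.exp(-(X @ beta)) * np.exp(sigma * Z + mu)
```

In a full simulation one would additionally draw censoring times and keep the minimum of the two, as in the main text.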
\paragraph{\textbf{Choice of parameters of Log-Normal distribution:}}
We wish the distribution of the simulated data to be close to that of the real data. We follow the same approach as for the Cox/Weibull simulation presented above to choose the parameters $\sigma$ and $\mu$ of the survival time distribution. The values of the parameters are obtained from the explicit formulas:
\begin{equation}
\mu = \ln(\mathrm{E}(T))- \frac{1}{2} \sigma^2 \label{eq:mu} \quad \mbox{and} \quad
\sigma^2 = \ln\left(1+\frac{\mathrm{Var}(T)}{(\mathrm{E}(T))^2}\right).
\end{equation}
Given that the expectation and the standard deviation are respectively $2325$ and $1304$, the values of $\mu$ and $\sigma$ used for the simulation of the survival data should be $\mu = 7.73$ and $\sigma = 0.1760$.
\subsection{Simulation study}
\subsubsection{Results for the AFT - Log-normal simulation}
This section presents the results for data simulated from an AFT model with a baseline risk modeled by a log-normal distribution.
The specificity of these simulated data is that they do not satisfy the proportional hazards assumption, but the survival curves do not cross.
\textsc{Table}~\ref{tab_AFTLN} shows that CoxL1 and Cox-nnet have the best results in most configurations, considering either $C_{td}$ or $IBS$. This advantage in $C_{td}$ is particularly clear when the sample size is equal to 200, or when the sample size is equal to 1000 and the number of variables is equal to 10 or 100. The $C_{td}$ obtained by the CoxL1 model is equal to 0.9867 for 200 individuals and 10 variables, and the $C_{td}$ obtained by the Cox-nnet model is equal to 0.9060 for 1000 individuals and 100 variables. We can also see in \textsc{Table}~\ref{tab_AFTLN} that the $C_{td}$ values obtained by the neural networks based on a discrete-time model are very close to those obtained by CoxL1 and Cox-nnet, and are either higher than the reference value or slightly below it. For example, for a sample size equal to 200 and 10 variables, the $C_{td}$ of NNsurv is equal to 0.9832, that of Cox-nnet is equal to 0.9867, and the reference value is equal to 0.9203. We observe the same behavior for 100 variables and the same sample size, or for 100 variables and a sample size of 1000.
Moreover, the $IBS$ values are the lowest for the methods based on Cox modeling in most situations. But the $IBS$ values for NNsurv and NNsurv-deep are also excellent. They are lower than the reference $IBS$ in many cases and are very close to CoxL1 and Cox-nnet. We can observe these results when the number of variables is less than or equal to 100 regardless of the sample size.
The good results of CoxL1 and Cox-nnet might seem surprising, but they can be explained by the fact that these data were simulated from an AFT model whose survival curves do not cross.
A method based on a Cox model predicts survival functions that do not cross either.
For this simulation, the survival functions predicted by CoxL1 and Cox-nnet thus do not cross and are undoubtedly closer to the true survival functions of the AFT simulation than those predicted by the discrete-time neural networks.
\begin{table}[!h]
\begin{center}
\tabcolsep = 1.5\tabcolsep
\begin{tabular}{p{1.5cm}|p{1.5cm}||p{1.4cm}p{1.4cm}p{1.4cm}|p{1.4cm}p{1.4cm}p{1.4cm}|}
\hline\hline
& n & \multicolumn{3}{|c|}{200} & \multicolumn{3}{|c|}{1000} \\
\hline
Method & p & 10 & 100 & 1000 & 10 & 100 & 1000\\
\hline
\hline
Reference & $C_{td}^{\star}$ & \textbf{0.9203} & \textbf{0.9136} & \textbf{0.9037} & \textbf{0.9203} & \textbf{0.9136} & \textbf{0.9037} \\
\cline{2-8} & $IBS$$^{\star}$ & \textbf{ 0.0504} & \textbf{0.0604} & \textbf{0.0417} & \textbf{ 0.0504} & \textbf{0.0604} & \textbf{0.0417}\\
\hline
\hline
NNsurv & $C_{td}$ & 0.9832 & 0.8349 & 0.5425 & 0.9851 & 0.9038 & 0.7426 \\
\cline{2-8} & $IBS$ & 0.0265 & \textbf{0.0560} & 0.2577& 0.0247 & 0.0188 & 0.0642 \\
\hline
\hline
NNsurv & $C_{td}$ & 0.9786 & 0.8275 & 0.5576 & 0.9857 & \textbf{0.9060}& \textbf{0.7500} \\
\cline{2-8} deep & $IBS$ & 0.0295 & 0.0561 & 0.1886 & 0.0261 & 0.0207 & \textbf{0.0631} \\
\hline
\hline
Cox & $C_{td}$ & 0.9825 & \textbf{0.8558} & \textbf{0.5979} & 0.9844 & \textbf{0.9060} & 0.7085 \\
\cline{2-8} -nnet & $IBS$ & \textbf{0.0122} & 0.0906 & \textbf{0.0959} & 0.0126 & 0.0374 & 0.0808 \\
\hline
\hline
CoxL1 & $C_{td}$ & \textbf{0.9867} & 0.7827 & 0.5091 & 0.9856 & 0.9028 & 0.5349 \\
\cline{2-8} & $IBS$ & 0.0146 & 0.0965 & 0.0960 & \textbf{0.0077} & \textbf{0.0182} & 0.0827 \\
\hline
\hline
\end{tabular}
\caption{Results of the prediction methods on the AFT/Log-normal simulation}
\label{tab_AFTLN}
\end{center}
\end{table}
\paragraph{Synthesis: }
For data simulated from an AFT model with a log-normal distribution, Cox-nnet is the neural network with the best results in most situations when the sample size is small. When the sample size increases, NNsurv-deep is the best model considering the $C_{td}$ in most situations. Moreover, NNsurv and NNsurv-deep also seem to perform well when the number of variables is less than or equal to 100. We attribute the good results of Cox-nnet to the low level of complexity of the data: the survival curves of the individuals in this dataset never cross.
\end{document}
TITLE: What can I imply from $\epsilon^\prime(x) \le -x\epsilon(x)$?
QUESTION [1 upvotes]: Imagine that I can prove $$\epsilon^\prime(x) \le -x\epsilon(x)$$ for a function $\epsilon:\mathbb R \rightarrow \mathbb R_0^{+}$. Does this imply $$\epsilon(x) \le c e^{-\frac{x^2}{2}}$$ where $c \in \mathbb R$ is a constant? (The solution of the ODE $f^\prime(x) = -xf(x)$ is $f(x) = c e^{-\frac{x^2}{2}}$.)
When I also can prove $\epsilon(0) \le \alpha$, does this mean $\epsilon(x) \le \alpha e^{-\frac{x^2}{2}}$?
REPLY [2 votes]: Yes, both statements are true for $x \ge 0$. This result even has a name and a Wikipedia page: Gronwall's inequality.
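For completeness, here is the standard one-line Gronwall-type argument (valid for $x \ge 0$):

```latex
Let $g(x) = \epsilon(x)\, e^{x^2/2}$. Then
\[
  g'(x) = \bigl(\epsilon'(x) + x\,\epsilon(x)\bigr) e^{x^2/2} \le 0
  \quad \text{for } x \ge 0,
\]
so $g$ is nonincreasing on $[0,\infty)$. Hence
$\epsilon(x)\, e^{x^2/2} = g(x) \le g(0) = \epsilon(0) \le \alpha$,
i.e.\ $\epsilon(x) \le \alpha\, e^{-x^2/2}$ for all $x \ge 0$.
```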
TITLE: Does $\sum_{n=1}^\infty \frac {\arctan\,n }{n^{\frac65}}$ converge or diverge?
QUESTION [0 upvotes]: Reason behind question
These were my thoughts on how to prove it.
Since $\arctan\,n \gt \dfrac1n$ for $n\gt2$, we can say $\dfrac {\arctan\,n }{n^{\frac65}} \gt \dfrac 1{n^{\frac{11}5}}$
Now we use the limit comparison test, and since $\sum b_n$ converges and we get a number as the result of our limit, does $\sum \dfrac {\arctan\,n }{n^{\frac65}}$ converge?
REPLY [3 votes]: Comparison test: $\tan^{-1}(n)<\pi/2$ for all $n$, hence
$$ \sum_{n=1}^\infty \frac{\tan^{-1}(n)}{n^{1.2}} < \frac{\pi}{2} \sum_{n=1}^\infty \frac{1}{n^{1.2}} $$
and the latter sum converges (the sum over $n^{-\alpha}$ converges for all $\alpha>1$).
REPLY [1 votes]: As $\arctan$ is bounded, we can compare $\sum_n\frac{\arctan n}{n^{1.2}}$ with $\sum_n\frac1{n^{1.2}}$, which converges because $1.2>1$.
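A quick numerical sanity check (not a proof) illustrates the bound: the partial sums are increasing, and by the comparison plus the integral test they stay below $\frac{\pi}{2}\bigl(1 + \int_1^\infty x^{-1.2}\,dx\bigr) = \frac{\pi}{2}\cdot 6 = 3\pi$, so the series converges:

```python
import numpy as np

# Partial sums of sum_{n>=1} arctan(n)/n^1.2. They are increasing, and the
# comparison bound (pi/2) * sum 1/n^1.2 < (pi/2) * (1 + 1/0.2) = 3*pi
# (via the integral test) caps them, so the series converges.
n = np.arange(1, 10**6 + 1, dtype=np.float64)
partial_sums = np.cumsum(np.arctan(n) / n**1.2)
bound = 3 * np.pi
```

Note that convergence is slow (the tail decays like $N^{-0.2}$), so the partial sums approach the limit only gradually.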
TITLE: Prove that the median of this isosceles trapezoid is tangent to both circles.
QUESTION [0 upvotes]: What the problem really asks me to prove is that a circle can be inscribed in $ABCD$, or equivalently that $AB + CD = AC + BD$, or $2AB = AC + BD$, i.e. $AB = \frac{AC+BD}{2} = PK$. I can do that by assuming that $PK$ is tangent to both circles, as shown in the picture, and then applying the tangent-tangent theorem as needed, but
How can I prove that $PK$ is indeed tangent to both circles in the first place?
Could it be the secant to one of them?
By hand I find that the equality $AB = PK$ has to hold, but only after drawing the figure in GeoGebra did I realize that $PK$ is tangent to both circles. Otherwise I wouldn't have been able to figure that out and wouldn't have been able to complete the proof.
Having to assume that $PK$ is tangent, as if it were something found underneath a stone, bugs me.
(In the original drawing, the only lines shown are $OB$, $OD$ and $OO_2$. The others, including $PK$ are my auxiliary constructions).
REPLY [1 votes]: Take the line $l$ tangent to both circles at their point of tangency. Suppose it intersects lines $OA$ and $OB$ at $K'$ and $P'$. Now $AK'=K'M=K'B$ by equal tangent segments, so $K'$ is the midpoint of $AB$, hence $K'=K$. Similarly $P'=P$, so line $l$ passes through $K$ and $P$; therefore line $KP$ is tangent to both circles.
TITLE: Help with Concavity/Max/Min for $f(x) = \ln(x^4 + 27)$
QUESTION [1 upvotes]: Given the following function:
$f(x) = \ln(x^4 + 27)$
I found $f'(x) = \frac{4x^3}{x^4+27}$.
I set $f'(x) = 0$ and found that $x = 0$.
The intervals would thus be:
$(-\infty, 0), (0, \infty)$
Next I'm told to find the following:
Find the local minimum values.
Find the local maximum values.
Find the inflection points.
Find when the graph is concave upward.
Find when the graph is concave downward.
So I know that, at a critical point, a local minimum occurs when $f'' > 0$ and a local maximum occurs when $f'' < 0$. Given my function, I don't have a local maximum, but I do have a local minimum. My guess is that it occurs at zero, since $f'$ changes from negative to positive there. Am I wrong?
REPLY [2 votes]: You are not wrong. It is correct that $f(x)$ has a local minimum at $x=0$, since $f'$ changes from negative to positive at $x=0$.
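As a sanity check (assuming SymPy is available), one can verify the critical point symbolically and also answer the concavity part of the question. Note that $f''(0)=0$, so the second-derivative test alone is inconclusive at $x=0$; the first-derivative sign change is what confirms the minimum:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.ln(x**4 + 27)
f1 = sp.diff(f, x)                  # 4*x**3/(x**4 + 27)
f2 = sp.simplify(sp.diff(f, x, 2))  # 4*x**2*(81 - x**4)/(x**4 + 27)**2

crit = sp.solve(sp.Eq(f1, 0), x)    # -> [0], the only critical point
# f' changes sign from - to + at x = 0, so x = 0 is a local minimum.
infl = sp.solve(sp.Eq(f2, 0), x)    # real roots -3, 0, 3 (in some order)
# f'' changes sign only at x = +-3 (not at 0), so the inflection points
# are at x = +-3; f is concave up on (-3, 3) and concave down outside.
```

Evaluating $f''$ at test points (e.g. $x=1$ and $x=4$) confirms the concavity on each interval.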